Main repository for Vispy

Overview

VisPy: interactive scientific visualization in Python

Main website: http://vispy.org



VisPy is a high-performance interactive 2D/3D data visualization library. VisPy leverages the computational power of modern Graphics Processing Units (GPUs) through the OpenGL library to display very large datasets. Applications of VisPy include:

  • High-quality interactive scientific plots with millions of points.
  • Direct visualization of real-time data.
  • Fast interactive visualization of 3D models (meshes, volume rendering).
  • OpenGL visualization demos.
  • Scientific GUIs with fast, scalable visualization widgets (Qt or IPython notebook with WebGL).

Announcements

  • Release! Version 0.6.5, September 24, 2020
  • Release! Version 0.6.4, December 13, 2019
  • Release! Version 0.6.3, November 27, 2019
  • Release! Version 0.6.2, November 4, 2019
  • Release! Version 0.6.1, July 28, 2019
  • Release! Version 0.6.0, July 11, 2019
  • Release! Version 0.5.3, March 28, 2018
  • Release! Version 0.5.2, December 11, 2017
  • Release! Version 0.5.1, November 4, 2017
  • Release! Version 0.5, October 24, 2017
  • Release! Version 0.4, May 22, 2015
  • VisPy tutorial in the IPython Cookbook
  • Release! Version 0.3, August 29, 2014
  • EuroSciPy 2014: talk on Saturday, August 30, and sprint on Sunday, August 31, 2014
  • Article in Linux Magazine, French Edition, July 2014
  • GSoC 2014: two GSoC students are currently working on VisPy under the PSF umbrella
  • Release! Version 0.2.1, November 4, 2013
  • Presentation at BI forum, Budapest, 6 November 2013
  • Presentation at Euroscipy, Belgium, August 2013
  • EuroSciPy Sprint, Belgium, August 2013
  • Release! Version 0.1.0, August 14, 2013

Using VisPy

VisPy is a young library under heavy development at this time. It targets two categories of users:

  1. Users knowing OpenGL, or willing to learn OpenGL, who want to create beautiful and fast interactive 2D/3D visualizations in Python as easily as possible.
  2. Scientists without any knowledge of OpenGL, who are seeking a high-level, high-performance plotting toolkit.

If you're in the first category, you can already start using VisPy. VisPy offers a Pythonic, NumPy-aware, user-friendly interface for OpenGL ES 2.0 called gloo. You can focus on writing your GLSL code instead of dealing with the complicated OpenGL API - VisPy takes care of that automatically for you.
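For illustration, here is a minimal gloo sketch (assuming a recent VisPy release and a working OpenGL/window backend); it draws a single triangle through a small GLSL program:

    import numpy as np
    from vispy import app, gloo

    vertex = """
    attribute vec2 a_position;
    void main() {
        gl_Position = vec4(a_position, 0.0, 1.0);
    }
    """

    fragment = """
    void main() {
        gl_FragColor = vec4(0.1, 0.5, 0.9, 1.0);
    }
    """

    class Canvas(app.Canvas):
        def __init__(self):
            app.Canvas.__init__(self, size=(600, 400), title='gloo triangle')
            self.program = gloo.Program(vertex, fragment)
            # gloo uploads NumPy arrays directly as vertex attributes.
            self.program['a_position'] = np.array(
                [(-0.8, -0.8), (0.8, -0.8), (0.0, 0.8)], dtype=np.float32)
            self.show()

        def on_resize(self, event):
            gloo.set_viewport(0, 0, *event.physical_size)

        def on_draw(self, event):
            gloo.clear(color='black')
            self.program.draw('triangles')

    if __name__ == '__main__':
        Canvas()
        app.run()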

If you're in the second category, we're starting to build experimental high-level plotting interfaces. Notably, VisPy now ships a very basic and experimental OpenGL backend for matplotlib.
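As a rough illustration, a minimal sketch of the experimental vispy.plot interface might look like this (the exact API is still subject to change):

    import numpy as np
    from vispy import plot as vp

    # Sample data: a noisy sine wave.
    x = np.linspace(0.0, 10.0, 1000)
    y = np.sin(x) + 0.1 * np.random.randn(x.size)

    fig = vp.Fig(size=(600, 400), show=False)   # a grid of plot widgets
    fig[0, 0].plot(np.column_stack((x, y)), title='noisy sine')

    if __name__ == '__main__':
        fig.show(run=True)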

Installation

Please follow the detailed installation instructions on the VisPy website.

Structure of VisPy

Currently, the main subpackages are:

  • app: integrates an event system and offers a unified interface on top of many window backends (Qt4, wx, glfw, IPython notebook with/without WebGL, and others). Relatively stable API.
  • gloo: a Pythonic, object-oriented interface to OpenGL. Relatively stable API.
  • scene: this is the system underlying our upcoming high level visualization interfaces. Under heavy development and still experimental, it contains several modules.
    • Visuals are graphical abstractions representing 2D shapes, 3D meshes, text, etc.
    • Transforms implement 2D/3D transformations on both the CPU and the GPU.
    • Shaders implement a shader composition system for plumbing together snippets of GLSL code.
    • The scene graph tracks all objects within a transformation graph.
  • plot: high-level plotting interfaces.

The APIs of all public interfaces are subject to change in the future, although app and gloo are relatively stable at this point.
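For example, a small scene-graph sketch using the visuals and cameras described above might look like this (names follow recent VisPy releases and may differ in older versions):

    import numpy as np
    from vispy import app, scene

    canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
    view = canvas.central_widget.add_view()
    view.camera = 'turntable'        # interactive 3D camera; 'panzoom' gives a 2D view

    # A random 3D point cloud drawn with the Markers visual.
    pos = np.random.normal(size=(10000, 3), scale=0.2).astype(np.float32)
    markers = scene.visuals.Markers(parent=view.scene)
    markers.set_data(pos, face_color=(0.2, 0.6, 1.0, 0.8), size=5)
    scene.visuals.XYZAxis(parent=view.scene)   # small orientation axes at the origin

    if __name__ == '__main__':
        app.run()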

Genesis

VisPy began when four developers, each with his own visualization library, decided to team up: Luke Campagnola with PyQtGraph, Almar Klein with Visvis, Cyrille Rossant with Galry, and Nicolas Rougier with Glumpy.

VisPy now builds on the expertise of these developers and of the broader open-source community to provide a high-performance OpenGL library.



Comments
  • [WIP] Layout grids using Cassowary

    [WIP] Layout grids using Cassowary

    • [ ] Get baseline implementation working (just layout, no padding / margins)
    • [ ] Implement margins
    • [ ] Implement padding
    • [ ] document code
    • [ ] Regular test cases
    • [ ] Pathological extreme test cases
    opened by bollu 135
  • [WIP] IPython extension mechanism for Vispy

    [WIP] IPython extension mechanism for Vispy

    closes #977

    So, I tried to take a stab at this, and here's my first attempt. I foresee this needing a lot of iteration to get right.

    _Things to decide:_

    • location of the ipython extension file
    • features of the extension
    • different magic operators that this could have

    _TODO:_

    • [ ] I'm not sure how to write test cases for this. I'm guessing IPython provides some way to do this, but I'll have to look into it.
    • [ ] Documentation for this, so that it's discoverable
    • [ ] List the extension on IPython's extension wiki
    • [ ] Publish the extension (push it to PyPI with the tag Framework :: IPython)
    component: jupyter-widget-backend 
    opened by bollu 118
  • ENH: Text

    ENH: Text

    Currently working:

    1. Calculation (in shaders) and utilization of SDF.
    2. Linux text display and system font listing.
    3. OSX text display and system font listing.
    4. Win32 text display (auto-download of compiled 32- and 64-bit freetype libs) and system font listing.
    5. Automatic downloading of a couple nice open-source fonts.
    6. Make it a Visual.
    7. Very basic testing, we can improve once the visual-testing PR is merged.
    type: enhancement 
    opened by larsoner 112
  • Scenegraph overhaul

    Scenegraph overhaul

    This PR is a continuation of #895 (and supersedes it). I am keeping both PRs open for now so that the prior changes may be reviewed in isolation. Major tasks for this PR are:

    • [x] Remove multi-parenting in scenegraph
    • [x] Nodes have a single transform (and related simplifications)
    • [x] SceneCanvas.draw_visual should be simple and efficient
    • [x] Implement picking (finally)
    • [ ] ViewNode and a way to automatically create a view on a scenegraph subtree
    • [x] Other simplifications:
      • [x] Remove all ViewBox clip methods except fragment.
      • [x] Remove render/framebuffer/canvas scenegraph nodes (might restore these later if there turns out to be a need)

    Visuals to migrate:

    • [x] Line
    • [x] Mesh
    • [x] GridLines
    • [x] Markers
    • [x] Volume
    • [x] Image
    • [x] Text
    • [x] Axis
    • [x] Polygon
    • [x] Box
    • [x] Cube
    • [x] Ellipse
    • [x] Isocurve
    • [x] Isosurface
    • [x] Plane
    • [x] Rectangle
    • [x] RegularPolygon
    • [x] Spectrogram
    • [x] Tube
    • [x] XYZAxis
    • [x] Histogram
    • [x] LinePlot
    • [x] Isoline
    • [x] SurfacePlot
    • [x] ColorBar

    EDIT BY Eric89GXL:

    Related issues:

    • [x] Toggling a node toggles children #986
    • [x] Text visual changing viewport #981
    • [x] Type of Canvas size #976
    • [x] Centering of text visuals #975: working now?
    • [x] Scene benchmarks broken #965: need to fix?
    • [x] Depth drawing precision #953: need to add?
    • [x] Get SceneCanvas from node #927: exists?
    • [x] Plane and box PR #901: need to update visuals
    • [x] Original PR #895
    • [x] Reuse FBO in viewbox #791
    • [x] Picking example #693
    • [x] Axes, ticks, grid #677
    • [x] Grid plot with individual zooming #434: does it work?
    • [x] Picking #140
    • [ ] Line editing example PR #926: modify to use picking?
    • [ ] Camera translate speed #914: need to add?
    • [ ] Performance of the SceneGraph #628

    closes #986 closes #981 closes #976 closes #975 closes #965 closes #953 closes #927 closes #791 closes #693 closes #677 closes #434 closes #140

    opened by campagnola 103
  • Reorganize visuals layer

    Reorganize visuals layer

    Reorganization to make visuals a top-level package. Fixes #448. Only one example currently works; I'll deal with the rest after feedback on a couple of issues:

    1. Any complaints about the current structure?
    2. Poll: should transforms be top-level as well?
    3. There are many modules for users to import, so it would be nice to pick one place where most of the common names are imported by default. Currently there is a bit of this going on in scene.__init__, but it's a mess. I'd like to make many of the examples look like:
    import vispy.scene
    
    canvas = vispy.scene.SceneCanvas()
    line = vispy.scene.Line(...)
    line.transform = vispy.scene.AffineTransform()
    

    vispy.plot should do something similar, but with a more extensive set of imports aimed toward scientific vis.

    component: visuals 
    opened by campagnola 102
  • WIP: Ripping gloo in two via GLIR

    WIP: Ripping gloo in two via GLIR

    This PR refactors gloo to break it in two pieces, one is the high level gloo interface, which generates GLIR commands. The other is the GLIR interpreter.

    In the process I will also fix some outstanding issues with gloo, as described in #464, like getting rid of local storage and allowing uniforms/attributes to be specified before the source code is set.

    Closes #464, #510, #450, #338, #407, #351

    Overview of changes that this PR makes

    All gloo objects:

    • No more activate, deactivate methods. No more handle and target properties. Only an id property.
    • Gloo objects are context aware. They are associated with the context that is current at the moment of instantiation. If no context is active, they are associated with a "pending context", which is taken into use by the first app.Canvas that needs it. This means all our examples still work.

    Program:

    • The API of Program changes WRT attributes and uniforms. There are no more active_uniforms. Just a function to get info on all variables in shading code.
    • There are no variable objects, variable.py is gone. These objects served mainly for deferred setting of values. The GLIR command queue takes over this function.
    • There are no more VertexShader and FragmentShader classes; shaders.py is gone. It turns out that shaders do not really have a function other than existing temporarily to compile the code. VisPy programs should now have more GPU memory available. Just call program.set_shaders() to set the source code (see the sketch after these lists).
    • There are now also warnings when attributes/uniforms are set that are not active. Due to this I found some unnecessary lines of code in some examples.

    Buffer:

    • No more local storage. This means that setting discontinuous data does not work anymore. Much simpler code though.
    • As a consequence data cannot be set on views.
    • No more automatic conversion. Passing float64 is not allowed. Or should we convert in this case?

    Texture:

    • There is no more local copy.
    • Passing float64 is not allowed.
    • You can use set_size with completely different size and format.
    • You can now use set_data using any type and shape.
    • Texture does not support views anymore.
    • Texture with no data (only shape) can be initialized with Texture((100,100))

    Framebuffer

    • ColorBuffer, DepthBuffer and StencilBuffer are no more. Just use RenderBuffer. You do not have to specify the format, as the FrameBuffer will do this when attaching (if format is None).
    • Deactivating a FrameBuffer will make the previous framebuffer active. This is the job of the GLIR implementation, because in gloo you may not always have access to this information.
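    The snippet below sketches the gloo API surface described above (names follow this PR's text and may differ from the final released API):

    from vispy import gloo

    vert = "attribute vec2 a_position; void main() { gl_Position = vec4(a_position, 0.0, 1.0); }"
    frag = "void main() { gl_FragColor = vec4(1.0); }"

    program = gloo.Program()
    program.set_shaders(vert, frag)          # plain source strings, no Shader objects

    tex = gloo.Texture2D(shape=(100, 100))   # a texture with a shape but no data yet

    color = gloo.RenderBuffer((100, 100))    # format is filled in when attached
    depth = gloo.RenderBuffer((100, 100))
    fbo = gloo.FrameBuffer(color=color, depth=depth)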

    Outstanding questions for GLIR spec

    • Should GLIR pass enums by string or int? There seem to be more people in favor of strings.

    Work to do after this

    • There is a call to glGetViewport in text.py. This needs to be removed; we need to get viewport from the event instead.
    • GPU objects must be deleted when gloo objects get cleaned up. I won't be surprised if that causes problems, so let's do this after VisPy seems stable with the current changes.
    • Uniform arrays #345
    component: gloo 
    opened by almarklein 98
  • Marker Visual

    Marker Visual

    Remark: the purpose of this pull request is to prepare a much more interesting pull request in which I implemented a simple abstract renderer for markers (which explains why I became so particular about marker and edge sizes).

    • corrected the GLSL marker functions where some key points were defined relative to the fragment size instead of the marker size $v_size: the defects were visible in the demo/gloo/how_markers.py example on the clobber marker, which came apart when it became too small; they could also be seen as deformations of the bar or cross markers when displayed at very different sizes.
    • changed the fragment rendering so that the marker size is the guaranteed pixel size of the face. The edge is now outside the face; if it is too small it is blended with the face antialiasing, and it is not displayed at all when narrower than 1/10th of a pixel.
    • added marker-function rescaling so that the edge has the same width for the same edge_width parameter whatever the marker (as far as I checked, the edge is exactly edge_width pixels wide) and whatever the part of the marker (previously tailed_arrow had a wider edge on the tail than on the arrow head).
    • added the very common triangle-up, triangle-down, and star markers (common at least in matplotlib) and changed the total size of the fragment in the vertex and fragment shaders so that the edge is rendered correctly.
    • added a RescalingRelativeEdgeMarkerVisual that rescales as the scene is rescaled (see the added example below) and whose edge width is expressed as a fraction of the marker size. This new marker visual can be used with all markers and rescales the whole marker by the minimum of the x and y scale factors.
    • added a RescalingXYRelativeEdgeMarkerVisual that is similar to the previous one but rescales independently in the x and y directions. It works only with specific markers called markersxy, which are defined for arbitrary x and y sizes ($v_size becomes a vec2). Some of these markers keep a 1:1 aspect ratio (the "square" markers), using only the min or the max of the x and y sizes; the others use both sizes (hbar, vbar, cross, x, rect and ellipse). Note that the distinction between hbar and vbar does not make much sense once there are two sizes, but it is kept for the moment. The ellipse marker code may seem complicated, but the key problem is to rescale the function correctly so that the edge width is uniform around the perimeter of the ellipse. The correct rescaling is given by the gradient of the level set defining the marker (the marker function in the code), taken at the point on the ellipse whose normal passes through the point currently considered in the fragment shader. I could not find a way to keep this computation simple, so I used an approximation: the point on the ellipse is the one with the same hyperbola-constant coordinate in the elliptic coordinate system (http://en.wikipedia.org/wiki/Elliptic_coordinate_system). It is much simpler to compute and the result looks good (you can see some defects by rescaling the ellipse so that it is large and almost flat).
    type: enhancement component: visuals 
    opened by bdurin 92
  • WIP: Volume rendering & camera changes

    WIP: Volume rendering & camera changes

    This PR implements volume rendering. It's basically a port of the technique that I use in Visvis.

    • New Volume visual with two rendering styles (iso and mip).
    • Volume example in examples/scene/volume.py. Try it, and use the keys to switch cameras, etc. (a minimal sketch follows this list)
    • Refactored cameras a bit in general
    • Refactored TurntableCamera: no more mode and distance. You can just change the fov while keeping the same zoom level. Setting fov to 0 gives an orthographic projection. Zooming is done by moving the camera backward.
    • New FlyCamera to fly around your data.
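    A minimal sketch along the lines of examples/scene/volume.py (argument names follow recent VisPy; the original PR may have spelled them differently):

    import numpy as np
    from vispy import app, scene

    canvas = scene.SceneCanvas(keys='interactive', show=True)
    view = canvas.central_widget.add_view()

    # A synthetic volume: a fuzzy blob on a 64x64x64 grid.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    vol = np.exp(-8.0 * (x ** 2 + y ** 2 + z ** 2)).astype(np.float32)

    scene.visuals.Volume(vol, parent=view.scene, method='mip')   # or method='iso'
    view.camera = scene.cameras.TurntableCamera(fov=60.0)        # fov=0 gives an orthographic view

    if __name__ == '__main__':
        app.run()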

    This is about ready. There are more things to do, but that can be picked up in another PR. (I added todo items for this in volume.py)

    I wish to do more with the cameras, but we should probably discuss this first. I will make a new issue for this.

    todo:

    • [x] avoid warning when switching render styles
    • [x] cameras

    Closes #725

    type: enhancement component: visuals 
    opened by almarklein 86
  • OOGL/GL namespaces

    OOGL/GL namespaces

    There are currently two low-level GL namespaces: gl and oogl. The first one contains raw OpenGL ES commands, the second one implements an object-oriented layer on top of OpenGL. Even though we definitely need both, I fear it might be confusing for new-comers that even the simplest examples need two different namespaces that do basically the same thing.

    I think it would be worth writing extremely simple functions in oogl that implement the most basic things, e.g. something like oogl.clear(r, g, b, a), maybe oogl.draw_elements(...), etc. At the very least, we could just write dumb wrappers like:

    def clear(*args):
        gl.glClearColor(*args)
        gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
    

    It would simplify the simplest examples, making them cleaner and easier to understand. I can already see beginners asking: "But what is the difference between vispy.gl and vispy.oogl? Why do I need them both just to display a black screen?" and so on.

    Another advantage is that it would make the code conventions more uniform, i.e. we use names_with_underscores in OOGL but camelCase in GL, which is not really nice and might confuse beginners.

    One goal could be the following: the simplest things should be doable with either vispy.gl alone (raw OpenGL) or vispy.oogl alone (no raw GL calls).

    type: enhancement 
    opened by rossant 82
  • Glir queue refactoring

    Glir queue refactoring

    Closes #623

    This implements:

    • Gloo objects are associated with a context when used, rather than on initialization.
    • There is a Canvas.context property which gives access to the GL functions.
    • canvas.context.glir is the GLIR queue for a canvas.
    • canvas.context.shared is used by backends to share objects, and it also contains the GLIR parser.
    • canvas.gloo and canvas.glir are removed.
    • The old vispy.gloo.clear() still works; it uses whatever canvas is current at that time. Maybe we should deprecate this, or at least discourage it.
    • Only one example uses canvas.gloo; we should probably update the examples to be consistent.

    More on queues: each canvas has a GLIR queue on its context. There is no shared GLIR queue. Further, each gloo object has a GLIR queue too. GLIR queues can be associated with other queues (e.g. program.glir.associate(texture.glir)) so that when a queue gets flushed, the queues of its "dependencies" are flushed as well. This approach makes it possible to associate queues as late as possible, while objects can still be shared (e.g. a texture associated with both a program and a frame buffer). A toy sketch of the idea follows.
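    The snippet below is a toy stand-in, not VisPy's actual GlirQueue class, meant only to illustrate the associate-then-flush behavior described above:

    class ToyGlirQueue:
        def __init__(self, name):
            self.name = name
            self._commands = []
            self._associated = []

        def command(self, *args):
            self._commands.append(args)      # defer the command instead of calling GL now

        def associate(self, other):
            self._associated.append(other)   # flush this dependency whenever we flush

        def flush(self):
            for dep in self._associated:
                dep.flush()                  # dependencies are flushed first
            for cmd in self._commands:
                print(self.name, cmd)        # a real parser would execute GL calls here
            self._commands.clear()

    texture_q = ToyGlirQueue('texture')
    program_q = ToyGlirQueue('program')
    program_q.associate(texture_q)           # cf. program.glir.associate(texture.glir)

    texture_q.command('DATA', 'tex1', 'pixels...')
    program_q.command('DRAW', 'prog1', 'triangles')
    program_q.flush()                        # the texture's commands come out first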

    type: enhancement component: gloo 
    opened by almarklein 81
  • add optional 'internalformat' parameter to Texture2D and Texture3D

    add optional 'internalformat' parameter to Texture2D and Texture3D

    When constructing or resizing textures, allow internalformat to be specified if the application needs to control precision or representation of texture storage.

    gloo.Texture3D(..., internalformat='rgba8')
    tex.resize(..., internalformat='rgba8')

    Accept lower-case strings similar to the existing 'format' argument, including the same generic values as 'format' ('luminance', 'alpha', 'luminance_alpha', 'rgb', 'rgba').

    Allow these precision/type suffixes: '8', '16', '16f', '32f'. Allow these base formats with precision suffixes: 'r', 'rg', 'rgb', 'rgba'. For example, 'rgb32f' means GL_RGB32F.

    When the keyword argument is omitted, internalformat is chosen based on the number of channels, and the precision is left unspecified so the backend or OpenGL driver can choose an appropriate default.
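    A short usage sketch of the optional argument described above (the internalformat strings follow the list in this PR):

    import numpy as np
    from vispy import gloo

    # Store an RGBA texture at full 32-bit float precision on the GPU.
    data = np.random.rand(256, 256, 4).astype(np.float32)
    tex = gloo.Texture2D(data, internalformat='rgba32f')
    tex.resize((512, 512, 4), internalformat='rgba32f')   # resizing can restate the format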

    type: enhancement component: gloo 
    opened by karlcz 71
  • Update TubeVisual points instead of vertices/faces

    Update TubeVisual points instead of vertices/faces

    Overrides the MeshVisual set_data() method so that a user can update the Tube's points rather than its vertices and faces. Also updated the __init__ method to call the methods that compute the faces and indices, allowing code reuse when calling set_data().
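    A usage sketch of what this PR proposes (set_data accepting a points argument is the proposed behavior, not necessarily the released API; the helix path is only illustrative):

    import numpy as np
    from vispy import app, scene

    canvas = scene.SceneCanvas(keys='interactive', show=True)
    view = canvas.central_widget.add_view(camera='turntable')

    # A helix as the tube's path.
    t = np.linspace(0, 4 * np.pi, 200)
    points = np.column_stack([np.cos(t), np.sin(t), t / (2 * np.pi)])
    tube = scene.visuals.Tube(points, radius=0.1, parent=view.scene)

    # With this PR, the path can be updated in place instead of rebuilding the visual.
    tube.set_data(points=points + [0.0, 0.0, 0.5], radius=0.1)

    if __name__ == '__main__':
        app.run()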

    opened by JamesTRoss1 1
  • Seems like `vispy` conflicts with `PyQt6` and `Pyvista`

    Seems like `vispy` conflicts with `PyQt6` and `Pyvista`

    Hello guys,

    I can't make a vispy canvas work properly alongside a pyvista scene inside the same PyQt6 application.

    For the MWE, I've slightly modified the "Embed VisPy into Qt" example from the Gallery, adding a pyvista scene.

    import sys
    
    from PyQt6.QtCore import Qt
    from PyQt6.QtWidgets import (
        QApplication,
        QMainWindow,
        QDockWidget,
    )
    
    import numpy as np
    
    from pyvistaqt import QtInteractor
    import pyvista as pv
    
    IMAGE_SHAPE = (600, 800)  # (height, width)
    CANVAS_SIZE = (800, 600)  # (width, height)
    NUM_LINE_POINTS = 200
    
    from vispy.scene import SceneCanvas, visuals
    
    
    class GUIWindow(QMainWindow):
        def __init__(self):
            super().__init__()
            self._create_canvas()
            self._create_vtk()
    
        def _create_vtk(self):
            self.scene = PyvistaSceneQt(self)
            self.dock_vtk = QDockWidget("Scene", self)
            self.dock_vtk.setWidget(self.scene.interactor)
            self.addDockWidget(Qt.DockWidgetArea.RightDockWidgetArea, self.dock_vtk)
    
        def _create_canvas(self):
            self.dock_tree = QDockWidget("Tree", self)
            self.canvas_wrapper = CanvasWrapper()
            self.dock_tree.setWidget(self.canvas_wrapper.canvas.native)
            self.addDockWidget(Qt.DockWidgetArea.RightDockWidgetArea, self.dock_tree)
    
    class PyvistaSceneQt(QtInteractor):
    
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.add_sphere()
    
        def add_sphere(self):
            """ add a sphere to the pyqt frame """
            sphere = pv.Sphere()
            self.add_mesh(sphere, show_edges=True)
            self.reset_camera()
    
    class CanvasWrapper:
        def __init__(self):
            self.canvas = SceneCanvas(size=CANVAS_SIZE)
            self.grid = self.canvas.central_widget.add_grid()
    
            self.view_top = self.grid.add_view(0, 0, bgcolor="cyan")
            image_data = _generate_random_image_data(IMAGE_SHAPE)
            self.image = visuals.Image(
                image_data,
                texture_format="auto",
                cmap="viridis",
                parent=self.view_top.scene,
            )
            self.view_top.camera = "panzoom"
            self.view_top.camera.set_range(
                x=(0, IMAGE_SHAPE[1]), y=(0, IMAGE_SHAPE[0]), margin=0
            )
    
            self.view_bot = self.grid.add_view(1, 0, bgcolor="#c0c0c0")
            line_data = _generate_random_line_positions(NUM_LINE_POINTS)
            self.line = visuals.Line(line_data, parent=self.view_bot.scene, color="black")
            self.view_bot.camera = "panzoom"
            self.view_bot.camera.set_range(x=(0, NUM_LINE_POINTS), y=(0, 1))
    
    def _generate_random_image_data(shape, dtype=np.float32):
        rng = np.random.default_rng()
        data = rng.random(shape, dtype=dtype)
        return data
    
    
    def _generate_random_line_positions(num_points, dtype=np.float32):
        rng = np.random.default_rng()
        pos = np.empty((num_points, 2), dtype=np.float32)
        pos[:, 0] = np.arange(num_points)
        pos[:, 1] = rng.random((num_points,), dtype=dtype)
        return pos
    
    
    if __name__ == "__main__":
        app = QApplication([])
    
        window = GUIWindow()
        window.show()
        sys.exit(app.exec())
    
    

    When I run this example under Ubuntu 22.04 with Python 3.10, I see a black rectangle inside both the vispy and pyvista widgets, and the application freezes. If I change the order of the constructor calls inside GUIWindow.__init__() like so:

        def __init__(self):
            super().__init__()
            #self._create_vtk()
            self._create_canvas()        
            self._create_vtk()
    

    then the pyvista scene initializes as usual, but the vispy canvas is still a black rectangle.

    If I comment out the canvas constructor, the pyvista scene works as expected.

    If I comment out the pyvista scene constructor, the vispy canvas works as expected.

    And finally, if I change the MWE to use PyQt5 instead of PyQt6, both pyvista and vispy work fine without any conflicts.

    The problem arises only when I use both pyvista and vispy in a PyQt6 application.

    opened by mkondratyev85 10
  • Adding points to a 3D scene by clicking

    Adding points to a 3D scene by clicking

    I have a vispy scene embedded in a Qt application, and I load a point cloud using a scatter visual. I want to add points to this point cloud by clicking on the scene. My scene starts with GridLines and XYZAxis visuals and a TurntableCamera. I am using the following code to get a transformation from the canvas to the GridLines visual and to transform the click event position to world coordinates, but I'm getting wrong results. My code is the following:

    import vispy
    vispy.use('PySide6')
    from vispy.app.qt import QtSceneCanvas
    ...
    self.ui.vispy_canvas = QtSceneCanvas(Gui3D)
    self.ui.vispy_canvas._canvas.events.mouse_double_click.connect(self.add_wp)
    ...
    self.canvas = self.ui.vispy_canvas
    self.view = self.canvas.central_widget.add_view()
    self.view.camera = cameras.TurntableCamera()
    self.grid = visuals.GridLines(parent=self.view.scene)
    self.axis = visuals.XYZAxis(width=5, parent=self.view.scene)
    ...
    def add_wp(self, e):
      state = self.view.camera.get_state()
      tr = self.grid.get_transform(map_from='canvas', map_to='visual')
      pos = e.pos
      x, y, _, _ = tr.map(pos)
      print(state, pos, [x, y], sep="\n")
    

    When clicking on the center of the XYZ axis, the result should be X=0, Y=0, but I get the following output:

    {'scale_factor': 3.3353044838515125, 'center': [0.0, 0.0, 0.0], 'fov': 45.0, 'elevation': 30.0, 'azimuth': 30.0, 'roll': 0.0}
    [682 431]
    [684.2222168791365, -1185.104739936092]
    

    I tried multiplying with the scale factor:

    pos = e.pos * state['scale_factor']
    

    but it didn't change anything

    {'scale_factor': 3.3353044838515125, 'center': [0.0, 0.0, 0.0], 'fov': 45.0, 'elevation': 30.0, 'azimuth': 30.0, 'roll': 0.0}
    [2271.3423535 1424.1750146]
    [685.794587570637, -1184.751870844964]
    
    opened by frank20a 1
  • Avoid explicit loop in computing normals in MeshData

    Avoid explicit loop in computing normals in MeshData

    Test timing (assuming you have two copies of vispy, importable as vispy_main_branch and vispy_PR_branch, corresponding to the main branch and this PR's branch):

    from vispy_main_branch.geometry import MeshData as MeshData__original
    from vispy_PR_branch.geometry import MeshData as MeshData__PR
    import numpy as np
    import time
    
    def run(N_vertex, N_faces):
        ###############################################
        # prepare data
        np.random.seed(0)
    
        vertices = np.random.rand(N_vertex, 3).astype(np.float32)
        faces = np.random.choice(N_vertex, size=(N_faces, 3))
    
        # remove rows that are repeated
        faces.sort(axis=1)
        faces = np.unique(faces, axis=0)
    
        # remove rows with repeated vertices indices
        faces = faces[(faces[:, 0] != faces[:, 1]) & (faces[:, 0] != faces[:, 2]) & (faces[:, 1] != faces[:, 2])]
        ###############################################
    
        mesh_ori = MeshData__original(vertices=vertices, faces=faces)
        mesh_pr = MeshData__PR(vertices=vertices, faces=faces)
    
        t = time.time()
        vn_ori = mesh_ori.get_vertex_normals()
        elapsed_ori = time.time() - t
        t = time.time()
        vn_pr = mesh_pr.get_vertex_normals()
        elapsed_pr = time.time() - t
    
        assert np.isclose(vn_ori, vn_pr).all()
    
        print(f"{N_vertex},{N_faces},{elapsed_ori},{elapsed_pr}")
    
    
    N_vertex = 10_000
    N_faces = 10_000
    
    print("N_vertex,N_faces,original,PR")
    while N_vertex < 5_000_000:
        run(N_vertex, N_faces)
        N_vertex = int(N_vertex * 1.1)
    
    # N_vertex = 10_000
    # N_faces = 10_000
    
    # while N_faces < 5_000_000:
    #     run(N_vertex, N_faces)
    #     N_faces = int(N_faces * 1.1)
    

    timings_N_vertex timings_N_faces

    (plotted with the following)

    import matplotlib.pyplot as plt
    import numpy as np
    import csv
    
    with open('out.csv', 'r') as f:
        reader = csv.reader(f)
        label = next(reader)
        data = np.array(list(list(map(float, line)) for line in reader))
    
    
    plt.plot(data[:, 0], data[:, 2], label=label[2])
    plt.plot(data[:, 0], data[:, 3], label=label[3])
    plt.xlabel(label[0])
    plt.ylabel("time (sec)")
    
    plt.legend()
    plt.savefig("timings_N_vertex.png")
    # plt.show()
    
    opened by soraxas 4
  • Noob Here. Performance is way too slow. (1 FPS)

    Noob Here. Performance is way too slow. (1 FPS)

    To preface, this is my first time writing an issue post, so I'm unsure of the format; sorry in advance. I am trying to use two different cameras (with separate canvases), as I need the cameras to view the same object from different angles and positions. I don't know if there is a way to do this on one canvas. Also, I'm trying to use compound visuals (seen inside my __init__ function below). I use a fairly simple mesh with ~10k faces.

            if not firstView:
                vispy.app.qt.QtSceneCanvas.__init__(self, keys='interactive', size=(1080, 720), bgcolor=(0,0,0,0))
            else:
                vispy.app.qt.QtSceneCanvas.__init__(self, keys=None, size=(1080, 720), bgcolor=(0,0,0,0))
            self.unfreeze()
            self.meshes = []
            self.filters = []
            self.view = self.central_widget.add_view()
            self.view.camera = 'fly'
            self.view.camera.fov = 90
            self.view.camera.scale_factor = 1.0
            self.view.camera.zoom_factor = -1
            self.measure_fps()
            self.view.camera.center = (-10,3.5,3.5)
            initialRotation = Quaternion.create_from_euler_angles(90,90,0, True)
            self.view.camera.rotation1 = initialRotation
            if not firstView:
                self.view.camera.interactive = True
                self.view.camera.auto_roll = True
            else:
                self.view.camera.interactive = True
                self.view.camera.auto_roll = False
            varMesh = trimesh.load(file)
            print(str(varMesh.vertices))
            varMeshnp = varMesh.vertices.view(np.ndarray)
            print('max', str(varMeshnp.max(axis=0)))
            self.mesh = None
            if not firstView:
                self.connect(self.on_key_press)
            mdata = geometry.MeshData(varMesh.vertices, varMesh.faces)
            print(str(varMesh.vertices))
            if firstView:
                compound = Compound([Mesh(meshdata=mdata, shading=None, color=(1,0,0,.6), parent = self.view.scene)], parent = self.view.scene)
                self.shading_filter = ShadingFilter(shading='flat', diffuse_light=(1,1,1,.7), ambient_light = (1, 1, 1, .4), specular_light = (1,1,1,.5), shininess=30)
                compound._subvisuals[0].attach(self.shading_filter)
    
            if not firstView:
                compound3 = Compound([Mesh(meshdata=mdata, shading=None, color=(1,1,1,1), parent=self.view.scene)], parent = self.view.scene)
                self.wireframe_filter = WireframeFilter(wireframe_only=True, color='white', width=0.05)
                compound3._subvisuals[0].attach(self.wireframe_filter)
                print(str(dir(compound3)))
                #self.mesh.attach(ColorFilter(filter = (.2, .2, 1, 1)))
                #self.mesh.attach(Alpha(.7))
                self.shading_filter = ShadingFilter(shading='flat', diffuse_light=(1,1,1,.9))
                compound3._subvisuals[0].attach(self.shading_filter)
    
            self.attach_headlight()
            self.timer = vispy.app.Timer(connect = self.__init_move__)
            self.timer.start(0.000001)
            finalArr = list(self.view.camera.center)
            initialPose = list(self.view.camera.center)
    
        def attach_headlight(self):
            light_dir = (0, 1, 0, 0)
            self.shading_filter.light_dir = light_dir[:3]
            initial_light_dir = self.view.camera.transform.imap(light_dir)
    
            @self.view.scene.transform.changed.connect
            def on_transform_change(event):
                transform = self.view.camera.transform
    

    I then try to add visuals to each canvas separately using:

                    compound.add_subvisual(Tube(points=tubeArr, radius=.01, color= "blue", parent=canvas1.view.scene))
                    compound.add_subvisual(Tube(points=warningTube, radius=.02, color="orange", parent=canvas1.view.scene))
                    compound.add_subvisual(Tube(points=warningTube2, radius=.04, color= "yellow", parent=canvas1.view.scene))
    

    and then subsequently update the nodes inside the compound visual using:

                    compound._subvisuals[2].set_data(points=tubeArr, radius=.01, color= "blue", parent=canvas1.view.scene)
                    compound._subvisuals[3].set_data(points=warningTube, radius=.02, color= "orange", parent=canvas1.view.scene)
                    compound._subvisuals[4].set_data(points=warningTube2, radius=.04, color= "yellow", parent=canvas1.view.scene)
    
    

    Without the additional subvisuals, vispy runs at 60 FPS, which is perfectly acceptable for me, although my machine should easily be able to do more than that, since it has a fairly new graphics card. With the additional Tube subvisuals, the framerate drops to 1-2 FPS. Any insight into any of this would be extremely helpful, as vispy is currently not very useful to me as live 3D visualization software.

    opened by JamesTRoss1 11