Vignette is face tracking software for characters, built with osu!framework.

Overview



Vignette is face tracking software for characters, built with osu!framework. Unlike most solutions, Vignette is:

  • Made with osu!framework, the game framework that powers osu!lazer, the next iteration of osu!.
  • Open source, from the very core.
  • Always evolving - Vignette improves every update, and it tries to know you better too, literally.

Running

We provide releases on GitHub Releases and also on Visual Studio App Center. We ship builds to a select few testers before publishing a release here, so keep an eye out.

You can also run Vignette by cloning the repository and running this command in your terminal.

dotnet run --project Vignette.Desktop

Developing

Please make sure you meet the prerequisites:

Contributing

The style guide is defined in the .editorconfig file at the root of this repository and will be picked up by IntelliSense in capable editors. Please follow the provided style for consistency.

License

Vignette is Copyright © 2020 Ayane Satomi and the Vignette Authors, licensed under the GNU General Public License v3.0 with the SDK exception. For the full license text, please see the LICENSE file in this repository. Live2D components are additionally covered by a separate license, which can be found here: Live2D Open Software License.

Commercial Use and Support

While Vignette is GPL-3.0, we do not provide commercial support. Nothing stops you from using it commercially, but if you want dedicated support from the Vignette engineers, we highly recommend the Enterprise tier on our Open Collective.

Comments
  • Refactor User Interface

    Refactor User Interface

First and foremost, this is the awaited UI refresh, which now sports a sidebar instead of a full-screen menu. It also brings updated styling on several components and updates osu!framework and Fluent System Icons. Backdrops (backgrounds) get a significant update as well, now allowing both video and images as a target.

Under the hood, I have refactored theming and keybind management (UI is again to follow). Themes can now be edited on the fly, but only the export button works; applying changes live will follow. I've also laid down the foundation for avatar, recognition, and camera settings, but only as hollow controls that don't do anything yet.

    priority:high area:user-interface 
    opened by LeNitrous 18
  • Refactor Vignette.Camera

    Refactor Vignette.Camera

    This PR fixes issue #234.

The previous solution I implemented simply avoided adding a duplicate item to the FluentDropdown and warned about it with a console write statement.

Now the solution indexes the friendly names so that all options show up. We're now faced with a "can't open camera by index" bug.

    opened by Speykious 9
  • Allow osu!framework to not block compositing

    Allow osu!framework to not block compositing

Desktop effects are killed globally while Vignette is running. Some parts, like disabling decorations, are fine, but transparency, wobbly windows, smooth animations for actions, etc. are all disabled as long as Vignette is running.

    proposal 
    opened by Martmists-GH 8
  • [Need Help] It crashed

    [Need Help] It crashed

    It crashed the first time I opened it. I'm using Windows 7 Service Pack 1, with dotnet x64 5.0.11.30524.

    In most cases it happened like this (screenshot).

    And sometimes like this (screenshot).

    As far as I know, no logs, crash reports, or dumps are created :( Can you help?

    invalid:wont-fix 
    opened by huzpsb 7
  • Vignette bundles the dotnet runtime

    Vignette bundles the dotnet runtime

    It seems the last issue went missing so I'm re-adding it.

    Reasons to bundle:

    • No need for end user to install it

    Reasons not to bundle:

    • User likely already has dotnet installed
    • Installer or install script can install it if missing
    • Prevent duplication of dependencies
    • Allow package manager (or user) to update dotnet with important fixes without the need for a new Vignette release
    • Some systems may need a custom patch to dotnet, which a bundled runtime would overwrite
    invalid:wont-fix 
    opened by Martmists-GH 6
  • Evaluate CNTK or Tensorflow for Tracking Backend

    Evaluate CNTK or Tensorflow for Tracking Backend

    Unfortunately, our tracking backend, FaceRecognitionDotNet (which uses DLib and OpenCV), didn't turn out as performant as expected. The frame delta is too high to yield meaningful data, and the models currently perform poorly. In light of that, I will have to make a backend we can control directly instead of relying on third-party work whose quality we cannot verify.

    Right now we're looking at CNTK and Tensorflow. While CNTK is from Microsoft, there is more existing groundwork around Tensorflow, so we'll have to decide on this.

    proposal priority:high 
    opened by sr229 6
  • Use FFmpeg instead of EmguCV

    Use FFmpeg instead of EmguCV

    Currently, EmguCV is being used only to handle webcam input. We've had various problems with runtimes not being in the right place and cameras not being detected.

    Thus I propose that we use FFmpeg for that task. I think that it will be much easier to deal with as we can just use it as a system-installed binary. Not to mention that the library is LGPL which is just perfect for our use-case.

    priority:medium area:recognition 
    opened by Speykious 5
  • Lag Compensation for Prediction Data to Live2D

    Lag Compensation for Prediction Data to Live2D

    As part of #28, we have discussed how raw data would be jittery and rough, even if the neural network used were theoretically as precise as a human eye at predicting the facial movements of the subject. To compensate for jittery input, we will implement a form of lag-compensation algorithm.
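    One minimal way to illustrate the idea, assuming nothing about the eventual implementation, is an exponential moving average over the raw landmark predictions. The function name and data shapes below are hypothetical, purely for illustration:

```python
def smooth_landmarks(frames, alpha=0.3):
    """Exponentially smooth a sequence of (x, y) landmark predictions.

    alpha closer to 0 -> heavier smoothing (less jitter, more lag);
    alpha closer to 1 -> follows the raw input (more jitter, less lag).
    """
    smoothed = []
    prev = None
    for x, y in frames:
        if prev is None:
            prev = (x, y)  # first frame: nothing to blend with yet
        else:
            prev = (alpha * x + (1 - alpha) * prev[0],
                    alpha * y + (1 - alpha) * prev[1])
        smoothed.append(prev)
    return smoothed
```

    Any real implementation would need to trade the added latency of smoothing against jitter, which is exactly the tension the sections below explore.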

    Background

    John Carmack's work on latency mitigation for virtual reality devices (source) explains that the latency between the user's physical head movement and the updated image reaching the eyes is critical to the experience. While the document is aimed mainly at virtual reality, one can argue that the methodologies used to provide a seamless virtual reality experience also apply to a face tracking application, as face tracking, like HMDs, is a very demanding "human-in-the-loop" interface.

    Byeong-Doo Choi et al.'s work on frame interpolation enhances a target video's temporal resolution with a novel motion-prediction algorithm using adaptive OBMC. According to the paper, such frame interpolation techniques have been shown to give better results than the algorithms currently used for frame interpolation in the market.

    Strategy

    As stated in the background, there are many strategies for lag-compensating the jittery raw prediction data coming from the neural network; we limit our consideration to these two:

    Frame Interpolation by Motion Prediction

    Byeong-Doo Choi et al. achieve frame interpolation as follows:

    First, we propose the bilateral motion estimation scheme to obtain the motion field of an interpolated frame without yielding the hole and overlapping problems. Then, we partition a frame into several object regions by clustering motion vectors. We apply the variable-size block MC (VS-BMC) algorithm to object boundaries in order to reconstruct edge information with a higher quality. Finally, we use the adaptive overlapped block MC (OBMC), which adjusts the coefficients of overlapped windows based on the reliabilities of neighboring motion vectors. The adaptive OBMC (AOBMC) can overcome the limitations of the conventional OBMC, such as over-smoothing and poor de-blocking

    According to their experiments, this method produces better image quality for the interpolated frames, which would help prediction in our neural network. However, it comes at the cost of having to process the video at runtime, as their experiments were done only on pre-rendered video frames.
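    As a much-simplified stand-in for the motion-compensated method described above, the core idea of synthesizing an intermediate frame can be sketched with plain linear blending between two frames. This is only an illustration of the goal; the real AOBMC algorithm blends along estimated motion vectors rather than per-pixel:

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Linearly blend two frames (2D grids of grayscale values).

    t = 0 returns frame_a, t = 1 returns frame_b; values in between
    synthesize an intermediate frame. Motion-compensated interpolation
    (e.g. AOBMC) would instead blend pixels along motion vectors,
    avoiding the ghosting this naive blend produces on moving objects.
    """
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]
```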

    View Bypass/Time Warping

    John Carmack's work on reducing input latency for VR HMDs suggests a multitude of methods, one of which is view bypass: a method achieved by taking a newer sample of the input.

    To achieve this, the input is sampled once but used by both the simulation and the rendering task, reducing latency for both. However, the input thread and the game thread must run in parallel, and the programmer must be careful not to reference mutable game state, otherwise a race condition would occur.
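    The single-sample idea can be sketched as follows, assuming hypothetical callables for the input source, simulation, and renderer. The point is that both stages consume the same immutable snapshot, so the renderer never has to read mutable game state:

```python
def run_frame(sample_input, simulate, render):
    """One frame of a view-bypass loop.

    Input is sampled exactly once per frame; the same immutable
    snapshot feeds both the simulation and the renderer, so neither
    races the other over shared mutable state.
    """
    snapshot = sample_input()       # single, immutable input sample
    state = simulate(snapshot)      # game logic consumes the snapshot
    return render(state, snapshot)  # renderer reuses the same snapshot
```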

    Another method mentioned by Carmack is time warping, which he describes as follows:

    After drawing a frame with the best information at your disposal, possibly with bypassed view parameters, instead of displaying it directly, fetch the latest user input, generate updated view parameters, and calculate a transformation that warps the rendered image into a position that approximates where it would be with the updated parameters. Using that transform, warp the rendered image into an updated form on screen that reflects the new input. If there are two dimensional overlays present on the screen that need to remain fixed, they must be drawn or composited in after the warp operation, to prevent them from incorrectly moving as the view parameters change.

    There are two methods of warping, forward warping and reverse warping, and either can be used along with view bypass. The added complexity of handling input concurrently with the main loop is manageable, as the input loop is entirely independent of the game state.
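    The essence of time warping can be sketched as re-projecting an already-rendered result by the delta between the pose it was rendered with and the latest sampled pose. This pure-translation approximation is only illustrative; a real implementation warps the rendered image with the full updated view transform:

```python
def time_warp(points, rendered_pose, latest_pose):
    """Shift rendered 2D points by the pose delta observed since render.

    points:        [(x, y), ...] positions rendered with rendered_pose
    rendered_pose: (x, y) pose the frame was rendered at
    latest_pose:   (x, y) freshest input sample at display time
    """
    dx = latest_pose[0] - rendered_pose[0]
    dy = latest_pose[1] - rendered_pose[1]
    return [(x + dx, y + dy) for x, y in points]
```

    As the quoted passage notes, any fixed 2D overlays would have to be composited after this warp step so they don't move with the view.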

    Conclusion

    The strategies mentioned would give us a smoother experience; however, based on my analysis, Carmack's solutions are more feasible for a project of our scale. We simply don't have the team or the technical resources to implement from-camera video interpolation with minimal overhead, as it is computationally expensive.

    area:documentation proposal priority:high 
    opened by sr229 5
  • Hook up Tracking Worker to Live2D

    Hook up Tracking Worker to Live2D

    As the final task for Milestone 1, we're going to hook up the tracking worker to Live2D and see if we can spot some bugs before we turn in on our release.

    proposal priority:high 
    opened by sr229 5
  • User Interface

    User Interface

    We want to customize the Layout, and to do that we need to do the following:

    • Make the Live2D a draggable component
    • Custom Backgrounds (Green Screen default, white default background, or Image).
    • Persist this layout into a format (YAML, perhaps?)

    Todo

    • [ ] Draggable and resizable Live2D container.
    • [ ] Backgrounds support (White background, Green background, user-defined).

    Essentially, since we're going to have a layout similar to this:

    image
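    A layout file persisting these options could look something like the YAML fragment below. Every field name here is hypothetical, sketched only to show what "persist this layout into a format" might mean; no schema has been decided:

```yaml
# layout.yaml — illustrative sketch only, not a committed schema
avatar:
  position: [640, 360]   # draggable Live2D container position
  scale: 1.0             # resizable container scale
backdrop:
  type: colour           # colour | image
  value: "#00FF00"       # green screen default; white is "#FFFFFF"
```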

    proposal priority:high 
    opened by sr229 5
  • Extension System

    Extension System

    Discussed in https://github.com/vignetteapp/vignette/discussions/216

    Originally posted by sr229 May 9, 2021: This has been requested by the community; however, it's relatively low priority as we're focusing mostly on the core components. The way this works is the following:

    • Extensions can expose their settings in MainMenu.
    • They must strictly conform to the o!f model to load properly. This is considered the bare minimum of what's required to make an extension.
    • They will be packaged as either a .dll or a .nupkg, which the program can "extract" or "compile" into a DLL, something we can do once we have a better idea of how to dynamically load assemblies.

    Anyone can propose a better design here since this is an RFC; we appreciate alternative approaches.

    priority:high 
    opened by sr229 4
  • UI controls, sprites, containers, etc as a Nuget package.

    UI controls, sprites, containers, etc as a Nuget package.

    It would be nice if you could make a separate library that includes all the UI controls, themeable sprites, containers, etc. as a NuGet package. It would allow other developers to integrate them into their projects and get access to a nice suite of UI controls and other components instead of writing them from scratch.

    priority:high area:user-interface 
    opened by Whatareyoulaughingat 6
  • VRM Support

    VRM Support

    Here's a little backlog while we're working on the rendering/scene/model API for extensions. Since this is a reference implementation for all 3D/2D model support extensions, VRM is going to be our flagship extension and will serve as an extension reference for model support.

    References

    proposal priority:high area-extensions 
    opened by sr229 0
  • Steamworks API integration

    Steamworks API integration

    As part of #251, we might want to include the Steamworks API in case people have a use for it in our Steam releases. It would be optional and hidden behind a build flag.

    proposal priority:medium 
    opened by sr229 2
  • First time user experience (OOBE)

    First time user experience (OOBE)

    Design specifications are now released for the first-time user experience. This will guide users through setting up the bare essentials so they can get up and running quickly.

    priority:medium area:user-interface 
    opened by sr229 0
  • Internationalization Support (i18n)

    Internationalization Support (i18n)

    We'll have to support multiple languages. A good start is looking at Crowdin as a source. We'll support languages on demand, but for starters I think we'll support English, Japanese, and Chinese (Simplified and Traditional), given we have people proficient in those languages.

    As for implementation, that would be the second part of the investigation.

    good first issue priority:low 
    opened by LeNitrous 13
  • Documentation Tasks

    Documentation Tasks

    We'll have to document the more significant parts at some point. We'd want contributors to have an idea of how everything works in the back end, after all.

    For now we can direct them to osu!framework's Getting Started wiki pages.

    area:documentation good first issue priority:low 
    opened by LeNitrous 0
Releases (2021.1102.2)