c is for Camera

A 35mm camera, based on the Canonet G-III QL17 rangefinder, simulated in Python.

The purpose of this project is to explore and understand the logic in the mechanisms of a camera by using object-oriented programming to represent real-world objects. It's also a way to appreciate the intricate mechanical logic embodied in a device like a camera.

[Image: Canonet G-III QL17]

The simulation aims for completeness in its modelling of the real world. For example, if you open the back of the camera in daylight with a partially exposed film inside, the film will be ruined.
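
Each physical part of the camera is modelled as its own Python object, and the camera itself is a composition of those parts; the behaviour emerges from the way the parts interact. As a rough sketch of the approach only (the class bodies below are simplified assumptions, not the project's actual code):

class LensCap:
    """The lens cap is a single piece of state: it is either on or off."""
    def __init__(self, on=False):
        self.on = on

class Camera:
    """The camera wires its components together; each one models a real part."""
    def __init__(self):
        self.lens_cap = LensCap()
        # the real simulation also composes a shutter, back, iris, exposure
        # mechanism, film advance and rewind mechanisms, and the film itself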

See the c is for Camera documentation.

A quick tour

Clone the repository:

git clone https://github.com/evildmp/C-is-for-Camera.git

or:

git clone git@github.com:evildmp/C-is-for-Camera.git

In the C-is-for-Camera directory, start a Python 3 shell.

>>> from camera import Camera
>>> c = Camera()

See the camera's state:

>>> c.state()
================== Camera state =================

------------------ Controls ---------------------
Selected speed:            1/120

------------------ Mechanical -------------------
Back closed:               True
Lens cap on:               False
Film advance mechanism:    False
Frame counter:             0
Shutter cocked:            False
Shutter timer:             1/128 seconds
Iris aperture:             ƒ/16
Camera exposure settings:  15.0 EV

------------------ Metering ---------------------
Light meter reading:        4096 cd/m^2
Exposure target:            15.0 EV
Mode:                       Shutter priority
Battery:                    1.44 V
Film speed:                 100 ISO

------------------ Film -------------------------
Speed:                      100 ISO
Rewound into cartridge:     False
Exposed frames:             0 (of 24)
Ruined:                     False

------------------ Environment ------------------
Scene luminosity:           4096 cd/m^2
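
The numbers in this display hang together through the standard exposure arithmetic. Assuming the common reflected-light meter calibration constant K = 12.5, the exposure value at ISO 100 for a scene luminance L is EV = log2(L × S / K), and in shutter-priority mode the aperture follows from EV = log2(N² / t). A quick check of the values above (this is the textbook relationship, not necessarily the code the simulation uses):

import math

K = 12.5     # reflected-light meter calibration constant (assumed value)
S = 100      # film speed, ISO
L = 4096     # scene luminosity, cd/m^2
t = 1 / 128  # shutter timer, seconds

ev = math.log2(L * S / K)           # 15.0 EV -- matches the exposure target
aperture = math.sqrt(2 ** ev * t)   # 16.0    -- matches the ƒ/16 iris aperture
print(f"EV {ev:.1f}, f/{aperture:.0f}")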

Advance the film:

>>> c.film_advance_mechanism.advance()
On frame 0 (of 24)
Advancing film
On frame 1 (of 24)
Cocking shutter
Cocked

Release the shutter:

>>> c.shutter.trip()
Shutter opening for 1/128 seconds
Shutter closes
Shutter uncocked
'Tripped'

It's not possible to advance the mechanism twice without releasing the shutter:

>>> c.film_advance_mechanism.advance()
On frame 1 (of 24)
Advancing film
On frame 2 (of 24)
Cocking shutter
Cocked
>>> c.film_advance_mechanism.advance()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/daniele/Repositories/camera/camera.py", line 56, in advance
    raise self.AlreadyAdvanced
camera.AlreadyAdvanced
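
The interlock works because advancing the film and releasing the shutter each leave behind a flag that the other side checks. A minimal sketch of that idea (a simplified assumption, not the project's actual implementation):

class FilmAdvanceMechanism:

    class AlreadyAdvanced(Exception):
        """Raised on an attempt to advance twice without releasing the shutter."""

    def __init__(self, shutter):
        self.shutter = shutter
        self.advanced = False

    def advance(self):
        if self.advanced:
            raise self.AlreadyAdvanced
        self.advanced = True
        self.shutter.cock()

class Shutter:
    def __init__(self):
        self.cocked = False
        self.film_advance_mechanism = None  # wired up by the camera

    def cock(self):
        self.cocked = True

    def trip(self):
        if not self.cocked:
            return
        self.cocked = False
        self.film_advance_mechanism.advanced = False  # frees the mechanism again
        return "Tripped"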

If you open the back in daylight it ruins the film:

>>> c.back.open()
Opening back
Resetting frame counter to 0
'Film is ruined'
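
The 'Film is ruined' result comes from the back consulting both the film and the surrounding light when it opens. A rough sketch of that check (an assumed shape for the logic, not the project's actual code):

class Back:
    def __init__(self, camera):
        self.camera = camera
        self.closed = True

    def open(self):
        print("Opening back")
        print("Resetting frame counter to 0")
        self.closed = False
        self.camera.frame_counter = 0
        # Light reaching a partly exposed roll destroys the latent images.
        if self.camera.film.frames_exposed > 0 and self.camera.environment.scene_luminosity > 0:
            self.camera.film.ruined = True
            return "Film is ruined"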

Close the back and rewind the film:

>>> c.back.close()
Closing back
>>> c.film_rewind_mechanism.rewind()
Rewinding film
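
Putting the tour together: alternating advance and trip exposes successive frames, after which the roll can be rewound. A short sketch using only the calls shown above (each call also prints its progress, omitted here):

>>> c = Camera()
>>> for frame in range(3):
...     c.film_advance_mechanism.advance()
...     c.shutter.trip()
...
>>> c.film_rewind_mechanism.rewind()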