Fileson - JSON File database tools

Overview

Fileson is a set of Python scripts to create JSON file databases and use them for various tasks, like comparing the differences between two databases. There are a few key files:

  • fileson.py contains the Fileson class to read, manipulate and write Fileson databases. Relies on logdict.py, a logging-enabled hashmap (the core idea is sketched below).
  • fileson_util.py is a command-line toolkit to create Fileson databases and do useful things with them.
  • fileson_backup.py contains helper logic for creating crypto keys, encryption/decryption, upload/download from S3, and most importantly, backup/restore functionality.
  • fileson_tool.py is a config-based interface to simple backups.

API documentation (everything very much subject to change) is available at https://fileson.readthedocs.io/en/latest/
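
To give a feel for the core idea, here is a minimal concept sketch of a logging-enabled hashmap in the spirit of logdict.py -- the names and details here are illustrative, not the actual implementation:

import json

class LogDict(dict):
    """Toy dict that appends every change as a JSON row (concept sketch only)."""
    def __init__(self, logfile):
        super().__init__()
        self.log = open(logfile, 'a', encoding='utf-8')
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.log.write(json.dumps({key: value}) + '\n')
        self.log.flush()  # flush immediately so an interrupted run loses nothing

d = LogDict('changes.log')
d['some.zip'] = {'size': 0, 'modified_gmt': '2021-02-23 21:57:25'}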

Quickstart to backup

If you are not that interested in the details of this library, set up your backup process in a few straightforward steps:

Prerequisites (S3 and boto3)

  1. Sign up for AWS and create an S3 bucket.
  2. Create a new identity that has privileges for writing to that bucket. Yes, you will need to google 'grant identity access to s3 bucket' for how to do this.
  3. Use something like S3 Browser to check you can upload to your bucket with your newly created credentials.
  4. Get boto3 for Python and configure the credentials. Maybe even do a test run with the S3 sample code (the boto3 quickstart documentation is excellent).

Using the fileson_tool.py

  1. Edit the included fileson.ini (and create an encryption key if you want encrypted backups, see the comments inside the ini file)
  2. Run python3 fileson_tool.py scan to create the .fson files for your backup entries.
  3. Run python3 fileson_tool.py backup to back everything up. This will take a while, so maybe use -e entryname to back up entries one by one.
  4. Repeat from (2) whenever you want to update the backup!

The backup process should tolerate interruptions with ctrl-c and carry on where it left off when rerun (it logs every upload and flushes the log to disk after every file).

Tip: You may want to keep the fileson.ini in a separate directory and run the scan and backup commands from there, so you have a nice folder to (also) back up to your cloud -- encrypted and name-obfuscated backup files are of little use without the .fson and .log files!

Create a Fileson database

user@server:~$ python3 fileson_util.py scan files.fson ~/mydir

Fileson databases are essentially log files with JSON objects per row, containing directory and file information (name, modified date, size) for ~/mydir and some additional metadata for each scan (changes to entries are appended to the end).
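
For illustration, a couple of rows could look like this (field names follow the diff output shown later in this README; the actual format carries more detail):

{"path": "some.zip", "size": 0, "modified_gmt": "2021-02-23 21:57:25"}
{"path": "subdir", "modified_gmt": "2021-02-28 19:42:05"}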

To calculate an SHA1 checksum for the files as well:

user@server:~$ python3 fileson_util.py scan files.fson ~/mydir -c sha1

Calculating SHA1 checksums is somewhat slow, around 1 GB/s on a modern M.2 SSD and 150 MB/s on a mechanical drive, so you can use -c sha1fast to hash only the beginning of each file. It differentiates most cases quite well.
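
The general idea of a fast partial checksum is easy to sketch in Python -- this is not the exact sha1fast implementation, just the approach of hashing the file size plus an initial slice:

import hashlib, os

def sha1fast(path, head=1024 * 1024):
    """Hash the file size plus the first megabyte -- fast, but not collision-proof."""
    h = hashlib.sha1()
    h.update(str(os.path.getsize(path)).encode())
    with open(path, 'rb') as f:
        h.update(f.read(head))
    return h.hexdigest()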

Fileson databases are versioned. Once a database exists, repeated calls to fileson_util.py scan will update the database, keeping track of the changes. You can then use this information to view changes between given runs, etc.

Normally SHA1 checksums are carried over if the previous version had a file with the same name, size and modification time. For a stricter match, you can use -s or --strict to require the full path to match. Note that this means calculating new checksums for all moved files.
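
In other words, the carry-over lookup key changes with --strict; roughly (illustrative, not the actual code):

def carry_key(path, name, size, mtime, strict=False):
    # Default: a moved but otherwise unchanged file still matches its old checksum.
    # Strict: the full path must match, so moved files get rehashed.
    return (path, size, mtime) if strict else (name, size, mtime)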

Duplicate detection

Once you have a Fileson database ready, you can do fun things like see if you have any duplicates in your folder (the cryptic string before each group of duplicates identifies the checksum collision, whether it is based on size or SHA1):

user@server:~$ python3 fileson_util.py duplicates pics.fson

1afc8e06e081b772eadd6a981a83f67077e2ef10
2009/2009-03-07/DSC_3962-2.NEF
2009/2009-03-07/DSC_3962.NEF

Many folders tend to have a lot of small files in common (including empty files), for example source code with git repositories, and that is usually fine, so you can use for example -m 1M to only show duplicates that have a minimum size of 1 MB.
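
Conceptually, duplicate detection is just grouping files by checksum and reporting groups with more than one member. A minimal sketch, assuming an input of (path, checksum, size) tuples:

from collections import defaultdict

def find_duplicates(files, min_size=0):
    """files: iterable of (path, checksum, size) tuples (assumed input format)."""
    groups = defaultdict(list)
    for path, checksum, size in files:
        if size >= min_size:
            groups[checksum].append(path)
    return {c: paths for c, paths in groups.items() if len(paths) > 1}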

You can skip database creation and give a directory to the command as well:

user@server:~$ python3 fileson_util.py duplicates /mnt/d/SomeFolder -m 1M -c sha1fast

Change detection

Once you have a Fileson database or two, you can compare them with fileson_util.py diff. Like the duplicates command, one or both inputs can be a directory. Note that two databases with different checksum types will essentially differ on all files.

user@server:~$ python3 fileson_util.py diff myfiles-2010.fson myfiles-2020.fson \
  myfiles-2010-2020.delta

The myfiles-2010-2020.delta now contains a row per difference between the two databases/directories -- files that exist only in origin, only in target, or have changed.

Let's say you move some.zip around a bit (JSON formatted for clarity):

user@server:~$ python3 fileson_util.py scan files.fson ~/mydir -c sha1
user@server:~$ mv ~/mydir/some.zip ~/mydir/subdir/newName.zip
user@server:~$ python3 fileson_util.py diff files.fson ~/mydir -c sha1 -p
{"path": ".", "src": {"modified_gmt": "2021-02-28 19:42:05"},
    "dest": {"modified_gmt": "2021-02-28 19:42:26"}}
{"path": "some.zip", "src": {"size": 0, "modified_gmt": "2021-02-23 21:57:25"},
    "dest": null}
{"path": "subdir", "src": {"modified_gmt": "2021-02-28 19:42:05"},
    "dest": {"modified_gmt": "2021-02-28 19:42:26"}}
{"path": "subdir/newName.zip", "src": null,
    "dest": {"size": 0, "modified_gmt": "2021-02-23 21:57:25"}}

Doing an incremental backup would involve grabbing the deltas which have src set to null. With SHA1 checksums, you could also skip uploading a new file whose blob has already been uploaded before (by keeping a separate Fileson object log of backed-up files).
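
As a sketch, picking the rows to upload out of a delta file could look like this (assuming one JSON object per line, as in the output above):

import json

def new_files(delta_path):
    """Yield paths that exist only in the target, i.e. candidates for upload."""
    with open(delta_path, encoding='utf-8') as f:
        for line in f:
            row = json.loads(line)
            if row.get('src') is None and row.get('dest'):
                yield row['path']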

Loading Fileson databases has a special syntax similar to git, where db.fson~1 gets the previous version and db.fson~3 goes back three steps. This makes printing out changes after a scan a breeze. Instead of the fileson_util.py diff invocation above, you could update the DB and see what changed:

user@server:~$ python3 fileson_util.py scan files.fson
user@server:~$ python3 fileson_util.py diff files.fson~1 files.fson -p
[ same output as the above diff ]

Note that you did not have to specify the checksum type or directory, as they are detected automatically from the Fileson DB.
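
The ~N suffix itself is simple to handle; a sketch of the idea (not the actual loader code):

import re

def split_revision(spec):
    """Split 'db.fson~3' into ('db.fson', 3); a plain name yields 0 steps back."""
    m = re.fullmatch(r'(.*?)(?:~(\d+))?', spec)
    return m.group(1), int(m.group(2) or 0)

print(split_revision('db.fson~3'))  # ('db.fson', 3)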

Use Fileson for simple backups to local or cloud

Fileson contains a robust set of utilities to make backups locally or into S3, either unencrypted or with secure AES256 encryption. For S3 you need to have the boto3 client configured first.

Encryption

Encryption is done with a 256-bit key that you can generate easily:

user@server:~$ python3 fileson_backup.py keygen password salt > my.key

Now my.key contains a 64-character hex key (256 bits, suitable for AES256) derived from the given password and salt with PBKDF2 (1 million iterations by default). You can use the key to encrypt and decrypt data.
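
Python's standard library can derive a comparable key. A minimal sketch of the technique -- the exact parameters here (HMAC-SHA256, 32-byte output) are assumptions, not necessarily what fileson_backup.py uses:

import hashlib

def derive_key(password, salt, iterations=1_000_000):
    """PBKDF2-HMAC-SHA256; 32 bytes -> 64 hex characters."""
    key = hashlib.pbkdf2_hmac('sha256', password.encode(), salt.encode(), iterations)
    return key.hex()

print(derive_key('password', 'salt'))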

user@server:~$ python3 fileson_backup.py encrypt some.txt some.enc my.key
user@server:~$ python3 fileson_backup.py decrypt some.enc some2.txt my.key
user@server:~$ diff some.txt some2.txt
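
Under the hood this is symmetric AES encryption. A rough sketch using the pycryptodome package and AES-256 in GCM mode -- the actual cipher mode and file layout used by fileson_backup.py may differ:

from Crypto.Cipher import AES  # pip install pycryptodome

def encrypt_file(src, dst, key):
    """key is 32 raw bytes; writes 16-byte nonce + 16-byte tag + ciphertext."""
    cipher = AES.new(key, AES.MODE_GCM)
    with open(src, 'rb') as f:
        ciphertext, tag = cipher.encrypt_and_digest(f.read())
    with open(dst, 'wb') as f:
        f.write(cipher.nonce + tag + ciphertext)

def decrypt_file(src, dst, key):
    with open(src, 'rb') as f:
        data = f.read()
    cipher = AES.new(key, AES.MODE_GCM, nonce=data[:16])
    with open(dst, 'wb') as f:
        f.write(cipher.decrypt_and_verify(data[32:], data[16:32]))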

Uploading to S3 and downloading

A simple upload/download client is also provided:

user@server:~$ python3 fileson_backup.py upload some.txt s3://mybucket/objpath
user@server:~$ python3 fileson_backup.py download s3://mybucket/objpath some2.txt
user@server:~$ diff some.txt some2.txt

Just add -k my.key to encrypt/decrypt files on the fly with upload and download.
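
The plain (unencrypted) transfer is a couple of boto3 calls; a minimal sketch assuming your credentials are already configured:

import boto3

s3 = boto3.client('s3')
s3.upload_file('some.txt', 'mybucket', 'objpath')    # local -> s3://mybucket/objpath
s3.download_file('mybucket', 'objpath', 'some2.txt') # s3://mybucket/objpath -> local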

Backing up a Fileson-scanned directory

Once you have a Fileson database at hand, you can do a backup run. A few considerations:

  1. The base path of files is taken from the Fileson DB, so if you used a relative path when scanning, the backup command needs to be run in the same directory.
  2. To avoid backing up the same files over and over, the second argument is a backup logfile, essentially recording the SHA1 hashes and locations of files backed up.
  3. You need to specify either a local directory or an S3 path.

The backup log is essentially a Fileson DB for your backup location, and it is written line by line as the backup progresses. So if the backup process gets interrupted, you can just rerun the backup command and it should resume with the next item that was not yet backed up.
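
A sketch of that resume logic (illustrative only -- the real backup command tracks more metadata per file):

import json

def backup(files, log_path, upload):
    """files: dict of path -> sha1; upload: callable(path). Skips logged hashes."""
    done = set()
    with open(log_path, 'a+', encoding='utf-8') as log:
        log.seek(0)
        for line in log:
            done.add(json.loads(line)['sha1'])
        for path, sha1 in files.items():
            if sha1 in done:
                continue  # blob already backed up on an earlier, possibly interrupted run
            upload(path)
            log.write(json.dumps({'path': path, 'sha1': sha1}) + '\n')
            log.flush()  # flush per file so an interruption loses at most one entry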

Here is an example of simple backup to a local folder:

user@server:~$ python3 fileson_util.py scan db.fson ~/mydir -c sha1
user@server:~$ python3 fileson_backup.py backup db.fson db_backup.log /mnt/backup

That's it. Once files change, re-run the scan to update the database and then the backup to upload any added objects.

Note: Support for removing files from the backup location that no longer exist in db.fson is not yet done.
