CLI tool and python library that converts the output of popular command-line tools and file-types to JSON or python dictionaries. This allows piping output to tools like jq and simplifies automation scripts.

Overview


Try the new jc web demo!

JC is now available as an Ansible filter plugin in the community.general collection! See this blog post for an example.

JC

JSON CLI output utility

jc JSONifies the output of many CLI tools and file-types for easier parsing in scripts. See the Parsers section for supported commands and file-types.

This allows further command-line processing of output with tools like jq by piping commands:

ls -l /usr/bin | jc --ls | jq '.[] | select(.size > 50000000)'
{
  "filename": "docker",
  "flags": "-rwxr-xr-x",
  "links": 1,
  "owner": "root",
  "group": "root",
  "size": 68677120,
  "date": "Aug 14 19:41"
}

or using the alternative "magic" syntax:

jc ls -l /usr/bin | jq '.[] | select(.size > 50000000)'
{
  "filename": "docker",
  "flags": "-rwxr-xr-x",
  "links": 1,
  "owner": "root",
  "group": "root",
  "size": 68677120,
  "date": "Aug 14 19:41"
}

The jc parsers can also be used as python modules. In this case the output will be a python dictionary, or list of dictionaries, instead of JSON:

>>> import jc.parsers.ls
>>> 
>>> data='''-rwxr-xr-x  1 root  wheel    23648 May  3 22:26 cat
... -rwxr-xr-x  1 root  wheel    30016 May  3 22:26 chmod
... -rwxr-xr-x  1 root  wheel    29024 May  3 22:26 cp
... -rwxr-xr-x  1 root  wheel   375824 May  3 22:26 csh
... -rwxr-xr-x  1 root  wheel    28608 May  3 22:26 date
... -rwxr-xr-x  1 root  wheel    32000 May  3 22:26 dd
... -rwxr-xr-x  1 root  wheel    23392 May  3 22:26 df
... -rwxr-xr-x  1 root  wheel    18128 May  3 22:26 echo'''
>>>
>>> jc.parsers.ls.parse(data)
[{'filename': 'cat', 'flags': '-rwxr-xr-x', 'links': 1, 'owner': 'root', 'group': 'wheel', 'size': 23648,
'date': 'May 3 22:26'}, {'filename': 'chmod', 'flags': '-rwxr-xr-x', 'links': 1, 'owner': 'root',
'group': 'wheel', 'size': 30016, 'date': 'May 3 22:26'}, {'filename': 'cp', 'flags': '-rwxr-xr-x',
'links': 1, 'owner': 'root', 'group': 'wheel', 'size': 29024, 'date': 'May 3 22:26'}, {'filename': 'csh',
'flags': '-rwxr-xr-x', 'links': 1, 'owner': 'root', 'group': 'wheel', 'size': 375824, 'date': 'May 3
22:26'}, {'filename': 'date', 'flags': '-rwxr-xr-x', 'links': 1, 'owner': 'root', 'group': 'wheel',
'size': 28608, 'date': 'May 3 22:26'}, {'filename': 'dd', 'flags': '-rwxr-xr-x', 'links': 1, 'owner':
'root', 'group': 'wheel', 'size': 32000, 'date': 'May 3 22:26'}, {'filename': 'df', 'flags':
'-rwxr-xr-x', 'links': 1, 'owner': 'root', 'group': 'wheel', 'size': 23392, 'date': 'May 3 22:26'},
{'filename': 'echo', 'flags': '-rwxr-xr-x', 'links': 1, 'owner': 'root', 'group': 'wheel', 'size': 18128,
'date': 'May 3 22:26'}]

Two representations of the data are possible. The default representation uses a strict schema per parser and converts known numbers to int/float JSON values. Certain known values of None are converted to JSON null, known boolean values are converted, and, in some cases, additional semantic context fields are added.

Note: Some parsers have calculated epoch timestamp fields added to the output. Unless a timestamp field name has a _utc suffix, it is considered naive (i.e. based on the local timezone of the system the jc parser was run on).

If a UTC timezone can be detected in the text of the command output, the timestamp will be timezone-aware and have a _utc suffix on the key name (e.g. epoch_utc). No other timezones are supported for aware timestamps.

To access the raw, pre-processed JSON, use the -r cli option or the raw=True function parameter in parse().

Schemas for each parser can be found at the documentation link beside each parser below.

Release notes can be found here.

Why Would Anyone Do This!?

For more information on the motivations for this project, please see my blog post.


Installation

There are several ways to get jc. You can install via pip; other OS package repositories like apt-get, dnf, zypper, pacman, nix-env, guix, brew, or portsnap; via DEB/RPM packages; or by downloading the correct binary for your architecture and running it anywhere on your filesystem.

Pip (macOS, linux, unix, Windows)

pip3 install jc

OS Package Repositories

OS                      Command
Debian/Ubuntu linux     apt-get install jc
Fedora linux            dnf install jc
openSUSE linux          zypper install jc
Arch linux              pacman -S jc
NixOS linux             nix-env -iA nixpkgs.jc or nix-env -iA nixos.jc
Guix System linux       guix install jc
macOS                   brew install jc
FreeBSD                 portsnap fetch update && cd /usr/ports/textproc/py-jc && make install clean
Ansible filter plugin   ansible-galaxy collection install community.general

For more packages and binaries, see https://kellyjonbrazil.github.io/jc-packaging/.

Usage

jc accepts piped input from STDIN and outputs a JSON representation of the previous command's output to STDOUT.

COMMAND | jc PARSER [OPTIONS]

Alternatively, the "magic" syntax can be used by prepending jc to the command to be converted. Options can be passed to jc immediately before the command is given. (Note: command aliases and shell builtins are not supported)

jc [OPTIONS] COMMAND

The JSON output can be compact (default) or pretty formatted with the -p option.

Note: For best results, set the LANG locale environment variable to C, either by setting it directly on the command line ($ LANG=C date | jc --date) or by exporting it to the environment before running commands ($ export LANG=C).
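The same locale advice applies when invoking commands programmatically. A sketch, assuming the jc CLI is installed and on the PATH:

```python
import json
import os
import subprocess

env = dict(os.environ, LANG='C')   # force the C locale, as recommended
try:
    # Equivalent to: LANG=C date | jc --date
    date_out = subprocess.run(['date'], capture_output=True, text=True,
                              env=env).stdout
    result = subprocess.run(['jc', '--date'], input=date_out,
                            capture_output=True, text=True)
    print(json.loads(result.stdout))
except (FileNotFoundError, json.JSONDecodeError):
    pass   # date or jc not available on this system
```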

Parsers

Options

  • -a about jc. Prints information about jc and the parsers (in JSON, of course!)
  • -d debug mode. Prints trace messages if parsing issues are encountered (use -dd for verbose debugging)
  • -h jc help
  • -m monochrome JSON output
  • -p pretty format the JSON output
  • -q quiet mode. Suppresses parser warning messages
  • -r raw output. Provides a more literal JSON output, typically with string values and no additional semantic processing
  • -v version information

Setting Custom Colors via Environment Variable

You can specify custom colors via the JC_COLORS environment variable, which takes four comma-separated string values in the following format:

JC_COLORS=<keyname_color>,<keyword_color>,<number_color>,<string_color>

Where colors are: black, red, green, yellow, blue, magenta, cyan, gray, brightblack, brightred, brightgreen, brightyellow, brightblue, brightmagenta, brightcyan, white, or default

For example, to set to the default colors:

JC_COLORS=blue,brightblack,magenta,green

or

JC_COLORS=default,default,default,default

Custom Parsers

Custom local parser plugins may be placed in a jc/jcparsers folder in your local "App data directory":

  • Linux/unix: $HOME/.local/share/jc/jcparsers
  • macOS: $HOME/Library/Application Support/jc/jcparsers
  • Windows: $LOCALAPPDATA\jc\jc\jcparsers

Local parser plugins are standard python module files. Use the jc/parsers/foo.py parser as a template and simply place a .py file in the jcparsers subfolder.

Local plugin filenames must be valid python module names, therefore must consist entirely of alphanumerics and start with a letter. Local plugins may override default plugins.
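A minimal local plugin might look like the sketch below. The field names and the raw/processed split are illustrative; follow the actual jc/parsers/foo.py template for the full docstring and metadata:

```python
"""jc - JSON CLI output utility example local parser (sketch)"""

class info:
    version = '1.0'
    description = 'example key/value parser'
    author = 'your name'
    author_email = 'you@example.com'
    compatible = ['linux', 'darwin', 'cygwin', 'win32', 'aix', 'freebsd']

__version__ = info.version

def _process(proc_data):
    # schema conversion: turn known numeric fields into ints
    for entry in proc_data:
        entry['value'] = int(entry['value'])
    return proc_data

def parse(data, raw=False, quiet=False):
    raw_output = []
    for line in filter(None, data.splitlines()):
        name, _, value = line.partition(' ')
        raw_output.append({'name': name, 'value': value})
    return raw_output if raw else _process(raw_output)
```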

Note: The application data directory follows the XDG Base Directory Specification.

Compatibility

Some parsers like ls, ps, dig, etc. will work on any platform. Other parsers that are platform-specific will generate a warning message if they are used on an unsupported platform. To see all parser information, including compatibility, run jc -ap.

You may still use a parser on an unsupported platform - for example, you may want to parse a file with linux lsof output on a macOS laptop. In that case you can suppress the warning message with the -q cli option or the quiet=True function parameter in parse():

cat lsof.out | jc --lsof -q

Tested on:

  • CentOS 7.7
  • Ubuntu 18.04
  • Ubuntu 20.04
  • Fedora 32
  • macOS 10.11.6
  • macOS 10.14.6
  • NixOS
  • FreeBSD 12
  • Windows 10

Contributions

Feel free to add/improve code or parsers! You can use the jc/parsers/foo.py parser as a template and submit your parser with a pull request.

Please see the Contributing Guidelines for more information.

Acknowledgments

Examples

Here are some examples of jc output. For more examples, see here or the parser documentation.

arp

arp | jc --arp -p          # or:  jc -p arp
[
  {
    "address": "gateway",
    "hwtype": "ether",
    "hwaddress": "00:50:56:f7:4a:fc",
    "flags_mask": "C",
    "iface": "ens33"
  },
  {
    "address": "192.168.71.1",
    "hwtype": "ether",
    "hwaddress": "00:50:56:c0:00:08",
    "flags_mask": "C",
    "iface": "ens33"
  },
  {
    "address": "192.168.71.254",
    "hwtype": "ether",
    "hwaddress": "00:50:56:fe:7a:b4",
    "flags_mask": "C",
    "iface": "ens33"
  }
]

CSV files

cat homes.csv
"Sell", "List", "Living", "Rooms", "Beds", "Baths", "Age", "Acres", "Taxes"
142, 160, 28, 10, 5, 3,  60, 0.28,  3167
175, 180, 18,  8, 4, 1,  12, 0.43,  4033
129, 132, 13,  6, 3, 1,  41, 0.33,  1471
...
cat homes.csv | jc --csv -p
[
  {
    "Sell": "142",
    "List": "160",
    "Living": "28",
    "Rooms": "10",
    "Beds": "5",
    "Baths": "3",
    "Age": "60",
    "Acres": "0.28",
    "Taxes": "3167"
  },
  {
    "Sell": "175",
    "List": "180",
    "Living": "18",
    "Rooms": "8",
    "Beds": "4",
    "Baths": "1",
    "Age": "12",
    "Acres": "0.43",
    "Taxes": "4033"
  },
  {
    "Sell": "129",
    "List": "132",
    "Living": "13",
    "Rooms": "6",
    "Beds": "3",
    "Baths": "1",
    "Age": "41",
    "Acres": "0.33",
    "Taxes": "1471"
  }
]
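Note that the csv parser leaves every field as a string. If numeric values are needed downstream, a small helper can coerce them after parsing (this helper is illustrative and not part of jc):

```python
def coerce_numbers(rows):
    """Convert numeric-looking string values to int/float, leave the rest."""
    out = []
    for row in rows:
        converted = {}
        for key, val in row.items():
            for cast in (int, float):
                try:
                    converted[key] = cast(val)
                    break
                except ValueError:
                    continue
            else:
                converted[key] = val   # not numeric, keep as string
        out.append(converted)
    return out

rows = [{'Sell': '142', 'Acres': '0.28', 'Taxes': '3167'}]
print(coerce_numbers(rows))
```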

dig

dig cnn.com @205.251.194.64 | jc --dig -p          # or:  jc -p dig cnn.com @205.251.194.64
[
  {
    "id": 52172,
    "opcode": "QUERY",
    "status": "NOERROR",
    "flags": [
      "qr",
      "rd",
      "ra"
    ],
    "query_num": 1,
    "answer_num": 1,
    "authority_num": 0,
    "additional_num": 1,
    "question": {
      "name": "cnn.com.",
      "class": "IN",
      "type": "A"
    },
    "answer": [
      {
        "name": "cnn.com.",
        "class": "IN",
        "type": "A",
        "ttl": 27,
        "data": "151.101.65.67"
      }
    ],
    "query_time": 38,
    "server": "2600",
    "when": "Tue Mar 30 20:07:59 PDT 2021",
    "rcvd": 100,
    "when_epoch": 1617160079,
    "when_epoch_utc": null
  }
]

/etc/hosts file

cat /etc/hosts | jc --hosts -p
[
  {
    "ip": "127.0.0.1",
    "hostname": [
      "localhost"
    ]
  },
  {
    "ip": "::1",
    "hostname": [
      "ip6-localhost",
      "ip6-loopback"
    ]
  },
  {
    "ip": "fe00::0",
    "hostname": [
      "ip6-localnet"
    ]
  }
]

ifconfig

ifconfig | jc --ifconfig -p          # or:  jc -p ifconfig
[
  {
    "name": "ens33",
    "flags": 4163,
    "state": [
      "UP",
      "BROADCAST",
      "RUNNING",
      "MULTICAST"
    ],
    "mtu": 1500,
    "ipv4_addr": "192.168.71.137",
    "ipv4_mask": "255.255.255.0",
    "ipv4_bcast": "192.168.71.255",
    "ipv6_addr": "fe80::c1cb:715d:bc3e:b8a0",
    "ipv6_mask": 64,
    "ipv6_scope": "0x20",
    "mac_addr": "00:0c:29:3b:58:0e",
    "type": "Ethernet",
    "rx_packets": 8061,
    "rx_bytes": 1514413,
    "rx_errors": 0,
    "rx_dropped": 0,
    "rx_overruns": 0,
    "rx_frame": 0,
    "tx_packets": 4502,
    "tx_bytes": 866622,
    "tx_errors": 0,
    "tx_dropped": 0,
    "tx_overruns": 0,
    "tx_carrier": 0,
    "tx_collisions": 0,
    "metric": null
  }
]

INI files

cat example.ini
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes

[bitbucket.org]
User = hg

[topsecret.server.com]
Port = 50022
ForwardX11 = no
cat example.ini | jc --ini -p
{
  "bitbucket.org": {
    "serveraliveinterval": "45",
    "compression": "yes",
    "compressionlevel": "9",
    "forwardx11": "yes",
    "user": "hg"
  },
  "topsecret.server.com": {
    "serveraliveinterval": "45",
    "compression": "yes",
    "compressionlevel": "9",
    "forwardx11": "no",
    "port": "50022"
  }
}

ls

ls -l /usr/bin | jc --ls -p          # or:  jc -p ls -l /usr/bin
[
  {
    "filename": "apropos",
    "link_to": "whatis",
    "flags": "lrwxrwxrwx.",
    "links": 1,
    "owner": "root",
    "group": "root",
    "size": 6,
    "date": "Aug 15 10:53"
  },
  {
    "filename": "ar",
    "flags": "-rwxr-xr-x.",
    "links": 1,
    "owner": "root",
    "group": "root",
    "size": 62744,
    "date": "Aug 8 16:14"
  },
  {
    "filename": "arch",
    "flags": "-rwxr-xr-x.",
    "links": 1,
    "owner": "root",
    "group": "root",
    "size": 33080,
    "date": "Aug 19 23:25"
  }
]

netstat

netstat -apee | jc --netstat -p          # or:  jc -p netstat -apee
[
  {
    "proto": "tcp",
    "recv_q": 0,
    "send_q": 0,
    "local_address": "localhost",
    "foreign_address": "0.0.0.0",
    "state": "LISTEN",
    "user": "systemd-resolve",
    "inode": 26958,
    "program_name": "systemd-resolve",
    "kind": "network",
    "pid": 887,
    "local_port": "domain",
    "foreign_port": "*",
    "transport_protocol": "tcp",
    "network_protocol": "ipv4"
  },
  {
    "proto": "tcp6",
    "recv_q": 0,
    "send_q": 0,
    "local_address": "[::]",
    "foreign_address": "[::]",
    "state": "LISTEN",
    "user": "root",
    "inode": 30510,
    "program_name": "sshd",
    "kind": "network",
    "pid": 1186,
    "local_port": "ssh",
    "foreign_port": "*",
    "transport_protocol": "tcp",
    "network_protocol": "ipv6"
  },
  {
    "proto": "udp",
    "recv_q": 0,
    "send_q": 0,
    "local_address": "localhost",
    "foreign_address": "0.0.0.0",
    "state": null,
    "user": "systemd-resolve",
    "inode": 26957,
    "program_name": "systemd-resolve",
    "kind": "network",
    "pid": 887,
    "local_port": "domain",
    "foreign_port": "*",
    "transport_protocol": "udp",
    "network_protocol": "ipv4"
  },
  {
    "proto": "raw6",
    "recv_q": 0,
    "send_q": 0,
    "local_address": "[::]",
    "foreign_address": "[::]",
    "state": "7",
    "user": "systemd-network",
    "inode": 27001,
    "program_name": "systemd-network",
    "kind": "network",
    "pid": 867,
    "local_port": "ipv6-icmp",
    "foreign_port": "*",
    "transport_protocol": null,
    "network_protocol": "ipv6"
  },
  {
    "proto": "unix",
    "refcnt": 2,
    "flags": null,
    "type": "DGRAM",
    "state": null,
    "inode": 33322,
    "program_name": "systemd",
    "path": "/run/user/1000/systemd/notify",
    "kind": "socket",
    "pid": 1607
  }
]

/etc/passwd file

cat /etc/passwd | jc --passwd -p
[
  {
    "username": "root",
    "password": "*",
    "uid": 0,
    "gid": 0,
    "comment": "System Administrator",
    "home": "/var/root",
    "shell": "/bin/sh"
  },
  {
    "username": "daemon",
    "password": "*",
    "uid": 1,
    "gid": 1,
    "comment": "System Services",
    "home": "/var/root",
    "shell": "/usr/bin/false"
  }
]

ping

ping 8.8.8.8 -c 3 | jc --ping -p          # or:  jc -p ping 8.8.8.8 -c 3
{
  "destination_ip": "8.8.8.8",
  "data_bytes": 56,
  "pattern": null,
  "destination": "8.8.8.8",
  "packets_transmitted": 3,
  "packets_received": 3,
  "packet_loss_percent": 0.0,
  "duplicates": 0,
  "time_ms": 2005.0,
  "round_trip_ms_min": 23.835,
  "round_trip_ms_avg": 30.46,
  "round_trip_ms_max": 34.838,
  "round_trip_ms_stddev": 4.766,
  "responses": [
    {
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "8.8.8.8",
      "icmp_seq": 1,
      "ttl": 118,
      "time_ms": 23.8,
      "duplicate": false
    },
    {
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "8.8.8.8",
      "icmp_seq": 2,
      "ttl": 118,
      "time_ms": 34.8,
      "duplicate": false
    },
    {
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "8.8.8.8",
      "icmp_seq": 3,
      "ttl": 118,
      "time_ms": 32.7,
      "duplicate": false
    }
  ]
}

ps

ps axu | jc --ps -p          # or:  jc -p ps axu
[
  {
    "user": "root",
    "pid": 1,
    "cpu_percent": 0.0,
    "mem_percent": 0.1,
    "vsz": 128072,
    "rss": 6784,
    "tty": null,
    "stat": "Ss",
    "start": "Nov09",
    "time": "0:08",
    "command": "/usr/lib/systemd/systemd --switched-root --system --deserialize 22"
  },
  {
    "user": "root",
    "pid": 2,
    "cpu_percent": 0.0,
    "mem_percent": 0.0,
    "vsz": 0,
    "rss": 0,
    "tty": null,
    "stat": "S",
    "start": "Nov09",
    "time": "0:00",
    "command": "[kthreadd]"
  },
  {
    "user": "root",
    "pid": 4,
    "cpu_percent": 0.0,
    "mem_percent": 0.0,
    "vsz": 0,
    "rss": 0,
    "tty": null,
    "stat": "S<",
    "start": "Nov09",
    "time": "0:00",
    "command": "[kworker/0:0H]"
  }
]

traceroute

traceroute -m 2 8.8.8.8 | jc --traceroute -p          # or:  jc -p traceroute -m 2 8.8.8.8
{
  "destination_ip": "8.8.8.8",
  "destination_name": "8.8.8.8",
  "hops": [
    {
      "hop": 1,
      "probes": [
        {
          "annotation": null,
          "asn": null,
          "ip": "192.168.1.254",
          "name": "dsldevice.local.net",
          "rtt": 6.616
        },
        {
          "annotation": null,
          "asn": null,
          "ip": "192.168.1.254",
          "name": "dsldevice.local.net",
          "rtt": 6.413
        },
        {
          "annotation": null,
          "asn": null,
          "ip": "192.168.1.254",
          "name": "dsldevice.local.net",
          "rtt": 6.308
        }
      ]
    },
    {
      "hop": 2,
      "probes": [
        {
          "annotation": null,
          "asn": null,
          "ip": "76.220.24.1",
          "name": "76-220-24-1.lightspeed.sntcca.sbcglobal.net",
          "rtt": 29.367
        },
        {
          "annotation": null,
          "asn": null,
          "ip": "76.220.24.1",
          "name": "76-220-24-1.lightspeed.sntcca.sbcglobal.net",
          "rtt": 40.197
        },
        {
          "annotation": null,
          "asn": null,
          "ip": "76.220.24.1",
          "name": "76-220-24-1.lightspeed.sntcca.sbcglobal.net",
          "rtt": 29.162
        }
      ]
    }
  ]
}

uptime

uptime | jc --uptime -p          # or:  jc -p uptime
{
  "time": "11:35",
  "uptime": "3 days, 4:03",
  "users": 5,
  "load_1m": 1.88,
  "load_5m": 2.0,
  "load_15m": 1.94,
  "time_hour": 11,
  "time_minute": 35,
  "time_second": null,
  "uptime_days": 3,
  "uptime_hours": 4,
  "uptime_minutes": 3,
  "uptime_total_seconds": 273780
}

XML files

cat cd_catalog.xml
<?xml version="1.0" encoding="UTF-8"?>
<CATALOG>
  <CD>
    <TITLE>Empire Burlesque</TITLE>
    <ARTIST>Bob Dylan</ARTIST>
    <COUNTRY>USA</COUNTRY>
    <COMPANY>Columbia</COMPANY>
    <PRICE>10.90</PRICE>
    <YEAR>1985</YEAR>
  </CD>
  <CD>
    <TITLE>Hide your heart</TITLE>
    <ARTIST>Bonnie Tyler</ARTIST>
    <COUNTRY>UK</COUNTRY>
    <COMPANY>CBS Records</COMPANY>
    <PRICE>9.90</PRICE>
    <YEAR>1988</YEAR>
  </CD>
  ...
cat cd_catalog.xml | jc --xml -p
{
  "CATALOG": {
    "CD": [
      {
        "TITLE": "Empire Burlesque",
        "ARTIST": "Bob Dylan",
        "COUNTRY": "USA",
        "COMPANY": "Columbia",
        "PRICE": "10.90",
        "YEAR": "1985"
      },
      {
        "TITLE": "Hide your heart",
        "ARTIST": "Bonnie Tyler",
        "COUNTRY": "UK",
        "COMPANY": "CBS Records",
        "PRICE": "9.90",
        "YEAR": "1988"
      }
    ]
  }
}

YAML files

cat istio.yaml 
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "default"
spec:
  peers:
  - mtls: {}
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "default"
spec:
  host: "*.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
cat istio.yaml | jc --yaml -p
[
  {
    "apiVersion": "authentication.istio.io/v1alpha1",
    "kind": "Policy",
    "metadata": {
      "name": "default",
      "namespace": "default"
    },
    "spec": {
      "peers": [
        {
          "mtls": {}
        }
      ]
    }
  },
  {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "DestinationRule",
    "metadata": {
      "name": "default",
      "namespace": "default"
    },
    "spec": {
      "host": "*.default.svc.cluster.local",
      "trafficPolicy": {
        "tls": {
          "mode": "ISTIO_MUTUAL"
        }
      }
    }
  }
]

© 2019-2021 Kelly Brazil

Comments
  • New parser: update-alternatives --query


    The Debian alternatives system has an update-alternatives command, which has a --query option that will:

    Display information about the link group like --display does, but in a machine parseable way (since version 1.15.0, see section QUERY FORMAT below).

    For example for the editor (omitting the Slaves output):

    update-alternatives --query editor
    Name: editor
    Link: /usr/bin/editor
    Status: manual
    Best: /bin/nano
    Value: /usr/bin/vim.basic
    
    Alternative: /bin/nano
    Priority: 40
    
    Alternative: /usr/bin/nvim
    Priority: 30
    
    Alternative: /usr/bin/vim.basic
    Priority: 30
    
    Alternative: /usr/bin/vim.tiny
    Priority: 15
    

    Perhaps this might be a useful new parser for jc. The Ansible alternatives module can only set alternatives, not read them, so I've written some Bash to use as an Ansible facts.d script for converting the output into JSON:

    #!/usr/bin/env bash
    
    set -euo pipefail
    
    jo $(
      echo "${1}"=$( 
        if query=$(update-alternatives --query "${1}")
        then
          declare -a alternatives=()
          readarray -t alternatives < <(grep -e '^Alternative' <<< "${query}" | sed 's/Alternative: //')
          declare -a priorities=()
          readarray -t priorities < <(grep -e '^Priority' <<< "${query}" | sed 's/Priority: //')
          jo name="${1}" link=$(
            grep ^Link <<< "${query}" | sed 's/^Link: //'
          ) value=$(
            grep ^Value <<< "${query}" | sed 's/^Value: //'
          ) best=$(
            grep ^Best <<< "${query}" | sed 's/Best: //'
          ) alternatives=$(
            jo -a $(
              i=0
              while [[ "${i}" -lt "${#alternatives[@]}" ]]
              do
                jo path="${alternatives[${i}]}" priority="${priorities[${i}]}" 
                ((i++))
              done
            )
          )
        else
          jo state=absent
        fi
      )
    )
    

    Running this script with an argument, for example editor results in:

    {
      "editor": {
        "name": "editor",
        "link": "/usr/bin/editor",
        "value": "/usr/bin/vim.basic",
        "best": "/bin/nano",
        "alternatives": [
          {
            "path": "/bin/nano",
            "priority": 40
          },
          {
            "path": "/usr/bin/nvim",
            "priority": 30
          },
          {
            "path": "/usr/bin/vim.basic",
            "priority": 30
          },
          {
            "path": "/usr/bin/vim.tiny",
            "priority": 15
          }
        ]
      }
    }
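
    The record-oriented --query format could also be parsed directly in Python along these lines (a sketch only, not a finished jc parser):

```python
def parse_query(text):
    """Parse `update-alternatives --query` style "Key: Value" records."""
    result = {'alternatives': []}
    for line in text.splitlines():
        key, sep, value = line.partition(': ')
        if not sep:
            continue                      # skip blank/unparseable lines
        key = key.strip().lower()
        if key == 'alternative':
            result['alternatives'].append({'path': value})
        elif key == 'priority' and result['alternatives']:
            result['alternatives'][-1]['priority'] = int(value)
        else:
            result[key] = value
    return result

sample = '''Name: editor
Link: /usr/bin/editor
Best: /bin/nano
Value: /usr/bin/vim.basic

Alternative: /bin/nano
Priority: 40'''
print(parse_query(sample))
```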
    
    new-parser 
    opened by chriscroome 29
  • Add support for UFW


    Hello,

    First of all, I would like to thank the creators / contributors of this lib, it is very useful for my projects !

    I want to know if it is possible to add support for the ufw command, like ufw status :)

    Thanks!

    new-parser 
    opened by yannmichaux 24
  • Add short example of using jc with NGS?



    I would like to add a short example of using jc with NGS (to the main readme). That would be related and just after the Python example. Something like this (plus some explanation):

    echo(``jc dig example.com``[0].answer)
    

    Would this be OK?

    opened by ilyash-b 21
  • Feature request: new parser sshd -T


    This is probably one for the far back-burner or bin @kellyjonbrazil :smile: !

    The configuration of an OpenSSH server can be printed using sshd -T, for example:

    sshd -T | sort
    acceptenv LANG
    acceptenv LC_*
    addressfamily any
    allowagentforwarding yes
    allowstreamlocalforwarding yes
    allowtcpforwarding yes
    authenticationmethods any
    authorizedkeyscommand none
    authorizedkeyscommanduser none
    authorizedkeysfile .ssh/authorized_keys .ssh/authorized_keys2
    authorizedprincipalscommand none
    authorizedprincipalscommanduser none
    authorizedprincipalsfile none
    banner none
    casignaturealgorithms ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256
    chrootdirectory none
    ciphers [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]
    clientalivecountmax 3
    clientaliveinterval 0
    compression yes
    disableforwarding no
    exposeauthinfo no
    fingerprinthash SHA256
    forcecommand none
    gatewayports no
    gssapiauthentication no
    gssapicleanupcredentials yes
    gssapikexalgorithms gss-group14-sha256-,gss-group16-sha512-,gss-nistp256-sha256-,gss-curve25519-sha256-,gss-group14-sha1-,gss-gex-sha1-
    gssapikeyexchange no
    gssapistorecredentialsonrekey no
    gssapistrictacceptorcheck yes
    hostbasedacceptedalgorithms [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256
    hostbasedauthentication no
    hostbasedusesnamefrompacketonly no
    hostkeyagent none
    hostkeyalgorithms [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256
    hostkey /etc/ssh/ssh_host_ecdsa_key
    hostkey /etc/ssh/ssh_host_ed25519_key
    hostkey /etc/ssh/ssh_host_rsa_key
    ignorerhosts yes
    ignoreuserknownhosts no
    ipqos lowdelay throughput
    kbdinteractiveauthentication no
    kerberosauthentication no
    kerberosorlocalpasswd yes
    kerberosticketcleanup yes
    kexalgorithms [email protected],curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
    listenaddress 0.0.0.0:22
    listenaddress [::]:22
    logingracetime 120
    loglevel INFO
    macs [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
    maxauthtries 6
    maxsessions 10
    maxstartups 10:30:100
    modulifile /etc/ssh/moduli
    passwordauthentication yes
    permitemptypasswords no
    permitlisten any
    permitopen any
    permitrootlogin without-password
    permittty yes
    permittunnel no
    permituserenvironment no
    permituserrc yes
    persourcemaxstartups none
    persourcenetblocksize 32:128
    pidfile /run/sshd.pid
    port 22
    printlastlog yes
    printmotd no
    pubkeyacceptedalgorithms [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],rsa-sha2-512,rsa-sha2-256
    pubkeyauthentication yes
    pubkeyauthoptions none
    rekeylimit 0 0
    revokedkeys none
    securitykeyprovider internal
    streamlocalbindmask 0177
    streamlocalbindunlink no
    strictmodes yes
    subsystem sftp /usr/lib/openssh/sftp-server
    syslogfacility AUTH
    tcpkeepalive yes
    trustedusercakeys none
    usedns no
    usepam yes
    versionaddendum none
    x11displayoffset 10
    x11forwarding yes
    x11uselocalhost yes
    xauthlocation /usr/bin/xauth
    

    For a raw JSON version, splitting on the first space of each line would be mostly fine; perhaps the Key/Value file parser could gain support for splitting on the first space per line? However, that wouldn't work for HostKey, as only the last value would be kept.
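
    First-space splitting with repeated keys collected into lists could look something like this sketch (illustrative, not an actual jc parser):

```python
def parse_sshd_t(text):
    """Split `sshd -T` lines on the first space; repeated keys become lists."""
    result = {}
    for line in text.splitlines():
        key, sep, value = line.partition(' ')
        if not sep:
            continue
        if key in result:
            if not isinstance(result[key], list):
                result[key] = [result[key]]   # promote to list on repeat
            result[key].append(value)
        else:
            result[key] = value
    return result

sample = '''port 22
hostkey /etc/ssh/ssh_host_ecdsa_key
hostkey /etc/ssh/ssh_host_ed25519_key
hostkey /etc/ssh/ssh_host_rsa_key'''
print(parse_sshd_t(sample))
```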

    A dedicated sshd_config parser could do things like split values at commas and / or spaces into lists, however it isn't quite that simple...

    For example for Ciphers, HostbasedAcceptedAlgorithms and KexAlgorithms:

    if the specified list begins with a ‘+’ character, then the specified signature algorithms will be appended to the default set instead of replacing them. If the specified list begins with a ‘-’ character, then the specified signature algorithms (including wildcards) will be removed from the default set instead of replacing them. If the specified list begins with a ‘^’ character, then the specified signature algorithms will be placed at the head of the default set

    Perhaps this could look something like this?

    KexAlgorithms:
     strategy: append # for +
     algorithms:
       - [email protected]
       - curve25519-sha256,[email protected]
       - ecdh-sha2-nistp256
    

    Another complication are things like AuthenticationMethods:

    This option must be followed by one or more lists of comma-separated authentication method names, or by the single string any to indicate the default behaviour of accepting any single authentication method.

    None of these initial ideas is great:

    AuthenticationMethods: any
    
    AuthenticationMethods:
      - publickey
      - password
    
    AuthenticationMethods:
      1:
        - publickey
        - password
      2:
        - publickey
        - keyboard-interactive
    

    Also, since YAML / JSON doesn't support ordered maps and the order is critical, that could be an issue, and I'm not sure that a variable having multiple potential types is a good idea.

    In addition, the sshd_config file uses camel case but sshd -T has lower-case variable names. I don't know if a mapping for this would be a good thing and / or a necessary option; then again, this isn't something to worry about, since the man page for sshd_config states:

    keywords are case-insensitive and arguments are case-sensitive

    Another potential gotcha is that some variables simply take yes / no as values, like AllowAgentForwarding, so these would perhaps make sense as booleans rather than strings. However, AllowTcpForwarding allows all, local, no, remote or yes, so it could be a boolean or a string depending on the value. My preference would probably be for it always to be a string, since Ansible role validation only supports variables being of one type.

    To make matters worse there is the Match block, for example in /etc/ssh/sshd_config you could have:

    Match group sftp
        AllowGroups sftp
        DenyGroups sudo root
        ChrootDirectory %h
        X11Forwarding no
        AllowTcpForwarding no
        AuthenticationMethods publickey password
        PasswordAuthentication yes
        PubkeyAuthentication yes
        PermitUserRC no
        PermitRootLogin No
        PermitTTY yes
        ForceCommand internal-sftp
    

    However, although you can get the resulting configuration for a user using sshd -T -C user=foo, the results are still in the same flat format and the Match directive is never printed.

    Perhaps a jc sshd_config parser isn't a practical suggestion... in any case, the full list of configuration options is here.

    new-parser ready-to-ship 
    opened by chriscroome 20
  • Add parser for mdadm


    Linux software RAID is usually implemented with the md kernel module and managed with the mdadm userspace tool. Unfortunately, mdadm doesn't output its data in a format convenient for further automated processing, like JSON.

    For example when using mdadm --query --detail /dev/md0 the output looks like this:

    /dev/md0:
               Version : 1.1
         Creation Time : Tue Apr 13 23:22:16 2010
            Raid Level : raid1
            Array Size : 5860520828 (5.46 TiB 6.00 TB)
         Used Dev Size : 5860520828 (5.46 TiB 6.00 TB)
          Raid Devices : 2
         Total Devices : 2
           Persistence : Superblock is persistent
    
         Intent Bitmap : Internal
    
           Update Time : Tue Jul 26 20:16:31 2022
                 State : clean 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
    Consistency Policy : bitmap
    
                  Name : virttest:0
                  UUID : 85c5b164:d58a5ada:14f5fe07:d642e843
                Events : 2193679
    
        Number   Major   Minor   RaidDevice State
           3       8       17        0      active sync   /dev/sdb1
           2       8       33        1      active sync   /dev/sdc1
    

    mdadm also has the "examine" command which gives information about a md raid member device:

    # mdadm --examine -E /dev/sdb1
    /dev/sdb1:
              Magic : a92b4efc
            Version : 1.1
        Feature Map : 0x1
         Array UUID : 85c5b164:d58a5ada:14f5fe07:d642e843
               Name : virttest:0
      Creation Time : Tue Apr 13 23:22:16 2010
         Raid Level : raid1
       Raid Devices : 2
    
     Avail Dev Size : 11721041656 sectors (5.46 TiB 6.00 TB)
         Array Size : 5860520828 KiB (5.46 TiB 6.00 TB)
        Data Offset : 264 sectors
       Super Offset : 0 sectors
       Unused Space : before=80 sectors, after=0 sectors
              State : clean
        Device UUID : 813162e5:2e865efe:02ba5570:7003165c
    
    Internal Bitmap : 8 sectors from superblock
        Update Time : Tue Jul 26 20:16:31 2022
      Bad Block Log : 512 entries available at offset 72 sectors
           Checksum : f141a577 - correct
             Events : 2193679
    
    
       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
    

    Having a parser for this in jc would be really helpful when dealing with md raid in scripts.
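    As a rough illustration (not a proposed jc schema), most of the header can be captured by splitting on the ' : ' separator; the device table at the bottom would need its own column-based handling:

```python
def parse_mdadm_detail(output):
    """Sketch: parse the 'Key : Value' section of mdadm --query --detail."""
    lines = output.strip().splitlines()
    result = {'device': lines[0].rstrip(':')}
    for line in lines[1:]:
        if ' : ' not in line:
            continue  # blank lines and the device table are skipped here
        key, _, value = line.partition(' : ')
        result[key.strip().lower().replace(' ', '_')] = value.strip()
    return result
```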

    new-parser 
    opened by g-v-egidy 20
  • ifconfig parser fails to list multiple inet6 addresses

    ifconfig parser fails to list multiple inet6 addresses

    Hi there,

    Am I missing something obvious?

    $ ifconfig wlp1s0 | grep -c inet6
    4
    $ jc -p ifconfig wlp1s0 | grep -c ipv6_addr
    1

    Also, I didn't find an "ip" command parser yet.

    jc version 1.17.3 on Ubuntu 22.04

    enhancement ready-to-ship 
    opened by matrixbx 17
  • Tests depend on specific `TZ` setting

    Tests depend on specific `TZ` setting

    Try

    TZ=UTC python setup.py test
    

    This should cause timestamp-parsing-related test failures.

    I suppose that the timestamp parser relies on the current TZ environment setting being the same as the one used for generating the timestamp, which might not be the case for the pre-recorded test input. I had to explicitly run

    TZ=America/Los_Angeles python setup.py test
    

    to make the tests succeed.

    IMHO, the test script should take care of setting the expected timezone.
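    One way to do that (a sketch assuming a POSIX platform, not jc's actual test code) is to pin TZ before the timestamp assertions run:

```python
import os
import time

def pin_timezone(tz='America/Los_Angeles'):
    """Force the timezone the test fixtures were recorded in (POSIX only)."""
    os.environ['TZ'] = tz
    time.tzset()
```

    This could be called from a test class's setUp() so every test sees the fixtures' timezone regardless of the host setting (the fixture timezone here is inferred from the report above).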

    opened by ccorn 16
  • New parser: URL

    New parser: URL

    A parser for URL strings would be nice. Some possible examples:

    $ echo "http://example.com/test/path?q1=foo&q2=bar#frag" | jc -p --url
    {
      "scheme": "http",
      "host": "example.com",
      "path": ["test", "path"],
      "query": { "q1": "foo", "q2": "bar" },
      "fragment": "frag"
    }
    
    $ echo "http://john:[email protected]" | jc -p --url
    { "scheme": "http", "host": "example.com", "user": "john", "password": "pass" }
    
    $ echo "http://example.com:8080" | jc -p --url
    { "scheme": "http", "host": "example.com", "port": 8080 }
    
    new-parser 
    opened by archiif 14
  • bug: sfdisk parser

    bug: sfdisk parser

    Hi! I tried the parser on two operating systems: Debian 10 and CentOS 8.

    If I specify sfdisk -l /dev/sda1 | jc --sfdisk , the output is : [{"disk":"/dev/sda1","cylinders":1,"units":"sectors of 1 * 512 = 512 bytes"}]

    But, if I want to get information on the entire sda drive, I get a parser error:

     sfdisk -l /dev/sda | jc --sfdisk -dd :
    
    Traceback (most recent call last):
      File "/usr/local/bin/jc", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.8/site-packages/jc/cli.py", line 618, in main
        result = parser.parse(data, raw=raw, quiet=quiet)
      File "/usr/local/lib/python3.8/site-packages/jc/parsers/sfdisk.py", line 296, in parse
        item['heads'] = line.split()[4]
    IndexError: list index out of range
    
    .... sfdisk.py in parse(data='Disk /dev/sda: 32 GiB, 34359738368 bytes, 671088...     2099200 67108863 65009664  31G 8e Linux LVM\n', raw=False, quiet=False)
    
    

    If I comment out these 2 lines:

    296 # item['heads'] = line.split()[4]
    297 # item['sectors_per_track'] = line.split()[6]

    I have this result : [{"disk":"/dev/sda","cylinders":32,"units":"sectors of 1 * 512 = 512 bytes"},{"disk":"identifier","cylinders":57705}]
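    Rather than deleting them, the two lines could be guarded so that sfdisk output without geometry fields doesn't crash the parser. A sketch (not jc's actual fix), assuming the old-style geometry line looks like `Disk /dev/sda: 4177 cylinders, 255 heads, 63 sectors/track`:

```python
def parse_geometry_line(line):
    """Sketch: only extract geometry when the old-style fields are present."""
    item = {}
    fields = line.split()
    if 'heads,' in fields:
        # the token before 'heads,' is the head count
        item['heads'] = fields[fields.index('heads,') - 1]
    if 'sectors/track' in fields:
        item['sectors_per_track'] = fields[fields.index('sectors/track') - 1]
    return item
```

    The new-style `Disk /dev/sda: 32 GiB, ...` header simply yields an empty result instead of an IndexError.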

    For reference, my normal sfdisk output is:

    # sfdisk -l /dev/sda
    Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x5fac7705
    
    Device     Boot   Start      End  Sectors Size Id Type
    /dev/sda1  *       2048  2099199  2097152   1G 83 Linux
    /dev/sda2       2099200 67108863 65009664  31G 8e Linux LVM 
    
    

    That is, not all drive information is displayed. I would also like to ask for support for the -F flag; I often use it to get information about disk resizing.

    Thanx

    bug 
    opened by caisyew 14
  • Flatten XML Parser?

    Flatten XML Parser?

    XML that makes heavy use of attributes, rather than elements, results in JSON that is hard to work with using Ansible / JMESPath. For example, nmap has XML output (but not JSON):

    nmap -oX - -p 443 galaxy.ansible.com | xmllint --pretty 1 -
    
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE nmaprun>
    <?xml-stylesheet href="file:///usr/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
    <!-- Nmap 7.92 scan initiated Wed Oct 26 11:51:38 2022 as: nmap -oX - -p 443 galaxy.ansible.com -->
    <nmaprun scanner="nmap" args="nmap -oX - -p 443 galaxy.ansible.com" start="1666781498" startstr="Wed Oct 26 11:51:38 2022" version="7.92" xmloutputversion="1.05">
      <scaninfo type="connect" protocol="tcp" numservices="1" services="443"/>
      <verbose level="0"/>
      <debugging level="0"/>
      <hosthint>
        <status state="up" reason="unknown-response" reason_ttl="0"/>
        <address addr="172.67.68.251" addrtype="ipv4"/>
        <hostnames>
          <hostname name="galaxy.ansible.com" type="user"/>
        </hostnames>
      </hosthint>
      <host starttime="1666781498" endtime="1666781498">
        <status state="up" reason="syn-ack" reason_ttl="0"/>
        <address addr="172.67.68.251" addrtype="ipv4"/>
        <hostnames>
          <hostname name="galaxy.ansible.com" type="user"/>
          <hostname name="galaxy.ansible.com" type="PTR"/>
        </hostnames>
        <ports>
          <port protocol="tcp" portid="443">
            <state state="open" reason="syn-ack" reason_ttl="0"/>
            <service name="https" method="table" conf="3"/>
          </port>
        </ports>
        <times srtt="12260" rttvar="9678" to="100000"/>
      </host>
      <runstats>
        <finished time="1666781498" timestr="Wed Oct 26 11:51:38 2022" summary="Nmap done at Wed Oct 26 11:51:38 2022; 1 IP address (1 host up) scanned in 0.10 seconds" elapsed="0.10" exit="success"/>
        <hosts up="1" down="0" total="1"/>
      </runstats>
    </nmaprun>
    

    Convert this into JSON / YAML and the results are not great...

    nmap -oX - -p 443 galaxy.ansible.com | xmllint --pretty 1 - | jc --xml -py
    
    ---
    nmaprun:
      '@scanner': nmap
      '@args': nmap -oX - -p 443 galaxy.ansible.com
      '@start': '1666781628'
      '@startstr': Wed Oct 26 11:53:48 2022
      '@version': '7.92'
      '@xmloutputversion': '1.05'
      scaninfo:
        '@type': connect
        '@protocol': tcp
        '@numservices': '1'
        '@services': '443'
      verbose:
        '@level': '0'
      debugging:
        '@level': '0'
      hosthint:
        status:
          '@state': up
          '@reason': unknown-response
          '@reason_ttl': '0'
        address:
          '@addr': 172.67.68.251
          '@addrtype': ipv4
        hostnames:
          hostname:
            '@name': galaxy.ansible.com
            '@type': user
      host:
        '@starttime': '1666781628'
        '@endtime': '1666781628'
        status:
          '@state': up
          '@reason': syn-ack
          '@reason_ttl': '0'
        address:
          '@addr': 172.67.68.251
          '@addrtype': ipv4
        hostnames:
          hostname:
          - '@name': galaxy.ansible.com
            '@type': user
          - '@name': galaxy.ansible.com
            '@type': PTR
        ports:
          port:
            '@protocol': tcp
            '@portid': '443'
            state:
              '@state': open
              '@reason': syn-ack
              '@reason_ttl': '0'
            service:
              '@name': https
              '@method': table
              '@conf': '3'
        times:
          '@srtt': '13479'
          '@rttvar': '11398'
          '@to': '100000'
      runstats:
        finished:
          '@time': '1666781628'
          '@timestr': Wed Oct 26 11:53:48 2022
          '@summary': Nmap done at Wed Oct 26 11:53:48 2022; 1 IP address (1 host up)
            scanned in 0.10 seconds
          '@elapsed': '0.10'
          '@exit': success
        hosts:
          '@up': '1'
          '@down': '0'
          '@total': '1'
    

    However if the XML is flattened using XSLT first:

    nmap -oX - -p 443 galaxy.ansible.com | xmllint --pretty 1 - > galaxy.ansible.com.xml
    xsltproc attributes2elements.xslt galaxy.ansible.com.xml 
    
    <?xml version="1.0"?>
    <?xml-stylesheet href="file:///usr/bin/../share/nmap/nmap.xsl" type="text/xsl"?><!-- Nmap 7.92 scan initiated Wed Oct 26 12:01:56 2022 as: nmap -oX - -p 443 galaxy.ansible.com -->
    <nmaprun><scanner>nmap</scanner><args>nmap -oX - -p 443 galaxy.ansible.com</args><start>1666782116</start><startstr>Wed Oct 26 12:01:56 2022</startstr><version>7.92</version><xmloutputversion>1.05</xmloutputversion>
      <scaninfo><type>connect</type><protocol>tcp</protocol><numservices>1</numservices><services>443</services></scaninfo>
      <verbose><level>0</level></verbose>
      <debugging><level>0</level></debugging>
      <hosthint>
        <status><state>up</state><reason>unknown-response</reason><reason_ttl>0</reason_ttl></status>
        <address><addr>172.67.68.251</addr><addrtype>ipv4</addrtype></address>
        <hostnames>
          <hostname><name>galaxy.ansible.com</name><type>user</type></hostname>
        </hostnames>
      </hosthint>
      <host><starttime>1666782116</starttime><endtime>1666782116</endtime>
        <status><state>up</state><reason>syn-ack</reason><reason_ttl>0</reason_ttl></status>
        <address><addr>172.67.68.251</addr><addrtype>ipv4</addrtype></address>
        <hostnames>
          <hostname><name>galaxy.ansible.com</name><type>user</type></hostname>
          <hostname><name>galaxy.ansible.com</name><type>PTR</type></hostname>
        </hostnames>
        <ports>
          <port><protocol>tcp</protocol><portid>443</portid>
            <state><state>open</state><reason>syn-ack</reason><reason_ttl>0</reason_ttl></state>
            <service><name>https</name><method>table</method><conf>3</conf></service>
          </port>
        </ports>
        <times><srtt>10773</srtt><rttvar>8291</rttvar><to>100000</to></times>
      </host>
      <runstats>
        <finished><time>1666782116</time><timestr>Wed Oct 26 12:01:56 2022</timestr><summary>Nmap done at Wed Oct 26 12:01:56 2022; 1 IP address (1 host up) scanned in 0.10 seconds</summary><elapsed>0.10</elapsed><exit>success</exit></finished>
        <hosts><up>1</up><down>0</down><total>1</total></hosts>
      </runstats>
    </nmaprun>
    

    You then have something that is nicer to work with:

    xsltproc attributes2elements.xslt galaxy.ansible.com.xml | jc --xml -py
    
    ---
    nmaprun:
      scanner: nmap
      args: nmap -oX - -p 443 galaxy.ansible.com
      start: '1666782116'
      startstr: Wed Oct 26 12:01:56 2022
      version: '7.92'
      xmloutputversion: '1.05'
      scaninfo:
        type: connect
        protocol: tcp
        numservices: '1'
        services: '443'
      verbose:
        level: '0'
      debugging:
        level: '0'
      hosthint:
        status:
          state: up
          reason: unknown-response
          reason_ttl: '0'
        address:
          addr: 172.67.68.251
          addrtype: ipv4
        hostnames:
          hostname:
            name: galaxy.ansible.com
            type: user
      host:
        starttime: '1666782116'
        endtime: '1666782116'
        status:
          state: up
          reason: syn-ack
          reason_ttl: '0'
        address:
          addr: 172.67.68.251
          addrtype: ipv4
        hostnames:
          hostname:
          - name: galaxy.ansible.com
            type: user
          - name: galaxy.ansible.com
            type: PTR
        ports:
          port:
            protocol: tcp
            portid: '443'
            state:
              state: open
              reason: syn-ack
              reason_ttl: '0'
            service:
              name: https
              method: table
              conf: '3'
        times:
          srtt: '10773'
          rttvar: '8291'
          to: '100000'
      runstats:
        finished:
          time: '1666782116'
          timestr: Wed Oct 26 12:01:56 2022
          summary: Nmap done at Wed Oct 26 12:01:56 2022; 1 IP address (1 host up) scanned
            in 0.10 seconds
          elapsed: '0.10'
          exit: success
        hosts:
          up: '1'
          down: '0'
          total: '1'
    

    So I was wondering if an --xml-flatten parser that first used XSLT to flatten the XML might be something that could be considered?
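    An alternative to shelling out to XSLT would be post-processing the parsed dictionary in Python, stripping the '@' attribute markers that xmltodict adds. A sketch (note it would silently merge an attribute and a child element that share a name):

```python
def flatten_attrs(node):
    """Recursively turn '@attr' keys into plain 'attr' keys."""
    if isinstance(node, dict):
        return {key.lstrip('@'): flatten_attrs(value)
                for key, value in node.items()}
    if isinstance(node, list):
        return [flatten_attrs(item) for item in node]
    return node
```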

    enhancement ready-to-ship 
    opened by chriscroome 13
  • New parser: gpg

    New parser: gpg

    The gpg command has a machine-readable output format, see this Stack Overflow comment and the one after it, for example (you can also add -vv for a more verbose output):

    gpg --with-colons --show-keys /usr/share/keyrings/debian-archive-bullseye-stable.gpg 
    pub:-:4096:1:605C66F00D6C9793:1613238862:1865526862::-:::scSC::::::23::0:
    fpr:::::::::A4285295FC7B1A81600062A9605C66F00D6C9793:
    uid:-::::1613238862::2C045EB517DDC06A1FC747D1E310AD33A8CB50E4::Debian Stable Release Key (11/bullseye) <[email protected]>::::::::::0:
    

    The above command can currently be used via Ansible to check keys, for example for Docker:

    - name: Set a variable for the Docker GPG fingerprint 
      ansible.builtin.set_fact:
        docker_gpg_fingerprint: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
    
    - name: Docker GPG key present
      ansible.builtin.get_url:
        url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"
        dest: /etc/apt/keyrings/docker.asc
        mode: 0644
        owner: root
        group: root
    
    - name: Docker GPG key check command
      ansible.builtin.command: gpg --with-colons --show-keys -v /etc/apt/keyrings/docker.asc
      check_mode: false
      changed_when: false
      register: docker_gpg
    
    - name: Docker GPG key checked
      ansible.builtin.assert:
        that:
          - docker_gpg_fingerprint in docker_gpg.stdout
    

    Which is fine, but it could be nicer -- would this be a suitable command to consider for a jc parser?
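    The colon-delimited format is straightforward to split mechanically. A sketch (ignoring gpg's \x3a escaping of literal colons; per gnupg's doc/DETAILS, the fingerprint is field 10 of an fpr record):

```python
def parse_gpg_colons(output):
    """Split gpg --with-colons output into one dict per record."""
    records = []
    for line in output.splitlines():
        if not line.strip():
            continue
        fields = line.split(':')
        # field 1 (index 0) is the record type, e.g. pub, fpr, uid
        records.append({'type': fields[0], 'fields': fields})
    return records
```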

    new-parser 
    opened by chriscroome 13
  • What about a jc Ansible module?

    What about a jc Ansible module?

    I realise there is a jc Ansible filter, but it occurred to me that a jc module would potentially be easier and simpler to use, and therefore more useful.

    If one were to be written, perhaps it could have the ability to read files as well as the results of commands. However, I guess it would be impossible to ensure that commands don't change anything, but only using commands that don't change things could be suggested as best practice?

    - name: Register the SSHD config
      community.general.jc:
        path: /etc/ssh/sshd_config  # src: could be an alias for path
        type: auto                  # parser: could be alias for type, auto, the default, could use
      register: sshd_config         # the path / filename to determine the parser to use?
    
    - name: Register the PHP 8.0 FPM configuration
      community.general.jc:
        src: /etc/php/8.0/fpm/php.ini
        parser: ini
      register: php80_fpm_configuration
    
    - name: Register the UFW status
      community.general.jc:
        command: ufw status
        parser: ufw
      register: ufw_status
    
    - name: Register the history for the foo User
      community.general.jc:
        command: history
      become: true
      become_user: foo
      register: foo_history
      
    - name: Test rsync source to dest and register the results
      community.general.jc:
        command: rsync -i -a --dry-run source/ dest/
        parser: rsync               # parser: / type: could be optional if it can be automatically
      changed_when: false           # detected from the command being run?
      args:
        chdir: /foo/bar
      register: rsync_results
    

    I know all the above examples can be implemented using slurp, command and set_fact, so this isn't strictly needed, but having a community.general.jc module would make life easier!

    Just a thought...

    opened by chriscroome 2
  • Add AIX mount support

    Add AIX mount support

    AIX's mount output is unique:

    $ mount | head

      node       mounted        mounted over    vfs       date        options      
    -------- ---------------  ---------------  ------ ------------ --------------- 
             /dev/hd4         /                jfs2   Sep 06 11:46 rw,log=/dev/hd8 
             /dev/hd2         /usr             jfs2   Sep 06 11:46 rw,log=/dev/hd8 
             /dev/hd9var      /var             jfs2   Sep 06 11:46 rw,log=/dev/hd8 
             /dev/hd3         /tmp             jfs2   Sep 06 11:46 rw,log=/dev/hd8 
             /dev/hd1         /home            jfs2   Sep 06 11:47 rw,log=/dev/hd8 
             /dev/hd11admin   /admin           jfs2   Sep 06 11:47 rw,log=/dev/hd8 
             /proc            /proc            procfs Sep 06 11:47 rw              
             /dev/hd10opt     /opt             jfs2   Sep 06 11:47 rw,log=/dev/hd8 
    

    Add support in the mount parser.

    I will submit a pull request for this shortly.
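    For illustration, one way to split a data row (a sketch, not the submitted parser; it assumes the date always spans three tokens and that the optional leading node column makes 8 tokens total):

```python
def parse_aix_mount_line(line):
    """Sketch: split one AIX mount data row into named fields."""
    fields = line.split()
    # node column may be blank; when present there are 8 tokens, else 7
    node = fields.pop(0) if len(fields) == 8 else ''
    return {
        'node': node,
        'mounted': fields[0],
        'mount_point': fields[1],
        'vfs': fields[2],
        'date': ' '.join(fields[3:6]),
        'options': fields[6] if len(fields) > 6 else '',
    }
```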

    new-parser ready-to-ship 
    opened by davemq 2
  • Unable to parse `zipinfo` output containing whitespace in the archive path

    Unable to parse `zipinfo` output containing whitespace in the archive path

    After running zipinfo using a path to an archive containing a space character, jc is unable to consume the output. Example reproduction:

    $ zipinfo 'some_dir/another dir/archive.zip' > out
    
    Archive:  some_dir/another dir/archive.zip
    Zip file size: xxx bytes, number of entries: xxx
    ...
    

    This works when there is no space in the path (e.g. zipinfo archive.zip). However, when the path contains a space character, an error occurs:

    $ jc -d --zipinfo < out
    Traceback (most recent call last):
      ...
        self.data_out = self.parser_module.parse(
      File "... jc/parsers/zipinfo.py", line 173, in parse
        _, archive = line.split()
    ValueError: too many values to unpack (expected 2)
    

    One solution is to replace:

    https://github.com/kellyjonbrazil/jc/blob/36ce3c791dd2d3f0b5244961e889da5fe4bb0e6f/jc/parsers/zipinfo.py#L173

    With:

    # Remove the field name prefix (the double-space is intentional)
    archive = line.removeprefix('Archive:  ')
    

    This will support both situations; tested on macOS with zipinfo v3.0.

    Alternatively, a more specific separator could be used (e.g. a double-space). However, it would be safer to just remove the field name prefix from the line instead, as paths could also contain that same separator.

    bug ready-to-ship 
    opened by daniel-rhoades 3
  • jc fails on lsusb -v output

    jc fails on lsusb -v output

    Output sample attached lsusb.txt

    self = <jc.parsers.lsusb._LsUsb object at 0x000001D483FD5180>
    line = '     Ext Status: 0000.0044'
    
        def _add_hub_port_status_attributes(self, line):
            # Port 1: 0000.0103 power enable connect
            first_split = line.split(': ', maxsplit=1)
            port_field = first_split[0].strip()
            second_split = first_split[1].split(maxsplit=1)
            port_val = second_split[0]
    >       attributes = second_split[1].split()
    E       IndexError: list index out of range
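    A possible guard (a sketch of one fix, not necessarily the one shipped) is to treat a missing attribute list as empty:

```python
def add_hub_port_status_attributes(line):
    """Parse lines like 'Port 1: 0000.0103 power enable connect',
    tolerating ones with no trailing attributes ('Ext Status: 0000.0044')."""
    first_split = line.split(': ', maxsplit=1)
    port_field = first_split[0].strip()
    second_split = first_split[1].split(maxsplit=1)
    port_val = second_split[0]
    # the attribute list is optional on some lines
    attributes = second_split[1].split() if len(second_split) > 1 else []
    return port_field, port_val, attributes
```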
    
    bug ready-to-ship 
    opened by hotab 4
  • JC parser Netstat not working on Windows

    JC parser Netstat not working on Windows


    I'm trying to build an Electron app and need to retrieve the routing table from the device. I used jc to parse the netstat -nr output to JSON and show it in the render method.

    On macOS/Darwin I have already tested this and it works very well. Any suggestions for my problem? I want to use this method on all platforms. Thanks

    new-parser 
    opened by YosaRama 3
Releases (v1.22.4)
Owner: Kelly Brazil