Working Time

Statistics of working hours and working conditions, by industry and by company.

Overview

Original data source: https://github.com/WorkerLivesMatter/WorkingTime. Credit to the initiators of that project.

The raw data has been lightly cleaned and organized into tables that PostgreSQL can load directly.

Public Demo: http://demo.pigsty.cc/d/worktime-query

How to Use

If you already have a Pigsty environment, clone this project on the admin node as the admin user and run make all:

git clone https://github.com/Vonng/worktime && cd worktime
make all

Data Description

CREATE TABLE worktime.worktime
(
    id          INTEGER NOT NULL,
    company     TEXT,
    department  TEXT,
    job         TEXT,
    base        TEXT,
    work_begin  TEXT,
    work_end    TEXT,
    launch_time TEXT,
    dinner_time TEXT,
    wed         TEXT,
    fri         TEXT,
    workdays    TEXT,
    summary     TEXT,
    remark      TEXT,
    category    TEXT,
    suggestion  TEXT,
    struct      TEXT,
    welfare     TEXT,
    is_foreign  BOOLEAN,
    domain      TEXT NOT NULL
) PARTITION BY LIST (domain);

CREATE TABLE worktime.internet PARTITION OF worktime.worktime FOR VALUES IN ('互联网');
CREATE TABLE worktime.finance  PARTITION OF worktime.worktime FOR VALUES IN ('金融');
CREATE TABLE worktime."foreign" PARTITION OF worktime.worktime FOR VALUES IN ('外企');
CREATE TABLE worktime.misc     PARTITION OF worktime.worktime FOR VALUES IN ('其他');
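
Since the parent table is list-partitioned on domain, each inserted row is routed to whichever partition lists its domain value. A minimal sketch to verify the routing, using hypothetical values (id 9999 and ExampleCorp are made up, not part of the dataset):

-- Hypothetical row: routed by `domain` to the worktime.finance partition.
INSERT INTO worktime.worktime (id, company, domain)
VALUES (9999, 'ExampleCorp', '金融');

-- Show which partition actually stores the row.
SELECT tableoid::regclass AS partition, company
FROM worktime.worktime
WHERE id = 9999;
-- partition => worktime.finance

-- Remove the demo row afterwards.
DELETE FROM worktime.worktime WHERE id = 9999;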

COMMENT ON TABLE worktime.worktime IS '企业工作时间统计表';           -- company working-time statistics
COMMENT ON TABLE worktime.internet IS '企业工作时间统计表:互联网行业'; -- Internet industry
COMMENT ON TABLE worktime.finance IS '企业工作时间统计表:金融行业';    -- finance industry
COMMENT ON TABLE worktime."foreign" IS '企业工作时间统计表:外企';      -- foreign enterprises
COMMENT ON TABLE worktime.misc IS '企业工作时间统计表:其他';           -- others

CREATE INDEX ON worktime.worktime(company, department);
COMMENT ON COLUMN worktime.worktime.id IS '原始数据行号';         -- row number in the raw data
COMMENT ON COLUMN worktime.worktime.company IS '公司';            -- company
COMMENT ON COLUMN worktime.worktime.department IS '部门';         -- department
COMMENT ON COLUMN worktime.worktime.job IS '岗位';                -- job / position
COMMENT ON COLUMN worktime.worktime.base IS 'base地';             -- base location (city)
COMMENT ON COLUMN worktime.worktime.work_begin IS '上班时间';     -- work start time
COMMENT ON COLUMN worktime.worktime.work_end IS '下班时间';       -- work end time
COMMENT ON COLUMN worktime.worktime.launch_time IS '午饭时间';    -- lunch break time
COMMENT ON COLUMN worktime.worktime.dinner_time IS '晚饭时间';    -- dinner break time
COMMENT ON COLUMN worktime.worktime.wed IS '周三是否特殊';        -- whether Wednesday is special
COMMENT ON COLUMN worktime.worktime.fri IS '周五是否特殊';        -- whether Friday is special
COMMENT ON COLUMN worktime.worktime.workdays IS '一周工作天数';   -- working days per week
COMMENT ON COLUMN worktime.worktime.summary IS '新人是否日报/周报'; -- whether new hires must write daily/weekly reports
COMMENT ON COLUMN worktime.worktime.remark IS '备注';             -- remarks
COMMENT ON COLUMN worktime.worktime.category IS '行业/公司性质';  -- industry / type of company
COMMENT ON COLUMN worktime.worktime.suggestion IS '建议';         -- suggestions
COMMENT ON COLUMN worktime.worktime.struct IS '组内 35 岁及以上基层员工( 组长及以下)比例,格式为 x / y,x 为 35岁以上的人数,y 为总人数'; -- share of rank-and-file team members (team lead and below) aged 35+, written as x / y where x is the number aged 35+ and y is the team size
COMMENT ON COLUMN worktime.worktime.welfare IS '是否有其他福利(如:五险一金,带薪年假,公费旅游,免费三餐)'; -- other benefits, if any (e.g. social insurance & housing fund, paid annual leave, company-paid travel, free meals)
COMMENT ON COLUMN worktime.worktime.is_foreign IS '是否为外资企业?'; -- whether the company is foreign-funded
COMMENT ON COLUMN worktime.worktime.domain IS '大分类:互联网、金融、外企、其他'; -- top-level category: Internet, finance, foreign enterprise, other
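
Once the data has been loaded (via make all above), the table can be queried like any other. A purely illustrative example, counting submissions per company within the Internet domain; the filter on domain lets PostgreSQL prune every partition except worktime.internet:

-- Illustrative query: companies with the most submissions in the Internet domain.
SELECT company, count(*) AS records
FROM worktime.worktime
WHERE domain = '互联网'
GROUP BY company
ORDER BY records DESC
LIMIT 20;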