
Overview

pandas-gbq


pandas-gbq is a package providing an interface to the Google BigQuery API from pandas

Installation

Install latest release version via conda

$ conda install pandas-gbq --channel conda-forge

Install latest release version via pip

$ pip install pandas-gbq

Install latest development version

$ pip install git+https://github.com/pydata/pandas-gbq.git

Usage

See the pandas-gbq documentation for more details.
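
A minimal example, assuming application default credentials are already set up (the project id, dataset, and table names below are placeholders; the query uses a public dataset):

import pandas_gbq

# Run a query and load the results into a DataFrame.
df = pandas_gbq.read_gbq(
    "SELECT name, SUM(number) AS total "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
    "GROUP BY name",
    project_id="my-project",
)

# Write a DataFrame to a BigQuery table, replacing it if it already exists.
pandas_gbq.to_gbq(df, "my_dataset.my_table", project_id="my-project", if_exists="replace")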

Comments
  • ENH: Convert read_gbq() function to use google-cloud-python

    ENH: Convert read_gbq() function to use google-cloud-python

    Description

    I've rewritten the current read_gbq() function using google-cloud-python, which handles the naming of structs and arrays out of the box. For more discussion about this, see: https://github.com/pydata/pandas-gbq/issues/23.

    ~However, because google-cloud-python potentially uses different authentication flows and may break existing behavior, I've left the existing read_gbq() function and named this new function from_gbq(). If in the future we are able to reconcile the authentication flows and/or decide to deprecate flows that are not supported in google-cloud-python, we can rename this to read_gbq().~

    UPDATE: As requested in a comment by @jreback (https://github.com/pydata/pandas-gbq/pull/25/files/a763cf071813c836b7e00ae40ccf14e93e8fd72b#r110518161), I deleted the old read_gbq() and named my new function read_gbq(), removing all legacy functions and code.

    Added a few lines to the requirements file, but I'll leave it to you @jreback to deal with the conda dependency issues which you mentioned in Issue 23.

    Let me know if you have any questions or if any tests need to be written. You can confirm that it works by running the following:

    q = """
    select ROW_NUMBER() over () row_num, struct(a,b) col, c, d, c*d c_times_d, e
    from
    (select * from
        (SELECT 1 a, 2 b, null c, 0 d, 100 e)
        UNION ALL
        (SELECT 5 a, 6 b, 0 c, null d, 200 e)
        UNION ALL
        (SELECT 8 a, 9 b, 10.0 c, 10 d, 300 e)
    )
    """
    df = gbq.read_gbq(q, dialect='standard')
    df
    

    | row_num | col                | c    | d    | c_times_d | e   |
    |---------|--------------------|------|------|-----------|-----|
    | 2       | {u'a': 5, u'b': 6} | 0.0  | NaN  | NaN       | 200 |
    | 1       | {u'a': 1, u'b': 2} | NaN  | 0.0  | NaN       | 100 |
    | 3       | {u'a': 8, u'b': 9} | 10.0 | 10.0 | 100.0     | 300 |

    q = """
    select array_agg(a) mylist
    from
    (select "1" a UNION ALL select "2" a)
    """
    df = gbq.read_gbq(q, dialect='standard')
    df
    

    | mylist |
    |--------|
    | [1, 2] |

    q = """
    select array_agg(struct(a,b)) col, f
    from
    (select * from
        (SELECT 1 a, 2 b, null c, 0 d, 100 e, "hello" f)
        UNION ALL
        (SELECT 5 a, 6 b, 0 c, null d, 200 e, "ok" f)
        UNION ALL
        (SELECT 8 a, 9 b, 10.0 c, 10 d, 300 e, "ok" f)
    )
    group by f
    """
    df = gbq.read_gbq(q, dialect='standard')
    df
    

    | col                                      | f     |
    |------------------------------------------|-------|
    | [{u'a': 5, u'b': 6}, {u'a': 8, u'b': 9}] | ok    |
    | [{u'a': 1, u'b': 2}]                     | hello |

    Confirmed that col_order and index_col still work ~(feel free to pull that out into a separate function since there's now redundant code with read_gbq())~, and I removed the type conversion lines, which appear to be unnecessary (google-cloud-python and/or pandas appears to do the necessary type conversion automatically, even if there are nulls; you can confirm this by examining the dtypes of the resulting dataframes).
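
    For reference, checking those two parameters can be as simple as the following throwaway query (illustrative only):

    q = "SELECT 1 AS a, 2 AS b, 3 AS c"
    df = gbq.read_gbq(q, dialect='standard', index_col='a', col_order=['c', 'b'])
    df  # indexed by `a`, with the remaining columns ordered c, b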

    type: feature request 
    opened by jasonqng 55
  • Performance

    Performance

    We're starting to use BigQuery heavily but are becoming increasingly 'bottlenecked' by the performance of moving moderate amounts of data from BigQuery to Python.

    Here are a few stats:

    • 29.1s: Pulling 500k rows with 3 columns of data (with cached data) using pandas-gbq
    • 36.5s: Pulling the same query with google-cloud-bigquery - i.e. client.query(query).to_dataframe()
    • 2.4s: Pulling very similar data - same types, same size, from our existing MSSQL box hosted in AWS (using pd.read_sql). That's on standard drivers, nothing like turbodbc involved

    ...so using BigQuery with python is at least an order of magnitude slower than traditional DBs.

    We've tried exporting tables to CSV on GCS and reading those in, which works fairly well for data processes, though not for exploration.
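
    For reference, the pandas-gbq vs. google-cloud-bigquery numbers above can be reproduced with something like this rough sketch (project id and table are placeholders; timings will vary):

    import time

    import pandas_gbq
    from google.cloud import bigquery

    query = "SELECT * FROM `my-project.my_dataset.my_table` LIMIT 500000"  # placeholder

    start = time.time()
    pandas_gbq.read_gbq(query, project_id="my-project", dialect="standard")
    print("pandas-gbq: %.1fs" % (time.time() - start))

    client = bigquery.Client(project="my-project")
    start = time.time()
    client.query(query).to_dataframe()
    print("google-cloud-bigquery: %.1fs" % (time.time() - start))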

    A few questions - feel free to jump in with partial replies:

    • Are these results expected, or are we doing something very wrong?
    • My prior is that a lot of this slowdown is caused by pulling in HTTP pages, converting to Python objects, and then writing those into arrays. Is this approach really scalable? Should pandas-gbq invest resources in getting a format that's queryable in exploratory workflows and can deal with more reasonably sized datasets (or at least encourage Google to)?
    opened by max-sixty 45
  • BUG: oauth2client deprecated, use google-auth instead.

    BUG: oauth2client deprecated, use google-auth instead.

    Remove the use of oauth2client and use google-auth library, instead.

    Rather than check for multiple versions of the libraries, use the setup.py to specify compatible versions. I believe this is safe since Pandas checks for the pandas_gbq package.

    Since google-auth does not use the argparse module to override user authentication flow settings, add a parameter to choose between the web and console flow.
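
    Illustrative usage of the new parameter (the project id is a placeholder; auth_local_webserver=False selects the console flow):

    import pandas_gbq

    df = pandas_gbq.read_gbq(
        "SELECT 1 AS x",
        project_id="my-project",
        auth_local_webserver=False,  # console-based OAuth flow instead of a local webserver
    )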

    Closes https://github.com/pydata/pandas-gbq/issues/37.

    type: bug 
    opened by tswast 41
  • to_gbq result in UnicodeEncodeError

    to_gbq result in UnicodeEncodeError

    Hi, I'm using Heroku to run a python based ETL process where I'm pushing the contents of a Pandas dataframe into Google BQ using to_gbq. However, it's generating a UnicodeEncodeError with the following stack trace, due to some non-latin characters.

    Strangely, this works fine on my Mac, but when I try to run it on Heroku it fails. It seems that for some reason http.client is getting an un-encoded string rather than bytes, and therefore tries to encode it as latin-1, which is the default but obviously chokes on anything non-Latin, like Chinese characters.

    2018-01-08T04:54:17.307496+00:00 app[run.2251]: Load is 100.0% Complete
    2018-01-08T04:54:20.443238+00:00 app[run.2251]: Traceback (most recent call last):
    2018-01-08T04:54:20.443267+00:00 app[run.2251]: File "AllCostAndRev.py", line 534, in <module>
    2018-01-08T04:54:20.443708+00:00 app[run.2251]: main(yaml.dump(data=ads_dict))
    2018-01-08T04:54:20.443710+00:00 app[run.2251]: File "AllCostAndRev.py", line 475, in main
    2018-01-08T04:54:20.443915+00:00 app[run.2251]: private_key=environ['skynet_bq_pk']
    2018-01-08T04:54:20.443917+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/pandas_gbq/gbq.py", line 989, in to_gbq
    2018-01-08T04:54:20.444390+00:00 app[run.2251]: connector.load_data(dataframe, dataset_id, table_id, chunksize)
    2018-01-08T04:54:20.444391+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/pandas_gbq/gbq.py", line 590, in load_data
    2018-01-08T04:54:20.444653+00:00 app[run.2251]: job_config=job_config).result()
    2018-01-08T04:54:20.444656+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 748, in load_table_from_file
    2018-01-08T04:54:20.444942+00:00 app[run.2251]: file_obj, job_resource, num_retries)
    2018-01-08T04:54:20.444943+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 777, in _do_resumable_upload
    2018-01-08T04:54:20.445248+00:00 app[run.2251]: response = upload.transmit_next_chunk(transport)
    2018-01-08T04:54:20.445250+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/resumable_media/requests/upload.py", line 395, in transmit_next_chunk
    2018-01-08T04:54:20.445457+00:00 app[run.2251]: retry_strategy=self._retry_strategy)
    2018-01-08T04:54:20.445458+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/resumable_media/requests/_helpers.py", line 101, in http_request
    2018-01-08T04:54:20.445592+00:00 app[run.2251]: func, RequestsMixin._get_status_code, retry_strategy)
    2018-01-08T04:54:20.445594+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/resumable_media/_helpers.py", line 146, in wait_and_retry
    2018-01-08T04:54:20.445725+00:00 app[run.2251]: response = func()
    2018-01-08T04:54:20.445726+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/google/auth/transport/requests.py", line 186, in request
    2018-01-08T04:54:20.445866+00:00 app[run.2251]: method, url, data=data, headers=request_headers, **kwargs)
    2018-01-08T04:54:20.445867+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
    2018-01-08T04:54:20.446099+00:00 app[run.2251]: resp = self.send(prep, **send_kwargs)
    2018-01-08T04:54:20.446101+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
    2018-01-08T04:54:20.446456+00:00 app[run.2251]: r = adapter.send(request, **kwargs)
    2018-01-08T04:54:20.446457+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/requests/adapters.py", line 440, in send
    2018-01-08T04:54:20.446728+00:00 app[run.2251]: timeout=timeout
    2018-01-08T04:54:20.446730+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
    2018-01-08T04:54:20.446969+00:00 app[run.2251]: chunked=chunked)
    2018-01-08T04:54:20.446970+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/site-packages/urllib3/connectionpool.py", line 357, in _make_request
    2018-01-08T04:54:20.447229+00:00 app[run.2251]: conn.request(method, url, **httplib_request_kw)
    2018-01-08T04:54:20.447231+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/http/client.py", line 1239, in request
    2018-01-08T04:54:20.447689+00:00 app[run.2251]: self._send_request(method, url, body, headers, encode_chunked)
    2018-01-08T04:54:20.447690+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/http/client.py", line 1284, in _send_request
    2018-01-08T04:54:20.448232+00:00 app[run.2251]: body = _encode(body, 'body')
    2018-01-08T04:54:20.448234+00:00 app[run.2251]: File "/app/.heroku/python/lib/python3.6/http/client.py", line 161, in _encode
    2018-01-08T04:54:20.448396+00:00 app[run.2251]: (name.title(), data[err.start:err.end], name)) from None
    2018-01-08T04:54:20.448405+00:00 app[run.2251]: UnicodeEncodeError: 'latin-1' codec can't encode characters in position 553626-553628: Body ('信用卡') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
    2018-01-08T04:54:20.609814+00:00 heroku[run.2251]: Process exited with status 1
    2018-01-08T04:54:20.621819+00:00 heroku[run.2251]: State changed from up to complete

    opened by 2legit 24
  • Import error with pandas_gbq

    Import error with pandas_gbq

    There's a bug with the most recent Google BigQuery library. Hence, this error occurs in pandas-gbq:

    ImportError: pandas-gbq requires google-cloud-bigquery: cannot import name 'collections_abc'

    opened by winsonhys 23
  • When appending to a table, load if the dataframe contains a subset of the existing schema

    When appending to a table, load if the dataframe contains a subset of the existing schema

    Purpose

    The current behavior of to_gbq is to fail if the schema of the new data is not equivalent to the current schema. However, this means that the load fails if the new data is missing columns that are present in the current schema. For instance, this may occur when the data source I am using to construct the dataframe only provides non-empty values. Rather than determining the current schema of the GBQ table and adding empty columns to my dataframe, I would like to_gbq to load my data if the columns in the dataframe are a subset of the current schema.

    Primary changes made

    • Factoring a schema function out of verify_schema to support both verify_schema and schema_is_subset
    • schema_is_subset determines whether local_schema is a subset of remote_schema (see the sketch after this list)
    • the append flag uses schema_is_subset rather than verify_schema to determine if the data can be loaded
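
    A minimal illustration of the subset check (a sketch, not the exact code in this PR):

    def schema_is_subset(schema_remote, schema_local):
        # Schemas are dicts like {'fields': [{'name': ..., 'type': ...}, ...]}.
        # The local schema is a subset if every local field also appears
        # (same name and type) in the remote schema.
        remote_fields = schema_remote['fields']
        local_fields = schema_local['fields']
        return all(field in remote_fields for field in local_fields)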

    Auxiliary changes made

    • PROJECT_ID etc are retrieved from an environment variable to facilitate local testing
    • Running test_gbq through autopep8 added a row after two class names
    opened by mr-mcox 19
  • Set project_id (and other settings) once for all subsequent queries so you don't have to pass every time

    Set project_id (and other settings) once for all subsequent queries so you don't have to pass every time

    One frustrating thing is having to pass the project_id (among other parameters) every time you write a query. For example, personally, I usually use the same project_id, almost always query with standard SQL, and usually turn off verbose. I have to pass those three with every read_gbq, and that typing adds up.

    Potential options include setting environment variables and reading defaults from them, but the values can differ from run to run, and fiddling with environment variables feels unfriendly. My suggestion would be to add a class that wraps read_gbq() and to_gbq() in a client object. You could set the project_id attribute, the dialect, and whatever else on the client object, then re-use the object every time you want to run a query with those settings.

    A very naive implementation here in this branch: https://github.com/pydata/pandas-gbq/compare/master...jasonqng:client-object-class?expand=1
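
    Roughly, the wrapper looks like this (a trimmed sketch; the linked branch is the actual implementation):

    from pandas_gbq import read_gbq


    class Client(object):
        """Stores default settings and forwards them to read_gbq()."""

        def __init__(self, project_id=None, dialect='legacy', verbose=True):
            self.project_id = project_id
            self.dialect = dialect
            self.verbose = verbose

        def read(self, query, **kwargs):
            kwargs.setdefault('project_id', self.project_id)
            kwargs.setdefault('dialect', self.dialect)
            kwargs.setdefault('verbose', self.verbose)
            return read_gbq(query, **kwargs)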

    Usage would be like:

    >>> import gbq
    >>> client = gbq.Client(project_id='project-name',dialect='standard',verbose=False)
    >>> client.read("select 1")
       f0_
    0    1
    >>> client.read("select 2")
       f0_
    0    2
    >>> client.verbose=True
    >>> client.read("select 3")
    Requesting query... ok.
    Job ID: c7d7e4c0-883a-4e14-b35f-61c9fae0c08b
    Query running...
    Query done.
    Processed: 0.0 B Billed: 0.0 B
    Standard price: $0.00 USD
    
    Retrieving results...
    Got 1 rows.
    
    Total time taken 1.66 s.
    Finished at 2018-01-02 14:06:01.
       f0_
    0    3
    

    Does that seem like a reasonable solution to all this extra typing or is there another preferred way? If so, I can open up a PR with the above branch.

    Thanks, my tired fingers thank you all!

    @tswast @jreback @parthea @maxim-lian

    opened by jasonqng 17
  • Structs lack proper names as dicts and arrays get turned into array of dicts

    Structs lack proper names as dicts and arrays get turned into array of dicts

    Version 0.1.4

    This query returns an improperly named dict:

    q = """
    select struct(a,b) col
    from
    (SELECT 1 a, 2 b)
    """
    df = gbq.read_gbq(q, dialect='standard', verbose=False)
    

    image

    Compare with result from Big Query: image

    An array of items also gets turned into an array of dicts sometimes. For example:

    q = """
    select array_agg(a)
    from
    (select "1" a UNION ALL select "2" a)
    """
    gbq.read_gbq(q, dialect='standard', verbose=False, project_id='project')
    

    outputs: image

    Compare to Big Query: image

    These issues may or may not be related?

    type: bug help wanted 
    opened by jasonqng 17
  • Printing rather than logging?

    Printing rather than logging?

    We're printing in addition to logging when querying from BigQuery. This makes controlling the output much harder, aside from being unidiomatic.

    Printing in white, logging in red:

    https://cloud.githubusercontent.com/assets/5635139/23176541/6028b884-f831-11e6-911a-48aa7741a4da.png
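
    Routing everything through the logging module would let callers decide what to show; a minimal sketch (assuming messages are emitted on a "pandas_gbq" logger):

    import logging
    import sys

    # Send pandas-gbq log messages to stdout and choose the verbosity once.
    logger = logging.getLogger("pandas_gbq")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler(sys.stdout))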

    type: feature request 
    opened by max-sixty 17
  • BUG: Add bigquery scope for google credentials

    BUG: Add bigquery scope for google credentials

    BigQuery requires scoped credentials when loading application default credentials.

    Quick test code is below; it returns an "invalid token" error. When the create_scoped() statement is uncommented, the code runs correctly without any error.

    # Use Google application default credentials. First set this in the shell:
    #   export GOOGLE_APPLICATION_CREDENTIALS=/PATH/TO/GOOGLE_DEFAULT_CREDENTIALS.json
    
    import httplib2
    
    from googleapiclient.discovery import build
    from oauth2client.client import GoogleCredentials
    
    credentials = GoogleCredentials.get_application_default()
    #credentials = credentials.create_scoped('https://www.googleapis.com/auth/bigquery')
    
    http = httplib2.Http()
    http = credentials.authorize(http)
    
    service = build('bigquery', 'v2', http=http)
    
    jobs = service.jobs()
    job_data = {'configuration': {'query': {'query': 'SELECT 1'}}}
    
    jobs.insert(projectId='projectid', body=job_data).execute()
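
    For reference, a sketch of the equivalent scoped setup with google-auth (the library pandas-gbq later migrated to); the project id is a placeholder:

    import google.auth
    from googleapiclient.discovery import build

    # Request credentials that are already scoped for BigQuery.
    credentials, _ = google.auth.default(
        scopes=['https://www.googleapis.com/auth/bigquery'])
    service = build('bigquery', 'v2', credentials=credentials)

    jobs = service.jobs()
    job_data = {'configuration': {'query': {'query': 'SELECT 1'}}}
    jobs.insert(projectId='projectid', body=job_data).execute()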
    
    type: bug 
    opened by xcompass 16
  • read_gbq() unnecessarily waiting on getting default credentials from Google

    read_gbq() unnecessarily waiting on getting default credentials from Google

    When attempting to grant pandas access to my GBQ project, I am running into an issue where read_gbq tries to get default credentials, fails / times out, and then prints out a URL to visit to grant the credentials. Since I'm not running this on Google Cloud Platform, I do not expect to be able to get default credentials. In my case, I only want to run the CLI flow (without having OAuth call back to my local server).

    Here's the code

    >>> import pandas_gbq as gbq
    >>> gbq.read_gbq('SELECT 1', project_id=<project_id>, auth_local_webserver=False)
    

    Here's what I see when I trigger a SIGINT once the query is invoked:

      File "/usr/lib/python3.5/site-packages/pandas_gbq/gbq.py", line 214, in get_credentials
        credentials = self.get_application_default_credentials()
      File "/usr/lib/python3.5/site-packages/pandas_gbq/gbq.py", line 243, in get_application_default_credentials
        credentials, _ = google.auth.default(scopes=[self.scope])
      File "/usr/lib/python3.5/site-packages/google/auth/_default.py", line 277, in default
        credentials, project_id = checker()
      File "/usr/lib/python3.5/site-packages/google/auth/_default.py", line 274, in <lambda>
        lambda: _get_gce_credentials(request))
      File "/usr/lib/python3.5/site-packages/google/auth/_default.py", line 176, in _get_gce_credentials
        if _metadata.ping(request=request):
      File "/usr/lib/python3.5/site-packages/google/auth/compute_engine/_metadata.py", line 73, in ping
        timeout=timeout)
      File "/usr/lib/python3.5/site-packages/google/auth/transport/_http_client.py", line 103, in __call__
        method, path, body=body, headers=headers, **kwargs)
      File "/usr/lib/python3.5/http/client.py", line 1106, in request
        self._send_request(method, url, body, headers)
      File "/usr/lib/python3.5/http/client.py", line 1151, in _send_request
        self.endheaders(body)
      File "/usr/lib/python3.5/http/client.py", line 1102, in endheaders
        self._send_output(message_body)
      File "/usr/lib/python3.5/http/client.py", line 934, in _send_output
        self.send(msg)
      File "/usr/lib/python3.5/http/client.py", line 877, in send
        self.connect()
      File "/usr/lib/python3.5/http/client.py", line 849, in connect
        (self.host,self.port), self.timeout, self.source_address)
      File "/usr/lib/python3.5/socket.py", line 702, in create_connection
        sock.connect(sa)
    KeyboardInterrupt
    

    I've also tried setting the env variable GOOGLE_APPLICATIONS_CREDENTIALS to empty. I'm using pandas-gbq version at commit 64a19b.
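
    (One workaround in later pandas-gbq releases is to pass explicit credentials so the library never probes the GCE metadata server; a sketch, assuming a service-account key file and a placeholder project id:)

    from google.oauth2 import service_account
    import pandas_gbq

    credentials = service_account.Credentials.from_service_account_file(
        "/path/to/key.json",
        scopes=["https://www.googleapis.com/auth/bigquery"],
    )
    df = pandas_gbq.read_gbq("SELECT 1", project_id="my-project", credentials=credentials)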

    type: bug type: cleanup 
    opened by dfontenot 15
  • docs: fix reading dtypes

    docs: fix reading dtypes

    Hello. I faced the same confusion as #579, so have tried to update the docs.

    Not only the BQ DATE type but also the TIME, TIMESTAMP, and FLOAT64 types seem to be wrong.

    It seems to be due to a breaking change in google-cloud-bigquery v3.
    https://cloud.google.com/python/docs/reference/bigquery/latest/upgrading#changes-to-data-types-loading-a-pandas-dataframe

    I have confirmed the correct dtypes as:

    >>> import pandas
    
    >>> sql1 = """
    SELECT
      TRUE AS BOOL,
      123 AS INT64,
      123.456 AS FLOAT64,
    
      TIME '12:30:00.45' AS TIME,
      DATE "2023-01-01" AS DATE,
      DATETIME "2023-01-01 12:30:00.45" AS DATETIME,
      TIMESTAMP "2023-01-01 12:30:00.45" AS TIMESTAMP
    """
    
    >>> pandas.read_gbq(sql1).dtypes
    BOOL                        boolean
    INT64                         Int64
    FLOAT64                     float64
    TIME                         dbtime
    DATE                         dbdate
    DATETIME             datetime64[ns]
    TIMESTAMP       datetime64[ns, UTC]
    dtype: object
    
    >>> sql2 = """
    SELECT
      DATE "2023-01-01" AS DATE,
      DATETIME "2023-01-01 12:30:00.45" AS DATETIME,
      TIMESTAMP "2023-01-01 12:30:00.45" AS TIMESTAMP,
    UNION ALL
    SELECT
      DATE "2263-04-12" AS DATE,
      DATETIME "2263-04-12 12:30:00.45" AS DATETIME,
      TIMESTAMP "2263-04-12 12:30:00.45" AS TIMESTAMP
    """
    
    >>> pandas.read_gbq(sql2).dtypes
    DATE         object
    DATETIME     object
    TIMESTAMP    object
    dtype: object
    

    Fixes #579 🦕

    api: bigquery size: s 
    opened by yokomotod 1
  • chore: updates python version to 3.11

    chore: updates python version to 3.11

    Introducing this for discussion. Planning on adding version 3.11 as a test platform. Not sure about all the inner workings of our codebase, so there are probably a number of things that need to be added/edited. Feel free to comment.

    @tswast @parthea

    api: bigquery size: s 
    opened by chalmerlowe 2
  • feat: adds ability to provide redirect uri

    feat: adds ability to provide redirect uri

    WIP PR for discussion: aiming to provide the ability to include a redirect URI, client ID, and client secrets to facilitate the migration away from "out of band" OAuth authentication.

    @tswast

    See also changes in these repos:

    • https://github.com/googleapis/python-bigquery-pandas/pull/595
    • https://github.com/googleapis/google-auth-library-python-oauthlib/pull/259
    • https://github.com/pydata/pydata-google-auth/pull/58
    api: bigquery size: m 
    opened by chalmerlowe 1
  • Problems installing the package on macOS M1 chip

    Problems installing the package on macOS M1 chip

    Hi,

    I am having problems installing this package on macOS with an M1 chip.

    The error:

    Could not find <Python.h>. This could mean the following:
      * You're on Ubuntu and haven't run apt-get install python3-dev.
      * You're on RHEL/Fedora and haven't run yum install python3-devel or dnf install python3-devel
        (make sure you also have redhat-rpm-config installed)
      * You're on Mac OS X and the usual Python framework was somehow corrupted
        (check your environment variables or try re-installing?)
      * You're on Windows and your Python installation was somehow corrupted
        (check your environment variables or try re-installing?)

      [end of output]
    

    note: This error originates from a subprocess, and is likely not a problem with pip.
    error: legacy-install-failure

    × Encountered error while trying to install package.
    ╰─> grpcio

    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.

    Environment details

    • OS type and version: MacOS 12.4 (chip M1)
    • Python version: Python 3.10.0
    • pip version: pip 22.3.1

    Thanks!

    api: bigquery 
    opened by davidgarciatwenix 0
  • Ability to handle a dry_run

    Ability to handle a dry_run

    Hi, after checking out the pandas_gbq.read_gbq call parametrization, I see that I can supply configuration={'dry_run': True} to make the query job a dry run.

    However, it will still attempt to find the query destination and download rows, which in this case will be nonexistent. It would be great if pandas_gbq were aware of dry_run and just output the query stats to the debug log or returned some stats data.

    e.g. querying something like this: pandas_gbq.read_gbq("SELECT * FROM 'my_project_id.billing_ds.cloud_pricing_export'", configuration={'dry_run': True})

    still results in the exception

    Traceback (most recent call last):
      File "big_query_utils.py", line 134, in <module>
        print(read_df('SELECT * FROM 'my_project_id.billing_ds.cloud_pricing_export'', configuration={'dry_run': True}))
      File "/Users/.../big_query/big_query_utils.py", line 95, in read_df
        return pandas_gbq.read_gbq(sql_or_table_id, **gbq_kwargs)
      File "/Users/.../lib/python3.9/site-packages/pandas_gbq/gbq.py", line 921, in read_gbq
        final_df = connector.run_query(
      File "/Users/.../lib/python3.9/site-packages/pandas_gbq/gbq.py", line 526, in run_query
        rows_iter = self.client.list_rows(
      File "/Users/.../lib/python3.9/site-packages/google/cloud/bigquery/client.py", line 3790, in list_rows
        table = self.get_table(table.reference, retry=retry, timeout=timeout)
      File "/Users/.../lib/python3.9/site-packages/google/cloud/bigquery/client.py", line 1034, in get_table
        api_response = self._call_api(
      File "/Users/.../lib/python3.9/site-packages/google/cloud/bigquery/client.py", line 782, in _call_api
        return call()
      File "/Users/.../lib/python3.9/site-packages/google/api_core/retry.py", line 283, in retry_wrapped_func
        return retry_target(
      File "/Users/.../lib/python3.9/site-packages/google/api_core/retry.py", line 190, in retry_target
        return target()
      File "/Users/.../lib/python3.9/site-packages/google/cloud/_http/__init__.py", line 494, in api_request
        raise exceptions.from_http_response(response)
    google.api_core.exceptions.NotFound: 404 GET https://bigquery.googleapis.com/bigquery/v2/projects/my_project_id/datasets/_6a20f817b1e72d456384bdef157062be9989000e/tables/anon71d825e7efee2856ce2b5e50a3df3a2579fd5583d14740ca3064bab740c8ffd9?prettyPrint=false: Not found: Table my_project_id:_6a20f817b1e72d456384bdef157062be9989000e.anon71d825e7efee2856ce2b5e50a3df3a2579fd5583d14740ca3064bab740c8ffd9
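
    Until pandas-gbq understands dry runs, the stats can be fetched directly with google-cloud-bigquery (a sketch, reusing the placeholder project and table from above):

    from google.cloud import bigquery

    client = bigquery.Client(project="my_project_id")
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(
        "SELECT * FROM `my_project_id.billing_ds.cloud_pricing_export`",
        job_config=job_config,
    )
    print("This query would process %d bytes." % job.total_bytes_processed)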

    type: feature request api: bigquery 
    opened by ehborisov 0
  • NUMERIC Field failing with conversion from NoneType to Decimal is not supported

    NUMERIC Field failing with conversion from NoneType to Decimal is not supported

    Saving data to a NUMERIC field fails with "conversion from NoneType to Decimal is not supported".

    • python 3.9
    • pandas 1.5.1
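
    A minimal snippet that appears to hit this path (dataset, table, and project are placeholders; the target column is declared NUMERIC):

    import decimal

    import pandas as pd

    df = pd.DataFrame({"amount": [decimal.Decimal("1.23"), None]})
    df.to_gbq(
        "my_dataset.my_table",
        project_id="my-project",
        if_exists="append",
        table_schema=[{"name": "amount", "type": "NUMERIC"}],
    )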

    Stack trace

    
    ...........
    
    df.to_gbq(project_id=self.client.project,
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas-1.5.1-py3.9-macosx-10.9-x86_64.egg/pandas/core/frame.py", line 2168, in to_gbq
        gbq.to_gbq(
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas-1.5.1-py3.9-macosx-10.9-x86_64.egg/pandas/io/gbq.py", line 218, in to_gbq
        pandas_gbq.to_gbq(
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas_gbq-0.17.9-py3.9.egg/pandas_gbq/gbq.py", line 1198, in to_gbq
        connector.load_data(
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas_gbq-0.17.9-py3.9.egg/pandas_gbq/gbq.py", line 591, in load_data
        chunks = load.load_chunks(
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas_gbq-0.17.9-py3.9.egg/pandas_gbq/load.py", line 240, in load_chunks
        load_parquet(
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas_gbq-0.17.9-py3.9.egg/pandas_gbq/load.py", line 128, in load_parquet
        dataframe = cast_dataframe_for_parquet(dataframe, schema)
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas_gbq-0.17.9-py3.9.egg/pandas_gbq/load.py", line 103, in cast_dataframe_for_parquet
        cast_column = dataframe[column_name].map(decimal.Decimal)
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas-1.5.1-py3.9-macosx-10.9-x86_64.egg/pandas/core/series.py", line 4539, in map
        new_values = self._map_values(arg, na_action=na_action)
      File "/Users/xxxx/.local/lib/python3.9/site-packages/pandas-1.5.1-py3.9-macosx-10.9-x86_64.egg/pandas/core/base.py", line 890, in _map_values
        new_values = map_f(values, mapper)
      File "pandas/_libs/lib.pyx", line 2918, in pandas._libs.lib.map_infer
    TypeError: conversion from NoneType to Decimal is not supported
    
    api: bigquery 
    opened by ismailsimsek 1