Overview

Description
A fast MySQL driver written in pure C/C++ for Python. Compatible with gevent through monkey patching.

Requirements
Python (http://www.python.org)

Installation
python setup.py build install

Comments
-
Parallel queries
I am unable to execute queries against the same database in parallel from separate connections. amysql + pthreads somehow serializes the queries (perhaps it doesn't release the GIL?), and amysql + gevent crashes with:

    python: PacketWriter.cpp:93: void PacketWriter::pull(size_t): Assertion `m_writeCursor - m_readCursor <= cbSize' failed.

For comparison, MySQLdb + pthreads runs the same queries in parallel just fine.
Version info: x86_64, amysql git, Python 2.7.1, gevent 0.13.6
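A minimal reproduction sketch (hedged: the connection parameters and the SLEEP-based probe are placeholders of mine, not from the report). If the driver releases the GIL during network I/O, four threads should each finish in roughly two seconds; if it serializes, the last one takes around eight:

    import threading
    import time
    import amysql

    DB = ("127.0.0.1", 3306, "user", "pass", "test")  # placeholder credentials

    def worker(n):
        cnn = amysql.Connection()
        cnn.connect(*DB)
        start = time.time()
        cnn.query("SELECT SLEEP(2)")  # long-running query exposes serialization
        print("thread %d finished after %.1fs" % (n, time.time() - start))
        cnn.close()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()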
-
Issue on connect timeout
I tried this code:

    import amysql
    cnn = amysql.Connection()
    cnn.connect(DB_HOST, DB_PORT, DB_USER, DB_PASS, DB_NAME)
    # wait for > 10 seconds
    cnn.close()

    cnn = amysql.Connection()
    cnn.connect(DB_HOST, DB_PORT, DB_USER, DB_PASS, DB_NAME)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    RuntimeError: Connection timed out (0)

It produces an error; I have to call connect twice for it to reconnect.
-
Connection reset by peer when receiving
    cnn.connect('<an internal IP address running an old MySQL version>', 3306, 'user', 'pwd', 'bugzilla')
    cnn.query("show tables")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    umysql.Error: (0, 'Connection reset by peer when receiving')

Sometimes it shows:

    cnn.query("select count(*) from bugs")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    umysql.SQLError: (1043, 'Bad handshake')
-
connecting to AWS RDS instance fails
When I try to connect to Amazon's DB instance with ultramysql, I get a weird exception:
    In [1]: import umysql

    In [2]: conn = umysql.Connection()

    In [3]: conn.connect('xxxxx.us-east-1.rds.amazonaws.com', 3306, 'xxxxx', 'xxxxx', 'xxxxx')
    ---------------------------------------------------------------------------
    SystemError                               Traceback (most recent call last)
    /home/fevral13/<ipython-input-3-213a95b4356f> in <module>()
    ----> 1 conn.connect('xxxxx.us-east-1.rds.amazonaws.com', 3306, 'xxxxx', 'xxxxx', 'xxxxx')

    SystemError: error return without exception set

    In [4]: conn.is_connected()
    Out[4]: True

    In [5]: conn.query('select 1')
    ---------------------------------------------------------------------------
    Error                                     Traceback (most recent call last)
    /home/fevral13/<ipython-input-5-4ec56317c561> in <module>()
    ----> 1 conn.query('select 1')

    Error: (0, 'Connection reset by peer when receiving')
But when I use pymysql, for example, everything works:
    In [6]: import pymysql

    In [7]: c = pymysql.Connect(host='xxxxx.us-east-1.rds.amazonaws.com', port=3306, user='xxxxx', passwd='xxxxx', db='xxxxx')

    In [8]: c.query('select 1')
    Out[8]: 1
So the DB is up and connectable.
-
Support stored procedures
It doesn't break the existing tests, and it passes the test I added for stored procedures. The first commit is a hexdump utility I used to debug this problem; if you don't want it, you can cherry-pick just the remaining commits (the second commit contains the changes needed to support stored procedures, the third one is the test).
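A usage sketch of what this enables (hedged: the procedure and table names are hypothetical, and the CALL behaviour shown is what the PR aims to support, not something verified here):

    cnn.query("CREATE PROCEDURE get_users() BEGIN SELECT * FROM users; END")
    rs = cnn.query("CALL get_users()")  # previously choked on the extra result-set packets
    print(rs.rows)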
-
ultramysql throws not connected exception
    db = umysql.Connection()
    db.connect(config.db_host, config.db_port, config.db_user, config.db_password, config.db_database)
    ...
    def get_info():
        rs = db.query(sql)

    RuntimeError: Not connected
    <Greenlet at 0x1df6870: handle(<socket at 0x1e59390 fileno=[Errno 9] Bad file des, ('127.0.0.1', 64871))> failed with RuntimeError
How can I make the connection keep alive forever?
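One possible pattern (a hedged sketch, not from the project: the ping interval, the SELECT 1 probe, and the make_connection/keepalive helpers are assumptions; the config names come from the snippet above) is to ping the connection periodically from a greenlet and rebuild it once the server has dropped it:

    import gevent
    import umysql

    def make_connection():
        cnn = umysql.Connection()
        cnn.connect(config.db_host, config.db_port, config.db_user,
                    config.db_password, config.db_database)
        return cnn

    db = make_connection()

    def keepalive(interval=60):
        # Probe the connection periodically; reconnect if the server dropped it.
        global db
        while True:
            gevent.sleep(interval)
            try:
                db.query("SELECT 1")
            except RuntimeError:  # e.g. "Not connected"
                db = make_connection()

    gevent.spawn(keepalive)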
-
DBAPI 2.0
Hello,
It would be interesting to support DBAPI 2.0 (http://www.python.org/dev/peps/pep-0249/).
That would facilitate ultramysql's integration as a driver into SQLAlchemy, for instance.
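A minimal sketch of the shape such a shim could take (hedged: none of this exists in ultramysql; the Cursor class below is a hypothetical illustration of the PEP 249 surface, and it assumes each entry of result.fields carries the column name first and that write queries return an (affected_rows, insert_id) tuple, as other reports here suggest):

    import umysql

    class Cursor(object):
        """A tiny PEP 249-style cursor over umysql query results."""
        def __init__(self, cnn):
            self._cnn = cnn
            self._rows = []
            self.description = None
            self.rowcount = -1

        def execute(self, sql, args=()):
            result = self._cnn.query(sql, args)
            if isinstance(result, tuple):  # INSERT/UPDATE: (affected_rows, insert_id)
                self.rowcount, self.description, self._rows = result[0], None, []
            else:                          # SELECT: result.fields / result.rows
                self.description = [(f[0],) + (None,) * 6 for f in result.fields]
                self._rows = list(result.rows)
                self.rowcount = len(self._rows)

        def fetchone(self):
            return self._rows.pop(0) if self._rows else None

        def fetchall(self):
            rows, self._rows = self._rows, []
            return rows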
-
Dict Cursor in ultraMySQL
How can I get results as a dict keyed by column name, the way MySQLdb's DictCursor does?

    import umysql
    cnn = umysql.Connection()
    cnn.connect(DB_HOST, 3306, DB_USER, DB_PASSWD, DB_DB)
    q = cnn.query("select * from test_table")
    q.rows[0]
    (1, u'abc', u'def', 0, datetime.datetime(2013, 8, 13, 14, 25, 46), 0, datetime.datetime(2012, 8, 13, 14, 26, 3), 1)

So here q.rows[0] gives a tuple.
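There is no built-in dict cursor, but one can be emulated by zipping the column names with each row (a hedged sketch; it assumes each entry in q.fields has the column name as its first element):

    # Build dict rows by pairing column names with the tuple values.
    names = [f[0] for f in q.fields]
    dict_rows = [dict(zip(names, row)) for row in q.rows]
    print(dict_rows[0])  # e.g. {'id': 1, 'name': u'abc', ...}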
-
reserved identifier violation
I would like to point out that identifiers like "__UMCONNECTION_H___" and "__PACKETREADER_H__" do not fit the naming conventions of the C/C++ language standards: identifiers containing double underscores, or beginning with an underscore and an uppercase letter, are reserved for the implementation. Would you like to adjust your selection of unique names?
-
Socket receive buffer full
I have a couple of simple loops that calculate a bunch of values and then try to insert them into MySQL, something like this:

    for i in items:
        # compare i to every other item, returned as a dictionary
        calcedVals = calculateValues(items, i)
        for j in calcedVals:
            dbconn.query('INSERT INTO tableName VALUES (%s, %s, %s)' % (i, j, calcedVals[j]))
where i and j are INTs in MySQL, and each calculated comparison is a DOUBLE in MySQL. There are 11,396 items, so it would be inserting about 11,396 * 11,395 (129,857,420) rows, but it only inserted 1,525,202 (it was on the 134th i and the 9,667th j). Then umysql gave this error and essentially quit:

    umysql.Error: (0, 'Socket receive buffer full')

Do you know what could have caused that? Do you think I should just put a try..except around it, and close and reopen the connection if that happens? Although I'm not sure that would fix it, since I don't see any references to m_reader in Connection::close().

Thanks, Paulie
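A sketch of the close-and-reopen approach being asked about (hedged: reconnect_db is a hypothetical helper, and this assumes reconnecting actually clears the driver's receive buffer, which the question itself doubts; the two-argument query(sql, args) form used here also appears in a later comment and avoids manual string formatting):

    import umysql

    def reconnect_db():
        cnn = umysql.Connection()
        cnn.connect(DB_HOST, DB_PORT, DB_USER, DB_PASS, DB_NAME)
        return cnn

    try:
        dbconn.query('INSERT INTO tableName VALUES (%s, %s, %s)', (i, j, calcedVals[j]))
    except umysql.Error:
        try:
            dbconn.close()
        except Exception:
            pass
        dbconn = reconnect_db()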
-
Native gevent support
I would love to use ultramysql in my project, except for this one thing: "Compatible with gevent through monkey patching". Unfortunately, gevent's monkey patching breaks some unsuspecting third-party code; after gevent.monkey.patch_all(), strange failures started to show up in HBase-Thrift, for example.
Can there be an option in ultramysql to make it natively gevent-aware without monkey-patching?
Many thanks!
-
Convert DECIMAL columns to float and enlarge the query buffer automatically
- DECIMAL MySQL columns were retrieved as strings. I changed that to float, since that's a more appropriate type to retrieve them as.
- The PacketReader buffer was allocated when the object was created and never changed its size. For large queries (SELECT * FROM large_table) this happened:

    // Socket buffer got full!
    setError("Socket receive buffer full", 0, UME_OTHER);
    return false;

which ended in a forced exit of the Python script. I added char * PacketReader::resizeBuffer(size_t new_size) to make the buffer resizable; when the buffer fills up during a big query, bool Connection::readSocket() now automatically grows it to twice its size. A big query can expect more data, so doubling saves further allocations.
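The growth strategy, illustrated in Python (a hedged sketch of the idea only; the actual change lives in the C++ PacketReader and Connection classes):

    def read_into(sock, buf, used):
        # Double the bytearray whenever it is full, then append newly read bytes.
        if used == len(buf):
            buf.extend(bytearray(len(buf)))  # grow to twice the current size
        n = sock.recv_into(memoryview(buf)[used:])
        return used + n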
-
Fix spurious 'Socket receive buffer full' errors
(This may be related to #34.)
In some cases, it was possible for the buffer to be reported as full when in fact it wasn't: it had lots of space at the beginning, but the write cursor had reached the end. The skip method never detected the read and write pointers aligning, and so never reset the buffer. This would happen when reading a large result set from the server very quickly; I suspect its occurrence depends on factors like network bandwidth and latency, as well as the server's net_buffer_length.

This builds on my previous PR.
-
Use standard Unix-style file separator in MANIFEST.in
The current backslash separator in MANIFEST.in prevents creation of source distributions / RPMs on Linux. Changing it to the more standard forward slash fixes the issue, i.e.:
before this change:
    [steve@localhost ~/src/ultramysql (master)]$ python setup.py sdist
    running sdist
    running check
    reading manifest template 'MANIFEST.in'
    warning: no files found matching 'python\*.c'
    warning: no files found matching 'python\*.h'
    warning: no files found matching 'lib\*.cpp'
    warning: no files found matching 'lib\*.h'
    writing manifest file 'MANIFEST'
    creating umysql-2.61
    ...
    ...
    Creating tar archive
    removing 'umysql-2.61' (and everything under it)
after this change:
    [steve@localhost ~/src/ultramysql (master)]$ python setup.py sdist
    running sdist
    running check
    reading manifest template 'MANIFEST.in'
    writing manifest file 'MANIFEST'
    creating umysql-2.61
    creating umysql-2.61/lib
    creating umysql-2.61/python
    making hard links in umysql-2.61...
    hard linking LICENSE -> umysql-2.61
    hard linking README -> umysql-2.61
    hard linking setup.py -> umysql-2.61
    hard linking ./lib/Connection.cpp -> umysql-2.61/./lib
    hard linking ./lib/PacketReader.cpp -> umysql-2.61/./lib
    hard linking ./lib/PacketWriter.cpp -> umysql-2.61/./lib
    hard linking ./lib/SHA1.cpp -> umysql-2.61/./lib
    hard linking ./lib/capi.cpp -> umysql-2.61/./lib
    hard linking ./python/io_cpython.c -> umysql-2.61/./python
    hard linking ./python/umysql.c -> umysql-2.61/./python
    hard linking lib/Connection.h -> umysql-2.61/lib
    hard linking lib/PacketReader.h -> umysql-2.61/lib
    hard linking lib/PacketWriter.h -> umysql-2.61/lib
    hard linking lib/SHA1.h -> umysql-2.61/lib
    hard linking lib/mysqldefs.h -> umysql-2.61/lib
    hard linking lib/socketdefs.h -> umysql-2.61/lib
    hard linking python/umysql.h -> umysql-2.61/python
    Creating tar archive
    removing 'umysql-2.61' (and everything under it)
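The corrected MANIFEST.in would then read roughly as follows (hedged: reconstructed from the warning messages above rather than copied from the repository):

    include python/*.c
    include python/*.h
    include lib/*.cpp
    include lib/*.h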
-
query('SELECT...) returns tuple (0L, 0L) instead of rows/fields, sometimes
Versions: umysql-2.61, python-2.7.6, gevent-1.0.1.
query('SELECT...) returns tuple (0L, 0L) sometimes, as if it is query('INSERT...) or query('UPDATE...).
In almost all cases exactly the same query returns the expected result.rows and result.fields. But then it suddenly returns the tuple (0L, 0L), and this behaviour can be cured by reconnecting to the DB. I hit this bug several times a day, so I can add any debug code - please advise.
Current workaround:
    result = db_conn.query(sql, values)
    if sql.lstrip().startswith('SELECT') and isinstance(result, tuple):
        log.error('reconnecting to db on tuple SELECT: {}'.format(result))  # Logs: (0L, 0L)
        try:
            db_conn.close()
        except Exception:
            pass
        db_conn = umysql.Connection()
        db_conn.connect(...)
        return db_conn.query(sql, values)  # Normal "result.rows" this time.
    return result
-
PyPy support
Replaces PyObject_Malloc/Free with their PyMem counterparts; PyPy does not support the PyObject variants. Arguably, based on the pymalloc documentation, the PyMem variants are a better fit for this use case anyway (PyObject_ being intended for "small" allocations).

Disables the use of CP1250 under PyPy because it lacks the PyUnicode_Encode function.

There are no new test failures. (testConnectWithWrongDB fails with a 1044 error for me, and not 1049, with both the original code under CPython and this code under PyPy.)
Owner: ESN Social Software

Related projects
Pure-python PostgreSQL driver
pg-purepy pg-purepy is a pure-Python PostgreSQL wrapper based on the anyio library. A lot of this library was inspired by the pg8000 library. Credits
Pure Python MySQL Client
PyMySQL Table of Contents Requirements Installation Documentation Example Resources License This package contains a pure-Python MySQL client library,
A Relational Database Management System for a miniature version of Twitter written in MySQL with CLI in python.
Mini-Twitter-Database This was done as a database design course project at Amirkabir university of technology. This is a relational database managemen
python-beryl, a Python driver for BerylDB.
ClickHouse Python Driver with native interface support
ClickHouse Python Driver ClickHouse Python Driver with native (TCP) interface support. Asynchronous wrapper is available here: https://github.com/myma
DataStax Python Driver for Apache Cassandra
DataStax Driver for Apache Cassandra A modern, feature-rich and highly-tunable Python client library for Apache Cassandra (2.1+) and DataStax Enterpri
PyMongo - the Python driver for MongoDB
PyMongo Info: See the mongo site for more information. See GitHub for the latest source. Documentation: Available at pymongo.readthedocs.io Author: Mi
Motor - the async Python driver for MongoDB and Tornado or asyncio
Motor Info: Motor is a full-featured, non-blocking MongoDB driver for Python Tornado and asyncio applications. Documentation: Available at motor.readt
Neo4j Bolt driver for Python
Neo4j Bolt Driver for Python This repository contains the official Neo4j driver for Python. Each driver release (from 4.0 upwards) is built specifical
MySQL database connector for Python (with Python 3 support)
mysqlclient This project is a fork of MySQLdb1. This project adds Python 3 support and fixed many bugs. PyPI: https://pypi.org/project/mysqlclient/ Gi
A library for Python made by me, to make the use of MySQL easier and more pythonic
my_ezql A library for Python made by me, to make the use of MySQL easier and more pythonic. This library was made by Tony Hasson, a 25-year-old student
A simple Python tool to transfer data from MySQL to SQLite 3.
MySQL to SQLite3 A simple Python tool to transfer data from MySQL to SQLite 3. This is the long overdue complimentary tool to my SQLite3 to MySQL. It
A Python script to load the CNPJ files from the Receita Federal's public data into MySQL.
cnpj-mysql A Python script to load the CNPJ files from the Receita Federal's public data into MySQL. Public CNPJ data on the Receit
Implementing basic MySQL CRUD (Create, Read, Update, Delete) queries, using Python.
MySQL with Python Implementing basic MySQL CRUD (Create, Read, Update, Delete) queries, using Python. We can connect to a MySQL database hosted locall
Python MYSQL CheatSheet.
Python MYSQL CheatSheet Python mysql cheatsheet. Install Required Windows(WAMP) Download and Install from HERE Linux(LAMP) install packages. sudo apt
Creating a python package to convert /transfer excelsheet data to a mysql Database Table
aiomysql is a library for accessing a MySQL database from the asyncio
aiomysql aiomysql is a "driver" for accessing a MySQL database from the asyncio (PEP-3156/tulip) framework. It depends on and reuses most parts of PyM