Question stringlengths 25 7.47k | Q_Score int64 0 1.24k | Users Score int64 -10 494 | Score float64 -1 1.2 | Data Science and Machine Learning int64 0 1 | is_accepted bool 2
classes | A_Id int64 39.3k 72.5M | Web Development int64 0 1 | ViewCount int64 15 1.37M | Available Count int64 1 9 | System Administration and DevOps int64 0 1 | Networking and APIs int64 0 1 | Q_Id int64 39.1k 48M | Answer stringlengths 16 5.07k | Database and SQL int64 1 1 | GUI and Desktop Applications int64 0 1 | Python Basics and Environment int64 0 1 | Title stringlengths 15 148 | AnswerCount int64 1 32 | Tags stringlengths 6 90 | Other int64 0 1 | CreationDate stringlengths 23 23 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm wondering if there is any built-in way in Python to test a MySQL server connection. I know I can use PyMySQL, MySQLdb, and a few others, but if the user does not already have these dependencies installed my script will not work. How can I write a Python script to test a MySQL connection without requiring external d... | 0 | 2 | 1.2 | 0 | true | 31,524,504 | 0 | 244 | 1 | 0 | 0 | 31,524,210 | Python distributions do not include support for MySQL, which is only available by installing a third-party module such as PyMySQL or MySQLdb. The only relational support included in Python is for the SQLite database (in the shape of the sqlite3 module).
There is, however, nothing to stop you distributing a third-party ... | 1 | 0 | 0 | Test MySQL Connection with default Python | 1 | python,mysql | 0 | 2015-07-20T18:54:00.000 |
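A dependency-free connectivity check can be sketched with the standard-library socket module. Note this only verifies that the TCP port is reachable, not that credentials are valid; the host and port below are placeholder assumptions:

```python
import socket

def mysql_reachable(host, port=3306, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. mysql_reachable("db.example.com") -> True if something listens on 3306
```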
I've been trying to do this and I really have no clue. I've searched a lot and I know that I can merge the files easily with VBA or other languages, but I really want to do it with Python.
Can anyone get me on track? | 2 | 1 | 1.2 | 0 | true | 31,531,821 | 0 | 262 | 1 | 0 | 0 | 31,530,728 | I wish there were straightforward support in openpyxl/xlsxwriter for copying sheets across different workbooks.
However, I see you would have to mash up a recipe using a couple of libraries:
One for reading the worksheet data, and
Another for writing data to a unified xlsx
For both of the above there are lots of opt... | 1 | 0 | 1 | Merge(Combine) several .xlsx with one worksheet into just one workbook (Python) | 1 | python,excel | 0 | 2015-07-21T05:01:00.000 |
I have an excel file with 234 rows and 5 columns. I want to create an array for each column so that when I can read each column separately in xlrd. Does anyone can help please? | 1 | 0 | 0 | 1 | false | 31,540,754 | 0 | 898 | 1 | 0 | 0 | 31,540,437 | I might, but as a former user of the excellent xlrd package, I really would recommend switching to pyopenxl. Quite apart from other benefits, each worksheet has a columns attribute that is a list of columns, each column being a list of cells. (There is also a rows) attribute.
Converting your code would be relatively pa... | 1 | 0 | 0 | Creating arrays in Python by using excel data sheet | 3 | python | 0 | 2015-07-21T13:27:00.000 |
I'm using Python 2.7 with MongoDB as my database (actually it doesn't matter which database I use).
In my database I have millions of documents, and from time to time I need to iterate over all of them.
It's not realistic to pull all the documents in one query because that will kill the memory; instead I pull each iteration... | 0 | 0 | 0 | 0 | false | 31,543,147 | 0 | 48 | 1 | 0 | 0 | 31,542,307 | The only chance you have is to take some sample documents to calculate their average size. The more difficult part is knowing what the available memory is, keeping in mind that there are other processes that consume RAM in parallel!
So even when you take this road, you need to keep an amount of ram free. I doubt that t... | 1 | 0 | 1 | Decide how many documents to pull from Database for memory utilization | 1 | python,sql,mongodb,memory-management,mongoengine | 0 | 2015-07-21T14:44:00.000 |
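The arithmetic suggested above (average sampled document size against a RAM budget, with some headroom left free) can be sketched as follows; the 50% safety factor is an arbitrary assumption, not a recommendation:

```python
def docs_per_batch(avg_doc_bytes, ram_budget_bytes, safety=0.5):
    """How many documents fit in the budget while leaving a safety margin free."""
    usable = ram_budget_bytes * safety
    return max(1, int(usable // avg_doc_bytes))

# e.g. 1 KiB average docs against a 1 MiB budget -> 512 documents per batch
```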
I used to use manage.py sqlall app to dump the database to sql statements. While, after upgrading to 1.8, it doesn't work any more.
It says:
CommandError: App 'app' has migrations. Only the sqlmigrate and
sqlflush commands can be used when an app has migrations.
It seems there is not a way to solve this.
I need to ... | 1 | 1 | 1.2 | 0 | true | 31,545,842 | 1 | 309 | 1 | 0 | 0 | 31,545,025 | You can dump the db directly with mysqldump as allcaps suggested, or run manage.py migrate first and then it should work. It's telling you there are migrations that you have yet to apply to the DB. | 1 | 0 | 0 | Django: How to dump the database in 1.8? | 2 | python,django,django-models | 0 | 2015-07-21T16:49:00.000 |
I've been struggling with "sqlite3.OperationalError database is locked" all day....
Searching around for answers to what seems to be a well known problem I've found that it is explained most of the time by the fact that sqlite does not work very nice in multithreading where a thread could potentially timeout waiting ... | 3 | 9 | 1 | 0 | false | 31,547,325 | 1 | 4,653 | 1 | 0 | 0 | 31,547,234 | I have had a lot of these problems with Sqlite before. Basically, don't have multiple threads that could, potentially, write to the db. If you this is not acceptable, you should switch to Postgres or something else that is better at concurrency.
Sqlite has a very simple implementation that relies on the file system for... | 1 | 0 | 0 | Django sqlite database is locked | 2 | python,django,sqlite | 0 | 2015-07-21T18:47:00.000 |
I'm working with XLSX files with pivot tables and writing an automated script to parse and extract the data. I have multiple pivot tables per spreadsheet with cost categories, their totals, and their values for each month etc. Any ideas on how to use openpyxl to parse each pivot table? | 0 | 0 | 0 | 0 | false | 31,556,316 | 0 | 1,122 | 1 | 0 | 0 | 31,551,135 | This is currently not possible with openpyxl. | 1 | 0 | 0 | Extracting data from excel pivot tables using openpyxl | 1 | python,excel,pivot-table,xlsx,openpyxl | 0 | 2015-07-21T23:05:00.000 |
I have an excel file composed of several sheets. I need to load them as separate dataframes individually. What would be a similar function as pd.read_csv("") for this kind of task?
P.S. due to the size I cannot copy and paste individual sheets in excel | 4 | 0 | 0 | 1 | false | 31,582,822 | 0 | 15,976 | 1 | 0 | 0 | 31,582,821 | exFile = pd.ExcelFile(f) # load the file f
data = exFile.parse() # this creates a dataframe out of the first sheet in the file | 1 | 0 | 0 | How to open an excel file with multiple sheets in pandas? | 3 | python,excel,import | 0 | 2015-07-23T09:04:00.000 |
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database.
After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ... | 1 | 0 | 0 | 0 | false | 31,583,131 | 0 | 1,226 | 2 | 0 | 0 | 31,582,861 | Sqlite needs to access the provided file. So this is more of a filesystem question rather than a python one. You have to find a way for sqlite and python to access the remote directory, be it sftp, sshfs, ftp or whatever. It entirely depends on your remote and local OS. Preferably mount the remote subdirectory on your ... | 1 | 0 | 0 | Remotely accessing sqlite3 in Django using a python script | 2 | python,django,sqlite | 0 | 2015-07-23T09:07:00.000 |
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database.
After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ... | 1 | 3 | 1.2 | 0 | true | 31,583,957 | 0 | 1,226 | 2 | 0 | 0 | 31,582,861 | Leaving aside the question of whether it is sensible to run a production Django installation against sqlite (it really isn't), you seem to have forgotten that, well, you are actually running Django. That means that Django can be the main interface to your data; and therefore you should write code in Django that enables... | 1 | 0 | 0 | Remotely accessing sqlite3 in Django using a python script | 2 | python,django,sqlite | 0 | 2015-07-23T09:07:00.000 |
I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE US... | 2 | 0 | 0 | 0 | false | 31,866,981 | 1 | 480 | 1 | 1 | 0 | 31,621,373 | The problem was I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me! | 1 | 0 | 0 | Google Analytics Management API - Insert method - Insufficient permissions HTTP 403 | 2 | api,python-2.7,google-analytics,insert,http-error | 0 | 2015-07-24T23:46:00.000 |
I am trying to access the remote database from one Linux server to another which is connected via LAN.
But it is not working. After some time it will generate an error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")`
This error is random; it can be raised at any time.
ea... | 2 | 0 | 1.2 | 0 | true | 31,717,545 | 0 | 1,039 | 1 | 0 | 0 | 31,645,016 | This issue is due to so many pending requests on the remote database.
So in this situation MySQL closes the connection to the running script.
To overcome this situation, put
time.sleep(sec) # where sec is the number of seconds to pause the script
in the script. It will solve this issue without transferring the database to a local serve... | 1 | 0 | 0 | python mysql database connection error | 2 | python,mysql,database-connection,mysql-python,remote-server | 0 | 2015-07-27T04:30:00.000 |
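Sleeping between attempts is easiest to wrap in a small retry helper; a minimal sketch (the attempt count and delay are arbitrary assumptions):

```python
import time

def call_with_retries(fn, attempts=5, delay=2.0):
    """Call fn(); on failure, sleep and retry, re-raising after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```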
I am trying to access the remote database from one Linux server to another which is connected via LAN.
But it is not working. After some time it will generate an error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")`
This error is random; it can be raised at any time.
ea... | 2 | -3 | -0.291313 | 0 | false | 41,724,945 | 0 | 1,039 | 2 | 0 | 0 | 31,645,016 | My solution was to collect more queries for one commit statement if those were insert queries. | 1 | 0 | 0 | python mysql database connection error | 2 | python,mysql,database-connection,mysql-python,remote-server | 0 | 2015-07-27T04:30:00.000 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't se... | 1 | 0 | 0 | 0 | false | 31,662,734 | 1 | 599 | 2 | 0 | 0 | 31,661,485 | 255 characters in a URL is an Excel 2007+ limitation. Try it in Excel.
I think the XLS format allowed longer URLs (so perhaps that is the difference).
Also XlsxWriter doesn't use the HYPERLINK() function internally (although it is available to the user via the standard interface). | 1 | 0 | 0 | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 2 | java,python,excel,apache-poi,xlsxwriter | 0 | 2015-07-27T19:19:00.000 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't se... | 1 | 1 | 0.099668 | 0 | false | 36,582,681 | 1 | 599 | 2 | 0 | 0 | 31,661,485 | Obviously the length limitation of a hyperlink address in .xlsx (using Excel 2013) is 2084 characters. Generating a file with a longer address using POI, repairing it with Excel and saving it will yield an address with a length of 2084 characters.
The Excel UI and .xls files seem to have a limit of 255 characters, as a... | 1 | 0 | 0 | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 2 | java,python,excel,apache-poi,xlsxwriter | 0 | 2015-07-27T19:19:00.000 |
In SQLAlchemy, when I try to query for user by
request.db.query(models.User.password).filter(models.User.email == email).first()
Of course it works with different DB (SQLite3).
The source of the problem is, that the password is
sqlalchemy.Column(sqlalchemy_utils.types.password.PasswordType(schemes=['pbkdf2_sha512']), ... | 0 | 0 | 0 | 0 | false | 31,781,831 | 1 | 84 | 1 | 0 | 0 | 31,733,583 | Actually it was a problem with the Alembic migration; in the migration the column must also be created with PasswordType, not String or any other type | 1 | 0 | 0 | PasswordType not supported in Postgres | 1 | python,postgresql,sqlalchemy | 0 | 2015-07-30T20:38:00.000 |
I need to fetch the data using REST endpoints (which return JSON) and load the data into a Cassandra cluster which is sitting on AWS.
This is a migration effort, which involves millions of records. No access to source DB. Only access to REST End points.
What are the options I have?
What is the programming language ... | 1 | 1 | 0.197375 | 0 | false | 31,738,908 | 0 | 129 | 1 | 0 | 1 | 31,737,396 | Cassandra 2.2.0 gives you the ability to insert and get data as JSON, so you can use that.
For example, to insert JSON data:
CREATE TABLE test.example (
id int PRIMARY KEY,
id2 int,
id3 int
) ;
cqlsh > INSERT INTO example JSON '{"id":10,"id2":10,"id3":10}' ;
To select data as JSON:
cqlsh > SELECT json * FROM exampl... | 1 | 0 | 0 | Data Migration to Cassandra using REST End points | 1 | python,json,rest,cassandra,data-migration | 0 | 2015-07-31T02:59:00.000 |
I'm using a session with autocommit=True and expire_on_commit=False. I use the session to get an object A with a foreign key that points to an object B. I then call session.expunge(a.b); session.expunge(a).
Later, when trying to read the value of b.some_datetime, SQLAlchemy raises a DetachedInstanceError. No attribute ... | 1 | 1 | 1.2 | 0 | true | 31,967,070 | 0 | 573 | 1 | 0 | 0 | 31,746,829 | One of the mapped class's fields had an onupdate attribute, which caused it to expire whenever the object is changed.
The solution is to call session.refresh(myobj) between the flush and the call to session.expunge(). | 1 | 0 | 0 | DetachedInstanceError: SQLAlchemy wants to refresh the DateTime attribute of an expunged instance | 1 | python,sqlalchemy | 0 | 2015-07-31T12:59:00.000 |
I have a question about SQL, especially SQLite3. I have two tables, let's name them main_table and temp_table. These tables are based on the same relational schema so they have the same columns but different rows (values).
Now what I want to do:
For each row of the main_table I want to replace it if there is a row in ... | 1 | 0 | 0 | 0 | false | 31,748,808 | 0 | 334 | 1 | 0 | 0 | 31,748,654 | You have 2 approaches:
Update current rows inside main_table with data from temp_table. The relation will be based on ID.
Add a column to temp_table to mark all rows that have to be transferred to main_table, or add an additional table to store IDs that have to be transferred. Then delete all rows that have to be transferr... | 1 | 0 | 0 | SQL - update main table using temp table | 2 | python,sql,sqlite,sql-update | 0 | 2015-07-31T14:29:00.000 |
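The first approach (update matching rows in main_table from temp_table by ID) can be sketched with a correlated subquery; the table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_table (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE temp_table (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO main_table VALUES (1, 'old'), (2, 'keep');
    INSERT INTO temp_table VALUES (1, 'new'), (3, 'extra');
""")

# Replace each main_table row that has a matching id in temp_table
conn.execute("""
    UPDATE main_table
    SET val = (SELECT t.val FROM temp_table t WHERE t.id = main_table.id)
    WHERE id IN (SELECT id FROM temp_table)
""")
# main_table now holds (1, 'new') and (2, 'keep')
```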
I'm writing an application for topology optimization within the ABAQUS PDE. As I have quite some iterations, in each of which FEM is performed, a lot of data is written to the system -- and thus a lot of time is lost on I/O.
Is it possible to limit the amount of information that gets written into the ODB file? | 0 | 1 | 1.2 | 0 | true | 31,808,849 | 0 | 571 | 1 | 0 | 0 | 31,755,078 | Indeed it's possible. You should check the frequency of your output in the field output section inside the step module. You can configure it in terms of step intervals of time, number of increments, exact amount of outputs, etc.
If you're running your analysis from an inp file, you can add FREQ = X after the *STEP comma... | 1 | 0 | 0 | Limited ODB output in ABAQUS | 1 | python,abaqus,odb | 0 | 2015-07-31T20:57:00.000 |
I have a web application that uses flask and mongodb. I recently downloaded a clone of it from github onto a new Linux machine, then proceeded to run it. It starts and runs without any errors, but when I use a function that needs access to the database, I get this error:
File "/usr/local/lib/python2.7/dist-packages/py... | 0 | 0 | 0 | 0 | false | 31,865,825 | 1 | 1,948 | 1 | 0 | 0 | 31,755,276 | Well, it ended up being an issue with the String specifying the working directory. Once it was resolved I was able to connect to the database. | 1 | 0 | 0 | Cursor Instance Error when connecting to mongo db? | 1 | python,mongodb,flask,pymongo | 0 | 2015-07-31T21:14:00.000 |
I have a string of categories stored in a table. The categories are separated by a ',', so that I can turn the string into a list of strings as
category_string.split(',')
I now want to select all elements of a SQL table which have one of the following categories [category1, category2].
I have many such compariso... | 0 | 1 | 0.197375 | 0 | false | 31,759,707 | 1 | 1,109 | 1 | 0 | 0 | 31,759,266 | I figured it out. Basically one needs to use the like() operator combined with or_()
carl | 1 | 0 | 0 | sql alchemy filter: string split and comparison of list elements | 1 | python,sqlalchemy | 0 | 2015-08-01T07:05:00.000 |
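The same idea can be sketched against the standard-library sqlite3 module: wrap the stored string in delimiters and OR together one LIKE per wanted category. The table and data are invented for illustration; with SQLAlchemy the equivalent would be or_(*[Model.categories.like(p) for p in patterns]):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, categories TEXT)")
conn.executemany(
    "INSERT INTO items (categories) VALUES (?)",
    [("music,sports",), ("news",), ("sports,travel",), ("cooking",)],
)

wanted = ["sports", "news"]
# ',' || categories || ',' turns "a,b" into ",a,b," so ",sports," matches whole tags only
clause = " OR ".join(["',' || categories || ',' LIKE ?"] * len(wanted))
params = ["%%,%s,%%" % w for w in wanted]
ids = [r[0] for r in conn.execute(
    "SELECT id FROM items WHERE " + clause + " ORDER BY id", params)]
# ids -> [1, 2, 3]; the 'cooking' row is excluded
```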
Usually the workflow I have is as follows:
Perform SQL query on database,
Load it into memory
Transform data based on logic foo()
Insert the transformed data to a table in a database.
How should a unit test be written for this kind of workflow? I'm really new to testing.
Anyway, I'm using Python 3.4. | 1 | 0 | 0 | 0 | false | 31,769,998 | 0 | 538 | 1 | 0 | 0 | 31,769,814 | One way to test this kind of workflow is by using a special database just for testing. The test database mirrors the structure of your production database, but is otherwise completely empty (i.e. no data is in the tables). The routine is then as follows
Connect to the test database (and maybe reload its structure)... | 1 | 0 | 0 | How should unit test be written for data transformation? | 2 | python,unit-testing,tdd,integration-testing | 1 | 2015-08-02T08:04:00.000 |
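That routine can be sketched with an in-memory SQLite database standing in for the real one; foo() here is a stand-in transformation, since the real logic isn't shown:

```python
import sqlite3
import unittest

def foo(rows):
    # stand-in transformation: double each value
    return [(i, v * 2) for i, v in rows]

class TransformTest(unittest.TestCase):
    def setUp(self):
        # empty test database mirroring the production structure
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE src (id INTEGER, val INTEGER)")
        self.conn.execute("CREATE TABLE dst (id INTEGER, val INTEGER)")

    def test_transform_roundtrip(self):
        self.conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10), (2, 20)])
        rows = self.conn.execute("SELECT id, val FROM src ORDER BY id").fetchall()
        self.conn.executemany("INSERT INTO dst VALUES (?, ?)", foo(rows))
        result = self.conn.execute("SELECT val FROM dst ORDER BY id").fetchall()
        self.assertEqual(result, [(20,), (40,)])
```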
Say you have a column that contains the values for the year, month and date. Is it possible to get just the year? In particular I have
ALTER TABLE pmk_pp_disturbances.disturbances_natural ADD COLUMN sdate timestamp without time zone;
and want just the 2004 from 2004-08-10 05:00:00. Can this be done with Postgres or mus... | 0 | -3 | -0.197375 | 0 | false | 31,820,790 | 0 | 38 | 1 | 0 | 0 | 31,820,655 | I think no. You're forced to read the entire value of a column. You can divide the date in few columns, one for the year, another for the month, etc. , or store the date on an integer format if you want an aggressive space optimization. But it will doing the database worst about scalability and modifications.
The datab... | 1 | 0 | 0 | Can Postgres be used to take only the portion of a date from a field? | 3 | python,postgresql | 0 | 2015-08-04T22:46:00.000 |
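If the full timestamp is read back into Python, the year is trivially recoverable client-side (and standard SQL's EXTRACT(YEAR FROM sdate) does the same server-side in Postgres):

```python
from datetime import datetime

ts = datetime.strptime("2004-08-10 05:00:00", "%Y-%m-%d %H:%M:%S")
ts.year  # -> 2004
```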
I have an SQLAlchemy DB column which is of type datetime:
type(<my_object>) --> sqlalchemy.orm.attributes.InstrumentedAttribute
How do I reach the actual date in order to filter the DB by weekday() ? | 5 | 6 | 1 | 0 | false | 31,857,049 | 0 | 3,875 | 1 | 0 | 0 | 31,841,054 | I got it:
from sqlalchemy import extract
(extract('dow', <my_column>) == some_day)
dow stands for 'day of week'
extract is an SQLAlchemy function allowing the extraction of any date field from the column object. | 1 | 0 | 0 | Extract a weekday() from an SQLAlchemy InstrumentedAttribute (Column type is datetime) | 1 | python,sqlalchemy,flask-sqlalchemy | 0 | 2015-08-05T19:19:00.000 |
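Note that the numbering differs: PostgreSQL's dow counts Sunday as 0, while Python's datetime.date.weekday() counts Monday as 0 — worth keeping in mind when the same filter value is computed client-side:

```python
from datetime import date

d = date(2015, 8, 5)   # a Wednesday
d.weekday()            # -> 2  (Monday == 0)
d.isoweekday()         # -> 3  (Monday == 1)
# PostgreSQL EXTRACT(DOW FROM ...) would give 3 here (Sunday == 0)
```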
I have several thousand excel documents. All of these documents are 95% the same in terms of column headings. However, since they are not 100% identical, I cannot simply merge them together and upload it into a database without messing up the data.
Would anyone happen to have a library or an example that they've run i... | 0 | 0 | 0 | 0 | false | 31,842,901 | 0 | 177 | 1 | 0 | 0 | 31,842,810 | If a large proportion of them are similar, and this is a one-off operation, it may be worth your while coding the solution for the majority and handling the other documents (or groups of them if they are similar) separately. If using Python to do this you could simply build a dynamic query where the columns that are pre... | 1 | 0 | 0 | Python merge excel documents with dynamic columns | 1 | python,excel,pandas,xlwt | 0 | 2015-08-05T21:04:00.000 |
This will be a bit of a lengthy post, sorry in advance. I have a bit of experience using MongoDB (been awhile) and I'm so-so with Python, but I have a big project and I would like some feedback before spending lots of time coding.
The project involves creating a gallery where individual presentation slides (from apple... | 0 | 0 | 1.2 | 0 | true | 31,867,932 | 0 | 70 | 1 | 0 | 0 | 31,862,957 | Your post's points and questions are quoted below. My comments follow each quote.
By having a database where each slide is a document that contains the filename of the slide, the filename of the preview thumbnail of the
slide, and an array containing searchable tag words, one will be able
to query specific sets o... | 1 | 0 | 0 | Will I be able to implement this design using MongoDB (via pyMongo) | 1 | python,mongodb,pymongo,database | 0 | 2015-08-06T18:14:00.000 |
How can I store python 'list' values into MySQL and access it later from the same database like a normal list?
I tried storing the list as a varchar type and it did store it. However, while accessing the data from MySQL I couldn't access the same stored value as a list; instead it acts as a string. So, accessing... | 3 | 2 | 0.197375 | 0 | false | 31,879,445 | 0 | 5,462 | 1 | 0 | 0 | 31,879,337 | Are you using an ORM like SQLAlchemy?
Anyway, to answer your question directly, you can use json or pickle to convert your list to a string and store that. Then to get it back, you can parse it (as JSON or a pickle) and get the list back.
However, if your list is always a 3 point coordinate, I'd recommend making separa... | 1 | 0 | 0 | storing python list into mysql and accessing it | 2 | python,mysql | 0 | 2015-08-07T13:47:00.000 |
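The JSON route from the answer above can be sketched with the standard library; the serialized string is what would be stored in the VARCHAR/TEXT column:

```python
import json

coords = [12.5, -3.0, 7.25]
stored = json.dumps(coords)      # string ready for a VARCHAR column: '[12.5, -3.0, 7.25]'
restored = json.loads(stored)    # back to a real Python list
```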
Let's say I have the following Microsoft Access Database: random.mdb.
The main thing I'm trying to achieve is to use read_sql() from pandas so that I can work with the data I have using python. How would I approach this? Is there a way to convert the Microsoft Access database to a SQL database... to eventually pass in ... | 0 | 1 | 0.066568 | 0 | false | 31,949,357 | 0 | 2,680 | 1 | 0 | 0 | 31,949,312 | use sql server import export module to convert, but you will need table structure ready in sql server or there may be many other utilities | 1 | 0 | 0 | How to convert a Microsoft Access Database to a SQL database (and then open it with pandas)? | 3 | python,sql,sql-server,ms-access,pandas | 0 | 2015-08-11T18:26:00.000 |
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each reco... | 2 | 1 | 0.066568 | 0 | false | 31,977,326 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | This is out of Django's scope really. Django just translates your Python into an INSERT INTO statement. For the most performance on the Django layer, skipping it entirely (by doing raw SQL) might be best, even though Python processing is pretty fast compared to the IO of an SQL database.
You should rather focus on the database. ... | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each reco... | 2 | 0 | 0 | 0 | false | 31,977,679 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | You can also try to delete any index on the tables (and any other constraint), the recreate the indexes and constraints after the insert.
Updating indexes and checking constraints can slow down every insert. | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each reco... | 2 | 0 | 1.2 | 0 | true | 32,000,816 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | So I found that editing the mysql /etc/mysql/my.cnf file and configuring some of the InnoDB settings significantly increased performance.
I set:
innodb_buffer_pool_size = 9000M (75% of your system RAM)
innodb_log_file_size = 2000M (20%-30% of the above value)
Restarted the MySQL server and this cut down 50 inserts fro... | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
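Independent of server tuning, inserts are usually batched on the Django side too, e.g. one Model.objects.bulk_create(batch) call per chunk. A chunking helper can be sketched as:

```python
def chunked(iterable, size=5000):
    """Yield lists of at most `size` items, e.g. for bulk_create batches."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# for batch in chunked(records, 5000):
#     MyModel.objects.bulk_create(batch)
```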
What is the difference between executing raw SQL on the SQLAlchemy engine and the session? Specifically against a MSSQL database.
engine.execute('DELETE FROM MyTable WHERE MyId IN(1, 2, 3)')
versus
session.execute('DELETE FROM MyTable WHERE MyId IN(1, 2, 3)')
I've noticed that executing the SQL on the session, caus... | 1 | 1 | 0.197375 | 0 | false | 32,419,439 | 0 | 1,516 | 1 | 0 | 0 | 32,020,502 | The reason why MSSQL Server was hanging was not the difference between calling execute on the engine or the session, but that a delete was being called on the table without a commit, followed by a subsequent read. | 1 | 0 | 0 | Executing raw SQL on the SQLAlchemy engine versus a session | 1 | python,sql-server,sqlalchemy | 0 | 2015-08-15T01:02:00.000 |
I am new to Dynamodb and have a requirement of grouping the documents on the basis of a certain condition before performing other operations.
From what I could read on the internet, I figured out that there is no direct way to group DynamoDB documents.
Can anyone confirm if that's true or help out with a solution if tha... | 0 | 1 | 0.197375 | 0 | false | 32,045,125 | 1 | 582 | 1 | 0 | 0 | 32,044,338 | Amazon DynamoDB is a NoSQL database, so you won't find standard SQL capabilities like group by and average().
There is, however, the ability to filter results, so you will only receive results that match your criteria. It is then the responsibility of the calling app to perform grouping and aggregations.
It's really a ... | 1 | 0 | 0 | Does Dynamodb support Groupby/ Aggregations directly? | 1 | python,amazon-web-services,amazon-dynamodb | 0 | 2015-08-17T06:52:00.000 |
I am writing a script to compare the records in a database in my DynamoDB with records in another database in EC2.
I will appreciate any help with iterating through the table in Python. | 1 | 0 | 0 | 0 | false | 48,822,032 | 0 | 1,953 | 1 | 0 | 0 | 32,073,427 | I came across the same need when querying DynamoDB from an AWS Lambda in Python and expected the dataset to exceed the 128MB memory limit. If I can't iterate through it, I'll have to pay extra bucks to AWS. Unfortunately, it seems there is no way to do so except converting the query response to an iterator (which wouldn't ... | 1 | 0 | 0 | How to iterate through a table in AWS DynamoDB? | 1 | python,amazon-web-services,amazon-dynamodb | 0 | 2015-08-18T13:10:00.000 |
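With boto3 the usual pattern is to call scan/query repeatedly, feeding each response's LastEvaluatedKey back as ExclusiveStartKey until it disappears. Sketched here against a stubbed scan callable so the loop itself is clear:

```python
def scan_all(scan):
    """Yield every item from a DynamoDB-style paginated scan callable.

    `scan` mimics boto3's Table.scan: it accepts an optional ExclusiveStartKey
    keyword and returns a dict with 'Items' and, while more pages remain,
    'LastEvaluatedKey'.
    """
    kwargs = {}
    while True:
        page = scan(**kwargs)
        for item in page["Items"]:
            yield item
        key = page.get("LastEvaluatedKey")
        if key is None:
            return
        kwargs = {"ExclusiveStartKey": key}
```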
I created an Excel spreadsheet using Pandas and xlsxwriter, which has all the data in the right rows and columns. However, the formatting in xlsxwriter is pretty basic, so I want to solve this problem by writing my Pandas spreadsheet on top of a template spreadsheet with Pyxl.
First, however, I need to get Pyxl to only... | 1 | 1 | 1.2 | 1 | true | 32,119,557 | 0 | 256 | 1 | 0 | 0 | 32,077,627 | To be honest I'd be tempted to suggest you use openpyxl all the way if there is something that xlsxwriter doesn't do, though I think that it's formatting options are pretty extensive. The most recent version of openpyxl is as fast as xlsxwriter if lxml is installed.
However, it's worth noting that Pandas has tended to ... | 1 | 0 | 0 | Use Pyxl to read Excel spreadsheet data up to a certain row | 1 | python,excel,openpyxl | 0 | 2015-08-18T16:17:00.000 |
I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command.
On the other hand, fo... | 0 | 1 | 0.099668 | 0 | false | 32,235,411 | 1 | 143 | 1 | 1 | 0 | 32,213,796 | No, there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work. | 1 | 0 | 0 | Twisted + Django as a daemon process plus Django + Apache | 2 | python,django,sqlite,twisted,daemon | 0 | 2015-08-25T20:49:00.000 |
I made a sheet with a graph using python and openpyxl. Later on in the code I add some extra cells that I would also like to see in the graph. Is there a way that I can change the range of cell that the graph is using, or maybe there is another library that lets me do this?
Example:
my graph initially uses columns... | 0 | 0 | 0 | 0 | false | 32,256,294 | 0 | 1,178 | 1 | 0 | 0 | 32,254,733 | At the moment it is not possible to preserve charts in existing files. With rewrite in version 2.3 of openpyxl the groundwork has been laid that will make this possible. When it happens will depend on the resources available to do the work. Pull requests gladly accepted.
In the meantime you might be able to find a workaro... | 1 | 0 | 0 | Python Excel, Is it possible to update values of a created graph? | 1 | python,graph,openpyxl | 0 | 2015-08-27T16:20:00.000 |
Anybody know of any currently worked on projects that wire up MongoDB to the most recent version of Django? mongoengine's Django module github hasn't been updated in 2 years (and I don't know if I can use its regular module with Django) and django-nonrel uses Django 1.6. Anybody tried using django-nonrel with Django 1.... | 0 | 0 | 0 | 0 | false | 32,260,215 | 1 | 210 | 1 | 0 | 0 | 32,260,031 | If you are using mongoengine, there is no need for django-nonrel. You can directly use the latest Django versions. | 1 | 0 | 0 | Django: MongoDB engine for Django 1.8 | 1 | python,django,mongodb | 0 | 2015-08-27T21:50:00.000 |
My original purpose was to bulk insert into my db
I tried pyodbc, sqlalchemy and ceodbc to do this with the executemany function, but my DBA checked and they execute each row individually.
His solution was to run a procedure that receives a table (user-defined data type) as a parameter and loads it into the real table.
The problem ... | 2 | 0 | 0 | 0 | false | 32,305,711 | 0 | 826 | 1 | 0 | 0 | 32,297,244 | SQLAlchemy bulk operations don't really insert in bulk; it's written in the docs.
And we've checked it with our DBA.
Thank you, we'll try the XML. | 1 | 0 | 0 | Python mssql passing procedure user defined data type as parameter | 1 | sql-server,python-2.7,sqlalchemy,pyodbc,pymssql | 0 | 2015-08-30T13:51:00.000 |
I would like to read a column of dates from an SQL database. However, the format of the date in the database is something like 27-Jan-13, which is day-month-year. When I read this column using a peewee DateField it is read in a format which cannot be compared later using datetime.date.
Can anyone help me solve the issue? | 0 | 0 | 0 | 0 | false | 32,429,590 | 0 | 366 | 1 | 0 | 0 | 32,303,115 | You need to store the data in the database using the format %Y-%m-%d. When you extract the data you can present it in any format you like, but to ensure the data is sorted correctly (and recognized by SQLite as a date) you must use the %Y-%m-%d format (or unix timestamps if you prefer that way). | 1 | 0 | 0 | Reading DateField with specific format from SQL database using peewee | 1 | python,peewee,datefield | 0 | 2015-08-31T02:14:00.000 |
I know there are various ETL tools available to export data from oracle to MongoDB but i wish to use python as intermediate to perform this. Please can anyone guide me how to proceed with this?
Requirement:
Initially I want to add all the records from Oracle to MongoDB, and after that I want to insert only newly inserte... | 1 | 0 | 0 | 0 | false | 32,305,243 | 0 | 889 | 1 | 0 | 0 | 32,305,131 | To answer your question directly:
1. Connect to Oracle
2. Fetch all the delta data by timestamp or id (first time is all records)
3. Transform the data to json
4. Write the json to mongo with pymongo
5. Save the maximum timestamp / id for next iteration
Keep in mind that you should think about the data model consi... | 1 | 0 | 0 | Export from Oracle to MongoDB using python | 1 | python,oracle,mongodb | 0 | 2015-08-31T06:29:00.000 |
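The five steps above can be sketched as follows. Only the transform in step 3 is shown runnable here; the Oracle fetch and Mongo write (which need cx_Oracle and pymongo plus live servers) are indicated in comments with hypothetical table and column names.

```python
from datetime import date, datetime

def row_to_document(columns, row):
    # Pair Oracle column names with values; lowercase the keys and
    # promote plain dates to datetimes, which pymongo stores natively.
    doc = {}
    for name, value in zip(columns, row):
        if isinstance(value, date) and not isinstance(value, datetime):
            value = datetime(value.year, value.month, value.day)
        doc[name.lower()] = value
    return doc

# Hypothetical delta load (steps 1, 2, 4 and 5) -- requires cx_Oracle and pymongo:
# cursor.execute("SELECT * FROM orders WHERE updated_at > :ts", ts=last_ts)
# cols = [d[0] for d in cursor.description]
# mongo_db.orders.insert_many(row_to_document(cols, r) for r in cursor)
# last_ts = max_timestamp_seen  # persist for the next iteration (step 5)
```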
We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.
Eventually we would move all our data there, however initially focusing on the archived data as a test.
Our language of choice is Python, and have been able to transfer data from mysql to the data... | 6 | 7 | 1.2 | 0 | true | 33,367,328 | 1 | 3,726 | 1 | 1 | 0 | 32,316,088 | There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.
You could always write a quick script using a local queue that parallelizes the work.
The basic gist wo... | 1 | 0 | 0 | Is it possible to Bulk Insert using Google Cloud Datastore | 3 | python,mysql,google-cloud-datastore | 0 | 2015-08-31T16:47:00.000 |
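A minimal sketch of the "local queue that parallelizes the work" idea, using a thread pool from the standard library; put_entity here stands in for whatever Datastore write call you use (hypothetical, not a real Datastore API):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_upload(rows, put_entity, workers=8):
    # Fan the rows out across a pool of workers; each worker calls
    # put_entity(row), which would wrap the real Datastore put call.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces all futures to finish (and re-raises any errors)
        list(pool.map(put_entity, rows))
```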
I am doing a mini-project on Web-Crawler+Search-Engine. I already know how to scrape data using Scrapy framework. Now I want to do indexing. For that I figured out Python dictionary is the best option for me. I want mapping to be like name/title of an object (a string) -> the object itself (a Python object).
Now the pr... | 1 | 1 | 0.099668 | 0 | false | 32,334,348 | 0 | 339 | 1 | 0 | 0 | 32,325,390 | If you want to store dynamic data in a database, here are a few options. It really depends on what you need out of this.
First, you could go with a NoSQL solution, like MongoDB. NoSQL allows you to store unstructured data in a database without an explicit data schema. It's a pretty big topic, with far better guides/... | 1 | 0 | 1 | How to store a dynamic python dictionary in MySQL database? | 2 | python,mysql,dictionary,scrapy | 0 | 2015-09-01T06:58:00.000 |
I'm working on AWS S3 multipart upload, and I am facing the following issue.
Basically I am uploading a file chunk by chunk to S3, and if during that time any write happens to the file locally, I would like to reflect that change in the S3 object that is currently being uploaded.
Here is the procedure that I am following,... | 1 | 3 | 1.2 | 0 | true | 32,352,584 | 1 | 1,069 | 1 | 0 | 1 | 32,348,812 | There is no API in S3 to retrieve a part of a multi-part upload. You can list the parts but I don't believe there is any way to retrieve an individual part once it has been uploaded.
You can re-upload a part. S3 will just throw away the previous part and use the new one in its place. So, if you had the old and new ... | 1 | 0 | 0 | How to read a part of amazon s3 key, assuming that "multipart upload complete" is yet to happen for that key? | 1 | python,amazon-web-services,file-upload,amazon-s3,boto | 0 | 2015-09-02T08:59:00.000
I am trying to migrate django models from sqlite to postgres. I tested it locally and now trying to do the samething with remote database. I dumped the data first then started the application which created the tables in remote database.
Finally I am trying to loaddata but it looks like hanged and no errors.
Is there a ... | 2 | 1 | 0.197375 | 0 | false | 32,385,965 | 1 | 353 | 1 | 0 | 0 | 32,362,384 | I had no solution, so I ran loaddata locally, used pg_dump, ran the dump with psql -f, and restored the data. | 1 | 0 | 0 | manage.py loaddata hangs when loading to remote postgres | 1 | python,django,django-south | 0 | 2015-09-02T20:19:00.000
Is it bad practice to have Django perform migrations on a predominantly Rails web app?
We have a RoR app and have moved a few of the requirements out to Python. One of the devs here has suggested creating some of the latest database migration using Django and my gut says this is a bad idea.
I haven't found any solid st... | 0 | 1 | 0.197375 | 0 | false | 32,450,497 | 1 | 254 | 1 | 0 | 0 | 32,450,413 | I think you need to maintain migrations in one system (in this case, Rails), because it will be difficult to check migrations between two different apps. What will you do if you don't have access to the other app?
But you can store something like db/schema.rb for django tracked in git. | 1 | 0 | 0 | Rails and Django migrations on a shared database | 1 | python,ruby-on-rails,django,postgresql,ruby-on-rails-4 | 0 | 2015-09-08T06:11:00.000 |
Recently, I used Python and Scrapy to crawl article information like 'title' from a blog. Without using a database, the results are fine / as expected. However, when I use SQLalchemy, I received the following error:
InterfaceError:(sqlite3.InterfaceError)Error binding parameter 0
-probably unsupported type.[SQL:u'I... | 2 | 0 | 1.2 | 0 | true | 32,471,731 | 1 | 454 | 1 | 0 | 0 | 32,460,120 | The problem that you're experiencing is that SQLite3 wants a datatype of "String", and you're passing in a list with a unicode string in it.
change:
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()
to
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()[0]
You'll be left wit... | 1 | 0 | 0 | InterfaceError:(sqlte3.InterfaceError)Error binding parameter 0 | 1 | python,xpath,sqlite,scrapy | 0 | 2015-09-08T14:13:00.000 |
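The failure is easy to reproduce with the standard-library sqlite3 module: binding the list that .extract() returns fails with "Error binding parameter 0", while binding its first element (a plain string) succeeds. A sketch with a hypothetical table; on newer Python versions the bad bind may surface as ProgrammingError rather than InterfaceError, so both are caught:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (title TEXT)")  # hypothetical table

title_list = [u"my first post"]  # what .extract() returns: a list

# Binding the whole list fails -- this is the "Error binding parameter 0"
try:
    conn.execute("INSERT INTO article VALUES (?)", (title_list,))
    bound_ok = True
except (sqlite3.InterfaceError, sqlite3.ProgrammingError):
    bound_ok = False

# Binding the first element (a plain string) works
conn.execute("INSERT INTO article VALUES (?)", (title_list[0],))
stored = conn.execute("SELECT title FROM article").fetchone()[0]
```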
I have to write a Python script which will copy a file in S3 to my EBS directory. The problem is that I'm running this Python script from my local machine. Is there any boto function with which I can copy from S3 to EBS without storing it locally? | 0 | 3 | 0.53705 | 0 | false | 32,478,697 | 0 | 830 | 1 | 0 | 1 | 32,478,432 | No. EBS volumes are accessible only on the EC2 instance they're mounted on. If you want to download a file directly from S3 to an EBS volume, you need to run your script on the EC2 instance. | 1 | 0 | 0 | Copy file from S3 to EBS | 1 | python,amazon-web-services,amazon-s3,boto,boto3 | 0 | 2015-09-09T11:33:00.000
My organisation uses Business Objects as a layer over its Oracle database so that people like me (i.e. not in the IT dept) can access the data without the risk of breaking something.
I have a PythonAnywhere account where I have a few dashboards built using Flask.
Each morning, BO sends me an email with the csv files of... | 1 | 1 | 1.2 | 0 | true | 32,530,008 | 0 | 875 | 1 | 0 | 0 | 32,509,768 | PythonAnywhere dev here: we don't support regular FTP, unfortunately. If there was a way to tell BO to send the data via an HTTP POST to a website, then you could set up a simple Flask app to handle that -- but I'm guessing from what you say that it doesn't :-( | 1 | 0 | 0 | Sending csv file via FTP to PythonAnywhere | 1 | ftp,pythonanywhere | 0 | 2015-09-10T19:01:00.000
I am trying to read data from a CSV file into a Postgres table. I have two columns in the table, but there are four fields in the CSV data file. I want to read only two specific columns from the CSV into the table. | 0 | 0 | 0 | 0 | false | 32,553,942 | 0 | 509 | 1 | 0 | 0 | 32,553,773 | Would you know how to do it if there were only those two columns in the CSV file?
If yes, then the simplest solution is to transform the CSV prior to importing into Postgres. | 1 | 0 | 0 | how to copy specific columns from CSV file to postgres table using psycopg2? | 1 | python,sql,postgresql,csv,psycopg2 | 0 | 2015-09-13T19:36:00.000 |
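One way to do that transformation in Python before handing the stream to psycopg2 — a sketch assuming the CSV has a header row; the COPY call at the end uses hypothetical table and column names:

```python
import csv
import io

def project_columns(src, wanted):
    # Read src (a CSV file object with a header row) and return an
    # in-memory CSV stream containing only the wanted columns, in order.
    reader = csv.reader(src)
    header = next(reader)
    idx = [header.index(name) for name in wanted]
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in reader:
        writer.writerow([row[i] for i in idx])
    out.seek(0)
    return out

# Hypothetical psycopg2 usage once the stream is slimmed down:
# slim = project_columns(open("people.csv"), ["name", "age"])
# cur.copy_expert("COPY people (name, age) FROM STDIN WITH CSV", slim)
```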
It's been two days I am trying to work with cx_Oracle. I want to connect to oracle from python. But I am getting "ImportError: DLL load failed: The specified procedure could not be found." error. I have already gone through many posts and tried the things suggested on them, but nothing helped me.
I checked the versions... | 3 | 1 | 0.197375 | 0 | false | 32,578,626 | 0 | 1,699 | 1 | 0 | 0 | 32,561,547 | I was able to sort it out. I had installed the incorrect version of cx_Oracle previously. It was for 12c oracle client. I installed 11g version later and it started working for me.
Note: There is no need to set ORACLE_HOME environment variable.
Oracle client, Python, Windows OS all of them must be of same architecture.... | 1 | 0 | 0 | Not able to import cx_Oracle in python "ImportError: DLL load failed: The specified procedure could not be found." | 1 | oracle,python-2.7,cx-oracle | 0 | 2015-09-14T09:37:00.000 |
Has anyone installed psycopg2 for python 3 on Centos 7? I'm sure it's possible, but when I run:
pip install psycopg2
I get:
Could not find a version that satisfies the requirement pyscopg2 (from versions: )
No matching distribution found for pyscopg2 | 0 | 2 | 0.379949 | 0 | false | 32,576,369 | 0 | 1,254 | 1 | 0 | 0 | 32,576,326 | You have misspelled the name of the library. The correct name is psycopg2 | 1 | 0 | 0 | psycopg2 for python3 on Centos 7 | 1 | python,postgresql,python-3.x,centos,centos7 | 0 | 2015-09-15T01:35:00.000 |
I found this line to help configure Postgresql in web2py but I can't seem to find a good place where to put it :
db = DAL("postgres://myuser:mypassword@localhost:5432/mydb")
Do I really have to write it in all db.py ? | 0 | 2 | 1.2 | 0 | true | 32,618,356 | 1 | 939 | 1 | 0 | 0 | 32,616,625 | Files in the /models folder are executed in alphabetical order, so just put the DAL definition at the top of the first model file that needs to use it (it will then be available globally in all subsequent model files as well as all controllers and views). | 1 | 0 | 0 | web2py database configuration | 2 | python,database,web2py | 0 | 2015-09-16T19:00:00.000 |
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to... | 6 | 1 | 1.2 | 0 | true | 32,637,043 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Well, I found the issue. I have auditlog installed as one of my apps. I removed it and migrate works fine. | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to... | 6 | 0 | 0 | 0 | false | 67,501,508 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | I dropped the database and rebuilt it, then ran py manage.py makemigrations and py manage.py migrate in the PyCharm terminal, which fixed this problem. I think the reason is that django_content_type is one of Django's own tables; if it is missing, migrate cannot run, so the database has to be dropped and rebuilt. | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to... | 6 | 6 | 1 | 0 | false | 32,623,157 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Delete all the migration folders from your apps and delete the database, then migrate your database again.
If this does not work, delete the django_migrations table from the database and add the "name" column to the django_content_type table: ALTER TABLE django_content_type ADD COLUMN name character varying(50) NOT NULL DEFAULT 'anyName'... | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to... | 6 | 2 | 0.066568 | 0 | false | 37,074,120 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Here's what I found/did. I am using django 1.8.13 and python 2.7. The problem did not occur for Sqlite. It did occur for PostgreSQL.
I have an app that uses a GenericForeignKey (which relies on Contenttypes). I have another app that has a model that is linked to the first app via the GenericForeignKey. If I run makemig... | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000
I've a read-only access to a database server dbserver1. I need to store the result set generated from my query running on dbserver1 into another server of mine dbserver2. How should I go about doing that?
Also can I setup a trigger which will automatically copy new entries that will come in to the dbserver1 to dbserve... | 0 | 1 | 1.2 | 0 | true | 32,659,567 | 0 | 449 | 1 | 0 | 0 | 32,659,008 | lad2015 answered the first part. The second part can be infinitely more dangerous as it involves calling outside the Sql Server process.
In the bad old days one would use the xp_cmdshell. These days it may be more worthwhile to create an Unsafe CLR stored procedure that'll call the python script.
But it is very dangero... | 1 | 0 | 0 | Copy database from one server to another server + trigger python script on a database event | 1 | python,sql-server,database-design,triggers,database-cloning | 0 | 2015-09-18T18:45:00.000 |
In the environment, we have an Excel file which includes the raw data in one sheet and a pivot table and charts in another sheet.
I need to append rows to the raw data every day automatically using a Python job.
I am not sure, but there may be some VB Script running on the front end which will refresh the pivot tables.
I used op... | 1 | 3 | 0.53705 | 0 | false | 32,684,112 | 0 | 3,002 | 1 | 0 | 0 | 32,682,336 | You have to save the files with the extension ".xlsm" rather than ".xlsx". The .xlsx format exists specifically to provide the user with assurance that there is no VBA code within the file. This is an Excel standard and not a problem with openpyxl. With that said, I haven't worked with openpyxl, so I'm not sure what yo... | 1 | 0 | 0 | xlsx file extension not valid after saving with openpyxl and keep_vba=true. Which is the best way? | 1 | python,xlsx,openpyxl | 0 | 2015-09-20T17:37:00.000 |
I'm using Python 2.7.5 in Spyder along with Sqlite3 3.6.21. I have noticed the execute method to be pretty slow, pretty much regardless of the size of the database I'm creating. After doing some research, no solution really works for me:
Python 3 is not supported by Spyder yet
updating the Sqlite3 version does not wor... | 0 | 0 | 0 | 0 | false | 32,700,885 | 0 | 157 | 1 | 0 | 0 | 32,682,674 | Indeed, inspired by Colonel Thirty Two's comment above, I just realized that I need to wrap all my operations into one transaction. This was trivial to implement and improved overall efficiency drastically. Thanks once again! | 1 | 0 | 0 | python sqlite3 execute method slow | 1 | python,sqlite | 0 | 2015-09-20T18:12:00.000 |
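The fix boils down to one commit around the whole batch instead of one per statement — a minimal illustration with the standard sqlite3 module (in-memory database here; the speedup is far larger on a real file, where every commit forces a disk sync):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
rows = [(i,) for i in range(1000)]

# Slow pattern: a transaction and commit per row
for r in rows[:10]:
    conn.execute("INSERT INTO t VALUES (?)", r)
    conn.commit()

# Fast pattern: one transaction around the whole batch
conn.executemany("INSERT INTO t VALUES (?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```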
I have a large Excel file (450mb+). I need to replace (,) -> (; or .) for one of my fastload scripts to work. I am not able to open the file at all. Any script would actually involve opening the file, performing operation, saving and closing the file, in that order.
Will a VB script like that work here for the 450mb+ ... | 1 | 0 | 0 | 0 | false | 32,691,381 | 0 | 273 | 1 | 0 | 0 | 32,690,561 | If you have access to a Linux environment (which you might since you mention shell script as one of your options) then just use sed in a terminal or Putty:
sed -i.bak 's/,/;/g' yourfile.excel
Sed streams the text without loading the entire file at once.
-i will make changes to your original file but providing .bak wil... | 1 | 0 | 0 | Replace characters without opening Excel file | 1 | python,excel,shell,vbscript | 0 | 2015-09-21T08:27:00.000 |
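If sed is not an option, the same streaming replacement can be done in Python without loading the 450 MB file into memory. Note this only works on a text export such as CSV, not on a raw .xlsx (which is a zip archive):

```python
import os
import tempfile

def replace_stream(src_path, dst_path, old=",", new=";"):
    # Copy src to dst line by line, so only one line is in memory at a time.
    with open(src_path, "r", encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line.replace(old, new))

# Tiny self-contained demo on a temporary file
fd, src_path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w") as f:
    f.write("3,142\n2,718\n")
dst_path = src_path + ".out"
replace_stream(src_path, dst_path)
with open(dst_path) as f:
    result = f.read()
os.unlink(src_path)
os.unlink(dst_path)
```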
I had posted about this error some time back but need some more clarification on this.
I'm currently building out a Django Web Application using Visual Studio 2013 on a Windows 10 machine (Running Python3.4). While starting out I was constantly dealing with the MySQL connectivity issue, for which I did a mysqlclient pi... | 0 | 1 | 0.066568 | 0 | false | 32,723,517 | 1 | 660 | 1 | 0 | 0 | 32,715,175 | This approach worked ! I was able to install the mysqlclient inside the virtual environment through the following command:-
python -m pip install mysqlclient
Thanks Much..!!!!! | 1 | 0 | 0 | No Module Named MySqlDb in Python Virtual Enviroment | 3 | python,mysql,django | 0 | 2015-09-22T11:02:00.000 |
Because I need to parse and then use the actual data in cells, I open an xlsm in openpyxl with data_only = True.
This has proved very useful. Now though, having the same need for an xlsm that contains formuale in cells, when I then save my changes, the formulae are missing from the saved version.
Are data_only = True a... | 6 | 2 | 0.197375 | 0 | false | 32,776,318 | 0 | 6,128 | 1 | 0 | 0 | 32,772,954 | If you want to preserve the integrity of the workbook, i.e. retain the formulae, then you cannot use data_only=True. The documentation makes this very clear. | 1 | 0 | 0 | How to save in openpyxl without losing formulae? | 2 | python,python-2.7,openpyxl | 0 | 2015-09-25T00:21:00.000
Arrghh... I am trying to use mySQL with Python. I have installed all the libraries for using mySQL, but keep getting the: "ImportError: No module named mysql.connector" for "import mysql.connector", "mysql", etc..
Here is my config:
I have a RHEL server:
Red Hat Enterprise Linux Server release 6.7 (Santiago)
with P... | 0 | -1 | -0.099668 | 0 | false | 32,786,846 | 0 | 544 | 2 | 0 | 0 | 32,786,620 | Nevermind!!!
Apparently I am installing Python and libraries in the right directories and such (I have always used YUM), but apparently there are other versions of Python installed.. need to clean that up.
Running: /usr/bin/python
All the modules worked!
Running: python (Linux finding python in the path somewhere)
Mod... | 1 | 0 | 0 | mySQL within Python 2.7.9 | 2 | python,mysql,linux,python-2.7 | 0 | 2015-09-25T16:24:00.000 |
Arrghh... I am trying to use mySQL with Python. I have installed all the libraries for using mySQL, but keep getting the: "ImportError: No module named mysql.connector" for "import mysql.connector", "mysql", etc..
Here is my config:
I have a RHEL server:
Red Hat Enterprise Linux Server release 6.7 (Santiago)
with P... | 0 | 2 | 1.2 | 0 | true | 32,786,680 | 0 | 544 | 2 | 0 | 0 | 32,786,620 | You should use virtualenv in order to isolate the environment. That way your project libs won't clash with other projects libs. Also, you probably should install the Mysql driver/connector from pip.
Virtualenv is a CLI tool for managing your environment. It is really easy to use and helps a lot. What it does is to crea... | 1 | 0 | 0 | mySQL within Python 2.7.9 | 2 | python,mysql,linux,python-2.7 | 0 | 2015-09-25T16:24:00.000 |
I wrote a Python program which produces invoices in a specific form as .xlsx files using openpyxl. I have the general invoice form as an Excel workbook and my program copies this form and fills up the details about the specific client (eg. client refernce number, price, etc.) which are read from another .txt file.
The ... | 3 | 0 | 0 | 0 | false | 32,790,713 | 0 | 1,165 | 1 | 0 | 0 | 32,788,716 | openpyxl does not support multiple styles within an individual cell. | 1 | 0 | 0 | Multiple styles in one cell in openpyxl | 1 | python,excel,openpyxl | 0 | 2015-09-25T18:45:00.000 |
I want to install py-MySQLdb but I always get the same lib error..
Any suggestions?
Thanks in advance
ImportError: /usr/lib/libz.so.6: unsupported file layout
*** [do-configure] Error code 1
Stop in /usr/ports/databases/py-MySQLdb.
*** [install] Error code 1
Stop in /usr/ports/databases/py-MySQLdb. | 1 | 0 | 0 | 0 | false | 32,839,165 | 0 | 226 | 1 | 0 | 0 | 32,826,804 | It seems like you mixed 32 and 64bit libraries.
I suggest cleaning up the wrong libraries (at least libz seems to be affected) and restoring them from backup/installation medium | 1 | 0 | 0 | FreeBSD 9.3 ImportError while installing py-MySQLdb | 1 | python,freebsd | 0 | 2015-09-28T15:38:00.000 |
I am writing a Python script to fetch and update some data on a remote oracle database from a Linux server. I would like to know how can I connect to remote oracle database from the server.
Do I necessarily need to have an oracle client installed on my server or any connector can be used for the same?
And also if I use... | 1 | 0 | 0 | 0 | false | 32,836,606 | 0 | 6,425 | 1 | 0 | 0 | 32,836,510 | Yes, you definitely need to install an Oracle Client, it even says so in cx_oracle readme.txt.
Another recommendation you can find there is installing an oracle instant client, which is the minimal installation needed to communicate with Oracle, and is the simplest to use.
Other dependencies can usually be found in the... | 1 | 0 | 0 | Python connection to Oracle database | 2 | python,oracle,database-connection,cx-oracle | 0 | 2015-09-29T05:47:00.000 |
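For the connection itself, cx_Oracle accepts an EZConnect-style DSN, so no tnsnames.ora entry is required. A sketch with hypothetical host and credentials; the actual connect still needs cx_Oracle and an Oracle client installed on the Linux server:

```python
def ezconnect_dsn(host, port, service):
    # Build an Oracle EZConnect string, e.g. "dbhost:1521/orcl"
    return "{0}:{1}/{2}".format(host, port, service)

# Hypothetical usage (requires cx_Oracle and an Oracle client):
# import cx_Oracle
# conn = cx_Oracle.connect("scott", "tiger", ezconnect_dsn("dbhost", 1521, "orcl"))
# cur = conn.cursor()
```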
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete.
Does anyone know where the user would come from if the connection string only has driver, server, and database name.
EDIT
I am on Windows ... | 3 | 2 | 0.197375 | 0 | false | 32,915,064 | 0 | 989 | 2 | 0 | 0 | 32,914,037 | Since you're on Windows, a few things you should know:
Using the Driver={SQL Server} only enables features and data types supported by SQL Server 2000. For features up through 2005, use {SQL Native Client}, and for features up through 2008 use {SQL Server Native Client 10.0}.
To view your ODBC connections, go to Start ... | 1 | 0 | 0 | Where does pyodbc get its user and pwd from when none are provided in the connection string | 2 | python,database,permissions,pyodbc | 0 | 2015-10-02T18:57:00.000 |
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete.
Does anyone know where the user would come from if the connection string only has driver, server, and database name.
EDIT
I am on Windows ... | 3 | 3 | 1.2 | 0 | true | 32,951,286 | 0 | 989 | 2 | 0 | 0 | 32,914,037 | I just did a few tests and the {SQL Server} ODBC driver apparently defaults to using Windows Authentication if the Trusted_connection and UID options are both omitted from the connection string. So, your Python script must be connecting to the SQL Server instance using the Windows credentials of the user running the sc... | 1 | 0 | 0 | Where does pyodbc get its user and pwd from when none are provided in the connection string | 2 | python,database,permissions,pyodbc | 0 | 2015-10-02T18:57:00.000 |
In my Google App Engine App, I have a large number of entities representing people. At certain times, I want to process these entities, and it is really important that I have the most up to date data. There are far too many to put them in the same entity group or do a cross-group transaction.
As a solution, I am cons... | 0 | 1 | 1.2 | 0 | true | 32,941,257 | 1 | 60 | 1 | 1 | 0 | 32,915,462 | if, like you say in comments, your lists change rarely and cant use ancestors (I assume because of write frequency in the rest of your system), your proposed solution would work fine. You can do as many get(multi) and as frequently as you wish, datastore can handle it.
Since you mentioned you can handle having that key... | 1 | 0 | 0 | GAE/P: Storing list of keys to guarantee getting up to date data | 2 | python,google-app-engine,google-cloud-datastore,eventual-consistency | 0 | 2015-10-02T20:35:00.000 |
I'm starting a project on my own and I'm having some trouble importing data from IMDb. I've already downloaded everything that's necessary, but I'm kind of a newbie in this Python and command-line stuff, and it's pissing me off because I'm doing my homework (trying to learn how to do these things) but I can't reach it :(... | 0 | 3 | 1.2 | 0 | true | 32,972,532 | 0 | 1,057 | 1 | 0 | 0 | 32,953,669 | Problem solved!
For those who are having the same problem, here it goes:
Download the Java Movie Database. It works with Postgres or MySQL. You'll have to download the Java runtime. After that, open the readme in the directory where you installed the Java Movie Database; all the instructions are there, but I'll help you.
Follow... | 1 | 0 | 0 | How to run imdbpy2sql.py and import data from IMDb to postgres | 1 | python,postgresql,imdb,imdbpy | 0 | 2015-10-05T16:44:00.000 |
I need to change data in a large Excel file (more than 240,000 rows on a sheet). It's possible through win32com.client, but I need to use a Linux OS ...
Please, could you advise something suitable! | 4 | -1 | -0.099668 | 0 | false | 32,997,001 | 0 | 414 | 1 | 0 | 0 | 32,994,822 | If it's raw data, I always export it to a .csv file and work on it directly. CSV is a simple format with one row per line and all the elements on the row separated with commas. Depending on what you want to do, it's not hard to write a python script to edit that. | 1 | 0 | 0 | Change data in large excel file(more than 240 000 rows on sheet) | 2 | python,python-2.7 | 0 | 2015-10-07T14:22:00.000 |
I tried to open and read the contents of the cookie.sqlite file in Firefox.
cookie.sqlie is the database file were all cookies of webpages opened in firefox are stored. When I am trying to access with a python program it is not allowing to read, as cookie.sqlite is locked. How to open and read the contents. | 0 | 0 | 1.2 | 0 | true | 33,078,468 | 0 | 858 | 1 | 0 | 0 | 33,078,383 | That's not a programming question as is -- sqlite sets a flag in the file so that when trying to open the file a second time, the other sqlite instance knows that the file is "dirty", because the first Sqlite is actively modifying it. There's no way around this -- data integrity is the core functionality of databases, ... | 1 | 0 | 0 | how to read cookie.sqlite file in firefox through python program | 1 | python,sqlite,cookies | 0 | 2015-10-12T10:02:00.000 |
I'm not overly familiar with Linux and am trying to run a Python script that is dependent upon Python 3.4 as well as pymssql. Both Python 2.7 and 3.4 are installed (usr/local/lib/[PYTHON_VERSION_HERE]). pymssql is also installed, except it's installed in the Python 2.7 directory, not the 3.4 directory. When I run my... | 1 | 2 | 0.197375 | 0 | false | 35,328,465 | 0 | 5,321 | 1 | 0 | 0 | 33,114,337 | It is better if when you run python3.4 you can have modules for that version.
Another way to get the desire modules running is install pip for python 3.4
sudo apt-get install python3-pip
Then install the module you want
python3.4 -m pip install pymssql | 1 | 0 | 0 | How to install pymssql to Python 3.4 rather than 2.7 on Ubuntu Linux? | 2 | python,linux,pymssql | 0 | 2015-10-13T23:38:00.000 |
I'd like to use the Dropbox API (with access only to my own account) to generate a link to SomeFile.xlsx that I can put in an email to multiple Dropbox account holders, all of whom are presumed to have access to the file. I'd like for the same link, when clicked on, to talk to Dropbox to figure out where SomeFile.xlsx ... | 0 | 1 | 1.2 | 0 | true | 33,136,183 | 0 | 82 | 1 | 0 | 0 | 33,134,816 | No, Dropbox doesn't have an API like this. | 1 | 0 | 0 | Relative link to local copy of Dropbox file | 1 | python,dropbox,dropbox-api | 0 | 2015-10-14T20:18:00.000 |
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 1.2 | 0 | true | 33,176,797 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | A small workaround could be to package your application with cx_freeze or pyinstaller. Then it can run on a machine without installing Python. The downside is of course that the program tends to be a bit bulky in size. | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 0 | 0 | false | 33,176,474 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | It is possible using xlloop. This is a customized client-server approach, where the client is an Excel .xll which must be installed on the client's machine.
The server can be written in many languages, including python, and of course it must be launched on a server that has python installed. Currently the .xll is availabl... | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000 |
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 0 | 0 | false | 33,172,988 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | No, you must install Python, which is needed to interpret the Python functions. | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000
I'm new to using databases in Python and I'm playing around with MySQLdb. I have several methods that will issue database calls. Do I need to go through the database connection steps every time I want to make a call, or is the instance of the database persistent? | 1 | 1 | 1.2 | 0 | true | 33,243,613 | 0 | 32 | 1 | 0 | 0 | 33,243,575 | The connection instance is persistent; you can connect one time and work with the connection as long as you need. | 1 | 0 | 0 | Do I need to call MySQLdb.connect() in every method where I execute a database operation? | 1 | python,mysql,mysql-python | 0 | 2015-10-20T17:57:00.000
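In practice that means opening the connection once (for example in __init__) and reusing it from every method. Sketched here with the standard sqlite3 module, since MySQLdb follows the same DB-API pattern (swap in MySQLdb.connect(...) and %s placeholders):

```python
import sqlite3

class Repository(object):
    """Opens one connection in __init__ and reuses it in every method."""

    def __init__(self, dsn=":memory:"):
        # One connection for the lifetime of the object
        self.conn = sqlite3.connect(dsn)
        self.conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT, v TEXT)")

    def add(self, k, v):
        self.conn.execute("INSERT INTO kv VALUES (?, ?)", (k, v))
        self.conn.commit()

    def get(self, k):
        row = self.conn.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
        return row[0] if row else None
```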
I have the following table
create table players (name varchar(30), playerid serial primary key);
And I am working with the script:
def registerPlayer(name):
"""Registers new player."""
db = psycopg2.connect("dbname=tournament")
c = db.cursor()
player = "insert into players values (%s);"
scores = "in... | 0 | 2 | 0.379949 | 0 | false | 33,248,538 | 0 | 74 | 1 | 0 | 0 | 33,248,482 | Double quotes in SQL do not make strings - they quote table, index, and other object names (e.g., "John Smith" refers to a table named John Smith). Only single-quoted strings are actually strings.
In any case, if you are using query parameters properly (which, in your example code, you seem to be), you should not have to wo... | 1 | 0 | 0 | Quotations not working in PostgreSQL Queries | 1 | python,sql,database,postgresql | 0 | 2015-10-20T23:24:00.000 |
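The distinction the answer draws can be shown concretely. The sketch below uses sqlite3 (bundled with Python) rather than psycopg2, so the placeholder is `?` instead of `%s`; the table mirrors the question's `players` table, and the literal value is illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table players (name varchar(30), playerid integer primary key)")

# Let the driver quote the value via a placeholder (? here; %s in psycopg2):
db.execute("insert into players (name) values (?)", ("John Smith",))

# Single quotes make a string literal; double quotes name an object (the column):
row = db.execute("select 'just a literal', \"name\" from players").fetchone()
print(row)
```

With query parameters, the driver handles all value quoting, so the distinction only matters for literals written directly in the SQL text.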
I filled an Excel sheet with correct float numbers based on the German decimal point format. So, the number 3.142 is correctly written 3,142, and if it is written 3.142 (or '3.142 by declaring it as a text entry in order to avoid English interpretation as 3142), then I want to report an error to the author of the Exc... | 2 | 0 | 0 | 0 | false | 33,275,544 | 0 | 549 | 1 | 0 | 0 | 33,263,378 | When it comes to the value of numbers, openpyxl doesn't care about their formatting, so it will report 3142 in both cases. I don't think coercing this to a string makes any sense at all. | 1 | 0 | 0 | How to read a cell of a sheet filled with floats containing German decimal point | 2 | python,python-3.x,openpyxl | 0 | 2015-10-21T15:31:00.000
I have an EC2 instance and an S3 bucket in different region. The bucket contains some files that are used regularly by my EC2 instance.
I want to programmatically download the files on my EC2 instance (using Python)
Is there a way to do that? | 5 | 0 | 0 | 0 | false | 33,375,622 | 1 | 6,069 | 1 | 0 | 1 | 33,298,821 | As mentioned above, you can do this with Boto. To make it more secure and not worry about the user credentials, you could use IAM to grant the EC2 machine access to the specific bucket only. Hope that helps. | 1 | 0 | 0 | Access to Amazon S3 Bucket from EC2 instance | 5 | python,amazon-web-services,amazon-s3,amazon-ec2,amazon-iam | 0 | 2015-10-23T09:17:00.000 |
I create a database connection in the __init__ method of a Python class and want to make sure that the connection is closed on object destruction.
It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic. | 1 | 1 | 1.2 | 0 | true | 33,306,596 | 0 | 523 | 1 | 0 | 0 | 33,306,517 | It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic.
I won't comment on what's more "pythonic", since that is a highly subjective question.
However, Python doesn't make very strict guarantees on when a destructor is c... | 1 | 0 | 0 | What's the preferred way to close a psycopg2 connection used by Python object? | 1 | python,database-connection,psycopg2 | 0 | 2015-10-23T15:46:00.000 |
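The `__exit__`-based approach the question asks about can be sketched as below. sqlite3 stands in for psycopg2 (for Postgres you would replace `connect()` with `psycopg2.connect("dbname=...")`), and `Database` is an illustrative class name.

```python
import sqlite3

class Database:
    """Context manager owning a DB connection.

    Closing happens deterministically in __exit__, unlike __del__,
    whose timing Python does not guarantee.
    """

    def __init__(self, dsn):
        self.dsn = dsn
        self.conn = None

    def __enter__(self):
        self.conn = sqlite3.connect(self.dsn)
        return self.conn

    def __exit__(self, exc_type, exc, tb):
        self.conn.close()  # runs even if the with-body raised
        return False       # propagate any exception
```

`with Database(":memory:") as conn: ...` closes the connection the moment the block exits, whether it succeeded or raised.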
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database.
I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never de... | 17 | 2 | 0.132549 | 0 | false | 44,014,653 | 1 | 7,061 | 2 | 0 | 0 | 33,385,618 | So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
That option (the managed = False attribute on the model's meta options) seems to meet the requirements.
If not, you'll need to expand the question to say exactly what is special about your model that manag... | 1 | 0 | 0 | django: exclude models from migrations | 3 | python,django,django-models,django-migrations | 0 | 2015-10-28T07:53:00.000 |
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database.
I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never de... | 17 | 1 | 0.066568 | 0 | false | 68,460,381 | 1 | 7,061 | 2 | 0 | 0 | 33,385,618 | You have the correct solution:
used the managed=False Meta option on both Models
It may appear that it is not working but it is likely that you are incorrectly preempting the final result when you see - Create model xxx for models with managed = False when running makemigrations.
How have you been checking/confirming... | 1 | 0 | 0 | django: exclude models from migrations | 3 | python,django,django-models,django-migrations | 0 | 2015-10-28T07:53:00.000 |
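A minimal skeleton of the `managed = False` option both answers point to might look like the following. It is deliberately framework-free so it runs without Django installed — a real model would subclass `django.db.models.Model` — and the class and table names are hypothetical.

```python
# Framework-free sketch: a real model subclasses django.db.models.Model.
class SchemalessRecord:
    class Meta:
        managed = False                # makemigrations emits no schema for this model
        db_table = "external_records"  # hypothetical name of the external table
```

With `managed = False`, Django still lets you query the model, but never creates, alters, or deletes its table in migrations.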
Well, I tried to understand Open Database Connectivity and Python DB-API, but I can't.
ODBC is some kind of standard and Python DB-API is another standard, but why not use just one standard? Or maybe I got these terms wrong.
Can someone please explain these terms and the difference between them as some of the explanati... | 1 | 1 | 0.197375 | 0 | false | 33,404,879 | 0 | 342 | 1 | 0 | 0 | 33,404,837 | There are other programming languages besides python -- java, javascript, ruby, perl, cobol, lisp, smalltalk, go, r, and many, many others. None of them can use the python db-api, but all of them could, potentially, use ODBC. Python offers ODBC for people who come from other languages and already know ODBC, and its own d... | 1 | 0 | 0 | Difference between ODBC and Python DB-API? | 1 | database,odbc,python-db-api | 0 | 2015-10-29T02:15:00.000
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 59,368,220 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | if you are using SPYDER IDE , just try to restart the console or restart the IDE, it works | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 50,157,898 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | I had this same problem just now, and found the reason was my editor (Visual Studio Code) was running against the wrong instance of python; I had it set to run again python bundled with tensorflow, I changed it to my Anaconda python and it worked. | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 49,817,699 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | sudo apt-get install python3-pymysql
This command also worked for me to install the package required for a Flask app to run on Ubuntu 16.x with the WSGI module on an Apache2 server.
By default, WSGI uses Ubuntu's Python 3 installation.
A custom Anaconda installation won't work.
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 57,734,684 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | Just a note:
for Anaconda, install packages with the command:
python setup.py install | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 63,201,272 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | I also got this error recently when using Anaconda on a Mac machine.
Here is what I found:
After running python3 -m pip install PyMySQL, the pymysql module is under /Library/Python/3.7/site-packages
Anaconda wants this module to be under /opt/anaconda3/lib/python3.8/site-packages
Therefore, after copying pymysql module t... | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
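The common thread in these answers is an interpreter mismatch: pip installed pymysql into one Python while the editor or server ran another. A quick diagnostic is to print which interpreter is running and probe (without importing) whether it can see the module; `sys` and `importlib` are standard library, and the install path in the comment is illustrative.

```python
import importlib.util
import sys

# Which interpreter is actually running? pip must install into this one.
print("interpreter:", sys.executable)

# Probe, without importing, whether this interpreter can see pymysql:
spec = importlib.util.find_spec("pymysql")
print("pymysql:", spec.origin if spec else "not found on this interpreter's path")

# If not found, install into exactly this interpreter, e.g.:
#   /usr/bin/python3 -m pip install pymysql   (path is illustrative)
```

Running `<path printed above> -m pip install pymysql` guarantees pip and the failing script agree on the environment.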
I am trying to read data from a text file (the output given by Tesseract OCR) and save it in an Excel file. The problem I am facing is that the text files are in space-separated format, and there are multiple files. Now I need to read all the files and save them in an Excel sheet.
I am using MATLAB to import and... | 0 | 0 | 1.2 | 1 | true | 33,481,202 | 0 | 212 | 1 | 0 | 0 | 33,479,646 | To read a text file in MATLAB you can use fscanf or textscan; then, to export to Excel, you can use xlswrite, which writes directly to the Excel file. | 1 | 0 | 0 | Importing data from text file and saving the same in excel | 1 | matlab,python-2.7,csv,export-to-csv | 0 | 2015-11-02T14:14:00.000
Is it possible to insert many rows into a table using one query in pyhdb? Because when I have millions of records to insert, inserting each record in a loop is not very efficient. | 0 | 0 | 0 | 0 | false | 72,168,012 | 0 | 1,564 | 1 | 0 | 0 | 33,514,183 | pyhdb executemany() is faster than simply execute()
but for larger record sets, even if you divide them into chunks and use executemany(), it still takes significant time.
For better and faster performance use string formatting like values (?, ?, ?...) instead of values('%s', '%s', '%s', ...)
This saves a lot of time that heavy ty... | 1 | 0 | 0 | How to insert many rows into a table using pyhdb? | 3 | python,sap | 0 | 2015-11-04T05:15:00.000
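The DB-API `executemany()` shape the answer refers to can be sketched with sqlite3 (bundled with Python) in place of pyhdb — the cursor interface is the same. Note that, unlike the string-formatting suggestion above, placeholders also avoid SQL injection; the table and row count are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table records (id integer, payload text)")

rows = [(i, "payload-%d" % i) for i in range(10000)]

cur = db.cursor()
# One executemany() call instead of 10000 execute() calls.
cur.executemany("insert into records values (?, ?)", rows)
db.commit()

inserted = db.execute("select count(*) from records").fetchone()[0]
```

For millions of rows, feeding `executemany()` in chunks keeps memory bounded while still batching the driver round trips.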
All of the MySql modules I've found are compatible with Python 2.7 or 3.4, but none with 3.5. Any way I can use a MySql module with the newest Python version?
ANSWER:
The regular Python versions of mysql-connector-python would not work, but the rf version did.
python -m pip install mysql-connector-python-rf | 3 | 3 | 1.2 | 0 | true | 33,524,987 | 0 | 1,498 | 1 | 0 | 0 | 33,524,731 | Python tries hard to be forward compatible. A pure-python module written for 3.4 should work with 3.5; a binary package may work, you just have to try it and see. | 1 | 0 | 0 | Will a MySql module for Python 3.4 work with 3.5? | 1 | mysql,python-3.x | 0 | 2015-11-04T14:45:00.000 |
I'm using openpyxl to apply data validation to all rows that have "Default" in them. But to do that, I need to know how many rows there are.
I know there is a way to do that if I were using the iterable workbook mode, but I also add a new sheet to the workbook, and that is not possible in iterable mode.
Since version openpyxl 2.4 you can also access individual rows and columns and use their length to answer the question.
len(ws['A'])
Though it's worth noting that for data validation for a single column Excel uses 1:1048576. | 1 | 0 | 0 | How to find the last row in a column using openpyxl normal workbook? | 3 | python,excel,openpyxl | 0 | 2015-11-05T10:06:00.000 |
I am using SQLAlchemy to interact with a Postgres database. It is all set up to insert string data. The data I receive is normally UTF-8 and the setup works very well. As an edge case, recently, data came up in the format somedata\xtrailingdata.
SQLAlchemy is attempting to make this entry with somedata complet... | 0 | 0 | 0 | 0 | false | 33,685,308 | 0 | 442 | 2 | 0 | 0 | 33,665,544 | There's nothing Unicode about it. \x is a byte literal prefix and requires a hex value to follow. PostgreSQL supports the \x syntax as well, so it may be PostgreSQL that's dropping it.
Consider escaping all slashes or find-replace on \x before handing to SQLAlchemy | 1 | 0 | 0 | Inserting unicode into string with sqlAlchemy | 2 | python,postgresql,utf-8,sqlalchemy,unicode-string | 0 | 2015-11-12T06:27:00.000 |
I am using SQLAlchemy to interact with a Postgres database. It is all set up to insert string data. The data I receive is normally UTF-8 and the setup works very well. As an edge case, recently, data came up in the format somedata\xtrailingdata.
SQLAlchemy is attempting to make this entry with somedata complet... | 0 | 0 | 0 | 0 | false | 34,233,985 | 0 | 442 | 2 | 0 | 0 | 33,665,544 | The problem turned out to be \x00. When a value containing \x00 is passed to SQLAlchemy, it truncates it to the data preceding \x00. We traced the problem to the C library underneath SQLAlchemy. | 1 | 0 | 0 | Inserting unicode into string with sqlAlchemy | 2 | python,postgresql,utf-8,sqlalchemy,unicode-string | 0 | 2015-11-12T06:27:00.000
On Ubuntu, both python2 and python3 can import sqlite3, but I cannot type sqlite3 at the command prompt to open it; it says sqlite3 is not installed. If I want to use it outside of Python, should I install sqlite3 separately using apt-get, or can I find it in some directory of Python, add it to the path, and use it directly on the command line?
I ... | 1 | 0 | 0 | 0 | false | 48,045,704 | 0 | 16,159 | 1 | 0 | 0 | 33,691,635 | Any Cygwin users who find themselves here: run the Cygwin installation .exe...
Choose "Categories", "Database", and then one item says "sqlite3 client to access sqlite3 databases", or words to that effect. | 1 | 0 | 0 | How to open sqlite3 installed by Python 3? | 3 | python,python-3.x,sqlite | 0 | 2015-11-13T11:23:00.000 |
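Until the CLI is installed (on Ubuntu, `sudo apt-get install sqlite3`; on Cygwin, via the installer as described above), the sqlite3 *module* bundled with Python can serve much the same purpose from a script or a `python3 -c` one-liner. A minimal sketch, with an illustrative table:

```python
import sqlite3

# Version of the SQLite library linked into Python -- no CLI required.
print("SQLite", sqlite3.sqlite_version)

db = sqlite3.connect(":memory:")
db.execute("create table t (x integer)")
db.execute("insert into t values (42)")
print(db.execute("select x from t").fetchone()[0])  # prints 42
```

The module and the CLI wrap the same library, so anything the shell tool can do to a database file, the module can do too.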