I have a very particular problem: I have a deck of cards. The deck is a list of lists of tuples; each internal list is a suit containing card tuples (suit, value), sorted smallest to largest (2-Ace). I would like to find the smallest card in the deck. So basically I want to take the first object from each suit and find the smallest. Short of a horridly ugly for loop, what is the most Pythonic way to do this?
`min` takes a key function. You can use this to compare by each element's first item: ``` min(my_list, key=lambda x: x[0]) ```
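A quick runnable sketch of this, with made-up card data (the suit names and values below are assumptions, not from the question):

```python
# Hypothetical deck: one sorted list per suit, each card a (suit, value) tuple.
deck = [
    [("hearts", 2), ("hearts", 7), ("hearts", 11)],
    [("spades", 3), ("spades", 9)],
    [("clubs", 5), ("clubs", 6)],
]

# Each suit's smallest card is its first element, so compare suits by the
# value of their first card, then take that first card.
smallest = min(deck, key=lambda suit: suit[0][1])[0]
print(smallest)  # ('hearts', 2)
```

Note the `[1]` in the key: with `(suit, value)` tuples, comparing whole tuples would compare the suit names first, so the key compares the numeric value explicitly.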
Let's assume that the deck list is created as below (note that this chained assignment binds all four names to the *same* list object; use separate literals if the suits must be independent): ``` TheHearts = TheSpades = TheDiamonds = TheClubs = [2,3,4,5,6,7,8,9,10,"j","q","k","a"] TheDeck = [TheHearts, TheSpades, TheDiamonds, TheClubs] ``` The following line gives you the first object of the first list: ``` print TheDeck[0][0] ``` Or, if you make a dictionary, you can use the names instead of numbers: ``` TheHearts = TheSpades = TheDiamonds = TheClubs = [2,3,4,5,6,7,8,9,10,"j","q","k","a"] TheDeck = {'TheHearts':TheHearts, 'TheSpades':TheSpades, 'TheDiamonds':TheDiamonds, 'TheClubs':TheClubs} print TheDeck['TheHearts'][0] ```
Python, min among first objects in lists
[ "python", "list", "min", "playing-cards" ]
I have some data that looks like this: ``` A | B 97 |556 257|803 803|257 257|323 556|97 ``` I'm trying to figure out the best way to filter the results so that duplicate rows are removed. For example, it should only show the row 257|803 and not 803|257. What would be the best way to do this?
``` SELECT * FROM T x WHERE x.A < x.B OR NOT EXISTS ( SELECT * FROM T y WHERE y.A = x.B AND y.B = x.A ); ``` The truth table for this strange condition: ``` A | B | (A<B) | (NOT exists) | (A<B OR NOT exists) ---+----+-------+--------------+---------------------- 97 |556 | True | False | True 257|803 | True | False | True 803|257 | False | False | False 257|323 | True | True | True 556|97 | False | False | False ``` Result: ``` a | b -----+----- 97 | 556 257 | 803 257 | 323 (3 rows) ```
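The query and its truth table can be sanity-checked against the sample data with an in-memory SQLite database (a sketch; SQLite's syntax happens to match here):

```python
import sqlite3

# Load the question's sample rows into a throwaway table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (A INTEGER, B INTEGER)")
conn.executemany("INSERT INTO T VALUES (?, ?)",
                 [(97, 556), (257, 803), (803, 257), (257, 323), (556, 97)])

# Keep a row if A < B, or if its mirror image (B, A) doesn't exist.
rows = conn.execute("""
    SELECT * FROM T x
    WHERE x.A < x.B
       OR NOT EXISTS (SELECT * FROM T y WHERE y.A = x.B AND y.B = x.A)
""").fetchall()
print(sorted(rows))  # [(97, 556), (257, 323), (257, 803)]
```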
Try this query for MySQL: ``` select distinct greatest(t1.A, t1.B), least(t1.A, t1.B) from your_table t1 , your_table t2 where t1.B=t2.A and t1.A=t2.B ``` **[SQL Fiddle](http://www.sqlfiddle.com/#!2/96eba/8)** **[Refer to my answer (only the inner query)](https://stackoverflow.com/questions/16342712/sql-query-how-to-get-items-from-one-col-paired-with-another-but-not-visa-versa/16343404#16343404)** **Edit** **SQL Server** version: ``` select * from (select case when t1.A>t1.B then t1.A end as A1, case when t1.A>t1.B then t1.B end as B1 from your_table t1 , your_table t2 where t1.B=t2.A and t1.A=t2.B)t where t.A1 is not null ``` **[SQL Fiddle](http://www.sqlfiddle.com/#!3/96eba/5)**
Filter rows that have the same data in different columns
[ "sql" ]
Given: `customer[id BIGINT AUTO_INCREMENT PRIMARY KEY, email VARCHAR(30), count INT]` I'd like to execute the following atomically: Update the customer if he already exists; otherwise, insert a new customer. In theory this sounds like a perfect fit for [SQL-MERGE](http://en.wikipedia.org/wiki/Merge_(SQL)) but the database I am using doesn't support [MERGE with AUTO\_INCREMENT columns](https://stackoverflow.com/a/6307884/14731). <https://stackoverflow.com/a/1727788/14731> seems to indicate that if you execute a query or update statement against a non-existent row, the database will lock the index thereby preventing concurrent inserts. Is this behavior guaranteed by the SQL standard? Are there any databases that do not behave this way? **UPDATE**: Sorry, I should have mentioned this earlier: the solution must use READ\_COMMITTED transaction isolation *unless* that is impossible in which case I will accept the use of SERIALIZABLE.
Answering my own question since there seems to be a lot of confusion around the topic. It seems that: ``` -- BAD! DO NOT DO THIS! -- insert customer (email, count) select 'foo@example.com', 0 where not exists ( select 1 from customer where email = 'foo@example.com' ) ``` is open to race-conditions (see [Only inserting a row if it's not already there](https://stackoverflow.com/q/3407857/14731)). From what I've been able to gather, the only portable solution to this problem: 1. Pick a key to merge against. This could be the primary key, or another unique key, but it **must** have a unique constraint. 2. Try to `insert` a new row. You must catch the error that will occur if the row already exists. 3. The hard part is over. At this point, the row is guaranteed to exist and you are protected from race-conditions by the fact that you are holding a write-lock on it (due to the `insert` from the previous step). 4. Go ahead and `update` if needed or `select` its primary key.
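A minimal sketch of steps 1-4 using Python's built-in SQLite driver. This is for illustrating the control flow only: SQLite's locking model is far coarser than a server database's, so it says nothing about the concurrency guarantees discussed above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT UNIQUE,  -- step 1: the key we merge against must be unique
    count INTEGER)""")

def upsert(email):
    try:
        # Step 2: try the insert; a duplicate key raises an error.
        conn.execute("INSERT INTO customer (email, count) VALUES (?, 0)",
                     (email,))
    except sqlite3.IntegrityError:
        pass  # Step 3: the row already exists.
    # Step 4: the row is now guaranteed to exist, so update is safe.
    conn.execute("UPDATE customer SET count = count + 1 WHERE email = ?",
                 (email,))

upsert("foo@example.com")
upsert("foo@example.com")
final_count = conn.execute(
    "SELECT count FROM customer WHERE email = ?",
    ("foo@example.com",)).fetchone()[0]
print(final_count)  # 2
```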
This question is asked about once a week on SO, and the answers are almost invariably wrong. Here's the right one. ``` insert customer (email, count) select 'foo@example.com', 0 where not exists ( select 1 from customer where email = 'foo@example.com' ) update customer set count = count + 1 where email = 'foo@example.com' ``` If you like, you can insert a count of 1 and skip the `update` if the inserted rowcount -- however expressed in your DBMS -- returns 1. The above syntax is absolutely standard and makes no assumption about locking mechanisms or isolation levels. If it doesn't work, your DBMS is broken. Many people are under the mistaken impression that the `select` executes "first" and thus introduces a race condition. No: that `select` is *part of* the `insert`. The insert is atomic. There is no race.
Do databases always lock non-existent rows after a query or update?
[ "sql", "insert", "locking", "h2" ]
The PyBluez project seems to be the canonical project for doing Bluetooth in Python (please correct me if I'm wrong). However, the last version is 0.18, from Nov 2009. Issues (currently, I'm most annoyed by issue 43: no support for Python 2.7) aren't being fixed. I do see some activity though, so I wonder about its state. Is PyBluez the canonical way to do Bluetooth? Is there an alternative to PyBluez? (I'm also missing BlueSoleil stack support.)
There is a new version of PyBluez (0.20) that works with Python 2.7 and 3.3: <https://code.google.com/p/pybluez/downloads/list>
There are alternative binaries here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#pybluez> The mailing list appears to not be completely dead: <https://groups.google.com/forum/#!forum/pybluez>
Is PyBluez alive?
[ "python", "bluetooth" ]
Say I have a main module `app.py` which defines a global variable `GLOBVAR = 123`. Additionally this module imports a class `bar` located in another module `foo`: ``` from foo import bar ``` In the main module `app` I now call a method from the class `bar`. Within that method I want to access the value `GLOBVAR` from the main module `app`. One straight-forward way would be to simply pass `GLOBVAR` to the method as parameter. But is there also another solution in Python that allows me to access `GLOBVAR` directly? In module `foo` I tried one of the following: ``` from app import GLOBVAR # option 1 import app.GLOBVAR # option 2 ``` However, both options lead to the following error at runtime: ``` ImportError: cannot import name bar ``` I understand this leads to a cyclic import between `app` and `foo`. So, is there a solution to this in Python, or do I have to pass the value as parameter to the function?
There are many ways to solve the same problem, and passing parameters is generally to be recommended. But if you do have some package-wide global constants, you can do that too. You will want to put these in a whole other module and import that module from both the `app` and `foo` modules. If you are building a package, you can even put these in the `__init__.py` ... but another named module like `settings` or `config` can also be used. For instance, if your package layout is: ``` mypackage/ __init__.py app.py foo.py config.py ``` Then: config.py ``` GLOBVAR = 'something' ``` app.py ``` from mypackage.config import GLOBVAR ``` foo.py ``` from mypackage.config import GLOBVAR ``` If you just put `GLOBVAR` in `__init__.py`, then you would do `from mypackage import GLOBVAR`, which could be prettier if you go for that sort of thing. **EDIT** I'd also recommend using absolute imports even if you are using Python 2: always use the package name explicitly rather than relative imports, for readability and because it makes things easier to split out later if you need to move something to a different package.
You can import a variable from the `__main__` module like this: ``` """ main module """ import foo name = "Joe" foo.say_hi() ``` and `foo.py`: ``` """ foo module, to be imported from __main__ """ import __main__ def say_hi(): print "Hi, %s!" % __main__.name ``` and it looks like this: ``` $ python main.py Hi, Joe! ``` Of course you can not access the variable before you define it. So you may need to put the access to `__main__.name` at function level, where it is evaluated *after* the import. In contrast to the module level, which is evaluated at the time of the import (where the variable not yet exists).
Python: How to access a variable in calling module?
[ "python" ]
I'm trying to run a Windows command-line application from a Python script, with an ini config file in the command, which I suspect isn't being passed when it's executed. The command is `c:\BLScan\blscan.exe test.ini`. The ini file is the config file the application needs to know what parameters to scan with. This is the script I'm using: ``` import subprocess from subprocess import Popen, PIPE cmd = '/blscan/blscan test.ini' p = Popen(cmd , stdout=PIPE, stderr=PIPE) out, err = p.communicate() print "Return code: ", p.returncode print out.rstrip(), err.rstrip() ``` When I use `subprocess.Popen` to call the application, it doesn't look to be reading the ini file. The device line is an indicator that the tuner hasn't been identified from the ini file, so the program is dropping to the default tuner. ``` Return code: 0 BLScan ver.1.1.0.1091-commited Config name: .\test.ini Device 0: TBS 6925 DVBS/S2 Tuner Device number: Total Scan Time = 0.000s Transponders not found ! >>> ``` This is how it looks when run from the DOS shell. ``` C:\BLScan>blscan test.ini BLScan ver.1.1.0.1091-commited Config name: .\test.ini Scan interval 0 From 3400 to 3430 Mhz, Step 5 Mhz, Horizontal, Minimal SR 1000 KS, Maximal SR 10 0000 KS 3400 Mhz ... 3405 Mhz ... 3410 Mhz ... ``` Any advice would be appreciated
When you run this from the DOS shell, your current working directory is `C:\BLScan`, as is obvious from the prompt you show: ``` C:\BLScan>blscan test.ini ``` You can also tell from the error output that it's definitely looking in the current working directory. (Some Windows programs will, e.g., try the same directory as the executable… but you can't count on that, and this one does not.) ``` Config name: .\test.ini ``` So, if your current directory were not `C:\BLScan`, it wouldn't work from the DOS shell either. Try this: ``` C:\BLScan>cd \ C:\>\BLScan\blscan test.ini ``` You will get the exact same error you're getting in Python. If you can't rely on being in C:\BLScan, you have to pass an absolute path. For example, this will work again: ``` C:\>\BLScan\blscan \BLScan\test.ini ``` Python is no different from the shell here. If you give it a relative path like `test.ini`, it will use the current working directory. So, you have the same two options: ``` os.chdir('/blscan') p = subprocess.Popen('blscan test.ini') ``` … or: ``` p = subprocess.Popen(r'\BLScan\blscan \BLScan\test.ini') ```
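As a side note, `subprocess.Popen` also accepts a `cwd=` argument, which avoids changing the whole script's working directory. A portable sketch of the idea (a throwaway directory stands in for `C:\BLScan`, and a Python one-liner stands in for blscan.exe):

```python
import os
import subprocess
import sys
import tempfile

# Create a stand-in "install directory" containing the config file.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "test.ini"), "w") as f:
    f.write("device=TBS 6925\n")

# The child process resolves the relative path "test.ini" against
# cwd=workdir, just as blscan.exe looks in its current directory.
out = subprocess.check_output(
    [sys.executable, "-c", "print(open('test.ini').read().strip())"],
    cwd=workdir)
print(out.decode().strip())  # device=TBS 6925
```

In the question's case that would be something like `Popen(['blscan', 'test.ini'], cwd=r'C:\BLScan', ...)`.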
Try passing the arguments to `subprocess.call` as an array: ``` subprocess.call(["/blscan/blscan.exe", "test.ini"]) ``` Also, based on the command-line versus Python output in your question, double-check that your blscan.exe tool works even when your working directory is different. Maybe it needs to be run from the working directory where blscan.exe is located: ``` os.chdir(r"C:\BLScan") ```
running windows command line program from python with ini file argument
[ "python", "command", "ini" ]
I am working on a PHP page that connects to Oracle. I came across this SQL and I am not sure it is doing what it's supposed to, so I thought I would ask here. The SQL in question is like this: ``` select tableA.id, tableA.name, tableB.details from tableA left join tableB on tableB.id = tableA.id and tableB.logId = '<logged_in_user>' ``` Now, when I log in as a user who does not have entries in tableB, I still get records when this query runs. My hypothesis is that instead of `and` the clause should have been `where`: ``` left join tableB on tableB.id = tableA.id where tableB.logId = '<logged_in_user>' ``` So my two questions are: 1. Am I right? 2. If so, then why does the query return results? What is the `and` clause checking? Could someone explain the difference between using AND vs WHERE in the above query? Thanks in advance!
> Am I right? No. This query: ``` select tableA.id, tableA.name, tableB.details from tableA left join tableB on tableB.id = tableA.id and tableB.logId = '<logged_in_user>' ``` Is very different from: ``` select tableA.id, tableA.name, tableB.details from tableA left join tableB on tableB.id = tableA.id where tableB.logId = '<logged_in_user>' ``` It's the criteria for joining. In the first case, you take from A, then join with B when there's a matching id **and** a logId, else leave details null. In the second, you take from A, then join with B when there's a matching id, else leave details null, *and then you keep only rows from A where there's a B match that has a logId* -- eliminating rows from A in the process, and de facto turning it into an inner join.
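A runnable illustration of the difference, using in-memory SQLite and made-up rows (the data values are assumptions; only the table shape matches the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableA (id INTEGER, name TEXT);
    CREATE TABLE tableB (id INTEGER, details TEXT, logId TEXT);
    INSERT INTO tableA VALUES (1, 'alpha');
    INSERT INTO tableA VALUES (2, 'beta');
    INSERT INTO tableB VALUES (1, 'd1', 'joe');
""")

# Extra condition in ON: every tableA row survives; unmatched B columns
# come back as NULL.
on_rows = conn.execute("""
    SELECT tableA.id, tableB.details FROM tableA
    LEFT JOIN tableB ON tableB.id = tableA.id AND tableB.logId = 'joe'
""").fetchall()
print(sorted(on_rows))  # [(1, 'd1'), (2, None)]

# Same condition in WHERE: the NULL rows are filtered out afterwards,
# de facto turning the query into an inner join.
where_rows = conn.execute("""
    SELECT tableA.id, tableB.details FROM tableA
    LEFT JOIN tableB ON tableB.id = tableA.id
    WHERE tableB.logId = 'joe'
""").fetchall()
print(where_rows)  # [(1, 'd1')]
```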
If you only want results with values in both tables what you really want is an **inner join**. Once you have that, it doesn't really matter where you filter results out.
Usage of AND vs WHERE clause in SQL
[ "sql", "oracle" ]
This is part of a project that I've been working on for a bit now. I'm having a hard time synthesizing the mental pseudocode with something I can actually write, largely due to personal misconceptions about how stuff works. I'm tracking inventory for an imaginary fruit stand, using the following class: ``` class Fruit(object): def __init__(self, color, qty, price): self.color = str(color) self.qty = int(qty) self.price = float(price) def getName(self): return self.color def getQty(self): return self.qty def getPrice(self): return self.price def description(self): return self.color,self.qty,self.price ``` Objects in this class end up getting imported as values for keys of a dictionary. There can be multiple values for each key, stored in a list, such that when the dictionary is printed, it returns something like this (with line breaks for readability): ``` Lemon: [(Yellow, 5, 2.99)] Apple: [(Red, 10, 0.99), (Green, 9, 0.69)] Cherry: [(White, 2, 5.99),(Red, 5, 5.99)] ``` I am trying now to end up getting the sum value of all bits of fruit on my imaginary fruit stand. My original thought was just to treat each value as a list, iterate through the list until I hit an integer, multiply the integer by the float in the index right behind it, and then add that quantity to a variable that had a running total; then move on to keep looking for another integer. Which would work great, if "index 1" of list "Cherry" weren't (Red, 5, 5.99) rather than (2). So, a couple of questions. 1. Every time I do something like this, where I have a dictionary with multiple values which are all class objects, each value is stored as a parenthetical (for lack of a better thing to call it). Is that something that should be happening, or do I need to re-evaluate the way I am doing what I am doing, because I am causing that to happen? 2. 
If I don't need to re-evaluate my choices here, is there a way to break down the "parentheticals" (for lack of a better name coming to mind) to be able to treat the whole thing as a list? 3. Is there a better way to do what I want that doesn't involve iterating through a list? Thanks in advance. I have been fighting with various parts of this problem for a week.
Assuming a dictionary like this: ``` fruit_stand = { "Lemon": [(Yellow, 5, 2.99)], "Apple": [(Red, 10, 0.99), (Green, 9, 0.69)], "Cherry": [(White, 2, 5.99),(Red, 5, 5.99)] } ``` You can actually just iterate over the dictionary to get its keys: ``` for fruit_name in fruit_stand: print fruit_name # Lemon # Apple # Cherry # NOTE: Order not guaranteed # Cherry, Lemon, Apple are equally likely ``` You can then use the `items` method of the dictionary to get a tuple (what you are calling a "parenthetical") of `key`, `value` pairs: ``` for fruit_name, fruits in fruit_stand.items(): print fruit_name, "=>", fruits # Lemon => [(Yellow, 5, 2.99)] # Apple => [(Red, 10, 0.99), (Green, 9, 0.69)] # Cherry => [(White, 2, 5.99),(Red, 5, 5.99)] ``` `list`s (that's the bracketed `[]`) are also iterable: ``` for fruit in [(White, 2, 5.99),(Red, 5, 5.99)]: print fruit # (White, 2, 5.99) # (Red, 5, 5.99) ``` So we can work with each `fruits` list to get access to our `tuple`: ``` for fruit_name, fruit_list in fruit_stand.items(): # Ignore fruit_name and iterate over the fruits in fruit_list for fruit in fruit_list: print fruit ``` As we saw with `items` we can unpack tuples into multiple values: ``` x, y = (1, 2) print x print y # 1 # 2 ``` So we can unpack each `fruit` into its component parts: ``` for fruit_name, fruit_list in fruit_stand.items(): # Ignore fruit_name and iterate over the fruits in fruit_list for color, quantity, cost in fruit_list: print color, quantity, cost ``` And then getting the total isn't hard: ``` # We need to store our value somewhere total_value = 0 for fruit_name, fruit_list in fruit_stand.items(): # Ignore fruit_name and iterate over the fruits in fruit_list for color, quantity, cost in fruit_list: total_value += (quantity * cost) print total_value ``` --- All that being said, there are *much* clearer ways of doing things: 1. 
You could use a list comprehension to simplify your `for` loops: ``` for fruit_name in fruit_stand: operation(fruit_name) ``` can be translated to this list comprehension: ``` [operation(fruit_name) for fruit_name in fruit_stand] ``` Therefore, we can translate our nested `for` loops into (no backslashes are needed inside the brackets, and the final operation goes in front): ``` sum([quantity * cost for _, fruits in fruit_stand.items() for _, quantity, cost in fruits]) ``` Because we don't actually *need* the list, we can get rid of it (note the missing `[]`) and Python will create a generator for us instead: ``` sum(quantity * cost for _, fruits in fruit_stand.items() for _, quantity, cost in fruits) ``` 2. You could add an `__add__` method to `Fruit` so that adding two sets of fruits together gives you the total cost of both sets of `Fruit` (but you would probably want to create an intermediate data structure to do that, say `Basket`, so `Fruit` didn't have to worry about `quantity`, which doesn't really belong to `Fruit`, save in the sense that any instance of `Fruit` has the intrinsic quantity 1). I'm omitting the code for this option because this answer is already far too long.
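Putting it together, a self-contained version of the computation (the literal string tuples below stand in for the question's data; only two fruits are included to keep it short):

```python
# Each value is a list of (color, quantity, price) tuples, as in the question.
fruit_stand = {
    "Lemon": [("Yellow", 5, 2.99)],
    "Apple": [("Red", 10, 0.99), ("Green", 9, 0.69)],
}

# Unpack each tuple, ignore the color, and total quantity * price.
total_value = sum(quantity * cost
                  for fruits in fruit_stand.values()
                  for _, quantity, cost in fruits)
print(round(total_value, 2))  # 31.06
```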
The word you're looking for is ["tuple"](http://www.diveintopython.net/native_data_types/tuples.html). These "parentheticals" are [tuples](http://www.diveintopython.net/native_data_types/tuples.html). One of Python's major strengths is its compact syntax for iterating over lists. You can calculate the sum of the prices of each fruit in a single line: ``` >>> fruit = [("Red", 10, 0.99), ("Green", 9, 0.69)] >>> sum(qty * price for color, qty, price in fruit) 16.11 ``` Breaking it down, the stuff inside of `sum()` is called a [generator expression](http://www.python.org/dev/peps/pep-0289/). `for color, qty, price in fruit` iterates over `fruit` and unpacks the tuples into three named variables. These variables can then be used on the left-hand side of the `for`. For each tuple in the `fruit` list, the generator expression calculates `qty * price`. If you put this expression inside of square brackets it becomes a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) and lets you see the calculated values. ``` >>> [qty * price for color, qty, price in fruit] [9.9, 6.209999999999999] ```
Breaking down instances of classes into lists, or: a better way to sum class variables?
[ "python", "class" ]
I am trying to find all `year` and `caseseqnumber` values in a table where the type is not 'Appellant Rep 1'. The difficulty comes from the fact that a `year` and `caseseqnumber` can have many rows in the table. This is what I have tried: ``` Select caseseqnumber, year from caseparticipants where not exists (Select * from caseparticipants where participanttype = 'Appellant Rep 1') ``` Any help?!
You could join caseparticipants with itself, using a LEFT JOIN. If the join doesn't succeed, it means that caseseqnumber and year don't have a row with `participanttype = 'Appellant Rep 1'`: ``` SELECT c1.caseseqnumber, c1.year FROM caseparticipants c1 LEFT JOIN caseparticipants c2 ON c1.year=c2.year AND c1.caseseqnumber=c2.caseseqnumber AND c2.participanttype = 'Appellant Rep 1' WHERE c2.year IS NULL ``` **EDIT** To compare the number of distinct combinations of caseseqnumber, year, and the number of combinations that have a type of 'Appellant Rep 1', you could use this SQL Server query: ``` SELECT COUNT(DISTINCT CAST(c1.caseseqnumber AS VARCHAR) + '-' + CAST(c1.year AS VARCHAR)), COUNT(DISTINCT CAST(c2.caseseqnumber AS VARCHAR) + '-' + CAST(c2.year AS VARCHAR)) FROM caseparticipants c1 LEFT JOIN caseparticipants c2 ON c1.year=c2.year AND c1.caseseqnumber=c2.caseseqnumber AND c2.participanttype = 'Appellant Rep 1' ```
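A quick check of this anti-join with made-up rows, via in-memory SQLite (the data values are assumptions; a `DISTINCT` is added here only because a case can have many participant rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE caseparticipants (caseseqnumber INTEGER, year INTEGER,
                                   participanttype TEXT);
    INSERT INTO caseparticipants VALUES (1, 2013, 'Appellant Rep 1');
    INSERT INTO caseparticipants VALUES (1, 2013, 'Respondent');
    INSERT INTO caseparticipants VALUES (2, 2013, 'Respondent');
""")

# Only case 2 lacks an 'Appellant Rep 1' row, so only it survives
# the WHERE c2.year IS NULL filter.
rows = conn.execute("""
    SELECT DISTINCT c1.caseseqnumber, c1.year
    FROM caseparticipants c1
    LEFT JOIN caseparticipants c2
           ON c1.year = c2.year
          AND c1.caseseqnumber = c2.caseseqnumber
          AND c2.participanttype = 'Appellant Rep 1'
    WHERE c2.year IS NULL
""").fetchall()
print(rows)  # [(2, 2013)]
```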
Why do you need a nested search there? Nested searches are only needed when checking things in more than one database table. Stick to: ``` select caseseqnumber, year from caseparticipants where participanttype <> 'Appellant Rep 1' ``` (`<>` is the SQL operator for NOT EQUAL TO)
Exists SQL Statement
[ "sql", "exists" ]
I'm trying to do a for loop: introduce a value (p) into an empty list. I want to make a kind of vector that saves all the source values of the math operations. ``` lista=list() for index in len(lista+1): lista.append(p) index=index+1 print lista ``` Thanks for your help
``` lista=list() for index in range(len(lista)): lista.insert(index, p) index=index+1 print lista ```
If you just want to insert a value into an empty list it should look like this: ``` ps = [] # create the empty list ps.append(p) # add the value p ``` or you could just do this which gives the same result `ps = [p]` If you want to insert it n-times you can use a for loop like this: ``` ps = [] for i in range(n): ps.append(p) ``` or just do this which gives the same result `ps = n * [p]`
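One caveat about the `n * [p]` shortcut, slightly beyond the question's numeric values: when `p` is mutable, all `n` slots refer to the same object:

```python
# With a mutable element, n * [p] repeats the SAME object n times.
shared = 3 * [[]]
shared[0].append("x")
print(shared)       # [['x'], ['x'], ['x']]

# A list comprehension builds an independent object for each slot.
independent = [[] for _ in range(3)]
independent[0].append("x")
print(independent)  # [['x'], [], []]
```

With immutable values like numbers or strings (as in the question), the difference is invisible and `n * [p]` is fine.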
for loop in Python creating a list
[ "python" ]
A file 'Definitions.py' contains the code: ``` def testing(a,b): global result for count in range(a,b): result.append(0) print result ``` While file 'Program.py' contains the code: ``` from Definitions import * result = [] testing(0,10) print result ``` The result from Definitions.py is the expected list of zeros, while the variable within Program.py is just the empty list, despite `result` being defined as a global variable. How can this be made to run the function from Definitions.py but pass the resulting variable to be used within Program.py?
Global namespaces are relative to a module. They are not shared between modules. You could return the `result`: ``` def testing(a,b): result = [] for count in range(a,b): result.append(0) return result ``` and use it like this: ``` result = testing(0,10) print result ``` But note that above, a new list, `result = []`, is being created in `testing` each time it is called.
The global scope only applies to the module (file) that it's in.
How to import a module of functions and pass variables between modules
[ "python", "python-2.7", "import", "module", "global" ]
Is there a way to control the screen/monitor with Python, for example turn it off and on again (comparable to the shortcut buttons on laptop keyboards)? Thank you!
Most modern laptops have a physical connection between the button and the monitor. For instance, on my Dell Precision I can boot the PC and dim my display even before the BIOS launches, which tells me it's wired via hardware connections. With that said, you can still emulate a few system calls from within the OS to execute the task as well, just not as energy-efficient as pressing the actual button. Doing this requires you (on Windows) to use `pywin32` to connect to the appropriate system APIs and execute a very specific command, and it is tricky if you're not familiar with calling the Win32 functions. Something along the lines of: ``` import os, win32com.client def runScreensaver(): strComputer = "." objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator") objSWbemServices = objWMIService.ConnectServer(strComputer, r"root\cimv2") colItems = objSWbemServices.ExecQuery("Select * from Win32_Desktop") for objItem in colItems: if objItem.ScreenSaverExecutable: os.system(objItem.ScreenSaverExecutable + " /start") break ``` If you're on **Linux**, you can try to just run one of the following via `os.system()`: * [How to change the Monitor brightness on Linux?](https://stackoverflow.com/questions/6625836/how-to-change-the-monitor-brightness-on-linux) * [Adjust screen brightness/contrast in Python?](https://stackoverflow.com/questions/6421356/adjust-screen-brightness-contrast-in-python) * <https://wiki.archlinux.org/index.php/Backlight>
In Python: ``` import win32gui import win32con # to turn off, use: win32gui.SendMessage(win32con.HWND_BROADCAST, win32con.WM_SYSCOMMAND, win32con.SC_MONITORPOWER, 2) # to turn on, use: win32gui.SendMessage(win32con.HWND_BROADCAST, win32con.WM_SYSCOMMAND, win32con.SC_MONITORPOWER, -1) ```
Control Screen with Python
[ "python", "controls", "screen", "keyboard-shortcuts" ]
I'm investigating how to develop a decent web app with Python. Since I don't want some high-order structures to get in my way, my choice fell on the lightweight [Flask framework](https://flask.palletsprojects.com/en/1.1.x/). Time will tell if this was the right choice. So, now I've set up an Apache server with mod\_wsgi, and my test site is running fine. However, I'd like to speed up the development routine by making the site automatically reload upon any changes in py or template files I make. I see that any changes in the site's .wsgi file cause reloading (even without WSGIScriptReloading On in the Apache config file), but I still have to prod it manually (i.e., insert an extra line break, save). Is there some way to cause a reload when I edit some of the app's py files? Or am I expected to use an IDE that refreshes the .wsgi file for me?
Run the `flask run` CLI command with [debug mode](https://flask.palletsprojects.com/quickstart/#debug-mode) enabled, which will automatically enable the reloader. As of Flask 2.2, you can pass `--app` and `--debug` options on the command line. ``` $ flask --app main.py --debug run ``` `--app` can also be set to `module:app` or `module:create_app` instead of `module.py`. [See the docs for a full explanation.](https://flask.palletsprojects.com/cli/#application-discovery) More options are available with: ``` $ flask run --help ``` Prior to Flask 2.2, you needed to set the `FLASK_APP` and `FLASK_ENV=development` environment variables. ``` $ export FLASK_APP=main.py $ export FLASK_ENV=development $ flask run ``` It is still possible to set `FLASK_APP` and `FLASK_DEBUG=1` in Flask 2.2.
If you are talking about test/dev environments, then just use the debug option. It will auto-reload the flask app when a code change happens. ``` app.run(debug=True) ``` Or, from the shell: ``` $ export FLASK_DEBUG=1 $ flask run ``` <http://flask.palletsprojects.com/quickstart/#debug-mode>
Auto reloading python Flask app upon code changes
[ "python", "apache", "flask" ]
Is there some way in Python to manually free memory allocated for a string while some other variable still references it? I would like to free a string passed to a function inside of the function. ``` def use_string(s): do_something_with_s(s) #can I free memory used for s here? do_other_things_that_dont_need_s() my_string = "hi there."*100000 use_string(my_string) do_some_other_things_that_dont_need_my_string() ``` As I understand it, both `s` (in the context of use_string) and `my_string` will be referencing the same immutable string object, which can only be freed when no variables point to it. Is this true?
Python passes references by value (i.e., you can't rebind the caller's reference, but you can change the contents of the referenced object unless it is immutable), so you cannot change the reference a caller passes into the function from inside the function. If you need to change/free the string, you need to wrap it in a class, so that the only reference to the string lives inside the object and can be reset. I'd recommend taking a deep breath before obfuscating the code like this, but when you really need it, you really need it. This example is designed only to show the concept in minimal code, not to show a beautiful way to do it :) ``` def use_string(wrap): print wrap.s # Prints 'olle' wrap.s = None # Resets the reference print wrap.s # Prints 'None' class wrapper(object): pass a = wrapper() a.s = "olle" print a.s # Prints 'olle' use_string(a) print a.s # Prints 'None' ```
Yes, that's what the [`del` keyword](http://docs.python.org/2/reference/simple_stmts.html#the-del-statement) is for. Whether it actually makes sense to do this in a garbage collected language is another matter.
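A small sketch of what `del` does (and doesn't do) in this situation — the string contents are arbitrary:

```python
s = "hi there." * 100000

def use_string(local_s):
    # del unbinds this frame's NAME; the string object itself survives
    # because the caller still holds a reference to it.
    del local_s

use_string(s)
length_after = len(s)
print(length_after)  # 900000 -- the caller's reference keeps the string alive
```

In CPython, the string's memory is only reclaimed once its reference count drops to zero, i.e. after the caller also deletes or rebinds `s`.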
Python Free String Passed to Function
[ "python", "function", "memory", "args" ]
* open gVim. * then using the File Menu and MenuItem Open to open a file pi.py which has the following tiny script: ![enter image description here](https://i.stack.imgur.com/0XUdn.png) How do I execute this code using gVim? --- **EDIT** If I use either `:! python pi.py` or `:w !python -` then I get the following: ![enter image description here](https://i.stack.imgur.com/Ly32R.png)
You don't need to save the file, you can run the current buffer as stdin to a command such as `python` by typing: ``` :w !python - ``` (The hyphen at the end probably isn't necessary, python will generally use stdin by default) edit: seeing as you are new to vim, note that this will not save the file, it will just run it. You will probably want to learn how to save your file.
If you have python support compiled into vim you can use `:pyfile %` to run the current file. (python 2.7) If you have python 3 support use `:py3file %` instead [pyfile help](http://vimdoc.sourceforge.net/htmldoc/if_pyth.html#%3apyfile)
Executing Python with Gvim
[ "python", "windows", "vim" ]
Here is the **date of birth** field in my OpenERP model: ``` 'date_of_birth': fields.date('Date of Birth'), ``` I need to change its default date to 25 years earlier, because that makes it easier for the user to pick the year (in OpenERP, the jQuery widget loads only the current 20 years into the list by default, and the user takes some time to select an earlier year). For example: ``` _defaults = { 'date_of_birth': fields.date.context_today - 25years ``` Please advise me on how to implement this (a Python function seems good for my requirement).
Check out the [dateutil module](http://labix.org/python-dateutil) - you'll have to install it. It makes this sort of calculation easy. ``` >>> import datetime ... from dateutil import relativedelta ... ... today = datetime.date.today() ... past = today + relativedelta.relativedelta(years=-25) ... print(today, past, sep='\t') 2013-05-05 1988-05-05 ```
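If installing a third-party module isn't an option, here is a stdlib-only sketch (the helper name `years_earlier` is made up) that steps the year back directly, with a fallback for Feb 29:

```python
import datetime

def years_earlier(d, years):
    # Replace the year in place; Feb 29 in a non-leap target year raises
    # ValueError, in which case fall back one day to Feb 28.
    try:
        return d.replace(year=d.year - years)
    except ValueError:
        return d.replace(year=d.year - years, day=d.day - 1)

print(years_earlier(datetime.date(2013, 5, 5), 25))   # 1988-05-05
print(years_earlier(datetime.date(2016, 2, 29), 25))  # 1991-02-28
```

This avoids the fixed-day-count approximation of `timedelta`, which drifts because years are not a constant number of days.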
If your 'Date of birth' is a string, then you can use [strptime](http://docs.python.org/2/library/time.html#time.strptime) to convert it into a datetime object. With that object, it is much easier to subtract 25 years using the [timedelta](http://docs.python.org/2/library/datetime.html#timedelta-objects) module. You can't subtract 25 years straight away; you have to count the days manually. In the end, it will look like this: ``` import datetime my_date = datetime.datetime.strptime('1 Feb 2013', '%d %b %Y') # converts the date 1 Feb 2013 into a datetime object new_date = my_date - datetime.timedelta(days=36000) # subtracts 36000 days print(new_date.strftime('%d %b %y')) # prints: '11 Jul 1914' ```
How to Date deduction with Python?
[ "", "python", "xml", "odoo", "" ]
From the table below, how could I possibly get data out like this: My SQL knowledge is limited to select and other basic stuff. ``` Heading 1 Eg: Kitchenware Heading 2 Eg: Knives Heading 3 Eg: Butter Knives Item: Cut em all Item: Cull em all Item: Smear em all Heading 3 Eg: Meat Knives Item: Cut em meat Item: Cull em meat Item: Smear em meat ``` Levels 1 and 2 are headings and cannot hold items. Level 3 can hold items. Level 4 are items. Will it be possible to do the above? Sometimes, Level 3 may come after Level 1. ``` "id" "name" "description" "level" "parent" "country" "maxLevel" "1" "Kitchenware" "Kitchenware description" "1" "0" "US" "0" "2" "Knives" "All our knives" "2" "1" "US" "0" "3" "Butter Knives" "All Butter Knives" "3" "2" "US" "0" "4" "Cut em all" "Cut em all" "4" "3" "US" "0" "5" "Cull em all" "Cull em all" "4" "3" "US" "0" "6" "Smear em all" "Smear em all" "4" "3" "US" "0" "7" "Meat Knives" "All Meat Knives" "3" "2" "US" "0" "8" "Cut em meat" "Cut em meat" "4" "7" "US" "0" "9" "Cull em meat" "Cull em meat" "4" "7" "US" "0" "10" "Smear em meat" "Smear em meat" "4" "7" "US" "0" ``` **Table create** ``` CREATE TABLE `products` ( `id` INT(10) NULL AUTO_INCREMENT, `name` VARCHAR(50) NULL DEFAULT NULL, `description` VARCHAR(240) NULL DEFAULT NULL, `level` TINYINT(1) NULL DEFAULT '0', `parent` INT(10) NULL DEFAULT '0', `country` VARCHAR(2) NULL DEFAULT NULL, `maxLevel` INT(1) NULL DEFAULT NULL, PRIMARY KEY (`id`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM; ``` **Table Data** ``` INSERT IGNORE INTO `products` (`id`, `name`, `description`, `level`, `parent`, `country`, `maxLevel`) VALUES (1, 'Kitchenware', 'Kitchenware description', 1, 0, 'US', 0), (2, 'Knives', 'All our knives', 2, 1, 'US', 0), (3, 'Butter Knives', 'All Butter Knives', 3, 2, 'US', 0), (4, 'Cut em all', 'Cut em all', 4, 3, 'US', 0), (5, 'Cull em all', 'Cull em all', 4, 3, 'US', 0), (6, 'Smear em all', 'Smear em all', 4, 3, 'US', 0), (7, 'Meat Knives', 'All Meat Knives', 3, 2, 'US', 0), (8, 'Cut em meat', 'Cut em meat', 4, 7, 'US', 0), (9, 'Cull em meat', 'Cull em meat', 4, 7, 'US', 0), (10, 'Smear em meat', 'Smear em meat', 4, 7, 'US', 0); ```
For building a hierarchical tree you have to use server-side coding such as PHP. An alternative for displaying it is to build indentation characters based on the record level.
You can build a table like this ``` H1 H2 H3 Item: Kitchenware Knives Butter Knives Cut em all Kitchenware Knives Meat Knives Cut em all Kitchenware Knives Butter Knives Smear em meat ``` then you can easily select based on `h1`, `h2` or `h3`. And the table structure would be something like this. ``` CREATE TABLE `products` ( `id` INT(10) NULL AUTO_INCREMENT, `h1` VARCHAR(50) NULL DEFAULT NULL, `h2` VARCHAR(240) NULL DEFAULT NULL, `h3` VARCHAR(240) NULL DEFAULT NULL, `Item:` VARCHAR(240) NULL DEFAULT NULL, `level` TINYINT(1) NULL DEFAULT '0', `parent` INT(10) NULL DEFAULT '0', `country` VARCHAR(2) NULL DEFAULT NULL, `maxLevel` INT(1) NULL DEFAULT NULL, PRIMARY KEY (`id`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM; ``` The SQL would be something like `SELECT * FROM table WHERE h1=something AND h3=something`
How to do a sql to get data from a table like this?
[ "", "mysql", "sql", "database", "" ]
In SQL Server 2005, I have a complex view that needs to specify additional conditions on certain relations. These conditions are not known at the time of view creation. Here's a greatly simplified version. ``` SELECT fields FROM table1 LEFT JOIN table2 ON ((table1.pid = table2.fid) AND (table2.condition1 = @runtimecondition)) LEFT JOIN table3 ON .... LEFT JOIN table4 ON .... LEFT JOIN table5 ON .... ``` Dynamic SQL to directly access the tables and do the joins is not an option for me because of 3rd party constraints (this is an integration and they want to have a single point of access for my code, preferably a view -- rather than grant access to a variety of tables). Can this be done with a view? Do I have to use a stored procedure? Is this a problem that could be addressed by a Table Valued Function?
You can use an inline table-valued function ``` CREATE FUNCTION dbo.Condition ( @condition1 int, @condition2 int ) RETURNS TABLE AS RETURN ( SELECT * FROM table1 t LEFT JOIN table2 t2 ON t.pid = t2.fid AND t2.condition1 = ISNULL(@condition1, t2.condition1) LEFT JOIN table3 t3 ON t.pid = t3.fid AND t3.condition1 = ISNULL(@condition2, t3.condition1) LEFT JOIN table4 ON .... LEFT JOIN table5 ON .... ) ``` Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/55d33/2) To improve performance, use these indexes: ``` CREATE INDEX x ON dbo.table1(pid) CREATE INDEX x ON dbo.table2(condition1) INCLUDE(fid, pid) CREATE INDEX x ON dbo.table3(condition1) INCLUDE(fid, pid) ``` Plan diagram (on the example of the three tables) ![enter image description here](https://i.stack.imgur.com/6YI55.png)
You can expose field(s) of interest to list of produced fields: ``` CREATE VIEW myview AS SELECT fields, table2.condition1 AS condition1 FROM table1 LEFT JOIN table2 ON (table1.pid = table2.fid); LEFT JOIN table3 ON .... LEFT JOIN table4 ON .... LEFT JOIN table5 ON .... ``` This lets `VIEW` users impose conditions when they use your view, something like this: ``` SELECT * FROM myview WHERE condition1 = @runtimecondition ```
parameters for view in SQL server
[ "", "sql", "sql-server", "" ]
The analysis includes the following: 1. How many words are in the text data? 2. How many times does the keyword appear in the text data? ``` import os f = open(os.path.join(os.getcwd(), 'test1.txt'), 'r') dataString = f.read() ``` This code is just opening the downloaded text file. I have no idea what to do next. I'm totally stuck. Please help me. It's okay to just give me some advice or a hint.
If you want to count the number of unique words you will need to do something like this, so that cases like "hello, " don't interfere with the calculation. ``` import re print len(set(re.findall('\w+', dataString.lower()))) # Number of unique words print len(re.findall('\w+', dataString.lower())) # Total number of words ``` To display the number of counts of a specific word you can use a [list comprehension](http://www.secnetix.de/olli/Python/list_comprehensions.hawk). ``` words = re.findall('\w+', dataString.lower()) print len([word for word in words if word == 'hello']) ``` Or you can use [count](http://www.tutorialspoint.com/python/list_count.htm). ``` print words.count('hello') ```
In `dataString`, you'll want to go through each word and count it. So you'll need a way to identify individual words in a string. You can either use the [`string.split`](http://docs.python.org/2/library/stdtypes.html#str.split) method or some [regex](http://docs.python.org/2/library/re.html) to split the words up a little more cleanly.
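Whichever way the words are split out, `collections.Counter` from the standard library answers both questions (total words and per-keyword counts) in one pass. A sketch with an inline stand-in for the file contents:

```python
import re
from collections import Counter

dataString = "Hello world. hello again, world!"  # stand-in for f.read()

words = re.findall(r'\w+', dataString.lower())
counts = Counter(words)

print(len(words))        # total number of words: 5
print(len(counts))       # number of distinct words: 3
print(counts['hello'])   # occurrences of the keyword 'hello': 2
```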
I want to create a program that can analyze downloaded text file in python
[ "", "python", "text", "" ]
I have two tables: ``` CREATE TABLE `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` varchar(1000) DEFAULT NULL, `last_updated` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `last_updated` (`last_updated`), ) ENGINE=InnoDB AUTO_INCREMENT=799681 DEFAULT CHARSET=utf8 CREATE TABLE `article_categories` ( `article_id` int(11) NOT NULL DEFAULT '0', `category_id` int(11) NOT NULL DEFAULT '0', PRIMARY KEY (`article_id`,`category_id`), KEY `category_id` (`category_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 | ``` This is my query: ``` SELECT a.* FROM articles AS a, article_categories AS c WHERE a.id = c.article_id AND c.category_id = 78 AND a.comment_cnt > 0 AND a.deleted = 0 ORDER BY a.last_updated LIMIT 100, 20 ``` And an `EXPLAIN` for it: ``` *************************** 1. row *************************** id: 1 select_type: SIMPLE table: a type: index possible_keys: PRIMARY key: last_updated key_len: 9 ref: NULL rows: 2040 Extra: Using where *************************** 2. row *************************** id: 1 select_type: SIMPLE table: c type: eq_ref possible_keys: PRIMARY,fandom_id key: PRIMARY key_len: 8 ref: db.a.id,const rows: 1 Extra: Using index ``` It uses a full index scan of `last_updated` on the first table for sorting but does not use any index for join (`type: index` in explain). This is very bad for performance and kills the whole database server since this is a very frequent query. I've tried reversing table order with `STRAIGHT_JOIN`, but this gives `filesort, using_temporary`, which is even worse. Is there any way to make MySQL use index for joining and for sorting at the same time? **=== update ===** I'm really desperate about this. Maybe some kind of denormalization can help here?
If you have lots of categories, this query cannot be made efficient. No single index can cover two tables at once in `MySQL`. You have to do denormalization: add `last_updated`, `has_comments` and `deleted` into `article_categories`: ``` CREATE TABLE `article_categories` ( `article_id` int(11) NOT NULL DEFAULT '0', `category_id` int(11) NOT NULL DEFAULT '0', `last_updated` timestamp NOT NULL, `has_comments` boolean NOT NULL, `deleted` boolean NOT NULL, PRIMARY KEY (`article_id`,`category_id`), KEY `category_id` (`category_id`), KEY `ix_articlecategories_category_comments_deleted_updated` (category_id, has_comments, deleted, last_updated) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ``` and run this query: ``` SELECT * FROM ( SELECT article_id FROM article_categories WHERE (category_id, has_comments, deleted) = (78, 1, 0) ORDER BY last_updated DESC LIMIT 100, 20 ) q JOIN articles a ON a.id = q.article_id ``` Of course you should update `article_categories` as well whenever you update relevant columns in `article`. This can be done in a trigger. Note that the column `has_comments` is boolean: this will allow using an equality predicate to make a single range scan over the index. Also note that the `LIMIT` goes into the subquery. This makes `MySQL` use late row lookups which it does not use by default. See this article in my blog about why they increase performance: * [**MySQL ORDER BY / LIMIT performance: late row lookups**](http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/) If you were on SQL Server, you could make an indexable view over your query, which essentially would make a denormalized indexed copy of `article_categories` with the additional fields, automatically maintained by the server. Unfortunately, `MySQL` does not support this and you will have to create such a table manually and write additional code to keep it in sync with the base tables.
Before getting to your specific query, it's important to understand how an index works. With appropriate statistics, this query: ``` select * from foo where bar = 'bar' ``` ... will use an index on `foo(bar)` if it's selective. That means, if `bar = 'bar'` amounts to selecting most of the table's rows, it'll go faster to just read the table and eliminate rows that don't apply. In contrast, if `bar = 'bar'` means only selecting a handful of rows, reading the index makes sense. Suppose we now toss in an order clause and that you've indexes on each of `foo(bar)` and `foo(baz)`: ``` select * from foo where bar = 'bar' order by baz ``` If `bar = 'bar'` is very selective, it's cheap to grab all rows that comply, and to sort them in memory. If it's not at all selective, the index on `foo(baz)` makes little sense because you'll fetch the entire table anyway: using it would mean going back and forth on disk pages to read the rows in order, which is very expensive. Toss in a limit clause, however, and `foo(baz)` might suddenly make sense: ``` select * from foo where bar = 'bar' order by baz limit 10 ``` If `bar = 'bar'` is very selective, it's still a good option. If it's not at all selective, you'll quickly find 10 matching rows by scanning the index on `foo(baz)` -- you might read 10 rows, or 50, but you'll find 10 good ones soon enough. Suppose the latter query with indexes on `foo(bar, baz)` and `foo(baz, bar)` instead. Indexes are read from left to right. One makes very good sense for this potential query, the other might make none at all. Think of them like this: ``` bar baz baz bar --------- --------- bad aaa aaa bad bad bbb aaa bar bar aaa bbb bad bar bbb bbb bar ``` As you can see, the index on `foo(bar, baz)` allows to start reading at `('bar', 'aaa')` and fetching the rows in order from that point forward. The index on `foo(baz, bar)`, on the contrary, yields rows sorted by `baz` irrespective of what `bar` might hold. 
If `bar = 'bar'` is not at all selective as a criterion, you'll quickly run into matching rows for your query, in which case it makes sense to use it. If it's very selective, you may end up iterating gazillions of rows before finding enough that match `bar = 'bar'` -- it might still be a good option, but it's not as optimal. With that being addressed, let's get back to your original query... You need to join articles with categories, to filter articles that are in a particular category, with more than one comment, that aren't deleted, and then sort them by date, and then grab a handful of them. I take it that most articles are not deleted, so an index on that criterion won't be of much use -- it'll only slow down writes and query planning. I presume most articles have a comment or more, so that won't be selective either. I.e. there's little need to index it either. Without your category filter, index options are reasonably obvious: `articles(last_updated)`; possibly with the comment count column to the right, and the deleted flag to the left. With your category filter, it all depends... If your category filter is very selective, it actually makes very good sense to select all rows that are within that category, sort them in memory, and pick the top matching rows. If your category filter is not at all selective and yields almost all articles, the index on `articles(last_update)` makes sense: valid rows are all over the place, so read rows in order until you find enough that match and *voilà*. In the more general case, it's just vaguely selective. To the best of my knowledge, the stats collected don't look into correlations much. Thus, the planner has no good way to estimate whether it'll find articles with the right category fast enough to be worth reading the latter index. Joining and sorting in memory will usually be cheaper, so the planner goes with that. Anyway, you've two options to force the use of an index. 
One is to acknowledge that the query planner is not perfect and to use a hint: <http://dev.mysql.com/doc/refman/5.5/en/index-hints.html> Be wary though, because sometimes the planner is actually correct in not wanting to use the index you'd like it to, or vice versa. Also, it may become correct in a future version of MySQL, so keep that in mind as you maintain your code over the years. Edit: `STRAIGHT_JOIN`, as pointed out by DRap, works too, with similar caveats. The other is to maintain an extra column to tag frequently selected articles (e.g. a tinyint field, which is set to 1 when they belong to your specific category), and then add an index on e.g. `articles(cat_78, last_updated)`. Maintain it using a trigger and you'll do fine.
How to make JOIN query use index?
[ "", "mysql", "sql", "join", "query-optimization", "" ]
I am new to Python and I am struggling to accomplish a simple task. I am trying to copy my current directory name and place it into a variable. I don't need the contents of that directory, or the path to that directory. I just need the name of my current directory placed into a variable. Thank you
For current working directory, use: ``` import os cdir=os.getcwd().split(os.sep)[-1] print (cdir) ``` For the script directory, use: ``` import os sdir=os.path.dirname(os.path.abspath(__file__)).split(os.sep)[-1] print (sdir) ```
As seen in [my other answer](https://stackoverflow.com/a/9711761/548696), you can achieve that this way: ``` import os module_dir = os.path.dirname(__file__) # get current file's directory ```
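On Python 3.4+, `pathlib` expresses both answers' ideas more directly: `.name` is exactly the final path component, with no manual splitting. A sketch (not from either answer):

```python
from pathlib import Path

cwd_name = Path.cwd().name  # name of the current working directory only
print(cwd_name)

# For the directory containing the running script instead:
# script_dir_name = Path(__file__).resolve().parent.name
```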
copy directory name and placing it into a variable (python)
[ "", "python", "directory", "" ]
I have 2 lists: ``` first_lst = [('-2.50', 0.49, 0.52), ('-2.00', 0.52, 0.50)] second_lst = [('-2.50', '1.91', '2.03'), ('-2.00', '1.83', '2.08')] ``` I want to do the following math to it: Multiply `0.49` by `1.91` (the corresponding values from `first_lst` and `second_lst`), and multiply `0.52` by `2.03` (corresponding values also). I want to do that under the condition that the values at position `0` in each pair of corresponding tuples are identical, so `-2.50` == `-2.50` etc. Obviously, we do the same math for the remaining tuples as well. My code: ``` [((fir[0], float(fir[1])*float(sec[1]), float(fir[2])*float(sec[2])) for fir in first_lst) for sec in second_lst if fir[0] == sec[0]] ``` However, it generates this object instead: ``` [<generator object <genexpr> at 0x0223E2B0>] ``` Can you help me fix the code?
You need to use `tuple()` or `list()` to convert that generator expression to a `list` or `tuple`: ``` [tuple((fir[0], fir[1]*sec[1], fir[2]*sec[2]) for fir in first_lst)\ for sec in second_lst if fir[0] == sec[0]] ``` Working version of your code: ``` >>> first_lst = [tuple(float(y) for y in x) for x in first_lst] >>> second_lst = [tuple(float(y) for y in x) for x in second_lst] >>> [((fir[0],) + tuple(x*y for x, y in zip(fir[1:], sec[1:]))) \ for fir in first_lst for sec in second_lst if fir[0]==sec[0]] [(-2.5, 0.9359, 1.0555999999999999), (-2.0, 0.9516000000000001, 1.04)] ```
Considering that your `first_lst` and `second_lst` are defined as follows. ``` >>> first_lst = [('-2.50', '0.49', '0.52'), ('-2.00', '0.52', '0.50')] >>> second_lst = [('-2.50', '1.91', '2.03'), ('-2.00', '1.83', '2.08')] ``` The following list comprehension may be useful. ``` >>> [tuple((float(elem[0][0]), float(elem[0][1])*float(elem[1][1]), float(elem[0][2])*float(elem[1][2]))) for elem in zip(first_lst, second_lst) if elem[0][0]==elem[1][0]] [(-2.5, 0.9359, 1.0555999999999999), (-2.0, 0.9516000000000001, 1.04)] ```
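Both answers still pair the tuples positionally or with a nested scan. Building a dict keyed on the element at position 0 makes the "same key" condition explicit and works even if the two lists are in different orders. A sketch using the question's data:

```python
first_lst = [('-2.50', 0.49, 0.52), ('-2.00', 0.52, 0.50)]
second_lst = [('-2.50', '1.91', '2.03'), ('-2.00', '1.83', '2.08')]

# Index the second list by its key (the element at position 0).
second_by_key = {sec[0]: sec for sec in second_lst}

result = [
    (fir[0],
     float(fir[1]) * float(second_by_key[fir[0]][1]),
     float(fir[2]) * float(second_by_key[fir[0]][2]))
    for fir in first_lst
    if fir[0] in second_by_key
]
print(result)  # approximately [('-2.50', 0.9359, 1.0556), ('-2.00', 0.9516, 1.04)]
```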
Getting <generator object <genexpr>
[ "", "python", "python-2.7", "list-comprehension", "" ]
I can't figure out how to install BeautifulSoup4 on Python. I've downloaded it, unpacked it and moved it to User directory. Typed in Terminal 'cd beautifulsoup4-4.1.3' to enter directory Then I type 'python setup.py install' Then I get this message: ``` Adams-iMac:beautifulsoup4-4.1.3 adamparis$ python setup.py install running install running build running build_py creating build creating build/lib creating build/lib/bs4 copying bs4/__init__.py -> build/lib/bs4 copying bs4/dammit.py -> build/lib/bs4 copying bs4/element.py -> build/lib/bs4 copying bs4/testing.py -> build/lib/bs4 creating build/lib/bs4/builder copying bs4/builder/__init__.py -> build/lib/bs4/builder copying bs4/builder/_html5lib.py -> build/lib/bs4/builder copying bs4/builder/_htmlparser.py -> build/lib/bs4/builder copying bs4/builder/_lxml.py -> build/lib/bs4/builder creating build/lib/bs4/tests copying bs4/tests/__init__.py -> build/lib/bs4/tests copying bs4/tests/test_builder_registry.py -> build/lib/bs4/tests copying bs4/tests/test_docs.py -> build/lib/bs4/tests copying bs4/tests/test_html5lib.py -> build/lib/bs4/tests copying bs4/tests/test_htmlparser.py -> build/lib/bs4/tests copying bs4/tests/test_lxml.py -> build/lib/bs4/tests copying bs4/tests/test_soup.py -> build/lib/bs4/tests copying bs4/tests/test_tree.py -> build/lib/bs4/tests running install_lib creating /Library/Python/2.7/site-packages/bs4 error: could not create '/Library/Python/2.7/site-packages/bs4': Permission denied Adams-iMac:beautifulsoup4-4.1.3 adamparis$ ``` At the end it says 'Permission denied. Why is this?
Try `sudo python setup.py install` and type in your password when prompted. The "Permission denied" error occurs because `/Library/Python/2.7/site-packages` is a system directory, so writing to it requires administrator privileges.
You can type `sudo pip install beautifulsoup4`
Python, Install Beautiful Soup4-4.1.3 on Mac. Access Denied?
[ "", "python", "macos", "beautifulsoup", "" ]
The question of how to speed up importing of Python modules has been asked previously ([Speeding up the python "import" loader](https://stackoverflow.com/questions/2010255/speeding-up-the-python-import-loader) and [Python -- Speed Up Imports?](https://stackoverflow.com/questions/6025635/python-speed-up-imports)) but without specific examples and has not yielded accepted solutions. I will therefore take up the issue again here, but this time with a specific example. I have a Python script that loads a 3-D image stack from disk, smooths it, and displays it as a movie. I call this script from the system command prompt when I want to quickly view my data. I'm OK with the 700 ms it takes to smooth the data as this is comparable to MATLAB. However, it takes an additional 650 ms to import the modules. So from the user's perspective the Python code runs at half the speed. This is the series of modules I'm importing: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import scipy.ndimage import scipy.signal import sys import os ``` Of course, not all modules are equally slow to import. The chief culprits are: ``` matplotlib.pyplot [300ms] numpy [110ms] scipy.signal [200ms] ``` I have experimented with using `from`, but this isn't any faster. Since Matplotlib is the main culprit and it's got a reputation for slow screen updates, I looked for alternatives. One is PyQtGraph, but that takes 550 ms to import. I am aware of one obvious solution, which is to call my function from an interactive Python session rather than the system command prompt. This is fine but it's too MATLAB-like, I'd prefer the elegance of having my function available from the system prompt. I'm new to Python and I'm not sure how to proceed at this point. Since I'm new, I'd appreciate links on how to implement proposed solutions. Ideally, I'm looking for a simple solution (aren't we all!) 
because the code needs to be portable between multiple Mac and Linux machines.
you could build a simple server/client, the server running continuously making and updating the plot, and the client just communicating the next file to process. I wrote a simple server/client example based on the basic example from the `socket` module docs: <http://docs.python.org/2/library/socket.html#example> here is server.py: ``` # expensive imports import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import scipy.ndimage import scipy.signal import sys import os # Echo server program import socket HOST = '' # Symbolic name meaning all available interfaces PORT = 50007 # Arbitrary non-privileged port s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen(1) while 1: conn, addr = s.accept() print 'Connected by', addr data = conn.recv(1024) if not data: break conn.sendall("PLOTTING:" + data) # update plot conn.close() ``` and client.py: ``` # Echo client program import socket import sys HOST = '' # The remote host PORT = 50007 # The same port as used by the server s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) s.sendall(sys.argv[1]) data = s.recv(1024) s.close() print 'Received', repr(data) ``` you just run the server: ``` python server.py ``` which does the imports, then the client just sends via the socket the filename of the new file to plot: ``` python client.py mytextfile.txt ``` then the server updates the plot. On my machine running your imports take 0.6 seconds, while running `client.py` 0.03 seconds.
Not an actual answer to the question, but a hint on how to profile the import speed with Python 3.7 and [tuna](https://github.com/nschloe/tuna) (a small project of mine): ``` python3 -X importtime -c "import scipy" 2> scipy.log tuna scipy.log ``` [![enter image description here](https://i.stack.imgur.com/ak2hH.png)](https://i.stack.imgur.com/ak2hH.png)
improving speed of Python module import
[ "", "python", "performance", "import", "module", "" ]
I am stymied at the moment. I am sure that I am missing something simple, but how do you move a series of dates forward by x units? In my more specific case I want to add 180 days to a date series within a dataframe. Here is what I have so far: ``` import pandas, numpy, StringIO, datetime txt = '''ID,DATE 002691c9cec109e64558848f1358ac16,2003-08-13 00:00:00 002691c9cec109e64558848f1358ac16,2003-08-13 00:00:00 0088f218a1f00e0fe1b94919dc68ec33,2006-05-07 00:00:00 0088f218a1f00e0fe1b94919dc68ec33,2006-06-03 00:00:00 00d34668025906d55ae2e529615f530a,2006-03-09 00:00:00 00d34668025906d55ae2e529615f530a,2006-03-09 00:00:00 0101d3286dfbd58642a7527ecbddb92e,2007-10-13 00:00:00 0101d3286dfbd58642a7527ecbddb92e,2007-10-27 00:00:00 0103bd73af66e5a44f7867c0bb2203cc,2001-02-01 00:00:00 0103bd73af66e5a44f7867c0bb2203cc,2008-01-20 00:00:00 ''' df = pandas.read_csv(StringIO.StringIO(txt)) df = df.sort('DATE') df.DATE = pandas.to_datetime(df.DATE) df['X_DATE'] = df['DATE'].shift(180, freq=pandas.datetools.Day) ``` This code generates a type error. For reference I am using: ``` Python 2.7.4 Pandas '0.12.0.dev-6e7c4d6' Numpy '1.7.1' ```
If I understand you, you don't actually want `shift`, you simply want to make a new column next to the existing `DATE` which is 180 days after. In that case, you can use `timedelta`: ``` >>> from datetime import timedelta >>> df.head() ID DATE 8 0103bd73af66e5a44f7867c0bb2203cc 2001-02-01 00:00:00 0 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 1 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 5 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 4 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 >>> df["X_DATE"] = df["DATE"] + timedelta(days=180) >>> df.head() ID DATE X_DATE 8 0103bd73af66e5a44f7867c0bb2203cc 2001-02-01 00:00:00 2001-07-31 00:00:00 0 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 2004-02-09 00:00:00 1 002691c9cec109e64558848f1358ac16 2003-08-13 00:00:00 2004-02-09 00:00:00 5 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 2006-09-05 00:00:00 4 00d34668025906d55ae2e529615f530a 2006-03-09 00:00:00 2006-09-05 00:00:00 ``` Does that help any?
You could use `pd.DateOffset`. Which seems to be faster than `timedelta`. ``` In [930]: df['x_DATE'] = df['DATE'] + pd.DateOffset(days=180) In [931]: df Out[931]: ID DATE x_DATE 8 0103bd73af66e5a44f7867c0bb2203cc 2001-02-01 2001-07-31 0 002691c9cec109e64558848f1358ac16 2003-08-13 2004-02-09 1 002691c9cec109e64558848f1358ac16 2003-08-13 2004-02-09 4 00d34668025906d55ae2e529615f530a 2006-03-09 2006-09-05 5 00d34668025906d55ae2e529615f530a 2006-03-09 2006-09-05 2 0088f218a1f00e0fe1b94919dc68ec33 2006-05-07 2006-11-03 3 0088f218a1f00e0fe1b94919dc68ec33 2006-06-03 2006-11-30 6 0101d3286dfbd58642a7527ecbddb92e 2007-10-13 2008-04-10 7 0101d3286dfbd58642a7527ecbddb92e 2007-10-27 2008-04-24 9 0103bd73af66e5a44f7867c0bb2203cc 2008-01-20 2008-07-18 ``` --- Timings Medium ``` In [948]: df.shape Out[948]: (10000, 3) In [950]: %timeit df['DATE'] + pd.DateOffset(days=180) 1000 loops, best of 3: 1.51 ms per loop In [949]: %timeit df['DATE'] + timedelta(days=180) 100 loops, best of 3: 2.71 ms per loop ``` Large ``` In [952]: df.shape Out[952]: (100000, 3) In [953]: %timeit df['DATE'] + pd.DateOffset(days=180) 100 loops, best of 3: 4.16 ms per loop In [955]: %timeit df['DATE'] + timedelta(days=180) 10 loops, best of 3: 20 ms per loop ```
Add days to dates in dataframe
[ "", "python", "pandas", "datetime", "" ]
*Pretext, I am familiar with the semantical differences between a NULL value and an empty string.* I have a MySQL table where I store a lot of hostnames with their IP addresses (as a string) and wonder what would appear to be more natural (or efficient storage-wise) in case a hostname cannot be resolved. A NULL value or an empty string (in which case it probably should be a VARCHAR and not a CHAR) I would tend towards a NULL value but I would like to have this confirmed or disconfirmed.
In **MyISAM, MySQL saves one bit per row for each column that does not use NULL**. As it is stated [here](http://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html): > Declaring columns NULL can reduce the maximum number of columns permitted. For MyISAM tables, NULL columns require additional space in the row to record whether their values are NULL. Each NULL column takes one bit extra, rounded up to the nearest byte. Take a look [here](http://dev.mysql.com/doc/refman/5.5/en/storage-requirements.html) as well: > In addition, while a NULL itself does not require any storage space, NDBCLUSTER reserves 4 bytes per row if the table definition contains any columns defined as NULL, up to 32 NULL columns. (If a MySQL Cluster table is defined with more than 32 NULL columns up to 64 NULL columns, then 8 bytes per row is reserved.) Moreover, **it also makes the database work faster**, as stated [here](http://my.safaribooksonline.com/book/-/9781449332471/4dot-optimizing-schema-and-data-types/id3613176) (taken from [stackoverflow](https://stackoverflow.com/questions/471367/when-to-use-null-in-mysql-tables) - @DavidWinterbottom's link didn't work for me, so I added a different source): > It's harder for MySQL to optimize queries that refer to nullable columns, because they make indexes, index statistics, and value comparisons more complicated. A nullable column uses more storage space and requires special processing inside MySQL. When a nullable column is indexed, it requires an extra byte per entry and can even cause a fixed-size index (such as an index on a single integer column) to be converted to a variable-sized one in MyISAM. In most cases non-NULL values behave more predictably when combined with `COUNT()` and other aggregate functions, but you can also make NULL behave according to your needs. 
As it is stated [here](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html), **not all group (aggregate) functions ignore NULL**; for instance, `COUNT(column)` would give you a different result than `COUNT(*)` for a column containing NULL values. On the other hand, as others point out, NULL better reflects the meaning of the entry - it is an unknown value - and if you wanted to count all the hosts you would probably want `COUNT()` to behave exactly as it does.
**First**: Consider closely the different semantics of **NULL** and **Empty-String**. * The first is best interpreted as something like: *There is a valid value for this field, but that value is not yet known*. * The second always means: *The valid value for this field is known, and it is precisely ""*. **Second**: Recognize that indexing and filtering work better and more efficiently on **Empty-String** than on **NULL**, so don't use the latter when you really mean the former. **Third**: Recognize that all expressions that use **NULL** are susceptible to the non-intuitiveness of three-valued logic unless the NULL is religiously coalesced to **Empty-String** (or some other contextually valid value) first. In particular, the law of excluded middle no longer applies, so the expression *A or ~A* is no longer tautologically true whenever the evaluation of *A* requires evaluation of a **NULL** term. Forgetting this can lead to very subtle and hard-to-locate bugs. The not-equals operator exposes this regularly: ``` When A has the value NULL: The expression A = 0 evaluates to UNKNOWN, which filters as false; The expression A <> 0 evaluates to UNKNOWN, which filters as false; and The expression A OR NOT A evaluates to UNKNOWN -- not true! ``` **Update**: I guess the essence of my point is that they are NOT the same creature, but rather very different beasts. Each has its place. A second address field should always be non-null (unless you intend to allow entry of partial or incomplete addresses), and its default should always be the valid and known value of Empty-String. NULL should be restricted to cases where a valid and known value will be supplied later, or to signal some sort of validation failure that must be resolved. **From OP below:** > A row will not be updated. On the insertion there is either an IP > address or there is none (because it could not be resolved). **Response:** Then I recommend using **Empty-String** as the default, and make the field NON-NULL. Only use **NULL** when you must, as it has subtle disadvantages.
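The three-valued behaviour described above can be simulated in a few lines by modelling SQL's NULL as Python's None. This is a sketch for illustration only (real SQL engines implement this natively; here UNKNOWN is represented by None):

```python
def sql_eq(a, b):
    # Any comparison involving NULL yields UNKNOWN (None here).
    return None if a is None or b is None else a == b

def sql_not(a):
    # NOT UNKNOWN is still UNKNOWN.
    return None if a is None else not a

def sql_or(a, b):
    # OR is true if either side is true; UNKNOWN if undecided.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

A = None  # the column value is NULL
print(sql_eq(A, 0))                                  # None
print(sql_or(sql_eq(A, 0), sql_not(sql_eq(A, 0))))   # None -- "A = 0 OR A <> 0" is not true
```

The last line is the law-of-excluded-middle failure from the truth table: neither branch is true, so the whole disjunction filters as false in a WHERE clause.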
NULL or empty string more efficient/natural?
[ "", "mysql", "sql", "database", "database-design", "database-schema", "" ]
For [Jedi](https://github.com/davidhalter/jedi) we want to generate our [test coverage](https://github.com/davidhalter/jedi/issues/196). There is a [related question](https://stackoverflow.com/q/8636828/727827) in stackoverflow, but it didn't help. We're using py.test as a test runner. However, we are unable to add the imports and other "imported" stuff to the report. For example `__init__.py` is always reported as being uncovered: ``` Name Stmts Miss Cover -------------------------------------------------- jedi/__init__ 5 5 0% [..] ``` Clearly this file is being imported and should therefore be reported as tested. We start tests like this [\*]: ``` py.test --cov jedi ``` As you can see we're using `pytest-coverage`. So how is it possible to properly count coverage of files like `__init__.py`? [\*] We also tried starting test without `--doctest-modules` (removed from `pytest.ini`) and activate the coverage module earlier by `py.test -p pytest_cov --cov jedi`. Neither of them work. I've offered a bounty. Please try to fix it within Jedi. It's publicly available.
@hynekcer gave me the right idea. But basically the easiest solution lies somewhere else: **Get rid of `pytest-cov`!**

Use

```
coverage run --source jedi -m py.test
coverage report
```

instead!!!

This way you're just running coverage on your current py.test configuration, which works perfectly fine! It's also philosophically the right way to go: Make each program do one thing well - `py.test` runs tests and `coverage` checks the code coverage.

Now this might sound like a rant, but really. `pytest-cov` hasn't been working properly for a while now. Some tests were failing, just because we used it.

---

As of **2014**, pytest-cov seems to have changed hands. `py.test --cov jedi test` seems to be a useful command again (look at the comments). However, you don't need to use it. But in combination with `xdist` it can speed up your coverage reports.
**I fixed the test coverage to 94%** by [this patch](https://github.com/davidhalter/jedi/pull/214) that simplifies import dependencies and by the command:

```
py.test --cov jedi test                    # or
py.test --cov jedi test --cov-report=html  # + a listing with red uncovered lines
```

Uncovered lines are only in conditional commands or in some less used functions, but all headers are completely covered.

The problem was that the test configuration `test/conftest.py` prematurely imported almost all files in the project through its dependencies. The conftest file also defines additional command line options and settings that should be set before running the test. Therefore I think that the pytest_cov plugin works correctly if it ignores everything that was imported together with this file, although it is a pain. I also excluded `__init__.py` and `settings.py` from the report because they are simple and have complete coverage, but they are also imported prematurely in the dependencies of conftest.
Using py.test with coverage doesn't include imports
[ "", "python", "pytest", "coverage.py", "" ]
I want to print a double in Python. But it doesn't work, and I don't know why. My code: ``` test = 3000 / (1500 * 2000) print str(test) ``` I always get a `0` and not a `0.001` even if I do the following: ``` test = 3000 / (1500 * 2000) print '%.10f' % test ``` I get a `0.000000000` and not `0.001`. How do I tell Python that this should be a double?
In Python 2.x you need to convert at least one of the operands to float, as integer division produces truncated output in Python 2.x:

```
>>> 3000 / (1500 * 2000)
0
>>> 3000.0 / (1500 * 2000)  # Or use float(3000)
0.001
>>> float(3000) / (1500 * 2000)
0.001
```

Or you can import Python 3.x's division behavior into Python 2.x, where `/` performs *true* division even on integers:

```
>>> from __future__ import division
>>> 3000 / (1500 * 2000)
0.001
```

Note that this will affect division in the whole module.
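A minimal check of the two operators in Python 3, where `/` is always true division and `//` is floor division:

```python
# / always returns the true quotient; // truncates toward negative infinity.
true_div = 3000 / (1500 * 2000)
floor_div = 3000 // (1500 * 2000)
print(true_div)   # 0.001
print(floor_div)  # 0
```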
That's because you only used integer values in the math operation. Thus the result will be a (truncated) integer. If you want a float result, involve at least one float operand.
Force Python calculation in double
[ "", "python", "" ]
I need to make a smart menu, for which I need a ManyToMany relation. My model is: ``` from django.db import models class Health_plan(models.Model): a = models.IntegerField () b = models.IntegerField () class Doctors_list(models.Model): name = models.CharField(max_length=30) hp_id = models.ManyToManyField(Health_plan) def __unicode__(self): return self.name ``` How do I make this relation in the database ? I was thinking in puting the `health_plans` (a,b) as columns, and the doctors as rows, with 0s and 1s to identify their covered health\_plans. Someone told me this was a misuse of a `ManyToManyField`, I don't know wich step to take. Help appreciated
The approach of putting the `health_plans` as columns is not necessarily wrong, but it implies that you have a fixed number of health plans and that you will never add a new one. The traditional approach for many-to-many relationships in relational databases is to introduce a table in the middle. This table will just contain the association between a doctor and a health plan. If you have a `Doctor` table that contains:

```
id name
1 foo
2 bar
```

And a `HealthPlan` table:

```
id model
1 a
2 b
```

You then add a table `Doctor_HealthPlan` that is like:

```
doctor_id healthplan_id
1 2
2 1
2 2
```

The `ManyToMany` field type in django will automatically create this table for you. Your code is correct, but you should probably rename `hp_id` to something like `health_plans`, since it is a proxy that allows you to access the list of health plans associated to a doctor.
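The junction-table layout Django generates can be sketched directly in SQL; here is an illustrative in-memory SQLite version of the three tables above (names chosen to mirror the example, not Django's actual generated names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE doctor (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE healthplan (id INTEGER PRIMARY KEY, model TEXT);
    CREATE TABLE doctor_healthplan (doctor_id INTEGER, healthplan_id INTEGER);
    INSERT INTO doctor VALUES (1, 'foo'), (2, 'bar');
    INSERT INTO healthplan VALUES (1, 'a'), (2, 'b');
    INSERT INTO doctor_healthplan VALUES (1, 2), (2, 1), (2, 2);
""")

# All health plans for doctor 'bar', via the association table.
bar_plans = [m for (m,) in conn.execute("""
    SELECT h.model
    FROM healthplan h
    JOIN doctor_healthplan dh ON dh.healthplan_id = h.id
    JOIN doctor d ON d.id = dh.doctor_id
    WHERE d.name = 'bar'
    ORDER BY h.model
""")]
print(bar_plans)  # ['a', 'b']
```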
Django's ORM already takes care of the intermediate table so you don't have to "make this relation(ship) in the database", but given your question you obviously need to learn about proper relational model normalisation - if you don't understand the relational model you won't get anywhere with Django's ORM, nor with any other SQL stuff FWIW.

For the record, in the relational model, a many to many relationship is modeled as a relation ("table" in SQL) with foreign keys on both other tables, ie:

```
health_plan(#health_plan_id, name, ...)
doctor(#doctor_id, firstname, lastname, ...)
doctors_health_plans(#health_plan_id, #doctor_id)
```

So your django models should be:

```
class HealthPlan(models.Model):
    # no need to define an 'id' field,
    # the ORM provides one by default
    name = models.CharField(....)

class Doctor(models.Model):
    firstname = models.CharField(....)
    lastname = models.CharField(....)
    health_plans = models.ManyToManyField(HealthPlan, related_name="doctors")
```

Then you'll be able to get all HealthPlans for a Doctor :

```
doc = Doctor.objects.get(pk=xxxx)
doc.health_plans.all()
```

and all Doctors for a HealthPlan:

```
plan = HealthPlan.objects.get(pk=xxxx)
plan.doctors.all()
```

The FineManual(tm) is your friend as usual...
Django ManyToMany Field
[ "", "python", "database", "django", "model", "many-to-many", "" ]
This question is making me pull my hair out.

If I do:

```
def mygen():
    for i in range(100):
        yield i
```

and call it from one thousand threads, how does the generator know what to send next for each thread?

Every time I call it, does the generator save a table with the counter and the caller reference, or something like that?

It's weird. Please, clarify my mind on that one.
`mygen` does not have to remember anything. Every call to `mygen()` returns an independent iterable. These iterables, on the other hand, have state: Every time `next()` is called on one, it jumps to the correct place in the generator code -- when a `yield` is encountered, control is handed back to the caller. The actual implementation is rather messy, but in principle you can imagine that such an iterator stores the local variables, the bytecode, and the current position in the bytecode (a.k.a. instruction pointer). There is nothing special about threads here.
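A small sketch of that independence - each call to `mygen()` returns a fresh iterator with its own position, so advancing one has no effect on another:

```python
def mygen():
    for i in range(100):
        yield i

g1 = mygen()
g2 = mygen()
first_three = [next(g1), next(g1), next(g1)]
g2_first = next(g2)
print(first_three, g2_first)  # [0, 1, 2] 0 -- g2 is unaffected by g1
```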
A function like this, when called, will return a generator object. If you have separate threads calling `next()` on the same generator object, they will interfere with each other. That is to say, 5 threads calling `next()` 10 times each will get 50 different yields.

If two threads each create a generator by calling `mygen()` within the thread, they will have separate generator objects. A generator is an object, and its state will be stored in memory, so two threads that each call `mygen()` will refer to separate objects. It'd be no different than two threads creating an object from a `class`; they'll each have a different object, even though the class is the same.

If you're coming at this from a C background, this is **not** the same thing as a function with `static` variables. The state is maintained in an object, not statically in the variables contained in the function.
How do Python Generators know who is calling?
[ "", "python", "generator", "" ]
My query is like this. ``` select id,name,salary,(salary-2000) AS [Deductedsalary] from EmployeeDetails ``` I want to use the [Deductedsalary] column in where clause, similar to below query. ``` select id, name, salary, (salary-2000) AS [Deductedsalary] from EmployeeDetails where [Deductedsalary] > 5000 ``` NOTE: I don't want to use filter query with 'salary' column
You can use a subquery to define the alias: ``` select * from ( select salary-2000 as deducted_salary , * from EmployeeDetails ) SubQueryAlias where Deductedsalary > 5000 ```
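A quick way to see this working is an in-memory SQLite sketch (the ids, names and salaries are made up): the alias defined inside the subquery can be filtered on in the outer query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmployeeDetails (id INTEGER, name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO EmployeeDetails VALUES (?, ?, ?)",
                 [(1, 'ann', 6000), (2, 'bob', 8000), (3, 'cid', 9000)])

rows = conn.execute("""
    SELECT id, name, salary, Deductedsalary
    FROM (SELECT id, name, salary, salary - 2000 AS Deductedsalary
          FROM EmployeeDetails) SubQueryAlias
    WHERE Deductedsalary > 5000
    ORDER BY id
""").fetchall()
print(rows)  # [(2, 'bob', 8000, 6000), (3, 'cid', 9000, 7000)]
```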
You can't do it directly - you'll have to either filter on the underlying `Salary` column, or use something like a CTE: ``` WITH CTE AS ( SELECT id, name, salary, DeductedSalary = salary - 2000 FROM dbo.EmployeeDetails ) SELECT id, name, salary, DeductedSalary FROM CTE WHERE [Deductedsalary] > 5000 ```
Using temp column to filter the results
[ "", "sql", "sql-server-2008", "t-sql", "" ]
Can someone tell me how exactly Python's for loops are implemented? The reason I'm asking this is because I'm getting different behavior in the following two for loops when I expect the same behavior (assuming `cases` is just a set of elements):

First for loop:

```
for case in cases:
    blah
```

Second for loop:

```
for i in range(len(cases)):
    case = cases[i]
    blah
```

I'm running my code in a multi-threaded environment. Basically, I'm wondering whether Python's for loop's iterating over a set (as in the first for loop) is simply shorthand for the second one. What exactly happens when we use the python for loop, and is there any underlying optimization/implementation that may be causing the behavior difference I'm observing?
No, the second format is quite different.

The `for` loop calls `iter()` on the to-loop-over sequence, and uses `next()` calls on the result. Consider it the equivalent of:

```
iterable = iter(cases)
while True:
    try:
        case = next(iterable)
    except StopIteration:
        break

    # blah
```

The result of calling `iter()` on a list is a list iterator object:

```
>>> iter([])
<list_iterator object at 0x10fcc6a90>
```

This object keeps a reference to the original list and keeps track of the index it is at. That index starts at 0 and increments until the list has been iterated over fully.

Different objects can return different iterators with different behaviours.

With threading mixed in, you could end up replacing `cases` with something else, but the iterator would still reference the old sequence.
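A short illustration of that last point - rebinding the name does not affect an iterator that was already created from the old object:

```python
cases = [1, 2, 3]
it = iter(cases)
first = next(it)          # consume the 1
cases = ['something', 'else']  # rebinds the name; the iterator keeps the old list
rest = list(it)
print(first, rest)  # 1 [2, 3]
```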
``` l = [1, 2, 3, 4, 5] l = iter(l) while True: try: print l.next() except StopIteration: exit() ```
Python for loop implementation
[ "", "python", "for-loop", "implementation", "" ]
Is there a short way to get number of objects in pickled file - shorter than writing a function that opens the file, keeps calling `pickle.load` method and updating `num_of_objs` by 1 until it catches `EOFError` and returns the value?
No, there isn't. The pickle format does not store that information. If you need that type of metadata, you need to add it to the file yourself when writing: ``` pickle.dump(len(objects), fileobj) for ob in objects: pickle.dump(ob, fileobj) ``` Now the first record tells you how many more are to follow.
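A round-trip sketch of that idea, using an in-memory buffer in place of a real file:

```python
import io
import pickle

# Write the count as the first record, then each object after it.
objects = ['a', 'b', 'c']
buf = io.BytesIO()
pickle.dump(len(objects), buf)
for ob in objects:
    pickle.dump(ob, buf)

# Reading back: the first record tells us how many more to load.
buf.seek(0)
n = pickle.load(buf)
loaded = [pickle.load(buf) for _ in range(n)]
print(n, loaded)  # 3 ['a', 'b', 'c']
```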
There is no direct way of finding the length of a pickle, but if you are afraid of running an endless loop you could try the following, ``` company_id_processed=[] with open("responses_pickle.pickle", "rb") as f: while True: try: current_id=pickle.load(f)['name'] company_id_processed.append(current_id) except EOFError: print 'Pickle ends' break ```
How to get number of objects in a pickle?
[ "", "python", "python-3.x", "pickle", "" ]
I am doing the following:

```
'm:max'.lstrip('m:')
'ax'
```

What I need is to remove the `"m:"` only. I have a lot of other combinations, which I am using to filter search results -- for example `"g:Holiday"` would search a particular category for "Holiday". Also, sometimes there is no prefix, which would mean it would search all categories. How would I do this?
This is the only way I could figure out how to accomplish the above: ``` re.sub(r'^[a-z]:','',string) ```
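For illustration, wrapping that substitution in a small helper (the helper name is made up); strings without a lowercase-letter-plus-colon prefix pass through unchanged:

```python
import re

def strip_prefix(s):
    # drop a single "letter + colon" category prefix, if present
    return re.sub(r'^[a-z]:', '', s)

results = [strip_prefix(s) for s in ['m:max', 'g:Holiday', 'Holiday']]
print(results)  # ['max', 'Holiday', 'Holiday']
```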
You are not 100% clear on what the rules are, but if it's simply an issue of removing everything before the first colon, then you can use `split`: ``` >>> 'm:abc'.split(':',1)[-1] 'abc' >>> 'mabc'.split(':',1)[-1] 'mabc' >>> 'm:a:bc'.split(':',1)[-1] 'a:bc' ``` The second argument to `split` limits the number of splits to perform, and the `[-1]` gets the right part of the split (or the first element if there are no splits). See [the documentation on `str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split). If, however, the colon *must* be in second position: ``` def remove_prefix(s): return s[2:] if len(s) > 1 and s[1] == ':' else s ```
How to do a non-greedy lstrip()
[ "", "python", "" ]
I would like to set the default key-arguments of an instance method dynamically. For example, with

```
class Module(object):
    def __init__(self, **kargs):
        set-default-key-args-of-method(self.run, kargs) # change run arguments

    def run(self, **kargs):
        print kargs
```

We would have:

```
m = Module(ans=42)
m.run.im_func.func_code.co_argcount  # => 2
m.run.im_func.func_code.co_varnames  # => ('self','ans','kargs')
m.run.im_func.func_defaults          # => (42,)
m.run()                              # print {'ans':42}
```

I tried something with `types.CodeType` (which I don't really understand) for a function (not a method) and got it to work (well *not-to-fail*), but the added key-arguments did not show in the kargs dictionary of the function (it only printed {}).

The change has to be done for the current instance only. Actually, I am using a class right now (I'm OO in my mind) so I would like to do it with a class method, but a function is maybe better. Something like:

```
def wrapped_run(**kargs):
    def run(**key_args):
        print key_args
    return wrap-the-run-function(run, kargs)

run = wrapped_run(ans=42)
run.func_code.co_argcount  # => 1
run.func_code.co_varnames  # => ('ans','key_args')  ## keep the 'key_args' or not
run.func_defaults          # => (42,)
run()                      # print {'ans':42}
```

Any advice or idea is welcome.

*A little on the context:* The Module class is some kind of function wrapper, which can be used to include the lower-end function in a dataflow system automatically but add intermediate procedures. I would like the module run function (actually, it will probably be its `__call__` function) to have the correct API in order for the dataflow system to nicely generate the correct module's input transparently.

I'm using python 2.7
For the sake of closure, I give the only solution that was found: use `exec` (proposed by **mgilson**)

```
import os, new

class DynamicKargs(object):
    """ Class that makes a run method with same arguments
        as those given to the constructor """
    def __init__(self, **kargs):
        karg_repr = ','.join([str(key)+'='+repr(value) \
                              for key,value in kargs.iteritems()])
        exec 'def run(self,' + karg_repr + ',**kargs):\n    return self._run(' + karg_repr + ',**kargs)'
        self.run = new.instancemethod(run, self)

    def _run(self, **kargs):
        print kargs


# this can also be done with a function
def _run(**kargs):
    print kargs

def dynamic_kargs(**kargs):
    karg_repr = ','.join([str(key)+'='+repr(value) for key,value in kargs.iteritems()])
    exec 'def run(' + karg_repr + ',**kargs):\n    return _run(' + karg_repr + ',**kargs)'
    return run


# example of use
# --------------
def example():
    dyn_kargs = DynamicKargs(question='ultimate', answer=42)
    print 'Class example \n-------------'
    print 'var number:', dyn_kargs.run.im_func.func_code.co_argcount
    print 'var names: ', dyn_kargs.run.im_func.func_code.co_varnames
    print 'defaults:  ', dyn_kargs.run.im_func.func_defaults
    print 'run print: ', dyn_kargs.run()

    print ''
    dyn_kargs = dynamic_kargs(question='foo', answer='bar')
    print 'Function example \n----------------'
    print 'var number:', dyn_kargs.func_code.co_argcount
    print 'var names: ', dyn_kargs.func_code.co_varnames
    print 'defaults:  ', dyn_kargs.func_defaults
    print 'run print: ', dyn_kargs()
```

The `example` function prints:

```
Class example 
-------------
var number: 3
var names:  ('self', 'answer', 'question', 'kargs')
defaults:   (42, 'ultimate')
run print:  {'answer': 42, 'question': 'ultimate'}

Function example 
----------------
var number: 2
var names:  ('answer', 'question', 'kargs')
defaults:   ('bar', 'foo')
run print:  {'answer': 'bar', 'question': 'foo'}
```

However:

* there might be problems if argument values are not well represented by their `repr`
* I think it is too complicated (thus not pythonic), and personally, I did not use it
You might be looking for something like this: ``` class Module(object): def __init__(self, **kargs): old_run = self.run.im_func def run(self,**kwargs): kargs_local = kargs.copy() kargs.update(kwargs) return old_run(self,**kargs) self.run = run.__get__(self,Module) def run(self,**kargs): print kargs m1 = Module(foo=3,bar='baz') m1.run() print type(m1.run) m2 = Module(foo=4,qux='bazooka') m2.run() print type(m2.run) ``` I've just created a wrapper instancemethod around the previous function. (partially inspired by [this post](https://stackoverflow.com/questions/1015307/python-bind-an-unbound-method)). Alternatively: ``` from functools import partial from types import MethodType class Module(object): def __init__(self, **kargs): self.run = MethodType(partial(self.run.im_func,**kargs),self,Module) def run(self,**kargs): print kargs ``` but this still doesn't give the API you're looking for...
Dynamically adding key-arguments to method
[ "", "python", "dynamic", "methods", "named-parameters", "" ]
I'm working on a renderfarm, and I need my clients to be able to launch multiple instances of a renderer, without blocking so the client can receive new commands. I've got that working correctly, however I'm having trouble terminating the created processes. At the global level, I define my pool (so that I can access it from any function): ``` p = Pool(2) ``` I then call my renderer with apply\_async: ``` for i in range(totalInstances): p.apply_async(render, (allRenderArgs[i],args[2]), callback=renderFinished) p.close() ``` That function finishes, launches the processes in the background, and waits for new commands. I've made a simple command that will kill the client and stop the renders: ``` def close(): ''' close this client instance ''' tn.write ("say "+USER+" is leaving the farm\r\n") try: p.terminate() except Exception,e: print str(e) sys.exit() ``` It doesn't seem to give an error (it would print the error), the python terminates but the background processes are still running. Can anyone recommend a better way of controlling these launched programs?
Found the answer to my own question. The primary problem was that I was calling a third-party application rather than a function. When I call the subprocess [either using call() or Popen()] it creates a new instance of python whose only purpose is to call the new application. However when python exits, it will kill this new instance of python and leave the application running. The solution is to do it the hard way, by finding the pid of the python process that is created, getting the children of that pid, and killing them. This code is specific for osx; there is simpler code (that doesn't rely on grep) available for linux. ``` for process in pool: processId = process.pid print "attempting to terminate "+str(processId) command = " ps -o pid,ppid -ax | grep "+str(processId)+" | cut -f 1 -d \" \" | tail -1" ps_command = Popen(command, shell=True, stdout=PIPE) ps_output = ps_command.stdout.read() retcode = ps_command.wait() assert retcode == 0, "ps command returned %d" % retcode print "child process pid: "+ str(ps_output) os.kill(int(ps_output), signal.SIGTERM) os.kill(int(processId), signal.SIGTERM) ```
I found a solution: stop the pool in a separate thread, like this:

```
def close_pool():
    global pool
    pool.close()
    pool.terminate()
    pool.join()

def term(*args,**kwargs):
    sys.stderr.write('\nStopping...')
    # httpd.shutdown()
    stophttp = threading.Thread(target=httpd.shutdown)
    stophttp.start()
    stoppool=threading.Thread(target=close_pool)
    stoppool.daemon=True
    stoppool.start()


signal.signal(signal.SIGTERM, term)
signal.signal(signal.SIGINT, term)
signal.signal(signal.SIGQUIT, term)
```

Works fine, and I have always tested it this way.

```
signal.SIGINT
```

Interrupt from keyboard (CTRL + C). Default action is to raise KeyboardInterrupt.

```
signal.SIGKILL
```

Kill signal. It cannot be caught, blocked, or ignored.

```
signal.SIGTERM
```

Termination signal.

```
signal.SIGQUIT
```

Quit with core dump.
How to terminate multiprocessing Pool processes?
[ "", "python", "multiprocessing", "python-multiprocessing", "pool", "" ]
I just realized that `json.dumps()` adds spaces in the JSON object

e.g.

```
{'duration': '02:55', 'name': 'flower', 'chg': 0}
```

how can I remove the spaces in order to make the JSON more compact and save bytes to be sent via HTTP?

such as:

```
{'duration':'02:55','name':'flower','chg':0}
```
```
json.dumps(obj, separators=(',', ':'))
```
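A small sketch of the effect, using the data from the question:

```python
import json

data = {"duration": "02:55", "name": "flower", "chg": 0}
default = json.dumps(data)
compact = json.dumps(data, separators=(',', ':'))
print(compact)  # {"duration":"02:55","name":"flower","chg":0}
print(len(default) - len(compact))  # bytes saved by dropping the spaces
```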
In some cases you may want to get rid of the **trailing whitespaces** only. You can then use

```
json.dumps(obj, separators=(',', ': '))
```

There is a space after `:` but not after `,`. This is useful for diff'ing your JSON files (in version control such as `git diff`), where some editors will get rid of the trailing whitespace but python `json.dump` will add it back.

*Note: This does not exactly answer the question on top, but I came here looking for this answer specifically. I don't think that it deserves its own QA, so I'm adding it here.*
Python - json without whitespaces
[ "", "python", "json", "serialization", "" ]
I have a list with floats, each number with 3 decimals (eg. 474.259). If I verify the number in the list like this: ``` if 474.259 in list_sample: print "something!" ``` Then the message is shown, but if I take the number from another list and I round it: ``` number = other_list[10] number = round(number, 3) if number == 474.259: print "the numbers are same!" if number in list_sample: print "something!" ``` The second message is not shown.
Comparing floating point numbers for exact equality usually won't do what you want. This is because floating point numbers in computers have a representation (storage format) which is inherently inaccurate for many real numbers. I suggest reading about it here: <http://floating-point-gui.de/> and doing something like a "fuzzy compare" using an "epsilon" tolerance value to consider the numbers equal so long as they differ by less than x% or whatever.
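One sketch of such a fuzzy comparison, here using `math.isclose` (available since Python 3.5) as the tolerance check:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False -- exact equality on floats is brittle
print(math.isclose(a, 0.3))  # True

# A fuzzy replacement for the "in" membership test:
list_sample = [474.259, 0.3]
print(any(math.isclose(a, x) for x in list_sample))  # True
```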
You could also follow an approach where you compare the values based on an arbitrary precision. For example, convert all your floats like this:

```
def internalPrecision(number):
    precision = 1000
    return int(round(number * precision))
```

If you do this, both operators `==` and `in` should work.
using list operator "in" with floating point values
[ "", "python", "numpy", "" ]
I have the following dict that contains the following data:

```
response = {"status":"ERROR","email":"EMAIL_INVALID","name":"NAME_INVALID"}
```

I am trying to create a new dict based on 'response' that is supposed to look like the following:

```
{'api_error': {'list': [{'converted_value': 'No special characters allowed.',
                         'field': 'name',
                         'value': 'NAME_INVALID'},
                        {'converted_value': 'invalid email',
                         'field': 'email',
                         'value': 'EMAIL_INVALID'}],
               'status': 'ERROR'},
 'email': 'EMAIL_INVALID',
 'email_label': 'invalid email',
 'name': 'NAME_INVALID',
 'name_label': 'No special characters allowed.',
 'status': 'ERROR'}
```

So far I have been able to do the following:

```
ret = {}
for k in response:
    if k != 'status':
        ret[k+"_label"] = convert(response[k])
        ret[k] = response[k]
    else:
        ret[k] = convert(response[k])
```

where the 'convert' function translates each value of response. For example NAME\_INVALID is converted to 'No special characters allowed.' and so on.

Here is the output of what the above code is doing:

```
{"status":"ERROR","name_label":"No special characters allowed.",
"email_label":"invalid email","name":"NAME_INVALID","email":"EMAIL_INVALID"}
```

I am having trouble creating the rest of the dictionary, the one whose key is 'api\_error'. What would be the most efficient way of doing that?
``` import pprint response = {"status": "ERROR", "email": "EMAIL_INVALID", "name": "NAME_INVALID"} def convert(label): return {'NAME_INVALID': 'No special characters allowed', 'EMAIL_INVALID': 'invalid email', 'ERROR': 'ERROR'}[label] ret = {} for k in response: if k != 'status': ret[k + "_label"] = convert(response[k]) ret[k] = response[k] info = {'converted_value': ret[k + "_label"], 'field': k, 'value': response[k]} (ret.setdefault('api_error', {}) .setdefault('list', []) .append(info)) else: ret[k] = convert(response[k]) ret.setdefault('api_error', {})['status'] = ret[k] pprint.pprint(ret) ``` yields ``` {'api_error': {'list': [{'converted_value': 'invalid email', 'field': 'email', 'value': 'EMAIL_INVALID'}, {'converted_value': 'No special characters allowed', 'field': 'name', 'value': 'NAME_INVALID'}], 'status': 'ERROR'}, 'email': 'EMAIL_INVALID', 'email_label': 'invalid email', 'name': 'NAME_INVALID', 'name_label': 'No special characters allowed', 'status': 'ERROR'} ```
make one more function that will be like

```
def make2nddict(response):
    for k in response:
        if k != 'status':
            d = {}
            d['converted_value'] = convert(k)
            d['field'] = k
            d['value'] = response[k]
            arr.append(d)
        else:
            final[k] = response[k]
    final['list'] = arr

arr = []
final = {}

def convert(error):
    if error == 'NAME_INVALID':
        return 'No special characters allowed'
    elif error == 'EMAIL_INVALID':
        return 'EMAIL_INVALID'
    else:
        return error

ret = {}
for k in response:
    if k != 'status':
        ret[k+"_label"] = convert(response[k])
        ret[k] = response[k]
    else:
        ret[k] = convert(response[k])
```

put the output of both functions in your api\_error dictionary. Good luck!
create a python dictionary based on another dictionary
[ "", "python", "list", "dictionary", "" ]
I have found some answers to this question before, but they seem to be obsolete for the current Python versions (or at least they don't work for me). I want to check if a substring is contained in a list of strings. I only need the boolean result. I found this solution: ``` word_to_check = 'or' wordlist = ['yellow','orange','red'] result = any(word_to_check in word for word in worldlist) ``` From this code I would expect to get a `True` value. If the word was "der", then the output should be `False`. However, the result is a generator function, and I can't find a way to get the `True` value. Any idea?
You can import `any` from `__builtin__` in case it was replaced by some other `any`: ``` >>> from __builtin__ import any as b_any >>> lst = ['yellow', 'orange', 'red'] >>> word = "or" >>> b_any(word in x for x in lst) True ``` Note that in Python 3 `__builtin__` has been renamed to `builtins`.
## Posted code The OP's posted code using *any()* is correct and should work. The spelling of "worldlist" needs to be fixed though. ## Alternate approach with str.join() That said, there is a simple and fast solution to be had by using the substring search on a single combined string: ``` >>> wordlist = ['yellow','orange','red'] >>> combined = '\t'.join(wordlist) >>> 'or' in combined True >>> 'der' in combined False ``` For short wordlists, this is several times faster than the approach using *any*. And if the *combined* string can be precomputed before the search, the in-operator search will always beat the *any* approach even for large wordlists. ## Alternate approach with sets The O(n) search speed can be reduced to O(1) if a substring set is precomputed in advance and if we don't mind using more memory. Precomputed step: ``` from itertools import combinations def substrings(word): for i, j in combinations(range(len(word) + 1), 2): yield word[i : j] wordlist = ['yellow','orange','red'] word_set = set().union(*map(substrings, wordlist)) ``` Fast O(1) search step: ``` >>> 'or' in word_set True >>> 'der' in word_set False ```
Check if substring is in a list of strings?
[ "", "python", "string", "list", "" ]
So I'm trying to make a text adventure, and I'm attempting to keep track of stats by using lists. So for example I have:

```
User_stats = []
User_weapon = ""
User_armor = ""
Sword = 10
Pike = 13
Crude = 2
Heavy = 5

print (weapons)
print ("First lets pick a weapon,(enter the type of weapon, not the name)")
User_weapon = input()
User_stats.append(User_weapon)
print (armor)
print("Now lets pick some armor (enter the first word of the armor)")
User_armor = input ()
User_stats.append(User_armor)
print (User_stats)
```

which prints the list of what the user chose, such as `[Sword, Crude]`. Is there a way to pull the values off of those variables and sum them (in order to determine if an attack succeeds or not)? Thanks for any help!
You should have some kind of dictionary for this which holds the relation from weapon/armor type to value:

```
weapons = {
    'Sword': 10,
    'Pike': 13
}
armors = {
    'Crude': 2,
    'Heavy': 5
}
```

Then when you know the user has chosen a weapon, you can just get the value using `weapons['Sword']`, or with your variables `weapons[User_weapon]` or even `weapons[User_stats[0]]`.
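Putting the pieces together, summing the chosen weapon and armor is then a single lookup per item (values here match the question's numbers):

```python
weapons = {'Sword': 10, 'Pike': 13}
armors = {'Crude': 2, 'Heavy': 5}

user_stats = ['Sword', 'Crude']  # what the user picked
attack = weapons[user_stats[0]] + armors[user_stats[1]]
print(attack)  # 12
```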
dictionaries may be what you want. ``` >>>weapons = {'axe': 2, 'sword': 1 } >>>weaponMods = {'crude': -1, 'amazing': 20 } >>>print weapons['axe'] 2 >>> print weapons['axe'] + weaponMods['crude'] 1 ```
Python Summing variables in a list
[ "", "python", "text", "" ]
So basically, I want to order my query by how many times a given column has a value. I have done this by doing a join with another query that counts the repetitions, but I would like to avoid the other extensive query. I'm thinking there might be some trick with order by that does what I want. NOTE: The query I'm working with is huge and filled with company info, so I'm going to write an example query that exemplifies my problem. Using ``` SELECT user1, rev1, user2, rev2, userCount FROM someTable, (SELECT user2, count(*) userCount FROM someTable WHERE something1 = something2) counts WHERE something1 = something2 AND counts.user2 = someTable.user2 ORDER BY userCount DESC ``` I get the correct result, so I tried ``` SELECT user1, rev1, user2, rev2 FROM someTable WHERE something1 = something2 ORDER BY COUNT(user2) DESC ``` but this doesn't yield the results I want. Is there some SQL trick that solves this for me? Thanks!
```
SELECT user1, rev1, user2, rev2, COUNT(*) OVER (PARTITION BY user2) as c
FROM someTable
WHERE something1 = something2
ORDER BY c DESC
```

edit: beaten by Gordon by 30 secs!! :)
Your query doesn't work because you are using a `count()` without a corresponding `group by`. It just doesn't make sense, because you don't have an aggregation query. A good way to do what you want to do is using analytic functions: ``` SELECT user1, rev1, user2, rev2 FROM (select t.*, COUNT(*) over (partition by user2) as numuser2 from someTable WHERE something1 = something2 ) t ORDER BY numuser2 desc, user2 ``` The `count(*) over` syntax is going to count the rows for each `user2` value and append that to the row. By the way, I also added `user2` to the `order by`. This way, if some values have the same count, they will still be separated.
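Here is a portable in-memory SQLite sketch of "order rows by how often `user2` appears", using a grouped subquery (the same idea the analytic `COUNT(*) OVER` expresses more directly; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE someTable (user2 TEXT)")
conn.executemany("INSERT INTO someTable VALUES (?)",
                 [('a',), ('b',), ('b',), ('c',), ('b',)])

rows = conn.execute("""
    SELECT t.user2, c.n
    FROM someTable t
    JOIN (SELECT user2, COUNT(*) AS n FROM someTable GROUP BY user2) c
      ON c.user2 = t.user2
    ORDER BY c.n DESC, t.user2
""").fetchall()
print(rows)  # [('b', 3), ('b', 3), ('b', 3), ('a', 1), ('c', 1)]
```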
Ordering query results by row repetitions without join
[ "", "sql", "database", "oracle", "" ]
these are my tables. ``` Step stepid name a place1 b place2 c place3 d place4 e place5 f place6 Stage stageid start finish 1 a b 2 b c 3 c d 4 d e 5 e f Trip tripid stageid 1 1 1 2 1 3 1 4 1 5 ``` I want to achieve this query result ``` tripid stageid 1 place1,place2,place3,place4,place5,place6 ``` I know how put all ids into one field and how to replace ids by their assigned names. However I do not know how to join both of these solutions. This is the sql I'm using for placing ids into one field. ``` SELECT REPLACE(stageid,'''') as stages FROM (SELECT SYS_CONNECT_BY_PATH(stageid,' ') stageid, level FROM trip START WITH stageid = (select min(stageid) from trip) CONNECT BY PRIOR stageid < stageid ORDER BY level DESC) WHERE rownum = 1; ``` Start and finish are foreign keys of stepid and stageid of trip table is a foreign key of of stageid in Stage table I work on oracle developer ver 3.2. Can you please help.
Assuming you have access to `LISTAGG`, then this is one option which should be pretty close: ``` SELECT tripid, stages || ',' || step.name FROM ( SELECT trip.tripid, LISTAGG(step.name, ',') within group (order by stage.stageid) stages, MAX(stage.stageId) maxStageId FROM stage INNER JOIN step on stage.startstep = step.stepid INNER JOIN trip on stage.stageid = trip.stageid GROUP BY trip.tripid ) t INNER JOIN stage ON t.maxStageId = stage.stageId INNER JOIN step on stage.finishstep = step.stepid ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!4/25be3/2) Basically it uses the startstep to build the list, and then concatonates the last finishstep based on the max stage id.
I think the simplest way to do this is just to use `listagg()` to aggregate the strings. Join the tables together and then aggregate them: ``` select t.tripid, listagg(st.name, ',') within group (order by st.stepid) from trip t join stage s on t.stageid = s.stageid join step st on st.stepid in (s.start, s.finish) group by t.tripid ```
How to join ids of two columns into one field and replace those ids with their names
[ "", "sql", "oracle", "" ]
I am looking to figure out both the current Battery Capacity and the Design Capacity. So far what I could get to work is using the [Win32\_Battery() class](http://msdn.microsoft.com/en-us/library/windows/desktop/aa394074%28v=vs.85%29.aspx) which doesn't give all the information I need (at least not on my system). I used the pure-python [wmi library](http://timgolden.me.uk/python/wmi/index.html) for that. On the other hand I found this which works [In Python, how can I detect whether the computer is on battery power?](https://stackoverflow.com/questions/6153860/in-python-how-can-i-detect-whether-the-computer-is-on-battery-power), but unfortunately it doesn't provide any information on Capacity neither. The [Battery Information structure](http://msdn.microsoft.com/en-us/library/windows/desktop/aa372661%28v=vs.85%29.aspx) and the [Battery Status structure](http://msdn.microsoft.com/en-us/library/windows/desktop/aa372671%28v=vs.85%29.aspx) seem perfect for this. Now I know that I have to use the [DeviceIoControl function](http://msdn.microsoft.com/en-us/library/windows/desktop/aa372699%28v=vs.85%29.aspx) to do so and I found this [C++ code](http://msdn.microsoft.com/en-us/library/windows/desktop/bb204769%28v=vs.85%29.aspx) that explains it a little. I would prefer something that simply uses ctypes and not the python win32api provided by [pywin32](https://pypi.python.org/pypi/pywin32). If you have an idea how to do this in python please let me know! Thanks in advance.
Tim Golden's excellent [wmi](https://pypi.python.org/pypi/WMI/) module will, I believe, give you everything you want. You'll just have to do several queries to get everything: ``` import wmi c = wmi.WMI() t = wmi.WMI(moniker = "//./root/wmi") batts1 = c.CIM_Battery(Caption = 'Portable Battery') for i, b in enumerate(batts1): print 'Battery %d Design Capacity: %d mWh' % (i, b.DesignCapacity or 0) batts = t.ExecQuery('Select * from BatteryFullChargedCapacity') for i, b in enumerate(batts): print ('Battery %d Fully Charged Capacity: %d mWh' % (i, b.FullChargedCapacity)) batts = t.ExecQuery('Select * from BatteryStatus where Voltage > 0') for i, b in enumerate(batts): print '\nBattery %d ***************' % i print 'Tag: ' + str(b.Tag) print 'Name: ' + b.InstanceName print 'PowerOnline: ' + str(b.PowerOnline) print 'Discharging: ' + str(b.Discharging) print 'Charging: ' + str(b.Charging) print 'Voltage: ' + str(b.Voltage) print 'DischargeRate: ' + str(b.DischargeRate) print 'ChargeRate: ' + str(b.ChargeRate) print 'RemainingCapacity: ' + str(b.RemainingCapacity) print 'Active: ' + str(b.Active) print 'Critical: ' + str(b.Critical) ``` This is certainly not cross-platform, and it requires a 3rd party resource, but it does work very well.
The most reliable way to retrieve this information is by using [GetSystemPowerStatus](https://msdn.microsoft.com/en-us/library/windows/desktop/aa372693(v=vs.85).aspx) instead of WMI. [psutil](https://github.com/giampaolo/psutil) exposes this information under Linux, Windows and FreeBSD: ``` >>> import psutil >>> >>> def secs2hours(secs): ... mm, ss = divmod(secs, 60) ... hh, mm = divmod(mm, 60) ... return "%d:%02d:%02d" % (hh, mm, ss) ... >>> battery = psutil.sensors_battery() >>> battery sbattery(percent=93, secsleft=16628, power_plugged=False) >>> print("charge = %s%%, time left = %s" % (battery.percent, secs2hours(battery.secsleft))) charge = 93%, time left = 4:37:08 ``` The relevant commit is [here](https://github.com/giampaolo/psutil/pull/963/files).
Getting Battery Capacity in Windows with Python
[ "", "python", "windows", "ctypes", "pywin32", "battery", "" ]
I have a bunch of wordlists on a server of mine, and I've been planning to make a simple open-source JSON API that returns if a password is on the list1, as a method of validation. I'm doing this in Python with Flask, and literally just returning if input is present. One small problem: the wordlists total about 150 million entries, and 1.1GB of text. My API (minimal) is below. Is it more efficient to store every row in MongoDB and look up repeatedly, or to store the entire thing in memory using a singleton, and populate it on startup when I call `app.run`? Or are the differences subjective? Furthermore, is it even good practice to do the latter? I'm thinking the lookups might start to become taxing if I open this to the public. I've also had someone suggest a [Trie](http://en.wikipedia.org/wiki/Trie) for efficient searching. **Update:** I've done a bit of testing, and document searching is painfully slow with such a high number of records. Is it justifiable to use a database with proper indexes for a single column of data that needs to be efficiently searched? ``` from flask import Flask from flask.views import MethodView from flask.ext.pymongo import PyMongo import json app = Flask(__name__) mongo = PyMongo(app) class HashCheck(MethodView): def post(self): return json.dumps({'result' : not mongo.db.passwords.find({'pass' : request.form["password"])}) # Error-handling + test cases to come. Negate is for bool. def get(self): return redirect('/') if __name__ == "__main__": app.add_url_rule('/api/', view_func=HashCheck.as_view('api')) app.run(host="0.0.0.0", debug=True) ``` 1: I'm a security nut. I'm using it in my login forms and rejecting common input. One of the wordlists is UNIQPASS.
What I would suggest is a hybrid approach. As requests are made, do two checks: the first in a local cache and the second in the MongoDB store. If the first fails but the second succeeds then add it to the in-memory cache. Over time the application will "fault" in the most common "bad passwords"/records. This has two advantages: 1) The common words are rejected very fast from within memory. 2) The startup cost is close to zero and amortized over many queries. When storing the word list in MongoDB I would make the \_id field hold each word. By default you will get an ObjectId which is a complete waste in this case. We can also then leverage the automatic index on \_id. I suspect the poor performance you saw was due to there not being an index on the 'pass' field. You can also try adding one on the 'pass' field with: ``` mongo.db.passwords.create_index("pass") ``` To complete the \_id scenario: to insert a word: ``` mongo.db.passwords.insert( { "_id" : "password" } ); ``` Queries then look like: ``` mongo.db.passwords.find( { "_id" : request.form["password"] } ) ``` As @Madarco mentioned you can also shave another bit off the query time by ensuring results are returned from the index by limiting the returned fields to just the \_id field (`{ "_id" : 1}`): ``` mongo.db.passwords.find( { "_id" : request.form["password"] }, { "_id" : 1} ) ``` HTH - Rob P.S. I am not a Python/Pymongo expert so might not have the syntax 100% correct. Hopefully it is still helpful.
Given that your list is totally static and fits in memory, I don't see a compelling reason to use a database. I agree that a Trie would be efficient for your goal. A hash table would work too. PS: it's too bad about Python's Global Interpreter Lock. If you used a language with real multithreading, you could take advantage of the unchanging data structure and run the server across multiple cores with shared memory.
Lookup speed: State or Database?
[ "", "python", "mongodb", "flask", "" ]
I'm having some trouble with my SQL query. I got this table: ``` insert into Table1 (date, personssn) values ('2012-01-21 12:01:33', '123456789'); insert into Table1 (date, personssn) values ('2012-02-22 12:01:33', '123456789'); ``` The problem is that I want to select the personssn who have a date CLOSEST to the current date. I've been working with "CURDATE()" but can't seem to get it to work. Can anyone help me in the right direction? Thanks.
``` select * from Table1 order by abs(timestampdiff(second, date, now())) asc limit 1 ``` Ordering ascending by the absolute difference puts the row closest to the current date first. (`abs(now() - date)` relies on MySQL's numeric datetime arithmetic, which is unreliable; `TIMESTAMPDIFF` compares the values as proper datetimes.)
Use datediff. It returns the Difference between two dates. However, you can have dates before and after today in your database. That's why you need ABS() without it the smallest value would be selected first, but we want the value closest to 0. IE a Date-Difference of 3 is “bigger” than -250, but 3 days off is closer. That is why you use the absolute value. ``` SELECT t1.date, t1.personssn FROM Table1 AS t1 ORDER BY ABS(DATEDIFF(t1.date, NOW())) ASC LIMIT 5 ```
SQL - Find date closest to current date
[ "", "mysql", "sql", "" ]
I am learning Python from Coursera. In this course they use **SimpleGUI** module on [CodeSkulptor](http://www.codeskulptor.org/). Can anyone tell me how to integrate SimpleGUI with python 2.7 and 3.0 shell?
From the coursera forums by Darren Gallagher > From the CodeSkulptor Documentation: > > <http://www.codeskulptor.org/docs.html> > > "... implements a subset of Python 2.6...CodeSkulptor's Python is not a subset in one respect...Implemented on top of JavaScript..." > > I don't think that CodeSkulptor / SimpleGUI is a Python Module, as we know it. It is written on top of Javascript to allow the user output to their web browser, as opposed to their desktop and/or interpreter window. > > The module I have found / used in Python that is most similar to SimpleGUI is Pygame - in both syntax and display. It requires a little more to get a project 'running' but is definitely worth investing time in. I'm sure what we will learn in the coming weeks with SimpleGUI will be very transferable. The full thread can be found here [Can the staff give us the SIMPLEGUI module?](https://class.coursera.org/interactivepython-002/forum/thread?thread_id=1029&post_id=5512#post-5512) (**Note need to be enrolled to the course to view the link**) --- **A python package called SimpleGUICS2Pygame has since been created to run CodeSkulptor code using Pygame, I haven't tried it myself yet but it can be found** [**here**](https://pypi.python.org/pypi/SimpleGUICS2Pygame/)
You can just use SimpleGUITk (<http://pypi.python.org/pypi/SimpleGUITk>) which implements a Tk version of simplegui. To use your CodeSkulptor code in the desktop, you just need to replace ``` import simplegui ``` with ``` import simpleguitk as simplegui ``` and that's it, your program made for CodeSkulptor code should work on the desktop.
How to integrate SimpleGUI with Python 2.7 and 3.0 shell
[ "", "python", "python-2.7", "python-3.x", "python-imaging-library", "codeskulptor", "" ]
I am trying to set the text of an `Entry` widget using a button in a GUI using the `tkinter` module. This GUI is to help me classify thousands of words into five categories. Each of the categories has a button. I was hoping that using a button would significantly speed me up and I want to double check the words every time otherwise I would just use the button and have the GUI process the current word and bring the next word. The command buttons for some reason are not behaving like I want them to. This is an example: ``` import tkinter as tk from tkinter import ttk win = tk.Tk() v = tk.StringVar() def setText(word): v.set(word) a = ttk.Button(win, text="plant", command=setText("plant")) a.pack() b = ttk.Button(win, text="animal", command=setText("animal")) b.pack() c = ttk.Entry(win, textvariable=v) c.pack() win.mainloop() ``` So far, when I am able to compile, the click does nothing.
You might want to use [`insert`](https://web.archive.org/web/20201111190045id_/https://effbot.org/tkinterbook/entry.htm#Tkinter.Entry.insert-method) method. [You can find the documentation for the Tkinter Entry Widget here.](https://web.archive.org/web/20201105231837id_/http://effbot.org/tkinterbook/entry.htm) This script inserts a text into `Entry`. The inserted text can be changed in `command` parameter of the Button. ``` from tkinter import * def set_text(text): e.delete(0,END) e.insert(0,text) return win = Tk() e = Entry(win,width=10) e.pack() b1 = Button(win,text="animal",command=lambda:set_text("animal")) b1.pack() b2 = Button(win,text="plant",command=lambda:set_text("plant")) b2.pack() win.mainloop() ```
If you use a "text variable" `tk.StringVar()`, you can just `set()` that. No need to use the Entry delete and insert. Moreover, those functions don't work when the Entry is disabled or readonly! The text variable method, however, does work under those conditions as well. ``` import Tkinter as tk ... entry_text = tk.StringVar() entry = tk.Entry( master, textvariable=entry_text ) entry_text.set( "Hello World" ) ```
How to set the text/value/content of an `Entry` widget using a button in tkinter
[ "", "python", "events", "tkinter", "tkinter-entry", "" ]
I am writing a small multi-threaded http file downloader and would like to be able to shrink the available threads as the code encounters errors. The errors would be specific to HTTP errors returned where the web server is not allowing any more connections, e.g. if I set up a pool of 5 threads, each thread is attempting to open its own connection and download a chunk of the file. The server may only allow 2 connections and will, I believe, return 503 errors; I want to detect this and shut down a thread, eventually limiting the size of the pool to presumably only the 2 that the server will allow. Can I make a thread stop itself? Is self.*Thread*\_stop() sufficient? Do I also need to join()? Here's my worker class that does the downloading; it grabs from the queue to process, and once downloaded it dumps the result into resultQ to be saved to file by the main thread. It's in here where I would like to detect an HTTP 503 and stop/kill/remove a thread from the available pool - and of course re-add the failed chunk back to the queue so the remaining threads will process it ``` class Downloader(threading.Thread): def __init__(self, queue, resultQ, file_name): threading.Thread.__init__(self) self.workQ = queue self.resultQ = resultQ self.file_name = file_name def run(self): while True: block_num, url, start, length = self.workQ.get() print 'Starting Queue #: %s' % block_num print start print length #Download the file self.download_file(url, start, length) #Tell queue that this task is done print 'Queue #: %s finished' % block_num self.workQ.task_done() def download_file(self, url, start, length): request = urllib2.Request(url, None, headers) if length == 0: return None request.add_header('Range', 'bytes=%d-%d' % (start, start + length)) while 1: try: data = urllib2.urlopen(request) except urllib2.URLError, u: print "Connection did not start with", u else: break chunk = '' block_size = 1024 remaining_blocks = length while remaining_blocks > 0: if remaining_blocks >= block_size:
fetch_size = block_size else: fetch_size = int(remaining_blocks) try: data_block = data.read(fetch_size) if len(data_block) == 0: print "Connection: [TESTING]: 0 sized block" + \ " fetched." if len(data_block) != fetch_size: print "Connection: len(data_block) != length" + \ ", but continuing anyway." self.run() return except socket.timeout, s: print "Connection timed out with", s self.run() return remaining_blocks -= fetch_size chunk += data_block resultQ.put([start, chunk]) ``` Below is where I init the thread pool, further down I put items to the queue ``` # create a thread pool and give them a queue for i in range(num_threads): t = Downloader(workQ, resultQ, file_name) t.setDaemon(True) t.start() ```
> Can I make a thread stop itself? Don't use `self._Thread__stop()`. It is enough to exit the thread's `run()` method (you can check a flag or read a sentinel value from a queue to know when to exit). > It's in here where I would like to detect a http 503 and stop/kill/remove a thread from the available pools - and of course re-add the failed chunk back to the queue so the remaining threads will process it You can simplify the code by separating responsibilities: * `download_file()` should not try to reconnect in the infinite loop. If there is an error, let the code that calls `download_file()` resubmit it if necessary * control over the number of concurrent connections can be encapsulated in a `Semaphore` object. Number of threads may differ from number of concurrent connections in this case ``` import concurrent.futures # on Python 2.x: pip install futures from threading import BoundedSemaphore def download_file(args): nconcurrent.acquire(timeout=args['timeout']) # block if too many connections # ...
nconcurrent.release() #NOTE: don't release it on exception, # allow the caller to handle it # you can put it into a dictionary: server -> semaphore instead of the global nconcurrent = BoundedSemaphore(5) # start with at most 5 concurrent connections with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor: future_to_args = dict((executor.submit(download_file, args), args) for args in generate_initial_download_tasks()) while future_to_args: for future in concurrent.futures.as_completed(dict(**future_to_args)): args = future_to_args.pop(future) try: result = future.result() except Exception as e: print('%r generated an exception: %s' % (args, e)) if getattr(e, 'code') != 503: # don't decrease number of concurrent connections nconcurrent.release() # resubmit args['timeout'] *= 2 future_to_args[executor.submit(download_file, args)] = args else: # successfully downloaded `args` print('f%r returned %r' % (args, result)) ``` See [`ThreadPoolExecutor()` example](http://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor-example).
you should be using a threadpool to control the life of your threads: * <http://www.inductiveload.com/posts/easy-thread-pools-in-python-with-threadpool/> Then when a thread exists, you can send a message to the main thread (that is handling the threadpool) and then change the size of the threadpool, and postpone new requests or failed requests in a stack that you'll empty. tedelanay is absolutely right about the daemon status you're giving to your threads. There is no need to set them as daemons. Basically, you can simplify your code, you could do something as follows: ``` import threadpool def process_tasks(): pool = threadpool.ThreadPool(4) requests = threadpool.makeRequests(download_file, arguments) for req in requests: pool.putRequest(req) #wait for them to finish (or you could go and do something else) pool.wait() if __name__ == '__main__': process_tasks() ``` where `arguments` is up to your strategy. Either you give your threads a queue as argument and then empty the queue. Or you can get process the queue in process\_tasks, block while the pool is full, and open a new thread when a thread is done, but the queue is not empty. It all depends on your needs and the context of your downloader. resources: * <http://chrisarndt.de/projects/threadpool/> * <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/203871> * <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/196618> * <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/302746> * <http://lethain.com/using-threadpools-in-python/>
Python - Shrinking pool of threads dynamically / Stop a thread
[ "", "python", "threadpool", "" ]
How can I select all rows from a table, but where there are identical values in column `name`, only select the row that has the greatest `id` value? So if there was a table like this: ``` id name age country ---+------+---+------- 1 bob 24 UK 2 john 48 USA 3 janet 72 USSR 4 bob 96 Ukraine ``` it would only select the 'bob' with the highest id, so the result would return: ``` id name age country ---+------+---+------- 2 john 48 USA 3 janet 72 USSR 4 bob 96 Ukraine ``` Thank you.
You could use a subquery to calculate the maximum ID for every name, and then return all rows that matches the IDs returned by the subquery: ``` SELECT * FROM People WHERE id IN (SELECT MAX(id) FROM People GROUP BY Name) ``` Please see fiddle [here](http://sqlfiddle.com/#!2/30d88/2).
Try this query ``` select * from table_name where ID in(select MAX(ID) from table_name group by name) ```
select only rowwith highest id value if there are identical column values
[ "", "mysql", "sql", "" ]
I have two tables (tbla and tblb), each with one column (id): select \* from tbla; ``` ID --- 1 3 5#A select * from tblb; ID --- 2 3 ``` Now I need a full join: ``` select a.id, b.id from tbla a full outer join tblb b on b.id = a.id; ID ID1 -------- 1 3 3 5#A 2 ``` ... but without entries containing a #-sign in tbla.id ``` select a.id, b.id from tbla a full outer join tblb b on b.id = a.id where a.id not like '%#%'; ID ID1 -------- 1 3 3 ``` Why is the entry with id 2 from tblb missing?
Because when you do a `full outer join`, columns on either side can end up with a `NULL` value. Explicitly check for NULL: ``` select a.id, b.id from tbla a full outer join tblb b on b.id = a.id where a.id not like '%#%' or a.id is null; ``` (Originally, I had suggested moving the logic to the `on` clause. Alas, the `full outer join` keeps records in both tables, even when no records match the condition. So, moving the condition to the `on` clause doesn't fix anything.)
When you do outer joins, you have to do your filtering in the from clause. If you do it in the where clause your join effectively becomes an inner join. So change this: ``` full outer join tblb b on b.id = a.id where a.id not like '%#%' ``` to this ``` full outer join tblb b on b.id = a.id and a.id not like '%#%' ```
oracle sql full join with table a not in
[ "", "sql", "oracle", "outer-join", "" ]
I need to get all the customer names where their preference MINPRICE and MAXPRICE are the same. Here's my schema: ``` CREATE TABLE CUSTOMER ( PHONE VARCHAR(25) NOT NULL, NAME VARCHAR(25), CONSTRAINT CUSTOMER_PKEY PRIMARY KEY (PHONE) ); CREATE TABLE PREFERENCE ( PHONE VARCHAR(25) NOT NULL, ITEM VARCHAR(25) NOT NULL, MAXPRICE NUMBER(8,2), MINPRICE NUMBER(8,2), CONSTRAINT PREFERENCE_PKEY PRIMARY KEY (PHONE, ITEM), CONSTRAINT PREFERENCE_FKEY FOREIGN KEY (PHONE) REFERENCES CUSTOMER (PHONE) ); ``` I think I need to do some comparison between rows, or create another view to compare? Is there any easy way to do this? It's one to many: a customer can have multiple preferences, so I need to query a list of customers that have the same minprice and maxprice - comparing between rows, minprice = minprice and maxprice = maxprice.
A self-join on `preference` would find rows with the same price preference, but a different phone number: ``` select distinct c1.name , p1.minprice , p1.maxprice from preference p1 join preference p2 on p1.phone <> p2.phone and p1.minprice = p2.minprice and p1.maxprice = p2.maxprice join customer c1 on c1.phone = p1.phone join customer c2 on c2.phone = p2.phone order by p1.minprice , p1.maxprice , c1.name ```
It seems strange that you have minprice and maxprice in your preference table. Is that a table that you update after each transaction, such that each customer only has 1 active preference record? I mean, it reads like a customer could pay two different prices for the same item, which seems odd. Assuming customer and preference are 1:1 ``` SELECT c.* FROM customer c INNER JOIN preference p ON c.phone = p.phone WHERE p.minprice = p.maxprice ``` However, if a customer can have multiple preferences and you are looking for minprice = maxprice for ALL item ... then you could do this ``` SELECT c.* FROM (SELECT phone, MIN(minprice) as allMin, MAX(maxprice) as allMax FROM preference GROUP BY phone) p INNER JOIN customer c on p.phone = c.phone WHERE allMin = allMax ```
Get results that have the same data in the table
[ "", "sql", "oracle", "" ]
How best to find the number of occurrences of a given array within a set of arrays (two-dimensional array) in python (with numpy)? This is (simplified) what I need expressed in python code: ``` patterns = numpy.array([[1, -1, 1, -1], [1, 1, -1, 1], [1, -1, 1, -1], ...]) findInPatterns = numpy.array([1, -1, 1, -1]) numberOfOccurrences = findNumberOfOccurrences(needle=findInPatterns, haystack=patterns) print(numberOfOccurrences) # should print e.g. 2 ``` In reality, I need to find out how often each array can be found within the set. But the functionality described in the code above would already help me a lot on my way. Now, I know I could use loops to do that but was wondering if there was a more efficient way to do this? Googling only let me to numpy.bincount which does exactly what I need but not for two-dimensional arrays and only for integers.
With an array of `1`s and `-1`s, performance wise nothing is going to beat using `np.dot`: if (and only if) all items match then the dot product will add up to the number of items in the row. So you can do ``` >>> haystack = np.array([[1, -1, 1, -1], ... [1, 1, -1, 1], ... [1, -1, 1, -1]]) >>> needle = np.array([1, -1, 1, -1]) >>> haystack.dot(needle) array([ 4, -2, 4]) >>> np.sum(haystack.dot(needle) == len(needle)) 2 ``` This is sort of a toy particular case of convolution based image matching, and you could rewrite it easily to look for patterns shorter than a full row, and even speed it up using FFTs.
``` import numpy A = numpy.array([[1, -1, 1, -1], [1, 1, -1, 1], [1, -1, 1, -1]]) b = numpy.array([1, -1, 1, -1]) print ((A == b).sum(axis=1) == b.size).sum() ``` This will do a row match, and we select and count the rows where all values match the pattern we are looking for. This requires that `b` has the same shape as `A[0]`.
Python: Find number of occurrences of given array within two-dimensional array
[ "", "python", "multidimensional-array", "numpy", "" ]
This instruction works: ``` SELECT INTO unsolvedNodes array_agg(DISTINCT idDestination) FROM road WHERE idOrigin = ANY(solvedNodes) AND NOT (idDestination = ANY(solvedNodes)); ``` But I would like to use something this way: ``` SELECT INTO unsolvedNodes array_agg(DISTINCT idDestination), lengths array_agg(length) FROM road WHERE idOrigin = ANY(solvedNodes) AND NOT (idDestination = ANY(solvedNodes)); ``` How to use only one `SELECT INTO` instruction to set multiple variables?
In **PL/pgSQL** you can `SELECT INTO` *as many variables at once as you like* directly. You just had the syntax backwards: ``` SELECT INTO unsolvedNodes, lengths array_agg(DISTINCT idDestination), array_agg(length) FROM road WHERE idOrigin = ANY(solvedNodes) AND NOT (idDestination = ANY(solvedNodes)); ``` You have the keyword `INTO` followed by a list of target variables, and you have a corresponding `SELECT` list. The target of the `INTO` clause can be (quoting [the manual here](https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW)): > ...a record variable, a row variable, or a comma-separated list of > simple variables and record/row fields. Also: > The `INTO` clause can appear almost anywhere in the SQL command. > Customarily it is written either just before or just after the list of > select\_expressions in a `SELECT` command, or at the end of the command > for other command types. It is recommended that you follow this > convention in case the PL/pgSQL parser becomes stricter in future versions. This is **not to be confused** with [`SELECT INTO` in the SQL dialect of Postgres](https://www.postgresql.org/docs/current/sql-selectinto.html) - which should not be used any more. It goes against standard SQL and will eventually be removed, most likely. The manual actively discourages its continued use: > It is best to use `CREATE TABLE AS` for this purpose in new code.
Yes, ``` SELECT name,family INTO cName, cFamily FROM "CommonUsersModel"; ``` OR ``` SELECT INTO cName, cFamily name,family FROM "CommonUsersModel" ```
SELECT INTO with more than one assignment
[ "", "sql", "postgresql", "variable-assignment", "plpgsql", "" ]
I am trying to increment a binary sequence in python while maintaining the bit length. So far I am using this piece of code... ``` '{0:b}'.format(long('0100', 2) + 1) ``` This will take the binary number, convert it to a long, adds one, then converts it back to a binary number. Eg, 01 -> 10. However, if I input a number such as '0100', instead of incrementing it to '0101', my code increments it to '101', so it is disregarding the first '0', and just incrementing '100' to '101'. Any help on how to make my code maintain the bit length will be greatly appreciated. Thanks
`str.format` lets you specify the length as a parameter like this ``` >>> n = '0100' >>> '{:0{}b}'.format(long(n, 2) + 1, len(n)) '0101' ```
That's because `5` is represented as '101' after conversion from int(or long) to binary, so to prefix some 0's before it you've use `0` as filler and pass the width of the initial binary number while formatting. ``` In [35]: b='0100' In [36]: '{0:0{1:}b}'.format(long(b, 2) + 1,len(b)) Out[36]: '0101' In [37]: b='0010000' In [38]: '{0:0{1:}b}'.format(long(b, 2) + 1,len(b)) Out[38]: '0010001' ```
Python - Incrementing a binary sequence while maintaining the bit length
[ "", "python", "binary", "" ]
I have 2 tables and need to get the distinct combined results between them. ``` TABLE1: A B C TABLE2: A D E DESIRED RESULT: A B C D E ``` Anyone know the SQL for to get this result? Thanks!
You can try `union`; it should work: ``` select * from table1 union select * from table2 ```
You can use a `UNION` query to get the distinct results from both tables ``` select yourCol from table1 union select yourCol from table2 ``` The `UNION` will remove any duplicate values. If you wanted to return all values including duplicates then you could use a `UNION ALL`
Combined distinct values for 2 tables needed
[ "", "mysql", "sql", "" ]
I have two queries which return separate result sets, and the queries are returning the correct output. How can I combine these two queries into one so that I can get one single result set with each result in a separate column? Query 1: ``` SELECT SUM(Fdays) AS fDaysSum From tblFieldDays WHERE tblFieldDays.NameCode=35 AND tblFieldDays.WeekEnding=? ``` Query 2: ``` SELECT SUM(CHdays) AS hrsSum From tblChargeHours WHERE tblChargeHours.NameCode=35 AND tblChargeHours.WeekEnding=? ``` Thanks.
You can alias both queries and select them in an outer select query: <http://sqlfiddle.com/#!2/ca27b/1> ``` SELECT x.a, y.b FROM (SELECT * from a) as x, (SELECT * FROM b) as y ```
You can use a `CROSS JOIN`: ``` SELECT * FROM ( SELECT SUM(Fdays) AS fDaysSum FROM tblFieldDays WHERE tblFieldDays.NameCode=35 AND tblFieldDays.WeekEnding=1) A -- use your real query here CROSS JOIN (SELECT SUM(CHdays) AS hrsSum FROM tblChargeHours WHERE tblChargeHours.NameCode=35 AND tblChargeHours.WeekEnding=1) B -- use your real query here ```
Combining the results of two SQL queries as separate columns
[ "", "sql", "" ]
I am doing a join in a select statement and it returns duplicate professional ids because of the join. ``` Select P.ProfessionalID, P.ProfessionalName , S.SpecialtyName from Professionals P, ProfessionalSpecialtyRelation PR, Specialties S where ( P.ProfessionalName Like '%arif%' OR P.ProfessionalSurname Like '%%' ) And P.ProfessionalID = PR.ProfessionalID AND P.ProfessionalID = POR.ProfessionalID AND PR.SpecialtyID = S.SpecialtyID ``` If a professional has two specialties, it returns that professional twice. How can I avoid that? What is the best way to do it?
If you only want to return one speciality then you can use an aggregate (max/min) function: ``` Select P.ProfessionalID, P.ProfessionalName, max(S.SpecialtyName) SpecialtyName from Professionals P inner join ProfessionalSpecialtyRelation PR on P.ProfessionalID = PR.ProfessionalID -- and P.ProfessionalID = POR.ProfessionalID You are not joining to a table with POR alias inner join Specialties S on PR.SpecialtyID = S.SpecialtyID where P.ProfessionalName Like '%arif%' OR P.ProfessionalSurname Like '%%' group by P.ProfessionalID, P.ProfessionalName; ``` Or since you are using SQL Server you can also use the `row_number()` function to return only one row for each professional: ``` select ProfessionalID, ProfessionalName, SpecialtyName from ( Select P.ProfessionalID, P.ProfessionalName, S.SpecialtyName, row_number() over(partition by P.ProfessionalID order by S.SpecialtyName) rn from Professionals P inner join ProfessionalSpecialtyRelation PR on P.ProfessionalID = PR.ProfessionalID -- and P.ProfessionalID = POR.ProfessionalID You are not joining to a table with POR alias inner join Specialties S on PR.SpecialtyID = S.SpecialtyID where P.ProfessionalName Like '%arif%' OR P.ProfessionalSurname Like '%%' ) d where rn = 1; ``` Note: I changed the query to use ANSI JOIN syntax (INNER JOIN) instead of the comma separated list with the joins in the WHERE clause.
You don't explain which specialty you want for a professional having two specialties. You need to group by professional, and apply an aggregate function to the specialties. Depending on the aggregate function, you will get different results: ``` SELECT P.ProfessionalID , P.ProfessionalName , MAX(S.SpecialtyName) FROM Professionals P INNER JOIN ProfessionalSpecialtyRelation PR ON P.ProfessionalID = PR.ProfessionalID INNER JOIN Specialties S ON PR.SpecialtyID = S.SpecialtyID WHERE P.ProfessionalName Like '%arif%' OR P.ProfessionalSurname Like '%%' GROUP BY P.ProfessionalID, P.ProfessionalName ``` With the `MAX` function, you will get the maximum of the specialty names, compared as strings. Note that you should write your joins explicitly with the `JOIN` clause, instead of writing them implicitly.
sql select returns duplicate rows because of joins
[ "", "sql", "sql-server", "select", "join", "" ]
I'm writing Python that targets versions 3.2 and higher. It looks like using the built-in function callable is the most straightforward and efficient way to do this. I've seen recommendations for `hasattr(x, "__call__")`, `collections.Callable(x)`, and just using `try/except` around an attempted call. I've tested items that are callable (a class and a function), using `timeit` with 100,000 iterations; in both cases using callable takes only about 75% of the time of checking for the attribute. When the item is not callable (an integer and a string) using callable stays at the same cost as a class or function while checking for the attribute is about 2.3 times more expensive than for a class or function. I didn't expect that difference, but it also favors the clear and concise `callable(x)` approach. But I'm relatively new to Python and no expert, so are there reasons I'm not aware of that I should use the hasattr approach or another approach? FWIW, the results of the various timeits follow. The first character is just t for timeit, the second indicates what the type of the object being tested (c = class, f = function, i = integer, s = string), and the rest indicates the method (attr - check attribute, call - use callable, try - use try/except). ``` tcattr 0.03665385400199739 tccall 0.026238360142997408 tctry 0.09736267629614304 tfattr 0.03624538065832894 tfcall 0.026362861895904643 tftry 0.032501874250556284 tiattr 0.08297350149314298 ticall 0.025826044152381655 titry 0.10657657453430147 tsattr 0.0840187013927789 tscall 0.02585409547373274 tstry 0.10742772077628615 ```
`hasattr()` will return more false positives than `callable`: ``` >>> class X(object): ... def __getattr__(self, name): ... return name ... >>> i = X() >>> from collections import Callable >>> isinstance(i, Callable) False >>> callable(i) False >>> hasattr(i, '__call__') True >>> i() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'X' object is not callable ``` I'm not sure which `callable` you were testing, but both look nicer than `hasattr` and handle more cases, so I would use them in place of `hasattr()`.
Great question! I'd say you should use `callable`. Several points apart from the speed issue: 1. It's explicit, simple, clear, short, and neat. 2. It's a Python built-in, so anyone who doesn't already know what it does can find out easily. 3. `try... except TypeError` has a problem: `TypeError` can sometimes be raised by other things. For example, if you successfully call a function which raises `TypeError` in its body, the `except` will erroneously catch that and assume that the object was not callable. 4. Some common customisations, like `__getattr__`, can cause `hasattr` to make mistakes. 5. `collections.abc.Callable` seems like rather heavy machinery for something so simple. After all, `callable` does the same job. Footnote: the `try` block is a [very common pattern](http://docs.python.org/3/glossary.html#term-eafp) in Python for this sort of thing, so you may see a lot of it in other people's code. However, as I've outlined above, this is one case where it's not quite suitable.
Using callable(x) vs. hasattr(x, "__call__")
[ "", "python", "python-3.x", "callable", "" ]
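The `__getattr__` false positive described in the accepted answer is easy to reproduce; a minimal sketch (the class name is invented):

```python
class Dynamic:
    """__getattr__ answers for any attribute name, so hasattr() is fooled."""
    def __getattr__(self, name):
        return name

obj = Dynamic()

hasattr_says = hasattr(obj, '__call__')   # True: a false positive
callable_says = callable(obj)             # False: checks the type, not the instance
```

`callable()` looks for `__call__` on the type, which is what actually matters when the object is invoked, so it is not fooled by per-instance attribute tricks.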
I don't know if the title of the post is appropriate. I have the following table ![enter image description here](https://i.stack.imgur.com/mPr6U.png) and an array in PHP with some items, `parsed_array`. What I want to do is to find all the SupermarketIDs which have all the items of the `parsed_array`. For example, if `parsed_array` contains `[111,121,131]` I want the result to be `21` which is the ID of the Supermarket that contains all these items. I tried to do it like that: ``` $this->db->select('SupermarketID'); $this->db->from('productinsupermarket'); for ($i=0; $i<sizeof($parsed_array); $i++) { $this->db->where('ItemID', $parsed_array[$i]); } $query = $this->db->get(); return $query->result_array(); ``` If there is only one item in the `parsed_array` the result is correct because the above is equal to ``` SELECT SupermarketID FROM productinsupermarket WHERE ItemID=parsed_array[0]; ``` but if there is more than one item, let's say two, it is equal to ``` SELECT SupermarketID FROM productinsupermarket WHERE ItemID=parsed_array[0] AND ItemID=parsed_array[1]; ``` which of course returns an empty table. Any idea how this can be solved?
You can use `where_in` in codeigniter as below, ``` if(count($parsed_array) > 0) { $this->db->where_in('ItemID', $parsed_array); } ``` [Active record class in codeigniter](http://ellislab.com/codeigniter/user-guide/database/active_record.html#select)
There are at least two ways of generating the result you want, either a self join (no fun to generate with a dynamic number of items) or using `IN`, `GROUP BY` and `HAVING`. I can't really tell you how to generate it using CodeIgniter, I assume you're better at that than I am :) ``` SELECT SupermarketID FROM productinsupermarket WHERE ItemID IN (111,121,131) -- The 3 item id's you're looking for GROUP BY SupermarketID HAVING COUNT(ItemId) = 3; -- All 3 must match ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/09c4f/1). EDIT: As @ypercube mentions below, if the ItemId can show up more than once for a SupermarketID, you'll want to use `COUNT(DISTINCT ItemId)` to count only unique rows instead of counting every occurrence.
Sql statement with multi ANDs querying the same column
[ "", "sql", "codeigniter", "" ]
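The `IN` + `GROUP BY` + `HAVING` approach from the second answer can be exercised end to end with Python's built-in `sqlite3` standing in for MySQL (the sample rows are invented to match the question's example):

```python
import sqlite3

items = [111, 121, 131]   # the parsed_array from the question

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE productinsupermarket (SupermarketID INTEGER, ItemID INTEGER);
    INSERT INTO productinsupermarket VALUES
        (21, 111), (21, 121), (21, 131),  -- has all three items
        (22, 111), (22, 121),             -- missing 131
        (23, 111);
""")

# Keep rows for the wanted items only, then require that each supermarket
# group covers every distinct wanted item.
placeholders = ','.join('?' for _ in items)
sql = ("SELECT SupermarketID FROM productinsupermarket "
       "WHERE ItemID IN ({}) "
       "GROUP BY SupermarketID "
       "HAVING COUNT(DISTINCT ItemID) = ?").format(placeholders)
matches = [r[0] for r in conn.execute(sql, items + [len(items)])]
```

`COUNT(DISTINCT ItemID)` is the safe variant mentioned at the end of the answer: it keeps duplicate (SupermarketID, ItemID) rows from inflating the count.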
I am interested in gleaning information from an ESRI `.shp` file. Specifically the `.shp` file of a polyline feature class. When I open the `.dbf` of a feature class, I get what I would expect: a table that can open in excel and contains the information from the feature class' table. However, when I try to open a `.shp` file in any program (excel, textpad, etc...) all I get is a bunch of gibberish and unusual ASCII characters. I would like to use Python (2.x) to interpret this file and get information out of it (in this case the vertices of the polyline). I do not want to use any modules or non built-in tools, as I am genuinely interested in how this process would work and I don't want any dependencies. Thank you for any hints or points in the right direction you can give!
Your question, basically, is "I have a file full of data stored in an arbitrary binary format. How can I use python to read such a file?" The answer is, [this link](http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf) contains a description of the format of the file. Write a dissector based on the technical specification.
If you don't want to go to all the trouble of writing a parser, you should take look at [pyshp](https://code.google.com/p/pyshp/), a pure Python shapefile library. I've been using it for a couple of months now, and have found it quite easy to use. There's also a python binding to shapelib, if you search the web. But I found the pure Python solution easier to hack around with.
How to parse a .shp file?
[ "", "python", "parsing", "arcgis", "shapefile", "" ]
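For the dependency-free route the asker wants, the fixed 100-byte main file header described in the ESRI technical description can be parsed with `struct` alone: bytes 0-27 are big-endian int32s (file code 9994, five unused fields, file length), and bytes 28 onward are little-endian (version 1000, shape type, then eight doubles for the bounding box). The sketch below builds a synthetic header to exercise the parser, so no real `.shp` file is needed; a full parser would go on to read the variable-length records the same way.

```python
import struct

def parse_shp_header(header):
    """Parse the 100-byte main header of an ESRI shapefile."""
    file_code, _, _, _, _, _, file_len = struct.unpack('>7i', header[:28])
    version, shape_type = struct.unpack('<2i', header[28:36])
    bbox = struct.unpack('<4d', header[36:68])   # Xmin, Ymin, Xmax, Ymax
    return {'file_code': file_code, 'version': version,
            'shape_type': shape_type, 'bbox': bbox}

# Synthetic header for testing (shape type 3 = PolyLine per the spec).
fake = struct.pack('>7i', 9994, 0, 0, 0, 0, 0, 50)
fake += struct.pack('<2i', 1000, 3)
fake += struct.pack('<8d', 0.0, 0.0, 10.0, 20.0, 0.0, 0.0, 0.0, 0.0)

header = parse_shp_header(fake)
```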
I have an array of 5 numbers: ``` A = [10, 20, 40, 80, 110] ``` I need to create a new array of roughly double the length. The extra numbers should be the averages of consecutive pairs of numbers in `A`. For example: EDIT B = [10, 15, 20, 30, 40, 60, 80, 95, 110] Is it possible using a scipy or numpy function?
Use [numpy.interp](http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html): ``` import numpy as np Y = [10, 20, 40, 80, 110] N = len(Y) X = np.arange(0, 2*N, 2) X_new = np.arange(2*N-1) # Where you want to interpolate Y_new = np.interp(X_new, X, Y) print(Y_new) ``` yields ``` [ 10. 15. 20. 30. 40. 60. 80. 95. 110.] ``` ---
Using [this answer](https://stackoverflow.com/a/5347492/1258041): ``` In [1]: import numpy as np In [2]: a = np.array([10, 20, 40, 80, 110]) In [3]: b = a[:-1] + np.diff(a)/2 In [4]: c = np.empty(2 * a.size -1) In [5]: c[::2] = a In [6]: c[1::2] = b In [7]: c Out[7]: array([ 10., 15., 20., 30., 40., 60., 80., 95., 110.]) ```
Double the length of a python numpy array with interpolated new values
[ "", "python", "arrays", "numpy", "scipy", "" ]
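For readers without numpy at hand, the same interleaving can be sketched in pure Python (the helper name is mine); it reproduces the `B` array from the question:

```python
def interleave_midpoints(a):
    """Return a new list with the average of each consecutive pair
    inserted between the original values: len(result) == 2*len(a) - 1."""
    out = []
    for left, right in zip(a, a[1:]):
        out.append(left)
        out.append((left + right) / 2.0)
    out.append(a[-1])
    return out

result = interleave_midpoints([10, 20, 40, 80, 110])
```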
I'm confused about the '&' in the below if statement. Is the first if saying "if bits *and* 0x10 = 0x10" ``` def lcd_byte(bits, mode): if bits&0x10==0x10: GPIO.output(LCD_D4, True) if bits&0x20==0x20: GPIO.output(LCD_D5, True) if bits&0x40==0x40: GPIO.output(LCD_D6, True) if bits&0x80==0x80: GPIO.output(LCD_D7, True) ```
No, `&` is *bitwise AND*, not boolean *AND*. Bitwise *AND* works on the binary level, for each individual bit of the operand values. Thus, with values 1 and 0 you get: ``` 1 & 1 == 1 1 & 0 == 0 0 & 1 == 0 0 & 0 == 0 ``` But with integer values that, in binary, would comprise of multiple bits, the operator works on each bit pair instead: ``` 2 & 1 == 0 2 & 2 == 2 ``` The expression `& 0x10` tests if the 5th bit is set: ``` 0b00010000 & 0x10 == 0x10 ``` because `0x10` (hexadecimal), or 16 decimal, is `00010000` in binary. `0x20` is the 6th bit, `0x40` is the 7th, and `0x80` is the left-most or 8th bit. The code is a little more verbose than needed; when testing against just one bit, the operator will either return `0` or the bit tested against and the `==` part can be dispensed with: ``` def lcd_byte(bits, mode): if bits & 0x10: GPIO.output(LCD_D4, True) if bits & 0x20: GPIO.output(LCD_D5, True) if bits & 0x40: GPIO.output(LCD_D6, True) if bits & 0x80: GPIO.output(LCD_D7, True) ``` See the [Bitwise operations](https://en.wikipedia.org/wiki/Bitwise_operation) article on Wikipedia for the nitty-gritty details on how bitwise operations work. Python defines [several bitwise operators](http://docs.python.org/2/reference/expressions.html#shifting-operations): * `&`: AND * `|`: OR * `^`: XOR * `<<`: left-shift * `>>`: right-shift
What's happening is that a particular bit of the byte is being tested. For example, suppose the value is 0xAF or in decimal, 175. In binary that is: ``` bit 7654 3210 ---------------- value 1010 1111 ``` Now we want to test to see if bit 5 in the above is set. So we make a *mask* containing just that bit set. It's called a mask because it's like a stencil, only allowing certain bits to "show through" where there is a 1 in the mask. 0 bits in the mask are "masked off" and we don't care about their values. ``` bit 7654 3210 ---------------- = 0x20 or 32 in decimal value 0010 0000 ``` Now we can use a *bitwise and* operation to test the value. By "bitwise" we mean that each bit in one value is tested against the corresponding bit in the other, and by "and" we mean that the resulting bit is set only if the bit is set in both of the original values. In Python this operation is written `&`. ``` ORIGINAL VALUE MASK RESULT bit 7654 3210 bit 7654 3210 bit 7654 3210 ---------------- & ---------------- = ---------------- value 1010 1111 value 0010 0000 value 0010 0000 ``` Now we look at the result. If the result is non-zero, then we know that the bit was set in the original value. Now, the Python code you posted actually checks not if it's non-zero, but to see if it's the actual mask value. You only really need to do that if you are testing multiple bits at once, which you can do just by including multiple 1 bits in the mask. In that case, the result is non-zero if *any* of the corresponding bits are set in the original value. Sometimes that's what you want, but if you want to make sure *all* of them are set, then you need to explicitly compare the result to the mask value. In this specific example, each bit corresponds to a specific GPIO output, which is activated when the bit is turned on (presumably these outputs are all turned off at once earlier in the code).
Python - Confused by if statement if bits&0x20==0x20
[ "", "python", "if-statement", "" ]
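A small, GPIO-free sketch of the same mask test (the dictionary of pin names is only for illustration). As the accepted answer notes, for single-bit masks the truthiness of `bits & mask` already answers the question, so the `== mask` comparison is redundant:

```python
LCD_MASKS = {0x10: 'LCD_D4', 0x20: 'LCD_D5', 0x40: 'LCD_D6', 0x80: 'LCD_D7'}

def lines_to_set(bits):
    """Return which LCD data lines a byte would switch on.

    bits & mask is non-zero exactly when the mask's single bit is set
    in bits, so no equality comparison is needed.
    """
    return [name for mask, name in sorted(LCD_MASKS.items()) if bits & mask]

high_nibble = lines_to_set(0xF0)
```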
I have a bitmask value stored as an int in sql. I'd like to turn that value into a comma separated list of the values contained in the bitmask value. So, for example, the results might look like so: ``` id name bitMaskValue values ---------------------------------------- 1 Bob 5 1,4 2 Mary 13 1,4,8 3 Stan 11 1,2,8 ``` Is there a way to accomplish this in a sql statement? This is SQL Server 2008.
``` declare @I integer = 2117 Declare @v varChar(32) = '' Declare @Bit tinyInt = 0 while @I > 0 Begin Set @v += case @I %2 WHen 1 Then str(@bit,2,1) + ',' else '' End Set @Bit += 1 Set @i /= 2 End Select case When len(@v) > 0 Then left(@v, len(@v) -1) else '' End ```
This should work: ``` SELECT id, name, bitMaskValue, SUBSTRING( CASE WHEN bitMaskValue & 1 = 1 THEN ',1' ELSE '' END + CASE WHEN bitMaskValue & 2 = 2 THEN ',2' ELSE '' END + CASE WHEN bitMaskValue & 4 = 4 THEN ',4' ELSE '' END + CASE WHEN bitMaskValue & 8 = 8 THEN ',8' ELSE '' END , 2, 64) As [values] FROM yourTable ```
In SQL, how can I split the values in a bitmask total into a comma delimited string
[ "", "sql", "sql-server-2008", "t-sql", "" ]
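The same decomposition the T-SQL answers perform can be sketched in a few lines of Python, which also makes the expected `values` column from the question easy to verify (the function name is mine):

```python
def bitmask_to_values(n):
    """Decompose an integer bitmask into its set power-of-two components,
    e.g. 13 -> '1,4,8', mirroring the comma list the SQL builds."""
    values, bit = [], 1
    while bit <= n:
        if n & bit:
            values.append(str(bit))
        bit <<= 1
    return ','.join(values)
```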
In SQL Server, I create a view over two tables. One of the tables is selected with `*` in the view. Then I add a column to that table, which now causes a view error; the view must be rebuilt to solve the problem. If this table is used by a lot of views, how can I identify the relevant views and rebuild them? Are there a number of ways? Thank you! Test environment: SQL Server 2008 Test SQL: ``` if exists(select * from sys.objects where name='tblTestA' and type='u') DROP TABLE tblTestA create table tblTestA(Part varchar(10),Qty int) insert into tblTestA values('A',10) insert into tblTestA values('B',20) go if exists(select * from sys.objects where name='tblTestB' and type='u') DROP TABLE tblTestB GO create table tblTestB(Part varchar(10),Price decimal(9,4)) GO insert into tblTestB values('A',1.1) insert into tblTestB values('B',2.2) GO if exists(select * from sys.objects where name='v_test' and type='v') DROP VIEW v_test go create View v_test as select a.*,b.Price from tblTestA a, tblTestB b where a.Part=b.Part go ``` Execute: ``` select * from v_test go ``` result: ``` Part Qty Price A 10 1.1000 B 20 2.2000 ``` Add a column ``` alter table tblTestA add Remark nvarchar(200) not null default('test') go ``` Execute: ``` select * from v_test go ``` Result: ``` Part Qty Price A 10 test B 20 test ```
You can figure out related views by using SQL Server Management Studio. You need to select the table in SSMS and then right click it. Select `View Dependencies`. In this picture the table is `CompanyCategoryXref` (blue arrow). The dependent view in this case is `CompanyCategory` (red arrow). You can also see that the selected item `CompanyCategory` is of object type `View` (yellow circle). ![SSMS Dependency](https://i.stack.imgur.com/GKa77.jpg)
If you create your view `WITH SCHEMABINDING`, you won't encounter this problem. For your *old* views ;-) you can use [sp\_refreshview](http://msdn.microsoft.com/en-us/library/ms187821.aspx). To find which views rely on which tables, you *could* use the GUI, but that doesn't scale very well. Instead, I recommend you use the system-management views e.g. [sys.sql\_expression\_dependencies](http://msdn.microsoft.com/en-us/library/bb677315.aspx). That lets you iterate over the dependent views and issue `exec sp_refreshview` for each one.
In SQL Server, table add a column. causing view error
[ "", "sql", "sql-server", "view", "" ]
I need help with the most efficient way to convert the following list into a dictionary: ``` l = ['A:1','B:2','C:3','D:4'] ``` At present, I do the following: ``` mydict = {} for e in l: k,v = e.split(':') mydict[k] = v ``` However, I believe there should be a more efficient way to achieve the same. Any idea?
use `dict()` with a generator expression: ``` >>> lis=['A:1','B:2','C:3','D:4'] >>> dict(x.split(":") for x in lis) {'A': '1', 'C': '3', 'B': '2', 'D': '4'} ``` Using dict-comprehension ( as suggested by @PaoloMoretti): ``` >>> {k:v for k,v in (e.split(':') for e in lis)} {'A': '1', 'C': '3', 'B': '2', 'D': '4'} ``` Timing results for 10\*\*6 items: ``` >>> from so import * >>> %timeit case1() 1 loops, best of 3: 2.09 s per loop >>> %timeit case2() 1 loops, best of 3: 2.03 s per loop >>> %timeit case3() 1 loops, best of 3: 2.17 s per loop >>> %timeit case4() 1 loops, best of 3: 2.39 s per loop >>> %timeit case5() 1 loops, best of 3: 2.82 s per loop ``` so.py: ``` a = ["{0}:{0}".format(i**2) for i in xrange(10**6)] def case1(): dc = {} for i in a: q, w = i.split(':') dc[q]=w def case2(): dict(x.split(":") for x in a) def case3(): {k:v for k,v in (e.split(':') for e in a)} def case4(): dict([x.split(":") for x in a]) def case5(): {x.split(":")[0] : x.split(":")[1] for x in a} ```
``` >>> dict(map(lambda s: s.split(":"), ["A:1", "B:2", "C:3", "D:4"])) {'A': '1', 'C': '3', 'B': '2', 'D': '4'} ```
Efficient way to convert a list to dictionary
[ "", "python", "" ]
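Both spellings from the answers, as a runnable sketch. `dict()` consumes an iterable of (key, value) pairs, so one `split` per element is all that is needed:

```python
pairs = ['A:1', 'B:2', 'C:3', 'D:4']

# Generator expression fed straight into dict()
as_dict = dict(item.split(':') for item in pairs)

# The dict-comprehension spelling produces the same mapping
as_dict_comp = {k: v for k, v in (item.split(':') for item in pairs)}
```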
Is it possible to make a [Bland-Altman plot](http://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot) in Python? I can't seem to find anything about it. Another name for this type of plot is the *Tukey mean-difference plot*. **Example:** ![enter image description here](https://i.stack.imgur.com/0ZicM.png)
If I have understood the theory behind the plot correctly, this code should provide the basic plotting, whereas you can configure it to your own particular needs. ``` import matplotlib.pyplot as plt import numpy as np def bland_altman_plot(data1, data2, *args, **kwargs): data1 = np.asarray(data1) data2 = np.asarray(data2) mean = np.mean([data1, data2], axis=0) diff = data1 - data2 # Difference between data1 and data2 md = np.mean(diff) # Mean of the difference sd = np.std(diff, axis=0) # Standard deviation of the difference plt.scatter(mean, diff, *args, **kwargs) plt.axhline(md, color='gray', linestyle='--') plt.axhline(md + 1.96*sd, color='gray', linestyle='--') plt.axhline(md - 1.96*sd, color='gray', linestyle='--') ``` The corresponding elements in `data1` and `data2` are used to calculate the coordinates for the plotted points. Then you can create a plot by running e.g. ``` from numpy.random import random bland_altman_plot(random(10), random(10)) plt.title('Bland-Altman Plot') plt.show() ``` ![Bland-Altman Plot](https://i.stack.imgur.com/jNkAq.png)
This is now implemented in statsmodels: <https://www.statsmodels.org/devel/generated/statsmodels.graphics.agreement.mean_diff_plot.html> Here is their example: ``` import statsmodels.api as sm import numpy as np import matplotlib.pyplot as plt # Seed the random number generator. # This ensures that the results below are reproducible. np.random.seed(9999) m1 = np.random.random(20) m2 = np.random.random(20) f, ax = plt.subplots(1, figsize = (8,5)) sm.graphics.mean_diff_plot(m1, m2, ax = ax) plt.show() ``` which produces this: [![enter image description here](https://i.stack.imgur.com/kQqJs.png)](https://i.stack.imgur.com/kQqJs.png)
Bland-Altman plot in Python
[ "", "python", "matplotlib", "plot", "" ]
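The dashed lines in both answers are just the mean difference plus or minus 1.96 standard deviations. Those statistics can be computed without numpy or matplotlib (the helper name is mine; `pstdev` matches the `np.std` default of `ddof=0` used in the first answer):

```python
from statistics import mean, pstdev

def bland_altman_stats(data1, data2):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 sd)."""
    diffs = [a - b for a, b in zip(data1, data2)]
    md = mean(diffs)
    sd = pstdev(diffs)   # population standard deviation, like np.std(ddof=0)
    return md, md - 1.96 * sd, md + 1.96 * sd

md, lower, upper = bland_altman_stats([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 2.5, 4.5])
```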
I have a table called `DUTY` (columns: `dutyid, dutyname, staffid`) and a table called `STAFF` (columns: `staffid, staffname`) In order to be fair, each of the staff will be auto-assigned to duty entries (records). So what should I do so that whenever I insert a duty entry, it auto-assigns the `staffid` (sequentially) for it? Example * staffid : 1 staffname: Jack * staffid : 2 staffname: Mary So when I insert a new entry (first entry) for duty, it will auto insert `staffid = 1` into the duty table. For the second entry, `staffid` will be 2. And for the following entries, it will keep looping through the `staffid`s sequentially. Desired answer: ``` dutyid dutyname staffid 1 cleaning 1 2 cleaning 2 3 cleaning 1 4 cleaning 2 5 cleaning 1 6 cleaning 2 7 cleaning 3 new staff 8 cleaning 1 9 cleaning 2 10 cleaning 3 ``` Can anyone show and explain to me what I should do in my stored procedure... Thanks
Try this : ``` select * from duty; Declare @dutyname varchar(20)='cleaning' Declare @staffid int Declare @maxstaffid int select @staffid=staffid from duty where dutyid= (select max(dutyid) from duty); select @maxstaffid=max(staffid) from staff; print @maxstaffid; insert into duty (dutyname, staffid) select @dutyname, case when @staffid=@maxstaffid then (select min(staffid) from staff) else (select min(staffid) from staff where staffid>@staffid) end ; select * from duty; ```
See [sqlfiddle](http://sqlfiddle.com/#!3/cfce1/12) This query will return the next staff number and loop back to the first one ``` SELECT max(staffid) FROM ( SELECT top 1 staff.staffid FROM staff, ( SELECT top 1 staffid FROM duty ORDER BY id DESC ) d WHERE staff.staffid > d.staffid Order by staff.staffid UNION SELECT top 1 staffid FROM staff ORDER BY staffid ) x ```
Insert table record with auto generate foreign id
[ "", "sql", "sql-server", "sql-server-2008", "" ]
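The rotation logic itself, independent of T-SQL, is a round-robin over the staff ids; a sketch with `itertools.cycle` (the names are invented):

```python
from itertools import cycle

def assign_duties(duty_names, staff_ids):
    """Pair each new duty with the next staff id, wrapping around,
    which is the same rotation the stored-procedure answers implement."""
    rotation = cycle(staff_ids)
    return [(duty, next(rotation)) for duty in duty_names]

duties = assign_duties(['cleaning'] * 6, [1, 2])
```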
I'm learning how to use pickle. I've created a namedtuple object, appended it to a list, and tried to pickle that list. However, I get the following error: ``` pickle.PicklingError: Can't pickle <class '__main__.P'>: it's not found as __main__.P ``` I found that if I ran the code without wrapping it inside a function, it works perfectly. Is there an extra step required to pickle an object when wrapped inside a function? Here is my code: ``` from collections import namedtuple import pickle def pickle_test(): P = namedtuple("P", "one two three four") my_list = [] abe = P("abraham", "lincoln", "vampire", "hunter") my_list.append(abe) with open('abe.pickle', 'wb') as f: pickle.dump(abe, f) pickle_test() ```
Create the named tuple *outside* of the function: ``` from collections import namedtuple import pickle P = namedtuple("P", "one two three four") def pickle_test(): my_list = [] abe = P("abraham", "lincoln", "vampire", "hunter") my_list.append(abe) with open('abe.pickle', 'wb') as f: pickle.dump(abe, f) pickle_test() ``` Now `pickle` can find it; it is a module global now. When unpickling, all the `pickle` module has to do is locate `__main__.P` again. In your version, `P` is a *local*, to the `pickle_test()` function, and that is not introspectable or importable. Note that `pickle` stores just the module and the class name, as taken from the class's `__name__` attribute. Make sure that the first argument of the `namedtuple()` call matches the global variable you are assigning to; `P.__name__` must be `"P"`! It is important to remember that `namedtuple()` is a class factory; you give it parameters and it returns a class object for you to create instances from. `pickle` only stores the *data* contained in the instances, plus a string reference to the original class to reconstruct the instances again.
I found [this answer](https://stackoverflow.com/a/28149627/7640677) in another thread. For pickling to work correctly, the variable assigned to the namedtuple must have the same name as the namedtuple itself ``` group_t = namedtuple('group_t', 'field1, field2') # this will work mismatched_group_t = namedtuple('group_t', 'field1, field2') # this will throw the error ```
How to pickle a namedtuple instance correctly
[ "", "python", "python-2.7", "pickle", "namedtuple", "" ]
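The fix from the accepted answer as a self-contained round trip: the assignment target and the name passed to `namedtuple()` must match, and `P` must be a module-level global so pickle can locate it again by name when loading:

```python
import pickle
from collections import namedtuple

# Module-level, and the variable name matches the namedtuple's own name.
P = namedtuple('P', 'one two three four')

def pickle_roundtrip():
    abe = P('abraham', 'lincoln', 'vampire', 'hunter')
    return pickle.loads(pickle.dumps(abe))

restored = pickle_roundtrip()
```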
I created a big multidimensional array `M` with `np.zeros((1000,1000))`. After a certain number of operations, I don't need it anymore. How can I free the RAM dynamically during the program's execution? Does `M=0` do it for me?
In *general* you can't. Even if you remove all the references to an object, it is left to the python implementation to re-use or free the memory. On CPython you could call [`gc.collect()`](http://docs.python.org/2/library/gc.html#gc.collect) to force a garbage collection run. But while that may reclaim memory, it doesn't necessarily return it to the OS. **But**: numpy is an extension module that does its own thing, and manages its own memory. When I monitor the memory usage of a python process, I see the RAM usage (Resident Set Size) going down after `del(M)` ``` In [1]: import numpy as np In [2]: M = np.zeros((1000,1000)) In [3]: del(M) In [4]: ``` Just after starting IPython: ``` slackbox:~> ps -u 77778 USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND rsmith 77778 0.0 0.5 119644 22692 0 S+ 2:37PM 0:00.39 /usr/local/bin/py ``` After importing numpy (1): ``` slackbox:~> ps -u 77778 USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND rsmith 77778 1.0 0.8 168548 32420 0 S+ 2:37PM 0:00.49 /usr/local/bin/py ``` After creating the array (2): ``` slackbox:~> ps -u 77778 USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND rsmith 77778 0.0 1.0 176740 40328 0 S+ 2:37PM 0:00.50 /usr/local/bin/py ``` After the call to `del` (3): ``` slackbox:~> ps -u 77778 USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND rsmith 77778 0.0 0.8 168548 32496 0 S+ 2:37PM 0:00.50 /usr/local/bin/py slackbox:~> ``` So *in this case* using `del()` can reduce the amount of RAM used. Note that there is an exception to this with numpy. Numpy can use memory allocated by another extension library. In that case the numpy object is marked that numpy doesn't own the memory, and freeing it is left to the other library.
Use the [del](http://docs.python.org/2/tutorial/datastructures.html#the-del-statement) statement: ``` del M ``` And by the way, a `float64` array of shape `(1000, 1000)` takes only 7 Mb. If you're having memory problems, it's likely that the problem is elsewhere.
dynamic memory allocation in python
[ "", "python", "memory", "numpy", "ram", "" ]
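Whether the memory is returned to the OS is implementation-dependent, as the first answer explains, but the object itself being reclaimed after `del` (or after rebinding with `M = 0`) can be observed with a weak reference. The sketch uses a list-of-lists instead of a numpy array, and a subclass only because plain lists cannot be weak-referenced:

```python
import gc
import weakref

class Matrix(list):
    """Subclass exists only so the instance can be weak-referenced."""

M = Matrix([[0.0] * 1000 for _ in range(1000)])
probe = weakref.ref(M)   # observe the object without keeping it alive

del M                    # drop the last strong reference
gc.collect()             # usually redundant under CPython's refcounting

freed = probe() is None  # the big object has been reclaimed
```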
I'm doing a programming challenge and I'm going crazy with one of the challenges. In the challenge, I need to compute the MD5 of a string. The string is given in the following form: `n[c]`: Where `n` is a number and `c` is a character. For example: `b3[a2[c]]` => `baccaccacc` Everything went ok until I was given the following string: `1[2[3[4[5[6[7[8[9[10[11[12[13[a]]]]]]]]]]]]]` This strings turns into a string with 6227020800 `a`'s. This string is more than 6GB, so it's nearly impossible to compute it in practical time. So, here is my question: **Are there any properties of MD5 that I can use here?** I know that there has to be a form to make it in short time, and I suspect it has to be related to the fact that all the string has is the same character repeated multiple times.
You probably have created a (recursive) function to produce the result as a single value. Instead you should use a generator to produce the result as a stream of bytes. These you can then feed byte by byte into your MD5 hash routine. The size of the stream does not matter this way, it will just have an impact on the computation time, not on the memory used. Here's an example using a single-pass parser: ``` import re, sys, md5 def p(s, pos, callBack): while pos < len(s): m = re.match(r'(\d+)\[', s[pos:]) if m: # repetition? number = m.group(1) for i in range(int(number)): endPos = p(s, pos+len(number)+1, callBack) pos = endPos elif s[pos] == ']': return pos + 1 else: callBack(s[pos]) pos += 1 return pos + 1 digest = md5.new() def feed(s): digest.update(s) sys.stdout.write(s) sys.stdout.flush() end = p(sys.argv[1], 0, feed) print print "MD5:", digest.hexdigest() print "finished parsing input at pos", end ```
All hash functions are designed to work with byte streams, so you should not first generate the whole string and hash it afterwards - you should write a generator that produces chunks of string data and feed them to the MD5 context. Also, MD5 uses a 64-byte (or char) buffer, so it would be a good idea to feed 64-byte chunks of data to the context.
Hashing same character multiple times
[ "", "python", "string", "algorithm", "hash", "md5", "" ]
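The property both answers rely on: MD5 is a streaming hash, so feeding the repeated character in chunks via `update()` gives exactly the same digest as hashing the whole string, without ever materializing it. A Python 3 sketch using `hashlib` (the chunk size is an arbitrary choice):

```python
import hashlib

def md5_of_repeated(char, count, chunk_size=64 * 1024):
    """Hash `char` repeated `count` times without building the full string."""
    digest = hashlib.md5()
    chunk = char * chunk_size
    full, rest = divmod(count, chunk_size)
    for _ in range(full):
        digest.update(chunk)   # same digest as one big update
    digest.update(char * rest)
    return digest.hexdigest()

streamed = md5_of_repeated(b'a', 1_000_003)
```

Memory stays bounded by the chunk size, so the 6,227,020,800-character case from the question needs only time, not RAM.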
I have written a case condition inside a where clause which works fine without any subqueries, but it does not work with subqueries. For example: ``` declare @isadmin varchar(5) = 'M' select * from Aging_calc_all a where a.AccountNumber in (case @isadmin when 'M' then 1 else 0 end) ``` which works fine. However, this does not seem to work: ``` select * from Aging_calc_all a where a.AccountNumber in (case @isadmin when 'M' then (select AccountNumber from ACE_AccsLevelMaster where AssignedUser=7) else 0 end) ``` Any suggestions, or is this a T-SQL bug in 2008?
The error you're receiving is "Subquery returned more than 1 value", I believe. So you can't return multiple values after THEN. You should rewrite your query to something like this: ``` select * from Aging_calc_all a where (@isadmin='M' and a.AccountNumber in (select AccountNumber from ACE_AccsLevelMaster where AssignedUser=7)) or (@isadmin<>'M' and a.AccountNumber=0) ```
``` select * from Aging_calc_all a where a.AccountNumber in ( SELECT AccountNumber from ACE_AccsLevelMaster where AssignedUser=7 AND @isadmin = 'M' UNION ALL select 0 where @isadmin <> 'M' ) ``` **EDITED** To show how use multiple criteria ``` select * from Aging_calc_all a where a.AccountNumber in ( SELECT AccountNumber from ACE_AccsLevelMaster where AssignedUser=7 AND @isadmin = 'M' -- case to select from another table: UNION ALL select * from ANOTHER_TABLE where @isadmin = 'A' -- case to select from const set (1, 2, 3): UNION ALL select * from ( select 1 union all select 2 union all select 3) x where @isadmin = 'B' -- case to show how 'else' work UNION ALL select 0 where @isadmin not in( 'M', 'A', 'B') ) ```
select inside case statement in where clause tsql
[ "", "sql", "sql-server", "t-sql", "" ]
My code iterates over a set and prints actors' names: ``` for actor in actorsByMovies(): print actor+",", ``` The result looks like: ``` Brad Pitt, George Clooney, ``` But I want it to detect the last element so that it won't print the last comma. The result should be instead: ``` Brad Pitt, George Clooney ``` How can I do that?
``` print(', '.join(actorsByMovies())) ```
The solution by @jamylak is best whenever you can use it, but if you have to keep the loop for other things, you might do something like this: ``` from __future__ import print_function # Python [2.6, 3) for index, actor in enumerate(actorsByMovies()): if index > 0: print(', ', end='') print(actor, end='') ``` Using the same new `print` function, you could do this instead of using `str.join`: ``` print(*actorsByMovies(), sep=', ') ```
Python: How to not print comma in last element in a for loop?
[ "", "python", "string", "list", "" ]
Hypothetically, if I have 6 rows that meet a certain criteria .. (ie, `WHERE First = 'Bob' AND Middle = 'J' AND Last = 'Dole'`) ..and I do a `DELETE FROM table WHERE First = 'Bob' AND Middle = 'J' AND Last = 'Dole' LIMIT 5`.. Which 5 rows will be deleted first? My assumption is that it would delete the rows with the lowest primary key first, is this correct?
By default SQL will delete rows from top to bottom in your database, in short the first 5 records in the table. If you want, you can order them first using an `ORDER BY` clause with `ASC` or `DESC`
Better to use something like this ``` Delete from table where id in (select id from (select id from table where First = 'Bob' AND Middle = 'J' AND Last = 'Dole' order by id asc limit 5) picked) ``` (the derived table is needed because MySQL does not allow the target table of a DELETE directly in the subquery). Now change `asc` to `desc` and you can be sure which rows are deleted. Thanks
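The delete-the-lowest-ids idea can be demonstrated with Python's standard-library `sqlite3` (SQLite accepts the self-referencing subquery directly; MySQL would need a derived-table wrapper around it). The table and data are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, first TEXT)")
conn.executemany("INSERT INTO people (first) VALUES (?)", [("Bob",)] * 6 + [("Ann",)])

# Delete the 5 matching rows with the lowest primary keys, deterministically
conn.execute("""
    DELETE FROM people
    WHERE id IN (SELECT id FROM people WHERE first = 'Bob' ORDER BY id LIMIT 5)
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM people ORDER BY id")]
```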
Deleting rows with MySQL
[ "", "mysql", "sql", "" ]
During web scraping and after getting rid of all html tags, I got the black telephone character \u260e in unicode (☎). But unlike [this response](https://stackoverflow.com/questions/7266842/character-u260e) I do want to get rid of it too. I used the following regular expressions in Scrapy to eliminate html tags: ``` pattern = re.compile("<.*?>|&nbsp;|&amp;",re.DOTALL|re.M) ``` Then I tried to match \u260e and I think I got caught by [the backslash plague](http://docs.python.org/2/howto/regex.html#the-backslash-plague). I tried unsuccessfully this patterns: ``` pattern = re.compile("<.*?>|&nbsp;|&amp;|\u260e",re.DOTALL|re.M) pattern = re.compile("<.*?>|&nbsp;|&amp;|\\u260e",re.DOTALL|re.M) pattern = re.compile("<.*?>|&nbsp;|&amp;|\\\\u260e",re.DOTALL|re.M) ``` None of this worked and I still have \u260e as an output. How can I make this disappear?
Using Python 2.7.3, the following works fine for me: ``` import re pattern = re.compile(u"<.*?>|&nbsp;|&amp;|\u260e",re.DOTALL|re.M) s = u"bla ble \u260e blo" re.sub(pattern, "", s) ``` Output: ``` u'bla ble blo' ``` As pointed out by @Zack, this works because the pattern is now a unicode string, i.e., already decoded, so the escape sequence `\u260e` denotes the actual ☎ character (: Once both the string to be searched and the regular expression contain the black phone itself, and not the six characters `\u260e`, they both match.
If your string is already unicode, there's two easy ways. The second one will affect more than just the ☎, obviously. ``` >>> import string >>> foo = u"Lorum ☎ Ipsum" >>> foo.replace(u'☎', '') u'Lorum Ipsum' >>> "".join(s for s in foo if s in string.printable) u'Lorum Ipsum' ``` * [Remove non-ascii characters but leave periods and spaces](https://stackoverflow.com/questions/8689795/python-remove-non-ascii-characters-but-leave-periods-and-spaces) for more information about `string.printable` * [The SHORTEST way to remove multiple spaces in a string in Python](https://stackoverflow.com/questions/1546226/the-shortest-way-to-remove-multiple-spaces-in-a-string-in-python) if you don't want multiple whitespaces.
How to eliminate the telephone emoji in unicode?
[ "", "python", "regex", "python-2.7", "scrapy", "" ]
I have a very long query with a lot of LEFT JOINS in. The problem is that I have two COUNTS which do not seem to work together. Both the counts will equal the same value even if they're not. Here's what I have; ``` SELECT T.ID, T.name, T.pic, T.T_ID, COUNT(P.T_ID) AS plays, COUNT(L.T_ID) AS likes, S.Status, G.gig_name, G.date_time, G.lineup, G.price, G.ticket, E.action, E.ID, E.timestamp, E.E_ID FROM events E LEFT JOIN TRACKS T ON T.ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = T.T_ID LEFT JOIN STATUS S ON S.ID = E.ID AND E.action = 'has some news.' AND E.E_ID = S.S_ID LEFT JOIN GIGS G ON G.ID = E.ID AND E.action = 'has posted a gig.' AND E.E_ID = G.G_ID LEFT JOIN track_plays P ON P.A_ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = P.T_ID LEFT JOIN track_likes L ON L.ID = E.E_ID AND E.action = 'has uploaded a track.' WHERE E.ID = '3' GROUP BY E.E_ID ORDER BY E.timestamp DESC LIMIT 15 ``` I won't explain all the query but I think you'll get the gist of it. The JOINS in question are the last two. If `COUNT(P.T_ID)` = `100` `COUNT(L.T_ID)` will also = `100`. Thanks in advance!
You might try "rephrasing" those counts as correlated subqueries: ``` SELECT T.ID, T.name, T.pic, T.T_ID, (SELECT COUNT(*) FROM track_plays WHERE A_ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = T_ID) AS plays, (SELECT COUNT(*) FROM track_likes WHERE ID = E.E_ID AND E.action = 'has uploaded a track.') AS likes, S.Status, G.gig_name, G.date_time, G.lineup, G.price, G.ticket, E.action, E.ID, E.timestamp, E.E_ID FROM events E LEFT JOIN TRACKS T ON T.ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = T.T_ID LEFT JOIN STATUS S ON S.ID = E.ID AND E.action = 'has some news.' AND E.E_ID = S.S_ID LEFT JOIN GIGS G ON G.ID = E.ID AND E.action = 'has posted a gig.' AND E.E_ID = G.G_ID WHERE E.ID = '3' GROUP BY E.E_ID ORDER BY E.timestamp DESC LIMIT 15 ``` I find this to be simpler to understand/maintain and often faster.
You should try using `COUNT(DISTINCT)` ``` SELECT T.ID, T.name, T.pic, T.T_ID, COUNT(DISTINCT P.T_ID) AS plays, COUNT(DISTINCT L.T_ID) AS likes, S.Status, G.gig_name, G.date_time, G.lineup, G.price, G.ticket, E.action, E.ID, E.timestamp, E.E_ID FROM events E LEFT JOIN TRACKS T ON T.ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = T.T_ID LEFT JOIN STATUS S ON S.ID = E.ID AND E.action = 'has some news.' AND E.E_ID = S.S_ID LEFT JOIN GIGS G ON G.ID = E.ID AND E.action = 'has posted a gig.' AND E.E_ID = G.G_ID LEFT JOIN track_plays P ON P.A_ID = E.ID AND E.action = 'has uploaded a track.' AND E.E_ID = P.T_ID LEFT JOIN track_likes L ON L.ID = E.E_ID AND E.action = 'has uploaded a track.' WHERE E.ID = '3' GROUP BY E.E_ID ORDER BY E.timestamp DESC LIMIT 15 ```
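A tiny made-up schema shows why the two joined counts come out equal (every play row pairs with every like row), and how counting each table independently avoids the fan-out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE plays (track INTEGER, who INTEGER);
CREATE TABLE likes (track INTEGER, who INTEGER);
INSERT INTO plays VALUES (1, 10), (1, 11), (1, 12);  -- 3 plays of track 1
INSERT INTO likes VALUES (1, 10), (1, 11);           -- 2 likes of track 1
""")

# Joining both tables fans out: 3 plays x 2 likes = 6 rows, so both counts are 6
fanned = conn.execute("""
    SELECT COUNT(p.track), COUNT(l.track)
    FROM plays p JOIN likes l ON l.track = p.track
    WHERE p.track = 1
""").fetchone()

# COUNT(DISTINCT ...) over a distinguishing column recovers the real numbers
distinct_counts = conn.execute("""
    SELECT COUNT(DISTINCT p.who), COUNT(DISTINCT l.who)
    FROM plays p JOIN likes l ON l.track = p.track
    WHERE p.track = 1
""").fetchone()

# Correlated subqueries count each table independently, as in the first answer
correct = conn.execute("""
    SELECT (SELECT COUNT(*) FROM plays WHERE track = 1),
           (SELECT COUNT(*) FROM likes WHERE track = 1)
""").fetchone()
```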
Left Join makes two counts from different tables the same value
[ "", "mysql", "sql", "" ]
I am trying to remove specific line numbers from a file in python in a way such as **./foo.py filename.txt 4 5 2919** *Where 4 5 and 2919 are line numbers* What I am trying to do is: ``` for i in range(len(sys.argv)): if i>1: # Avoiding sys.argv[0,1] newlist.append(int(sys.argv[i])) ``` Then: ``` count=0 generic_loop{ bar=file.readline() count+=1 if not count in newlist: print bar } ``` it prints all the lines in original file (with blank spaces in between)
You can try something like this: ``` import sys import os filename = sys.argv[1] lines = [int(x) for x in sys.argv[2:]] #open two files, one for reading and one for writing with open(filename) as f, open("newfile","w") as f2: #use enumerate(f, 1) to get each line with a 1-based line number, matching the numbers on the command line for i, line in enumerate(f, 1): if i not in lines: #`if i not in lines` is clearer than `if not i in lines` f2.write(line) os.rename("newfile", filename) #rename the new file to the original one ``` Note that for the generation of temporary files it's better to use the [`tempfile`](http://docs.python.org/dev/library/tempfile.html) module.
You can use [`enumerate`](http://docs.python.org/dev/library/functions.html#enumerate) to determine the line number: ``` import sys exclude = set(map(int, sys.argv[2:])) with open(sys.argv[1]) as f: for num,line in enumerate(f, start=1): if num not in exclude: sys.stdout.write(line) ``` You can remove `start=1` if you start counting at 0. In the above code, the line numbering starts with 1: ``` $ python3 so-linenumber.py so-linenumber.py 2 4 5 import sys with open(sys.argv[1], 'r') as f: sys.stdout.write(line) ``` If you want to write the content to the file itself, write it to a [temporary file](http://docs.python.org/dev/library/tempfile.html#tempfile.NamedTemporaryFile) instead of sys.stdout, and then [rename](http://docs.python.org/dev/library/os.html#os.rename) that to the original file name (or use [sponge](http://linux.die.net/man/1/sponge) on the command-line), like this: ``` import os import sys from tempfile import NamedTemporaryFile exclude = set(map(int, sys.argv[2:])) with NamedTemporaryFile('w', delete=False) as outf: with open(sys.argv[1]) as inf: outf.writelines(line for n,line in enumerate(inf, 1) if n not in exclude) os.rename(outf.name, sys.argv[1]) ```
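For unit testing, the same filtering idea can be packaged as a pure function that never touches the filesystem (the function name is mine, not from the answers):

```python
def drop_lines(text, line_numbers):
    """Return `text` without the given 1-based line numbers."""
    exclude = set(line_numbers)
    kept = (line for n, line in enumerate(text.splitlines(True), start=1)
            if n not in exclude)
    return "".join(kept)
```

`splitlines(True)` keeps the newline characters, so the surviving lines are re-joined byte-for-byte.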
python remove "many" lines from file
[ "", "python", "list", "file-io", "" ]
I am working in: * Eclipse * Windows 7 * 64-bit Python 3.3 I want to import `writer.pyx` (yes, Cython) into `main.py`. At the top of `main.py`, I have the appropriate import statement: ``` import writer ``` Both `main.py` and `writer.pyx` are in the same directory, and that directory is also in Windows' PYTHONPATH environment variable. However, it gives me the error `ImportError: No module named 'writer'`. So, as far as I can tell, it should be working. But, here's the kicker: in that same directory, there's a file called `reader.pyx` that I'm also importing in `main.py` - and it works perfectly. No issues, no errors. So, clear summary: * `main.py` is `import`ing `writer.pyx` and `reader.pyx` * All three files are in the same directory (and PYTHONPATH lists that directory) * `reader.pyx` imports fine, but `writer.pyx` throws an `ImportError: No module named 'writer'` Any ideas as to how I can fix this? Visual representation: ``` import reader import writer def function(): # code ``` P.S. This is not my code, and it used to run just fine on this very computer, and the code has not been changed since. This leads me to believe it's an environment problem, but I'm not sure what. Something with Cython, perhaps? I don't have any real experience with it.
From what I understand, `pyx` files need to be [compiled before they can be loaded](http://docs.cython.org/src/reference/compilation.html). You can do this from within your script by using `pyximport`, if you first issue: ``` import pyximport; pyximport.install(pyimport = True) ``` On top of that, based on the fact that there seems to be another `reader.py` in your path, I'd suggest you create a folder in the same directory where `main.py` resides (say you name it `test_imports`) and put both `reader.pyx` and `writer.pyx` there, so that you're sure you're importing those files when you issue: ``` from test_imports import reader, writer ``` *Note that the `test_imports` directory will also need an empty `__init__.py` file that tells Python it is a [package](http://docs.python.org/2/tutorial/modules.html#packages).*
Are you working in a package? If so you will need to import it using the package name: ``` from mypackage import writer ```
Python "Module Not Found" - PYTHONPATH fixes not working
[ "", "python", "import", "module", "cython", "" ]
I have a simple question. I'm just wanting to know how I would have my program read the "input()" and see if there are integers or any kind of number in the string, and if so then print out a message saying so. Pretty much I just want to know how I would make sure that no one enters in a number for their name. Thanks! ``` yn = None while yn != "y": print("What is your name?") name = input() print("Oh, so your name is {0}? Cool!".format(name)) print("Now how old are you?") age = input() print("So your name is {0} and you're {1} years old?".format(name, age)) print("y/n?") yn = input() if yn == "y": break if yn == "n": print("Then here, try again!") print("Cool!") ```
Use the [`str.isdigit()` method](http://docs.python.org/3/library/stdtypes.html#str.isdigit) on strings, together with the [`any()` function](http://docs.python.org/3/library/functions.html#any): ``` if any(c.isdigit() for c in name): # there is a digit in the name ``` `.isdigit()` returns `True` for any string that consists only of digits. This includes any Unicode character that is marked as a numeric digit or numeric decimal. `any()` loops over the sequence you pass in and returns `True` as soon as it finds the first element that is `True`, and `False` if all elements are `False`. Demo: ``` >>> any(c.isdigit() for c in 'Martijn Pieters') False >>> any(c.isdigit() for c in 'The answer is 42') True ```
*see if there are integers or any kind of number in the string* ``` any(c.isdigit() for c in name) ``` returns `True` for strings such as "123", "123.45", "abc123".
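The check can be wrapped in a small helper (the name is mine) so the prompt loop can simply re-ask while it returns True:

```python
def contains_digit(s):
    """True if any character of s is a digit, i.e. the name should be rejected."""
    return any(c.isdigit() for c in s)
```

In the original loop you would then re-prompt, e.g. `while contains_digit(name): name = input(...)` (hypothetical wiring, since `input()` itself can't be exercised here).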
Making sure no integers in a string?
[ "", "python", "string", "integer", "" ]
I want to build up a new list in which every n-th element of an initial list is left out, e.g.: from `['first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh']` make `['second', 'third', 'fifth', 'sixth']` because `n = 3` How to do that? Is it - first of all - correct to accomplish this by building up a new list, instead of trying to delete? For the latter I tried with `deque` and `rotate`, but that ended up in confusion. To build up a new list I was trying something with `range(1,len(list),n)`, but those are the element positions to be deleted and not the ones to be kept for the new list. How do I get my desired list?
``` >>> [s for (i,s) in enumerate(['first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh']) if i%3] ['second', 'third', 'fifth', 'sixth'] ``` The answer in a few steps: The `enumerate` function gives a list of tuples with the index followed by the item: ``` >>> list(enumerate(['first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh'])) [(0, 'first'), (1, 'second'), (2, 'third'), (3, 'fourth'), (4, 'fifth'), (5, 'sixth'), (6, 'seventh')] ``` then you check if the index does not divide by three, and if so you include the item.
You can create a new list and filter at the same time: ``` >>> li = ['first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh'] >>> new_list = [item for index, item in enumerate(li) if index % 3 != 0] >>> new_list ['second', 'third', 'fifth', 'sixth'] ``` ### Discussion `enumerate(li)` returns a list of tuples: (index, item): ``` [(0, 'first'), (1, 'second'), (2, 'third'), (3, 'fourth'), (4, 'fifth'), (5, 'sixth'), (6, 'seventh')] ``` We only pick out those items with index % 3 != 0, which means dropping items with index 0, 3, 6, ... ### Update: A Different Solution If you don't mind using itertools: ``` >>> import itertools >>> list(itertools.compress(li, itertools.cycle(range(3)))) ['second', 'third', 'fifth', 'sixth'] ``` The `cycle(range(3))` returns an infinite list: ``` [0, 1, 2, 0, 1, 2, ...] ``` `compress()` picks out the items whose corresponding selector is not zero.
Python: building new list from existing by dropping every n-th element
[ "", "python", "list", "" ]
I tried to use multiple assignment as shown below to initialize variables, but I got confused by the behavior; I expected the lists to be reassigned separately, i.e. that b[0] and c[0] would still equal 0 as before. ``` a=b=c=[0,3,5] a[0]=1 print(a) print(b) print(c) ``` Result is: [1, 3, 5] [1, 3, 5] [1, 3, 5] Is that correct? What should I use for multiple assignment? What is different from this? ``` d=e=f=3 e=4 print('f:',f) print('e:',e) ``` result: ('f:', 3) ('e:', 4)
If you're coming to Python from a language in the C/Java/etc. family, it may help you to stop thinking about `a` as a "variable", and start thinking of it as a "name". `a`, `b`, and `c` aren't different variables with equal values; they're different names for the same identical value. Variables have types, identities, addresses, and all kinds of stuff like that. Names don't have any of that. *Values* do, of course, and you can have lots of names for the same value. If you give `Notorious B.I.G.` a hot dog,\* `Biggie Smalls` and `Chris Wallace` have a hot dog. If you change the first element of `a` to 1, the first elements of `b` and `c` are 1. If you want to know if two names are naming the same object, use the `is` operator: ``` >>> a=b=c=[0,3,5] >>> a is b True ``` --- You then ask: > what is different from this? ``` d=e=f=3 e=4 print('f:',f) print('e:',e) ``` Here, you're rebinding the name `e` to the value `4`. That doesn't affect the names `d` and `f` in any way. In your previous version, you were assigning to `a[0]`, not to `a`. So, from the point of view of `a[0]`, you're rebinding `a[0]`, but from the point of view of `a`, you're changing it in-place. You can use the `id` function, which gives you some unique number representing the identity of an object, to see exactly which object is which even when `is` can't help: ``` >>> a=b=c=[0,3,5] >>> id(a) 4473392520 >>> id(b) 4473392520 >>> id(a[0]) 4297261120 >>> id(b[0]) 4297261120 >>> a[0] = 1 >>> id(a) 4473392520 >>> id(b) 4473392520 >>> id(a[0]) 4297261216 >>> id(b[0]) 4297261216 ``` Notice that `a[0]` has changed from 4297261120 to 4297261216—it's now a name for a different value. And `b[0]` is also now a name for that same new value. That's because `a` and `b` are still naming the same object. --- Under the covers, `a[0]=1` is actually calling a method on the list object. (It's equivalent to `a.__setitem__(0, 1)`.) So, it's not *really* rebinding anything at all. 
It's like calling `my_object.set_something(1)`. Sure, likely the object is rebinding an instance attribute in order to implement this method, but that's not what's important; what's important is that you're not assigning anything, you're just mutating the object. And it's the same with `a[0]=1`. --- user570826 asked: > What if we have, `a = b = c = 10` That's exactly the same situation as `a = b = c = [1, 2, 3]`: you have three names for the same value. But in this case, the value is an `int`, and `int`s are immutable. In either case, you can rebind `a` to a different value (e.g., `a = "Now I'm a string!"`), but that won't affect the original value, which `b` and `c` will still be names for. The difference is that with a list, you can change the value `[1, 2, 3]` into `[1, 2, 3, 4]` by doing, e.g., `a.append(4)`; since that's actually changing the value that `b` and `c` are names for, `b` will now be `[1, 2, 3, 4]`. There's no way to change the value `10` into anything else. `10` is 10 forever, just like Claudia the vampire is 5 forever (at least until she's replaced by Kirsten Dunst). --- \* Warning: Do not give Notorious B.I.G. a hot dog. Gangsta rap zombies should never be fed after midnight.
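The mutation-versus-rebinding distinction in this answer can be condensed into a few lines:

```python
a = b = [0, 3, 5]

a[0] = 1                      # mutation: changes the single shared list object
mutated = (b == [1, 3, 5] and a is b)

a = [7, 8, 9]                 # rebinding: `a` names a new list, `b` keeps the old one
rebound = (b == [1, 3, 5] and a is not b)
```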
Cough cough ``` >>> a,b,c = (1,2,3) >>> a 1 >>> b 2 >>> c 3 >>> a,b,c = ({'test':'a'},{'test':'b'},{'test':'c'}) >>> a {'test': 'a'} >>> b {'test': 'b'} >>> c {'test': 'c'} >>> ```
Python assigning multiple variables to same value? list behavior
[ "", "python", "list", "" ]
I'm parsing a xml file in which I get basic expressions (like `id*10+2`). What I am trying to do is to evaluate the expression to actually get the value. To do so, I use the `eval()` method which works very well. The only thing is the numbers are in fact hexadecimal numbers. The `eval()` method could work well if every hex number was prefixed with '0x', but I could not find a way to do it, neither could I find a similar question here. How would it be done in a clean way ?
Use the [`re`](http://docs.python.org/2/library/re.html) module. ``` >>> import re >>> re.sub(r'([\dA-F]+)', r'0x\1', 'id*A+2') 'id*0xA+0x2' >>> eval(re.sub(r'([\dA-F]+)', r'0x\1', 'CAFE+BABE')) 99772 ``` Be warned though, with an invalid input to `eval`, it won't work. There are also [many risks](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html) of using `eval`. If your hex numbers have lowercase letters, then you could use this: ``` >>> re.sub(r'(?<!i)([\da-fA-F]+)', r'0x\1', 'id*a+b') 'id*0xa+0xb' ``` This uses a negative lookbehind assertion to assure that the letter `i` is not before the section it is trying to convert (preventing `'id'` from turning into `'i0xd'`. Replace `i` with `I` if the variable is `Id`.
If you can parse the expression into individual numbers then I would suggest using the [int function](http://docs.python.org/2/library/functions.html#int): ``` >>> int("CAFE", 16) 51966 ```
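Combining both answers, a small helper (the name and the word-boundary tweak are mine) can prefix bare uppercase-hex tokens before handing the expression to `eval`, with the usual caveat that `eval` on untrusted input is dangerous:

```python
import re

def hexify(expr):
    """Prefix standalone uppercase-hex tokens with '0x' (illustrative, not robust)."""
    return re.sub(r'\b([\dA-F]+)\b', r'0x\1', expr)
```

The `\b` boundaries keep the substitution from firing inside mixed identifiers; lowercase hex digits would need the extended character class discussed above.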
Appending '0x' before the hex numbers in a string
[ "", "python", "parsing", "" ]
I have started my IPython Notebook with ``` ipython notebook --pylab inline ``` This is my code in one cell ``` df['korisnika'].plot() df['osiguranika'].plot() ``` This is working fine, it will draw two lines, but on the same chart. I would like to draw each line on a separate chart. And it would be great if the charts would be next to each other, not one after the other. I know that I can put the second line in the next cell, and then I would get two charts. But I would like the charts close to each other, because they represent the same logical unit.
Make the multiple axes first and pass them to the Pandas plot function, like: ``` fig, axs = plt.subplots(1,2) df['korisnika'].plot(ax=axs[0]) df['osiguranika'].plot(ax=axs[1]) ``` It still gives you 1 figure, but with two different plots next to each other.
You can also call the show() function after each plot, e.g.: ``` plt.plot(a) plt.show() plt.plot(b) plt.show() ```
Make more than one chart in same IPython Notebook cell
[ "", "python", "pandas", "ipython", "jupyter-notebook", "" ]
Why do I get an error when writing my select statement like this - Note the space/tab/newline next to my column name "Personnel Number" (I know the naming convention is bad, but I had to do it, since that is the way the Excel spreadsheet that I have to import regularly looks) ``` SELECT md.[Personnel Number] FROM MainDump md LEFT JOIN EthicsManagement em on em.[Personnel Number] = md.[Personnel Number] ``` And I have to write it like this to not give me an error: ``` SELECT md.[Personnel Number ] FROM MainDump md LEFT JOIN EthicsManagement em on em.[Personnel Number] = md.[Personnel Number ] ```
It's fairly obvious that your column name contains a trailing line feed. If you're absolutely sure that it's not possible to rename columns, I'm not aware of any other way to type a line feed in a column literal than the one you use: ``` SELECT md.[Personnel Number ] ``` In any case, I have a strong feeling that there's a bug in your import code. I suspect that the column is the last one in the line, you've set `\n` as line separator but your file actually uses `\r\n`.
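The diagnosis is easy to reproduce with SQLite, where double quotes play the role of SQL Server's brackets: a quoted identifier may legally end in a newline, and only the exact spelling matches (table and data invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A quoted identifier may contain a trailing newline -- the suspected import bug
conn.execute('CREATE TABLE MainDump ("Personnel Number\n" TEXT)')
conn.execute('INSERT INTO MainDump VALUES (?)', ("E123",))

cur = conn.execute('SELECT "Personnel Number\n" FROM MainDump')
colname = cur.description[0][0]   # the newline survives in the column name
rows = cur.fetchall()
```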
``` SELECT md.`Personnel Number` FROM MainDump md LEFT JOIN EthicsManagement em on em.`Personnel Number` = md.`Personnel Number` ``` Try using the backtick [`] key, below Esc at the top left of the keyboard
Select statement format Error with spaces
[ "", "sql", "sql-server", "select", "" ]
I have a working solution for creating a list of some random numbers, counting their occurrences, and putting the result in a dictionary which looks like the following: ``` random_ints = [random.randint(0,4) for _ in range(6)] dic = {x:random_ints.count(x) for x in set(random_ints)} ``` so that for, say [0,2,1,2,1,4] I get {0: 1, 1: 2, 2: 2, 4:1} I was wondering if it's possible to express this in a one-liner, preferably without the use of a library function - I want to see what's possible with python :) When I try to integrate the two lines in one I don't know how to express the two references to the same comprehensioned list of random_ints ..??? I expected something like: ``` dic = {x:random_ints.count(x) for x in set([random.randint(0,4) for _ in range(6)] as random_ints)) ``` which of course does not work... I looked (nested) list comprehensions up here on SO, but I could not apply the solutions I found to my problem. thanks, s.
There are a couple ways to achieve something like that, but none of them is exactly what you want. What you can't do is simple binding of a name to a fixed value inside the list/dict comprehension. If `random_ints` does not depend on any of the iteration variables needed for `dic`, it's better to do it the way you did it, and create `random_ints` separately. Conceptually, the only things that should be in the dict comprehension are the things that need to be created separately for each item in the dict. `random_ints` doesn't meet this criterion; you only need one `random_ints` overall, so there's no reason to put it in the dict comprehension. That said, one way to do it is to fake it by iterating over a one-element list containing your `random_ints`: ``` {x:random_ints.count(x) for random_ints in [[random.randint(0,4) for _ in range(6)]] for x in set(random_ints)} ```
Here is a one-liner that relies on `random` and `collections` modules. ``` >>> import collections >>> import random >>> c = collections.Counter(random.randint(0, 6) for _ in range(100)) >>> c Counter({2: 17, 1: 16, 0: 14, 3: 14, 4: 14, 5: 13, 6: 12}) ```
how to access a nested comprehensioned-list
[ "", "python", "list-comprehension", "" ]
I am developing an audio algorithm using Python and Numpy. Now I want to speed up that algorithm by implementing a part of it in C. In the past, [I have done this using cython](http://bastibe.de/real-time-signal-processing-in-python.html). Now I want to do the same thing using the new [cffi](https://cffi.readthedocs.org/en/release-0.6/). For testing purposes, I wrote a trivial C function: ``` void copy(float *in, float *out, int len) { for (int i=0; i<len; i++) { out[i] = in[i]; } } ``` Now I want to create two numpy arrays and have those be processed by this function. I figured out a way to do that: ``` import numpy as np from cffi import FFI ffi = FFI() ffi.cdef("void copy(float *in, float *out, int len);") C = ffi.dlopen("/path/to/copy.dll") float_in = ffi.new("float[16]") float_out = ffi.new("float[16]") arr_in = 42*np.ones(16, dtype=np.float32) float_in[0:16] = arr_in[0:16] C.copy(float_in, float_out, 16) arr_out = np.frombuffer(ffi.buffer(float_out, 16*4), dtype=np.float32) ``` However, I would like to improve this code: 1. Is there a way to directly access the underlying float buffers of the numpy arrays without copying them? 2. `ffi.buffer` is very convenient for quickly converting to contents of a C array to a Numpy array. Is there an equivalent way for quickly converting a numpy array into a C array without copying the individual elements? 3. For some applications, `float_in[0:16] = arr_in[0:16]` is a convenient way of accessing data. The opposite, `arr_out[0:16] = float_out[0:16]` does not work however. Why not?
The `ctypes` attribute of ndarray can interact with the ctypes module, for example, `ndarray.ctypes.data` is the data address of the array; you can cast it to a `float *` pointer, and then pass the pointer to the C function. ``` import numpy as np from cffi import FFI ffi = FFI() ffi.cdef("void copy(float *in, float *out, int len);") C = ffi.dlopen("ccode.dll") a = 42*np.ones(16, dtype=np.float32) b = np.zeros_like(a) pa = ffi.cast("float *", a.ctypes.data) pb = ffi.cast("float *", b.ctypes.data) C.copy(pa, pb, len(a)) print b ``` For your question 3: I think the ffi array doesn't provide numpy the necessary information to access its inner buffer, so numpy tries to convert it to a float number, which fails. The best solution I can think of is to convert it to a list first: ``` float_in[0:16] = list(arr_in[0:16]) ```
An update to this: modern versions of CFFI have `ffi.from_buffer()`, which turns any buffer object (like a numpy array) to a `char *` FFI pointer. You can now do directly: ``` cptr = ffi.cast("float *", ffi.from_buffer(my_np_array)) ``` or directly as arguments to the call (the `char *` is casted automatically to `float *`): ``` C.copy(ffi.from_buffer(arr_in), ffi.from_buffer(arr_out), 16) ```
How to pass a Numpy array into a cffi function and how to get one back out?
[ "", "python", "arrays", "numpy", "python-cffi", "" ]
I have a table structure like below. I need to select the row where `User_Id =100` and `User_sub_id = 1` and `time_used = minimum of all` and where `Timestamp` is the highest. The output of my query should result in : ``` US;1365510103204;NY;1365510103;100;1;678; ``` My query looks like this. ``` select * from my_table where CODE='DE' and User_Id = 100 and User_sub_id = 1 and time_used = (select min(time_used) from my_table where CODE='DE' and User_Id=100 and User_sub_id= 1); ``` This returns all 4 rows. I need only 1, the one with the highest timestamp. Many Thanks ``` CODE: Timestamp: Location: Time_recorded: User_Id: User_sub_Id: time_used "US;1365510102420;NY;1365510102;100;1;1078; "US;1365510102719;NY;1365510102;100;1;978; "US;1365510103204;NY;1365510103;100;1;878; "US;1365510102232;NY;1365510102;100;1;678; "US;1365510102420;NY;1365510102;100;1;678; "US;1365510102719;NY;1365510102;100;1;678; "US;1365510103204;NY;1365510103;100;1;678; "US;1365510102420;NY;1365510102;101;1;678; "US;1365510102719;NY;1365510102;101;1;638; "US;1365510103204;NY;1365510103;101;1;638; ```
Then try this: ``` select * from my_table where CODE='DE' and User_Id=100 and User_sub_id=1 and time_used=( select min(time_used) from my_table where CODE='DE' and User_Id=100 and User_sub_id=1 ) order by "timestamp" desc -- <-- this adds sorting limit 1; -- <-- this retrieves only one row ```
Another possibly faster solution is using window functions: ``` select * from ( select code, timestamp, min(time_used) over (partition by user_id, user_sub_id) as min_used, row_number() over (partition by user_id, user_sub_id order by timestamp desc) as rn, time_used, user_id, user_sub_id from my_table where CODE='US' and User_Id = 100 and User_sub_id = 1 ) t where time_used = min_used and rn = 1; ``` This only needs to scan the table once instead of twice, as your solution with the sub-select does. I would strongly recommend renaming the column `timestamp`. First, it is a reserved word and using reserved words is not recommended. And secondly it doesn't document anything; it's a horrible name as such. `time_used` is much better and you should find something similar for `timestamp`. Is that the "recording time", the "expiration time", the "due time" or something completely different?
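Here is the min-plus-latest-timestamp logic checked against the sample rows using SQLite from Python (column names shortened for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (ts INTEGER, user_id INTEGER, sub_id INTEGER, time_used INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?, ?)", [
    (1365510102420, 100, 1, 1078),
    (1365510102719, 100, 1, 978),
    (1365510103204, 100, 1, 878),
    (1365510102232, 100, 1, 678),
    (1365510102420, 100, 1, 678),
    (1365510102719, 100, 1, 678),
    (1365510103204, 100, 1, 678),
    (1365510102420, 101, 1, 678),
    (1365510102719, 101, 1, 638),
    (1365510103204, 101, 1, 638),
])

# Rows with the minimum time_used, newest timestamp first, keep one
row = conn.execute("""
    SELECT ts, time_used FROM my_table
    WHERE user_id = 100 AND sub_id = 1
      AND time_used = (SELECT MIN(time_used) FROM my_table
                       WHERE user_id = 100 AND sub_id = 1)
    ORDER BY ts DESC LIMIT 1
""").fetchone()
```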
Debugging a SQL Query
[ "", "sql", "database", "postgresql", "" ]
The `hex()` function in Python puts the leading characters `0x` in front of the number. Is there any way to tell it NOT to put them? So `0xfa230` will be `fa230`. The code is ``` import fileinput f = open('hexa', 'w') for line in fileinput.input(['pattern0.txt']): f.write(hex(int(line))) f.write('\n') ```
## (Recommended) ### [Python 3 f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings): Answered by [@GringoSuave](https://stackoverflow.com/a/52674814/1219006) ``` >>> i = 3735928559 >>> f'{i:x}' 'deadbeef' ``` --- ## Alternatives: ### [`format` builtin function](https://docs.python.org/3/library/functions.html#format) (good for single values only) ``` >>> format(3735928559, 'x') 'deadbeef' ``` ### And sometimes we still may need to use [`str.format` formatting](https://docs.python.org/3/library/string.html#format-string-syntax) in certain situations [@Eumiro](https://stackoverflow.com/a/16414606/1219006) (Though I would still recommend `f-strings` in most situations) ``` >>> '{:x}'.format(3735928559) 'deadbeef' ``` ### (Legacy) `f-strings` should solve all of your needs, but [`printf`-style formatting](https://docs.python.org/3/library/stdtypes.html#old-string-formatting) is what we used to do [@msvalkon](https://stackoverflow.com/a/16414614/1219006) ``` >>> '%x' % 3735928559 'deadbeef' ``` ### Without string formatting [@jsbueno](https://stackoverflow.com/a/73154538/1219006) ``` >>> i = 3735928559 >>> i.to_bytes(4, "big").hex() 'deadbeef' ``` ## Hacky Answers (avoid) ### `hex(i)[2:]` [@GuillaumeLemaître](https://stackoverflow.com/users/2312677/guillaume-lema%C3%AEtre) ``` >>> i = 3735928559 >>> hex(i)[2:] 'deadbeef' ``` This relies on string slicing instead of using a function / method made specifically for formatting as `hex`. This is why it may give unexpected output for negative numbers: ``` >>> i = -3735928559 >>> hex(i)[2:] 'xdeadbeef' >>> f'{i:x}' '-deadbeef' ```
Use this code: ``` '{:x}'.format(int(line)) ``` it allows you to specify a number of digits too: ``` '{:06x}'.format(123) # '00007b' ``` For Python 2.6 use ``` '{0:x}'.format(int(line)) ``` or ``` '{0:06x}'.format(int(line)) ```
How to use hex() without 0x in Python?
[ "", "python", "" ]
Find all bars that sell beers that are cheaper than all beers sold by "99 bottles" EDIT: Interpretation: So compare all the beers from Bar1 and check if all of those beers are cheaper than "99 bottles" example: ``` Is bluemoon price in motiv cheaper than bluemoon in 99 bottles? Is Guiness price in motiv cheaper than Guiness in 99 bottles? ``` Since there are only two beers in each bar, motiv has the cheaper beers. This is what I have so far but I do not get the right output. ``` select * from sells s1, sells s2 where s1.bar <>s2.bar and s2.bar <> '"99 bottles"' and s1.beer=s2.beer and s1.price < all (select s.price from sells s where s.bar ='"99 bottles"') ; ``` The following is what the table contains. ``` bar | beer | price --------------+----------+------- "99 bottles" | Bluemoon | 10 "99 bottles" | Guiness | 9 "rosies" | Bluemoon | 11 "rosies" | Guiness | 5 "motiv" | Bluemoon | 4 "motiv" | Guiness | 2 ``` The solution should be motiv, but I am having trouble trying to get the right query.
``` SELECT DISTINCT b.bar FROM barbeerprice b WHERE b.bar <> '99 bottles' -- '99 bottles' must not sell a same beer cheaper AND NOT EXISTS ( SELECT * FROM barbeerprice me JOIN barbeerprice nx99 ON nx99.beer = b.beer AND nx99.bar = '99 bottles' AND nx99.bar <> me.bar AND nx99.price < me.price WHERE me.bar = b.bar ) ; ```
You just need the beers cheaper than the cheapest beer in 99 bottles. Try something like: ``` SELECT * FROM sells s1 where s1.price < (select MIN(price) FROM sells s2 where s2.bar = '"99 bottles"') and s1.bar <> '"99 bottles"' ``` PS: if you want to show just the bars with ALL beers cheaper than 99 bottles, this query needs some editing.
How to write the following query correctly?
[ "", "sql", "postgresql", "" ]
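As a sanity check, the `NOT EXISTS` approach can be run against the sample data with Python's built-in sqlite3. This is a sketch; the inner join is correlated on `me.beer`, which is the intent of the query above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sells (bar TEXT, beer TEXT, price INTEGER)")
con.executemany("INSERT INTO sells VALUES (?, ?, ?)", [
    ("99 bottles", "Bluemoon", 10), ("99 bottles", "Guiness", 9),
    ("rosies",     "Bluemoon", 11), ("rosies",     "Guiness", 5),
    ("motiv",      "Bluemoon",  4), ("motiv",      "Guiness",  2),
])

# A bar qualifies when none of its beers is sold cheaper by '99 bottles'.
rows = con.execute("""
    SELECT DISTINCT s.bar
    FROM sells s
    WHERE s.bar <> '99 bottles'
      AND NOT EXISTS (
          SELECT 1
          FROM sells me
          JOIN sells nx99
            ON nx99.beer = me.beer
           AND nx99.bar  = '99 bottles'
           AND nx99.price < me.price
          WHERE me.bar = s.bar
      )
""").fetchall()
print(rows)  # [('motiv',)]
```

rosies is excluded because its Bluemoon (11) is undercut by 99 bottles' Bluemoon (10); motiv undercuts both beers and is the only bar returned.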
Hey so I am really stuck with this one query and I am new to SQL and Oracle. So basically I have a database with LastName, FirstName, Email, and State. I have another database with the skills that each of the people from the first database have, and each skill has an ID number. I need to produce a list of specific people with the specific skill ID ('3'), but when I do that, I get redundant information, as in, the same name comes up 3 or 4 times each, and I don't want that. This is the query I used: ``` SELECT FirstName, LastName, Email FROM CONSULTANTS, SKILLS WHERE SKILLS.ExpertiseID = '3' AND CONSULTANTS.STATE = 'NJ' OR CONSULTANTS.STATE = 'NY'; ``` Any help would seriously be appreciated
``` SELECT FirstName, LastName, Email FROM CONSULTANTS, SKILLS WHERE SKILLS.ExpertiseID = '3' AND ( CONSULTANTS.STATE = 'NJ' OR CONSULTANTS.STATE = 'NY' ) GROUP BY FirstName, LastName, Email; ``` Use GROUP BY to eliminate repetitive rows
> "I get redundant information, as in, the same name comes up 3 or 4 > times each and I don't want that" That's because your query doesn't have an explicit join between SKILLS and CONSULTANTS. So what you have is a list of all the consultants in New York and New Jersey *cross joined* with all the skills that have an ID of `3`. The fact that you have a product of several records per consultant suggests to me that `ExpertiseID` is not a unique key, but without seeing the data I can't be sure. The suggestion to use GROUP BY (or DISTINCT come to that) is at best a kludge and at worst plain wrong, as it still produces the wrong result set (all consultants, not just consultants with that skill). You should fix this the proper way, by using a join which links consultants to the skills they actually have. Without your data model I can't tell you exactly how to do this, but either there is a foreign key on SKILLS referencing the CONSULTANT primary key - `ConsultantID`? - or there is an intersection table joining the two (in an ideal world it would be called CONSULTANT_SKILLS). If you have neither a foreign key nor an intersection table then you have a broken data model, and no chance of getting the desired result set.
SQL Redundant Results
[ "", "sql", "database", "oracle", "" ]
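The cross-join diagnosis in the second answer can be reproduced with a tiny sqlite3 sketch. The `ConsultantID` key below is hypothetical, since the real schema isn't shown:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE consultants (ConsultantID INTEGER, FirstName TEXT, State TEXT);
    CREATE TABLE skills (ConsultantID INTEGER, ExpertiseID TEXT);
    INSERT INTO consultants VALUES (1, 'Ann', 'NJ'), (2, 'Bob', 'NY');
    INSERT INTO skills VALUES (1, '3'), (99, '3'), (2, '5');
""")

# Without a join condition the tables are cross joined: every NJ/NY
# consultant is paired with every ExpertiseID = '3' row.
cross = con.execute("""
    SELECT c.FirstName FROM consultants c, skills s
    WHERE s.ExpertiseID = '3' AND c.State IN ('NJ', 'NY')
""").fetchall()
print(sorted(cross))  # [('Ann',), ('Ann',), ('Bob',), ('Bob',)]

# With an explicit join, only consultants who actually have the skill remain.
joined = con.execute("""
    SELECT c.FirstName
    FROM consultants c
    JOIN skills s ON s.ConsultantID = c.ConsultantID
    WHERE s.ExpertiseID = '3' AND c.State IN ('NJ', 'NY')
""").fetchall()
print(joined)  # [('Ann',)]
```

The cross join both duplicates names and includes Bob, who doesn't have skill '3' at all, which is why GROUP BY only papers over the problem.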
I am trying to get some data from db. 1. `username` from login table 2. `email` from contact table checking with 2 values in two tables `tutor` and `institute` This is my code so far : ``` SELECT s. * , c.email, l.username FROM ( SELECT contact_id AS id, login_id, username, tutor_code AS code, tutor_name AS Name, 'tutor' AS profile FROM tutors WHERE tutor_code = $code AND tutor_name = '$name' UNION ALL SELECT contact_id AS id, login_id, username, institute_code AS code, institute_name AS Name, 'institute' AS profile FROM institutes WHERE institute_code = $code AND institute_name = '$name' ) INNER JOIN contact c ON s.id = c.contact_id INNER JOIN login l ON s.login_id = l.login_id ``` This query is not working and there is an error message. > # 1054 - Unknown column 'username' in 'field list' **UPDATE** ``` SELECT s. * , c.email, l.username FROM ( SELECT contact_id AS id, login_id, username, tutor_code AS code, tutor_name AS Name, 'tutor' AS profile FROM tutors WHERE tutor_code = $code AND tutor_name = '$name' UNION ALL SELECT contact_id AS id, login_id, username, institute_code AS code, institute_name AS Name, 'institute' AS profile FROM institutes WHERE institute_code = $code AND institute_name = '$name' )s INNER JOIN contact c ON s.id = c.contact_id INNER JOIN login l ON s.login_id = l.login_id ```
Since it appears you are retrieving your `username` from `login`, the column `username` most likely doesn't exist in `tutors` and/or `institutes`. It is also not needed for the join to `login`, because you are joining on `login_id`, so I think you can just remove the `username` columns from your subquery: ``` SELECT s. * , c.email, l.username FROM ( SELECT contact_id AS id, login_id, --username, tutor_code AS code, tutor_name AS Name, 'tutor' AS profile FROM tutors WHERE tutor_code = $code AND tutor_name = '$name' UNION ALL SELECT contact_id AS id, login_id, --username, institute_code AS code, institute_name AS Name, 'institute' AS profile FROM institutes WHERE institute_code = $code AND institute_name = '$name' ) s INNER JOIN contact c ON s.id = c.contact_id INNER JOIN login l ON s.login_id = l.login_id ``` *I've also added the alias `s` to your subquery, as I assume its omission was a typo since it would throw a syntax error in its absence*
There is no need to select `username` from the `tutors` and `institutes` tables, and you should add an alias (such as `as s`) where your subquery closes, like: ``` SELECT s. * , c.email, l.username FROM ( SELECT contact_id AS id, login_id, tutor_code AS code, tutor_name AS Name, 'tutor' AS profile FROM tutors WHERE tutor_code = $code AND tutor_name = '$name' UNION ALL SELECT contact_id AS id, login_id, institute_code AS code, institute_name AS Name, 'institute' AS profile FROM institutes WHERE institute_code = $code AND institute_name = '$name' ) as s INNER JOIN contact c ON s.id = c.contact_id INNER JOIN login l ON s.login_id = l.login_id ``` Note that `UNION ALL` keeps duplicate entries; use plain `UNION` if you want duplicates removed. Hope it works for you.
Select data from 3 tables
[ "", "mysql", "sql", "union", "subquery", "" ]
I am a Python programmer, but new to webservices. Task: I have a Typo3 frontend and a PostgreSQL database. I want to write a backend between these two parts in Python. Another developer gave me a wsdl-file and an xsd-file to work with, so we use SOAP. The program I code should be bound to a port (TCP/IP) and act as a service. The data/payload will be encoded in json-objects. ``` Webclient <---> Frontend <---> Backend(Me) <---> Database ``` My ideas: 1. I code all functions by hand from the wsdl-file, with the datatypes from the xsd. 2. I bind a service to a port that receives incoming json-data 3. I parse the incoming data, do some database-operations, do other stuff 4. I return the result to the frontend. Questions: 1. Do I have to code all the methods/functions described in the wsdl-file by hand? 2. Do I have to define the complex datatypes by hand? 3. How should I implement the communication between the frontend and the backend? Thanks in advance! Steffen
I have successfully used [suds client](https://fedorahosted.org/suds/wiki/Documentation) to communicate with [Microsoft Dynamics NAV](http://en.wikipedia.org/wiki/Microsoft_Dynamics_NAV) (former Navision). Typical session looks like this: ``` from suds.client import Client url = 'http://localhost:7080/webservices/WebServiceTestBean?wsdl' client = Client(url) ``` By issuing `print client` you get the list of types and operations supported by the serivce. ``` Suds - version: 0.3.3 build: (beta) R397-20081121 Service (WebServiceTestBeanService) tns="http://test.server.enterprise.rhq.org/" Prefixes (1): ns0 = "http://test.server.enterprise.rhq.org/" Ports (1): (Soap) Methods: addPerson(Person person, ) echo(xs:string arg0, ) getList(xs:string str, xs:int length, ) getPercentBodyFat(xs:string name, xs:int height, xs:int weight) getPersonByName(Name name, ) hello() testExceptions() testListArg(xs:string[] list, ) testVoid() updatePerson(AnotherPerson person, name name, ) Types (23): Person Name Phone AnotherPerson ``` WSDL operations are exposed as ordinary python functions and you can use ordinary dicts in place of WSDL types.
I would go with [Twisted](http://twistedmatrix.com/documents/13.0.0/web/howto/xmlrpc.html) since I am working with it anyway and enjoy the system. Another asynchronous option may be [Tornado](http://technobeans.wordpress.com/2012/09/06/tornado-soap-web-services/). Or a synchronous version with [Flask](http://flask.pocoo.org/mailinglist/archive/2011/2/23/soaplib-with-flask/#09a7a09a70920abee11bc922bfce7e56). I'm sure there are many other options. I would look for a higher level framework like those listed above so you don't have to spend too much time connecting the frontend to the backend.
Backend for SOAP in Python
[ "", "python", "web-services", "soap", "wsdl", "" ]
For the sake of learning, is there a shorter way to do: `if string.isdigit() == False :` I tried: `if !string.isdigit() :` and `if !(string.isdigit()) :` which both didn't work.
Python's logical "not" operator is `not`, not `!`.
In Python, you use the `not` keyword instead of `!`: ``` if not string.isdigit(): do_stuff() ``` If `string.isdigit()` returns `False`, this is equivalent to: ``` if not False: do_stuff() ``` i.e.: ``` if True: do_stuff() ``` Also, from the [PEP 8 Style Guide](http://www.python.org/dev/peps/pep-0008/): > Don't compare boolean values to True or False using ==. > > Yes: if greeting: > > No: if greeting == True > > Worse: if greeting is True:
Shorter way to check if a string is not isdigit()
[ "", "python", "" ]
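A few edge cases are worth keeping in mind when negating `isdigit()`; this short snippet uses only the standard library:

```python
s = "42abc"
if not s.isdigit():
    print("not a pure digit string")

# isdigit() is False for the empty string and for strings containing
# signs, decimal points, or spaces, so `not s.isdigit()` is True for all of these:
for s in ("", "-3", "3.5", " 7"):
    assert not s.isdigit()
assert "007".isdigit()
```

If those cases matter (e.g. negative numbers), a `try: int(s)` conversion is often the more robust test.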
I am playing a bit with the Python API for sqlite3. I have a little table for storing languages, with id, name and creation_date fields. I am trying to map the raw query results into a `namedtuple` as the docs recommend; that way I can manage rows in a more readable way. So here is my `namedtuple`: ``` LanguageRecord = namedtuple('LanguageRecord', 'id, name, creation_date') ``` The code that the docs suggest for the mapping is as follows: ``` for language in map(LanguageRecord._make, c.fetchall()): # do something with languages ``` This is fine when I want to return a collection of languages, but in this case I just want to retrieve one language: ``` c.execute('SELECT * FROM language WHERE name=?', (name,)) ``` So my first attempt was something like this: ``` language = map(LanguageRecord._make, c.fetchone()) ``` This code doesn't work because `fetchone()` returns a tuple instead of a list with one tuple, so the `map` function tries to create three `namedtuples`, one for each tuple field. My first approach to solve this was to explicitly create a list and append the tuple result to it, something like: ``` languages = [] languages.append(c.fetchone()) for language in map(LanguageRecord._make, languages): # do something with language ``` My second approach was to use `fetchall()`, although I just want one record. I can set the name field with a `unique` constraint in the database in order to guarantee just one result. ``` for language in map(LanguageRecord._make, c.fetchall()): # do something with languages ``` Another approach could be to use `fetchall()[0]` without the `unique` constraint to guarantee just one result. My question is: what is the best and most common way to deal with this problem? Should I always use `fetchall` to maintain a common interface and let the database manage the uniqueness logic, or should I create a list explicitly as in approach 1? Is there an easier way to accomplish this task?
There is a much easier way! Sqlite3 provides a way for the user to define ["row factories"](http://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.row_factory). These row factories take the cursor and the tuple row and can return whatever type of object they want. Once you set the row factory with ``` con.row_factory = my_row_factory ``` then rows returned by the cursor will be the result of `my_row_factory` applied to the tuple-row. For example, ``` import sqlite3 import collections LanguageRecord = collections.namedtuple('LanguageRecord', 'id name creation_date') def namedtuple_factory(cursor, row): return LanguageRecord(*row) con = sqlite3.connect(":memory:") con.row_factory = namedtuple_factory cur = con.cursor() cur.execute("select 1,2,3") print(cur.fetchone()) ``` yields ``` LanguageRecord(id=1, name=2, creation_date=3) ``` --- For another example of how to define a namedtuple factory, see [this post](http://peter-hoffmann.com/2010/python-sqlite-namedtuple-factory.html). --- By the way, if you set ``` conn.row_factory = sqlite3.Row ``` then rows are returned as `sqlite3.Row` objects, which can be indexed by the table's column names like dicts. Thus, instead of accessing parts of the namedtuple with things like `row.creation_date`, you could just use the builtin `sqlite3.Row` row factory and access the equivalent with `row['creation_date']`.
An improved `row_factory` is actually this, which can be reused for all sorts of queries: ``` import sqlite3 from collections import namedtuple def namedtuple_factory(cursor, row): """Returns sqlite rows as named tuples.""" fields = [col[0] for col in cursor.description] Row = namedtuple("Row", fields) return Row(*row) conn = sqlite3.connect(":memory:") conn.row_factory = namedtuple_factory cur = conn.cursor() ```
Mapping result rows to namedtuple in python sqlite
[ "", "python", "sqlite", "namedtuple", "" ]
I'm trying to append one variable from several tables together (aka row-bind, concatenate) to make one longer table with a single column in Hive. I think this is possible using `UNION ALL` based on this question ( [HiveQL UNION ALL](https://stackoverflow.com/questions/14096968/hiveql-union-all) ), but I'm not sure an efficient way to accomplish this? The pseudocode would look something like this: ``` CREATE TABLE tmp_combined AS SELECT b.var1 FROM tmp_table1 b UNION ALL SELECT c.var1 FROM tmp_table2 c UNION ALL SELECT d.var1 FROM tmp_table3 d UNION ALL SELECT e.var1 FROM tmp_table4 e UNION ALL SELECT f.var1 FROM tmp_table5 f UNION ALL SELECT g.var1 FROM tmp_table6 g UNION ALL SELECT h.var1 FROM tmp_table7 h; ``` Any help is appreciated!
Try the following: ``` CREATE TABLE tmp_combined AS SELECT * FROM ( SELECT b.var1 FROM tmp_table1 b UNION ALL SELECT c.var1 FROM tmp_table2 c UNION ALL SELECT d.var1 FROM tmp_table3 d UNION ALL SELECT e.var1 FROM tmp_table4 e UNION ALL SELECT f.var1 FROM tmp_table5 f UNION ALL SELECT g.var1 FROM tmp_table6 g UNION ALL SELECT h.var1 FROM tmp_table7 h ) CombinedTable; ``` Also use the statement `set hive.exec.parallel=true;` - this will execute the different selects simultaneously; otherwise they run step by step.
I would say that's both a straightforward and efficient way to do the row-bind; at least, that's what I would use in my code. Btw, it might cause a syntax error if you put your pseudo code in directly, so you may try: ``` create table join_table as select * from (select ... union all select ... union all select ...) tmp; ```
Combine many tables in Hive using UNION ALL?
[ "", "sql", "hive", "hiveql", "" ]
We have been using AppEngine's images API with no problem for the past year. Suddenly in the last week or so the images API seems to be corrupting the image. We use the images API to do a few different operations but the one that seems to be causing the problem is that we do an images.rotation(0) on TIFF data to convert it to a PNG. (We haven't tried other file type conversions but the point is that this was working for over a year so why should it suddenly stop working? Furthermore, we need it to work with TIFF to PNG as TIFF is the format of inbound data) This worked without problem for a long time and suddenly today I find that any TIFF that goes through the process is corrupted on output. It looks as though it's doubled and skewed. This is using the Python 2.7 API on AppEngine 1.7.7. We are using the Google images API directly not through PIL. Please help! This is killing our production environment. Example code: ``` from google.appengine.api import images import webapp2 def get_sample(): # sample.tiff is a 1bit black and white group3 tiff from a fax service with open("sample.tiff") as x: f = x.read() return f class MainHandler(webapp2.RequestHandler): def get(self): # Convert to PNG using AppEngine's images API by doing a rotation of 0 degrees. # This worked fine for over a year and now suddenly started corrupting the # output image with a grainy double image that looks like two of the # same image are layered on top of each other and vibrating. sample = get_sample() png = images.rotate(sample, 0) self.response.headers["Content-Type"] = "image/png" self.response.out.write(png) application = webapp2.WSGIApplication([('/', MainHandler)], debug=True) ```
This turned out to be due to a recent change to the images API that introduced a bug which affected operations involving TIFF files, which has since been reverted. More information is in the original bug report. <https://code.google.com/p/googleappengine/issues/detail?id=9284>
I have been using this to load the image, I don't use tiff images but that could be the problem i guess - maybe use PIL to convert the image? ``` class Image(BaseHandler): def get(self): employee = clockin.Employee.get(self.request.get("img_id")) if employee.avatar: self.response.headers['Content-Type'] = "image/png" image = images.resize(employee.avatar, 150, 150) self.response.out.write(image) else: self.response.out.write("No image") ```
Google AppEngine images API suddenly corrupting images
[ "", "python", "image", "api", "google-app-engine", "" ]
I would like to write a statement in python with logical implication. Something like: ``` if x => y: do_sth() ``` Of course, I know I could use: ``` if (x and y) or not x: do_sth() ``` But is there a logical operator for this in python?
`p => q` is the same as `not(p) or q`, so you could try that!
Just because it's funny: x => y could be written as `bool(x) <= bool(y)` in Python.
Is there an implication logical operator in python?
[ "", "python", "math", "logical-operators", "implication", "" ]
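Both suggestions can be wrapped in a tiny helper and verified against the full truth table (a sketch - Python has no built-in implication operator):

```python
def implies(p, q):
    """Material implication: p => q is equivalent to (not p) or q."""
    return (not p) or q

# Check against the bool(p) <= bool(q) formulation over the whole truth table.
for p in (False, True):
    for q in (False, True):
        assert implies(p, q) == (bool(p) <= bool(q))

print(implies(True, False))  # False, the only falsifying row
```

Either form works inside an `if`: `if implies(x, y): do_sth()`.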
I want something similar to `executor.map`, except when I iterate over the results, I want to iterate over them according to the order of completion, e.g. the work item that was completed first should appear first in the iteration, etc. This is so the iteration will block iff every single work item in the sequence is not finished yet. I know how to implement this myself using queues, but I'm wondering whether it's possible using the `futures` framework. (I mostly used thread-based executors, so I'd like an answer that applies to these, but a general answer would be welcome as well.) **UPDATE**: Thanks for the answers! Can you please explain how I can use `as_completed` with `executor.map`? `executor.map` is the most useful and succinct tool for me when using futures, and I'd be reluctant to start using `Future` objects manually.
[`executor.map()`](http://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.Executor.map), like the builtin [`map()`](http://docs.python.org/3.4/library/functions.html#map), only returns results in the order of the iterable, so unfortunately you can't use it to determine the order of completion. [`concurrent.futures.as_completed()`](http://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.as_completed) is what you're looking for - here's an example: ``` import time import concurrent.futures times = [3, 1, 2] def sleeper(secs): time.sleep(secs) print('I slept for {} seconds'.format(secs)) return secs # returns in the order given with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor: print(list(executor.map(sleeper, times))) # I slept for 1 seconds # I slept for 2 seconds # I slept for 3 seconds # [3, 1, 2] # returns in the order completed with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor: futs = [executor.submit(sleeper, secs) for secs in times] print([fut.result() for fut in concurrent.futures.as_completed(futs)]) # I slept for 1 seconds # I slept for 2 seconds # I slept for 3 seconds # [1, 2, 3] ``` Of course if you are required to use a map interface, you could create your own `map_as_completed()` function which encapsulates the above (maybe add it to a subclassed `Executor()`), but I think creating futures instances through `executor.submit()` is a simpler/cleaner way to go (also allows you to provide no-args, kwargs).
concurrent futures returns an iterator based on time of completion -- this sounds like it's exactly what you were looking for. <http://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.as_completed> Please let me know if you have any confusion or difficulty wrt implementation.
Python's `concurrent.futures`: Iterate on futures according to order of completion
[ "", "python", "multithreading", "future", "" ]
XlsxWriter object save as http response to create download in Django?
I think you're asking about how to create an excel file in memory using `xlsxwriter` and return it via `HttpResponse`. Here's an example: ``` try: import cStringIO as StringIO except ImportError: import StringIO from django.http import HttpResponse from xlsxwriter.workbook import Workbook def your_view(request): # your view logic here # create a workbook in memory output = StringIO.StringIO() book = Workbook(output) sheet = book.add_worksheet('test') sheet.write(0, 0, 'Hello, world!') book.close() # construct response output.seek(0) response = HttpResponse(output.read(), mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet") response['Content-Disposition'] = "attachment; filename=test.xlsx" return response ```
A little update on @alecxe's response for Python 3 (**io.BytesIO** instead of **StringIO.StringIO**) and Django >= 1.5 (**content_type** instead of **mimetype**), with the fully in-memory file assembly that has since been implemented by @jmcnamara (**{'in_memory': True}**)! Here is the full example: ``` import io from django.http.response import HttpResponse from xlsxwriter.workbook import Workbook def your_view(request): output = io.BytesIO() workbook = Workbook(output, {'in_memory': True}) worksheet = workbook.add_worksheet() worksheet.write(0, 0, 'Hello, world!') workbook.close() output.seek(0) response = HttpResponse(output.read(), content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet") response['Content-Disposition'] = "attachment; filename=test.xlsx" output.close() return response ```
XlsxWriter object save as http response to create download in Django
[ "", "python", "django", "excel", "httpresponse", "xlsxwriter", "" ]
``` create table WEL ( pipe_type varchar(30),date DATE ) insert into WEL values(H.T.NO.2,....,....) ``` I am getting the error `multi-part identifier h.t.no.2 could not be bound and 2 is a incorrect syntax`. Is there any problem with varchar, or is there another way to insert H.T.NO.2 into the table?
For character data types we must use single quotes. For inserting a single record we have two options. **First one:** insert into WEL values('H.T.NO.2','2013-07-07'); **Second one:** insert into WEL(pipe_type,date) values('H.T.NO.2','2013-07-07'); The second form is helpful when a *default* value is defined. For example, if I give the date column a *default* value like '2000-02-02', then we can write the query like this: insert into WEL(pipe_type) values('H.T.NO.2'); The system then takes the *default* value you gave (like 2000-02-02) for that column. **Importance of the delimiter (;):** I notice that you didn't give ";". Without it, the database checks for another query. For a single query it is not important, but for multiple queries it is very important.
You have to enclose `varchar` and `date` values in single quotes: ``` insert into WEL values('H.T.NO.2', '2013-12-31') ```
The multi-part identifier not working during insertion
[ "", "mysql", "sql", "" ]
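The quoting rule generalizes, and query parameters avoid it altogether. A minimal sqlite3 sketch (SQLite here stands in for MySQL; the column name `date` is quoted since it is a poor identifier choice):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE WEL (pipe_type VARCHAR(30), "date" DATE)')

# Literal character/date values must be wrapped in single quotes.
con.execute("INSERT INTO WEL VALUES ('H.T.NO.2', '2013-12-31')")

# Parameterized insert: the driver handles the quoting (and escaping) for you.
con.execute("INSERT INTO WEL VALUES (?, ?)", ("H.T.NO.2", "2014-01-01"))

count = con.execute("SELECT COUNT(*) FROM WEL").fetchone()[0]
print(count)  # 2
```

The unquoted `values(H.T.NO.2, ...)` from the question fails because the database parses `H.T.NO.2` as a multi-part identifier rather than a string.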