```
if losttwice <= 2:
    bet = _________  # <- Here
elif losttwice <= 5:
    bet = bet * 2
else:
    bet = startingbet
```
Can anyone help me add one more thing to this? When `losttwice <= 2` (i.e. when I have lost 1-2 times), I would like a random 50% chance of either `bet = startingbet` or `bet = bet * 2`.
```
if losttwice <= 2:
    bet = random.choice((startingbet, bet*2))
```
`if random.random() > 0.5:` (and `import random` at the top) might be useful. You should be able to figure it out based on that.
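Putting the two answers together, a complete runnable sketch might look like the following (the values of `startingbet`, `bet`, and `losttwice` here are hypothetical stand-ins, not from the question):

```python
import random

startingbet = 10   # hypothetical starting stake
bet = 40           # hypothetical current bet
losttwice = 1      # hypothetical loss counter

if losttwice <= 2:
    # 50/50 chance: either reset to the starting bet or double the current bet
    bet = random.choice((startingbet, bet * 2))
elif losttwice <= 5:
    bet = bet * 2
else:
    bet = startingbet
```

`random.choice` picks each element of the two-tuple with equal probability, which is exactly the 50% split asked for.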
Python: randomly assign one of two values
[ "", "python", "" ]
```
def temperature(weather):
    '''(list of ints) -> list of strs

    Modify and return the list, replacing each temp in the list with the
    weather condition: hot being over 25 degrees, and cool being under.
    '''
```
So, if I run temperature([24, 29, 11]), I want it to return ['cool', 'hot', 'cool']. This is what I've got; I think I'm creating a new list instead of modifying the original, though. How would I modify the list instead of making a new one, using a for loop?

```
temp = []
for degrees in weather:
    if degrees > 25:
        temp = temp + ['hot']
    else:
        temp = temp + ['cool']
return temp
```
While modifying the input list is usually a bad idea, if you really want to do that, use `enumerate` to get the indices and element access notation to change the list contents:

```
for index, degrees in enumerate(weather):
    if degrees > 25:
        weather[index] = 'hot'
    else:
        weather[index] = 'cool'
```

If you make a new list, don't say

```
temp = temp + [whatever]
```

That creates a copy of `temp` to append the new item, and can degrade performance to quadratic time. Instead, use

```
temp += [whatever]
```

or

```
temp.append(whatever)
```

both of which modify `temp` in place.
Never mutate arguments passed to you.

```
temp = []
...
temp.append('hot')
...
temp.append('cool')
```
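Folding the in-place approach from the accepted answer back into the original function signature gives a self-contained sketch:

```python
def temperature(weather):
    '''(list of ints) -> list of strs

    Modify weather in place, replacing each temp with 'hot' (over 25
    degrees) or 'cool' (25 or under), and return the same list object.
    '''
    for index, degrees in enumerate(weather):
        weather[index] = 'hot' if degrees > 25 else 'cool'
    return weather
```

Because the assignment goes through `weather[index]`, the caller's list is mutated rather than replaced, which is what the docstring promises.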
Python: modifying list of ints to list of strs using for loop
[ "", "python", "list", "for-loop", "" ]
I have two lists:

```
A = [['1','1'],['2','1'],['3','2']]
B = [['1','1'],['2','2']]
```

I want to perform an A-B operation on these, comparing only the first element, so A-B should give:

```
Output = [['3', '2']]
```

So far, I could only do whole-row comparison:

```
[x for x in A if not x in B]
```

which gives the output `[['2', '1'], ['3', '2']]`.
This?

```
>>> [i for i in A if not any(i[0] == k for k, _ in B)]
[['3', '2']]
```

`any()` is used to check whether the first element of a list in `A` is the same as the first element of any list in `B`. If it is, `any()` returns True, but as we want the opposite of this, we use `not any(...)`.
You can also use [`collections.OrderedDict`](http://docs.python.org/2/library/collections.html#collections.OrderedDict) and set difference here:

```
>>> from collections import OrderedDict
>>> dic1 = OrderedDict((k[0], k) for k in A)
>>> [dic1[x] for x in set(dic1) - set(y[0] for y in B)]
[['3', '2']]
```

Overall complexity is going to be `O(max(len(A), len(B)))`. If order doesn't matter then a normal dict is sufficient.
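For larger lists, a plain `set` of B's first elements keeps the membership test O(1) per row, the same idea as the dict-based answer without the ordering machinery. A minimal sketch:

```python
A = [['1', '1'], ['2', '1'], ['3', '2']]
B = [['1', '1'], ['2', '2']]

# Collect the first elements of B once, then filter A against that set.
seen = {row[0] for row in B}
output = [row for row in A if row[0] not in seen]
```

This gives the same `[['3', '2']]` result as the `not any(...)` comprehension, but avoids rescanning `B` for every row of `A`.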
Difference between list of lists
[ "", "python", "list", "compare", "" ]
I am using a DBN (deep belief network) from [nolearn](https://pypi.python.org/pypi/nolearn), based on scikit-learn. I have already built a network which classifies my data very well; now I am interested in exporting the model for deployment, but I don't know how (currently I am training the DBN every time I want to predict something). In MATLAB I would just export the weight matrix and import it on another machine. Does someone know how to export the model/the weight matrix so it can be imported without needing to train the whole model again?
First, install [joblib](https://github.com/joblib/joblib). You can use:

```
>>> import joblib
>>> joblib.dump(clf, 'my_model.pkl', compress=9)
```

And then later, on the prediction server:

```
>>> import joblib
>>> model_clone = joblib.load('my_model.pkl')
```

This is basically a Python pickle with optimized handling for large numpy arrays. It has the same limitations as the regular pickle w.r.t. code change: if the class structure of the pickled object changes, you might no longer be able to unpickle the object with new versions of nolearn or scikit-learn.

If you want a long-term, robust way of storing your model parameters you might need to write your own IO layer (e.g. using binary format serialization tools such as Protocol Buffers or Avro, or an inefficient yet portable text/json/xml representation such as [PMML](http://www.dmg.org/v4-1/GeneralStructure.html)).
Pickling/unpickling has the disadvantage that it only works with matching Python versions (major, and possibly also minor) and matching sklearn and joblib library versions. There are alternative descriptive output formats for machine learning models, developed by the [Data Mining Group](http://dmg.org): the Predictive Model Markup Language (PMML) and the Portable Format for Analytics (PFA). Of the two, PMML is [much better supported](http://dmg.org/pmml/products.html). So you have the option of saving a model from scikit-learn into PMML (for example using [sklearn2pmml](https://github.com/jpmml/sklearn2pmml)), and then deploying and running it in Java, Spark, or Hive using [jpmml](https://github.com/jpmml) (of course you have more choices).
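The joblib calls above boil down to a pickle dump/load round-trip. A minimal stand-in using only the stdlib, so it runs without scikit-learn or nolearn installed — the dict here is just a placeholder for a fitted estimator, and the in-memory buffer stands in for the `.pkl` file:

```python
import io
import pickle

model = {'weights': [0.1, 0.2, 0.3]}  # placeholder for a trained model

# Serialize to a buffer (joblib.dump writes to a file the same way) ...
buf = io.BytesIO()
pickle.dump(model, buf)

# ... and restore it later, e.g. on the prediction server.
buf.seek(0)
model_clone = pickle.load(buf)
```

The restored object is an independent copy with equal contents, which is exactly the property you rely on when loading a model on another machine.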
Python scikit-learn: exporting trained classifier
[ "", "python", "scikit-learn", "" ]
I am trying to cast a string concatenated with a column value using the following SQL command:

```
CAST('Strign:'+[KlirAn] as NVARCHAR(max))
```

After executing this command I get the following error:

```
Msg 402, Level 16, State 1, Line 1
The data types varchar and ntext are incompatible in the add operator.
```

Any help please?
Try the following:

```
'String:' + CAST([KlirAn] as NVARCHAR(max))
```
Try this:

```
SELECT 'String:' + CONVERT(NVARCHAR(max), [KlirAn])
FROM table
```
Cast string+ntext to nvarchar error
[ "", "sql", "sql-server", "" ]
I would like the user to be able to press Control-C to close a script, but when Control-C is pressed it shows the error and the reason for the close (which makes sense). Is there a way to show my own custom output rather than what is printed by default? I'm not sure how to handle that specific error.
You could use `try..except` to catch `KeyboardInterrupt`:

```
import time

def main():
    time.sleep(10)

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('bye')
```
Use the [signal module](https://docs.python.org/3.5/library/signal.html) to define a handler for the SIGINT signal:

```
import signal
import sys

def sigint_handler(signal_number, stack_frame):
    print('caught SIGINT, exiting')
    sys.exit(-1)

signal.signal(signal.SIGINT, sigint_handler)
raw_input('waiting...')
```
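To see the `try/except` path without actually pressing Ctrl-C, you can raise `KeyboardInterrupt` yourself — a small sketch of the control flow:

```python
def main():
    # Stand-in for the real work; pretend the user pressed Ctrl-C here.
    raise KeyboardInterrupt

try:
    main()
    message = 'finished'
except KeyboardInterrupt:
    message = 'bye'  # custom output instead of the default traceback
```

Pressing Ctrl-C at the terminal raises exactly this exception in the main thread, so the handler above is what replaces the traceback the question complains about.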
custom output when control-c is used to exit python script
[ "", "python", "error-handling", "" ]
I want the end of a Python script to open Windows Photo Gallery. From Python I try:

```
os.system("C:\\Program Files (x86)\\Windows Live\\Photo Gallery\\WLXPhotoGallery.exe");
```

I get:

```
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
```

Any ideas how to get this one sorted?
Don't use `os.system()`. Use the [`subprocess` module](http://docs.python.org/2/library/subprocess.html#replacing-os-system) instead:

```
import subprocess
subprocess.call("C:\\Program Files (x86)\\Windows Live\\Photo Gallery\\WLXPhotoGallery.exe")
```
As Martijn Pieters points out, you really should use `subprocess`. However, if you are curious as to why your call didn't work: calling

```
os.system("C:\\Program Files (x86)\\Windows Live\\Photo Gallery\\WLXPhotoGallery.exe");
```

is equivalent to typing this on the command line:

```
C:\Program Files (x86)\Windows Live\Photo Gallery\WLXPhotoGallery.exe
```

See those spaces in the file path? The Windows shell sees each space-separated string as a separate command/argument. Therefore, it tries to execute the program `C:\Program` with the arguments `Files`, `(x86)\Windows`, `Live\Photo`, `Gallery\WLXPhotoGallery.exe`. Of course, since there's no program on your computer at `C:\Program`, this fails.

If, for whatever reason, you really REALLY want to go with `os.system`, you should think about how you would execute the command on the command line itself. There, you'd type `"C:\Program Files (x86)\Windows Live\Photo Gallery\WLXPhotoGallery.exe"` (the quotes escape the spaces). Translated into your `os.system` call, that is:

```
os.system('"C:\\Program Files (x86)\\Windows Live\\Photo Gallery\\WLXPhotoGallery.exe"')
```

Really though, you should use `subprocess`. Hope this helps.
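The argument-list form of `subprocess` sidesteps shell word-splitting entirely: each list element reaches the child program as exactly one argument, spaces included. A cross-platform sketch using the Python interpreter itself in place of WLXPhotoGallery.exe (the path below is a made-up example, not from the question):

```python
import subprocess
import sys

path_with_spaces = "C:\\Program Files (x86)\\example.exe"  # hypothetical path

# The child exits 0 only if it received the whole path as a single argument.
rc = subprocess.call([
    sys.executable, "-c",
    "import sys; sys.exit(0 if ' ' in sys.argv[1] else 1)",
    path_with_spaces,
])
```

Because no shell is involved, there is nothing to quote: the path arrives intact even though it contains spaces.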
Open windows photo gallery from python
[ "", "python", "windows", "" ]
The answer I received to my earlier question did not work when applied to my code, for some reason that is driving me nuts. Here's a part of my data in the table.

![enter image description here](https://i.stack.imgur.com/E8ML8.png)

**Original Code:**

```
SELECT AGE_RANGE, COUNT(*)
FROM (
    SELECT CASE
        WHEN YearsOld BETWEEN 0 AND 5 THEN '0-5'
        WHEN YearsOld BETWEEN 6 AND 10 THEN '6-10'
        WHEN YearsOld BETWEEN 11 AND 15 THEN '11-15'
        WHEN YearsOld BETWEEN 16 AND 20 THEN '16-20'
        WHEN YearsOld BETWEEN 21 AND 30 THEN '21-30'
        WHEN YearsOld BETWEEN 31 AND 40 THEN '31-40'
        WHEN YearsOld > 40 THEN '40+'
    END AS 'AGE_RANGE'
    FROM (
        SELECT YEAR(CURDATE())-YEAR(DATE(birthdate)) 'YearsOld'
        FROM MyTable
    ) B
) A
GROUP BY AGE_RANGE
```

and here is the result.

![enter image description here](https://i.stack.imgur.com/GguHS.png)

What I'm trying to do is add another column counting how many people are in each area — the **location** column you can see in the picture at the top, which includes Perth, Western Australia, Sunbury, Victoria and so on.

**First attempt to fix my problem**

As you can see below, I've added location, and COUNT(location) loc, to get the name of the location and count how many times that location is duplicated in the table.

```
SELECT AGE_RANGE, COUNT(*), location, COUNT(location) loc
FROM (
    SELECT CASE
        WHEN YearsOld BETWEEN 0 AND 5 THEN '0-5'
        WHEN YearsOld BETWEEN 6 AND 10 THEN '6-10'
        WHEN YearsOld BETWEEN 11 AND 15 THEN '11-15'
        WHEN YearsOld BETWEEN 16 AND 20 THEN '16-20'
        WHEN YearsOld BETWEEN 21 AND 30 THEN '21-30'
        WHEN YearsOld BETWEEN 31 AND 40 THEN '31-40'
        WHEN YearsOld > 40 THEN '40+'
    END AS 'AGE_RANGE', 'location'
    FROM (
        SELECT YEAR(CURDATE())-YEAR(DATE(birthday)) 'YearsOld'
        FROM event_participants
    ) B
) A
FROM event_participants
WHERE location <> ''
GROUP BY location
HAVING loc >= 1
ORDER BY loc DESC
LIMIT 5
```

Which results in...

![enter image description here](https://i.stack.imgur.com/SWnRr.png)

Obviously, that didn't work out too well.

**My second attempt to fix this problem**

```
SELECT AGE_RANGE, COUNT(*), location
FROM (
    SELECT CASE
        WHEN YearsOld BETWEEN 0 AND 5 THEN '0-5'
        WHEN YearsOld BETWEEN 6 AND 10 THEN '6-10'
        WHEN YearsOld BETWEEN 11 AND 15 THEN '11-15'
        WHEN YearsOld BETWEEN 16 AND 20 THEN '16-20'
        WHEN YearsOld BETWEEN 21 AND 30 THEN '21-30'
        WHEN YearsOld BETWEEN 31 AND 40 THEN '31-40'
        WHEN YearsOld > 40 THEN '40+'
    END AS 'AGE_RANGE', 'location'
    FROM (
        SELECT YEAR(CURDATE())-YEAR(DATE(birthday)) 'YearsOld'
        FROM event_participants
    ) B
) A
GROUP BY AGE_RANGE
```

And the result is...

![enter image description here](https://i.stack.imgur.com/1BWuh.png)

Instead of showing the names of the locations, it only shows the literal string "location". Long story short: what's wrong with the code? Any solution? My desired result would be:

```
AGE_RANGE   COUNT(*)   location           total number of same location
16-20       4          Name of location   #
21-30       45         Name of location   #
31-40       79         Name of location   #
40+         102        Name of location   #
```
This should work. Think of each subquery as its own table, so you just need to make sure to select all the columns you need. (`CASE WHEN ... THEN ... END AS` is actually just a single column value.) You will probably also want to group on location so you can get the same age ranges for different locations (e.g. 11-15 in Canada and 11-15 in USA).

```
SELECT AGE_RANGE, COUNT(*), A.location
FROM (
    SELECT CASE
        WHEN YearsOld BETWEEN 0 AND 5 THEN '0-5'
        WHEN YearsOld BETWEEN 6 AND 10 THEN '6-10'
        WHEN YearsOld BETWEEN 11 AND 15 THEN '11-15'
        WHEN YearsOld BETWEEN 16 AND 20 THEN '16-20'
        WHEN YearsOld BETWEEN 21 AND 30 THEN '21-30'
        WHEN YearsOld BETWEEN 31 AND 40 THEN '31-40'
        WHEN YearsOld > 40 THEN '40+'
    END AS 'AGE_RANGE', B.location
    FROM (
        SELECT YEAR(CURDATE())-YEAR(DATE(birthday)) 'YearsOld',
               location  /* << just missing this select */
        FROM event_participants
    ) B
) A
GROUP BY A.location, AGE_RANGE
```
Try this

```
SELECT AGE_RANGE, COUNT(*), location
FROM (
    SELECT CASE
        WHEN YearsOld BETWEEN 0 AND 5 THEN '0-5'
        WHEN YearsOld BETWEEN 6 AND 10 THEN '6-10'
        WHEN YearsOld BETWEEN 11 AND 15 THEN '11-15'
        WHEN YearsOld BETWEEN 16 AND 20 THEN '16-20'
        WHEN YearsOld BETWEEN 21 AND 30 THEN '21-30'
        WHEN YearsOld BETWEEN 31 AND 40 THEN '31-40'
        WHEN YearsOld > 40 THEN '40+'
    END AS 'AGE_RANGE', 'location'
    FROM (
        SELECT YEAR(CURDATE())-YEAR(DATE(birthday)) 'YearsOld'
        FROM event_participants
    ) B
) A
GROUP BY location, AGE_RANGE
```
Mysql select with CASE not retrieving other column value?
[ "", "mysql", "sql", "" ]
I would like to count all possible combinations of each number from my table. I would like my query to return something like this:

```
Number (Value)   Count
1                39
2                450
3                41
```

My table looks like this:

![enter image description here](https://i.stack.imgur.com/2C8N0.png)

When I run the following query:

```
SELECT *
FROM dbo.LottoDraws ld
JOIN dbo.CustomerSelections cs ON ld.draw_date = cs.draw_date
CROSS APPLY(
    SELECT COUNT(1) correct_count
    FROM (VALUES(cs.val1),(cs.val2),(cs.val3),(cs.val4),(cs.val5),(cs.val6))csv(val)
    JOIN (VALUES(ld.draw1),(ld.draw2),(ld.draw3),(ld.draw4),(ld.draw5),(ld.draw6))ldd(draw)
        ON csv.val = ldd.draw
    WHERE ld.draw_date = '2013-07-05'
)CC
ORDER BY correct_count desc
```

I get something like this:

![enter image description here](https://i.stack.imgur.com/S75xz.png)
I offer this solution because `unpivot` often performs better than a series of `union all`s. The reason is that each `union all` can result in a full table scan, whereas the `unpivot` does its work with a single scan. So, you can write what you want as:

```
select val, count(*)
from (select pk, val
      from test
      unpivot (val for col in (val1, val2, val3, val4, val5, val6)) as unpvt
     ) t
group by val
order by val;
```
Assuming I understood your needs correctly, I solved the problem in the following way. My assumption is that you need to count the number of occurrences of a single value across all the val columns (val1, val2, val3, etc.) indifferently.

This is my test data:

```
CREATE TABLE Test(
    pk int PRIMARY KEY IDENTITY(1,1) NOT NULL,
    val1 int, val2 int, val3 int, val4 int, val5 int, val6 int
)

INSERT INTO Test
SELECT 1,2,3,4,5,6 UNION ALL
SELECT 1,2,3,4,5,6 UNION ALL
SELECT 1,2,3,4,5,6 UNION ALL
SELECT 1,2,3,4,5,6 UNION ALL
SELECT 3,3,3,3,3,3 UNION ALL
SELECT 1,2,3,4,5,7
```

And this is the query returning the count of single value occurrences:

```
SELECT v, SUM(c)
FROM (
    SELECT val1 v, COUNT(*) c FROM Test GROUP BY val1 UNION ALL
    SELECT val2 v, COUNT(*) FROM Test GROUP BY val2 UNION ALL
    SELECT val3 v, COUNT(*) FROM Test GROUP BY val3 UNION ALL
    SELECT val4 v, COUNT(*) FROM Test GROUP BY val4 UNION ALL
    SELECT val5 v, COUNT(*) FROM Test GROUP BY val5 UNION ALL
    SELECT val6 v, COUNT(*) FROM Test GROUP BY val6
) T
GROUP BY v
```

Results in my test case are:

```
val   occurrences
1     5
2     5
3     11
4     5
5     5
6     4
7     1
```
Counting the number of times a number comes up
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I'm writing a function that returns a list of square numbers, but it should return an empty list if it is passed `'apple'`, `range(10)`, or a list. I have the first part done but can't figure out how to return the empty list when the parameter n is not an integer. I keep getting an error:

```
unorderable types: str() > int()
```

I understand that a string can't be compared to an integer, but I need it to return the empty list.

```
def square(n):
    return n**2

def Squares(n):
    if n > 0:
        mapResult = map(square, range(1, n+1))
        squareList = list(mapResult)
    else:
        squareList = []
    return squareList
```
You can use the `type` function in Python to check what data type a variable is. Here you would use `type(n) is int` to check whether `n` is the type you want. Also note that in Python 2 `map` returns a list directly, while in Python 3 it returns an iterator, so keeping the `list(...)` wrapper is the safe choice. Therefore...

```
def Squares(n):
    squareList = []
    if type(n) is int and n > 0:
        squareList = list(map(square, range(1, n+1)))
    return squareList
```
You can chain all the conditions which result in returning an empty list into one conditional using `or`s: if it is a list, or equals `'apple'`, or equals `range(10)`, or `n < 0`, then return an empty list; else return the mapped result.

```
def square(n):
    return n**2

def squares(n):
    if isinstance(n, list) or n == 'apple' or n == range(10) or n < 0:
        return []
    else:
        return list(map(square, range(1, n+1)))
```

`isinstance` checks if `n` is an instance of `list`. Some test cases:

```
print squares([1,2])
print squares('apple')
print squares(range(10))
print squares(0)
print squares(5)
```

Gets

```
[]
[]
[]
[]
[1, 4, 9, 16, 25]
```
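A Python 3 version combining both ideas — a type check plus the positivity check, with an explicit `list(...)` around `map` since it returns an iterator in Python 3. The `bool` exclusion is an extra guard not in the original answers (added because `True` is an `int` subclass):

```python
def square(n):
    return n ** 2

def squares(n):
    # Only positive ints produce squares; strings, lists, ranges, etc.
    # all fall through to the empty list.
    if isinstance(n, int) and not isinstance(n, bool) and n > 0:
        return list(map(square, range(1, n + 1)))
    return []
```

Because the type check happens before `n > 0`, the `str() > int()` comparison that raised the original error is never attempted.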
Python Squares Function
[ "", "python", "python-3.x", "" ]
Hi, I have an SSIS package and the following expression, which gives me today's date and time for a file name:

```
@[User::FilePath] + "Bloomberg_" + REPLACE((DT_STR, 20, 1252)(DT_DBTIMESTAMP)@[System::StartTime], ":", "") + ".xls"
```

which produces:

```
\\public\\Bloomberg_Upload\\Bloomberg_2013-07-05 005738.xls
```

I need to get the previous date, but weekdays only:

```
\\public\\Bloomberg_Upload\\Bloomberg_2013-07-04 005738.xls
```

How can I do this? For Monday: if I execute my package on a Monday, the date should be Friday's. Please guide me. I'm trying this:

```
(DT_I4)DATEPART("weekday",@[System::StartTime]) == 2
    ? Replace((DT_STR, 20, 1252)(DATEADD("D", -3, @[System::StartTime])), ":", "-") + ".xls"
    : Replace((DT_STR, 20, 1252)(DATEADD("D", -1, @[System::StartTime])), ":", "-") + ".xls"
```
If I understand you correctly, you are just trying to work out how to obtain the previous day's date — and if the previous day happens to fall on the weekend, get the last workday instead. You almost had it right with your code; you just needed to change your weekday constant. This code checks whether it is Monday: if so, it subtracts 3 days, otherwise 1.

```
@[User::FilePath] + "Bloomberg_" +
((DT_I4)DATEPART("weekday",@[System::StartTime]) == 1
    ? Replace((DT_STR, 20, 1252)(DATEADD("D", -3, @[System::StartTime])), ":", "") + ".xls"
    : Replace((DT_STR, 20, 1252)(DATEADD("D", -1, @[System::StartTime])), ":", "") + ".xls")
```
Maybe it is better to use `GETDATE()`, and then you can subtract a day like this:

```
DATEADD("day", -1, GETDATE())
```

Also have a look here: [DATEADD (SSIS Expression)](http://msdn.microsoft.com/en-us/library/ms141719.aspx)
how do i select previous date using ssis expression?
[ "", "sql", "ssis", "" ]
I have the following data in a SQL Table: ![enter image description here](https://i.stack.imgur.com/eps9X.png) I need to query the data so I can get a list of missing "**familyid**" per employee. For example, I should get for Employee 1021 that is missing in the sequence the IDs: 2 and 5 and for Employee 1027 should get the missing numbers 1 and 6. Any clue on how to query that? Appreciate any help.
**Find the first missing value**

I would use the `ROW_NUMBER` [window function](http://msdn.microsoft.com/en-us/library/ms189461.aspx#sectionToggle3 "OVER Clause (Transact-SQL)") to assign the "correct" sequence ID number, assuming that the sequence ID restarts every time the employee ID changes:

```
SELECT e.id, e.name, e.employee_number, e.relation, e.familyid,
       ROW_NUMBER() OVER(PARTITION BY e.employeeid ORDER BY familyid) - 1 AS sequenceid
FROM employee_members e
```

Then, I would filter the result set to only include the rows with mismatching sequence IDs:

```
SELECT *
FROM (
    SELECT e.id, e.name, e.employee_number, e.relation, e.familyid,
           ROW_NUMBER() OVER(PARTITION BY e.employeeid ORDER BY familyid) - 1 AS sequenceid
    FROM employee_members e
) a
WHERE a.familyid <> a.sequenceid
```

Then again, you can easily group by `employee_number` and find the first missing sequence ID for each employee:

```
SELECT a.employee_number, MIN(a.sequenceid) AS first_missing
FROM (
    SELECT e.id, e.name, e.employee_number, e.relation, e.familyid,
           ROW_NUMBER() OVER(PARTITION BY e.employeeid ORDER BY familyid) - 1 AS sequenceid
    FROM employee_members e
) a
WHERE a.familyid <> a.sequenceid
GROUP BY a.employee_number
```

**Finding all the missing values**

Extending the previous query, we can detect a missing value every time the difference between `familyid` and `sequenceid` changes:

```
-- Warning: this is totally untested :-/
SELECT b.employee_number, MIN(b.sequenceid) AS missing
FROM (
    SELECT a.*, a.familyid - a.sequenceid AS displacement
    FROM (
        SELECT e.*,
               ROW_NUMBER() OVER(PARTITION BY e.employeeid ORDER BY familyid) - 1 AS sequenceid
        FROM employee_members e
    ) a
) b
WHERE b.displacement <> 0
GROUP BY b.employee_number, b.displacement
```
Here is one approach. Calculate the maximum family id for each employee. Then join this to a list of numbers up to the maximum family id. The result has one row for each employee and expected family id. Do a `left outer join` from this back to the original data, on the `familyid` and the number. Where nothing matches, those are the missing values:

```
with nums as (
    select 1 as n
    union all
    select n+1 from nums where n < 20
)
select en.employee, n.n as MissingFamilyId
from (select employee, min(familyid) as minfi, max(familyid) as maxfi
      from t
      group by employee
     ) en
join nums n on n.n <= maxfi
left outer join t on t.employee = en.employee and t.familyid = n.n
where t.employee_number is null;
```

Note that this will not work when the missing `familyid` is the last number in the sequence. But it might be the best that you can do with your data structure. Also, the above query assumes that there are at most 20 family members.
SQL query find missing consecutive numbers
[ "", "sql", "sql-server", "" ]
Hi all, I'm a beginner at programming. I was recently given the task of creating this program and I am finding it difficult. I previously wrote a program that calculates the number of words in a sentence typed in by the user; is it possible to modify this program to achieve what I want?

```
import string

def main():
    print "This program calculates the number of words in a sentence"
    print
    p = raw_input("Enter a sentence: ")
    words = string.split(p)
    wordCount = len(words)
    print "The total word count is:", wordCount

main()
```
Use [`collections.Counter`](http://docs.python.org/2/library/collections.html#collections.Counter) for counting words and [open()](http://docs.python.org/2/library/functions.html#open) for opening the file:

```
from collections import Counter

def main():
    # use open() for opening the file.
    # Always use the `with` statement, as it'll automatically close the file for you.
    with open(r'C:\Data\test.txt') as f:
        # create a list of all words fetched from the file using a list comprehension
        words = [word for line in f for word in line.split()]
        print "The total word count is:", len(words)
        # now use collections.Counter
        c = Counter(words)
        for word, count in c.most_common():
            print word, count

main()
```

`collections.Counter` example:

```
>>> from collections import Counter
>>> c = Counter('aaaaabbbdddeeegggg')
```

[Counter.most\_common](http://docs.python.org/2/library/collections.html#collections.Counter.most_common) returns words in sorted order based on their count:

```
>>> for word, count in c.most_common():
...     print word, count
...
a 5
g 4
b 3
e 3
d 3
```
To open files, you can use the [open](http://docs.python.org/2/library/functions.html#open) function:

```
from collections import Counter

with open('input.txt', 'r') as f:
    p = f.read()  # p contains the contents of the entire file

# logic to compute word counts follows here...
words = p.split()
wordCount = len(words)
print "The total word count is:", wordCount

# you want the top N words, so grab it as input
N = int(raw_input("How many words do you want?"))
c = Counter(words)
for w, count in c.most_common(N):
    print w, count
```
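Both answers follow the same shape: read, split, count with `Counter`, then take `most_common(N)`. A self-contained sketch using `io.StringIO` in place of a real file (the sentence is made up) so it runs as-is:

```python
import io
from collections import Counter

# StringIO stands in for open('input.txt') so the example needs no file on disk.
fake_file = io.StringIO("the cat sat on the mat and the cat slept")

words = fake_file.read().split()
counts = Counter(words)
top_two = counts.most_common(2)  # top N words by frequency
```

With the sample text above, "the" appears three times and "cat" twice, so `top_two` is `[('the', 3), ('cat', 2)]`.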
A program that opens a text file, counts the number of words and reports the top N words ordered by the number of times they appear in the file?
[ "", "python", "file", "word-count", "" ]
In Django 1.4 and above : > There is a new file called `app.py` in every django application. It defines the scope of the app and some initials required when loaded. Why don't they use `__init__.py` for the purpose? Any advantage over `__init__.py` approach? Can you link to some official documentation for the same?
As all the links you have provided clearly show, this is nothing at all to do with Django itself, but a convention applied by the third-party app django-oscar.
Even though the details are wrong (there's no app.py in new Django projects), the question is still valid. `__init__.py` is imported implicitly when importing a sub-module. So if something in `__init__.py` executes automatically with side effects, you might run into unintended consequences. Doing everything in `app.py` incurs a longer import, but separates package init from app init logic.
What is the purpose of app.py in django apps?
[ "", "python", "django", "web", "" ]
There is a keyword-conflict issue in the query module of my application; please see if you can suggest a smart solution.

First, in the query module, each query condition contains three parts in the UI:

1. field name — its value is fixed, e.g. origin, finalDest...
2. operator — a select list which includes "like", "not like", "in", "not in", "=", "!="
3. value — this part is input by the user.

The back-end then assembles the SQL statement according to the UI's query criteria. E.g. if the user types/selects the following in the UI:

```
Field Name    Operator    Value
origin        like        CHI
finalDest     in          SEL
```

the back-end will generate the following SQL:

```
select * from Booking where origin like '%CHI%' and finalDest in ('SEL')
```

But there is a bug: if the user types a special symbol in "value", e.g. "'" or "_", the generated SQL will also contain that ' or _, e.g.:

```
select * from Booking where origin like '%C_HI%' and finalDest in ('S'EL')
```

As you can see, with the special symbol inside the "where" block, the SQL can't be executed.

My solution for this problem is to add an escape character in front of the special symbol before executing it, but ' and _ are the only characters I know of that conflict with the SQL syntax. Do you know whether there are other similar symbols I need to handle, or do you have a better idea that avoids the injection?

Sorry, I forgot to tell you what language I am using: it is Java, the DB is MySQL, and I also use Hibernate. A lot of people have asked why I didn't use PreparedStatement. This is a little complex: simply speaking, in my company we have a framework called "dynamic query" — we pre-define SQL fragments in an XML file, then assemble the SQL according to the UI's pass-in criteria with jxel expressions. As the SQL is kind of pre-defined, I'm afraid changing to PreparedStatement would involve a lot of changes to our framework, so what we care about is just fixing the SQL injection issue in a simple way.
```
\0    An ASCII NUL (0x00) character.
\'    A single quote ("'") character.
\"    A double quote (""") character.
\b    A backspace character.
\n    A newline (linefeed) character.
\r    A carriage return character.
\t    A tab character.
\Z    ASCII 26 (Control+Z). See note following the table.
\\    A backslash ("\") character.
\%    A "%" character. See note following the table.
\_    A "_" character. See note following the table.
```

**[Reference](http://dev.mysql.com/doc/refman/5.0/en/string-literals.html)**

**[Similar Stack Overflow question](https://stackoverflow.com/questions/712580/list-of-special-characters-for-sql-like-clause)**
The code should begin attempting to stop SQL injection on the server side, prior to sending any information to the database. I'm not sure what language you are using, but this is normally accomplished by creating a statement that contains bind variables of some sort. In Java, this is a `PreparedStatement`; other languages contain similar features. Using bind variables or parameters in a statement leverages built-in protection against SQL injection, which honestly is going to be better than anything you or I write on top of the database. If you're doing any `String` concatenation on the server side to form a complete SQL statement, that is an indicator of a SQL injection risk.
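The bind-variable idea this answer describes is language-agnostic. Since most examples on this page are Python, here is the same pattern sketched with the stdlib `sqlite3` driver (in Java it would be a `PreparedStatement`, in MySQL/JDBC likewise): the driver sends the SQL text and the values separately, so a quote or underscore in user input cannot alter the statement. The table and values are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Booking (origin TEXT, finalDest TEXT)")
conn.execute("INSERT INTO Booking VALUES (?, ?)", ("C_HI", "S'EL"))

# User-supplied value containing a quote -- bound, not concatenated,
# so no escaping is needed and no injection is possible.
user_value = "S'EL"
rows = conn.execute(
    "SELECT origin FROM Booking WHERE finalDest = ?", (user_value,)
).fetchall()
```

The `?` placeholder is filled in by the driver after parsing, which is why the embedded `'` in `S'EL` is stored and matched literally instead of terminating the string.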
smart solution of SQL injection
[ "", "mysql", "sql", "database", "" ]
Hey, I am running into an issue when trying to run a cron job with a Python script on Ubuntu. This is what I have done:

1.) Wrote a simple tkinter app; the source for the code is from this url: <http://www.ittc.ku.edu/~niehaus/classes/448-s04/448-standard/simple_gui_examples/sample.py>

```
#!/usr/bin/python
from Tkinter import *

class App:
    def __init__(self, parent):
        f = Frame(parent)
        f.pack(padx=15, pady=15)
        self.entry = Entry(f, text="enter your choice")
        self.entry.pack(side=TOP, padx=10, pady=12)
        self.button = Button(f, text="print", command=self.print_this)
        self.button.pack(side=BOTTOM, padx=10, pady=10)
        self.exit = Button(f, text="exit", command=f.quit)
        self.exit.pack(side=BOTTOM, padx=10, pady=10)

    def print_this(self):
        print "this is to be printed"

root = Tk()
root.title('Tkwidgets application')
app = App(root)
root.mainloop()
```

2.) Changed the script to be executable:

```
chmod 777 sample.py
```

3.) Added the script to my crontab to be run every minute, for testing purposes. I opened crontab -e and added the following to my file:

```
* * * * * /home/bbc/workspace/python/tkinter/sample.py
```

4.) Disclaimer: I did not add any additional environment variables for tkinter, nor did I change my cron script at /etc/init.d/cron.

5.) I was tracking the cron job by doing a tail -f /var/log/syslog:

```
$ tail -f /var/log/syslog
Jul 7 18:33:01 bbc CRON[11346]: (bbc) CMD (/home/bbc/workspace/python/tkinter/sample.py)
Jul 7 18:33:01 bbc CRON[11343]: (CRON) error (grandchild #11344 failed with exit status 1)
Jul 7 18:33:01 bbc CRON[11343]: (CRON) info (No MTA installed, discarding output)
Jul 7 18:33:01 bbc CRON[11342]: (CRON) error (grandchild #11346 failed with exit status 1)
Jul 7 18:33:01 bbc CRON[11342]: (CRON) info (No MTA installed, discarding output)
```

Any help on debugging this issue will be most appreciated...
I'm not sure what you expect to happen here. The cron job won't have access to a display where it can show the GUI, so the button will never be displayed, and so `print_this` will never be run.

FWIW, when I tried to run your code I got an error:

```
File "./t.py", line 4
    def __init__(self,parent):
      ^
IndentationError: expected an indented block
```

Not sure if that's just caused by copy/paste into the page or if it's a real problem with your code.
In Linux Mint 17 I had to do the following.

Make the script executable:

```
~$ chmod +x script.py
```

You have to enable the X ACL for localhost connections for GUI applications to work:

```
~$ xhost +local:
```

Add the following line in the crontab (`env DISPLAY=:0.0`):

```
* * * * * env DISPLAY=:0.0 /usr/bin/python /your-script-somewhere.py
```

And one more addition to the crontab line (`>/dev/null 2>&1`):

```
* * * * * env DISPLAY=:0.0 /usr/bin/python /your-script-somewhere.py >/dev/null 2>&1
```

You can check errors in the /var/log/syslog file:

```
~$ tail -20 /var/log/syslog
```

More info: <https://help.ubuntu.com/community/CronHowto>
Trying to run a python script from ubuntu crontab
[ "", "python", "ubuntu", "cron", "tkinter", "" ]
I am new to pandas and I am struggling to figure out how to convert my data to a timeseries object. I have sensor data with a relative time index referenced to the beginning of the experiment — it is not in date/time format. All documentation that I have found online deals/starts with some sort of dated data. A short chunk of my data looks like:

```
0.000000  49.431958  4.119330  -0.001366  -9.483122E-9
0.025000  49.501745  4.125145   0.004710   2.322330E-8
0.050000  49.479531  4.123294   0.013725   1.185336E-7
0.075000  49.492309  4.124359   0.006082   1.607667E-7
0.325000  49.515702  4.126309   0.024307   9.750522E-7
2.925000  49.437069  4.119756   0.000202   9.148022E-6
3.025000  49.521010  4.126751   0.014313   9.590506E-6
3.425000  49.510001  4.125833  -0.003913   1.075210E-5
```

The time data is in the first column. I tried to load the data with:

```
datalabels = ['time', 'voltage pack', 'av. cell voltage', 'current', 'charge count',
              'soc', 'energy', 'unknown1', 'unknown2', 'unknown3']
datalvm = pd.read_csv(dpath+dfile, header=None, skiprows=25, names=datalabels,
                      delimiter='\t', parse_dates={'Timestamp': ['time']},
                      index_col='Timestamp')
```

But I just get an indexed series, not a timeseries. Any help would be greatly appreciated. Cheers!
You should construct pandas TimeSeries objects by parsing the timestamps into datetime objects. This requires you to pick some arbitrary starting point:

```
import datetime as dt
import pandas as pd

start = dt.datetime(year=2000, month=1, day=1)
time = datalvm['time'][1:]
floatseconds = map(float, time)  # str -> float
# floats to timedelta offsets from start -> this is your time series index
datetimes = map(lambda x: dt.timedelta(seconds=x) + start, floatseconds)

# construct the time series
timeseries = dict()  # time series are collected in a dictionary
for signal in datalabels[1:]:
    data = map(float, datalvm[signal][1:].values)
    t_s = pd.Series(data, index=datetimes, name=signal)
    timeseries[signal] = t_s

# convert the timeseries dict to a DataFrame
dataframe = pd.DataFrame(timeseries)
```

After you've constructed the time series you can use the resample function:

```
dataframe['soc'].resample('1sec')
```
You can just do it using `cut` on the groupby (you can specify the bins if you want), or groupby however you want, using the data above (that's why I am reading via `StringIO`) ``` In [22]: df= pd.read_csv(StringIO(data), header=None, delimiter='\s+') In [23]: df.columns = ['time','col1','col2','col3','col4'] In [24]: df Out[24]: time col1 col2 col3 col4 0 0.000 49.431958 4.119330 -0.001366 -9.483122e-09 1 0.025 49.501745 4.125145 0.004710 2.322330e-08 2 0.050 49.479531 4.123294 0.013725 1.185336e-07 3 0.075 49.492309 4.124359 0.006082 1.607667e-07 4 0.325 49.515702 4.126309 0.024307 9.750522e-07 5 2.925 49.437069 4.119756 0.000202 9.148022e-06 6 3.025 49.521010 4.126751 0.014313 9.590506e-06 7 3.425 49.510001 4.125833 -0.003913 1.075210e-05 In [25]: df.groupby(pd.cut(df['time'],2)).sum() Out[25]: time col1 col2 col3 col4 time (-0.00343, 1.712] 0.475 247.421245 20.618437 0.047458 0.000001 (1.712, 3.425] 9.375 148.468080 12.372340 0.010602 0.000029 ```
pandas timeseries with relative time
[ "", "python", "pandas", "time-series", "" ]
I have a simple script that graphs the TPR/SP tradeoff. It produces a pdf like (note the placement of the x-axis numbers): ![enter image description here](https://i.stack.imgur.com/x8hk8.png) The relevant code is likely: ``` xticks(range(len(SP_list)), [i/10.0 for i in range(len(SP_list))], size='small') ``` The whole code is: ``` SN_list = [0.89451476793248941, 0.83544303797468356, 0.77215189873417722, 0.70042194092827004, 0.63291139240506333, 0.57805907172995785, 0.5527426160337553, 0.5527426160337553, 0.53586497890295359, 0.52742616033755274, 0.50632911392405067, 0.48101265822784811, 0.45569620253164556, 0.43459915611814348, 0.40084388185654007, 0.3628691983122363, 0.31223628691983124, 0.25738396624472576, 0.20253164556962025, 0.12658227848101267, 0.054852320675105488, 0.012658227848101266] SP_list = [0.24256292906178489, 0.24780976220275344, 0.25523012552301255, 0.25382262996941896, 0.25684931506849318, 0.36533333333333334, 0.4548611111111111, 0.51778656126482214, 0.54978354978354982, 0.59241706161137442, 0.63492063492063489, 0.80851063829787229, 0.81203007518796988, 0.85123966942148765, 0.88785046728971961, 0.91489361702127658, 0.9135802469135802, 0.9242424242424242, 0.94117647058823528, 0.967741935483871, 1.0, 1.0] figure() xlabel('Score cutoff point') ylabel('Percent') plot(SP_list) plot(SN_list) legend(('True Positive Rate', 'Specificity'), 'upper left', prop={"size":9}, shadow=False, fancybox=False) grid(False) xticks(range(len(SP_list)), [i/10.0 for i in range(len(SP_list))], size='small') savefig('SP_SN.png') ```
You can call `autoscale()` after `xticks()`, which will set the new axes' limits automatically. ![enter image description here](https://i.stack.imgur.com/A7U6O.png)
It is doing exactly what you told it to. The problem is a poor interaction of the default tick locator and your use of `xticks`. By default the locator tries to set a range to give you 'nice' tick locations. If you remove the `xticks` line, you will see that it gives you a range of `[0, 25]`. When you use `xticks` it puts the given string at the given location, but does not change the limits. You just need to set the limits by adding ``` xlim([0, len(SP_list) - 1]) ```
xticks ends placement of numbers on x-axis prematurely, i.e. the ticks do not reach the right end
[ "", "python", "python-2.7", "matplotlib", "" ]
I have the following variables: ``` query = "(first_name = ?) AND (country = ?)" # string values = ['Bob', 'USA'] # array ``` I need the following result: ``` result = "(first_name = Bob) AND (country = USA)" # string ``` The number of substitutions varies (1..20). What is the best way to do it in Ruby?
If you don't mind destroying the array: `query.gsub('?') { values.shift }` Otherwise just copy the array and then do the replacements. **Edited:** Used gsub('?') instead of a regex.
If you can control the `query` string, the [String#%](http://www.ruby-doc.org/core-2.0/String.html#method-i-25) operator is what you need: ``` query = "(first_name = %s) AND (country = %s)" # string values = ['Bob', 'USA'] # array result = query % values #=> "(first_name = Bob) AND (country = USA)" ``` You have to replace `?` with `%s` in your `query` string.
What is the most elegant way in Ruby to substitute '?' in string with values in array
[ "", "sql", "ruby-on-rails", "ruby", "string", "" ]
I know this question has been asked before but those questions typically lack specific details and result in answers that say something like "It depends what you are trying to do..." so the main gist of this app is that it retrieves remote data (ex. text) and object (ex. images). Since PHP and python are the two programming languages I feel comfortable with I felt python was more suited for desktop gui apps. I'm creating a desktop music player and here are some of the technical specs I want to include: * A sign in construct that authenticates user credentials (Like spotify, skype, league of legends) against a remote database (This will be in mysql.) My thinking is to create a web api for the client to query via HTTP/HTTPS GET/POST * A client side SQLite database that stores the filename, filepath and id3 tags of the song so upon launching, the application displays each song in a row with the song length, artist, album, genre (Like iTunes) * Retrieve remote images and display them within the application's frame (Like skype displays a person's profile picture.) * Must be cross-platform (At least in Windows and Mac), look native in different OS's but the native look and feel should be easily overridden with custom styles (Ex. rounded buttons with gradients.) * Compilation for Windows and Mac should be relatively straightforward Of the popular python gui toolkits like PyQt, PyGTK, Tkinter, wxPython, Pyjamas, PyGObject and PySide which are well suited for my application and why? Why are the others not well suited for these specs? Which have good documentation and active communities?
Welcome to a *fun* area:

1. Watch out for urllib2: it doesn't check the certificate or the certificate chain. Use requests instead, or use the ssl library and check it yourself. See [Urllib and validation of server certificate](https://stackoverflow.com/questions/6648952/urllib-and-validation-of-server-certificate) Sometimes a little ZeroMQ (0mq) can simplify your server.
2. You should consider shipping a self-signed certificate with your application if you are a private server/private client pair. At that point, validating the certificate eliminates a host of other possible problems.
3. While you could read a lot about security issues, like [Crypto 101](https://speakerdeck.com/pyconslides/crypto-101-by-laurens-van-houtven), the short version is: use TLS (the new name for SSL) to transmit the data and GPG to store the data. TLS keeps others from seeing and altering the data when moving it. GPG keeps others from seeing and altering the data when storing or retrieving it. See also: [How to safely keep a decrypted temporary data?](https://stackoverflow.com/questions/17010088/how-to-safely-keep-a-decrypted-temporary-data) Enough about security!
4. SQLite3, used with GPG, is fine until you get too large. After that, you can move to MariaDB (the supported version of MySQL), PostgreSQL, or something like Mongo. I'm a proponent of [doing things that don't scale](http://paulgraham.com/ds.html) and getting something working now is worthwhile.
5. For the GUI, you'll hate my answer: HTML/CSS/JavaScript. The odds that you will need a portal or mobile app or remote access or whatever are compelling. Use jQuery (or one of its lightweight cousins like Zepto). Then run your application as a full screen application without a browser bar or access to other sites. You can use libraries to emulate the native look and feel, but customers almost always go "oh, I know how to use that" and stop asking.
6. Still hate the GUI answer?
While you could use Tcl/Tk or Wx, you will forever be fighting platform bugs. For example, OS X (Mac) users need to install ActiveState's Tcl/Tk instead of the default one. You will end up with a heavy solution for the images (PIL or ImageMagick) instead of just the HTML image tag. There is a huge list of other GUIs to play with, including some using the new 'yield from' construct. Still, you do better with HTML/CSS/JavaScript for now. Watch "JavaScript: The Good Parts" and then adopt an attitude of "ship it as it works."

7. Push hard to use either Python 2.7 or Python 3.3+. You don't want to be fighting the rising tide of better support when you're making a complicated application.

I do this stuff for FUN!
## GUI library First of all, please leverage the work people have done to compile [this GuiProgramming list](http://wiki.python.org/moin/GuiProgramming). One of the packages that stood out to me was [`Kivy`](http://kivy.org/#home). You should definitely check it out (at least the videos / intro). The `Kivy` project has a [nice introduction](http://kivy.org/docs/gettingstarted/intro.html). I have to say I haven't followed it in full, but it looks very promising. As for your requirement to be cross-platform, you can do that. ## Packaging There is [extensive documentation on how to package your app](http://kivy.org/docs/guide/packaging.html) for the different platforms. For `MacOSX` and `Windows` it uses [`PyInstaller`](http://www.pyinstaller.org/wiki), but there are instructions for `Android` and `iOS`. ## Client-side database Yes, [`sqlite3`](http://www.sqlite.org/download.html) is the way to go. You can use `sqlite3` with `Kivy`. You can also use [`SQLAlchemy`](http://docs.sqlalchemy.org/en/latest/intro.html#installation) if you need to connect to your sqlite database or if you need to connect to a remote one. ## Retrieving content The [`requests` library is awesome](http://docs.python-requests.org/en/latest/) for doing HTTP requests and IMHO is much simpler to use than a combination of `httplib` and `urllib2`.
Considerations for python gui toolkit for app that queries remote database?
[ "", "python", "tkinter", "wxpython", "pyqt", "pygtk", "" ]
What is the best way to handle return values when using multiple update statements in a stored procedure? I am using this stored procedure in a data adapter and am trying to get the update result total. I'm using **SQL Server 2008** and .NET 4.5.

For example:

```
CREATE PROCEDURE [dbo].[usrsp_MessageConversation_delete] 
(
@fromMbrID int,
@toMbrID int
)
AS
BEGIN
SET NOCOUNT OFF;

UPDATE mbr_Messages 
SET fromDeleted = 1
WHERE (fromMbrID = @fromMbrID) AND (toMbrID = @toMbrID)

UPDATE mbr_Messages 
SET toDeleted = 1
WHERE (fromMbrID = @toMbrID) AND (toMbrID = @fromMbrID)

-- RETURN TOTAL NUMBER OF UPDATED RECORDS, E.G.: RETURN @@ROWCOUNT ??
END
```

I want to return the total number of updated records, or 0 if none were updated. Do I need to add any other error handling - if so, any suggestions?

**UPDATE: I ended up going with this:**

```
BEGIN
SET NOCOUNT ON;
DECLARE @rowCounter INT;

BEGIN TRY
    UPDATE mbr_Messages
    SET fromDeleted = 1
    WHERE (fromMbrID = @fromMbrID)
        AND (toMbrID = @toMbrID)
        AND (fromDeleted = 0);
    SET @rowCounter = @@ROWCOUNT;

    UPDATE mbr_Messages
    SET toDeleted = 1
    WHERE (fromMbrID = @toMbrID)
        AND (toMbrID = @fromMbrID)
        AND (toDeleted = 0);
    SET @rowCounter = @rowCounter + @@ROWCOUNT;
END TRY

BEGIN CATCH
    PRINT 'Update failed';
END CATCH

RETURN @rowCounter
END
```
You want to do something like below: ``` CREATE PROCEDURE [dbo].[usrsp_MessageConversation_delete] ( @fromMbrID int, @toMbrID int ) AS BEGIN SET NOCOUNT OFF; DECLARE @RowCount1 INTEGER DECLARE @RowCount2 INTEGER DECLARE @TotalRows INTEGER UPDATE mbr_Messages SET fromDeleted = 1 WHERE (fromMbrID = @fromMbrID) AND (toMbrID = @toMbrID) SET @RowCount1=@@RowCount UPDATE mbr_Messages SET toDeleted = 1 WHERE (fromMbrID = @toMbrID) AND (toMbrID = @fromMbrID) SET @RowCount2=@@RowCount SET @TotalRows = @RowCount1 + @RowCount2 --RETURN TOTAL NUMBER OF UPDATED RECORDS RETURN @TotalRows END ``` You need to assign @@RowCount to some variable as it gets reset once you use it. **Edit:** Also add error handling code: Try..Catch and Transactions.
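For illustration only (this is not the T-SQL from the answer): the same pattern of capturing the affected-row count after each statement and summing the counts can be sketched with Python's `sqlite3`, where `cursor.rowcount` stands in for `@@ROWCOUNT`. The schema and data here are made up to match the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mbr_Messages (fromMbrID INT, toMbrID INT, "
            "fromDeleted INT DEFAULT 0, toDeleted INT DEFAULT 0)")
cur.executemany("INSERT INTO mbr_Messages (fromMbrID, toMbrID) VALUES (?, ?)",
                [(1, 2), (1, 2), (2, 1)])

total = 0
# First update: messages sent from member 1 to member 2
cur.execute("UPDATE mbr_Messages SET fromDeleted = 1 "
            "WHERE fromMbrID = 1 AND toMbrID = 2 AND fromDeleted = 0")
total += cur.rowcount  # rows touched by this statement (like @@ROWCOUNT)
# Second update: messages in the opposite direction
cur.execute("UPDATE mbr_Messages SET toDeleted = 1 "
            "WHERE fromMbrID = 2 AND toMbrID = 1 AND toDeleted = 0")
total += cur.rowcount

print(total)  # 3: two rows from the first update, one from the second
```

The key point carries over: the per-statement count must be captured immediately after each statement, before the next one resets it.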
If your stored procedure is short, I don't really suggest using any error handling. But here is one example of error handling:

```
IF @@ERROR <> 0
BEGIN
    -- your statement
    return 12345; -- to mark your error location
END
```

More information about [@@Error](http://msdn.microsoft.com/en-us/library/ms188790%28v=sql.90%29.aspx)
sql multiple updates in stored procedure
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
Let's say I have a tuple of lists like:

```
g = (['20', '10'], ['10', '74'])
```

I want the max of the two based on the first value in each list, like:

```
max(g, key = ???)  # <- the g[0] of each list; I'm clueless what to provide here
```

And the answer is ['20', '10']. Is that possible? What should be the key here?

Another example:

```
g = (['42', '50'], ['30', '4'])
ans: max(g, key=??) = ['42', '50']
```

PS: By max I mean numerical maximum.
Just pass in a callable that gets the first element of each item. Using [`operator.itemgetter()`](http://docs.python.org/2/library/operator.html#operator.itemgetter) is easiest: ``` from operator import itemgetter max(g, key=itemgetter(0)) ``` but if you *have* to test against integer values instead of lexographically sorted items, a lambda might be better: ``` max(g, key=lambda k: int(k[0])) ``` Which one you need depends on what you expect the maximum to be for strings containing digits of differing length. Is `'4'` smaller or larger than `'30'`? Demo: ``` >>> g = (['42', '50'], ['30', '4']) >>> from operator import itemgetter >>> max(g, key=itemgetter(0)) ['42', '50'] >>> g = (['20', '10'], ['10', '74']) >>> max(g, key=itemgetter(0)) ['20', '10'] ``` or showing the difference between `itemgetter()` and a `lambda` with `int()`: ``` >>> max((['30', '10'], ['4', '10']), key=lambda k: int(k[0])) ['30', '10'] >>> max((['30', '10'], ['4', '10']), key=itemgetter(0)) ['4', '10'] ```
You can use `lambda` to specify which item should be used for comparison:

```
>>> g = (['20', '10'], ['10', '74'])
>>> max(g, key = lambda x:int(x[0])) #use int() for conversion
['20', '10']
>>> g = (['42', '50'], ['30', '4'])
>>> max(g, key = lambda x:int(x[0]))
['42', '50']
```

You can also use `operator.itemgetter`, but in this case it won't work as intended, because the items are strings and would be compared lexicographically rather than numerically.
Python: Maximum of lists of 2 or more elements in a tuple using key
[ "", "python", "python-2.7", "max", "" ]
I have 3 tables: **champions**, **roles**, **champs_to_roles**.

The champs_to_roles table looks like this:

```
|ID_champ|ID_role|
 ----------------
|       2|      2|
|       4|      5|
|       5|      3|
|       3|      2|
|       1|      1|
|       1|      2|
```

I'm trying to SELECT the ID_champ that has both ID_role = 1 AND ID_role = 2. At this point I have the following code:

```
SELECT DISTINCT `c`.`name` 
FROM `champions` AS c, 
( 
    SELECT `ID_champ` 
    FROM `champs_to_roles` 
    WHERE `ID_role` IN(1,2) 
) AS r 
WHERE `r`.`ID_champ` = `c`.`ID`
```

However, this returns the ID_champ rows with ID_role = 1 OR ID_role = 2 OR both of them. How can I fetch what I need? Thanks a lot :)
Use two inner joins to the association table - one for each role type: ``` SELECT c.name FROM champions c JOIN champs_to_roles ctr1 ON ctr1.ID_champ = c.id AND ctr1.ID_role = 1 JOIN champs_to_roles ctr2 ON ctr2.ID_champ = c.id AND ctr2.ID_role = 2 ``` By using a *inner* join, *both* rows in `champs_to_roles` must exist for the champion to be returned. The DISTINCT keyword was dispensed with because you don't need it (unless you have duplicate entries in your association table, which seems unlikely and a bad design if even possible)
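To see why the double inner join returns only champions that hold *both* roles, here is a small, hypothetical reproduction using Python's `sqlite3` with invented champion names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE champions (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE champs_to_roles (ID_champ INT, ID_role INT);
INSERT INTO champions VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
-- Alice has roles 1 and 2, Bob only role 1, Carol only role 2
INSERT INTO champs_to_roles VALUES (1, 1), (1, 2), (2, 1), (3, 2);
""")

# Each inner join demands one of the required roles; a champion
# missing either role fails one of the joins and is filtered out.
cur.execute("""
SELECT c.name
FROM champions c
JOIN champs_to_roles ctr1 ON ctr1.ID_champ = c.id AND ctr1.ID_role = 1
JOIN champs_to_roles ctr2 ON ctr2.ID_champ = c.id AND ctr2.ID_role = 2
""")
print(cur.fetchall())  # [('Alice',)] -- only the champion with both roles
```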
I would probably go with the solution by Bohemian, but for completeness there is also the possibility to use a count and check the number of roles returned from a subselect ``` SELECT c.name FROM champions AS c INNER JOIN ( SELECT ID_champ, COUNT(DISTINCT ID_role) AS NumRole FROM champs_to_roles WHERE ID_role IN(1,2) GROUP BY ID_champ ) AS r ON r.ID_champ = c.ID AND NumRole = 2 ``` (this could also be done without the subselect but it would be a touch less readable when you are not returning the columns that you need to group by, etc)
Multiple required value for a WHERE clause?
[ "", "mysql", "sql", "" ]
This is the problem I'm running into: instead of returning new_word and printing it out, it just prints 'None'.

```
text = "hey hey hey,hey"
word = 'hey'

def censor(text,word):
    new_word = ""
    count = 0
    if word in text:
        latter_count = ((text.index(word))+len(word))
        while count < text.index(word):
            new_word+= text[count]
            count += 1
        for i in range(len(word)):
            new_word += '*'
        while latter_count < len(text) :
            new_word += text[latter_count]
            latter_count += 1
        if word in new_word :
            censor(new_word,word)
        else :
            return new_word

print censor(text,word)
```
A function returns `None` if there is no return statement. Probably while doing recursion, `if word in text:` is False, and so there is nothing to return. You also did not return the recursive step. You must *return* `censor(new_word,word)`
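Both failure modes described here (a function body that never reaches a `return` statement, and a recursive call whose result is discarded) can be demonstrated in isolation with a toy sketch:

```python
def no_return():
    x = 1 + 1  # computes something, but never returns it

def discards_recursion(n):
    if n > 0:
        discards_recursion(n - 1)  # result thrown away, so this path returns None
    else:
        return "done"

def returns_recursion(n):
    if n > 0:
        return returns_recursion(n - 1)  # result propagated back up
    else:
        return "done"

print(no_return())            # None
print(discards_recursion(3))  # None
print(returns_recursion(3))   # done
```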
You're not returning in the first branch of the `if` toward the end. Change that to ``` if word in new_word: return censor(new_word,word) ``` Your function will also return None if `word in text` is false, so you might want to add an `else` at the end to return an empty string or some other default value in that case.
Why does my python method print out 'None' when I try to print out the string that the method returns?
[ "", "python", "" ]
My stored procedure receives a parameter which is a comma-separated string: ``` DECLARE @Account AS VARCHAR(200) SET @Account = 'SA,A' ``` I need to make from it this statement: ``` WHERE Account IN ('SA', 'A') ``` What is the best practice for doing this?
Create this function (sqlserver 2005+) ``` CREATE function [dbo].[f_split] ( @param nvarchar(max), @delimiter char(1) ) returns @t table (val nvarchar(max), seq int) as begin set @param += @delimiter ;with a as ( select cast(1 as bigint) f, charindex(@delimiter, @param) t, 1 seq union all select t + 1, charindex(@delimiter, @param, t + 1), seq + 1 from a where charindex(@delimiter, @param, t + 1) > 0 ) insert @t select substring(@param, f, t - f), seq from a option (maxrecursion 0) return end ``` use this statement ``` SELECT * FROM yourtable WHERE account in (SELECT val FROM dbo.f_split(@account, ',')) ``` --- Comparing my split function to XML split: Testdata: ``` select top 100000 cast(a.number as varchar(10))+','+a.type +','+ cast(a.status as varchar(9))+','+cast(b.number as varchar(10))+','+b.type +','+ cast(b.status as varchar(9)) txt into a from master..spt_values a cross join master..spt_values b ``` XML: ``` SELECT count(t.c.value('.', 'VARCHAR(20)')) FROM ( SELECT top 100000 x = CAST('<t>' + REPLACE(txt, ',', '</t><t>') + '</t>' AS XML) from a ) a CROSS APPLY x.nodes('/t') t(c) Elapsed time: 1:21 seconds ``` f\_split: ``` select count(*) from a cross apply clausens_base.dbo.f_split(a.txt, ',') Elapsed time: 43 seconds ``` This will change from run to run, but you get the idea
Try this one - **DDL:** ``` CREATE TABLE dbo.Table1 ( [EmpId] INT , [FirstName] VARCHAR(7) , [LastName] VARCHAR(10) , [domain] VARCHAR(6) , [Vertical] VARCHAR(10) , [Account] VARCHAR(50) , [City] VARCHAR(50) ) INSERT INTO dbo.Table1 ([EmpId], [FirstName], [LastName], [Vertical], [Account], [domain], [City]) VALUES (345, 'Priya', 'Palanisamy', 'DotNet', 'LS', 'Abbott', 'Chennai'), (346, 'Kavitha', 'Amirtharaj', 'DotNet', 'CG', 'Diageo', 'Chennai'), (647, 'Kala', 'Haribabu', 'DotNet', 'DotNet', 'IMS', 'Chennai') ``` **Query:** ``` DECLARE @Account VARCHAR(200) SELECT @Account = 'CG,LS' SELECT * FROM Table1 WHERE [Vertical] = 'DotNet' AND (ISNULL(@Account, '') = '' OR Account IN ( SELECT t.c.value('.', 'VARCHAR(20)') FROM ( SELECT x = CAST('<t>' + REPLACE(@Account, ',', '</t><t>') + '</t>' AS XML) ) a CROSS APPLY x.nodes('/t') t(c) )) ``` **Output:** ![proff](https://i.stack.imgur.com/iiUCJ.png) **Extended statistics:** ![stat](https://i.stack.imgur.com/sJoxj.png) **SSMS SET STATISTICS TIME + IO:** ***XML:*** ``` (3720 row(s) affected) Table 'temp'. Scan count 3, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 187 ms, elapsed time = 242 ms. ``` ***CTE:*** ``` (3720 row(s) affected) Table '#BF78F425'. Scan count 360, logical reads 360, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'temp'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 281 ms, elapsed time = 335 ms. ```
Parse comma-separated string to make IN List of strings in the Where clause
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a Python file (`my_code.py`) in the `Home/Python_Codes` folder in Ubuntu. I want to run it in the Python shell. How can I do that? I tried [this](https://stackoverflow.com/questions/7420937/run-program-in-python-shell): `>>> execfile('~/Python_Codes/my_code.py')` but it gives me a path error.
You should expand the tilde (~) to the actual path. Try the following code.

In Python 2.x:

```
import os
execfile(os.path.expanduser('~/Python_Codes/my_code.py'))
```

In Python 3.x (there is no `execfile` in Python 3.x):

```
import os
with open(os.path.expanduser('~/Python_Codes/my_code.py')) as f:
    exec(f.read())
```
Importing your module will execute any code at the top indent level - which includes creating any functions and classes you have defined there. ``` james@Brindle:/tmp$ cat my_codes.py def myfunc(arg1, arg2): print "arg1: %s, arg2: %s" % (arg1, arg2) print "hello" james@Brindle:/tmp$ python Python 2.7.5 (default, Jun 14 2013, 22:12:26) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.60)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import my_codes hello >>> my_codes.myfunc("one", "two") arg1: one, arg2: two >>> ``` To add `~/Python_Codes` to the list of places that python will search, you can manipulate `sys.path` to add that directory to the start of the list. ``` >>> import sys >>> print sys.path ['', ... '/Library/Python/2.7/site-packages'] >>> sys.path.insert(0,'/home/me/Python_codes/') >>> import my_codes ```
run python file in python shell
[ "", "ubuntu", "python", "" ]
This is my situation: I have 2 tables, **tickets** and **tickets-details**. I need to retrieve the info inside **"tickets"** and just the LAST reply from **"tickets-details"** for each ticket, then show them in a table. My problem is that **"ticket-details"** returns a row for each reply, so I'm getting more than one row per ticket. How can I achieve this in a single query? I tried adding `DISTINCT` into my `SELECT` but it didn't work. I tried using `GROUP BY` **id_ticket** but that didn't work either, because I wasn't getting the last reply from ticket-details.

This is my query:

```
SELECT DISTINCT ti.id_ticket,ti.title,tiD.Reply,ti.status 
FROM tickets ti 
INNER JOIN ticket-details tiD ON ti.id_ticket = tiD.id_ticket 
WHERE user = '$id_user' 
ORDER BY status desc
```

---------------------------------- EDIT-----------------------------------------------

my tables:

tickets(**id_ticket**, user, date, title, status)

ticket-details(**id_ticketDetail**, id_ticket, dateReply, reply)
I do not know your database model, but if the ID is autoincremented you can extend your script with this condition:

```
SELECT ti.id_ticket,ti.title,tiD.Reply,ti.status 
FROM tickets ti 
INNER JOIN ticket-details tiD ON ti.id_ticket = tiD.id_ticket 
WHERE user = '$id_user' 
and tiD.id_ticket in (select max(a.id) from ticket-details a group by a.id_ticket)
ORDER BY status desc
```

Or if you have some kind of date attribute, change the new condition to use your date attribute (in my example it is ticked_date):

```
and tiD.ticked_date in (select max(a.ticked_date) from ticket-details a group by a.id_ticket)
```
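As a sanity check of the max-id-per-group idea, here is a hypothetical reproduction with Python's `sqlite3` (the table is renamed `ticket_details`, since a hyphenated name would need quoting, and the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE ticket_details (id_ticketDetail INTEGER PRIMARY KEY,
                             id_ticket INT, reply TEXT);
INSERT INTO ticket_details (id_ticket, reply) VALUES
  (1, 'first reply'), (1, 'latest reply for ticket 1'),
  (2, 'only reply for ticket 2');
""")

# Keep only the row whose id is the MAX within each ticket,
# i.e. the newest reply per ticket.
cur.execute("""
SELECT id_ticket, reply
FROM ticket_details
WHERE id_ticketDetail IN (SELECT MAX(id_ticketDetail)
                          FROM ticket_details
                          GROUP BY id_ticket)
ORDER BY id_ticket
""")
print(cur.fetchall())
# [(1, 'latest reply for ticket 1'), (2, 'only reply for ticket 2')]
```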
Assuming that the max `id_ticketDetail` represents the most recent record in `ticket-details` you can try ``` SELECT ti.id_ticket, ti.title, tiD.Reply, ti.status FROM tickets ti JOIN ( SELECT id_ticket, reply FROM `ticket-details` d JOIN ( SELECT MAX(id_ticketDetail) max_id FROM `ticket-details` GROUP BY id_ticket ) q ON d.id_ticketDetail = q.max_id ) tiD ON ti.id_ticket = tiD.id_ticket WHERE ti.user = '$id_user' ORDER BY ti.status DESC ``` or a version with max `dateReply` ``` SELECT ti.id_ticket, ti.title, tiD.Reply, ti.status FROM tickets ti JOIN ( SELECT d.id_ticket, d.reply FROM `ticket-details` d JOIN ( SELECT id_ticket, MAX(dateReply) max_dateReply FROM `ticket-details` GROUP BY id_ticket ) q ON d.id_ticket = q.id_ticket AND d.dateReply = q.max_dateReply ) tiD ON ti.id_ticket = tiD.id_ticket WHERE ti.user = '$id_user' ORDER BY ti.status DESC ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/b5664/4)** demo for both queries.
SQL INNER JOIN returning more rows than I need
[ "", "mysql", "sql", "" ]
Set to string. Obvious: ``` >>> s = set([1,2,3]) >>> s set([1, 2, 3]) >>> str(s) 'set([1, 2, 3])' ``` String to set? Maybe like this? ``` >>> set(map(int,str(s).split('set([')[-1].split('])')[0].split(','))) set([1, 2, 3]) ``` Extremely ugly. Is there better way to serialize/deserialize sets?
Use `repr` and `eval`: ``` >>> s = set([1,2,3]) >>> strs = repr(s) >>> strs 'set([1, 2, 3])' >>> eval(strs) set([1, 2, 3]) ``` Note that `eval` is not safe if the source of string is unknown, prefer `ast.literal_eval` for safer conversion: ``` >>> from ast import literal_eval >>> s = set([10, 20, 30]) >>> lis = str(list(s)) >>> set(literal_eval(lis)) set([10, 20, 30]) ``` help on `repr`: ``` repr(object) -> string Return the canonical string representation of the object. For most object types, eval(repr(object)) == object. ```
The question is a little unclear, because the title asks about string/set conversion but the question at the end asks how to serialize. To refresh the concept: [serialization](https://www.cs.uic.edu/~troy/fall04/cs441/drake/serialization.html) is the process of encoding an object, including the objects it refers to, as a stream of byte data. If you are interested in serializing, you can use:

```
json.dumps -> serialize
json.loads -> deserialize
```

If your question is more about how to convert a set to a string and a string to a set, then use the code below (tested in Python 3).

**String to Set**

```
set('abca')
```

**Set to String**

```
''.join(some_var_set)
```

example:

```
def test():
    some_var_set=set('abca')
    print("here is the set:",some_var_set,type(some_var_set))
    some_var_string=''.join(some_var_set)
    print("here is the string:",some_var_string,type(some_var_string))

test()
```
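If serialization is indeed the goal, a common route (sketched here as an addition, not taken from the answer above) is to go through JSON. Since JSON has no set type, the usual workaround is to store the set as a list and rebuild it on load:

```python
import json

s = {1, 2, 3}

# Serialize: JSON has no set type, so store the set as a sorted list
payload = json.dumps(sorted(s))

# Deserialize: parse the list back and rebuild the set
restored = set(json.loads(payload))

print(payload)        # [1, 2, 3]
print(restored == s)  # True
```

Unlike `eval`, this is safe on untrusted input, since `json.loads` only ever produces plain data.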
Convert set to string and vice versa
[ "", "python", "string", "python-2.7", "set", "" ]
I am working on a Python program. I want to take a user input which is less than 140 characters. If the sentence exceeds the character limit, it should just print the first 140 characters. I am able to enter characters, but this is what happens. I am new to Python. How can I achieve this?

```
def isAlpha(c):
    if( c >= 'A' and c <='Z' or c >= 'a' and c <='z' or c >= '0' and c <='9'):
        return True
    else:
        return False

def main():
    userInput = str(input("Enter The Sentense: "))
    for i in range(140):
        newList = userInput[i]
        print(newList)
```

This is the output I get:

```
Enter The Sentense: this is
t
h
i
s

i
s
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    main()
  File "C:/Users/Manmohit/Desktop/anonymiser.py", line 11, in main
    newList = userInput[i]
IndexError: string index out of range
```

Thank you for the help
``` userInput = str(input("Enter The Sentense: ")) truncatedInput = userInput[:140] ```
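A note on why the slice works where the original loop failed: slicing past the end of a string is safe in Python and never raises IndexError, it simply returns what is there. A small sketch:

```python
def truncate(text, limit=140):
    # Slicing past the end of a string is safe:
    # a short string comes back unchanged.
    return text[:limit]

short = "this is"
long_text = "x" * 200

print(truncate(short))           # 'this is' (unchanged, only 7 chars)
print(len(truncate(long_text)))  # 140
```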
Why not just test for the `len`? ``` if len(input) > 140: print "Input exceeds 140 characters." input = input[:140] ``` You can also put up other errors using this or quit the program, if you want to. The `input = input[:140]` makes sure that only the first 140 characters of the string are captured. This is wrapped around in an `if` so that if the input length is less than 140, the `input = input[:140]` line does not execute and the error is not shown. This is called Python's Slice Notation, a useful link for quick learning would be [this.](https://stackoverflow.com/questions/509211/the-python-slice-notation) Explanation for your error - ``` for i in range(140): newList = userInput[i] print(newList) ``` If the `userInput` is of length 5, then accessing the 6th element gives an error, since no such element exists. Similarly, you try to access elements until 140 and hence get this error. If all you're trying to do is split the string into it's characters, then, an easy way would be - ``` >>> testString = "Python" >>> list(testString) ['P', 'y', 't', 'h', 'o', 'n'] ```
How to take user input for less than 140 characters?
[ "", "python", "" ]
I do not know what the problem is:

```
SELECT DP.CODE_VALEUR CODE,
       MAX(VA.CODE_TYPE_VALEUR) CODE_TYPE_VALEUR,
       MAX(VA.NOM_VALEUR) STOCK_NAME,
       (SUM(COURS_ACQ_VALEUR) / SUM(QUANTITE_VALEUR)) CMP,
       MAX(DP.CODE_COMPTE) CODE_COMPTE,
       SUM(DP.QUANTITE_VALEUR) QTEVALEUR,
       round(SUM(DP.VALORISATION_BOURSIERE), 3) VALORISATION_BOURSIERE,
       round((SUM(DP.VALORISATION_BOURSIERE) / SUM(DP.QUANTITE_VALEUR)), 3) COURS
FROM DETAILPORTEFEUILLE DP,
     VALEUR VA
WHERE DP.CODE_COMPTE IN
    (SELECT P.CODE_COMPTE_RATTACHE
     FROM PROCURATION P
     WHERE P.IDWEB_MASTER = 8
       AND NVL(P.CAN_SEE_PORTEFEUILLE, 0) != 0)
  AND VA.CODE_VALEUR = DP.CODE_VALEUR
  AND DP.QUANTITE_VALEUR > 0
  AND DP.CODE_VALEUR = 'TN0007250012'
```
Try adding a `GROUP BY` for the non-aggregated column in your SELECT list:

```
GROUP BY DP.CODE_VALEUR
```
A SELECT list cannot include both a group function, such as AVG, COUNT, MAX, MIN, SUM, STDDEV, or VARIANCE, and an individual column expression, unless the individual column expression is included in a GROUP BY clause. Drop either the group function or the individual column expression from the SELECT list, or add a GROUP BY clause that includes all individual column expressions listed.

In your case, add `GROUP BY DP.CODE_VALEUR`.
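The rule ("every non-aggregated SELECT column must appear in GROUP BY") is Oracle's; some engines, such as SQLite, tolerate the mix silently. Still, the corrected shape looks the same everywhere. A toy illustration with invented data, run via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE detail (code TEXT, qty INT);
INSERT INTO detail VALUES ('A', 10), ('A', 5), ('B', 7);
""")

# Every non-aggregated column in the SELECT list (here: code)
# also appears in GROUP BY, so aggregates are computed per group.
cur.execute("""
SELECT code, SUM(qty), MAX(qty)
FROM detail
GROUP BY code
ORDER BY code
""")
print(cur.fetchall())  # [('A', 15, 10), ('B', 7, 7)]
```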
Error code 937, SQL state 42000: ORA-00937: not a single-group group function
[ "", "sql", "oracle11g", "" ]
It's possible to delete using join statements to qualify the set to be deleted, such as the following: ``` DELETE J FROM Users U inner join LinkingTable J on U.id = J.U_id inner join Groups G on J.G_id = G.id WHERE G.Name = 'Whatever' and U.Name not in ('Exclude list') ``` However I'm interested in deleting both sides of the join criteria -- both the `LinkingTable` record and the User record on which it depends. I can't turn cascades on because my solution is Entity Framework code first and the bidirectional relationships make for multiple cascade paths. Ideally, I'd like something like: ``` DELETE J, U FROM Users U inner join LinkingTable J on U.id = J.U_id ... ``` Syntactically this doesn't work out, but I'm curious if something like this is possible?
Nope, you'd need to run multiple statements. Because you need to delete from two tables, consider creating a temp table of the matching ids: ``` SELECT U.Id INTO #RecordsToDelete FROM Users U JOIN LinkingTable J ON U.Id = J.U_Id ... ``` And then delete from each of the tables: ``` DELETE FROM Users WHERE Id IN (SELECT Id FROM #RecordsToDelete) DELETE FROM LinkingTable WHERE Id IN (SELECT Id FROM #RecordsToDelete) ```
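A hypothetical end-to-end sketch of the collect-ids-then-delete-twice approach, using Python's `sqlite3` with an in-memory Python set standing in for the `#RecordsToDelete` temp table (schema and data are invented to match the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE LinkingTable (U_id INT, G_id INT);
INSERT INTO Users VALUES (1, 'keep'), (2, 'drop');
INSERT INTO LinkingTable VALUES (2, 7), (2, 8);
""")

# Step 1: collect the matching ids (the role of the temp table)
cur.execute("SELECT U.id FROM Users U JOIN LinkingTable J ON U.id = J.U_id")
ids = {row[0] for row in cur.fetchall()}

# Step 2: delete from each table using the captured id set
cur.executemany("DELETE FROM Users WHERE id = ?", [(i,) for i in ids])
cur.executemany("DELETE FROM LinkingTable WHERE U_id = ?", [(i,) for i in ids])

cur.execute("SELECT name FROM Users")
print(cur.fetchall())  # [('keep',)] -- user 2 and its link rows are gone
```

Capturing the ids first matters: once the first DELETE runs, the join that identified the rows no longer matches anything.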
What you describe is possible in `MySQL`, but *not* in `SQL Server`. You can, however, use the "deleted" pseudo table to delete the values from two tables at a time, like this:

```
begin transaction;

declare @deletedIds table ( samcol1 varchar(25) );

delete #temp1
output deleted.samcol1 into @deletedIds
from #temp1 t1
join #temp2 t2
on t2.samcol1 = t1.samcol1

delete #temp2
from #temp2 t2
join @deletedIds d
on d.samcol1 = t2.samcol1;

commit transaction;
```

For a brief explanation you can take a look at this [Link](https://stackoverflow.com/questions/783726/how-do-i-delete-from-multiple-tables-using-inner-join-in-sql-server) and to learn about the use of the deleted table you can follow [Using the inserted and deleted Tables](http://msdn.microsoft.com/en-us/library/ms191300%28v=sql.105%29.aspx)
Is it possible to delete from multiple tables in the same SQL statement?
[ "", "sql", "sql-server", "delete-row", "cascading-deletes", "" ]
Here is the issue I am facing. I have a User model that `has_one` profile. The query I am running is to find all users that belong to the opposite sex of the `current_user` and then I am sorting the results by last\_logged\_in time of all users. The issue is that last\_logged\_in is an attribute of the User model, while gender is an attribute of the profile model. Is there a way I can index both last\_logged\_in and gender? If not, how do I optimize the query for the fastest results?
An index on gender is unlikely to be effective, unless you're looking for a gender that is very under-represented in the table, so index on last\_logged\_in and let the opposite gender be filtered out of the result set without an index. It might be worth it if the columns were on the same table as an index on (gender, last\_logged\_in) could be used to identify exactly which rows are required, but even then the majority of the performance improvement would come from being able to retrieve the rows in the required sort order by scanning the index. Stick to indexing the last\_logged\_in column, and look for an explain plan that demonstrates the index being used to satisfy the sort order.
``` add_index :users, :last_logged_in add_index :profiles, :gender ``` This will speed up finding all opposite-sex users and then sorting them by time. You can't have cross-table indexes.
Rails: Indexing and optimizing db query
[ "", "sql", "ruby-on-rails", "ruby-on-rails-3", "indexing", "" ]
I have had a `python` script that uses `rpy2` internally. This script was working until very recently. However, it stopped working now. I got an error that I had not seen previously. I can reproduce the error with the following lines of code: ``` $ python Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import rpy2.robjects as robjects cannot find system Renviron Error in getLoadedDLLs() : there is no .Internal function 'getLoadedDLLs' Error in checkConflicts(value) : ".isMethodsDispatchOn" is not a BUILTIN function Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/2.6/site-packages/rpy2-2.2.5dev_20120328-py2.6-macosx-10.6- universal.egg/rpy2/robjects/__init__.py", line 17, in <module> from rpy2.robjects.robject import RObjectMixin, RObject File "/Library/Python/2.6/site-packages/rpy2-2.2.5dev_20120328-py2.6-macosx-10.6-universal.egg/rpy2/robjects/robject.py", line 9, in <module> class RObjectMixin(object): File "/Library/Python/2.6/site-packages/rpy2-2.2.5dev_20120328-py2.6-macosx-10.6-universal.egg/rpy2/robjects/robject.py", line 22, in RObjectMixin __show = rpy2.rinterface.baseenv.get("show") LookupError: 'show' not found ``` I do not why this should not work. Is there any way to fix this.
rpy2-2.2.5 belongs to the previous series (2.2.x), and was working with older versions of R (R keeps evolving). The current releases of rpy2 are in the 2.3.x series (latest is 2.3.6), but they require Python 2.7, or Python 3.3 (if you want the latest R, you'll have to get a recent Python ;-) )
[This page](http://thomas-cokelaer.info/blog/2012/01/installing-rpy2-with-different-r-version-already-installed/) describes a potential solution for this problem (at least, the problem described by the author looks very similar): apparently, rpy2 has to be recompiled and given the new version of R as an argument.
rpy2 not working after upgrading R to 3.0.1
[ "", "python", "r", "rpy2", "" ]
I am trying to compare two rows in a csv. For example:

```
abc, 2, foo, bar, baz
abc, 2, bar,baz, band
cab, 3, baz,bar, foo
cab, 3, baz,bar, foo
```

Is there a way, with the `csv` module or any other module in Python, to check whether column 1 is the same or different? For example: in the first two lines we see `2`, and in the third line we see the number `3`. Is there a way to find that out?

The idea behind it is to sum the values corresponding to a particular value in `column 1`, so:

```
abc, 2, 10,11,12
abc, 2, 7,8,9
cab, 3, 4,5,6
cab, 3, 1,2,3
```

I essentially want to sum up the values `12+9` since they have the same column 1, and the numbers `6 and 3` since the value 3 is the same in column 1.

To sum it up, I am assuming I can create a list with

```
a=list()
```

append the value to that list

```
a.append(float(line[4]))
```

and use numpy to sum it up

```
numpy.sum(a)
```

Could anyone please help me figure out a Pythonic way to find out whether the two values are the same?
Something like this: ``` >>> from collections import Counter >>> c = Counter() with open('abc') as f: reader = csv.reader(f, delimiter = ',', skipinitialspace = True) for row in reader: c[row[1]] += int(row[-1]) ... >>> c Counter({'2': 21, '3': 9}) ``` To find the columns use `itertools.groupby`: ``` >>> with open('abc') as f: reader = csv.reader(f, delimiter = ',', skipinitialspace = True) for k,g in groupby(enumerate(reader), key = lambda x:x[1][1]): print k," was common on the rows :",",".join(str(x[0]) for x in g) ... 2 was common on the rows : 0,1 3 was common on the rows : 2,3 ```
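A Python 3 variant of the `Counter` approach above, fed from an in-memory string instead of a file (the sample rows are the ones from the question):

```python
import csv
from collections import Counter
from io import StringIO

# Sample rows from the question, as an in-memory "file"
data = "abc, 2, 10,11,12\nabc, 2, 7,8,9\ncab, 3, 4,5,6\ncab, 3, 1,2,3\n"

totals = Counter()
for row in csv.reader(StringIO(data), skipinitialspace=True):
    # Key on the second column, sum the last column
    totals[row[1]] += int(row[-1])
```

With the question's data this groups 12+9 under `'2'` and 6+3 under `'3'`.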
Have a look at the [pandas](http://pandas.pydata.org/) library, you can easily aggregate columns based on groups. For example if you have a csv like: ``` col1,col2,col3,col4,col5 abc,2,10,11,12 abc,2,7,8,9 cab,3,4,5,6 cab,3,1,2,3 ``` You can group and sum based on the values in `col2` with just a couple of lines of code: ``` import pandas as pd df = pd.DataFrame.from_csv('test.csv') df.groupby('col2').sum() ``` Which gives you: ``` col3 col4 col5 col2 2 17 19 21 3 5 7 9 ```
Comparing two lines in csv file - Python
[ "", "python", "linux", "csv", "numpy", "" ]
Using Django 1.5.1. Python 2.7.3. I wanted to do a unique together constraint with a foreign key field and a slug field. So in my model meta, I did ``` foreign_key = models.ForeignKey("self", null=True, default=None) slug = models.SlugField(max_length=40, unique=False) class Meta: unique_together = ("foreign_key", "slug") ``` I even checked the table description in Postgres (9.1) and the constraint was put into the database table. ``` -- something like "table_name_foreign_key_id_slug_key" UNIQUE CONSTRAINT, btree (foreign_key_id, slug) ``` However, I could still save into the database table a foreign\_key of None/null and duplicate strings. For example, I could input and save ``` # model objects with slug="python" three times; all three foreign_key(s) # are None/null because that is their default value MO(slug="python").save() MO(slug="python").save() MO(slug="python").save() ``` So after using unique\_together, why can I still input three of the same valued rows? I'm just guessing right now that it might have to do with the default value of None for the foreign\_key field, because before the unique\_together, when I just had unique=True on slug, everything worked fine. So if that is the case, what default value should I have that indicates a null value, but also maintains the unique constraint?
In Postgresql `NULL` isn't equal to any other `NULL`. Therefore the rows you create are not the same (from Postgres' perspective). **Update** You have a few ways to deal with it: * Forbid the `Null` value for foreign key and use some default value * Override the `save` method of your model to check that no such row exists * Change SQL standard :)
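The "NULL is not equal to NULL" behaviour can be sketched with `sqlite3` (an assumption for portability; the question is about PostgreSQL, but both follow the SQL standard here). A composite unique constraint happily accepts repeated rows when the key column is NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mo (foreign_key INTEGER, slug TEXT, "
    "UNIQUE (foreign_key, slug))")

# Each NULL is distinct, so the composite unique constraint never fires
for _ in range(3):
    conn.execute("INSERT INTO mo (foreign_key, slug) VALUES (NULL, 'python')")

count = conn.execute("SELECT COUNT(*) FROM mo").fetchone()[0]
```

This is exactly why the question's three `MO(slug="python")` rows all saved without error.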
Add a `clean` method to your model, so you can edit an existing row. ``` def clean(self): queryset = MO.objects.exclude(id=self.id).filter(slug=self.slug) if self.foreign_key is None: if queryset.exists(): raise ValidationError("A row already exists with this slug and no key") else: if queryset.filter(foreign_key=self.foreign_key).exists(): raise ValidationError("This row already exists") ``` Beware, `clean` (or `full_clean`) isn't called by the default `save` method. NB: if you put this code in the `save` method, update forms (like in the admin) won't work: you will have a traceback error due to the `ValidationError` exception.
Django unique together constraint failure?
[ "", "python", "django", "postgresql", "" ]
I am reading two columns of a csv file using pandas `readcsv()` and then assigning the values to a dictionary. The columns contain strings of numbers and letters. Occasionally there are cases where a cell is empty. In my opinion, the value read to that dictionary entry should be `None` but instead `nan` is assigned. Surely `None` is more descriptive of an empty cell as it has a null value, whereas `nan` just says that the value read is not a number. Is my understanding correct, what IS the difference between `None` and `nan`? Why is `nan` assigned instead of `None`? Also, my dictionary check for any empty cells has been using `numpy.isnan()`: ``` for k, v in my_dict.iteritems(): if np.isnan(v): ``` But this gives me an error saying that I cannot use this check for `v`. I guess it is because an integer or float variable, not a string is meant to be used. If this is true, how can I check `v` for an "empty cell"/`nan` case?
NaN is used as a placeholder for [missing data *consistently* in pandas](https://pandas.pydata.org/pandas-docs/dev/user_guide/gotchas.html#choice-of-na-representation), consistency is good. I usually read/translate NaN as **"missing"**. *Also see the ['working with missing data'](https://pandas.pydata.org/docs/user_guide/missing_data.html) section in the docs.* Wes writes in the docs ['choice of NA-representation'](https://pandas.pydata.org/pandas-docs/dev/user_guide/gotchas.html#choice-of-na-representation): > After years of production use [NaN] has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used *everywhere* as the NA value, and there are API functions [`isna`](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.isna.html#pandas.DataFrame.isna) and [`notna`](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.notna.html#pandas.DataFrame.notna) which can be used across the dtypes to detect NA values. > ... > Thus, I have chosen the Pythonic “practicality beats purity” approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced. *Note: the ["gotcha" that integer Series containing missing data are upcast to floats](https://pandas.pydata.org/pandas-docs/dev/user_guide/gotchas.html#support-for-integer-na).* In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, *see [NA type promotions](https://pandas.pydata.org/pandas-docs/dev/user_guide/gotchas.html#na-type-promotions)*. ``` # without forcing dtype it changes None to NaN!
s_bad = pd.Series([1, None], dtype=object) s_good = pd.Series([1, np.nan]) In [13]: s_bad.dtype Out[13]: dtype('O') In [14]: s_good.dtype Out[14]: dtype('float64') ``` Jeff comments (below) on this: > `np.nan` allows for vectorized operations; its a float value, while `None`, by definition, forces object type, which basically disables all efficiency in numpy. > > > **So repeat 3 times fast: object==bad, float==good** Saying that, many operations may still work just as well with None vs NaN (but perhaps are not supported i.e. they may sometimes give [surprising results](https://stackoverflow.com/a/19866269/1240268)): ``` In [15]: s_bad.sum() Out[15]: 1 In [16]: s_good.sum() Out[16]: 1.0 ``` --- To answer the second question: You should be using [`isna`](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.isna.html#pandas.DataFrame.isna) and [`notna`](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.notna.html#pandas.DataFrame.notna) to test for missing data (NaN).
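A dependency-free sketch of the core difference (plain Python, no pandas or numpy required): NaN is an IEEE 754 float that is not equal to itself, while None is a distinct singleton type.

```python
import math

nan = float("nan")

# NaN is a float; per IEEE 754 it is not equal to itself
nan_is_float = isinstance(nan, float)
nan_equals_itself = (nan == nan)

# None is the NoneType singleton and compares equal only to itself
none_equals_itself = (None == None)

# math.isnan is the reliable way to detect NaN
nan_detected = math.isnan(nan)
```

The self-inequality of NaN is also why `x == float('nan')` can never be used as a missing-value check.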
`NaN` can be used as a numerical value on mathematical operations, while `None` cannot (or at least shouldn't). `NaN` is a numeric value, as defined in [IEEE 754 floating-point standard](http://en.wikipedia.org/wiki/IEEE_floating_point). `None` is an internal Python type (`NoneType`) and would be more like "inexistent" or "empty" than "numerically invalid" in this context. The main "symptom" of that is that, if you perform, say, an average or a sum on an array containing NaN, even a single one, you get NaN as a result... In the other hand, you cannot perform mathematical operations using `None` as operand. So, depending on the case, you could use `None` as a way to tell your algorithm not to consider invalid or inexistent values on computations. That would mean the algorithm should test each value to see if it is `None`. Numpy has some functions to avoid NaN values to contaminate your results, such as `nansum` and `nan_to_num` for example.
What is the difference between NaN and None?
[ "", "python", "numpy", "pandas", "nan", "" ]
I have 1034961 rows of data from a database (MySQL) table.

Structure:

table: tb\_blog\_image

```
| id(pk) | link(TEXT)     | img_link(TEXT)  |
| 1      |blogpost.com/aaa|http://sky.png   |
| 2      |blogpost.com/aaa|http://girl.png  |
| 3      |blogpost.com/aaa|http://cad.png   |
```

Now I want to select a specific img\_link list from tb\_blog\_image.

SQL:

```
select link, img_link from tb_blog_image where link = 'blogpost.com/aaa';
```

Result: 38 rows in 6.37 sec.

How can I improve the SELECT performance? Make the link column an index? Table normalization? I want it to run within 1 sec. I'd welcome a variety of tips.
Ideally, you'd use the primary key, (id) within your program. But failing that, an index on the link column would solve your problem. Right now, the query you are running requires that all one million rows from the table be loaded from disk, and then compared to your query string value. Using an index, this could be reduced to no more than three or four disk reads. Also, a text datatype is stored separately from the rest of the table, which is not very efficient. I would re-define the two text columns to something like varchar(300), which is certainly long enough for any URL that might be encountered, but provides for more efficient storage as well: the TEXT type is really for (potentially) long fields like memos, or the content of web pages.
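To see the suggested index being picked up by the planner, here is a hedged sketch using `sqlite3` (an assumption: the question is about MySQL, where you would check with `EXPLAIN` instead) with the question's table layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tb_blog_image "
    "(id INTEGER PRIMARY KEY, link TEXT, img_link TEXT)")
conn.execute("CREATE INDEX idx_link ON tb_blog_image (link)")

# EXPLAIN QUERY PLAN shows the lookup is satisfied by the index
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT link, img_link FROM tb_blog_image "
    "WHERE link = 'blogpost.com/aaa'"
).fetchall()
plan_text = " ".join(str(row) for row in plan)
```

Without the `CREATE INDEX`, the plan reports a full table scan instead.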
You may be interested in the variables to be tuned for fine performances. ``` Set global thread_cache_size = 4; Set global query_cache_size = 1024*1024*1024; Set global query_cache_limit=768*1024; Set global query_cache_min_res_unit = 2048; Set long_query_time = 5; ``` Source: **<http://www.snip2code.com/Snippet/1171/List-of-Variables-to-be-set-in-MySQL-for?fromPage=1>**
how to improve select performance in mysql?
[ "", "mysql", "sql", "performance", "" ]
I have been searching and although I find long and complicated (many features I do not need) on how to just simply have python have linux use the cat function to concatenate files to a single file. From my reading apparently subprocess is the way to do this. Here is what I have but obviously does not work :( ``` subprocess.call("cat", str(myfilelist[0]), str(myfilelist[1]), str(myfilelist[2]), str(myfilelist[3]), ">", "concatinatedfile.txt"]) ``` the above assumes: ``` myfilelist[] ``` the above list has 4 filenames + paths as a list; for example one item in the list is "mypath/myfile1.txt" I would take non subprocess (but simple) methods also
since `>` is a shell feature you need to pass `shell=True`

`subprocess.call("echo hello world > some.txt", shell=True)`

... works in Windows at least

alternatively you can do something like (note the file must be opened for writing, and without `shell=True` the command should be a list of arguments)

```
with open("redirected_output.txt", "w") as f:
    subprocess.call(["/home/bin/cat", "some_file.txt"], stdout=f)
```
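Since the question also invites non-subprocess methods, here is a pure-Python concatenation sketch using `shutil.copyfileobj`, with throwaway temp files standing in for the real `myfilelist` paths:

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    # Throwaway input files standing in for the myfilelist entries
    paths = []
    for i, text in enumerate(["one\n", "two\n", "three\n"]):
        p = os.path.join(tmpdir, "part%d.txt" % i)
        with open(p, "w") as f:
            f.write(text)
        paths.append(p)

    # Concatenate without any subprocess at all
    out_path = os.path.join(tmpdir, "concatenatedfile.txt")
    with open(out_path, "w") as out:
        for p in paths:
            with open(p) as src:
                shutil.copyfileobj(src, out)

    with open(out_path) as f:
        combined = f.read()
```

`copyfileobj` streams in chunks, so this also works for files too large to read into memory at once.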
If you want to use cat and redirection > you must call a shell, for example via system: ``` from os import system system("cat a.txt b.txt > c.txt") ``` However you must pay attention to code injection.
python concatenate files using subprocess
[ "", "python", "linux", "concatenation", "" ]
I need to compare two sets for equality. I have an **Employee** table as ``` emp_id | Name ----------------- 1 Thomas 2 John 3 Jeff ..... ..... ``` and an **Employee\_Language** table ``` Emp_id | Language ------------------ 1 English 1 French 2 Thai 3 English 3 French 3 Chinese ... ... ``` So I need to find all employees who knows a list of given languages for example 'English,French' I know, using ``` select * from employee_language lng where find_in_set(lng.language,'English,French') ``` Will give a list of employees who either knows English or French .. How can I get employees who knows both ?
``` WHERE language IN('x','y') GROUP BY emp_id HAVING COUNT (*) = 2 ``` (where '2' is the number of items in the IN clause) So your whole query could be: ``` SELECT e.emp_Id , e.Name FROM Employee e JOIN Employee_Language l ON e.emp_id = l.emp_id WHERE l.Language IN('English', 'French') GROUP BY e.emp_id HAVING COUNT(*) = 2 ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/00d74/3)
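A runnable sketch of the accepted `IN ... GROUP BY ... HAVING COUNT(*)` pattern, using `sqlite3` and the sample data from the question (sqlite is an assumption here for the sake of a self-contained example; the SQL itself is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_lang (emp_id INTEGER, lang TEXT)")
conn.executemany("INSERT INTO emp_lang VALUES (?, ?)", [
    (1, "English"), (1, "French"),
    (2, "Thai"),
    (3, "English"), (3, "French"), (3, "Chinese"),
])

# Only employees matching BOTH languages survive the HAVING clause
both = [r[0] for r in conn.execute(
    "SELECT emp_id FROM emp_lang "
    "WHERE lang IN ('English', 'French') "
    "GROUP BY emp_id HAVING COUNT(*) = 2 "
    "ORDER BY emp_id")]
```

Employee 2 matches neither language and is filtered by `WHERE`; anyone matching only one is filtered by `HAVING`.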
I just Improved the Accepted Answer by removing the constant (2) in the 'HAVING' clause and parameterized the criteria .. hope this help someone ``` set @lst = 'English,French,Chinese' ; SELECT e.emp_Id,Name FROM Employee e JOIN Employee_Language l ON e.emp_id = l.emp_id WHERE FIND_IN_SET(l.Language,@lst) GROUP BY e.emp_id HAVING COUNT(*) = (SELECT LENGTH(@lst) - LENGTH(REPLACE(@lst, ',', '')) AS `occurrences`)+1 ; ``` [SQL Fiddle](http://sqlfiddle.com/#!2/39421/1)
Compare two sets in MySQL for equality
[ "", "mysql", "sql", "" ]
I'm trying to read a column of numbers into python with the `csv` module. I get the following behavior: ``` import csv f=open('myfile.txt','r') reader=csv.reader(f) print [x for x in reader] # This outputs the contents of "myfile.txt", # broken up by line. print [x for x in reader] # This line prints an empty list. ``` Why is this happening? Is there some reason the reader object can only be used once?
Same reason here: ``` >>> li=[1,2,3,4,5,6,7,8,9] >>> it=iter(li) >>> print [x for x in it], [x for x in it] [1, 2, 3, 4, 5, 6, 7, 8, 9], [] ``` Note the empty list... csv.reader is an [iterator](http://docs.python.org/2/glossary.html#term-iterator) that produces items from a container or sequence one by one until the [StopIteration](http://docs.python.org/2/library/exceptions.html#exceptions.StopIteration) exception indicates there are no more items. For built-in types (and all library types like csv that I know of), iteration is one way, and the only way to 'go back' is to keep the items you are interested in or recreate the iterator. You can hack/fool csv.reader by doing a backwards seek I suppose, but why do this? You can make a copy of an iterator if you need to: ``` >>> it_copy=list(it) >>> print [x for x in it_copy],[x for x in it_copy] [1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Or use [itertools.tee](https://stackoverflow.com/a/17417194/298607) as Mark Ransom notes. The best is to just design your algorithm around a one-way trip through an iterator. Less memory and often faster.
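The same one-way behaviour shows up with any plain Python iterator, and materialising it once with `list` is the simple workaround:

```python
# csv.reader behaves like any other one-shot iterator
it = iter([1, 2, 3])
first_pass = [x for x in it]   # consumes every item
second_pass = [x for x in it]  # the iterator is already exhausted

# Materialise the rows once if you need more than one pass
rows = list(iter([1, 2, 3]))
pass_a = [x for x in rows]
pass_b = [x for x in rows]
```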
The reason you can only go one way is because the file you passed it only goes one way, if you want loop over the csv file again you can do something like ``` >>> with open("output.csv", 'r') as f: r = csv.reader(f) for l in r: print l f.seek(0) for l in r: print l ``` that was a really bad explanation, and unfortunately I don't know the term for `only goes one way`, perhaps someone else could help me out with my vocabulary...
Why can I only use a reader object once?
[ "", "python", "csv", "python-2.4", "" ]
I'm a newbie to SQL. There are two different tables with the same columns, and assume that Name is unique.

**TABLE\_A**

```
Name  | AGE
-----------
Toby  | 2
Milo  | 1
Achmed| 3
```

**TABLE\_B**

```
Name | AGE
-----------
Milo | 2
```

TABLE\_B is my superior table: if a TABLE\_B Name value also exists in TABLE\_A, then TABLE\_B's value should be selected. The RESULT is shown below. RESULT is not a table, but the result of the query.

**RESULT**

```
Name  | AGE
-----------
Toby  | 2
Milo  | 2
Achmed| 3
```

I have already solved this problem on the programming side, but I'm curious about the SQL query to get this result. T-SQL or PL/SQL doesn't matter.
You can use `FULL JOIN` to get all rows from both tables and `COALESCE` to give precedence to Table\_B if Name exists in both tables. ``` SELECT COALESCE (b.Name, a.Name) as Name ,COALESCE (b.Age, a.Age) as Age FROM Table_A a FULL JOIN Table_B b ON a.Name = b.Name ``` **[SQLFiddle DEMO](http://www.sqlfiddle.com/#!6/5b1ca/1)**
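If you only need the merge on the application side, the same "B wins on shared names" precedence can be sketched with plain Python dicts (a hypothetical in-memory stand-in for the two tables):

```python
# Rows keyed by Name; hypothetical stand-ins for TABLE_A and TABLE_B
table_a = {"Toby": 2, "Milo": 1, "Achmed": 3}
table_b = {"Milo": 2}

# Start from A, then let the "superior" table B overwrite shared names
merged = dict(table_a)
merged.update(table_b)
```

This mirrors what `COALESCE(b.Age, a.Age)` does row by row in the SQL version.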
You could also eliminate any duplicates while using union, I think this would be faster than a full join: ``` SELECT Name, Age FROM Table_B UNION SELECT Name, Age FROM Table_A WHERE TABLE_A.NAME NOT IN (SELECT Name FROM Table_B) ORDER BY Name; ```
merging two tables with a query
[ "", "sql", "sql-server", "t-sql", "plsql", "toad", "" ]
I am using [this buildpack](https://github.com/arg0s/heroku-buildpack-python-ffmpeg-lame) (heroku-python-buildpack-ffmpeg-lame) for my app on heroku that uses ffmpeg to convert uploaded videos to .mp4. I had been using the version without libmp3lame, and since I switched I am getting the error ``` ffmpeg: error while loading shared libraries: libmp3lame.so.0: cannot open shared object file: No such file or directory ``` I checked to see where libmp3lame.so.0 is located on my server with heroku run --app myapp find / -name libmp3lame.so.0, and the resulting path was /app/vendor/lame/lib/libmp3lame.so.0. I tried adding /vendor/lame/lib to my heroku path using the heroku config:set command, but even after adding it I still get the same error. Anyone know what the problem could be?
Fortunately, I stumbled across [this similar question](https://stackoverflow.com/questions/13799636/why-changing-heroku-buildpack-for-existing-app-doesnt-run-bin-release), and I was able to see that all I needed to do was look at the bin/release file from the buildpack I was using and make sure the correct PATH and LD\_LIBRARY\_PATH were set to match the config\_vars in that file. I set them using the heroku config:set command. Apparently the config\_vars are only taken from the apps first deploy. Anyway, hope this will save someone else some time down the road.
I solved the problem like this ``` ln -s /usr/local/lib/libmp3lame.so.0 /usr/lib64/libmp3lame.so.0 ```
ffmpeg: error while loading shared libraries: libmp3lame.so.0: cannot open shared object file: No such file or directory
[ "", "python", "heroku", "ffmpeg", "" ]
I've been all day trying to solve the `test_errors()` function in "[Exercise 48: Advanced User Input](http://learnpythonthehardway.org/book/ex48.html)" of the book [Learn Python The Hard Way](http://learnpythonthehardway.org/book/). `assert_equal()`, a function in the tests asks me for the tuples in order and I haven't been able to code it that way. My loops always returns first the nouns and last the error tuples, I don't know how to break the loop so it starts again but with the right values to continue or whatever is necessary to sort this tuples in the order they should be. Here's the code: ``` class Lexicon(object): def scan(self, stringo): vocabulary = [[('direction', 'north'), ('direction', 'south'), ('direction', 'east'), ('direction', 'west')], [('verb', 'go'), ('verb', 'kill'), ('verb', 'eat')], [('stop', 'the'), ('stop', 'in'), ('stop', 'of')], [('noun', 'bear'), ('noun', 'princess')], # Remember numbers [('error', 'ASDFADFASDF'), ('error', 'IAS')], [('number', '1234'), ('number','3'), ('number', '91234')]] self.stringo = stringo got_word = '' value = [] rompe = self.stringo.split() #split rompe en los espacios for asigna in vocabulary: for encuentra in asigna: if encuentra[1] in rompe: value.append(encuentra) return value eLexicon = Lexicon() from nose.tools import * from ex48.ex48 import eLexicon def test_directions(): assert_equal(eLexicon.scan("north"), [('direction', 'north')]) result = eLexicon.scan("north south east") assert_equal(result, [('direction', 'north'), ('direction', 'south'), ('direction', 'east')]) def test_verbs(): assert_equal(eLexicon.scan("go"), [('verb', 'go')]) result = eLexicon.scan("go kill eat") assert_equal(result, [('verb', 'go'), ('verb', 'kill'), ('verb', 'eat')]) def test_stops(): assert_equal(eLexicon.scan("the"), [('stop', 'the')]) result = eLexicon.scan("the in of") assert_equal(result, [('stop', 'the'), ('stop', 'in'), ('stop', 'of')]) def test_nouns(): assert_equal(eLexicon.scan("bear"), [('noun', 'bear')]) result = 
eLexicon.scan("bear princess") assert_equal(result, [('noun', 'bear'), ('noun', 'princess')]) #def test_numbers(): # assert_equal(lexicon.scan("1234"), [('number', 1234)]) # result = lexicon.scan("3 91234") # assert_equal(result, [('number', 3), # ('number', 91234)]) def test_errors(): assert_equal(eLexicon.scan("ASDFADFASDF"), [('error', 'ASDFADFASDF')]) result = eLexicon.scan("bear IAS princess") assert_equal(result, [('noun', 'bear'), ('error', 'IAS'), ('noun', 'princess')]) ====================================================================== FAIL: tests.ex48_tests.test_errors ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/totoro/Desktop/Python/projects/ex48/tests/ex48_tests.py", line 43, in test_errors ('noun', 'princess')]) AssertionError: Lists differ: [('noun', 'bear'), ('noun', 'p... != [('noun', 'bear'), ('error', '... First differing element 1: ('noun', 'princess') ('error', 'IAS') - [('noun', 'bear'), ('noun', 'princess'), ('error', 'IAS')] + [('noun', 'bear'), ('error', 'IAS'), ('noun', 'princess')] ---------------------------------------------------------------------- Ran 5 tests in 0.006s ``` Many thanks in advance for taking the time.
The words in the test are in the same order going in as coming out. As such you need to re-order your `for`-loops to iterate over the input first: ``` value = [] for rompe in stringo.split(): for asigna in vocabulary: for encuentra in asigna: if encuentra[1] == rompe: value.append(encuentra) ``` This will return the `encuentra`s in the correct order. **Note 1**: You should not be hard-coding the numbers or errors. **Note 2**: You can drastically reduce the complexity of this algorithm by using dictionary or two. ### Example: ``` vocabulary = { 'direction': 'north east south west up down left right back'.split(), 'noun': 'bear princess door cabinet'.split(), 'stop': 'the in of from at it'.split(), 'verb': 'go kill eat stop'.split(), } ''' This creates a lookup using a dictionary-comprehension: {'at': 'stop', # [...] 'up': 'direction', 'west': 'direction'} ''' classifications = {i: k for k, v in vocabulary.iteritems() for i in v} def classify(word): try: return 'number', int(word) except ValueError: return classifications.get(word, 'error'), word def scan(words): return [classify(word) for word in words.split()] ```
``` for word in self.stringo.split(): for pair in vocabulary: if pair[0][1] == word: value.append(pair[0]) elif pair[1][1] == word: value.append(pair[1]) elif pair[2][1] == word: value.append(pair[2]) elif pair[3][1] == word: value.append(pair[3]) ```
Learn Python the Hard Way, Exercise 48
[ "", "python", "tuples", "" ]
My regex is something like below ``` text = 'id 5 result pass id 4 result fail id 3 result fail id 2 result fail id 1 result pass' for i in re.finditer('id (.+?) result (.+)', text): id = i.group(1) result = i.group(2) print 'id' print 'result' ``` The output is OK. But how do I reverse it to get the results in the other order where id will start from 1 with the pass or fail result
One way is to sort the matches by their position in the string:

```
sorted(re.finditer(..., text), key=lambda m: m.start(), reverse=True)
```

Or you could turn the iterator into a list and reverse it:

```
for i in reversed(list(re.finditer('id (.+?) result (.+)', text))):
```
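A small end-to-end sketch of the list-reversal approach. Note the regex here is tightened to `\d+`/`\w+` (an assumption on my part: the question's greedy `(.+)` can over-match when everything sits on a single line):

```python
import re

text = ('id 5 result pass id 4 result fail id 3 result fail '
        'id 2 result fail id 1 result pass')

# \d+ and \w+ keep each match short instead of running to the end of the line
pairs = [(m.group(1), m.group(2))
         for m in reversed(list(re.finditer(r'id (\d+) result (\w+)', text)))]
```

`reversed` needs a sequence, which is why the iterator from `finditer` is wrapped in `list` first.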
Assign the finditer call to a list and reverse it. ``` matches = list(re.finditer(...)) matches.reverse() for i in matches: # etc. ```
how to reverse a regex in python?
[ "", "python", "regex", "reverse", "" ]
I have a list of data in ten columns as shown below. It has a few thousand lines.

```
$1 $2   $3      $4 $5   $6      $7   $8      $9 $10
|  8455 105@O13 |  8132 101@H13 8131 101@O13 |  68.43
|  7490 93@O16  |  8868 110@H16 8867 110@O16 |  68.30
|  7561 94@O12  |  9185 114@H13 9184 114@O13 |  66.83
|  8776 109@O12 |  7481 93@H12  7480 93@O12  |  65.55
|  8867 110@O16 |  8432 105@H23 8431 105@O23 |  64.48
|  9832 122@O13 |  6357 79@H16  6356 79@O16  |  64.44
|  9194 114@O15 |  5699 71@H12  5698 71@O12  |  64.06
|  8849 110@O25 |  5780 72@H12  5779 72@O12  |  63.99
```

I want to select lines where column $3 and column $6 match a particular criterion. The criterion I would like to use as the regular expression is 'the number before the "@" sign is the same in both columns'. If this criterion is matched, then I want to print those lines out to a new file.

I have tried in awk something like this

```
awk '$3~/[1@]/ {print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10}' hhHB_inSameLayer_065_128-maltoLyo12per.tbl
```

but it doesn't give what I want. I appreciate any help on this.

Note: I would also appreciate help in Perl or Python. Many thanks in advance.
Try the following in awk. Split `$3` and `$6` into arrays based on the `@` separator and print if the first elements of each match ``` awk '{split($3, a, "@"); split($6, b, "@");if (a[1] == b[1]) print}' ``` Or more idiomatically ``` awk '{split($3, a, "@"); split($6, b, "@")}; a[1] == b[1]' ``` Or a quick Python 2.6+ solution ``` from __future__ import print_function with open('testfile.txt') as f: for line in f: fields = line.split() fields3 = fields[2].split('@') fields6 = fields[5].split('@') if fields3[0] == fields6[0]: print(line, end='') ```
Code for GNU [sed](/questions/tagged/sed "show questions tagged 'sed'"): ``` sed -r '/^\|\s+\S+\s+([0-9]+@).*\|.*\1/!d' file ``` --- Assuming there is a header of two rows: ``` sed -r '1,2p;/^\|\s+\S+\s+([0-9]+@).*\|.*\1/!d' file ```
selecting lines according certain criteria
[ "", "python", "perl", "awk", "" ]
Suppose I have an ASOC-type table with the following entries: ``` +----------+----------+ | PERSON | SKILL | +----------+----------+ | Bob | Java | | Bob | Database | | Sarah | Java | | Sarah | HTML | | Jane | Java | | Jane | HTML | | Jane | Database | +----------+----------+ ``` I want a query that will return the list of persons who have both the Java and Database skills, i.e. Bob and Jane. If I run a query like: ``` SELECT PERSON FROM PERSON_SKILLS_ASOC WHERE SKILL = 'Java' OR SKILL = 'Database' ``` I will get Sarah as well, who doesn't qualify for the position I'm trying to fill. Do I need to do some kind of union/intersection query? I'm on MySQL. Thanks in advance.
```
select a.person
from
  (select person from person_skills_asoc where skill = 'Java') a,
  (select person from person_skills_asoc where skill = 'Database') b
where a.person = b.person;
```

or

```
select a.person
from person_skills_asoc a, person_skills_asoc b
where a.person = b.person
  and a.skill = 'Java'
  and b.skill = 'Database';
```
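The second (self-join) query above can be sketched end-to-end with `sqlite3` and the question's sample data (sqlite is just a convenient stand-in for MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE skills (person TEXT, skill TEXT)")
conn.executemany("INSERT INTO skills VALUES (?, ?)", [
    ("Bob", "Java"), ("Bob", "Database"),
    ("Sarah", "Java"), ("Sarah", "HTML"),
    ("Jane", "Java"), ("Jane", "HTML"), ("Jane", "Database"),
])

# Self-join: one alias pins 'Java', the other pins 'Database'
people = [r[0] for r in conn.execute(
    "SELECT a.person FROM skills a JOIN skills b ON a.person = b.person "
    "WHERE a.skill = 'Java' AND b.skill = 'Database' "
    "ORDER BY a.person")]
```

Sarah is excluded because no row of the `b` alias can satisfy `skill = 'Database'` for her.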
In SQL Server for example you could use `INTERSECT`, apparently that's not available in MySQL (yet?). A possible solution using a self join (check the **[SQLFiddle](http://www.sqlfiddle.com/#!2/45d94/3)**): ``` SELECT P1.PERSON FROM PERSON_SKILLS_ASOC AS P1 INNER JOIN PERSON_SKILLS_ASOC AS P2 ON P1.PERSON = P2.PERSON WHERE P1.SKILL = 'Java' AND P2.SKILL = 'Database'; ``` There is also another nice answer on SO on alternatives for `INTERSECT` in MySQL here: <https://stackoverflow.com/a/3201426/249353>.
Retrieving rows that have multiple association entries
[ "", "mysql", "sql", "" ]
My list of dicts: ``` [ {'town':'A', 'x':12, 'y':13}, {'town':'B', 'x':100, 'y':43}, {'town':'C', 'x':19, 'y':5} ] ``` My starting point is: ``` x = 2 y = 3 ``` My maximum range: ``` mxr = 30 ``` My function: ``` def calculateRange (x1, x2, y1, y2): squareNumber = math.sqrt(math.pow ((x1-x2),2) + math.pow((y1-y2),2)) return round(squareNumber, 1) ``` How do I iterate over my list and push the data, together with the result of my function, into a new list when the result of calculateRange is <= my maximum range? I would like to end up with: ``` [ {'town':'A', 'x':12, 'y':13, 'r':something }, {'town':'C', 'x':19, 'y':5, 'r':something} ] ```
Just use a loop: ``` for entry in inputlist: entry['r'] = min(mxr, calculateRange(x, entry['x'], y, entry['y'])) ``` Dictionaries are mutable, adding a key is reflected in all references to the dictionary. Demo: ``` >>> import math >>> def calculateRange (x1, x2, y1, y2): ... squareNumber = math.sqrt(math.pow ((x1-x2),2) + math.pow((y1-y2),2)) ... return round(squareNumber, 1) ... >>> x = 2 >>> y = 3 >>> mxr = 30 >>> inputlist = [ ... {'town':'A', 'x':12, 'y':13}, ... {'town':'B', 'x':100, 'y':43}, ... {'town':'C', 'x':19, 'y':5} ... ] >>> for entry in inputlist: ... entry['r'] = min(mxr, calculateRange(x, entry['x'], y, entry['y'])) ... >>> inputlist [{'town': 'A', 'x': 12, 'r': 14.1, 'y': 13}, {'town': 'B', 'x': 100, 'r': 30, 'y': 43}, {'town': 'C', 'x': 19, 'r': 17.1, 'y': 5}] ```
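If entries beyond `mxr` should be dropped entirely rather than clamped to it, the same loop can build a new filtered list instead (a sketch; `calculate_range` restates the question's function using `math.hypot`):

```python
import math

def calculate_range(x1, x2, y1, y2):
    # Same distance formula as the question, via math.hypot.
    return round(math.hypot(x1 - x2, y1 - y2), 1)

x, y, mxr = 2, 3, 30
towns = [
    {'town': 'A', 'x': 12, 'y': 13},
    {'town': 'B', 'x': 100, 'y': 43},
    {'town': 'C', 'x': 19, 'y': 5},
]

in_range = []
for entry in towns:
    r = calculate_range(x, entry['x'], y, entry['y'])
    if r <= mxr:
        # dict(entry, r=r) copies the entry and adds the 'r' key.
        in_range.append(dict(entry, r=r))
# Towns A (r=14.1) and C (r=17.1) survive; B (r=105.8) is dropped.
```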
I guess you're looking for something like this: ``` >>> lis = [ {'town':'A', 'x':12, 'y':13}, {'town':'B', 'x':100, 'y':43}, {'town':'C', 'x':19, 'y':5} ] >>> x = 2 >>> y = 3 for dic in lis: r = calculate(x,y,dic['x'],dic['y']) dic['r'] = r ... >>> lis = [x for x in lis if x['r'] <= mxr] >>> lis [{'y': 13, 'x': 12, 'town': 'A', 'r': 14.142135623730951}, {'y': 5, 'x': 19, 'town': 'C', 'r': 17.11724276862369}] ```
Python iterate list of dicts and create a new one
[ "", "python", "dictionary", "" ]
I am using a subquery to get the maximum id from a group, and the query returns the correct `max(id)` for each group. But what I want as a result from this table: ``` id--------Name--------GROUP------------Result 1---------ABC----------A----------------Pass 2---------DEF----------B----------------FAIL 3---------GEH----------A----------------Pass 4---------ABC----------B----------------FAIL 5---------DEF----------A----------------FAIL 6---------GEH----------B----------------PASS ``` is the max id of each group, counting only students whose result is Pass. Sorry for the English used to describe my problem.
@Narayan: this will give the max(id) for each group, for students with Result = 'Pass' ``` SELECT MAX(ID) FROM YourTable WHERE Result = 'PASS' GROUP BY `GROUP`; ```
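Checked against the sample table with SQLite from Python. Note that SQLite string comparison is case-sensitive, unlike MySQL's default collation, so both spellings of the pass result are listed explicitly in this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "GROUP" is a reserved word, so the column is double-quoted here.
conn.execute('CREATE TABLE students (id INT, name TEXT, "group" TEXT, result TEXT)')
conn.executemany("INSERT INTO students VALUES (?, ?, ?, ?)", [
    (1, 'ABC', 'A', 'Pass'), (2, 'DEF', 'B', 'FAIL'), (3, 'GEH', 'A', 'Pass'),
    (4, 'ABC', 'B', 'FAIL'), (5, 'DEF', 'A', 'FAIL'), (6, 'GEH', 'B', 'PASS'),
])
rows = conn.execute(
    'SELECT "group", MAX(id) FROM students '
    "WHERE result IN ('Pass', 'PASS') "
    'GROUP BY "group" ORDER BY "group"'
).fetchall()
# Group A: max passing id is 3; group B: max passing id is 6.
```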
If you only want groups where **all** students passed use ``` select max(id) as max_id, `group` from your_table group by `group` having sum(result <> 'Pass') = 0 ```
mysql query not returning specific rows with max(id) data
[ "", "mysql", "sql", "" ]
Maybe the title of the question is not appropriate, but here is the explanation: I have the following tables: ![enter image description here](https://i.stack.imgur.com/yIo5j.png) There are only 5 benefit codes available: ![enter image description here](https://i.stack.imgur.com/qac0p.png) So, one employee can have 1 to 5 associated benefits, but there can also be employees without any benefit. What I need to return in a query is a list of employees with a coded column for the associated benefits, like the following example: ![enter image description here](https://i.stack.imgur.com/cUiw0.png) So the column "**benefits**" is a coded column derived from the employee's associated benefits. If Peter has the **Medical** and **Education** benefits, then the coded value for the "benefits" column should be "01001" as shown in the table, where 0 means no association and 1 means association. Right now I'm doing the following and it works, but it takes too long to process, and I'm sure there is a better, faster way: ``` SELECT emp.employee_id, emp.name, emp.lastname, CASE WHEN lif.benefitcode IS NULL THEN '0' ELSE '1' END + CASE WHEN med.benefitcode IS NULL THEN '0' ELSE '1' END + CASE WHEN opt.benefitcode IS NULL THEN '0' ELSE '1' END + CASE WHEN uni.benefitcode IS NULL THEN '0' ELSE '1' END + CASE WHEN edu.benefitcode IS NULL THEN '0' ELSE '1' END as benefits FROM employee emp -- life LEFT JOIN ( SELECT c.benefitcode, c.employee_id FROM employee_benefit c WHERE c.isactive = 1 and c.benefitcode = 'lf' ) lif on lif.employee_id = emp.employee_id -- medical LEFT JOIN ( SELECT c.benefitcode, c.employee_id FROM employee_benefit c WHERE c.isactive = 1 and c.benefitcode = 'md' ) med on med.employee_id = emp.employee_id -- optical LEFT JOIN ( SELECT c.benefitcode, c.employee_id FROM employee_benefit c WHERE c.isactive = 1 and c.benefitcode = 'op' ) opt on opt.employee_id = emp.employee_id -- uniform LEFT JOIN ( SELECT c.benefitcode, c.employee_id FROM employee_benefit c WHERE c.isactive = 1 and c.benefitcode = 'un' ) uni on uni.employee_id = emp.employee_id -- education LEFT JOIN ( SELECT c.benefitcode, c.employee_id FROM employee_benefit c WHERE c.isactive = 1 and c.benefitcode = 'ed' ) edu on edu.employee_id = emp.employee_id ``` Any clue on the approach with the best performance? Thanks a lot.
First off, note that this operation actually [*de-normalizes*](http://en.wikipedia.org/wiki/Denormalization) the model and I would *keep the normalized table design* if it were my decision. I'm not sure what "pressure" mandates this situation, but I find that such approaches make the model harder to maintain and use. Such de-normalization *may* slow down queries that could otherwise use indexing on the join table. That being said, one approach is to use [PIVOT](http://msdn.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx), which is an SQL Server (2005+) extension. I've worked up an [example on sqlfiddle](http://sqlfiddle.com/#!3/adad5/1). The example needs to be tailored some the actual table schema and benefit values - the pivot is just over the linking (employee\_benefit) table in this case. Note the pre-filter on the benefit status to avoid the columns (and thus implicit grouping) from creeping through the PIVOT. ### Query ``` select pvt.* from ( select emp, benefitcode from benefits where isactive = 1 ) as b pivot ( -- implicit group by on other columns! count (benefitcode) -- the set of all values (to turn into columns) for benefitcode in ([lf], [md], [op]) ) as pvt ``` ### Definition ``` create table benefits ( emp int, isactive tinyint, benefitcode varchar(10) ) go insert into benefits (emp, isactive, benefitcode) values (1, 0, 'lf'), (1, 1, 'md'), (1, 1, 'op'), (2, 1, 'lf'), (3, 1, 'lf'), (3, 1, 'md'), (3, 1, 'md') go ``` ### Result ``` EMP LF MD OP 1 0 1 1 -- excludes inactive LF for emp #1 2 1 0 0 3 1 2 0 -- counts multiple benefits ``` Note that, just like with the many left-joins, the benefit data is now *column oriented* over a fixed set of values. Then just manipulate the data from the resulting query (e.g. as done in the original code, but checking for >= 1) to build the desired [bit array](http://en.wikipedia.org/wiki/Bit_array) value. ### Will it perform better? I'm not sure. 
However, my "initial hunch" is that the query may perform much better if the employee column is indexed but the benefit is not; and it may perform worse given the inverse - [inspect the query plan](http://msdn.microsoft.com/en-us/library/ms178071%28v=sql.105%29.aspx) of both approaches to know what SQL Server is *really* doing.
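Whichever query shape produces the per-benefit flags, the final "01001"-style string can also be assembled in application code once each employee's active benefit codes are known. A small sketch (the bit order follows the original query's column order: life, medical, optical, uniform, education):

```python
# Fixed bit positions, matching the original query's column order.
CODES = ('lf', 'md', 'op', 'un', 'ed')

def benefit_bits(active_codes):
    """Encode a collection of benefit codes as a fixed-width '0'/'1' string."""
    active = set(active_codes)
    return ''.join('1' if code in active else '0' for code in CODES)

# Medical + education, as in the Peter example from the question -> '01001'.
encoded = benefit_bits({'md', 'ed'})
```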
Why not just join to a table that codes the benefits to an integer (Life -> 10000, Medical -> 1000, ..., Education -> 1) and then * Sum the benefit code integer; * convert the sum to a string; * prepend the string '0000' and take the right-most 5 characters. *Update:* ``` select EmployeeID, right('0000' + convert(varchar(5),sum(map.value)),5) from ( select value=10000, benefit = 'Lif' union all select value= 1000, benefit = 'Med' union all select value= 100, benefit = 'Uni' union all select value= 10, benefit = 'Opt' union all select value= 1, benefit = 'Edu' ) map join blah blah group by EmployeeID ```
SQL query normalize multiple row values into one row one column field
[ "", "sql", "sql-server", "database", "" ]
I have some code like: ``` class Pump: def __init__(self): print("init") def getPumps(self): pass p = Pump.getPumps() print(p) ``` But I get an error like: ``` Traceback (most recent call last): File "C:\Users\Dom\Desktop\test\test.py", line 7, in <module> p = Pump.getPumps() TypeError: getPumps() missing 1 required positional argument: 'self' ``` Why doesn't `__init__` seem to be called, and what does this exception mean? My understanding is that `self` is passed to the constructor and methods automatically. What am I doing wrong here?
To use the class, first create an instance, like so: ``` p = Pump() p.getPumps() ``` A full example: ``` >>> class TestClass: ... def __init__(self): ... print("init") ... def testFunc(self): ... print("Test Func") ... >>> testInstance = TestClass() init >>> testInstance.testFunc() Test Func ```
You need to initialize it first: ``` p = Pump().getPumps() ```
Why do I get "TypeError: Missing 1 required positional argument: 'self'"?
[ "", "python", "constructor", "typeerror", "positional-argument", "" ]
So, I finally gave in and grabbed South. The problem is, every time I try to follow the tutorial and run ``` "python manage.py schemamigration myapp --initial" ``` I get an error ``` "There is no enabled application matching 'myapp'" ``` --Things I have tried-- I have triple-checked my settings file, running `import south` from the Django shell returns no errors, and I have added the folder containing manage.py to PYTHONPATH, as well as wsgi.py and settings.py. I have run python manage.py and python C:\path\to\manage.py variants, and even went into my python directory and verified that south was in the site-packages folder. syncdb runs fine, ending with "not synced (use migrations)". python manage.py migrate runs without returning errors but otherwise seems to have no effect. I have tried running the said command both before and after running syncdb, which has no effect on the outcome. --Other potentially pertinent info-- Django 1.5.1, Python 2.7, no other external apps used, Windows 7 64 bit, python is added to the windows path, South installed via the python setup.py install command. Installation completed successfully. I do not use a virtualenv, and would really prefer to avoid this as it would mean a lot of refactoring of this current project's setup and wasted time. I plan to move to a virtualenv setup in the future, but not now. What's going on? How do I fix this? Net searches revealed no good info at all; I am completely at a loss...
migrations exist on a per-app basis. each app may or may not have its own migrations, but you need to create them for each app where you want to use them. (often all apps) `./manage.py migrate` is a shortcut that runs migrations for all apps
This error can be misleading: it is thrown not when South tries to import the app, but when it tries to get the app's `models` module. * perhaps it cannot import the application (because you didn't add its name to `INSTALLED_APPS`) * perhaps it cannot import the `models` module, because the file `models.py` does not exist, or because the directory `models/` does not contain an `__init__.py`. South doesn't import the `models` module itself. Instead, it leaves that job to `django.db.models.get_app('app_of_interest')`, which according to its docstring "Returns the module containing the models for the given app\_label." The error message raised by `get_app` is, in fact, different depending on whether it failed to import the app or the model, but both exceptions are `ImproperlyConfigured`, and the `schemamigrations` script doesn't look any deeper than that. Because South says it is accepting security updates only (it entered end-of-life with Django 1.7's migration feature), I'm not submitting a fix to its codebase, but instead documenting the problem here.
Django South Error: "there is no enabled application matching 'myapp'"
[ "", "python", "django", "django-south", "" ]
How do I select rows with names starting with 'A' through names starting with 'D', sorted alphabetically? Something like a combination of LIKE and <= / >=? Sample Table: ``` ID Name 4001 Spartakol 4002 Tabunjong 4003 Mabini 4004 Carlos 4005 Antonio 4006 Babsy 4007 Jose 4008 David 4009 Cruz ``` Sample Output: ``` 4005 Antonio 4006 Babsy 4004 Carlos 4009 Cruz 4008 David ``` With names starting with 'Cr' through names starting with 'D', the sample output would be: ``` 4009 Cruz 4008 David ```
Select the names from `'A'` up to, but not including `'E'`: ``` select ID, Name from SampleTable where Name >= 'A' and Name < 'E' order by Name ``` As this is a plain comparison, it can use an index if you have one for that field.
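The range comparison is easy to verify from Python with SQLite (collation details differ between engines; SQLite compares byte-wise by default, but the idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (id INT, name TEXT)")
conn.executemany("INSERT INTO sample VALUES (?, ?)", [
    (4001, 'Spartakol'), (4002, 'Tabunjong'), (4003, 'Mabini'),
    (4004, 'Carlos'), (4005, 'Antonio'), (4006, 'Babsy'),
    (4007, 'Jose'), (4008, 'David'), (4009, 'Cruz'),
])
rows = conn.execute(
    "SELECT id, name FROM sample "
    "WHERE name >= 'A' AND name < 'E' ORDER BY name"
).fetchall()
# Returns Antonio, Babsy, Carlos, Cruz, David in alphabetical order.
```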
Guffa's answer is probably the most efficient. To be complete, you could also use ``` LIKE '[a-d]%' ``` Depending on your database `COLLATION`, `LIKE` might be case sensitive or not.
Select a Range of Letters
[ "", "sql", "sql-server", "" ]
I am given a CSV file, but I want to skip the first row of data and move on to the next. Here is my code: ``` def read_csv(inputfile): return list(csv.reader(inputfile)) #<----- def generate_xml(reader,outfile): root = Element('Solution') root.set('version','1.0') tree = ElementTree(root) head = SubElement(root, 'DrillHoles') description = SubElement(head,'description') current_group = None i = 1 for row in reader.next(): #<----- x1,y1,z1,x2,y2,z2,cost = row if current_group is None or i != current_group.text: current_group = SubElement(description, 'hole',{'hole_id':"%s"%i}) collar = SubElement(current_group,'collar') toe = SubElement(current_group,'toe') cost1 = SubElement(current_group,'cost') collar.text = ','.join((x1,y1,z1)) toe.text = ','.join((x2,y2,z2)) cost1.text = cost i+=1 head.set('total_holes', '%s'%i) indent.indent(root) tree.write(outfile) ``` As you can see, I return the csv file as a list and then pass it on to the generate_xml function. However, when I run the complete program I get the error: ``` error: 'list' object has no attribute 'next' ```
You have a *list*, not an iterator. Just slice it instead: ``` for row in reader[1:]: ``` or skip that first row when you still have an actual `csv.reader()` object: ``` def read_csv(inputfile): reader = csv.reader(inputfile) next(reader) return list(reader) ``` You'd be better off returning the `reader` object instead of reading all rows into memory there; unless you absolutely need random access to the rows. You also should really use the [`next()` function](http://docs.python.org/2/library/functions.html#next) instead as it works for Python 2.6+ and 3, where the iterator `.next()` method has been renamed to `.__next__()`. You'd otherwise *never* use `for row in reader.next()`, because `.next()` on a `csv.reader()` iterator returns *one* row.
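A lazy variant of `read_csv` along those lines, demonstrated with an in-memory file (a sketch, not the asker's exact data):

```python
import csv
import io

def read_csv(inputfile):
    """Return a csv.reader positioned after the header row."""
    reader = csv.reader(inputfile)
    next(reader, None)  # drop the header; the default avoids StopIteration on empty input
    return reader

sample = io.StringIO("x1,y1,z1\n1,2,3\n4,5,6\n")
rows = list(read_csv(sample))
# Only the data rows remain; the header is consumed.
```

The caller would then iterate the reader directly with `for row in reader:` instead of calling `reader.next()`.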
For python 2.x, the code is: ``` data = [] with open('xxx.csv') as f: r = csv.reader(f) name = r.next()[1] # assume the first 2 rows are name and unit unit = r.next() for row in r: data.append(row) ``` For python 3.x, use `next(r)` instead of `r.next()`
CsvReader Next function
[ "", "python", "csv", "" ]
I have a query similar to this: ``` select 'table_1', count(*) from table_1 union select 'table_2', count(*) from table_2 union select 'table_n', count(*) from table_n ``` returning the total of rows for each table (n tables). ``` table_1 | 100 table_2 | 150 table_n | 400 ``` I want to know if there is a mysql function that can just add a new record at the end making the sum of all rows like this: ``` table_1 | 100 table_2 | 150 table_n | 400 total | 650 ``` Is there a way to do that in mySQL (version 5.5) whithout using a procedure? (for exemple using a variable inside the sql if supported)
``` select ifnull(table_name,'Total'), sum(row_count) from (select 'table_1' table_name, count(*) row_count from table_1 union select 'table_2' table_name, count(*) row_count from table_2 union select 'table_n' table_name, count(*) row_count from table_n ) temp group by table_name with rollup; ```
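On engines without `WITH ROLLUP`, one extra `UNION ALL` member gives the same final row. Sketched here with SQLite from Python and three stand-in tables sized to match the question's sample counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name, count in (("table_1", 100), ("table_2", 150), ("table_n", 400)):
    conn.execute("CREATE TABLE %s (id INT)" % name)
    conn.executemany("INSERT INTO %s VALUES (?)" % name,
                     [(i,) for i in range(count)])

rows = conn.execute(
    """SELECT 'table_1' AS t, COUNT(*) AS n FROM table_1
       UNION ALL SELECT 'table_2', COUNT(*) FROM table_2
       UNION ALL SELECT 'table_n', COUNT(*) FROM table_n
       UNION ALL SELECT 'total',
             (SELECT COUNT(*) FROM table_1)
           + (SELECT COUNT(*) FROM table_2)
           + (SELECT COUNT(*) FROM table_n)"""
).fetchall()
# Per-table counts plus a final 'total' row of 650.
```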
Maybe use WITH ROLLUP: ``` SELECT TableName, TableCount FROM ( SELECT 'table_1' AS TableName, COUNT(*) AS TableCount FROM table_1 UNION SELECT 'table_2' AS TableName, COUNT(*) AS TableCount FROM table_2 UNION SELECT 'table_n' AS TableName, COUNT(*) AS TableCount FROM table_n ) Sub1 GROUP BY TableName, TableCount WITH ROLLUP ```
mySQL union with calculated row
[ "", "mysql", "sql", "sum", "" ]
I'm looking for a way to split a text into n-grams. Normally I would do something like: ``` import nltk from nltk import bigrams string = "I really like python, it's pretty awesome." string_bigrams = bigrams(string) print string_bigrams ``` I am aware that nltk only offers bigrams and trigrams, but is there a way to split my text in four-grams, five-grams or even hundred-grams? Thanks!
Great native Python-based answers given by other users. But here's the `nltk` approach (just in case the OP gets penalized for reinventing what already exists in the `nltk` library). There is an [ngram module](http://www.nltk.org/_modules/nltk/model/ngram.html) that people seldom use in `nltk`. It's not because it's hard to read ngrams, but training a model based on ngrams where `n > 3` will result in much data sparsity. ``` from nltk import ngrams sentence = 'this is a foo bar sentences and I want to ngramize it' n = 6 sixgrams = ngrams(sentence.split(), n) for grams in sixgrams: print(grams) ```
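For reference, the same n-grams fall out of plain `zip` over shifted slices, with no dependency at all; a sketch:

```python
def ngrams(tokens, n):
    """Yield n-grams as tuples by zipping n shifted views of the token list."""
    return zip(*(tokens[i:] for i in range(n)))

words = "I really like python, it's pretty awesome.".split()
fourgrams = list(ngrams(words, 4))
# 7 tokens produce 4 four-grams, the first being ('I', 'really', 'like', 'python,').
```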
I'm surprised that this hasn't shown up yet: ``` In [34]: sentence = "I really like python, it's pretty awesome.".split() In [35]: N = 4 In [36]: grams = [sentence[i: i + N] for i in range(len(sentence) - N + 1)] In [37]: for gram in grams: print (gram) ['I', 'really', 'like', 'python,'] ['really', 'like', 'python,', "it's"] ['like', 'python,', "it's", 'pretty'] ['python,', "it's", 'pretty', 'awesome.'] ```
n-grams in python, four, five, six grams?
[ "", "python", "string", "nltk", "n-gram", "" ]
I have this SQL Syntax but it's not working and receive this error: > "#1248 - Every derived table must have its own alias". Could you help me? ``` SELECT * FROM produse_comenzi JOIN comenzi ON comenzi.id_comanda = produse_comenzi.id_comanda JOIN (SELECT DISTINCT numar_factura FROM facturi) ON facturi.id_comanda = comenzi.id_comanda ```
In the second join you are using a subquery but you haven't given the result an alias, i.e. something to identify the result by ``` SELECT * FROM produse_comenzi JOIN comenzi ON comenzi.id_comanda = produse_comenzi.id_comanda JOIN (SELECT DISTINCT numar_factura FROM facturi) -- has no alias ON facturi.id_comanda = comenzi.id_comanda ``` you should do ``` SELECT * FROM produse_comenzi JOIN comenzi ON comenzi.id_comanda = produse_comenzi.id_comanda JOIN (SELECT DISTINCT numar_factura, id_comanda FROM facturi) AS facturi ON facturi.id_comanda = comenzi.id_comanda ```
You must add an alias to each subquery that's being treated as a table: ``` SELECT * FROM produse_comenzi JOIN comenzi ON comenzi.id_comanda = produse_comenzi.id_comanda JOIN (SELECT DISTINCT numar_factura FROM facturi) x ON x.id_comanda = comenzi.id_comanda ``` Here I have named the result set `x` and referred to that in the join condition. You can change "x" to whatever you like.
Error message - Every derived table must have its own alias
[ "", "sql", "distinct", "alias", "" ]
I assume there's some beautiful Pythonic way to do this, but I haven't quite figured it out yet. Basically I'm looking to create a testing module and would like a nice simple way for users to define a character set to pull from. I could potentially concatenate a list of the various charsets associated with string, but that strikes me as a very unclean solution. Is there any way to get the charset that the regex represents? Example: ``` def foo(regex_set): re.something(re.compile(regex_set)) foo("[a-z]") >>> abcdefghijklmnopqrstuvwxyz ``` The compile is of course optional, but in my mind that's what this function would look like.
Paul McGuire, author of [Pyparsing](http://pyparsing.wikispaces.com), has written an [inverse regex parser](http://pyparsing.wikispaces.com/file/view/invRegex.py), with which you could do this: ``` import invRegex print(''.join(invRegex.invert('[a-z]'))) # abcdefghijklmnopqrstuvwxyz ``` If you do not want to install Pyparsing, there is also [a regex inverter that uses only modules from the standard library](http://www.mail-archive.com/python-list@python.org/msg125198.html) with which you could write: ``` import inverse_regex print(''.join(inverse_regex.ipermute('[a-z]'))) # abcdefghijklmnopqrstuvwxyz ``` Note: neither module can invert all regex patterns. --- And there are differences between the two modules: ``` import invRegex import inverse_regex print(repr(''.join(invRegex.invert('.')))) print(repr(''.join(inverse_regex.ipermute('.')))) ``` yields ``` '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~' '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' ``` --- Here is another difference, this time pyparsing enumerates a larger set of matches: ``` x = list(invRegex.invert('[a-z][0-9]?.')) y = list(inverse_regex.ipermute('[a-z][0-9]?.')) print(len(x)) # 26884 print(len(y)) # 1100 ``` ---
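A standard-library-only fallback: for simple character classes (though not for arbitrary regexes), each printable character can be tested against the pattern; a sketch:

```python
import re
import string

def expand_class(pattern):
    """Printable characters matched by a one-character pattern such as '[a-z]'."""
    return ''.join(c for c in string.printable if re.fullmatch(pattern, c))

# expand_class('[a-z]') yields the 26 lowercase letters in order, because
# string.printable lists digits, then lowercase, then uppercase, then the rest.
```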
A regex is not needed here. If you want to have users select a character set, let them just pick characters. As I said in my comment, simply listing all the characters and putting checkboxes by them would be sufficent. If you want something that is more compact, or just looks cooler, you could do something like one of these: ![One way of displaying the letter selection. (green = selected)](https://i.stack.imgur.com/RMnsf.png) ![Another way of displaying the letter selection. (no x = selected](https://i.stack.imgur.com/NjxXk.png) ![Yet another way of displaying the letter selection. (black bg = selected)](https://i.stack.imgur.com/7cBu5.png) Of course, if you actually use this, what you come up with will undoubtedly look better than these (And they will also actually have all the letters in them, not just "A"). If you need, you could include a button to invert the selection, select all, clear selection, save selection, or anything else you need to do.
Generate random string from regex character set
[ "", "python", "regex", "string", "" ]
In Python, is it possible to write a function that returns the dimensions of a multidimensional array (given the assumption that the array's dimensions are not jagged)? For example, the dimensions of `[[2,3], [4,2], [3,2]]` would be `[3, 2]`, while the dimensions of `[[[3,2], [4,5]],[[3,4],[2,3]]]` would be `[2,2,2]`. Does Python have any built-in functions that will return all of the dimensions of a multidimensional array, or will I need to implement this function myself?
No, there's nothing built-in because with such "arrays"1 it can be jagged and the concept of "dimensions" or "shape" doesn't make any sense at all. So, you'll have to write your own. If you can make an assumption of uniformity along all dimensions, you can proceed as follows: ``` dim1 = len(a) dim2 = len(a[0]) dim3 = len(a[0][0]) . . . ``` It'd be pretty easy to make this recursive to handle all dimensions. This should do it: ``` def dim(a): if not type(a) == list: return [] return [len(a)] + dim(a[0]) ``` But if you need something like this, you might want to consider looking at [NumPy](http://www.numpy.org/) arrays which have [`numpy.ndarray.shape`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) which would give you what you're looking for. ``` from numpy import array l = [[2, 3], [4, 2], [3, 2]] a = array(l) print a.shape ``` Output ``` (3, 2) ``` 1 In scare quotes because you're not really looking at array, you're looking at a list, or a list of lists, or a list of list of lists....
You can do it with numpy: ``` import numpy l = [[2,3], [4,2], [3,2]] m = numpy.array(l) print m.shape ``` But the shape of your second example is [2,2,2], not [1,4,5], unless I've misunderstood your question...
Find the dimensions of a multidimensional Python array
[ "", "python", "arrays", "dimensions", "" ]
I was looking all over for any kind of small example or something to help me and I couldn't find it. I feel like I am missing something obvious. I have a large function in one .py file that I want to call in another .py file and use all the variables from the function and I couldn't do it. So I was trying to do it with a small example and still can't get it. Say I have a file mod1.py that contains: ``` def mod(num): a = 5 b = 6 c = num ``` I tried adding all kinds of returns to it and still couldn't get it to work. But I call it in a file named mod\_read.py: ``` import mod1 mod1.mod(1) d = mod1.mod.a print(d) ``` When I run mod\_read.py I get the error: "AttributeError: 'function' object has no attribute 'a'". What obvious thing am I missing?
1. Local variables only exist while the function is being run. 2. Local variables are never attributes of their containing function. If you want to get values from a function then you need to return them from it. Otherwise, look into OOP if you need more persistence.
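A sketch of what "return them" looks like for the question's `mod` function:

```python
def mod(num):
    a = 5
    b = 6
    c = num
    return a, b, c  # expose the locals explicitly as a tuple

a, b, c = mod(1)
# The caller now holds 5, 6 and 1 under its own names.
```

For many values, returning a dict (or, per the OOP suggestion, a small class instance) keeps the call site readable.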
You can return multiple values from a function as tuple: ``` def mod(num): a = 5 b = 6 c = num return (a, b, num) ``` Then in the other file, use it something like this: ``` import mod1 d, b, num = mod1.mod(1) print(d) ```
Accessing variables that are inside a function
[ "", "python", "module", "" ]
Quick question to mainly satisfy my curiosity on the topic. I am writing some large python programs with an SQlite database backend and will be dealing with a large number of records in the future, so I need to optimize as much as I can. For a few functions, I am searching through keys in a dictionary. I have been using the "in" keyword for prototyping and was planning on going back and optimizing those searches later as I know the "in" keyword is generally O(n) (as this just translates to python iterating over an entire list and comparing each element). But, as a python dict is basically just a hash map, is the python interpreter smart enough to interpret: ``` if(key in dict.keys()): ...code... ``` to: ``` if(dict[key] != None): ...code... ``` It is basically the same operation but the top would be O(n) and the bottom would be O(1). It's easy for me to use the bottom version in my code, but then I was just curious and thought I would ask.
First, `key in d.keys()` is guaranteed to give you the same value as `key in d` for any dict `d`. And the `in` operation on a `dict`, or the `dict_keys` object you get back from calling `keys()` on it (in 3.x), is *not* O(N), it's O(1). There's no real "optimization" going on; it's just that using the hash is the obvious way to implement `__contains__` on a hash table, just as it's the obvious way to implement `__getitem__`. --- You may ask where this is guaranteed. Well, it's not. [Mapping Types](http://docs.python.org/3/library/stdtypes.html#mapping-types-dict) defines `dict` as, basically, a hash table implementation of [`collections.abc.Mapping`](http://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping). There's nothing stopping someone from creating a hash table implementation of a Mapping, but still providing O(N) searches. But it would be extra work to make such a bad implementation, so why would they? If you really need to prove it to yourself, you can test every implementation you care about (with a profiler, or by using some type with a custom `__hash__` and `__eq__` that logs calls, or…), or read the source. --- In 2.x, you do not want to call `keys`, because that generates a `list` of the keys, instead of a `KeysView`. You could use `iterkeys`, but that may generate an iterator or something else that's not O(1). So, just use the dict itself as a sequence. Even in 3.x, you don't want to call `keys`, because there's no need to. Iterating a `dict`, checking its `__contains__`, and in general treating it like a sequence is *always* equivalent to doing the same thing to its keys, so why bother? (And of course building the trivial `KeyView`, and accessing through it, are going to add a few nanoseconds to your running time and a few keystrokes to your program.) (It's not quite clear that using sequence operations is equivalent for `d.keys()`/`d.iterkeys()` and `d` in 2.x. 
Other than performance issues, they *are* equivalent in every CPython, Jython, IronPython, and PyPy implementation, but it doesn't seem to be stated anywhere the way it is in 3.x. And it doesn't matter; just use `key in d`.) --- While we're at it, note that this: ``` if(dict[key] != None): ``` … is not going to work. If the `key` is not in the `dict`, this will raise `KeyError`, not return `None`. Also, you should never check `None` with `==` or `!=`; always use `is`. You can do this with a `try`—or, more simply, do `if dict.get(key, None) is not None`. But again, there's no reason to do so. Also, that won't handle cases where `None` is a perfectly valid item. If that's the case, you need to do something like `sentinel = object(); if dict.get(key, sentinel) is not sentinel:`. --- So, the right thing to write is: ``` if key in d: ``` --- More generally, this is not true: > I know the "in" keyword is generally O(n) (as this just translates to python iterating over an entire list and comparing each element The `in` operator, like most other operators, is just a call to a `__contains__` method (or the equivalent for a C/Java/.NET/RPython builtin). `list` implements it by iterating the list and comparing each element; `dict` implements it by hashing the value and looking up the hash; `blist.blist` implements it by walking a B+Tree; etc. So, it could be O(n), O(1), O(log n), or something completely different.
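The sentinel pattern mentioned above, sketched for a dict where `None` is a legitimate stored value:

```python
sentinel = object()  # unique object that cannot collide with any stored value

d = {'present': None}

# d.get with the sentinel distinguishes "stored None" from "missing key",
# which a comparison against None cannot do.
has_present = d.get('present', sentinel) is not sentinel
has_absent = d.get('absent', sentinel) is not sentinel
```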
In Python 2 `dict.keys()` creates the whole list of keys first that's why it is an `O(N)` operation, while `key in dict` is an `O(1)` operation. `if(dict[key] != None)` will raise `KeyError` if key is not found in the dict, so it is not equivalent to the first code. Python 2 results: ``` >>> dic = dict.fromkeys(range(10**5)) >>> %timeit 10000 in dic 1000000 loops, best of 3: 170 ns per loop >>> %timeit 10000 in dic.keys() 100 loops, best of 3: 4.98 ms per loop >>> %timeit 10000 in dic.iterkeys() 1000 loops, best of 3: 402 us per loop >>> %timeit 10000 in dic.viewkeys() 1000000 loops, best of 3: 457 ns per loop ``` In Python 3 `dict.keys()` returns a view object which is quite faster than Python 2's `keys()` but still slower simple normal `key in dict`: Python 3 results: ``` >>> dic = dict.fromkeys(range(10**5)) >>> %timeit 10000 in dic 1000000 loops, best of 3: 295 ns per loop >>> %timeit 10000 in dic.keys() 1000000 loops, best of 3: 475 ns per loop ``` Use just: ``` if key in dict: #code ```
Python dictionary keys. "In" complexity
[ "", "python", "dictionary", "hashmap", "complexity-theory", "big-o", "" ]
I'd like to concatenate a sequence of integers with a list of tokens in a single byte string, so that, for example, the following: ``` foo bar baz ``` would become (with 1 byte for the integer representation): ``` b'\x00foo' b'\x01bar' b'\x02baz' ``` The best I've come up with looks like this: ``` for i, token in zip(range(256), "foo bar baz".split()): print(i.to_bytes(1, sys.byteorder) + token.encode()) ``` However the iteration itself involves conversion between Python's integer and bytes at each step in the loop, so that it is much slower (10 times for me) than to simply iterate over the integers. The question is: is there a way to iterate directly over the *byte string representation* of the integers, and not the integer themselves?
I think this might be faster:

```
from struct import pack
for i, token in enumerate(b"foo bar bazq".split()):
    print(pack('@B%ds' % len(token), i, token))
```

Output:

```
b'\x00foo'
b'\x01bar'
b'\x02bazq'
```

If your tokens are always 3 characters long, you could use the simpler fixed-size format string:

```
print(pack('@B3s', i, token))
```
You can use numpy and access directly the buffer interface to avoid conversions: On python 2.7 (numpy 1.7.1), this code: ``` N = arange(256, dtype='uint8') for i, token in enumerate("foo bar baz".split()): print repr(N.data[i] + token.encode()) ``` gives: ``` '\x00foo' '\x01bar' '\x02baz' ```
Increment efficiently over a sequence of integers in byte representation
[ "", "python", "python-3.x", "" ]
When I return a list of products starting with the letter "A" it obviously returns all products that start with the letter "A". I would also like it to return products whose titles begin with "The" followed by a word starting with "A". I also would like to get a better idea of what the best way to do this is. Would you use purely MS SQL or pass over the parameters using ASP.NET?

For example, I currently return this when I search for products with the letter "R"

* Rocky
* Rocky 2

Sample code

```
SELECT title
FROM dbo.product
WHERE (title LIKE 'R%')
```

I would like it to return.. for example...

* The Rock
* Rocky
* Rocky 2

**UPDATE:** Thanks for all your help on this. I am going to investigate Full Text Search a little more. But in the interim I will use...

```
SELECT title
FROM dbo.product
WHERE (title LIKE 'The R%') OR (title LIKE 'R%')
```
To also match titles that have "The" at the front:

```
SELECT title
FROM dbo.product
WHERE title LIKE 'R%'
   OR title LIKE 'The R%'
```

This approach has the advantage of avoiding a leading wildcard, which would prevent an index seek. If you have a list of leading words you would like to ignore, you would use a set of search strings, e.g., 'R%', 'The R%', 'A R%', 'the R%', etc. (However, an IN list doesn't work with wildcards.)
Try this one - ``` SELECT title FROM dbo.product WHERE title LIKE '%R%' ```
how to ignore the "the" in a product title ASP.NET/SQL
[ "", "sql", "asp.net", "sql-server", "linq-to-sql", "" ]
I'd like to store a set of data into a database, **but** if it's a pre-existing record, I'd like to alter it. Otherwise, create a new one. Is there a combined statement for that? (I haven't found one when googling.)

Right now, the best I have is to check whether the record already exists and then perform one of the two operations. Seems cumbersome to me.

```
create table Stuff (
    Id int identity(1001, 1) primary key clustered,
    Beep int unique,
    Boop nvarchar(50))
```
MySQL uses `INSERT... ON DUPLICATE KEY` and MSSQL uses `MERGE` `MERGE` is [supported by Azure](http://msdn.microsoft.com/en-us/library/windowsazure/ee336270.aspx), and I can highly recommend this [blog article](http://www.purplefrogsystems.com/blog/2011/12/introduction-to-t-sql-merge-basics/) on it, as a good intro to the statement Here is a merge statement based on the schema provided... ``` create table #Stuff ( Id int identity(1001, 1) primary key clustered, Beep int unique, Boop nvarchar(50), Baap nvarchar(50) ); INSERT INTO #Stuff VALUES (1,'boop', 'poop'); INSERT INTO #Stuff VALUES (2,'beep', 'peep'); SELECT * FROM #STUFF; MERGE #Stuff USING (VALUES(1,'BeepBeep','PeepPeep')) AS TheNewThing(A,B,C) ON #Stuff.Beep = TheNewThing.A WHEN MATCHED THEN UPDATE SET #Stuff.Boop = TheNewThing.B, #Stuff.Baap = 'fixed' WHEN NOT MATCHED THEN INSERT (Beep,Boop,Baap) VALUES ( TheNewThing.A, TheNewThing.B, TheNewThing.C); SELECT * FROM #STUFF ``` I also found a really good [SO Q](https://stackoverflow.com/questions/11216067/what-is-using-in-sql-server-2008-merge-syntax) which might make good further reading
IN MYSQL: You may use INSERT ... ON DUPLICATE KEY UPDATE. e.g.:

```
INSERT INTO table (a,b,c) VALUES (4,5,6)
ON DUPLICATE KEY UPDATE c=9;
```

For more information: <http://dev.mysql.com/doc/refman/5.6/en/insert-on-duplicate.html>
Insert and alter in one statement
[ "", "sql", "azure-sql-database", "" ]
I am drawing two subplots with Matplotlib, essentially following this pattern:

```
subplot(211); imshow(a); scatter(..., ...)
subplot(212); imshow(b); scatter(..., ...)
```

Can I draw lines between those two subplots? How would I do that?
The solutions from the other answers are suboptimal in many cases (as they would only work if no changes are made to the plot after calculating the points).

A better solution would use the specially designed [`ConnectionPatch`](https://matplotlib.org/tutorials/text/annotations.html#using-connectionpatch):

```
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch
import numpy as np

fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)

x,y = np.random.rand(100),np.random.rand(100)

ax1.plot(x,y,'ko')
ax2.plot(x,y,'ko')

i = 10
xy = (x[i],y[i])
con = ConnectionPatch(xyA=xy, xyB=xy, coordsA="data", coordsB="data",
                      axesA=ax2, axesB=ax1, color="red")
ax2.add_artist(con)

ax1.plot(x[i],y[i],'ro',markersize=10)
ax2.plot(x[i],y[i],'ro',markersize=10)
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/8Tr9L.png)](https://i.stack.imgur.com/8Tr9L.png)
You could use `fig.line`. It adds any line to your figure. Figure lines are higher level than axis lines, so you don't need any axis to draw it. This example marks the same point on the two axes. It's necessary to be careful with the coordinate system, but the transform does all the hard work for you. ``` import matplotlib.pyplot as plt import matplotlib import numpy as np fig = plt.figure(figsize=(10,5)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) x,y = np.random.rand(100),np.random.rand(100) ax1.plot(x,y,'ko') ax2.plot(x,y,'ko') i = 10 transFigure = fig.transFigure.inverted() coord1 = transFigure.transform(ax1.transData.transform([x[i],y[i]])) coord2 = transFigure.transform(ax2.transData.transform([x[i],y[i]])) line = matplotlib.lines.Line2D((coord1[0],coord2[0]),(coord1[1],coord2[1]), transform=fig.transFigure) fig.lines = line, ax1.plot(x[i],y[i],'ro',markersize=20) ax2.plot(x[i],y[i],'ro',markersize=20) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/V7Tc6.png)
Drawing lines between two plots in Matplotlib
[ "", "python", "matplotlib", "" ]
I am trying to learn inheritance in Python. I wrote class "Course" as the super class of class "AdvancedCourse", as shown below.

```
class Course(object):
    def __init__(self, crsName ="python", duration=45):
        self.crsName = crsName
        self.duration = 25
```

And the sub class is:

```
import Course

class AdvancedCourse (Course):
    def __init__(self, crsName ="python", duration=45):
        self.crsName = "java"
        self.duration = 25
```

But I got stuck on an error:

```
class AdvancedCourse (Course):
TypeError: module.__init__() takes at most 2 arguments (3 given)
```

Any suggestions?
This is a problem with importing, not inheritance. `Course` is the module: you need to inherit from `Course.Course`. (In Python we usually name modules in lower case, though).
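A minimal sketch of the fix (both classes shown in one snippet for brevity; in practice the base class would live in `course.py` and be imported with `from course import Course`):

```python
class Course(object):
    def __init__(self, crsName="python", duration=45):
        self.crsName = crsName
        self.duration = duration

class AdvancedCourse(Course):
    def __init__(self, crsName="java", duration=25):
        # delegate to the parent initialiser instead of duplicating it
        super(AdvancedCourse, self).__init__(crsName, duration)

adv = AdvancedCourse()
print(adv.crsName, adv.duration)  # java 25
```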
I assume that class `Course` is in another module `Course.py`. Then you should import it with `from Course import Course`.

And @Daniel is right - you should have the module in a file `course.py` (lowercase), and the import statement will then be `from course import Course`.
TypeError Inheritance in Python
[ "", "python", "inheritance", "" ]
I'm trying to get the syntax to show a list of records where the `Actual Live Date` is between today and 7 days from now, and ideally show how many there are per day (Thursday = 7 records).

I was thinking something along the lines of:

```
SELECT [PW Number]
      ,[status]
      ,[install Date]
      ,[ICL Client Code]
      ,[Actual Live Date]
FROM [QuoteBase].[dbo].[Circuits]
WHERE [Actual Live Date] BETWEEN today and 7 days time (this is where I am a little stuck as I'm fairly new)
```
If you need "today" to start at 00:01 this morning, then you need to remove the time portion from e.g. `GETDATE()`. I'm also using explicit functions to show that I'm adding days:

```
SELECT [PW Number]
      ,[status]
      ,[install Date]
      ,[ICL Client Code]
      ,[Actual Live Date]
FROM [QuoteBase].[dbo].[Circuits]
WHERE [Actual Live Date] BETWEEN DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
                             AND DATEADD(day, DATEDIFF(day, 0, GETDATE()) + 7, 0)
```

If `Actual Live Date` contains a time component, you may want to adjust the `+7` to `+8`, depending on exactly which rows should be included in the result or not.
You can use the `BETWEEN` and `GETDATE()` functions for this; `GETDATE() + 7` will add 7 days onto today's date:

```
SELECT [PW Number]
      ,[status]
      ,[install Date]
      ,[ICL Client Code]
      ,[Actual Live Date]
FROM [QuoteBase].[dbo].[Circuits]
WHERE [Actual Live Date] BETWEEN GETDATE() AND GETDATE() + 7
```
Selecting records where 'ActualLiveDate' is between today and X days in to the future with SQL Server
[ "", "sql", "sql-server", "" ]
I need to return daily SUM from two columns covering 7 months I am attempting to do this, but this is way too time consuming. Is there a method to return these in sequence incremented one day at a time? ``` select SUM(TotalBet)SALES, SUM(TotalWin)PRIZES from play nolock where CurrentDate between '1/1/2013 07:00:00' and '1/2/2013 7:00:00' select SUM(TotalBet)SALES, SUM(TotalWin)PRIZES from play nolock where CurrentDate between '1/2/2013 07:00:00' and '1/3/2013 7:00:00' select SUM(TotalBet)SALES, SUM(TotalWin)PRIZES from play nolock where CurrentDate between '1/3/2013 07:00:00' and '1/4/2013 7:00:00' select SUM(TotalBet)SALES, SUM(totalbet)PRIZES from play nolock where CurrentDate between '1/4/2013 07:00:00' and '1/5/2013 7:00:00' select SUM(TotalBet)SALES, SUM(TotalWin)PRIZES from play nolock where CurrentDate between '1/5/2013 07:00:00' and '1/6/2013 7:00:00' select SUM(TotalBet)SALES, SUM(TotalWin)PRIZES from play nolock where CurrentDate between '1/6/2013 07:00:00' and '1/7/2013 7:00:00' ```
You want an aggregation. The key is to convert the `datetime` to a `date`. To get the right date, subtract 7 hours:

```
select cast(CurrentDate - 7.0/24 as date) as thedate,
       SUM(TotalBet) as SALES, SUM(TotalWin) as PRIZES
from play nolock
group by cast(CurrentDate - 7.0/24 as date)
order by 1;
```
Use a [numbers table](http://www.kodyaz.com/articles/sql-server-tsql-cte-numbers-table.aspx) to generate your dates, and then join to the table and group by the day. You could go straight to the table, but if you have missing dates (say, a holiday) they won't show in the results without a numbers table. ``` SELECT D.Date, SUM(TotalBet) as Sales, SUM(TotalBet) as Prizes FROM ( SELECT DATEADD(dd, CONVERT(datetime, '2013-01-01 07:00'), Number) as Date FROM dbo.NumbersTable(0, 210) --210 days ) as D LEFT OUTER JOIN Play as P ON P.CurrentDate >= D.Date AND P.CurrentDate < D.Date GROUP BY D.Date ``` You'll also (probably) want to not use BETWEEN, as you could end up with duplicates on the boundaries (BETWEEN is inclusive on both sides).
I need to return daily SUM from two columns covering 7 months
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need to populate a list with a few 1's in random places. I can successfully create a list of random numbers with: ``` from random import randint l = [randint(0,1023) for _ in range(0,10)] ``` How do I populate a list with 1's in the position specified by l?
# A Sparse List My understanding of "sparse list" is that most (say, more than 95% of) values will be zero and for reasons of **memory efficiency** you don't wish to *store* these (*cf.* [Sparse array](http://en.wikipedia.org/wiki/Sparse_array)). ## List comprehension Using your list comprehension, you can use [***Conditional Expression Resolution***](http://mail.python.org/pipermail/python-dev/2005-September/056846.html) (*foo* `if` *condition* `else` *bar*) to determine whether a one or a zero is at that position. For example: ``` In [1]: from random import randint In [2]: l = [randint(0,1023) for _ in range(0,10)] In [3]: l Out[3]: [987, 356, 995, 192, 21, 22, 1013, 375, 796, 339] In [4]: 1 if 987 in l else 0 Out[4]: 1 In [5]: 1 if 988 in l else 0 Out[5]: 0 ``` This means that you don't need to populate the second list you mention in your question, you could just iterate over the range 0 - 1023 and use: ``` 1 if index in l else 0 ``` ## Dictionary comprehension Alternatively, you could use a [***dictionary comprehension***](http://www.python.org/dev/peps/pep-0274/). I think this is more readable: ``` In [1]: from random import randint In [2]: l = {randint(0, 1023): 1 for _ in xrange(0, 10)} ``` This will generate a dictionary like this: ``` In [3]: l Out[3]: {216: 1, 381: 1, 384: 1, 392: 1, 396: 1, 472: 1, 585: 1, 630: 1, 784: 1, 816: 1} ``` Then you access the elements, specifying a default value of zero. If the value at the requested position is set, you'll get your one: ``` In [4]: l.get(216, 0) Out[4]: 1 ``` If the value is not set, you'll get a zero: ``` In [5]: l.get(217, 0) Out[5]: 0 ``` To obtain a list of the positions: ``` In [6]: l.keys() Out[6]: [384, 392, 472, 630, 216, 585, 396, 381, 784, 816] ``` ## Flaw in both of the above approaches `randint(0, 1023)` could emit the same number more than once, leading to *clashes*, which would result in fewer than the required number of ones. 
## Tying it all together

I would wrap the dictionary-based implementation in a `class` to make it easy to (re-)use.

```
from random import randint

class RandomSparseList(object):
    def __init__(self, size, min_bits, max_bits):
        self.size = int(size)
        self.bits = {}
        self.bits_set = randint(min_bits, max_bits)
        while self.bits_set > len(self.bits):
            # randint is inclusive at both ends; valid indices are 0 .. size - 1
            self.bits[randint(0, self.size - 1)] = 1

    def __len__(self):
        return self.size

    def __getitem__(self, index):
        if index < 0 or index >= self.size:
            raise IndexError
        return self.bits.get(int(index), 0)

    def __iter__(self):
        for i in xrange(self.size):
            yield self.__getitem__(i)

    def __contains__(self, index):
        return index in self.bits

    def __repr__(self):
        return '[{}]'.format(', '.join(str(x) for x in self))

    def set_bits(self):
        return self.bits.keys()
```

## Example usage

### I've put this `class` in a file:

```
In [1]: from random_sparse_list import RandomSparseList
```

### Create an instance:

```
In [2]: rsl = RandomSparseList(1024, 10, 40)
```

### Check the length of the list:

```
In [3]: len(rsl)
Out[3]: 1024
```

### Which bits are set?

```
In [4]: rsl.set_bits()
Out[4]:
[523, 400, 285, 158, 419, 434, 701, 67, 843, 846, 591, 720, 470, 864, 912, 739, 996, 485, 489, 234, 1005, 573, 381, 784]
```

24: That's certainly in the range of 10-40.

### Random-access:

```
In [5]: rsl[523]
Out[5]: 1

In [6]: rsl[524]
Out[6]: 0
```

### Is a bit set?
``` In [7]: 400 in rsl Out[7]: True In [8]: 401 in rsl Out[8]: False ``` ### Iteration over the list: ``` In [9]: for index, value in enumerate(rsl): ...: if value: ...: print '{} found at index {}'.format(value, index) ...: 1 found at index 67 1 found at index 158 1 found at index 234 1 found at index 285 1 found at index 381 1 found at index 400 1 found at index 419 1 found at index 434 1 found at index 470 1 found at index 485 1 found at index 489 1 found at index 523 1 found at index 573 1 found at index 591 1 found at index 701 1 found at index 720 1 found at index 739 1 found at index 784 1 found at index 843 1 found at index 846 1 found at index 864 1 found at index 912 1 found at index 996 1 found at index 1005 ``` ### String representation: ``` In [10]: rsl Out[10]: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ``` ## Note A [***`set`***](http://docs.python.org/2/library/stdtypes.html#set)-based implementation would be even more memory efficient 
but the [dict](http://docs.python.org/2/library/stdtypes.html#dict) above can be easily changed to contain (random or otherwise) values other than `0` and `1`.

# Update

Inspired by this question and the lack of a standard sparse `list` implementation, I've added a [sparse_list](https://pypi.python.org/pypi/sparse_list/0.1) implementation to the Cheese Shop. You can install it with `pip install sparse_list` and then the `RandomSparseList` implementation is much simpler for you:

```
from sparse_list import SparseList
from random import randint

class RandomSparseList(SparseList):
    def __init__(self, size, min_bits, max_bits):
        super(RandomSparseList, self).__init__(size, 0)
        self.bits = randint(min_bits, max_bits)
        while self.bits > len(self.elements):
            # randint is inclusive at both ends; valid indices are 0 .. size - 1
            self.elements[randint(0, self.size - 1)] = 1
```

This will work exactly as in the examples above but with a few extras such as extended slicing. You can read (and contribute to) the source [on GitHub](https://github.com/johnsyweb/python_sparse_list).
> I need to create a large list of 0's with 10 - 40 l's in random places > to benchmark an algorithm. Might this work for you? ``` import random zeros = [0] * 1024 ones = [1] * random.randint(10, 40) l = zeros + ones random.shuffle(l) # the list l contains many zeros and 10 - 40 1's in random places. where_the_ones_are = [i for i, x in enumerate(l) if x == 1] ```
Populating a sparse list with random 1's
[ "", "python", "" ]
I am currently learning the concept of list comprehensions in Python. However, I have problems when the list I am iterating over contains sublists of equal or different length. For example, I would like to turn the code for `union_set()` into a one-line comprehension:

```
def union_set(L):
    S_union = set()
    for i in range(len(L)):
        S_union.update(set(L[i]))
    return S_union

L1 = [1, 2, 3]
L2 = [4, 5, 6]
L3 = [7, 8, 9]
L = [L1, L2, L3]

print(L)
print(union_set(L))
```

I am pretty sure this should be possible (maybe by 'somehow' unpacking the sublists' content(?)), but I am afraid that I am missing something here. Can anyone help?
Using list-comprehension, you can do something like that: ``` >>> L1 = [1, 2, 3] >>> L2 = [4, 5, 6] >>> L3 = [7, 8, 9] >>> L = [L1, L2, L3] >>> s=set([x for y in L for x in y]) >>> s set([1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` y is iterating over the sublist, while x iterates over items in y.
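An equivalent one-liner swaps the nested comprehension for `itertools.chain.from_iterable`, which flattens exactly one level of nesting:

```python
from itertools import chain

L1 = [1, 2, 3]
L2 = [4, 5, 6]
L3 = [7, 8, 9]
L = [L1, L2, L3]

s = set(chain.from_iterable(L))
print(s == {1, 2, 3, 4, 5, 6, 7, 8, 9})  # True
```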
Use an empty `set` and `.union` it: ``` L1 = [1, 2, 3] L2 = [4, 5, 6] L3 = [7, 8, 9] print set().union(L1, L2, L3) ``` Used in your code as: ``` L = [L1, L2, L3] def union_set(L): return set().union(*L) ```
How extract items of sublists in a one-line-comprehension in python?
[ "", "python", "python-3.x", "list-comprehension", "" ]
I am using a temporary table in a function to save some results however I don't know how to return the table from the function. Ideally I would like to do everything in one query (i.e. not two queries: one for calling the function, the other to get data from the temp table). Currently my `main_function()` is as follows: ``` CREATE OR REPLACE FUNCTION main_function() RETURNS void AS $BODY$ BEGIN DROP TABLE IF EXISTS temp_t CASCADE; CREATE TEMP TABLE temp_t AS SELECT * FROM tbl_t limit 0; EXECUTE 'INSERT INTO temp_t ' || 'SELECT * FROM tbl_t limit 10'; END; $BODY$ LANGUAGE 'plpgsql' ; ``` And I am calling it like so: ``` SELECT * from main_function(); SELECT * from temp_t; ``` Again, the problem is that I don't actually want to call the second query. The first query should return the temp table as a result, however I cannot do this since the temp table is created in `main_function()` so it cannot be its return type. Any ideas on how to achieve this? Thanks
Inside your `main_function()`:

```
RETURN QUERY SELECT * FROM temp_t;
```

...if the `temp_t` table consists of e.g. column1 (type integer), column2 (boolean) and column3 (varchar(100)), you should also define the returned type as:

```
CREATE OR REPLACE FUNCTION main_function(column1 OUT integer, column2 OUT boolean, column3 OUT varchar(100))
RETURNS SETOF record AS (...)
```

Another way is to define a new data type:

```
CREATE TYPE temp_t_type AS (
    column1 integer,
    column2 boolean,
    column3 varchar(100)
);
```

That type can be returned by your functions in the same way as normal data types:

```
CREATE OR REPLACE FUNCTION main_function()
RETURNS SETOF temp_t_type AS (...)
```

...and return the result from the function in the same way as mentioned above.
Are you sure you *need* a temporary table? Most of the time, there is a cheaper solution. Your example can simply be: ``` CREATE OR REPLACE FUNCTION main_function() RETURNS SETOF tbl_t AS $BODY$ BEGIN RETURN QUERY EXECUTE 'SELECT * FROM tbl_t LIMIT 10'; END $BODY$ LANGUAGE plpgsql; ``` You also don't need `EXECUTE` or even plpgsql for the simple case: ``` CREATE OR REPLACE FUNCTION main_function() RETURNS SETOF tbl_t AS $BODY$ SELECT * FROM tbl_t LIMIT 10; $BODY$ LANGUAGE sql; ``` Never quote the language name. It's an identifier.
How to return temp table result in postgresql function
[ "", "sql", "postgresql", "types", "" ]
How can I get the record details of an aggregate function without using a sub query? I have a data set as follows: ``` CREATE TABLE SOMEDATA( id int, low int, op int, yc int, tc int, volume int, Atimestamp time) INSERT INTO SOMEDATA VALUES(5631, 5500, 5600, 5680, 5680, 300527, '13:16:12.462') INSERT INTO SOMEDATA VALUES(5631, 5500, 5600, 5680, 5680, 301720, '13:16:13.304') INSERT INTO SOMEDATA VALUES(5631, 5500, 5600, 5680, 5680, 302041, '13:16:13.306') INSERT INTO SOMEDATA VALUES(5631, 5500, 5600, 5680, 5680, 302410, '13:16:13.682') INSERT INTO SOMEDATA VALUES(5631, 5500, 5600, 5680, 5680, 302548, '13:16:15.818') INSERT INTO SOMEDATA VALUES(5632, 5500, 5600, 5680, 5680, 302548, '13:16:15.818') ``` Which I query by doing: ``` SELECT * FROM SOMEDATA INNER JOIN (select max(Atimestamp) as tm,id FROM SOMEDATA group by id) t on t.tm = SOMEDATA.Atimestamp AND SOMEDATA.id = t.id ``` This seems like a bad way to do it though ( eg as I understand it, this query locks the table twice) - is there a better way to do this ( with HAVING perhaps )?
You should be able to use the RANK() function for this. Something like:

```
SELECT id, low, op, yc, tc, volume, Atimestamp
FROM
(
    SELECT id, low, op, yc, tc, volume, Atimestamp,
    RANK() OVER (PARTITION BY id ORDER BY Atimestamp DESC) AS rnk
    FROM somedata
) a
WHERE a.rnk = 1
```
Please try: ``` SELECT * FROM( SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY Atimestamp DESC) RNum From SOMEDATA )x WHERE RNum=1 ``` OR ``` ;WITH x AS( SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY Atimestamp DESC) RNum From SOMEDATA ) SELECT * FROM x WHERE RNum=1 ```
How to get details of row with an aggregate function without subquery?
[ "", "sql", "sql-server", "" ]
I created a stored procedure which, when passed no parameter, should return the entire table. But if a studentId is passed, then it should return her details. Something like this

```
create procedure usp_GetStudents
@studentId int = null
as
if (@studentId = null)
    select * from Student
else
    select * from Student where studentId = @studentId
```

Output

```
exec usp_GetStudents -- No records returned though there are records in the table

exec usp_GetStudents @studentId = null -- No records returned

exec usp_GetStudents @studentId = 256 -- 1 entry returned
```

Just curious to know if anything is wrong in the syntax/logic for returning all the entries of the table?

Thank you
You're trying to test for null using `=`, a [comparison operator](http://msdn.microsoft.com/en-us/library/ms188074.aspx). If you're using ANSI nulls, any comparison against `null` is `false`. Where `@studentId` is *any* value (or `null`) the following expressions are all `false`: ``` @studentId = null -- false @studentId > null -- false @studentId >= null -- false @studentId < null -- false @studentId <= null -- false @studentId <> null -- false ``` So, in order to test for `null` you must use a special predicate, [`is null`](http://msdn.microsoft.com/en-us/library/ms188795.aspx), i.e.: ``` @studentId is null ```
Shorter way to do that:

```
create procedure usp_GetStudents
@studentId int = null
as
select * from Student where studentId = isnull(@studentId, studentId)
```

You can't check whether a value is null using `=`. For your example you have to replace the condition `@studentId = null` with the `is null` syntax. Try to change your code as below:

```
create procedure usp_GetStudents
@studentId int = null
as
if (@studentId is null)
    select * from Student
else
    select * from Student where studentId = @studentId
```
conditional stored procedure with/without passing parameter
[ "", "sql", "stored-procedures", "" ]
I'm writing a function to read from a socket and return an exact size of data. And I thought of code like:

```
while len(buf) < required:
    buf += socket.recv(required - len(buf))
return buf
```

But I think this might waste some CPU resources (not sure). I'm looking for a system call or something that returns the exact amount of data from the socket, and it could block until the required size is received.
This will not waste CPU resources. `sock.recv` will [block](http://docs.python.org/2/library/socket.html#socket.socket.setblocking) until at least one byte is available (but may buffer and return multiple bytes if they are available), so your application will not enter a busy loop. In general, the buffer length of the `sock.recv` call [should not have anything to do with the length of the message you'd like to retrieve](http://docs.python.org/2/library/socket.html#socket.socket.recv_into).

---

String concatenation is not efficient for your buffer, though. You may want to consider using:

* A list, and `''.join()` (`b''.join()` if using Python 3)
* The `StringIO` and `cStringIO` modules (or `io.BytesIO` if using Python 3).
* [`sock.recv_into`](http://docs.python.org/2/library/socket.html#socket.socket.recv_into) together with `StringIO` / `BytesIO`.
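Putting those pieces together, a sketch of a fixed-size read helper (the function name is my own) that collects chunks in a list and joins once at the end:

```python
import socket

def recv_exact(sock, required):
    """Read exactly `required` bytes, blocking as needed.

    Raises EOFError if the peer closes before enough bytes arrive.
    """
    chunks = []
    remaining = required
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:  # peer closed the connection
            raise EOFError('socket closed with %d bytes outstanding' % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)
```

Each `recv` blocks until at least one byte is available, so the loop sleeps rather than spins while waiting.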
If you are using the default socket library you can use the following. ``` import socket host, port = "localhost", 8000 num_bytes = 5000 sock = socket.socket() sock.connect((host, port)) while True: data = sock.recv(num_bytes) handle(data) ``` You can take a look at the possible parameters here [socket.recv documentation](http://docs.python.org/2/library/socket.html#socket.socket.recv)
How to receive certain size of data in socket programming?
[ "", "python", "sockets", "" ]
I am trying to select all the elements where x_id=1, but there will be multiple rows in the result with the same user_id, and I just want it to show one result for each user_id (instead of multiple). How would I be able to do this in SQL? I'm completely lost.

Table `a`:

```
id | x_id | user_id
```
`SELECT DISTINCT user_id FROM table WHERE x_id = 1;`
select distinct user\_id from a where x\_id = 1;
Select Data From Table Without Multiple Of The Same User ID
[ "", "mysql", "sql", "database", "" ]
Using `requests` in Python I am performing a GET, requesting a JSON file which I can later access and modify, tweak, etc. with the command `solditems.json()`. However I would like to save this JSON file to my computer. Looking through the requests docs I found nothing; does anybody have an easy way I can do this?
You can do it just as you would without `requests`. Your code might look something like, ``` import json import requests solditems = requests.get('https://github.com/timeline.json') # (your url) data = solditems.json() with open('data.json', 'w') as f: json.dump(data, f) ```
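A quick way to sanity-check the result is to read the file back with `json.load`; here a stand-in dict replaces the live response so the snippet runs offline:

```python
import json

data = {'id': 1, 'sold': True}  # stand-in for solditems.json()

with open('data.json', 'w') as f:
    json.dump(data, f)

with open('data.json') as f:
    print(json.load(f) == data)  # True
```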
Based on @Lukasa's comment, this accelerates @Jared's solution:

```
import requests

solditems = requests.get('https://github.com/timeline.json') # (your url)
data = solditems.content
with open('data.json', 'wb') as f:
    f.write(data)
```
Saving a json file to computer python
[ "", "python", "python-requests", "" ]
I have 2 tables with the following structure: Table A: ``` id_A col1 1 val1 2 val2 3 val3 ... .... ``` Table B: ``` id_B mycol id_A_val 1 smval1 null 2 null 1 3 null 2 ... ... ... ``` I want to copy values from Table A's col1 into Table B's mycol This is my expected result: Expected: ``` id_B mycol id_A_val 1 smval1 null 2 val1 1 3 val2 2 ... ... ... ``` I tried several combinations of SQL UPDATE. This was the latest I tried - but it throws an error saying "Subquery returned more than 1 value." Tried: ``` UPDATE [dbo].[Table_B] SET MYCOL = (SELECT inst.[COL1] FROM [dbo].[TABLE_A] a, [dbo].[TABLE_B] b WHERE a.[ID_A] = b.[ID_A_VAL] AND b.ID_A_VAL IS NOT NULL) ``` Can someone throw some light on the correct direction to get a working query?
Try this: ``` update b set mycol=table_a.col1 from table_b b inner join table_a on b.id_A_val=table_a.id_A ```
Try This ``` UPDATE tableb SET mycol=a.col1 FROM tableb b INNER JOIN tablea a ON a.id_A=b.id_A_val WHERE b.mycol is null ```
how to copy data from multiple rows of one table to another in sql server?
[ "", "sql", "sql-server", "sql-update", "multiple-records", "" ]
I have a numpy matrix where each row is a picture. I can reshape the rows and display the images with matplotlib.pyplot. The problem is: I don't want to display the images separately; I want to display them one after another like a video. How is that possible in Python?
Well, I don't know if it is the best way, but I've used matplotlib.pyplot to solve my problem:

```
import numpy
import matplotlib.pyplot as plt

matrix = numpy.genfromtxt(path, delimiter=',') # Read the numpy matrix with images in the rows
c = matrix[0]
c = c.reshape(120, 165) # this is the size of my pictures
im = plt.imshow(c)

for row in matrix:
    row = row.reshape(120, 165) # this is the size of my pictures
    im.set_data(row)
    plt.pause(0.02)
plt.show()
```
Hello my solutions using matplotlib animations: ``` from matplotlib import pyplot as plt import numpy as np from matplotlib.animation import FuncAnimation video # my numpy matrix of shape (frames, channel, w, h) # initializing a figure in # which the graph will be plotted fig = plt.figure(figsize=(10,10)) # create an axis ax = plt.axes() # initializing a line variable img = ax.imshow(np.ndarray(shape=video.shape[2:]) def animate(i): img.set_data(i) return img, anim = FuncAnimation(fig, animate, frames=video, interval=20, blit=True) anim.save(filename, writer='ffmpeg', fps=5, bitrate=2000) ``` Note if you are in a jupyter notebook you can use a temporary file and a ipywidgets to display the video without saving it. ``` from tempfile import NamedTemporaryFile f = NamedTemporaryFile(mode='w+', suffix='.mp4') #...previous code anim.save(f.name, writer='ffmpeg', fps=5, bitrate=2000) ``` ``` from ipywidgets import Video Video.from_file(f.name) ```
Displaying Numpy Matrix as Video
[ "", "python", "numpy", "" ]
It looks like celery does not release memory after task finished. Every time a task finishes, there would be 5m-10m memory leak. So with thousands of tasks, soon it will use up all memory. ``` BROKER_URL = 'amqp://user@localhost:5672/vhost' # CELERY_RESULT_BACKEND = 'amqp://user@localhost:5672/vhost' CELERY_IMPORTS = ( 'tasks.tasks', ) CELERY_IGNORE_RESULT = True CELERY_DISABLE_RATE_LIMITS = True # CELERY_ACKS_LATE = True CELERY_TASK_RESULT_EXPIRES = 3600 # maximum time for a task to execute CELERYD_TASK_TIME_LIMIT = 600 CELERY_DEFAULT_ROUTING_KEY = "default" CELERY_DEFAULT_QUEUE = 'default' CELERY_DEFAULT_EXCHANGE = "default" CELERY_DEFAULT_EXCHANGE_TYPE = "direct" # CELERYD_MAX_TASKS_PER_CHILD = 50 CELERY_DISABLE_RATE_LIMITS = True CELERYD_CONCURRENCY = 2 ``` Might be same with issue, but it does not has an answer: [RabbitMQ/Celery/Django Memory Leak?](https://stackoverflow.com/questions/10302503/rabbitmq-celery-django-memory-leak) I am not using django, and my packages are: ``` Chameleon==2.11 Fabric==1.6.0 Mako==0.8.0 MarkupSafe==0.15 MySQL-python==1.2.4 Paste==1.7.5.1 PasteDeploy==1.5.0 SQLAlchemy==0.8.1 WebOb==1.2.3 altgraph==0.10.2 amqp==1.0.11 anyjson==0.3.3 argparse==1.2.1 billiard==2.7.3.28 biplist==0.5 celery==3.0.19 chaussette==0.9 distribute==0.6.34 flower==0.5.1 gevent==0.13.8 greenlet==0.4.1 kombu==2.5.10 macholib==1.5.1 objgraph==1.7.2 paramiko==1.10.1 pycrypto==2.6 pyes==0.20.0 pyramid==1.4.1 python-dateutil==2.1 redis==2.7.6 repoze.lru==0.6 requests==1.2.3 six==1.3.0 tornado==3.1 translationstring==1.1 urllib3==1.6 venusian==1.0a8 wsgiref==0.1.2 zope.deprecation==4.0.2 zope.interface==4.0.5 ``` I just added a test task like, test\_string is a big string, and it still has memory leak: ``` @celery.task(ignore_result=True) def process_crash_xml(test_string, client_ip, request_timestamp): logger.info("%s %s" % (client_ip, request_timestamp)) test = [test_string] * 5 ```
It was this config option that made my worker not release memory: ``` CELERYD_TASK_TIME_LIMIT = 600 ``` Refer to: <https://github.com/celery/celery/issues/1427>
There are two settings which can help you mitigate growing memory consumption of celery workers: * [Max tasks per child setting](http://docs.celeryproject.org/en/latest/userguide/workers.html#max-tasks-per-child-setting) (v2.0+): > With this option you can configure the maximum number of tasks a worker can execute before it’s replaced by a new process. This is useful if you have memory leaks you have no control over for example from closed source C extensions. * [Max memory per child setting](http://docs.celeryproject.org/en/latest/userguide/workers.html#max-memory-per-child-setting) (v4.0+): > With this option you can configure the maximum amount of resident memory a worker can execute before it’s replaced by a new process. > This is useful if you have memory leaks you have no control over for example from closed source C extensions. However, those options only work with the default pool (prefork). For safe guarding against memory leaks for threads and gevent pools you can add an utility process called [memmon](https://superlance.readthedocs.io/en/latest/memmon.html), which is part of the [superlance](https://superlance.readthedocs.io/en/latest/) extension to supervisor. Memmon can monitor all running worker processes and will restart them automatically when they exceed a predefined memory limit. Here is an example configuration for your supervisor.conf: ``` [eventlistener:memmon] command=/path/to/memmon -p worker=512MB events=TICK_60 ```
Celery does not release memory
[ "", "python", "rabbitmq", "celery", "amqp", "" ]
I want to get the current identity value of a specific table, like IDENT_CURRENT('table') in SQL Server.
``` SELECT AUTOINC_SEED FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='TableName' AND COLUMN_NAME='ColumnName' ``` from [Hamid's answer](https://stackoverflow.com/a/17544448/161250) is fine if what you're looking for is what the identity column's seed value is (i.e. what the first ever value of the identity column was or is going to be), but if you're looking for what the next value of an inserted row is going to be, this is the query you want to use: ``` SELECT AUTOINC_NEXT FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='TableName' AND COLUMN_NAME='ColumnName' ```
If you want to get a last identity inserted value for a particular table use the following statement: ``` select IDENT_CURRENT('tablename') ``` For example: ``` select IDENT_CURRENT('Employee') ```
How to get current identity number of specific table in sql server compact
[ "", "sql", "sql-server-ce", "" ]
I am working on an anonymizer program which censors the given words in the list. This is what I have so far. I am new to Python, so I am not sure how to achieve this. ``` def isAlpha(c): if( c >= 'A' and c <='Z' or c >= 'a' and c <='z' or c >= '0' and c <='9'): return True else: return False def main(): message = [] userInput = str(input("Enter The Sentense: ")) truncatedInput = userInput[:140] for i in range(len(truncatedInput)): if(truncatedInput[i] == 'DRAT'): truncatedInput[i] = 'x' print(truncatedInput[i]) ``` This is the output I get: ``` Enter The Sentense: DRAT D R A T ``` I want the word to be replaced by XXXX
You have several problems with your code: 1. There already exists an `isalpha` function; it is a `str` method (see example below). 2. Your `truncatedInput` is a `str`, which is an immutable type. You can't reassign parts of an immutable type; i.e. `myStr[3]='x'` would normally fail. If you really want to do this, you're better off representing your truncated input as a list and using `''.join(truncatedInput)` to turn it into a string later. 3. You are currently looking at the characters in your truncated input to check if any of them equals `'DRAT'`. This is what your first for-loop in `main` does. However, what you seem to want is to iterate over the words themselves - you will need a "chunker" for this. This is a slightly difficult problem if you want to deal with free-form English. For example, a simple word chunker would simply split your sentence on spaces. However, what happens when you have a sentence containing the word "DRAT'S"? Due to such cases, you will be forced to create a proper chunker to deal with punctuation as required. This is a fairly high-level design decision. You may want to take a look at [`NLTK`](http://nltk.org/) to see if any of its chunkers will help you out. **Examples**: `str.isalpha` ``` In [3]: myStr = 'abc45d' In [4]: for char in myStr: ...: print char, char.isalpha() ...: a True b True c True 4 False 5 False d True ``` strings are immutable ``` In [5]: myStr[3] = 'x' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-bf15aed01ea1> in <module>() ----> 1 myStr[3] = 'x' TypeError: 'str' object does not support item assignment ``` Finally, as others have recommended, you're likely much better off using `str.replace` anyway. However, be wary of replacing substrings of non-censored words. For example, the substring "hell" in the word "hello" does not need to be censored.
To accommodate for such text, you may want to use [`re.sub`](http://docs.python.org/2/library/re.html#re.sub), a regex substitution, as opposed to `str.replace`. One additional note: Python allows chained comparisons, so you can shorten `if( c >= 'A' and c <='Z' or c >= 'a' and c <='z' or c >= '0' and c <='9')` into `if( 'Z' >= c >= 'A' or 'z' >= c >= 'a' or '9' >= c >= '0')`. This, by the way, can be replaced with `if c.isalpha() or c.isdigit()`. Hope this helps.
You could use [string.replace()](http://docs.python.org/2/library/string.html#string.replace) ``` truncatedInput.replace('DRAT', 'xxxx') ``` This will replace the first occurence of DRAT with xxxx, even if it is part of a longer sentence. If you want different functionality let me know.
How to make anonymizer in python?
[ "", "python", "" ]
I wrote this python function which takes a list as a parameter and determines which elements of the list are perfect squares and then returns a new list of just those select elements. Here is my function: ``` def square(n): return n**2 def perfectSquares1(L): import math m=max(L) for n in L: if type(n) is int and n>0: Result=map(square,range(1,math.floor(math.sqrt(m)))) L1=list(Result) L2=list(set(L).intersection(set(L1))) return L2 ``` But now I'm trying to re-work it a little: I want to write a one-line Boolean function that takes n as a parameter and returns True if n is a perfect square and returns false otherwise. Any advice? I can't figure out a way to make it only one line.
You can do: ``` import math def perfect_sq(n): return n == int(math.sqrt(n)) * int(math.sqrt(n)) ``` Or you can use: ``` import math def perfect_sq(n): return n == int(math.sqrt(n)) ** 2 ```
``` lambda n: math.sqrt(n) % 1 == 0 ```
Python Program on Perfect Squares
[ "", "python", "python-3.x", "" ]
I'm trying to implement the PageObject pattern for my first login test. While running it I'm receiving the following error: ``` >> py.test -v test_login.py ============================= test session starts ============================== platform linux2 -- Python 2.7.3 -- pytest-2.3.4 plugins: xdist collected 0 items / 1 errors ==================================== ERRORS ==================================== ____________________ ERROR collecting test_login_logout.py _____________________ test_login_logout.py:10: in <module> > from ui.pages import LoginPage ../pages/__init__.py:1: in <module> > from loginPage import LoginPage ../pages/loginPage.py:3: in <module> > from base import BasePage E ImportError: No module named base ``` Here is the pythonpath: > Pythonpath: PYTHONPATH="${PYTHONPATH}:/usr/lib/python2.7/" > > > export PYTHONPATH Since this is one of my first tests, a lot of code was copy-pasted; maybe there's something wrong with it, but I can't spot it. I will be very pleased with any suggestions on this point. Also below is the structure and content of my so-called PageObject implementation: 1.
**ui** * **base** + \_\_init\_\_.py + basePage.py + configs.py + wrapper.py * **pages** + \_\_init\_\_.py + loginPage.py * **tests** + \_\_init\_\_.py + test\_login.py * **\_\_init\_\_.py** > **ui/\_\_init\_\_.py**: ``` __author__ = 'testuser' ``` > **ui/base/\_\_init\_\_.py**: ``` from wrapper import SeleniumWrapper from basePage import BasePage selenium_driver = SeleniumWrapper() ``` > **ui/base/basePage.py**: ``` class BasePage(object): def __init__(self, driver): self.driver = driver def get_current_url(self): return str(self.driver.current_url) ``` > **ui/base/configs.py**: ``` import os try: os.environ["HOST"] HOST = os.environ["HOST"] except KeyError: os.environ["HOST"] = 'http://www.website.com' HOST = str(os.environ["HOST"]) PORT = '' BASE_URL_US = '%s:%s/en/' % (HOST, PORT) EMAIL = 'test.user@gmail.com' PASSWORD = 'secret' ``` > **ui/base/wrapper.py**: ``` from selenium import webdriver import configs class SeleniumWrapper: _instance = None def __new__(cls, *args, **kwargs): if not cls._instance: cls._instance = super(SeleniumWrapper, cls).__new__(cls, *args, **kwargs) return cls._instance def connect(self, host=configs.BASE_URL_US): self.driver = webdriver.Firefox() self.base_url = host return self.driver ``` > **ui/pages/\_\_init\_\_.py**: ``` from loginPage import LoginPage ``` > **ui/pages/loginPage.py**: ``` from base import BasePage class LoginPage(object): login_page_link = '.log-in>a' email_field_locator = 'email' password_field_locator = 'password' login_button_locator = 'submit' def __init__(self, driver, base_url): self.driver = driver self.driver.get(base_url) def login_action(self, email, password): login_page = self.driver.find_element_by_css_selector(self.login_page_link) email_element = self.driver.find_element_by_id(self.email_field_locator) password_element = self.driver.find_element_by_id(self.password_field_locator) login_button = self.driver.find_element_by_id(self.login_button_locator) login_page.click()
email_element.send_keys(email) password_element.send_keys(password) login_button.click() ``` > **ui/tests/\_\_init\_\_.py**: ``` __author__ = 'testuser' ``` > **ui/tests/test\_login.py**: ``` import sys import os import pytest if __name__ == '__main__': sys.path.append(os.path.dirname(__file__) + '/../') from ui.base import selenium_driver from ui.pages import LoginPage from ui.base import configs @pytest.mark.ui class TestLoginLogout(object): @classmethod def setup_class(cls): cls.verificationErrors = [] cls.driver = selenium_driver.connect() cls.driver.implicitly_wait(10) cls.base_url = selenium_driver.base_url cls.email = configs.EMAIL cls.password = configs.PASSWORD @classmethod def teardown_class(cls): cls.driver.quit() assert cls.verificationErrors == [] def test_login_positive(self): welcome_page = LoginPage(self.driver, self.base_url) login_page = welcome_page.login_action(self.email, self.password) # assert 'property' in login_page.get_current_url() if __name__ == '__main__': pytest.main([__file__, "-s"]) ```
Your `base` module is located inside the `ui` package. Hence the line: ``` from base import BasePage ``` should be: ``` from ui.base import BasePage ```
i was just a beginner when this error struck what i did was instead of: ``` from models.post import Post ``` i did the following after installing post as package: ``` from post import Post ``` it worked for me
ImportError: No module named base
[ "", "python", "python-2.7", "selenium-webdriver", "pytest", "pageobjects", "" ]
I am working in Oracle Apex 4.2. I have two tables, and a simple report to build: ``` select id, name, location_id from tablel1 ----------------------------------- | ID | NAME | PROJECT_ID | ----------------------------------- | 1 | P1 | 23:45:56 | | 2 | P2 | 23 | | 3 | P3 | 45:65 | ----------------------------------- ------------------------------------------ | ID | NAME | SITE | ------------------------------------------ | 23 | Orlando | SITE1 | | 45 | Arizona | SITE2 | | 65 | Miami | SITE3 | ------------------------------------------ ``` However, the problem I am having is that location_id holds only the id, so it needs to look up a different table for the concatenated value of two columns (name ||' - '||site ). It would be dead simple, however there is another curve ball: location_id holds the results of a shuttle, so it is populated by values like 34:45:56:67. I need to convert that to: Orlando - SITE1, Arizona - SITE2, Miami - SITE3, so all those results are returned in ONE ROW of the report. As this is a report, it can be done by: transferring the report column into 'Display as text based on LOV', building a PL/SQL block which generates a SQL statement and loops through values... etc. I tried many approaches and I am running out of ideas and time to solve this problem. Any help greatly appreciated.
With SQL only (Oracle 11g): ``` select x.id, x.name, listagg(t2.name || t2.site, ', ') within group (order by t2.id) from ( select distinct t1.id, t1.name, regexp_substr(t1.project_id, '[^:]+', 1, level) id_site from tablel1 t1 connect by level <= regexp_count(t1.project_id, ':') + 1 ) x, table22 t2 where t2.id = x.id_site group by x.id, x.name ``` This gives: ``` 1 P1 Orlando - SITE1, Arizona - SITE2, Miami - SITE3 2 P2 Orlando - SITE1 3 P3 Arizona - SITE2, Miami - SITE3 ```
Here is procedure: ``` DECLARE CURSOR c (p_id NUMBER) IS SELECT NAME||' - '||SITE FROM TABLE2 WHERE ID = p_id; l_tsv VARCHAR2(1000) := '23:45:56'; l_item NUMBER; lc_t VARCHAR(200); lc_result VARCHAR2(4000); BEGIN FOR i IN 1 .. LENGTH(l_tsv) - LENGTH(REPLACE(l_tsv, ':', '')) + 1 LOOP l_item := REGEXP_SUBSTR(l_tsv, '[^:]+', 1, i); OPEN c (l_item); FETCH c INTO lc_t; CLOSE c; lc_result := lc_result ||', '||lc_t; END LOOP; lc_result := SUBSTR(lc_result,3); dbms_output.put_line(lc_result); END; ``` **UPDATE** Function: ``` CREATE OR REPLACE FUNCTION some_name(l_tsv VARCHAR2) RETURN VARCHAR2 IS CURSOR c (p_id NUMBER) IS SELECT NAME||' - '||SITE FROM TABLE2 WHERE ID = p_id; l_item NUMBER; lc_t VARCHAR(200); lc_result VARCHAR2(4000); BEGIN FOR i IN 1 .. LENGTH(l_tsv) - LENGTH(REPLACE(l_tsv, ':', '')) + 1 LOOP l_item := REGEXP_SUBSTR(l_tsv, '[^:]+', 1, i); OPEN c (l_item); FETCH c INTO lc_t; CLOSE c; lc_result := lc_result ||', '||lc_t; END LOOP; lc_result := SUBSTR(lc_result,3); RETURN (lc_result); END some_name; ```
Conversion of multiple values within one column
[ "", "sql", "oracle", "report", "oracle-apex", "lov", "" ]
I just started learning Python and was wondering if there was a way to shortcut a line of code. For example, could I use something along the lines of: ``` command = input() if command = "create turtle" t =turtle.Pen() ``` or ``` turtleCommand = input() if turtleCommand = "circle" t.forward(100) t.left(91) ``` The turtle thing is just hypothetical; maybe a string that is "inputted" (if that's a word) could activate a defined function.
You can write a function: ``` def draw_circle(t): t.forward(100) t.left(91) ``` And then call it: ``` t = turtle.Pen() command = input() if command == "circle": draw_circle(t) elif command == "stuff": ... ``` A more robust solution would be to use a dictionary that maps commands to functions: ``` commands = { "circle": draw_circle, "square": draw_square } ``` And then get a function by name: ``` t = turtle.Pen() turtle_command = input() command = commands[turtle_command] command(t) ```
``` def docircle(pen): pen.forward(100) pen.left(91) commands = { 'circle': docircle, ... } ... commands[turtleCommand](t) ```
How To create a shortcut for a command in python 3.3
[ "", "python", "python-3.x", "" ]
I would like to test if a string is computable using the built-in function "eval". I would like to make a condition in my definition: I want it to produce True if the string is computable using eval, and False if trying to "eval" the string produces an error. Any functions that would help me do so? Thanks. Example: t="(8+(2-4)" s="8+(2-4)" eval(s) would produce 6 eval(t) would produce an error I want to be able to use these two conditions in my definition, where I would be expecting either an integer or an error from the eval expression
I'm assuming you want to check the syntax before making a call to eval(). You can try [ast.parse](http://docs.python.org/2/library/ast.html#ast.parse), as mentioned in this other [answer](https://stackoverflow.com/a/11854793/1046267). (Example as given in that answer, for easier reference): ``` import ast def is_valid_python(code): try: ast.parse(code) except SyntaxError: return False return True >>> is_valid_python('1 // 2') True >>> is_valid_python('1 /// 2') False ```
``` def f(string): try: return eval(string) except: return False ```
Python Conditions
[ "", "python", "wing-ide", "" ]
``` CREATE TABLE app_for_leave ( sno integer NOT NULL, eid integer, ename varchar(20), sd date, ed date, sid integer, status boolean DEFAULT false, CONSTRAINT pk_snoa PRIMARY KEY (sno) ); ``` Basic Insertion is :: ``` INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status) VALUES(1,101,'2013-04-04','2013-04-04',2,'f' ); ``` ... ``` INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status) VALUES (?, ?, ?, ?, ?, ?); ``` My Requirement:: How to insert data into a table using **stored procedures**?
PostgreSQL [didn't support stored procedures](https://wiki.postgresql.org/wiki/FAQ#Does_PostgreSQL_have_stored_procedures.3F) until PG11. Prior to that, you could get the same result using a function. For example: ``` CREATE FUNCTION MyInsert(_sno integer, _eid integer, _sd date, _ed date, _sid integer, _status boolean) RETURNS void AS $BODY$ BEGIN INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status) VALUES(_sno, _eid, _sd, _ed, _sid, _status); END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100; ``` You can then call it like so: ``` select * from MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' ); ``` The main limitations on Pg's stored functions - as compared to true stored procedures - are: 1. inability to return multiple result sets 2. no support for autonomous transactions (BEGIN, COMMIT and ROLLBACK within a function) 3. no support for the SQL-standard CALL syntax, though the ODBC and JDBC drivers will translate calls for you. [Example](http://sqlfiddle.com/#!12/89188/2) Starting from PG11, the `CREATE PROCEDURE` syntax is [introduced](http://www.postgresqltutorial.com/postgresql-create-procedure/) which provides support for transactions. ``` CREATE PROCEDURE MyInsert(_sno integer, _eid integer, _sd date, _ed date, _sid integer, _status boolean) LANGUAGE SQL AS $BODY$ INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status) VALUES(_sno, _eid, _sd, _ed, _sid, _status); $BODY$; ``` Which could be called with: ``` CALL MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' ); ```
Starting from PostgreSQL 11 you could [create stored procedures](https://www.postgresql.org/docs/11/static/sql-createprocedure.html) and invoke them using [CALL](https://www.postgresql.org/docs/11/static/sql-call.html): ``` CREATE PROCEDURE MyInsert(_sno integer, _eid integer, _sd date, _ed date, _sid integer, _status boolean) LANGUAGE SQL AS $$ INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status) VALUES(_sno, _eid, _sd, _ed, _sid, _status); $$; CALL MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' ); ``` Plus it allows to [handle transaction](https://www.postgresql.org/about/news/1855/) > SQL Stored Procedures > > **PostgreSQL 11 introduces SQL stored procedures that allow users to use > embedded transactions (i.e. BEGIN, COMMIT/ROLLBACK) within a > procedure.** Procedures can be created using the CREATE PROCEDURE > command and executed using the CALL command.
How to insert data into table using stored procedures in postgresql
[ "", "sql", "postgresql", "postgresql-9.2", "" ]
I have trouble getting the proper data. My table structure is: ``` id INT(11) AI order_id INT(11) status varchar(45) ``` This table logs status changes for orders, so each order_id will have a few statuses. Now I need to select rows and group them by order_id, but only for orders that never had the status 'example' (not even one row with that order_id). We don't show orders where even one member row had status = 'example'. Sample data: ``` 1 12 ready 1 12 example 2 13 ready 2 13 sent ``` So I don't want order 12 to show at all, because one of its rows has the "example" status. I've tried grouping the results, but it's not enough.
Not quite sure if you want the records for orders which have had a status of example, or ones which have never had a status of example. To get a list of orders (with the status grouped up) which have had a status of example: ``` SELECT a.order_id, GROUP_CONCAT(a.status) FROM SomeTable a INNER JOIN ( SELECT order_id, COUNT(*) FROM SomeTable WHERE status = 'example' GROUP BY order_id ) b ON a.order_id = b.order_id GROUP BY order_id ``` To get those which have NEVER had a status of example: ``` SELECT a.order_id, GROUP_CONCAT(a.status) FROM SomeTable a LEFT OUTER JOIN ( SELECT order_id, COUNT(*) FROM SomeTable WHERE status = 'example' GROUP BY order_id ) b ON a.order_id = b.order_id WHERE b.order_id IS NULL GROUP BY order_id ``` EDIT ``` SELECT a.order_id, GROUP_CONCAT(a.status) FROM SomeTable a -- Statuses LEFT OUTER JOIN ( SELECT order_id, COUNT(*) FROM SomeTable WHERE status = 'example' GROUP BY order_id ) b -- Get any order id which has had a status of example (as a LEFT JOIN) ON a.order_id = b.order_id INNER JOIN ( SELECT order_id, MAX(id) AS Latestid FROM SomeTable GROUP BY order_id ) c -- Get the latest status for each order (ie, max id) ON a.order_id = c.order_id LEFT OUTER JOIN ( SELECT order_id, id FROM SomeTable WHERE status = 'example2' ) d -- Get the id of the order status of example2 ON a.order_id = d.order_id AND c.Latestid = d.id -- join on the same order id and that the record id matches the latest record id WHERE b.order_id IS NULL -- reject those where a match was found on example for any status AND d.order_id IS NULL -- reject those where a match was found on example2 for the latest status GROUP BY order_id ```
you can do it by simple join query : ``` select a.order_id from ordrstatus as a left outer join (select orderid , count(*) as status from orderstatus where status = 'example' group by orderid) as b on a.orderid = b.orderid where b.status = 0 or b.status is NUll ``` Join query always run faster then IN query . by using Join in query it will run only one time .
MySQL: SELECT if non of group by members is equal to x
[ "", "mysql", "sql", "" ]
I am getting the following error when trying to do a bulk insert into an *empty* mongodb collection. > pymongo.errors.DuplicateKeyError: E11000 duplicate key error index: > cmdDistros.locDistro.$*id* dup key: { : > ObjectId('51dac9d0c74cd81acd85c0fd') } I am not specifying an \_id when I create any of the documents, so mongodb should create the unique index correct? Here is the code I used: ``` #Populate database with uniform distribution entries = [] for coor in freeIndices: for theta in range(360): entry = {"x" : coor[0], "y" : coor[1], "heading" : theta} for i in range(numData): entry["data" + str(i)] = 1./numData entries.append(entry) print "Entries created, loading into database..." locDistro.insert(entries) ``` Taking fate out of mongoDB's hands, I tried creating my own index using: ``` #Populate database with uniform distribution entries = [] idNum = 0 for coor in freeIndices: for theta in range(360): print idNum entry = {"_id" : idNum, "x" : coor[0], "y" : coor[1], "heading" : theta} idNum += 1 for i in range(numData): entry["data" + str(i)] = 1./numData entries.append(entry) print "Entries created, loading into database..." locDistro.insert(entries, manipulate = False) ``` The print statement showed each idnum as the documents were created, and they were all unique and incremented just as expected. However on insert, I received the error: > pymongo.errors.DuplicateKeyError: E11000 duplicate key error index: > cmdDistros.locDistro.$*id* dup key: { : 0 } and only one document was inserted into my database. I am completely stumped, anyone have an answer as to why this might be happening?
You need to understand that your entries list has a bunch of references to **one** entry dict. So when PyMongo sets entries[0]['\_id'], all the other entries get the same \_id. (In fact, PyMongo will iterate through the list setting each entry's \_id, so all the entries will have the **final** \_id at the end.) A quick fix would be: ``` entries.append(entry.copy()) ``` This is merely a shallow copy, but in the code you shared I believe this is enough to fix your problem.
Delete the key `"_id"`: ``` for i in xrange(2): doc['i'] = i if '_id' in doc: del doc['_id'] collection.insert(doc) ``` Or manually create a new one: ``` from bson.objectid import ObjectId for i in xrange(2): doc['i'] = i doc['_id'] = ObjectId() collection.insert(doc) ``` [Getting "err" : "E11000 duplicate key error when inserting into mongo using the Java driver](https://stackoverflow.com/questions/21119928/getting-err-e11000-duplicate-key-error-when-inserting-into-mongo-using-the)
MongoDB insert raises duplicate key error
[ "", "python", "mongodb", "pymongo", "database", "" ]
# My Problem: I have this problem where if I try to run py2exe on my Python file that uses PyQt/PySide, I get the following error when trying to run the EXE generated in I:\Documents\Python\Buttonio_Testio\dist : ## Error received when running Package.exe: ``` I:\Documents\Python\Buttonio_Testio\dist>Package.exe Traceback (most recent call last): File "Package.py", line 1, in <module> File "PySide\__init__.pyc", line 55, in <module> File "PySide\__init__.pyc", line 11, in _setupQtDirectories File "PySide\_utils.pyc", line 87, in get_pyside_dir File "PySide\_utils.pyc", line 83, in _get_win32_case_sensitive_name File "PySide\_utils.pyc", line 58, in _get_win32_short_name WindowsError: [Error 3] The system cannot find the path specified. ``` ## My setup.py looks like this: ``` from distutils.core import setup import py2exe setup(console=['Package.py']) ``` ## My program, in Package.py, looks like this: ``` from PySide.QtCore import * from PySide.QtGui import * import sys import Gui class Ui_Dialog(QDialog, Gui.Ui_Dialog): #Setupui and function afterwards generated converting XML file made by QT Designer to python.
def setupUi(self, Dialog): Dialog.setObjectName("Dialog") Dialog.resize(279, 295) self.textBrowser = QtGui.QTextBrowser(Dialog) self.textBrowser.setGeometry(QtCore.QRect(10, 10, 256, 192)) self.textBrowser.setObjectName("textBrowser") self.pushButton = QtGui.QPushButton(Dialog) self.pushButton.setGeometry(QtCore.QRect(10, 210, 251, 71)) self.pushButton.setObjectName("PushButton") self.retranslateUi(Dialog) QtCore.QObject.connect(self.pushButton, QtCore.SIGNAL("clicked()"), self.textBrowser.clear) QtCore.QMetaObject.connectSlotsByName(Dialog) def retranslateUi(self, Dialog): Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Dialog", None, QtGui.QApplication.UnicodeUTF8)) self.pushButton.setText(QtGui.QApplication.translate("Dialog", "Clear Spam", None, QtGui.QApplication.UnicodeUTF8)) def __init__(self, parent=None): super(Ui_Dialog, self).__init__(parent) self.setupUi(self) self.spam() def spam(self): self.textBrowser.append("SPAM") self.textBrowser.append("SPAM") self.textBrowser.append("SPAM") self.textBrowser.append("LOL") self.textBrowser.append("I") self.textBrowser.append("AM") self.textBrowser.append("SPAMMING") self.textBrowser.append("MYSELF") app = QApplication(sys.argv) form = Ui_Dialog() form.show() app.exec_() ``` I am running windows 8 with 32bit python and 32bit modules installed. The setup.py file was in the same folder as Package.py when I ran setup.py in Command Prompt. Besides that I don't know what other information may help fix my problem. That's all, thank you for any answers in advance.
The problem was that \_utils.py used `__file__`, but `__file__` is not available in frozen executables. This has been fixed in PySide 1.2.1 that was released a few days ago (it was fixed by commit [817a5c9bd39d3a22e2a7db9aa497059be57d58d7](https://qt.gitorious.org/pyside/pyside/commit/817a5c9bd39d3a22e2a7db9aa497059be57d58d7)).
I was desperately looking for a solution to this for the past three hours. I ended up going into the `__init__.py` file in the C:\Python27\Lib\site-packages\PySide directory. I changed the last line to the following: ``` try: _setupQtDirectories() except WindowsError: pass ``` It is ugly, I admit. I hope the PySide people will fix this soon.
System path error with PyQt and Py2exe
[ "", "python", "pyqt", "exe", "pyside", "py2exe", "" ]
I do this `linear regression` with `StatsModels`: ``` import numpy as np import statsmodels.api as sm from statsmodels.sandbox.regression.predstd import wls_prediction_std n = 100 x = np.linspace(0, 10, n) e = np.random.normal(size=n) y = 1 + 0.5*x + 2*e X = sm.add_constant(x) re = sm.OLS(y, X).fit() print(re.summary()) prstd, iv_l, iv_u = wls_prediction_std(re) ``` My questions are: are `iv_l` and `iv_u` the upper and lower *confidence intervals* or *prediction intervals*? And how do I get the others? I need the confidence and prediction intervals for all points, to make a plot.
**update** see [the second answer](https://stackoverflow.com/a/47191929/1234438) which is more recent. Many of the models and results classes have now a `get_prediction` method that provides additional information including prediction intervals and/or confidence intervals for the predicted mean. **old answer:** `iv_l` and `iv_u` give you the limits of the prediction interval for each point. Prediction interval is the confidence interval for an observation and includes the estimate of the error. I think, confidence interval for the mean prediction is not yet available in `statsmodels`. (Actually, the confidence interval for the fitted values is hiding inside the summary\_table of influence\_outlier, but I need to verify this.) Proper prediction methods for statsmodels are on the TODO list. **Addition** Confidence intervals are there for OLS but the access is a bit clumsy. To be included after running your script: ``` from statsmodels.stats.outliers_influence import summary_table st, data, ss2 = summary_table(re, alpha=0.05) fittedvalues = data[:, 2] predict_mean_se = data[:, 3] predict_mean_ci_low, predict_mean_ci_upp = data[:, 4:6].T predict_ci_low, predict_ci_upp = data[:, 6:8].T # Check we got the right things print np.max(np.abs(re.fittedvalues - fittedvalues)) print np.max(np.abs(iv_l - predict_ci_low)) print np.max(np.abs(iv_u - predict_ci_upp)) plt.plot(x, y, 'o') plt.plot(x, fittedvalues, '-', lw=2) plt.plot(x, predict_ci_low, 'r--', lw=2) plt.plot(x, predict_ci_upp, 'r--', lw=2) plt.plot(x, predict_mean_ci_low, 'r--', lw=2) plt.plot(x, predict_mean_ci_upp, 'r--', lw=2) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/KkjT5.png) This should give the same results as SAS, <http://jpktd.blogspot.ca/2012/01/nice-thing-about-seeing-zeros.html>
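To make the distinction concrete without statsmodels, here is a pure-Python sketch of both intervals for simple linear regression, using the textbook closed-form standard errors. (Assumption flagged: a normal quantile stands in for the proper Student-t quantile to stay within the standard library, so this is only a large-sample approximation.)

```python
from math import sqrt
from statistics import NormalDist

def ols_intervals(xs, ys, x0, alpha=0.05):
    """Mean-prediction CI and prediction interval at x0 for y = a + b*x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = sqrt(rss / (n - 2))                  # residual standard error
    z = NormalDist().inv_cdf(1 - alpha / 2)  # stand-in for t_{n-2}
    yhat = a + b * x0
    se_mean = s * sqrt(1 / n + (x0 - xbar) ** 2 / sxx)      # CI for the mean
    se_pred = s * sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)  # adds obs. noise
    return ((yhat - z * se_mean, yhat + z * se_mean),
            (yhat - z * se_pred, yhat + z * se_pred))

mean_ci, pred_ci = ols_intervals([0, 1, 2, 3, 4, 5],
                                 [0.1, 1.2, 1.9, 3.2, 3.9, 5.1], x0=2.5)
```

The extra `1 +` inside `se_pred` is exactly the difference `wls_prediction_std` captures: the prediction interval adds the noise of a single new observation on top of the uncertainty of the fitted mean, so it is always the wider band.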
For test data you can try to use the following.

``` 
predictions = result.get_prediction(out_of_sample_df) 
predictions.summary_frame(alpha=0.05) 
```

I found the summary\_frame() method buried [here](https://github.com/statsmodels/statsmodels/issues/987#issuecomment-133575422) and you can find the get\_prediction() method [here](http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLSResults.get_prediction.html). You can change the significance level of the confidence interval and prediction interval by modifying the "alpha" parameter.

I am posting this here because this was the first post that comes up when looking for a solution for confidence & prediction intervals – even though this answer concerns itself with test data rather than the in-sample data asked about.

Here's a function to take a model, new data, and an arbitrary quantile, using this approach:

``` 
def ols_quantile(m, X, q): 
  # m: OLS model. 
  # X: X matrix. 
  # q: Quantile. 
  # 
  # Set alpha based on q. 
  a = q * 2 
  if q > 0.5: 
    a = 2 * (1 - q) 
  predictions = m.get_prediction(X) 
  frame = predictions.summary_frame(alpha=a) 
  if q > 0.5: 
    return frame.obs_ci_upper 
  return frame.obs_ci_lower 
```
confidence and prediction intervals with StatsModels
[ "", "python", "statistics", "statsmodels", "" ]
Given, for example, the following from table `alpha`: ``` Field1 | Field2 | Field3 ------------------------ Foo | Bar | ABCD ``` How could I break this data down into: ``` Field1 | Field2 | Field3 ------------------------ Foo | Bar | A Foo | Bar | B Foo | Bar | C Foo | Bar | D ``` I'm sure there's a fancy `join` trick that could do it, but I can't figure it out. Speed optimisation isn't a priority - this query is only being used for a one-off report, so I don't mind if it's slow as molasses (gives me chance to make a coffee!)
You can do it easily with the following two steps:

1. Create a SQL table-valued function that splits a word into its characters. You can do that by running the following script:

``` 
CREATE FUNCTION [dbo].[SPLITWORD]( 
@WORD VARCHAR(MAX) 
) 
RETURNS @words TABLE (item VARCHAR(8000)) 
AS 
BEGIN 
    declare @count int, @total int 
    select @total = len(@WORD), @count = 1 

    while @count <= @total 
    begin 
        insert into @words 
        select substring(@WORD, @count, 1) 

        select @count = @count + 1 
    end 

    RETURN 
END 
```

2. Run the following query, which returns the result you want:

``` 
SELECT A.Field1, 
       A.Field2, 
       B.item 
FROM alpha AS A 
CROSS APPLY ( 
    SELECT * 
    FROM SPLITWORD(A.Field3) 
    WHERE item != '' 
) AS B 
```
Something like: ``` declare @alpha table (Field1 varchar(20), Field2 varchar(20), Field3 varchar(6)) insert into @alpha(Field1, Field2, Field3) values ('Foo','Bar','ABCD') ;With Numbers(n) as ( select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 ) select Field1,Field2,SUBSTRING(Field3,n,1) from @alpha inner join Numbers on n <= LEN(Field3) ``` (unless you already have a convenient `Numbers` table you haven't mentioned in the question, in which case its even simpler) Result: ``` Field1 Field2 -------------------- -------------------- ---- Foo Bar A Foo Bar B Foo Bar C Foo Bar D ```
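For what it's worth, the same numbers-table idea is easy to sanity-check with SQLite from Python, generating the positions with a recursive CTE instead of a hand-built `Numbers` list (SQLite syntax here; the `SUBSTRING`/`LEN` function names differ on SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alpha (Field1 TEXT, Field2 TEXT, Field3 TEXT)")
conn.execute("INSERT INTO alpha VALUES ('Foo', 'Bar', 'ABCD')")

rows = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM numbers WHERE n < 100   -- cap recursion depth
    )
    SELECT Field1, Field2, substr(Field3, n, 1)
    FROM alpha
    JOIN numbers ON n <= length(Field3)
    ORDER BY Field1, n
""").fetchall()
# one output row per character of Field3
```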
SQL - Break down a field's data into separate rows
[ "", "sql", "sql-server", "" ]
Although I may be very confused as to what the [`property()`](http://docs.python.org/2/library/functions.html#property) function does, I'm trying to use it to create an attribute which is read-only. Ideally, I'd like to be able to refer to the attribute directly but not be allowed to assign to it. When experimenting, I got this very curious behavior: ``` >>> class Boo(object): ... def __init__(self, x): ... self.__x = x ... def getx(self): ... return self.__x ... x = property(getx) ... >>> b = Boo(1) >>> b.__x = 2 >>> b.getx() 1 >>> b.__x 2 ``` I'd like to add that when I used `x` and `_x` as the attribute names, reassigning the attribute caused the getter to return the changed value, i.e. both `b.getx()` and `b.x`/`b._x` gave me `2`. I realize that I'm using `x` as the property name, though, but when I tried the following I got an `AttributeError` in my `__init__()`: ``` >>> class Boo(object): ... def __init__(self, x): ... self.__x = x ... def getx(self): ... return self.__x ... __x = property(getx) ... >>> b = Boo(1) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in __init__ AttributeError: can't set attribute ```
The problem here has nothing to do with `property`, but with double-underscore attributes, which are subject to what's called ["Private name mangling"](http://docs.python.org/2/reference/expressions.html#atom-identifiers). ``` >>> b = Boo(1) >>> '__x' in dir(b) False >>> '_Boo__x' in dir(b) True ``` So, when you do this: ``` >>> b.__x = 2 ``` You're not changing the value of the attribute the `getx` function is looking at, you're creating a *new* attribute. If you just use a name for the attribute that doesn't start with two underscores—such as `_x`—everything works as you intended. As a general rule, use a single underscore for "advisory private"—as in, "users of this object probably shouldn't care about this value", and a double underscore only when you actually need mangling (because of complex inheritance issues that rarely come up). What if you want "real private", like C++ or Java? You can't have it. If you hide or protect the attribute well enough, someone will just monkeypatch the `getx` method or the `x` property. So, Python doesn't give a way to hide attributes.
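Both points can be demonstrated in a few lines — the mangled storage name, and the read-only behaviour the question was after, done with a single underscore plus a getter-only `property` (class and attribute names here are illustrative):

```python
class Boo(object):
    def __init__(self, x):
        self._x = x          # single underscore: advisory private, no mangling

    @property
    def x(self):             # read-only: no setter defined
        return self._x

b = Boo(1)
assert b.x == 1
try:
    b.x = 2                  # assignment to a getter-only property fails
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")

class Mangled(object):
    def __init__(self):
        self.__y = 1         # double underscore: stored as _Mangled__y

m = Mangled()
assert "_Mangled__y" in dir(m)
```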
Your problem is that using double underscore attribute names *mangles the name*. So when you are dealing with `__x` inside of your class definition, outside of the class it actually looks like `_Boo__x`. That is, > \_ + (class name) + (double underscore attribute name) To demonstrate, ``` >>> b = Boo(1) >>> b.__x = 2 >>> b.getx() 1 >>> b.x # NOTE: same as calling getx 1 >>> b.__x # why didn't x return 2 if we changed it? 2 >>> b._Boo__x # because it's actually saved in this attribute 1 >>> b._Boo__x = 3 # setting it here then works >>> b.x 3 >>> b.getx() 3 ```
Attempting to create a read-only property attribute - getter returns initialized value, direct access returns changed value
[ "", "python", "oop", "python-2.7", "properties", "" ]
``` > pip install yolk Downloading/unpacking yolk Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement yolk No distributions at all found for yolk Storing complete log in /Users/harith/.pip/pip.log ``` when I read the file I see ``` > cat /Users/harith/.pip/pip.log ------------------------------------------------------------ /Users/harith/.shared/virtualenvs/pennytracker/bin/pip run on Mon Jul 1 20:26:02 2013 Downloading/unpacking yolk Getting page https://pypi.python.org/simple/yolk/ Could not fetch URL https://pypi.python.org/simple/yolk/: HTTP Error 503: Service Unavailable Will skip URL https://pypi.python.org/simple/yolk/ when looking for download links for yolk Getting page https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/: HTTP Error 503: Service Unavailable Will skip URL https://pypi.python.org/simple/ when looking for download links for yolk Cannot fetch index base URL https://pypi.python.org/simple/ URLs to search for versions for yolk: * https://pypi.python.org/simple/yolk/ Getting page https://pypi.python.org/simple/yolk/ Could not fetch URL https://pypi.python.org/simple/yolk/: HTTP Error 503: Service Unavailable Will skip URL https://pypi.python.org/simple/yolk/ when looking for download links for yolk Could not find any downloads that satisfy the requirement yolk No distributions at all found for yolk Exception information: Traceback (most recent call last): File "/Users/harith/.shared/virtualenvs/pennytracker/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/basecommand.py", line 139, in main status = self.run(options, args) File "/Users/harith/.shared/virtualenvs/pennytracker/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/commands/install.py", line 266, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File 
"/Users/harith/.shared/virtualenvs/pennytracker/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/req.py", line 1026, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/Users/harith/.shared/virtualenvs/pennytracker/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg/pip/index.py", line 171, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) DistributionNotFound: No distributions at all found for yolk ``` Am i doing anything wrong?
This is the full text of the blog post linked below: If you've tried installing a package with pip recently, you may have encountered this error: ``` Could not fetch URL https://pypi.python.org/simple/Django/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:504: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm> Will skip URL https://pypi.python.org/simple/Django/ when looking for download links for Django==1.5.1 (from -r requirements.txt (line 1)) Could not fetch URL https://pypi.python.org/simple/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:504: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm> Will skip URL https://pypi.python.org/simple/ when looking for download links for Django==1.5.1 (from -r requirements.txt (line 1)) Cannot fetch index base URL https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/Django/1.5.1: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:504: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm> Will skip URL https://pypi.python.org/simple/Django/1.5.1 when looking for download links for Django==1.5.1 (from -r requirements.txt (line 1)) Could not fetch URL https://pypi.python.org/simple/Django/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:504: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm> Will skip URL https://pypi.python.org/simple/Django/ when looking for download links for Django==1.5.1 (from -r requirements.txt (line 1)) Could not find any downloads that satisfy the requirement Django==1.5.1 (from -r requirements.txt (line 1)) No distributions at all found for Django==1.5.1 (from -r requirements.txt (line 1)) Storing complete log in /Users/paul/.pip/pip.log ``` This seems to be an issue with an old version of OpenSSL 
being incompatible with pip 1.3.1. If you're using a non-stock Python distribution (notably EPD 7.3), you're very likely to have a setup that isn't going to work with pip 1.3.1 without a shitload of work. The easy workaround for now, is to install pip 1.2.1, which does not require SSL: ``` curl -O https://pypi.python.org/packages/source/p/pip/pip-1.2.1.tar.gz tar xvfz pip-1.2.1.tar.gz cd pip-1.2.1 python setup.py install ``` If you are using EPD, and you're not using it for a class where things might break, you may want to consider installing the new incarnation: Enthought Canopy. I know they were aware of the issues caused by the previous version of OpenSSL, and would imagine they are using a new version now that should play nicely with pip 1.3.1.
I used to use the `easy_install pip==1.2.1` workaround, but I randomly [found](https://stackoverflow.com/questions/25662073/python-django-cannot-use-imagefield-because-pillow-is-not-installed) that if you're having this bug, you probably installed a 32-bit version of Python on a 64-bit machine.

In short: if you install a 64-bit version of Python by building it from source and then build your virtualenv upon it, you won't have that pip bug anymore.
pip cannot install anything
[ "", "python", "pip", "" ]
I have a tab delimited file with \n EOL characters that looks something like this: ``` User Name\tCode\tTrack\tColor\tNote\n\nUser Name2\tCode2\tTrack2\tColor2\tNote2\n ``` I am taking this input file and reformatting it into a nested list using `split('\t')`. The list should look like this: ``` [['User Name','Code','Track','Color','Note'], ['User Name2','Code2','Track2','Color2','Note2']] ``` The software that generates the file allows the user to press "enter" key any number of times while filling out the "Note" field. It also allows the user to press "enter" creating any number of newlines without entering any visible text in the "Note" field at all. Lastly, the user may press "enter" any number of times in the middle of the "Note" creating multiple paragraphs, but this would be such a rare occurrence from the operational standpoint that I am willing to leave this eventuality not addressed if it complicates the code much. This possibility is really, really low priority. As seen in the sample above, these actions can result in a sequence of "\n\n..." codes of any length preceding, trailing or replacing the "Note" field. Or to put it this way, the following replacements are required before I can place the file object into a list: ``` \t\n\n... preceding "Note" must become \t \n\n... trailing "note" must become \n \n\n... in place of "note" must become \n \n\n... in the middle of the text note must become a single whitespace, if easy to do ``` I have tried using strip() and replace() methods without success. Does the file object need to be copied into something else first before replace() method can be used on it? I have experience with Awk, but I am hoping Regular Expressions are not needed for this as I am very new to Python. 
This is the code that I need to improve in order to address multiple newlines: ``` marker = [i.strip() for i in open('SomeFile.txt', 'r')] marker_array = [] for i in marker: marker_array.append(i.split('\t')) for i in marker_array: print i ```
Count the tabs; if you presume that the note field never has 4 tabs on one line in it, you can collect the note until you find a line that *does* have 4 tabs in it: ``` def collapse_newlines(s): # Collapse multiple consecutive newlines into one; removes trailing newlines return '\n'.join(filter(None, s.split('\n'))) def read_tabbed_file(filename): with open(filename) as f: row = None for line in f: if line.count('\t') < 4: # Note continuation row[-1] += line continue if row is not None: row[-1] = collapse_newlines(row[-1]) yield row row = line.split('\t') if row is not None: row[-1] = collapse_newlines(row[-1]) yield row ``` The above generator function will not yield a row until it is certain that there is no note continuing on the next line, effectively looking ahead. Now use the `read_tabbed_file()` function as a generator and loop over the results: ``` for row in read_tabbed_file(yourfilename): # row is a list of elements ``` Demo: ``` >>> open('/tmp/test.csv', 'w').write('User Name\tCode\tTrack\tColor\tNote\n\nUser Name2\tCode2\tTrack2\tColor2\tNote2\n') >>> for row in read_tabbed_file('/tmp/test.csv'): ... print row ... ['User Name', 'Code', 'Track', 'Color', 'Note'] ['User Name2', 'Code2', 'Track2', 'Color2', 'Note2'] ```
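Since the reader above only iterates over its input, the look-ahead logic is easy to exercise without touching disk. The sketch below restates it as a variant that accepts any iterable of lines (same logic as the answer, only the signature changed):

```python
def collapse_newlines(s):
    # Collapse runs of newlines into one and drop leading/trailing ones.
    return "\n".join(filter(None, s.split("\n")))

def read_tabbed(lines):
    """Variant of read_tabbed_file that takes any iterable of lines."""
    row = None
    for line in lines:
        if line.count("\t") < 4:      # fewer than 4 tabs: note continuation
            row[-1] += line
            continue
        if row is not None:
            row[-1] = collapse_newlines(row[-1])
            yield row
        row = line.split("\t")
    if row is not None:
        row[-1] = collapse_newlines(row[-1])
        yield row

sample = "User Name\tCode\tTrack\tColor\tNote\n\nUser2\tC2\tT2\tCol2\tNote2\n"
rows = list(read_tabbed(sample.splitlines(True)))
```

On the sample input the blank line is absorbed into the first row's note and then collapsed away, yielding two clean five-field rows.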
The first problem you're having is `in` - which tries to be helpful and reads in one line of text from the file at a time. ``` >>> [i for i in open('SomeFile.txt', 'r') ] ['User Name\tCode\tTrack\tColor\tNote\n', '\n', 'User Name2\tCode2\tTrack2\tColor2\tNote2\n', '\n'] ``` Adding in the call to `.strip()` does strip the whitespace from each line, but that leaves you with empty lines - it doesn't take those empty elements out of the list. ``` >>> [i.strip() for i in open('SomeFile.txt', 'r') ] ['User Name\tCode\tTrack\tColor\tNote', '', 'User Name2\tCode2\tTrack2\tColor2\tNote2', ''] ``` However, you can provide in `if` clause to the list comprehension to make it drop lines that only have a newline: ``` >>> [i.strip() for i in open('SomeFile.txt', 'r') if len(i) >1 ] ['User Name\tCode\tTrack\tColor\tNote', 'User Name2\tCode2\tTrack2\tColor2\tNote2'] >>> ```
Remove multiple EOL in file
[ "", "python", "eol", "" ]
I have the two tables below.

table 1: the teacher table

``` 
teacher_id teacher_name 
1 xx 
2 yy 
3 zz 
```

table 2: the student table

``` 
stu_id stu_name tearcher1_id teacher2_id tearcher3_id 
1 aa 1 2 
2 bb 2 3 
3 cc 1 
```

I want to get, with one SQL statement, a list of the teachers together with the count of how often each appears in the student table, as below:

``` 
teacher_id teacher_name num_selected_by_stu 
1 xx 2 
2 yy 2 
3 zz 1 
```

I have tried the SQL below but it doesn't seem to work:

``` 
select * from teacher t1 left join ( 
select stu_id,tearcher1_id,tearcher2_id,tearcher3_id,count(stu_id) as num_selected_by_stu from student 
group by stu_id,tearcher1_id,tearcher2_id,tearcher3_id) t2 
ON ( t2.tearcher1_id=t1.teacher_id or t2.tearcher2_id=t1.teacher_id or t2.tearcher3_id=t1.teacher_id) 
```

Can anyone help?
``` 
SELECT teacher_id, 
       teacher_name, 
       NVL(num1, 0) + NVL(num2, 0) + NVL(num3, 0) as num_selected_by_stu 
FROM teacher t 
left outer join ( 
    SELECT count(*) as num1, tearcher1_id 
    FROM student 
    group by tearcher1_id 
) t1 on t1.tearcher1_id = t.teacher_id 
left outer join ( 
    SELECT count(*) as num2, tearcher2_id 
    FROM student 
    group by tearcher2_id 
) t2 on t2.tearcher2_id = t.teacher_id 
left outer join ( 
    SELECT count(*) as num3, tearcher3_id 
    FROM student 
    group by tearcher3_id 
) t3 on t3.tearcher3_id = t.teacher_id 
; 
```
Please try: ``` SELECT *, (SELECT COUNT(*) FROM Student t2 WHERE t1.teacher_id=t2.tearcher1_id OR t1.teacher_id=t2.teacher2_id OR t1.teacher_id=t2.tearcher3_id) num_selected_by_stu FROM Teacher t1 ```
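The correlated-subquery version is straightforward to sanity-check from Python with SQLite. (Note two cosmetic changes here: the misspelled column names are normalized to `teacher1_id`/`teacher2_id`/`teacher3_id`, and an `IN (...)` list replaces the `OR` chain — they behave the same.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teacher (teacher_id INT, teacher_name TEXT);
    INSERT INTO teacher VALUES (1, 'xx'), (2, 'yy'), (3, 'zz');
    CREATE TABLE student (stu_id INT, stu_name TEXT,
                          teacher1_id INT, teacher2_id INT, teacher3_id INT);
    INSERT INTO student VALUES (1, 'aa', 1, 2, NULL),
                               (2, 'bb', NULL, 2, 3),
                               (3, 'cc', 1, NULL, NULL);
""")

rows = conn.execute("""
    SELECT t1.teacher_id, t1.teacher_name,
           (SELECT COUNT(*) FROM student t2
             WHERE t1.teacher_id IN (t2.teacher1_id, t2.teacher2_id, t2.teacher3_id))
           AS num_selected_by_stu
    FROM teacher t1
    ORDER BY t1.teacher_id
""").fetchall()
# matches the expected output in the question: counts 2, 2, 1
```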
How to select records from one table and count how many times each appears in another table
[ "", "sql", "oracle", "" ]
I have a table that looks like this

``` 
fID_a fID_b 
1 1 
1 2 
2 2 
3 1 
```

I want all `fID_a` values where `fID_b` is 1 AND where that `fID_a` appears in only a single record in the table. I have a SQL query that looks like this

``` 
select fID_a from tbl 
where fID_b = 1 
group by fID_a 
having count(*) = 1 
```

But that query still includes `fID_a` 1, even though there are 2 records for it in the table!
Try this ``` SELECT fID_a FROM tbl GROUP BY fID_a HAVING MAX(fID_b)=1 AND MIN(fID_b)=1 ```
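Run against the sample data (SQLite used here purely as a test harness), this returns just `fID_a = 3`. One caveat worth noting: `MAX(fID_b) = 1 AND MIN(fID_b) = 1` really says "every `fID_b` for this `fID_a` equals 1"; on this data that coincides with the single-record requirement, but a `fID_a` with two rows both carrying `fID_b = 1` would also slip through.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (fID_a INT, fID_b INT);
    INSERT INTO tbl VALUES (1, 1), (1, 2), (2, 2), (3, 1);
""")

rows = conn.execute("""
    SELECT fID_a
    FROM tbl
    GROUP BY fID_a
    HAVING MAX(fID_b) = 1 AND MIN(fID_b) = 1
""").fetchall()
# only fID_a = 3 satisfies both conditions on this sample
```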
If you use a `where` clause then you would filter out other values, which would make the `count` inaccurate.

``` 
select fID_a 
from tbl 
group by fID_a 
having count(*) = 1 and 
       sum(case when fID_b = 1 then 1 else 0 end) = 1 
```
Select distinct value when count = 1 in SQL
[ "", "sql", "select", "group-by", "having-clause", "" ]
In the following code, I want to calculate the percent of G and C characters in a sequence. In Python 3 I correctly get `0.5`, but on Python 2 I get `0`. Why are the results different? ``` def gc_content(base_seq): """Return the percentage of G and C characters in base_seq""" seq = base_seq.upper() return (seq.count('G') + seq.count('C')) / len(seq) gc_content('attacgcg') ```
`/` is a different operator in Python 3; in Python 2 `/` alters behaviour when applied to 2 integer operands and returns the result of a floor-division instead: ``` >>> 3/2 # two integer operands 1 >>> 3/2.0 # one operand is not an integer, float division is used 1.5 ``` Add: ``` from __future__ import division ``` to the top of your code to make `/` use float division in Python 2, or use `//` to force Python 3 to use integer division: ``` >>> from __future__ import division >>> 3/2 # even when using integers, true division is used 1.5 >>> 3//2.0 # explicit floor division 1.0 ``` Using either of these techniques works in Python 2.2 or newer. See [PEP 238](http://www.python.org/dev/peps/pep-0238/) for the nitty-gritty details of why this was changed.
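A compact check of both behaviours under Python 3 semantics (on Python 2 the `gc_content` assertion only holds after `from __future__ import division`):

```python
def gc_content(base_seq):
    """Fraction of G and C characters in base_seq, using true division."""
    seq = base_seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

assert gc_content('attacgcg') == 0.5  # 4 of the 8 bases are G or C
assert 3 / 2 == 1.5   # true division: always the exact quotient
assert 3 // 2 == 1    # floor division: the old Python 2 `/` behaviour on ints
```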
In Python 2.x, `/` performs integer division.

``` 
>>> 3/2 
1 
```

To get the desired result you can change either one of the operands to a float using `float()`:

``` 
>>> 3/2. #3/2.0 
1.5 
>>> 3/float(2) 
1.5 
```

or use `division` from `__future__`:

``` 
>>> from __future__ import division 
>>> 3/2 
1.5 
```
Division in Python 3 gives different result than in Python 2
[ "", "python", "python-3.x", "python-2.x", "" ]
I have imported some data from an Excel table into a SQL table, and I now need to write a view over the data to combine it with some other fields. My problem is that the table is in the following form, with these columns:

**Name** **Project\_One\_ID** **Project\_Two\_ID** **Project\_Three\_ID**

rather than the form I could use, which would be a link table with columns like this:

**Name** **ProjectID**

Is it possible to convert this type of table, or to use it as it is? I could do it in code, but I'm struggling with the SQL. I have two other tables that need to link on to either side of this link table to create my overall view.

Thanks for any pointers.
You could do it as a `UNION` and then join to other tables: ``` SELECT * FROM ( SELECT Name, Project_One_ID ProjectID FROM Projects UNION ALL SELECT Name, Project_Two_ID ProjectID FROM Projects UNION ALL SELECT Name, Project_Three_ID ProjectID FROM Projects ) AS P INNER JOIN ProjectData PD ON P.ProjectID = PD.ProjectID ```
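The reshaping half of this can be verified quickly with SQLite from Python (the join to `ProjectData` is left out, since only the UNION ALL unpivot is being checked; table and column names follow the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Projects (Name TEXT, Project_One_ID INT,
                           Project_Two_ID INT, Project_Three_ID INT);
    INSERT INTO Projects VALUES ('alpha', 10, 20, 30);
""")

rows = conn.execute("""
    SELECT Name, Project_One_ID AS ProjectID FROM Projects
    UNION ALL
    SELECT Name, Project_Two_ID FROM Projects
    UNION ALL
    SELECT Name, Project_Three_ID FROM Projects
    ORDER BY ProjectID
""").fetchall()
# one (Name, ProjectID) row per original project column
```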
Depending on your version of SQL Server you can use CROSS APPLY to UNPIVOT the data. CROSS APPLY and VALUES will work in SQL Server 2008+: ``` select name, ProjectId from yourtable cross apply ( values ('Project_One_ID', Project_One_ID), ('Project_Two_ID', Project_Two_ID), ('Project_Three_ID', Project_Three_ID) ) c (col, ProjectId); ``` If you are using SQL Server 2005, then you can use CROSS APPLY with UNION ALL: ``` select name, ProjectId from yourtable cross apply ( select 'Project_One_ID', Project_One_ID union all select 'Project_Two_ID', Project_Two_ID union all select 'Project_Three_ID', Project_Three_ID ) c (col, ProjectId) ```
SQL flat table query
[ "", "sql", "sql-server", "" ]
I'm plotting some data based on pandas dataframes and series. Following is a part of my code; it gives this error:

``` 
RuntimeError: underlying C/C++ object has been deleted 
```

``` 
from matplotlib import pyplot as plt 
from matplotlib.backends.backend_pdf import PdfPages 

fig = plt.figure() 

dfs = df['col2'].resample('10t', how='count') 
dfs.plot() 
plt.show() 

reg = df.groupby('col1').size() 
reg.sort() 
reg[-10:].plot(kind='barh') 
plt.show() 

pp = PdfPages('foo.pdf') 
fig.savefig(pp, format='pdf') 
pp.close() 
```

I have two questions.

1. How can I plot multiple plots in one output? (Here I get a separate output for each plot.)
2. How can I write all these plots into one PDF?

I found [this](https://stackoverflow.com/questions/11328958/matplotlib-pyplot-save-the-plots-into-a-pdf) as a related question.
Following is the part of the code which gave me the expected result; there may be more elegant ways to do this:

``` 
def plotGraph(X): 
    fig = plt.figure() 
    X.plot() 
    return fig 

plot1 = plotGraph(dfs) 
plot2 = plotGraph(reg[-10:]) 

pp = PdfPages('foo.pdf') 
pp.savefig(plot1) 
pp.savefig(plot2) 
pp.close() 
```
Please see the [following](http://pandas.pydata.org/pandas-docs/dev/visualization.html#targeting-different-subplots) for targeting different subplots with Pandas.

I am assuming you need 2 subplots (in row fashion). Thus, your code may be modified as follows:

``` 
from matplotlib import pyplot as plt 

fig, axes = plt.subplots(nrows=2) 

dfs = df['col2'].resample('10t', how='count') 
dfs.plot(ax=axes[0]) 

reg = df.groupby('col1').size() 
reg.sort() 
reg[-10:].plot(kind='barh', ax=axes[1]) 

plt.savefig('foo.pdf') 
```
Multiple matplotlib plots in same figure + into PDF - Python
[ "", "python", "pdf", "matplotlib", "plot", "pandas", "" ]
I'm looking for a way to essentially screen capture a whole wxPanel, and save it as a PNG. The kicker is, there is no screen. I need to be able to "draw" the panel and save the drawn panel with no actual screen. I'm using Python 2.7, running on Ubuntu 12.04. Is this possible?
I found the trick, and lo, its name is Xvfb: <http://linux.die.net/man/1/xvfb>
I wrote a tutorial on how to do this sort of thing here: * <http://www.blog.pythonlibrary.org/2010/04/16/how-to-take-a-screenshot-of-your-wxpython-app-and-print-it/> However, as someone already pointed out, if X (or similar) is unavailable, then you won't be able to generate a GUI of any sort. Or perhaps you meant a headless box (i.e. no monitor) rather than no screen?
Saving wxPanel to Image
[ "", "python", "python-2.7", "wxpython", "wxwidgets", "image-capture", "" ]