Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
im new to python and programming, and its not easy for me to get that stuff in my mind. because the books i started to read are completely boring, i startet to play around with some ideas. here is what i want to do: open the textfile, count the frequency of every single value (just a list of systemnames), sort the list by frequency, and return the result. after searching the web for some code to do it, i got this here: ``` file = open('C:\\Temp\\Test2.txt', 'r') text = file.read() file.close() word_list = text.lower().split(None) word_freq = {} for word in word_list: word_freq[word] = word_freq.get(word, 0) + 1 list = sorted(word_freq.keys()) for word in list: print ("%-10s %d" % (word, word_freq[word])) ``` It works, but it sorts by the words / systemnames in the list: ``` pc05010 3 pc05012 1 pc05013 8 pc05014 2 ``` I want it like that: ``` pc05013 8 pc05010 3 pc05014 2 pc05012 1 ``` now im searching for the sort-by-value function for hours. i bet its so easy, but i found nothing. for my beginners point of view, it has something to do with this line: ``` list = sorted(word_freq.keys()) ``` i thought maybe its: ``` list = sorted(word_freq.values()) ``` but no.... its very frustrating to me to see all the tons of information about this language, but could not get such simple things to work. please help :) thanks a lot!
You've to use `word_freq.items()` here: ``` lis = sorted(word_freq.items(), key = lambda x:x[1], reverse = True) for word,freq in lis: print ("%-10s %d" % (word, freq)) ``` Don't use `list` as a variable name.
Take a look at collections.Counter ``` >>> wordlist = ['foo', 'bar', 'foo', 'baz'] >>> import collections >>> counter = collections.Counter(wordlist) >>> counter.most_common() [('foo', 2), ('baz', 1), ('bar', 1)] ```
sort list by frequency-value in python
[ "", "python", "sorting", "count", "" ]
I am trying to make my code to ask for a password and receive it, if the password is correct something will happen if not something else will happen, when I tried my code it gives me an error on line 10, and 18: ``` if (password == 'xxxx'): UnboundLocalError: local variable 'password' referenced before assignment ``` Here is the code: ``` import random def askforPassword(): print('Welcome to the Machine!') print('Enter the password: ') password = input() def getPassword(): while passwordTry != 0: if (password == 'xxxx'): print('Correct') else: passwordTry -= 1 print('INCORRECT!') passwordTry = 5 askforPassword() getPassword() ```
Since you're asking why it doesn't work, let's look at the error you get: ``` Traceback (most recent call last): File "pw.py", line 18, in <module> getPassword() File "pw.py", line 10, in getPassword if (password == 'xxxx'): UnboundLocalError: local variable 'password' referenced before assignment ``` What it's saying is that you are trying to access a local variable `'password'`, and you haven't created any such local variable. If you want to use a global variable, say so explicitly: ``` def getPassword(): global password while passwordTry != 0: if (password == 'xxxx'): print('Correct') else: passwordTry -= 1 print('INCORRECT!') ``` But this still won't work, because nobody's setting that global variable, either. You need to change `askforPassword` too: ``` def askforPassword(): global password print('Welcome to the Machine!') print('Enter the password: ') password = input() ``` --- This still has a lot of problems. For example, you only call `askforPassword` once, not once each time through the loop, so it's just going to ask once and then print `INCORRECT!` 5 times. Also, it would be much better to *not* use global variables—has `askforPassword` `return` the password, and store it in a local variable in `getPassword`. ``` def askforPassword(): print('Welcome to the Machine!') print('Enter the password: ') password = input() return password def getPassword(): while passwordTry != 0: password = askforPassword() if (password == 'xxxx'): print('Correct') else: passwordTry -= 1 print('INCORRECT!') ``` And you probably want to return something from `getPassword` too, so whoever calls it knows whether you succeeded or failed.
You might want to think about changing up the logic slightly to something like this: ``` import random def askforPassword(): print('Welcome to the Machine!') print('Enter the password: ') password = input() return password def getPassword(): passwordTry = 5 while passwordTry: if (askforPassword() == 'xxxx'): print('Correct') break else: passwordTry -= 1 print('INCORRECT!') getPassword() ```
Variable referenced before assignment error
[ "", "python", "function", "" ]
In some of the similar questions, this particular problem is either not solved by the suggested work-arounds, or the questions have wavered to different topics. Hence, I had to ask this question : The error returned is : ``` Traceback (most recent call last): File "learn.py", line 8, in <module> frame = simplegui.create_frame("Home", 300, 200) AttributeError: 'module' object has no attribute 'create_frame' ``` This is with respect to the following code ``` import simplegui message = "Welcome!" def click(): global message message = "Good job!" def draw(canvas): canvas.draw_text(message, [50,112], 48, "Red") frame = simplegui.create_frame("Home", 300, 200) frame.add_button("Click me", click) frame.set_draw_handler(draw) frame.start() ``` I have installed the "simplegui" using pip on Ubuntu, still the problem seems unfounded. Please suggest a possible solution.
The problem you're running into is that there are two libraries called simplegui. [The one on pypi](https://pypi.python.org/pypi/simplegui) (the one that's giving you the error) is totally different from [the one for codeskulptor](http://activeed.rice.edu/comp160/docs.html) (the one for which you have example code). If you want to use codeskulptor's example code you'll have to run your code inside codeskulptor. If you want to run your code on your local computer you'll have to abandon the codeskulptor example code.
its probably because just like the error says there isnt an attribute in that module called `create_frame` im not very familiar with simlplegui but im pretty sure it a GUI generator that uses Tkinter so you dont need to create the frame because Tk does it for you but you have to have Tkinter installed here is an example code: ``` import simplegui g = simplegui.GUI() def buttoncallback(): g.status("Button Clicked!") g.button("Click me!", buttoncallback) g.button("Click me too!", buttoncallback) def listboxcallback(text): g.status("listbox select: '{0}'".format(text)) g.listbox(["one", "two", "three"], listboxcallback) g.listbox(["A", "B", "C"], listboxcallback) def scalecallback(text): g.status("scale value: '{0}'".format(text)) g.scale("Scale me!", scalecallback) g.run() ``` you dont need to actually make the frame just give the information for the frame or window then Tk automatically makes a window with the given infomoation sorry if this is confusing but i hope it helped
module object has no attribute 'create_frame'
[ "", "python", "python-2.7", "" ]
I have the following code to match an escaped string: ``` match_str = r'''(["/']).*?(?<!\\)(\\\\)*\1''' test_str = r'''"This is an \"escaped\" string" and this isn't.''' mo = re.match(match_str, test_str) if mo: print mo.group() ``` which works fine. However, while I understand I need the groups in there to handle the repetition, etc., I'm not interested in using the groups after the match. I know I can just call `mo.group(0)` and get the whole thing, but for what I am doing it would be helpful if it could behave as if no groups were found in this type of case, i.e. that `mo.groups()` would return `(None)`. Is there any way to do this? EDIT: If it helps, I'm trying to do something like this: ``` ma = [myclass("regex1nogroups", [func1]), myclass("regex2twogroups", [func2, func3]), myclass("regex3fourgroups", [func4, func5, func6, func7]), myclass("regex4nogroups", [func8])] for mc in ma: mo = re.match(mc.pattern, str_to_match) if mo: for n in range(len(mc.funclist)): result = mo.group(n+1 if mo.groups() else 0) mc.funclist[n](result) ``` using the length of the list of functions to determine how many groups the regex should produce. I could add an extra flag member to `myclass` to be true if I want to just assume there are no groups, but it would be nice to avoid this.
Just add in `?:` and you get a non-capturing group: ``` (?:\\\\) ```
I ended up just approaching the problem in a different way, and taking the obvious step of looking at the length of the function list, rather than looking at `re.groups()`: ``` ma = [myclass("regex1nogroups", [func1]), myclass("regex2twogroups", [func2, func3]), myclass("regex3fourgroups", [func4, func5, func6, func7]), myclass("regex4nogroups", [func8])] for mc in ma: mo = re.match(mc.pattern, str_to_match) if mo: for n,f in enumerate(mc.funclist): result = mo.group(n+1 if len(mc.funclist) > 1 else 0) f(result) ```
Suppressing matching groups in Python regex
[ "", "python", "regex", "" ]
I have a table with a column that contains the entire file name, the data looks like: ``` FilePath FileSize ------------------------------ -------- dsch2_chs_wyj.rar 694KB AllInOneHash.rar 19KB FilePropertyModifier.rar 12KB jquery_1_7_api_chm_chs.rar 285KB startupmgr.rar 38KB JQueryTableExample_Edi.rar 33KB hdpartioncalc_csharp.rar 49KB XMLMenuTest.zip 3KB ``` Now I want to extract the file extension name, like `.rar` and insert into a new table. ``` INSERT INTO Attachment ( Id, [FileName], ExtensionName, StoredPath, CreateOn, UploaderIP, DataBinary ) SELECT ba.Id, ba.Title, '{Extension Name}', ba.FilePath, GETDATE(), NULL, NULL FROM BlogAttachment ba ``` But T-SQL doesn't have a `LastIndexOf()` function. How can I cut the substring in an easy way?
``` SELECT CASE WHEN filepath LIKE '%.%' THEN RIGHT(filepath, Len(filepath) - Charindex('.', filepath)) ELSE filepath END FilePath FROM tbl1 ``` [Demo](http://sqlfiddle.com/#!3/483b3/1)
You can get the extension by doing: ``` select reverse(left(reverse(FilePath), charindex('.', reverse(FilePath)) - 1)) ``` However, I would recommend that you check that there is a `'.'` in the name first: ``` select (case when FilePath like '%.%' then reverse(left(reverse(FilePath), charindex('.', reverse(FilePath)) - 1)) else '' end) as Extension ```
T-SQL Get File Extension Name from a Column
[ "", "sql", "sql-server", "" ]
I wrote a code in Python 2.7 for creating list of prime numbers. The code is ``` def primes_list(num): ans = [2] for i in range(3, num, 2): for j in ans: if i % j == 0: break else: ans.append(i) else: return ans ``` Is this more efficient than [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) or not? I think the memory-efficiency should be better but I have doubts about time-efficiency. How to calculate the time and memory efficiency and how to benchmark the efficiencies?
No, that's trial division which is much worse than the sieve of eratosthenes in time-complexity. Its space complexity is *a bit* better, but, since primes are about `n/log(n)` you are not saving huge quantities of memory. also the sieve can be done using bit-vectors which reduce the *constants* by 32/64 times(and thus for *practical* purposes it might be even better). --- Small benchmark that shows the difference in timings: ``` >>> timeit.timeit('primes_list(1000)', 'from __main__ import primes_list', number=1000) 0.901777982711792 >>> timeit.timeit('erat(1000)', 'from __main__ import erat', number=1000) 0.2097640037536621 ``` As you can see, even with `n=1000` eratosthenes is more than 4 times faster. If we increase the search up to `10000`: ``` >>> timeit.timeit('primes_list(10000)', 'from __main__ import primes_list', number=1000) 50.41101098060608 >>> timeit.timeit('erat(10000)', 'from __main__ import erat', number=1000) 2.3083159923553467 ``` Now eratosthenes is 21 times faster. As you can see it's clear that eratosthenes is **much** faster. --- using numpy arrays it's quite easy to reduce the memory by 32 or 64(depending on your machine architecture) and obtain much faster results: ``` >>> import numpy as np >>> def erat2(n): ... ar = np.ones(n, dtype=bool) ... ar[0] = ar[1] = False ... ar[4::2] = False ... for j in xrange(3, n, 2): ... if ar[j]: ... ar[j**2::2*j] = False ... return ar.nonzero()[0] ... >>> timeit.timeit('erat2(10000)', 'from __main__ import erat2', number=1000) 0.5136890411376953 ``` An other 4 times faster than the other sieve.
What you are doing is [trial divison](http://en.wikipedia.org/wiki/Trial_division) - testing each candidate prime against each known prime below it. Skipping odd numbers in the call to `range` will save you some divisions, but this is exactly the technique sieveing is based on: you know that every second number is divisible by 2, and therefore composite. The sieve just extends this to: * every third number is divible by three, and therefore composite; * every fifth by five, and therefore composite and so on. Since the Sieve is regarded as one of the most time-efficient algorithms available, and trial division as one of the *least* (wikipedia describes the sieve as `O(n log log n)`, and per the comments below, your algorithm is likely `O(n^2 / log n)`) ) , it is reasonable to assume that trial division with some Sieve-like optimisation falls short of sieving.
Is this more efficient than Sieve of Eratosthenes?
[ "", "python", "performance", "primes", "sieve-of-eratosthenes", "" ]
Today I was trying to find a method, to do some processing on strings in python. Some more senior programmer than I'm said not to use `+=` but use `''.join()` I could also read this in eg: <http://wiki.python.org/moin/PythonSpeed/#Use_the_best_algorithms_and_fastest_tools> . But I tested this myself and found a bit strange results ( It's not that I'm trying to second guess them but I want to under stand). The idea was if there was a string `"This is \"an example text\"` containing spaces" the string should be converted to `Thisis"an example text"containingspaces` The spaces are removed, but only outside the quotes. I measured the performance of two different versions of my algorithm one using the `''.join(list)` and one using `+=` ``` import time #uses '+=' operator def strip_spaces ( s ): ret_val = "" quote_found = False for i in s: if i == '"': quote_found = not quote_found if i == ' ' and quote_found == True: ret_val += i if i != ' ': ret_val += i return ret_val #uses "".join () def strip_spaces_join ( s ): #ret_val = "" ret_val = [] quote_found = False for i in s: if i == '"': quote_found = not quote_found if i == ' ' and quote_found == True: #ret_val = ''.join( (ret_val, i) ) ret_val.append(i) if i != ' ': #ret_val = ''.join( (ret_val,i) ) ret_val.append(i) return ''.join(ret_val) def time_function ( function, data): time1 = time.time(); function(data) time2 = time.time() print "it took about {0} seconds".format(time2-time1) ``` On my machine this yielded this output with a minor advantage for the algorithm using `+=` ``` print '#using += yields ', timeit.timeit('f(string)', 'from __main__ import string, strip_spaces as f', number=1000) print '#using \'\'.join() yields ', timeit.timeit('f(string)', 'from __main__ import string, strip_spaces_join as f', number=1000) ``` when timed with timeit : ``` #using += yields 0.0130770206451 #using ''.join() yields 0.0108470916748 ``` The difference is really minor. But why is `''.join()` not clearly out performing the function that uses `+=` , but there seems to be a small advantage for the ''.join() version. I tested this on Ubuntu 12.04 with python-2.7.3
Do use the correct methodology when comparing algorithms; use the [`timeit` module](http://docs.python.org/2/library/timeit.html) to eliminate fluctuations in CPU utilization and swapping. Using `timeit` shows there is little difference between the two approaches, but `''.join()` is *slightly* faster: ``` >>> s = 1000 * string >>> timeit.timeit('f(s)', 'from __main__ import s, strip_spaces as f', number=100) 1.3209099769592285 >>> timeit.timeit('f(s)', 'from __main__ import s, strip_spaces_join as f', number=100) 1.2893600463867188 >>> s = 10000 * string >>> timeit.timeit('f(s)', 'from __main__ import s, strip_spaces as f', number=100) 14.545105934143066 >>> timeit.timeit('f(s)', 'from __main__ import s, strip_spaces_join as f', number=100) 14.43651008605957 ``` Most of the work in your functions is the looping over each and every character and testing for quotes and spaces, not string concatenation itself. Moreover, the `''.join()` variant does more work; you are appending the elements to a list first (this replaces the `+=` string concatenation operations), *then* you are concatenating these values at the end using `''.join()`. And that method is still ever so slightly faster. You may want to strip back the work being done to compare *just* the concatenation part: ``` def inplace_add_concatenation(s): res = '' for c in s: res += c def str_join_concatenation(s): ''.join(s) ``` which shows: ``` >>> s = list(1000 * string) >>> timeit.timeit('f(s)', 'from __main__ import s, inplace_add_concatenation as f', number=1000) 6.113742113113403 >>> timeit.timeit('f(s)', 'from __main__ import s, str_join_concatenation as f', number=1000) 0.6616439819335938 ``` This shows `''.join()` concatenation is still a *heck* of a lot faster than `+=`. The speed difference lies in the loop; `s` is a list in both cases, but `''.join()` loops over the values in C, while the other version has to do all it's looping in Python. And that makes all the difference here.
Another option is to write a function which joins using a generator, rather than appending to a list each time. For example: ``` def strip_spaces_gen(s): quote_found = False for i in s: if i == '"': quote_found = not quote_found if i == ' ' and quote_found == True: # Note: you (c|sh)ould drop the == True, but I'll leave it here so as to not give an unfair advantage over the other functions yield i if i != ' ': yield i def strip_spaces_join_gen(ing): return ''.join(strip_spaces_gen(ing)) ``` This appears to be about the same (as a join) for a shorter string: ``` In [20]: s = "This is \"an example text\" containing spaces" In [21]: %timeit strip_spaces_join_gen(s) 10000 loops, best of 3: 22 us per loop In [22]: %timeit strip_spaces(s) 100000 loops, best of 3: 13.8 us per loop In [23]: %timeit strip_spaces_join(s) 10000 loops, best of 3: 23.1 us per loop ``` But faster for larger strings. ``` In [24]: s = s * 1000 In [25]: %timeit strip_spaces_join_gen(s) 100 loops, best of 3: 12.9 ms per loop In [26]: %timeit strip_spaces(s) 100 loops, best of 3: 17.1 ms per loop In [27]: %timeit strip_spaces_join(s) 100 loops, best of 3: 17.5 ms per loop ```
python list comprehension vs +=
[ "", "python", "performance", "list-comprehension", "augmented-assignment", "" ]
I am using Python 3.3 and am trying to make use of the wonderful fgfgen / forge\_fdf script (thanks guys, btw). When I attempt to run a sample test of fdfgen, I return the following error. ``` safe = utf16.replace('\x00)', '\x00\\)').replace('\x00(', '\x00\\(') TypeError: expected bytes, bytearray or buffer compatible object ``` After some looking around, this seems to be a result of python 3 handling unicode encoding? but I am unsure. Here is a sample of the fdfgen code executed followed by the fdfgen code so nicely provided. Thanks in advance: ``` >>> from fdfgen import forge_fdf >>> fields = [('last_name', u'Spencer')] >>> fdf = forge_fdf('SMBRPython.pdf', fields, [], [], []) ``` --- ``` # -*- coding: utf-8 -*- """ Port of the PHP forge_fdf library by Sid Steward (http://www.pdfhacks.com/forge_fdf/) Anders Pearson <anders@columbia.edu> at Columbia Center For New Media Teaching and Learning <http://ccnmtl.columbia.edu/> """ __author__ = "Anders Pearson <anders@columbia.edu>" __credits__ = ("Sébastien Fievet <zyegfryed@gmail.com>," "Brandon Rhodes <brandon@rhodesmill.org>") import codecs def smart_encode_str(s): """Create a UTF-16 encoded PDF string literal for `s`.""" utf16 = s.encode('utf_16_be') safe = utf16.replace('\x00)', '\x00\\)').replace('\x00(', '\x00\\(') return ('%s%s' % (codecs.BOM_UTF16_BE, safe)) def handle_hidden(key, fields_hidden): if key in fields_hidden: return "/SetF 2" else: return "/ClrF 2" def handle_readonly(key, fields_readonly): if key in fields_readonly: return "/SetFf 1" else: return "/ClrFf 1" def handle_data_strings(fdf_data_strings, fields_hidden, fields_readonly): for (key, value) in fdf_data_strings: if type(value) is bool: if value: yield "<<\n/V/Yes\n/T (%s)\n%s\n%s\n>>\n" % ( smart_encode_str(key), handle_hidden(key, fields_hidden), handle_readonly(key, fields_readonly), ) else: yield "<<\n/V/Off\n/T (%s)\n%s\n%s\n>>\n" % ( smart_encode_str(key), handle_hidden(key, fields_hidden), handle_readonly(key, fields_readonly), ) else: yield "<<\n/V (%s)\n/T (%s)\n%s\n%s\n>>\n" % ( smart_encode_str(value), smart_encode_str(key), handle_hidden(key, fields_hidden), handle_readonly(key, fields_readonly), ) def handle_data_names(fdf_data_names, fields_hidden, fields_readonly): for (key, value) in fdf_data_names: yield "<<\n/V /%s\n/T (%s)\n%s\n%s\n>>\n" % ( smart_encode_str(value), smart_encode_str(key), handle_hidden(key, fields_hidden), handle_readonly(key, fields_readonly), ) def forge_fdf(pdf_form_url="", fdf_data_strings=[], fdf_data_names=[], fields_hidden=[], fields_readonly=[]): """Generates fdf string from fields specified pdf_form_url is just the url for the form fdf_data_strings and fdf_data_names are arrays of (key,value) tuples for the form fields. FDF just requires that string type fields be treated seperately from boolean checkboxes, radio buttons etc. so strings go into fdf_data_strings, and all the other fields go in fdf_data_names. fields_hidden is a list of field names that should be hidden fields_readonly is a list of field names that should be readonly The result is a string suitable for writing to a .fdf file. """ fdf = ['%FDF-1.2\n%\xe2\xe3\xcf\xd3\r\n'] fdf.append("1 0 obj\n<<\n/FDF\n") fdf.append("<<\n/Fields [\n") fdf.append(''.join(handle_data_strings(fdf_data_strings, fields_hidden, fields_readonly))) fdf.append(''.join(handle_data_names(fdf_data_names, fields_hidden, fields_readonly))) fdf.append("]\n") if pdf_form_url: fdf.append("/F (" + smart_encode_str(pdf_form_url) + ")\n") fdf.append(">>\n") fdf.append(">>\nendobj\n") fdf.append("trailer\n\n<<\n/Root 1 0 R\n>>\n") fdf.append('%%EOF\n\x0a') return ''.join(fdf) ```
Encoding produces byte values, but you are using string values to try to replace things. Use Byte literals instead: ``` safe = utf16.replace(b'\x00)', b'\x00\\)').replace(b'\x00(', b'\x00\\(') return (b'%s%s' % (codecs.BOM_UTF16_BE, safe)) ```
Fdfgen has now been ported to Python 3, mostly just by explicitly turning all of the strings into byte literals, as Martjin Pieters mentioned.
python 3 fdfgen unicode TypeError
[ "", "python", "pdf", "fdf", "" ]
I have a function like this: ``` def checks(a,b): for item in a: if b[1] == item[1]: return True else: return False ``` I want to check if the second value of b is in the second value of item in a such as: ``` checks(['5v','7y'],'6y') >>> True ``` But the code I have right now will return `False` because I believe it's comparing `'6y'` with `'5v'`. How do I solve this?
You’re returning `True` at the right spot, but if the first item doesn‘t match, the function returns `False` immediately instead of continuing with the loop. Just move the `return False` to the end of the function, outside of the loop: ``` def checks(a,b): for item in a: if b[1] == item[1]: return True return False ``` `True` will be returned if an item is matched and `False` will be returned if the loop finishes without a match. Anyways, that explains why your code wasn’t working, but use `any` as suggested by others to be Pythonic. =)
This can be expressed in a simpler way: ``` def checks(a, b): return any(b[1] == item[1] for item in a) ```
Check if item in list
[ "", "python", "list", "" ]
One of my Django websites has following database models: In Django App “common”: ``` class Collection(models.Model): name = models.CharField(max_length = 255, unique = True) _short_name = models.CharField(db_column="short_name", max_length = 32, blank=True) class Particle(models.Model): content = models.TextField(blank=False) owner = models.ForeignKey(Collection) order = models.IntegerField(null=True, blank=True) ``` In Django App “sitcom”: ``` class Media(models.Model): name = models.CharField(max_length = 248) _short_name = models.CharField(db_column="short_name", max_length = 32, blank=True) capital = models.CharField(max_length = 1) description = models.TextField(blank=True) progress = models.CharField(max_length = 32, blank=True, null=True) class Relation(models.Model): name = models.CharField(max_length = 128) _short_name = models.CharField(db_column="short_name", max_length = 32, blank=True) description = models.TextField(blank=True) parent = models.ForeignKey('self', blank=True, null=True) order = models.IntegerField(blank=True, null=True) particle = models.ForeignKey(Particle, blank=True, null=True) media = models.ForeignKey(Media, blank=True, null=True) ``` In short, model class Relation has 3 foreign keys to other tables. The problem is, when I use Django Admin to change a single Relation, the page (change\_form) loads rather slowly. Later, I changed model class Relation as following: ``` class Relation(models.Model): name = models.CharField(max_length = 128) _short_name = models.CharField(db_column="short_name", max_length = 32, blank=True) description = models.TextField(blank=True) order = models.IntegerField(blank=True, null=True) parent_id = models.IntegerField(blank=True, null=True) particle_id = models.IntegerField(blank=True, null=True) media_id = models.IntegerField(blank=True, null=True) ``` The modification changed Foreign Keys to IntegerFields, so it disabled some of the magics inside Django ORM system, and now the change form page loads really fast. My question is, what is the “disabled magics inside django orm”? what has the potential to cause the problem?
It is not the magic of django Orm. It is magic of Form. When you create a Foreign key in Model, then in ModelForm, a ModelChoiceField creates which has all choices of ForeignKey Model. And django Admin use all the properties of Form to create HTML. So use this code. ``` from django import forms class RelationForm(forms.ModelForm): parent = forms.ChoiceField(required=False, choices=Relation.objects.values_list('id', 'name')) particle = forms.ChoiceField(required=False, choices=Particle.objects.values_list('id', 'content')) media = forms.ChoiceField(required=False, choices=Media.objects.values_list('id', 'name')) class Meta: model = Relation ``` In Admis Site ``` from django.contrib import admin class RelationAdmin(admin.ModelAdmin): form = RelationForm model = Relation ``` You can also cache the choices pass in a form.
In admin.py ``` from django.contrib import admin class RelationAdmin(admin.ModelAdmin): raw_id_fields = ('Media','Particle',) admin.site.register(Relation, RelationAdmin) ``` This brings up a nice little UI element in the form and considerably improves performance since it doesn't have to load a huge number of options in the select box.
Django admin change form load quite slow
[ "", "python", "django", "performance", "django-models", "django-admin", "" ]
I'm a beginner, and I wanted to know if there was a simpler way of write this out in Python. I'm assuming some type of dictionary, but I do not understand how to write it out. I was on a cruise a couple of days ago, and I play craps. I wanted to know if the odds are somewhat correct. So, I wrote this, but I know there is a simpler way. ``` import random dice2 = 0 dice3 = 0 dice4 = 0 dice5 = 0 dice6 = 0 dice7 = 0 dice8 = 0 dice9 = 0 dice10 = 0 dice11 = 0 dice12 = 0 for i in range(100000): dice1 = random.randint(1,6) dice2 = random.randint(1,6) number = dice1 + dice2 #print(dice1) if number == 2: dice2 +=1 elif number == 3: dice3 += 1 elif number == 4: dice4 += 1 elif number == 5: dice5 += 1 elif number == 6: dice6 += 1 elif number == 7: dice7 += 1 elif number == 8: dice8 += 1 elif number == 9: dice9 += 1 elif number == 10: dice10 += 1 elif number == 11: dice11 += 1 elif number == 12: dice12 += 1 total = dice2+dice3+dice4+dice5+dice6+dice7+dice8+dice9+dice10+dice11+dice12 ``` At the end of this, it just prints out the percentage of hits on numbers from 2-12.
``` from random import randint dice = [0]*11 for i in range(100000): dice[randint(1,6)+randint(1,6)-2] += 1 total = sum(dice) #it is 100000, of course for i, v in enumerate(dice, 2): print('{0}: {1}%'.format(i, v*100.0/total)) ```
I'd use [`Counter`](http://docs.python.org/2/library/collections.html#counter-objects), as that's what it was made for: ``` from random import randint from collections import Counter counts = Counter(randint(1, 6) + randint(1, 6) for i in range(100000)) total = sum(counts.values()) number_of_tens = counts[10] ```
How can I simplify this code
[ "", "python", "" ]
It's my first time having to design a web app that has to think about image upload / Download traffic & also using Amazon web servers. I've been reading around but I am still pretty confuse on how best to tackle my issues. I am building a web app that will have significant traffic with images and I've noticed that `Amazon RDS` is cheaper than `Amazon S3`. Normally in the websites that I am used to build I don't need to worry about local storage vs db storage. Would I be loosing anything by storing all the pics in the db instead of local (S3)? This then lead me to think that it would be best to storage the images in the `Amazon RDS`. As a result of that I wonder: The user will upload 4 images in general: `mainPicThumb`, `mainPic`, `secondPic` and `secondPicThumb`. Each one will also have the fields `name` and `size`. For efficiency should I put each picture type in a different table or it shouldn't make an issue? I mean, if I separate the pictures in different tables would I gain quicker access speed and search speed or gain anything? also most users queries will return 10 different Thumb images I am building my backend with PHP and using an ORM ( Propel ). I would appreciate any guidance / advice.
You made some fundamental calculation errors in calculating your costs. Storing data in a relational database is never, ever, going to be cheaper than storing it in a completely optimized flat data storage system. When storing data in S3, you will only ever be billed by actual storage usage, and actual network usage. That's the precise amount of data, averaged over the entire month, at $0.095 per GB/mth (lower tiers apply starting at 1TB/mth), plus $0.004 per 10000 external GET requests, plus $0.120 per GB data transfer to the internet (lower tiers apply starting at 10TB/mth). When storing data in RDS, you pay an hourly fee for the instance, plus $0.125 per GB/mth for storage, plus a cost per IO operation on the underlying storage. A single query, requesting megabytes of binary data, could easily trigger hundreds or thousands of IO operations - or none, if the result happens to be cached. It's very hard to predict IO usage, except that it has no linear relation by definition to the amount of queries executed and the amount of data transferred. As a guideline, I just checked a LAMP server, at about 0.60 load since it's the middle of the night, and it's continuously processing about 50~150 IO operations per second on its storage disk (OS, swap and /tmp are on another), while barely doing anything. For data stored in RDS, actually retrieving the data only means that you have transferred it to your EC2 instance or another means of accessing it. You will then still incur full costs for actually processing requests and transferring the data onwards to the internet from there. Summarizing: storing data in RDS instead of S3 will always be more expensive. It's just hard to predict whether it'll be 10, 100 or 1000 times as expensive. Use S3 for storing files, that's what the Simple Storage Service is for. It will also be far, ***FAR*** more performant, especially if you bind it to CloudFront to utilize its caching edge locations. (all prices mentioned assume cheapest Amazon locations - prices may vary slightly elsewhere)
I have architected solutions on AWS for Stock photography sites which stores millions of images spanning TB's, would like to share some of the best practice in AWS for your requirement: P1) Store the Original Image file in S3 Standard option P2) Store the reproducible images like thumbs etc in the S3 Reduced Redundancy option (RRS) to save costs P3) Meta data about images including the S3 URL can be stored in Amazon RDS or Amazon DynamoDB depending upon the query complexity. Query the entries from Amazon RDS. If your query is complex it is also common practice to Store the meta data in Amazon CloudSearch or Apache Solr. P4) Deliver your thumbs to users with low latency using Amazon CloudFront.
Best way to handle images into the Database
[ "", "sql", "amazon-web-services", "amazon-s3", "amazon-rds", "" ]
I have reviewed this several times and can't find a reason why it shouldn't work hopefully someone can reveal what I'm missing here's my prepared statement. When performing a `var_dump` on `$query` directly after initializing the prepared statement it returns false. ``` $query = $link->prepare("IF (EXISTS(SELECT * FROM banned WHERE ipAddr=? OR uid=?)) THEN UPDATE banned SET isBanned='1', ipAddr=?, uid=?, bannedAt=NOW(), reason=? WHERE ipAddr=? OR uid=? ELSE INSERT INTO banned (isBanned, ipAddr, uid, bannedAt, reason) VALUES('1', ?, ?, NOW(), ?) END IF"); $query->bind_param("sisissisis", $details['IP'], $details['uid'], $details['IP'], $details['uid'], $details['reason'], $details['IP'], $details['uid'], $details['IP'], $details['uid'], $details['reason']); ``` Table structure ``` banned ------ id, uid, ipAddr, bannedAt, reason, isBanned ``` **Note: if this is not the best way to achieve the same result then please offer a suggestion**
`IF` statement is not supported in prepared statements in Mysql. For full list of supported statements and more read *[SQL Syntax for Prepared Statements](http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html)* IMHO you have several options 1. Try using `INSERT INTO ... ON DUPLICATE KEY UPDATE` or `REPLACE`. You'll need to have correct `UNIQUE` index(indices) for it. 2. Wrap you code into a stored procedure A stored procedure might look look like ``` DELIMITER $$ CREATE PROCEDURE sp_add_banned(IN aIP VARCHAR(15), IN aId INT, aReason VARCHAR(100)) BEGIN IF (EXISTS(SELECT * FROM banned WHERE ipAddr=aIP OR uid=aId)) THEN UPDATE banned SET isBanned='1', ipAddr=aIP, uid=aId, bannedAt=NOW(), reason=aReason WHERE ipAddr=aIP OR uid=aId; ELSE INSERT INTO banned (isBanned, ipAddr, uid, bannedAt, reason) VALUES('1', aIP, aId, NOW(), aReason); END IF; END$$ DELIMITER ; ``` *Adjust data types for `IN` parameters as they defined in your `banned` table* Here is **[SQLFiddle](http://sqlfiddle.com/#!2/e3149/4)** demo. And your php part will look like ``` $query = $link->prepare('CALL sp_add_banned(?, ?, ?)'); $query->bind_param('sis', $details['IP'], $details['uid'], $details['reason']); ```
MySQL > IF EXISTS syntax only works in a store procedure. You could create a Store Procedure, then called it from your code for validating and Insert or update. For your reference: [mySql If Syntax](http://dev.mysql.com/doc/refman/5.0/en/if.html) And if you are using 'IF' function on SELECT Queries, it have a different syntax. Here is the [Link](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html)
sql IF statement in prepared mysqli statement
[ "", "sql", "mysqli", "prepared-statement", "" ]
I am trying to delete a record based on a column called `time` (`datetime`). Below is my sql statement, and the table which I am working with. SQL server says that 0 records are effected by this query, what am I doing wrong? ``` DELETE FROM msg WHERE (time = '5/26/2013 8:39:44 PM') sender receiver msg time ============================================================================ bob jen this is a message 5/26/2013 8:39:44 PM jen mel Message to pel 5/26/2013 8:44:29 PM ```
try this: ``` DELETE FROM msg WHERE convert(datetime,time,101) = convert(datetime,'5/26/2013 8:39:44',101) ```
Does it work? ``` DELETE FROM msg WHERE time = convert(datetime, '5/26/2013 8:39:44 PM', 111) ```
Delete using a timedate column
[ "", "sql", "sql-server", "" ]
Used a loop to add a bunch of elements to a list with mylist = [] ``` for x in otherlist: mylist.append(x[0:5]) ``` But instead of the expected result ['x1','x2',...], I got: [u'x1', u'x2',...]. Where did the u's come from and why? Also is there a better way to loop through the other list, inserting the first six characters of each element into a new list?
The u means unicode, you probably will not need to worry about it ``` mylist.extend(x[:5] for x in otherlist) ```
The `u` means *unicode*. It's Python's internal string representation (from version ... ?). Most times you don't need to worry about it. (Until you do.)
What does [u'abcd', u'bcde'] mean in Python?
[ "", "python", "append", "" ]
is there any method or external library that receives some range as string and converts it to the index in the array? I mean something like the print selected pages function in google chrome - [link](https://support.google.com/chrome/answer/1379552?hl=en) so then it will select the related items from the array? example: ``` x = ['a','b','c','d','e','f'] x.get_selected_items('1, 3-4, 6') >>>['a','c','d','f'] ``` Thanks
Shove your text range through [this recipe](http://code.activestate.com/recipes/577279-generate-list-of-numbers-from-hyphenated-and-comma/), then pass it to [`operator.itemgetter()`](http://docs.python.org/2/library/operator.html#operator.itemgetter), and finally apply it to your sequence. Note the off-by-one bit, so either map each element to subtract 1, or put a dummy element at the beginning of your sequence.
``` >>> from operator import itemgetter >>> x = ['a','b','c','d','e','f'] >>> items = itemgetter(0, slice(2, 4), 5)(x) >>> [j for i in items for j in (i if isinstance(i, list) else [i])] ['a', 'c', 'd', 'f'] ```
in python, I want to select from array an range by given string
[ "", "python", "arrays", "filter", "" ]
Consider for example the list in Python containing both strings of letters and numbers ``` a = ['Total', '1', '4', '5', '2'] ``` How would it be possible to convert this into the mixed value list ``` b = ['Total', 1.0, 4.0, 5.0, 2.0] ``` Note that in general we may not know where the letters string will be in the list i.e. we might have ``` a = ['Total', '1', '4', 'Next', '2'] ```
You can use a generator function and exception handling: ``` >>> def func(seq): for x in seq: try: yield float(x) except ValueError: yield x ... >>> a = ['Total', '1', '4', '5', '2'] >>> list(func(a)) ['Total', 1.0, 4.0, 5.0, 2.0] ```
Just convert everything but the first column: ``` b = a[:1] + [float(i) for i in a[1:]] ``` Judging from [your other question](https://stackoverflow.com/questions/16781344/str-to-float-error-in-csv-python) you are processing a CSV file, so if the first column is always a string there is no point in converting that part of each row: ``` >>> a = ['Total', '1', '4', '5', '2'] >>> a[:1] + [float(i) for i in a[1:]] ['Total', 1.0, 4.0, 5.0, 2.0] ``` You could also use a `try:` - `except ValueError` approach, but why incur the overhead when it is known up front what column has text and what columns have numeric values?
Convert Python list of strings into floats, where list also contains words
[ "", "python", "string", "list", "floating-point", "" ]
I have a list of numbers. ``` L1=[12,32,21,......] ``` I need to execute the following test on each member,tolerating upto 2 failures-no more. NOTE : The function is ILLUSTRATIVE( not actual )- the objective is to test each member and return failed members as a list. Also for performance sake,the idea is to abort as soon as failures exceed 2. ``` def isgreaterthan10(self,x): if x<10: return false else: return true ``` So I do the following. ``` def evaluateList(L1): Failedlist=list() Failures=0 for x in L1: if not isgreaterthan10(x): Failedlist.add(x) Failures+=1 if Failures>2: return (False,[]) return (True,Failedlist) ``` But I am sure this can be done in a more efficient 'pythonic' way,as performance is premium. Would appreciate any help in achieving the same. I am on Python 2.6.5
A few tips to make it more Pythonic: When naming things, functions use underscores, not CamelCase, and variables start with lowercase letters. Your filter function can simply return the value of `x<10`, rather than branching and returning a Boolean constant. I'm assuming by the presence of `self` that it is part of a class, but since it never uses `self`, you can define it to be a static method instead. ``` @staticmethod def is_greater_than_10(x): return x < 10 ``` (If it's not part of a class, simply remove `self` from the argument list.) In your evaluation function, there's no need to return an explicit Boolean constant to indicate success or failure (but not for the reason I initially posted in my comment). Instead, raise an exception to indicate too many small values. ``` class TooManySmallValues(Exception): pass def evaluate_list(l1): failed_list = list() failures=0 for x in l1: if not is_greater_than_10(x): failed_list.append(x) failures+=1 if failures>2: raise TooManySmallValues() return failed_list ``` Now, where you might have called the function like this: ``` result, failures = evaluate_list(some_list) if not result: # do something about the many small values else: # do something about the acceptable list and the small number of failure ``` you would call it like this: ``` try: failures = evaluate_list(some_list) except TooManySmallValues: # do something about the many small values ``` Finally, unless the list is huge and you will actually observe a significant performance gain by stopping early, use a list comprehension to generate all failures at once, then check how many there were: ``` def improved_evaluate_list(l1): failed_list = [ x for x in l1 if not is_greater_than_10(x) ] if len(failed_list) > 2: raise TooManySmallValues() else: return failed_list ```
If performance is key, I'd vectorize it using numpy (or scipy). ``` >>> import numpy >>> L1 = [47, 92, 65, 25, 44, 8, 74, 42, 48, 56, 74, 5, 60, 84, 88, 16, 69, 87, 9, 82, 69, 82, 40, 49, 1, 45, 93, 70, 22, 40, 97, 49, 95, 34, 28, 91, 79, 9, 32, 91, 41, 22, 36, 2, 57, 69, 81, 73, 7, 71] >>> arr = numpy.array(L1) >>> count_of_num_greater_than_10 = numpy.sum(arr > 10) >>> num_greater_than_10 <= 2 False ``` Granted, it won't short-circuit, so if you have two false statements very early on, it will calculate the rest. ### Timing results. Simple timing test, doing a 1000 iterations with a random 1000 element list populated with numbers from 1 to 100 (with the setup of array creation done before starting the timer), shows the vectorized method is over 100 times faster. ``` >>> import timeit >>> timeit.timeit('sum([n>10 for n in L1])>=2', setup='import numpy; L1=list(numpy.random.randint(1,100,1000))', number=1000) 2.539483070373535 >>> timeit.timeit('numpy.sum(L1>10)>=2', setup='import numpy; L1=numpy.random.randint(1,100,1000)', number=1000) 0.01939105987548828 ``` If you want failed members, its not that hard; you can find the numbers not greater than 10 with: ``` >>> list(arr[numpy.where(arr<10)]) [8, 5, 9, 1, 9, 2, 7] ``` Again the vectorized version is orders of magnitude faster than the non-vectorized version: ``` >>> timeit.timeit('[i for i in L1 if i < 10]', setup='import numpy; L1=list(numpy.random.randint(1,100,1000))', number=1000) 0.4471170902252197 >>> timeit.timeit('L1[numpy.where(L1<10)]', setup='import numpy; L1=numpy.random.randint(1,100,1000)', number=1000) 0.011003971099853516 ```
Operating on members of a list
[ "", "python", "" ]
I have a table like this: ``` Name | TimeA | TimeB | ValueA | ValueB ``` And, I am performing some `MERGE` operations as follows: ``` CREATE TABLE #TEMP1... INSERT INTO #TEMP1 SELECT Name, Value FROM dbo.AnotherTable WHERE ... MERGE INTO dbo.MyTable AS Target USING (SELECT Name, Value FROM #TEMP1) AS Source ON Target.Name = Source.Name AND Target.TimeA = @TimeA WHEN MATCHED THEN UPDATE SET ValueA = Value WHEN NOT MATCHED THEN INSERT (Name, TimeA, TimeB, ValueA) VALUES (Source.Name, @TimeA, @TimeB, Value) ``` The Query Execution Plan says the following: ``` MERGE -> Table Merge 3% -> Compute Scalar 0% -> Top 0% -> Compute Scalar 0% -> Compute Scalar 0% -> Nested Loops (Left Outer Join) 0% <- Constant Scan 0% ^ | | --- Compute Scalar 0% <- Table Spool (Kager Spool) 12% <- Table Scan 86% ``` The plan, however does not tell me that an index will improve the performance. I'm thinking an unclustered index on `(Name,TimeA)` should improve performance. Is there a better way to achieve performance for MERGE queries like this? **EDIT 1**: I should note the sizes of the tables. On an average Source always contains 30-70 rows on an average and Target contains > 30 million rows.
I would consider ``` WHEN MATCHED AND ValueA <> Value THEN ``` YOu may be updating records that do not need to be.
This is here for reference. There are some relevant points that I used to improve my query: * Using the optimization suggested by @HLGEM. It made total sense. * Two relevant points from the [MSDN article here](http://msdn.microsoft.com/en-us/library/cc879317%28v=sql.105%29.aspx) > To improve the performance of the MERGE statement, we recommend the > following index guidelines: > > * Create an index on the join columns in the source table that is unique and covering. > * Create a unique clustered index on the join columns in the target table. * Another point from the same MSDN article that suggested not to place constants in the query > To filter out rows from the source or target tables, use one of the > following methods. Specify the search condition for row filtering in > the appropriate WHEN clause. For example, WHEN NOT MATCHED AND > S.EmployeeName LIKE 'S%' THEN INSERT.... Therefore, the indexes I added were: ``` ALTER TABLE #TEMP1 ADD CONSTRAINT PK_TEMP1 PRIMARY KEY CLUSTERED (ELEMENTINSTANCE, ifAlias) CREATE CLUSTERED INDEX IX_MyTable ON dbo.MyTable(Name) ``` And my final query was this: ``` MERGE INTO dbo.MyTable AS Target USING (SELECT Name, Value FROM #TEMP1) AS Source ON Target.Name = Source.Name WHEN MATCHED AND ValueA <> Value AND Target.TimeA = @TimeA THEN UPDATE SET ValueA = Value WHEN NOT MATCHED THEN INSERT (Name, TimeA, TimeB, ValueA) VALUES (Source.Name, @TimeA, @TimeB, Value) ``` This gives me the following execution plan: ``` MERGE -> Table Merge 3% -> Compute Scalar 0% -> Top 0% -> Compute Scalar 0% -> Compute Scalar 0% -> Nested Loops (Left Outer Join) 0% <- Table Scan (#TEMP1) Source 12% ^ | | --- Compute Scalar 0% <- Clustered Index Seek (dbo.MyTable) 12% ``` Thank you everyone for your help! Hopefully this should maintain the performance for sometime.
What kind of indexes will help me in this MERGE query?
[ "", "sql", "sql-server", "sql-server-2008", "query-optimization", "" ]
Normally, WM\_CONCAT is an aggregate function that return values from table separated by comma like here. Suppose I have a table `foo` like this: ``` col_id | col_text 111 | This 111 | is 111 | a 111 | test. ``` If I use this query: ``` SELECT CAST(WM_CONCAT(col_text) AS VARCHAR2(100)), col_id FROM foo ``` the result would be ``` This, is, a, test. ``` Is it possible to change the separator(`','`) to other characters like `'.'` or `'|'` of the `WM_CONCAT()` function? Or create a user defined function that can be executed like `WM_CONCAT()`?
You might want to use [`LISTAGG`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions089.htm). ``` SELECT col_id, LISTAGG(col_text, '|') WITHIN GROUP (ORDER BY col_text) text FROM table1 GROUP BY col_id ``` Output: ``` | COL_ID | TEXT | ---------------------------- | 111 | This|a|is|test. | ``` **[SQLFiddle](http://sqlfiddle.com/#!4/946ac/5)** **UPDATE** If you need to get distinct text values in a list ``` SELECT col_id, LISTAGG(col_text, '|') WITHIN GROUP (ORDER BY col_text) text FROM ( SELECT DISTINCT col_id, col_text FROM table1 ) GROUP BY col_id ``` **[SQLFiddle](http://sqlfiddle.com/#!4/11ef4/2)**
Problem with LISTAGG that it returns varchar2 and is limited to 4000 bytes ``` SELECT LISTAGG(LEVEL, CHR(10)) WITHIN GROUP (ORDER BY NULL) FROM Dual CONNECT BY LEVEL < 2000 ORA-01489 Result of string concat is too large ``` I've found one workaround, but it looks ugly and is mutch slower ``` SELECT EXTRACT(XMLTYPE('<doc>' || XMLAGG(XMLTYPE('<ln>' || LEVEL || CHR(10) || '</ln>')).GetClobVal() || '</doc>'), '/doc/ln/text()').GetClobVal() FROM Dual CONNECT BY LEVEL < 2000 ```
Change separator of WM_CONCAT function of Oracle 11gR2
[ "", "sql", "oracle", "function", "oracle11g", "" ]
Here's my first Python program, a little utility that converts from a Unix octal code for file permissions to the symbolic form: ``` s=raw_input("Octal? "); digits=[int(s[0]),int(s[1]),int(s[2])]; lookup=['','x','w','wx','r','rx','rw','rwx']; uout='u='+lookup[digits[0]]; gout='g='+lookup[digits[1]]; oout='o='+lookup[digits[2]]; print(uout+','+gout+','+oout); ``` Are there ways to shorten this code that take advantage of some kind of "list processing"? For example, to apply the `int` function all at once to all three characters of `s` without having to do explicit indexing. And to index into `lookup` using the whole list `digits` at once?
Here is a slightly optimized version of your code: ``` s = raw_input("Octal? ") digits = map(int, s) lookup = ['','x','w','wx','r','rx','rw','rwx'] perms = [lookup[d] for d in digits] rights = ['{}={}'.format(*x) for x in zip('ugo', perms)] print ','.join(rights) ```
``` digits=[int(s[0]),int(s[1]),int(s[2])]; ``` can be written as: ``` digits = map(int,s) ``` or: ``` digits = [ int(x) for x in s ] #list comprehension ``` As it looks like you might be using python3.x (or planning on using it in the future based on your function-like print usage), you may want to opt for the list-comprehension unless you want to dig in further and use `zip` as demonstrated by one of the later answers.
How simplify list processing in Python?
[ "", "python", "list", "" ]
``` ifile = wave.open("input.wav") ``` how can I write this file into a numpy float array now?
``` >>> from scipy.io.wavfile import read >>> a = read("adios.wav") >>> numpy.array(a[1],dtype=float) array([ 128., 128., 128., ..., 128., 128., 128.]) ``` Typically it would be bytes which are then ints... here we just convert it to float type. You can read about read here: <https://docs.scipy.org/doc/scipy/reference/tutorial/io.html#module-scipy.io.wavfile>
Seven years after the question was asked... ``` import wave import numpy # Read file to get buffer ifile = wave.open("input.wav") samples = ifile.getnframes() audio = ifile.readframes(samples) # Convert buffer to float32 using NumPy audio_as_np_int16 = numpy.frombuffer(audio, dtype=numpy.int16) audio_as_np_float32 = audio_as_np_int16.astype(numpy.float32) # Normalise float32 array so that values are between -1.0 and +1.0 max_int16 = 2**15 audio_normalised = audio_as_np_float32 / max_int16 ```
Python: write a wav file into numpy float array
[ "", "python", "numpy", "wav", "" ]
I know the difference between a .py and a .pyc file. My question is **not** about **how**, **but** about **why** According to the [docs](http://docs.python.org/2/tutorial/modules.html#compiled-python-files): > A program doesn’t run any faster when it is read from a .pyc or .pyo > file than when it is read from a .py file; the only thing that’s > faster about .pyc or .pyo files is the speed with which they are > loaded. .pyc files load imports faster. But after loading the 'running' part of .pyc files takes the same time as the 'running' part in .py files? Why is is this? I would expected that * bit code (.pyc) is closer to the Python Virtual Machine and thus runs faster * .py files are being compiled to .pyc before they are being executed. This takes an extra step and thus costs time. My question: After the import part, Why does the running part of .pyc files doesn't speed up execution compared to .py files?
When you run a `.py` file, it is first compiled to bytecode, then executed. The loading of such a file is slower because for a `.pyc`, the compilation step has already been performed, but after loading, the same bytecode interpretation is done. In pseudocode, the Python interpreter executes the following algorithm: ``` code = load(path) if path.endswith(".py"): code = compile(code) run(code) ```
The way the programs are *run* is always the same. The compiled code is interpreted. The way the programs are *loaded* differs. If there is a current `pyc` file, this is taken as the compiled version, so no compile step has to be taken before running the command. Otherwise the `py` file is read, the compiler has to compile it (which takes a little time) but then the compiled version in memory is interpreted just like in the other way.
Why is the 'running' of .pyc files not faster compared to .py files?
[ "", "python", "pyc", "" ]
I am making a simple text based game but I can't even get past the second line of code. My code so far looks like this: ``` print("Welcome To City Text! First You Must Name Your City.") ``` Then the person is supposed to type a name and the shell will print "Welcome To "Town Name" City" The problem is I don't know how to do this. Does anyone know how to do this?
From the fact that you're using `print("text")` rather than `print "text"`, I assume you're using Python 3.x rather than Python 2.x. In that case, `raw_input` won't work, because that was renamed [`input`](http://docs.python.org/3/library/functions.html#input) (the original `input` function did [something else](http://docs.python.org/2/library/functions.html#input), and was removed entirely). So, if you're getting a NameError when you use `raw_input`, just replace it with `input`. (If you *aren't* getting a NameError, you're using Python 2.x, and you should leave out the parenthesis around the string you're printing; in Python 2.x, `print` is a [statement](http://docs.python.org/2/reference/simple_stmts.html#the-print-statement), not a function. It will still work with the parentheses, but it's just going to create confusion.)
The line raw\_input() lets you get the input from the console. The argument you pass it is the line it prints before getting the input. So your code would look something like this: ``` var_name=raw_input("Welcome To City Text! First You Must Name Your City.") ``` this will print: Welcome To City Text! First You Must Name Your City. then let you type until you hit enter and return what you typed.
Input And Print In Python
[ "", "python", "" ]
I want to generate a 24-bit WAV-format audio file using Python 2.7 from an array of floating point values between -1 and 1. I can't use [scipy.io.wavfile.write](http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.write.html) because it only supports 16 or 32 bits. The documentation for Python's own [wave](http://docs.python.org/2/library/wave.html) module doesn't specify what format of data it takes. So is it possible to do this in Python?
Another option is available in [`wavio`](https://github.com/WarrenWeckesser/wavio) (also on PyPI: <https://pypi.python.org/pypi/wavio>), a small module I created as a work-around to the problem of scipy not yet supporting 24 bit WAV files. The file [`wavio.py`](https://github.com/WarrenWeckesser/wavio/blob/master/wavio.py) contains the function `write`, which writes a numpy array to a WAV file. To write a 24-bit file, use the argument `sampwidth=3`. The only dependency of `wavio` is numpy; `wavio` uses the standard library `wave` to deal with the WAV file format. For example, ``` import numpy as np import wavio rate = 22050 # samples per second T = 3 # sample duration (seconds) f = 440.0 # sound frequency (Hz) t = np.linspace(0, T, T*rate, endpoint=False) sig = np.sin(2 * np.pi * f * t) wavio.write("sine24.wav", sig, rate, sampwidth=3) ```
I already [submitted an answer to this question](https://stackoverflow.com/a/17443437/500098) 2 years ago, where I recommended [scikits.audiolab](http://scikits.appspot.com/audiolab). In the meantime, the situation has changed and now there is a library available which is much easier to use and much easier to install, it even comes with its own copy of the [libsndfile](http://www.mega-nerd.com/libsndfile/) library for Windows and OSX (on Linux it's easy to install anyway): [PySoundFile](http://pysoundfile.readthedocs.org/)! If you have CFFI and NumPy installed, you can install PySoundFile simply by running ``` pip install soundfile --user ``` Writing a 24-bit WAV file is easy: ``` import soundfile as sf sf.write('my_24bit_file.wav', my_audio_data, 44100, 'PCM_24') ``` In this example, `my_audio_data` has to be a NumPy array with `dtype` `'float64'`, `'float32'`, `'int32'` or `'int16'`. BTW, I made an [overview page](http://nbviewer.ipython.org/github/mgeier/python-audio/blob/master/audio-files/index.ipynb) where I tried to compare many available Python libraries for reading/writing sound files.
How do I write a 24-bit WAV file in Python?
[ "", "python", "wav", "" ]
I've got a script in python which looks exactly like this: ``` x = input("Enter your name: ") print("Hello " + x) input("Press<enter>") ``` I've saved it correctly, and when I open the .py file, the terminal opens, and then closes almost instantly. I've figured out it says `SyntaxError: invalid syntax`. I've checked my code and to me its correct? I'm new to Python and I'm also using Python 3.3.2, the latest version. Why is this happening?
The Python 2.x function *[input()](https://docs.python.org/2/library/functions.html#input)* can only be used with integers. 3.x can be used with both strings and integers. You are probably using Python 2.x. For Python2.x, you must use *raw\_input()* To get your code to work, you must use Python3.x
These are the steps that I would follow to run the file: 1. open Terminal (or Powershell if you're using Windows) 2. go to the directory in your terminal where the python file resides that you're trying to fun 3. run the python file using for Terminal: python filename.py or for Powershell: filename.py This should work for you. If you're double clicking the file and trying to get it to run that way then yes, it will show a Terminal pop up and close immediately. If you're trying to open it to edit it and you're using Windows, you need to right-click the file, and select "Edit with IDLE"
Syntax Error when opening .py Python file?
[ "", "python", "syntax", "python-3.x", "terminal", "" ]
I have a file with following data: ``` <<row>>12|xyz|abc|2.34<</row>> <<eof>> ``` The file may have several rows like this. I am trying to design a parser which will parse each row present in this file and return an array with all rows. What would be the best way of doing it? The code has to be written in python. Code should not take rows that do not start with `<<row>>` or should raise error. =======> UPDATE <======== I just found that a particular `<<row>>` can span multiple lines. So my code and the code present below aren't working anymore. Can someone please suggest an efficient solution? The data files can contain hundreds to several thousands of rows.
A simple way without regular expressions: ``` output = [] with open('input.txt', 'r') as f: for line in f: if line == '<<eof>>': break elif not line.startswith('<<row>>'): continue else: output.append(line.strip()[7:-8].split('|')) ``` This uses every line starting with `<<row>>` until a line contains only `<<eof>>`
``` def parseFile(fileName): with open(fileName) as f: def parseLine(line): m = re.match(r'<<row>>(\d+)\|(\w+)\|(\w+)\|([\d\.]+)<</row>>$', line) if m: return m.groups() return [ values for values in ( parseLine(line) for line in f if line.startswith('<<row>>')) if values ] ``` And? Am I different? ;-)
Parsing a string pattern (Python)
[ "", "python", "string-parsing", "" ]
I have a very large text file that contains hockey statistics. I need two things from each line: 1. the name of the player 2. the points.(the first set of numbers) And I want to return the top 10 list. Below is a sample of the text file but it continues much longer. ``` html_log:Bob 1217.1 1.75 696:48 1 5 38 6 109 61 14:42 633 223 25 435:36 182 34 0.55 html_log:Steve 485.5 1.26 385:18 7 12 -1 28 172 218 16:04 839 94 101 143:18 44 15 -0.03 html_log:Jim 1153.3 1.84 625:54 1 2 71 3 2 10 7:58 499 3 5 616:36 241 36 1.13 ``` -repeats with more players and stats (no newlines) I need to get the player name, in this case the text following the "html\_log" tag I also need the first set of numbers, and need to output to return a top 10 list. Optimum result would output ->> ``` Bob 1217.1 Jim 1153.3 Steve 485.5 ``` + rest of users in text file and their rating, highest to lowest. or just the top 10 highest out of the text file.
Just break it down into small pieces, each of which is easy. In English: For each line in the file, you want the first two values, and you want to split the first value after the colon, and you want to treat the second value as a number. Then, you want to keep track of the top 10 pairs, ordered by that second value. In Python: ``` with open('large_file.txt') as f: pairs = (line.split()[:2] for line in f) processed_pairs = ((pair[0].split(':')[1], float(pair[1])) for pair in pairs) top_10_pairs = heapq.nlargest(10, processed_pairs, key=operator.itemgetter(1)) ``` Now you've got a `list` of `name`, `score` pairs, which is easy to print out: ``` for name, score in top_10_pairs: print('{} {}'.format(name, score)) ``` No matter how big the file is, this won't keep more than 10 processed pairs (plus a read buffer and some other basic stuff) in memory at a time, because we're just transforming an iterator full of files (a file) step by step into other iterators, and feeding that into `heapq.nlargest`, which only keeps the top `n` around.
``` dict(line.split()[:2] for line in [line.split(":")[1] for line in data.split("\n")]) # {'Bob': '1217.1', 'Jim': '1153.3', 'Steve': '485.5'} ```
parsing a file so as to get the top 10 players.
[ "", "python", "list", "parsing", "" ]
Is it possible to write it shorter? I am mostly interested in **not writing *`r[0].value`* twice**. The alternative **must be shorter**. ``` (r[0].value for r in sheet.range(USERROLELIST) if r[0].value) ``` `if r[0].value` -- to check that it != None `sheet.range` is from openpyxl module.
Here's another way of writing it, however I much prefer the code you have given ``` filter(None, (r[0].value for r in sheet.range(USERROLELIST))) ```
Do it like this: ``` (r for r in (R[0].value for R in sheet.range(USERROLELIST)) if r) ``` --- If you really want to make things with the "least amount of characters possible" You can try something like this: First of all, import your modules with shorter names... ``` from openpyxl import sheet sr = sheet.range ``` Secondly, have shorter variable names: ``` L = USERROLELIST ``` Finally, do away with extra whitespace: ``` (r for r in(R[0].value for R in sr(L))if r) ```
Shorter way to check if true in Python list comprehension expression?
[ "", "python", "" ]
I have a method in a class which decrypts a variable, and returns it. I remove the returned variable with "del" after use. What is the danger of these garbage values being accessed...and how can I best protect myself from them? Here is the code: ``` import decrypter import gc # mangled variable names used def decrypt(__var): __cleartext = decrypter.removeencryption(__var) return __cleartext __p_var = "<512 encrypted password text>" __p_cleartext = decrypt(__p_var) <....do login with __p_cleartext...> del __p_var, __p_cleartext gc.collect() ``` Could any of the variables, including \_\_var and \_\_cleartext be exploited at this point? Thanks! --- I've done a little more googling. Before I spend a few hours going down the wrong path...what I'm hearing is: 1. Store the password as a salted hash on the system (which it is doing now). 2. The salt for the hash should be entered in by the user at suite start (being done now) 3. However, the salt should be held in C process and not python. 4. The python script should pass the hash to to the C process for decryption. The python script is handling the login for a mysql database, and the password is needed to open the DB connection. If the code were along the lines of... ``` # MySQLdb.connect(host, user, password, database) mysql_host = 'localhost' mysql_db = 'myFunDatabase' hashed_user = '\xghjd\xhjiw\xhjiw\x783\xjkgd6\xcdw8' hashed_password = 'ghjkde\xhu78\x8y9tyk\x89g\x5de56x\xhyu8' db = MySQLdb.connect(mysql_host, <call_c(hashed_user)>, <call_c(hashed_password)>, mysql_db]) ``` Would this resolve (at least) the issue of python leaving garbage all over? --- P.s. I also found the post about memset ([Mark data as sensitive in python](https://stackoverflow.com/questions/982682/mark-data-as-sensitive-in-python/983525#983525)) but I'm assuming if I use C to decrypt the hash, this is not helpful. P.P.S. The dycrypter is currentlt a python script. If I were to add memset to the script and then "compile" it using py2exe or pyinstaller....would this actually do anything to help protect the password? My instincts say no, since all pyinstaller does is package up the normal interpreter and the same bytecode the local interpreter creates...but I don;t know enough about it...? --- So...following Aya's suggestion of making the encryption module in C, how much of a discernible memory footprint would the following setup leave. Part of the big issue is; the ability to decrypt the password must remain available throughout the run of the program as it will be called repeatedly...it's not a one-time thing. Make a C object which is started when the user logins in. It contains the decryption routine and the holds a copy of the salt entered by the user at login. The stored salt is obscured in the running object (in memory) by having been hashed by it's own encryption routine using a randomly generated salt. The randomly generated salt would still have to be held in a variable in the object too. This is not really to secure the salt, but just to try and obfuscate the memory footprint if someone should take a peek at it (making the salt hard to identify). I.e. c-obj ``` mlock() /*to keep the code memory resident (no swap)*/ char encrypt(data, salt){ (...) return encrypted_data } char decrypt(data, salt){ (...) return decrypted_data } stream_callback(stream_data){ return decrypt(stream_data, decrypt(s-gdhen, jhgtdyuwj)) } void main{ char jhgtdyuwj=rand(); s-gdhen = encrypt(<raw_user_input>, jhgtdyuwj); } ``` Then, the python script calls the C object directly, which passes the unencrypted result right into the MySQLdb call without storing any returns in any variable. I.e. ``` #!/usr/bin/python encrypted_username = 'feh9876\xhu378\x&457(oy\x' encrypted_password = 'dee\x\xhuie\xhjfirihy\x^\xhjfkekl' # MySQLdb.connect(host, username, password, database) db = MySQLdb.connect(self.mysql_host, c-obj.stream_callabck(encrypted_username), c-obj.stream_callback(encrypted_password), self.mysql_database) ``` What kind of memory footprint might this leave which could be snooped?
Any security system is only as strong as its weakest link. It's difficult to tell what the weakest link is in your current system, since you haven't really given any details on the overall architecture, but if you're actually using Python code like you posted in the question (let's call this `myscript.py`)... ``` #!/usr/bin/python encrypted_username = 'feh9876\xhu378\x&457(oy\x' encrypted_password = 'dee\x\xhuie\xhjfirihy\x^\xhjfkekl' # MySQLdb.connect(host, username, password, database) db = MySQLdb.connect(self.mysql_host, c-obj.stream_callabck(encrypted_username), c-obj.stream_callback(encrypted_password), self.mysql_database) ``` ...then regardless of how or where you decrypt the password, any user can come along and run a script like this... ``` import MySQLdb def my_connect(*args, **kwargs): print args, kwargs return MySQLdb.real_connect(*args, **kwargs) MySQLdb.real_connect = MySQLdb.connect MySQLdb.connect = my_connect execfile('/path/to/myscript.py') ``` ...which will print out the plaintext password, so implementing the decryption in C is like putting ten deadbolts on the front door, but leaving the window wide open. If you want a good answer on how to secure your system, you'll have to provide some more information on the overall architecture, and what attack vectors you're trying to prevent. If someone manages to hack root, you're pretty much screwed, but are better ways to conceal the password from non-root users. However, if you're satisfied that the machine you're running this code on is secure (in the sense that it can't be accessed by any 'unauthorized' users), then none of this password obfuscation stuff is necessary - you may as well just put the cleartext password directly into the Python source code. --- **Update** Regarding architecture, I meant, how many separate servers are you running, what responsibilities do they have, and how are they meant to communicate with each other, and/or the outside world? Assuming the primary goal is to prevent unauthorized access to the MySQL server, and assuming MySQL runs on a different server to the Python script, then why are you more concerned about someone gaining access to the server running the Python script, and getting the password for the MySQL server, rather than gaining access to the MySQL server directly? If you're using a 'salt' as a decryption key for the encrypted MySQL password, then how does an authorized user pass that value to the system? Do they have to login to the server via, say, ssh, and run the script from the commandline, or it this something accessible via, say, a webserver? Either way, if someone does compromise the system running the Python script, they merely have to wait until the next authorized user comes along, and 'sniff' the 'salt' they enter.
If no other references to the value exist, the your *gc.collect* normally destroys the object. However, something as simple as string interning or cacheing may keep an unexpected reference, leaving the value alive in memory. Python has a number of implementations (PyPy, Jython, PyPy) that do different things internally. The language itself makes very few guarantees about whether or when the value would actually get erased from memory. In your example, you also use name mangling. Because the mangling is easily reproduced by hand, this doesn't add any security at all. One further thought: It isn't clear what your security model is. If the attacker can call your decrypt function and run arbitrary code in the same process, what would prevent them from wrapping decrypt to keep a code of the inputs and outputs.
Python security: Danger of uncollected variables out of scope
[ "", "python", "security", "exploit", "garbage", "" ]
I have problems with SQL performance. For sudden reason the following queries are very slow: I have two lists which contains Id's of a certain table. I need to delete all records from the first list if the Id's already exists in the second list: ``` DECLARE @IdList1 TABLE(Id INT) DECLARE @IdList2 TABLE(Id INT) -- Approach 1 DELETE list1 FROM @IdList1 list1 INNER JOIN @IdList2 list2 ON list1.Id = list2.Id -- Approach 2 DELETE FROM @IdList1 WHERE Id IN (SELECT Id FROM @IdList2) ``` It is possible the two lists contains more than 10.000 records. In that case both queries takes each more than 20 seconds to execute. The execution plan also showed something I don't understand. Maybe that explains why it is so slow: ![Queryplan of both queries](https://i.stack.imgur.com/FsJjT.png) I Filled both lists with 10.000 sequential integers so both list contained value 1-10.000 as starting point. As you can see both queries shows for @IdList2 *Actual Number of Rows* is 50.005.000!!. @IdList1 is correct (*Actual Number of Rows* is 10.000) I know there are other solutions how to solve this. Like filling a third list instaed of removing from first list. But my question is: **Why are these delete queries so slow and why do I see these strange query plans?**
Add a Primary key to your table variables and watch them scream ``` DECLARE @IdList1 TABLE(Id INT primary Key not null) DECLARE @IdList2 TABLE(Id INT primary Key not null) ``` because there's no index on these table variables, any joins or subqueries must examine on the order of 10,000 times 10,000 = 100,000,000 pairs of values.
SQL Server compiles the plan when the table variable is empty and does not recompile it when rows are added. Try ``` DELETE FROM @IdList1 WHERE Id IN (SELECT Id FROM @IdList2) OPTION (RECOMPILE) ``` This will take account of the actual number of rows contained in the table variable and get rid of the nested loops plan Of course creating an index on `Id` via a constraint may well be beneficial for other queries using the table variable too.
Very slow DELETE query
[ "", "sql", "sql-server", "performance", "sql-execution-plan", "table-variable", "" ]
Is there a way to `slice` through a whole list while excluding a range of values or multiple range of values in the middle of the list? For example: ``` list = [1,2,3,4,5,6,7,8,9,0] print list[......] #some code inside ``` I'd like the above code to print the list while excluding a range of values so the output would be: `[1,2,3,8,9,0]` or excluding multiple value ranges so the output would be: `[1,2,6,7,0]` by using the slice notation or any other simple method you can suggest.
``` >>> nums = [1,2,3,4,5,6,7,8,9,0] >>> exclude = set(range(4, 8)) >>> [n for n in nums if n not in exclude] [1, 2, 3, 8, 9, 0] ``` Another example ``` >>> exclude = set(range(4, 8) + [1] + range(0, 2)) >>> [n for n in nums if n not in exclude] [2, 3, 8, 9] ```
Use list comprehensions: ``` >>> mylist = [1,2,3,4,5,6,7,8,9,0] >>> print [i for i in mylist if i not in xrange(4,8)] [1, 2, 3, 8, 9, 0] ``` Or if you want to exclude numbers in two different ranges: ``` >>> print [i for i in mylist if i not in xrange(4,8) and i not in xrange(1,3)] [3, 8, 9, 0] ``` By the way, it's not good practice to name a list `list`. It is already a built-in function/type. --- If the list was unordered and was a list of strings, you can use `map()` along with `sorted()`: ``` >>> mylist = ["2", "5", "3", "9", "7", "8", "1", "6", "4"] >>> print [i for i in sorted(map(int,mylist)) if i not in xrange(4,8)] [1, 2, 3, 8, 9] ```
Can I use the slice method to return a list that excludes ranges in the middle of the original list?
[ "", "python", "list", "range", "slice", "" ]
I have a python class in which I have a method I want to run a number of threads ``` class OutageTool: def main(self): outages = [{ 'var1' : 1, 'var2' : 2, }, { 'var1' : 3, 'var2' : 4, }] for outage in outages: t = threading.Thread(target=self.outage_thread, args=(outage)) t.start() def outage_thread(self, outage): """ some code here """ ``` When I run this code I'm getting the error ``` TypeError: outage_thread() takes exactly 2 arguments (3 given) ``` I'm new to python and would really appreciate any ideas on what's happening here. C
You have forgotten a `,` in your creation of `Thread`. In python doing `(5)` will result in the integer `5`, while doing `(5,)` would turn into a **tuple** with one entry that is the integer `5`. If you change your `args=(outage)` to `args=(outage,)` it should work as expected.
Make it ``` t = threading.Thread(target=self.outage_thread, args=(outage,)) ``` (Notice the , (command) after `outage`). In your case args is not a tuple.
threading error - too many args
[ "", "python", "python-multithreading", "" ]
I am trying to round a floating point number in python to zero decimal places. However, the round method is leaving a trailing 0 every time. ``` value = 10.01 rounded_value = round(value) print rounded_value ``` results in 10.0 but I want 10 How can this be achieved? Converting to an int?
Pass the rounded value to `int()` to get rid of decimal digits: ``` >>> value = 10.01 >>> int(round(value)) 10 >>> value = 10.55 >>> int(round(value)) 11 ```
`10.0` and `10` are the same `float` value. When you `print` that value, you get the string `10.0`, because that's the default string representation of the value. (The same string you get by calling `str(10.0)`.) If you want a non-default representation, you need to ask for it explicitly. For example, using the [`format`](http://docs.python.org/2/library/functions.html#format) function: ``` print format(rounded_value, '.0f') ``` Or, using the other formatting methods: ``` print '{:.0f}'.format(rounded_value) print '%.0f' % (rounded_value,) ``` The full details for why you want `'.0f'` are described in the [Format Specification Mini-Language](http://docs.python.org/2/library/string.html#formatspec), but intuitively: the `f` means you want fixed-point format (like `10.0` instead of, say, `1.0E2`), and the `.0` means you want no digits after the decimal point (like `10` instead of `10.0`). Meanwhile, if the *only* reason you `round`ed the value was for formatting… never do that. Leave the precision on the float, then trim it down in the formatting: ``` print format(value, '.0f') ```
python round leaving a trailing 0
[ "", "python", "" ]
Why using print won´t work, and it does work using return? I want to output the bigger number: ``` def bigger(x,y): if x<y: return y return x print bigger(2,7) ``` It prints out: > 7 ``` def bigger(x,y): if x<y: print y print x print bigger(2,7) ``` It prints out: > 7 > > None
Any particular reason you're not using [Python's builtin max](http://docs.python.org/2/library/functions.html#max)? Functions implicitly return `None` if no explicit return value is specified. So, in your second example, you are printing the result of a function that doesn't return anything. While the function itself is already printing what the result is. Your first version is the correct and what should be used version.
> Why using print won´t work, and it does work using return? It works. If a function doesn't have a return statement, the function returns a `None`. In second case you just print it.
Python newbie: why it works with return and not with print? (inside an if statement)
[ "", "python", "if-statement", "return", "" ]
I am making a text based game and have been doing just fine but now but I have run into an error with `int`. My code so far looks like this: ``` money = 500 print("You Have $500 To Spend On Your City. The Population Is 0 People") input_var3 = input("What Will You Spend Your Money On? A House Or A Restraunt. ") if input_var3 == "House": money - 100 print("You Have Spent Your Money On A House") print("You Now Have $" + money) if input_var3 == "Restraunt": money - 150 print("You Have Spent Your Money On A Restraunt") print("You Now Have $" + money) ``` Your money is equal to 500 dollars but if you spend it on a house or restraunt you will have less money and the shell will print how much you have left. However I always get this error: ``` Traceback (most recent call last): File "C:\Users\Jason\Desktop\City Text.py", line 11, in <module> print("You Now Have $" + money) TypeError: Can't convert 'int' object to str implicitly ``` I have realized that I must make a sting instead of the Int but I am not sure how to do this. Can anyone help?
The `money` variable is an *integer*. You can't mix integers with strings when concatenating them together. Use the `str()` function, which converts an integer to a string: ``` print("You Now Have $" + str(money)) ``` Also, I think you're intending to take 100 away from the money value. `money - 100` just returns `500 - 100`, which is `400`. If you want to make `money` equal to `400`, do: ``` money -= 100 ``` Which is equivalent to: ``` money = money - 100 ```
You need to store the value inside variable. ``` if input_var3 == "House": money -= 100 # Notice the usage of -= print("You Have Spent Your Money On A House") print("You Now Have $" + str(money)) # Type casting if input_var3 == "Restraunt": money = money - 150 # Same as -= print("You Have Spent Your Money On A Restraunt") print("You Now Have $" + str(money)) # Type casting ```
Subtracting From Int And Printing
[ "", "python", "" ]
I have made a very simple game in python using pygame. The score is based on whatever level the player reached. I have the level as a variable called `score`. I want to display the top level at the start or end of the game. I would be even more happy to display more than one score, but all of the other threads I have seen were too complicated for me to understand, so please keep it simple: I'm a beginner, only one score is necessary.
I recommend you use [shelve](https://docs.python.org/3.8/library/shelve.html). For example: ``` import shelve d = shelve.open('score.txt') # here you will save the score variable d['score'] = score # thats all, now it is saved on disk. d.close() ``` Next time you open your program use: ``` import shelve d = shelve.open('score.txt') score = d['score'] # the score is read from disk d.close() ``` and it will be read from disk. You can use this technique to save a list of scores if you want in the same way.
You can use the `pickle` module to save variables to disk and then reload them. Example: ``` import pickle # load the previous score if it exists try: with open('score.dat', 'rb') as file: score = pickle.load(file) except: score = 0 print "High score: %d" % score # your game code goes here # let's say the user scores a new high-score of 10 score = 10; # save the score with open('score.dat', 'wb') as file: pickle.dump(score, file) ``` This saves a single score to disk. The nice thing about pickle is that you can easily extend it to save multiple scores - just change `scores` to be an array instead of a single value. `pickle` will save pretty much any type of variable you throw at it.
Saving the highscore for a game?
[ "", "python", "save", "pygame", "leaderboard", "" ]
Im trying to find out how to get certain data from a file in the easiest way possible. I have searched all over the internet but can't find anything. I want to be able to do this: File.txt: ``` data1 = 1 data2 = 2 ``` but i want to get only data1 like so, ``` p = open('file.txt') f = p.get(data1) print(f) ``` Any Ideas, Thanks in advance.
``` with open("file.txt", "r") as f: key, val = f.readline().split('=') if key.strip() == 'data1': # if data1 is not the first line # do something with value and data ```
You can do: ``` with open("file.txt", "r") as f: for line in f: key, val = line.split('=') key = key.strip() val = val.strip() if key == 'data1': # if data1 is not the first line # do something with value and data ``` using map: ``` from operator import methodcaller with open("file.txt", "r") as f: for line in f: key, val = map(methodcaller("strip", " "), line.split('=')) if key == "data1": # do something with value and data ```
Python: Data Getting
[ "", "python", "input", "data-collection", "" ]
I am writing two queries in my sql procedure to retrieve data based on the parameter provided from the user in my asp.net application. ``` IF @Action = 'ID' SELECT Name,FirstName,LastName,Employee_ID,Location FROM Salary WHERE Employee_ID=@Employee_ID ELSE IF @Action = 'LASTNAME' SELECT Name,FirstName,LastName,Employee_ID,Location FROM Salary WHERE LastName like '%' + @LastName + '%' ``` Now what I want is to combine the query into single query so that irrespective of whether ID is passed as parameter or lastname is passed i must be able to get the data. Any help would be greatly appreciated. Thanks in Advance.
This solved my problem ``` SELECT Name, FirstName, LastName, Employee_ID, Location FROM Salary WHERE Employee_ID = ISNULL(@Employee_ID, Employee_ID) and LastName LIKE '%' + ISNULL(@LastName,LastName) + '%' ```
If all parameters can be used together: ``` SELECT Name,FirstName,LastName,Employee_ID,Location FROM Salary WHERE Employee_ID=@Employee_ID or LastName like '%' + @LastName + '%'; ``` One parameter - one call: ``` SELECT Name,FirstName,LastName,Employee_ID,Location FROM Salary WHERE (@Employee_ID is not null and Employee_ID=@Employee_ID) or (@Employee_ID is null and LastName like '%' + @LastName + '%') ```
SQL Query for retrieving data by combining two queries
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a text file that contains tweets per line, that need to be altered for a machine learning format. Im using python and basic unix text manipulation (regex) to achieve a lot of my string manipulation, and im gettin the hang of sed, grep and pythons .re function....this next problem however is mindblower for me, and wondering if anyone could help me with this. I have tried a few google searches, but tbh no luck :( I always start with pseudocode to make it easier on me, and this is what i want... "*Replace -token1- OR -token2- OR -token3- OR -token4- with integer '1', replace all **other** words/tokens with integer '0'* " Lets say my list of words/tokens for which need to become '1' is the following: * :) * cool * happy * fun and my tweets look like this: * this has been a fun day :) * i find python cool! it makes me happy The output of the new program/function would be: * 0 0 0 0 1 0 1 * 0 0 0 1 0 0 0 1 NOTE1: Notice how 'cool' has a '!' behind it, it should be included as well, although i can always remove all punctuation in the file first, to make it easier NOTE2: All tweets will be lowercase, I already have a function that changes all the lines into lowercase Does anyone know how to do this using unix regex (such as sed, grep, awk) or even how to do it in python? BTW this is NOT homework, im working on a sentiment analysis program and am experimenting a bit. thanx! :)
``` from string import punctuation as pnc tokens = {':)', 'cool', 'happy', 'fun'} tweets = ['this has been a fun day :)', 'i find python cool! it makes me happy'] for tweet in tweets: s = [(word in tokens or word.strip(pnc) in tokens) for word in tweet.split()] print(' '.join('1' if t else '0' for t in s)) ``` Output: ``` 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 ``` The `or` in the 4th line is there to handle `:)`, as suggested by @EOL. There are still cases that will not be handled correctly, such as with `cool :), I like it`. The problem is inherent to the requirements.
In `awk`: ``` awk ' NR==FNR { a[$1]; next } { gsub(/!/, "", $0) # This will ignore `!`. Other rules can be added. for (i=1;i<=NF;i++) { if ($i in a) { printf "1 " } else { printf "0 " } } print "" }' lookup tweets ``` ### Test: (You'll probably need to alter `gsub` line to handle special cases.) ``` [jaypal:~/Temp] cat lookup :) cool happy fun [jaypal:~/Temp] cat tweets this has been a fun day :) i find python cool! it makes me happy [jaypal:~/Temp] awk ' NR==FNR { a[$1]; next } { gsub(/!/, "", $0) for (i=1;i<=NF;i++) { if ($i in a) { printf "1 " } else { printf "0 " } } print "" }' lookup tweets 0 0 0 0 1 0 1 0 0 0 1 0 0 0 1 ```
Best way to change words into numbers using specific word list
[ "", "python", "regex", "sed", "awk", "" ]
With the help of joksnet's programs [here](https://stackoverflow.com/questions/4460921/extract-the-first-paragraph-from-a-wikipedia-article-python) I've managed to get plaintext Wikipedia articles that I'm looking for. The text returned includes Wiki markup for the headings, so for example, the sections of the [Albert Einstein article](http://en.wikipedia.org/wiki/Albert_einstein) are returned like this: ``` ==Biography== ===Early life and education=== blah blah blah ``` What I'd really like to do is feed the retrieved text to a function and wrap all the top level sections in bold html tags and the second level sections in italics, like this: ``` <b>Biography</b> <i>Early life and education</i> blah blah blah ``` But I'm afraid I don't know how to even start, at least not without making the function dangerously naive. Do I need to use regular expressions? Any suggestions greatly appreciated. PS Sorry if "parsing" is too strong a word for what I'm trying to do here.
I ended up doing this: ``` def parseWikiTitles(x): counter = 1 while '===' in x: if counter == 1: x = x.replace('===','<i>',1) counter = 2 else: x = x.replace('===',r'</i>',1) counter = 1 counter = 1 while '==' in x: if counter == 1: x = x.replace('==','<b>',1) counter = 2 else: x = x.replace('==',r'</b>',1) counter = 1 x = x.replace('<b> ', '<b>', 50) x = x.replace(r' </b>', r'</b>', 50) x = x.replace('<i> ', '<i>', 50) x = x.replace(r' </i>', r'<i>', 50) return x ``` I pass the string of text with wiki titles to that function and it returns the same text with the == and === replaced with bold and italics HTML tags. The last thing removes spaces before and after titles, for example `== title ==` gets converted to `<b>title</b>` instead of `<b> title </b>` Has worked without problem so far. Thanks for the help guys, Alex
I think the best way here would be to let MediaWiki take care of the parsing. I don't know the library you're using, but basically this is the difference between <http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Albert%20Einstein&rvprop=content> which returns the raw wikitext and <http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Albert%20Einstein&rvprop=content&rvparse> which returns the parsed HTML.
Making a (hopefully simple) wiki parser with python
[ "", "python", "parsing", "mediawiki", "wikipedia", "" ]
I've heard from many Pythonists that they prefer list comprehensions because they can do everything you can do using high order functions such as filter and reduce, **and** more. So this question address them: what is a solid example of something you can do with them, that is tricky to do with HOFs?
The answer is that there is no such example. Everything you can do with list comprehensions has a mechanical translation to higher-order functions. In fact, this is how Haskell implements list comprehensions: it desugars them to higher-order functions. Given a list comprehension like this: ``` [(x, y) | x <- [1..3], y <- [4..6]] ``` Haskell desugars it to: ``` concatMap (\x -> concatMap (\y -> [(x, y)]) [4..6]) [1..3] ``` Similarly, if you put in predicates like: ``` [(x, y) | x <- [1..3], y <- [4..6], x + y /= 5] ``` ... then that desugars to: ``` concatMap (\x -> concatMap (\y -> if (x + y) == 5 then [(x, y)] else []) [4..6]) [1..3] ``` In fact, this desugaring is part of the Haskell specification, which you can find [here](http://www.haskell.org/onlinereport/exps.html#sect3.11).
As has been said, everything you can do with list comprehensions can be desugared into higher-order functions, but a large part of the problem with doing this in Python is that Python lacks support for the kind of point-free programming you can use with `filter`, `map`, and friends in Haskell. Here's a somewhat contrived example, but I think you'll get the idea. Let's take this Python code: `[(x,y) for x,y in zip(xrange(20), xrange(20, 0, -1)) if x % 2 == 0 and y % 2 == 0]` All it does is print this out: `[(0, 20), (2, 18), (4, 16), (6, 14), (8, 12), (10, 10), (12, 8), (14, 6), (16, 4), (18, 2)]` Here's the equivalent version with filter: `filter(lambda ns : ns[0] % 2 == 0 and ns[1] % 2 == 0, zip(xrange(20), xrange(20, 0, -1)))` I hope you'll agree with me that it's a lot uglier. There isn't really much you can do to make it less ugly without defining a separate function. But let's look at the equivalent version in Haskell: ``` [(x,y) | (x,y) <- zip [0..20] [20,19..0], x `mod` 2 == 0 && y `mod` 2 == 0] ``` Okay, pretty much as good as the Python list comprehension version. What about the equivalent filter version? ``` import Data.Function let f = (&&) `on` (==0) . (`mod` 2) filter (uncurry f) $ zip [0..20] [20,19..0] ``` Okay, we had to do an import, but the code is (imo) a lot clearer once you understand what it does, although some people might still prefer `f` to be pointed, or even a lambda with filter. In my opinion the point-free version is more concise and conceptually clear. But the main point I want to make is that it is not really going to be this clear in Python because of the inability to partially apply functions without bringing in a separate library, and the lack of a composition operator, so in *Python* it is a good idea to prefer list comprehensions over map/filter, but in Haskell it can go either way depending on the specific problem.
What is a solid example of something that can be done with list comprehensions that is tricky with high order functions?
[ "", "python", "haskell", "clojure", "functional-programming", "list-comprehension", "" ]
I am using unittest and selenium to automate my browser testing. How would I go about making a test which I can run multiple times, where a user creates a ticket. The ticket has to has a title name, each time I run the test I want the title name to be random. I would like the format: "Test ticket, 1 | Test ticket, 2..."
The [faker](https://pypi.python.org/pypi/Faker/0.0.4) module offers some functionality to populate several different types of data: ``` import faker f = faker.Faker() In [11]: f. f.city f.full_address f.phonenumber f.zip_code f.company f.last_name f.state f.email f.lorem f.street_address f.first_name f.name f.username In [11]: f.city() Out[11]: u'Treyview' ``` . If you are going to *test* randomly, I recommend randomly generating a [seed](http://docs.python.org/2/library/random.html#random.seed) (and logging it), that way you can recreate any failing tests. *What you don't want is tests which fail but it's unclear why (i.e. if testing again, with different random values, passes).*
If you just need the string `Test ticket,1 ...` it is: ``` from random import randint randomString = "Test ticket, " + randint(min,max) ``` If you want to generate random strings you could use ``` ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(max)) ``` You also might want to think about preventing strings to be equal. In that case you could create a `range(min,max)` and use `random.shuffle()`
Random data for test automation in python
[ "", "python", "selenium", "automation", "webdriver", "" ]
I've read a few posts in StackOverflow about the `forms.Form` vs `forms.ModelForm`. It seems that when there is already a table in database for the form, it is better to use `ModelForm` so that you don't have to declare all the attribute again as they already exist in the corresponding class in model.py So my question is 1. How to exclude a series of attributes of the original class, is the following exclude expression okie? ``` class CreatePositionForm(ModelForm): class Meta: model = job_position exclude = ('create_date','post_date','is_valid',) ``` 2. When using the ModelForm, does it mean that the original attributes are not supposed to be change and there would be no more code before the class Meta?(just like the above code) 3. anyone can tell me what it the benefits of having a form over simply HTML form tags except that it helps report errors in ``` form.field_name.errors ``` I am new learner of Django and bear with me if these questions look stupid and thank you very much for your clarification!
As django docs state: > If you’re building a database-driven app, chances are you’ll have forms that map closely to Django models. For instance, you might have a BlogComment model, and you want to create a form that lets people submit comments. In this case, it would be redundant to define the field types in your form, because you’ve already defined the fields in your model. > > For this reason, Django provides a helper class that let you create a Form class from a Django model. ModelForm's are useful when you need a form that is closely related to any of the defined models. The advantage that ModelForm gives over the `form.Form` class is that you don't need to write extra code. For example: say you have a model defined as follows: ``` class Profile(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) email = models.EmailField() phone = models.IntegerField() profile_summary = models.TextField() # .. some more fields .. ``` Now let's create a form using 3 different methods and see which one of them is the easiest to implement. **Simple HTML form** For a plain HTML form you'll need to implement the form elements in the template and then validate the data yourself as follows. ``` <form id="profile-form"> <div> <label>First Name</label> <input name="first_name" type="text" /> </div> <div> <label>Last Name</label> <input name="last_name" type="text" /> </div> <div> <label>Email</label> <input name="email" type="text" /> </div> <div> <label>Phone Number</label> <input name="phone" type="text" /> </div> <div> <label>Profile Summary</label> <textarea name="profile_summary"></textarea> </div> </form> ``` and then in the view: ``` def update_profile(request): if request.method == 'POST' first_name = request.POST.get('first_name') # Do this for all the fields and validate them # manually ``` **Using forms.Form class** Using the `django.forms.Form` class you can define the form as follows ``` from django import forms class ProfileForm(forms.Form): first_name = forms.CharField() last_name = forms.CharField() email = forms.EmailField() phone = forms.IntegerField() summary = forms.TextField() # Notice the code duplication here # we already have this defined in our models. ``` **Using ModelForm class** ``` from django import forms class ProfileForm(forms.ModelForm) class Meta: model = models.Profile ``` So using `ModelForm` class you can simply create the form within 4-5 lines of code and also have django do validation and display form errors for you. Now to answer your questions, 1. There are two ways you can exclude specific fields from a form class. you can either use `exclude` or define `fields`. **Exclude** ``` from django import forms class ProfileForm(forms.ModelForm) class Meta: model = models.Profile exclude = ('phone',) ``` **Fields** ``` from django import forms class ProfileForm(forms.ModelForm) class Meta: model = models.Profile fields = ('first_name', 'last_name', 'email', 'profile_summary') ``` 1. I don't really understand what do you mean by `original attributes are not supposed to be changed` ??? 2. Benefits of using ModelForm * Less Code / No Code Duplication * Form Validation * Form Errors * Quick * pretty code
It seems like "validation" would be kind of a big answer to 3 - it's usually a major feature of web frameworks. As for not using HTML tags, there's the [DRY principle](http://en.wikipedia.org/wiki/Don%27t_repeat_yourself): if you're already defining what the fields of a form are (for validation purposes), why would you have to duplicate that in your templates? Have a *single source of truth* about how your forms look like (in Django, the `Form` objects, or `ModelForm`s together with `Model`s), and derive what you can (the HTML) from that single source.
Can a form class of ModelForm change the attributes?
[ "", "python", "django", "forms", "" ]
I'm calling `subprocess.popen()` on the [xpdf](http://www.foolabs.com/xpdf/) program pdfinfo, which is returning text including some characters in the upper half of the 8-bit character set. I pass the result to a JSON serializer and it is complaining when it gets to character `\xae` (the ® sign): ``` >>> import json >>> json.dumps({'a':'Adobe\xae'}) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\app\python\2.7.3\lib\json\__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "c:\app\python\2.7.3\lib\json\encoder.py", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File "c:\app\python\2.7.3\lib\json\encoder.py", line 264, in iterencode return _iterencode(o, 0) UnicodeDecodeError: 'utf8' codec can't decode byte 0xae in position 5: invalid start byte ``` How can I fix this? I am totally confused about codecs and where I'm supposed to add the right information to help Python figure out how to deal with it. --- Edit: If the input is from *me* (or at least my source code), not another process, I could just use a Unicode string literal ``` >>> json.dumps({'a':u'Adobe\u00ae'}) '{"a": "Adobe\\u00ae"}' ``` and Python would handle it fine. But I don't know what hint to give Python to decode the output of pdfinfo as Unicode.
First, you need to figure out what the character encoding of the data you're getting back is. I'm guessing that it's [Windows-1252](http://en.wikipedia.org/wiki/Windows-1252), which has the symbol "®" at code point 0xAE. So, to decode that, you would use the [`str.decode`](http://docs.python.org/2/library/stdtypes.html#str.decode) function: ``` raw_data = 'Adobe\xae' decoded = raw_data.decode('Windows-1252') print decoded # Prints "Adobe®" ```
There is a parameter `ensure_ascii` for json encoding. ``` >>> json.dumps({'a':u'Adobe\u00ae'}, ensure_ascii=False) u'{"a": "Adobe\xae"}' >>> print json.dumps({'a':u'Adobe\u00ae'}, ensure_ascii=False) {"a": "Adobe®"} ``` If `ensure_ascii` is `False`, the result may contain non-ASCII characters and the return value may be a unicode instance.
Converting upper 128 characters in Python from subprocess.popen() to use in JSON
[ "", "python", "unicode", "encoding", "" ]
Supose these 3 tables: **SHIP :** ``` ID -------------------- 1 2 3 ``` **ARRIVE:** ``` shipID Atime -------------------- 1 t1 1 t3 ``` **LEAVE:** ``` shipID Ltime -------------------- 1 t2 1 t4 ``` I need a query that returns : ``` shipID Atime Ltime ------------------------------ 1 t1 null 1 null t2 1 t3 null 1 null t4 ``` where t1>t2>t3>t4 this result is acceptable to : ``` shipID Atime Ltime ------------------------------ 1 t1 null 1 t3 null 1 null t2 1 null t4 ```
Try: ``` SELECT shipID, atime, null ltime from arrive UNION ALL SELECT shipID, null atime, ltime from `leave` order by coalesce(atime,ltime) ``` SQLFiddle [here](http://www.sqlfiddle.com/#!2/a4bb5/6).
Try this: ``` SELECT DISTINCT t1.shipid, a.atime, l.ltime FROM ( SELECT shipID, atime AS time from arrive UNION ALL SELECT shipID, ltime AS time from `leave` ) AS t1 LEFT JOIN arrive AS a ON a.shipid = t1.shipid AND a.atime = t1.time LEFT JOIN `leave` AS l ON l.shipid = t1.shipid AND l.ltime = t1.time ``` See it in action here: * [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!2/a4bb5/4) This will give you: ``` | SHIPID | ATIME | LTIME | ---------------------------- | 1 | t1 | (null) | | 1 | t3 | (null) | | 1 | (null) | t2 | | 1 | (null) | t4 | ```
Combine two tables
[ "", "sql", "" ]
![enter image description here](https://i.stack.imgur.com/t2QUb.png)![enter image description here](https://i.stack.imgur.com/S6Rqx.png)I have created SSIS package which is running successfully and dumping the data to the required place. But the same package results in the error when i run it through job. I googled n got these links but failed to get the way out- <http://social.msdn.microsoft.com/Forums/en-US/sqltools/thread/9034bdc4-24fd-4d80-ad8d-cc780943397a/> <http://www.progtown.com/topic390755-error-at-start-job-the-job-was-invoked-by-user-sa.html> Please suggest .
The screen captures are great but the detail is going to be on the sublines, so in the first picture, where you have expanded the [+] sign and it says "Executed as user X. Unable to open Step output file" If you select that row, there is generally more detail displayed in the bottom pane. ## General trouble shooting for something working in BIDS/SSDT but not in SQL Agent That said, generally when something works in BIDS/SSDT and does not in the SQL Agent, then the first thing to look at is the difference in permissions. You are running the package in visual studio and *your* credentials are used for * File System * Database (unless a specific user and pass are provided) * General SaaS (Send Mail Task will use some mail host to transfer the email) Running things in a SQL Agent job can complicate things as you now have the ability for each job individual job step to run under the SQL Agent account or a delegated set of credentials your DBA has established. Further complicating matters are network resources---my `K:` drive might be mapped to \\server1\it\billinkc whereas the SQL Server Agent Account might have it mapped to \\server2\domainAccount\SQLServer\ or it might be entirely unmapped. As Gowdhaman008 mentioned, there can also be a 32 vs 64 bit mismatch. Generally this is specific to using Excel as a source/destination but also rears its head with other RDBMS specific drivers and/or ODBC connections for said resources. ## Specific to your example Based on the fragment of the error message, my primary assumption is that the account `CORP\CORP-MAD$` does not have access to the location where the file has been placed. To resolve that, ensure the MAD$ account has read/write access to the location the Happy files have been placed. Since that account ends in $, it might only exist on the computer where SQL Agent is running. If it's accessing a network/SaaS resource, you might need to create an explicit Credential in SQL Server (under Security) and then authorize that Credential for SSIS subtasks. A secondary, less likely, possibility is that the files don't exist and that's just a weird Send Mail error. I know I still get plenty of hits on [The parameter 'address' cannot be an empty string](http://billfellows.blogspot.com/2008/12/parameter-address-cannot-be-empty.html) even though an email address is provided.
I am assuming that it is running in BIDS, not in SQL Agent job. I faced this kind of problem and set the package property in the agent job as following screenshot[checked the `Use 32 bit runtime`] and it worked for me. ![enter image description here](https://i.stack.imgur.com/kJRkh.png) Hope this helps!
The job failed. The job was invoked by user<user>. The last step to run was step1
[ "", "sql", "ssis", "job-scheduling", "sql-agent-job", "" ]
What does this error message mean? > SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 123-125: truncated \uXXXX escape I get this error reported at a position in the comments, which contain only non-Unicode chars. The problematic code is the following: ``` """ loads Font combinations from a file # # The font combinations are saved in the format: % -> Palatino, Helvetica, Courier \usepackage{mathpazo} %% --- Palatino (incl math) \usepackage[scaled=.95]{helvet} %% --- Helvetica (Arial) \usepackage{courier} %% --- Courier \renewcommand{\fontdesc}{Palatino, Arial, Courier} % <------------------- # # with "% ->" indicating the start of a new group # and "% <" indicating the end. """ ```
It's worth noting that the "problematic code" is not technically a comment, but a multiline string which will be evaluated during bytecode compilation. Depending in its location in the source file, it may end up in a [docstring](http://www.python.org/dev/peps/pep-0257/), so it has to be syntactically valid. For example... ``` >>> def myfunc(): ... """This is a docstring.""" ... pass >>> myfunc.__doc__ 'This is a docstring.' >>> help(myfunc) Help on function myfunc in module __main__: myfunc() This is a docstring. ``` There's no true multiline comment delimiter in Python, so if you don't want it to be evaluated, use several single-line comments... ``` # This is my comment line 1 # ...line 2 # etc. def myfunc(): pass ```
As the others have said, it's trying to parse `\usepackage` as a Unicode escape and failing because it's invalid. The way around this is to escape the backslash: ``` """\\usepackage"" ``` Or to use a [raw string](http://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals) instead: ``` r"""\usepackage""" ``` [PEP 257](http://www.python.org/dev/peps/pep-0257/), which covers docstring conventions, suggests the latter.
Unicode decode bytes error (Python)
[ "", "python", "unicode", "" ]
I have some object.ID-s which I try to store in the user session as tuple. When I add first one it works but tuple looks like `(u'2',)` but when I try to add new one using `mytuple = mytuple + new.id` got error `can only concatenate tuple (not "unicode") to tuple`.
You need to make the second element a 1-tuple, eg: ``` a = ('2',) b = 'z' new = a + (b,) ```
Since Python 3.5 ([PEP 448](https://www.python.org/dev/peps/pep-0448/)) you can do unpacking within a tuple, list set, and dict: ``` a = ('2',) b = 'z' new = (*a, b) ```
Python add item to the tuple
[ "", "python", "tuples", "" ]
I am developing a game in Python and was wondering how to give it its own icon. I am using a windows computer and have no extra things installed with Python. Oh also I am using version 3.3 Is this even possible. P.S I have found other things on Stack Overflow but they are using different Operating Systems like Ubuntu and Macintosh
There are two steps: first build the Python executable. For this you will need something like [py2exe](http://www.py2exe.org/), "which converts Python scripts into executable Windows programs, able to run without requiring a Python installation." Then once you have your executable, to give it an icon, you can use the answer to this question: [Add icon to existing EXE file from the command line](https://stackoverflow.com/questions/673203/add-icon-to-existing-exe-file-from-the-command-line) for that finishing touch.
Update (2021-11-3): Py2Exe new versions are now available so @dmitris's solution will now work although PyInstaller is another option. However the Python 3.5 series of Py2Exe ended on September 30, 2020. --- Original Answer: @dmitri's solution works but Py2Exe stopped development at python 3.4 and will not work with newer versions PyInstaller would also do this. ``` pip install pyinstaller pyinstaller --onefile --windowed --icon=youricon.ico yourprogram.py ``` Python version 3.7.3.
How To Add An Icon Of My Own To A Python Program
[ "", "python", "" ]
Is there a way to interrupt (`Ctrl+C`) a Python script based on a loop that is embedded in a Cython extension? I have the following python script: ``` def main(): # Intantiate simulator sim = PySimulator() sim.Run() if __name__ == "__main__": # Try to deal with Ctrl+C to abort the running simulation in terminal # (Doesn't work...) try: sys.exit(main()) except (KeyboardInterrupt, SystemExit): print '\n! Received keyboard interrupt, quitting threads.\n' ``` This runs a loop that is part of a C++ Cython extension. Then, while pressing `Ctrl+C`, the `KeyboardInterrupt` is thrown but ignored, and the program keeps going until the end of the simulation. The work around I found, is to handle the exception from within the extension by catching the `SIGINT` signal : ``` #include <execinfo.h> #include <signal.h> static void handler(int sig) { // Catch exceptions switch(sig) { case SIGABRT: fputs("Caught SIGABRT: usually caused by an abort() or assert()\n", stderr); break; case SIGFPE: fputs("Caught SIGFPE: arithmetic exception, such as divide by zero\n", stderr); break; case SIGILL: fputs("Caught SIGILL: illegal instruction\n", stderr); break; case SIGINT: fputs("Caught SIGINT: interactive attention signal, probably a ctrl+c\n", stderr); break; case SIGSEGV: fputs("Caught SIGSEGV: segfault\n", stderr); break; case SIGTERM: default: fputs("Caught SIGTERM: a termination request was sent to the program\n", stderr); break; } exit(sig); } ``` Then : ``` signal(SIGABRT, handler); signal(SIGFPE, handler); signal(SIGILL, handler); signal(SIGINT, handler); signal(SIGSEGV, handler); signal(SIGTERM, handler); ``` Can't I make this work from Python, or at least from Cython instead ? As I am about to port my extension under Windows/MinGW, I would appreciate to have something less Linux specific.
You have to periodically check for pending signals, for example, on every Nth iteration of the simulation loop: ``` from cpython.exc cimport PyErr_CheckSignals cdef Run(self): while True: # do some work PyErr_CheckSignals() ``` `PyErr_CheckSignals` will run signal handlers installed with [signal](http://docs.python.org/2/library/signal.html) module (this includes raising `KeyboardInterrupt` if necessary). `PyErr_CheckSignals` is pretty fast, it's OK to call it often. Note that it should be called from the main thread, because Python runs signal handlers in the main thread. Calling it from worker threads has no effect. **Explanation** Since signals are delivered asynchronously at unpredictable times, it is problematic to run any meaningful code directly from the signal handler. Therefore, Python queues incoming signals. The queue is processed later as part of the interpreter loop. If your code is fully compiled, interpreter loop is never executed and Python has no chance to check and run queued signal handlers.
If you are trying to handle `KeyboardInterrupt` in code that releases the GIL (for example, because it uses `cython.parallel.prange`), you will need to re-acquire the GIL to call `PyErr_CheckSignals`. The following snippet (adapted from @nikita-nemkin's answer above) illustrates what you need to do: ``` from cpython.exc cimport PyErr_CheckSignals from cython.parallel import prange cdef Run(self) nogil: with nogil: for i in prange(1000000) # do some work but check for signals every once in a while if i % 10000 == 0: with gil: PyErr_CheckSignals() ```
Cython, Python and KeyboardInterrupt ignored
[ "", "python", "cython", "keyboardinterrupt", "" ]
I'm using `yaml.dump` to output a dict. It prints out each item in alphabetical order based on the key. ``` >>> d = {"z":0,"y":0,"x":0} >>> yaml.dump( d, default_flow_style=False ) 'x: 0\ny: 0\nz: 0\n' ``` Is there a way to control the order of the key/value pairs? In my particular use case, printing in reverse would (coincidentally) be good enough. For completeness though, I'm looking for an answer that shows how to control the order more precisely. I've looked at using `collections.OrderedDict` but PyYAML doesn't (seem to) support it. I've also looked at subclassing `yaml.Dumper`, but I haven't been able to figure out if it has the ability to change item order.
There's probably a better workaround, but I couldn't find anything in the documentation or the source. --- **Python 2 (see comments)** I subclassed `OrderedDict` and made it return a list of unsortable items: ``` from collections import OrderedDict class UnsortableList(list): def sort(self, *args, **kwargs): pass class UnsortableOrderedDict(OrderedDict): def items(self, *args, **kwargs): return UnsortableList(OrderedDict.items(self, *args, **kwargs)) yaml.add_representer(UnsortableOrderedDict, yaml.representer.SafeRepresenter.represent_dict) ``` And it seems to work: ``` >>> d = UnsortableOrderedDict([ ... ('z', 0), ... ('y', 0), ... ('x', 0) ... ]) >>> yaml.dump(d, default_flow_style=False) 'z: 0\ny: 0\nx: 0\n' ``` --- **Python 3 or 2 (see comments)** You can also write a custom representer, but I don't know if you'll run into problems later on, as I stripped out some style checking code from it: ``` import yaml from collections import OrderedDict def represent_ordereddict(dumper, data): value = [] for item_key, item_value in data.items(): node_key = dumper.represent_data(item_key) node_value = dumper.represent_data(item_value) value.append((node_key, node_value)) return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', value) yaml.add_representer(OrderedDict, represent_ordereddict) ``` But with that, you can use the native `OrderedDict` class.
If you upgrade PyYAML to 5.1 version, now, it supports dump without sorting the keys like this: ``` yaml.dump(data, sort_keys=False) ``` As shown in `help(yaml.Dumper)`, `sort_keys` defaults to `True`: ``` Dumper(stream, default_style=None, default_flow_style=False, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, sort_keys=True) ``` (These are passed as kwargs to `yaml.dump`)
Can PyYAML dump dict items in non-alphabetical order?
[ "", "python", "dictionary", "yaml", "pyyaml", "" ]
How would one go about finding the date of the next Saturday in Python? Preferably using `datetime` and in the format '2013-05-25'?
``` >>> from datetime import datetime, timedelta >>> d = datetime.strptime('2013-05-27', '%Y-%m-%d') # Monday >>> t = timedelta((12 - d.weekday()) % 7) >>> d + t datetime.datetime(2013, 6, 1, 0, 0) >>> (d + t).strftime('%Y-%m-%d') '2013-06-01' ``` I use `(12 - d.weekday()) % 7` to compute the delta in days between given day and next Saturday because `weekday` is between 0 (Monday) and 6 (Sunday), so Saturday is 5. But: * 5 and 12 are the same modulo 7 (yes, we have 7 days in a week :-) ) * so `12 - d.weekday()` is between 6 and 12 where `5 - d.weekday()` would be between 5 and -1 * so this allows me not to handle the negative case (-1 for Sunday). Here is a very simple version (no check) for any weekday: ``` >>> def get_next_weekday(startdate, weekday): """ @startdate: given date, in format '2013-05-25' @weekday: week day as a integer, between 0 (Monday) to 6 (Sunday) """ d = datetime.strptime(startdate, '%Y-%m-%d') t = timedelta((7 + weekday - d.weekday()) % 7) return (d + t).strftime('%Y-%m-%d') >>> get_next_weekday('2013-05-27', 5) # 5 = Saturday '2013-06-01' ```
I found this [pendulum](https://pendulum.eustace.io/) pretty useful. Just one line ``` In [4]: pendulum.now().next(pendulum.SATURDAY).strftime('%Y-%m-%d') Out[4]: '2019-04-27' ``` See below for more details: ``` In [1]: import pendulum In [2]: pendulum.now() Out[2]: DateTime(2019, 4, 24, 17, 28, 13, 776007, tzinfo=Timezone('America/Los_Angeles')) In [3]: pendulum.now().next(pendulum.SATURDAY) Out[3]: DateTime(2019, 4, 27, 0, 0, 0, tzinfo=Timezone('America/Los_Angeles')) In [4]: pendulum.now().next(pendulum.SATURDAY).strftime('%Y-%m-%d') Out[4]: '2019-04-27' ```
Finding the date of the next Saturday
[ "", "python", "datetime", "python-2.7", "" ]
I am trying to use convert in an where clause in the select statement. My query looks like this: ``` SELECT DISTINCT TOP 10 [SurveyResult].* ,[Ticket].[RefNumber] FROM [SurveyResult] LEFT JOIN [Ticket] ON [SurveyResult].[TicketID] = [Ticket].[TicketID] JOIN [SurveyResponse] AS SurveyResponse1 ON [SurveyResult].[ResultID] = SurveyResponse1.[ResultID] JOIN [QuestionAnswer] AS QuestionAnswer1 ON SurveyResponse1.[AnswerID] = QuestionAnswer1.[AnswerID] JOIN [SurveyQuestion] AS SurveyQuestion1 ON QuestionAnswer1.[QuestionID] = SurveyQuestion1.[QuestionID] WHERE SurveyQuestion1.[SurveyID] = [SurveyResult].[SurveyID] AND SurveyQuestion1.[QuestionID] = 'C86CB39A-8FE0-4FE8-B38F-17F1BE611016' AND CONVERT(INT, SurveyResponse1.[Response]) >= 1 AND CONVERT(INT, SurveyResponse1.[Response]) <= 5 ``` The problem is that I get some errors when converting the values to integer in the where statement. I know I have some rows that don't contain numbers in the Response column but I filter those so without the convert part in the where clause I get only numbers so it works like this: ``` SELECT TOP 1000 [ResponseID] ,[ResultID] ,[Response] FROM [WFSupport].[dbo].[SurveyResponse] JOIN QuestionAnswer ON SurveyResponse.AnswerID = QuestionAnswer.AnswerID WHERE QuestionAnswer.QuestionID = 'C10BF42E-5D51-46BC-AD89-E57BA80EECFD' ``` And in the results I get numbers but once I add the convert part in the statement I I get an error that it can't convert some text to numbers.
Either do like Mark says or just have NULL values default to something numerical, this would give you a where statement like: ``` WHERE SurveyQuestion1.[SurveyID] = [SurveyResult].[SurveyID] AND SurveyQuestion1.[QuestionID] = 'C86CB39A-8FE0-4FE8-B38F-17F1BE611016' AND CONVERT(INT, ISNULL(SurveyResponse1.[Response],0)) BETWEEN 1 AND 5 ``` The important part is the ISNULL() function and I also used BETWEEN to avoid duplicate converts.
Try: ``` SELECT DISTINCT TOP 10 [SurveyResult].*, [Ticket].[RefNumber] FROM [SurveyResult] LEFT JOIN [Ticket] ON [SurveyResult].[TicketID] = [Ticket].[TicketID] JOIN [SurveyResponse] AS SurveyResponse1 ON [SurveyResult].[ResultID] = SurveyResponse1.[ResultID] JOIN [QuestionAnswer] AS QuestionAnswer1 ON SurveyResponse1.[AnswerID] = QuestionAnswer1.[AnswerID] JOIN [SurveyQuestion] AS SurveyQuestion1 ON QuestionAnswer1.[QuestionID] = SurveyQuestion1.[QuestionID] where SurveyQuestion1.[SurveyID] = [SurveyResult].[SurveyID] AND SurveyQuestion1.[QuestionID] = 'C86CB39A-8FE0-4FE8-B38F-17F1BE611016' AND CASE SurveyQuestion1.[QuestionID] WHEN 'C86CB39A-8FE0-4FE8-B38F-17F1BE611016' THEN Convert(int, SurveyResponse1.[Response]) ELSE 0 END BETWEEN 1 AND 5 ``` (The `AND SurveyQuestion1.[QuestionID] = 'C86CB39A-8FE0-4FE8-B38F-17F1BE611016'` is retained in case the query is using an index on QuestionID - if not, it can be removed, as the same condition is implicit in the subsequent `CASE` condition.)
Sql in sql server with convert
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have got this problem. I´m trying to set text on a lineEdit object on pyqt4, then wait for a few seconds and changing the text of the same lineEdit. For this I´m using the `time.sleep()` function given on the python Time module. But my problem is that instead of setting the text, then waiting and finally rewrite the text on the lineEdit, it just waits the time it´s supposed to sleep and only shows the final text. My code is as follows: ``` from PyQt4 import QtGui from gui import * class Ventana(QtGui.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setupUi(self) self.button.clicked.connect(self.testSleep) def testSleep(self): import time self.lineEdit.setText('Start') time.sleep(2) self.lineEdit.setText('Stop') def mainLoop(self, app ): sys.exit( app.exec_()) if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) window = Ventana() window.show() sys.exit(app.exec_()) ```
You can't use `time.sleep` here because that freezes the GUI thread, so the GUI will be completely frozen during this time. You should probably use a [`QTimer`](http://pyqt.sourceforge.net/Docs/PyQt4/qtimer.html) and use it's `timeout` signal to schedule a signal for deferred delivery, or it's `singleShot` method. For example (adapted your code to make it run without dependencies): ``` from PyQt4 import QtGui, QtCore class Ventana(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setLayout(QtGui.QVBoxLayout()) self.lineEdit = QtGui.QLineEdit(self) self.button = QtGui.QPushButton('clickme', self) self.layout().addWidget(self.lineEdit) self.layout().addWidget(self.button) self.button.clicked.connect(self.testSleep) def testSleep(self): self.lineEdit.setText('Start') QtCore.QTimer.singleShot(2000, lambda: self.lineEdit.setText('End')) def mainLoop(self, app ): sys.exit( app.exec_()) if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) window = Ventana() window.show() sys.exit(app.exec_()) ```
Also, take a look at the QThread sleep() function, it puts the current thread to sleep and allows other threads to run. <https://doc.qt.io/qt-5/qthread.html#sleep>
Sleep is not working on pyqt4
[ "", "python", "qt", "python-3.x", "pyqt", "sleep", "" ]
I am trying to combine entries from two columns in a spreadsheet into a single list in python. The first column contains the first person in every pair and is the person who asked the question, the questioner. The second is the person who responded: the answerer. I want the list to look like this: ``` [('Jack', 'Jill'), ('Jack', 'John'), ('Jack', 'Jason'), ('Jill', 'John')...] ``` However, my list looks like this: ``` ['(Jack, Jill)', '(Jack, John)', '(Jack, Jason)', '(Jack, john)'...] ``` The key difference is that in the first list, the quotation marks are on the inside of the parentheses and in the second, they are on the outside. Here's my process: ``` answerers = line['answerers'].split(" ") for answerer in answerers: edgelist.append("(" + line['questioner'] + ", " + answerer + ")") ``` What should I do differently to have the quotation marks on the inside, around each person, rather than on the outside?
just keep them as two seperate lists, for example `asker` and `answerer` then do this ``` >>> asker = ['Jack','Jack','Jack','Jill'] >>> answerer = ['Jill','John','Jason','John'] >>> finalList = zip(asker, answerer) >>> >>> >>> finalList [('Jack', 'Jill'), ('Jack', 'John'), ('Jack', 'Jason'), ('Jill', 'John')] ```
``` edgelist.append(( line['questioner'] ,answerer)) ``` assuming I understand your question
How to match parentheses and comma formatting in list, python
[ "", "python", "list", "formatting", "parentheses", "" ]
Hi I have a list of values. I want to get another list with the amount of times every values in that list occurs. This is fairly easy, but I also need to have the values which are not present in the original list, to be present in the frequency list, but then with value 0. For example: ``` I = [0,1,1,2,2,2,4,4,5,5,6,6,6,8,8,8] ``` What you expect: ``` freqI = [1,2,3,2,2,2,3,3] ``` What I need: ``` freqI = [1,2,3,0,2,2,3,0,3] ``` As you can see 3 and 7 are not present in **I**, though they are still accounted for in the frequency list. My initial try ended up giving me the first kind of solution (with the intermediate values): ``` d = {x:I.count(x) for x in I} sorted_x = sorted(d.iteritems(), key=operator.itemgetter(0)) ``` How can I get the frequency count (aka histogram) of my array, with the intermediate values present ?
``` >>> lis = [0,1,1,2,2,2,4,4,5,5,6,6,6,8,8,8] >>> maxx,minn = max(lis),min(lis) >>> from collections import Counter >>> c = Counter(lis) >>> [c[i] for i in xrange(minn,maxx+1)] [1, 2, 3, 0, 2, 2, 3, 0, 3] ``` or as suggested by @DSM we can get `min` and `max` from the `dict` itself: ``` >>> [c[i] for i in xrange( min(c) , max(c)+1)] [1, 2, 3, 0, 2, 2, 3, 0, 3] ```
How about ``` >>> I = [0,1,1,2,2,2,4,4,5,5,6,6,6,8,8,8] >>> from collections import Counter >>> frequencies = Counter(I) >>> frequencies Counter({2: 3, 6: 3, 8: 3, 1: 2, 4: 2, 5: 2, 0: 1}) ``` You can query the counter for any number. For numbers it hasn't seen, it gives 0 ``` >>> frequencies[42] 0 ```
Get frequency count of elements in an array
[ "", "python", "" ]
``` >>> import sys >>> sys.getsizeof("") 40 ``` Why does the empty string use so many bytes? Does anybody know what is stored in those 40 bytes?
In Python strings are objects, so that values is the size of the object itself. So this size will always be bigger than the string size itself. From `stringobject.h`: ``` typedef struct { PyObject_VAR_HEAD long ob_shash; int ob_sstate; char ob_sval[1]; /* Invariants: * ob_sval contains space for 'ob_size+1' elements. * ob_sval[ob_size] == 0. * ob_shash is the hash of the string or -1 if not computed yet. * ob_sstate != 0 iff the string object is in stringobject.c's * 'interned' dictionary; in this case the two references * from 'interned' to this object are *not counted* in ob_refcnt. */ } PyStringObject; ``` From here you can get some clues on how those bytes are used: * `len(str)+1` bytes to store the string itself; * 8 bytes for the hash; * (...)
You can find some information about the implementation if python strings in a [weblog article by Laurent Luce](http://www.laurentluce.com/posts/python-string-objects-implementation/). Additionally you can browse the [source](http://svn.python.org/projects/python/trunk/Objects/stringobject.c). The size of string objects depends on the OS and type of machine and some choices. On 64-bit FreeBSD, using unicode for string literals (`from __future__ import unicode_literals`): ``` In [1]: dir(str) Out[1]: ['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill'] In [2]: import sys In [3]: sys.getsizeof("") Out[3]: 52 In [4]: sys.getsizeof("test") Out[4]: 68 In [7]: sys.getsizeof("t") Out[7]: 56 In [8]: sys.getsizeof("te") Out[8]: 60 In [9]: sys.getsizeof("tes") Out[9]: 64 ``` Every character uses 4 bytes extra in this case.
How is str implemented in Python?
[ "", "python", "python-2.7", "python-internals", "" ]
I create a dictionary for the most used words and get the top ten. I need to sort this for the list, which should be in order. I can't do that without making a list, which I can't use. Here is my code. I am away dictionaries cannot be sorted, but i still need help. ``` most_used_words = Counter() zewDict = Counter(most_used_words).most_common(10) newDict = dict(zewDict) keys = newDict.keys() values = newDict.values() msg = ('Here is your breakdown of your most used words: \n\n' 'Word | Times Used' '\n:--:|:--:' '\n' + str(keys[0]).capitalize() + '|' + str(values[0]) + '\n' + str(keys[1]).capitalize() + '|' + str(values[1]) + '\n' + str(keys[2]).capitalize() + '|' + str(values[2]) + '\n' + str(keys[3]).capitalize() + '|' + str(values[3]) + '\n' + str(keys[4]).capitalize() + '|' + str(values[4]) + '\n' + str(keys[5]).capitalize() + '|' + str(values[5]) + '\n' + str(keys[6]).capitalize() + '|' + str(values[6]) + '\n' + str(keys[7]).capitalize() + '|' + str(values[7]) + '\n' + str(keys[8]).capitalize() + '|' + str(values[8]) + '\n' + str(keys[9]).capitalize() + '|' + str(values[9])) r.send_message(user, 'Most Used Words', msg) ``` How would I do it so the msg prints the words in order from most used word on the top to least on the bottom with the correct values for the word? Edit: I know dictionaries cannot be sorted on their own, so can I work around this somehow?
From the [Docs](http://docs.python.org/2/library/collections.html#collections.Counter.most_common): `most_common([n])` > Return a list of the n most common elements and their counts from the > most common to the least. If n is not specified, most\_common() returns > all elements in the counter. Elements with equal counts are ordered > arbitrarily: ``` >>> Counter('abracadabra').most_common(3) [('a', 5), ('r', 2), ('b', 2)] ``` --- Your code can be: ``` from collections import Counter c = Counter(most_used_words) msg = "Here is your breakdown of your most used words:\n\nWords | Times Used\n:--:|:--:\n" msg += '\n'.join('%s|%s' % (k.capitalize(), v) for (k, v) in c.most_common(10)) r.send_message(user, 'Most Used Words', msg) ```
Once you have the `values` it's as simple as: ``` print('Word | Times Used') for e, t in collections.Counter(values).most_common(10): print("%s|%d" % (e,t)) ``` Print something like: ``` Word | Times Used e|4 d|3 a|2 c|2 ```
Organize dictionary by frequency
[ "", "python", "dictionary", "" ]
When selecting a single column from a pandas DataFrame(say `df.iloc[:, 0]`, `df['A']`, or `df.A`, etc), the resulting vector is automatically converted to a Series instead of a single-column DataFrame. However, I am writing some functions that takes a DataFrame as an input argument. Therefore, I prefer to deal with single-column DataFrame instead of Series so that the function can assume say df.columns is accessible. Right now I have to explicitly convert the Series into a DataFrame by using something like `pd.DataFrame(df.iloc[:, 0])`. This doesn't seem like the most clean method. Is there a more elegant way to index from a DataFrame directly so that the result is a single-column DataFrame instead of Series?
As @Jeff mentions there are a few ways to do this, but I recommend using loc/iloc to be more explicit (and raise errors early if you're trying something ambiguous): ``` In [10]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) In [11]: df Out[11]: A B 0 1 2 1 3 4 In [12]: df[['A']] In [13]: df[[0]] In [14]: df.loc[:, ['A']] In [15]: df.iloc[:, [0]] Out[12-15]: # they all return the same thing: A 0 1 1 3 ``` The latter two choices remove ambiguity in the case of integer column names (precisely why loc/iloc were created). For example: ``` In [16]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 0]) In [17]: df Out[17]: A 0 0 1 2 1 3 4 In [18]: df[[0]] # ambiguous Out[18]: A 0 1 1 3 ```
As **Andy Hayden** recommends, utilizing .iloc/.loc to index out (single-columned) dataframe is the way to go; another point to note is how to express the index positions. Use a **listed Index labels/positions** whilst specifying the argument values to index out as Dataframe; failure to do so will return a 'pandas.core.series.Series' **Input:** ``` A_1 = train_data.loc[:,'Fraudster'] print('A_1 is of type', type(A_1)) A_2 = train_data.loc[:, ['Fraudster']] print('A_2 is of type', type(A_2)) A_3 = train_data.iloc[:,12] print('A_3 is of type', type(A_3)) A_4 = train_data.iloc[:,[12]] print('A_4 is of type', type(A_4)) ``` **Output:** ``` A_1 is of type <class 'pandas.core.series.Series'> A_2 is of type <class 'pandas.core.frame.DataFrame'> A_3 is of type <class 'pandas.core.series.Series'> A_4 is of type <class 'pandas.core.frame.DataFrame'> ```
Keep selected column as DataFrame instead of Series
[ "", "python", "pandas", "" ]
I have this table `BarCode`, when I do a in query like this ``` SELECT Barcode FROM BarcodeTable WHERE BarCode IN ( '53-3-1', '51-1-1', '51-2-1', '50-10-1', '50-8-1', '51-4-1', '50-1-1' ) ``` why do I get ``` 53-3-1 50-1-1 50-8-1 50-10-1 51-1-1 51-2-1 51-4-1 ``` Instead of ???? ``` 53-3-1 51-1-1 51-2-1 50-10-1 50-8-1 51-4-1 50-1-1 ``` Why does SQL changing the order of barcodes? Because of the way SQL is ordering them (by itself) is causing an error in a dynamic query that is being executed by `sp_executesql` that relies on the order I send the input. Why is the SQL changing input order by itself? *51-4-1*
`IN` does not imply any ordering of the final result: This SQL: ``` ... WHERE x IN (1, 2, 3) ``` Should produce the same results as this: ``` ... WHERE x IN (3, 2, 1) ``` The `IN` clause only filters the rows, basically, for each row you can think of it as just asking "should this row be part of the resultset?". To get a specific ordering, you're going to have to add an `ORDER BY` clause to your statement. To get a specific order, you can use an inline table: ``` SELECT Barcode FROM BarcodeTable inner join (values ('*53-3-1*', 1), ('*51-1-1*', 2), ('*51-2-1*', 3), ('*50-10-1*', 4), ('*50-8-1*', 5), ('*51-4-1*', 6), ('*50-1-1*', 7)) as DummyTable (value, sortorder) on BarCode = value order by sortorder ``` This will order by the sortorder field of the DummyTable. You can not trick SQL Server into using the `IN` clause itself for ordering, you *have* to add an `ORDER BY` in some form or another. You can also use a `CASE WHEN ...` expression to produce the sort order value depending on the barcode: ``` SELECT Barcode FROM BarcodeTable WHERE BarCode IN ('*53-3-1*', '*51-1-1*', '*51-2-1*', '*50-10-1*', '*50-8-1*', '*51-4-1*', '*50-1-1*') ORDER BY CASE BarCode WHEN '*53-3-1*' THEN 1 WHEN '*51-1-1*' THEN 2 WHEN '*51-2-1*' THEN 3 WHEN '*50-10-1*' THEN 4 WHEN '*50-8-1*' THEN 5 WHEN '*51-4-1*' THEN 6 WHEN '*50-1-1*' THEN 7 END ``` As the comment by Lieven suggests, there's an alternative on SQL Server 2005, using the `WITH` clause: ``` WITH DummyTable (value, sortorder) AS ( SELECT '*53-3-1*' AS value, 1 AS sortorder UNION ALL SELECT '*51-1-1*', 2 UNION ALL SELECT '*51-2-1*', 3 UNION ALL SELECT '*50-10-1*', 4 UNION ALL SELECT '*50-8-1*', 5 UNION ALL SELECT '*51-4-1*', 6 UNION ALL SELECT '*50-1-1*', 7 ) SELECT Barcode FROM BarcodeTable inner join DummyTable on BarCode = value order by sortorder ``` (note, I'm no expert in the usage of `WITH`, the above was just something I hacked together, but seems to work)
Unless you specify an order by statement, you should not rely on particular order in the output. You must provide an `ORDER BY` to get what you expect.
Why is IN operator showing wrong results?
[ "", "sql", "sql-server-2008", "t-sql", "sql-server-2005", "" ]
In python, I have a list that should have **one and only one** truthy value (that is, `bool(value) is True`). Is there a clever way to check for this? Right now, I am just iterating across the list and manually checking: ``` def only1(l) true_found = False for v in l: if v and not true_found: true_found=True elif v and true_found: return False #"Too Many Trues" return true_found ``` This seems inelegant and not very pythonic. Is there a cleverer way to do this?
The most verbose solution is not always the most unelegant solution. Therefore I add just a minor modification (in order to save some redundant boolean evaluations): ``` def only1(l): true_found = False for v in l: if v: # a True was found! if true_found: # found too many True's return False else: # found the first True true_found = True # found zero or one True value return true_found ``` --- Here are some timings for comparison: ``` # file: test.py from itertools import ifilter, islice def OP(l): true_found = False for v in l: if v and not true_found: true_found=True elif v and true_found: return False #"Too Many Trues" return true_found def DavidRobinson(l): return l.count(True) == 1 def FJ(l): return len(list(islice(ifilter(None, l), 2))) == 1 def JonClements(iterable): i = iter(iterable) return any(i) and not any(i) def moooeeeep(l): true_found = False for v in l: if v: if true_found: # found too many True's return False else: # found the first True true_found = True # found zero or one True value return true_found ``` My output: ``` $ python -mtimeit -s 'import test; l=[True]*100000' 'test.OP(l)' 1000000 loops, best of 3: 0.523 usec per loop $ python -mtimeit -s 'import test; l=[True]*100000' 'test.DavidRobinson(l)' 1000 loops, best of 3: 516 usec per loop $ python -mtimeit -s 'import test; l=[True]*100000' 'test.FJ(l)' 100000 loops, best of 3: 2.31 usec per loop $ python -mtimeit -s 'import test; l=[True]*100000' 'test.JonClements(l)' 1000000 loops, best of 3: 0.446 usec per loop $ python -mtimeit -s 'import test; l=[True]*100000' 'test.moooeeeep(l)' 1000000 loops, best of 3: 0.449 usec per loop ``` As can be seen, the OP solution is significantly better than most other solutions posted here. As expected, the best ones are those with short circuit behavior, especially that solution posted by Jon Clements. At least for the case of two early `True` values in a long list. Here the same for no `True` value at all: ``` $ python -mtimeit -s 'import test; l=[False]*100000' 'test.OP(l)' 100 loops, best of 3: 4.26 msec per loop $ python -mtimeit -s 'import test; l=[False]*100000' 'test.DavidRobinson(l)' 100 loops, best of 3: 2.09 msec per loop $ python -mtimeit -s 'import test; l=[False]*100000' 'test.FJ(l)' 1000 loops, best of 3: 725 usec per loop $ python -mtimeit -s 'import test; l=[False]*100000' 'test.JonClements(l)' 1000 loops, best of 3: 617 usec per loop $ python -mtimeit -s 'import test; l=[False]*100000' 'test.moooeeeep(l)' 100 loops, best of 3: 1.85 msec per loop ``` I did not check the statistical significance, but interestingly, this time the approaches suggested by F.J. and especially that one by Jon Clements again appear to be clearly superior.
One that doesn't require imports: ``` def single_true(iterable): i = iter(iterable) return any(i) and not any(i) ``` Alternatively, perhaps a more readable version: ``` def single_true(iterable): iterator = iter(iterable) # consume from "i" until first true or it's exhausted has_true = any(iterator) # carry on consuming until another true value / exhausted has_another_true = any(iterator) # True if exactly one true found return has_true and not has_another_true ``` This: * Looks to make sure `i` has any true value * Keeps looking from that point in the iterable to make sure there is no other true value
How can I check that a list has one and only one truthy value?
[ "", "python", "" ]
Making some exercises I don't know how to make this query Having this 2 tables StudentTable(**IDstudent**,....) Exam(**IDexam**,...,student,...,result) where * student in exam references IDstudent in student * resutl has a boolean value example ``` StudentTable IDstudent S0001 S0002 S0003 EXAM IDexam student result 1 S0001 true 2 S0002 true 3 S0002 true 4 S0003 false ``` The query have to show the ID of student with the largest number of true in exam and the number In the case of example S0002 2 I've tried ``` SELECT student, count(1) FROM Exam E join StudentTable S on E.student=S.id_student WHERE result='true' GROUP by student ``` What I have is ``` S0001 1 S0002 2 ``` but I don't know how take the max * How can I do? This is the link to the schema <http://sqlfiddle.com/#!2/895ea/8>
Try this: ``` SELECT student, count(1) FROM Exam E join StudentTable S on E.student=S.id_student WHERE result='true' GROUP by student ORDER by 2 DESC LIMIT 0,1 ``` **LIMIT (N,N)** clause in MySQL is equivalent to **TOP (N)** in T-SQL
One thing I like this query is that it supports duplicate student having the highest number of `true` answer. ``` SELECT a.* FROM StudentTable a INNER JOIN ( SELECT Student FROM Exam WHERE result = 'true' GROUP BY Student HAVING COUNT(*) = ( SELECT COUNT(*) count FROM Exam WHERE result = 'true' GROUP BY Student ORDER BY count DESC LIMIT 1 ) ) b ON a.IDStudent = b.Student ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/1823a/3) * [SQLFiddle Demo (having duplicate highest number of `true` result)](http://sqlfiddle.com/#!2/2b0fa8/2)
find max ID with the largest number of occurrence of an attribute
[ "", "mysql", "sql", "" ]
So, I've got redis working with python -- exciting! I need to store a tuple and retrieve it / parse it later. Construct below isn't working, I think because the returned tuple is quoted -- there is a quote on either end of it. It seems to me that the quotes indicate that it isn't actually a tuple, but rather a string. So does anyone know how to get redis to actually return a working tuple? Thanks! ``` >>> tup1 = ('2011-04-05', 25.2390232323, 0.32093240923490, 25.239502352390) >>> r.lpush('9999', tup1) 1L >>> r.lpop('9999') "('2011-04-05', 25.2390232323, 0.3209324092349, 25.23950235239)" >>> v = r.lpop('9999') >>> test=v[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'NoneType' object has no attribute '__getitem__' ```
You have already `popped` and element from `v` and the element you are trying to retrieve no longer exists. Try this: ``` >>> tup1 = ('2011-04-05', 25.2390232323, 0.32093240923490, 25.239502352390) >>> r.lpush('9999', tup1) >>> v = r.lpop('9999') >>> test = eval(v)[0] ``` Since the return type is a string, you need `eval` to evaluate the type
If you would like to get the tuple back as such, I recommend using 'pickle'. ``` >>> tup1 = ('2011-04-05', 25.2390232323, 0.32093240923490, 25.239502352390) >>> import pickle >>> r.lpush('9999', pickle.dumps(tup1)) 1L >>> v = pickle.loads(r.lpop('9999')) >>> v ('2011-04-05', 25.2390232323, 0.3209324092349, 25.23950235239) >>> type(v) <type 'tuple'> >>> ```
tuple in redis / python: can store, not retrieve
[ "", "python", "redis", "" ]
I have the following error when I trying to send Zipfile content via [suds](https://fedorahosted.org/suds/) method ``` 'ascii' codec can't decode byte 0x8c in position 10: ordinal not in range(128) ``` Here is my code: ``` try: project_archive = open(os.path.join(settings.MEDIA_ROOT, 'zip/project.zip'), "rb") data = project_archive.read() client = Client(settings.UPLOAD_PROJECT_WS_URL) client.service.uploadProject(data) except Exception as e: return HttpResponse(e) else: return HttpResponse("Project was exported") ```
suds doesn't support soap file attachment (not last time I checked, it has been a while). Work around here: <https://fedorahosted.org/suds/attachment/ticket/350/soap_attachments.2.py> or use a different library
Assuming that in the WSDL the argument type is `xsd:base64Binary`, you need to: ``` client.service.uploadProject(base64.b64encode(data)) ``` In my case the sever was written in JAX-WS and the function argument type was `Byte[]` and Base64 worked for me
Python zipfile sending via suds error: "'ascii' codec can't decode byte 0x8c in position 10: ordinal not in range(128)"
[ "", "python", "zip", "ascii", "suds", "" ]
Assuming the following structure: ``` class SetupTestParam(object): def setup_method(self, method): self.foo = bar() @pytest.fixture def some_fixture(): self.baz = 'foobar' ``` I use `SetupTestParam` as a parent class for test classes. ``` class TestSomething(SetupTestParam): def test_a_lot(self, some_fixture): with self.baz as magic: with magic.fooz as more_magic: blah = more_magic.much_more_magic() # repetative bleh ... # not repetative code here assert spam == 'something cool' ``` Now, writing tests gets repetitive (with statement usage) and I would like to write a decorator to reduce the number of code lines. But there is a problem with pytest and the function signature. I found out [library](https://pypi.python.org/pypi/decorator) which should be helpful but I can't manage to get it to work. I made a `classmethod` in my `SetupTestParam` class. ``` @classmethod @decorator.decorator def this_is_decorator(cls, f): def wrapper(self, *args, **kw): with self.baz as magic: with magic.fooz as more_magic: blah = more_magic.much_more_magic() # repetative bleh return f(self, *args) return wrapper ``` After I decorate the `test_a_lot` method, I receive the error `TypeError: transaction_decorator() takes exactly 1 argument (2 given)` Can someone explain me please what am I doing wrong? (I assume there is a problem with `self` from the test method?)
After some tweaking and realizing that I need to pass a parameter to decorator I choosed to write it as a class: ``` class ThisIsDecorator(object): def __init__(self, param): self.param = param # Parameter may vary with the function being decorated def __call__(self, fn): wraps(fn) # [1] def wrapper(fn, fn_self, *args): # [2] fn_self refers to original self param from function fn (test_a_lot) [2] with fn_self.baz as fn_self.magic: # I pass magic to fn_self to make magic accesible in function fn (test_a_lot) with fn_self.magic.fooz as more_magic: blah = self.param.much_more_magic() # repetative bleh return fn(fn_self, *args) return decorator.decorator(wrapper, fn) ``` [1] I use `wraps` to have original `fn` `__name__`, `__module__` and `__doc__`. [2] Params passed to `wrapper` were `self = <function test_a_lot at 0x24820c8> args = (<TestSomething object at 0x29c77d0>, None, None, None, None), kw = {}` so I took out the `args[0]` as a `fn_self`. Original version (without passing a parameter): ``` @classmethod def this_is_decorator(cls, fn): @wraps(fn) def wrapper(fn, fn_self, *args): with fn_self.baz as fn_self.magic: with fn_self.magic.fooz as more_magic: blah = more_magic.much_more_magic() # repetative bleh return fn(fn_self, *args) return decorator.decorator(wrapper,fn) ``` Thanks go to Mike Muller for pointing out the right direction.
Chaining decorators is not the simplest thing to do. One solution might be to separate the two decorators. Keep the `classmethod` but move `decorator.decorator` to the end: ``` @classmethod def this_is_decorator(cls, f): def wrapper(self, *args, **kw): with self.baz as magic: with magic.fooz as more_magic: blah = more_magic.much_more_magic() # repetative bleh return f(self, *args) return decorator.decorator(wrapper, f) ``` Maybe this works for you.
Writing decorator for pytest test method
[ "", "python", "decorator", "pytest", "" ]
Simple question - why when I print the value of the `@len` variable in the query below would I be getting the value 1, instead of 12 (the number of characters in the specified string)? ``` DECLARE @string varchar DECLARE @index int DECLARE @len int DECLARE @char char(1) SET @string = 'content loop' SET @index = 1 SET @len= LEN(@string) print @len ```
Your declaration of `@string` is wrong. You have no length on the `varchar`. Try this: ``` declare @string varchar(255); -- or whatever ``` You just learned that the default in this situation is 1. This is clearly specified in the [documentation](http://msdn.microsoft.com/en-us/library/ms176089.aspx). As a further note, MS SQL seems to make this rather complicated: > When n is not specified in a data definition or variable declaration > statement, the default length is 1. When n is not specified when using > the CAST and CONVERT functions, the default length is 30. The right habit is to always include the length when using `varchar` or `nvarchar`.
You need to give the variable @string an actual length. Print the variable @string and it will probably return 'C'.
Why does SQL LEN function return '1' for a string with several characters?
[ "", "sql", "t-sql", "sql-server-2008-r2", "" ]
I am new to Python objects and have a lot of questions. I need to pass a function to my object, and then evaluate the function. The code is similar to this: ``` from sympy import var class eval: def __init__(self, M): self.M = M def fun(self, x): M = self.M print M(x) x = var('x') ak = eval(x+2) ak.fun(x) ``` This is the error: ``` TypeError Traceback (most recent call last) (ipython-input-1-b7ef311bd1f0> in <module)() 12 13 ak = eval(x+2) ---> 14 ak.fun(x) (ipython-input-1-b7ef311bd1f0) in fun(self, x) 7 def fun(self, x): 8 M = self.M ----> 9 print M(x) 10 11 x = var('x') TypeError: 'Add' object is not callable ```
> I am new to Python objects... Having questions is great, but the objects and classes behind SymPy are quite complex and learning the basics of the [Python object model](http://docs.python.org/2/reference/datamodel.html) before delving in such a library is strongly encouraged. There are many issues with the suggested code: ## Purely language related errors * `eval` is build-in so it is bad style to overwrite it * using old-style classes ## Using SymPy as if it is some language extension SymPy does **not** provide new syntax for creating python functions. Especially, `(x+2)(4)` is **not** going to give you `6`. If you want this just write `myfun = lambda _: _+2; fun(4)` without using SymPy. `x+2` is a SymPy object (`Add(Symbol('x')+Integer(2))`), not some python AST. You can substitute `x` for something else with `(x+2).subs(x,y)` but you can not expect the library to magically know that you have something special in mind for `Symbol('x')` when you write `(x+2)(4)`. You can as well write `blah = Symbol('random_string'); (blah+2)(4)`. ## Minor SymPy errors `var` is a helper function used to create `Symbol` objects, but it is meant for interactive use in the interpreter. Do not use it in library code because as a side effect it injects global variables in the namespace. Just use `Symbol('x')`. # Now about `x+2` being callable In 0.7.2 recursive calling was implemented. What this means is that you can create a SymPy `Expr` tree that contains unevaluated symbolic objects and apply the whole tree on another object, the calls propagating inwards until all unevaluated objects are substituted with evaluated ones. I guess the above description is not clear so here is an example: You want to create a differential operator object `D` which can do the following: ``` >>> op = g(y)*D # where g is Function and y is Symbol >>> op(f(x)) g(y)*f(x).diff(x) ``` The way this works is to go down the tree (`Mul(g(y), D)` in this case), skip evaluated symbolic objects and evaluate unevaluated symbolic objects. Because a lot of SymPy's users start using it before reading about the data model this caused a lot of confusion, so we moved the recursive calling scheme to the `rc` method. In 0.7.3 `(x+2)(4)` will raise errors again.
Your code already can pass a function to the object. Functions are first-class citizens in Python you can pass them as any other object. The issue might be with your `sympy` version. Compare: ``` >>> import sympy >>> sympy.__version__ '0.7.1.rc1' >>> from sympy.abc import x >>> (x + 2)(x) Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: 'Add' object is not callable ``` And: ``` >>> import sympy >>> sympy.__version__ '0.7.2' >>> from sympy.abc import x >>> (x + 2)(x) x + 2 ``` i.e., the same code works on `0.7.2` but it fails on `0.7.1rc1` version.
Functions in SymPy
[ "", "python", "sympy", "" ]
Given the list ``` a = [6, 3, 0, 0, 1, 0] ``` What is the best way in Python to return the list ``` b = [1, 1, 0, 0, 1, 0] ``` where the 1's in b correspond to the non-zero elements in a. I can think of running a for loop and simply appending onto b, but I'm sure there must be a better way.
List comprehension method: ``` a = [6, 3, 0, 0, 1, 0] b = [1 if i else 0 for i in a] print b >>> [1, 1, 0, 0, 1, 0] ``` --- Timing mine, [Dave's method](https://stackoverflow.com/a/16775401/1561176) and [Lattyware's slight alteration](https://stackoverflow.com/questions/16775379/list-of-non-zero-elements-in-a-list-in-python#comment24170675_16775401): ``` $ python3 -m timeit -s "a = [6, 3, 0, 0, 1, 0]" "[1 if i else 0 for i in a]" 1000000 loops, best of 3: 0.838 usec per loop $ python3 -m timeit -s "a = [6, 3, 0, 0, 1, 0]" "[int(i != 0) for i in a]" 100000 loops, best of 3: 2.15 usec per loop $ python3 -m timeit -s "a = [6, 3, 0, 0, 1, 0]" "[i != 0 for i in a]" 1000000 loops, best of 3: 0.794 usec per loop ``` Looks like my method is twice as fast ... I am actually surprised it is *that* much faster. Taking out the `int()` call, however, makes the two essentially the same - but leaves the resulting list full of `Booleans` instead of `Integers` (True/False instead of 1/0)
``` b = [int(i != 0) for i in a] ``` Will give you what you are looking for.
List of non-zero elements in a list in Python
[ "", "python", "list", "" ]
I have a calendar table, that I'm trying to use to deal with some weekend and holiday issues. The structure is simple: ``` CREATE TABLE calendar ( daterank INT, thedate DATE ); ``` The idea is that every record has a daterank, that is used for comparison purposes. Non-holiday weekdays have incremental dateranks, weekends and holidays have a daterank equal to the immediately preceding non-holiday weekday. Setting daterank for non-holiday weekdays turned out to be easier than I thought it would be, but setting the weekends and holidays is more complicated than I thought it would be. A subset of the data: ``` daterank thedate 881 2013-05-21 882 2013-05-22 883 2013-05-23 884 2013-05-24 NULL 2013-05-25 NULL 2013-05-26 885 2013-05-27 886 2013-05-28 887 2013-05-29 888 2013-05-30 889 2013-05-31 NULL 2013-06-01 ``` What I want, in the above case, is to replace the NULLs for 5/25 and 5/26 with 884 (the value for 5/24), the NULL for 6/1 with 889, etc. What doesn't work: ``` UPDATE calendar c1 SET c1.daterank = ( SELECT MAX(c2.daterank) FROM calendar c2 WHERE c2.thedate < c1.thedate AND c2.daterank IS NOT NULL ) WHERE daterank IS NULL ; ``` Any ideas?
You must not use an alias for the table you want to update. ``` UPDATE calendar SET daterank = ( SELECT MAX(c2.daterank) FROM calendar c2 WHERE c2.thedate < calendar.thedate AND c2.daterank IS NOT NULL ) WHERE daterank IS NULL ```
You can use the following to get the result: ``` ;with cte as ( select daterank, thedate from calendar where daterank is null ) update c set c.daterank = d.daterank from cte c cross apply ( select top 1 daterank, thedate from calendar d where d.thedate < c.thedate and d.daterank is not null order by daterank desc ) d; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/ef4ff/4)
UPDATE record with subquery into own table
[ "", "sql", "sql-server", "" ]
Hi this is the table Im using say Loandetails ``` Loanno Balance amount DueDATE 1001045 308731.770000 12/31/99 1001045 2007700.740000 12/31/99 1001045 3087318905.770000 11/01/99 1001045 308731.770000 12/31/99 ``` I have to select `Loanno` and `DueDate` based on the `maximum value of Balance Amount`. `Loanno` is not unique.Please help me out on this.
Try this: ``` SELECT L.Loanno, L.Balanceamount, L.DueDate FROM dbo.Loan L INNER JOIN ( SELECT Loanno, MAX(Balancemount) as MaxBalance FROM dbo.Loan GROUP BY LoanNo ) SUB ON L.Loanno = SUB.Loanno AND L.Balanceamount = SUB.MaxBalance ``` The sub query returns the `Maximum balance` for each LoanNo (regardless of date) When joined back to your original table you are left with the LoanNo, Maximum Balance and Date at which this is Due. Ok just tested the query below in MS Acccess and it works just fine, substitute `Table1` with your actual table name: ``` SELECT T.LoanNo, T.DueDate, T.BalanceAmount FROM Table1 As T INNER JOIN ( SELECT T.Loanno, Max([T.Balanceamount]) AS MaxBalance FROM Table1 as T GROUP BY T.Loanno) SUB ON T.LoanNo = SUB.LoanNo AND T.BalanceAmount = SUB.MaxBalance ```
In SQLServer2005+ you can use [ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734%28v=sql.105%29.aspx) ranking function ``` ;WITH cte AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY Loanno ORDER BY [Balance Amount] DESC) AS rn FROM dbo.your_tableName ) SELECT Loanno, DueDate FROM cte WHERE rn = 1 ```
select two columns based on the maximum value of the other column in SQL
[ "", "sql", "ms-access", "max", "" ]
I have an SQL statement which contains a subquery embedded in an `ARRAY()` like so: ``` SELECT foo, ARRAY(SELECT x from y) AS bar ... ``` The query works fine, however in the psycopg2 results cursor the array is returned as a string (as in `"{1,2,3}"`), not a list. My question is, what would be the best way to convert strings like these into python lists?
It works for me without the need for parsing: ``` import psycopg2 query = """ select array(select * from (values (1), (2)) s); """ conn = psycopg2.connect('dbname=cpn user=cpn') cursor = conn.cursor() cursor.execute(query) rs = cursor.fetchall() for l in rs: print l[0] cursor.close() conn.close() ``` Result when executed: ``` $ python stackoverflow_select_array.py [1, 2] ``` ## Update You need to register the uuid type: ``` import psycopg2, psycopg2.extras query = """ select array( select * from (values ('A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid), ('A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid) )s ); """ psycopg2.extras.register_uuid() conn = psycopg2.connect('dbname=cpn user=cpn') cursor = conn.cursor() cursor.execute(query) rs = cursor.fetchall() for l in rs: print l[0] cursor.close() conn.close() ``` Result: ``` $ python stackoverflow_select_array.py [UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'), UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')] ```
If every result cursor ARRAY is of the format `'{x,y,z}'`, then you can do this to strip the string of the braces and split it into a list by comma-delimiter: ``` >>> s = '{1,2,3}' >>> s '{1,2,3}' >>> l = s.rstrip('}').lstrip('{').split(',') >>> l ['1', '2', '3'] >>> >>> s = '{1,2,3,a,b,c}' >>> s '{1,2,3,a,b,c}' >>> l = s.rstrip('}').lstrip('{').split(',') >>> l ['1', '2', '3', 'a', 'b', 'c'] ```
Return PostgreSQL UUID array as list with psycopg2
[ "", "python", "postgresql", "type-conversion", "uuid", "psycopg2", "" ]
I have a set like (669256.02, 6117662.09, 669258.61, 6117664.39, 669258.05, 6117665.08) which I need to iterate over, like ``` for x,y in (669256.02, 6117662.09, 669258.61, 6117664.39, 669258.05, 6117665.08) print (x,y) ``` which would print ``` 669256.02 6117662.09 669258.61 6117664.39 669258.05 6117665.08 ``` im on Python 3.3 btw
You can use an iterator: ``` >>> lis = (669256.02, 6117662.09, 669258.61, 6117664.39, 669258.05, 6117665.08) >>> it = iter(lis) >>> for x in it: ... print (x, next(it)) ... 669256.02 6117662.09 669258.61 6117664.39 669258.05 6117665.08 ```
``` >>> nums = (669256.02, 6117662.09, 669258.61, 6117664.39, 669258.05, 6117665.08) >>> for x, y in zip(*[iter(nums)]*2): print(x, y) 669256.02 6117662.09 669258.61 6117664.39 669258.05 6117665.08 ```
iterating over two values of a list at a time in python
[ "", "python", "list", "" ]
Since python is bundled with the Tide SDK, I can't figure out how to use access external modules. I've tried copying the module folder "Lib/site-packages/YourModuleHere" to the tide SDK directory, and this suggestion here: [TIdeSDK Python module import](https://stackoverflow.com/questions/14115173/tidesdk-python-module-import) but with no success. The module I'm trying to use is <https://github.com/burnash/gspread> Any ideas? Thanks...
You may try <http://www.py2exe.org/index.cgi/Tutorial> to convert your python code to exe with all needed modules then use `Ti.Process.createProcess()` to call your exe
In current version of TideSDK, loading custom python modules is not supported.It loads default set of Python modules compiled within the SDK.
Python & Tide SDK - import external module?
[ "", "python", "module", "external", "tidesdk", "" ]
I have the following structure: ``` [ { u'123456': {'name': "Bill"}, u'234567': {'name': "Dave"}, u'345678': {'name': "Tom"} }, ] ``` During a for loop new items are added to the list using the extend function. Unfortunately this results in the following structure: ``` [ { u'123456': {'name': "Bill"}, u'234567': {'name': "Dave"}, u'345678': {'name': "Tom"} }, { u'555555': {'name': "Steve"}, u'666666': {'name': "Michael"}, u'777777': {'name': "George"} } ] ``` The intended result is actually a flat structure such in the following: ``` [ { u'123456': {'name': "Bill"}, u'234567': {'name': "Dave"}, u'345678': {'name': "Tom"}, u'555555': {'name': "Steve"}, u'666666': {'name': "Michael"}, u'777777': {'name': "George"} } ] ``` Is it possible to append to the list so that the structure gets built in a flat way. **or** Is it possible to flatten after the loop has finished?
If your list is named `l` you could use `l[0].update(new_dict)`. Example: ``` l = [{u'123456': {'name': "Bill"}}] l[0].update({u'234567': {'name': "Dave"}}) print(l) ``` Nice formatted output is: ``` [ { u'123456': {'name': 'Bill'}, u'234567': {'name': 'Dave'} } ] ```
Where you currently have something like this: ``` mylist.extend(newdict) ``` You should use this: ``` mylist[0].update(newdict) ```
Appending/Merging in Python
[ "", "python", "" ]
Sample Data: ``` +===========================================================================+ |NoBoxes | Carrier | ProcessDateTime | Errored | Voided | TrackingNumber | +===========================================================================+ | 2 | UPS | 5/22/2013 8:14 | 0 | 0 | 1Z1234567891234567| +---------------------------------------------------------------------------+ | | UPS | | 0 | 1 | 1Z1234567891234567| +---------------------------------------------------------------------------+ | 5 | UPS | 5/22/2013 8:22 | 1 | 0 | | +---------------------------------------------------------------------------+ | 7 | UPS | 5/22/2013 8:14 | 0 | 0 | 1Z9876543210987654| +---------------------------------------------------------------------------+ | 1 | UPS | 5/22/2013 8:22 | 0 | 0 | 1Z1472583691472583| +---------------------------------------------------------------------------+ | 1 | FedEx | 5/22/2013 8:14 | 0 | 0 | xxxxxxxxxxxxxxxx | +---------------------------------------------------------------------------+ | 8 | FedEx | 5/22/2013 8:22 | 0 | 0 | yyyyyyyyyyyyyyyy | +---------------------------------------------------------------------------+ | 3 | USPS | 5/22/2013 8:14 | 0 | 0 | zzzzzzzzzzzzzzzz | +---------------------------------------------------------------------------+ | 4 | USPS | 5/22/2013 8:22 | 0 | 0 | aaaaaaaaaaaaaaaa | +---------------------------------------------------------------------------+ | 7 | UPS | 5/22/2013 8:14 | 0 | 0 | 1Z9638527411012396| +---------------------------------------------------------------------------+ | 9 | UPS | 5/22/2013 8:22 | 0 | 0 | 1Z4561591981655445| +---------------------------------------------------------------------------+ ``` Now with a table like this, how can I get the sum of `NoBoxes` Where `Carrier` = UPS, `ProcessDateTime` = Today, `Errored` = 0, and `TrackingNumber` Having Count = 1? The duplicate Tracking Numbers represent a shipment that was voided. I don't want to sum `Errored` shipments as those didn't ship. I have tried about 10 different statements and nothing can seem to get me where I need to be. The issue I ***Believe*** is that the voided rows do not contain a `ProcessDateTime`. So when I use something like: ``` SELECT Sum(NoBoxes) FROM Info WHERE (Carrier='UPS') AND (ProcessDateTime>{ts '2013-05-23 00:00:00'} And ProcessDateTime<{ts '2013-05-24 00:00:00'}) AND (Errored=0) GROUP BY TrackingNumber HAVING (Count(*)=1) ``` It still returns `TrackingNumber` that have been voided because the query doesn't contain any rows with null `ProcessDateTime` so then I tried: ``` SELECT Sum(NoBoxes), ProcessDateTime FROM Info WHERE ((Carrier='UPS') AND (Errored=0)) OR ((Carrier='UPS') AND (ProcessDateTime Is Null) AND (Errored=0)) GROUP BY ProcessDateTime, TrackingNumber HAVING (Count(*)=1) ``` But this still doesn't do the job it just returns everything. Also tried `HAVING (Count(TrackingNumber)=1)` but didn't seem that did anything. I just can't figure out how to get rid of all duplicate tracking numbers, then return the sum of all `UPS` with a `ProcessDateTime` value within criteria. Because it seems the only way to get rid of the duplicates is to not use `ProcessDateTime` at all, but then I have no way of telling what the `ProcessDateTime` is, and I need to know in order to filter for specific date. I kind of understand that I will probably have to do something like: ``` (Select TrackingNumber From Info Where Carrier = 'UPS' And Errored = 0 Group By TrackingNumber HAVING Count(*) = 1) As A ``` Then do something along the lines of: ``` Select Sum(NoBoxes) As Total From Info Join A On Info.TrackingNumber = A.TrackingNumber Where Info.ProcessDateTime>{ts '2013-05-23 00:00:00'} And Info.ProcessDateTime<{ts '2013-05-24 00:00:00'} ``` But I am simply not knowledgeable about this to get all the right order and syntax correct on such a Query. Given the table provided I would like a single Row returned with a single Column with the Value of `24`
After reading over my own question I realized how to do this using exactly the same sub queries I though I should, this worked exactly as I hoped. ``` Select Sum(NoBoxes) As Total From Info Join (Select TrackingNumber From Info Where Carrier = 'UPS' And Errored = 0 Group By TrackingNumber HAVING Count(*) = 1) As A On Info.TrackingNumber = A.TrackingNumber Where Info.ProcessDateTime>{ts '2013-05-22 00:00:00'} And Info.ProcessDateTime<{ts '2013-05-23 00:00:00'} ```
``` Select Sum( NoBoxes ) From Info As I Where Carrier = 'UPS' And ProcessDateTime >= '2013-05-22 00:00:00' And ProcessDateTime < '2013-05-23 00:00:00' And Errored = 0 And Voided = 0 And Not Exists ( Select 1 From Info As I1 Where I1.TrackingNumber = I.TrackingNumber And Voided <> 0 ) ``` One item I noted is that the date ranges in your sample queries do not overlap with the sample data you provided. Below is a link to the SQL Fiddle version. You did not specify the database and version so I used SQL Server, however the above solution should work in pretty much all database products. [SQL Fiddle version](http://www.sqlfiddle.com/#!3/dc21b/15)
How to get Sum of Column, Where another column has Count = 1
[ "", "sql", "select", "group-by", "distinct", "" ]
I am trying to learn Python and can, for the life of it, not figure out, why this: ``` i = raw_input("enter a number") if int(i): print "yes" else: print "false" ``` would not return true if i == "0" Background: I am trying to implement Union-Find Algorithm. Everything works fine, but when I try to connect two points and one is 0 it won't let me through the control. (Python 2.7)
Python types has boolean value, defined in [special methods](http://docs.python.org/2.7/library/stdtypes.html#truth-value-testing). in particular, `0`, `None`, `False`, `""` (and any other empty sequence) are false. Obviously, ``` >>> int("0") 0 ``` What's more, the value of `False` is 0, and the value of `True` is 1, for most purposes (except for their representation as strings, and their `type`, which is `bool`): ``` >>> 0 == False True ```
I think you meant `i.isdigit()` instead of `int(i)`.
Python why does int("0") return false
[ "", "python", "validation", "int", "string-conversion", "" ]
Why does this code return `'static PyObject*\npy_\x00(void)\n{'`? I thought it should look for the first group and replace it. Also, I don't see where the `\x00` comes from. ``` re.sub(r'def\s+([a-zA-Z_][a-zA-Z_0-9]*)\s*\(\s*\):',r'static PyObject*\npy_\0(void)\n{','def myfunc():') ```
Looks like the example copied from the [docs for re](http://docs.python.org/2/library/re.html), except you changed one piece. You have: ``` r'static PyObject*\npy_\0(void)\n{' ``` It should be (use \1 for the first group): ``` r'static PyObject*\npy_\1(void)\n{' ``` When you use `\0`, that is interpreted as the escape for null `\x00`. If you really want group 0 (the entire substring matched by the re), you need to use `\g<0>`.
`\0` does not reference the matched pattern. It should be `\g<0>` ``` r'static PyObject*\npy_\g<0>(void)\n{' ``` This results in ``` static PyObject*\npy_def myfunc():(void)\n{ ``` If you want to replace the first captured group, you could use `\g<1>`, but `\1` will also work.
Group in Python regex substitution
[ "", "python", "regex", "" ]
What's the next best option for database-agnostic full-text search for Django without Haystack? I have a model like: ``` class Paper(models.Model): title = models.CharField(max_length=1000) class Person(models.Model): name = models.CharField(max_length=100) class PaperReview(models.Model): paper = models.ForeignKey(Paper) person = models.ForeignKey(Person) ``` I need to search for papers by title and reviewer name, but I also want to search from the perspective of a person and find which papers they have and haven't reviewed. With Haystack, it's trivial to implement a full-text index to search by title and name fields, but as far as I can tell, there's no way to do the "left outer join" necessary to find papers without a review by a specific person.
I ended up using [djorm-ext-pgfulltext](https://pypi.python.org/pypi/djorm-ext-pgfulltext), which provides a simple Django interface for PostgreSQL's built-in full text search features.
Haystack is just a wrapper that exposes a few different search engine backends: * Solr * ElasticSearch * Whoosh * Xapian There might be other backends as well available as plugins. So the real question here is, is there a search backend that gives me the desired functionality, and does haystack expose that functionality? The answer to that is, you can probably use elasticsearch\*, but note the asterix. Generally, when creating a search index, it's a good idea to think about the documents in the same way you might if you were creating a no-rel database and you want those documents to be as flat as possible. So one possibility might be to have an array of char fields on a paperreview index. The array would contain all of the related foreign key references. Another might be to use "nested documents" in elasticsearch. And lastly, to use "parent/child documents" in elasticsearch. You can still use haystack for indexing, with some hacking, but you will probably want to use one of the raw backends directly, such as pyelasticsearch or pyes. * <http://www.elasticsearch.org/guide/reference/mapping/nested-type/> * <http://www.elasticsearch.org/guide/reference/mapping/parent-field/> * <http://pyelasticsearch.readthedocs.org/en/latest/> * <http://pyes.readthedocs.org/en/latest/>
Efficient Django full-text search without Haystack
[ "", "python", "django", "django-haystack", "" ]
I would like to check a date value in my SQL query. If a date is equal to a predefined date then do not print anything, ELSE print the existing date value. How can I write it correctly in order to take the desired date value ? I have the following query: ``` (SELECT (CASE WHEN (PaymentsMade.PaymentDate = '09/09/1987') THEN ' ' ELSE PaymentsMade.PaymentDate END) ) as dateOfPayment ``` When I run this query it works correctly when the date is equal to '09/09/1987' , whereas when the date is not equal to '09/09/1987' it prints '01/01/1900'. How can I retrieve the dates values that are not equal to the predefined date '09/09/1987'? Any advice would be appreciated. Thanks
The CASE clause needs to return a consistently-typed value, so it is implicitly converting a space to a date (which is evaluated as 1 Jan 1900). You have two choices: * select a null instead of a blank space. * explicitly cast the date in the `else` condition to a string. Here's an (implicit) example of the former: ``` SELECT (CASE WHEN PaymentsMade.PaymentDate <> '09/09/1987' THEN PaymentsMade.PaymentDate END) as dateOfPayment ```
Use NULL, not empty string An empty string is cast to zero implicitly, which is '01/01/1900' ``` SELECT CAST('' AS datetime) ```
If/then/else in SQL Query
[ "", "sql", "t-sql", "if-statement", "case", "" ]
This is strange. I have mix of public as well as private files. I want normal urls in public files, and signed urls in private files. I tried to change `AWS_QUERYSTRING_AUTH to False` as I see by default, it's True in django-storages. But, when I change it, my private files url is not signed (thus not accessible). May be I am missing something here. What can be solution? Thanks in advance.
`AWS_QUERYSTRING_AUTH` sets the default behavior, but you can override it when you create an instance of `S3BotoStorage`, by passing in an additional argument to the initializer: ``` S3BotoStorage(bucket="foo", querystring_auth=False) ``` So if you have one bucket private and another bucket public, you can set the `querystring_auth` argument appropriately and get your desired behavior.
put this in your `settings.py` ``` AWS_QUERYSTRING_AUTH = False ```
Why does S3 (using with boto and django-storages) give signed url even for public files?
[ "", "python", "django", "amazon-s3", "boto", "django-storage", "" ]
I'm working on a game which involves vehicles at some point. I have a MySQL table named "vehicles" containing the data about the vehicles, including the column "plate" which stores the License Plates for the vehicles. Now here comes the part I'm having problems with. I need to find an unused license plate before creating a new vehicle - it should be an alphanumeric 8-char random string. How I achieved this was using a while loop in Lua, which is the language I'm programming in, to generate strings and query the DB to see if it is used. However, as the number of vehicles increases, I expect this to become even more inefficient it is right now. Therefore, I decided to try and solve this issue using a MySQL query. The query I need should simply generate a 8-character alphanumeric string which is not already in the table. I thought of the generate&check loop approach again, but I'm not limiting this question to that just in case there's a more efficient one. I've been able to generate strings by defining a string containing all the allowed chars and randomly substringing it, and nothing more. Any help is appreciated.
This problem consists of two very different sub-problems: * the string must be seemingly random * the string must be unique While randomness is quite easily achieved, the uniqueness without a retry loop is not. This brings us to concentrate on the uniqueness first. Non-random uniqueness can trivially be achieved with `AUTO_INCREMENT`. So using a uniqueness-preserving, pseudo-random transformation would be fine: * Hash has been suggested by @paul * AES-encrypt fits also * But there is a nice one: `RAND(N)` itself! A sequence of random numbers created by the same seed is guaranteed to be * reproducible * different for the first 8 iterations * if the seed is an `INT32` So we use @AndreyVolk's or @GordonLinoff's approach, but with a **seeded** `RAND`: e.g. Assumin `id` is an `AUTO_INCREMENT` column: ``` INSERT INTO vehicles VALUES (blah); -- leaving out the number plate SELECT @lid:=LAST_INSERT_ID(); UPDATE vehicles SET numberplate=concat( substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@lid)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed:=round(rand(@seed)*4294967296))*36+1, 1), substring('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', rand(@seed)*36+1, 1) ) WHERE id=@lid; ```
I woudn't bother with the likelihood of collision. Just generate a random string and check if it exists. If it does, try again and you shouldn't need to do it more that a couple of times unless you have a huge number of plates already assigned. Another solution for generating an 8-character long pseudo-random string in pure (My)SQL: ``` SELECT LEFT(UUID(), 8); ``` You can try the following (pseudo-code): ``` DO SELECT LEFT(UUID(), 8) INTO @plate; INSERT INTO plates (@plate); WHILE there_is_a_unique_constraint_violation -- @plate is your newly assigned plate number ``` --- Since this post has received a unexpected level of attention, let me highlight [ADTC's comment](https://stackoverflow.com/questions/16737910/generating-a-random-unique-8-character-string-using-mysql/16738332#comment69315008_16738332) : the above piece of code is quite dumb and produces sequential digits. For slightly less stupid randomness try something like this instead : ``` SELECT LEFT(MD5(RAND()), 8) ``` And for true (cryptograpically secure) randomness, use `RANDOM_BYTES()` rather than `RAND()` (but then I would consider moving this logic up to the application layer).
Generating a random & unique 8 character string using MySQL
[ "", "mysql", "sql", "" ]
I have .txt file(example): > A professional is a person who is engaged in a certain activity, or > occupation, for gain or compensation as means of livelihood; such as a > permanent career, not as an amateur or pastime. Due to the personal > and confidential nature of many professional services, and thus the > necessity to place a great deal of trust in them, most professionals > are subject to strict codes of conduct enshrining rigorous ethical and > moral obligations. How to count how many times there is the word "professional"? (using NLTK - is the best option?) ``` text_file = open("text.txt", "r+b") ```
I have changed my answer to better reflect your wishes: ``` from nltk import word_tokenize with open('file_path') as f: content = f.read() # we will use your text example instead: content = "A professional is a person who is engaged in a certain activity, or occupation, for gain or compensation as means of livelihood; such as a permanent career, not as an amateur or pastime. Due to the personal and confidential nature of many professional services, and thus the necessity to place a great deal of trust in them, most professionals are subject to strict codes of conduct enshrining rigorous ethical and moral obligations." def Count_Word(word, data): c = 0 tokens = word_tokenize(data) for token in tokens: token = token.lower() # this plural check is dangerous, if trying to find a word that ends with an 's' token = token[:-1] if token[-1] == 's' else token if token == word: c += 1 return c print Count_Word('professional', content) >>> 3 ``` --- Here is a modified version of the method: ``` def Count_Word(word, data, leading=[], trailing=["'s", "s"]): c = 0 tokens = word_tokenize(data) for token in tokens: token = token.lower() for lead in leading: if token.startswith(lead): token = token.partition(lead)[2] for trail in trailing: if token.endswith(trail): token = token.rpartition(trail)[0] if token == word: c += 1 return c ``` I have added to optional arguments which are lists of leading or trailing parts of the word that you want to trim in order to find it... At the moment I only put a default `'s` or `s` . But if you find that others will suit you you can always add them.. If the lists start getting to long, you can make them constants.
Can be solved in one line (plus import): ``` >>> from collections import Counter >>> Counter(w.lower() for w in open("text.txt").read().split())['professional'] 2 ```
Counting words in a text file
[ "", "python", "nltk", "" ]
I started learning Python few weeks ago (with no previous knowledge of it nor programming). I want to create a definition which will for given dictionary as an argument return a tuple consisted of two lists - one where there are only keys of dictionary, and the other where there are only values of the given dictionary. Basically the code will look like this: ``` """Iterate over the dictionary named letters, and populate the two lists so that keys contains all the keys of the dictionary, and values contains all the corresponding values of the dictionary. Return this as a tuple in the end.""" def run(dict): keys = [] values = [] for key in dict.keys(): keys.append(key) for value in dict.values(): values.append(value) return (keys, values) print run({"a": 1, "b": 2, "c": 3, "d": 4}) ``` This code worked perfectly (it's not my solution though). But what if I do not want to use the *.keys()* and *.values()* methods? In that case, I tried using something like this, but I got "unhashable type: 'list'" error message: ``` def run(dict): keys = [] values = [] for key in dict: keys.append(key) values.append(dict[keys]) return (keys, values) print run({"a": 1, "b": 2, "c": 3, "d": 4}) ``` What seems to be the problem?
You are trying to use the whole `keys` list as a key: ``` values.append(dict[keys]) ``` Perhaps you meant to use `dict[key]` instead? A `list` is a mutable type, and cannot be used as a key in a dictionary (it could change in-place making the key no longer locatable in the internal hash table of the dictionary). Alternatively, loop over the `.items()` sequence: ``` for key, value in dct.items(): keys.append(key) values.append(value) ``` Please don't use `dict` as a variable name; you are shadowing the built-in type by doing that.
Another simpler (less chance for bugs) way to write your function ``` def run(my_dict): return zip(*my_dict.items()) ```
Python error: unhashable type: 'list'
[ "", "python", "dictionary", "tuples", "" ]
I have a question that I am sure has been on the mind of every intermediate-level Python programmer at some point: that is, how to fix/prevent/avoid/work around those ever-so-persistent and equally frustrating `NameErrors`. I'm not talking about actual errors (like typos, etc.), but a bizarre problem that basically say a global name was not defined, when in reality it was defined further down. For whatever reason, Python seems to be extremely needy in this area: every single variable absolutely positively has to hast to be defined above and only above anything that refers to it (or so it seems). For example: ``` condition = True if condition == True: doStuff() def doStuff(): it_worked = True ``` Causes Python to give me this: ``` Traceback (most recent call last): File "C:\Users\Owner\Desktop\Python projects\test7.py", line 4, in <module> doStuff() NameError: name 'doStuff' is not defined ``` However, the name WAS defined, just not where Python apparently wanted it. So for a cheesy little function like `doStuff()` it's no big deal; just cut and paste the function into an area that satisfies the system's requirement for a certain order. But when you try to actually design something with it it makes organizing code practically impossible (I've had to "un-organize" tons of code to accomodate this bug). I have never encountered this problem with any of the other languages I've written in, so it seems to be specific to Python... but anyway I've researched this in the docs and haven't found any solutions (or even potential leads to a possible solution) so I'd appreciate any tips, tricks, workarounds or other suggestions. It may be as simple as learning a specific organizational structure (like some kind of "Pythonic" and very strategic approach to working around the bug), or maybe just use a lot of `import` statements so it'll be easier to organize those in a specific order that will keep the system from acting up...
Avoid writing code (other than declarations) at top-level, use a `main()` function in files meant to be executed directly: ``` def main(): condition = True if condition: do_stuff() def do_stuff(): it_worked = True if __name__ == '__main__': main() ``` This way you only need to make sure that [the `if..main` construct](http://docs.python.org/2/library/__main__.html) follows the `main()` function (e.g. place it at the end of the file), the rest can be in any order. The file will be fully parsed (and thus all the names defined in the module can be resolved) by the time `main()` is executed.
As a rule of thumb: For most cases define all your functions first and then use them later in your code.
Seeking general advice on how to prevent relentless "NameErrors" in Python
[ "", "python", "namespaces", "function", "" ]
Is there a method in numpy for calculating the Mean Squared Error between two matrices? I've tried searching but found none. Is it under a different name? If there isn't, how do you overcome this? Do you write it yourself or use a different lib?
You can use: ``` mse = ((A - B)**2).mean(axis=ax) ``` Or ``` mse = (np.square(A - B)).mean(axis=ax) ``` * with `ax=0` the average is performed along the row, for each column, returning an array * with `ax=1` the average is performed along the column, for each row, returning an array * with omitting the ax parameter (or setting it to `ax=None`) the average is performed element-wise along the array, returning a scalar value
This isn't part of `numpy`, but it will work with `numpy.ndarray` objects. A `numpy.matrix` can be converted to a `numpy.ndarray` and a `numpy.ndarray` can be converted to a `numpy.matrix`. ``` from sklearn.metrics import mean_squared_error mse = mean_squared_error(A, B) ``` See [Scikit Learn mean\_squared\_error](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) for documentation on how to control axis.
Mean Squared Error in Numpy?
[ "", "python", "arrays", "numpy", "mean", "mean-square-error", "" ]
I used the following code to compute square root in Python: ``` from math import * #find average of two numbers def average(a, b): print "the average is",(a + b) / 2.0,"\n" return (a + b) / 2.0 #guess not good enouhgh def improve(guess, x): print "improved guess is ",average(guess, x/guess),"\n" return average(guess, x/guess) #As long as the guess is not good enough, keep on improving the guess def good_enough(guess, x): d = abs(guess*guess - x) print d," is the currentguess\n" return (d < 0.001) #As long as the guess is not good enough, keep on improving the guess def square_root(guess, x): while(not good_enough(guess, x)): guess = improve(guess, x) print "current guess is ",guess,"\n" return guess if __name__ == "__main__": x = square_root(5, 33) print "final answer is",x ``` The result of the square root of 33 was :**5.74456521739** I used the square root function in excel 2003: ``` =sqrt(33) ``` setting result at 15 decimal places and got the result:5.744562646538030 I then used: ``` math.sqrt(33) ``` from the standard Python 2.7.2 math library and got the result: **5.74456264654** I then increased accuracy of my Program: return (d < 0.000001) and got return **5.74456264654** the same as the my program The question is why is Python rounding and Excel 2003 is rounding in different places. How can someone know which one is better to use in a critical situation? For example friends who are writing maths equations that need a high degree of accuracy in physics for a PHD thesis?
Python and Excel both uses double precision floating point, which the precision depends on underlying C library, and it normally uses the hardware floating point unit. Common FPU implements IEEE-754 double. Having said that, I suspect you're using `print` statement which does the formatting. See the difference below. ``` >>> import math >>> math.sqrt(33) 5.744562646538029 >>> print math.sqrt(33) 5.74456264654 ```
You can use the `decimal` module to achieve this level of accuracy: ``` >>> import decimal >>> decimal.getcontext().precision = 15 >>> decimal.Decimal(33).sqrt() Decimal('5.744562646538028659850611468') ``` In regards to floating point inaccuracies: <http://docs.python.org/2/tutorial/floatingpoint.html>
Python Excel floating point big difference
[ "", "python", "excel", "floating-point", "" ]
I have an HTML tag like this: ``` <ul class="clearfix"> "<li><span class="bold-title">Starts:</span> October 2013</li>" </ul> ``` I want to extract "October 2013". My code is: ``` start_date = articl.find('ul', class_='clearfix').find('li').text.strip() ``` ...which extracts "Starts: October 2013". How is it possible to take only the date?
With a regex: ``` import re ss = ''' <ul class="clearfix"> <li><span class="bold-title">Starts:</span> October 2013</li>" </ul> blah blah <ul class="clearfix"> <li><<a href="/derives/certificats/"> November 2014 </li>" </ul> ''' regx = re.compile('<ul +class="clearfix">.+?' '<li>.*? *([^<>]+?) *</li>', re.DOTALL) print regx.findall(ss) # prints ['October 2013', 'November 2014'] ```
Use [`.contents`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#contents-and-children), which returns a list: ``` >>> from bs4 import BeautifulSoup as BS >>> html = (stuff above) >>> soup = BS(html) >>> print soup.find('li').contents[1].strip() October 2013 ```
How to extract specific text inside an HTML tag with Beautiful Soup?
[ "", "python", "python-2.7", "web-scraping", "beautifulsoup", "html-parsing", "" ]
I'm trying to send a GET request to a server with two headers which have the same name, but different values: ``` url = 'whatever' headers = {'X-Attribute': 'A', 'X-Attribute': 'B'} requests.get(url, headers = headers) ``` This obviously doesn't work, since the headers dictionary can't contain two keys `X-Attribute`. Is there anything I can do, i.e. can I pass headers as something other than a dictionary? The requirement to send requests in this manner is a feature of the server, and I can't change it.
`requests` stores the request headers in a `dict`, which means every header can only appear once. So without making changes to the `requests` library itself it won't be possible to send multiple headers with the same name. However, if the server is [HTTP1.1 compliant](http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2), it *must* be able to accept the same as one header with a comma separated list of the single values. --- Late edit: Since this was brought back to my attention, a way to make this work would be to use a custom instance of `str` that allows to store multiple of the same value in a `dict` by implementing the hash contract differently (or actually in a `CaseInsensitiveDict`, wich uses `lower()` on the names). Example: ``` class uniquestr(str): _lower = None def __hash__(self): return id(self) def __eq__(self, other): return self is other def lower(self): if self._lower is None: lower = str.lower(self) if str.__eq__(lower, self): self._lower = self else: self._lower = uniquestr(lower) return self._lower r = requests.get("https://httpbin.org/get", headers={uniquestr('X'): 'A', uniquestr('X'): 'B'}) print(r.text) ``` Produces something like: ``` { ... "headers": { ... "X": "A,B", }, ... } ``` Interestingly, in the response the headers are combined, but they are really sent as two separate header lines.
requests is using urllib2.urlencode under the hood (or something similar) in order to encode the headers. This means that a list of tuples can be sent as the payload argument instead of a dictionary, freeing the headers list from the unique key constraint imposed by the dictionary. Sending a list of tuples is described in the urlib2.urlencode documentation. <http://docs.python.org/2/library/urllib.html#urllib.urlencode> The following code will solve the problem without flattening or dirty hacks: ``` url = 'whatever' headers = [('X-Attribute', 'A'), ('X-Attribute', 'B')] requests.get(url, headers = headers) ```
Python requests can't send multiple headers with same key
[ "", "python", "python-requests", "" ]
My database contains rows that generally look like: ``` PersonItem __________ id personId itemId ╔════╦══════════╦════════╗ ║ ID ║ PERSONID ║ ITEMID ║ ╠════╬══════════╬════════╣ ║ 1 ║ 123 ║ 456 ║ ║ 2 ║ 123 ║ 456 ║ ║ 3 ║ 123 ║ 555 ║ ║ 4 ║ 444 ║ 456 ║ ║ 5 ║ 123 ║ 456 ║ ║ 6 ║ 333 ║ 555 ║ ║ 7 ║ 444 ║ 456 ║ ╚════╩══════════╩════════╝ ``` I need to find all the actual records where the PersonId and the ItemId column match some other record in the database for those two columns.... ``` | 1 | 123 | 456 | 2 | 123 | 456 | 5 | 123 | 456 | 4 | 444 | 456 | 7 | 444 | 456 ``` How can I go about getting these results?
You can do joins to get around with duplicate records. ``` SELECT a.* FROM TableName a INNER JOIN ( SELECT PersonID, ItemID, COUNT(*) totalCount FROM TableName GROUP BY PersonID, ItemID HAVING COUNT(*) > 1 ) b ON a.PersonID = b.PersonID AND a.ItemID = b.ItemID ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/10930/2) OUTPUT ``` ╔════╦══════════╦════════╗ ║ ID ║ PERSONID ║ ITEMID ║ ╠════╬══════════╬════════╣ ║ 1 ║ 123 ║ 456 ║ ║ 2 ║ 123 ║ 456 ║ ║ 5 ║ 123 ║ 456 ║ ║ 4 ║ 444 ║ 456 ║ ║ 7 ║ 444 ║ 456 ║ ╚════╩══════════╩════════╝ ```
Something like this should do the trick: ``` SELECT P1.* FROM PersonItem P1 INNER JOIN PersonItem P2 ON P2.ID <> P1.ID AND P2.PersonId = P1.PersonId AND P2.ItemId = P1.ItemId ```
Mysql query to find all rows that have the same values as another row
[ "", "mysql", "sql", "" ]
We're using [amqplib](http://barryp.org/software/py-amqplib/) to publish/consume messages. I want to be able to read the number of messages on a queue (ideally both acknowledged and unacknowledged). This will allow me to show a nice status diagram to the admin users and detect if a certain component is not keeping up with the load. I can't find any information in the amqplib docs about reading queue status. Can someone point me in the right direction?
**Using pika:** ``` import pika pika_conn_params = pika.ConnectionParameters( host='localhost', port=5672, credentials=pika.credentials.PlainCredentials('guest', 'guest'), ) connection = pika.BlockingConnection(pika_conn_params) channel = connection.channel() queue = channel.queue_declare( queue="your_queue", durable=True, exclusive=False, auto_delete=False ) print(queue.method.message_count) ``` **Using PyRabbit:** ``` from pyrabbit.api import Client cl = Client('localhost:55672', 'guest', 'guest') cl.get_messages('example_vhost', 'example_queue')[0]['message_count'] ``` **Using HTTP** Syntax: ``` curl -i -u user:password http://localhost:15672/api/queues/vhost/queue ``` Example: ``` curl -i -u guest:guest http://localhost:15672/api/queues/%2f/celery ``` Note: Default vhost is `/` which needs to be escaped as `%2f` **Using CLI:** ``` $ sudo rabbitmqctl list_queues | grep 'my_queue' ```
following the answer of ChillarAnand you can get the value easily. the data is in the object. ``` import pika connection = pika.BlockingConnection(pika.ConnectionParameters( host='localhost', port=5672, credentials=pika.credentials.PlainCredentials('guest', 'guest'), ) channel = connection.channel() print(channel.queue_declare(queue="your_queue", durable=True, exclusive=False, auto_delete=False).method.message_count) ``` and you will get the exact message number
Getting number of messages in a RabbitMQ queue
[ "", "python", "rabbitmq", "message-queue", "py-amqplib", "" ]
I have a MFC application which runs some embedded Python scripts. I am trying to make one of the dialogs this embedded script creates modal, but I am not having much success. Can anyone point me the way to make a modal dialog? Do I need to use a windows functions for this or only Tk or Python functions are enough? For what I googled looks like the following combination of functions should do the magic, but they dont seem to work the way I was expecting: ``` focus_set() grab_set() transient(parent) ```
`grab_set` is the proper mechanism for making a window "application modal". That is, it takes all input from all other windows in the same application (ie: other Tkinter windows in the same process), but it allows you to interact with other applications. If you want your dialog to be globally modal, use `grab_set_global`. This will take over *all* keyboard and mouse input for the entire system. You must be extremely careful when using this because you can easily lock yourself out of your computer if you have have a bug that prevents your app from releasing the grab. When I have the need to do this, during development I'll try to write a bulletproof failsafe such as a timer that will release the grab after a fixed amount of time.
In one of my projects I used the Tcl window manager attribute '-disabled' onto the parent window, that called a (modal) toplevel dialog window. Don't know which windows you show with your MFC application are created or used with Tcl stuff, but if your parent window is Tk based you could do this: In Python simply call onto the parent window inside the creation method of your toplevel window: ``` MyParentWindow.wm_attributes("-disabled", True) ``` After you got what you want with your modal window don't forget to use a callback function inside your modal window, to enable inputs on your parent window again! (otherwise you won't be able to interact with your parent window again!): ``` MyParentWindow.wm_attributes("-disabled", False) ``` A Tkinter (Tcl Version 8.6) Python example (tested on Windows 10 64bit): ``` # Python 3+ import tkinter as tk from tkinter import ttk class SampleApp(tk.Tk): def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) self.minsize(300, 100) self.button = ttk.Button(self, text="Call toplevel!", command=self.Create_Toplevel) self.button.pack(side="top") def Create_Toplevel(self): # THE CLUE self.wm_attributes("-disabled", True) # Creating the toplevel dialog self.toplevel_dialog = tk.Toplevel(self) self.toplevel_dialog.minsize(300, 100) # Tell the window manager, this is the child widget. # Interesting, if you want to let the child window # flash if user clicks onto parent self.toplevel_dialog.transient(self) # This is watching the window manager close button # and uses the same callback function as the other buttons # (you can use which ever you want, BUT REMEMBER TO ENABLE # THE PARENT WINDOW AGAIN) self.toplevel_dialog.protocol("WM_DELETE_WINDOW", self.Close_Toplevel) self.toplevel_dialog_label = ttk.Label(self.toplevel_dialog, text='Do you want to enable my parent window again?') self.toplevel_dialog_label.pack(side='top') self.toplevel_dialog_yes_button = ttk.Button(self.toplevel_dialog, text='Yes', command=self.Close_Toplevel) self.toplevel_dialog_yes_button.pack(side='left', fill='x', expand=True) self.toplevel_dialog_no_button = ttk.Button(self.toplevel_dialog, text='No') self.toplevel_dialog_no_button.pack(side='right', fill='x', expand=True) def Close_Toplevel(self): # IMPORTANT! self.wm_attributes("-disabled", False) # IMPORTANT! self.toplevel_dialog.destroy() # Possibly not needed, used to focus parent window again self.deiconify() if __name__ == "__main__": app = SampleApp() app.mainloop() ``` For more information about Tcl window manager attributes, just take a look at the Tcl documentation: <https://wiki.tcl.tk/9457>
How to create a modal dialog in tkinter?
[ "", "python", "python-3.x", "mfc", "tkinter", "" ]
New at this: What's the best way to copy (numerical) elements from one list to another without binding the elements across both lists? Eg: ``` A=[[6,2,-1],[9,-2,3]] B=A[0] del B[0] ``` will also set `A` to `[[2,-1],[9,-2,3]]`, even though I'd like `A` to remain unchanged.
The problem is that by doing this ``` B=A[0] ``` You just make another reference to the A list You probably want to make a copy of the list like ``` B=list(A[0]) ``` In case list contains object that needs to be copied as well, you'll need to deep copy the whole list ``` import copy B = copy.deepcopy(A[0]) ``` but you don't need this for integers
Your code is making `B` simply a reference to the first item in `A`. This is a problem because the first item in `A` is a mutable object, namely a list. What you want to do is copy the *elements* of `A[0]` into a new list, which you will call `B`: ``` b = a[0][:] # lowercase variable names according to PEP8 ``` Note that this still only makes [shallow copies](http://en.wikipedia.org/wiki/Object_copy#Shallow_copy) of the items in `a[0]`. This will work for your case since you've said that those elements are numeric, which means they're immutable. If, however, you had more nested lists contained in `a[0]`, or other mutable objects instead of numbers, you could end up with the same problem later on, one level further down. Just be careful to pay attention to where you need whole new objects, and where references are sufficient.
Deleting elements in a given list only?
[ "", "python", "list", "" ]
Let's consider that I have two lists Person 1 : ``` 2012-08 person 1 23 2012-09 person 1 63 2012-10 person 1 99 2012-11 person 1 62 ``` and Person 2 : ``` 2012-08 person 2 45 2012-09 person 2 69 2012-10 person 2 12 2012-11 person 2 53 ``` What's your suggestion if I'de like to have a tabular data with the following pattern: ``` Date Person 1 Person 2 ----- --------- --------- 2012-08 23 45 2012-09 63 69 2012-10 99 12 2012-11 62 53 ``` ***UPDATE***: Here is the list : ``` List1 = [(u'201206', u'Customer_1', 0.19048299999999993), (u'201207', u'Customer_1', 15.409000999998593), (u'201208', u'Customer_1', 71.1695730000299), (u'201209', u'Customer_1', 135.73918600011424), (u'201210', u'Customer_1', 235.26299999991522), (u'201211', u'Customer_1', 271.768984999485), (u'201212', u'Customer_1', 355.90968299883934), (u'201301', u'Customer_1', 508.39194049821526), (u'201302', u'Customer_1', 631.136656500077), (u'201303', u'Customer_1', 901.9127695088399), (u'201304', u'Customer_1', 951.9143960094264)] List 2 = [(None, None, None), (None, None, None), (None, None, None), (None, None, None), (None, None, None), (None, None, None), (None, None, None), (u'201301', u'Customer_2', 3.7276289999999657), (u'201302', u'Customer_2', 25.39122749999623), (u'201303', u'Customer_2', 186.77777299985306), (u'201304', u'Customer_2', 387.97834699805617)] ```
Use [`itertools.izip()`](http://docs.python.org/2/library/itertools.html#itertools.izip) to combine two input sequences *while processing*: ``` import itertools reader1 = csv.reader(file1) reader2 = csv.reader(file2) for row1, row2 in itertools.izip(reader1, reader2): # process row1 and row2 together. ``` This will work with lists too; `izip()` makes merging of long lists efficient; it is the iterator version of the [`zip()` function](http://docs.python.org/2/library/functions.html#zip), which, in python 2, materializes the whole combined list in memory. If you can possibly retool the functions that create your input lists into generators, use that: ``` def function_for_list1(inputfilename): with open(inputfilename, 'rb') as f: reader = csv.reader(f) for row in reader: # process row yield row def function_for_list2(inputfilename): with open(inputfilename, 'rb') as f: reader = csv.reader(f) for row in reader: # process row yield row for row1, row2 in itertools.izip(function_for_list1(somename), function_for_list2(someothername)): # process row1 and row2 together ``` This arrangement makes that you can process gigabytes of information while only holding in memory what you need to process one small set of rows.
``` l1=[ ['2012-08','person 1',23], ['2012-09','person 1',63], ['2012-10','person 1',99], ['2012-11','person 1',62]] l2=[ ['2012-08','person 2',45], ['2012-09','person 2',69], ['2012-10','person 2',12], ['2012-11','person 2',53]] h1 = { x:z for x,y,z in l1} h2 = { x:z for x,y,z in l2} print "{:<10}{:<10}{:<10}".format("Date", "Person 1", "Person 2") print "{:<10}{:<10}{:<10}".format('-'*5, '-'*8, '-'*8) for d in sorted(h1): print "{:<10} {:<10}{:<10}".format(d,h1[d],h2[d]) ``` **Output** ``` Date Person 1 Person 2 ----- -------- -------- 2012-08 23 45 2012-09 63 69 2012-10 99 12 2012-11 62 53 ```
Python, outputting Two lists with a common column into a file (TXT or CSV)
[ "", "python", "unix", "csv", "" ]
When I am trying to generate random string of size 100 or more, it gives me exception... ``` msg="".join(random.sample(string.letters+string.digits,random.randint(5,100))) Exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/random.py", line 316, in sample raise ValueError, "sample larger than population" ValueError: sample larger than population ``` Can you help me and explain how can I generate a random string of size more than 100?? and also why is this exception?
``` len(string.digits + string.letters) = 62. ``` The random.sample function does not sample with replacement, so it cannot sample more than 62 elements of this list. You might want to try an approach using list comprehension.
You have 62 characters to draw from. If you try to draw longer samples, you will get the exception. You can fill up your sample space to at least the necessary length like this: ``` chars = string.letters + string.digits sample_space = chars*((100/len(chars))+1) msg="".join(random.sample(sample_space, random.randint(5,100))) ```
Generating random string of size 100 or more in python
[ "", "python", "string", "exception", "random", "size", "" ]
How do I get a consistent answer using GridSearchCV in sci-kit learn? I assume I'm getting different answers b/c different random numbers are causing the folds to be different each time I run it, though it is my understanding that the below code should solve this as `KFold` has `shuffle=False` by default. ``` clf = GridSearchCV(SVC(), param_grid, cv=KFold(n, n_folds=10)) ```
As you identified in the comments, predict\_proba is NOT deterministic! But it does accept a random\_state (as does KFold). I've found before that setting shuffle=False can lead to really poor results if your data were collected in a non-random order, so IMHO you're better off using shuffle and setting random\_state to some number. From [the docs](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) ``` class sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma=0.0, coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, random_state=None) ``` **random\_state** : int seed, RandomState instance, or None (default) The seed of the pseudo random number generator to use when shuffling the data for probability estimation.
I think you're looking for this parameter: random\_state=7 Most things that have a random\_state parameter leave it at None, which allows variation. You must set it to some number to get consistent results. I set it to 7 because I like 7. Pick any number.
Consistent answer to sci-kit learn GridSearchCV
[ "", "python", "machine-learning", "scikit-learn", "" ]
I am using opencv 2 with a webcam. I can get the video stream and process it, but I can't seem to figure out a way to resize the display window. I have some video images stacked horizontally, but the image dimension is very small that it's difficult to see things. My code is pretty simple, and along the lines of this: ``` cv2.namedWindow("main") .... result = np.hstack((res2, foreground)) result = np.hstack((ff, result)) cv2.imshow("main", result) cv2.waitKey(20) ``` The [opencv documentation](http://docs.opencv.org/modules/highgui/doc/user_interface.html) states: ``` namedWindow flags – Flags of the window. Currently the only supported flag is CV_WINDOW_AUTOSIZE . If this is set, the window size is automatically adjusted to fit the displayed image (see imshow() ), and you cannot change the window size manually. ``` But qt backends apparently have extra flags. I don't have a qt backend. Is there a way for me to increase the size of the images so that I can see them?
Yes, unfortunately you can't manually resize a `nameWindow` window without Qt backend. Your options: * use `cv2.resize` function to resize the image to desired size prior to displaying the image * install OpenCV with Qt backend support and use `cv2.namedWindow("main", CV_WINDOW_NORMAL)`
Simply write ``` cv2.namedWindow("main", cv2.WINDOW_NORMAL) ``` and then manually resize it to your desired size ``` cv2.resizeWindow('image', 900, 900) ```
How to resize window in opencv2 python
[ "", "python", "opencv", "" ]
<http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1540/doc/html/san1278453173757.html> The functions TRUNCATE and TRUNCNUM are not supported in Adaptive Server Enterprise. Does anyone know another way of doing this in ASE? Thanks
I know these ways: ``` select Number = floor ( 455.443 ) select Number = cast ( 455.443 as int ) select Number = convert ( int, 455.443 ) select Number = 455.443 - ( 455.443 % 1 ) ```
How about using the Floor function? It essentially does the same thing, and is supported in ASE.
how to truncate a number in sybase ASE?
[ "", "sql", "sap-ase", "" ]
I have a function that takes the argument `NBins`. I want to make a call to this function with a scalar `50` or an array `[0, 10, 20, 30]`. How can I identify within the function, what the length of `NBins` is? or said differently, if it is a scalar or a vector? I tried this: ``` >>> N=[2,3,5] >>> P = 5 >>> len(N) 3 >>> len(P) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'int' has no len() >>> ``` As you see, I can't apply `len` to `P`, since it's not an array.... Is there something like `isarray` or `isscalar` in python? thanks
``` >>> import collections.abc >>> isinstance([0, 10, 20, 30], collections.abc.Sequence) True >>> isinstance(50, collections.abc.Sequence) False ``` **note**: `isinstance` also supports a tuple of classes, check `type(x) in (..., ...)` should be avoided and is unnecessary. You may also wanna check `not isinstance(x, (str, unicode))` As noted by [@2080](https://stackoverflow.com/questions/16807011/python-how-to-identify-if-a-variable-is-an-array-or-a-scalar/16807050?noredirect=1#comment128960203_16807050) and also [here](https://stackoverflow.com/questions/2937114/python-check-if-an-object-is-a-sequence#comment88639515_2937122) this won't work for `numpy` arrays. eg. ``` >>> import collections.abc >>> import numpy as np >>> isinstance((1, 2, 3), collections.abc.Sequence) True >>> isinstance(np.array([1, 2, 3]), collections.abc.Sequence) False ``` In which case you may try the answer from [@jpaddison3](https://stackoverflow.com/a/19773559/1219006): ``` >>> hasattr(np.array([1, 2, 3]), "__len__") True >>> hasattr([1, 2, 3], "__len__") True >>> hasattr((1, 2, 3), "__len__") True ``` However as noted [here](https://stackoverflow.com/questions/2937114/python-check-if-an-object-is-a-sequence#comment127867518_63465553), this is not perfect either, and will incorrectly (at least according to me) classify dictionaries as sequences whereas `isinstance` with `collections.abc.Sequence` classifies correctly: ``` >>> hasattr({"a": 1}, "__len__") True >>> from numpy.distutils.misc_util import is_sequence >>> is_sequence({"a": 1}) True >>> isinstance({"a": 1}, collections.abc.Sequence) False ``` You could customise your solution to something like this, add more types to `isinstance` depending on your needs: ``` >>> isinstance(np.array([1, 2, 3]), (collections.abc.Sequence, np.ndarray)) True >>> isinstance([1, 2, 3], (collections.abc.Sequence, np.ndarray)) True ```
Previous answers assume that the array is a python standard list. As someone who uses numpy often, I'd recommend a very pythonic test of: ``` if hasattr(N, "__len__") ```
python: how to identify if a variable is an array or a scalar
[ "", "python", "arrays", "variables", "scalar", "" ]
The following code is to calculate nth term og fibonacci sequence in python using matrix exponentiation for various test cases t.But the program gives absurd output.Please tell me where i am wrong.when i ran the code in C++ it runs perfectly. ``` class matrix: def __init__(self): self.a=self.b=self.c=1 self.d=0 def mul(self,e,f): ret = matrix() ret.a=(e.a*f.a)+(e.b+f.c) ret.b=(e.a*f.b)+(e.b+f.d) ret.c=(e.c*f.a)+(e.d+f.c) ret.d=(e.c*f.b)+(e.d+f.d) return ret def exp(self,a,p): if(p==0): temp=matrix() temp.a=temp.b=temp.c=temp.d=1 return temp if(p==1): return a if(p%2==0): return self.exp(self.mul(a,a),p/2) else: return self.mul(a,self.exp(self.mul(a,a),(p-1)/2)) def fib(self,n): if (n==0): return 0 if (n==1): return 1 s=matrix() s=self.exp(s,n) return s.d t=int(raw_input()) while(t>0): v=matrix() n=int(raw_input()) print v.fib(n) t=t-1 ```
There are several issues, in order of importance: 1) Your multiplication is wrong. Note the multiplications at the right where you have sums): ``` def mul(self,e,f): ret = matrix() ret.a=(e.a*f.a)+(e.b*f.c) ret.b=(e.a*f.b)+(e.b*f.d) ret.c=(e.c*f.a)+(e.d*f.c) ret.d=(e.c*f.b)+(e.d*f.d) return ret ``` 2) In the last line, you do `return s.d` but you should return `s.b` or `s.c` or you will get one less fibonacci. 3) The line `temp.a=temp.b=temp.c=temp.d=1` is not necessary because the constructor does the work. Besides it is wrong, because `d` should be `0`. 4) Why are `mul` and `exp` class functions if they don't use `self`. It does no harm but they should be `@staticmethod` 5) Again, it does no harm but your second recursive call is unnecessarily complex. Just write: ``` return matrix.mul(a,matrix.exp(a, p-1)) ```
The problem lies in your `__init__` function. In python the so-called variables are just 'tags' to data in the memory. To compare with C/C++, these can be thought of as pointers. when you assign `self.a = self.b = self.c`, you are basically assigning three different names to the same data in the memory. Any change you make in `a` will be reflected back in `b` and `c` and so on. For your problem where you need three separate variables, one way to change the `__init__` function is like: ``` self.a, self.b, self.c = 1, 1, 1 ``` or you can use `copy`. `copy()` tells python to assign a new memory location and then assign the tag on the right hand side to that location. For more read the official documentation on this <http://docs.python.org/2/library/copy.html>. You can also read a short walk-through on this in [Python Tutorial: Shallow and Deep-copy](http://www.python-course.eu/deep_copy.php)
Calculate nth term of Fibonacci sequence in Python
[ "", "python", "matrix", "fibonacci", "" ]