I am using SQL Server 2008 and have the following data:

```
sNames          sDate
(varchar(MAX))  (date)
==========      =============
ALS             10/02/2012
SSP             11/03/2012
MRP             11/05/2012
ALS             14/06/2012
ALS             04/10/2012
ALS             03/11/2012
MRP             05/09/2012
PPL             18/08/2012
```

I want to order the list by sDate descending, but it must show distinct sNames. Kindly guide me.
Using the latest date for duplicate `sNames`, you can do:

```
select sNames, max(sDate)
from your_table
group by sNames
order by max(sDate) desc
```
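A quick way to sanity-check the aggregation is to run it against an in-memory SQLite database; this is an illustrative sketch only (ISO date strings stand in for the `date` column so that text ordering matches date ordering):

```python
import sqlite3

# hypothetical in-memory copy of the question's table, dates rewritten
# from dd/mm/yyyy to ISO strings so they compare chronologically
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (sNames TEXT, sDate TEXT)")
conn.executemany("INSERT INTO your_table VALUES (?, ?)", [
    ("ALS", "2012-02-10"), ("SSP", "2012-03-11"), ("MRP", "2012-05-11"),
    ("ALS", "2012-06-14"), ("ALS", "2012-10-04"), ("ALS", "2012-11-03"),
    ("MRP", "2012-09-05"), ("PPL", "2012-08-18"),
])

# one row per name, carrying that name's latest date
rows = conn.execute(
    "SELECT sNames, MAX(sDate) FROM your_table "
    "GROUP BY sNames ORDER BY MAX(sDate) DESC"
).fetchall()
print(rows)
```

Each name appears once, ordered by its most recent date.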
```
select sNames, sDate
From [your_table]
order by sDate Desc
```
Order By and Distinct in same SQL query
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
Disclaimer: I'm not sure how to title this question correctly, so I apologize if it has been asked already. The questions I found with similar titles and content did not address my problem.

I have two tables, Issues and Text. Issues and Text both share an ID column that acts as the primary key for Issues. Text uses ID and Field as the key columns. Ultimately I would like a SQL query that selects the content in Issues I want, then based on the ID selects the correct values from Text. I've been using joins, but I don't know how to get it to work when I want multiple rows. I'd like to have column A in the output for the matched IDs and a Text.Field value of A, and column B for matched IDs and a Text.Field value of B. How would I go about accomplishing this? **I'm pulling from a Netezza environment, so Pivot is not available.** Thanks

Example structure:

```
Issues Table:
|ID|Column1|Column2|
--------------------
|0 |     17|     18|
|1 |     19|     20|

Text Table:
|ID| Field| Value |
--------------------
|0 |     A|    30 |
|0 |     B|    31 |
|1 |     A|    40 |
|2 |     B|    41 |

Output:
|ID|Column1|Column2|Column3 (Field = 'A') | Column4 (Field = 'B')|
------------------------------------------------------------------
| 0|     17|     18|          30          |          31          |
| 1|     19|     20|          40          |          41          |
```
```
SELECT Issues.ID, Issues.Column1, Issues.Column2,
       Text.Value Column3, Text2.Value Column4
FROM Issues
LEFT OUTER JOIN Text ON Text.ID = Issues.ID AND Text.Field = 'A'
LEFT OUTER JOIN Text AS Text2 ON Text2.ID = Issues.ID AND Text2.Field = 'B'
```
You can also use an aggregate function with a CASE expression to get the result:

```
select i.id, i.column1, i.column2,
  max(case when t.field='A' then t.value end) Column3,
  max(case when t.field='B' then t.value end) Column4
from issues i
left join text t on i.id = t.id
group by i.id, i.column1, i.column2;
```

See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/98d34/1)
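The conditional-aggregation pattern is portable across engines; here is a runnable sketch against SQLite (the table is renamed `text_t` purely to avoid clashing with the type keyword, and the question's sample rows are used verbatim — note the sample's `|2|B|41|` row belongs to id 2, so id 1 gets no Column4 value here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INT, column1 INT, column2 INT)")
conn.execute("CREATE TABLE text_t (id INT, field TEXT, value INT)")
conn.executemany("INSERT INTO issues VALUES (?, ?, ?)",
                 [(0, 17, 18), (1, 19, 20)])
conn.executemany("INSERT INTO text_t VALUES (?, ?, ?)",
                 [(0, "A", 30), (0, "B", 31), (1, "A", 40), (2, "B", 41)])

# MAX(CASE ...) collapses the per-field rows into one column each
rows = conn.execute(
    "SELECT i.id, i.column1, i.column2, "
    "       MAX(CASE WHEN t.field='A' THEN t.value END) AS column3, "
    "       MAX(CASE WHEN t.field='B' THEN t.value END) AS column4 "
    "FROM issues i LEFT JOIN text_t t ON i.id = t.id "
    "GROUP BY i.id, i.column1, i.column2 ORDER BY i.id"
).fetchall()
print(rows)
```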
SQL Join with Multiple Rows of a Single Table
[ "", "sql", "join", "netezza", "" ]
How can I find rows whose date column is null without using 'is null' in the query?

```
SELECT *
FROM HR_OPERATING_UNITS
WHERE date_to IS NULL;
```

In the above query I want to use a binding variable which can have null in it. So the expected query is:

```
SELECT *
FROM HR_OPERATING_UNITS
WHERE date_to = :P_TO_DATE;
```

But the above statement doesn't fetch any rows, even though the binding variable holds null.
The standard approach is to handle the variable being null separately:

```
SELECT *
FROM HR_OPERATING_UNITS
WHERE date_to = :P_TO_DATE
   OR (:P_TO_DATE IS NULL AND date_to IS NULL)
```

If you really must have only one reference to the variable:

```
SELECT *
FROM HR_OPERATING_UNITS
WHERE COALESCE(date_to, '1900-01-01') = COALESCE(:P_TO_DATE, '1900-01-01')
```

picking a value that will never appear in the `date_to` column as the dummy value.
If I understood your question, you want the filter to match whether the bind variable is null or not. If so, try the following:

```
SELECT *
FROM HR_OPERATING_UNITS
WHERE (:P_TO_DATE IS NULL AND date_to IS NULL)
   OR (date_to = :P_TO_DATE);
```
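Both answers lean on the same NULL-safe `OR` pattern, which can be exercised with SQLite named parameters (the table and rows below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hr_operating_units (name TEXT, date_to TEXT)")
conn.executemany("INSERT INTO hr_operating_units VALUES (?, ?)",
                 [("open unit", None), ("closed unit", "2012-01-01")])

def fetch(p_to_date):
    # plain equality never matches NULL, so the second branch
    # handles the "both sides are NULL" case explicitly
    return conn.execute(
        "SELECT name FROM hr_operating_units "
        "WHERE date_to = :p OR (:p IS NULL AND date_to IS NULL)",
        {"p": p_to_date},
    ).fetchall()

print(fetch(None))          # finds the row whose date_to is NULL
print(fetch("2012-01-01"))  # finds the dated row
```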
Find rows that have null in a particular column without using 'is null'
[ "", "sql", "" ]
I have a MySQL database with the following tables:

```
items      | id, item
items_tags | id, item_name, item_id, tag_name, tag_id
tags       | id, tag
```

I'd like to allow the user to search for items with any tag or any combination of tags. Here's some example data to show what I'd like to do:

```
items:
id | item
-----------
1  | banana
2  | orange
3  | tomato

items_tags:
id | item_name | item_id | tag_name | tag_id
---------------------------------------------
1  | banana    | 1       | yellow   | 1
2  | banana    | 1       | fruit    | 2
3  | orange    | 2       | orange   | 3
4  | orange    | 2       | fruit    | 2
5  | tomato    | 3       | red      | 4
6  | tomato    | 3       | vegetable| 5

tags:
id | tag
--------------
1  | yellow
2  | fruit
3  | orange
4  | red
5  | vegetable
```

What query could I run to only return items tagged with "yellow" and "fruit" (i.e., it should return *row* 1 of items)?
If you want items with either of the two tags:

```
select distinct item_id, item_name
from items_tags
where tag_name in ('yellow', 'fruit');
```

If you want items having both tags:

```
select item_id, item_name
from items_tags
where tag_name in ('yellow', 'fruit')
group by item_id, item_name
having count(*) = 2;
```

Based on your comment:

```
select a.id, a.item
from items a, items_tags b, tags c
where a.id = b.item_id
  and b.tag_id = c.id
group by id, item
having (group_concat(c.tag) like '%yellow%'
        and group_concat(c.tag) like '%fruit%')
    or group_concat(c.tag) = 'red';
```

This query returns id and item from the items table: items that have both the yellow and fruit tags, plus items with only the red tag. If you want items with those two tags and only those two tags, use the following condition in the having clause:

```
(group_concat(c.tag) like '%yellow%'
 and group_concat(c.tag) like '%fruit%'
 and count(*) = 2)
```
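The `HAVING COUNT(*) = 2` intersection trick is easy to verify in SQLite (columns trimmed to the ones the query touches, data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items_tags (item_id INT, item_name TEXT, tag_name TEXT)")
conn.executemany("INSERT INTO items_tags VALUES (?, ?, ?)", [
    (1, "banana", "yellow"), (1, "banana", "fruit"),
    (2, "orange", "orange"), (2, "orange", "fruit"),
    (3, "tomato", "red"),    (3, "tomato", "vegetable"),
])

# an item qualifies only if it matched both tags, i.e. two rows survived
rows = conn.execute(
    "SELECT item_id, item_name FROM items_tags "
    "WHERE tag_name IN ('yellow', 'fruit') "
    "GROUP BY item_id, item_name HAVING COUNT(*) = 2"
).fetchall()
print(rows)
```

Only banana carries both tags, so only it comes back.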
Try this:

```
Select *, count(1) as noOfTags
from items_tags
group by item_id
having tag_name in ('yellow','fruit') and noOfTags = 2
```
MySQL query to search for items with certain tags
[ "", "mysql", "sql", "select", "" ]
I'm storing currencies in a Decimal. From the client, I could be receiving strings in the following formats:

```
US$1,000.00
€1.000,00
```

So far, I've written:

```
re.sub(r'[^\d\.]', '', 'US$1,000.00')
```

which returns `1000.00` (formatted the way I'd like) for the first example and `1.000` for the second (which I don't want). What would be the best way to catch both decimals correctly?
I found a module that takes care of a lot of the complexities in currency formatting (in particular with respect to periods, commas and a bunch more things). The package is called `Babel`; here is a link to the particular method(s) that could help: <http://babel.edgewall.org/wiki/ApiDocs/babel.numbers#babel.numbers:parse_decimal> Docs: <http://babel.edgewall.org/wiki/ApiDocs/babel.numbers> Lots of other helpful internationalization utils in there.
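If pulling in Babel is not an option, here is a minimal hand-rolled sketch. It assumes the rightmost `.` or `,` is the decimal mark only when exactly two digits follow it — a heuristic that covers the two formats in the question, not a general locale parser:

```python
import re

def normalize_amount(s):
    """Heuristic: treat the rightmost '.' or ',' as the decimal mark
    when exactly two digits follow it; any other '.' or ',' is taken
    as a thousands separator and dropped."""
    digits = re.sub(r"[^\d.,]", "", s)
    last = max(digits.rfind("."), digits.rfind(","))
    if last != -1 and len(digits) - last - 1 == 2:
        intpart = re.sub(r"[.,]", "", digits[:last])
        return intpart + "." + digits[last + 1:]
    return re.sub(r"[.,]", "", digits)

print(normalize_amount("US$1,000.00"))  # 1000.00
print(normalize_amount("€1.000,00"))    # 1000.00
```

Amounts with a three-digit final group, such as `1.000`, are treated as separator-only and collapse to `1000`.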
You could try splitting and then gluing things back together:

```
import re
z = re.split("[,.]", re.sub(r"[^\d.,]", "", "$1,000.00"))
''.join(z[0:-2]) + ".".join(z[-2:])
# '1000.00'
```
Python Regex: Formatting use of commas, periods internationally
[ "", "python", "regex", "" ]
I have two tables: Products and Items. I want to select `distinct` items that belong to a product based on the `condition` column, sorted by `price ASC`.

```
+-------------------+
| id | name         |
+-------------------+
| 1  | Mickey Mouse |
+-------------------+

+-------------------------------------+
| id | product_id | condition | price |
+-------------------------------------+
| 1  | 1          | New       | 90    |
| 2  | 1          | New       | 80    |
| 3  | 1          | Excellent | 60    |
| 4  | 1          | Excellent | 50    |
| 5  | 1          | Used      | 30    |
| 6  | 1          | Used      | 20    |
+-------------------------------------+
```

Desired output:

```
+-----------------------------------------+
| id | name         | condition | price   |
+-----------------------------------------+
| 2  | Mickey Mouse | New       | 80      |
| 4  | Mickey Mouse | Excellent | 50      |
| 6  | Mickey Mouse | Used      | 20      |
+-----------------------------------------+
```

Here's the query. It returns six records instead of the desired three:

```
SELECT DISTINCT(items.condition), items.price, products.name
FROM products
INNER JOIN items ON products.id = items.product_id
WHERE products.id = 1
ORDER BY items."price" ASC, products.name;
```
Correct PostgreSQL query:

```
SELECT DISTINCT ON (items.condition)
       items.id, items.condition, items.price, products.name
FROM products
INNER JOIN items ON products.id = items.product_id
WHERE products.id = 1
ORDER BY items.condition, items.price, products.name;
```

> SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of
> each set of rows where the given expressions evaluate to equal.

Details [`here`](http://www.postgresql.org/docs/current/static/sql-select.html)
There is no `distinct()` function in SQL. Your query is being parsed as

```
SELECT DISTINCT (items.condition), ...
```

which is equivalent to

```
SELECT DISTINCT items.condition, ...
```

`DISTINCT` applies to the whole row: only if two or more rows have the same values in every selected field is the "duplicate" row dropped from the result set. You probably want something more like

```
SELECT items.condition, MIN(items.price), products.name
FROM ...
...
GROUP BY products.id
```
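Since `DISTINCT ON` is PostgreSQL-specific, the `GROUP BY`/`MIN` route is the portable one; a sketch in SQLite with the question's rows (only condition and minimum price are selected, because plain `GROUP BY` cannot reliably return the matching `id` on every engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INT, product_id INT, condition TEXT, price INT)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?, ?)", [
    (1, 1, "New", 90), (2, 1, "New", 80),
    (3, 1, "Excellent", 60), (4, 1, "Excellent", 50),
    (5, 1, "Used", 30), (6, 1, "Used", 20),
])

# one row per condition, keeping the cheapest price in each group
rows = conn.execute(
    "SELECT condition, MIN(price) FROM items "
    "WHERE product_id = 1 GROUP BY condition ORDER BY MIN(price)"
).fetchall()
print(rows)
```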
Select DISTINCT returning too many records
[ "", "sql", "postgresql", "distinct", "greatest-n-per-group", "distinct-on", "" ]
I have the following table.

```
 _____________________________________
| carid | changeid | data1 | data2 |
|_______|__________|_______|_______|
| 1     | 1        |a      |b      |
| 1     | 2        |c      |d      |
| 1     | 3        |e      |f      |
| 2     | 3        |g      |h      |
| 2     | 2        |i      |j      |
| 2     | 4        |k      |l      |
| 3     | 5        |m      |n      |
| 3     | 1        |o      |p      |
| 4     | 6        |q      |r      |
| 4     | 2        |s      |t      |
|_______|__________|_______|_______|
```

I want to select the following result:

```
| carid | changeid | data1 | data2 |
|_______|__________|_______|_______|
| 1     | 1        |a      |b      |
| 1     | 2        |c      |d      |
| 1     | 3        |e      |f      |
| 3     | 5        |m      |n      |
| 3     | 1        |o      |p      |
|_______|__________|_______|_______|
```

In words: if a row has changeid=1, I want to select all the rows with the same carid as that row. This problem is quite easy to solve with a query using multiple selects: first select all rows with changeid=1, take those carids, and select all rows with those carids. Simple enough. I was more wondering whether it is possible to solve this problem **without** using multiple selects? Preferably I'm looking for a faster solution, but I can try that out myself.
You can join the table back to itself:

```
SELECT DISTINCT a.*
FROM YourTable a
INNER JOIN YourTable b ON b.carid = a.carid and b.changeid = 1
```

Table a is all the rows you want to output, filtered by table b, which limits the set to those with changeid = 1. This should have excellent performance as everything is done in a set-oriented manner. `DISTINCT` may not be necessary if changeid 1 can occur only once per carid, and it should be avoided if possible as it may introduce a significant performance hit for a large result set.
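The self-join can be exercised in SQLite with the question's data (the table name `cars` is made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (carid INT, changeid INT, data1 TEXT, data2 TEXT)")
conn.executemany("INSERT INTO cars VALUES (?, ?, ?, ?)", [
    (1, 1, "a", "b"), (1, 2, "c", "d"), (1, 3, "e", "f"),
    (2, 3, "g", "h"), (2, 2, "i", "j"), (2, 4, "k", "l"),
    (3, 5, "m", "n"), (3, 1, "o", "p"),
    (4, 6, "q", "r"), (4, 2, "s", "t"),
])

# b keeps only the changeid=1 rows; joining on carid pulls in every
# sibling row of the matching cars
rows = conn.execute(
    "SELECT DISTINCT a.* FROM cars a "
    "JOIN cars b ON b.carid = a.carid AND b.changeid = 1 "
    "ORDER BY a.carid, a.changeid"
).fetchall()
print(rows)
```

Only carids 1 and 3 have a changeid=1 row, so only their rows survive.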
By multiple selects, do you mean using IN?

```
SELECT carid, changeid, data1, data2
FROM YourTable
WHERE carid IN (SELECT carid FROM YourTable WHERE changeid = 1)
```
Multiple selects really needed?
[ "", "sql", "" ]
I've got a table structured as follows:

```
Field  Type
id     int(11) AI
user   varchar(64)
date   timestamp
key    int(11)
```

What I need to do is find rows (from a given day) where the difference between two successive rows (the closest timestamps for a given user) is less than 1300. I was told to use a query like this:

```
select t.*
from (select t.*,
             @nextdate as nextdate,
             @nextdate := date
      from my_table t
      order by date desc
     ) t
where t.nextdate - t.date < 1300;
```

But it didn't seem to work. Can anyone help me solve my task?
Try this:

```
select t1.user, t1.date d1, t2.date d2, t1.date - t2.date
from (select @val := @val + 1 rowid, user, date
      from mytable, (select @val := 0) a
      order by user, date) t1,
     (select @val1 := @val1 + 1 rowid, user, date
      from mytable, (select @val1 := 1) b
      order by user, date) t2
where t1.rowid = t2.rowid
  and t1.user = t2.user
  and t1.date - t2.date < 1300;
```

see [DEMO](http://sqlfiddle.com/#!8/6af1c/8)
```
DATEDIFF(t.nextdate, t.date) < (1300 / 3600 / 24)
```

I assumed 1300 is in seconds, so I converted it to days, which is what DATEDIFF returns. Please be aware that this query will do a full scan of your table, which might be expensive.
Mysql: Find rows, where timestamp difference is less than x
[ "", "mysql", "sql", "" ]
I am getting an error while running the code below.

```
#!/usr/bin/python
import subprocess
import os

def check_output(*popenargs, **kwargs):
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, unused_err = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        error = subprocess.CalledProcessError(retcode, cmd)
        error.output = output
        raise error
    return output

location = "%s/folder" % (os.environ["Home"])
subprocess.check_output(['./MyFile'])
```

**Error**

```
subprocess.check_output(['./MyFile'])
AttributeError: 'module' object has no attribute 'check_output'
```

I am working on `Python 2.6.4`.
Just use:

```
check_output(['./MyFile'])
```

You've defined your own function; it's not an attribute of the `subprocess` module (for Python 2.6 and earlier). You can also assign the function to the imported module object (but that's not necessary):

```
subprocess.check_output = check_output
location = "%s/folder" % (os.environ["Home"])
subprocess.check_output(['./MyFile'])
```
You *probably* just want to use `check_output`, but, just so you know, there is a function `subprocess.check_output`; it's just not defined until Python 2.7 (<http://docs.python.org/3/library/subprocess.html#subprocess.check_output>). You might even want this, which defines the function on the module if it's not there (i.e. when running before v2.7):

```
try:
    subprocess.check_output
except AttributeError:
    subprocess.check_output = check_output

subprocess.check_output(['./MyFile'])
```
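A sketch of that backport pattern, runnable on any Python version (`./MyFile` is replaced by a call to the current interpreter so the example works anywhere):

```python
import subprocess
import sys

# Backport sketch: define check_output and attach it to the module only
# when it is missing (i.e. on Python < 2.7); on newer Pythons the real
# implementation is left untouched.
def check_output(*popenargs, **kwargs):
    process = subprocess.Popen(*popenargs, stdout=subprocess.PIPE, **kwargs)
    output, _ = process.communicate()
    retcode = process.poll()
    if retcode:
        raise subprocess.CalledProcessError(retcode, popenargs[0])
    return output

if not hasattr(subprocess, "check_output"):
    subprocess.check_output = check_output

# stand-in for subprocess.check_output(['./MyFile'])
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
print(out)
```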
check_output error in python
[ "", "python", "python-2.6", "" ]
I'm new to Python and am working with Python 3.3.2. I'm having trouble including data items in strings that I am trying to print. The `%` operator either prints literally, as in:

```
for miles in range(10, 70, 10):
    km = miles * 1.609
    print("%d miles --> %3.2f kilometers") % (miles, km)
```

which produces:

```
%d miles --> %3.2f kilometers
```

along with the error:

```
Unsupported operand type for %: 'NoneType' and 'tuple'
```

But when I change the third line to:

```
print(%d "miles -->" %3.2f "kilometers") % (miles, km)
```

I get a syntax error on the %d. I was able to get the program to work correctly from interactive mode by eliminating the `print()` function, but it will not produce output in IDLE without `print()`. What am I missing? The research I have done trying to figure this out has been helpful in learning more about Python, but so far hasn't gotten the program working. By the way, it should output:

```
10 miles --> 16.09 kilometers
20 miles --> 32.18 kilometers
30 miles --> 48.27 kilometers
40 miles --> 64.36 kilometers
50 miles --> 80.45 kilometers
60 miles --> 96.54 kilometers
```

I appreciate any help or assistance I can get on this problem.
Move `% (miles, km)` inside the `print` function. The `print` function returns `None`, so in your code you're actually trying to do:

```
None % (miles, km)
```

That's why you're getting that error.

**Working code:**

```
>>> for miles in range(10, 70, 10):
...     km = miles * 1.609
...     print("%d miles --> %3.2f kilometers" % (miles, km))
...
10 miles --> 16.09 kilometers
20 miles --> 32.18 kilometers
30 miles --> 48.27 kilometers
40 miles --> 64.36 kilometers
50 miles --> 80.45 kilometers
60 miles --> 96.54 kilometers
```

Using **new-style [string formatting](http://docs.python.org/2/library/string.html#formatspec)**:

```
>>> for miles in range(10, 70, 10):
...     km = miles * 1.609
...     print("{:d} miles --> {:3.2f} kilometers".format(miles, km))
...
10 miles --> 16.09 kilometers
20 miles --> 32.18 kilometers
30 miles --> 48.27 kilometers
40 miles --> 64.36 kilometers
50 miles --> 80.45 kilometers
60 miles --> 96.54 kilometers
```
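The failure mode is easy to reproduce and isolate: `print()` returns `None`, so a trailing `%` is applied to `None`, not to the string:

```python
# the % lands on print()'s return value (None) and raises TypeError
try:
    print("%d miles --> %3.2f kilometers") % (10, 16.09)
except TypeError as exc:
    err = str(exc)

# with % inside the parentheses, formatting happens before printing
msg = "%d miles --> %3.2f kilometers" % (10, 16.09)
print(msg)
```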
Try using the `.format()` method on your string:

```
>>> for miles in range(10, 70, 10):
...     km = miles * 1.609
...     print("{0} miles ---> {1} kilometers".format(miles, km))
```
Python print function issues
[ "", "python", "string", "function", "printing", "formatting", "" ]
I am writing code in Python for a project that has to accomplish a few things:

1) read in data from an xls file column by column
2) average each row of the columns in groups of three
3) then average the resulting columns

I have accomplished 1 and 2 but can't quite seem to get 3. I think a lot of the trouble I'm having stems from the fact that I am using float; however, I need the numbers to 6 decimal places. Any help and patience is appreciated; I'm very new to Python.

```
v = open("Pt_2_Test_Data.xls", 'wb')  # created file to write output to
w = open("test2.xls")
count = 0
for row in w:  # read in file
    for line in w:
        columns = line.split("\t")  # split up into columns
        date = columns[0]
        time = columns[1]
        a = columns[2]
        b = columns[3]
        c = columns[4]
        d = columns[5]
        e = columns[6]
        f = columns[7]
        g = columns[8]
        h = columns[9]
        i = columns[10]
        j = columns[11]
        k = columns[12]
        l = columns[13]
        m = columns[14]
        n = columns[15]
        o = columns[16]
        p = columns[17]
        q = columns[18]
        r = columns[19]
        s = columns[20]
        t = columns[21]
        u = columns[22]
        LZA = columns[23]
        SZA = columns[24]
        LAM = columns[25]
        count += 1
        A = 0
        if count != 0:  # gets rid of column titles
            filter1 = ((float(a) + float(b) + float(c))/3)
            filter1 = ("%.6f" % A)
            filter2 = (float(d) + float(e) + float(f))/3
            filter2 = ("%.6f" % filter2)
            filter3 = (float(g) + float(h) + float(i))/3
            filter3 = ("%.6f" % filter3)
            filter4 = (float(j) + float(k) + float(l))/3
            filter4 = ("%.6f" % filter4)
            filter5 = (float(m) + float(n) + float(o))/3
            filter5 = ("%.6f" % filter5)
            filter6 = (float(p) + float(q) + float(r))/3
            filter6 = ("%.6f" % filter6)
            filter7 = (float(s) + float(t) + float(u))/3
            filter7 = ("%.6f" % filter7)
            A = [filter1, filter2, filter3, filter4, filter5, filter6, filter7]
            A = ",".join(str(x) for x in A).join('[]')
            print A
            avg = [float(sum(col))/float(len(col)) for col in zip(*A)]
            print avg
```

I have also tried formatting the data like so:

```
A = ('{0} {1} {2} {3} {4} {5} {6} {7} {8}'.format(date, time, float(filter1), float(filter2),
     float(filter3), float(filter4), float(filter5), float(filter6), float(filter7)) + '\n')  # average of triplets
print A
```

thinking I could access the values of each column and perform the necessary math on them by calling them like you would when using a dictionary; however, this was unsuccessful. It seemed to recognize the data either as a row (so trying to access any column by [0] was out of bounds) or as individual characters, not as a list of numbers. Is this related to using the float function?
I'm not sure I understand which columns you want to average in 3), but maybe this does what you want:

```
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        (date, time, a, b, c, d, e, f, g, h, i, j, k, l, m,
         n, o, p, q, r, s, t, u, LZA, SZA, LAM) = row.split("\t")  # split columns into fields
        A = [(float(a) + float(b) + float(c))/3,
             (float(d) + float(e) + float(f))/3,
             (float(g) + float(h) + float(i))/3,
             (float(j) + float(k) + float(l))/3,
             (float(m) + float(n) + float(o))/3,
             (float(p) + float(q) + float(r))/3,
             (float(s) + float(t) + float(u))/3]
        print ('[' + ', '.join(['{:.6f}'] * len(A)) + ']').format(*A)
        avg = sum(A) / len(A)
        print avg
```

You could do the same thing a little more concisely with code like the following:

```
avg = lambda nums: sum(nums) / float(len(nums))

with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        cols = row.split("\t")  # split into columns
        # then split that into fields
        date, time, values, LZA, SZA, LAM = (cols[0], cols[1],
                                             map(float, cols[2:23]),
                                             cols[23], cols[24], cols[25])
        A = [avg(values[i:i+3]) for i in xrange(0, 21, 3)]
        print ('[' + ', '.join(['{:.6f}'] * len(A)) + ']').format(*A)
        print avg(A)
```
You can use the `decimal` module to work with exact numbers.

```
from decimal import *
getcontext().prec = 6  # sets the precision to 6 significant digits
```

Note that the precision counts significant digits, not decimal places, which means that:

```
print(Decimal(1) / Decimal(7))    # 0.142857
print(Decimal(100) / Decimal(7))  # results in 14.2857
```

This means you probably need to set the precision to a higher value to get 6 decimal places, for example:

```
from decimal import *
getcontext().prec = 28
print("{0:.6f}".format(Decimal(100) / Decimal(7)))  # 14.285714
```

To give a complete answer to your question, could you explain what average you are looking for? The average over all (21) columns? Could you maybe post some test_data.xls?
Averaging down a column of averaged data
[ "", "python", "csv", "average", "multiple-columns", "" ]
``` s="(8+(2+4))" def checker(n): if len(n) == 0: return True if n[0].isdigit==True: if n[1].isdigit==True: return False else: checker(n[1:]) else: checker(n[1:]) ``` This is what I have so far. Simple code, trying to see if a string meets the following conditions. However when i perform checker(s) i get: ``` True IndexError: string index out of range ``` Any help? Thanks in advance Edit: The function's purpose is to produce true if the string contains only single digit numbers, and false if 2 or more-figured digits exist in the string.
When the length of `n` is 0, the `n[0]` part is going to raise an error because the string is empty. You should add a `return` statement there instead of print:

```
def checker(n):
    if len(n) < 2:
        return True
    if n[0] in x:
```

Note that the condition must be `len(n) < 2`, otherwise you'll get an error on `n[1]` when the length of the string is 1. Secondly, you're trying to match characters against a list which contains integers, so the *in* checks are always going to be `False`. Either convert the list items to strings or, better, use `str.isdigit`:

```
>>> '1'.isdigit()
True
>>> ')'.isdigit()
False
>>> '12'.isdigit()
True
```

---

**Update:** You can use `regex` and `all` for this:

```
>>> import re
>>> def check(strs):
...     nums = re.findall(r'\d+', strs)
...     return all(len(c) == 1 for c in nums)
...
>>> s = "(8+(2+4))"
>>> check(s)
True
>>> check("(8+(2+42))")
False
```

---

Working version of your code:

```
s = "(8+(2+4))"

def checker(n):
    if not n:  # better than len(n) == 0; an empty string is falsy in Python
        return True
    if n[0].isdigit():  # str.isdigit is a method and it already returns a boolean value
        if n[1].isdigit():
            return False
        else:
            return checker(n[1:])  # use return statements for recursive calls,
                                   # otherwise the recursive calls may return None
    else:
        return checker(n[1:])

print checker("(8+(2+4))")
print checker("(8+(2+42))")
```

**Output:**

```
True
False
```
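One detail from the original code worth isolating: `n[0].isdigit == True` compares the bound method object itself to `True`, which is always `False`, so the digit branch never runs. The parentheses matter:

```python
# without parentheses this is the method object, not its result
print('1'.isdigit == True)   # False
print('1'.isdigit())         # True
```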
You should do `return True` after the first if statement, not `print True`. The function continues to run after that statement and hits an error when the input is size 0.
IndexError string index out of range
[ "", "python", "wing-ide", "" ]
I'm trying to make a program that takes an input string and outputs the character that occurs the most, along with how many times it occurs. It should also output the character that occurs the least and say how many times it occurs. I'm having trouble getting started with this. I'm doing this as part of a summer college course — a whole semester's class in 6 weeks — so the class goes by pretty fast. Could someone please explain the logic behind this so I can get started? We haven't learned many different methods, so if you could stick to basic Python that would be nice: we've covered while loops, for loops, lists, tuples, strings, etc. We didn't learn anything else. Thanks
I like a challenge. No complex data structures, just simple loops and ifs. If this is too complex then your teacher has done a bad job!

```
w = "This is the song that doesn't end; yes it goes on and on my friend."

max_letter = w[0]
min_letter = w[0]
max = w.count(w[0])
min = w.count(w[0])

for c in w:
    if c is not " ":
        if w.count(c) > max:
            max_letter = c
            max = w.count(c)
        if w.count(c) < min:
            min_letter = c
            min = w.count(c)

print max, max_letter
print min, min_letter

>>> 7 n
>>> 1 T
```

@Rohan asked how I built this up, and it's only fair I describe it. *Basically, it's an exercise in answering questions, and making new questions as I go through.*

**What are the maximum and minimum occurring letters?** The first thing you know is that you need to find and print some things: the minimum and maximum letters. At the start of everything, I know the first letter will be both, so let's start there. *It should be said, though: what happens if the string is empty?*

```
max_letter = w[0]
min_letter = w[0]
```

**If these letters are the most and least occurring, how often do they appear?** Now I know I need to keep track of extra information, and since I decided the first letter is both the minimum and maximum, set that letter's count to be `min` and `max`.

```
max = w.count(w[0])
min = w.count(w[0])
```

**How can I know that these letters really are the most and least common?** Well, I need to check all of the letters, which I can do in a loop:

```
for c in w:
```

**Is this current character something I am checking for?** In this case, I only want things that aren't spaces, but I could check for anything here.

```
if c is not " ":
```

**Is this current letter the most common?** Not sure, so check it against the maximum; if it is, then update which letter has the maximum count, and what the maximum count is.

```
if w.count(c) > max:
    max_letter = c
    max = w.count(c)
```

**Same for the least common...**

```
if w.count(c) < min:
    min_letter = c
    min = w.count(c)
```

**Then print out what I found:**

```
print max, max_letter
print min, min_letter
```

**Can this algorithm be better?** Yes. This algorithm checked whether `'n'` was the maximum letter 7 times; the answer never changes. It also runs through the string many times:

* once per letter in the for loop
* in a simple counting algorithm, it would run through again in each iteration to get the count.
```
from collections import Counter

the_string = "This is a string!"
Counter(x for x in the_string if not x.isspace()).most_common()
```

Here is a way to get started without Counters/dicts/etc.

```
>>> the_string = "This is a string!"
>>> A = [0] * 256
>>> for x in the_string:
...     if not x.isspace():
...         A[ord(x)] += 1
...
```

`ord()` maps each character onto a position in `A`:

```
>>> A
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 3, 0, 0, 0, 0, 1, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

You can easily find *one* of the most common characters like this:

```
>>> chr(A.index(max(A)))
'i'
```

The minimum is more complicated, since we need the minimum that isn't `0`:

```
>>> chr(A.index(min(x for x in A if x)))
'!'
```

OK, you probably aren't comfortable with max, min and generator expressions yet, but you should be able to work out how to do it with a `for` loop after 6 weeks.
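A compact middle ground between the two answers, using only a plain dict and the loop constructs from the course (ties go to whichever qualifying character `max`/`min` see first in insertion order):

```python
the_string = "This is the song that doesn't end"

# count every non-space character once
counts = {}
for c in the_string:
    if c != " ":
        counts[c] = counts.get(c, 0) + 1

# pick the letters whose counts are largest / smallest
max_letter = max(counts, key=counts.get)
min_letter = min(counts, key=counts.get)
print(max_letter, counts[max_letter])
print(min_letter, counts[min_letter])
```

Unlike the `w.count(c)` version, this walks the string once and then consults the dictionary, so each character's count is computed a single time.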
Program that counts most and least common non-space characters
[ "", "python", "" ]
I have a function:

```
ALTER FUNCTION [dbo].[func_ParseString] (@list nvarchar(MAX))
RETURNS @tbl TABLE (string VARCHAR(500) NOT NULL) AS
BEGIN
    DECLARE @pos int,
            @nextpos int,
            @valuelen int

    SELECT @pos = 0, @nextpos = 1

    WHILE @nextpos > 0
    BEGIN
        SELECT @nextpos = charindex(', ', @list, @pos + 1)
        SELECT @valuelen = CASE WHEN @nextpos > 0
                                THEN @nextpos
                                ELSE len(@list) + 1
                           END - @pos - 1
        INSERT @tbl (string)
            VALUES (substring(@list, @pos + 1, @valuelen))
        SELECT @pos = @nextpos
    END
    RETURN
END
```

I have a table:

| id | name | age |
| --- | --- | --- |
| 1 | Dan | 20 |
| 2 | Chris | 30 |
| 3 | Andy | 20 |

When I try a select-in statement, it only returns values for the first name in my comma-delimited string:

```
SELECT *
FROM table
WHERE name IN (SELECT string COLLATE DATABASE_DEFAULT
               FROM [dbo].[func_ParseString]('Dan, Andy'))
```

This only returns row 1 when I want it to return rows 1 and 3. Can someone help?
Your function is returning a leading blank in front of Andy. You should use the `LTRIM` function to remove it, either in the function, on insert into @tbl:

```
INSERT @tbl (string)
    VALUES (LTRIM(substring(@list, @pos + 1, @valuelen)))
```

or when you call the function:

```
SELECT LTRIM(string)
FROM [dbo].[func_ParseString]('Dan, Andy')
```
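The same off-by-a-space issue is easy to see outside SQL: splitting `'Dan, Andy'` on a bare comma leaves exactly the leading blank that `LTRIM` removes:

```python
# splitting on "," keeps the space that followed the comma
names = "Dan, Andy".split(",")
print(names)    # the second entry carries a leading space

# stripping each piece is the Python analogue of LTRIM
clean = [n.strip() for n in names]
print(clean)
```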
I don't remember where I found this function. I implemented it and it works nicely:

```
ALTER FUNCTION [dbo].[StringSplit]
(
    @delimited nvarchar(max),
    @delimiter nvarchar(100)
)
RETURNS @t TABLE
(
    -- Id column can be commented out, not required for splitting the string
    id int identity(1,1),  -- I use this column for numbering the split parts
    val nvarchar(max)
)
AS
BEGIN
    declare @xml xml
    set @xml = N'<root><r>' + replace(@delimited, @delimiter, '</r><r>') + '</r></root>'

    insert into @t(val)
    select r.value('.', 'varchar(max)') as item
    from @xml.nodes('//root/r') as records(r)

    RETURN
END
GO

DECLARE @String NVARCHAR(max)
SET @String = 'Dan, Andy'
SELECT Val FROM [dbo].[StringSplit] (@String, ',')
```
String Split sql function only returning first word in string
[ "", "sql", "sql-server", "split", "" ]
I want to create the password field as a password input in views.

**models.py:**

```
class User(models.Model):
    username = models.CharField(max_length=100)
    password = models.CharField(max_length=50)
```

**forms.py:**

```
class UserForm(ModelForm):
    class Meta:
        model = User
```
Use the `PasswordInput` widget:

```
from django import forms

class UserForm(forms.ModelForm):
    password = forms.CharField(widget=forms.PasswordInput)

    class Meta:
        model = User
```
You should create a `ModelForm` ([docs](https://docs.djangoproject.com/en/dev/topics/forms/modelforms/)) which has a field that uses the `PasswordInput` widget from the forms library. It would look like this:

## models.py

```
from django.db import models

class User(models.Model):
    username = models.CharField(max_length=100)
    password = models.CharField(max_length=50)
```

## forms.py (not views.py)

```
from django import forms

class UserForm(forms.ModelForm):
    class Meta:
        model = User
        widgets = {
            'password': forms.PasswordInput(),
        }
```

For more about using forms in a view, see [this section of the docs](https://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view).
How to create Password Field in Model Django
[ "", "python", "django", "django-models", "django-forms", "" ]
I use Scilab, and want to convert an array of booleans into an array of integers:

```
>>> x = np.array([4, 3, 2, 1])
>>> y = 2 >= x
>>> y
array([False, False,  True,  True], dtype=bool)
```

In Scilab I can use:

```
>>> bool2s(y)
0.  0.  1.  1.
```

or even just multiply it by 1:

```
>>> 1*y
0.  0.  1.  1.
```

Is there a simple command for this in Python, or would I have to use a loop?
Numpy arrays have an `astype` method. Just do `y.astype(int)`.

Note that it might not even be necessary to do this, depending on what you're using the array for. Bool will be autopromoted to int in many cases, so you can add it to int arrays without having to explicitly convert it:

```
>>> x
array([ True, False,  True], dtype=bool)
>>> x + [1, 2, 3]
array([2, 2, 4])
```
The `1*y` method works in NumPy too:

```
>>> import numpy as np
>>> x = np.array([4, 3, 2, 1])
>>> y = 2 >= x
>>> y
array([False, False,  True,  True], dtype=bool)
>>> 1*y              # Method 1
array([0, 0, 1, 1])
>>> y.astype(int)    # Method 2
array([0, 0, 1, 1])
```

If you are asking for a way to convert Python lists from Boolean to int, you can use `map` to do it:

```
>>> testList = [False, False, True, True]
>>> map(lambda x: 1 if x else 0, testList)
[0, 0, 1, 1]
>>> map(int, testList)
[0, 0, 1, 1]
```

Or using list comprehensions:

```
>>> testList
[False, False, True, True]
>>> [int(elem) for elem in testList]
[0, 0, 1, 1]
```
How to convert a boolean array to an int array
[ "python", "integer", "boolean", "type-conversion", "scilab" ]
I am trying to add an attachment to my timeline with the multipart encoding. I've been doing something like the following:

```
req = urllib2.Request(url,data={body}, header={header})
resp = urllib2.urlopen(req).read()
```

And it has been working fine for application/json. However, I'm not sure how to format the body for multipart. I've also used some libraries: requests and poster, and they both return 401 for some reason.

How can I make a multipart request either with a library (preferably a plug-in to urllib2) or with urllib2 itself (like the block of code above)?

**EDIT:**

I also would like this to be able to support the mirror-api "video/vnd.google-glass.stream-url" from <https://developers.google.com/glass/timeline>

For the request using the poster library, here is the code:

```
register_openers()
datagen, headers = multipart_encode({'image1':open('555.jpg', 'rb')})
```

Here it is using requests:

```
headers = {'Authorization' : 'Bearer %s' % access_token}
files = {'file': open('555.jpg', 'rb')}
r = requests.post(timeline_url,files=files, headers=headers)
```

Returns 401 -> header

Thank you
This is how I did it and how the python client library does it. ``` from email.mime.multipart import MIMEMultipart from email.mime.nonmultipart import MIMENonMultipart from email.mime.image import MIMEImage mime_root = MIMEMultipart('related', '===============xxxxxxxxxxxxx==') headers= {'Content-Type': 'multipart/related; ' 'boundary="%s"' % mime_root.get_boundary(), 'Authorization':'Bearer %s' % access_token} setattr(mime_root, '_write_headers', lambda self: None) #Create the metadata part of the MIME mime_text = MIMENonMultipart(*['application','json']) mime_text.set_payload("{'text':'waddup doe!'}") print "Attaching the json" mime_root.attach(mime_text) if method == 'Image': #DO Image file_upload = open('555.jpg', 'rb') mime_image = MIMENonMultipart(*['image', 'jpeg']) #add the required header mime_image['Content-Transfer-Encoding'] = 'binary' #read the file as binary mime_image.set_payload(file_upload.read()) print "attaching the jpeg" mime_root.attach(mime_image) elif method == 'Video': mime_video = MIMENonMultipart(*['video', 'vnd.google-glass.stream-url']) #add the payload mime_video.set_payload('https://dl.dropboxusercontent.com/u/6562706/sweetie-wobbly-cat-720p.mp4') mime_root.attach(mime_video) ``` Mark Scheel I used your video for testing purposes :) Thank you.
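For reference, the multipart/related body itself can be built and inspected without making any HTTP request. A stripped-down sketch of the same `email.mime` approach (the payload strings here are placeholders, not real Mirror API data):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart

def build_timeline_body(json_payload, stream_url):
    """Build a multipart/related body: one JSON part plus one stream-URL part."""
    root = MIMEMultipart('related')

    # Metadata part: the timeline item as JSON
    meta = MIMENonMultipart('application', 'json')
    meta.set_payload(json_payload)
    root.attach(meta)

    # Media part: the streaming video URL, not the video bytes themselves
    video = MIMENonMultipart('video', 'vnd.google-glass.stream-url')
    video.set_payload(stream_url)
    root.attach(video)
    return root.as_string()

body = build_timeline_body('{"text": "hello"}', 'https://example.com/video.mp4')
print('video/vnd.google-glass.stream-url' in body)  # True
```

Printing `body` shows the boundary lines and per-part `Content-Type` headers, which is handy when debugging a 400 from the server.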
There is a working Curl example of a multipart request that uses the streaming video url feature here: [Previous Streaming Video Answer with Curl example](https://stackoverflow.com/questions/16997932/attaching-video-with-video-vnd-google-glass-stream-url-after-update-xe6/16998808#16998808) It does exactly what you are trying to do, but with Curl. You just need to adapt that to your technology stack. The 401 you are receiving is going to prevent you even if you use the right syntax. A 401 response indicates you do not have authorization to modify the timeline. Make sure you can insert a simple hello world text only card first. Once you get past the 401 error and get into parsing errors and format issues the link above should be everything you need. One last note, you don't need [urllib2](http://docs.python.org/2/library/urllib2.html), the Mirror API team dropped a gem of a feature in our lap and we don't need to be bothered with getting the binary of the video, check that example linked above I *only* provided a URL in the multipart payload, no need to stream the binary data! Google does all the magic in XE6 and above for us. Thanks Team Glass! I think you will find this is simpler than you think. Try out the curl example and watch out for incompatible video types, when you get that far, if you don't use a compatible type it will appear not to work in Glass, make sure your video is encoded in a Glass friendly format. Good luck!
Multipart POST request Google Glass
[ "python", "django", "google-mirror-api" ]
I was using pyinstaller before to try and get my app with twisted as an executable, but I got this error when executing:

```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/cx_Freeze/initscripts/Console.py", line 27, in <module>
    exec code in m.__dict__
  File "client_test.py", line 2, in <module>
  File "/usr/local/lib/python2.7/dist-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/__init__.py", line 53, in <module>
    _checkRequirements()
  File "/usr/local/lib/python2.7/dist-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/__init__.py", line 37, in _checkRequirements
    raise ImportError(required + ": no module named zope.interface.")
ImportError: Twisted requires zope.interface 3.6.0 or later: no module named zope.interface.
```

So then, I tried using cx\_freeze, but I get the **exact** same error, even when using `'namespace_packages': ['zope']` like [this example](https://code.google.com/p/pythonxy/source/browse/src/python/cx_Freeze/DOC/samples/zope/qotd.py?repo=xy-27&r=11f515ac7e0b807a35c3be657333b794616e3601).

From where I'm building the executable, I can open a python interpreter and successfully import zope.interface, and I installed it through `easy_install`, then ran `pip install -U zope.interface` later on, which didn't have any effect.

Here's my `setup.py` for cx\_freeze:

```
import sys
from cx_Freeze import setup, Executable

# Dependencies are automatically detected, but it might need fine tuning.
build_exe_options = {"excludes": ["tkinter"],
                     'namespace_packages':['zope'],
                     'append_script_to_exe':True
                     }

setup(  name = "exetest",
        version = "0.1",
        description = "My first executable",
        options = {"build_exe": build_exe_options},
        executables = [Executable("client_test.py")])
```

**EDIT 1:** Forgot to mention that I also tried putting a blank `__init__.py` file under `zope.interface`, and that also didn't help.
**EDIT 2:** When using cx\_freeze, inside the library.zip of the build folder, zope.interface is in there and I don't think any of the modules are missing, but I still get the `ImportError` This is from the output of cx\_freeze: ``` Missing modules: ? _md5 imported from hashlib ? _sha imported from hashlib ? _sha256 imported from hashlib ? _sha512 imported from hashlib ? builtins imported from zope.schema._compat ? ctypes.macholib.dyld imported from ctypes.util ? dl imported from OpenSSL ? html imported from twisted.web.server ? netbios imported from uuid ? ordereddict imported from zope.schema._compat ? queue imported from twisted.internet.threads ? twisted.python._epoll imported from twisted.internet.epollreactor ? twisted.python._initgroups imported from twisted.python.util ? urllib.parse imported from twisted.web.server ? win32wnet imported from uuid ? wsaccel.utf8validator imported from autobahn.utf8validator ? zope.i18nmessageid imported from zope.schema._messageid ? zope.testing.cleanup imported from zope.schema.vocabulary ``` **EDIT 3:** Here's the sys.path output from my executable (shortened with the `..`) ``` ['../build/exe.linux-x86_64-2.7/client_test', '../build/exe.linux-x86_64-2.7', '../build/exe.linux-x86_64-2.7/client_test.zip', '../build/exe.linux-x86_64-2.7/library.zip'] ``` Here's the error I get when I import `zope.interface` directly: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/cx_Freeze/initscripts/Console.py", line 27, in <module> exec code in m.__dict__ File "client_test.py", line 3, in <module> File "/usr/local/lib/python2.7/dist-packages/zope.schema-4.3.2-py2.7.egg/zope/__init__.py", line 1, in <module> __import__('pkg_resources').declare_namespace(__name__) ImportError: No module named pkg_resources ``` **After adding `pkg_resources` to my includes in my cx\_freeze setup.py, the program ran**
Add `pkg_resources` to your `includes` in your setup.py for cx\_Freeze.
I had the same issue with cx\_freeze. None of the above solutions seemed to work in my case. For me this solution from [here](http://sourceforge.net/p/cx-freeze/mailman/message/26483012/) worked : You need to actually create `zope/__init__.py` as an empty file so that the normal processing performed by imp.find\_module() actually works
ImportError with cx_Freeze and pyinstaller
[ "python", "twisted", "pyinstaller", "cx-freeze" ]
How do I create a stored procedure that returns totals grouped by month and year? The sale table looks like this:

```
year    week    date          value
2008     4      2008-01-21     50
2008     5      2008-02-22     25
2008     6      2008-02-23     30
```
Try this

```
CREATE PROCEDURE dbo.GetTotalAmount
AS
BEGIN
SELECT DATEPART(YEAR, date) as [Year], MONTH(date) as [Month], SUM(Value) as [Value]
FROM Sale
GROUP BY DATEPART(YEAR, date), MONTH(date)
END

EXEC dbo.GetTotalAmount
```
Here is a full stored procedure code with parameters and example on how to run this. ``` CREATE PROCEDURE dbo.GetTotalByDate ( @StartDate datetime, @EndDate datetime ) AS BEGIN SELECT S.year as [Year], MONTH(S.date) as [Month], SUM(S.Value) as [Value] FROM Sale S WHERE S.date BETWEEN @StartDate and @EndDate GROUP BY S.year, MONTH(S.date) ORDER BY [Year], [Month] END EXEC dbo.GetTotalByDate @StartDate = '01/01/2013', @EndDate = '01/01/2014' ```
How to write Stored procedure to get the group data of a month and year
[ "sql", "sql-server" ]
How can I format a string containing a decimal number so that it is shown with spaces between every three digits? I can make something like this:

```
some_result = '12345678,46'
' '.join(re.findall('...?', some_result[:some_result.find(',')]))+some_result[some_result.find(','):]
```

and the result is:

```
'123 456 78,46'
```

but I want:

```
'12 345 678,46'
```
This is a bit hacky, but:

```
format(12345678.46, ',').replace(',', ' ').replace('.', ',')
```

As described in [Format specification mini-language](http://docs.python.org/2/library/string.html#format-specification-mini-language), in a format\_spec:

> The ',' option signals the use of a comma for a thousands separator.

Then we just replace each comma with a space, then the decimal point with a comma, and we're done.

For more complex cases using `str.format` instead of `format`, the format\_spec goes after the colon, as in:

```
'{:,}'.format(12345678.46)
```

See [PEP 378](http://www.python.org/dev/peps/pep-0378/) for details.

---

Meanwhile, if you're just trying to use the standard grouping and separators for your system's locale, there are easier ways to do that—the `n` format type, or the `locale.format` function, etc. For example:

```
>>> locale.setlocale(locale.LC_NUMERIC, 'pl_PL')
>>> format(12345678, 'n')
12 345 678
>>> locale.format('%.2f', 12345678.46, grouping=True)
12 345 678,46
>>> locale.setlocale(locale.LC_NUMERIC, 'fr_FR')
>>> locale.format('%.2f', 12345678.46, grouping=True)
12345678,46
>>> locale.setlocale(locale.LC_ALL, 'en_AU')
>>> locale.format('%.2f', 12345678.46, grouping=True)
12,345,678.46
```

If your system locale is, say, `pl_PL`, just calling `locale.setlocale(locale.LC_NUMERIC, '')` (or `locale.setlocale(locale.LC_ALL, '')`) will pick up the Polish settings that you want, but the same person running your program in Australia will pick up the Australian settings that he wants.
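The replace-based trick from the top of this answer can be wrapped in a reusable helper (a minimal sketch; `pl_format` is just an illustrative name, and two decimal places are assumed):

```python
def pl_format(value):
    """Format a number with spaces as thousands separators and a decimal comma."""
    # ',.2f' gives e.g. '12,345,678.46'; then swap separators
    return format(value, ',.2f').replace(',', ' ').replace('.', ',')

print(pl_format(12345678.46))  # 12 345 678,46
print(pl_format(1234.5))       # 1 234,50
```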
I think a regex would be much nicer: ``` >>> import re >>> some_result = '12345678,46' >>> re.sub(r"\B(?=(?:\d{3})+,)", " ", some_result) '12 345 678,46' ``` **Explanation:** ``` \B # Assert that we're not at the start of a number (?= # Assert that the following regex could match from here: (?: # The following non-capturing group \d{3} # which contains three digits )+ # and can be matched one or more times , # until a comma follows. ) # End of lookahead assertion ```
Format string - spaces between every three digit
[ "python" ]
I have the following database schema for an attendance system:

![database schema](https://i.stack.imgur.com/Ene5p.png)

How would I write an SQL query to generate a good report of entries on day X? I need it to generate a report that has

```
Employee Name | TimeIn | TimeOut
Bob           | 10:00  | 11:00
Sam           | 10:30  | 18:00
Bob           | 11:30  | 15:00
```

but whether a row is a time in or a time out is set by entryType (1 being in, 0 being out), so I would alias TimeIn and TimeOut. My attempt was

```
SELECT firstName, time from log INNER JOIN users on log.employeeID = users.employeeID WHERE date = GETDATE()
```

but this doesn't handle the fact that some times are entry, some are exit. Note that there can be multiple sign ins per date.

Update: Another attempt, but the subquery returns multiple rows

```
select firstName, 
(select time as timeIn from log where entryType = 1), 
(select time as timeOut from log where entryType = 0) 
inner join users on log.uID = users.uID 
from log 
group by uID
```
This works in Oracle (apologies for the non-ANSI style, but you should get the drift).. ``` SELECT FORENAME,SURNAME,L1.TIME IN_TIME,L2.TIME OUT_TIME FROM EMPLOYEES EMP, LOG L1, LOG L2 WHERE EMP.EMPLOYEE_ID = L1.EMPLOYEE_ID AND EMP.EMPLOYEE_ID = L2.EMPLOYEE_ID AND L1.ENTRYTYPE = 1 AND L2.ENTRYTYPE = 0 AND L2.TIME = (SELECT MIN(TIME) FROM LOG WHERE EMPLOYEE_ID = L2.EMPLOYEE_ID AND L2.ENTRYTYPE = 0 AND TIME > L1.TIME) ``` Update: Ah, yes, hadn't considered that. In this case you need an outer join. something like this (untested): ``` SELECT FORENAME,SURNAME,L1.TIME IN_TIME,L2.TIME OUT_TIME FROM EMPLOYEES EMP INNER JOIN LOG L1 ON EMP.EMPLOYEE_ID = L1.EMPLOYEE_ID AND L1.ENTRYTYPE = 1 LEFT OUTER JOIN LOG L2 ON EMP.EMPLOYEE_ID = L2.EMPLOYEE_ID AND L2.ENTRYTYPE = 0 AND L2.TIME = (SELECT MIN(TIME) FROM LOG WHERE EMPLOYEE_ID = L2.EMPLOYEE_ID AND L2.ENTRYTYPE = 0 AND TIME > L1.TIME) ```
Simply, this will work. Try this:

```
SELECT FORENAME,SURNAME,LG.IN_TIME,LG.OUT_TIME FROM EMPLOYEES EMP INNER JOIN 
 (SELECT MIN(TIME) IN_TIME,MAX(TIME) OUT_TIME,EMPLOYEE_ID FROM LOG GROUP BY EMPLOYEE_ID) LG
 ON EMP.EMPLOYEE_ID=LG.EMPLOYEE_ID
```

**Note** : I didn't include the entry type because at any time the min time will be the swipe in and the max time will be the swipe out

# Updated

To show the number of sign-ins and sign-outs, try something like this (using the question's entryType values, where 1 is in and 0 is out):

```
SELECT FORENAME,SURNAME,LG.IN_TIME,LG.OUT_TIME,LG.no_of_ins,LG.no_of_outs
FROM EMPLOYEES EMP INNER JOIN 
 (SELECT MIN(TIME) IN_TIME,MAX(TIME) OUT_TIME,EMPLOYEE_ID,
         SUM(CASE WHEN ENTRYTYPE = 1 THEN 1 ELSE 0 END) no_of_ins,
         SUM(CASE WHEN ENTRYTYPE = 0 THEN 1 ELSE 0 END) no_of_outs
  FROM LOG
  GROUP BY EMPLOYEE_ID) LG
 ON EMP.EMPLOYEE_ID=LG.EMPLOYEE_ID
```
Constructing an SQL query for schema
[ "sql", "database", "schema" ]
I apologize if this is a question that has already been resolved. I want to get the current directory when running a Python script or within Python. The following will return the full path including the current directory:

```
os.getcwd()
```

I can also get the path all the way up to the current directory:

```
os.path.dirname(os.getcwd())
```

Using `os.path.split` will return the same thing as the above, plus the current folder, but then I also end up with an object I don't want:

```
(thing_I_dont_want, thing_I_want) = os.path.split(os.getcwd())
```

Is there a way I can get just the thing I want, the current folder, without creating any objects I don't want? Alternatively, is there something I can put in place of the variable `thing_I_dont_want` that will prevent it from being created (e.g. `(*, thing_I_want)`)?

Thanks!
Like this: ``` os.path.split(os.getcwd())[1] ``` Although `os.path.split` returns a tuple, you don't need to unpack it. You can simply select the item that you need and ignore the one that you don't need.
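Wrapped up as a small function, with an explicit path argument so it is easy to test (a minimal sketch; the function name is just illustrative):

```python
import os

def current_dir_name(path=None):
    """Return only the name of the deepest directory in *path* (default: cwd)."""
    if path is None:
        path = os.getcwd()
    # os.path.split returns (head, tail); we only keep the tail
    return os.path.split(path)[1]

print(current_dir_name('/home/user/projects'))  # projects
```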
Use [`os.path.split`](http://docs.python.org/2/library/os.path.html#os.path.split): ``` >>> os.path.split(os.getcwd()) ('/home/user', 'py') >>> os.path.split(os.getcwd())[-1] 'py' ``` help on `os.path.split`: ``` >>> print os.path.split.__doc__ Split a pathname. Returns tuple "(head, tail)" where "tail" is everything after the final slash. Either part may be empty. ```
Getting just the current directory without the full path in python
[ "python", "os.path" ]
I would like to know how I can get my code to not crash if a user types anything other than a number for input. I thought that my else statement would cover it but I get an error. > Traceback (most recent call last): File "C:/Python33/Skechers.py", > line 22, in > run\_prog = input() File "", line 1, in NameError: name 's' is not defined In this instance I typed the letter "s". Below is the portion of the code that gives me the issue. The program runs flawlessly other than if you give it letters or symbols. I want it to print "Invalid input" instead of crashing if possible. Is there a trick that I have to do with another elif statement and isalpha function? ``` while times_run == 0: print("Would you like to run the calculation?") print("Press 1 for YES.") print("Press 2 for NO.") run_prog = input() if run_prog == 1: total() times_run = 1 elif run_prog == 2: exit() else: print ("Invalid input") print(" ") ``` I tried a few variations of this with no success. ``` elif str(run_prog): print ("Invalid: input") print(" ") ``` I appreciate any feedback even if it is for me to reference a specific part of the python manual. Thanks!
Contrary to what you think, your script is *not* being run in Python 3.x. Somewhere on your system you have Python 2.x installed and the script is running in that, causing it to use 2.x's insecure/inappropriate `input()` instead.
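Whichever interpreter ends up running the script, the robust pattern is to validate the raw text yourself instead of relying on `input()`'s evaluation. A minimal sketch (`parse_menu_choice` is a hypothetical helper, not part of the original script):

```python
def parse_menu_choice(text, valid=(1, 2)):
    """Return the chosen option as an int, or None for any invalid input."""
    try:
        choice = int(text)
    except ValueError:
        # Letters, symbols, empty strings, etc. all land here
        return None
    return choice if choice in valid else None

print(parse_menu_choice('1'))  # 1
print(parse_menu_choice('s'))  # None
```

The menu loop can then print "Invalid input" whenever the helper returns `None`, without ever crashing.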
The error message you showed indicates that `input()` tried to evaluate the string typed as a Python expression. This in turn means you're not actually using Python 3; `input` only does that in 2.x.

Anyhow, I strongly recommend you do it this way instead, as it makes explicit the kind of input you want.

```
import sys

while times_run == 0:
    sys.stdout.write("Would you like to run the calculation?\n"
                     "Press 1 for YES.\n"
                     "Press 2 for NO.\n")
    try:
        run_prog = int(sys.stdin.readline())
    except ValueError:
        run_prog = 0
    if not (1 <= run_prog <= 2):
        sys.stdout.write("Invalid input.\n")
        continue
    # ... what you have ...
```
Getting error if user types a string
[ "python", "python-3.x" ]
I would like to load the contents of a .txt file as a string and extract a specific piece of information. The information, which has lots of text before and after its occurrence, looks like this:

```
ValueName: 1234
```

But could also look like:

```
ValueName: 123456
```

That is to say, the value is always a string of integers, but of varying length. I would like to find the 'ValueName' in the string and return the characters starting 6 characters after it. My thought is to check whether the 10 characters starting 6 characters after the 'ValueName' are integers, and return them in order if they are. Is this possible?

Thanks kindly.
You can use a regular expression to extract the value following `ValueName:` ``` >>> import re >>> line = 'some dummy text ValueName: 123456 some dummy text' >>> m = re.findall(r'ValueName:\s+([0-9]+)',line) >>> m ['123456'] ``` This will find multiple matches if they exist. ``` >>> import re >>> line = 'blah blah ValueName: 1234 blah blah ValueName: 5678' >>> m = re.findall(r'ValueName:\s+([0-9]+)',line) >>> m ['1234', '5678'] ```
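The same pattern works on a whole file's contents at once, since `re.findall` scans across lines by default. A minimal sketch, with an inline sample string standing in for the loaded file:

```python
import re

# Stand-in for the contents of the .txt file
text = """some text before
ValueName: 1234
more text in between
ValueName: 123456
some text after"""

# \s* tolerates any amount of whitespace after the colon
values = re.findall(r'ValueName:\s*(\d+)', text)
print(values)  # ['1234', '123456']
```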
Regular expressions will make this simpler, as Brian's answer (among others) shows. But don't use a regex if you're not willing to understand what it does. If you want to put off the learning curve for now,\* this isn't that hard to do with simple string processing: ``` def numeric_value_names(path): with open(path) as f: for line in f: bits = line.partition('ValueName:') if bits[1] and not bits[0]: rest = bits[2][6:].rstrip() if rest.isdigit(): yield rest ``` --- Using `str.partition` this way may be a bit obtuse to novices, so you may want to make the condition more obvious: ``` def numeric_value_names(path): with open(path) as f: for line in f: if line.startswith('ValueName:'): bits = line.partition('ValueName:') rest = bits[2][6:].rstrip() if rest.isdigit(): yield rest ``` --- \* You *definitely* want to learn simple regular expressions at some point; the only question is whether you have something more pressing to do now…
Python Extract Values of varying length from Text Files
[ "python" ]
So, I'm hard at work on a text-based RPG game in Python 2.7, but I came across a problem in the character menu. Here's what it looks like:

```
def raceselect(n):
    if n==0:
        print "(name) the Human."
    if n==1:
        print "(name) the Dwarf."
    if n==2:
        print "(name) the Elf."
    if n==3:
        print "(name) the Halfling."

n = raw_input
raceselect(n)
```

0, 1, 2, and 3 are all used as raw\_input answers on the previous screen when prompted with the options. However, when the script is run, the options are shown and the input box appears; but when a number is entered, the script simply ends. I can't for the life of me figure out what is causing this, unless it's the fact that I used (name) and raw\_input earlier in the script, which I doubt.

Please help!

--Crux\_Haloine
You need to turn the raw input into a int: ``` n = int(raw_input()) ``` Also, you need to call the function, so use `raw_input()` rather than just `raw_input`
`n = raw_input` here will bring you nothing. You should use `n = int(raw_input())` according to your need. And I think it is better for you to use a `dict` or `list` rather than several `if` statements:

```
def raceselect(n):
    races = {0: 'Human', 1: 'Dwarf', 2: 'Elf', 3: 'Halfling'}
    if n in races:
        print '(name) the %s.' % races[n]
    else:
        print 'wrong input'
```
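A variant that returns the string instead of printing it, which makes the menu logic easy to unit-test (a sketch assuming the same race numbering as the question; the names `RACES` and `race_line` are illustrative):

```python
RACES = {0: 'Human', 1: 'Dwarf', 2: 'Elf', 3: 'Halfling'}

def race_line(name, n):
    """Return the character description for race number *n*."""
    if n in RACES:
        return '%s the %s.' % (name, RACES[n])
    return 'Invalid input'

print(race_line('Crux', 2))  # Crux the Elf.
print(race_line('Crux', 9))  # Invalid input
```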
Text Based RPG Functioning Error?
[ "python", "text", "python-2.7" ]
I just deployed my first ever web app, and I am curious whether there is an easy way to track every time someone visits my website. I am sure there is, but how?
I know from my own experience that people are obsessed with traffic, statistics, looking at other sites – tracking their stats and so on. And if there is enough demand, of course there are sites to satisfy you.

I wanted to put those sites and tools together in one list, because at least for me this field was really unclear – I didn't know what Google PageRank, Alexa, Compete, or Technorati rankings mean, and I could go on.

I must say these stats are not always precise, but they at least give an overview of how popular a certain page is and how many visitors that site gets – and if you compare those stats with your own site statistics, you can get pretty precise results.

<http://www.stuffedweb.com/3-tools-to-track-your-website-visitors/>

<http://www.1stwebdesigner.com/design/10-ways-how-to-track-site-traffic-popularity-statistics/>
Easy as pie, use Google Analytics, you just have to include a tiny script in your app's pages <http://www.google.com/analytics/>
how do I track how many users visit my website
[ "python", "flask", "pythonanywhere" ]
I have an object that has optional fields. I have defined my serializer this way:

```
class ProductSerializer(serializers.Serializer):
    code = serializers.Field(source="Code")
    classification = serializers.CharField(source="Classification", required=False)
```

I [thought](http://django-rest-framework.org/api-guide/fields.html#core-arguments) `required=False` would do the job of bypassing the field if it doesn't exist. However, it is mentioned in the documentation that this affects deserialization rather than serialization.

I'm getting the following error:

```
'Product' object has no attribute 'Classification'
```

This error occurs when I try to access `.data` of the serialized instance. (Doesn't this mean it's deserialization that's raising this?) It happens for instances that do not have `Classification`. If I omit `Classification` from the serializer class it works just fine.

How do I correctly do this? Serialize an object with optional fields, that is.
The serializers are deliberately designed to use a fixed set of fields so you wouldn't easily be able to optionally drop out one of the keys. You could use a [SerializerMethodField](http://www.django-rest-framework.org/api-guide/fields/#serializermethodfield) to either return the field value or `None` if the field doesn't exist, or you could not use serializers at all and simply write a view that returns the response directly. **Update for REST framework 3.0** `serializer.fields` can be modified on an instantiated serializer. When dynamic serializer classes are required I'd probably suggest altering the fields in a custom `Serializer.__init__()` method.
The method described below worked for me: pretty simple and easy.

DRF version used = djangorestframework (3.1.0)

```
class test(serializers.Serializer):
    id = serializers.IntegerField()
    name = serializers.CharField(required=False, default='some_default_value')
```
Django REST Framework - Serializing optional fields
[ "python", "django", "serialization", "django-rest-framework" ]
I have the following table layout (for columns): ID renterID BookID dateOfRental And I was asked to make a report page where I could see how many books each renterID has rented on a certain date. Now I am not even sure this is possible, but I thought I'd ask you guys for at least a direction or similar snippet from which I can learn?
Something like: ``` SELECT renterID , COUNT(DISTINCT BookID) FROM your_table WHERE dateOfRental = 'the_date' -- or if you want a date period: dateOfRental BETWEEN start_date AND end_date GROUP BY renterID ; ```
I assume that you want this for a particular date: ``` SELECT COUNT(*) BookCount, renterId, dateOfRental FROM someTable WHERE dateOfRental = [a certain date] GROUP BY renterId ```
Count rows with same colum value for certain date
[ "mysql", "sql", "database" ]
Hi, I used the Django inbuilt auth urls and views for my project and have now finished the initial user account creation/login/reset password process. Now the user can log in and be redirected to the after-successful-login url accounts/profile/. I have several doubts about the Django login function. For convenience, I've copy-pasted the Django inbuilt login function code below.

```
@sensitive_post_parameters()
@csrf_protect
@never_cache
def login(request, template_name='registration/login.html',
          redirect_field_name=REDIRECT_FIELD_NAME,
          authentication_form=AuthenticationForm,
          current_app=None, extra_context=None):
    """
    Displays the login form and handles the login action.
    """
    redirect_to = request.REQUEST.get(redirect_field_name, '')

    if request.method == "POST":
        form = authentication_form(request, data=request.POST)
        if form.is_valid():

            # Ensure the user-originating redirection url is safe.
            if not is_safe_url(url=redirect_to, host=request.get_host()):
                redirect_to = resolve_url(settings.LOGIN_REDIRECT_URL)

            # Okay, security check complete. Log the user in.
            auth_login(request, form.get_user())

            return HttpResponseRedirect(redirect_to)
    else:
        form = authentication_form(request)

    current_site = get_current_site(request)

    context = {
        'form': form,
        redirect_field_name: redirect_to,
        'site': current_site,
        'site_name': current_site.name,
    }
    if extra_context is not None:
        context.update(extra_context)
    return TemplateResponse(request, template_name, context,
                            current_app=current_app)
```

My questions are:

**1 Is the `REDIRECT_FIELD_NAME` in the function set as '/profile/' in `django.contrib.auth`?**

I could see this variable is imported from `django.contrib.auth`

```
from django.contrib.auth import REDIRECT_FIELD_NAME, login as auth_login, logout as auth_logout, get_user_model
```

I don't have any setting for this variable, but after the user successfully logs in, the page will be directed to /accounts/profile/

**2 Has the login function passed the account info about the user?
If yes, how can I access it?**

From the code, if the user successfully logs in, the page will be redirected:

`return HttpResponseRedirect(redirect_to)`

In my case, it is redirected to accounts/profile/. Initially the view for that url was simply a

```
HttpResponse("You have logged in successfully")
```

Now that I am trying to implement the view function, I realize that no info about the user has been passed. I've tried to `print request` in the view function, but there is no info about the user in the message printed in the server terminal; all I get is a long list of system settings and other info.

However, the login should pass the info of who has just successfully logged in to the successful-login url, right?

Thank you very much for explaining.
After the login, you can access the user info by referring to `request.user` in views and just `{{ user }}` in templates. All you need to make sure of is that you're passing the [`RequestContext`](https://docs.djangoproject.com/en/dev/ref/templates/api/#subclassing-context-requestcontext) in the `HttpResponse` for the future request.

Yes, `REDIRECT_FIELD_NAME` is defined in [`__init__.py`](https://github.com/django/django/blob/master/django/contrib/auth/__init__.py#L12) of django.contrib.auth, and it is simply `"next"`, the name of the field passed from the login form.

In Django, there is more than one way to force a user to log in: by decorating a view function with `@login_required`, by calling the built-in login view for a user-defined URL, and so on. Refer to the login settings variables [here](https://docs.djangoproject.com/en/dev/ref/settings/#login-redirect-url); you'll get some more ideas. [Building custom login page](http://www.fir3net.com/Django/django.html) gives you an example of a custom login implementation.

Consider you have decorated a view with `@login_required` and its corresponding URL is `/login_test/`. Then the `{{ next }}` context variable in the login form will be rendered with `/login_test/`. So after you log in,

`<input type="hidden" name="next" value="{{ next }}" />`

this element's value will be taken for redirecting, as per the `REDIRECT_FIELD_NAME`. Though I suspect that that example is missing the setting of `settings.LOGIN_URL` to the URL `login/`. Never mind, it's being passed as an argument in the decorator itself.
To override this behavior just put following in `settings.py` of your app : `LOGIN_REDIRECT_URL = "/"` This will redirect to your home page. You can change this url to preferred url.
Django- why inbuilt auth login function not passing info about user to after successful login url
[ "python", "django", "django-authentication", "django-login" ]
How can I split the following string based on the '-' character? So if I had this string: `LD-23DSP-1430` How could I split it into separate columns like this: ``` LD 23DSP 1430 ``` Also, is there a way to split each character into a separate field if I needed to (without the '-')? I'm trying to find a way to replace each letter with the NATO alphabet. So this would be..... Lima Delta Twenty Three Delta Sierra Papa Fourteen Thirty.... in one field. I know I can get the left side like this: ``` LEFT(@item, CHARINDEX('-', @item) - 1) ```
I wouldn't exactly say it is easy or obvious, but with just two hyphens, you can reverse the string and it is not too hard: ``` with t as (select 'LD-23DSP-1430' as val) select t.*, LEFT(val, charindex('-', val) - 1), SUBSTRING(val, charindex('-', val)+1, len(val) - CHARINDEX('-', reverse(val)) - charindex('-', val)), REVERSE(LEFT(reverse(val), charindex('-', reverse(val)) - 1)) from t; ``` Beyond that and you might want to use `split()` instead.
Here's a little function that will do "NATO encoding" for you:

```
CREATE FUNCTION dbo.NATOEncode (
   @String varchar(max)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN (
   WITH L1 (N) AS (SELECT 1 UNION ALL SELECT 1),
   L2 (N) AS (SELECT 1 FROM L1, L1 B),
   L3 (N) AS (SELECT 1 FROM L2, L2 B),
   L4 (N) AS (SELECT 1 FROM L3, L3 B),
   L5 (N) AS (SELECT 1 FROM L4, L4 C),
   L6 (N) AS (SELECT 1 FROM L5, L5 C),
   Nums (Num) AS (SELECT Row_Number() OVER (ORDER BY (SELECT 1)) FROM L6)
   SELECT
      NATOString = Substring((
         SELECT Convert(varchar(max), ' ' + D.Word)
         FROM
            Nums N
            INNER JOIN (VALUES
               ('A', 'Alpha'), ('B', 'Bravo'), ('C', 'Charlie'), ('D', 'Delta'),
               ('E', 'Echo'), ('F', 'Foxtrot'), ('G', 'Golf'), ('H', 'Hotel'),
               ('I', 'India'), ('J', 'Juliet'), ('K', 'Kilo'), ('L', 'Lima'),
               ('M', 'Mike'), ('N', 'November'), ('O', 'Oscar'), ('P', 'Papa'),
               ('Q', 'Quebec'), ('R', 'Romeo'), ('S', 'Sierra'), ('T', 'Tango'),
               ('U', 'Uniform'), ('V', 'Victor'), ('W', 'Whiskey'), ('X', 'X-Ray'),
               ('Y', 'Yankee'), ('Z', 'Zulu'), ('0', 'Zero'), ('1', 'One'),
               ('2', 'Two'), ('3', 'Three'), ('4', 'Four'), ('5', 'Five'),
               ('6', 'Six'), ('7', 'Seven'), ('8', 'Eight'), ('9', 'Niner')
            ) D (Digit, Word)
               ON Substring(@String, N.Num, 1) = D.Digit
         WHERE
            N.Num <= Len(@String)
         FOR XML PATH(''), TYPE
      ).value('.[1]', 'varchar(max)'), 2, 2147483647)
);
```

This function will work on even very long strings, and performs pretty well (I ran it against a 100,000-character string and it returned in 589 ms). Here's an example of how to use it:

```
SELECT NATOString FROM dbo.NATOEncode('LD-23DSP-1430');
-- Output: Lima Delta Two Three Delta Sierra Papa One Four Three Zero
```

I intentionally made it a table-valued function so it could be inlined into a query if you run it against many rows at once, just use `CROSS APPLY` or wrap the above example in parentheses to use it as a value in the `SELECT` clause (you can put a column name in the function parameter position).
SQL Server 2005 Using CHARINDEX() To split a string
[ "sql", "sql-server-2005" ]
It is a basic question. I am trying the following code: ``` class SMS_store: def __init__(self): self=[] #probably something is wrong here def add_new_arrival(self,from_number,time_arrived,text_of_SMS): self.append([False,from_number,time_arrived,text_of_SMS]) #append list to self list self[len(self)-1]=tuple(self[len(self)-1]) def message_count(self): return len(self) my_inbox=SMS_store() my_inbox.add_new_arrival('01234','9:37 AM','How are you?') ``` But I get the following error: ``` >>> Traceback (most recent call last): File "C:\Users\Arnob\Desktop\New Text Document.py", line 15, in <module> my_inbox.add_new_arrival('01234','9:37 AM','How are you?') File "C:\Users\Arnob\Desktop\New Text Document.py", line 8, in add_new_arrival self.append([False,from_number,time_arrived,text_of_SMS]) #append list to self list AttributeError: 'SMS_store' object has no attribute 'append' >>> ``` What is wrong in my code?
You can subclass `list` like this ``` class SMS_store(list): def add_new_arrival(self, from_number, time_arrived, text_of_SMS): self.append((False, from_number, time_arrived, text_of_SMS)) #append tuple to self def message_count(self): return len(self) ``` Notice there is no need for `__init__` unless you wish to do something extra there. You don't need to append a list and then turn it into a tuple, you can create the tuple directly with `()` instead of `[]`
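For reference, here is a runnable Python 3 sketch of the subclassing approach (the method and field names follow the question; `message_count` is just `len` of the store):

```python
class SMSStore(list):
    """A list subclass whose items are SMS messages stored as tuples."""

    def add_new_arrival(self, from_number, time_arrived, text_of_sms):
        # Build the tuple directly instead of appending a list and
        # converting it afterwards, as the question's code tried to do.
        self.append((False, from_number, time_arrived, text_of_sms))

    def message_count(self):
        return len(self)


inbox = SMSStore()
inbox.add_new_arrival('01234', '9:37 AM', 'How are you?')
print(inbox.message_count())  # 1
print(inbox[0][1])            # 01234
```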
If you want to inherit from `list`, use the following: ``` class SMS_store(list): ^^^^^^ ``` and remove that assignment to `self` from the `__init__` method. That said, you might want to simply have a named attribute containing the list: ``` class SMS_store(object): def __init__(self): self.messages = [] def add_new_arrival(self, from_number, time_arrived, text_of_SMS): self.messages.append((False,from_number,time_arrived,text_of_SMS)) def message_count(self): return len(self.messages) my_inbox = SMS_store() my_inbox.add_new_arrival('01234','9:37 AM','How are you?') ``` As far as representing actual messages, this sounds like a good use case for [`namedtuple`](http://docs.python.org/2/library/collections.html#collections.namedtuple). It's just like a tuple, but allows access to fields by name. Here is a quick illustration: ``` import collections SMS = collections.namedtuple('SMS', 'from_number time_arrived text_of_SMS') sms = SMS(from_number='01234', time_arrived='9:37 AM', text_of_SMS='How are you?') print sms.text_of_SMS ```
Create List Object Class in Python
[ "python", "list", "class", "append" ]
I am trying to visualize graphs generated from networkx, using d3py. I used the example provided (<https://github.com/mikedewar/d3py/blob/master/examples/d3py_graph.py>) but all I get is the graph without node names, how do I plot node names as well? Also, how do I change edge and node colors?
*Here is a partial answer, but also read the caveat at the end.* The easiest thing is changing the node colour. If you run the example you pointed to with ``` with d3py.NetworkXFigure(G, width=500, height=500) as p: p += d3py.ForceLayout() p.css['.node'] = {'fill': 'blue', 'stroke': 'magenta'} p.show() ``` then you will have blue nodes with a magenta outline (you can use any html colour you like). For edge colour, there is a hardcoded `stroke: black;` in the file `d3py/geoms/graph.py`. You can comment it out and reinstall d3py ``` line = { "stroke-width": "1px", "stroke": "black", } self.css[".link"] = line ``` Then you can specify edge colour and width as follows: ``` with d3py.NetworkXFigure(G, width=500, height=500) as p: p += d3py.ForceLayout() p.css['.node'] = {'fill': 'blue', 'stroke': 'magenta'} p.css['.link'] = {'stroke': 'red', 'stoke-width': '3px'} p.show() ``` There does not seem to be any way to add node labels easily with d3py. Using just d3.js, it can be done relatively easily (see [this example](http://bl.ocks.org/MoritzStefaner/1377729)). Which leads us to the major caveat... **Caveat** As @flup already mentioned, d3py does not seem really mature (the code from their Github repository does not even run the NetworkX example, there is a missing `host` parameter in `networkx_figure.py`). There has been no activity in the repo since a couple of months, so I guess the project is maybe abandoned. You could probably reach your goal much more easily using d3.js directly.
Just in case anyone ends up here looking around at d3py. I made d3py a couple of years ago now to see what it would feel like to plot d3 graphics from Python. It was fun! But I didn't have the time to do it properly, which was a shame. However, Rob Story did! He made vincent, which is available at <https://github.com/wrobstory/vincent>. It's a much better implementation, and is much better thought out. Please do check it out!
How to plot graphs using networkx and d3py
[ "python", "d3.js", "graph", "networkx", "d3py" ]
I have two strings like `s1='fly,dream';s2='dream,fly'` and I want `s1` to compare equal to `s2`. The code I tried is: ``` def Isequal(m,n): s1=m.split(',') s2=n.split(',') s1.sort() s2.sort() if s1 == s2: print 'Equal' else: print s1,s2 ``` Note: s1 may be equal to s2. Then ``` def Isequal(m,n): s1=m.split(',') s2=n.split(',') if s1 == s2.reverse() || s1 == s2: print 'Equal' else: print s1,s2 ``` Is this code right? Is there something to improve?
Your code splits the two strings by `,` (which returns a `list`) and calls the `sort` method on the `list`. Since the two substrings are identical, sorting the list of the substrings results in equal lists. The best way to know what is happening is printing the stuff out. See the results. ``` >>> s1 = 'fly,dream' >>> s2 = 'dream,fly' >>> s1 = s1.split(',') >>> s1 ['fly', 'dream'] >>> s2 = s2.split(',') >>> s2 ['dream', 'fly'] >>> s1.sort() >>> s1 ['dream', 'fly'] >>> s2.sort() >>> s2 ['dream', 'fly'] >>> s1 == s2 True ``` If you want to check that the two strings consist of the same substrings, use `sets`, like follows : ``` >>> varOne = set(s1.split(',')) >>> varTwo = set(s2.split(',')) >>> varOne == varTwo True ``` **Beware** that sets only allow unique items, so `fly,dream,fly` and `dream,dream,fly` will result in `True` here.
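To make the difference concrete, here is a small Python 3 sketch contrasting the sorted-list check with the set check, including the duplicate-items caveat mentioned above (function names are illustrative):

```python
def is_equal_sorted(m, n):
    # Compare the comma-separated parts regardless of order;
    # duplicates still count.
    return sorted(m.split(',')) == sorted(n.split(','))

def is_equal_set(m, n):
    # Compare only the *distinct* parts; duplicates are ignored.
    return set(m.split(',')) == set(n.split(','))

print(is_equal_sorted('fly,dream', 'dream,fly'))            # True
print(is_equal_sorted('fly,dream,fly', 'dream,dream,fly'))  # False
print(is_equal_set('fly,dream,fly', 'dream,dream,fly'))     # True
```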
Set would be more elegant here: ``` def Isequal(m, n): s1 = set(m.split(',')) s2 = set(n.split(',')) if s1 == s2: print 'Equal' else: print s1, s2 ``` and should be more efficient too.
python return equal where the string is not actually equal?
[ "python" ]
In c/c++, we could have: ``` maxnum = 10; double xlist[maxnum]; ``` How to set a maximum length for a python list/set?
You don't and do not need to. Python lists grow and shrink dynamically as needed to fit their contents. Sets are implemented as a hash table, and like Python dictionaries grow and shrink dynamically as needed to fit their contents. Perhaps you were looking for [`collections.deque`](http://docs.python.org/2/library/collections.html#collections.deque) (which takes a `maxlen` parameter) or something using a [`heapq`](http://docs.python.org/2/library/heapq.html) (using `heapq.heappushpop()` when you have reached the maximum) instead?
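If the intent was a container that never holds more than N items, `collections.deque` with `maxlen` does that out of the box; a quick Python 3 sketch:

```python
from collections import deque

xlist = deque(maxlen=3)  # at most 3 items are ever kept
for value in [1.0, 2.0, 3.0, 4.0, 5.0]:
    xlist.append(value)  # older items fall off the left end automatically

print(list(xlist))  # [3.0, 4.0, 5.0]
print(xlist.maxlen) # 3
```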
Here is extended version of python's `list`. It behaves like `list`, but will raise `BoundExceedError`, if length is exceeded (tried in python 2.7): ``` class BoundExceedError(Exception): pass class BoundList(list): def __init__(self, *args, **kwargs): self.length = kwargs.pop('length', None) super(BoundList, self).__init__(*args, **kwargs) def _check_item_bound(self): if self.length and len(self) >= self.length: raise BoundExceedError() def _check_list_bound(self, L): if self.length and len(self) + len(L) > self.length: raise BoundExceedError() def append(self, x): self._check_item_bound() return super(BoundList, self).append(x) def extend(self, L): self._check_list_bound(L) return super(BoundList, self).extend(L) def insert(self, i, x): self._check_item_bound() return super(BoundList, self).insert(i, x) def __add__(self, L): self._check_list_bound(L) return super(BoundList, self).__add__(L) def __iadd__(self, L): self._check_list_bound(L) return super(BoundList, self).__iadd__(L) def __setslice__(self, *args, **kwargs): if len(args) > 2 and self.length: left, right, L = args[0], args[1], args[2] if right > self.length: if left + len(L) > self.length: raise BoundExceedError() else: len_del = (right - left) len_add = len(L) if len(self) - len_del + len_add > self.length: raise BoundExceedError() return super(BoundList, self).__setslice__(*args, **kwargs) ``` **Usage**: ``` >>> l = BoundList(length=10) >>> l.extend([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) >>> l [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> # now all these attempts will raise BoundExceedError: >>> l.append(11) >>> l.insert(0, 11) >>> l.extend([11]) >>> l += [11] >>> l + [11] >>> l[len(l):] = [11] ```
How to set a max length for a python list/set?
[ "python", "list", "max", "maxlength" ]
I have a list of numbers that needs to be calculated from a function, and I need to calculate it 2 million times. I could have done it this way, but is there a simpler way: ``` def funcx(): return random.random() # for simplicity we use random top10 = [] # max len = 10 for i in range(2000000): j = funcx() top10.append(j) top10 = sorted(top10, reverse=True)[:10] ```
Update: 2013-me was confused, at best, and this is not correct. See <https://stackoverflow.com/a/68587827/1126841> instead. --- Use a fixed-size heap instead of sorting the list each time: ``` import heapq top10=[] for i in range(2000000): heapq.heappush(top10, funcx()) top10 = top10[:10] ``` Asymptotically, the running time is the same, but there should be less overhead. Another option is to use the `nsmallest` function: ``` heapq.nsmallest(10, (funcx() for i in range(2000000)) ) ``` This is less efficient than simply sorting the list and return the first 10 items, but it should (i.e., I didn't check) use less memory.
I would like to show a correct solution using a fixed-size heap (the [accepted answer](https://stackoverflow.com/a/17527689/6604502) is incorrect). Let's say you want the 10 smallest elements. Then you can use a max heap of size 10: once the heap is full, pair each push with a pop that removes the largest element, leaving you with an array of the 10 smallest. There is even an efficient combined operation, `heapq.heappushpop`. Since `heapq` implements a min heap, the values are negated; note also that `heappushpop` on a heap that is not yet full returns the new item without storing it, so the heap has to be filled with `heappush` first. The code for the 10 smallest elements then looks like: ``` import heapq top10 = [] for i in range(2000000): # heapq implements a min heap, so we negate the numbers x = -funcx() if len(top10) < 10: heapq.heappush(top10, x) # fill the heap to its fixed size first else: heapq.heappushpop(top10, x) # push, then evict the largest original value print([-v for v in top10]) ``` Anyway, this is basically the same as the implementation of `heapq.nsmallest` (which handles some extra corner cases, e.g. `n == 1`), so you are better off using that: ``` heapq.nsmallest(10, (funcx() for i in range(2000000))) ``` or `heapq.nlargest` for the n largest elements.
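Here is a self-contained, reproducible sketch of the fixed-size-heap idea for the original "top 10 largest" goal, checked against plain sorting (seeded `random`, so the run is deterministic). The heap must first be filled with `heappush`: `heappushpop` on a heap that is not yet full returns the new item without storing it.

```python
import heapq
import random

random.seed(0)  # seeded so the run is reproducible
data = [random.random() for _ in range(10000)]

# Keep the 10 largest values seen so far: a min-heap of fixed size 10
# whose smallest element is evicted by heappushpop once the heap is full.
top10 = []
for x in data:
    if len(top10) < 10:
        heapq.heappush(top10, x)     # fill the heap to size 10 first
    else:
        heapq.heappushpop(top10, x)  # push x, then pop the smallest

assert sorted(top10, reverse=True) == sorted(data, reverse=True)[:10]
print(len(top10))  # 10
```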
How do you iterate and keep the top 10 max? Python
[ "python" ]
I am trying to learn Python by myself on Codecademy, and I am looking through past lessons, but I can't figure out what I did wrong. I think I copied everything correctly. The assignment is to check the word the user inputs to see if it contains at least one character. If it does contain at least one character, the program is supposed to print the word the user entered in the beginning. If not, the program is supposed to say "empty". The code lets me input a word, but then even if the word has more than one character, it will not print out the word. I feel like the solution is probably very simple, but I can't figure it out. I think the colons are in the right places. I would appreciate your help very much. ``` print "Welcome to the English to Pig Latin translator!" original = raw_input("tell me your secrets") def true_function(): if len(original)>= 1: print(original) else: print("empty") ```
You need to call `true_function()` for it to be executed. Do something like this: ``` print "Welcome to the English to Pig Latin translator!" def true_function(): original = raw_input("tell me your secrets") if len(original)>= 1: print(original) else: print("empty") true_function() ``` Notice how I call `true_function()` at the end. Before, you were just taking the input and nothing else; now the input is asked for inside the function and then run through the condition. Here are a few tutorials on functions if you don't fully understand them: [Tutorials point: Functions](http://www.tutorialspoint.com/python/python_functions.htm) [ZetCode calling functions](http://zetcode.com/lang/python/functions/)
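A testable variant of the same logic, with the word passed in as a parameter instead of read via `raw_input` (Python 3 here; the function name is made up for illustration):

```python
def describe_secret(original):
    # Return the word itself when it has at least one character,
    # otherwise the placeholder string.
    if len(original) >= 1:
        return original
    return "empty"

print(describe_secret("hello"))  # hello
print(describe_secret(""))       # empty
```

Returning the value instead of printing it is what makes the function easy to exercise from tests or from other code.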
It's because you never call the `true_function()` function. You can either remove that, and just have: ``` print "Welcome to the English to Pig Latin translator!" original = raw_input("tell me your secrets") if len(original)>= 1: print(original) else: print("empty") ``` Or, call the `true_function()` afterwards, passing the variable `original` as an argument: ``` def true_function(original): if len(original)>= 1: print(original) else: print("empty") print "Welcome to the English to Pig Latin translator!" original = raw_input("tell me your secrets") true_function(original) ```
please help me with my else if loop in python ?
[ "python" ]
In numpy is there a fast way of calculating the mean across multiple axis? I am calculating the mean on all but the 0 axis of an n-dimensional array. I am currently doing this; ``` for i in range(d.ndim - 1): d = d.mean(axis=1) ``` I'm wondering if there is a solution that doesn't use a python loop.
In numpy 1.7 you can give multiple axis to `np.mean`: ``` d.mean(axis=tuple(range(1, d.ndim))) ``` I am guessing this will perform similarly to the other proposed solutions, unless reshaping the array to flatten all dimensions triggers a copy of the data, in which case this should be much faster. So this is probably going to give a more consistent performance.
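A quick check (requires NumPy 1.7 or newer) that the tuple-of-axes call agrees with the chained-`mean` loop from the question:

```python
import numpy as np

d = np.arange(24, dtype=float).reshape(2, 3, 4)

# Chained version from the question: average away axis 1 repeatedly.
chained = d
for _ in range(d.ndim - 1):
    chained = chained.mean(axis=1)

# Single call with a tuple of axes.
direct = d.mean(axis=tuple(range(1, d.ndim)))

print(direct)                        # values 5.5 and 17.5
print(np.allclose(chained, direct))  # True
```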
My approach would be to reshape the array to flatten all of the higher dimensions and then run the mean on axis 1. Is this what you're looking for? ``` In [14]: x = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) In [16]: x.reshape((x.shape[0], -1)).mean(axis=1) Out[16]: array([ 2.5, 6.5]) ``` (the `-1` in the reshape just stands for the product of the lengths of the higher dims)
Getting the mean of multiple axis of a numpy array
[ "python", "numpy" ]
Python noob here. I need to store an array of float arrays. I am doing this, but it's not working out: ``` distance = [] ##declare my array distance.append ([]) ##add an empty array to the array distance[len(distance)-1].append ([0,1,2,3.5,4.2]) ## store array in array[0] print distance[0][1] ## this doesn't work, the array above got stored as 1 item ```
Use `list.extend`, not `list.append`. The difference between `extend` and `append` is that `append` appends the object passed to it as-is, while `extend` expects the item passed to it to be an iterable (list, tuple, string, etc.) and appends its items to the list. Using `append` we can append any type of object, iterable or non-iterable. --- ``` >>> lis = [1,2,3] >>> lis.append(4) #non-iterable >>> lis.append('foo') #iterable >>> lis [1, 2, 3, 4, 'foo'] ``` --- But `extend` behaves differently and actually appends the individual items from the iterable to the list. ``` >>> lis = [1,2,3] >>> lis.extend('foo') #string is an iterable in python >>> lis [1, 2, 3, 'f', 'o', 'o'] #extend appends individual characters to the list >>> lis.extend([7,8,9]) #same thing happened here >>> lis [1, 2, 3, 'f', 'o', 'o', 7, 8, 9] >>> lis.extend(4) #an integer is not iterable, so you'll get an error TypeError: 'int' object is not iterable ``` --- **Your Code** ``` >>> distance = [[]] >>> distance[-1].extend ([0,1,2,3.5,4.2]) >>> distance [[0, 1, 2, 3.5, 4.2]] ``` This returns: ``` [[0, 1, 2, 3.5, 4.2]] ``` If you wanted to do this, then there's no need to `append` the empty `[]` and then call `list.extend`; just use `list.append` directly: ``` >>> distance = [] ##declare my array >>> distance.append([0,1,2,3.5,4.2]) >>> distance [[0, 1, 2, 3.5, 4.2]] ```
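A minimal side-by-side of the two methods, using the list from the question (Python 3 syntax):

```python
nested = []
nested.append([0, 1, 2, 3.5, 4.2])  # append keeps the whole list as ONE item
print(nested[0][1])                 # 1

flat = []
flat.extend([0, 1, 2, 3.5, 4.2])    # extend splices the items in individually
print(flat[1])                      # 1
print(flat[3])                      # 3.5
```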
Use `extend` instead of `append`: ``` distance[-1].extend([0,1,2,3.5,4.2]) ``` (Also, note that `distance[len(distance)-1]` can be written `distance[-1]`.)
Nested float arrays in Python
[ "python", "arrays", "nested" ]
I have a django project, using django-rest-framework to create api. Want to use token base authentication system so api call for (put, post, delete) will only execute for authorized user. I installed 'rest\_framework.authtoken' and created token for each users. So, now from django.contrib.auth.backends authenticate, it returns user, with auth\_token as attribute. (when loged in successfully). Now my question is how can I send the token with post request to my api and at api side how can I verify if token is valid and belongs to the correct user? Are there any methods in app rest\_framework.authtoken to validate given user and its token? not found [this](http://django-rest-framework.org/api-guide/authentication/#tokenauthentication) very useful! Update (changes I made): Added this in my settings.py: ``` REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.BasicAuthentication', 'rest_framework.authentication.SessionAuthentication', 'rest_framework.authentication.TokenAuthentication', ) } ``` Also sending Token in my header but its still not working: ``` if new_form.is_valid: payload= {"createNewUser": { "users": request.POST["newusers"], "email": request.POST["newemail"] } } headers = {'content-type' : 'application/json', 'Authorization': 'Token 6b929e47f278068fe6ac8235cda09707a3aa7ba1'} r = requests.post('http://localhost:8000/api/v1.0/user_list', data=json.dumps(payload), headers=headers, verify=False) ```
> "how can I send the token with post request to my api" From the docs... For clients to authenticate, the token key should be included in the Authorization HTTP header. The key should be prefixed by the string literal "Token", with whitespace separating the two strings. For example: ``` Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b ``` > "at api side how can I verify if token is valid and belongs to the correct user?" You don't need to do anything, just access `request.user` to return the authenticated user - REST framework will deal with returning a '401 Unauthorized' response to any incorrect authentication.
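The crucial detail is the literal `Token` prefix in the header value. Here is a tiny sketch of building such headers (no network call is made; the helper name is made up for illustration, and the token value is the one from the question):

```python
def token_auth_headers(token, content_type="application/json"):
    # DRF's TokenAuthentication expects:  Authorization: Token <key>
    return {
        "Content-Type": content_type,
        "Authorization": "Token {}".format(token),
    }

headers = token_auth_headers("6b929e47f278068fe6ac8235cda09707a3aa7ba1")
print(headers["Authorization"])
# Token 6b929e47f278068fe6ac8235cda09707a3aa7ba1
```

The resulting dict can be passed as the `headers=` argument to `requests.post`, exactly as in the question's snippet.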
To answer the first half of your question: > how can I send the token with post request to my api You can use the Python [requests](http://docs.python-requests.org/en/master/user/authentication/) library. For the django-rest-framework TokenAuthentication, the token needs to be passed in the header and prefixed by the string `Token` ([see here](http://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication)): ``` import requests mytoken = "4652400bd6c3df8eaa360d26560ab59c81e0a164" myurl = "http://localhost:8000/api/user_list" # A get request (json example): response = requests.get(myurl, headers={'Authorization': 'Token {}'.format(mytoken)}) data = response.json() # A post request: data = { < your post data >} requests.post(myurl, data=data, headers={'Authorization': 'Token {}'.format(mytoken)}) ```
How to use TokenAuthentication for API in django-rest-framework
[ "python", "django", "django-authentication", "django-rest-framework" ]
Is there a way to create an XML schema from an existing database in SQL Server 2008, SQL Server Management Studio? I have a DB with ~50 tables. I'm looking to create a "nice" diagram showing the relationship between those tables. Using another application called SQL Designer (<https://code.google.com/p/wwwsqldesigner/>) will give me the "nice" looking picture, but I don't know how to create the XML schema required. I did a search across the forum (and MS) and couldn't quite find my answer. I could find tools which created a database, but I'm looking at the reverse... I need a pretty picture which shows the database in diagrammatic form. I thought if I could get my db structure into XML then SQL Designer will do the rest for me. Thanks for your assistance. Nick
If you only need the xml schema of tables query them with this: ``` select top 0 * FROM daTable FOR XML AUTO,XMLSCHEMA ``` If you need the table names and columns in order to create a representation of your database and how tables are connected you can use something like this: ``` SELECT s.name as '@Schema' ,t.name as '@Name' ,t.object_id as '@Id' ,( SELECT c.name as '@Name' ,c.column_id as '@Id' ,IIF(ic.object_id IS NOT NULL,1,0) as '@IsPrimaryKey' ,fkc.referenced_object_id as '@ColumnReferencesTableId' ,fkc.referenced_column_id as '@ColumnReferencesTableColumnId' FROM sys.columns as c LEFT OUTER JOIN sys.index_columns as ic ON c.object_id = ic.object_id AND c.column_id = ic.column_id AND ic.index_id = 1 LEFT OUTER JOIN sys.foreign_key_columns as fkc ON c.object_id = fkc.parent_object_id AND c.column_id = fkc.parent_column_id WHERE c.object_id = t.object_id FOR XML PATH ('Column'),TYPE ) FROM sys.schemas as s INNER JOIN sys.tables as t ON s.schema_id = t.schema_id FOR XML PATH('Table'),ROOT('Tables') ``` Let your application use the ColumnReferencesTableId and ColumnReferencesTableColumnId to get table relations. You could also further join back to columns and tables which are referenced if you prefer writing their names out but I thought their Ids would suffice.
Combined with a cursor running through INFORMATION\_SCHEMA or sysobjects, the following should help you: ``` SELECT * FROM [MyTable] FOR XML AUTO, XMLSCHEMA ``` I'm uncertain as to whether you can simply apply this to a whole database, or what postprocessing effort would be required to combine all the various table schemas, but it's something to work with.
how to create XML schema from an existing database in SQL Server 2008
[ "sql", "sql-server-2008" ]
After I added `c.category <> 'AGILE'` to the query below, the results set stopped including `NULL` values for `c.category`. How can I get rows with a `NULL` c.category back in my result set, without doing a `UNION`? ``` select p.number, p.method ,sum(p.amount) AS amount ,count(*) AS count,c.category from payments p inner join headers a on p.name = a.name inner join customer c on c.number = p.number and a.status = 'APPROVED' and a.type IN ('REGULAR', 'TRANSFER', 'OTHER') and c.category <> 'AGILE' group by p.payment_method ,p.cust_number ,c.u_cust_category ```
`NULL` is neither equal to nor unequal to any particular value. If you want to include NULL values, you would want something like ``` and( c.category <> 'AGILE' or c.category IS NULL) ```
This simply works: ``` (c.category <> 'AGILE' OR c.category IS NULL) ```
Why does inequality test eliminate NULL values?
[ "sql", "oracle" ]
I'm still in the learning phase of SQL statements and I'm hoping someone out there can help. I have a many-to-many database base relationship. The table Department can have multiple Jobs associated with it and and Jobs can be related to multiple Departments. So I have this basic relationship type. ``` Job.ID (one-to-many) Jobs.JobID Jobs.DepartmentID (many-to-one) Department.ID ``` What I'm trying to do is get a list of Jobs that aren't already associated with a department. ``` tbl=Job ID Job Active 1 10-3242 Yes 2 12-3902 Yes 3 12-3898 Yes tbl=Jobs ID DepartmentID JobID 1 3 1 2 3 2 tbl=Department ID Department 1 Administration 2 Sales 3 Production ``` Query: ``` string sql = "SELECT Job FROM (Job " + "INNER JOIN Jobs ON Job.ID = Jobs.JobID) " + "INNER JOIN Department ON Jobs.DepartmentID = Department.ID " + "WHERE Department.Department <> 'Production'"; ``` I'm expecting the job code `12-3898` to be returned but obviously I'm forgetting something. Any assistance would be great. Cheers.
You can use a [`LEFT JOIN`](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html). The `LEFT JOIN` keyword returns all rows from the left table with the matching rows in the right table. The result is `NULL` in the right side if there is no match. Since you want the jobs without a matching department, you can check if the joined `DepartmentID` is `NULL`: ``` SELECT Job.Job FROM Job LEFT JOIN Jobs ON Job.ID = Jobs.JobID WHERE Jobs.DepartmentID IS NULL; ``` Checkout [**this demo**](http://sqlfiddle.com/#!2/008dc/1). Let me know if it works.
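The anti-join can be verified with Python's built-in `sqlite3` module against the sample data from the question (column names are lower-cased for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE job  (id INTEGER, job TEXT);
    CREATE TABLE jobs (id INTEGER, department_id INTEGER, job_id INTEGER);
    INSERT INTO job  VALUES (1, '10-3242'), (2, '12-3902'), (3, '12-3898');
    INSERT INTO jobs VALUES (1, 3, 1), (2, 3, 2);
""")

# Jobs with no matching row in the link table come back with NULL
# on the right-hand side of the LEFT JOIN.
rows = con.execute("""
    SELECT job.job
    FROM job
    LEFT JOIN jobs ON job.id = jobs.job_id
    WHERE jobs.department_id IS NULL
""").fetchall()
print(rows)  # [('12-3898',)]
```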
``` select job from job where id not in (select jobId from jobs) ```
SQL Statement - Excluding certain data
[ "sql", "select", "many-to-many", "inner-join", "where-clause" ]
I want to use Spanish instead of English to program in Python. How to define aliases for keywords such as `if` and `for`?
1. You don't. "if" and "for" are hardcoded parts of Python. You could write your own programming language or a translator but you can't do what you are asking for in real Python (you could in Lisp or in C/C++ using macros). 2. More importantly, [programming is done in English](http://www.catb.org/esr/faqs/hacker-howto.html#skills4 "If you don't have functional English, learn it."). It's not necessarily fair, but that is the way it is. All the documentation is in English and so are the the methods from all the libraries. From ESR: *"that **English is the working language of the hacker culture and the Internet**, and that you will need to know it to function in the hacker community."* and *"**Linus Torvalds, a Finn, comments his code in English** (it apparently never occurred to him to do otherwise). His fluency in English has been an important factor in his ability to recruit a worldwide community of developers for Linux. It's an example worth following."*
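The keyword/builtin distinction can be demonstrated from the standard library itself: builtin *functions* can be rebound to Spanish names, but keywords cannot (Python 3 sketch; `imprimir` is an illustrative alias):

```python
import keyword

# Keywords are fixed parts of the grammar.
print(keyword.iskeyword("if"), keyword.iskeyword("for"))  # True True

# A builtin function can be aliased freely...
imprimir = print
imprimir("hola")  # hola

# ...but assigning to a keyword is rejected at compile time.
try:
    compile("if = 1", "<demo>", "exec")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```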
If you are fine to have iPython as a dependency, you might have a look at [Custom input transformation — IPython 3.2.1 documentation](https://ipython.org/ipython-doc/3/config/inputtransforms.html). Basically, ipython is already preprocessing anything that is given to the prompt, and it allows code to specify more transformations in the various compilation steps it handles before passing it to the python interpreter itself. At first glance, the most approriate transformer for transforming reserved keywords is `TokenInputTransformer.wrap()`.
Defining aliases for keywords in Python
[ "python" ]
How can I write a comprehension to extract all values of key='a'? ``` alist=[{'a':'1a', 'b':'1b'},{'a':'2a','b':'2b'}, {'a':'3a','b':'3b'}] ``` The following works but I just hacked until I got what I want. Not a good way to learn. ``` [alist['a'] for alist in alist if 'a' in alist] ``` in the comprehension I have been trying to use `if key='a' in alist else 'No data'`
``` [elem['a'] for elem in alist if 'a' in elem] ``` might be a clearer way of phrasing what you have above. The "for elem in alist" part will iterate over alist, allowing this to look through each dictionary in alist. Then, the "if 'a' in elem" will ensure that the key 'a' is in the dictionary before the lookup occurs, so that you don't get a KeyError from trying to look up an element that doesn't exist in the dictionary. Finally, taking elem['a'] gives you the value in each dictionary with key 'a'. This whole statement will then give the list of values in each of the dictionaries with key 'a'. Hope this makes it a bit clearer.
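The `'No data'` fallback the question was reaching for can be written with a conditional expression inside the comprehension, or with `dict.get` and a default (Python 3 sketch; the third dictionary is added here to show the missing-key case):

```python
alist = [{'a': '1a', 'b': '1b'}, {'a': '2a', 'b': '2b'}, {'b': '3b'}]

# Keep every dictionary, substituting a placeholder when 'a' is missing.
with_default = [d['a'] if 'a' in d else 'No data' for d in alist]
print(with_default)  # ['1a', '2a', 'No data']

# Equivalent, using get() with a default value.
print([d.get('a', 'No data') for d in alist])  # ['1a', '2a', 'No data']
```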
You can do: ``` alist=[{'a':'1a', 'b':'1b'},{'a':'2a','b':'2b'}, {'a':'3a','b':'3b'}] new_list = [a.get('a') for a in alist] ``` If you want to restrict it only to dictionary with a key `a`, ``` new_list = [a.get('a') for a in alist if a.get('a')] ``` Based on gnibbler's suggestion: ``` new_list = [a.get('a') for a in alist if 'a' in a ] ```
Create a list from a list of dictionaries using comprehension for specific key value
[ "python" ]
I want to count, per area, the records whose status is one of 'PENDING', 'SENT INFO' and 'PTP', using GROUP BY and multiple SELECT subqueries, but I don't get the expected result, so please help me. My query is: ``` SELECT AREA, (SELECT COUNT( * ) FROM prospectmaster WHERE ZONE = 'AHMEDABAD' && STATUS = 'PENDING' GROUP BY AREA) AS PENDING, (SELECT COUNT( * ) FROM prospectmaster WHERE ZONE = 'AHMEDABAD' && STATUS = 'SENT INFO.' GROUP BY AREA) AS CNT FROM prospectmaster WHERE ZONE = 'AHMEDABAD' GROUP BY AREA ``` I want this type of result: ``` AREA PENDING INFO SENT PTP AHMEDABAD 1 2 1 GANDHINAGAR 1 5 4 KHEDA 3 10 9 ``` I think there is some problem in the query, but I can't find it. Please help me.
``` SELECT AREA, SUM(CASE WHEN STATUS = 'PENDING' THEN 1 ELSE 0 END) AS PENDING, SUM(CASE WHEN STATUS = 'SENT INFO.' THEN 1 ELSE 0 END) AS cnt, SUM(CASE WHEN STATUS = 'PTP' THEN 1 ELSE 0 END) AS PTP FROM prospectmaster WHERE ZONE = 'AHMEDABAD' GROUP BY AREA; ``` If `STATUS` has 'SENT INFO.' record in DB with `.`
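The conditional-aggregation pattern can be checked with Python's built-in `sqlite3` module (the `CASE WHEN` form works the same way in MySQL); the sample rows here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prospectmaster (zone TEXT, area TEXT, status TEXT)")
sample = [("AHMEDABAD", "KHEDA", "PENDING"),
          ("AHMEDABAD", "KHEDA", "PENDING"),
          ("AHMEDABAD", "KHEDA", "PTP"),
          ("AHMEDABAD", "GANDHINAGAR", "SENT INFO.")]
con.executemany("INSERT INTO prospectmaster VALUES (?, ?, ?)", sample)

# One pass over the table; each SUM counts only the rows matching its CASE.
pivot = con.execute("""
    SELECT area,
           SUM(CASE WHEN status = 'PENDING'    THEN 1 ELSE 0 END) AS pending,
           SUM(CASE WHEN status = 'SENT INFO.' THEN 1 ELSE 0 END) AS sent_info,
           SUM(CASE WHEN status = 'PTP'        THEN 1 ELSE 0 END) AS ptp
    FROM prospectmaster
    WHERE zone = 'AHMEDABAD'
    GROUP BY area
    ORDER BY area
""").fetchall()
print(pivot)  # [('GANDHINAGAR', 0, 1, 0), ('KHEDA', 2, 0, 1)]
```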
``` SELECT AREA, SUM(if(STATUS='PENDING',1,0)) AS PENDING, SUM(if(STATUS='SENT INFO',1,0)) AS "INFO SENT", SUM(if(STATUS='PTP',1,0)) AS PTP FROM prospectmaster WHERE ZONE = 'AHMEDABAD' GROUP BY AREA; ```
Query with multiple SELECT and GROUP BY
[ "mysql", "sql" ]
The NOT IN clause is omitting the ***NULL*** values when checking the condition. ``` INPUT ID NAME 1 A 2 <null> 3 C SELECT... FROM... WHERE NAME NOT IN ('C') ``` is only returning ID value 1. I need both 1 & 2. Could this be done?
``` WHERE NAME NOT IN ('C') OR NAME IS NULL ```
Either you check for NULL values ``` select * from not_in where name not in ('C') or name is null; ``` or you can convert NULL values to some other value with `coalesce`. I use ' ' in the sample below. ``` select * from not_in where coalesce(name, ' ') not in ('C'); ```
NOT IN clause and NULL value
[ "mysql", "sql", "oracle" ]
I'm doing a project involving data collection and logging. I have 2 threads running, a collection thread and a logging thread, both started in main. I'm trying to allow the program to be terminated gracefully with Ctrl-C. I'm using a `threading.Event` to signal to the threads to end their respective loops. It works fine to stop the `sim_collectData` method, but it doesn't seem to be properly stopping the `logData` thread. The `Collection terminated` print statement is never executed, and the program just stalls. (It doesn't end, just sits there). The second `while` loop in `logData` is to make sure everything in the queue is logged. The goal is for Ctrl-C to stop the collection thread immediately, then allow the logging thread to finish emptying the queue, and only then fully terminate the program. (Right now, the data is just being printed out - eventually it's going to be logged to a database). I don't understand why the second thread never terminates. I'm basing what I've done on this answer: [Stopping a thread after a certain amount of time](https://stackoverflow.com/questions/6524459/stopping-a-thread-python). What am I missing? ``` def sim_collectData(input_queue, stop_event): ''' this provides some output simulating the serial data from the data logging hardware. ''' n = 0 while not stop_event.is_set(): input_queue.put("DATA: <here are some random data> " + str(n)) stop_event.wait(random.randint(0,5)) n += 1 print "Terminating data collection..." return def logData(input_queue, stop_event): n = 0 # we *don't* want to loop based on queue size because the queue could # theoretically be empty while waiting on some data. while not stop_event.is_set(): d = input_queue.get() if d.startswith("DATA:"): print d input_queue.task_done() n += 1 # if the stop event is received and the previous loop terminates, # finish logging the rest of the items in the queue. print "Collection terminated. Logging remaining data to database..." while not input_queue.empty(): d = input_queue.get() if d.startswith("DATA:"): print d input_queue.task_done() n += 1 return def main(): input_queue = Queue.Queue() stop_event = threading.Event() # used to signal termination to the threads print "Starting data collection thread...", collection_thread = threading.Thread(target=sim_collectData, args=(input_queue, stop_event)) collection_thread.start() print "Done." print "Starting logging thread...", logging_thread = threading.Thread(target=logData, args=(input_queue, stop_event)) logging_thread.start() print "Done." try: while True: time.sleep(10) except (KeyboardInterrupt, SystemExit): # stop data collection. Let the logging thread finish logging everything in the queue stop_event.set() main() ```
The problem is that your logger is waiting on `d = input_queue.get()` and will not check the event. One solution is to skip the event completely and invent a unique message that tells the logger to stop. When you get a signal, send that message to the queue. ``` import threading import Queue import random import time def sim_collectData(input_queue, stop_event): ''' this provides some output simulating the serial data from the data logging hardware. ''' n = 0 while not stop_event.is_set(): input_queue.put("DATA: <here are some random data> " + str(n)) stop_event.wait(random.randint(0,5)) n += 1 print "Terminating data collection..." input_queue.put(None) return def logData(input_queue): n = 0 # we *don't* want to loop based on queue size because the queue could # theoretically be empty while waiting on some data. while True: d = input_queue.get() if d is None: input_queue.task_done() return if d.startswith("DATA:"): print d input_queue.task_done() n += 1 def main(): input_queue = Queue.Queue() stop_event = threading.Event() # used to signal termination to the threads print "Starting data collection thread...", collection_thread = threading.Thread(target=sim_collectData, args=(input_queue, stop_event)) collection_thread.start() print "Done." print "Starting logging thread...", logging_thread = threading.Thread(target=logData, args=(input_queue,)) logging_thread.start() print "Done." try: while True: time.sleep(10) except (KeyboardInterrupt, SystemExit): # stop data collection. Let the logging thread finish logging everything in the queue stop_event.set() main() ```
I'm not an expert in threading, but in your `logData` function the first `d = input_queue.get()` is blocking, i.e., if the queue is empty it will sit and wait forever until a queue message is received. This is likely why the `logData` thread never terminates: it's sitting waiting forever for a queue message. Refer to the Python `Queue` documentation to change this to a non-blocking queue read: use `.get(False)` or `.get_nowait()` - but either will require some exception handling for cases when the queue is empty.
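For illustration, a minimal sketch of that non-blocking approach (shown with the Python 3 `queue` module; in the question's Python 2.7 code the module is named `Queue`, but the `get_nowait()`/`Empty` API is the same):

```python
import queue  # named Queue in Python 2

def drain(q):
    """Drain a queue without blocking, returning everything that was in it."""
    items = []
    while True:
        try:
            # get_nowait() raises queue.Empty instead of blocking forever
            d = q.get_nowait()
        except queue.Empty:
            break
        items.append(d)
        q.task_done()
    return items

q = queue.Queue()
q.put("DATA: sample 0")
q.put("DATA: sample 1")
print(drain(q))  # ['DATA: sample 0', 'DATA: sample 1']
```

With something like this in the second loop of `logData`, the thread can no longer hang on an empty queue.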
How to let a Python thread finish gracefully
[ "", "python", "multithreading", "python-2.7", "" ]
According to the [documentation](http://dev.mysql.com/doc/refman/5.7/en/comments.html), there are two ways you can comment to the end of a line in MySql: * `"#"` * `"-- "` Is there any difference between these? If so, when should I use one vs. the other? If not, does anyone know why both are supported? Seems strange to me, especially when the hyphenated version still differs slightly from the standard [SQL Syntax](http://dev.mysql.com/doc/refman/5.7/en/ansi-diff-comments.html).
It only really matters if you want your SQL to be portable. For instance `--` comments are OK in SQLite and PostgreSQL while `#` comments are not. Your best bet is to use `--` with a space following. (As far as I can remember, I've hardly ever seen anything else.)
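The portability point is easy to see directly; here is a small check using SQLite from Python as a stand-in for "another database" (an assumption for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The standard '-- ' comment works fine outside MySQL...
print(conn.execute("SELECT 1 -- a standard comment").fetchone())  # (1,)

# ...but the MySQL-only '#' comment is a syntax error here.
try:
    conn.execute("SELECT 1 # a MySQL-style comment")
except sqlite3.OperationalError as e:
    print("rejected:", e)
```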
As the link you provided clearly explains, `--` is the standard SQL "comment separator". Where MySQL departs from the standard is in requiring a *space* after `--` for it to be recognized as a comment. "Standard" SQL does not require this. To provide an example, in the following code, `--` is recognized as a comment token: ``` mysql> CREATE TABLE T(C int); -- This is my new table Query OK, 0 rows affected (0.18 sec) ``` But notice how the interactive interpreter misbehaves without a space *after* `--`: ``` mysql> CREATE TABLE T(C int); --This is my new table Query OK, 0 rows affected (0.24 sec) -> ; ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '--This is my new table' at line 1 ``` --- MySQL supports some other comment formats to accommodate the habits of various programmers: `#` like many scripting languages and `/* ... */` like C. It is quite astounding that `//` is not yet part of them. ``` mysql> CREATE TABLE T(C int); /* This is my new table */ Query OK, 0 rows affected (0.22 sec) mysql> CREATE TABLE T(C int); # This is my new table Query OK, 0 rows affected (0.24 sec) ```
MySql Comment Syntax - What's the difference between "#" and "-- "
[ "", "mysql", "sql", "syntax", "comments", "" ]
My question is : Is there a way to do find the last day of a month in Hive, like Oracle SQL function ? : `LAST_DAY(D_Dernier_Jour)` Thanks.
You could make use of `last_day(dateString)` UDF provided by Nexr. It returns the last day of the month based on a date string with yyyy-MM-dd HH:mm:ss pattern. ``` Example: SELECT last_day('2003-03-15 01:22:33') FROM src LIMIT 1; 2003-03-31 00:00:00 ``` You need to pull it from their [Github Repository](https://github.com/nexr/hive-udf/wiki) and build. Their wiki page contains all the info on how to build and use it with Hive. HTH
As of Hive 1.1.0, [`last_day(string date)`](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) function is available. `last_day(string date)` > Returns the **last day of the month** which the date belongs to. date is a string in the format 'yyyy-MM-dd HH:mm:ss' or 'yyyy-MM-dd'. The time part of date is ignored.
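For reference, the same "last day of the month" logic can be sketched in plain Python with the standard `calendar` module (this is equivalent logic, not Hive itself):

```python
import calendar
from datetime import date

def last_day(d):
    """Return the last day of the month that date d belongs to."""
    # monthrange returns (weekday of first day, number of days in month)
    return date(d.year, d.month, calendar.monthrange(d.year, d.month)[1])

print(last_day(date(2003, 3, 15)))  # 2003-03-31, matching the Hive example
```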
Find last day of a month in Hive
[ "", "sql", "hive", "hiveql", "" ]
I have a textfile, let's call it `goodlines.txt` and I want to load it and make a list that contains each line in the text file. I tried using the `split()` procedure like this: ``` >>> f = open('goodlines.txt') >>> mylist = f.splitlines() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: '_io.TextIOWrapper' object has no attribute 'splitlines' >>> mylist = f.split() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: '_io.TextIOWrapper' object has no attribute 'split' ``` Why do I get these errors? Is that not how I use `split()`? ( I am using `python 3.3.2`)
You are using `str` methods on an open file object. You can read the file as a list of lines by simply calling `list()` on the file object: ``` with open('goodlines.txt') as f: mylist = list(f) ``` This *does* include the newline characters. You can strip those in a list comprehension: ``` with open('goodlines.txt') as f: mylist = [line.rstrip('\n') for line in f] ```
Try this: ``` >>> f = open('goodlines.txt') >>> mylist = f.readlines() ``` The `open()` function returns a file object, and a file object has no method named `splitlines()` or `split()`. You can use `dir(f)` to see all the methods of the file object.
Python: AttributeError: '_io.TextIOWrapper' object has no attribute 'split'
[ "", "python", "python-3.x", "" ]
I have consumed a bunch of tweets in a mongodb database. I would like to query these tweets using pymongo. For example, I would like to query for screen\_name. However, when I try to do this, python does not return a tweet but a message about pymongo.cursor.Cursor. Here is my code: ``` import sys import pymongo from pymongo import Connection connection = Connection() db = connection.test tweets = db.tweets list(tweets.find())[:1] ``` I get a JSON, which looks like this: ``` {u'_id': ObjectId('51c8878fadb68a0b96c6ebf1'), u'contributors': None, u'coordinates': {u'coordinates': [-75.24692983, 43.06183036], u'type': u'Point'}, u'created_at': u'Mon Jun 24 17:53:19 +0000 2013', u'entities': {u'hashtags': [], u'symbols': [], u'urls': [], u'user_mentions': []}, u'favorite_count': 0, u'favorited': False, u'filter_level': u'medium', u'geo': {u'coordinates': [43.06183036, -75.24692983], u'type': u'Point'}, u'id': 349223725943623680L, u'id_str': u'349223725943623680', u'in_reply_to_screen_name': None, u'in_reply_to_status_id': None, u'in_reply_to_status_id_str': None, u'in_reply_to_user_id': None, u'in_reply_to_user_id_str': None, u'lang': u'en', u'place': {u'attributes': {}, u'bounding_box': {u'coordinates': [[[-79.76259, 40.477399], [-79.76259, 45.015865], [-71.777491, 45.015865], [-71.777491, 40.477399]]], u'type': u'Polygon'}, u'country': u'United States', u'country_code': u'US', u'full_name': u'New York, US', u'id': u'94965b2c45386f87', u'name': u'New York', u'place_type': u'admin', u'url': u'http://api.twitter.com/1/geo/id/94965b2c45386f87.json'}, u'retweet_count': 0, u'retweeted': False, u'source': u'<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', u'text': u'Currently having a heat stroke', u'truncated': False, u'user': {u'contributors_enabled': False, u'created_at': u'Fri Oct 28 02:04:05 +0000 2011', u'default_profile': False, u'default_profile_image': False, u'description': u'young and so mischievious', u'favourites_count': 1798, 
u'follow_request_sent': None, u'followers_count': 368, u'following': None, u'friends_count': 335, u'geo_enabled': True, u'id': 399801173, u'id_str': u'399801173', u'is_translator': False, u'lang': u'en', u'listed_count': 0, u'location': u'Upstate New York', u'name': u'Joe Catanzarita', u'notifications': None, u'profile_background_color': u'D6640D', u'profile_background_image_url': u'http://a0.twimg.com/profile_background_images/702001815/f87508e73bbfab8c8c85ebe10b29fcf6.png', u'profile_background_image_url_https': u'https://si0.twimg.com/profile_background_images/702001815/f87508e73bbfab8c8c85ebe10b29fcf6.png', u'profile_background_tile': True, u'profile_banner_url': u'https://pbs.twimg.com/profile_banners/399801173/1367200323', u'profile_image_url': u'http://a0.twimg.com/profile_images/378800000012256721/d8b5f801fb331de6ead4aed42dc77a46_normal.jpeg', u'profile_image_url_https': u'https://si0.twimg.com/profile_images/378800000012256721/d8b5f801fb331de6ead4aed42dc77a46_normal.jpeg' , u'profile_link_color': u'140DE0', u'profile_sidebar_border_color': u'FFFFFF', u'profile_sidebar_fill_color': u'E0F5A6', u'profile_text_color': u'120212', u'profile_use_background_image': True, u'protected': False, u'screen_name': u'JoeCatanzarita', u'statuses_count': 6402, u'time_zone': u'Quito', u'url': None, u'utc_offset': -18000, u'verified': False}} ``` However, when I try to query for this screen\_name, I get: ``` tweets.find({"screen_name": "JoeCatanzarita"}) <pymongo.cursor.Cursor at 0x52c02f0> ``` And when I then try to count the number of tweets which have "screen\_name": "name", I get: ``` tweets.find({"screen_name": "name"}).count() 0 ``` Any idea what I am doing wrong/how I can get pymongo to return the tweets I am looking for? Thanks!
OK, now I see what your problem is: if you look carefully into your document you will notice that "screen\_name" is inside the subdocument `user`, so to access it all you have to do is the following: ``` tweets.find({"user.screen_name": "JoeCatanzarita"}) #for example. ``` Whenever the element you are trying to find is inside a subdocument (as in this situation) or inside an array, use this dot notation.
PyMongo's find() method returns a Cursor. To actually execute the query on the server and retrieve results, iterate the cursor with `list` or a for loop: ``` for doc in tweets.find({'screen_name': 'name'}): print(doc) # Or: docs = list(tweets.find({'screen_name': 'name'})) ``` If `tweets.find({"screen_name": "name"}).count()` returns 0, it means no documents match your query. Edit: now that you've posted an example document, I see you want to query like: ``` list(tweets.find({'user.screen_name': 'name'})) ``` ... since the `screen_name` field is embedded in the `user` sub-document.
Why do I get a pymongo.cursor.Cursor when trying to query my mongodb db via pymongo?
[ "", "python", "mongodb", "twitter", "pymongo", "database-cursor", "" ]
Ok so here is my class: ``` class Vec: """ A vector has two fields: D - the domain (a set) f - a dictionary mapping (some) domain elements to field elements elements of D not appearing in f are implicitly mapped to zero """ def __init__(self, labels, function): self.D = labels self.f = function ``` I need help creating a function that takes in two vectors, lets say: ``` u = Vec({'a','b','c'}, {'a':0,'c':1,'b':4}) v = Vec({'A','B','C'},{'A':1}) ``` the function equal: ``` equal(u,v) ``` should return: ``` false ``` So far I've tried this: ``` v = Vec({'x','y','z'},{'y':1,'x':2}) u = Vec({'x','y','z'},{'y':1,'x':0}) def equal(u,v): "Returns true iff u is equal to v" assert u.D == v.D for d in v.f: for i in u.f: if v.f[d] == u.f[i]: return True else: return False print (equal(u,v)) ``` I get true which is incorrect because it's only looking at the last value: 'y':1, how can I check for both?
The method you are trying to implement has already been done for you: you can use the set equality and dictionary equality operators. I suggest you not write a function called `equal`, but instead define `__eq__`, which allows the use of `==` on class instances. Here's what you can do: ``` def __eq__(self, anotherInst): return self.D == anotherInst.D and self.f == anotherInst.f ``` Read about the `__eq__` method in the [Python Docs](http://docs.python.org/2/reference/datamodel.html#object.__eq__). Test run after applying the changes: ``` >>> u = Vec({'a','b','c'}, {'a':0,'c':1,'b':4}) >>> v = Vec({'A','B','C'},{'A':1}) >>> u == v False ```
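Putting the pieces together, a runnable sketch of the class from the question with `__eq__` defined. Note one subtlety of relying on plain `dict` equality: a key explicitly mapped to zero is *not* treated as equal to a missing key, even though the class docstring says missing keys are implicitly zero.

```python
class Vec:
    """A vector with a domain set D and a sparse mapping f."""
    def __init__(self, labels, function):
        self.D = labels
        self.f = function

    def __eq__(self, other):
        # Equal iff both the domain and the mapping match.
        return self.D == other.D and self.f == other.f

u = Vec({'a', 'b', 'c'}, {'a': 0, 'c': 1, 'b': 4})
v = Vec({'A', 'B', 'C'}, {'A': 1})
print(u == v)  # False
print(Vec({'x', 'y'}, {'y': 1}) == Vec({'x', 'y'}, {'y': 1}))  # True
```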
You can compare the fields: ``` def equal(self, u, v): return u.D == v.D and u.f == v.f ```
Find if 2 vectors are equal in python
[ "", "python", "class", "function", "vector", "python-3.x", "" ]
I have a table containing a set of tasks to perform: ``` Task ID Name 1 Washing Up 2 Hoovering 3 Dusting ``` The user can add one or more Notes to a Note table. Each note is associated with a task: ``` Note ID ID_Task Completed(%) Date 11 1 25 05/07/2013 14:00 12 1 50 05/07/2013 14:30 13 1 75 05/07/2013 15:00 14 3 20 05/07/2013 16:00 15 3 60 05/07/2013 17:30 ``` I want a query that will select the Task ID, Name and it's % complete, which should be zero if there aren't any notes for it. The query should return: ``` ID Name Completed (%) 1 Washing Up 75 2 Hoovering 0 3 Dusting 60 ``` I've really been struggling with the query for this, which I've read is a "greatest n per group" type problem, of which there are many examples on SO, none of which I can apply to my case (or at least fully understand). My intuition was to start by finding the MAX(Date) for each task in the note table: ``` SELECT ID_Task, MAX(Date) AS Date FROM Note GROUP BY ID_Task ``` Annoyingly, I can't just add "Complete %" to the above query unless it's contained in a GROUP clause. Argh! I'm not sure how to jump through this hoop in order to somehow get the task table rows with the column appended to it. Here is my pathetic attempt, which fails as it only returns tasks with notes and then duplicates task records at that (one for each note, so it's a complete fail). ``` SELECT Task.ID, Task.Name, Note.Complete FROM Task JOIN (SELECT ID_Task, MAX(Date) AS Date FROM Note GROUP BY ID_Task) AS InnerNote ON Task.ID = InnerNote.ID_Task JOIN Note ON Task.ID = Note.ID_Task ``` Can anyone help me please?
If we assume that tasks only become more complete, you can do this with a `left outer join` and aggregation: ``` select t.ID, t.Name, coalesce(max(n.complete), 0) from tasks t left outer join notes n on t.id = n.id_task group by t.id, t.name ``` If tasks can become "less complete" then you want the one with the last date. For this, you can use `row_number()`: ``` select t.ID, t.Name, coalesce(n.complete, 0) from tasks t left outer join (select n.*, row_number() over (partition by id_task order by date desc) as seqnum from notes n ) n on t.id = n.id_task and n.seqnum = 1; ``` In this case, you don't need a `group by`, because the `seqnum = 1` performs the same role.
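The first query can be tried end to end; here is a sketch using the question's sample data in SQLite from Python (`LEFT OUTER JOIN` and `COALESCE` behave the same way there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tasks (id INTEGER, name TEXT);
CREATE TABLE notes (id INTEGER, id_task INTEGER, complete INTEGER, date TEXT);
INSERT INTO tasks VALUES (1, 'Washing Up'), (2, 'Hoovering'), (3, 'Dusting');
INSERT INTO notes VALUES
  (11, 1, 25, '2013-07-05 14:00'), (12, 1, 50, '2013-07-05 14:30'),
  (13, 1, 75, '2013-07-05 15:00'), (14, 3, 20, '2013-07-05 16:00'),
  (15, 3, 60, '2013-07-05 17:30');
""")

rows = conn.execute("""
    SELECT t.id, t.name, COALESCE(MAX(n.complete), 0)
    FROM tasks t
    LEFT OUTER JOIN notes n ON t.id = n.id_task
    GROUP BY t.id, t.name
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'Washing Up', 75), (2, 'Hoovering', 0), (3, 'Dusting', 60)]
```

Hoovering has no notes, so the `MAX` aggregate is NULL and `COALESCE` turns it into 0, exactly as the expected output requires.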
``` select a.ID, a.Name, isnull((select completed from Note where ID_Task = b.ID_Task and Date = b.date),0) from Task a LEFT OUTER JOIN (select ID_Task, max(date) date from Note group by ID_Task) b ON a.ID = b.ID_Task; ``` [See DEMO here](http://sqlfiddle.com/#!3/bc1e3/1)
Select rows in one table, adding column where MAX(Date) of rows in other, related table
[ "", "sql", "sql-server", "greatest-n-per-group", "" ]
I'm working with a database where we have a FNN column (Full National Number). In Australia, all FNN's are 10 digits and begin with a 0 followed by the single digit state number and then the 8 digit phone number. Currently, half the entries only have 9 digits, meaning that the first 0 is excluded, but the other half have the full 10 digits. I want to concatenate a 0 to all column values that don't have a 0 to begin with. Here's the current query that I've attempted but it results on a 0 affected. ``` UPDATE SUBSCRIBERS SET FNN=concat('0',FNN) WHERE FNN LIKE '[1-9]%'; ``` State digits do not begin with a 0, so I only need to concatenate a 0 where the first number is between 1-9. Why isn't the above query working? Thanks. Regards, Matt
In your question you mentioned that some of them have 9 digits while the other half have 10 digits. So instead of looking for rows whose FNN starts with 1-9, why not look for rows whose FNN is 9 characters long? ``` UPDATE SUBSCRIBERS SET FNN=concat('0',FNN) WHERE CHAR_LENGTH(FNN) = 9; ```
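A quick way to sanity-check the length-based logic, sketched here with SQLite from Python (SQLite names the function `LENGTH` rather than MySQL's `CHAR_LENGTH`, and uses `||` for concatenation; the sample numbers are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SUBSCRIBERS (FNN TEXT)")
conn.executemany("INSERT INTO SUBSCRIBERS VALUES (?)",
                 [("0298765432",), ("298765432",), ("398765432",)])

# 9-digit numbers are missing the leading zero; prepend it
conn.execute("UPDATE SUBSCRIBERS SET FNN = '0' || FNN WHERE LENGTH(FNN) = 9")

rows = [r[0] for r in conn.execute("SELECT FNN FROM SUBSCRIBERS")]
print(rows)  # every FNN is now 10 digits starting with 0
```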
Probably you can use `not like`: ``` UPDATE SUBSCRIBERS SET FNN=concat('0',FNN) WHERE FNN NOT LIKE '0%'; ```
MySQL - Concatenate 0 to all column values if no 0 to begin with
[ "", "mysql", "sql", "concatenation", "" ]
I am new to SQL so bear with me. I am returning data from multiple tables. Following is my SQL (let me know if there is a better approach): ``` SELECT [NonScrumStory].[IncidentNumber], [NonScrumStory].[Description], [DailyTaskHours].[ActivityDate], [Application].[AppName], [SupportCatagory].[Catagory], [DailyTaskHours].[PK_DailyTaskHours], [NonScrumStory].[PK_NonScrumStory] FROM [NonScrumStory], [DailyTaskHours], [Application], [SupportCatagory] WHERE ([NonScrumStory].[UserId] = 26) AND ([NonScrumStory].[PK_NonScrumStory] = [DailyTaskHours].[NonScrumStoryId]) AND ([NonScrumStory].[CatagoryId] = [SupportCatagory].[PK_SupportCatagory]) AND ([NonScrumStory].[ApplicationId] = [Application].[PK_Application]) AND ([NonScrumStory].[Deleted] != 1) AND [DailyTaskHours].[ActivityDate] >= '1/1/1990' ORDER BY [DailyTaskHours].[ActivityDate] DESC ``` This is what is being returned: ![enter image description here](https://i.stack.imgur.com/XR02a.png) This is nearly correct. I only want it to return one copy of PK\_NonScrumStory though and I can't figure out how. Essentially, I only want it to return one copy so one of the top two rows would not be returned.
You could group by the NonScrumStore columns, and then aggregate the other columns like this: ``` SELECT [NonScrumStory].[IncidentNumber], [NonScrumStory].[Description], MAX( [DailyTaskHours].[ActivityDate]), MAX( [Application].[AppName]), MAX([SupportCatagory].[Catagory]), MAX([DailyTaskHours].[PK_DailyTaskHours]), [NonScrumStory].[PK_NonScrumStory] FROM [NonScrumStory], [DailyTaskHours], [Application], [SupportCatagory] WHERE ([NonScrumStory].[UserId] = 26) AND ([NonScrumStory].[PK_NonScrumStory] = [DailyTaskHours].[NonScrumStoryId]) AND ([NonScrumStory].[CatagoryId] = [SupportCatagory].[PK_SupportCatagory]) AND ([NonScrumStory].[ApplicationId] = [Application].[PK_Application]) AND ([NonScrumStory].[Deleted] != 1) AND [DailyTaskHours].[ActivityDate] >= '1/1/1990' group by [NonScrumStory].[IncidentNumber], [NonScrumStory].[Description],[NonScrumStory].[PK_NonScrumStory] ORDER BY 3 DESC ```
From the screenshot it seems DISTINCT should have solved your issue but if not you could use the ROW\_NUMBER function. ``` ;WITH CTE AS ( SELECT ROW_NUMBER() OVER (PARTITION BY [NonScrumStory].[PK_NonScrumStory] ORDER BY [DailyTaskHours].[ActivityDate] DESC) AS RowNum, [NonScrumStory].[IncidentNumber], [NonScrumStory].[Description], [DailyTaskHours].[ActivityDate], [Application].[AppName], [SupportCatagory].[Catagory], [DailyTaskHours].[PK_DailyTaskHours], [NonScrumStory].[PK_NonScrumStory] FROM [NonScrumStory], [DailyTaskHours], [Application], [SupportCatagory] WHERE ([NonScrumStory].[UserId] = 26) AND ([NonScrumStory].[PK_NonScrumStory] = [DailyTaskHours].[NonScrumStoryId]) AND ([NonScrumStory].[CatagoryId] = [SupportCatagory].[PK_SupportCatagory]) AND ([NonScrumStory].[ApplicationId] = [Application].[PK_Application]) AND ([NonScrumStory].[Deleted] != 1) AND [DailyTaskHours].[ActivityDate] >= '1/1/1990' ) SELECT * FROM CTE WHERE RowNum = 1 ORDER BY [ActivityDate] DESC ```
SELECTing only one copy of a row with a specific key that is coming from multiple tables
[ "", "sql", "sql-server", "" ]
I am working on a project to get information from a web page. In the HTML source I have the following: **Resultado de Busca:** Foram encontrados **264** casais I need to get the number between "encontrados" and "casais". Is there any way in Python to do that? What string function should I use? I want to avoid using regular expressions in this case. ``` import urllib.request f = urllib.request.urlopen("http://listadecasamento.fastshop.com.br/ListaCasamento/ListaCasamentoBusca.aspx?Data=2013-06-07") s = f.read() print(s.split()) ``` I got this so far, but now I am having trouble finding the number I need. ``` import urllib.request f = urllib.request.urlopen("http://listadecasamento.fastshop.com.br/ListaCasamento/ListaCasamentoBusca.aspx?Data=2013-06-07") s = f.read() num = int(s[s.index("encontrados")+len("encontrados"):s.index("casais")]) ``` This gives me the error below: TypeError: Type str doesn't support the buffer API
I'd recommend using a library such as [Beautiful Soup](https://pypi.python.org/pypi/beautifulsoup4) if it's HTML you want to parse. No need for regex. **EDIT** Using the URL you just added, this is the sample code to get the HTML object out: ``` import BeautifulSoup import re import urllib data = urllib.urlopen('http://listadecasamento.fastshop.com.br/ListaCasamento/ListaCasamentoBusca.aspx?Data=2013-06-07').read() soup = BeautifulSoup.BeautifulSoup(data) element = soup.find('span', attrs={'class': re.compile(r".*\btxt_resultad_busca_casamento\b.*")}) print element.text ``` This will find the HTML `span` element on the page that has the class `txt_resultad_busca_casamento`, which I believe is the data you're trying to extract. From there you can just parse the `.text` attribute to get the exact data you're interested in. **EDIT 2** Oops, just realised that uses regular expressions... it seems class matching in BeautifulSoup isn't perfect! This line should work instead, at least until the site changes their HTML: ``` element = soup.find('div', attrs={'id': 'ctl00_body_uppBusca'}).find('span') ```
Given that you shouldn't parse HTML with regular expressions, if you treat your page as plain text you can do something like: ``` a = 'Resultado de Busca: Foram encontrados 264 casais' #your page text num = int(a[a.index("encontrados")+len("encontrados"):a.index("casais")]) ```
search for a string inside html source with python (3.3.1)
[ "", "python", "string", "split", "" ]
I want to find words that have consecutive letter pairs using regex. I know for just one consecutive pair like **zoo (oo), puzzle (zz), arrange (rr)**, it can be achieved by `'(\w){2}'`. But how about * two consecutive pairs: **committee (ttee)** * three consecutive pairs: **bookkeeper (ookkee)** edit: * `'(\w){2}'` is actually wrong, it finds any two letters instead of a double letter pair. * My intention is to find the **words** that have letter pairs, not the pairs. * By 'consecutive', I mean there is no other letter between letter pairs.
You can use this pattern: ``` [a-z]*([a-z])\1([a-z])\2[a-z]* ``` the idea is to use backreferences `\1` and `\2` that refer to the capturing groups. Note that `(\w){2}` matches two word characters but not the same character.
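A short demonstration of that pattern against the words from the question (the helper name is mine; `re.fullmatch` needs Python 3.4+):

```python
import re

# Pattern from above: a word containing two adjacent doubled letters.
pattern = re.compile(r'[a-z]*([a-z])\1([a-z])\2[a-z]*')

def has_two_pairs(word):
    """Return the word if the whole word matches, else None."""
    m = pattern.fullmatch(word)
    return m.group() if m else None

print(has_two_pairs('committee'))   # committee ('tt' followed by 'ee')
print(has_two_pairs('bookkeeper'))  # bookkeeper ('oo' followed by 'kk')
print(has_two_pairs('zoo'))         # None -- only a single pair
```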
Use [re.finditer](http://docs.python.org/2/library/re.html#re.finditer) ``` >>> [m.group() for m in re.finditer(r'((\w)\2)+', 'zoo')] ['oo'] >>> [m.group() for m in re.finditer(r'((\w)\2)+', 'arrange')] ['rr'] >>> [m.group() for m in re.finditer(r'((\w)\2)+', 'committee')] ['mm', 'ttee'] >>> [m.group() for m in re.finditer(r'((\w)\2)+', 'bookkeeper')] ['ookkee'] ``` Check whether the string contain consecutive pair: ``` >>> bool(re.search(r'((\w)\2){2}', 'zoo')) False >>> bool(re.search(r'((\w)\2){2}', 'arrange')) False >>> bool(re.search(r'((\w)\2){2}', 'committee')) True >>> bool(re.search(r'((\w)\2){2}', 'bookkeeper')) True ``` You can also use following non-capturing (`?:`) version: ``` (?:(\w)\1){2} ```
python: how to find consecutive pairs of letters by regex?
[ "", "python", "regex", "" ]
I have an Excel workbook that runs some VBA on opening, which refreshes a pivot table and does some other stuff. Then I wish to import the results of the pivot table refresh into a dataframe in Python for further analysis. ``` import xlrd wb = xlrd.open_workbook('C:\Users\cb\Machine_Learning\cMap_Joins.xlsm') ``` The refreshing and opening of the file works fine. But how do I select the data from the first sheet, from say row 5 (including the header) down to the last record n?
You can use pandas' ExcelFile [`parse`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.ExcelFile.parse.html) method to read Excel sheets, see [io docs](http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files): ``` xls = pd.ExcelFile('C:\Users\cb\Machine_Learning\cMap_Joins.xlsm') df = xls.parse('Sheet1', skiprows=4, index_col=None, na_values=['NA']) ``` *`skiprows` will ignore the first 4 rows (i.e. start at row index 4); see the docs for several [other options](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.ExcelFile.parse.html).*
The accepted answer is old (as discussed in comments of the accepted answer). Now the preferred option is using [pd.read\_excel()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html). For example: ``` df = pandas.read_excel('C:\Users\cb\Machine_Learning\cMap_Joins.xlsm', skiprows=[0,1,2,3,4]) ```
reading excel to a python data frame starting from row 5 and including headers
[ "", "python", "excel", "pandas", "dataframe", "import", "" ]
I'm completely new at SQL and have been at this for 6 hours now, to now avail. I must be missing something simple. In a nutshell: I want to delete a post in a Wordpress database based on a partial matching string in another table in the database. Here's what I've got so far. It should explain what I'm trying to do: ``` CASE WHEN option_value FROM wp_options LIKE '%domain.com%' THEN DELETE FROM wp_posts WHERE post_title = 'uniqueID' ELSE DELETE FROM wp_posts WHERE post_title = 'XXXXXXXXXX' END ``` The ELSE and X's are there to make sure nothing changes if they don't match. What am I missing? :-) @Dan Bracuk This was the final code that did the trick: ``` DELETE FROM wp_posts WHERE post_title = CASE WHEN (SELECT option_value FROM wp_options WHERE option_value LIKE '%domain.com%' LIMIT 1) LIKE '%domain.com' THEN 'UniqueID' ELSE 'XXXXXXX' END ```
Your sql syntax is not valid. It has to resemble this: ``` delete from wp_posts where post_title = case when (subquery to get data from option_value table) like '%domain.com%' then 'uniqueId' else 'XXXXXXX' end ``` However, it's probably better not to use a case construct here. I would do something like this: ``` delete from wp_posts where (post_title = 'uniqueId' and somefield in (select somefield from option_value where wp_options like '%domain.com%' ) ) or (post_title = 'XXXXXX' and somefield in (select somefield from option_value except select somefield from option_value where wp_options like '%domain.com%' ) ) ``` Note that some databases use the word minus instead of except and others simply don't support that syntax. However, this gives you the general idea.
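A toy run of the corrected CASE form, sketched in SQLite from Python with made-up rows standing in for the WordPress tables (an `EXISTS` subquery is used for the condition, which is equivalent here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE wp_posts (post_title TEXT);
CREATE TABLE wp_options (option_value TEXT);
INSERT INTO wp_posts VALUES ('uniqueID'), ('keep me');
INSERT INTO wp_options VALUES ('http://domain.com/page');
""")

# 'uniqueID' is deleted only because wp_options contains a '%domain.com%' value
conn.execute("""
    DELETE FROM wp_posts
    WHERE post_title = CASE
        WHEN EXISTS (SELECT 1 FROM wp_options
                     WHERE option_value LIKE '%domain.com%')
        THEN 'uniqueID'
        ELSE 'XXXXXXX'
    END
""")

rows = [r[0] for r in conn.execute("SELECT post_title FROM wp_posts")]
print(rows)  # ['keep me']
```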
It's hard to give a definitive answer without a bit more context. However, from an initial look at your code, assuming that it needs to adhere to standard SQL syntax, the following line would need updating from ``` option_value FROM wp_options LIKE '%domain.com%' ``` to ``` WHEN EXISTS(SELECT 1 FROM wp_options WHERE option_value LIKE '%domain.com%') ``` Hope that helps!
Deleting a row in Table X based on the existence of a value in Table Y
[ "", "sql", "wordpress", "" ]
It seems that many get this error but each situation is different. My code: ``` i = 0 def sort(a): b = len(a) if(i == b): print (a) elif(b == 0): print ('Error. No value detected...') elif(b == 1): print (a) elif(a[i]>a[i+1]): a[i], a[i+1] = a[i+1], a[i] i = i + 1 print(a) sort(a) ``` Error Code: ``` Traceback (most recent call last): File "<string>", line 301, in runcode File "<interactive input>", line 1, in <module> File "(File location, you don't need to know....)", line 8, in sort if(i == b): UnboundLocalError: local variable 'i' referenced before assignment ``` I am not sure what this error means or what is wrong.
Your variable `i` is defined at the global (module) level. See [Short Description of the Scoping Rules?](https://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules) for info the order in which python looks for your variable. If you only try to reference the variable from within your function, then you will not get the error: ``` i = 0 def foo(): print i foo() ``` Since there is no local variable `i`, the global variable is found and used. But if you assign to `i` in your function, then a local variable is created: ``` i = 0 def foo(): i = 1 print i foo() print i ``` Note that the global variable is unchanged. In your case you include the line `i = i + 1`, thus a local variable is created. But you attempt to reference this variable before it is assigned any value. This illustrates the error you are getting: ``` i = 0 def foo(): print i i = 1 foo() ``` Either declare `global i` within your function, to tell python to use the global variable rather than creating a local one, or rewrite your code completely (since it does not perform as I suspect you think it does)
Since you assign to it, your variable `i` is a local variable in your `sort()` function. However, you are trying to use it before you assign anything to it, so you get this error. If you intend to use the global variable `i` you must include the statement `global i` somewhere in your function.
Error Code: UnboundLocalError: local variable referenced before assignment
[ "", "python", "computer-science", "" ]
I'm with a great difficulty in formulate a SQL for a module of notifications when a new user register. I have a database of **Notifications**, I set up a notification to be sent. Examples: * Send notification when a man and blue eyes register; * Send notification when a woman register; * Send a notification when a blue-eyed woman, brown and work in the company Foo; With these rules we can see that there can be several possibilities (so the table columns are optional). **Some details:** * The table columns are defined as integers because are FK. I just did not put tables because the structure is unnecessary, since the SQL will only relashionship between User and Notification; * The date field is used to store both the date of registration of the notice of such person. So I can only filter to notify the new register of user; --- ## Table Structure ``` User: +------------+----------+------+-----+---------+------------+ | Field | Type | Null | Key | Default | Extra | +------------+----------+------+-----+---------+------------+ | Id | int(11) | NO | PRI | | auto_incre | | Gender | int(11) | YES | | | | | HairColor | int(11) | YES | | | | | EyeColor | int(11) | YES | | | | | Company | int(11) | YES | | | | | Date | datetime | NO | | | | | ... 
| | | | | | +------------+----------+------+-----+---------+------------+ Notification: +------------+----------+------+-----+---------+------------+ | Field | Type | Null | Key | Default | Extra | +------------+----------+------+-----+---------+------------+ | Id | int(11) | NO | PRI | | auto_incre | | Gender | int(11) | YES | | | | | HairColor | int(11) | YES | | | | | EyeColor | int(11) | YES | | | | | Company | int(11) | YES | | | | | Date | datetime | NO | | | | +------------+----------+------+-----+---------+------------+ ``` --- ## Initial idea The initial idea I had was doing a select for each possibility and joining via union: ``` -- Selects new users by gender notification SELECT * FROM Notification inner join User on ( User.Date >= Notification.Date and Notification.Gender = User.Gender and Notification.HairColor is null and Notification.EyeColor is null and Notification.Company is null ) union all -- Selects new users by gender and hair color notification SELECT * FROM Notification inner join User on ( User.Date >= Notification.Date and Notification.Gender = User.Gender and Notification.HairColor = User.HairColor and Notification.EyeColor is null and Notification.Company is null ) -- ... and so on, doing a select for each option, resulting in 16 selects (4 columns: gender, hair color, eye color and company) ``` --- # My question is: Is there another way I can do this SQL querying all the possibilities of notifications in a more easy? Following this structure of 4 columns we already have 16 selects. In my real structure will have more columns with something unfeasible to keep it that way. Is there any other suggestion storage structure of the data for a better way to do this functionality?
```
SELECT *
  FROM Notification
 inner join User on (
       User.Date >= Notification.Date
   and (Notification.Gender is null or Notification.Gender = User.Gender)
   and (Notification.HairColor is null or Notification.HairColor = User.HairColor)
   and (Notification.EyeColor is null or Notification.EyeColor = User.EyeColor)
   and (Notification.Company is null or Notification.Company = User.Company)
 )
```

This way you get, for every notification stored in the table, the set of users it matches.
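The NULL-as-wildcard matching in the query above can be sketched in plain Python to check the logic (the field names mirror the tables; the numeric codes are made up for illustration):

```python
# Sketch of the "NULL column acts as a wildcard" rule used in the accepted
# query: a notification matches a user when every non-None notification
# field equals the corresponding user field.
FIELDS = ("gender", "hair_color", "eye_color", "company")

def matches(notification, user):
    return all(
        notification[f] is None or notification[f] == user[f]
        for f in FIELDS
    )

# Hypothetical codes: 1 = male, 2 = female; 10 = blue eyes; 20 = brown hair.
notif_blue_eyed_men = {"gender": 1, "hair_color": None, "eye_color": 10, "company": None}
notif_any_woman     = {"gender": 2, "hair_color": None, "eye_color": None, "company": None}

user = {"gender": 1, "hair_color": 20, "eye_color": 10, "company": 5}

print(matches(notif_blue_eyed_men, user))  # True: a man with blue eyes
print(matches(notif_any_woman, user))      # False: user is not a woman
```

A notification with every field set to `None` matches everyone, which is exactly what the all-`is null` OR branches produce in SQL.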
This is the way I would implement this user registration / notification functionality:

1. Three tables: Users, Notif\_type, Notif\_queue.
2. A trigger on insert on table Users which calls a stored procedure SendNotification(user\_id).
3. The stored proc will have the logic, which you can change over time without having to modify the schema/data. The logic will be:

* to select the type of notification (from Notif\_type) the new user should receive based on your rules;
* to insert a row in Notif\_queue which holds a FK to user\_id and notif\_type\_id, so that the functionality notifying the user is completely de-coupled from the notification rules.
SQL design for notification of new registered users
[ "", "sql", "database-design", "mysqli", "" ]
I tried to implement a simple adjacency matrix to keep track of which nodes are connected to which nodes in an undirected graph. However, my adjacency matrix keeps screwing up by changing entire columns instead of individual cells. Here's my code: ``` def setup_adj_matrix(size, edges): # initialize matrix with zeros adj_matrix = [[0] * size] * size # edges is a list of tuples, representing 2 nodes connected by an edge for edge in edges: v1 = edge[0] v2 = edge[1] adj_matrix[v1][v2] = 1 adj_matrix[v2][v1] = 1 for row in adj_matrix: print row ``` for a graph with 3 nodes (0, 1, 2) and edges [(0,1),(0,2),(1,2)], I should be getting ``` [[0,1,1], [1,0,1], [1,1,0]] ``` However, I am getting all 1's. Any ideas where the problem might be?
The lists are all shallow copies of each other, so when you edit one you're actually editing each row. Try this for initializing the matrix: ``` adj_matrix = [[0] * size for i in range(size)] ```
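The difference between the two initializers is easy to see in isolation; a quick sketch:

```python
size = 3

# Buggy: the outer list holds three references to the SAME inner list.
shared = [[0] * size] * size
shared[0][1] = 1

# Fixed: the comprehension builds a fresh inner list on each iteration.
independent = [[0] * size for _ in range(size)]
independent[0][1] = 1

print(shared)       # [[0, 1, 0], [0, 1, 0], [0, 1, 0]] -- every row changed
print(independent)  # [[0, 1, 0], [0, 0, 0], [0, 0, 0]] -- only row 0 changed
```

This is why the adjacency matrix in the question ends up all 1's: every `adj_matrix[v1][v2] = 1` writes into the one shared row.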
The multiplication operator with a list and an int returns multiple references to the same list, not multiple copies. Your outer array contains the same inner list three times. You can create the correct array with a double list comprehension:

```
def init_matrix(x, y):
    return [[0 for i in range(x)] for j in range(y)]
```
Adjacency Matrix not populating correctly python
[ "", "python", "" ]
How do I set up Selenium to work with Python? I just want to write/export scripts in Python, and then run them. Are there any resources for that? I tried googling, but the stuff I found was either referring to an outdated version of Selenium (RC), or an outdated version of Python.
You mean Selenium WebDriver? Huh....

**Prerequisite**: Install Python based on your OS

Install with the following command

```
pip install -U selenium
```

And use this module in your code

```
from selenium import webdriver
```

You can also use many of the following as required

```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
```

## Here is an updated answer

I would recommend you to run the script without an IDE... Here is my approach

1. USE IDE to find xpath of object / element
2. And use find\_element\_by\_xpath().click()

An example below shows login page automation

```
#ScriptName : Login.py
#---------------------
from selenium import webdriver

#Following imports are optional, include them as required
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException

baseurl = "http://www.mywebsite.com/login.php"
username = "admin"
password = "admin"

xpaths = { 'usernameTxtBox' : "//input[@name='username']",
           'passwordTxtBox' : "//input[@name='password']",
           'submitButton'   : "//input[@name='login']"
         }

mydriver = webdriver.Firefox()
mydriver.get(baseurl)
mydriver.maximize_window()

#Clear Username TextBox if already allowed "Remember Me"
mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).clear()

#Write Username in Username TextBox
mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).send_keys(username)

#Clear Password TextBox if already allowed "Remember Me"
mydriver.find_element_by_xpath(xpaths['passwordTxtBox']).clear()

#Write Password in password TextBox
mydriver.find_element_by_xpath(xpaths['passwordTxtBox']).send_keys(password)

#Click Login button
mydriver.find_element_by_xpath(xpaths['submitButton']).click()
```

There is another way that you can find the xpath of any object -

1. Install the Firebug and Firepath addons in Firefox
2. Open the URL in Firefox
3. Press F12 to open the Firepath developer instance
4. Select Firepath in the lower browser pane and choose select by "xpath"
5. Move the mouse cursor to the element on the webpage
6. In the xpath textbox you will get the xpath of the object/element
7. Copy-paste the xpath into the script

Run the script -

```
python Login.py
```

You can also use a CSS selector instead of xpath. CSS selectors are slightly faster than xpath in most cases, and are usually preferred over xpath (if there isn't an ID attribute on the elements you're interacting with). Firepath can also capture the object's locator as a CSS selector if you move your cursor to the object. You'll have to update your code to use the equivalent find by CSS selector method instead -

```
find_element_by_css_selector(css_selector)
```
There are a lot of sources for selenium - here is a good one for simple use: [Selenium](http://selenium-python.readthedocs.io/), and here is an example snippet too: [Selenium Examples](https://gist.github.com/hugs/830011)

You can find a lot of good sources for using Selenium; it's not too hard to get it set up and start using it.
How to use Selenium with Python?
[ "", "python", "selenium", "selenium-webdriver", "automation", "" ]
When I type `play`, a random number is assigned to `number1`. It asks me for a prediction and I put in a number, say 5. After putting in 5 I always get the `else` statement and not the `if` statement. I even put a `print()` to find out what number was generated. Sometimes I'm right on or within 1 (The game also allows for within 1) and it still re-directs me to the `else` statement. Could anyone help? Thanks. ``` money = 1000000 def luckyrollgame(): global money from random import choice print('You are in the game lobby of Lucky Roll.') print('Choose either \'rules,\' \'play,\' or \'back\'') lobby = input() if lobby == 'rules': luckyrollgamerules() if lobby == 'play': die = [1, 2, 3, 4, 5, 6] number1 = choice(die) prediction = input('Please type your prediction number: ') if prediction == number1: print('Good job! You guessed right!') money = money + 3 print('You now have ' + str(dollars) + 'dollars.') if prediction == number1 - 1: print('Good job! You guessed right!') money = money + 3 print('You now have ' + str(dollars) + 'dollars.') if prediction == number1 + 1: print('Good job! You guessed right!') money = money + 3 print('You now have ' + str(dollars) + 'dollars.') else: print('I\'m sorry. You didn\'t get the number right.') print('The number was ' + str(number1) + '.') money = money - 1 print('You now have ' + str(money) + 'dollars.') print('--------------------------------------------------') altluckyrollgame() if lobby == 'back': altvillagescene() else: print('Please type a valid option.') print('--------------------------------') altluckyrollgame() ``` \*Functions such as `altluckyrollgame()` or `altvillagescene()` are part of the game logic and defined elsewhere, so you can ignore them.
Use `elif` instead of separate `if` statements after the first condition. At current, your code

```
if lobby == 'back':
    altvillagescene()
else:
    print('Please type a valid option.')
    print('--------------------------------')
    altluckyrollgame()
```

is checking only whether `lobby == 'back'` and running the `else` in all other cases — even when an earlier `if` already matched. You probably don't want this, as the code under `else` runs in addition to every other `if` case.

```
if x == 0:
    pass
elif x == 1:
    pass
else:
    pass
```

The code should look like this (note that `input()` returns a string, so the prediction must be converted to an `int` before comparing it with the die roll; the undefined `dollars` variable in your code should also be `money`):

```
money = 1000000

def luckyrollgame():
    global money
    from random import choice
    print('You are in the game lobby of Lucky Roll.')
    print('Choose either \'rules,\' \'play,\' or \'back\'')
    lobby = input()
    if lobby == 'rules':
        luckyrollgamerules()
    elif lobby == 'play':
        die = [1, 2, 3, 4, 5, 6]
        number1 = choice(die)
        prediction = input('Please type your prediction number: ')
        ######################### This too
        try:
            prediction = int(prediction)
        except ValueError:
            prediction = -10
        #########################
        if prediction == number1:
            print('Good job! You guessed right!')
            money = money + 3
            print('You now have ' + str(money) + ' dollars.')
        elif prediction == number1 - 1:
            print('Good job! You guessed right!')
            money = money + 3
            print('You now have ' + str(money) + ' dollars.')
        elif prediction == number1 + 1:
            print('Good job! You guessed right!')
            money = money + 3
            print('You now have ' + str(money) + ' dollars.')
        else:
            print('I\'m sorry. You didn\'t get the number right.')
            print('The number was ' + str(number1) + '.')
            money = money - 1
            print('You now have ' + str(money) + ' dollars.')
        print('--------------------------------------------------')
        altluckyrollgame()
    elif lobby == 'back':
        altvillagescene()
    else:
        print('Please type a valid option.')
        print('--------------------------------')
        altluckyrollgame()
```
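Both pitfalls — the dangling `else` attaching only to the last `if`, and comparing the string from `input()` against an int — can be seen in a few lines (a minimal sketch, separate from the game code):

```python
def classify_if(n):
    """Chain of independent ifs: the else belongs ONLY to the last if."""
    hits = []
    if n == 1:
        hits.append("one")
    if n == 2:
        hits.append("two")
    else:
        hits.append("not two")  # runs for EVERY n != 2, even n == 1
    return hits

def classify_elif(n):
    """if/elif/else: exactly one branch ever runs."""
    if n == 1:
        return ["one"]
    elif n == 2:
        return ["two"]
    else:
        return ["something else"]

print(classify_if(1))    # ['one', 'not two'] -- two branches fired
print(classify_elif(1))  # ['one'] -- exactly one branch fired

# input() always returns a string, so a guess must be converted first:
print("5" == 5)       # False
print(int("5") == 5)  # True
```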
Your problem is that you're comparing a string to an integer. You'll need to first convert the input to an `int`: ``` try: guess = int(prediction) except ValueError: #Handle when a person enters an invalid number here ```
Condition always evaluates to else branch
[ "", "python", "" ]
I have a hash which I am using for an SQL query:

```
:profiles => {:gender => Female, :idea => ''}
```

The way it is set up now, it is looking for profiles with gender female and an empty idea, meaning the string in the idea column is empty. How do I get this to do the exact opposite? I want to find rows where the idea column is **not** empty.

**NOTE: I am not looking for `IS NOT NULL`, because the string is empty, not NULL.**
There is no way to express this within the hash itself but you can use one of the other styles of querying e.g. ``` Profile.where("gender = ? and idea != ''", gender) ``` or if you upgrade to Rails 4 you can use the `not` method to invert a condition. e.g. ``` Profile.where.not(idea: '') ``` and you can combine multiple `where` calls e.g. ``` Profile.where(gender: 'Female').where.not(idea: '') ```
You may be looking for !string.empty?
How to put "Is not empty" as a hash value?
[ "", "sql", "ruby-on-rails", "ruby", "ruby-on-rails-3", "" ]
How can I select like this? Can I create a user-defined aggregate function?

`SELECT Max(A),(SELECT TOP 1 FROM TheGroup Where B=Max(A))` FROM MyTable

where MyTable is as shown below

```
A    B    C
--------------
1    2    S
3    4    S
4    5    T
6    7    T
```

I want a query like this

```
SELECT MAX(A), (B Where A=Max(A)), C
FROM MYTable
GROUP BY C
```

I'm expecting the result as below

```
MAX(A)  Condition  C
-----------------------
3       4          S
6       7          T
```
Try the following query:

```
SELECT TABLE1.A ,
       TABLE2.B ,
       TABLE1.C
FROM
  ( SELECT MAX(A) AS A, C
   FROM MYTable
   GROUP BY C ) AS TABLE1
INNER JOIN
  ( SELECT *
   FROM MYTable ) AS TABLE2 ON TABLE1.A = TABLE2.A
```

[SQLFIDDLE](http://sqlfiddle.com/#!3/ae901/3)

You can do it with a simple join query. A join query often runs faster than an `IN` query, since the joined subquery is evaluated only once when the query executes. The same result can also be achieved using an `IN` query.
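What the join computes — for each group C, keep the full row whose A is the group maximum — can be checked with a tiny Python sketch over the sample data:

```python
# Plain-Python sketch of the "row with MAX(A) per group C" logic.
rows = [
    (1, 2, "S"),
    (3, 4, "S"),
    (4, 5, "T"),
    (6, 7, "T"),
]

best = {}  # C -> (A, B, C) row with the largest A seen so far
for a, b, c in rows:
    if c not in best or a > best[c][0]:
        best[c] = (a, b, c)

result = sorted(best.values())
print(result)  # [(3, 4, 'S'), (6, 7, 'T')]
```

This matches the expected result table in the question.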
``` SELECT A,B,C FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY C ORDER BY A DESC) RN FROM MyTable) WHERE RN = 1 ``` (this query will always return only one row per C value) OR ``` WITH CTE_Group AS ( SELECT C, MAX(A) AS MaxA FROM MyTable GROUP BY C ) SELECT g.MaxA, t.B, g.C FROM MyTable t INNER JOIN CTE_Group g ON t.A = g.MaxA AND t.C = g.C ``` (if there are multiple rows that have same Max(A) value - this query will return all of them)
SELECT Row Values WHERE MAX() is Column Value In GROUP BY Query
[ "", "sql", "sql-server", "t-sql", "" ]
Database: Sybase Advantage 11 On my quest to normalize data, I am trying to delete the results I get from this `SELECT` statement: ``` SELECT tableA.entitynum FROM tableA q INNER JOIN tableB u on (u.qlabel = q.entityrole AND u.fieldnum = q.fieldnum) WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR q.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date') ; ``` This is the `DELETE` statement I have come up with: ``` DELETE FROM tableA WHERE (SELECT q.entitynum FROM tableA q INNER JOIN tableB u on (u.qlabel = q.entityrole AND u.fieldnum = q.fieldnum) WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR q.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date')) ; ``` I continuously get this error when I try to run this statement: ``` ERROR IN SCRIPT: poQuery: Error 7200: AQE Error: State = S0000; NativeError = 2124; [iAnywhere Solutions][Advantage SQL Engine]Invalid operand for operator: = Boolean value cannot be operated with non-Boolean value. ``` I have also tried this statement: ``` DELETE FROM tableA INNER JOIN tableB u on (u.qlabel = tableA.entityrole AND u.fieldnum = tableA.fieldnum) WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR tableA.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date') ; ``` Which results in: ``` ERROR IN SCRIPT: poQuery: Error 7200: AQE Error: State = 42000; NativeError = 2117; [iAnywhere Solutions][Advantage SQL Engine] Unexpected token: INNER -- Expecting semicolon. -- Location of error in the SQL statement is: 23 (line: 2 column: 1) ``` Could someone aid me in properly constructing a DELETE query that will result in the proper data being removed?
You need to identify the primary key in TableA in order to delete the correct record. The primary key may be a single column or a combination of several columns that uniquely identifies a row in the table. If there is no primary key, then the ROWID pseudo column may be used as the primary key. ``` DELETE FROM tableA WHERE ROWID IN ( SELECT q.ROWID FROM tableA q INNER JOIN tableB u on (u.qlabel = q.entityrole AND u.fieldnum = q.fieldnum) WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR q.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date')); ```
Your second `DELETE` query was nearly correct. Just be sure to **put the table name (or an alias) between `DELETE` and `FROM`** to specify which table you are deleting from. This is simpler than using a nested `SELECT` statement like in the other answers. ## Corrected Query (option 1: using full table name): ``` DELETE tableA FROM tableA INNER JOIN tableB u on (u.qlabel = tableA.entityrole AND u.fieldnum = tableA.fieldnum) WHERE (LENGTH(tableA.memotext) NOT IN (8,9,10) OR tableA.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date') ``` ## Corrected Query (option 2: using an alias): ``` DELETE q FROM tableA q INNER JOIN tableB u on (u.qlabel = q.entityrole AND u.fieldnum = q.fieldnum) WHERE (LENGTH(q.memotext) NOT IN (8,9,10) OR q.memotext NOT LIKE '%/%/%') AND (u.FldFormat = 'Date') ``` More examples here: [How to Delete using INNER JOIN with SQL Server?](https://stackoverflow.com/questions/16481379/how-to-delete-using-inner-join-with-sql-server)
How to write a SQL DELETE statement with a SELECT statement in the WHERE clause?
[ "", "sql", "select", "where-clause", "advantage-database-server", "sql-delete", "" ]
There are two n-length arrays (`a` and `b`) consisting of integers > 2. On every turn I want to remove an integer from each array (`a[i]` and `b[j]`), given that a certain condition about them is true (e.g. that they are not co-prime). (If the condition is not true, I'll try to remove another combination.)

In the end, I want to find the maximum number of turns I can do this (until there is no combination left to remove that meets the condition). Let's call this the optimum number of turns.

I tried to solve this with a search algorithm and a `PriorityQueue` using Python:

```
def search(n, a, b):
    q = queue.PriorityQueue()
    encountered = set()
    encountered.add((tuple(a), tuple(b)))
    q.put((number_of_coprime_combinations(a, b), a, b))
    while q:
        cost, a, b = q.get()
        combs = not_coprime_combinations(a, b)
        if not combs:
            return n - len(a)
        for a, b in combs:
            if not (tuple(a), tuple(b)) in encountered:
                q.put((number_of_coprime_combinations(a, b), a, b))
                encountered.add((tuple(a), tuple(b)))
```

`number_of_coprime_combinations(a, b)` returns the number of possible co-prime combinations given the arrays `a` and `b`. This is used as the cost of the given state of the two arrays.

```
def number_of_coprime_combinations(a, b):
    n = 0
    for idx_a, x in enumerate(a):
        for idx_b, y in enumerate(b):
            if is_coprime(x, y):
                n += 1
    return n
```

`not_coprime_combinations(a, b)` returns a list of possible states, where a not co-prime combination has been removed from `a` and `b`:

```
def not_coprime_combinations(a, b):
    l = []
    for idx_a, x in enumerate(a):
        for idx_b, y in enumerate(b):
            if not is_coprime(x, y):
                u, v = a[:], b[:]
                del(u[idx_a])
                del(v[idx_b])
                l.append((u, v))
    return l

>>> not_coprime_combinations([2,3],[5,6])
[([3], [5]), ([2], [5])]
```

The problem is that this solution is highly inefficient for large arrays of large integers. So I'm wondering if there is any better solution to this problem.
**EXAMPLE:** ``` n = 4 a = [2, 5, 6, 7] b = [4, 9, 10, 12] ``` One could remove: ``` (2, 4) (5, 10) (6, 9) ``` Which would lead to the optimum solution: ``` a = [7] b = [12] ``` But if one would remove: ``` (6, 12) (2, 10) ``` one would get to the suboptimal solution: ``` a = [5, 7] b = [4, 9] ``` The algorithm should always come up the optimum number of turns (in this example 3).
As far as I can tell, to solve this: * Construct bipartite graph G such that for each Ai and Bj, if GCD(Ai,Bj) > 1, there is an edge (Ai, Bj) in G. * Find the **maximum matching** of G * Cardinality of the matching is the solution I don't see how this could be solved faster.
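A maximum matching on that bipartite graph can be computed with Kuhn's augmenting-path algorithm; here is a sketch using the example from the question (function and variable names are my own):

```python
from math import gcd

def max_removals(a, b):
    """Maximum number of (a[i], b[j]) pairs, each index used at most once,
    where gcd(a[i], b[j]) > 1 -- i.e. a maximum bipartite matching."""
    # Edge from a[i] to b[j] whenever the pair is NOT co-prime.
    adj = [[j for j, y in enumerate(b) if gcd(x, y) > 1] for x in a]
    match_b = [-1] * len(b)  # match_b[j] = index in a matched to b[j], or -1

    def try_augment(i, seen):
        # Try to match a[i], possibly re-routing previously matched vertices.
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_b[j] == -1 or try_augment(match_b[j], seen):
                    match_b[j] = i
                    return True
        return False

    return sum(try_augment(i, set()) for i in range(len(a)))

print(max_removals([2, 5, 6, 7], [4, 9, 10, 12]))  # 3
```

This returns 3 on the question's example, matching the stated optimum (the greedy order that reaches only 2 is avoided because augmenting paths can re-route earlier matches).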
I know where you took this problem from. And your solution for this problem is wrong because it's O(n^2) and greedy. In that problem, n <= 10^5 and 2 < a,b < 10^9 for the array values. I think in this problem you have to find some trick. All standard algorithms for maximum matchings in bipartite graphs will TLE.
Find maximum number of valid combinations for elements of two arrays
[ "", "python", "algorithm", "python-3.x", "" ]
I have a query with results like ID, Value. What I want is to get the values in order of their ids and also calculate the accumulated value in another column. take a look at my simplified code: ``` declare @TempTable Table ( ID int, Value int ) insert into @TempTable values (1, 10), (2, -15), (3, 12), (4, 18), (5, 5) select t1.ID, t1.Value, SUM(t2.Value) AccValue from @TempTable t1 inner join @TempTable t2 on t1.ID >= t2.ID group by t1.ID, t1.Value order by t1.ID Result: ID Value AccValue 1 10 10 2 -15 -5 3 12 7 4 18 25 5 5 30 ``` What I have come up with, is to use inner join between the result and itself for that purpose. But for huge amount of data, it's clearly a low performance issue. Is there any other alternative to do that?
In 2012 version, you can use: ``` SELECT id, Value, AccValue = SUM(Value) OVER (ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM @TempTable ; ``` For previous versions of SQL-Server, see my answer in this similar question: [Recursive SQL- How can I get this table with a running total?](https://stackoverflow.com/questions/17050660/recursive-sql-how-can-i-get-this-table-with-a-running-total/17051046#17051046), with a **cursor** solution. Even better, follow the link to the great article by [@Aaron Bertrand](https://dba.stackexchange.com/users/1186/aaron-bertrand), that has a thorough test of various methods to calculate a running total: **[Best approaches for running totals – updated for SQL Server 2012](http://www.sqlperformance.com/2012/07/t-sql-queries/running-totals)**
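The running total the window function produces can be reproduced in Python with `itertools.accumulate`, which is a handy way to sanity-check expected output against the sample data:

```python
from itertools import accumulate

values = [10, -15, 12, 18, 5]  # the Value column ordered by ID
running = list(accumulate(values))
print(running)  # [10, -5, 7, 25, 30] -- matches the AccValue column
```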
You can use recursion: ``` ;WITH x AS ( SELECT [ID], [Value], bal=[Value] FROM Table1 WHERE [ID] = 1 UNION ALL SELECT y.[ID], y.[Value], x.bal+(y.[Value]) as bal FROM x INNER JOIN Table1 AS y ON y.[ID] = x.[ID] + 1 ) SELECT [ID], [Value], AccValue= bal FROM x order by ID OPTION (MAXRECURSION 10000); ``` **[SQL FIDDLE](http://www.sqlfiddle.com/#!3/10df7/1)**
Accumulating in SQL
[ "", "sql", "sql-server", "cumulative-sum", "" ]
I want to count the number of occurrences of each of certain words in a data frame. I currently do it using `str.contains`: ``` a = df2[df2['col1'].str.contains("sample")].groupby('col2').size() n = a.apply(lambda x: 1).sum() ``` Is there a method to match regular expression and get the count of occurrences? In my case I have a large dataframe and I want to match around 100 strings.
Update: Original answer counts those rows which contain a substring. To count all the occurrences of a substring you can use [`.str.count`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.count.html): ``` In [21]: df = pd.DataFrame(['hello', 'world', 'hehe'], columns=['words']) In [22]: df.words.str.count("he|wo") Out[22]: 0 1 1 1 2 2 Name: words, dtype: int64 In [23]: df.words.str.count("he|wo").sum() Out[23]: 4 ``` --- The `str.contains` method accepts a regular expression: ``` Definition: df.words.str.contains(self, pat, case=True, flags=0, na=nan) Docstring: Check whether given pattern is contained in each string in the array Parameters ---------- pat : string Character sequence or regular expression case : boolean, default True If True, case sensitive flags : int, default 0 (no flags) re module flags, e.g. re.IGNORECASE na : default NaN, fill value for missing values. ``` For example: ``` In [11]: df = pd.DataFrame(['hello', 'world'], columns=['words']) In [12]: df Out[12]: words 0 hello 1 world In [13]: df.words.str.contains(r'[hw]') Out[13]: 0 True 1 True Name: words, dtype: bool In [14]: df.words.str.contains(r'he|wo') Out[14]: 0 True 1 True Name: words, dtype: bool ``` To count the occurences you can just sum this boolean Series: ``` In [15]: df.words.str.contains(r'he|wo').sum() Out[15]: 2 In [16]: df.words.str.contains(r'he').sum() Out[16]: 1 ```
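For intuition, the per-row counting that `str.count("he|wo")` performs is just non-overlapping regex matches over each string; a stdlib-only sketch of the same logic:

```python
import re

words = ["hello", "world", "hehe"]
pattern = re.compile(r"he|wo")

# One count per row, then a grand total -- mirrors .str.count(...).sum().
per_row = [len(pattern.findall(w)) for w in words]
total = sum(per_row)

print(per_row)  # [1, 1, 2]
print(total)    # 4
```

For a list of ~100 target strings, joining them with `|` (after `re.escape` on each) into one pattern and counting once per row keeps this to a single pass over the data.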
You can use the `value_counts` function.

```
import pandas as pd

# URL to .csv file
data_url = 'https://vincentarelbundock.github.io/Rdatasets/csv/carData/Arrests.csv'
# Reading the data
df = pd.read_csv(data_url, index_col=0)
```

[![enter image description here](https://i.stack.imgur.com/eH3Jo.png)](https://i.stack.imgur.com/eH3Jo.png)

```
# pandas count distinct values in column
df['sex'].value_counts()
```

[![enter image description here](https://i.stack.imgur.com/peeAY.png)](https://i.stack.imgur.com/peeAY.png)

Source: [link](https://www.marsja.se/pandas-count-occurrences-in-column-unique-values/)
Count occurrences of each of certain words in pandas dataframe
[ "", "python", "pandas", "dataframe", "" ]
I'm trying to build a database that will store information on movies. ``` Title Plot Genre Rating Director ``` The only thing that is bothering me is that most films don't just have one genre and I'm struggling to figure out how to store this on a MySQL Database. At first I was thinking that I'll just have one table and store all the genres in one column, separating them by a comma and when I want to retrieve them separate them using PHP, but I'm not sure this is the best way as I think I would have trouble sorting and searching for a specific genre e.g. Horror when the collumn contains 'Horror, Thriller, Action'.
I would suggest you should follow the following structure: > tablename: movies > > movieid, title, plot, rating, director ``` > sample data: > > 1 titanic Bollywood 10 James Cameron ``` > tablename: genres > > genreid, genre ``` > sample data: > 1 Horror > 2 Thriller > 3 Action > 4 Love ``` > tablename: moviegenres > > moviegenresid, movieid, genreid ``` > sample data: > 1 1 2 > 2 1 4 ``` And the query is: ``` select m.*,group_concat(g.genre) from movies m inner join moviegenres mg on m.movieid=mg.movieid inner join genres g on g.genreid=mg.genreid group by m.movieid ; ``` See the [fiddle](http://www.sqlfiddle.com/#!2/6922d/6)
What you are looking to model here is called a "many-to-many" relationship, and is very common when modelling "real world" categorisations. There are many descriptions out there of how to work with such relationships including: * Praveen's answer here which is specific to your question. * <http://en.wikipedia.org/wiki/Junction_table> - the extra table linking two populations in many/may relationships is usually called an intersection table or junction table. * <http://www.tomjewett.com/dbdesign/dbdesign.php?page=manymany.php> which helpfully shows an example with the table and key/constraints design, a handy data representation diagram in case that isn't clear, and how the relationship is modelled and used in the application. * Any good database design book/tutorial will cover this somewhere. Do not be tempted to skip the extra intersection table by storing multiple genres in one field for each film (a comma separated list for instance). This is a very common "anti pattern" that *will* cause you problems, maybe not today, maybe not tomorrow, but eventually. I recommend anyone working with database design give Bill Karwin's "SQL Antipatterns" (<http://pragprog.com/book/bksqla/sql-antipatterns>) a read. It is written in a way that should be accessible to a relative beginner, but contains much that those of us who should know better need reminding of from time to time (many-to-many relations, the list-in-a-field solution/problem, and what you should do instead, are one of the first things the book covers).
Movie Database, storing multiple genres
[ "", "mysql", "sql", "database-design", "join", "many-to-many", "" ]
I wanted to find the total number of orders placed by a customer till date and the last order date. **Customer** ``` custome_id customer_name 1 JOHN 2 ALEX ``` **Order** ``` order_id customer_id order_date status R1 1 06/06/2013 completed R2 1 05/29/2013 completed B1091 1 01/17/2011 canceled B2192 1 12/24/2010 completed ``` Note: order\_id is not helpful to find last order as they are not incremental The query which I am trying is ``` select customer.customer_id, customer.customer_name, order.order_id as last_order_id, max(order.order_date) as maxOrderDate, sum( case when order.status='completed' then 1 else 0) as completed_orders, count( order_id) as total_orders from customer as customer inner join order as order on customer.customer_id = order.customer_id where customer.id = 1 group by customer.customer_id, customer.customer_name, order.order_id ``` Expecting results as ``` customer_id customer_name Last_order_id maxOrderDate completed_orders total_orders 1 JOHN R1 06/06/2013 3 4 ```
In case you want to get last `Order_ID`, you need to join order table with sub-query like this: ``` SELECT tbl.customer_id, tbl.customer_name, o.order_id, MaxOrderDate, Completed_orders, Total_Order FROM [ORDER] o JOIN ( SELECT c.customer_id, c.customer_name, MAX(o.order_date) AS MaxOrderDate ,SUM(CASE WHEN o.status = 'completed' THEN 1 ELSE 0 END) AS Completed_orders ,COUNT(order_id) AS Total_Order FROM Customer c JOIN [Order] o ON c.customer_id = o.customer_id WHERE c.customer_id = 1 GROUP BY c.customer_id,c.customer_name ) tbl ON o.CUSTOMER_ID = tbl.CUSTOMER_ID AND o.order_date = tbl.MaxOrderDate ``` Result: ``` ╔═════════════╦═══════════════╦══════════╦══════════════╦══════════════════╦═════════════╗ ║ CUSTOMER_ID ║ CUSTOMER_NAME ║ ORDER_ID ║ MAXORDERDATE ║ COMPLETED_ORDERS ║ TOTAL_ORDER ║ ╠═════════════╬═══════════════╬══════════╬══════════════╬══════════════════╬═════════════╣ ║ 1 ║ JOHN ║ R1 ║ 06/06/2013 ║ 3 ║ 4 ║ ╚═════════════╩═══════════════╩══════════╩══════════════╩══════════════════╩═════════════╝ ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!3/34d83/7)
``` select customer_id, MAX(order_date) as last_order_date, count(*) as num_orders from Order group by customer_id ```
select total orders and last order from a customer
[ "", "sql", "join", "count", "informix", "" ]
Say we need a program which takes a list of strings and splits them, and appends the first two words, in a tuple, to a list and returns that list; in other words, a program which gives you the first two words of each string. ``` input: ["hello world how are you", "foo bar baz"] output: [("hello", "world"), ("foo", "bar")] ``` It can be written like so (we assume valid input): ``` def firstTwoWords(strings): result = [] for s in strings: splt = s.split() result.append((splt[0], splt[1])) return result ``` But a list comprehension would be much nicer. ``` def firstTwoWords(strings): return [(s.split()[0], s.split()[1]) for s in strings] ``` But this involves two calls to `split()`. *Is there a way to perform the split only once from within the comprehension?* I tried what came naturally and it was invalid syntax: ``` >>> [(splt[0],splt[1]) for s in strings with s.split() as splt] File "<stdin>", line 1 [(splt[0],splt[1]) for s in strings with s.split() as splt] ^ SyntaxError: invalid syntax ```
Well, in this particular case: ``` def firstTwoWords(strings): return [s.split()[:2] for s in strings] ``` Otherwise, though, you can use one generator expression: ``` def firstTwoWords(strings): return [(s[0], s[1]) for s in (s.split() for s in strings)] ``` And if performance is actually critical, just use a function.
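Note that the slicing variant returns two-element lists rather than tuples; wrapping with `tuple` (or using the generator-expression variant) restores the exact output the question asks for. A quick check of both:

```python
strings = ["hello world how are you", "foo bar baz"]

# Slicing variant: splits once per string, but yields lists.
as_lists = [s.split()[:2] for s in strings]

# Generator-expression variant: splits once per string, yields tuples.
as_tuples = [(w[0], w[1]) for w in (s.split() for s in strings)]

print(as_lists)   # [['hello', 'world'], ['foo', 'bar']]
print(as_tuples)  # [('hello', 'world'), ('foo', 'bar')]
```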
Writing what comes to mind naturally from English and hoping it's valid syntax rarely works, unfortunately. The generalized form of what you're trying to do is bind some expression to a name within a comprehension. There's no direct support to that, but since a `for` clause in a comprehension binds a name to each element from a sequence in turn, you can use `for` over single-element containers to achieve the same effect: ``` >>> strings = ["hello world how are you", "foo bar baz"] >>> [(splt[0],splt[1]) for s in strings for splt in [s.split()]] [('hello', 'world'), ('foo', 'bar')] ```
Eliminating redundant function calls in comprehensions from within the comprehension
[ "", "python", "list-comprehension", "" ]
So I have a list of heights:

```
heights = [1, 2, 3, 5, 7, 8, 8, 13]
```

And I'm using this function to store each height integer value and its index in the list in a class I called Node.

```
def initializeNodes(heights):
    ans = []
    for height in heights:
        ans.append(Node(heights.index(height), height))
    return ans
```

But my problem is, because there are two 8's in the list, it's giving them both the index of the first 8, which is position 5 in the list:

```
0 1
1 2
2 3
3 5
4 7
5 8
5 8
7 13
```

How can I get around this? Thanks!
Use [`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate) to generate an index: ``` def initializeNodes(heights): ans = [] for i, height in enumerate(heights): ans.append(Node(i, height)) return ans ``` You can collapse the four lines into 1 using a list comprehension: ``` def initializeNodes(heights): return [Node(i, height) for i, height in enumerate(heights)] ```
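With `enumerate` each position is produced directly, so duplicate values no longer collide; a minimal check without the `Node` class:

```python
heights = [1, 2, 3, 5, 7, 8, 8, 13]

# list.index(8) returns 5 for BOTH eights; enumerate yields the true position.
pairs = [(i, h) for i, h in enumerate(heights)]
print(pairs[5])  # (5, 8)
print(pairs[6])  # (6, 8) -- the second 8 keeps its own index
```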
The problem with `list.index` is that it'll only return the index of first occurrence of the item. ``` >>> heights = [1, 2, 2, 3, 5, 5, 7, 8, 8, 13] >>> heights.index(2) 1 >>> heights.index(5) 4 >>> heights.index(8) 7 ``` help on `list.index`: > L.index(value, [start, [stop]]) -> integer -- return first index of > value. You can do provide a different `start` value to `list.index` than 0, to get the index of repeated items: ``` >>> heights.index(5,heights.index(5)+1) #returns the index of second 5 5 ``` But that is very cumbersome, a better solution as @MartijnPieters already mentioned is `enumerate`
Python list iteration
[ "", "python", "list", "iteration", "" ]
I have a database on SQL Server 2012 and am having some problems with a few tables that become slow after a while and the thing that helps is rebuilding the indexes. I was wondering if anyone has advice on what could be wrong in any of them, I will post their structure and indexes below. I have not built this structure myself but have full access to modify. **Table1** * ID (int, not null) * Type (tinyint, not null) * Name (PK, nvarchar(255), not null) * fkID (PK, int, not null) * UID (int, not null) Indexes: * I\_UID (Unique, Non-Clustered) [UID] * I\_Name (Non-Unique, Non-Clustered) [Type,Name] * pk\_Name (Clustered) [Name, fkID] **Table2** * ID (PK, bigint, not null) * Name (nvarchar(50), not null) * ShortValue (nvarchar(250), null) * StringValue (nvarchar(max), null) * IntValue (int, null) * FloatValue (float, null) * DateTimeValue (datetime, null) * BoolValue (bit, null) * fkPID (FK, int, null) * fkAID (FK, int, null) * fkAGID (FK, int, null) * fkVID (FK, int, null) * fkCID (FK, int, null) * fkL (FK, int, not null) * fkIMID (FK, not null) * fkPRID (FK, int, null) * fkNID (int, null) Indexes: * I\_AG (Non-Unique, Non-Clustered) [fkAGID] * I\_IM (Non-Unique, Non-Clustered) [fkIMID] * I\_R (Non-Unique, Non-Clustered) [fkPRID] * PK\_D (Clustered) 5447370 * I\_PDL (Non-Unique, Non-Clustered) [fkL] **Table3** * ID (PK, int, not null) * fkPID (FK, int, not null) * fkAID (FK, int, not null) * Sort (int, not null) * Group (nvarchar(50), null) * Size (int, null) * FMB (nvarchar(50), null) Indexes: * PK\_D (Clustered) 5447370 * I\_PAA (Non-Unique, Non-Clustered) [fkAID] * I\_PAP (Non-Unique, Non-Clustered) [fkPID] * I\_PAPID (Non-Unique, Non-Clustered) [fkPID,fkAID]
A great tool that will analyze and, if necessary, rebuild indexes and update statistics is Ola Hallengren's Index and Statistics Maintenance tool/script. We run this thing nightly, along with the Integrity Check, to keep our databases healthy. <http://ola.hallengren.com/>
One column that sticks out to me is this one:

```
pk_Name (Clustered) [Name, fkID]
```

Clustered keys determine the *physical* order of records in the database table. If `Name` is a string and values are inserted in "random" order (i.e. not always at the end of the table alphabetically) there could be performance problems, as the database is always having to "insert" rows into the physical tables. This could cause the table data to become fragmented, which could reduce performance as well.

Re-building a clustered index also re-organizes the physical data, which is likely why you're seeing improved performance afterwards. Recomputing statistics could also be a factor, but a primary key that results in non-consecutive inserts is usually a red flag.

Also, your definition doesn't specify the columns that make up the clustered indices on tables 2 and 3, but based on the name I'm assuming they're indexed by `ID`.
Keep having to reindex sql tables
[ "", "sql", "sql-server", "" ]
I have a table STARTSTOP:

```
ACTION   DATA                  ID_PPSTARTSTOPPOZ
0        2013-03-18 08:38:00   10451
1        2013-03-18 09:00:00   10453
0        2013-03-18 09:50:00   10466
1        2013-03-18 10:38:00   10467
0        2013-03-19 11:54:00   10499
1        2013-03-19 12:32:00   10505
```

Action 0 -> START ACTION
Action 1 -> STOP ACTION
DATA is a timestamp of the action

I would like to run a select statement that would return records something like:

```
ACTION_1   ACTION_2   DURATION
10451      10453      22
10466      10467      48
...
```

OR a summary of all action durations in one row. Is it feasible with a single database query? (without creating additional tables)
```
select A1.ID_PPSTARTSTOPPOZ as Action_0, A2.Action_1,
       datediff(minute, A1.DATA, A2.DATA)
from STARTSTOP A1
JOIN (
    select ID_PPSTARTSTOPPOZ as Action_1, DATA,
           (select max(ID_PPSTARTSTOPPOZ) FROM STARTSTOP
            where ID_PPSTARTSTOPPOZ < T.ID_PPSTARTSTOPPOZ AND ACTION = 0) AS PREV_ACTION
    from STARTSTOP T
    where ACTION = 1
) A2 on A1.ID_PPSTARTSTOPPOZ = A2.PREV_ACTION
where ACTION = 0
order by A1.ID_PPSTARTSTOPPOZ
```

[DATEDIFF function](http://www.firebirdsql.org/refdocs/langrefupd21-intfunc-datediff.html)

[SQLFiddle Example for MSSQL but it has to work under Firebird too](http://sqlfiddle.com/#!3/31d2b/9)
It could be done with a single select, but a procedural EXECUTE BLOCK would be much faster:

```
EXECUTE BLOCK
RETURNS (ACTION_1 INTEGER, ACTION_2 INTEGER, DURATION INTEGER)
AS
DECLARE VARIABLE act INTEGER;
DECLARE VARIABLE act_id INTEGER;
DECLARE VARIABLE d TIMESTAMP = NULL;
DECLARE VARIABLE d1 TIMESTAMP = NULL;
BEGIN
  FOR SELECT action, data, id_ppstartstoppoz
      FROM startstop
      ORDER BY data ASC
      INTO :act, :d, :act_id
  DO BEGIN
    IF (:act = 0) THEN
    BEGIN
      d1 = :d;
      action_1 = :act_id;
    END
    ELSE
    BEGIN
      IF (NOT :d1 IS NULL) THEN
      BEGIN
        action_2 = :act_id;
        duration = DATEDIFF(SECOND, :d1, :d);
        SUSPEND;
        d1 = NULL;
      END
    END
  END
END
```
firebird - self join on one table
[ "", "sql", "join", "firebird", "self-join", "" ]
Another question to do with my minigame. I've received great help on the previous questions and I hope I don't waste your time on such a, most likely foolish, question. Here's the code:

```
import time
import random

inventory = []
miningskill = 1
fishingskill = 1
gold = 0
rawfish = ["Mackarel", "Cod", "Salmon", "Herring", "Tuna"]
cookedfish = ["Cooked Mackarel", "Cooked Cod", "Cooked Salmon", "Cooked Herring", "Cooked Tuna"]
trash = ["Old Shoe", "Thin Bone", "Rusted Empty Box", "Plank Fragment"]
special = ["Copper Ring"]

mackarel_range = range(1,3)
cod_range = range(3,5)
salmon_range = range(5,7)
herring_range = range(7,9)
tuna_range = range(9,11)
oldshoe_range = range(11,16)
plasticbag_range = range(16,21)
rustedemptybox_range = range(21,26)
plankfragment_range = range(26,31)
copperring_range = range(31,32)

print" _       _       _       _           ______ _     _     _             "
print" | |     | |     (_)     ( )         |  ____(_)   | |   (_)"
print" | |    ___ | |_ __  _ _ __  |/   ___  | |__   _ ___| |__  _ _ __   __ _ "
print" | |   / _ \| | '_ \| | '_ \    / __| |  __| | / __| '_ \| | '_ \ / _` |"
print" | |___| (_) | | |_) | | | | |  \__ \ | |    | \__ \ | | | | | | | (_| |"
print" |______\___/|_| .__/|_|_| |_|  |___/ |_|    |_|___/_| |_|_|_| |_|\__, |"
print"               | |                                                 __/ |"
print"               |_|                                                |___/ "
time.sleep(2)
print". . .        .--."
print" \ /   o    .'|   )"
print"  \ /  .-. .--..--. .  .-. .--.   o   | --:"
print"   \ / (.-' |   `--. | (   )|  |  |   )"
print"    '   `--''   `--'-' `-  `-' '  `-  o '---'o`--'"
time.sleep(2)

print "In this current version the first item in your inventory is sold."

def sell_function():
    if inventory[0] in rawfish:
        sold = inventory.pop(0)
        global gold
        gold = gold+5
        print "You have sold a", sold, "for 5 gold coins!"
    elif inventory[0] in trash:
        sold = inventory.pop(0)
        global gold
        gold = gold+1
        print "You have recycled a", sold, "for 1 gold coins!"
    elif inventory[0] in special:
        sold = inventory.pop(0)
        global gold
        gold = gold+10
        print "You have sold a", sold, "for 10 gold coins!"
    elif inventory[0] in cookedfish:
        sold = inventory.pop(0)
        global gold
        gold = gold+8
        print "You have sold a", sold, "for 8 gold goins!"
    else:
        print "Shopkeeper:'You can't sell that.'"

def fish_function_beginner():
    random_fishingchance = random.randrange(1,32)
    if 1 <= random_fishingchance < 3:
        inventory.append("Mackarel")
        print "You have reeled in a Mackarel!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 3 <= random_fishingchance < 5:
        inventory.append("Cod")
        print "You have reeled in a Cod!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 5 <= random_fishingchance < 7:
        inventory.append("Salmon")
        print "You have reeled in a Salmon!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 7 <= random_fishingchance < 9:
        inventory.append("Herring")
        print "You have reeled in a Herring!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 9 <= random_fishingchance < 11:
        inventory.append("Tuna")
        print "You have reeled in a Tuna!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 11 <= random_fishingchance < 16:
        inventory.append("Old Shoe")
        print "You have reeled in an Old Shoe..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 16 <= random_fishingchance < 21:
        inventory.append("Thin Bone")
        print "You have reeled in a Thin Bone..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 21 <= random_fishingchance < 26:
        inventory.append("Rusted Empty Box")
        print "You have reeled in a Rusted Empty Box..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 26 <= random_fishingchance < 31:
        inventory.append("Plank Fragment")
        print "You have reeled in a Plank Fragment..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 31 <= random_fishingchance < 32:
        inventory.append("Copper Ring")
        print "You have reeled in a ring shaped object covered in mud."
        print "After cleaning it you notice it is a Copper Ring!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 2
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 2"
    else:
        print "It seems your fishing line has snapped!"

def fish_function_amateur():
    random_fishingchance = random.randrange(1,29)
    if 1 <= random_fishingchance < 3:
        inventory.append("Mackarel")
        print "You have reeled in a Mackarel!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 3 <= random_fishingchance < 5:
        inventory.append("Cod")
        print "You have reeled in a Cod!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 5 <= random_fishingchance < 7:
        inventory.append("Salmon")
        print "You have reeled in a Salmon!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 7 <= random_fishingchance < 9:
        inventory.append("Herring")
        print "You have reeled in a Herring!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 9 <= random_fishingchance < 11:
        inventory.append("Tuna")
        print "You have reeled in a Tuna!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 1
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 1"
    elif 11 <= random_fishingchance < 15:
        inventory.append("Old Shoe")
        print "You have reeled in an Old Shoe..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 15 <= random_fishingchance < 19:
        inventory.append("Thin Bone")
        print "You have reeled in a Thin Bone..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 19 <= random_fishingchance < 24:
        inventory.append("Rusted Empty Box")
        print "You have reeled in a Rusted Empty Box..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 24 <= random_fishingchance < 29:
        inventory.append("Plank Fragment")
        print "You have reeled in a Plank Fragment..."
        time.sleep(0.5)
        print "You place it into your inventory"
    elif 29 <= random_fishingchance < 30:
        inventory.append("Copper Ring")
        print "You have reeled in a ring shaped object covered in mud."
        print "After cleaning it you notice it is a Copper Ring!"
        time.sleep(0.5)
        print "You place it into your inventory"
        global fishingskill
        fishingskill = fishingskill + 2
        fishingskill_new = fishingskill
        print "Your fishing skill is now",fishingskill_new,"It has increased by 2"
    else:
        print "It seems your fishing line has snapped!"

def action_function():
    while True:
        print "For a list of commands type 'help'"
        action = raw_input("What do you want to do? >")
        if action == "quit":
            break
        if action == "sell":
            sell_function()
        if action == "fish":
            print "You throw your reel..."
            time.sleep(10)
            fish_skillcheck_function()
        if action == "inventory":
            print "You begin to open your inventory"
            time.sleep(0.5)
            print inventory
        if action == "money":
            print "You have",gold,"gold"
        if action == "gold":
            print "You have",gold,"gold"
        if action == "cook":
            fish_cookcheck_function()
        if action == "fishingskill":
            if 1 <= fishingskill < 75:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank: Beginner"
            elif 75 <= fishingskill < 150:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank: Amateur"
            elif 150 <= fishingskill < 300:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank: Regular"
            elif 300 <= fishingskill < 500:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank: Seasoned"
            elif 500 <= fishingskill < 750:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank : Professional"
            elif 750 <= fishingskill < 1000:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank : Expert"
            elif 1000 <= fishingskill < 9001:
                print "Your fishing skill is",fishingskill
                print "Fishing Rank : Fishing Grandmaster"
            else:
                print "Your skill is not available."
        if action == "help":
            print "Commands are= 'help' 'quit' 'sell' 'fish' 'fishingskill' 'gold' 'money' 'cook' and 'inventory"
        if action == "cook":
            fish_cookcheck_function()

def fish_cookcheck_function():
    if inventory[0] == "Mackarel":
        cooked = inventory.pop(0)
        inventory.append("Cooked Mackarel")
        print "You have cooked a", cooked
    elif inventory[0] == "Cod":
        cooked = inventory.pop(0)
        inventory.append("Cooked Cod")
        print "You have cooked a", cooked
    elif inventory[0] == "Salmon":
        cooked = inventory.pop(0)
        inventory.append("Cooked Salmon")
        print "You have cooked a", cooked
    elif inventory[0] == "Herring":
        cooked = inventory.pop(0)
        inventory.append("Cooked Herring")
        print "You have cooked a", cooked
    elif inventory[0] == "Tuna":
        cooked = inventory.pop(0)
        inventory.append("Cooked Tuna")
        print "You have cooked a", cooked
    else:
        "You can't cook this."
        action_function()

def fish_skillcheck_function():
    if 1 <= fishingskill < 75:
        fish_function_beginner()
    elif 75 <= fishingskill < 150:
        fish_function_amateur()
    else:
        print "My fishing level is too low"

action_function()
```

Now, according to what I think should happen, if there's a problem such as there not being anything to sell in the inventory, then it should go onto the else part of

```
def sell_function():
    if inventory[0] in rawfish:
        sold = inventory.pop(0)
        global gold
        gold = gold+5
        print "You have sold a", sold, "for 5 gold coins!"
    elif inventory[0] in trash:
        sold = inventory.pop(0)
        global gold
        gold = gold+1
        print "You have recycled a", sold, "for 1 gold coins!"
    elif inventory[0] in special:
        sold = inventory.pop(0)
        global gold
        gold = gold+10
        print "You have sold a", sold, "for 10 gold coins!"
    elif inventory[0] in cookedfish:
        sold = inventory.pop(0)
        global gold
        gold = gold+8
        print "You have sold a", sold, "for 8 gold goins!"
    else:
        print "Shopkeeper:'You can't sell that.'"
```

However, as some of you more advanced programmers may have noticed, there is some problem.

This is the error I receive when I try to sell with an empty inventory (similarly to what happens when I try to cook with an empty inventory):

```
Traceback (most recent call last):
  File "C:\Users\Lolpin\Desktop\fishinglooptest.py", line 308, in <module>
    action_function()
  File "C:\Users\Lolpin\Desktop\fishinglooptest.py", line 231, in action_function
    sell_function()
  File "C:\Users\Lolpin\Desktop\fishinglooptest.py", line 40, in sell_function
    if inventory[0] in rawfish:
IndexError: list index out of range
```

When I tried to find answered questions about the same/similar error I can't put them into context of my code. .\_.
Use an `if inventory` condition to check whether `inventory` is empty:

```
def sell_function():
    if inventory:
        if inventory[0] in rawfish:
            #other code
    else:
        print "Shopkeeper:'You can't sell that.'"
```
The problem is that your `inventory` list is empty (so there is no 0th element). You can check for this by using an `if` statement:

```
def sell_function():
    if not inventory:
        print "your inventory is empty! You cannot sell anything"
        return
    if inventory[0] in rawfish:
        ...
```
Else not working IndexError: list index out of range
[ "", "python", "runtime-error", "" ]
I built the following SELECT query to get the results below:

```
SELECT Orders.OrderID, OrderDetails.ProductCode, OrderDetails.Coupon
From Orders, OrderDetails
WHERE Orders.OrderID=OrderDetails.OrderID
```

Results:

```
Order ID   Product Code   Coupon
22         A
22         B              XYZ
22         C
23         D              123
24         E
```

I want it to display like this:

```
Order ID   Product Code   Coupon
22         A              XYZ
22         B              XYZ
22         C              XYZ
23         D              123
24         E
```

so that it fills empty coupons from the non-empty coupon field where the order ID matches. Your help will be greatly appreciated. Thanks.
```
SELECT t1.OrderID, t1.ProductCode, MAX(ISNULL(t2.Coupon,'')) as CouponCode, t1.CustomerName
--Here you have the select list by using alias 't1'
--don't forget to add it in the group by clause
FROM
(
    select O.OrderID, OD.ProductCode, OD.CouponCode as Coupon, C.CustomerName
    --Here add the list of columns
    from Orders O
    inner join OrderDetails OD on O.OrderID = OD.OrderID
    inner join Customers C on O.CustomerID = C.CustomerID
) t1
INNER JOIN
(
    select O.OrderID
    from Orders O
    inner join OrderDetails OD on O.OrderID = OD.OrderID
    inner join Customers C on O.CustomerID = C.CustomerID
) t2
ON CAST(t1.OrderID AS VARCHAR) = CAST(t2.OrderID AS VARCHAR)
GROUP BY t1.OrderID, t1.ProductCode, t1.CustomerName
--Add the extra fields.
order by t1.OrderID
```

**[SQL Fiddle with Your Data](http://www.sqlfiddle.com/#!3/a860e/1)**
This is kind of ugly but does the job:

```
SELECT o.OrderID, od1.ProductCode, COALESCE(od1.Coupon, od2.Coupon)
From Orders o
inner join OrderDetails od1 on o.OrderID = od1.OrderID
left join (select OrderID, MAX(Coupon) as Coupon
           from OrderDetails
           where Coupon is not null
           group by OrderID) od2 on o.OrderID = od2.OrderID
```

It's using `GROUP BY` and `MAX` to ensure that there's only one row in `od2` for each `OrderID` value, even if multiple rows in `OrderDetails` already have `Coupon` set.
SQL Server: fill Null Fields from Not Null Column for multiple rows having same id
[ "", "sql", "sql-server", "" ]
So I have a python application that is being bundled into a .app using py2app. I have some debugging print statements in there that would normally print to stdout if I was running the code in the terminal. If I just open the bundled .app, obviously I don't see any of this output. Is any of this actually being printed somewhere even though I don't see it?
It goes to standard output (to the terminal if you launched the app from the command line, or e.g. to a log if you are running it via cron jobs). For more advanced and flexible usage, try the built-in logging support (the [`logging` module](http://docs.python.org/2/library/logging.html)). It lets you direct messages to the console, a file, a stream (such as an external logging server), the `syslog` process, email, etc., as well as filter based on log message level. Debug statements (logging statements) can then stay in the production code but be configured not to emit anything other than errors. Very useful, and it simplifies logging a lot.
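A minimal configuration sketch for the `logging` module (the in-memory stream target is just for illustration — a bundled GUI app would typically use a `FileHandler` instead):

```python
import io
import logging

buf = io.StringIO()  # stand-in target; a .app would log to a file
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

logger.debug("debug detail (suppressed if the level is raised)")
logger.warning("something worth noting")
print(buf.getvalue())
```

Raising the level to `logging.WARNING` in the shipped build then silences the debug lines without touching any call sites.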
Where the stdout and stderr streams are redirect to depends on how you run the application, and on which OSX release you are. When you run the application from the Terminal ("MyApp.app/Contents/MacOS/MyApp") the output ends up in the terminal window. When you run the application by double clicking, or using the open(1) command, the output ends up in Console.app when using OSX before 10.8 and is discarded on OSX 10.8. I have a patch that redirects output to the logs that Console.app reads even for OSX 10.8, but that is not in a released version at this point in time. P.S. I'm the maintainer of py2app.
Where does stuff "print" to when not running application from terminal?
[ "", "python", "macos", "stdout", "py2app", "" ]
I'm having a problem because I have an output of A formatted like so:

```
[0.018801, 0.011839, -3332.980568, 0.009446, -3332.984916, 0.007438, -3332.982958]
[0.020493, 0.015735, -3332.980353, 0.013179, -3332.968465, 0.055135, 0.135461]
[0.020678, 0.018212, -3332.983603, 0.011993, 0.097811, 0.014364, 0.099570]
[0.020758, 0.015798, -3332.982745, 0.013539, 0.086793, 0.007399, -3332.984997]
[-3332.992594, 0.014576, -3332.979745, 0.015103, 0.089420, 0.009226, 0.090133]
```

However, I need each row to be separated by a comma in order for it to work in this bit of code:

```
def mean(a):
    return sum(a) / len(a)

a = [A]
print map(mean, zip(*a))
```

Is there any way to achieve this while still keeping A as a list of floats? Because `', '.join` requires string values, which will not allow me to take the mean.

Below is the code I am using to generate A:

```
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        (date, time, a, b, c, d, e, f, g, h, i,
         j, k, l, m, n, o, p, q, r, s, t, u,
         LZA, SZA, LAM) = row.split("\t")  # split columns into fields
        A = [(float(a) + float(b) + float(c))/3,
             (float(d) + float(e) + float(f))/3,
             (float(g) + float(h) + float(i))/3,
             (float(j) + float(k) + float(l))/3,
             (float(m) + float(n) + float(o))/3,
             (float(p) + float(q) + float(r))/3,
             (float(s) + float(t) + float(u))/3]
```

Any help is appreciated.

**Clarification:** I don't need a longer list necessarily; I need the list I have to be the same, but for each row to be separated by commas. So when I pass this through mean:

```
def mean(a):
    return sum(a) / len(a)

a = [A]
print map(mean, zip(*a))
```

I only get the last row:

```
[-3332.992594, 0.014576, -3332.979745, 0.015103, 0.089420, 0.009226, 0.090133]
```

However, if I write the output of A and separate each row by a comma like so:

```
def mean(a):
    return sum(a) / len(a)

a = [[0.018801, 0.011839, -3332.980568, 0.009446, -3332.984916, 0.007438, -3332.982958],
     [0.020493, 0.015735, -3332.980353, 0.013179, -3332.968465, 0.055135, 0.135461],
     [0.020678, 0.018212, -3332.983603, 0.011993, 0.097811, 0.014364, 0.099570],
     [0.020758, 0.015798, -3332.982745, 0.013539, 0.086793, 0.007399, -3332.984997],
     [-3332.992594, 0.014576, -3332.979745, 0.015103, 0.089420, 0.009226, 0.090133]]
print map(mean, zip(*a))
```

I get the desired output of

```
[-666.582372, 0.015232, -3332.981403, 0.012652, -1333.1358714, 0.018713, -1333.128558]
```

or the mean of each column. How can I do this without having to manually doctor the A vector with commas?
You are trying to generate a list of lists. However, you aren't saving each list -- every time you go through the loop you create a new record and then replace it with the next record. This is why `A` eventually just contains the last record. The commas aren't relevant -- that's just Python's syntax. The values are not stored with commas internally! Instead of assigning each record to A, initialize A as an empty list and then add each new record to the end.

```
A = []
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        (date, time, a, b, c, d, e, f, g, h, i,
         j, k, l, m, n, o, p, q, r, s, t, u,
         LZA, SZA, LAM) = row.split("\t")  # split columns into fields
        A.append([(float(a) + float(b) + float(c))/3,
                  (float(d) + float(e) + float(f))/3,
                  (float(g) + float(h) + float(i))/3,
                  (float(j) + float(k) + float(l))/3,
                  (float(m) + float(n) + float(o))/3,
                  (float(p) + float(q) + float(r))/3,
                  (float(s) + float(t) + float(u))/3])
```
Extrapolating from your [other question](https://stackoverflow.com/questions/17489218/averaging-down-a-column-of-averaged-averages), I think you could do what you want with something like this:

```
def mean(a):
    return sum(a) / len(a)

averages = []
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        (date, time, a, b, c, d, e, f, g, h, i,
         j, k, l, m, n, o, p, q, r, s, t, u,
         LZA, SZA, LAM) = row.split("\t")  # split columns into fields
        A = [(float(a) + float(b) + float(c))/3,
             (float(d) + float(e) + float(f))/3,
             (float(g) + float(h) + float(i))/3,
             (float(j) + float(k) + float(l))/3,
             (float(m) + float(n) + float(o))/3,
             (float(p) + float(q) + float(r))/3,
             (float(s) + float(t) + float(u))/3]
        averages.append(A)

print map(mean, zip(*averages))
```

Alternatively it could be done a little more concisely with code similar to this:

```
def mean(a):
    return sum(a) / len(a)

averages = []
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        (date, time, a, b, c, d, e, f, g, h, i,
         j, k, l, m, n, o, p, q, r, s, t, u,
         LZA, SZA, LAM) = row.split("\t")  # split columns into fields
        A = [mean(map(float, (a, b, c))),
             mean(map(float, (d, e, f))),
             mean(map(float, (g, h, i))),
             mean(map(float, (j, k, l))),
             mean(map(float, (m, n, o))),
             mean(map(float, (p, q, r))),
             mean(map(float, (s, t, u)))]
        averages.append(A)

print map(mean, zip(*averages))
```

And even more concisely with this:

```
def mean(a):
    return sum(a) / len(a)

averages = []
with open("test2.xls") as w:
    w.next()  # skip over header row
    for row in w:
        cols = row.split("\t")  # split into columns
        # then split that into fields
        date, time, values, LZA, SZA, LAM = (cols[0], cols[1], cols[2:23],
                                             cols[23], cols[24], cols[25])
        A = [mean(map(float, values[i:i+3])) for i in xrange(0, 21, 3)]
        averages.append(A)

print map(mean, zip(*averages))
```

In the last one you could rename `averages` to `a` if you wanted, because there is no longer a field named `a` that would conflict with it.
Regardless, all code fragments will print the same answer.
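A tiny runnable check of the column-averaging idiom all the variants rely on (written in Python 3 syntax, unlike the Python 2 code above):

```python
def mean(a):
    return sum(a) / len(a)

rows = [
    [1.0, 2.0, 3.0],
    [3.0, 4.0, 5.0],
]

# zip(*rows) transposes the list of rows into a sequence of columns;
# map then applies mean to each column in turn
column_means = list(map(mean, zip(*rows)))
print(column_means)  # [2.0, 3.0, 4.0]
```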
Separating rows of output by commas and staying in float
[ "", "python", "format", "rows", "" ]
I have a table with columns similar to below, but with about 30 date columns and 500+ records:

```
id | forcast_date | actual_date
1    10/01/2013     12/01/2013
2    03/01/2013     06/01/2013
3    05/01/2013     05/01/2013
4    10/01/2013     09/01/2013
```

and what I need to do is get a query with output similar to:

```
week_no | count_forcast | count_actual
1         4               6
2         5               7
3         2               1
```

etc.

My query is:

```
SELECT weekofyear(forcast_date) as week_num,
       COUNT(forcast_date) AS count_forcast,
       COUNT(actual_date) AS count_actual
FROM table
GROUP BY week_num
```

but what I am getting is the forcast_date counts repeated in each column, i.e.:

```
week_no | count_forcast | count_actual
1         4               4
2         5               5
3         2               2
```

Can anyone please tell me the best way to formulate the query to get what I need? Thanks
Just in case someone else comes along with the same question: instead of trying to use some amazing query, I ended up creating an array of date column names and a loop in the program that was calling this query, and for each date column name, performing the same query. It is a bit slower, but it does work.
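A hedged sketch of that workaround in Python — the column list, table name, and query shape are assumptions for illustration, not the original program:

```python
# Column names are trusted program constants here; never build SQL
# like this from user input.
date_columns = ["forcast_date", "actual_date"]  # extend to all 30 columns

def weekly_count_sql(column):
    """Build the per-column query; run each and merge the counts by week."""
    return (
        "SELECT weekofyear({0}) AS week_num, COUNT({0}) AS cnt "
        "FROM your_table GROUP BY weekofyear({0})".format(column)
    )

queries = [weekly_count_sql(c) for c in date_columns]
for q in queries:
    print(q)
```

Each query's result set can then be joined on `week_num` in the calling program to rebuild the wide per-week table.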
Try:

```
SELECT WeekInYear, ForecastCount, ActualCount
FROM
(
    SELECT A.WeekInYear, A.ForecastCount, B.ActualCount
    FROM
    (
        SELECT weekofyear(forecast_date) as WeekInYear,
               COUNT(forecast_date) as ForecastCount,
               0 as ActualCount
        FROM TableWeeks
        GROUP BY weekofyear(forecast_date)
    ) A
    INNER JOIN
    (
        SELECT *
        FROM
        (
            SELECT weekofyear(forecast_date) as WeekInYear,
                   0 as ForecastCount,
                   COUNT(actual_date) as ActualCount
            FROM TableWeeks
            GROUP BY weekofyear(actual_date)
        ) ActualTable
    ) B
    ON A.WeekInYear = B.WeekInYear
) AllTable
GROUP BY WeekInYear;
```

Here's my [Fiddle Demo](http://sqlfiddle.com/#!2/f68263/1)
MySQL Group by week num w/ multiple date column
[ "", "mysql", "sql", "" ]
I would like to find all records in Column A where, after sorting Column C in ascending order, Column D begins with a value other than the earliest date. From the below example, I would want it to return records for Ex2 and Ex3 and not return records for Ex1.

**REVISED UPDATES/REQUIREMENTS:**

1) I would like it grouped by Column A and ordered by Column C.

2) I would like to find all records in Column A where the first value in Column D is not the lowest date.

```
Column A   Column B   Column C   Column D
--------   --------   --------   --------
Ex1        Title A    1          2003/1/1
Ex1        Title B    2          2003/2/2
Ex2        Title C    3          2004/4/4
Ex2        Title D    4          2004/3/3
Ex3        Title E    5          2005/6/6
Ex3        Title F    6          2005/5/5
```

Any ideas?
You appear to assume an ordering by [Column B], which you can make explicit in a self join:

```
SELECT t1.[Column A], t1.[Column B], t1.[Column C], t1.[Column D]
FROM YourTableName t1
JOIN YourTableName t2
  ON t2.[Column A] = t1.[Column A]
 AND t2.[Column B] > t1.[Column B]
WHERE t2.[Column C] > t1.[Column C]
  AND t2.[Column D] < t1.[Column D]
```

This assumes there are exactly two rows where [Column A] = 'Ex1', etc. If you have more than two rows with the same value in [Column A] you will probably find the results to be unexpected. Note that the first two comparisons are part of the join condition, and the third and fourth comparisons are part of the WHERE clause.

UPDATE: Working to changed requirements: There may be twenty rows with [Column A] = 'Ex1'. Return distinct values of [Column A] for which the lowest value of [Column C] is not in the same row as the lowest value of [Column D]. [Column B] is not relevant.

```
SELECT DISTINCT t1.[Column A]
FROM YourTableName t1
JOIN (SELECT [Column A], MIN([Column C]) AS MinC, MIN([Column D]) AS MinD
      FROM YourTableName
      GROUP BY [Column A]) t2
  ON t2.[Column A] = t1.[Column A]
WHERE t1.[Column C] = t2.MinC
  AND t1.[Column D] <> t2.MinD
```

This returns:

```
Column A
Ex1
```

For the following test table:

```
CREATE TABLE YourTableName ([Column A] VARCHAR(50), [Column B] VARCHAR(50),
                            [Column C] INT, [Column D] INT)
```

And test data:

```
INSERT INTO YourTableName ([Column A], [Column B], [Column C], [Column D]) VALUES ('Ex1', 'Title A', 1, 2)
INSERT INTO YourTableName ([Column A], [Column B], [Column C], [Column D]) VALUES ('Ex1', 'Title B', 2, 1)
INSERT INTO YourTableName ([Column A], [Column B], [Column C], [Column D]) VALUES ('Ex1', 'Title C', 3, 1)
```
New answer specifically addressing the following specifications (which seem to comprise the problem now):

> UPDATE: I would like it grouped by Column A and ordered by Column C.
>
> UPDATE #2: I want to return all examples from Column A where the
> "first" value in Column D is not the lowest value.

```
SELECT a.*
FROM tableName a
WHERE a.columnD <> (
    SELECT min(columnD)
    FROM tableName b
    WHERE a.columnA = b.columnA
)
ORDER BY a.columnC
```
SQL compare min value in one column with max date in different column
[ "", "sql", "sql-server-2008", "" ]
Why can't I do this?

```
extout = os.path.splitext(args.outputfile)[1].lower()

if extout != (".csv" | ".xml"): # <------- creates error
    sys.stderr.write('ERROR: Invalid extension or none found. This program only recognizes .csv or .xml extensions %s\n')
    sys.exit(1)
```

This gives me a type error:

```
Unsupported operand types for |: 'str' and 'str'
```
Use a tuple and `not in`:

```
if extout not in (".csv", ".xml"):
```

This will return `True` if `extout` doesn't match any of the tuple items, otherwise `False`. `|` is a bitwise OR operator in Python; that's why it is not working on strings.
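A runnable sketch combining the fix with the question's `splitext` call (the filenames are illustrative):

```python
import os

VALID_EXTS = (".csv", ".xml")

def has_valid_ext(path):
    ext = os.path.splitext(path)[1].lower()
    return ext in VALID_EXTS  # tuple membership test, not `|`

print(has_valid_ext("report.CSV"))  # True
print(has_valid_ext("report.txt"))  # False
```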
The easiest way to do what you want is:

```
if extout not in ('.csv', '.xml'):
    ...
```
Boolean Operators in Python
[ "", "python", "" ]
Here is what I am trying to do:

```
select * from table where in ('completedDate', 'completedBy', 'cancelDate', 'cancelBy') is not null
```

If the four columns above are not null, I need to display the records. I know I could do this with several where/and clauses, but I'm trying to learn and make my new stuff cleaner. Is what I am trying to do above possible in a cleaner way?
If I understand correctly, I guess you want to do this:

```
select *
from table
where completedDate is not null
  and completedBy is not null
  and cancelDate is not null
  and cancelBy is not null
```

Regarding clarity of code, I don't see a better way to write it; that's what I would code anyway.

**EDIT**: I wouldn't really do that in this case, but if this is a very common condition you can add a computed column in the table (stored or not), or create a view on top of the table, and do:

```
select * from view where importantFieldsAreNotNull = 1
```
```
-- Under reasonable assumption on data types:
select * from [table]
where completedBy + cancelBy + DATENAME(yy, completedDate) +
      DATENAME(yy, cancelDate) is not null
```
Trying to create cleaner sql syntax
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with a list of dates of datetime format in two columns. Now what I would like to do is pull each non-null date from `Column1`, add one year to each value, and then store it into `Column2`. So below, after the update, `Column2` (the one with the nulls) will show:

```
2014-07-09 00:00:00.000
2013-07-30 00:00:00.000
2013-10-19 00:00:00.000
2013-10-19 00:00:00.000
```

![enter image description here](https://i.stack.imgur.com/4lofK.png)

How does the syntax go? Do I need to do a select followed by an update?
```
UPDATE tbl
SET Column2 = DATEADD(year, 1, Column1)
WHERE Column2 IS NULL
  AND Column1 IS NOT NULL
```

That's going to be OK.
You can use [DATEADD](http://msdn.microsoft.com/en-us/library/ms186819.aspx):

```
UPDATE YourTable
SET Column2 = DATEADD(YEAR, 1, Column1)
WHERE Column1 IS NOT NULL
```
SQL update datetime entries
[ "", "sql", "sql-server", "database", "sql-server-2008", "t-sql", "" ]
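A quick way to sanity-check the accepted answer's `UPDATE` outside SQL Server is SQLite, where the `date(..., '+1 year')` modifier plays the role of `DATEADD(year, 1, ...)`. This is purely an illustration — SQLite stands in for SQL Server here, and the sample rows are taken from the question:

```python
import sqlite3

# Hypothetical table mirroring the question: Column1 holds dates, Column2 starts NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (Column1 TEXT, Column2 TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?, NULL)",
                 [("2013-07-09",), ("2012-07-30",)])

# SQLite's equivalent of DATEADD(year, 1, Column1) is the date() '+1 year' modifier.
conn.execute("""
    UPDATE tbl
    SET Column2 = date(Column1, '+1 year')
    WHERE Column2 IS NULL AND Column1 IS NOT NULL
""")

rows = conn.execute("SELECT Column1, Column2 FROM tbl ORDER BY Column1").fetchall()
```

The `WHERE` clause matches the accepted answer, so rows that already have a `Column2` value are left alone.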
I have the tables: * `Candidates` * `CandidateLanguages` * `CandidateSkills` Each candidate may have more than 1 language and more than 1 skill So for `Candidate` "FRED", his records in `CandidateLanguages` may be ``` FRED - ENGLISH FRED - FRENCH ``` and his records in `CandidateSkills` may be ``` FRED - RUNNING FRED - JUMPING ``` and for `Candidate` "JIM" his records in `CandidateLanguages` may be ``` JIM - ENGLISH ``` and his records in `CandidateSkills` may be ``` JIM - RUNNING ``` My query needs to select candidates that match multiple skills and languages. So for example in English: Select all of the candidates who speak ALL of the selected languages and have ALL of the selected skills... Or put another way.... ``` SELECT ALL candidates WHERE (language = 'FRENCH' AND language is 'ENGLISH') AND (skill = 'RUNNING' AND skill = 'JUMPING') ``` Of the two candidates above, this should only return "FRED" I understand that the problem is with trying to select multiple records from the Language and Skills table and I think that joins may be required, but now I am lost...
The problem you are solving is called **Relational Division**. See this article: **[Divided We Stand: The SQL of Relational Division](https://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/)** and this question for a few ways to solve it: **[How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation)** One way to solve it (which will be - in general - the most efficient): ``` SELECT ALL c.candidate FROM Candidates c JOIN CandidateLanguages lang1 ON lang1.candidate = c.candidate AND lang1.language = 'English' JOIN CandidateLanguages lang2 ON lang2.candidate = c.candidate AND lang2.language = 'French' JOIN CandidateSkills sk1 ON sk1.candidate = candidate AND sk1.skill = 'Running' JOIN CandidateSkills sk2 ON sk2.candidate = candidate AND sk2.skill = 'Jumping' ; ``` Another way, which seems easier to write, especially if there are a lot of languages and skills involved, is to use two derived tables with `GROUP BY` in each of them: ``` SELECT ALL c.candidate FROM Candidates c JOIN ( SELECT candidate FROM CandidateLanguages WHERE language IN ('English', 'French') GROUP BY candidate HAVING COUNT(*) = 2 -- the number of languages ) AS lang ON lang.candidate = c.candidate JOIN ( SELECT candidate FROM CandidateSkills WHERE skill IN ('Running', 'Jumping') GROUP BY candidate HAVING COUNT(*) = 2 -- the number of skills ) AS sk ON sk.candidate = c.candidate ; ```
If you want *all skills* and *all languages*, simply counting the multiplications will be enough. ``` select c.id from candidate c join candidateLanguage cl on c.id = cl.candidateId join language l on cl.languageId = l.id join candidateSkill cs on c.id = cd.candidateId join skill s on s.id = cs.skillId group by c.id having count(*) = 4 ``` The `having` condition can be expressed as ``` having count(*) = (select count(*) from skill) * (select count(*) from language) ``` **What do I do here?** * Listing all possible Candidate-Language-Skill triplets * Grouping them by candidate * if the count equals to (count of skills) \* (count of languages) then for this candidate **all combinations are present** **EDIT:** If you only want a *subset* of languages and skills, you can filter it: ``` select c.id from candidate c join candidateLanguage cl on c.id = cl.candidateId join language l on cl.languageId = l.id join candidateSkill cs on c.id = cd.candidateId join skill s on s.id = cs.skillId where l.name in ('English', 'French') and s.name in ('RUNNING', 'JUMPING') group by c.id having count(*) = 4 ``` The difference here is that you can count only those skills and languages that are matching your criteria.
Complicated SQL Statement: select that matches on multiple tables
[ "", "sql", "" ]
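The second (derived-table) form of the accepted answer is easy to verify end-to-end. A minimal sketch using Python's built-in sqlite3 as a stand-in for SQL Server, with FRED and JIM's rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CandidateLanguages (candidate TEXT, language TEXT);
    CREATE TABLE CandidateSkills (candidate TEXT, skill TEXT);
    INSERT INTO CandidateLanguages VALUES
        ('FRED','ENGLISH'), ('FRED','FRENCH'), ('JIM','ENGLISH');
    INSERT INTO CandidateSkills VALUES
        ('FRED','RUNNING'), ('FRED','JUMPING'), ('JIM','RUNNING');
""")

# Each derived table keeps only candidates matching ALL of the requested values:
# the HAVING COUNT(*) = 2 must equal the number of values in the IN list.
matches = conn.execute("""
    SELECT l.candidate
    FROM (SELECT candidate FROM CandidateLanguages
          WHERE language IN ('ENGLISH','FRENCH')
          GROUP BY candidate HAVING COUNT(*) = 2) AS l
    JOIN (SELECT candidate FROM CandidateSkills
          WHERE skill IN ('RUNNING','JUMPING')
          GROUP BY candidate HAVING COUNT(*) = 2) AS s
      ON s.candidate = l.candidate
""").fetchall()
```

JIM drops out of both derived tables (one language, one skill), leaving only FRED — the relational-division result the question asks for.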
I have a sqlite table with columns x, y, and z. x and y are unique keys and z is the value. I would like to use R to insert data into this table. If a duplicate record - based on the x and y fields - is being inserted, I would like sqlite to reject the record and continue. In sql, this can be done using "insert or ignore", can this be done using the R package RSQLite? So far there is an option dbWriteTable which writes an R data frame to a sqlite table, but it doesn't seem like there is an option for "insert or ignore"
I found the source where dbWriteTable constructs the sql string and sends it to sqlite. You can use this modified source to allow the "insert or ignore" syntax <https://gist.github.com/jeffwong/5925000>
Rework of answer from @Karsten W to load all data with same process: ``` library(RSQLite) # create table con <- dbConnect(drv=RSQLite::SQLite(), ":memory:") dbExecute(con, "CREATE TABLE tab1 (a CHAR(6) NOT NULL, b CHAR(6) NOT NULL, PRIMARY KEY (a, b));") load_data <- function(x) { # load data # we want to add only the new combinations of a and b insertnew <- dbSendQuery(con, "INSERT OR IGNORE INTO tab1 VALUES (:a,:b)") dbBind(insertnew, params=x) # execute dbClearResult(insertnew) # release the prepared statement } # new data dat1 <- data.frame(a=letters[1:10], b=LETTERS[11:20], stringsAsFactors=FALSE) load_data(dat1) print(dbGetQuery(con, "SELECT COUNT(*) FROM tab1;")) # new data, partly redundant dat1 <- data.frame(a=letters[2:11], b=LETTERS[12:21], stringsAsFactors=FALSE) load_data(dat1) print(dbGetQuery(con, "SELECT COUNT(*) FROM tab1;")) ```
RSQLite insert ignore to skip duplicates
[ "", "sql", "r", "sqlite", "" ]
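The `INSERT OR IGNORE` behaviour itself is plain SQLite — the same statement the patched `dbWriteTable` or the prepared-statement answer would send — so it can be demonstrated from Python's built-in sqlite3, independent of R:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (x TEXT, y TEXT, z REAL, PRIMARY KEY (x, y))")

def upsert_ignore(rows):
    # Rows whose (x, y) key already exists are silently skipped, not rejected with an error.
    conn.executemany("INSERT OR IGNORE INTO tab VALUES (?,?,?)", rows)

upsert_ignore([("a", "b", 1.0), ("c", "d", 2.0)])
upsert_ignore([("a", "b", 99.0), ("e", "f", 3.0)])  # first row duplicates an existing key

count = conn.execute("SELECT COUNT(*) FROM tab").fetchone()[0]
kept = conn.execute("SELECT z FROM tab WHERE x='a' AND y='b'").fetchone()[0]
```

Note the original `z` value (1.0) survives — `INSERT OR IGNORE` keeps the first record and discards the later duplicate, which is exactly the semantics the question asks for.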
Ok this is what I get: > Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc) > Additional information: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) > CREATE FILE encountered operating system error 5(error not found) while attempting to open or create the physical file 'C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data\MyCompany.mdf'. (Microsoft SQL Server, Error: 5123) I have already reinstalled SQL Server 2012, and it still doesn't work. Before this problem I tried to attach the AdventureWorks database and insert some command (I have deleted it) and since then I keep having this problem. I am new to SQL Server. Thanks
This is a permission issue. The account under which the SQL Server process is running has no write permission to the programs folder. Either you add those permissions to the account running the SQL Server service or you attach your database at a location where the SQL Server service account has read/write rights. See [this DBA Stackexchange question](https://dba.stackexchange.com/q/22250) for the same issue.
if you use SSMS try to execute it as Administrator.
Can't attach any database to SQL Server 2012
[ "", "sql", "sql-server-2012", "" ]
I'm importing flat files through SSIS which then exports them into a SQL table. I need to add an additional column containing a GUID somewhere in the middle so that it can also be exported to the table. I've made sure there's an additional column ready in the SQL Table for the GUID to be passed into but I'm unsure of how to create the GUID in the package, any ideas? Thanks
You can do this via a Script Component Transformation. In your data flow task, between source and destination, add the script component. Under 'Inputs and Outputs' add an output column, name it as you like and in Data Type Properties give it DataType of `unique identifier [DT_GUID]` Use this script (Make sure ScriptLanguage is VB.net): ``` Imports System Imports System.Data Imports System.Math Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper Imports Microsoft.SqlServer.Dts.Runtime.Wrapper <Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute> _ <CLSCompliant(False)> _ Public Class ScriptMain Inherits UserComponent Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer) ' Create a Globally Unique Identifier with SSIS Row.Guid = System.Guid.NewGuid() End Sub End Class ```
below link should help you . 1. Add a derived transform 2. add a new column 3. get dt\_guid here the link with more details <http://microsoft-ssis.blogspot.com/2011/02/create-guid-column-in-ssis.html> best of luck
How can I add a GUID column in SSIS from a Flat File source before importing to a SQL Table?
[ "", "sql", "guid", "ssis", "flat-file", "" ]
I'm proficient in Python but a noob at Scala. I'm about to write some dirty experiment code in Scala, and came across the thought that it would be really handy if Scala had a function like `help()` in Python. For example, if I wanted to see the built-in methods for a Scala `Array` I might want to type something like `help(Array)`, just like I would type `help(list)` in Python. Does such a thing exist for Scala?
I do not know of one built-in but you should use [Scaladocs](http://www.scala-lang.org/api/current/index.html) to find the same information. Unless you use eclipse which has an auto complete with short explanations. For instance it will give you all the commands for arrays after typing 'array.'.
I think tab completion is the closest thing to Python's help. There is also a dated but still relevant [post](http://dcsobral.blogspot.com/2011/12/using-scala-api-documentation.html) from @dcsobral on using Scala documentation and [Scalex](http://scalex.org/) which is similar to Hoogle for Haskell. This is the tab completion in the `Object` `Array`. ``` scala> Array. apply asInstanceOf canBuildFrom concat copy empty emptyBooleanArray emptyByteArray emptyCharArray emptyDoubleArray emptyFloatArray emptyIntArray emptyLongArray emptyObjectArray emptyShortArray fallbackCanBuildFrom fill isInstanceOf iterate newBuilder ofDim range tabulate toString unapplySeq ``` This is for the methods on the class `Array`. Not sure why this doesn't show value members after `a.` ``` scala> val a = Array(1,2,3) a: Array[Int] = Array(1, 2, 3) scala> a. apply asInstanceOf clone isInstanceOf length toString update ``` Though a little daunting at times tab completion on a method shows the method signatures. Here it is for `Array.fill` ``` def fill[T](n1: Int, n2: Int)(elem: => T)(implicit evidence$10: reflect.ClassTag[T]): Array[Array[T]] def fill[T](n1: Int, n2: Int, n3: Int)(elem: => T)(implicit evidence$11: reflect.ClassTag[T]): Array[Array[Array[T]]] def fill[T](n1: Int, n2: Int, n3: Int, n4: Int)(elem: => T)(implicit evidence$12: reflect.ClassTag[T]): Array[Array[Array[Array[T]]]] def fill[T](n1: Int, n2: Int, n3: Int, n4: Int, n5: Int)(elem: => T)(implicit evidence$13: reflect.ClassTag[T]): Array[Array[Array[Array[Array[T]]]]] def fill[T](n: Int)(elem: => T)(implicit evidence$9: reflect.ClassTag[T]): Array[T] ```
Scala equivalent of Python help()
[ "", "python", "scala", "equivalent", "" ]
I'm formatting GPS output logs and I need an efficient method to remove x number of lines above the line that contains a 0 and y number of lines below that line. ``` *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 0 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* ``` If the line contains "Position fix ind: 0", remove 6 lines above it and remove 3 lines below in and remove the line it is in EDIT: The input file is a .log file EDIT 2: input file ``` 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 0 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 5 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 ```
``` def remLines(infilepath, outfilepath, delim, above, below): infile = open(infilepath) outfile = open(outfilepath, 'w') buff = [] line = infile.readline() while line: if line.strip() == delim: buff = [] for _ in range(below): # need to error check here, if you're not certain that your input file is correctly formatted infile.readline() else: if len(buff) == above: outfile.write(buff[0]) buff = buff[1:] buff.append(line) line = infile.readline() outfile.write(''.join(buff)) if __name__ == "__main__": remLines('path/to/input', 'path/to/output', "Position fix ind: 0", 6,3) ``` **Testing**: Input: ``` 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 0 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 5 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 ``` Output: ``` 1 2 3 3 2 1 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 5 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 ```
You can use a `set` here, iterate over the file and as soon as you see `'Position fix ind: 0'` in a line(say, index of the line is `i`), then add a set of numbers from `i-6` to `i+3` to a set. ``` f = open('abc') se = set() for i,x in enumerate(f): if 'Position fix ind: 0' in x: se.update(range(i-6,i+4)) f.close() ``` Now iterate over the file again and skip those indexes that are present in that set: ``` f = open('abc') f1 = open('out.txt', 'w') for i,x in enumerate(f): if i not in se: f1.write(x) f.close() f1.cose() ``` input file: ``` 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 0 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 5 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 ``` **output:** ``` 1 2 3 3 2 1 1 2 3 *--------------------------------------* UTC Time: 000000.00 Latitude: 0000.0000 N/S ind.: N Longitude: 0000.0000 E/W ind: E Position fix ind: 5 Satellites Used: 3 MSL Altitude: 00.0 *--------------------------------------* 3 2 1 ```
Remove amount of lines above and below a line containing a string
[ "", "python", "python-2.5", "" ]
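Both answers above amount to the same thing: work out which line indices fall inside a marker's exclusion window, then emit everything else. A compact two-pass sketch over an in-memory list of lines (the `filter_lines` name and interface are mine, not from either answer):

```python
def filter_lines(lines, marker, above, below):
    """Drop each line containing `marker`, plus `above` lines before it and `below` after it."""
    skip = set()
    for i, line in enumerate(lines):
        if marker in line:
            # range() happily handles a negative start or an end past len(lines);
            # out-of-bounds indices simply never match anything.
            skip.update(range(i - above, i + below + 1))
    return [line for i, line in enumerate(lines) if i not in skip]
```

For very large logs the accepted answer's streaming buffer uses less memory, since this version holds all lines (read e.g. via `open(path).readlines()`) at once.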
I have two dates `06-Jan-2009` and `12-Dec-2010` I want to calculate the date difference between these two dates.. (basis on the month and year). I want to get the answer 2 Year. But if dates is `06-Jan-2009` And `12-Oct-2010` then I need 10 Months as output.
Try this one: ``` Declare @SDate DateTime ='06-Jan-2009' Declare @EDate DateTime ='12-Oct-2009' select case when DateDiff(M,@sDate,@EDate) <=12 then DateDiff(M,@sDate,@EDate) else Round( ( Convert(Decimal(18,0) ,DateDiff(M,@sDate,@EDate)/12.0)),0) end ```
``` declare @Birthdate datetime declare @AsOnDate datetime declare @years int declare @months int declare @days int declare @hours int declare @minutes int --NOTE: date of birth must be smaller than As on date, --else it could produce wrong results set @Birthdate = '1989-11-30 9:27 pm' --birthdate set @AsOnDate = Getdate() --current datetime --calculate years select @years = datediff(year,@Birthdate,@AsOnDate) --calculate months if it's value is negative then it --indicates after __ months; __ years will be complete --To resolve this, we have taken a flag @MonthOverflow... declare @monthOverflow int select @monthOverflow = case when datediff(month,@Birthdate,@AsOnDate) - ( datediff(year,@Birthdate,@AsOnDate) * 12) <0 then -1 else 1 end --decrease year by 1 if months are Overflowed select @Years = case when @monthOverflow < 0 then @years-1 else @years end select @months = datediff(month,@Birthdate,@AsOnDate) - (@years * 12) --as we do for month overflow criteria for days and hours --& minutes logic will followed same way declare @LastdayOfMonth int select @LastdayOfMonth = datepart(d,DATEADD (s,-1,DATEADD(mm, DATEDIFF(m,0,@AsOnDate)+1,0))) select @days = case when @monthOverflow<0 and DAY(@Birthdate)> DAY(@AsOnDate) then @LastdayOfMonth + (datepart(d,@AsOnDate) - datepart(d,@Birthdate) ) - 1 else datepart(d,@AsOnDate) - datepart(d,@Birthdate) end declare @hoursOverflow int select @hoursOverflow = case when datepart(hh,@AsOnDate) - datepart(hh,@Birthdate) <0 then -1 else 1 end select @hours = case when @hoursOverflow<0 then 24 + datepart(hh,@AsOnDate) - datepart(hh,@Birthdate) else datepart(hh,@AsOnDate) - datepart(hh,@Birthdate) end declare @minutesOverflow int select @minutesOverflow = case when datepart(mi,@AsOnDate) - datepart(mi,@Birthdate) <0 then -1 else 1 end select @minutes = case when @hoursOverflow<0 then 60 - (datepart(mi,@AsOnDate) - datepart(mi,@Birthdate)) else abs(datepart (mi,@AsOnDate) - datepart(mi,@Birthdate)) end select @Months=case when @days < 0 or DAY(@Birthdate)> DAY(@AsOnDate) then @Months-1 else @Months end Declare @lastdayAsOnDate int; set @lastdayAsOnDate = datepart(d,DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@AsOnDate),0))); Declare @lastdayBirthdate int; set @lastdayBirthdate = datepart(d,DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@Birthdate)+1,0))); if (@Days < 0) ( select @Days = case when( @lastdayBirthdate > @lastdayAsOnDate) then @lastdayBirthdate + @Days else @lastdayAsOnDate + @Days end ) print convert(varchar,@years) + ' year(s), ' + convert(varchar,@months) + ' month(s), ' + convert(varchar,@days) + ' day(s), ' + convert(varchar,@hours) + ':' + convert(varchar,@minutes) + ' hour(s)' ```
Calculate Duration between two dates (Month and Year will be Considered)
[ "", "sql", "sql-server", "datediff", "" ]
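The accepted answer's rule — report the raw month count up to 12 months, otherwise round to whole years — can be mirrored in plain Python. A sketch that ignores the day-of-month exactly as `DATEDIFF(month, ...)` does; both helper names are mine:

```python
from datetime import date

def month_diff(start, end):
    # Whole-month boundary crossings between two dates, day component ignored
    # (same counting rule as SQL Server's DATEDIFF(month, ...)).
    return (end.year - start.year) * 12 + (end.month - start.month)

def describe_span(start, end):
    months = month_diff(start, end)
    if months <= 12:
        return f"{months} Months"
    # Past a year, round to the nearest whole year, as in the accepted answer.
    return f"{round(months / 12)} Year"

span = describe_span(date(2009, 1, 6), date(2010, 12, 12))  # the question's first pair
```

Note that 06-Jan-2009 to 12-Dec-2010 is 23 month boundaries, which rounds to the "2 Year" the question asks for.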
I've started to use the IPython Notebook and am enjoying it. Sometimes, I write buggy code that takes massive memory requirements or has an infinite loop. I find the "interrupt kernel" option sluggish or unreliable, and sometimes I have to restart the kernel, losing everything in memory. I also sometimes write scripts that cause OS X to run out of memory, and I have to do a hard reboot. I'm not 100% sure, but when I've written bugs like this before and ran Python in the terminal, I can usually `CTRL`+`C` my scripts. I am using the Anaconda distribution of IPython notebook with Firefox on Mac OS X.
I could be wrong, but I'm pretty sure that the "interrupt kernel" button just sends a SIGINT signal to the code that you're currently running (this idea is supported by Fernando's comment [here](http://lighthouseinthesky.blogspot.com/2011/09/review-ipython-notebooks.html)), which is the same thing that hitting CTRL+C would do. Some processes within python handle SIGINTs more abruptly than others. If you desperately need to stop something that is running in iPython Notebook and you started iPython Notebook from a terminal, you can hit CTRL+C twice in that terminal to interrupt the entire iPython Notebook server. This will stop iPython Notebook altogether, which means it won't be possible to restart or save your work, so this is obviously not a great solution (you need to hit CTRL+C twice because it's a safety feature so that people don't do it by accident). In case of emergency, however, it generally kills the process more quickly than the "interrupt kernel" button.
You can press `I` twice to interrupt the kernel. This only works if you're in Command mode. If not already enabled, press `Esc` to enable it.
Is there an equivalent to CTRL+C in IPython Notebook in Firefox to break cells that are running?
[ "", "python", "ipython", "jupyter-notebook", "" ]
In order to use `random.choice()`, I have to convert my string to a list: ``` >>> x = "hello" >>> y = list(x) >>> y ['h', 'e', 'l', 'l', 'o'] ``` But trying to do that in reverse yields a string that actually looks like `['h', 'e', 'l', 'l', 'o']` instead of just `hello`. Doing this repeatedly causes an infinite loop that produces strings which look like this: ``` "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ' ``` And so on. So, how can I convert the list back into a string: ``` >>> x = ['x', 'y', 'z'] >>> y = something(x) >>> y 'xyz' ```
``` >>> x = "hello" >>> y = list(x) >>> y ['h', 'e', 'l', 'l', 'o'] >>> ''.join(y) 'hello' ```
`"".join (x)` will do it quite easily.
Converting list to string?
[ "", "python", "" ]
Hi below is my sample table "tbltemptransaction" & "tblProduct" ``` tbltemptransaction table userid pid pname description qty 3 1 HP PROBOOK 4440S Intel Core i5 5 3 1 HP PROBOOK 4440S Intel Core i5 3 3 6 DocuPrint C1110B 16ppm black 2 3 2 iMac 21.5" Intel Core i5 3 tblProduct table PID PName Qty Description UnitPrice 1 HP PROBOOK 4440S 20 Intel Core i5 2300 1 2 iMac 21.5" 20 Intel Core i5 3999 2 3 NEC V260 Projector 10 2700 lumens 2899 3 4 DES-1228P Web Smart 10 24-Port 899 5 5 DES-1210 Series 5 8-Port Web 699 5 6 DocuPrint C1110B 15 16ppm black 899 4 ``` The table where userid=3 that i can get right now where running my sql is is like this. ``` pid pname description qty 1 HP PROBOOK 4440S Intel Core i5 5 1 HP PROBOOK 4440S Intel Core i5 3 6 DocuPrint C1110B 16ppm black 2 2 iMac 21.5" Intel Core i5 3 ``` May i know how to code to make it like table below ? ``` pid pname description qty 1 HP PROBOOK 4440S Intel Core i5 8 6 DocuPrint C1110B 16ppm black 2 2 iMac 21.5" Intel Core i5 3 ``` If the pid is the same then qty will sum together. Thanks.
When `pid`, `pname` and `description` are the same (like in your example) then `group by` will aggregate those results to one row. ``` select t.pid, t.pname, t.description, sum(t.qty) as qty from tbltemptransaction t group by t.pid, t.pname, t.description ``` **Edit** ``` select t.pid, sum(t.qty) as qty from tbltemptransaction t where pid = @userid group by t.pid ``` @user id is a parameter for session("userid") **Final Edit** ``` select t.pid, p.pname, p.description, sum(t.qty) as qty from tbltemptransaction t inner join tblproducts p on p.pid = t.pid where t.userid = 3 group by t.pid, p.pname, p.description ```
``` select sum(qty) as qty, pid, pname, description from tbltemptransaction group by pid,pname, description ```
Sql Distinct & count
[ "", "sql", "count", "distinct", "" ]
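The grouping in the accepted answer can be checked against the question's own rows; a minimal illustration using Python's sqlite3 as a stand-in for SQL Server, keeping only the columns the aggregation needs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbltemptransaction (userid INT, pid INT, qty INT);
    INSERT INTO tbltemptransaction VALUES
        (3, 1, 5), (3, 1, 3), (3, 6, 2), (3, 2, 3);
""")

# Rows sharing a pid collapse into one row, with their quantities summed.
rows = conn.execute("""
    SELECT pid, SUM(qty) AS qty
    FROM tbltemptransaction
    WHERE userid = ?
    GROUP BY pid
    ORDER BY pid
""", (3,)).fetchall()
```

The two `pid = 1` rows (qty 5 and 3) merge into a single row with qty 8, matching the desired output in the question; joining back to `tblProduct` for `pname`/`description` works as in the accepted answer's final query.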
What algorithms are available for size efficient [A dictionary or associative array](http://en.wikipedia.org/wiki/Associative_array)? For example, with this key/value set, how can one avoid duplication "Alice" in values? ``` { "Pride and Prejudice": "Alice", "The Brothers Karamazov": "Pat", "Wuthering Heights": "Alice" } ``` I checked [Python's implementation on dictionary](http://www.laurentluce.com/posts/python-dictionary-implementation/), but it seems like the implementation is focused on speed (keeping O(1)) not size.
As mentioned by [bennofs](https://stackoverflow.com/users/2494803/bennofs) in comments, you could use [`intern()`](http://docs.python.org/2/library/functions.html#intern) to ensure that identical strings are stored only once: ``` class InternDict(dict): def __setitem__(self, key, value): if isinstance(value, str): super(InternDict, self).__setitem__(key, intern(value)) else: super(InternDict, self).__setitem__(key, value) ``` Here's an example of the effect that has: ``` >>> d = {} >>> d["a"] = "This string is presumably too long to be auto-interned." >>> d["b"] = "This string is presumably too long to be auto-interned." >>> d["a"] is d["b"] False >>> di = InternDict() >>> di["a"] = "This string is presumably too long to be auto-interned." >>> di["b"] = "This string is presumably too long to be auto-interned." >>> di["a"] is di["b"] True ```
One way to improve space efficiency (in addition to sharing values, which (as bennofs points out in the comments) you can probably accomplish efficiently by using sys.intern) is to use [hopscotch hashing](http://mcg.cs.tau.ac.il/projects/hopscotch-hashing-1), which is an open addressing scheme (a variant of linear probing) for resolving collisions - closed addressing schemes use more space because you need to allocate a linked list for each bucket, whereas with an open addressing scheme you'll just use an open adjacent slot in the backing array without needing to allocate any linked lists. Unlike other open addressing schemes (such as cuckoo hashing or vanilla linear probing), hopscotch hashing performs well under a high load factor (over 90%) and guarantees constant time lookups.
size efficient dictionary(associative array) implementation
[ "", "python", "algorithm", "dictionary", "" ]
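The `intern()` builtin used in the accepted answer is Python 2; in Python 3 the same facility lives at `sys.intern`. A sketch of the same wrapper under that assumption:

```python
import sys

class InternDict(dict):
    """Dict that interns string values, so identical value strings share one object."""
    def __setitem__(self, key, value):
        if isinstance(value, str):
            value = sys.intern(value)  # returns the canonical copy of this string
        super().__setitem__(key, value)

d = InternDict()
d["Pride and Prejudice"] = "Alice is a rather long value string"
d["Wuthering Heights"] = "Alice is a rather long value string"
# Both entries now point at the same interned string object.
shared = d["Pride and Prejudice"] is d["Wuthering Heights"]
```

Only the references are duplicated per key; the character data for "Alice..." is stored once, which is the space saving the question is after.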
So I have a table of aliases linked to record ids. I need to find duplicate aliases with unique record ids. To explain better: ``` ID Alias Record ID 1 000123 4 2 000123 4 3 000234 4 4 000123 6 5 000345 6 6 000345 7 ``` The result of a query on this table should be something to the effect of ``` 000123 4 6 000345 6 7 ``` Indicating that both record 4 and 6 have an alias of 000123 and both record 6 and 7 have an alias of 000345. I was looking into using GROUP BY but if I group by alias then I can't select record id and if I group by both alias and record id it will only return the first two rows in this example where both columns are duplicates. The only solution I've found, and it's a terrible one that crashed my server, is to do two different selects for all the data and then join them ``` ON [T_1].[ALIAS] = [T_2].[ALIAS] AND NOT [T_1].[RECORD_ID] = [T_2].[RECORD_ID] ``` Are there any solutions out there that would work better? As in, not crash my server when run on a few hundred thousand records?
It looks as if you have two requirements: 1. Identify all aliases that have more than one record id, and 2. List the record ids for these aliases horizontally. The first is a lot easier to do than the second. Here's some SQL that ought to get you where you want with the first: ``` WITH A -- Get a list of unique combinations of Alias and [Record ID] AS ( SELECT Distinct Alias , [Record ID] FROM T1 ) , B -- Get a list of all those Alias values that have more than one [Record ID] associated AS ( SELECT Alias FROM A GROUP BY Alias HAVING COUNT(*) > 1 ) SELECT A.Alias , A.[Record ID] FROM A JOIN B ON A.Alias = B.Alias ``` Now, as for the second. If you're satisfied with the data in this form: ``` Alias Record ID 000123 4 000123 6 000345 6 000345 7 ``` ... you can stop there. Otherwise, things get tricky. The PIVOT command will *not* necessarily help you, because it's trying to solve a different problem than the one you have. I am assuming that you can't necessarily predict how many duplicate `Record ID` values you have per `Alias`, and thus don't know how many columns you'll need. If you have only two, then displaying each of them in a column becomes a relatively trivial exercise. If you have more, I'd urge you to consider whether the destination for these records (a report? A web page? Excel?) might be able to do a better job of displaying them horizontally than SQL Server can do in returning them arranged horizontally.
Perhaps what you want is just the `min()` and `max()` of `RecordId`: ``` select Alias, min(RecordID), max(RecordId) from yourTable t group by Alias having min(RecordId) <> max(RecordId) ``` You can also count the number of distinct values, using `count(distinct)`: ``` select Alias, count(distinct RecordId) as NumRecordIds, min(RecordID), max(RecordId) from yourTable t group by Alias having count(DISTINCT RecordID) > 1; ```
In SQL, find duplicates in one column with unique values for another column
[ "", "sql", "sql-server", "" ]
In a python unit test (actually Django), what is the correct `assert` statement that will tell me if my test result contains a string of my choosing? ``` self.assertContainsTheString(result, {"car" : ["toyota","honda"]}) ``` I want to make sure that my `result` contains at least the json object (or string) that I specified as the second argument above ``` {"car" : ["toyota","honda"]} ```
``` self.assertContains(result, "abcd") ``` You can modify it to work with json. Use `self.assertContains` only for `HttpResponse` objects. For other objects, use `self.assertIn`.
To assert if a string is or is not a substring of another, you should use `assertIn` and `assertNotIn`: ``` # Passes self.assertIn('bcd', 'abcde') # AssertionError: 'bcd' unexpectedly found in 'abcde' self.assertNotIn('bcd', 'abcde') ``` These are new since [Python 2.7](https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertIn) and [Python 3.1](https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertIn)
Python/Django: how to assert that unit test result contains a certain string?
[ "", "python", "json", "django", "unit-testing", "assert", "" ]
I am having a problem retrieving dates once the day of the month goes above 12. For example, if I pick 2/7/2013 to 19/July/2013 from the calendar extender, it throws this error: The DateTime represented by the string is not supported in calendar System.Globalization.GregorianCalendar. This is my code. ``` var format = "MM/dd/yyyy"; DateTime one = DateTime.ParseExact(startdate, format, CultureInfo.InvariantCulture); DateTime two = DateTime.ParseExact(enddate, format, CultureInfo.InvariantCulture); if (two >= one) { SqlConnection conn = new SqlConnection("Data Source=""catalog="";Integrated Security=True"); conn.Open(); SqlCommand cmd = new SqlCommand("SELECT Name,CLass, NRIC, StallNo, AmountSpent ,TimeDate=convert(nvarchar,timedate,103) FROM StudentTransactions WHERE TimeDate BETWEEN '" + one + "' AND '" + two + "'", conn); SqlDataReader reader = cmd.ExecuteReader(); GridView1.DataSource = reader; GridView1.DataSourceID = null; GridView1.Visible = true; GridView1.DataBind(); conn.Close(); } ```
try this ``` select CONVERT(varchar,<datecol>,103) --- and will return as dd/mm/yyyy where convert(date,<datecol>) between '' and '' ```
If you want to alter the format of a column of a `GridView` from CodeBehind Add a `RowDataBound` to your grid view. Then in the `GridView_RowDataBound(object sender, GridViewRowEventArgs e)` method, you'll be able to access e which will provide you with access to the individual cells of that row where you can specify a format. [Reference](http://forums.asp.net/p/1589807/4025153.aspx) [Reference2](http://www.aspdotnet-suresh.com/2011/05/how-to-set-date-format-in-gridview.html)
string is not supported in calendar System.Globalization.GregorianCalendar
[ "", "asp.net", "sql", "sql-server", "" ]
I have a table with email addresses. I am trying to fill a `DomainName` column with the domains from the email addresses. The select statement works fine on its own. But when I add it to the update query I get the following: > Subquery returned more than 1 value. This is not permitted when the subquery follows `=`, `!=`, `<`, `<=`, `>`, `>=` or when the subquery is used as an expression. What should I be using in place of the `=`? ``` update Table1 set Table1.DomainName= (select SUBSTRING(Table1.Email,(CHARINDEX('@',Table1.Email)+1),LEN(Table1.Email) - (CHARINDEX('@',Table1.Email))) as DomainName FROM Table1) ```
Just drop the subselect. ``` update Table1 set Table1.DomainName= SUBSTRING(Table1.Email,(CHARINDEX('@',Table1.Email)+1),LEN(Table1.Email) - (CHARINDEX('@',Table1.Email))) FROM Table1 ```
Remove the `SELECT` statement, it should be this way: ``` update Table1 set Table1.DomainName= SUBSTRING(Table1.Email, (CHARINDEX('@', Table1.Email) + 1), LEN(Table1.Email) - (CHARINDEX('@',Table1.Email)) ```
SQL Update Query with select statement
[ "", "sql", "" ]
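The fixed statement from the accepted answer is easy to verify outside SQL Server: in SQLite, `substr`/`instr` play the role of `SUBSTRING`/`CHARINDEX`, and two-argument `substr` already means "from this position to the end", so no length arithmetic is needed. A minimal illustration (SQLite as a stand-in, sample emails mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (Email TEXT, DomainName TEXT);
    INSERT INTO Table1 (Email) VALUES
        ('alice@example.com'), ('bob@test.org');
""")

# Everything after the '@' becomes the domain -- one UPDATE, no subquery.
conn.execute("""
    UPDATE Table1
    SET DomainName = substr(Email, instr(Email, '@') + 1)
""")

domains = [r[0] for r in conn.execute(
    "SELECT DomainName FROM Table1 ORDER BY Email")]
```

The original error came from wrapping the expression in a `SELECT ... FROM Table1` subquery, which returns one row per table row; assigning the scalar expression directly, as above, is the fix.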