My task is to create a recursive function in Python that takes a list and a value of 0 as its inputs and then adds up all of the odd numbers on the list and returns that value. Below is the code that I have and it keeps returning that the list index is out of range. No matter what I do I can not get it to work. ``` def addodds2(x,y): total=0 a=x[y] while y<len(x): if a%2!=0: total+=a return(addodds2(x,y+1)) else: return(addodds2(x,y+1)) return(total) print(addodds2([3,2,4,7,2,4,1,3,2],0)) ```
Your code should be (the comments explain my corrections): ``` def addodds2(x,y): total=0 if y<len(x): #you don't need a while here a=x[y] #you have to do this operation only if y<len(x), otherwise you get the index error you are seeing if a%2!=0: total+=a return total+addodds2(x,y+1) #you have to add the current total to the result returned by the recursive addodds2() call (otherwise you would get 0 as the final result) return total print(addodds2([3,2,4,7,2,4,1,3,2],0)) ```
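For comparison, the same recursion can be written without the index argument by slicing the list instead — a sketch of my own, not part of the answer above (note that each slice copies the list, so this is O(n²) and only suited to short lists):

```python
def add_odds(nums):
    # Base case: an empty list contributes 0.
    if not nums:
        return 0
    head = nums[0]
    # Count the head only when it is odd, then recurse on the tail.
    return (head if head % 2 != 0 else 0) + add_odds(nums[1:])

print(add_odds([3, 2, 4, 7, 2, 4, 1, 3, 2]))  # 3 + 7 + 1 + 3 = 14
```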
Since you are trying to solve this recursively, I don't think you want that `while` loop. When you are trying to solve a problem recursively, you need two parts: you need a part that does some of the work, and you need a part that handles reaching the end of the work. This is the "basis case". Often when solving problems like this, if you have a zero-length list you hit the basis case immediately. What should be the result for a zero-length list? I'd say 0. So, here's the basic outline of a function to add together all the numbers in a list: Check the length, and if you are already at the end or after the end, return 0. Otherwise, return the current item added to a recursive call (with the index value incremented). Get that working, and then modify it so it only adds the odd values. P.S. This seems like homework, so I didn't want to just give you the code. It's easier to remember this stuff if you actually figure it out yourself. Good luck!
Recursive Function in Python adding Odd Values in List
[ "", "python", "" ]
I have a 2D numpy array S representing a state space, with 80000000 rows (as states) and 5 columns (as state variables). I initialize K0 with S, and at each iteration, I apply a state transition function f(x) on all of the states in Ki, and delete states whose f(x) is not in Ki, resulting in Ki+1, until it converges, i.e. Ki+1 = Ki. Going like this would take ages: ``` K = S to_delete = [0] while to_delete: to_delete = [] for i in xrange(len(K)): if not f(K[i]) in K: to_delete.append(i) K = delete(K, to_delete, 0) ``` So I wanted to make a vectorized implementation: slice K in columns, apply f and join them once again, thus obtaining f(K) somehow. The question now is how to get an array of length len(K), say Sel, where each entry Sel[i] determines whether f(K[i]) is in K. Exactly like the function in1d works. Then it would be simple to make ``` K = K[Sel] ```
Your question is difficult to understand because it contains extraneous information and typos. If I understand correctly, you simply want an efficient way to perform a set operation on the rows of a 2D array (in this case the intersection of the rows of `K` and `f(K)`). You can do this with [numpy.in1d](http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html) if you create a [structured array](http://docs.scipy.org/doc/numpy/user/basics.rec.html) view. Code: if this is `K`: ``` In [50]: k Out[50]: array([[6, 6], [3, 7], [7, 5], [7, 3], [1, 3], [1, 5], [7, 6], [3, 8], [6, 1], [6, 0]]) ``` and this is `f(K)` (for this example I subtract 1 from the first col and add 1 to the second): ``` In [51]: k2 Out[51]: array([[5, 7], [2, 8], [6, 6], [6, 4], [0, 4], [0, 6], [6, 7], [2, 9], [5, 2], [5, 1]]) ``` then you can find all rows in `K` also found in `f(K)` by doing something like this: ``` In [55]: k[np.in1d(k.view(dtype='i,i').reshape(k.shape[0]),k2.view(dtype='i,i').reshape(k2.shape[0]))] Out[55]: array([[6, 6]]) ``` `view` and `reshape` create flat structured views so that each row appears as a single element to `in1d`. `in1d` creates a boolean index of `k` of matched items which is used to fancy index `k` and return the filtered array.
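The view/reshape incantation is easy to get wrong for other dtypes and column counts, so it can help to wrap it in a small helper. This is a sketch of my own (`rows_in` is my name for it); it assumes C-contiguous arrays that share the same integer dtype:

```python
import numpy as np

def rows_in(a, b):
    """Boolean mask over the rows of `a`: True where the row also occurs in `b`."""
    # Build a structured dtype with one field per column so each row
    # collapses to a single comparable element for np.in1d.
    dt = np.dtype([('f%d' % i, a.dtype) for i in range(a.shape[1])])
    av = np.ascontiguousarray(a).view(dt).ravel()
    bv = np.ascontiguousarray(b).view(dt).ravel()
    return np.in1d(av, bv)

k = np.array([[6, 6], [3, 7], [7, 5]])
k2 = np.array([[5, 7], [2, 8], [6, 6]])
print(k[rows_in(k, k2)])  # only the row [6, 6] appears in both
```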
Not sure if I understand your question entirely, but if the interpretation of Paul is correct, it can be solved efficiently and fully vectorized using the [numpy\_indexed](https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP) package as such in a single readable line: ``` import numpy_indexed as npi K = npi.intersection(K, f(K)) ``` Also, this works for rows of any type or shape.
Implementation of numpy in1d for 2D arrays?
[ "", "python", "numpy", "" ]
So I have a long text file with a bunch of numbers and I want to reformat this file so that every 12 characters are on their own line, the file is 4392 characters long. My strategy was to add the contents of the infile to a list and slice and append the first 12 characters to a new list then write it to an outfile using a while loop for the list slicing parameters. I am getting an error on `out.writelines(l)` : ``` TypeError: writelines() argument must be a sequence of strings. ``` Here is my code: ``` l = [] outl=[] with open('r6.txt', 'r') as f, \ open('out.txt', 'w') as out: outl.append(f) a = 0 b = 11 while b <= 4392: l.append(outl[a:b]) l.append('/n') out.writelines(l) a+=12 b+=12 l=[] ```
Vlad-ardelean is correct in saying you need to append `f.readlines()` to `outl` instead of the file `f`. Also, you're using `writelines()` to write a single line each time, but `writelines()` is intended for writing out a list of strings to a file, not one item lists. Perhaps a better way to approach the insertion of newline characters would be: ``` l = [] outl=[] with open('r6.txt', 'r') as f, \ open('out.txt', 'w') as out: # gets entire file as one string and removes line breaks outl = ''.join(f.readlines()).replace('\n','') l = [outl[each:each+12]+'\n' for each in xrange(0,len(outl),12)] out.writelines(l) ``` Sample input for r6: ``` abcdefeounv lernbtlttb berolinervio bnrtopimrtynprymnpobm,t 2497839085gh b640h846j048nm5gh0m8-9 2g395gm4-59m46bn 2vb-9mb5-9046m-b946m-b946mb-96m-05n=570n;rlgbm'dfb ``` output: ``` abcdefeounv lernbtlttbbe rolinerviobn rtopimrtynpr ymnpobm,t249 7839085ghb64 0h846j048nm5 gh0m8-92g395 gm4-59m46bn2 vb-9mb5-9046 m-b946m-b946 mb-96m-05n=5 70n;rlgbm'df b ```
Hm, although other answers seem to be correct, I still think that the final solution can be, well, faster: ``` with open('r6.txt', 'r') as f, \ open('out.txt', 'w') as out: # call anonymous lambda function returning f.read(12) until output is '', put output to part for part in iter(lambda: f.read(12), ''): # write this part and newline character out.write(part) out.write('\n') ```
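Both answers reduce to the same fixed-width chunking step; factoring it into a helper of its own (my naming, not from either answer) makes the logic easy to test before worrying about file I/O:

```python
def chunk(text, width=12):
    """Yield consecutive slices of `text`, each at most `width` characters."""
    for i in range(0, len(text), width):
        yield text[i:i + width]

data = "abcdefeounv lernbtlttbbe"      # 24 characters
print(list(chunk(data)))               # two 12-character pieces
```

Writing the output file is then just `out.write('\n'.join(chunk(text)) + '\n')`.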
Splitting up a text file into sets of n characters
[ "", "python", "list", "format", "slice", "" ]
I am trying to get a list of book titles that begin with a certain letter. However, I need to disregard "the" at the beginning of the title. For instance, when I'm looking for titles that begin with "a", this query should return both "the alamo" and "american history". How can I do this?
use ``` WHERE CASE WHEN SUBSTR(title, 1, 4) = 'the ' THEN SUBSTR(title, 5) ELSE title END LIKE '...' ``` in your query, or use patterns.
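The CASE trick is easy to sanity-check with SQLite, whose SUBSTR and LIKE behave the same way for this query (the table and sample titles below are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE books (title TEXT)")
conn.executemany("INSERT INTO books VALUES (?)",
                 [('the alamo',), ('american history',), ('moby dick',)])

# Strip a leading 'the ' before testing the first letter.
rows = conn.execute("""
    SELECT title FROM books
    WHERE CASE WHEN SUBSTR(title, 1, 4) = 'the '
               THEN SUBSTR(title, 5) ELSE title END LIKE 'a%'
""").fetchall()
print(rows)  # both 'the alamo' and 'american history' match
```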
try this **Query** ``` select titles from tablename where titles like 'a%' or titles like 'the a%' ``` **Edit** ``` select titles from tablename where REPLACE(titles,'the ','') like 'a%' ```
get first letter of string, disregarding "the"
[ "", "sql", "string", "sql-server-2008", "" ]
I'm wondering how to approach organizing users in an ASP.NET WebForms application with SQL Server 2008. * Create SQL Server's built-in Logins / Users, or * Create a Username / User and use it to connect to the database, with user data held in a table along with a salted password. Is the second option a good one? If so, please support me with your knowledge.
It sounds like Forms Authentication can help you immensely: <http://support.microsoft.com/kb/301240>
For an ASP.NET application, you should be going with your second option. You **definitely** want a strong hashing algorithm equipped with a fairly long salt to make sure your passwords are secure. In terms of an algorithm, you should look at something like [Bcrypt](http://en.wikipedia.org/wiki/Bcrypt), it's got a pretty good reputation on [Security StackExchange](https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage).
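bcrypt itself requires a third-party package; purely to illustrate the salted, deliberately slow hashing idea with the standard library, here is a PBKDF2 sketch (the function names and parameters are mine, and the iteration count is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest) for a password using PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100_000):
    # Constant-time comparison to avoid timing leaks.
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password('s3cret')
print(verify_password('s3cret', salt, digest))  # True
print(verify_password('wrong', salt, digest))   # False
```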
ASP.NET + SQL Server user management - best practices?
[ "", "asp.net", ".net", "sql", "sql-server-2008", "" ]
I'm converting an image to **base64** string and sending it from android device to the server. Now, I need to change that string back to an image and save it in the database. Any help?
Try this: ``` import base64 imgdata = base64.b64decode(imgstring) filename = 'some_image.jpg' # I assume you have a way of picking unique filenames with open(filename, 'wb') as f: f.write(imgdata) # f gets closed when you exit the with statement # Now save the value of filename to your database ```
Convert a base64 string into an OpenCV image (note that OpenCV stores channels in BGR order): ``` import base64 import io import cv2 import numpy as np from PIL import Image # Take in base64 string and return an OpenCV image def stringToRGB(base64_string): imgdata = base64.b64decode(str(base64_string)) img = Image.open(io.BytesIO(imgdata)) # PIL decodes to RGB; swap channels for OpenCV's BGR convention opencv_img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR) return opencv_img ```
How to convert base64 string to image?
[ "", "python", "base64", "" ]
Lets say, I have Product and Score tables. ``` Product ------- id name Score ----- id ProductId ScoreValue ``` I want to get the top 10 Products with the highest AVERAGE scores, how do I get the average and select the top 10 products in one select statement? here is mine which selects unexpected rows ``` SELECT TOP 10 Product.ProductName Score.Score FROM Product, Score WHERE Product.ID IN (select top 100 productid from score group by productid order by sum(score) desc) order by Score.Score desc ```
Give this a try: ``` WITH records AS ( SELECT a.ID, a.Name, AVG(b.ScoreValue) avg_score, DENSE_RANK() OVER (ORDER BY AVG(b.ScoreValue) DESC) rn FROM Product a INNER JOIN Score b ON a.ID = b.ProductID GROUP BY a.ID, a.Name ) SELECT ID, Name, Avg_Score FROM records WHERE rn <= 10 ORDER BY avg_score DESC ``` The reason I am not using `TOP` is that it would drop ties on the highest average. You could use `TOP WITH TIES` instead.
This might do it (using the column names from your schema): ``` SELECT TOP 10 p.name, AVG(s.ScoreValue) AS avg_score FROM Product p INNER JOIN Score s ON s.ProductId = p.id GROUP BY p.id, p.name ORDER BY avg_score DESC ```
select top 10 with the highest average score
[ "", "sql", "sql-server", "select", "join", "" ]
I want to merge a list of lists, for example ``` list_of_lists = [['NA','NA','NA','0,678'], ['0.327','NA','NA','NA'], ...] ``` I want ``` merged = ['0.327','NA','NA','0,678'] ``` Please comment.
Use a list comprehension with a nested generator expression to pick the first non-`NA` element, together with `zip()`: ``` merged = [next((el for el in elements if el != 'NA'), 'NA') for elements in zip(*list_of_lists)] ``` Demo: ``` >>> list_of_lists = [['NA','NA','NA','0,678'], ['0.327','NA','NA','NA']] >>> [next((el for el in elements if el != 'NA'), 'NA') for elements in zip(*list_of_lists)] ['0.327', 'NA', 'NA', '0,678'] ``` The `next((...), default)` call expression will pick the *first* element that is not equal to `'NA'`, falling back to `'NA'` if no such element exists.
Assuming that the two lists never both have a non-`NA` value in the same position (as suggested by Martijn Pieters), you could use: ``` for i in range(len(l1)): l1[i] = l2[i] if l1[i] == 'NA' and l2[i] != 'NA' else l1[i] ``` Hope this helps!
how to merge list of lists without changing the order of sublists in python?
[ "", "python", "list", "merge", "" ]
Let's say I have a dictionary: ``` myDict = { 'title': 'a nice title', 'nice_list': [1,2,3,4,5,6,6,7,...,99999], 'nice_lists_last_item': 99999, } ``` I only want to append an item to `nice_list` if it is larger than the final item. What is quicker: 1. Using: `if new_element > nice_list[-1]` or 2. Using: `if new_element > nice_lists_last_item` Does method 1 have to scan through the whole list (and/or put all of `nice_list` into memory each time) to find that item? Which is quicker? (bearing in mind I intend to do a few billion of these comparisons) Method 2 would store the last element as its own distinct dict entry, so is that faster?
When in doubt, test: ``` >>> %timeit if 1 > myDict['nice_list'][-1]: 0 10000000 loops, best of 3: 110 ns per loop >>> %timeit if 1 > myDict['nice_lists_last_item']: 0 10000000 loops, best of 3: 68.8 ns per loop >>> nice_list = myDict['nice_list'] >>> %timeit if 1 > nice_list[-1]: 0 10000000 loops, best of 3: 62.6 ns per loop >>> nice_lists_last_item = myDict['nice_lists_last_item'] >>> %timeit if 1 > nice_lists_last_item: 0 10000000 loops, best of 3: 43.4 ns per loop ``` As you can see, accessing the dictionary value directly is faster than accessing the list from the dictionary and then accessing its last value. But accessing the last value of the list directly is faster still. This should be no surprise; Python lists know their own length and are implemented in memory as arrays, so finding the last item is as simple as subtracting 1 from the length and doing pointer arithmetic. Accessing dictionary keys is a bit slower because of the overhead of collision detection, but it's only slower by a few nanoseconds. And finally, if you really want to save a few more nanoseconds, you could store the last value in its own variable. The biggest slowdown comes when you do *both* a dictionary lookup and a list index.
Getting an item from a list is O(1) as noted [here](http://wiki.python.org/moin/TimeComplexity). Even so, storing the value explicitly will still be faster, because no matter how fast the lookup is, it's still going to be slower than not doing a lookup at all. (However, if you store the value explicitly, you'll have to update it when you add a new item to the list; whether the combined cost of updating it and checking it is more than the cost of grabbing the last item every time is something you'll have to benchmark yourself; it will likely depend on how often you wind up actually appending a new item.) Note that there is no question of "putting all of `nice_list` into memory". If you have a dict with a list in it, that entire list is already in memory. Looking up a value in it won't cause it to take up any more memory, but if you have billions of these lists, you will run out of memory before you even try to look anything up, because just creating the lists will use up too much memory.
List Lookup Performance - Does returning the last element of a list have to scan through the whole list?
[ "", "python", "list", "" ]
I have a post table which has a photo sub table, so one post can have many photos, one of the photo table column is [Priority]. I need to select records from the post table with only the photo that has a top priority in another table: So the result should be like: ``` Photo Post pic1 Article1 picX Article2 ``` Currently my results shows as ``` Photo Post pic1 Article1 pic2 Article1 picX Article2 ``` with this query: ``` SELECT [Photo], [PostTitle] FROM [Post] sp INNER JOIN [PostPhotos] spp ON (sp.AutoId = spp.PostId) WHERE sp.[AutoId] IN (SELECT [PostID] FROM [Favorites] WHERE [UserId] = 'UserXXX') ``` I tried a join query without success: ``` SELECT photo, [PostTitle], [AskingPrice] FROM (SELECT sp.[AutoId], [PostTitle] FROM [SellPost] sp WHERE sp.[AutoId] IN (SELECT [PostID] FROM [Favorites] WHERE [UserId] = 'UserId') )a full OUTER JOIN(SELECT TOP 1 [PostId], [photo] FROM [PostPhotos] spp WHERE PostId IN (SELECT [PostID] FROM [Favorites] WHERE [UserId] = 'UserXXX') ORDER BY [Priority] ASC )b on (a.AutoId = b.PostId) order by a.AutoId; ``` My Tables: ``` Table Post PostId, PostTitle Table PostPhotos AutoId, PostId, Photo, Priority --> 1 post can have many photos ``` Can someone please kindly assist. Thanks.
Sorry for the mistakes in my post, I think I have been thinking too deep, all I need was: ``` SELECT [p].[AutoId], [PostTitle], [Photo] FROM [Post] p INNER JOIN [PostPhotos] pp ON [p].[AutoId] = [pp].[PostId] WHERE p.[AutoId] IN (SELECT [PostID] FROM [Favorites] WHERE [UserId] = @UserId) AND [Priority] = 1 ```
Try with this hint: ``` SELECT [p].[PostId],[PostTitle],[Photo] FROM [Post] p INNER JOIN [PostPhotos] pp ON [p].[PostId] = [pp].[PostId] WHERE [p].[PostId] IN (SELECT TOP 1 [PostId] FROM [PostPhotos] ORDER BY [Priority] DESC) ```
SQL Server + Selecting records with sub tables items
[ "", "sql", "sql-server", "" ]
I am trying to list groups that have more graduate than undergraduate student members. I feel I have the concept behind my idea, but making the query is a little more difficult than a simple translation. Below is my code; I am currently getting a missing right parenthesis error at COUNT(student.career = 'GRD'). Thanks. ``` SELECT studentgroup.name COUNT(student.career = 'GRD') - COUNT(student.career = 'UGRD') AS Gradnum FROM studentgroup INNER JOIN memberof ON studentgroup.GID = memberof.GroupID INNER JOIN student ON memberof.StudentID = student.SID WHERE Gradnum > 1; ```
``` SELECT studentgroup.GID, max(studentgroup.name) FROM studentgroup INNER JOIN memberof ON studentgroup.GID = memberof.GroupID INNER JOIN student ON memberof.StudentID = student.SID GROUP BY studentgroup.GID HAVING SUM(CASE WHEN student.career = 'GRD' THEN 1 WHEN student.career = 'UGRD'THEN -1 ELSE 0 END) >0 ```
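The `HAVING SUM(CASE …)` filter in this answer can be verified against SQLite on a toy dataset (table names follow the question; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE studentgroup (GID INTEGER, name TEXT);
    CREATE TABLE memberof (GroupID INTEGER, StudentID INTEGER);
    CREATE TABLE student (SID INTEGER, career TEXT);
    INSERT INTO studentgroup VALUES (1, 'chess'), (2, 'robotics');
    INSERT INTO student VALUES (10, 'GRD'), (11, 'GRD'), (12, 'UGRD');
    -- chess: 2 grads, 1 undergrad; robotics: 1 grad, 1 undergrad
    INSERT INTO memberof VALUES (1, 10), (1, 11), (1, 12), (2, 10), (2, 12);
""")

rows = conn.execute("""
    SELECT sg.name
    FROM studentgroup sg
    JOIN memberof m ON sg.GID = m.GroupID
    JOIN student s ON m.StudentID = s.SID
    GROUP BY sg.GID, sg.name
    HAVING SUM(CASE WHEN s.career = 'GRD' THEN 1
                    WHEN s.career = 'UGRD' THEN -1 ELSE 0 END) > 0
""").fetchall()
print(rows)  # only 'chess' has more graduates than undergraduates
```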
``` SELECT studentgroup.name, SUM(CASE WHEN student.career = 'GRD' THEN 1 ELSE 0 END) - SUM(CASE WHEN student.career = 'UGRD' THEN 1 ELSE 0 END) AS Gradnum FROM studentgroup INNER JOIN memberof ON studentgroup.GID = memberof.GroupID INNER JOIN student ON memberof.StudentID = student.SID GROUP BY studentgroup.name HAVING SUM(CASE WHEN student.career = 'GRD' THEN 1 ELSE 0 END) - SUM(CASE WHEN student.career = 'UGRD' THEN 1 ELSE 0 END) > 0; ``` Note that the filter has to go in a `HAVING` clause (and repeat the aggregate expression, since the `Gradnum` alias can't be referenced there), because `WHERE` is evaluated before grouping.
SQL SELECT string Greater than using count()
[ "", "sql", "count", "" ]
I'm using [webbrowser](http://docs.python.org/2/library/webbrowser.html) so I can open an HTML page for a performance test I'm currently doing. This small piece of code is the beginning of the automation. The goal of the function `perf_measure` is to return how long it took to load the page at `url` entirely. ``` import webbrowser def perf_measure(url=""): try: webbrowser.open(url) except webbrowser.Error, e: print "It couldn't open the url: ", url url = "www.google.com" perf_measure(url) ``` How can I accomplish that? I just need the value, in seconds, like: ``` www.google.com Total time to load page in (secs): 2.641 ```
Do you need to use the web browser? As in do you need to view the result? Otherwise you could do this. ``` import urllib2 from time import time stream = urllib2.urlopen('http://www.rarlab.com/rar/winrar-x64-420.exe') start_time = time() output = stream.read() end_time = time() stream.close() print(end_time-start_time) ``` If you want a more human-readable result you can use [round](http://docs.python.org/2/library/functions.html#round). ``` print(round(end_time-start_time, 3)) ``` **Output** ``` 0.865000009537 # Without Round 0.865 # With Round ```
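On Python 3, `urllib2` becomes `urllib.request`; separating the timing from the download itself (a sketch, `timed` is my own name for it) also lets you reuse the stopwatch with any callable:

```python
import time

def timed(fetch):
    """Run a zero-argument callable and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fetch()
    return result, time.perf_counter() - start

# Usage with a real download (Python 3's urllib), not run here:
#   from urllib.request import urlopen
#   body, secs = timed(lambda: urlopen('http://www.google.com').read())
#   print('Total time to load page in (secs):', round(secs, 3))
```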
If you want to time the page load (including all of the resources it loads, rendering time etc.) in a real browser you can use [Selenium Webdriver](https://selenium-python.readthedocs.io/api.html). This will open your browser of choice, load the URL and then extract timings: ``` from selenium import webdriver def time_url(driver, url): driver.get(url) # Use the browser Navigation Timing API to get some numbers: # https://developer.mozilla.org/en-US/docs/Web/API/Navigation_timing_API navigation_start = driver.execute_script( "return window.performance.timing.navigationStart") dom_complete = driver.execute_script( "return window.performance.timing.domComplete") total_time = dom_complete - navigation_start print(f"Time {total_time}ms") driver = webdriver.Chrome() try: url = "https://httpbin.org/delay/" time_url(driver, url + '1') time_url(driver, url + '2') finally: driver.close() ``` There are many [other metrics](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) you can load if you want to know the render-time separately from the loading time etc.
measuring the time to load a page - Python
[ "", "python", "" ]
I am working with a csv file with 3 columns that looks like this: ``` timeStamp, value, label 15:22:57, 849, CPU pid=26298:percent 15:22:57, 461000, JMX MB 15:22:58, 28683, Disks I/O 15:22:58, 3369078, Memory pid=26298:unit=mb:resident 15:22:58, 0, JMX 31690:gc-time 15:22:58, 0, CPU pid=26298:percent 15:22:58, 503000, JMX MB ``` The `label` column contains distinct values (say a total of 5), which include spaces, colons and other special characters. What I am trying to achieve is to plot time against each metric (either on the same plot or on separate ones). I can do this with `matplotlib`, but I first need to group the `[timeStamps, value]` pairs according to the 'label'. I looked into the `csv.DictReader` to get the labels and the `itertools.groupby` to group by the 'label', but I am struggling to do this in a proper 'pythonic' way. Any suggestion?
You don't need `groupby`; you want to use [`collections.defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict) to collect series of `[timestamp, value]` pairs keyed by label: ``` from collections import defaultdict import csv per_label = defaultdict(list) with open(inputfilename, 'rb') as inputfile: reader = csv.reader(inputfile) next(reader, None) # skip the header row for timestamp, value, label in reader: per_label[label.strip()].append([timestamp.strip(), float(value)]) ``` Now `per_label` is a dictionary with labels as keys, and a list of `[timestamp, value]` pairs as values; I've stripped off whitespace (your input sample has a lot of extra whitespace) and turned the `value` column into floats. For your (limited) input sample that results in: ``` {'CPU pid=26298:percent': [['15:22:57', 849.0], ['15:22:58', 0.0]], 'Disks I/O': [['15:22:58', 28683.0]], 'JMX 31690:gc-time': [['15:22:58', 0.0]], 'JMX MB': [['15:22:57', 461000.0], ['15:22:58', 503000.0]], 'Memory pid=26298:unit=mb:resident': [['15:22:58', 3369078.0]]} ```
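Since the question asked about `itertools.groupby` specifically, here is what that approach would look like — note that the rows must first be sorted on the same key, because `groupby` only merges adjacent items (sketched over an in-memory list rather than the CSV file):

```python
from itertools import groupby
from operator import itemgetter

rows = [
    ('15:22:57', 849.0, 'CPU pid=26298:percent'),
    ('15:22:57', 461000.0, 'JMX MB'),
    ('15:22:58', 0.0, 'CPU pid=26298:percent'),
    ('15:22:58', 503000.0, 'JMX MB'),
]

# groupby only merges *adjacent* items, so sort by label first;
# the sort is stable, so timestamps keep their original order within a label.
rows.sort(key=itemgetter(2))
per_label = {label: [[ts, val] for ts, val, _ in group]
             for label, group in groupby(rows, key=itemgetter(2))}
print(per_label['JMX MB'])  # [['15:22:57', 461000.0], ['15:22:58', 503000.0]]
```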
You can try [pandas](http://pandas.pydata.org/) which provide a nice structure to dealing with data. Read the csv to the `DataFrame` ``` In [123]: import pandas as pd In [124]: df = pd.read_csv('test.csv', skipinitialspace=True) In [125]: df Out[125]: timeStamp value label 0 15:22:57 849 CPU pid=26298:percent 1 15:22:57 461000 JMX MB 2 15:22:58 28683 Disks I/O 3 15:22:58 3369078 Memory pid=26298:unit=mb:resident 4 15:22:58 0 JMX 31690:gc-time 5 15:22:58 0 CPU pid=26298:percent 6 15:22:58 503000 JMX MB ``` Group the `DataFrame` by `label` ``` In [154]: g = df.groupby('label') ``` Now you can get what you want ``` In [155]: g.get_group('JMX MB') Out[155]: timeStamp value label 1 15:22:57 461000 JMX MB 6 15:22:58 503000 JMX MB ```
reading a csv and grouping data by a column
[ "", "python", "csv", "" ]
I have a mySQL table named 'values' : ``` values: file | metadata | value ________________________ 01 | duration | 50s 01 | size | 150mo 01 | extension| avi 02 | duration | 20s 02 | extension| mkv 03 | duration | 20s 03 | extension| mpeg ``` An user will create his own query in SQL, it will look like this : ``` SELECT file FROM values WHERE (metadata='duration' AND value='20s') AND (metadata='extension' AND value='mkv') ``` I know this query is bad, but I can't change the 'values' table. I don't know how to get the file\_id with these conditions.. Any ideas ? Thanks in advance !
Like this: ``` SELECT file FROM ( SELECT file, MAX(CASE WHEN metadata = 'duration' THEN value END) AS duration, MAX(CASE WHEN metadata = 'extension' THEN value END) AS extension FROM `values` WHERE metadata IN ('duration', 'extension') GROUP BY file ) AS sub WHERE duration = '20s' AND extension = 'mkv'; ``` See it in action here: * [**SQL Fiddle Demo**](http://sqlfiddle.com/#!2/fc4e9/10) --- ## Update If you want to do this dynamically, and assuming that these metadata names are stored in a new separate table, then you can use the dynamic sql to do this. Something like this: ``` SET @sql = NULL; SET @cols = NULL; SELECT GROUP_CONCAT(DISTINCT CONCAT('MAX(IF(m.metadata_name = ''', m.metadata_name, ''', v.value, 0)) AS ', '''', m.metadata_name, '''') ) INTO @cols FROM Metadata AS m; SET @sql = CONCAT(' SELECT v.file, ', @cols ,' FROM `values` AS v INNER JOIN metadata AS m ON v.metadata_id = m.metadata_id GROUP BY v.file'); prepare stmt FROM @sql; execute stmt; ``` ## [Updated SQL Fiddle Demo](http://sqlfiddle.com/#!2/ddcac/5) Then you can put this inside a stored procedure and use it to display them, or query them the way you want.
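The `MAX(CASE …)` pivot from the first query works the same way in SQLite, which makes it easy to check on a small sample (schema follows the question; `values` must stay quoted because it is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE "values" (file TEXT, metadata TEXT, value TEXT);
    INSERT INTO "values" VALUES
        ('01', 'duration', '50s'),  ('01', 'extension', 'avi'),
        ('02', 'duration', '20s'),  ('02', 'extension', 'mkv'),
        ('03', 'duration', '20s'),  ('03', 'extension', 'mpeg');
''')

rows = conn.execute('''
    SELECT file FROM (
        SELECT file,
               MAX(CASE WHEN metadata = 'duration'  THEN value END) AS duration,
               MAX(CASE WHEN metadata = 'extension' THEN value END) AS extension
        FROM "values"
        GROUP BY file
    ) WHERE duration = '20s' AND extension = 'mkv'
''').fetchall()
print(rows)  # only file '02' matches both conditions
```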
try this ``` SELECT file FROM `values` WHERE (metadata='duration' AND value='20s') OR (metadata='extension' AND value='mkv') ``` * `VALUES` is a reserved keyword in MySQL, so use backticks around it **EDIT:** you should have a table like this ``` file | size | duration | extension | 1 | 150mo| 50s | avi 2 | null | 20s | mkv 3 | null | 20s | mpeg ``` then your query would be ``` select file from `values` where duration = '20s' and extension = 'mkv' ```
SQL with AND in the same column
[ "", "mysql", "sql", "conditional-statements", "" ]
Here are the columns in my table: ``` Id EmployeeId IncidentRecordedById DateOfIncident Comments TypeId Description IsAttenIncident ``` I would like to delete duplicate rows where `EmployeeId, DateOfIncident, TypeId` and `Description` are the same - just to clarify - I do want to keep one of them. I think I should be using the `OVER` clause with `PARTITION`, but I am not sure. Thanks
If you want to keep one row of each duplicate group you can use `ROW_NUMBER`. In this example I keep the row with the lowest `Id`: ``` WITH CTE AS ( SELECT rn = ROW_NUMBER() OVER( PARTITION BY employeeid, dateofincident, typeid, description ORDER BY Id ASC), * FROM dbo.TableName ) DELETE FROM cte WHERE rn > 1 ```
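Deleting through a CTE like this is SQL Server-specific; to sanity-check the "keep the lowest Id per duplicate group" idea portably, here is a SQLite sketch using the equivalent `NOT IN (SELECT MIN(Id) …)` form (sample data invented, trimmed to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE incident (Id INTEGER, EmployeeId INTEGER,
                           DateOfIncident TEXT, TypeId INTEGER,
                           Description TEXT);
    INSERT INTO incident VALUES
        (1, 7, '2013-01-01', 2, 'late'),
        (2, 7, '2013-01-01', 2, 'late'),      -- duplicate of Id 1
        (3, 8, '2013-01-02', 1, 'absent');
""")

# Keep the row with the lowest Id in each duplicate group, delete the rest.
conn.execute("""
    DELETE FROM incident WHERE Id NOT IN (
        SELECT MIN(Id) FROM incident
        GROUP BY EmployeeId, DateOfIncident, TypeId, Description)
""")
rows = conn.execute("SELECT Id FROM incident ORDER BY Id").fetchall()
print(rows)  # rows 1 and 3 survive
```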
Use this query without a CTE: ``` delete a from (select id, name, place, ROW_NUMBER() over (partition by id, name, place order by id) row_Count from dup_table) a where a.row_Count > 1 ```
How do I delete duplicate rows in SQL Server using the OVER clause?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a list: `[(160, 177), (162, 169), (163, 169), (166, 173), (166, 176), (166, 177), (169, 176), (169, 177)]` I want the inverse of this list, so it becomes: `[(177, 160), (169, 162), (169, 163), (173, 166), (176, 166), (177, 166), (176, 169), (177, 169)]` I think you can do something like `list1[:-1]` or something like that.
In my opinion it's cleaner to reverse the pairs explicitly: ``` [(snd, fst) for fst, snd in thelist] ```
``` a=[(160, 177), (162, 169), (163, 169), (166, 173), (166, 176), (166, 177), (169, 176), (169, 177)] b=[e[::-1] for e in a] print b ``` Runnable code in this Bunk - <http://codebunk.com/bunk#-It1THfMsVDWUQMq8eRT>
inverse of a list python [(160,177),(198,123)]
[ "", "python", "list", "" ]
I have searched the web and various forums but I cannot figure out why this won't work. My database is made up of the following tables: ``` CREATE TABLE CUSTOMER( custid Number(4), cfirstname varchar2(30), csurname varchar2(20) NOT NULL, billingaddr varchar2(30), cgender varchar2(1), CONSTRAINT custpk PRIMARY KEY (custid), CONSTRAINT genderconst CHECK(cgender in ('M','F','m','f')) ); CREATE TABLE PRODUCT( prodid Number(4), prodname varchar2(30), currentprice Number(6,2), CONSTRAINT cprice_chk CHECK(currentprice >= 0 AND currentprice <=5000 ), CONSTRAINT prodpk PRIMARY KEY (prodid), CONSTRAINT pricepos CHECK((currentprice >= 0)) ); CREATE TABLE SALESPERSON( spid Number(4), spfirstname varchar2(30), spsurname varchar2(30), spgender varchar2(1), CONSTRAINT salespk PRIMARY KEY (spid) ); CREATE TABLE SHOPORDER( ordid Number(4), deliveryaddress varchar2(30), custid Number(4) NOT NULL, spid Number(4) NOT NULL, CONSTRAINT orderpk PRIMARY KEY (ordid), CONSTRAINT orderfk1 FOREIGN KEY (custid) REFERENCES CUSTOMER(custid), CONSTRAINT orderfk2 FOREIGN KEY (spid) REFERENCES SALESPERSON(spid) ); CREATE TABLE ORDERLINE( qtysold Number(4), qtydelivered Number(4), saleprice Number (6,2), ordid Number(4) NOT NULL, prodid Number(4) NOT NULL, CONSTRAINT qty_chk CHECK (qtydelivered >= 0 AND qtydelivered <=99), CONSTRAINT price_chk CHECK(saleprice >= 0 AND saleprice <=5000 ), CONSTRAINT linefk1 FOREIGN KEY (ordid) REFERENCES SHOPORDER(ordid), CONSTRAINT linefk2 FOREIGN KEY (prodid) REFERENCES PRODUCT(prodid) ); ``` And I am using an insert statement to insert the following: ``` INSERT INTO SHOPORDER(ordid, deliveryaddress, spid) VALUES (41, NULL, 23); ``` Whether I use '' or NULL it gives me the error: ORA-01400: cannot insert NULL into ("S9710647"."SHOPORDER"."CUSTID"); My issue is that I have not set deliveryaddress as a primary key, nor is it a foreign key, nor does it have any NOT NULL CONSTRAINTS. Is there a factor that I am missing here?
The majority of forums have had people with problems relating to constraints. I cannot see any conflicting constraints. Cheers
You're only inserting the columns `ordid`, `deliveryaddress` and `spid` into `SHOPORDER`, which means the others will default to `NULL`. However, you've declared `custid` as `NOT NULL`, so that's not allowed. You can actually tell what the complaint is by looking at the error message: ``` ORA-01400: cannot insert NULL into ("S9710647"."SHOPORDER"."CUSTID"); ^^^^^^ ``` It's clearly having trouble with the `CUSTID` column there, and you *know* you haven't explicitly set that, so it must be the default value causing you grief. You can fix it by either inserting a specific value into that column as well, or by giving it a non-NULL default value, though you'll have to ensure the default exists in the `CUSTOMER` table or the `orderfk1` foreign key constraint will fail.
The problem is that this: ``` INSERT INTO SHOPORDER(ordid, deliveryaddress, spid) VALUES (41, NULL, 23); ``` uses the default values for all columns that you don't specify an explicit value for, so it's equivalent to this: ``` INSERT INTO SHOPORDER(ordid, deliveryaddress, custid, spid) VALUES (41, NULL, NULL, 23); ``` which violates the `NOT NULL` constraint on `custid`.
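The failure mode is not Oracle-specific; SQLite reproduces it with the same schema, which makes the explanation easy to verify (a sketch, trimmed to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE shoporder (
        ordid INTEGER PRIMARY KEY,
        deliveryaddress TEXT,
        custid INTEGER NOT NULL,
        spid INTEGER NOT NULL)
""")

try:
    # custid is omitted, so it defaults to NULL -> constraint violation.
    conn.execute("INSERT INTO shoporder (ordid, deliveryaddress, spid) "
                 "VALUES (41, NULL, 23)")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: shoporder.custid

# Supplying custid explicitly succeeds.
conn.execute("INSERT INTO shoporder VALUES (41, NULL, 5, 23)")
```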
ORACLE - Cannot insert a NULL value to a NON-Primary Key
[ "", "sql", "oracle", "ora-01400", "" ]
I have a dictionary of addresses with their usernames and passwords listed that looks something like this: ``` address_dict = {'address1':{'username':'abc', 'password':'123'}, 'address2':{'username':'xyz', 'password':'456'}} ``` Is there a way to make this dictionary accessible for multiple scripts to read from and possibly write to? Like save it as seperate python file and import it or something?
Yes, you can do just that: ``` # module.py address_dict = {'address1':{'username':'abc', 'password':'123'}, 'address2':{'username':'xyz', 'password':'456'}} # main.py import module print(module.address_dict) ``` If you don't like the `module.` prefix, you could import the dictionary like so: ``` from module import address_dict print(address_dict) ```
To access it and modify it at runtime, you can just define it in a module and then import it. But if you want your changes to be persistent (i.e. see the changed version next time you run the script) you need something else, like a database. The simplest to use in this case would probably be the [`shelve`](http://docs.python.org/2/library/shelve.html) module, which is based on `pickle`. You can also use `pickle` itself if you wish.
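For the persistent variant, a minimal `shelve` sketch (the file name `addresses` is arbitrary; `shelve` may add an extension depending on the platform's dbm backend):

```python
import shelve

# Script A: store the dictionary on disk.
with shelve.open('addresses') as db:
    db['address1'] = {'username': 'abc', 'password': '123'}

# Script B (could run later, or be a different script): changes persist.
with shelve.open('addresses') as db:
    print(db['address1']['username'])  # abc
```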
Can I make a dictionary available to multiple python scripts?
[ "", "python", "" ]
I have a large number of images (hundreds of thousands) and, for each one, I need to say whether or not it has a watermark in the top right corner. The watermark is always the same and is in the same position. It takes the form of a ribbon with a symbol and some text. I'm looking for a simple and fast way to do this that, ideally, doesn't use SciPy (as it's not available on the server I'm using -- but it can use NumPy). So far, I've tried using PIL and the crop function to isolate the area of the image where the watermark should be and then compared the histograms with an RMS function (see <http://snipplr.com/view/757/compare-two-pil-images-in-python/>). That doesn't work very well as there are lots of errors in both directions. Any ideas would be much appreciated. Thanks
Another possibility is to use machine learning. My background is natural language processing (not computer vision), but I tried creating a training and testing set using the description of your problem and it seems to work (100% accuracy on unseen data). ### Training set The training set consisted of the same images with the watermark (positive example), and without the watermark (negative example). ### Testing set The testing set consists of images that were not in the training set. ### Example data If interested, you can try it with the [example training and testing images](http://bwbaugh.com/stack-overflow/16222178_watermark.tar). ### Code: Full version available [as a gist](https://gist.github.com/bwbaugh/5463151). Excerpt below: ``` import glob from classify import MultinomialNB from PIL import Image TRAINING_POSITIVE = 'training-positive/*.jpg' TRAINING_NEGATIVE = 'training-negative/*.jpg' TEST_POSITIVE = 'test-positive/*.jpg' TEST_NEGATIVE = 'test-negative/*.jpg' # How many pixels to grab from the top-right of image. CROP_WIDTH, CROP_HEIGHT = 100, 100 RESIZED = (16, 16) def get_image_data(infile): image = Image.open(infile) width, height = image.size # left upper right lower box = width - CROP_WIDTH, 0, width, CROP_HEIGHT region = image.crop(box) resized = region.resize(RESIZED) data = resized.getdata() # Convert RGB to simple averaged value. data = [sum(pixel) / 3 for pixel in data] # Combine location and value. 
values = [] for location, value in enumerate(data): values.extend([location] * value) return values def main(): watermark = MultinomialNB() # Training count = 0 for infile in glob.glob(TRAINING_POSITIVE): data = get_image_data(infile) watermark.train((data, 'positive')) count += 1 print 'Training', count for infile in glob.glob(TRAINING_NEGATIVE): data = get_image_data(infile) watermark.train((data, 'negative')) count += 1 print 'Training', count # Testing correct, total = 0, 0 for infile in glob.glob(TEST_POSITIVE): data = get_image_data(infile) prediction = watermark.classify(data) if prediction.label == 'positive': correct += 1 total += 1 print 'Testing ({0} / {1})'.format(correct, total) for infile in glob.glob(TEST_NEGATIVE): data = get_image_data(infile) prediction = watermark.classify(data) if prediction.label == 'negative': correct += 1 total += 1 print 'Testing ({0} / {1})'.format(correct, total) print 'Got', correct, 'out of', total, 'correct' if __name__ == '__main__': main() ``` ### Example output ``` Training 1 Training 2 Training 3 Training 4 Training 5 Training 6 Training 7 Training 8 Training 9 Training 10 Training 11 Training 12 Training 13 Training 14 Testing (1 / 1) Testing (2 / 2) Testing (3 / 3) Testing (4 / 4) Testing (5 / 5) Testing (6 / 6) Testing (7 / 7) Testing (8 / 8) Testing (9 / 9) Testing (10 / 10) Got 10 out of 10 correct [Finished in 3.5s] ```
Is the position of the watermark exact? How is the watermark being applied to the background image? I'll assume the watermark is a partial add or multiply function. The watermarked image is probably calculated as such: ``` resultPixel = imagePixel + (watermarkPixel*mixinValue) ``` mixinValue would be 0.0-1.0, so you could complete the mix by reapplying the watermark with a multiplier of (1-mixinValue). This should result in pixels that match the watermark. Just test the color of the result image against the original watermark. ``` testPixel = resultPixel + (watermarkPixel*(1-mixinValue)) assert testPixel == watermarkPixel ``` Of course compression of the watermarked image will probably cause some variance in your testPixel.
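Before reaching for machine learning, the crop-and-compare idea can also be made to work by comparing pixels directly (not histograms) against a reference crop of the watermark corner. A NumPy-only sketch on synthetic stand-in data; the `threshold` value is made up and would need tuning on real images:

```python
import numpy as np

def has_watermark(corner, reference, threshold=30.0):
    """Return True if the corner crop is close (in RMS terms) to the
    reference watermark crop. threshold is a guess that needs tuning."""
    diff = corner.astype(float) - reference.astype(float)
    rms = float(np.sqrt(np.mean(diff ** 2)))
    return rms < threshold

# Synthetic stand-ins for real crops taken with PIL's crop().
reference = np.full((16, 16), 200.0)  # known watermark crop
watermarked = reference + np.random.default_rng(1).normal(0, 5, (16, 16))
clean = np.full((16, 16), 60.0)       # plain background, no ribbon

print(has_watermark(watermarked, reference))  # True
print(has_watermark(clean, reference))        # False
```

Comparing pixel-wise rather than via histograms keeps positional information, which is exactly what a fixed-position watermark provides.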
Detecting a Specific Watermark in a Photo with Python (without SciPy)
[ "", "python", "image-processing", "computer-vision", "python-imaging-library", "image-comparison", "" ]
I'm working through "Python for Data Analysis" and I don't understand a particular piece of functionality. Adding two pandas Series objects will automatically align the indexed data, but if one object does not contain a given index, the result for that index is NaN. For example, from the book: ``` a = Series([35000,71000,16000,5000],index=['Ohio','Texas','Oregon','Utah']) b = Series([NaN,71000,16000,35000],index=['California', 'Texas', 'Oregon', 'Ohio']) ``` Result: ``` In [63]: a Out[63]: Ohio 35000 Texas 71000 Oregon 16000 Utah 5000 In [64]: b Out[64]: California NaN Texas 71000 Oregon 16000 Ohio 35000 ``` When I add them together I get this... ``` In [65]: a+b Out[65]: California NaN Ohio 70000 Oregon 32000 Texas 142000 Utah NaN ``` So why is the Utah value NaN and not 5000? It seems to me that 5000+NaN should be 5000. What gives? I'm missing something, please explain. **Update:** ``` In [92]: # fill NaN with zero b = b.fillna(0) b Out[92]: California 0 Texas 71000 Oregon 16000 Ohio 35000 In [93]: a Out[93]: Ohio 35000 Texas 71000 Oregon 16000 Utah 5000 In [94]: # a is still good a+b Out[94]: California NaN Ohio 70000 Oregon 32000 Texas 142000 Utah NaN ```
Pandas does not assume that 5000+NaN=5000, but it is easy to ask it to do that: ``` a.add(b, fill_value=0) ```
The default approach is to assume that any computation involving NaN gives NaN as the result. Anything plus NaN is NaN, anything divided by NaN is NaN, etc. If you want to fill the NaN with some value, you have to do that explicitly (as Dan Allan showed in his answer).
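Putting the two answers together, here is the example rerun with `add(fill_value=0)`, which treats an index *missing* from one Series as 0, while a genuine NaN value (California in `b`) still propagates:

```python
import numpy as np
import pandas as pd

a = pd.Series([35000, 71000, 16000, 5000],
              index=['Ohio', 'Texas', 'Oregon', 'Utah'])
b = pd.Series([np.nan, 71000, 16000, 35000],
              index=['California', 'Texas', 'Oregon', 'Ohio'])

result = a.add(b, fill_value=0)

print(result['Utah'])                  # 5000.0  (missing in b, filled with 0)
print(result['Ohio'])                  # 70000.0
print(np.isnan(result['California']))  # True (NaN in b, not merely missing)
```

The distinction matters: `fill_value` substitutes for *absent* labels during alignment, not for NaN values that are actually stored in a Series.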
Adding two Series with NaNs
[ "", "python", "pandas", "" ]
I have 2 tables in my database, fleets and personnel_onboard_fleets. I am trying to get a list of all of the fleets (which works) and count how many personnel are onboard each fleet (which doesn't). The problem is that fleets without any personnel onboard don't show up in the results. ``` SELECT f.*, count(pof.ID) AS onboard FROM fleets f, personnel_onboard_fleets pof WHERE f.fleet_ID = pof.fleet_ID ORDER BY f.status ``` I am expecting 2 results, one fleet with 2 people on board and one with zero people onboard. However, I only get one result. I have tried to use the following ``` SELECT f.*, IFNULL(COUNT(pof.ID), 0) AS onboard FROM fleets f, personnel_onboard_fleets pof WHERE f.fleet_ID = pof.fleet_ID ORDER BY f.status ``` I can't really see what is wrong with the query; is there anything else that needs to be added to show fleets with 0 persons onboard? My original query before the count shows all fleets fine, so I know it is something to do with the count. This is driving me crazy! Any help would be much appreciated!!
Try: ``` SELECT f.fleet_id, Count(pof.id) AS onboard FROM fleets f LEFT JOIN personnel_onboard_fleets pof ON f.fleet_id = pof.fleet_id GROUP BY f.fleet_id ORDER BY f.status; ```
`COUNT()` is an aggregate function, it returns the total number of rows. If you specify a field name (as you did), the number of non-`NULL` values is returned. You need to use GROUP BY. This aggregates rows based on a field. And then, `COUNT()` returns the total for each group. ``` SELECT f.fleet_ID, count(pof.ID) AS onboard FROM fleets f LEFT JOIN personnel_onboard_fleets pof ON f.fleet_ID = pof.fleet_ID GROUP BY f.fleet_ID ORDER BY f.status; ```
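The behaviour difference is easy to reproduce with Python's built-in sqlite3 module (table names from the question; the sample data is mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE fleets (fleet_ID INTEGER, status INTEGER);
    CREATE TABLE personnel_onboard_fleets (ID INTEGER, fleet_ID INTEGER);
    INSERT INTO fleets VALUES (1, 0), (2, 0);
    INSERT INTO personnel_onboard_fleets VALUES (10, 1), (11, 1);
""")

# Comma join + WHERE behaves like an INNER JOIN: the empty fleet disappears.
inner = conn.execute("""
    SELECT f.fleet_ID, COUNT(pof.ID)
    FROM fleets f, personnel_onboard_fleets pof
    WHERE f.fleet_ID = pof.fleet_ID
    GROUP BY f.fleet_ID
""").fetchall()

# LEFT JOIN keeps every fleet; COUNT(pof.ID) ignores the NULLs.
left = conn.execute("""
    SELECT f.fleet_ID, COUNT(pof.ID)
    FROM fleets f
    LEFT JOIN personnel_onboard_fleets pof ON f.fleet_ID = pof.fleet_ID
    GROUP BY f.fleet_ID
""").fetchall()

print(sorted(inner))  # [(1, 2)]
print(sorted(left))   # [(1, 2), (2, 0)]
```

No `IFNULL` is needed: `COUNT(pof.ID)` already yields 0 for a fleet whose joined rows are all NULL.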
COUNT ifnull mysql query
[ "", "mysql", "sql", "count", "" ]
I'm trying to connect to a server via SSH from a Python script. Currently I'm trying out paramiko. I set up a public key between the client and the server so I don't need a password. I'm using the following code at the moment: ``` ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('192.168.56.102', 'oracle', None, '~/.ssh/id_rsa') stdin, stdout, stderr = ssh.exec_command('ls') ``` But when running this I'm getting the error > [Errno -8] Servname not supported for ai_socktype Any help?
Basically, this complains about ai_socktype if you skip any of the parameters but don't explicitly name the parameters after the one you skipped. In this case the problem is that 'oracle' is passed as the second positional argument, which is `port`, not `username`; it needs to be written as `username='oracle'`. That is also why adding the port number fixes it (then no parameters are being skipped).
You can just specify the parameters. Don't need to add the port number (if it's ok with the default one). From the definition of [connect](http://docs.paramiko.org/en/1.13/api/client.html#paramiko.client.SSHClient.connect), your example would look like this: ``` ssh.connect('192.168.56.102', username='oracle', password=None, key_filename='~/.ssh/id_rsa') ```
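The underlying pitfall is plain positional-versus-keyword argument binding. A stand-in function with a paramiko-like parameter order (hypothetical, not the real API) shows it without opening any connection:

```python
def connect(hostname, port=22, username=None, password=None, key_filename=None):
    """Stand-in with a paramiko-like parameter order (not the real API)."""
    return hostname, port, username, password, key_filename

# Positional call: 'oracle' lands in port, not username -- the bug in question.
_, port, user, _, _ = connect('192.168.56.102', 'oracle', None, '~/.ssh/id_rsa')
print(port, user)   # oracle None

# Keyword arguments skip port cleanly.
_, port, user, _, key = connect('192.168.56.102', username='oracle',
                                key_filename='~/.ssh/id_rsa')
print(port, user)   # 22 oracle
```

Once an argument is misassigned like this, the library tries to resolve 'oracle' as a service/port name, which is what produces the ai_socktype error.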
Errno -8 Servname not supported when using Paramiko, Python
[ "", "python", "python-2.7", "paramiko", "" ]
I'm learning Python, and the simple ways it handles lists are presented as an advantage. Sometimes they are, but look at this: ``` >>> numbers = [20,67,3,2.6,7,74,2.8,90.8,52.8,4,3,2,5,7] >>> numbers.remove(max(numbers)) >>> max(numbers) 74 ``` A very easy, quick way of obtaining the second largest number from a list. Except that the easy list processing helps write a program that runs through the list twice over, to find the largest and then the 2nd largest. It's also destructive - I need two copies of the data if I wanted to keep the original. We need: ``` >>> numbers = [20,67,3,2.6,7,74,2.8,90.8,52.8,4,3,2,5,7] >>> if numbers[0]>numbers[1]: ... m, m2 = numbers[0], numbers[1] ... else: ... m, m2 = numbers[1], numbers[0] ... >>> for x in numbers[2:]: ... if x>m2: ... if x>m: ... m2, m = m, x ... else: ... m2 = x ... >>> m2 74 ``` Which runs through the list just once, but isn't terse and clear like the previous solution. So: is there a way, in cases like this, to have both? The clarity of the first version, but the single run through of the second?
Since @OscarLopez and I have different opinions on what the second largest means, I'll post the code according to my interpretation and in line with the first algorithm provided by the questioner. ``` def second_largest(numbers): count = 0 m1 = m2 = float('-inf') for x in numbers: count += 1 if x > m2: if x >= m1: m1, m2 = x, m1 else: m2 = x return m2 if count >= 2 else None ``` (Note: Negative infinity is used here instead of `None` since `None` has different sorting behavior in Python 2 and 3 – see [Python - Find second smallest number](https://stackoverflow.com/questions/26779618); a check for the number of elements in `numbers` makes sure that negative infinity won't be returned when the actual answer is undefined.) If the maximum occurs multiple times, it may be the second largest as well. Another thing about this approach is that it works correctly if there are less than two elements; then there is no second largest. Running the same tests: ``` second_largest([20,67,3,2.6,7,74,2.8,90.8,52.8,4,3,2,5,7]) => 74 second_largest([1,1,1,1,1,2]) => 1 second_largest([2,2,2,2,2,1]) => 2 second_largest([10,7,10]) => 10 second_largest([1,1,1,1,1,1]) => 1 second_largest([1]) => None second_largest([]) => None ``` **Update** I restructured the conditionals to drastically improve performance; almost by a 100% in my testing on random numbers. The reason for this is that in the original version, the `elif` was always evaluated in the likely event that the next number is not the largest in the list. In other words, for practically every number in the list, two comparisons were made, whereas one comparison mostly suffices – if the number is not larger than the second largest, it's not larger than the largest either.
You could use the [heapq](https://docs.python.org/3.6/library/heapq.html) module: ``` >>> el = [20,67,3,2.6,7,74,2.8,90.8,52.8,4,3,2,5,7] >>> import heapq >>> heapq.nlargest(2, el) [90.8, 74] ``` And go from there...
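Indexing the `nlargest` result gives the runner-up directly. Note that a repeated maximum counts as its own second place, which matches the first answer's reading of "second largest":

```python
import heapq

numbers = [20, 67, 3, 2.6, 7, 74, 2.8, 90.8, 52.8, 4, 3, 2, 5, 7]
second = heapq.nlargest(2, numbers)[1]
print(second)  # 74

# A duplicated maximum is its own runner-up:
print(heapq.nlargest(2, [10, 7, 10])[1])  # 10
```

`nlargest(k, it)` runs in O(n log k), so for fixed k = 2 it is effectively a single linear pass, without mutating the input.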
Get the second largest number in a list in linear time
[ "", "python", "performance", "" ]
``` SELECT * FROM address WHERE name LIKE 'a%' OR name LIKE '% a%' LIMIT 10 ``` This query retrieves `name`s that start with `a` either at the beginning `'a%'` or in a word in the middle `'% a%'`. How can I retrieve the results from `LIKE 'a%'` first, then `LIKE '% a%'`?
add `ORDER BY` clause, ``` SELECT * FROM address WHERE name LIKE 'a%' OR name LIKE '% a%' ORDER BY CASE WHEN name LIKE 'a%' THEN 0 ELSE 1 END LIMIT 10 ```
Here it is: ``` SELECT t1.* FROM ( SELECT * FROM address WHERE name LIKE 'a%' LIMIT 10 ) t1 WHERE t1.name LIKE '% a%' ```
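The `ORDER BY CASE` approach can be sanity-checked with the sqlite3 module (the sample names are mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE address (name TEXT)")
conn.executemany("INSERT INTO address VALUES (?)",
                 [('anna',), ('big apple',), ('arthur',), ('bob',)])

rows = conn.execute("""
    SELECT name FROM address
    WHERE name LIKE 'a%' OR name LIKE '% a%'
    ORDER BY CASE WHEN name LIKE 'a%' THEN 0 ELSE 1 END
    LIMIT 10
""").fetchall()
names = [r[0] for r in rows]
print(names)  # names starting with 'a' first, then 'big apple'; 'bob' excluded
```

The `CASE` expression assigns each matching row a sort key (0 for the preferred pattern, 1 otherwise) in a single query, instead of running two queries and gluing the results together.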
Two LIKE conditions but retrieving from one first then the other
[ "", "mysql", "sql", "" ]
How can I count the lending comments for each lending? (The comments are in another table called "LendingComments", with a reference column called "LendingId".) ``` SELECT LendingStatus.Status, Products.Productname, Products.Serial_number, Deposits.Amount, Lendings.DeliveryDate, Lendings.Id AS LendingId, Products.Id AS ProductId FROM Lendings LEFT JOIN Products ON Lendings.ProductId = Products.Id LEFT JOIN LendingStatus ON Lendings.StatusId = LendingStatus.Id LEFT JOIN Deposits ON Lendings.DepositId = Deposits.Id WHERE PersonId = 561 ORDER BY DeliveryDate DESC ```
Maybe like this (if I understand the question well enough) ``` SELECT LendingStatus.Status, Products.Productname, Products.Serial_number, Deposits.Amount, Lendings.DeliveryDate, Lendings.Id AS LendingId, Products.Id AS ProductId, LendingComments.NumLendingComments FROM Lendings LEFT JOIN Products ON Lendings.ProductId = Products.Id LEFT JOIN LendingStatus ON Lendings.StatusId = LendingStatus.Id LEFT JOIN Deposits ON Lendings.DepositId = Deposits.Id OUTER APPLY ( SELECT COUNT(*) AS NumLendingComments FROM LendingComments PL WHERE PL.LendingId = Lendings.Id ) AS LendingComments WHERE PersonId = 561 ORDER BY DeliveryDate DESC ```
Maybe this helps: ``` SELECT CommentCount = Sum(lc.comments) OVER ( partition BY lc.id), lendingstatus.status, products.productname, products.serial_number, deposits.amount, lendings.deliverydate, lendings.id AS LendingId, products.id AS ProductId FROM lendings LEFT JOIN products ON lendings.productid = products.id LEFT JOIN lendingstatus ON lendings.statusid = lendingstatus.id LEFT JOIN deposits ON lendings.depositid = deposits.id LEFT JOIN LendingComments lc ON lc.LendingId = lendings.id WHERE personid = 561 ORDER BY deliverydate DESC ``` However, you have not shown the `PersonLendings` table, have you?
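`OUTER APPLY` is SQL Server-specific; the portable equivalent is a correlated subquery, shown here with the sqlite3 module and a pared-down schema (sample data is mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Lendings (Id INTEGER, PersonId INTEGER);
    CREATE TABLE LendingComments (Id INTEGER, LendingId INTEGER);
    INSERT INTO Lendings VALUES (1, 561), (2, 561);
    INSERT INTO LendingComments VALUES (10, 1), (11, 1), (12, 99);
""")

rows = conn.execute("""
    SELECT l.Id,
           (SELECT COUNT(*) FROM LendingComments c
             WHERE c.LendingId = l.Id) AS NumComments
    FROM Lendings l
    WHERE l.PersonId = 561
""").fetchall()
print(sorted(rows))  # [(1, 2), (2, 0)]
```

The subquery is evaluated once per lending row, so lendings with no comments correctly get 0 rather than dropping out of the result.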
Count from another table with join
[ "", "sql", "sql-server", "join", "count", "" ]
I'm working with an ItemNumber field in a legacy system that is 99% numbers, but there are a few records that contain letters. The numbers are all padded with leading zeros so I thought I would just cast them as bigint's to solve this problem, but of course it throws an error when it gets to the records with letters in them. I thought the following case statement would have worked, but it still throws the error. Why in the world is SQL Server evaluating the cast if the isnumeric(itemnumber) = 1 condition isn't true? ``` select case when isnumeric(itemnumber) = 1 then cast(itemnumber as bigint) else itemnumber end ItemNumber from items ``` And what's the best workaround?
Your expression tries to convert a `VARCHAR` value into a `BIGINT` if it's numeric and leave the value as is if it's not. Since you are mixing datatypes in the `CASE` statement, `SQL Server` tries to cast them all into `BIGINT` but fails on non-numeric values. If you just want to omit non-numeric values, get rid of the `ELSE` clause: ``` SELECT CASE ISNUMERIC(itemnumber) WHEN 1 THEN CAST(itemnumber AS BIGINT) END FROM items ```
Maybe because: > ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($). For a complete list of currency symbols, see money and smallmoney (Transact-SQL). <http://msdn.microsoft.com/en-us/library/ms186272.aspx>
Does SQL Server really evaluate every 'then' clause in a case expression?
[ "", "sql", "sql-server", "" ]
I'm trying to make a simple, local database using Python where I can set values, get values, etc and I keep getting this error: ``` #Simple Database #Functions include Set[name][value] # Get[name] # Unset[name] # Numequalto[value] # End # Begin, Rollback, Commit varlist = [] ops = [] class item: def __init__(self,name,value): self.name = name self.value = value class db: def __init__(self): self.varlist = [] self.ops = [] def Set(self,nm,val): changed = False #Bool for keeping track of update for item in varlist: #run through current list if item.name == nm: #If the name is found item.value = val #update the value changed = True #found it break #exit if found if not changed: newitem = item() #Create a new one and append it newitem.name = nm newitem.value = val varlist.append(newitem) def Get(key): for item in varlist: if item.name == key: return item.value break def Unset(key): for item in varlist: if item.name == key: item.value = -1 break def Numequalto(key): count = 0 for item in varlist: if item.value == key: count+=1 return count def main(): newdb = db() varlist=[] comm = "" #prime it while comm.lower() != "end": comm = input("Command: ") if comm.lower() == "begin": print("----SESSION START---") while comm.lower() != "end": comm = input("Command: ") part = [] for word in comm.split(): part.append(word.lower()) print(part) if part[0].lower()=="set": newdb.Set(part[1],part[2]) print(varlist) elif part[0].lower()=="get": gotten = Get(part[1]) print(gotten) elif part[0].lower()=="unset": Unset(part[1]) elif part[0].lower()=="numequalto": numequal = Numequalto(part[1]) print(numequal) print("Finished") else: print("--ERROR: Must BEGIN--") if __name__ == "__main__": main() ``` When I run this, and try to create a new item in my list using the command ``` set a 25 ``` I get this error: ``` Traceback (most recent call last): File "/Volumes/CON/LIFE/SimpleDatabase.py", line 81, in <module> main() File "/Volumes/CON/LIFE/SimpleDatabase.py", line 65, in main 
newdb.Set(part[1],part[2]) File "/Volumes/CON/LIFE/SimpleDatabase.py", line 27, in Set newitem = item() #Create a new one and append it UnboundLocalError: local variable 'item' referenced before assignment ``` Any help would be much appreciated. I'm pretty new to Python
The line ``` for item in varlist: ``` shadows the class `item` with a local variable. So that when you get to your `item()` it thinks you are trying to call the local variable instead of the class. You can tell that your class `item` is never being constructed because it would fail as you are passing no parameters to the `__init__` Also you should really call your class `Item`. Once I did that I got the constructor error as expected.
You have a few issues with your code: 1. You are shadowing the class `item` with a local variable of the same name. 2. You are using `varlist` instead of `self.varlist`. 3. Some of your class methods don't receive a `self` first argument. 4. Also, there is a strong convention in Python to name classes with a capital first letter.
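Applying both fixes, renaming the class so the loop variable no longer shadows it, and keeping state on `self`, gives a working core (a trimmed sketch, not the full script):

```python
class Item:
    """Capitalised so a loop variable named `item` cannot shadow it."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Db:
    def __init__(self):
        self.varlist = []

    def set(self, name, value):
        for item in self.varlist:      # instance state, not a global
            if item.name == name:
                item.value = value     # update in place if the name exists
                return
        self.varlist.append(Item(name, value))

    def get(self, key):
        for item in self.varlist:
            if item.name == key:
                return item.value      # returns None if the key is absent

db = Db()
db.set('a', '25')
db.set('a', '30')
print(db.get('a'))       # 30
print(len(db.varlist))   # 1  (updated in place, not appended twice)
```

Constructing `Item(name, value)` with its two required arguments also removes the original `item()` call that had no parameters.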
Making a database with custom commands in python
[ "", "python", "database", "class", "" ]
I wish to create a 'find' procedure **myself**, which is capable of finding a sub-string in a string, and it also should be able to read a string backward and give the position of a match - just like the original find function in Python. I am unable to figure out what logic I should use, and I don't know how the original find function works. I just started to use Python and am fairly new to programming as well. Any guidance shall be highly appreciated!
Here is a solution that returns all the matches in a list, and `rfind` is defined by running the original `find` backwards. You can use it with integers or floats also. You can easily modify it in order to return only the first match. ``` def find( x, string, backward = False, ignore_case = False ): x = str(x) string = str(string) if ignore_case: x = x.lower() string = string.lower() str_list = [ i for i in string ] x_list = [ i for i in x ] if backward: x_list.reverse() str_list.reverse() x = ''.join(x_list) string = ''.join(str_list) lenx = len(x) ans = [] for i in range( len(str_list) - lenx + 1 ): if x == string[i:i+lenx]: ans.append( i ) return ans def rfind( x, string, ignore_case = False): return find( x, string, backward = True, ignore_case = ignore_case ) print find('f','abcdefgacdfh') # [5, 10] print rfind('f','abcdefgacdfh') # [1, 6] print find(12,'aaa3331222aa12a') # [6, 12] print rfind(12,'aaa3331222aa12a') # [1, 7] ```
> also I don't know how the original find functions A good way to learn about functions without googling is to use [IPython](http://ipython.org/) and especially the [notebook variant](http://ipython.org/notebook.html/). These allow you to write Python code interactively, and have some special features. Typing the name of a function in IPython (either the notebook or the interpreter) with a question mark returns some information about the function, e.g. ``` find? Type: function String Form:<function find at 0x2893cf8> File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py Definition: find(condition) Docstring: Return the indices where ravel(condition) is true ``` Typing two question marks reveals the source code ``` find?? Type: function String Form:<function find at 0x2893cf8> File: /usr/lib/pymodules/python2.7/matplotlib/mlab.py Definition: find(condition) Source: def find(condition): "Return the indices where ravel(condition) is true" res, = np.nonzero(np.ravel(condition)) return res ``` You would then need to go down the rabbit hole further to find exactly how find worked.
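A bare-bones version of the idea, scanning with slices forwards for `find` and from the end for `rfind` (the function names are my own, and only the first match is returned, mirroring `str.find`/`str.rfind`):

```python
def my_find(sub, s):
    """Index of the first occurrence of sub in s, or -1 (like str.find)."""
    for i in range(len(s) - len(sub) + 1):
        if s[i:i + len(sub)] == sub:
            return i
    return -1

def my_rfind(sub, s):
    """Index of the last occurrence of sub in s, or -1 (like str.rfind)."""
    for i in range(len(s) - len(sub), -1, -1):
        if s[i:i + len(sub)] == sub:
            return i
    return -1

print(my_find('f', 'abcdefgacdfh'))   # 5
print(my_rfind('f', 'abcdefgacdfh'))  # 10
print(my_find('zz', 'abc'))           # -1
```

Note the `+ 1` in the forward range: without it a match flush against the end of the string is missed.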
How does find procedure work in python
[ "", "python", "" ]
I have a table that contains employees attendance details. For this example let’s assume only three different attendance types: ‘Holiday’ / ‘Bank-Holiday’ / ‘Sickness’. I can find the totals of each for each employee without difficulty, however I need to combine the results from ‘Holiday’ and ‘Bank-Holiday’. --- e.g. if the table contained the following data: John - ‘Holiday’ John - ‘Bank-Holiday’ John - ‘Sickness’ Jenny – ‘Holiday’ --- The results should be Name - Days Sick – Days Holiday John – 1 – 2 Jenny – 0 - 1
My two cents' worth: Instead of hard-coding this type of categorization into queries you *really* should create an [AbsenceCategories] table with values like... ``` AbsenceType AbsenceCategory ----------- --------------- Sickness Sick Holiday Holiday Bank-Holiday Holiday ``` ...so when "Top Management" sends you the memo informing you that "Unscheduled Absence" must be tracked separately and categorized as "Holiday" then you can just add it to your table and you don't have to change all of your queries and such.
You can use `IIF` inside a SUM Statement to count certain conditions: ``` SELECT [YourTable].[Name], SUM(IIF([YourTable].[Attendance] = 'Sickness', 1, 0)) AS [Days Sick], SUM(IIF([YourTable].[Attendance] IN ('Holiday' , 'Bank-Holiday'), 1, 0)) AS [Days Holiday] FROM [YourTable] GROUP BY [YourTable].[Name]; ```
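`CASE WHEN` is the portable spelling of Access's `IIF`, so the whole query can be tried in sqlite3 (table and column names here are mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE attendance (name TEXT, type TEXT);
    INSERT INTO attendance VALUES
        ('John', 'Holiday'), ('John', 'Bank-Holiday'), ('John', 'Sickness'),
        ('Jenny', 'Holiday');
""")

rows = conn.execute("""
    SELECT name,
           SUM(CASE WHEN type = 'Sickness' THEN 1 ELSE 0 END) AS days_sick,
           SUM(CASE WHEN type IN ('Holiday', 'Bank-Holiday')
               THEN 1 ELSE 0 END) AS days_holiday
    FROM attendance
    GROUP BY name
""").fetchall()
print(sorted(rows))  # [('Jenny', 0, 1), ('John', 1, 2)]
```

The `IN ('Holiday', 'Bank-Holiday')` test is what folds the two holiday types into a single count, exactly as the accepted category table suggests doing via a lookup.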
MS Access SQL - Count if a field matches either of two criteria
[ "", "sql", "ms-access", "count", "criteria", "" ]
**Edit:** So I found out that NDSolve for ODE is using Runge Kutta to solve the equations. How can I use the Runge Kutta method on my python code to solve the ODE I have below? From my post on [text files with float entries](https://stackoverflow.com/questions/16325258/text-files-with-float-entries-python), I was able to determine that `python` and `mathematica` immediately start diverging with a tolerance of 10 to the negative 6. **End Edit** For last few hours, I have been trying to figure out why my solutions in Mathematica and Python differ by `5000` something `km`. I am led to believe one program has a higher error tolerance when simulating over millions of seconds in flight time. My question is which program is more accurate, and if it isn't python, how can I adjust the precision? With `Mathematica`, I am less than 10km away from `L4` where as with `Python` I am `5947` km away. The codes are listed below: `Python` ``` import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from numpy import linspace from scipy.optimize import brentq me = 5.974 * 10 ** (24) # mass of the earth mm = 7.348 * 10 ** (22) # mass of the moon G = 6.67259 * 10 ** (-20) # gravitational parameter re = 6378.0 # radius of the earth in km rm = 1737.0 # radius of the moon in km r12 = 384400.0 # distance between the CoM of the earth and moon d = 300 # distance the spacecraft is above the Earth pi1 = me / (me + mm) pi2 = mm / (me + mm) mue = 398600.0 # gravitational parameter of earth km^3/sec^2 mum = G * mm # grav param of the moon mu = mue + mum omega = np.sqrt(mu / (r12 ** 3)) nu = -np.pi / 4 # true anomaly pick yourself xl4 = r12 / 2 - 4671 # x location of L4 yl4 = np.sqrt(3) / 2 * r12 # y print("The location of L4 is", xl4, yl4) # Solve for Jacobi's constant def f(C): return (omega ** 2 * (xl4 ** 2 + yl4 ** 2) + 2 * mue / r12 + 2 * mum / r12 + 2 * C) c = brentq(f, -5, 0) print("Jacobi's constant is",c) x0 = (re + 200) 
* np.cos(nu) - pi2 * r12 # x location of the satellite y0 = (re + 200) * np.sin(nu) # y location print("The satellite's initial position is", x0, y0) vbo = (np.sqrt(omega ** 2 * (x0 ** 2 + y0 ** 2) + 2 * mue / np.sqrt((x0 + pi2 * r12) ** 2 + y0 ** 2) + 2 * mum / np.sqrt((x0 - pi1 * r12) ** 2 + y0 ** 2) + 2 * -1.21)) print("Burnout velocity is", vbo) gamma = 0.4678 * np.pi / 180 # flight path angle pick yourself vx = vbo * (np.sin(gamma) * np.cos(nu) - np.cos(gamma) * np.sin(nu)) # velocity of the bo in the x direction vy = vbo * (np.sin(gamma) * np.sin(nu) + np.cos(gamma) * np.cos(nu)) # velocity of the bo in the y direction print("The satellite's initial velocity is", vx, vy) # r0 = [x, y, 0] # v0 = [vx, vy, 0] u0 = [x0, y0, 0, vx, vy, 0] def deriv(u, dt): return [u[3], # dotu[0] = u[3] u[4], # dotu[1] = u[4] u[5], # dotu[2] = u[5] (2 * omega * u[4] + omega ** 2 * u[0] - mue * (u[0] + pi2 * r12) / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * (u[0] - pi1 * r12) / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[3] = that (-2 * omega * u[3] + omega ** 2 * u[1] - mue * u[1] / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * u[1] / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[4] = that 0] # dotu[5] = 0 dt = np.linspace(0.0, 6.0 * 86400.0, 2000000.0) # secs to run the simulation u = odeint(deriv, u0, dt) x, y, z, x2, y2, z2 = u.T fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot(x, y, z, color = 'r') # adding the Lagrange point phi = np.linspace(0, 2 * np.pi, 100) theta = np.linspace(0, np.pi, 100) xm = 2000 * np.outer(np.cos(phi), np.sin(theta)) + xl4 ym = 2000 * np.outer(np.sin(phi), np.sin(theta)) + yl4 zm = 2000 * np.outer(np.ones(np.size(phi)), np.cos(theta)) ax.plot_surface(xm, ym, zm, color = '#696969', linewidth = 0) ax.auto_scale_xyz([-8000, 385000], [-8000, 385000], [-8000, 385000]) # adding the earth phi = np.linspace(0, 2 * np.pi, 100) theta = np.linspace(0, np.pi, 100) xm = 2000 * 
np.outer(np.cos(phi), np.sin(theta)) ym = 2000 * np.outer(np.sin(phi), np.sin(theta)) zm = 2000 * np.outer(np.ones(np.size(phi)), np.cos(theta)) ax.plot_surface(xm, ym, zm, color = '#696969', linewidth = 0) ax.auto_scale_xyz([-8000, 385000], [-8000, 385000], [-8000, 385000]) plt.show() # The code below finds the distance between path and l4 my_x, my_y, my_z = (xl4, yl4, 0.0) delta_x = x - my_x delta_y = y - my_y delta_z = z - my_z distance = np.array([np.sqrt(delta_x ** 2 + delta_y ** 2 + delta_z ** 2)]) minimum = np.amin(distance) print("Closet approach to L4 is", minimum) ``` `Mathematica` ``` ClearAll["Global`*"]; me = 5.974*10^(24); mm = 7.348*10^(22); G = 6.67259*10^(-20); re = 6378; rm = 1737; r12 = 384400; \[Pi]1 = me/(me + mm); \[Pi]2 = mm/(me + mm); M = me + mm; \[Mu]1 = 398600; \[Mu]2 = G*mm; \[Mu] = \[Mu]1 + \[Mu]2; \[CapitalOmega] = Sqrt[\[Mu]/r12^3]; \[Nu] = -\[Pi]/4; xl4 = 384400/2 - 4671; yl4 = Sqrt[3]/2*384400 // N; Solve[\[CapitalOmega]^2*(xl4^2 + yl4^2) + 2 \[Mu]1/r12 + 2 \[Mu]2/r12 + 2*C == 0, C] x = (re + 200)*Cos[\[Nu]] - \[Pi]2*r12 // N y = (re + 200)*Sin[\[Nu]] // N {{C -> -1.56824}} -19.3098 -4651.35 vbo = Sqrt[\[CapitalOmega]^2*((x)^2 + (y)^2) + 2*\[Mu]1/Sqrt[(x + \[Pi]2*r12)^2 + (y)^2] + 2*\[Mu]2/Sqrt[(x - \[Pi]1*r12)^2 + (y)^2] + 2*(-1.21)] 10.8994 \[Gamma] = 0.4678*Pi/180; vx = vbo*(Sin[\[Gamma]]*Cos[\[Nu]] - Cos[\[Gamma]]*Sin[\[Nu]]); vy = vbo*(Sin[\[Gamma]]*Sin[\[Nu]] + Cos[\[Gamma]]*Cos[\[Nu]]); r0 = {x, y, 0}; v0 = {vx, vy, 0} {7.76974, 7.64389, 0} s = NDSolve[{x1''[t] - 2*\[CapitalOmega]*x2'[t] - \[CapitalOmega]^2* x1[t] == -\[Mu]1/((Sqrt[(x1[t] + \[Pi]2*r12)^2 + x2[t]^2])^3)*(x1[t] + \[Pi]2* r12) - \[Mu]2/((Sqrt[(x1[t] - \[Pi]1*r12)^2 + x2[t]^2])^3)*(x1[t] - \[Pi]1*r12), x2''[t] + 2*\[CapitalOmega]*x1'[t] - \[CapitalOmega]^2* x2[t] == -\[Mu]1/(Sqrt[(x1[t] + \[Pi]2*r12)^2 + x2[t]^2])^3* x2[t] - \[Mu]2/(Sqrt[(x1[t] - \[Pi]1*r12)^2 + x2[t]^2])^3* x2[t], x3''[t] == 0, x1[0] == r0[[1]], x1'[0] == v0[[1]], x2[0] == r0[[2]], x2'[0] == 
v0[[2]], x3[0] == r0[[3]], x3'[0] == v0[[3]]}, {x1, x2, x3}, {t, 0, 1000000}]; ParametricPlot3D[ Evaluate[{x1[t], x2[t], x3[t]} /. s], {t, 0, 10*24*3600}, PlotStyle -> {Red, Thick}] g1 = ParametricPlot3D[ Evaluate[{x1[t], x2[t], x3[t]} /. s], {t, 0, 5.75*3600*24}, PlotStyle -> {Red}, PlotRange -> {{-10000, 400000}, {-10000, 400000}}]; g2 = Graphics3D[{Blue, Opacity[0.6], Sphere[{-4671, 0, 0}, re]}]; g3 = Graphics3D[{Green, Opacity[0.6], Sphere[{379729, 0, 0}, rm]}]; g4 = Graphics3D[{Black, Sphere[{xl4, yl4, 0}, 2000]}]; Show[g2, g1, g3, g4, Boxed -> False] (*XYdata=Flatten[Table[Evaluate[{x1[t],x2[t],x3[t]}/.s],{t,5.5*24*\ 3600,5.78*24*3600,1}],1]; X1Y1data=Flatten[Table[Evaluate[{x1'[t],x2'[t],x3'[t]}/.s],{t,5.5*24*\ 3600,5.78*24*3600,1}],1]; SetDirectory[NotebookDirectory[]]; Export["OrbitData.txt",XYdata,"CSV"]; Export["OrbVeloc.txt",X1Y1data,"CSV"];*) ```
If at this point your problem has reduced to just wanting to use Runge-Kutta, you can for example replace this line: ``` u = odeint(deriv, u0, dt) ``` with something like this: ``` #reverse the order of arguments def deriv2(t,u): return deriv(u,t) # initialize a 4th order Runge-Kutta ODE solver solver = ode(deriv2).set_integrator('dopri5') solver.set_initial_value(u0) u = np.empty((len(dt), 6)) u[0,:] = u0 for ii in range(1,len(dt)): u[ii] = solver.integrate(dt[ii]) ``` (+obviously replace the odeint import with ode). Note that this is significantly slower for this type of ODE. To use the dop853, use solver.set\_integrator('dop853').
I re-wrote the def deriv part of the ode and it works now! So the `Mathematica` plot and the `Python` agree. ``` def deriv(u, dt): return [u[3], # dotu[0] = u[3] u[4], # dotu[1] = u[4] u[5], # dotu[2] = u[5] (2 * omega * u[4] + omega ** 2 * u[0] - mue * (u[0] + pi2 * r12) / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * (u[0] - pi1 * r12) / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[3] = that (-2 * omega * u[3] + omega ** 2 * u[1] - mue * u[1] / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * u[1] / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[4] = that 0] # dotu[5] = 0 ```
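For reference, the classical 4th-order Runge-Kutta step that `dopri5`-style solvers build on can be written in a few lines of pure Python (no SciPy). Here it is checked against the toy ODE dy/dt = -y, whose exact solution is exp(-t):

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: -y          # toy ODE with exact solution exp(-t)
t, y, h = 0.0, 1.0, 0.01
for _ in range(200):         # integrate from t = 0 to t = 2
    y = rk4_step(f, t, y, h)
    t += h

error = abs(y - math.exp(-2.0))
print(error < 1e-8)  # True: global error scales as h**4
```

The same step function extends component-wise to the 6-dimensional state vector used in the orbit problem; the adaptive `dopri5`/`dop853` integrators additionally adjust `h` on the fly to meet a tolerance.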
ode integration in python versus mathematica results
[ "", "python", "wolfram-mathematica", "precision", "differential-equations", "" ]
OK, so I thought it would be a good idea to get familiar with Python. (I have had experience with Java, PHP, Perl, VB, etc. - not a master of any, but intermediate knowledge.) So I am attempting to write a script that will take the data from a socket and translate it to the screen. Rough beginning code to follow: my code seems to correctly read the binary info from the socket, but I can't unpack it since I don't have access to the original structure. I have the output for this stream with a different program (which is terribly written, which is why I am tackling this); when I print out the recv, it's like this... ``` b'L\x00k\x07vQ\n\x01\xffh\x00\x04NGIN\x04MAIN6Product XX finished reprocessing cdc XXXXX at jesadr 0c\x00k\x07vQ\n\x01\xffF\x00\x06CSSPRD\x0cliab_checkerCCheckpointed to XXXXXXXXXXXXXXXX:XXXXXXX.XXX at jesadr 0 (serial 0)[\x00l\x07vQ\n\x00\xff\x01\x00\x05MLIFE\x06dayendBdayend 1 Copyright XXXX XXXXXXX XXXXXXX XXXXX XXX XXXXXX XXXXXXXX. ``` from looking at this, and comparing it to the output of the other program, I would surmise that it should be broken up like.. ``` b'L\x00k\x07vQ\n\x01\xffh\x00\x04NGIN\x04MAIN6Product XX finished reprocessing cdc XXXXX at jesadr 0' ``` with corresponding info ``` 04-23 00:00:43 10 1 NGIN MAIN 255 104 Product XX finished reprocessing cdc XXXXX at jesadr 0 ``` Now, based on my research, it looks like I need to use the "struct" module to unpack it, however I have no idea of the original structure of this; I only know what info is available from it, and to be honest, I'm having a hell of a time figuring this out. I have used the Python interpreter to attempt to unpack bits and pieces of the line, however it is an exercise in frustration. If anyone can at least help me get started, I would very much appreciate it. Thanks
Okay. I think I've managed to decode it, although I'm not sure about the intermediate 16-bit value. This Python 2.7 code... ``` from cStringIO import StringIO import struct import time def decode(f): def read_le16(f): return struct.unpack('<h', f.read(2))[0] def read_timestamp(f): ts = struct.unpack('<l', f.read(4))[0] return time.ctime(ts) def read_byte(f): return ord(f.read(1)) def read_pascal(f): l = ord(f.read(1)) return f.read(l) result = [] # Read total length result.append('Total message length is %d bytes' % read_le16(f)) # Read timestamp result.append(read_timestamp(f)) # Read 3 x byte result.append(read_byte(f)) result.append(read_byte(f)) result.append(read_byte(f)) # Read 1 x LE16 result.append(read_le16(f)) # Read 3 x pascal string result.append(read_pascal(f)) result.append(read_pascal(f)) result.append(read_pascal(f)) return result s = 'L\x00k\x07vQ\n\x01\xffh\x00\x04NGIN\x04MAIN6Product XX finished reprocessing cdc XXXXX at jesadr 0c\x00k\x07vQ\n\x01\xffF\x00\x06CSSPRD\x0cliab_checkerCCheckpointed to XXXXXXXXXXXXXXXX:XXXXXXX.XXX at jesadr 0 (serial 0)[\x00l\x07vQ\n\x00\xff\x01\x00\x05MLIFE\x06dayendBdayend 1 Copyright XXXX XXXXXXX XXXXXXX XXXXX XXX XXXXXX XXXXXXXX.' f = StringIO(s) print decode(f) print decode(f) print decode(f) ``` ...yields... ``` ['Total message length is 76 bytes', 'Tue Apr 23 05:00:43 2013', 10, 1, 255, 104, 'NGIN', 'MAIN', 'Product XX finished reprocessing cdc XXXXX at jesadr 0'] ['Total message length is 99 bytes', 'Tue Apr 23 05:00:43 2013', 10, 1, 255, 70, 'CSSPRD', 'liab_checker', 'Checkpointed to XXXXXXXXXXXXXXXX:XXXXXXX.XXX at jesadr 0 (serial 0)'] ['Total message length is 91 bytes', 'Tue Apr 23 05:00:44 2013', 10, 0, 255, 1, 'MLIFE', 'dayend', 'dayend 1 Copyright XXXX XXXXXXX XXXXXXX XXXXX XXX XXXXXX XXXXXXXX.'] ``` The timestamps are out by 5 hours, so I'm assuming it's a timezone thing.
I'd say you're right in using struct, but what sucks about struct is that AFAIK you'll always have to know the original structure. Maybe reading the TCP specs and ISOs will help, although it's still going to be a hell of a time figuring it out :/
Python unpacking binary stream from tcp socket
[ "", "python", "sockets", "struct", "" ]
I have been stuck on this for a while now and I've changed my code so many times it pains me. Anyway, I have made a DB in SQLite Database Browser and have been reading up on how to place it in the app I am working on at the moment. So I placed it in my assets folder and made a database helper Java class. But I just can't get my DB file into the Eclipse DDMS; I have looked at Stack Overflow answers but just don't know where the problem lies. I changed my DBHelper code again to [Database not copying from assets](https://stackoverflow.com/questions/5945196/database-not-copying-from-assets/5945326#5945326) and still no DB file in my DDMS. The images you see show where I am at the moment. This is the first computer coding I've ever done; I am new to Android. Thank you. <http://s1.postimg.org/6u667r3bj/snip1.jpg> <http://s13.postimg.org/l5vsas40n/snip2.jpg> ``` package com.mybasicapp; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import android.content.Context; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.database.sqlite.SQLiteOpenHelper; public class DataBaseHelper extends SQLiteOpenHelper{ private Context mycontext; private String DB_PATH = "/data/data/gr.peos/databases/"; //private String DB_PATH = mycontext.getApplicationContext().getPackageName()+"/databases/"; private static String DB_NAME = "BLib.sqlite";//the extension may be .sqlite or .db public SQLiteDatabase myDataBase; /*private String DB_PATH = "/data/data/" + mycontext.getApplicationContext().getPackageName() + "/databases/";*/ public DataBaseHelper(Context context) throws IOException { super(context,DB_NAME,null,1); this.mycontext=context; boolean dbexist = checkdatabase(); if(dbexist) { //System.out.println("Database exists"); opendatabase(); } else { System.out.println("Database doesn't exist"); createdatabase(); } } public void 
createdatabase() throws IOException{ boolean dbexist = checkdatabase(); if(dbexist) { //System.out.println(" Database exists."); } else{ this.getReadableDatabase(); try{ copydatabase(); } catch(IOException e){ throw new Error("Error copying database"); } } } private boolean checkdatabase() { //SQLiteDatabase checkdb = null; boolean checkdb = false; try{ String myPath = DB_PATH + DB_NAME; File dbfile = new File(myPath); //checkdb = SQLiteDatabase.openDatabase(myPath,null,SQLiteDatabase.OPEN_READWRITE); checkdb = dbfile.exists(); } catch(SQLiteException e){ System.out.println("Database doesn't exist"); } return checkdb; } private void copydatabase() throws IOException { //Open your local db as the input stream InputStream myinput = mycontext.getAssets().open(DB_NAME); // Path to the just created empty db String outfilename = DB_PATH + DB_NAME; //Open the empty db as the output stream OutputStream myoutput = new FileOutputStream("/data/data/gr.peos/databases /BLib.sqlite"); // transfer byte to inputfile to outputfile byte[] buffer = new byte[1024]; int length; while ((length = myinput.read(buffer))>0) { myoutput.write(buffer,0,length); } //Close the streams myoutput.flush(); myoutput.close(); myinput.close(); } public void opendatabase() throws SQLException { //Open the database String mypath = DB_PATH + DB_NAME; myDataBase = SQLiteDatabase.openDatabase(mypath, null, SQLiteDatabase.OPEN_READWRITE); } public synchronized void close(){ if(myDataBase != null){ myDataBase.close(); } super.close(); } @Override public void onCreate(SQLiteDatabase arg0) { // TODO Auto-generated method stub } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // TODO Auto-generated method stub } } ```
If I understand you correctly, you want to add a ready SQLite file to your db folder. 1. Open your emulator and make sure it's selected. 2. In the DDMS, open the /data/data/[your\_package\_name]/databases folder and select it. 3. Use the - + buttons to pull and insert the files. ![enter image description here](https://i.stack.imgur.com/c8epW.png)
Using SQLiteOpenHelper is the right way to go. It works great for me. However, the only functions I'm overriding are ``` public void onCreate(SQLiteDatabase db) public void onUpgrade (SQLiteDatabase db, int oldVersion, int newVersion) ``` I'm letting it create the database in its default location, in the root of my app's internal storage area, I think. In your case, if I understand what you posted, you want to copy a database from the assets folder to internal storage. My suggestion is 4 parts: 1. Temporarily let Android create an empty db for you and figure out where it put it. 2. Modify your copyDatabase code to copy to that location. 3. Override onCreate to call copyDatabase. Android will only call onCreate if no database exists, thus you can unconditionally call copyDatabase. 4. In the future, as you make schema changes, handle them in onUpgrade. That should be all. Let me know how you progress with this. One caveat: onCreate already creates an empty database, so I'm not sure copying a db from assets will work without somehow closing and reopening the db. Another approach, which I'm using is to put DDL in your assets file instead and process that in onCreate. Something like this (except I'm using an array of strings instead of an asset file): ``` String[] ddl = { "CREATE TABLE [T1] ([id] INTEGER NOT NULL, [col1] CHAR);" + "CREATE TABLE [T2] ([id] INTEGER NOT NULL, [col2] CHAR)"; }; db.beginTransaction(); try { for (int i = 0, limit = ddl.length; i < limit; i++) db.execSQL (ddl [i]); } finally { db.endTransaction(); } ```
SqliteDatabase not found in DDMS
[ "", "android", "sql", "" ]
I have 3 tables from which I extract data, and to get most of the data I have a query that works pretty well, but I can't get a specific row and that's where I need some help. Table 1: ``` EquipmentID | EquipmentName | EquipmentTypeID 15 | Tesla | 68 16 | Colombus | 93 ``` Table 2: ``` EquipmentTypeID | DisplayName | 68 | Electrical Device| 93 | GPS Device | ``` Table 3: ``` EquipmentID | IPAddress | 15 | 192.168.1.1| 16 | 192.168.0.1| ``` So far the data I get is the following, using the following SQL syntax: ``` SELECT DISTINCT t1.IPAddress, t2.EquipmentID FROM Table3 t1 JOIN Table1 t2 ON t1.EquipmentID = t2.EquipmentID WHERE IPAddress LIKE '%192%' ``` The result I get looks like ``` IPAddress | EquipmentID | 192.168.1.1| 15 | 192.168.0.1| 16 | ``` However, when I do a JOIN like the following, then the result is just messed up ``` SELECT DISTINCT t1.IPAddress, t2.EquipmentID, t3.EquipmentTypeID, t4.DisplayName FROM Table3 t1 JOIN Table1 t2 ON t2.EquipmentID = t1.EquipmentID JOIN Table2 t3 ON t3.EquipmentTypeID = t1.EquipmentTypeID JOIN Table2 t4 ON t3.EquipmentTypeID = t1.EquipmentTypeID WHERE IPAddress LIKE '%192' ``` But now the result I get is the following: ``` IPAddress | EquipmentID |EquipmentTypeID| DisplayName | 192.168.1.1| 15 |68 | ElectricalDevice| 192.168.1.1| 15 |93 | GPS Device | 192.168.0.1| 16 |68 | ElectricalDevice| 192.168.0.1| 16 |93 | GPS Device | ``` Any ideas on how to get the right display name for the corresponding IPAddress and EquipmentID? If you need more clarification please let me know. Thank you for any help in advance
You are joining Table2 twice and on the wrong id: ``` JOIN Table2 t3 ON t3.EquipmentTypeID = t1.EquipmentTypeID JOIN Table2 t4 ON t3.EquipmentTypeID = t1.EquipmentTypeID ``` try this way (each table joined once, on the column that actually links it): ``` SELECT DISTINCT t3.IPAddress, t1.EquipmentID, t1.EquipmentTypeID, t2.DisplayName FROM Table3 t3 JOIN Table1 t1 ON t1.EquipmentID = t3.EquipmentID JOIN Table2 t2 ON t2.EquipmentTypeID = t1.EquipmentTypeID WHERE IPAddress LIKE '%192%' ``` Note: I changed the aliases of the joined tables because it was a bit confusing.
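If you want to sanity-check a join like this without touching the real database, the sample rows from the question are small enough to rebuild in an in-memory SQLite database. A sketch (table and column names copied from the question; this is just a test bed, not the OP's actual schema):

```python
import sqlite3

# Rebuild the question's sample data in memory to test the join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (EquipmentID INTEGER, EquipmentName TEXT, EquipmentTypeID INTEGER);
CREATE TABLE Table2 (EquipmentTypeID INTEGER, DisplayName TEXT);
CREATE TABLE Table3 (EquipmentID INTEGER, IPAddress TEXT);
INSERT INTO Table1 VALUES (15, 'Tesla', 68), (16, 'Colombus', 93);
INSERT INTO Table2 VALUES (68, 'Electrical Device'), (93, 'GPS Device');
INSERT INTO Table3 VALUES (15, '192.168.1.1'), (16, '192.168.0.1');
""")

# One join per table, each on the column that actually links them.
rows = conn.execute("""
SELECT DISTINCT t3.IPAddress, t1.EquipmentID, t1.EquipmentTypeID, t2.DisplayName
FROM Table3 t3
JOIN Table1 t1 ON t1.EquipmentID = t3.EquipmentID
JOIN Table2 t2 ON t2.EquipmentTypeID = t1.EquipmentTypeID
WHERE t3.IPAddress LIKE '%192%'
""").fetchall()

for row in rows:
    print(row)  # one row per device, with its own DisplayName
```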
``` SELECT t3.IPAddress, t3.EquipmentID, t2.EquipmentTypeID, t2.DisplayName FROM Table3 t3 JOIN Table1 t1 ON t3.EquipmentID = t1.EquipmentID JOIN Table2 t2 ON t2.EquipmentTypeID = t1.EquipmentTypeID WHERE IPAddress LIKE '%192.%' GROUP BY t3.IPAddress, t3.EquipmentID, t2.EquipmentTypeID, t2.DisplayName ```
A little help to better understand JOINS
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I want to know how I can exchange values of the same column in the same table in one query. For example, the table is like below. ``` SerialNo Status 1 Married 2 Single 3 Married ``` Now, the result I want is that "Married" should be converted into "Single" and "Single" should be converted into "Married". Expected: ``` SerialNo Status 1 Single 2 Married 3 Single ``` This should be accomplished in ONE query only. Is it possible to do so with a single query? If yes, help me. Thanks in advance.
``` UPDATE MyTable SET Status = (CASE WHEN Status = 'Married' THEN 'Single' ELSE 'Married' END ) ```
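You can check the swap quickly against the sample data with an in-memory SQLite table; the same `CASE` expression works in any dialect that supports it:

```python
import sqlite3

# Sample table from the question, rebuilt in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (SerialNo INTEGER, Status TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)",
                 [(1, 'Married'), (2, 'Single'), (3, 'Married')])

# Single-statement swap using CASE.
conn.execute("""
UPDATE MyTable
SET Status = CASE WHEN Status = 'Married' THEN 'Single' ELSE 'Married' END
""")

print(conn.execute("SELECT * FROM MyTable ORDER BY SerialNo").fetchall())
# [(1, 'Single'), (2, 'Married'), (3, 'Single')]
```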
Use this: ``` UPDATE YourTable SET Status = CASE WHEN Status = 'Married' THEN 'Single' WHEN Status = 'Single' THEN 'Married' ELSE 'Unknown' END ```
how to exchange values of same column in same table
[ "", "sql", "self-join", "" ]
I have an sql database (from an expressionengine install.) Within the `exp_channel_data` table I have a column `field_id_90` that contains only text values. The problem is all these values are in CAPITALS. Ideally I would like them lowercase. Is it possible to run a query or something to change these? **--UPDATED--** Thank you for the responses. Just wondering about whether or not its possible to Capitalise the values? Most of the values are single words, but some are "word,word" and some are "word-word" Is that possible?
Try: ``` SELECT lower(field_id_90) FROM exp_channel_data; ```
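One thing to watch: a plain `SELECT lower(...)` only changes how the values are displayed. To permanently rewrite the stored values (which is what the question seems to ask for), you need an `UPDATE`. A quick sketch with SQLite, reusing the question's table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exp_channel_data (field_id_90 TEXT)")
conn.executemany("INSERT INTO exp_channel_data VALUES (?)",
                 [("WORD",), ("WORD,WORD",), ("WORD-WORD",)])

# Persist the change instead of just displaying it lowercased.
conn.execute("UPDATE exp_channel_data SET field_id_90 = LOWER(field_id_90)")

values = [r[0] for r in conn.execute("SELECT field_id_90 FROM exp_channel_data")]
print(values)  # ['word', 'word,word', 'word-word']
```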
`lower()` should work for SQL Server, MySQL, Oracle and PostgreSQL: ``` select lower(YourColumn) from YourTable ```
Is it possible to change the values in a database column to lowercase?
[ "", "sql", "database", "" ]
I built an API with flask. My app does not have any static assets hence there's no reason for me to use nginx. I wish to run gunicorn on port 80. I have a "deploy script": ``` mkdir .log 2> /dev/null DEBUG=0 gunicorn -b 0.0.0.0:80 backend:app --access-logfile .log/access.log --error-logfile .log/general.log ``` I wish to run gunicorn on port 80 with `authbind`. I followed this [guide](https://mutelight.org/authbind) here. *Note that I am able to run `authbind python -m SimpleHTTPServer 80`* When I try to run `authbind ./deployment.run 80`, I am seeing the following error: ``` 2013-04-25 15:32:55 [24006] [ERROR] Can't connect to ('0.0.0.0', 80) 2013-04-25 15:33:08 [24018] [INFO] Starting gunicorn 0.17.4 2013-04-25 15:33:08 [24018] [ERROR] Retrying in 1 second. 2013-04-25 15:33:09 [24018] [ERROR] Retrying in 1 second. 2013-04-25 15:33:10 [24018] [ERROR] Retrying in 1 second. 2013-04-25 15:33:11 [24018] [ERROR] Retrying in 1 second. 2013-04-25 15:33:12 [24018] [ERROR] Retrying in 1 second. ``` Any ideas why I am unable to bind gunicorn to port 80? Any recommendations?
Try putting authbind inside your deployment script, e.g.: ``` mkdir .log 2> /dev/null DEBUG=0 authbind gunicorn -b 0.0.0.0:80 backend:app --access-logfile .log/access.log --error-logfile .log/general.log ``` Then just run `./deployment.run 80`. (Also, your script doesn't seem to be using any parameters; perhaps replace `80` in your script with `$1`?)
If you are on a unix-like environment, ports < 1024 (like 80) will require superuser privileges.
Getting Gunicorn to run on port 80
[ "", "python", "http", "gunicorn", "" ]
So I have an expression such as "./folder/thisisa.test/file.cxx.h". How do I substitute/remove all the "." except the last dot?
To match all but the last dot with a regex: ``` '\.(?=[^.]*\.)' ``` Using a [lookahead](http://www.regular-expressions.info/lookaround.html) to check that's there another dot after the one we found (the lookahead's not part of the match).
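Put into code with Python's `re` module and the path from the question:

```python
import re

path = "./folder/thisisa.test/file.cxx.h"
# Remove every '.' that still has another '.' somewhere after it,
# i.e. every dot except the last one.
result = re.sub(r'\.(?=[^.]*\.)', '', path)
print(result)  # /folder/thisisatest/filecxx.h
```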
Without regular expressions, using [`str.count`](https://docs.python.org/3/library/stdtypes.html#str.count) and [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace): ``` s = "./folder/thisisa.test/file.cxx.h" s.replace('.', '', s.count('.')-1) # '/folder/thisisatest/filecxx.h' ```
Python regex matching all but last occurrence
[ "", "python", "regex", "python-2.7", "directory", "" ]
I am able to read and slice pandas dataframe using python datetime objects, however I am forced to use only *existing dates* in index. For example, this works: ``` >>> data <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 252 entries, 2010-12-31 00:00:00 to 2010-04-01 00:00:00 Data columns: Adj Close 252 non-null values dtypes: float64(1) >>> st = datetime.datetime(2010, 12, 31, 0, 0) >>> en = datetime.datetime(2010, 12, 28, 0, 0) >>> data[st:en] Adj Close Date 2010-12-31 593.97 2010-12-30 598.86 2010-12-29 601.00 2010-12-28 598.92 ``` However if I use a start or end date that is not present in the DF, I get python KeyError. My Question : How do I query the dataframe object for a date range; even when the start and end dates are not present in the DataFrame. Does pandas allow for range based slicing? I am using pandas version 0.10.1
Use `searchsorted` to find the nearest times first, and then use it to slice. ``` In [15]: df = pd.DataFrame([1, 2, 3], index=[dt.datetime(2013, 1, 1), dt.datetime(2013, 1, 3), dt.datetime(2013, 1, 5)]) In [16]: df Out[16]: 0 2013-01-01 1 2013-01-03 2 2013-01-05 3 In [22]: start = df.index.searchsorted(dt.datetime(2013, 1, 2)) In [23]: end = df.index.searchsorted(dt.datetime(2013, 1, 4)) In [24]: df.iloc[start:end] Out[24]: 0 2013-01-03 2 ```
Short answer: Sort your data (`data.sort()`) and then I think everything will work the way you are expecting. Yes, you can slice using datetimes not present in the DataFrame. For example: ``` In [12]: df Out[12]: 0 2013-04-20 1.120024 2013-04-21 -0.721101 2013-04-22 0.379392 2013-04-23 0.924535 2013-04-24 0.531902 2013-04-25 -0.957936 In [13]: df['20130419':'20130422'] Out[13]: 0 2013-04-20 1.120024 2013-04-21 -0.721101 2013-04-22 0.379392 ``` As you can see, you don't even have to build datetime objects; strings work. Because the datetimes in your index are not sequential, the behavior is weird. If we shuffle the index of my example here... ``` In [17]: df Out[17]: 0 2013-04-22 1.120024 2013-04-20 -0.721101 2013-04-24 0.379392 2013-04-23 0.924535 2013-04-21 0.531902 2013-04-25 -0.957936 ``` ...and take the same slice, we get a different result. It returns the first element inside the range and stops at the first element outside the range. ``` In [18]: df['20130419':'20130422'] Out[18]: 0 2013-04-22 1.120024 2013-04-20 -0.721101 2013-04-24 0.379392 ``` This is probably not useful behavior. If you want to select ranges of dates, would it make sense to sort it by date first? ``` df.sort_index() ```
python pandas dataframe slicing by date conditions
[ "", "python", "dataframe", "pandas", "" ]
In Python, how do I pass a list that contains only one string? For example: ``` def fn(mylist): return len(mylist) print fn(('abc', 'def')) # prints 2 print fn(('abc')) # prints 3 ``` I want it to print `1` for the one string in the list `('abc')` but instead it prints `3` for the 3 characters of the string.
That's a tuple, not a list. To make a one-tuple, do this: ``` print fn(('abc',)) ``` To make a list of length one, do this: ``` print fn(['abc']) ``` In your scenario, I think a list would be more appropriate. Use lists when you have a bunch of the same elements of the same type, and tuples when you have a “record”, or some elements of possibly different types and you don't need to add or remove any entries. (Lists often contain tuples.)
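A quick interactive check makes the difference obvious (Python 3 print syntax):

```python
print(type(('abc')) is str)     # True: parentheses alone don't make a tuple
print(type(('abc',)) is tuple)  # True: the trailing comma does
print(len(('abc',)))            # 1
print(len(['abc']))             # 1
print(len(('abc')))             # 3, the length of the string itself
```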
``` fn(['abc']) ``` passes a list ``` fn(('abc')) ``` passes a string in parentheses which are ignored. As other posters have pointed out ``` fn(('abc', )) ``` passes a tuple.
How to pass a list containing a single string in python?
[ "", "python", "" ]
How can I strip the comma from a Python string such as `Foo, bar`? I tried `'Foo, bar'.strip(',')`, but it didn't work.
You want to [`replace`](http://docs.python.org/2/library/stdtypes.html#str.replace) it, not [`strip`](http://docs.python.org/2/library/stdtypes.html#str.strip) it: ``` s = s.replace(',', '') ```
Use the `replace` method of strings, not `strip`: ``` s = s.replace(',','') ``` An example: ``` >>> s = 'Foo, bar' >>> s.replace(',',' ') 'Foo bar' >>> s.replace(',','') 'Foo bar' >>> s.strip(',') # clears the ','s at the start and end of the string, of which there are none 'Foo, bar' >>> s.strip(',') == s True ```
How to strip comma in Python string
[ "", "python", "string", "strip", "" ]
I'm running into a mysterious import error when using nosetests to run a test suite that I can't reproduce outside of the nose. Furthermore, the import error disappears when I skip a subset of the tests. **Executive Summary:** I am getting an import error in Nose that a) only appears when tests bearing a certain attribute are excluded and b) cannot be reproduced in an interactive python session, even when I ensure that the sys.path is the same for both. **Details:** The package structure looks like this: ``` project/ module1/__init__.py module1/foo.py module1/test/__init__.py module1/test/foo_test.py module1/test/test_data/foo_test_data.txt module2/__init__.py module2/bar.py module2/test/__init__.py module2/test/bar_test.py module2/test/test_data/bar_test_data.txt ``` Some of the tests in foo\_test.py are slow, so I've created a @slow decorator to allow me to skip them with a nosetests option: ``` def slow(func): """Decorator sets slow attribute on a test method, so nosetests can skip it in quick test mode.""" func.slow = True return func class TestFoo(unittest.TestCase): @slow def test_slow_test(self): load_test_data_from("test_data/") slow_test_operations_here def test_fast_test(self): load_test_data_from("test_data/") ``` When I want to run the fast unit tests only, I use ``` nosetests -vv -a'!slow' ``` from the root directory of the project. When I want to run them all, I remove the final argument. Here comes the detail that I suspect is to blame for this mess. The unit tests need to load test data from files (not best practice, I know.) The files are placed in a directory called "test\_data" in each test package, and the unit test code refers to them by a relative path, assuming the unit test is being run from the test/ directory, as shown in the example code above. 
To get this to work with running nose from the root directory of the project, I added the following code to `__init__.py` in each test package: ``` import os import sys orig_wd = os.getcwd() def setUp(): """ test package setup: change working directory to the root of the test package, so that relative path to test data will work. """ os.chdir(os.path.dirname(os.path.abspath(__file__))) def tearDown(): global orig_wd os.chdir(orig_wd) ``` As far as I understand, nose executes the setUp and tearDown package methods before and after running the tests in that package, which ensures that the unit test can find the appropriate test\_data directory, and the working directory is reset to the original value when the tests are complete. So much for the setup. The problem is, I get an import error **only** when I run the full suite of tests. The same modules import just fine when I exclude the slow tests. (To clarify, the tests throwing import errors are not slow, so they execute in either scenario.) ``` $ nosetests ... ERROR: Failure: ImportError (No module named foo_test) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.7/site-packages/nose/loader.py", line 413, in loadTestsFromName addr.filename, addr.module) File "/Library/Python/2.7/site-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/Library/Python/2.7/site-packages/nose/importer.py", line 80, in importFromDir fh, filename, desc = find_module(part, path) ImportError: No module named foo_test ``` If I run the test suite without the slow tests, then no error: ``` $ nosetests -a'!slow' ... test_fast_test (module1.test.foo_test.TestFoo) ... ok ``` In a python interactive session, I can import the test module with no trouble: ``` $ python Python 2.7.1 (r271:86832, Aug 5 2011, 03:30:24) [GCC 4.2.1 (Based on Apple Inc.
build 5658) (LLVM build 2335.15.00)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import module1.test >>> module1.test.__path__ ['/Users/USER/project/module1/test'] >>> dir(module1.test) ['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'orig_wd', 'os', 'setUp', 'sys', 'tearDown'] ``` When I set a breakpoint in nose/importer.py, things look different: ``` > /Library/Python/2.7/site-packages/nose/importer.py(83)importFromDir() -> raise (Pdb) l 78 part, part_fqname, path) 79 try: 80 fh, filename, desc = find_module(part, path) 81 except ImportError, e: 82 import pdb; pdb.set_trace() 83 -> raise 84 old = sys.modules.get(part_fqname) 85 if old is not None: 86 # test modules frequently have name overlap; make sure 87 # we get a fresh copy of anything we are trying to load 88 # from a new path (Pdb) part 'foo_test' (Pdb) path ['/Users/USER/project/module1/test'] (Pdb) import module1.test.foo_test *** ImportError: No module named foo_test #If I import module1.test, it works, but the __init__.py file is not being executed (Pdb) import partition.test (Pdb) del dir (Pdb) dir(partition.test) ['__doc__', '__file__', '__name__', '__package__', '__path__'] #setUp and tearDown missing? (Pdb) module1.test.__path__ ['/Users/USER/project/module1/test'] #Module path is the same as before. (Pdb) os.listdir(partition.test.__path__[0]) #All files are right where they should be... ['.svn', '__init__.py', '__init__.pyc', 'foo_test.py', 'foo_test.pyc','test_data'] ``` I see the same screwy results even if I copy sys.path from my interactive session into the pdb session and repeat the above. Can anyone give me any insight about what might be going on? I realize I'm doing several non-standard things at the same time, which could lead to strange interactions. I'd be as interested in advice on how to simplify my architecture as I would be to get an explanation for this bug.
Here is how to track down the context of the error. ``` nosetests --debug=nose,nose.importer --debug-log=nose_debug <your usual args> ``` Afterwards, check the `nose_debug` file. Search for your error message "`No module named foo_test`". Then look at the preceding few lines to see which files/directories nose was looking at. In my case, nose was attempting to run some code which I had imported into my codebase - a 3rd party module which contained its own tests, but which I was not intending to include in my test suite. To resolve this, I used the [nose-exclude](https://pypi.python.org/pypi/nose-exclude) plugin to exclude this directory.
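If I remember the plugin's options correctly, nose-exclude can also be driven from a nose config file rather than the command line; something along these lines (the directory name here is just an example, and the exact option name is worth double-checking against the plugin's documentation):

```
[nosetests]
exclude-dir=path/to/vendored_tests
```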
It's just nose adjusting your path by default. It will change sys.path before importing your module, possibly allowing double code execution and imports outside of package (like your case). To avoid this, setup your `PYTHONPATH` before running nose and use `nose --no-path-adjustment`. See: <http://nose.readthedocs.org/en/latest/usage.html#cmdoption--no-path-adjustment> If you cannot add a command line argument you can use an env var (`NOSE_NOPATH=y`) or this in `.noserc`: ``` [nosetests] no-path-adjustment=1 ```
Import errors when running nosetests that I can't reproduce outside of nose
[ "", "python", "unit-testing", "importerror", "nose", "" ]
I am new to Oracle and I have two functions which will be used with high frequency, and I wonder which is better. This one: ``` FUNCTION GET_MY_MONEY (myType IN NUMBER) RETURN NUMBER AS var_amount NUMBER; var_result NUMBER; BEGIN var_result := 0; var_amount := 0; SELECT amount INTO var_amount FROM mytable WHERE type = myType AND sysdate >= date_from AND sysdate <= date_to; var_result := var_amount*1000; RETURN var_result; EXCEPTION WHEN OTHERS THEN RETURN 0; END; ``` or this one: ``` FUNCTION GET_MY_MONEY (myType IN NUMBER) RETURN NUMBER AS var_count NUMBER; var_amount NUMBER; var_result NUMBER; BEGIN var_result := 0; var_count := 0; var_amount := 0; SELECT count(*) INTO var_count FROM mytable WHERE type = myType AND sysdate >= date_from AND sysdate <= date_to; IF (var_count > 0) THEN SELECT amount INTO var_amount FROM mytable WHERE type = myType AND sysdate >= date_from AND sysdate <= date_to; var_result := var_amount*1000; RETURN var_result; ELSE RETURN 0; END IF; EXCEPTION WHEN OTHERS THEN RETURN 0; END; ``` Which is better for performance? Which returns faster when called? Thanks in advance.
In general, it depends. How frequently will you call the function and pass in a `myType` value that causes the query to return 0 rows? If in 99.9% of calls the query returns exactly 1 row, then the second approach will cause the query to be executed twice. While it's likely that the second call won't make the function twice as expensive as the first, since the blocks you're interested in are almost guaranteed to be cached, the second approach will almost certainly be substantially slower. On the other hand, if a large fraction of calls will involve a `myType` value that doesn't return a row, the second approach will frequently not have to execute the query a second time. And the first approach will incur the overhead of handling the exception a large fraction of the time, which will almost certainly be more expensive than the second query. In most cases, the more efficient solution will be obvious based on the probability that 0 rows will be returned. Most of the time, the function is only called when the caller is pretty confident that the `myType` value they would be passing in is valid, so the first approach would end up more efficient. As the fraction of calls that result in 0 rows being found increases, the second approach gets more efficient. Where that line is will depend on a number of factors, not least of which being your table, your data, your hardware, and your Oracle version. You'd need to run a benchmark test to determine whether for your particular code the line of demarcation is 10% or 20% or 90%.
The question was edited, but `SELECT amount INTO var_amount ...` will fail if there is more than one row (maybe you want to select `sum(amount)`). The first approach is better because: * it is far more intelligible * you don't scan the table twice * in the second you get var\_amount but you don't use it * the count(\*) is useless; you can just `select nvl(sum(amount),0) as amount` However, you must assign a value to the date variables (`date_from` and `date_to`) or pass them in as parameters. And you can return an expression, i.e.: `RETURN var_count*1000;`
Should I use exception instead count(*) in functions?
[ "", "sql", "performance", "oracle", "" ]
I have two lists: ``` list_a = [1,5,8] list_b = [12,4,2,5,7,5,3,6,8] ``` The elements in `list_a` correspond to the indices of elements in `list_b`. Both lists are of size greater than 100. How can I delete the elements of `list_b` whose indices are in `list_a`, so if you take the lists above the resulting list is `[12,2,5,7,3,6]`?
Two options: * Create a new list with a list comprehension: ``` newlist = [el for i, el in enumerate(oldlist) if i not in indices_to_delete] ``` This will be all the faster if `indices_to_delete` is a `set`: ``` indices_to_delete = set(indices_to_delete) newlist = [el for i, el in enumerate(oldlist) if i not in indices_to_delete] ``` because membership testing in a set is O(1) vs. O(n) in a list. * Remove the indices in reverse-sorted order from the list in-place: ``` for index in sorted(indices_to_delete, reverse=True): del oldlist[index] ``` If you *don't* remove items in reverse-sorted order, items with higher indices are moved up as items with lower indices are removed, and the rest of your `indices_to_delete` no longer match the items you needed to remove.
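Applied to the two lists from the question (just the first option above spelled out with concrete data):

```python
list_a = [1, 5, 8]                      # indices to delete
list_b = [12, 4, 2, 5, 7, 5, 3, 6, 8]

to_delete = set(list_a)                 # O(1) membership tests
result = [el for i, el in enumerate(list_b) if i not in to_delete]
print(result)  # [12, 2, 5, 7, 3, 6]
```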
This should accomplish that: ``` for delidx in reversed(toDelete): del otherList[delidx] ``` Also, using a list comprehension: ``` l2 = [elem for i, elem in enumerate(l1) if i not in toDelete] ```
deleting multiple elements without updating till the end
[ "", "python", "list", "" ]
I have a program in Python that basically does the following: ``` for j in xrange(200): # 1) Compute a bunch of data # 2) Write data to disk ``` 1) takes about 2-5 minutes 2) takes about ~1 minute Note that there is too much data to keep in memory. Ideally what I would like to do is write the data to disk in a way that avoids idling the CPU. Is this possible in Python? Thanks!
You could try [using multiple processes](http://docs.python.org/2/library/multiprocessing.html) like this: ``` import multiprocessing as mp def compute(j): # compute a bunch of data return data def write(data): # write data to disk if __name__ == '__main__': pool = mp.Pool() for j in xrange(200): pool.apply_async(compute, args=(j, ), callback=write) pool.close() pool.join() ``` `pool = mp.Pool()` will create a pool of worker processes. By default, the number of workers equals the number of CPU cores your machine has. Each [pool.apply\_async](http://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.apply_async) call queues a task to be run by a worker in the pool of worker processes. When a worker is available, it runs `compute(j)`. When the worker returns a value, `data`, a thread in the main process runs the callback function `write(data)`, with `data` being the data returned by the worker. Some caveats: * The data has to be picklable, since it is being communicated from the worker process back to the main process via a [Queue](http://docs.python.org/2/library/multiprocessing.html#exchanging-objects-between-processes). * There is no guarantee that the order in which the workers complete tasks is the same as the order in which the tasks were sent to the pool. So the order in which the data is written to disk may not correspond to `j` ranging from 0 to 199. One way around this problem would be to write the data to a sqlite (or other kind of) database with `j` as one of the fields of data. Then, when you wish to read the data in order, you could `SELECT * FROM table ORDER BY j`. * Using multiple processes will increase the amount of memory required as data is generated by the worker processes and data waiting to be written to disk accumulates in the Queue. You might be able to reduce the amount of memory required by using NumPy arrays. 
If that is not possible, then you might have to reduce the number of processes: ``` pool = mp.Pool(processes=1) ``` That will create one worker process (to run `compute`), leaving the main process to run `write`. Since `compute` takes longer than `write`, the Queue won't get backed up with more than one chunk of data to be written to disk. However, you would still need enough memory to compute on one chunk of data while writing a different chunk of data to disk. If you do not have enough memory to do both simultaneously, then you have no choice -- your original code, which runs `compute` and `write` sequentially, is the only way.
You can use something such as `Queue.Queue` (the module is here: [Queue](http://docs.python.org/2/library/queue.html)) and `threading.Thread` (or `thread.start_new_thread` if you just want a function; that one lives in the `thread` module), the threading module is here: [threading](http://docs.python.org/2/library/threading.html) - as a file write is not CPU intensive and uses more IO (and the GIL doesn't affect it).
Write data to disk in Python as a background process
[ "", "python", "file", "multiprocessing", "" ]
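The Queue/threading idea in the last answer can be sketched with the standard library alone. This is a minimal, hypothetical version — `compute` and the `results` list standing in for disk writes are placeholders — where a single writer thread drains a bounded queue while the main loop keeps computing:

```python
import threading
import queue  # the module is named Queue on Python 2

def compute(j):
    # stand-in for the expensive computation
    return [j] * 3

results = []

def writer(q):
    # drain the queue until the sentinel None arrives
    while True:
        data = q.get()
        if data is None:
            break
        results.append(data)  # stand-in for writing the chunk to disk

q = queue.Queue(maxsize=2)  # bound the queue to cap memory use
t = threading.Thread(target=writer, args=(q,))
t.start()

for j in range(5):
    q.put(compute(j))  # blocks if the writer falls behind

q.put(None)  # tell the writer to stop
t.join()
```

Bounding the queue (`maxsize=2`) keeps memory use flat: if writing falls behind, `put()` blocks instead of buffering everything in memory.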
Let's say I have a list of ``` ['a','man', 'and', 'a', 'woman'] ``` How do I remove the repeated 'a' so that it will only be: ``` ['a','man', 'and', 'woman'] ```
If the order is not important, then you can just do: ``` d = ['a', 'man', 'and', 'a', 'woman'] list(set(d)) ```
Keeps order: ``` >>> from collections import OrderedDict >>> L = ['a','man', 'and', 'a', 'woman'] >>> list(OrderedDict.fromkeys(L)) ['a', 'man', 'and', 'woman'] ```
Remove repeated item in list in python
[ "", "python", "list", "" ]
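For completeness, the order-preserving variant can also be written by hand with a `seen` set, and on Python 3.7+ a plain `dict.fromkeys` works too, since dicts preserve insertion order. A quick sketch:

```python
def dedupe(items):
    # order-preserving: keep only the first occurrence of each item
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

words = ['a', 'man', 'and', 'a', 'woman']
assert dedupe(words) == ['a', 'man', 'and', 'woman']
# Python 3.7+: plain dict keeps insertion order
assert list(dict.fromkeys(words)) == ['a', 'man', 'and', 'woman']
```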
I have this script that scans a csv file and if the value in the first `column == 200` it saves that row into an array. The problem is, I need to save that row and the next 10 rows in that same array. What can I add to the script so it does that? I tried to do `for row in len(10):` but I get an error. ``` p = csv.reader(open('file.csv'), delimiter=';') a=[0]*2881 a = numpy.array(a, dtype=dict) for row in p: if row[0]=="200": a=row break print a ```
It's hard to figure out exactly what you want, but I'm going to try to guess. Maybe you want `a` to be a `list` of the 11 rows from the CSV starting from the first one with `200` in the first column: ``` import csv import itertools p = csv.reader(open('file.csv'), delimiter=';') for row in p: if row[0]=="200": a = [row] + list(itertools.islice(p, 10)) break ``` In English: `a` is a list of the found row, plus a list of the next 10 rows. If you want that to be stored in a `numpy` array instead of a `list`, it's much easier to construct the array from the list after this loop than to construct the array in advance and try to add to it on the fly.
Perhaps something like this? ``` #!/usr/local/pypy-1.9/bin/pypy import csv def gen(): with open('file.csv', 'rb') as file_: reader = csv.reader(file_, delimiter=';') state = 'no feed' for row in reader: if row[0]=="200": yield row state = 'feed' count = 1 elif state == 'feed': if count < 10: yield row count += 1 if count >= 10: state = 'no feed' count = 0 print list(gen()) ```
CSV file into an array
[ "", "python", "csv", "" ]
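The `itertools.islice` trick in the accepted answer works on any iterator, not just `csv.reader`. A small self-contained sketch, with an in-memory list of rows standing in for the CSV file:

```python
from itertools import islice

rows = [["100", "x"], ["200", "y"], ["201", "z"], ["202", "w"]]

def take_match_plus_n(rows, key, n):
    # scan until the first row whose first column equals key,
    # then keep that row plus up to n rows that follow it
    it = iter(rows)
    for row in it:
        if row[0] == key:
            # islice continues from where the for-loop left off,
            # because both share the same iterator
            return [row] + list(islice(it, n))
    return []

assert take_match_plus_n(rows, "200", 10) == [["200", "y"], ["201", "z"], ["202", "w"]]
```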
Why is it that in the following code, using a class variable as a method pointer results in unbound method error, while using an ordinary variable works fine: ``` class Cmd: cmd = None @staticmethod def cmdOne(): print 'cmd one' @staticmethod def cmdTwo(): print 'cmd two' def main(): cmd = Cmd.cmdOne cmd() # works fine Cmd.cmd = Cmd.cmdOne Cmd.cmd() # unbound error !! if __name__=="__main__": main() ``` The full error: ``` TypeError: unbound method cmdOne() must be called with Cmd instance as first argument (got nothing instead) ```
I like to view this behaviour from the "bottom up". A function in Python acts as a "[descriptor object](http://docs.python.org/2/reference/datamodel.html#implementing-descriptors)". As such, it has a `__get__()` method. A read access to a class attribute which has such a `__get__()` method is "redirected" to this method. An attribute access to the class is executed as `attribute.__get__(None, containing_class)`, while an attribute access to the instance is mapped to `attribute.__get__(instance, containing_class)`. A function's `__get__()` method's task is to wrap the function in a method object which wraps away the `self` parameter - for the case of an attribute access to the instance. This is called a bound method. On a class attribute access on 2.x, a function's `__get__()` returns an unbound method wrapper, while, as I [learned today](https://stackoverflow.com/a/16244344/296974), on 3.x, it returns itself. (Note that the `__get__()` mechanism still exists in 3.x, but a function just returns itself.) That's nearly the same, if you look at how it is called, but an unbound method wrapper additionally checks for the correct type of the `self` argument. A `staticmethod()` call just creates an object whose `__get__()` call is designed to return the originally given object so that it undoes the described behaviour. That's how [HYRY's trick](https://stackoverflow.com/a/16229814/296974) works: the attribute access undoes the `staticmethod()` wrapping, the call does it again so that the "new" attribute has the same status as the old one, although in this case, `staticmethod()` seems to be applied twice (but really isn't). (BTW: It even works in this weird context: ``` s = staticmethod(8) t = s.__get__(None, 2) # gives 8 ``` although `8` is not a function and `2` is not a class.) In your question, you have two situations: ``` cmd = Cmd.cmdOne cmd() # works fine ``` accesses the class and asks for its `cmdOne` attribute, a `staticmethod()` object.
This is queried via its `__get__()` and returns the original function, which is then called. That's why it works fine. ``` Cmd.cmd = Cmd.cmdOne Cmd.cmd() # unbound error ``` does the same, but then assigns this function to `Cmd.cmd`. The next line is an attribute access - which does, again, the `__get__()` call to the function itself and thus returns an unbound method, which must be called with a correct `self` object as first argument.
You need to use `staticmethod()` to convert the function: ``` Cmd.cmd = staticmethod(Cmd.cmdOne) ```
Pointers to static methods in Python
[ "", "python", "function-pointers", "static-methods", "" ]
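The descriptor behaviour discussed above is easy to check directly. This sketch is Python 3 (where the original unbound-method error no longer exists, since a function's `__get__` returns the function itself), but the `staticmethod()` re-wrapping shown in the second answer behaves the same on both lines:

```python
class Cmd:
    @staticmethod
    def cmd_one():
        return "cmd one"

# attribute access on the class unwraps the staticmethod object
f = Cmd.cmd_one
assert f() == "cmd one"

# re-wrapping on assignment is safe on Python 2 and 3 alike
Cmd.cmd = staticmethod(Cmd.cmd_one)
assert Cmd.cmd() == "cmd one"

# the descriptor protocol spelled out: staticmethod.__get__
# returns the originally wrapped object
sm = staticmethod(len)
assert sm.__get__(None, Cmd) is len
```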
I have defined an object (simplified below) and want to return an element as a list. Is there an easier way than doing the below? ``` class objectRecord(object): def __init__(self,fields): self.time=fields[0] self.definition=fields[1] self.name=fields[2] self.source=fields[3] self.size=fields[4] self.value=fields[5] self.increasedsize=fields[6] self.count=fields[7] ``` rest of `__init__` omitted ``` def getList(self): return [self.name,self.definition,self.name,self.source,self.size,self.value,self.increasedsize,self.count] ``` rest of class omitted
You can get a dictionary containing all object attributes: ``` def get_attrs_dict(self): return self.__dict__ ``` If you just want the list of values: ``` def get_attrs_list(self): return self.__dict__.values() ```
Here is one way to do it, via `getattr()`: ``` def getList(self): attributes = 'time definition name source'.split() return [getattr(self, attribute) for attribute in attributes] ``` You can add more attributes to the list if needed.
returning a list from an objects elements?
[ "", "python", "object", "" ]
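The `getattr` approach generalizes nicely if the field names live in one class-level tuple, so loading and exporting share a single source of truth. A trimmed-down, hypothetical version of the record class:

```python
class ObjectRecord:
    # one place controls both the loading order and the export order
    FIELDS = ('time', 'definition', 'name')

    def __init__(self, fields):
        for attr, value in zip(self.FIELDS, fields):
            setattr(self, attr, value)

    def get_list(self):
        return [getattr(self, attr) for attr in self.FIELDS]

rec = ObjectRecord(['12:00', 'noun', 'dog'])
assert rec.get_list() == ['12:00', 'noun', 'dog']
```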
For example, I want to create a function object from a string. ``` mystr = \ """ def foo(a=1): print a pass """ ``` However, using compile(mystr) will only give me a code object. I want to have a module-level function object, just as if the string were part of the source code. Can this be achieved?
`exec mystr` will execute the code you have given.
Yes use [`exec`](http://docs.python.org/2/reference/simple_stmts.html?highlight=exec%20statement#the-exec-statement): ``` >>> mystr = \ """ def foo(a=1): print a pass """ >>> exec mystr >>> foo <function foo at 0x0274F0F0> ```
Can I create a module level function object dynamically from string
[ "", "python", "" ]
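On Python 3, `exec` is a function; passing an explicit namespace dict keeps the created function out of module globals and makes it easy to retrieve. A minimal sketch of the same idea:

```python
src = """
def foo(a=1):
    return a
"""

namespace = {}
exec(src, namespace)       # compile and run the source in the given dict
foo = namespace['foo']     # pull the created function object back out

assert foo() == 1
assert foo(5) == 5
```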
I have a table showing the count of occurrences of words in a text at certain data points. Here is a simplified example: ``` Word Chapter Count dog 1 3 dog 2 7 dog 3 1 cat 2 4 ``` Notice that there is no row for 'cat' in chapters 1 and 3, because the word was not used there. I need to SELECT INTO a temp table (in prep for other aggregation, etc.) the above data, but I need 'cat' to show up for chapters 1 and 3 with a count of 0. The result should be: ``` Word Chapter Count dog 1 3 dog 2 7 dog 3 1 cat 1 0 cat 2 4 cat 3 0 ``` Any tips would be much appreciated. Thanks.
I don't know your data structure, but I think what you are trying to do is: ``` create table Chapters (Chapter int); insert Chapters values (1); insert Chapters values (2); insert Chapters values (3); create table Words (Word varchar(50)); insert into Words values ('dog'); insert into Words values ('cat'); create table Chapters_Words (Word varchar(50), Chapter int, [Count] int); insert into Chapters_Words values ('dog', 1, 3); insert into Chapters_Words values ('dog', 2, 7); insert into Chapters_Words values ('dog', 3, 1); insert into Chapters_Words values ('cat', 2, 4); select f.Word, f.Chapter, isnull(w.[Count], 0) [Count] from Chapters_Words w right join ( select w.Word, c.Chapter from Chapters c cross join Words w ) f on f.Chapter = w.Chapter and f.Word = w.Word ``` Result: ``` Word Chapter Count -------------------------------------------------- ----------- ----------- dog 1 3 dog 2 7 dog 3 1 cat 1 0 cat 2 4 cat 3 0 ```
Null does *NOT* mean zero, nor does "zero" mean null. Sigh... Having said that, the "coalesce()" function is a Pop Favorite, depending on your RDBMS implementation: [COALESCE with NULL](https://stackoverflow.com/questions/1766905/coalesce-with-null). See also [SQL ISNULL(), NVL(), IFNULL() and COALESCE() Functions](http://www.w3schools.com/sql/sql_isnull.asp)
How to Return Data Where Null means Zero?
[ "", "sql", "null", "counting", "" ]
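The cross-join-plus-left-join pattern from the accepted answer can be exercised end to end with the standard library's sqlite3 (sqlite spells `isnull` as `coalesce`; the table and column names below just mirror the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE counts (word TEXT, chapter INTEGER, cnt INTEGER);
    INSERT INTO counts VALUES
        ('dog', 1, 3), ('dog', 2, 7), ('dog', 3, 1), ('cat', 2, 4);
""")

# every (word, chapter) pair, with 0 where no row exists
rows = con.execute("""
    SELECT w.word, c.chapter, COALESCE(t.cnt, 0) AS cnt
    FROM (SELECT DISTINCT word FROM counts) w
    CROSS JOIN (SELECT DISTINCT chapter FROM counts) c
    LEFT JOIN counts t ON t.word = w.word AND t.chapter = c.chapter
    ORDER BY w.word, c.chapter
""").fetchall()
```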
I am trying to remove listings from a table where they do not exist in another table. I am able to select the lists with: ``` SELECT count(mlsnum) as oldRecords FROM coordinates WHERE mlsnum NOT IN (SELECT mlsnum FROM RETS_Listings_full) ``` BUT when I try to delete the listings, I am not able to ``` delete FROM coordinates t1 LEFT JOIN RETS_Listings_full t2 ON t2.MLSNUM = t1.MLSNUM WHERE t2.mlsnum IS NULL ``` the error says - Msg 102, Level 15, State 1, Line 1 Incorrect syntax near 't1'.
You can retrieve a list of `mlsnum` and delete that way ``` DELETE FROM coordinates WHERE mlsnum IN ( SELECT mlsnum FROM coordinates WHERE mlsnum NOT IN ( SELECT mlsnum FROM RETS_Listings_full) ) ```
> ...where they do not exist... You are quite near to the solution: use `NOT EXISTS`: ``` DELETE FROM coordinates t1 WHERE NOT EXISTS ( SELECT 1 FROM RETS_Listings_full t2 WHERE t1.mlsnum = t2.mlsnum ) ```
Delete from one table where coloumn is not in another table
[ "", "sql", "" ]
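The `NOT EXISTS` form can be tried with sqlite3 as well. Note that sqlite does not allow an alias on the DELETE target, so the correlated subquery uses the table names directly (the schema below is a stand-in for the question's tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE coordinates (mlsnum INTEGER);
    CREATE TABLE listings (mlsnum INTEGER);
    INSERT INTO coordinates VALUES (1), (2), (3);
    INSERT INTO listings VALUES (1), (3);
""")

# delete coordinates whose mlsnum has no match in listings
con.execute("""
    DELETE FROM coordinates
    WHERE NOT EXISTS (
        SELECT 1 FROM listings
        WHERE listings.mlsnum = coordinates.mlsnum
    )
""")

remaining = [r[0] for r in con.execute(
    "SELECT mlsnum FROM coordinates ORDER BY mlsnum")]
```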
I'm doing some revision for a databases exam and one of the questions is as follows: Given the table `Items` (columns: `itemid`, `description`, `unitcost`), formulate a query to find each item that costs more than the average and how much more than the average it costs. My answer so far is ``` SELECT itemid, description, unitcost - AVG(unitcost) FROM Items WHERE unitcost > (SELECT AVG(unitcost) FROM Items) ``` * Would this work? From what I understand, expressions are allowed in the list of selected columns but I'm not sure if that extends to the use of functions like `AVG()`. * Is there a cleaner, clearer way to express the same thing? Preferably using only ISO SQL constructs because my course is rather theoretical and not connected to any specific dialect. Thanks friends :)
One query that should work with almost any version of SQL (that accepts the explicit JOIN syntax) would be: ``` SELECT i.itemid, i.description, i.unitcost - a.avg_cost cost_diff FROM (SELECT AVG(unitcost) avg_cost FROM Items) a JOIN Items i ON i.unitcost > a.avg_cost ```
The original query can be turned into a valid (ANSI SQL) query with a slight change (using window functions): ``` select itemid, description, unitcost - avg(unitcost) over() as delta from items where unitcost > (select avg(unitcost) from items); ``` SQLFiddle example: <http://sqlfiddle.com/#!12/cdb33/1>
SQL query: find each item that costs more than the average and how much more
[ "", "sql", "" ]
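A quick sqlite3 check of the join form from the first answer (the sample data is made up; with these three items the average is 6.0, so only the 10.0 item qualifies):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (itemid INTEGER, description TEXT, unitcost REAL);
    INSERT INTO items VALUES (1, 'pen', 2.0), (2, 'book', 10.0), (3, 'bag', 6.0);
""")

# items strictly above the average, with the difference
rows = con.execute("""
    SELECT i.itemid, i.description, i.unitcost - a.avg_cost AS diff
    FROM (SELECT AVG(unitcost) AS avg_cost FROM items) a
    JOIN items i ON i.unitcost > a.avg_cost
""").fetchall()
```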
Let's say I want to log with this format string: ``` %(levelname)s %(asctime)s %(module)s %(funcName)s %(message)s %(user_id) ``` It can be done using this type of logging call: `logging.error('Error fetching information', extra = { 'user_id': 22 } )` This will add the current user id to logging messages for the current request. But the extra dict needs to be added to every logging call. Is there a good way to add this context in a common place in Django (e.g. middleware, or the index function of a view), so that the extra dictionary with the user id is set once and all further logging calls in the current request also log the current user?
There exists a ThreadLocal middleware on <https://github.com/jedie/django-tools/blob/master/django_tools/middlewares/ThreadLocal.py> which helps you with your issue in making the current request available everywhere. So what you need to do is add the middleware to your MIDDLEWARE\_CLASSES setting, and create a function somewhere like this: ``` from django_tools.middlewares import ThreadLocal def log_something(levelname, module, funcname, message): user = ThreadLocal.get_current_user() # do your logging here. "user" is the user object and the user id is in user.pk ```
Here's a possible approach without thread locals or middleware: in your `views.py`, say, have a dict mapping threads to requests, and a lock to serialise access to it: ``` from threading import RLock shared_data_lock = RLock() request_map = {} def set_request(request): with shared_data_lock: request_map[threading.current_thread()] = request ``` Create the following filter and attach to the handlers which need to output request-specific info: ``` import logging class RequestFilter(logging.Filter): def filter(self, record): with shared_data_lock: request = request_map.get(threading.current_thread()) if request: # Set data from the request into the record, e.g. record.user_id = request.user.id return True ``` Then, as the first statement of each view, set the request in the map: ``` def my_view(request, ...): set_request(request) # do your other view stuff here ``` With this setup, you should find that your log output contains the relevant request-specific info.
django logging set context globally per request?
[ "", "python", "django", "logging", "" ]
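The filter approach in the second answer can be sketched without Django at all: a `logging.Filter` reads the current user id from a `threading.local` that middleware (or a view) would set per request. All names here are illustrative:

```python
import logging
import threading

_context = threading.local()

class UserFilter(logging.Filter):
    def filter(self, record):
        # attach the current user id (or '-') to every record
        record.user_id = getattr(_context, 'user_id', '-')
        return True

records = []

class ListHandler(logging.Handler):
    # collect formatted records so the result is easy to inspect
    def emit(self, record):
        records.append(self.format(record))

logger = logging.getLogger('demo')
handler = ListHandler()
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s %(user_id)s'))
handler.addFilter(UserFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

_context.user_id = 22          # what middleware would do per request
logger.error('Error fetching information')
```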
What is an efficient way to strip the time from this returned dataset using jQuery? I need the time to exist in the database for ordering purposes; however, I need to display only the date. `<adopt_date>Apr 25 2013 2:41PM</adopt_date>` is an example of a result from my Web Service. Here is how I currently use the information, if it helps: `var adopted_date = $(this).find('adopt_date').text();` Where `$(this)` is the dataset. Thanks!
Try this: ``` $("adopt_date").html(function (index, value) { return value.replace(/\d{1,2}:\d{2}(AM|PM)/ig, ''); }); ``` [**FIDDLE**](http://jsfiddle.net/xDwAX/)
You can do this with plain JavaScript: ``` adopted_date = adopted_date.replace(/ [0-9]*[0-9]:[0-9][0-9]A*P*M/,''); ``` **[jsFiddle example](http://jsfiddle.net/j08691/2b7Bm/1)**
Remove time from a date/time string using jQuery
[ "", "jquery", "sql", "date", "time", "" ]
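If the formatting can happen server-side instead of in jQuery, parsing the value and re-formatting is more robust than string surgery. A Python sketch for the same `Apr 25 2013 2:41PM` shape:

```python
from datetime import datetime

raw = "Apr 25 2013 2:41PM"
# %I is the 12-hour clock field, %p the AM/PM marker
dt = datetime.strptime(raw, "%b %d %Y %I:%M%p")
date_only = dt.strftime("%b %d %Y")
assert date_only == "Apr 25 2013"
```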
Here are my tables and insert values: ``` create table student ( LastName varchar(40), FirstName varchar(40), SID number(5), SSN number(9), Career varchar(4), Program varchar(10), City varchar(40), Started number(4), primary key (SID), unique(SSN) ); create table enrolled ( StudentID number(5), CourseID number(4), Quarter varchar(6), Year number(4), primary key (StudentID, CourseID), foreign key (StudentID) references student(SID), foreign key (CourseID) references course(CID) ); insert into student values ( 'Brennigan', 'Marcus', 90421, 987654321, 'UGRD', 'COMP-GPH', 'Evanston', 2001 ); insert into student values ( 'Patel', 'Deepa', 14662, null, 'GRD', 'COMP-SCI', 'Evanston', 2003 ); insert into student values ( 'Snowdon', 'Jonathan', 08871, 123123123, 'GRD', 'INFO-SYS', 'Springfield', 2005 ); insert into student values ( 'Starck', 'Jason', 19992, 789789789, 'UGRD', 'INFO-SYS', 'Springfield', 2003 ); insert into student values ( 'Johnson', 'Peter', 32105, 123456789, 'UGRD', 'COMP-SCI', 'Chicago', 2004 ); insert into student values ( 'Winter', 'Abigail', 11035, 111111111, 'GRD', 'PHD', 'Chicago', 2003 ); insert into student values ( 'Patel', 'Prakash', 75234, null, 'UGRD', 'COMP-SCI', 'Chicago', 2001 ); insert into student values ( 'Snowdon', 'Jennifer', 93321, 321321321, 'GRD', 'COMP-SCI', 'Springfield', 2004 ); insert into enrolled values (11035, 1020, 'Fall', 2005); insert into enrolled values (11035, 1092, 'Fall', 2005); insert into enrolled values (11035, 8772, 'Spring', 2006); insert into enrolled values (75234, 3201, 'Winter', 2006); insert into enrolled values (08871, 1092, 'Fall', 2005); insert into enrolled values (90421, 8772, 'Spring', 2006); insert into enrolled values (90421, 2987, 'Spring', 2006); ``` I have the following query: ``` SELECT e.studentid FROM enrolled e FULL OUTER JOIN student s ON e.studentid = s.sid WHERE ((e.quarter = 'Fall') OR (e.quarter = 'Spring')) GROUP BY e.studentid HAVING count(e.studentid) = 1; ``` But this only returns ``` 
8871 ``` This should return ``` 8871 90421 ``` The goal of this query is: List students who were enrolled in at least one course in the Fall quarter or at least one course in the Spring quarter, but not both. You will have to add a couple of rows to test your query. Any help would be appreciated. Thanks.
Give this a try: ``` SELECT a.SID FROM student a INNER JOIN enrolled b ON a.SID = b.StudentID WHERE b.quarter IN ('Fall', 'Spring') GROUP BY a.SID HAVING COUNT(DISTINCT b.quarter) = 1 ```
``` SELECT e.studentid FROM enrolled e WHERE e.quarter IN ('Spring', 'Fall') GROUP BY e.studentid HAVING COUNT(DISTINCT quarter) = 1; ``` `90421` is enrolled in two different courses during spring, that's why your original query counted it twice. You need to count seasons, not enrollments.
SQL GROUP BY QUERY Not Returning desired output
[ "", "sql", "oracle", "" ]
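The `HAVING COUNT(DISTINCT quarter) = 1` idea can be verified with sqlite3 against the question's enrollment data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE enrolled (studentid INTEGER, courseid INTEGER,
                           quarter TEXT, year INTEGER);
    INSERT INTO enrolled VALUES
        (11035, 1020, 'Fall', 2005), (11035, 1092, 'Fall', 2005),
        (11035, 8772, 'Spring', 2006), (75234, 3201, 'Winter', 2006),
        (8871, 1092, 'Fall', 2005), (90421, 8772, 'Spring', 2006),
        (90421, 2987, 'Spring', 2006);
""")

# students in Fall or Spring, but not both
students = [r[0] for r in con.execute("""
    SELECT studentid FROM enrolled
    WHERE quarter IN ('Fall', 'Spring')
    GROUP BY studentid
    HAVING COUNT(DISTINCT quarter) = 1
    ORDER BY studentid
""")]
```

Student 11035 is excluded (enrolled in both quarters), 75234 is excluded by the WHERE clause, and 90421 is counted once despite two Spring courses, thanks to `DISTINCT`.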
Is there any way to simplify this function? Specifically, I'd like to rewrite it with fewer lines of indentation. ``` # split string (first argument) at location of separators (second argument, should be a string) def split_string(text, separators): text = ' ' + text + ' ' words = [] word = "" for char in text: if char not in separators: word += char else: if word: words.append(word) word = "" if not words: words.append(text) return words ```
Your code seems to produce ``` >>> split_string("foo.,.bar", ".,") [' foo'] ``` but your comment says ``` split_string("foo.,.bar", ".,") will return ["foo", "bar"] ``` Assuming the comment is what's intended, then I'd use `itertools.groupby` (I hate using regexes): ``` from itertools import groupby def splitter(text, separators): grouped = groupby(text, lambda c: c in separators) return [''.join(g) for k,g in grouped if not k] ``` which gives ``` >>> splitter("foo.,.bar", ".,") ['foo', 'bar'] ``` `groupby` returns an iterator over consecutive terms grouped by some function -- in this case, `lambda c: c in separators` -- of the terms.
Try using [re.split](http://docs.python.org/2/library/re.html), for example: ``` re.split('[%s]' % (separators),string) ``` The `[]` creates a regular expression character class to split on.
How can I simplify this function?
[ "", "python", "" ]
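One caveat with the `re.split` variant: the separator characters must be run through `re.escape` in case they include regex metacharacters (e.g. `]` or `^`), and a trailing `+` collapses runs of separators. A hedged sketch:

```python
import re

def split_string(text, separators):
    # escape each separator character so metacharacters are literal;
    # '+' merges consecutive separators into one split point
    pattern = '[' + re.escape(separators) + ']+'
    # drop the empty strings produced by leading/trailing separators
    return [part for part in re.split(pattern, text) if part]

assert split_string('foo.,.bar', '.,') == ['foo', 'bar']
assert split_string('a^b]c', '^]') == ['a', 'b', 'c']
```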
How do I check whether a string is empty or equal to a whitespace constant (such as '\n', '\t', etc.) in Python? This is what I used: ``` if not text or text == '\n' or text == '\t': log.debug("param 'text': " + text) return None ``` How can I do it better?
``` if not text.rstrip(): log.warning("Empty param 'text': " + text) return None ``` **str.rstrip([chars]):** The method rstrip() returns a copy of the string in which all chars have been stripped from the end of the string (default whitespace characters).
``` if not text or text.isspace(): # empty param text ``` Regarding `isspace`, it returns `True` if there are only whitespace characters and at least one character.
How to check if a string is empty or a different string constant (such as '\n', '\t') with Python
[ "", "python", "string", "" ]
I have a membership table with the following columns: ``` Member_number | StartDate | EndDate XYZ | 01-Jan-2002 | 01-March-2002 ABC | 01-Feb-2002 | 01-March-2002 ``` Basically, I want to show how many members were present in a specific month. My problem is that I don't know how to break this time span into months. How can I see this result? ``` Month | NumberOfMembers Jan | 1 Feb | 2 March | 2 ```
Given a members table that looks something like this: ``` create table dbo.members ( member_number int not null primary key , start_date datetime not null , end_date datetime not null , ) ``` And a table-valued function that generates sequences of consecutive integers, like this: ``` create function dbo.IntegerRange ( @from int , @thru int ) returns @sequence table ( value int not null primary key clustered ) as begin declare @increment int = case when @from > @thru then -1 else 1 end ; with sequence(value) as ( select value = @from union all select value + @increment from sequence where value < @thru ) insert @sequence select value from sequence order by value option ( MAXRECURSION 0 ) return end ``` A query like this should give you what you want: ``` select [year] = period.yyyy , [month] = case period.mm when 1 then 'Jan' when 2 then 'Feb' when 3 then 'Mar' when 4 then 'Apr' when 5 then 'May' when 6 then 'Jun' when 7 then 'Jul' when 8 then 'Aug' when 9 then 'Sep' when 10 then 'Oct' when 11 then 'Nov' when 12 then 'Dec' else '***' end , member_cnt = sum( case when m.member_number is not null then 1 else 0 end ) from ( select yyyy = yyyy.value , mm = mm.value , dtFrom = dateadd( month , mm.value - 1 , dateadd( year , yyyy.value - 1900 , convert(date,'') ) ) , dtThru = dateadd( day , - 1 , dateadd( month , mm.value , dateadd( year , yyyy.value - 1900 , convert(date,'') ) ) ) from dbo.IntegerRange(2000,2013) yyyy full join dbo.IntegerRange(1,12) mm on 1=1 ) period left join dbo.members m on period.dtFrom <= m.end_date and period.dtThru >= m.start_date group by period.yyyy , period.mm order by period.yyyy , period.mm ``` The first table expression in the `from` clause creates a virtual table of the periods (months, in this case, but the technique doesn't limit itself to months or even weeks) covering the reporting period: ``` from ( select yyyy = yyyy.value , mm = mm.value , dtFrom = dateadd( month , mm.value - 1 , dateadd( year , yyyy.value - 1900 ,
convert(date,'') ) ) , dtThru = dateadd( day , - 1 , dateadd( month , mm.value , dateadd( year , yyyy.value - 1900 , convert(date,'') ) ) ) from dbo.IntegerRange(2000,2013) yyyy full join dbo.IntegerRange(1,12) mm on 1=1 ) period ``` That is then joined, via a `left outer join`, ensuring that *all* periods are reported, not just those periods with active members, to the members table to collect, for each reporting period in the virtual `period` table above, the set of members who were active during the period: ``` left join dbo.members m on period.dtFrom <= m.end_date and period.dtThru >= m.start_date ``` We then group by the year and month of each period and then order the results by year/month number: ``` group by period.yyyy , period.mm order by period.yyyy , period.mm ``` In creating the results set to be returned, we return the year of the period, the month number (converted to a friendly name), and the count of active members. Note that we have to use the `sum()` aggregate function here rather than `count()` as empty periods will have a single row returned (with `null` in all columns). `Count(*)`, unlike all other aggregate functions, includes `null` values in the aggregation. `Sum()` is applied to a case expression acting as a discriminant function returning 1 or 0 identifying whether the row indicates useful or missing data: ``` select [year] = period.yyyy , [month] = case period.mm when 1 then 'Jan' when 2 then 'Feb' when 3 then 'Mar' when 4 then 'Apr' when 5 then 'May' when 6 then 'Jun' when 7 then 'Jul' when 8 then 'Aug' when 9 then 'Sep' when 10 then 'Oct' when 11 then 'Nov' when 12 then 'Dec' else '***' end , member_cnt = sum( case when m.member_number is not null then 1 else 0 end ) ``` Easy!
``` DECLARE @minMonth DATE SELECT @minMonth = MIN(StartDate) FROM Table1 DECLARE @maxMonth DATE SELECT @maxMonth = MAX(EndDate) FROM Table1 ;WITH CTE_Months AS ( SELECT @minMonth AS Mnth UNION ALL SELECT DATEADD(MM,1,Mnth) FROM CTE_Months WHERE Mnth<@MaxMonth ) SELECT Mnth AS Month, COUNT(*) as Members FROM CTE_Months m LEFT JOIN Table1 t on m.Mnth BETWEEN t.StartDate AND t.EndDate GROUP BY Mnth ``` **[SQLFiddle Demo](http://www.sqlfiddle.com/#!6/3222b/2)** CTE will find all months from min StartDate to max EndDate, if you need different min and max, just change how you get @MinMonth and @MaxMonth If you don't want to show zeros for months that possibly don't have any members, replace LEFT JOIN with INNER at the end.
Counting distinct members based on number of months in duration of time
[ "", "sql", "sql-server", "" ]
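The month-by-month logic is easy to cross-check in plain Python: enumerate the `(year, month)` pairs between the earliest start and latest end, and count the members whose range covers each month. A sketch with the question's sample data:

```python
from datetime import date

members = [
    ('XYZ', date(2002, 1, 1), date(2002, 3, 1)),
    ('ABC', date(2002, 2, 1), date(2002, 3, 1)),
]

def month_range(start, end):
    # yield (year, month) pairs from start's month to end's month inclusive
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield y, m
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

first = min(m[1] for m in members)
last = max(m[2] for m in members)

# a member counts for a month if their range overlaps it
counts = {
    (y, m): sum(1 for _, s, e in members
                if (s.year, s.month) <= (y, m) <= (e.year, e.month))
    for y, m in month_range(first, last)
}
```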
I need to generate all dates between two given dates. This works fine as long as there is just one date range. However, if I have multiple date ranges, this solution doesn't work. I have searched here as well as on asktom, but couldn't locate relevant pointers/solution. I tried both the solutions using all\_objects and CONNECT BY ROWNUM, but no luck. Here is the problem statement: [sqlfiddle](http://sqlfiddle.com/#!4/7837c/13) **Input** ``` ID START_DATE END_DATE 101 April, 01 2013 April, 10 2013 102 May, 10 2013 May, 12 2013 ``` **Output** ``` ID Dates 101 April, 01 2013 101 April, 02 2013 101 April, 03 2013 101 April, 04 2013 101 April, 05 2013 101 April, 06 2013 101 April, 07 2013 101 April, 08 2013 101 April, 09 2013 101 April, 10 2013 102 May, 10 2013 102 May, 11 2013 102 May, 12 2013 ```
``` select A.ID, A.START_DATE+delta dt from t_dates A, ( select level-1 as delta from dual connect by level-1 <= ( select max(end_date - start_date) from t_dates ) ) where A.START_DATE+delta <= A.end_date order by 1, 2 ```
Please try: ``` select distinct ID, START_DATE+level-1 DATES from dual a, TABLE_DATES b connect by level <= (END_DATE-START_DATE)+1 order by ID; ```
Generating dates between two dates
[ "", "sql", "oracle", "datetime", "" ]
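For cross-checking the SQL, the same expansion in plain Python: each row contributes one date per day from its start to its end, inclusive:

```python
from datetime import date, timedelta

rows = [
    (101, date(2013, 4, 1), date(2013, 4, 10)),
    (102, date(2013, 5, 10), date(2013, 5, 12)),
]

# one (id, date) pair per day of each range, endpoints included
expanded = [
    (row_id, start + timedelta(days=delta))
    for row_id, start, end in rows
    for delta in range((end - start).days + 1)
]
```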
I'm trying to do this ``` SELECT table1.*, table2.id as t2id FROM table1 as t1 INNER JOIN table2 as t2 ON t2.field1 = t1.field2 AND t1.field2 = 'value' AND IF(SELECT COUNT(*) FROM table2 WHERE id = 10 > 0) ``` The error says ``` You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT COUNT(*) FROM table2 WHERE id = 10 > 0) LIMIT ' at line 1 ``` I know the error is with the IF condition, as when I remove it, it works. My thinking is that the SELECT will return a null value if it was not successful, i.e. it didn't find anything with id 10 in table2.
You're missing the "then" part of your if. It has to look like IF (condition, then, else), but you're just doing the condition without any output. Try it that way: ``` AND IF((SELECT COUNT(*) FROM table2 WHERE id = 10) > 0, 'true', 'false') ```
Try removing the `IF`. ``` SELECT table1.*, table2.id as t2id FROM table1 as t1 INNER JOIN table2 as t2 ON t2.field1 = t1.field2 AND t1.field2 = 'value' AND (SELECT COUNT(*) FROM table2 WHERE id = 10) > 0; ```
Select statement within if condition not working with additional parameters
[ "", "mysql", "sql", "" ]
When we convert or cast a date in SQL, as in the code below: ``` SELECT CONVERT(VARCHAR(10), GETDATE(), 110) AS [MM-DD-YYYY] ``` it works fine. I just want to know the meaning of 110 in the above code. What does it actually do? Sometimes we use 102, 112, etc. What is the use of that number?
That number indicates the date and time style. You need to look at [CAST and CONVERT (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms187928.aspx); there you can find the meaning of all these date and time styles. Styles with century (e.g. 100, 101, etc.) mean the year will come in `yyyy` format, while styles without century (e.g. 1, 7, 10) mean the year will come in `yy` format. You can also refer to [SQL Server Date Formats](http://www.sql-server-helper.com/tips/date-formats.aspx), which lists all date formats with examples.
When you convert expressions from one type to another, in many cases there will be a need within a stored procedure or other routine to convert data from a datetime type to a varchar type. The `Convert function` is used for such things. The `CONVERT()` function can be used to display date/time data in various formats. **Syntax** ``` CONVERT(data_type(length), expression, style) ``` *Style* - style values for datetime or smalldatetime conversion to character data. Add 100 to a style value to get a four-place year that includes the century (yyyy). **Example 1** take a style value 108 which defines the following format: > hh:mm:ss Now use the above style in the following query: ``` select convert(varchar(20),GETDATE(),108) ``` **Example 2** we use the style value 107 which defines the following format: > Mon dd, yy Now use that style in the following query: ``` select convert(varchar(20),GETDATE(),107) ``` **Similarly** ``` style-106 for Day,Month,Year (26 Sep 2013) style-6 for Day, Month, Year (26 Sep 13) style-113 for Day,Month,Year, Timestamp (26 Sep 2013 14:11:53:300) ```
SELECT CONVERT(VARCHAR(10), GETDATE(), 110) what is the meaning of 110 here?
[ "", "sql", "date", "casting", "getdate", "" ]
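For reference, rough Python `strftime` equivalents of a few of those SQL Server style numbers (the mapping is illustrative, not an official correspondence):

```python
from datetime import datetime

d = datetime(2013, 4, 25, 14, 41)

assert d.strftime("%m-%d-%Y") == "04-25-2013"   # style 110: mm-dd-yyyy
assert d.strftime("%Y.%m.%d") == "2013.04.25"   # style 102: yyyy.mm.dd
assert d.strftime("%Y%m%d") == "20130425"       # style 112: yyyymmdd
```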
What is *wrong* with the following code? ``` class A: def A_M(self): pass class B: @staticmethod def C(): super(B).A_M() ``` error (Python 2.7.3): ``` >>> a = A() >>> a.B.C() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "..x.py", line 36, in C def C(): super(B).A_M() NameError: global name 'B' is not defined ``` **Edit**: the solution was simple as this: ``` class A: def A_M(self): pass class B: @staticmethod def C(): A().A_M() #use of A() instead of supper, etc. ``` **Important Note** that there is an issue with this solution. If you change the name of super class (i.e. `A`) then you will have to update all uses inside itself as `A` :)).
``` class A(object): def foo(self): print('foo') @staticmethod def bar(): print('bar') class B(object): @staticmethod def bar(obj): # A.foo is not staticmethod, you can't use A.foo(), # you need an instance. # You also can't use super here to get A, # because B is not subclass of A. obj.foo() A.foo(obj) # the same as obj.foo() # A.bar is static, you can use it without an object. A.bar() class B(A): def foo(self): # Again, B.foo shouldn't be a staticmethod, because A.foo isn't. super(B, self).foo() @staticmethod def bar(): # You have to use super(type, type) if you don't have an instance. super(B, B).bar() a, b = A(), B() a.B.bar(a) b.foo() B.bar() ``` See [this](http://www.artima.com/weblogs/viewpost.jsp?thread=236278) for details on `super(B, B)`.
You need to use a fully-qualified name. Also, in python 2.7, you need to use `(object)`, else `super(A.B)` will give `TypeError: must be type, not classobj` ``` class A(object): def A_M(self): pass class B(object): @staticmethod def C(): super(A.B).A_M() ``` Finally, `super(A.B)` is essentially `object` here. Did you mean for `B` to inherit from `A`? Or were you simply looking for `A.A_M()`?
Python: nested class with static method fails
[ "", "python", "class", "python-2.7", "super", "" ]
I have my own package in Python and I am using it very often. What is the most elegant or conventional directory where I should put my package so it can be imported without playing with PYTHONPATH or sys.path? What about site-packages, for example? `/usr/lib/python2.7/site-packages`. Is it common in Python to copy and paste the package there?
I usually put the stuff i want to have ready to import in the user site directory: ``` ~/.local/lib/pythonX.X/site-packages ``` To show the right directory for your platform, you can use `python -m site --user-site` --- edit: it will show up in `sys.path` once you create it: ``` mkdir -p "`python -m site --user-site`" ```
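The same per-user directory can be discovered from within Python itself, via the standard `site` module; a small sketch (output differs per platform, and the membership check is usually True outside virtualenvs):

```python
import site
import sys

# the per-user site-packages directory, e.g. ~/.local/lib/pythonX.Y/site-packages
user_site = site.getusersitepackages()
print(user_site)
print(user_site in sys.path)
```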
So if you're a novice like myself and your directories are not very well organized, you may want to try this method. Open your Python terminal, import a module that you know works (numpy in my case), and do the following. `import numpy` `numpy.__file__` which results in `'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/numpy/__init__.py'` The result of `numpy.__file__` is the location where you should put the Python file with your module (excluding the `numpy/__init__.py` part), so for me that would be `/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages` To do this, just go to your terminal and type `mv "location of your module" "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"` Now you should be able to import your module.
Where should I put my own python module so that it can be imported
[ "", "python", "python-2.7", "" ]
I'm trying to read from a list in Python using while loops, but I just can't seem to get it. I keep getting list index out of range errors. This is the list: ``` names = [['Bilbo', 'Baggins'], ['Gollum'], ['Tom', 'Bombadil'], ['Aragorn']] ``` Instead of using for loops, like: ``` for person in people: to_print = "" for name in person: to_print += name + " " print(to_print) ```
The equivalent `while` loop version of your `for` loop code is: ``` >>> i = 0 >>> while i < len(names): j = 0 to_print = "" while j < len(names[i]): to_print += names[i][j] + " " j += 1 print(to_print) i += 1 Bilbo Baggins Gollum Tom Bombadil Aragorn ``` And a better version of the `for` loop code is: ``` >>> for person in names: print(' '.join(person)) Bilbo Baggins Gollum Tom Bombadil Aragorn ```
To use Python lists, you don't need indexes. ``` for bunch_of_names in names: if len(bunch_of_names) > 1: # e.g. bunch_of_names = ['Bilbo', 'Baggins'] print 'First name:', bunch_of_names[0], 'Last name: ', bunch_of_names[1] else: # e.g. ['Aragorn'] print 'Name: ', bunch_of_names[0] # the only one ```
Reading from list
[ "", "python", "list", "" ]
On smaller lists of up to size N = ~1950 or so, I get the correct output... however, list sizes that are not much larger return an error rather than the expected result. My code: ``` def merge(left, right, aux=[]): if left == []: for x in right: aux.append(x) return aux elif right == []: for x in left: aux.append(x) return aux elif left[0] > right[0]: aux.append(right[0]) return merge(left, right[1:], aux) else: aux.append(left[0]) return merge(left[1:], right, aux) def msort(values): size = len(values) if size == 1: return values else: mid = size/2 return merge(msort(values[:mid]), msort(values[mid:]), []) ``` Running `msort` on these test lists gives me the expected (ascending order) output: ``` val = [x for x in range(1950, 0, -1)] val = [x for x in range(4,0,-1)] ``` e.g. [1,2,3,...,1948,1949,1950] and [1,2,3,4] However, when I run `msort` on this test case: ``` val = [x for x in range(2000,0,-1)] ``` I now receive this error (after numerous tracebacks to `merge`): ``` RuntimeError: maximum recursion depth exceeded in cmp ``` So my question is: What went wrong with my implementation here? I can't use it with lists of N ~ >=2000. Why?
Your merge function uses recursion, which has a limit. If you iterate instead of recursing, you circumvent this (note `aux=None` rather than a mutable `aux=[]` default, which would be shared between calls): ``` def merge(left, right, aux=None): if aux is None: aux = [] while left and right: if left[0] > right[0]: aux.append(right.pop(0)) else: aux.append(left.pop(0)) aux.extend(right) aux.extend(left) return aux ``` This is an example of the usage: ``` >>> merge([1,3,5,7], [3,4,5,6]) [1, 3, 3, 4, 5, 5, 6, 7] ```
The problem is that you're doing the **merge** recursively. As pointed out, it's fine to call `merge(msort(left),msort(right))`, but since your `merge` function actually calls itself to do the merging, you're hitting the recursion limit. Consider calling your `merge` function on lists of length 1000. To merge those lists, you need 2000 calls to merge (since you only add ~1 element to `aux` with each call).
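Both answers point at the recursive merge; a sketch of the full sort with an iterative, index-based merge (a variation, not the original code) that sorts the 2000-element case without recursion-depth trouble, since `msort` itself only recurses to depth ~log2(n):

```python
def merge(left, right):
    # iterative merge: no recursion, so list length no longer matters
    aux = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            aux.append(left[i])
            i += 1
        else:
            aux.append(right[j])
            j += 1
    aux.extend(left[i:])   # at most one of these two is non-empty
    aux.extend(right[j:])
    return aux

def msort(values):
    if len(values) <= 1:
        return values
    mid = len(values) // 2  # // so this also works on Python 3
    return merge(msort(values[:mid]), msort(values[mid:]))

print(msort(list(range(2000, 0, -1))) == list(range(1, 2001)))  # prints: True
```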
What's wrong with my mergesort implementation?
[ "", "python", "sorting", "recursion", "mergesort", "" ]
So I have a relatively simple query that has exactly two parameters, one of which pulls a long from a form and selects only records from a single table, where one field has that value. (It's a table of design projects and the user is selecting a designer whose projects should be listed.) If I open the form and then manually open the query it works *perfectly*. If I have a second form (which populates a listbox with the query results) try to set a recordset equal to the query results, it fails with "Run-time error '3061'. Too few parameters. Expected 1." If I set the parameter to a static integer, e.g. 3, it works fine (but is clearly useless). Why would my VBA code be unable to read text from a text field on a form, when Access itself clearly can? Here is my query: ``` SELECT [Project Request Log TABLE].Designer1, [Project Request Log TABLE].Priority, [Project Request Log TABLE].ProjectName, [Project Request Log TABLE].Manager, [Project Request Log TABLE].SME1, [Project Request Log TABLE].Priority, [Project Request Log TABLE].ProjectID FROM Designers INNER JOIN [Project Request Log TABLE] ON Designers.ID = [Project Request Log TABLE].Designer1 WHERE ((([Project Request Log TABLE].Designer1)=[Forms]![frm_selectDesigner]![txtDesignerId]) AND (([Project Request Log TABLE].PercentComplete)<>1)) ORDER BY [Project Request Log TABLE].Designer1, [Project Request Log TABLE].Priority; ``` Here is the line of VBA that gives the error: ``` Set rst_projects = dbs.OpenRecordset("qryDesignerProjectPrioritySet", dbOpenDynaset) ``` Thanks. Edit: the form on which one selects a designer opens the second form, on which the above code attempts to open a recordset. The original frm\_selectDesigner is not closed, it is hidden when one clicks OK, but remains open. Edit 2: If I include the line ``` DoCmd.OpenQuery "qryDesignerProjectPrioritySet" ``` The query opens and has the right results. 
If the very next line tries to assign the results of that query to a recordset as above, it gives the 3061 error. There must be some sort of error in how I wrote the OpenRecordset command, right?
That `OpenRecordset()` *should* be a simple basic operation; I can't understand why it's failing when `DoCmd.OpenQuery "qryDesignerProjectPrioritySet"` works. See what happens with a minimal procedure which does only enough to attempt `OpenRecordset()`. Insert the following code as a new standard module, and run Debug->Compile from the VB Editor's main menu. Assuming it compiles without error, test the sub with the `frm_selectDesigner` form open in form view. If it doesn't compile, you likely need to add a reference for DAO or ACEDAO. ``` Option Compare Database Option Explicit Public Sub test_OpenRecordset() Dim dbs As DAO.Database Dim rst_projects As DAO.Recordset Set dbs = CurrentDb Set rst_projects = dbs.OpenRecordset("qryDesignerProjectPrioritySet", dbOpenDynaset) rst_projects.Close Set rst_projects = Nothing Set dbs = Nothing End Sub ``` If it compiles and runs without error, compare that code with your failing code to see if you can spot differences such as the way the object variables are declared and assigned. If that effort doesn't lead to a solution, or if `test_OpenRecordset` also throws the same error, all I can think to suggest is [HOW TO decompile and recompile](https://stackoverflow.com/questions/3266542/ms-access-how-to-decompile-and-recompile).
you can set the parameters in code like this (you'll have to Dim/Set the query, too): ``` ... Dim prm As DAO.Parameter Set qdef = db.QueryDefs("qryName") 'Evaluate and set the query's parameters. For Each prm In qdef.Parameters prm.Value = Eval(prm.Name) Next prm Set rs = qdef.OpenRecordset ... ```
Access 2003 VBA: query works when run directly, but run-time error 3061 when run from code?
[ "", "sql", "ms-access", "vba", "ms-access-2003", "" ]
Is it possible to somehow create aliases (like Unix `alias` command) in psql? I mean, not SQL FUNCTION, but local aliases to ease manual queries?
Why not use a view? Maybe [views](http://www.postgresql.org/docs/9.2/static/sql-createview.html) will help in your case.
I don't know about any possibility. There is only workaround for *psql* based on *psql* variables, but there is lot of limits - using parameters for this queries is difficult. ``` postgres=# \set whoami 'SELECT CURRENT_USER;' postgres=# :whoami current_user -------------- pavel (1 row) ```
psql shortcut for frequently used queries? (like Unix "alias")
[ "", "sql", "postgresql", "psql", "" ]
We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added. What we have currently is an `SSIS` package to import columns that have been defined. Since we've assigned the mapping, `SSIS` only throws up an error when a column is absent. However, when a new column is added (apart from the existing ones), it doesn't get imported at all, as it is not named. This is a concern for us. What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file. I am relatively new to SSIS, so detailed help would be much appreciated. Thanks!
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination). May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row. Save that row to a RecordSet object. Any change? Send Email. The "Get Header Row" DataFlow would look like this. [Row Number](https://dba.stackexchange.com/questions/87196/how-do-i-set-up-a-derived-column-transformation-to-get-row-number-in-ssis-2014) if needed. [![enter image description here](https://i.stack.imgur.com/6lBF6.png)](https://i.stack.imgur.com/6lBF6.png) The Control Flow level would look like this. Use a [ForEach ADO RecordSet](http://www.select-sql.com/mssql/loop-through-ado-recordset-in-ssis.html) object to assign the header row value to an SSIS variable `CurrentHeader`. [![enter image description here](https://i.stack.imgur.com/DfWWN.png)](https://i.stack.imgur.com/DfWWN.png) Above, the precedence constraints (fx icons) of ``` [@ExpectedHeader] == [@CurrentHeader] [@ExpectedHeader] != [@CurrentHeader] ``` determine whether you load data or send email. Hope this helps!
Exactly how to code this will depend on the rules for the flat file layout, but I would approach this by writing a script task that [reads the flat file using the file system object and a StreamReader object](https://www.google.com/search?q=recursive%20query&rls=com.microsoft%3aen-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1&safe=images#rls=com.microsoft%3aen-us&sclient=psy-ab&q=flat%20file%20stream%20reader%20vb.net&oq=flat%20file%20stream%20reader%20vb.net&gs_l=serp.12...167141.168001.1.170157.7.6.0.0.0.0.1406.1406.7-1.1.0...0.0...1c.1.11.psy-ab.PbuSj_ppZ60&pbx=1&bav=on.2,or.r_cp.r_qf.&bvm=bv.45921128,d.bGE&fp=8ffa1c3a07024faf&biw=1280&bih=870), and looks at the columns, which are hopefully named in the first line of the file. However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data transformation task to accomodate new columns. It will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it, and save yourself the trouble of pre-reading the column list.
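The header comparison both answers describe is language-agnostic; here is the same idea sketched in plain Python (column names, delimiter, and file layout are made-up examples — the real Script Task would read the vendor file instead):

```python
import csv
import io

EXPECTED = ['sku', 'title', 'price']  # the known, mapped column list

def header_changed(fileobj, expected, delimiter=','):
    # read only the first row of the flat file and compare it
    # to the expected layout
    header = next(csv.reader(fileobj, delimiter=delimiter))
    return header != expected

sample = io.StringIO('sku,title,price,new_col\n1,Widget,9.99,x\n')
print(header_changed(sample, EXPECTED))  # prints: True
```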
Get list of columns of source flat file in SSIS
[ "", "sql", "ssis", "" ]
I have a `products` table and a `sales` table that keeps a record of how many items a given product sold during each date. Of course, not all products have sales every day. I need to generate a report that tells me how many **consecutive** days a product has had sales (from the latest date to the past) and how many items it sold during those days only. I'd like to tell you how many things I've tried so far, but the only successful (and slow, recursive) ones are solutions inside my application and not inside SQL, which is what I want. I also have browsed several similar questions on SO but I haven't found one that lets me have a clear idea of what I really need. I've set up a [SQLFiddle here](http://sqlfiddle.com/#!9/00240/2) to show you what I'm talking about. There you will see the only query I can think of, which doesn't give me the result I need. I also added comments there showing what the result of the query should be. I hope someone here knows how to accomplish that. Thanks in advance for any comments! Francisco
<http://sqlfiddle.com/#!2/20108/1> Here is a stored procedure that does the job ``` CREATE PROCEDURE myProc() BEGIN -- Drop and create the temp table DROP TABLE IF EXISTS reached; CREATE TABLE reached ( sku CHAR(32) PRIMARY KEY, record_date date, nb int, total int) ENGINE=HEAP; -- Initial insert, the starting point is the MAX sales record_date of each product INSERT INTO reached SELECT products.sku, max(sales.record_date), 0, 0 FROM products join sales on sales.sku = products.sku group by products.sku; -- loop until no more rows are updated iterloop: LOOP -- Update the temptable with the values of the date - 1 row if found update reached join sales on sales.sku=reached.sku and sales.record_date=reached.record_date set reached.record_date = reached.record_date - INTERVAL 1 day, reached.nb=reached.nb+1, reached.total=reached.total + sales.items; -- If no more rows are updated, it means we hit the longest days_sold IF ROW_COUNT() = 0 THEN LEAVE iterloop; END IF; END LOOP iterloop; -- select the results of the temp table SELECT products.sku, products.title, products.price, reached.total as sales, reached.nb as days_sold from reached join products on products.sku=reached.sku; END// ``` Then you just have to do ``` call myProc() ```
A solution in pure SQL, without a stored procedure: [Fiddle](http://sqlfiddle.com/#!9/00240/86/0) ``` SELECT sku , COUNT(1) AS consecutive_days , SUM(items) AS items FROM ( SELECT sku , items -- generate a new guid for each group of consecutive dates -- i.e. starting where day_before is null , @guid := IF(@sku = sku and day_before IS NULL, UUID(), @guid) AS uuid , @sku := sku AS dummy_sku FROM ( SELECT currents.sku , befores.record_date as day_before , currents.items FROM sales currents LEFT JOIN sales befores ON currents.sku = befores.sku AND currents.record_date = befores.record_date + INTERVAL 1 DAY ORDER BY currents.sku, currents.record_date ) AS main_join CROSS JOIN (SELECT @sku:=0) foo_sku CROSS JOIN (SELECT @guid:=UUID()) foo_guid ) AS result_to_group GROUP BY uuid, sku ``` The query is really not that hard: declare variables via `cross join (SELECT @type:=0) type`; then, in the selects, you can set variable values row by row. This is necessary to simulate a rank function.
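As a cross-check of the SQL above, the same "latest consecutive-day streak" logic can be sketched in plain Python; the dict-of-dates data shape for a single sku is an assumption made for illustration:

```python
from datetime import date, timedelta

def latest_streak(sales):
    # sales: {date: items_sold} for one sku (an assumed shape)
    d = max(sales)                 # latest date with a sale
    total = count = 0
    while d in sales:              # walk backwards day by day
        total += sales[d]
        count += 1
        d -= timedelta(days=1)
    return count, total

sales = {date(2023, 5, 3): 4, date(2023, 5, 2): 1, date(2023, 4, 30): 7}
print(latest_streak(sales))  # prints: (2, 5)
```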
How to GROUP BY consecutive data (date in this case)
[ "", "mysql", "sql", "group-by", "" ]
I am currently attempting to write a code that iterates through a sequence (x), searching for a word that the user inputs. Below is the code. ``` x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] i = -1 while True: s = input("Enter a word to search: ") if s != "Quit": try: while i < len(x): i = x.index(s, i+1) print("found at index", i) except ValueError: print("Not found") i = -1 else: break print("Goodbye") ``` The above code works fine through the iteration, but will always return the ValueError after iterating through the sequence. I attempted to rectify this by adding: ``` while i < len(x): ``` thinking the iteration would stop once it reached the end of the sequence, but it continues to throw the exception after returning the found values from the sequence. For instance, if user enters "9", what is returned is: ``` found at index 8 Not found ```
You are trying to find *all* occurrences, but you won't find a next occurrence past the last: ``` >>> 'abc'.index('a', 0) 0 >>> 'abc'.index('a', 1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: substring not found ``` You'd need to set a flag to indicate that you found at least one match, because the exception is going to be thrown for any number of matches: ``` i = -1 try: found = False while i < len(x): i = x.index(s, i+1) print("found at index", i) found = True except ValueError: if not found: print("Not found") ``` but if you are going to scan the whole `x` list *anyway*, just use a filter: ``` matches = [i for i, value in enumerate(x) if value == s] if not matches: print('Not found') for match in matches: print("found at index", match) ``` If you only need to find *one* match, the first, you don't need to use a loop at all: ``` try: print("found at index", x.index(s)) except ValueError: print("not found") ``` as there is no need to loop over the starting position then.
Get a count first, and then get the indexes of the occurrences, if you want the locations of multiple occurrences in the list: ``` x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] n = 0 while True: s = input("Enter a word to search: ") if s != "Quit": n = x.count(s) if n == 0: print('not found') else: i = -1 while n > 0: i = x.index(s, i+1) print('found at index:', i) n = n - 1 else: break print("Goodbye") ``` Though there probably is a simpler way also.
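If a reusable helper is acceptable, the whole-list scan can also be written as a small generator (an illustrative helper, not taken from either answer), which avoids `index()` and the `ValueError` handling entirely:

```python
def indices(seq, target):
    # yield every position of `target`; nothing to catch
    for i, value in enumerate(seq):
        if value == target:
            yield i

x = [1, 2, 3, 9, 5, 9]
print(list(indices(x, 9)))  # prints: [3, 5]
```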
Exception caught even value is true?
[ "", "python", "exception", "iteration", "python-3.3", "" ]
I wanted to update pip on my main install of Python, specifically to get the list command, which also includes the list-updates capability. So I ran: ``` sudo pip install --upgrade pip ``` All looked good on the install, but then I went to run pip and got this: (end of install included if it helps) ``` Installing pip script to /usr/local/bin Installing pip-2.7 script to /usr/local/bin Successfully installed pip Cleaning up... tom@tom-sam:~$ pip list -o bash: /usr/bin/pip: No such file or directory tom@tom-sam:~$ pip bash: /usr/bin/pip: No such file or directory ``` Somewhat obviously I'm hosed since this is my system install of Python. I read a few answers here but have not been able to determine the easiest fix.
I had the same message on Linux. ``` /usr/bin/pip: No such file or directory ``` but then checked which pip was being called. ``` $ which pip /usr/local/bin/pip ``` On my Debian wheezy machine I fixed it by doing the following... ``` /usr/local/bin/pip uninstall pip apt-get remove python-pip apt-get install python-pip ``` ==================================== This was due to a mixup: installing with `apt-get` and updating with `pip install -U pip`. These also installed libraries in 2 different places, which caused problems for me. ``` /usr/lib/python2.7/dist-packages /usr/local/lib/python2.7/dist-packages ```
Before getting happy with apt-get removes and installs, it's worthwhile to reset your bash cache: ``` hash -r ``` Bash caches the path to pip from the distribution install (apt-get), which is /usr/bin/pip. If you're still in the same shell session, then due to the cache, after updating pip with pip itself your shell will still look in /usr/bin/ and not /usr/local/bin/. For example: ``` $apt-get install python-pip $which pip /usr/bin/pip $pip install -U pip $which pip /usr/bin/pip $hash -r $which pip /usr/local/bin/pip ```
I screwed up the system version of Python Pip on Ubuntu 12.10
[ "", "python", "pip", "ubuntu-12.10", "" ]
We have a CMS system that writes html content blocks into a SQL Server database. I know the table name and field name where these html content blocks reside. Some html contains links (`<a>` tags) to pdf files. Here is a fragment: ``` <p>A deferred tuition payment plan, or view the <a href="/uploadedFiles/Tuition-Reimbursement-Deferred.pdf" target="_blank">list</a>.</p> ``` I need to extract pdf file names from all such html content blocks. At the end I need to get a list: ``` Tuition-Reimbursement-Deferred.pdf Some-other-file.pdf ``` of all pdf file names from that field. Any help is appreciated. Thanks. **UPDATE** I have received many replies, thank you so much, but I forgot to mention that we are still using SQL Server 2000 here. So, this had to be done using SQL Server 2000 SQL.
Well it's not pretty but this works using standard Transact-SQL: ``` SELECT CASE WHEN CHARINDEX('.pdf', html) > 0 THEN SUBSTRING( html, CHARINDEX('.pdf', html) - PATINDEX( '%["/]%', REVERSE(SUBSTRING(html, 0, CHARINDEX('.pdf', html)))) + 1, PATINDEX( '%["/]%', REVERSE(SUBSTRING(html, 0, CHARINDEX('.pdf', html)))) + 3) ELSE NULL END AS filename FROM mytable ``` Could expand the list of delimiting characters before the filename from `["/]` (which matches *either* a quotation mark or slash) if you like. See [SQL Fiddle demo](http://sqlfiddle.com/#!6/92764/6)
**Create this function**: ``` create function dbo.extract_filenames_from_a_tags (@s nvarchar(max)) returns @res table (pdf nvarchar(max)) as begin -- assumes there are no single quotes or double quotes in the PDF filename declare @i int, @j int, @k int, @tmp nvarchar(max); set @i = charindex(N'.pdf', @s); while @i > 0 begin select @tmp = left(@s, @i+3); select @j = charindex('/', reverse(@tmp)); -- directory delimiter select @k = charindex('"', reverse(@tmp)); -- start of href if @j = 0 or (@k > 0 and @k < @j) set @j = @k; select @k = charindex('''', reverse(@tmp)); -- start of href (single-quote*) if @j = 0 or (@k > 0 and @k < @j) set @j = @k; insert @res values (substring(@tmp, len(@tmp)-@j+2, len(@tmp))); select @s = stuff(@s, 1, @i+4, ''); -- remove up to ".pdf" set @i = charindex(N'.pdf', @s); end return end GO ``` **A demo on using that function**: ``` declare @t table (html varchar(max)); insert @t values (' <p>A deferred tuition payment plan, or view the <a href="/uploadedFiles/Tuition-Reimbursement-Deferred.pdf" target="_blank">list</a>.</p>'), (' <p>A deferred tuition payment plan, or view the <a href="Two files here-Reimbursement-Deferred.pdf" target="_blank">list</a>.</p>And I use single quotes <a href=''/look/path/The second file.pdf'' target="_blank">list</a>'); select t.*, p.pdf from @t t cross apply dbo.extract_filenames_from_a_tags(html) p; ``` **Results**: ``` |HTML | PDF | -------------------------------------------------------------------- |<p>A deferred tui.... | Tuition-Reimbursement-Deferred.pdf | |<p>A deferred tui.... | Two files here-Reimbursement-Deferred.pdf | |<p>A deferred tui.... | The second file.pdf | ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!3/0039c/1/0)
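For sanity-checking the T-SQL, the same extraction is a one-liner in Python with a regex whose character class mirrors the `["/]` delimiter idea used above; like the SQL, the pattern assumes filenames contain no quotes or slashes:

```python
import re

html = ('<p>A deferred tuition payment plan, or view the '
        '<a href="/uploadedFiles/Tuition-Reimbursement-Deferred.pdf" '
        'target="_blank">list</a>.</p>')

# a run of characters that are not quotes or slashes, ending in .pdf
pdfs = re.findall(r"[^\"'/]+\.pdf", html)
print(pdfs)  # prints: ['Tuition-Reimbursement-Deferred.pdf']
```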
How to extract file names from a field that contains html content in sql server?
[ "", "sql", "sql-server", "sql-server-2000", "" ]
I am new to Oracle. I tried a query in `MySQL` and it works fine ``` select segment1, description, organization_id from apps.mtl_system_items_b where organization_id IN (110 , 268, 448) GROUP BY segment1 HAVING COUNT(DISTINCT organization_id) = 3 ``` But in `Oracle`, it says `not a group by expression`. What's wrong? EDIT:- I want the result like this ``` organisation_id segment1 110 306145244 268 306145244 448 306145244 110 444444444 268 444444444 448 444444444 ```
You have ungrouped, unaggregated expressions in your `SELECT` list. `MySQL` allows them (without specifying which of the many possible values it will return), but `Oracle` does not. By definition, your query will return only segments with 3 different organizations. So which of those organizations do you want to return? If you only want to return the segment, just use this: ``` SELECT segment1 FROM apps.mtl_system_items_b WHERE organization_id IN (110 , 268, 448) GROUP BY segment1 HAVING COUNT(DISTINCT organization_id) = 3 ``` If you want all items with segments belonging to three organizations, use this: ``` SELECT segment1, description, organization_id FROM ( SELECT i.*, COUNT(DISTINCT organization_id) OVER (PARTITION BY segment1) cnt FROM apps.mtl_system_items_b i WHERE organization_id IN (110, 268, 448) ) WHERE cnt = 3 ```
Sorry for the quick answer ``` select segment1, description, organization_id from apps.mtl_system_items_b where organization_id IN (110 , 268, 448) GROUP BY segment1, description, organization_id HAVING COUNT(DISTINCT organization_id) =3 ``` Maybe this explanation could help you gain some understanding of GROUP BY. The SQL GROUP BY clause can be used in an SQL SELECT statement to collect data across multiple records and group the results by one or more columns. The syntax for the SQL GROUP BY clause is: ``` SELECT column1, column2, ... column_n, aggregate_function (expression) FROM tables WHERE predicates GROUP BY column1, column2, ... column_n; ``` aggregate\_function can be a function such as the SQL SUM function, SQL COUNT function, SQL MIN function, or SQL MAX function. Since you have the aggregate\_function, you need to group by each of the fields you display or select. Please refer to the in-depth explanation below > <http://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqlj32654.html>
Oracle says not a group by expresssion
[ "", "sql", "oracle", "" ]
I am making a very simple game where the bird (player) has to dodge the rock and if it gets hit by the rock you lose. I am trying to use pygame.sprite.collide\_rect() to tell if they touched but I cant seem to figure how to correctly use it. Here is my code: ``` import pygame import os, sys import random import time img_path = os.path.join('C:\Python27', 'player.png') img_path2 = os.path.join('C:\Python27', 'rock.png') class Bird(object): def __init__(self): self.image_s = pygame.image.load(img_path) self.image_b = self.image_s.get_rect() self.x = 0 self.y = 0 def handle_keys(self): key = pygame.key.get_pressed() dist = 2 if key[pygame.K_DOWN]: self.y += dist elif key[pygame.K_UP]: self.y -= dist if key[pygame.K_RIGHT]: self.x += dist elif key[pygame.K_LEFT]: self.x -= dist def draw(self, surface): surface.blit(self.image, (self.x, self.y)) def background(self, surface): bg = os.path.join('C:\Python27', 'bg.png') self.image2 = pygame.image.load(bg) surface.blit(self.image2, (0,0)) class Rock(object): def __init__(self, x=640, y=0,): self.image_s = pygame.image.load(img_path2) self.image_b = self.image_s.get_rect() self.x = x self.y = y dist = 10 self.dist = dist def rock(self): dist = 10 self.x -=dist def rock_draw(self, surface): surface.blit(self.image, (self.x, self.y)) def checkCollision(sprite1, sprite2): col = pygame.sprite.collide_rect(sprite1, sprite2) if col == True: sys.exit() pygame.init() screen = pygame.display.set_mode((640, 200)) bird = Bird() rock = Rock() clock = pygame.time.Clock() running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() running = False if rock.x < 0: y = random.randint(10, 190) rock = Rock(640, y) rock.checkCollision(bird.image_b, rock.image_b) bird.handle_keys() rock.rock() screen.fill((255,255,255)) bird.background(screen) bird.draw(screen) rock.rock_draw(screen) pygame.display.update() clock.tick(40) ``` When I try to run it it tells me it only takes 2 arguments and I gave 
three. When I try to fix that, I get all kinds of different error messages.
``` def checkCollision(sprite1, sprite2): col = pygame.sprite.collide_rect(sprite1, sprite2) if col == True: sys.exit() ``` should be ``` def checkCollision(self, sprite1, sprite2): col = pygame.sprite.collide_rect(sprite1, sprite2) if col == True: sys.exit() ``` since it's a method bound to an object.
You have this: ``` col = pygame.sprite.collide_rect(sprite1, sprite2) ``` But an easier way to do this would be to simply use colliderect, which is a method of Rect. Try this: ``` col = sprite1.rect.colliderect(sprite2.rect) ```
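For intuition, the axis-aligned overlap test that `colliderect` performs can be sketched in plain Python; the tuple-based rects are an illustrative stand-in for `pygame.Rect`, and, as in pygame, rects that merely share an edge don't count as colliding:

```python
def rects_collide(r1, r2):
    # each rect is (x, y, width, height); shared edges don't count
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1

print(rects_collide((0, 0, 10, 10), (5, 5, 10, 10)))   # prints: True
print(rects_collide((0, 0, 10, 10), (20, 0, 10, 10)))  # prints: False
```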
How to Use Sprite Collide in Pygame
[ "", "python", "pygame", "sprite", "rect", "" ]
How can I set a conditional filter in a SQL WHERE clause? For example, I have a parameter @ID with the following procedure ``` SELECT * FROM myTable WHERE Column1 = 'test' AND Column2 = @ID ``` However, If @ID = -1 I don't want the last part of the SQL (AND Column2 = @ID) included I realize I can make an if statement with 2 separate queries, however this is a large script and has this same issue multiple times, so I was hoping there was a better way than nearly duplicating several queries
This is ok for T-SQL: ``` SELECT * FROM myTable WHERE Column1 = 'test' AND (@ID = -1 OR Column2 = @ID) ```
Just include the condition in your SQL as an OR, note the brackets ``` SELECT * FROM myTable WHERE Column1 = 'test' AND (@ID = -1 OR Column2 = @ID) ```
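The same optional-filter pattern can be exercised outside SQL Server; here is a sketch using Python's built-in sqlite3, with a bound parameter standing in for `@ID` (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (column1 TEXT, column2 INTEGER)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [('test', 1), ('test', 2), ('other', 1)])

# the second/third placeholders play the role of @ID; -1 disables the filter
query = ('SELECT column2 FROM t WHERE column1 = ? '
         'AND (? = -1 OR column2 = ?) ORDER BY column2')
print(conn.execute(query, ('test', -1, -1)).fetchall())  # prints: [(1,), (2,)]
print(conn.execute(query, ('test', 2, 2)).fetchall())    # prints: [(2,)]
```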
Conditional filter in WHERE clause
[ "", "sql", "" ]
It's written [in the doc](https://docs.djangoproject.com/en/1.5/topics/auth/customizing/#custom-users-and-signals) that: > Another limitation of custom User models is that you can’t use django.contrib.auth.get\_user\_model() as the sender or target of a signal handler. Instead, you must register the handler with the resulting User model. See Signals for more information on registering an sending signals. I guess it means you can do the following: ``` from django.contrib.auth import get_user_model User = get_user_model() @receiver(post_save, sender=User) def user_saved(sender=None, instance=None, **kwargs): # something ``` Isn't it? I'm just wondering if I understand well (I don't understand why they say it's a "limitation", but whatever, just want to check).
That should work. I think the doc just means that you must register the handler with the resulting concrete User model, rather than passing `get_user_model()` itself: > as the **sender** or **target** of a signal handler. Instead, **you must register** the handler **with the resulting User model**
It's because the object hasn't been "installed" when the signal is being created so get\_user\_model() can't find the object that it needs to attach the signal handler. See [this bug](https://code.djangoproject.com/ticket/19218) for the details on how it was found and what the problem is. Your example wouldn't work because the `get_user_model()` call would fail for this reason. For now the only way to make a signal handler work with a custom User class is to name it directly without using `get_user_model()`, eg ``` @receiver(post_save, sender=myapp.MyUserModel) # can't use get_user_model() here def user_saved(sender=None, instance=None, **kwargs): # something ``` Your coding style could also do with some work: when you run `User = get_user_model()`, that creates a variable called `User` with its value set to the results of the `get_user_model()` function call. Python convention (and that of most other languages) is for normal variables to start with a lower-case letter and for classes to start with an upper case letter. So `user = get_user_model()` and then using the `user` variable later on would make much more sense to anyone reading your code and would help to avoid confusion in the future.
Django 1.5 custom user model - signals limitation
[ "", "python", "django", "django-users", "django-1.5", "" ]
I have a situation where it is reasonable to have a division by 0.0 or by -0.0 where I would expect to see +Inf and -Inf, respectively, as results. It seems that Python enjoys throwing a ``` ZeroDivisionError: float division by zero ``` in either case. Obviously, I figured that I could simply wrap this with a test for 0.0. However, I can't find a way to distinguish between +0.0 and -0.0. (FYI you can easily get a -0.0 by typing it or via common calculations such as -1.0 \* 0.0). IEEE handles this all very nicely, but Python seems to take pains to hide the well-thought-out IEEE behavior. In fact, 0.0 == -0.0 is itself an IEEE feature, so Python's behavior seriously breaks things. It works perfectly well in C, Java, Tcl, and even JavaScript. Suggestions?
``` from math import copysign def divide(numerator, denominator): if denominator == 0.0: return copysign(float('inf'), denominator) return numerator / denominator >>> divide(1, -0.0) -inf >>> divide(1, 0) inf ```
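Since `-0.0 == 0.0` compares equal, `copysign` is also the standard way to test *which* zero you have, which is the same idea the accepted `divide()` relies on. A small sketch (the helper name is mine):

```python
from math import copysign

def sign_of_zero(x):
    """Return -1.0 for negative zero (or any negative value), 1.0 otherwise."""
    return copysign(1.0, x)

print(sign_of_zero(-0.0))  # -1.0
print(sign_of_zero(0.0))   # 1.0
print(-0.0 == 0.0)         # True -- equality alone can't tell them apart
```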
I completely agree with @Mark Ransom, except that I would use `try` instead:

```
from math import copysign

def f(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return copysign(float('inf'), b)
```

The reason I recommend this is that if you are performing this function many times, you don't have to waste time on each iteration checking whether the value is zero before you attempt the division.

**EDIT**: I have compared the speed of the `try` version against the `if` version:

```
def g(a, b):
    if b == 0:
        return copysign(float('inf'), b)
    else:
        return a / b
```

Here are the tests:

```
import time

s = time.time()
[f(10, x) for x in xrange(-1000000, 1000000, 1)]
print 'try:', time.time()-s

s = time.time()
[g(10, x) for x in xrange(-1000000, 1000000, 1)]
print 'if:', time.time()-s
```

Here is the result:

```
try: 0.573683023453
if: 0.610251903534
```

This indicates the `try` method is faster, at least on my machine.
How to get Python division by -0.0 and 0.0 to result in -Inf and Inf, respectively?
[ "", "python", "python-2.7", "ieee-754", "divide-by-zero", "" ]
``` class MainHandler(BaseHandler): @tornado.web.authenticated def get(self): self.render("index.html", messages=MessageMixin.cache) ``` So the `MainHandler` does not pass `request` or `current_user` to `index.html`. But in `index.html` I tried `<p>{{ current_user }}</p> <p>{{ request }}</p>` and then there's a lot of output generated. So is this some kind of 'global variable' in Tornado ?
Several things are given to you for free in Tornado templates. These variables do not need to be passed in - this is what you are seeing with `current_user` and `request`.

Here is a [list](http://gavinroy.com/posts/tornado-tip-of-the-day-38-variables-exposed-i.html) of all the variables you get by default.
* The secret is in the source code!
* `tornado.web` has a function named `get_template_namespace`, which you can even override.
* Code detail:

```
def get_template_namespace(self):
    """ Returns a dictionary to be used as the default template namespace.

    May be overridden by subclasses to add or modify values.

    The results of this method will be combined with additional
    defaults in the tornado.template module and keyword arguments
    to render or render_string.
    """
    namespace = dict(
        handler=self,
        request=self.request,
        current_user=self.current_user,
        locale=self.locale,
        _=self.locale.translate,
        pgettext=self.locale.pgettext,
        static_url=self.static_url,
        xsrf_form_html=self.xsrf_form_html,
        reverse_url=self.reverse_url
    )
    namespace.update(self.ui)
    return namespace
```
Tornado - 'Global variables' in tornado?
[ "", "python", "web", "tornado", "" ]
Is there an elegant or pythonic way to exclude entries containing duplicate values when using `zip`? As an example: ``` >>> list1 = [0, 1] >>> list2 = [0, 2] >>> zip(list1, list2) [(0, 0), (1, 2)] ``` I would like to have just the second element `[(1, 2)]`. Currently, I do `[x for x in zip(list1, list2) if len(set(x)) == len(x)]` but this feels a bit tedious. Is there a better way to do this? --- EDIT : And how do I scale this to the general case, where there are more than two lists? ``` >>> list1 = [0, 1] >>> list2 = [0, 2] >>> list3 = [0, 3] >>> ... >>> zip(list1, list2, list3, ...) ``` If any entry contains *any* duplicate values, it should be discarded (not every value in the tuple has to be equal).
What about ``` [(x,y) for (x,y) in zip(list1, list2) if x != y] ``` General case: ``` [x for x in zip(list1, list2, ... listn) if not all(z == x[0] for z in x[1:])] ``` That finds duplicates where every element is equal. If only one pair needs to be equal to count as a duplicate, you can use the set method you already mentioned in your question provided you have hashable types. If you have unhashable types, the (interesting) question of identifying duplicates has been answered previously [here](https://stackoverflow.com/q/2213923/674039).
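If you need the "discard on *any* duplicate" semantics for arbitrarily many lists and your values are hashable, the set test from the question drops straight into a generator; a sketch (the function name is mine):

```python
def zip_distinct(*sequences):
    """Yield only the zipped tuples whose elements are all distinct."""
    for tup in zip(*sequences):
        if len(set(tup)) == len(tup):
            yield tup

print(list(zip_distinct([0, 1], [0, 2], [0, 3])))  # [(1, 2, 3)]
```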
You only have 2-value tuples, so you can compare the first value against the second. The list comprehension is the best option: ``` [x for x in zip(list1, list2) if x[0] != x[1]] ``` For the general case, provided your values are all hashable, you already have the best option. If you have non-hashable types you'd need to special-case the 'unique' handling already, so that is outside the scope here.
Zip without duplicate values in the resulting tuples
[ "", "python", "" ]
I was surprised that I couldn't find this question on here. I would like to take extract one line from a telnet response and make it a variable. (actually one number from that line). I can extract up to where I need using telnet.read\_until(), but the whole beginning is still there. The printout shows different statuses of a machine. The line I am trying to get is formatted like this: ``` CPU Utilization : 5 % ``` I really only need the number, but there are many ':' and '%' characters in the rest of the output. Can anyone help me extract this value? Thanks in advance! Here is my code (this reads the whole output and prints): ``` import telnetlib, time print ("Starting Client...") host = input("Enter IP Address: ") timeout = 120 print ("Connecting...") try: session = telnetlib.Telnet(host, 23, timeout) except socket.timeout: print ("socket timeout") else: print("Sending Commands...") session.write("command".encode('ascii') + b"\r") print("Reading...") output = session.read_until(b"/r/n/r/n#>", timeout ) session.close() print(output) print("Done") ``` Edit: some example of what an output could be: ``` Boot Version : 1.1.3 (release_82001975_C) Post Version : 1.1.3 (release_82001753_E) Product VPD Version : release_82001754_C Product ID : 0x0076 Hardware Strapping : 0x004C CPU Utilization : 5 % Uptime : 185 days, 20 hours, 31 minutes, 29 seconds Current Date/Time : Fri Apr 26 17:50:30 2013 ```
As you say in the question:

> I can extract up to where I need using `telnet.read_until()`, but the whole beginning is still there.

So you can get all of the lines up to and including the one you want into a variable `output`. The only thing you're missing is how to get just the last line in that `output` string, right?

That's easy: just split `output` into lines and take the last one:

```
output.splitlines()[-1]
```

Or just split off the last line:

```
output.rpartition('\n')[-1]
```

---

This doesn't change output, it's just an expression that computes a new value (the last line in `output`). So, just doing this, followed by `print(output)`, won't do anything visibly useful. Let's take a simpler example:

```
a = 3
a + 1
print(a)
```

That's obviously going to print `3`. If you want to print `4`, you need something like this:

```
a = 3
b = a + 1
print(b)
```

So, going back to the real example, what you want is probably something like this:

```
line = output.rpartition('\n')[-1]
print(line)
```

And now you'll see this:

```
CPU Utilization : 5 %
```

---

Of course, you still need something like Johnny's code to extract the number from the rest of the line:

```
numbers = [s for s in line.split() if s.isdigit()]
print(numbers)
```

Now you'll get this:

```
['5']
```

---

Notice that gives you a list of one string. If you want just the one string, you still have another step:

```
number = numbers[0]
print(number)
```

Which gives you:

```
5
```

---

And finally, `number` is still the *string* `'5'`, not the integer `5`. 
If you want that, replace that last bit with: ``` number = int(numbers[0]) print(number) ``` This will still *print out* `5`, but now you have a variable you can actually use as a number: ``` print(number / 100.0) # convert percent to decimal ``` --- I'm depending on the fact that [`telnet` defines end-of-line as `\r\n`](http://www.freesoft.org/CIE/RFC/1123/31.htm), and any not-quite-telnet-compatible server that gets it wrong is almost certainly going to use either Windows-style (also `\r\n`) or Unix-style (just `\n`) line endings. So, splitting on `\n` will always get the last line, even for screwy servers. If you don't need to worry about that extra robustness, you can split on `\r\n` instead of `\n`. --- There are other ways you could solve this. I would probably either use something like `session.expect([r'CPU Utilization\s*: (\d+)\s*%'])`, or wrap the session as an iterator of lines (like a file) and then just do write the standard `itertools` solution. But this seems to be simplest given what you already have.
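For completeness, the `session.expect()` pattern mentioned at the end can also be applied to text you have already read, using `re.search` (the function name and sample text layout are mine):

```python
import re

SAMPLE = """\
Boot Version        : 1.1.3 (release_82001975_C)
CPU Utilization     : 5 %
Uptime              : 185 days, 20 hours, 31 minutes, 29 seconds
"""

def extract_cpu_percent(text):
    """Find the CPU utilization line anywhere in the output, or return None."""
    match = re.search(r'CPU Utilization\s*:\s*(\d+)\s*%', text)
    return int(match.group(1)) if match else None

print(extract_cpu_percent(SAMPLE))  # 5
```

This sidesteps the line-splitting entirely, at the cost of assuming the label text is stable.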
If you want to get only the numbers:

```
>>> output = "CPU Utilization : 5 %"
>>> [int(s) for s in output.split() if s.isdigit()]
[5]
>>> output = "CPU Utilization : 5 % % 4.44 : 1 : 2"
>>> [int(s) for s in output.split() if s.isdigit()]
[5, 1, 2]
```

(Note that `'4.44'` is skipped because `'4.44'.isdigit()` is `False`.)

EDIT (this assumes `output` is a list of lines; iterating over a plain string would yield single characters):

```
for line in output:
    print line
    # this will print every single line in a loop, so you can make:
    print [int(s) for s in line.split() if s.isdigit()]
```
How can I read one line from a telnet response with Python?
[ "", "python", "python-3.x", "telnet", "python-3.3", "telnetlib", "" ]
I have a list of countries like: ``` countries=['American Samoa', 'Canada', 'France'...] ``` I want to convert them like this: ``` countries=['AS', 'CA', 'FR'...] ``` Is there any module or any way to convert them?
There is a module called [`pycountry`](https://pypi.python.org/pypi/pycountry). Here's an example code: ``` import pycountry input_countries = ['American Samoa', 'Canada', 'France'] countries = {} for country in pycountry.countries: countries[country.name] = country.alpha_2 codes = [countries.get(country, 'Unknown code') for country in input_countries] print(codes) # prints ['AS', 'CA', 'FR'] ```
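If installing `pycountry` is not an option and you only need a handful of countries, a plain hard-coded dict does the same lookup (the mapping below is deliberately tiny, not a complete ISO 3166-1 list):

```python
# Abbreviated ISO 3166-1 alpha-2 mapping -- extend as needed.
ISO_ALPHA2 = {
    'American Samoa': 'AS',
    'Canada': 'CA',
    'France': 'FR',
}

def to_alpha2(names):
    """Look up each country name, falling back to a placeholder."""
    return [ISO_ALPHA2.get(name, 'Unknown code') for name in names]

print(to_alpha2(['American Samoa', 'Canada', 'France']))  # ['AS', 'CA', 'FR']
```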
You can use this csv file : [country code list into a CSV](http://geohack.net/gis/wikipedia-iso-country-codes.csv). ``` import csv dic = {} with open("wikipedia-iso-country-codes.csv") as f: file= csv.DictReader(f, delimiter=',') for line in file: dic[line['English short name lower case']] = line['Alpha-2 code'] countries = ['American Samoa', 'Canada', 'France'] for country in countries: print(dic[country]) ``` Will print: ``` AS CA FR ``` Few more [alternatives](https://gis.stackexchange.com/questions/1047/full-list-of-iso-alpha-2-and-iso-alpha-3-country-codes).
How to convert country names to ISO 3166-1 alpha-2 values, using python
[ "", "python", "country-codes", "iso-3166", "" ]
My class has a dict, for example: ``` class MyClass(object): def __init__(self): self.data = {'a': 'v1', 'b': 'v2'} ``` Then I want to use the dict's key with MyClass instance to access the dict, for example: ``` ob = MyClass() v = ob.a # Here I expect ob.a returns 'v1' ``` I know this should be implemented by \_\_getattr\_\_, but I'm new to Python, I don't exactly know how to implement it.
``` class MyClass(object): def __init__(self): self.data = {'a': 'v1', 'b': 'v2'} def __getattr__(self, attr): return self.data[attr] ``` --- ``` >>> ob = MyClass() >>> v = ob.a >>> v 'v1' ``` Be careful when implementing `__setattr__` though, you will need to make a few modifications: ``` class MyClass(object): def __init__(self): # prevents infinite recursion from self.data = {'a': 'v1', 'b': 'v2'} # as now we have __setattr__, which will call __getattr__ when the line # self.data[k] tries to access self.data, won't find it in the instance # dictionary and return self.data[k] will in turn call __getattr__ # for the same reason and so on.... so we manually set data initially super(MyClass, self).__setattr__('data', {'a': 'v1', 'b': 'v2'}) def __setattr__(self, k, v): self.data[k] = v def __getattr__(self, k): # we don't need a special call to super here because getattr is only # called when an attribute is NOT found in the instance's dictionary try: return self.data[k] except KeyError: raise AttributeError ``` --- ``` >>> ob = MyClass() >>> ob.c = 1 >>> ob.c 1 ``` If you don't need to set attributes just use a namedtuple eg. ``` >>> from collections import namedtuple >>> MyClass = namedtuple("MyClass", ["a", "b"]) >>> ob = MyClass(a=1, b=2) >>> ob.a 1 ``` If you want the default arguments you can just write a wrapper class around it: ``` class MyClass(namedtuple("MyClass", ["a", "b"])): def __new__(cls, a="v1", b="v2"): return super(MyClass, cls).__new__(cls, a, b) ``` or maybe it looks nicer as a function: ``` def MyClass(a="v1", b="v2", cls=namedtuple("MyClass", ["a", "b"])): return cls(a, b) ``` --- ``` >>> ob = MyClass() >>> ob.a 'v1' ```
Late to the party, but found two really good resources that explain this better (IMHO). As explained [here](http://western-skies.blogspot.com.br/2008/02/complete-example-of-getattr-in-python.html), you should use `self.__dict__` to access fields from within `__getattr__`, in order to avoid infinite recursion. The example provided is: > ``` > def __getattr__(self, attrName): > if not self.__dict__.has_key(attrName): > value = self.fetchAttr(attrName) # computes the value > self.__dict__[attrName] = value > return self.__dict__[attrName] > ``` Note: in the second line (above), a more Pythonic way would be (`has_key` apparently was even removed in Python 3): ``` if attrName not in self.__dict__: ``` The [other resource](http://farmdev.com/src/secrets/magicmethod/#introducing-getattr) explains that the `__getattr__` is invoked only when the attribute is not found in the object, and that `hasattr` always returns `True` if there is an implementation for `__getattr__`. It provides the following example, to demonstrate: > ``` > class Test(object): > def __init__(self): > self.a = 'a' > self.b = 'b' > > def __getattr__(self, name): > return 123456 > > t = Test() > print 'object variables: %r' % t.__dict__.keys() > #=> object variables: ['a', 'b'] > print t.a > #=> a > print t.b > #=> b > print t.c > #=> 123456 > print getattr(t, 'd') > #=> 123456 > print hasattr(t, 'x') > #=> True > ```
Python: how to implement __getattr__()?
[ "", "python", "" ]
I'm creating a simple Python program that gives the basic functionality of an SMS inbox. I have created an SMS_Inbox method. ``` store = [] message_count = 0 class sms_store: def add_new_arrival(self,number,time,text): store.append(("From: "+number, "Recieved: "+time,"Msg: "+text)) **message_count += 1** def delete(self,i): if i > len(store-1): print("Index does not exist") else: del store[i] message_count -= 1 ``` In the bolded bit I am getting an error: ``` UnboundLocalError: local variable 'message_count' referenced before assignment. ``` I created a global variable store which is an empty list, and this works when I use the add_new_arrival method. However, for some reason it is not adding values to my global message_count variable. Please help
That's not how classes work. Data should be stored within the class instance, not globally.

```
class SMSStore(object):
    def __init__(self):
        self.store = []
        self.message_count = 0

    def add_new_arrival(self,number,time,text):
        self.store.append(("From: "+number, "Recieved: "+time,"Msg: "+text))
        self.message_count += 1

    def delete(self, i):
        if i >= len(self.store):
            raise IndexError
        else:
            del self.store[i]
            self.message_count -= 1

sms_store = SMSStore()
sms_store.add_new_arrival("1234", "now", "lorem ipsum")

try:
    sms_store.delete(20)
except IndexError:
    print("Index does not exist")

print sms_store.store

# multiple separate stores
sms_store2 = SMSStore()
sms_store2.add_new_arrival("4321", "then", "lorem ipsum")
print sms_store2.store
```
If the variable you are referring to is `message_count`, the error is because in Python, you have to specify a variable as `global` before you can make edits with it. This should work. ``` store = [] message_count = 0 class sms_store: def add_new_arrival(self,number,time,text): global message_count store.append(("From: "+number, "Recieved: "+time,"Msg: "+text)) message_count += 1 def delete(self,i): if i > len(store-1): print("Index does not exist") else: global message_count del store[i] message_count -= 1 ``` As written above, you'd be better off encapsulating it in the `__init__` function instead of declaring it `global`.
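A stripped-down reproduction of the error, independent of the SMS code, may make the rule clearer: assigning to a name anywhere in a function makes that name local for the whole function, so the read in `count += 1` fails unless the name is declared `global` (all names here are mine):

```python
count = 0

def bump_broken():
    count += 1  # raises UnboundLocalError: the assignment makes `count` local

def bump_fixed():
    global count
    count += 1

try:
    bump_broken()
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError

bump_fixed()
bump_fixed()
print(count)  # 2
```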
Python global variable and class functionality
[ "", "python", "class", "" ]
I'm trying to create a simple `Rectangle`-class in python, but I also need to use points and sizes in my code, so I'm trying to inherit my `Rectangle` from `Point` and `Size`. The problem is, my `Rectangle`'s initialize method looks awful and I'm not sure if it's even suitable for any code at all. Here's what I got: ``` class Size: def __init__(self, width=0, height=0): self.width = width self.height = height class Point: def __init__(self, x=0, y=0): self.x = x self.y = y class Rectangle(Size, Point): def __init__(self, size=None, position=None): #I'd rather not use 4 variables if size: super(Size, self).__init__(size.width, size.height) else: super(Size, self).__init__() if position: super(Point, self).__init__(position.x, position.y) else: super(Point, self).__init__() ``` However, it looks awful and it doesn't even work: `TypeError: object.__init__() takes no parameters` Is there a cleaner way to do this? I could of course just force my rectangle to take `size` and `position` (not to make them optional) but I would rather not. I could also define my `Rectangle` to have a `has-a` relationship with `Size` and `Point` rather than `is-a` relationship, but that's not even proper OOP and I'm mostly learning here so I'd rather not do that either.
First I'm just gonna answer your question: Go with `has-a` relationship. ``` class Rectangle: def __init__(self, size=None, position=None): self.size = size or Size() self.position = position or Point() ``` Think about it: *is* rectangle a position and a size, or does rectangle *have* a position and a size? If you're coding a game which constructs of only 2D rectangles, then there's your answer, stop reading. --- ...however, if you're going to have more shapes (such as `Circle`), you should think twice before doing anything. When coding OOP, always start by thinking what you need. So, eventually we might need a `Circle` class too. We should think of that and other shapes before coding our `Rectangle`-class. `Circle` and `Rectangle` both have a common attribute called `position`. This is why you should have a base class called `Shape` or `Geometry` which defines atleast `position`. ``` class Geometry: def __init__(self, position=None): self.position = position or Point() class Rectangle(Geometry): def __init__(self, size=None, position=None): self.size = size or Size() super().__init__(position) class Circle(Geometry): def __init__(self, radius=0, position=None): self.radius = radius super().__init__(position) ``` Also think about what kind of other classes you might have, and see if they have any common attributes: * Triangle * Pentagon * Other polygons * Line * etc. You'll soon realize they all got `position`, which is why we have `Geometry`. They also have somekind of `size` (length for line, radius for circle, width&height for triangle...), so you can create few base classes for different `size`'s too (such as width&height for triangle and rectangle).
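Putting the has-a version together with the `Point` and `Size` classes from the question, usage is then plain attribute access (a sketch; both constructor arguments stay optional):

```python
class Size:
    def __init__(self, width=0, height=0):
        self.width = width
        self.height = height

class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

class Rectangle:
    def __init__(self, size=None, position=None):
        self.size = size or Size()
        self.position = position or Point()

r = Rectangle(Size(3, 4), Point(1, 2))
print(r.size.width, r.position.x)  # 3 1

default = Rectangle()              # both arguments optional
print(default.size.height)         # 0
```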
You're using `super()` wrong. You specify *your own* class in the call, to get to the next parent class. All your other classes need to do the same thing. The way you're doing it now, you're saying to call e.g. `Size`'s parent class, which is `object`. This being Python 3, you can just do `super()` without any arguments and it'll figure all that out for you. Of course this means that you can't explicitly call both parent classes using `super()`, as you want to do, because a given class has only one `super()`. If you really want to do that, you'll have to call them as `Size.__init__(self, ...)` and so on. But I have to say, the inheritance scheme you have cooked up does not make any sense. A rectangle is neither a kind of size nor a kind of point. Instead, it *has* those things, which indicates encapsulation (where a `Rectangle` has `size` and `position` attributes which are instances of those particular classes, or, really, `namedtuple`s) rather than inheritance. Encapsulation is certainly "proper OOP."
Troublesome multi-inheritance code
[ "", "python", "class", "inheritance", "python-3.x", "multiple-inheritance", "" ]
Is there any way to do something like this in python 2.7? ``` def scaleit(g, k): for item in g: yield item*k promise = ?????? # defines a generator for reference but not use: # other functions can make use of it, # but it requires a call to promise.fulfill() to # define what the generator is going to yield; # promise raises an error if next() is called # before the promise is fulfilled f = scaleit(promise, 3) promise.fulfill(range(10)) for item in f: print item ```
Yes; generators don't run until they're actually iterated, so you can just defer iterating the fulfilled promise's value until requested: ``` class Promise(object): def fulfill(self, result): self.result = result def __iter__(self): return iter(self.result) def scaleit(g, k): for item in g: yield item*k promise = Promise() f = scaleit(promise, 3) promise.fulfill(range(10)) print list(f) ```
Is this what you want? ``` def scaleit(g, k): for item in g: yield item * k class Promise(object): def __init__(self): self.g = None def fulfill(self, g): self.g = iter(g) def __iter__(self): return self def next(self): return next(self.g) promise = Promise() f = scaleit(promise, 3) promise.fulfill(range(10)) for item in f: print item ```
updating a python generator after it has been created
[ "", "python", "python-2.7", "generator", "" ]
In tableWidget I have one column entirely made up of checkable items. I can't figure out how to center the checkbox or at least remove the text box next to it. As you can see in this picture ![enter image description here](https://i.stack.imgur.com/DzSdk.png) the text box has that ugly outline when I click on a cell; I would like that turned off in the entire table if it's possible. I've read that I need to use delegates to control the positioning of items/icons, but for a noob like me it would take too long to understand that properly, so if there is some simple solution that will make that column less ugly, examples would be appreciated.
That rectangle is a focus rectangle and [cannot be hidden via a stylesheet](http://qt-project.org/forums/viewthread/3448).

Edit: So you have **four** options:

1 - It seems that you can use

```
tablewidget->setFocusPolicy(Qt::NoFocus);
```

But you will lose the ability to process keyboard events. See [FocusPolicy](http://qt-project.org/doc/qt-4.8/qwidget.html#focusPolicy-prop)

2 - Set the checkable widget items as disabled, not selectable, through [setFlags](http://qt-project.org/doc/qt-4.8/qtablewidgetitem.html#setFlags). I don't know if this is a bug, but in my Qt I was still allowed to click on the checkboxes.

3 - Set your first column as checkable through [setFlags](http://qt-project.org/doc/qt-4.8/qtablewidgetitem.html#setFlags) too, and just don't use that second column. Checkboxes will be shown in the same column as the strings, but at the left.

4 - The custom delegate you don't want to create. [Here is an example](https://stackoverflow.com/questions/2055705/hide-the-border-of-the-selected-cell-in-qtablewidget-in-pyqt)
An example that worked using PyQt4. Adapted from [falsinsoft](http://falsinsoft.blogspot.com/2013/11/qtablewidget-center-checkbox-inside-cell.html "falsinsoft")

![qcheckbox centred inside pyqt4 table widget](https://i.stack.imgur.com/pN55H.png)

```
table = QTableWidget()

cell_widget = QWidget()
chk_bx = QCheckBox()
chk_bx.setCheckState(Qt.Checked)
lay_out = QHBoxLayout(cell_widget)
lay_out.addWidget(chk_bx)
lay_out.setAlignment(Qt.AlignCenter)
lay_out.setContentsMargins(0,0,0,0)
cell_widget.setLayout(lay_out)
table.setCellWidget(i, 0, cell_widget)  # i is the target row index
```
Align checkable items in qTableWidget
[ "", "python", "qt", "qt4", "pyqt4", "qtablewidget", "" ]
I'm looking for advice for designing a database that has generic entities that want to be related to several different other entity types. Horrible intro sentence, I know ... so please let me explain by example. Consider that I have two different entities Employees and Customers with table defs: ``` Employees ---------- EmployeeID int PK FirstName varchar LastName varchar ... other Employee specific fields Customers ---------- CustomerID int PK FirstName varchar LastName varchar ... other Customer specific fields ``` A better design might have the common fields, FirstName and LastName, in a related base table, but that's not the part I'm struggling with. Now, consider that I want to be able to store an unlimited number of Addresses and PhoneNumbers for my Employees and Customers, and define the tables: ``` Addresses ---------- AddressID int PK AddressLine varchar City varchar State varchar PostalCode varchar PhoneNumbers ------------- PhoneNumberID int PK PhoneNumber varchar PhoneExtension varchar ``` And then two additional tables to relate Addresses and PhoneNumbers to the Employees: ``` EmployeeAddresses ------------------ EmployeeAddressID int PK EmployeeID int FK Employees.EmployeeID AddressID int FK Addresses.AddressID EmployeeAddressType enum EmployeePhoneNumbers --------------------- EmployeePhoneNumberID int PK EmployeeID int FK Employees.EmployeeID PhoneNumberID int FK PhoneNumbers.PhoneNumberID EmployeePhoneNumberType enum ``` And two similar tables, CustomerAddresses and CustomerPhoneNumbers, to relate Addresses and PhoneNumbers to the Customers table. Any Employee-specific or Customer-specific aspects of the Addresses and PhoneNumbers, like EmployeeAddressType in the above, also go in these last four tables. From what I've found researching the Internet, this design is called Table-Per-Type (TPT) or Table-Per-Subclass (TPS). 
And the polymorphic advantages seem appealing, e.g., I could add an AddressLine2 to the Addresses table down the road and both my Employees and Customers automatically gain the benefit of the extra address line. The disadvantages noted by those sources on TPT are slower queries and harder implementation. And now my fairly open-ended plea for advice ... What other disadvantages am I not considering? What gotchas can you run into trying to maintain and evolve an application based on this design? And finally, is the above design what most experienced database designers would use? Thanks.
Employees and Customers are both subclasses of the class People, as noted in the previous response. These two subclasses might not be mutually exclusive. There is a technique, called Class Table Inheritance. In this technique, there will be three tables, People, Employees, and Customers. Attributes common to all people, like Address, will be in the People table. You can get the details by visiting this tag [class-table-inheritance](/questions/tagged/class-table-inheritance "show questions tagged 'class-table-inheritance'") and looking under the "Info" tag.
1. use single table inheritance to start. it is the simplest, easiest, and fastest. 2. use the Party Model. Individuals and Organizations are both Parties, and can *play the role* of customers or employees. 3. consider email addresses, phone numbers, web sites, and mailing addresses all to be sub-types of "contact method" or address. 4. if you use a tool like JBoss Hibernate (java) or NHibernate (.net), then this does most of the work for you.
Table-Per-Type Database Design Advice
[ "", "sql", "database", "database-design", "" ]
I'm building a gallery using Django (1.5.1) on my local machine. In my Album model I have an `ImageField`. There is a view to show all images of an album. It works well, but in the end the images don't show up. There are image borders, as you can see, but the images don't load.

### screenshot

![enter image description here](https://i.stack.imgur.com/nyExT.png)

### models.py

```
class Category(models.Model):
    ###

class Album(models.Model):
    category = models.ForeignKey(Category, related_name='albums')
    ###

class Image(models.Model):
    album = models.ForeignKey(Album)
    image = models.ImageField(upload_to = 'images/albums/')
```

### views.py

```
def detail(request, album_id):
    album = get_object_or_404(Album, pk=album_id)
    return render(request, 'gallery/detail.html', {'album': album})
```

### detail.html

```
<h1>{{ album.title }}</h1>
{% for image in album.image_set.all %}
    <a> <img src="{{ image.image.url }}" height="420"></a>
{% endfor %}
```

If this is my album address: `http://localhost:8000/gallery/1/`, then the image URL is `http://localhost:8000/media/images/albums/photo_4.JPG` (I get a 404 when I enter it in a browser). These are my media root and URL settings:

```
MEDIA_ROOT = '/media/'
MEDIA_URL = '/localhost:8000/media/'
```

My media root has *777 permissions*. What should I do now? Where is the problem?
I have a clue about what the problem is. `MEDIA_URL` should be like this:

```
MEDIA_ROOT='<the full path to your media folder>' (i.e: '/home/ike/project/media/')
MEDIA_URL='/media/'
```

Note the slash character at the beginning. That is because media is a folder in your root server folder and not relative to whatever other url you call it.

And add these lines to the end of your `urls.py` file:

```
# Imports needed for these lines:
# from django.conf import settings
# from django.conf.urls.static import static
# from django.contrib.staticfiles.urls import staticfiles_urlpatterns

urlpatterns += staticfiles_urlpatterns()
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```

You can check the following documentation: <https://docs.djangoproject.com/en/dev/howto/static-files>

Hope this helps
If you're using the dev server then you need to add something to your urls.py to make django serve the media files, cf: 1.4.x : <https://docs.djangoproject.com/en/1.4/howto/static-files/#serving-other-directories> 1.5.x: <https://docs.djangoproject.com/en/dev/howto/static-files/#serving-files-uploaded-by-a-user>
Images from ImageField in Django don't load in template
[ "", "python", "django", "image", "django-templates", "django-models", "" ]
I currently have:

```
FORMAT = '%(asctime)s - %(levelname)s - %(message)s'

logging.basicConfig(format=FORMAT, datefmt='%d/%m/%Y %H:%M:%S',
                    filename=LOGFILE, level=getattr(logging, options.loglevel.upper()))
```

... which works great. However, I'm trying to do:

```
FORMAT = '%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s'
```

and that just throws `KeyError`s, even though `MYVAR` is defined. Is there a workaround? `MYVAR` is a constant, so it would be a shame to have to pass it every time I invoke the logger. Thank you!
You could use a [custom filter](http://docs.python.org/2/howto/logging-cookbook.html#using-filters-to-impart-contextual-information): ``` import logging MYVAR = 'Jabberwocky' class ContextFilter(logging.Filter): """ This is a filter which injects contextual information into the log. """ def filter(self, record): record.MYVAR = MYVAR return True FORMAT = '%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s' logging.basicConfig(format=FORMAT, datefmt='%d/%m/%Y %H:%M:%S') logger = logging.getLogger(__name__) logger.addFilter(ContextFilter()) logger.warning("'Twas brillig, and the slithy toves") ``` yields ``` Jabberwocky 24/04/2013 20:57:31 - WARNING - 'Twas brillig, and the slithy toves ```
You could use a custom `Filter`, as `unutbu` says, or you could use a `LoggerAdapter`: ``` import logging logger = logging.LoggerAdapter(logging.getLogger(__name__), {'MYVAR': 'Jabberwocky'}) FORMAT = '%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s' logging.basicConfig(format=FORMAT, datefmt='%d/%m/%Y %H:%M:%S') logger.warning("'Twas brillig, and the slithy toves") ``` which gives Jabberwocky 25/04/2013 07:39:52 - WARNING - 'Twas brillig, and the slithy toves Alternatively, just pass the information with every call: ``` import logging logger = logging.getLogger(__name__) FORMAT = '%(MYVAR)s %(asctime)s - %(levelname)s - %(message)s' logging.basicConfig(format=FORMAT, datefmt='%d/%m/%Y %H:%M:%S') logger.warning("'Twas brillig, and the slithy toves", extra={'MYVAR': 'Jabberwocky'}) ``` which gives the same result. Since MYVAR is practically constant, the `LoggerAdapter` approach requires less code than the `Filter` approach in your case.
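To check the adapter end-to-end without writing to a real log file, you can point a handler at an in-memory stream (the logger name and message below are mine):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(MYVAR)s - %(levelname)s - %(message)s'))

base = logging.getLogger('adapter_demo')
base.addHandler(handler)
base.setLevel(logging.INFO)

logger = logging.LoggerAdapter(base, {'MYVAR': 'Jabberwocky'})
logger.warning("'Twas brillig, and the slithy toves")

print(stream.getvalue())
# Jabberwocky - WARNING - 'Twas brillig, and the slithy toves
```

The adapter's `extra` dict is merged into every record, so the formatter can reference `%(MYVAR)s` without it being passed on each call.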
How to input variables in logger formatter?
[ "python", "logging" ]
Is it possible to write a statement that selects a column from a table and converts the results to a string? Ideally I would want to have comma separated values. For example, say that the SELECT statement looks something like ``` SELECT column FROM table WHERE column<10 ``` and the result is a column with values ``` |column| -------- | 1 | | 3 | | 5 | | 9 | ``` I want as a result the string "1, 3, 5, 9"
You can do it like this: **[Fiddle demo](http://sqlfiddle.com/#!3/3bfea/3)** ``` declare @results varchar(500) select @results = coalesce(@results + ',', '') + convert(varchar(12),col) from t order by col select @results as results | RESULTS | ----------- | 1,3,5,9 | ```
There is a new method in SQL Server 2017: `SELECT STRING_AGG (column, ',') AS column FROM Table;` which will produce `1,3,5,9` for you
SQL Server: select a column and convert it to a string
[ "sql", "sql-server", "string", "select" ]
I have an issue with Python. My case: I have a gzipped file from a partner platform (i.e. h..p//....namesite.../xxx) If I click the link from my browser, it will download a file like (i.e. namefile.xml.gz). So... if I read this file with Python I can decompress and read it. Code: ``` content = gzip.open(namefile.xml.gz,'rb') print content.read() ``` But I can't if I try to read the file from the remote source. From the remote file I can read only the encoded string, but not decode it. Code: ``` response = urllib2.urlopen(url) encoded =response.read() print encoded ``` With this code I can read the encoded string... but I can't decode it with gzip or lzip. Any advice? Thanks a lot
Unfortunately the method @Aya suggests does not work, since `GzipFile` extensively uses the `seek` method of the file object (not supported by the response). So you basically have two options: 1. Read the contents of the remote file into an in-memory buffer (`io.BytesIO`, since the payload is bytes; plain `StringIO` in Python 2), and pass the object into `gzip.GzipFile` (if the file is small) 2. download the file into a temporary file on disk, and use `gzip.open` There is another option (which requires some coding) - to implement your own reader using the `zlib` module. It is rather easy, but you will need to know about a magic constant ([How can I decompress a gzip stream with zlib?](https://stackoverflow.com/questions/1838699/how-can-i-decompress-a-gzip-stream-with-zlib)).
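Option 1 above can be sketched as follows. This is a minimal Python 3 illustration (`urllib.request` is the Python 3 counterpart of `urllib2`); the `read_remote_gz` helper and its name are mine, not from the original answer:

```python
import gzip
import io
import urllib.request  # Python 3 counterpart of urllib2


def decompress_bytes(compressed):
    # BytesIO is seekable, so GzipFile gets the seek support it expects.
    return gzip.GzipFile(fileobj=io.BytesIO(compressed)).read()


def read_remote_gz(url):
    # Pull the whole compressed body into memory first (fine for small files).
    return decompress_bytes(urllib.request.urlopen(url).read())
```

For large files, option 2 (download to a temporary file and use `gzip.open`) avoids holding the whole compressed payload in memory.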
For Python v3.2 or later, you can use the [`gzip.GzipFile`](http://docs.python.org/2.7/library/gzip.html#gzip.GzipFile) class to wrap the file object returned by `urllib2.urlopen()`, with something like this... ``` import urllib2 import gzip response = urllib2.urlopen(url) gunzip_response = gzip.GzipFile(fileobj=response) content = gunzip_response.read() print content ``` ...which will transparently decompress the response stream as you read it.
Decompress remote .gz file in Python
[ "python", "gzip" ]
I have the following code segment : ``` class A: def __init__(self): self.state = 'CHAT' def method1(self): self.state = 'SEND' def printer(self): print self.state class B(A): def method2(self): self.method1() print self.state ob_B = B() ob_A = A() ob_B.method2() ob_A.printer() ``` This gives me the output : ``` SEND CHAT ``` I want it to print : ``` SEND SEND ``` That is, when `B.method2` is modifying self.state by calling self.method1, I want it to modify the already existing value of `self.state = 'CHAT'` in A's instance. How can I do this?
The instance is passed as the first argument to each of your methods, so `self` is the instance. You are setting instance attributes and not class variables. ``` class A: def __init__(self): A.state = 'CHAT' def method1(self): A.state = 'SEND' def printer(self): print A.state class B(A): def method2(self): self.method1() print B.state ob_B = B() ob_A = A() ob_B.method2() ob_A.printer() ``` --- ``` SEND SEND ```
``` ob_B = B() ob_A = A() ob_B.method2() ob_A.printer() ``` You need to call ob\_B.method2() -- without the parentheses that statement is just a reference to the function and doesn't actually call it.
How to access a class variable from another class in python?
[ "python", "class", "class-variables" ]
I'm in the process of trying to replace the $ in cells in a specific column with nothing. Basically, I want to take cells that resemble $3.45 and make the final product look like 3.45. I've been working with the REPLACE command (with the backtick) and can't seem to get it to work correctly. Here's what I tried: ``` REPLACE('`$', '`$', '') ``` Do I need to throw a LIKE command in there, along with something to handle the blank string I want to replace the dollar signs with? I know this may be a simple solution but I really want to learn this.
Thanks for the assistance from @nickvaccaro and @vlad. I ended up leveraging your suggestions with the following find-and-replace code: ``` UPDATE TABLE_NAME SET COLUMN_NAME = REPLACE(COLUMN_NAME, '$', '') ```
You can get rid of that back tick. Check this out: ``` DECLARE @Value VARCHAR(50) SET @Value = '$3.50' SELECT REPLACE(@Value, '$', '') ```
Not able to leverage T-SQL to replace dollar sign with a blank
[ "sql", "sql-server", "t-sql" ]
I want to compare values from two variables (a dictionary and a list). The dictionary has a nested structure, so I have to loop over all the items. I came up with a simple solution, but I am pretty sure it can be done in a better way (using Python). In brief, I want to find items from `users_from_database` which do not exist in the `users_from_client` variable. **My solution:** ``` #variable containing users from client side users_from_client = { "0": { "COL1": "whatever", "COL2": "val1", "COL3": "whatever", }, "1": { "COL1": "whatever", "COL2": "val2", "COL3": "whatever", }, "3": { "COL1": "whatever", "COL2": "val3", "COL3": "whatever", } } #variable containing users from the database users_from_database = [ ["val1"], ["val2"], ["val5"], ["val7"] ] #This function is used to find an element in the nested dictionaries (d) def _check(element, d, pattern = 'COL2'): exist = False for k, user in d.iteritems(): for key, item in user.iteritems(): if key == pattern and item == element: exist = True return exist #Finding which users should be removed from the database to_remove = [] for user in users_from_database: if not _check(user[0], users_from_client): if user[0] not in to_remove: to_remove.append(user[0]) #to_remove list contains: ["val5", "val7"] ``` What is a better, more Pythonic way to get the same result? I probably don't have to add that I'm a newbie with Python (I assume you can see that from the code above).
Just use an [error-safe dictionary lookup](http://docs.python.org/2/library/stdtypes.html#dict.get): ``` def _check(element, d, pattern = 'COL2'): for user in d.itervalues(): if user.get(pattern) == element: return True return False ``` Or as a one-liner: ``` def _check(element, d, pattern = 'COL2'): return any(user.get(pattern) == element for user in d.itervalues()) ``` Or trying to do the entire job as a one-liner: ``` #Finding which users should be removed from the database to_remove = set( name for (name,) in users_from_database if not any(user.get('COL2') == name for user in users_from_client.itervalues()) ) assert to_remove == {"val5", "val7"} ``` `set`s can make it even more concise (and efficient): ``` to_remove = set( name for (name,) in users_from_database ) - set( user.get('COL2') for user in users_from_client.itervalues() ) ``` --- Your data structures are a bit weird. Consider using: ``` users_from_client = [ { "COL1": "whatever", "COL2": "val1", "COL3": "whatever", }, { "COL1": "whatever", "COL2": "val2", "COL3": "whatever", }, { "COL1": "whatever", "COL2": "val3", "COL3": "whatever", } ] #variable containing users from the database users_from_database = {"val1", "val2", "val5", "val7"} ``` Which reduces your code to: ``` to_remove = users_from_database - set( user.get('COL2') for user in users_from_client ) ```
Well I don't know of any super elegant way to do this, but there are some minor improvements you can make to your code. First off, you aren't using `k`, so you might as well iterate over just the values. Second, you don't need to keep track of `exists`, you can just return immediately when you find a match. Lastly, if you're checking for a key,value pair, you can just test if the tuple is contained in items. ``` def _check(element, d, pattern = 'COL2'): for user in d.itervalues(): if (pattern, element) in user.items(): return True return False ```
Comparing value from a nested dictionaries and list
[ "python", "list", "dictionary", "comparison", "nested-lists" ]
My Python program was too slow. So, I **profiled** it and found that most of the time was being spent in a function that **computes distance** between two points (a point is a list of 3 Python floats): ``` def get_dist(pt0, pt1): val = 0 for i in range(3): val += (pt0[i] - pt1[i]) ** 2 val = math.sqrt(val) return val ``` To analyze why this function was so slow, I wrote two test programs: one in Python and one in C++ that do similar computation. They compute the distance between 1 million pairs of points. (The test code in Python and C++ is below.) The Python computation takes 2 seconds, while C++ takes 0.02 seconds. A 100x difference! Why is Python code **so much slower** than C++ code for such simple math computations? How do I **speed it up** to match the C++ performance? The Python code used for testing: ``` import math, random, time num = 1000000 # Generate random points and numbers pt_list = [] rand_list = [] for i in range(num): pt = [] for j in range(3): pt.append(random.random()) pt_list.append(pt) rand_list.append(random.randint(0, num - 1)) # Compute beg_time = time.clock() dist = 0 for i in range(num): pt0 = pt_list[i] ri = rand_list[i] pt1 = pt_list[ri] val = 0 for j in range(3): val += (pt0[j] - pt1[j]) ** 2 val = math.sqrt(val) dist += val end_time = time.clock() elap_time = (end_time - beg_time) print elap_time print dist ``` The C++ code used for testing: ``` #include <cstdlib> #include <iostream> #include <ctime> #include <cmath> struct Point { double v[3]; }; int num = 1000000; int main() { // Allocate memory Point** pt_list = new Point*[num]; int* rand_list = new int[num]; // Generate random points and numbers for ( int i = 0; i < num; ++i ) { Point* pt = new Point; for ( int j = 0; j < 3; ++j ) { const double r = (double) rand() / (double) RAND_MAX; pt->v[j] = r; } pt_list[i] = pt; rand_list[i] = rand() % num; } // Compute clock_t beg_time = clock(); double dist = 0; for ( int i = 0; i < num; ++i ) { const Point* pt0 = pt_list[i]; int r = 
rand_list[i]; const Point* pt1 = pt_list[r]; double val = 0; for ( int j = 0; j < 3; ++j ) { const double d = pt0->v[j] - pt1->v[j]; val += ( d * d ); } val = sqrt(val); dist += val; } clock_t end_time = clock(); double sec_time = (end_time - beg_time) / (double) CLOCKS_PER_SEC; std::cout << sec_time << std::endl; std::cout << dist << std::endl; return 0; } ```
A sequence of optimizations: ### The original code, with small changes ``` import math, random, time num = 1000000 # Generate random points and numbers # Change #1: Sometimes it's good not to have too much randomness. # This is one of those cases. # Changing the code shouldn't change the results. # Using a fixed seed ensures that the changes are valid. # The final 'print dist' should yield the same result regardless of optimizations. # Note: There's nothing magical about this seed. # I randomly picked a hash tag from a git log. random.seed (0x7126434a2ea2a259e9f4196cbb343b1e6d4c2fc8) pt_list = [] rand_list = [] for i in range(num): pt = [] for j in range(3): pt.append(random.random()) pt_list.append(pt) # Change #2: rand_list is computed in a separate loop. # This ensures that upcoming optimizations will get the same results as # this unoptimized version. for i in range(num): rand_list.append(random.randint(0, num - 1)) # Compute beg_time = time.clock() dist = 0 for i in range(num): pt0 = pt_list[i] ri = rand_list[i] pt1 = pt_list[ri] val = 0 for j in range(3): val += (pt0[j] - pt1[j]) ** 2 val = math.sqrt(val) dist += val end_time = time.clock() elap_time = (end_time - beg_time) print elap_time print dist ``` ### Optimization #1: Put the code in a function. The first optimization (not shown) is to embed all of the code except the `import` in a function. This simple change offers a 36% performance boost on my computer. ### Optimization #2: Eschew the `**` operator. You don't use `pow(d,2)` in your C code because everyone knows that this is suboptimal in C. It's also suboptimal in python. Python's `**` is smart; it evaluates `x**2` as `x*x`. However, it takes time to be smart. You know you want `d*d`, so use it. Here's the computation loop with that optimization: ``` for i in range(num): pt0 = pt_list[i] ri = rand_list[i] pt1 = pt_list[ri] val = 0 for j in range(3): d = pt0[j] - pt1[j] val += d*d val = math.sqrt(val) dist += val ``` ### Optimization #3: Be pythonic. 
Your Python code looks a whole lot like your C code. You aren't taking advantage of the language. ``` import math, random, time, itertools def main (num=1000000) : # This small optimization speeds things up by a couple percent. sqrt = math.sqrt # Generate random points and numbers random.seed (0x7126434a2ea2a259e9f4196cbb343b1e6d4c2fc8) def random_point () : return [random.random(), random.random(), random.random()] def random_index () : return random.randint(0, num-1) # Big optimization: # Don't generate the lists of points. # Instead use list comprehensions that create iterators. # It's best to avoid creating lists of millions of entities when you don't # need those lists. You don't need the lists; you just need the iterators. pt_list = [random_point() for i in xrange(num)] rand_pts = [pt_list[random_index()] for i in xrange(num)] # Compute beg_time = time.clock() dist = 0 # Don't loop over a range. That's too C-like. # Instead loop over some iterable, preferably one that doesn't create the # collection over which the iteration is to occur. # This is particularly important when the collection is large. for (pt0, pt1) in itertools.izip (pt_list, rand_pts) : # Small optimization: inner loop inlined, # intermediate variable 'val' eliminated. d0 = pt0[0]-pt1[0] d1 = pt0[1]-pt1[1] d2 = pt0[2]-pt1[2] dist += sqrt(d0*d0 + d1*d1 + d2*d2) end_time = time.clock() elap_time = (end_time - beg_time) print elap_time print dist ``` ## Update ### Optimization #4: Use numpy The following takes about 1/40th the time of the original version in the timed section of the code. Not quite as fast as C, but close. Note the commented-out "Mondo slow" computation. That takes about ten times as long as the original version. There is an overhead cost with using numpy. The setup takes quite a bit longer in the code that follows compared to that in my non-numpy optimization #3. Bottom line: You need to take care when using numpy, and the setup costs might be significant.
``` import numpy, random, time def main (num=1000000) : # Generate random points and numbers random.seed (0x7126434a2ea2a259e9f4196cbb343b1e6d4c2fc8) def random_point () : return [random.random(), random.random(), random.random()] def random_index () : return random.randint(0, num-1) pt_list = numpy.array([random_point() for i in xrange(num)]) rand_pts = pt_list[[random_index() for i in xrange(num)],:] # Compute beg_time = time.clock() # Mondo slow. # dist = numpy.sum ( # numpy.apply_along_axis ( # numpy.linalg.norm, 1, pt_list - rand_pts)) # Mondo fast. dist = numpy.sum ((numpy.sum ((pt_list-rand_pts)**2, axis=1))**0.5) end_time = time.clock() elap_time = (end_time - beg_time) print elap_time print dist ```
Some general hints: Move all your code into a main() function and use the normal ``` if __name__ == "__main__": main() ``` construct. It greatly improves speed due to variable-scoping. See [Why does Python code run faster in a function?](https://stackoverflow.com/questions/11241523/why-does-python-code-run-faster-in-a-function) for an explanation of why. Don't use `range()` since it generates the complete range at once which is slow for large numbers; instead use `xrange()` which uses a generator.
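The first hint can be sketched like this. This is a minimal Python 3 illustration of the `main()` structure (the million-iteration loop is a stand-in for the distance computation, not the poster's code):

```python
def main():
    # Inside a function, names like `total` and `i` are locals, resolved
    # via a fast array lookup instead of a global dict lookup per access.
    total = 0.0
    for i in range(1000000):
        total += i
    return total


if __name__ == "__main__":
    print(main())
```

The same loop written at module level runs noticeably slower in CPython, because every reference to `total` and `i` is then a dictionary lookup in the module's globals.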
Why is computing point distances so slow in Python?
[ "python", "performance" ]
My aim is to view all property viewers and planned viewings in the year 2013. I believe I have got 90% of the way to a solution, but at the moment it does not work. **Tables in use** **yr\_viewer** ``` Clientnum, CHAR(5), NOT nullable (PRIMARY KEY 1) Branchnum, CHAR(3), NOT nullable (PRIMARY KEY 2) Prefferedtype, VARCHAR2(15), nullable MAXIMUMRENT, NUMBER (17,2), nullable Finished, NUMBER(1,0), nullable ``` **yr\_viewing** ``` propertynum, CHAR(5), NOT nullable (PRIMARY KEY 1) dateviewed, Date nullable (format - 1-jan-2013) Clientnum, CHAR(5), NOT nullable (PRIMARY KEY 2) Staffnum, CHAR(5), nullable Comments, VARCHAR2 (300), nullable ``` **yr\_Client** ``` Clientnum, CHAR(5), NOT nullable (PRIMARY KEY 1) Firstname Varchar2(20), nullable Lastname Varchar2(20), nullable Address Varchar2(50), nullable Telephonenum Char (13), nullable ``` **My Query** ``` select distinct c.Firstname, c.Lastname, v.PropertyNum, v.DateViewed from yr_viewing, yr_viewer i inner join YR_VIEWING v on i.ClientNum = v.ClientNum inner join YR_CLIENT c on i.ClientNum = c.ClientNum where dateviewed between '01-jan-2013' and '31-dec-2013' ```
There's some trouble around your joins. You're cross-joining `yr_viewing` with `yr_viewer`, then you're joining in `yr_viewing` again. You don't pull any columns from `yr_viewer`, so leave it out of the query altogether. And you have no need to include `yr_viewing` twice. Try something like this: ``` select distinct c.Firstname, c.Lastname, v.PropertyNum, v.DateViewed from yr_viewing v inner join YR_CLIENT c on v.ClientNum = c.ClientNum where dateviewed between '01-jan-2013' and '31-dec-2013' ``` One more thing: your dates will only work if the Oracle `NLS_DATE_FORMAT` is set to DD-MON-YYYY, which it normally isn't. Even if it is you shouldn't trust it. Better to use ANSI date literals and change your `WHERE` clause as follows: ``` where dateviewed between DATE '2013-01-01' and DATE '2013-12-31' ```
Answer: ``` select distinct c.Firstname, c.Lastname, v.PropertyNum, v.DateViewed from yr_viewer VV inner join YR_VIEWING v on VV.ClientNum = v.ClientNum inner join YR_CLIENT c on V.ClientNum = c.ClientNum where TO_CHAR(dateviewed, 'yyyy') = '2013' ``` Edit: Double quotes (Suggestion by Alex)
BETWEEN and JOIN
[ "sql", "oracle", "join", "between" ]
I want to have a daemon that finds images that I need to convert into web and thumb versions. I thought python could be useful here, but I'm not sure if I'm doing things right. I want to convert 8 photos simultaneously; the queue of images to be converted can be very long. We have several cores on the server, and spawning each convert in a new process should let the OS make use of the available cores and things will go faster, right? This is the key point here: to spawn a process from python that in turn calls imagemagick's convert script, and hope that things go a bit faster than running the converts one by one from the python main thread. So far I have only started testing. So here is my test code. It will create 20 tasks (each of which is to sleep between 1 and 5 seconds), and give those tasks to a pool that in total has 5 threads. ``` from multiprocessing import Process from subprocess import call from random import randrange from threading import Thread from Queue import Queue class Worker(Thread): def __init__(self, tid, queue): Thread.__init__(self) self.tid = tid self.queue = queue self.daemon = True self.start() def run(self): while True: sec = self.queue.get() print "Thread %d sleeping for %d seconds\n\n" % (self.tid, sec) p = Process(target=work, args=(sec,)) p.start() p.join() self.queue.task_done() class WorkerPool: def __init__(self, num_workers): self.queue = Queue() for tid in range(num_workers): Worker(tid, self.queue) def add_task(self, sec): self.queue.put(sec) def complete_work(self): self.queue.join() def work(sec): call(["sleep", str(sec)]) def main(): seconds = [randrange(1, 5) for i in range(20)] pool = WorkerPool(5) for sec in seconds: pool.add_task(sec) pool.complete_work() if __name__ == '__main__': main() ``` So I run this script on the server: ``` johanhar@mamadev:~$ python pythonprocesstest.py ``` And then I check my processes on the server: ``` johanhar@mamadev:~$ ps -fux ``` The result from `ps` looks wrong to me.
To me it looks as if everything is happening under python in a single process, so it will only go slower the more converts (or sleeps, as in this test case) I have, even though we have several cores on the server... ``` USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND johanhar 24246 0.0 0.0 81688 1608 ? S 13:44 0:00 sshd: johanhar@pts/28 johanhar 24247 0.0 0.0 108336 1832 pts/28 Ss 13:44 0:00 \_ -bash johanhar 49753 0.6 0.0 530620 7512 pts/28 Sl+ 15:14 0:00 \_ python pythonprocesstest.py johanhar 49822 0.0 0.0 530620 6252 pts/28 S+ 15:14 0:00 \_ python pythonprocesstest.py johanhar 49824 0.0 0.0 100904 564 pts/28 S+ 15:14 0:00 | \_ sleep 4 johanhar 49823 0.0 0.0 530620 6256 pts/28 S+ 15:14 0:00 \_ python pythonprocesstest.py johanhar 49826 0.0 0.0 100904 564 pts/28 S+ 15:14 0:00 | \_ sleep 3 johanhar 49837 0.0 0.0 530620 6264 pts/28 S+ 15:14 0:00 \_ python pythonprocesstest.py johanhar 49838 0.0 0.0 100904 564 pts/28 S+ 15:14 0:00 | \_ sleep 3 johanhar 49846 0.0 0.0 530620 6264 pts/28 S+ 15:14 0:00 \_ python pythonprocesstest.py johanhar 49847 0.0 0.0 100904 564 pts/28 S+ 15:14 0:00 \_ sleep 3 ``` In case the problem or my question is still unclear: is this approach what you could call "multi core programming"?
I think you are misreading the `ps` output. I count 4 distinct Python instances, each which could, in principle, be allocated to its own core. Whether they actually do get their own core is one of the harder bits of multi-processing. Yes, there is the superior Python process (PID 49753) which is parent to the sub-processes, but there is also a `bash` which is parent to that in an analogous way.
Short & direct: yes, you are running multiple `convert` processes on multiple cores. Longer & slightly indirect: I wouldn't call it "multicore programming", even if it actually is, because that wording usually means running multiple threads of your program on multiple cores, and you're not doing that (at least in CPython, python threads are subject to the GIL and cannot actually run at the same time on multiple cores). Also, you don't need to parallelize your python code, because that's not your bottleneck (you are spending your time in `convert`, not in python code). If you only want to parallelize `convert`, you don't even need any threading or other fancy stuff in your python code. The python script could just be sequential and loop through the photos, spawning new convert processes until you reach the number you like. Then just sit waiting for one of them to finish and spawn a new one; repeat as needed for all your photos. (But I do agree that threads make for more natural and elegant code than that sort of wait-for-event loop.)
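That spawn-then-wait loop might look something like the sketch below (my illustration, not the original poster's code; the ImageMagick command line in the usage note is hypothetical):

```python
import subprocess


def run_limited(commands, max_procs=8):
    """Run external commands with at most max_procs alive at once."""
    running, codes = [], []
    for cmd in commands:
        running.append(subprocess.Popen(cmd))
        if len(running) >= max_procs:
            # Wait for the oldest process to finish before starting another.
            codes.append(running.pop(0).wait())
    # Drain whatever is still running at the end.
    codes.extend(p.wait() for p in running)
    return codes
```

Usage would be along the lines of `run_limited([["convert", src, thumb] for src, thumb in jobs], max_procs=8)`, letting the OS schedule the eight `convert` processes across the available cores.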
Am I doing multi core programming the right way here
[ "python", "multithreading", "process", "multicore" ]
What is the best way to normalize information about a `spouse` in a database containing information about `persons`? The data includes: ``` person_id first name middle name last name phone number address vehicle house health destiny items spouse first name spouse middle name spouse last name spouse phone number spouse address ``` I was thinking of keeping a single table to hold all `persons` (spouse or otherwise) and of marking a row as a `spouse` if it has a value referencing the `person_id` of another `person`. Is this sort of self-reference advisable? I was also going to create tables for repeating data, such as `health`, `vehicle`, etc.
Normalizing spousal information would include removing the `spouse *` columns. If you want a self-referencing table, you should have a `spouse_id` column that references `person_id`; but don't repeat all the spousal information like name, address, and phone number. For one-to-many relationships like person-to-vehicle, yes, you will want tables on the "many" end (e.g. `vehicle`) with a `person_id` FK column. Also, strongly consider breaking `address` out to its own table. If you are planning to store all the elements of an address in this one column, that is very denormalized (< [3NF](http://en.wikipedia.org/wiki/Third_normal_form)): they should be broken out into distinct columns (e.g. `street`, `municipality`, `region` etcetera); and these really beg to be in a distinct table. Are self-referencing tables advisable? It really depends on the situation; but they make sense where they come up naturally in data in my experience: I think a generic "person" scenario as you have outlined it qualifies. In contrast, consider a rather contrived "picture" scenario - table `picture` containing a `of_picture_id` column to cover pictures of pictures...of pictures.... (Hmmm, now that doesn't sound so contrived to me...; but hopefully you get the idea.)
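The self-referencing `spouse_id` design can be demonstrated end to end with an in-memory SQLite database (the table and column names below are illustrative, not the question's full schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        person_id  INTEGER PRIMARY KEY,
        first_name TEXT,
        phone      TEXT,
        spouse_id  INTEGER REFERENCES person(person_id)  -- self-reference
    )
""")
conn.execute("INSERT INTO person VALUES (1, 'Ann', '555-0001', 2)")
conn.execute("INSERT INTO person VALUES (2, 'Bob', '555-0002', 1)")

# Find person 1's spouse by joining the table to itself.
spouse = conn.execute("""
    SELECT s.first_name
    FROM person p
    JOIN person s ON p.spouse_id = s.person_id
    WHERE p.person_id = 1
""").fetchone()[0]
print(spouse)  # prints: Bob
```

Note that each person's name, phone, and address live only on their own row; the spouse row carries no duplicated data.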
Spouse is also a "person", hence the spouse's details have to be captured as a separate record. The only way that can be done is by introducing `spouse_id` as a self-referential key. The table you have shown is not normalized, since a person's record contains another person's details. I suggest you modify the 'persons' schema the following way ``` person_id first_name middle_name last_name phone_number address vehicle house health destiny items spouse_id ```
Normalizing self referencing attributes
[ "sql", "database-design", "relational-database", "normalization" ]
I have a 1D array in numpy and I want to find the index where a value in the array first exceeds a given value. E.g. ``` aa = range(-10,10) ``` Find the position in `aa` where the value `5` is first exceeded.
This is a little faster (and looks nicer) ``` np.argmax(aa>5) ``` Since [`argmax`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html) will stop at the first `True` ("In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.") and doesn't save another list. ``` In [2]: N = 10000 In [3]: aa = np.arange(-N,N) In [4]: timeit np.argmax(aa>N/2) 100000 loops, best of 3: 52.3 us per loop In [5]: timeit np.where(aa>N/2)[0][0] 10000 loops, best of 3: 141 us per loop In [6]: timeit np.nonzero(aa>N/2)[0][0] 10000 loops, best of 3: 142 us per loop ```
Given the sorted content of your array, there is an even faster method: [searchsorted](http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html). ``` import time N = 10000 aa = np.arange(-N,N) %timeit np.searchsorted(aa, N/2)+1 %timeit np.argmax(aa>N/2) %timeit np.where(aa>N/2)[0][0] %timeit np.nonzero(aa>N/2)[0][0] # Output 100000 loops, best of 3: 5.97 µs per loop 10000 loops, best of 3: 46.3 µs per loop 10000 loops, best of 3: 154 µs per loop 10000 loops, best of 3: 154 µs per loop ```
Numpy first occurrence of value greater than existing value
[ "python", "numpy" ]
Trying to install python-spidermonkey using pip on Mac OS fails because nspr is missing: ``` $ pip install python-spidermonkey Downloading/unpacking python-spidermonkey Running setup.py egg_info for package python-spidermonkey Traceback (most recent call last): File "<string>", line 16, in <module> File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 186, in <module> **platform_config() File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 143, in platform_config return nspr_config(config=config) File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 87, in nspr_config return pkg_config("nspr", config) File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 59, in pkg_config raise RuntimeError("No package configuration found for: %s" % pkg_name) RuntimeError: No package configuration found for: nspr Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 16, in <module> File "/Users/shengjiemin/work/Ceilo-ENV/build/python-spidermonkey/setup.py", line 186, in <module> **platform_config() File "/Users/smin/rmonkey/setup.py", line 143, in platform_config return nspr_config(config=config) File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 87, in nspr_config return pkg_config("nspr", config) File "/Users/smin/ENV/build/python-spidermonkey/setup.py", line 59, in pkg_config raise RuntimeError("No package configuration found for: %s" % pkg_name) RuntimeError: No package configuration found for: nspr ---------------------------------------- Command python setup.py egg_info failed with error code 1 in /Users/smin/ENV/build/python-spidermonkey ``` I then tried to install nspr: ``` sudo port install nspr ``` but it didn't make any difference; still the same error. Any ideas?
Finally I got this fixed myself by following the thread here: <http://davisp.lighthouseapp.com/projects/26898/tickets/38-trouble-installing-on-mac-os-x-107> Two steps: 1. Move the files from the Darwin-XXX folder inside spidermonkey to spidermonkey/libjs to get past this error. ``` cd /home/smin/virt-ENV/build/python-spidermonkey/spidermonkey mv Darwin-i386/* libjs/ ``` 2. Make the change below in the file `libjs/jsutil`, from: ``` typedef int js_static_assert_line##line[(condition) ? 1 : -1] ``` to: ``` typedef int js_static_assert_line##line[(condition) ? 1 : 0] ```
I found a workaround (the third line below) for installing on MacOS 10.9. ``` $ brew install pkg-config $ brew install nspr $ cd /usr/local/lib/pkgconfig/ # IMPORTANT $ ln -s nspr.pc ../../Cellar/nspr/4.10.8_1/lib/pkgconfig/nspr.pc $ git clone git://github.com/davisp/python-spidermonkey.git $ cd python-spidermonkey $ cp spidermonkey/Linux-x86_64/* spidermonkey/libjs/ $ python setup.py build $ python setup.py test # test failed, but it's okay to ignore $ sudo python setup.py install ``` This works fine with my python code, which uses the spidermonkey module like below. Hope this helps. ``` rt = spidermonkey.Runtime() cx = rt.new_context() cx.execute(js) cx.execute("...") ```
Mac OS - Failed to install python-spidermonkey because nspr not found
[ "python", "macos", "nspr" ]