How can I select an attribute ONLY if another attribute is NULL? I know the long way of doing this: ``` select val1 from test_table where val1 is not null OR select val2 from test_table where val1 is null ``` The bad part of this solution is that if I have a very long and complicated select, then I need to write it twice... E.g.: ``` select val1 from test_table where condition1 AND condition2 AND condition3 AND val1 is not null OR select val2 from test_table where condition1 AND condition2 AND condition3 AND val1 is null ``` So I hope there is maybe a shorter form for this. Anyway, this is something like an IF or CASE in programming languages (I think). Any idea is appreciated. First idea: something inside the select could shorten the second select.
``` select coalesce(val1, val2) from test_table ```
``` SELECT ISNULL(val1,val2) FROM test_table ``` Or ``` SELECT COALESCE(val1,val2) FROM test_table ``` I prefer the latter because you can have multiple values: ``` SELECT COALESCE(val1,val2,val3,val4,....,valn) FROM test_table ``` which means if val1 is null then take val2; if that is null too, take val3; and so on until it gets to valn.
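For a quick way to see COALESCE's first-non-NULL behavior without a database server, here is a sketch using Python's sqlite3 module (the sample rows are invented; the question targets Oracle, but COALESCE is standard SQL):

```python
import sqlite3

# Hypothetical table mirroring the question's test_table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (val1 TEXT, val2 TEXT)")
conn.executemany(
    "INSERT INTO test_table VALUES (?, ?)",
    [("a", "x"), (None, "y"), (None, None)],
)

# COALESCE returns the first non-NULL argument for each row.
rows = conn.execute(
    "SELECT COALESCE(val1, val2, 'fallback') FROM test_table ORDER BY rowid"
).fetchall()
print(rows)  # [('a',), ('y',), ('fallback',)]
```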
Select another attribute if the original attribute is NULL (Conditional select)
[ "", "sql", "select", "null", "oracle10g", "" ]
I have a table with the name "TmpTable". This table has a parent-child relationship; its data looks like: ``` Id | Name | ParentId 1 PCs 0 2 MACs 1 3 Keyboard 0 4 Mouse 0 5 Screen 3 6 Keyboard 4 7 Mouse 0 8 Screen 0 9 Key 0 10 xyz 9 ``` Now I want to select an additional column showing a sequence number (SNo), such that a parent and its children share the same SNo, and rows without any child get an increased SNo. The result should look like this: ``` Id | Name | ParentId | SNo 1 PCs 0 1 2 MACs 1 1 3 Keyboard 0 2 4 Mouse 0 3 5 Screen 3 2 6 Keyboard 4 3 7 Mouse 0 4 8 Screen 0 5 9 Key 0 6 10 xyz 9 6 ``` How can I achieve this result? Please guide/help me with this.
You can use [`DENSE_RANK()`](http://technet.microsoft.com/en-us/library/ms173825.aspx) function in combination with `ORDER BY CASE`: ``` SELECT *, DENSE_RANK() OVER (ORDER BY CASE WHEN ParentID = 0 THEN ID ELSE ParentID END) FROM TmpTable ORDER BY Id ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/76120/10)**
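The ranking logic can be checked outside SQL; this plain-Python sketch reproduces what DENSE_RANK() over the CASE key computes for the question's rows:

```python
# Rows from the question: (Id, Name, ParentId)
rows = [
    (1, "PCs", 0), (2, "MACs", 1), (3, "Keyboard", 0), (4, "Mouse", 0),
    (5, "Screen", 3), (6, "Keyboard", 4), (7, "Mouse", 0), (8, "Screen", 0),
    (9, "Key", 0), (10, "xyz", 9),
]

# The CASE expression: roots rank by their own Id, children by ParentId.
def group_key(row):
    id_, _, parent = row
    return id_ if parent == 0 else parent

# DENSE_RANK assigns consecutive ranks to distinct keys in sorted order.
distinct_keys = sorted({group_key(r) for r in rows})
rank = {k: i + 1 for i, k in enumerate(distinct_keys)}

sno = [rank[group_key(r)] for r in rows]
print(sno)  # [1, 1, 2, 3, 2, 3, 4, 5, 6, 6]
```

The output matches the SNo column the question asks for.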
You can try it like this. ``` ;with cte as (Select Row_Number() Over(Order by Id) as Row,Id from Table1 where Parentid=0 ) Select A.*,b.row as Sno from Table1 A inner join cte as b on b.id=a.parentid or (b.id=a.id and a.parentid=0) order by a.id; ``` [**Sql Fiddle Demo**](http://sqlfiddle.com/#!3/d88cc/12)
Assign Same SNo to Parent and its Child
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm creating tables using the pandas to\_html function, and I'd like to be able to highlight the bottom row of the outputted table, which is of variable length. I don't have any real experience of HTML to speak of, and all I found online was this: ``` <table border="1"> <tr style="background-color:#FF0000"> <th>Month</th> <th>Savings</th> </tr> <tr> <td>January</td> <td>$100</td> </tr> </table> ``` So I know that the final row must have `<tr style="background-color:#FF0000">` (or whatever colour I want) rather than just `<tr>`, but what I don't really know is how to get this to occur with the tables I'm making. I don't think I can do it with the to\_html function itself, but how can I do it after the table has been created? Any help is appreciated.
You can do it in javascript using jQuery: ``` $('table tbody tr').filter(':last').css('background-color', '#FF0000') ``` Also newer versions of pandas add a class `dataframe` to the table html so you can filter out just the pandas tables using: ``` $('table.dataframe tbody tr').filter(':last').css('background-color', '#FF0000') ``` But you can add your own classes if you want: ``` df.to_html(classes='my_class') ``` Or even multiple: ``` df.to_html(classes=['my_class', 'my_other_class']) ``` If you are using the IPython Notebook here is the full working example: ``` In [1]: import numpy as np import pandas as pd from IPython.display import HTML, Javascript In [2]: df = pd.DataFrame({'a': np.arange(10), 'b': np.random.randn(10)}) In [3]: HTML(df.to_html(classes='my_class')) In [4]: Javascript('''$('.my_class tbody tr').filter(':last') .css('background-color', '#FF0000'); ''') ``` Or you can even use plain CSS: ``` In [5]: HTML(''' <style> .df tbody tr:last-child { background-color: #FF0000; } </style> ''' + df.to_html(classes='df')) ``` The possibilities are endless :) ***Edit:** create an html file* ``` import numpy as np import pandas as pd HEADER = ''' <html> <head> <style> .df tbody tr:last-child { background-color: #FF0000; } </style> </head> <body> ''' FOOTER = ''' </body> </html> ''' df = pd.DataFrame({'a': np.arange(10), 'b': np.random.randn(10)}) with open('test.html', 'w') as f: f.write(HEADER) f.write(df.to_html(classes='df')) f.write(FOOTER) ```
Since pandas has styling functionality now, you don't need JavaScript hacks anymore. This is a pure pandas solution: ``` import pandas as pd df = [] df.append(dict(date='2016-04-01', sleep=11.2, calories=2740)) df.append(dict(date='2016-04-02', sleep=7.3, calories=3600)) df.append(dict(date='2016-04-03', sleep=8.3, calories=3500)) df = pd.DataFrame(df) def highlight_last_row(s): return ['background-color: #FF0000' if i==len(s)-1 else '' for i in range(len(s))] s = df.style.apply(highlight_last_row) ``` [![enter image description here](https://i.stack.imgur.com/z474p.png)](https://i.stack.imgur.com/z474p.png)
Pandas Dataframes to_html: Highlighting table rows
[ "", "python", "html", "pandas", "" ]
I am trying to create a list named "userlist" with all the usernames listed beside "List:". My idea is to parse the line with "List:" and then split it on "," and put the parts in a list; however, I am not able to capture the line. Any input on how this can be achieved? ``` output=""" alias: tech.sw.host name: tech.sw.host email: tech.sw.host email2: tech.sw.amss type: email list look_elsewhere: /usr/local/mailing-lists/tech.sw.host text: List tech SW team list_supervisor: <username> List: username1,username2,username3,username4, : username5 Members: User1,User2, : User3,User4, : User5 """ #print output userlist = [] for line in output : if "List" in line: print line ```
Using `regex`, `str.translate` and `str.split` : ``` >>> import re >>> from string import whitespace >>> strs = re.search(r'List:(.*)(\s\S*\w+):', ph, re.DOTALL).group(1) >>> strs.translate(None, ':'+whitespace).split(',') ['username1', 'username2', 'username3', 'username4', 'username5'] ``` You can also create a dict here, which will allow you to access any attribute: ``` def func(lis): return ''.join(lis).translate(None, ':'+whitespace) lis = [x.split() for x in re.split(r'(?<=\w):',ph.strip(), re.DOTALL)] dic = {} for x, y in zip(lis[:-1], lis[1:-1]): dic[x[-1]] = func(y[:-1]).split(',') dic[lis[-2][-1]] = func(lis[-1]).split(',') print dic['List'] print dic['Members'] print dic['alias'] ``` **Output:** ``` ['username1', 'username2', 'username3', 'username4', 'username5'] ['User1', 'User2', 'User3', 'User4', 'User5'] ['tech.sw.host'] ```
If it were me, I'd parse the entire input so as to have easy access to every field: ``` inFile = StringIO.StringIO(ph) d = collections.defaultdict(list) for line in inFile: line = line.partition(':') key = line[0].strip() or key d[key] += [part.strip() for part in line[2].split(',')] print d['List'] ```
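A runnable sketch of this approach, using an abridged copy of the question's sample text (continuation lines start with ":", and a blank key means the line continues the previous field):

```python
import collections

# Abridged sample text from the question.
output = """alias: tech.sw.host
name: tech.sw.host
List: username1,username2,username3,username4,
 : username5
Members: User1,User2,
 : User3,User4,
 : User5
"""

d = collections.defaultdict(list)
key = None
for line in output.splitlines():
    before, _, after = line.partition(":")
    # An empty "before" part means this line continues the previous key.
    key = before.strip() or key
    d[key] += [part.strip() for part in after.split(",") if part.strip()]

print(d["List"])
# ['username1', 'username2', 'username3', 'username4', 'username5']
```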
Capturing the usernames after the "List:" tag
[ "", "python", "" ]
In Visual Studio I want to state that if a calculation box is blank, due to there being no figures available on that particular project, then it should show zero. My calculation is very simple: ``` =(ReportItems!textbox21.Value) / (ReportItems!textbox19.Value) ``` For this I wrote an IIf statement: ``` =IIf( IsNothing(ReportItems!textbox21.Value) Or IsNothing(ReportItems!textbox19.Value), 0, ((ReportItems!textbox21.Value)/(ReportItems!textbox19.Value))) ``` But this still shows as #Error if there is a blank in either textbox21 or textbox19; see the picture below. ![enter image description here](https://i.stack.imgur.com/fGevV.png) Can anyone advise on how to fix this?
I have managed to fix it by inserting the function below: ``` Function Divide(Numerator as Double, Denominator as Double) If Denominator = 0 Then Return 0 Else Return Numerator/Denominator End If End Function ``` and then rewriting my calculation as: ``` =Code.Divide(ReportItems!textbox21.Value, ReportItems!textbox19.Value) ```
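For reference, the same guard expressed in Python (the code above is SSRS custom code in VB; this sketch only illustrates the logic):

```python
def divide(numerator, denominator):
    # Mirror the SSRS custom code: return 0 instead of erroring
    # on a zero (or missing) denominator.
    if not denominator:
        return 0
    return numerator / denominator

print(divide(10.0, 4.0))  # 2.5
print(divide(10.0, 0.0))  # 0
```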
Can you try, instead of IsNothing, converting the value to a string and comparing it with an empty string? `IsNothing(ReportItems!textbox21.Value)` would change to `ReportItems!textbox21.Value.ToString() = ""`
Trying to show zero instead of #Error in Visual Studio
[ "", "sql", "visual-studio", "visual-studio-2008", "" ]
I'm using Pervasive SQL 10.3 (let's just call it MS SQL since almost everything is the same regarding syntax) and I have a query to find duplicate customers using their email address as the duplicate key: ``` SELECT arcus.idcust, arcus.email2 FROM arcus INNER JOIN ( SELECT arcus.email2, COUNT(*) FROM arcus WHERE RTRIM(arcus.email2) != '' GROUP BY arcus.email2 HAVING COUNT(*)>1 ) dt ON arcus.email2=dt.email2 ORDER BY arcus.email2 ``` My problem is that I need to do a case-insensitive search on the email2 field. I'm required to use UPPER() for the conversion of those fields. I'm a little stuck on how to use UPPER() in this query. I've tried all sorts of combinations, including one that I thought for sure would work: ``` ... ON UPPER(arcus.email2)=UPPER(dt.email2) ... ``` ... but that didn't work. It was accepted as a valid query, but it ran for so long that I eventually gave up and stopped it. Any idea how to do the UPPER conversion on the email2 field? Thanks!
If your database is set up to be case sensitive, then your inner query will have to take account of this to perform the grouping as you intended. If it is not case sensitive, then you won't require UPPER functions. Assuming your database IS case sensitive, you could try the query below. Maybe this will run faster... ``` SELECT arcus.idcust, arcus.email2 FROM arcus INNER JOIN ( SELECT UPPER(arcus.email2) as upperEmail2, COUNT(*) FROM arcus WHERE RTRIM(arcus.email2) != '' GROUP BY UPPER(arcus.email2) HAVING COUNT(*)>1 ) dt ON UPPER(arcus.email2) = dt.upperEmail2 ```
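A sketch of this query shape against sqlite3 as a stand-in engine (the sample rows are invented; Pervasive/MS SQL performance characteristics differ, but the UPPER-once-in-the-subquery structure is the same):

```python
import sqlite3

# Hypothetical customer table; email2 stored with mixed case.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE arcus (idcust INTEGER, email2 TEXT)")
conn.executemany(
    "INSERT INTO arcus VALUES (?, ?)",
    [(1, "Bob@x.com"), (2, "bob@X.COM"), (3, "carol@y.com"), (4, "")],
)

# Group on UPPER(email2) once, then join back on the uppercased value.
dupes = conn.execute("""
    SELECT arcus.idcust, arcus.email2
    FROM arcus
    INNER JOIN (
        SELECT UPPER(email2) AS upperEmail2
        FROM arcus
        WHERE RTRIM(email2) != ''
        GROUP BY UPPER(email2)
        HAVING COUNT(*) > 1
    ) dt ON UPPER(arcus.email2) = dt.upperEmail2
    ORDER BY arcus.idcust
""").fetchall()
print(dupes)  # [(1, 'Bob@x.com'), (2, 'bob@X.COM')]
```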
The collation of a character string will determine how SQL Server compares character strings. If you store your data using a case-insensitive format then when comparing the character string “AAAA” and “aaaa” they will be equal. You can place a collate Latin1\_General\_CI\_AS for your email column in the where clause. Check the link below for how to implement collation in a sql query. [How to do a case sensitive search in WHERE clause](https://stackoverflow.com/questions/1831105/how-to-do-a-case-sensitive-search-in-where-clause-im-using-sql-server)
Need to UPPER SQL statement with INNER JOIN SELECT
[ "", "sql", "select", "inner-join", "" ]
At my organization, clients can be enrolled in multiple programs at one time. I have a table with all of the programs a client has been enrolled in as unique rows, along with the dates they were enrolled in each program. Using an external join, I can take any client name and a date from a table (say, a table of tests that the clients have completed) and have it return all of the programs that client was in on that particular date. If a client was in multiple programs on that date, it duplicates the data from that table for each program they were in on that date. The problem is that I want it to return only one program as the "Primary Program" for each client and date, even if they were in multiple programs on that date. I have created a hierarchy for which program should be selected as the primary program and returned, for example: 1.) Inpatient, 2.) Outpatient Clinical, 3.) Outpatient Vocational, 4.) Outpatient Recreational. So if a client was enrolled in Outpatient Clinical, Outpatient Vocational, and Outpatient Recreational at the same time on that date, it would only return "Outpatient Clinical" as the program.
My way of thinking for doing this would be to join to the table with the previous programs multiple times like this: ``` FROM dbo.TestTable as TestTable LEFT OUTER JOIN dbo.PreviousPrograms as PreviousPrograms1 ON TestTable.date = PreviousPrograms1.date AND PreviousPrograms1.type = 'Inpatient' LEFT OUTER JOIN dbo.PreviousPrograms as PreviousPrograms2 ON TestTable.date = PreviousPrograms2.date AND PreviousPrograms2.type = 'Outpatient Clinical' LEFT OUTER JOIN dbo.PreviousPrograms as PreviousPrograms3 ON TestTable.date = PreviousPrograms3.date AND PreviousPrograms3.type = 'Outpatient Vocational' LEFT OUTER JOIN dbo.PreviousPrograms as PreviousPrograms4 ON TestTable.date = PreviousPrograms4.date AND PreviousPrograms4.type = 'Outpatient Recreational' ``` and then do a condition CASE WHEN in the SELECT statement as such: ``` SELECT CASE WHEN PreviousPrograms1.name IS NOT NULL THEN PreviousPrograms1.name WHEN PreviousPrograms1.name IS NULL AND PreviousPrograms2.name IS NOT NULL THEN PreviousPrograms2.name WHEN PreviousPrograms1.name IS NULL AND PreviousPrograms2.name IS NULL AND PreviousPrograms3.name IS NOT NULL THEN PreviousPrograms3.name WHEN PreviousPrograms1.name IS NULL AND PreviousPrograms2.name IS NULL AND PreviousPrograms3.name IS NOT NULL AND PreviousPrograms4.name IS NOT NULL THEN PreviousPrograms4.name ELSE NULL END as PrimaryProgram ``` The bigger problem is that in my actual table there are a lot more than just four possible programs it could be and the CASE WHEN select statement and the JOINs are already cumbersome enough. Is there a more efficient way to do either the SELECTs part or the JOIN part? Or possibly a better way to do it all together? I'm using SQL Server 2008.
You can simplify (replace) your `CASE` by using [`COALESCE()`](http://technet.microsoft.com/en-us/library/ms190349.aspx) instead: ``` SELECT COALESCE(PreviousPrograms1.name, PreviousPrograms2.name, PreviousPrograms3.name, PreviousPrograms4.name) AS PreviousProgram ``` `COALESCE()` returns the first non-null value. Due to your design, you still need the `JOIN`s, but it would be much easier to read if you used very short aliases, for example `PP1` instead of `PreviousPrograms1` - it's just a lot less code noise.
You can simplify the Join by using a bridge table containing all the program types and their priority (my sql server syntax is a bit rusty): ``` create table BridgeTable ( programType varchar(30), programPriority smallint ); ``` This table will hold all the program types and the program priority will reflect the priority you've specified in your question. As for the part of the case, that will depend on the number of records involved. One of the tricks that I usually do is this (assuming programPriority is a number between 10 and 99 and no type can have more than 30 bytes, because I'm being lazy): ``` Select patient, date, substr( min(cast(BridgeTable.programPriority as varchar) || PreviousPrograms.type), 3, 30) From dbo.TestTable as TestTable Inner Join dbo.BridgeTable as BridgeTable Left Outer Join dbo.PreviousPrograms as PreviousPrograms on PreviousPrograms.type = BridgeTable.programType and TestTable.date = PreviousPrograms.date Group by patient, date ```
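The priority-pick logic itself can be sketched in plain Python (the priority numbers and the enrollment list are invented for illustration):

```python
# Hypothetical priority map, mirroring the bridge-table idea;
# a lower number means a higher priority.
priority = {
    "Inpatient": 1,
    "Outpatient Clinical": 2,
    "Outpatient Vocational": 3,
    "Outpatient Recreational": 4,
}

# Programs a client happens to be enrolled in on some date (invented).
enrolled = [
    "Outpatient Recreational",
    "Outpatient Clinical",
    "Outpatient Vocational",
]

# min() by priority picks the primary program, like MIN() over the
# priority-prefixed strings in the SQL above.
primary = min(enrolled, key=priority.get)
print(primary)  # Outpatient Clinical
```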
More efficient way of doing multiple joins to the same table and a "case when" in the select
[ "", "sql", "sql-server-2008", "" ]
I'd like to retrieve some rows utilizing my index on Columns A and B. I was told the only way to ensure my index is being used to retrieve the rows is to use an ORDER by clause, for example: ``` A B offset 1 5 1 1 4 2 2 5 3 2 4 4 SELECT A,B FROM TableX WHERE offset > 0 AND offset < 5 ORDER BY A,B ASC ``` but then I would like my results for just those rows returned to be ordered by column B and not A,B. ``` A B 1 4 2 4 2 5 1 5 ``` How can I do this and still ensure my index is being used and not a full table scan? If I was to use ORDER BY B then doesn't this mean MySQL will scan by B and defeat the purpose of having the two column index?
Any index that includes the A or B columns will have no effect on your query, regardless of your `ORDER BY`. You need an index on `offset`, as that is the field being used in the `WHERE` clause.
Sorry, but maybe I did not understand the question. The query above should output: ``` A B 1 4 1 5 2 4 2 5 ``` To avoid a table scan, you should add an index on the offset column and use it in your WHERE clause. If the column can be made unique, use a unique index: > CREATE UNIQUE INDEX offsetidx ON TableX (offset); or > CREATE INDEX offsetidx ON TableX (offset);
Using ORDER BY while still maintaining use of index
[ "", "mysql", "sql", "sql-order-by", "" ]
I am trying to print the day and month as 2-digit numbers. ``` timestamp = date.today() difference = timestamp - datetime.timedelta(localkeydays) localexpiry = '%s%s%s' % (difference.year, difference.month, difference.day) print localexpiry ``` This gives the output as `201387`. Is there any way to get the output as `20130807`? I need this because I am comparing it against a string of a similar format.
Use date formatting with [`date.strftime()`](http://docs.python.org/2/library/datetime.html#datetime.date.strftime): ``` difference.strftime('%Y%m%d') ``` Demo: ``` >>> from datetime import date >>> difference = date.today() >>> difference.strftime('%Y%m%d') '20130807' ``` You can do the same with the separate integer components of the `date` object, but you need to use the right string formatting parameters; to format an integer to two digits with leading zeros, use `%02d`, for example: ``` localexpiry = '%04d%02d%02d' % (difference.year, difference.month, difference.day) ``` but using `date.strftime()` is more efficient.
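Applying this to the question's snippet, with a fixed date and an invented 30-day offset so the result is reproducible:

```python
from datetime import date, timedelta

# Fixed date instead of date.today() so the result is deterministic.
timestamp = date(2013, 8, 7)
localkeydays = 30  # invented value for the sketch
difference = timestamp - timedelta(days=localkeydays)

# strftime zero-pads month and day automatically.
localexpiry = difference.strftime("%Y%m%d")
print(localexpiry)  # 20130708
```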
You can also use [format](http://docs.python.org/2/library/functions.html#format) ([datetime](http://docs.python.org/2/library/datetime.html#datetime.datetime.__format__), [date](http://docs.python.org/2/library/datetime.html#datetime.date.__format__) have \_\_format\_\_ method): ``` >>> import datetime >>> dt = datetime.date.today() >>> '{:%Y%m%d}'.format(dt) '20130807' >>> format(dt, '%Y%m%d') '20130807' ```
Two-Digit dates in Python
[ "", "python", "date", "" ]
I've applied the basic example of django-filters with the following setup: models.py ``` class Shopper(models.Model): FirstName = models.CharField(max_length=30) LastName = models.CharField(max_length=30) Gender_CHOICES = ( ('','---------'), ('Male','Male'), ('Female','Female'), ) Gender = models.CharField(max_length=6, choices=Gender_CHOICES, default=None) School_CHOICES = ( ('','---------'), (u'1', 'Primary school'), (u'2', 'High School'), (u'3', 'Apprenticeship'), (u'4', 'BsC'), (u'5', 'MsC'), (u'6', 'MBA'), (u'7', 'PhD'), ) HighestSchool = models.CharField(max_length=40, blank = True, choices = School_CHOICES,default=None) ``` views.py: ``` def shopperlist(request): f = ShopperFilter(request.GET, queryset=Shopper.objects.all()) return render(request,'mysapp/emailcampaign.html', {'filter': f}) ``` urls.py: ``` url(r'^emailcampaign/$', views.shopperlist, name='EmailCampaign'), ``` template: ``` {% extends "basewlogout.html" %} {% block content %} {% csrf_token %} <form action="" method="get"> {{ filter.form.as_p }} <input type="submit" name="Filter" value="Filter" /> </form> {% for obj in filter %} <li>{{ obj.FirstName }} {{ obj.LastName }} {{ obj.Email }}</li> {% endfor %} <a href="{% url 'UserLogin' %}"> <p> Return to home page. </p> </a> {% endblock %} ``` forms.py: ``` class ShopperForm(ModelForm): class Meta: model = Shopper ``` The empty choices `('','---------')` were added to make sure django-filters will display them during filtering and allow the field to be left unspecified. However, for the non-mandatory `HighestSchool` field the empty choice displays twice when using the model in a create scenario with `ModelForm`, i.e. `('','---------')` should not be listed for a non-mandatory field. But without it, the empty\_label cannot be selected during filtering... How can this be solved so that only one empty\_label is listed in the create view, while both mandatory and non-mandatory fields can still be left unspecified during filtering?
You can accomplish this within the `__init__` by updating the `empty_label` directly on the filter, thus avoiding the need to redefine *all* options. ``` class FooFilter(django_filters.FilterSet): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.filters["some_choice_field"].extra.update(empty_label="All") class Meta: model = Foo fields = [ "some_choice_field", ] ```
First of all, you should not add the blank choice to your choices set, so please remove the following lines: ``` ('','---------'), ``` Django will automatically add these when the field is not required, and everything (Django models, forms, views etc.) *except* django-filter will work as expected. Now, concerning django-filter: what you describe is a common requirement, already present in the issues of the [django-filter project](https://github.com/alex/django-filter). Please take a look here for a possible workaround: <https://github.com/alex/django-filter/issues/45>. I will just copy-paste cvk77's code from there for reference: ``` class NicerFilterSet(django_filters.FilterSet): def __init__(self, *args, **kwargs): super(NicerFilterSet, self).__init__(*args, **kwargs) for name, field in self.filters.iteritems(): if isinstance(field, ChoiceFilter): # Add "Any" entry to choice fields. field.extra['choices'] = tuple([("", "Any"), ] + list(field.extra['choices'])) ``` So to use that, just extend your filters from NicerFilterSet instead of the normal FilterSet. Alternatively, a quick-and-dirty solution is the following. Define a global variable in your models.py: ``` GENDER_CHOICES = ( ('Male','Male'), ('Female','Female'), ) ``` and use it as expected in your Model. Now, in your filters.py define another global variable like this: ``` FILTER_GENDER_CHOICES = list(models.GENDER_CHOICES) FILTER_GENDER_CHOICES.insert(0, ('','---------') ) ``` and then define your filter like this: ``` class ShopperFilter(django_filters.FilterSet): gender = django_filters.ChoiceFilter(choices= FILTER_GENDER_CHOICES ) class Meta: model = models.Shopper ```
How to display empty_label for non-mandatory fields with django-filter?
[ "", "python", "django", "django-filters", "" ]
I have been working on learning Python over the last week and it has been going really well; however, I have now been introduced to custom functions and I have sort of hit a wall. I understand the basics, such as: ``` def helloworld(): print("Hello World!") helloworld() ``` I know this will print "Hello World!". However, when it comes to getting information from one function to another, I find it confusing, e.g. when function1 and function2 have to work together to perform a task. I am also unsure about when to use the `return` statement. Lastly, suppose I have a list or a dictionary inside of a function. I'll make something up just as an example: ``` def my_function(): my_dict = {"Key1":Value1, "Key2":Value2, "Key3":Value3, "Key4":Value4,} ``` How would I access the keys/values and be able to change them from outside of the function? E.g., if I had a program that let you input/output player stats or character attributes in a video game. I understand bits and pieces of this; it just confuses me when different functions call each other. Also, since this was my first encounter with custom functions: is this really ambitious to pursue, and could that be the reason for all of my confusion? This is the most complex program I have attempted yet.
`Functions` in Python can act as both a regular `procedure` and a `function` with a return value. Actually, every Python function will return a value, which might be `None`. If a return statement is not present, your function body will be executed completely and exit normally, yielding `None` as the return value. ``` def foo(): pass foo() == None >>> True ``` If you have a `return` statement inside your function, the return value will be the **value of the expression following it**. For example, you may write `return None` and you'll be explicitly returning `None`. You can also have a bare `return`, which implicitly returns `None`, or you can have `return 3` and you'll be returning the value 3. This may grow in complexity. ``` def foo(): print('hello') return print('world') foo() >>>'hello' def add(a,b): return a + b add(3,4) >>>7 ``` If you want a dictionary (or any object) you created inside a function, just return it: ``` def my_function(): my_dict = {"Key1":Value1, "Key2":Value2, "Key3":Value3, "Key4":Value4,} return my_dict d = my_function() d['Key1'] >>> Value1 ``` Those are the basics of function calling. There's even more: there are functions that return functions (used, for example, in decorators), you can even return multiple values (not really, you'll just be returning a tuple), and a lot of fun stuff :) ``` def two_values(): return 3,4 a,b = two_values() print(a) >>>3 print(b) >>>4 ``` Hope this helps!
The primary way to pass information between functions is with arguments and return values. Functions can't see each other's variables. You might think that after ``` def my_function(): my_dict = {"Key1":Value1, "Key2":Value2, "Key3":Value3, "Key4":Value4,} my_function() ``` `my_dict` would have a value that other functions would be able to see, but it turns out that's a really brittle way to design a language. Every time you call `my_function`, `my_dict` would lose its old value, even if you were still using it. Also, you'd have to know all the names used by every function in the system when picking the names to use when writing a new function, and the whole thing would rapidly become unmanageable. Python doesn't work that way; I can't think of any languages that do. Instead, if a function needs to make information available to its caller, `return` the thing its caller needs to see: ``` def my_function(): return {"Key1":"Value1", "Key2":"Value2", "Key3":"Value3", "Key4":"Value4",} print(my_function()['Key1']) # Prints Value1 ``` Note that a function ends when its execution hits a `return` statement (even if it's in the middle of a loop); you can't execute one return now, one return later, keep going, and return two things when you hit the end of the function. If you want to do that, keep a list of things you want to return and return the list when you're done.
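To make the "functions working together" part of the question concrete, here is a small sketch (the stat names are made up): one function builds and returns a dict, another receives it as an argument and modifies it.

```python
def make_stats():
    # The dict lives inside this function; returning it is what lets
    # other code see it at all.
    return {"hp": 100, "mp": 50}

def boost(stats, key, amount):
    # The second function receives the dict as an argument,
    # changes it, and hands it back.
    stats[key] += amount
    return stats

player = make_stats()
player = boost(player, "hp", 25)
print(player)  # {'hp': 125, 'mp': 50}
```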
Python custom function
[ "", "python", "function", "python-3.x", "" ]
I have two series `s1` and `s2` in pandas and want to compute the intersection i.e. where all of the values of the series are common. How would I use the `concat` function to do this? I have been trying to work it out but have been unable to (I don't want to compute the intersection on the indices of `s1` and `s2`, but on the values).
Place both series in Python's [set container](https://docs.python.org/2/library/stdtypes.html#set) then use the set intersection method: ``` s1.intersection(s2) ``` and then transform back to list if needed. Just noticed pandas in the tag. Can translate back to that: ``` pd.Series(list(set(s1).intersection(set(s2)))) ``` From comments I have changed this to a more Pythonic expression, which is shorter and easier to read: ``` Series(list(set(s1) & set(s2))) ``` should do the trick, except if the index data is also important to you. Have added the `list(...)` to translate the set before going to pd.Series as pandas does not accept a set as direct input for a Series.
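A minimal sketch of the set-based idea with plain lists (the sample values are invented); the result is sorted because set iteration order is arbitrary:

```python
# Two value collections whose common elements we want.
s1 = [4, 5, 6, 20, 42]
s2 = [1, 2, 3, 5, 42]

# Set intersection on values, sorted for a deterministic order.
common = sorted(set(s1) & set(s2))
print(common)  # [5, 42]
```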
Setup: ``` s1 = pd.Series([4,5,6,20,42]) s2 = pd.Series([1,2,3,5,42]) ``` Timings: ``` %%timeit pd.Series(list(set(s1).intersection(set(s2)))) 10000 loops, best of 3: 57.7 µs per loop %%timeit pd.Series(np.intersect1d(s1,s2)) 1000 loops, best of 3: 659 µs per loop %%timeit pd.Series(np.intersect1d(s1.values,s2.values)) 10000 loops, best of 3: 64.7 µs per loop ``` So the numpy solution can be comparable to the set solution even for small series, if one uses the `values` explicitly.
Finding the intersection between two series in Pandas
[ "", "python", "pandas", "series", "intersection", "" ]
I am working with a NAO robot on a Windows XP machine and Python 2.7. I want to detect markers in speech. The whole thing worked, but unfortunately I am now facing a 10 second delay and my events aren't detected (the callback function isn't invoked). First, my main function: ``` from naoqi import ALProxy, ALBroker from speechEventModule import SpeechEventModule myString = "Put that \\mrk=1\\ there." NAO_IP = "192.168.0.105" NAO_PORT = 9559 memory = ALProxy("ALMemory", NAO_IP, NAO_PORT) tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT) tts.enableNotifications() myBroker = ALBroker("myBroker", "0.0.0.0", # listen to anyone 0, # find a free port and use it NAO_IP, # parent broker IP NAO_PORT) # parent broker port global SpeechEventListener SpeechEventListener = SpeechEventModule("SpeechEventListener", memory) memory.subscribeToEvent("ALTextToSpeech/CurrentBookMark", "SpeechEventListener", "onBookmarkDetected") tts.say(myString) ``` And here is my speechEventModule: ``` from naoqi import ALModule from naoqi import ALProxy NAO_IP = "192.168.0.105" NAO_PORT = 9559 SpeechEventListener = None leds = None memory = None class SpeechEventModule(ALModule): def __init__(self, name, ext_memory): ALModule.__init__(self, name) global memory memory = ext_memory global leds leds = ALProxy("ALLeds",NAO_IP, NAO_PORT) def onBookmarkDetected(self, key, value, message): print "Event detected!" print "Key: ", key print "Value: " , value print "Message: " , message if(value == 1): global leds leds.fadeRGB("FaceLeds", 0x00FF0000, 0.2) if(value == 2): global leds leds.fadeRGB("FaceLeds", 0x000000FF, 0.2) ``` Does anybody have the same problem? Can anybody give me advice? Thanks in advance!
You are subscribing to the event outside your module. If I am not wrong, you have to do it in the `__init__` method. ``` class SpeechEventModule(ALModule): def __init__(self, name, ext_memory): ALModule.__init__(self, name) memory = ALProxy("ALMemory") leds = ALProxy("ALLeds") ``` Anyway, check that your main function keeps running forever (better if you catch a keyboard interruption), or your program will end before it can catch any event. ``` try: while True: time.sleep(1) except KeyboardInterrupt: print print "Interrupted by user, shutting down" myBroker.shutdown() sys.exit(0) ``` Take a look at [this tutorial](http://www.aldebaran-robotics.com/documentation/dev/python/reacting_to_events.html); it could be helpful.
Here is how it would be done with a more recent Naoqi version: ``` import qi import argparse class SpeechEventListener(object): """ A class to react to the ALTextToSpeech/CurrentBookMark event """ def __init__(self, session): super(SpeechEventListener, self).__init__() self.memory = session.service("ALMemory") self.leds = session.service("ALLeds") self.subscriber = self.memory.subscriber("ALTextToSpeech/CurrentBookMark") self.subscriber.signal.connect(self.onBookmarkDetected) # keep this variable in memory, else the callback will be disconnected def onBookmarkDetected(self, value): """ callback for event ALTextToSpeech/CurrentBookMark """ print "Event detected!" print "Value: " , value # key and message are not useful here if value == 1: self.leds.fadeRGB("FaceLeds", 0x00FF0000, 0.2) if value == 2: self.leds.fadeRGB("FaceLeds", 0x000000FF, 0.2) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--ip", type=str, default="127.0.0.1", help="Robot IP address. On robot or Local Naoqi: use '127.0.0.1'.") parser.add_argument("--port", type=int, default=9559, help="Naoqi port number") args = parser.parse_args() # Initialize qi framework connection_url = "tcp://" + args.ip + ":" + str(args.port) app = qi.Application(["SpeechEventListener", "--qi-url=" + connection_url]) app.start() session = app.session speech_event_listener = SpeechEventListener(session) tts = session.service("ALTextToSpeech") # tts.enableNotifications() --> this seems outdated while True: raw_input("Say something...") tts.say("Put that \\mrk=1\\ there.") ```
Naoqi event handling 10 seconds delay
[ "", "python", "event-handling", "delay", "led", "nao-robot", "" ]
I have to use `DECODE` to implement a custom sort:

```
SELECT col1, col2
FROM tbl
ORDER BY DECODE(col1, 'a', 3, 'b', 2, 'c', 1) DESC
```

What will happen if col1 has more values than the three specified in the DECODE clause?
DECODE will return NULL for the values of col1 which are not specified. Since the sort is descending, the NULL values will be placed at the front by default. If you want to change this behavior, you can either define the default value in DECODE:

```
SELECT col1, col2
FROM tbl
ORDER BY DECODE(col1, 'a', 3, 'b', 2, 'c', 1, 0) DESC
```

or use NULLS LAST in the ORDER BY clause:

```
SELECT col1, col2
FROM tbl
ORDER BY DECODE(col1, 'a', 3, 'b', 2, 'c', 1) DESC NULLS LAST
```
The DECODE function will return a NULL value, and it is at the bottom of your sort. You can verify it:

```
select decode('z', 'a', 3, 'b', 2, 'c', 1) from dual;
```

You can also control the appearance of the NULL values with NULLS LAST/NULLS FIRST in the ORDER BY clause.
Oracle - DECODE - How will it sort when not every case is specified?
[ "", "sql", "oracle", "" ]
I am new to python. I am trying to create a retry decorator that, when applied to a function, will keep retrying until some criteria is met (for simplicity, say retry 10 times).

```
def retry():
    def wrapper(func):
        for i in range(0,10):
            try:
                func()
                break
            except:
                continue
    return wrapper
```

Now that will retry on any exception. How can I change it such that it retries on specific exceptions? E.g., I want to use it like:

```
@retry(ValueError, AbcError)
def myfunc():
    //do something
```

I want `myfunc` to be retried only if it throws `ValueError` or `AbcError`.
You can supply a `tuple` of exceptions to the `except ..` block to catch:

```
from functools import wraps

def retry(*exceptions, **params):
    if not exceptions:
        exceptions = (Exception,)
    tries = params.get('tries', 10)
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kw):
            for i in range(tries):
                try:
                    return func(*args, **kw)
                except exceptions:
                    pass
        return wrapper
    return decorator
```

The catch-all `*exceptions` parameter will always result in a tuple. I've added a `tries` keyword as well, so you can configure the number of retries too:

```
@retry(ValueError, TypeError, tries=20)
def foo():
    pass
```

Demo:

```
>>> @retry(NameError, tries=3)
... def foo():
...     print 'Futzing the foo!'
...     bar
... 
>>> foo()
Futzing the foo!
Futzing the foo!
Futzing the foo!
```
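As a quick self-contained check of the pattern (a Python 3 sketch; `flaky` and the call counter are just for illustration), the wrapped function is retried only while one of the listed exceptions keeps firing:

```python
from functools import wraps

def retry(*exceptions, **params):
    # Default to catching any Exception when none are listed
    if not exceptions:
        exceptions = (Exception,)
    tries = params.get('tries', 10)
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kw):
            for _ in range(tries):
                try:
                    return func(*args, **kw)
                except exceptions:
                    pass
        return wrapper
    return decorator

calls = []

@retry(ValueError, tries=5)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("not yet")
    return "ok"

print(flaky())     # ok -- succeeded on the third attempt
print(len(calls))  # 3
```

Any exception not listed (here, anything other than `ValueError`) propagates immediately instead of being retried.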
```
from functools import wraps

class retry(object):
    def __init__(self, *exceptions):
        self.exceptions = exceptions

    def __call__(self, f):
        @wraps(f)  # required to save the original context of the wrapped function
        def wrapped(*args, **kwargs):
            for i in range(0,10):
                try:
                    f(*args, **kwargs)
                except self.exceptions:
                    continue
        return wrapped
```

Usage:

```
@retry(ValueError, Exception)
def f():
    print('In f')
    raise ValueError

>>> f()
In f
In f
In f
In f
In f
In f
In f
In f
In f
In f
```
Can I pass an exception as an argument to a function in python?
[ "", "python", "decorator", "python-decorators", "" ]
Is it possible to CAST or CONVERT with a string data type (a method that takes the data type parameter as a string), something like:

```
CAST('11' AS 'int')
```

but not

```
CAST('11' AS int)
```
You would have to use dynamic sql to achieve that:

```
DECLARE @type VARCHAR(10) = 'int'
DECLARE @value VARCHAR(10) = '11'
DECLARE @sql VARCHAR(MAX)
SET @sql = 'SELECT CAST(' + @value + ' AS ' + @type + ')'
EXEC (@sql)
```

**[SQLFiddle DEMO with INT](http://sqlfiddle.com/#!3/d41d8/18554)** // [with datetime](http://sqlfiddle.com/#!3/d41d8/18555)
No. There are many places in T-SQL where it wants, specifically, a name given to it - not a string, nor a variable *containing* a name.
CAST and CONVERT in T-SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I'm running Pycharm 2.6.3 with python 2.7 and django 1.5.1.

When I try to run django's manage.py task from Pycharm (Tools / Run manage.py task), syncdb, for instance, I get the following:

```
bash -cl "/usr/bin/python2.7 /home/paulo/bin/pycharm-2.6.3/helpers/pycharm/django_manage.py syncdb /home/paulo/Projetos/repo2/Paulo Brito/phl"
Traceback (most recent call last):
  File "/home/paulo/bin/pycharm-2.6.3/helpers/pycharm/django_manage.py", line 21, in <module>
    run_module(manage_file, None, '__main__', True)
  File "/usr/lib/python2.7/runpy.py", line 170, in run_module
    mod_name, loader, code, fname = _get_module_details(mod_name)
  File "/usr/lib/python2.7/runpy.py", line 103, in _get_module_details
    raise ImportError("No module named %s" % mod_name)
ImportError: No module named manage

Process finished with exit code 1
```

If I run the first line on the console passing the project path between single quotes, it runs without problems, like this:

```
bash -cl "/usr/bin/python2.7 /home/paulo/bin/pycharm-2.6.3/helpers/pycharm/django_manage.py syncdb '/home/paulo/Projetos/repo2/Paulo Brito/phl'"
```

I tried to format the path like that in project settings / django support, but Pycharm won't recognize the path.

How can I work in PyCharm with paths with spaces?

Thanks.

EDIT 1

PyCharm doesn't recognize a path with backslashes as a valid path either.
It's a known bug: <http://youtrack.jetbrains.com/issue/PY-8449>. Fixed in PyCharm 2.7.
In UNIX you can escape whitespaces with a backslash:

```
/home/paulo/Projetos/repo2/Paulo\ Brito/phl
```
Pycharm: Run manage task won't work if path contains space
[ "", "python", "django", "pycharm", "" ]
Or how do I make this thing work?

I have an Interval object:

```
class Interval(Base):
    __tablename__ = 'intervals'
    id = Column(Integer, primary_key=True)
    start = Column(DateTime)
    end = Column(DateTime, nullable=True)
    task_id = Column(Integer, ForeignKey('tasks.id'))

    @hybrid_property  # used to just be @property
    def hours_spent(self):
        end = self.end or datetime.datetime.now()
        return (end-start).total_seconds()/60/60
```

And a Task:

```
class Task(Base):
    __tablename__ = 'tasks'
    id = Column(Integer, primary_key=True)
    title = Column(String)
    intervals = relationship("Interval", backref="task")

    @hybrid_property  # Also used to be just @property
    def hours_spent(self):
        return sum(i.hours_spent for i in self.intervals)
```

Add all the typical setup code, of course.

Now when I try to do `session.query(Task).filter(Task.hours_spent > 3).all()` I get `NotImplementedError: <built-in function getitem>` from the `sum(i.hours_spent...` line.

So I was looking at [this part](http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/hybrid.html#building-custom-comparators) of the documentation and theorized that there might be some way that I can write something that will do what I want. [This part](http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/hybrid.html#correlated-subquery-relationship-hybrid) also looks like it may be of use, and I'll be looking at it while waiting for an answer here ;)
SQLAlchemy is not smart enough to build a SQL expression tree from these operands; you have to use the explicit `propname.expression` decorator to provide it. But then comes another problem: there is no portable way to convert an interval to hours in-database. You'd use `TIMEDIFF` in MySQL, `EXTRACT(EPOCH FROM ...) / 3600` in PostgreSQL, etc. I suggest changing the properties to return `timedelta` instead, and comparing apples to apples.

```
from sqlalchemy import select, func

class Interval(Base):
    ...

    @hybrid_property
    def time_spent(self):
        return (self.end or datetime.now()) - self.start

    @time_spent.expression
    def time_spent(cls):
        return func.coalesce(cls.end, func.current_timestamp()) - cls.start

class Task(Base):
    ...

    @hybrid_property
    def time_spent(self):
        return sum((i.time_spent for i in self.intervals), timedelta(0))

    @time_spent.expression
    def hours_spent(cls):
        return (select([func.sum(Interval.time_spent)])
                .where(cls.id==Interval.task_id)
                .label('time_spent'))
```

The final query is:

```
session.query(Task).filter(Task.time_spent > timedelta(hours=3)).all()
```

which translates to (on PostgreSQL backend):

```
SELECT task.id AS task_id, task.title AS task_title 
FROM task 
WHERE (SELECT sum(coalesce(interval."end", CURRENT_TIMESTAMP) - interval.start) AS sum_1 
       FROM interval 
       WHERE task.id = interval.task_id) > %(param_1)s
```
For a simple example of SQLAlchemy's coalesce function, this may help: [Handling null values in a SQLAlchemy query - equivalent of isnull, nullif or coalesce](http://progblog10.blogspot.com/2014/06/handling-null-values-in-sqlalchemy.html).

Here are a couple of key lines of code from that post:

```
from sqlalchemy.sql.functions import coalesce
my_config = session.query(Config).order_by(coalesce(Config.last_processed_at, datetime.date.min)).first()
```
How do I implement a null coalescing operator in SQLAlchemy?
[ "", "python", "sqlalchemy", "null-coalescing-operator", "" ]
I'm trying to set up Django-Celery. I'm going through the tutorial <http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html>.

When I run

```
$ python manage.py celery worker --loglevel=info
```

I get

```
[Tasks]

/Users/msmith/Documents/dj/venv/lib/python2.7/site-packages/djcelery/loaders.py:133: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '

[2013-08-08 11:15:25,368: WARNING/MainProcess] /Users/msmith/Documents/dj/venv/lib/python2.7/site-packages/djcelery/loaders.py:133: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '

[2013-08-08 11:15:25,369: WARNING/MainProcess] celery@sfo-mpmgr ready.
[2013-08-08 11:15:25,382: ERROR/MainProcess] consumer: Cannot connect to amqp://guest@127.0.0.1:5672/celeryvhost: [Errno 61] Connection refused.
Trying again in 2.00 seconds...
```

Has anyone encountered this issue before?

settings.py

```
# Django settings for summertime project.
import djcelery
djcelery.setup_loader()

BROKER_URL = 'amqp://guest:guest@localhost:5672/'

...

INSTALLED_APPS = {
    ...
    'djcelery',
    'celerytest'
}
```

wsgi.py

```
import djcelery
djcelery.setup_loader()
```
**Update Jan 2022**: This answer is outdated. As suggested in comments, please refer to [this link](https://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#using-celery-with-django)

The problem is that you are trying to connect to a local instance of RabbitMQ. Look at this line in your `settings.py`:

```
BROKER_URL = 'amqp://guest:guest@localhost:5672/'
```

If you are working currently on development, you could avoid setting up Rabbit and all the mess around it, and just use a development version of a message queue with the Django database. Do this by replacing your previous configuration with:

```
BROKER_URL = 'django://'
```

...and add this app:

```
INSTALLED_APPS += ('kombu.transport.django', )
```

Finally, launch the worker with:

```
./manage.py celery worker --loglevel=info
```

Source: <http://docs.celeryproject.org/en/latest/getting-started/brokers/django.html>
I got this error because `rabbitmq` was not started. If you installed `rabbitmq` via brew you can start it using `brew services start rabbitmq`
Django Celery - Cannot connect to amqp://guest@127.0.0.8000:5672//
[ "", "python", "django", "celery", "" ]
```
dict1={'s1':[1,2,3],'s2':[4,5,6],'a':[7,8,9],'s3':[10,11]}
```

How can I get all the values whose key starts with 's'? Something like `dict1['s*']`, so that the result is `dict1['s*']=[1,2,3,4,5,6,10,11]`.
```
>>> [x for d in dict1 for x in dict1[d] if d.startswith("s")]
[1, 2, 3, 4, 5, 6, 10, 11]
```

or, if it needs to be a regex

```
>>> regex = re.compile("^s")
>>> [x for d in dict1 for x in dict1[d] if regex.search(d)]
[1, 2, 3, 4, 5, 6, 10, 11]
```

What you're seeing here is a nested [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). It's equivalent to

```
result = []
for d in dict1:
    for x in dict1[d]:
        if regex.search(d):
            result.append(x)
```

As such, it's a little inefficient because the regex is tested way too often (and the elements are appended one by one). So another solution would be

```
result = []
for d in dict1:
    if regex.search(d):
        result.extend(dict1[d])
```
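A quick self-contained check of both variants (a Python 3 sketch, where the dict keeps insertion order; `chain.from_iterable` is just one way to flatten the matching lists):

```python
import re
from itertools import chain

dict1 = {'s1': [1, 2, 3], 's2': [4, 5, 6], 'a': [7, 8, 9], 's3': [10, 11]}

# startswith variant: flatten the values of every key beginning with 's'
flat = [x for d in dict1 for x in dict1[d] if d.startswith("s")]

# regex variant, extending whole lists instead of testing per element
regex = re.compile("^s")
flat_re = list(chain.from_iterable(v for k, v in dict1.items() if regex.search(k)))

print(flat)     # [1, 2, 3, 4, 5, 6, 10, 11]
print(flat_re)  # [1, 2, 3, 4, 5, 6, 10, 11]
```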
```
>>> import re
>>> from itertools import chain

def natural_sort(l):
    # http://stackoverflow.com/a/4836734/846892
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ]
    return sorted(l, key = alphanum_key)
...
```

Using **glob** pattern, `'s*'`:

```
>>> import fnmatch

def solve(patt):
    keys = natural_sort(k for k in dict1 if fnmatch.fnmatch(k, patt))
    return list(chain.from_iterable(dict1[k] for k in keys))
...
>>> solve('s*')
[1, 2, 3, 4, 5, 6, 10, 11]
```

Using `regex`:

```
def solve(patt):
    keys = natural_sort(k for k in dict1 if re.search(patt, k))
    return list(chain.from_iterable( dict1[k] for k in keys ))
...
>>> solve('^s')
[1, 2, 3, 4, 5, 6, 10, 11]
```
how to get dict value by regex in python
[ "", "python", "regex", "dictionary", "" ]
I have 2 tables, A and B. I want to insert the name and 'notInTime' into table B if that name IS NOT currently falling into the period between time_start and time_end.

eg: time now = 10:30

```
TABLE A
NAME  TIME_START (DATETIME)  TIME_END (DATETIME)
A     12:00                  14:00
A     10:00                  13:00
B     09:00                  11:00
B     10:00                  11:00
C     12:00                  14:00
D     16:00                  17:00

Table B
Name  Indicator
A     intime
B     intime
```

If run, the query should add the following to Table B:

```
C  notInTime
D  notinTime
```
This will add those in time and those not in time to table_b:

```
declare @now time = '10:30'

INSERT INTO TABLE_B(Name, Indicator)
select a.NAME, case when b.chk = 1 THEN 'intime' else 'notInTime' end
from
(
  select distinct NAME
  from TABLE_A
) a
outer apply (select top 1 1 chk
             from TABLE_A
             where @now between TIME_START and TIME_END
               and a.Name = Name) b
```
```
INSERT INTO TABLE_B b (column_name_1, column_name_2)
SELECT 'C',
       CASE WHEN EXISTS (SELECT 1
                         FROM TABLE_A a
                         WHERE a.NAME = 'C'
                           AND '10:30' BETWEEN TIME(a.TIME_START) AND TIME(a.TIME_END))
            THEN 'intime'
            ELSE 'notinTime'
       END
UNION ALL
SELECT 'D',
       CASE WHEN EXISTS (SELECT 1
                         FROM TABLE_A a
                         WHERE a.NAME = 'D'
                           AND '10:30' BETWEEN TIME(a.TIME_START) AND TIME(a.TIME_END))
            THEN 'intime'
            ELSE 'notinTime'
       END
```
Sql Server Insert a new row if 1 value does not exist in dest table
[ "", "sql", "insert", "" ]
I have a column which contains dates in varchar2 with varying formats such as 19.02.2013, 29-03-2013, 30/12/2013 and so on. But the most annoying of all is 20130713 (which is July 13, 2013) and I want to convert this to dd-mm-yyyy or dd-mon-yyyy.
If the column contains all those various formats, you'll need to deal with each one. Assuming that your question includes *all* known formats, then you have a couple of options.

You can use to_char/to_date. This is dangerous because you'll get a SQL error if the source data is not a valid date (of course, getting an error might be preferable to presenting bad data). Or you can simply rearrange the characters in the string based on the format. This is a little simpler to implement, and doesn't care what the delimiters are.

Method 1:

```
case
  when substr(tempdt,3,1)='.' then to_char(to_date(tempdt,'dd.mm.yyyy'),'dd-mm-yyyy')
  when substr(tempdt,3,1)='-' then tempdt
  when length(tempdt)=8 then to_char(to_date(tempdt,'yyyymmdd'),'dd-mm-yyyy')
  when substr(tempdt,3,1)='/' then to_char(to_date(tempdt,'dd/mm/yyyy'),'dd-mm-yyyy')
end
```

Method 2:

```
case
  when length(tempdt)=8 then
    substr(tempdt,7,2) || '-' || substr(tempdt,5,2) || '-' || substr(tempdt,1,4)
  when length(tempdt)=10 then
    substr(tempdt,1,2) || '-' || substr(tempdt,4,2) || '-' || substr(tempdt,7,4)
end
```

[SQLFiddle here](http://www.sqlfiddle.com/#!4/a556b/8/0)
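The `yyyymmdd` branch is easy to sanity-check outside the database; here is a rough Python equivalent of both methods (just an illustration, not Oracle code):

```python
from datetime import datetime

raw = "20130713"

# Equivalent of to_char(to_date(raw, 'yyyymmdd'), 'dd-mm-yyyy')
formatted = datetime.strptime(raw, "%Y%m%d").strftime("%d-%m-%Y")
print(formatted)  # 13-07-2013

# The pure string-slicing route (like Method 2) gives the same result,
# and never fails on malformed input -- it just rearranges characters
sliced = raw[6:8] + "-" + raw[4:6] + "-" + raw[0:4]
print(sliced)  # 13-07-2013
```

The `strptime` route mirrors the to_date approach: it raises an error on an invalid date, while the slicing route silently accepts anything of the right length.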
Convert to date, then format to char:

```
select to_char(to_date('20130713', 'yyyymmdd'), 'dd MON yyyy') from dual;
```

gives `13 JUL 2013`.
How to convert yyyymmdd to dd-mm-yyyy in Oracle?
[ "", "sql", "oracle", "date", "datetime", "oracle-sqldeveloper", "" ]
I must create two functions. One that can tell whether one number is odd or even by returning t/f, and the other will call the first function then return how many even numbers there are.

This is my code so far:

```
Even = [0,2,4,6,8]

IsEvenInput = int(input("Please enter a number: "))

def IsEvenDigit(a):
    if a in Even:
        return True
    else:
        return False
y = IsEvenDigit(IsEvenInput)
print(y)

def CountEven(b):
    count = 0
    for a in b:
        if IsEvenDigit(a):
            count+=1
    return count

d = input("Please enter more than one number: ")
y = CountEven(d)
print(y)
```

This keeps outputting 0 and doesn't actually count. What am I doing wrong now?
Here is another approach:

```
def is_even(number):
    return number % 2 == 0

def even_count(numbers_list):
    count = 0
    for number in numbers_list:
        if is_even(number):
            count += 1
    return count

raw_numbers = input("Please enter more than one number: ")
numbers_list = [int(i) for i in raw_numbers.split()]
count = even_count(numbers_list)
print(count)
```

This will take care of all other numbers too.
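A quick sanity check of the two functions without the `input()` call (a Python 3 sketch; the sample numbers are arbitrary):

```python
def is_even(number):
    # Even numbers leave no remainder when divided by 2
    return number % 2 == 0

def even_count(numbers_list):
    count = 0
    for number in numbers_list:
        if is_even(number):
            count += 1
    return count

# Splitting first, then converting each token to int, is the key step
# the original code was missing
numbers = [int(i) for i in "3 4 10 7 8".split()]
print(even_count(numbers))  # 3 -- the evens are 4, 10 and 8
```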
```
d = input("Please enter more than one number: ")
```

This is going to return a string of numbers, perhaps separated by spaces. You'll need to `split()` the string into the sequence of text digits and then turn those into integers.

---

There's a general approach to determining whether a number is odd or even using the modulus / remainder operator, `%`: if the remainder after division by `2` is `0` then the number is even.
Count Even Numbers User has Inputted PYTHON 3
[ "", "python", "count", "" ]
I'm trying to do a query on this table:

```
Id   startdate    enddate      amount
1    2013-01-01   2013-01-31   0.00
2    2013-02-01   2013-02-28   0.00
3    2013-03-01   2013-03-31   245
4    2013-04-01   2013-04-30   529
5    2013-05-01   2013-05-31   0.00
6    2013-06-01   2013-06-30   383
7    2013-07-01   2013-07-31   0.00
8    2013-08-01   2013-08-31   0.00
```

I want to get the output:

```
2013-01-01   2013-02-28   0
2013-03-01   2013-06-30   1157
2013-07-01   2013-08-31   0
```

I wanted to get that result so I would know when money started to come in and when it stopped. I am also interested in the number of months before money started coming in (which explains the first row), and the number of months where money has stopped (which explains why I'm also interested in the 3rd row for July 2013 to Aug 2013).

I know I can use min and max on the dates and sum on amount but I can't figure out how to get the records divided that way. Thanks!
Here's one idea (and [a fiddle](http://sqlfiddle.com/#!6/ea3e0/1) to go with it):

```
;WITH MoneyComingIn AS
(   SELECT  MIN(startdate) AS startdate,
            MAX(enddate) AS enddate,
            SUM(amount) AS amount
    FROM    myTable
    WHERE   amount > 0
)
SELECT  MIN(startdate) AS startdate,
        MAX(enddate) AS enddate,
        SUM(amount) AS amount
FROM    myTable
WHERE   enddate < (SELECT startdate FROM MoneyComingIn)
UNION ALL
SELECT  startdate, enddate, amount
FROM    MoneyComingIn
UNION ALL
SELECT  MIN(startdate) AS startdate,
        MAX(enddate) AS enddate,
        SUM(amount) AS amount
FROM    myTable
WHERE   startdate > (SELECT enddate FROM MoneyComingIn)
```

---

And a second, without using `UNION` ([fiddle](http://sqlfiddle.com/#!6/ea3e0/16)):

```
SELECT  MIN(startdate), MAX(enddate), SUM(amount)
FROM
(   SELECT  startdate, enddate, amount,
            CASE WHEN EXISTS(SELECT 1 FROM myTable b WHERE b.id >= a.id AND b.amount > 0)
                 THEN CASE WHEN EXISTS(SELECT 1 FROM myTable b WHERE b.id <= a.id AND b.amount > 0)
                           THEN 2
                           ELSE 1
                      END
                 ELSE 3
            END AS partition_no
    FROM    myTable a
) x
GROUP BY partition_no
```

although I suppose as written it assumes `Id` are in order. You could substitute this with a `ROW_NUMBER() OVER(ORDER BY startdate)`.
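The partitioning logic here (everything before the first non-zero month, the span from the first to the last non-zero month, everything after) can be sanity-checked in plain Python against the sample table (a rough sketch, not SQL):

```python
rows = [
    ('2013-01-01', '2013-01-31', 0.0),
    ('2013-02-01', '2013-02-28', 0.0),
    ('2013-03-01', '2013-03-31', 245),
    ('2013-04-01', '2013-04-30', 529),
    ('2013-05-01', '2013-05-31', 0.0),
    ('2013-06-01', '2013-06-30', 383),
    ('2013-07-01', '2013-07-31', 0.0),
    ('2013-08-01', '2013-08-31', 0.0),
]

# Indexes of the months with money coming in
money_idx = [i for i, r in enumerate(rows) if r[2] > 0]
first, last = money_idx[0], money_idx[-1]

# Before the money / from first to last non-zero month / after the money
parts = (rows[:first], rows[first:last + 1], rows[last + 1:])
result = [(p[0][0], p[-1][1], sum(r[2] for r in p)) for p in parts if p]

for start, end, total in result:
    print(start, end, total)
# 2013-01-01 2013-02-28 0.0
# 2013-03-01 2013-06-30 1157.0
# 2013-07-01 2013-08-31 0.0
```

Note that the zero month of May falls inside the middle span, matching the expected output in the question.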
```
with CT as
(
  select t1.*,
         (
           select max(endDate)
           from t
           where startDate < t1.StartDate
             and SIGN(amount) <> SIGN(t1.Amount)
         ) as GroupDate
  from t as t1
)
select min(StartDate) as StartDate,
       max(EndDate) as EndDate,
       sum(Amount) as Amount
from CT
group by GroupDate
order by StartDate
```

[SQLFiddle demo](http://sqlfiddle.com/#!3/d6e4a/6)
grouping by date range in t-sql
[ "", "sql", "sql-server", "t-sql", "" ]
I have data in a file that has two sets of values and then an unspecified series of arrays (each with 3 sub items in them).

For example:

```
('January', 2, [('curly', 30), ('larry',10), ('moe',20)])
```

I need the data read and presented back, and to partially re-assign the data to new variables. For example:

```
Month: January
Section: 3
curly has worked 30 hours
larry has worked 10 hours
moe has worked 20 hours
```

I can get the first two parts of the string read, *but don't know how to break the arrays down* - each file may have different numbers of sub arrays, so do I need to do a while loop?

```
import ast

filecontent = ast.literal_eval(filename.read())

for item in filecontent:
    month = filecontent[0]
    section = filecontent[1]
    name1 = filecontent[2] # not working
    hours1 = filecontent[3]# not working
    name2 = filecontent[4]# not working
    hours2 = filecontent[5]# not working
    # account for additional arrays somehow?

print ("month:" + month)
print ("section" + str (section))
print (str (name1) + "has worked" + str (hours1))
print (str (name2) + "has worked" + str (hours2))
```
You could use a dictionary to store your workers.

**Edit**

```
work = [('January', 2, [('curly', 30, 5), ('larry',10, 5), ('moe',20, 10)])]

workers = dict()
month = ""
section = ""

for w in work:
    month = w[0]
    section = w[1]
    for worker, time, overtime in w[2]:
        workers[worker] = (time, overtime)

print "Month: {0}\nSection: {1}".format(month, section)
print "".join("%s has worked %s hours, overtime %s\n" % (worker, time[0], time[1]) for worker, time in workers.items())
```
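A Python 3 sketch of the same idea against the exact tuple shape from the question (two leading values, then any number of (name, hours) pairs; the variable names are just illustrative):

```python
record = ('January', 2, [('curly', 30), ('larry', 10), ('moe', 20)])

# Unpack the two fixed values and the list of (name, hours) pairs
month, section, entries = record

lines = ["Month: {0}".format(month), "Section: {0}".format(section)]
for name, hours in entries:  # works for any number of sub-arrays
    lines.append("{0} has worked {1} hours".format(name, hours))

print("\n".join(lines))
```

Because the loop iterates over the third element directly, no while loop or numbered variables (`name1`, `name2`, ...) are needed.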
You need to iterate over the third item in the sequence.

```
for item in filecontent:
    print 'Month %s' % item[0]
    print 'Section %d' % item[1]
    for name, hours in item[2]:
        print "%s has worked %d hours" % (name, hours)
```
break string and arrays down into separate items
[ "", "python", "arrays", "file", "python-3.x", "tuples", "" ]
I have table `A` with a primary key on column `ID` and tables `B,C,D...` that have 1 or more columns with foreign key relationships to `A.ID`. How do I write a query that shows me all tables that contain a specific value (eg `17`) of the primary key? I would like to have **generic sql code that can take a table name and primary key value** and display all tables that reference that specific value via a foreign key. The result should be a **list of table names**. I am using MS SQL 2012.
Not an ideal one, but should return what is needed (list of tables):

```
declare @tableName sysname, @value sql_variant
set @tableName = 'A'
set @value = 17

declare @sql nvarchar(max)

create table #Value (Value sql_variant)
insert into #Value values (@value)

create table #Tables (Name sysname, [Column] sysname)
create index IX_Tables_Name on #Tables (Name)

set @sql = 'declare @value sql_variant select @value = Value from #Value
'
set @sql = @sql + replace((
    select
        'insert into #Tables (Name, [Column])
        select ''' + quotename(S.name) + '.' + quotename(T.name) + ''', ''' + quotename(FC.name) + '''
        where exists (select 1 from ' + quotename(S.name) + '.' + quotename(T.name) +
        ' where ' + quotename(FC.name) + ' = @value)
        '
    from sys.columns C
        join sys.foreign_key_columns FKC on FKC.referenced_column_id = C.column_id and FKC.referenced_object_id = C.object_id
        join sys.columns FC on FC.object_id = FKC.parent_object_id and FC.column_id = FKC.parent_column_id
        join sys.tables T on T.object_id = FKC.parent_object_id
        join sys.schemas S on S.schema_id = T.schema_id
    where C.object_id = object_id(@tableName) and C.name = 'ID'
    order by S.name, T.name
    for xml path('')), '&#x0D;', CHAR(13))

--print @sql
exec(@sql)

select distinct Name from #Tables order by Name

drop table #Value
drop table #Tables
```
You want to look at `sys.foreignkeys`. I would start from <http://blog.sqlauthority.com/2009/02/26/sql-server-2008-find-relationship-of-foreign-key-and-primary-key-using-t-sql-find-tables-with-foreign-key-constraint-in-database/> to give something like

```
declare @value nvarchar(20) = '1'

SELECT 'select * from ' + QUOTENAME( SCHEMA_NAME(f.SCHEMA_ID)) + '.' + quotename( OBJECT_NAME(f.parent_object_id) )
       + ' where ' + COL_NAME(fc.parent_object_id, fc.parent_column_id) + ' = ' + @value
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc ON f.OBJECT_ID = fc.constraint_object_id
INNER JOIN sys.objects AS o ON o.OBJECT_ID = fc.referenced_object_id
```
SQL how do you query for tables that refer to a specific foreign key value?
[ "", "sql", "sql-server", "foreign-keys", "sql-server-2012", "" ]
I have a django-rest-framework REST API with hierarchical resources. I want to be able to create subobjects by POSTing to `/v1/objects/<pk>/subobjects/` and have it automatically set the foreign key on the new subobject to the `pk` kwarg from the URL without having to put it in the payload. Currently, the serializer is causing a 400 error, because it expects the `object` foreign key to be in the payload, but it shouldn't be considered optional either. The URL of the subobjects is `/v1/subobjects/<pk>/` (since the key of the parent isn't necessary to identify it), so it is still required if I want to `PUT` an existing resource. Should I just make it so that you POST to `/v1/subobjects/` with the parent in the payload to add subobjects, or is there a clean way to pass the `pk` kwarg from the URL to the serializer? I'm using `HyperlinkedModelSerializer` and `ModelViewSet` as my respective base classes. Is there some recommended way of doing this? So far the only idea I had was to completely re-implement the ViewSets and make a custom Serializer class whose get\_default\_fields() comes from a dictionary that is passed in from the ViewSet, populated by its kwargs. This seems quite involved for something that I would have thought is completely run-of-the-mill, so I can't help but think I'm missing something. Every REST API I've ever seen that has writable endpoints has this kind of URL-based argument inference, so the fact that django-rest-framework doesn't seem to be able to do it at all seems strange.
Make the parent object serializer field read_only. It's not optional but it's not coming from the request data either. Instead you pull the pk/slug from the URL in `pre_save()`...

```
# Assuming list and detail URLs like:
# /v1/objects/<parent_pk>/subobjects/
# /v1/objects/<parent_pk>/subobjects/<pk>/

def pre_save(self, obj):
    parent = models.MainObject.objects.get(pk=self.kwargs['parent_pk'])
    obj.parent = parent
```
Here's what I've done to solve it, although it would be nice if there was a more general way to do it, since it's such a common URL pattern.

First I created a mixin for my ViewSets that redefined the `create` method:

```
class CreatePartialModelMixin(object):
    def initial_instance(self, request):
        return None

    def create(self, request, *args, **kwargs):
        instance = self.initial_instance(request)
        serializer = self.get_serializer(
            instance=instance, data=request.DATA, files=request.FILES,
            partial=True)

        if serializer.is_valid():
            self.pre_save(serializer.object)
            self.object = serializer.save(force_insert=True)
            self.post_save(self.object, created=True)
            headers = self.get_success_headers(serializer.data)
            return Response(
                serializer.data, status=status.HTTP_201_CREATED,
                headers=headers)

        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```

Mostly it is copied and pasted from `CreateModelMixin`, but it defines an `initial_instance` method that we can override in subclasses to provide a starting point for the serializer, which is set up to do a partial deserialization. Then I can do, for example,

```
class SubObjectViewSet(CreatePartialModelMixin, viewsets.ModelViewSet):
    # ....

    def initial_instance(self, request):
        instance = models.SubObject(owner=request.user)
        if 'pk' in self.kwargs:
            parent = models.MainObject.objects.get(pk=self.kwargs['pk'])
            instance.parent = parent
        return instance
```

(I realize I don't actually need to do a `.get` on the pk to associate it on the model, but in my case I'm exposing the slug rather than the primary key in the public API)
Django REST Framework: creating hierarchical objects using URL arguments
[ "", "python", "django", "rest", "django-rest-framework", "" ]
I'm trying to make a simple script in python that will scan a tweet for a link and then visit that link. I'm having trouble determining which direction to go from here. From what I've researched, it seems that I can use Selenium or Mechanize, which can be used for browser automation. Would using these be considered web scraping?

Or I can learn one of the twitter APIs, the Requests library, and pyjamas (converts python code to javascript) so I can make a simple script and load it into google chrome's/firefox extensions.

Which would be the better option to take?
There are many different ways to go when doing web automation. Since you're doing stuff with Twitter, you could try the Twitter API. If you're doing any other task, there are more options.

* [`Selenium`](https://pypi.python.org/pypi/selenium) is very useful when you need to click buttons or enter values in forms. The only drawback is that it opens a separate browser window.
* [`Mechanize`](http://wwwsearch.sourceforge.net/mechanize/), unlike Selenium, does not open a browser window and is also good for manipulating buttons and forms. It might need a few more lines to get the job done.
* [`Urllib`](http://docs.python.org/2/library/urllib.html)/[`Urllib2`](http://docs.python.org/library/urllib2.html) is what I use. Some people find it a bit hard at first, but once you know what you're doing, it is very quick and gets the job done. Plus you can do things with cookies and proxies. It is a built-in library, so there is no need to download anything.
* [`Requests`](http://docs.python-requests.org/en/latest/) is just as good as `urllib`, but I don't have a lot of experience with it. You can do things like add headers. It's a very good library.

Once you get the page you want, I recommend you use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) to parse out the data you want.

I hope this leads you in the right direction for web automation.
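For the specific task of finding a link in a tweet's text and visiting it, a rough standard-library sketch (the tweet text here is made up, and the actual fetch is left commented out so the snippet runs offline):

```python
import re
from urllib.request import urlopen  # urllib2.urlopen in Python 2

tweet = "Check this out: http://example.com/some-page #cool"

# Crude URL pattern; with the Twitter API you would read the expanded
# URLs from the tweet's entities instead of regexing the text.
match = re.search(r"https?://\S+", tweet)

url = match.group(0) if match else None
print(url)  # http://example.com/some-page

# html = urlopen(url).read()  # uncomment to actually visit the link
```

No browser automation is needed unless the page you land on requires JavaScript or form interaction.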
I am not an expert in web scraping, but I have some experience with both Mechanize and Selenium. I think in your case either Mechanize or Selenium will suit your needs well, but also spend some time looking into these Python libraries: Beautiful Soup, urllib and urllib2.

In my humble opinion, I would recommend you use Mechanize over Selenium in your case, because Selenium is not as lightweight as Mechanize. Selenium is used for emulating a real web browser, so you can actually perform a '**click action**'.

There are some drawbacks to Mechanize. You will find Mechanize gives you a hard time when you try to click a **button** input. Also Mechanize doesn't understand java-scripts, so many times I had to mimic what the java-scripts were doing in my own python code.

Last advice: if you somehow decide to pick Selenium over Mechanize in the future, use a headless browser like PhantomJS rather than Chrome or Firefox to reduce Selenium's computation time. Hope this helps and good luck.
Can anyone clarify some options for Python Web automation
[ "", "python", "selenium-webdriver", "browser-automation", "pyjamas", "" ]
I'd like to save to disk all the variables I create within a particular function in one go, so that I can load them later. Something of the type:

```
>>> def test():
        a=1
        b=2
        save.to.file(filename='file', all.variables)

>>> load.file('file')
>>> a
>>> 1
```

Is there a way to do this in python? I know cPickle can do this, but as far as I know, one has to type cPickle.dump() for every single variable, and my script has dozens. Also, it seems that cPickle stores only the values and not the names of the variables, so one has to remember the order the data was originally saved.
Assuming all of the variables you want to save are local to the current function, you *can* get at them via the [`locals`](http://docs.python.org/3.3/library/functions.html#locals) function. This is almost always a very bad idea, but it is doable. For example: ``` def test(): a=1 b=2 pickle.dump(locals(), file) ``` If you `print locals()`, you'll see that it's just a dict, with a key for each local variable. So, when you later `load` the pickle, what you'll get back is that same dict. If you want to inject it into your local environment, you can… but you have to be very careful. For example, this function: ``` def test2(): locals().update(pickle.load(file)) print a ``` … will be compiled to expect `a` to be a global, rather than a local, so the fact that you've updated the local `a` will have no effect. This is just one of the reasons it's a bad idea to do this. So, what's the *right* thing to do? Most simply, instead of having a whole slew of variables, just have a dict with a slew of keys. Then you can pickle and unpickle the dict, and everything is trivial. Or, alternatively, explicitly pickle and unpickle the variables you want by using a tuple: ``` def test(): a = 1 b = 2 pickle.dump((a, b), file) def test2(): a, b = pickle.load(file) print a ``` --- In a comment, you say that you'd like to pickle a slew of variables, skipping any that can't be pickled. To make things simpler, let's say you actually just want to pickle a dict, skipping any values that can't be pickled. (The above should show why this solution is still fully general.) So, how do you know whether a value can be pickled? Trying to predict that is a tricky question. Even if you had a perfect list of all pickleable types, that still wouldn't help—a list full of integers can be pickled, but a list full of bound instance methods can't. This kind of thing is exactly why [EAFP](http://docs.python.org/3/glossary.html#term-eafp) ("Easier to Ask Forgiveness than Permission") is an important principle in duck-typed languages like Python.\* The way to find out if something can be pickled is to pickle it, and see if you get an exception. Here's a simple demonstration: ``` def is_picklable(value): try: pickle.dumps(value) except TypeError: return False else: return True def filter_dict_for_pickling(d): return {key: value for key, value in d.items() if is_picklable((key, value))} ``` You can make this a bit less verbose, and more efficient, if you put the whole stashing procedure in a wrapper function: ``` def pickle_filtered_dict(d, file): for key, value in d.items(): try: pickle.dump((key, value), file) except TypeError: pass def pickles(file): try: while True: yield pickle.load(file) except EOFError: pass def unpickle_filtered_dict(file): return {key: value for key, value in pickles(file)} ```
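A quick round-trip check of the filter-before-pickling idea from this answer (the helper names are illustrative, not from any library; a lambda stands in for an unpicklable value):

```python
import pickle

def is_picklable(value):
    # The only reliable test is to try it and catch the failure (EAFP).
    try:
        pickle.dumps(value)
    except Exception:
        return False
    return True

def filter_dict_for_pickling(d):
    # Keep only the entries whose (key, value) pair survives pickling.
    return {k: v for k, v in d.items() if is_picklable((k, v))}

state = {"a": 1, "b": [1, 2, 3], "bad": lambda x: x}  # lambdas can't be pickled
clean = filter_dict_for_pickling(state)
restored = pickle.loads(pickle.dumps(clean))
```

The `bad` entry is silently dropped; everything else survives the round trip.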
If you are not satisfied with the API of `pickle`, consider [shelve](http://docs.python.org/2/library/shelve.html) which does the pickling for you with a nicer `dict`-like front end. ex. ``` >>> import shelve >>> f = shelve.open('demo') >>> f <shelve.DbfilenameShelf object at 0x000000000299B9E8> >>> list(f.keys()) ['test', 'example'] >>> del f['test'] >>> del f['example'] >>> list(f.keys()) [] >>> f['a'] = 1 >>> list(f.keys()) ['a'] >>> list(f.items()) [('a', 1)] ```
saving variables within function - python
[ "", "python", "function", "save", "" ]
I have a set of numbers that are related to a key. When the key is not in the dictionary I want to add it along with its value, a set(), and if the key exists I would like to just add a number to the existing set for that key. The way I did it is like this: ``` for num in datasource: if not key in dict.keys(): dict[key] = set().add(num) else: dict[key].add(num) ``` But the issue with this is that when I add the number 03 it will add 0,3,03 to the set when what I really want to add is just 03. Any help would be appreciated.
Try this, for adding new set elements as values for a given key: ``` d = {} d.setdefault(key, set()).add(value) ``` Alternatively, use a `defaultdict`: ``` from collections import defaultdict d = defaultdict(set) d[key].add(value) ``` Either solution will effectively create a [multimap](https://en.wikipedia.org/wiki/Multimap): a data structure that for a given key can hold multiple values - in this case, inside a `set`. For your example in particular, this is how you'd use it: ``` d = {} for num in datasource: d.setdefault(key, set()).add(num) ``` Alternatively: ``` from collections import defaultdict d = defaultdict(set) for num in datasource: d[key].add(num) ```
Use [dict.setdefault](http://docs.python.org/2/library/stdtypes.html#dict.setdefault): ``` d.setdefault(key, set()).add(num) ``` Or use [collections.defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict): ``` d = collections.defaultdict(set) ... d[key].add(num) ```
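The core bug in the question's loop is that `set().add(num)` returns `None` (since `add` mutates in place), so the key ends up holding `None` instead of a set; `setdefault` avoids this because it returns the set itself. A minimal check:

```python
# add() mutates in place and returns None, so this stores None under the key:
broken = {}
broken["k"] = set().add("03")

# setdefault returns the set itself, so the string "03" goes in whole:
fixed = {}
fixed.setdefault("k", set()).add("03")
```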
Creating a dictionary and adding a set as its value
[ "", "python", "python-2.7", "" ]
Is there a way to have the items of a Tkinter Listbox be Entry Widgets? The result would be that you could dynamically modify the text in an Listbox entry. If your Listbox looks like: ``` -------- | Apples | | Pears | | Oranges | --------- ``` then you would want to be able to click on Apples and write some arbitrary text - you could then bind the Enter key, say, to trigger a function based on the new text.
You could give the user some entries and then create a listbox from that input, but you can't just change a listbox's text like that; maybe try a different GUI library like wxPython. **EDIT** Here is something you can do: ``` from Tkinter import * root = Tk() opt_list = ['opt1','opt2','opt3','opt4','opt5'] sel_list = [] def get_sel(): sel_list.append(Lb1.curselection()) root.destroy() def change_opt(): entry = E.get() change = entry.split(" ") print change Lb1.insert(int(change[0]),change[1]) root.update() def cancel(): root.destroy() E = Entry(root) A = Button(root, text ="Change", command = change_opt) B = Button(root, text ="Submit", command = get_sel) C = Button(root, text ="Cancel", command = cancel) Lb1 = Listbox(root, selectmode=MULTIPLE) for i,j in enumerate(opt_list): Lb1.insert(i,j) Lb1.pack() B.pack() C.pack() E.pack() A.pack() root.mainloop() ``` This will make a listbox with the options in `opt_list`; then when you type, for example, `5 hello` into the entry and press Change, it will add the option `hello` at the fifth place. That's the only way I can think of.
I know it has been a while since this question, but I have created a widget called 'ListboxEditable', which is able to act as a listbox and, when double-clicking on an item, the user can type anything inside an entry. Then, when the user clicks another row, the information is saved on the corresponding modified cell. Note that the user can use the up and down keys to browse the entire given list (the selected row has a different background color). This code has been developed based on the answer from @Bryan Oakley. # Minimal working case ``` # Imports from tkinter import * from tkinter.ttk import * # Import for the listboxEditable from ListboxEditable import * # Colors colorActiveTab="#CCCCCC" # Color of the active tab colorNoActiveTab="#EBEBEB" # Color of the no active tab # Fonts fontLabels='Calibri' sizeLabels2=13 # Main window root = Tk() # *** Design ***** frame_name=Frame(root,bg=colorActiveTab) # Column frame frame_name_label=Frame(frame_name,bg='blue') # Label frame label_name=Label(frame_name_label, text="Header", bg='blue', fg='white', font=(fontLabels, sizeLabels2, 'bold'), pady=2, padx=2, width=10) frame_name_listbox=Frame(frame_name,bg='blue') # Label frame list_name=['test1','test2','test3'] listBox_name=ListboxEditable(frame_name_listbox,list_name) # *** Packing **** frame_name.pack(side=LEFT,fill=Y) frame_name_label.pack(side=TOP, fill=X) label_name.pack(side=LEFT,fill=X) frame_name_listbox.pack(side=TOP, fill=X) listBox_name.placeListBoxEditable() # Infinite loop root.mainloop() ``` # ListboxEditable class ``` # Author: David Duran Perez # Date: May 26, 2017 # Necessary imports from tkinter import * from tkinter import ttk # Colors colorActiveTab="#CCCCCC" # Color of the active tab colorNoActiveTab="#EBEBEB" # Color of the no active tab # Fonts fontLabels='Calibri' sizeLabels2=13 class ListboxEditable(object): """A class that emulates a listbox, but you can also edit a field""" # Constructor def __init__(self,frameMaster,list): # *** Assign the first variables *** # The frame that contains the ListboxEditable self.frameMaster=frameMaster # List of the initial items self.list=list # Number of initial rows at the moment self.numberRows=len(self.list) # *** Create the necessary labels *** ind=1 for row in self.list: # Get the name of the label labelName='label'+str(ind) # Create the variable setattr(self, labelName, Label(self.frameMaster, text=self.list[ind-1], bg=colorActiveTab, fg='black', font=(fontLabels, sizeLabels2), pady=2, padx=2, width=10)) # ** Bind actions # 1 left click - Change background getattr(self, labelName).bind('<Button-1>',lambda event, a=labelName: self.changeBackground(a)) # Double click - Convert to entry getattr(self, labelName).bind('<Double-1>',lambda event, a=ind: self.changeToEntry(a)) # Move up and down getattr(self, labelName).bind("<Up>",lambda event, a=ind: self.up(a)) getattr(self, labelName).bind("<Down>",lambda event, a=ind: self.down(a)) # Increase the iterator ind=ind+1 # Place def placeListBoxEditable(self): # Go row by row placing it ind=1 for row in self.list: # Get the name of the label labelName='label'+str(ind) # Place the variable getattr(self, labelName).grid(row=ind-1,column=0) # Increase the iterator ind=ind+1 # Action to do when one click def changeBackground(self,labelNameSelected): # Ensure that all the remaining labels are deselected ind=1 for row in self.list: # Get the name of the label labelName='label'+str(ind) # Place the variable getattr(self, labelName).configure(bg=colorActiveTab) # Increase the iterator ind=ind+1 # Change the background of the corresponding label getattr(self, labelNameSelected).configure(bg=colorNoActiveTab) # Set the focus for future bindings (moves) getattr(self, labelNameSelected).focus_set() # Function to do when up button pressed def up(self, ind): if ind==1: # Go to the last # Get the name of the label labelName='label'+str(self.numberRows) else: # Normal # Get the name of the label labelName='label'+str(ind-1) # Call the select self.changeBackground(labelName) # Function to do when down button pressed def down(self, ind): if ind==self.numberRows: # Go to the last # Get the name of the label labelName='label1' else: # Normal # Get the name of the label labelName='label'+str(ind+1) # Call the select self.changeBackground(labelName) # Action to do when double-click def changeToEntry(self,ind): # Variable of the current entry self.entryVar=StringVar() # Create the entry #entryName='entry'+str(ind) # Name self.entryActive=ttk.Entry(self.frameMaster, font=(fontLabels, sizeLabels2), textvariable=self.entryVar, width=10) # Place it on the correct grid position self.entryActive.grid(row=ind-1,column=0) # Focus to the entry self.entryActive.focus_set() # Bind the action of focusOut self.entryActive.bind("<FocusOut>",lambda event, a=ind: self.saveEntryValue(a)) # Action to do when focus out from the entry def saveEntryValue(self,ind): # Find the label to recover labelName='label'+str(ind) # Remove the entry from the screen self.entryActive.grid_forget() # Place it again getattr(self, labelName).grid(row=ind-1,column=0) # Change the name to the value of the entry getattr(self, labelName).configure(text=self.entryVar.get()) ``` # Some screenshots [![enter image description here](https://i.stack.imgur.com/yAOZy.png)](https://i.stack.imgur.com/yAOZy.png) [![enter image description here](https://i.stack.imgur.com/I9Enn.png)](https://i.stack.imgur.com/I9Enn.png) [![enter image description here](https://i.stack.imgur.com/F0Ac7.png)](https://i.stack.imgur.com/F0Ac7.png)
Tkinter Listbox with Entry
[ "", "python", "tkinter", "listbox", "tkinter-entry", "" ]
I have a long list of xy coordinates, and would like to convert it into numpy array. ``` >>> import numpy as np >>> xy = np.random.rand(1000000, 2).tolist() ``` The obvious way would be: ``` >>> a = np.array(xy) # Very slow... ``` However, the above code is unreasonably slow. Interestingly, to transpose the long list first, convert it into numpy array, and then transpose back would be much faster (20x on my laptop). ``` >>> def longlist2array(longlist): ... wide = [[row[c] for row in longlist] for c in range(len(longlist[0]))] ... return np.array(wide).T >>> a = longlist2array(xy) # 20x faster! ``` Is this a bug of numpy? EDIT: This is a list of points (with xy coordinates) generated on-the-fly, so instead of preallocating an array and enlarging it when necessary, or maintaining two 1D lists for x and y, I think current representation is most natural. Why is looping through 2nd index faster than 1st index, given that we are iterating through a python list in both directions? EDIT 2: Based on @tiago's answer and [this question](https://stackoverflow.com/questions/367565/how-do-i-build-a-numpy-array-from-a-generator), I found the following code twice as fast as my original version: ``` >>> from itertools import chain >>> def longlist2array(longlist): ... flat = np.fromiter(chain.from_iterable(longlist), np.array(longlist[0][0]).dtype, -1) # Without intermediate list:) ... return flat.reshape((len(longlist), -1)) ```
Implementing this in Cython without the extra checking involved to determine dimensionality, etc. nearly eliminates the time difference you are seeing. Here's the `.pyx` file I used to verify that. ``` from numpy cimport ndarray as ar import numpy as np cimport cython @cython.boundscheck(False) @cython.wraparound(False) def toarr(xy): cdef int i, j, h=len(xy), w=len(xy[0]) cdef ar[double,ndim=2] new = np.empty((h,w)) for i in xrange(h): for j in xrange(w): new[i,j] = xy[i][j] return new ``` I would assume that the extra time is spent in checking the length and content of each sublist in order to determine the datatype, dimension, and size of the desired array. When there are only two sublists, it only has to check two lengths to determine the number of columns in the array, instead of checking 1000000 of them.
This is because the fastest-varying index of your list is the last one, so `np.array()` has to traverse the array many times because the first index is much larger. If your list was transposed, `np.array()` would be faster than your `longlist2array`: ``` In [65]: import numpy as np In [66]: xy = np.random.rand(10000, 2).tolist() In [67]: %timeit longlist2array(xy) 100 loops, best of 3: 3.38 ms per loop In [68]: %timeit np.array(xy) 10 loops, best of 3: 55.8 ms per loop In [69]: xy = np.random.rand(2, 10000).tolist() In [70]: %timeit longlist2array(xy) 10 loops, best of 3: 59.8 ms per loop In [71]: %timeit np.array(xy) 1000 loops, best of 3: 1.96 ms per loop ``` There is no magical solution for your problem. It's just how Python stores your list in memory. Do you really need to have a list with that shape? Can't you reverse it? (And do you really need a list, given that you're converting to numpy?) If you must convert a list, this function is about 10% faster than your `longlist2array`: ``` from itertools import chain def convertlist(longlist): tmp = list(chain.from_iterable(longlist)) return np.array(tmp).reshape((len(longlist), len(longlist[0]))) ```
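The `fromiter` + `reshape` variant from the question's second edit, written out as a self-contained function (the dtype is fixed to `float` here for simplicity; the original inferred it from the first element):

```python
import numpy as np
from itertools import chain

def longlist2array(longlist):
    # Flatten the nested list lazily, let fromiter build a 1-D array in
    # one pass, then reshape to the original (rows, cols) layout.
    n_cols = len(longlist[0])
    flat = np.fromiter(chain.from_iterable(longlist), dtype=float)
    return flat.reshape((len(longlist), n_cols))

xy = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
a = longlist2array(xy)
```

This avoids building any intermediate Python list, which is where the single-pass speedup comes from.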
why is converting a long 2D list to numpy array so slow?
[ "", "python", "performance", "numpy", "" ]
I'm trying to use pandas to manipulate a .csv file but I get this error: > pandas.parser.CParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 12 I have tried to read the pandas docs, but found nothing. My code is simple: ``` path = 'GOOG Key Ratios.csv' #print(open(path).read()) data = pd.read_csv(path) ``` How can I resolve this? Should I use the `csv` module or another language?
you could also try; ``` data = pd.read_csv('file1.csv', on_bad_lines='skip') ``` Do note that this will cause the offending lines to be skipped. If you don't expect many bad lines and want to (at least) know their amount and IDs, use `on_bad_lines='warn'`. For advanced handling of bads, you can pass a callable. **Edit** For Pandas < 1.3.0 try ``` data = pd.read_csv("file1.csv", error_bad_lines=False) ``` as per [pandas API reference](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html).
It might be an issue with * the delimiters in your data * the first row, as @TomAugspurger noted To solve it, try specifying the `sep` and/or `header` arguments when calling `read_csv`. For instance, ``` df = pandas.read_csv(filepath, sep='delimiter', header=None) ``` In the code above, `sep` defines your delimiter and `header=None` tells pandas that your source data has no row for headers / column titles. Thus saith [the docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html): "If file contains no header row, then you should explicitly pass header=None". In this instance, pandas automatically creates whole-number indices for each field {0,1,2,...}. According to the docs, the delimiter thing should *not* be an issue. The docs say that "if sep is None [not specified], will try to automatically determine this." I however have not had good luck with this, including instances with obvious delimiters. Another solution may be to try auto detect the delimiter ``` # use the first 2 lines of the file to detect separator temp_lines = csv_file.readline() + '\n' + csv_file.readline() dialect = csv.Sniffer().sniff(temp_lines, delimiters=';,') # remember to go back to the start of the file for the next time it's read csv_file.seek(0) df = pd.read_csv(csv_file, sep=dialect.delimiter) ```
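The delimiter-sniffing idea from the second answer works with the standard library alone; here it is against an in-memory file, so the detection step can be seen without pandas:

```python
import csv
import io

raw = "a;b;c\n1;2;3\n4;5;6\n"
f = io.StringIO(raw)

# Sniff the dialect from a sample, then rewind before the real read.
dialect = csv.Sniffer().sniff(f.read(1024), delimiters=";,")
f.seek(0)
rows = list(csv.reader(f, dialect))
```

The detected `dialect` (or just `dialect.delimiter`) can then be passed to `pd.read_csv(..., sep=dialect.delimiter)` as the answer suggests.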
pandas.parser.CParserError: Error tokenizing data
[ "", "python", "csv", "pandas", "" ]
I'm running test cases and I'd like to set up my logging in such a way that it automatically logs all cases where tests fail - but I'd like to get a custom response; for example, if an assertion fails I'd like to get the response to a request made by my test, not just the default message about which assertion failed. At present I only know that the assertion failed, but I don't know what the program returned. So say I'm testing a view function, for example I have a test which looks roughly like this (part of a whole TestCase class) ``` def edit_profile(self): return self.app.get("/edit_profile", follow_redirects=True) def test_edit_profile(self): rv = self.edit_profile() assert "Edit your profile admin" in rv.data ``` Is there a way for me to configure logging in such a way that each test failure will log rv.data to a log file? Currently I simply add logging.debug(rv.data) before the assertion that failed in previous tests, run the test again, debug the issue, and go on, but this is ineffective; it's easy to forget about those logging.debug() calls later on, and it would be much faster if I had a function to automatically log my webpage response to test requests if they fail.
``` self.assertIn('Edit your profile admin', rv.data, msg=rv.data) ``` Use the `assertWhatever` methods. I don't fully understand why, but you're not supposed to use `assert` statements for assertions in `unittest`. (Other frameworks let you assert with `assert`.) For reference, adding a message to an `assert` assertion works as follows: ``` assert 'Edit your profile admin' in rv.data, rv.data ```
You can do something like this: ``` def test_edit_profile(self): rv = self.edit_profile() try: assert "Edit your profile admin" in rv.data except AssertionError: # Do your logging here ``` Edit: Was pointed out that this takes the assert functionality away basically, since the assertion is handled by the except block. Suggestions welcome. Edit: This would work, but is pretty sloppy. ``` def test_edit_profile(self): rv = self.edit_profile() try: assert "Edit your profile admin" in rv.data except AssertionError: assert "Edit your profile admin" in rv.data # Do your logging here ```
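One reusable middle ground between the two answers: wrap the check in a helper that logs the response body only when the assertion fails, then re-raises so the test still counts as failed. The helper and logger names here are illustrative; the log output is captured in a `StringIO` just so it can be inspected:

```python
import io
import logging

log = logging.getLogger("tests")
stream = io.StringIO()
log.addHandler(logging.StreamHandler(stream))
log.setLevel(logging.ERROR)

def assert_in_response(needle, data):
    # Log the full body only on failure, then let AssertionError propagate
    # so the test framework still reports the test as failed.
    try:
        assert needle in data, needle
    except AssertionError:
        log.error("response body was: %r", data)
        raise

failed = False
try:
    assert_in_response("Edit your profile admin", "<html>404 not found</html>")
except AssertionError:
    failed = True
```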
Is there a way to configure Python logging to log content or context of assert failures?
[ "", "python", "testing", "logging", "flask", "assertions", "" ]
I have a string, and I need to check whether it contains a `number/digit at the end` of the string, and need to increment that `number/digit` at the end of the string with `+1` I will get the strings as below ``` string2 = suppose_name_1 string3 = suppose_name_22 string4 = supp22ose45_na56me_45 ``` For sure I will get the string in the above format, like suppose_something + `underscore` + `digits` So from the above strings 1. I need to check whether a string contains a number/digit at the end of the string after the underscore 2. If it does, then I need to increment that with `+1` like below string2 = suppose\_name\_**2** string3 = suppose\_name\_**23** string4 = supp22ose45\_na56me\_**46** How can we do this in Python using regular expressions or something similar? It needs to be very fast. I have done something like [here](https://stackoverflow.com/questions/11232474/is-there-a-better-way-to-find-if-string-contains-digits), but I want to implement it with re so that it will be very fast, hence I approached SO `Edit:` Sorry, I didn't mention above: sometimes it contains just `something_name` without an integer, hence I need to check first whether it contains a number
You don't need regex. You can just use simple `str.replace`: ``` >>> s = 'suppose_name_1' >>> index = s.rfind('_') # Last index of '_' >>> s.replace(s[index+1:], str(int(s[index+1:]) + 1)) 'suppose_name_2' ``` If you need to first check whether you have digits at the end, you can check that using `str.isdigit()` method: ``` >>> s = 'suppose_name' >>> >>> index = s.rfind('_') >>> if s[index+1:].isdigit(): s = s.replace(s[index+1:], str(int(s[index+1:]) + 1)) >>> s 'suppose_name' ```
How about using regular expressions: ``` import re def process_string(s): try: part1, part2 = re.search('^(.*_)(\d+)$', s).groups() part2 = str(int(part2) + 1) return part1 + part2 except AttributeError: return s print process_string("suppose_name_1") print process_string("suppose_name_22") print process_string("supp22ose45_na56me_45") print process_string("suppose_name") ``` prints: ``` suppose_name_2 suppose_name_23 supp22ose45_na56me_46 suppose_name ``` FYI, there is nothing wrong or scary with using regular expressions.
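A third regex option: `re.sub` with a replacement callback touches only a trailing digit run and leaves strings without one unchanged (note this drops leading zeros, e.g. `_09` becomes `_10`):

```python
import re

def bump_suffix(s):
    # (\d+)$ matches digits only at the very end; the callback increments them.
    return re.sub(r"(\d+)$", lambda m: str(int(m.group(1)) + 1), s)

examples = ["suppose_name_1", "suppose_name_22", "supp22ose45_na56me_45", "suppose_name"]
bumped = [bump_suffix(s) for s in examples]
```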
Check whether a string contains a numeric/digit/number in python
[ "", "python", "string", "digit", "" ]
My table has a `TIME` field. I want to keep only the 5 newest rows. Can I delete the old rows without using `SELECT`? I think the logic should be something like this: ``` DELETE FROM tbl WHERE row_num > 5 ORDER BY TIME ``` How can I implement this in MySQL without using `SELECT` to get the list of `TIME` values?
Without a proper `ORDER BY` clause, an SQL result set has to be considered *unordered*. You have to provide a column to explicitly store your rows' sequence numbers. This could be a timestamp or the auto\_increment column of your table. Please keep in mind you could have concurrent access to your table as well. What should the expected behavior be if someone else is inserting while you are deleting? As far as I can tell this could lead to a situation where you keep only the "5 latest rows" + "those inserted in the other transaction". --- If you have the `time` column for that purpose on your table and a `PRIMARY KEY` (or some other `UNIQUE NOT NULL` column) you could write: ``` DELETE tbl FROM tbl LEFT JOIN (SELECT * FROM tbl ORDER BY tm DESC LIMIT 5) AS k ON (tbl.pk) = (k.pk) WHERE k.`time` IS NULL; ``` If you have a composite primary key `(a,b)` you could write: ``` DELETE tbl FROM tbl LEFT JOIN (SELECT * FROM tbl ORDER BY tm DESC LIMIT 5) AS k ON (tbl.a,tbl.b) = (k.a,k.b) WHERE k.tm IS NULL; ```
Maybe this would be an alternative: ``` DELETE FROM tbl WHERE primary_key NOT IN (SELECT primary_key FROM tbl ORDER BY time DESC LIMIT 5) ```
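The `NOT IN` pattern from this answer, demonstrated end to end on SQLite so it is runnable from Python. One caveat: stock MySQL historically rejects a subquery that reads the same table a `DELETE` is modifying (error 1093) unless it is wrapped in a derived table, which is why the first answer uses a `LEFT JOIN` instead; SQLite has no such restriction, so it is used here purely for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (pk INTEGER PRIMARY KEY, tm INTEGER)")
conn.executemany("INSERT INTO tbl (tm) VALUES (?)", [(t,) for t in range(10)])

# Keep only the 5 rows with the newest tm values, delete the rest.
conn.execute(
    "DELETE FROM tbl WHERE pk NOT IN "
    "(SELECT pk FROM tbl ORDER BY tm DESC LIMIT 5)"
)
remaining = [row[0] for row in conn.execute("SELECT tm FROM tbl ORDER BY tm")]
```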
Correct way to delete 'older' rows in MySQL
[ "", "mysql", "sql", "" ]
What is the most **Pythonic** way to right split into groups of threes? I've seen this answer <https://stackoverflow.com/a/2801117/1461607> but I need it to be right aligned. Preferably a simple efficient one-liner without imports. * '123456789' = ['123','456','789'] * '12345678' = ['12','345','678'] * '1234567' = ['1','234','567']
Another way, not sure about efficiency (it'd be better if they were already numbers instead of strings), but is another way of doing it in 2.7+. ``` for i in map(int, ['123456789', '12345678', '1234567']): print i, '->', format(i, ',').split(',') #123456789 -> ['123', '456', '789'] #12345678 -> ['12', '345', '678'] #1234567 -> ['1', '234', '567'] ```
simple (iterated from the answer in your link): `[int(a[::-1][i:i+3][::-1]) for i in range(0, len(a), 3)][::-1]` Explanation : `a[::-1]` is the reverse list of `a` We will compose the inversion with the slicing. ## Step one : reverse the list ``` a = a[::-1] '123456789' - > '987654321' ``` ## Step Two : Slice in parts of three's ``` a[i] = a[i:i+3] '987654321' -> '987','654','321' ``` ## Step Three : invert the list again to present the digits in increasing order ``` a[i] = int(a[i][::-1]) '987','654','321' -> 789, 456, 123 ``` ## Final Step : invert the whole list ``` a = a[::-1] 789, 456, 123 -> 123, 456, 789 ``` ## Bonus : Functional synthetic sugar It's easier to debug when you have proper names for functions ``` invert = lambda a: a[::-1] slice = lambda array, step : [ int( invert( array[i:i+step]) ) for i in range(0, len(array), step) ] answer = lambda x: invert ( slice ( invert (x) , 3 ) ) answer('123456789') #>> [123,456,789] ```
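Another way to get the right-aligned grouping without reversing the string twice: walk the end index down from `len(s)` in steps of three and clamp the start of each slice at zero:

```python
def rsplit3(s):
    # Slice from the right end in steps of 3; max(0, ...) clamps the
    # leftmost (possibly shorter) group, then reverse to restore order.
    return [s[max(0, i - 3):i] for i in range(len(s), 0, -3)][::-1]

out = [rsplit3(s) for s in ("123456789", "12345678", "1234567")]
```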
Right split a string into groups of 3
[ "", "python", "python-3.x", "" ]
I have a list something like the following: ``` [{'modified': 'Thu, 08 Aug 2013 18:28:13 +0000', 'path': '/test4.txt'}, {'modified': 'Thu, 06 Aug 2013 18:28:17 +0000', 'path': '/test5.txt'}, ... ] ``` and so on. I want to sort the list by 'modified', in chronological (or reverse chronological) order What is the best way of doing this?
Your sort function needs to understand the dates first, else it will just sort by the letters of the weekday. The [dateutil](http://labix.org/python-dateutil) package is perfect for this task. It will automatically identify the date and doesn't require a format string like `datetime.strptime`. ``` from dateutil import parser # sort ascending print sorted(yourlist, key=lambda k: parser.parse(k['modified'])) # sort descending print sorted(yourlist, key=lambda k: parser.parse(k['modified']), reverse=True) ```
``` from datetime import datetime strfmt = '%a, %d %b %Y %H:%M:%S +0000' res = sorted(files, key=lambda e: datetime.strptime(e['modified'], strfmt)) ``` For the reverse, use the optional boolean `reverse` argument: ``` rev = sorted(files, key=..., reverse=True) ``` See [`sorted()`](http://docs.python.org/3/library/functions.html#sorted) for more information.
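The `strptime`-key version from this answer, run against the question's sample data (the `%a`/`%b` names assume an English locale; note `strptime` parses the weekday token but does not cross-check it against the date):

```python
from datetime import datetime

files = [
    {"modified": "Thu, 08 Aug 2013 18:28:13 +0000", "path": "/test4.txt"},
    {"modified": "Thu, 06 Aug 2013 18:28:17 +0000", "path": "/test5.txt"},
]
fmt = "%a, %d %b %Y %H:%M:%S +0000"

# Chronological order: the key turns each string into a comparable datetime.
by_date = sorted(files, key=lambda e: datetime.strptime(e["modified"], fmt))
```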
How to sort files by list elements by date modified, in Python?
[ "", "python", "list", "sorting", "" ]
Hello dear StackOverflow members, I am having trouble understanding the FuncAnimation module of matplotlib. Would you mind helping me out a bit? I have two questions: 1. Why do both the `init` and `animate` functions need a comma when returning only `PLOT`? 2. Why does my code not update the `time_text`? If I let it print t after every animation, it correctly counts up in the console, but the text does not get updated in the plot. ``` import numpy as np import matplotlib.pyplot as plt from matplotlib import animation fig = plt.figure() sub = fig.add_subplot(111,xlim=(0, 10), ylim=(0, 1)) PLOT, = sub.plot([],[]) time_text = sub.text(1,1,"",transform = sub.transAxes, ha="right") t = 0 def init(): PLOT.set_data([],[]) time_text.set_text("") return PLOT,time_text def animate(i): global t x = np.linspace(0,10,1000) y = np.exp(- ((x-0.01*i)/(2))**2 )/np.sqrt(2*np.pi) t += 1 PLOT.set_data(x,y) time_text.set_text("time = "+str(t)) return PLOT, time_text ani = animation.FuncAnimation(fig, animate, init_func=init, frames=2000, interval=20, blit=True) plt.show() ```
1) Not really sure what you mean. In both `init` and `animate` you need to return both `PLOT` and `time_text`, so you separate them with a comma so that each function returns a tuple. For the 'hanging' comma in `PLOT, = sub.plot([],[])`, `sub.plot` is returning a list with a single element, a `matplotlib.lines.Line2D` object. The `PLOT,` is unpacking this single element list. So, you can also do this: ``` PLOT = sub.plot([],[])[0] ``` To get the element. 2) Your code does update `time_text`, you are just drawing it out of the bounds of the graph. For example change: ``` time_text = sub.text(1,1,"",transform = sub.transAxes, ha="right") ``` To: ``` time_text = sub.text(1,0,"",transform = sub.transAxes, ha="right") ``` To get it to display in the bottom right, or `0.5, 0.5` to get it to display in the middle of the screen.
The other answers do not entirely answer the first question. The return statement of `init` and `animate` is needed strictly only when `blit=True` is used. Those functions are expected to return an iterable of artists to update for the animation. Having two artists, the return could look like ``` return artist1, artist2 # or return [artist1, artist2] ``` If a single artist shall be updated, one has to remember that still an iterable is needed. Using the comma after that single artist is then just the easiest way to make the function return an iterable ``` return artist1, # or return [artist1] ```
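The trailing comma in `PLOT, = sub.plot([],[])` and in the `return` lines is just single-element unpacking and 1-tuple construction, which can be seen without matplotlib at all (`fake_plot` is a stand-in for `sub.plot`):

```python
# plot() returns a list with one Line2D object; the comma unpacks it.
def fake_plot():
    return ["the-only-artist"]

artist, = fake_plot()        # same as: artist = fake_plot()[0]

# A return with a trailing comma produces a 1-tuple, so the caller
# still receives an iterable of artists, as blitting requires.
def init():
    return artist,

returned = init()
```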
Python Matplotlib FuncAnimation
[ "", "python", "animation", "matplotlib", "" ]
Let's say I have a file 'test.csv' containing the following headers and data: ``` h1 c1 h2 h3 c2 1 0 2 3 1 3 0 2 1 0 0 1 2 3 3 ``` What is the best option in python to only select and save the columns of interest and discard all others? Assuming I'm only interested in saving the h columns, I thought of something along these lines: ``` f = open('test.csv') s = save('new_test.csv', data = f, saveColumns=['h1','h2','h3'])´ n = load('new_test.csv') print n h1 h2 h3 1 2 3 3 2 1 0 2 3 ```
I found a very straightforward way of doing this: ``` import pandas as pd selectColumns = ['h1','h2','h3'] table = pd.read_csv('test.csv') tableNew = table[selectColumns] tableNew.to_csv('new_test.csv') ```
``` f = open("test.csv") header = {i: x for i, x in enumerate(f.readline().split())} columns = ('h1','h2','h3') for l in f: print [x for i, x in enumerate(l.split()) if header[i] in columns] ```
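The same column filtering with only the standard library, in case pandas is not available. This variant works on an in-memory comma-separated sample for clarity (the question's file is whitespace-delimited, so you would pass an appropriate `delimiter` for real data); `extrasaction="ignore"` is what drops the unwanted columns:

```python
import csv
import io

raw = "h1,c1,h2,h3,c2\n1,0,2,3,1\n3,0,2,1,0\n"
keep = ["h1", "h2", "h3"]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=keep, extrasaction="ignore")
writer.writeheader()
for row in csv.DictReader(io.StringIO(raw)):
    writer.writerow(row)   # columns not in `keep` are silently discarded

result = out.getvalue()
```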
selecting only relevant columns within a file - python
[ "", "python", "file", "save", "" ]
I have a descriptor on a class, and its `__set__` method does not get called. I have been looking long and hard on this for a few hours and have no answer for this. But what I noticed below is that when I assign 12 to MyTest.X, it erases the property descriptor for X, and replaces it with the value of 12. So the print statement for the Get function gets called. That's good. But the print statement for the `__set__` function does NOT get called at all. Am I missing something? ``` class _static_property(object): ''' Descriptor class used for declaring computed properties that don't require a class instance. ''' def __init__(self, getter, setter): self.getter = getter self.setter = setter def __get__(self, instance, owner): print "In the Get function" return self.getter.__get__(owner)() def __set__(self, instance, value): print "In setter function" self.setter.__get__()(value) class MyTest(object): _x = 42 @staticmethod def getX(): return MyTest._x @staticmethod def setX(v): MyTest._x = v X = _static_property(getX, setX) print MyTest.__dict__ print MyTest.X MyTest.X = 12 print MyTest.X print MyTest.__dict__ ```
The other two answers allude to using metaclasses to accomplish what you want to do. To help learn about them, here's an example of applying one to the code in your question that makes it do what you want: ``` class _static_property(object): """Descriptor class used for declaring computed properties that don't require a class instance. """ def __init__(self, getter, setter): self.getter = getter self.setter = setter def __get__(self, obj, objtype=None): print("In the getter function") return self.getter(obj) def __set__(self, obj, value): print("In setter function") self.setter(obj, value) class _MyMetaClass(type): def getX(self): return self._x def setX(self, v): self._x = v X = _static_property(getX, setX) class MyTest(object): __metaclass__ = _MyMetaClass # Python 2 syntax _x = 42 #class MyTest(object, metaclass=_MyMetaClass): # Python 3 (only) syntax # _x = 42 print(MyTest.__dict__) print(MyTest.X) MyTest.X = 12 print(MyTest.X) print(MyTest.__dict__) ``` Classes are instances of their metaclass which is normally type `type`. However you can use that as a base class and derive your own specialized meta-subclass -- that in this case is one that has class attributes which are data descriptors (aka properties). Note that the `self` argument in a meta or meta-subclass method is a metaclass instance, which is a class. In code above, it's named `MyTest`.
Descriptors work on instances of classes, not classes themselves. If you had an instance of `MyTest`, `X` would work as you expect, but accessing it as a class attribute actually sets the attribute of the class object itself. You could try defining a custom metaclass to add the descriptor to, then creating `MyTest` with that metaclass.
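The shortest runnable illustration of the metaclass fix (Python 3 syntax): put the descriptor on the metaclass, and `MyTest.X = ...` then routes through `__set__` because `MyTest` is an *instance* of the metaclass:

```python
class StaticX:
    def __get__(self, obj, objtype=None):
        # obj is the class itself (an instance of Meta)
        return obj._x

    def __set__(self, obj, value):
        # Called for assignments like MyTest.X = 12
        obj._x = value

class Meta(type):
    X = StaticX()

class MyTest(metaclass=Meta):
    _x = 42

before = MyTest.X
MyTest.X = 12      # goes through StaticX.__set__, no longer clobbers X
after = MyTest.X
```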
Why does the Python descriptor __set__ not get called
[ "", "python", "" ]
I'm using the following queries to extract the top 100 and 101 rows from the DB, and I'm getting the following elapsed times, which are completely different (the second query is ~8x slower than the first): ``` SELECT TOP (100) * FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId ORDER BY createDate DESC GO ``` SQL Server Execution Times: CPU time = 187 ms, elapsed time = 202 ms. ``` SELECT TOP (101) * FROM PhotoLike WHERE photoAccountId=@accountId AND accountId<>@accountId ORDER BY createDate DESC GO ``` SQL Server Execution Times: CPU time = 266 ms, elapsed time = 1644 ms. Execution plan of first two cases: ![Select top 100 and 101 with variable](https://i.stack.imgur.com/Id9Hz.png) But if I get rid of the @accountId variable, I get the following results, which are approximately equal and more than 2x faster than the first query in this question. ``` SELECT TOP (100) * FROM PhotoLike WHERE photoAccountId=10 AND accountId<>10 ORDER BY createDate DESC GO ``` SQL Server Execution Times: CPU time = 358 ms, elapsed time = 90 ms. ``` SELECT TOP (101) * FROM PhotoLike WHERE photoAccountId=10 AND accountId<>10 ORDER BY createDate DESC GO ``` SQL Server Execution Times: CPU time = 452 ms, elapsed time = 93 ms. Execution plan of second two cases: ![Select top 100 and 101 without variable](https://i.stack.imgur.com/wUcf2.png) Why does this happen, and how can I improve performance with variables? **UPDATE** Added execution plans.
There are a couple of things going on here. When you use variables SQL Server doesn't sniff the values at all except if you also add `OPTION (RECOMPILE)`. The estimate for the number of rows matching `photoAccountId=@accountId` is much smaller with the guess than is actually the case. (Note the thick line coming out of the index seek in the second plan and the decision to use a parallel plan). Also `TOP 100` / `TOP 101` is the [cut off point](http://web.archive.org/web/20180324233420/http://sqlblog.com:80/blogs/paul_white/archive/2010/08/27/sorting-row-goals-and-the-top-100-problem.aspx) between the `TOP N` sort using an algorithm that just needs space to sort 100 rows and it doing a full sort. The inaccurate row count estimate likely means there is insufficient memory allocated for the full sort and it is spilling to `tempdb`. Simply adding `OPTION (RECOMPILE)` to the query with variables will likely improve things somewhat, though it looks as though even the "fast" plan is doing many key lookups that could be avoided with different indexing.
I wonder if this could be parameter sniffing related. How fast does the following query go? ``` DECLARE @accountIdParam int; SELECT @accountIdParam = @accountId; SELECT TOP (101) * FROM PhotoLike WHERE photoAccountId=@accountIdParam AND accountId<>@accountIdParam ORDER BY createDate DESC GO ```
Low performance query while using database variables
[ "", "sql", "sql-server", "performance", "" ]
I would like to be able to use a USB device from several applications (for instance I run a Flask web application), making sure only one uses it at a time. In my case I am using a relay to open / close a door. The door takes about 20 seconds to open. During that time the relay should not be activated, because this would lock the door in the middle. Thanks in advance!
You have a number of possible solutions. The [posix\_ipc](http://semanchuk.com/philip/posix_ipc/) or [sysv\_ipc](http://semanchuk.com/philip/sysv_ipc/) Python modules provide a named semaphore class. You can initialize a semaphore to 1 and have all your web processes acquire the semaphore while they send a command to the USB device. A somewhat similar mechanism can be achieved using a file lock, maybe using [fcntl.lockf](http://docs.python.org/2/library/fcntl.html#fcntl.lockf). Another completely different approach could be to put a new process between your web server processes and the USB API. This process exposes an API that receives requests from the web server workers and implicitly serializes those into the USB device by means of being a single threaded process.
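To make the serialization idea concrete, here is a minimal single-process sketch using `threading.Lock` — the function names and the shortened "door cycle" delay are illustrative stand-ins, not part of any real relay API. Across separate server processes you would swap the lock for one of the named semaphores or file locks mentioned above:

```python
import threading
import time

relay_lock = threading.Lock()
events = []  # records (worker, phase) pairs to show accesses never interleave

def operate_door(worker):
    # Only one caller at a time may drive the relay; the others block here.
    with relay_lock:
        events.append((worker, "start"))
        time.sleep(0.01)  # stand-in for the ~20 s the real door takes
        events.append((worker, "end"))

threads = [threading.Thread(target=operate_door, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each worker's "start" is immediately followed by its own "end".
print(events)
```

Because every append happens while the lock is held, no worker's door cycle can be interrupted by another's, which is exactly the property the door mechanism needs.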
Not exactly an answer to your question, but maybe another idea about how to tackle this without using globals. Why don't you write a small program for controlling your USB device? This script runs once (one instance) on your server and takes care of communicating with the device in the manner you need. It also takes care of concurrency. Now communicate from your web application via pipes, sockets, or whatever with this script: send commands to it and receive results from it.
How to prevent concurrent access to a resource such as a USB device?
[ "", "python", "concurrency", "locking", "semaphore", "superglobals", "" ]
I am trying to create a new text file in an empty folder. The path to the folder is: ``` C:\Users\Tor\Desktop\Python files\retning ``` When I type this in the command line in Windows Explorer, I get straight to the empty folder. When I type my code in Python I get an error message, and it looks like Python has replaced a couple of the `'\'` with `'\\'` This is my code ``` sector='A5' g=open('C:\Users\Tor\Desktop\Python files\retning\retning'+sector+'.txt', 'a') ``` and this is the error message ``` Traceback (most recent call last): File "C:\Users\Tor\Desktop\Python files\filer som behandler output\Vindretning.py", line 2, in <module> g=open('C:\Users\Tor\Desktop\Python files\retning\retning'+sector+'.txt', 'a') IOError: [Errno 22] invalid mode ('a') or filename: 'C:\\Users\\Tor\\Desktop\\Python files\retning\retningA5.txt' ``` Can anyone please tell me what I am doing wrong, or what is happening here?
`\` needs to be escaped in the strings. That is why `\\` or raw strings are used (`r'test String'`) Using raw strings solves the problem here. Something like, ``` open(r'C:\Programming Test Folder\test_file.py') ``` So, your code gets changed to ``` g=open(r'C:\Users\Tor\Desktop\Python files\retning\retning{}.txt'.format(sector), 'a') ``` Or use `/` in Windows, like follows ``` g=open('C:/Users/Tor/Desktop/Python files/retning/retning'+sector+'.txt', 'a') ```
This is *normal* behaviour; Python is giving you a string representation that can be pasted right back into a Python script or interpreter prompt. Since `\` is a character used in Python string literals to start an escape sequence (such as `\n` or `\xa0`) the backslashes are doubled. In fact, it is the characters *without* escaped backslashes that are the key here; `\r` is the escape code for a carriage return. You need to use one of the following options to specify Windows paths instead: * Escape all backslashes by doubling them in your string literals: ``` g = open('C:\\Users\\Tor\\Desktop\\Python files\\retning\\retning'+sector+'.txt', 'a') ``` Now the `\r` won't be interpreted as an escape code. * Use a raw string literal: ``` g = open(r'C:\Users\Tor\Desktop\Python files\retning\retning'+sector+'.txt', 'a') ``` In raw string literals most escape codes are ignored. * Use *forward* slashes: ``` g = open('C:/Users/Tor/Desktop/Python files/retning/retning'+sector+'.txt', 'a') ``` Forward slashes work fine as path separators on Windows, and there's no chance of them being interpreted as escape characters.
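A quick check of the three options from the answer above — the path here is just an example:

```python
# All three spellings denote the same Windows path string.
escaped = 'C:\\Users\\Tor\\Desktop\\retning.txt'   # doubled backslashes
raw     = r'C:\Users\Tor\Desktop\retning.txt'      # raw string literal
forward = 'C:/Users/Tor/Desktop/retning.txt'       # forward slashes

assert escaped == raw                  # identical strings, different literals
assert forward.replace('/', '\\') == raw
assert len('\r') == 1                  # '\r' is ONE character: carriage return
assert len(r'\r') == 2                 # in a raw literal it stays backslash + 'r'
print(repr(escaped))                   # repr doubles the backslashes, as in the traceback
```

The last line shows why the error message printed `\\` — that is just `repr` output, not the actual contents of the string.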
Python reads \\ when \ is the input
[ "", "python", "string", "append", "" ]
I have a database of parent-child connections. The data look like the following but could be presented in whichever way you want (dictionaries, list of lists, JSON, etc). ``` links=(("Tom","Dick"),("Dick","Harry"),("Tom","Larry"),("Bob","Leroy"),("Bob","Earl")) ``` The output that I need is a hierarchical JSON tree, which will be rendered with d3. There are discrete sub-trees in the data, which I will attach to a root node. So I need to recursively go though the links, and build up the tree structure. The furthest I can get is to iterate through all the people and append their children, but I can't figure out to do the higher order links (e.g. how to append a person with children to the child of someone else). This is similar to another question [here](https://stackoverflow.com/questions/10737273/recursive-function-to-create-hierarchical-json-object), but I have no way to know the root nodes in advance, so I can't implement the accepted solution. I am going for the following tree structure from my example data. ``` { "name":"Root", "children":[ { "name":"Tom", "children":[ { "name":"Dick", "children":[ {"name":"Harry"} ] }, { "name":"Larry"} ] }, { "name":"Bob", "children":[ { "name":"Leroy" }, { "name":"Earl" } ] } ] } ``` This structure renders like this in my d3 layout. ![Rendered image](https://i.stack.imgur.com/1Acgc.png)
To identify the root nodes you can unzip `links` and look for parents who are not children: ``` parents, children = zip(*links) root_nodes = {x for x in parents if x not in children} ``` Then you can apply the recursive method: ``` import json links = [("Tom","Dick"),("Dick","Harry"),("Tom","Larry"),("Bob","Leroy"),("Bob","Earl")] parents, children = zip(*links) root_nodes = {x for x in parents if x not in children} for node in root_nodes: links.append(('Root', node)) def get_nodes(node): d = {} d['name'] = node children = get_children(node) if children: d['children'] = [get_nodes(child) for child in children] return d def get_children(node): return [x[1] for x in links if x[0] == node] tree = get_nodes('Root') print json.dumps(tree, indent=4) ``` I used a set to get the root nodes, but if order is important you can use a list and [remove the duplicates](https://stackoverflow.com/questions/10549345/how-to-remove-duplicate-items-from-a-list-using-list-comprehension).
Try the following code: ``` import json links = (("Tom","Dick"),("Dick","Harry"),("Tom","Larry"),("Tom","Hurbert"),("Tom","Neil"),("Bob","Leroy"),("Bob","Earl"),("Tom","Reginald")) name_to_node = {} root = {'name': 'Root', 'children': []} for parent, child in links: parent_node = name_to_node.get(parent) if not parent_node: name_to_node[parent] = parent_node = {'name': parent} root['children'].append(parent_node) name_to_node[child] = child_node = {'name': child} parent_node.setdefault('children', []).append(child_node) print json.dumps(root, indent=4) ```
Recursively build hierarchical JSON tree?
[ "", "python", "django", "json", "recursion", "d3.js", "" ]
Two transactions in two sessions are operating on the same item as follows: In session 1: ``` begin tran T1 insert into Invoice with (item,OrderNumber) select 'ItemA', max(OrderNumber)+1 from Orders where item='ItemA' waitfor delay '00:00:05' commit T1 ``` In session 2: ``` begin tran T2 insert into Invoice with (item,OrderNumber) select 'ItemA', max(OrderNumber)+1 from Orders where item='ItemA' commit T2 ``` As it stands, two identical rows will be inserted into the table Orders. But I want the transaction in one session to finish first, so that the other transaction can read the new max(OrderNumber) and then insert the next value. I added holdlock to T1 as: ``` begin tran T1 insert into Invoice with (item,OrderNumber) select 'ItemA', max(OrderNumber)+1 from Orders with (holdlock) where item='ItemA' waitfor delay '00:00:05' commit T1 ``` Does SQL Server assign a shared lock for the SELECT first (since it parses the SELECT statement first) and then an exclusive lock for the INSERT statement? How exactly do the locks in the two sessions interact with each other? Thanks for any hints
You can use the serializable isolation level for your transaction. Ex: ``` set transaction isolation level serializable begin tran insert into Invoice with (item,OrderNumber) select 'ItemA', max(OrderNumber)+1 from Orders where item='ItemA' waitfor delay '00:00:05' commit tran ``` The serializable option will provide the following transaction guarantees: 1. Statements cannot read data that has been modified but not yet committed by other transactions 2. No other transactions can modify data that has been read by the current transaction until the current transaction completes 3. Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes. The above will work for your problem, but I would suggest using an identity column instead of the max(OrderNumber) + 1 logic. So change OrderNumber to be an identity column in your table, and when you read data use row\_number to compute the order number by Item at runtime; here is an example query: ``` select Item, Row_Number() over(partition by Item order by OrderNumber) as OrderNumber from Invoice ``` So the above query will give the result you need.
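The identity-plus-`row_number` idea can be tried out with SQLite from Python (window functions need SQLite 3.25+; table and column names mirror the question, everything else here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Invoice ("
    " OrderNumber INTEGER PRIMARY KEY AUTOINCREMENT,"  # identity-style column
    " Item TEXT)"
)
conn.executemany("INSERT INTO Invoice (Item) VALUES (?)",
                 [("ItemA",), ("ItemB",), ("ItemA",)])

# Per-item order numbers are computed at read time, not stored,
# so concurrent inserts never fight over max(OrderNumber)+1.
rows = conn.execute(
    "SELECT Item,"
    "       ROW_NUMBER() OVER (PARTITION BY Item ORDER BY OrderNumber) AS n"
    "  FROM Invoice ORDER BY OrderNumber"
).fetchall()
print(rows)  # [('ItemA', 1), ('ItemB', 1), ('ItemA', 2)]
```

The second ItemA row gets per-item number 2 even though its identity value is 3, which is exactly the separation of "storage key" from "display order" the answer recommends.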
Database locking schemes are integral part of any database management application. For integrity of data stored in databases there are different locking schemes provided by different database vendors. You should check the following links [First Link](http://www.mssqltips.com/sqlservertip/1968/understanding-sql-server-locking/) [Second Link](http://aboutsqlserver.com/2011/04/28/locking-in-microsoft-sql-server-part-2-locks-and-transaction-isolation-levels/). If these doesn't help then please let me know so that I can help you further.
How do locks work in an insert...select statement?
[ "", "sql", "sql-server", "t-sql", "" ]
I need to write a query for my search page. My filters look like this: ``` @StartDateForColumn1 datetime, @FinishDateForColumn1 datetime, @StartDateForColumn2 datetime, @FinishDateForColumn2 datetime, ``` All of them can be null. If so, I need all data: ``` select * from Table1 ``` For example: If only `@FinishDateForColumn1` is filled by the user, the query will be: ``` select * from Table1 where Column1<@FinishDateForColumn1 ``` So I don't want to write if-else for all conditions. Is there any shorter way to write a query for this example? Maybe I need to use coalesce, but I don't know how to use it in this query.
the following assumes that you don't have any dates outside the range 01/01/1971 and 31/12/2099. ``` select * from Table1 where Column1< ISNULL(@FinishDateForColumn1,'31/Dec/2099') and Column1 > ISNULL(@StartDateForColumn1, '01/Jan/1971') and Column2< ISNULL(@FinishDateForColumn2,'31/Dec/2099') and Column2 > ISNULL(@StartDateForColumn2, '01/Jan/1971') ```
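The same sentinel-date pattern can be exercised with SQLite from Python (which spells `ISNULL` as `COALESCE`/`IFNULL`); ISO-formatted date strings are used so string comparison matches date order, and all names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Column1 TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?)",
                 [("2012-05-01",), ("2013-02-10",), ("2013-08-30",)])

def search(start=None, finish=None):
    # NULL parameters fall back to the wide-open sentinel range,
    # so one query handles every combination of filled/empty filters.
    return [r[0] for r in conn.execute(
        "SELECT Column1 FROM Table1"
        " WHERE Column1 > COALESCE(?, '1971-01-01')"
        "   AND Column1 < COALESCE(?, '2099-12-31')"
        " ORDER BY Column1",
        (start, finish))]

print(search())                     # all three rows
print(search(finish="2012-12-31"))  # only the 2012 row
```

As the answer notes, the sentinels only work if no real data falls outside the 1971–2099 range.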
You can use `coalesce`, or `isnull`, in the following fashion ``` select * from Table1 where Column1<= isnull(@FinishDateForColumn1, Column1) ``` NB. This does assume that your column1 data does not contain nulls.
Search query for multiple date filters
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "coalesce", "" ]
I have a messages table with 5 million rows, and I want to retrieve the last 10 rows of a conversation between two users ``` +---------------------------------------------------------------+ | messages | +---------------------------------------------------------------+ | message_id | id_sender | id_dest | subject | message | time | +---------------------------------------------------------------+ ``` This is the query; how can I optimize it? ``` SELECT * FROM (SELECT message FROM messages WHERE ( id_sender = id1 AND id_dest = id2 ) or ( id_dest = id1 AND id_sender = id2 ) ORDER BY message_id DESC LIMIT 10) AS ttbl ORDER BY message_id ASC ``` Thanks! I have an index on id\_sender, id\_dest and message\_id Updated
This is your query: ``` SELECT * FROM (SELECT * FROM messages WHERE (id_sender = id1 AND id_dest = id2) or (id_dest = id1 AND id_sender = id2) ORDER BY message_id DESC LIMIT 10 ) ttbl ORDER BY message_id ASC; ``` Create an index on `message(id_sender, id_dest, message_id)`. This will allow the inner query to be satisfied using an index. I wouldn't worry about the `*` -- it is bad practice to use `*` in operational queries, because the underlying tables might change. For this, you are choosing 10 records. The engine will have to look up the 10 records to get all the fields and then sort them (the first sort should be handled by the index). Sorting 10 records is generally not a big deal.
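The effect of the suggested index can be seen even in SQLite via `EXPLAIN QUERY PLAN` — the exact plan wording varies by SQLite version, so treat this as a sketch rather than a MySQL/PostgreSQL transcript:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages ("
    " message_id INTEGER PRIMARY KEY,"
    " id_sender INT, id_dest INT, message TEXT)"
)
# Index covers the two filter columns plus the ordering column.
conn.execute("CREATE INDEX ix_conv ON messages (id_sender, id_dest, message_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT message_id FROM messages "
    "WHERE id_sender = 1 AND id_dest = 2 "
    "ORDER BY message_id DESC LIMIT 10"
).fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # the plan mentions the covering index; no separate sort step
```

Since every column the query touches lives in the index, the engine can answer one half of the `OR` straight from index pages; your real query would hit the index twice, once per sender/dest orientation.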
First note on optimization, it's a lot more involved than *how can I optimize this?* Secondly, some ideas: * Don't use `SELECT *` if it's not necessary. Just bring back the fields needed. * This builds off the first one. Build a covering index. This means that if the fields `a, b, c` are used in the query *anywhere*, then you can build an index on `a, b, c` on the table. This will allow the database to read off the index page rather than having to seek, load, and read from the data page.
optimize query sql for messenger
[ "", "mysql", "sql", "postgresql", "" ]
I am trying to figure out how I can count the uppercase letters in a string. I have only been able to count lowercase letters: ``` def n_lower_chars(string): return sum(map(str.islower, string)) ``` Example of what I am trying to accomplish: ``` Type word: HeLLo Capital Letters: 3 ``` When I try to flip the function above, it produces errors: ``` def n_upper_chars(string): return sum(map(str.isupper, string)) ```
You can do this with [`sum`](https://docs.python.org/3/library/functions.html#sum), a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression), and [`str.isupper`](https://docs.python.org/3/library/stdtypes.html#str.isupper): ``` message = input("Type word: ") print("Capital Letters: ", sum(1 for c in message if c.isupper())) ``` See a demonstration below: ``` >>> message = input("Type word: ") Type word: aBcDeFg >>> print("Capital Letters: ", sum(1 for c in message if c.isupper())) Capital Letters: 3 >>> ```
You can use `re`: ``` import re string = "Not mAnY Capital Letters" len(re.findall(r'[A-Z]',string)) ``` > *5*
Count the uppercase letters in a string with Python
[ "", "python", "string", "python-3.x", "count", "uppercase", "" ]
I am learning how to use the `threading` and the `multiprocessing` modules in Python to run certain operations in parallel and speed up my code. I am finding it hard (maybe because I don't have any theoretical background about it) to understand what the difference is between a `threading.Thread()` object and a `multiprocessing.Process()` one. Also, it is not entirely clear to me how to instantiate a queue of jobs and have only 4 (for example) of them running in parallel, while the others wait for resources to free before being executed. I find the examples in the documentation clear, but not very exhaustive; as soon as I try to complicate things a bit, I receive a lot of weird errors (like a method that can't be pickled, and so on). So, when should I use the `threading` and `multiprocessing` modules? Can you link me to some resources that explain the concepts behind these two modules and how to use them properly for complex tasks?
[What Giulio Franco says](https://stackoverflow.com/a/18114475) is true for multithreading vs. multiprocessing *in general*. However, Python\* has an added issue: There's a Global Interpreter Lock that prevents two threads in the same process from running Python code at the same time. This means that if you have 8 cores, and change your code to use 8 threads, it won't be able to use 800% CPU and run 8x faster; it'll use the same 100% CPU and run at the same speed. (In reality, it'll run a little slower, because there's extra overhead from threading, even if you don't have any shared data, but ignore that for now.) There are exceptions to this. If your code's heavy computation doesn't actually happen in Python, but in some library with custom C code that does proper GIL handling, like a numpy app, you will get the expected performance benefit from threading. The same is true if the heavy computation is done by some subprocess that you run and wait on. More importantly, there are cases where this doesn't matter. For example, a network server spends most of its time reading packets off the network, and a GUI app spends most of its time waiting for user events. One reason to use threads in a network server or GUI app is to allow you to do long-running "background tasks" without stopping the main thread from continuing to service network packets or GUI events. And that works just fine with Python threads. (In technical terms, this means Python threads give you concurrency, even though they don't give you core-parallelism.) But if you're writing a CPU-bound program in pure Python, using more threads is generally not helpful. Using separate processes has no such problems with the GIL, because each process has its own separate GIL. 
Of course you still have all the same tradeoffs between threads and processes as in any other languages—it's more difficult and more expensive to share data between processes than between threads, it can be costly to run a huge number of processes or to create and destroy them frequently, etc. But the GIL weighs heavily on the balance toward processes, in a way that isn't true for, say, C or Java. So, you will find yourself using multiprocessing a lot more often in Python than you would in C or Java. --- Meanwhile, Python's "batteries included" philosophy brings some good news: It's very easy to write code that can be switched back and forth between threads and processes with a one-liner change. If you design your code in terms of self-contained "jobs" that don't share anything with other jobs (or the main program) except input and output, you can use the [`concurrent.futures`](http://docs.python.org/3/library/concurrent.futures.html) library to write your code around a thread pool like this: ``` with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor: executor.submit(job, argument) executor.map(some_function, collection_of_independent_things) # ... ``` You can even get the results of those jobs and pass them on to further jobs, wait for things in order of execution or in order of completion, etc.; read the section on `Future` objects for details. Now, if it turns out that your program is constantly using 100% CPU, and adding more threads just makes it slower, then you're running into the GIL problem, so you need to switch to processes. All you have to do is change that first line: ``` with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor: ``` The only real caveat is that your jobs' arguments and return values have to be pickleable (and not take too much time or memory to pickle) to be usable cross-process. Usually this isn't a problem, but sometimes it is. --- But what if your jobs can't be self-contained? 
If you can design your code in terms of jobs that *pass messages* from one to another, it's still pretty easy. You may have to use `threading.Thread` or `multiprocessing.Process` instead of relying on pools. And you will have to create `queue.Queue` or `multiprocessing.Queue` objects explicitly. (There are plenty of other options—pipes, sockets, files with flocks, … but the point is, you have to do *something* manually if the automatic magic of an Executor is insufficient.) But what if you can't even rely on message passing? What if you need two jobs to both mutate the same structure, and see each others' changes? In that case, you will need to do manual synchronization (locks, semaphores, conditions, etc.) and, if you want to use processes, explicit shared-memory objects to boot. This is when multithreading (or multiprocessing) gets difficult. If you can avoid it, great; if you can't, you will need to read more than someone can put into an SO answer. --- From a comment, you wanted to know what's different between threads and processes in Python. Really, if you read Giulio Franco's answer and mine and all of our links, that should cover everything… but a summary would definitely be useful, so here goes: 1. Threads share data by default; processes do not. 2. As a consequence of (1), sending data between processes generally requires pickling and unpickling it.\*\* 3. As another consequence of (1), directly sharing data between processes generally requires putting it into low-level formats like Value, Array, and `ctypes` types. 4. Processes are not subject to the GIL. 5. On some platforms (mainly Windows), processes are much more expensive to create and destroy. 6. There are some extra restrictions on processes, some of which are different on different platforms. See [Programming guidelines](http://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming) for details. 7. 
The `threading` module doesn't have some of the features of the `multiprocessing` module. (You can use `multiprocessing.dummy` to get most of the missing API on top of threads, or you can use higher-level modules like `concurrent.futures` and not worry about it.) --- \* It's not actually Python, the language, that has this issue, but CPython, the "standard" implementation of that language. Some other implementations don't have a GIL, like Jython. \*\* If you're using the [fork](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) start method for multiprocessing—which you can on most non-Windows platforms—each child process gets any resources the parent had when the child was started, which can be another way to pass data to children.
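A self-contained version of the `concurrent.futures` fragments above — the job function is illustrative; swap `ThreadPoolExecutor` for `ProcessPoolExecutor` to cross the GIL boundary, provided the job stays picklable (i.e. a module-level function):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # A trivial stand-in for a real job; CPU-bound work in pure Python
    # would want ProcessPoolExecutor instead, per the GIL discussion above.
    return n * n

# map() preserves input order even though jobs may finish out of order.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(square, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The one-liner switch the answer describes is literally changing the import and the class name; the surrounding code is untouched.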
Multiple threads can exist in a single process. The threads that belong to the same process share the same memory area (can read from and write to the very same variables, and can interfere with one another). On the contrary, different processes live in different memory areas, and each of them has its own variables. In order to communicate, processes have to use other channels (files, pipes or sockets). If you want to parallelize a computation, you're probably going to need multithreading, because you probably want the threads to cooperate on the same memory. Speaking about performance, threads are faster to create and manage than processes (because the OS doesn't need to allocate a whole new virtual memory area), and inter-thread communication is usually faster than inter-process communication. But threads are harder to program. Threads can interfere with one another, and can write to each other's memory, but the way this happens is not always obvious (due to several factors, mainly instruction reordering and memory caching), and so you are going to need synchronization primitives to control access to your variables.
What are the differences between the threading and multiprocessing modules?
[ "", "python", "multithreading", "parallel-processing", "process", "multiprocessing", "" ]
How to get occurrence counts for the elements of a float array? If the array is [-1,2,3,-1,3,4,4,4,4,4], the result should be [2,1,2,5], not necessarily in that order, and the mapping from counts to the elements that are counted is not needed, only counts matter. numpy.histogram would do something similar, but it must use bins, which requires precomputing bin-size to separate the elements and can also create unnecessarily many empty bins. This can also be done manually with hashing or sorting, but it seems there should be a fast, one-shot way without python-level loops. Thanks! Edit: I tried the solutions suggested at the time of writing and thought I'd share the results as they are somewhat unexpected. What I did not mention originally is that the flow works with rather small lists, but the operation is invoked millions of times, which is somewhat of a corner case. The test and its printout are below. histogramize1 is my original function whose performance I wanted to improve. It is about 2x faster than the second fastest, and it would be interesting to know why.
``` import numpy as np from collections import Counter from timeit import timeit def histogramize1(X): cnts = {} for x in X: if x in cnts: cnts[x] += 1 else: cnts[x] = 1 lst = [ v for k,v in cnts.iteritems() ] lX = len(X) return [ float(x)/lX for x in lst ] def histogramize2(X): ua,uind= np.unique(X,return_inverse=True) lX = len(X) res = [float(x)/lX for x in np.bincount(uind)] return res def histogramize3(X): counts = Counter(X) lX = len(X) res = [float(x)/lX for x in counts.viewvalues()] return res def histogramize4(X): lX = len(X) return [float(X.count(i))/lX for i in np.unique(X)] if __name__ == '__main__': lst0 = [-1,2,3,-1,3,4,4,4,4,4] lst = lst0 + lst0 + lst0 + lst0 num = 100000 print timeit("histogramize1(lst)",setup="from __main__ import histogramize1, lst",number=num) print timeit("histogramize2(lst)",setup="from __main__ import histogramize2, lst",number=num) print timeit("histogramize3(lst)",setup="from __main__ import histogramize3, lst",number=num) print timeit("histogramize4(lst)",setup="from __main__ import histogramize4, lst",number=num) ``` This prints: 1.35243415833 10.0806729794 2.89171504974 15.5577590466
For Python 2.7+: ``` >>> from collections import Counter >>> counts = Counter([-1,2,3,-1,3,4,4,4,4,4]) >>> counts.viewvalues() # counts.values() in Python 3+ dict_values([1, 2, 5, 2]) ``` <http://docs.python.org/library/collections.html#collections.Counter> (There are implementations for 2.4 and 2.5 if you're stuck with older versions, though.) And since `Counter` is subclassed from `dict`, you *can* get the values that are counted if you ever need them. `counts.viewitems()` (2.7) or `counts.items()` (3+) will give you an iterable mapping.
If you do want a numpy solution: ``` >>> a=np.array( [-1,2,3,-1,3,4,4,4,4,4]) >>> ua,uind=np.unique(a,return_inverse=True) #This returns the unique values and indices of those values. >>> ua array([-1, 2, 3, 4]) >>> uind array([0, 1, 2, 0, 2, 3, 3, 3, 3, 3]) >>> np.bincount(uind) array([2, 1, 2, 5]) ``` This has the additional benefit of showing what count goes with what number. A bit over twice as fast for small arrays to boot: ``` import numpy as np from collections import Counter a=np.random.randint(0,100,(500)) alist=a.tolist() In [27]: %timeit Counter(alist).viewvalues() 1000 loops, best of 3: 209 us per loop In [28]: %timeit ua,uind=np.unique(a,return_inverse=True);np.bincount(uind) 10000 loops, best of 3: 85.8 us per loop ```
exact histogram of an array
[ "", "python", "numpy", "" ]
This is a structure I'm getting from elsewhere, that is, a list of deeply nested dictionaries: ``` { "foo_code": 404, "foo_rbody": { "query": { "info": { "acme_no": "444444", "road_runner": "123" }, "error": "no_lunch", "message": "runner problem." } }, "acme_no": "444444", "road_runner": "123", "xyzzy_code": 200, "xyzzy_rbody": { "api": { "items": [ { "desc": "OK", "id": 198, "acme_no": "789", "road_runner": "123", "params": { "bicycle": "2wheel", "willie": "hungry", "height": "1", "coyote_id": "1511111" }, "activity": "TRAP", "state": "active", "status": 200, "type": "chase" } ] } } } { "foo_code": 200, "foo_rbody": { "query": { "result": { "acme_no": "260060730303258", "road_runner": "123", "abyss": "26843545600" } } }, "acme_no": "260060730303258", "road_runner": "123", "xyzzy_code": 200, "xyzzy_rbody": { "api": { "items": [ { "desc": "OK", "id": 198, "acme_no": "789", "road_runner": "123", "params": { "bicycle": "2wheel", "willie": "hungry", "height": "1", "coyote_id": "1511111" }, "activity": "TRAP", "state": "active", "status": 200, "type": "chase" } ] } } } ``` Asking for different structures is out of question (legacy apis etc). So I'm wondering if there's some clever way of extracting selected values from such a structure. The candidates I was thinking of: * flatten particular dictionaries, building composite keys, smth like: { "foo\_rbody.query.info.acme\_no": "444444", "foo\_rbody.query.info.road\_runner": "123", ... } Pro: getting every value with one access and if predictable key is not there, it means that the structure was not there (as you might have noticed, dictionaries may have different structures depending on whether it was successful operation, error happened, etc). Con: what to do with lists? * Use some recursive function that would do successive key lookups, say by "foo\_rbody", then by "query", "info", etc. Any better candidates?
You can try this rather trivial function to access nested properties: ``` import re def get_path(dct, path): for i, p in re.findall(r'(\d+)|(\w+)', path): dct = dct[p or int(i)] return dct ``` Usage: ``` value = get_path(data, "xyzzy_rbody.api.items[0].params.bicycle") ```
Maybe the function `byPath` in my answer to [this post](https://stackoverflow.com/questions/18069262/count-non-empty-end-leafs-of-a-python-dicitonary-array-data-structure-recursiv/18069577#18069577) might help you.
Extracting values from deeply nested JSON structures
[ "", "python", "json", "" ]
I am selecting from tables that have retail and corporate customers. I want my result set to return the names of both corporate and retail customers in one column. Currently I am returning them in two different columns like this: ``` select e.cust_id, e.cust_location, f.location max(case when e.preferredname is not null then e.preferredname end )RETAIL_CUST_NAME, max(case when e.preferredname is null then e.CORP_NANME end )CORPORATE_CUST_NAME from Mytable e, myothertable f where e.cust-id = f.cust_id group by e.cust_id, e.cust_location, f.location, e.preferredname, e.corp_name; ``` Is what I am trying to do possible, and how can I achieve this without having to return a different column for retail and another for corporate customers?
If only one of the two fields is ever populated, then returning whichever is populated as a single column is pretty simple: ``` select e.cust_id, e.cust_location, f.location, coalesce(e.preferredname, e.CORP_NANME) as CUST_NAME from Mytable e join myothertable f on e.cust_id = f.cust_id ``` `coalesce` returns the first non-null value it encounters. I'm not sure what the point of the aggregate in your query is, so I left that out. --- As a footnote, in Oracle, `nvl` performs very similarly to `coalesce`, with these important distinctions: 1. `nvl` only takes 2 parameters, whereas coalesce can take *n* parameters 2. when functions or equations are passed into them as parameters, `nvl` will evaluate all of its parameters, but `coalesce` will evaluate each in order, stopping when it reaches a non-null value (in other words, `coalesce` will use short-circuit evaluation, but `nvl` will not). This is mostly significant because you'll often see `nvl` used for a similar purpose.
Write the query as below and you can get both customer names in one column: ``` select e.cust_id, e.cust_location, f.location, max(case when e.preferredname is not null then e.preferredname when e.preferredname is null then e.CORP_NANME end) CUST_NAME ```
Different table fields in one result set Oracle
[ "", "sql", "oracle", "" ]
Let's say I have a **list of tuples** as follows: ``` l = [(4,1), (5,1), (3,2), (7,1), (6,0)] ``` I would like to iterate over the items where the 2nd element in the tuple is 1. I can do it using an if condition in the loop, but I was hoping there would be a more *pythonic* way of doing it? Thanks
You can use a list comprehension: ``` [ x for x in l if x[1] == 1 ] ``` You can iterate over tuples using generator syntax as well: ``` for tup in ( x for x in l if x[1] == 1 ): ... ```
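A runnable version of both the comprehension and generator approaches above, using the question's sample data:

```python
l = [(4, 1), (5, 1), (3, 2), (7, 1), (6, 0)]

# List comprehension: builds the whole filtered list at once.
ones = [x for x in l if x[1] == 1]
print(ones)  # [(4, 1), (5, 1), (7, 1)]

# Generator expression: filters lazily, yielding one tuple at a time,
# and tuple unpacking gives direct access to each element.
firsts = []
for first, second in (x for x in l if x[1] == 1):
    firsts.append(first)
print(firsts)  # [4, 5, 7]
```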
How about ``` ones = [(x, y) for x, y in l if y == 1] ``` or ``` ones = filter(lambda x: x[1] == 1, l) ```
python - iterating over a subset of a list of tuples
[ "", "python", "" ]
I have Python code which looks something like this: ``` os.system("...../abc.bat") open("test.txt") ..... ``` where abc.bat, upon a complete run, creates test.txt. Now the problem is that since there is no "wait" here, the code goes directly to open("test.txt") and naturally does not find it, hence the problem. Is there any way I can find out whether the bat run has finished so that the Python code can move on to open("test.txt")? Note: I have used an idea wherein the bat file has the command `type nul>run.ind` and this file is deleted by the bat file at the end as an indicator that the bat run has finished. Can I somehow make use of this in the Python code, like: ``` os.system("...../abc.bat") if file run.ind exists == false: open ("test.txt") ``` This seems to be somewhat crude. Are there any other methods to do this in a simpler way? Or, in other words, is it possible to introduce some "wait" till the bat run is finished?
Maybe you could try to put it in a while loop. ``` import os import time os.system("...../abc.bat") while True: try: o = open("test.txt", "r") break except: print "waiting for the bat to finish..." time.sleep(15) # modify the seconds to be long enough for your bat to finish continue ```
I'm unsure of the wait semantics of `os.system` on all platforms, but you could just use [subprocess.call()](http://docs.python.org/2/library/subprocess.html#subprocess.call) which the documentation for `os.system` is pointing to; > Run the command described by args. **Wait for command to complete**, then return the returncode attribute.
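The blocking behaviour of `subprocess.call` is easy to demonstrate without a real `.bat` file — below, a Python child process stands in for the batch script that creates the output file (the file name is made up for the demo):

```python
import os
import subprocess
import sys
import tempfile

out_path = os.path.join(tempfile.mkdtemp(), "test.txt")

# Stand-in for abc.bat: a child process that writes the file.
# subprocess.call waits for the child to exit before returning.
subprocess.call([sys.executable, "-c",
                 "import sys; f = open(sys.argv[1], 'w'); f.write('done'); f.close()",
                 out_path])

# Because call() blocked until the child finished, the file now exists.
print(open(out_path).read())  # done
```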
wait statement in python
[ "", "python", "batch-file", "" ]
I store a log as follows: ``` LOG ID | MODELID | EVENT 1 | 1 | Upped 2 | 1 | Downed 3 | 2 | Downed 4 | 1 | Upped 5 | 2 | Multiplexed 6 | 1 | Removed ``` Then I have the models as: ``` MODEL ID | NAME 1 | Model 1 2 | Model 2 ``` I want to end up with the LOG entry with the HIGHEST ID in LOG associated with a model as a result: ``` NAME | EVENT Model 1 | Removed Model 2 | Multiplexed ``` A simple join gives me all the results: ``` SELECT * FROM MODEL AS M LEFT JOIN LOG AS L ON L.MODELID = M.ID ``` But this gives me all the records. What am I missing?
Try this ``` SELECT M.NAME,L.EVENT FROM LOG L INNER JOIN MODEL M ON L.MODELID = M.ID WHERE L.ID IN ( SELECT MAX(ID) FROM LOG GROUP BY MODELID ) ```
Maybe you need a subselect. Let's start by breaking down the problem. First you want the HIGHEST ID for a given MODELID in the LOG table. ``` SELECT MODELID ,MAX(ID) FROM LOG GROUP BY MODELID ``` Now if we use this as a subselect (virtual table) then you can also get the model name. E.g. ``` SELECT M.NAME ,L.EVENT FROM MODEL M ,( SELECT MODELID AS MODELID ,MAX(ID) AS MAXID FROM LOG GROUP BY MODELID ) S ,LOG L WHERE M.ID = S.MODELID AND L.ID = S.MAXID ``` Give that a go (I haven't tested it myself).
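The `MAX(ID) ... GROUP BY` idea behind both answers can be sketched end-to-end with Python's stdlib sqlite3, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE log (id INTEGER, modelid INTEGER, event TEXT)")
conn.executemany("INSERT INTO model VALUES (?, ?)",
                 [(1, "Model 1"), (2, "Model 2")])
conn.executemany("INSERT INTO log VALUES (?, ?, ?)",
                 [(1, 1, "Upped"), (2, 1, "Downed"), (3, 2, "Downed"),
                  (4, 1, "Upped"), (5, 2, "Multiplexed"), (6, 1, "Removed")])

# For each model, keep only the log row with the highest id.
rows = conn.execute("""
    SELECT m.name, l.event
    FROM log l
    JOIN model m ON l.modelid = m.id
    WHERE l.id IN (SELECT MAX(id) FROM log GROUP BY modelid)
    ORDER BY m.name
""").fetchall()
print(rows)  # [('Model 1', 'Removed'), ('Model 2', 'Multiplexed')]
```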
Postgres: Getting the highest matching log associated with a record?
[ "", "sql", "postgresql", "" ]
Assuming you have "table" already in Hive, is there a quick way like other databases to be able to get the "CREATE" statement for that table?
[As of Hive 0.10](https://cwiki.apache.org/Hive/languagemanual-ddl.html#LanguageManualDDL-ShowCreateTable) this [patch-967](https://issues.apache.org/jira/browse/HIVE-967) implements `SHOW CREATE TABLE` which "shows the `CREATE TABLE` statement that creates a given table, or the `CREATE VIEW` statement that creates a given view." Usage: ``` SHOW CREATE TABLE myTable; ```
Steps to generate Create table DDLs for all the tables in the Hive database and export into text file to run later: ***step 1)*** create a `.sh` file with the below content, say `hive_table_ddl.sh` ``` #!/bin/bash rm -f tableNames.txt rm -f HiveTableDDL.txt hive -e "use $1; show tables;" > tableNames.txt wait cat tableNames.txt |while read LINE do hive -e "use $1;show create table $LINE;" >>HiveTableDDL.txt echo -e "\n" >> HiveTableDDL.txt done rm -f tableNames.txt echo "Table DDL generated" ``` ***step 2)*** Run the above shell script by passing 'db name' as paramanter ``` >bash hive_table_dd.sh <<databasename>> ``` output : All the create table statements of your DB will be written into the `HiveTableDDL.txt`
How to get/generate the create statement for an existing hive table?
[ "", "sql", "hive", "hiveql", "" ]
I'm looking for a way to view all of the variables during running so I can debug easier. I've tried the following but it doesn't work how I want it to: ``` import inspect a = False b = "" c = "test" d = {} e = [] f = ["Test", "Test"] g = ("One", "1", "Two", "2") h = inspect.currentframe() print h.f_locals ``` Ideally I want it to print it similar to below or just let me see what variable has what data ``` a False b "" c test d {} e [] f test, test g One, 1, Two, 2 ``` This way I can see the variables and their data easily... In VBA it's easy because you have a window with all variables. Thanks in advance - Hyflex
You can use `vars()`: ``` a = False b = "" c = "test" d = {} e = [] f = ["Test", "Test"] g = ("One", "1", "Two", "2") for k, v in vars().items(): if not (k.startswith('__') and k.endswith('__')): print k,'--',v ``` **Output:** ``` a -- False c -- test b -- e -- [] d -- {} g -- ('One', '1', 'Two', '2') f -- ['Test', 'Test'] ``` Help on `vars`: ``` >>> print vars.__doc__ vars([object]) -> dictionary Without arguments, equivalent to locals(). With an argument, equivalent to object.__dict__. ```
A couple of non-stdlib things that I use frequently: First, one of ipython's more useful magic functions: `%whos` ``` In [21]: a = 'hi' ...: bob = list() ...: In [22]: %whos Variable Type Data/Info ---------------------------- a str hi bob list n=0 ``` `%who` just lists the vars without giving info on the contents. Second, [q](https://pypi.python.org/pypi/q). You can do more powerful inline debugging and even open an interactive prompt at an arbitrary point in your code. ``` In [1]: def stuff(): ...: a = 'hi' ...: b = 'whatever' ...: c = [1,2,3] ...: import q; q.d() ...: return a,b ...: In [2]: stuff() Python console opened by q.d() in stuff >>> ``` [Here's](http://www.youtube.com/watch?feature=player_embedded&v=OL3De8BAhME&t=25m13s) an amusing video (lightning talk) of the author talking about q if anyone is interested.
Print / view all variables whilst a script is running in python
[ "", "python", "debugging", "python-2.7", "error-handling", "" ]
I'm following an api and I need to use a Base64 authentication of my User Id and password. 'User ID and Password need to both be concatenated and then Base64 encoded' it then shows the example ``` 'userid:password' ``` It then proceeds to say 'Provide the encoded value in an "Authorization Header"' 'for example: **Authorization: BASIC {Base64-encoded value}**' How do I write this into a python api request? ``` z = requests.post(url, data=zdata ) ``` Thanks
You can encode the data and make the request by doing the following: ``` import requests, base64 usrPass = "userid:password" b64Val = base64.b64encode(usrPass) r=requests.post(api_URL, headers={"Authorization": "Basic %s" % b64Val}, data=payload) ``` I'm not sure if you have to add the "BASIC" word in the Authorization field or not. If you provide the API link, it'd be clearer.
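The encoding itself is independent of which request library you use, and can be verified with the stdlib alone (the credentials below are placeholders from the question, not real ones):

```python
import base64

user_pass = "userid:password"
# b64encode works on bytes, so encode the string first (Python 3).
b64_val = base64.b64encode(user_pass.encode("ascii")).decode("ascii")

# This is the value that goes after "Basic " in the Authorization header.
header = {"Authorization": "Basic %s" % b64_val}
print(header["Authorization"])  # Basic dXNlcmlkOnBhc3N3b3Jk
```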
The requests library has [Basic Auth support](http://docs.python-requests.org/en/latest/user/authentication/) and will encode it for you automatically. You can test it out by running the following in a python repl ``` from requests.auth import HTTPBasicAuth r = requests.post(api_URL, auth=HTTPBasicAuth('user', 'pass'), data=payload) ``` You can confirm this encoding by typing the following. ``` r.request.headers['Authorization'] ``` outputs: ``` u'Basic c2RhZG1pbmlzdHJhdG9yOiFTRG0wMDY4' ```
Base64 Authentication Python
[ "", "python", "api", "authentication", "passwords", "request", "" ]
I'm using the following to try and insert a record into a postgresql database table, but it's not working. I don't get any errors, but there are no records in the table. Do I need a commit or something? I'm using the postgresql database that was installed with the Bitnami djangostack install. ``` import psycopg2 try: conn = psycopg2.connect("dbname='djangostack' user='bitnami' host='localhost' password='password'") except: print "Cannot connect to db" cur = conn.cursor() try: cur.execute("""insert into cnet values ('r', 's', 'e', 'c', 'w', 's', 'i', 'd', 't')""") except: print "Cannot insert" ```
If don't want to have to commit each entry to the database, you can add the following line: ``` conn.autocommit = True ``` So your resulting code would be: ``` import psycopg2 try: conn = psycopg2.connect("dbname='djangostack' user='bitnami' host='localhost' password='password'") conn.autocommit = True except: print "Cannot connect to db" cur = conn.cursor() try: cur.execute("""insert into cnet values ('r', 's', 'e', 'c', 'w', 's', 'i', 'd', 't')""") except: print "Cannot insert" ```
Turns out I needed `conn.commit()` at the end
Python psycopg2 not inserting into postgresql table
[ "", "python", "postgresql", "insert", "psycopg2", "" ]
I have the following query on multiple tables: ``` SELECT DISTINCT b.BoxBarcode as [Box Barcode], (select case when b.ImagesCount IS NULL then 0 else b.ImagesCount end) as [Total Images], s.StageName as [Current Stage] ,d.DocuementTypeName as [Document Type], u.UserName as [Start User],uu.UserName as [Finished User] FROM [dbo].[Operations] o inner join dbo.LKUP_Stages s on o.stageid=s.id inner join dbo.LKUP_Users u on u.id=o.startuserid left join dbo.LKUP_Users uu on uu.id=o.FinishedUserID inner join boxes b on b.id=o.boxid inner join LKUP_DocumentTypes d on d.ID = b.DocTypeID where b.IsExportFinished = 0 ``` When I select a count from the Boxes table where `IsExportFinished = 0`, I get 42 records; when I run the above query, I get 71 records. I want just the 42 records in the Boxes table to be retrieved.
You are doing a one-to-many join, i.e. at least one of the tables has multiple rows that match the join criteria. Step one is to find which table(s) give the "duplicates". Once you have done that you may be able to fix the problem by adding additional criteria to the join. I'm taking a guess that the same `boxid` occurs several times in the `Operations` table. If that is the case you need to decide which `Operations` row you want to select and then update the SQL accordingly.
Try this one - ``` SELECT Box_Barcode = b.BoxBarcode , Total_Images = ISNULL(b.ImagesCount, 0) , Current_Stage = s.StageName , Document_Type = d.DocuementTypeName , Start_User = u.UserName , Finished_User = uu.UserName FROM ( SELECT DISTINCT o.stageid , o.boxid , o.startuserid , o.FinishedUserID FROM dbo.[Operations] ) o JOIN dbo.LKUP_Stages s ON o.stageid = s.id JOIN dbo.boxes b ON b.id = o.boxid JOIN dbo.LKUP_DocumentTypes d ON d.id = b.DocTypeID JOIN dbo.LKUP_Users u ON u.id = o.startuserid LEFT JOIN dbo.LKUP_Users uu ON uu.id = o.FinishedUserID WHERE b.IsExportFinished = 0 ```
Get DISTINCT record using INNER JOIN
[ "", "sql", "sql-server", "database", "sql-server-2008", "t-sql", "" ]
I have two tables MANUAL\_TRANSACTIONS and MANUAL\_LIST\_TEMP. What I wanted to achieve is to update the MANUAL\_TRANSACTIONS with the information from MANUAL\_LIST\_TEMP. Here only records which is present in the MANUAL\_LIST\_TEMP table should be updated to MANUAL\_TRANSACTIONS. I have done something like below but the problem with following statement is it updates every records from MANUAL\_TRANSACTIONS table. ``` UPDATE MANUAL_TRANSACTIONS SET ( "Age", "Assigned_To", "Attachments", "Comments", "Completed_Date_Time" , "Content_Type", "Created", "Created_By","Cycle_Time (Crt to Complete)" , "Cycle_Time (First reply)", "DISTRIBUTION_CHANNEL")= (SELECT MANUAL_LIST_TEMP."Age", MANUAL_LIST_TEMP."Assigned_To", MANUAL_LIST_TEMP."Attachments", MANUAL_LIST_TEMP."Comments", MANUAL_LIST_TEMP."Completed_Date_Time", MANUAL_LIST_TEMP."Content_Type", MANUAL_LIST_TEMP."Created", MANUAL_LIST_TEMP."Created_By", MANUAL_LIST_TEMP."Cycle_Time (Crt to Complete)", MANUAL_LIST_TEMP."Cycle_Time (First reply)", MANUAL_LIST_TEMP."Distribution_Channel" FROM MANUAL_LIST_TEMP WHERE MANUAL_LIST_TEMP.ID = MANUAL_TRANSACTIONS.ID) ```
You should also add the WHERE clause to your UPDATE statement: ``` UPDATE MANUAL_TRANSACTIONS SET ( "Age", "Assigned_To", "Attachments", "Comments", "Completed_Date_Time" , "Content_Type", "Created", "Created_By","Cycle_Time (Crt to Complete)" , "Cycle_Time (First reply)", "DISTRIBUTION_CHANNEL")= (SELECT MANUAL_LIST_TEMP."Age", MANUAL_LIST_TEMP."Assigned_To", MANUAL_LIST_TEMP."Attachments", MANUAL_LIST_TEMP."Comments", MANUAL_LIST_TEMP."Completed_Date_Time", MANUAL_LIST_TEMP."Content_Type", MANUAL_LIST_TEMP."Created", MANUAL_LIST_TEMP."Created_By", MANUAL_LIST_TEMP."Cycle_Time (Crt to Complete)", MANUAL_LIST_TEMP."Cycle_Time (First reply)", MANUAL_LIST_TEMP."Distribution_Channel" FROM MANUAL_LIST_TEMP WHERE MANUAL_LIST_TEMP.ID = MANUAL_TRANSACTIONS.ID) WHERE MANUAL_TRANSACTIONS.ID IN (SELECT MANUAL_LIST_TEMP.ID FROM MANUAL_LIST_TEMP); ``` For best results, convert your UPDATE into a MERGE statement: ``` MERGE INTO MANUAL_TRANSACTIONS tgt USING ( SELECT MANUAL_LIST_TEMP.ID, MANUAL_LIST_TEMP."Age", MANUAL_LIST_TEMP."Assigned_To", MANUAL_LIST_TEMP."Attachments", MANUAL_LIST_TEMP."Comments", MANUAL_LIST_TEMP."Completed_Date_Time", MANUAL_LIST_TEMP."Content_Type", MANUAL_LIST_TEMP."Created", MANUAL_LIST_TEMP."Created_By", MANUAL_LIST_TEMP."Cycle_Time (Crt to Complete)", MANUAL_LIST_TEMP."Cycle_Time (First reply)", MANUAL_LIST_TEMP."Distribution_Channel" FROM MANUAL_LIST_TEMP ) src ON (tgt.ID = src.ID) WHEN MATCHED THEN UPDATE SET tgt."Age" = src."Age" , tgt."Assigned_To" = src."Assigned_To" [...] ``` The MERGE statement will only update rows in `MANUAL_TRANSACTIONS` (the *target* table) which have matching rows in `MANUAL_LIST_TEMP` (the *source* table).
You need one more where clause for UPDATE
Update query with INNER Join
[ "", "sql", "oracle", "" ]
I'm trying to parse out content from specific meta tags. Here's the structure of the meta tags. The first two are closed with a backslash, but the rest don't have any closing tags. As soon as I get the 3rd meta tag, the entire contents between the `<head>` tags are returned. I've also tried `soup.findAll(text=re.compile('keyword'))` but that does not return anything since keyword is an attribute of the meta tag. ``` <meta name="csrf-param" content="authenticity_token"/> <meta name="csrf-token" content="OrpXIt/y9zdAFHWzJXY2EccDi1zNSucxcCOu8+6Mc9c="/> <meta content='text/html; charset=UTF-8' http-equiv='Content-Type'> <meta content='en_US' http-equiv='Content-Language'> <meta content='c2y_K2CiLmGeet7GUQc9e3RVGp_gCOxUC4IdJg_RBVo' name='google-site- verification'> <meta content='initial-scale=1.0,maximum-scale=1.0,width=device-width' name='viewport'> <meta content='notranslate' name='google'> <meta content="Learn about Uber's product, founders, investors and team. Everyone's Private Driver - Request a car from any mobile phone—text message, iPhone and Android apps. Within minutes, a professional driver in a sleek black car will arrive curbside. Automatically charged to your credit card on file, tip included." name='description'> ``` Here's the code: ``` import csv import re import sys from bs4 import BeautifulSoup from urllib.request import Request, urlopen req3 = Request("https://angel.co/uber", headers={'User-Agent': 'Mozilla/5.0') page3 = urlopen(req3).read() soup3 = BeautifulSoup(page3) ## This returns the entire web page since the META tags are not closed desc = soup3.findAll(attrs={"name":"description"}) ```
Edited: Added regex for case sensitivity as suggested by @Albert Chen. Python 3 Edit: ``` from bs4 import BeautifulSoup import re import urllib.request page3 = urllib.request.urlopen("https://angel.co/uber").read() soup3 = BeautifulSoup(page3) desc = soup3.findAll(attrs={"name": re.compile(r"description", re.I)}) print(desc[0]['content']) ``` Although I'm not sure it will work for every page: ``` from bs4 import BeautifulSoup import re import urllib page3 = urllib.urlopen("https://angel.co/uber").read() soup3 = BeautifulSoup(page3) desc = soup3.findAll(attrs={"name": re.compile(r"description", re.I)}) print(desc[0]['content'].encode('utf-8')) ``` Yields: ``` Learn about Uber's product, founders, investors and team. Everyone's Private Dri ver - Request a car from any mobile phoneΓÇötext message, iPhone and Android app s. Within minutes, a professional driver in a sleek black car will arrive curbsi de. Automatically charged to your credit card on file, tip included. ```
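If BeautifulSoup is not available, the same case-insensitive lookup can be sketched with only the standard library's `html.parser`. The snippet below parses a hard-coded fragment rather than fetching the live page, and the class name is made up for the demo:

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect the content of <meta> tags whose name matches, case-insensitively."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = wanted.lower()
        self.contents = []

    def handle_starttag(self, tag, attrs):
        # Unclosed <meta ...> tags still trigger handle_starttag,
        # so the parser copes with the page's mix of styles.
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == self.wanted:
                self.contents.append(d.get("content", ""))

doc = """<head>
<meta content='notranslate' name='google'>
<meta content="Learn about the product." name='Description'>
</head>"""

parser = MetaCollector("description")
parser.feed(doc)
print(parser.contents)  # ['Learn about the product.']
```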
'Description' is case-sensitive, so we need to look for both 'Description' and 'description'. Case 1: 'Description' in [Flipkart.com](http://view-source:http://www.flipkart.com/) Case 2: 'description' in [Snapdeal.com](http://view-source:http://www.snapdeal.com/) ``` from bs4 import BeautifulSoup import requests url= 'https://www.flipkart.com' page3= requests.get(url) soup3= BeautifulSoup(page3.text) desc= soup3.find(attrs={'name':'Description'}) if desc is None: desc= soup3.find(attrs={'name':'description'}) try: print desc['content'] except Exception as e: print '%s (%s)' % (e.message, type(e)) ```
Extracting contents from specific meta tags that are not closed using BeautifulSoup
[ "", "python", "beautifulsoup", "" ]
In python, how do I concatenate 3 lists using a list comprehension? Have: ``` list1 = [1,2,3,4] list2 = [5,6,7,8] list3 = [9,10,11,12] ``` Want: ``` allList = [1,2,3,4,5,6,7,8,9,10,11,12] ``` I tried using a list comprehension, but I'm not very good at them yet. These are what I have tried: ``` allList = [n for n in list1 for n in list2 for n in list3 ] ``` this was a bad idea, obviously, and yielded len(list1)*len(list2)*len(list3) values. Oops. So I tried this: ``` allList = [n for n in list1, list2, list3] ``` but that gave me allList = [list1, list2, list3] (a list of 3 lists). I know you can concatenate using the + operator (as in x = list1 + list2 + list3) but how do you do this using a simple list comprehension? There is a similar question here: [Concatenate 3 lists of words](https://stackoverflow.com/questions/13906032/concatenate-3-lists-of-words), but that's for C#.
A better solution is to use [`itertools.chain`](http://docs.python.org/3.3/library/itertools.html#itertools.chain) instead of addition. That way, instead of creating the intermediate list `list1 + list2`, and then another intermediate list `list1 + list2 + list3`, you just create the final list with no intermediates: ``` allList = [x for x in itertools.chain(list1, list2, list3)] ``` However, an empty list comprehension like this is pretty silly; just use the `list` function to turn any arbitrary iterable into a list: ``` allList = list(itertools.chain(list1, list2, list3)) ``` Or, even better… if the only reason you need this is to loop over it, just leave it as an iterator: ``` for thing in itertools.chain(list1, list2, list3): do_stuff(thing) ``` --- While we're at it, the ["similar question" you linked to](https://stackoverflow.com/questions/13906032/concatenate-3-lists-of-words) is actually a very different, and more complicated, question. But, because `itertools` is so cool, it's still a one-liner in Python: ``` itertools.product(list1, list2, list3) ``` Or, if you want to print it out in the format specified by that question: ``` print('\n'.join(map(' '.join, itertools.product(list1, list2, list3)))) ```
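The `itertools.chain` approach above can be run as-is with the question's data:

```python
import itertools

list1 = [1, 2, 3, 4]
list2 = [5, 6, 7, 8]
list3 = [9, 10, 11, 12]

# chain iterates each input in turn without building intermediate lists,
# and list() materializes the final result in one pass.
all_list = list(itertools.chain(list1, list2, list3))
print(all_list)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

# Plain + concatenation gives the same result, via two intermediate lists.
print(all_list == list1 + list2 + list3)  # True
```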
Here are some options: ``` >>> sum([list1, list2, list3], []) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] >>> list1 + list2 + list3 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] ``` With comprehension: (it's really not necessary) ``` >>> [x for x in sum([list1, list2, list3], [])] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] ```
how do I concatenate 3 lists using a list comprehension?
[ "", "python", "list", "list-comprehension", "" ]
I have a web application which acts as an interface to an offsite server which runs a very long task. The user enters information and hits submit, and then Chrome waits for the response and loads a new webpage when it receives it. However, depending on the network and the user's input, the task can take a pretty long time, and occasionally Chrome loads a "no data received" page before the data is returned (though the task is still running). Is there a way to either put up a temporary page while my task is working, or simply force Chrome to continue waiting? Thanks in advance
While you could change your timeout on the server or other tricks to try to keep the page "alive", keep in mind that there might be other parts of the connection that you have no control over that could timeout the request (such as the timeout value of the browser, or any proxy between the browser and server, etc). Also, you might need to constantly up your timeout value if the task takes longer to complete (becomes more advanced, or just slower because more people use it). In the end, this sort of problem is typically solved by a change in your architecture. ## Use a Separate Process for Long-Running Tasks Rather than submitting the request and running the task in the handling view, the view starts the running of the task in a separate process, then immediately returns a response. This response can bring the user to a "Please wait, we're processing" page. That page can use one of the many push technologies out there to determine when the task was completed (long-polling, web-sockets, server-sent events, an AJAX request every N seconds, or the dead-simplest: have the page reload every 5 seconds). ## Have your Web Request "Kick Off" the Separate Process Anyway, as I said, the view handling the request doesn't do the long action: it just kicks off a background process to do the task for it. You can create this background process dispatch yourself (check out [this Flask snippet](http://flask.readthedocs.org/en/latest/patterns/celery/) for possible ideas), or use a library like [Celery](http://www.celeryproject.org/) or ([RQ](http://python-rq.org/)). Once the task is complete, you need some way of notifying the user. This will be dependent on what sort of notification method you picked above. For a simple "ajax request every N seconds", you need to create a view that handles the AJAX request that checks if the task is complete. A typical way to do this is to have the long-running task, as a last step, make some update to a database. 
The requests for checking the status can then check this part of the database for updates. ## Advantages and Disadvantages Using this method (rather than trying to fit the long-running task into a request) has a few benefits: 1.) Handling long-running web requests is a tricky business due to the fact that there are multiple points that could time out (besides the browser and server). With this method, all your web requests are very short and much less likely to timeout. 2.) Flask (and other frameworks like it) is designed to only support a certain number of threads that can respond to web queries. Assume it has 8 threads: if four of them are handling the long requests, that only leaves four requests to actually handle more typical requests (like a user getting their profile page). Half of your web server could be tied up doing something that is not serving web content! At worse, you could have all eight threads running a long process, meaning your site is completely unable to respond to web requests until one of them finishes. The main drawback: there is a little more set up work in getting a task queue up and running, and it does make your entire system slightly more complex. However, I would highly recommend this strategy for long-running tasks that run on the web.
Let's assume: 1. This is not a server issue, so we don't have to go fiddle with Apache, nginx, etc. timeout settings. 2. The delay is minutes, not hours or days, just to make the scenario manageable. 3. You control the web page on which the user hits submit, and from which user interaction is managed. If those obtain, I'd suggest not using a standard HTML form submission, but rather have the submit button kick off a JavaScript function to oversee processing. It would put up a "please be patient...this could take a little while" style message, then use jQuery.ajax, say, to call the long-time-taking server with a long timeout value. jQuery timeouts are measured in milliseconds, so 60000 = 60 seconds. If it's longer than that, increase your specified timeout accordingly. I have seen reports that not all clients will allow super-extra-long timeouts (e.g. Safari on iOS apparently has a 60-second limitation). But in general, this will give you a platform from which to manage the interactions (with your user, with the slow server) rather than being at the mercy of simple web form submission. There are a few edge cases here to consider. The web server timeouts may indeed need to be adjusted upward (Apache defaults to 300 seconds aka 5 minutes, and nginx less, IIRC). Your client timeouts (on iOS, say) may have maximums too low for the delays you're seeing. Etc. Those cases would require either adjusting at the server, or adopting a different interaction strategy. But an AJAX-managed interaction is where I would start.
Time out issues with chrome and flask
[ "", "python", "google-chrome", "web-applications", "flask", "" ]
Currently I am trying to read in a CSV file using the csv module in Python. When I run the piece of code below I get an error that states the file does not exist. My first guess is that maybe I have the file saved in the wrong place, or I need to provide Python with a file path. Currently I have the file saved in C:\Documents and Settings\eag29278\My Documents\python test code\test_satdata.csv. One side note: I'm not sure whether setting the mode to 'rb' (read binary) was the right move. ``` import csv with open('test_satdata.csv', 'rb') as csvfile: satreader = csv.reader(csvfile, delimiter=' ', lineterminator=" ") for row in satreader: print ', '.join(row) ``` Here is the error: ``` Traceback (most recent call last): File "C:/Python27/test code/test csv parse.py", line 2, in <module> with open('test_satdata.csv', 'rb') as csvfile: IOError: [Errno 2] No such file or directory: 'test_satdata.csv' ```
Your code is using a relative path; python is looking in the current directory (whatever that may be) to load your file. What the current directory *is* depends on how you started your Python script and if you executed any code that may have changed the current working directory. Use a full absolute path instead: ``` path = r'C:\Documents and Settings\eag29278\My Documents\python test code\test_satdata.csv' with open(path, 'rb') as csvfile: ``` Using `'rb'` is entirely correct, the [`csv` module](http://docs.python.org/2/library/csv.html#csv.reader) recommends you do so: > If *csvfile* is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference. Windows *is* such a platform.
You can hit this error when you're running a Python script from a Directory where the file is not contained. Sounds simple to fix, put the CSV file in the same folder as the .PY file. However when you're running under an IDE like VSCode the command output might `cd` to another directory when it executes your python file. ``` PS C:\git\awesome> cd 'c:\git\awesome'; ${env:PYTHONIOENCODING}='UTF-8'; ${env:PYTHONUNBUFFERED}='1'; & 'C:\Program Files (x86)\Python37-32\python.exe' 'c:\Users\jeremy\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\ptvsd_launcher.py' '--default' '--client' '--host' 'localhost' '--port' '1089' 'c:\git\Project\ReadCSV.py' ``` See how I'm in my awesome repo and the first command is: cd 'c:\git\awesome'; Then it executes the python file: 'c:\git\Project\ReadCSV.py' So its expecting the CSV file in 'c:\git\awesome'. To fix it, either use the full file names or CD to the directory containing the CSV file you wish to read.
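The absolute-path fix can be sketched with the stdlib alone — the script below writes a small CSV to a temporary directory and then opens it via its full path, so it works no matter what the current working directory is. (The question's code is Python 2, where the file is opened in `'rb'`; in Python 3 the csv module wants text mode with `newline=""`, as used here. The file name and contents are invented for the demo.)

```python
import csv
import os
import tempfile

# Build an absolute path that does not depend on the current directory.
path = os.path.join(tempfile.mkdtemp(), "test_satdata.csv")

with open(path, "w", newline="") as f:
    csv.writer(f).writerows([["sat", "1"], ["sat", "2"]])

# Opening by the absolute path succeeds from any working directory.
with open(path, newline="") as f:
    rows = list(csv.reader(f))
print(rows)  # [['sat', '1'], ['sat', '2']]
```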
error "no such file or directory" when reading in csv file in python
[ "", "python", "csv", "filepath", "" ]
As the title suggests... I'm trying to figure out the fastest way with the least overhead to determine if a record exists in a table or not. Sample query: ``` SELECT COUNT(*) FROM products WHERE products.id = ?; vs SELECT COUNT(products.id) FROM products WHERE products.id = ?; vs SELECT products.id FROM products WHERE products.id = ?; ``` Say the `?` is swapped with `'TB100'`... both the first and second queries will return the exact same result (say... `1` for this conversation). The last query will return `'TB100'` as expected, or nothing if the `id` is not present in the table. The purpose is to figure out if the `id` is in the table or not. If not, the program will next insert the record, if it is, the program will skip it or perform an UPDATE query based on other program logic outside the scope of this question. Which is faster and has less overhead? (This will be repeated tens of thousands of times per program run, and will be run many times a day). (Running this query against M$ SQL Server from Java via the M$ provided JDBC driver)
`SELECT TOP 1 products.id FROM products WHERE products.id = ?;` will outperform all of your suggestions as it will terminate execution after it finds the first record.
`EXISTS` (or `NOT EXISTS`) is specially designed for checking if something exists and therefore should be (and is) the best option. It will halt on the first row that matches so it does not require a `TOP` clause and it does not actually select any data so there is no overhead in size of columns. You can safely use `SELECT *` here - no different than `SELECT 1`, `SELECT NULL` or `SELECT AnyColumn`... [(you can even use an invalid expression like `SELECT 1/0` and it will not break)](http://www.sqlfiddle.com/#!6/d41d8/6290). ``` IF EXISTS (SELECT * FROM Products WHERE id = ?) BEGIN --do what you need if exists END ELSE BEGIN --do what needs to be done if not END ```
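Both answers are written for SQL Server, but the early-termination idea behind `EXISTS` can be sketched in stdlib sqlite3 (table name and data follow the question's example; the helper function is made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO products VALUES (?)", [("TB100",), ("TB200",)])

def product_exists(conn, product_id):
    # EXISTS stops at the first matching row; SELECT 1 avoids
    # moving any column data at all.
    row = conn.execute(
        "SELECT EXISTS (SELECT 1 FROM products WHERE id = ?)", (product_id,)
    ).fetchone()
    return bool(row[0])

print(product_exists(conn, "TB100"))  # True
print(product_exists(conn, "TB999"))  # False
```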
Fastest way to determine if record exists
[ "", "sql", "sql-server", "performance", "select", "count", "" ]
I'm working scientifically with images from a microscope, where each of the 3 RGB channels is encoded as uint16 (0-65535). Currently I use OpenCV2 and NumPy to work with the images, and using the flag "cv2.IMREAD_UNCHANGED" everything works fine with the reading, and I can do some work on the image and return it to uint16: ``` img = cv2.imread('dummy.tif',cv2.IMREAD_UNCHANGED ) #do some work here img = img.astype(numpy.uint16) cv2.imwrite('processed.tif',img ) ``` However, so far I can't seem to find any way to save the processed images as 16bit RGB again. Using the cv2.imwrite command just converts the image to uint8, thus turning everything white (that is, everything is truncated to 255, the max for uint8 channels). Any idea how to proceed?
OpenCV does support writing 16 Bit TIFF images. Make sure you are using a current version (>= 2.2). The truncation probably happens to img in your code before saving with OpenCV.
Maybe it helps if numpy.uint16 is replaced by cv2.CV_16U. In some examples the parameter is passed in as a string, e.g. 'uint16'. Sry, reputation too low for a comment.
Python: How to save image with 16 bit channels (e.g. 48 RGB)?
[ "", "python", "opencv", "" ]
I saw this complex select statement that has several components: aliases and subqueries ``` SELECT u.ID, u.username, u.active, u.email, u.admin, u.banned, u.name, (SELECT GROUP_CONCAT( g.name SEPARATOR'-groupsep-' ) FROM groups g, group_memberships gm WHERE g.ID = gm.group AND gm.user = u.ID ) AS groupmemberships FROM users u WHERE u.ID={$uid} ``` The part that doesn't make sense to me is the part `AS groupmemberships`. What is it aliasing? The element that follows `u.name`? I am unfamiliar with subquery syntax. Any useful explanation and a link to official documentation would help.
This query is using a correlated scalar subquery in the `SELECT` list (the column list) of the outer query, not in the `FROM` clause. For each row of `users`, the subquery returns a single value, which appears as one more column in the result. A column computed from an expression has no natural name, so `AS groupmemberships` names that computed column. Concretely, the name `groupmemberships` refers to the value returned by ``` SELECT GROUP_CONCAT( g.name SEPARATOR'-groupsep-' ) FROM groups g, group_memberships gm WHERE g.ID = gm.group AND gm.user = u.ID ``` A more detailed explanation can be found in [the MySQL documentation about scalar subqueries](https://dev.mysql.com/doc/refman/5.6/en/scalar-subqueries.html).
From the [Mysql manual](http://dev.mysql.com/doc/refman/5.0/en/from-clause-subqueries.html) > Subqueries are legal in a SELECT statement's FROM clause. The actual syntax is: > > SELECT ... FROM (subquery) [AS] name ... > > The [AS] name clause is mandatory, because every table in a FROM clause must have a name. So basically I can give my resulting table a name and use it in the outer query as a table itself. Hope that helps :)
Complex SQL query with an alias
[ "", "mysql", "sql", "alias", "" ]
I have two tables in two schemas - schema1 and schema2. Both the tables have the same design, except that Schema2 has a clustered index on the primary key of the tables. Schema1 tables don't have a primary key (that's how the old design was, and I have to revamp it with a new schema design, which is schema2). In schema 2, COL\_1 is the primary key for table1 and (COL\_4, COL\_12 ) are keys for table2 which are indexed. Table1 (col\_1, col\_2, col\_3, col\_4 ... col\_10) Table2(col\_1,col\_4,col\_12, .... col\_20) I have a query which retrieves data from table1, and is as follows ``` SELECT t1.COL_1,t1.COL_2, t1.COL_3, t1.COL_4,t1.COL_5 FROM table1 t1 LEFT JOIN table2 t2 ON t2.COL_1 = t1.COL_1 AND t2.COL_4 = t1.COL_4 WHERE t1.col_10 = '/some string/' ``` When I run this query in both the schemas, I get the same number of rows retrieved. But the order of the rows is not the same and I don't know how to compare the data in both. > My questions. > > 1. Can I assume that the results in the two schemas match, just because the row counts match? > 2. Do the results differ since there is an index in the tables in schema2? I would like to have an understanding of the above behaviour. Thanks in advance.
Table 1/Schema 1 is a heap table: when you insert a record, it is added to the end of the table. When you query that table the records are returned in the same order they were inserted (but don't count on it). Table 2/Schema 2 is a clustered index table, i.e. when you insert a record into that table it is inserted in between existing records if needed (or appended if the new record's primary key is greater than all existing ones). When you query that table the records are returned sorted by the primary key (but again, don't count on it). If you wish to compare these two tables and be certain they are exactly the same, you can do this (be prepared, it will take a while if it's a huge table). ``` -- show all records in table1 that do not exist in table2 select * from table1 except select * from table2 ``` and the other way around ``` -- show all records in table2 that do not exist in table1 select * from table2 except select * from table1 ``` If no records are returned from these two queries, the tables are the same. If you have "updated/created" columns or an identity column that is allowed to differ, then list only the columns you wish to compare in all the queries.
> But the order of the rows are not the same and I don't know how to compare the data in both Of course. You added a clustered index - this means that the indexed table is stored sorted according to the index. But without an `ORDER BY` clause, there is no defined order. > I don't know how to compare the data in both Use the [`ORDER BY`](http://www.techonthenet.com/sql/order_by.php) clause to order the data as you wish. This will allow for comparisons. If you add an `ORDER BY` on the join columns such as `COL_1`, the query you have posted should return corresponding rows in the same order in both schemas, so I am not sure why this is an issue.
How does index affect the result of two tables - one with index and another without index?
[ "", "sql", "sql-server-2008", "select", "clustered-index", "" ]
The date column that I have is in `varchar2` and I want to convert those values in `YYYY-MM-DD` ``` DATE 7/26/2013 7/29/2013 8/1/2013 8/4/2013 7/28/2013 7/31/2013 8/3/2013 7/30/2013 8/5/2013 7/25/2013 8/2/2013 8/6/2013 7/27/2013 ```
## You should ***never*** store dates in a `VARCHAR` column So in order to display it differently now, you need to first convert the string to a date and then back to a string If you are certain that all "dates" do have the right (and same) format, then the following should work: ``` select to_char(to_date(date, 'mm/dd/yyyy'), 'yyyy-mm-dd') from the_table; ``` But I wouldn't be surprised if that gives you an error because one or more rows have a date which is formatted differently. Something which could not have happened had you defined the column to be of type `DATE` right from the beginning. You should really, really consider changing that column to be a real `DATE` column. Btw: `DATE` is a horrible name for such a column. It is a reserved word and it simply doesn't tell you anything about what is actually stored in that column (a "start date", an "end date", a "birth date", a "due date", ...)
You have to convert your string to `date` and then convert it to `char` with the expected format ``` select TO_CHAR(TO_DATE(datecol, 'MM/DD/YYYY'),'YYYY-MM-DD') from your_table ``` [**Sql Fiddle Demo**](http://sqlfiddle.com/#!4/d41d8/15531/0)
Convert mm/dd/yyyy to yyyy-mm-dd in Oracle
[ "", "sql", "oracle", "datetime", "" ]
I have a table called '**system**' which has 2 columns, '**variable**' and '**value**', with the following data: * variable = 'username' => value = 'myuser' * variable = 'password' => value = 'mypass' *I want to update these 2 fields in 1 query.* **The 2 queries are:** ``` UPDATE System SET Value = 'myuser' WHERE Variable = 'Username' UPDATE System SET Value = 'mypass' WHERE Variable = 'Password' ```
``` UPDATE System SET Value = (CASE WHEN Variable = 'Username' THEN 'myuser' WHEN Variable = 'Password' THEN 'mypass' END) WHERE Variable = 'Username' or Variable = 'Password' ```
Yes, using the case statement: ``` UPDATE System SET Value = (case when Variable = 'UserName' then 'myuser' when Variable = 'Password' then 'mypass' else Value end) WHERE Variable in ('Username', 'Password'); ```
is it possible to update 2 fields in the same column in one query (mysql)
[ "", "mysql", "sql", "sql-update", "" ]
Are there any methods in Python + Selenium to find parent elements, sibling elements, or child elements, just like `driver.find_element_parent`, `driver.find_element_next`, or `driver.find_element_previous`? E.g.: ``` <tr> <td> <select> <option value=0, selected='selected'> </option> <option value=1, > </option> <option value=2,> </option> </select> </td> <td> 'abcd' <input name='A'> </input> <td> <tr> ``` I've tried the following, but it fails: ``` input_el=driver.find_element_by_name('A') td_p_input=find_element_by_xpath('ancestor::input') ``` How can **I get the parent of the input element** and then, finally, **get the selected option**?
You can find a parent element by using `..` xpath: ``` input_el = driver.find_element_by_name('A') td_p_input = input_el.find_element_by_xpath('..') ``` --- What about making a separate xpath for getting selected option, like this: ``` selected_option = driver.find_element_by_xpath('//option[@selected="selected"]') ```
From your example, I figure you only want the selected option within a table-row if and only if this row also has an input element with the name "A", no matter where in the html-tree this element resides below the row-element. You can achieve this via the xpath ancestor-axis. For the sake of better readability I will show how to do this step by step (but you can actually put everything in only one xpath-expression): ``` # first find your "A" named element namedInput = driver.find_element_by_name("A"); # from there find all ancestors (parents, grandparents,...) that are a table row 'tr' rowElement = namedInput.find_element_by_xpath(".//ancestor::tr"); # from there find the first "selected" tagged option selectedOption = rowElement.find_element_by_xpath(".//option[@selected='selected']"); ```
How to find parent elements by python webdriver?
[ "", "python", "selenium", "xpath", "selenium-webdriver", "parent-child", "" ]
I have written two algorithms and I want to check which one of them is more 'efficient' and uses less memory. The first one creates a numpy array and modifies the array. The second one creates an empty Python list and appends values to it. Which one is better? First program: ``` f = open('/Users/marcortiz/Documents/vLex/pylearn2/mlearning/classify/files/models/model_training.txt') lines = f.readlines() f.close() zeros = np.zeros((60343,4917)) for l in lines: row = l.split(",") for element in row: zeros[lines.index(l), row.index(element)] = element X = zeros[1,:] Y = zeros[:,0] one_hot = np.ones((counter, 2)) ``` The second one: ``` f = open('/Users/marcortiz/Documents/vLex/pylearn2/mlearning/classify/files/models/model_training.txt') lines = f.readlines() f.close() X = [] Y = [] for l in lines: row = l.split(",") X.append([float(elem) for elem in row[1:]]) Y.append(float(row[0])) X = np.array(X) Y = np.array(Y) one_hot = np.ones((counter, 2)) ``` My theory is that the first one is slower but uses less memory and is more 'stable' while working with large files. The second one is faster but uses a lot of memory, and it's not so stable while working with large files (543 MB, 70,000 lines). Thanks!
Well finally I made some changes thanks to the answers. My two programs: ``` f = open('/Users/marcortiz/Documents/vLex/pylearn2/mlearning/classify/files/models/model_training.txt') zeros = np.zeros((60343,4917)) counter = 0 start = timeit.default_timer() for l in f: row = l.split(",") counter2 = 0 for element in row: zeros[counter, counter2] = element counter2 += 1 counter = counter + 1 stop = timeit.default_timer() print stop - start f.close() ``` Time of the first program--> 122.243036032 seconds Second program: ``` f = open('/Users/marcortiz/Documents/vLex/pylearn2/mlearning/classify/files/models/model_training.txt') zeros = np.zeros((60343,4917)) counter = 0 start = timeit.default_timer() for l in f: row = l.split(",") counter2 = 0 zeros[counter, :] = [i for i in row] counter = counter + 1 stop = timeit.default_timer() print stop - start f.close() ``` Time of the second program: 102.208696127 seconds! Thanks.
The problem with both codes is that you're loading the whole file in memory first using `file.readlines()`, you should iterate over the file object directly to get one line at a time. ``` from itertools import izip #generator function def func(): with open('filename.txt') as f: for line in f: row = map(float, l.split(",")) yield row[1:], row[0] X, Y = izip(*func()) X = np.array(X) Y = np.array(Y) ... ``` I am sure a pure numpy solution is going to be faster than this.
Python: push item vs creating empty list (efficiency)
[ "", "python", "arrays", "performance", "memory", "stability", "" ]
I've found a great python library implementing Levenshtein functions (distance, ratio, etc.) at <http://code.google.com/p/pylevenshtein/> but the project seems inactive and the documentation is nowhere to be found. I was wondering if anyone knows better than me and can point me to the documentation.
You won't have to generate the docs yourself. There's an online copy of the original Python Levenshtein API: <http://www.coli.uni-saarland.de/courses/LT1/2011/slides/Python-Levenshtein.html>
Here is an example: ``` # install with: pip install python-Levenshtein from Levenshtein import distance edit_dist = distance("ah", "aho") ```
Where can the documentation for python-Levenshtein be found online?
[ "", "python", "documentation", "levenshtein-distance", "" ]
Is there a way to programmatically get the line number and name of a function? For example, I want to pass a list of strings to a function: ``` s = [calling_module, calling_function, line_number] report(s) ``` Currently I just put it all in manually: ``` s = ["module abc", "func()", "line 22", "notes"] report(s) ``` But I would like to know if there is an automatic way for Python to fill in the module name (I think \_\_name\_\_ does that), function name and line number for me. Is there a way?
Use [`inspect`](http://docs.python.org/2/library/inspect.html) module functions. For example, ``` import inspect def b(): f = inspect.currentframe() current = inspect.getframeinfo(f) caller = inspect.getframeinfo(f.f_back) #caller = inspect.getframeinfo(inspect.getouterframes(f)[1][0]) print(__name__, current.filename, current.function, current.lineno, caller.function) def a(): b() a() ```
You may want something along the lines of `traceback.extract_stack()`: ``` >>> def test(): ... print "In Function" ... print traceback.extract_stack() ... >>> >>> test() In Function [('<stdin>', 1, '<module>', None), ('<stdin>', 3, 'test', None)] ``` Though the results would need to be parsed.
getting the module and line that a function was called from
[ "", "python", "" ]
I have a column called `Cities`; this column has data like the following: ``` City 1 CITY 1 CITY 1 ``` I want to select only the `CITY` rows (capitalized). This is not doing the job: ``` SELECT * FROM Location WHERE Cities LIKE 'CITY 1' ``` This is giving me all the `CITY 1` rows, including `City 1`.
By default, SQL Server will use a case insensitive collation. ``` SELECT * FROM Location WHERE Cities LIKE 'CITY 1' COLLATE Latin1_General_CS_AI ``` Mark your comparison operator with a case-sensitive collation to apply a case sensitive filter
I assume your SQL is in a non case-sensitive collation in which case a search for 'C' and 'c' will return the same result. You need to do a case-sensitive search by collating the query: ``` SELECT * FROM Location WHERE Cities LIKE 'CITY 1' COLLATE <Insert case sensitive collation here> ``` e.g. ``` SELECT * FROM Location WHERE Cities LIKE 'CITY 1' COLLATE Latin1_General_CS_AI ```
SELECT Only the words that are capitalized
[ "", "sql", "sql-server", "" ]
I am trying to find out whether a list of integers is coherent or 'at one stretch', meaning that the difference between two neighboring elements must be exactly one and that the numbers must be increasing monotonically. I [found](https://stackoverflow.com/a/3149493/145400) a neat approach where we can group by the number in the list minus the position of the element in the list -- this difference changes when the numbers are not coherent. Obviously, there should be exactly one group when the sequence does not contain gaps or repetitions. Test: ``` >>> l1 = [1, 2, 3, 4, 5, 6] >>> l2 = [1, 2, 3, 4, 5, 7] >>> l3 = [1, 2, 3, 4, 5, 5] >>> l4 = [1, 2, 3, 4, 5, 4] >>> l5 = [6, 5, 4, 3, 2, 1] >>> def is_coherent(seq): ... return len(list(g for _, g in itertools.groupby(enumerate(seq), lambda (i,e): i-e))) == 1 ... >>> is_coherent(l1) True >>> is_coherent(l2) False >>> is_coherent(l3) False >>> is_coherent(l4) False >>> is_coherent(l5) False ``` It works well, but I personally find that this solution is a bit too convoluted in view of the simplicity of the problem. Can you come up with a clearer way to achieve the same without significantly increasing the code length? # Edit: summary of answers From the answers given below, the solution ``` def is_coherent(seq): return seq == range(seq[0], seq[-1]+1) ``` clearly wins. For small lists (10^3 elements), it is on the order of 10 times faster than the `groupby` approach and (on my machine) still four times faster than the next best approach (using `izip_longest`). It has the worst scaling behavior, but even for a large list with 10^8 elements it is still two times faster than the next best approach, which again is the `izip_longest`-based solution. Relevant timing information obtained with `timeit`: ``` Testing is_coherent_groupby... small/large/larger/verylarge duration: 8.27 s, 20.23 s, 20.22 s, 20.76 s largest/smallest = 2.51 Testing is_coherent_npdiff... 
small/large/larger/verylarge duration: 7.05 s, 15.81 s, 16.16 s, 15.94 s largest/smallest = 2.26 Testing is_coherent_zip... small/large/larger/verylarge duration: 5.74 s, 20.54 s, 21.69 s, 24.62 s largest/smallest = 4.28 Testing is_coherent_izip_longest... small/large/larger/verylarge duration: 4.20 s, 10.81 s, 10.76 s, 10.81 s largest/smallest = 2.58 Testing is_coherent_all_xrange... small/large/larger/verylarge duration: 6.52 s, 17.06 s, 17.44 s, 17.30 s largest/smallest = 2.65 Testing is_coherent_range... small/large/larger/verylarge duration: 0.96 s, 4.14 s, 4.48 s, 4.48 s largest/smallest = 4.66 ``` Testing code: ``` import itertools import numpy as np import timeit setup = """ import numpy as np def is_coherent_groupby(seq): return len(list(g for _, g in itertools.groupby(enumerate(seq), lambda (i,e): i-e))) == 1 def is_coherent_npdiff(x): return all(np.diff(x) == 1) def is_coherent_zip(seq): return all(x==y+1 for x, y in zip(seq[1:], seq)) def is_coherent_izip_longest(l): return all(a==b for a, b in itertools.izip_longest(l, xrange(l[0], l[-1]+1))) def is_coherent_all_xrange(l): return all(l[i] + 1 == l[i+1] for i in xrange(len(l)-1)) def is_coherent_range(seq): return seq == range(seq[0], seq[-1]+1) small_list = range(10**3) large_list = range(10**6) larger_list = range(10**7) very_large_list = range(10**8) """ fs = [ 'is_coherent_groupby', 'is_coherent_npdiff', 'is_coherent_zip', 'is_coherent_izip_longest', 'is_coherent_all_xrange', 'is_coherent_range' ] for n in fs: print "Testing %s..." 
% n t1 = timeit.timeit( '%s(small_list)' % n, setup, number=40000 ) t2 = timeit.timeit( '%s(large_list)' % n, setup, number=100 ) t3 = timeit.timeit( '%s(larger_list)' % n, setup, number=10 ) t4 = timeit.timeit( '%s(very_large_list)' % n, setup, number=1 ) print " small/large/larger/verylarge duration: %.2f s, %.2f s, %.2f s, %.2f s" % (t1, t2, t3, t4) print " largest/smallest = %.2f" % (t4/t1) ``` Test machine: * Linux 3.2.0 (Ubuntu 12.04) * Python 2.7.3 (gcc 4.1.2) * numpy 1.6.2 built with Intel compiler * CPU: E5-2650 @ 2.00GHz * 24 GB of memory
How about ``` sorted_list = sorted(my_list) return sorted_list == range(sorted_list[0],sorted_list[-1]+1) ``` or, if it is only coherent when it is already sorted, ``` return my_list == range(my_list[0],my_list[-1]+1) ``` If you are using Python 3 you will need `list(range(...))`.
Unless I'm overlooking something in your examples, this simpler solution is actually shorter. ``` >>> l1 = [1, 2, 3, 4, 5, 6] >>> l2 = [1, 2, 3, 4, 5, 7] >>> l3 = [1, 2, 3, 4, 5, 5] >>> l4 = [1, 2, 3, 4, 5, 4] >>> l5 = [6, 5, 4, 3, 2, 1] >>> >>> def is_coherent(seq): ... return seq == range(seq[0], seq[0]+len(seq), 1) ... >>> is_coherent(l1) True >>> is_coherent(l2) False >>> is_coherent(l3) False >>> is_coherent(l4) False >>> is_coherent(l5) False >>> ``` The results of some basic performance tests seem to indicate that this method is significantly quicker (I've added your example as `is_coherent2`): ``` Carl > python -m timeit -s 'from t import is_coherent, l1' 'is_coherent(l1)' 1000000 loops, best of 3: 0.782 usec per loop Carl > python -m timeit -s 'from t import is_coherent, l3' 'is_coherent(l3)' 1000000 loops, best of 3: 0.796 usec per loop Carl > python -m timeit -s 'from t import is_coherent2, l1' 'is_coherent2(l1)' 100000 loops, best of 3: 4.54 usec per loop Carl > python -m timeit -s 'from t import is_coherent2, l3' 'is_coherent2(l3)' 100000 loops, best of 3: 4.93 usec per loop ```
Python: find out whether a list of integers is coherent
[ "", "python", "sequence", "python-itertools", "" ]
I need a small billing report for the usage of the VMs inside OpenStack after they are stopped. So far I have already found a way to get flavor information (vCPU, disk, memory) from the instance name, and now I want to know the VM's startup time to do the calculation. Are there any good ways to fetch it from the OpenStack Python API? It would be nice if you could paste the code as well.
(I got the answer from the [china-openstack community](http://groups.google.com/group/china-openstack-user-group) and am sharing it here.) In the [novaclient usage module](http://docs.openstack.org/developer/python-novaclient/api/novaclient.v1_1.usage.html#novaclient.v1_1.usage.Usage), all the instances (active or terminated) can be fetched with the `list` API, and the detailed information is fetched via the `get` API; it is not clear from this Python documentation what information is exposed. Fortunately the [openstack api : os-simple-tenant-usage](http://api.openstack.org/api-ref.html#ext-os-simple-tenant-usage) describes the data structure, and the `uptime` field is what I want. ``` "tenant_usage": { "server_usages": [ { ... (skipped) "uptime": 3600, "vcpus": 1 } ], ``` The OpenStack dashboard (at least the Folsom version) uses this API as well.
I just wanted to retrieve the server's uptime. I mean real uptime, the time the server has been UP, not time since its creation. * I created a new machine; while the machine was running I was getting some *uptime* value, and this was nicely incremented. * Then I stopped the machine and issued the request again: the response correctly reports *"state": "stopped"*, but the *uptime* attr. is still being incremented. ==> Again, in this extension it is not uptime, it is time from creation Request to the *os-simple-tenant-usage* extension (after obtaining an auth. token): `GET http://rdo:8774/v2/4e1900cf21924a098709c23480e157c0/os-simple-tenant-usage/4e1900cf21924a098709c23480e157c0` (with the correct tenant ID) Response (notice the machine is stopped and uptime is a non-zero value): ``` { "tenant_usage": { "total_memory_mb_usage": 0.000007111111111111112, "total_vcpus_usage": 1.388888888888889e-8, "start": "2014-02-25T14:20:19.660179", "tenant_id": "4e1900cf21924a098709c23480e157c0", "stop": "2014-02-25T14:20:19.660184", "server_usages": [ { "instance_id": "ca4465a8-38ca-40de-b138-82efcc88c7cf", "uptime": 1199, "started_at": "2014-02-25T14:00:20.000000", "ended_at": null, "memory_mb": 512, "tenant_id": "4e1900cf21924a098709c23480e157c0", "state": "stopped", "hours": 1.388888888888889e-8, "vcpus": 1, "flavor": "m1.tiny", "local_gb": 1, "name": "m1" } ], "total_hours": 1.388888888888889e-8, "total_local_gb_usage": 1.388888888888889e-8 } } ``` So despite its name **uptime** it is just **time since the server creation**.
How can I get VM instance running time in openstack via python API?
[ "", "python", "openstack-nova", "" ]
I have two tables: CustomerInformation ``` CustomerName CustomerAddress CustomerID LocationID BillDate CITY - 1 500 N ST 47672001 29890 2012-07-20 00:00:00.000 0 CITY - 1 500 N ST 47672001 29890 2012-07-20 00:00:00.000 6890 CITY - 1 500 N ST 47672001 29890 2012-08-17 00:00:00.000 0 CITY - 9 510 N ST 47643241 29890 2012-08-17 00:00:00.000 5460 CITY - 4213 500 S ST 43422001 29890 2012-09-17 00:00:00.000 0 CITY - 5 100 N ST 23272001 29890 2012-09-17 00:00:00.000 4940 CITY - 3 010 N ST 43323001 29890 2012-10-19 00:00:00.000 0 CITY - 78 310 N ST 12222001 29890 2012-10-19 00:00:00.000 5370 ``` and CustomerMeters, which has three columns: ID, Name, Address. The connection between these two tables is CustomerAddress, so I can join the two based on Address: ``` SELECT * FROM CustomerInformation JOIN CustomerMeters ON CustomerAddress = Address ``` Now, the problem is I have so many records (over 20000 in CustomerInformation). Is there a way to list how many records match in both tables, and how many records are only in the CustomerInformation table? Thank you.
``` select C.grp, count(*) from CustomerInformation as ci left outer join CustomerMeters as cm on cm.Address = ci.CustomerAddress outer apply ( select case when cm.ID is not null then 'Number of records in both tables' else 'not in CustomerMeters' end as grp ) as C group by C.grp ``` or ``` --number of records in both tables select count(*) from CustomerInformation as ci where ci.CustomerAddress in (select cm.Address from CustomerMeters as cm) --number of records in CustomerInformation which are not in CustomerMeters select count(*) from CustomerInformation as ci where ci.CustomerAddress not in (select cm.Address from CustomerMeters as cm) ```
Count of records that resulted from the join: ``` SELECT COUNT(*) FROM CustomerInformation JOIN CustomerMeters ON CustomerAddress = Address ``` Number of records *exclusively* in the `CustomerInformation` table: ``` SELECT COUNT(*) FROM CustomerInformation AS CI -- Records in CustomerInformation WHERE NOT EXISTS(SELECT * -- that are not in CustomerMeters FROM CustomerMeters AS CM WHERE CM.Address = CI.CustomerAddress) ```
SQL Server Comparing how many data matches, and only in one tables
[ "", "sql", "sql-server", "" ]
I have looked around StackOverflow and couldn't find an answer to my specific question so forgive me if I have missed something. ``` import re target = open('output.txt', 'w') for line in open('input.txt', 'r'): match = re.search(r'Stuff', line) if match: match_text = match.group() target.write(match_text + '\n') else: continue target.close() ``` The file I am parsing is huge so need to process it line by line. This (of course) leaves an additional newline at the end of the file. How should I best change this code so that on the final iteration of the 'if match' loop it doesn't put the extra newline character at the end of the file. Should it look through the file again at the end and remove the last line (seems a bit inefficient though)? The existing StackOverflow questions I have found cover removing all new lines from a file. If there is a more pythonic / efficient way to write this code I would welcome suggestions for my own learning also. Thanks for the help!
Write the newline of each line at the beginning of the *next* line. To avoid writing a newline at the beginning of the first line, use a variable that is initialized to an empty string and then set to a newline in the loop. ``` import re with open('input.txt') as source, open('output.txt', 'w') as target: newline = '' for line in source: match = re.search(r'Stuff', line) if match: target.write(newline + match.group()) newline = '\n' ``` I also restructured your code a bit (the `else: continue` is not needed, because what else is the loop going to do?) and changed it to use the `with` statement so the files are automatically closed.
Another thing you can do, is to `truncate` the file. `.tell()` gives us the current byte number in the file. We then subtract one, and truncate it there to remove the trailing newline. ``` with open('a.txt', 'w') as f: f.write('abc\n') f.write('def\n') f.truncate(f.tell()-1) ``` On Linux and MacOS, the `-1` is correct, but on Windows it needs to be `-2`. A more Pythonic method of determining which is to check `os.linesep`. ``` import os remove_chars = len(os.linesep) with open('a.txt', 'w') as f: f.write('abc\n') f.write('def\n') f.truncate(f.tell() - remove_chars) ``` --- **kindal**'s answer is also valid, with the exception that you said it's a large file. This method will let you handle a terabyte sized file on a gigabyte of RAM.
Don't write final new line character to a file
[ "", "python", "" ]
I am surprised I have not been able to find a solution to this. We have a table ``` ORDER # | PRODUCT ID | PRICE 1 | 1 | 1.00 1 | 2 | 2.00 2 | 3 | 3.00 2 | 4 | 4.00 3 | 1 | 5.00 3 | 4 | 6.00 ``` We want to capture the sum of the revenues of all orders which included productID=1. The result in this example should be 1+2+5+6 = 14 What is the best way to achieve this? Currently, best solution I have is to run two queries. 1 - `SELECT orderID FROM table WHERE prodID=$prodID` 2 - `SELECT price FROM table WHERE orderID=[result of the above]` This has worked, but would strongly prefer to have a single query.
Try: ``` select sum(price) as total_price from orders where prod_order in (select prod_order from orders where product_id = 1) ``` Check [this SQLFiddle](http://sqlfiddle.com/#!3/29bf4/3) to confirm the result.
Here is a query that gives the results you are looking for: ``` SELECT OrderNum, SUM(PRICE) as TotalPrice FROM MyTable AS M WHERE EXISTS (SELECT 1 -- Include only orders that contain product 1 FROM MyTable AS M2 WHERE M2.OrderNum=M.OrderNum AND M2.ProductId=1) GROUP BY OrderNum ```
SQL Query - Sum of Order tha Contain an Item
[ "", "sql", "select", "where-clause", "" ]
I have a large function in my script that contains the bulk of the logic of my program. At one point, it used to span ~100 lines which I then tried to refactor into multiple smaller functions. However, I had many local variables that were eventually being modified in the smaller functions, and I needed some way to keep track of them in the scope of the larger function. For instance, it looked like ``` def large_func(): x = 5 ... 100 lines ... ``` to ``` def large_func(): x = 6 small_func_that_will_increment_x() small_func() .... ``` What is a pythonic way to handle this? The two approaches I can think of are: 1) global variables --- will probably get messy as I have many variables 2) using a dict to keep track of them like ``` tracker = { 'field1' : 5 'field2' : 4 } ``` and make modifications on the dict instead. Is there a different way to do this that I might have overlooked?
Without more information, it's hard to know whether this is appropriate or not, but… An object is a namespace. In particular, you can turn each of those local variables into attributes on an object. For example: ``` class LargeThing(object): def __init__(self): self.x = 6 def large_func(self): self.small_func_that_will_increment_x() self.small_func() # ... def small_func_that_will_increment_x(self): self.x += 1 ``` Whether the `self.x = 6` belongs in `__init__` or at the start of `large_func`, or whether this is even a good idea, depends on what all those variables actually mean, and how they fit together.
Closures will work here: ``` def large_func() x = 6 def func_that_uses_x(): print x def func_that_modifies_x(): nonlocal x # python3 only x += 1 func_that_uses_x() func_that_modifies_x() ```
Python Large Functions Many Variables (Refactoring)
[ "", "python", "" ]
I did my research and made several improvements, and got very close to solving this issue, but now I'm stuck and need help please. Task: to convert a list of string floats to floats with 2 decimal points. Original list: ``` mylist = ['17.21', '33.40', '24.39', '3.48', '1.02', '0.61', '18.03', '1.84'] ``` Aim: ``` mylist = [17.21, 33.40, 24.39, 3.48, 1.02, 0.61, 18.03, 1.84] ``` My script attempts: ``` mylisttwo = map(float, mylist) ``` It gave me this: ``` [17.21, 33.4, 24.39, 3.48, 1.02, 0.60999999999999999, 18.03, 1.84] ``` Then I thought I'd format it: ``` floatlist = [] for item in mylisttwo: floatlist.append("{0:.2f}".format(item)) ``` but that gave me a list of string floats again!! arghhhh ``` ['17.21', '33.40', '24.39', '3.48', '1.02', '0.61', '18.03', '1.84'] ``` What am I doing wrong here? Thanks
`[round(float(i), 2) for i in mylist]` This should work. Format is for inserting strings in other strings, as in `"Hello, {}! What is your favourite {}?".format("Jim", "colour")`. If you want the numbers to only exist to 2 decimal places, you could use the decimal module.
You asked, "What am I doing wrong here?" I think what you are doing wrong is passing this as an argument to ``` floatlist.append() ```: ``` "{0:.2f}".format(item) ``` The format method of the str class simply returns a formatted string, while you are expecting to get a float fixed to two decimal points. A float does not give such an option. [user2387370](https://stackoverflow.com/users/2387370/user2387370) is right, you should use Decimal.
Convert a list of string floats to a list of floats with 2 decimal points
[ "", "python", "list", "floating-point", "decimal", "" ]
I'm using SQL server 2008. ``` SELECT resultTable.OrderNumber, resultTable.ProjectId, resultTable.BatchId, resultTable.CustomerId, resultTable.City, resultTable.Street, resultTable.PostalCode, resultTable.Country, resultTable.CreatedDate, resultTable.Name, COUNT(*) OVER() as OrdersCount, Row_Number() OVER (ORDER BY CASE WHEN @sortBy = 'OrderNumber' THEN resultTable.OrderNumber END, CASE WHEN @sortBy = 'ProjectId' THEN resultTable.ProjectId END, CASE WHEN @sortBy = 'Address' THEN resultTable.Country, resultTable.City, resultTable.Street, resultTable.PostalCode END, CASE WHEN @sortBy = 'CreatedDate' THEN resultTable.CreatedDate END) as RowIndex FROM resultTable ``` This query fires with syntax error near "," ``` THEN resultTable.Country, resultTable.City, resultTable.Street, resultTable.PostalCode END, ``` If I remove the columns after the first "," everything is OK, but I'm trying to sort by address, which consists of 4 columns. Probably I cannot see something easy here. PS: `@sortBy` is just a string parameter.
Try something like this: ``` ORDER BY CASE WHEN @sortBy = 'OrderNumber' THEN resultTable.OrderNumber END, CASE WHEN @sortBy = 'ProjectId' THEN resultTable.ProjectId END, CASE WHEN @sortBy = 'Address' THEN resultTable.Country END, CASE WHEN @sortBy = 'Address' THEN resultTable.City END, CASE WHEN @sortBy = 'Address' THEN resultTable.Street END, CASE WHEN @sortBy = 'Address' THEN resultTable.PostalCode END, CASE WHEN @sortBy = 'CreatedDate' THEN resultTable.CreatedDate END ```
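The shape of that fix, several expressions that all activate under the same `@sortBy` value, is just a conditional composite sort key. A rough Python analogue with invented rows, in case it helps to see the idea outside SQL:

```python
rows = [
    {'OrderNumber': 2, 'Country': 'USA', 'City': 'Reno',  'Street': 'B St'},
    {'OrderNumber': 1, 'Country': 'USA', 'City': 'Boise', 'Street': 'A St'},
]

def make_key(sort_by):
    # 'Address' expands to several columns, like the several CASE branches.
    if sort_by == 'Address':
        return lambda r: (r['Country'], r['City'], r['Street'])
    return lambda r: (r[sort_by],)

by_address = sorted(rows, key=make_key('Address'))
by_number = sorted(rows, key=make_key('OrderNumber'))
```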
You probably want ``` CASE WHEN @sortBy = 'Address' THEN isnull(resultTable.Country,'') + isnull( resultTable.City,'') + isnull( resultTable.Street,'') + isnull( resultTable.PostalCode,'') end ```
SQL server: Order by multiple columns incorrect syntax
[ "", "sql", "sql-server-2008", "" ]
![enter image description here](https://i.stack.imgur.com/BpLII.png) In the above T-SQL table I would like very much for the Total row to appear at the bottom. I have been wracking my head against this, and in all of my other queries simply using ORDER BY Status works, as Total is alphabetically much farther down the list than most of our row values. This is not the case here and I just can't figure out how to change it. I'm pretty new to SQL and I've been having a lot of difficulty even determining how to phrase a Google search. So far I've just gotten results pertaining to ORDER BY.
The results of a `select` query, unless an order is explicitly specified via an 'order by' clause, can be returned in *any* order. Moreover, the order in which they are returned is not even deterministic. Running the exact same query 3 times in succession might return the exact same result set in 3 different orderings. So if you want a particular order to your table, you need to order it. An order by clause like ``` select * from myTable t where ... order by case Status when 'Total' then 1 else 0 end , Status ``` would do you. The 'Total' row will float to the bottom, the other rows will be ordered in collating sequence. You can also order things arbitrarily with this technique: ``` select * from myTable t where ... order by case Status when 'Deceased' then 1 when 'Total' then 2 when 'Active' then 3 when 'Withdrawn' then 4 else 5 end ``` will list the row(s) with a status of 'Deceased' first, followed by the row(s) with a status of 'Total', then 'Active' and 'Withdrawn', and finally anything that didn't match up to an item in the list.
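The same "float one value to the bottom" trick works anywhere composite sort keys exist; for instance, a quick Python analogue of the first query:

```python
statuses = ['Total', 'Deceased', 'Active', 'Withdrawn']

# (False, name) sorts before (True, name), so 'Total' sinks to the end
# and everything else stays in alphabetical order, mirroring the
# CASE ... WHEN 'Total' THEN 1 ELSE 0 END, Status ordering above.
ordered = sorted(statuses, key=lambda s: (s == 'Total', s))
```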
In SQL Server (and most other databases), you can use `case` to sort certain [statūs](http://en.wiktionary.org/wiki/status#Noun_3) above others: ``` order by case Status when 'Total' then 2 else 1 end , Status ``` In MS Access, you can use `iif`: ``` order by iif(Status = 'Total', 2, 1) , Status ```
Changing position of a row in sql
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I am trying to insert some data from an excel sheet on to SQL Server 2008 R2. Can I Create a table which on insertion of the data can insert leading 0 if the length is less than 5 --- ``` FirstName Lastname State Country Zip XY z VA USA 22031 AB Y MO USA 423 Anna Belle WA USA 5234 ``` Output: ``` FirstName Lastname State Country Zip XY z VA USA 22031 AB Y MO USA 00423 Anna Belle WA USA 05234 ``` Note: The ZIP should still act like an INT
Leave the ZIP in the int format, and add the leading zeroes when you select it. Like this: ``` select right('0000'+convert(varchar(5), ZIP), 5) ```
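The same idea, store the number and pad only for display, looks like this in Python terms (`zfill` plays the role of the `right('0000'+...)` trick):

```python
zips = [22031, 423, 5234]   # stored as plain ints, as in the question

# Padding happens only when formatting for display; zfill pads with
# leading zeros to a fixed width of 5 characters.
padded = [str(z).zfill(5) for z in zips]
```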
You can pad the Zip in a varchar field using a pad function like ``` INSERT INTO TableName (FirstName, Lastname, State, Country, Zip) VALUES ('AB', 'Y', 'MO', 'USA', REPLACE(STR(423, 5), SPACE(1), '0') ) ``` or selecting it formatted if you want to keep the field as INT ``` SELECT FirstName, Lastname, State, Country, REPLACE(STR(zip, 5), SPACE(1), '0') as ZIP ```
Can we create a table which Can insert leading Zeros on operation
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I'm trying desperately to get Celery to play nicely with Django, but to no avail. I am getting tripped up on the following: project/settings.py: ``` ... import djcelery djcelery.setup_loader() BROKER_URL = 'django://' CELERY_RESULT_BACKEND = 'django://' CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_ENABLE_UTC = True ... ``` *app/tasks.py:* ``` from celery.task import task @task() def scheduled_task(param1, param2): ... return something ``` Calling `scheduled_task(param1, param2)` directly (without the decorator) works as expected. However when adding the decorator and firing up the 'development' celery worker like so: ``` python manage.py celery worker --loglevel=info ``` ...I get the following error: ``` TypeError: 'module' object is not callable ``` I've pinned this down to the `@task` decorator. Every combination I try fails, including: ``` from celery import task from celery.task import task from celery.task.base import task @task @task() @task.task @task.task() @celery.task @celery.task() ``` Nothing seems to make any difference to the call stack in the exception, they all *appear* to think that `task` is a module, and not callable! To make things even more frustrating: ``` >>> from celery.task import task >>> task <function task at 0x10aa2a758> ``` That sure looks callable to me! Any idea what might be happening? If I've missed anything, I'm happy to post additional logs, files or clarify anything else.
*(Converted to an answer from comments)* From the [stack trace](http://pastebin.com/Qgcr6EDB) I take it that the line `return backend(app=self, url=url)` is where the exception happens. So whatever `backend` is, it doesn't seem to be a callable. I would try to set a pdb breakpoint in that file (`celery/app/base.py`) by wrapping that line in ``` try: backend(app=self, url=url) except: import pdb; pdb.set_trace() ``` and then inspecting `backend`, and moving up the stack (`u` command in pdb, `d` to go down again, `w` to display call stack) to debug where it all goes wrong. The [celery docs](http://docs.celeryproject.org/en/latest/userguide/tasks.html#basics) also mention this: > **How do I import the task decorator?** > > The task decorator is available on your Celery instance, if you don’t know what that is then please read First Steps with Celery. > > **If you’re using Django or are still using the “old” module based celery API**, then you can import the task decorator like this: ``` from celery import task @task def add(x, y): return x + y ``` So that should clear up any ambiguity about what way to import the task decorator is the right one.
Just in case somebody is using Celery beat and gets the same error message. In my app I used ``` command=/opt/python/run/venv/bin/celery beat -A appname --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid ``` and got this error message. Since I copied most of the code for daemonizing Celery beat using supervisord (you need a special config for this), I did not realize that the "-S django" presumes the use of django\_celery\_beat package, which I had not installed before. I installed it because it anyway has advantages for production use and the error disappeared.
celery task decorator throws "TypeError: 'module' object is not callable"
[ "", "python", "django", "celery", "django-celery", "" ]
I'm trying to define a method to check whether or not every element of a list is a factor of the parameter. Here's what I have: ``` def factorall(x): if all(x % num for num in nums) == 0: return True else: return False ``` (In this case nums is a list of the integers from 1 to 10) However, this returns true for any number. I'm assuming this happens because it is only checking 1 and then returning True, but shouldn't all() be checking for every element of the list before returning True? I'm a bit unfamiliar with all() so I probably implemented it incorrectly. Can someone point me in the right direction? Thanks!
You should do the comparison inside the `all` function, or simply remove it, and use negation of the result `x % num`: ``` def factorall(x): return all(not x % num for num in nums) ``` The return statement works the same as: ``` return all(x % num == 0 for num in nums) ``` I agree that the 2nd one seems clearer.
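With the 1-10 list from the question, the corrected predicate behaves like this (2520 is the smallest number divisible by every integer from 1 to 10):

```python
nums = list(range(1, 11))

def factorall(x):
    # True only if every num in nums divides x with no remainder.
    return all(x % num == 0 for num in nums)
```

For example, `factorall(2520)` is `True`, while `factorall(100)` is `False` because `100 % 3 != 0`.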
you should use `not any` instead of all ``` def factorall(x): return not any(x%num for num in nums) #if any of these is a value other than 0 ``` or if you want it like you currently have it ``` def factorall(x): return all(x%num==0 for num in nums) ```
Trying to check a condition for every element in a list of integers
[ "", "python", "" ]
The table I am dealing with has multiple rows which have the same values for `lat` and `lon`. The example shows that `1`, `3`, `5` have the same location but the `name` attribute differs. The `hash` is built from `name`, `lat` and `lon` and therefore differs. ``` BEFORE: id | name | lat | lon | flag | hash ----+------+-----+-----+------+------ 1 | aaa | 16 | 48 | 0 | 2cd <-- duplicate 2 | bbb | 10 | 22 | 0 | 3fc 3 | ccc | 16 | 48 | 0 | 8ba <-- duplicate 4 | ddd | 10 | 23 | 0 | c33 5 | eee | 16 | 48 | 0 | 751 <-- duplicate ``` I need to identify "duplicates" within this table and want to assign the flag `1` (*primary*) to one of them and the flag `2` (*secondary*) to the others. It is not important which "duplicate" is flagged as *primary*. ``` AFTER: id | name | lat | lon | flag | hash ----+------+-----+-----+------+------ 1 | aaa | 16 | 48 | 1 | 2cd <-- updated 2 | bbb | 10 | 22 | 0 | 3fc 3 | ccc | 16 | 48 | 2 | 8ba <-- updated 4 | ddd | 10 | 23 | 0 | c33 5 | eee | 16 | 48 | 2 | 751 <-- updated ``` I started experimenting with `INNER JOIN` [inspired by this post](https://stackoverflow.com/a/38578/356895) and this [visual description](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html). With this I am able to assign the same flag to all duplicates. ``` UPDATE table t1 INNER JOIN table_name t2 ON t1.lat = t2.lat AND t1.lon = t2.lon AND t1.hash != t2.hash SET t1.flag = 2; ``` I also tested `LEFT OUTER JOIN` with `WHERE t2.id IS NULL`, which could work when there are only two rows. However, I cannot wrap my head around how a `JOIN` should work with **more than two duplicates**. Mark Harrison also assumes *"that you're joining on columns with no duplicates"* [at the beginning of his post](https://stackoverflow.com/a/38578/356895), which sounds as if this is not a good idea. I am using MySQL if this is of interest.
Not sure this is very efficient, but it [works in just one query](http://sqlfiddle.com/#!2/091ec/2): ``` UPDATE t JOIN ( SELECT MAX(t.id) AS maxid, lat, lon FROM t JOIN t AS duplicates USING (lat, lon) GROUP BY lat, lon HAVING COUNT(*) > 1 ) AS maxima USING (lat, lon) SET flag = IF(id = maxid, 1, 2); ```
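To sanity-check the intended result on the sample rows, here is the same flagging rule written out in plain Python (the question says it does not matter which duplicate becomes primary; this sketch picks the lowest `id`, while the query above picks the highest):

```python
rows = [
    {'id': 1, 'lat': 16, 'lon': 48, 'flag': 0},
    {'id': 2, 'lat': 10, 'lon': 22, 'flag': 0},
    {'id': 3, 'lat': 16, 'lon': 48, 'flag': 0},
    {'id': 4, 'lat': 10, 'lon': 23, 'flag': 0},
    {'id': 5, 'lat': 16, 'lon': 48, 'flag': 0},
]

# Group rows by location, the duplicate criterion.
groups = {}
for r in rows:
    groups.setdefault((r['lat'], r['lon']), []).append(r)

for dupes in groups.values():
    if len(dupes) > 1:                 # only locations with real duplicates
        dupes.sort(key=lambda r: r['id'])
        dupes[0]['flag'] = 1           # primary
        for r in dupes[1:]:
            r['flag'] = 2              # secondary
```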
Assuming you want all canonical records to have a `flag=1` (and not just those records that have duplicates) you can do: ``` UPDATE table t1 SET t1.flag = 1 WHERE t1.id in ( SELECT min(id) FROM table t2 WHERE t1.lat = t2.lat AND t1.lon = t2.lon ); UPDATE table SET flag = 2 WHERE flag is null; COMMIT; ```
How to update multiple duplicates with different values on the same table?
[ "", "mysql", "sql", "join", "duplicates", "" ]
I have been trying to optimize a python script I wrote for the last two days. Using several profiling tools (cProfile, line\_profiler etc.) I narrowed down the issue to the following function below. `df` is a numpy array with 3 columns and +1,000,000 rows (data type is float). Using line\_profiler, I found out that the function spends most of the time whenever it needs to access the numpy array. `full_length += head + df[rnd_truck, 2]` and `full_weight += df[rnd_truck,1]` take most of the time, followed by `full_length = df[rnd_truck,2]` `full_weight = df[rnd_truck,1]` lines. As far as I see the bottleneck is caused by the access time the function tries to grab a number from the numpy array. When I run the function as `MonteCarlo(df, 15., 1000.)` it takes 37 seconds to call the function for 1,000,000 times in a i7 3.40GhZ 64bit Windows machine with 8GB RAM. In my application, I need to run it for 1,000,000,000 to ensure convergence, which brings the execution time to more than an hour. I tried using the `operator.add` method for the summation lines, but it did not help me at all. It looks like I have to figure out a faster way to access this numpy array. Any ideas would be welcome! ``` def MonteCarlo(df,head,span): # Pick initial truck rnd_truck = np.random.randint(0,len(df)) full_length = df[rnd_truck,2] full_weight = df[rnd_truck,1] # Loop using other random truck until the bridge is full while 1: rnd_truck = np.random.randint(0,len(df)) full_length += head + df[rnd_truck, 2] if full_length > span: break else: full_weight += df[rnd_truck,1] # Return average weight per feet on the bridge return(full_weight/span) ``` Below is a portion of the `df` numpy array I am using: ``` In [31] df Out[31]: array([[ 12. , 220.4, 108.4], [ 11. , 220.4, 106.2], [ 11. , 220.3, 113.6], ..., [ 4. , 13.9, 36.8], [ 3. , 13.7, 33.9], [ 3. , 13.7, 10.7]]) ```
As noted by other people, this isn't vectorized at all, so your slowness is really due to slowness of the Python interpreter. [Cython](http://cython.org) can help you a lot here with minimal changes: ``` >>> %timeit MonteCarlo(df, 5, 1000) 10000 loops, best of 3: 48 us per loop >>> %timeit MonteCarlo_cy(df, 5, 1000) 100000 loops, best of 3: 3.67 us per loop ``` where `MonteCarlo_cy` is just (in the IPython notebook, after `%load_ext cythonmagic`): ``` %%cython import numpy as np cimport numpy as np def MonteCarlo_cy(double[:, ::1] df, double head, double span): # Pick initial truck cdef long n = df.shape[0] cdef long rnd_truck = np.random.randint(0, n) cdef double full_weight = df[rnd_truck, 1] cdef double full_length = df[rnd_truck, 2] # Loop using other random truck until the bridge is full while True: rnd_truck = np.random.randint(0, n) full_length += head + df[rnd_truck, 2] if full_length > span: break else: full_weight += df[rnd_truck, 1] # Return average weight per feet on the bridge return full_weight / span ```
Using cython to compile the function gives a very substantial improvement to runtime. In a separate file called "funcs.pyx" I have the following code: ``` cimport cython import numpy as np cimport numpy as np def MonteCarlo(np.ndarray[np.float_t, ndim=2] df, float head, float span): # Pick initial truck cdef int rnd_truck = np.random.randint(0,len(df)) cdef float full_length = df[rnd_truck,2] cdef float full_weight = df[rnd_truck,1] # Loop using other random truck until the bridge is full while 1: rnd_truck = np.random.randint(0,len(df)) full_length += head + df[rnd_truck, 2] if full_length > span: break else: full_weight += df[rnd_truck,1] # Return average weight per feet on the bridge return(full_weight/span) ``` Everything is the same except for the type declarations in front of the variables. Here's the file I used to test it out: ``` import numpy as np import pyximport pyximport.install(reload_support=True, setup_args={'include_dirs':[np.get_include()]}) import funcs def MonteCarlo(df,head,span): # Pick initial truck rnd_truck = np.random.randint(0,len(df)) full_length = df[rnd_truck,2] full_weight = df[rnd_truck,1] # Loop using other random truck until the bridge is full while 1: rnd_truck = np.random.randint(0,len(df)) full_length += head + df[rnd_truck, 2] if full_length > span: break else: full_weight += df[rnd_truck,1] # Return average weight per feet on the bridge return(full_weight/span) df = np.random.rand(1000000,3) reload(funcs) %timeit [funcs.MonteCarlo(df, 15, 1000) for i in range(10000)] %timeit [MonteCarlo(df, 15, 1000) for i in range(10000)] ``` I only ran it 10000 times, but even so, there's a huge improvement. ``` 16:42:30: In [31]: %timeit [funcs.MonteCarlo(df, 15, 1000) for i in range(10000)] 10 loops, best of 3: 131 ms per loop 16:42:37: In [32]: %timeit [MonteCarlo(df, 15, 1000) for i in range(10000)] 1 loops, best of 3: 1.75 s per loop ```
Optimizing a python function with numpy arrays
[ "", "python", "function", "loops", "optimization", "numpy", "" ]
I'm currently trying to port some Scala code to a Python project and I came across the following bit of Scala code: ``` lazy val numNonZero = weights.filter { case (k,w) => w > 0 }.keys ``` `weights` is a really long list of tuples of items and their associated probability weighting. Elements are frequently added and removed from this list but checking how many elements have a non-zero probability is relatively rare. There are a few other rare-but-expensive operations like this in the code I'm porting that seem to benefit greatly from usage of `lazy val`. What is the most idiomatic Python way to do something similar to Scala's `lazy val`?
In Scala, `lazy val` is a final variable that is evaluated once at the time it is first accessed, rather than at the time it is declared. It is essentially a [memoized](http://en.wikipedia.org/wiki/Memoization) function with no arguments. Here's one way you can implement a memoization decorator in Python: ``` from functools import wraps def memoize(f): @wraps(f) def memoized(*args, **kwargs): key = (args, tuple(sorted(kwargs.items()))) # make args hashable result = memoized._cache.get(key, None) if result is None: result = f(*args, **kwargs) memoized._cache[key] = result return result memoized._cache = {} return memoized ``` --- Here's how it can be used. With `property` you can even drop the empty parentheses, just like Scala: ``` >>> class Foo: ... @property ... @memoize ... def my_lazy_val(self): ... print "calculating" ... return "some expensive value" >>> a = Foo() >>> a.my_lazy_val calculating 'some expensive value' >>> a.my_lazy_val 'some expensive value' ```
Essentially, you want to change how attribute access works for `numNonZero`. Python does that with a [descriptor](http://docs.python.org/2/howto/descriptor.html). In particular, take a look at their application to [Properties](http://docs.python.org/2/howto/descriptor.html#properties). With that, you can defer calculation until the attribute is accessed, caching it for later use.
Python equivalent of Scala's lazy val
[ "", "python", "scala", "" ]
I've got a file that I've already opened, and I want to auto-close it. Is there any existing way to wrap the file in a `with` statement?
This works fine: ``` f = open('file') with f: print >> f, "Open" print f.closed # True ``` But this will fail, as `file.__enter__` does not behave like a recursive mutex: ``` f = open('file') with f: print >> f, "Open" with f: print >> f, "Open" print >> f, "This errors, as the file is already closed" ```
Tested in Python 3: ``` >>> f = open('test.txt', 'w') >>> with f: ... f.closed ... f.write('a') False 1 >>> f.closed True ``` So yes, you can. It doesn't reopen files that have already been closed, though: ``` >>> f.closed True >>> with f: ... f.write('a') Traceback (most recent call last): File "<pyshell#10>", line 1, in <module> with f: ValueError: I/O operation on closed file. ``` --- The reason for this is likely the fact that context managers are just objects that can be used with `with` statements; the `as <identifier>` bit in a `with` statement provides an alias for the object supplied to the `with` statement so that you can create the objects in the `with` statement itself without having to put the variable declaration on another line: ``` >>> f = open('test.txt', 'w') >>> with f as ff: ... ff.closed False ``` This makes it easy to use `with` statements on the same object multiple times (such as if the object is set up such that its `__enter__` (re)starts a connection while its `__exit__` closes the connection while allowing it to be reopened), which could be very useful for database transactions and the like.
Can I use a with statement on a file that already exists?
[ "", "python", "" ]
I have a table with a number of dates (some dates will be NaN) and I need to find the oldest date. A row may have DATE_MODIFIED, WITHDRAWN_DATE, SOLD_DATE, STATUS_DATE etc., so for each row there will be a date in one or more of the fields. I want to find the oldest of those and make a new column in the dataframe. Something like this: if I just do one, e.g. DATE_MODIFIED, I get a result, but when I add the second as below ``` table['END_DATE']=min([table['DATE_MODIFIED']],[table['SOLD_DATE']]) ``` I get: ``` ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` For that matter, will this construct work to find the min date, assuming I create correct date columns initially?
Just apply the `min` function along the axis=1. ``` In [1]: import pandas as pd In [2]: df = pd.read_csv('test.cvs', parse_dates=['d1', 'd2', 'd3']) In [3]: df.ix[2, 'd1'] = None In [4]: df.ix[1, 'd2'] = None In [5]: df.ix[4, 'd3'] = None In [6]: df Out[6]: d1 d2 d3 0 2013-02-07 00:00:00 2013-03-08 00:00:00 2013-05-21 00:00:00 1 2013-02-07 00:00:00 NaT 2013-05-21 00:00:00 2 NaT 2013-03-02 00:00:00 2013-05-21 00:00:00 3 2013-02-04 00:00:00 2013-03-08 00:00:00 2013-01-04 00:00:00 4 2013-02-01 00:00:00 2013-03-06 00:00:00 NaT In [7]: df.min(axis=1) Out[7]: 0 2013-02-07 00:00:00 1 2013-02-07 00:00:00 2 2013-03-02 00:00:00 3 2013-01-04 00:00:00 4 2013-02-01 00:00:00 dtype: datetime64[ns] ```
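For intuition, the row-wise behaviour, taking the minimum while skipping missing values, matches this plain-Python filter (dates invented for illustration):

```python
from datetime import date

# One row's date columns, with one value missing (pandas would show NaT).
row = [date(2013, 2, 7), None, date(2013, 5, 21)]

# Drop the missing entries, then take the earliest remaining date.
oldest = min(d for d in row if d is not None)
```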
If `table` is your DataFrame, then use its `min` method on the relevant columns: ``` table['END_DATE'] = table[['DATE_MODIFIED','SOLD_DATE']].min(axis=1) ```
Finding the min date in a Pandas DF row and create new Column
[ "", "python", "pandas", "" ]
I am trying to call a python script in another python script. The directories are different. I tried ``` import subprocess subprocess.call("C:\temp\hello2.py", shell=True) ``` But got nothing. It does not work. I reviewed many forums, but all of them are about calling it when both scripts are in the same directory. I tried having both scripts in the same directory. In this case, I can run the model in Python.exe (through a cmd window) but not in IDLE. In IDLE, I do not even get an error message. I really need to do that; I can't define the other script as a different module, etc. I need to call a script in another script.
Escape backslash (`\`) ``` "C:\\temp\\hello2.py" ``` or use raw string ``` r"C:\temp\hello2.py" ``` --- ``` >>> print "C:\temp\hello2.py" C: emp\hello2.py >>> print "C:\\temp\\hello2.py" C:\temp\hello2.py >>> print r"C:\temp\hello2.py" C:\temp\hello2.py ```
First the backslash thing, and second you should **always** call python scripts with the python interpreter. You never know what are `*.py` files associated with. So: ``` import sys import subprocess subprocess.call([sys.executable, 'C:\\temp\\hello2.py'], shell=True) ```
Call a python script in a python script
[ "", "python", "call", "subprocess", "" ]
I want to make a specific element of each dictionary the first element in a list of dictionaries. How would I do this, or is it even possible given that dictionaries are unordered? I don't want to use .sort() by itself or OrderedDict because both will not bring the desired element to the top of the keys, as the keys are alphabetically ordered. Is there a way to use the lambda keyword in combination with sort in Python 2.7 to do this? Ex: I want `b` to be the first key ``` {'a': 1, 'b': 2, 'c': 3} {'b': 2, 'a': 1, 'c': 3} ``` --- CONTEXT: I have a large set of dictionaries where each dictionary represents a bunch of info about a job. Each dictionary has a code that identifies the job. My superior would like to have this code moved to the top of its dictionary.
Wrap your dict in a list and move your code out. Instead of ``` data = {'code':'this is the code', 'some':'other', 'stuff':'here'} ``` do ``` data = ['this is the code', {'some':'other', 'stuff':'here'}] ```
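Concretely, the code then lives at a fixed position instead of depending on key order:

```python
data = ['this is the code', {'some': 'other', 'stuff': 'here'}]

# Position 0 is always the job code; position 1 holds the other fields.
code, details = data
```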
As others have pointed out, Python's dictionaries are now ordered (by insertion order) since Python 3.6. The cleanest way I know to move a single item to the front of a dictionary is to use `pop()`: ``` d = {'a': 1, 'b': 2, 'c': 3} # original dict d = {'b': d.pop('b'), **d} # new order print(d) # {'b': 2, 'a': 1, 'c': 3} ``` Python [guarantees left-to-right execution order](https://docs.python.org/3.3/reference/expressions.html#evaluation-order), so you know the `pop` takes place before expanding `**d`, which means `'b'` will occur at the front and not later in the `**d` expansion. If you're going to be doing this a bunch and you don't want to write the key twice, or if the key is going to be a computed expression, you can make a helper function: ``` def plop(d, key): return {key: d.pop(key)} d = {'a': 1, 'b': 2, 'c': 3} # original dict d = {**plop(d, 'b'), **d} # new order print(d) # {'b': 2, 'a': 1, 'c': 3} ```
How to make a specific key the first key in a dictionary?
[ "", "python", "sorting", "dictionary", "" ]
I'm still new to SQL and am kind of stumped on this one. I would greatly appreciate any help or advice. I have a table with a value column and an ID column that I then order by the value column in descending order. i.e: ``` Value | ID 12 | A 09 | A 08 | B 08 | C 07 | A 06 | B 03 | B 01 | C ``` I am trying to do two things: 1. For each row, calculate the percent of the total sum for its respective ID. Row 1 would be 12/(12+9+7), row 2: 3/(12+9+7), row 3: 8/(8+6+3), etc. 2. Calculate the running total of the percentage calculated in (1) for each ID. Essentially just a cumulative sum for each ID. The output would look something like this. ``` Value | ID | UnitValue | RunningTotal ------------------------------------- 12 | A | 0.43 | 0.43 09 | A | 0.32 | 0.75 08 | B | 0.47 | 0.47 08 | C | 0.89 | 0.89 07 | A | 0.25 | 1.00 06 | B | 0.35 | 0.82 03 | B | 0.18 | 1.00 01 | C | 0.11 | 1.00 ```
For SQL Server 2008 ``` ;WITH CTE AS ( SELECT Value ,ID ,CONVERT(DECIMAL(10,2),Value/CONVERT(DECIMAL(10,2),SUM(Value) OVER(PARTITION BY [ID]))) AS [Unit Value] FROM Table1 ) SELECT a.Value,a.ID,a.[Unit Value], (SELECT SUM(b.[Unit Value]) FROM CTE b WHERE a.ID = b.ID AND b.[Unit Value] <= a.[Unit Value]) AS [RunningTotal] FROM CTE a ORDER BY a.ID,[RunningTotal] ``` [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!6/56366/3)
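As a cross-check of the arithmetic, independent of any SQL dialect, the same per-ID share and running sum can be computed in a few lines of Python; the values reproduce the expected output table:

```python
rows = [(12, 'A'), (9, 'A'), (8, 'B'), (8, 'C'), (7, 'A'), (6, 'B'), (3, 'B'), (1, 'C')]

# Total Value per ID: the divisor for each row's UnitValue.
totals = {}
for value, id_ in rows:
    totals[id_] = totals.get(id_, 0) + value

result, running = [], {}
for value, id_ in rows:
    unit = value / float(totals[id_])            # share of this ID's total
    running[id_] = running.get(id_, 0.0) + unit  # cumulative share per ID
    result.append((value, id_, round(unit, 2), round(running[id_], 2)))
```

The first row comes out as `(12, 'A', 0.43, 0.43)` and the last as `(1, 'C', 0.11, 1.0)`, matching the table in the question.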
Try to use an `OVER (PARTITION BY ...)` clause.
SQL unit value and cumulative sum by groups
[ "", "sql", "" ]
I'm having great difficulty in finding how to do this. I've been searching for hours but can't come up with a working solution. I have two tables. ``` TableA - Code, TranDate, Amount TableB - Code Amount ``` I wish to Update TableB Amount by deducting the sum of Amount in TableA (where a.code = b.code and a.TranDate > 'A GIVEN DATE' I have to confess to having limited ability with SQL. I am using SQL server 2005.
Easy way with Common Table Expressions: ``` with cte as ( select A.Code, sum( A.Amount ) as Amount from TableA A inner join TableB B on A.Code = B.Code where A.TranDate > *someDate* Group by A.Code ) update B Set B.amount = CTE.amount from TableB B inner join CTE on B.Code = CTE.Code ``` **Edited** due to OP comment: CTE is available from sqlserver 2005: <http://technet.microsoft.com/en-us/library/ms190766(v=sql.90).aspx> Query is tested on sqlserver 2008: <http://sqlfiddle.com/#!3/9e424/1/0>
Try: ``` DECLARE @Date date = 'A GIVEN DATE' UPDATE TableB SET TableB.Amount = TableB.Amount - (SELECT SUM(A.Amount) FROM TableA AS A WHERE A.Code = TableB.Code AND A.TranDate > @Date) ```
updating a field with sum of fields in another table
[ "", "sql", "sql-server-2005", "" ]
I am new to programming. I have hundreds of CSV files in a folder and certain files have the letters DIF in the second column. I want to rewrite the CSV files without those lines in them. I have attempted doing that for one file and have put my attempt below. I also need help getting the program to do that for all the files in my directory. Any help would be appreciated. Thank you ``` import csv reader=csv.reader(open("40_5.csv","r")) for row in reader: if row[1] == 'DIF': csv.writer(open('40_5N.csv', 'w')).writerow(row) ```
I made some changes to your code: ``` import csv import glob import os fns = glob.glob('*.csv') for fn in fns: reader=csv.reader(open(fn,"rb")) with open (os.path.join('out', fn), 'wb') as f: w = csv.writer(f) for row in reader: if not 'DIF' in row: w.writerow(row) ``` The glob command produces a list of all files ending with .csv in the current directory. If you want to give the source directory as an argument to your program, have a look into sys.argv or argparse (especially the latter is very powerful for command line parsing). You also have to be careful when opening a file in 'w' mode: It means truncating the file, i.e. in your loop you would always overwrite the existing file, ending up with only one csv line. The directory 'out' must exist or the script will produce an IOError. Links: [open](http://docs.python.org/2/library/functions.html#open) [sys.argv](http://docs.python.org/2/library/sys.html) [argparse](http://docs.python.org/dev/library/argparse.html) [glob](http://docs.python.org/2/library/glob.html)
[Most sequence types support the **in** or **not in** operators](http://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange), which are much simpler to use to test for values than figuring index positions. ``` for row in reader: if not 'DIF' in row: csv.writer(open('40_5N.csv', 'w')).writerow(row) ```
Reading CSV files and rewriting them without certain rows Python
[ "", "python", "csv", "if-statement", "for-loop", "" ]
I want to save an ID between requests, using the Flask `session` cookie, but I'm getting an `Internal Server Error` as a result when I perform a request. I prototyped a simple Flask app to demonstrate my problem: ``` #!/usr/bin/env python from flask import Flask, session app = Flask(__name__) @app.route('/') def run(): session['tmp'] = 43 return '43' if __name__ == '__main__': app.run() ``` Why can't I store the `session` cookie with this value when I perform the request?
According to [Flask sessions documentation](http://flask.pocoo.org/docs/quickstart/#sessions): > ... > What this means is that the user could look at the contents of your > cookie but not modify it, unless they know the secret key used for > signing. > > In order to use sessions you **have to set a secret key**. Set *secret key*. And you should return string, not int. ``` #!/usr/bin/env python from flask import Flask, session app = Flask(__name__) @app.route('/') def run(): session['tmp'] = 43 return '43' if __name__ == '__main__': app.secret_key = 'A0Zr98j/3yX R~XHH!jmN]LWX/,?RT' app.run() ```
As **@falsetru** mentioned, you have to set a secret key. Before sending the `session` cookie to the user's browser, Flask signs the cookie cryptographically. Note that signing is not encryption: the cookie's contents can still be decoded, just not modified without the secret key. Flask verifies the signature on each request to determine whether the cookie that was sent along with the request headers is valid.

Some methods you may use, all related to the Flask class instance, generally defined as `app`:

* setting the `secret_key` attribute on the `app` object

```
app.secret_key = b'6hc/_gsh,./;2ZZx3c6_s,1//'
```

* setting the key in the `config` dictionary

```
app.config['SECRET_KEY'] = b'6hc/_gsh,./;2ZZx3c6_s,1//'
```

* using an external configuration file for the entire Flask application

```
$ grep pyfile app.py
app.config.from_pyfile('flask_settings.cfg')

$ cat flask_settings.cfg
SECRET_KEY = b'6hc/_gsh,./;2ZZx3c6_s,1//'
```

Here's an example (an adaptation from [this article](https://overiq.com/flask/0.12/sessions-in-flask/)), focused on giving a clearer picture of the Flask `session` cookie on both the client and server sides:

```
from flask import Flask, request, session
import os

app = Flask(__name__)

@app.route('/')
def f_index():
    # request headers, sent on every request
    print("\n\n\n[Client-side]\n", request.headers)

    if 'visits' in session:
        # get the value from the session dict (server-side) and increment it by 1
        session['visits'] = session.get('visits') + 1
    else:
        # first visit: generate the key/value pair {"visits": 1}
        session['visits'] = 1

    # 'session' cookie tracked on every request sent
    print("[Server-side]\n", session)
    return "Total visits:{0}".format(session.get('visits'))

if __name__ == "__main__":
    app.secret_key = os.urandom(24)
    app.run()
```

Here's the output:

```
$ python3 sessions.py
 * Serving Flask app "sessions" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

[Client-side]
 Upgrade-Insecure-Requests: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Connection: keep-alive
Host: 127.0.0.1:5000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5

[Server-side]
 <SecureCookieSession {'visits': 1}>
127.0.0.1 - - [12/Oct/2018 14:27:05] "GET / HTTP/1.1" 200 -

[Client-side]
 Upgrade-Insecure-Requests: 1
Cookie: session=eyJ2aXNpdHMiOjF9.DqKHCQ.MSZ7J-Zicehb6rr8qw43dCVXVNA   # <--- session cookie
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Connection: keep-alive
Host: 127.0.0.1:5000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5

[Server-side]
 <SecureCookieSession {'visits': 2}>
127.0.0.1 - - [12/Oct/2018 14:27:14] "GET / HTTP/1.1" 200 -
```

---

You may have noticed that in the example above I'm using the `os` lib and the `urandom()` function to generate Flask's secret key. From [the official doc](http://flask.pocoo.org/docs/1.0/quickstart/#sessions):

> **How to generate good secret keys**
>
> A secret key should be as random as possible. Your operating system has ways to generate pretty random data based on a cryptographic random generator. Use the following command to quickly generate a value for `Flask.secret_key` (or `SECRET_KEY`):
>
> ```
> $ python -c 'import os; print(os.urandom(16))'
> b'_5#y2L"F4Q8z\n\xec]/'
> ```

---

**PLUS NOTE**

> As you can see, the creators of Flask support the practice of using `os.urandom()` for building the Flask secret key, from older versions of the tool to its latest version.
So it is a mystery why **@joshlsullivan's** answer received downvotes (it deserves an upvote), and why **@MikhailKashkin** writes that using `os.urandom()` is a terrible idea.
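To illustrate the signing idea described above: Flask actually delegates this to the `itsdangerous` library, but the concept can be sketched with the standard library's `hmac` module alone. The payload travels in the clear alongside a MAC derived from the secret key, so tampering is detectable without hiding the contents (key and payload here are made up for the demo):

```python
import hashlib
import hmac

SECRET_KEY = b'dev-only-change-me'  # illustrative; never hard-code a real key

def sign(payload: bytes) -> bytes:
    # append an HMAC-SHA256 tag computed over the payload with the secret key
    mac = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload + b'.' + mac.encode('ascii')

def verify(signed: bytes) -> bool:
    # split off the tag, recompute it, and compare in constant time
    payload, _, mac = signed.rpartition(b'.')
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac.decode('ascii'), expected)

token = sign(b'{"visits": 1}')
print(verify(token))   # True: untouched token passes

payload, _, mac = token.rpartition(b'.')
forged = payload.replace(b'1', b'2') + b'.' + mac  # tamper with the payload
print(verify(forged))  # False: tag no longer matches
```

This is why the payload of a real Flask session cookie can be base64-decoded and read, yet cannot be altered without knowing the secret key.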
Internal Server Error when using Flask session
[ "", "python", "session", "cookies", "flask", "" ]