Dependant subqueries in Django
38,596,189
<p>I've been bashing my head against the wall trying to make this query using the Django ORM; I searched everywhere but couldn't find an answer. This is my model.</p> <pre><code>class Decision(models.Model): ACTION_CHOICES = [('include', _('Include')), ('exclude', _('Exclude'))] user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) date_taken = models.DateTimeField(_('date taken'), default=timezone.now) action = models.CharField(max_length=7, choices=ACTION_CHOICES) content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() content_object = GenericForeignKey('content_type', 'object_id') project = models.ForeignKey('projects.Project', on_delete=models.CASCADE) is_permanent = models.BooleanField(_('is permanent'), default=False) </code></pre> <p>What I want to query are the latest permanent decisions for every content_object that a user has made in a project. Since this query is called repeatedly in the application, iterating through a simpler QuerySet like <code>Decision.objects.filter(user_id=user.pk, project_id=project.pk)</code> to get the final result is not an option. So far I have solved it using a RawQuerySet like this one.</p> <pre><code>Decision.objects.raw("SELECT decision.id, decision.content_type_id, decision.object_id " "FROM canvasblocks_decision AS decision " "WHERE decision.date_taken = (SELECT max(last_decision.date_taken) " " FROM canvasblocks_decision AS last_decision " " WHERE last_decision.object_id = decision.object_id AND " " last_decision.content_type_id = decision.content_type_id AND " " last_decision.project_id = %s AND decision.user_id = %s) ", [project.pk, user.pk]) </code></pre> <p>But I don't like this solution, because I lose all the Django ORM power, and since I have to filter this query again to check actions, that loss of power is painful: I have to turn this into a subquery, with all the complexity that brings. 
So, do you know a better way to do this? BTW, I'm using Django 1.9.4. Thanks in advance.</p>
0
2016-07-26T17:19:16Z
38,597,159
<p>If the dataset is small, you could make multiple passes, but that is not a very good idea.</p> <p>A couple of options, depending on the database you are using:</p> <p>1) You can change the SQL to use a window function such as <code>rank</code> or <code>dense_rank</code> to make the query much simpler.</p> <pre><code>select decision.id, decision.content_type_id, decision.object_id, first_value(decision.date_taken) over (partition by decision.content_type_id, decision.object_id order by decision.date_taken desc) from canvasblocks_decision AS decision ... </code></pre> <p>2) You could put the same logic in an annotation to get the rank. That way you keep everything your Django objects give you, and you get this extra column.</p> <pre><code>from django.db.models.expressions import RawSQL Decision.objects.filter().annotate(rank=RawSQL("RANK() OVER (PARTITION BY content_type_id, object_id ORDER BY date_taken DESC)", [])) </code></pre> <p>This might help: <a href="http://stackoverflow.com/a/35948419/237939">http://stackoverflow.com/a/35948419/237939</a></p>
0
2016-07-26T18:16:39Z
[ "python", "sql", "django-queryset" ]
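The correlated-subquery shape of the raw query above (keep the row whose `date_taken` equals the maximum within its group) can be sketched outside Django with the `sqlite3` module. The table and column names here are simplified stand-ins, not the original schema:

```python
import sqlite3

# Minimal stand-in for the "latest decision per content object" pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE decision (id INTEGER, object_id INTEGER, date_taken TEXT)")
conn.executemany(
    "INSERT INTO decision VALUES (?, ?, ?)",
    [
        (1, 100, "2016-07-01"),
        (2, 100, "2016-07-05"),  # latest row for object 100
        (3, 200, "2016-07-02"),  # latest (only) row for object 200
    ],
)

# Correlated subquery: keep only the row whose date_taken equals the
# maximum date_taken within its own object_id group.
latest = conn.execute(
    "SELECT id, object_id FROM decision AS d "
    "WHERE d.date_taken = (SELECT max(date_taken) FROM decision "
    "                      WHERE object_id = d.object_id) "
    "ORDER BY object_id"
).fetchall()
print(latest)  # [(2, 100), (3, 200)]
```

ISO-formatted date strings compare correctly as text, which is why the `max()` subquery works here without a date type.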
Can I use annotation in django model like this and how?
38,596,356
<p>models.py</p> <pre><code>matches = models.ManyToManyField('Matches') ... def get_rating(self): from django.db.models import Sum, Value, IntegerField from django.db.models.functions import Coalesce return self.matches.annotate(rating=Coalesce(Sum('matches__rating_difference'), Value(0), output_field=IntegerField()) + Value(1000)) </code></pre> <p><code>rating_difference</code> contains the player's rating points.</p> <p><code>get_rating</code> should return the sum of points (the player's rating).</p> <p>template.html</p> <pre><code>{{ player.matches.get_rating.(?)rating }} </code></pre>
1
2016-07-26T17:28:39Z
38,612,363
<p><strong>It works.</strong></p> <p>models.py</p> <pre><code>matches = models.ManyToManyField('Matches') ... def get_rating(self): from django.db.models import Sum, Value, IntegerField from django.db.models.functions import Coalesce return self.matches.aggregate(rating=Coalesce(Sum('rating_difference'), Value(0), output_field=IntegerField()) + Value(1000)) </code></pre> <p>template.html (<code>aggregate</code> returns a dict whose key is <code>rating</code>)</p> <pre><code>{{ player.get_rating.rating }} </code></pre>
0
2016-07-27T12:00:49Z
[ "python", "django", "django-models", "django-templates" ]
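The aggregate in the answer boils down to SQL's `COALESCE(SUM(...), 0) + 1000`. A small `sqlite3` sketch (the table name is illustrative) shows why the `Coalesce` matters when there are no rows:

```python
import sqlite3

# What the Django aggregate above computes, in plain SQL:
# COALESCE(SUM(rating_difference), 0) + 1000.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE match (rating_difference INTEGER)")
conn.executemany("INSERT INTO match VALUES (?)", [(25,), (-10,), (5,)])

rating = conn.execute(
    "SELECT COALESCE(SUM(rating_difference), 0) + 1000 FROM match"
).fetchone()[0]
print(rating)  # 1020

# With no rows SUM() yields NULL; COALESCE turns that into the 1000 base rating.
conn.execute("DELETE FROM match")
empty_rating = conn.execute(
    "SELECT COALESCE(SUM(rating_difference), 0) + 1000 FROM match"
).fetchone()[0]
print(empty_rating)  # 1000
```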
Python usersite directory points to file that doesn't exist
38,596,404
<p>I have downloaded and installed Python modules before with no problem. Recently, however, none of the modules I have installed (they did install correctly) have been importable.</p> <p>The modules installed where I expected them to - C:\Python27\lib\site-packages - and I have a .pth file created in that directory that points to each of the packages I want to be able to import.</p> <p>The problem is that my user site directory points to a path that doesn't exist: C:\Users\cam\AppData\Roaming\Python\Python27\site-packages</p> <p>I can work around this by including the code</p> <pre><code>import site site.addsitedir("C:\Python27\lib\site-packages") </code></pre> <p>However, I'd much rather just modify my site directories to point to that directory as well. I've looked around but have not found any way to do this. Any suggestions?</p>
0
2016-07-26T17:32:10Z
38,596,660
<p>Add <code>PYTHONPATH</code> as a system environment variable, and add all the desired paths as its value.</p>
0
2016-07-26T17:48:02Z
[ "python", "windows" ]
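As a sketch of what a `PYTHONPATH` entry (or the `site.addsitedir` call from the question) achieves, this writes a throwaway module into a temporary directory and makes it importable. The module name and value are invented for illustration:

```python
import os
import site
import sys
import tempfile

# Write a throwaway module into a temp directory, then register that
# directory the same way a .pth/PYTHONPATH entry would.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "mymodule.py"), "w") as f:
    f.write("VALUE = 42\n")

site.addsitedir(pkg_dir)   # appends pkg_dir to sys.path (and processes .pth files there)
assert pkg_dir in sys.path

import mymodule            # now importable
print(mymodule.VALUE)      # 42
```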
Python: how to retain the file extension when renaming files with os?
38,596,511
<p>Say I have a folder with n <code>csv</code> files which I want to rename. The new filename is going to be something like <code>ABxxxx</code>, with <code>xxxx</code> being a progressive number from 1 to 1000.</p> <p><strong>While doing this, how can I retain the original file extension, which is <code>csv</code>?</strong></p> <p>What I have done so far has changed the filenames but has pruned away the extension:</p> <pre><code>directory=r'C:\Me\MyDir' subdir=[x[0] for x in os.walk(directory)] subdir.pop(0) for i in subdir: temp_dir=r''+i os.chdir(temp_dir) a='A' b='B' for file in glob.glob("*.csv"): for i in range(1,1001): newname=a+b+i os.rename(file,newname) </code></pre>
0
2016-07-26T17:38:38Z
38,596,535
<p>You can simply append <code>'.csv'</code> to your new filename:</p> <pre><code>os.rename(file, newname + '.csv') </code></pre> <p>In general (for any file type), a better way to do this would be to get the existing extension first using <a href="https://docs.python.org/2/library/os.path.html#os.path.splitext" rel="nofollow"><code>os.path.splitext</code></a> and then append that to the new filename.</p> <pre><code>oldext = os.path.splitext(file)[1] os.rename(file, newname + oldext) </code></pre>
3
2016-07-26T17:39:51Z
[ "python", "csv", "file-extension", "file-rename" ]
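The `os.path.splitext` behaviour the answer relies on can be checked in isolation (file names here are invented):

```python
import os.path

# os.path.splitext splits off the last extension, dot included.
root, ext = os.path.splitext("data_2016.csv")
print(root, ext)   # data_2016 .csv

# Rebuild a new name while keeping whatever extension the file had:
newname = "AB0001" + ext
print(newname)     # AB0001.csv

# Files with no extension, and dotfiles, come back with an empty extension:
print(os.path.splitext("README"))   # ('README', '')
print(os.path.splitext(".bashrc"))  # ('.bashrc', '')
```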
Python: how to retain the file extension when renaming files with os?
38,596,511
<p>Say I have a folder with n <code>csv</code> files which I want to rename. The new filename is going to be something like <code>ABxxxx</code>, with <code>xxxx</code> being a progressive number from 1 to 1000.</p> <p><strong>While doing this, how can I retain the original file extension, which is <code>csv</code>?</strong></p> <p>What I have done so far has changed the filenames but has pruned away the extension:</p> <pre><code>directory=r'C:\Me\MyDir' subdir=[x[0] for x in os.walk(directory)] subdir.pop(0) for i in subdir: temp_dir=r''+i os.chdir(temp_dir) a='A' b='B' for file in glob.glob("*.csv"): for i in range(1,1001): newname=a+b+i os.rename(file,newname) </code></pre>
0
2016-07-26T17:38:38Z
38,596,807
<p>Use <code>os.path.splitext</code> to build a tuple of <code>(basepath, extension)</code> and <code>enumerate</code> to generate your "uniquifier". Now you can just use vanilla string formatting to glue it together</p> <pre><code>for i in subdir: temp_dir=r''+i os.chdir(temp_dir) a='A' b='B' for idx, file in enumerate(glob.glob("*.csv")): os.rename(file,'{0}{2}{1}'.format(*(os.path.splitext(file) + (idx,)))) </code></pre>
1
2016-07-26T17:56:42Z
[ "python", "csv", "file-extension", "file-rename" ]
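A dry-run version of that loop (no `os.rename`, invented file names) shows the `enumerate`-plus-`splitext` naming; a zero-padded counter keeps the generated names sortable:

```python
import os.path

files = ["alpha.csv", "beta.csv", "gamma.csv"]
renames = []
for idx, name in enumerate(files, start=1):
    root, ext = os.path.splitext(name)
    # Zero-pad the counter so AB0002 sorts after AB0001 and before AB0010.
    renames.append((name, "AB{0:04d}{1}".format(idx, ext)))
print(renames)
# [('alpha.csv', 'AB0001.csv'), ('beta.csv', 'AB0002.csv'), ('gamma.csv', 'AB0003.csv')]
```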
Pop value from dictionary stored in Flask session
38,596,514
<p>Here is my session variable:</p> <pre><code>session['data'] = json.loads(request.data.decode()) print(session.pop('data', None)) </code></pre> <p>The print looks like this:</p> <pre><code>{u'mark': u'all', u'chr': u'1A', u'distmin': 5, u'distmax': 10} </code></pre> <p>My question is: how do I subset this dict?</p> <pre><code>print(session.pop('data["mark"]', None)) </code></pre> <p>This is not working; it returns <code>None</code>.</p>
0
2016-07-26T17:38:55Z
38,598,101
<p>That isn't how Python's mappings work. <code>data["mark"]</code> is a valid key. To access nested mappings, you need to specify the keys separately.</p> <pre><code>session['data']['mark'] = 'spam' </code></pre> <p>The key used for <code>pop</code> should match that for <code>__getitem__</code>. Just as you don't use <code>session['data["mark"]']</code> to access the dictionary associated with the <code>data</code> key, you wouldn't remove keys the same way. The syntax you're looking for is</p> <pre><code>session['data'].pop('mark', None) </code></pre> <p>Mark the session as modified after changing a nested object like this. The session can only do this automatically for direct changes.</p> <pre><code>session.modified = True </code></pre>
3
2016-07-26T19:13:25Z
[ "python", "session", "flask" ]
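Because the Flask session behaves like a dict, the point of the answer can be reproduced with a plain dictionary holding the same payload as the question:

```python
# A plain dict standing in for the Flask session.
session = {"data": {"mark": "all", "chr": "1A", "distmin": 5, "distmax": 10}}

# 'data["mark"]' is just a (nonexistent) literal key, so the default comes back:
print(session.pop('data["mark"]', None))   # None

# Index into the nested dict first, then pop the inner key:
mark = session["data"].pop("mark", None)
print(mark)              # all
print(session["data"])   # {'chr': '1A', 'distmin': 5, 'distmax': 10}
```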
Removing certain rows in Pandas Dataframe by string format
38,596,519
<p>I have a Pandas dataframe with a column called Zip Code. The column is an object data type and some rows are not in proper zip code format. I would like to remove rows that do not contain a #####-format zip code.</p> <pre><code> Subscriber Type Zip Code 0 Subscriber 94040 1 Customer 11231 2 Customer 11231 3 Customer 32 4 Customer nil </code></pre> <p>What would be an easy way to do so? Is there a way to compare the format of the records, something like <code>df.drop(df['Zip Code'] != #####)</code>?</p>
2
2016-07-26T17:39:07Z
38,596,575
<p>try this:</p> <pre><code>In [23]: df = df[df['Zip Code'].str.contains(r'^\d{5}$')] In [24]: df Out[24]: Subscriber Type Zip Code 0 Subscriber 94040 1 Customer 11231 2 Customer 11231 </code></pre> <p>Explanation:</p> <pre><code>In [22]: df['Zip Code'].str.contains(r'^\d{5}$') Out[22]: 0 True 1 True 2 True 3 False 4 False Name: Zip Code, dtype: bool </code></pre> <p>PS thanks to <a href="http://stackoverflow.com/questions/38596519/removing-certain-rows-in-pandas-dataframe-by-string-format/38596575?noredirect=1#comment64581535_38596575">@Alberto Garcia-Raboso</a> for the refined RegEx!</p>
3
2016-07-26T17:41:50Z
[ "python", "pandas", "dataframe" ]
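The same anchored five-digit test can be checked with the `re` module alone, using the sample values from the question:

```python
import re

# Five digits, anchored at both ends -- the pattern the answer passes
# to .str.contains.
pattern = re.compile(r"^\d{5}$")
zips = ["94040", "11231", "11231", "32", "nil"]

mask = [bool(pattern.match(z)) for z in zips]
print(mask)   # [True, True, True, False, False]

valid = [z for z, keep in zip(zips, mask) if keep]
print(valid)  # ['94040', '11231', '11231']
```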
Sympy algebraic solution to series summation
38,596,569
<p>I am trying to solve for <code>C</code> in the following equation</p> <p><a href="http://i.stack.imgur.com/zcTYW.gif" rel="nofollow"><img src="http://i.stack.imgur.com/zcTYW.gif" alt="enter image description here"></a></p> <p>I can do this with <code>sympy</code> for an enumerated number of <code>x</code>'s, e.g. <code>x0, x1, ..., x4</code>, but cannot seem to figure out how to do this for <code>i=0</code> to <code>t</code>. E.g., for a limited number:</p> <pre><code>from sympy import summation, symbols, solve x0, x1, x2, x3, x4, alpha, C = symbols('x0, x1, x2, x3, x4, alpha, C') e1 = ((x0 + alpha * x1 + alpha**(2) * x2 + alpha**(3) * x3 + alpha**(4) * x4) / (1 + alpha + alpha**(2) + alpha**(3) + alpha**(4))) e2 = (x3 + alpha * x4) / (1 + alpha) rhs = (x0 + alpha * x1 + alpha**(2) * x2) / (1 + alpha + alpha**(2)) soln_C = solve(e1 - C*e2 - rhs, C) </code></pre> <p>Any insight would be much appreciated.</p>
2
2016-07-26T17:41:34Z
38,597,392
<p>Thanks to @bryans for pointing me in the direction of <code>Sum</code>. Elaborating on his comment, here is one solution that seems to work. As I am fairly new to <code>sympy</code> if anyone has a more concise approach please share.</p> <pre><code>from sympy import summation, symbols, solve, Function, Sum alpha, C, t, i = symbols('alpha, C, t, i') x = Function('x') s1 = Sum(alpha**i * x(t-i), (i, 0, t)) / Sum(alpha**i, (i, 0, t)) s2 = Sum(alpha**i * x(t-3-i), (i, 0, t-3)) / Sum(alpha**i, (i, 0, t-3)) rhs = (x(0) + alpha * x(1) + alpha**(2) * x(2)) / (1 + alpha + alpha**(2)) </code></pre> <p><a href="http://i.stack.imgur.com/orvqB.png" rel="nofollow"><img src="http://i.stack.imgur.com/orvqB.png" alt="enter image description here"></a></p> <pre><code>soln_C = solve(s1 - C*s2 - rhs, C) </code></pre> <p><a href="http://i.stack.imgur.com/yl5Ww.png" rel="nofollow"><img src="http://i.stack.imgur.com/yl5Ww.png" alt="enter image description here"></a></p>
1
2016-07-26T18:29:52Z
[ "python", "sympy" ]
Using Python and elasticsearch, how can I loop through the returned JSON object?
38,596,580
<p>My code is as follows:</p> <pre><code>import json from elasticsearch import Elasticsearch es = Elasticsearch() resp = es.search(index="mynewcontacts", body={"query": {"match_all": {}}}) response = json.dumps(resp) data = json.loads(response) #print data["hits"]["hits"][0]["_source"]["email"] for row in data: print row["hits"]["hits"][0]["_source"]["email"] return "OK" </code></pre> <p>which produces this truncated (for convenience) JSON:</p> <pre><code>{"timed_out": false, "took": 1, "_shards": {"successful": 5, "total": 5, "failed": 0}, "hits": {"max_score": 1.0, "total": 7, "hits": [{"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "sharon.zhuo@xxxxx.com.cn", "position": "Sr.Researcher", "last": "Zhuo", "first": "Sharon", "company": "Tabridge Executive Search"}, "_id": "AVYmLMlKJVSAh7zyC0xf"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "andrew.springthorpe@xxxxx.gr.jp", "position": "Vice President", "last": "Springthorpe", "first": "Andrew", "company": "SBC Group"}, "_id": "AVYmLMlRJVSAh7zyC0xg"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mjbxxx@xxx.com", "position": "Financial Advisor", "last": "Bell", "first": "Margaret Jacqueline", "company": "Streamline"}, "_id": "AVYmLMlXJVSAh7zyC0xh"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "kokaixxx@xxxx.com", "position": "Technical Solutions Manager MMS North Asia", "last": "Okai", "first": "Kensuke", "company": "Criteo"}, "_id": "AVYmLMlfJVSAh7zyC0xi"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mizuxxxxto@zszs.com", "position": "Sr. 
Strategic Account Executive", "last": "Kato", "first": "Mizuto", "company": "Twitter"}, "_id": "AVYmLMlkJVSAh7zyC0xj"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "abc@example.com", "position": "Design Manager", "last": "Okada", "first": "Kengo", "company": "ON Semiconductor"}, "_id": "AVYmLMlpJVSAh7zyC0xk"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "007@example.com", "position": "Legal Counsel", "last": "Lei", "first": "Yangzi (Karen)", "company": "Samsung China Semiconductor"}, "_id": "AVYmLMkUJVSAh7zyC0xe"}]}} </code></pre> <p>When I try:</p> <pre><code>print data["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>it prints the first email fine but when I attempt the loop with </p> <pre><code>for row in data: print row["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>I receive an error:</p> <pre><code>TypeError: string indices must be integers </code></pre> <p>Please can somebody suggest how I can iterate through the items correctly? Many thanks!</p>
0
2016-07-26T17:42:08Z
38,596,741
<p>I could be wrong, but it looks like you might not be starting the for loop on the correct JSON item. Try:</p> <pre><code>for row in data['hits']['hits']: # Rest of loop here. </code></pre>
1
2016-07-26T17:53:14Z
[ "python", "json", "elasticsearch" ]
Using Python and elasticsearch, how can I loop through the returned JSON object?
38,596,580
<p>My code is as follows:</p> <pre><code>import json from elasticsearch import Elasticsearch es = Elasticsearch() resp = es.search(index="mynewcontacts", body={"query": {"match_all": {}}}) response = json.dumps(resp) data = json.loads(response) #print data["hits"]["hits"][0]["_source"]["email"] for row in data: print row["hits"]["hits"][0]["_source"]["email"] return "OK" </code></pre> <p>which produces this truncated (for convenience) JSON:</p> <pre><code>{"timed_out": false, "took": 1, "_shards": {"successful": 5, "total": 5, "failed": 0}, "hits": {"max_score": 1.0, "total": 7, "hits": [{"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "sharon.zhuo@xxxxx.com.cn", "position": "Sr.Researcher", "last": "Zhuo", "first": "Sharon", "company": "Tabridge Executive Search"}, "_id": "AVYmLMlKJVSAh7zyC0xf"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "andrew.springthorpe@xxxxx.gr.jp", "position": "Vice President", "last": "Springthorpe", "first": "Andrew", "company": "SBC Group"}, "_id": "AVYmLMlRJVSAh7zyC0xg"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mjbxxx@xxx.com", "position": "Financial Advisor", "last": "Bell", "first": "Margaret Jacqueline", "company": "Streamline"}, "_id": "AVYmLMlXJVSAh7zyC0xh"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "kokaixxx@xxxx.com", "position": "Technical Solutions Manager MMS North Asia", "last": "Okai", "first": "Kensuke", "company": "Criteo"}, "_id": "AVYmLMlfJVSAh7zyC0xi"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mizuxxxxto@zszs.com", "position": "Sr. 
Strategic Account Executive", "last": "Kato", "first": "Mizuto", "company": "Twitter"}, "_id": "AVYmLMlkJVSAh7zyC0xj"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "abc@example.com", "position": "Design Manager", "last": "Okada", "first": "Kengo", "company": "ON Semiconductor"}, "_id": "AVYmLMlpJVSAh7zyC0xk"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "007@example.com", "position": "Legal Counsel", "last": "Lei", "first": "Yangzi (Karen)", "company": "Samsung China Semiconductor"}, "_id": "AVYmLMkUJVSAh7zyC0xe"}]}} </code></pre> <p>When I try:</p> <pre><code>print data["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>it prints the first email fine but when I attempt the loop with </p> <pre><code>for row in data: print row["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>I receive an error:</p> <pre><code>TypeError: string indices must be integers </code></pre> <p>Please can somebody suggest how I can iterate through the items correctly? Many thanks!</p>
0
2016-07-26T17:42:08Z
38,596,750
<p>Your retrieved response <code>data</code> is a Python dictionary - if you make a <code>for</code> loop over it, it will yield the dictionary keys - in this case, the strings <code>timed_out</code>, <code>took</code>, <code>_shards</code>, etc.</p> <p>Apparently you want to iterate over the list at <code>data["hits"]["hits"]</code> in your response data. That is a list.</p> <p>So, just do</p> <pre><code>for row in data["hits"]["hits"]: print(row["_source"]["email"]) </code></pre>
0
2016-07-26T17:53:34Z
[ "python", "json", "elasticsearch" ]
Using Python and elasticsearch, how can I loop through the returned JSON object?
38,596,580
<p>My code is as follows:</p> <pre><code>import json from elasticsearch import Elasticsearch es = Elasticsearch() resp = es.search(index="mynewcontacts", body={"query": {"match_all": {}}}) response = json.dumps(resp) data = json.loads(response) #print data["hits"]["hits"][0]["_source"]["email"] for row in data: print row["hits"]["hits"][0]["_source"]["email"] return "OK" </code></pre> <p>which produces this truncated (for convenience) JSON:</p> <pre><code>{"timed_out": false, "took": 1, "_shards": {"successful": 5, "total": 5, "failed": 0}, "hits": {"max_score": 1.0, "total": 7, "hits": [{"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "sharon.zhuo@xxxxx.com.cn", "position": "Sr.Researcher", "last": "Zhuo", "first": "Sharon", "company": "Tabridge Executive Search"}, "_id": "AVYmLMlKJVSAh7zyC0xf"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "andrew.springthorpe@xxxxx.gr.jp", "position": "Vice President", "last": "Springthorpe", "first": "Andrew", "company": "SBC Group"}, "_id": "AVYmLMlRJVSAh7zyC0xg"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mjbxxx@xxx.com", "position": "Financial Advisor", "last": "Bell", "first": "Margaret Jacqueline", "company": "Streamline"}, "_id": "AVYmLMlXJVSAh7zyC0xh"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "kokaixxx@xxxx.com", "position": "Technical Solutions Manager MMS North Asia", "last": "Okai", "first": "Kensuke", "company": "Criteo"}, "_id": "AVYmLMlfJVSAh7zyC0xi"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "mizuxxxxto@zszs.com", "position": "Sr. 
Strategic Account Executive", "last": "Kato", "first": "Mizuto", "company": "Twitter"}, "_id": "AVYmLMlkJVSAh7zyC0xj"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "abc@example.com", "position": "Design Manager", "last": "Okada", "first": "Kengo", "company": "ON Semiconductor"}, "_id": "AVYmLMlpJVSAh7zyC0xk"}, {"_index": "mynewcontacts", "_type": "contact", "_score": 1.0, "_source": {"email": "007@example.com", "position": "Legal Counsel", "last": "Lei", "first": "Yangzi (Karen)", "company": "Samsung China Semiconductor"}, "_id": "AVYmLMkUJVSAh7zyC0xe"}]}} </code></pre> <p>When I try:</p> <pre><code>print data["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>it prints the first email fine but when I attempt the loop with </p> <pre><code>for row in data: print row["hits"]["hits"][0]["_source"]["email"] </code></pre> <p>I receive an error:</p> <pre><code>TypeError: string indices must be integers </code></pre> <p>Please can somebody suggest how I can iterate through the items correctly? Many thanks!</p>
0
2016-07-26T17:42:08Z
38,597,013
<p>What you're doing is looping through keys of the dictionary. To print each email in the response you'd do this:</p> <pre><code>for row in data["hits"]["hits"]: print row["_source"]["email"] </code></pre> <p>Also converting to json isn't necessary. This should accomplish what you're looking to do:</p> <pre><code>from elasticsearch import Elasticsearch es = Elasticsearch() resp = es.search(index="mynewcontacts", body={"query": {"match_all": {}}}) for row in resp["hits"]["hits"]: print row["_source"]["email"] return "OK" </code></pre>
2
2016-07-26T18:08:25Z
[ "python", "json", "elasticsearch" ]
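A trimmed, hard-coded copy of the response shape (ids and emails invented) shows why iterating `data` directly yields top-level key strings, while `data["hits"]["hits"]` yields the individual hit records:

```python
# A trimmed stand-in for the Elasticsearch response dictionary.
data = {
    "took": 1,
    "timed_out": False,
    "hits": {
        "total": 2,
        "hits": [
            {"_id": "a1", "_source": {"email": "first@example.com"}},
            {"_id": "a2", "_source": {"email": "second@example.com"}},
        ],
    },
}

# Iterating a dict yields its keys, which are strings -- hence the
# "string indices must be integers" error in the question.
top_keys = sorted(data)
print(top_keys)   # ['hits', 'timed_out', 'took']

emails = [row["_source"]["email"] for row in data["hits"]["hits"]]
print(emails)     # ['first@example.com', 'second@example.com']
```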
numpy concatenate not appending new array to empty multidimensional array
38,596,674
<p>I bet I am doing something very simple wrong. I want to start with an empty 2D numpy array and append arrays to it (with dimensions 1 row by 4 columns).</p> <pre><code>open_cost_mat_train = np.matrix([]) for i in xrange(10): open_cost_mat = np.array([i,0,0,0]) open_cost_mat_train = np.vstack([open_cost_mat_train,open_cost_mat]) </code></pre> <p>my error trace is:</p> <pre><code> File "/Users/me/anaconda/lib/python2.7/site-packages/numpy/core/shape_base.py", line 230, in vstack return _nx.concatenate([atleast_2d(_m) for _m in tup], 0) ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>What am I doing wrong? I have tried append, concatenate, defining the empty 2D array as <code>[[]]</code>, as <code>[]</code>, <code>array([])</code> and many others.</p>
2
2016-07-26T17:49:04Z
38,596,760
<p>You need to reshape your original matrix so that the number of columns match the appended arrays:</p> <pre><code>open_cost_mat_train = np.matrix([]).reshape((0,4)) </code></pre> <p>After which, it gives:</p> <pre><code>open_cost_mat_train # matrix([[ 0., 0., 0., 0.], # [ 1., 0., 0., 0.], # [ 2., 0., 0., 0.], # [ 3., 0., 0., 0.], # [ 4., 0., 0., 0.], # [ 5., 0., 0., 0.], # [ 6., 0., 0., 0.], # [ 7., 0., 0., 0.], # [ 8., 0., 0., 0.], # [ 9., 0., 0., 0.]]) </code></pre>
3
2016-07-26T17:53:59Z
[ "python", "arrays", "numpy", "multidimensional-array", "concatenation" ]
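The same shape fix can be written with `np.empty`; a short sketch (three rows instead of ten, plain arrays rather than `np.matrix`):

```python
import numpy as np

# Start with zero rows but the right number of columns, so vstack's
# dimension check passes on the first append.
out = np.empty((0, 4))
for i in range(3):
    row = np.array([i, 0, 0, 0])
    out = np.vstack([out, row])   # vstack promotes the 1-D row to shape (1, 4)

print(out.shape)           # (3, 4)
print(out[:, 0].tolist())  # [0.0, 1.0, 2.0]
```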
numpy concatenate not appending new array to empty multidimensional array
38,596,674
<p>I bet I am doing something very simple wrong. I want to start with an empty 2D numpy array and append arrays to it (with dimensions 1 row by 4 columns).</p> <pre><code>open_cost_mat_train = np.matrix([]) for i in xrange(10): open_cost_mat = np.array([i,0,0,0]) open_cost_mat_train = np.vstack([open_cost_mat_train,open_cost_mat]) </code></pre> <p>my error trace is:</p> <pre><code> File "/Users/me/anaconda/lib/python2.7/site-packages/numpy/core/shape_base.py", line 230, in vstack return _nx.concatenate([atleast_2d(_m) for _m in tup], 0) ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>What am I doing wrong? I have tried append, concatenate, defining the empty 2D array as <code>[[]]</code>, as <code>[]</code>, <code>array([])</code> and many others.</p>
2
2016-07-26T17:49:04Z
38,598,378
<p>If <code>open_cost_mat_train</code> is large I would encourage you to replace the for loop by a <strong>vectorized algorithm</strong>. I will use the following functions to show how efficiency is improved by vectorizing loops:</p> <pre><code>def fvstack(): import numpy as np np.random.seed(100) ocmt = np.matrix([]).reshape((0, 4)) for i in xrange(10): x = np.random.random() ocm = np.array([x, x + 1, 10*x, x/10]) ocmt = np.vstack([ocmt, ocm]) return ocmt def fshape(): import numpy as np from numpy.matlib import empty np.random.seed(100) ocmt = empty((10, 4)) for i in xrange(ocmt.shape[0]): ocmt[i, 0] = np.random.random() ocmt[:, 1] = ocmt[:, 0] + 1 ocmt[:, 2] = 10*ocmt[:, 0] ocmt[:, 3] = ocmt[:, 0]/10 return ocmt </code></pre> <p>I've assumed that the values that populate the first column of <code>ocmt</code> (shorthand for <code>open_cost_mat_train</code>) are obtained from a for loop, and the remaining columns are a function of the first column, as stated in your comments to my original answer. 
As real costs data are not available, in the forthcoming example the values in the first column are random numbers, and the second, third and fourth columns are the functions <code>x + 1</code>, <code>10*x</code> and <code>x/10</code>, respectively, where <code>x</code> is the corresponding value in the first column.</p> <pre><code>In [594]: fvstack() Out[594]: matrix([[ 5.43404942e-01, 1.54340494e+00, 5.43404942e+00, 5.43404942e-02], [ 2.78369385e-01, 1.27836939e+00, 2.78369385e+00, 2.78369385e-02], [ 4.24517591e-01, 1.42451759e+00, 4.24517591e+00, 4.24517591e-02], [ 8.44776132e-01, 1.84477613e+00, 8.44776132e+00, 8.44776132e-02], [ 4.71885619e-03, 1.00471886e+00, 4.71885619e-02, 4.71885619e-04], [ 1.21569121e-01, 1.12156912e+00, 1.21569121e+00, 1.21569121e-02], [ 6.70749085e-01, 1.67074908e+00, 6.70749085e+00, 6.70749085e-02], [ 8.25852755e-01, 1.82585276e+00, 8.25852755e+00, 8.25852755e-02], [ 1.36706590e-01, 1.13670659e+00, 1.36706590e+00, 1.36706590e-02], [ 5.75093329e-01, 1.57509333e+00, 5.75093329e+00, 5.75093329e-02]]) In [595]: np.allclose(fvstack(), fshape()) Out[595]: True </code></pre> <p>In order for the calls to <code>fvstack()</code> and <code>fshape()</code> produce the same results, the random number generator is initialized in both functions through <code>np.random.seed(100)</code>. 
Notice that the equality test has been performed using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html" rel="nofollow"><code>numpy.allclose</code></a> instead of <code>fvstack() == fshape()</code> to avoid the round-off errors associated with floating point arithmetic.</p> <p>As for efficiency, the following interactive session shows that initializing <code>ocmt</code> with its final shape is significantly faster than repeatedly stacking rows:</p> <pre><code>In [596]: import timeit In [597]: timeit.timeit('fvstack()', setup="from __main__ import fvstack", number=10000) Out[597]: 1.4884241055042366 In [598]: timeit.timeit('fshape()', setup="from __main__ import fshape", number=10000) Out[598]: 0.8819408006311278 </code></pre>
2
2016-07-26T19:29:38Z
[ "python", "arrays", "numpy", "multidimensional-array", "concatenation" ]
Tkinter text not displayed in order
38,596,720
<p>I wanted to display the progress status of a file analysis by a few independent routines in a sequential manner, as each analysis routine takes some time. The attached demo code shows the problem I had: the display only gets updated after the analyses are over. I'd appreciate knowing why the code is not doing what is intended and how to fix it. Note: routine1 &amp; routine2 are in 2 separate .py files.</p> <pre><code>from Tkinter import * import tkFileDialog import tkMessageBox import routine1 import routine2 import sys class Analysis(Frame): def __init__(self): Frame.__init__(self) self.text = Text(self, height=20, width=60) # 20 characters self.pack() scroll=Scrollbar(self) scroll.pack(side=RIGHT, fill=Y) scroll.config(command=self.text.yview) self.text.config(background='white') self.text.pack(expand=YES, fill=BOTH) def selectfile(self): fname = tkFileDialog.askopenfilename() self.text.delete(1.0, END) self.text.insert(INSERT, ' working on routine 1: \n') routine1.main(fname) self.text.insert(INSERT, ' routine 1 done: \n') self.text.insert(INSERT, ' working on routine 2: \n') routine2.main(fname) self.text.insert(INSERT, ' routine 2 done: ') sys.exit() def main(): tk = Tk() tk.title('Data Analysis') atext = Analysis() atext.pack() open_button = Button(tk, text="Select Data", activeforeground='blue', command=atext.selectfile) open_button.pack() message=''' Select file to be analysized ''' atext.text.insert(END,message) tk.mainloop() if __name__ == "__main__": main() </code></pre> <p>routine1.py</p> <pre><code>import time def main(Filename): print Filename time.sleep(1) # to simulate process time return </code></pre> <p>routine2.py</p> <pre><code>import time def main(Filename): print Filename time.sleep(1) # to simulate process time return </code></pre>
0
2016-07-26T17:52:25Z
38,597,188
<p>You need to update the display manually by calling the <code>update_idletasks()</code> <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/universal.html" rel="nofollow">universal widget method</a> after changing something in the GUI. Note that the last update will only be visible for a very short time because of the <code>sys.exit()</code> call immediately following it.</p> <pre><code> def selectfile(self): fname = tkFileDialog.askopenfilename() self.text.delete(1.0, END) self.text.insert(INSERT, ' working on routine 1: \n') self.text.update_idletasks() routine1.main(fname) self.text.insert(INSERT, ' routine 1 done: \n') self.text.update_idletasks() self.text.insert(INSERT, ' working on routine 2: \n') self.text.update_idletasks() routine2.main(fname) self.text.insert(INSERT, ' routine 2 done: ') self.text.update_idletasks() sys.exit() </code></pre>
1
2016-07-26T18:18:01Z
[ "python", "tkinter" ]
Reading From Google Sheets Periodically
38,596,793
<p>I'm trying to read from a Google sheet, say, every 2 hours. I have looked at both the API for Google Sheets and also Google Apps Script.</p> <p>I'm using Python/Flask, and what I'm specifically confused about is how to add the time trigger. I can use the Google Sheets API to read from the actual file, but I'm unsure of how to run this process every x hours. From my understanding, it seems like Google Apps Script is for adding triggers to Docs, Sheets, etc., which is not really what I want to do.</p> <p>I'm pretty sure I'm looking in the wrong area for this every-x-hours read. Should I be looking into using the sched module or Advanced Python Scheduler? Any advice on how to proceed would be very appreciated.</p>
0
2016-07-26T17:56:24Z
38,600,670
<p>If you want to do this by manipulating only your Python program, you would have to keep it running all day, which would waste CPU resources.</p> <p>It's best to use cron to have your Unix system run a command for you every 2 hours. In this case, it'd be your Python program.</p>
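For example, a crontab entry that runs the reader script every 2 hours could look like this (the interpreter and script paths below are placeholders — point them at your own environment, and install the entry with <code>crontab -e</code>):

```
0 */2 * * * /usr/bin/python /path/to/read_sheet.py
```

The five fields are minute, hour, day of month, month, and day of week; <code>*/2</code> in the hour field means "every 2nd hour".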
1
2016-07-26T22:09:04Z
[ "python", "google-spreadsheet-api" ]
regex for parsing imports in other scripts?
38,596,798
<p>We have a large library of custom scripts (300+) on a network share and I have been cleaning up a few deprecated modules, and I need to know what scripts import these so I can point them to the new modules. Therefore, I'm trying to come up with a regex that will allow me to search for any deprecated module.</p> <p>So for example, I have 2 deprecated modules (among many) called <code>sql_db</code> and <code>sql_server</code>, so I need to report what scripts may be importing these, but I'm having trouble writing a "catch all" regex that would find <code>sql_db</code> in the following scenarios (and any other import statements I may be overlooking):</p> <pre><code>from sql_db import *
import sql_db
import os, sql_db, other_module
import sql_db, os
import os,sql_db
</code></pre> <p>I am terrible with regular expressions but I feel like I'm pretty close in this test:</p> <pre><code>import re

tests = ['import test', 'import sql_db', 'import test, sql_db',
         'import sql_db, test', 'from sql_db import *',
         'import bmi, sql_db, os, sys', 'from test import os, sys',
         'from sql_d import b', 'import a,b,c', 'import sql_db,test,os',
         ' import sys, sql_db1, test, os', 'import sys,sql_db,test,os']

pat = re.compile('\s*(import|from) (.*)(sql_db)(.*)')
for test in tests:
    print test, '| ', pat.match(test) is not None
</code></pre> <p>This almost works, but is a little too greedy as it will return true when a module is named <code>sql_db1</code> or one that has any characters after <code>sql_db</code>.</p> <p>Here are the results (note the failure in the second to last test):</p> <pre><code>import test |  False
import sql_db |  True
import test, sql_db |  True
import sql_db, test |  True
from sql_db import * |  True
import bmi, sql_db, os, sys |  True
from test import os, sys |  False
from sql_d import b |  False
import a,b,c |  False
import sql_db,test,os |  True
import sys, sql_db1, test, os |  True  # should be false but is returning true for sql_db1
import sys,sql_db,test,os |  True
</code></pre> <p>I know it is because I have the greedy <code>(.*)</code> after <code>(sql_db)</code>, but how can I make find that part explicitly? Any help would be greatly appreciated!</p>
0
2016-07-26T17:56:29Z
38,596,970
<p>Use <code>\b</code> to look for a word boundary before &amp; after (sql_db):</p> <pre><code>\s*(import|from)(.*)\b(sql_db)\b </code></pre> <p>This will not match with sql_db1 because sql_db does not end in a word boundary -- it ends in a 1. Commas <em>are</em> considered word boundaries, so it works with the rest of your examples as well. Feel free to test it at <a href="http://regexr.com/" rel="nofollow">http://regexr.com/</a></p>
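To sanity-check the suggestion, here is a small self-contained snippet (Python 3 print syntax, unlike the question's Python 2) running the boundary-anchored pattern against the trickiest cases:

```python
import re

# The suggested pattern: word boundaries stop 'sql_db' from matching 'sql_db1'.
pat = re.compile(r'\s*(import|from)(.*)\b(sql_db)\b')

print(pat.match('import sys, sql_db1, test, os') is not None)  # False
print(pat.match('import sys,sql_db,test,os') is not None)      # True
print(pat.match('from sql_db import *') is not None)           # True
```

The raw-string prefix (`r'...'`) keeps the backslashes from needing to be doubled.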
3
2016-07-26T18:05:43Z
[ "python", "regex" ]
Which is more efficient way to monitor progress of a long running code? print on screen or log to a text file?
38,596,808
<p>I have a Java program that, as part of the code, handles a very big matrix (1 million by 4 million) and it takes several hours to run before it crashes!</p> <p>I would like to monitor the progress of the code, so that when it runs out of memory and crashes I know what percentage of the matrix was already processed.</p> <p>I am thinking of adding a command to print a counter on screen (relative to the main operation loop) every 1000 iterations, or logging the counter in a text file.</p> <p>Is this a good idea? Or will it slow down my already-slow code even further? After all, I am adding a condition verification check (to check every 1000th iteration) and a file-write operation.</p> <p>Any standard solution in Java?</p> <p>If there is no standard method or function for monitoring the progress, which would be more efficient: writing my own log to a file or printing on screen?</p> <p>Bonus question: what about in Python? Any internal or standard library for this purpose?</p> <p>thanks</p>
-1
2016-07-26T17:56:47Z
38,596,984
<p>Without having any context about what this code <em>does</em>, my advice would be that you could place an <code>if</code> condition check for every 1000<sup>th</sup> iteration, then do something (<em>print statement, calculation, output to log, check or what have you</em>). This <code>if</code> condition's effect on performance is extremely negligible, unless the body of the <code>if</code> condition performs some sort of <em>expensive calculation</em> (<em>a modulus expression is <strong>not</strong> an expensive calculation for computers!</em>).</p> <p>Python code:</p> <pre><code>for i in range(100000):   # Loop symbolizing your code.
    if i % 1000 == 0:     # Every 1000th iteration do something.
        print "Progress"  # Do something.
</code></pre> <p>That being said, unless you're going to be staring at <strong>stdout</strong> for a few hours in real time, or scrolling through the (<em>potentially very large</em>) stdout log, this does not seem like the most efficient idea. I would recommend you write out an error report for this program (<em>function?</em>) while collecting metadata during its run time to put in said report, and would propose unit testing if you aren't already.</p>
2
2016-07-26T18:06:29Z
[ "java", "python", "progress" ]
why is name of all threads same in python threading module?
38,596,877
<pre><code>from threading import *

def myfunc(i, name):
    print("This is " + str(name))

for i in range(4):
    name = current_thread().name
    t = Thread(target=myfunc, args=(i, name,))
    t.start()
</code></pre> <hr> <p><code>current_thread().getName()</code> also gives the same results. I was wondering: is this the way it works, or is it running the same thread, so it is passing the name <code>MainThread</code>?</p> <hr> <p><b>Output : </b><br> This is MainThread <br> This is MainThread <br>This is MainThread <br>This is MainThread <br></p>
-1
2016-07-26T18:00:13Z
38,596,921
<p><code>current_thread()</code> always returns the thread that called <code>current_thread()</code>. You're repeatedly retrieving the name of the thread that's executing the loop, not the name of any of the threads that thread launches.</p> <p>If you want to get the names of the threads launched in the loop, you could have <em>them</em> call <code>current_thread()</code>:</p> <pre><code>import threading

def target():
    print("This is", threading.current_thread().name)

for i in range(4):
    threading.Thread(target=target).start()
</code></pre>
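As an aside, if the goal was to give each worker thread a distinguishable name, you can also set one explicitly via the <code>name</code> argument of <code>Thread</code> (the "worker-N" scheme below is just an example):

```python
import threading

def worker():
    # Each thread reports its own name, set at construction time.
    print("This is", threading.current_thread().name)

threads = [threading.Thread(target=worker, name="worker-%d" % i)
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without an explicit name, threads are auto-named <code>Thread-1</code>, <code>Thread-2</code>, and so on.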
3
2016-07-26T18:02:54Z
[ "python", "multithreading", "python-3.x", "python-multithreading" ]
How to save PIL image with negative values
38,596,995
<p>I am trying to save an image with negative values using PIL; however, after saving, the image file has all negative values clipped to 0.</p> <pre><code>from PIL import Image
import numpy as np

# generate random image for demo
img_arr = np.random.random_integers(-1000, 1000, size=[10, 10]).astype(np.int32)
print "original min {}, max: {}".format(img_arr.min(), img_arr.max())

# create PIL image
img1 = Image.fromarray(img_arr)
print "PIL min {}, max: {}".format(np.array(img1.getdata()).min(),
                                   np.array(img1.getdata()).max())

# save image
img1.save("test_file.png", "PNG")

# reload image
img_file = Image.open("test_file.png")
print "img_file min {}, max: {}".format(np.array(img_file.getdata()).min(),
                                        np.array(img_file.getdata()).max())
</code></pre> <p>This results in output:</p> <pre><code>original min -983, max: 965
PIL min -983, max: 965
img_file min 0, max: 965
</code></pre> <p>How can I save this image and maintain the negative values?</p>
1
2016-07-26T18:06:59Z
38,597,658
<p>Note that there is such a thing as storing your pixels as 32-bit signed integers according to PIL, and the image mode 'I' is meant to handle this in PIL. So the comments saying this makes no sense due to technical reasons are mistaken.</p> <p>I don't think the PNG format supports this mode (despite no errors being thrown when you write an Image in mode 'I'). However, the ".tif" extension seems to:</p> <pre><code>img1.save("test_file.tif")
</code></pre> <p>Changing that (and the read to get the correct file) seems to work:</p> <pre><code>original min -993, max: 990
PIL min -993, max: 990
img_file min -993, max: 990
</code></pre>
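A quick self-contained round trip of the idea above (assuming Pillow and NumPy are installed; the filename is arbitrary):

```python
import numpy as np
from PIL import Image

arr = np.arange(-3, 3, dtype=np.int32).reshape(2, 3)
img = Image.fromarray(arr)      # mode 'I': 32-bit signed integer pixels
img.save("signed.tif")          # TIFF preserves the sign; PNG clips at 0
back = np.array(Image.open("signed.tif"))
print(back.min(), back.max())   # -3 2
```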
1
2016-07-26T18:46:24Z
[ "python", "image", "image-processing", "python-imaging-library" ]
Python - PyQt window became unmovable after adding some codes
38,597,091
<p>I am trying to set a window (I used QLabel) to look like an image in PyQt4, so I wrote this code:</p> <pre><code>self.background_update = QPixmap(":/work/window.png")
update.setPixmap(self.background_update)
update.resize(self.background.width(), self.background.height())
update.show()
</code></pre> <p>Then I tried to remove the window's default action bar and show it transparent to look good, so I added:</p> <pre><code>update.setWindowFlags(Qt.FramelessWindowHint)
update.setAttribute(Qt.WA_TranslucentBackground)
</code></pre> <p>Everything worked fine, but the problem is: the window now became unmovable with the mouse (unlike any normal window).</p> <p>What should I add to fix this? Thanks.</p>
0
2016-07-26T18:12:51Z
38,598,822
<p>Not sure what platform you are on, but on Linux you can move any window by pressing <kbd>Alt</kbd> and dragging with the mouse. But if that doesn't work, you can quite easily implement similar functionality in your own code.</p> <p>Here's a simple demo (left-click and drag):</p> <pre><code>from PyQt4 import QtCore, QtGui

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
        self.setMouseTracking(True)
        self.setGeometry(600, 400, 200, 200)

    def mouseMoveEvent(self, event):
        if event.buttons() == QtCore.Qt.LeftButton:
            self.move(event.globalPos() - self._startpos)

    def mousePressEvent(self, event):
        self._startpos = event.pos()

if __name__ == '__main__':

    import sys
    app = QtGui.QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())
</code></pre>
0
2016-07-26T19:56:33Z
[ "python", "pyqt4" ]
How do I use socket.emit in python?
38,597,119
<p>Help!! I'm helping my friend with his game, and I'm doing the online system (client and server). I've started working from an example on a website, and I need to convert it to Python somehow. In Python, I need to connect to the server and send + receive some data in a package:</p> <pre><code>&lt;script src="https://cdn.socket.io/socket.io-1.4.5.js"&gt;&lt;/script&gt;
&lt;script&gt;
var socket = io()
var random = Math.random();
var package = {pos.y...};
var sendPackage = function(){
    socket.emit('message', package);
}
socket.on('serverMsg',function(data){
    console.log(data.msg);
});
&lt;/script&gt;
</code></pre> <p>The question is: How do I use <code>socket.emit('message', package)</code> and <code>socket.on('message', function...)</code> in Python?</p>
-1
2016-07-26T18:14:33Z
38,598,074
<p>You can use the socketIO-client library.</p> <pre><code>from socketIO_client import SocketIO, LoggingNamespace

with SocketIO('localhost', 8000, LoggingNamespace) as socketIO:
    socketIO.emit('aaa')
</code></pre> <p>A lot more detailed information, including examples, can be found in the Python Package Index <a href="https://pypi.python.org/pypi/socketIO-client" rel="nofollow">here</a>.</p>
0
2016-07-26T19:11:22Z
[ "javascript", "python", "sockets" ]
How to always select the last file in a list?
38,597,167
<p>Long story short, I am using Python to plot many files on the same grid. I won't post the whole program, as it would be unhelpful and unnecessarily long. This is what I need help with</p> <p>To summarize, how to I get <code>ifl == 1</code> to say something that in pythonic language would equal <code>ifl == last file in directory</code>? Thanks</p>
0
2016-07-26T18:16:59Z
38,597,233
<p>If you just want to get the last item in an iteration, you can do this:</p> <pre><code>for fl in file_location:
    pass  # do stuff with fl
</code></pre> <p>After the loop finishes, <code>fl</code> will be set to whatever was the last iterated item.</p>
2
2016-07-26T18:20:45Z
[ "python", "function", "file", "loops" ]
How to always select the last file in a list?
38,597,167
<p>Long story short, I am using Python to plot many files on the same grid. I won't post the whole program, as it would be unhelpful and unnecessarily long. This is what I need help with</p> <p>To summarize, how to I get <code>ifl == 1</code> to say something that in pythonic language would equal <code>ifl == last file in directory</code>? Thanks</p>
0
2016-07-26T18:16:59Z
38,597,241
<p>What about:</p> <pre><code>if ifl == len(file_location):
    ....
</code></pre> <p>Also, you don't actually need the index, you can do this:</p> <pre><code>file_location = glob.glob('../Data/2016/July/*.nc')
for fl in file_location:
    ...
    if fl == file_location[-1]:
        Plot_Map(temp, lon, lat)
    ...
</code></pre>
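If you do want an index-based test, <code>enumerate</code> avoids maintaining a separate counter. A small illustrative helper (not from the question's code — the file names are made up):

```python
def last_flags(files):
    # Pair each file with a flag saying whether it is the last one.
    return [(fl, i == len(files)) for i, fl in enumerate(files, start=1)]

print(last_flags(['a.nc', 'b.nc', 'c.nc']))
# [('a.nc', False), ('b.nc', False), ('c.nc', True)]
```

Inside a real loop you would write <code>for i, fl in enumerate(file_location, start=1):</code> and test <code>i == len(file_location)</code>.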
1
2016-07-26T18:20:57Z
[ "python", "function", "file", "loops" ]
How to always select the last file in a list?
38,597,167
<p>Long story short, I am using Python to plot many files on the same grid. I won't post the whole program, as it would be unhelpful and unnecessarily long. This is what I need help with</p> <p>To summarize, how to I get <code>ifl == 1</code> to say something that in pythonic language would equal <code>ifl == last file in directory</code>? Thanks</p>
0
2016-07-26T18:16:59Z
38,597,280
<p>How about traversing the list backwards?</p> <pre><code>for f in reversed(file_location):
</code></pre> <p>So your condition can stay the same.</p>
2
2016-07-26T18:23:34Z
[ "python", "function", "file", "loops" ]
How to stop logged in users from accessing login/register pages with Flask-Stormpath?
38,597,204
<p>I'm building a Flask app which uses Stormpath and Flask-Stormpath for auth. I wish to prevent a logged in user from accessing the /login or /register pages (since this doesn't make much sense - a logged in user has no need to log in, and you are registered by definition if you are already logged in). I have attempted a solution in my custom login page template, doing something along the lines of:</p> <pre><code>{% block page_heading %}
  {% if user.given_name %}
    Already Logged in as {{ user.given_name }}
  {% else %}
    Enter your credentials
  {% endif %}
{% endblock page_heading %}
</code></pre> <p>If a user is currently signed in, user.given_name will be defined and the page_heading block will take the 'already logged in' message, and otherwise the 'enter your credentials' message (in the normal case of an unauthenticated user attempting to log in). I use the same construct to show the login form or more error text. However, this attempt does not work: it is as though user.given_name always reverts to undefined when a logged in user visits /login. This implies that if someone is logged in and visits /login they are then logged out - this would explain the failure of my attempt at a solution.</p> <p>Given the above and after consulting the docs, I might be able to use is_authenticated(); not through Flask-Stormpath, however, but through the underlying Flask-Login module, since Flask-Stormpath always sets this to return True, per the docs, but I have no idea how to go about this.</p> <p>Additionally, my approach is hacky - I feel a better solution would reside in the Python side of the app.</p> <p>So my question is this: What is the most sensible way to detect a user is logged in and, given this, prevent them from accessing, or redirect them from, the /login and /register pages?
Perhaps there is a magical decorator somewhere that is the opposite to </p> <pre><code>@login_required </code></pre> <p>or a stormpath built-in 'unauthorised' group that I could use as in</p> <pre><code>@groups_required(['unauthorised']) </code></pre> <p>Or, maybe I was on the right lines with my original attempt.</p> <p>Any help is appreciated :)</p>
0
2016-07-26T18:18:45Z
38,619,033
<p>You can create a <a href="http://flask.pocoo.org/docs/0.11/patterns/viewdecorators/#view-decorators" rel="nofollow">decorator</a> containing the code below. It will redirect logged in users to another page if they try to access the login page.</p> <pre><code>if current_user.is_authenticated():
    return redirect(url_for('home'))
</code></pre>
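To make that check reusable, you can wrap it in a decorator. Here's a framework-agnostic sketch — the two hooks are injected as callables so the snippet stays self-contained; in a real Flask app you would instead close over <code>current_user</code>, <code>redirect</code> and <code>url_for</code> directly:

```python
from functools import wraps

def anonymous_required(is_authenticated, logged_in_response):
    """Opposite of login_required: bounce authenticated users elsewhere.

    is_authenticated: zero-argument callable, True for logged-in users.
    logged_in_response: zero-argument callable producing the redirect/response.
    """
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            if is_authenticated():
                return logged_in_response()
            return view(*args, **kwargs)
        return wrapped
    return decorator

# Example with stand-in hooks (a logged-in user hitting the login page):
guard = anonymous_required(lambda: True, lambda: "redirect to /home")

@guard
def login_page():
    return "login form"

print(login_page())  # redirect to /home
```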
0
2016-07-27T16:54:38Z
[ "python", "flask", "authorization", "flask-login", "stormpath" ]
How to stop logged in users from accessing login/register pages with Flask-Stormpath?
38,597,204
<p>I'm building a Flask app which uses Stormpath and Flask-Stormpath for auth. I wish to prevent a logged in user from accessing the /login or /register pages (since this doesn't make much sense - a logged in user has no need to log in, and you are registered by definition if you are already logged in). I have attempted a solution in my custom login page template, doing something along the lines of:</p> <pre><code>{% block page_heading %}
  {% if user.given_name %}
    Already Logged in as {{ user.given_name }}
  {% else %}
    Enter your credentials
  {% endif %}
{% endblock page_heading %}
</code></pre> <p>If a user is currently signed in, user.given_name will be defined and the page_heading block will take the 'already logged in' message, and otherwise the 'enter your credentials' message (in the normal case of an unauthenticated user attempting to log in). I use the same construct to show the login form or more error text. However, this attempt does not work: it is as though user.given_name always reverts to undefined when a logged in user visits /login. This implies that if someone is logged in and visits /login they are then logged out - this would explain the failure of my attempt at a solution.</p> <p>Given the above and after consulting the docs, I might be able to use is_authenticated(); not through Flask-Stormpath, however, but through the underlying Flask-Login module, since Flask-Stormpath always sets this to return True, per the docs, but I have no idea how to go about this.</p> <p>Additionally, my approach is hacky - I feel a better solution would reside in the Python side of the app.</p> <p>So my question is this: What is the most sensible way to detect a user is logged in and, given this, prevent them from accessing, or redirect them from, the /login and /register pages?
Perhaps there is a magical decorator somewhere that is the opposite to </p> <pre><code>@login_required </code></pre> <p>or a stormpath built-in 'unauthorised' group that I could use as in</p> <pre><code>@groups_required(['unauthorised']) </code></pre> <p>Or, maybe I was on the right lines with my original attempt.</p> <p>Any help is appreciated :)</p>
0
2016-07-26T18:18:45Z
38,624,250
<p>Heyo!</p> <p>So, I'm the author of Flask-Stormpath, I thought I'd jump in here. Instead of trying to modify the behavior of login / register (which I wouldn't recommend, as we change that stuff periodically with library releases), a better solution is to simply hide those URLs from the user once they've logged in.</p> <p>Many websites have a top navbar, for instance, which initially says "Login" and "Register" somewhere. But, once you've logged in, those buttons change to something else, usually "Dashboard" or similar.</p> <p>This way, a user isn't able to go to /login or /register unless they specifically type into the browser and attempt to go there manually.</p> <p>I think this is the ideal scenario, which you can accomplish using template conditional logic:</p> <pre><code>{% if user %}
  ...
{% else %}
  ...
{% endif %}
</code></pre>
1
2016-07-27T22:30:11Z
[ "python", "flask", "authorization", "flask-login", "stormpath" ]
Django Primary key acting as a foreign key
38,597,221
<p>I have a MySQL database with 9 tables. They are all related in some way, but I am having trouble with being able to connect with foreign keys. For example, here are two of my tables that give me an error when I run python manage.py migrate:</p> <pre><code>class Release(models.Model):
    release_ID = models.CharField(max_length=25, primary_key=True)
    releaseversion = models.CharField(max_length=25)
    model_ID = models.ForeignKey(Model, on_delete=models.CASCADE)  # comes from Model class

class Subrelease(models.Model):
    subrelease_ID = models.CharField(max_length=25)
    release_ID = models.ForeignKey('Release', blank=True)  # comes from Release class
    subreleaseversion = models.CharField(max_length=25)
</code></pre> <p>How would I make the primary key from class Release, which is release_ID, also be the ForeignKey release_ID in class Subrelease? Any help would be much appreciated. Thank you.</p> <p>After I run migrate I get this in cmd: django.db.utils.InternalError: (1829, "Cannot drop column 'id': needed in a foreign key constraint 'app_subrel_release_ID_id_8e08450_fk_app_release_id' of table 'db.app_subrelease'")</p> <p><strong>UPDATE:</strong> Is this good/okay to do? I don't have any errors when migrating to DB?</p> <pre><code>class Release(models.Model):
    # release_ID = models.CharField(max_length=25, primary_key=True)  THIS WILL BE OUR PRIMARY KEY MADE BY DJANGO
    releaseversion = models.CharField(max_length=25)
    model = models.ForeignKey(Model, on_delete=models.CASCADE)  # comes from Model class model_ID

class Subrelease(models.Model):
    # subrelease_ID = models.CharField(max_length=25)  THIS WILL BE OUR PRIMARY KEY MADE BY DJANGO
    release = models.ForeignKey(Release, on_delete=models.CASCADE)  # comes from Release class release_ID
    subreleaseversion = models.CharField(max_length=25)
</code></pre>
0
2016-07-26T18:19:57Z
38,597,369
<p>You could try to use the <a href="https://docs.djangoproject.com/en/1.8/ref/models/fields/#django.db.models.ForeignKey.to_field" rel="nofollow"><code>to_field</code></a> and <a href="https://docs.djangoproject.com/en/1.8/ref/models/fields/#django.db.models.Field.db_column" rel="nofollow"><code>db_column</code></a> options.</p> <pre><code>class B(models.Model):
    name = models.ForeignKey(A, to_field="name", db_column="name")
</code></pre> <p>Once you have created the foreign key, you can access the value and related instance as follows:</p> <pre><code>&gt;&gt;&gt; b = B.objects.get(id=1)
&gt;&gt;&gt; b.name_id  # the value stored in the 'name' database column
&gt;&gt;&gt; b.name     # the related 'A' instance
</code></pre> <p>reference: <a href="http://stackoverflow.com/questions/31412377/non-primary-foreign-keys-in-django">this answer</a></p>
0
2016-07-26T18:28:44Z
[ "python", "mysql", "django" ]
Pandas: groupby forward fill with datetime index
38,597,253
<p>I have a dataset that has two columns: company, and value.<br> It has a datetime index, which contains duplicates (on the same day, different companies have different values). The values have missing data, so I want to forward fill the missing data with the previous datapoint from the same company.</p> <p>However, I can't seem to find a good way to do this without running into odd groupby errors, suggesting that I'm doing something wrong.</p> <p>Toy data:</p> <pre><code>a = pd.DataFrame({'a': [1, 2, None], 'b': [12, None, 14]})
a.index = pd.DatetimeIndex(['2010', '2011', '2012'])
a = a.unstack()
a = a.reset_index().set_index('level_1')
a.columns = ['company', 'value']
a.sort_index(inplace=True)
</code></pre> <p>Attempted solutions (didn't work: <code>ValueError: cannot reindex from a duplicate axis</code>):</p> <pre><code>a.groupby('company').ffill()
a.groupby('company')['value'].ffill()
a.groupby('company').fillna(method='ffill')
</code></pre> <p>Hacky solution (that delivers the desired result, but is obviously just an ugly workaround):</p> <pre><code>a['value'] = a.reset_index().groupby(
    'company').fillna(method='ffill')['value'].values
</code></pre> <p>There is probably a simple and elegant way to do this, how is this performed in Pandas?</p>
3
2016-07-26T18:21:45Z
38,597,362
<p>One way is to use the <code>transform</code> function to fill the <code>value</code> column after group by:</p> <pre><code>import pandas as pd

a['value'] = a.groupby('company')['value'].transform(lambda v: v.ffill())
a
#            company  value
# level_1
# 2010-01-01       a    1.0
# 2010-01-01       b   12.0
# 2011-01-01       a    2.0
# 2011-01-01       b   12.0
# 2012-01-01       a    2.0
# 2012-01-01       b   14.0
</code></pre> <p>To compare, the original data frame looks like:</p> <pre><code>#            company  value
# level_1
# 2010-01-01       a    1.0
# 2010-01-01       b   12.0
# 2011-01-01       a    2.0
# 2011-01-01       b    NaN
# 2012-01-01       a    NaN
# 2012-01-01       b   14.0
</code></pre>
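As an aside, on newer pandas releases the direct <code>groupby(...)['value'].ffill()</code> that the question tried also runs cleanly despite the duplicate index — this is version-dependent behaviour, so verify against your own pandas before relying on it:

```python
import pandas as pd

# Same toy data as the question, built directly.
a = pd.DataFrame(
    {'company': ['a', 'b', 'a', 'b', 'a', 'b'],
     'value': [1.0, 12.0, 2.0, None, None, 14.0]},
    index=pd.DatetimeIndex(['2010', '2010', '2011', '2011', '2012', '2012']))

# Forward fill within each company; result aligns with the original row order.
a['value'] = a.groupby('company')['value'].ffill()
print(a['value'].tolist())  # [1.0, 12.0, 2.0, 12.0, 2.0, 14.0]
```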
2
2016-07-26T18:28:21Z
[ "python", "datetime", "pandas", "group-by", "missing-data" ]
Pandas: groupby forward fill with datetime index
38,597,253
<p>I have a dataset that has two columns: company, and value.<br> It has a datetime index, which contains duplicates (on the same day, different companies have different values). The values have missing data, so I want to forward fill the missing data with the previous datapoint from the same company.</p> <p>However, I can't seem to find a good way to do this without running into odd groupby errors, suggesting that I'm doing something wrong.</p> <p>Toy data:</p> <pre><code>a = pd.DataFrame({'a': [1, 2, None], 'b': [12, None, 14]})
a.index = pd.DatetimeIndex(['2010', '2011', '2012'])
a = a.unstack()
a = a.reset_index().set_index('level_1')
a.columns = ['company', 'value']
a.sort_index(inplace=True)
</code></pre> <p>Attempted solutions (didn't work: <code>ValueError: cannot reindex from a duplicate axis</code>):</p> <pre><code>a.groupby('company').ffill()
a.groupby('company')['value'].ffill()
a.groupby('company').fillna(method='ffill')
</code></pre> <p>Hacky solution (that delivers the desired result, but is obviously just an ugly workaround):</p> <pre><code>a['value'] = a.reset_index().groupby(
    'company').fillna(method='ffill')['value'].values
</code></pre> <p>There is probably a simple and elegant way to do this, how is this performed in Pandas?</p>
3
2016-07-26T18:21:45Z
38,597,491
<p>You can add <code>'company'</code> to the index, making it unique, and do a simple <code>ffill</code> via <code>groupby</code>:</p> <pre><code>a = a.set_index('company', append=True)
a = a.groupby(level=1).ffill()
</code></pre> <p>From here, you can use <code>reset_index</code> to revert the index back to just the date, if necessary. I'd recommend keeping <code>'company'</code> as part of the index (or just adding it to the index to begin with), so your index remains unique:</p> <pre><code>a = a.reset_index(level=1)
</code></pre>
3
2016-07-26T18:35:57Z
[ "python", "datetime", "pandas", "group-by", "missing-data" ]
Pandas: groupby forward fill with datetime index
38,597,253
<p>I have a dataset that has two columns: company, and value.<br> It has a datetime index, which contains duplicates (on the same day, different companies have different values). The values have missing data, so I want to forward fill the missing data with the previous datapoint from the same company.</p> <p>However, I can't seem to find a good way to do this without running into odd groupby errors, suggesting that I'm doing something wrong.</p> <p>Toy data:</p> <pre><code>a = pd.DataFrame({'a': [1, 2, None], 'b': [12, None, 14]})
a.index = pd.DatetimeIndex(['2010', '2011', '2012'])
a = a.unstack()
a = a.reset_index().set_index('level_1')
a.columns = ['company', 'value']
a.sort_index(inplace=True)
</code></pre> <p>Attempted solutions (didn't work: <code>ValueError: cannot reindex from a duplicate axis</code>):</p> <pre><code>a.groupby('company').ffill()
a.groupby('company')['value'].ffill()
a.groupby('company').fillna(method='ffill')
</code></pre> <p>Hacky solution (that delivers the desired result, but is obviously just an ugly workaround):</p> <pre><code>a['value'] = a.reset_index().groupby(
    'company').fillna(method='ffill')['value'].values
</code></pre> <p>There is probably a simple and elegant way to do this, how is this performed in Pandas?</p>
3
2016-07-26T18:21:45Z
38,597,704
<p>I like to use stacking and unstacking. In this case, it requires that I append the index with <code>'company'</code>.</p> <pre><code>a.set_index('company', append=True).unstack().ffill() \
 .stack().reset_index('company')
</code></pre> <p><a href="http://i.stack.imgur.com/0THlw.png" rel="nofollow"><img src="http://i.stack.imgur.com/0THlw.png" alt="enter image description here"></a></p> <hr> <h3>Timing</h3> <p><em>Conclusion</em> @Psidom's solution works best under both scenarios.</p> <p><strong>toy data</strong></p> <p><a href="http://i.stack.imgur.com/BxIFs.png" rel="nofollow"><img src="http://i.stack.imgur.com/BxIFs.png" alt="enter image description here"></a></p> <p><strong>bigger toy</strong></p> <pre><code>np.random.seed([3, 1415])
n = 10000
a = pd.DataFrame(np.random.randn(n, 10),
                 pd.date_range('2014-01-01', periods=n, freq='H', name='Time'),
                 pd.Index(list('abcdefghij'), name='company'))
a *= np.random.choice((1, np.nan), (n, 10), p=(.6, .4))
a = a.stack(dropna=False).rename('value').reset_index('company')
</code></pre> <p><a href="http://i.stack.imgur.com/MH6J6.png" rel="nofollow"><img src="http://i.stack.imgur.com/MH6J6.png" alt="enter image description here"></a></p>
1
2016-07-26T18:49:10Z
[ "python", "datetime", "pandas", "group-by", "missing-data" ]
Extracting data from irregular form using openCV and OCR
38,597,272
<p>I'm trying to extract information from a form (scanned images of a form) and place that information into a table. I have used pytesseract to OCR the image with good success, but the problem with the output is the fact that Tesseract attempts to extract text line by line. </p> <p>My scanned form looks like this: <a href="http://i.stack.imgur.com/TDtma.png" rel="nofollow"><img src="http://i.stack.imgur.com/TDtma.png" alt="enter image description here"></a></p> <p>Each window of the form (A, B, C) should be a different row in a table. I'm trying to use Open Computer Vision (in python) to identify the individual windows to 1) identify individual units of data (the A, B, C), 2) crop each individual window, and 3) Use Tesseract to OCR the image of the individual window to put the information where it needs to go in a SQL table. </p> <p>My question: How can I identify the boundaries of each individual table entry window, and crop the image to only the extent of that boundary (to then apply OCR)? Also, is it possible to use corner detection to identify the individual units of data? </p> <p>I am primarily using python with OpenCV, and am familiar enough with the documentation to apply a C#/++ OpenCV solution to a python script, so I would appreciate any information/alternative solutions you can provide. </p>
1
2016-07-26T18:23:08Z
38,605,235
<p>In this case, what you should do is take a look at <a href="http://docs.opencv.org/trunk/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a" rel="nofollow">OpenCV findContours</a>. Make sure to use the <code>RETR_TREE</code> retrieval method to obtain a hierarchy of contours. </p> <p>Your windows should be the highest level contours in your image. See my answer <a href="http://stackoverflow.com/questions/37408481/navigate-through-hierarchy-of-contours-found-by-findcontours-method/37470968#37470968">here</a> to get an idea of how to navigate the hierarchy returned by OpenCV.</p>
1
2016-07-27T06:24:58Z
[ "c#", "python", "c++", "opencv" ]
Extracting data from irregular form using openCV and OCR
38,597,272
<p>I'm trying to extract information from a form (scanned images of a form) and place that information into a table. I have used pytesseract to OCR the image with good success, but the problem with the output is the fact that Tesseract attempts to extract text line by line. </p> <p>My scanned form looks like this: <a href="http://i.stack.imgur.com/TDtma.png" rel="nofollow"><img src="http://i.stack.imgur.com/TDtma.png" alt="enter image description here"></a></p> <p>Each window of the form (A, B, C) should be a different row in a table. I'm trying to use Open Computer Vision (in python) to identify the individual windows to 1) identify individual units of data (the A, B, C), 2) crop each individual window, and 3) Use Tesseract to OCR the image of the individual window to put the information where it needs to go in a SQL table. </p> <p>My question: How can I identify the boundaries of each individual table entry window, and crop the image to only the extent of that boundary (to then apply OCR)? Also, is it possible to use corner detection to identify the individual units of data? </p> <p>I am primarily using python with OpenCV, and am familiar enough with the documentation to apply a C#/++ OpenCV solution to a python script, so I would appreciate any information/alternative solutions you can provide. </p>
1
2016-07-26T18:23:08Z
38,624,418
<p>It's possible to separate them section wise using contours and simple contour properties alone.</p> <p><strong>Note : These procedures will only work properly for this particular form. It's not a universal solution for all kinds of irregular forms. However you can implement or tweak certain methods in order to make this work for your form</strong> </p> <p>First read the image</p> <p><code>image=cv2.imread("TDtma.png")</code></p> <p>Convert it to grayscale</p> <p><code>gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)</code></p> <p>Use Canny Edge filter to get the edges - the values 600,1000 were chosen by random experimenting. I chose this value as it removes the background artifact properly. You may need to change and choose the right values for this depending on the images that you are going to input.</p> <p><code>edges = cv2.Canny(gray,600,1000)</code> <a href="http://i.stack.imgur.com/y0En5.png" rel="nofollow"><img src="http://i.stack.imgur.com/y0En5.png" alt="Canny edge detection"></a></p> <p>Use blur filter to remove minor artifacts that would be present in real-world image (such as handwriting etc)</p> <p><code>edges = cv2.GaussianBlur(edges,(5,5),0) # To remove small artifacting if any</code></p> <p>Next we find the external contours because the 3 rectangles (sections) are visibly separated and all we need to do is just find all the external contours. Do note that this code may be different for OpenCV 2.4.x.</p> <p><code>(_,contours,_) = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)</code></p> <p>For some reason contours are detected from bottom to top. 
So we have a character C that is decremented down to A, just for labeling our regions of interest.</p> <p><code>FormPart = ord('C')</code></p> <p>We loop through each contour and crop the region of interest.</p> <p>We check whether each contour has the right aspect ratio and area; again, these values (aspect ratio: 2, area: 1000) were obtained through experimentation and may need to be changed depending upon the real-life input images. Ideally, in our case a rectangle should have aspect ratio >2 (one side of the rectangle is always bigger than the other, and the rectangles in this image have ratio >2). We check that the area is >1000 so as to avoid any kind of contours that were detected due to small artifacts. Again, these values may need to be changed accordingly so as to work with real-world images properly. </p> <p>This given image will be processed properly even without checking contour area and aspect ratio, but there may be issues with real-world images due to small blobs, so the area/aspect ratio check is done to avoid them.</p> <pre><code>for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / float(h)
    area = cv2.contourArea(contour)
    if aspect_ratio &lt; 2 or area &lt; 1000:
        # Not the right contour, go to the next one
        continue
    crop_img = image[y:y+h, x:x+w]  # This is our region of interest
    cv2.imshow("Split Section " + chr(FormPart), crop_img)
    cv2.waitKey(0)
    FormPart = FormPart - 1
    if FormPart &lt; ord('A'):  # If there are more than 3 sections
        break
</code></pre> <p>Finally, we have a full program here that you can copy, paste and run on your machine. Make sure you have Python >2.7.x and OpenCV 3. 
Some lines may need to be changed so as to work with OpenCV 2.4. Also make sure the image is named "TDtma.png" and is in the same directory as the Python program.</p> <pre><code>import cv2

image = cv2.imread("TDtma.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 600, 1000)  # To remove the irrelevant edges and show the relevant ones
cv2.imshow("Canny edge detection", edges)
cv2.waitKey(0)

edges = cv2.GaussianBlur(edges, (5, 5), 0)  # To remove small artifacting if any

# Detecting external contours
(_, contours, _) = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# If you are on OpenCV 2.4.x use this instead
# (contours, _) = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

FormPart = ord('C')  # Contours go from bottom to top in this example
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / float(h)
    area = cv2.contourArea(contour)
    if aspect_ratio &lt; 2 or area &lt; 1000:  # Go to the next contour if this one doesn't meet our specifications
        continue
    crop_img = image[y:y+h, x:x+w]  # This is our region of interest
    cv2.imshow("Split Section " + chr(FormPart), crop_img)
    cv2.waitKey(0)
    FormPart = FormPart - 1
    if FormPart &lt; ord('A'):  # If there are more than 3 sections
        break
</code></pre> <p>And finally you should have something like this <a href="http://i.stack.imgur.com/kHbqP.png" rel="nofollow"><img src="http://i.stack.imgur.com/kHbqP.png" alt="Final result"></a></p> <p>It's possible to separate the individual data cells and text fields as well. It's a bit complicated, though, and may possibly not work right with real-world images. Leave a comment if you need it and I can try.</p> <p>Hope I was able to help.</p>
2
2016-07-27T22:46:43Z
[ "c#", "python", "c++", "opencv" ]
Distributing a cross platform python3 script
38,597,275
<p>I have a Python project which I want to work cross-platform. The approach I am taking to make sure that all the dependencies are installed on the user's machine is a setup script that tries to import all the dependencies for my project and, if it encounters an import error, installs the dependencies globally. The problems with this approach are that, first, I am installing packages globally, and second, I have to hand-edit my pseudo setup script to add any new dependencies. This solution seems very clunky to me. Is there a better approach to solving this problem?</p>
0
2016-07-26T18:23:14Z
38,598,150
<p>If you don't want to <a href="https://docs.python.org/2/distributing/index.html" rel="nofollow">go the normal way</a>, the simplest solution would be to create a <a href="https://pip.pypa.io/en/stable/user_guide/#requirements-files" rel="nofollow">requirements.txt</a> for the dependencies.</p> <p>If you don't want to install stuff globally, use <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> or <a href="https://docs.python.org/3/library/venv.html" rel="nofollow">venv</a> to create an isolated environment - both tools also install pip, which is needed for installing via requirements.txt.</p>
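<p>If you do keep a self-check script in the meantime, you can at least stop hand-editing it by driving it from one list of module names (the same names you would pin in requirements.txt). A minimal sketch; <code>missing_modules</code> is a made-up helper, not a packaging-tool API:</p>

```python
import importlib

def missing_modules(module_names):
    """Return the subset of module_names that cannot be imported."""
    missing = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# 'json' ships with Python; the second name is deliberately bogus.
print(missing_modules(["json", "no_such_module_xyz"]))  # ['no_such_module_xyz']
```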
0
2016-07-26T19:15:49Z
[ "python" ]
Gmail Settings API Backend error - Python
38,597,342
<p>Last month I finished working on a GAE app in Python which makes extensive use of various Google APIs for managing the Google resources within the company's domain by the Google admin. The app was finished, but this month Google announced that the EmailSettings API which I am currently implementing is no longer supported, and the email settings will migrate to the Gmail API; so far they have migrated some of the settings (i.e. send-as alias, forwarding, vacation responder and signature). In the migration documentation that Google put together, they point out the major differences between the two APIs, along with a somewhat vague reference on how to migrate. Anyhow, I am currently trying to implement the new API to modify send-as settings using a service account. Here's how I am creating the service for the service account (again, this is Python):</p> <pre><code>scopes = ['https://mail.google.com/',
          'https://www.googleapis.com/auth/gmail.settings.basic',
          'https://www.googleapis.com/auth/gmail.settings.sharing']
email = "username@domain.bla"

credentials = oauth2.service_account.build_credentials(scope=scopes, user=email)
http = httplib2.Http()
credentials.authorize(http)
service = google_api_helper.build("gmail", "v1", credentials)

body = {'emailAddress': 'anotheruser@domain.bla'}
service.users().settings().updateAutoForwarding(userId="me", body=body).execute()
</code></pre> <p>In this particular example, I am trying to update the AutoForwarding setting, but it's the same scenario and error for some of the send-as settings. The problem I am having is the following: for the "delicate settings", as Google calls them, I need to use the scope '<a href="https://www.googleapis.com/auth/gmail.settings.sharing" rel="nofollow">https://www.googleapis.com/auth/gmail.settings.sharing</a>', which needs a service account to be created for it to work. 
</p> <p>Whenever I try to use it, though, I get a 500 error message: </p> <blockquote> <p>HttpError: https://www.googleapis.com/gmail/v1/users/me/settings/autoForwarding?alt=json returned "Backend Error"></p> </blockquote> <p>Why am I getting this error if I am granting domain-wide access to the service account? Is this an API error, or is it the way I am currently implementing the OAuth2 authentication? I have tried several implementations without success.</p> <p>Using Application Default Credentials:</p> <pre><code>credentials = GoogleCredentials.get_application_default()
http_auth = credentials.authorize(Http())
service = build("gmail", "v1", http=http_auth)
aliases_2 = service.users().settings().sendAs().list(userId="username@domain.bla").execute()
</code></pre> <p>Using the updated oauth2client library and a local JSON key file:</p> <pre><code>credentials_new = ServiceAccountCredentials.from_json_keyfile_name("app/service_account_key.json", scopes)
delegated_credentials = credentials_new.create_delegated(sub="username@domain.bla")
http_auth = delegated_credentials.authorize(Http())
service = build("gmail", "v1", http=http_auth)
</code></pre> <p>Using the outdated oauth2client library and the SignedJwtAssertionCredentials function, which is no longer supported in the new implementation of the library:</p> <pre><code>credentials = SignedJwtAssertionCredentials(str(settings['oauth2_service_account']['client_email']).encode('utf-8'),
                                            settings['oauth2_service_account']['private_key'],
                                            scope=scopes,
                                            sub="username@domain.bla")
auth2token = OAuth2TokenFromCredentials(credentials)
# With this implementation I was able to pass the Google admin account (which is
# supposed to have super admin access) to the "sub" parameter. This used to work
# for the EmailSettings API, but for this new implementation you need to pass the
# email of the user you are trying to gain access to.
# build service
# call API
</code></pre> <p>With all 3 implementations I was able to make calls to the basic scope, but whenever I tried to make any changes to any settings under the umbrella of the settings.sharing scope, I got the backend error message. This is driving me crazy, and I just finished this app! If you have any ideas or if you have run into this issue before, please let me know!</p>
1
2016-07-26T18:27:20Z
38,618,485
<p>Update: As of 2016-08-11, this issue should be fixed.</p> <p>As of 2016-07-27, there is a bug in the authorization backend that is leading to this error, although it only appears to affect certain domains / users. We are working on a fix. Please star <a href="https://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=4673" rel="nofollow">this issue</a> to get updates.</p>
0
2016-07-27T16:25:18Z
[ "python", "google-app-engine", "gmail-api", "google-admin-sdk", "google-admin-settings-api" ]
Why is bisect slower than sort?
38,597,378
<p>I know that bisect uses binary search to keep lists sorted. However, I ran a timing test in which the values are read one by one and then sorted. Contrary to my expectation, collecting the values and sorting them at the end wins by a large margin. Could more experienced users please explain this behavior? Here is the code I use to test the timings.</p> <pre><code>import timeit

setup = """
import random
import bisect
a = range(100000)
random.shuffle(a)
"""

p1 = """
b = []
for i in a:
    b.append(i)
b.sort()
"""

p2 = """
b = []
for i in a:
    bisect.insort(b, i)
"""

print timeit.timeit(p1, setup=setup, number=1)
print timeit.timeit(p2, setup=setup, number=1)

# 0.0593081859178
# 1.69218442959
# Huge difference! Sorting at the end is much faster.
</code></pre> <p>In the first process I take the values one by one instead of just sorting <code>a</code>, to mimic the behavior of reading a file. And it still beats bisect very hard.</p>
0
2016-07-26T18:29:20Z
38,597,517
<p>Your algorithmic complexity will be worse in the bisect case ...</p> <p>In the <code>bisect</code> case, you have <code>N</code> operations (each at an average cost of <code>log(N)</code> to find the insertion point and then an additional <code>O(N)</code> step to insert the item). <strong>Total complexity: <code>O(N^2)</code></strong>.</p> <p>With <code>sort</code>, you have a single <code>Nlog(N)</code> step (plus <code>N</code> <code>O(1)</code> steps to build the list in the first place). <strong>Total complexity: <code>O(Nlog(N))</code></strong></p> <p>Also note that <code>sort</code> is implemented in very heavily optimized C code (<code>bisect</code> isn't quite as optimized since it ends up calling various comparison functions much more frequently...)</p>
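<p>To see where the quadratic term comes from, here is a sketch of what <code>insort</code> amounts to per element: an <code>O(log N)</code> binary search for the slot, then an <code>O(N)</code> shifting <code>list.insert</code> (the real <code>insort</code> is equivalent for plain comparable items):</p>

```python
import bisect
import random

def insort_by_hand(values):
    """Build a sorted list one element at a time, the way bisect.insort does."""
    result = []
    for v in values:
        pos = bisect.bisect_left(result, v)  # binary search: O(log N)
        result.insert(pos, v)                # shifts the tail: O(N)
    return result

data = list(range(50))
random.shuffle(data)
assert insort_by_hand(data) == sorted(data)
print(insort_by_hand([3, 1, 2]))  # [1, 2, 3]
```

<p>N inserts at O(N) apiece is O(N^2) overall, versus a single O(N log N) sort.</p>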
3
2016-07-26T18:37:18Z
[ "python", "algorithm", "performance", "sorting" ]
Why is bisect slower than sort?
38,597,378
<p>I know that bisect uses binary search to keep lists sorted. However, I ran a timing test in which the values are read one by one and then sorted. Contrary to my expectation, collecting the values and sorting them at the end wins by a large margin. Could more experienced users please explain this behavior? Here is the code I use to test the timings.</p> <pre><code>import timeit

setup = """
import random
import bisect
a = range(100000)
random.shuffle(a)
"""

p1 = """
b = []
for i in a:
    b.append(i)
b.sort()
"""

p2 = """
b = []
for i in a:
    bisect.insort(b, i)
"""

print timeit.timeit(p1, setup=setup, number=1)
print timeit.timeit(p2, setup=setup, number=1)

# 0.0593081859178
# 1.69218442959
# Huge difference! Sorting at the end is much faster.
</code></pre> <p>In the first process I take the values one by one instead of just sorting <code>a</code>, to mimic the behavior of reading a file. And it still beats bisect very hard.</p>
0
2016-07-26T18:29:20Z
38,597,519
<p>Sorting a list takes about <code>O(N*log(N))</code> time. Appending N items to a list takes <code>O(N)</code> time. Doing these things consecutively takes about <code>O(N*log(N))</code> time.</p> <p>Bisecting a list takes <code>O(log(n))</code> time. Inserting an item into a list takes <code>O(N)</code> time. Doing both N times inside a for loop takes <code>O(N * (N + log(n))) == O(N^2)</code> time.</p> <p><code>O(N^2)</code> is worse than <code>O(N*log(N))</code>, so your <code>p1</code> is faster than your <code>p2</code>.</p>
2
2016-07-26T18:37:18Z
[ "python", "algorithm", "performance", "sorting" ]
Why is bisect slower than sort?
38,597,378
<p>I know that bisect uses binary search to keep lists sorted. However, I ran a timing test in which the values are read one by one and then sorted. Contrary to my expectation, collecting the values and sorting them at the end wins by a large margin. Could more experienced users please explain this behavior? Here is the code I use to test the timings.</p> <pre><code>import timeit

setup = """
import random
import bisect
a = range(100000)
random.shuffle(a)
"""

p1 = """
b = []
for i in a:
    b.append(i)
b.sort()
"""

p2 = """
b = []
for i in a:
    bisect.insort(b, i)
"""

print timeit.timeit(p1, setup=setup, number=1)
print timeit.timeit(p2, setup=setup, number=1)

# 0.0593081859178
# 1.69218442959
# Huge difference! Sorting at the end is much faster.
</code></pre> <p>In the first process I take the values one by one instead of just sorting <code>a</code>, to mimic the behavior of reading a file. And it still beats bisect very hard.</p>
0
2016-07-26T18:29:20Z
38,597,622
<p>To understand the time difference, let’s look at what you are actually doing there.</p> <p>In your first example, you are taking an empty list, and append items to it, and sorting it in the end.</p> <p>Appending to lists is really cheap, it has an <a href="https://en.wikipedia.org/wiki/Amortized_analysis" rel="nofollow">amortized time complexity</a> of O(1). It cannot be really constant time because the underlying data structure, a simple array, eventually needs to be expanded as the list grows. This is done every so often which causes a new array to be allocated and the data being copied. That’s a bit more expensive. But in general, <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">we still say this is O(1)</a>.</p> <p>Next up comes the sorting. Python is using <a href="https://en.wikipedia.org/wiki/Timsort" rel="nofollow">Timsort</a> which is very efficient. This is O(n log n) at average and worst case. So overall, we get constant time following <code>O(n log n)</code> so the sorting is the only thing that matters here. In total, this is pretty simple and very fast.</p> <p>The second example uses <a href="https://docs.python.org/3/library/bisect.html#bisect.insort" rel="nofollow"><code>bisect.insort</code></a>. This utilizes a list and binary search to ensure that the list <em>is sorted at all times</em>.</p> <p>Essentially, on every insert, it will use binary search to find the correct location to insert the new value, and then shift all items correctly to make room at that index for the new value. Binary search is cheap, O(log n) on average, so this is not a problem. Shifting alone is also not that difficult. In the worst case, we need to move all items one index to the right, so we get O(n) (this is basically the <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">insert operation on lists</a>).</p> <p>So in total, we would get linear time at worst. However, we do this <em>on every single iteration</em>. 
So when inserting <code>n</code> elements, we have O(n) each time. This results in a quadratic complexity, O(n²). This is a problem, and will ultimately slow the whole thing down.</p> <p>So what does this tell us? <a href="https://en.wikipedia.org/wiki/Insertion_sort" rel="nofollow">Sorted inserting</a> into a list to get a sorted result is not really performant. We can use the <code>bisect</code> module to keep an already sorted list ordered when we only do a few operations, but when we actually have unsorted data, it’s easier to sort the data as a whole.</p>
1
2016-07-26T18:44:08Z
[ "python", "algorithm", "performance", "sorting" ]
Why is bisect slower than sort?
38,597,378
<p>I know that bisect uses binary search to keep lists sorted. However, I ran a timing test in which the values are read one by one and then sorted. Contrary to my expectation, collecting the values and sorting them at the end wins by a large margin. Could more experienced users please explain this behavior? Here is the code I use to test the timings.</p> <pre><code>import timeit

setup = """
import random
import bisect
a = range(100000)
random.shuffle(a)
"""

p1 = """
b = []
for i in a:
    b.append(i)
b.sort()
"""

p2 = """
b = []
for i in a:
    bisect.insort(b, i)
"""

print timeit.timeit(p1, setup=setup, number=1)
print timeit.timeit(p2, setup=setup, number=1)

# 0.0593081859178
# 1.69218442959
# Huge difference! Sorting at the end is much faster.
</code></pre> <p>In the first process I take the values one by one instead of just sorting <code>a</code>, to mimic the behavior of reading a file. And it still beats bisect very hard.</p>
0
2016-07-26T18:29:20Z
38,597,723
<p>Insertion and deletion operations in a data structure can be surprisingly expensive sometimes, particularly if the distribution of incoming data values is random. Whereas, <em>sorting</em> can be unexpectedly fast.</p> <p>A key consideration is whether-or-not you can "accumulate all the values," then sort them <em>once,</em> then use the sorted result "all at once." If you can, then sorting is almost always very-noticeably faster.</p> <p>If you remember the old sci-fi movies (back when computers were called "giant brains" and a movie always had spinning tape-drives), that's the sort of processing that they were supposedly doing: applying <em>sorted</em> updates to <em>also-sorted</em> master tapes, to produce a new <em>still-sorted</em> master. Random-access was not needed. (Which was a good thing, because at that time we really couldn't do it.) It is <em>still</em> an efficient way to process vast amounts of data.</p>
0
2016-07-26T18:50:09Z
[ "python", "algorithm", "performance", "sorting" ]
Replacing ' ' Values in a List with '0'
38,597,390
<p>I have a few <code>.csv</code> files that contain <code>NULL</code> values that are viewed as empty in the table. For example:</p> <pre><code>ID    Volts    Current    Watts
0     383                 0
1     383     1           383
2     382     2           764
</code></pre> <p>This <code>.csv</code> file is input into my program and converted to a list like this:</p> <pre><code>with open(inputPath + file) as inFile:
    csvFile = csv.reader(inFile)
    for row in csvFile:
        removeNull(row)
        print(row)
</code></pre> <p>which essentially takes each row in <code>csvFile</code> and turns it into a list of values that looks something like this:</p> <p><code>['0', '383', '', '0']</code>, <code>['1', '383', '1', '383']</code>, etc.</p> <p><em>Note that the <code>NULL</code> values are now just empty strings, <code>''</code>.</em></p> <p>Then, in relation to the snippet of the program above, <code>removeNull()</code> is defined as:</p> <pre><code>def removeNull(row):
    nullIndex = row.index('')
    row.remove('')
    newRow = row.insert(nullIndex, '0')
    return newRow
</code></pre> <p>which looks through the list (aka row) for empty strings, <code>''</code>, and notes their index as <code>nullIndex</code>. It then removes the empty string at said index, replaces it with <code>'0'</code> and returns the edited list. </p> <p><strong>Question:</strong> What exactly is wrong with my <code>removeNull()</code> function that causes it to only replace the first empty string, <code>''</code>, in a list? And how can I fix it so that it works for all empty strings in a list?</p> <p>For clarification, a table like this with only <em>one</em> <code>NULL</code> value per row, or empty string once converted to a list, works just fine.</p> <pre><code>ID    Volts    Current    Watts
0     383                 0
1     383     1           383
2     382     2           764
</code></pre> <p><code>['0', '383', '', '0']</code> <em>works fine.</em></p> <p>However, if I have a table like this, with <em>multiple</em> <code>NULL</code> values per row, it will only replace the first empty string in the converted list and do nothing with the rest. 
</p> <pre><code>ID    Volts    Current    Watts
0                         0
1     383     1           383
2     382     2           764
</code></pre> <p><code>['0', '', '', '0']</code> <em>does not work fine.</em></p>
0
2016-07-26T18:29:49Z
38,597,450
<p>Because <code>list.index</code> only returns the index of the first occurrence of the item in the list. You can instead use a list comprehension on each row to do the replacement:</p> <pre><code>def removeNull(row):
    return ['0' if i == '' else i for i in row]
    #       |&lt;---- ternary op. ----&gt;|
</code></pre> <p>The <em>ternary operator</em> in the list comprehension replaces blank strings with <code>'0'</code> while the others are returned as they are.</p> <p>On a side note, your function does not modify <code>row</code> <em>in-place</em>; therefore, you will need to assign the return value of the function to <code>row</code>:</p> <pre><code>for row in csvFile:
    row = removeNull(row)
    print(row)
</code></pre>
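<p>A quick check that one pass handles any number of empty strings, using the failing row from the question:</p>

```python
def removeNull(row):
    # swap every '' for '0', keep all other values as-is
    return ['0' if i == '' else i for i in row]

print(removeNull(['0', '', '', '0']))        # ['0', '0', '0', '0']
print(removeNull(['1', '383', '1', '383']))  # unchanged: ['1', '383', '1', '383']
```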
5
2016-07-26T18:33:05Z
[ "python", "string", "csv", null ]
python - how to log everything without instrumentation
38,597,475
<p>I want to log <strong>everything:</strong></p> <ol> <li>Function entered + values of parameters + function exited</li> <li>Result of every assignment or operation</li> <li>etc.</li> </ol> <p>Is it possible to log "everything" in a Python execution without instrumenting the code?</p> <p>Since things are executing in a VM, it should be possible to configure this at the VM level (hopefully?).</p> <p>I'm using Pycharm but I could do it via commandline it it's necessary.</p> <p>There's this existing question: <a href="http://stackoverflow.com/questions/23435488/how-to-do-logging-at-function-entry-inside-and-exit-in-python">How to do logging at function entry, inside and exit in Python</a> but it doesn't address how to log the result of variable assignments.</p>
1
2016-07-26T18:34:51Z
38,600,368
<p>You would need to use the <a href="https://docs.python.org/3/library/trace.html" rel="nofollow"><code>trace</code></a> module and/or perhaps the <a href="https://docs.python.org/3/library/pdb.html" rel="nofollow"><code>pdb</code></a> module. They may not give you everything you need, but it would be a starting point. The <code>logging</code> module doesn't work at such a low level as you seem to want.</p>
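<p>Both of those build on the interpreter's tracing hook. As a rough illustration of what is possible without instrumenting the code, here is a minimal <code>sys.settrace</code> sketch that records function entry (with parameter values) and exit; logging every assignment would additionally mean inspecting <code>frame.f_locals</code> on <code>'line'</code> events, which gets expensive fast:</p>

```python
import sys

def make_tracer(log):
    """Record function entries (with their arguments) and exits."""
    def tracer(frame, event, arg):
        name = frame.f_code.co_name
        if event == 'call':
            # At call time, f_locals holds exactly the passed parameters
            log.append(('call', name, dict(frame.f_locals)))
        elif event == 'return':
            log.append(('return', name, arg))
        return tracer  # keep tracing inside this frame
    return tracer

def add(a, b):
    return a + b

log = []
sys.settrace(make_tracer(log))
add(2, 3)
sys.settrace(None)

print(log)
# [('call', 'add', {'a': 2, 'b': 3}), ('return', 'add', 5)]
```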
0
2016-07-26T21:41:41Z
[ "python", "logging", "pycharm" ]
In NLTK, get the number of occurrences of a trigram
38,597,503
<p>I'd like to get the "commonly used phrases" from a text, defined as the trigrams which occur more than once. So far I have this:</p> <pre><code>import nltk

def get_words(string):
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    return tokenizer.tokenize(string)

string = "Hello, world. This is a dog. This is a cat."
words = get_words(string)

finder = nltk.collocations.TrigramCollocationFinder.from_words(words)
scored = finder.score_ngrams(nltk.collocations.TrigramAssocMeasures().raw_freq)
</code></pre> <p>The resulting <code>scored</code> is </p> <pre><code>[(('This', 'is', 'a'), 0.2),
 (('Hello', 'world', 'This'), 0.1),
 (('a', 'dog', 'This'), 0.1),
 (('dog', 'This', 'is'), 0.1),
 (('is', 'a', 'cat'), 0.1),
 (('is', 'a', 'dog'), 0.1),
 (('world', 'This', 'is'), 0.1)]
</code></pre> <p>I've noticed that the number in the elements of <code>scored</code> is the number of occurrences of the trigram divided by the total word count (in this case, 10). Is there a way to get the number of occurrences directly, without 'post-multiplying' by the word count?</p>
0
2016-07-26T18:36:34Z
38,597,744
<p>To get the raw frequencies without normalization, you can just use the finder's <code>ngram_fd</code> frequency distribution. In your case:</p> <pre><code>trigram_freqs = finder.ngram_fd
</code></pre>
0
2016-07-26T18:51:45Z
[ "python", "nltk" ]
In NLTK, get the number of occurrences of a trigram
38,597,503
<p>I'd like to get the "commonly used phrases" from a text, defined as the trigrams which occur more than once. So far I have this:</p> <pre><code>import nltk

def get_words(string):
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    return tokenizer.tokenize(string)

string = "Hello, world. This is a dog. This is a cat."
words = get_words(string)

finder = nltk.collocations.TrigramCollocationFinder.from_words(words)
scored = finder.score_ngrams(nltk.collocations.TrigramAssocMeasures().raw_freq)
</code></pre> <p>The resulting <code>scored</code> is </p> <pre><code>[(('This', 'is', 'a'), 0.2),
 (('Hello', 'world', 'This'), 0.1),
 (('a', 'dog', 'This'), 0.1),
 (('dog', 'This', 'is'), 0.1),
 (('is', 'a', 'cat'), 0.1),
 (('is', 'a', 'dog'), 0.1),
 (('world', 'This', 'is'), 0.1)]
</code></pre> <p>I've noticed that the number in the elements of <code>scored</code> is the number of occurrences of the trigram divided by the total word count (in this case, 10). Is there a way to get the number of occurrences directly, without 'post-multiplying' by the word count?</p>
0
2016-07-26T18:36:34Z
38,612,097
<p>You can get the number of occurrences using <strong>finder.ngram_fd.items()</strong></p> <pre><code># To get trigrams with occurrences
trigrams = finder.ngram_fd.items()
print trigrams

# To get trigrams with occurrences in descending order
trigrams = sorted(finder.ngram_fd.items(), key=lambda t: (-t[1], t[0]))
print trigrams
</code></pre> <p>You can check more related examples at: <a href="http://www.nltk.org/howto/collocations.html" rel="nofollow">NLTK Collocations</a></p>
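<p>If you'd rather avoid NLTK for the counting step, the same raw occurrence counts can be reproduced with the standard library. This mirrors the question's <code>\w+</code> tokenization; it is a sketch, not NLTK's implementation:</p>

```python
import re
from collections import Counter

def trigram_counts(text):
    """Raw trigram occurrence counts (what finder.ngram_fd holds)."""
    words = re.findall(r'\w+', text)
    return Counter(zip(words, words[1:], words[2:]))

counts = trigram_counts("Hello, world. This is a dog. This is a cat.")
print(counts[('This', 'is', 'a')])  # 2

# "Commonly used phrases": trigrams that occur more than once
common = [' '.join(t) for t, n in counts.items() if n > 1]
print(common)  # ['This is a']
```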
1
2016-07-27T11:47:41Z
[ "python", "nltk" ]
In NLTK, get the number of occurrences of a trigram
38,597,503
<p>I'd like to get the "commonly used phrases" from a text, defined as the trigrams which occur more than once. So far I have this:</p> <pre><code>import nltk

def get_words(string):
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    return tokenizer.tokenize(string)

string = "Hello, world. This is a dog. This is a cat."
words = get_words(string)

finder = nltk.collocations.TrigramCollocationFinder.from_words(words)
scored = finder.score_ngrams(nltk.collocations.TrigramAssocMeasures().raw_freq)
</code></pre> <p>The resulting <code>scored</code> is </p> <pre><code>[(('This', 'is', 'a'), 0.2),
 (('Hello', 'world', 'This'), 0.1),
 (('a', 'dog', 'This'), 0.1),
 (('dog', 'This', 'is'), 0.1),
 (('is', 'a', 'cat'), 0.1),
 (('is', 'a', 'dog'), 0.1),
 (('world', 'This', 'is'), 0.1)]
</code></pre> <p>I've noticed that the number in the elements of <code>scored</code> is the number of occurrences of the trigram divided by the total word count (in this case, 10). Is there a way to get the number of occurrences directly, without 'post-multiplying' by the word count?</p>
0
2016-07-26T18:36:34Z
38,623,755
<p>In the end I went with 'post-multiplying' the <code>raw_freq</code> attribute because it is already sorted. Here is my implementation:</p> <pre><code>import nltk

def get_words(string):
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    return tokenizer.tokenize(string)

string = "Hello, world. This is a dog. This is a cat."
words = get_words(string)
word_count = len(words)

finder = nltk.collocations.TrigramCollocationFinder.from_words(words)
scored = finder.score_ngrams(nltk.collocations.TrigramAssocMeasures().raw_freq)

scored_common = filter(lambda score: score[1] * word_count &gt; 1, scored)
common_phrases = [" ".join(score[0]) for score in scored_common]
</code></pre> <p>This yields the common phrases as <code>['This is a']</code> for this example.</p>
0
2016-07-27T21:47:56Z
[ "python", "nltk" ]
Reading multiple files in real time?
38,597,535
<p>I'm trying to listen to log files that are constantly updated and work with the lines continuously. The catch is that I have multiple files to listen to. The logs are separated by JBoss instance, and I have to work with all of them together to insert them into a database. </p> <p>I've got a good example of how to read a file continuously from question <a href="http://stackoverflow.com/questions/5419888/reading-from-a-frequently-updated-file">5419888</a>, but that code only reads one file at a time. I've tried the following code to read them all, but it only listens to the first file it finds in the array of files.</p> <p>How could I multithread this to process all the files at the same time?</p> <pre><code>import time
from glob import glob

def follow(thefile):
    thefile.seek(0, 2)
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

if __name__ == '__main__':
    for log in glob("/logs/xxx/production/jboss/yyy*/xxx-production-zzzz*/xxx-production-zzzz*-xxx-Metrics.log"):
        logfile = open(log, "r")
        loglines = follow(logfile)
        for line in loglines:
            print line,
</code></pre>
-1
2016-07-26T18:38:29Z
38,598,072
<p>You can print the lines of each file at the same time using the following code:</p> <pre><code>import threading

lock = threading.Lock()

def printFile(logfile):
    loglines = follow(logfile)
    for line in loglines:
        # only one thread at a time can print to the user
        lock.acquire()
        print line
        lock.release()

if __name__ == '__main__':
    for log in glob("/logs/xxx/production/jboss/yyy*/xxx-production-zzzz*/xxx-production-zzzz*-xxx-Metrics.log"):
        logfile = open(log, "r")
        t = threading.Thread(target=printFile, args=(logfile,))
        t.start()
</code></pre>
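<p>Since the lines are ultimately inserted into a database, an alternative to the print lock is to have each tailing thread push its lines onto a shared <code>queue.Queue</code> (named <code>Queue</code> on Python 2) and let a single consumer thread do the inserts. The sketch below uses stand-in producers instead of the real <code>follow()</code> generators:</p>

```python
import threading
import queue  # Queue on Python 2

q = queue.Queue()
results = []  # stand-in for the database

def tail(lines):
    """Stand-in for a thread running follow() on one log file."""
    for line in lines:
        q.put(line)

def consume(n_expected):
    """Single consumer: in the real app, this is where the DB insert goes."""
    for _ in range(n_expected):
        results.append(q.get())

consumer = threading.Thread(target=consume, args=(3,))
producers = [threading.Thread(target=tail, args=(["a1", "a2"],)),
             threading.Thread(target=tail, args=(["b1"],))]

consumer.start()
for p in producers:
    p.start()
for p in producers:
    p.join()
consumer.join()

print(sorted(results))  # ['a1', 'a2', 'b1']
```

<p>This serializes the database writes without locking around every line, and the queue also buffers bursts from busy log files.</p>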
2
2016-07-26T19:11:13Z
[ "python", "multithreading", "file-io" ]
What does it mean if get_ident() returns the same value?
38,597,538
<p>If <code>get_ident()</code> returns the same value during the execution of a program, does that mean the thread is the same thread, or is it possible that the thread was restarted but got the same identifier as a previous thread?</p> <p>If it is possible, how likely is it - pretty likely?</p> <p>I ask because I expected a particular Thread subclass to be killed and restarted multiple times during a program, and only one instance of that subclass running as a thread at a time. But, I made those threads log their thread id returned by <code>get_ident()</code> occasionally, and I saw that it was pretty common for some ids to re-occur, even after a different id was logged after the first occurrence.</p>
0
2016-07-26T18:38:41Z
38,597,574
<blockquote> <p>is it possible that the thread was restarted but got the same identifier as a previous thread?</p> </blockquote> <p>Yep, it seems so. From <a href="https://docs.python.org/2/library/thread.html#thread.get_ident" rel="nofollow">the documentation</a>:</p> <blockquote> <p>Thread identifiers may be recycled when a thread exits and another thread is created.</p> </blockquote> <p>As for how common it is, it probably varies depending on the OS and the Python implementation, but I expect it to be a pretty frequent occurrence.</p>
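The recycling is easy to observe for yourself: start short-lived threads one after another and record their identifiers. On CPython the same value frequently comes straight back once the previous thread has exited, though nothing guarantees it — this sketch only records what it sees:

```python
import threading

try:
    from thread import get_ident      # Python 2
except ImportError:
    from threading import get_ident   # Python 3

idents = []

def work():
    # each short-lived thread records its own identifier
    idents.append(get_ident())

for _ in range(5):
    t = threading.Thread(target=work)
    t.start()
    t.join()  # the thread is fully dead before the next one starts

print(idents)
print(len(set(idents)))  # often 1 on CPython: the id gets recycled; never guaranteed
```

Because of this, a repeated identifier by itself cannot tell you whether it is the same thread object or a new thread that inherited a recycled id.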
1
2016-07-26T18:41:07Z
[ "python", "multithreading" ]
Python - How to replace adjacent list elements with a different value
38,597,578
<p>I have been stumped trying to figure out how to locate and change the values of the area bordered around a single element of a list without affecting the element values needing to be bordered. This is what I'm trying to execute:</p> <p>BEFORE:</p> <pre><code>[[0,0,0,0,0],
 [0,1,1,0,0],
 [0,1,0,1,0],
 [0,0,1,0,0],
 [0,0,0,0,0]]
</code></pre> <p>AFTER:</p> <pre><code>[[9,9,9,9,0],
 [9,1,1,9,9],
 [9,1,9,1,9],
 [9,9,1,9,9],
 [0,9,9,9,0]]
</code></pre> <p>I just need to know if I need mainly FOR loops as well as IF/ELSE statements, or if I need a separate library to get this solved. </p> <p>Thanks in Advance for Helping!</p>
-7
2016-07-26T18:41:22Z
38,597,765
<p>For loops and if/else statements will suffice.</p>
0
2016-07-26T18:52:57Z
[ "python", "list", "python-2.7" ]
Python - How to replace adjacent list elements with a different value
38,597,578
<p>I have been stumped trying to figure out how to locate and change the values of the area bordered around a single element of a list without affecting the element values needing to be bordered. This is what I'm trying to execute:</p> <p>BEFORE:</p> <pre><code>[[0,0,0,0,0],
 [0,1,1,0,0],
 [0,1,0,1,0],
 [0,0,1,0,0],
 [0,0,0,0,0]]
</code></pre> <p>AFTER:</p> <pre><code>[[9,9,9,9,0],
 [9,1,1,9,9],
 [9,1,9,1,9],
 [9,9,1,9,9],
 [0,9,9,9,0]]
</code></pre> <p>I just need to know if I need mainly FOR loops as well as IF/ELSE statements, or if I need a separate library to get this solved. </p> <p>Thanks in Advance for Helping!</p>
-7
2016-07-26T18:41:22Z
38,597,986
<p>Yes, you can do it with just if/else statements and for loops:</p> <pre><code>a = [[0,0,0,0,0],
     [0,1,1,0,0],
     [0,1,0,1,0],
     [0,0,1,0,0],
     [0,0,0,0,0]]

to_replace = 1
replacement = 9

for i in range(len(a)):
    for x in range(len(a[i])):
        if a[i][x] == to_replace:
            for pos_x in range(i - 1, i + 2):
                for pos_y in range(x - 1, x + 2):
                    if pos_x &lt; 0 or pos_y &lt; 0:
                        continue  # negative indices would silently wrap to the other side
                    try:
                        if a[pos_x][pos_y] != to_replace:
                            a[pos_x][pos_y] = replacement
                    except IndexError:
                        pass  # neighbour lies outside the grid (edges and corners)

for line in a:
    print line
# this will give you
# [9, 9, 9, 9, 0]
# [9, 1, 1, 9, 9]
# [9, 1, 9, 1, 9]
# [9, 9, 1, 9, 9]
# [0, 9, 9, 9, 0]
</code></pre>
0
2016-07-26T19:06:01Z
[ "python", "list", "python-2.7" ]
How to perform an operation on every element in a numpy matrix?
38,597,587
<p>Say I have a function foo() that takes in a single float and returns a single float. What's the fastest/most pythonic way to apply this function to every element in a numpy matrix or array?</p> <p>What I essentially need is a version of this code that doesn't use a loop:</p> <pre><code>import numpy as np

big_matrix = np.matrix(np.ones((1000, 1000)))

for i in xrange(np.shape(big_matrix)[0]):
    for j in xrange(np.shape(big_matrix)[1]):
        big_matrix[i, j] = foo(big_matrix[i, j])
</code></pre> <p>I was trying to find something in the numpy documentation that will allow me to do this but I haven't found anything.</p> <p>Edit: As I mentioned in the comments, specifically the function I need to work with is the sigmoid function, <code>f(z) = 1 / (1 + exp(-z))</code>.</p>
1
2016-07-26T18:41:53Z
38,600,740
<p>If <code>foo</code> is really a black box that takes a scalar, and returns a scalar, then you must use some sort of iteration. People often try <code>np.vectorize</code> and realize that, as documented, it does not speed things up much. It is most valuable as a way of broadcasting several inputs. It uses <code>np.frompyfunc</code>, which is slightly faster, but with a less convenient interface.</p> <p>The proper numpy way is to change your function so it works with arrays. That shouldn't be hard to do with the function in your comments</p> <pre><code>f(z) = 1 / (1 + exp(-z)) </code></pre> <p>There's a <code>np.exp</code> function. The rest is simple math. </p>
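As a concrete sketch of that advice, the sigmoid from the question can be written directly against arrays — <code>np.exp</code> is applied elementwise, so no explicit loop (and no <code>np.vectorize</code>) is needed. A plain ndarray is used here rather than <code>np.matrix</code>:

```python
import numpy as np

def sigmoid(z):
    # works for scalars and arrays alike: np.exp broadcasts over every element
    return 1.0 / (1.0 + np.exp(-z))

big_matrix = np.ones((1000, 1000))
result = sigmoid(big_matrix)   # one vectorized pass, no Python-level loop

print(result[0, 0])            # sigmoid(1) ~ 0.7310585786300049
print(sigmoid(0.0))            # 0.5
```

The same function keeps working if you later pass it a scalar, a vector, or a matrix, which is exactly the "change your function so it works with arrays" approach described above.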
3
2016-07-26T22:13:56Z
[ "python", "arrays", "numpy", "matrix" ]
Cx_oracle in Python giving "missing expression error" - date and timestamp literals?
38,597,620
<p>I have a chunk of code that is supporting a procedure for making survey records. 1 line is causing it to fail and I can't figure out why. Here is a snippet:</p> <pre><code>data = [4, 1, 5, '2015-01-01', '2016-07-26 14:32:19']

insert_query2 = 'INSERT INTO SURVEY_RESPONSE (ID, SURVEY_ID, RESPONDENT_ID, DATE_TAKEN, DATE_ENTERED) VALUES (:sr_id , :survey_id , :respondent_id , DATE :dt , TIMESTAMP :ts )'

cursor.prepare(insert_query)
cursor.execute(None, {"sr_id":data[0], "survey_id":data[1], "respondent_id":data[2], "dt":data[3], "ts":data[4]})
</code></pre> <p>So my data is packed into a list and then I try to pass each list element as a parameter. I was trying to pass the row itself as I've seen in examples but was still getting the error:</p> <pre><code>File "myfile.py", line 95, in insert_new_survey_response
    cursor.execute(insert_query2, {"sr_id":data[0], "survey_id":data[1], "respondent_id":data[2], "dt":data[3], "ts":data[4]})
cx_Oracle.DatabaseError: ORA-00936: missing expression
</code></pre> <p>Now I know that the line works, if I were to copy it into my DBMS and hardcode the values I'm passing, it works. The columns are right and all my datatypes are right. </p> <p>I think it is the way I am trying to use the DATE and TIMESTAMP literals. That's how it's said to be done in the Oracle documentation but I'm not sure if it's supported in cx_Oracle. Is there a way to get around needing them? The fields themselves are dates and timestamp datatypes so I thought this was correct.</p>
0
2016-07-26T18:44:02Z
38,597,766
<p>I just found the answer: the <code>DATE</code>/<code>TIMESTAMP</code> literal syntax only accepts quoted literal strings, not bind variables, so <code>to_date()</code> and <code>to_timestamp()</code> with explicit format masks are the correct way to pass dates and timestamps as parameters.</p> <p>Discussion here: <a href="http://stackoverflow.com/questions/13518506/oracle-literal-does-not-match-format-string-error">Oracle - literal does not match format string error</a></p> <p>I'm going to leave it up for now in case someone else runs into this problem too :)</p>
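A hedged sketch of what the fixed statement might look like — the table, columns, and bind names are taken from the question, the format masks are an assumption that must match the strings actually being passed, and no database connection is made here:

```python
# Sketch only: assumes the SURVEY_RESPONSE table from the question.
data = [4, 1, 5, '2015-01-01', '2016-07-26 14:32:19']

insert_query = (
    "INSERT INTO SURVEY_RESPONSE "
    "(ID, SURVEY_ID, RESPONDENT_ID, DATE_TAKEN, DATE_ENTERED) "
    "VALUES (:sr_id, :survey_id, :respondent_id, "
    "TO_DATE(:dt, 'YYYY-MM-DD'), "
    "TO_TIMESTAMP(:ts, 'YYYY-MM-DD HH24:MI:SS'))"
)

params = {"sr_id": data[0], "survey_id": data[1], "respondent_id": data[2],
          "dt": data[3], "ts": data[4]}

# with a real cursor: cursor.execute(insert_query, params)
print(insert_query)
```

Note that cx_Oracle can also bind Python <code>datetime</code>/<code>date</code> objects directly to DATE and TIMESTAMP columns, in which case no <code>TO_DATE</code>/<code>TO_TIMESTAMP</code> conversion is needed at all.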
0
2016-07-26T18:52:58Z
[ "python", "python-3.x", "cx-oracle", "ora-00936" ]
Image skewing when applying cartopy transformations
38,597,656
<p>I'm doing some plotting using cartopy and matplotlib, and I was recently using the PlateCarree transformation, but I changed to Mercator because Louisiana was a bit too squished for my liking. Prior to the switch, I had a logo displayed in the bottom left corner, using these two lines of code:</p> <pre><code>logo = matplotlib.image.imread('/Users/ian/Desktop/M.png')
plt.imshow(logo, extent=(lon-offset -1 + .25, lon - offset + .75, lat - offset + .25, lat - offset + 1 + .75), zorder=35)
</code></pre> <p>where the extent of the axis was set using these points</p> <pre><code>ax.set_extent([lon-offset-1, lon+offset+1, lat-offset, lat+offset])
</code></pre> <p>this is what the plot looked like using PlateCarree: <a href="http://i.stack.imgur.com/lLIFv.png" rel="nofollow"><img src="http://i.stack.imgur.com/lLIFv.png" alt="enter image description here"></a></p> <p>After switching to Mercator, I've gotten everything to work well except for the logo. I've added the transformation keyword argument to the image plotting line, so now it reads:</p> <pre><code>plt.imshow(logo, extent =(lon-offset -1 +.25, lon - offset + .75, lat - offset + .25, lat - offset + 1 + .75), zorder=35, transform=ccrs.PlateCarree())
</code></pre> <p>but now the logo has lost its crispness and become skewed with the transformation, and most mysteriously, it has switched corners to the upper left corner of the plot. It now looks like this: <a href="http://i.stack.imgur.com/IxIHl.png" rel="nofollow"><img src="http://i.stack.imgur.com/IxIHl.png" alt="enter image description here"></a></p> <p>Does anyone know how I can change the projection of my plot without skewing this image? What I really need to do is make sure that the corner of the image is in the corner of the plot in the transformed coordinate system, but leave the rest of the image's placement independent of the coordinate system. I was thinking about possibly putting the image all alone in its own separate subplot, and then trying to place that subplot directly on top of the main one. Seems like a pretty bad solution though. Thanks!</p>
1
2016-07-26T18:46:22Z
38,598,940
<p>You might get better results if you plot the image logo in axes coordinates rather than data coordinates. You can use the <code>ax.transAxes</code> transform for this, and specify the extent in axes coordinates ([0, 0] in bottom left, [1, 1] in top right).</p>
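A minimal sketch of that idea with plain matplotlib (shown here without cartopy; the same <code>transform</code> keyword works on a cartopy GeoAxes, since it is a matplotlib axes subclass). The logo's extent is given in axes-fraction coordinates, so it stays pinned to the corner no matter which map projection the data uses — the random array below is just a stand-in for the real <code>imread('M.png')</code> logo:

```python
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 5])       # stand-in for the map content

logo = np.random.rand(20, 20)  # stand-in for matplotlib.image.imread('M.png')

# extent in axes coordinates: (0, 0) = bottom-left, (1, 1) = top-right
ax.imshow(logo, transform=ax.transAxes,
          extent=(0.02, 0.22, 0.02, 0.22),  # bottom-left corner, ~20% of axes
          zorder=35, aspect='auto')

# restore the data limits, since imshow may try to autoscale to its extent
ax.set_xlim(0, 10)
ax.set_ylim(0, 5)

fig.savefig("sketch.png")
```

Because the image no longer lives in data coordinates, it is not reprojected (so no skew), and it keeps its position if the map extent or projection changes.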
2
2016-07-26T20:04:08Z
[ "python", "image", "matplotlib", "transformation", "cartopy" ]
Django Datatables from model
38,597,764
<p>I have a simple model in my django app that I am displaying as a table. I would like to change the table to a datatable. However, even after following all the steps listed on the datatables.net website I can't seem to convert my table into a datatable:</p> <p><strong>models.py</strong></p> <pre><code>from __future__ import unicode_literals
from django.db import models

class Profile(models.Model):
    name = models.CharField(max_length=128)
    email = models.EmailField(null=False)
    city = models.CharField(null=False, max_length=128)

    def __unicode__(self):
        return self.name
</code></pre> <p><strong>views.py</strong></p> <pre><code>from django.shortcuts import render
from django.http import HttpResponse
from vc4u.models import Profile

# Create your views here.
def index(request):
    user_list = Profile.objects.order_by('name')
    context_dict = {'user_list': user_list}
    return render(request, 'index.html', context_dict)
</code></pre> <p><strong>index.html</strong></p> <pre><code>&lt;div class="container"&gt;
{% if user_list %}
    &lt;table class="table table-bordered table-striped" id="users"&gt;
        &lt;tr&gt;
            &lt;th&gt;Name&lt;/th&gt;
            &lt;th&gt;Email&lt;/th&gt;
            &lt;th&gt;City&lt;/th&gt;
        &lt;/tr&gt;
        {% for users in user_list %}
        &lt;tr&gt;
            &lt;td&gt;{{ users.name }}&lt;/td&gt;
            &lt;td&gt;{{ users.email }}&lt;/td&gt;
            &lt;td&gt;{{ users.city }}&lt;/td&gt;
        &lt;/tr&gt;
        {% endfor %}
    &lt;/table&gt;
{% else %}
    &lt;strong&gt;There are no users.&lt;/strong&gt;
{% endif %}
&lt;/div&gt;
</code></pre> <p>I've posted the CSS CDN at the top of the page and the jQuery and datatable JS CDNs at the bottom of the page with this script:</p> <pre><code>$(document).ready(function(){
    $('#users').DataTable();
});
</code></pre> <p>This doesn't seem to work, and I had a look at the console in developer tools and no errors are listed.</p>
0
2016-07-26T18:52:47Z
38,597,883
<p>Unless your table has an id of <code>myTable</code>, then it's not going to work. It looks like you should change your jquery selector to be <code>$('#users')</code> because that's the table id that you have set in the code posted above.</p>
0
2016-07-26T18:59:53Z
[ "jquery", "python", "django", "datatable" ]
Django Datatables from model
38,597,764
<p>I have a simple model in my django app that I am displaying as a table. I would like to change the table to a datatable. However, even after following all the steps listed on the datatables.net website I can't seem to convert my table into a datatable:</p> <p><strong>models.py</strong></p> <pre><code>from __future__ import unicode_literals
from django.db import models

class Profile(models.Model):
    name = models.CharField(max_length=128)
    email = models.EmailField(null=False)
    city = models.CharField(null=False, max_length=128)

    def __unicode__(self):
        return self.name
</code></pre> <p><strong>views.py</strong></p> <pre><code>from django.shortcuts import render
from django.http import HttpResponse
from vc4u.models import Profile

# Create your views here.
def index(request):
    user_list = Profile.objects.order_by('name')
    context_dict = {'user_list': user_list}
    return render(request, 'index.html', context_dict)
</code></pre> <p><strong>index.html</strong></p> <pre><code>&lt;div class="container"&gt;
{% if user_list %}
    &lt;table class="table table-bordered table-striped" id="users"&gt;
        &lt;tr&gt;
            &lt;th&gt;Name&lt;/th&gt;
            &lt;th&gt;Email&lt;/th&gt;
            &lt;th&gt;City&lt;/th&gt;
        &lt;/tr&gt;
        {% for users in user_list %}
        &lt;tr&gt;
            &lt;td&gt;{{ users.name }}&lt;/td&gt;
            &lt;td&gt;{{ users.email }}&lt;/td&gt;
            &lt;td&gt;{{ users.city }}&lt;/td&gt;
        &lt;/tr&gt;
        {% endfor %}
    &lt;/table&gt;
{% else %}
    &lt;strong&gt;There are no users.&lt;/strong&gt;
{% endif %}
&lt;/div&gt;
</code></pre> <p>I've posted the CSS CDN at the top of the page and the jQuery and datatable JS CDNs at the bottom of the page with this script:</p> <pre><code>$(document).ready(function(){
    $('#users').DataTable();
});
</code></pre> <p>This doesn't seem to work, and I had a look at the console in developer tools and no errors are listed.</p>
0
2016-07-26T18:52:47Z
38,598,154
<p>From the <a href="https://datatables.net/manual/installation" rel="nofollow">datatables manual</a>:</p> <blockquote> <p>For DataTables to be able to enhance an HTML table, the table must be valid, well formatted HTML, with a header (thead) and a body (tbody). An optional footer (tfoot) can also be used.</p> </blockquote> <p>so your table is missing the required <code>thead</code> and <code>tbody</code> tags.</p>
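Applied to the template in the question, the table markup would be restructured along these lines — the loop and variable names are the ones from the question; only the <code>thead</code>/<code>tbody</code> wrappers are new:

```html
<table class="table table-bordered table-striped" id="users">
  <thead>
    <tr>
      <th>Name</th>
      <th>Email</th>
      <th>City</th>
    </tr>
  </thead>
  <tbody>
    {% for users in user_list %}
    <tr>
      <td>{{ users.name }}</td>
      <td>{{ users.email }}</td>
      <td>{{ users.city }}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>
```

With the header rows inside <code>thead</code>, DataTables knows which rows are column headers and which rows are sortable data.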
1
2016-07-26T19:16:03Z
[ "jquery", "python", "django", "datatable" ]
Django Datatables from model
38,597,764
<p>I have a simple model in my django app that I am displaying as a table. I would like to change the table to a datatable. However, even after following all the steps listed on the datatables.net website I can't seem to convert my table into a datatable:</p> <p><strong>models.py</strong></p> <pre><code>from __future__ import unicode_literals
from django.db import models

class Profile(models.Model):
    name = models.CharField(max_length=128)
    email = models.EmailField(null=False)
    city = models.CharField(null=False, max_length=128)

    def __unicode__(self):
        return self.name
</code></pre> <p><strong>views.py</strong></p> <pre><code>from django.shortcuts import render
from django.http import HttpResponse
from vc4u.models import Profile

# Create your views here.
def index(request):
    user_list = Profile.objects.order_by('name')
    context_dict = {'user_list': user_list}
    return render(request, 'index.html', context_dict)
</code></pre> <p><strong>index.html</strong></p> <pre><code>&lt;div class="container"&gt;
{% if user_list %}
    &lt;table class="table table-bordered table-striped" id="users"&gt;
        &lt;tr&gt;
            &lt;th&gt;Name&lt;/th&gt;
            &lt;th&gt;Email&lt;/th&gt;
            &lt;th&gt;City&lt;/th&gt;
        &lt;/tr&gt;
        {% for users in user_list %}
        &lt;tr&gt;
            &lt;td&gt;{{ users.name }}&lt;/td&gt;
            &lt;td&gt;{{ users.email }}&lt;/td&gt;
            &lt;td&gt;{{ users.city }}&lt;/td&gt;
        &lt;/tr&gt;
        {% endfor %}
    &lt;/table&gt;
{% else %}
    &lt;strong&gt;There are no users.&lt;/strong&gt;
{% endif %}
&lt;/div&gt;
</code></pre> <p>I've posted the CSS CDN at the top of the page and the jQuery and datatable JS CDNs at the bottom of the page with this script:</p> <pre><code>$(document).ready(function(){
    $('#users').DataTable();
});
</code></pre> <p>This doesn't seem to work, and I had a look at the console in developer tools and no errors are listed.</p>
0
2016-07-26T18:52:47Z
39,442,713
<p>In addition to the table thead and tbody comment, jQuery DataTables will automatically sort your table by the first column if you are using the default DataTables constructor.</p>
0
2016-09-12T02:42:21Z
[ "jquery", "python", "django", "datatable" ]
create user input directory name in python
38,597,824
<p>I am trying to generate a python script that takes 3 input arguments and creates a directory whose name is a concatenation of the 3 arguments. The command I give is <code>python new.py user1 32 male</code> and I should get a directory created with the name user1_32_male, but I am getting a directory named <code>user_name + "_" + age + "_" + gender</code>. Can someone please correct my code?</p> <pre><code>#!/usr/bin/python
import pexpect
import numpy as np
#import matplotlib.pyplot as plt
#import pylab as p
from math import *
from sys import argv
import os.path
import numpy as np
import os, sys

#print "Hello, Python!"
script, user_name, age, gender = argv
dirname = user_name + "_" + age + "_" + gender
newpath = r'./dirname'
if not os.path.exists(newpath):
    os.makedirs(newpath)
</code></pre>
1
2016-07-26T18:56:26Z
38,597,896
<p>You put the expression you want to <em>evaluate</em> to the directory name you want in quotes, so it doesn't get evaluated. Try:</p> <pre><code>newpath = r'./' + user_name + "_" + age + "_" + gender </code></pre>
0
2016-07-26T19:00:37Z
[ "python" ]
create user input directory name in python
38,597,824
<p>I am trying to generate a python script that takes 3 input arguments and creates a directory whose name is a concatenation of the 3 arguments. The command I give is <code>python new.py user1 32 male</code> and I should get a directory created with the name user1_32_male, but I am getting a directory named <code>user_name + "_" + age + "_" + gender</code>. Can someone please correct my code?</p> <pre><code>#!/usr/bin/python
import pexpect
import numpy as np
#import matplotlib.pyplot as plt
#import pylab as p
from math import *
from sys import argv
import os.path
import numpy as np
import os, sys

#print "Hello, Python!"
script, user_name, age, gender = argv
dirname = user_name + "_" + age + "_" + gender
newpath = r'./dirname'
if not os.path.exists(newpath):
    os.makedirs(newpath)
</code></pre>
1
2016-07-26T18:56:26Z
38,598,043
<p>First of all, I noticed that you're importing things more than once. There's no reason to import <code>os.path</code> since it's included in <code>os</code>. The same goes with <code>sys</code>.</p> <p>It's easier to use string substitution in cases like this. The tuple that comes after the <code>%</code> contains values that are substituted into the <code>%s</code> placeholders in the part before the <code>%</code>.</p> <pre><code>from sys import argv
import os.path

script, user_name, age, gender = argv
newpath = '%s_%s_%s' % (user_name, age, gender)
if not os.path.exists(newpath):
    os.makedirs(newpath)
</code></pre>
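To make the whole flow concrete, the same recipe can be packed into a small function. This sketch is hedged in one respect: it creates the directory under a temporary folder instead of the current one, so it can run anywhere without leaving files behind:

```python
import os
import tempfile

def make_user_dir(base, user_name, age, gender):
    # build "user1_32_male" from the three arguments
    dirname = '%s_%s_%s' % (user_name, age, gender)
    newpath = os.path.join(base, dirname)
    if not os.path.exists(newpath):
        os.makedirs(newpath)
    return newpath

base = tempfile.mkdtemp()   # stand-in for '.' so the sketch is self-cleaning
path = make_user_dir(base, 'user1', '32', 'male')
print(path)                 # ends with user1_32_male
```

In the real script, `base` would simply be `'.'` and the three values would come from `argv` as in the answer above.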
0
2016-07-26T19:08:51Z
[ "python" ]
Dictionary to JSON object conversion
38,597,831
<p>I have 2 long lists (extracted from a csv) both of the same index length. Example:</p> <pre><code>l1 = ['Apple','Tomato','Cocos'] #name of product l2 = ['1','2','3'] #some id's </code></pre> <p>I made my dictionary with this method:</p> <pre><code>from collections import defaultdict d = defaultdict(list) for x in l1: d['Product'].append(x) for y in l2: d['Plu'].append(y) print d </code></pre> <p>This will output:</p> <p><strong>{'Product': ['Apple', 'Tomato', 'Cocos'], 'Plu': ['1', '2', '3']}</strong></p> <p>(<code>Product</code> and <code>Plu</code> are my wanted keys)</p> <p>Now I've tried to import this to a JavaScript Object like this:</p> <pre><code>import json print(json.dumps(d, sort_keys=True, indent=4)) </code></pre> <p>This will output:</p> <pre class="lang-js prettyprint-override"><code>{ "Plu": [ "1", "2", "3" ], "Product": [ "Apple", "Tomato", "Cocos" ] } </code></pre> <p>But my desired output is this:</p> <pre class="lang-js prettyprint-override"><code> { Product:'Apple', Plu:'1' }, { Product:'Tomato', Plu:'2' }, { Product:'Cocos', Plu:'3' } </code></pre> <p>I will later use that to insert values in a MongoDB. What will I have to change in my json.dump (or in my dict?) in order to get a desired output? Also is there a way to save the output in a txt file? (since I will have a big code).</p>
-1
2016-07-26T18:56:49Z
38,597,851
<p>Rather than using a <code>defaultdict</code> (which doesn't buy you anything in this case), you're better off <code>zip</code>ping the lists and creating a <code>dict</code> from each pair:</p> <pre><code>[{'Product': product, 'Plu': plu} for product, plu in zip(l1, l2)] </code></pre>
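Putting it together with the lists from the question — the comprehension builds one small dict per row, which is exactly the list-of-documents shape MongoDB inserts expect:

```python
import json

l1 = ['Apple', 'Tomato', 'Cocos']   # product names
l2 = ['1', '2', '3']                # ids

# pair the lists element-by-element and make one dict per pair
records = [{'Product': product, 'Plu': plu} for product, plu in zip(l1, l2)]

print(json.dumps(records, indent=4))
# each element is its own object:
# {"Product": "Apple", "Plu": "1"}, {"Product": "Tomato", "Plu": "2"}, ...
```

From there the records can go straight into Mongo with something like `collection.insert_many(records)` (the pymongo call is an assumption here — the question only says Mongo is the eventual target).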
0
2016-07-26T18:58:14Z
[ "python", "json", "dictionary" ]
How are file objects cleaned up in Python when the process is killed?
38,597,876
<p>What happens to a file object in Python when the process is terminated? Does it matter whether Python is terminated with <code>SIGTERM</code>, <code>SIGKILL</code>, <code>SIGHUP</code> (etc.) or by a <code>KeyboardInterrupt</code> exception?</p> <p>I have some logging scripts that continually acquire data and write it to a file. I don't care about doing any extra clean up, but I just want to make sure that the log file is not corrupted when Python is abruptly terminated (e.g. I could leave it running in the background and just shut down the computer). I made the following test scripts to try to see what happens:</p> <p><code>termtest.sh</code>:</p> <pre><code>for i in $(seq 1 10); do
    python termtest.py $i &amp;
    export pypid=$!
    sleep 0.3
    echo $pypid
    kill -SIGTERM $pypid
done
</code></pre> <p><code>termtest.py</code>:</p> <pre><code>import csv
import os
import signal
import sys

end_loop = False

def handle_interrupt(*args):
    global end_loop
    end_loop = True

signal.signal(signal.SIGINT, handle_interrupt)

with open('test' + str(sys.argv[-1]) + '.txt', 'w') as csvfile:
    writer = csv.writer(csvfile)
    for idx in range(int(1e7)):
        writer.writerow((idx, 'a' * 60000))
        csvfile.flush()
        os.fsync(csvfile.fileno())
        if end_loop:
            break
</code></pre> <p>I ran <code>termtest.sh</code> with different signals (changed <code>SIGTERM</code> to <code>SIGINT</code>, <code>SIGHUP</code>, and <code>SIGKILL</code> in <code>termtest.sh</code>) (note: I put an explicit handler in <code>termtest.py</code> for <code>SIGINT</code> since Python does not handle that one other than as <code>Ctrl+C</code>). In all cases, all of the output files had only complete rows (no partial writes) and did not appear corrupted. I put the <code>flush()</code> and <code>fsync()</code> calls to try to make sure the data was being written to disk as much as possible so that the script had the greatest chance of being interrupted mid-write.</p> <p>So can I conclude that Python always completes a write when it is terminated and does not leave a file in an intermediate state? Or does this depend on the operating system and file system (I was testing with Linux and an ext4 partition)?</p>
1
2016-07-26T18:59:36Z
38,598,186
<p>It's not how files are "cleaned up" so much as how they are written to. It's possible that a program might perform multiple writes for a single "chunk" of data (row, or whatever) and you could interrupt in the middle of this process and end up with partial records written.</p> <p>Looking at the <a href="https://hg.python.org/cpython/file/tip/Modules/_csv.c" rel="nofollow">C source</a> for the <code>csv</code> module, it assembles each row to a string buffer, then writes that using a single <code>write()</code> call. That should generally be safe; either the row is passed to the OS or it's not, and if it gets to the OS it's all going to get written or it's not (barring of course things like hardware issues where part of it could go into a bad sector).</p> <p>The writer object is a Python object, and a custom writer could do something weird in its <code>write()</code> that could break this, but assuming it's a regular file object, it should be fine.</p>
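That single-write-per-row behaviour is easy to check from Python itself: wrap a file-like object that counts `write()` calls and see how many the csv writer makes for one row. (This observes the Python-level call only — whether the OS then commits those bytes atomically is still up to the kernel and filesystem. Python 3 syntax shown.)

```python
import csv
import io

class CountingFile(io.StringIO):
    # file-like object that counts how many times write() is invoked
    def __init__(self):
        super().__init__()
        self.write_calls = 0

    def write(self, data):
        self.write_calls += 1
        return super().write(data)

buf = CountingFile()
writer = csv.writer(buf)
writer.writerow((1, 'a' * 60000))   # one row, mirroring the question's payload

print(buf.write_calls)
```

On CPython the row is assembled into a buffer first and handed to `write()` as one call, matching what the C source shows.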
1
2016-07-26T19:18:16Z
[ "python", "io", "terminate" ]
Double Group-by then apply some functions?
38,597,921
<p>I have data that looks like this:</p> <pre><code>  country source
0      UK    Ads
1      US    Seo
2      US    Seo
3   China    Seo
4      US    Seo
5      US    Seo
6   China    Seo
7      US    Ads
</code></pre> <p>For each country I want to get the ratio of each source. I did a groupby on country and source and got the table below, which has the total counts for each source in each country, but I'm not sure how to go from here.</p> <pre><code>df.groupby(['country', 'source']).size()

country  source
China    Ads       21561
         Direct    17463
         Seo       37578
Germany  Ads        3760
         Direct     2864
         Seo        6432
UK       Ads       13518
         Direct    11131
         Seo       23801
US       Ads       49901
         Direct    40962
         Seo       87229
</code></pre> <p>I'm looking for something like this:</p> <pre><code>       Ads  SEO  Direct
US      .3   .1      .4
China   .5   .3      .2
UK      .5   .3      .6
</code></pre>
2
2016-07-26T19:01:51Z
38,598,087
<p>You can use <code>unstack</code> to transform the result from long to wide format and then calculate the ratio row by row using the <code>apply</code> method:</p> <pre><code>import pandas as pd

df1 = df.groupby(['country', 'source']).size().unstack(level=1, fill_value=0).apply(lambda r: r/r.sum(), axis=1)
df1
# source   Ads  Seo
# country
# China    0.0  1.0
# UK       1.0  0.0
# US       0.2  0.8
</code></pre>
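Run against the eight sample rows from the question, the same two-step recipe produces the per-country ratios directly (`fill_value=0` in `unstack` covers countries that never saw a given source):

```python
import pandas as pd

df = pd.DataFrame({
    'country': ['UK', 'US', 'US', 'China', 'US', 'US', 'China', 'US'],
    'source':  ['Ads', 'Seo', 'Seo', 'Seo', 'Seo', 'Seo', 'Seo', 'Ads'],
})

# count per (country, source), pivot sources into columns, normalize each row
ratios = (df.groupby(['country', 'source']).size()
            .unstack(level=1, fill_value=0)
            .apply(lambda r: r / r.sum(), axis=1))

print(ratios)
# source   Ads  Seo
# country
# China    0.0  1.0
# UK       1.0  0.0
# US       0.2  0.8
```

Each row sums to 1, since it is that country's source counts divided by the country's total.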
1
2016-07-26T19:11:59Z
[ "python", "pandas", "group-by", "aggregate-functions" ]
Double Group-by then apply some functions?
38,597,921
<p>I have data that looks like this:</p> <pre><code>  country source
0      UK    Ads
1      US    Seo
2      US    Seo
3   China    Seo
4      US    Seo
5      US    Seo
6   China    Seo
7      US    Ads
</code></pre> <p>For each country I want to get the ratio of each source. I did a groupby on country and source and got the table below, which has the total counts for each source in each country, but I'm not sure how to go from here.</p> <pre><code>df.groupby(['country', 'source']).size()

country  source
China    Ads       21561
         Direct    17463
         Seo       37578
Germany  Ads        3760
         Direct     2864
         Seo        6432
UK       Ads       13518
         Direct    11131
         Seo       23801
US       Ads       49901
         Direct    40962
         Seo       87229
</code></pre> <p>I'm looking for something like this:</p> <pre><code>       Ads  SEO  Direct
US      .3   .1      .4
China   .5   .3      .2
UK      .5   .3      .6
</code></pre>
2
2016-07-26T19:01:51Z
38,598,138
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>pd.crosstab</code></a> to perform the frequency counts, followed by <code>apply</code> to normalize:</p> <pre><code>df = pd.crosstab(df['country'], df['source']).apply(lambda r: r/r.sum(), axis=1) </code></pre>
1
2016-07-26T19:15:12Z
[ "python", "pandas", "group-by", "aggregate-functions" ]
Double Group-by then apply some functions?
38,597,921
<p>I have data that looks like this:</p> <pre><code>  country source
0      UK    Ads
1      US    Seo
2      US    Seo
3   China    Seo
4      US    Seo
5      US    Seo
6   China    Seo
7      US    Ads
</code></pre> <p>For each country I want to get the ratio of each source. I did a groupby on country and source and got the table below, which has the total counts for each source in each country, but I'm not sure how to go from here.</p> <pre><code>df.groupby(['country', 'source']).size()

country  source
China    Ads       21561
         Direct    17463
         Seo       37578
Germany  Ads        3760
         Direct     2864
         Seo        6432
UK       Ads       13518
         Direct    11131
         Seo       23801
US       Ads       49901
         Direct    40962
         Seo       87229
</code></pre> <p>I'm looking for something like this:</p> <pre><code>       Ads  SEO  Direct
US      .3   .1      .4
China   .5   .3      .2
UK      .5   .3      .6
</code></pre>
2
2016-07-26T19:01:51Z
38,598,272
<h3>Large sample set</h3> <pre><code>np.random.seed([3,1415])
n = 100000
df = pd.DataFrame(
    dict(country=np.random.choice(('UK', 'US', 'China'), n),
         source=np.random.choice(('Ads', 'Seo', 'Direct'), n)))
</code></pre> <h3>Solution</h3> <pre><code>size = df.groupby(['country', 'source']).size().unstack()
size.div(size.sum(1), axis=0)
</code></pre> <p><a href="http://i.stack.imgur.com/vHciW.png" rel="nofollow"><img src="http://i.stack.imgur.com/vHciW.png" alt="enter image description here"></a></p> <hr> <h3>Timing</h3> <p><strong>using data from this post</strong></p> <p><a href="http://i.stack.imgur.com/xEoI4.png" rel="nofollow"><img src="http://i.stack.imgur.com/xEoI4.png" alt="enter image description here"></a></p>
1
2016-07-26T19:23:11Z
[ "python", "pandas", "group-by", "aggregate-functions" ]
Match lists based on Name and DOB
38,597,953
<p>This seems like it should be easy, but I can't seem to find what I'm looking for...I have two lists of people, FirstName, LastName, Date of Birth, and I just want to know which people are in both lists, and which ones are in one but not the other. </p> <p>I've tried something like </p> <pre><code>common = pd.merge(list1, list2, how='left',
                  left_on=['Last', 'First', 'DOB'],
                  right_on=['Patient Last Name', 'Patient First Name', 'Date of Birth']).dropna()
</code></pre> <p>Based on something else I found online, but it gives me this error:</p> <pre><code>KeyError: 'Date of Birth'
</code></pre> <p>I've verified that that is indeed the column heading in the second list, so I don't get what's wrong. Anyone do matching like this? What's the easiest/fastest way? The names may have different formatting between lists, like "Smith-Jones" vs. "SmithJones" vs. "Smith Jones", but I get around that by stripping all spaces and punctuation from the names...I assume that's a first good step?</p>
-1
2016-07-26T19:03:41Z
38,599,668
<p>Try this, it should work:</p> <pre><code>import sys
from StringIO import StringIO
import pandas as pd

TESTDATA = StringIO("""DOB;First;Last
2016-07-26;John;smith
2016-07-27;Mathew;George
2016-07-28;Aryan;Singh
2016-07-29;Ella;Gayau
""")
list1 = pd.read_csv(TESTDATA, sep=";")

TESTDATA = StringIO("""Date of Birth;Patient First Name;Patient Last Name
2016-07-26;John;smith
2016-07-27;Mathew;XXX
2016-07-28;Aryan;Singh
2016-07-20;Ella;Gayau
""")
list2 = pd.read_csv(TESTDATA, sep=";")

print list2
print list1

common = pd.merge(list1, list2, how='left',
                  left_on=['Last', 'First', 'DOB'],
                  right_on=['Patient Last Name', 'Patient First Name', 'Date of Birth']).dropna()
print common
</code></pre>
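For the second half of the question — which people are only in one of the lists — pandas' `indicator` flag on an outer merge labels every row as `'both'`, `'left_only'` or `'right_only'` in one pass. A sketch with made-up names (the `left_on`/`right_on` columns mirror the question):

```python
import pandas as pd

list1 = pd.DataFrame({'First': ['John', 'Mary', 'Ann'],
                      'Last': ['Smith', 'Jones', 'Lee'],
                      'DOB': ['1980-01-01', '1975-05-05', '1990-09-09']})

list2 = pd.DataFrame({'Patient First Name': ['John', 'Mary', 'Bob'],
                      'Patient Last Name': ['Smith', 'Jones', 'King'],
                      'Date of Birth': ['1980-01-01', '1975-05-05', '1966-03-03']})

merged = pd.merge(list1, list2, how='outer',
                  left_on=['Last', 'First', 'DOB'],
                  right_on=['Patient Last Name', 'Patient First Name', 'Date of Birth'],
                  indicator=True)

in_both = merged[merged['_merge'] == 'both']
only_in_1 = merged[merged['_merge'] == 'left_only']
only_in_2 = merged[merged['_merge'] == 'right_only']

print(len(in_both), len(only_in_1), len(only_in_2))   # 2 1 1
```

Normalizing the name columns (stripping spaces and punctuation, lower-casing) before the merge, as the question suggests, is a sensible first step so that "Smith-Jones" and "Smith Jones" compare equal.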
0
2016-07-26T20:51:42Z
[ "python" ]
How to do a regex replace *and* capture in python?
38,597,979
<p>I have a custom data format with open/close tags that I need to parse, e.g.:</p> <pre><code>&lt;t1&gt; 15 &lt;/t1&gt; &lt;t2&gt; 25 &lt;/t2&gt;
</code></pre> <p>Tags are never nested, but I don't know the tag names in advance. I can't count on data conforming to XML (e.g. may have "&lt;" or ">" characters between tags) so I can't use common XML parsers. I can assume that close tags are always in place, and that there aren't duplicate tags in the same file.</p> <p>I'm a perl guy, but I need to code this parser in python. The easiest way to do this in perl is to do substitutions off the front, pulling the next open/close tag and everything inside, and then capturing the replaced text:</p> <pre><code>****************************************
#! /usr/bin/perl -w

%tags = ();
$_ = "&lt;t1&gt; 15 &lt;/t1&gt; &lt;t2&gt; 25 &lt;/t2&gt;";

#         &lt; t1 &gt;  15   &lt; / t1 &gt;
while ( s| \s* &lt; ([^&gt;]+) &gt; \s* (.+\S) \s* &lt; / \1 &gt; ||x ) {
    $tags{$1} = $2;
}

print "$_: $tags{$_}\n" for sort keys(%tags);
****************************************
</code></pre> <p>I can't find any ability in vanilla Python to access captures after using re.sub(), and this whole problem seems <em>way</em> harder. What am I missing here?</p>
-1
2016-07-26T19:05:47Z
38,598,388
<p>Setup (I kept your regex but added a few words to the string so we can see that the replacing actually works):</p> <pre><code>&gt;&gt;&gt; import re
&gt;&gt;&gt; s = 'front &lt;t1&gt; 15 &lt;/t1&gt; middle &lt;t2&gt; 25 &lt;/t2&gt; back'
&gt;&gt;&gt; p = r'\s* &lt; ([^&gt;]+) &gt; \s* (.+\S) \s* &lt; / \1 &gt;'
</code></pre> <p>If you don't mind the double scan, you could first run <code>re.findall</code> to capture and then run <code>re.sub</code> to replace.</p> <pre><code>&gt;&gt;&gt; dict(re.findall(p, s, re.X))
{'t1': '15', 't2': '25'}
&gt;&gt;&gt; re.sub(p, '', s, flags=re.X)
'front middle back'
</code></pre> <p>Or use a function for the replacement, for example:</p> <pre><code>&gt;&gt;&gt; d = {}
&gt;&gt;&gt; re.sub(p, lambda m: d.update([m.groups()]) or '', s, flags=re.X)
'front middle back'
&gt;&gt;&gt; d
{'t1': '15', 't2': '25'}
</code></pre>
0
2016-07-26T19:30:30Z
[ "python", "regex" ]
How to do a regex replace *and* capture in python?
38,597,979
<p>I have a custom data format with open/close tags that I need to parse, e.g.:</p> <pre><code>&lt;t1&gt; 15 &lt;/t1&gt; &lt;t2&gt; 25 &lt;/t2&gt;
</code></pre> <p>Tags are never nested, but I don't know the tag names in advance. I can't count on data conforming to XML (e.g. may have "&lt;" or ">" characters between tags) so I can't use common XML parsers. I can assume that close tags are always in place, and that there aren't duplicate tags in the same file.</p> <p>I'm a perl guy, but I need to code this parser in python. The easiest way to do this in perl is to do substitutions off the front, pulling the next open/close tag and everything inside, and then capturing the replaced text:</p> <pre><code>****************************************
#! /usr/bin/perl -w

%tags = ();
$_ = "&lt;t1&gt; 15 &lt;/t1&gt; &lt;t2&gt; 25 &lt;/t2&gt;";

#         &lt; t1 &gt;  15   &lt; / t1 &gt;
while ( s| \s* &lt; ([^&gt;]+) &gt; \s* (.+\S) \s* &lt; / \1 &gt; ||x ) {
    $tags{$1} = $2;
}

print "$_: $tags{$_}\n" for sort keys(%tags);
****************************************
</code></pre> <p>I can't find any ability in vanilla Python to access captures after using re.sub(), and this whole problem seems <em>way</em> harder. What am I missing here?</p>
-1
2016-07-26T19:05:47Z
38,598,427
<p>You don't need the substitution in Python. Use <code>re.findall()</code> or <code>re.finditer()</code>, like so:</p> <pre><code>import re with open('input.txt') as input_file: data = input_file.read() tags = {} for match in re.finditer(r'&lt;\s*(.*?)\s*&gt;\s*(.*?)\s*&lt;/\1&gt;', data): tags[match.group(1)] = match.group(2) print tags </code></pre> <p>The <code>for</code> loop can be replaced by a dict comprehension. The following is equivalent to what I wrote above.</p> <pre><code>tags = dict(re.findall(r'&lt;\s*(.*?)\s*&gt;\s*(.*?)\s*&lt;/\1&gt;', data)) print tags </code></pre>
2
2016-07-26T19:32:38Z
[ "python", "regex" ]
Passing node object as parameter in graph query Py2neo
38,598,024
<p>I have the following code in which I am obtaining a node. How can I pass it to graph.evaluate as a parameter? Is there a possible method to do so if this is incorrect? Or some alternative method?</p> <pre><code>user_node = selector.select("User", user_id=95) lib_node = graph.evaluate("match {param}-[:LISTENS_TO]-&gt;(p) return p", param=dict(user_node)) </code></pre> <p>The above throws a <code>ValueError</code>:</p> <pre><code>ValueError: dictionary update sequence element #0 has length 6; 2 is required </code></pre>
0
2016-07-26T19:07:40Z
38,611,159
<p>The <code>select</code> method returns a selection containing however many matches it finds. That may or may not be a sequence of one, but either way you'll need to use the <code>.first()</code> method to grab the first (and probably only) node returned.</p> <p><a href="http://py2neo.org/v3/database.html#py2neo.database.selection.NodeSelection.first" rel="nofollow">http://py2neo.org/v3/database.html#py2neo.database.selection.NodeSelection.first</a></p>
0
2016-07-27T11:03:24Z
[ "python", "neo4j", "py2neo" ]
how to programmatically open an executable in program files?
38,598,034
<p>I am trying to open an executable as follows and running into the error below. How do I take care of the spaces between <code>Program Files</code> and open this executable?</p> <pre><code>C:\Windows\system32&gt;python Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import os &gt;&gt;&gt; os.system("C:\\Program Files (x86)\\company\\POST\\bin\\POSTConfig.exe") 'C:\Program' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p><strong>UPDATE:-</strong></p> <p><strong><em>Also, I want the python process to terminate while the POSTConfig.exe process continues</em></strong></p>
0
2016-07-26T19:08:08Z
38,598,208
<p>Please run as below:</p> <pre><code>os.system(r'"C:\Program Files (x86)\company\POST\bin\POSTConfig.exe"') </code></pre>
0
2016-07-26T19:19:42Z
[ "python" ]
how to programmatically open an executable in program files?
38,598,034
<p>I am trying to open an executable as follows and running into the error below. How do I take care of the spaces between <code>Program Files</code> and open this executable?</p> <pre><code>C:\Windows\system32&gt;python Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import os &gt;&gt;&gt; os.system("C:\\Program Files (x86)\\company\\POST\\bin\\POSTConfig.exe") 'C:\Program' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p><strong>UPDATE:-</strong></p> <p><strong><em>Also, I want the python process to terminate while the POSTConfig.exe process continues</em></strong></p>
0
2016-07-26T19:08:08Z
38,598,211
<p>Use a raw string, imbed double quotes inside single quotes:</p> <pre><code>os.system(r'"C:\Program Files (x86)\company\POST\bin\POSTConfig.exe"') </code></pre> <p>But please look at the <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess</a> module instead.</p>
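A sketch of why the list form the `subprocess` module uses sidesteps the quoting problem entirely (the path is the asker's; the `Popen` call is shown commented out since it only makes sense on the asker's Windows machine). `subprocess.list2cmdline` exposes the quoting that `subprocess` applies when building a Windows command line, and `Popen` returns immediately, which also addresses the UPDATE: the Python process can exit while POSTConfig.exe keeps running.

```python
import subprocess

# The asker's path; list2cmdline shows the quoting that subprocess
# applies when it builds a Windows command line from a list of args.
cmd = [r"C:\Program Files (x86)\company\POST\bin\POSTConfig.exe"]
print(subprocess.list2cmdline(cmd))
# The argument contains spaces, so it comes back wrapped in double quotes.

# With the list form there is nothing to escape by hand; to launch without
# blocking (per the UPDATE), you would use Popen instead of os.system:
# subprocess.Popen(cmd)  # returns immediately; the child keeps running
```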
0
2016-07-26T19:19:52Z
[ "python" ]
Can't draw function using python
38,598,110
<p>I have made a function RC(n) that given any n changes the digits of n according to a rule. The function is the following </p> <pre><code>def cfr(n): return len(str(n))-1 def n_cfr(k,n): J=str(k) if "." in J: J2=J.replace(".", "") return J2[n-1] else: return J[n] def RC(n): if "." not in str(n): return n+1 sum=0 val=0 for a in range(1,cfr(n)+1): O=(int(n_cfr(n,a)))*10**(-a+1) if int(n_cfr(n,a))==9: val=0 else: val=O+10**(-a+1) sum=sum+val return sum </code></pre> <p>I would like to draw this function for non-integers values of n. A friend gave me this code that he used in other functions but it doesn't seem to work for me:</p> <pre><code>def draw(f,a,b,res): import numpy as np import matplotlib.pyplot as plt x=[a+(b-a)*i/res for i in range(0,res)] y=[f(elm) for elm in x] plt.plot(np.asarray(x), np.asarray(y)) plt.show() </code></pre> <p>I'm not familiar with plotting functions using python so could anyone give me some help? Thanks in advance</p>
0
2016-07-26T19:13:44Z
38,598,308
<p>The line in your function should be <code>x = list(range(a, b, res))</code>; the first two arguments of <code>range</code> are <code>start</code> and <code>stop</code>. Here is a better version of draw:</p> <pre><code>def draw(f, a, b, res): import numpy as np import matplotlib.pyplot as plt x = list(range(a, b, res)) plt.plot(x, map(f, x)) plt.show() </code></pre>
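Note that `range` only accepts integers, so the version above only works for integer bounds and step. For plotting over an arbitrary float interval, the asker's original list comprehension is the right idea; it computes the same points as `numpy.linspace` with `endpoint=False`. A small sketch with a toy `f` (the function name `sample_points` is just for illustration):

```python
import numpy as np

def sample_points(f, a, b, res):
    """Evaluate f at `res` evenly spaced points in [a, b)."""
    x = [a + (b - a) * i / float(res) for i in range(res)]  # the asker's idea
    return x, [f(v) for v in x]

x, y = sample_points(lambda v: v * v, 0.0, 1.0, 5)
print(x)  # five evenly spaced points starting at 0.0
assert np.allclose(x, np.linspace(0.0, 1.0, 5, endpoint=False))
```

Passing these two lists to `plt.plot(x, y)` then works for any float `a` and `b`.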
1
2016-07-26T19:25:31Z
[ "python", "graphing" ]
Issue when setting-up wsgi as an alias path in apache
38,598,157
<p>NOTE: This question is different from '<a href="http://stackoverflow.com/questions/18967441/add-a-prefix-to-all-flask-routes">Add a prefix to all Flask routes</a>' as I am trying to resolve this at apache level. Additional, the suggested fix for flask routes did not work!</p> <p>Following on from <a href="http://stackoverflow.com/a/1020696/2761030">this post</a>, I'm trying to set up apache to serve PHP files by default, but point a given alias (i.e. <code>/flaskapp</code>) to a wsgi path. The wsgi file in turn routes requests to a python flask app.</p> <p>Here's the apache config that I'm trying (under <code>000-default.conf</code>):</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName localhost ServerAdmin webmaster@localhost Alias / /var/www/html/ &lt;Directory "/var/www/html"&gt; Order Deny,Allow Allow from all Require all granted &lt;/Directory&gt; WSGIScriptAlias /flaskapp "/var/www/flaskapp/deploy.wsgi" &lt;Directory /var/www/flaskapp&gt; Options +ExecCGI Order allow,deny Allow from all &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre> <p>After doing a <code>service apache2 restart</code> I find that requests to <code>http://myip/flaskapp</code> result in a 404 error. Everything else works fine.</p> <p>Things I've tried so far:</p> <ul> <li>Double checking all the file and folder paths (no issues found)</li> <li>Using the wsgi part of the above code to set up the wsgi app as a standalone virtualhost (works fine)</li> <li>Adding <code>app.config['APPLICATION_ROOT'] = '/flaskapp'</code> to my app.py file, as suggested the question '<a href="http://stackoverflow.com/questions/18967441/add-a-prefix-to-all-flask-routes">Add a prefix to all Flask routes</a>' (Didn't have any effect)</li> </ul> <p>Where could I be going wrong?</p>
0
2016-07-26T19:16:18Z
38,673,945
<p>Instead of:</p> <pre><code> Alias / /var/www/html/ &lt;Directory "/var/www/html"&gt; Order Deny,Allow Allow from all Require all granted &lt;/Directory&gt; </code></pre> <p>use:</p> <pre><code> DocumentRoot /var/www/html/ &lt;Directory "/var/www/html"&gt; Order Deny,Allow Allow from all Require all granted &lt;/Directory&gt; </code></pre> <p>Using '/' with <code>Alias</code> takes precedence over everything else including mod_wsgi's ability to intercept requests at a sub URL. So for stuff at root of the site you need to use <code>DocumentRoot</code> directive.</p>
1
2016-07-30T12:26:01Z
[ "python", "apache", "flask", "virtualhost" ]
Odoo 8 store=False not working
38,598,158
<p>I'm creating a simple form that you have to enter a password so the action can be executed but I can't figure out how not to store the password in the database. When I put <strong>store=False</strong> my module doesn't compile. Also I would like to know if there's a way to hide the password while the user is writing it.</p> <p>The best would be to not create any table in the database but I need to create a model cause I have a button that call a method. I don't know if there's a way to avoid creating a table.</p> <p><strong>siteweb_migration_wizard.py</strong></p> <pre><code># -*- coding: utf-8 -*- from openerp import models, fields, api, tools class SitewebMigrationWizard(models.TransientModel): _name = 'siteweb.migration' password = fields.Char(string="Mot de passe", store=False) @api.multi def migration(self): password = self.password print(password) </code></pre> <p><strong>siteweb_migration_wizard.xml</strong></p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;openerp&gt; &lt;data&gt; &lt;record model="ir.ui.view" id="siteweb_migration_wizard_form"&gt; &lt;field name="name"&gt;siteweb.migration.form&lt;/field&gt; &lt;field name="model"&gt;siteweb.migration&lt;/field&gt; &lt;field name="type"&gt;form&lt;/field&gt; &lt;field name="arch" type="xml"&gt; &lt;form string="Migrer" version="8.0"&gt; &lt;p&gt;Voulez-vous vraiment migrer vers la BD du site?&lt;/p&gt; &lt;group&gt; &lt;field name="password"/&gt; &lt;/group&gt; &lt;button string="Confirmer" type="object" name="migration"/&gt; &lt;button string="Annuler" class="oe_highlight" special="cancel"/&gt; &lt;/form&gt; &lt;/field&gt; &lt;/record&gt; &lt;record id="action_siteweb_migration" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Migration du site&lt;/field&gt; &lt;field name="res_model"&gt;siteweb.migration&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_id" ref="siteweb_migration_wizard_form"/&gt; &lt;field name="multi"&gt;True&lt;/field&gt; 
&lt;field name="target"&gt;new&lt;/field&gt; &lt;/record&gt; &lt;menuitem action="action_siteweb_migration" id="menu_siteweb_migration" name="Migration du site" parent="siteweb_createch.menu_siteweb"/&gt; &lt;/data&gt; &lt;/openerp&gt; </code></pre>
0
2016-07-26T19:16:22Z
38,630,889
<p>Some points to keep in mind:</p> <ul> <li>"store" is a parameter for calculated fields. It defines whether the value should be calculated each time you open the record, or only when it's edited.</li> <li>After you've hit the confirm button it saves the current data. I'd suggest setting the password to False in the migration method</li> </ul> <blockquote> <pre><code>@api.multi def migration(self): password = self.password print(password) self.password = False </code></pre> </blockquote> <ul> <li>Even when it's stored, it's not stored for long. The scheduler that cleans transient models will delete the record after a while</li> </ul>
1
2016-07-28T08:24:58Z
[ "python", "xml", "odoo-8" ]
Odoo 8 store=False not working
38,598,158
<p>I'm creating a simple form that you have to enter a password so the action can be executed but I can't figure out how not to store the password in the database. When I put <strong>store=False</strong> my module doesn't compile. Also I would like to know if there's a way to hide the password while the user is writing it.</p> <p>The best would be to not create any table in the database but I need to create a model cause I have a button that call a method. I don't know if there's a way to avoid creating a table.</p> <p><strong>siteweb_migration_wizard.py</strong></p> <pre><code># -*- coding: utf-8 -*- from openerp import models, fields, api, tools class SitewebMigrationWizard(models.TransientModel): _name = 'siteweb.migration' password = fields.Char(string="Mot de passe", store=False) @api.multi def migration(self): password = self.password print(password) </code></pre> <p><strong>siteweb_migration_wizard.xml</strong></p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;openerp&gt; &lt;data&gt; &lt;record model="ir.ui.view" id="siteweb_migration_wizard_form"&gt; &lt;field name="name"&gt;siteweb.migration.form&lt;/field&gt; &lt;field name="model"&gt;siteweb.migration&lt;/field&gt; &lt;field name="type"&gt;form&lt;/field&gt; &lt;field name="arch" type="xml"&gt; &lt;form string="Migrer" version="8.0"&gt; &lt;p&gt;Voulez-vous vraiment migrer vers la BD du site?&lt;/p&gt; &lt;group&gt; &lt;field name="password"/&gt; &lt;/group&gt; &lt;button string="Confirmer" type="object" name="migration"/&gt; &lt;button string="Annuler" class="oe_highlight" special="cancel"/&gt; &lt;/form&gt; &lt;/field&gt; &lt;/record&gt; &lt;record id="action_siteweb_migration" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Migration du site&lt;/field&gt; &lt;field name="res_model"&gt;siteweb.migration&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_id" ref="siteweb_migration_wizard_form"/&gt; &lt;field name="multi"&gt;True&lt;/field&gt; 
&lt;field name="target"&gt;new&lt;/field&gt; &lt;/record&gt; &lt;menuitem action="action_siteweb_migration" id="menu_siteweb_migration" name="Migration du site" parent="siteweb_createch.menu_siteweb"/&gt; &lt;/data&gt; &lt;/openerp&gt; </code></pre>
0
2016-07-26T19:16:22Z
38,689,829
<p>You can simply clear the password after using it, since you have declared it as the field <code>password</code> on the model. So: <code>self.password = False</code></p>
0
2016-08-01T01:23:12Z
[ "python", "xml", "odoo-8" ]
Training CNN using cifar10 solver
38,598,161
<p>I'm trying to train a CNN with my own data, using <a href="https://github.com/BVLC/caffe/blob/master/examples/cifar10/cifar10_quick_train_test.prototxt" rel="nofollow">cifar10</a> network layers. but, when I'm running this command:</p> <pre><code>roishik@roishik-System-Product-Name:~/Desktop/caffe/caffe$ /home/roishik/Desktop/caffe/caffe/build/tools/caffe train --solver /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_solver.prototxt 2&gt;&amp;1 | tee /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt </code></pre> <p>I get this error message:</p> <pre><code>I0726 22:01:40.884320 6596 caffe.cpp:210] Use CPU. I0726 22:01:40.884771 6596 solver.cpp:48] Initializing solver from parameters: test_iter: 100 test_interval: 500 base_lr: 0.001 display: 100 max_iter: 4000 lr_policy: "fixed" momentum: 0.9 weight_decay: 0.004 snapshot: 4000 snapshot_prefix: "/home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar_10_fast" solver_mode: CPU net: "/home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt" train_state { level: 0 stage: "" } snapshot_format: HDF5 I0726 22:01:40.885051 6596 solver.cpp:91] Creating training net from net file: /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt [libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 1:7: Message type "caffe.NetParameter" has no field named "I0726". 
F0726 22:01:40.885253 6596 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt *** Check failure stack trace: *** @ 0x7f0f10ad5daa (unknown) @ 0x7f0f10ad5ce4 (unknown) @ 0x7f0f10ad56e6 (unknown) @ 0x7f0f10ad8687 (unknown) @ 0x7f0f10f614be caffe::ReadNetParamsFromTextFileOrDie() @ 0x7f0f10fc6acb caffe::Solver&lt;&gt;::InitTrainNet() @ 0x7f0f10fc7b9c caffe::Solver&lt;&gt;::Init() @ 0x7f0f10fc7eca caffe::Solver&lt;&gt;::Solver() @ 0x7f0f10fa2473 caffe::Creator_SGDSolver&lt;&gt;() @ 0x40eb6e caffe::SolverRegistry&lt;&gt;::CreateSolver() @ 0x407d4b train() @ 0x40589c main @ 0x7f0f0fae1f45 (unknown) @ 0x40610b (unknown) @ (nil) (unknown) </code></pre> <p>I searched all over google and didn't find an answer. What does this line means?:</p> <pre><code> Error parsing text-format caffe.NetParameter: 1:7: Message type "caffe.NetParameter" has no field named "I0726". </code></pre> <p>Really appreciate your help!</p>
0
2016-07-26T19:16:34Z
38,611,471
<p>It is because you are doing it wrong.</p> <p>The files that you are using:</p> <pre><code>Solver: /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_solver.prototxt Net: /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt Log output: /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_train_test.prototxt </code></pre> <p>It is to be noted that both the Net file and log file are the same, which means you are replacing the Net file with the log data. Thus, by the time caffe solver reads through the Net file, the data gets replaced by log and hence the error.</p> <p>This should solve your issue:</p> <pre><code>roishik@roishik-System-Product-Name:~/Desktop/caffe/caffe$ /home/roishik/Desktop/caffe/caffe/build/tools/caffe train --solver /home/roishik/Desktop/Thesis/Code/cafe_cnn/first/caffe_models/cifar_10_fast/cifar10_quick_solver.prototxt 2&gt;&amp;1 | tee ./log.txt </code></pre> <p>But make sure that you have replaced the overwritten <code>Net</code> file with the correct file.</p>
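To see concretely why piping `tee` into the Net file destroys it, here is a small shell demonstration with stand-in contents (the filenames and strings are hypothetical; `net.prototxt` plays the Net file and the `echo` plays caffe's log output):

```shell
# Hypothetical stand-ins for the files involved in the question.
echo "name: \"CIFAR10_quick\"" > net.prototxt            # a valid Net definition
echo "I0726 22:01:40 solver.cpp log line" 2>&1 | tee net.prototxt  # log tee'd into the SAME file
cat net.prototxt   # now holds only the log line; the Net definition is gone
```

This is exactly what produced the `has no field named "I0726"` parse error: the solver later tried to read log text as a `NetParameter`.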
1
2016-07-27T11:17:30Z
[ "python", "python-2.7", "caffe", "conv-neural-network", "pycaffe" ]
Python -- Client Waiting for Server Input (and vice versa)
38,598,238
<p>I'm fairly new to Python and I'm very new to sockets and servers in Python, so please bear with me! Basically, I'm trying to set up a hangman game between a client and server to play a game. I know this system has a bunch of weaknesses (including having most of the code on the client side instead of the server side, which I'm working to fix, but anyway...) </p> <p>In essence, my client launches the game, and gets an input from the server for the "word" of the hangman game. After the game finishes, it plays the init method again to <em>restart</em> the game. I want it to wait for input from the server again so it doesn't play the same input from the 1st time. Do you have any suggestions?</p> <p>Here's my server code:</p> <pre><code>#!/usr/bin/env python """ Created on Tue Jul 26 09:32:18 2016 @author: Kevin Levine / Sam Chan """ # Echo server program import socket import time HOST = '10.232.2.162' # Symbolic name meaning all available interfaces PORT = 5007 # Arbitrary non-privileged port s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen(1) conn, addr = s.accept() print 'Connected by', addr def sendClient(message): conn.send(message) data = "1" while True: # Whole game if data == "1": while True: print "Enter word to be guessed." message = raw_input("&gt; ") if message.isalpha(): sendClient(message) else: "Your word included some non-alphabetical characters." 
data = conn.recv(1024) if data == "2": break # data = s.recv(1024) # clientData = conn.recv(1024) # print "Received", clientData, "from Client" </code></pre> <p>Here's my client code:</p> <pre><code>#!/usr/bin/python """ Created on Tue Jul 26 09:12:01 2016 @author: Kevin Levine / Sam Chan """ # Echo client program import socket import time HOST = '10.232.5.58' # The remote host PORT = 5007 # The same port as used by the server s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) def sendServer(message): s.send(message) class Hangman(): def __init__(self): print "Welcome to 'Hangman', are you ready to play? " print "(1) Yup!\n(2) Nah" user_choice_1 = raw_input("&gt; ") serverData = s.recv(1024) global the_word the_word = serverData if user_choice_1 == '1': print "Loading..." for t in range(3, -1, -1): print t, "..." time.sleep(1) self.start_game() elif user_choice_1 == '2': print "Bye bye now..." exit() else: print "I didn't understand you..." self.__init__() def start_game(self): print "Six fails, no more, no less. Try to guess the word by guessing letters!" self.core_game() def core_game(self): guesses = 0 letters_used = "" progress = [] for num in range(0, len(the_word)): progress.append("?") while guesses &lt; 6: if "".join(progress) == the_word: print "Congrats! You guessed the word '%s'!" % the_word print "========================================" time.sleep(3) self.__init__() guessraw = raw_input("&gt; ") guess = guessraw.lower() if (guess in the_word) and (guess not in letters_used): print "You guessed correctly!" letters_used += "," + guess self.hangman_graphic(guesses) print "Progress: " + self.progress_updater(guess, the_word, progress) print "Letter used: " + letters_used elif (guess not in "abcdefghijklmnopqrstuvwxyz"): print "Only guess letters!" elif (guess not in the_word) and (guess not in letters_used): guesses += 1 print "That guess was wrong." 
letters_used += "," + guess self.hangman_graphic(guesses) print "Progress: " + "".join(progress) print "Letter used: " + letters_used else: print "Only guess unique letters!" def hangman_graphic(self, guesses): if guesses == 0: print "________ " print "| | " print "| " print "| " print "| " print "| " elif guesses == 1: print "________ " print "| | " print "| 0 " print "| " print "| " print "| " elif guesses == 2: print "________ " print "| | " print "| 0 " print "| / " print "| " print "| " elif guesses == 3: print "________ " print "| | " print "| 0 " print "| /| " print "| " print "| " elif guesses == 4: print "________ " print "| | " print "| 0 " print "| /|\ " print "| " print "| " elif guesses == 5: print "________ " print "| | " print "| 0 " print "| /|\ " print "| / " print "| " else: print "________ " print "| | " print "| 0 " print "| /|\ " print "| / \ " print "| " print "GAME OVER, you lost! :(" print "The word was ", the_word print "========================================" endmessage = 1 sendServer(endmessage) self.__init__() def progress_updater(self, guess, the_word, progress): i = 0 while i &lt; len(the_word): if guess == the_word[i]: progress[i] = guess i += 1 else: i += 1 return "".join(progress) game = Hangman() </code></pre>
0
2016-07-26T19:21:29Z
38,598,528
<p>Maybe you could make a loop instead of creating only one Hangman instance, like this:</p> <pre><code>while True: game = Hangman() </code></pre> <p>Instead of an infinite loop you can make an exit condition. For example, in your class you may have a boolean variable that indicates whether the game has ended (let's say the name of this variable is <code>end</code>):</p> <pre><code>game = Hangman() while True: if not game.end: continue game = Hangman() </code></pre>
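A minimal runnable sketch of that restart condition, with a dummy class standing in for `Hangman` (in the real game, `core_game` would set `self.end = True` at game over instead of calling `self.__init__()` again):

```python
class Game(object):
    """Dummy stand-in for Hangman; a real round would set self.end at game over."""
    plays = []                      # records every instance created

    def __init__(self):
        Game.plays.append(self)
        self.end = True             # this dummy round finishes immediately

game = Game()
rounds = 1
while rounds < 3:                   # exit condition instead of `while True`
    if game.end:
        game = Game()               # round over: start a fresh game
        rounds += 1

print(len(Game.plays))              # -> 3
```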
1
2016-07-26T19:38:56Z
[ "python", "serversocket" ]
Merging two CSV files by a common column python
38,598,316
<p>I am trying to merge two csv files with a common id column and write the merge to a new file. I have tried the following but it is giving me an error - </p> <pre><code>import csv from collections import OrderedDict filenames = "stops.csv", "stops2.csv" data = OrderedDict() fieldnames = [] for filename in filenames: with open(filename, "rb") as fp: # python 2 reader = csv.DictReader(fp) fieldnames.extend(reader.fieldnames) for row in reader: data.setdefault(row["stop_id"], {}).update(row) fieldnames = list(OrderedDict.fromkeys(fieldnames)) with open("merged.csv", "wb") as fp: writer = csv.writer(fp) writer.writerow(fieldnames) for row in data.itervalues(): writer.writerow([row.get(field, '') for field in fieldnames]) </code></pre> <p>Both files have the "stop_id" column but I'm getting this error back - KeyError: 'stop_id'</p> <p>Any help would be much appreciated.</p> <p>Thanks</p>
0
2016-07-26T19:26:01Z
38,599,858
<p>Here is an example using pandas</p> <pre><code>import sys from StringIO import StringIO import pandas as pd TESTDATA=StringIO("""DOB;First;Last 2016-07-26;John;smith 2016-07-27;Mathew;George 2016-07-28;Aryan;Singh 2016-07-29;Ella;Gayau """) list1 = pd.read_csv(TESTDATA, sep=";") TESTDATA=StringIO("""Date of Birth;Patient First Name;Patient Last Name 2016-07-26;John;smith 2016-07-27;Mathew;XXX 2016-07-28;Aryan;Singh 2016-07-20;Ella;Gayau """) list2 = pd.read_csv(TESTDATA, sep=";") print list2 print list1 common = pd.merge(list1, list2, how='left', left_on=['Last', 'First', 'DOB'], right_on=['Patient Last Name', 'Patient First Name', 'Date of Birth']).dropna() print common </code></pre>
0
2016-07-26T21:04:48Z
[ "python", "csv" ]
Merging two CSV files by a common column python
38,598,316
<p>I am trying to merge two csv files with a common id column and write the merge to a new file. I have tried the following but it is giving me an error - </p> <pre><code>import csv from collections import OrderedDict filenames = "stops.csv", "stops2.csv" data = OrderedDict() fieldnames = [] for filename in filenames: with open(filename, "rb") as fp: # python 2 reader = csv.DictReader(fp) fieldnames.extend(reader.fieldnames) for row in reader: data.setdefault(row["stop_id"], {}).update(row) fieldnames = list(OrderedDict.fromkeys(fieldnames)) with open("merged.csv", "wb") as fp: writer = csv.writer(fp) writer.writerow(fieldnames) for row in data.itervalues(): writer.writerow([row.get(field, '') for field in fieldnames]) </code></pre> <p>Both files have the "stop_id" column but I'm getting this error back - KeyError: 'stop_id'</p> <p>Any help would be much appreciated.</p> <p>Thanks</p>
0
2016-07-26T19:26:01Z
38,664,595
<p>Thanks Shijo.</p> <p>This is what worked for me in the end - it merges by the first column in each csv.</p> <pre><code>import csv from collections import OrderedDict with open('stops.csv', 'rb') as f: r = csv.reader(f) dict2 = {row[0]: row[1:] for row in r} with open('stops2.csv', 'rb') as f: r = csv.reader(f) dict1 = OrderedDict((row[0], row[1:]) for row in r) result = OrderedDict() for d in (dict1, dict2): for key, value in d.iteritems(): result.setdefault(key, []).extend(value) with open('ab_combined.csv', 'wb') as f: w = csv.writer(f) for key, value in result.iteritems(): w.writerow([key] + value) </code></pre>
0
2016-07-29T17:40:17Z
[ "python", "csv" ]
How to specify a priori correlation between samples drawn randomly from two multinomial distributions?
38,598,441
<p>Consider the following game: in each trial, you are presented with <em>x</em> red and <em>y</em> blue dots. You have to decide whether there are more red than blue dots. For each trial, the minimum number of dots in a given color is 10, the maximum is 50. Red and blue dots follow an identical multinomial distribution (for simplicity, let's consider that the probability of occurrence of each integer between 10 and 50 is similar). </p> <p>I would like to build 300 trials. To do so, I draw 300 samples from each multinomial distribution. importantly, I would like to specify (<strong>a priori</strong>) the correlation between the 300 samples from the first distribution and the 300 samples from the second distribution. I would like a correlations of -0.8, -0.5, 0, 0.5 and 0.8, in five pairs of sample sets.</p> <p>Preferably, I would like to also sample this sets so that in each set (X,Y) with any of the specified correlations, half of the X samples will be greater than Y (<code>x(i) &gt; y(i)</code>), and the other half will be smaller than Y (<code>x(i) &lt; y(i)</code>).</p> <p>How can I do that in python, R or MATLAB?</p>
0
2016-07-26T19:33:15Z
38,599,237
<p>Basically you ask how to <a href="https://www.mathworks.com/matlabcentral/answers/101802-how-can-i-generate-two-correlated-random-vectors-with-values-drawn-from-a-normal-distribution" rel="nofollow">create 2 vectors with a specified correlation</a>, so it is more a statistics than a programming question, but it can be done in the following way:</p> <p><strong>step 1</strong> - creating two vectors with the desired correlation</p> <pre><code>r = 0.75; % r is the desired correlation M = rand(10000,2); % two vectors from uniform distribution between 0 and 1 R = [1 r; r 1]; L = chol(R); % this is the Cholesky decomposition of R M = M*L; % when multiplied by M it gives the wanted correlation M = (M+abs(min(M(:)))); % shift the vector to only positive values M = M./max(M(:)); % normalize the vector... M = round(40*M)+10; % ...to values between 10 and 50 disp([min(M(:)) max(M(:))]) first_r = corr( M(:,1), M(:,2)); % and check the resulting correlation </code></pre> <p>The <code>rand</code> function could be changed to any random number generator function, like <code>randi</code> or <code>randn</code>, and if some specific distribution is desired, it could be obtained <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)#Transforming_to_a_uniform_distribution" rel="nofollow">using its cdf</a>.</p> <p><strong>step 2</strong> - Sampling these vectors for two sets of samples, one with x>y and one with y>x</p> <pre><code>x = M(:,1); y = M(:,2); Xy = x&gt;y; % logical index for all x &gt; y Yx = y&gt;x; % logical index for all y &gt; x xy1 = datasample([x(Xy) y(Xy)],150,'Replace',false); % make a 1/2 sample like Xy xy2 = datasample([x(Yx) y(Yx)],150,'Replace',false); % make a 1/2 sample like Yx x = [xy1(:,1);xy2(:,1)]; % concat the samples back to x y = [xy1(:,2);xy2(:,2)]; % concat the samples back to y checkx = sum(x&gt;y) % how many times x is bigger than y checky = sum(y&gt;x) % how many times y is bigger than x final_r = corr(x,y) % and check the new
correlation </code></pre> <p><strong>step 3</strong> - correcting the correlation</p> <p>As you'll see, the <code>final_r</code> does not match the desired <code>r</code>, so in order to get it you have to shift the first <code>r</code> by its distance from the <code>final_r</code>. Here's an example - first the output when <code>r = 0.75</code>:</p> <pre><code> 10 50 checkx = 150 checky = 150 final_r = 0.67511 </code></pre> <p>We see that the <code>final_r</code> is shifted down by 0.074886, so we want to shift the original <code>r</code> up by this value to get our <code>final_r</code> correct. So if we run it again with <code>r = 0.75+0.074886</code>, we get:</p> <pre><code> 10 50 checkx = 150 checky = 150 final_r = 0.76379 </code></pre> <p>which is fairly close to the desired <code>r</code>. I would run a loop over the process, say, 1000 times to find the closest <code>r</code> to the desired one, or simply set a threshold and continue searching until the <code>final_r</code> is close enough to what you want.</p>
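The step-1 construction translates directly to Python/NumPy. A rough sketch of just the Cholesky mixing step (it deliberately skips the shifting and rounding to 10..50, which is exactly what moves `final_r` away from `r` in the MATLAB version):

```python
import numpy as np

rng = np.random.RandomState(0)
r = 0.75
M = rng.rand(200000, 2)                       # two iid uniform columns
L = np.linalg.cholesky([[1.0, r], [r, 1.0]])  # lower-triangular factor of R
C = M.dot(L.T)                                # mix the columns: corr(C0, C1) ~ r
observed = np.corrcoef(C[:, 0], C[:, 1])[0, 1]
print(round(observed, 2))                     # close to 0.75
```

With a large sample the empirical correlation lands very close to the target; any post-processing (shifting, scaling, rounding to integers) then perturbs it, which is why the iterative correction in step 3 is needed.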
1
2016-07-26T20:24:34Z
[ "python", "matlab", "correlation", "multinomial" ]
Run Multiple BigQuery Jobs via Python API
38,598,558
<p>I've been working off of <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigquery/api/async_query.py" rel="nofollow">Google Cloud Platform's Python API library</a>. I've had much success with these API samples out-of-the-box, but I'd like to streamline it a bit further by combining the three queries I need to run (and subsequent tables that will be created) into a single file. Although the documentation mentions being able to run multiple jobs asynchronously, I've been having trouble figuring out the best way to accomplish that.</p> <p>Thanks in advance!</p>
0
2016-07-26T19:40:09Z
38,600,157
<p>The idea of running multiple jobs asynchronously is to create/prepare as many jobs as you need and kick them off using the <a href="https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert" rel="nofollow">jobs.insert</a> API (important: you should either collect all the respective job ids or set your own - they just need to be unique). Those API calls return immediately, so you can kick them all off "very quickly" in one loop.</p> <p>Meanwhile, you need to check repeatedly for the status of those jobs (in a loop), and as soon as a job is done you can kick off processing of its result as needed.</p> <p>You can check for details in <a href="https://cloud.google.com/bigquery/querying-data#running_asynchronous_queries" rel="nofollow">Running asynchronous queries</a>.</p>
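The insert-then-poll pattern in the abstract, with dummy job objects standing in for the BigQuery client (all names here are hypothetical; real code would call `jobs.insert` once per job and then poll `jobs.get` with each stored job id):

```python
import uuid

class FakeJob(object):
    """Hypothetical stand-in for an inserted BigQuery job."""
    def __init__(self, query, polls_until_done):
        self.job_id = str(uuid.uuid4())     # ids just need to be unique
        self.query = query
        self._polls = polls_until_done

    def state(self):                        # real code: jobs.get(job_id)
        self._polls -= 1
        return 'DONE' if self._polls <= 0 else 'RUNNING'

# 1) kick all three jobs off "very quickly" in one loop; insert returns at once
jobs = [FakeJob('query %d' % i, polls_until_done=i + 1) for i in range(3)]

# 2) poll in a loop; process each result as soon as its job finishes
pending, results = list(jobs), []
while pending:
    still_running = []
    for job in pending:
        if job.state() == 'DONE':
            results.append(job.job_id)      # fetch/process this job's rows here
        else:
            still_running.append(job)
    pending = still_running

print(len(results))                         # -> 3
```

In real code you would also sleep between polling passes to avoid hammering the API.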
0
2016-07-26T21:26:47Z
[ "python", "google-bigquery", "google-cloud-platform" ]
Dictionaries pointing to another dictionary in Python
38,598,679
<p>Let's say I have a dictionary <code>dic = {k: v}</code>. My goal is to create another data structure that will point to a subset of a dictionary, accessed under a single key. </p> <p>E.g: <code>dic = {1: 'a', 2: 'b', 3: 'c', 4: 'f'}</code>. I would like a pointer only to keys <code>1</code> and <code>3</code>, for instance, but retrievable under a single key <code>'k1'</code>, and to keys <code>2</code> and <code>4</code> retrievable under a single key <code>'k2'</code> without the need to do a hardcopy. My data won't change throughout the application.</p> <p>I know I can create another dictionary <code>subdic = {'k1': (a, c), 'k2': (b, f)}</code>, but that would require extra memory, right? How can I do that only with some sort of pointers? With a softcopy, such that values are only once in memory from the original dictionary <code>dic</code>.</p>
-1
2016-07-26T19:47:21Z
38,598,790
<p>You can create a second dictionary like this</p> <pre><code>dic2 = {'k1': (dic[1], dic[3])} </code></pre> <p>Doing so, you don't actually use extra memory (only marginally) because the objects in the tuple are the same as the ones in your original <code>dic</code> (they occupy the same space in memory). You can check that it is true by doing:</p> <pre><code>id(dic2['k1'][0]) == id(dic[1]) # True </code></pre>
2
2016-07-26T19:54:27Z
[ "python", "dictionary" ]
How to find the diameter of objects using image processing in Python?
38,598,690
<p>Given an image with some irregular objects in it, I want to find their individual diameter.</p> <p><a href="http://stackoverflow.com/questions/33707095/how-to-locate-a-particular-region-of-values-in-a-2d-numpy-array?answertab=active#tab-top">Thanks to this answer</a>, I know how to identify the objects. <strong>However, is it possible to measure the maximum diameter of the objects shown in the image?</strong></p> <p>I have looked into the <code>scipy-ndimage</code> documentation and haven't found a dedicated function.</p> <p>Code for object identification:</p> <pre><code>import numpy as np from scipy import ndimage from matplotlib import pyplot as plt # generate some lowpass-filtered noise as a test image gen = np.random.RandomState(0) img = gen.poisson(2, size=(512, 512)) img = ndimage.gaussian_filter(img.astype(np.double), (30, 30)) img -= img.min() img /= img.max() # use a boolean condition to find where pixel values are &gt; 0.75 blobs = img &gt; 0.75 # label connected regions that satisfy this condition labels, nlabels = ndimage.label(blobs) # find their centres of mass. in this case I'm weighting by the pixel values in # `img`, but you could also pass the boolean values in `blobs` to compute the # unweighted centroids. 
r, c = np.vstack(ndimage.center_of_mass(img, labels, np.arange(nlabels) + 1)).T # find their distances from the top-left corner d = np.sqrt(r*r + c*c) # plot fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10, 5)) ax[0].imshow(img) ax[1].hold(True) ax[1].imshow(np.ma.masked_array(labels, ~blobs), cmap=plt.cm.rainbow) for ri, ci, di in zip(r, c, d): ax[1].annotate('', xy=(0, 0), xytext=(ci, ri), arrowprops={'arrowstyle':'&lt;-', 'shrinkA':0}) ax[1].annotate('d=%.1f' % di, xy=(ci, ri), xytext=(0, -5), textcoords='offset points', ha='center', va='top', fontsize='x-large') for aa in ax.flat: aa.set_axis_off() fig.tight_layout() plt.show() </code></pre> <p>Image: <a href="http://i.stack.imgur.com/yOznb.png" rel="nofollow"><img src="http://i.stack.imgur.com/yOznb.png" alt="enter image description here"></a></p>
2
2016-07-26T19:48:01Z
38,616,904
<p>You could use <code>skimage.measure.regionprops</code> to determine the bounding box of all the regions in your image. For roughly circular blobs the diameter of the minimum enclosing circle can be approximated by the <strong>largest side of the bounding box</strong>. To do so you just need to add the following snippet at the end of your script:</p> <pre><code>from skimage.measure import regionprops N = 20 img_dig = np.digitize(img, np.linspace(0, 1, N)) properties = regionprops(img_dig) print 'Label \tLargest side' for p in properties: min_row, min_col, max_row, max_col = p.bbox print '%5d %14.3f' % (p.label, max(max_row - min_row, max_col - min_col)) </code></pre> <p>It is important to note that it is necessary to digitize <code>img</code> since <code>regionprops</code> does not accept arrays of float values. In the example above <code>img</code> was quantized into <code>N = 20</code> bins (each bin is uniquely identified by an integer index). You may want to test other values of <code>N</code> to better fit your needs.</p> <p>And this is the output you get:</p> <pre><code>Label Largest side 1 251.000 2 270.000 3 368.000 4 512.000 5 512.000 6 512.000 7 512.000 8 512.000 9 512.000 10 512.000 11 512.000 12 512.000 13 512.000 14 512.000 15 512.000 16 512.000 17 457.000 18 419.000 19 58.000 20 1.000 </code></pre>
3
2016-07-27T15:09:26Z
[ "python", "numpy", "image-processing", "geometry", "ndimage" ]
Copying several columns from a csv file to an existing xls file using Python
38,598,705
<p>I'm pretty new to Python but I was having some difficulty on getting started on this. I am using Python 3.</p> <p>I've googled and found quite a few python modules that help with this but was hoping for a more defined answer here. So basically, I need to read from a csv file certain columns i.e G, H, I, K, and M. The ones I need aren't consecutive. </p> <p>I need to read those columns from the csv file and transfer them to empty columns in an existing xls with data already in it.</p> <p>I looked in to openpyxl but it doesn't seem to work with csv/xls files, only xlsx. Can I use xlwt module to do this?</p> <p>Any guidance on which module may work best for my usecase would be greatly appreciated. Meanwhile, i'm going to tinker around with xlwt/xlrd.</p>
0
2016-07-26T19:48:54Z
38,598,804
<p>I recommend using pandas. It has convenient functions to read and write csv and xls files.</p> <pre><code>import pandas as pd from openpyxl import load_workbook #read the csv file df_1 = pd.read_csv('c:/test/test.csv') #lets say df_1 has columns colA and colB print(df_1) #read the xls(x) file df_2=pd.read_excel('c:/test/test.xlsx') #lets say df_2 has columns aa and bb #now add a column from df_1 to df_2 df_2['colA']=df_1['colA'] #save the combined output writer = pd.ExcelWriter('c:/test/combined.xlsx') df_2.to_excel(writer) writer.save() #alternatively, if you want to add just one column to an existing xlsx file: #i.e. get colA from df_1 into a new dataframe df_3=pd.DataFrame(df_1['colA']) #create writer using openpyxl engine writer = pd.ExcelWriter('c:/test/combined.xlsx', engine='openpyxl') #need this workaround to provide a list of work sheets in the file book = load_workbook('c:/test/combined.xlsx') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) column_to_write=16 #this would go to column Q (zero based index) writeRowIndex=0 #don't plot row index sheetName='Sheet1' #which sheet to write on #now write the single column df_3 to the file df_3.to_excel(writer, sheet_name=sheetName, columns =['colA'],startcol=column_to_write,index=writeRowIndex) writer.save() </code></pre>
1
2016-07-26T19:55:28Z
[ "python", "excel", "python-3.x", "csv" ]
Copying several columns from a csv file to an existing xls file using Python
38,598,705
<p>I'm pretty new to Python but I was having some difficulty on getting started on this. I am using Python 3.</p> <p>I've googled and found quite a few python modules that help with this but was hoping for a more defined answer here. So basically, I need to read from a csv file certain columns i.e G, H, I, K, and M. The ones I need aren't consecutive. </p> <p>I need to read those columns from the csv file and transfer them to empty columns in an existing xls with data already in it.</p> <p>I looked in to openpyxl but it doesn't seem to work with csv/xls files, only xlsx. Can I use xlwt module to do this?</p> <p>Any guidance on which module may work best for my usecase would be greatly appreciated. Meanwhile, i'm going to tinker around with xlwt/xlrd.</p>
0
2016-07-26T19:48:54Z
38,599,184
<p>You could try XlsxWriter, which is a fully featured Python module for writing the Excel 2007+ XLSX file format: <a href="https://pypi.python.org/pypi/XlsxWriter" rel="nofollow">https://pypi.python.org/pypi/XlsxWriter</a></p>
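For the non-consecutive-columns part of the question, the selection step is plain `csv` work; each selected row can then be written cell by cell with XlsxWriter's `worksheet.write(row, col, value)` into a new workbook (note XlsxWriter creates new files rather than editing existing ones). A minimal sketch of the selection step - the column indices are an assumption about the sheet layout:

```python
import csv
import io

# Columns G, I, K, M as zero-based indices - an assumption about the layout.
WANTED = [6, 8, 10, 12]


def select_columns(csv_text, wanted=WANTED):
    """Pull only the wanted columns out of each CSV row, padding short rows."""
    rows = []
    for row in csv.reader(io.StringIO(csv_text)):
        rows.append([row[i] if i < len(row) else '' for i in wanted])
    return rows
```

Each list returned by `select_columns` maps onto one spreadsheet row, so writing it out is just a pair of nested loops over `worksheet.write()`.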
1
2016-07-26T20:20:04Z
[ "python", "excel", "python-3.x", "csv" ]
Selenium Python - How to get class id by clicking on it manually
38,598,709
<p>I'm trying to click on a button on a page that has a lot of JavaScript but no matter what I try, can't click it. Is there a way for me to click the button manually and get what I need to click?</p> <p>I've tried inspecting elements and FirePath but those aren't giving me the correct class or id that I need to click </p> <p>Edit: This is what I'm trying to accomplish, the below is from Selenium IDE firefox plugin</p> <pre><code>Command Target open /logger/summary.ftl clickAndWait link=System Admin selectFrame sysadmin-content click //div[@id='sysadminmenu__sysadminmenu_x-auto-29']/span[3]/span click id=x-auto-178 </code></pre>
0
2016-07-26T19:48:59Z
38,599,105
<p>You may try <a href="http://www.seleniumhq.org/projects/ide/" rel="nofollow">http://www.seleniumhq.org/projects/ide/</a></p> <p>This has a feature to find web element path</p>
0
2016-07-26T20:15:00Z
[ "javascript", "python", "selenium-webdriver" ]
Accessing Cell history (python, smartsheets)
38,598,720
<p>I'm looking for a way to access elements of a cell's history. I have used various iterations of the following code to get at the keys inside the cell history dict, but I'm (obviously) doing something wrong. When running the code as below I get this error - <code>TypeError: 'CellHistory' object has no attribute '__getitem__'</code></p> <p>Help! This is driving me crazy!</p> <pre><code>#get the cell history action = smartsheet.Cells.get_cell_history( sheetid, sheet.rows[1].id, columns[1].id, include_all=True) revisions = action.data #print out something from revision history for rev in revisions: print rev['modifiedAt'] #breaks here </code></pre>
0
2016-07-26T19:49:47Z
38,602,364
<p>Seems like you're using the wrong attribute name and syntax in the <code>print</code> statement. Try something like this instead:</p> <pre><code>#print out revision history for rev in revisions: print(rev.modified_at) print(rev.modified_by.name) print('') </code></pre>
1
2016-07-27T01:41:05Z
[ "python", "smartsheet-api" ]
Date/Time formatting
38,598,738
<p>I'm creating a python script that will display busy, no-answer and failed calls for a specific date but I'm stuck on the formatting of the date that's displayed. The start_time and end_time "variables" from Twilio print something like this: "Mon, 25 Jul 2016 16:03:53 +0000". I want to get rid of the day name and the comma since I'm saving the results into a csv file (script_name.py > some_file.csv) and the comma after the day name kind of screws up the csv structure. </p> <p>In the settings.py file the time_zone variable is set to the right one (America/Chicago) and the USE_TZ variable is set to true. But anyway the output is still in UTC. </p> <p>I don't know anything about Python and the things I've tried to parse call.start_time to a datetime have failed . . . I would know how to do it if it was a given value like start_time = '2016-07-26', but I don't know how to do it when the value comes from for call in client.calls.list . . .</p> <p>Any guidance will be greatly appreciated!</p> <p>Thanks!</p> <pre><code>from twilio.rest import TwilioRestClient from datetime import datetime from pytz import timezone from dateutil import tz # To find these visit https://www.twilio.com/user/account account_sid = "**********************************" auth_token = "**********************************" client = TwilioRestClient(account_sid, auth_token) for call in client.calls.list( start_time="2016-07-25", end_time="2016-07-25", status='failed', ): print(datetime.datetime.strptime(call.start_time, "%Y-%m-%d %H:%M:%S")) </code></pre>
0
2016-07-26T19:51:13Z
38,598,939
<p>The code I've provided does simple date and time formatting.</p> <pre><code>from datetime import datetime from time import sleep print('The Time is shown below!') while True: time = str(datetime.now()) time = list(time) for i in range(10): time.pop(len(time)-1) time = ('').join(time) time = time.split() date = time[0] time = time[1] print('Time: '+time+', Date: '+date, end='\r') sleep(1) </code></pre> <p>However, if you are looking just to format "Mon, 25 Jul 2016 16:03:53 +0000" as you said and remove the day name, consider something like this:</p> <pre><code>day = "Mon, 25 Jul 2016 16:03:53 +0000" # Convert to an array day = list(day) # Remove first 5 characters for i in range(5): day.pop(0) day = ('').join(day) print(day) # You can use if statements to determine which day it is to decide how many characters to remove. &gt;&gt;&gt; "25 Jul 2016 16:03:53 +0000" </code></pre>
1
2016-07-26T20:04:01Z
[ "python", "datetime", "formatting", "twilio" ]
Date/Time formatting
38,598,738
<p>I'm creating a python script that will display busy, no-answer and failed calls for a specific date but I'm stuck on the formatting of the date that's displayed. The start_time and end_time "variables" from Twilio print something like this: "Mon, 25 Jul 2016 16:03:53 +0000". I want to get rid of the day name and the comma since I'm saving the results into a csv file (script_name.py > some_file.csv) and the comma after the day name kind of screws up the csv structure. </p> <p>In the settings.py file the time_zone variable is set to the right one (America/Chicago) and the USE_TZ variable is set to true. But anyway the output is still in UTC. </p> <p>I don't know anything about Python and the things I've tried to parse call.start_time to a datetime have failed . . . I would know how to do it if it was a given value like start_time = '2016-07-26', but I don't know how to do it when the value comes from for call in client.calls.list . . .</p> <p>Any guidance will be greatly appreciated!</p> <p>Thanks!</p> <pre><code>from twilio.rest import TwilioRestClient from datetime import datetime from pytz import timezone from dateutil import tz # To find these visit https://www.twilio.com/user/account account_sid = "**********************************" auth_token = "**********************************" client = TwilioRestClient(account_sid, auth_token) for call in client.calls.list( start_time="2016-07-25", end_time="2016-07-25", status='failed', ): print(datetime.datetime.strptime(call.start_time, "%Y-%m-%d %H:%M:%S")) </code></pre>
0
2016-07-26T19:51:13Z
38,598,996
<p>The format you need to parse is dictated by the timestamp provided by Twilio. You will likely need the following format string to properly parse the timestamp (note that with the question's <code>from datetime import datetime</code> import, the call is <code>datetime.strptime</code>):</p> <pre><code>print(datetime.strptime(call.start_time, "%a, %d %b %Y %H:%M:%S %z")) </code></pre> <p>A great guide for the formatting string is <a href="http://strftime.org/" rel="nofollow">http://strftime.org/</a>.</p> <p>Another good library for lazily converting dates from strings is the python-dateutil library found at <a href="https://dateutil.readthedocs.io/" rel="nofollow">https://dateutil.readthedocs.io/</a>.</p>
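A quick check of that format string against the sample timestamp from the question, as a sketch. The Chicago zone is stood in for here by a fixed UTC-5 offset (July is CDT); the pytz or dateutil imports from the question would handle DST transitions properly:

```python
from datetime import datetime, timezone, timedelta

stamp = "Mon, 25 Jul 2016 16:03:53 +0000"
parsed = datetime.strptime(stamp, "%a, %d %b %Y %H:%M:%S %z")

# Fixed-offset stand-in for America/Chicago in summer (CDT = UTC-5).
cdt = timezone(timedelta(hours=-5))
local = parsed.astimezone(cdt)

# Reformatted without the day name or comma, so it is CSV-safe.
print(local.strftime("%Y-%m-%d %H:%M:%S"))  # 2016-07-25 11:03:53
```

Because the reformatted string contains no commas, it also fixes the broken-CSV-column problem described in the question.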
1
2016-07-26T20:08:02Z
[ "python", "datetime", "formatting", "twilio" ]
Raising errors without traceback
38,598,740
<p>I would like to use <code>raise</code> without printing the traceback on the screen. I know how to do that using <code>try ..catch</code> but doesn't find a way with <code>raise</code>.</p> <p>Here is an example:</p> <pre><code>def my_function(self): resp = self.resp if resp.status_code == 404: raise NoSuchElementError('GET'+self.url+'{}'.format(resp.status_code)) elif resp.status_code == 500: raise ServerErrorError('GET'+self.url+'{}'.format(resp.status_code)) </code></pre> <p>When executing this, if I have a 404, the traceback will print on the screen.</p> <pre><code>Traceback (most recent call last): File "test.py", line 32, in &lt;module&gt; print ins.my_function() File "api.py", line 820, in my_function raise NoSuchElementError('GET ' + self.url + ' {} '.format(resp.status_code)) </code></pre> <p>This is an API wrapper and I don't want users to see the traceback but to see the API response codes and error messages instead.</p> <p>Is there a way to do it ? </p>
0
2016-07-26T19:51:18Z
38,598,793
<p>The problem is not with raising anything, but with what the Python interpreter does when your program terminates with an exception (it simply prints the stack trace). If you want to avoid that, put a try/except block around everything for which you want to "hide" the stack trace, like:</p> <pre><code>def main(): try: actual_code() except Exception as e: print(e) </code></pre> <p>The other way is to modify the exception handler, <code>sys.excepthook(type, value, traceback)</code>, to run your own logic, like:</p> <pre><code>def my_exchandler(type, value, traceback): print(value) import sys sys.excepthook = my_exchandler </code></pre> <p>you can even condition on the exception <code>type</code> and run particular logic iff it is your type of exception, and otherwise fall back to the original one.</p>
1
2016-07-26T19:54:41Z
[ "python", "api", "wrapper" ]
Raising errors without traceback
38,598,740
<p>I would like to use <code>raise</code> without printing the traceback on the screen. I know how to do that using <code>try ..catch</code> but doesn't find a way with <code>raise</code>.</p> <p>Here is an example:</p> <pre><code>def my_function(self): resp = self.resp if resp.status_code == 404: raise NoSuchElementError('GET'+self.url+'{}'.format(resp.status_code)) elif resp.status_code == 500: raise ServerErrorError('GET'+self.url+'{}'.format(resp.status_code)) </code></pre> <p>When executing this, if I have a 404, the traceback will print on the screen.</p> <pre><code>Traceback (most recent call last): File "test.py", line 32, in &lt;module&gt; print ins.my_function() File "api.py", line 820, in my_function raise NoSuchElementError('GET ' + self.url + ' {} '.format(resp.status_code)) </code></pre> <p>This is an API wrapper and I don't want users to see the traceback but to see the API response codes and error messages instead.</p> <p>Is there a way to do it ? </p>
0
2016-07-26T19:51:18Z
38,598,932
<p>Catch the exception, log it and return something to the consumer that indicates something went wrong (sending a 200 back when a query failed will likely cause problems for your client).</p> <pre><code>try: return do_something() except NoSuchElementError as e: logger.error(e) return error_response() </code></pre> <p>The fake <code>error_response()</code> function could do anything from returning an empty response to returning an error message. You should still make use of proper HTTP status codes. It sounds like you should be returning a 404 in this instance.</p> <p>You should handle exceptions gracefully but you shouldn't hide errors from clients completely. In the case of your <code>NoSuchElementError</code> exception it sounds like the client should be informed (the error might be on their end). </p>
0
2016-07-26T20:03:42Z
[ "python", "api", "wrapper" ]
Continuous random questions?
38,598,837
<p>I want the code I currently have to go through a list of questions infinitely or until someone gets an answer wrong. I am currently using</p> <pre><code>random.shuffle(questions) for question in questions: question.ask() </code></pre> <p>to ask every question in a list once.</p> <p>How do I make it continuously ask until the user inputs a wrong answer? Here is my current code:</p> <pre><code>class Question(object): def __init__(self, question, answer): self.question = question self.answer = answer def ask(self): response = input(self.question) if response == self.answer: print "CORRECT" else: print "wrong" questions = [ Question("0", 0), Question("π/6", 30), Question("π/3", 60), Question("π/4", 45), Question("π/2", 90), Question("2π/3", 120), Question("3π/4", 135), Question("5π/6", 150), Question("π", 180), Question("7π/6", 210), Question("4π/3", 240), Question("5π/4", 225), Question("3π/2", 270), Question("5π/3", 300), Question("7π/4", 315), Question("11π/6", 330), Question("2π",360), ] </code></pre> <p>Also, if you could tell me how to add one score for every question correct that would be much appreciated. I tried to do this but I already have a piece of the program that deducts 1 from a global score variable every 5 seconds. I would like to continue editing that same variable but it gives errors.</p>
-3
2016-07-26T19:57:25Z
38,599,164
<p>You could loop through the list with a while loop, something like this possibly:</p> <pre><code>score = 0 currIndex = 0 #ask a question to start off q1 = questions[currIndex] #get the answer answer = q1.ask() while(answer == q1.answer): score += 1 #advance to the next question, wrapping back to the start currIndex += 1 if currIndex == len(questions): currIndex = 0 q1 = questions[currIndex] answer = q1.ask() </code></pre> <p>I haven't tested it yet but this should work? It will go until they get the answer wrong, otherwise infinitely. edit: whoops, didn't read that completely - I would make ask() return "CORRECT" or "wrong" and then change the loop to</p> <pre><code> while (answer == "CORRECT"): </code></pre>
0
2016-07-26T20:18:53Z
[ "python", "class" ]
Continuous random questions?
38,598,837
<p>I want the code I currently have to go through a list of questions infinitely or until someone gets an answer wrong. I am currently using</p> <pre><code>random.shuffle(questions) for question in questions: question.ask() </code></pre> <p>to ask every question in a list once.</p> <p>How do I make it continuously ask until the user inputs a wrong answer? Here is my current code:</p> <pre><code>class Question(object): def __init__(self, question, answer): self.question = question self.answer = answer def ask(self): response = input(self.question) if response == self.answer: print "CORRECT" else: print "wrong" questions = [ Question("0", 0), Question("π/6", 30), Question("π/3", 60), Question("π/4", 45), Question("π/2", 90), Question("2π/3", 120), Question("3π/4", 135), Question("5π/6", 150), Question("π", 180), Question("7π/6", 210), Question("4π/3", 240), Question("5π/4", 225), Question("3π/2", 270), Question("5π/3", 300), Question("7π/4", 315), Question("11π/6", 330), Question("2π",360), ] </code></pre> <p>Also, if you could tell me how to add one score for every question correct that would be much appreciated. I tried to do this but I already have a piece of the program that deducts 1 from a global score variable every 5 seconds. I would like to continue editing that same variable but it gives errors.</p>
-3
2016-07-26T19:57:25Z
38,600,682
<p>It might be worth giving ask() a return value: True if the answer was correct and False if the answer was incorrect. That could look like this:</p> <pre><code>def ask(self): response = input(self.question) if response == self.answer: print "CORRECT" return True else: print "wrong" return False </code></pre> <p>Then you could iterate through the questions like this:<br> (You would first have to create a score variable!)</p> <pre><code>for q in questions: if q.ask() is True: score += 1 else: break #Breaks out of the for loop </code></pre> <p>Either way, you will have to make your answers Strings too in order not to compare a String to an Integer (which will never be the same), so questions should look like this:</p> <pre><code>questions = [ Question("0", "0"), Question("π/6", "30"), Question("π/3", "60"), Question("π/4", "45"), ... </code></pre> <p>I hope I could help you!</p>
0
2016-07-26T22:09:47Z
[ "python", "class" ]
How do I install modules on qpython3 (Android port of python)
38,598,880
<p>I found this great module on within and downloaded it as a zip file. Once I extracted the zip file, i put the two modules inside the file(setup and the main one) on the module folder including an extra read me file I needed to run. I tried installing the setup file but I couldn't install it because the console couldn't find it. So I did some research and I tried using pip to install it as well, but that didn't work. So I was wondering if any of you could give me the steps to install it manually and with pip (keep in mind that the setup.py file needs to be installed in order for the main module to work).</p> <p>Thanks!</p>
0
2016-07-26T20:00:23Z
39,323,863
<p>Extract the zip file into the site-packages folder: find the qpyplus folder and, inside it, Lib/python3.2/site-packages - extract the module there. That's it; now you can use your module directly from the REPL terminal by importing it.</p>
0
2016-09-05T04:16:58Z
[ "python", "module", "qpython", "qpython3" ]
How do I install modules on qpython3 (Android port of python)
38,598,880
<p>I found this great module on within and downloaded it as a zip file. Once I extracted the zip file, i put the two modules inside the file(setup and the main one) on the module folder including an extra read me file I needed to run. I tried installing the setup file but I couldn't install it because the console couldn't find it. So I did some research and I tried using pip to install it as well, but that didn't work. So I was wondering if any of you could give me the steps to install it manually and with pip (keep in mind that the setup.py file needs to be installed in order for the main module to work).</p> <p>Thanks!</p>
0
2016-07-26T20:00:23Z
39,722,699
<p>The cleanest and simplest way I have found is to use pip from within QPython console as in <a href="http://stackoverflow.com/questions/12332975/installing-python-module-within-code">This Answer</a></p> <pre><code>import pip pip.main(['install', 'networkx']) </code></pre>
0
2016-09-27T10:49:51Z
[ "python", "module", "qpython", "qpython3" ]
Python - Iterate through CSV rows and replace in XML
38,598,927
<p>I have a <strong>CSV with values that I would like to replace inside an XML template</strong>. Generating <strong>XMLs for each row</strong> that have filenames based on data in the same row. These are basically just copies of the template with a find and replace. So far I have gotten the filenames to work correctly but the XML file does not replace its data according to the row instead it only replaces data from row 3, column 10. I'm looking to have it iterate through a range of rows and create new files for each. I'm stumped as to what is going wrong here.</p> <p>CSV Snippet: </p> <pre><code>COLUMN_K, COLUMN_L K02496.ai, Test K02550.ai, Test K02686.ai, Test K02687.ai, Test </code></pre> <p>Existing XML Template Snippet</p> <pre><code> &lt;gmd:resourceFormat&gt; &lt;gmd:MD_Format&gt; &lt;gmd:name&gt; &lt;gco:CharacterString&gt;COLUMN_K&lt;/gco:CharacterString&gt; &lt;/gmd:name&gt; </code></pre> <p>Python Code</p> <pre><code>import csv exampleFile = open('U:\PROJECTS\Technical Graphics\metadata.csv') exampleReader = csv.reader(exampleFile) exampleData = list(exampleReader) #CSV as list with open('U:\PROJECTS\Technical Graphics\COLUMN_K_edited.xml') as inputfile: #template XML xml = inputfile.read() with open('U:\PROJECTS\Technical Graphics\metadata.csv') as csvfile: for row in reader(csvfile, delimiter=';'): for i in range(5): #range of 5 rows xml = xml.replace('COLUMN_K', exampleData[i+3][10]) #Only taking value from row 3, COLUMN_K- Need values from row 3 on xml = xml.replace('COLUMN_L', exampleData[i+3][11]) #Only taking value from row 3, COLUMN_L- Need values from row 3 on with open('U:\PROJECTS\Technical Graphics\XXX' + str((exampleData[i+3][10])) + ".xml", 'w') as outputfile: #Correctly outputs multiple filenames based on COLUMN_K value outputfile.write(xml) #writes multiple XMLs </code></pre>
0
2016-07-26T20:03:33Z
38,598,976
<p>Check these examples; they work perfectly:</p> <p><a href="http://stackoverflow.com/questions/20063987/python-create-xml-from-csv-within-a-loop">Python create XML from Csv within a loop</a></p> <p><a href="http://code.activestate.com/recipes/577423-convert-csv-to-xml/" rel="nofollow">http://code.activestate.com/recipes/577423-convert-csv-to-xml/</a></p>
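The common thread in both linked examples - fill a fresh copy of the template for every data row instead of mutating one string repeatedly - can be sketched like this. The template string and single-placeholder layout are stand-ins based on the question's snippets:

```python
import csv
import io

# Stand-in for the real XML template from the question.
TEMPLATE = "<gco:CharacterString>COLUMN_K</gco:CharacterString>"


def render_rows(csv_text, template=TEMPLATE):
    """Return {filename: xml} with one filled-in template per CSV data row."""
    files = {}
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    for row in reader:
        col_k = row[0].strip()
        # Start from the pristine template each time, so the placeholder
        # is still present on every iteration.
        xml = template.replace("COLUMN_K", col_k)
        files[col_k + ".xml"] = xml
    return files
```

Writing the files out is then a simple loop over the returned dict with `open(name, 'w')`.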
0
2016-07-26T20:07:08Z
[ "python", "xml", "csv" ]
Python - Iterate through CSV rows and replace in XML
38,598,927
<p>I have a <strong>CSV with values that I would like to replace inside an XML template</strong>. Generating <strong>XMLs for each row</strong> that have filenames based on data in the same row. These are basically just copies of the template with a find and replace. So far I have gotten the filenames to work correctly but the XML file does not replace its data according to the row instead it only replaces data from row 3, column 10. I'm looking to have it iterate through a range of rows and create new files for each. I'm stumped as to what is going wrong here.</p> <p>CSV Snippet: </p> <pre><code>COLUMN_K, COLUMN_L K02496.ai, Test K02550.ai, Test K02686.ai, Test K02687.ai, Test </code></pre> <p>Existing XML Template Snippet</p> <pre><code> &lt;gmd:resourceFormat&gt; &lt;gmd:MD_Format&gt; &lt;gmd:name&gt; &lt;gco:CharacterString&gt;COLUMN_K&lt;/gco:CharacterString&gt; &lt;/gmd:name&gt; </code></pre> <p>Python Code</p> <pre><code>import csv exampleFile = open('U:\PROJECTS\Technical Graphics\metadata.csv') exampleReader = csv.reader(exampleFile) exampleData = list(exampleReader) #CSV as list with open('U:\PROJECTS\Technical Graphics\COLUMN_K_edited.xml') as inputfile: #template XML xml = inputfile.read() with open('U:\PROJECTS\Technical Graphics\metadata.csv') as csvfile: for row in reader(csvfile, delimiter=';'): for i in range(5): #range of 5 rows xml = xml.replace('COLUMN_K', exampleData[i+3][10]) #Only taking value from row 3, COLUMN_K- Need values from row 3 on xml = xml.replace('COLUMN_L', exampleData[i+3][11]) #Only taking value from row 3, COLUMN_L- Need values from row 3 on with open('U:\PROJECTS\Technical Graphics\XXX' + str((exampleData[i+3][10])) + ".xml", 'w') as outputfile: #Correctly outputs multiple filenames based on COLUMN_K value outputfile.write(xml) #writes multiple XMLs </code></pre>
0
2016-07-26T20:03:33Z
38,599,362
<p>You have mentioned 'COLUMN_F' in the xml tag and you are trying to replace 'COLUMN_E' in the code </p> <p>xml = xml.replace('COLUMN_E', exampleData[i+3][10])</p>
0
2016-07-26T20:33:00Z
[ "python", "xml", "csv" ]
Python - Iterate through CSV rows and replace in XML
38,598,927
<p>I have a <strong>CSV with values that I would like to replace inside an XML template</strong>. Generating <strong>XMLs for each row</strong> that have filenames based on data in the same row. These are basically just copies of the template with a find and replace. So far I have gotten the filenames to work correctly but the XML file does not replace its data according to the row instead it only replaces data from row 3, column 10. I'm looking to have it iterate through a range of rows and create new files for each. I'm stumped as to what is going wrong here.</p> <p>CSV Snippet: </p> <pre><code>COLUMN_K, COLUMN_L K02496.ai, Test K02550.ai, Test K02686.ai, Test K02687.ai, Test </code></pre> <p>Existing XML Template Snippet</p> <pre><code> &lt;gmd:resourceFormat&gt; &lt;gmd:MD_Format&gt; &lt;gmd:name&gt; &lt;gco:CharacterString&gt;COLUMN_K&lt;/gco:CharacterString&gt; &lt;/gmd:name&gt; </code></pre> <p>Python Code</p> <pre><code>import csv exampleFile = open('U:\PROJECTS\Technical Graphics\metadata.csv') exampleReader = csv.reader(exampleFile) exampleData = list(exampleReader) #CSV as list with open('U:\PROJECTS\Technical Graphics\COLUMN_K_edited.xml') as inputfile: #template XML xml = inputfile.read() with open('U:\PROJECTS\Technical Graphics\metadata.csv') as csvfile: for row in reader(csvfile, delimiter=';'): for i in range(5): #range of 5 rows xml = xml.replace('COLUMN_K', exampleData[i+3][10]) #Only taking value from row 3, COLUMN_K- Need values from row 3 on xml = xml.replace('COLUMN_L', exampleData[i+3][11]) #Only taking value from row 3, COLUMN_L- Need values from row 3 on with open('U:\PROJECTS\Technical Graphics\XXX' + str((exampleData[i+3][10])) + ".xml", 'w') as outputfile: #Correctly outputs multiple filenames based on COLUMN_K value outputfile.write(xml) #writes multiple XMLs </code></pre>
0
2016-07-26T20:03:33Z
38,618,030
<p>I was able to resolve this simply by moving the loop to the very top of the code. The problem was that the text was replaced on the first pass, so the "COLUMN_*" placeholders could no longer be found for later rows, because they had already been replaced with values. Moving the loop to the top, so each iteration starts from a fresh copy of the template, resolved this.</p> <pre><code>import csv

for i in range(5): #loops through 5 rows
    exampleFile = open('U:\PROJECTS\Technical Graphics\metadata.csv') #CSV file
    exampleReader = csv.reader(exampleFile)
    exampleData = list(exampleReader) #turns CSV into list

    with open('U:\PROJECTS\Technical Graphics\COLUMN_K_edited.xml') as inputfile:
        xml = inputfile.read() #XML template file

    with open('U:\PROJECTS\Technical Graphics\metadata.csv') as csvfile:
        for row in csv.reader(csvfile, delimiter=';'): #defines CSV delimiter
            with open('U:\PROJECTS\Technical Graphics\XXX' + str(exampleData[i+3][10]) + ".xml", 'w') as outputfile: #Filename of XMLs based on column 10 data
                xml = xml.replace('COLUMN_I', str(exampleData[i+3][8]))
                xml = xml.replace('COLUMN_K', str(exampleData[i+3][10]))
                outputfile.write(xml) #writes 5 XML files
</code></pre>
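For reference, the same fix can be expressed more compactly by reading the CSV once with `csv.DictReader` and starting every row from a fresh copy of the template. This is only a sketch, not the poster's code: the in-memory `TEMPLATE` and `CSV_DATA` strings and the `render_rows` helper are illustrative stand-ins for the real files on disk, and it uses the comma delimiter that the CSV snippet actually shows.

```python
import csv
import io

# Hypothetical stand-ins for the template and CSV files on disk.
TEMPLATE = "<gco:CharacterString>COLUMN_K</gco:CharacterString><note>COLUMN_L</note>"
CSV_DATA = "COLUMN_K,COLUMN_L\nK02496.ai,Test\nK02550.ai,Test\n"

def render_rows(template, csv_text):
    """Build {output filename: rendered xml}, starting each row from a
    fresh copy of the template so one row's replacements can never hide
    the COLUMN_* placeholders from the next row."""
    outputs = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        xml = template  # fresh copy per row
        for placeholder, value in row.items():
            xml = xml.replace(placeholder, value)
        outputs[row['COLUMN_K'] + '.xml'] = xml
    return outputs

rendered = render_rows(TEMPLATE, CSV_DATA)
print(sorted(rendered))  # → ['K02496.ai.xml', 'K02550.ai.xml']
```

With real files, `render_rows` would take the template text read once before the loop, and each `outputs` entry would be written to its own file.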
0
2016-07-27T16:01:22Z
[ "python", "xml", "csv" ]
is there a detailed documentation for libpd Python API?
38,598,973
<p>I'm working with <a href="https://github.com/libpd" rel="nofollow">libpd</a> for Python, and I can't seem to find a detailed API. I would at least like a simple list of methods available.</p> <p>The best I can find is here: <a href="https://github.com/libpd/libpd/wiki/Python-API" rel="nofollow">https://github.com/libpd/libpd/wiki/Python-API</a> Which has a heading for "Detailed API Documentation", but under that, it just says: "Anyone care to elaborate or link here?"</p> <p>If it does not exist, I would like to document it as I go, but if it already exists somewhere, that (and so much figuring out) would be a bit of a waste of time.</p> <p>Thank you!!</p>
-1
2016-07-26T20:06:46Z
38,607,028
<p>Assuming that you are really asking for <code>detailed documentation for the API</code> rather than a <code>detailed API</code>:</p> <p>the Python API of libpd is just a thin wrapper around the C API.</p> <p>Since the C API is <a href="https://github.com/libpd/libpd/wiki/libpd" rel="nofollow">documented in the wiki</a>, you are better off just using that.</p>
0
2016-07-27T07:55:00Z
[ "python", "puredata", "libpd" ]
is there a detailed documentation for libpd Python API?
38,598,973
<p>I'm working with <a href="https://github.com/libpd" rel="nofollow">libpd</a> for Python, and I can't seem to find a detailed API. I would at least like a simple list of methods available.</p> <p>The best I can find is here: <a href="https://github.com/libpd/libpd/wiki/Python-API" rel="nofollow">https://github.com/libpd/libpd/wiki/Python-API</a> Which has a heading for "Detailed API Documentation", but under that, it just says: "Anyone care to elaborate or link here?"</p> <p>If it does not exist, I would like to document it as I go, but if it already exists somewhere, that (and so much figuring out) would be a bit of a waste of time.</p> <p>Thank you!!</p>
-1
2016-07-26T20:06:46Z
38,857,181
<p>So, what I found is that you can use the C API documentation, but there are additional methods and a <code>PdManager</code> class in the Python version that were not documented (which were the things I wanted to know about).</p> <p>So I documented those extra things here:</p> <p><a href="http://mikesperone.com/files/libpdPythonAPIdoc.pdf" rel="nofollow">http://mikesperone.com/files/libpdPythonAPIdoc.pdf</a></p> <p>It's not super detailed, and many things are just references to the methods in the C API, but hopefully it will help others who are also looking.</p>
0
2016-08-09T17:36:55Z
[ "python", "puredata", "libpd" ]
Multi-Indexed fillna in Pandas
38,599,012
<p>I have a multi-indexed dataframe and I'm looking to backfill missing values within a group. The dataframe I have currently looks like this:</p> <pre><code>df = pd.DataFrame({ 'group': ['group_a'] * 7 + ['group_b'] * 3 + ['group_c'] * 2, 'Date': ["2013-06-11", "2013-07-02", "2013-07-09", "2013-07-30", "2013-08-06", "2013-09-03", "2013-10-01", "2013-07-09", "2013-08-06", "2013-09-03", "2013-07-09", "2013-09-03"], 'Value': [np.nan, np.nan, np.nan, 9, 4, 40, 18, np.nan, np.nan, 5, np.nan, 2]}) df.Date = df['Date'].apply(lambda x: pd.to_datetime(x).date()) df = df.set_index(['group', 'Date']) </code></pre> <p>I'm trying to get a dataframe that backfills the missing values within the group. Like this:</p> <pre><code>Group Date Value group_a 2013-06-11 9 2013-07-02 9 2013-07-09 9 2013-07-30 9 2013-08-06 4 2013-09-03 40 2013-10-01 18 group_b 2013-07-09 5 2013-08-06 5 2013-09-03 5 group_c 2013-07-09 2 2013-09-03 2 </code></pre> <p>I tried using <code>pd.fillna('Value', inplace=True)</code>, but I get a warning on setting a value on copy, which I've since figured out is related to the presence of the multi-index. Is there a way to make fillna work for multi-indexed rows? Also, ideally I'd be able to apply the fillna to only one column and not the entire dataframe.</p> <p>Any insight on this would be great.</p>
1
2016-07-26T20:08:49Z
38,599,152
<p>Use <code>groupby(level=0)</code> then <code>bfill</code> and <code>update</code>:</p> <pre><code>df.update(df.groupby(level=0).bfill()) df </code></pre> <p>Note: <code>update</code> changes <code>df</code> inplace.</p> <p><a href="http://i.stack.imgur.com/XQU4B.png" rel="nofollow"><img src="http://i.stack.imgur.com/XQU4B.png" alt="enter image description here"></a></p> <h3>Other alternatives</h3> <pre><code>df = df.groupby(level='group').bfill() df = df.unstack(0).bfill().stack().swaplevel(0, 1).reindex_like(df) </code></pre> <h3>Column specific</h3> <pre><code>df.Value = df.groupby(level=0).Value.bfill() </code></pre>
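For completeness, the column-specific form can be sketched end to end on a cut-down version of the question's frame (the values here are illustrative), so only <code>Value</code> is touched and the backfill never crosses group boundaries:

```python
import numpy as np
import pandas as pd

# Cut-down version of the question's multi-indexed frame.
df = pd.DataFrame({
    'group': ['group_a'] * 3 + ['group_b'] * 2,
    'Date': pd.to_datetime(['2013-06-11', '2013-07-02', '2013-07-09',
                            '2013-07-09', '2013-08-06']),
    'Value': [np.nan, 9, 4, np.nan, 5],
}).set_index(['group', 'Date'])

# Backfill missing values within each top-level group only,
# leaving every other column untouched.
df['Value'] = df.groupby(level=0)['Value'].bfill()
print(df)
```

Each group's leading `NaN` is filled from the next value inside that same group (9 for `group_a`, 5 for `group_b`), never from a neighbouring group.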
2
2016-07-26T20:18:20Z
[ "python", "pandas", "missing-data", "multi-index" ]
print object/instance name in python
38,599,015
<p>I was wondering if there is a way to print the object name in Python as a string. For example, I want to be able to say ENEMY1 has 2 hp left or ENEMY2 has 4 hp left. Is there a way of doing that?</p> <pre><code>class badguy:
    def __init__(self):
        self.hp = 4

    def attack(self):
        print("hit")
        self.hp -= 1

    def still_alive(self):
        if self.hp &lt;= 0:
            print("enemy destroyed")
        else:
            print(str(self.hp) + " hp left")

# creating objects
enemy1 = badguy()
enemy2 = badguy()

enemy1.attack()
enemy1.attack()
enemy1.still_alive()
enemy2.still_alive()
</code></pre>
3
2016-07-26T20:09:18Z
38,599,084
<p>You'd have to first give them names. E.g.</p> <pre><code>class badguy: def __init__(self, name): self.hp = 4 self.name = name def attack(self): print("hit") self.hp -= 1 def still_alive(self): if self.hp &lt;=0: print("enemy destroyed") else : print (self.name + " has " + str(self.hp) + " hp left") # creating objects enemy1 = badguy('ENEMY1') enemy2 = badguy('ENEMY2') enemy1.attack() enemy1.attack() enemy1.still_alive() enemy2.still_alive() </code></pre>
1
2016-07-26T20:14:06Z
[ "python", "object" ]
print object/instance name in python
38,599,015
<p>I was wondering if there is a way to print the object name in Python as a string. For example, I want to be able to say ENEMY1 has 2 hp left or ENEMY2 has 4 hp left. Is there a way of doing that?</p> <pre><code>class badguy:
    def __init__(self):
        self.hp = 4

    def attack(self):
        print("hit")
        self.hp -= 1

    def still_alive(self):
        if self.hp &lt;= 0:
            print("enemy destroyed")
        else:
            print(str(self.hp) + " hp left")

# creating objects
enemy1 = badguy()
enemy2 = badguy()

enemy1.attack()
enemy1.attack()
enemy1.still_alive()
enemy2.still_alive()
</code></pre>
3
2016-07-26T20:09:18Z
38,599,196
<p>A much better <strong>design principle</strong> is not to rely on the specific name of the object as shown below:</p> <pre><code>class badguy(object): def __init__(self): pass b = badguy() print b &gt;&gt;&gt; &lt;__main__.badguy object at 0x7f2089a74e50&gt; # Not a great name huh? :D </code></pre> <p>This can lead to a whole wealth of <em>issues</em> with assignment binding, referencing, and most importantly does not allow you to name your objects per <em>user</em> or <em>program</em> choice.</p> <p>Instead add an instance variable to your class called <code>self._name</code> (<a href="https://docs.python.org/3/tutorial/classes.html#tut-private" rel="nofollow">9.6 Classes - Private Variables</a>) or <code>self.name</code> if you want to allow access outside the scope of the class (<em>in this example, you can name it anything</em>). Not only is this more <em>Object-Oriented</em> design, but now you can implement methods like <code>__hash__</code> to be able to create a <strong>hash</strong> based on a name for example to use an object as a key (<em>there are many more reasons why this design choice is better!</em>).</p> <pre><code>class badguy(object): def __init__(self, name=None): self.hp = 4 self._name = name @property def name(self): return self._name @name.setter def name(self, name): self._name = name def attack(self): print("hit") self.hp -= 1 def still_alive(self): if self.hp &lt;=0: print("enemy destroyed") else : print ("{} has {} hp left.".format(self.name, self.hp)) </code></pre> <p><strong>Sample output:</strong></p> <pre><code>b = badguy('Enemy 1') print b.name &gt;&gt;&gt; Enemy 1 b.still_alive() &gt;&gt;&gt; Enemy 1 has 4 hp left. b.name = 'Enemy One' # Changing our object's name. b.still_alive() &gt;&gt;&gt; Enemy One has 4 hp left. </code></pre>
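The <code>__hash__</code> point mentioned above can be sketched like this (Python 3 syntax; the <code>BadGuy</code> class and the <code>loot</code> dict are illustrative, not the poster's code, and hashing on the name assumes the name is never changed after construction):

```python
class BadGuy:
    """Sketch of hashing on a name so instances can act as dict keys
    or set members. Mutating the name after construction would corrupt
    any dict or set already holding the instance, so treat it as fixed."""

    def __init__(self, name):
        self.name = name
        self.hp = 4

    def __eq__(self, other):
        return isinstance(other, BadGuy) and self.name == other.name

    def __hash__(self):
        return hash(self.name)


# Two instances with the same name compare (and hash) equal, so a
# freshly built BadGuy can look up state keyed by an earlier one.
loot = {BadGuy('Enemy 1'): 'gold', BadGuy('Enemy 2'): 'potion'}
print(loot[BadGuy('Enemy 1')])  # → gold
```

Defining `__eq__` without `__hash__` would make the class unhashable in Python 3, which is why the two are implemented together here.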
5
2016-07-26T20:20:40Z
[ "python", "object" ]
Removing some of the duplicates from a list in Python
38,599,066
<p>I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list <code>[1,2,3,4,4,4,4,4]</code> and I want to remove 3 of the 4's, so that I am left with <code>[1,2,3,4,4]</code>. A naive way to do it would probably be</p> <pre><code>def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
</code></pre> <p>Is there a way to remove the three 4's in one pass through the list, but keep the other two?</p>
5
2016-07-26T20:13:25Z
38,599,197
<p>If you just want to remove the first <code>n</code> occurrences of something from a list, this is pretty easy to do with a generator:</p> <pre><code>def remove_n_dupes(remove_from, what, how_many): count = 0 for item in remove_from: if item == what and count &lt; how_many: count += 1 else: yield item </code></pre> <p>Usage looks like:</p> <pre><code>lst = [1,2,3,4,4,4,4,4] print list(remove_n_dupes(lst, 4, 3)) # [1, 2, 3, 4, 4] </code></pre> <p>Keeping a specified number of duplicates of <em>any</em> item is similarly easy if we use a little extra auxiliary storage:</p> <pre><code>from collections import Counter def keep_n_dupes(remove_from, how_many): counts = Counter() for item in remove_from: counts[item] += 1 if counts[item] &lt;= how_many: yield item </code></pre> <p>Usage is similar:</p> <pre><code>lst = [1,1,1,1,2,3,4,4,4,4,4] print list(keep_n_dupes(lst, 2)) # [1, 1, 2, 3, 4, 4] </code></pre> <p>Here the input is the list and the max number of items that you want to keep. The caveat is that the items need to be hashable...</p>
6
2016-07-26T20:20:48Z
[ "python", "list", "duplicates" ]
Removing some of the duplicates from a list in Python
38,599,066
<p>I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list <code>[1,2,3,4,4,4,4,4]</code> and I want to remove 3 of the 4's, so that I am left with <code>[1,2,3,4,4]</code>. A naive way to do it would probably be</p> <pre><code>def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
</code></pre> <p>Is there a way to remove the three 4's in one pass through the list, but keep the other two?</p>
5
2016-07-26T20:13:25Z
38,599,411
<p>I can solve it in a different way using <code>collections</code>:</p> <pre><code>from collections import Counter

li = [1,2,3,4,4,4,4]
cntLi = Counter(li)
print cntLi.keys()
</code></pre>
-1
2016-07-26T20:35:38Z
[ "python", "list", "duplicates" ]