Dataset columns: title (string, 10 to 172 chars), question_id (int64, 469 to 40.1M), question_body (string, 22 to 48.2k chars), question_score (int64, -44 to 5.52k), question_date (string, 20 chars), answer_id (int64, 497 to 40.1M), answer_body (string, 18 to 33.9k chars), answer_score (int64, -38 to 8.38k), answer_date (string, 20 chars), tags (list, 1 to 5 items)
Ignore null/blank cells with filter
38,570,044
<p>I'm trying to filter the 22nd column with numbers between <code>0.10</code> and <code>1.00</code> into <code>Day.csv</code>. But some of those cells are blank with no number at all and cause an error:</p> <p><code>ValueError: could not convert string to float:</code></p> <p>Here is what I tried:</p> <pre><code>reader = csv.reader(open("AllData.csv"), delimiter=',') filteredDay = filter(lambda p:0.10 &lt;= float(p[23]) &lt;= 1.00, reader) csv.writer(open(r"{}\Day.csv".format(queue),'w',newline =''), delimiter=',').writerows(filteredDay) </code></pre>
1
2016-07-25T14:10:39Z
38,570,162
<p>You can use a ternary conditional to return <code>False</code> for blanks:</p> <pre><code>filteredDay = filter(lambda p: 0.10 &lt;= float(p[23]) &lt;= 1.00 if p[23] != '' else False, reader) # ^^^^^^^^^^^^^^ </code></pre>
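As an illustration, a minimal sketch with made-up data (column index 2 stands in for the question's p[23]):

```python
import csv
import io

# Made-up sample data: column index 2 stands in for the question's p[23].
data = "a,b,0.5\nc,d,\ne,f,0.95\ng,h,2.0\n"
reader = csv.reader(io.StringIO(data), delimiter=',')

# Blank cells return False before float() is ever attempted.
filtered_day = list(filter(
    lambda p: 0.10 <= float(p[2]) <= 1.00 if p[2] != '' else False,
    reader))
print(filtered_day)  # [['a', 'b', '0.5'], ['e', 'f', '0.95']]
```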
2
2016-07-25T14:15:31Z
[ "python", "csv", "lambda" ]
Ignore null/blank cells with filter
38,570,044
<p>I'm trying to filter the 22nd column with numbers between <code>0.10</code> and <code>1.00</code> into <code>Day.csv</code>. But some of those cells are blank with no number at all and cause an error:</p> <p><code>ValueError: could not convert string to float:</code></p> <p>Here is what I tried:</p> <pre><code>reader = csv.reader(open("AllData.csv"), delimiter=',') filteredDay = filter(lambda p:0.10 &lt;= float(p[23]) &lt;= 1.00, reader) csv.writer(open(r"{}\Day.csv".format(queue),'w',newline =''), delimiter=',').writerows(filteredDay) </code></pre>
1
2016-07-25T14:10:39Z
38,570,166
<p>Presumably you therefore need your filter to return False when the cell in question contains no value? Try:</p> <pre><code>filteredDay = filter(lambda p: p[23] != "" and 0.10 &lt;= float(p[23]) &lt;= 1.00, reader) </code></pre>
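A quick sketch with invented rows showing why this works: `and` short-circuits, so `float()` is never called on a blank cell.

```python
# Invented rows; index 1 plays the role of the question's p[23].
rows = [['x', '0.5'], ['y', ''], ['z', '1.5']]

# `and` short-circuits: float(p[1]) is only evaluated when p[1] is non-blank.
kept = list(filter(lambda p: p[1] != "" and 0.10 <= float(p[1]) <= 1.00, rows))
print(kept)  # [['x', '0.5']]
```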
3
2016-07-25T14:15:41Z
[ "python", "csv", "lambda" ]
Type hint that a function never returns
38,570,144
<p>Python's new <a href="https://www.python.org/dev/peps/pep-0484/">type hinting</a> feature allows us to type hint that a function returns <code>None</code>...</p> <pre><code>def some_func() -&gt; None: pass </code></pre> <p>... or to leave the return type unspecified, which the PEP dictates should cause static analysers to assume that any return type is possible:</p> <blockquote> <p>Any function without annotations should be treated as having the most general type possible</p> </blockquote> <p>However, how should I type hint that a function will never return? For instance, what is the correct way to type hint the return value of these two functions?</p> <pre><code>def loop_forever(): while True: print('This function never returns because it loops forever') def always_explode(): raise Exception('This function never returns because it always raises') </code></pre> <p>Neither specifying <code>-&gt; None</code> nor leaving the return type unspecified seems correct in these cases.</p>
11
2016-07-25T14:14:47Z
38,679,309
<p>There is no answer to this question, yet. Here are a couple of reasons:</p> <ul> <li><p>When a function doesn't return, there is no return value (not even <code>None</code>) that a type could be assigned to. So you are not actually trying to annotate a type; you are trying to annotate <em>the absence of a type</em>.</p></li> <li><p>The type hinting PEP has only just been adopted in the standard, as of Python version 3.5. In addition, the PEP only advises on what type annotations should <em>look like</em>, while being intentionally vague on <em>how to use them</em>. So there is no standard telling us how to do anything in particular, beyond the examples.</p></li> <li><p>The PEP has a section <a href="https://www.python.org/dev/peps/pep-0484/#id14" rel="nofollow">Acceptable type hints</a> stating the following:</p> <blockquote> <p>Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).</p> <p>Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)</p> </blockquote> <p>So it tries to discourage you from doing overly creative things, like throwing an exception inside a return type hint in order to signal that a function never returns.</p></li> <li><p>Regarding exceptions, <a href="https://www.python.org/dev/peps/pep-0484/#id44" rel="nofollow">the PEP states the following</a>:</p> <blockquote> <p>No syntax for listing explicitly raised exceptions is proposed. 
Currently the only known use case for this feature is documentational, in which case the recommendation is to put this information in a docstring.</p> </blockquote></li> <li><p>There is a recommendation on <a href="https://www.python.org/dev/peps/pep-0484/#id37" rel="nofollow">type comments</a>, in which you have more freedom, but even that section doesn't discuss how to document the absence of a type.</p></li> </ul> <p>There is one thing you could try in a slightly different situation, when you want to hint that a parameter or a return value of some "normal" function should be a callable that never returns. The <a href="https://www.python.org/dev/peps/pep-0484/#id17" rel="nofollow">syntax</a> is <code>Callable[[ArgTypes...] ReturnType]</code>, so you could just omit the return type, as in <code>Callable[[ArgTypes...]]</code>. However, this doesn't conform to the recommended syntax, so strictly speaking it isn't an acceptable type hint. Type checkers will likely choke on it.</p> <p>Conclusion: you are ahead of your time. This may be disappointing, but there is an advantage for you, too: you can still influence how non-returning functions should be annotated. Maybe this will be an excuse for you to get involved in the standardisation process. :-)</p> <p>I have two suggestions.</p> <ol> <li><p>Allow omitting the return type in a <code>Callable</code> hint <em>and</em> allow the type of anything to be forward hinted. This would result in the following syntax:</p> <pre><code>always_explode: Callable[[]] def always_explode(): raise Exception('This function never returns because it always raises') </code></pre></li> <li><p>Introduce a <a href="https://wiki.haskell.org/Bottom" rel="nofollow">bottom type like in Haskell</a>:</p> <pre><code>def always_explode() -&gt; ⊥: raise Exception('This function never returns because it always raises') </code></pre></li> </ol> <p>These two suggestions could be combined.</p>
2
2016-07-30T23:07:13Z
[ "python", "type-hinting" ]
How to rank DataFrame by subgroup
38,570,175
<p>If I have a DataFrame such as </p> <pre><code> col1 col2 col3 0 x1 typeA 3 1 x2 typeB 13 2 x3 typeB 3 3 x4 typeA 5 4 x5 typeB 1 5 x6 typeA 1 </code></pre> <p>is there a way of ranking the rows by col3 for each type in col2? For example this solution would look like</p> <pre><code> col1 col2 col3 rank 0 x1 typeA 3 2 1 x2 typeB 13 1 2 x3 typeB 3 2 3 x4 typeA 5 1 4 x5 typeB 1 3 5 x6 typeA 1 3 </code></pre>
1
2016-07-25T14:16:04Z
38,570,297
<p><code>transform</code> keeps the same shape as your original dataframe. Then use a <code>lambda</code> function to rank <code>col3</code> based on groupings from <code>col2</code>.</p> <pre><code>df['col4'] = df.groupby('col2').col3.transform(lambda group: group.rank()) &gt;&gt;&gt; df col1 col2 col3 col4 0 x1 typeA 3 2 1 x2 typeB 13 3 2 x3 typeB 3 2 3 x4 typeA 5 3 4 x5 typeB 1 1 5 x6 typeA 1 1 </code></pre>
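Note that `group.rank()` ranks ascending by default, so 13 gets the highest rank number within typeB. The expected output in the question gives rank 1 to the largest value; a sketch with `ascending=False` reproduces that:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['x1', 'x2', 'x3', 'x4', 'x5', 'x6'],
                   'col2': ['typeA', 'typeB', 'typeB', 'typeA', 'typeB', 'typeA'],
                   'col3': [3, 13, 3, 5, 1, 1]})

# ascending=False gives rank 1 to the largest col3 value in each group
df['rank'] = df.groupby('col2').col3.transform(
    lambda group: group.rank(ascending=False))
print(df['rank'].tolist())  # [2.0, 1.0, 2.0, 1.0, 3.0, 3.0]
```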
1
2016-07-25T14:20:40Z
[ "python", "pandas" ]
Python pandas make new column from data in existing column and from another dataframe
38,570,198
<p>I have a DataFrame called 'mydata', and if I do</p> <pre><code>len(mydata.loc['2015-9-2']) </code></pre> <p>It counts the number of rows in mydata that have that date, and returns a number like</p> <pre><code>1067 </code></pre> <p>I have another DataFrame called 'yourdata' which looks something like</p> <pre><code> timestamp 51 2015-06-22 52 2015-06-23 53 2015-06-24 54 2015-06-25 43 2015-07-13 </code></pre> <p>Now I want use each date in yourdata so instead of typing in each date</p> <pre><code>len(mydata.loc['2015-9-2']) </code></pre> <p>I can iterate through 'yourdata' using them like</p> <pre><code>len(mydata.loc[yourdata['timestamp']]) </code></pre> <p>and produce a new DataFrame with the results or just add a new column to yourdata with the result for each date, but I'm lost as how to do this?</p> <p>The following does not work</p> <pre><code>yourdata['result'] = len(mydata.loc[yourdata['timestamp']]) </code></pre> <p>neither does this</p> <pre><code>yourdata['result'] = len(mydata.loc[yourdata.iloc[:,-3]]) </code></pre> <p>this does work</p> <pre><code>yourdata['result'] = len(mydata.loc['2015-9-2']) </code></pre> <p>buts that no good as I want to use the date in each row not some fixed date.</p> <p><strong>Edit</strong>: first few rows of mydata</p> <pre><code> timestamp BPM 0 2015-08-30 16:48:00 65 1 2015-08-30 16:48:10 65 2 2015-08-30 16:48:15 66 3 2015-08-30 16:48:20 67 4 2015-08-30 16:48:30 70 </code></pre>
0
2016-07-25T14:16:45Z
38,570,949
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a>, but first convert the timestamps to dates with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>dt.date</code></a>, convert those back with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a>, and finally use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow"><code>join</code></a>:</p> <pre><code>print (yourdata.join(pd.to_datetime(mydata.timestamp.dt.date) .value_counts() .rename('len'), on='timestamp')) </code></pre> <p>Sample:</p> <pre><code>print (mydata) timestamp BPM 0 2015-06-23 16:48:00 65 1 2015-06-23 16:48:10 65 2 2015-06-23 16:48:15 66 3 2015-06-23 16:48:20 67 4 2015-06-22 16:48:30 70 print (yourdata) timestamp 51 2015-06-22 52 2015-06-23 53 2015-06-24 54 2015-06-25 43 2015-07-13 #if dtype not datetime mydata['timestamp'] = pd.to_datetime(mydata['timestamp']) yourdata['timestamp'] = pd.to_datetime(yourdata['timestamp']) print (yourdata.join(pd.to_datetime(mydata.timestamp.dt.date) .value_counts() .rename('len'), on='timestamp')) timestamp len 51 2015-06-22 1.0 52 2015-06-23 4.0 53 2015-06-24 NaN 54 2015-06-25 NaN 43 2015-07-13 NaN </code></pre>
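An equivalent sketch (with invented frames) using `map` instead of `join` — `dt.normalize()` keeps the dtype as datetime64, so the lookup keys line up with midnight timestamps:

```python
import pandas as pd

mydata = pd.DataFrame({'timestamp': pd.to_datetime(
    ['2015-06-23 16:48:00', '2015-06-23 16:48:10', '2015-06-22 16:48:30'])})
yourdata = pd.DataFrame({'timestamp': pd.to_datetime(
    ['2015-06-22', '2015-06-23', '2015-06-24'])})

# Count rows per calendar day, then look each date up with map
counts = mydata['timestamp'].dt.normalize().value_counts()
yourdata['len'] = yourdata['timestamp'].map(counts).fillna(0).astype(int)
print(yourdata['len'].tolist())  # [1, 2, 0]
```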
0
2016-07-25T14:49:59Z
[ "python", "pandas", "dataframe" ]
Python pandas make new column from data in existing column and from another dataframe
38,570,198
<p>I have a DataFrame called 'mydata', and if I do</p> <pre><code>len(mydata.loc['2015-9-2']) </code></pre> <p>It counts the number of rows in mydata that have that date, and returns a number like</p> <pre><code>1067 </code></pre> <p>I have another DataFrame called 'yourdata' which looks something like</p> <pre><code> timestamp 51 2015-06-22 52 2015-06-23 53 2015-06-24 54 2015-06-25 43 2015-07-13 </code></pre> <p>Now I want use each date in yourdata so instead of typing in each date</p> <pre><code>len(mydata.loc['2015-9-2']) </code></pre> <p>I can iterate through 'yourdata' using them like</p> <pre><code>len(mydata.loc[yourdata['timestamp']]) </code></pre> <p>and produce a new DataFrame with the results or just add a new column to yourdata with the result for each date, but I'm lost as how to do this?</p> <p>The following does not work</p> <pre><code>yourdata['result'] = len(mydata.loc[yourdata['timestamp']]) </code></pre> <p>neither does this</p> <pre><code>yourdata['result'] = len(mydata.loc[yourdata.iloc[:,-3]]) </code></pre> <p>this does work</p> <pre><code>yourdata['result'] = len(mydata.loc['2015-9-2']) </code></pre> <p>buts that no good as I want to use the date in each row not some fixed date.</p> <p><strong>Edit</strong>: first few rows of mydata</p> <pre><code> timestamp BPM 0 2015-08-30 16:48:00 65 1 2015-08-30 16:48:10 65 2 2015-08-30 16:48:15 66 3 2015-08-30 16:48:20 67 4 2015-08-30 16:48:30 70 </code></pre>
0
2016-07-25T14:16:45Z
38,571,006
<pre><code>import numpy as np import pandas as pd mydata = pd.DataFrame({'timestamp': ['2015-06-22 16:48:00']*3 + ['2015-06-23 16:48:00']*2 + ['2015-06-24 16:48:00'] + ['2015-06-25 16:48:00']*4 + ['2015-07-13 16:48:00', '2015-08-13 16:48:00'], 'BPM': [65]*8 + [70]*4}) mydata['timestamp'] = pd.to_datetime(mydata['timestamp']) print(mydata) # BPM timestamp # 0 65 2015-06-22 16:48:00 # 1 65 2015-06-22 16:48:00 # 2 65 2015-06-22 16:48:00 # 3 65 2015-06-23 16:48:00 # 4 65 2015-06-23 16:48:00 # 5 65 2015-06-24 16:48:00 # 6 65 2015-06-25 16:48:00 # 7 65 2015-06-25 16:48:00 # 8 70 2015-06-25 16:48:00 # 9 70 2015-06-25 16:48:00 # 10 70 2015-07-13 16:48:00 # 11 70 2015-08-13 16:48:00 yourdata = pd.Series(['2015-06-22', '2015-06-23', '2015-06-24', '2015-06-25', '2015-07-13'], name='timestamp') yourdata = pd.to_datetime(yourdata).to_frame() print(yourdata) # 0 2015-06-22 # 1 2015-06-23 # 2 2015-06-24 # 3 2015-06-25 # 4 2015-07-13 result = (mydata.set_index('timestamp').resample('D') .size().loc[yourdata['timestamp']] .reset_index()) result.columns = ['timestamp', 'result'] print(result) # timestamp result # 0 2015-06-22 3 # 1 2015-06-23 2 # 2 2015-06-24 1 # 3 2015-06-25 4 # 4 2015-07-13 1 </code></pre>
1
2016-07-25T14:52:14Z
[ "python", "pandas", "dataframe" ]
How to get django queryset results with formatted datetime field
38,570,258
<p>I've Django model which has foreign keys associated with other models. Each model is having same field names(attributes) created_at and updated_at</p> <p>In every django queryset results I'll be getting datetime values.</p> <pre><code>Model.objects.all().values('created_at') </code></pre> <p>But I want to format the datetime field to "DD-MM-YYYY HH:MM:SS" and trim down the milliseconds in the django query results.</p> <p>If I use "extra" and and date_trunc_sql like the following command</p> <pre><code>dt = connection.ops.date_trunc_sql('day','created_date') objects.extra({'date':dt}).values('date') </code></pre> <p>Which works fine. But If I query like the following, its raising ambiguous statement error.</p> <pre><code>objects.extra({'date':dt}).values('date', 'x', 'y', 'z') </code></pre> <p>How to overcome this problem?</p>
0
2016-07-25T14:19:06Z
38,570,415
<p>I don't think the <code>values()</code> function does anything related to formatting a datetime result. But why does that bother you? Can't you convert the values to the proper format when you display them? If you render them in a template, Django has the <code>date</code> template filter for formatting datetime values: <a href="https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#date" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#date</a></p>
0
2016-07-25T14:26:50Z
[ "python", "django", "datetime", "django-models" ]
How to get django queryset results with formatted datetime field
38,570,258
<p>I've Django model which has foreign keys associated with other models. Each model is having same field names(attributes) created_at and updated_at</p> <p>In every django queryset results I'll be getting datetime values.</p> <pre><code>Model.objects.all().values('created_at') </code></pre> <p>But I want to format the datetime field to "DD-MM-YYYY HH:MM:SS" and trim down the milliseconds in the django query results.</p> <p>If I use "extra" and and date_trunc_sql like the following command</p> <pre><code>dt = connection.ops.date_trunc_sql('day','created_date') objects.extra({'date':dt}).values('date') </code></pre> <p>Which works fine. But If I query like the following, its raising ambiguous statement error.</p> <pre><code>objects.extra({'date':dt}).values('date', 'x', 'y', 'z') </code></pre> <p>How to overcome this problem?</p>
0
2016-07-25T14:19:06Z
38,582,438
<p>Got the solution.</p> <pre><code>data = list(Model.objects.extra(select={'date':"to_char(&lt;DATABASENAME&gt;_&lt;TableName&gt;.created_at, 'YYYY-MM-DD hh:mi AM')"}).values_list('date', flat=True)) </code></pre> <p>It's not just tablename.attribute; it should be dbname_tablename.attribute when we have multiple databases (otherwise the statement is ambiguous).</p> <p>This results in a list of created_at datetime values trimmed to the 'YYYY-MM-DD HH:MM' format.</p>
0
2016-07-26T06:25:57Z
[ "python", "django", "datetime", "django-models" ]
How to get django queryset results with formatted datetime field
38,570,258
<p>I've Django model which has foreign keys associated with other models. Each model is having same field names(attributes) created_at and updated_at</p> <p>In every django queryset results I'll be getting datetime values.</p> <pre><code>Model.objects.all().values('created_at') </code></pre> <p>But I want to format the datetime field to "DD-MM-YYYY HH:MM:SS" and trim down the milliseconds in the django query results.</p> <p>If I use "extra" and and date_trunc_sql like the following command</p> <pre><code>dt = connection.ops.date_trunc_sql('day','created_date') objects.extra({'date':dt}).values('date') </code></pre> <p>Which works fine. But If I query like the following, its raising ambiguous statement error.</p> <pre><code>objects.extra({'date':dt}).values('date', 'x', 'y', 'z') </code></pre> <p>How to overcome this problem?</p>
0
2016-07-25T14:19:06Z
38,606,717
<p>Perhaps I misunderstood your question, but why not use something like <code>&lt;datetime_object&gt;.strftime('%d-%m-%Y %H:%M:%S')</code>?</p>
0
2016-07-27T07:39:05Z
[ "python", "django", "datetime", "django-models" ]
Python/Pandas - partitioning a pandas DataFrame in 10 disjoint, equally-sized subsets
38,570,268
<p>I want to partition a pandas DataFrame into ten disjoint, equally-sized, randomly composed subsets. </p> <p>I know I can randomly sample one tenth of the original pandas DataFrame using:</p> <pre><code>partition_1 = pandas.DataFrame.sample(frac=(1/10)) </code></pre> <p>However, how can I obtain the other nine partitions? If I'd do <code>pandas.DataFrame.sample(frac=(1/10))</code> again, there exists the possibility that my subsets are not disjoint. </p> <p>Thanks for the help!</p>
0
2016-07-25T14:19:25Z
38,570,355
<p>Use <code>np.random.permutation</code>:</p> <p><code>df.loc[np.random.permutation(df.index)]</code></p> <p>It will shuffle the dataframe while keeping the column names; afterwards you can split the dataframe into 10.</p>
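A short sketch of the whole idea with a toy frame: shuffle with `np.random.permutation`, then cut into 10 pieces with `np.array_split`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': range(20)})

# Shuffle the rows, then split into 10 disjoint, equally-sized pieces
shuffled = df.loc[np.random.permutation(df.index)]
parts = np.array_split(shuffled, 10)

print(len(parts), [len(p) for p in parts])  # 10 [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
```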
0
2016-07-25T14:23:38Z
[ "python", "python-2.7", "pandas", "dataframe", "partitioning" ]
Python/Pandas - partitioning a pandas DataFrame in 10 disjoint, equally-sized subsets
38,570,268
<p>I want to partition a pandas DataFrame into ten disjoint, equally-sized, randomly composed subsets. </p> <p>I know I can randomly sample one tenth of the original pandas DataFrame using:</p> <pre><code>partition_1 = pandas.DataFrame.sample(frac=(1/10)) </code></pre> <p>However, how can I obtain the other nine partitions? If I'd do <code>pandas.DataFrame.sample(frac=(1/10))</code> again, there exists the possibility that my subsets are not disjoint. </p> <p>Thanks for the help!</p>
0
2016-07-25T14:19:25Z
38,570,791
<p>Say <code>df</code> is your dataframe, and you want <code>N_PARTITIONS</code> partitions of roughly equal size (they will be of <em>exactly</em> equal size if <code>len(df)</code> is divisible by <code>N_PARTITIONS</code>).</p> <p>Use <code>np.random.permutation</code> to permute the array <code>np.arange(len(df))</code>. Then take slices of that array with step <code>N_PARTITIONS</code>, and extract the corresponding rows of your dataframe with <code>.iloc[]</code>.</p> <pre><code>import numpy as np permuted_indices = np.random.permutation(len(df)) dfs = [] for i in range(N_PARTITIONS): dfs.append(df.iloc[permuted_indices[i::N_PARTITIONS]]) </code></pre> <p>Since you are on Python 2.7, it might be better to replace <code>range(N_PARTITIONS)</code> with <code>xrange(N_PARTITIONS)</code> to get an iterator instead of a list.</p>
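The loop above can be condensed into a list comprehension; a quick check with a toy frame that the partitions are disjoint and cover every row:

```python
import numpy as np
import pandas as pd

N_PARTITIONS = 10
df = pd.DataFrame({'value': range(25)})

permuted_indices = np.random.permutation(len(df))
dfs = [df.iloc[permuted_indices[i::N_PARTITIONS]] for i in range(N_PARTITIONS)]

# Disjoint and exhaustive: each original row appears in exactly one partition
all_rows = sorted(i for part in dfs for i in part.index)
print(all_rows == list(range(25)))  # True
```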
0
2016-07-25T14:42:59Z
[ "python", "python-2.7", "pandas", "dataframe", "partitioning" ]
Python/Pandas - partitioning a pandas DataFrame in 10 disjoint, equally-sized subsets
38,570,268
<p>I want to partition a pandas DataFrame into ten disjoint, equally-sized, randomly composed subsets. </p> <p>I know I can randomly sample one tenth of the original pandas DataFrame using:</p> <pre><code>partition_1 = pandas.DataFrame.sample(frac=(1/10)) </code></pre> <p>However, how can I obtain the other nine partitions? If I'd do <code>pandas.DataFrame.sample(frac=(1/10))</code> again, there exists the possibility that my subsets are not disjoint. </p> <p>Thanks for the help!</p>
0
2016-07-25T14:19:25Z
38,571,305
<p>Starting with this:</p> <pre><code>dfm = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo']*2, 'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three']*2}) A B 0 foo one 1 bar one 2 foo two 3 bar three 4 foo two 5 bar two 6 foo one 7 foo three 8 foo one 9 bar one 10 foo two 11 bar three 12 foo two 13 bar two 14 foo one 15 foo three </code></pre> <p>Usage: change the <code>4</code> to <code>10</code>, and use <code>[i]</code> to get the slices.</p> <pre><code>np.random.seed(32) # for reproducible results. np.array_split(dfm.reindex(np.random.permutation(dfm.index)),4)[1] A B 2 foo two 5 bar two 10 foo two 12 foo two np.array_split(dfm.reindex(np.random.permutation(dfm.index)),4)[3] A B 13 foo two 11 bar three 0 foo one 7 foo three </code></pre>
0
2016-07-25T15:05:26Z
[ "python", "python-2.7", "pandas", "dataframe", "partitioning" ]
How to extract specific words from a list as the list length changes?
38,570,487
<p>Let's assume that I have a sample of the following strings:</p> <ul> <li>string = 'http/1.1 <strong>abc-ad-sd-00</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-11</strong>.sad.sdsd.der.net (Server/1.2 [<strong>gfef srFw:t reri pSs</strong> ])'</li> <li>string1 = 'http/1.1 <strong>abc-ad-sd-01</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 <strong>asc-ad-sd-13</strong>.sad.sdsd.der.net (Server/1.2 [<strong>sff as srFw:t reri pSs</strong> ])'</li> <li>string2 = 'http/1.1 <strong>abc-ad-sd-002</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-14</strong>.sad.sdsd.der.net (Server/1.2 [<strong>rts as f srFw:t reri pSs</strong> ])'</li> <li>string3 = 'http/1.1 <strong>abc-ad-sd-03</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-15</strong>.sad.sdsd.der.net (Server/1.2 [<strong>tttts as t srFw:t reri pSs</strong> ])'</li> </ul> <p>Here's what I did to get the bold strings:</p> <pre><code>If name == 'via': name = “ID1” string = header_line.split(' ') b = (string[2].split('.')) value = b[0] headers[name] = value #----------# name_1 = “ID2” string = header_line.split(' ') b_1 = (string[9].split('.')) value_1 = b_1[0] headers[name_1] = value_1 #-----# name_2 = “ID3” string = header_line.split(' ') b_2 = (string[11:]) value_2 = ''.join(b_2) headers[name_2] = value_2 #----# </code></pre> <p>The problem with this is that it works only in certain situations. As you can see, there are 3 different strings so getting the bold strings by their index doesn't quite work. Ofcourse, this is not my complete code as these strings are stored in dict list. 
Example: My initial output looks like this: </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘via’: ‘http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ]))’, ‘item4’: ’22’, 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n']}, {…}, {…}]</p> </blockquote> <p>And I want a different output like this based on the parsed values from the response above. </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘ID3’: 'tttts srFw:t reri pSs', ‘item4’: ’22’, ‘ID2’: ‘asc-ad-sd-11', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n'], ‘ID1’: ‘abc-ad-sd-00’}, {…}, {…}]</p> </blockquote> <p>So as you can see, I've bunch of dicts inside a list and for the key 'via', I want its value to be parsed into different substrings that I want and store them into new key values. I've already done this in my code. </p> <p>Update: Thanks everyone for your responses. I've clarified my question. From your response, the value for ID1 and ID2 works however the value inside the [] isn't working because "tttts" won't be the same string in every response. </p> <p>Another update: Thank you all for your help!! Using everyone's response, I tweaked my code a little and figured out how to get the values.</p>
0
2016-07-25T14:29:44Z
38,570,923
<p>I think regular expressions are your friend here. Something like <code>http\/1\.1 ([^\.]+)</code> works for this specific case.</p> <pre><code>import re match = re.compile('http\/1\.1 ([^\.]+)').search(string) value = match.group(1) </code></pre> <p>I would recommend splitting the strings with <code>string.split(',')</code> or whatever works to split each <code>http</code> entry.</p> <p>You can learn more about python's regular expression module <a href="https://docs.python.org/2/library/re.html" rel="nofollow">here</a>, and you can test our your regular expressions in various websites, I like <a href="https://regex101.com/" rel="nofollow">this one.</a></p>
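With `re.findall` the same idea pulls out every host prefix at once (using the sample string from the question; the captured group stops at the first dot):

```python
import re

s = ('http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), '
     'http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [gfef srFw:t reri pSs ])')

# findall returns every captured group, not just the first match
hosts = re.findall(r'http/1\.1 ([^.]+)', s)
print(hosts)  # ['abc-ad-sd-00', 'asc-ad-sd-11']
```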
0
2016-07-25T14:48:44Z
[ "python", "string", "parsing" ]
How to extract specific words from a list as the list length changes?
38,570,487
<p>Let's assume that I have a sample of the following strings:</p> <ul> <li>string = 'http/1.1 <strong>abc-ad-sd-00</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-11</strong>.sad.sdsd.der.net (Server/1.2 [<strong>gfef srFw:t reri pSs</strong> ])'</li> <li>string1 = 'http/1.1 <strong>abc-ad-sd-01</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 <strong>asc-ad-sd-13</strong>.sad.sdsd.der.net (Server/1.2 [<strong>sff as srFw:t reri pSs</strong> ])'</li> <li>string2 = 'http/1.1 <strong>abc-ad-sd-002</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-14</strong>.sad.sdsd.der.net (Server/1.2 [<strong>rts as f srFw:t reri pSs</strong> ])'</li> <li>string3 = 'http/1.1 <strong>abc-ad-sd-03</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-15</strong>.sad.sdsd.der.net (Server/1.2 [<strong>tttts as t srFw:t reri pSs</strong> ])'</li> </ul> <p>Here's what I did to get the bold strings:</p> <pre><code>If name == 'via': name = “ID1” string = header_line.split(' ') b = (string[2].split('.')) value = b[0] headers[name] = value #----------# name_1 = “ID2” string = header_line.split(' ') b_1 = (string[9].split('.')) value_1 = b_1[0] headers[name_1] = value_1 #-----# name_2 = “ID3” string = header_line.split(' ') b_2 = (string[11:]) value_2 = ''.join(b_2) headers[name_2] = value_2 #----# </code></pre> <p>The problem with this is that it works only in certain situations. As you can see, there are 3 different strings so getting the bold strings by their index doesn't quite work. Ofcourse, this is not my complete code as these strings are stored in dict list. 
Example: My initial output looks like this: </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘via’: ‘http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ]))’, ‘item4’: ’22’, 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n']}, {…}, {…}]</p> </blockquote> <p>And I want a different output like this based on the parsed values from the response above. </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘ID3’: 'tttts srFw:t reri pSs', ‘item4’: ’22’, ‘ID2’: ‘asc-ad-sd-11', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n'], ‘ID1’: ‘abc-ad-sd-00’}, {…}, {…}]</p> </blockquote> <p>So as you can see, I've bunch of dicts inside a list and for the key 'via', I want its value to be parsed into different substrings that I want and store them into new key values. I've already done this in my code. </p> <p>Update: Thanks everyone for your responses. I've clarified my question. From your response, the value for ID1 and ID2 works however the value inside the [] isn't working because "tttts" won't be the same string in every response. </p> <p>Another update: Thank you all for your help!! Using everyone's response, I tweaked my code a little and figured out how to get the values.</p>
0
2016-07-25T14:29:44Z
38,570,998
<p>In my opinion you can use a regex to get the substrings.</p> <pre><code>import re pattern1 = r'\w+-\w+-\w+-\d+' pattern2 = r'\[tttts .+\]' #s is the string you are checking #pattern 1 will find substrings like abc-ad-sd-00 re.findall(pattern1,s) #pattern 2 will find substrings like [tttts as t srFw:t reri pSs ] re.findall(pattern2,s) </code></pre> <p>Example:</p> <pre><code>s = 'http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ])' re.findall(r'\[tttts .+\]',s) ['[tttts srFw:t reri pSs ]'] re.findall(r'\w+-\w+-\w+-\d+',s) ['abc-ad-sd-00', 'asc-ad-sd-11'] </code></pre>
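Since the asker notes that the leading word (tttts, gfef, ...) changes between responses, a pattern anchored on the square brackets themselves may be more robust. A sketch, using one of the question's sample strings:

```python
import re

s = ('http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), '
     'http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [gfef srFw:t reri pSs ])')

# Capture whatever sits between [ and ], regardless of the leading word
bracketed = re.findall(r'\[([^\]]+)\]', s)
print(bracketed)  # ['dsddsf did:t yy p sS', 'gfef srFw:t reri pSs ']
```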
0
2016-07-25T14:51:58Z
[ "python", "string", "parsing" ]
How to extract specific words from a list as the list length changes?
38,570,487
<p>Let's assume that I have a sample of the following strings:</p> <ul> <li>string = 'http/1.1 <strong>abc-ad-sd-00</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-11</strong>.sad.sdsd.der.net (Server/1.2 [<strong>gfef srFw:t reri pSs</strong> ])'</li> <li>string1 = 'http/1.1 <strong>abc-ad-sd-01</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 <strong>asc-ad-sd-13</strong>.sad.sdsd.der.net (Server/1.2 [<strong>sff as srFw:t reri pSs</strong> ])'</li> <li>string2 = 'http/1.1 <strong>abc-ad-sd-002</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-14</strong>.sad.sdsd.der.net (Server/1.2 [<strong>rts as f srFw:t reri pSs</strong> ])'</li> <li>string3 = 'http/1.1 <strong>abc-ad-sd-03</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-15</strong>.sad.sdsd.der.net (Server/1.2 [<strong>tttts as t srFw:t reri pSs</strong> ])'</li> </ul> <p>Here's what I did to get the bold strings:</p> <pre><code>If name == 'via': name = “ID1” string = header_line.split(' ') b = (string[2].split('.')) value = b[0] headers[name] = value #----------# name_1 = “ID2” string = header_line.split(' ') b_1 = (string[9].split('.')) value_1 = b_1[0] headers[name_1] = value_1 #-----# name_2 = “ID3” string = header_line.split(' ') b_2 = (string[11:]) value_2 = ''.join(b_2) headers[name_2] = value_2 #----# </code></pre> <p>The problem with this is that it works only in certain situations. As you can see, there are 3 different strings so getting the bold strings by their index doesn't quite work. Ofcourse, this is not my complete code as these strings are stored in dict list. 
Example: My initial output looks like this: </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘via’: ‘http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ]))’, ‘item4’: ’22’, 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n']}, {…}, {…}]</p> </blockquote> <p>And I want a different output like this based on the parsed values from the response above. </p> <blockquote> <p>[{‘item1’: '10574', 'Item2’: '69.241.51.134', ‘ID3’: 'tttts srFw:t reri pSs', ‘item4’: ’22’, ‘ID2’: ‘asc-ad-sd-11', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n'], ‘ID1’: ‘abc-ad-sd-00’}, {…}, {…}]</p> </blockquote> <p>So as you can see, I've bunch of dicts inside a list and for the key 'via', I want its value to be parsed into different substrings that I want and store them into new key values. I've already done this in my code. </p> <p>Update: Thanks everyone for your responses. I've clarified my question. From your response, the value for ID1 and ID2 works however the value inside the [] isn't working because "tttts" won't be the same string in every response. </p> <p>Another update: Thank you all for your help!! Using everyone's response, I tweaked my code a little and figured out how to get the values.</p>
0
2016-07-25T14:29:44Z
38,571,019
<p>As you are working with a lot of text, the first thing to do is use/create an efficient, memory-friendly iterator over the strings (let's assume you put it into a function line_iterator).</p> <p>The second thing to do is use a regular expression for getting the required parts of the strings (assume you have written and compiled the regexp). If there are always 2 similar pieces in each string, put them into groups in your regular expression.</p> <p>Then you can do something like this:</p> <pre><code>import re

regexp = re.compile('&lt;your regular expression&gt;')
for line in line_iterator():
    match = regexp.match(line)
    if match:
        write_to_csv(match.groups())
</code></pre> <p>Anyway, have a look at <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expressions</a>; they are worth it.</p> <p>Note: 1. compile your regular expression(s) if you need to use them a lot; 2. use generators for iterating over strings, don't keep them all in memory; 3. it's better to use one regular expression if you can.</p>
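For instance, a sketch of what such a pattern could look like for the sample strings in this question (the exact regular expressions below are an assumption based on those samples, not a general solution):

```python
import re

# Hypothetical input line, taken from the sample strings in the question
line = ('http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), '
        'http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [gfef srFw:t reri pSs ])')

# Host name right after each "http/1.1", and every bracketed chunk
hosts = re.findall(r'http/1\.1 ([\w-]+)\.', line)
brackets = re.findall(r'\[([^\]]+)\]', line)

print(hosts)                 # ['abc-ad-sd-00', 'asc-ad-sd-11']
print(brackets[-1].strip())  # 'gfef srFw:t reri pSs'
```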
0
2016-07-25T14:52:45Z
[ "python", "string", "parsing" ]
How to extract specific words from a list as the list length changes?
38,570,487
<p>Let's assume that I have a sample of the following strings:</p> <ul> <li>string = 'http/1.1 <strong>abc-ad-sd-00</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-11</strong>.sad.sdsd.der.net (Server/1.2 [<strong>gfef srFw:t reri pSs</strong> ])'</li> <li>string1 = 'http/1.1 <strong>abc-ad-sd-01</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 <strong>asc-ad-sd-13</strong>.sad.sdsd.der.net (Server/1.2 [<strong>sff as srFw:t reri pSs</strong> ])'</li> <li>string2 = 'http/1.1 <strong>abc-ad-sd-002</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-14</strong>.sad.sdsd.der.net (Server/1.2 [<strong>rts as f srFw:t reri pSs</strong> ])'</li> <li>string3 = 'http/1.1 <strong>abc-ad-sd-03</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-15</strong>.sad.sdsd.der.net (Server/1.2 [<strong>tttts as t srFw:t reri pSs</strong> ])'</li> </ul> <p>Here's what I did to get the bold strings:</p> <pre><code>if name == 'via':
    name = "ID1"
    string = header_line.split(' ')
    b = (string[2].split('.'))
    value = b[0]
    headers[name] = value
    #----------#
    name_1 = "ID2"
    string = header_line.split(' ')
    b_1 = (string[9].split('.'))
    value_1 = b_1[0]
    headers[name_1] = value_1
    #-----#
    name_2 = "ID3"
    string = header_line.split(' ')
    b_2 = (string[11:])
    value_2 = ''.join(b_2)
    headers[name_2] = value_2
    #----#
</code></pre> <p>The problem with this is that it works only in certain situations. As you can see, there are 3 different strings, so getting the bold strings by their index doesn't quite work. Of course, this is not my complete code, as these strings are stored in a list of dicts. Example: my initial output looks like this:</p> <blockquote> <p>[{'item1': '10574', 'Item2': '69.241.51.134', 'via': 'http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ])', 'item4': '22', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n']}, {…}, {…}]</p> </blockquote> <p>And I want a different output like this, based on the parsed values from the response above:</p> <blockquote> <p>[{'item1': '10574', 'Item2': '69.241.51.134', 'ID3': 'tttts srFw:t reri pSs', 'item4': '22', 'ID2': 'asc-ad-sd-11', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n'], 'ID1': 'abc-ad-sd-00'}, {…}, {…}]</p> </blockquote> <p>So as you can see, I've got a bunch of dicts inside a list, and for the key 'via' I want its value to be parsed into the different substrings that I want and stored as new key values. I've already done this in my code.</p> <p>Update: Thanks everyone for your responses. I've clarified my question. From your responses, the values for ID1 and ID2 work; however, the value inside the [] isn't working, because "tttts" won't be the same string in every response.</p> <p>Another update: Thank you all for your help!! Using everyone's responses, I tweaked my code a little and figured out how to get the values.</p>
0
2016-07-25T14:29:44Z
38,571,155
<p>Check out the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">positive lookbehind regular expression</a>.</p> <pre><code>import re

p = """string = 'http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ])'
string1 = 'http/1.1 abc-ad-sd-01.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 asc-ad-sd-13.sad.sdsd.der.net (Server/1.2 [tttts as srFw:t reri pSs ])'
string2 = 'http/1.1 abc-ad-sd-002.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-14.sad.sdsd.der.net (Server/1.2 [tttts as f srFw:t reri pSs ])'
string3 = 'http/1.1 abc-ad-sd-03.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-15.sad.sdsd.der.net (Server/1.2 [tttts as t srFw:t reri pSs ])'
"""

s = re.findall("(?&lt;=http/1.1\s)([\w\d\-]*)", p, re.DOTALL | re.MULTILINE)
s2 = re.findall("(?&lt;=Server/1.2\s)\[([\w:\s]*)\]", p, re.DOTALL | re.MULTILINE)

print(list(s))
print(list(s2))

# will print
# ['abc-ad-sd-00', 'asc-ad-sd-11', 'abc-ad-sd-01', 'asc-ad-sd-13', 'abc-ad-sd-002', 'asc-ad-sd-14', 'abc-ad-sd-03', 'asc-ad-sd-15']
# and
# ['dsddsf did:t yy p sS', 'tttts srFw:t reri pSs ', 'dsddsf f u did:t yy p sS', 'tttts as srFw:t reri pSs ', 'dsddsf did:t yy p sS', 'tttts as f srFw:t reri pSs ', 'dsddsf did:t yy p sS', 'tttts as t srFw:t reri pSs ']
</code></pre>
0
2016-07-25T14:58:58Z
[ "python", "string", "parsing" ]
How to extract specific words from a list as the list length changes?
38,570,487
<p>Let's assume that I have a sample of the following strings:</p> <ul> <li>string = 'http/1.1 <strong>abc-ad-sd-00</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-11</strong>.sad.sdsd.der.net (Server/1.2 [<strong>gfef srFw:t reri pSs</strong> ])'</li> <li>string1 = 'http/1.1 <strong>abc-ad-sd-01</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 <strong>asc-ad-sd-13</strong>.sad.sdsd.der.net (Server/1.2 [<strong>sff as srFw:t reri pSs</strong> ])'</li> <li>string2 = 'http/1.1 <strong>abc-ad-sd-002</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-14</strong>.sad.sdsd.der.net (Server/1.2 [<strong>rts as f srFw:t reri pSs</strong> ])'</li> <li>string3 = 'http/1.1 <strong>abc-ad-sd-03</strong>.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 <strong>asc-ad-sd-15</strong>.sad.sdsd.der.net (Server/1.2 [<strong>tttts as t srFw:t reri pSs</strong> ])'</li> </ul> <p>Here's what I did to get the bold strings:</p> <pre><code>if name == 'via':
    name = "ID1"
    string = header_line.split(' ')
    b = (string[2].split('.'))
    value = b[0]
    headers[name] = value
    #----------#
    name_1 = "ID2"
    string = header_line.split(' ')
    b_1 = (string[9].split('.'))
    value_1 = b_1[0]
    headers[name_1] = value_1
    #-----#
    name_2 = "ID3"
    string = header_line.split(' ')
    b_2 = (string[11:])
    value_2 = ''.join(b_2)
    headers[name_2] = value_2
    #----#
</code></pre> <p>The problem with this is that it works only in certain situations. As you can see, there are 3 different strings, so getting the bold strings by their index doesn't quite work. Of course, this is not my complete code, as these strings are stored in a list of dicts. Example: my initial output looks like this:</p> <blockquote> <p>[{'item1': '10574', 'Item2': '69.241.51.134', 'via': 'http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [tttts srFw:t reri pSs ])', 'item4': '22', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n']}, {…}, {…}]</p> </blockquote> <p>And I want a different output like this, based on the parsed values from the response above:</p> <blockquote> <p>[{'item1': '10574', 'Item2': '69.241.51.134', 'ID3': 'tttts srFw:t reri pSs', 'item4': '22', 'ID2': 'asc-ad-sd-11', 'HTTP RESPONSE': ['HTTP/1.1 200 OK\r\n'], 'ID1': 'abc-ad-sd-00'}, {…}, {…}]</p> </blockquote> <p>So as you can see, I've got a bunch of dicts inside a list, and for the key 'via' I want its value to be parsed into the different substrings that I want and stored as new key values. I've already done this in my code.</p> <p>Update: Thanks everyone for your responses. I've clarified my question. From your responses, the values for ID1 and ID2 work; however, the value inside the [] isn't working, because "tttts" won't be the same string in every response.</p> <p>Another update: Thank you all for your help!! Using everyone's responses, I tweaked my code a little and figured out how to get the values.</p>
0
2016-07-25T14:29:44Z
38,571,617
<p>Using regex you can compile an expression before your loop and get each id you want from each string as you loop over them. The first regex will get the first two ids which have the same format. <code>\w+</code> looks for at least one word and <code>\d+</code> looks for at least one digit. The second expression wants the second occurrence of what's in the brackets so you start with <code>\[.*?</code> and then look for at least one word and a space before the rest of the expression. </p> <pre><code>import re list_of_strings=[ 'http/1.1 abc-ad-sd-00.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-11.sad.sdsd.der.net (Server/1.2 [gfef srFw:t reri pSs ])', 'http/1.1 abc-ad-sd-01.sad.sdsd.der.net (Server/1.2 [dsddsf f u did:t yy p sS]), http/1.1 asc-ad-sd-13.sad.sdsd.der.net (Server/1.2 [sff as srFw:t reri pSs ])', 'http/1.1 abc-ad-sd-002.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-14.sad.sdsd.der.net (Server/1.2 [rts as f srFw:t reri pSs ])', 'http/1.1 abc-ad-sd-03.sad.sdsd.der.net (Server/1.2 [dsddsf did:t yy p sS]), http/1.1 asc-ad-sd-15.sad.sdsd.der.net (Server/1.2 [tttts as t srFw:t reri pSs ])' ] first_ids=r'\w+-\w+-\w+-\d+' last_id=r'\[.*\[(\w+\s.*\w+:\w+\s\w+\s\w+)' for url in list_of_strings: print(url) print(re.findall(first_ids,url)[0]) print(re.findall(first_ids,url)[1]) print(re.findall(last_id,url)[0]) </code></pre>
0
2016-07-25T15:19:49Z
[ "python", "string", "parsing" ]
Advice on structuring a growing Django project (Models & API)
38,570,535
<p>I’m currently working on improving a Django project that is used internally at my company. The project is growing quickly, so I’m trying to make some design choices now before it’s unmanageable to refactor. Right now the project has two really important models; the rest of the data in the database that supports each application in the project is added through various separate ETL processes. Because of this, the majority of data used in the application is queried in each view via SQLAlchemy, using a straight-up multiline string, and passed through to the view via the context param when rendering, rather than using the Django ORM.</p> <p><strong>Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?</strong></p> <p>I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I’m unsure of how to structure the API. I’ve read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I’ve also read others do this same view-based API but simply include an api.py file in each application in their Django project. Others use the Django REST API Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a user's interaction does anything but GET data from the database, and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:</p> <p><strong>Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?</strong></p> <p>Thank you in advance for any resources or advice anyone has regarding these questions. 
Please let me know if I can add any additional context.</p>
0
2016-07-25T14:31:34Z
38,571,117
<blockquote> <p>Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?</p> </blockquote> <p>Yes: a centralized, consistent way of accessing the data, and of course, one less dependency on the project.</p> <blockquote> <p>Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?</p> </blockquote> <p>In general terms, JSON is used for data, and REST for APIs. You mentioned that Django-REST is already in use, so if there's any tangible benefit from having a REST API, I'd go with it.</p>
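On the first point, one way to get that consistency without letting Django's migrations touch the ETL-owned tables is an unmanaged model. This is a sketch only: the model, field, and table names below are invented for illustration, and it assumes the table already exists in the database.

```python
from django.db import models

class EtlRecord(models.Model):
    # Placeholder fields; mirror the real ETL table's columns here.
    name = models.TextField()
    loaded_at = models.DateTimeField()

    class Meta:
        managed = False          # migrations will never create/alter this table
        db_table = 'etl_record'  # point at the existing ETL-populated table
```

With `managed = False`, `makemigrations` leaves the table alone while the ORM can still query it like any other model.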
2
2016-07-25T14:56:58Z
[ "python", "django", "django-rest-framework" ]
Advice on structuring a growing Django project (Models & API)
38,570,535
<p>I’m currently working on improving a Django project that is used internally at my company. The project is growing quickly, so I’m trying to make some design choices now before it’s unmanageable to refactor. Right now the project has two really important models; the rest of the data in the database that supports each application in the project is added through various separate ETL processes. Because of this, the majority of data used in the application is queried in each view via SQLAlchemy, using a straight-up multiline string, and passed through to the view via the context param when rendering, rather than using the Django ORM.</p> <p><strong>Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?</strong></p> <p>I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I’m unsure of how to structure the API. I’ve read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I’ve also read others do this same view-based API but simply include an api.py file in each application in their Django project. Others use the Django REST API Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a user's interaction does anything but GET data from the database, and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:</p> <p><strong>Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?</strong></p> <p>Thank you in advance for any resources or advice anyone has regarding these questions. 
Please let me know if I can add any additional context.</p>
0
2016-07-25T14:31:34Z
39,083,902
<p>I would second @Av4t4r 's statement that building models for 'legacy' columns enforces:</p> <blockquote> <p>"a centralized, consistent way of accessing the data, and of course, one less dependency on the project."</p> </blockquote> <p>Additionally, this also allows you to define methods on your models to provide a <em>consistent way of modifying/checking your data</em>.</p> <p>You could write a host of <code>validate()</code> functions on a model to check the validity of certain fields, or perhaps a <code>clean()</code> function which does things like 'strip non-digits from the phone_number field' of an instantiated object.</p> <p>Rather than building these models from scratch, just run:</p> <pre><code>python manage.py inspectdb &gt; models.py
</code></pre> <p>...from a shell to autogenerate the models for your project's database.</p> <p><strong>For those of you who landed here that are using SQLAlchemy to define their models</strong>:</p> <p>Check out <a href="https://pypi.python.org/pypi/sqlacodegen" rel="nofollow">sqlacodegen</a>. It will autogenerate SQLAlchemy models from any given database, in any dialect (PostgreSQL, MySQL, etc...) supported by SQLAlchemy.</p> <p>To install:</p> <pre><code>pip install sqlacodegen
</code></pre> <p>To run (from a bash shell):</p> <pre><code>sqlacodegen --outfile models.py mysql://username:password@hostname/db_name
</code></pre> <p>This is a great way to bootstrap a Django/Flask project with SQLAlchemy-defined data models. I usually use a combination of <a href="https://github.com/seanharr11/etlalchemy" rel="nofollow">etlalchemy</a> and sqlacodegen to bootstrap Django/Flask models from a pre-existing database. I use <a href="https://github.com/seanharr11/etlalchemy" rel="nofollow">etlalchemy</a> to copy database tables from ANY database type to the desired local database of my choosing (i.e. SQL Server -> MySQL), and then autogenerate my models with sqlacodegen.</p>
0
2016-08-22T15:55:38Z
[ "python", "django", "django-rest-framework" ]
Advice on structuring a growing Django project (Models & API)
38,570,535
<p>I’m currently working on improving a Django project that is used internally at my company. The project is growing quickly, so I’m trying to make some design choices now before it’s unmanageable to refactor. Right now the project has two really important models; the rest of the data in the database that supports each application in the project is added through various separate ETL processes. Because of this, the majority of data used in the application is queried in each view via SQLAlchemy, using a straight-up multiline string, and passed through to the view via the context param when rendering, rather than using the Django ORM.</p> <p><strong>Would there be a distinct advantage in building models for all the tables that are populated via ETL processes so I can start using the Django ORM vs using SQLAlchemy and query strings?</strong></p> <p>I think it also makes sense to start building an API rather than passing a gigantic amount of information through to a single view via the context param, but I’m unsure of how to structure the API. I’ve read that some people create an entirely separate app named API and make all the views in it return a JsonResponse. I’ve also read others do this same view-based API but simply include an api.py file in each application in their Django project. Others use the Django REST API Framework, which seems simple but is slightly more complicated than just returning JsonResponse via a view. There is really only one place where a user's interaction does anything but GET data from the database, and that portion of the project uses Django REST Framework to perform CRUD operations. That being said:</p> <p><strong>Which of these API structures is the most typical, and what do I gain/lose by implementing JsonResponse views as an API vs using the Django REST Framework?</strong></p> <p>Thank you in advance for any resources or advice anyone has regarding these questions. 
Please let me know if I can add any additional context.</p>
0
2016-07-25T14:31:34Z
39,084,030
<p>I'm usually a huge proponent for DRF. It's simple to implement an easy use case, yet INCREDIBLY powerful for more complex uses. </p> <p>However, if you are not using Django models for all your data, I think JsonResponse might be easier. Running queries and manual manipulation (especially if it is only a single endpoint) might be the way to go. </p> <p>Sorry for not weighing in on the other part of the question. </p>
0
2016-08-22T16:02:57Z
[ "python", "django", "django-rest-framework" ]
Custom format ID mapping
38,570,743
<p>I have two databases (txt files). One is a two-column, tab-delimited one that holds names and IDs.</p> <pre><code>name1 \t ID1
name1 \t ID2
name2 \t ID9
name2 \t ID40
name3 \t ID3
</code></pre> <p>The other database has the same IDs as the first one in the first column, while the second column lists IDs of the same kind delimited by commas (these are the children of the ones in the first column, as the second database is hierarchical).</p> <pre><code>ID1 \t ID1,ID2,ID3
ID2 \t ID2, ID9
</code></pre> <p>What I would like to do is get a third database with the same format as the second, but in the second column I'd like to swap the child IDs for the names from the first database. For example:</p> <pre><code>ID1 \t name1,name2,name3
ID2 \t name1,name2
</code></pre> <p>Is there a way to do this? I'm quite the beginner; when I had to map IDs before I used web services, but this is a custom format needed for further analysis and I'm not sure where to start.</p> <p>Thanks in advance!</p>
0
2016-07-25T14:40:57Z
38,572,013
<pre><code>import csv

# Reading the first db is simple since there's only a fixed delimiter.
# Use the csv module to split the lines and create a dictionary that maps id to name.
id_dictionary = {}
with open('db_1.txt', 'r') as infile:
    reader = csv.reader(infile, delimiter='\t')
    for line in reader:
        id_dictionary[line[1]] = line[0]

# We can again split on tab, but that will return 'ID1,ID2,ID3' etc. as a
# single string that we call split() on later.
row_data = []
with open('db_2.txt', 'r') as infile:
    reader = csv.reader(infile, delimiter='\t')
    for line in reader:
        # ID remains unchanged, so keep the first value
        row = [line[0]]
        # Split the string into individual elements in a list
        id_codes = line[1].split(',')
        # List comprehension to look for each ID in the dictionary and return the
        # name stored against it; strip() handles stray spaces after the commas
        translated = [id_dictionary.get(item.strip()) for item in id_codes]
        # Add translated to the list that we are using to represent a row
        row.extend(translated)
        # Append the row to our collection of rows
        row_data.append(row)

with open('db_3.txt', 'w') as outfile:
    for row in row_data:
        outfile.write(row[0])
        outfile.write('\t')
        outfile.write(','.join(map(str, row[1:])))  # Join values by a comma
        outfile.write('\n')
</code></pre>
0
2016-07-25T15:37:46Z
[ "python", "bash" ]
Custom format ID mapping
38,570,743
<p>I have two databases (txt files). One is a two-column, tab-delimited one that holds names and IDs.</p> <pre><code>name1 \t ID1
name1 \t ID2
name2 \t ID9
name2 \t ID40
name3 \t ID3
</code></pre> <p>The other database has the same IDs as the first one in the first column, while the second column lists IDs of the same kind delimited by commas (these are the children of the ones in the first column, as the second database is hierarchical).</p> <pre><code>ID1 \t ID1,ID2,ID3
ID2 \t ID2, ID9
</code></pre> <p>What I would like to do is get a third database with the same format as the second, but in the second column I'd like to swap the child IDs for the names from the first database. For example:</p> <pre><code>ID1 \t name1,name2,name3
ID2 \t name1,name2
</code></pre> <p>Is there a way to do this? I'm quite the beginner; when I had to map IDs before I used web services, but this is a custom format needed for further analysis and I'm not sure where to start.</p> <p>Thanks in advance!</p>
0
2016-07-25T14:40:57Z
38,572,556
<p>You can try this one-line awk script:</p> <pre><code>awk -v FS="\t|," -v OFS="," 'FILENAME=="file_name.txt" {str[$2]=$1;next;} {for(i=2;i&lt;=NF;i++) {sub($i,str[$i],$i)};a=$1;$1="";print a"\t"$0}' file_name.txt fileID.txt|sed -e 's/,//' -e 's/,$//' </code></pre> <p>The "file_name.txt" for the awk script is the txt file whose first column has the names ("name1", "name2", ...), while "fileID.txt" has the IDs ("ID1", "ID2", ...) in its first column.</p> <p>The sed is there to trim the commas at the beginning and at the end of the list, which are not necessary.</p>
0
2016-07-25T16:04:15Z
[ "python", "bash" ]
Custom format ID mapping
38,570,743
<p>I have two databases (txt files). One is a two-column, tab-delimited one that holds names and IDs.</p> <pre><code>name1 \t ID1
name1 \t ID2
name2 \t ID9
name2 \t ID40
name3 \t ID3
</code></pre> <p>The other database has the same IDs as the first one in the first column, while the second column lists IDs of the same kind delimited by commas (these are the children of the ones in the first column, as the second database is hierarchical).</p> <pre><code>ID1 \t ID1,ID2,ID3
ID2 \t ID2, ID9
</code></pre> <p>What I would like to do is get a third database with the same format as the second, but in the second column I'd like to swap the child IDs for the names from the first database. For example:</p> <pre><code>ID1 \t name1,name2,name3
ID2 \t name1,name2
</code></pre> <p>Is there a way to do this? I'm quite the beginner; when I had to map IDs before I used web services, but this is a custom format needed for further analysis and I'm not sure where to start.</p> <p>Thanks in advance!</p>
0
2016-07-25T14:40:57Z
38,572,658
<pre><code># Suppose the database files are f1.txt, f2.txt and f3.txt.
# Use a dict to get the data into key-value format.
def get_arr(f):
    arr = []
    for line in f:
        arr.append(line.rstrip('\n').split('\t'))
    return arr

if __name__ == "__main__":
    f1 = open("f1.txt")
    f2 = open("f2.txt")
    f3 = open("f3.txt", "w")
    arr1 = get_arr(f1)
    arr2 = get_arr(f2)
    dic = {}
    for array in arr1:
        dic[array[1]] = array[0]
    for i in arr2:
        keys = i[1].split(',')
        line = i[0] + '\t'
        for key in keys:
            # strip() guards against spaces after commas;
            # default to '' for IDs missing from the first file
            line += dic.get(key.strip(), '') + ','
        line = line[:-1] + '\n'
        f3.write(line)
    f1.close()
    f2.close()
    f3.close()
</code></pre>
0
2016-07-25T16:09:31Z
[ "python", "bash" ]
Best python way to return the initializing value of class if of that same class
38,570,919
<p>I have a class that I want to accept an instance of that same class at initialization; in such a case, it will simply return that instance.</p> <p>The reason is that I want this class to accept a myriad of initialization values, so that the subsequent code can use the result as an object with known properties, independent of how it was initialized.</p> <p>I have thought of something like:</p> <pre><code>class c(object):
    def __new__(cls, *args, **kwargs):
        if isinstance(args[0], c):
            return args[0]
        else:
            return super(c, cls).__new__(cls, *args, **kwargs)
</code></pre> <p>The problem is that I don't want <code>__init__()</code> to be called when initialized in this manner. Is there any other way?</p> <p>Thanks!</p>
1
2016-07-25T14:48:37Z
38,571,749
<p>You probably want to use a factory (e.g. see this <a href="http://stackoverflow.com/questions/674304/pythons-use-of-new-and-init">question</a> for details, or google). Or just use a class method for what you want, e.g.:</p> <pre><code>class C(object):
    @classmethod
    def new(cls, *args, **kwargs):
        if args and isinstance(args[0], cls):
            return args[0]
        else:
            return cls(*args, **kwargs)

obj = C.new()
obj2 = C.new(obj)
</code></pre>
1
2016-07-25T15:25:22Z
[ "python", "class", "oop", "initialization" ]
Best python way to return the initializing value of class if of that same class
38,570,919
<p>I have a class that I want to accept an instance of that same class at initialization; in such a case, it will simply return that instance.</p> <p>The reason is that I want this class to accept a myriad of initialization values, so that the subsequent code can use the result as an object with known properties, independent of how it was initialized.</p> <p>I have thought of something like:</p> <pre><code>class c(object):
    def __new__(cls, *args, **kwargs):
        if isinstance(args[0], c):
            return args[0]
        else:
            return super(c, cls).__new__(cls, *args, **kwargs)
</code></pre> <p>The problem is that I don't want <code>__init__()</code> to be called when initialized in this manner. Is there any other way?</p> <p>Thanks!</p>
1
2016-07-25T14:48:37Z
38,572,240
<p>You can use a metaclass:</p> <pre><code>class InstanceReturnMeta(type):  # You should probably think of a better name
    def __call__(cls, *args, **kwargs):
        if args and isinstance(args[0], cls):
            return args[0]
        instance = cls.__new__(cls, *args, **kwargs)
        instance.__init__(*args, **kwargs)
        return instance


class Test(object):
    __metaclass__ = InstanceReturnMeta

    def __init__(self, value):
        self.value = value
</code></pre> <p>Let's test it:</p> <pre><code>In [3]: instance1 = Test(0)

In [4]: instance2 = Test(instance1)

In [5]: print id(instance1) == id(instance2)
True
</code></pre> <p>The ids are identical; hence both variables reference the same instance.</p> <p>P.S. I assume you are on Python 2, since your class explicitly inherits from <code>object</code>.</p>
1
2016-07-25T15:47:26Z
[ "python", "class", "oop", "initialization" ]
Best python way to return the initializing value of class if of that same class
38,570,919
<p>I have a class that I want to accept an instance of that same class at initialization; in such a case, it will simply return that instance.</p> <p>The reason is that I want this class to accept a myriad of initialization values, so that the subsequent code can use the result as an object with known properties, independent of how it was initialized.</p> <p>I have thought of something like:</p> <pre><code>class c(object):
    def __new__(cls, *args, **kwargs):
        if isinstance(args[0], c):
            return args[0]
        else:
            return super(c, cls).__new__(cls, *args, **kwargs)
</code></pre> <p>The problem is that I don't want <code>__init__()</code> to be called when initialized in this manner. Is there any other way?</p> <p>Thanks!</p>
1
2016-07-25T14:48:37Z
38,572,327
<p>The standard way to do this is to simply not do your initialization in <code>__init__</code>. Do it in <code>__new__</code>.</p>
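A minimal sketch of that approach (the class and attribute names here are illustrative, not taken from the question's code):

```python
class Wrapper(object):
    def __new__(cls, value):
        # If we're handed an existing instance, hand it straight back.
        if isinstance(value, cls):
            return value
        # Otherwise create *and* initialize the object here. Since no custom
        # __init__ is defined, nothing gets re-run on the returned instance.
        obj = super(Wrapper, cls).__new__(cls)
        obj.value = value
        return obj

w1 = Wrapper(42)
w2 = Wrapper(w1)
print(w2 is w1)  # True
print(w1.value)  # 42
```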
1
2016-07-25T15:52:47Z
[ "python", "class", "oop", "initialization" ]
Python: Find the total weights of a subgraph's outside edges
38,570,986
<p>I'm using python-igraph to extract a subgraph from a non-directed graph. Nodes are locations, and the subgraph represents all nodes/edges within a radius from a certain node.</p> <p>I need to find the weights that connect the outside nodes of the subgraph to the main graph. Is there any simple way of doing this? I'm not sure what this is formally called.</p>
0
2016-07-25T14:51:35Z
38,731,159
<p>This is basically the total weight of the cut between your chosen set of nodes and the rest of the graph. You can try something like this:</p> <pre><code>your_nodes = [1, 2, 3]
other_nodes = sorted(set(range(graph.vcount())) - set(your_nodes))
weight_of_cut = sum(graph.es.select(_between=(your_nodes, other_nodes))["weight"])
</code></pre>
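For intuition, the same quantity can be computed by hand from a plain weighted edge list: the cut weight is the total weight of edges with exactly one endpoint inside the chosen node set. The toy graph below is made up for illustration:

```python
# Edges as (u, v, weight) of an undirected toy graph
edges = [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 1.5), (0, 3, 4.0)]
inside = {0, 1}  # the subgraph's nodes

# An edge crosses the cut iff exactly one of its endpoints is inside
cut_weight = sum(w for u, v, w in edges if (u in inside) != (v in inside))
print(cut_weight)  # 7.0  (edges 1-2 and 0-3)
```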
0
2016-08-02T21:57:51Z
[ "python", "graph", "igraph" ]
Using Stanford CoreNLP Python Parser for specific output
38,571,004
<p>I'm using <a href="https://github.com/dasmith/stanford-corenlp-python" rel="nofollow">SCP</a> to get the parse CFG tree for English sentences.</p> <pre><code>from corenlp import *
corenlp = StanfordCoreNLP()
corenlp.parse("Every cat loves a dog")
</code></pre> <p>My expected output is a tree like this:</p> <pre><code>(S (NP (DET Every) (NN cat)) (VP (VT loves) (NP (DET a) (NN dog))))
</code></pre> <p>But what I got is:</p> <pre><code>(ROOT (S (NP (DT Every) (NN cat)) (VP (VBZ loves) (NP (DT a) (NN dog)))))
</code></pre> <p>How can I change the POS tags as expected and remove the ROOT node?</p> <p>Thanks</p>
0
2016-07-25T14:52:07Z
38,571,865
<p>You can use the <a href="http://www.nltk.org/howto/tree.html" rel="nofollow">nltk.tree</a> module from <a href="http://www.nltk.org/" rel="nofollow">NLTK</a>.</p> <pre><code>from nltk.tree import *

def traverse(t):
    try:
        # Replace labels
        if t.label() == "DT":
            t.set_label("DET")
        elif t.label() == "VBZ":
            t.set_label("VT")
    except AttributeError:
        return
    for child in t:
        traverse(child)

output_tree = "(ROOT (S (NP (DT Every) (NN cat)) (VP (VBZ loves) (NP (DT a) (NN dog)))))"
tree = ParentedTree.fromstring(output_tree)

# Remove the ROOT element
if tree.label() == "ROOT":
    tree = tree[0]

traverse(tree)
print tree
# (S (NP (DET Every) (NN cat)) (VP (VT loves) (NP (DET a) (NN dog))))
</code></pre>
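If only the printed string form is needed, the same relabelling can be approximated with plain string operations. This is a rough sketch; it assumes the tag patterns like "(DT " never occur inside the words themselves:

```python
import re

tree = ("(ROOT (S (NP (DT Every) (NN cat)) "
        "(VP (VBZ loves) (NP (DT a) (NN dog)))))")

# Drop the ROOT wrapper: remove the leading "(ROOT " and its closing ")"
if tree.startswith("(ROOT "):
    tree = tree[len("(ROOT "):-1]

# Rename the POS tags; the "(TAG " context keeps the words untouched
for old, new in (("DT", "DET"), ("VBZ", "VT")):
    tree = re.sub(r'\(%s ' % old, '(%s ' % new, tree)

print(tree)
# (S (NP (DET Every) (NN cat)) (VP (VT loves) (NP (DET a) (NN dog))))
```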
1
2016-07-25T15:30:47Z
[ "python", "nlp", "stanford-nlp", "pos-tagger", "corenlp" ]
django- Use prefetch_related inside of another prefetch_related
38,571,076
<p>Closest thing to what I am asking can be found <a href="https://stackoverflow.com/questions/13092268/how-do-you-join-two-tables-on-a-foreign-key-field-using-django-orm">here</a></p> <p>Say I have the following models:</p> <pre><code>class Division(models.Model): name = models.TextField() state = models.IntegerField() class Team(models.Model): name2 = models.TextField() division = models.ForeignKey(Division, ...) class Player(models.Model): name = models.TextField() hometown = models.IntegerField() team = models.ForeignKey(Team, ...) </code></pre> <p>Now I can already do the following for just one table:</p> <pre><code>players = Player.objects.prefetch_related('team') </code></pre> <p>How would I go about adding <code>state</code> to the queryset? My endgoal is to be able to do <code>player.team.division.state</code> inside of a template. The other alternative would be to use nested for loops but I would like to avoid that.</p>
1
2016-07-25T14:54:55Z
38,571,599
<p>You don't need <code>prefetch_related</code> here. You can follow the foreign keys from <code>Player</code> to <code>Team</code> to <code>Division</code> using <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#select-related" rel="nofollow"><code>select_related()</code></a>.</p> <pre><code>players = Player.objects.select_related('team__division') </code></pre> <p>A use-case for <code>prefetch_related</code> is if you started with a <code>Division</code> queryset, and wanted to fetch the related teams at the same time.</p>
2
2016-07-25T15:18:45Z
[ "python", "django", "django-templates", "django-views", "django-queryset" ]
Django: One-to-many relation for [0, 1] cardinality
38,571,157
<p>Imagine having two models:</p> <pre><code>class Service(models.Model): key_service_name = models.ForeignKey(Key_service_type, related_name='Service_key_service_names', null=False) service_hourly_wage_rate = models.DecimalField(max_digits=5, decimal_places=2, null=True) service_amount = models.DecimalField(max_digits=7, decimal_places=2, null=True) class ServiceAdditionalInfo(models.Model): service = models.ForeignKey(Service, related_name='Service_additional_info_services', null=False) fixed_price = models.DecimalField(max_digits=8, decimal_places=2, null=True) </code></pre> <p>Amongst other info, the Service class serializer features this field description:</p> <pre><code>service_additional_info = ServiceAdditionalInfoSerializer(read_only = True, many=False, source = 'Service_additional_info_services') </code></pre> <p>In practice, one Service instance may be referenced by 0 or 1 ServiceAdditionalInfo instance. The serializer understandably returns a list while I would definitely prefer a dictionary.</p> <p>My question: Is this the recommended way of modelling the relation? If so, is there a django-built-in mechanism to return a dict for such cases?</p> <p>For the latter case I know how to work around the issue, but since I would like to use the framework in the intended way, I'm very curious if there's something I missed regarding modelling and serializers.</p>
1
2016-07-25T14:59:22Z
38,601,758
<p>Replace your foreign key with a <a href="https://docs.djangoproject.com/en/1.9/topics/db/examples/one_to_one/" rel="nofollow">OneToOneField</a>.</p> <blockquote> <p>A one-to-one relationship. Conceptually, this is similar to a ForeignKey with unique=True, but the “reverse” side of the relation will directly return a single object.</p> </blockquote> <p>This is the preferable way of mapping a relationship when there can be exactly one or zero objects related to each other.</p> <p>As for generating a single dictionary that contains data from both models, you will need to subclass the model serializer.</p>
1
2016-07-27T00:07:21Z
[ "python", "django", "django-models", "data-modeling" ]
How to deliberately build bad URL with url_for() in unit tests?
38,571,192
<p>I'm writing unit tests for an API written in Python with Flask. Specifically, I want to test that the error handling for my routes works properly, so I want to deliberately build URLs with missing parameters. I want to avoid hard-coding by using url_for(), but that doesn't allow you to have missing parameters, so how do I build bad URLs?</p>
0
2016-07-25T15:00:57Z
38,578,033
<p>When it comes to <code>url_for</code> it generates two kinds of urls. If there are named routes, like</p> <pre><code>@app.route('/favorite/color/&lt;color&gt;') def favorite_color(color): return color[::-1] </code></pre> <p>Then the URL parameters are <strong>required</strong>:</p> <pre><code>url_for('favorite_color') BuildError: Could not build url for endpoint 'favorite_color'. Did you forget to specify values ['color']? </code></pre> <p>However, any parameter that is not present in the route itself will simply be converted to a querystring parameter:</p> <pre><code>print(url_for('favorite_color', color='blue', no='yellooooowwww!')) /favorite/color/blue?no=yellooooowwww%21 </code></pre> <p>So when you ask for a URL with a missing parameter, what you're asking for is a <em>url that has no route</em>. You can't build a URL for that, because flask is trying to build parameters for endpoints that exist.</p> <p>The best you can do is use parameter values that are out of bounds, e.g.</p> <pre><code>url_for('favorite_color', color='toaster') </code></pre> <p>Toaster is not a real color, and thus should return <code>404</code>. Missing parameters might also make sense in a different context, so you'll have to account for that. If you really want to have missing parameters, what you really want to do is use querystring arguments. But if you're dead set on making sure that URLs that don't exist on your server actually don't exist on your server, then you can do something like this:</p> <pre><code>url_for('favorite_color', color='fnord').replace('/fnord', '') </code></pre>
2
2016-07-25T21:56:08Z
[ "python", "unit-testing", "url", "flask" ]
Python 3 - number of letters in an encoded string
38,571,273
<p>I would like to get the number of letters in a given string. However, len(txt) returns the number of letters in the unicode form (I guess), but the actual number of letters is less than what I get.</p> <p>For example:</p> <pre><code>txt = שלום וברכה len(txt) # returns something different than 10 </code></pre> <p>I saw a solution for python 2 using <code>string.decode</code>, which is not available in python 3 - and I'm not sure it is the appropriate answer for me. By the way, the encoding for the string is <code>cp862</code>.</p> <p>EDIT: more details: I read from a text file using </p> <pre><code>with open(path, "r", encoding="cp862") as textFile: </code></pre> <p>This is the output of the line I read when I print it:</p> <pre><code>╫¬╫ñ╫¿╫ש╫ר ╫£╫ª╫ץ╫¥: ╫¢╫ת ╫¬╫ª╫£╫ק╫ץ ╫נ╫¬ ╫¢╫ש╫ñ╫ץ╫¿ </code></pre> <p>The length is 52. The real line is: תפריט לצום: כך תצלחו את כיפור and the real length is 29.</p>
-1
2016-07-25T15:04:03Z
38,573,093
<p>Probably, you are opening the file with the wrong encoding scheme; here is a demonstration:</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.version '3.4.3 (default, Oct 14 2015, 20:28:29) \n[GCC 4.8.4]' &gt;&gt;&gt; &gt;&gt;&gt; s = '╫¬╫ñ╫¿╫ש╫ר ╫£╫ª╫ץ╫¥: ╫¢╫ת ╫¬╫ª╫£╫ק╫ץ ╫נ╫¬ ╫¢╫ש╫ñ╫ץ╫¿' &gt;&gt;&gt; len(s) 52 &gt;&gt;&gt; &gt;&gt;&gt; s = s.encode('cp862').decode('utf-8') &gt;&gt;&gt; s 'תפריט לצום: כך תצלחו את כיפור' &gt;&gt;&gt; len(s) 29 </code></pre> <p>Try to open it with the default encoding (utf-8).</p>
0
2016-07-25T16:32:33Z
[ "python", "character-encoding", "python-3.4", "hebrew" ]
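The length mismatch in the question can be reproduced with the standard codecs alone — each Hebrew letter occupies two bytes in UTF-8, and decoding those bytes as cp862 turns every single byte into one garbled character. The sample word below is my own illustration, not the question's file contents:

```python
s = "שלום"  # 4 Hebrew letters

# Mis-decode the UTF-8 bytes as cp862: every byte becomes one character.
garbled = s.encode("utf-8").decode("cp862")
print(len(s), len(garbled))  # the garbled text is twice as long

# Re-encoding as cp862 and decoding as UTF-8 recovers the original text.
restored = garbled.encode("cp862").decode("utf-8")
print(restored == s)
```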
Mapping Window Drives Python: How to handle when the Win cmd Line needs input
38,571,274
<p>Good Afternoon,</p> <p>I used a version of this method to map a dozen drive letters:</p> <pre><code># Drive letter: M # Shared drive path: \\shared\folder # Username: user123 # Password: password import subprocess # Disconnect anything on M subprocess.call(r'net use * /del', shell=True) # Connect to shared drive, use drive letter M subprocess.call(r'net use m: \\shared\folder /user:user123 password', shell=True) </code></pre> <p>The above code works great as long as I do not have a folder with a file in use by a program.</p> <p>If I run the same command just in a cmd window and a file is in use when I try to disconnect the drive, it returns the <b>Are you sure? Y/N</b> prompt.</p> <p>How can I pass this question back to the user via the Py script (or, if nothing else, force the disconnect so that the code can continue to run)?</p>
1
2016-07-25T15:04:05Z
38,571,335
<p>To force disconnecting try it with <code>/yes</code> like so</p> <pre><code>subprocess.call(r'net use * /del /yes', shell=True) </code></pre> <hr> <p>In order to 'redirect' the question to the user you have (at least) 2 possible approaches: </p> <ul> <li>Read and write to the standard input / output stream of the sub process </li> <li>Work with exit codes and start the sub process a second time if necessary </li> </ul> <p>The first approach is very fragile as you have to read the standard output and interpret it which is specific to your current locale as well as answering later the question which is also specific to your current locale (e.g. confirming would be done with 'Y' in English but with 'J' in German etc.) </p> <p>The second approach is more stable as it relies on more or less static return codes. I did a quick test and in case of cancelling the question the return code is 2; in case of success of course just 0. So with the following code you should be able to handle the question and act on user input: </p> <pre><code>import subprocess exitcode = subprocess.call(r'net use * /del /no', shell=True) if exitcode == 2: choice = input("Probably something bad happens ... still continue? (Y/N)") if choice == "Y": subprocess.call(r'net use * /del /yes', shell=True) else: print("Cancelled") else: print("Worked on first try") </code></pre>
0
2016-07-25T15:06:59Z
[ "python", "subprocess", "net-use" ]
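The exit-code branching in the answer can be exercised on any platform by swapping a stand-in for net use — the Python one-liner below is a hypothetical substitute that simply exits with status 2, mimicking a refused deletion:

```python
import subprocess
import sys

# Stand-in for `net use * /del /no`: exits with status 2 (hypothetical substitute).
dummy = [sys.executable, "-c", "import sys; sys.exit(2)"]

exitcode = subprocess.call(dummy)
if exitcode == 2:
    print("refused -- ask the user, then retry with /yes")
elif exitcode == 0:
    print("worked on first try")
```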
Django: reload variable's value in the template
38,571,334
<p>Imagine there is a Django HTML-template with a variable <code>foo</code>:</p> <pre><code>&lt;div&gt; {{ foo }} &lt;/div&gt; </code></pre> <p>Is it possible to reload <code>foo</code>'s value somehow without API call and without page reload?</p>
-1
2016-07-25T15:06:57Z
38,703,715
<p>After searching for an answer, I came to the conclusion that there is no way to update it without a page reload or an API call. So, the solution is to change the approach.</p>
0
2016-08-01T16:26:56Z
[ "python", "django" ]
Python shell call to exiftool
38,571,336
<p>I have an image, 1.tiff, from which I want to copy the exif data to two other images, 2.tiff and 3.tiff. From the normal shell I can write the same exif data to multiple images by typing</p> <pre><code>exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"} </code></pre> <p>For some reason, I am not able to do this from Python. If I execute the same shell command from a Python script, i.e. </p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"}') </code></pre> <p>I get the following error: </p> <pre><code>Error: File not found - {2.tiff,3.tiff} </code></pre> <p>It works, however, if I call the command for every single image to be written to, i.e.</p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "2.tiff"') os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "3.tiff"') </code></pre> <p>But, as I am going to call the command several thousand times, reading the exif data from 1.tiff over and over again is simply too slow. Do you have any suggestions on how to copy exif data from one source image to multiple images while only reading the source image once?</p> <p>The following zip-file contains a working bash-script and the non-working python equivalent: <a href="https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1" rel="nofollow">https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1</a></p>
0
2016-07-25T15:07:03Z
38,571,478
<p><code>os.system</code> tends to act up like that, especially on Windows. You'll probably have more success with <a href="https://docs.python.org/2/library/subprocess.html#subprocess.call" rel="nofollow"><code>subprocess.call</code></a>:</p> <pre><code>subprocess.call(['exiftool','-m','-overwrite_original','-TagsFromFile','1.tiff','{"2.tiff","3.tiff"}']) </code></pre>
1
2016-07-25T15:13:22Z
[ "python", "shell", "exiftool" ]
Python shell call to exiftool
38,571,336
<p>I have an image, 1.tiff, from which I want to copy the exif data to two other images, 2.tiff and 3.tiff. From the normal shell I can write the same exif data to multiple images by typing</p> <pre><code>exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"} </code></pre> <p>For some reason, I am not able to do this from Python. If I execute the same shell command from a Python script, i.e. </p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"}') </code></pre> <p>I get the following error: </p> <pre><code>Error: File not found - {2.tiff,3.tiff} </code></pre> <p>It works, however, if I call the command for every single image to be written to, i.e.</p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "2.tiff"') os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "3.tiff"') </code></pre> <p>But, as I am going to call the command several thousand times, reading the exif data from 1.tiff over and over again is simply too slow. Do you have any suggestions on how to copy exif data from one source image to multiple images while only reading the source image once?</p> <p>The following zip-file contains a working bash-script and the non-working python equivalent: <a href="https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1" rel="nofollow">https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1</a></p>
0
2016-07-25T15:07:03Z
38,571,580
<p>If your only purpose is to use exiftool in Python, then why not use <a href="https://smarnach.github.io/pyexiftool/" rel="nofollow">this module</a>? Sorry, I do not have enough reputation to post comments yet. For example:</p> <pre><code> import exiftool files = ["a.jpg", "b.png", "c.tif"] with exiftool.ExifTool() as et: metadata = et.get_metadata_batch(files) </code></pre> <p>UPDATE: Sorry, I stand corrected. That module does not perform exif modification.</p>
0
2016-07-25T15:18:10Z
[ "python", "shell", "exiftool" ]
Python shell call to exiftool
38,571,336
<p>I have an image, 1.tiff, from which I want to copy the exif data to two other images, 2.tiff and 3.tiff. From the normal shell I can write the same exif data to multiple images by typing</p> <pre><code>exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"} </code></pre> <p>For some reason, I am not able to do this from Python. If I execute the same shell command from a Python script, i.e. </p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"}') </code></pre> <p>I get the following error: </p> <pre><code>Error: File not found - {2.tiff,3.tiff} </code></pre> <p>It works, however, if I call the command for every single image to be written to, i.e.</p> <pre><code>os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "2.tiff"') os.system('exiftool -m -overwrite_original -TagsFromFile "1.tiff" "3.tiff"') </code></pre> <p>But, as I am going to call the command several thousand times, reading the exif data from 1.tiff over and over again is simply too slow. Do you have any suggestions on how to copy exif data from one source image to multiple images while only reading the source image once?</p> <p>The following zip-file contains a working bash-script and the non-working python equivalent: <a href="https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1" rel="nofollow">https://www.dropbox.com/s/nm8fdkdfq7hqi8m/folder.zip?dl=1</a></p>
0
2016-07-25T15:07:03Z
38,572,429
<p>Inspired by <a href="http://stackoverflow.com/questions/22659579/curly-braces-in-python-popen">this</a> question and Rawing's answer, the problem seems to be that /bin/sh doesn't support these curly braces. The solution therefore is to pass the command as a single string and set shell=True together with executable='/bin/bash' in subprocess.Popen:</p> <pre><code>subprocess.Popen('exiftool -m -overwrite_original -TagsFromFile "1.tiff" {"2.tiff","3.tiff"}', shell=True, executable='/bin/bash') </code></pre>
0
2016-07-25T15:57:41Z
[ "python", "shell", "exiftool" ]
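The /bin/sh-versus-bash point behind the last answer can be checked directly on a POSIX system with bash installed: bash expands the braces, while dash (Debian's default /bin/sh) passes them through literally:

```python
import subprocess

# Run the command under bash, which performs brace expansion.
out = subprocess.run("echo {a,b}", shell=True, executable="/bin/bash",
                     capture_output=True, text=True)
print(out.stdout)  # bash expands the braces to: a b
```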
Pandas Dataframe: How to parse integers into string of 0s and 1s?
38,571,348
<p>I have the following pandas DataFrame.</p> <pre><code>import pandas as pd df = pd.read_csv('filename.csv') print(df) sample column_A 0 sample1 6/6 1 sample2 0/4 2 sample3 2/6 3 sample4 12/14 4 sample5 15/21 5 sample6 12/12 .. .... </code></pre> <p>The values in <code>column_A</code> are not fractions, and these data must be manipulated such that I can convert each value into <code>0s</code> and <code>1s</code> (not convert the integers into their binary counterparts). </p> <p>The "numerator" above gives the total number of <code>1s</code>, while the "denominator" gives the total number of <code>0s</code> and <code>1s</code> together. </p> <p>So, the table should actually be in the following format:</p> <pre><code> sample column_A 0 sample1 111111 1 sample2 0000 2 sample3 110000 3 sample4 11111111111100 4 sample5 111111111111111000000 5 sample6 111111111111 .. .... </code></pre> <p>I've never parsed an integer to output strings of 0s and 1s like this. How does one do this? Is there a "pandas method" to use with <code>lambda</code> expressions? Pythonic string parsing or regex? </p>
4
2016-07-25T15:07:26Z
38,571,544
<p>First, suppose you write a function:</p> <pre><code>def to_binary(s): n_d = s.split('/') n, d = int(n_d[0]), int(n_d[1]) return '1' * n + '0' * (d - n) </code></pre> <p>So that, </p> <pre><code>&gt;&gt;&gt; to_binary('4/5') '11110' </code></pre> <p>Now you just need to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html"><code>pandas.Series.apply</code></a>:</p> <pre><code> df.column_A.apply(to_binary) </code></pre>
6
2016-07-25T15:16:12Z
[ "python", "regex", "parsing", "pandas" ]
Pandas Dataframe: How to parse integers into string of 0s and 1s?
38,571,348
<p>I have the following pandas DataFrame.</p> <pre><code>import pandas as pd df = pd.read_csv('filename.csv') print(df) sample column_A 0 sample1 6/6 1 sample2 0/4 2 sample3 2/6 3 sample4 12/14 4 sample5 15/21 5 sample6 12/12 .. .... </code></pre> <p>The values in <code>column_A</code> are not fractions, and these data must be manipulated such that I can convert each value into <code>0s</code> and <code>1s</code> (not convert the integers into their binary counterparts). </p> <p>The "numerator" above gives the total number of <code>1s</code>, while the "denominator" gives the total number of <code>0s</code> and <code>1s</code> together. </p> <p>So, the table should actually be in the following format:</p> <pre><code> sample column_A 0 sample1 111111 1 sample2 0000 2 sample3 110000 3 sample4 11111111111100 4 sample5 111111111111111000000 5 sample6 111111111111 .. .... </code></pre> <p>I've never parsed an integer to output strings of 0s and 1s like this. How does one do this? Is there a "pandas method" to use with <code>lambda</code> expressions? Pythonic string parsing or regex? </p>
4
2016-07-25T15:07:26Z
38,571,969
<p>An alternative:</p> <pre><code>df2 = df['column_A'].str.split('/', expand=True).astype(int)\ .assign(ones='1').assign(zeros='0') df2 Out: 0 1 ones zeros 0 6 6 1 0 1 0 4 1 0 2 2 6 1 0 3 12 14 1 0 4 15 21 1 0 5 12 12 1 0 (df2[0] * df2['ones']).str.cat((df2[1]-df2[0])*df2['zeros']) Out: 0 111111 1 0000 2 110000 3 11111111111100 4 111111111111111000000 5 111111111111 dtype: object </code></pre> <p>Note: I was actually trying to find a faster alternative thinking apply would be slow but this one turns out to be slower.</p>
4
2016-07-25T15:35:17Z
[ "python", "regex", "parsing", "pandas" ]
Pandas Dataframe: How to parse integers into string of 0s and 1s?
38,571,348
<p>I have the following pandas DataFrame.</p> <pre><code>import pandas as pd df = pd.read_csv('filename.csv') print(df) sample column_A 0 sample1 6/6 1 sample2 0/4 2 sample3 2/6 3 sample4 12/14 4 sample5 15/21 5 sample6 12/12 .. .... </code></pre> <p>The values in <code>column_A</code> are not fractions, and these data must be manipulated such that I can convert each value into <code>0s</code> and <code>1s</code> (not convert the integers into their binary counterparts). </p> <p>The "numerator" above gives the total number of <code>1s</code>, while the "denominator" gives the total number of <code>0s</code> and <code>1s</code> together. </p> <p>So, the table should actually be in the following format:</p> <pre><code> sample column_A 0 sample1 111111 1 sample2 0000 2 sample3 110000 3 sample4 11111111111100 4 sample5 111111111111111000000 5 sample6 111111111111 .. .... </code></pre> <p>I've never parsed an integer to output strings of 0s and 1s like this. How does one do this? Is there a "pandas method" to use with <code>lambda</code> expressions? Pythonic string parsing or regex? </p>
4
2016-07-25T15:07:26Z
38,574,873
<p>Here are some alternative solutions using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow">extract()</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.repeat.html" rel="nofollow">.str.repeat()</a> methods:</p> <pre><code>In [187]: x = df.column_A.str.extract(r'(?P&lt;ones&gt;\d+)/(?P&lt;len&gt;\d+)', expand=True).astype(int).assign(o='1', z='0') In [188]: x Out[188]: ones len o z 0 6 6 1 0 1 0 4 1 0 2 2 6 1 0 3 12 14 1 0 4 15 21 1 0 5 12 12 1 0 In [189]: x.o.str.repeat(x.ones) + x.z.str.repeat(x.len-x.ones) Out[189]: 0 111111 1 0000 2 110000 3 11111111111100 4 111111111111111000000 5 111111111111 dtype: object </code></pre> <p>or a slow (two <code>apply()</code>) one-liner:</p> <pre><code>In [190]: %paste (df.column_A.str.extract(r'(?P&lt;one&gt;\d+)/(?P&lt;len&gt;\d+)', expand=True) .astype(int) .apply(lambda x: ['1'] * x.one + ['0'] * (x.len-x.one), axis=1) .apply(''.join) ) ## -- End pasted text -- Out[190]: 0 111111 1 0000 2 110000 3 11111111111100 4 111111111111111000000 5 111111111111 dtype: object </code></pre>
1
2016-07-25T18:20:21Z
[ "python", "regex", "parsing", "pandas" ]
Python3 Tkinter - Write input (y) to console for subprocess
38,571,439
<p>I've been looking around for an answer to my problem but have been unlucky. I would like for the answer to work with native python and hopefully be simple.</p> <p>My problem is that I'm using subprocess in my tkinter application, but one of the commands require you to write Y/N to be sure you want to proceed with the action.</p> <p>So I'm looking for a way to write y into the terminal when a message like this appears: Are you sure you want to continue? (y/N)</p> <p>I've tried by running subprocess.run("y") but that doesn't seem to work.</p> <p>I'm testing this on Debian Linux and to call the command that asks if I want to proceed, is subprocess.getoutput() so that I can check for errors.</p> <p><strong><em>CODE</em></strong></p> <pre><code>class RemovePublicKeyDialog: def __init__(self, parent): top = self.top = Toplevel(parent) Label(top, text="Who to remove?").pack() self.e = Entry(top) self.e.pack(padx=5) b = Button(top, text="REMOVE", command=self.ok) b.pack(pady=5) def ok(self): #print("value is " + self.e.get()) key = self.e.get() cmd = subprocess.getoutput("gpg --delete-keys " + key) print(cmd) if ("key \"" + key + "\" not found" in cmd): messagebox.showerror("Error", "No such public key.") elif ("Delete this key from keyring?" in cmd): #subprocess.run("echo 'y'") messagebox.showinfo("Success", "Public key \"" + key + "\" deleted from keyring.") else: messagebox.showerror("Error", "Unknown error, did you input a key?") self.top.destroy() </code></pre> <p>This is the "main" code, everything works but it's just that I need to input Y to get it to proceed.</p>
0
2016-07-25T15:12:06Z
38,571,880
<p>Many command line utilities have a flag that automatically answers yes to any prompts - if you have access to the source code of your particular command, adding such a flag if it doesn't have one (or simply making a custom version that never prompts) may be the easiest solution. Some commands automatically do this if not run directly from a terminal - are you sure that this is even a problem?</p> <p>If you know that there will be a single prompt, you could try:<br> <code>subprocess.run("echo y | your-command", shell=True)</code></p> <p>If there may be multiple prompts, you'd have to use one of the more complex options in the subprocess module, reading and parsing the command output to know when a reply is needed.</p>
0
2016-07-25T15:31:38Z
[ "python", "linux", "python-3.x", "input", "tkinter" ]
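An alternative to piping echo is to feed the reply straight into the child's stdin with subprocess.run(..., input=...). The sketch below substitutes a Python one-liner for gpg, so the prompt text and outcome messages are made up for illustration:

```python
import subprocess
import sys

# Stand-in for `gpg --delete-keys`: prompts on stdin, then reports the outcome.
child = [sys.executable, "-c",
         "reply = input('Delete this key from keyring? (y/N) '); "
         "print('deleted' if reply.strip().lower() == 'y' else 'kept')"]

result = subprocess.run(child, input="y\n", capture_output=True, text=True)
print(result.stdout)
```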
migrate python code to NodeJS
38,571,483
<p>I am a beginner in NodeJS. I have worked on some scripts in Python which do some calculations on two CSVs. Is there an easy way to migrate a Python script to NodeJS? I don't know whether it's the right way to do it, but I wanted to know whether there is any way to do it.</p> <p>Any suggestions would be appreciated.</p>
0
2016-07-25T15:13:36Z
38,571,665
<p><strong>Simple way:</strong> understand your code and rewrite it in <em>node.js</em> format.</p> <p>If you're a beginner in <em>node.js</em>, it's a good exercise to understand how <em>node.js</em> works.</p> <p>I recommend this post, which gathers a lot of resources for getting started with <em>node.js</em>: <a href="http://stackoverflow.com/questions/2353818/how-do-i-get-started-with-node-js">How do I get started with Node.js</a></p> <p><strong>EDIT:</strong> you can also execute your <em>Python</em> script with a <em>node.js</em> process and use the result in <em>node.js</em>.</p> <p><strong>Documentation:</strong> <a href="https://nodejs.org/api/process.html" rel="nofollow">https://nodejs.org/api/process.html</a></p>
0
2016-07-25T15:21:55Z
[ "python", "node.js" ]
migrate python code to NodeJS
38,571,483
<p>I am a beginner in NodeJS. I have worked on some scripts in Python which do some calculations on two CSVs. Is there an easy way to migrate a Python script to NodeJS? I don't know whether it's the right way to do it, but I wanted to know whether there is any way to do it.</p> <p>Any suggestions would be appreciated.</p>
0
2016-07-25T15:13:36Z
38,571,696
<p>If you simply want to convert the code to javascript there are tools available like <a href="http://www.transcrypt.org/" rel="nofollow">transcrypt</a> which converts the python code to javascript.</p>
2
2016-07-25T15:23:10Z
[ "python", "node.js" ]
The Concept Behind itertools's product Function
38,571,501
<p>So basically I want to understand the concept of the product() function in itertools. I mean, what is the difference between yield and return? And can this code be shortened in any way?</p> <pre><code> def product1(*args, **kwds): pools = map(tuple, args) * kwds.get('repeat', 1) n = len(pools) if n == 0: yield () return if any(len(pool) == 0 for pool in pools): return indices = [0] * n yield tuple(pool[i] for pool, i in zip(pools, indices)) while 1: for i in reversed(range(n)): # right to left if indices[i] == len(pools[i]) - 1: continue indices[i] += 1 for j in range(i+1, n): indices[j] = 0 yield tuple(pool[i] for pool, i in zip(pools, indices)) break else: return </code></pre>
-5
2016-07-25T15:14:13Z
38,571,626
<p>I would highly recommend using the well-established and tested <code>itertools</code> <a href="https://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow">standard module</a>. Reinventing the wheel is never <em>advisable</em> as a programmer. That said, I would start by taking a look at the <code>product()</code> function in itertools.</p> <p>As for not using <code>itertools</code>, this problem is essentially a <strong>cartesian product</strong> problem (<em>n-permutations with duplicates allowed</em>). This is where recursion helps us! One possible solution below:</p> <p><strong>Method Body:</strong></p> <pre><code>result = [] def permutations(alphabet, repeat, total = ''): if repeat &gt;= 1: for i in alphabet: # Add the subsolutions. permutations(alphabet, repeat - 1, total + i) else: result.append(total) return result </code></pre> <p>(Note: since <code>result</code> is a module-level list, reset it between separate calls.)</p> <p>And when we call <code>permutations()</code>:</p> <p><strong>Sample Outputs:</strong></p> <pre><code>permutations('ab', 3) -&gt; $ ['aaa', 'aab', 'aba', 'abb', 'baa', 'bab', 'bba', 'bbb'] permutations('abc', 3) -&gt; $ ['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc'] permutations('ab', 1) -&gt; $ ['a', 'b'] </code></pre> <p><strong>How does it work?</strong></p> <p>This method works by nesting for loops in a recursive manner <em>repeat</em>-times. We then accumulate the result of the sub-solutions, appending to a result list. So if we use <strong>4</strong> as our repeat value, our expanded <em>iterative</em> trace of this problem would look like the following:</p> <pre><code>for i in alphabet: for j in alphabet: for k in alphabet: for l in alphabet: result.append(i + j + k + l) </code></pre>
2
2016-07-25T15:20:04Z
[ "python", "recursion", "itertools" ]
The Concept Behind itertools's product Function
38,571,501
<p>So basically I want to understand the concept of the product() function in itertools. I mean, what is the difference between yield and return? And can this code be shortened in any way?</p> <pre><code> def product1(*args, **kwds): pools = map(tuple, args) * kwds.get('repeat', 1) n = len(pools) if n == 0: yield () return if any(len(pool) == 0 for pool in pools): return indices = [0] * n yield tuple(pool[i] for pool, i in zip(pools, indices)) while 1: for i in reversed(range(n)): # right to left if indices[i] == len(pools[i]) - 1: continue indices[i] += 1 for j in range(i+1, n): indices[j] = 0 yield tuple(pool[i] for pool, i in zip(pools, indices)) break else: return </code></pre>
-5
2016-07-25T15:14:13Z
38,571,978
<p>This code should do the work:</p> <pre><code>nums = range(2**n) AB = [] for obj in nums: t = bin(obj)[2:].zfill(n) AB.append(t.replace('0','A').replace('1','B')) </code></pre> <p>n being the wanted string length</p>
1
2016-07-25T15:35:51Z
[ "python", "recursion", "itertools" ]
The Concept Behind itertools's product Function
38,571,501
<p>So basically I want to understand the concept of the product() function in itertools. I mean, what is the difference between yield and return? And can this code be shortened in any way?</p> <pre><code> def product1(*args, **kwds): pools = map(tuple, args) * kwds.get('repeat', 1) n = len(pools) if n == 0: yield () return if any(len(pool) == 0 for pool in pools): return indices = [0] * n yield tuple(pool[i] for pool, i in zip(pools, indices)) while 1: for i in reversed(range(n)): # right to left if indices[i] == len(pools[i]) - 1: continue indices[i] += 1 for j in range(i+1, n): indices[j] = 0 yield tuple(pool[i] for pool, i in zip(pools, indices)) break else: return </code></pre>
-5
2016-07-25T15:14:13Z
38,573,018
<p>First create a list with all the possible arrangements; that's easily achievable by enumerating binary numbers:</p> <pre><code>def generate_arrangements(n): return [bin(i)[2:].zfill(n) for i in range(2**n)] # 2**n is the number of possible options (A,B) n times </code></pre> <p>The [2:] slices the string to remove '0b' from it, and zfill(n) pads the string with 0s until it has length n.</p> <p>Now replace all 0,1 by A,B respectively:</p> <pre><code>arrangements = [arrangement.replace('0', 'A').replace('1', 'B') for arrangement in generate_arrangements(3)] print(arrangements) &gt;&gt; ['AAA', 'AAB', 'ABA', 'ABB', 'BAA', 'BAB', 'BBA', 'BBB'] </code></pre> <p>If you want to put it all together you have:</p> <pre><code>def generateAB(n): arrangements = [bin(i)[2:].zfill(n) for i in range(2**n)] return [arrangement.replace('0', 'A').replace('1', 'B') for arrangement in arrangements] </code></pre>
0
2016-07-25T16:28:29Z
[ "python", "recursion", "itertools" ]
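The question also asks what separates `yield` from `return`. A toy example of my own (not from the thread): a `def` containing `yield` builds a generator that suspends and resumes per value, while a plain `return` inside it only ends the iteration.

```python
def countdown(n):
    # `yield` hands back one value per next() call and suspends the function;
    while n > 0:
        yield n
        n -= 1
    return  # ends the generator (signals StopIteration to the consumer)

gen = countdown(3)   # nothing runs yet -- this is just a generator object
print(next(gen))     # 3
print(list(gen))     # [2, 1] -- execution resumed where it left off
```

This is why `product1` above can produce tuples one at a time instead of building the whole Cartesian product in memory.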
Python requests function: url formatting unexpected ascii output
38,571,534
<p>I am getting unexpected ASCII characters while using the <code>requests</code> library in Python 3.</p> <pre><code>search_terms = ["ö", "é", "ä"]
url = "http://www.domain.com/search"
for i in search_terms:
    r = requests.get(url, i)
</code></pre> <p>Which returns:</p> <pre><code>http://www.domain.com/search?%C3%B6
http://www.domain.com/search?%C3%A9
http://www.domain.com/search?%C3%A4
</code></pre> <p>Although I expected:</p> <pre><code>http://www.domain.com/search?%F6
http://www.domain.com/search?%E9
http://www.domain.com/search?%E4
</code></pre> <p>Can someone explain what happened and hint at how to get the desired results?</p>
0
2016-07-25T15:15:37Z
38,571,982
<p>I assume that requests first encodes the unicode strings as UTF-8 and then quotes them:</p> <pre><code>&gt;&gt;&gt; urllib.quote(u'ö'.encode('utf-8'))
'%C3%B6'
</code></pre>
0
2016-07-25T15:36:07Z
[ "python", "python-requests" ]
Python requests function: url formatting unexpected ascii output
38,571,534
<p>I am getting unexpected ASCII characters while using the <code>requests</code> library in Python 3.</p> <pre><code>search_terms = ["ö", "é", "ä"]
url = "http://www.domain.com/search"
for i in search_terms:
    r = requests.get(url, i)
</code></pre> <p>Which returns:</p> <pre><code>http://www.domain.com/search?%C3%B6
http://www.domain.com/search?%C3%A9
http://www.domain.com/search?%C3%A4
</code></pre> <p>Although I expected:</p> <pre><code>http://www.domain.com/search?%F6
http://www.domain.com/search?%E9
http://www.domain.com/search?%E4
</code></pre> <p>Can someone explain what happened and hint at how to get the desired results?</p>
0
2016-07-25T15:15:37Z
38,572,044
<p>That's because it's UTF-8 encoded.</p> <pre><code>&gt;&gt;&gt; u'ö'.encode()
b'\xc3\xb6'
&gt;&gt;&gt; u'é'.encode()
b'\xc3\xa9'
&gt;&gt;&gt; u'ä'.encode()
b'\xc3\xa4'
</code></pre> <p>What it seems you want is Latin-1 encoding. You can achieve it like this:</p> <pre><code># Python 3
&gt;&gt;&gt; from urllib.parse import quote
&gt;&gt;&gt; quote('ö', encoding='iso-8859-1')
'%F6'
</code></pre>
0
2016-07-25T15:39:02Z
[ "python", "python-requests" ]
Python requests function: url formatting unexpected ascii output
38,571,534
<p>I am getting unexpected ASCII characters while using the <code>requests</code> library in Python 3.</p> <pre><code>search_terms = ["ö", "é", "ä"]
url = "http://www.domain.com/search"
for i in search_terms:
    r = requests.get(url, i)
</code></pre> <p>Which returns:</p> <pre><code>http://www.domain.com/search?%C3%B6
http://www.domain.com/search?%C3%A9
http://www.domain.com/search?%C3%A4
</code></pre> <p>Although I expected:</p> <pre><code>http://www.domain.com/search?%F6
http://www.domain.com/search?%E9
http://www.domain.com/search?%E4
</code></pre> <p>Can someone explain what happened and hint at how to get the desired results?</p>
0
2016-07-25T15:15:37Z
38,572,591
<p>I figured it out without any further import statements. I am using the <code>encode</code> method now.</p> <p>Old code:</p> <pre><code>for i in search_terms:
    r = requests.get(url, i)
</code></pre> <p>New code:</p> <pre><code>for i in search_terms:
    r = requests.get(url, i.encode("iso-8859-1"))
</code></pre>
0
2016-07-25T16:05:46Z
[ "python", "python-requests" ]
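Both percent-encodings seen in this thread can be reproduced directly with the standard library; the example character comes from the question. A minimal sketch:

```python
from urllib.parse import quote

# UTF-8 is quote's default, giving the two-byte %C3%B6 form requests produced;
# encoding='iso-8859-1' yields the single-byte %F6 form the asker expected.
utf8_form = quote('ö')                           # '%C3%B6'
latin1_form = quote('ö', encoding='iso-8859-1')  # '%F6'
print(utf8_form, latin1_form)
```

Whether the server actually expects Latin-1 percent-encoding is a separate question; most modern servers assume UTF-8.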
Using Python to download files from Box
38,571,667
<p>I'm trying to use Python to download an Excel file to my local drive from Box.</p> <p>Using the boxsdk I was able to authenticate via OAuth2 and successfully get the file id on Box.</p> <p>However, when I use the <code>client.file(file_id).content()</code> function, it just returns a string, and if I use <code>client.file(file_id).get()</code> then it just gives me a <code>boxsdk.object.file.File</code>.</p> <p>Does anybody know how to write either of these to an Excel file on the local machine? Or is there a better method of using Python to download an Excel file from Box?</p> <p>(I discovered that <code>boxsdk.object.file.File</code> has an option <code>download_to(writeable_stream)</code>, documented <a href="http://box-python-sdk.readthedocs.io/en/latest/boxsdk.object.html" rel="nofollow">here</a>, but I have no idea how to use that to create an Excel file and my searches haven't been helpful.)</p>
0
2016-07-25T15:22:00Z
38,572,513
<p>You could use Python's <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">csv library</a>, along with the dialect='excel' flag. It works really nicely for exporting data to Microsoft Excel. The main idea is to use csv.writer inside a loop, writing one line at a time. Try this, and if you can't get it working, post the code here.</p>
0
2016-07-25T16:01:31Z
[ "python", "excel", "download", "box" ]
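The `download_to(writeable_stream)` option the question mentions just wants an already-open binary file. A sketch of the pattern -- the boxsdk calls appear only in comments since they need a live Box session, and the file name and byte contents here are made up; the stream handling itself is plain Python:

```python
# With a real session the whole download is:
#     with open('report.xlsx', 'wb') as fh:
#         client.file(file_id).download_to(fh)
# The same idea with the bytes already in hand (content() returns the raw body):
content = b"PK\x03\x04 fake xlsx bytes"  # stand-in for client.file(file_id).content()
with open("report.xlsx", "wb") as fh:
    fh.write(content)
```

The key detail is opening the file in `'wb'` mode: an .xlsx file is binary data, so writing it through a text-mode handle can corrupt it.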
Repeating strings in pandas DF -- want to return list of unique strings
38,571,685
<p>I have a bunch of rows of data in a pandas DF that contain inconsistently alternating string values. For each Game ID (another column), the two strings are unique to that Game ID, but do not alternate in a predictable pattern. Regardless, I'm trying to write a helper function that takes each unique game ID and gets the two team names associated with it.</p> <p>For example...</p> <pre><code>index   game_id
0       400827888
1       400827888
2       400827888
3       400827888
4       400827888
...
555622  400829117
555623  400829117
555624  400829117
555625  400829117
</code></pre> <pre><code>index   team
0       ATL
1       DET
2       ATL
3       DET
4       ATL
...
555622  POR
555623  DEN
555624  POR
555625  POR
</code></pre> <p>Here is my woeful attempt, which is not working:</p> <pre><code>def get_teams(df):
    for i in df['gameid']:
        both_teams = [df['team'].astype(str)]
    return(both_teams)
</code></pre> <p>I'd like it to return ['ATL', 'DET'] for Game ID 400827888 and ['POR', 'DEN'] for Game ID 400829117. Instead, it is just returning the team name associated with each index.</p>
0
2016-07-25T15:22:33Z
38,571,805
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.unique.html" rel="nofollow"><code>SeriesGroupBy.unique</code></a>:</p> <pre><code>print (df.groupby('game_id')['team'].unique())
game_id
400827888    [ATL, DET]
400829117    [POR, DEN]
Name: team, dtype: object
</code></pre> <p>For looping use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html" rel="nofollow"><code>iterrows</code></a>:</p> <pre><code>for i, g in df.groupby('game_id')['team'].unique().reset_index().iterrows():
    print (g.game_id)
    print (g.team)
</code></pre> <p>EDIT:</p> <p>If you need to find all <code>game_id</code> values whose teams contain some string (e.g. <code>DET</code>), use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>s = df.groupby('game_id')['team'].unique()
print (s[s.apply(lambda x: 'DET' in x)].index.tolist())
[400827888]
</code></pre>
2
2016-07-25T15:27:38Z
[ "python", "pandas", "for-loop", "dataframe" ]
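What `groupby('game_id')['team'].unique()` computes can be sketched without pandas: a "first occurrence wins" grouping over (game_id, team) pairs. The sample values are taken from the question:

```python
rows = [(400827888, "ATL"), (400827888, "DET"), (400827888, "ATL"),
        (400829117, "POR"), (400829117, "DEN"), (400829117, "POR")]

teams_by_game = {}
for game_id, team in rows:
    bucket = teams_by_game.setdefault(game_id, [])
    if team not in bucket:          # keep each team once, in first-seen order
        bucket.append(team)

print(teams_by_game)  # {400827888: ['ATL', 'DET'], 400829117: ['POR', 'DEN']}
```

This is also why the asker's loop failed: it never grouped by `game_id` at all, it just re-assigned the whole team column on every iteration.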
Python Import error while running through apache
38,571,690
<p>Actually I've invoked the Python script from inside a CGI script with back-ticks. When the script runs through the Apache web server I get an error like "ImportError: No module named skimage", but when I run it via the command line it works properly.</p> <p>OS: RHEL 6.5</p> <p>Python: 2.7.8</p> <p>$PYTHONPATH = /usr/local/bin</p> <p><strong>httpd conf (only the CGI part):</strong></p> <pre><code>&lt;Directory /home/*/public_html/cgi-bin&gt;
    Options ExecCGI
    AddHandler cgi-script .py .cgi
    SetHandler cgi-script
&lt;/Directory&gt;

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
&lt;Directory "/var/www/cgi-bin"&gt;
    AllowOverride None
    Options ExecCGI
    Order allow,deny
    Allow from all
&lt;/Directory&gt;
</code></pre> <p>Note: 1. SELinux is already disabled<br> 2. Shebang lines were included.</p> <p>Can anyone help?</p> <p>Thanks in advance.</p>
0
2016-07-25T15:22:55Z
38,574,305
<p>Maybe the library is installed only for your user and not for root. Put the library in the same folder as your main script and try.</p>
0
2016-07-25T17:46:27Z
[ "python", "apache", "cgi" ]
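A quick way to confirm this kind of mismatch is a tiny diagnostic CGI that reports which interpreter and search path Apache is actually using, so they can be compared against the command line. A sketch (the Content-Type header line only matters when the script is served by Apache):

```python
#!/usr/bin/env python
import sys

print("Content-Type: text/plain")
print()

# Compare this output with `python -c "import sys; print(sys.path)"` in your
# shell; an entry missing here explains an ImportError that only happens
# under Apache (PYTHONPATH set in your shell is not visible to Apache).
lines = [sys.executable] + sys.path
print("\n".join(lines))
```

If the paths differ, either install the module system-wide or export the needed path via `SetEnv PYTHONPATH ...` in the Apache configuration.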
Evaluate statements from within Python logging YAML config file
38,571,701
<p>Consider the following snippet of a Python <code>logging</code> YAML config file:</p> <pre><code>version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  logfile:
    class: logging.handlers.TimedRotatingFileHandler
    level: DEBUG
    filename: some_fancy_import_name.generate_filename_called_error
    backupCount: 5
    formatter: simple
</code></pre> <p>I would like to load this YAML config file this way:</p> <pre><code>import logging.config

import yaml

with open('logging.yaml', 'r') as fd:
    config = yaml.safe_load(fd.read())
logging.config.dictConfig(config)
</code></pre> <p>Take special notice of the <code>filename</code> to which the <code>handler</code> should write logs. In normal Python code, I would expect <code>some_fancy_import_name.generate_filename_called_errorlog</code> to generate the string <code>'error.log'</code>. All in all, I would like to say that this logging handler should write to the file <code>'error.log'</code> in the current directory.</p> <p>However, as it turns out, this is not the case. When I look at the current directory, I see a file named <code>'some_fancy_import_name.generate_filename_called_errorlog'</code>.</p> <h3>Why go through all this trouble?</h3> <p>I would like <code>filename</code> to be programmatically determined. I have successfully tried configuring logging using normal Python scripting this way:</p> <pre><code># fancy import name
import os
import logging.handlers
from os import environ as env

# Programmatically determine filename path
log_location = env.get('OPENSHIFT_LOG_DIR', '.')
log_filename = os.path.join(log_location, 'error')

handler = logging.handlers.TimedRotatingFileHandler(log_filename)
</code></pre> <p>See how the <code>log_filename</code> path was inferred from environment variables.</p> <p>I would like to translate this to a YAML config file. Is it possible?</p> <p>Perhaps I might need to dig through the <code>dict</code> produced by <code>yaml.safe_load(fd.read())</code> and do some <code>eval()</code> stuff?</p>
1
2016-07-25T15:23:33Z
38,574,366
<p>You can add a custom constructor and mark the value with a special tag, so your constructor gets executed when loading it:</p> <pre><code>import yaml

def eval_constructor(loader, node):
    return eval(loader.construct_scalar(node))

yaml.add_constructor(u'!eval', eval_constructor)

some_value = '123'

config = yaml.load("""
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
  logfile:
    class: logging.handlers.TimedRotatingFileHandler
    level: DEBUG
    filename: !eval some_value
    backupCount: 5
    formatter: simple
""")

print config['handlers']['logfile']['filename']
</code></pre> <p>This prints <code>123</code>, since the value <code>some_value</code> has the tag <code>!eval</code>, and therefore is loaded with <code>eval_constructor</code>.</p> <p><strong>Be aware of the security implications of <code>eval</code>ing configuration data. Arbitrary Python code can be executed by writing it into the YAML file!</strong></p>
1
2016-07-25T17:50:22Z
[ "python", "logging", "yaml" ]
Evaluate statements from within Python logging YAML config file
38,571,701
<p>Consider the following snippet of a Python <code>logging</code> YAML config file:</p> <pre><code>version: 1 formatters: simple: format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s' handlers: logfile: class: logging.handlers.TimedRotatingFileHandler level: DEBUG filename: some_fancy_import_name.generate_filename_called_error backupCount: 5 formatter: simple </code></pre> <p>I would like to load this YAML config file this way:</p> <pre><code>with open('logging.yaml', 'r') as fd: config = yaml.safe_load(fd.read()) logging.config.dictConfig(config) </code></pre> <p>Take special notice of the <code>filename</code> to which the <code>handler</code> should write logs. In normal Python code, I would expect <code>some_fancy_import_name.generate_filename_called_errorlog</code> to generate the string <code>'error.log'</code>. All in all, I would like to say that this logging handler should write to the file <code>'error.log'</code> in the current directory.</p> <p>However, as it turns out, this is not the case. When I look at the current directory, I see a file named <code>'some_fancy_import_name.generate_filename_called_errorlog'</code>.</p> <h3>Why go through all this trouble?</h3> <p>I would like <code>filename</code> to be programmatically determined. I have successfully tried configuring logging using normal Python scripting this way:</p> <pre><code># fancy import name from os import environ as env # Programmatically determine filename path log_location = env.get('OPENSHIFT_LOG_DIR', '.') log_filename = os.path.join(log_location, 'error') handler = logging.handlers.TimedRotatingFileHandler(log_filename) </code></pre> <p>See how the <code>log_filename</code> path was inferred from environment variables.</p> <p>I would like to translate this to a YAML config file. Is it possible?</p> <p>Perhaps I might need to dig through the <code>dict</code> produced by <code>yaml.safe_load(fd.read())</code> and do some <code>eval()</code> stuff?</p>
1
2016-07-25T15:23:33Z
38,591,392
<h1>Solution</h1> <p>Thanks to <a href="http://stackoverflow.com/a/38574366/366309">flyx's answer</a>, this is how I did it:</p> <pre><code>import logging
import logging.config
import os

import yaml
from os import environ as env

def constructor_logfilename(loader, node):
    value = loader.construct_scalar(node)
    return os.path.join(env.get('OPENSHIFT_LOG_DIR', '.'), value)

yaml.add_constructor(u'!logfilename', constructor_logfilename)

with open('logging.yaml', 'r') as fd:
    config = yaml.load(fd.read())
logging.config.dictConfig(config)
</code></pre> <p>In the <code>logging.yaml</code> file, here's the important snippet:</p> <pre><code>...
filename: !logfilename error.log
...
</code></pre>
0
2016-07-26T13:35:11Z
[ "python", "logging", "yaml" ]
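The eval-free alternative the question hints at -- load the YAML, then patch the dict before handing it to `dictConfig` -- is also only a few lines. A sketch, where the dict literal stands in for `yaml.safe_load(...)` and `OPENSHIFT_LOG_DIR` comes from the question:

```python
import os

# Stand-in for: config = yaml.safe_load(fd.read())
config = {"handlers": {"logfile": {"filename": "error.log"}}}

# Resolve the filename programmatically, then call
# logging.config.dictConfig(config) as usual.
log_dir = os.environ.get("OPENSHIFT_LOG_DIR", ".")
config["handlers"]["logfile"]["filename"] = os.path.join(
    log_dir, config["handlers"]["logfile"]["filename"])

print(config["handlers"]["logfile"]["filename"])
```

This keeps `safe_load` usable and avoids executing arbitrary code from the config file, at the cost of hard-coding which keys get rewritten.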
Split a numpy array into two numpy arrays
38,571,715
<p>I have a numpy array like this:</p> <pre><code>A = [(datetime.datetime(2016, 6, 8, 12, 37, 27, 826000), 3.0)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 827000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 832000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 833000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 3.0)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 35.0)]
</code></pre> <p>And I want to split it into 2 numpy arrays:</p> <pre><code>B = [datetime.datetime(2016, 6, 8, 12, 37, 27, 826000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 827000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 832000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 833000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 837000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 837000)]
C = [3.0, nan, nan, nan, 3.0, 35.0]
</code></pre> <p>To give you more details: this numpy array was originally a dictionary, and I converted it into a numpy array with the code below:</p> <pre><code>def convertarray(dictionary):
    names = ['id', 'data']
    formats = ['datetime64[ms]', 'f8']
    dtype = dict(names=names, formats=formats)
    result = np.array(dictionary.items(), dtype)
    return result
</code></pre>
0
2016-07-25T15:24:10Z
38,571,907
<p>If you just have a vanilla array with <code>dtype=object</code>, I think your best recourse is to construct the new arrays by iterating over the old one in a couple of list comprehensions:</p> <pre><code>import numpy as np
from numpy import nan
import datetime

A = np.array([(datetime.datetime(2016, 6, 8, 12, 37, 27, 826000), 3.0),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 827000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 832000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 833000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 3.0),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 35.0)])
print(A.dtype)

times = np.array([x[0] for x in A])
values = np.array([x[1] for x in A])
print(times)
print(values)
</code></pre> <p>With that said, it <em>might</em> be slightly cleaner to use a record array:</p> <pre><code>import numpy as np
from numpy import nan
import datetime

A = np.array([(datetime.datetime(2016, 6, 8, 12, 37, 27, 826000), 3.0),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 827000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 832000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 833000), nan),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 3.0),
              (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 35.0)],
             dtype=[('time', object), ('value', float)])
print(A.dtype)
print(A['time'])
print(A['value'])
</code></pre>
1
2016-07-25T15:32:35Z
[ "python", "arrays", "datetime", "numpy" ]
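The column split itself is plain Python: `zip(*pairs)` transposes a list of (time, value) tuples, which is exactly what the list comprehensions in the answer above compute before wrapping the results in `np.array`. A dependency-free sketch (the timestamp strings are stand-ins for the datetime objects):

```python
pairs = [("12:37:27.826", 3.0), ("12:37:27.827", float("nan")),
         ("12:37:27.837", 35.0)]

# zip(*pairs) transposes rows into columns:
# all first elements, then all second elements.
times, values = zip(*pairs)
print(times)  # ('12:37:27.826', '12:37:27.827', '12:37:27.837')
```

Each resulting tuple can then be passed straight to `np.array(...)` if NumPy arrays are needed.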
Split a numpy array into two numpy arrays
38,571,715
<p>I have a numpy array like this:</p> <pre><code>A = [(datetime.datetime(2016, 6, 8, 12, 37, 27, 826000), 3.0)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 827000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 832000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 833000), nan)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 3.0)
     (datetime.datetime(2016, 6, 8, 12, 37, 27, 837000), 35.0)]
</code></pre> <p>And I want to split it into 2 numpy arrays:</p> <pre><code>B = [datetime.datetime(2016, 6, 8, 12, 37, 27, 826000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 827000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 832000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 833000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 837000),
     datetime.datetime(2016, 6, 8, 12, 37, 27, 837000)]
C = [3.0, nan, nan, nan, 3.0, 35.0]
</code></pre> <p>To give you more details: this numpy array was originally a dictionary, and I converted it into a numpy array with the code below:</p> <pre><code>def convertarray(dictionary):
    names = ['id', 'data']
    formats = ['datetime64[ms]', 'f8']
    dtype = dict(names=names, formats=formats)
    result = np.array(dictionary.items(), dtype)
    return result
</code></pre>
0
2016-07-25T15:24:10Z
38,572,024
<p>You likely want to slice the data. Inserting a <code>:</code> for that dimension will select all elements of that dimension.</p> <pre><code>B = A[:, 0]
C = A[:, 1]
</code></pre>
-1
2016-07-25T15:38:13Z
[ "python", "arrays", "datetime", "numpy" ]
How to separate stock picking in from out in odoo 8
38,571,757
<p>In OpenERP 7, stock_picking was split into two models, stock_picking_in and stock_picking_out; now there is a single model containing both, with a field indicating the type (in or out). I want to keep the normal view for "in" items and a totally custom view for "out" items. Is this possible, and how? Thanks.</p> <p>My picking.py inherits stock.picking and adds some fields. I want <strong>picking_in_view</strong> to use the default <strong>stock.picking</strong> display for the form view and the tree view, and I want to change the display for <strong>picking_out_view</strong>. The problem is that when I change the display in <strong>picking_out_view</strong> it also changes in <strong>picking_in_view</strong>, because it changes the model.</p> <p>And the biggest problem is that I need to change the many2many field with <strong>stock.move</strong> for out items, but if I do so I need to modify the model, and that affects both in and out.</p> <p>Is there a way to do it?</p> <p><strong>move.py</strong></p> <pre><code># -*- coding: utf-8 -*- from openerp import models, fields, api, tools from openerp.exceptions import ValidationError class StockMove(models.Model): """ Ajout de champs dans la ligne de commande, et quelques fonctions telles que unpack """ _inherit = "stock.move" # Le code du produit à afficher product_code = fields.Char(string="Product", store=True, related="product_id.default_code") # Le lien vers la ligne d'achat sale.order.line sale_line_id = fields.Many2one(string="SaleOrderLine", store=True, related="procurement_id.sale_line_id") # Le colis associé à la commande stock_quant_package = fields.Many2one('stock.quant.package', string='Pack') # Sert à savoir si on affiche l'icône rouge pour déballer un colis show_unpack = fields.Boolean(store=False, compute='compute_show_unpack') # Sert à savoir si on affiche l'icône d'impression validée is_printed = fields.Boolean(store=False, compute='compute_printed') # Pour colorer les lignes, condition statut_ok = 
fields.Boolean(default=False, store=False, compute="compute_statut_ok") # Le statut de la commande statut_id = fields.Many2one('sale.statut', string='Statut', default=lambda self: self._default_statut_id()) date_emballage = fields.Datetime("Date d'emballage") # Champs non enregistrés en BD, utilisés pour l'affichage metal = fields.Many2one('product.finition',string="Metal", store=False, related="procurement_id.sale_line_id.metal") bois1 = fields.Many2one('product.finition',string="Bois 1", store=False, related="procurement_id.sale_line_id.bois1") bois2 = fields.Many2one('product.finition',string="Bois 2", store=False, related="procurement_id.sale_line_id.bois2") verre = fields.Many2one('product.finition',string="Verre", store=False, related="procurement_id.sale_line_id.verre") tissu = fields.Many2one('product.finition',string="Tissu", store=False, related="procurement_id.sale_line_id.tissu") patte = fields.Many2one('product.finition',string="Patte", store=False, related="procurement_id.sale_line_id.patte") config = fields.Char(string="Config", store=False, size=64, related="procurement_id.sale_line_id.config") poignee = fields.Many2one('product.finition',string="Poignee", store=False, related="procurement_id.sale_line_id.poignee") # Le prix d'une ligne de commande, calculé move_price = fields.Float(string="Prix", store=False, compute="compute_move_price") # Pour differencier les formulaires is_picking_out = fields.Boolean(store=False, compute="compute_is_picking_out") ... 
</code></pre> <p><strong>picking.py</strong></p> <pre><code># -*- coding: utf-8 -*- from openerp import models, fields, api, tools class StockPicking(models.Model): _inherit = "stock.picking" # Le statut statut_id = fields.Many2one("sale.statut", string="Statut") # Erreur inconnue sur l'inexistence de ce champ; à laisser stock_journal_id = fields.Integer() carrier_id = fields.Many2one("stock.carrier", compute="_carrier_info") num_compte_transport = fields.Char(string="Numéro de compte UPS", compute="_carrier_info") @api.multi def _carrier_info(self): for line in self: line.carrier_id = self.sale_id.carrier_transport line.num_compte_transport = self.sale_id.num_compte_facture </code></pre> <p><strong>picking_in_view.xml</strong></p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;openerp&gt; &lt;data&gt; &lt;!-- On cache un attribut de la liste des bons de livraison --&gt; &lt;record id="stock_picking_tree_view_cr" model="ir.ui.view"&gt; &lt;field name="name"&gt;stock.picking.tree.inherit.cr&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="priority" eval="2"/&gt; &lt;field name="inherit_id" ref="stock.vpicktree"/&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//tree/field[@name='location_dest_id']" position="attributes"&gt; &lt;attribute name="invisible"&gt;1&lt;/attribute&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; &lt;!-- Les boutons de modification de la commande de base par Odoo dans le formulaire --&gt; &lt;!-- Des bons de livraison --&gt; &lt;record id="stock_picking_form_view_cr" model="ir.ui.view"&gt; &lt;field name="name"&gt;stock.picking.form.inherit.cr&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="priority" eval="2"/&gt; &lt;field name="inherit_id" ref="stock.view_picking_form"/&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//form/header/button[@name='action_assign']" position="attributes"&gt; &lt;attribute 
name="invisible"&gt;0&lt;/attribute&gt; &lt;/xpath&gt; &lt;xpath expr="//form/header/button[@name='force_assign']" position="attributes"&gt; &lt;attribute name="invisible"&gt;0&lt;/attribute&gt; &lt;/xpath&gt; &lt;xpath expr="//form/header/button[@name='action_cancel']" position="attributes"&gt; &lt;attribute name="invisible"&gt;0&lt;/attribute&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; &lt;!-- - - - - - - - - - - - - ACTIONS - - - - - - - - - - - - --&gt; &lt;!-- L'action du bouton dans le menu lateral --&gt; &lt;record id="picking_in_action_createch" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Bons de réception&lt;/field&gt; &lt;field name="res_model"&gt;stock.picking&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_mode"&gt;tree,form&lt;/field&gt; &lt;field name="domain"&gt; [('picking_type_id','=',1)] &lt;/field&gt; &lt;/record&gt; &lt;!-- - - - - - - - - - - - - MENUS - - - - - - - - - - - - --&gt; &lt;!-- Le premier bouton dans la barre laterale --&gt; &lt;menuitem id="picking_orders_menu" name="Bons de réception" sequence="0" parent="warehouse_mgt_cr" action="picking_in_action_createch"/&gt; &lt;/data&gt; &lt;/openerp&gt; </code></pre> <p><strong>picking_out_view.xml</strong></p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;openerp&gt; &lt;data&gt; &lt;!-- Liste des colis --&gt; &lt;record id="stock_quant_package_tree_view" model="ir.ui.view"&gt; &lt;field name="name"&gt;stock.quant.package.tree.cr&lt;/field&gt; &lt;field name="model"&gt;stock.quant.package&lt;/field&gt; &lt;field name="priority" eval="2"/&gt; &lt;field name="arch" type="xml"&gt; &lt;tree string="Paquets"&gt; &lt;field name="name"/&gt; &lt;field name="order_name"/&gt; &lt;field name="item"/&gt; &lt;field name="owner_name"/&gt; &lt;field name="create_date"/&gt; &lt;field name="prix" sum="Total Amount"/&gt; &lt;/tree&gt; &lt;/field&gt; &lt;/record&gt; &lt;!-- On cache un attribut de la liste des bons de livraison --&gt; 
&lt;record id="stock_picking_tree_view_cr" model="ir.ui.view"&gt; &lt;field name="name"&gt;stock.picking.tree.inherit.cr&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="priority" eval="2"/&gt; &lt;field name="inherit_id" ref="stock.vpicktree"/&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//tree/field[@name='location_dest_id']" position="attributes"&gt; &lt;attribute name="invisible"&gt;1&lt;/attribute&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; &lt;!-- Les boutons de modification de la commande de base par Odoo dans le formulaire --&gt; &lt;!-- Des bons de livraison --&gt; &lt;record id="stock_picking_form_view_cr" model="ir.ui.view"&gt; &lt;field name="name"&gt;stock.picking.form.inherit.cr&lt;/field&gt; &lt;field name="model"&gt;stock.picking&lt;/field&gt; &lt;field name="priority" eval="2"/&gt; &lt;field name="inherit_id" ref="stock.view_picking_form"/&gt; &lt;field name="arch" type="xml"&gt; &lt;xpath expr="//form/header/button[@name='action_assign']" position="attributes"&gt; &lt;attribute name="invisible"&gt;1&lt;/attribute&gt; &lt;/xpath&gt; &lt;xpath expr="//form/header/button[@name='force_assign']" position="attributes"&gt; &lt;attribute name="invisible"&gt;1&lt;/attribute&gt; &lt;/xpath&gt; &lt;xpath expr="//form/header/button[@name='action_cancel']" position="attributes"&gt; &lt;attribute name="invisible"&gt;1&lt;/attribute&gt; &lt;/xpath&gt; &lt;/field&gt; &lt;/record&gt; &lt;!-- - - - - - - - - - - - - ACTIONS - - - - - - - - - - - - --&gt; &lt;!-- L'action du bouton dans le menu lateral --&gt; &lt;record id="picking_action_createch" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Bons de livraison&lt;/field&gt; &lt;field name="res_model"&gt;stock.picking&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_mode"&gt;tree,form&lt;/field&gt; &lt;field name="domain"&gt; [('picking_type_id','=',2)] &lt;/field&gt; &lt;field 
name="context"&gt;{"search_default_filter_a_emballer":1}&lt;/field&gt; &lt;/record&gt; &lt;!-- Un autre bouton dans la barre laterale. Sert de test actuellement --&gt; &lt;record id="stock_quant_package_action_createch" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Colis&lt;/field&gt; &lt;field name="res_model"&gt;stock.quant.package&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_mode"&gt;tree,form&lt;/field&gt; &lt;/record&gt; &lt;!-- - - - - - - - - - - - - MENUS - - - - - - - - - - - - --&gt; &lt;!-- Le titre dans la barre laterale --&gt; &lt;menuitem id="warehouse_mgt_cr" name="Warehouse Management" sequence="0" parent="stock.menu_stock_root"/&gt; &lt;!-- Le premier bouton dans la barre laterale --&gt; &lt;menuitem id="delivery_orders_menu" name="Bons de livraison" sequence="1" parent="warehouse_mgt_cr" action="picking_action_createch"/&gt; &lt;menuitem id="quant_package_colis_menu" name="Colis" sequence="3" parent="warehouse_mgt_cr" action="stock_quant_package_action_createch"/&gt; &lt;/data&gt; &lt;/openerp&gt; </code></pre>
0
2016-07-25T15:25:58Z
38,582,642
<p>One way to achieve this is to use <code>attrs</code> and hide or show fields based on the picking type. That works if you only want to make minor changes, but for the whole document it's not the best solution.</p> <p>The second way is to create different views for them, but only if you are using different menuitems for opening these objects. On the action attached to the menuitem you can specify form and tree views by id, as in this example:</p> <pre><code> &lt;record id="action_id" model="ir.actions.act_window"&gt; &lt;field name="name"&gt;Action&lt;/field&gt; &lt;field name="res_model"&gt;model.name&lt;/field&gt; &lt;field name="type"&gt;ir.actions.act_window&lt;/field&gt; &lt;field name="view_type"&gt;form&lt;/field&gt; &lt;field name="view_mode"&gt;tree,form&lt;/field&gt; &lt;/record&gt; &lt;record model="ir.actions.act_window.view" id="action_id_tree"&gt; &lt;field name="sequence" eval="5"/&gt; &lt;field name="view_mode"&gt;tree_sent_grievances&lt;/field&gt; &lt;field name="view_id" ref="tree_view_id"/&gt; &lt;field name="act_window_id" ref="action_id"/&gt; &lt;/record&gt; &lt;record model="ir.actions.act_window.view" id="action_id_form"&gt; &lt;field name="sequence" eval="5"/&gt; &lt;field name="view_mode"&gt;form&lt;/field&gt; &lt;field name="view_id" ref="form_view_id"/&gt; &lt;field name="act_window_id" ref="action_id"/&gt; &lt;/record&gt; </code></pre>
1
2016-07-26T06:37:58Z
[ "python", "xml", "openerp", "odoo-8" ]
How can I use "metrics.mutual_info" in scikit's feature.selection
38,571,783
<p>I would like to use scoring functions other than <code>chi2</code> etc. that are not listed on these pages:</p> <p><a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="nofollow">http://scikit-learn.org/stable/modules/feature_selection.html</a></p> <p><a href="http://scikit-learn.org/stable/modules/classes.html" rel="nofollow">http://scikit-learn.org/stable/modules/classes.html</a></p> <p>For example <code>metrics.mutual_info</code> and <code>metrics.balanced_accuracy_score</code>.</p> <p>How can I integrate those into my code?</p> <p>Thanks for the help.</p>
1
2016-07-25T15:26:38Z
39,791,580
<p>The new scikit-learn version 0.18 has added support for mutual-information feature selection, so there is no need to use <code>metrics.mutual_info</code>. You can use the new <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.mutual_info_classif.html" rel="nofollow">feature_selection.mutual_info_classif</a> score function in <code>SelectKBest</code> or <code>SelectPercentile</code> just like you use <code>chi2</code>:</p> <pre><code>X_new = SelectKBest(mutual_info_classif, k=100).fit_transform(X, y)
</code></pre> <p>For more information about the recent changes look at the <a href="http://scikit-learn.org/stable/whats_new.html" rel="nofollow">changelog</a>.</p>
0
2016-09-30T12:29:56Z
[ "python", "machine-learning", "scikit-learn", "sentiment-analysis" ]
Run a command from Windows from Python code
38,571,814
<p>How can I run the following commands from Python 3 on Windows 7?</p> <pre><code>gcc main.cpp -o main.out
./main.out
</code></pre> <p>The purpose is to compile and execute the main.cpp file from Python 3.</p>
0
2016-07-25T15:28:06Z
38,572,035
<p>Look at the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow">subprocess</a> module:</p> <pre><code>import subprocess subprocess.run(["gcc", "main.cpp", "-o", "main.out"]) subprocess.run(["./main.out"]) </code></pre> <p>should work; note that each command-line argument must be its own list element. <code>subprocess</code> has more utilities that will be useful for you.</p>
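<p>As a slightly more defensive sketch (the file names are just the ones from the question), <code>check=True</code> makes a non-zero exit status raise <code>CalledProcessError</code>, so a failed compile does not go unnoticed. The helper is demonstrated with the Python interpreter itself so the snippet runs anywhere:</p>

```python
import subprocess
import sys

def run_checked(cmd):
    # check=True raises CalledProcessError if the command exits non-zero
    result = subprocess.run(cmd, check=True, stdout=subprocess.PIPE,
                            universal_newlines=True)
    return result.stdout

# For the question this would be:
#   run_checked(["gcc", "main.cpp", "-o", "main.out"])
#   run_checked(["./main.out"])
# Portable demonstration using the current Python interpreter:
out = run_checked([sys.executable, "-c", "print('hello')"])
print(out.strip())  # hello
```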
1
2016-07-25T15:38:46Z
[ "python", "c++", "windows" ]
python/django- number comparison (>) does not work
38,571,881
<p>This is really weird. I have a django program and I am calling this py file from views. The file will read from an excel and find the correct number index for a given number(target).</p> <p>It keeps telling me "Target Number out of Range" which it should not, so I print out the numbers. </p> <pre><code>for target in target_list: print target, df[pin_num][end], df[pin_num][0] if target &gt; df[pin_num][end] or target &lt; df[pin_num][0]: print target, df[pin_num][end], df[pin_num][0] print target &gt; df[pin_num][end] print target &lt; df[pin_num][0] return "Target Number out of Range" </code></pre> <p>The Console(using pycharm) show:</p> <blockquote> <p>23925.85 24472.9 23876.0</p> <p>23925.85 24472.9 23876.0</p> <p>True</p> <p>False</p> <p>Target Number out of Range</p> </blockquote> <p>How could this ever happen? 23925.85 is obviously smaller than 24472.9 ... And I have print out everything I need to compare. </p>
1
2016-07-25T15:31:41Z
38,572,273
<p>So, as @Rawing mentioned in the comments, I added</p> <p><code>print type(target), type(df[pin_num][end]), type(df[pin_num][0]) </code> </p> <p>and it showed that <code>target</code> is a string. So I added:</p> <pre><code> target = float(target) </code></pre> <p>This solved my problem.</p>
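<p>To illustrate why the conversion matters: in Python 2 a <code>str</code> always compares greater than any number (which produces exactly the confusing <code>True</code>/<code>False</code> pair above), while Python 3 raises a <code>TypeError</code> instead. Converting once, with a guard for malformed input, avoids both problems. A small sketch using the values from the question:</p>

```python
def in_range(target, low, high):
    """Return True if target (possibly a string) lies within [low, high]."""
    try:
        value = float(target)
    except (TypeError, ValueError):
        return False  # not a number at all
    return low <= value <= high

# the values printed in the question
print(in_range("23925.85", 23876.0, 24472.9))      # True
print(in_range("not a number", 23876.0, 24472.9))  # False
```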
0
2016-07-25T15:49:32Z
[ "python", "django" ]
How to add specific axes to matplotlib subplot?
38,571,905
<p>I am trying to make a matrix plot with matplotlib.</p> <p>The individual plots are made with a specific module <code>windrose</code> which subclasses <code>PolarAxes</code>. However there does not seem to be any projection defined in the module to be called as subplot kwargs. The standard <code>polar</code> projection does not work since some of the subclass arguments are missing.</p> <p>I have tested several approaches without success (even with seaborn map considering this post: <a href="http://stackoverflow.com/a/25702476/3416205">http://stackoverflow.com/a/25702476/3416205</a>). Hereunder is the closest I have tried. Is there any way to do what I want without properly creating a new matplotlib projection associated with the specific <code>WindroseAxes</code>?</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec from windrose import WindroseAxes df = pd.read_csv('https://raw.githubusercontent.com/AntoineGautier/Data/master/tmp.csv') fig = plt.figure() gs = gridspec.GridSpec(4, 2) def wind_plot(x, y, title=None, axes=None, fig=None): ax = WindroseAxes.from_ax() ax.set_position(axes.get_position(fig)) ax.bar(x, y, normed=True, opening=0.8, edgecolor='white', bins=[0, 2.5, 5, 7.5, 10]) ax.set_title(title) for (id_s, s) in enumerate(pd.unique(df.saison)): for (id_jn, jn) in enumerate(pd.unique(df.jn)): tmp = df.query('saison==@s &amp; jn==@jn') _ = plt.subplot(gs[id_s, id_jn], polar=True) wind_plot(tmp.wd, tmp.ws, title=s + ' - ' + jn, axes=_, fig=fig) plt.show() </code></pre>
0
2016-07-25T15:32:32Z
38,586,528
<p>The github page from the <code>windrose</code> module actually provides an example of subplots: <a href="https://github.com/scls19fr/windrose/blob/master/samples/example_subplots.py" rel="nofollow">https://github.com/scls19fr/windrose/blob/master/samples/example_subplots.py</a>.</p> <p>The following works.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec from windrose import WindroseAxes df = pd.read_csv('https://raw.githubusercontent.com/AntoineGautier/Data/master/tmp.csv') fig = plt.figure(figsize=(20, 10)) gs = gridspec.GridSpec(2, 4) gp = gs.get_grid_positions(fig) # [bottom, top, left, right] def wind_plot(x, y, title, fig, rect): ax = WindroseAxes(fig, rect) fig.add_axes(ax) ax.bar(x, y, normed=True, opening=0.8, edgecolor='white', bins=[0, 2.5, 5, 7.5, 10]) ax.set_title(title, position=(0.5, 1.1)) for (id_s, s) in enumerate(pd.unique(df.saison)): for (id_jn, jn) in enumerate(pd.unique(df.jn)): tmp = df.query('saison==@s &amp; jn==@jn') rect = [gp[2][id_s], gp[0][id_jn], gp[3][id_s]-gp[2][id_s], gp[1][id_jn]-gp[0][id_jn]] # [left, bottom, width, height] wind_plot(tmp.wd, tmp.ws, s + ' | ' + jn, fig, rect) plt.show() </code></pre>
0
2016-07-26T09:49:45Z
[ "python", "matplotlib", "subplot" ]
django rest framework: order of decorators, auth classes, dispatch to be called
38,572,049
<p>Very confused about the order of decorators, auth classes, dispatch to be called in djangorestframework. It seems that it is a little different from my django framework knowledge.</p> <p>Some codes:</p> <pre><code>#operation_logger: customized decorator class FileView(APIView): parser_classes = (MultiPartParser,)#A authentication_classes = (BasicAuthentication,)#B @permission_classes((IsAuthenticated,))#C @method_decorator(csrf_exempt)#D @method_decorator(operation_logger)#E def dispatch(self, request, *args, **kwargs):#F return super(FileView, self).dispatch(request, *args, **kwargs) @method_decorator(operation_logger)#G def post(self, request):#H print "xxxxpost" </code></pre> <p><strong>What is the order of (A),B,C,D,E,F,G,H to be called when handling requests?</strong> It seems that B is called after F but before G and H?</p> <p>By the way, at beginning, my project was traditional django project. I know that request should go through all the middlewares. Now, I added a new app, which hosts APIs by DRF. <strong>I am not sure whether my request to APIs will go through all the middlewares or not?</strong></p> <p>Thanks</p>
0
2016-07-25T15:39:13Z
38,573,691
<p>The <em>call</em> order is as you specified:</p> <ol> <li><code>@method_decorator(csrf_exempt)</code></li> <li><code>@method_decorator(operation_logger)</code> (#E)</li> <li><code>dispatch()</code> calls <code>initial()</code> which calls <code>check_permissions()</code> which evaluates <code>permission_classes</code> (#B).</li> <li><code>@method_decorator(operation_logger)</code> (#G)</li> <li><code>post()</code></li> </ol> <p>One thing won't work, however:</p> <p><code>@permission_classes((IsAuthenticated,))</code> on the method <a href="https://github.com/tomchristie/django-rest-framework/blob/0f61c9ec290ccf41bbb3c28ba91785a7430676c3/rest_framework/decorators.py#L106" rel="nofollow">adds</a> a <code>permission_classes</code> field to the callable (whatever that is) returned by (#E). This doesn't work with class-based views and is thus essentially a no-op.</p> <p>Other parts have no fixed order, but are used on demand:</p> <p>The authenticator is called whenever needed, i.e. when <a href="https://github.com/tomchristie/django-rest-framework/blob/0f61c9ec290ccf41bbb3c28ba91785a7430676c3/rest_framework/request.py#L189" rel="nofollow">user</a> or <a href="https://github.com/tomchristie/django-rest-framework/blob/0f61c9ec290ccf41bbb3c28ba91785a7430676c3/rest_framework/request.py#L212" rel="nofollow">authentication</a> information is accessed on the request object.</p> <p>Same thing for <code>parser_classes</code>. These get passed to the request object and used lazily when request information is accessed, e.g. <a href="https://github.com/tomchristie/django-rest-framework/blob/0f61c9ec290ccf41bbb3c28ba91785a7430676c3/rest_framework/request.py#L183" rel="nofollow"><code>request.data</code></a>.</p>
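<p>The relative order of the two <code>method_decorator</code> wrappers themselves is just standard Python semantics: decorators are applied bottom-up, so the topmost one ends up outermost and runs first on each call. A minimal stdlib sketch (no Django involved; the decorator names only mirror the labels in the question):</p>

```python
calls = []

def make_logger(name):
    def decorator(func):
        def wrapper(*args, **kwargs):
            calls.append(name)           # record who ran, and in which order
            return func(*args, **kwargs)
        return wrapper
    return decorator

@make_logger("csrf_exempt")       # applied last, runs first (like #D)
@make_logger("operation_logger")  # applied first, runs second (like #E)
def dispatch():
    calls.append("dispatch")

dispatch()
print(calls)  # ['csrf_exempt', 'operation_logger', 'dispatch']
```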
0
2016-07-25T17:07:15Z
[ "python", "django", "django-rest-framework" ]
deploy flask application on ubuntu/apache2: virtual host configuration
38,572,059
<p>I'm trying to deploy a python/flask application on an apache2 installation on ubuntu (14.04), following the instructions at the <a href="https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps" rel="nofollow">link</a></p> <p>The application seems to work and if I point the browser to <code>http://mywebsite.com/</code> I correctly see the message returned by the Flask application.</p> <p>My problem is, what if I want to install a second site as a different virtual host on the same machine (say a non-python application)? What I would like is that the virtual host is mapped to an URL like <code>http://mywebsite.com/FlaskApp</code>, while having the possibility to define another virtual host at <code>http://mywebsite.com/MyOtherWebApp</code></p> <p>This is the FlaskApp.conf file as per instructions on the mentioned article:</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName mywebsite.com ServerAdmin admin@mywebsite.com WSGIScriptAlias / /var/www/FlaskApp/flaskapp.wsgi &lt;Directory /var/www/FlaskApp/FlaskApp/&gt; Order allow,deny Allow from all &lt;/Directory&gt; Alias /static /var/www/FlaskApp/FlaskApp/static &lt;Directory /var/www/FlaskApp/FlaskApp/static/&gt; Order allow,deny Allow from all &lt;/Directory&gt; ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined &lt;/VirtualHost&gt; </code></pre> <p>and here is how the <code>/var/www</code> folder is structured after I installed the python app</p> <pre><code>/var/www +-- FlaskApp ¦   +-- FlaskApp ¦   ¦   +-- flaskenv ¦   ¦   +-- __init__.py ¦   ¦   +-- __init__.pyc ¦   ¦   +-- static ¦   ¦   +-- templates ¦   +-- flaskapp.wsgi ¦ +-- MyOtherWebApp +-- ... 
</code></pre> <p>Some notes with more details:</p> <ol> <li>I don't have the possibility to use different domain names or diffent ports for the different VirtualHost</li> <li>I found <a href="http://stackoverflow.com/questions/7660070/apache-multiple-virtual-hosts-on-the-same-same-ipdiffrent-urls">this</a> thread suggesting to use the directive <code>ServerAlias</code> as shown below to solve a similar problem, but if I do this and go to <code>http://mywebsite.com/</code> I see a directory listing of the FlaskApp folder instead of the results of the flask service invocation, as in the screenshot below:</li> </ol> <p>here the changed FlaskApp.conf:</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName mywebsite.com/FlaskApp ServerAlias mywebsite.com/FlaskApp . . . </code></pre> <p>screenshot:</p> <p><a href="http://i.stack.imgur.com/8xraz.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/8xraz.jpg" alt="enter image description here"></a></p>
0
2016-07-25T15:39:46Z
38,572,394
<p>I think you missed that URL in your controller. Perhaps you can add an index.html in the FlaskApp folder to redirect to the correct URL (or the other way around: if you leave <code>ServerName mywebsite.com/FlaskApp</code>, put an index file at the root to redirect to /FlaskApp). If the index.html "hack" doesn't suit your needs, you may add another virtual host to redirect from / to /FlaskApp/</p>
0
2016-07-25T15:55:57Z
[ "python", "apache", "ubuntu", "flask" ]
Python pandas - remove groups based on NaN count threshold
38,572,079
<p>I have a dataset based on different weather stations,</p> <pre><code>stationID | Time | Temperature | ... ----------+------+-------------+------- 123 | 1 | 30 | 123 | 2 | 31 | 202 | 1 | 24 | 202 | 2 | 24.3 | 202 | 3 | NaN | ... </code></pre> <p>And I would like to remove 'stationID' groups, which have more than a certain number of NaNs. For instance, if I type:</p> <pre><code>**&gt;&gt;&gt; df.groupby('stationID')** </code></pre> <p>then, I would like to drop groups that have (at least) a certain number of NaNs (say 30) within a group. As I understand it, I cannot use dropna(thresh=10) with groupby:</p> <pre><code>**&gt;&gt;&gt; df2.groupby('station').dropna(thresh=30)** *AttributeError: Cannot access callable attribute 'dropna' of 'DataFrameGroupBy' objects...* </code></pre> <p>So, what would be the best way to do that with Pandas?</p>
1
2016-07-25T15:40:34Z
38,572,196
<p>IIUC you can do <code>df2.loc[df2.groupby('station')['Temperature'].filter(lambda x: len(x[pd.isnull(x)] ) &lt; 30).index]</code></p> <p>Example:</p> <pre><code>In [59]: df = pd.DataFrame({'id':[0,0,0,1,1,1,2,2,2,2], 'val':[1,1,np.nan,1,np.nan,np.nan, 1,1,1,1]}) df Out[59]: id val 0 0 1.0 1 0 1.0 2 0 NaN 3 1 1.0 4 1 NaN 5 1 NaN 6 2 1.0 7 2 1.0 8 2 1.0 9 2 1.0 In [64]: df.loc[df.groupby('id')['val'].filter(lambda x: len(x[pd.isnull(x)] ) &lt; 2).index] Out[64]: id val 0 0 1.0 1 0 1.0 2 0 NaN 6 2 1.0 7 2 1.0 8 2 1.0 9 2 1.0 </code></pre> <p>So this will filter out the groups that have more than 1 nan values</p>
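<p>An equivalent, arguably more direct, formulation builds an aligned boolean mask with <code>transform</code> instead of going through <code>filter</code> and <code>.index</code> (a sketch on the same toy frame; untested against the real station data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [0, 0, 0, 1, 1, 1, 2, 2, 2, 2],
                   'val': [1, 1, np.nan, 1, np.nan, np.nan, 1, 1, 1, 1]})

# transform broadcasts each group's NaN count back onto the original rows,
# so the comparison yields a mask with the same length as df
mask = df.groupby('id')['val'].transform(lambda s: s.isnull().sum()) < 2
result = df[mask]
print(result['id'].unique())  # groups 0 and 2 survive
```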
1
2016-07-25T15:45:34Z
[ "python", "pandas" ]
Python pandas - remove groups based on NaN count threshold
38,572,079
<p>I have a dataset based on different weather stations,</p> <pre><code>stationID | Time | Temperature | ... ----------+------+-------------+------- 123 | 1 | 30 | 123 | 2 | 31 | 202 | 1 | 24 | 202 | 2 | 24.3 | 202 | 3 | NaN | ... </code></pre> <p>And I would like to remove 'stationID' groups, which have more than a certain number of NaNs. For instance, if I type:</p> <pre><code>**&gt;&gt;&gt; df.groupby('stationID')** </code></pre> <p>then, I would like to drop groups that have (at least) a certain number of NaNs (say 30) within a group. As I understand it, I cannot use dropna(thresh=10) with groupby:</p> <pre><code>**&gt;&gt;&gt; df2.groupby('station').dropna(thresh=30)** *AttributeError: Cannot access callable attribute 'dropna' of 'DataFrameGroupBy' objects...* </code></pre> <p>So, what would be the best way to do that with Pandas?</p>
1
2016-07-25T15:40:34Z
38,572,311
<p>You can create a column that gives the number of null values per station_id, and then use <code>loc</code> to select the relevant data for further processing.</p> <pre><code>df['station_id_null_count'] = \ df.groupby('stationID').Temperature.transform(lambda group: group.isnull().sum()) df.loc[df.station_id_null_count &lt;= 30, :] # Keep stations with at most 30 NaNs </code></pre>
0
2016-07-25T15:52:01Z
[ "python", "pandas" ]
Python pandas - remove groups based on NaN count threshold
38,572,079
<p>I have a dataset based on different weather stations,</p> <pre><code>stationID | Time | Temperature | ... ----------+------+-------------+------- 123 | 1 | 30 | 123 | 2 | 31 | 202 | 1 | 24 | 202 | 2 | 24.3 | 202 | 3 | NaN | ... </code></pre> <p>And I would like to remove 'stationID' groups, which have more than a certain number of NaNs. For instance, if I type:</p> <pre><code>**&gt;&gt;&gt; df.groupby('stationID')** </code></pre> <p>then, I would like to drop groups that have (at least) a certain number of NaNs (say 30) within a group. As I understand it, I cannot use dropna(thresh=10) with groupby:</p> <pre><code>**&gt;&gt;&gt; df2.groupby('station').dropna(thresh=30)** *AttributeError: Cannot access callable attribute 'dropna' of 'DataFrameGroupBy' objects...* </code></pre> <p>So, what would be the best way to do that with Pandas?</p>
1
2016-07-25T15:40:34Z
38,573,238
<p>Using @EdChum's setup: since you don't mention your final output, adding this.</p> <pre><code> vals = df.groupby(['id'])['val'].apply(lambda x: (np.size(x) - x.count()) &lt; 2) vals[vals] id 0 True 2 True Name: val, dtype: bool </code></pre>
0
2016-07-25T16:39:52Z
[ "python", "pandas" ]
Redefining indexing in N-dimensional arrays
38,572,189
<p>So I have a N-dimensional array of values, let's call it A. Now, I can plot this in a contour map, with coordinate axes X and Y, using</p> <pre><code>plt.contourf(X,Y,A) </code></pre> <p>Now, I have to carry out a mapping of these points to another plane, so, basically another set of coordinates. Let the transformation be</p> <pre><code>X - X1 Y - X1 </code></pre> <p>Now, each point with magnitude "I" in matrix A at (X,Y) is at (X- X1, Y - Y1). I can plot this using</p> <pre><code>plt.contourf(X-X1, Y-Y1,A) </code></pre> <p>My question is, how do I index the array A such that I obtain an array B where the indexing corresponds to X-X1 and Y-Y1 instead of X and Y so that I can plot it directly using the following </p> <pre><code>plt.contourf(X,Y,B) </code></pre> <p>Thanks!</p>
2
2016-07-25T15:45:07Z
38,691,159
<p>A friend helped me out with this. Seems to work perfectly.</p> <pre><code> for ii in xrange(A.shape[1]): for jj in xrange(A.shape[0]): H = ii - alpha_x[jj,ii] J = jj - alpha_y[jj,ii] if H &gt;= 0 and H &lt; A.shape[1] and J &gt;= 0 and J &lt; A.shape[0]: B[J,H] = B[J,H] + A[jj,ii] </code></pre> <p>Note the strict upper bounds: an index equal to the shape would be out of range. The alpha_x and alpha_y are just the 2 matrices which contain the values of X1 and Y1.</p>
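<p>For what it's worth, the same accumulation can be vectorised, assuming the offsets in alpha_x/alpha_y are (convertible to) integers. This is a hedged rewrite, not the original poster's code; <code>np.add.at</code> is used because plain fancy-index assignment would silently drop contributions when two source cells map to the same target:</p>

```python
import numpy as np

def accumulate_shifted(A, alpha_x, alpha_y):
    """Vectorised equivalent of the double loop: B[j - ay, i - ax] += A[j, i]."""
    ny, nx = A.shape
    jj, ii = np.indices(A.shape)        # row and column index grids
    H = ii - alpha_x.astype(int)
    J = jj - alpha_y.astype(int)
    valid = (H >= 0) & (H < nx) & (J >= 0) & (J < ny)
    B = np.zeros_like(A)
    # np.add.at is unbuffered, so repeated (J, H) targets accumulate correctly
    np.add.at(B, (J[valid], H[valid]), A[valid])
    return B
```

<p>On a small array, shifting everything one column to the left reproduces what the loop does for the in-range cells.</p>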
0
2016-08-01T04:43:48Z
[ "python", "arrays", "numpy", "matplotlib", "indexing" ]
Flask 404 Page Not Rendering Template
38,572,231
<p>I'm trying to figure out why my 404 page template will not render correctly. I am able to get it to return text, but not a template. Here is my error handler function that doesn't work:</p> <pre><code>@app.errorhandler(404) def page_not_found(error): return render_template('error.html'), 404 </code></pre> <p>It works if I do something like this instead:</p> <pre><code>@app.errorhandler(404) def page_not_found(error): return '&lt;h2&gt;Page Not Found.&lt;/h2&gt;&lt;a href="/"&gt;Click here&lt;/a&gt; to return home.', 404 </code></pre> <p>I'm using a blueprint to route the rest of my URLs. Here is more of my urls.py:</p> <pre><code>main = Blueprint('main', __name__, url_prefix='/language/&lt;lang_code&gt;/') app.config.from_object(__name__) babel = Babel(app) def render(template_name, data): template_data = { } template_data.update(data) return render_template(template_name, **template_data) @app.url_defaults def set_language_code(endpoint, values): if 'lang_code' in values or not session['lang_code']: return if app.url_map.is_endpoint_expecting(endpoint, 'lang_code'): values['lang_code'] = session['lang_code'] @app.url_value_preprocessor def get_lang_code(endpoint, values): if values is not None: session['lang_code'] = values.pop('lang_code', None) @app.before_request def ensure_lang_support(): lang_code = session['lang_code'] if lang_code and lang_code not in app.config['SUPPORTED_LANGUAGES'].keys(): return abort(404) @babel.localeselector def get_locale(): if session.get('lang_code') is None: session['lang_code']=request.accept_languages.best_match(app.config['SUPPORTED_LANGUAGES'].keys()) return session['lang_code'] @app.route('/') def root(): return redirect(url_for('main.index_en', lang_code='en')) @main.route('accueil', endpoint="index_fr") @main.route('home', endpoint="index_en") def index(): return render('index.html', {}) app.register_blueprint(main) </code></pre> <p>Here is the error that I'm getting:</p> <pre><code>File 
"lib/python2.7/site-packages/flask/helpers.py", line 296, in url_for appctx.app.inject_url_defaults(endpoint, values) File "lib/python2.7/site-packages/flask/app.py", line 1623, in inject_url_defaults func(endpoint, values) File "app/urls.py", line 35, in set_language_code if app.url_map.is_endpoint_expecting(endpoint, 'lang_code'): File "lib/python2.7/site-packages/werkzeug/routing.py", line 1173, in is_endpoint_expecting for rule in self._rules_by_endpoint[endpoint]: KeyError: u'None' </code></pre> <p>Any idea why this may be happening? </p>
0
2016-07-25T15:47:17Z
38,574,773
<p>I figured it out! The problem was not with the way I had routed my URLs, but how the language toggle was working on my site. </p> <p>I had been using this:</p> <pre><code>{% if session['lang_code']=='en' %} {% set new_lang_code='fr' %} {% else %} {% set new_lang_code='en' %} {% endif %} &lt;li&gt;&lt;a href="{{ url_for(request.endpoint|replace("_"+session['lang_code'], "_"+new_lang_code))|replace("/"+session['lang_code']+"/", "/"+new_lang_code+"/") }}"&gt;{{ _('Fr') }}&lt;/a&gt;&lt;/li&gt; </code></pre> <p>Because the errorhandler function didn't have an endpoint, it was throwing an error. I was able to get it working by adding an if statement around the toggle like so:</p> <pre><code>{% if request.endpoint != None %} &lt;li&gt;&lt;a href="{{ url_for(request.endpoint|replace("_"+session['lang_code'], "_"+new_lang_code))|replace("/"+session['lang_code']+"/", "/"+new_lang_code+"/") }}"&gt;{{ _('Fr') }}&lt;/a&gt;&lt;/li&gt; {% endif %} </code></pre>
0
2016-07-25T18:14:31Z
[ "python", "flask", "url-routing" ]
ImportError: No module named numpy - Google Cloud Dataproc when using Jupyter Notebook
38,572,287
<p>When starting Jupyter Notebook on Google Dataproc, importing modules fails. I have tried to install the modules using different commands. Some examples:</p> <pre><code>import os os.system("sudo apt-get install python-numpy") os.system("sudo pip install numpy") #after having installed pip os.system("sudo pip install python-numpy") #after having installed pip import numpy </code></pre> <p>None of the above examples work and return an import error:</p> <p><a href="http://i.stack.imgur.com/qtJEC.png" rel="nofollow">enter image description here</a></p> <p>When using the command line I am able to install modules, but still the import error remains. I guess I am installing modules in the wrong location.</p> <p>Any thoughts?</p>
0
2016-07-25T15:50:46Z
38,572,627
<p>Did you try: pip install ipython[numpy]</p>
0
2016-07-25T16:07:39Z
[ "python", "importerror", "jupyter-notebook", "google-cloud-dataproc" ]
ImportError: No module named numpy - Google Cloud Dataproc when using Jupyter Notebook
38,572,287
<p>When starting Jupyter Notebook on Google Dataproc, importing modules fails. I have tried to install the modules using different commands. Some examples:</p> <pre><code>import os os.system("sudo apt-get install python-numpy") os.system("sudo pip install numpy") #after having installed pip os.system("sudo pip install python-numpy") #after having installed pip import numpy </code></pre> <p>None of the above examples work and return an import error:</p> <p><a href="http://i.stack.imgur.com/qtJEC.png" rel="nofollow">enter image description here</a></p> <p>When using the command line I am able to install modules, but still the import error remains. I guess I am installing modules in the wrong location.</p> <p>Any thoughts?</p>
0
2016-07-25T15:50:46Z
38,589,114
<p>I found a solution.</p> <pre><code>import os import sys sys.path.append('/usr/lib/python2.7/dist-packages') os.system("sudo apt-get install python-pandas -y") os.system("sudo apt-get install python-numpy -y") os.system("sudo apt-get install python-scipy -y") os.system("sudo apt-get install python-sklearn -y") import pandas import numpy import scipy import sklearn </code></pre> <p>If anyone has a more elegant solution, please let me know.</p>
0
2016-07-26T11:49:32Z
[ "python", "importerror", "jupyter-notebook", "google-cloud-dataproc" ]
ImportError: No module named numpy - Google Cloud Dataproc when using Jupyter Notebook
38,572,287
<p>When starting Jupyter Notebook on Google Dataproc, importing modules fails. I have tried to install the modules using different commands. Some examples:</p> <pre><code>import os os.system("sudo apt-get install python-numpy") os.system("sudo pip install numpy") #after having installed pip os.system("sudo pip install python-numpy") #after having installed pip import numpy </code></pre> <p>None of the above examples work and return an import error:</p> <p><a href="http://i.stack.imgur.com/qtJEC.png" rel="nofollow">enter image description here</a></p> <p>When using the command line I am able to install modules, but still the import error remains. I guess I am installing modules in the wrong location.</p> <p>Any thoughts?</p>
0
2016-07-25T15:50:46Z
38,625,451
<p>Try <code>conda install numpy</code> as Google's jupyter init script is using conda. I personally prefer to have my own init scripts so I can have more control.</p>
0
2016-07-28T00:53:17Z
[ "python", "importerror", "jupyter-notebook", "google-cloud-dataproc" ]
ImportError: No module named numpy - Google Cloud Dataproc when using Jupyter Notebook
38,572,287
<p>When starting Jupyter Notebook on Google Dataproc, importing modules fails. I have tried to install the modules using different commands. Some examples:</p> <pre><code>import os os.system("sudo apt-get install python-numpy") os.system("sudo pip install numpy") #after having installed pip os.system("sudo pip install python-numpy") #after having installed pip import numpy </code></pre> <p>None of the above examples work and return an import error:</p> <p><a href="http://i.stack.imgur.com/qtJEC.png" rel="nofollow">enter image description here</a></p> <p>When using the command line I am able to install modules, but still the import error remains. I guess I am installing modules in the wrong location.</p> <p>Any thoughts?</p>
0
2016-07-25T15:50:46Z
39,021,663
<pre><code> #!/usr/bin/env bash set -e ROLE=$(curl -f -s -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) INIT_ACTIONS_REPO=$(curl -f -s -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/INIT_ACTIONS_REPO || true) INIT_ACTIONS_REPO="${INIT_ACTIONS_REPO:-https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git}" INIT_ACTIONS_BRANCH=$(curl -f -s -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/INIT_ACTIONS_BRANCH || true) INIT_ACTIONS_BRANCH="${INIT_ACTIONS_BRANCH:-master}" DATAPROC_BUCKET=$(curl -f -s -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-bucket) echo "Cloning fresh dataproc-initialization-actions from repo $INIT_ACTIONS_REPO and branch $INIT_ACTIONS_BRANCH..." git clone -b "$INIT_ACTIONS_BRANCH" --single-branch $INIT_ACTIONS_REPO # Ensure we have conda installed. ./dataproc-initialization-actions/conda/bootstrap-conda.sh #./dataproc-initialization-actions/conda/install-conda-env.sh source /etc/profile.d/conda_config.sh if [[ "${ROLE}" == 'Master' ]]; then conda install jupyter if gsutil -q stat "gs://$DATAPROC_BUCKET/notebooks/**"; then echo "Pulling notebooks directory to cluster master node..." gsutil -m cp -r gs://$DATAPROC_BUCKET/notebooks /root/ fi ./dataproc-initialization-actions/jupyter/internal/setup-jupyter-kernel.sh ./dataproc-initialization-actions/jupyter/internal/launch-jupyter-kernel.sh fi if gsutil -q stat "gs://$DATAPROC_BUCKET/scripts/**"; then echo "Pulling scripts directory to cluster master and worker nodes..." gsutil -m cp -r gs://$DATAPROC_BUCKET/scripts/* /usr/local/bin/miniconda/lib/python2.7 fi if gsutil -q stat "gs://$DATAPROC_BUCKET/modules/**"; then echo "Pulling modules directory to cluster master and worker nodes..." gsutil -m cp -r gs://$DATAPROC_BUCKET/modules/* /usr/local/bin/miniconda/lib/python2.7 fi echo "Completed installing Jupyter!" 
# Install Jupyter extensions (if desired) # TODO: document this in readme if [[ ! -v INSTALL_JUPYTER_EXT ]] then INSTALL_JUPYTER_EXT=false fi if [[ "$INSTALL_JUPYTER_EXT" = true ]] then echo "Installing Jupyter Notebook extensions..." ./dataproc-initialization-actions/jupyter/internal/bootstrap-jupyter-ext.sh echo "Jupyter Notebook extensions installed!" fi </code></pre>
0
2016-08-18T15:06:39Z
[ "python", "importerror", "jupyter-notebook", "google-cloud-dataproc" ]
python functools.partial with fixed array valued argument
38,572,506
<p>I'm trying to vectorize the following function over the argument <code>tiling</code>:</p> <pre><code>def find_tile(x,tiling): """ Calculates the index of the closest element of 'tiling' to 'x'. tiling: array of grid positions x: variable of the same type as the elements of tiling """ return np.argmin(np.linalg.norm(tiling - x, axis=1)) </code></pre> <p>For instance, the non-vectorized version of the function can accept the following arguments</p> <pre><code>tiling = np.array( [[i,j] for i in xrange(3) for j in xrange(3)] ) x = np.array([1.2, 2.7]) </code></pre> <p>I'm interested in finding the fastest possible vectorisation, such that <code>x</code> remains a single vector and I can pass a list of arguments <code>tiling</code></p> <p>So I tried defining multiple tilings using a generator:</p> <pre><code>tilings = (tiling + np.random.uniform(0,1,2) for j in xrange(3)) </code></pre> <p>and then using <code>map</code> and <code>functools.partial</code>:</p> <pre><code>map(functools.partial(find_tile, x=x), tilings) </code></pre> <p>Apparently, there's a problem with <code>x</code> being an array or something, since I'm getting the error:</p> <pre><code>Traceback (most recent call last): File "main.py", line 43, in &lt;module&gt; inds = map(functools.partial(find_tile, x=x), ts) TypeError: find_tile() got multiple values for keyword argument 'x' </code></pre> <p>Can someone explain to me how to get around it?</p> <p>Also, is there an alternative and faster way to do this (possibly re-writing the function <code>find_tile</code>?)</p>
0
2016-07-25T16:01:05Z
38,572,592
<p>You are passing in <code>x</code> as a <em>keyword argument</em>. <code>map()</code> passes in each element from <code>tilings</code> in as a <em>positional argument</em>. However, since your first positional argument is <code>x</code>, that clashes with the keyword argument. Using a name as a keyword argument does not prevent that same name being filled with a positional argument.</p> <p>Don't use a keyword argument for <code>x</code>; just pass it in as a positional argument to <code>partial()</code>:</p> <pre><code>map(functools.partial(find_tile, x), tilings) </code></pre> <p>Now each element from <code>tilings</code> is passed in as the second positional argument and the call works.</p>
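<p>A minimal reproduction of the clash, stripped of the numpy parts (the function here is a stand-in, not the original <code>find_tile</code>):</p>

```python
from functools import partial

def find_tile(x, tiling):
    return (x, tiling)

# keyword-bound x: map() still fills the FIRST positional slot, i.e. x
broken = partial(find_tile, x=1)
try:
    broken("some tiling")
except TypeError as exc:
    print(exc)  # e.g. "find_tile() got multiple values for argument 'x'"

# positional-bound x: each mapped element lands in the second slot, tiling
fixed = partial(find_tile, 1)
print(list(map(fixed, ["t0", "t1"])))  # [(1, 't0'), (1, 't1')]
```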
1
2016-07-25T16:05:46Z
[ "python", "function", "vectorization" ]
pandas plot time-series with minimized gaps
38,572,534
<p>I recently started to explore into the depths of pandas and would like to visualize some time-series data which contains gaps, some of them rather large. an example <code>mydf</code>:</p> <pre><code> timestamp val 0 2016-07-25 00:00:00 0.740442 1 2016-07-25 01:00:00 0.842911 2 2016-07-25 02:00:00 -0.873992 3 2016-07-25 07:00:00 -0.474993 4 2016-07-25 08:00:00 -0.983963 5 2016-07-25 09:00:00 0.597011 6 2016-07-25 10:00:00 -2.043023 7 2016-07-25 12:00:00 0.304668 8 2016-07-25 13:00:00 1.185997 9 2016-07-25 14:00:00 0.920850 10 2016-07-25 15:00:00 0.201423 11 2016-07-25 16:00:00 0.842970 12 2016-07-25 21:00:00 1.061207 13 2016-07-25 22:00:00 0.232180 14 2016-07-25 23:00:00 0.453964 </code></pre> <p>now i could plot my DataFrame through <code>df1.plot(x='timestamp').get_figure().show()</code> and data along the x-axis would be interpolated (appearing as one line): <a href="http://i.stack.imgur.com/nby94.png" rel="nofollow"><img src="http://i.stack.imgur.com/nby94.png" alt="plot0"></a></p> <p>what i would like to have instead is:</p> <ul> <li>visible gaps between sections with data</li> <li>a consistent gap-width for differing gaps-legths</li> <li>perhaps some form of marker in the axis which helps to clarify the fact that some jumps in time are performed.</li> </ul> <p>researching in this matter i've come across</p> <ul> <li><a href="https://stackoverflow.com/questions/35085830/python-pandas-plot-time-series-with-gap">python-pandas-plot-time-series-with-gap</a> </li> <li><a href="https://stackoverflow.com/questions/27266987/python-matplotlib-avoid-plotting-gaps">python-matplotlib-avoid-plotting-gaps</a></li> </ul> <p>which generally come close to what i'm after but the former approach would yield in simply leaving the gaps out of the plotted figure and the latter in large gaps that i would like to avoid (think of gaps that may even span a few days).</p> <p>as the second approach may be closer i tried to use my timestamp-column as an index through: </p> 
<pre><code>mydf2 = pd.DataFrame(data=list(mydf['val']), index=mydf[0]) </code></pre> <p>which allows me to fill the gaps with <code>NaN</code> through reindexing <em>(wondering if there is a more simple solution to achive this)</em>:</p> <pre><code>mydf3 = mydf2.reindex(pd.date_range('25/7/2016', periods=24, freq='H')) </code></pre> <p>leading to:</p> <pre><code> val 2016-07-25 00:00:00 0.740442 2016-07-25 01:00:00 0.842911 2016-07-25 02:00:00 -0.873992 2016-07-25 03:00:00 NaN 2016-07-25 04:00:00 NaN 2016-07-25 05:00:00 NaN 2016-07-25 06:00:00 NaN 2016-07-25 07:00:00 -0.474993 2016-07-25 08:00:00 -0.983963 2016-07-25 09:00:00 0.597011 2016-07-25 10:00:00 -2.043023 2016-07-25 11:00:00 NaN 2016-07-25 12:00:00 0.304668 2016-07-25 13:00:00 1.185997 2016-07-25 14:00:00 0.920850 2016-07-25 15:00:00 0.201423 2016-07-25 16:00:00 0.842970 2016-07-25 17:00:00 NaN 2016-07-25 18:00:00 NaN 2016-07-25 19:00:00 NaN 2016-07-25 20:00:00 NaN 2016-07-25 21:00:00 1.061207 2016-07-25 22:00:00 0.232180 2016-07-25 23:00:00 0.453964 </code></pre> <p>from here on i might need to reduce consecutive entries over a certain limit with missing data to a fix number (representing my gap-width) and do something to the index-value of these entries so they are plotted differently but i got lost here i guess as i don't know how to achieve something like that.</p> <p>while tinkering around i wondered if there might be a more direct and elegant approach and would be thankful if anyone knowing more about this could point me towards the right direction.</p> <p>thanks for any hints and feedback in advance!</p> <p><strong>### ADDENDUM ###</strong></p> <p>After posting my question I've come across another interesting <a href="http://stackoverflow.com/a/13977632/294930">idea postend by Andy Hayden</a> that seems helpful. He's using a column to hold the results of a comparison of the difference with a time-delta. 
After performing a <code>cumsum()</code> on the int-representation of the boolean results he uses <code>groupby()</code> to cluster entries of each ungapped-series into a <code>DataFrameGroupBy</code>-object.</p> <p>As this was written some time ago pandas now returns <code>timedelta</code>-objects so the comparison should be done with another <code>timedelta</code>-object like so (based on the <code>mydf</code> from above or on the reindexed <code>df2</code> after copying its index to a now column through <code>mydf2['timestamp'] = mydf2.index</code>):</p> <pre><code>from datetime import timedelta myTD = timedelta(minutes=60) mydf['nogap'] = mydf['timestamp'].diff() &gt; myTD mydf['nogap'] = mydf['nogap'].apply(lambda x: 1 if x else 0).cumsum() ## btw.: why not "... .apply(lambda x: int(x)) ..."? dfg = mydf.groupby('nogap') </code></pre> <p>We now could iterate over the DataFrameGroup getting the ungapped series and do <em>something</em> with them. My pandas/mathplot-skills are way too immature but could we plot the group-elements into sub-plots? maybe that way the discontinuity along the time-axis could be represented in some way (in form of an interrupted axis-line or such)?</p> <p>piRSquared's answer already leads to a quite usable result with the only thing kind of missing being a more striking visual feedback along the time-axis that a gap/time-jump has occurred between two values.</p> <p>Maybe with the grouped Sections the width of the gap-representation could be more configurable?</p>
1
2016-07-25T16:02:49Z
38,574,247
<p>I built a new series and plotted it. This is not super elegant! But I believe gets you what you wanted.</p> <h3>Setup</h3> <p>Do this to get to your starting point</p> <pre><code>from StringIO import StringIO import pandas as pd text = """ timestamp val 2016-07-25 00:00:00 0.740442 2016-07-25 01:00:00 0.842911 2016-07-25 02:00:00 -0.873992 2016-07-25 07:00:00 -0.474993 2016-07-25 08:00:00 -0.983963 2016-07-25 09:00:00 0.597011 2016-07-25 10:00:00 -2.043023 2016-07-25 12:00:00 0.304668 2016-07-25 13:00:00 1.185997 2016-07-25 14:00:00 0.920850 2016-07-25 15:00:00 0.201423 2016-07-25 16:00:00 0.842970 2016-07-25 21:00:00 1.061207 2016-07-25 22:00:00 0.232180 2016-07-25 23:00:00 0.453964""" s1 = pd.read_csv(StringIO(text), index_col=0, parse_dates=[0], engine='python', sep='\s{2,}').squeeze() s1 timestamp 2016-07-25 00:00:00 0.740442 2016-07-25 01:00:00 0.842911 2016-07-25 02:00:00 -0.873992 2016-07-25 07:00:00 -0.474993 2016-07-25 08:00:00 -0.983963 2016-07-25 09:00:00 0.597011 2016-07-25 10:00:00 -2.043023 2016-07-25 12:00:00 0.304668 2016-07-25 13:00:00 1.185997 2016-07-25 14:00:00 0.920850 2016-07-25 15:00:00 0.201423 2016-07-25 16:00:00 0.842970 2016-07-25 21:00:00 1.061207 2016-07-25 22:00:00 0.232180 2016-07-25 23:00:00 0.453964 Name: val, dtype: float64 </code></pre> <p>Resample hourly. <code>resample</code> is a deferred method, meaning it expects you to pass another method afterwards so it knows what to do. I used <code>mean</code>. For your example, it doesn't matter because we are sampling to a higher frequency. 
Look it up if you care.</p> <pre><code>s2 = s1.resample('H').mean() s2 timestamp 2016-07-25 00:00:00 0.740442 2016-07-25 01:00:00 0.842911 2016-07-25 02:00:00 -0.873992 2016-07-25 03:00:00 NaN 2016-07-25 04:00:00 NaN 2016-07-25 05:00:00 NaN 2016-07-25 06:00:00 NaN 2016-07-25 07:00:00 -0.474993 2016-07-25 08:00:00 -0.983963 2016-07-25 09:00:00 0.597011 2016-07-25 10:00:00 -2.043023 2016-07-25 11:00:00 NaN 2016-07-25 12:00:00 0.304668 2016-07-25 13:00:00 1.185997 2016-07-25 14:00:00 0.920850 2016-07-25 15:00:00 0.201423 2016-07-25 16:00:00 0.842970 2016-07-25 17:00:00 NaN 2016-07-25 18:00:00 NaN 2016-07-25 19:00:00 NaN 2016-07-25 20:00:00 NaN 2016-07-25 21:00:00 1.061207 2016-07-25 22:00:00 0.232180 2016-07-25 23:00:00 0.453964 Freq: H, Name: val, dtype: float64 </code></pre> <p>Ok, so you also wanted equally sized gaps. This was a tad tricky. I used <code>ffill(limit=1)</code> to fill in only one space of each gap. Then I took the slice of <code>s2</code> where this forward filled thing was not null. This gives me a single null for each gap.</p> <pre><code>s3 = s2[s2.ffill(limit=1).notnull()] s3 timestamp 2016-07-25 00:00:00 0.740442 2016-07-25 01:00:00 0.842911 2016-07-25 02:00:00 -0.873992 2016-07-25 03:00:00 NaN 2016-07-25 07:00:00 -0.474993 2016-07-25 08:00:00 -0.983963 2016-07-25 09:00:00 0.597011 2016-07-25 10:00:00 -2.043023 2016-07-25 11:00:00 NaN 2016-07-25 12:00:00 0.304668 2016-07-25 13:00:00 1.185997 2016-07-25 14:00:00 0.920850 2016-07-25 15:00:00 0.201423 2016-07-25 16:00:00 0.842970 2016-07-25 17:00:00 NaN 2016-07-25 21:00:00 1.061207 2016-07-25 22:00:00 0.232180 2016-07-25 23:00:00 0.453964 Name: val, dtype: float64 </code></pre> <p>Lastly, if I plotted this, I still get irregular gaps. 
I need <code>str</code> indices so that <code>matplotlib</code> doesn't try to expand out my dates.</p> <pre><code>s3.reindex(s3.index.strftime('%H:%M')) timestamp 00:00 0.740442 01:00 0.842911 02:00 -0.873992 03:00 NaN 07:00 -0.474993 08:00 -0.983963 09:00 0.597011 10:00 -2.043023 11:00 NaN 12:00 0.304668 13:00 1.185997 14:00 0.920850 15:00 0.201423 16:00 0.842970 17:00 NaN 21:00 1.061207 22:00 0.232180 23:00 0.453964 Name: val, dtype: float64 </code></pre> <p>I'll plot them together so we can see the difference.</p> <pre><code>f, a = plt.subplots(2, 1, sharey=True, figsize=(10, 5)) s2.plot(ax=a[0]) s3.reindex(s3.index.strftime('%H:%M')).plot(ax=a[1]) </code></pre> <p><a href="http://i.stack.imgur.com/Cl1Xy.png" rel="nofollow"><img src="http://i.stack.imgur.com/Cl1Xy.png" alt="enter image description here"></a></p>
1
2016-07-25T17:42:52Z
[ "python", "pandas", "matplotlib", "plot" ]
Most efficient way to convert datetime string (Jul 25, 2016 11:51:32 PM) into another string (YYYYMMDD) in python
38,572,569
<p>I have a string representing a datetime such as : "Jul 19, 2016 4:45:57 PM"</p> <p>I would like to convert this to the following string: "20160725" </p> <p>What is the most efficient way to do this in python? </p> <p>Thanks :)</p> <p>Note : I need to do this in order to input this date into a csv file later on, which will be used to show a line chart. </p>
1
2016-07-25T16:04:51Z
38,572,645
<p>You can use the <a href="https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior" rel="nofollow"><code>strptime</code></a> and <a href="https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior" rel="nofollow"><code>strftime</code></a> methods of the <code>datetime</code> module. Note that <code>%p</code> is only honoured together with the 12-hour directive <code>%I</code> (not <code>%H</code>). The following does what you want:</p> <pre><code>from datetime import datetime as dt s = "July 25, 2016 - 11:51:32 PM" old_format = '%B %d, %Y - %I:%M:%S %p' new_format = '%Y%m%d' r = dt.strptime(s, old_format).strftime(new_format) print(r) # '20160725' </code></pre>
3
2016-07-25T16:08:42Z
[ "python", "string", "date", "datetime" ]
copy values from one dataframe to another dataframe(different length) by comparing row values in python
38,572,583
<p>I am new to python and working with dataframes. I have two dataframes, one with data for months and another with data for days in those months. I want the data from the monthly dataframe as a column in the daily dataframe, repeated for the number of days in that month. Thanks, I have tried to Provide with an illustration below.</p> <pre><code>Monthly DF Val Date Year Month 0 0.00 2016-01-31 2016 1 1 0.10 2016-02-29 2016 2 2 0.07 2016-03-31 2016 3 3 0.01 2016-04-30 2016 4 4 0.28 2016-05-31 2016 5 DailyDF Date Year Month Val 0 2016-01-01 2016 1 0 1 2016-01-02 2016 1 0 2 2016-01-03 2016 1 0 3 2016-01-04 2016 1 0 4 2016-01-05 2016 1 0 5 2016-01-06 2016 1 0 6 2016-01-07 2016 1 0 7 2016-01-08 2016 1 0 8 2016-01-09 2016 1 0 9 2016-01-10 2016 1 0 10 2016-01-11 2016 1 0 11 2016-01-12 2016 1 0 12 2016-01-13 2016 1 0 13 2016-01-14 2016 1 0 14 2016-01-15 2016 1 0 15 2016-01-16 2016 1 0 16 2016-01-17 2016 1 0 17 2016-01-18 2016 1 0 18 2016-01-19 2016 1 0 19 2016-01-20 2016 1 0 20 2016-01-21 2016 1 0 21 2016-01-22 2016 1 0 22 2016-01-23 2016 1 0 23 2016-01-24 2016 1 0 24 2016-01-25 2016 1 0 25 2016-01-26 2016 1 0 26 2016-01-27 2016 1 0 27 2016-01-28 2016 1 0 28 2016-01-29 2016 1 0 29 2016-01-30 2016 1 0 .. ... ... ... ... 
31 2016-02-01 2016 2 0 32 2016-02-02 2016 2 0 33 2016-02-03 2016 2 0 34 2016-02-04 2016 2 0 35 2016-02-05 2016 2 0 36 2016-02-06 2016 2 0 37 2016-02-07 2016 2 0 38 2016-02-08 2016 2 0 39 2016-02-09 2016 2 0 40 2016-02-10 2016 2 0 41 2016-02-11 2016 2 0 42 2016-02-12 2016 2 0 43 2016-02-13 2016 2 0 44 2016-02-14 2016 2 0 45 2016-02-15 2016 2 0 46 2016-02-16 2016 2 0 47 2016-02-17 2016 2 0 48 2016-02-18 2016 2 0 49 2016-02-19 2016 2 0 50 2016-02-20 2016 2 0 51 2016-02-21 2016 2 0 52 2016-02-22 2016 2 0 53 2016-02-23 2016 2 0 54 2016-02-24 2016 2 0 55 2016-02-25 2016 2 0 56 2016-02-26 2016 2 0 57 2016-02-27 2016 2 0 58 2016-02-28 2016 2 0 59 2016-02-29 2016 2 0 60 2016-03-01 2016 3 0 </code></pre> <p>So in the 'Val' column of Daily Dataframe I want the "Val" from the Monthly Dataframe to be repeated for the number of days in that month.</p> <pre><code>Expected Output Date Year Month Val 0 2016-01-01 2016 1 0 1 2016-01-02 2016 1 0 2 2016-01-03 2016 1 0 3 2016-01-04 2016 1 0 4 2016-01-05 2016 1 0 5 2016-01-06 2016 1 0 6 2016-01-07 2016 1 0 7 2016-01-08 2016 1 0 8 2016-01-09 2016 1 0 .. ... ... ... ... 10 2016-01-11 2016 1 0 11 2016-01-12 2016 1 0 12 2016-01-13 2016 1 0 13 2016-01-14 2016 1 0 14 2016-01-15 2016 1 0 15 2016-01-16 2016 1 0 16 2016-01-17 2016 1 0 17 2016-01-18 2016 1 0 18 2016-01-19 2016 1 0 19 2016-01-20 2016 1 0 20 2016-01-21 2016 1 0 21 2016-01-22 2016 1 0 22 2016-01-23 2016 1 0 23 2016-01-24 2016 1 0 24 2016-01-25 2016 1 0 25 2016-01-26 2016 1 0 26 2016-01-27 2016 1 0 27 2016-01-28 2016 1 0 28 2016-01-29 2016 1 0 29 2016-01-30 2016 1 0 .. ... ... ... ... 41 2016-02-11 2016 2 0.10 42 2016-02-12 2016 2 0.10 43 2016-02-13 2016 2 0.10 44 2016-02-14 2016 2 0.10 45 2016-02-15 2016 2 0.10 46 2016-02-16 2016 2 0.10 47 2016-02-17 2016 2 0.10 .. ... ... ... ... 49 2016-03-19 2016 3 0.07 50 2016-03-20 2016 3 0.07 51 2016-03-21 2016 3 0.07 52 2016-03-22 2016 3 0.07 53 2016-03-23 2016 3 0.07 54 2016-03-24 2016 3 0.07 </code></pre>
0
2016-07-25T16:05:23Z
38,573,927
<p>As <a href="http://stackoverflow.com/questions/38572583/copy-values-from-one-dataframe-to-another-dataframedifferent-length-by-compari/38573927#comment64535881_38572583">@Merlin has already mentioned</a> joining (using <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow">pd.merge()</a> method) should be pretty straightforward:</p> <pre><code>In [126]: pd.merge(daily.drop('Val', 1), monthly.drop('Date', 1), on=['Year','Month']) Out[126]: Date Year Month val Val 0 2016-01-01 2016 1 0 0.00 1 2016-01-02 2016 1 0 0.00 2 2016-01-03 2016 1 0 0.00 3 2016-01-04 2016 1 0 0.00 4 2016-01-05 2016 1 0 0.00 5 2016-01-06 2016 1 0 0.00 6 2016-01-07 2016 1 0 0.00 7 2016-01-08 2016 1 0 0.00 8 2016-01-09 2016 1 0 0.00 9 2016-01-10 2016 1 0 0.00 10 2016-01-11 2016 1 0 0.00 11 2016-01-12 2016 1 0 0.00 12 2016-01-13 2016 1 0 0.00 13 2016-01-14 2016 1 0 0.00 14 2016-01-15 2016 1 0 0.00 .. ... ... ... ... ... 137 2016-05-17 2016 5 0 0.28 138 2016-05-18 2016 5 0 0.28 139 2016-05-19 2016 5 0 0.28 140 2016-05-20 2016 5 0 0.28 141 2016-05-21 2016 5 0 0.28 142 2016-05-22 2016 5 0 0.28 143 2016-05-23 2016 5 0 0.28 144 2016-05-24 2016 5 0 0.28 145 2016-05-25 2016 5 0 0.28 146 2016-05-26 2016 5 0 0.28 147 2016-05-27 2016 5 0 0.28 148 2016-05-28 2016 5 0 0.28 149 2016-05-29 2016 5 0 0.28 150 2016-05-30 2016 5 0 0.28 151 2016-05-31 2016 5 0 0.28 [152 rows x 5 columns] </code></pre> <p><strong>I want to offer you a bit more challenging task - generate your desired DF just from the MonthlyDF:</strong></p> <pre><code>In [108]: df Out[108]: Val Date Year Month 0 0.00 2016-01-31 2016 1 1 0.10 2016-02-28 2016 2 2 0.07 2016-03-31 2016 3 3 0.01 2016-04-30 2016 4 4 0.28 2016-05-31 2016 5 In [117]: df.set_index('Date').resample('MS').mean().append(x.iloc[[-1]]).resample('D').pad().reset_index() Out[117]: Date Val Year Month 0 2016-01-01 0.00 2016 1 1 2016-01-02 0.00 2016 1 2 2016-01-03 0.00 2016 1 3 2016-01-04 0.00 2016 1 4 2016-01-05 
0.00 2016 1 5 2016-01-06 0.00 2016 1 6 2016-01-07 0.00 2016 1 7 2016-01-08 0.00 2016 1 8 2016-01-09 0.00 2016 1 9 2016-01-10 0.00 2016 1 10 2016-01-11 0.00 2016 1 11 2016-01-12 0.00 2016 1 12 2016-01-13 0.00 2016 1 13 2016-01-14 0.00 2016 1 14 2016-01-15 0.00 2016 1 .. ... ... ... ... 137 2016-05-17 0.28 2016 5 138 2016-05-18 0.28 2016 5 139 2016-05-19 0.28 2016 5 140 2016-05-20 0.28 2016 5 141 2016-05-21 0.28 2016 5 142 2016-05-22 0.28 2016 5 143 2016-05-23 0.28 2016 5 144 2016-05-24 0.28 2016 5 145 2016-05-25 0.28 2016 5 146 2016-05-26 0.28 2016 5 147 2016-05-27 0.28 2016 5 148 2016-05-28 0.28 2016 5 149 2016-05-29 0.28 2016 5 150 2016-05-30 0.28 2016 5 151 2016-05-31 0.28 2016 5 [152 rows x 4 columns] </code></pre> <p><strong>Explanation:</strong></p> <p>resample <code>MonthlyDF</code> to the begin-of-month</p> <pre><code>In [112]: df.set_index('Date').resample('MS').mean() Out[112]: Val Year Month Date 2016-01-01 0.00 2016 1 2016-02-01 0.10 2016 2 2016-03-01 0.07 2016 3 2016-04-01 0.01 2016 4 2016-05-01 0.28 2016 5 </code></pre> <p>add last row from the original <code>MonthlyDF</code>:</p> <pre><code>In [113]: df.set_index('Date').resample('MS').mean().append(x.iloc[[-1]]) Out[113]: Val Year Month Date 2016-01-01 0.00 2016 1 2016-02-01 0.10 2016 2 2016-03-01 0.07 2016 3 2016-04-01 0.01 2016 4 2016-05-01 0.28 2016 5 2016-05-31 0.28 2016 5 </code></pre> <p>after that we can easily resample it using <code>daily</code> rule: <code>D</code></p>
0
2016-07-25T17:23:04Z
[ "python", "pandas", "dataframe" ]
Removing encoded text from strings read from txt file
38,572,626
<p>Here's the problem:</p> <p>I copied and pasted this entire list to a txt file from <a href="https://www.cboe.org/mdx/mdi/mdiproducts.aspx" rel="nofollow">https://www.cboe.org/mdx/mdi/mdiproducts.aspx</a></p> <p>Sample of text lines:</p> <p><code>BFLY - The CBOE S&amp;P 500 Iron Butterfly Index BPVIX - CBOE/CME FX British Pound Volatility Index BPVIX1 - CBOE/CME FX British Pound Volatility First Term Structure Index BPVIX2 - CBOE/CME FX British Pound Volatility Second Term Structure Index</code></p> <p>These lines of course appear normal in my text file, and I saved the file with utf-8 encoding.</p> <p>My goal is to use python to strip out only the symbols from this long list, .e.g. BFLY, VPVIX etc, and write them to a new file</p> <p>I am using the following code to read the file and split it:</p> <pre><code>x=open('sometextfile.txt','r') y=x.read().split() </code></pre> <p>The issue I'm seeing is that there are unfamiliar characters popping up and they are affecting my ability to filter the list. Example:</p> <pre><code>print(y[0]) BFLY </code></pre> <p>I'm guessing that these characters have something to do with the encoding and I have tried a few different things with the codec module without success. Using .decode('utf-8') throws an error when trying to use it against the above variables x or y. I am able to use .encode('utf-8'), which obviously makes things even worse. </p> <p>The main problem is that when I try to loop through the list and remove any items that are not all upper case or contain non-alpha characters. Ex:</p> <pre><code>y[0].isalpha() False y[0].isupper() False </code></pre> <p>So in this example the symbol BFLY ends up being removed from the list. </p> <p>Funny thing is that these characters are not present in a txt file if I do something like:</p> <pre><code>q=open('someotherfile.txt','w') q.write(y[0]) </code></pre> <p>Any help would be greatly appreciated. 
I would really like to understand why this frequently happens when copying and pasting text from web pages like this one.</p>
0
2016-07-25T16:07:35Z
38,573,097
<p>Why not use Regex?</p> <p>I think this will catch the letters in caps</p> <pre><code>"[A-Z]{1,}/?[A-Z]{1,}[0-9]?" </code></pre> <p>This is better. I got a list of all such symbols. Here's my result.</p> <pre><code>['BFLY', 'CBOE', 'BPVIX', 'CBOE/CME', 'FX', 'BPVIX1', 'CBOE/CME', 'FX', 'BPVIX2', 'CBOE/CME', 'FX'] </code></pre> <p>Here's the code</p> <pre><code>import re text = open('sometextfile.txt').read() reg_obj = re.compile(r'[A-Z]{1,}/?[A-Z]{1,}[0-9]?') sym = reg_obj.findall(text) print(sym) </code></pre>
1
2016-07-25T16:32:45Z
[ "python", "html", "encoding", "utf-8", "decoding" ]
Unexpected behavior from python's relativedelta
38,572,812
<p>I'm getting a confusing result when using Python's timestamps and </p> <blockquote> <p>my_timestamp </p> </blockquote> <p><code>Timestamp('2015-06-01 00:00:00')</code></p> <blockquote> <p>my_timestamp + relativedelta(month = +4)</p> </blockquote> <p><code>Timestamp('2015-04-01 00:00:00')</code></p> <p>Naturally I expected the output to <code>Timestamp('2015-10-01 00:00:00')</code></p> <p>What is the correct way to add "months" to dates? </p> <hr> <p>[EDIT]: I've since solved this by using the following (in case anyone in Pandas has the same problem): </p> <blockquote> <p>print my_timestamp<br> print my_timestamp + DateOffset(months=4)</p> </blockquote> <p><code>2015-06-01 00:00:00</code><br> <code>2015-10-01 00:00:00</code></p>
0
2016-07-25T16:17:29Z
38,572,982
<p>The issue is that you are using the wrong keyword argument. You want <code>months</code> instead of <code>month</code>. </p> <p>Per <a href="http://dateutil.readthedocs.io/en/stable/relativedelta.html" rel="nofollow">the documentation</a>, <code>month</code> denotes absolute information (not relative) and simply replaces the given information, as you are noting. Using <code>months</code> denotes relative information, and performs the calculation as you'd expect:</p> <pre><code>Timestamp('2015-06-01 00:00:00') + relativedelta(months=4) 2015-10-01 00:00:00 </code></pre>
2
2016-07-25T16:26:30Z
[ "python", "datetime", "pandas", "timestamp", "relativedelta" ]
Pandas - Modify string values in each cell
38,572,815
<p>I have a panda dataframe and I need to modify all values in a given column. Each column will contains a string value of the same length. The user provides the index they want to be replaced for each value: ex: <code>[1:3]</code> and the replacement value <code>"AAA"</code>.</p> <p>This would replace the string from values 1 to 3 with the value <code>AAA</code>.</p> <p>How can I use the applymap, map or apply function to get this done?</p> <p>Thanks</p> <p>Here is the final solution I went off of using the answer marked below:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A':['ffgghh','ffrtss','ffrtds'], #'B':['ffrtss','ssgghh','d'], 'C':['qqttss',' 44','f']}) print df old = ['g', 'r', 'z'] new = ['y', 'b', 'c'] vals = dict(zip(old, new)) pos = 2 for old, new in vals.items(): df.ix[df['A'].str[pos] == old, 'A'] = df['A'].str.slice_replace(pos,pos + len(new),new) print df </code></pre>
1
2016-07-25T16:17:35Z
38,573,189
<p>IMO the most straightforward solution:</p> <pre><code>In [7]: df Out[7]: col 0 abcdefg 1 bbbbbbb 2 ccccccc 3 zzzzzzzz In [9]: df.col = df.col.str[:1] + 'AAA' + df.col.str[4:] In [10]: df Out[10]: col 0 aAAAefg 1 bAAAbbb 2 cAAAccc 3 zAAAzzzz </code></pre>
1
2016-07-25T16:37:16Z
[ "python", "pandas" ]
Pandas - Modify string values in each cell
38,572,815
<p>I have a panda dataframe and I need to modify all values in a given column. Each column will contains a string value of the same length. The user provides the index they want to be replaced for each value: ex: <code>[1:3]</code> and the replacement value <code>"AAA"</code>.</p> <p>This would replace the string from values 1 to 3 with the value <code>AAA</code>.</p> <p>How can I use the applymap, map or apply function to get this done?</p> <p>Thanks</p> <p>Here is the final solution I went off of using the answer marked below:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A':['ffgghh','ffrtss','ffrtds'], #'B':['ffrtss','ssgghh','d'], 'C':['qqttss',' 44','f']}) print df old = ['g', 'r', 'z'] new = ['y', 'b', 'c'] vals = dict(zip(old, new)) pos = 2 for old, new in vals.items(): df.ix[df['A'].str[pos] == old, 'A'] = df['A'].str.slice_replace(pos,pos + len(new),new) print df </code></pre>
1
2016-07-25T16:17:35Z
38,573,317
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.slice_replace.html" rel="nofollow"><code>str.slice_replace</code></a>:</p> <pre><code>df['B'] = df['B'].str.slice_replace(1, 3, 'AAA') </code></pre> <p>Sample Input:</p> <pre><code> A B 0 w abcdefg 1 x bbbbbbb 2 y ccccccc 3 z zzzzzzzz </code></pre> <p>Sample Output:</p> <pre><code> A B 0 w aAAAdefg 1 x bAAAbbbb 2 y cAAAcccc 3 z zAAAzzzzz </code></pre>
3
2016-07-25T16:44:28Z
[ "python", "pandas" ]
Split Python Dataframe into multiple Dataframes (where chosen rows are the same)
38,572,816
<p>I would like to split one DataFrame into N Dataframes based on columns X and Z where they are the same (as eachother by column value).</p> <p>For example, this input:</p> <pre><code>df = NAME X Y Z Other 0 a 1 1 1 1 1 b 1 1 2 2 2 c 1 2 1 3 3 d 1 2 2 4 4 e 1 1 1 5 5 f 2 1 2 6 6 g 2 2 1 7 7 h 2 2 2 8 8 i 2 1 1 9 9 j 2 1 2 0 </code></pre> <p>Would have this output:</p> <pre><code>df_group_0 = NAME X Y Z Other 0 a 1 1 1 1 2 c 1 2 1 3 4 e 1 1 1 5 df_group_1 = NAME X Y Z Other 1 b 1 1 2 2 3 d 1 2 2 4 df_group_2 = NAME X Y Z Other 6 g 2 2 1 7 8 i 2 1 1 9 df_group_3 = NAME X Y Z Other 7 h 2 2 2 8 9 j 2 1 2 0 </code></pre> <p>Is this possible?</p>
2
2016-07-25T16:17:37Z
38,572,937
<p><code>groupby</code> generates an iterator of tuples with the first element being the group id, so if you iterate through the groupers and extract the second element from each tuple, you can get a list of data frames each having a unique group: </p> <pre><code>grouper = [g[1] for g in df.groupby(['X', 'Z'])] grouper[0] NAME X Y Z Other 0 a 1 1 1 1 2 c 1 2 1 3 4 e 1 1 1 5 grouper[1] NAME X Y Z Other 1 b 1 1 2 2 3 d 1 2 2 4 grouper[2] NAME X Y Z Other 6 g 2 2 1 7 8 i 2 1 1 9 grouper[3] NAME X Y Z Other 5 f 2 1 2 6 7 h 2 2 2 8 9 j 2 1 2 0 </code></pre>
2
2016-07-25T16:24:15Z
[ "python", "pandas", "grouping" ]
Iterating through a dictionary and subtracting values based on keys
38,572,817
<p>So I have a dictionary like the one below, however, I am trying to subtract ART[0][0] - ART[1][0] and this has to be an iteration. </p> <p>This is what I have, but it doesn't seem to work. I keep getting the error: 'KeyError: 2'</p> <p>Any help would be appreciated.</p> <pre><code> for i in range(1, 5): #from k = i for j in range (1, 5): #to if i == j: pass else: t = ART[j][0] - ART[i][0] g = ART[j][1] - ART[i][1] </code></pre> <p>ART = {'U': (5, 6), 'E': (7, 3), 'A': (3, 3), 'O': (3, 2), 'I': (1, 4)}</p>
-1
2016-07-25T16:17:38Z
38,572,939
<p>Dictionaries are accessed by the key name. See <a href="https://developers.google.com/edu/python/dict-files" rel="nofollow">here</a> for examples.</p> <p>For example, <code>ART['U']</code> would return <code>(5, 6)</code>. In your code, you're trying to access a key <code>2</code>, because that's what <code>j</code> is equal to. However, there is no key named <code>2</code> in the dictionary.</p>
0
2016-07-25T16:24:19Z
[ "python", "dictionary", "iteration" ]
Iterating through a dictionary and subtracting values based on keys
38,572,817
<p>So I have a dictionary like the one below, however, I am trying to subtract ART[0][0] - ART[1][0] and this has to be an iteration. </p> <p>This is what I have, but it doesn't seem to work. I keep getting the error: 'KeyError: 2'</p> <p>Any help would be appreciated.</p> <pre><code> for i in range(1, 5): #from k = i for j in range (1, 5): #to if i == j: pass else: t = ART[j][0] - ART[i][0] g = ART[j][1] - ART[i][1] </code></pre> <p>ART = {'U': (5, 6), 'E': (7, 3), 'A': (3, 3), 'O': (3, 2), 'I': (1, 4)}</p>
-1
2016-07-25T16:17:38Z
38,573,120
<p>It seems like you want to get all the combinations of keys from the dictionary. Dictionaries are unordered, so <code>ART[0]</code> does not have any meaning<sup>1)</sup>. Instead, you can iterate the keys directly:</p> <pre><code>for i in ART: for j in ART: if i != j: t = ART[j][0] - ART[i][0] g = ART[j][1] - ART[i][1] </code></pre> <p>Or shorter, using <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a>:</p> <pre><code>for i, j in itertools.product(ART, repeat=2): if ... </code></pre> <hr> <p><sup>1)</sup> Except if you have a key <code>0</code> in your dictionary, which you don't.</p>
0
2016-07-25T16:33:46Z
[ "python", "dictionary", "iteration" ]
Iterating through a dictionary and subtracting values based on keys
38,572,817
<p>So I have a dictionary like the one below, however, I am trying to subtract ART[0][0] - ART[1][0] and this has to be an iteration. </p> <p>This is what I have, but it doesn't seem to work. I keep getting the error: 'KeyError: 2'</p> <p>Any help would be appreciated.</p> <pre><code> for i in range(1, 5): #from k = i for j in range (1, 5): #to if i == j: pass else: t = ART[j][0] - ART[i][0] g = ART[j][1] - ART[i][1] </code></pre> <p>ART = {'U': (5, 6), 'E': (7, 3), 'A': (3, 3), 'O': (3, 2), 'I': (1, 4)}</p>
-1
2016-07-25T16:17:38Z
38,573,501
<p>You mention in comments that you want to call items in the dictionary by an index. This is not possible because dictionaries are unordered. Dictionaries are for mapping a particular 'key' to a value that you stored against that key. If you <code>print ART</code> several times, you will see different ordering. I don't think you want a dictionary here and I believe you've picked letters to make keys for the sake of making keys.</p> <p>Python uses zero indexing. So <code>range(1, 5)</code> would miss the first item in a <em>list</em> if you were iterating through. You would want <code>for x in range(0, 5)</code> to get all elements in the list, which can be written simply as <code>for x in range(5)</code>.</p> <p>Finally, your code has <code>i</code>, <code>j</code>, <code>k</code> (never used), <code>t</code> and <code>g</code>. This makes things <strong>much</strong> harder to understand, especially when Python gives you so much flexibility in naming things.</p> <p>With those assumptions, I think you want a list of tuples as your data structure:</p> <pre><code>ART = [(5, 6), (7, 3), (3, 3), (3, 2), (1, 4)] for i, item in enumerate(ART): for j, pair_item in enumerate(ART): if i != j: t = item[0] - pair_item[0] g = item[1] - pair_item[1] </code></pre>
0
2016-07-25T16:55:27Z
[ "python", "dictionary", "iteration" ]
django custom validation not showing up on button click
38,572,844
<p>I wrote a custom validation for my form in forms.py but it's not working. It doesn't show anything ("email not exist"), when I press the submit button it looks like refreshing the page. I would appreciate any help.</p> <p>here is my view.py:</p> <pre><code>def delete(request): if request.method == 'POST' and "DeleteButton" in request.POST: form = LoginPageDelete(request.POST) if form.is_valid(): DeleteData = form.cleaned_data q = DeleteData["emailD"] query = Users.objects.get(email = q ) query.delete() fetch = Users.objects.all() return render(request,'Result.html',{'QueryDelete':fetch.values(),}) FormDelete = LoginPageDelete() return render(request,'delete.html',{"FormDelete":FormDelete,}) </code></pre> <p>here is template:</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;Delete&lt;/title&gt; &lt;/head&gt; &lt;body&gt; {% if FormDelete.errors %} &lt;p style="color:red"&gt;Please correct the problems&lt;/p&gt; 123 {%endif%} &lt;form action="" method="POST"&gt; &lt;table&gt; {{FormDelete.as_table}} &lt;/table&gt; {%csrf_token%} &lt;input type="submit" value="Delete" name="DeleteButton"&gt; &lt;/form&gt; &lt;/body&gt; </code></pre> <p></p> <p>here is forms.py:</p> <pre><code>from django import forms from login.models import Users class LoginPageDelete(forms.Form): emailD = forms.EmailField(required=True) def clean_emailD(self): email = self.cleaned_data['emailD'] if not Users.objects.filter(email = email).exists(): raise forms.ValidationError("email not exist") return email </code></pre>
0
2016-07-25T16:18:59Z
38,573,880
<p>You are simply not returning the validated form to the template. Everytime in your view, you return a new instance of LoginPageDelete() instead and discard the one with validation information.</p> <pre><code>def delete(request): if request.method == 'POST' and "DeleteButton" in request.POST: form = LoginPageDelete(request.POST) if form.is_valid(): DeleteData = form.cleaned_data q = DeleteData["emailD"] query = Users.objects.get(email = q ) query.delete() fetch = Users.objects.all() return render(request,'Result.html',{'QueryDelete':fetch.values(),}) else:# here request.method is get form = LoginPageDelete() return render(request,'delete.html',{"FormDelete":form}) </code></pre>
0
2016-07-25T17:20:01Z
[ "python", "django", "django-validation" ]
Append and Truncating together in Python
38,572,880
<p>So, I am at exercise 16 of Zed Shaw's Python book.</p> <p>I thought of trying out both append and truncate on the same file. I know, this does not make sense. But, I am new and I am trying to learn what would happen if I used both.</p> <p>So, first I am opening the file in append mode and then truncating it and then writing into it.</p> <p>But, the truncate is not working here and whatever I write gets appended to the file.</p> <p>So, can someone kindly explain why the truncate would not work ? Even though I am opening the file in append mode first, I believe I am calling the truncate function after that and it should have worked !!!</p> <p>Following is my code:</p> <pre><code>from sys import argv script, filename = argv print "We're going to erase %r." %filename print "If you don't want that. hit CTRL-C (^C)." print "If you do want that, hit RETURN." raw_input("?") print "Opening the file..." target = open(filename, 'a') print "Truncating the file. Goodbye!" target.truncate() print "Now I'm going to ask you for three lines." line1 = raw_input("line 1: ") line2 = raw_input("line 2: ") line3 = raw_input("line 3: ") print "I'm going to write these to the file." target.write(line1 + "\n" + line2 + "\n" + line3) print "And finally, we close it." target.close() </code></pre>
0
2016-07-25T16:21:24Z
38,572,996
<p>The argument <code>'a'</code> opens the file for appending. You will need to use <code>'w'</code> instead.</p>
0
2016-07-25T16:27:22Z
[ "python", "append", "truncate" ]
Append and Truncating together in Python
38,572,880
<p>So, I am at exercise 16 of Zed Shaw's Python book.</p> <p>I thought of trying out both append and truncate on the same file. I know, this does not make sense. But, I am new and I am trying to learn what would happen if I used both.</p> <p>So, first I am opening the file in append mode and then truncating it and then writing into it.</p> <p>But, the truncate is not working here and whatever I write gets appended to the file.</p> <p>So, can someone kindly explain why the truncate would not work ? Even though I am opening the file in append mode first, I believe I am calling the truncate function after that and it should have worked !!!</p> <p>Following is my code:</p> <pre><code>from sys import argv script, filename = argv print "We're going to erase %r." %filename print "If you don't want that. hit CTRL-C (^C)." print "If you do want that, hit RETURN." raw_input("?") print "Opening the file..." target = open(filename, 'a') print "Truncating the file. Goodbye!" target.truncate() print "Now I'm going to ask you for three lines." line1 = raw_input("line 1: ") line2 = raw_input("line 2: ") line3 = raw_input("line 3: ") print "I'm going to write these to the file." target.write(line1 + "\n" + line2 + "\n" + line3) print "And finally, we close it." target.close() </code></pre>
0
2016-07-25T16:21:24Z
38,573,098
<blockquote> <p>Truncate the file’s size. If the optional size argument is present, the file is truncated to (at most) that size. The size defaults to the current position.</p> </blockquote> <p>When you open your file with 'a' mode, the position is at the <strong>end</strong> of the file.</p> <p>You can do something like this</p> <pre><code>f = open('myfile', 'a') f.tell() # Show the position of the cursor # As you can see, the position is at the end f.seek(0, 0) # Put the position at the begining f.truncate() # It works !! f.close() </code></pre>
0
2016-07-25T16:32:45Z
[ "python", "append", "truncate" ]
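The seek-then-truncate fix from the answer above can be checked in isolation (a small sketch using a temporary file instead of the exercise's `argv` filename):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lines.txt")
with open(path, "w") as f:
    f.write("old contents\n")

f = open(path, "a")          # cursor starts at the end of the file
f.seek(0, 0)                 # move it back to the beginning...
f.truncate()                 # ...so truncate() erases everything
f.write("line 1\nline 2\nline 3")
f.close()

with open(path) as f:
    print(f.read())          # the old contents are gone
```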
Examples of Python Objects That Have An __index__() method?
38,572,999
<p>What are some examples of Python objects that have an <code>__index__()</code> method other than an <code>int</code>?</p> <p>For example, the <a href="https://docs.python.org/3.4/library/functions.html#hex" rel="nofollow">Hex</a> documentation state:</p> <blockquote> <p>If <code>x</code> is not a Python int object, it has to define an <code>__index__()</code> method that returns an integer.</p> </blockquote> <p>This is for self-learning.</p>
1
2016-07-25T16:27:41Z
38,573,210
<p>Mostly, these are types from mathematical libraries like <a href="http://www.numpy.org/" rel="nofollow">NumPy</a> or <a href="http://www.sympy.org/en/index.html" rel="nofollow">SymPy</a>. These libraries have their own integer types (for good reason), but thanks to <code>__index__</code>, their special integers can be used as list indices or passed to <code>hex</code> like normal integers.</p> <pre><code>In [9]: import sympy In [10]: x = sympy.Integer(1) In [11]: x # It looks like a regular 1, but it's not. Out[11]: 1 In [12]: x/2 # This object has special behavior that makes sense for SymPy... Out[12]: 1/2 In [13]: [1, 2, 3][x] # but you can still use it for things like indexing. Out[13]: 2 </code></pre>
2
2016-07-25T16:38:17Z
[ "python", "indexing", "int", "hex" ]
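Beyond library types like NumPy's and SymPy's integers, any class you write can participate the same way. A minimal, hypothetical example of a type that defines `__index__`:

```python
class PageNumber:
    """Hypothetical wrapper type that behaves like an integer index."""
    def __init__(self, n):
        self.n = n
    def __index__(self):
        # Returning an int here lets instances be used anywhere Python
        # expects an integer: list indexing, slicing, hex(), bin(), ...
        return self.n

letters = ["a", "b", "c", "d"]
p = PageNumber(2)
print(letters[p])                            # 'c'
print(letters[PageNumber(0):PageNumber(2)])  # ['a', 'b']
print(hex(PageNumber(255)))                  # '0xff'
```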
edit django form widget rendering
38,573,006
<p>I have a Django form where one of the fields is defined as:</p> <pre><code>widgets = { 'name': forms.CheckboxSelectMultiple() } </code></pre> <p>The template renders them in a loop with:</p> <pre><code>{% for field in form %} &lt;fieldset class="article-form__field"&gt; {{ field.label_tag }} {{ field }} &lt;/fieldset&gt; {% endfor %} </code></pre> <p>This is rendered as:</p> <pre><code>&lt;fieldset class="article-form__field"&gt; &lt;label for="category-name_0"&gt;Category:&lt;/label&gt; &lt;ul id="category-name"&gt; &lt;li&gt;&lt;label for="category-name_0"&gt;&lt;input id="category-name_0" name="category-name" type="checkbox" value="GEN" /&gt; General information&lt;/label&gt;&lt;/li&gt; &lt;li&gt;&lt;label for="category-name_1"&gt;&lt;input id="category-name_1" name="category-name" type="checkbox" value="FOO" /&gt; Food and drinks&lt;/label&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/fieldset&gt; </code></pre> <p>In short: <code>&lt;label&gt;&lt;input&gt;&lt;/label&gt;</code>. However, I would like the output to be <code>&lt;label&gt;&lt;/label&gt;&lt;input&gt;</code>.</p> <p>Is that possible, and if so, how?</p> <p>Full code is <a href="https://gist.github.com/Flobin/4a38e53c9a779ba1de8ec2417dbc2e15" rel="nofollow">here</a>.</p>
1
2016-07-25T16:28:05Z
38,573,937
<pre><code>{% for field in form %} &lt;fieldset class="article-form__field"&gt; {% if field.name != "category-name" %} {{ field.label_tag }} {{ field }} {% else %} {{ field.label_tag }} &lt;ul id={{ field.auto_id }}&gt; {% for checkbox in field %} &lt;li&gt; &lt;label for="{{ checkbox.id_for_label }}"&gt; {{ checkbox.choice_label }} &lt;/label&gt; {{ checkbox.tag }} &lt;/li&gt; {% endfor %} &lt;/ul&gt; {% endif %} &lt;/fieldset&gt; {% endfor %} </code></pre> <ul> <li><a href="https://docs.djangoproject.com/en/1.9/ref/forms/widgets/#checkboxselectmultiple" rel="nofollow">CheckboxSelectMultiple</a></li> <li><a href="https://docs.djangoproject.com/en/1.9/ref/forms/widgets/#radioselect" rel="nofollow">RadioSelect</a> (how to customize rendering is described here)</li> </ul>
1
2016-07-25T17:23:39Z
[ "python", "django", "django-forms", "jinja2", "multi-model-forms" ]
Global variables aren't working
38,573,025
<p>My global variables are not working in my code. I'm fairly new to this and I can't seem to figure this out: I have set variables (only showing gna for this), which can be manipulated by an entry field, triggered by a corresponding button. For some reason, it's not taking the changes within the loop. I'm trying to make it to where the changed variable can be graphed as well, but it gives me the following error:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\Python35\lib\tkinter\__init__.py", line 1549, in __call__ return self.func(*args) File "G:/PYTHON/Eulers.py", line 64, in graph v[i + 1] = 1 / c * (gna * f[i] - gk * u[i]) * del_t + v[i] TypeError: ufunc 'multiply' did not contain a loop with signature matching types dtype('&lt;U32') dtype('&lt;U32') dtype('&lt;U32') </code></pre> <p>Here is the code:</p> <pre><code>gna = 0.9 gnalabel = Label(topFrame, text="gna = %s" % gna) gnalabel.pack() gnaEntry = Entry(topFrame, justify=CENTER) gnaEntry.pack() def gnacallback(): global gna gna = gnaEntry.get() gnalabel.config(text="C = %s" % gna) gnaButton = Button(topFrame, text="Change", width=10, command=gnacallback) gnaButton.pack() def graph(): global c, gna, gk, beta, gamma for i in range(0, len(t)-1): stinum = np.floor(i / 3000) stimt = 3000 + 3000 * (stinum - 1) f[i] = v[i] * (1 - (((v[i]) ** 2) / 3)) v[i + 1] = 1 / c * (gna * f[i] - gk * u[i]) * del_t + v[i] if(i == stimt): v[i + 1] = v[i + 1] + v_stim u[i + 1] = (v[i] + beta - gamma * u[i]) * del_t + u[i] plt.plot(v) plt.show() </code></pre>
-3
2016-07-25T16:28:52Z
38,573,173
<pre><code>gna = gnaEntry.get() </code></pre> <p><code>Entry.get</code> returns a string, which is probably an unsuitable type for the arithmetic you're doing in <code>graph</code>. Try converting to a number first.</p> <pre><code>gna = float(gnaEntry.get()) #or perhaps `int` if it's always an integer </code></pre>
2
2016-07-25T16:36:29Z
[ "python", "loops", "variables", "global" ]
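The conversion (and the guard against non-numeric input) from the answer above can be sketched without any Tkinter widgets — `parse_entry` below is a hypothetical helper standing in for `float(gnaEntry.get())`:

```python
def parse_entry(text):
    """Hypothetical helper standing in for float(gnaEntry.get()).

    Entry.get() always returns a str, so arithmetic such as
    gna * f[i] fails until the value is converted.  float() raises
    ValueError for non-numeric input, which we catch here.
    """
    try:
        return float(text)
    except ValueError:
        return None  # caller can keep the old value or show an error

print(parse_entry("0.9"))    # 0.9
print(parse_entry("10"))     # 10.0
print(parse_entry("oops"))   # None
```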
tkinter overrideredirect stops entry working
38,573,178
<p>I would like to create a "Borderless" window in python and my code works without overrideredirect, however when I use this the input is disallowed. I can not click into the Entry box</p> <p>*This is the code which needs to be figured out, it is currently in a function and there is another 500 lines of code to go with it :D *</p> <pre><code>newWindow = tkinter.Tk() w = 300 h = 400 ws = 1024 hs = 768 x = (ws/2) - (w/2) y = (hs/2) - (h/2) newWindow.configure(background="#2E393D") newWindow.overrideredirect(True) frame = tkinter.Frame(newWindow) name = tkinter.Label(newWindow, background="#1c1c1c", width=2, height=4) name.pack() global outputl global inputl inputl = tkinter.StringVar() outputl = tkinter.StringVar() def nomic(empty): reply = inputl.get() inputl.set("") mainProcess(reply) if way == "a": import speech_recognition as sr r = sr.Recognizer() with sr.Microphone() as source: audio = r.listen(source) try: reply = r.recognize_google(audio) mainProcess(reply) except sr.UnknownValueError: backCreateFile("I did not understand that !") else: inputBox = tkinter.Entry(frame, textvariable=inputl, width=40,foreground="#2E393D", background="white", font=("Ubuntu", 13)) inputBox.bind("&lt;Return&gt;", nomic) inputBox.pack(fill="x") outputLabel = tkinter.Label(newWindow, textvariable=outputl) outputLabel.config( background="#2E393D", foreground="white", wraplength=280, pady=10, font=("Ubuntu", 13)) outputLabel.pack(fill="x") weatherV = tkinter.StringVar() weatherV.set("Current Weather - " + status) weatherLabel = tkinter.Label(newWindow, justify="left", textvariable=weatherV, background="#2E393D", foreground="white", font=("Ubuntu", 13), pady=7).pack(fill="x") tempV = tkinter.StringVar() tempV.set("Current Temperature - " + str(ctemp)) tempLabel = tkinter.Label(newWindow, justify="left", textvariable=tempV, background="#2E393D", foreground="white", font=("Ubuntu", 13), pady=7).pack(fill="x") frame.pack() </code></pre>
0
2016-07-25T16:36:46Z
38,578,749
<p>Here is minimal code that works on Windows with Python 3.6 and tk 8.6. The popup is in its default position, the upper left corner of the screen.</p> <pre><code>import tkinter as tk root = tk.Tk() tag = tk.Label(root, text='Popup entry contents: ') var = tk.StringVar(root, 'var') label = tk.Label(root, textvariable=var) tag.pack(side='left') label.pack(side='left') pop = tk.Toplevel(root) pop.overrideredirect(True) entry = tk.Entry(pop, textvariable=var) entry.pack() #pop.lift() # Needed? for tk 8.5.18+, not for 8.6 </code></pre> <p>Test this on your system, and if it works, figure out what you did differently in your code.</p>
0
2016-07-25T23:11:20Z
[ "python", "html", "user-interface", "tkinter" ]
Calculating cosine distance between the rows of matrix
38,573,213
<p>I'm trying to calculate cosine distance in Python between the rows in a matrix, and I have a couple of questions. So I'm creating matrix matr and populating it from the lists, then reshaping it for analysis purposes:</p> <pre><code>s = [] for i in range(len(a)): for j in range(len(b_list)): s.append(a[i].count(b_list[j])) matr = np.array(s) d = matr.reshape((22, 254)) </code></pre> <p>The output of d gives me something like:</p> <pre><code>array([[0, 0, 0, ..., 0, 0, 0], [2, 0, 0, ..., 1, 0, 0], [2, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [1, 0, 0, ..., 0, 0, 0]]) </code></pre> <p>Then I want to use the scipy.spatial.distance.cosine package to calculate cosine from first row to every other else in the d matrix. How can I perform that? Should I use a for loop for that? I don't have much experience with matrix and array operations.</p> <p>So how can I use a for loop for the second argument (d[1], d[2], and so on) in this construction, so I don't have to launch it manually every time:</p> <pre><code>from scipy.spatial.distance import cosine x=cosine (d[0], d[6]) </code></pre>
1
2016-07-25T16:38:21Z
38,573,583
<p>You said "calculate cosine from first row to every other else in the d matrix" [sic]. If I understand correctly, you can do that with <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow"><code>scipy.spatial.distance.cdist</code></a>, passing the first row as the first argument and the remaining rows as the second argument:</p> <pre><code>In [31]: from scipy.spatial.distance import cdist In [32]: matr = np.random.randint(0, 3, size=(6, 8)) In [33]: matr Out[33]: array([[1, 2, 0, 1, 0, 0, 0, 1], [0, 0, 2, 2, 1, 0, 1, 1], [2, 0, 2, 1, 1, 2, 0, 2], [2, 2, 2, 2, 0, 0, 1, 2], [0, 2, 0, 2, 1, 0, 0, 0], [0, 0, 0, 1, 2, 2, 2, 2]]) In [34]: cdist(matr[0:1], matr[1:], metric='cosine') Out[34]: array([[ 0.65811827, 0.5545646 , 0.1752139 , 0.24407105, 0.72499045]]) </code></pre> <hr> <p>If it turns out that you want to compute <em>all</em> the pairwise distances in <code>matr</code>, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html" rel="nofollow"><code>scipy.spatial.distance.pdist</code></a>.</p> <p>For example,</p> <pre><code>In [35]: from scipy.spatial.distance import pdist In [36]: pdist(matr, metric='cosine') Out[36]: array([ 0.65811827, 0.5545646 , 0.1752139 , 0.24407105, 0.72499045, 0.36039785, 0.27625314, 0.49748109, 0.41498206, 0.2799177 , 0.76429774, 0.37117185, 0.41808563, 0.5765951 , 0.67661917]) </code></pre> <p>Note that the first five values returned by <code>pdist</code> are the same values returned above using <code>cdist</code>.</p> <p>For further explanation of the return value of <code>pdist</code>, see <a href="http://stackoverflow.com/questions/13079563/how-does-condensed-distance-matrix-work-pdist">How does condensed distance matrix work? (pdist)</a></p>
2
2016-07-25T17:00:17Z
[ "python", "numpy", "matrix", "scipy", "cosine" ]
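For reference, the number that scipy's `'cosine'` metric produces is 1 minus the normalized dot product. A plain-Python sketch of the same formula (the sample rows are made up; use `cdist`/`pdist` in practice):

```python
import math

def cosine_distance(u, v):
    # scipy's 'cosine' metric: 1 - (u . v) / (|u| * |v|)
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (norm_u * norm_v)

d = [[1, 2, 0],
     [2, 4, 0],
     [0, 0, 3]]

# Distance from the first row to every other row, as in the question.
dists = [cosine_distance(d[0], row) for row in d[1:]]
print(dists)  # parallel rows give ~0.0, orthogonal rows give 1.0
```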
Calculating cosine distance between the rows of matrix
38,573,213
<p>I'm trying to calculate cosine distance in Python between the rows in a matrix, and I have a couple of questions. So I'm creating matrix matr and populating it from the lists, then reshaping it for analysis purposes:</p> <pre><code>s = [] for i in range(len(a)): for j in range(len(b_list)): s.append(a[i].count(b_list[j])) matr = np.array(s) d = matr.reshape((22, 254)) </code></pre> <p>The output of d gives me something like:</p> <pre><code>array([[0, 0, 0, ..., 0, 0, 0], [2, 0, 0, ..., 1, 0, 0], [2, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [1, 0, 0, ..., 0, 0, 0]]) </code></pre> <p>Then I want to use the scipy.spatial.distance.cosine package to calculate cosine from first row to every other else in the d matrix. How can I perform that? Should I use a for loop for that? I don't have much experience with matrix and array operations.</p> <p>So how can I use a for loop for the second argument (d[1], d[2], and so on) in this construction, so I don't have to launch it manually every time:</p> <pre><code>from scipy.spatial.distance import cosine x=cosine (d[0], d[6]) </code></pre>
1
2016-07-25T16:38:21Z
38,573,620
<p>You can just use a simple for loop with <code>scipy.spatial.distance.cosine</code>:</p> <pre><code>dists = [] for row in matr: dists.append(scipy.spatial.distance.cosine(matr[0,:], row)) </code></pre>
2
2016-07-25T17:02:33Z
[ "python", "numpy", "matrix", "scipy", "cosine" ]
Calculating cosine distance between the rows of matrix
38,573,213
<p>I'm trying to calculate cosine distance in Python between the rows in a matrix, and I have a couple of questions. So I'm creating matrix matr and populating it from the lists, then reshaping it for analysis purposes:</p> <pre><code>s = [] for i in range(len(a)): for j in range(len(b_list)): s.append(a[i].count(b_list[j])) matr = np.array(s) d = matr.reshape((22, 254)) </code></pre> <p>The output of d gives me something like:</p> <pre><code>array([[0, 0, 0, ..., 0, 0, 0], [2, 0, 0, ..., 1, 0, 0], [2, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [1, 0, 0, ..., 0, 0, 0]]) </code></pre> <p>Then I want to use the scipy.spatial.distance.cosine package to calculate cosine from first row to every other else in the d matrix. How can I perform that? Should I use a for loop for that? I don't have much experience with matrix and array operations.</p> <p>So how can I use a for loop for the second argument (d[1], d[2], and so on) in this construction, so I don't have to launch it manually every time:</p> <pre><code>from scipy.spatial.distance import cosine x=cosine (d[0], d[6]) </code></pre>
1
2016-07-25T16:38:21Z
38,575,393
<p>Here's how you might calculate it easily by hand:</p> <pre><code>from numpy import array as a from numpy.random import random_integers as randi from numpy.linalg.linalg import norm from numpy import set_printoptions M = randi(10, size=a([5,5])); # create demo matrix # dot products of rows against themselves DotProducts = M.dot(M.T); # kronecker product of row norms NormKronecker = a([norm(M, axis=1)]) * a([norm(M, axis=1)]).T; CosineSimilarity = DotProducts / NormKronecker CosineDistance = 1 - CosineSimilarity set_printoptions(precision=2, suppress=True) print CosineDistance </code></pre> <p>Output:</p> <pre><code>[[-0. 0.15 0.1 0.11 0.22] [ 0.15 0. 0.15 0.13 0.06] [ 0.1 0.15 0. 0.15 0.14] [ 0.11 0.13 0.15 0. 0.18] [ 0.22 0.06 0.14 0.18 -0. ]] </code></pre> <p>This matrix is e.g. interpreted as "the cosine distance between row 3 against row 2 (or, equally, row 2 against row 3) is 0.15".</p>
1
2016-07-25T18:51:38Z
[ "python", "numpy", "matrix", "scipy", "cosine" ]
Pulling the Next Value Under the Same Column Header
38,573,287
<p>I am using Python's <code>csv</code> module to read ".csv" files and parse them out to MySQL insert statements. In order to maintain syntax for the statements I need to determine the type of the values listed under each column header. However, I have run into a problem as some of the rows start with a <code>null</code> value.</p> <p>How can I use the <code>csv</code> module to return the next value under the same column until the value returned is <strong>not</strong> <code>null</code>? This does not have to be accomplished with the <code>csv</code> module; I am open to all solutions. After looking through the documentation I am not sure that the <code>csv</code> module is capable of doing what I need. I was thinking something along these lines:</p> <pre><code>if rowValue == '': rowValue = nextRowValue(row) </code></pre> <p>Obviously the <code>next()</code> method simply returns the next value in the csv "list" rather than returning the next value under the same column like I want, and the <code>nextRowValue()</code> object does not exist. I am just demonstrating the idea.</p> <p><em>Edit:</em> Just to add some context, here is an example of what I am doing and the problems I am running into.</p> <p>If the table is as follows:</p> <pre><code>ID Date Time Voltage Current Watts 0 7/2 11:15 0 0 0 7/2 11:15 0 0 0 7/2 11:15 380 1 380 </code></pre> <p>And here is a very slimmed down version of the code that I am using to read the table, get the column headers and determine the type of the values from the first row. Then put them into separate lists and then use <code>deque</code> to add them to insert statements in a separate function. Not all of the code is featured and I might have left some crucial parts out, but here is an example:</p> <pre><code>import csv, os from collections import deque def findType(rowValue): if rowValue == '': rowValue = if '.' in rowValue: try: rowValue = type(float(rowValue)) except ValueError: pass else: try: rowValue = type(int(rowValue)) except: rowValue = type(str(rowValue)) return rowValue def createTable(): inputPath = 'C:/Users/user/Desktop/test_input/' outputPath = 'C:/Users/user/Desktop/test_output/' for file in os.listdir(inputPath): if file.endswith('.csv'): with open(inputPath + file) as inFile: with open(outputPath + file[:-4] + '.sql', 'w') as outFile: csvFile = csv.reader(inFile) columnHeader = next(csvFile) firstRow = next(csvFile) cList = deque(columnHeader) rList = deque(firstRow) hList = [] for value in firstRow: valueType = findType(firstRow) if valueType == str: try: val = '`' + cList.popleft() + 'varchar(255)' hList.append(val) except IndexError: pass etc. </code></pre> <p>And so forth for the rest of the value types returned from the findType function. The problem is that when adding the values to rList using <code>deque</code> it skips over <code>null</code> values so that the number of items in the list for column headers would be 6, for example, and the number of items in the list for rows would be 5 so they would not line up.</p> <p>A somewhat drawn out solution would be to scan each row for <code>null</code> values until one was found using something like this:</p> <pre><code>for value in firstRow: if value == '': firstRow = next(csvFile) </code></pre> <p>And continuing this loop until a row was found with no <code>null</code> values. However this seems like a somewhat drawn out solution that would slow down the program, hence why I am looking for a different solution.</p>
0
2016-07-25T16:42:38Z
38,574,470
<p>Rather than pull the next value from the column as the title suggests, I found it easier to just skip rows that contained any <code>null</code> values. There are two different ways to do this:</p> <p>Use a loop to scan each row and see if it contains a <code>null</code> value, and jump to the next row until one is found that contains no <code>null</code> values. For example:</p> <pre><code>tempRow = next(csvFile) for value in tempRow: if value == '': tempRow = next(csvFile) else: row = tempRow </code></pre>
1
2016-07-25T17:56:41Z
[ "python", "mysql", "python-3.x", "csv" ]
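The scan-for-blanks idea above can be written so that it reliably stops at the first fully populated row. A self-contained sketch — the sample data is hypothetical, modeled on the table in the question:

```python
import csv
import io

# Stand-in for the real file: the first two data rows contain blanks.
raw = io.StringIO(
    "ID,Voltage,Watts\n"
    ",0,0\n"
    ",0,0\n"
    "1,380,380\n"
)

reader = csv.reader(raw)
header = next(reader)

# Skip ahead until we hit a row with no empty cells; that row is the
# one whose values can safely be probed for int/float/str types.
first_complete = next(row for row in reader
                      if all(cell != "" for cell in row))
print(first_complete)  # ['1', '380', '380']
```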
Terminology: A user-defined function object attribute?
38,573,353
<p>According to Python 2.7.12 documentation, <a href="https://docs.python.org/2.7/reference/datamodel.html#the-standard-type-hierarchy" rel="nofollow">User-defined methods</a>:</p> <blockquote> <p>User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a <strong>user-defined function object, an unbound user-defined method object, or a class method object</strong>. When the attribute is a user-defined method object, a new method object is only created if the class from which it is being retrieved is the same as, or a derived class of, the class stored in the original method object; otherwise, the original method object is used as it is.</p> </blockquote> <p>I know that everything in Python is an object, so a "user-defined method" must be identical to a "user-defined method <em>object</em>". However, I can't understand why there is a "user-defined function object attribute". Say, in the following code:</p> <pre><code>class Foo(object): def meth(self): pass </code></pre> <p><code>meth</code> is a function defined inside a class body, and thus a <a href="https://docs.python.org/2.7/glossary.html" rel="nofollow">method</a>. So why can we have a "user-defined function object attribute"? Aren't all attributes defined inside a class body?</p> <hr> <p><strong>Bouns question:</strong> Provide some examples illustrating how a user-defined method object is <em>created</em> by <em>getting</em> an attribute of a class. Isn't objects <em>defined</em> in their class definition? (I know methods can be assigned to a class instance, but that's monkey patching.)</p> <p>I'm asking for help because this part of document is really really confusing to me, a programmer who only knows C, since Python is such a magical language that supports both functional programming and object-oriented programmer, which I haven't mastered yet. I've done a lot of search, but still can't figure that out.</p>
0
2016-07-25T16:46:53Z
38,573,851
<p>When you do</p> <pre><code>class Foo(object): def meth(self): pass </code></pre> <p>you are defining a class <code>Foo</code> with a method <code>meth</code>. However, when this class definition is executed, no method object is created to represent the method. The <code>def</code> statement creates an ordinary function object.</p> <p>If you then do</p> <pre><code>Foo.meth </code></pre> <p>or</p> <pre><code>Foo().meth </code></pre> <p>the attribute lookup finds the function object, but the function object is not used as the value of the attribute. Instead, using the <a href="https://docs.python.org/2/reference/datamodel.html#implementing-descriptors" rel="nofollow">descriptor protocol</a>, Python calls the <code>__get__</code> method of the function object to construct a method object, and that method object is used as the value of the attribute for that lookup. For <code>Foo.meth</code>, the method object is an unbound method object, which mostly behaves like the function you defined, but with some extra type checking. For <code>Foo().meth</code>, the method object is a bound method object, which already knows what <code>self</code> is.</p> <hr> <p>This is why <code>Foo().meth()</code> doesn't complain about a missing <code>self</code> argument; you pass 0 arguments to the method object, which then prepends <code>self</code> to the (empty) argument list and passes the arguments on to the underlying function object. If <code>Foo().meth</code> evaluated to the <code>meth</code> function directly, you would have to pass it <code>self</code> explicitly.</p> <hr> <p>In Python 3, <code>Foo.meth</code> doesn't create a method object; the function's <code>__get__</code> still gets called, but it returns the function directly, since they decided unbound method objects weren't useful. <code>Foo().meth</code> still creates a bound method object, though.</p>
1
2016-07-25T17:17:53Z
[ "python", "oop", "methods", "language-lawyer", "terminology" ]
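The lookup machinery the answer describes can be observed directly. This sketch uses Python 3, where class access returns the plain function and only instance access builds a method object; in Python 2 you would additionally see unbound method objects:

```python
import types

class Foo:
    def meth(self):
        return "called"

f = Foo()

# The class body stored an ordinary function object...
func = Foo.__dict__["meth"]
print(type(func).__name__)                 # function

# ...and attribute access on an instance invokes the descriptor
# protocol: function.__get__ builds a bound method on the fly.
bound = func.__get__(f, Foo)
print(isinstance(bound, types.MethodType))  # True
print(bound() == f.meth() == "called")      # True: self is already bound
```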
Use __get__, __set__ with dictionary item?
38,573,405
<p>Is there a way to make a dictionary of functions that use set and get statements and then use them as set and get functions?</p> <pre><code>class thing(object): def __init__(self, thingy): self.thingy = thingy def __get__(self, instance, owner): return self.thingy def __set__(self, instance, value): self.thingy += value theDict = {"bob": thing(5), "suzy": thing(2)} theDict["bob"] = 10 </code></pre> <p>The wanted result is that 10 goes into the set function and adds to the existing 5:</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 15 </code></pre> <p>The actual result is that the dictionary replaces the entry with the numeric value:</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 10 </code></pre> <p>The reason I can't just make a function like theDict["bob"].add(10) is that this builds off an existing, already well-working function that uses the set and get. The case I'm working with is an edge case, and it wouldn't make sense to reprogram everything to make it work for this one case.</p> <p>I need some means to store instances of this set/get thingy that is accessible but doesn't create some layer of depth that might break existing references.</p> <p>Please don't ask for actual code. It'd take pages of code to encapsulate the problem.</p>
0
2016-07-25T16:49:47Z
38,573,617
<p>No, because to execute <code>theDict["bob"] = 10</code>, the Python runtime doesn't call any methods at all of the previous value of <code>theDict["bob"]</code>. It's not like when <code>myObject.mydescriptor = 10</code> calls the descriptor setter.</p> <p>Well, maybe it calls <code>__del__</code> on the previous value if the refcount hits zero, but let's not go there!</p> <p>If you want to do something like this then you need to change the way the dictionary works, not the contents. For example you could subclass <code>dict</code> (with the usual warnings that you're Evil, Bad and Wrong to write a non-Liskov-substituting derived class). Or you could from scratch implement an instance of <code>collections.MutableMapping</code>. But I don't think there's any way to hijack the normal operation of <code>dict</code> using a special value stored in it.</p>
1
2016-07-25T17:02:27Z
[ "python", "dictionary" ]
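The dict-subclass route this answer mentions can be sketched in a few lines — with the same caveat that subclassing `dict` is easy to get subtly wrong (in CPython, `update()` and the constructor bypass the overridden `__setitem__`):

```python
class AccumulatingDict(dict):
    """Hypothetical mapping whose assignments add to existing values."""
    def __setitem__(self, key, value):
        # Plain assignment never consults the old value, so the
        # interception has to live in the mapping itself.
        if key in self:
            value = self[key] + value
        super().__setitem__(key, value)

d = AccumulatingDict()
d["bob"] = 5      # first assignment stores the value as-is
d["bob"] = 10     # later assignments accumulate
print(d["bob"])   # 15
```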
Use __get__, __set__ with dictionary item?
38,573,405
<p>Is there a way to make a dictionary of functions that use set and get statements and then use them as set and get functions?</p> <pre><code>class thing(object): def __init__(self, thingy): self.thingy = thingy def __get__(self, instance, owner): return self.thingy def __set__(self, instance, value): self.thingy += value theDict = {"bob": thing(5), "suzy": thing(2)} theDict["bob"] = 10 </code></pre> <p>The wanted result is that 10 goes into the set function and adds to the existing 5:</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 15 </code></pre> <p>The actual result is that the dictionary replaces the entry with the numeric value:</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 10 </code></pre> <p>The reason I can't just make a function like theDict["bob"].add(10) is that this builds off an existing, already well-working function that uses the set and get. The case I'm working with is an edge case, and it wouldn't make sense to reprogram everything to make it work for this one case.</p> <p>I need some means to store instances of this set/get thingy that is accessible but doesn't create some layer of depth that might break existing references.</p> <p>Please don't ask for actual code. It'd take pages of code to encapsulate the problem.</p>
0
2016-07-25T16:49:47Z
38,574,345
<p><code>theDict["bob"] = 10</code> simply assigns <code>10</code> to the key <code>bob</code> in <code>theDict</code>; it never consults the old value. You should first read about the magic methods <code>__get__</code> and <code>__set__</code>: <a href="https://docs.python.org/2.7/howto/descriptor.html" rel="nofollow">https://docs.python.org/2.7/howto/descriptor.html</a> Using a class might be easier than a dict.</p>
0
2016-07-25T17:49:11Z
[ "python", "dictionary" ]
Use __get__, __set__ with dictionary item?
38,573,405
<p>Is there a way to make a dictionary of functions that use set and get statements and then use them as set and get functions?</p> <pre><code>class thing(object): def __init__(self, thingy) self.thingy = thingy def __get__(self,instance,owner): return thingy def __set__(self,instance,value): thingy += value theDict = {"bob":thing(5), "suzy":thing(2)} theDict["bob"] = 10 </code></pre> <p>wanted result is that 10 goes into the set function and adds to the existing 5</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 15 </code></pre> <p>actual result is that the dictionary replaces the entry with the numeric value</p> <pre><code>print theDict["bob"] &gt;&gt;&gt; 10 </code></pre> <p>Why can't I just make a function like.. theDict["bob"].add(10) is because it's building off an existing and already really well working function that uses the set and get. The case I'm working with is an edge case and wouldn't make sense to reprogram everything to make work for this one case.</p> <p>I need some means to store instances of this set/get thingy that is accessible but doesn't create some layer of depth that might break existing references. </p> <p>Please don't ask for actual code. It'd take pages of code to encapsulate the problem.</p>
0
2016-07-25T16:49:47Z
38,574,478
<p>You could do it if you can (also) use a specialized version of the dictionary which is aware of your <code>Thing</code> class and handles it separately:</p> <pre><code>class Thing(object): def __init__(self, thingy): self._thingy = thingy def _get_thingy(self): return self._thingy def _set_thingy(self, value): self._thingy += value thingy = property(_get_thingy, _set_thingy, None, "I'm a 'thingy' property.") class ThingDict(dict): def __getitem__(self, key): if key in self and isinstance(dict.__getitem__(self, key), Thing): return dict.__getitem__(self, key).thingy else: return dict.__getitem__(self, key) def __setitem__(self, key, value): if key in self and isinstance(dict.__getitem__(self, key), Thing): dict.__getitem__(self, key).thingy = value else: dict.__setitem__(self, key, value) theDict = ThingDict({"bob": Thing(5), "suzy": Thing(2), "don": 42}) print(theDict["bob"]) # --&gt; 5 theDict["bob"] = 10 print(theDict["bob"]) # --&gt; 15 # non-Thing value print(theDict["don"]) # --&gt; 42 theDict["don"] = 10 print(theDict["don"]) # --&gt; 10 </code></pre>
1
2016-07-25T17:57:01Z
[ "python", "dictionary" ]
How can I check if each row in a matrix is equal to an array and return a Boolean array containing the result?
38,573,577
<p>How can I check if each row in a matrix is equal to an array and return a Boolean array containing the result using NumPy? e.g.</p> <pre><code>a = np.array([[1,2,3],[4,5,6],[7,8,9]]) b = np.array([4,5,6]) # Expected Result: [False,True,False] </code></pre>
0
2016-07-25T16:59:51Z
38,573,578
<p>The neatest way I've found of doing this is:</p> <pre><code>result = np.all(a==b, axis=1) </code></pre>
3
2016-07-25T16:59:51Z
[ "python", "arrays", "numpy", "matrix" ]
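Walking through the broadcasting this answer relies on (a sketch, assuming NumPy is installed):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
b = np.array([4, 5, 6])

# a == b broadcasts b against every row of a, producing a boolean
# matrix; np.all(..., axis=1) then requires a full-row match.
elementwise = a == b
result = np.all(elementwise, axis=1)
print(elementwise.tolist())
print(result.tolist())   # [False, True, False]
```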