title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Unique indexing of a dataframe pandas | 38,350,928 | <p>I have to merge data from several Excel files and make a data frame. When I do that, the indices of the rows in the dataframe are not unique, as shown below:</p>
<pre><code> a
0 green
1 blue
2 red
0 orange
1 black
2 yellow
</code></pre>
<p>Here I am trying to merge 2 different Excel files. One with the d... | 0 | 2016-07-13T11:49:49Z | 38,351,183 | <p>If you were merging your Excel files using the <code>concat()</code> function, then use the <code>ignore_index=True</code> parameter like this:</p>
<pre><code>df = pd.concat([df1,df2], ignore_index=True)
</code></pre>
<p>so you won't need to reset the index later on.</p>
| 1 | 2016-07-13T12:00:17Z | [
"python",
"pandas",
"indexing",
"dataframe"
] |
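A minimal, self-contained sketch of the accepted approach; the frame contents are invented to mirror the colour columns in the question:

```python
import pandas as pd

# Two frames standing in for the data read from two Excel files
df1 = pd.DataFrame({"a": ["green", "blue", "red"]})
df2 = pd.DataFrame({"a": ["orange", "black", "yellow"]})

# Without ignore_index the original indices 0,1,2 repeat;
# with it, pandas builds a fresh 0..n-1 RangeIndex.
df = pd.concat([df1, df2], ignore_index=True)
print(df.index.tolist())  # [0, 1, 2, 3, 4, 5]
```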
class method not running inside the code | 38,351,090 | <p>I am coding a simple game and I am having the following problem:</p>
<pre><code>if pidgeon.rect.y > 360:
pygame.mixer.music.stop()
pidgeon.electrocute()
electrocute.play()
time.sleep(2)
showGameOverScreen(score, total_score)
</code></pre>
<p>The method that is not working is:</p>
<pre><code>pid... | -1 | 2016-07-13T11:56:20Z | 38,351,566 | <p>try to call it like this. this is a small example based on user input value.</p>
<pre><code>class fly:
def mymethod(self):
print("canada")
def testmethod():
print("this should be greater")
for i in range(0,3):
userInput = int(input("Enter any number :"))
if(userInput > 5):
prin... | 0 | 2016-07-13T12:19:40Z | [
"python",
"python-2.7",
"pygame"
] |
How can I get an asterisk (*) instead of the number 0? | 38,351,184 | <p>How can I get an asterisk (*) instead of the number 0?</p>
<p>emp.csv</p>
<pre><code>import pandas as pd
import io
temp=u"""index empno ename job mgr hiredate sal comm deptno
0, 7839, KING, PRESIDENT, 0, 1981-11-17, 5000, 0, 10
1, 7698, BLAKE, MANAGER, 7839, 1981-0... | 1 | 2016-07-13T12:00:17Z | 38,384,657 | <p>For future reference (and happy I could help you):</p>
<pre><code>emp['sal'] = emp['sal'].astype(str)
emp['sal'] = emp['sal'].str.replace('0', '*')
</code></pre>
<p>To explain: first we cast the columns as a string (needed to do the replacements). Then we use a nifty pandas operation ".str" that allows you to use ... | 2 | 2016-07-14T21:36:35Z | [
"python",
"pandas"
] |
subprocess open ('source venv/bin/activate'), no such file? | 38,351,204 | <p>I want to get into a virtual environment in Python files, but it raises "no such file".</p>
<pre><code>import subprocess
subprocess.Popen(['source', '/Users/XX/Desktop/mio/worker/venv/bin/activate'])
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "/Users/Ru/Desktop/mio/worker/run.py", line 3,... | 0 | 2016-07-13T12:01:26Z | 38,354,651 | <p>I think your code doesn't work because you are separating the 'source' command from the virtualenv path argument, from the <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>"Note in particular that options (such as -input) and argume... | 0 | 2016-07-13T14:34:10Z | [
"python",
"subprocess",
"virtualenv"
] |
subprocess open ('source venv/bin/activate'), no such file? | 38,351,204 | <p>I want to get into a virtual environment in Python files, but it raises "no such file".</p>
<pre><code>import subprocess
subprocess.Popen(['source', '/Users/XX/Desktop/mio/worker/venv/bin/activate'])
</code></pre>
<blockquote>
<p>Traceback (most recent call last):
File "/Users/Ru/Desktop/mio/worker/run.py", line 3,... | 0 | 2016-07-13T12:01:26Z | 38,508,581 | <p>There is another simpler way to do what you want.
If you want a python script to use a virtualenv you can always use the python interpreter from the virualenv itself.</p>
<p>/Users/Ru/Desktop/mio/worker/venv/bin/python my_python_file.py</p>
<p>This will run the my_python_file.py with the properties/libraries of th... | 0 | 2016-07-21T15:39:45Z | [
"python",
"subprocess",
"virtualenv"
] |
creating a pandas dataframe from a list of image files | 38,351,224 | <p>I am trying to create a pandas dataframe from a list of image files (.png files)</p>
<pre><code>samples = []
img = misc.imread('a.png')
X = img.reshape(-1, 3)
samples.append(X)
</code></pre>
<p>I added multiple .png files in samples like this. I am then trying to create a pandas dataframe from this.</p>
<pre><cod... | 0 | 2016-07-13T12:02:42Z | 38,352,259 | <p>If you want to create a DataFrame from a list, the easiest way to do this is to create a <code>pandas.Series</code>, like the following example:</p>
<pre><code>import pandas as pd
samples = ['a','b','c']
s = pd.Series(samples)
print s
</code></pre>
<p>output:</p>
<hr>
<blockquote>
<p>0 a<br>
1 b<br>
... | 0 | 2016-07-13T12:50:04Z | [
"python",
"pandas"
] |
creating a pandas dataframe from a list of image files | 38,351,224 | <p>I am trying to create a pandas dataframe from a list of image files (.png files)</p>
<pre><code>samples = []
img = misc.imread('a.png')
X = img.reshape(-1, 3)
samples.append(X)
</code></pre>
<p>I added multiple .png files in samples like this. I am then trying to create a pandas dataframe from this.</p>
<pre><cod... | 0 | 2016-07-13T12:02:42Z | 39,357,504 | <p>you can use:</p>
<pre><code>df = pd.DataFrame.from_records(samples)
</code></pre>
| 1 | 2016-09-06T20:41:04Z | [
"python",
"pandas"
] |
Python - Multiple decorators to a method malfunctioning | 38,351,268 | <p>I am trying to learn decorators and have come across a strange condition while applying multiple decorators to a method. I have two decorators <code>@makeupper</code> and <code>@decorator_maker_with_arguments</code>.</p>
<p><code>@decorator_maker_with_arguments</code> demonstrates how the arguments are accessed inside ... | 0 | 2016-07-13T12:04:28Z | 38,351,719 | <blockquote>
<p>but I see @makeupper malfunctioning. It prints <code>None</code></p>
</blockquote>
<p><code>makeupper</code> isn't malfunctioning. The outer decorator <code>decorator_maker_with_arguments</code> isn't calling the <code>wrapper</code> of <code>makeupper</code>.</p>
<p>And then you have a <code>None</... | 1 | 2016-07-13T12:26:42Z | [
"python",
"python-decorators"
] |
Python - Multiple decorators to a method malfunctioning | 38,351,268 | <p>I am trying to learn decorators and have come across a strange condition while applying multiple decorators to a method. I have two decorators <code>@makeupper</code> and <code>@decorator_maker_with_arguments</code>.</p>
<p><code>@decorator_maker_with_arguments</code> demonstrates how the arguments are accessed inside ... | 0 | 2016-07-13T12:04:28Z | 38,351,962 | <p>You can add a return statement inside the decorator's wrapper function.</p>
<p>Like following:</p>
<pre><code>def makeupper(some_fun):
def wrapper(arg1, arg2):
return some_fun(arg1, arg2).upper()
return wrapper
def decorator_maker_with_arguments(decorator_arg1, decorator_arg2):
"""Decorator mak... | 1 | 2016-07-13T12:37:27Z | [
"python",
"python-decorators"
] |
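The fix in the second answer can be shown end to end with a single decorator; the greeting function and its arguments are made up for the illustration:

```python
def makeupper(some_fun):
    def wrapper(arg1, arg2):
        # Return the inner function's result so callers
        # (and any outer decorators) actually receive it,
        # instead of the implicit None.
        return some_fun(arg1, arg2).upper()
    return wrapper

@makeupper
def greet(first, last):
    return "hello {} {}".format(first, last)

print(greet("ada", "lovelace"))  # HELLO ADA LOVELACE
```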
Spyder won't plot figures inline | 38,351,484 | <p>I have somehow messed up my Spyder configuration and the plots are no longer shown inline (in the IPython console). I followed the steps, described here:</p>
<p><a href="http://stackoverflow.com/questions/24002076/spyder-plot-inline">Spyder Plot Inline</a></p>
<p>But I don't want to reset my configuration in order... | 0 | 2016-07-13T12:15:45Z | 38,351,549 | <p>Ok, this was a simple mistake. I didn't actually run the code in the IPython console but in the normal Python console.</p>
| 0 | 2016-07-13T12:18:48Z | [
"python",
"matplotlib",
"plot",
"ide",
"spyder"
] |
List of parent and child into nested dictionary | 38,351,515 | <p>I have a list that I'd like to transform into a nested dictionary. The first element of the list is the parent, the second the child. Can I do this recursively without having to continue creating helper lists for each level? I feel so dumb not understanding this.</p>
<pre><code>relations = [["basket", "money"],
... | -1 | 2016-07-13T12:17:12Z | 38,351,706 | <p>Just create a dictionary that maps a name to the corresponding dictionary:</p>
<pre><code>items = {}
for parent, child in relations:
parent_dict = items.setdefault(parent, {})
child_dict = items.setdefault(child, {})
if child not in parent_dict:
parent_dict[child] = child_dict
result = items['b... | 1 | 2016-07-13T12:26:10Z | [
"python",
"dictionary",
"nested"
] |
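The `setdefault` trick from this answer works because every name maps to one shared dictionary object, so nesting emerges without recursion. A runnable sketch with a small made-up `relations` list:

```python
relations = [["basket", "money"], ["money", "coins"], ["basket", "fruit"]]

items = {}
for parent, child in relations:
    parent_dict = items.setdefault(parent, {})
    child_dict = items.setdefault(child, {})
    if child not in parent_dict:
        parent_dict[child] = child_dict

# Each node's dict is shared, so updating "money" later
# would also be visible under items["basket"]["money"].
print(items["basket"])  # {'money': {'coins': {}}, 'fruit': {}}
```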
Search a text file with another text file | 38,351,543 | <p>I thought I found a similar question here (<a href="http://stackoverflow.com/questions/19933813/python-search-a-file-for-text-using-input-from-another-file">Python search a file for text using input from another file</a>) but that doesn't seem to work for me, the print is empty, as in none found, but there is defini... | 1 | 2016-07-13T12:18:37Z | 38,351,787 | <p>This is how I would do it:</p>
<pre><code>with open(r'C:\Users\evkouni\Desktop\file_sample.txt', 'r') as f_in:
content = f_in.readlines()
add_dict = {}
for line in content:
add_dict[line.split()[0]] = line.split()[1]
with open(r'C:\Users\evkouni\Desktop\target.txt', 'r') as f_t:
content = f... | 2 | 2016-07-13T12:29:49Z | [
"python"
] |
Search a text file with another text file | 38,351,543 | <p>I thought I found a similar question here (<a href="http://stackoverflow.com/questions/19933813/python-search-a-file-for-text-using-input-from-another-file">Python search a file for text using input from another file</a>) but that doesn't seem to work for me, the print is empty, as in none found, but there is defini... | 1 | 2016-07-13T12:18:37Z | 38,351,893 | <p>Another solution is:</p>
<pre><code>d_file = 'Data\data.txt'
s_file = 'Data\source.txt'
keywords = set()
with open(s_file) as list_file:
for line in list_file:
if line.strip():
keywords.add(line.strip())
data = set()
with open(d_file) as master_file:
for line in master_file:
... | 1 | 2016-07-13T12:34:40Z | [
"python"
] |
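The set-based approach in the second answer can be demonstrated without touching the filesystem; the keyword and data lines below are invented stand-ins for the two text files:

```python
# Stand-ins for the contents of the two files in the question
source_lines = ["apple", "banana", "", "cherry"]
data_lines = ["1 apple red", "2 durian green", "3 cherry red"]

# Build a set of non-empty keywords for O(1) membership tests
keywords = {line.strip() for line in source_lines if line.strip()}

# Keep each data line containing at least one keyword
matches = [line for line in data_lines
           if any(word in keywords for word in line.split())]
print(matches)  # ['1 apple red', '3 cherry red']
```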
In odoo how to access rights for employee and manager are working for employee form? | 38,351,603 | <p>In Odoo, if I log in with an employee it shows employee form as a read-only and if I log in with the admin then I am able to edit all data. How is it managed?</p>
<p>I wanted the same functionality for a custom user which I have added. If I log in with that user, then some of the fields of the employee form should be ... | -1 | 2016-07-13T12:21:33Z | 38,365,480 | <p>You can configure your user group access rights in the Administrator Menu > Users > Groups and set Access Rights as you want.</p>
| 0 | 2016-07-14T04:25:51Z | [
"python",
"postgresql",
"openerp"
] |
ZMQ: REQ/REP fails with multiple concurrent requests and polling | 38,351,626 | <p>I have run into a strange behaviour with ZeroMQ that I have been trying to debug the whole day now.</p>
<p>Here is a minimal example script which reproduces the problem. It can be run with Python3.</p>
<p>One server with a REP socket is started and five clients with REP sockets connect to it basically simultaneous... | 1 | 2016-07-13T12:22:33Z | 38,991,550 | <p>For me the problem seems to be that you are "shutting down" the server before all clients have received their reply. So I guess it's not the server that's blocking but the clients.</p>
<p>You can solve this by either waiting some time before you set the <code>stop_flag</code>:</p>
<pre><code>sleep(5)
stop_flag = Tru... | 0 | 2016-08-17T08:23:02Z | [
"python",
"zeromq",
"pyzmq"
] |
Script Python using plone.api to create File appear error WrongType when set a file | 38,351,633 | <p>Dears,</p>
<p>I'm creating a script python to mass upload files in Plone site, the installation is UnifiedInstaller Plone 4.3.10.</p>
<p>This script read a txt, and this txt have separation with semicolon, the error appear when set up a file in a new created item.</p>
<p>Bellow the Script.</p>
<pre><code>from z... | 2 | 2016-07-13T12:23:02Z | 38,352,200 | <p>You need to pass the filename as unicode.</p>
<pre><code>file_obj.file = NamedBlobFile(
data=open(pdf_path, 'r').read(),
contentType='application/pdf',
filename=unicode(file_obj.id), # needs to be unicode
)
</code></pre>
<p>More Info in the plone.namedfile docu --> <a href="https://github.com/plone/pl... | 6 | 2016-07-13T12:47:49Z | [
"python",
"file",
"plone"
] |
Error using uic to convert .ui file to .py file in Python | 38,351,729 | <p>I am trying to write a program in python that will convert a .ui file in the same folder (created in Qt Designer) into a .py file. This is the code for this extremely basic program:</p>
<pre><code># -*- coding: utf-8 -*-
from PyQt4 import uic
with open('exampleinterface.py', 'w') as fout:
uic.compileUi('examp... | 2 | 2016-07-13T12:27:01Z | 38,370,224 | <p>Thanks to ekhumoro and mwormser. The problem was indeed the .ui file.
I retried it with a new .ui file and everything worked fine.</p>
| 0 | 2016-07-14T09:16:26Z | [
"python",
"pyqt",
"qt-designer"
] |
pandas: to_csv with a numeric range of named columns? | 38,351,736 | <p>Is it possible through pd.to_csv to provide a numeric range to the columns argument, even if the headers are labeled with strings?</p>
<p>Sample dataframe:</p>
<pre><code> January February March April May June July August September
0 67 43 48 58 82 102 118 114 82
1 ... | 2 | 2016-07-13T12:27:44Z | 38,351,971 | <p>You can access the names of the columns of the dataframe as a series and then slice that, eg:</p>
<pre><code>df.to_csv('filename.csv', columns=df.columns[:8])
</code></pre>
| 2 | 2016-07-13T12:37:53Z | [
"python",
"pandas"
] |
pandas: to_csv with a numeric range of named columns? | 38,351,736 | <p>Is it possible through pd.to_csv to provide a numeric range to the columns argument, even if the headers are labeled with strings?</p>
<p>Sample dataframe:</p>
<pre><code> January February March April May June July August September
0 67 43 48 58 82 102 118 114 82
1 ... | 2 | 2016-07-13T12:27:44Z | 38,352,412 | <p>alternative solution:</p>
<pre><code>df.ix[:, :7].to_csv('filename.csv')
</code></pre>
| 1 | 2016-07-13T12:56:26Z | [
"python",
"pandas"
] |
Using the "allow" keyword in Scrapy's LinkExtractor | 38,351,744 | <p>I'm trying to scrape the website <a href="http://www.funda.nl/koop/amsterdam/" rel="nofollow">http://www.funda.nl/koop/amsterdam/</a>, which lists houses for sale in Amsterdam. The main page contains many links, some of which are links to individual houses for sale. I would like to ultimately follow these links and ... | 0 | 2016-07-13T12:28:02Z | 38,351,819 | <p>The problem was that the regular expression was in the definition of the <code>rules</code> parameter, but not in the definition of <code>le1</code>. Adding it to the definition of <code>le1</code> made the output as expected.</p>
| 0 | 2016-07-13T12:31:44Z | [
"python",
"scrapy"
] |
Pandas: group some data | 38,351,758 | <p>I have dataframe </p>
<pre><code> date id
0 12-12-2015 123
1 13-12-2015 123
2 15-12-2015 123
3 16-12-2015 123
4 18-12-2015 123
5 10-12-2015 456
6 13-12-2015 456
7 15-12-2015 456
</code></pre>
<p>And I want to get </p>
<pre><code> id date count
0 123 10-12-2015 0
1 123 11... | 1 | 2016-07-13T12:28:51Z | 38,355,364 | <p>You can achieve what you want by reindexing in the aggregation of each group and filling <code>NaN</code>s with <code>0</code>.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO("""\
date id
0 12-12-2015 123
1 13-12-2015 123
2 15-12-2015 123
3 16-12-2015 123
4 18-12-2015 123
5 10-12-2015... | 1 | 2016-07-13T15:03:26Z | [
"python",
"pandas"
] |
Conversion from numpy array float32 to numpy array float64 | 38,351,778 | <p>I am trying to implement randomforest in Python. While running the code I got this error. Although I had already converted from <code>float32</code> to <code>float64</code> using : </p>
<pre><code>x_arr = np.array(train_df, dtype='float64')
Traceback(most recent call last):
File "C:\Python27\randomforest.py", li... | 1 | 2016-07-13T12:29:40Z | 38,351,941 | <p>The problem is not that you're failing to set a float64 dtype. The error message says:</p>
<blockquote>
<p>Input contains NaN, infinity or a value too large for dtype('float32').</p>
</blockquote>
<p>So try checking for those conditions first:</p>
<pre><code>assert not np.any(np.isnan(x_arr))
assert np.all(np.... | 1 | 2016-07-13T12:36:25Z | [
"python",
"numpy",
"scikit-learn",
"sklearn-pandas"
] |
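The checks from this answer in a self-contained form; the 2x2 array is made up to contain a NaN, and `np.nan_to_num` is one common (not the only) way to clean it before fitting:

```python
import numpy as np

x_arr = np.array([[1.0, 2.0], [np.nan, 4.0]], dtype="float64")

# scikit-learn rejects arrays containing NaN/inf regardless of dtype,
# so sanity-check before fitting:
print(np.any(np.isnan(x_arr)))       # True  -> there is a NaN
print(np.all(np.isfinite(x_arr)))    # False -> not all values are finite

# One way to clean: replace NaN with 0.0 and +/-inf with large finite values
cleaned = np.nan_to_num(x_arr)
print(np.all(np.isfinite(cleaned)))  # True
```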
Emitting signals causes core dumps | 38,351,781 | <p>Being (quite) new to both Python and Qt I'm fiddling with this code.
It might be somewhat messy but I've tried to shave the code down as much as possible and still get the core-dumps.</p>
<p>Basically there's a button to start "something" - now it's just a for-loop -
a progress bar, and a label.</p>
<p>Clicking t... | 1 | 2016-07-13T12:29:44Z | 38,401,010 | <p>Do not use the <a href="http://wiki.qt.io/Signals_and_Slots_in_PySide" rel="nofollow">old-style signal and slot syntax</a>. It's bug-prone, and does not raise an exception if you get it wrong. In addition to that, it looks like the implementation is somewhat broken in PySide. I converted your code example to PyQt4, ... | 1 | 2016-07-15T16:29:23Z | [
"python",
"multithreading",
"pyside",
"signals-slots",
"coredump"
] |
How to redirect python logging output to file instead of stdout? | 38,351,934 | <p>I want to redirect all the output, even from the external modules which are imported to a file.</p>
<pre><code>sys.stdout = open('logfile', 'a')
</code></pre>
<p>doesn't do the job for the logging done by external files is echoed on stdout.</p>
<p>I've tinkered with the source code of external modules, and they a... | 2 | 2016-07-13T12:36:15Z | 38,351,994 | <p>Try this:</p>
<pre><code>sysstdout = sys.stdout
log_file = open("your_log_file.txt","w")
sys.stdout = log_file
print("this will be written to message.log")
sys.stdout = sysstdout
log_file.close()
</code></pre>
<p>Or, do the right thing and use <a href="https://docs.python.org/3/library/logging.html" rel="nofollow"... | 0 | 2016-07-13T12:38:41Z | [
"python",
"logging",
"io-redirection"
] |
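A sketch of the logging-module route the answer recommends, using an explicit `FileHandler` instead of touching `sys.stdout`; the logger name `"app"` and file name `app.log` are illustrative:

```python
import logging

logger = logging.getLogger("app")   # "app" is an illustrative name
logger.setLevel(logging.INFO)
logger.propagate = False            # keep messages out of any console handler

handler = logging.FileHandler("app.log", mode="w")
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)

logger.info("this goes to app.log, not stdout")
handler.flush()

with open("app.log") as f:
    print(f.read().strip())  # INFO:app:this goes to app.log, not stdout
```

External modules that use the stdlib `logging` package can be captured the same way by configuring the root logger; raw `print()` calls in third-party code are the only case where replacing `sys.stdout` is actually needed.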
Parsing log file using Python | 38,352,022 | <p>I have the following log files and I want to split it and put it in an ordered data structure(something like a list of list) using Python 3.4</p>
<p>The file follows this structure:</p>
<pre><code>Month #1
1465465464555
345646546454
442343423433
724342342655
34324233454
24543534533
***Day # 1
5465465465465455
6446... | 0 | 2016-07-13T12:39:57Z | 38,352,810 | <p><code>itertools.groupby</code> from the standard lib is a powerful function for this kind of work. The code below finds groups of lines by month, and then within the month by day, building up a nested data structure. Once done, then you can iterate over that structure by month, and within each month by day.</p>
<pr... | 0 | 2016-07-13T13:14:36Z | [
"python",
"file",
"parsing",
"logging",
"logfile"
] |
Parsing log file using Python | 38,352,022 | <p>I have the following log files and I want to split it and put it in an ordered data structure(something like a list of list) using Python 3.4</p>
<p>The file follows this structure:</p>
<pre><code>Month #1
1465465464555
345646546454
442343423433
724342342655
34324233454
24543534533
***Day # 1
5465465465465455
6446... | 0 | 2016-07-13T12:39:57Z | 38,353,496 | <p>If you want to go down the nested dict route:</p>
<pre><code>month, day = 0, 0
log = {}
with open("log.txt") as f:
for line in f:
if 'Month' in line:
month += 1
day = 0
log[month] = {0:[]}
elif 'Day' in line:
day += 1
log[month][day] = ... | 2 | 2016-07-13T13:44:15Z | [
"python",
"file",
"parsing",
"logging",
"logfile"
] |
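The nested-dict answer above can be completed into a runnable sketch; the `raw` string is a shortened stand-in for the log file, and slot `0` of each month collects lines that appear before the first `***Day` marker:

```python
raw = """Month #1
1465465464555
345646546454
***Day # 1
5465465465465455
Month #2
111
***Day # 1
222"""

month, day = 0, 0
log = {}
for line in raw.splitlines():
    line = line.strip()
    if line.startswith("Month"):
        month += 1
        day = 0
        log[month] = {0: []}   # slot 0: lines before the first Day marker
    elif line.startswith("***Day"):
        day += 1
        log[month][day] = []
    elif line:
        log[month][day].append(line)

print(log[1])  # {0: ['1465465464555', '345646546454'], 1: ['5465465465465455']}
print(log[2])  # {0: ['111'], 1: ['222']}
```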
Parsing log file using Python | 38,352,022 | <p>I have the following log files and I want to split it and put it in an ordered data structure(something like a list of list) using Python 3.4</p>
<p>The file follows this structure:</p>
<pre><code>Month #1
1465465464555
345646546454
442343423433
724342342655
34324233454
24543534533
***Day # 1
5465465465465455
6446... | 0 | 2016-07-13T12:39:57Z | 38,356,610 | <p>Well, here we have a few answers for that question.</p>
<p>Here is my contribution: I solved the issue using a recursive solution, as a different way of thinking about it:</p>
<pre><code>def loop(stopParam, startArr, resultArr=[]):
if startArr == []:
return (resultArr, startArr)
elif stopParam in startArr[0... | 0 | 2016-07-13T16:02:11Z | [
"python",
"file",
"parsing",
"logging",
"logfile"
] |
Python, Postgres, and integers with blank values? | 38,352,040 | <p>So I have some fairly sparse data columns where most of the values are blank but sometimes have some integer value. In Python, if there is a blank then that column is interpreted as a float and there is a .0 at the end of each number.</p>
<p>I tried two things:</p>
<ul>
<li>Changed all of the columns to text and t... | 1 | 2016-07-13T12:40:47Z | 38,353,807 | <p>You could take advantage of the fact you are using POSTGRESQL (9.3 or above), and implement a "poor man's sparse row" by converting your data into Python dictionaries and then using a JSON datatype (JSONB is better).</p>
<p>The following Python snippets generate random data in the format you said you have yours, co... | 1 | 2016-07-13T13:58:36Z | [
"python",
"postgresql",
"sqlalchemy"
] |
How to SetFoucus to mainpanel or mainframe | 38,352,061 | <p>This code is from one of the answer to the pop up window question in the website. I want to make the subframe to open when textctrl is clicked, and the mainframe closed at the same time, and the data is transferred back to the mainframe after clicking the 'save and close' button, the code now can open the subwindow ... | 0 | 2016-07-13T12:41:33Z | 38,359,230 | <p>The reason this happens is that you are binding to the wrong event. The <code>EVT_SET_FOCUS</code> fires whenever the text control is in focus. It goes into focus when you click it. It also comes back into focus when you close the second frame and bring the first one back which is why you see the second frame again.... | 0 | 2016-07-13T18:28:35Z | [
"python",
"popup",
"wxpython"
] |
Pandas Dataframe.to_csv decimal=',' doesn't work | 38,352,082 | <p>In Python, I'm writing my Pandas Dataframe to a csv file and want to change the decimal delimiter to a comma (<code>,</code>). Like this:</p>
<pre><code>results.to_csv('D:/Data/Kaeashi/BigData/ProcessMining/Voorbeelden/Voorbeeld/CaseEventsCel.csv', sep=';', decimal=',')
</code></pre>
<p>But the decimal delimiter i... | 2 | 2016-07-13T12:42:39Z | 38,352,178 | <p>Somehow I don't get this to work either. I always just end up using the following script to rectify it. It's dirty, but it works for my purposes:</p>
<pre><code>for col in df.columns:
try:
df[col] = df[col].apply(lambda x: float(x.replace('.','').replace(',','.')))
except:
pass
</code></pre>
<p>E... | 0 | 2016-07-13T12:46:46Z | [
"python",
"csv",
"pandas"
] |
Pandas Dataframe.to_csv decimal=',' doesn't work | 38,352,082 | <p>In Python, I'm writing my Pandas Dataframe to a csv file and want to change the decimal delimiter to a comma (<code>,</code>). Like this:</p>
<pre><code>results.to_csv('D:/Data/Kaeashi/BigData/ProcessMining/Voorbeelden/Voorbeeld/CaseEventsCel.csv', sep=';', decimal=',')
</code></pre>
<p>But the decimal delimiter i... | 2 | 2016-07-13T12:42:39Z | 38,352,419 | <p>This functionality wasn't added until <a href="http://pandas.pydata.org/pandas-docs/version/0.16.0/whatsnew.html" rel="nofollow">0.16.0</a></p>
<blockquote>
<p>Added decimal option in to_csv to provide formatting for non-'.' decimal separators (<a href="https://github.com/pydata/pandas/issues/781" rel="nofoll... | 1 | 2016-07-13T12:56:35Z | [
"python",
"csv",
"pandas"
] |
Pandas Dataframe.to_csv decimal=',' doesn't work | 38,352,082 | <p>In Python, I'm writing my Pandas Dataframe to a csv file and want to change the decimal delimiter to a comma (<code>,</code>). Like this:</p>
<pre><code>results.to_csv('D:/Data/Kaeashi/BigData/ProcessMining/Voorbeelden/Voorbeeld/CaseEventsCel.csv', sep=';', decimal=',')
</code></pre>
<p>But the decimal delimiter i... | 2 | 2016-07-13T12:42:39Z | 38,352,486 | <p>This example is supposed to work (it works for me):</p>
<pre><code>import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(10))
with open('Data/out.csv', 'w') as f:
s.to_csv(f, index=True, header=True, decimal=',', sep=';', float_format='%.3f')
</code></pre>
<p><strong>out.csv:</strong></p>
<bloc... | 0 | 2016-07-13T12:59:28Z | [
"python",
"csv",
"pandas"
] |
Subsetting a Pandas.DataFrame object only where there is a difference between two rows in python | 38,352,123 | <p>I was wondering if it there were an easy way in python to return a subset of my DataFrame rows only where there is a change between two consecutive rows. For example, my dataframe object might look like this:</p>
<pre><code> Date A B
20160713070000 20 21
20160713070100 20 23
20160713070128... | 2 | 2016-07-13T12:44:15Z | 38,352,312 | <p>Assuming your dataframe is df, try the following:</p>
<pre><code>sub_df = df[df.groupby('Date')['A'].transform(lambda x: x.index[-1])==df.index]
</code></pre>
| 1 | 2016-07-13T12:51:34Z | [
"python",
"pandas",
"dataframe"
] |
Subsetting a Pandas.DataFrame object only where there is a difference between two rows in python | 38,352,123 | <p>I was wondering if it there were an easy way in python to return a subset of my DataFrame rows only where there is a change between two consecutive rows. For example, my dataframe object might look like this:</p>
<pre><code> Date A B
20160713070000 20 21
20160713070100 20 23
20160713070128... | 2 | 2016-07-13T12:44:15Z | 38,352,962 | <p>I'd use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow">drop_duplicates()</a> function:</p>
<pre><code>In [262]: df.drop_duplicates(subset=['A','B'])
Out[262]:
Date A B
0 20160713070000 20 21
1 20160713070100 20 23
3 201607... | 1 | 2016-07-13T13:20:31Z | [
"python",
"pandas",
"dataframe"
] |
Specifying a login table | 38,352,133 | <p>I have a login form. The problem is that it uses data from default table created by django called <code>auth_user</code>. I created a data model and it is in the database. How can I make my login form to get data from this table and not the default table?</p>
<p><strong>signin.html:</strong></p>
<pre><code><for... | 0 | 2016-07-13T12:44:37Z | 38,352,379 | <p>You will need to <a href="https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#substituting-a-custom-user-model" rel="nofollow">substitute custom user model</a>.</p>
| 1 | 2016-07-13T12:54:38Z | [
"python",
"django"
] |
AD7705 - Problems with setup and communication with Raspberry Pi via bitbanged SPI in Python | 38,352,345 | <p>I am new here and desperately searching for a solution to my problem. I am currently trying to make my Raspberry Pi communicate to an AD7705 16bit-ADC with Python. Unfortunately though, things aren't going as expected...
The circuit looks like this: <a href="http://i.stack.imgur.com/QHiQR.png" rel="nofollow">AD7705 ... | 0 | 2016-07-13T12:52:58Z | 38,360,355 | <p>I borrowed a digital oscilloscope (great instruments!) from a friend and figured out that the CLK signal was switching too fast, so I increased the waiting time between CLK HI/LO switches to 0.001s.<br>
Afterwards, I found out that I didn't receive any proper signal from the ADC's DOUT pin, even though my bytes wher... | 0 | 2016-07-13T19:41:50Z | [
"python",
"python-2.7",
"raspberry-pi",
"spi",
"adc"
] |
Checking user type using hasattr() | 38,352,398 | <p>Some of my users are students. When a user creates a student profile the StudentProfile class is instantiated:</p>
<pre><code>class StudentProfile(models.Model):
user = models.OneToOneField(settings.AUTH_USER_MODEL, primary_key=True)
    ...
</code></pre>
<p>How can I check if a user is a student?</p>
<pre><c... | 3 | 2016-07-13T12:55:37Z | 38,352,480 | <p>You're almost there - you just need to use lowercase <code>studentprofile</code> instead of <code>StudentProfile</code>:</p>
<pre><code>hasattr(request.user, 'studentprofile')
</code></pre>
<p>From <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#onetoonefield" rel="nofollow">the docs</a>:</p>
<b... | 4 | 2016-07-13T12:59:10Z | [
"python",
"django"
] |
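A plain-Python sketch of why the lowercase `hasattr` check works. Django's reverse one-to-one accessor raises an `AttributeError` subclass (`RelatedObjectDoesNotExist`) when no profile exists, which `hasattr` treats as absence; the classes below are made-up stand-ins for that behaviour:

```python
class User:
    # Stand-in: a user with no related student profile
    pass

class StudentUser(User):
    def __init__(self):
        # Stand-in for Django's reverse OneToOne accessor
        self.studentprofile = object()

plain, student = User(), StudentUser()
print(hasattr(plain, "studentprofile"))    # False
print(hasattr(student, "studentprofile"))  # True
```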
Extract only subpages with Scrapy's LinkExtractor | 38,352,468 | <p>I'm trying to crawl the website <a href="http://www.funda.nl/koop/amsterdam/" rel="nofollow">http://www.funda.nl/koop/amsterdam/</a>, which lists houses for sale in Amsterdam, and extract data from the subpages such as <a href="http://www.funda.nl/koop/amsterdam/huis-49801360-brede-vogelstraat-2/" rel="nofollow">htt... | 0 | 2016-07-13T12:58:38Z | 38,354,862 | <p>For now I've added an <code>if</code> statement which checks that the <code>url</code> has the desired number of forward slashes (6) and ends with a forward slash:</p>
<pre><code>import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Funda.items import FundaI... | 0 | 2016-07-13T14:42:50Z | [
"python",
"scrapy"
] |
how to know the user login fails in python session? | 38,352,565 | <pre><code>import requests
r=requests.Session()
name="user"
pas="pass123"
url="http://someurl/login.php"
r.get(url)
</code></pre>
<blockquote>
<p>Response<200></p>
</blockquote>
<pre><code>login_data=dict(username=name,password=pas)
r.post(url,data=login_data)
</code></pre>
<blockquote>
<p>Response<200><... | -2 | 2016-07-13T13:02:58Z | 38,352,821 | <p>It really depends on the service you're talking to. For example API endpoints will likely respond with a <code>40X</code> status to a failed authentication. On the other hand normal websites are likely to respond with a success and a normal page. In that case you need to figure out if you're logged in either by:</p>... | 1 | 2016-07-13T13:15:08Z | [
"python",
"session",
"python-requests"
] |
Inconsistent result with Python numpy matrix | 38,352,647 | <p>I have a matrix with float values and I try to get the summary of columns and rows. <strong>This matrix is symmetric</strong>.</p>
<pre><code>>>> np.sum(n2[1,:]) #summing second row
0.80822400592582844
>>> np.sum(n2[:,1]) #summing second col
0.80822400592582844
>>> np.sum(n2, axis=0)[1]
0... | 4 | 2016-07-13T13:07:05Z | 38,352,873 | <p>The numbers <code>numpy</code> uses are <code>double</code>s, with accuracy up to 16 decimal places. It looks like the differences happen at the 16th place, with the rest of the digits being equal. If you don't need this accuracy, you could use the rounding function <code>np.around()</code>, or you could actually tr... | 1 | 2016-07-13T13:16:46Z | [
"python",
"numpy",
"matrix"
] |
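The precision point in this answer can be demonstrated with a small made-up symmetric matrix: row and column sums traverse memory in different orders, so the last bits of the `float64` result can differ, and comparisons should use a tolerance:

```python
import numpy as np

np.random.seed(0)                 # reproducible illustration
n2 = np.random.rand(5, 5)
n2 = (n2 + n2.T) / 2              # make the matrix symmetric

row = np.sum(n2[1, :])            # sum of the second row
col = np.sum(n2[:, 1])            # sum of the second column

# The two sums may differ around the 16th decimal place,
# so compare with a tolerance instead of exact equality.
print(np.isclose(row, col))      # True
```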
How can I find max number among numbers in this code? | 38,352,723 | <pre><code>class student(object):
def student(self):
self.name=input("enter name:")
self.stno=int(input("enter stno:"))
self.score=int(input("enter score:"))
def dis(self):
print("name:",self.name,"stno:",self.stno,"score:",self.score)
def stno(self):
return self.stn... | -1 | 2016-07-13T13:10:34Z | 38,352,832 | <p>You can use the <a href="https://docs.python.org/2/library/functions.html#max" rel="nofollow"><code>max</code></a> function with a custom key function:</p>
<pre><code>b = max(y, key=lambda student: student.score)
print(b.stno, b.score)
</code></pre>
| 1 | 2016-07-13T13:15:31Z | [
"python"
] |
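A self-contained version of the `max(key=...)` answer; the `Student` class and sample scores are invented so the snippet runs without the interactive input from the question:

```python
class Student(object):
    def __init__(self, name, stno, score):
        self.name, self.stno, self.score = name, stno, score

y = [Student("a", 1, 17), Student("b", 2, 19), Student("c", 3, 12)]

# The key function tells max() which attribute to compare on
best = max(y, key=lambda student: student.score)
print(best.stno, best.score)  # 2 19
```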
How can I find max number among numbers in this code? | 38,352,723 | <pre><code>class student(object):
def student(self):
self.name=input("enter name:")
self.stno=int(input("enter stno:"))
self.score=int(input("enter score:"))
def dis(self):
print("name:",self.name,"stno:",self.stno,"score:",self.score)
def stno(self):
return self.stn... | -1 | 2016-07-13T13:10:34Z | 38,352,845 | <p>The max for loop should be like this:</p>
<pre><code># works only with non-negative numbers
max_val = 0
for b in y:
if max_val < b.score:
max_val = b.score
</code></pre>
<p>or use the <code>max</code> function as <a href="http://stackoverflow.com/users/1222951/rawing">Rawing</a> suggested.</p>
<p>-... | 1 | 2016-07-13T13:15:44Z | [
"python"
] |
How can I find max number among numbers in this code? | 38,352,723 | <pre><code>class student(object):
def student(self):
self.name=input("enter name:")
self.stno=int(input("enter stno:"))
self.score=int(input("enter score:"))
def dis(self):
print("name:",self.name,"stno:",self.stno,"score:",self.score)
def stno(self):
return self.stn... | -1 | 2016-07-13T13:10:34Z | 38,352,854 | <pre><code>for b in y:
max = b.score
if man < b.score:
max = b.score
</code></pre>
<p>You assign <code>max</code> to <code>b.score</code>, and in the next line you check <code>if man < b.score</code>.</p>
<ol>
<li><p>If this is your actual code, <code>man</code> is not defined anywhere so you wi... | 1 | 2016-07-13T13:16:06Z | [
"python"
] |
How can I find max number among numbers in this code? | 38,352,723 | <pre><code>class student(object):
def student(self):
self.name=input("enter name:")
self.stno=int(input("enter stno:"))
self.score=int(input("enter score:"))
def dis(self):
print("name:",self.name,"stno:",self.stno,"score:",self.score)
def stno(self):
return self.stn... | -1 | 2016-07-13T13:10:34Z | 38,352,967 | <p>Similar to <a href="http://stackoverflow.com/a/38352832/189134">Rawing's</a> answer, but instead of a lambda, you can use <code>operator.attrgetter()</code></p>
<pre><code>from operator import attrgetter
class ...
    # Your class code remains unchanged
y=[]
j=0
while(j<3):
a=student()
a.student()
y.... | 1 | 2016-07-13T13:20:38Z | [
"python"
] |
Determine indentation level of line currently running Python Code | 38,352,825 | <p>Is it possible to determine the level of indentation of a line in Python while the program is running? I want to be able to organize a log file according to an outline structure of the script that is being run.</p>
<p>In the following example, the 'first message' function would yield 0, 'second message' would be 1,... | 0 | 2016-07-13T13:15:14Z | 38,353,894 | <p>I was able to determine the indentation level using the inspect.getouterframes() function. This assumes that 4 ' ' characters are used instead of '\t' characters for indentation.</p>
<pre><code>import inspect
def getIndentationLevel():
# get information about the previous stack frame
frame, filename, ... | 0 | 2016-07-13T14:02:19Z | [
"python",
"python-3.x",
"logging",
"indentation"
] |
Determine indentation level of line currently running Python Code | 38,352,825 | <p>Is it possible to determine the level of indentation of a line in Python while the program is running? I want to be able to organize a log file according to an outline structure of the script that is being run.</p>
<p>In the following example, the 'first message' function would yield 0, 'second message' would be 1,... | 0 | 2016-07-13T13:15:14Z | 38,354,602 | <p>First, I retrieve the code context of the caller:</p>
<pre><code>import inspect
context = inspect.getframeinfo(frame.f_back).code_context
</code></pre>
<p>This gives me a list of code lines; I ignore all but the first of those lines. Then I use a regular expression to get the whitespace at the start of this line... | 0 | 2016-07-13T14:32:05Z | [
"python",
"python-3.x",
"logging",
"indentation"
] |
Codeblock Expression for ArcGIS with Python | 38,353,015 | <p>I have problems with an expression for the Field Calculation in ArcGIS 10.2. I already tried the code in Python and it worked, but with the small changes I had to make to apply the code in ArcGIS it won't work.</p>
<p><code>PGIS_TXT</code> is a column of strings as shown below, the first number is the numerator... | 0 | 2016-07-13T13:22:44Z | 38,376,453 | <p>This code worked for me:</p>
<pre><code>def getnumerator(PGIS_TXT):
import re
if len(PGIS_TXT) > 3:
p = map(str, re.findall('\d+',PGIS_TXT))
z=p[:1]
b=int(''.join(z))
else:
if len(PGIS_TXT)==3:
b=int(PGIS_TXT[:3])
else:
if len(PGIS_TXT)==2:
b= int(PGIS_TXT[:2])
... | 0 | 2016-07-14T14:07:47Z | [
"python",
"arcgis",
"calculated-field"
] |
Django SMTPServerDisconnected: Connection unexpectedly closed using Postfix on Centos | 38,353,089 | <p>I have installed Postfix on Centos 7 and have successfully configured it to send mail (tested with command line program MailX).</p>
<p>However, when trying to send mail through Django shell or my Django website I am getting:</p>
<pre><code>File "/usr/lib64/python2.7/smtplib.py", line 367, in getreply
raise SMT... | -1 | 2016-07-13T13:25:49Z | 38,354,034 | <p>Thanks for reply.</p>
<p>Maillog highlighted:</p>
<p>fatal: no SASL authentication mechanisms</p>
<p>Resolved with:
yum install cyrus-sasl-plain</p>
| 0 | 2016-07-13T14:08:13Z | [
"python",
"django",
"centos",
"sendmail",
"postfix"
] |
How to open a remote file with GDAL in Python through a Flask application | 38,353,139 | <p>So, I'm developing a Flask application which uses the GDAL library, where I want to stream a .tif file through an url.</p>
<p>Right now I have method that reads a .tif file using gdal.Open(filepath). When run outside of the Flask environment (like in a Python console), it works fine by both specifying the filepath ... | 0 | 2016-07-13T13:28:00Z | 38,371,844 | <p>Please try the follow code snippet:</p>
<pre><code>from gzip import GzipFile
from io import BytesIO
import urllib2
from uuid import uuid4
from gdalconst import GA_ReadOnly
import gdal
def open_http_query(url):
try:
request = urllib2.Request(url,
headers={"Accept-Encoding": "gzip"})
... | 0 | 2016-07-14T10:30:34Z | [
"python",
"azure",
"iis",
"flask",
"gdal"
] |
pymsql and pandas | 38,353,252 | <p>I have a question: how can I add the column names from a SQL query to a pandas dataframe? I'm doing the following, but columns=columns doesn't work in my case.</p>
<pre><code>import pymssql
import pandas as pd
con = pymssql.connect(
server="MSSQLSERVER",
port="1433",
user="us",
password="pass",
database="l... | 0 | 2016-07-13T13:32:41Z | 38,353,498 | <p>First establish the connection: I saw you used MSSQL</p>
<pre><code>import pyodbc
# Parameters
server = 'server_name'
db = 'db_name'
# Create the connection
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + db + ';Trusted_Connection=yes')
</code></pre>
<p>Then use pandas:</p>
<pre><co... | 0 | 2016-07-13T13:44:31Z | [
"python",
"sql",
"pandas",
"pymssql"
] |
yesno_prompt( issue | 38,353,263 | <p>My code is this:</p>
<pre><code>highpri = yesno_prompt(
["1"], "Flag this message/s as high priority? [yes|no]")
if not "YES" in highpri:
prioflag1 = ""
prioflag2 = ""
else:
prioflag1 = ' 1 (Highest)'
prioflag2 = ' High'
</code></pre>
<p>But when I run it, I get:</p>
<pre class="lang-none pret... | -2 | 2016-07-13T13:33:14Z | 38,353,883 | <p>This code shows a fundamental misunderstanding of how <code>input()</code> works in Python 3.</p>
<p>First, the <code>input()</code> function in Python 3 is equivalent to <code>raw_input()</code> in Python 2. Your error shows that you are using Python 2.</p>
<p>So let's read the error message you got:</p>
<p>We k... | 0 | 2016-07-13T14:02:02Z | [
"python",
"smtp",
"sendmail"
] |
How to merge two or three 3D arrays in python? | 38,353,269 | <p>I have time series data in hdf format. I use the code below to read the data from the hdf files. Now I tried to join data on the basis of latitude and longitude for those data having same jdn (julian day number). Data with same julian day number represent the continuous spatial data</p>
<pre><code>import glob
impo... | 0 | 2016-07-13T13:33:25Z | 38,355,773 | <p>Numpy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html" rel="nofollow">hstack</a>, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">vstack</a>, or <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html" rel="nofollo... | 1 | 2016-07-13T15:22:14Z | [
"python",
"numpy",
"pandas",
"hdf",
"pyhdf"
] |
Pandas: Reassigning values in dataframe | 38,353,453 | <p>Suppose I have two columns, ID1 and ID2 amongst many other columns in the dataframe.</p>
<pre><code>ID1 | ID2 | etc.
123 | 345
876 | 114
etc.
</code></pre>
<p>I need to rearrange the values in ID1 and ID2 in such a way that ID1 always contains the lowest integer value. In this ... | 3 | 2016-07-13T13:42:44Z | 38,354,444 | <p>is that what you want?</p>
<pre><code>In [279]: df
Out[279]:
ID1 ID2 ID3
0 123 345 100
1 876 114 200
2 111 222 300
In [280]: df[['ID1','ID2']] = df[['ID1','ID2']].apply(np.sort, axis=1)
In [281]: df
Out[281]:
ID1 ID2 ID3
0 123 345 100
1 114 876 200
2 111 222 300
</code></pre>
| 2 | 2016-07-13T14:25:49Z | [
"python",
"sorting",
"pandas",
"merge"
] |
Pandas: Reassigning values in dataframe | 38,353,453 | <p>Suppose I have two columns, ID1 and ID2 amongst many other columns in the dataframe.</p>
<pre><code>ID1 | ID2 | etc.
123 | 345
876 | 114
etc.
</code></pre>
<p>I need to rearrange the values in ID1 and ID2 in such a way that ID1 always contains the lowest integer value. In this ... | 3 | 2016-07-13T13:42:44Z | 38,400,247 | <p>I guess the faster way would be:</p>
<pre><code>df2 = df.copy()
mask = df.ID1 > df.ID2
df2.ix[mask, 'ID1'] = df.ix[mask, 'ID2']
df2.ix[mask, 'ID2'] = df.ix[mask, 'ID1']
</code></pre>
| 1 | 2016-07-15T15:46:23Z | [
"python",
"sorting",
"pandas",
"merge"
] |
Keras Trained VGG error | 38,353,491 | <p>I have followed <a href="https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3" rel="nofollow">this</a> to load and run a pretrained VGG model. However, I was trying to extract feature maps from hidden layers and trying to replicate results from the "Extracting arbitrary feature maps" section <a href="http://b... | 0 | 2016-07-13T13:44:05Z | 38,408,727 | <p>First of all, next time pls update a cleaner version of your code so that others can help you more easily.</p>
<p>Secondly, modify your function to debug:</p>
<pre><code>def get_features(model, layer, X_batch):
print model.layers[layer]
print model.layers[layer].output_shape
get_features = K.function([... | 1 | 2016-07-16T07:07:32Z | [
"python",
"theano",
"keras"
] |
How to fuzzy match movie titles with difflib and pandas? | 38,353,567 | <p>I have 2 lists of potentially overlapping movie titles, but possibly written in a different form.<br>
They are in 2 different dataframes from pandas. So I have tried to use the <code>map()</code> function with the <code>fuzzywuzzy</code> library like so:</p>
<pre><code>df1.title.map(lambda x: process.extractOne(x, ... | 2 | 2016-07-13T13:47:35Z | 38,354,562 | <p>To eliminate the possibility of low-score matches as a result of case-differences, I'd suggest applying <code>.upper()</code> or <code>.lower()</code> to the columns you're matching. After adjusting the case, you could compile a list of all titles into <code>ThisList</code> and apply the following function (relying... | 0 | 2016-07-13T14:30:33Z | [
"python",
"pandas",
"fuzzy-search",
"difflib",
"fuzzywuzzy"
] |
Custom filters field in Django admin | 38,353,609 | <p>I have such models:</p>
<pre><code>class Student(models.Model):
school_classes = models.ManyToManyField(SchoolClass)
name = models.CharField(max_length=50)
class SchoolClass(models.Model):
school = models.ForeignKey(School)
name = models.CharField(max_length=255)
class School(models.Model):
na... | 0 | 2016-07-13T13:49:53Z | 38,355,197 | <pre><code>from django.contrib import admin
from app.models import Student
class StudentSchoolClassInline(admin.TabularInline):
model = Student.school_classes.through
class StudentAdmin(admin.ModelAdmin):
inlines = [StudentSchoolClassInline]
    exclude = ['school_classes']
admin.site.register(Student, Stud... | 0 | 2016-07-13T14:56:16Z | [
"python",
"django",
"django-admin"
] |
Custom filters field in Django admin | 38,353,609 | <p>I have such models:</p>
<pre><code>class Student(models.Model):
school_classes = models.ManyToManyField(SchoolClass)
name = models.CharField(max_length=50)
class SchoolClass(models.Model):
school = models.ForeignKey(School)
name = models.CharField(max_length=255)
class School(models.Model):
na... | 0 | 2016-07-13T13:49:53Z | 38,929,463 | <p>Finally I didn't find any simple solutions so I had to rewrite default Django widget for ManyToMany Select. </p>
| 0 | 2016-08-13T05:15:06Z | [
"python",
"django",
"django-admin"
] |
Find dates with Regex | 38,353,618 | <p>I am trying to write code that cleans up dates in different date formats (such as <code>3/14/2015</code>, <code>03-14-2015</code>, and <code>2015/3/14</code>) by replacing them with dates in a single, standard format. So far I have written my regex expression but it's not working the way I would like. </p>
<pre><code>... | 1 | 2016-07-13T13:50:16Z | 38,353,885 | <p>This code works <a href="http://ideone.com/ly6fvs" rel="nofollow">(see live)</a>:</p>
<pre><code>import re
p = re.compile(ur'''(\d|\d{2}|\d{4}) # match one digit, two digits, or four digits
([-\s./]) # match either a space, a dash, a period, or a slash
(\d{1,2}) #... | 1 | 2016-07-13T14:02:04Z | [
"python",
"regex"
] |
How to execute a root command using Python's Pexpect library? | 38,353,644 | <p>I am trying to mount a drive's shared folder on my system (<strong>Centos</strong>). Since the mount command needs to be executed as a root user, I am first logging in as a sudo user using the <strong>su command</strong>. After the login is successful, I want to execute my mount command. </p>
<pre><code>import pexpect
cmd1 = "... | 1 | 2016-07-13T13:51:10Z | 39,581,693 | <p>Rather than sending the command interactively, use the <a href="http://man7.org/linux/man-pages/man1/su.1.html" rel="nofollow"><code>-c</code> option to <code>su</code></a> to give it the command you want to run:</p>
<pre><code>child = pexpect.spawn('su', ['root', '-c', cmd2])
</code></pre>
| 0 | 2016-09-19T20:36:36Z | [
"python",
"python-2.7",
"pexpect"
] |
Irregular, non-linear grid with Python Imshow | 38,353,674 | <p>I want to draw a 2D colormap with a non-linear stretched Y-axis. The values for the X-axis have the same spacing, but the Y-values have a variable spacing:</p>
<pre><code>X=[1,2,3,4]
Y=[1,4,9,16]
</code></pre>
<p>In my matrix Z I have the brightness values (marked with a x):</p>
<pre><code>z=[[x,x,x,x],[x,x,x,x],... | 0 | 2016-07-13T13:52:33Z | 38,356,970 | <p>Have you tried </p>
<pre><code>import pylab as plt
X=[1,2,3,4]
Y=[1,4,9,16]
z=[[x,x,x,x],[x,x,x,x],[x,x,x,x],[x,x,x,x]]
plt.contourf(X,Y,z,200)
</code></pre>
<p>?</p>
| 0 | 2016-07-13T16:20:47Z | [
"python",
"imshow"
] |
Tornado RDBMS integration | 38,353,805 | <p>Typically the RDBMS drivers are blocking, while Tornado is a non-blocking server. This leads to irrational use of async, when performing CRUD operations, because the IOLoop will be blocked until that SQL query finishes.</p>
<p>I am working on a project that uses RDBMS as a DB (because of ACID), but which also requi... | 0 | 2016-07-13T13:58:34Z | 38,364,852 | <p>There are multiple ways of dealing with an RDBMS in Tornado.</p>
<p>There are some libraries for various DBs for doing async access in Tornado.
<a href="https://github.com/tornadoweb/tornado/wiki/Links" rel="nofollow">https://github.com/tornadoweb/tornado/wiki/Links</a></p>
<p>You can also use GEvent to get asyncro... | 1 | 2016-07-14T03:08:01Z | [
"python",
"tornado",
"rdbms",
"blocking",
"nonblocking"
] |
Deleting lines as you go in a big txt file | 38,353,826 | <p>I have read all the questions on Stack Overflow about it; they all say to make it a list or put everything in another txt file, etc. I can't do that because my txt file is bigger than 1GB; I can only read that file with a for loop.</p>
<p>I tried to make:</p>
<pre><code>f = r.read()
</code></pre>
<p>and go outside for 3 ho... | 0 | 2016-07-13T13:59:24Z | 38,354,035 | <p>I think you can't delete the lines on the fly. You can however save the linenumber in which you have last been. When your script fails, you can simply do the following:</p>
<pre><code>[f.next() for _ in range(lines_count)]
</code></pre>
<p>Where lines_count was previously stored in a textfile. It is the number of ... | 2 | 2016-07-13T14:08:19Z | [
"python",
"file"
] |
Sum of all slices along given axis of a numpy array | 38,353,936 | <p>I have a numpy array of shape (3,12,7,5). I would like to have the sum of all slices along the first axis of this array.</p>
<pre><code>data = np.random.randint(low=0, high=8000, size=3*12*7*5).reshape(3,12,7,5)
data[0,...].sum()
data[1,...].sum()
data[2,...].sum()
np.array((data[0,...].sum(), data[1,...].sum(), ... | 2 | 2016-07-13T14:03:53Z | 38,353,996 | <p>For a generic ndarray, you could reshape into a 2D array, keeping the number of elements along the first axis same and <em>merging</em> all of the remaining axes as the second axis and finally sum along that axis, like so -</p>
<pre><code>data.reshape(data.shape[0],-1).sum(axis=1)
</code></pre>
<p>For a <code>4D</... | 1 | 2016-07-13T14:06:42Z | [
"python",
"numpy"
] |
Pandas: Write dataframe to json with split | 38,353,945 | <p>I need to write data to <code>json</code>. The problem is that I can't set the delimiter for the string.
My df looks like </p>
<pre><code> id date val
0 123 2015-12-12 1
1 123 2015-12-13 1
2 123 2015-12-14 0
3 123 2015-12-15 1
4 123 2015-12-16 1
5 123 2015-12-17 0
6 123 2015-12-18 ... | 0 | 2016-07-13T14:04:07Z | 38,354,823 | <p>You could have <code>to_json</code> write to a <code>StringIO</code> object and then use json loads/dumps to format to your liking:</p>
<pre><code>import pandas as pd
import StringIO, json
df = pd.read_csv('data.csv')
nielson = StringIO.StringIO()
df.groupby('id').apply(lambda x: x.set_index('date')['val'].to_dict(... | 1 | 2016-07-13T14:41:11Z | [
"python",
"pandas",
"to-json"
] |
Putting instance of subclass in SQLAlchemy relationship | 38,353,976 | <p><strong>TLDR</strong>; the problem was in inheritance construction, which I awkwardly didn't know how to make without declarative API.</p>
<p>I made general model <code>Job</code> which will further narrowed down by subclasses like <code>DeploymentJob</code>. Each <code>Job</code> consists of several <code>Actions... | 1 | 2016-07-13T14:05:41Z | 38,357,538 | <p>Problem is <code>Mapper</code> won't count <code>DeploymentJob</code> as child of <code>Job</code> until mapper object of <code>Job</code> provided to mapper object of <code>DeploymentJob</code> as <code>inherit</code> argument. So this works:</p>
<pre><code>JobsMapping = mapper(Job,
jobs,
polymorphi... | 0 | 2016-07-13T16:52:24Z | [
"python",
"sqlalchemy"
] |
Adding a Label to a Span in Bokeh 0.12 | 38,353,986 | <p>How can I add a label to a span annotations within Bokeh? So far, I've seen labels by themselves, is there a better way to tie the labels to the spans?</p>
| 0 | 2016-07-13T14:06:14Z | 38,866,787 | <p>If you want a label attached to Span, you just need to set the location to be the same.</p>
<pre><code>from bokeh.models import Span, Label
from bokeh.plotting import figure
p = figure(plot_height=400, plot_width=400)
# Initialize your span and label
my_span = Span(location=0, dimension='height')
p.renderers.exte... | 1 | 2016-08-10T07:25:48Z | [
"python",
"bokeh"
] |
Not reading all characters after seek | 38,353,994 | <p>I'm trying to make a program that inserts log entries into a text file. The issue I'm having is that I read through the file line by line for a specific line and want to write before the line. Python correctly reads the line I'm looking for, however, when I seek to go back to the previous position, it does not read ... | 0 | 2016-07-13T14:06:40Z | 38,356,074 | <p>I think I found the solution. I'm not sure why but when I use rb+ as the mode instead of r+, it reads the entire line just fine.</p>
| 0 | 2016-07-13T15:35:59Z | [
"python",
"python-2.7",
"file",
"file-read"
] |
Interleaving stdout from Popen with messages from ZMQ recv | 38,354,051 | <p>Is there a best-practices approach to poll for the stdout/stderr from a <code>subprocess.Popen</code> as well as a zmq socket?</p>
<p>In my case, I have my main program spawning a Popen subprocess. The subprocess publishes messages via zmq which I then want to subscribe to in my main program.</p>
<p>Waiting on mul... | 1 | 2016-07-13T14:09:10Z | 38,367,236 | <p>Use <code>zmq.Poller</code>: <a href="http://pyzmq.readthedocs.io/en/latest/api/zmq.html#polling" rel="nofollow">http://pyzmq.readthedocs.io/en/latest/api/zmq.html#polling</a>. You can register zmq sockets and native file descriptors (e.g. <code>process.stdout.fileno()</code> and <code>process.stderr.fileno()</code>... | 1 | 2016-07-14T06:45:24Z | [
"python",
"zeromq",
"popen"
] |
Error 'str' object is not callable | 38,354,056 | <p>I don't understand how I am getting this error; can someone help?</p>
<pre><code>import time
import os
import xlwt
from datetime import datetime
num = 0
def default():
global num
global model
global partnum
global serialnum
global countryorigin
time.sleep(1)
print ("Model: ")
model... | -3 | 2016-07-13T14:09:22Z | 38,354,125 | <p>You have both a <em>function</em> named <code>excel()</code> and a <em>local variable</em> named <code>excel</code> that you assigned a string to.</p>
<p>You can't do that and expect the function to still be available. The local name <code>excel</code> masks the global, so <code>excel()</code> tries to call the str... | 1 | 2016-07-13T14:12:28Z | [
"python",
"python-3.x"
] |
reading csv file enclosed in double quote but with newline | 38,354,124 | <p>I have <code>csv</code> with newline in column. Following is my <strong>example:</strong></p>
<pre><code>"A","B","C"
1,"This is csv with
newline","This is another column"
"This is newline
and another line","apple","cat"
</code></pre>
<p>I can read the file in spark but the newline inside the column is treated as ... | 0 | 2016-07-13T14:12:25Z | 38,354,324 | <p>The csv module from the standard python library does it out of the box:</p>
<pre><code>>>> txt = '''"A","B","C"
1,"This is csv with
newline","This is another column"
"This is newline
and another line","apple","cat"'''
>>> import csv
>>> import io
>>> with io.BytesIO(txt) as fd:
... | 2 | 2016-07-13T14:20:54Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark"
] |
reading csv file enclosed in double quote but with newline | 38,354,124 | <p>I have <code>csv</code> with newline in column. Following is my <strong>example:</strong></p>
<pre><code>"A","B","C"
1,"This is csv with
newline","This is another column"
"This is newline
and another line","apple","cat"
</code></pre>
<p>I can read the file in spark but the newline inside the column is treated as ... | 0 | 2016-07-13T14:12:25Z | 38,355,331 | <p>You do not need to import anything. The solution proposed below creates a second file just for demonstration purposes. You can read the line after you modify it without writing it anywhere.</p>
<pre><code>with open(r'C:\Users\evkouni\Desktop\test_in.csv', 'r') as fin:
with open(r'C:\Users\evkouni\Desktop\test_o... | 0 | 2016-07-13T15:01:52Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark"
] |
reading csv file enclosed in double quote but with newline | 38,354,124 | <p>I have <code>csv</code> with newline in column. Following is my <strong>example:</strong></p>
<pre><code>"A","B","C"
1,"This is csv with
newline","This is another column"
"This is newline
and another line","apple","cat"
</code></pre>
<p>I can read the file in spark but the newline inside the column is treated as ... | 0 | 2016-07-13T14:12:25Z | 38,363,891 | <p>If you want to create a dataframe from a csv with newlines and double-quoted fields without reinventing the wheel, then use the spark-csv and commons-csv libraries:</p>
<pre><code>from pyspark.sql import SQLContext
df = sqlContext.load(header="true",source="com.databricks.spark.csv", path = "hdfs://analytics.com.np:8020/hdp/badcs... | 0 | 2016-07-14T00:50:27Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark"
] |
Convert CSV to PNG with matplotlib Issue | 38,354,186 | <p>I am trying to create a PNG image with some CSV data but I am getting an error related to the date column (which has been converted to a list). The error is: </p>
<pre><code>Traceback (most recent call last):
File "C:/Users/user1/Desktop/Py/AgentsStatus/testGraph.py", line 57, in <module>
plt.plot(dateCol,okCol... | 1 | 2016-07-13T14:15:14Z | 38,357,704 | <p>For stuff like this Pandas is unbeatable:</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
df = pandas.read_csv('sampledata.csv', delimiter=';',
index_col=0,
parse_dates=[0], dayfirst=True,
names=['date','a','b','c'])
df.plot()
plt.save... | 0 | 2016-07-13T17:01:04Z | [
"python",
"csv",
"matplotlib"
] |
pandas get index of highest dot product | 38,354,213 | <p>I have a dataframe like this:</p>
<pre><code>df1 = pd.DataFrame({'a':[1,2,3,4],'b':[5,6,7,8],'c':[9,10,11,12]})
a b c
0 1 5 9
1 2 6 10
2 3 7 11
3 4 8 12
</code></pre>
<p>And I would like to create another column in this dataframe which stores for every row, which other row gets the ... | 3 | 2016-07-13T14:16:18Z | 38,354,728 | <p>Do a matrix multiplication with the transpose:</p>
<pre><code>mat_mul = np.dot(df.values, df.values.T)
</code></pre>
<p>Fill diagonals with a small number so they cannot be the maximum (I assumed all positive, so filled with -1 but you can change this):</p>
<pre><code>np.fill_diagonal(mat_mul, -1)
</code></pre>
... | 2 | 2016-07-13T14:37:10Z | [
"python",
"numpy",
"pandas",
"dot-product"
] |
pandas get index of highest dot product | 38,354,213 | <p>I have a dataframe like this:</p>
<pre><code>df1 = pd.DataFrame({'a':[1,2,3,4],'b':[5,6,7,8],'c':[9,10,11,12]})
a b c
0 1 5 9
1 2 6 10
2 3 7 11
3 4 8 12
</code></pre>
<p>And I would like to create another column in this dataframe which stores for every row, which other row gets the ... | 3 | 2016-07-13T14:16:18Z | 38,355,396 | <p>Since the dot-products would be repeated for pairs when they are flipped, the final dot-product array for each row against every other row would be a symmetric one. So, we can calculate for either the lower or upper triangular dot product elements and then get the full form by using <a href="http://docs.scipy.org/do... | 2 | 2016-07-13T15:04:47Z | [
"python",
"numpy",
"pandas",
"dot-product"
] |
Gauss Elimination "Object has no attribute '__getitem__'" | 38,354,227 | <p>Basically, I'm trying to code the <code>Gauss Elimination (Forward)</code> method, but, when executed, Python raises an exception saying: <code>"Object has no attribute '__getitem__'"</code> when the subtraction between 2 lists occurs.</p>
<p>The complete stacktrace is:</p>
<blockquote>
<pre><code>Traceback (most re... | -1 | 2016-07-13T14:16:52Z | 38,354,486 | <p>Your problem lies with <code>aux[i][w]</code>. Since you set <code>aux=self.a[i]</code>, <code>aux</code> is a flat list (ie. not a list of lists) and thus when you try to access <code>aux[i][w]</code>, you're trying to index <code>self.a[i][i][w]</code> which is not correct. I think you meant to do this: </p>
<pre... | 0 | 2016-07-13T14:27:19Z | [
"python",
"typeerror"
] |
display a dictionary value using input | 38,354,297 | <p>I can import the dictionary successfully and I can get an output from the dictionary of its values but it shows me all the values and not the value that matches the input of the user.</p>
<p>the input is first converted to lower and then split into individual words to be referenced in the dictionary.</p>
<pre><cod... | 0 | 2016-07-13T14:19:49Z | 38,354,413 | <p><code>if prob_dict in problemlist</code> is something that will almost never happen. You wouldn't find a <code>dict</code> in a list of strings.</p>
<p>Instead, you should iterate through the items in the list and see if the dictionary contains a key with the item:</p>
<pre><code>problemlist = [p.lower() for p in ... | 3 | 2016-07-13T14:24:33Z | [
"python",
"dictionary",
"input"
] |
Way to solve constraint satisfaction faster than brute force? | 38,354,320 | <p>I have a CSV that provides a y value for three different x values for each row. When read into a pandas DataFrame, it looks like this:</p>
<pre><code> 5 10 20
0 -13.6 -10.7 -10.3
1 -14.1 -11.2 -10.8
2 -12.3 -9.4 -9.0
</code></pre>
<p>That is, for row 0, at 5 the value is -13.6, at 10 the value is -10.... | 1 | 2016-07-13T14:20:47Z | 38,468,441 | <p>I hope I understood the task correctly.</p>
<p>If you know the resolution/discretization of the parameters, it looks like a discrete-optimization problem (in general: <strong>hard</strong>), which could be solved by CP-approaches.</p>
<p>But if you allow these values to be continuous (and reformulate the formulas)... | 0 | 2016-07-19T21:00:00Z | [
"python",
"constraint-programming"
] |
Printing the country name from Geopy | 38,354,357 | <p>I am trying to print the specific country code from a lat/long pair using GeoPy. It can return the address, latitude, longitude, or the entire JSON as a dict, but not individual components. </p>
<p>Is there a way I can access only the country portion and return that? Here is the code I have working that outputs the... | 0 | 2016-07-13T14:22:02Z | 38,354,990 | <p>Simply navigate the dict:</p>
<pre><code>>>> print(location.raw['address']['country'])
United States of America
>>> print(location.raw['address']['country_code'])
us
</code></pre>
| 1 | 2016-07-13T14:48:10Z | [
"python",
"python-2.7",
"dictionary",
"geocoding",
"geopy"
] |
Finding substring in list of strings, return index | 38,354,383 | <p>I've got a list of strings dumped by <code>readlines()</code> and I want to find the index of the first line that includes a substring, or the last line.</p>
<p>This works, but seems clunky:</p>
<pre><code>fooIndex=listOfStrings.index((next((x for x in listOfStrings if "foo" in x),listOfStrings[-1])))
</code></pre... | 3 | 2016-07-13T14:23:08Z | 38,354,506 | <p>Using <a href="https://docs.python.org/3.5/library/functions.html#enumerate" rel="nofollow"><code>enumerate()</code></a> in a function would be both more readable and efficient:</p>
<pre><code>def get_index(strings, substr):
for idx, string in enumerate(strings):
if substr in string:
break
... | 2 | 2016-07-13T14:28:06Z | [
"python"
] |
Finding substring in list of strings, return index | 38,354,383 | <p>I've got a list of strings dumped by <code>readlines()</code> and I want to find the index of the first line that includes a substring, or the last line.</p>
<p>This works, but seems clunky:</p>
<pre><code>fooIndex=listOfStrings.index((next((x for x in listOfStrings if "foo" in x),listOfStrings[-1])))
</code></pre... | 3 | 2016-07-13T14:23:08Z | 38,355,135 | <p>I don't think there is a good (i.e. readable) one-line solution for this. Alternatively to @eugene's loop, you could also use a <code>try/except</code>.</p>
<pre><code>def get_index(list_of_strings, substring):
try:
return next(i for i, e in enumerate(list_of_strings) if substring in e)
except StopI... | 2 | 2016-07-13T14:53:47Z | [
"python"
] |
Distribute value equally across NaN in pandas | 38,354,418 | <p>I have the following dataframe:</p>
<pre><code> var_value
2016-07-01 05:10:00 809.0
2016-07-01 05:15:00 NaN
2016-07-01 05:20:00 NaN
2016-07-01 05:25:00 NaN
2016-07-01 05:30:00 NaN
2016-07-01 05:35:00 NaN
2016-07-01 05:40:00 NaN
2016-07-01 05:45:00 ... | 2 | 2016-07-13T14:24:40Z | 38,354,607 | <p>You can first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.ffill.html" rel="nofollow"><code>ffill</code></a> <code>NaN</code> values and then divide by <code>len</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.Group... | 2 | 2016-07-13T14:32:35Z | [
"python",
"pandas"
] |
Distribute value equally across NaN in pandas | 38,354,418 | <p>I have the following dataframe:</p>
<pre><code> var_value
2016-07-01 05:10:00 809.0
2016-07-01 05:15:00 NaN
2016-07-01 05:20:00 NaN
2016-07-01 05:25:00 NaN
2016-07-01 05:30:00 NaN
2016-07-01 05:35:00 NaN
2016-07-01 05:40:00 NaN
2016-07-01 05:45:00 ... | 2 | 2016-07-13T14:24:40Z | 38,354,615 | <pre><code>df.fillna(0).groupby(df.notnull().cumsum()).transform(lambda x: x.mean())
2016-07-01 05:10:00 67.416667
2016-07-01 05:15:00 67.416667
2016-07-01 05:20:00 67.416667
2016-07-01 05:25:00 67.416667
2016-07-01 05:30:00 67.416667
2016-07-01 05:35:00 67.416667
2016-07-01 05:40:00 67.416667
201... | 2 | 2016-07-13T14:32:52Z | [
"python",
"pandas"
] |
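A minimal sketch of the groupby/transform idea from the answer above, on a small made-up Series rather than the asker's timestamped data — each non-null value starts a group, and the value is spread evenly over that group:

```python
import numpy as np
import pandas as pd

s = pd.Series([809.0, np.nan, np.nan, np.nan])

# notnull().cumsum() labels each run starting at a non-null value;
# the mean of [809, 0, 0, 0] distributes 809 equally across the run.
out = s.fillna(0).groupby(s.notnull().cumsum()).transform("mean")
print(out.tolist())  # [202.25, 202.25, 202.25, 202.25]
```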
Power BI REST API with Python | 38,354,445 | <p>I'm seeking more help on the following thread, <a href="http://stackoverflow.com/questions/35254559/client-credentials-dont-work-for-powerbi-rest-api">Client-credentials don't work for powerBI REST API</a>, as I can't post any comments yet. I have the exact same situation as described in that question: I can obt... | -1 | 2016-07-13T14:25:50Z | 38,558,782 | <p>According the authentication guide on PowerBi at <a href="https://powerbi.microsoft.com/en-us/documentation/powerbi-developer-authenticate-to-power-bi-service/" rel="nofollow">https://powerbi.microsoft.com/en-us/documentation/powerbi-developer-authenticate-to-power-bi-service/</a>, the Power BI uses authorization co... | 0 | 2016-07-25T02:09:56Z | [
"python",
"azure",
"powerbi"
] |
Modifying a class attribute from instance, different behaviour depending on type? | 38,354,490 | <p>If I create a class in Python and I give it a class attribute (this is taken directly from the docs, <a href="https://docs.python.org/2/tutorial/classes.html#class-and-instance-variables" rel="nofollow">here</a>), as </p>
<pre><code>class Dog:
tricks = []
def __init__(self, name):
self... | 0 | 2016-07-13T14:27:23Z | 38,354,684 | <p>The <code>int</code> class does not define the += operator (<code>__iadd__</code> method). That wouldn't make sense because it is immutable.</p>
<p>That's why <code>+=</code> defaults to <code>+</code> and then <code>=</code>. <a href="https://docs.python.org/3/reference/datamodel.html#object.__iadd__" rel="nofollo... | 2 | 2016-07-13T14:35:33Z | [
"python",
"oop",
"object"
] |
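The point in the answer above can be demonstrated with a stripped-down version of the docs' `Dog` class: `+=` on a shared mutable class attribute mutates it in place (via `list.__iadd__`), while `+=` on an `int` rebinds an instance name and leaves the class attribute alone:

```python
class Dog:
    tricks = []  # shared, mutable class attribute
    count = 0    # immutable class attribute

d1, d2 = Dog(), Dog()
d1.tricks += ["roll over"]  # list.__iadd__ mutates the class-level list in place
d1.count += 1               # int has no __iadd__: becomes d1.count = d1.count + 1

print(Dog.tricks)           # ['roll over']  -- every instance sees it
print(d2.count, d1.count)   # 0 1  -- only d1 got a new instance attribute
```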
Conda entry_points configuration | 38,354,517 | <p>I am trying to create a conda package from my sources but I get the stuck with the <code>build/entry_points</code> part of the <code>meta.yaml</code> configuration file.</p>
<p>Explanations:</p>
<p>Here is my <code>setup.py</code> file which works correctly with <code>pip</code> :</p>
<pre><code>from setuptools ... | 3 | 2016-07-13T14:28:43Z | 38,484,029 | <p>Conda is packaging <code>pyinstruments</code> instead of your project <code>wopmars</code> as shown in the build log (line starting by <em>source tree</em>):</p>
<pre><code>(...)
Package: wopmars-1.1.0-py35_0
source tree in: /home/luc/bin/anaconda3/conda-bld/work/pyinstrument-0.13.1
+ source activate /home/luc/bin/... | 3 | 2016-07-20T14:27:35Z | [
"python",
"conda"
] |
Plotting two y axes on the same scale on an implot (matplotlib) | 38,354,564 | <p>I have an image plot, representing a matrix, with two axes. The y axis on th left of my image plot represents the rows and the x axis represents the column, while each grid cell represents the value as a function of x and y.</p>
<p>I'd like to plot my y-axis in another form on the right side of my image plot, which... | 0 | 2016-07-13T14:30:38Z | 38,358,570 | <p>The answer was:</p>
<p><code>fig5Ax2.yaxis.set_view_interval(minRange, maxRange)</code></p>
| 0 | 2016-07-13T17:51:14Z | [
"python",
"matplotlib",
"plot"
] |
ipython notebook with spark gets error with sparkcontext | 38,354,580 | <p>I'm testing turi with this example on my macbook osx 10.10.5
<a href="https://turi.com/learn/gallery/notebooks/spark_and_graphlab_create.html" rel="nofollow">https://turi.com/learn/gallery/notebooks/spark_and_graphlab_create.html</a></p>
<p>when getting to this step </p>
<pre><code># Set up the SparkContext object... | 1 | 2016-07-13T14:31:19Z | 38,355,595 | <p>This could potentially happen because of two reasons:</p>
<ol>
<li>Environment variable <code>SPARK_HOME</code> could be pointing to the wrong path</li>
<li>Set <code>export PYSPARK_SUBMIT_ARGS="--master local[2]"</code> - This is the configuration you want <code>PySpark</code> to start with.</li>
</ol>
| 1 | 2016-07-13T15:13:30Z | [
"python",
"apache-spark",
"ipython",
"pyspark",
"jupyter-notebook"
] |
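For reference, the two fixes suggested above would look roughly like this in a shell profile before launching the notebook (the `SPARK_HOME` path is illustrative and must match your actual install):

```shell
# Point SPARK_HOME at the real Spark install directory (example path)
export SPARK_HOME=/usr/local/spark
# Configuration PySpark should start with
export PYSPARK_SUBMIT_ARGS="--master local[2]"
```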
Programatically creating edges in Neo4j with python | 38,354,584 | <p>I'm trying to generate nodes and edges for some chemicals and associated reactions in neo4j with python but am hitting a problem with node/relationship creation...</p>
<p>My code...</p>
<pre><code>from neo4j.v1 import GraphDatabase, basic_auth
driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo... | 0 | 2016-07-13T14:31:25Z | 38,356,271 | <p>I have a solution.</p>
<p>It appears if in neo4j I run </p>
<pre><code>CREATE (Reaction1:Reaction {RXNid:"reaction1", name:"AmideFormation"})
CREATE (Chem2:Molecule {CHMid: "nbutylamine", smiles:"CCCCN"})
CREATE (Chem3:Molecule {CHMid: "butanoicAcid", smiles:"CCCCOO"})
CREATE (Chem1:Molecule {CHMid: "Nbutylbutanam... | 0 | 2016-07-13T15:45:18Z | [
"python",
"neo4j",
"cypher",
"graph-databases"
] |
Cyclomatic complexity metric practices for Python | 38,354,633 | <p>I have a relatively large Python project that I work on, and we don't have any cyclomatic complexity tools as a part of our automated test and deployment process.</p>
<p>How important are cyclomatic complexity tools in Python? Do you or your project use them and find them effective? I'd like a nice before/after sto... | 2 | 2016-07-13T14:33:30Z | 38,357,619 | <p>Python isn't special when it comes to cyclomatic complexity. CC measures how much branching logic is in a chunk of code.</p>
<p>Experience shows that when the branching is "high", that code is harder to understand and change reliably than code in which the branching is lower.</p>
<p>With metrics, it typically is... | 0 | 2016-07-13T16:56:15Z | [
"python",
"python-2.7",
"code-metrics",
"cyclomatic-complexity"
] |
Cyclomatic complexity metric practices for Python | 38,354,633 | <p>I have a relatively large Python project that I work on, and we don't have any cyclomatic complexity tools as a part of our automated test and deployment process.</p>
<p>How important are cyclomatic complexity tools in Python? Do you or your project use them and find them effective? I'd like a nice before/after sto... | 2 | 2016-07-13T14:33:30Z | 38,366,375 | <p>We used the RADON tool in one of our projects which is related to Test Automation.</p>
<p><a href="http://radon.readthedocs.io/en/latest/" rel="nofollow">RADON</a></p>
<p>Depending on new features and requirements, we need to add/modify/update/delete codes in that project. Also, almost 4-5 people were working on t... | 0 | 2016-07-14T05:48:53Z | [
"python",
"python-2.7",
"code-metrics",
"cyclomatic-complexity"
] |
Grouping list by nth element | 38,354,640 | <p>I have a 2d list like the ones below </p>
<pre><code>original_list = [['2', 'Out', 'Words', 'Test3', '21702-1201', 'US', 41829.0, 'VN', 'Post', 'NAI'],
['Test', 'Info', 'More Info', 'Stuff', '63123-7802', 'US', 40942.0, 'CM', 'User Info', 'VAI'],
['Test1', 'Info1', 'More Info1', 'S... | -1 | 2016-07-13T14:33:51Z | 38,354,935 | <p>You can use <a href="https://docs.python.org/2.7/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a></p>
<pre><code>>>> from itertools import groupby
>>> l = [['2', 'Out', 'Words', 'Test3', '21702-1201', 'US', 41829.0, 'VN', 'Post', 'NAI'],
... ['Test',... | 2 | 2016-07-13T14:45:44Z | [
"python",
"list"
] |
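Since the answer above is truncated, here is a self-contained sketch of the `itertools.groupby` approach on made-up rows, grouping on the last field; note the input must be sorted by the same key first, or equal keys will not be grouped together:

```python
from itertools import groupby
from operator import itemgetter

rows = [  # illustrative rows; the grouping key is the last element
    ["a", 1, "NAI"],
    ["b", 2, "VAI"],
    ["c", 3, "NAI"],
]
key = itemgetter(-1)
grouped = {k: list(g) for k, g in groupby(sorted(rows, key=key), key=key)}
print(grouped["NAI"])  # [['a', 1, 'NAI'], ['c', 3, 'NAI']]
```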
Catching Flask abort status code in tests? | 38,354,706 | <p>I have an abort() in my flask class based view. I can assert that an abort has been called, but I cannot access the 406 code in my context manager.</p>
<p>views.py</p>
<pre><code>from flask.views import View
from flask import abort
class MyView(View):
def validate_request(self):
if self.accept_header... | 2 | 2016-07-13T14:36:08Z | 38,356,387 | <p>In the werkzeug library http errors codes are saved in <code>HTTPException.None</code>. You can see this yourself in the <a href="https://github.com/pallets/werkzeug/blob/master/werkzeug/exceptions.py#L83" rel="nofollow">sourcecode</a> (or for a not <code>None</code> code see e.g. the <a href="https://github.com/pal... | 1 | 2016-07-13T15:51:15Z | [
"python",
"flask"
] |
Catching Flask abort status code in tests? | 38,354,706 | <p>I have an abort() in my flask class based view. I can assert that an abort has been called, but I cannot access the 406 code in my context manager.</p>
<p>views.py</p>
<pre><code>from flask.views import View
from flask import abort
class MyView(View):
def validate_request(self):
if self.accept_header... | 2 | 2016-07-13T14:36:08Z | 38,356,706 | <p>Ok so I'm an idiot. Can't believe I didn't notice this before. There is an exception object inside the http_error. In my tests I was calling the http_error before calling validate_request, so I missed it. Here is the correct answer:</p>
<pre><code>from werkzeug.exceptions import HTTPException
def test_validate_req... | 0 | 2016-07-13T16:06:57Z | [
"python",
"flask"
] |
How do I append a custom error message to Django Rest Framework validation? | 38,354,770 | <p>I have a simple viewset,</p>
<pre><code>class UserViewSet(viewsets.ModelViewSet):
queryset = User.objects.all()
# more properties below.
def create(self, request, *args, **kwargs):
serialized_data = UserSerializer(data=request.data)
if serialized_data.is_valid():
# method t... | 1 | 2016-07-13T14:38:56Z | 38,354,955 | <p>You should raise validation error:</p>
<pre><code>raise serializers.ValidationError("the email is not acceptable!")
</code></pre>
<p>Or try writing custom validators <a href="http://www.django-rest-framework.org/api-guide/validators/#writing-custom-validators" rel="nofollow">http://www.django-rest-framework.org/ap... | 1 | 2016-07-13T14:46:58Z | [
"python",
"django",
"django-rest-framework"
] |
concordance index in python | 38,354,902 | <p>I'm looking for a python/sklearn/lifelines/whatever implementation of <code>Harrell's c-index</code> (concordance index), which is mentioned in <a href="https://arxiv.org/pdf/0811.1645.pdf" rel="nofollow">random survival forests</a>.</p>
<p>The C-index is calculated using the following steps:</p>
<ol>
<li>Form all... | 0 | 2016-07-13T14:44:37Z | 39,143,651 | <p>LifeLines package now has this implemented <a href="http://lifelines.readthedocs.io/en/latest/Survival%20Regression.html#model-selection-in-survival-regression" rel="nofollow">c-index, or concordance-index</a></p>
| 1 | 2016-08-25T11:07:53Z | [
"python",
"survival-analysis"
] |
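Besides the lifelines implementation pointed to above, the pairwise steps listed in the question can be hand-rolled. This sketch simplifies by assuming every event is observed (no censoring handling), which the real Harrell's c-index does account for:

```python
def concordance_index(event_times, risk_scores):
    """Fraction of permissible pairs where the higher risk score goes with
    the shorter survival time; ties in score count 0.5.
    Simplified: assumes all events are observed (no censoring)."""
    concordant = 0.0
    permissible = 0
    n = len(event_times)
    for i in range(n):
        for j in range(i + 1, n):
            if event_times[i] == event_times[j]:
                continue  # omit pairs tied on time
            permissible += 1
            if risk_scores[i] == risk_scores[j]:
                concordant += 0.5
            elif (event_times[i] < event_times[j]) == (risk_scores[i] > risk_scores[j]):
                concordant += 1.0
    return concordant / permissible

print(concordance_index([1, 2, 3, 4], [4, 3, 2, 1]))  # 1.0 (perfectly concordant)
```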
How to parse each line in a file using regex and extract content before character? | 38,354,949 | <p>I'm trying to parse a file and extract content right before a certain character, in this case <code>|</code>, to create a dictionary and filter out duplicates based on this content/key. My take on it is that I should use regular expression for this.</p>
<p>Mock input data:</p>
<pre><code>AK_0004: abc123|Abc1231301... | -2 | 2016-07-13T14:46:33Z | 38,355,260 | <p>You can indeed use regexps, specifically you want create capture group with a pattern that matches the text before a <code>|</code> which I will assume is any word-character. </p>
<pre><code>import re
# Compile the regex pattern. (\w+) is our capture group.
p = re.compile(r'(\w+)\|')
line = 'AK_0004: abc123|Abc12... | 0 | 2016-07-13T14:58:54Z | [
"python"
] |
How to parse each line in a file using regex and extract content before character? | 38,354,949 | <p>I'm trying to parse a file and extract content right before a certain character, in this case <code>|</code>, to create a dictionary and filter out duplicates based on this content/key. My take on it is that I should use regular expression for this.</p>
<p>Mock input data:</p>
<pre><code>AK_0004: abc123|Abc1231301... | -2 | 2016-07-13T14:46:33Z | 38,355,383 | <p>I think this could be simpler without a regular expression if you use clever splits and list comprehensions like so:</p>
<pre><code>dicty = {}
for line in whatever:
parts = line.split(' ')
head = parts[0][:-1]
stuff = [s.split('|')[0] for s in parts[1:]]
dicty[head] = stuff
print("{}: {}".format... | 0 | 2016-07-13T15:04:08Z | [
"python"
] |
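An end-to-end sketch combining the two answers above: capture the token before each `|` with a regex, and keep only the first occurrence of each key per line (the sample input is made up in the shape of the question's mock data):

```python
import re

pattern = re.compile(r'(\w+)\|')  # capture the word immediately before '|'

def parse(lines):
    result = {}
    for line in lines:
        head, _, rest = line.partition(': ')
        seen, keys = set(), []
        for key in pattern.findall(rest):
            if key not in seen:  # filter out duplicates based on the key
                seen.add(key)
                keys.append(key)
        result[head] = keys
    return result

print(parse(["AK_0004: abc123|Abc1231301 def456|Def4561 abc123|Other"]))
```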
pyinstaller importError: No module name '_socket' | 38,354,982 | <p>I am using:</p>
<ul>
<li>pyinstaller 3.2 (I also try development version)</li>
<li>Windows 10 </li>
<li>python 3.5.2</li>
</ul>
<p>The code is:</p>
<pre><code>import socket
print("test")
so = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
so.setblocking(True)
print(so)
</code></pre>
<p>I launch pyinstaller ... | 1 | 2016-07-13T14:47:56Z | 38,566,780 | <p>Ok guys... I am ashamed ... I was not running the good executable file. </p>
<p>I was running the exe file in the <strong>build/</strong> directory, I had to run the one in the <strong>dist/</strong> directory.</p>
| 0 | 2016-07-25T11:40:10Z | [
"python",
"sockets",
"pyinstaller"
] |