title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Listing third party libraries in project | 38,848,834 | <p>I've been working on a Python project for a while which uses quite a few third-party libraries. I want to deploy my project to another server, but I don't know by heart which libraries I use, and digging through every line of source code would be too much work.</p>
<p>Is there a way to generate a list of <strong>third party modules</strong> in my project so I can use this with pip's installer? Thank you for your help.</p>
<pre><code>pip install -r dependencies.txt # <-- I need to generate this file from a project
</code></pre>
<hr>
<p>I ended up writing a Python module which does this myself, as I couldn't find one. The source code is available on <a href="https://github.com/Paradoxis/PIP-Module-Scanner" rel="nofollow">GitHub</a>. You can install it like so:</p>
<pre><code>$ pip install pip-module-scanner
</code></pre>
<p>Using it is pretty simple; for full usage examples check the GitHub repo.</p>
<pre><code>$ pip-module-scanner
foo==0.0.1
bar==2.0.0
</code></pre>
| 2 | 2016-08-09T10:56:07Z | 38,848,876 | <p>Provided that you're using a virtual environment to keep your dependencies separate from the globally installed pip packages, you should be able to use pip's <code>freeze</code> command, like so:</p>
<pre><code>pip freeze > dependencies.txt
</code></pre>
<p>If you haven't been using a virtual environment, then you probably need to peruse the source code to find the modules. A virtual environment is a means of keeping your python project isolated from the global environment, meaning that you can only import modules that are installed within that environment, and it should only contain modules relevant to its corresponding python project. I recommend that you read up on <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">Virtual Environments</a>, they are very useful for larger projects.</p>
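If you just want a quick look at what `pip freeze` would emit, the installed distributions can also be listed from the standard library. This is a minimal sketch, assuming Python 3.8+ for `importlib.metadata`:

```python
from importlib import metadata

def freeze():
    # emulate `pip freeze` output: one "name==version" line per installed distribution
    lines = {f"{dist.metadata['Name']}=={dist.version}"
             for dist in metadata.distributions()
             if dist.metadata['Name']}
    return sorted(lines)

for line in freeze():
    print(line)
```

Note this lists everything visible to the current interpreter, so inside a virtual environment it approximates what `pip freeze` would write to `dependencies.txt`.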
| 1 | 2016-08-09T10:58:21Z | [
"python",
"pip"
] |
google api client python import taskqueue | 38,848,896 | <p>python version <code>2.7.9</code></p>
<p>installed version <code>1.5.1</code></p>
<p><code>pip install --upgrade google-api-python-client</code></p>
<p>from <a href="https://cloud.google.com/appengine/docs/python/taskqueue/push/creating-tasks" rel="nofollow">here</a> trying to import task queue like so </p>
<pre><code>from google.appengine.api import taskqueue
</code></pre>
<p>getting </p>
<pre><code>ImportError: No module named google.appengine.api
</code></pre>
<p>1.5.1 is the latest version, and I can't seem to find any code reference to task queue in the pip code <a href="https://github.com/google/google-api-python-client" rel="nofollow">here</a></p>
| 1 | 2016-08-09T10:58:55Z | 38,853,148 | <p>The <a href="https://cloud.google.com/sdk/release_notes" rel="nofollow">Google Cloud SDK - Release Notes</a> tracks the version of the GAE components, search for the <code>App Engine components updated to</code> pattern.</p>
<p>The version of the most recent GAE components in the current Cloud SDK version (120.0.0) is 1.9.38 (emphasis mine):</p>
<blockquote>
<p>Google App Engine</p>
<p>...</p>
<ul>
<li>Google App Engine components updated to <strong>1.9.38</strong>.</li>
</ul>
</blockquote>
<p>The 1.9.38 version is affected by a bug causing import errors, see <a href="http://stackoverflow.com/questions/37755195/importerror-no-module-named-webapp2-after-linux-sdk-upgrade-1-9-35-1-9-38">"ImportError: No module named webapp2" after Linux SDK upgrade (1.9.35 -> 1.9.38)</a></p>
<p>I see 2 options for you:</p>
<ul>
<li><p>downgrade to a Cloud SDK version prior to <strong>109.0.0</strong> (in which the affected GAE version 1.9.37 was introduced)</p></li>
<li><p>if you only use GAE and not other Cloud products for which you <strong>need</strong> the Cloud SDK switch to the GAE SDK (current version 1.9.40 in which the issue is fixed), see the comments to this Q&A: <a href="http://stackoverflow.com/questions/33769879/what-is-the-relationship-between-googles-app-engine-sdk-and-cloud-sdk">What is the relationship between Google's App Engine SDK and Cloud SDK?</a></p></li>
</ul>
| 0 | 2016-08-09T14:10:25Z | [
"python",
"google-app-engine",
"pip",
"google-cloud-sdk"
] |
google api client python import taskqueue | 38,848,896 | <p>python version <code>2.7.9</code></p>
<p>installed version <code>1.5.1</code></p>
<p><code>pip install --upgrade google-api-python-client</code></p>
<p>from <a href="https://cloud.google.com/appengine/docs/python/taskqueue/push/creating-tasks" rel="nofollow">here</a> trying to import task queue like so </p>
<pre><code>from google.appengine.api import taskqueue
</code></pre>
<p>getting </p>
<pre><code>ImportError: No module named google.appengine.api
</code></pre>
<p>1.5.1 is the latest version, and I can't seem to find any code reference to task queue in the pip code <a href="https://github.com/google/google-api-python-client" rel="nofollow">here</a></p>
| 1 | 2016-08-09T10:58:55Z | 38,970,027 | <p>I downloaded the SDK from here:
<a href="https://cloud.google.com/appengine/downloads" rel="nofollow">https://cloud.google.com/appengine/downloads</a></p>
<p>After downloading, I added it to my project source files.</p>
<p>Then I added the path to it in my Python code like so:</p>
<pre><code>import os, sys

test_directory = os.path.dirname(os.path.abspath(__file__))
paths = [
    '/../../google_appengine',
]
for path in paths:
    sys.path.insert(0, os.path.abspath(test_directory + path))
</code></pre>
| 0 | 2016-08-16T08:29:40Z | [
"python",
"google-app-engine",
"pip",
"google-cloud-sdk"
] |
Get specific file with glob | 38,849,001 | <p>I am trying to locate a specific .xml file with glob. But the path comes from an object, and it doesn't seem to work.</p>
<p>I followed this example: <a href="http://stackoverflow.com/questions/3608411/python-how-can-i-find-all-files-with-a-particular-extension">Python: How can I find all files with a particular extension?</a></p>
<p>The code is this:</p>
<pre><code>import glob
ren_folder = 'D:\Sentinel\D\S2A_OPER_PRD_MSIL1C_PDMC_20160710T162701'
glob.glob(ren_folder+'/'s*.xml)
</code></pre>
<p>I get invalid syntax...</p>
<blockquote>
<p>SyntaxError: invalid syntax</p>
</blockquote>
| -1 | 2016-08-09T11:03:59Z | 38,849,033 | <pre><code>glob.glob(ren_folder+'/'s*.xml)
</code></pre>
<p>You are closing the file name string prematurely; it should be:</p>
<pre><code>glob.glob(ren_folder+'/s*.xml')
</code></pre>
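Two hedged side notes: on Windows, backslash paths are safer as raw strings (e.g. <code>r'D:\Sentinel\...'</code>), since sequences such as <code>\t</code> in an ordinary string are interpreted as escape characters, and <code>os.path.join</code> keeps the pattern portable. A self-contained sketch, with a throwaway directory standing in for the real folder:

```python
import glob
import os
import tempfile

# a throwaway directory stands in for the real folder here;
# for real Windows paths, prefer raw strings such as r'D:\Sentinel\D\...'
with tempfile.TemporaryDirectory() as ren_folder:
    open(os.path.join(ren_folder, 'sample.xml'), 'w').close()
    open(os.path.join(ren_folder, 'other.txt'), 'w').close()
    matches = glob.glob(os.path.join(ren_folder, 's*.xml'))

print([os.path.basename(m) for m in matches])  # ['sample.xml']
```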
| 2 | 2016-08-09T11:05:52Z | [
"python",
"xml",
"glob"
] |
Get specific file with glob | 38,849,001 | <p>I am trying to locate a specific .xml file with glob. But the path comes from an object, and it doesn't seem to work.</p>
<p>I followed this example: <a href="http://stackoverflow.com/questions/3608411/python-how-can-i-find-all-files-with-a-particular-extension">Python: How can I find all files with a particular extension?</a></p>
<p>The code is this:</p>
<pre><code>import glob
ren_folder = 'D:\Sentinel\D\S2A_OPER_PRD_MSIL1C_PDMC_20160710T162701'
glob.glob(ren_folder+'/'s*.xml)
</code></pre>
<p>I get invalid syntax...</p>
<blockquote>
<p>SyntaxError: invalid syntax</p>
</blockquote>
| -1 | 2016-08-09T11:03:59Z | 38,849,037 | <p>Does this work?</p>
<pre><code>glob.glob(os.path.join(ren_folder, 's*.xml'))
</code></pre>
| 1 | 2016-08-09T11:05:58Z | [
"python",
"xml",
"glob"
] |
Recursively read through a folder tree and determine which folders have files | 38,849,190 | <p>I've written a program to look through images inside a folder and extract the embedded EXIF data. Currently I'm using the code below to get paths for all the files inside a folder, the variable 'folder_name' is input by the user. The list created is then used by the program to cycle through all the images.</p>
<pre><code>file_names = glob(join(expanduser('~'),'Desktop',folder_name,'*'))
</code></pre>
<p>Now I want to add a bit of functionality, that is the ability to look through a folder tree and return only the folders with images/files in them. This list could then be passed into the bit of code above to do the rest. I just need a pointer as to where to look to develop this.</p>
<p>Also, how could I select and output just the image files? Using <code>endswith('.jpg')</code> on the file path didn't work due to case sensitivity.</p>
| 0 | 2016-08-09T11:13:00Z | 38,849,357 | <p>You can work around case-sensitivity by applying the <code>.lower()</code> string method to both strings before comparing them.</p>
| 0 | 2016-08-09T11:23:05Z | [
"python",
"python-2.7",
"folders"
] |
Recursively read through a folder tree and determine which folders have files | 38,849,190 | <p>I've written a program to look through images inside a folder and extract the embedded EXIF data. Currently I'm using the code below to get paths for all the files inside a folder, the variable 'folder_name' is input by the user. The list created is then used by the program to cycle through all the images.</p>
<pre><code>file_names = glob(join(expanduser('~'),'Desktop',folder_name,'*'))
</code></pre>
<p>Now I want to add a bit of functionality, that is the ability to look through a folder tree and return only the folders with images/files in them. This list could then be passed into the bit of code above to do the rest. I just need a pointer as to where to look to develop this.</p>
<p>Also, how could I select and output just the image files? Using <code>endswith('.jpg')</code> on the file path didn't work due to case sensitivity.</p>
| 0 | 2016-08-09T11:13:00Z | 38,849,367 | <p>You can try <a href="https://docs.python.org/2/library/os.html#os.walk" rel="nofollow">os.walk</a> + <a href="https://docs.python.org/2/library/mimetypes.html#mimetypes.guess_type" rel="nofollow">mimetypes.guess_type</a> like so</p>
<pre><code>import os
import os.path
import mimetypes

top = "."
imagefiles = []
for root, dirs, files in os.walk(top):
    for fn in files:
        t, e = mimetypes.guess_type(fn, strict=False)
        if t and t.startswith("image/"):
            imagefiles.append(os.path.join(root, fn))
</code></pre>
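If the goal is specifically a list of the folders that contain at least one image, a minimal sketch along the same lines (the extension list is an assumption; adjust to taste):

```python
import os
import tempfile

def dirs_with_images(top, exts=('.jpg', '.jpeg', '.png', '.gif')):
    # walk the tree and keep only directories holding at least one image,
    # matching extensions case-insensitively
    found = []
    for root, dirs, files in os.walk(top):
        if any(f.lower().endswith(exts) for f in files):
            found.append(root)
    return sorted(found)

# tiny self-test on a throwaway tree
with tempfile.TemporaryDirectory() as top:
    os.makedirs(os.path.join(top, 'a'))
    os.makedirs(os.path.join(top, 'b'))
    open(os.path.join(top, 'a', 'pic.JPG'), 'w').close()
    open(os.path.join(top, 'b', 'notes.txt'), 'w').close()
    result = [os.path.relpath(d, top) for d in dirs_with_images(top)]

print(result)  # ['a']
```

The resulting folder list can then be fed into the glob-based snippet above.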
| 1 | 2016-08-09T11:23:31Z | [
"python",
"python-2.7",
"folders"
] |
Sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) | 38,849,252 | <p>My Flask app (PostgreSQL DB) works fine locally. I pushed my code to the server, and when I tried <code>run.py db migrate</code> there, it threw these errors:</p>
<pre><code>Traceback (most recent call last):
File "run.py", line 11, in <module>
create_app().run()
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 239, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 87, in <module>
run_migrations_online()
File "migrations/env.py", line 72, in run_migrations_online
connection = engine.connect()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2018, in connect
return self._connection_cls(self, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 72, in __init__
if connection is not None else engine.raw_connection()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2104, in raw_connection
self.pool.unique_connection, _connection)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
e, dialect, self)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception_noconnection
exc_info
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 318, in unique_connection
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 713, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 1151, in _do_get
return self._create_connection()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 449, in __init__
self.connection = self.__connect()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 607, in __connect
connection = self.__pool._invoke_creator(self)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 385, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: parameter "listen_addresses" cannot be changed without restarting the server
</code></pre>
<p><code>run.py db init</code> worked fine and successfully created the migrations folder on the server. Any help would be highly appreciated.</p>
<p>My main app code </p>
<pre><code>app = Flask(__name__)
app.config.from_object('config')
rest_api = Api(app)
db = SQLAlchemy(app)
bcrypt = Bcrypt(app)
from app import routes
Compress(app)
assets = Environment(app)
define_assets(assets)
cache = Cache(app,config={'CACHE_TYPE': 'simple'})
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)
</code></pre>
<p>and <code>run.py</code> code </p>
<pre><code>from flask_failsafe import failsafe

@failsafe
def create_app():
    from app import manager
    return manager

from app import app

if __name__ == '__main__':
    create_app().run()
</code></pre>
| 0 | 2016-08-09T11:17:44Z | 38,972,526 | <p>Fixed it with <code>sudo</code>:</p>
<pre><code>sudo run.py db migrate
</code></pre>
<p>I know it is weird, but yes, sudo did the trick.</p>
| 0 | 2016-08-16T10:29:30Z | [
"python",
"postgresql",
"flask-sqlalchemy",
"flask-restful"
] |
Matplotlib: How to make a dotted line consisting of dots (circles)? | 38,849,258 | <p>I have two smooth dependencies y1(x) and y2(x), where the x's are distributed irregularly. I want these to be drawn as dotted lines (<code>linestyle = ':'</code>). What I get now in a *.pdf file is shown <a href="http://i.stack.imgur.com/s9qPA.png" rel="nofollow">here</a>:</p>
<p>Here's the code:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
x = [0, 1, 2, 3, 5, 7, 13, 14]
y1 = [3, 5, 6, 8, 7, 6, 9, 10]
y2 = [1, 7, 8, 10, 14, 18, 20, 23]
ax.plot(x, y1,
linestyle = ':',
linewidth = 4,
color = 'Green')
ax.plot(x, y2,
linestyle = ':',
linewidth = 4,
color = 'Blue')
ax.set_ylabel('y(x)')
ax.set_xlabel('x')
plt.savefig("./test_dotted_line.pdf")
</code></pre>
<p>I played with <code>dashes = [2,2]</code> (and other combinations) and <code>dash_capstyle = 'round'</code>, but the result looks bad.</p>
<p>Is there a chance to have a dotted line consisting of 'circle' dots?</p>
| 2 | 2016-08-09T11:18:14Z | 38,854,746 | <p>Remove <code>linewidth</code>. Then it draws little squares; good enough?</p>
<p>You can also round-off the squares with <code>dash_capstyle = "round"</code>.</p>
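For actual circular dots, one trick worth trying (a sketch, not the answer author's method, and assuming a reasonably recent matplotlib) is a custom dash pattern with a near-zero "on" length plus round caps, so each dash renders as a circle:

```python
import os

import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 5, 7, 13, 14]
y1 = [3, 5, 6, 8, 7, 6, 9, 10]

fig, ax = plt.subplots()
# a dash pattern with a near-zero "on" length and round caps draws circles;
# the (offset, (on, off)) tuple form is matplotlib's documented dash spec
ax.plot(x, y1, linewidth=4, color='Green',
        linestyle=(0, (0.1, 2)), dash_capstyle='round')
fig.savefig("test_round_dots.pdf")
print(os.path.exists("test_round_dots.pdf"))  # True
```

The dot spacing is controlled by the "off" value (2 here) and the dot diameter by <code>linewidth</code>.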
| 0 | 2016-08-09T15:24:22Z | [
"python",
"matplotlib"
] |
How to jump to a specific page using BeautifulSoup | 38,849,300 | <p>I want to get data for a product searched by the user in Python. I am able to get data from any URL, but depending on the search I need to jump to that product's page and get the data using BeautifulSoup.</p>
<p>I tried this to get data:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import urllib2
url="http://amazon.in"
con=urllib2.urlopen(url).read()
soup=BeautifulSoup(con)
print soup.prettify()
</code></pre>
<p>But if the user wants the price of an iPhone 5s, it should jump to that product page and get the data.</p>
<p>How do I do this?</p>
| 0 | 2016-08-09T11:20:16Z | 38,856,198 | <p>You just need to do a GET request, passing the correct params:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
params = {"url":"search-alias=aps","field-keywords":"iphone 5"}
url = "http://www.amazon.in/s/ref=nb_sb_noss_2"
soup = BeautifulSoup(requests.get(url, params=params).content)
ul = soup.select_one("#s-results-list-atf")
</code></pre>
<p><code>ul</code> will contain all the search results you see on the page. If we run the code and look at the <code>h2</code> tags inside each anchor, we can see the item names/descriptions as they appear on the page.</p>
<pre><code>In [6]: ul = soup.select_one("#s-results-list-atf")
In [7]: for h2 in ul.select("li a h2"):
...: print(h2.text)
...:
Apple iPhone 5s (Space Grey, 16GB)
Apple iPhone 5s (Silver, 16GB)
Supra Lightning 8 Pin To Micro Usb Charge Sync Data Connector Adapter Iphone 5 Ipad 4
OnePlus 3 (Graphite, 64GB)
Apple iPhone 5 (Black-Slate, 16GB)
ROCK 695029068729 Royce Series Shockproof Dual Layer Back Case Cover for Apple iPhone 5 5S,(Grey)
Apple iPhone 5c (White, 8GB)
iSAVE Soft Silicone Grid Design Back Case Cover For iPhone 5/5s (BLACK)
iPaky AT15312 360 Protective Body Case with Tempered Glass for Apple iPhone SE 5 5S,(Black)
Aeoss 9Pcs Open Pry Screwdriver Repair Tool Kit Set For iPhone 6 Plus 5 5s 5c 4 iPod.
2 IN 1 Tempered Glass for Iphone 5 5s 5c Explosion Proof Tempered Glass (FRONT AND BACK)
Itab iphone5sclearsoftgelly Imported Transparent Clear Silicone Jelly Soft Case Back Cover For Apple Iphone 5 5S
Shivam Earphones EarPods Handsfree Headphones for Apple iPhone 4/4s/5/5s/6/6+ (White)
USB Power Adapter Wall Charger&Data Cable for iPhone 5/5S/5C/6
Generic Ios 7 Compatible Data Sync Charging Cable For Apple Iphone 5 5S 6 - White
Tempered Glass Screen Protector Scratch Guard for Apple Iphone 5 5G 5s
</code></pre>
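The same selection pattern can be exercised offline against a small HTML snippet (toy markup, not Amazon's real page structure):

```python
from bs4 import BeautifulSoup

# toy markup mimicking the result-list structure selected above
html = """
<ul id="s-results-list-atf">
  <li><a href="#"><h2>Apple iPhone 5s (Space Grey, 16GB)</h2></a></li>
  <li><a href="#"><h2>Apple iPhone 5s (Silver, 16GB)</h2></a></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
ul = soup.select_one("#s-results-list-atf")
titles = [h2.text for h2 in ul.select("li a h2")]
print(titles)
```

This makes it easy to test the CSS selectors without hitting the network on every run.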
| 0 | 2016-08-09T16:37:28Z | [
"python",
"beautifulsoup",
"web-crawler"
] |
Python flask use buttons to trigger scripts | 38,849,318 | <p>I've been researching on the web, and mainly getting myself confused about several things.</p>
<p>First, what I need: my company runs Java applications on our Linux servers that non-technical people use. Sometimes we have to restart those programs, which first means killing them.</p>
<p>I've built a small tool (runs on Linux) that uses <code>jps -lm | grep -v Jps</code> to recover all Java processes, then uses splits and patterns to produce something that looks like this:</p>
<pre><code>You have those programs running:
1. Java A
2. Java B
3. Java C
4. Exit
5. Refresh
</code></pre>
<p>They choose a number, it asks for approval and, if allowed, kills the process (<code>kill procId</code>) and restarts it (a while loop runs until they type 4, which triggers a <code>break</code>).</p>
<p>Now I want to take it one level up, using a web browser to keep them away from my Linux servers (I'm the sysadmin).</p>
<p>I've found that Flask would be great for me; I created a web page and even managed to make it available over the web.</p>
<p>What I intend is that the output of my <code>jps -lm | grep -v Jps</code> is inserted into the HTML file when someone reaches the web page.
When the HTML is ready, they are redirected to it and push the appropriate button; an <code>onclick=alert('are you sure?')</code> pops up, and if approved it triggers a
<code>subprocess.Popen('kill ' + procId, stdout=subprocess.PIPE, shell=True)</code> command that kills the process and redirects them back to the page where my script maps the currently running Java processes.</p>
<p>I'm able to do everything except the buttons part; I have no idea how to get the button press through to the Python code.</p>
<p>I would appreciate any help. For the mapping script I intended to use this solution: <a href="http://stackoverflow.com/questions/10903615/create-a-hyperlink-or-button-that-executes-a-python-script-and-then-redirects">Create a hyperlink (or button) that executes a python script and then redirects when script completes</a>.</p>
<p>Sorry if my question is badly structured; I would appreciate any help with this as well!</p>
<p>Thanks!</p>
| 0 | 2016-08-09T11:21:32Z | 38,886,090 | <p>Take a look at the example below; I hope it helps.</p>
<p>If you are using inline JavaScript, you don't have to use this variable and can access <code>url_for()</code> directly from the AJAX request. If you are using a separate JavaScript file, you have to put this variable in your template, since <code>.js</code> files cannot render Jinja2 code like <code>{{}}</code>.</p>
<pre><code>var flask_endpoint = "{{url_for('endpoint_name')}}"
</code></pre>
<p>ONLY IF YOUR APP USES CSRF, USE THIS:</p>
<pre><code>var csrftoken = "{{ csrf_token() }}";
$.ajaxSetup({
beforeSend: function(xhr, settings) {
if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken)
}
}
})
</code></pre>
<p>IF NOT THEN SKIP THE ABOVE</p>
<p>Your JavaScript should look like this (you don't have to use the <code>done()</code> callback):</p>
<pre><code>$('#button-id').on('click', function(){
var data = {'key': 'some value'};
$.ajax({
type : "POST",
url : flask_endpoint,
data: JSON.stringify(data),
contentType: 'application/json;charset=UTF-8',
}).done(function (result) {
alert(result);
});
alert('data has been sent to the server');
});
</code></pre>
<p>Below is how your Python code should look:</p>
<pre><code>from flask import request
@app.route('/ajax_endpoint', methods=['POST', 'GET'])
def endpoint_name():
# check if the request type is POST (we defined POST for our ajax request)
if request.method == 'POST':
data = request.form['key'] # this is the key in our data object
# do whatever you want with the data
</code></pre>
| 0 | 2016-08-11T01:52:41Z | [
"python",
"linux",
"button",
"flask",
"webpage"
] |
how to call python method from server action in odoo 9 | 38,849,404 | <p>I am trying to call a Python method from a server action in the same module. No error is raised, but the method is not called. I also followed this link <a href="https://www.odoo.com/forum/help-1/question/how-to-open-a-form-with-ir-action-server-which-executes-python-code-61910" rel="nofollow">for calling the Python method</a>. This is the code I use to call the method:</p>
<p>.xml</p>
<pre><code><record id="action_email_data_parser_server" model="ir.actions.server" >
<field name="name">[Server Action] Create Leads from Mail</field>
<field name="model_id" ref="model_email_data_parser"/>
<field name="condition">True</field>
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
lead = self.browse(cr, uid, context['active_id'], context=context)
mail_message = lead.message_ids[0]
mail_body = mail_message['body']
lead_data_dict = self.parse_body(cr, uid, mail_body)
self.write(cr, uid, context['active_id'], lead_data_dict)
</field>
</record>
</code></pre>
<p>.py</p>
<pre><code>class email_data_parser(osv.osv):
    _name = "email_data_parser"
    _description = "Email Data Parser"

    def parse_body(self, uid, body):
        data_dist = body
        return data_dist
</code></pre>
<p>So, if anyone has any idea, please share with me how I can solve this.</p>
| 1 | 2016-08-09T11:25:41Z | 38,866,996 | <pre><code> <!-- server action : can be called from menu and open a view from a python methode so we can change -->
<!-- server action : the context of the action -->
<record id="idOfServerAction" model="ir.actions.server">
<field name="name">Action Title</field>
<field name="condition">True</field>
<field name="type">ir.actions.server</field>
<field name="model_id" ref="model_modelName" />
<!--Ex ref='model_res_users -->
<field name="code">action = self.MethodeName(cr, uid, None, context)</field>
<!-- EX: action = self.AddUser(cr, uid, None, context) -->
</record>
<menuitem id="idOfMenuItem" name="Title of the menu" action="idOfServerAction" />
</code></pre>
| 0 | 2016-08-10T07:35:39Z | [
"python",
"xml",
"odoo-9"
] |
how to call python method from server action in odoo 9 | 38,849,404 | <p>I am trying to call a Python method from a server action in the same module. No error is raised, but the method is not called. I also followed this link <a href="https://www.odoo.com/forum/help-1/question/how-to-open-a-form-with-ir-action-server-which-executes-python-code-61910" rel="nofollow">for calling the Python method</a>. This is the code I use to call the method:</p>
<p>.xml</p>
<pre><code><record id="action_email_data_parser_server" model="ir.actions.server" >
<field name="name">[Server Action] Create Leads from Mail</field>
<field name="model_id" ref="model_email_data_parser"/>
<field name="condition">True</field>
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
lead = self.browse(cr, uid, context['active_id'], context=context)
mail_message = lead.message_ids[0]
mail_body = mail_message['body']
lead_data_dict = self.parse_body(cr, uid, mail_body)
self.write(cr, uid, context['active_id'], lead_data_dict)
</field>
</record>
</code></pre>
<p>.py</p>
<pre><code>class email_data_parser(osv.osv):
    _name = "email_data_parser"
    _description = "Email Data Parser"

    def parse_body(self, uid, body):
        data_dist = body
        return data_dist
</code></pre>
<p>So, if anyone has any idea, please share with me how I can solve this.</p>
| 1 | 2016-08-09T11:25:41Z | 38,919,683 | <p>I got the answer. When you want to trigger any server action, just follow this code; it may help you.</p>
<p>.xml </p>
<pre><code><record id="action_email_data_parser_server" model="ir.actions.server" >
<field name="name">[Server Action] Create Leads from Mail</field>
<field name="model_id" ref="model_email_data_parser"/>
<field name="condition">True</field>
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
action = self.parse_body(cr, uid, context=context)
</field>
</record>
</code></pre>
<p>.py</p>
<pre><code>import logging
from openerp.osv import fields, osv, expression
from openerp import models, api

_logger = logging.getLogger(__name__)

class email_data_parser(osv.osv):
    _name = "email_data_parser"
    _description = "Email Data Parser"

    def parse_body(self, cr, uid, context=None):
        _logger.info('This is FDS email parsing and through we can parse the mail from any email')
</code></pre>
<p>Then check odoo-server.log. If the message shows up, your server action is calling the Python method.</p>
| 1 | 2016-08-12T13:51:24Z | [
"python",
"xml",
"odoo-9"
] |
Python: reading float80 values | 38,849,469 | <p>I have an array of 10-byte (80-bit) little-endian float values (or <code>float80</code>). How can I read these values in Python 3?</p>
<p>The package <code>struct</code> does not support <code>float80</code> (maybe I read the docs carelessly).</p>
<p>The package <code>array</code>, like <code>struct</code>, does not support <code>float80</code>.</p>
<p>The package <code>numpy</code> supports <code>float128</code> or <code>float96</code> types. That's very good, but appending <code>\x00</code> to the tail of a <code>float80</code> to extend it to <code>float96</code> or <code>float128</code> is ugly, and importing this package takes a lot of time.</p>
<p>The package <code>ctypes</code> supports <code>c_longdouble</code>. It's many times faster than numpy, but <code>sizeof(c_longdouble)</code> is machine-dependent and can be less than 80 bits; appending <code>\x00</code> to the tail of a <code>float80</code> to extend it to <code>c_longdouble</code> is ugly too.</p>
<p><strong>UPDATE 1</strong>: test code is at my <a href="https://gist.github.com/kai3341/27e158bb0a9163f1603902abaeb9e940" rel="nofollow">gist.github</a>.
The function <code>decode_str64</code> is ugly, but it works. Now I'm looking for the right way.</p>
| 2 | 2016-08-09T11:28:23Z | 38,850,175 | <p>Let me rewrite my answer in a more logical way:</p>
<p><code>ctypes c_longdouble</code> is machine-dependent because the long double type is not set in stone by the C standard; it depends on the compiler :( but it is still the best you can get right now for high-precision floats...</p>
<p>If you plan to use numpy, <code>numpy.longdouble</code> is what you are looking for; <code>numpy.float96</code> and <code>numpy.float128</code> are highly misleading names. They do not indicate 96- or 128-bit IEEE floating-point formats. Instead, they indicate the number of bits of alignment used by the underlying long double type. So e.g. on x86-32, long double is 80 bits, but gets padded up to 96 bits to maintain 32-bit alignment, and numpy calls this <code>float96</code>. On x86-64, long double is again the identical 80-bit type, but now it gets padded up to 128 bits to maintain 64-bit alignment, and numpy calls this <code>float128</code>. There's no extra precision, just extra padding.</p>
<p>Appending <code>\x00</code> at the end of a <code>float80</code> to make a <code>float96</code> is ugly, but in the end it is just that: <code>float96</code> is just a padded <code>float80</code>, and <code>numpy.longdouble</code> is a <code>float96</code> or <code>float128</code> depending on the architecture of the machine you use.</p>
<p><a href="http://stackoverflow.com/questions/9062562/what-is-the-internal-precision-of-numpy-float128">What is the internal precision of numpy.float128?</a></p>
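For completeness, an x87 80-bit little-endian value can also be decoded in pure Python with only the standard library. A minimal sketch (the result is capped at the double precision of a Python float, and NaN payloads and pseudo-denormals are not handled carefully):

```python
import math

def float80_le(raw):
    # raw: 10 bytes, little-endian x87 extended precision
    # layout: 64-bit significand (explicit integer bit), 15-bit exponent, sign bit
    mant = int.from_bytes(raw[:8], 'little')
    se = int.from_bytes(raw[8:10], 'little')
    sign = -1.0 if se & 0x8000 else 1.0
    exp = se & 0x7FFF
    if exp == 0x7FFF:                      # infinities and NaNs
        return sign * math.inf if mant == 1 << 63 else math.nan
    if exp == 0:                           # zeros and denormals
        return sign * math.ldexp(mant, -16382 - 63)
    return sign * math.ldexp(mant, exp - 16383 - 63)

print(float80_le(b'\x00\x00\x00\x00\x00\x00\x00\x80\xff\x3f'))  # 1.0
```

This avoids both the numpy import cost and the padding hack, at the price of losing the extra significand bits when converting to a 64-bit Python float.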
| 2 | 2016-08-09T12:02:43Z | [
"python",
"floating-point",
"long-double"
] |
Python: reading float80 values | 38,849,469 | <p>I have an array of 10-bytes (80-bits) Little Endian float values (or <code>float80</code>). How can i read this values in python 3?</p>
<p>The package <code>struct</code> does not support <code>float80</code> (may be I read the docs carelessly).</p>
<p>The package <code>array</code> as same as package "struct" does not support <code>float80</code>.</p>
<p>The package <code>numpy</code> supports <code>float128</code> or <code>float96</code> types. It's very good, but appending <code>\x00</code> in a tail of <code>float80</code> to extend it to <code>float96</code> or <code>float128</code> is ugly, importing of this package takes a lot of time.</p>
<p>The package <code>ctypes</code> supports <code>c_longdouble</code>. It's many times faster then numpy, but <code>sizeof(c_longdouble)</code> is machine-dependent and can be less then 80 bits, appending <code>\x00</code> in a tail of <code>float80</code> to extend it to <code>c_longdouble</code> is ugly too.</p>
<p><strong>UPDATE 1</strong>: test code at my <a href="https://gist.github.com/kai3341/27e158bb0a9163f1603902abaeb9e940" rel="nofollow">gist.github</a>.
The function <code>decode_str64</code> is ugly, but it works. Now I'm looking for right way</p>
| 2 | 2016-08-09T11:28:23Z | 39,785,867 | <p><code>numpy</code> <a href="http://docs.scipy.org/doc/numpy-dev/user/basics.types.html#extended-precision" rel="nofollow">can use 80-bit float if the compiler and platform support them</a>:</p>
<blockquote>
<p>Whether [supporting higher precision] is possible in numpy depends on
the hardware and on the development environment: specifically, x86
machines provide hardware floating-point with 80-bit precision, and
while most C compilers provide this as their <code>long double</code> type, MSVC
(standard for Windows builds) makes <code>long double</code> identical to double
(64 bits). Numpy makes the compiler's long double available as
<code>np.longdouble</code> (and np.clongdouble for the complex numbers). You can
find out what your numpy provides with <code>np.finfo(np.longdouble)</code>.</p>
</blockquote>
<p>I checked that <code>np.longdouble</code> is <code>float64</code> in stock <code>numpy-1.11.1-win32.whl</code> at PyPI as well as in <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy" rel="nofollow">Gohlke's build</a> and <code>float96</code> in <code>numpy-1.4.1-9.el6.i686</code> in CentOS 6.</p>
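<p>As a quick check (a sketch; the exact type and precision depend on your platform and compiler), you can inspect what <code>np.longdouble</code> is on your machine and, where it is the padded 80-bit x87 type, read little-endian <code>float80</code> data by zero-padding each value:</p>

```python
import numpy as np

info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble))  # float64, float96 or float128 depending on the platform
print(info.nmant)               # 52 for a plain double, 63 for the x87 80-bit format

# Where longdouble is the padded 80-bit type, a little-endian float80 can be
# read by zero-padding each 10-byte value to the width of np.longdouble.
raw = b'\x00\x00\x00\x00\x00\x00\x00\x80\xff\x3f'  # 80-bit encoding of 1.0
pad = np.dtype(np.longdouble).itemsize - 10
if pad >= 0 and info.nmant == 63:
    value = np.frombuffer(raw + b'\x00' * pad, dtype=np.longdouble)[0]
    print(value)  # 1.0
```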
| 0 | 2016-09-30T07:20:46Z | [
"python",
"floating-point",
"long-double"
] |
Compare two excel files in Pandas and return the rows which have the same values in TWO columns | 38,849,538 | <p>I have a couple of excel files. Both the files have two common columns: Customer_Name and Customer_No. The first excel file has around 800k rows while the second has only 460. I want to get a dataframe which has the common data in both the files, ie obtain the rows from the first file that has both the Customer_Name and Customer_No. found in the 2nd file. I tried using .isin but so far I found examples using only a single variable(Column). Thanks in advance!</p>
| -1 | 2016-08-09T11:31:50Z | 38,849,591 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>df = pd.merge(df1, df2, on=['Customer_Name','Customer_No'])
</code></pre>
<p>If you have different column names use <code>left_on</code> and <code>right_on</code>:</p>
<pre><code>df = pd.merge(df1,
df2,
left_on=['Customer_Name','Customer_No'],
right_on=['Customer_head','Customer_Id'])
</code></pre>
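<p>A minimal runnable sketch (the column names come from the question; the data is made up for illustration):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Customer_Name': ['A', 'B', 'C'],
                    'Customer_No': [1, 2, 3],
                    'Sales': [100, 200, 300]})
df2 = pd.DataFrame({'Customer_Name': ['A', 'C'],
                    'Customer_No': [1, 3]})

# keeps only the rows whose (name, number) pair appears in both frames
common = pd.merge(df1, df2, on=['Customer_Name', 'Customer_No'])
print(common)
```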
| 2 | 2016-08-09T11:34:09Z | [
"python",
"excel",
"python-2.7",
"pandas"
] |
Compare two excel files in Pandas and return the rows which have the same values in TWO columns | 38,849,538 | <p>I have a couple of excel files. Both the files have two common columns: Customer_Name and Customer_No. The first excel file has around 800k rows while the second has only 460. I want to get a dataframe which has the common data in both the files, ie obtain the rows from the first file that has both the Customer_Name and Customer_No. found in the 2nd file. I tried using .isin but so far I found examples using only a single variable(Column). Thanks in advance!</p>
| -1 | 2016-08-09T11:31:50Z | 38,849,663 | <p>IIUC and you don't need extra columns from the second file - it will be used just for joining, you can do it this way:</p>
<pre><code>common_cols = ['Customer_Name','Customer_No']
df = (pd.read_excel(filename1)
.join(pd.read_excel(filename2, usecols=common_cols),
on=common_cols))
</code></pre>
| 0 | 2016-08-09T11:37:39Z | [
"python",
"excel",
"python-2.7",
"pandas"
] |
Compare two excel files in Pandas and return the rows which have the same values in TWO columns | 38,849,538 | <p>I have a couple of excel files. Both the files have two common columns: Customer_Name and Customer_No. The first excel file has around 800k rows while the second has only 460. I want to get a dataframe which has the common data in both the files, ie obtain the rows from the first file that has both the Customer_Name and Customer_No. found in the 2nd file. I tried using .isin but so far I found examples using only a single variable(Column). Thanks in advance!</p>
| -1 | 2016-08-09T11:31:50Z | 38,849,772 | <p>I think the direct way will be like this:</p>
<pre><code>df_file1 = pd.read_csv(file1, index_col='Customer_No')
df_file2 = pd.read_csv(file2, index_col='Customer_No')
for index, row in df_file1.iterrows():
    if row.get_value('Customer_Name') in df_file2['Customer_Name'].values:
        pass  # count a match here, or collect [index, row] into a result
</code></pre>
<p>Here you can count (simply with an integer) or do a more complicated job, like adding <code>[index, row]</code> to a result DataFrame, if needed.</p>
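<p>A vectorized alternative to the row-by-row loop (a sketch with made-up data; this also answers the original <code>.isin</code> question by matching on both columns at once through a temporary index):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Customer_Name': ['A', 'B', 'C'],
                    'Customer_No': [1, 2, 3],
                    'Sales': [100, 200, 300]})
df2 = pd.DataFrame({'Customer_Name': ['A', 'C'],
                    'Customer_No': [1, 3]})

keys = ['Customer_Name', 'Customer_No']
# True where a row's (name, number) pair also appears in df2
mask = df1.set_index(keys).index.isin(df2.set_index(keys).index)
common = df1[mask]
print(common)
```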
| 0 | 2016-08-09T11:42:53Z | [
"python",
"excel",
"python-2.7",
"pandas"
] |
Joint two array to one array? | 38,849,653 | <p>a:</p>
<pre><code>[[1,2,3],
[4,5,6]]
</code></pre>
<p>b:</p>
<pre><code>[[7,8,9],
[10,11,12]]
</code></pre>
<p>How can I get an array like:</p>
<pre><code>[[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]]
</code></pre>
<p>using a and b?</p>
| -3 | 2016-08-09T11:37:08Z | 38,850,275 | <p>You can use <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow">python's append method</a>:</p>
<pre><code>x = []
x.append(a)
x.append(b)
</code></pre>
<p>Or in short (mentioned by @Kasramvd in the comments):</p>
<pre><code>x = [a, b]
</code></pre>
| 2 | 2016-08-09T12:07:37Z | [
"python"
] |
Joint two array to one array? | 38,849,653 | <p>a:</p>
<pre><code>[[1,2,3],
[4,5,6]]
</code></pre>
<p>b:</p>
<pre><code>[[7,8,9],
[10,11,12]]
</code></pre>
<p>How can I get an array like:</p>
<pre><code>[[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]]
</code></pre>
<p>using a and b?</p>
| -3 | 2016-08-09T11:37:08Z | 38,850,350 | <p>You can use the append over empty list to add as many list as you want. See below example.</p>
<pre><code>>>> final_list = []
>>> a = [[1,2,3], [4,5,6]]
>>> b = [[7,8,9], [10,11,12]]
>>> final_list.append(a)
>>> final_list.append(b)
>>> final_list
[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]
</code></pre>
| 0 | 2016-08-09T12:10:12Z | [
"python"
] |
Joint two array to one array? | 38,849,653 | <p>a:</p>
<pre><code>[[1,2,3],
[4,5,6]]
</code></pre>
<p>b:</p>
<pre><code>[[7,8,9],
[10,11,12]]
</code></pre>
<p>How can I get an array like:</p>
<pre><code>[[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]]
</code></pre>
<p>using a and b?</p>
| -3 | 2016-08-09T11:37:08Z | 38,852,351 | <p>As mentioned by other answers, if you want <code>[[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]]</code>, call <code>append</code> or simply write <code>c = [a, b]</code>.</p>
<p>However, the title reads "<strong>Joint</strong> two array to one array?", so I suppose what you actually expect is <code>[[1,2,3],[4,5,6],[7,8,9],[10,11,12]]</code>, which seems more useful. To do this, call <code>extend</code> like this:</p>
<pre><code>x = []
x.extend(a)
x.extend(b)
</code></pre>
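<p>To make the difference between the two concrete:</p>

```python
a = [[1, 2, 3], [4, 5, 6]]
b = [[7, 8, 9], [10, 11, 12]]

nested = [a, b]  # append-style result
flat = a + b     # extend-style result

print(nested)  # [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]
print(flat)    # [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
```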
| 0 | 2016-08-09T13:36:32Z | [
"python"
] |
parse date-time while reading 'csv' file with pandas | 38,849,676 | <p>I am trying to parse dates while I am reading my data from a csv file. The command that I use is </p>
<pre><code>df = pd.read_csv('/Users/n....', names=names, parse_dates=['date'])
</code></pre>
<p>And it is working on my files generally.
But I have a couple of data sets with a variety of date formats. I mean, one line has a date format like <code>09/20/15 09:59</code> while other lines in the same file have a format like <code>2015-09-20 10:22:01.013</code>. The command that I wrote above doesn't work on these files. It works when I delete <code>parse_dates=['date']</code>, but then I can't use the date column as <code>datetime</code>; it reads that column as integer. I would appreciate it if anyone could answer this!</p>
| 1 | 2016-08-09T11:38:15Z | 38,849,974 | <p>Like this:</p>
<pre><code>df = pd.read_csv(file, names=names)
df['date'] = pd.to_datetime(df['date'])
</code></pre>
| 0 | 2016-08-09T11:52:26Z | [
"python",
"date",
"csv",
"pandas"
] |
parse date-time while reading 'csv' file with pandas | 38,849,676 | <p>I am trying to parse dates while I am reading my data from a csv file. The command that I use is </p>
<pre><code>df = pd.read_csv('/Users/n....', names=names, parse_dates=['date'])
</code></pre>
<p>And it is working on my files generally.
But I have a couple of data sets with a variety of date formats. I mean, one line has a date format like <code>09/20/15 09:59</code> while other lines in the same file have a format like <code>2015-09-20 10:22:01.013</code>. The command that I wrote above doesn't work on these files. It works when I delete <code>parse_dates=['date']</code>, but then I can't use the date column as <code>datetime</code>; it reads that column as integer. I would appreciate it if anyone could answer this!</p>
| 1 | 2016-08-09T11:38:15Z | 38,850,852 | <p>Pandas <code>read_csv</code> accepts <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>date_parser</code></a> argument which you can define your own date parsing function. So for example in your case you have 2 different datetime formats you can simply do:</p>
<pre><code>import datetime

def date_parser(d):
    try:
        d = datetime.datetime.strptime(d, '%m/%d/%y %H:%M')  # e.g. 09/20/15 09:59
    except ValueError:
        try:
            d = datetime.datetime.strptime(d, '%Y-%m-%d %H:%M:%S.%f')  # e.g. 2015-09-20 10:22:01.013
        except ValueError:
            pass  # both formats failed, do something about it
    return d

df = pd.read_csv('/Users/n....',
                 names=names,
                 parse_dates=['date1', 'date2'],
                 date_parser=date_parser)
</code></pre>
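<p>A vectorized alternative (a sketch; the sample values and format strings are assumptions based on the examples in the question) is to parse each format separately with <code>errors='coerce'</code> and combine the results:</p>

```python
import pandas as pd

s = pd.Series(['09/20/15 09:59', '2015-09-20 10:22:01.013'])

first = pd.to_datetime(s, format='%m/%d/%y %H:%M', errors='coerce')
second = pd.to_datetime(s, format='%Y-%m-%d %H:%M:%S.%f', errors='coerce')
parsed = first.fillna(second)  # rows that are NaT in one format fall back to the other
print(parsed)
```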
<p>You can then parse those dates in different formats in those columns.</p>
| 1 | 2016-08-09T12:31:27Z | [
"python",
"date",
"csv",
"pandas"
] |
pandas applying multicolumnindex to dataframe | 38,849,812 | <p>The situation is that I have a few files with time_series data for various stocks with several fields. Each file contains</p>
<pre><code>time, open, high, low, close, volume
</code></pre>
<p>the goal is to get that all into one dataframe of the form</p>
<pre><code>field open high ...
security hk_1 hk_2 hk_3 ... hk_1 hk_2 hk_3 ... ...
time
t_1 open_1_1 open_2_1 open_3_1 ... high_1_1 high_2_1 high_3_1 ... ...
t_2 open_1_2 open_2_2 open_3_2 ... high_1_2 high_2_2 high_3_2 ... ...
... ... ... ... ... ... ... ... ... ...
</code></pre>
<p>I created a multiindex</p>
<pre><code>fields = ['time','open','high','low','close','volume','numEvents','value']
midx = pd.MultiIndex.from_product([[security_name], fields], names=['security', 'field'])
</code></pre>
<p>and for a start, tried to apply that multiindex to the dataframe I get from reading the data from csv (by creating a new dataframe and adding the index)</p>
<pre><code>for c in eqty_names_list:
midx = pd.MultiIndex.from_product([[c], fields], names=['security', 'field'])
df_temp = pd.read_csv('{}{}.csv'.format(path, c))
df_temp = pd.DataFrame(df_temp, columns=midx, index=df_temp['time'])
df_temp.df_name = c
all_dfs.append(df_temp)
</code></pre>
<p>However, the new dataframe only contains nan</p>
<pre><code>security 1_HK
field time open high low close volume
time
NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>Also, it still contains a column for time, although I tried to make that the index (so that I can later join all the other dataframes for other stocks by index to get the aggregated dataframe).</p>
<p>How can I apply the multiindex to the dataframe without losing my data and then later join the dataframes looking like this</p>
<pre><code>security 1_HK
field time open high low close volume
time
</code></pre>
<p>to create something like (note hierarchy field and security are switched)</p>
<pre><code>field time open high ...
security 1_HK 2_HK ... 1_HK 2_HK ... ...
time
</code></pre>
| 1 | 2016-08-09T11:44:49Z | 38,850,014 | <p>I think you can first get all files to list <code>files</code>, then with list comprehension get all DataFrames and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> them by columns <code>(axis=1)</code>. If add parameter <code>keys</code>, you get <code>Multiindex</code> in columns:</p>
<p>Files: </p>
<p><a href="https://dl.dropboxusercontent.com/u/84444599/web/a.csv" rel="nofollow">a.csv</a>,
<a href="https://dl.dropboxusercontent.com/u/84444599/web/b.csv" rel="nofollow">b.csv</a>,
<a href="https://dl.dropboxusercontent.com/u/84444599/web/c.csv" rel="nofollow">c.csv</a></p>
<pre><code>import pandas as pd
import glob
files = glob.glob('files/*.csv')
dfs = [pd.read_csv(fp) for fp in files]
eqty_names_list = ['hk1','hk2','hk3']
df = pd.concat(dfs, keys=eqty_names_list, axis=1)
print (df)
hk1 hk2 hk3
a b c a b c a b c
0 0 1 2 0 9 6 0 7 1
1 1 5 8 1 6 4 1 3 2
</code></pre>
<p>Last need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.swaplevel.html" rel="nofollow"><code>swaplevel</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>:</p>
<pre><code>df.columns = df.columns.swaplevel(0,1)
df = df.sort_index(axis=1)
print (df)
a b c
hk1 hk2 hk3 hk1 hk2 hk3 hk1 hk2 hk3
0 0 0 0 1 9 7 2 6 1
1 1 1 1 5 6 3 8 4 2
</code></pre>
| 1 | 2016-08-09T11:54:01Z | [
"python",
"pandas",
"indexing"
] |
Django dates getting sorted wrong DRF | 38,849,826 | <p>I have a django models-</p>
<pre><code>class CompanyForLineCharts(models.Model):
company = models.TextField(null=True)
class LineChartData(models.Model):
foundation = models.ForeignKey(CompanyForLineCharts, null=True)
date = models.DateField(auto_now_add=False)
price = models.FloatField(null=True)
</code></pre>
<p>And views for these models-</p>
<pre><code>arr = []
for i in range(len(entereddate)):
date = entereddate[i]
if entereddate[i] in dates:
foundat = (dates.index(entereddate[i]))
allprices = Endday.objects.raw("SELECT id, eop FROM drf_endday where company=%s", [comp[i]])
allendofdayprices = ''
for a in allprices:
allendofdayprices=(a.eop)
tempprices = allendofdayprices.split(',')
stringprices = tempprices[foundat:]
finald = dates[foundat:]
finalp = []
for t in range(len(stringprices)):
finalp.append(float(re.sub(r'[^0-9.]', '', stringprices[t])))
company = CompanyForLineCharts.objects.get(company=comp[i])
for j in range(len(finalp)):
arr.append(
LineChartData(
foundation = company,
date = finald[j],
price = finalp[j]
)
)
LineChartData.objects.bulk_create(arr)
</code></pre>
<p>Where <code>entereddate</code> is a list of dates(date object) entered by the user, <code>dates</code> is a big list of dates(also date object, in chronological order) and <code>tempprices</code> is a list of prices that corresponds to the <code>dates</code> list. </p>
<p>I have a serializer setup for these-</p>
<pre><code>class LineChartDataSerializer(serializers.ModelSerializer):
class Meta:
model = LineChartData
fields = ('date','price')
class CompanyForLineChartsSerializer(serializers.ModelSerializer):
data = LineChartDataSerializer(many=True, source='linechartdata_set')
class Meta:
model = CompanyForLineCharts
fields = ('company', 'data')
</code></pre>
<p>As you see <code>LineChartData</code> model is associated to <code>CompanyForLineCharts</code> model via <code>foundation</code>. </p>
<p>Now the problem that I'm facing is that when DRF serialises these fields, the order of the dates goes haywire. </p>
<p>So I tried these as well-</p>
<ul>
<li><p>In views-</p>
<p><code>xy = zip(finald, finalp)
sort = sorted(xy)
finald = [x[0] for x in sort]
finalp = [x[1] for x in sort]
</code></p>
<p>Well, that did not change any order in the serialised output. </p></li>
<li><p>So I tried ordering serializer-</p>
<p><code>order_by = (('date',))
ordering = ['-date']</code></p></li>
</ul>
<p>And none of them worked. What to do now?</p>
| 1 | 2016-08-09T11:45:28Z | 38,851,054 | <p>@edit I'm sorry, but it shouldn't change anything I just checked Documentation and default values of <code>auto_now_add</code> and <code>auto_now</code> are <code>False</code>. <br><br>
<code>DateField(auto_now_add=False)</code> -> <code>DateField(auto_now_add=False, auto_now=False)</code> should solve your problem.<br>
If <code>auto_now</code> is true then it set date everytime when <code>.save()</code> method is called. <code>auto_now_add</code> do the same when you call constructor.<br>
<code>auto_now</code> is used when you want date of last modification, while <code>auto_now_add</code> when you need date of creation.<br>
You're not in any of these causes so you needs to set boths argumments as <code>False</code></p>
| 0 | 2016-08-09T12:41:06Z | [
"python",
"django",
"date",
"django-rest-framework"
] |
From hexadecimal string to character (jis encoding) | 38,849,828 | <p>I have a hexadecimal string "\x98\x4F" that is the JIS encoding of the Japanese kanji 楼.<br>
How can I print the kanji in python starting from the encoding?<br>
I tried </p>
<pre><code>print b'\x98\x4F'.encode('euc_jp')
</code></pre>
<p>but without success...
any clue?
Regards</p>
| 1 | 2016-08-09T11:45:34Z | 38,849,995 | <p>In Python 2 use <code>str.decode()</code> with the <code>shift-jis</code> encoding:</p>
<pre><code>>>> s = "\x98\x4F".decode('shift-jis')
>>> s
u'\u697c'
>>> print s
楼
</code></pre>
<p>This <em>decodes</em> the jis encoded data into a Python unicode string. Printing that string displays the required character, provided that your default encoding can do so.</p>
<p>In Python 3 you can prefix the encoded string with <code>b</code>:</p>
<pre><code>>>> s = b"\x98\x4F".decode('shift-jis')
>>> s
'楼'
>>> print(s)
楼
</code></pre>
<p>(this will also work in Python 2)</p>
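<p>A quick round trip confirms the mapping:</p>

```python
s = b'\x98\x4F'.decode('shift-jis')
print(s, hex(ord(s)))  # the kanji, code point U+697C

# re-encoding recovers the original two bytes
assert s.encode('shift-jis') == b'\x98\x4F'
```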
| 1 | 2016-08-09T11:53:19Z | [
"python",
"encoding"
] |
Using arabic characters in sparql in python? | 38,849,861 | <p>I developed my own ontology in Arabic, and now I want to do some SPARQL requests using <strong>rdflib</strong> and <strong>sparql</strong>. The problem is that when I make a request without using the Arabic language on my ontology I get answers without problems, but when I make a specific request on properties using the Arabic language I get some errors :(.</p>
<p>Does anyone know how I can deal with this, please? What am I doing wrong?</p>
<p>Here my code:</p>
<pre><code>graph =rdflib.Graph()
filename = r'JO Ontology modified 09 june 2014 with properties.owl'
graph.load(filename, format='xml')
qres = graph.query(
"PREFIX OntoJO:<http://www.owl-ontologies.com/Ontology1400008538.owl#>" +
"SELECT ?path " +
"WHERE { ?lois_ordinaires OntoJO:ministere_lord ?ministere_lord ."+
"?lois_ordinaires OntoJO:a_un_chemin ?y ."+
" ?y OntoJO:chemin ?path ."+
            "FILTER(regex(?ministere_lord,'وزارة المالية'))}", )
for row in qres:
print row[0]
</code></pre>
<p>the errors:</p>
<pre class="lang-none prettyprint-override"><code>File "C:\Users\Mehdi\workspace\My_work\Test\Recherche.py", line 38, in main
    "FILTER(regex(?ministere_lord,'وزارة المالية'))}", )
File "build\bdist.win-amd64\egg\rdflib\graph.py", line 920, in query
File "C:\Python27\lib\site-packages\rdfextras-0.4-py2.7.egg\rdfextras\sparql\components.py", line 168, in __new__
return unicode.__new__(cls, value)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd9 in position 0: ordinal not in range(128)
</code></pre>
| 0 | 2016-08-09T11:47:28Z | 38,855,698 | <p>I found the correct syntax for this :).</p>
<p>I just changed this line:</p>
<pre><code>"FILTER(regex(?ministere_lord,'وزارة المالية'))}", )
</code></pre>
<p>by this one:</p>
<pre><code>"FILTER (regex(?ministere_lord,'""" + u"وزارة المالية" + """')) }""" , )
</code></pre>
<p>so the sparql request is:</p>
<pre><code>graph =rdflib.Graph()
filename = r'JO Ontology modified 09 june 2014 with properties.owl'
graph.load(filename, format='xml')
qres = graph.query(
""" PREFIX OntoJO:<http://www.owl-ontologies.com/Ontology1400008538.owl#>
SELECT ?path
WHERE { ?lois_ordinaires OntoJO:ministere_lord ?ministere_lord .
?lois_ordinaires OntoJO:a_un_chemin ?y .
?y OntoJO:chemin ?path .
           FILTER (regex(?ministere_lord,'""" + u"وزارة المالية" + """'))
}""" , )
</code></pre>
| 0 | 2016-08-09T16:10:00Z | [
"python",
"sparql",
"arabic",
"rdflib"
] |
Django forgets .using database on resolving foreign key | 38,849,878 | <p>I have a model for my legacy database created by manage.py inspectdb, which accesses the database named 'edlserver' in settings, one of many databases used for the project. I cannot change the database layout.</p>
<p>It has the following classes (among other irrelevant ones):</p>
<p>One for Logging entries.</p>
<pre><code>class Logs(models.Model):
time = models.DateTimeField()
job = models.ForeignKey(Jobs, models.DO_NOTHING, db_column='id_job')
msg = models.TextField()
class Meta:
managed = False
db_table = 'logs'
</code></pre>
<p>Another one for Jobs (that the job field references)</p>
<pre><code>class Jobs(models.Model):
job_type = models.ForeignKey(JobTypes, models.DO_NOTHING, db_column='id_job_type')
time_start = models.DateTimeField()
time_end = models.DateTimeField(blank=True, null=True)
pid = models.IntegerField(blank=True, null=True)
title = models.CharField(max_length=255)
class Meta:
managed = False
db_table = 'jobs'
</code></pre>
<p>And another one for JobTypes.</p>
<pre><code>class JobTypes(models.Model):
name = models.CharField(max_length=255)
max_processes = models.IntegerField()
class Meta:
managed = False
db_table = 'job_types'
</code></pre>
<p>The view for django-rest-framework looks like this</p>
<pre><code>class EDLLogList(generics.ListAPIView):
serializer_class = EDLLogsSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_class = EDLLogsFilter
def get_queryset(self):
if not 'job_name' in self.request.GET:
raise ParameterRequired('job_name')
else:
return Logs.objects.all().using('edlserver')
</code></pre>
<p>It uses the Filter:</p>
<pre><code>class EDLLogsFilter(filters.FilterSet):
time_start = django_filters.DateTimeFilter(name="time", lookup_type='gte')
time_end = django_filters.DateTimeFilter(name="time", lookup_type='lte')
job_name = django_filters.MethodFilter()
class Meta:
model = Logs
fields = ()
def filter_job_name(self, queryset, job_name):
try:
q = queryset.filter(job__job_type__name=job_name)[:10000]
except:
raise InternalError()
        if (len(q) < 1 and
                len(JobTypes.objects.all().using('edlserver').filter(name=job_name)) < 1):
raise InvalidParameter(job_name, 'job_name')
else:
return q
</code></pre>
<p>and the Serializer:</p>
<pre><code>class EDLLogsSerializer(serializers.HyperlinkedModelSerializer):
time = serializers.DateTimeField()
job_name = serializers.SerializerMethodField()
message = serializers.SerializerMethodField()
class Meta:
model = Logs
fields = ('job_name','time', 'message')
def get_job_name(self, obj):
return obj['id_job__id_job_type__name']
def get_message(self, obj):
return obj.msg
</code></pre>
<p>Problem is I get a <code>TypeError: 'Logs' object is not subscriptable</code> in <code>get_job_name()</code> in the Serializer, coming from the psycopg2 module - the database is a MySQL database, however. The fact that the first query has a queryset with len > 0 during debugging shows that the model is okay, and django uses the MySQL backend for getting the data. On resolving the foreign key something goes wrong and the (i think) default database gets used, which is PostGreSQL. </p>
<p>Is this a bug?</p>
<p>If not, what can I do? I was thinking about a router, which would resolve a Meta field. That would mean a lot of change to other models so I'd like to not do this. Any ideas?</p>
<p>EDIT: settings for databases</p>
<pre><code>'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'pic',
'USER' : 'pic-5437',
'PASSWORD' : '',
'HOST' : 'host1.url.com',
'PORT' : '5432'
},
'...' : {
...
},
'edlserver': {
'ENGINE': 'django.db.backends.mysql',
'HOST': 'host2.url.com',
'NAME': 'edl',
'USER': 'edl_ro',
'PASSWORD': '',
}
</code></pre>
| 2 | 2016-08-09T11:48:14Z | 38,850,051 | <p>The problem is that obj here is an instance of <code>Logs</code></p>
<pre><code>def get_job_name(self, obj):
return obj['id_job__id_job_type__name']
</code></pre>
<p>Django models look like dictionaries, smell like dictionaries but they are not dictionaries. The correct usage is:</p>
<pre><code> return obj.job.job_type.name
</code></pre>
<p>I recommend that you open up a Django shell, load a single instance of <code>Logs</code>, and use the <code>help()</code> command to experiment with the paths.</p>
<p>As for the second issue, the wrong database being used for the queries, you will either need to define a <a href="https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#using-routers" rel="nofollow">database router</a> or add <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#using" rel="nofollow">using()</a> to your queries.</p>
| 1 | 2016-08-09T11:56:19Z | [
"python",
"mysql",
"django",
"django-rest-framework",
"serializer"
] |
Django forgets .using database on resolving foreign key | 38,849,878 | <p>I have a model for my legacy database created by manage.py inspectdb, which accesses the database named 'edlserver' in settings, one of many databases used for the project. I cannot change the database layout.</p>
<p>It has the following classes (among other irrelevant ones):</p>
<p>One for Logging entries.</p>
<pre><code>class Logs(models.Model):
time = models.DateTimeField()
job = models.ForeignKey(Jobs, models.DO_NOTHING, db_column='id_job')
msg = models.TextField()
class Meta:
managed = False
db_table = 'logs'
</code></pre>
<p>Another one for Jobs (that the job field references)</p>
<pre><code>class Jobs(models.Model):
job_type = models.ForeignKey(JobTypes, models.DO_NOTHING, db_column='id_job_type')
time_start = models.DateTimeField()
time_end = models.DateTimeField(blank=True, null=True)
pid = models.IntegerField(blank=True, null=True)
title = models.CharField(max_length=255)
class Meta:
managed = False
db_table = 'jobs'
</code></pre>
<p>And another one for JobTypes.</p>
<pre><code>class JobTypes(models.Model):
name = models.CharField(max_length=255)
max_processes = models.IntegerField()
class Meta:
managed = False
db_table = 'job_types'
</code></pre>
<p>The view for django-rest-framework looks like this</p>
<pre><code>class EDLLogList(generics.ListAPIView):
serializer_class = EDLLogsSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_class = EDLLogsFilter
def get_queryset(self):
if not 'job_name' in self.request.GET:
raise ParameterRequired('job_name')
else:
return Logs.objects.all().using('edlserver')
</code></pre>
<p>It uses the Filter:</p>
<pre><code>class EDLLogsFilter(filters.FilterSet):
time_start = django_filters.DateTimeFilter(name="time", lookup_type='gte')
time_end = django_filters.DateTimeFilter(name="time", lookup_type='lte')
job_name = django_filters.MethodFilter()
class Meta:
model = Logs
fields = ()
def filter_job_name(self, queryset, job_name):
try:
q = queryset.filter(job__job_type__name=job_name)[:10000]
except:
raise InternalError()
        if (len(q) < 1 and
                len(JobTypes.objects.all().using('edlserver').filter(name=job_name)) < 1):
raise InvalidParameter(job_name, 'job_name')
else:
return q
</code></pre>
<p>and the Serializer:</p>
<pre><code>class EDLLogsSerializer(serializers.HyperlinkedModelSerializer):
time = serializers.DateTimeField()
job_name = serializers.SerializerMethodField()
message = serializers.SerializerMethodField()
class Meta:
model = Logs
fields = ('job_name','time', 'message')
def get_job_name(self, obj):
return obj['id_job__id_job_type__name']
def get_message(self, obj):
return obj.msg
</code></pre>
<p>Problem is I get a <code>TypeError: 'Logs' object is not subscriptable</code> in <code>get_job_name()</code> in the Serializer, coming from the psycopg2 module - the database is a MySQL database, however. The fact that the first query has a queryset with len > 0 during debugging shows that the model is okay, and django uses the MySQL backend for getting the data. On resolving the foreign key something goes wrong and the (i think) default database gets used, which is PostGreSQL. </p>
<p>Is this a bug?</p>
<p>If not, what can I do? I was thinking about a router, which would resolve a Meta field. That would mean a lot of change to other models so I'd like to not do this. Any ideas?</p>
<p>EDIT: settings for databases</p>
<pre><code>'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'pic',
'USER' : 'pic-5437',
'PASSWORD' : '',
'HOST' : 'host1.url.com',
'PORT' : '5432'
},
'...' : {
...
},
'edlserver': {
'ENGINE': 'django.db.backends.mysql',
'HOST': 'host2.url.com',
'NAME': 'edl',
'USER': 'edl_ro',
'PASSWORD': '',
}
</code></pre>
| 2 | 2016-08-09T11:48:14Z | 38,850,368 | <p>Foreign keys add an attribute to the model that is itself a model instance. To follow the complete relation from <code>Logs</code> to <code>JobType</code>, simply use attribute lookups:</p>
<pre><code>def get_job_name(self, obj):
return obj.job.job_type.name
</code></pre>
<p>This would be the normal use-case. If multiple databases are used and django uses the wrong database while resolving the foreign key, it can be done manually:</p>
<pre><code> return JobTypes.objects.all().using('edlserver').filter(id=
Jobs.objects.all().using('edlserver').filter(id=
obj.job_id)[0].job_type_id)[0].name
</code></pre>
<p>Another option would be to introduce a Meta field in the model, like this:</p>
<pre><code>import django.db.models.options as options
options.DEFAULT_NAMES = options.DEFAULT_NAMES + ('in_db',)
class MyModel(models.Model):
class Meta:
in_db = 'edlserver'
</code></pre>
<p>Then a database router is needed:</p>
<pre><code>class DatabaseMetaRouter(object):
def db_for_read(self, model, **hints):
"""
Route to the given in_db database in Meta
"""
if hasattr(model._meta, 'in_db'):
return model._meta.in_db
else:
return 'default'
def db_for_write(self, model, **hints):
"""
Route to the given in_db database in Meta
"""
if hasattr(model._meta, 'in_db'):
return model._meta.in_db
else:
return 'default'
def allow_relation(self, obj1, obj2, **hints):
"""
Always allow
"""
return True
def allow_migrate(self, db, app_label, model_name=None, **hints):
"""
Always allow
"""
return True
</code></pre>
| 2 | 2016-08-09T12:10:54Z | [
"python",
"mysql",
"django",
"django-rest-framework",
"serializer"
] |
Python K-medoids visualisation | 38,849,906 | <p>I got this code from the internet. I applied it to my data and it works. So I tried to show a visualisation of this method, but I can't find the relevant visualisation code for k-medoids.</p>
<pre><code>from nltk.metrics import distance as distance
import Pycluster as PC
words = ['apple', 'Doppler', 'applaud', 'append', 'barker',
'baker', 'bismark', 'park', 'stake', 'steak', 'teak', 'sleek']
dist = [distance.edit_distance(words[i], words[j])
for i in range(1, len(words))
for j in range(0, i)]
clusterid, error, nfound = PC.kmedoids(dist, nclusters=3)
cluster = dict()
uniqid=list(set(clusterid))
new_ids = [ uniqid.index(val) for val in clusterid]
for word, label in zip(words, clusterid):
cluster.setdefault(label, []).append(word)
for label, grp in cluster.items():
print(grp)
</code></pre>
| -2 | 2016-08-09T11:49:30Z | 38,852,486 | <p>Your input data are <em>words</em>.</p>
<p>How would you visualize them? They are not coordinate vectors.</p>
| 0 | 2016-08-09T13:42:59Z | [
"python",
"cluster-analysis"
] |
Curious behaviour while writing to files | 38,849,913 | <p>I have a question about Python's behavior while opening and writing data in files.
I have a large code that is built to run even for hours and it involves writing text in ASCII files. The code uses the <code>with open</code> method and keeps a file open during the program's run.
Sometimes I need to stop the execution of the program, thus getting a <code>Process finished with exit code -1</code> exit status.</p>
<p>The matter that's bugging me is this curious behavior when the code is running and I get confirmation that text has been written in the ASCII files, but if I stop the code, the files are empty.
Let's have for example, the code below:</p>
<pre><code>import time
# Create the .txt files
with open('D:/Stuff/test1.txt', 'w') as write1:
pass
with open('D:/Stuff/test2.txt', 'w') as write2:
pass
# Write some text into the.txt files
# Case 1. The code runs until the end
with open ('D:/Stuff/test1.txt', 'a') as infile1:
a = range(1, 50, 1)
infile1.write(str(a))
print "Writing completed!"
# Case 2. I stop the execution manually. I use time.sleep to be able to stop it in this example
with open ('D:/Stuff/test2.txt', 'a') as infile2:
b = range(1, 50, 1)
infile2.write(str(b))
print "Writing completed!"
print "Start sleep"
time.sleep(10)
<< At this line I end the script manually>>
print "End sleep"
</code></pre>
<p>What happens is that in Case 1 the text is written to the file, but in Case 2 the file ends up empty.
Why is this happening and how can I solve it?</p>
| 1 | 2016-08-09T11:49:53Z | 38,850,023 | <p>For a start, you might want to ensure that you're indeed writing to the disk, instead of to a buffer.</p>
<p>One way of doing so is by using <code>infile2.flush()</code>:</p>
<pre><code>infile2.write(str(b))
infile2.flush() # <- here
print "Writing completed!"
</code></pre>
<p>A different way is to <a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">open the file with no buffering</a>. In the <code>open</code> call, set <code>buffering=0</code>.</p>
<p>The former method places the onus on you to remember to flush; on the other hand, it gives you greater control over when to "checkpoint" the data. Unbuffered I/O, in general, has lower throughput.</p>
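<p>For illustration, here is a minimal, self-contained sketch of the first approach (the filename is made up); flushing, optionally followed by <code>os.fsync</code>, ensures the data survives even if the script is killed during the sleep:</p>

```python
import os

path = 'flush_demo.txt'  # illustrative filename
with open(path, 'a') as f:
    f.write(str(list(range(1, 50))))
    f.flush()             # push Python's internal buffer to the OS
    os.fsync(f.fileno())  # ask the OS to commit the data to disk

# Even if the process were killed now, the data would already be on disk.
with open(path) as f:
    content = f.read()
os.remove(path)
```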
| 1 | 2016-08-09T11:54:40Z | [
"python",
"file-io",
"with-statement"
] |
Loading dynamic model attributes in the form - django | 38,849,987 | <p>So I'm building an Ads app.
Basically, users log in, and they can post an ad for something.
I've posted the models below but basically I have a category (say, electronics or real estate), then a subcategory(electronics->Laptop or real estate-> house,apartment etc.)
The problem is, different items have different attributes.
For example, a Laptop might have "screen", "ram" and "hdd" attributes, while a "Car" might have "mileage" and "condition".
I decided to store these attributes in a JSONField. </p>
<pre><code>from django.db import models
from django.contrib.postgres.fields import JSONField
class Category(models.Model):
name = models.CharField(max_length=255)
def __str__(self):
return self.name
class SubCategory(models.Model):
category = models.ForeignKey('Category')
name = models.CharField(max_length=255)
def __str__(self):
return self.name
class Product(models.Model):
name = models.CharField(max_length=255)
subcategory = models.ForeignKey('SubCategory')
description = models.TextField()
price = models.IntegerField()
price_fixed = models.BooleanField(default=False)
monthly_payments = models.BooleanField(default=False)
created = models.DateField(auto_now_add=True)
custom_attributes = JSONField(default=dict)
def __str__(self):
return self.name
</code></pre>
<p>Now, how do I handle these custom attributes in the form and the views?
I need to make it so that when the user selects a category/subcategory from a dropdown, these attributes appear as text fields where the user can enter screen size, t-shirt size, color, etc.</p>
<p>This is my first Django app, and the book I learned from didn't cover things like this, so I'm not exactly sure where to go from here; searching on Google/SO didn't find me a solution.</p>
| 1 | 2016-08-09T11:53:02Z | 38,850,313 | <p>JSONFields are good for structured metadata that Python won't introspect very often.</p>
<p>In your case I would suggest creating another table for field configuration, linking it to Category, and having a table for the values on products. In a simplified form:</p>
<pre><code>class CategoryAttribute(models.Model):
    name = models.CharField(max_length=255)
    value_type = models.CharField(max_length=32)


class Category(models.Model):
    attributes = models.ManyToManyField(CategoryAttribute)


class AttributeValues(models.Model):
    # the intermediate model needs foreign keys to both sides of the relation
    product = models.ForeignKey('Product')
    attribute = models.ForeignKey(CategoryAttribute)
    value = models.TextField()


class Product(models.Model):
    attribute_values = models.ManyToManyField(CategoryAttribute, through=AttributeValues)
</code></pre>
<p>The problems here are basically two:</p>
<ol>
<li>You need to ensure that product will have only attributes allowed by its category</li>
<li>Functions checking types of fields need to be hardcoded</li>
</ol>
<p>A simpler version of this solution is to create a table Metadata with all possible fields for all categories. This model would have a one-to-one relation with Product, and Category would hold the list of fields to use from it.</p>
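<p>If you stay with the <code>JSONField</code> approach from the question instead, the first problem (allowing only category-appropriate attributes) can be checked in plain Python before saving. A hedged sketch with made-up schema data:</p>

```python
# Hypothetical attribute whitelist per subcategory
SCHEMAS = {
    'Laptop': {'screen', 'ram', 'hdd'},
    'Car': {'mileage', 'condition'},
}

def unexpected_keys(subcategory, attrs):
    """Return the attribute names not allowed for this subcategory."""
    return set(attrs) - SCHEMAS.get(subcategory, set())

bad = unexpected_keys('Laptop', {'screen': '15"', 'color': 'red'})
```

<p>In a real project a check like this would live in the form's or model's <code>clean()</code> method.</p>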
| 1 | 2016-08-09T12:09:05Z | [
"python",
"django",
"django-forms",
"django-views"
] |
How do I change color based on value of an HTML table generated from a pd.DataFrame using to_html | 38,849,992 | <p>I have a pandas DataFrame which I am converting to an HTML table using <code>to_html()</code>; however, I would like to color certain cells based on values in the HTML table that I return.</p>
<p>Any idea how to go about this? </p>
<p>Eg: All cells in a column called 'abc' that have a value greater than 5 must appear red else blue.</p>
| 2 | 2016-08-09T11:53:18Z | 38,857,277 | <p>Here is one way to do this:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, (5, 3)), columns=list('abc'))
def color_cell(cell):
return 'color: ' + ('red' if cell > 5 else 'green')
html = df.style.applymap(color_cell, subset=['a']).render()
with open('c:/temp/a.html', 'w') as f:
f.write(html)
</code></pre>
<p>result:</p>
<p><a href="http://i.stack.imgur.com/lMoVL.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/lMoVL.jpg" alt="enter image description here"></a></p>
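<p>The coloring rule itself is just a plain function, so it can be sanity-checked without rendering any HTML. A small sketch using the red/blue rule from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({'abc': [3, 7, 5, 9]})

def color_cell(cell):
    # red for values greater than 5, blue otherwise
    return 'color: red' if cell > 5 else 'color: blue'

styles = df['abc'].map(color_cell)
```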
| 0 | 2016-08-09T17:44:27Z | [
"python",
"html",
"css",
"pandas"
] |
Ansible install error on Mac | 38,850,141 | <p>Installing Ansible on the Mac (10.11.6) I get to the "installing Python modules" section here <a href="http://docs.ansible.com/ansible/intro_installation.html" rel="nofollow">http://docs.ansible.com/ansible/intro_installation.html</a> and get this error:</p>
<pre><code>sudo pip install paramiko PyYAML Jinja2 httplib2 six
... lots of downloading ...
Installing collected packages: pyasn1, pycparser, cffi, setuptools, idna, ipaddress, enum34, cryptography, paramiko, PyYAML, MarkupSafe, Jinja2, httplib2
Running setup.py install for pycparser ... done
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
</code></pre>
<p>Any idea what the problem is?</p>
| 1 | 2016-08-09T12:01:04Z | 38,850,494 | <p>If you don't mind user install, call <code>pip install --user ansible</code> and enjoy.<br>
Mac OS has some troubles installing global packages after El Capitan upgrade.</p>
| 0 | 2016-08-09T12:16:56Z | [
"python",
"osx",
"ansible"
] |
Ansible install error on Mac | 38,850,141 | <p>Installing Ansible on the Mac (10.11.6) I get to the "installing Python modules" section here <a href="http://docs.ansible.com/ansible/intro_installation.html" rel="nofollow">http://docs.ansible.com/ansible/intro_installation.html</a> and get this error:</p>
<pre><code>sudo pip install paramiko PyYAML Jinja2 httplib2 six
... lots of downloading ...
Installing collected packages: pyasn1, pycparser, cffi, setuptools, idna, ipaddress, enum34, cryptography, paramiko, PyYAML, MarkupSafe, Jinja2, httplib2
Running setup.py install for pycparser ... done
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
</code></pre>
<p>Any idea what the problem is?</p>
| 1 | 2016-08-09T12:01:04Z | 38,876,325 | <p>There is a version conflict on USX with the installed version of six. The installed version (the version that comes with OSX) will work, but pip does not know it will work. Do not try to install six with pip. When you go to install Ansible, use the following command:</p>
<pre><code>sudo -H pip install ansible --upgrade --ignore-installed six
</code></pre>
<p>This tells pip to ignore the conflict with six and assume the installed version will work.</p>
| 0 | 2016-08-10T14:31:40Z | [
"python",
"osx",
"ansible"
] |
Ansible install error on Mac | 38,850,141 | <p>Installing Ansible on the Mac (10.11.6) I get to the "installing Python modules" section here <a href="http://docs.ansible.com/ansible/intro_installation.html" rel="nofollow">http://docs.ansible.com/ansible/intro_installation.html</a> and get this error:</p>
<pre><code>sudo pip install paramiko PyYAML Jinja2 httplib2 six
... lots of downloading ...
Installing collected packages: pyasn1, pycparser, cffi, setuptools, idna, ipaddress, enum34, cryptography, paramiko, PyYAML, MarkupSafe, Jinja2, httplib2
Running setup.py install for pycparser ... done
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-QJiMr7-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
</code></pre>
<p>Any idea what the problem is?</p>
| 1 | 2016-08-09T12:01:04Z | 39,040,959 | <p>OSX recently made changes so that you will likely encounter (sooner or later) incompatibilities between python modules that reside on the system and stuff you need to install.
Ansible is a good example as it has dependencies that no longer play nice with what's already installed.</p>
<p>The clean solution is to use a virtual environment:
<a href="https://virtualenv.pypa.io/en/stable/userguide/#usage" rel="nofollow">https://virtualenv.pypa.io/en/stable/userguide/#usage</a></p>
<p>Quite simply you are creating a bubble in which to install and run your version of python and its modules. It goes like so:</p>
<pre><code>$ virtualenv myenv
New python executable in myenv/bin/python
Installing setuptools, pip...done.
$ ls -lah myenv
total 8
drwxr-xr-x 6 migueldavid staff 204B 19 Aug 14:45 .
drwxr-xr-x+ 60 migueldavid staff 2.0K 19 Aug 14:45 ..
lrwxr-xr-x 1 migueldavid staff 63B 19 Aug 14:45 .Python -> /System/Library/Frameworks/Python.framework/Versions/2.7/Python
drwxr-xr-x 14 migueldavid staff 476B 19 Aug 14:45 bin
drwxr-xr-x 3 migueldavid staff 102B 19 Aug 14:45 include
drwxr-xr-x 3 migueldavid staff 102B 19 Aug 14:45 lib
</code></pre>
<p>You then activate the virtual environment and install whatever you need:</p>
<pre><code>$ source myenv/bin/activate
(myenv)$ pip install ansible
Downloading/unpacking ansible
Downloading ansible-2.1.1.0.tar.gz (1.9MB): 1.9MB downloaded
(...)
</code></pre>
<p>To get out of this "sandbox", you simply have to type</p>
<pre><code>$ deactivate
</code></pre>
<p>I hope this helps.</p>
| 0 | 2016-08-19T14:00:35Z | [
"python",
"osx",
"ansible"
] |
Pandas: delete string with condition | 38,850,142 | <p>I have df</p>
<pre><code>ID url code
111 vk.com 1
111 twitter.com 1
222 facebook.com 1
222 vk.com 1
222 avito.ru 3
</code></pre>
<p>Desired output:</p>
<pre><code>ID url code
111 vk.com 1
222 facebook.com 1
222 avito.ru 3
</code></pre>
<p>I need to delete a row if an earlier row has the same <code>code</code> and the same <code>ID</code>.</p>
| 0 | 2016-08-09T12:01:06Z | 38,850,361 | <p>You can use <code>drop_duplicates()</code> and specify a subset of columns to use.</p>
<pre><code>df.drop_duplicates(['ID', 'code'], keep='first')
</code></pre>
<p>This will only consider the <code>ID</code> and <code>code</code> columns and will keep the first occurrence, removing the other duplicates.</p>
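<p>Reproducing the frames from the question as a quick check:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID':   [111, 111, 222, 222, 222],
    'url':  ['vk.com', 'twitter.com', 'facebook.com', 'vk.com', 'avito.ru'],
    'code': [1, 1, 1, 1, 3],
})
result = df.drop_duplicates(['ID', 'code'], keep='first')
```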
| 2 | 2016-08-09T12:10:30Z | [
"python",
"pandas"
] |
Multi-Cursor editing with QScintilla | 38,850,277 | <p>I'd like to create a little QScintilla widget supporting multi-cursor editing like in SublimeText. As far as I know, Scintilla already supports multiple cursors, but I haven't seen any example out there.</p>
<p>So, could anyone please post a little example showing the basics of multiple cursors with QScintilla?</p>
| -1 | 2016-08-09T12:07:39Z | 38,884,748 | <p>The multi-cursor feature is available in Scintilla, but QScintilla doesn't provide direct wrappers for this feature. However, you can "reimplement" your wrappers, since almost everything can be done with the <code>SendScintilla</code> method.</p>
<pre><code>from PyQt5.Qsci import QsciScintilla
from PyQt5.QtWidgets import QApplication
app = QApplication([])
ed = QsciScintilla()
ed.setText('insert <-\nsome <-\ntext <-\n')
ed.show()
# typing should insert in all selections at the same time
ed.SendScintilla(ed.SCI_SETADDITIONALSELECTIONTYPING, 1)
# do multiple selections
offset = ed.positionFromLineIndex(0, 7) # line-index to offset
ed.SendScintilla(ed.SCI_SETSELECTION, offset, offset)
# using the same offset twice selects no characters, hence a cursor
offset = ed.positionFromLineIndex(1, 5)
ed.SendScintilla(ed.SCI_ADDSELECTION, offset, offset)
offset = ed.positionFromLineIndex(2, 5)
ed.SendScintilla(ed.SCI_ADDSELECTION, offset, offset)
app.exec_()
</code></pre>
<p>You should wrap the <code>SendScintilla</code> calls in your own wrappers.</p>
<p>Keep in mind that the <code>offset</code>s are expressed in bytes and thus depend on the encoding of the text, which is more or less hidden by QScintilla's QStrings. Line-index positions, on the other hand, are expressed in characters (codepoints when using Unicode), and are thus more reliable.</p>
| 1 | 2016-08-10T22:44:33Z | [
"python",
"pyqt",
"pyqt5",
"qscintilla"
] |
Invalid syntax error with numba installed with pip | 38,850,278 | <p>When I try to run this Python code inside my <code>virtualenv</code>:</p>
<pre><code>#!./env/bin/python3
from numba import jit
@jit(nopython=True)
print("Hello World")
</code></pre>
<p>I got the following error:</p>
<pre><code>(env) root@LANTI-PC:/mnt/c/www/python/flask/app# ./test.py
File "./test.py", line 6
print("Hello World")
^
SyntaxError: invalid syntax
</code></pre>
<p>This is my <code>requirements.txt</code>:</p>
<pre><code>click==6.6
Flask==0.11.1
funcsigs==1.0.2
itsdangerous==0.24
Jinja2==2.8
llvmlite==0.12.1
MarkupSafe==0.23
numba==0.27.0
numpy==1.11.1
pybars3==0.9.1
PyMeta3==0.5.1
Werkzeug==0.11.10
</code></pre>
<p><code>llvm-config</code> version: <code>3.7.1</code></p>
<p>Also, If I just do <code>import numba</code> or <code>from numba import jit</code>, the file will be executed, but marginally slower than if I execute with <code>python3</code> only, without any numba import.</p>
| -2 | 2016-08-09T12:07:41Z | 38,850,515 | <p>Please remove your usage of <code>@jit</code> decorator, because there are no function to decorate there, they's why the error. Decorators wrap the functions so without them they are useless and erronous.</p>
| 1 | 2016-08-09T12:17:51Z | [
"python",
"python-3.x",
"pip",
"virtualenv",
"numba"
] |
python script to send OSC to SuperCollider using Neurosky's mindwave and NeuroPy module | 38,850,287 | <p>I'm trying to send multiple OSC messages to <a href="https://pypi.python.org/pypi/SC" rel="nofollow">Supercollider</a> using the variables (1-13) from <a href="https://pypi.python.org/pypi/NeuroPy/0.1" rel="nofollow">neuroPy</a>. It works fine with only one variable. How can I utilize more variables?</p>
<pre><code>from NeuroPy import NeuroPy
import time
import OSC
port = 57120
sc = OSC.OSCClient()
sc.connect(('192.168.1.4', port)) #send locally to laptop
object1 = NeuroPy("/dev/rfcomm0")
zero = 0
variable1 = object1.attention
variable2 = object1.meditation
variable3 = object1.rawValue
variable4 = object1.delta
variable5 = object1.theta
variable6 = object1.lowAlpha
variable7 = object1.highAlpha
variable8 = object1.lowBeta
variable9 = object1.highBeta
variable10 = object1.lowGamma
variable11 = object1.midGamma
variable12 = object1.poorSignal
variable13 = object1.blinkStrength
time.sleep(5)
object1.start()
def sendOSC(name, val):
msg = OSC.OSCMessage()
msg.setAddress(name)
msg.append(val)
try:
sc.send(msg)
except:
pass
print msg #debug
while True:
val = variable1
if val!=zero:
time.sleep(2)
sendOSC("/att", val)
</code></pre>
<p>This works fine and I get the message in Supercollider as expected.</p>
<p>What can I do to add more variables and get more messages?</p>
<p>I figured it should be something with setCallBack.</p>
| 1 | 2016-08-09T12:08:00Z | 38,854,108 | <p>You do not need to send multiple OSC messages; you can send one OSC message containing all the values. In fact, this is a much better way to do it, because all the updated values arrive synchronously and less network traffic is needed.</p>
<p>Your code currently does the equivalent of</p>
<pre><code>msg = OSC.OSCMessage()
msg.setAddress("/att")
msg.append(object1.attention)
sc.send(msg)
</code></pre>
<p>which is fine for one value. For multiple values you could do the following which is almost the same:</p>
<pre><code>msg = OSC.OSCMessage()
msg.setAddress("/neurovals")
msg.append(object1.attention)
msg.append(object1.meditation)
msg.append(object1.rawValue)
msg.append(object1.delta)
# ...
sc.send(msg)
</code></pre>
<p>It should be fine; you'll get a single OSC message carrying multiple values. You can also write the above as</p>
<pre><code>msg = OSC.OSCMessage()
msg.setAddress("/neurovals")
msg.extend([object1.attention, object1.meditation, object1.rawValue, object1.delta]) # plus more vals...
sc.send(msg)
</code></pre>
<p>Look at the documentation for the OSCMessage class to see more examples of how you can construct your message.</p>
| 2 | 2016-08-09T14:53:18Z | [
"python",
"osc",
"supercollider",
"neuropy"
] |
Django - AttributeError: 'module' object has no attribute 'admin' | 38,850,498 | <p>I'm in trouble.</p>
<p><strong>Python version:</strong> 3.4.4</p>
<p><strong>Django version:</strong> 1.10</p>
<p><strong>DB type/version:</strong> SqlLite3</p>
<p><strong>Installed apps:</strong> accounting, registry, ...</p>
<p><strong>Models (accounting):</strong> Bank, Fee, ...</p>
<p><strong>Models (registry):</strong> Company, ...</p>
<p><strong>Generic relations:</strong> Company-Bank, Fee-Company, ...</p>
<p><strong>Admin site inline (accounting):</strong></p>
<pre><code>class FeeAdmin(Admin):
list_display = ['date', 'content_object']
inlines = [registry.admin.CompanyInline]
...
</code></pre>
<p><strong>Admin site inline (registry):</strong></p>
<pre><code>class CompanyAdmin(Admin):
list_display = ['__str__', 'contact_telephone', 'contact_cellphone', 'contact_email']
list_filter = Admin.list_filter + ['residence_city']
search_fields = ['company_name']
inlines = [accounting.admin.BankInline]
...
</code></pre>
<p><strong>Problem:</strong> the second installed app gives me the error in the title; if I switch the order in <em>settings.py</em>, the error is raised by the other app. The first one always runs smoothly:</p>
<pre><code>inlines = [registry.admin.CompanyInline]
AttributeError: 'module' object has no attribute 'admin'
</code></pre>
<p>if registry is installed after accounting, or</p>
<pre><code>inlines = [accounting.admin.BankInline]
AttributeError: 'module' object has no attribute 'admin'
</code></pre>
<p>if the order is switched.</p>
<p><strong>Headers:</strong></p>
<p>accounting.admin:</p>
<pre><code>from django.contrib import admin
from django.contrib.contenttypes import admin as ctadmin
from django.contrib.contenttypes.models import ContentType
import registry
from .models import Bank
from .models import Fee
...
</code></pre>
<p>registry.admin:</p>
<pre><code>from django.contrib import admin
from django.contrib.contenttypes import admin as ctadmin
from django.contrib.contenttypes.models import ContentType
import accounting
from .models import Company
...
</code></pre>
| 0 | 2016-08-09T12:17:03Z | 38,850,784 | <p>This is a question about Python imports.</p>
<p>When you import a package, you don't automatically get access to all the modules underneath it; you need to import those specifically. So instead of doing <code>import accounting</code> and then trying to access <code>accounting.admin</code>, you need to explicitly do <code>from accounting import admin</code> and then access <code>admin.BankInline</code> etc.</p>
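<p>A runnable sketch of this behavior, building a throwaway package (hypothetical name <code>pkg_demo</code>) on the fly:</p>

```python
import os
import sys
import tempfile

# Create a package with an admin submodule, but don't import it in __init__
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'pkg_demo')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'admin.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.insert(0, root)
import pkg_demo
has_admin_before = hasattr(pkg_demo, 'admin')   # False: submodule not loaded
from pkg_demo import admin                      # explicit import binds it
has_admin_after = hasattr(pkg_demo, 'admin')    # True
```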
| 2 | 2016-08-09T12:28:48Z | [
"python",
"django",
"order",
"admin"
] |
Maximise the Slope using CVXPY | 38,850,557 | <p>I'm trying to use CVXPY to maximise the Sharpe Ratio of a stock portfolio. </p>
<p>The variable w is a portfolio weight vector, Sigma is an nxn correlation matrix, mu - is the average return of each portfolio stock, and rf - the risk-free rate (a scalar value).</p>
<p>At first, I tried to construct the problem as: Maximise((ret-rf)/(sqrt(risk))), which raised a TypeError: Can only divide by a scalar constant. I tried bypassing this issue by taking the log of the value I'm trying to maximise, however now I am getting an "invalid syntax" raised by 'prob.solve()'. I'm pretty sure that the issue arising from the maximisation formula, but I'm not sure what it is. </p>
<p>(I've tried both CVXPY log formulas, namely log_det() and log_sum_exp())</p>
<p>Here's the code below:</p>
<pre><code> from cvxpy import *
def portfolio(mu, Sigma, rf):
n = len(mu)
w = Variable(n)
ret = mu.T*w
risk = quad_form(w, Sigma)
prob = Problem(Maximize(log_det(ret-rf)-log_det(sqrt(risk)),
[sum_entries(w) == 1])
prob.solve()
return w.value
</code></pre>
| 0 | 2016-08-09T12:18:59Z | 38,855,216 | <p>I believe this is not convex. From what I understand there are several ways to attack this problem</p>
<ol>
<li>Use a general purpose NLP solver (this is the method I used)</li>
<li>Trace the efficient frontier to find the point on this frontier with the best Sharpe Ratio</li>
<li>Under some conditions, this problem can be transformed into a convex QP (see e.g. Gerard Cornuejols, Reha Tütüncü, <em>Optimization Methods in Finance</em>, 2007).</li>
</ol>
| 1 | 2016-08-09T15:46:46Z | [
"python",
"optimization",
"cvxpy"
] |
Is it possible to create and reference objects in list comprehension? | 38,850,748 | <p>I have a list of urls that I want the net locations.</p>
<pre><code>urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
"http://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]
</code></pre>
<p>I would normally just write something like this:</p>
<pre><code>servers = []
for url in urls:
o = urlparse(url)
servers.append(o.netloc)
</code></pre>
<p>Then I immediately thought, "I should just put that into a comprehension" and proceeded to write this (which of course doesn't work):</p>
<pre><code>servers = [o.netloc() for urlparse(url) as o in urls]
</code></pre>
<p>Does python have a way to do this type of complex comprehension? (perhaps in 3.x?)</p>
<p>On a more academic level, would doing this type of complex comprehension move too far away from being "pythonic"? It seems relatively intuitive to me, but I've been completely off-base on these things before.</p>
| 3 | 2016-08-09T12:27:08Z | 38,850,776 | <p>There is no need to assign to an intermediary name, just access the <code>.netloc</code> attribute on the return value of <code>urlparse()</code> directly:</p>
<pre><code>servers = [urlparse(url).netloc for url in urls]
</code></pre>
<p>It's a perfectly pythonic thing to do it this way.</p>
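<p>As a quick check with the URLs from the question:</p>

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2, as in the question

urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
        "http://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]
servers = [urlparse(url).netloc for url in urls]
```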
| 7 | 2016-08-09T12:28:25Z | [
"python",
"python-2.7",
"list-comprehension"
] |
Is it possible to create and reference objects in list comprehension? | 38,850,748 | <p>I have a list of urls that I want the net locations.</p>
<pre><code>urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
"http://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]
</code></pre>
<p>I would normally just write something like this:</p>
<pre><code>servers = []
for url in urls:
o = urlparse(url)
servers.append(o.netloc)
</code></pre>
<p>Then I immediately thought, "I should just put that into a comprehension" and proceeded to write this (which of course doesn't work):</p>
<pre><code>servers = [o.netloc() for urlparse(url) as o in urls]
</code></pre>
<p>Does python have a way to do this type of complex comprehension? (perhaps in 3.x?)</p>
<p>On a more academic level, would doing this type of complex comprehension move too far away from being "pythonic"? It seems relatively intuitive to me, but I've been completely off-base on these things before.</p>
| 3 | 2016-08-09T12:27:08Z | 38,851,092 | <p>In this specific case, there is simply no need for the intermediary variable <code>o</code>, as your loop could be simplified to this</p>
<pre><code>for url in urls:
servers.append(urlparse(url).netloc)
</code></pre>
<p>which can then be directly transformed to a list comprehension, as in <a href="http://stackoverflow.com/a/38850776/1639625">Martijn's answer</a>.</p>
<p>But what about the case where you really <em>need</em> that variable, e.g. because you want to use it more than once, or want to perform some checks first without calling <code>urlparse(url)</code> twice?</p>
<pre><code>for url in urls:
    o = urlparse(url)
    if o is not None:
        servers.append((o.netloc, o.scheme))
</code></pre>
<p>In this case, you can nest a generator expression inside your list comprehension, performing the calculation and declaring the variable to be used in the outer list comprehension:</p>
<pre><code>servers = [(o.netloc, o.scheme) for o in (urlparse(url) for url in urls)
           if o is not None]
</code></pre>
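<p>A runnable version of the nested pattern with the question's URLs (note that the parse result exposes the protocol as <code>.scheme</code>; the filter here keeps only HTTP(S) URLs, since <code>urlparse()</code> itself never actually returns <code>None</code>):</p>

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
        "https://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]

# The inner generator expression binds each parse result to o exactly once,
# so o can be reused both in the output tuple and in the filter clause.
servers = [(o.netloc, o.scheme) for o in (urlparse(url) for url in urls)
           if o.scheme in ("http", "https")]
print(servers)
```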
| 2 | 2016-08-09T12:43:02Z | [
"python",
"python-2.7",
"list-comprehension"
] |
Convert three I;16B images into one image | 38,850,749 | <p>I have 3 images of type <code>I;16B</code> and I am correctly reading them into <code>Python</code> via <code>PIL</code>:</p>
<pre><code>#!/usr/bin/env python
import sys
from PIL import Image
mode2bpp = {'1':1, 'L':8, 'P':8, 'RGB':24, 'RGBA':32, 'CMYK':32, 'YCbCr':24, 'I':32, 'F':32}
if __name__=="__main__":
print "Working!"
basedir = sys.argv[1]
imname = sys.argv[2]
Rc = sys.argv[3]
Gc = sys.argv[4]
Bc = sys.argv[5]
Zstack = sys.argv[6]
Rtif = basedir+"/"+imname+"-"+Rc+"/Data-"+Rc+"-Z"+Zstack+".tif"
Gtif = basedir+"/"+imname+"-"+Gc+"/Data-"+Gc+"-Z"+Zstack+".tif"
Btif = basedir+"/"+imname+"-"+Bc+"/Data-"+Bc+"-Z"+Zstack+".tif"
Rim = Image.open(Rtif)
Gim = Image.open(Gtif)
Bim = Image.open(Btif)
print Rim
print Rim.mode
</code></pre>
<p>This shows me that the data is <code>I;16B</code> but I am having to read them as 3 different images (one per channel). How should I go about combining these 3 channels into one image and writing a <code>.tif</code> file as output?</p>
| 0 | 2016-08-09T12:27:11Z | 39,942,590 | <p>For now, Pillow doesn't support multichannel images with more than 8 bits per channel. You can only convert every image to 'L' mode and merge them together with <code>Image.merge()</code>.</p>
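<p>A minimal sketch of that workaround. Synthetic mode <code>"I"</code> images stand in for the question's <code>Rim</code>/<code>Gim</code>/<code>Bim</code> (a real <code>"I;16B"</code> file may first need <code>im.convert("I")</code>), and the <code>1/256</code> point scale squeezes 16-bit values into the 8-bit range before converting to <code>'L'</code>:</p>

```python
from PIL import Image
import io

# Synthetic single-channel images standing in for Rim, Gim, Bim.
# Mode "I" is Pillow's 32-bit integer mode.
r = Image.new("I", (4, 4), 65535)
g = Image.new("I", (4, 4), 32768)
b = Image.new("I", (4, 4), 0)

# Scale each channel into the 0-255 range, convert it to 8-bit "L", then merge.
channels = [im.point(lambda v: v * (1 / 256)).convert("L") for im in (r, g, b)]
rgb = Image.merge("RGB", channels)

# Write the merged image out as TIFF (an in-memory buffer here instead of a file).
buf = io.BytesIO()
rgb.save(buf, format="TIFF")
```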
| 0 | 2016-10-09T10:32:54Z | [
"python",
"python-imaging-library",
"tiff"
] |
python merge multiple json requests to a single file and save it | 38,850,816 | <p>I know this request is a bit heavy. Not sure if anyone can help, but if so I'd appreciate it.</p>
<p>I'm attempting to work with JSON data in Python, which is new to me. I'm trying to write a script that parses several JSON responses and combines them into one. I want to feed this file into a proprietary system every x amount of time to keep all data in a single spot up to date.</p>
<pre><code> {
"items": [{
"id": 333512,
"full_name": "Flooring",
"instock": true,
"dept": "none",
"stockid": 4708384,
"commonname": "StdFloor",
"reorder": true
}, {
"id": 3336532,
"full_name": "Standard Tool",
"instock": true,
"dept": "none",
"stockid": 4708383,
"commonname": "StandardTool",
"reorder": true
}]
}
</code></pre>
<p>200+ of these will come back in the initial request</p>
<p>First I'll need to grab the IDs from each one and run a separate request to get more details for each item. I know how to run the requests, but how do I make an array with just the IDs? </p>
<p>Once I run each of those requests I'll get back 5-6 invoice details PER item. So for example, these all belong to item id 333512 in the initial response</p>
<pre><code> {
"invoices": [{
"invoice_id": 10015,
"cusbillable": true,
"inventoried": false,
"totals": 2.0,
"totalswh": 0.0,
"title": "EarlyOrder",
"invoicerate": 0.0,
"invoicedamt": 0.0,
"stockcost": 0.0,
"remainingbudg": null
}, {
"invoice_id": 10016,
"title": "EarlyOrder",
"cusbillable": true,
"inventoried": false,
"totals": 2.0,
"totalswh": 0.0,
"invoicerate": 0.0,
"invoicedamt": 0.0,
"stockcost": 0.0,
"remainingbudg": null
}]
}
</code></pre>
<p>These invoices don't have the item's ID in them, though; since I get them back using the ID in the request URL, I want to ADD them to the original items list as a sub-array under the item's ID, as if they came back with the original request. So each item will then have its invoices attached. I'm assuming it's best to run all of the requests in sequence and create an array with the IDs as the names of each member? </p>
<p>So, for example, this is roughly the structure I want to end up with:</p>
<pre><code>{
    "items": [{
        "id": 333512,
        "full_name": "Flooring",
        "instock": true,
        "dept": "none",
        "stockid": 4708384,
        "commonname": "StdFloor",
        "reorder": true,
        "invoices": [{
            "invoice_id": 10015,
            "cusbillable": true,
            "inventoried": false,
            "totals": 2.0,
            "totalswh": 0.0,
            "title": "EarlyOrder",
            "invoicerate": 0.0,
            "invoicedamt": 0.0,
            "stockcost": 0.0,
            "remainingbudg": null
        }, {
            "invoice_id": 10016,
            "title": "EarlyOrder",
            "cusbillable": true,
            "inventoried": false,
            "totals": 2.0,
            "totalswh": 0.0,
            "invoicerate": 0.0,
            "invoicedamt": 0.0,
            "stockcost": 0.0,
            "remainingbudg": null
        }]
    }, {
        "id": 3336532,
        "full_name": "Standard Tool",
        "instock": true,
        "dept": "none",
        "stockid": 4708383,
        "commonname": "StandardTool",
        "reorder": true,
        "invoices": [{
            "invoice_id": 10015,
            "cusbillable": true,
            "inventoried": false,
            "totals": 2.0,
            "totalswh": 0.0,
            "title": "EarlyOrder",
            "invoicerate": 0.0,
            "invoicedamt": 0.0,
            "stockcost": 0.0,
            "remainingbudg": null
        }, {
            "invoice_id": 10016,
            "title": "EarlyOrder",
            "cusbillable": true,
            "inventoried": false,
            "totals": 2.0,
            "totalswh": 0.0,
            "invoicerate": 0.0,
            "invoicedamt": 0.0,
            "stockcost": 0.0,
            "remainingbudg": null
        }]
    }]
}
</code></pre>
| 0 | 2016-08-09T12:30:05Z | 38,852,153 | <p>In principle, here is how to compile the items and invoices data into a single dictionary:</p>
<p></p>
<pre><code># The container for compiled items/invoices
items = {}
# Process an items request
for data in jsondata["items"]:
    items[data["id"]] = data
    items[data["id"]]["invoices"] = []
# Process an invoices request
item_id = ...  # your way to get the associated id, e.g. parsed from the request URL
items[item_id]["invoices"] = jsondata["invoices"]
</code></pre>
<p>Hope it helps you.</p>
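<p>A runnable end-to-end sketch using the sample payloads from the question; <code>fetch_invoices</code> is a hypothetical stand-in for the per-item HTTP request:</p>

```python
import json

items_response = json.loads("""{
    "items": [{"id": 333512, "full_name": "Flooring", "instock": true},
              {"id": 3336532, "full_name": "Standard Tool", "instock": true}]
}""")

def fetch_invoices(item_id):
    # Stand-in for e.g. requests.get(invoice_url_for(item_id)).json()
    return {"invoices": [{"invoice_id": 10015, "totals": 2.0},
                         {"invoice_id": 10016, "totals": 2.0}]}

items = {}
for item in items_response["items"]:
    item["invoices"] = fetch_invoices(item["id"])["invoices"]  # attach sub-array
    items[item["id"]] = item                                   # index by id

combined = {"items": list(items.values())}
print(json.dumps(combined, indent=2))
```

<p><code>combined</code> can then be written to disk with <code>json.dump</code> each time the refresh runs.</p>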
| 0 | 2016-08-09T13:27:21Z | [
"python",
"arrays",
"json",
"merge",
"python-requests"
] |
nonlinear pyplot imshow colors | 38,850,826 | <p>I spent a few hours on this but can't find a solution.</p>
<p>I have a 2D array of data and I want to plot it as a heatmap using the <code>imshow()</code> function. How can I achieve an effect like this? I mean nonlinearly distributed colors on the colorbar to get better contrast.</p>
<p>I found <a href="https://stackoverflow.com/questions/22521382/nonlinear-colormap-matplotlib">this</a>, but don't know how to apply it to <code>imshow().</code></p>
<p><a href="http://i.stack.imgur.com/n6eUO.png" rel="nofollow"><img src="http://i.stack.imgur.com/n6eUO.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T12:30:20Z | 38,868,398 | <p>Use the norm parameter:</p>
<p><code>import matplotlib.colors as colors</code></p>
<p><code>plt.imshow(your_data, cmap='afmhot_r', norm=colors.PowerNorm(gamma=0.2))</code></p>
<p>Read more:
<a href="http://matplotlib.org/users/colormapnorms.html" rel="nofollow">http://matplotlib.org/users/colormapnorms.html</a></p>
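<p>A self-contained sketch (synthetic data, and the Agg backend keeps it headless). A gamma below 1 stretches the low end of the data range, which is what produces the extra contrast:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np

data = np.random.rand(20, 20)
norm = colors.PowerNorm(gamma=0.2, vmin=0.0, vmax=1.0)

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="afmhot_r", norm=norm)
fig.colorbar(im)

# gamma=0.2 maps the value 0.25 to 0.25**0.2 (about 0.76), so small values
# already cover most of the colormap.
print(float(norm(0.25)))
```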
| 0 | 2016-08-10T08:46:39Z | [
"python",
"matplotlib",
"imshow"
] |
Check if value is in categorical series of float range | 38,850,859 | <p>I have the following pandas DataFrame:</p>
<pre><code> bucket value
0 (15016, 18003.2] 368
1 (12028.8, 15016] 132
2 (18003.2, 20990.4] 131
3 (9041.6, 12028.8] 116
4 (50.128, 3067.2] 82
5 (3067.2, 6054.4] 79
6 (6054.4, 9041.6] 54
7 (20990.4, 23977.6] 28
8 (23977.6, 26964.8] 8
9 (26964.8, 29952] 2
</code></pre>
<p><code>buckets</code> have been computed with the <code>pd.cut()</code> command (dtype is <code>category</code>)</p>
<p>I would like to check if a value, let's say <code>my_value = 20000</code>, is in one of <code>bucket</code>'s range.</p>
<p>It could return a dataframe with one more column :</p>
<pre><code> bucket value value_in_bucket
0 (15016, 18003.2] 368 FALSE
1 (12028.8, 15016] 132 FALSE
2 (18003.2, 20990.4] 131 TRUE
3 (9041.6, 12028.8] 116 FALSE
4 (50.128, 3067.2] 82 FALSE
5 (3067.2, 6054.4] 79 FALSE
6 (6054.4, 9041.6] 54 FALSE
7 (20990.4, 23977.6] 28 FALSE
8 (23977.6, 26964.8] 8 FALSE
9 (26964.8, 29952] 2 FALSE
</code></pre>
<p>The main problem is that each item of <code>bucket</code> is a string, so I could split the string into 2 columns and use a basic test and an <code>apply</code> but it does not seem so classy to me.</p>
| 1 | 2016-08-09T12:31:53Z | 38,851,020 | <p>You can apply <code>pd.cut()</code> <strong>using the same bins</strong> (or, better, as <a href="http://stackoverflow.com/questions/38850859/check-if-value-is-in-categorical-series-of-float-range/38851020?noredirect=1#comment65066380_38851020">@ayhan suggested</a>, save the bins when you create the <code>bucket</code> column, using the <code>retbins=True</code> parameter) on the <code>value</code> column and compare it to the <code>bucket</code> column.</p>
<p>Demo:</p>
<pre><code>In [265]: df = pd.DataFrame(np.random.randint(1,20, 5), columns=list('a'))
In [266]: df
Out[266]:
a
0 9
1 6
2 13
3 11
4 17
</code></pre>
<p>create <code>bucket</code> column and save bins in one step:</p>
<pre><code>In [267]: df['bucket'], bins = pd.cut(df.a, bins=5, retbins=True)
In [268]: df
Out[268]:
a bucket
0 9 (8.2, 10.4]
1 6 (5.989, 8.2]
2 13 (12.6, 14.8]
3 11 (10.4, 12.6]
4 17 (14.8, 17]
In [269]: bins
Out[269]: array([ 5.989, 8.2 , 10.4 , 12.6 , 14.8 , 17. ])
</code></pre>
<p>generate a new column which we want to compare:</p>
<pre><code>In [270]: df['b'] = np.random.randint(10,12, 5)
In [271]: df
Out[271]:
a bucket b
0 9 (8.2, 10.4] 10
1 6 (5.989, 8.2] 11
2 13 (12.6, 14.8] 11
3 11 (10.4, 12.6] 11
4 17 (14.8, 17] 11
</code></pre>
<p>compare whether we have matches (using saved <code>bins</code>):</p>
<pre><code>In [272]: pd.cut(df.b, bins=bins) == df.bucket
Out[272]:
0 True
1 False
2 False
3 True
4 False
dtype: bool
</code></pre>
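<p>In recent pandas versions each bucket is an <code>Interval</code> object, so membership can also be tested directly with <code>in</code>, without re-cutting (the bin edges below are made up from the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({"value": [368, 131, 82]})
df["bucket"] = pd.cut([16000, 20000, 1000],
                      bins=[50.128, 3067.2, 18003.2, 20990.4])

my_value = 20000
# Interval supports the `in` operator, so test each bucket directly.
df["value_in_bucket"] = df["bucket"].apply(lambda iv: my_value in iv)
print(df)
```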
| 1 | 2016-08-09T12:39:52Z | [
"python",
"pandas",
"dataframe",
"category"
] |
K-fold cross validation implementation python | 38,850,885 | <p>I am trying to implement the k-fold cross-validation algorithm in python.
I know SKLearn provides an implementation but still...
This is my code as of right now.</p>
<pre><code>from sklearn import metrics
import numpy as np
class Cross_Validation:
@staticmethod
def partition(vector, fold, k):
size = vector.shape[0]
start = (size/k)*fold
end = (size/k)*(fold+1)
validation = vector[start:end]
if str(type(vector)) == "<class 'scipy.sparse.csr.csr_matrix'>":
indices = range(start, end)
mask = np.ones(vector.shape[0], dtype=bool)
mask[indices] = False
training = vector[mask]
elif str(type(vector)) == "<type 'numpy.ndarray'>":
training = np.concatenate((vector[:start], vector[end:]))
return training, validation
@staticmethod
def Cross_Validation(learner, k, examples, labels):
train_folds_score = []
validation_folds_score = []
for fold in range(0, k):
training_set, validation_set = Cross_Validation.partition(examples, fold, k)
training_labels, validation_labels = Cross_Validation.partition(labels, fold, k)
learner.fit(training_set, training_labels)
training_predicted = learner.predict(training_set)
validation_predicted = learner.predict(validation_set)
train_folds_score.append(metrics.accuracy_score(training_labels, training_predicted))
validation_folds_score.append(metrics.accuracy_score(validation_labels, validation_predicted))
return train_folds_score, validation_folds_score
</code></pre>
<p>The learner parameter is a classifier from SKlearn library, k is the number of folds, examples is a sparse matrix produced by the CountVectorizer (again SKlearn) that is the representation of the bag of words.
For example:</p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from Cross_Validation import Cross_Validation as cv
vectorizer = CountVectorizer(stop_words='english', lowercase=True, min_df=2, analyzer="word")
data = vectorizer.fit_transform("""textual data""")
clfMNB = MultinomialNB(alpha=.0001)
score = cv.Cross_Validation(clfMNB, 10, data, labels)
print "Train score" + str(score[0])
print "Test score" + str(score[1])
</code></pre>
<p>I'm assuming there is some logic error somewhere since the scores are 95% on the training set (as expected) but practically 0 on the test set, but I can't find it.</p>
<p>I hope I was clear.
Thanks in advance.</p>
<p>________________________________EDIT___________________________________</p>
<p>This is the code that loads the text into the vector that can be passed to the vectorizer. It also returns the label vector.</p>
<pre><code>from nltk.tokenize import word_tokenize
from Categories_Data import categories
import numpy as np
import codecs
import glob
import os
import re
class Data_Preprocessor:
def tokenize(self, text):
tokens = word_tokenize(text)
alpha = [t for t in tokens if unicode(t).isalpha()]
return alpha
def header_not_fully_removed(self, text):
if ":" in text.splitlines()[0]:
return len(text.splitlines()[0].split(":")[0].split()) == 1
else:
return False
def strip_newsgroup_header(self, text):
_before, _blankline, after = text.partition('\n\n')
if len(after) > 0 and self.header_not_fully_removed(after):
after = self.strip_newsgroup_header(after)
return after
def strip_newsgroup_quoting(self, text):
_QUOTE_RE = re.compile(r'(writes in|writes:|wrote:|says:|said:'r'|^In article|^Quoted from|^\||^>)')
good_lines = [line for line in text.split('\n')
if not _QUOTE_RE.search(line)]
return '\n'.join(good_lines)
def strip_newsgroup_footer(self, text):
lines = text.strip().split('\n')
for line_num in range(len(lines) - 1, -1, -1):
line = lines[line_num]
if line.strip().strip('-') == '':
break
if line_num > 0:
return '\n'.join(lines[:line_num])
else:
return text
def raw_to_vector(self, path, to_be_stripped=["header", "footer", "quoting"], noise_threshold=-1):
base_dir = os.getcwd()
train_data = []
label_data = []
for category in categories:
os.chdir(base_dir)
os.chdir(path+"/"+category[0])
for filename in glob.glob("*"):
with codecs.open(filename, 'r', encoding='utf-8', errors='replace') as target:
data = target.read()
if "quoting" in to_be_stripped:
data = self.strip_newsgroup_quoting(data)
if "header" in to_be_stripped:
data = self.strip_newsgroup_header(data)
if "footer" in to_be_stripped:
data = self.strip_newsgroup_footer(data)
if len(data) > noise_threshold:
train_data.append(data)
label_data.append(category[1])
os.chdir(base_dir)
return np.array(train_data), np.array(label_data)
</code></pre>
<p>This is what "from Categories_Data import categories" imports...</p>
<pre><code>categories = [
('alt.atheism',0),
('comp.graphics',1),
('comp.os.ms-windows.misc',2),
('comp.sys.ibm.pc.hardware',3),
('comp.sys.mac.hardware',4),
('comp.windows.x',5),
('misc.forsale',6),
('rec.autos',7),
('rec.motorcycles',8),
('rec.sport.baseball',9),
('rec.sport.hockey',10),
('sci.crypt',11),
('sci.electronics',12),
('sci.med',13),
('sci.space',14),
('soc.religion.christian',15),
('talk.politics.guns',16),
('talk.politics.mideast',17),
('talk.politics.misc',18),
('talk.religion.misc',19)
]
</code></pre>
| 2 | 2016-08-09T12:32:52Z | 38,855,536 | <p>The reason why your validation score is low is subtle.</p>
<p>The issue is how you have partitioned the dataset. Remember, when doing cross-validation you should <em>randomly</em> split the dataset. It is the randomness that you are missing.</p>
<p>Your data is loaded category by category, which means that in your input dataset, class labels and examples follow one after the other. By not doing a random split, each contiguous fold can hold out whole classes that the model then never sees during the training phase, and hence you get a bad result in your test/validation phase.</p>
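<p>Here is a stdlib-only sketch of the failure mode (synthetic two-class labels, not the real newsgroup data): with class-sorted data and contiguous folds, the held-out fold and the training data can contain completely disjoint classes.</p>

```python
import random

labels = [0] * 10 + [1] * 10     # loaded category by category, so class-sorted
k = 2
fold_size = len(labels) // k     # contiguous folds, as in the question's partition()

validation = labels[:fold_size]  # fold 0 is held out for validation
training = labels[fold_size:]    # the model is fitted on these labels only
print(set(validation), set(training))  # disjoint: the model never sees class 0

random.seed(0)
random.shuffle(labels)  # what sklearn.utils.shuffle does, keeping data/labels aligned
```

<p>After shuffling, each contiguous fold draws from both classes, so the validation score reflects real generalization instead of unseen classes.</p>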
<p>You can solve this by doing a random shuffle. So, do this:</p>
<pre><code>from sklearn.utils import shuffle
processor = Data_Preprocessor()
td, tl = processor.raw_to_vector(path="C:/Users/Pankaj/Downloads/ng/")
vectorizer = CountVectorizer(stop_words='english', lowercase=True, min_df=2, analyzer="word")
data = vectorizer.fit_transform(td)
# Shuffle the data and labels
data, tl = shuffle(data, tl, random_state=0)
clfMNB = MultinomialNB(alpha=.0001)
score = Cross_Validation.Cross_Validation(clfMNB, 10, data, tl)
print("Train score" + str(score[0]))
print("Test score" + str(score[1]))
</code></pre>
| 1 | 2016-08-09T16:01:18Z | [
"python",
"machine-learning",
"scikit-learn",
"cross-validation"
] |
Converting multiple DataFrames into a Panel | 38,850,935 | <p>I have tons of excel files. Each of these files contains one or more variables <strong>for all subjects at a certain point in time</strong>. For each variable, I have, say, 10 files (storing the value of the variable at 10 different points in time). My ultimate goal is to set up a panel series.</p>
<p>Suppose there is only one variable in each file. For each variable (or item), I initialize an empty DataFrame <code>item = pd.DataFrame()</code> and successively read and append all 10 files into that empty DataFrame <code>item = item.append(pd.DataFrame(df))</code>, where df is from the new file. Each of those 10 DataFrames has dimension <code>1 x #subjects</code>, thus I eventually have <code>10 x #subject</code>. I turn this into a panel frame using <code>pf = pd.Panel({'variable name': item})</code>. Now, I can easily add this to a big panel frame with many other items...</p>
<p><strong>Question</strong>: What is an <strong>easy and practical way</strong> to approach this problem if I have 2 or more variables in each file? If I stuck to the above approach, I would have a DataFrame of dimension <code>#variables x #subjects</code> for each file, leading to </p>
<pre><code> subject1 subject2
variable1 2000 val val
variable2 2000 val val
variable1 2001 val val
variable2 2001 val val
...
</code></pre>
<p>after appending them. This is obviously a bad structure to convert this into panel data.</p>
<p>I could work myself around it - e.g. by appending to "the correct line" to keep the appropriate structure or reading the same file as many times as it has variables - but this would be cumbersome and/or costly. There have to be methods that do this work easily, but I couldn't find them in the docs.</p>
<p>Thanks for your help.</p>
| 0 | 2016-08-09T12:35:47Z | 38,856,391 | <p>A <code>Panel</code> is essentially a stack of <code>DataFrame</code> objects, allowing the data to be explored in three dimensions. Thus, it does not matter how many variables or subjects are represented in each of your files, as long as each file represents only one point in time. Import each file into a <code>DataFrame</code> and then create your <code>Panel</code>. </p>
<p>This could be achieved by using a for loop over a list of your filenames. In your loop, you might check which year the data is from and store the results in a dictionary with all your other <code>DataFrame</code> objects, thus allowing you to easily convert your dictionary of dataframes into a panel.</p>
<p>If your original <code>DataFrame</code> format looks something like: </p>
<pre><code> Gerald Kate
Var1 1 5
Var2 2 6
Var3 3 7
Var4 4 8
</code></pre>
<p>Then you can create your <code>Panel</code> with something like:</p>
<pre><code>pn=pd.Panel(data={2010:df2010, 2015:df2015, 2020:df2020})
</code></pre>
<p>This yields a <code>Panel</code> with the properties:</p>
<pre><code>Dimensions: 3 (items) x 4 (major_axis) x 2 (minor_axis)
Items axis: 2010 to 2020
Major_axis axis: Var1 to Var4
Minor_axis axis: Gerald to Kate
</code></pre>
<p>It is possible to slice by year:</p>
<pre><code>print(pn[2015])
Gerald Kate
Var1 3 15
Var2 6 18
Var3 9 21
Var4 12 24
</code></pre>
<p>It is also possible to switch axes to get a better view of individual variables or subjects:</p>
<pre><code>print(pn.transpose('minor_axis','major_axis','items')['Gerald'])
2010 2015 2020
Var1 1 3 9
Var2 2 6 18
Var3 3 9 27
Var4 4 12 36
</code></pre>
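<p>Note that <code>Panel</code> was later deprecated and removed from pandas; in current versions the same dict-of-DataFrames stacking is written with <code>pd.concat</code> and a <code>MultiIndex</code>. A sketch with made-up numbers:</p>

```python
import pandas as pd

df2010 = pd.DataFrame({"Gerald": [1, 2], "Kate": [5, 6]}, index=["Var1", "Var2"])
df2015 = pd.DataFrame({"Gerald": [3, 6], "Kate": [15, 18]}, index=["Var1", "Var2"])

# Stack the per-year frames into one frame with a (year, variable) MultiIndex.
stacked = pd.concat({2010: df2010, 2015: df2015}, names=["year", "variable"])

print(stacked.loc[2015])                     # slice by year, like pn[2015]
print(stacked.xs("Var1", level="variable"))  # one variable across all years
```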
| 0 | 2016-08-09T16:48:14Z | [
"python",
"pandas",
"panel-data"
] |
Python/Django date field in html input tag | 38,850,951 | <p>I'm trying to display a date in an input field as so:</p>
<pre><code><tr><td>Date of Birth</td><td><input type="date" name="DOB" value="{{m.dob|date:"d/m/Y"}}" required=True></td></tr>
</code></pre>
<p>where m.dob is defined in the model as:</p>
<pre><code>dob = models.DateField('Date of Birth', blank=True, null=True)
</code></pre>
<p>The HTML input tag shows the date as dd/mm/yyyy when the page is loaded but I can see the field has taken the value assigned. How do I get it to display correctly?</p>
<p>Thanks for help</p>
| 1 | 2016-08-09T12:36:25Z | 38,851,078 | <p>You can use <code>default</code> like so:</p>
<pre><code>date = models.DateField(_("Date"), default="{{"+datetime.date.today+"}}")
</code></pre>
| 0 | 2016-08-09T12:42:13Z | [
"python",
"html",
"django"
] |
Python/Django date field in html input tag | 38,850,951 | <p>I'm trying to display a date in an input field as so:</p>
<pre><code><tr><td>Date of Birth</td><td><input type="date" name="DOB" value="{{m.dob|date:"d/m/Y"}}" required=True></td></tr>
</code></pre>
<p>where m.dob is defined in the model as:</p>
<pre><code>dob = models.DateField('Date of Birth', blank=True, null=True)
</code></pre>
<p>The HTML input tag shows the date as dd/mm/yyyy when the page is loaded but I can see the field has taken the value assigned. How do I get it to display correctly?</p>
<p>Thanks for help</p>
| 1 | 2016-08-09T12:36:25Z | 38,851,993 | <p>Dude, your question is not clear. But for now, I am assuming you are new to Django.</p>
<p>Your DB value is not populating in the template. In order to show the value, you must do two steps:</p>
<ul>
<li>First query the model object</li>
<li>Pass it through additional context dictionary from the Django view.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from django.views.generic import View
from django.shortcuts import render
class Home(View):
template_name = "home.html"
def __init__(self, **kwargs):
pass
def get(self, request):
# Fetch your object here. ID or any other
myObj = MyModel.objects.get(id=1)
# Third argument is the conext dictionary
return render(request,self.template_name,{'myObj': myObj})
</code></pre>
<p>Now just use that <code>myObj</code> in your template with the following syntax.</p>
<pre><code><tr><td>Date of Birth</td><td><input type="date" name="DOB" value="{{myObj.dob|date:"d/m/Y"}}" required=True></td></tr>
</code></pre>
<p>Here this code is in <b>home.html</b>.</p>
| 0 | 2016-08-09T13:20:42Z | [
"python",
"html",
"django"
] |
Python/Django date field in html input tag | 38,850,951 | <p>I'm trying to display a date in an input field as so:</p>
<pre><code><tr><td>Date of Birth</td><td><input type="date" name="DOB" value="{{m.dob|date:"d/m/Y"}}" required=True></td></tr>
</code></pre>
<p>where m.dob is defined in the model as:</p>
<pre><code>dob = models.DateField('Date of Birth', blank=True, null=True)
</code></pre>
<p>The HTML input tag shows the date as dd/mm/yyyy when the page is loaded but I can see the field has taken the value assigned. How do I get it to display correctly?</p>
<p>Thanks for help</p>
| 1 | 2016-08-09T12:36:25Z | 38,856,785 | <p>The problem is <code>value="{{m.dob|date:'d/m/Y'}}"</code> which must be specified as <code>value="{{m.dob|date:'Y-m-d'}}"</code></p>
| 1 | 2016-08-09T17:12:16Z | [
"python",
"html",
"django"
] |
Python - matplotlib subplots in tripcolor case | 38,851,009 | <p>I m trying to organize and ajust my three subplots obtained with tripcolor od Delaunay triangulation. The problem is i cant use the function : plt.tight_layout(pad=0.5, w_pad=2.5, h_pad=2.0) to set the windows size, it doesnt work in this case.
The result corresponds to :
<a href="http://i.stack.imgur.com/WIuKM.png" rel="nofollow"><img src="http://i.stack.imgur.com/WIuKM.png" alt="enter image description here"></a></p>
<p>I would like to have square form for the windows...My code is :</p>
<pre><code>import matplotlib.tri as tr
triang = tr.Triangulation(Xini, Yini)
xmid = Xini[triang.triangles].mean(axis=1)
ymid = Yini[triang.triangles].mean(axis=1)
plt.figure()
ax1 = plt.subplot(131) # creates first axis
i1 =ax1.tripcolor(triang, Epst_eq2, shading='flat', cmap=plt.cm.hot)
ax1.set_xlim([-2.5,2.5])
ax1.set_ylim([-2.5,2.5])
# plt.title('tripcolor of Delaunay triangulation, flat shading')
plt.colorbar(i1,ax=ax1,ticks=np.linspace(0,0.005,3))
ax2 = plt.subplot(132) # creates first axis
ax2.tripcolor(triang, Epst_eq3, shading='flat', cmap=plt.cm.hot)
ax2.set_xlim([-2.5,2.5])
ax2.set_ylim([-2.5,2.5])
ax3 = plt.subplot(133) # creates first axis
ax3.tripcolor(triang, Epst_eq4, shading='flat', cmap=plt.cm.hot)
ax3.set_xlim([-2.5,2.5])
ax3.set_ylim([-2.5,2.5])
plt.savefig('test2.png',dpi=100)
plt.show()
</code></pre>
| -1 | 2016-08-09T12:39:26Z | 38,851,483 | <p>Finally i used gridspec and i can modify easily the size!</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
fig = plt.figure(figsize=(16, 6))
gs = gridspec.GridSpec(1, 3,width_ratios=[1.2,1,1])
ax1 = plt.subplot(gs[0])
i1 =ax1.tripcolor(triang, Epst_eq2, shading='flat', cmap=plt.cm.hot)
ax1.set_xlim([-2.5,2.5])
ax1.set_ylim([-2.5,2.5])
# plt.title('tripcolor of Delaunay triangulation, flat shading')
plt.colorbar(i1,ax=ax1,ticks=np.linspace(0,0.005,3))
ax2 = plt.subplot(gs[1])
ax2.tripcolor(triang, Epst_eq3, shading='flat', cmap=plt.cm.hot)
ax2.set_xlim([-2.5,2.5])
ax2.set_ylim([-2.5,2.5])
ax3 = plt.subplot(gs[2])
ax3.tripcolor(triang, Epst_eq4, shading='flat', cmap=plt.cm.hot)
ax3.set_xlim([-2.5,2.5])
ax3.set_ylim([-2.5,2.5])
plt.tight_layout()
plt.savefig('test2.png',dpi=100)
plt.show()
</code></pre>
| 0 | 2016-08-09T13:00:01Z | [
"python",
"matplotlib",
"delaunay"
] |
Using sed to interpret multiple lines on condition | 38,851,010 | <p>I'm stuck on constructing a <strong>sed</strong> expression that will parse a python file's imports and extract the names of the modules.</p>
<p>This is a simple example that I solved using (I need the output to be the module names without 'as' or any spaces..):</p>
<pre><code>from testfunctions import mod1, mod2 as blala, mod3, mod4
</code></pre>
<p>What I have so far:</p>
<pre><code>grep -ir "from testfunctions import" */*.py | sed -E s/'\s+as\s+\w+'//g | sed -E s/'from testfunctions import\s+'//g
</code></pre>
<p>This does get me the required result in a situation as above.</p>
<p><strong>The problem:</strong>
In files where the imports are like so:</p>
<pre><code>from testfunctions import mod1, mod2 as blala, mod3, mod4 \
mod5, mod6 as bla, mod7 \
mod8, mod9 ...
</code></pre>
<p><strong>Any ideas how I can improve my piped expression to handle multiple lines?</strong></p>
| 1 | 2016-08-09T12:39:31Z | 38,852,168 | <p>Try this; </p>
<pre><code> sed -n -r '/from/,/^\s*$/p;' *.py | sed ':x; /\\$/ { N; s/\\\n//; tx }' | sed 's/^.*.import//g;s/ */ /g'
</code></pre>
| 1 | 2016-08-09T13:27:57Z | [
"python",
"regex",
"bash",
"sed",
"grep"
] |
Using sed to interpret multiple lines on condition | 38,851,010 | <p>I'm stuck on constructing a <strong>sed</strong> expression that will parse a python file's imports and extract the names of the modules.</p>
<p>This is a simple example that I solved using (I need the output to be the module names without 'as' or any spaces..):</p>
<pre><code>from testfunctions import mod1, mod2 as blala, mod3, mod4
</code></pre>
<p>What I have so far:</p>
<pre><code>grep -ir "from testfunctions import" */*.py | sed -E s/'\s+as\s+\w+'//g | sed -E s/'from testfunctions import\s+'//g
</code></pre>
<p>This does get me the required result in a situation as above.</p>
<p><strong>The problem:</strong>
In files where the imports are like so:</p>
<pre><code>from testfunctions import mod1, mod2 as blala, mod3, mod4 \
mod5, mod6 as bla, mod7 \
mod8, mod9 ...
</code></pre>
<p><strong>Any ideas how I can improve my piped expression to handle multiple lines?</strong></p>
| 1 | 2016-08-09T12:39:31Z | 38,858,870 | <p>Thanks everyone for your help. I didn't know a module such as <code>ast</code> exists.. It really helped me achieve my goal.</p>
<p>I put together a simple version of the solution I needed, just for reference if anyone else encounters this question as well:</p>
<pre class="lang-python prettyprint-override"><code>import glob
import ast
moduleList = []
# get all .py file names
testFiles = glob.glob('*/*.py')
for testFile in testFiles:
with open(testFile) as code:
# ast.parse creates the tree off of plain code
tree = ast.parse(code.read())
# there are better ways to traverse the tree, in this sample there
# is no guarantee to the traversal order
for node in ast.walk(tree):
if isinstance(node, ast.ImportFrom) and node.module == 'testfunctions':
# each node will contain an ast.ImportFrom instance which
# data members are: module, names(list of ast.alias) and level
moduleList.extend([alias.name for alias in node.names])
</code></pre>
<p>You can read more about it in (probably the only detailed page about <code>ast</code> in the whole web) here: <a href="https://greentreesnakes.readthedocs.io/en/latest/manipulating.html#inspecting-nodes" rel="nofollow">https://greentreesnakes.readthedocs.io/en/latest/manipulating.html#inspecting-nodes</a></p>
| 1 | 2016-08-09T19:24:29Z | [
"python",
"regex",
"bash",
"sed",
"grep"
] |
Python process HTML / CGI form input | 38,851,019 | <p>I am trying to process HTML form input. I have a CGI file that I want to collect all the data from, including the check boxes and radio buttons. I am trying to use cgi.FieldStorage but something is not working. </p>
<p>Here is an example of what I am trying to do:</p>
<pre><code>form = cgi.FieldStorage()
name = form.getvalue('sensitivity')
print name
</code></pre>
<p>But this is returning none. Here is a snippet of the CGI file:</p>
<pre><code>if config_settings.settings[5] == '1':
print'''<html><label class="checkbox inline control-label"><input name="aWeight" value="1" type="checkbox" checked/></html>'''
else:
print'''<html><label class="checkbox inline control-label"><input name="aWeight" value="1" type="checkbox"/></html>'''
print'''<html><span> A-Weight &nbsp;&nbsp;&nbsp;</span></label></html>'''
</code></pre>
<p>This sets a checkbox depending on the content of an XML tag in another file being set to 1 or 0. The XML file and the Python file are working together fine. What I am trying to acheive is to collect the data from the checkboxes when a user changes them. </p>
<p>I have the this code at the beginning of my CGI script:</p>
<pre><code><form class="well form-inline" method="post" action="/cgi-bin/process_setup.py">
</code></pre>
<p>And I though that this would allow me to process/collect the from data with cgi.FieldStorage but it does not seem to be working. Any advice?</p>
| 0 | 2016-08-09T12:39:52Z | 38,976,380 | <p><del>I think you are missing this <code>print ("Content-type:text/html\r\n\r\n")</code> for python 3 and this <code>print "Content-type:text/html\r\n\r\n"</code> for python 2.</del></p>
<p>So you want to prevent the user to change the checkbox value?
If yes than here is a way :</p>
<pre><code><label id='checky'><input type="checkbox" name="checky" onchange="changeCheck(this)" checked="" ></label>
<script type="text/javascript">
function changeCheck (element) {
element.checked = !element.checked;
}
</script>
</code></pre>
| 0 | 2016-08-16T13:33:25Z | [
"python",
"html",
"xml",
"cgi"
] |
Adding a column header to a csv in python | 38,851,031 | <p>I have a csv that contains just 1 column of domain names that range from about 300 to 1500 lines, looking similar to the following:</p>
<pre><code>google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>All I need to do is add a column header of "domain" so my csv will look like:</p>
<pre><code>domain
google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>I attempted the following using pandas:</p>
<pre><code>from pandas import read_csv
x = read_csv('domains.csv')
x.columns = ['domain']
x.to_csv('out.csv')
</code></pre>
<p>This results in a csv with the added column header, but it also added an additional column with the row numbers, which I don't want... what am I doing wrong?</p>
<pre><code> domain
0 google.com
1 abc.net
2 yahoo.com
3 cnn.com
4 twitter.com
</code></pre>
| 3 | 2016-08-09T12:40:24Z | 38,851,052 | <p>You can add parameter <code>names</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> and <code>index=False</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Datarame.to_csv.html" rel="nofollow"><code>to_csv</code></a>:</p>
<pre><code>x = read_csv('domains.csv', names=['domain'])
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
import io
temp=u"""google.com
abc.net
yahoo.com
cnn.com
twitter.com"""
#after testing, replace io.StringIO(temp) with the filename
x = pd.read_csv(io.StringIO(temp), names=['domain'])
print (x)
domain
0 google.com
1 abc.net
2 yahoo.com
3 cnn.com
4 twitter.com
#need to remove the index
x.to_csv('filename',index=False)
</code></pre>
| 0 | 2016-08-09T12:41:04Z | [
"python",
"csv",
"pandas"
] |
Adding a column header to a csv in python | 38,851,031 | <p>I have a csv that contains just 1 column of domain names that range from about 300 to 1500 lines, looking similar to the following:</p>
<pre><code>google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>All I need to do is add a column header of "domain" so my csv will look like:</p>
<pre><code>domain
google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>I attempted the following using pandas:</p>
<pre><code>from pandas import read_csv
x = read_csv('domains.csv')
x.columns = ['domain']
x.to_csv('out.csv')
</code></pre>
<p>This results in a csv with the added column header, but it also added an additional column with the row numbers, which I don't want... what am I doing wrong?</p>
<pre><code> domain
0 google.com
1 abc.net
2 yahoo.com
3 cnn.com
4 twitter.com
</code></pre>
| 3 | 2016-08-09T12:40:24Z | 38,851,103 | <p>You need to set <code>index=False</code> when writing <code>to_csv</code> to remove the additional column:</p>
<pre><code>x.to_csv('out.csv',index=False)
</code></pre>
| 2 | 2016-08-09T12:43:24Z | [
"python",
"csv",
"pandas"
] |
Adding a column header to a csv in python | 38,851,031 | <p>I have a csv that contains just 1 column of domain names that range from about 300 to 1500 lines, looking similar to the following:</p>
<pre><code>google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>All I need to do is add a column header of "domain" so my csv will look like:</p>
<pre><code>domain
google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>I attempted the following using pandas:</p>
<pre><code>from pandas import read_csv
x = read_csv('domains.csv')
x.columns = ['domain']
x.to_csv('out.csv')
</code></pre>
<p>This results in a csv with the added column header, but it also added an additional column with the row numbers, which I don't want... what am I doing wrong?</p>
<pre><code> domain
0 google.com
1 abc.net
2 yahoo.com
3 cnn.com
4 twitter.com
</code></pre>
| 3 | 2016-08-09T12:40:24Z | 38,851,148 | <p>If all you are doing is adding one line, you don't really need pandas to do this. Here is an example using the normal python file writing modules:</p>
<pre><code>with open('domains.csv') as csvfile:
    rows = [r.rstrip('\n') for r in csvfile]
rows = ['domain'] + rows
with open('domains.csv', 'w') as csvfile:
    for row in rows:
        csvfile.write(row + '\n')
</code></pre>
| 1 | 2016-08-09T12:45:32Z | [
"python",
"csv",
"pandas"
] |
Adding a column header to a csv in python | 38,851,031 | <p>I have a csv that contains just 1 column of domain names that range from about 300 to 1500 lines, looking similar to the following:</p>
<pre><code>google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>All I need to do is add a column header of "domain" so my csv will look like:</p>
<pre><code>domain
google.com
abc.net
yahoo.com
cnn.com
twitter.com
</code></pre>
<p>I attempted the following using pandas:</p>
<pre><code>from pandas import read_csv
x = read_csv('domains.csv')
x.columns = ['domain']
x.to_csv('out.csv')
</code></pre>
<p>This results in a csv with the added column header, but it also added an additional column with the row numbers, which I don't want... what am I doing wrong?</p>
<pre><code> domain
0 google.com
1 abc.net
2 yahoo.com
3 cnn.com
4 twitter.com
</code></pre>
| 3 | 2016-08-09T12:40:24Z | 38,851,432 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow"><code>header</code></a> parameter in <code>to_csv</code> as you have just 1 column in your dataframe.</p>
<pre><code>df = pd.read_csv(data, header=None)
df.to_csv('out.csv', header=['domain'], index=False)
</code></pre>
| 1 | 2016-08-09T12:58:03Z | [
"python",
"csv",
"pandas"
] |
Python: is throwing an exception a correct use case in this example? | 38,851,047 | <p>I wonder if throwing exceptions is the best way to communicate something to the user, in case the user is another programmer. </p>
<p>I'm developing a small library to create text-based games (think Dwarf Fortress, but extremely more simple). Of course, I want stuff to move inside the map. It's very simple and the docstring is very complete, so it should read nicely. </p>
<pre><code>def move(self, tile):
    """Move the object to a tile.
    That means: unlink the piece from its current tile and link it
    to the new tile.
    Raise PieceIsNotOnATileError if the piece didn't already
    have an associated tile, PieceIsNotOnThisBoardError if
    the destination tile is not on the same board as the current tile,
    OutOfBoardError if the destination tile is falsey (most probably
    this means you're trying to move somewhere outside the map)
    """
    if not tile:
        raise OutOfBoardError
    if tile.piece is not None:
        raise PositionOccupiedError(tile)
    if not self.home_tile:
        raise PieceIsNotOnATileError
    if self.home_tile.board is not tile.board:
        raise PieceIsNotOnThisBoardError
    self.home_tile.piece = None
    tile.piece = self
</code></pre>
<p>Is this structure bad? I think it reads nicely: when the user tries to move a piece of his own, he can do:</p>
<pre><code>try:
    character.move(somewhere)
except PositionOccupiedError:
    character.punish()  # cause i'm in hardcore
except OutOfBoardError:
    character.kill()  # cause we were in a floating world and the
                      # character just fell off
</code></pre>
<p>There's some more logic in the library implemented like this, where the code <em>tries</em> to do something but if it can't, it will throw an exception. Is this OK? Or maybe I should be returning error codes (as integers, for example). </p>
| 2 | 2016-08-09T12:40:54Z | 38,851,265 | <p>In my opinion, throwing exceptions is a better way to inform the caller that some error occurred, especially if your function does not return any value. Imagine that a developer had to check the type of the returned value after every call. Exceptions are much clearer.
Check <a href="http://docs.quantifiedcode.com/python-code-patterns/latex/The-Little-Book-of-Python-Anti-Patterns-1.0.pdf?utm_source=Python+Weekly+Newsletter&utm_campaign=5edf7b6423-Python_Weekly_Issue_223_December_24_2015&utm_medium=email&utm_term=0_9e26887fc5-5edf7b6423-312742801" rel="nofollow">this</a>.</p>
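To make that concrete, here is a small self-contained sketch (hypothetical names, not the asker's actual API) contrasting the two styles: with return codes every call site must remember to inspect the result, while an exception cannot be silently ignored:

```python
# Hypothetical sketch contrasting error codes with exceptions.
class OutOfBoardError(Exception):
    """Raised when a move targets a tile outside the map."""

def move_with_code(tile_ok):
    # error-code style: the caller must remember to inspect the result
    return 0 if tile_ok else -1

def move_with_exception(tile_ok):
    # exception style: a failure interrupts the flow on its own
    if not tile_ok:
        raise OutOfBoardError("tile is outside the map")

status = move_with_code(False)
if status != 0:
    print("error code detected")

try:
    move_with_exception(False)
except OutOfBoardError as exc:
    print("exception caught:", exc)
```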
| 2 | 2016-08-09T12:50:24Z | [
"python",
"python-3.x",
"exception"
] |
Python: is throwing an exception a correct use case in this example? | 38,851,047 | <p>I wonder if throwing exceptions is the best way to communicate something to the user, in case the user is another programmer. </p>
<p>I'm developing a small library to create text-based games (think Dwarf Fortress, but extremely more simple). Of course, I want stuff to move inside the map. It's very simple and the docstring is very complete, so it should read nicely. </p>
<pre><code>def move(self, tile):
    """Move the object to a tile.
    That means: unlink the piece from its current tile and link it
    to the new tile.
    Raise PieceIsNotOnATileError if the piece didn't already
    have an associated tile, PieceIsNotOnThisBoardError if
    the destination tile is not on the same board as the current tile,
    OutOfBoardError if the destination tile is falsey (most probably
    this means you're trying to move somewhere outside the map)
    """
    if not tile:
        raise OutOfBoardError
    if tile.piece is not None:
        raise PositionOccupiedError(tile)
    if not self.home_tile:
        raise PieceIsNotOnATileError
    if self.home_tile.board is not tile.board:
        raise PieceIsNotOnThisBoardError
    self.home_tile.piece = None
    tile.piece = self
</code></pre>
<p>Is this structure bad? I think it reads nicely: when the user tries to move a piece of his own, he can do:</p>
<pre><code>try:
    character.move(somewhere)
except PositionOccupiedError:
    character.punish()  # cause i'm in hardcore
except OutOfBoardError:
    character.kill()  # cause we were in a floating world and the
                      # character just fell off
</code></pre>
<p>There's some more logic in the library implemented like this, where the code <em>tries</em> to do something but if it can't, it will throw an exception. Is this OK? Or maybe I should be returning error codes (as integers, for example). </p>
| 2 | 2016-08-09T12:40:54Z | 38,852,399 | <p>This usage case seems to be all right for exceptions. Some people say that:</p>
<blockquote>
<p>"Exceptions should be exceptional"</p>
</blockquote>
<p>but in Python, use of exceptions to perform flow control is not regarded as a bad practice (read more <a href="http://stackoverflow.com/questions/16138232/is-it-a-good-practice-to-use-try-except-else-in-python">"Is it a good practice to use try except else in python"</a>).</p>
<p>Moreover, be aware that throwing and checking exceptions in Python might have a negative performance impact, if placed in repeatedly invoked functions. Unless you are developing a fast-paced game with hundreds of players, it shouldn't be your primary concern. You can read more about performance related issues of catching exceptions <a href="http://stackoverflow.com/questions/8107695/python-faq-how-fast-are-exceptions">in this topic on Stack Overflow</a>.</p>
<p>Personally, as a programmer I would rather deal with exceptions (when programming in Python) than with error codes or something similar. On the other hand, it is not much harder to handle a wide range of returned statuses if they are given as error codes - you can still mimic a switch-case construct, using e.g. a <a href="http://stackoverflow.com/questions/60208/replacements-for-switch-statement-in-python">dictionary with callable handlers as values</a>.</p>
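As an illustrative sketch of that last idea (all names are hypothetical), a dict of callables can mimic a switch over error codes, with <code>dict.get</code> supplying the default branch:

```python
OUT_OF_BOARD = 1
POSITION_OCCUPIED = 2

def kill():
    return "character killed"

def punish():
    return "character punished"

# maps each error code to its handler, like the cases of a switch statement
handlers = {
    OUT_OF_BOARD: kill,
    POSITION_OCCUPIED: punish,
}

def handle(code):
    # dict.get supplies the switch's `default:` branch
    return handlers.get(code, lambda: "unhandled error")()

print(handle(OUT_OF_BOARD))       # character killed
print(handle(99))                 # unhandled error
```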
| 3 | 2016-08-09T13:39:05Z | [
"python",
"python-3.x",
"exception"
] |
ConfigParser.NoSectionError: No section: 'metadata' in installing mysql-python | 38,851,059 | <p>When I try to install mysql-python on CentOS 6.5, it throws an error:</p>
<pre><code>python /sw/ple.bkp/workspace/tianfd/TEMP/MySQL-python-1.2.5/setup.py install
Traceback (most recent call last):
  File "/sw/ple.bkp/workspace/tianfd/TEMP/MySQL-python-1.2.5/setup.py", line 17, in <module>
    metadata, options = get_config()
  File "/sw/ple.bkp/workspace/tianfd/TEMP/MySQL-python-1.2.5/setup_posix.py", line 32, in get_config
    metadata, options = get_metadata_and_options()
  File "/sw/ple.bkp/workspace/tianfd/TEMP/MySQL-python-1.2.5/setup_common.py", line 12, in get_metadata_and_options
    metadata = dict(config.items('metadata'))
  File "/usr/local/lib/python2.7/ConfigParser.py", line 642, in items
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'metadata'
</code></pre>
<p>I have tried installing mysql-devel and recompiling Python, but the error persists. Could you give me some hints about this error?</p>
<p>Thanks for your time!</p>
| 0 | 2016-08-09T12:41:16Z | 38,851,302 | <p>This seems to be the case of <code>ConfigParser</code> not being able to read the <code>metadata.cfg</code> file. Try setting the right permissions to this file as well as <code>site.cfg</code>.</p>
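To see where the traceback comes from, here is a small runnable illustration: <code>ConfigParser.read()</code> silently skips files it cannot open (it returns only the list of files it actually parsed), and a later <code>items('metadata')</code> call then fails with exactly this <code>NoSectionError</code>:

```python
try:
    from configparser import ConfigParser, NoSectionError  # Python 3
except ImportError:
    from ConfigParser import ConfigParser, NoSectionError  # Python 2

config = ConfigParser()
# read() returns the list of files actually parsed and silently
# ignores missing or unreadable ones instead of raising
parsed = config.read('no_such_metadata.cfg')
print(parsed)  # []

try:
    config.items('metadata')
except NoSectionError as exc:
    print('raised:', exc)
```

So an unreadable (or not-found) <code>metadata.cfg</code> produces this exact failure, which is why checking permissions and the working directory helps.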
| 0 | 2016-08-09T12:51:47Z | [
"python",
"mysql",
"mysql-python"
] |
Importing Module python | 38,851,068 | <p>I have added:</p>
<pre><code>export PYTHONPATH="${PYTHONPATH}:/home/twittercap/alchemyapi"
</code></pre>
<p>to my ~/.profile file (ubuntu server environment) and it shows when I run</p>
<pre><code>import sys
print sys.path
</code></pre>
<p>but it won't let me import the module using</p>
<pre><code>from alchemyapi import AlchemyAPI
</code></pre>
<p>(which I can do when running from within the directory).</p>
<p>Any help is appreciated.</p>
<p><strong>Update:</strong>
I can now <code>import alchemyapi</code> but <code>import alchemyapi.AlchemyAPI</code> returns <code>ImportError: No module named AlchemyAPI</code> (but there is!)</p>
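(A side note on the update, shown with a throwaway package as an illustration: <code>import package.Name</code> can only resolve modules or subpackages, never a class, which is why the from-import works while the dotted import raises <code>ImportError</code>.)

```python
import os
import sys
import tempfile

# build a throwaway package whose __init__ defines a class named AlchemyAPI
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, 'mypkg')
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
    f.write('class AlchemyAPI(object):\n    pass\n')
sys.path.insert(0, root)

from mypkg import AlchemyAPI      # works: pulls the class out of the package
print(AlchemyAPI.__name__)        # AlchemyAPI

try:
    import mypkg.AlchemyAPI       # fails: AlchemyAPI is a class, not a module
except ImportError:
    print('ImportError raised')
```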
| 1 | 2016-08-09T12:41:33Z | 38,862,175 | <p><strong>Resolved:</strong> Git cloned again and didn't rename - either the rename messed it up, or else the files corrupted the first time - thanks for your suggestions though!</p>
| 0 | 2016-08-09T23:49:53Z | [
"python",
"linux",
"alchemyapi"
] |
Convert number to date format using Python | 38,851,091 | <p>I am reading data from a text file with more than 14000 rows, and there is a column which has eight (08) digit numbers in it. The format of some of the rows is like:</p>
<ul>
<li>01021943 </li>
<li>02031944 </li>
<li>00041945 </li>
<li>00001946</li>
</ul>
<p>The problem is that when I use the to_date function it converts the datatype of the column from object to int64, but I want it to be datetime. Second, when using the to_datetime function, dates like </p>
<ul>
<li>00041945 becomes 41945 </li>
<li>00001946 becomes 1946 and hence I cannot properly format them</li>
</ul>
| 0 | 2016-08-09T12:43:00Z | 38,851,425 | <p>As a first guess solution you could just parse it as a string into a datetime instance. Something like:</p>
<pre><code>from datetime import datetime
EXAMPLE = u'01021943'
dt = datetime(int(EXAMPLE[4:]), int(EXAMPLE[2:4]), int(EXAMPLE[:2]))
</code></pre>
<p>...not caring very much about performance issues.</p>
| 1 | 2016-08-09T12:57:47Z | [
"python",
"pandas",
"python-datetime"
] |
Convert number to date format using Python | 38,851,091 | <p>I am reading data from a text file with more than 14000 rows, and there is a column which has eight (08) digit numbers in it. The format of some of the rows is like:</p>
<ul>
<li>01021943 </li>
<li>02031944 </li>
<li>00041945 </li>
<li>00001946</li>
</ul>
<p>The problem is that when I use the to_date function it converts the datatype of the column from object to int64, but I want it to be datetime. Second, when using the to_datetime function, dates like </p>
<ul>
<li>00041945 becomes 41945 </li>
<li>00001946 becomes 1946 and hence I cannot properly format them</li>
</ul>
| 0 | 2016-08-09T12:43:00Z | 38,851,447 | <pre><code>import datetime
def to_date(num_str):
    return datetime.datetime.strptime(num_str, "%d%m%Y")
</code></pre>
<p>Note this will also throw exceptions for zero values because the expected behavior is not clear for this input.<br>
If you want a different behavior for zero values, you can implement it with <code>try & except</code>,<br>
for example, if you want to get <code>None</code> for zero values you can do:</p>
<pre><code>def to_date(num_str):
    try:
        return datetime.datetime.strptime(num_str, "%d%m%Y")
    except ValueError:
        return None
</code></pre>
| 1 | 2016-08-09T12:58:32Z | [
"python",
"pandas",
"python-datetime"
] |
Convert number to date format using Python | 38,851,091 | <p>I am reading data from a text file with more than 14000 rows, and there is a column which has eight (08) digit numbers in it. The format of some of the rows is like:</p>
<ul>
<li>01021943 </li>
<li>02031944 </li>
<li>00041945 </li>
<li>00001946</li>
</ul>
<p>The problem is that when I use the to_date function it converts the datatype of the column from object to int64, but I want it to be datetime. Second, when using the to_datetime function, dates like </p>
<ul>
<li>00041945 becomes 41945 </li>
<li>00001946 becomes 1946 and hence I cannot properly format them</li>
</ul>
| 0 | 2016-08-09T12:43:00Z | 38,851,500 | <p>You can add the parameter <code>dtype</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> to read the column <code>col</code> as <code>string</code>, and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> with the parameter <code>format</code> to specify the formatting and <code>errors='coerce'</code> so that bad dates are converted to <code>NaT</code>:</p>
<pre><code>import pandas as pd
import io
temp=u"""col
01021943
02031944
00041945
00001946"""
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), dtype={'col': 'str'})
df['col'] = pd.to_datetime(df['col'], format='%d%m%Y', errors='coerce')
print (df)
col
0 1943-02-01
1 1944-03-02
2 NaT
3 NaT
print (df.dtypes)
col datetime64[ns]
dtype: object
</code></pre>
<p>Thanks <a href="http://stackoverflow.com/questions/38851091/convert-number-to-date-format-using-python#comment65067594_38851500">Jon Clements</a> for another solution:</p>
<pre><code>import pandas as pd
import io
temp=u"""col_name
01021943
02031944
00041945
00001946"""
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp),
converters={'col_name': lambda dt: pd.to_datetime(dt, format='%d%m%Y', errors='coerce')})
print (df)
col_name
0 1943-02-01
1 1944-03-02
2 NaT
3 NaT
print (df.dtypes)
col_name datetime64[ns]
dtype: object
</code></pre>
| 2 | 2016-08-09T13:00:42Z | [
"python",
"pandas",
"python-datetime"
] |
How to Import Exceptions from Package (Openpyxl)? | 38,851,120 | <p>I am trying to catch a sheet error exception for the package I am using (Openpyxl). I tried importing the exception like so <code>from openpyxl.utils import SheetTitleException</code> but I get the error <code>"ImportError: cannot import name SheetTitleException"</code>. When I tried importing it just with <code>from openpyxl.utils import *</code>, I get the error <code>NameError: global name 'SheetTitleException' is not defined</code>. </p>
<p>I'm sure I'm importing it incorrectly, but I'm not sure where I'm going wrong. </p>
<p><a href="http://openpyxl.readthedocs.io/en/2.3.3/_modules/openpyxl/utils/exceptions.html#SheetTitleException" rel="nofollow">Here is the documentation on exceptions for Openpyxl.</a></p>
<p>And here is the code I am using to catch the exception:</p>
<pre><code>try:
    bdws = bdwb[finalBDSheetName]
except SheetTitleException:
    messageBox("Invalid sheet title. Check your sheet title and try again.")
    return
</code></pre>
| 0 | 2016-08-09T12:44:04Z | 38,851,173 | <p>The title of the page you linked to says "openpyxl.utils.exceptions".</p>
<p>Therefore you should be doing:</p>
<pre><code>from openpyxl.utils.exceptions import SheetTitleException
</code></pre>
| 4 | 2016-08-09T12:46:38Z | [
"python",
"exception-handling",
"openpyxl"
] |
How to Import Exceptions from Package (Openpyxl)? | 38,851,120 | <p>I am trying to catch a sheet error exception for the package I am using (Openpyxl). I tried importing the exception like so <code>from openpyxl.utils import SheetTitleException</code> but I get the error <code>"ImportError: cannot import name SheetTitleException"</code>. When I tried importing it just with <code>from openpyxl.utils import *</code>, I get the error <code>NameError: global name 'SheetTitleException' is not defined</code>. </p>
<p>I'm sure I'm importing it incorrectly, but I'm not sure where I'm going wrong. </p>
<p><a href="http://openpyxl.readthedocs.io/en/2.3.3/_modules/openpyxl/utils/exceptions.html#SheetTitleException" rel="nofollow">Here is the documentation on exceptions for Openpyxl.</a></p>
<p>And here is the code I am using to catch the exception:</p>
<pre><code>try:
    bdws = bdwb[finalBDSheetName]
except SheetTitleException:
    messageBox("Invalid sheet title. Check your sheet title and try again.")
    return
</code></pre>
| 0 | 2016-08-09T12:44:04Z | 38,851,209 | <p>If it's anything like other module exception handling that I've done, it should be</p>
<pre><code>from openpyxl.utils.exceptions import SheetTitleException
</code></pre>
<p>then to use it</p>
<pre><code>except SheetTitleException as e:
    # do something
</code></pre>
| 1 | 2016-08-09T12:48:01Z | [
"python",
"exception-handling",
"openpyxl"
] |
Python - integration with rpy2 and 'must be atomic' error | 38,851,184 | <p>While using package rpy2, I get the error </p>
<blockquote>
  <p>Error in sort.int(x, na.last = na.last, decreasing = decreasing, ...)
  : 'x' must be atomic<br>
  Traceback (most recent call last):<br>
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;<br>
  File "/usr/lib/python2.7/dist-packages/rpy2/robjects/functions.py", line 86, in __call__<br>
  return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)<br>
  File "/usr/lib/python2.7/dist-packages/rpy2/robjects/functions.py", line 35, in __call__<br>
  res = super(Function, self).__call__(*new_args, **new_kwargs)<br>
  rpy2.rinterface.RRuntimeError: Error in sort.int(x, na.last = na.last, decreasing = decreasing, ...) : 'x' must be atomic</p>
</blockquote>
<p>when executing</p>
<pre><code>file.R_func.rdc([1,2,3,4,5],[1,3,4,5,6],20,1.67)
</code></pre>
<p>where file.py is defined as follows:</p>
<pre><code>from rpy2.robjects.packages import SignatureTranslatedAnonymousPackage
string = """
rdc <- function(x,y,k,s) {
  x <- cbind(apply(as.matrix(x),2,function(u) ecdf(u)(u)),1)
  y <- cbind(apply(as.matrix(y),2,function(u) ecdf(u)(u)),1)
  wx <- matrix(rnorm(ncol(x)*k,0,s),ncol(x),k)
  wy <- matrix(rnorm(ncol(y)*k,0,s),ncol(y),k)
  cancor(cbind(cos(x%*%wx),sin(x%*%wx)), cbind(cos(y%*%wy),sin(y%*%wy)))$cor[1]
}
"""
R_func = SignatureTranslatedAnonymousPackage(string, "R_func")
</code></pre>
<p>How do I have to pass x and y to rdc()?</p>
| 0 | 2016-08-09T12:47:11Z | 38,873,782 | <p>When doing</p>
<pre><code>file.R_func.rdc([1,2,3,4,5],[1,3,4,5,6],20,1.67)
</code></pre>
<p>an implicit conversion of Python objects is performed before passing them as parameters to the underlying R function.</p>
<p>By default, <code>[1,2,3,4,5]</code> (which is a Python <code>list</code>) will be converted to an R <code>list</code> and R lists are "non-atomic vectors", meaning that each element in the list can be an arbitrary object by opposition to an "atomic" type such as boolean ("logical" in R lingo), an integer, a string, etc...</p>
<p>Try:</p>
<pre><code>from rpy2.robjects.vectors import IntVector, FloatVector
# FloatVector is imported as an alternative if you need/prefer floats
file.R_func.rdc(IntVector([1,2,3,4,5]),
                IntVector([1,3,4,5,6]),
                20,
                1.67)
</code></pre>
| 0 | 2016-08-10T12:42:50Z | [
"python",
"rpy2"
] |
Django: Filter a QuerySet and select results foreign key | 38,851,352 | <p>In Django, I have two models:</p>
<pre><code>class A(models.Model):
    # lots of fields

class B(models.Model):
    a = models.ForeignKey(A)
    member = models.BooleanField()
</code></pre>
<p>I need to construct a query that filters B and selects all A, something like this:</p>
<pre><code>result = B.objects.filter(member=True).a
</code></pre>
<p>The example code above will of course return the error <code>QuerySet has no attribute 'a'</code></p>
<p>Expected result:
a QuerySet containing only A objects</p>
<p><strong>What's the best and fastest way to achieve the desired functionality?</strong></p>
| 0 | 2016-08-09T12:54:23Z | 38,851,406 | <p>I assume you are looking for something like </p>
<pre><code>result = A.objects.filter(b__member=True)
</code></pre>
| 4 | 2016-08-09T12:57:10Z | [
"python",
"django",
"django-models"
] |
Django: Filter a QuerySet and select results foreign key | 38,851,352 | <p>In Django, I have two models:</p>
<pre><code>class A(models.Model):
    # lots of fields

class B(models.Model):
    a = models.ForeignKey(A)
    member = models.BooleanField()
</code></pre>
<p>I need to construct a query that filters B and selects all A, something like this:</p>
<pre><code>result = B.objects.filter(member=True).a
</code></pre>
<p>The example code above will of course return the error <code>QuerySet has no attribute 'a'</code></p>
<p>Expected result:
a QuerySet containing only A objects</p>
<p><strong>What's the best and fastest way to achieve the desired functionality?</strong></p>
| 0 | 2016-08-09T12:54:23Z | 38,851,489 | <p>An alternative to Andrey Zarubin's answer would be to iterate over the queryset you had and create a list of <code>a</code> objects.</p>
<pre><code>b_objects = B.objects.filter(member=True)
a_objects = [result.a for result in b_objects]
</code></pre>
| 1 | 2016-08-09T13:00:07Z | [
"python",
"django",
"django-models"
] |
Django: Filter a QuerySet and select results foreign key | 38,851,352 | <p>In Django, I have two models:</p>
<pre><code>class A(models.Model):
    # lots of fields

class B(models.Model):
    a = models.ForeignKey(A)
    member = models.BooleanField()
</code></pre>
<p>I need to construct a query that filters B and selects all A, something like this:</p>
<pre><code>result = B.objects.filter(member=True).a
</code></pre>
<p>The example code above will of course return the error <code>QuerySet has no attribute 'a'</code></p>
<p>Expected result:
a QuerySet containing only A objects</p>
<p><strong>What's the best and fastest way to achieve the desired functionality?</strong></p>
| 0 | 2016-08-09T12:54:23Z | 38,852,440 | <p>The code below will not filter everything, but it filters on a specific field of the related model; this might be what you are looking for:</p>
<pre><code>B.objects.filter(member=True).filter(a__somefield='some value')
</code></pre>
| 0 | 2016-08-09T13:40:52Z | [
"python",
"django",
"django-models"
] |
Maintaining a Lite and Pro Version of Python Project with Git/PyCharm | 38,851,381 | <h2>Background</h2>
<p>I am wrapping up a python project, and I am considering releasing it with pro/lite versions. I don't want duplicate code laying around, of course, but I can't release a free version with many of the capabilities of the pro version only disabled with a few <em>if</em> checks: the code is for a Blender add-on and therefore will be easily edited and turned into a pro version if the features are still there.</p>
<h2>Question</h2>
<p>What is the best way to maintain a project like this using Git/Pycharm (or am I better off to just not worry about a lite version) <strong>without duplicate code</strong>? <a href="http://stackoverflow.com/a/29991195/6655092">I have read that using multiple Git branches is not the way to go.</a></p>
<h2>Disclaimer</h2>
<p>I do realize that there have been many similar questions about this topic. However, many of these deal with using Xcode, and many more do not have clear answers. Don't get me wrong, I know I <em>could</em> do this a number of ways - but I am looking for the <em>best</em> way, the <em>cleanest</em> way.</p>
| 5 | 2016-08-09T12:55:34Z | 38,860,222 | <p>Here's the basic idea, based on you segregating out code into different modules. Right now, the concept is having 2 different download points. But it doesn't have to be, that's your call.</p>
<p>Regardless of which packaging/distribution approach you take, you'll have to separate out codelines into different code modules. Even if it's just one download.</p>
<p>lite/common_core.py - installed from github.lite</p>
<pre><code>#things you want in common between pro and lite
#i.e. what would be your "duplicate code"
def common_func1():
    pass
</code></pre>
<p>Note: I would not put stuff common to both pro and lite directly into lite/main.py, because you want to present a unified API by exposing pro in lite, but you don't want to also have pro import lite, because that would risk circular import dependencies.</p>
<p>lite/main.py - installed from github.lite</p>
<pre><code>#things you want in common between pro and lite
import lite.common_core
#or import lite.common_core as common
def lite_function1():
    pass

def lite_function2():
    pass

try:
    #you need to determine an appropriate path strategy
    #a pypi-installed pro package should be available on the sys.path
    from pro.main import *
    # or import pro.main as pro
except ImportError:
    pass
#client code can now call functions from the lite and pro
</code></pre>
<p>pro/main.py - installed from github.pro</p>
<pre><code>import lite.common_core
def pro_function1():
    pass
</code></pre>
<p>You could have lite be a requirement of the pro pypi package, so that the user would still have only one download if they started that way.</p>
<p>Also, with regard to the answer you pointed to re git branches, another way to think of it is that you might be trying to fix/enhance say <em>pro</em>. So, from <em>pro's master</em>, you'd want the freedom to create a new branch and still be aware of <em>lite's master</em> (because you depend on it). That kinda bookkeeping is going to be difficult if you are juggling pro and lite on the same repo, with branches used to separate out pro/lite.</p>
| 2 | 2016-08-09T20:52:42Z | [
"python",
"git",
"pycharm",
"code-duplication"
] |
Copying from a binary file using Python inserts new bytes (?) | 38,851,533 | <p>I am trying to open a file created by a piece of measurement equipment, find the bytes corresponding to metadata, then write everything else to a new binary file. (The metadata part is not the problem: I know the headers and can find them easily. Let's not worry about that.)</p>
<p>The problem is: when I open the file and write the bytes into a new file, new bytes are added, which messes up the relevant data. Specifically, every time there is a '0A' byte in the original file, the new file has a '0D' byte before it.
I've gone through a few iterations of trimming down the code to find the issue. Here is the latest and simplest version, in three different ways that all produce the same result:</p>
<pre><code>import os
import mmap

file_name = raw_input('Name of the file to be edited: ')
f = open(file_name, 'rb')

#1st try: using mmap, to make the metadata search easier
s = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
full_data = s.read(len(s))
with open(os.path.join('.', 'edited', ('[mmap data]' + file_name + '.bin')), 'a') as data_mmap:
    data_mmap.write(full_data)

#2nd try: using bytes, in case mmap was giving me trouble
f_byte = bytes(f.read())
with open(os.path.join('.', 'edited', ('[bytes data]' + file_name + '.bin')), 'a') as data_bytes:
    data_bytes.write(f_byte)
s.close()
f.close()

#3rd try: using os.read/write(file) instead of file.read() and file.write().
from os.path import getsize
o = os.open(file_name, os.O_BINARY)  #only available on Windows
f_os = bytes(os.read(o, getsize(file_name)))
with open(os.path.join('.', 'edited', ('[os data]' + file_name + '.bin')), 'a') as data_os:
    os.write(data_os.fileno(), f_os)
os.close(o)
</code></pre>
<p>The resulting files are all identical (compared with HxD). And they are almost identical to the original file, <em>except</em> for the single new bytes. For example, starting at offset 0120 the original file read:
A0 0A 00 00
whereas the new file reads:
A0 0D 0A 00 ...and then everything is exactly the same until the next occurrence of 0A in the original file, where again a 0D byte appears.</p>
<p>Since the code is really simple, I assume the error comes from the read function (or perhaps from some unavoidable inherent behaviour of the OS... I'm using python 2.7 on Windows, BTW.)
I also suspected the data format at first, but it seems to me it <em>should</em> be irrelevant. I am just copying everything, regardless of value.</p>
<p>I found no documentation that could help, so... anyone know what's causing that?</p>
<p>Edit: the same script works fine on Linux, by the way. So while it was not a big problem, it was very very annoying.</p>
| 1 | 2016-08-09T13:02:02Z | 38,852,230 | <p>Welcome to the world of end of line markers! When a file is open in text mode under Windows, any raw <code>\n</code> (hex 0x0a) will be written as <code>\r\n</code> (hex 0x0d 0x0a).</p>
<p>Fortunately it is easy to fix: just open the file in binary mode (note the <strong>b</strong>):</p>
<pre><code>with open(..., 'ab') as data_...:
</code></pre>
<p>and the unwanted <code>\r</code> will no longer bother you :-)</p>
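<p>As a minimal, platform-independent sketch of the fix (hypothetical file names; the key point is the <code>b</code> in <em>both</em> modes, so no newline translation happens on either side):</p>

```python
# Create a small binary file containing a raw 0x0a byte.
with open("source.bin", "wb") as f:
    f.write(b"\xa0\x0a\x00\x00")

# Copy it byte-for-byte; 'rb'/'wb' bypass any text-mode newline translation.
with open("source.bin", "rb") as src, open("copy.bin", "wb") as dst:
    dst.write(src.read())

# The copy is identical: no 0x0d was inserted before the 0x0a.
with open("copy.bin", "rb") as f:
    assert f.read() == b"\xa0\x0a\x00\x00"
```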
| 3 | 2016-08-09T13:31:06Z | [
"python",
"binaryfiles"
] |
Convert ui-select selected values to integer list | 38,851,548 | <p>I've used <strong>angular-ui-select</strong>, which is selecting some IDs of int type like this:
<code>9,2,3</code>. I'm trying to pass these values as an <code>integer</code> list to a Python file like this</p>
<pre><code>param += '"lstRole":[' + $scope.multipleDemo.roles +']';
</code></pre>
<p>It throws error </p>
<pre><code>Nonetype object is not iterable
</code></pre>
<p>and when I pass this param as</p>
<pre><code>param += '"lstRole":"[' + $scope.multipleDemo.roles +']"';
</code></pre>
<p>it is received as a string list and throws the error</p>
<pre><code>invalid literal for int() with base 10
</code></pre>
<p>Can anyone point out what I'm doing wrong?</p>
| 0 | 2016-08-09T13:02:29Z | 38,851,912 | <p>Try this in your Python function:</p>
<pre><code>json_str = request.body.decode('utf-8')
py_str = json.loads(json_str)
print(py_str["lstRole"])
</code></pre>
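<p>A standalone sketch of the same idea (the payload is made up; the point is that <code>json.loads</code> turns a JSON array into a real Python list of ints, so no manual string splitting or <code>int()</code> conversion is needed):</p>

```python
import json

# What the client side should send, e.g. built with
# JSON.stringify({lstRole: $scope.multipleDemo.roles}) in Angular.
json_str = '{"lstRole": [9, 2, 3]}'

data = json.loads(json_str)
roles = data["lstRole"]

assert roles == [9, 2, 3]                       # a real list of ints
assert all(isinstance(r, int) for r in roles)   # no str -> int conversion needed
```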
| 0 | 2016-08-09T13:17:56Z | [
"python",
"angularjs",
"list"
] |
Heroku Gunicorn Procfile | 38,851,564 | <p>I have a hard time finding documentation for creating Procfiles using flask with gunicorn and Heroku. Somewhere I found that the syntax is:
<code>web: gunicorn my_folder.my_module:app</code>. But I can't make it work. It only works for me when my python script: <code>hello.py</code> is in the root folder of the app. When I put it in a subfolder called app and create a Procfile: <code>web: gunicorn app.hello:app</code> it doesn't work. Only when I use <code>web: gunicorn hello:app</code> and my python script is in the root folder. Can someone explain me the proper syntax of Procfiles for gunicorn on Heroku, and how to make it work when the python script is in a subfolder?</p>
| 0 | 2016-08-09T13:03:05Z | 38,853,025 | <p>Have a look at <a href="https://github.com/zachwill/flask_heroku" rel="nofollow">this template</a>. It has a Procfile and is ready to be pushed to Heroku. You can learn from it, or just fork it and base your app on the template.</p>
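<p>As a hedged note (not tested against your exact project): gunicorn resolves <code>app.hello:app</code> as a Python import path, so the subfolder normally has to be an importable package. A sketch of a layout that should make <code>web: gunicorn app.hello:app</code> work (names are placeholders):</p>

```
myproject/
├── Procfile          # contains: web: gunicorn app.hello:app
├── requirements.txt
└── app/
    ├── __init__.py   # makes "app" a package, so "app.hello" is importable
    └── hello.py      # defines app = Flask(__name__)
```

<p>Without <code>app/__init__.py</code>, <code>gunicorn app.hello:app</code> cannot import the module, which matches the behavior you describe.</p>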
| 0 | 2016-08-09T14:04:45Z | [
"python",
"heroku",
"flask",
"gunicorn"
] |
Data alignment in plotting bar chart from dictionary in python | 38,851,621 | <p>I'm trying to plot 2 axes in a bar chart using 2 different dictionaries that have exactly the same keys, using matplotlib.
I wasn't able to make sure that the items (keys and values) of the two dictionaries are in the same order.</p>
<p>code example:</p>
<pre><code>ind = numpy.arange(len(types_dict)) # the x locations for the groups
fig, ax = plt.subplots()
rects1 = ax.bar(ind, types_dict.values(), 0.35, color='green')
rects2 = ax.bar(ind+0.35, genome_types_dict.values(), 0.35, color='purple')
plt.xticks(ind+width, types_dict.keys(), fontsize=10)
plt.savefig(output+"bar_" + library_name + ".png")
</code></pre>
<p>When printing the keys for the dictionaries types_dict and genome_types_dict, their keys are not in order, and therefore also their values:</p>
<pre><code>types_dict = ['rRNA', 'IGR', '3UTR', 'sRNA', 'tRNA', 'TU', '5UTR', 'AS', 'cis_AS_with_trans_t', 'mRNA', 'other-ncRNA']
genome_types_dict = ['rRNA', 'IGR', '3UTR', '5UTR', 'tRNA', 'TU', 'sRNA', 'AS', 'cis_AS_with_trans_t', 'mRNA', 'other-ncRNA']
</code></pre>
<p>Looking for a solution for the alignment between the 2 dictionaries.</p>
<p>Thank you,</p>
| 0 | 2016-08-09T13:05:38Z | 38,853,898 | <p>You're defining lists rather than dictionaries because you're using square brackets. If you want dictionaries, you'll need to wrap your data in {}. In Python, list order is preserved, dict order is not, unless you use an <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow">OrderedDict</a>.</p>
<p>Further, the elements in the two lists you've defined are strings (they're defined in quotes), which you're attempting to plot on the y-axis against your x-axis (ind) values. You'll need <em>numerical data</em> to plot against ind.</p>
<p>Presumably you'd like to define true dictionary key-value pairs with numerical value data, such as:</p>
<pre><code>genome_dict = {'rRNA': 1, 'IGR': 2, '3UTR': 3, '5UTR': 4, 'tRNA': 6, 'TU': 7, 'sRNA': 8, 'AS': 9, 'cis_AS_with_trans_t': 10, 'mRNA': 11, 'other-ncRNA': 12}
</code></pre>
<p>Using that, you can extract the keys and use them as your graph axis labels with ax.set_xticklabels. You can use the dictionary's values as the second argument in your rects = ax.bar() expression, i.e. as the y-axis values, and thus produce a plot of numerical data against ind, with each bar labelled as a genome type.</p>
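<p>As a sketch of the alignment itself (illustrative counts, since the real values aren't shown): iterating one fixed key order and pulling values from <em>both</em> dicts by key guarantees the two bar series line up, whatever order each dict happens to store its keys in:</p>

```python
# Two dicts with the same keys but different insertion/iteration order.
types_dict = {"rRNA": 5, "IGR": 3, "mRNA": 7}
genome_types_dict = {"mRNA": 2, "rRNA": 9, "IGR": 4}

keys = sorted(types_dict)                       # one fixed order for both series
vals_a = [types_dict[k] for k in keys]          # values for the first bar series
vals_b = [genome_types_dict[k] for k in keys]   # values for the second, same order

assert keys == ["IGR", "mRNA", "rRNA"]
assert vals_a == [3, 7, 5]
assert vals_b == [4, 2, 9]   # position i of both lists refers to the same key
```

<p>Passing <code>vals_a</code> and <code>vals_b</code> to the two <code>ax.bar()</code> calls (and <code>keys</code> to <code>plt.xticks</code>) then keeps the bars aligned by genome type.</p>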
| 0 | 2016-08-09T14:43:00Z | [
"python",
"dictionary",
"matplotlib",
"bar-chart"
] |
Find consecutive zeros in Python, based on time | 38,851,641 | <p>I have a pandas <code>df1</code> with a <code>datetime</code> column and a <code>count</code> column. If there is a string of 0s for a consecutive hour, and less than 2 minutes of data > 0 within that hour (a 'spike tolerance'), it is considered invalid. </p>
<p>The <code>datetime</code> is in 5 second intervals but not always consistent (i.e. can jump from 6:00:00 to 14:00:00, skipping all the time in between) so the difference between rows should be 5 seconds in order to be considered a consecutive period of time. </p>
<p>I would like to add a new column <code>flag</code> that marks a 0 for invalid and a 1 for valid.</p>
<p>Sample data</p>
<pre><code> time count flag
00:00:05 0 0
00:00:10 0 0
..... all 0 0
01:00:05 0 0
01:00:10 33 1
01:00:15 19 1
....... n>0 1
02:00:10 12 1
</code></pre>
| 1 | 2016-08-09T13:06:35Z | 38,852,095 | <p>Transpose the frame and turn it into a series:</p>
<pre><code>y = df.T.unstack()
</code></pre>
<p>Then, to make up for the lack of a contiguous groupby in pandas:</p>
<pre><code>y * (y.groupby((y != y.shift()).cumsum()).cumcount() + 1)
OUT: 0 0
1 0
2 1
3 2
4 3
5 0
6 0
7 1
8 0
9 1
10 2
</code></pre>
<p>This will yield the number of consecutive values.</p>
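<p>The same contiguous-group idea can be checked in plain Python with <code>itertools.groupby</code>, which may help verify the pandas result (illustrative flag sequence, not the asker's real data):</p>

```python
from itertools import groupby

counts = [0, 0, 0, 33, 19, 12, 0, 0]

# Group by "is zero", so each run of consecutive zeros becomes one group;
# the result is a run-length encoding (is_zero, run_length).
runs = [(is_zero, len(list(g)))
        for is_zero, g in groupby(counts, key=lambda c: c == 0)]

assert runs == [(True, 3), (False, 3), (True, 2)]
```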
| 0 | 2016-08-09T13:24:41Z | [
"python",
"datetime",
"pandas"
] |
Python run command line | 38,851,647 | <p>I'm trying to run a command line script.
I have:
command.py</p>
<pre><code>import subprocess
proc = subprocess.Popen('python3 hello.py', stdout=subprocess.PIPE)
tmp = proc.stdout.read()
print(tmp)
</code></pre>
<p>and hello.py</p>
<pre><code>print("hello")
</code></pre>
<p>but it returns error</p>
<blockquote>
<p>FileNotFoundError: [WinError 2]</p>
</blockquote>
<p>How can I run command and print result?</p>
| 0 | 2016-08-09T13:06:48Z | 38,852,031 | <p>The problem is that <code>Popen</code> cannot find <code>python3</code> in your path.</p>
<p>If the primary error was that <code>hello.py</code> wasn't found, you would have this error instead in <code>stderr</code> (which is not read):</p>
<pre><code>python: can't open file 'hello.py': [Errno 2] No such file or directory
</code></pre>
<p>You would not get an exception in <code>subprocess</code> because <code>python3</code> has run but failed to find the python file to execute.</p>
<p>First, if you want to run a Python file, it's better to avoid running it in another process at all.</p>
<p>But if you still want to do that, it is better to run it with an argument list, like this, so spaces are handled properly, and to provide the full path to all files.</p>
<pre><code>import subprocess
proc = subprocess.Popen([r'c:\some_dir\python3',r'd:\full_path_to\hello.py'], stdout=subprocess.PIPE)
tmp = proc.stdout.read()
print(tmp)
</code></pre>
<p>To go further:</p>
<ul>
<li>For <code>python3</code>, it would be good to put it in the system path to avoid specifying the full path.</li>
<li>As for the script, if it is located, say, in the same directory as the current script, you can avoid hardcoding the path.</li>
</ul>
<p>improved snippet:</p>
<pre><code>import subprocess,os
basedir = os.path.dirname(__file__)
proc = subprocess.Popen(['python3',os.path.join(basedir,'hello.py')], stdout=subprocess.PIPE)
tmp = proc.stdout.read()
print(tmp)
</code></pre>
<p>Also: <code>Popen</code> performs a kind of <code>execvp</code> of the first argument passed. If the first argument is, say, a <code>.bat</code> file, you need to add the <code>cmd /c</code> prefix or <code>shell=True</code> to tell <code>Popen</code> to create the process in a shell instead of executing it directly.</p>
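<p>One portable way to avoid hardcoding the interpreter path at all is <code>sys.executable</code>, which is the full path of the Python currently running (a sketch; <code>-c</code> stands in for the real <code>hello.py</code>):</p>

```python
import subprocess
import sys

# sys.executable points at the running interpreter, so the child process
# uses the same Python without any hardcoded 'python3' path lookup.
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
assert out.strip() == b"hello"
```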
| 1 | 2016-08-09T13:21:59Z | [
"python",
"python-3.x"
] |
How to parse a repeatable option with two arguments with Python's argparse? | 38,851,656 | <p>How can I get <code>argparse</code> to parse an option with two arguments, which might exist multiple times? Like this:</p>
<pre><code>$ cmd --repo origin here --repo other there --repo upstream url3
</code></pre>
<p>And the parsed arguments should be accessible for example like this:</p>
<pre><code>args.repo = [('origin', 'here'), ('other', 'there'), ('upstream', 'url3')]
</code></pre>
| 0 | 2016-08-09T13:07:21Z | 38,851,802 | <p>You should use the append action.</p>
<p>From the argparse documentation:</p>
<p><code>append</code> - This stores a list, and appends each argument value to the list. This is useful to allow an option to be specified multiple times. </p>
<p>Example usage:</p>
<pre><code>>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', nargs='*', action='append')
>>> parser.parse_args('--foo 1 2 --foo 3 4'.split())
Namespace(foo=[['1', '2'], ['3', '4']])
</code></pre>
<p>Source: <a href="https://docs.python.org/3/library/argparse.html#action" rel="nofollow">https://docs.python.org/3/library/argparse.html#action</a></p>
<p>You might also want to take a look at the docopt project, which in my opinion is the best Python argument-parsing package:</p>
<ul>
<li><a href="http://docopt.org" rel="nofollow">http://docopt.org</a></li>
<li><a href="https://pypi.python.org/pypi/docopt" rel="nofollow">https://pypi.python.org/pypi/docopt</a></li>
</ul>
| 1 | 2016-08-09T13:13:10Z | [
"python",
"command-line-arguments",
"argparse",
"command-line-parsing"
] |
How to parse a repeatable option with two arguments with Python's argparse? | 38,851,656 | <p>How can I get <code>argparse</code> to parse an option with two arguments, which might exist multiple times? Like this:</p>
<pre><code>$ cmd --repo origin here --repo other there --repo upstream url3
</code></pre>
<p>And the parsed arguments should be accessible for example like this:</p>
<pre><code>args.repo = [('origin', 'here'), ('other', 'there'), ('upstream', 'url3')]
</code></pre>
| 0 | 2016-08-09T13:07:21Z | 38,851,969 | <pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--repo', nargs=2, action='append')
parser.parse_args('--repo origin here --repo other there'.split())
</code></pre>
<p>Result:</p>
<pre><code>Namespace(repo=[['origin', 'here'], ['other', 'there']])
</code></pre>
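<p>If you specifically want the tuples from the question rather than lists, converting after parsing is a one-liner (a sketch using the same option definition):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--repo", nargs=2, action="append")
args = parser.parse_args("--repo origin here --repo other there".split())

# argparse appends lists of 2 items; convert each pair to a tuple
args.repo = [tuple(pair) for pair in args.repo]
assert args.repo == [("origin", "here"), ("other", "there")]
```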
| 2 | 2016-08-09T13:20:01Z | [
"python",
"command-line-arguments",
"argparse",
"command-line-parsing"
] |
Python 3.5 | split List and export to Excel or CSV | 38,851,819 | <p>I scrape a website with Python 3.5 (BeautifulSoup) and the result is a list. The values are stored in a variable called "project_titles".</p>
<p>The values look like:</p>
<pre><code>project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras", 'CitizenSpring - App to crowdsource &amp; map safe drinking water', 'Shoka Bell: The Ultimate City Cycling Tool']
</code></pre>
<p>I want to split the values at the comma and export this to an Excel file or CSV.</p>
<p>I need the values in an Excel like:</p>
<ul>
<li>Cell A1: I'm Back. Raspberry Pi unique Case for your Analog Cameras</li>
<li>Cell B1: CitizenSpring - App to crowdsource & map safe drinking water</li>
<li>Cell C1: Shoka Bell: The Ultimate City Cycling Tool</li>
</ul>
| 0 | 2016-08-09T13:14:07Z | 38,852,025 | <p>Since you already have a list of strings conforming to the columns required in your CSV file you can simply write the list out using the <a href="https://docs.python.org/3.5/library/csv.html#module-csv" rel="nofollow"><code>csv</code></a> module:</p>
<pre><code>import csv
project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras", 'CitizenSpring - App to crowdsource & map safe drinking water', 'Shoka Bell: The Ultimate City Cycling Tool']
with open('projects.csv', 'w') as f:
csv.writer(f).writerow(project_titles)
</code></pre>
<p>After running this code the output CSV file would contain:</p>
<pre>
I'm Back. Raspberry Pi unique Case for your Analog Cameras,CitizenSpring - App to crowdsource & map safe drinking water,Shoka Bell: The Ultimate City Cycling Tool
</pre>
<p>which you can import into Excel.</p>
| 2 | 2016-08-09T13:21:51Z | [
"python",
"excel",
"python-3.x",
"csv",
"beautifulsoup"
] |
Python 3.5 | split List and export to Excel or CSV | 38,851,819 | <p>I scrape a website with Python 3.5 (BeautifulSoup) and the result is a list. The values are stored in a variable called "project_titles".</p>
<p>The values look like:</p>
<pre><code>project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras", 'CitizenSpring - App to crowdsource &amp; map safe drinking water', 'Shoka Bell: The Ultimate City Cycling Tool']
</code></pre>
<p>I want to split the values at the comma and export this to an Excel file or CSV.</p>
<p>I need the values in an Excel like:</p>
<ul>
<li>Cell A1: I'm Back. Raspberry Pi unique Case for your Analog Cameras</li>
<li>Cell B1: CitizenSpring - App to crowdsource & map safe drinking water</li>
<li>Cell C1: Shoka Bell: The Ultimate City Cycling Tool</li>
</ul>
| 0 | 2016-08-09T13:14:07Z | 38,852,059 | <p>Since <code>project_titles</code> is already a list containing the strings you want, it's easy to use <code>pandas</code> (it can be installed together with one of the scientific Python distributions, see scipy.org) with the following short code:</p>
<pre><code>import pandas as pd
project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras",
'CitizenSpring - App to crowdsource & map safe drinking water',
'Shoka Bell: The Ultimate City Cycling Tool']
d = pd.DataFrame(project_titles)
writer = pd.ExcelWriter('data.xlsx')
d.to_excel(writer, 'my_data', index=False, header=False)
writer.save()
</code></pre>
| 1 | 2016-08-09T13:23:10Z | [
"python",
"excel",
"python-3.x",
"csv",
"beautifulsoup"
] |
Python 3.5 | split List and export to Excel or CSV | 38,851,819 | <p>I scrape a website with Python 3.5 (BeautifulSoup) and the result is a list. The values are stored in a variable called "project_titles".</p>
<p>The values look like:</p>
<pre><code>project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras", 'CitizenSpring - App to crowdsource &amp; map safe drinking water', 'Shoka Bell: The Ultimate City Cycling Tool']
</code></pre>
<p>I want to split the values at the comma and export this to an Excel file or CSV.</p>
<p>I need the values in an Excel like:</p>
<ul>
<li>Cell A1: I'm Back. Raspberry Pi unique Case for your Analog Cameras</li>
<li>Cell B1: CitizenSpring - App to crowdsource & map safe drinking water</li>
<li>Cell C1: Shoka Bell: The Ultimate City Cycling Tool</li>
</ul>
| 0 | 2016-08-09T13:14:07Z | 38,852,247 | <p>You can try this very simple code:</p>
<pre><code>project_titles = ["I'm Back. Raspberry Pi unique Case for your Analog Cameras", 'CitizenSpring - App to crowdsource & map safe drinking water', 'Shoka Bell: The Ultimate City Cycling Tool']
with open('data.csv',"w") as fo:
fo.writelines(",".join(project_titles))
</code></pre>
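<p>One caveat worth noting: a plain <code>",".join(...)</code> produces ambiguous CSV if a title itself contains a comma, while the <code>csv</code> module quotes such fields automatically. A small sketch of the difference (made-up titles):</p>

```python
import csv
import io

row = ["a title, with a comma", "plain title"]

buf = io.StringIO()
csv.writer(buf).writerow(buf_row := row)

# The comma-bearing field is quoted, so it still parses as one column.
assert buf.getvalue().strip() == '"a title, with a comma",plain title'
```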
| 0 | 2016-08-09T13:31:54Z | [
"python",
"excel",
"python-3.x",
"csv",
"beautifulsoup"
] |
Django 1.8 CSS load incompletely | 38,851,939 | <p>I have looked up almost every solution for Django CSS, JS and image files not loading. However, I think my project is somewhat different.</p>
<p>I want to make a RWD site but it doesn't work on small screens.
No menu, no template change.</p>
<p>I can see the CSS works, but not entirely.
<a href="http://i.stack.imgur.com/EbnHV.jpg" rel="nofollow">image description here</a></p>
<p>I'm trying every solution I can find. I'm confused about the static files and paths, and what the correct level is.</p>
<p><a href="http://i.stack.imgur.com/U8Tup.jpg" rel="nofollow">Here is my project path Image</a></p>
<p>I tried many methods, and I deleted the parts not really involved to keep the code clean for anyone who can help me.</p>
<p>settings.py</p>
<pre><code> TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templated-EX').replace('\\', '/')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'coreapp/static')
STATIC_DIRS = [
('css',os.path.join(STATIC_ROOT,'css').replace('\\','/') ),
('js',os.path.join(STATIC_ROOT,'js').replace('\\','/') ),
('images',os.path.join(STATIC_ROOT,'images').replace('\\','/') ),
# ('upload',os.path.join(STATIC_ROOT,'upload').replace('\\','/') ),
]
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
</code></pre>
<p>index.html</p>
<pre><code>{% load staticfiles %}
<!DOCTYPE HTML>
<!--
Ex Machina by TEMPLATED
templated.co @templatedco
Released for free under the Creative Commons Attribution 3.0 license (templated.co/license)
-->
<html>
<head>
<title>Ex Machina by TEMPLATED</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="" />
<meta name="keywords" content="" />
<link href='http://fonts.googleapis.com/css?family=Roboto+Condensed:700italic,400,300,700' rel='stylesheet' type='text/css'>
<!--[if lte IE 8]><script src="{% static "js/html5shiv.js" %}"></script><![endif]-->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="/static/js/skel.min.js"></script>
<script src="/static/js/skel-panels.min.js"></script>
<script src="/static/js/init.js"></script>
<link rel="stylesheet" href="/static/css/skel-noscript.css" />
<link rel="stylesheet" href="/static/css/style.css"/>
<link rel="stylesheet" href="/static/css/style-desktop.css"/>
</code></pre>
<p>Also I have tried to add something to urls.py. That didn't help either.</p>
<p>I'm new to Django and must have missed something important. I have been stuck for a few days.</p>
<hr>
<p>update 0810</p>
<p>OK, guys,</p>
<p>I think I'm getting clearer than before. I downloaded another template, built a new project, and went through the procedure again.</p>
<p>The new project is fine! So the procedure is right.</p>
<p>I examined the original template I mentioned; it is a little different from mine.
The difference is in the HTML and CSS.</p>
<p>Original CSS in index.html</p>
<pre><code><noscript>
<link rel="stylesheet" href="{% static "css/skel-noscript.css" %}"/>
<link rel="stylesheet" href="{% static "css/style.css" %}"/>
<link rel="stylesheet" href="{% static "css/style-desktop.css" %}"/>
</noscript>
</code></pre>
<p>It doesn't work if you are using <code><noscript></noscript></code> in the html. Perhaps I deleted <code><noscript></noscript></code> because Django couldn't run with it.</p>
| 0 | 2016-08-09T13:18:50Z | 38,852,154 | <p>Try changing your href in html like this:</p>
<pre><code><link rel="stylesheet" href="{{STATIC_URL}}css/skel-noscript.css" />
</code></pre>
<p>Let me know if this doesn't work.</p>
<p>Edit:
What is your BASE_DIR setting?
Can you try with the following settings?</p>
<pre><code># Static Files
STATIC_ROOT = join(os.path.dirname(BASE_DIR), 'staticfiles')
STATICFILES_DIRS = [join(os.path.dirname(BASE_DIR), 'static'), ]
STATIC_URL = '/static/'
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
)
</code></pre>
| 0 | 2016-08-09T13:27:23Z | [
"python",
"html",
"css",
"django"
] |
Django 1.8 CSS load incompletely | 38,851,939 | <p>I have looked up almost every solution for Django CSS, JS and image files not loading. However, I think my project is somewhat different.</p>
<p>I want to make a RWD site but it doesn't work on small screens.
No menu, no template change.</p>
<p>I can see the CSS works, but not entirely.
<a href="http://i.stack.imgur.com/EbnHV.jpg" rel="nofollow">image description here</a></p>
<p>I'm trying every solution I can find. I'm confused about the static files and paths, and what the correct level is.</p>
<p><a href="http://i.stack.imgur.com/U8Tup.jpg" rel="nofollow">Here is my project path Image</a></p>
<p>I tried many methods, and I deleted the parts not really involved to keep the code clean for anyone who can help me.</p>
<p>settings.py</p>
<pre><code> TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templated-EX').replace('\\', '/')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
...
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'coreapp/static')
STATIC_DIRS = [
('css',os.path.join(STATIC_ROOT,'css').replace('\\','/') ),
('js',os.path.join(STATIC_ROOT,'js').replace('\\','/') ),
('images',os.path.join(STATIC_ROOT,'images').replace('\\','/') ),
# ('upload',os.path.join(STATIC_ROOT,'upload').replace('\\','/') ),
]
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
</code></pre>
<p>index.html</p>
<pre><code>{% load staticfiles %}
<!DOCTYPE HTML>
<!--
Ex Machina by TEMPLATED
templated.co @templatedco
Released for free under the Creative Commons Attribution 3.0 license (templated.co/license)
-->
<html>
<head>
<title>Ex Machina by TEMPLATED</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="" />
<meta name="keywords" content="" />
<link href='http://fonts.googleapis.com/css?family=Roboto+Condensed:700italic,400,300,700' rel='stylesheet' type='text/css'>
<!--[if lte IE 8]><script src="{% static "js/html5shiv.js" %}"></script><![endif]-->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="/static/js/skel.min.js"></script>
<script src="/static/js/skel-panels.min.js"></script>
<script src="/static/js/init.js"></script>
<link rel="stylesheet" href="/static/css/skel-noscript.css" />
<link rel="stylesheet" href="/static/css/style.css"/>
<link rel="stylesheet" href="/static/css/style-desktop.css"/>
</code></pre>
<p>Also I have tried to add something to urls.py. That didn't help either.</p>
<p>I'm new to Django and must have missed something important. I have been stuck for a few days.</p>
<hr>
<p>update 0810</p>
<p>OK, guys,</p>
<p>I think I'm getting clearer than before. I downloaded another template, built a new project, and went through the procedure again.</p>
<p>The new project is fine! So the procedure is right.</p>
<p>I examined the original template I mentioned; it is a little different from mine.
The difference is in the HTML and CSS.</p>
<p>Original CSS in index.html</p>
<pre><code><noscript>
<link rel="stylesheet" href="{% static "css/skel-noscript.css" %}"/>
<link rel="stylesheet" href="{% static "css/style.css" %}"/>
<link rel="stylesheet" href="{% static "css/style-desktop.css" %}"/>
</noscript>
</code></pre>
<p>It doesn't work if you are using <code><noscript></noscript></code> in the html. Perhaps I deleted <code><noscript></noscript></code> because Django couldn't run with it.</p>
| 0 | 2016-08-09T13:18:50Z | 38,858,437 | <p>Try doing this in your <strong>index.html</strong> file:</p>
<pre><code><!DOCTYPE HTML>
{% load staticfiles %}
<!--
Ex Machina by TEMPLATED
templated.co @templatedco
Released for free under the Creative Commons Attribution 3.0 license (templated.co/license)
-->
<html>
<head>
<title>Ex Machina by TEMPLATED</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="description" content="" />
<meta name="keywords" content="" />
<link href='http://fonts.googleapis.com/css?family=Roboto+Condensed:700italic,400,300,700' rel='stylesheet' type='text/css'>
<!--[if lte IE 8]><script src="{% static "js/html5shiv.js" %}"></script><![endif]-->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="{% static 'js/skel.min.js' %}" type="text/javascript"></script>
<script src="{% static 'js/skel-panels.min.js' %}" type="text/javascript"></script>
<script src="{% static 'js/init.js' %}" type="text/javascript"></script>
<link href="{% static 'css/skel-noscript.css' %}" rel="stylesheet">
<link href="{% static 'css/style.css' %}" rel="stylesheet">
<link href="{% static 'css/style-desktop.css' %}" rel="stylesheet">
</code></pre>
<p>You weren't properly using <code>{% load staticfiles %}</code></p>
| 0 | 2016-08-09T18:57:08Z | [
"python",
"html",
"css",
"django"
] |
python kmeans on string | 38,851,979 | <p>I am new to the k-means clustering method. I am trying to cluster a one-dimensional string array in Python.</p>
<p>Below is my data:</p>
<pre><code>from sklearn.cluster import KMeans
expertise=['Bioactive Surfaces and Scaffolds for Regenerative Medicine',
'Drug/gene delivery science',
'RNA nanomedicine', 'Immuno/bio/nano-engineering', 'Biomaterials', 'Nanomedicine',
'Biobased Chemicals and Polymers',
'Membranes Science & Technology',
'Modeling of Infectious and Lifestyle-related Diseases']
km = KMeans(n_clusters=2)
km.fit(expertise)
</code></pre>
<p>and I get ValueError: could not convert string to float:</p>
<p>So I wonder how to apply k-means on string data, or is there any way I can change the data to two dimensions?</p>
| -4 | 2016-08-09T13:20:19Z | 38,852,530 | <p>You will first have to define how you want to cluster your data. scikit-learn's simple KMeans clustering is designed to work on numbers. However, scikit-learn can also be used to cluster documents by topic using a bag-of-words approach. This is done by extracting the features into a scipy.sparse matrix instead of standard numpy arrays.</p>
<p>One demo example is given here:
<a href="http://scikit-learn.org/stable/auto_examples/text/document_clustering.html" rel="nofollow">http://scikit-learn.org/stable/auto_examples/text/document_clustering.html</a></p>
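<p>A minimal, dependency-free sketch of the vectorization step (made-up mini corpus): turn each string into a word-count vector over a shared vocabulary, and those numeric vectors are what a clusterer like KMeans can actually consume:</p>

```python
from collections import Counter

docs = ["drug gene delivery", "rna nanomedicine", "drug delivery science"]

# Shared vocabulary across all documents, in a fixed (sorted) order.
vocab = sorted({word for doc in docs for word in doc.split()})

# One count vector per document: how often each vocabulary word appears.
vectors = [[Counter(doc.split())[word] for word in vocab] for doc in docs]

assert vocab == ["delivery", "drug", "gene", "nanomedicine", "rna", "science"]
assert vectors[0] == [1, 1, 1, 0, 0, 0]   # "drug gene delivery"
assert vectors[1] == [0, 0, 0, 1, 1, 0]   # "rna nanomedicine"
```

<p>In practice scikit-learn's <code>TfidfVectorizer</code> does this (plus TF-IDF weighting) and returns a sparse matrix you can pass straight to <code>KMeans.fit</code>.</p>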
| 0 | 2016-08-09T13:44:33Z | [
"python",
"scikit-learn",
"k-means"
] |
python kmeans on string | 38,851,979 | <p>I am new to the k-means clustering method. I am trying to cluster a one-dimensional string array in Python.</p>
<p>Below is my data:</p>
<pre><code>from sklearn.cluster import KMeans
expertise=['Bioactive Surfaces and Scaffolds for Regenerative Medicine',
'Drug/gene delivery science',
'RNA nanomedicine', 'Immuno/bio/nano-engineering', 'Biomaterials', 'Nanomedicine',
'Biobased Chemicals and Polymers',
'Membranes Science & Technology',
'Modeling of Infectious and Lifestyle-related Diseases']
km = KMeans(n_clusters=2)
km.fit(expertise)
</code></pre>
<p>and I get ValueError: could not convert string to float:</p>
<p>So I wonder how to apply k-means on string data, or is there any way I can change the data to two dimensions?</p>
| -4 | 2016-08-09T13:20:19Z | 38,852,539 | <p>There is almost no sense in what you are trying to do. What do you think two clustered groups should look like?</p>
<p>If you can't plot your data, you won't be able to cluster it. Find a way to represent the strings numerically (for example by length, or by occurrence of letters, depending on what you want to get) and then cluster this numerical data.</p>
| 0 | 2016-08-09T13:45:01Z | [
"python",
"scikit-learn",
"k-means"
] |
python - Improper yield usage | 38,852,074 | <p>I'm pretty sure I'm using yield improperly:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
from gensim import corpora, models, similarities
from collections import defaultdict
from pprint import pprint # pretty-printer
from six import iteritems
import openpyxl
import string
from operator import itemgetter
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
#Creating a stoplist from file
with open('stop-word-list.txt') as f:
stoplist = [x.strip('\n') for x in f.readlines()]
corpusFileName = 'content_sample_en.xlsx'
corpusSheetName = 'content_sample_en'
class MyCorpus(object):
def __iter__(self):
wb = openpyxl.load_workbook(corpusFileName)
sheet = wb.get_sheet_by_name(corpusSheetName)
for i in range(1, (sheet.max_row+1)/2):
title = str(sheet.cell(row = i, column = 4).value.encode('utf-8'))
summary = str(sheet.cell(row = i, column = 5).value.encode('utf-8'))
content = str(sheet.cell(row = i, column = 10).value.encode('utf-8'))
yield reBuildDoc("{} {} {}".format(title, summary, content))
def removeUnwantedPunctuations(doc):
"change all (/, \, <, >) into ' ' "
newDoc = ""
for l in doc:
if l == "<" or l == ">" or l == "/" or l == "\\":
newDoc += " "
else:
newDoc += l
return newDoc
def reBuildDoc(doc):
"""
:param doc:
:return: document after being dissected to our needs.
"""
doc = removeUnwantedPunctuations(doc).lower().translate(None, string.punctuation)
newDoc = [word for word in doc.split() if word not in stoplist]
return newDoc
corpus = MyCorpus()
tfidf = models.TfidfModel(corpus, normalize=True)
</code></pre>
<p>In the following example you can see me trying to create a corpus from an xlsx file. I'm reading 3 cells from the xlsx file, which are title, summary and content, and appending them into a big string. My <code>reBuildDoc()</code> and <code>removeUnwantedPunctuations()</code> functions then adjust the text to my needs and in the end return a big list of words. (for ex: <code>[hello, piano, computer, etc... ]</code>) In the end I yield the result but I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Eran/PycharmProjects/tfidf/docproc.py", line 101, in <module>
tfidf = models.TfidfModel(corpus, normalize=True)
File "C:\Anaconda2\lib\site-packages\gensim-0.13.1-py2.7-win-amd64.egg\gensim\models\tfidfmodel.py", line 96, in __init__
self.initialize(corpus)
File "C:\Anaconda2\lib\site-packages\gensim-0.13.1-py2.7-win-amd64.egg\gensim\models\tfidfmodel.py", line 119, in initialize
for termid, _ in bow:
ValueError: too many values to unpack
</code></pre>
<p>I know the error is from the yield line because I had a different yield line that worked. It looked like this: </p>
<pre><code> yield [word for word in dictionary.doc2bow("{} {} {}".format(title, summary, content).lower().translate(None, string.punctuation).split()) if word not in stoplist]
</code></pre>
<p>It was abit messy and hard to put functionallity to it so I've changed it as you can see in the first example.</p>
| 0 | 2016-08-09T13:23:48Z | 38,852,397 | <p>the problem is not the <code>yield</code> per se, is what is yielded, the error said is from <code>for termid, _ in bow</code> this line said that you expect that <code>bow</code> contain a list of tuples or any other object containing exactly 2 element like <code>(1,2),[1,2],"12",...</code> but as it end giving to it the result of <code>MyCorpus</code> which is a string with obviously more than 2 element, hence the error, to fix this do either <code>for termid in bow</code> or in <code>MyCorpus</code> do <code>yield reBuildDoc("{} {} {}".format(title, summary, content)), None</code> so you yield a tuple of 2 object</p>
<p>to illustrate this check this example</p>
<pre><code>>>> def fun(obj):
for _ in range(2):
yield obj
>>> for a,b in fun("xyz"):
print(a,b)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
for a,b in fun("xyz"):
ValueError: too many values to unpack (expected 2)
>>> for a,b in fun("xy"):
print(a,b)
x y
x y
>>> for a,b in fun(("xy",None)):
print(a,b)
xy None
xy None
>>>
</code></pre>
| 1 | 2016-08-09T13:38:56Z | [
"python",
"parsing",
"yield",
"gensim"
] |
python - Yield improperly usage | 38,852,074 | <p>I'm pretty sure I'm using <code>yield</code> improperly:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
from gensim import corpora, models, similarities
from collections import defaultdict
from pprint import pprint # pretty-printer
from six import iteritems
import openpyxl
import string
from operator import itemgetter
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
#Creating a stoplist from file
with open('stop-word-list.txt') as f:
stoplist = [x.strip('\n') for x in f.readlines()]
corpusFileName = 'content_sample_en.xlsx'
corpusSheetName = 'content_sample_en'
class MyCorpus(object):
def __iter__(self):
wb = openpyxl.load_workbook(corpusFileName)
sheet = wb.get_sheet_by_name(corpusSheetName)
for i in range(1, (sheet.max_row+1)/2):
title = str(sheet.cell(row = i, column = 4).value.encode('utf-8'))
summary = str(sheet.cell(row = i, column = 5).value.encode('utf-8'))
content = str(sheet.cell(row = i, column = 10).value.encode('utf-8'))
yield reBuildDoc("{} {} {}".format(title, summary, content))
def removeUnwantedPunctuations(doc):
"change all (/, \, <, >) into ' ' "
newDoc = ""
for l in doc:
if l == "<" or l == ">" or l == "/" or l == "\\":
newDoc += " "
else:
newDoc += l
return newDoc
def reBuildDoc(doc):
"""
:param doc:
:return: document after being dissected to our needs.
"""
doc = removeUnwantedPunctuations(doc).lower().translate(None, string.punctuation)
newDoc = [word for word in doc.split() if word not in stoplist]
return newDoc
corpus = MyCorpus()
tfidf = models.TfidfModel(corpus, normalize=True)
</code></pre>
<p>In the following example you can see me trying to create a corpus from an xlsx file. I'm reading three fields (title, summary and content) from the xlsx file and appending them into one big string. My <code>reBuildDoc()</code> and <code>removeUnwantedPunctuations()</code> functions then adjust the text to my needs and in the end return a big list of words (for example: <code>[hello, piano, computer, etc...]</code>). In the end I yield the result, but I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Eran/PycharmProjects/tfidf/docproc.py", line 101, in <module>
tfidf = models.TfidfModel(corpus, normalize=True)
File "C:\Anaconda2\lib\site-packages\gensim-0.13.1-py2.7-win-amd64.egg\gensim\models\tfidfmodel.py", line 96, in __init__
self.initialize(corpus)
File "C:\Anaconda2\lib\site-packages\gensim-0.13.1-py2.7-win-amd64.egg\gensim\models\tfidfmodel.py", line 119, in initialize
for termid, _ in bow:
ValueError: too many values to unpack
</code></pre>
<p>I know the error is from the yield line because I had a different yield line that worked. It looked like this: </p>
<pre><code> yield [word for word in dictionary.doc2bow("{} {} {}".format(title, summary, content).lower().translate(None, string.punctuation).split()) if word not in stoplist]
</code></pre>
<p>It was a bit messy and hard to add functionality to, so I've changed it as you can see in the first example.</p>
| 0 | 2016-08-09T13:23:48Z | 38,853,302 | <p>It looks like your problem is that <code>TfidfModel</code> expects a <code>corpus</code> that is a <code>list</code> of <code>doc2bow</code> outputs (themselves <code>list</code>s of two-<code>tuple</code>s). Your original working code used <code>doc2bow</code> correctly to convert your plain strings to the corpus format; your new code passes in raw token lists, not the "vectors" <code>TfidfModel</code> expects.</p>
<p>Go back to using <code>doc2bow</code>, and <a href="https://radimrehurek.com/gensim/tut1.html#from-strings-to-vectors" rel="nofollow">read the tutorial on converting string to vectors</a>, which makes it clear that raw strings are nonsensical as input.</p>
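<p>gensim is not needed to see the shape <code>TfidfModel</code> expects. A corpus is an iterable of bag-of-words vectors, each a <code>list</code> of <code>(term_id, count)</code> two-<code>tuple</code>s, which is exactly what the <code>for termid, _ in bow</code> unpacking in the traceback relies on. A minimal sketch (the token ids here are invented for illustration, not taken from the question's data):</p>

```python
from collections import Counter

# Toy dictionary mapping tokens to integer ids, standing in for
# gensim's corpora.Dictionary; the ids are arbitrary.
token2id = {"hello": 0, "piano": 1, "computer": 2}

def doc2bow(tokens):
    # Count known tokens and emit sorted (term_id, count) pairs --
    # the same two-tuple shape gensim's doc2bow produces.
    counts = Counter(t for t in tokens if t in token2id)
    return sorted((token2id[t], n) for t, n in counts.items())

corpus = [doc2bow("hello piano hello".split()),
          doc2bow("computer piano".split())]

# This mirrors the unpacking TfidfModel.initialize performs; it works
# because every element of each vector has exactly two members.
for bow in corpus:
    for termid, count in bow:
        pass

print(corpus)  # [[(0, 2), (1, 1)], [(1, 1), (2, 1)]]
```

<p>Each inner pair has exactly two members, so the two-variable unpacking succeeds; a plain token list, as in the question, fails with the same <code>ValueError</code>.</p>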
| 1 | 2016-08-09T14:16:54Z | [
"python",
"parsing",
"yield",
"gensim"
] |
Python Script "machine/user-level scripts" difference | 38,852,194 | <p>The <a href="http://npppythonscript.sourceforge.net/docs/latest/usage.html" rel="nofollow" title="documentation">documentation</a> says </p>
<blockquote>
<p>The script called startup.py (in either the "machine" directory or
"user" directory - see Installation)</p>
</blockquote>
<p>but all I can see in the "Installation" section is </p>
<blockquote>
<p>(machine-level scripts)</p>
</blockquote>
<p>and </p>
<blockquote>
<p>(user level scripts go here).</p>
</blockquote>
<p>What are those, and where should I put my script files?</p>
| 1 | 2016-08-09T13:29:23Z | 38,852,289 | <p>As for me, the default Python Script installation did not work at all. </p>
<p>I suggest installing <a href="https://sourceforge.net/projects/npppythonscript/files/Python%20Script%201.0.8.0/" rel="nofollow">Python Script 1.0.8.0</a>. </p>
<p>Then, once you go to the <em>Plugins</em> -> <em>Python Script</em> -> <em>New Script</em>, you will be able to save and open scripts from the location that opens, or anywhere else where you have access.</p>
| 1 | 2016-08-09T13:33:50Z | [
"python",
"notepad++"
] |
different amount of rows after merging two dataframes with pandas | 38,852,207 | <p>I have a dataframe which I merge with another dataframe by the column <code>EQ_NR</code>.</p>
<p>Here is the Structure of the first dataframe: (Rows: 320816)</p>
<pre><code> FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR \
0 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
1 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
2 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
3 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
4 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
5 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
6 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
7 ZPAF 2015-12-10 2015-12-31 T-HOME ICP E 1001380594
8 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380594
MATERIAL KW_WERT NETTO_EURO TA
0 B60ETS 0.15 18.9 SDH
1 B60ETS 0.145 18.27 SDH
2 B60ETS 0.145 18.27 NaN
3 B60ETS 0.15 18.9 SDH
4 B60ETS 0.15 18.9 NaN
5 B60ETS 0.145 18.27 SDH
6 B60ETS 0.15 18.9 SDH
7 B60ETS 3.011 252.92 DSLAM/MSAN
8 B60ETS 3.412 429.91 DSLAM/MSAN
</code></pre>
<p>Here is the second one: (Rows: 135818)</p>
<pre><code> EQ_NR TA
0 1001380363 SONSTIGES
1 1001380363 NaN
2 1001380363 Sonstiges
3 1000943704 Sonstiges
4 1000943823 Sonstiges
5 1000943985 Sonstiges
6 1000954774 FMED
7 1000954790 FMED
8 1001380363 SDH
9 1000955097 NaN
</code></pre>
<p>After merging I have one dataframe with 'TA' added from the second dataframe to the first one by the value of 'EQ_NR'.</p>
<p>The problem is that I have 320816 rows BEFORE merging and 320871 AFTER merging the two dataframes. How can it happen that there are 55 more rows than in the original data?</p>
<p>I need the data to do some calculations, and the 55 extra rows distort the results of those calculations...</p>
| 2 | 2016-08-09T13:29:58Z | 38,852,237 | <p>There is a problem with duplicate values in the joining column <code>EQ_NR</code>.</p>
<p>In the sample there are duplicated values <code>1001380363</code> and <code>1001380594</code>.</p>
<p>Sample:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'EQ_NR':[1001380363,1001380363,1001380363, 1001380365],
'B':[4,5,6,7],
'C':[7,8,9,7]})
print (df1)
B C EQ_NR
0 4 7 1001380363
1 5 8 1001380363
2 6 9 1001380363
3 7 7 1001380365
df2 = pd.DataFrame({'EQ_NR':[1001380363,1001380363,1001380363,1001380363],
'B':[4,5,6,8],
'C':[7,8,9,3]})
print (df2)
B C EQ_NR
0 4 7 1001380363
1 5 8 1001380363
2 6 9 1001380363
3 8 3 1001380363
</code></pre>
<pre><code>print (pd.merge(df1, df2, on=['EQ_NR']))
B_x C_x EQ_NR B_y C_y
0 4 7 1001380363 4 7
1 4 7 1001380363 5 8
2 4 7 1001380363 6 9
3 4 7 1001380363 8 3
4 5 8 1001380363 4 7
5 5 8 1001380363 5 8
6 5 8 1001380363 6 9
7 5 8 1001380363 8 3
8 6 9 1001380363 4 7
9 6 9 1001380363 5 8
10 6 9 1001380363 6 9
11 6 9 1001380363 8 3
</code></pre>
<p>EDIT1:</p>
<p>If dataframe <code>df2</code> has no duplicate data in <code>EQ_NR</code>, use:</p>
<pre><code>print (df1)
FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR MATERIAL \
0 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
1 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
2 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
3 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
4 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
5 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
6 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
7 ZPAF 2015-12-10 2015-12-31 T-HOME ICP E 1001380594 B60ETS
8 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380594 B60ETS
KW_WERT NETTO_EURO TA
0 0.150 18.90 SDH
1 0.145 18.27 SDH
2 0.145 18.27 NaN
3 0.150 18.90 SDH
4 0.150 18.90 NaN
5 0.145 18.27 SDH
6 0.150 18.90 SDH
7 3.011 252.92 DSLAM/MSAN
8 3.412 429.91 DSLAM/MSAN
print (df2)
EQ_NR TA
0 1001380363 Sonstiges
1 1000943704 Sonstiges
2 1000943823 Sonstiges
3 1000943985 Sonstiges
4 1000954774 FMED
5 1000954790 FMED
6 1000955097 NaN
</code></pre>
<pre><code>print (pd.merge(df1, df2, on=['EQ_NR'], how='left', suffixes=('','_new')))
FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR MATERIAL \
0 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
1 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
2 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
3 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
4 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
5 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
6 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS
7 ZPAF 2015-12-10 2015-12-31 T-HOME ICP E 1001380594 B60ETS
8 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380594 B60ETS
KW_WERT NETTO_EURO TA TA_new
0 0.150 18.90 SDH Sonstiges
1 0.145 18.27 SDH Sonstiges
2 0.145 18.27 NaN Sonstiges
3 0.150 18.90 SDH Sonstiges
4 0.150 18.90 NaN Sonstiges
5 0.145 18.27 SDH Sonstiges
6 0.150 18.90 SDH Sonstiges
7 3.011 252.92 DSLAM/MSAN NaN
8 3.412 429.91 DSLAM/MSAN NaN
</code></pre>
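<p>If the extra rows do come from duplicates in <code>df2</code>, one common fix (an illustrative sketch with made-up values, not part of the original answer) is to drop the duplicates before merging; recent pandas versions can also assert the expected cardinality with <code>validate='m:1'</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'EQ_NR': [1001380363, 1001380363, 1001380365],
                    'NETTO_EURO': [18.90, 18.27, 252.92]})
df2 = pd.DataFrame({'EQ_NR': [1001380363, 1001380363, 1001380365],
                    'TA': ['SDH', 'Sonstiges', 'FMED']})

# Keep one row per EQ_NR; which TA survives is arbitrary -- here
# simply the first occurrence -- so check this is acceptable.
df2_unique = df2.drop_duplicates(subset='EQ_NR')

# validate='m:1' raises MergeError if the right side is still not unique.
merged = pd.merge(df1, df2_unique, on='EQ_NR', how='left',
                  suffixes=('', '_new'), validate='m:1')
print(len(merged))  # 3, same as len(df1)
```

<p>With a unique right-hand side, the row count of the left dataframe is preserved exactly.</p>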
| 1 | 2016-08-09T13:31:27Z | [
"python",
"pandas",
"dataframe",
"merge"
] |
different amount of rows after merging two dataframes with pandas | 38,852,207 | <p>I have a dataframe which I merge with another dataframe by the column <code>EQ_NR</code>.</p>
<p>Here is the Structure of the first dataframe: (Rows: 320816)</p>
<pre><code> FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR \
0 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
1 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
2 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
3 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
4 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
5 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
6 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363
7 ZPAF 2015-12-10 2015-12-31 T-HOME ICP E 1001380594
8 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380594
MATERIAL KW_WERT NETTO_EURO TA
0 B60ETS 0.15 18.9 SDH
1 B60ETS 0.145 18.27 SDH
2 B60ETS 0.145 18.27 NaN
3 B60ETS 0.15 18.9 SDH
4 B60ETS 0.15 18.9 NaN
5 B60ETS 0.145 18.27 SDH
6 B60ETS 0.15 18.9 SDH
7 B60ETS 3.011 252.92 DSLAM/MSAN
8 B60ETS 3.412 429.91 DSLAM/MSAN
</code></pre>
<p>Here is the second one: (Rows: 135818)</p>
<pre><code> EQ_NR TA
0 1001380363 SONSTIGES
1 1001380363 NaN
2 1001380363 Sonstiges
3 1000943704 Sonstiges
4 1000943823 Sonstiges
5 1000943985 Sonstiges
6 1000954774 FMED
7 1000954790 FMED
8 1001380363 SDH
9 1000955097 NaN
</code></pre>
<p>After merging I have one dataframe with 'TA' added from the second dataframe to the first one by the value of 'EQ_NR'.</p>
<p>The problem is that I have 320816 rows BEFORE merging and 320871 AFTER merging the two dataframes. How can it happen that there are 55 more rows than in the original data?</p>
<p>I need the data to do some calculations, and the 55 extra rows distort the results of those calculations...</p>
| 2 | 2016-08-09T13:29:58Z | 38,852,934 | <p>If you want to add only one column, you can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow">map()</a> method:</p>
<pre><code>In [290]: df1['TA2'] = df1.EQ_NR.map(df2.set_index('EQ_NR').TA)
In [291]: df1
Out[291]:
FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR MATERIAL KW_WERT NETTO_EURO TA TA2
0 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.150 18.90 SDH AAA
1 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.145 18.27 SDH AAA
2 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.145 18.27 NaN AAA
3 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.150 18.90 SDH AAA
4 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.150 18.90 NaN AAA
5 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.145 18.27 SDH AAA
6 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380363 B60ETS 0.150 18.90 SDH AAA
7 ZPAF 2015-12-10 2015-12-31 T-HOME ICP E 1001380594 B60ETS 3.011 252.92 DSLAM/MSAN NaN
8 ZPAF 2015-12-10 2015-12-31 T-HOME ICP B 1001380594 B60ETS 3.412 429.91 DSLAM/MSAN NaN
</code></pre>
<p>where df2:</p>
<pre><code>In [288]: df2
Out[288]:
EQ_NR TA
0 1001380363 AAA
</code></pre>
<p>NOTE: <code>df2.EQ_NR</code> must be unique, otherwise you'll get <code>InvalidIndexError: Reindexing only valid with uniquely valued Index objects</code> exception. <code>df1.EQ_NR</code> may have duplicates... </p>
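<p>If <code>df2.EQ_NR</code> does contain duplicates, a possible workaround (a sketch with invented values, assuming it is acceptable to keep the first <code>TA</code> per key) is to deduplicate before building the lookup, which gives <code>map()</code> the unique index it requires:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'EQ_NR': [1001380363, 1001380594]})
df2 = pd.DataFrame({'EQ_NR': [1001380363, 1001380363],
                    'TA': ['AAA', 'BBB']})

# drop_duplicates keeps the first TA per EQ_NR, producing the unique
# index that Series.map requires.
lookup = df2.drop_duplicates('EQ_NR').set_index('EQ_NR')['TA']
df1['TA2'] = df1['EQ_NR'].map(lookup)

# Keys missing from df2 simply map to NaN.
print(df1['TA2'].tolist())
```
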
| 0 | 2016-08-09T14:01:12Z | [
"python",
"pandas",
"dataframe",
"merge"
] |
HeatMap visualization | 38,852,220 | <p>I have a dataframe df1</p>
<pre><code>df1.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 38840 entries, 0 to 38839
Data columns (total 7 columns):
TIMESTAMP 38840 non-null datetime64[ns]
ACT_TIME_AERATEUR_1_F1 38696 non-null float64
ACT_TIME_AERATEUR_1_F3 38697 non-null float64
ACT_TIME_AERATEUR_1_F5 38695 non-null float64
ACT_TIME_AERATEUR_1_F6 38695 non-null float64
ACT_TIME_AERATEUR_1_F7 38693 non-null float64
ACT_TIME_AERATEUR_1_F8 38696 non-null float64
dtypes: datetime64[ns](1), float64(6)
memory usage: 2.1 MB
</code></pre>
<p>which looks like this : </p>
<pre><code> TIMESTAMP ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5 ACT_TIME_AERATEUR_1_F6 ACT_TIME_AERATEUR_1_F7
ACT_TIME_AERATEUR_1_F8
2015-08-01 05:10:00 100 100 100 100 100 100
2015-08-01 05:20:00 100 100 100 100 100 100
2015-08-01 05:30:00 100 100 100 100 100 100
2015-08-01 05:40:00 100 100 100 100 100 100
</code></pre>
<p>I am trying to create a heatmap with seaborn to visualize the data between two dates (for example, here between '2015-08-01 23:10:00' and '2015-08-02 02:00:00').
I do it like this:</p>
<pre><code>df1['TIMESTAMP']= pd.to_datetime(df_no_missing['TIMESTAMP'], '%d-%m-%y %H:%M:%S')
df1['date'] = df_no_missing['TIMESTAMP'].dt.date
df1['time'] = df_no_missing['TIMESTAMP'].dt.time
date_debut = pd.to_datetime('2015-08-01 23:10:00')
date_fin = pd.to_datetime('2015-08-02 02:00:00')
df1 = df1[(df1['TIMESTAMP'] >= date_debut) & (df1['TIMESTAMP'] < date_fin)]
sns.heatmap(df1.iloc[:,1:6:],annot=True, linewidths=.5)
</code></pre>
<p>I got a heatmap like in the attached </p>
<p><img src="http://i.stack.imgur.com/KnWie.png" alt="enter image description here"></p>
<p>My question now is: how can I replace the numbers on the left of the heatmap (145...161) with their corresponding timestamp values (2015-08-01 05:10:00, 2015-08-01 05:20:00, 2015-08-01 05:30:00, ...)?</p>
<p>Thank you</p>
<p>Bests</p>
<p>I tried to make these modifications:</p>
<pre><code>df1.set_index("TIMESTAMP", inplace=1)
sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
ax = plt.gca()
ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)
</code></pre>
<p><strong>EDIT</strong> </p>
<p>But I get the following errors and warnings:</p>
<blockquote>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  from ipykernel import kernelapp as app
C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  app.launch_new_instance()
C:\Users\Demonstrator\Anaconda3\lib\site-packages\ipykernel\__main__.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-129-cec498d88cac> in <module>()
      9
     10 #sns.heatmap(df1.iloc[:,1:6:],annot=True, linewidths=.5)
---> 11 sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
     12 ax = plt.gca()
     13 ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in heatmap(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, linewidths, linecolor, cbar, cbar_kws, cbar_ax, square, ax, xticklabels, yticklabels, mask, **kwargs)
    483     plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt,
    484                           annot_kws, cbar, cbar_kws, xticklabels,
--> 485                           yticklabels, mask)
    486
    487     # Add the pcolormesh kwargs here

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask)
    165         # Determine good default values for the colormapping
    166         self._determine_cmap_params(plot_data, vmin, vmax,
--> 167                                     cmap, center, robust)
    168
    169         # Sort out the annotations

C:\Users\Demonstrator\Anaconda3\lib\site-packages\seaborn\matrix.py in _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust)
    204         calc_data = plot_data.data[~np.isnan(plot_data.data)]
    205         if vmin is None:
--> 206             vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
    207         if vmax is None:
    208             vmax = np.percentile(calc_data, 98) if robust else calc_data.max()

C:\Users\Demonstrator\Anaconda3\lib\site-packages\numpy\core\_methods.py in _amin(a, axis, out, keepdims)
     27
     28 def _amin(a, axis=None, out=None, keepdims=False):
---> 29     return umr_minimum(a, axis, None, out, keepdims)
     30
     31 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):

ValueError: zero-size array to reduction operation minimum which has no identity
</code></pre>
</blockquote>
<p>@jeanrjc, look at the last image; there is a problem: the image is too small and there are two vertical lines (scales) on the right. I hope that I'm clear now.<a href="http://i.stack.imgur.com/yNrA8.png" rel="nofollow"><img src="http://i.stack.imgur.com/yNrA8.png" alt="enter image description here"></a></p>
| 0 | 2016-08-09T13:30:29Z | 38,853,129 | <p>It's because <code>TIMESTAMP</code> is not your index, from the <code>sns.heatmap</code> docstring:</p>
<blockquote>
<p>yticklabels : list-like, int, or bool, optional
If True, plot the row names of the dataframe. If False, don't plot
the row names. If list-like, plot these alternate labels as the
yticklabels. If an integer, use the index names but plot only every
n label.</p>
</blockquote>
<p>The row names being the index.</p>
<p>So you can just set your index accordingly:</p>
<pre><code>df1.set_index("TIMESTAMP", inplace=1)
</code></pre>
<p>and with your <code>sns</code> command it will work almost fine. The problem is that you'll have an ugly representation of the dates.</p>
<p>Alternatively, you can do, <strong>instead of changing the index</strong>:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
...
...
ax = sns.heatmap(df1.iloc[:, 1:6:], annot=True, linewidths=.5)
ax.set_yticklabels([i.strftime("%Y-%m-%d %H:%M:%S") for i in df1.TIMESTAMP], rotation=0)
</code></pre>
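<p>The label list itself can be built and checked without plotting anything; a small sketch (with invented column values) of the <code>strftime</code> step that feeds <code>set_yticklabels</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'TIMESTAMP': pd.to_datetime(['2015-08-01 05:10:00',
                                 '2015-08-01 05:20:00']),
    'ACT_TIME_AERATEUR_1_F1': [100.0, 100.0],
})

# One formatted string per row, in row order; these replace the
# default integer positions on the heatmap's y axis.
labels = [t.strftime("%Y-%m-%d %H:%M:%S") for t in df1['TIMESTAMP']]
print(labels)
```
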
<p>HTH</p>
| 0 | 2016-08-09T14:09:33Z | [
"python",
"heatmap",
"seaborn"
] |
Define pidfile in DaemonContext disable logger in python | 38,852,265 | <p>I'm trying to create a daemon service in Python with a log-to-file option using logging.
If I add the pidfile parameter to the daemon.DaemonContext constructor, the logger does not log any messages to the logger.log file. But if I remove that parameter, everything works and
runs without any errors.
Does anyone know why the pidfile parameter in daemon.DaemonContext disables log messages, and how I can solve that?</p>
<p>Any help will be appreciated. Thanks</p>
<pre><code>import signal
import daemon
import lockfile
import logging
import logging.handlers
def run():
logger = logging.getLogger("DaemonLog")
logger.setLevel(logging.INFO)
handler = logging.FileHandler('logger.log')
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)
daemon_context = daemon.DaemonContext(
working_directory='/opt/myDaemon',
umask=0o002,
pidfile=lockfile.FileLock('/var/run/myDaemon.pid'),
files_preserve=[handler.stream]
)
daemon_context.signal_map = {
signal.SIGTERM: terminate_collector,
signal.SIGHUP: terminate_collector,
signal.SIGABRT: terminate_collector
}
with daemon_context as context:
while True:
logger.info("log")
func()
time.sleep(PARAM_SLEEP)
if __name__ == "__main__":
run()
</code></pre>
| 0 | 2016-08-09T13:32:46Z | 38,867,793 | <p>I managed to solve the problem. The problem was that no pidfile was created by the command:</p>
<pre><code>pidfile=lockfile.FileLock('/var/run/myDaemon.pid')
</code></pre>
<p>The solution is to use daemon.pidfile module instead of the lockfile module:</p>
<pre><code>pidfile=daemon.pidfile.PIDLockFile('/var/run/myDaemon.pid')
</code></pre>
<p>Full code solution:</p>
<pre><code>import signal
import daemon
import daemon.pidfile
import lockfile
import logging
import logging.handlers
def run():
logger = logging.getLogger("DaemonLog")
logger.setLevel(logging.INFO)
handler = logging.FileHandler('logger.log')
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)
daemon_context = daemon.DaemonContext(
working_directory='/opt/myDaemon',
umask=0o002,
pidfile=daemon.pidfile.PIDLockFile('/var/run/myDaemon.pid'),
files_preserve=[handler.stream]
)
daemon_context.signal_map = {
signal.SIGTERM: terminate_collector,
signal.SIGHUP: terminate_collector,
signal.SIGABRT: terminate_collector
}
with daemon_context as context:
while True:
logger.info("log")
func()
time.sleep(PARAM_SLEEP)
if __name__ == "__main__":
run()
</code></pre>
| 0 | 2016-08-10T08:17:25Z | [
"python",
"logging",
"daemon",
"lockfile"
] |
Read and push a table again to sql:Pandas | 38,852,273 | <p>I am reading a table from my sql server with Pandas such as </p>
<pre><code>df= pd.read_sql('table1', engine)
</code></pre>
<p>where engine is my pyodbc connection,
and then I am pushing it back to SQL Server:</p>
<pre><code>df.to_sql('table2', engine, if_exists='replace')
</code></pre>
<p>which gives me an error </p>
<pre><code>ValueError: duplicate name in index/columns: cannot insert level_0, already exists
</code></pre>
<p>and when I tried to drop the column, it gave me another error; dropping the column by hand is not an efficient approach anyway. I tried this as well, which also didn't work:</p>
<pre><code> df= df.reset_index(drop=True)
</code></pre>
<p>Any help would be appreciated.</p>
| 0 | 2016-08-09T13:33:25Z | 38,853,079 | <p>Set <code>index=False</code> when writing with <code>to_sql</code>, so the DataFrame index is not written out as an extra column; the leftover index is what produces the clashing <code>level_0</code> name.</p>
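<p>A minimal round trip (using an in-memory SQLite connection as a stand-in for the pyodbc engine, with made-up data) shows the effect: with <code>index=False</code>, only the real columns are written, so re-reading the table does not produce <code>index</code>/<code>level_0</code> columns:</p>

```python
import sqlite3
import pandas as pd

con = sqlite3.connect(':memory:')
df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})

# index=False: only the real columns are written, so nothing
# named 'index' or 'level_0' ends up in the table.
df.to_sql('table2', con, if_exists='replace', index=False)

back = pd.read_sql('SELECT * FROM table2', con)
print(list(back.columns))  # ['a', 'b']
```
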
| 2 | 2016-08-09T14:06:57Z | [
"python",
"pandas",
"dataframe",
"sqlalchemy"
] |
Binary representation over serial Python3 | 38,852,468 | <p>I'm writing various bytes over serial in the form of ascii representations.</p>
<p>Here's an example: <code>bytes_to_write = '\aa' # Will write 01010101</code></p>
<p>Now one part of what I send involves me calculating the 4 bytes that need to be transferred, I calculate the bytes using the below function:</p>
<pre><code>def convert_addr_with_flag(addr, flag):
if(addr[0:1]!="0x"):
# String does not have hex start, for representation
addr = "0x" + addr
# Convert String to int
return binascii.unhexlify(str(hex(int(addr, 16) + (flag << 31))[2:].zfill(8))) # Return the int value, bit shiffted with flag
</code></pre>
<p>This function, however, returns a byte string instead of an ascii string. Here is an example:
<code>convert_addr_with_flag("00ACFF21", 1) # Output: b'\x80\xac\xff!'</code></p>
<p>My question is how I can get this output into a form that I can add to the other bytes of the packet, e.g.:</p>
<pre><code>part_1 = '\xaa\xaa' # 2 bytes
part_2 = '\x55\x55' # 2 bytes
part_3 = convert_addr_with_flag("00ACFF21", 1) # 4 bytes
full_packet = part_1 + part_2 + part_3 # Will not work, as part_3 is a binary string (b'\x80\xac\xff!)
</code></pre>
<p>Here's what I have already tried:
using <code>decode("UTF-8")</code>, UTF-16 and ASCII, which cannot decode the byte string;
and slicing, where <code>[2]</code> gives me a single byte value, but not the 'second byte' I wanted.</p>
<p>Any tips would be very much appreciated!</p>
<p><strong>Python 3.4</strong></p>
| 0 | 2016-08-09T13:41:44Z | 38,854,281 | <p><code>decode</code> isn't working because there is no ascii representation of <code>part_3</code>; an ascii character must be an int between 0 and 127 inclusive. It looks like <code>part_3</code> in this example is <code>b'\x80\xac\xff!'</code>; the first three bytes are 128, 172 and 255, none of which are valid ascii.</p>
<p>If you need to send bytes that aren't between 0 and 127, you will probably need <code>part_1</code> and <code>part_2</code> to be byte strings.</p>
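<p>Concretely, a sketch of that approach (the helper below is a simplified rewrite of the question's function, and the payload values are made up): defining every part as a <code>bytes</code> literal lets ordinary concatenation build the packet, with no decoding step at all:</p>

```python
import binascii

def convert_addr_with_flag(addr, flag):
    # Simplified version of the question's helper: a 32-bit address
    # with the flag placed in the top bit, returned as 4 raw bytes.
    value = int(addr, 16) | (flag << 31)
    return binascii.unhexlify('%08x' % value)

part_1 = b'\xaa\xaa'                    # bytes literals, not str
part_2 = b'\x55\x55'
part_3 = convert_addr_with_flag('00ACFF21', 1)

full_packet = part_1 + part_2 + part_3  # plain bytes concatenation
print(len(full_packet))                 # 8 bytes: aa aa 55 55 80 ac ff 21
```

<p>The resulting <code>bytes</code> object can be handed straight to <code>pyserial</code>'s <code>write()</code>.</p>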
| 0 | 2016-08-09T15:01:15Z | [
"python",
"python-3.x",
"binary",
"pyserial"
] |
How to efficiently insert bulk data into Cassandra using Python? | 38,852,492 | <p>I have a Python application, built with Flask, that allows importing of many data records (anywhere from 10k-250k+ records at one time). Right now it inserts into a Cassandra database, by inserting one record at a time like this:</p>
<pre><code>for transaction in transactions:
self.transaction_table.insert_record(transaction)
</code></pre>
<p>This process is incredibly slow. Is there a best-practice approach I could use to more efficiently insert this bulk data?</p>
| 0 | 2016-08-09T13:43:13Z | 38,853,615 | <p>You can use batch statements for this; an example and documentation are available from the <a href="https://datastax.github.io/python-driver/api/cassandra/query.html#cassandra.query.BatchStatement" rel="nofollow">datastax documentation</a>. You can also use some child workers and/or async queries on top of this.</p>
<p>In terms of best practices, it is more efficient if each batch <strong>only contains one partition key</strong>. This is because you do not want a node to be used as a coordinator for many different partition keys; it is faster to contact each individual node directly.</p>
<p>If each record has a different partition key, a single prepared statement with some child workers may work out to be better.</p>
<p>You may also want to consider using a <a href="https://datastax.github.io/python-driver/api/cassandra/policies.html#cassandra.policies.LoadBalancingPolicy" rel="nofollow">TokenAware load balancing policy</a> allowing the relevant node to be contacted directly, instead of being coordinated through another node.</p>
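<p>The grouping logic is independent of the driver; a hedged sketch in plain Python (no Cassandra required, and the field names are invented) of splitting records into per-partition-key chunks of bounded size, each of which could then be sent as one <code>BatchStatement</code>:</p>

```python
from collections import defaultdict

def partition_batches(records, key, batch_size=50):
    """Group records by partition key, then chunk each group so no
    batch exceeds batch_size (driver batches should stay small)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec)
    batches = []
    for recs in groups.values():
        for i in range(0, len(recs), batch_size):
            batches.append(recs[i:i + batch_size])
    return batches

# Invented transaction records: 120 for one account, 1 for another.
transactions = ([{'account': 'a', 'amount': n} for n in range(120)]
                + [{'account': 'b', 'amount': 0}])
batches = partition_batches(transactions, 'account', batch_size=50)
print([len(b) for b in batches])  # [50, 50, 20, 1]
```

<p>Every batch then touches a single partition, so with a token-aware policy each one can go straight to the owning replica.</p>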
| 1 | 2016-08-09T14:31:06Z | [
"python",
"cassandra"
] |