| Title (string, 15 to 150 chars) | A_Id (int64, 2.98k to 72.4M) | Users Score (int64, -17 to 470) | Q_Score (int64, 0 to 5.69k) | ViewCount (int64, 18 to 4.06M) | Database and SQL (int64, 0 to 1) | Tags (string, 6 to 105 chars) | Answer (string, 11 to 6.38k chars) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 1 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, 23 chars) | AnswerCount (int64, 1 to 64) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k to 44.1M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 0 to 1) | Available Count (int64, 1 to 17) | Question (string, 41 to 29k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to place myapp in Applications location developed through py2app on mac osx
| 19,920,707
| 1
| 1
| 382
| 0
|
python,macos,py2app
|
You can use PackageMaker. It creates a myapp.pkg file that, when double-clicked, installs the app in the Applications folder.
| 0
| 1
| 0
| 0
|
2013-10-24T08:23:00.000
| 2
| 1.2
| true
| 19,560,701
| 1
| 0
| 0
| 2
|
I had developed an app in Python on Mac OS X for opening different file types on double-clicking any file. Then I converted it into an app by using py2app. py2app creates a myapp.app within the dist folder. I moved myapp.app from the dist folder to the Applications folder. Now, I am able to open any file through myapp just by double-clicking on it. Now, I want to make it work in such a way that I don't need to drag and drop myapp to the Applications folder; it should automatically install on the system.
|
How to place myapp in Applications location developed through py2app on mac osx
| 19,619,226
| 0
| 1
| 382
| 0
|
python,macos,py2app
|
py2app has a "--dist-dir" option for selecting the output directory (the default is "dist"). That said, I have never tested using that option for installation in a directory containing other programs and don't know if "python setup.py py2app --dist-dir=/Applications" is safe to use.
The alternative is to use a script that copies the application to /Applications (a simple Python script, a shell script, or even a distutils command in your setup.py file that invokes py2app and then copies the resulting application to the right location).
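A minimal sketch of such a copy step, using temporary directories to stand in for the real dist folder and /Applications (a real post-build script would run "python setup.py py2app" first and use the actual paths):

```python
import os
import shutil
import tempfile

# Stand-ins for "dist" and "/Applications".
dist_dir = tempfile.mkdtemp()
applications_dir = tempfile.mkdtemp()

# Fake bundle in place of the one py2app would produce.
bundle = os.path.join(dist_dir, "myapp.app")
os.makedirs(os.path.join(bundle, "Contents"))

# Replace any previously installed copy, then copy the new bundle over.
target = os.path.join(applications_dir, "myapp.app")
if os.path.exists(target):
    shutil.rmtree(target)
shutil.copytree(bundle, target)

print(os.path.isdir(os.path.join(target, "Contents")))  # True
```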
| 0
| 1
| 0
| 0
|
2013-10-24T08:23:00.000
| 2
| 0
| false
| 19,560,701
| 1
| 0
| 0
| 2
|
I had developed an app in Python on Mac OS X for opening different file types on double-clicking any file. Then I converted it into an app by using py2app. py2app creates a myapp.app within the dist folder. I moved myapp.app from the dist folder to the Applications folder. Now, I am able to open any file through myapp just by double-clicking on it. Now, I want to make it work in such a way that I don't need to drag and drop myapp to the Applications folder; it should automatically install on the system.
|
What is the difference between using HttpUwsgiModule for NGINX and using NGINX as a reverseproxy to uWSGI?
| 19,688,569
| 2
| 1
| 125
| 0
|
python,nginx,webserver,reverse-proxy,uwsgi
|
Simply put, when you use HttpUwsgiModule, NGINX speaks the uwsgi protocol and can leave out redundant parts of the HTTP protocol, leading to less overhead and thus better performance.
| 0
| 1
| 0
| 1
|
2013-10-24T15:42:00.000
| 1
| 1.2
| true
| 19,570,490
| 0
| 0
| 1
| 1
|
Using HttpUwsgiModule with NGINX to control uWSGI has become quite popular since its release.
I was wondering though, what is the advantage of it, compared to using NGINX as a reverse-proxy to uWSGI application?
What are the gains and losses in two differing use cases?
|
Running python code in background
| 19,578,445
| 2
| 3
| 183
| 0
|
python,linux,background,ssh
|
You can use screen, as Robin Krahl recommended, or you can just run your command with nohup, which makes the process ignore the SIGHUP (hangup) signal sent when your SSH session disconnects.
nohup python -u test.py > output.txt &
| 0
| 1
| 0
| 1
|
2013-10-24T23:15:00.000
| 3
| 0.132549
| false
| 19,578,392
| 0
| 0
| 0
| 1
|
I need to run a python code that takes several hours and my computer disconnects from the ssh after a certain amount of inactive time.
I have tried python test.py > output.txt & but my output file is empty. However, the python code "test" is still running after I log off and log back in to the ssh. I also tried python -u test.py > output.txt & which does write to the output.txt but it does not continue after the ssh connection is lost.
I am very new to Linux so I do not know very many commands. I need the simplest/easiest to understand method.
Thanks!
|
Can I access python variables within a `%%bash` or `%%script` ipython notebook cell?
| 19,583,164
| 0
| 81
| 40,694
| 0
|
ipython-notebook,ipython-magic
|
No, the %%script magics are autogenerated and don't do any inter-process data communication. (That is not the case for %%R, but %%R is a separate magic in its own class, with extra care from the R people.)
But writing your own magic that does it is not too hard to do.
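One simple workaround is to pass values through the environment, since %%bash runs a child shell; a sketch outside the notebook, using subprocess to stand in for the %%bash cell (in a notebook you would set os.environ before running the cell):

```python
import os
import subprocess

greeting = "hello from python"

# Anything exported into the child shell's environment is visible there.
env = dict(os.environ, GREETING=greeting)
out = subprocess.run(
    ["sh", "-c", 'echo "$GREETING"'],
    capture_output=True, text=True, env=env,
).stdout.strip()

print(out)  # hello from python
```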
| 0
| 1
| 0
| 0
|
2013-10-25T01:21:00.000
| 7
| 0
| false
| 19,579,546
| 1
| 0
| 0
| 1
|
Is there a way to access variables in the current python kernel from within a %%bash or other %%script cell?
Perhaps as command line arguments or environment variable(s)?
|
performance for xarg vs. python multiprocessing+subprocess
| 19,580,680
| 2
| 1
| 575
| 0
|
python,unix,multiprocessing,xargs
|
The xargs program will collect multiple arguments from standard input and glue them together to make one long command line. If there are too many arguments for one command line, it will build and execute multiple command lines, as many as needed.
This means less overhead for starting up processes and shutting them down. How much good this will do for you depends on how long your processes run. If you are starting up some sort of CPU-intensive program that will run for half an hour, the startup time for the process will be inconsequential. If you are starting up a program that runs quickly, but you are only running a small number of instances, again the savings will be inconsequential. However, if your program is truly trivial and requires minimal runtime, maybe you will notice a difference.
From your problem description, it appears to be a good candidate for this. 10K things with relatively short processing for each. xargs might speed things up for you.
However, in my experience, doing any nontrivial work in shell scripts brings the pain. If you have any directory names or file names that can have a space in them, the slightest mistake in quoting your variables makes your script crash, so you need to obsessively test your script to make sure it will work for all possible inputs. For this reason, I do my nontrivial system scripts in Python.
Therefore, if you already have your program working in Python, IMHO you would be crazy to try to rewrite it as a shell script.
Now, you can still use xargs if you want. Just use subprocess to run xargs and pass all the arguments via standard input. This gains all of the benefit and none of the pain. You can use Python to stick a NUL byte chr(0) at the end of each argument, and then use xargs --null, and it will be robust with filenames that have spaces in them.
Alternatively you could use ' '.join() to build your own very long command lines, but I don't see any reason to do that when you can just run xargs as described above.
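A sketch of the subprocess-plus-xargs idea from the answer, with echo standing in for the real external command (assumes a POSIX system with xargs installed):

```python
import subprocess

# Join the arguments with NUL bytes so `xargs -0` (--null) can split them
# safely, even when names contain spaces.
names = ["plain.txt", "has space.txt", "also here.txt"]
payload = "\0".join(names)

out = subprocess.run(
    ["xargs", "-0", "echo"],
    input=payload, capture_output=True, text=True,
).stdout.strip()

print(out)  # plain.txt has space.txt also here.txt
```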
| 0
| 1
| 0
| 0
|
2013-10-25T02:09:00.000
| 1
| 1.2
| true
| 19,579,978
| 1
| 0
| 0
| 1
|
I have a question on the performance scalability of xargs. Currently I have a batch processing program written in Python with multiprocessing and subprocess. Each process spawns an independent subprocess.Popen() to execute an external command. Recently I realized that the whole process can be redone with xargs. However, I wonder whether it is a good idea to use xargs to process 10k+ files, since I have never done something at this scale with only command-line tools before. Given my tests with small data sets, it is actually not a bad idea if all I am doing is batch-running a bunch of commands, since it avoids many cycles of overhead imposed by Python's modules, but I would like to learn more from anyone who may have more experience with xargs and Python. More specifically, is there any buffer limit that I need to configure for xargs to consume a large number of inputs? Thanks.
|
df command across multiple os
| 19,831,707
| 0
| 0
| 752
| 0
|
python,linux,solaris,expect
|
The only solution to this problem seems to be using uname to get the OS and
setting the df path accordingly... same as what I had stated in the problem!
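The accepted approach can be sketched in Python by mapping the OS name to the required df path (the paths are the ones given in the question; platform.system() plays the role of uname):

```python
import platform

# platform.system() reports "Linux" on Linux and "SunOS" on Solaris,
# which is exactly the information `uname` gives you.
DF_PATHS = {
    "Linux": "/bin/df",
    "SunOS": "/usr/gnu/bin/df",
}

df_cmd = DF_PATHS.get(platform.system(), "df")  # fall back to PATH lookup
print(df_cmd)
```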
| 0
| 1
| 0
| 0
|
2013-10-25T07:26:00.000
| 5
| 1.2
| true
| 19,583,656
| 0
| 0
| 0
| 2
|
The df command displays the amount of disk space occupied by mounted or unmounted file systems, the amount of used and available space, and how much of the file system's total capacity has been used.
Linux has the df command in /bin, whereas Solaris has it in /usr/gnu/bin...
Suppose /usr/bin is set in the PATH; then, programmatically, I need to ensure that one of the required df binaries (as mentioned above) is invoked instead of a user-defined df.
One solution to this problem is using uname to get the OS and set the df path accordingly... Is there any better way to do this where I am not dependent on the OS?
Note: the default df and GNU df give different outputs, hence I need to invoke the required df command on the two different OSes programmatically (the paths are mentioned above).
I did not find any other solution to the problem.
I used the alternative solution that I had provided in the question itself!
|
df command across multiple os
| 19,639,734
| 0
| 0
| 752
| 0
|
python,linux,solaris,expect
|
Are the target systems somehow under your control, and does this involve a limited set of servers?
If so, how about adding a soft link in both the Solaris and Linux servers, in the same location and with the same name?
Something like:
Solaris: ln -s /usr/gnu/bin/df /usr/bin/my_df
Linux: ln -s /bin/df /usr/bin/my_df
Then let your script use /usr/bin/my_df for every box.
Not a fancy approach, and a rather simple one... but maybe it would work for you?
Just my 2c.
| 0
| 1
| 0
| 0
|
2013-10-25T07:26:00.000
| 5
| 0
| false
| 19,583,656
| 0
| 0
| 0
| 2
|
The df command displays the amount of disk space occupied by mounted or unmounted file systems, the amount of used and available space, and how much of the file system's total capacity has been used.
Linux has the df command in /bin, whereas Solaris has it in /usr/gnu/bin...
Suppose /usr/bin is set in the PATH; then, programmatically, I need to ensure that one of the required df binaries (as mentioned above) is invoked instead of a user-defined df.
One solution to this problem is using uname to get the OS and set the df path accordingly... Is there any better way to do this where I am not dependent on the OS?
Note: the default df and GNU df give different outputs, hence I need to invoke the required df command on the two different OSes programmatically (the paths are mentioned above).
I did not find any other solution to the problem.
I used the alternative solution that I had provided in the question itself!
|
How do I pause/resume a Python script?
| 19,601,896
| 11
| 3
| 7,855
| 0
|
macos,python-2.7,resume
|
This was originally a comment, but it seems to be what the OP wants, so I'm reposting it as an answer.
I would use Ctrl+Z to suspend your live, running process. This will leave you with a stopped job, which you can later resume with a call to fg: fg <job-number>.
This shouldn't have any implications with changed network settings (like IP, etc.), at least as far as Python is concerned. I can't speak to whether the API will freak out, though.
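The same suspend/resume can be driven with signals, which is what Ctrl+Z and fg do under the hood; a POSIX-only sketch using a throwaway child process in place of the long-running script:

```python
import signal
import subprocess
import sys

# Stand-in for the long-running script.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

proc.send_signal(signal.SIGSTOP)  # like Ctrl+Z: the process is frozen
proc.send_signal(signal.SIGCONT)  # like fg/bg: it picks up where it left off

# Clean up the demo child.
proc.terminate()
proc.wait()
print(proc.returncode)
```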
| 0
| 1
| 1
| 0
|
2013-10-26T00:50:00.000
| 1
| 1.2
| true
| 19,601,598
| 0
| 0
| 0
| 1
|
I run python scripts on my macbook air that process data from external API's and often take several hours or occasionally even days.
However, sometimes I need to suspend my laptop in the middle of running a script so I can go to work or go home or similar.
How can I simply pause/resume these scripts in the middle of their for loops?
Is there something very simple that I can add at the script level that just listens for a particular key stroke to stop/start? Or something I can do at the *nix process management level?
I'm well aware of Pickle but I'd rather not deal with the hassle of serializing/unserializing my data--since all I'm doing is hibernating the mac, I'm hoping if the script gets paused and then I hibernate, that OS X will handle saving the RAM to disk and then restoring back to RAM when I reopen the computer. At that point, I can hit a simple keystroke to continue the python script.
Since I'm switching between different wifi networks, not sure if the different IPs will cause problems when my script tries to access the internet to reach the 3rd party APIs.
|
Python module will not run correctly from command prompt
| 19,624,100
| 0
| 0
| 213
| 0
|
eclipse,python-3.x,command-line-arguments,pydev,windows-8.1
|
I answered my own question in the comment above. I just had to wait to post an answer, because I only created a Stack Overflow account yesterday.
| 0
| 1
| 0
| 1
|
2013-10-26T21:42:00.000
| 1
| 0
| false
| 19,612,221
| 0
| 0
| 0
| 1
|
This goes out to anyone who is well versed in the Eclipse IDE and or PyDev perspective plug-in who is willing to offer some technical support.
I am trying to write a python module that must take in arguments from the command prompt with sys.argv function calls. Rather than printing out the correct output when I enter E:\ ... \src>program.py arg1 arg2, all that happens is a new command line (E:\ ... \src>) is output and the Eclipse IDE window flashes orange without any code in my python module actually being executed. Also, if I close the Eclipse IDE and try to run program.py, it will just open Eclipse again and open my program in a new tab.
I'm confused as to why it is not working now when just last week it was working perfectly while testing another program that took in arguments from the command prompt by sys.argv function calls. My question for everyone is whether or not you are aware of any settings that may have been altered by updates, etc. that could cause this problem; or has anybody out there ever run into this problem and figured out how to resolve it? I have already checked my PATH variable, so that is not the problem :-(. Any help you can provide would be greatly appreciated ... thank you.
OS: Windows 8.1 Pro / Eclipse ver.: Kepler (4.3) / Python ver.: 3.3.2
|
Android Client and Google App Engine APIs
| 19,613,781
| 1
| 0
| 127
| 0
|
java,android,python,google-app-engine
|
Yes, you can use Python to do what you want.
Google designs their services (such as GAE and endpoints) to be language agnostic, e.g. using JSON to serialize objects.
There are a few advantages to using Java on both, such as being able to share code between client and service projects, but Google does not promote such dependencies at all - you will have no problem using Python instead.
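Since the payloads are plain JSON, the Python side only needs the stdlib to produce something an Android client can read just as easily as a Java backend could (the field names below are invented for illustration):

```python
import json

# A response body serialized in a language-agnostic format.
payload = json.dumps({"case_number": 42, "client": "Doe"})

# Any client, in any language, decodes it the same way.
decoded = json.loads(payload)
print(decoded["client"])  # Doe
```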
| 0
| 1
| 0
| 0
|
2013-10-26T22:37:00.000
| 1
| 1.2
| true
| 19,612,642
| 0
| 0
| 1
| 1
|
Am I confused as to what is possible between an Android Client and Google App Engine? I would like to be able to create a Python API that would handle requests between GAE services such as Datastore, and an Android Client.
I have found some examples that detail how to make a call from within an Android Client, but it doesn't seem to outline whether or not you can pass data to any specific API language. The question I have is whether or not it is possible to use a Python API deployed on GAE and making calls through Google End Points, or would I have to use Java Servlets to handle requests?
|
Issue with path to python/pythonpath
| 19,962,347
| 0
| 1
| 137
| 0
|
python,apache,unix,path,fastcgi
|
I had a call to the Python interpreter via the env program in my FastCGI dispatch script. When I explicitly put the path to 2.7 in the first line (the shebang) of the script, it worked as expected.
| 0
| 1
| 0
| 1
|
2013-10-28T05:30:00.000
| 3
| 1.2
| true
| 19,627,743
| 0
| 0
| 1
| 2
|
I have a VPS with system-wide installed python 2.5.
I installed Python 2.7 into one of the users' home dirs (using --prefix), added it to bashrc and bash_profile, and exported it in the environment; now when I type python in the console, Python 2.7 is running.
But when I check the Python version from my application (Django using FastCGI), I still see that it is using 2.5.
In the ps output I see python processes running for this account, and apache processes running under the hosting-specific account. How can I switch this particular account to 2.7 without changing the system-wide version?
Thanks!
|
Issue with path to python/pythonpath
| 27,295,744
| 0
| 1
| 137
| 0
|
python,apache,unix,path,fastcgi
|
I've set PYTHONPATH in my /home/me/.bashrc and all worked OK from the terminal, but when Apache with mod_wsgi starts my Python scripts, it acts under the system or a dedicated user, which knows nothing of my .bashrc.
For this particular situation, I just used the Apache config (apache2.conf) to set the Python path for Apache (the WSGIPythonPath option).
| 0
| 1
| 0
| 1
|
2013-10-28T05:30:00.000
| 3
| 0
| false
| 19,627,743
| 0
| 0
| 1
| 2
|
I have a VPS with system-wide installed python 2.5.
I installed Python 2.7 into one of the users' home dirs (using --prefix), added it to bashrc and bash_profile, and exported it in the environment; now when I type python in the console, Python 2.7 is running.
But when I check the Python version from my application (Django using FastCGI), I still see that it is using 2.5.
In the ps output I see python processes running for this account, and apache processes running under the hosting-specific account. How can I switch this particular account to 2.7 without changing the system-wide version?
Thanks!
|
Handling of arbitrary options using Tornado options, i.e. like **kwargs
| 19,716,110
| 3
| 3
| 4,015
| 0
|
python,python-2.7,tornado,keyword-argument
|
The tornado.options philosophy is that any module may define options, not just the main entry point. So if you might need a bluetooth mac address, you'd define that option in the module that interacts with bluetooth. (and if you might need more than one you can set multiple=True). The only tricky part is that you must import all modules that define options before calling parse_command_line. Truly arbitrary options are not supported by tornado.options.
It's also possible to use argparse or another command-line library instead of tornado.options.parse_command_line - the rest of tornado doesn't care whether you're using tornado.options or not.
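For the argparse route the answer mentions, parse_known_args gives almost exactly the **kwargs-style behaviour the question asks for: defined options are parsed, and everything else comes back as a leftover list (the option names below are invented):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=8888)  # a "defined" option

# parse_known_args() does not raise on options the parser never defined;
# it returns them untouched in a second list.
known, remaining = parser.parse_known_args(
    ["--port", "9000", "--mac-address", "AA:BB", "--tty", "/dev/ttyS0"]
)

print(known.port)   # 9000
print(remaining)    # ['--mac-address', 'AA:BB', '--tty', '/dev/ttyS0']
```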
| 0
| 1
| 0
| 0
|
2013-10-29T12:03:00.000
| 2
| 0.291313
| false
| 19,657,719
| 1
| 0
| 0
| 1
|
I'm using Tornado options to define command-line arguments. However, I would like to be able to throw arbitrary configuration options, not defined in code, to my program. These will differ, depending on what the program is supposed to do. For instance, connect to a bluetooth device using a MAC address or connect to a serial device using a TTY.
If I define a set of "mandatory" options in code and then add an additional one when calling the program, I get an exception thrown by parse_command_line().
It would be very handy to get e.g. a dictionary with the remaining (undefined) options. That is, much in the same way as **kwargs works in functions.
Can this be done?
(A work-around is to define a string option named e.g. configuration and throw everything in there, possibly encoded in some clever way. As the program is being called by another program I can e.g. base64-encode a serialized dict.)
Update: I've noticed that if you add command-line args without leading dashes, Tornado will ignore them and return a list with remaining (undefined) options.
|
Python: subprocess vs native API
| 19,662,004
| 1
| 2
| 322
| 0
|
python,api,subprocess
|
The only way that I see myself using subprocess instead of a native Python API is if some option of the program is not provided in the API.
| 0
| 1
| 0
| 0
|
2013-10-29T15:03:00.000
| 2
| 0.099668
| false
| 19,661,874
| 1
| 0
| 0
| 2
|
In cases where both options are available: calling a command-line tool with subprocess (say, hg) or making use of a native Python API (say, the Mercurial API), is there a case where it's more favorable to use the former?
|
Python: subprocess vs native API
| 19,662,115
| 3
| 2
| 322
| 0
|
python,api,subprocess
|
If you want to execute some third-party native code which you know is not stable and may crash with a segfault, then it is better to execute it as a subprocess - you will be able to safely handle the possible crashes from your Python process.
Also, if you want to call several times some code which is known to leak memory, leave open files or other resources, from a long running Python process, then again it may be wise to run it as a subprocess. In this case the leaking memory or other resources will be reclaimed by the operating system for you each time the subprocess exits, and not accumulate.
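A sketch of that isolation: a segfault in the child shows up as a negative return code instead of taking down the parent (here the crash is provoked deliberately with ctypes):

```python
import subprocess
import sys

# The child dereferences a NULL pointer and dies with SIGSEGV;
# the parent just inspects the return code and carries on.
result = subprocess.run(
    [sys.executable, "-c", "import ctypes; ctypes.string_at(0)"],
    capture_output=True,
)

print(result.returncode)  # nonzero; negative on POSIX (e.g. -11 for SIGSEGV)
```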
| 0
| 1
| 0
| 0
|
2013-10-29T15:03:00.000
| 2
| 1.2
| true
| 19,661,874
| 1
| 0
| 0
| 2
|
In cases where both options are available: calling a command-line tool with subprocess (say, hg) or making use of a native Python API (say, the Mercurial API), is there a case where it's more favorable to use the former?
|
syncronizing across machines for a python apscheduler and wmi based windows service
| 19,737,941
| 0
| 1
| 170
| 0
|
python,networking,synchronization,wmi,apscheduler
|
I tried to include functionality like this in APScheduler 2.0, but it didn't pan out. The biggest issue is handling concurrent access to jobs and making sure jobs get run even if a particular node crashes. The nodes also need to communicate somehow.
Are you sure you don't want to use Celery instead?
| 0
| 1
| 0
| 0
|
2013-10-30T19:44:00.000
| 1
| 0
| false
| 19,692,437
| 1
| 0
| 0
| 1
|
I am using apscheduler and wmi to create and install new python based windows services where the service determines the type of job to be run. The services are installed across all the machines on the same network. Given this scenario I want to make sure that these services run only on one machine and not all the machines.
If a machine goes down I still want the job to be run from another machine on the same network. How would I accomplish this task?
I know I need to do some kind of synchronization across machines but not sure how to address it?
|
Ubuntu - PySide module not found for python2 but works fine for python3
| 19,697,689
| 1
| 1
| 1,553
| 0
|
python,ubuntu,python-2.7,python-3.x,pyside
|
You have two independent Python 2.7 installations, one in /usr and one in /usr/local. (And that's on top of the Python 3.x installation you also have.)
This is bound to cause confusion, especially for novices. And it has caused exactly the kind of confusion it was bound to cause.
You've installed PySide into the /usr installation, so it ended up in /usr/lib/python2.7/dist-packages. If you run /usr/bin/python, that import PySide will probably work fine. (If not, see below.)
But the default thing called python and python2.7 on your PATH is the /usr/local installation, hence which python says /usr/local/bin/python, so it can't see PySide at all. So you need to get it installed for the other Python as well.
Unless you know that you need a second Python 2.7 in /usr/local for some reason, the simplest thing to do would be to scrap it. Don't uninstall it and reinstall it; just uninstall it. You've already got a Python 2.7 in /usr, and you don't need two of them.
If you really need to get PySide working with the second 2.7…
Since you still haven't explained how you've been installing PySide despite being asked repeatedly, I can't tell you exactly how to do that. But generally, the key is to make sure to use explicit paths for all Python programs (python itself, python-config, pip, easy_install, etc.) that you have to run. For example, if the docs or blog or voices in your head tell you to run easy_install at some step, run /usr/local/bin/easy_install instead. If there is no such program, then you need to install that. The fact that you already have /usr/bin/easy_install doesn't help—in fact, it hurts.
If you can get rid of the second Python, but that doesn't fix PySide yet, uninstall, rebuild, and reinstall PySide. Or, even simpler… PySide has pre-made, working binary Ubuntu packages for all of the major Python versions that have Ubuntu packages. Just install it that way.
| 0
| 1
| 0
| 1
|
2013-10-31T01:40:00.000
| 1
| 1.2
| true
| 19,696,973
| 0
| 0
| 0
| 1
|
I had PyQt4 running fine with python2 on Ubuntu 12.04. I then installed python-PySide. But the installation test would give me a module not found error. Then I installed python3-PySide and it works fine. So obviously something to do with my environment paths, but I'm not sure what I need to do. I'm guessing PySide is automatically checking if python3 exists and if it does then it'll use it regardless. I need PySide to work with python2.7 because of Qt4.8 compatibility issues. Any suggestions?
some info about my system:
which python
/usr/bin/local/python
which python3
/usr/bin/python3
EDIT:
More details about installation test.
After installation, I bring up the python console and try import PySide, as follows:
python
import PySide
ImportError: No module name PySide
But it works fine for python3:
python3
import PySide
PySide.version
'1.1.2'
|
Can I pass a list from python to a php script that usually takes a .txt file line-by-line to operate on?
| 19,698,635
| 0
| 0
| 74
| 0
|
php,python,bash,command-line-interface
|
Writing the data out to a text file in Python and then loading that text file in PHP is definitely the easiest way. If you're willing to modify the PHP script, you could make it read the data from stdin and set up a pipe between the two processes, but this is going to be a little trickier.
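A sketch of the file-based hand-off from the Python side (the PHP script name is hypothetical, so the actual call is left commented out):

```python
import subprocess
import tempfile

items = ["alpha", "beta", "gamma"]  # the list produced by the analysis

# Write one item per line: the same format the PHP script already parses.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(items))
    path = f.name

# Then invoke the PHP script exactly as you would from the command line:
# subprocess.run(["php", "parse_lines.php", path], check=True)

print(path.endswith(".txt"))  # True
```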
| 0
| 1
| 0
| 1
|
2013-10-31T03:52:00.000
| 2
| 1.2
| true
| 19,698,151
| 0
| 0
| 0
| 1
|
I'm new to all things programming, but am trying to build up some functionality for my team.
I have a script in python that performs some useful analysis, and now I need it to communicate to a PHP script that I usually call from the command line with an argument that is a text file, which the script parses and operates on line by line.
What I'm trying to do is pass to the script in the CLI a list variable from Python.
Is the best way to do this to write the list to a text file on my server and then call the script with subprocess from Python or is there a more streamlined way to make this happen?
|
Using GPIO in webpage
| 19,702,189
| 2
| 0
| 467
| 0
|
python,django,raspberry-pi
|
You don't have to run the web browser as root, but rather your Django app (the web server).
Of course, running a web application as root is an incredibly bad idea (even on a Pi), so you might want to use a separate worker process (e.g. using Celery) that runs as root and accesses the GPIOs.
| 0
| 1
| 0
| 0
|
2013-10-31T08:59:00.000
| 1
| 1.2
| true
| 19,702,170
| 0
| 0
| 1
| 1
|
I am using Django 1.5.4 to design a web page in which I want to use GPIO, but I got the following error in the browser:
"No access to /dev/mem. Try running as root!"
Since the web browser itself is an application, how can I assign "root" privileges to it when it tries to render a web page? If it can be done without needing to install anything, that would be better, as other frameworks/applications that are able to use GPIO in a web page must have made some tweaks. I tried searching for similar questions in this area but couldn't find this specific case (Django + GPIO access).
Any help would be greatly appreciated.
Thanks
|
how to run python code on amazon ec2 webservice?
| 19,714,085
| 0
| 0
| 718
| 0
|
python,amazon-web-services,amazon-s3,amazon-ec2,cluster-computing
|
You do not need to use S3; you would likely want to use EBS for storing the code if you need it to be preserved between instance launches. When you launch an instance you have the option to attach an EBS storage volume. That volume will automatically be mounted to the instance and you can access it just like you would a drive on any physical machine. Copy your code up to the Amazon machine over SSH (e.g. with scp) and fire away.
| 0
| 1
| 0
| 0
|
2013-10-31T17:35:00.000
| 1
| 0
| false
| 19,713,141
| 0
| 0
| 1
| 1
|
I have never used amazon web services so I apologize for the naive question. I am looking to run my code on a cluster as the quad-core architecture on my local machine doesn't seem to be doing the job. The documentation seems overwhelming and I don't even know which AWS services are going to be used for running my script on EC2. Would I have to use their storage facility (S3) because I guess if I have to run my script, I'm going to have to store it on the cloud in a place where the cluster instance has access to the files or do I upload my files somewhere else while working with EC2? If this is true is it possible for me to upload my entire directory which has all the contents of the files required by my application onto s3. Any guidance would be much appreciated. So I guess my question is do I have to use S3 to store my code in a place accessible by the cluster? If so is there an easy way to do it? Meaning I have only seen examples of creating buckets wherein one file can be transferred per bucket. Can you transfer an entire folder into a bucket?
If we don't require to use S3 then which other service should I use to give the cluster access to my scripts to be executed?
Thanks in advance!
|
How can I automate google docs with Google App Engine?
| 19,746,281
| 0
| 0
| 559
| 0
|
python,google-app-engine,google-docs-api
|
There is currently no API to create Google Docs directly, except for:
1) making a Google Apps Script service, which does have access to the Docs API;
2) creating a ".doc", then uploading and converting it to a Google Doc.
Option 1 is best, but an Apps Script service has some limitations, like quotas. If you are only creating dozens or hundreds of documents per day you will be OK with the quotas. I've done it this way for something similar to your case.
| 0
| 1
| 0
| 0
|
2013-11-02T18:16:00.000
| 1
| 1.2
| true
| 19,745,169
| 0
| 0
| 1
| 1
|
Listmates:
I am designing a google app engine (python) app to automate law office documents.
I plan on using GAE, google docs, and google drive to create and store the finished documents. My plan is to have case information (client name, case number, etc.) entered and retrieved using GAE web forms and the google datastore. Then I will allow the user to create a motion or other document by inserting the form data into template.
The completed document can be further customized by the user, emailed, printed, and/or stored in a Google Drive folder.
I found information on how to create a web page that can be printed. However, I am looking for information for how to create an actual google doc and insert the form data into that document or template.
Can someone point me to a GAE tutorial of any type that steps me through how to do this?
|
GAE SDK for Python 2.5
| 19,748,534
| 1
| 2
| 272
| 0
|
google-app-engine,sdk,python-2.5
|
In the 1.8.6 SDK, there's an old_dev_appserver.py that works with Python 2.5. That'll help you along as you migrate.
| 0
| 1
| 0
| 0
|
2013-11-02T22:26:00.000
| 2
| 1.2
| true
| 19,747,596
| 0
| 0
| 1
| 2
|
I have an existing app that uses the deprecated Python 2.5 and the deprecated master/slave datastore. According to the docs, I must migrate the datastore to HRD before I can upgrade to Python 2.7. Before I can migrate my M/S datastore to HRD, I need to do some work on the app and test it using the dev server.
However, I upgraded to the most recent version of the SDK (1.8.6), and it does not support Python 2.5. Somebody else encountered this problem and learned that the latest SDK that supports Python 2.5 by default is Python SDK 1.7.5. From where can that be downloaded? Or, is there a way I can make the SDK 1.8.6 work with Python 2.5?
|
GAE SDK for Python 2.5
| 20,110,862
| 0
| 2
| 272
| 0
|
google-app-engine,sdk,python-2.5
|
Dave W. Smith gave me the answer, but I didn't know how to implement it until I made a discovery that maybe most people already know. In case it might be helpful to somebody, I will tell it here:
I do all my GAE/Python/Flex development work in Eclipse, except that I used the Launcher for local testing and deploying. (I am command-line averse.) I discovered that using the PyDev Eclipse plugin it is easy to set up a "run configuration" (under the PyDev "Run" menu) whereby you can set up command-line parameters, etc., and run any Python program from within Eclipse. I now use that facility for running dev_appserver.py (and, when needed for my Python 2.5 app, old_dev_appserver.py). I no longer have a need to use the Launcher. I also set up a PyDev run configuration to deploy my app and perform various appcfg.py functions (vacuum indexes, etc.).
| 0
| 1
| 0
| 0
|
2013-11-02T22:26:00.000
| 2
| 0
| false
| 19,747,596
| 0
| 0
| 1
| 2
|
I have an existing app that uses the deprecated Python 2.5 and the deprecated master/slave datastore. According to the docs, I must migrate the datastore to HRD before I can upgrade to Python 2.7. Before I can migrate my M/S datastore to HRD, I need to do some work on the app and test it using the dev server.
However, I upgraded to the most recent version of the SDK (1.8.6), and it does not support Python 2.5. Somebody else encountered this problem and learned that the latest SDK that supports Python 2.5 by default is Python SDK 1.7.5. From where can that be downloaded? Or, is there a way I can make the SDK 1.8.6 work with Python 2.5?
|
How to execute a Python program from the Python shell?
| 19,751,942
| 3
| 0
| 379
| 0
|
python,python-3.x
|
Exit the Python interpreter/console.
Edit your program in Notepad++, creating first_program.py in the same directory where your python.exe is.
Start cmd.exe from within exactly the same directory.
Type python first_program.py
You are done.
| 0
| 1
| 0
| 0
|
2013-11-03T10:22:00.000
| 4
| 0.148885
| false
| 19,751,900
| 1
| 0
| 0
| 3
|
I am very, very new to Python and I have a doubt.
If I write a program in a text editor (such as Notepad++), can I then execute it from the Python shell (the one that begins with >>>)? What command do I have to run to execute my Python program?
Thanks
Andrea
|
How to execute a Python program from the Python shell?
| 19,751,976
| 0
| 0
| 379
| 0
|
python,python-3.x
|
From within the Python IDLE shell:
File -> Open... -> Select your Python program
When your program has opened, select Run -> Run Module or press F5
| 0
| 1
| 0
| 0
|
2013-11-03T10:22:00.000
| 4
| 0
| false
| 19,751,900
| 1
| 0
| 0
| 3
|
I am very, very new to Python and I have a doubt.
If I write a program in a text editor (such as Notepad++), can I then execute it from the Python shell (the one that begins with >>>)? What command do I have to run to execute my Python program?
Thanks
Andrea
|
How to execute a Python program from the Python shell?
| 19,751,998
| 0
| 0
| 379
| 0
|
python,python-3.x
|
In my view:
You wrote a program, test.py:
print 'test file'
Then you turn to the Windows cmd,
execute python, and you get this prompt:
>>>
Then (after import os) you can simply run:
os.system('python test.py')
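A slightly more robust variant of the same idea, using subprocess instead of os.system (a sketch; the temporary script here stands in for the asker's test.py):

```python
import os
import subprocess
import sys
import tempfile

# Create a tiny script to stand in for the asker's test.py.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('test file')\n")
    script_path = f.name

# subprocess.run is the modern replacement for os.system: it lets you
# capture output and inspect the exit status explicitly.
result = subprocess.run(
    [sys.executable, script_path],  # sys.executable is the running Python
    capture_output=True,
    text=True,
)
os.unlink(script_path)
print(result.stdout.strip())  # test file
```

Unlike os.system, this avoids going through the shell and gives you the child's stdout and return code as Python objects.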
| 0
| 1
| 0
| 0
|
2013-11-03T10:22:00.000
| 4
| 0
| false
| 19,751,900
| 1
| 0
| 0
| 3
|
I am very, very new to Python and I have a doubt.
If I write a program in a text editor (such as Notepad++), can I then execute it from the Python shell (the one that begins with >>>)? What command do I have to run to execute my Python program?
Thanks
Andrea
|
Canopy - get Access Denied error
| 19,759,856
| 0
| 0
| 703
| 0
|
python,canopy
|
Since the supplied information is insufficient, the answer stays general. This is about user authentication: your app tries to open a file or a process which cannot be opened by your user. If you open your app with root privileges there won't be any problem.
| 0
| 1
| 0
| 0
|
2013-11-03T14:00:00.000
| 1
| 1.2
| true
| 19,753,771
| 0
| 0
| 0
| 1
|
I'm learning Python (from a very low baseline) and recently re-installed Canopy (on a MacBook). It was working fine before.
Now whenever I try to launch the editor I get an Access Denied error.
Can anyone help? Please bear in mind my inexperience.
Thanks
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/ui/tasks/tasks_application.py", line 205, in create_window
window.add_task(task)
File "/Applications/Canopy.app/appdata/canopy-1.1.0.1371.macosx-x86_64/Canopy.app/Contents/lib/python2.7/site-packages/pyface/tasks/task_window.py", line 187, in add_task
state.dock_panes.append(dock_pane_factory(task=task))
File "build/bdist.macosx-10.5-i386/egg/canopy/plugin/editor_task.py", line 143, in _create_python_pane
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/application.py", line 371, in get_service
protocol, query, minimize, maximize
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 78, in get_service
services = self.get_services(protocol, query, minimize, maximize)
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 115, in get_services
actual_protocol, name, obj, properties, service_id
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 259, in _resolve_factory
obj = obj(**properties)
File "build/bdist.macosx-10.5-i386/egg/canopy/python_frontend/plugin.py", line 109, in _frontend_manager_service_factory
File "build/bdist.macosx-10.5-i386/egg/canopy/app/running_process_manager.py", line 82, in register_proc
File "build/bdist.macosx-10.5-i386/egg/canopy/app/util.py", line 53, in get_exe_or_cmdline
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/_common.py", line 80, in get
ret = self.func(instance)
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 331, in exe
return guess_it(fallback=err)
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 314, in guess_it
cmdline = self.cmdline
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 346, in cmdline
return self._platform_impl.get_process_cmdline()
File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/_psosx.py", line 153, in wrapper
raise AccessDenied(self.pid, self._process_name)
AccessDenied: (pid=343)
DEBUG|2013-11-03 21:19:25|QtWarningMsg: QImage::scaled: Image is a null image
|
python gstreamer script error message no element "h264parse"
| 19,759,263
| 1
| 0
| 2,506
| 0
|
python,ubuntu,gstreamer
|
h264parse is part of "gst-plugins-bad", so you will want to install that plugin set through your package manager. If your script imports Gst from gi.repository you will want the 1.0 plugins, otherwise the 0.10 ones.
Have a nice day :)
| 0
| 1
| 0
| 0
|
2013-11-03T20:49:00.000
| 1
| 1.2
| true
| 19,757,936
| 0
| 0
| 0
| 1
|
I am running a Python script in Ubuntu. The script uses gstreamer. I get the following error message.
error: no element "h264parse"
Let me know if any other information would be helpful.
|
Multiprocessing on Ubuntu vs OSX and SSD vs HDD
| 19,758,695
| 1
| 0
| 229
| 0
|
python,multithreading,macos,ubuntu
|
At the hardware level only one operation can be performed on a device at once. If the drive is busy, the requested operation is queued. There are a few different queues where it may be waiting, and they vary across operating systems, hardware, and even drivers. There are different queue-management methods as well; the most popular on the software side is FIFO (first in, first out), but on the drive side it is probably NCQ (a queue-management scheme that selects the closest data to be written/read first). All of those queues have a limited size. If the hardware-level queues are full (for example, the disk cache has been filled), the system halts all operations of applications requesting disk access. So if your application is doing disk operations it may simply be waiting for the disk drive.
As SSD technology makes the whole process much quicker (access latency is about 10-20 times lower than on an HDD), it is highly probable that your application doesn't use 100% of the CPU because of the HDD.
| 0
| 1
| 0
| 0
|
2013-11-03T21:33:00.000
| 1
| 1.2
| true
| 19,758,414
| 1
| 0
| 0
| 1
|
Can a HDD vs SSD setup account for lower processor utilization when there are many read and write operations?
So I've written a program that spawns multiple processes. On OSX it runs great and utilizes 100% of the CPU. Overloading it with hundreds of threads works out fine. On Ubuntu, it freezes when pushing a large number of threads. When I limit the number of total threads to the max for the processors, the Ubuntu machine doesn't utilize all the computing power, only about 50%. My threads do run at nearly 100% for the first minute or so, then suddenly it becomes random, with a wave-like utilization graph which doesn't always begin at the same time.
Specs:
OSX, SSD, Intel i7 4 cores x 2 threads each = 8 threads
Ubuntu, HDD, 3930K Intel i7 6 cores x 2 threads each = 12 threads
|
Using the GAE remote_api to Create Local Scripts
| 19,760,171
| 1
| 0
| 71
| 0
|
python,google-app-engine,google-cloud-datastore
|
Why is including the paths that onerous?
Normally the remote_api shell is used interactively, but it is a good tool that you can use as the basis of achieving what you want.
The simplest way will be to copy and modify the remote_api shell so that, rather than presenting an interactive shell, it runs a named script.
That way it will deal with all the path setup.
In the past I have integrated the remote_api inside a Zope server, so that Plone could publish stuff to App Engine. All sorts of things are possible with remote_api; however, you need to deal with imports like anything else in Python, except that the App Engine libraries are not installed in site-packages.
| 0
| 1
| 0
| 0
|
2013-11-04T00:23:00.000
| 1
| 1.2
| true
| 19,759,934
| 0
| 0
| 1
| 1
|
I'm trying to do some local processing of data entries from the GAE datastore and I am trying to do this by using the remote_api. I just want to write some quick processing scripts that pull some data, but I am getting import errors saying that Python cannot import from google.
Am I supposed to run the script from within the development environment somehow. Or perhaps I need to include all of the google stuff in my Python path? That seems excessive though.
|
How can I track how much data my Python program is sending / receiving over the network?
| 19,761,799
| 1
| 0
| 73
| 0
|
python,networking,monitoring
|
In Python, you'd probably have to wrap things; it could be a bit of a challenge.
On Linux, the netstat program will probably do something that's at least related to what you want.
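One stdlib-only way to "wrap things" is a counting proxy around your own sockets (a sketch; it only helps where you control socket creation, since drivers like the MongoDB/MySQL clients open their own sockets internally):

```python
import socket

class CountingSocket:
    """Thin proxy that counts bytes sent/received through a socket."""

    def __init__(self, sock):
        self._sock = sock
        self.bytes_sent = 0
        self.bytes_received = 0

    def sendall(self, data):
        self.bytes_sent += len(data)
        return self._sock.sendall(data)

    def recv(self, bufsize):
        data = self._sock.recv(bufsize)
        self.bytes_received += len(data)
        return data

    def __getattr__(self, name):
        # Delegate everything else (close, settimeout, ...) to the real socket.
        return getattr(self._sock, name)

# Demo with a local socket pair (no network needed).
a, b = socket.socketpair()
counted = CountingSocket(a)
counted.sendall(b"hello")
b.sendall(b"world!")
echoed = counted.recv(1024)
print(counted.bytes_sent, counted.bytes_received)  # 5 6
a.close(); b.close()
```

For whole-process accounting without code changes, OS-level tools (netstat, nethogs, /proc/net) remain the simpler route.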
| 0
| 1
| 0
| 0
|
2013-11-04T04:14:00.000
| 1
| 0.197375
| false
| 19,761,546
| 0
| 0
| 0
| 1
|
I have a Python program that queries multiple remote services (MongoDB, MySQL, etc). Is there a way to track how much data my program is transferring over the network either within the Python program or through some Linux utility?
|
Django can't find libssl on OS X Mavericks
| 19,772,866
| 2
| 2
| 9,380
| 1
|
python,django,macos,postgresql
|
It seems that it's libssl.1.0.0.dylib that is missing. Mavericks comes with libssl 0.9.8; you need to install libssl via Homebrew.
If loader_path points to /usr/lib/, you also need to symlink libssl from /usr/local/Cellar/openssl/lib/ into /usr/lib.
| 0
| 1
| 0
| 0
|
2013-11-04T12:15:00.000
| 5
| 0.07983
| false
| 19,767,569
| 0
| 0
| 1
| 1
|
I'm trying to get Django running on OS X Mavericks and I've encountered a bunch of errors along the way, the latest being that when I run python manage.py runserver to see if everything works, I get this error, which I believe means that it misses libssl:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so Reason: image not found
I have already upgraded Python to 2.7.6 with the patch that handles some of the quirks of Mavericks.
Any ideas?
Full traceback:
Unhandled exception in thread started by >
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 93, in inner_run
self.validate(display_num_errors=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 280, in validate
num_errors = get_validation_errors(s, app)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/validation.py", line 28, in get_validation_errors
from django.db import models, connection
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 40, in
backend = load_backend(connection.settings_dict['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 34, in getattr
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 93, in getitem
backend = load_backend(db['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
import(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 14, in
from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/creation.py", line 1, in
import psycopg2.extensions
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/init.py", line 50, in
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so
Reason: image not found
|
Error: dlopen() Library not loaded Reason: image not found
| 22,560,206
| 30
| 12
| 27,135
| 0
|
macos,error-handling,ipython,dlopen
|
Shared object location under OS X is sometimes tricky. When you directly call dlopen() you have the freedom of specifying an absolute path to the library, which works fine. However, if you load a library which in turn needs to load another (as appears to be your situation), you've lost control of specifying where the library lives with its direct path.
There are environment variables that you could set before running your main program that tell the dynamic loader where to search for things. In general these are a bad idea (but you can read about them via the man dyld command on an OS X system).
When an OS X dynamic library is created, it's given an install name; this name is embedded within the binary and can be viewed with the otool command. otool -L mach-o_binary will list the dynamic library references for the mach-o binary you provide the file name to; this can be a primary executable or a dylib, for example.
When a dynamic library is statically linked into another executable (either a primary executable or another dylib), the expected location of where that dylib being linked will be found is based on the location written into it (either at the time it was built, or changes that have been applied afterwards). In your case, it seems that phys_services.so was statically linked against libphys-services.dylib. So to start, run otool -L phys_services.so to find the exact expectation of where the dylib will be.
The install_name_tool command can be used to change the expected location of a library. It can be run against the dylib before it gets statically linked against (in which case you have nothing left to do), or it can be run against the executable that loads it in order to rewrite those expectations. The command pattern for this is install_name_tool -change <old_path> <new_path> <binary>. So for example, if otool -L phys_services.so shows you /usr/lib/libphys-services.dylib and you want to move the expectation as you posed in your question, you would do that with install_name_tool -change /usr/lib/libphys-services.dylib @rpath/lib/libphys-services.dylib phys_services.so.
The dyld man page (man dyld) will tell you how @rpath is used, as well as other macros @loader_path and @executable_path.
| 0
| 1
| 0
| 0
|
2013-11-04T20:31:00.000
| 1
| 1.2
| true
| 19,776,571
| 0
| 0
| 0
| 1
|
I am a newbie in this field. My laptop is Macbook air, Software: OS X 10.8.5 (12F45). I am running a code which gives me the following error:
dlopen(/Users/ramesh/offline/build_icerec/lib/icecube/phys_services.so, 2): Library not loaded: /Users/ramesh/offline/build_icerec/lib/libphys-services.dylib
Referenced from: /Users/ramesh/offline/build_icerec/lib/icecube/phys_services.so
Reason: image not found
I did a Google search and found a variety of answers. I think the one that works is to use
"-install_name @rpath/lib".
My question is: how do I use -install_name @rpath/lib in my case?
|
I can't locate correct Python script to update
| 19,783,232
| 1
| 0
| 40
| 0
|
python,command-line,compiler-construction,command-line-interface,python-2.6
|
Closing the loop: bdist in the path is a sign that the package was installed with setup.py install and is running from the standard Python system path, not from wherever you have it checked out.
The easy fix is to run setup.py install again.
The harder fix is to uninstall it and fiddle with Apache's working directory, but that's not quite my area. :)
| 0
| 1
| 0
| 0
|
2013-11-04T23:36:00.000
| 1
| 1.2
| true
| 19,779,371
| 0
| 0
| 0
| 1
|
I have a script that I am running at /var/scripts/SomeAppName/source/importer/processor.py
That script triggers an error that has a line that says:
File "build/bdist.linux-i686/egg/something/cms/browser.py", line 43, in GetBrowser
The problem I'm running into is that I'm unable to locate build/bdist.linux-i686/egg/something/cms/browser.py but I can locate /var/scripts/AnotherApp/appcommon/cms/browser.py and /var/scripts/AnotherApp/build/lib/appcommon/cms/browser.py
I have modified both of those files to remove the part that is throwing the error, but I am still getting the same error, as if the files hadn't been modified at all.
I'm guessing the problem is that I'm not modifying the correct file, or that I need to compile the script somehow, but I'm just not able to find out where/how to do this.
I have tried restarting Apache but with no luck.
Any help or guidance as to where I should be looking, or if I need to run some sort of command to re-compile the browser.py file, would be appreciated.
|
How to stop/terminate a python script from running?
| 33,560,303
| 4
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
You can also use Activity Monitor to stop the Python process.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.047024
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 51,491,746
| 5
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
To stop your program, just press CTRL + D
or exit().
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.058756
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 52,684,880
| 2
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Press Ctrl+Alt+Delete and Task Manager will pop up. Find the Python command running, right-click on it and click Stop or Kill.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.023525
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 53,984,398
| 4
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Control+D works for me on Windows 10. Putting exit() at the end also works.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.047024
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 55,071,056
| 3
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
If you are working with Spyder, use CTRL+. and you will restart the kernel, also you will stop the program.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.035279
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 61,886,193
| 3
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Windows solution: Control + C.
Macbook solution: Control (^) + C.
Another way is to open a terminal, type top, write down the PID of the process that you would like to kill and then type on the terminal: kill -9 <pid>
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.035279
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 51,932,807
| 8
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Ctrl+Z should do it, if you're caught in the Python shell. Keep in mind that instances of the script could continue running in the background, so under Linux you have to kill the corresponding process.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 67,751,184
| 2
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Try using:
Ctrl + Fn + S
or
Ctrl + Fn + B
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 0.023525
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 46,964,691
| 33
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
Ctrl-Break; it is more powerful than Ctrl-C.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 19,782,093
| 76
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
To stop your program, just press Control + C.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1.2
| true
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 34,029,481
| 190
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
You can also do it with the exit() function in your code. More ideally, use sys.exit(), which might terminate Python even if you are running things in parallel through the multiprocessing package.
Note: in order to use sys.exit(), you must first import sys.
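sys.exit() works by raising SystemExit, which is why an enclosing try/except can still intercept it; a minimal sketch:

```python
import sys

def main():
    print("doing work")
    sys.exit(3)  # raises SystemExit(3); uncaught, the interpreter exits with status 3
    print("never reached")

try:
    main()
except SystemExit as exc:
    # Normally you would NOT catch SystemExit; done here only to inspect it.
    captured = exc.code

print("exit code:", captured)  # exit code: 3
```

Because it is an ordinary exception, cleanup in finally blocks and context managers still runs on the way out.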
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 44,786,454
| 55
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
To stop a Python script, just press Ctrl + C.
Inside a script, you can do it with exit().
You can do it in an interactive session with just exit.
You can use pkill -f name-of-the-python-script.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 59,539,599
| 7
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
exit() will kill the kernel if you're in a Jupyter Notebook, so it's not a good idea. The raise statement will stop the program.
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 53,211,247
| 59
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
If your program is running at an interactive console, pressing CTRL + C will raise a KeyboardInterrupt exception on the main thread.
If your Python program doesn't catch it, the KeyboardInterrupt will cause Python to exit. However, an except KeyboardInterrupt: block, or something like a bare except:, will prevent this mechanism from actually stopping the script from running.
Sometimes if KeyboardInterrupt is not working you can send a SIGBREAK signal instead; on Windows, CTRL + Pause/Break may be handled by the interpreter without generating a catchable KeyboardInterrupt exception.
However, these mechanisms mainly only work if the Python interpreter is running and responding to operating system events. If the Python interpreter is not responding for some reason, the most effective way is to terminate the entire operating system process that is running the interpreter. The mechanism for this varies by operating system.
In a Unix-style shell environment, you can press CTRL + Z to suspend whatever process is currently controlling the console. Once you get the shell prompt back, you can use jobs to list suspended jobs, and you can kill the first suspended job with kill %1. (If you want to start it running again, you can continue the job in the foreground by using fg %1; read your shell's manual on job control for more information.)
Alternatively, in a Unix or Unix-like environment, you can find the Python process's PID (process identifier) and kill it by PID. Use something like ps aux | grep python to find which Python processes are running, and then use kill <pid> to send a SIGTERM signal.
The kill command on Unix sends SIGTERM by default, and a Python program can install a signal handler for SIGTERM using the signal module. In theory, any signal handler for SIGTERM should shut down the process gracefully. But sometimes if the process is stuck (for example, blocked in an uninterruptible IO sleep state), a SIGTERM signal has no effect because the process can't even wake up to handle it.
To forcibly kill a process that isn't responding to signals, you need to send the SIGKILL signal, sometimes referred to as kill -9 because 9 is the numeric value of the SIGKILL constant. From the command line, you can use kill -KILL <pid> (or kill -9 <pid> for short) to send a SIGKILL and stop the process running immediately.
On Windows, you don't have the Unix system of process signals, but you can forcibly terminate a running process by using the TerminateProcess function. Interactively, the easiest way to do this is to open Task Manager, find the python.exe process that corresponds to your program, and click the "End Process" button. You can also use the taskkill command for similar purposes.
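The SIGTERM handler mentioned above can be sketched like this (Unix-only; a real program would clean up and exit rather than merely record the signal):

```python
import os
import signal

received = []

def on_sigterm(signum, frame):
    # Signal handlers should be short; here we just record what arrived.
    received.append(signum)

# Install the handler, then simulate `kill <pid>` by signalling ourselves.
signal.signal(signal.SIGTERM, on_sigterm)
os.kill(os.getpid(), signal.SIGTERM)

print(received == [signal.SIGTERM])  # True
```

Without the handler installed, the same os.kill call would terminate the process, which is exactly the behavior kill relies on.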
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
How to stop/terminate a python script from running?
| 53,210,260
| 8
| 161
| 854,269
| 0
|
python,execution,terminate,termination
|
When I have a Python script running in a Linux terminal, CTRL+\ works (not CTRL+C or CTRL+D).
| 0
| 1
| 0
| 0
|
2013-11-05T04:46:00.000
| 17
| 1
| false
| 19,782,075
| 1
| 0
| 0
| 15
|
I wrote a program in IDLE to tokenize text files, and it started tokenizing 349 text files! How can I stop it? How can I stop a running Python program?
|
Run python application with admin privileges
| 19,782,967
| 0
| 1
| 341
| 0
|
python,windows
|
For Linux, see the documentation on upstart (for Ubuntu) or service (for Red Hat). Then write a start-up script to start your Python script with the appropriate rights. You can also configure it to be restarted if it crashes.
Windows has a similar facility for start-up programs, where you can register your program to start.
| 0
| 1
| 0
| 0
|
2013-11-05T05:40:00.000
| 1
| 0
| false
| 19,782,573
| 0
| 0
| 0
| 1
|
Working on the Windows platform, I have a Python application which, once invoked, remembers its state and resumes in case of a system crash or reboot. The application actually runs some other executables or, in technological terms, is a framework. The typical scenario, where the executable needs to run in admin mode, passes the first time but fails after resuming from a crash or reboot.
What I believe is that I need to invoke the resumed application in admin mode. How could this be achieved? Thanks in advance!
|
How can I detect using Python the insertion of only USBs and hard drives on Ubuntu/Linux?
| 19,872,640
| 1
| 2
| 4,356
| 0
|
python,linux,ubuntu,python-3.x,usb
|
I can detect it rather easily by monitoring the /dev/disk/by-label/ directory.
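A minimal polling sketch of that idea; the watched directory is parameterized, so the demo below uses a throwaway folder instead of /dev/disk/by-label/, and real code would use inotify (e.g. via pyinotify or watchdog) rather than polling:

```python
import os
import tempfile

def snapshot(path):
    """Return the set of entries currently present in `path`."""
    return set(os.listdir(path)) if os.path.isdir(path) else set()

def diff(before, after):
    """Return (inserted, removed) entry names between two snapshots."""
    return after - before, before - after

# Demo against a throwaway directory rather than /dev/disk/by-label/.
watch_dir = tempfile.mkdtemp()
before = snapshot(watch_dir)
# Simulate a newly attached, labeled drive appearing in the directory.
open(os.path.join(watch_dir, "BACKUP_USB"), "w").close()
inserted, removed = diff(before, snapshot(watch_dir))
print(inserted, removed)  # {'BACKUP_USB'} set()
```

On a real system each entry under /dev/disk/by-label/ is a symlink named after a volume label, so the inserted/removed sets map directly to attached/detached media.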
| 0
| 1
| 0
| 0
|
2013-11-05T07:36:00.000
| 2
| 1.2
| true
| 19,783,877
| 0
| 0
| 0
| 1
|
I'm building a backup program which involves detecting when media available for backup is inserted. I've looked into detecting the insertion of backup media, and I'm going to use the file system watch service inotify on the /media/username directory.
The problem is that I've looked into this directory and there are folders that don't represent any currently available medium. How can I detect the list of currently available media (USBs, HDDs) and watch for any future ones? More technically, what are the characteristics of an actively available USB/HDD folder in the /media/username directory?
|
Best high concurrency Python / Redis server
| 19,787,579
| 1
| 1
| 1,710
| 0
|
python,performance,redis,bottle
|
If you are a beginner you should not start with evented (Twisted/Tornado/gevent/Eventlet...) libs.
They will lead you to places you don't know!
If you need to scale, add machines and balance the load with a load balancer.
| 0
| 1
| 0
| 0
|
2013-11-05T08:53:00.000
| 3
| 0.066568
| false
| 19,784,948
| 0
| 0
| 0
| 1
|
I'm prototyping a Python/Redis based API and am serving JSON using Bottle but unfortunately out of the box Bottle performs badly under load and under high concurrency. Some initial testing on real traffic results in the python script crashing without terminating, which means the API is unresponsive and not restarting*.
What is currently the best solution to scale a Python/Redis API in terms of performance as well as documentation. I find the bottle+greenlet solution poorly documented and not easy to implement for a Python beginner like me. I heard tornado is good but that its integration with Redis is slower than Bottle's.
*Seems that when bottle is unable to send the body of the HTTP request to the client, the server will bug out with "[Errno 32] Broken pipe" errors, which seems like a bad reason for a server to stop working
|
How to execute PERL scripts in Django using async tasker like Celery?
| 19,791,772
| 3
| 0
| 555
| 0
|
python,django,perl,rabbitmq,celery
|
The easiest way to execute a Perl script from Celery would probably be a thin wrapper written in Python that runs the script as a subprocess.
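A sketch of such a wrapper (run_script is a name made up for illustration; in the real project you would decorate it with Celery's @app.task so Django can enqueue it, and the Perl script itself needs no changes):

```python
import subprocess

def run_script(cmd):
    """Run an external command, e.g. ["perl", "report.pl", "--month", "11"],
    and return (exit_code, stdout, stderr).

    In a Celery worker this function would carry the @app.task decorator;
    Django then calls run_script.delay([...]) and can poll the AsyncResult
    to update the user when the script finishes.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout, result.stderr
```

No RabbitMQ code is needed inside the Perl scripts themselves; only this Python wrapper talks to the broker.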
| 0
| 1
| 0
| 0
|
2013-11-05T14:41:00.000
| 2
| 0.291313
| false
| 19,791,570
| 0
| 0
| 1
| 1
|
I have a need to execute PERL scripts through a Django web interface.
The user will select the parameters of the script and execute it.
I am wondering if it is possible to use Celery/RabbitMQ to execute these script as Celery tasks?
If so, would I need to modify the PERL script?
Would I have to write RabbitMQ code into the PERL scripts? Or would I just execute the Celery task and wait for the script to finish processing? I would like to have the script update the Django user/celery.
|
Pydev Not Recognized in Eclipse
| 20,275,034
| 0
| 21
| 27,038
| 0
|
python,eclipse,ide,pydev
|
I had to spend a lot of time figuring out why it was not working, but ultimately did: download the 2.8.2 zip instead, unzip it into the dropins folder, and start Eclipse with the -clean option.
| 0
| 1
| 0
| 0
|
2013-11-07T03:19:00.000
| 9
| 0
| false
| 19,827,404
| 1
| 0
| 0
| 2
|
I've been using PyDev within Eclipse on my Mac for about two years now. I updated Eclipse today, and suddenly PyDev is completely missing. I've tried everything, including a complete uninstall and fresh install, but although PyDev shows up as installed in the menu, it appears nowhere else.
PyDev version: 3.0.0.201311051910
Eclipse: Version: Kepler Service Release 1
Build id: 20130919-0819
I can't open a PyDev perspective, I can't create a new Python file, and I can't open an existing Python file without it just being seen as plain text.
I've got a huge assignment due tonight, help appreciated.
|
Pydev Not Recognized in Eclipse
| 20,405,349
| 0
| 21
| 27,038
| 0
|
python,eclipse,ide,pydev
|
I debugged a Python project (imported earlier), and the PyDev menu reappeared after simply changing the current perspective to "Debug".
I think opening the PyDev perspective through Window > Open Perspective > Other... > PyDev would also bring back the PyDev menu.
| 0
| 1
| 0
| 0
|
2013-11-07T03:19:00.000
| 9
| 0
| false
| 19,827,404
| 1
| 0
| 0
| 2
|
I've been using PyDev within Eclipse on my Mac for about two years now. I updated Eclipse today, and suddenly PyDev is completely missing. I've tried everything, including a complete uninstall and fresh install, but although PyDev shows up as installed in the menu, it appears nowhere else.
PyDev version: 3.0.0.201311051910
Eclipse: Version: Kepler Service Release 1
Build id: 20130919-0819
I can't open a PyDev perspective, I can't create a new Python file, and I can't open an existing Python file without it just being seen as plain text.
I've got a huge assignment due tonight, help appreciated.
|
how to get python installed path from command line
| 19,829,607
| 2
| 30
| 85,755
| 0
|
python,windows,python-2.7,cmd
|
You can check the registry key:
HKLM\SOFTWARE\Python\PythonCore\${PYTHON_VERSION}\InstallPath
(or the same key under HKCU).
| 0
| 1
| 0
| 0
|
2013-11-07T06:39:00.000
| 3
| 0.132549
| false
| 19,829,516
| 1
| 0
| 0
| 1
|
I am trying to get the install path of Python.
Any idea how to get the Python install path from the command line in Windows? I don't want to set an environment variable.
Thanks,
|
Choosing a different python version to run code
| 19,843,644
| 0
| 1
| 473
| 0
|
python,windows
|
Assumptions: You mention python(w).exe and cmd, so you're most likely on windows.
You probably selected "use python 2.n" somewhere in your IDE (whichever one you may use). That works fine as long as you ONLY execute your scripts from there. From any other place on your system, Windows (or any other OS) makes use of the PATH environment variable (right-click My Computer > Advanced > Environment Variables). It probably contains the path to python27.exe (C:\Python27); change that to C:\Python25. If multiple directories are listed, earlier entries take precedence.
pythonw.exe is a custom executable related to early pywin32 development by Mark Hammond, I believe, and yes, it should suppress the console window during startup.
| 0
| 1
| 0
| 0
|
2013-11-07T17:32:00.000
| 3
| 1.2
| true
| 19,842,701
| 1
| 0
| 0
| 1
|
a) I have a code that works fine if I choose Python 2.5 but it gives me errors when using python 2.7. I have easily fixed the problem in windows by choosing python 2.5 as the default program. But, it seems that cmd does not follow this change. How could I fix this?
b) what is the difference between running a script using python.exe vs. pythonw.exe? I read somewhere that upon using pythonw.exe, I will not see the cmd window that pops out and disappears. But for my case, this is not true and I actually see the cmd window. Also, if I use python.exe I get an error from running my script but not when I use pythonw.exe.
|
how to close remote desktop window using python
| 29,933,838
| 0
| 1
| 1,415
| 0
|
python,desktop,os.system,mstsc
|
To close one of the mstsc instances, you need to know its pid. If you are opening mstsc.exe from the Python script itself, you can capture the pid of that instance:
from subprocess import Popen
p = Popen(r'C:\Windows\System32\mstsc.exe "connection.rdp"')
print(p.pid)
Then kill only that instance using its pid (e.g. p.terminate()).
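A self-contained sketch of that pid-based approach, using a long-sleeping Python child process as a stand-in for mstsc.exe so the example is runnable anywhere:

```python
import subprocess
import sys

def start_session():
    """Stand-in for Popen(r'C:\\Windows\\System32\\mstsc.exe "connection.rdp"');
    here we just launch a sleeping child so the sketch is runnable."""
    return subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(60)"]
    )

def close_session(proc):
    """Terminate only this particular process, leaving other instances alone.

    Popen.terminate() calls TerminateProcess on Windows, so there is no
    need to taskkill every mstsc.exe."""
    proc.terminate()
    proc.wait(timeout=10)
    return proc.returncode

session = start_session()
print("opened pid:", session.pid)
close_session(session)
```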
| 0
| 1
| 0
| 0
|
2013-11-08T08:47:00.000
| 1
| 0
| false
| 19,854,866
| 0
| 0
| 0
| 1
|
I want to automate closing the remote desktop application using python. I open the remote desktop using mstsc. When I do
os.system("TASKKILL /F /IM mstsc.exe")
It is killing all the remote desktop applications that are open. Is there a way I can specify through python which remote desktop it has to close.
I have 2 or more instances of remote desktop open and I require my program to close only specific connection. Is there a way I can pass the IP address or process ID or something.
|
delphi code compiled to obj files to be used in python
| 19,867,378
| 2
| 3
| 245
| 0
|
c++,python,c,delphi,.obj
|
Delphi generated .obj files cannot be consumed by Python because Python doesn't consume .obj files. You'd need to compile them to a library at the very least. At which point, emitting .obj files is pointless – you may as well just output a full module. I conclude that you'll need to compile your Delphi code to a library (DLL) or a COM object.
To support multiple platforms, you'll need to compile separately for each platform. Which means that you'll only be able to support platforms on which Delphi compilers exist. FreePascal has wider platform support and may be a better choice.
Obviously COM would restrict you to Windows. So the other option is a library. This can be consumed using ctypes or by making your module a Python extension module.
| 1
| 1
| 0
| 0
|
2013-11-08T19:37:00.000
| 1
| 0.379949
| false
| 19,867,190
| 1
| 0
| 0
| 1
|
I have a lot of code in Delphi I would like to use in python.
In Delphi XE there is an option to generate C/C++ .obj files.
Can I generate these .obj files in Delphi and use them in Python code?
Will Python code that uses the .obj still be cross-platform?
Thank you
|
Kill Python script after 60 second in simplest way
| 19,901,418
| 1
| 0
| 1,157
| 0
|
python
|
If you are using Linux, you can try this:
(cmdpid=$BASHPID; (sleep 60; kill $cmdpid) & exec YOUR_COMMAND)
for example, if you want to execute a program named script.py:
(cmdpid=$BASHPID; (sleep 60; kill $cmdpid) & exec python script.py)
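Since the question mentions Windows and compiling to an .exe, where the bash wrapper above isn't available, the same effect can be achieved inside the script itself with a watchdog timer. A sketch (os._exit kills the process outright, without cleanup):

```python
import os
import threading

TIMEOUT = 60  # seconds before the script kills itself

def _die():
    # os._exit terminates the whole process immediately, even if other
    # threads or blocking calls are still running (unlike sys.exit).
    os._exit(1)

watchdog = threading.Timer(TIMEOUT, _die)
watchdog.daemon = True  # don't keep the process alive just for the timer
watchdog.start()

# ... the rest of the script goes here; call watchdog.cancel() if it
# finishes before the deadline.
```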
| 0
| 1
| 0
| 0
|
2013-11-11T07:53:00.000
| 3
| 0.066568
| false
| 19,901,351
| 1
| 0
| 0
| 1
|
I have a Python script and want to kill the process after 60 seconds of execution; not just stop its work, the process itself should die.
I can't find a good reference for this.
P.S.: The script should kill itself (like a timer in the first line of the script that kills the script after a set time).
P.S.2: I'm on Windows and want to compile it to an .exe.
P.S.3: Python version is 2.7.
|
Over-riding Debian/Ubuntu's lintian profile
| 20,292,610
| 1
| 4
| 683
| 0
|
python,ubuntu,debian,deb,lintian
|
For future reference, here's what I did.
I generated and packaged the .pyo files into their own tar.gz file
In the postinst script, the tar.gz file is extracted, and the tar.gz file is deleted
In the postrm script, the pyo files are deleted.
This isn't the nicest solution in the world, but it works with Debian/Ubuntu's overly draconian policies (which don't even make sense; if I can install a binary, why not a pyo?).
Hopefully this helps someone out.
| 0
| 1
| 0
| 0
|
2013-11-11T08:14:00.000
| 2
| 0.099668
| false
| 19,901,647
| 0
| 0
| 1
| 1
|
I have written a proprietary application that needs to install some .pyo files. When I create a .deb from the app, lintian complains that package-installs-python-bytecode. I tried adding an override, but apparently this tag is marked as non-overridable by ftp-master-auto-reject.profile in /usr/share/lintian/profiles/debian. Since this tag is considered an Error, Ubuntu Software Center complains about the package.
Is there a clean way to override this tag so that Ubuntu Software Center no longer complains?
|
Can't install pydev in eclipse
| 31,271,138
| 0
| 0
| 9,571
| 0
|
python,eclipse,pydev
|
I had the same problem, I opened Eclipse as admin and it worked... don't know if it helps
| 0
| 1
| 0
| 0
|
2013-11-11T13:35:00.000
| 4
| 0
| false
| 19,907,723
| 0
| 0
| 0
| 1
|
I'd like install pydev in eclipse by following method, all of them fails:
using the update site: errors happen during the installation; looks like a firewall issue.
downloaded the pydev zip file and extracted it to the eclipse folder: not working (could not find it in Preferences after restarting Eclipse)
downloaded the pydev zip file and extracted it to the dropins folder: still not working (could not find it in Preferences after restarting Eclipse)
I am very frustrated on this, could anyone help on this ? Thanks
My enviroment:
OS: Mac 10.8
Eclipse: 3.7
Pydev: 3.0
Country: China
|
How exactly is Python Bytecode Run in CPython?
| 70,165,436
| 0
| 65
| 14,342
| 0
|
python,cpython,python-internals
|
When we run a Python program: (1) the source code is compiled by CPython into bytecode (a binary .pyc file, serialized with marshal, consisting of instructions for a stack machine); (2) the PVM (Python virtual machine, i.e. the interpreter) is a stack-based machine that loops over the bytecode and executes it instruction by instruction.
What executes the bytecode?
The bytecode tells the Python interpreter which C code to execute.
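You can see those instructions yourself with the standard-library dis module, which disassembles a function's bytecode; each opcode maps to a handler in the interpreter's C evaluation loop (ceval.c):

```python
import dis

def add(a, b):
    return a + b

# Prints one line per bytecode instruction, e.g. LOAD_FAST for the
# arguments and a return opcode; exact opcode names vary by version.
dis.dis(add)
```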
| 0
| 1
| 0
| 1
|
2013-11-11T21:53:00.000
| 4
| 0
| false
| 19,916,729
| 1
| 0
| 0
| 1
|
I am trying to understand how Python works (because I use it all the time!). To my understanding, when you run something like python script.py, the script is converted to bytecode and then the interpreter/VM/CPython–really just a C Program–reads in the python bytecode and executes the program accordingly.
How is this bytecode read in? Is it similar to how a text file is read in C? I am unsure how the Python code is converted to machine code. Is it the case that the Python interpreter (the python command in the CLI) is really just a precompiled C program that is already converted to machine code and then the python bytecode files are just put through that program? In other words, is my Python program never actually converted into machine code? Is the python interpreter already in machine code, so my script never has to be?
|
How do I access my django app running on Amazon ec2?
| 51,827,225
| 0
| 4
| 6,277
| 0
|
python,django,amazon-web-services,amazon-ec2
|
Make sure to include your IPv4 Public IP address in the ALLOWED_HOSTS section in Django project/app/settings.py script...
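For example, in settings.py (the host values below are placeholders; use your instance's actual public IP and DNS name):

```python
# settings.py (fragment) -- hosts Django will accept requests for
ALLOWED_HOSTS = [
    "xx.xxx.xx.xxx",  # placeholder: your EC2 instance's public IPv4
    "ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com",  # placeholder DNS
    "localhost",
]
```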
| 0
| 1
| 0
| 0
|
2013-11-12T05:37:00.000
| 2
| 0
| false
| 19,921,705
| 0
| 0
| 1
| 1
|
So, I have looked around Stack Overflow and other sites, but haven't been able to solve this problem, hence posting this question!
I have recently started learning django... and am now trying to run it on ec2.
I have an ec2 instance of this format: ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com on which I have a django app running. I changed the security group of this instance to allow http port 80 connections.
I did try to run the django app in the following ways: python manage.py runserver 0.0.0.0:8000 and python manage.py runserver ec2-xx-xxx-xx-xxx.us-west-2.compute.amazonaws.com:8000, and that doesn't seem to be helping either!
To make sure that there is nothing faulty from django's side, I opened another terminal window and ssh'ed into the instance and did a curl GET request to localhost:8000/admin which went through successfully.
Where am I going wrong? Will appreciate any help!
|
Continuous Deployment: Version Numbering and Jenkins for Deployment?
| 19,932,927
| 2
| 2
| 1,215
| 0
|
python,jenkins,continuous-integration,continuous-deployment
|
Since we already use Jenkins, I think we do it in a script called by
Jenkins. Any reason to do it with a different (better) tool?
To answer your question: No, there aren't any big reasons to not go with Jenkins for deployment.
Pros:
You already know Jenkins (and you probably know some of the quirks)
You don't need to introduce yet another technology
You said that you want to write scripts called by Jenkins, so you can switch easily to a different system later.
Cons:
there might be better tools out there for deployment
Does not integrate as well with change-control tools.
Additional Considerations:
Do not use the same server for prod deployment and continuous build/integration. These are two different tasks performed by two different roles. Therefore two different permission schemes might be employed.
Use permissions wisely. I use two different permissions for my deploy and CI servers. We have 3 Jenkins servers right now.
CI and deploy to uncontrolled environments (Developers can play with these environments)
Deploy to controlled environments. (QA environemnts and upwards)
Deploy to prod (yes, that's this server's only purpose in life) with the most restrictive permission scheme.
Sandbox: actually, there is a fourth server for Jenkins admins to play with.
Store your deployable artifacts outside of Jenkins (and you do if I read your question correctly).
So depending on your existing infrastructure and procedures, you can decide on the tooling. Jenkins won't lock you in as long as you keep as much of the logic as possible in scripts that are merely executed by Jenkins.
| 0
| 1
| 0
| 1
|
2013-11-12T10:32:00.000
| 1
| 1.2
| true
| 19,926,738
| 0
| 0
| 0
| 1
|
We want to use continuous deployment.
We have:
all sources (python) in a local RhodeCode (git) server.
Jenkins for automated testing
SSH connections to the production systems (linux).
a tool which can update servers in one command.
Now something like this should be implemented:
run tests with Jenkins
if there is a failure. Stop, mail developers
If all tests are OK:
deploy
We are long enough in the business to write some scripts to do this.
My questions:
How to you update the version numbers? You could increment them, you could use a timestamp ...
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
My fear: Jenkins becomes a central server for things which are not related to testing (deploy). I think other tools like SaltStack or Ansible should be used for this. Up to now we use Fabric (simple layer above ssh). Maybe we should switch to a central management system before starting with continuous deployment.
|
How do I make a standalone application out of a Docker container?
| 19,951,847
| 1
| 5
| 3,690
| 0
|
python,docker
|
In order to use a Docker container, you will always need the basics - Docker installed on a Linux OS (that supports LXC and any other required filesystem type). In a production environment, you'd be running a recent, native install of Linux, and the initial install of Docker would be easy, and a one-time event (in a cloud environment, you'd probably not upgrade a working machine, but instead spin up a new one, with the latest pre-tested Docker version, and the equally upgraded & tested new containers).
On a MacOS, or Windows development machine, you need a virtual machine to host the Linux OS. There's no way around that.
| 0
| 1
| 0
| 0
|
2013-11-13T09:17:00.000
| 3
| 0.066568
| false
| 19,949,809
| 0
| 0
| 1
| 3
|
I'd like to create a Python Flask application that can run on any platform. I've put it in a Docker container. But unless I've misunderstood something, the host machine still needs Docker installed to launch the container, which in turn requires Vagrant and an Ubuntu VM (at least on Mac). Am I missing something? What is the correct way to use a container as a standalone application?
|
How do I make a standalone application out of a Docker container?
| 20,142,281
| 0
| 5
| 3,690
| 0
|
python,docker
|
The simple answer is: you can't.
The long answer is: Docker is not intended to be used to make cross-platform standalone applications (but Java is, for example). Docker instead focuses on having a lightweight container that acts like a virtual machine but basically isn't. It's just a box inside a Linux(!) system that behaves like a virtual machine, so that services installed into it can be cleanly separated from each other. A proper use case for Docker would be installing a web application with, say, a specific version of Apache and PHP in it, to have a guaranteed, well-defined environment.
| 0
| 1
| 0
| 0
|
2013-11-13T09:17:00.000
| 3
| 0
| false
| 19,949,809
| 0
| 0
| 1
| 3
|
I'd like to create a Python Flask application that can run on any platform. I've put it in a Docker container. But unless I've misunderstood something, the host machine still needs Docker installed to launch the container, which in turn requires Vagrant and an Ubuntu VM (at least on Mac). Am I missing something? What is the correct way to use a container as a standalone application?
|
How do I make a standalone application out of a Docker container?
| 20,142,478
| 0
| 5
| 3,690
| 0
|
python,docker
|
A Linux VM is a dependency if you are on Windows or Mac. Vagrant is not though. It is mentioned only because it's probably the easiest way to get a VM up and running.
| 0
| 1
| 0
| 0
|
2013-11-13T09:17:00.000
| 3
| 0
| false
| 19,949,809
| 0
| 0
| 1
| 3
|
I'd like to create a Python Flask application that can run on any platform. I've put it in a Docker container. But unless I've misunderstood something, the host machine still needs Docker installed to launch the container, which in turn requires Vagrant and an Ubuntu VM (at least on Mac). Am I missing something? What is the correct way to use a container as a standalone application?
|
Python error while configuring mesos on centos
| 20,348,972
| 3
| 1
| 2,608
| 0
|
python,linux,centos,mesos
|
You need the development version of Python (headers etc.).
Try:
yum install python-devel
| 0
| 1
| 0
| 0
|
2013-11-14T04:41:00.000
| 1
| 0.53705
| false
| 19,969,687
| 0
| 0
| 0
| 1
|
I am trying to install Mesos on CentOS. I downloaded it and ran the ./configure command, but it could not complete because of the installed Python version: Mesos requires the development version of Python 2.6. I tried to upgrade Python but it only upgrades to 2.4. I then manually downloaded and installed Python 2.6, which is located under /usr/local/bin/ (the old one is under /usr/bin). When I run the python command in a terminal it displays Python 2.6, but when I configure Mesos it again gives the same error.
configure: error: in `/root/mesos-0.14.1':
configure: error:
Could not link test program to Python. Maybe the main Python library has been
installed in some non-standard library path. If so, pass it to configure,
via the LDFLAGS environment variable.
Example: ./configure LDFLAGS="-L/usr/non-standard-path/python/lib"
============================================================================
ERROR!
You probably have to install the development version of the Python package
for your distribution. The exact name of this package varies among them.
============================================================================
I then created a symbolic link from /usr/local/bin/python to /usr/bin/python, but now the 'yum' command doesn't work, and configuring Mesos again gives the same error. I also tried ./configure LDFLAGS="-L/usr/local/lib/python/lib". What should I do to install a Mesos cluster on CentOS? What is the solution?
|
Git: Master-thesis subprojects as submodules or stand-alone repositories
| 21,260,659
| 0
| 0
| 283
| 0
|
python,git,bash,git-submodules,subproject
|
I recommend a single master repository for this problem. You mentioned that the output files of certain programs are used as input to the others. These programs may not have run-time dependencies on each other, but they do have dependencies. It sounds like they will not work without each other being present to create the data. Especially if file location (e.g. relative path) is important, then a single repository will help you keep them better organized.
| 0
| 1
| 0
| 0
|
2013-11-14T16:22:00.000
| 2
| 0
| false
| 19,982,856
| 1
| 0
| 0
| 2
|
I just started using git to get the code I write for my Master's thesis more organized. I have divided the tasks into 4 sub-folders, each one containing data and programs that work with that data. The 4 sub-projects do not necessarily need to be connected; none of the programs use functions from the other sub-projects. However, the output files produced by the programs in one sub-folder are used by programs in another sub-folder.
In addition some programs are written in Bash and some in Python.
I use git in combination with bitbucket. I am really new to the whole concept, so I wonder if I should create one "Master-thesis" repository or rather one repository for each of the (until now) 4 sub-projects. Thank you for your help!
|
Git: Master-thesis subprojects as submodules or stand-alone repositories
| 19,982,974
| 1
| 0
| 283
| 0
|
python,git,bash,git-submodules,subproject
|
Well, as devnull says, answers would be highly opinion based, but given that I disagree that that's a bad thing, I'll go ahead and answer if I can type before someone closes the question. :)
I'm always inclined to treat git repositories as separate units of work or projects. If I'm likely to work on various parts of something as a single project or toward a common goal (e.g., Master's thesis), my tendency would be to treat it as a single repository.
And by the way, since the .git repository will be in the root of that single repository, if you need to spin off a piece of your work later and track it separately, you can always create a new repository if needed at that point. Meantime it seems "keep it simple" would mean one repo.
| 0
| 1
| 0
| 0
|
2013-11-14T16:22:00.000
| 2
| 0.099668
| false
| 19,982,856
| 1
| 0
| 0
| 2
|
I just started using git to get my the code I write for my Master-thesis more organized. I have divided the tasks into 4 sub-folders, each one containing data and programs that work with that data. The 4 sub-projects do not necessarily need to be connected, none off the programs contained use functions from the other sub-projects. However the output-files produced by the programs in a certain sub-folder are used by programs of another sub-folder.
In addition some programs are written in Bash and some in Python.
I use git in combination with bitbucket. I am really new to the whole concept, so I wonder if I should create one "Master-thesis" repository or rather one repository for each of the (until now) 4 sub-projects. Thank you for your help!
|
What happens to my downloads when I delete the virtual environment they're in?
| 19,983,121
| 0
| 0
| 37
| 0
|
python,download,virtualenv
|
virtualenv doesn't cache the downloads anywhere. So it downloads the sources once, compiles and installs them and then deletes the download. If you delete the env, all installed modules are gone as well.
| 0
| 1
| 0
| 0
|
2013-11-14T16:25:00.000
| 1
| 1.2
| true
| 19,982,928
| 1
| 0
| 0
| 1
|
I set up a virtual environment on my mac and downloaded some Python libraries.
What happens to those libraries after I delete my virtual environment?
Where are my downloads stored when I download them in my virtualenv?
Thank you
|
What does the $ mean when running commands?
| 19,986,337
| 20
| 15
| 50,638
| 0
|
python,command,installation,dollar-sign
|
As of now, Python does not implement $ in its syntax. So, it has nothing to do with Python.
Instead, what you are seeing is the terminal prompt of a Unix-based system (Mac, Linux, etc.)
| 0
| 1
| 0
| 0
|
2013-11-14T19:13:00.000
| 5
| 1.2
| true
| 19,986,306
| 1
| 0
| 0
| 2
|
I've been learning Python, and I keep running into the $ character in online documentation. Usually it goes something like this:
$ python ez_setup.py (Yeah, I've been trying to install setup tools)
I'm fairly certain that this command isn't for the python IDE or console, but I've tried windows cmd and it doesn't work. Any help?
|
What does the $ mean when running commands?
| 19,986,332
| 5
| 15
| 50,638
| 0
|
python,command,installation,dollar-sign
|
The $ is the command prompt. It is used to signify that python ez_setup.py should be run on a command line and not on a python/perl/ruby shell
You might also see % python ez_setup.py, which also means the same thing
| 0
| 1
| 0
| 0
|
2013-11-14T19:13:00.000
| 5
| 0.197375
| false
| 19,986,306
| 1
| 0
| 0
| 2
|
I've been learning Python, and I keep running into the $ character in online documentation. Usually it goes something like this:
$ python ez_setup.py (Yeah, I've been trying to install setup tools)
I'm fairly certain that this command isn't for the python IDE or console, but I've tried windows cmd and it doesn't work. Any help?
|
install new version of python
| 19,987,194
| 0
| 0
| 168
| 0
|
python,python-3.x
|
Instead of typing python, I typed python3 in the terminal, and that was the solution.
| 0
| 1
| 0
| 0
|
2013-11-14T19:45:00.000
| 2
| 0
| false
| 19,986,895
| 1
| 0
| 0
| 1
|
I have Mac OS X. Recently I installed Python 3.2; before that I had version 2.6.1. But when I type "python" in the terminal it prints Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49). What does this mean? How can I use the Python 3.2 that I installed this week?
|
Google AppEngine: Setting up user roles and permissions
| 20,210,524
| 0
| 1
| 874
| 0
|
python,google-app-engine,permissions,roles,app-engine-ndb
|
You must manage user_profile yourself. In your user_profile you can store a user id, such as an email address or a Google user id. Add a roles array to this entity where you store all of the user's roles, and manage access with decorators.
For example, users who are employers will have "EMPLOYER" in their roles, and you restrict access to the job-creation handler with an @isEmployer decorator.
With this solution, you can assign many roles to a user, like "ADMIN", in the future.
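A framework-agnostic sketch of such a decorator (all names here, such as require_role, get_user_profile, and PermissionDenied, are made up for illustration):

```python
import functools

class PermissionDenied(Exception):
    """Raised when the current user lacks the required role."""

def require_role(role):
    """Restrict a handler method to users whose profile contains `role`."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            profile = self.get_user_profile()  # your own profile lookup
            if role not in profile.get("roles", []):
                raise PermissionDenied(role)
            return method(self, *args, **kwargs)
        return wrapper
    return decorator

class JobHandler:
    """Toy handler; in App Engine this would be a webapp2.RequestHandler."""
    def __init__(self, profile):
        self._profile = profile

    def get_user_profile(self):
        return self._profile

    @require_role("EMPLOYER")
    def post(self):
        return "job created"
```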
| 0
| 1
| 0
| 0
|
2013-11-14T21:21:00.000
| 3
| 0
| false
| 19,988,654
| 0
| 0
| 1
| 2
|
I am undergoing Udacity's Web Development course which uses Google AppEngine and Python.
I would like to set up specific user roles and their allotted permissions. For example, I may have two user roles, Employer and SkilledPerson, and assign their permissions as follows:
Only Employers may create Job entities.
Only SkilledPerson may create Resume and JobApplication entities.
How do I do this?
How do I define these user roles?
How do I assign a group of permissions to specific roles?
How do I allow users to sign up as a particular role (Employer or SkilledPerson)?
|
Google AppEngine: Setting up user roles and permissions
| 20,207,381
| 0
| 1
| 874
| 0
|
python,google-app-engine,permissions,roles,app-engine-ndb
|
I'd create a user_profile table which stores their Google user id, and two Boolean fields for is_employer and is_skilled_person, because there's always potential for someone to be both of these roles on your site. (Maybe I'm an employer posting a job but also looking for a job as well)
If you perceive having multiple roles and a user can only be one role, I'd make it a string field holding the role name like "employer", "admin", "job seeker" and so on.
| 0
| 1
| 0
| 0
|
2013-11-14T21:21:00.000
| 3
| 0
| false
| 19,988,654
| 0
| 0
| 1
| 2
|
I am undergoing Udacity's Web Development course which uses Google AppEngine and Python.
I would like to set up specific user roles and their allotted permissions. For example, I may have two user roles, Employer and SkilledPerson, and assign their permissions as follows:
Only Employers may create Job entities.
Only SkilledPerson may create Resume and JobApplication entities.
How do I do this?
How do I define these user roles?
How do I assign a group of permissions to specific roles?
How do I allow users to sign up as a particular role (Employer or SkilledPerson)?
|
Can one source code be deployed to Openshift, Heroku and Google App Engine at once?
| 20,003,022
| 3
| 1
| 244
| 0
|
python,google-app-engine,heroku,openshift
|
I work on Openshift and at this time I'm not aware of anything that will deploy your code to GAE and Openshift at the same time.
You might be able to write your own script for it.
| 0
| 1
| 0
| 0
|
2013-11-15T10:31:00.000
| 2
| 1.2
| true
| 19,998,958
| 0
| 0
| 1
| 2
|
As the subject says: is it possible, with just one source tree, to deploy our code to both OpenShift and Google App Engine? Heroku is not necessary in my case.
My application is using Python Flask + PostgreSQL 9.1. I love the easiness in Openshift when I configure my technology stack, but is the case will be same with GAE?
Thanks!
|
Can one source code be deployed to Openshift, Heroku and Google App Engine at once?
| 20,003,075
| 1
| 1
| 244
| 0
|
python,google-app-engine,heroku,openshift
|
PostgreSQL is not available on GAE, so this code will definitely not run there.
| 0
| 1
| 0
| 0
|
2013-11-15T10:31:00.000
| 2
| 0.099668
| false
| 19,998,958
| 0
| 0
| 1
| 2
|
As the subject says: is it possible, with just one source tree, to deploy our code to both OpenShift and Google App Engine? Heroku is not necessary in my case.
My application is using Python Flask + PostgreSQL 9.1. I love the easiness in Openshift when I configure my technology stack, but is the case will be same with GAE?
Thanks!
|
Scaling APScheduler
| 25,422,173
| 2
| 5
| 901
| 0
|
python-2.7,distributed,scheduler,apscheduler
|
This looks like an old question, but I'll answer it anyway. No, it's not yet possible to run APScheduler in that manner, due to the lack of a synchronization/locking mechanism to that end.
| 0
| 1
| 0
| 0
|
2013-11-15T12:52:00.000
| 1
| 0.379949
| false
| 20,001,553
| 0
| 0
| 0
| 1
|
I want to run multiple instances of APScheduler pointing to one common persistent job DB. Is it possible to run it that way? I mean that the jobs in the DB should be shared among the scheduler instances, with only one instance executing a given scheduled job at a time.
|
Why use Pythons 'virtualenv' on Linux when one has 'chroot' (and union/overlay filesystems)?
| 20,001,653
| 6
| 4
| 2,878
| 0
|
python,linux,virtualenv,chroot
|
bootstrapping a directory tree that can be passed as root
That's not what virtualenv does, except (to some degree) for Python packages. It provides a place where these can be installed without replacing the rest of the filesystem. It also works without root privileges and it's portable as it needs no kernel support, unlike chroot, which (I presume) won't work on Windows.
Can't one install packages/modules locally in whatever application directory
Yes, but virtualenv does one more thing, which is that it disables (by default at least) the system's Python package directories. That means you can test whether your package correctly installs all of its dependencies (you might have forgotten to list one because it's already installed on your system) and it allows installing different versions in case you need either newer or older versions. The ability to install older versions should not be overlooked because sometimes new versions of packages introduce bugs.
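To make the difference concrete, here is a small sketch using the standard-library venv module (Python 3's built-in equivalent of virtualenv): it builds an isolated environment in a scratch directory, without root privileges and without touching anything outside that directory, and shows that the environment's interpreter reports its own prefix rather than the system one.

```python
import os
import subprocess
import sys
import tempfile
import venv

# Build an isolated environment in a scratch directory; no kernel support
# or root privileges needed, unlike chroot.
env_dir = os.path.join(tempfile.mkdtemp(), "demo-env")
venv.create(env_dir, with_pip=False)

bindir = "Scripts" if os.name == "nt" else "bin"
env_python = os.path.join(env_dir, bindir,
                          "python.exe" if os.name == "nt" else "python")

# The env's interpreter resolves its own prefix, separate from the system install.
prefix = subprocess.check_output(
    [env_python, "-c", "import sys; print(sys.prefix)"]
).decode().strip()
print(prefix.endswith("demo-env"))
```

Packages installed with this environment's pip land under `env_dir`, which is exactly the isolation virtualenv provides.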
| 0
| 1
| 0
| 0
|
2013-11-15T12:54:00.000
| 1
| 1
| false
| 20,001,606
| 1
| 0
| 0
| 1
|
First of all let me state that I am a proponent of generic software (in general ;-). I am no expert on Python, but it seems that the 'virtualenv' utility solves pretty much the same problem 'chroot' can help to solve - bootstrapping a directory tree that can be passed as root, thus effectively protecting the real directory tree, if needed.
Since I am no expert in Python, as already mentioned, I wonder - what problem can virtualenv solve that chroot cannot? I mean, can't I just set up a nice fake root tree (possibly using union mounting), chroot into it, pip install a package I want in my new environment, and then play around within the bounds of my new environment, running Python scripts and whatnot?
Am I missing something here?
Update:
Can't one install packages/modules locally in whatever application directory, I mean, without root privileges and subsequently without overwriting or adding files to /usr/lib or /usr/local/lib? It appears that this is what virtualenv does, however I think it has to symlink or otherwise provide a python interpreter for each environment one creates, does it not?
|
IDLE can't find cv2, CLI Python imports it correctly
| 20,009,587
| 0
| 0
| 1,242
| 0
|
python,opencv,command-line-interface,python-idle
|
When you launch GUI applications on OS X (.app bundles), no shell is involved and shell profile scripts are not used. IDLE.app is no exception. So any environment variables defined there are not available to the GUI app. The best solution is to properly install your third-party packages into the standard locations included in Python's module search path, viewable as sys.path, and not use PYTHONPATH at all. Another option in this case is to launch IDLE from a terminal session shell, e.g. /usr/local/bin/idle2.7.
| 0
| 1
| 0
| 0
|
2013-11-15T18:34:00.000
| 1
| 1.2
| true
| 20,008,181
| 0
| 0
| 0
| 1
|
I am able to import the OpenCV python bindings (cv2) fine when running Python from the command line, but I receive the standard 'no module named cv2' from IDLE when I import there.
I checked the Path Browser in IDLE, and noticed that it doesn't match my .bashrc PYTHONPATH.
That said, I copied the cv2 binding files into one of the directories specified in the Path Browser, and IDLE still can't find it.
Two questions:
1) Has anyone run into this circumstance?
2) Does IDLE have a PYTHONPATH different from the rest of the system?
|
Making pig embedded with python script and pig cassandra integration to work with oozie
| 22,040,536
| 0
| 1
| 419
| 0
|
python,hadoop,cassandra,apache-pig,oozie
|
This is solved. Solutions:
1) Put the python file in the oozie workflow path and then reference it from there.
2) Added cassandra jar files in the lib folder in the oozie's HDFS path.
| 0
| 1
| 0
| 0
|
2013-11-17T20:51:00.000
| 1
| 1.2
| true
| 20,036,040
| 0
| 0
| 0
| 1
|
I am new to oozie and I have few problems.
1) I am trying to embed a pig action in oozie which has a python script import. I've placed the jython.jar file in the lib path and have an import in the pig script which will take the python UDFs. I can't seem to get this working. The .py file is not getting picked up. How do I go about this?
2) I have a pig cassandra integration where I use CQL to get the data from cassandra using pig and do some basic transformation. In the CLI I am able to get this working. But on the oozie front I am not. I don't seem to find the solution (configuration and otherwise) to do this in oozie. Can anyone please help me with this? Thanks in advance.
|
virtualenv can not work in centos and ubuntu
| 20,043,992
| 1
| 0
| 804
| 0
|
python,virtualenv
|
What did you expect, exactly? Virtualenv creates a sandboxed Python environment, with binaries etc. for the platform on which it's created - it doesn't automagically make the binaries platform-independent...
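What does transfer between platforms is the dependency list, not the environment itself. A minimal sketch of capturing it with pip freeze (the recreate step on the other machine is shown as a comment):

```python
import os
import subprocess
import sys
import tempfile

# The env is built from platform-specific binaries, so don't copy it across
# machines.  Capture the installed-package list instead ...
frozen = subprocess.check_output([sys.executable, "-m", "pip", "freeze"]).decode()

req_path = os.path.join(tempfile.mkdtemp(), "requirements.txt")
with open(req_path, "w") as fh:
    fh.write(frozen)

# ... then on the other platform, build a fresh env from that list:
#   virtualenv tenv && . tenv/bin/activate && pip install -r requirements.txt
print(os.path.exists(req_path))
```

Each platform gets its own correctly linked binaries while the package versions stay identical.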
| 0
| 1
| 0
| 0
|
2013-11-18T09:07:00.000
| 1
| 0.197375
| false
| 20,043,841
| 0
| 0
| 0
| 1
|
Here is the example:
centos: (build a virtualenv)
$ virtualenv tenv
ubuntu: (activate it)
$ . tenv/bin/activate
$ python
Could not find platform independent libraries
Could not find platform dependent libraries
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
ImportError: No module named site
In turn:
ubuntu:
$ virtualenv ttenv
centos:
$ . ttenv/bin/activate
$ python
ttenv/bin/python: /usr/lib64/libcrypto.so.1.0.0: no version information available (required by ttenv/bin/python)
ttenv/bin/python: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ttenv/bin/python)
ttenv/bin/python: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ttenv/bin/python)
ttenv/bin/python: /usr/lib64/libssl.so.1.0.0: no version information available (required by ttenv/bin/python)
|
Nginx(Django) ImportError: cannot import name celeryd
| 20,312,840
| 1
| 0
| 700
| 0
|
python,django,nginx,celery,celerybeat
|
It turned out I was using a different version of Django on my remote server.
In Celery 3.1, there is no command named celeryd.
| 0
| 1
| 0
| 0
|
2013-11-18T19:45:00.000
| 1
| 1.2
| true
| 20,056,399
| 0
| 0
| 1
| 1
|
I tested my project on my local machine, and it worked fine. But after uploading it to a remote server (CentOS), I cannot execute celerybeat.
Here is my command.
python manage.py celeryd --events --loglevel=INFO -c 5 --settings=[settings-directory].production
This command works on the local machine (with --settings=[settings-directory].local), but on the remote server, ImportError: cannot import name celeryd occurred.
The Celery settings are in base.py; local.py and production.py import that file. In production.py, there are just DEBUG, static, and database settings.
I can import djcelery and celery in shell of the remote machine.
How could I solve this?
--
I think this is a version problem... I'm reading about Celery 3.1.
|
What's the advantage of running multiple threads per UWSGI process?
| 20,062,339
| 6
| 1
| 1,418
| 0
|
python,multithreading,deployment,process,uwsgi
|
Python's native multithreading is affected by GIL limitations. Simply put, only one Python thread at a time is physically executed. An exception is blocking IO calls (e.g. a DB query), which let other Python threads take over; this may increase the performance of IO-bound operations.
So the real performance gain would only be possible if your application is mostly IO-bound. However, in this case you should consider making the app asynchronous, which uWSGI also supports.
Otherwise you should keep your app single-threaded and use multiprocess uWSGI to scale up.
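A quick sketch of the IO-bound case described above: time.sleep stands in for a blocking database call, during which the GIL is released, so the four waits overlap instead of running back to back.

```python
import threading
import time

def blocking_io():
    # Stands in for a blocking DB query; sleep, like a socket wait,
    # releases the GIL so other threads keep running.
    time.sleep(0.2)

start = time.monotonic()
workers = [threading.Thread(target=blocking_io) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - start
print(elapsed < 0.6)  # four 0.2s waits overlap: roughly 0.2s total, not 0.8s
```

If blocking_io were a CPU-bound loop instead, the threads would serialize on the GIL and the total time would be close to the sum of the parts, which is why multiprocess uWSGI is the way to scale CPU-bound apps.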
| 0
| 1
| 0
| 1
|
2013-11-18T21:38:00.000
| 1
| 1.2
| true
| 20,058,464
| 0
| 0
| 0
| 1
|
If I'm performing blocking operations like querying a database, then what is the advantage? How does this add extra worthwhile capacity?
|
Google App Engine. How to create constant in application scope?
| 20,065,052
| 2
| 1
| 154
| 0
|
python,google-app-engine
|
You can define the dict in a module, then import it wherever you wish to refer to it, or you could load it from the datastore and set the value in the module. You would do this during a warmup request.
Defining it in a module means that altering the contents will require re-deploying the app.
Defining it in the datastore means instances will reload any new definition on startup.
You could also set up a handler which could trigger a refresh if reading from the datastore.
Defining it directly in the datastore means its pickled state needs to be less than 1MB (compressed) if you use a BlobProperty with compressed=True and you're using ndb.
Other variations similar to module definition would be to load it from a YAML file, etc. You could define the dict in app.yaml as an environment variable.
There are many options; without knowing the specifics of your use cases it's hard to recommend a particular strategy.
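A minimal sketch of the module-definition approach with a lazy load and a refresh hook; the loader below is a hypothetical stand-in for fetching the big file or datastore entity, and the function names are illustrative:

```python
import threading

_CONFIG = None
_LOCK = threading.Lock()

def _load_config():
    # Hypothetical stand-in for fetching the big file / datastore entity.
    return {"feature_flags": {"new_ui": True}, "rates": {"EUR": 1.08}}

def get_config():
    """Return the process-wide dict, loading it at most once per instance."""
    global _CONFIG
    if _CONFIG is None:
        with _LOCK:
            if _CONFIG is None:      # double-checked so the load happens once
                _CONFIG = _load_config()
    return _CONFIG

def refresh_config():
    """Handler-triggered refresh: the next get_config() call reloads."""
    global _CONFIG
    with _LOCK:
        _CONFIG = None
```

Importing the module anywhere and calling get_config() gives every view the same in-memory dict; wiring refresh_config() to an admin handler covers the re-execution requirement.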
| 0
| 1
| 0
| 0
|
2013-11-19T06:43:00.000
| 1
| 0.379949
| false
| 20,064,915
| 1
| 0
| 1
| 1
|
I want to create a global-scope constant dict that would be accessed by multiple views.
For now I see this scenario after deploy:
Fetching a big file, creating a dict, and holding that dict in memory. This process can be re-executed by the administrator.
|
How to check from Linux in Python for administrative access to a Windows machine
| 20,067,048
| 0
| 0
| 111
| 0
|
python,windows,remote-access,administrator
|
I am no sysadmin, but just trying to mount the C drive ( \\hostname\C$ ) via Samba/SMB should work. This assumes that remote sharing and filesystem access are enabled on that box and a firewall rule is set up to allow remote connections.
| 0
| 1
| 0
| 0
|
2013-11-19T07:59:00.000
| 1
| 1.2
| true
| 20,066,131
| 0
| 0
| 0
| 1
|
I have a network of end-user machines (Windows, Linux, MacOS) and I want to check whether the credential I have allow me to access the machines as administrator (I am checking the "here are the admin credentials to the machines" vs. reality).
I wrote a Python script (it runs on Linux) which
runs nmap -O on the network to gather the hosts
tries to ssh with paramiko to check the Linux credentials.
I would like to do a similar check for the Windows machines. What would be a practical way, in Python, to do so?
I have a few sets of credentials (AD or local to a machine) so I would need a somehow universal method. I was thinking about something like a call to _winreg.ConnectRegistry but it does not import on my Linux (it does on a Windows box).
|
file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386
| 27,936,871
| 0
| 1
| 3,158
| 0
|
python,ios,xcode,macos,pycrypto
|
Use libdns_services instead; libdnsinfo.dylib is no longer supported by the latest SDK.
| 0
| 1
| 0
| 0
|
2013-11-19T17:27:00.000
| 3
| 0
| false
| 20,078,036
| 0
| 0
| 0
| 1
|
I am on MAC 10.9 with XCode 4.6.3 and have command line tools installed
I am trying to compile pycrypto-2.1.0 using
python setup.py build and getting following error
-----------------------------------------------------------------------------
ld: warning: ignoring file build/temp.macosx-10.6-intel-2.7/src/MD2.o, file was built for unsupported file format ( 0xcf 0xfa 0xed 0xfe 0x 7 0x 0 0x 0 0x 1 0x 3 0x 0 0x 0 0x 0 0x 1 0x 0 0x 0 0x 0 ) which is not the architecture being linked (i386): build/temp.macosx-10.6-intel-2.7/src/MD2.o
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386
collect2: ld returned 1 exit status
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture x86_64
collect2: ld returned 1 exit status
------------------------------------------------------------------------------------
locate is giving
$ locate libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/lib/system/libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/lib/system/libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk/usr/lib/system/libdnsinfo.dylib
These path are also added to PATH.
Following is command and error
$ python setup.py build
running build
running build_py
running build_ext
warning: GMP library not found; Not building Crypto.PublicKey._fastmath.
building 'Crypto.Hash.MD2' extension
gcc-4.2 -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/c++/4.2.1/ -O3 -fomit-frame-pointer -Isrc/ -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/MD2.c -o build/temp.macosx-10.6-intel-2.7/src/MD2.o
gcc-4.2 -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/lib -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/c++/4.2.1/ build/temp.macosx-10.6-intel-2.7/src/MD2.o -o build/lib.macosx-10.6-intel-2.7/Crypto/Hash/MD2.so
ld: warning: ignoring file build/temp.macosx-10.6-intel-2.7/src/MD2.o, file was built for unsupported file format ( 0xcf 0xfa 0xed 0xfe 0x 7 0x 0 0x 0 0x 1 0x 3 0x 0 0x 0 0x 0 0x 1 0x 0 0x 0 0x 0 ) which is not the architecture being linked (i386): build/temp.macosx-10.6-intel-2.7/src/MD2.o
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386
collect2: ld returned 1 exit status
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture x86_64
collect2: ld returned 1 exit status
Any idea to fix this?
|
PyDev Eclipse Plugin fails to update in Eclipse Update Manager
| 41,791,350
| 1
| 3
| 1,251
| 0
|
python,eclipse,pydev
|
It seems that the best resolution for this is to update from Eclipse 3.7 to 4.3+.
| 0
| 1
| 0
| 0
|
2013-11-19T18:06:00.000
| 1
| 1.2
| true
| 20,078,776
| 0
| 0
| 1
| 1
|
I'm trying to update some software in Eclipse, and mostly haven't had problems, but when I try to update PyDev (Python plugin) I get this error:
An error occurred while collecting items to be installed
session context was:(profile=epp.package.java, phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=, action=).
Problems downloading artifact: osgi.bundle,com.python.pydev,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile2219600778088128210.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile2219600778088128210.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.analysis,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6795154829597372736.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6795154829597372736.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.codecompletion,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile855072635271316145.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile855072635271316145.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.debug,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4688521627100670190.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4688521627100670190.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.fastparser,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1084399815407097736.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1084399815407097736.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.refactoring,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4184776883512095240.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4184776883512095240.jar
Problems downloading artifact: osgi.bundle,org.python.pydev,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4524222642627962811.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4524222642627962811.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.ast,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile3249163288841740294.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile3249163288841740294.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.core,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1814921458326062966.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1814921458326062966.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.customizations,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4652077908204425024.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4652077908204425024.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.debug,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile5865734778550017815.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile5865734778550017815.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.django,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1400608644382694448.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1400608644382694448.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.help,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile5475958427511010644.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile5475958427511010644.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.jython,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile269530960804801404.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile269530960804801404.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.parser,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6988087748918334886.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6988087748918334886.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.refactoring,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1524645906700502816.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1524645906700502816.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_core,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile7684330420892093099.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile7684330420892093099.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_interactive_console,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6948600865186203811.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6948600865186203811.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_ui,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile2509877364480980768.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile2509877364480980768.jar
Problems downloading artifact: org.eclipse.update.feature,org.python.pydev.feature,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile7424055901779492006.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile7424055901779492006.jar
I run Eclipse as an administrator and I don't understand what could cause this issue.
Regards,
|
How to activate an Anaconda environment
| 35,214,764
| 5
| 189
| 622,482
| 0
|
python,virtualenv,anaconda,conda
|
Below is how it worked for me
C:\Windows\system32>set CONDA_ENVS_PATH=d:\your\location
C:\Windows\system32>conda info
Shows new environment path
C:\Windows\system32>conda create -n YourNewEnvironment --clone=root
Clones default root environment
C:\Windows\system32>activate YourNewEnvironment
Deactivating environment "d:\YourDefaultAnaconda3"...
Activating environment "d:\your\location\YourNewEnvironment"...
[YourNewEnvironment] C:\Windows\system32>conda info -e
# conda environments:
#
YourNewEnvironment  *  d:\your\location\YourNewEnvironment
root                   d:\YourDefaultAnaconda3
| 0
| 1
| 0
| 0
|
2013-11-19T20:25:00.000
| 12
| 0.083141
| false
| 20,081,338
| 1
| 0
| 0
| 3
|
I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong?
|
How to activate an Anaconda environment
| 58,099,552
| 2
| 189
| 622,482
| 0
|
python,virtualenv,anaconda,conda
|
For me, using Anaconda Prompt instead of cmd or PowerShell is the key.
In Anaconda Prompt, all I need to do is activate XXX
| 0
| 1
| 0
| 0
|
2013-11-19T20:25:00.000
| 12
| 0.033321
| false
| 20,081,338
| 1
| 0
| 0
| 3
|
I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong?
|
How to activate an Anaconda environment
| 62,778,561
| -1
| 189
| 622,482
| 0
|
python,virtualenv,anaconda,conda
|
Windows:
conda activate environment_name
Mac: conda activate environment_name
| 0
| 1
| 0
| 0
|
2013-11-19T20:25:00.000
| 12
| -0.016665
| false
| 20,081,338
| 1
| 0
| 0
| 3
|
I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong?
|
How to install pip for Python 3 on Mac OS X?
| 45,603,115
| 12
| 140
| 296,833
| 0
|
python,macos,python-3.x,pip,python-3.3
|
brew install python3
Create an alias in your shell profile,
e.g. alias pip3="python3 -m pip" in my .zshrc
➜ ~ pip3 --version
pip 9.0.1 from /usr/local/lib/python3.6/site-packages (python 3.6)
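The alias works because pip is an ordinary importable package; python3 -m pip runs the same code as the pip3 console script, pinned to exactly that interpreter. A quick sketch verifying the equivalence from within Python:

```python
import subprocess
import sys

# `pip3 install pyserial` and `python3 -m pip install pyserial` run the same
# code; -m guarantees pip operates on *this* interpreter's site-packages.
out = subprocess.check_output(
    [sys.executable, "-m", "pip", "--version"]
).decode().strip()
print(out)  # e.g. "pip <version> from <site-packages path> (python <x.y>)"
```

This is also why -m is the safer habit on machines with several Pythons installed: there is never any ambiguity about which interpreter's packages you are modifying.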
| 0
| 1
| 0
| 0
|
2013-11-19T21:57:00.000
| 16
| 1
| false
| 20,082,935
| 1
| 0
| 0
| 3
|
OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.
|
How to install pip for Python 3 on Mac OS X?
| 52,130,224
| 4
| 140
| 296,833
| 0
|
python,macos,python-3.x,pip,python-3.3
|
pip is installed automatically with Python 3 when using brew:
brew install python3
pip3 --version
| 0
| 1
| 0
| 0
|
2013-11-19T21:57:00.000
| 16
| 0.049958
| false
| 20,082,935
| 1
| 0
| 0
| 3
|
OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.
|
How to install pip for Python 3 on Mac OS X?
| 55,175,708
| 0
| 140
| 296,833
| 0
|
python,macos,python-3.x,pip,python-3.3
|
For a brand-new Mac, you need to follow the steps below:
Make sure you have installed Xcode
sudo easy_install pip
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew doctor
brew install python3
And you are done; just type python3 in the terminal and you will see Python 3 installed.
| 0
| 1
| 0
| 0
|
2013-11-19T21:57:00.000
| 16
| 0
| false
| 20,082,935
| 1
| 0
| 0
| 3
|
OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.
|
How to install python2.6-dev on Debian Testing
| 20,102,657
| -1
| 0
| 1,188
| 0
|
python,debian,python-2.6,apt
|
I know this might seem extreme, but if you need 2.6 that badly, try running Debian stable in a virtual machine like VirtualBox and installing 2.6 through that.
| 0
| 1
| 0
| 1
|
2013-11-20T16:41:00.000
| 2
| -0.099668
| false
| 20,101,721
| 1
| 0
| 0
| 1
|
I'm using Linux Mint Debian Edition (eq. Debian Testing). There is no python2.6-dev package, which I'd need to install pycrypto for Python 2.6 (since it has a compilation step).
Is there any way to get this package or an equivalent on my system? I already have installed Python 2.6 in my system and I can use it without a hitch.
(The python2.7-dev package is there just fine. But I'm glued to 2.6, so it doesn't suit my needs.)
|
Using py2exe in a virtualenv
| 20,777,298
| 11
| 9
| 4,266
| 0
|
python,py2exe
|
You can do that this way:
Activate your virtualenv and then ...
easy_install py2exe-0.6.9.win32-py2.7.exe
| 0
| 1
| 0
| 0
|
2013-11-20T18:47:00.000
| 2
| 1
| false
| 20,104,368
| 1
| 0
| 0
| 2
|
I have a Python script I developed within a virtualenv on Windows (Python 2.7).
I would now like to compile it into a single EXE using Py2exe.
I've read and read the docs and stackoverflow, and yet I can't find a simple answer: How do I do this? I tried just installing py2exe (via the downloadable installer), but of course that doesn't work because it uses the system-level python, which doesn't have the dependencies for my script installed. It needs to use the virtualenv - but there doesn't seem to be such an option.
I did manage to get bbfreeze to work, but it outputs a dist folder crammed with files, and I just want a simple EXE file (one file) for my simple script, and I understand Py2Exe can do this.
tl;dr: How do I run Py2Exe within the context of a virtualenv so it correctly imports dependencies?
|
Using py2exe in a virtualenv
| 20,196,997
| 1
| 9
| 4,266
| 0
|
python,py2exe
|
Installing py2exe into your virtual env should be straightforward. You'll need Visual Studio 2008; the Express version should work. Launch a 2008 Command Prompt and activate your virtual env. Change into the directory that contains the py2exe source and run python setup.py install. You can verify that py2exe is in the correct environment by attempting to import it from an interactive shell. I tested this myself earlier today (I had to install virtualenv). It works exactly as expected.
| 0
| 1
| 0
| 0
|
2013-11-20T18:47:00.000
| 2
| 1.2
| true
| 20,104,368
| 1
| 0
| 0
| 2
|
I have a Python script I developed within a virtualenv on Windows (Python 2.7).
I would now like to compile it into a single EXE using Py2exe.
I've read and read the docs and stackoverflow, and yet I can't find a simple answer: How do I do this? I tried just installing py2exe (via the downloadable installer), but of course that doesn't work because it uses the system-level python, which doesn't have the dependencies for my script installed. It needs to use the virtualenv - but there doesn't seem to be such an option.
I did manage to get bbfreeze to work, but it outputs a dist folder crammed with files, and I just want a simple EXE file (one file) for my simple script, and I understand Py2Exe can do this.
tl;dr: How do I run Py2Exe within the context of a virtualenv so it correctly imports dependencies?
|