Title: stringlengths, 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: stringlengths, 6 to 105
Answer: stringlengths, 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: stringlengths, 41 to 29k
How to Accept Command Line Arguments With Python Using <
26,496,756
1
0
299
0
python,shell,terminal
The < redirection is handled by the shell: the file doesn't get passed as an argument. Instead, its contents become the standard input of your program, i.e., sys.stdin.
0
1
0
0
2014-10-21T21:30:00.000
3
1.2
true
26,496,708
1
0
0
2
Is it possible to run a python script and feed in a file as an argument using <? For example, my script works as intended using the following command python scriptname.py input.txt and the following code stuffFile = open(sys.argv[1], 'r'). However, what I'm looking to do, if possible, is use this command line syntax: python scriptname.py < input.txt. Right now, running that command gives me only one argument, so I likely have to adjust my code in my script, but am not sure exactly how. I have an automated system processing this command, so it needs to be exact. If that's possible with a Python script, I'd greatly appreciate some help!
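A minimal sketch of the pattern the answer above implies; the fall-back logic (use the argument if present, otherwise stdin) is an assumption, not part of the original answer.

    import sys

    # If a filename was passed, read it; otherwise read standard input,
    # which is where `python scriptname.py < input.txt` delivers the file.
    if len(sys.argv) > 1:
        stuff_file = open(sys.argv[1], 'r')
    else:
        stuff_file = sys.stdin

    for line in stuff_file:
        print(line.rstrip())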
Python windows script subprocess continues to output after script ends
26,500,297
0
0
390
0
python,windows,subprocess
You need to call results.kill() or results.terminate() (they are aliases on Windows) to end your subprocesses before exiting your main script.
0
1
0
0
2014-10-22T04:07:00.000
3
0
false
26,500,184
1
0
0
1
Hi, I am writing a Python script on Windows. Using subprocess, I have a line like results = subprocess.Popen(['xyz.exe'], stdout=subprocess.PIPE). After the script ends and I get back to the prompt in cmd, I see more output from the script being printed out. I'm seeing stuff like Could Not Find xxx_echo.txt being printed out repeatedly. How do I properly close the subprocess in Windows?
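A hedged sketch of the answer's advice; xyz.exe comes from the question, and the drain-then-terminate ordering is an assumption.

    import subprocess

    results = subprocess.Popen(['xyz.exe'], stdout=subprocess.PIPE)
    output = results.stdout.read()   # drain the pipe first
    # terminate() and kill() are aliases on Windows; either ends the child
    # so nothing keeps printing after the main script exits.
    results.terminate()
    results.wait()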
Adding Python to Windows environment variables
70,948,379
0
0
1,498
0
python,windows,command-line,environment-variables
One quick solution for those who are still struggling with the environment variable setup issue: just uninstall the existing Python version and reinstall it, making sure to tick the "Add Python 3.10 to PATH" checkbox in the installer.
0
1
0
0
2014-10-22T05:16:00.000
2
0
false
26,500,725
1
0
0
1
I've been using Python for some time now, but I have never been able to properly run it from the Windows command line. The error shown is: C:\Windows\system32>python 'python' is not recognized as an internal or external command, operable program or batch file. I've tried to solve the problem many times. I understand it's a matter of editing the environment variables, but this hasn't fixed the problem. My System Path variable is currently C:\Python27;C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk This is the correct location of Python in my directory. I've tried adding this to my User Path, and I've tried creating a PYTHONPATH variable containing them. I should note that running python.exe does work. C:\Windows\system32>python.exe Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. I've tried a variety of solutions to no avail. Any help is greatly appreciated.
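One quick way to verify the result from Python itself; shutil.which is in the standard library from Python 3.3 on.

    import shutil

    # Prints the full path the shell would resolve for "python",
    # or None if python is still missing from PATH.
    print(shutil.which('python'))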
Celery: Abort or revoke all tasks in a chord
26,513,120
0
4
3,818
0
python,task,celery,chord
Instead of chording the A tasks themselves, you may want to consider building the chord out of tasks that watch the A tasks. What I mean by this is that the chord would contain tasks that check the running A tasks every so often to see if they are done or revoked. When all of those return successfully, the chord will then chain into task B.
0
1
0
0
2014-10-22T16:24:00.000
2
0
false
26,512,324
0
0
1
1
I use the following setup with a Redis broker and backend: chord([A, A, A, ...])(B) Task A does some checks. It uses AbortableTask as a base and regularly checks the task.is_aborted() flag. Task B notifies the user about the result of the calculation The user has the possibility to abort the A tasks. Unfortunately, when calling AbortableAsyncResult(task_a_id).abort() on all the task A instances, only the active ones are being aborted. The status for tasks that have not been received yet by a worker are changed to ABORTED, but they're still processed and the is_aborted() flag returns False. I could of course revoke() the pending tasks instead of abort()-ing them, but the problem is that in that case the chord body (task B) is not executed anymore. How can all pending and running task A instances be stopped, while still ensuring that task B runs?
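A hedged sketch of the watcher idea from the answer above; the app configuration, task names, and the task_a_ids list are assumptions, not from the original post.

    from celery import Celery, chord
    from celery.result import AsyncResult

    app = Celery('tasks', broker='redis://', backend='redis://')

    @app.task(bind=True, max_retries=None)
    def watch(self, task_a_id):
        # Poll one A task until it has settled one way or another.
        state = AsyncResult(task_a_id, app=app).state
        if state in ('SUCCESS', 'FAILURE', 'REVOKED', 'ABORTED'):
            return state
        raise self.retry(countdown=5)   # not done yet, check again soon

    @app.task
    def notify_user(states):
        print('all A tasks settled:', states)

    # chord([watch.s(i) for i in task_a_ids])(notify_user.s())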
PyUSB read multiple frames from bulk transfer with unknown length
26,553,765
1
0
2,500
0
python,python-3.x,usb,pyusb
Solved my own issue. After running my code on a full linux machine, capturing the data and comparing it to the wireshark trace I took on the windows application, I realized the length of the read was not the issue. The results were very similar and the windows app was actually requesting 4096 bytes back instead of 2 or 7 and the device was just giving back whatever it had. The problem actually had to do with my Tx message not being in the correct format before it was sent out.
0
1
0
1
2014-10-23T21:45:00.000
1
1.2
true
26,538,004
0
0
1
1
So I'm relatively new to USB and PyUSB. I am trying to communicate with a bluetooth device using PyUSB. To initialize it, I need to send a command and read back some data from the device. I do this using dev.write(0x02, msg) and ret = dev.read(0x81, 0x07). I know the command structure and the format of a successful response. The response should have 7 bytes, but I only get back 2. There is a reference program for this device that runs on windows only. I have run this and used USBPcap/wireshark to monitor the traffic. From this I can see that after my command is sent, the device responds several times with a 2 byte response, and then eventually with the full 7 byte response. I'm doing the python work on a Raspberry Pi, so I can't monitor the traffic as easily. I believe the issue is due to the read expecting 7 bytes and then returning the result after the default timeout is reached, without receiving the follow up responses. I can set the bytes to 2 and do multiple readings, but I have no way of knowing if the message had more bytes that I am missing. So, what I'm looking for is a way to check the length of the message in the buffer before requesting the read so I can specify the size. Also, is there a buffer or anything to clear to make sure I am reading next message coming through. It seems that no matter how many times I run the read command I get the same response back.
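A hedged sketch of the take-away from the self-answer above: ask the bulk endpoint for more than you expect and accept whatever arrives. The vendor/product IDs and the command bytes are placeholders.

    import usb.core

    dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # placeholder IDs
    msg = [0x01, 0x02]                                      # placeholder command
    dev.write(0x02, msg)
    # A bulk read returns whatever the device sends, up to the size asked
    # for, so over-asking (4096) avoids guessing the exact reply length.
    ret = dev.read(0x81, 4096, timeout=1000)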
Make Tornado able to handle new requests while waiting response from RPC over RabbitMQ
27,069,239
0
0
199
0
python,asynchronous,rabbitmq,tornado,rpc
Use nginx with embedded Perl; it performs extremely well. We are using this for our analytics tool.
0
1
0
0
2014-10-24T19:22:00.000
1
0
false
26,554,857
0
0
1
1
I have a web server listening for clients, and when someone hits a handler, the server sends an RPC message to RabbitMQ and waits for the response while keeping the connection open. When the response from RMQ comes, the server passes it to the client as the response to the request. All async examples in the Tornado docs work with their own http.fetch_async() or similar methods, and I understand that I have to wait/read from RMQ asynchronously... But how? And even worse, sometimes I have to send several messages at one moment (I create a pool of threads and each thread sends one message). Right now I cannot rebuild the architecture to get rid of waiting for an answer from RMQ, so my web server is blocked. For now we don't have a lot of requests and RMQ responds quickly enough, but sometimes it can keep the server waiting for up to a minute. So now we are just using Gunicorn with A LOT of workers and BIG SERVERS, but I feel there should be a better solution, and I'm investigating different options. Python 3.4, so we cannot use the pika RMQ adapter and work with py-amqp from Celery.
Installation Errors + File Association Errors Python 3.4
26,827,960
0
1
36
0
python,python-2.7,python-3.x,installation
The problem can be fixed by doing a system restore to a point from before the issue appeared, then installing again.
0
1
0
0
2014-10-26T18:07:00.000
1
1.2
true
26,576,229
1
0
0
1
I uninstalled Python 2.7.2 today, as I have had both Python 2 and 3 on my computer when it is only Python 3 that I use. After I uninstalled it, all of my Python files became associated with Notepad, and it would not allow me to change the association back to Python (no error message, it just won't register the change). I tried rebooting, but that did not work, so I decided to reinstall Python 3.4 as well. I did this, and now I found that while I can open the python file, I cannot open the pythonw file and therefore am unable to open the IDLE window to do anything. I have rebooted the system several times since then and tried another install, but nothing happens and I am unable to use Python at the moment. A fix to both problems would be greatly appreciated, but I am more worried about Python not being able to be opened. Thanks in advance
system default python can't use homebrew installed package
26,588,821
2
0
3,822
0
python,pip,package,homebrew
Use /usr/local/bin/python instead of the system-installed python. brew doctor should tell you that /usr/local/bin is not early enough in your path. By putting /usr/local/bin first (or earlier than /usr/bin) in your path, your shell will find Homebrew versions of executables before system versions. If you don't want to adjust your path, you can invoke the specific python you want to run: /usr/local/bin/python instead of just python at the shell prompt.
0
1
0
0
2014-10-26T18:30:00.000
1
1.2
true
26,576,432
1
0
0
1
I have different versions of Python installed on my Mac. My system default python is ($ which python) "/Library/Frameworks/Python.framework/Versions/2.7/bin/python". And if I install something with the pip command, such as pip install numpy, the package will be installed in the system Python's site-packages, "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages". However, I want to set up an ipython & Qt working environment, so I ran brew install pyqt and brew install PySide, and these packages are installed under my Homebrew Python instead. My Homebrew Python's site-packages is "/usr/local/lib/python2.7/site-packages". Now my python just can't import any Qt or PySide... Any suggestions? How can I fix this?
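A quick way to see the mismatch described here, runnable under each interpreter in turn.

    import site
    import sys

    # Run this under both interpreters: each prints its own binary and
    # its own site-packages, which is where its pip installs to.
    print(sys.executable)
    print(site.getsitepackages())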
Simple way to communicate between C# app and Python app
26,587,560
0
1
773
0
c#,python,mono,zeromq
You can always open up a TCP or UDP socket and communicate through that.
0
1
0
0
2014-10-27T09:46:00.000
2
0
false
26,584,752
0
0
0
1
I've got a C# app running under Windows and Linux. I would like to implement a way to communicate with it through a Python script. I've already tried using the ZeroMQ library, and it was working fine when the C# app was running on Windows; I could send/receive messages on both ends. But I failed miserably when I tried to use it on Linux/Mono: the app crashed with a kernel32 exception. I tried recompiling libzmq.dll, following the tutorials, but I can't get it right. Is there any other way to do this, or should I stick with ZeroMQ and try to get it running on Linux/Mono?
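A minimal sketch of the plain-socket fallback on the Python side; the port is an arbitrary choice, and the C# side would connect with something like a TcpClient.

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 5555))
    server.listen(1)
    conn, addr = server.accept()
    data = conn.recv(1024)           # bytes sent by the C# app
    conn.sendall(b'ack: ' + data)    # reply back over the same socket
    conn.close()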
Clicking on element in terminal
26,621,902
1
1
155
0
python,linux,terminal
URL support is hard coded in the individual terminal emulator. The terminal may support arbitrary URIs as registered in whichever environment it calls home, so that you can e.g. write a Gnome extension for myapp://something and have it work in gnome-terminal, but this is entirely terminal specific. It's also possible for a terminal program in any terminal to receive mouse events and it can then do whatever it wants with them (like how elinks lets you click non-URL links to browse). However, this requires the program to be running in the foreground and controlling everything that appears on that terminal.
0
1
0
0
2014-10-29T02:24:00.000
1
0.197375
false
26,621,818
0
0
0
1
I've noticed that hyperlinks printed in my Debian/Linux terminal are clickable and open the browser when clicked. I was wondering if this could be used for other things or if this was just hard-coded in the terminal for hyperlinks only. Is it possible to print out a line in Python that when clicked will launch another process, for example?
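A minimal sketch of the answer's second point: a foreground terminal program receiving mouse clicks, here via the standard-library curses module; what to launch on a click is up to the program.

    import curses

    def main(stdscr):
        curses.mousemask(curses.ALL_MOUSE_EVENTS)
        stdscr.addstr(0, 0, 'Click anywhere, press q to quit')
        while True:
            key = stdscr.getch()
            if key == curses.KEY_MOUSE:
                # getmouse() yields (id, x, y, z, button-state)
                _, x, y, _, _ = curses.getmouse()
                stdscr.addstr(1, 0, 'click at %d,%d   ' % (x, y))
            elif key == ord('q'):
                break

    curses.wrapper(main)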
Can't uninstall Python 3.4.2 from Windows 7 after system restore
27,374,542
15
8
14,672
0
python,windows-7,uninstallation,python-3.4,system-restore
Just had this problem and solved it by hitting Repair first, then Uninstall.
0
1
0
0
2014-10-29T11:39:00.000
2
1
false
26,629,379
1
0
0
2
A couple of days after uninstalling Python 3.4.2 I had to carry out a system restore (I'm using Windows 7) due to accidentally installing a bunch of rubbish-ware that was messing with my computer even after installation. This system restore effectively "reinstalled" Python, or rather a broken version of it. I now can't uninstall it via the usual Control Panel -> Uninstall Programs tool, nor can I reinstall it using the original installer. Unfortunately Windows has not saved an earlier system snapshot that I could restore to. Both the uninstall and reinstall processes make a fair bit of progress before stopping with a warning error that says: "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor" Does anyone have any suggestions on how I might succeed in this uninstallation?
Can't uninstall Python 3.4.2 from Windows 7 after system restore
26,629,632
12
8
14,672
0
python,windows-7,uninstallation,python-3.4,system-restore
Just delete the c:\Python3.4\ directory, reinstall Python 3.4 (any sub-version, it just has to be 3.4), and uninstall that. Python is, for the most part, totally self-contained in the Python3.4 directory. Reinstalling Python is only needed so you can get a fresh uninstaller to remove the registry keys the installation creates.
0
1
0
0
2014-10-29T11:39:00.000
2
1.2
true
26,629,379
1
0
0
2
A couple of days after uninstalling Python 3.4.2 I had to carry out a system restore (I'm using Windows 7) due to accidentally installing a bunch of rubbish-ware that was messing with my computer even after installation. This system restore effectively "reinstalled" Python, or rather a broken version of it. I now can't uninstall it via the usual Control Panel -> Uninstall Programs tool, nor can I reinstall it using the original installer. Unfortunately Windows has not saved an earlier system snapshot that I could restore to. Both the uninstall and reinstall processes make a fair bit of progress before stopping with a warning error that says: "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor" Does anyone have any suggestions on how I might succeed in this uninstallation?
Common celery workers for different clients having different DBs
26,644,301
1
0
412
1
python,django,celery,django-celery
Eventually you will have duplicates. Many people ignore this issue because it is a "low probability", and then are surprised when it hits them. And then a story leaks about how someone was logged into another user's Facebook account. If you require them to always be unique, then you will have to prefix each ID with something that will never repeat, like the current date and time with microseconds. And if that is not good enough, because there is still an even tinier chance of a collision, you can create a small application that will generate those prefixes and will add a counter (incremented after each request, and reset every couple of seconds) to the date and microseconds. It will have to work in single-threaded mode, but this guarantees unique prefixes that won't collide.
0
1
0
0
2014-10-29T18:06:00.000
1
1.2
true
26,637,631
0
0
1
1
I'm using celery with django and am storing the task results in the DB. I'm considering having a single set of workers reading messages from a single message broker. Now I can have multiple clients submitting celery tasks and each client will have tasks and their results created/stored in a different DB. Even though the workers are common, they know which DB to operate upon for each task. Can I have duplicate task ids generated because they were submitted by different clients pointing to different DBs? Thanks,
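A hedged sketch of the prefix scheme the answer describes; the client parameter is an assumption. Celery accepts custom IDs via apply_async(task_id=...).

    import itertools
    from datetime import datetime

    _counter = itertools.count()   # process-local, single-threaded use

    def unique_task_id(client):
        # Date+microseconds prefix plus a counter, per the answer's scheme.
        stamp = datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
        return '%s-%s-%d' % (client, stamp, next(_counter))

    # some_task.apply_async(args=[...], task_id=unique_task_id('client_a'))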
urllib vs cloud storage (Google App Engine)
26,660,144
0
0
199
0
python,google-app-engine,parsing,google-cloud-storage,urllib
They will likely be very close. The App Engine cloud storage library uses the URL Fetch service, just like urllib. Nonetheless, like any performance tuning, I'd suggest measuring on your own.
0
1
1
0
2014-10-30T12:08:00.000
2
0
false
26,652,617
0
0
1
1
I'm developing an app (with Python and Google App Engine) that requires to load some content (basically text) stored in a bucket inside the Google Cloud Storage. Everything works as expected but I'm trying to optimize the application performance. I have two different options: I can parse the content via the urllib library (the content is public) and read it or I can load the content using the cloudstorage library provided by Google. My question is: in terms of performance, which method is better? Thank you all.
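In the spirit of the answer's "measure on your own", a tiny harness; the two reader functions are placeholders for the urllib and cloudstorage approaches from the question.

    import time

    def average_seconds(fn, runs=20):
        start = time.time()
        for _ in range(runs):
            fn()
        return (time.time() - start) / runs

    # print(average_seconds(read_via_urllib))
    # print(average_seconds(read_via_cloudstorage))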
SGE: How to see the output in real time
26,760,309
0
3
1,042
0
python,linux,sungridengine
Instead of using qsub, use qrsh.
0
1
0
0
2014-10-31T03:13:00.000
1
0
false
26,667,008
0
0
0
1
I am submitting a job(script) to Sun Grid Engine. The job is a python program. It can take many hours to run, but it will periodically write to stdout and stderr to inform me its status (like how many iterations is finished, etc). The problem is that SGE is buffering the output and only writes to file at the end, which means that I cannot see the output on the screen or by tailing the file in real time. I can only get to know the status after the job is finished. Is there a way to get around this by configuring SGE (qsub, etc.)?
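A complementary Python-side note, separate from the answer's qrsh suggestion: stdout is block-buffered when redirected to a file, so flushing explicitly (or running the job with python -u) makes the periodic status lines appear promptly.

    import sys

    print('iteration 42 finished')   # hypothetical status line
    sys.stdout.flush()               # push it to the SGE output file now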
Stopping a autostarted python program started using crontab
26,667,793
0
0
525
0
python,raspberry-pi,raspbian
You must not add it to the crontab, which starts it on a time-scheduled basis. Instead, write an init script or (simpler) add it to /etc/rc.local!
0
1
0
1
2014-10-31T04:53:00.000
1
0
false
26,667,750
0
0
0
1
I am starting a Python program when my system (a Debian-based Raspberry Pi) boots by adding the line sudo python /path/code.py in crontab -e. On boot-up it does start. But I would like to know how I can stop it from running from the command line once it has started.
choosing language to work on very large text files (up to some terabytes)
26,684,332
3
0
103
0
java,python,bash,awk
As long as you process your file line by line and you assemble some statistics, it doesn't really matter what tool you choose. Java has some advantage in terms of speed compared to scripting languages, but in the end the difference will only be a constant factor. What matters most is the algorithm that you use to process the file.
0
1
0
0
2014-10-31T22:23:00.000
1
1.2
true
26,684,292
1
0
0
1
I am working on a project which uses text files (.txt) for input, reading them line by line, but these files can grow as large as 1 terabyte. I know some languages/technologies which I have used for similar problems: Java, Bash, Awk, and Python. But I don't know which one can work with such a large file, and what kind of tricks and tweaks will be needed.
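A sketch of the line-by-line streaming pattern the answer assumes, which keeps memory flat no matter how large the file is; the filename and the statistic are placeholders.

    counts = {}
    with open('huge.txt') as f:
        for line in f:                 # one line in memory at a time
            for word in line.split():
                counts[word] = counts.get(word, 0) + 1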
Running Python 3 interpreter in PyCharm Community Edition
26,692,407
2
2
4,308
0
python,python-3.x,pycharm
When you go to Settings > Console > Python Console, you can choose the default interpreter for your console. The default there is the Project Interpreter you select under Settings > Project Interpreter. Don't forget to restart PyCharm. Or you can assign a different interpreter to each project: go to Settings > Project:myproject > Project Interpreter.
0
1
0
0
2014-11-01T19:07:00.000
1
1.2
true
26,692,052
1
0
0
1
I am a new PyCharm user who switched from Wing. In Wing, if I configure Wing to use the "Python3" interpreter, the console would also run Python3. However, in PyCharm Community Version, even if I configure the project to use the Python 3.4 interpreter, the console would still use 2.7.5. (The program runs properly with Python 3.4) Is there a way that I can use the console with Python3. Platform: Mac OS X 10.7.5 Python 2.7.5 and 3.4 installed. Thanks!
How do I set Python environment variables for Volatility
26,698,001
0
0
1,614
0
python,python-2.7,environment-variables,ubuntu-14.04,volatility
It looks like you added vol.py itself to your PATH, which is incorrect. You only need to add the directory, such as /mydir/volatility/, without the vol.py in it.
0
1
0
0
2014-11-02T09:33:00.000
2
0
false
26,697,891
0
0
0
1
I'm trying to set up Volatility so I can execute commands regardless of what directory I happen to be in at the time. I'm not sure what I'm doing wrong; I've set the environment variables with the export command. I've double-checked my ~/.bashrc and even added the directory to /etc/environment. Running echo $PATH=vol.py returns /mydir/volatility/vol.py. But when I run python vol.py I get "python: can't open file 'vol.py': [Errno 2] No such file or directory". So I guess my question is: how can I set environment variables for Python so that when I run python vol.py it executes on whatever image file I point it to, without being in the volatility directory? Or even better, I just type vol.py -f whatever/imagefile and the system recognizes it as a Python script and executes it. I figure it's probably something simple, but I'm still learning, so any help is much appreciated. My system: Kubuntu 14.04 LTS; Python 2.7; Volatility 2.4
setuptools easy_install mac error
40,178,814
0
3
5,730
0
python,macos,setuptools,easy-install
You can add "sudo" before "python setup.py ..." in the install.sh.
0
1
0
1
2014-11-03T10:27:00.000
5
0
false
26,712,229
0
0
0
3
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
setuptools easy_install mac error
26,712,371
8
3
5,730
0
python,macos,setuptools,easy-install
Try again using sudo python ... to be able to write to /Library/Python/2.7/site-packages/.
0
1
0
1
2014-11-03T10:27:00.000
5
1.2
true
26,712,229
0
0
0
3
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
setuptools easy_install mac error
39,312,073
0
3
5,730
0
python,macos,setuptools,easy-install
Try curl bootstrap.pypa.io/ez_setup.py -o - | sudo python for access-related issues.
0
1
0
1
2014-11-03T10:27:00.000
5
0
false
26,712,229
0
0
0
3
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
Django delete superuser
67,849,420
-1
73
57,795
0
python,django
There is no way to delete it from the Terminal (unfortunately), but you can delete it directly. Just log into the admin page, click on the user you want to delete, scroll down to the bottom and press delete.
0
1
0
0
2014-11-03T11:36:00.000
7
-0.028564
false
26,713,443
0
0
1
2
This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask: Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's createsuperuser command?
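Neither of the answers to this question shows it, but deleting from the terminal is possible via the Django shell; a sketch assuming the default user model and a superuser named admin.

    # run inside: python manage.py shell
    from django.contrib.auth import get_user_model

    User = get_user_model()
    User.objects.filter(username='admin', is_superuser=True).delete()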
Django delete superuser
62,485,147
4
73
57,795
0
python,django
No need to delete the superuser; just create another superuser. You can create another superuser with the same name as the previous one. I had forgotten the password of the superuser, so I created another superuser with the same name as before.
0
1
0
0
2014-11-03T11:36:00.000
7
0.113791
false
26,713,443
0
0
1
2
This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask: Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's createsuperuser command?
Import Data Efficiently from Datastore to BigQuery every Hour - Python
26,722,516
2
2
541
1
python,google-app-engine,google-bigquery,google-cloud-datastore
There is no full working example (as far as I know), but I believe that the following process could help you: 1) you'd need to add a "last time changed" property to your entities, and keep it updated; 2) every hour you can run a MapReduce job, where your mapper can have a filter to check the last time updated and only pick up those entities that were updated in the last hour; 3) manually add what needs to be added to your backup. As I said, this is pretty high level, but the actual answer will require a bunch of code; I don't think it is suited to Stack Overflow's format, honestly.
0
1
0
0
2014-11-03T20:00:00.000
2
0.197375
false
26,722,127
0
0
1
1
Currently, I'm using Google's 2-step method to backup the datastore and than import it to BigQuery. I also reviewed the code using pipeline. Both methods are not efficient and have high cost since all data is imported everytime. I need only to add the records added from last import. What is the right way of doing it? Is there a working example on how to do it in python?
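A hedged sketch of steps 1 and 2 with plain NDB instead of MapReduce (a simplification of the answer's pipeline); the model and property names are assumptions.

    from datetime import datetime, timedelta
    from google.appengine.ext import ndb

    class Record(ndb.Model):
        # auto_now refreshes the timestamp on every put() (step 1).
        updated = ndb.DateTimeProperty(auto_now=True)

    def changed_last_hour():
        # Step 2: fetch only entities touched since the last run.
        cutoff = datetime.utcnow() - timedelta(hours=1)
        return Record.query(Record.updated >= cutoff).fetch()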
How to run f2py in macosx
30,111,058
0
1
2,795
0
python,bash
I had a similar problem (installed numpy with pip on macosx, but got f2py not found). In my case f2py was indeed in a location on my $PATH (/Users/username/Library/Python/2.7/bin), but had no execute permissions set. Once that was fixed all was fine.
0
1
0
0
2014-11-04T22:55:00.000
1
0
false
26,746,620
0
0
0
1
Hi, I am trying to use f2py on macOS. I have a Homebrew Python installation and I have installed numpy using pip. If I type f2py in the terminal I get -bash: f2py: command not found, but if I write import numpy.f2py in a Python script it works well. How can I solve this problem and run f2py directly from the terminal? Thank you!
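The fix the answer describes, done from Python (the equivalent of chmod +x); the path is the one mentioned in the answer plus the f2py name.

    import os
    import stat

    path = '/Users/username/Library/Python/2.7/bin/f2py'
    mode = os.stat(path).st_mode
    # Add execute permission for user, group, and others.
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)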
Call .bat file from anywhere in the directory using python
26,750,103
0
0
120
0
file,python-2.7,batch-file,directory,call
Do you mean calling a script from the command line without specifying its exact location? There are two ways: add it to your path (e.g., set it in your PATH environment variable), or set up an alias or some sort of shortcut in your bashrc/whatever CLI you are using (since you are using Windows, one example would be to set up a cmdlet in Windows PowerShell or something).
0
1
0
0
2014-11-05T04:56:00.000
2
0
false
26,749,980
1
0
0
1
I need to call a .bat file from anywhere in the directory without including the specific directory in the script. You'll just need to specify the name of .bat file you want to call and then run. Is this possible?
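Once the .bat file's directory is on PATH, the question's goal works from Python too; mytask.bat is a hypothetical name.

    import subprocess

    # shell=True hands the name to cmd.exe, which resolves .bat files
    # through PATH, so no directory needs to be given.
    subprocess.call('mytask.bat', shell=True)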
python cryptography run-time bindings files in GCS
27,596,094
1
0
35
0
google-cloud-storage,google-api-python-client
Because the compile errors occurred and could not be solved easily, I finally used the previous pyOpenSSL to solve this problem.
0
1
0
1
2014-11-05T15:05:00.000
1
0.197375
false
26,760,398
0
0
0
1
I am porting the GCS Python client lib and suffering some dependency problems. Because I want to use GCS on a NAS without glibc, I got an error at the line from oauth2client.client import SignedJwtAssertionCredentials; the error shows the reason is a lack of gcc. I traced the code and it seems to generate cryptography-related files (like _Cryptography_cffi_36a40ff0x2bad1bae.so) at run time from crypto.verify. Since my build machine does have gcc, is there any way to replace the cryptography library, or could I pre-compile and generate the files on my build machine? Thanks!
Port unrecognized in DOT file?
49,885,873
1
1
1,514
0
python,dot,pygraphviz
Your example is missing some information (a description of the nodes). Assuming those are somewhere and have just been omitted from your example, maybe the problem is that using node [shape=record] doesn't work with the port HTML attribute. For example, try node [shape=plaintext].
0
1
0
0
2014-11-06T09:13:00.000
2
0.099668
false
26,775,558
0
0
0
1
I am trying to create a PNG from DOT file dot -Tpng -o temp.png and I am getting the below errors: Warning: node s1, port eth2 unrecognized Warning: node s2, port eth2 unrecognized Warning: node s2, port eth3 unrecognized Warning: node s3, port eth2 unrecognized Warning: node s4, port eth4 unrecognized Warning: node s3, port eth3 unrecognized DOT FILE 1: graph G { node [shape=record]; graph [hostidtype="hostname", version="1:0", date="04/12/2013"]; edge [dir=none, len=1, headport=center, tailport=center]; "s1":"eth2" -- "s2":"eth2"; "s2":"eth3" -- "s3":"eth2"; "s4":"eth4" -- "s3":"eth3"; } When I try with the below topology file, it works. DOT FILE 2 graph G { node [shape=record]; graph [hostidtype="hostname", version="1:0", date="04/12/2013"]; edge [dir=none, len=1, headport=center, tailport=center]; "R1":"swp1" -- "R3":"swp3"; "R1":"swp2" -- "R4":"swp3"; } What is the difference here. For what reason is DOT FILE 1 giving errors ?
port usage in the ipython notebook
27,171,998
0
0
327
0
ipython,ipython-notebook
I never did figure out the answer to my question -- why the port matters. However, I found that my ROI widgets had a rookie mistake on the JavaScript side (I'm fairly new to JS programming) that, when fixed, made all the problems go away. Ironically, the puzzle now is why it worked when I was using the default port!
0
1
0
1
2014-11-06T15:56:00.000
1
1.2
true
26,783,752
0
0
0
1
I'm having a problem with running an ipython notebook server. I've written a series of custom ROI (Region Of Interest) widgets for the notebook that allow a user to draw shapes like rectangles and ellipses on an image displayed in the notebook, then send information about the shapes back to python running on the server. All information is passed via widget traitlets; the shape info is in a handles traitlet of type object. When I run this locally on port 8888 (the default) and access it with firefox running on the same computer, everything works. (The system in this case is a Mac running OSX Yosemite). Now I tried to access it remotely by making an ssh connection from another computer (ubuntu linux, in this case) and forwarding local port 8888 to 8888 on the host. This almost works: firefox running on the client is able to access the ipython notebook server, execute code in notebooks, etc. The ROI widgets also display and seem to work properly, except for one thing: no information about the shapes drawn makes it back to the server. This is not just an issue of remote access (although that's the most important for my intended use). I have exactly the same problem if I run locally, but use a port other than 8888. For instance, if I set the port to 9999 in ipython_notebook_config.py, run the notebook server and access it with a local firefox, I get exactly the same problem. Similarly, if I run ipython notebook twice with all default settings, the second instance binds port 8889, because 8888 was bound by the first. When I access the server running at 8888 with a local firefox, everything works; when I access the simultaneously running server running at 8889, my widgets once more fail to send info back to the server. If I use --debug, I can see all the comm_msgs passed. The server running on 8888 receives messages that contain shape info, as expected. These messages simply don't show up in the log of the server running at 8889. Any thoughts?
How can I map asynchronous operations to an ordered stream of data and obtain an identically-ordered result?
26,817,585
2
2
58
0
python,stream,twisted,frp
I don't know of any neat tricks that will help you with this. I think you probably just have to implement the re-ordering (or order-maintaining, depending on how you look at it) logic in your Stream.map implementation. If operation i + 1 completes before operation i then Stream.map will probably just have to hold on to that result until operation i completes. Then it can add results i and i + 1 to the output Stream. This suggests you may also want to support back-pressure on your input. The re-ordering requirement means you have an extra buffer in your application. You don't want to allow that buffer to grow without bounds so when it reaches some maximum size you probably want to tell whoever is sending you inputs that you can't keep up and they should back off. The IProducer and IConsumer interfaces in Twisted are the standard way to do this now (though something called "tubes" has been in development for a while to replace these interfaces with something easier to use - but I won't suggest that you should hold your breath on that).
0
1
0
0
2014-11-06T22:36:00.000
1
1.2
true
26,790,720
1
0
0
1
I'm currently designing an application using the Twisted framework, and I've hit a bit of a roadblock in my planning. Formal Description of the Problem My application has the following constraints: Data arrive in-order, but asynchronously. I cannot know when the next piece of my data will arrive The order in which data arrive must be preserved throughout the lifespan of the application process. Additional asynchronous operations must be mapped onto this "stream" of data. The description of my problem may remind people of the Functional Reactive Programming (FRP) paradigm, and that's a fair comparison. In fact, I think my problem is well-described in those terms and my question can be pretty accurately summarized thusly: "How can I leverage Twisted in such a way as to reason in terms of data streams?" More concretely, this is what I have figured out: A datum arrives and is unpacked into an instance of a custom class, henceforth referred to as "datum instance" The newly-arrived datum instance is appended to a collections.deque object, encapsulated by a custom Stream class. The Stream class exposes methods such as Stream.map that apply non-blocking computations asynchronously to: All elements already present in the Stream instance's deque. All future elements, as they arrive. Results of the operations performed in item 3 are appended to a new Stream object. This is because it's important to preserve the original data, as it will often be necessary to map several callable's to a given stream. At the risk of beating a dead horse, I want to insist upon the fact that the computations being mapped to a Stream instance are expected to return instances of Deferred. The Question Incidentally, this precisely where I'm stuck: I can implement items 1, 2 & 3 quite trivially, but I'm struggling with how to handle populating the results Stream. The difficulty stems from the fact that I have no guarantees of stream length, so it's completely possible for data to arrive while I'm waiting for some asynchronous operations to complete. It's also entirely possible for async operation Oi to complete after Oi + n, so I can't just add deque.append as a callback. So how should I approach this problem? Is there some nifty, hidden feature of Twisted I have yet to discover? Do any twisty-fingered developers have any ideas or patterns I could apply?
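A hedged sketch of the re-ordering buffer the answer proposes, independent of Twisted: results may complete out of order, but are emitted strictly in input order.

    class OrderedEmitter(object):
        def __init__(self, emit):
            self.emit = emit        # called with each result, in order
            self.next_index = 0
            self.pending = {}

        def result(self, index, value):
            # Hold on to result i+1 until result i has been emitted.
            self.pending[index] = value
            while self.next_index in self.pending:
                self.emit(self.pending.pop(self.next_index))
                self.next_index += 1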
reactor design pattern in a single thread vs multiple threads
27,050,242
1
2
1,368
0
python,multithreading,twisted,reactor
"But it seems that it's now impossible to block the main program." I'm not a Python guy, but I have done this in the context of Boost.Asio. You're correct: your callbacks need to execute quickly and return control to the main reactor. The idea is to only use asynchronous calls in your callbacks. For example, you wouldn't use an API for sending an IP datagram that blocks and returns a status code. Instead, you'd use a non-blocking API where you register success and failure callbacks. This lets the send call return immediately. The reactor will then call the success/failure callback once the OS has dealt with the packet.
0
1
0
0
2014-11-09T12:30:00.000
2
0.099668
false
26,828,181
1
0
0
2
I've been reading about the reactor design pattern, specifically in the context of the Python Twisted networking framework. My simple understanding of the reactor design is that there is a single thread that will sit and wait until one or more I/O sources (or file descriptors) become available, and then it will synchronously loop through each of those sources, doing whatever callbacks specified for each of these sources. Which does mean that the program as a whole would block if any of the callbacks are themselves blocking. And regardless, once all callbacks have executed, the reactor goes back to waiting for more I/O sources to become ready. What are the pros and cons of this, compared to asynchronously looping through each source as they appear, i.e. launching a separate thread for each source. I imagine this may be less efficient if all your callbacks are very fast, as the OS now has to deal with managing multiple threads and swapping between them. But it seems that it's now impossible to block the main program, and as an added benefit, the main reactor can keep listening for sources. In short, why does something like Twisted not do this, instead keeping to a single-threaded model?
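A minimal Twisted sketch of the non-blocking style this answer describes: the write is queued and returns at once, leaving the reactor free to service other sources. Host and port are placeholders.

    from twisted.internet import protocol, reactor

    class Sender(protocol.Protocol):
        def connectionMade(self):
            self.transport.write(b'hello')   # non-blocking, returns at once
            self.transport.loseConnection()

    factory = protocol.ClientFactory()
    factory.protocol = Sender
    reactor.connectTCP('localhost', 8000, factory)
    reactor.run()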
reactor design pattern in a single thread vs multiple threads
26,829,543
3
2
1,368
0
python,multithreading,twisted,reactor
What are the pros and cons of this, compared to asynchronously looping through each source as they appear, i.e. launching a separate thread for each source. What you're describing is basically what happens in a multithreaded program that uses blocking I/O APIs. In this case, the "reactor" moves into the kernel and the "asynchronous looping" is the kernel completing some outstanding blocking operation, freeing up a user-space thread to proceed. The cons of this approach are the greatly increased complexity with respect to thread-safety (ie, correctness) that it incurs compared to a strictly single-threaded approach. The pros are better utilization of multiple CPUs (but running multiple single-threaded event-driven processes often offers this benefit as well) and the greater number of programmers who are familiar and comfortable (though often mistakenly so) with the multithreading approach to concurrency. Also related, though, are the PyPy team's efforts towards providing a better abstraction over the conventional multithreading model. PyPy's work towards Software Transactional Memory (STM) could offer a system in which work is dispatched asynchronously to multiple worker threads without violating the assumptions that are valid in a strictly single-threaded system. If this works out, it could offer the best of both worlds.
0
1
0
0
2014-11-09T12:30:00.000
2
1.2
true
26,828,181
1
0
0
2
I've been reading about the reactor design pattern, specifically in the context of the Python Twisted networking framework. My simple understanding of the reactor design is that there is a single thread that will sit and wait until one or more I/O sources (or file descriptors) become available, and then it will synchronously loop through each of those sources, doing whatever callbacks specified for each of these sources. Which does mean that the program as a whole would block if any of the callbacks are themselves blocking. And regardless, once all callbacks have executed, the reactor goes back to waiting for more I/O sources to become ready. What are the pros and cons of this, compared to asynchronously looping through each source as they appear, i.e. launching a separate thread for each source. I imagine this may be less efficient if all your callbacks are very fast, as the OS now has to deal with managing multiple threads and swapping between them. But it seems that it's now impossible to block the main program, and as an added benefit, the main reactor can keep listening for sources. In short, why does something like Twisted not do this, instead keeping to a single-threaded model?
Interaction with a subprocess
26,847,231
1
1
76
0
python,multithreading,subprocess
Maybe this is a silly question, and you've probably done this already, but are you sure simply sending the "\n" to the process won't work? I would say it's likely that selftest.exe doesn't actually read the [ENTER] until it's done. That, of course, depends on how the programs reads the enter. You may also try sending SIGQUIT or SIGTERM, so maybe the program will handle them gracefully.
0
1
0
0
2014-11-10T15:27:00.000
1
1.2
true
26,847,130
0
0
0
1
I am trying to automate the running of a program called selftest.exe, which when it is done, asks the user to press [ENTER] to exit. Thus, calling this process through subprocess.Popen does not work, since it hangs forever until the user presses [ENTER], which he never will. Whilst I don't think it would be reasonably possible to send the key stroke at the exact right time, I thought about a timeout, like "after XXX seconds, send "\n" and store the stdout of the process in a string", with XXX being big enough for me to have all the results. Is that a viable idea, or is there a "cleaner" idea to work with interactive programs in general ? Solution: the first answer is right, the program will quit, whenever you press enter. However, calling p.communicate(input='\n') will lead to the following error : 'str' does not support the buffer interface. It needs to be p.communicate(input=b'\n') instead.
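The question's own solution, assembled into a runnable sketch; as noted above, the bytes literal matters on Python 3.

    import subprocess

    p = subprocess.Popen(['selftest.exe'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    # Feed the ENTER the program waits for and collect everything it printed.
    out, _ = p.communicate(input=b'\n')
    print(out.decode())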
How to use threading/multiprocessing to prevent program hanging?
26,870,615
1
1
78
0
python,multiprocessing,pyside
As you are speaking of PySide, I assume your program is a GUI one. In a GUI program, all processing must occur in a worker thread if you want to keep the UI responsive. So yes, the initial script must be started in a thread distinct from the main thread (the main one is reserved for the UI).
0
1
0
0
2014-11-11T17:02:00.000
1
1.2
true
26,870,317
1
0
0
1
I am a bit confused with multiprocessing. I have a video processing script which can be run from the command line or launched from a PySide application using a subprocess call. The script seems to run fine from the command line and basically initializes a pool of workers which each process a separate video file. When I run the program, however, the OS tells me my program is not responding. I would like to make use of all the cores on my system for multiprocessing, but I would also like to prevent this annoyance. What should I do to get around this? Do I start the initial script in a thread or something?
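A minimal sketch of the answer's advice with a plain worker thread; the script name is a placeholder, and in a real PySide app a Qt signal (rather than direct widget access) should carry results back to the UI thread.

    import subprocess
    import threading

    def run_job():
        # Long-running work stays off the main (UI) thread.
        subprocess.call(['python', 'video_script.py'])

    worker = threading.Thread(target=run_job)
    worker.daemon = True
    worker.start()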
Lubuntu with python 2.7.8 and 3.4.2 import rrdtool
27,647,300
0
0
633
0
python,rrdtool
Unfortunately, the python-rrdtool package from Ubuntu/Debian is a Python 2.x package only, so it will work in Python 2.7 and not in Python 3.4. If you must use rrdtool in Python 3.x, then you will have to use some alternative Python-to-rrdtool binding. There are several to choose from on pypi.python.org (which you can then install with pip). I have not used them, as they all seem to have low version counts, and I am wary of possible bugs. If someone has tried them, perhaps they could share their experience...
0
1
0
0
2014-11-12T09:45:00.000
1
0
false
26,883,774
0
0
0
1
Due to problems when installing rrdtool on Windows, I decided to switch to Linux to solve many problems. I've installed Lubuntu (which has Python 2.7.8 installed by default) and Python 3.4.2. Then with the package manager I've installed python-rrdtool. The problem is: from the terminal, when I write python2 and then import rrdtool it works, but when I write python3 and import rrdtool it says there is no such module. How can I use rrdtool on Python 3 as well? Thanks, Paolo
Accessing python generators in parallel using multiprocessing module
26,898,024
8
2
1,768
0
python,parallel-processing,generator
I think you may be trying to solve this problem at the wrong level of abstraction. Python generators are inherently stateful, and thus you can't split a generator across processes without some form of synchronization, and that will kill any performance gains that you might achieve through parallelism. I would recommend instead creating separate generators for each process and having them start at some offset from each other. For example, if you have 4 processes, you basically have the first process handle the first chunk, and then it processes the 5th chunk followed by the 9th chunk and so on, adding N each time, where N is the number of processes that you've set up. This requires you to hand off a unique index to each of the processes at startup.
0
1
0
0
2014-11-12T22:30:00.000
1
1.2
true
26,897,922
0
0
0
1
I have a Python generator which pulls in a pretty huge table from a data warehouse. After pulling in the data, I am processing it using Celery in a distributed manner. After testing, I realized the generator is the bottleneck: it can't produce enough tasks for the Celery workers to work on. This is when I decided to optimize my Python generator. More details on the generator: the generator hits the data warehouse with chunk queries, and these query results are basically independent of each other and stateless. So I thought this is a good candidate for making it parallel using the multiprocessing module. I looked around for how to parallelize a generator's fetches without finding much direction. So if my Python generator generates stateless chunks of data, this should be a good candidate for multiprocessing, right? Are there any ways to parallelize Python generators? Also, are there any side effects I should be aware of when using parallelism in Python generators?
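A hedged sketch of the striding the answer recommends: each process walks its own sequence of chunk indices offset by its worker number. fetch_chunk and the chunk count stand in for the warehouse queries.

    from multiprocessing import Pool

    NUM_WORKERS = 4
    TOTAL_CHUNKS = 20            # assumed; derive from the table in practice

    def fetch_chunk(i):
        return list(range(i * 100, (i + 1) * 100))   # placeholder query

    def worker(offset):
        # Worker 0 handles chunks 0, 4, 8, ...; worker 1 handles 1, 5, 9, ...
        out = []
        for i in range(offset, TOTAL_CHUNKS, NUM_WORKERS):
            out.extend(fetch_chunk(i))
        return out

    if __name__ == '__main__':
        pool = Pool(NUM_WORKERS)
        results = pool.map(worker, range(NUM_WORKERS))
        pool.close()
        pool.join()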
How to send data from raspberry pi to windows server?
26,941,321
0
0
2,420
0
python,mysql,raspberry-pi,windows-server
As long as you know the IP address of the Windows machine, you can easily run a server on Windows (Apache/MySQL -> PHP). You provide this IP address to the Raspberry Pi, and it can log in and authenticate the same way as on any other server. Basically, the WAMP stack will act as an abstraction layer for communication.
0
1
0
1
2014-11-13T11:13:00.000
1
0
false
26,907,588
0
0
0
1
In a project some students and I are doing, we want to gather temperature and air humidity information in nodes that send the data to a Raspberry Pi. That Raspberry Pi will then send the data to a MySQL database running on a Windows platform. Some background information about the project: we are going to design a sellable system which gathers temperature and air humidity information and saves it on a server, which the owner can watch on a website/mobile application. He will receive a username and a password when purchasing the system to log in to the server. This means that, as a seller, we bind the station to an account, which is given to the customer. One station can add an unlimited number of nodes and will have a static ID, so the server knows which node sends the information. We have very limited knowledge about sending information, Python in general, and servers/databases. My problem is: how could I send the data from the Raspberry Pi, and how could I receive the data on the server? The idea is that the Raspberry Pi should send data continuously, and it's up to the server to accept or ignore the data depending on whether it is correct or not. I want to send this information: Station ID (in case the node does not exist in the database, it will then be added to the corresponding station), Node ID (to know which node in the database to store the data at), Date/Time (to know when the data was measured), Air humidity, and Temperature. I am not sure if I need to send account/password information, since it should not matter as long as the account "owns" the station. I hope I provided enough information.
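A hedged sketch of the Pi-to-server hop over HTTP, in line with the answer above; the endpoint, field names, and IP address are all assumptions.

    import requests

    reading = {'station_id': 'ST01', 'node_id': 3,
               'temperature': 21.5, 'humidity': 40.2}
    # The server-side script (PHP or anything else) validates the data and
    # inserts it into MySQL; the Pi never talks to the database directly.
    requests.post('http://192.168.1.10/api/readings', data=reading, timeout=5)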
What is the most efficient way to write 3GB of data to datastore?
26,917,183
2
0
102
0
python-2.7,google-app-engine,google-cloud-datastore
If you need to store each row as a separate entity, it does not matter how you create these entities - you can improve the performance by batching your requests, but it won't affect the costs. The costs depend on how many indexed properties you have in each entity. Make sure that you only index the properties that you need to be indexed.
0
1
0
0
2014-11-13T19:45:00.000
2
0.197375
false
26,917,114
0
1
1
1
I have a 3Gb csv file. I would like to write all of the data to GAE datastore. I have tried reading the file row by row and then posting the data to my app, but I can only create around 1000 new entities before I exceed the free tier and start to incur pretty hefty costs. What is the most efficient / cost effective way to upload this data to datastore?
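A sketch of the batching and indexing advice with NDB; the model is an assumption. TextProperty is unindexed by design, which is what keeps the write costs down.

    from google.appengine.ext import ndb

    class Row(ndb.Model):
        payload = ndb.TextProperty()      # unindexed: no index writes
        lookup = ndb.StringProperty()     # indexed only because it's queried

    def store(parsed_rows):
        batch = [Row(payload=p, lookup=k) for p, k in parsed_rows]
        ndb.put_multi(batch)              # one batched RPC per group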
openshift python mongodb local
26,922,793
0
0
47
1
python,mongodb,openshift,bottle
Yes. You have to run your own MongoDB server locally, or port-forward and use the OpenShift MongoDB.
0
1
0
0
2014-11-14T01:43:00.000
1
0
false
26,921,629
0
0
0
1
I have a Bottle+Mongo application running on OpenShift. When I git-clone the application to my local computer, neither the database nor the environment variables get downloaded to my computer, just the Python files. Do I have to mimic the Mongo part on my local computer to develop locally? Or am I missing something here?
How to run Unix commands in python installed on a windows machine?
26,939,339
0
0
1,284
0
python,windows,shell,unix
You have to install Unix Bash for Windows, but I'm not sure it works correctly... A better solution is to install or virtualize some Linux distribution.
0
1
0
0
2014-11-14T21:22:00.000
2
0
false
26,939,090
0
0
0
1
I tried using some Unix commands via the subprocess module on my Python interpreter installed on a Windows 7 OS. However, it errors out saying the command was not found. It does recognize Windows commands, though. Can I somehow use Unix commands here too? Thanks, Vishal
Kivy app works on Windows 7 but not on ubuntu
26,946,523
3
1
268
0
python,ubuntu,kivy
It sounds like your kv file isn't being loaded. Does it have the correct name, and is it in the right directory? You can check the output in the terminal to see whether the file is loaded. Edit: One possibility is case sensitivity: Windows is not case sensitive, Linux generally is. Make sure the kv filename is all lowercase.
1
1
0
0
2014-11-15T05:33:00.000
1
1.2
true
26,942,909
0
0
0
1
I have a Kivy app that uses a kv file, main.py, and a py class that handles the database. Everything works fine on Windows. When I run it on Linux (Ubuntu) I get a black window with the correct title, but there are no widgets in the window. What do I need to do differently to run a Kivy app, which was put together and runs on Windows, in Ubuntu? I am using the most current available version of Kivy on both systems.
Why does GAE use classes and self.response.out.write over functions and print?
26,952,993
3
0
733
0
python,google-app-engine,webapp2
Using Methods Each handler class has methods with names like get and post, after the HTTP methods GET and POST etc. Those methods are functions that handle requests. Each request to your server will be routed to a request handler object, which is a new instance of some request handler class. So, a request handler instance is created per request, and is garbage collected once its HTTP response is sent. By inheriting from webapp2.RequestHandler, your handler classes get a bunch of functionality out of the box for free. For example, handler instances will have the data from the HTTP request parsed into dictionaries and bound to self as self.request.headers and self.request.body automatically. The webapp2.RequestHandler class also provides self.response, which is what you write your response data to. Once the new request handler instance is initialised, the inherited __init__ method calls the method that maps to the HTTP request method, so assuming a GET request, it calls self.get. The webapp2.RequestHandler class doesn't implement those methods; your derived class does. Responding Neither print nor the return value of the handler method is used here. You do not 'return a response' with this framework; you write the response to the request handler instance's (inherited) self.response property. Your instance inherits self.response.out.write (which is aliased to self.response.write), which concatenates its argument to the body of the response, initially an empty string. Note: You can call self.response.clear to clear the response body. When you return from your handler method (get, post, etc.), the return value is ignored. The framework uses the state of self.response to automatically create and send an HTTP response for you. There's a bunch of subtleties that the framework takes care of behind the scenes too. Classes Over Functions The main advantage is in inheritance. Normally, you'll create a single BaseHandler class that derives from webapp2.RequestHandler. The BaseHandler class will contain the core functionality for your actual handlers. It might include some logic for converting data into little JSON packages for a Web API, for example. All of the classes that actually handle requests would then be derived from your BaseHandler. You want a custom base class for your handler classes to derive from mainly so you can edit that base class. You want that base class to inherit from webapp2.RequestHandler so that all your handler instances inherit the framework magic. There is enough sleight of hand to make the whole thing confusing, but it is easy to make sense of once you get it, and does save a lot of trouble. Technically, you could achieve all of the above just using functions and dictionaries, but Python is classically object oriented, so it would be painful and weird.
0
1
0
0
2014-11-15T06:51:00.000
1
0.53705
false
26,943,368
1
0
1
1
I'm a beginner and am wondering why we use self.response.out.write instead of print, and why we use classes, instead of functions, for the request handlers in the first place. Are there any special reasons?
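A minimal webapp2 sketch of the pattern the answer describes: a shared BaseHandler plus a derived handler whose get() writes to self.response instead of printing.

    import webapp2

    class BaseHandler(webapp2.RequestHandler):
        def reply(self, text):
            self.response.write(text)    # appended to the response body

    class HelloHandler(BaseHandler):
        def get(self):
            self.reply('Hello, world')   # any return value would be ignored

    app = webapp2.WSGIApplication([('/', HelloHandler)])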
Install OpenCV for python building the source or with apt-get?
26,958,046
5
0
872
0
python,linux,opencv,apt-get
Of course, it's much easier to use the apt-get variant. One drawback is that you might not get the most recent version, since the apt-get package isn't updated as fast as the sources are. Furthermore, you'll have a higher level of control over which modules are installed and over the compile parameters when using the "make way". If you just want an easy way, use the apt-get install version. If you want control, flexibility, and the most recent version, use the make version and compile the source code according to your needs.
0
1
0
0
2014-11-16T14:15:00.000
1
1.2
true
26,957,894
0
0
0
1
I'm working with OpenCV using Python on Linux. I have always installed OpenCV by building the source with make. As you know, there are many guides online which all say pretty much the same things. Now I found some people who say to install OpenCV using apt-get with the command sudo apt-get install python-opencv. What is the difference between the two methods? Can I just use the apt-get command? I looked around for an answer, but I still don't understand if I can avoid building OpenCV.
Which version of Python did pip or easy_install refer to by default?
26,965,467
2
3
163
0
python,macos,pip,easy-install
There's an easy way around it: use pip2, pip2.7, or pip-2.7 for Python 2, and pip3, pip3.4, or pip-3.4 for Python 3. Both versions ship with easy_install, but Python 2 does not contain pip by default; you have to install it yourself.
0
1
0
0
2014-11-17T04:17:00.000
2
1.2
true
26,965,450
1
0
0
1
I am a non-programmer who started to learn Python. My Mac OS X Yosemite shipped with Python 2.7.6. I installed Python 3.4.2 too. If I use pip or easy_install in the terminal to install a package, how do I know which Python I installed the package in? It seems Python 3.4.2 shipped with pip and easy_install, but I think Python 2.7.6 may also have some version of pip or easy_install. I know my system can have both versions of Python, but can it have multiple versions of pip or easy_install?
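One quick, hedged way to see which interpreter (and therefore which site-packages) you are dealing with is to ask the interpreter itself; run this with python, then with python3 (pip -V likewise prints the Python it belongs to):

```python
import sys

# Shows the interpreter binary and its version, so you can tell which
# Python a given pip / easy_install would install packages into.
print(sys.executable)
print(sys.version)
```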
Monitor Management Command Execution in Django
26,971,741
0
0
142
0
python,django
You could create a file named "i_am_running.log" at the beginning of your management command and remove it at the end of it. When running the same management command, check for its presence. If it doesn't exist, go further; otherwise, abort.
0
1
0
0
2014-11-17T06:15:00.000
2
0
false
26,966,645
0
0
1
1
I'm writing a Django app that uses a management command to pull data from various sources. The plan is to run this command hourly with cron, and also have it run on user command from a view (i.e. when they add a new item that needs data, I don't want them to wait for the next hour to roll around to see results). The question is: How can I set up this command such that if it is already currently running, it won't execute? Is there some place where I can stash a variable that can be checked by the script before execution? My current best idea is to have the command monitor stdout for a while to make sure nothing else is executing, but that seems like a hack at best. This is the only task that will be running in the background. I'm basically trying to avoid using Celery here.
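A minimal sketch of the lock-file idea from the answer above, written as a Django management command; the lock path and the do_work body are placeholders:

```python
import os

from django.core.management.base import BaseCommand

LOCK_FILE = '/tmp/pull_data.lock'  # hypothetical path, pick one your app owns


class Command(BaseCommand):
    help = 'Pulls data unless another run is already in progress.'

    def handle(self, *args, **options):
        if os.path.exists(LOCK_FILE):
            self.stdout.write('Another run is in progress, aborting.')
            return
        open(LOCK_FILE, 'w').close()  # create the lock marker
        try:
            self.do_work()  # placeholder for the actual data pull
        finally:
            os.remove(LOCK_FILE)  # release the lock even on failure

    def do_work(self):
        pass
```

Note that the exists-then-create check is racy; os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL) would close that window atomically.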
IPython notebook fails to open any notebook
27,033,546
0
0
208
0
ipython,ipython-notebook
Thanks for your comment. I just tried to google "oleshell dll" and found that this file is not related to IPython or Windows. IPython works again after I renamed this file.
0
1
0
0
2014-11-19T07:43:00.000
1
0
false
27,011,508
1
0
0
1
I was using ipython notebook (packages in Anaconda-2.1.0 or Canopy-1.4.1) in "Windows Server 2008 R2 Enterprise" with browser (latest version of chrome or firefox). It was working perfectly. Once another user has started ipython notebook in his account. At first, his ipython notebook was failed to run any notebook. The worst is that my ipython was also failed to open or create any notebook after restart the kernel. The windows popup a dialog of "Problem signature" with following information: Problem Event Name: APPCRASH Application Name: python.exe Application Version: 0.0.0.0 Application Timestamp: 538f8ffc Fault Module Name: oleshell874.dll Fault Module Version: 8.7.4.0 Fault Module Timestamp: 54448aac Exception Code: c0000005 Exception Offset: 0000000000004867 OS Version: 6.1.7600.2.0.0.274.10 Locale ID: 1033 Additional Information 1: a481 Additional Information 2: a481c64a34722f1c689be57b64ee6a54 Additional Information 3: 3393 Additional Information 4: 33936ce55b0e8b96f5dce6a43fae2e99 Even I reinstalled the Anaconda or Canopy and restarted the system, it won't help. I have tried to google the Fault Module (oleshell874.dll) but it shows no result. Please help!
Windows - Executing an executable (.exe) without the .exe extension
27,017,888
2
1
1,498
0
python,windows
I cannot imagine a valid reason to break Windows philosophy. Microsoft always said that on its system the file extension determined the type of a file and its usage. There are already many cross-platform programs; for example, firefox is named firefox on Unix-like systems and firefox.exe on Windows systems. EDIT That being said, Windows accepts what you give it as a command, provided it is in a correct executable format. So if you create a program HelloWorld.exe and rename it HelloWorld.joe: cmd.exe will start the program when you type HelloWorld.joe at the prompt (tested on Windows XP and Windows 7) Python 2.7 and 3.4 should start it either using os.system or with the subprocess module (confirmed by eryksun) You cannot use an os.system call with a badly named file (in Microsoft's sense) because under the hood, os.system uses cmd.exe. It is a Microsoft program that looks at the file extension to know what to do with it, and will never execute (as an exe) a file that does not have the exe extension. You will not be able to use the subprocess module with shell=True for exactly the same reason. But when shell=False, it directly calls CreateProcess, which accepts any name as the name of a valid executable (provided it is ...) as said by zeller.
0
1
0
0
2014-11-19T12:29:00.000
2
0.197375
false
27,016,867
1
0
0
1
I need to be able to execute a .exe file in Python that has been renamed without the file extension (for example, let's say .joe - it doesn't represent anything that I know of). Let's say I have "HelloWorld.exe", I then rename it to "HelloWorld.joe" and need to execute it. Looking around, os.system is often used to execute .exe files, but I haven't been able to get it working without the .exe extension. The file cannot be renamed to have the .exe extension (or for that matter anything), in this "scenario", I do not have access to the source code of the executable.
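A hedged sketch of the subprocess approach from the answer: with shell=False the name is handed to CreateProcess directly, so the non-.exe extension does not matter (the path is illustrative):

```python
import subprocess

# shell=False (the default) bypasses cmd.exe, so Windows executes the file
# based on its contents rather than on its extension.
ret = subprocess.call([r'C:\tools\HelloWorld.joe'], shell=False)
print('exit code:', ret)
```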
applying the same command to multiple files in multiple subdirectories
27,021,713
2
0
115
0
python,bash,fastq
You can try find . -name "*.fastq" | xargs your_bash_script.sh, which uses find to get all the files and then applies your script to each one of them.
0
1
0
0
2014-11-19T16:14:00.000
2
0.197375
false
27,021,617
1
0
0
1
I have a directory with 94 subdirectories, each containing one or two files *.fastq. I need to apply the same python command to each of these files and produce a new file qc_*.fastq. I know how to apply a bash script individually to each file, but I'm wondering if there is a way to write a bash script to apply the command to all the files at once
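If you would rather stay in Python than use find/xargs, a sketch like this walks the subdirectories and runs a command per *.fastq file; qc_command.py stands in for whatever your real python command is:

```python
import os
import subprocess

ROOT = '.'  # the directory containing the 94 subdirectories

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if name.endswith('.fastq') and not name.startswith('qc_'):
            src = os.path.join(dirpath, name)
            dst = os.path.join(dirpath, 'qc_' + name)
            # Replace this with your actual per-file invocation.
            subprocess.call(['python', 'qc_command.py', src, dst])
```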
Unable to run Jug in windows
27,031,081
0
0
136
0
python,multiprocessing
I figured it out myself. It is related to python's relative import.
0
1
0
0
2014-11-19T20:24:00.000
1
0
false
27,026,298
1
0
0
1
I want to use Jug for parallel processing. I have a Canopy installed and I also installed Jug using command pip install jug according to the documentation online. In order to find where jug is installed, I installed jug again using the same command as above, it showed me: Requirement already satisfied (use --upgrade to upgrade): jug in c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages (from jug) Requirement already satisfied (use --upgrade to upgrade): six in c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages (from jug) Requirement already satisfied (use --upgrade to upgrade): redis in c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages (from jug) Now, I thought my jug is in the path of c:\users\[userfolder]\appdata\local\enthought\canopy\user\lib\site-package and it is there since I listed all files under this folder and I saw it. I am not sure this jug is a exe or py or something else, but I tried to run a command: jug C:\primes.py under this folder, it gave me error message said jug is not a recognized as the name of cmdlet, function, script file.... I also tried the command ./jug C:\primes.py and .\jug C:\primes.py, but none of them works. In addition, I tried python jug status C:\primes.py and this one gave me message of cannot find '_main_' module in 'jug'. Now I have no idea how to run jug. Has someone ever tried jug on windows could help me with it?
PyDev not appearing in Eclipse after install
27,035,599
0
3
7,516
0
python,eclipse
Had this same problem a few days ago. You might have downloaded the wrong version of PyDev for your python version (2.7.5 or something is my python version, but I downloaded PyDev for version 3.x.x) 1) Uninstall your current version PyDev 2) you have to install the correct version by using the "Install New Software", then uncheck the "Show only newest software" or whatever it is. Then select the version that matched your python version, and install :)
0
1
0
0
2014-11-20T08:11:00.000
3
0
false
27,034,666
1
0
1
1
I have Java version 7 and had installed PyDev version 3.9 from Eclipse Marketplace..but it's not showing up in New project or in Windows perspective in Eclipse..Can some one please tell me what i need to do ???
How to set sys.argv in a boost::python plugin
27,047,900
2
3
795
0
python,command-line-arguments,boost-python
Prior to your call to exec_file() (but after Py_Initialize()), you should invoke PySys_SetArgv(argc, argv); giving it the int argc and const char *argv[] from your program's main().
0
1
0
1
2014-11-20T19:05:00.000
1
1.2
true
27,047,518
0
0
0
1
I use boost::python to integrate Python into a C++ program. Now I would like that the Python program that is executed via boost::python::exec_file() can obtain the command line arguments of my C++ program via sys.argv. Is this possible?
Sudo does not find new python version
27,052,957
7
4
9,548
0
python,linux
Try export PATH+=:/usr/local/bin: the root account's PATH often does not include /usr/local/bin, which is why sudo cannot find python3.4 there.
0
1
0
0
2014-11-21T00:20:00.000
3
1.2
true
27,052,140
1
0
0
2
I had python2.6 on my linux box but installed python3.4 to use new modules. I installed it using sudo access. The new version was installed in /usr/local/bin. Without root access, I can use the new python3.4, both by just using python3.4 in the command line or using the shebang in the .py file #!/usr/local/bin/python3 Now I am trying to install a module, for which I need sudo access. When I am root, and I run python3.4, it says command not found. I ran whereis python and found the path to python2.6 in /usr/bin, but whereis python3.4 as root gives, not found in /usr/bin, which is correct since it is in /usr/local/bin. Again, if I exit from root, I have no trouble using python3.4 This seems like a $PATH issue (not sure), can some one help me what I am doing wrong while installing the module for the new python3.4? I was able to install the module, but it was installed in the old python2.6 site-packages.
Sudo does not find new python version
27,052,211
2
4
9,548
0
python,linux
You could have given the install location for Python 3.4 as /usr/bin. An easy approach could be to copy the Python 3.4 binary from /usr/local/bin to /usr/bin. Alternatively, you can install again using the --prefix parameter.
0
1
0
0
2014-11-21T00:20:00.000
3
0.132549
false
27,052,140
1
0
0
2
I had python2.6 on my linux box but installed python3.4 to use new modules. I installed it using sudo access. The new version was installed in /usr/local/bin. Without root access, I can use the new python3.4, both by just using python3.4 in the command line or using the shebang in the .py file #!/usr/local/bin/python3 Now I am trying to install a module, for which I need sudo access. When I am root, and I run python3.4, it says command not found. I ran whereis python and found the path to python2.6 in /usr/bin, but whereis python3.4 as root gives, not found in /usr/bin, which is correct since it is in /usr/local/bin. Again, if I exit from root, I have no trouble using python3.4 This seems like a $PATH issue (not sure), can some one help me what I am doing wrong while installing the module for the new python3.4? I was able to install the module, but it was installed in the old python2.6 site-packages.
How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"?
27,064,195
71
38
153,289
0
python,ubuntu,pycharm
To make it a bit more user-friendly: After you've unpacked it, go into the directory, and run bin/pycharm.sh. Once it opens, it either offers you to create a desktop entry, or if it doesn't, you can ask it to do so by going to the Tools menu and selecting Create Desktop Entry... Then close PyCharm, and in the future you can just click on the created menu entry. (or copy it onto your Desktop) To answer the specifics between Run and Run in Terminal: It's essentially the same, but "Run in Terminal" actually opens a terminal window first and shows you console output of the program. Chances are you don't want that :) (Unless you are trying to debug an application, you usually do not need to see the output of it.)
0
1
0
0
2014-11-21T14:15:00.000
8
1.2
true
27,063,361
1
0
0
4
When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options?
How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"?
71,020,778
0
38
153,289
0
python,ubuntu,pycharm
I did the edit and added the PATH for my PyCharm in .bashrc but I was still getting the error "pycharm.sh: command not found". After trying several other things, the following command resolved the issue by creating a symbolic link. sudo ln -s /snap/pycharm-community/267/bin/pycharm.sh /usr/local/bin/pycharm The first argument is the exact path to pycharm.sh and the second is the user bin directory, which should be on PATH by default
0
1
0
0
2014-11-21T14:15:00.000
8
0
false
27,063,361
1
0
0
4
When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options?
How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"?
52,049,395
12
38
153,289
0
python,ubuntu,pycharm
The question is already answered; updating the answer to add the PyCharm bin directory to the $PATH var, so that the pycharm editor can be opened from anywhere in the terminal. Edit the bashrc file, nano .bashrc Add the following line at the end of the bashrc file export PATH="<path-to-unpacked-pycharm-installation-directory>/bin:$PATH" Now you can open pycharm from anywhere in the terminal pycharm.sh
0
1
0
0
2014-11-21T14:15:00.000
8
1
false
27,063,361
1
0
0
4
When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options?
How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"?
69,640,211
0
38
153,289
0
python,ubuntu,pycharm
Yes, just go to the terminal and run: cd Downloads, then ls, then cd pycharm-community-2021.2.2 (your PyCharm version), then ls, then cd bin, then ls, then ./pycharm.sh. It will open the PyCharm project you were working on.
0
1
0
0
2014-11-21T14:15:00.000
8
0
false
27,063,361
1
0
0
4
When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options?
python-dev installation without package management?
36,137,101
1
7
21,320
0
python-2.7
Does python-dev source contain C/C++ code? Yes. It includes lots of header files and a static library for Python. Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine? python-dev is a package. Depending on your operating system, you can download a copy of the appropriate files by running, e.g., sudo apt-get install python-dev or sudo yum install python-devel.
0
1
0
1
2014-11-22T00:40:00.000
3
0.066568
false
27,072,734
0
0
0
1
I must install python-dev on my embedded linux machine, which runs python-2.7.2. The linux flavor is custom-built by TimeSys; uname -a gives: Linux hotspot-smc 2.6.32-ts-armv7l-LRI-6.0.0 #1 Mon Jun 25 18:12:45 UTC 2012 armv7l GNU/Linux The platform does not have package management such as 'yum' or 'apt-get', and for various reasons I prefer not to install one. It does have gcc. Does python-dev source contain C/C++ code? Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine? I have looked for the source but haven't been able to find it. Thanks, Tom
Discover IPC interface for undocumented program?
27,088,750
2
2
163
0
python,c++,c,linux,wine
Discovery of the IPC interface / IPC mechanisms of an undocumented program can involve gathering a lot of information by various means, putting it together and mapping the information. The ipcs command can be used to get information about all IPC objects. It provides information about currently active message queues, shared memory segments and semaphores. This is available as part of util-linux. Another option is to look at the shm folder in /proc/ to view the list of currently active shared memory segments in use before and after running your program. FIFOs are special files that are created using mkfifo, which you can identify from the file type p in ls -l output. Also, you can use the -p option of test to check whether a file is a pipe. /proc/<pid>/fd can help to gather more info. lsof is a very handy tool that can give you the list of open files and the processes that opened them. It can list the PID, PGID, PPID, owner of the process, the command that is being executed by the process and the files that are in use by the process. fuser can provide you the list of PIDs that use specific files or file systems. top/htop give you the list of processes running on your system. They can give a wide range of information, from the priority of the processes in the form of NI to memory usage via RES or %MEM. iotop can provide a table of current I/O usage by processes or threads on the system by monitoring the I/O usage information output by the kernel. mpstat can give 'the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request' via its 'iowait' field. strace intercepts/records the system calls that are made by a process and also the signals that are received by a process. strace will be able to show the order of events and all the return/resumption paths of calls. LTTng is a combination of kprobes, tracepoints and perf functionalities that can help in tracing interrupts and race conditions.
0
1
0
0
2014-11-23T07:47:00.000
2
1.2
true
27,086,731
0
0
0
2
I have a program without documentation. I am wondering if there is a way to discover if it has any interface for interprocess communication. Are there any tools that search through an executable to discover such interfaces? I am interested in learning anything about such a program, like if it supports any command line options or arguments, or whatever else may be discoverable. I primarily use Linux, and some of the programs I would like to interface with are Windows programs running via wine. I program in C and C++, and some Python. A related question; is there a way to programmatically simulate clicking a button in some other window on the computer screen?
Discover IPC interface for undocumented program?
27,086,912
1
2
163
0
python,c++,c,linux,wine
Some Windows programs use DCOM for interprocess communication. There are a few programs to extract this interface from DLL and EXE files. Otherwise you have to disassemble the program and look at the code directly, which is non-trivial. For your last question: Windows programs use a message system to communicate with the GUI. You can use SendMessage to simulate any message, such as clicking a button.
0
1
0
0
2014-11-23T07:47:00.000
2
0.099668
false
27,086,731
0
0
0
2
I have a program without documentation. I am wondering if there is a way to discover if it has any interface for interprocess communication. Are there any tools that search through an executable to discover such interfaces? I am interested in learning anything about such a program, like if it supports any command line options or arguments, or whatever else may be discoverable. I primarily use Linux, and some of the programs I would like to interface with are Windows programs running via wine. I program in C and C++, and some Python. A related question; is there a way to programmatically simulate clicking a button in some other window on the computer screen?
celery: programmatically queue task to a specific queue?
27,100,098
0
0
283
0
python,celery
You should do something like this: sometask.apply_async(args=['args1', 'args2'], queue='dev')
0
1
0
0
2014-11-24T06:40:00.000
1
1.2
true
27,099,187
0
0
1
1
When I do sometask.async_apply(args=['args1','args2'], kwargs={'queue': 'dev'}) nothing ends up on the queue 'dev'. I'm wondering if I am missing a step somewhere? I have created the queue 'dev' already and it shows up under queues when I check the rabbitmq management.
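For completeness, a hedged sketch of the corrected call in context: queue is an option of apply_async itself, whereas kwargs are keyword arguments passed to the task function (the app name, broker URL and task body are illustrative):

```python
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')  # illustrative broker URL


@app.task
def sometask(a, b):
    return a + b


# Wrong: kwargs={'queue': 'dev'} would be passed to sometask() itself.
# Right: queue is a routing option of apply_async.
sometask.apply_async(args=[1, 2], queue='dev')
```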
how to get python 2.7 into the system path on Redhat 6.5 Linux
27,115,695
0
1
18,214
0
python,linux,ipython,redhat
Your $PATH is fine, since you can run python without specifying the full path, aka /usr/bin/python. You get 2.6.6 in the IPython directory because it has its own python executable in it, presumably named python. 2.7.5 is installed system-wide. To call 2.7.5 from the IPython dir, use the full path /usr/bin/python, or whatever which python points to. Try out Python virtualenv if you need two or more versions of Python on your system. Otherwise, having different versions is not a good idea.
0
1
0
0
2014-11-24T22:41:00.000
4
0
false
27,115,526
1
0
0
2
I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5 I've narrowed this down to: When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5 However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6 I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python. I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it).
how to get python 2.7 into the system path on Redhat 6.5 Linux
27,116,633
0
1
18,214
0
python,linux,ipython,redhat
I think I know what is happening - abarnert pointed out that the cwd (".") may be in your path which is why you get the local python when you're running in that directory. Because the cwd is not normally setup in the global bashrc file (/etc/bashrc) it's probably in your local ~/.bashrc or ~/.bash_profile. So edit those files and look for something like PATH=$PATH:. and remove that line. Then open a new window (or logout and log back in) to refresh the path setting and you should be OK.
0
1
0
0
2014-11-24T22:41:00.000
4
0
false
27,115,526
1
0
0
2
I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5 I've narrowed this down to: When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5 However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6 I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python. I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it).
Python socket.getdefaulttimeout() is None, but getting "timeout: timed out"
27,116,549
0
1
536
0
sockets,python-2.7,timeout
Not knowing anything more, my guess would be that the NAT tracking entry expires due to long inactivity, and unfortunately in most cases you won't be able to discover the exact timeout value. A workaround would be to introduce some sort of keep-alive packets to your protocol if there's such a possibility.
0
1
0
0
2014-11-24T23:49:00.000
1
0
false
27,116,358
0
0
0
1
How can I determine what the numeric timeout value is that is causing the below stack trace? ... File "/usr/lib/python2.7/httplib.py", line 548, in read s = self._safe_read(self.length) File "/usr/lib/python2.7/httplib.py", line 647, in _safe_read chunk = self.fp.read(min(amt, MAXAMOUNT)) File "/usr/lib/python2.7/socket.py", line 380, in read data = self._sock.recv(left) timeout: timed out After importing my modules, the result of socket.getdefaulttimeout() is None (note that this isn't the same situation as what produced the above, since getting those requires an 8-hour stress run on the system). My code is not setting any timeout values (default or otherwise) AFAICT. I have not yet been able to find any hint that 3rd party libraries are doing so either. Obviously there's some timeout somewhere in the system. I want to know the numeric value, so that I can have the system back off as it is approached. This is python 2.7 under ubuntu 12.04. Edit: The connection is to localhost (talking to CouchDB listening on 127.0.0.1), so NAT shouldn't be an issue. The timeout only occurs when the DB is under heavy load, which implies to me that it's only when the DB is getting backed up and cannot respond quickly enough to requests, which is why I would like to know what the timeout is, so I can track response times and throttle incoming requests when the response time gets over something like 50% of the timeout.
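One hedged way to at least make the value knowable: if socket.getdefaulttimeout() is None, any timeout you hit comes from a per-socket settimeout() buried in some library, or from the OS, so setting an explicit default puts a number you control on it (30 seconds here is illustrative):

```python
import socket

socket.setdefaulttimeout(30.0)     # illustrative value, in seconds
print(socket.getdefaulttimeout())  # now prints 30.0 instead of None
```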
Python program needs full path in Notepad++
27,117,505
2
1
510
0
python,file-io,cmd,notepad++
In the properties of the shortcut that you use to start Notepad++, you can change its working directory, to whichever directory you're more accustomed to starting from in Python. You can also begin your python program with the appropriate os.chdir() command.
0
1
0
0
2014-11-25T01:47:00.000
2
0.197375
false
27,117,461
1
0
0
1
Not a major issue but just an annoyance I've come upon while doing class work. I have my Notepad++ set up to run Python code straight from Notepad++ but I've noticed when trying to access files I have to use the full path to the file even given the source text file is in the same folder as the Python program being run. However, when running my Python program through cmd I can just type in the specific file name sans the entire path. Does anyone have a short answer as to why this might be or maybe how to reconfigure Notepad++? Thanks in advance.
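Besides changing the working directory to a fixed location, a common sketch is to anchor it to the script's own folder, so relative file names resolve next to the script no matter where Notepad++ was started from:

```python
import os

# Make the script's own directory the working directory, so that
# open('input.txt') finds the file sitting beside the script.
os.chdir(os.path.dirname(os.path.abspath(__file__)))

with open('input.txt') as f:  # assumes input.txt lives beside the script
    print(f.read())
```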
How to change the console font size in Eclipse-PyDev
55,985,898
1
1
2,839
0
python,eclipse,fonts
Solution: Help > Preferences > General > Appearance > Colors And Fonts > Structured Text Editors > Edit
1
1
0
1
2014-11-25T09:18:00.000
2
0.099668
false
27,122,732
0
0
0
1
I am trying to use the PyDEV console in eclipse for a demonstration of some Python code. For this demonstration I need to resize the default font size used in the PyDev console window. Some googling led me to change the 'General/Appearance/Colors and Fonts/Debug/Console Font', but that didn't work. I tried changing all candidates I could identify in the Colors and Font settings, but none of them influences the font size in the PyDev console window. Is there any way to achieve this? This is in eclipse 4.3.2 (kepler) with Pydev 3.8
SSH into EC2 Instance created by EBS
27,139,451
1
1
723
0
python,amazon-web-services,ssh,amazon-ec2,amazon-elastic-beanstalk
You have to declare the keypair to use in the web console. Go to elasticbeanstalk > your application > edit configuration > Instances > select keypair. Alternatively, this sounds like a hack, but you can write a Python script that imports the modules that you installed and throws an error if a module is not found. The error is captured and you can view it in the web logs.
0
1
0
1
2014-11-26T00:21:00.000
2
1.2
true
27,139,271
0
0
0
1
I am using python and Amazon EC2 I am trying to progrmamtically SSH into the instance created by the Elastic Beanstalk Worker. While using 'eb init' there is no option to specify the KeyPair to use for the instance and hence I am not able to SSH into it. Reason for me to do this is that I want to check if the dependencies in requirements.txt are installed correctly in the instance. Is there any other way to check this other than SSHing into the instance and checking?
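A hedged sketch of the 'script that imports your modules' idea from the accepted answer: it tries each distribution name from requirements.txt and logs failures, which then show up in the web logs. The name-to-module mapping is naive, since some packages import under a different name than they install under:

```python
import importlib
import logging

logging.basicConfig(level=logging.INFO)

with open('requirements.txt') as f:
    for line in f:
        name = line.strip().split('==')[0]
        if not name or name.startswith('#'):
            continue
        try:
            importlib.import_module(name)
            logging.info('%s imported OK', name)
        except ImportError:
            # This error surfaces in the Elastic Beanstalk web logs.
            logging.error('missing dependency: %s', name)
```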
nc.traditional -e script.py Not printing anything
27,194,201
0
0
136
0
python,netcat
So, I finally found the solution. In the shebang, add the option -u to the interpreter to unbuffer stdin, stdout and stderr. Shebang line: #!/usr/bin/python -u
0
1
0
0
2014-11-27T13:40:00.000
1
1.2
true
27,172,062
0
0
0
1
I am starting a basic netcat server with the command nc.traditional -l -e server.py -p 4567 Problem is when I'm connecting to it (telnet 127.0.0.1 4567), the script starts but nothing gets on screen. I have print instructions on the beginning of the script that are read by the interpreter (I tested that it starts via file manipulation) but nothing is written on my telnet terminal. Moreover, it stays stuck on a raw_input instruction. I can write in the telnet terminal, but nothing seems to be sent to the python script. I've tried with a bash script replacing the python one and this works, it prints things on screen and read inputs. I've also tried connecting via ftp instead of telnet without results.
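If editing the shebang is not an option, a hedged alternative is to flush the streams by hand after every write, which has the same effect as -u for output:

```python
import sys

sys.stdout.write('What is your name? ')
sys.stdout.flush()  # force the prompt through the netcat pipe immediately
name = sys.stdin.readline().strip()
sys.stdout.write('Hello, %s\n' % name)
sys.stdout.flush()
```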
To run python script in apache spark/Storm
27,195,171
1
0
1,217
0
python,hadoop,apache-spark
First and foremost, what are you trying to achieve? What does running on Hadoop technology mean to you? If the goal is to work with a lot of data, this is one thing; if it's to parallelize the algorithm, it's another. My guess is you want both. The first thing is: is the algorithm parallelizable? Can it run on multiple pieces of data at the same time and gather them all in the end to make the final answer? Some algorithms are not, especially if they are recursive and require previously computed data to process the next piece. In any case, running on Hadoop means running using Hadoop tools, whether it is Spark, Storm or other services that can run Python; taking advantage of Hadoop means writing your algorithm for it. If your algorithm is parallelizable, then likely you can easily take the piece that processes one item of data and adapt it to run with Spark or Storm on huge datasets.
0
1
0
0
2014-11-28T16:35:00.000
1
0.197375
false
27,192,852
0
1
0
1
I am having an algorithm written in python (not hadoop compatible i.e. not mapper.py and reducer.py) and it is running perfectly in local system (not hadoop). My objective is to run this in hadoop. Option 1: Hadoop streaming. But, I need to convert this python script into mapper and reducer. Any other way? Option 2: To run this python script through Storm. But, I am using cloudera which doesn't have Storm. either I need to install storm in cloudera or need to use Spark. If I install storm in cloudera. Is it better option? Option 3: To run this python script through Spark (Cloudera). Is it possible. This algorithm is not for real time processing. But, we want to process it in hadoop technology. Please help with other suitable solution.
Storing client secrets on Django app on App Engine
27,214,235
0
7
812
0
python,django,google-app-engine
You can hardly hide the secret keys from an attacker that can access your server, since the server needs to know the keys. But you can make it hard for an attacker with low privileges. Obfuscating is generally not considered a good practice. Your option 5 seems reasonable. Storing the keys in a non-source-controlled file allows you to keep the keys in a single and well-defined place. You can set appropriate permissions on that file so that an attacker would need high privileges to open it. Also make sure that high privileges are required to edit the rest of the project; otherwise, the attacker could modify a random file of the project to access the keys. I myself use your option 5 in my projects.
0
1
0
0
2014-11-30T14:06:00.000
2
0
false
27,214,104
0
0
1
2
I have a Django app that uses some secret keys (for example for OAuth2 / JWT authentication). I wonder where is the right place to store these keys. Here are the methods I found so far: Hardcoding: not an option, I don't want my secrets on the source control. Hardcoding + obfuscating: same as #1 - attackers can just run my code to get the secret. Storing in environment variables: my app.yaml is also source-controlled. Storing in DB: Not sure about that. DB is not reliable enough in terms of availability and security. Storing in a non-source-controlled file: my favorite method so far. The problem is that I need some backup for the files, and manual backup doesn't sound right. Am I missing something? Is there a best practice for storing secret keys for Django apps or App Engine apps?
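A minimal sketch of option 5: secrets live in a JSON file that is excluded from source control, loaded once at startup. The file name and key names are illustrative:

```python
import json
import os

# secrets.json is listed in .gitignore and might contain, e.g.:
# {"OAUTH_CLIENT_SECRET": "...", "JWT_SIGNING_KEY": "..."}
SECRETS_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                            'secrets.json')

with open(SECRETS_PATH) as f:
    SECRETS = json.load(f)

JWT_SIGNING_KEY = SECRETS['JWT_SIGNING_KEY']
```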
Storing client secrets on Django app on App Engine
37,824,536
0
7
812
0
python,django,google-app-engine
A solution I've seen is to store an encrypted copy of the secret configuration in your repository using gpg. Depending on the structure of your team you could encrypt it symmetrically and share the password to decrypt it or encrypt it with the public keys of core members / maintainers. That way your secrets are backed up the same way your code is without making them as visible.
0
1
0
0
2014-11-30T14:06:00.000
2
0
false
27,214,104
0
0
1
2
I have a Django app that uses some secret keys (for example for OAuth2 / JWT authentication). I wonder where is the right place to store these keys. Here are the methods I found so far: Hardcoding: not an option, I don't want my secrets on the source control. Hardcoding + obfuscating: same as #1 - attackers can just run my code to get the secret. Storing in environment variables: my app.yaml is also source-controlled. Storing in DB: Not sure about that. DB is not reliable enough in terms of availability and security. Storing in a non-source-controlled file: my favorite method so far. The problem is that I need some backup for the files, and manual backup doesn't sound right. Am I missing something? Is there a best practice for storing secret keys for Django apps or App Engine apps?
How do I pass control on to a different terminal tab using perl?
27,242,918
1
0
134
0
python,perl,ubuntu,automation,perl-module
The main thing to understand is that each tab has a different instance of the terminal running, and more importantly a different instance of the shell (just thought I would mention that, as it didn't seem like you were clear about it from your choice of words). So "passing control" in such a scenario would most probably entail inter-process communication (IPC). Now that opens up a range of possibilities. You could, for example, have a python/perl script running in the target shell (tab) to listen on a unix socket for commands in the form of text, which the script can then execute. In Python, you have the modules subprocess (call, Popen) and os (exec*) for this. If you have to transfer control back to the calling process, then I would suggest using subprocess, as you would be able to send back return codes too. Switching between tabs is a different action and has no consequences for the calling/called processes. And you have already mentioned how you intend on doing that.
0
1
0
1
2014-12-01T10:39:00.000
1
0.197375
false
27,226,551
0
0
0
1
I am trying to automate a scenario in which I have a terminal window open with multiple tabs open in it. I am able to migrate between the tabs, but my problem is how do I pass control to another terminal tab while I run my perl script in a different tab. Example: I have a terminal open with Tab1, Tab2, Tab3, Tab4 open in the same terminal; I run the perl script in Tab3 and I would want to pass some commands onto Tab1. Could you please tell me how I can do this? I use the GUI tool X11::GUITest and keyboard shortcuts to switch between tabs; any alternative suggestion is welcome. My ultimate aim is to pass control on to a different tab.
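A sketch of the Unix-socket approach from the answer: a small Python listener running in the target tab executes whatever command line it receives. The socket path is a hypothetical rendezvous point, and shell=True is unsafe with untrusted input:

```python
import os
import socket
import subprocess

SOCK_PATH = '/tmp/tab1.sock'  # hypothetical rendezvous point
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    cmd = conn.recv(4096).decode().strip()
    # Run the received text as a command in this tab's shell environment
    # and send the return code back to the caller.
    rc = subprocess.call(cmd, shell=True)
    conn.sendall(str(rc).encode())
    conn.close()
```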
How can I find out what different versions of Python installed on OSX?
27,246,214
0
0
132
0
python
You can use the which python command to find which python you are using. Its output is often the path to a symlink; you can find the original python by resolving the symlink.
0
1
0
0
2014-12-02T09:27:00.000
3
0
false
27,245,890
1
0
0
1
It looks like there is more than one Python installed on my mac. Modules installed are not recognized by python interpreter (2.7.6) until I add them to PYTHONPATH. Could anyone show me how I can locate all the Pythons installed on my mac? Thank you
Unexpected EOF for https with python requests under nginx
28,087,767
0
0
1,403
0
python-2.7,nginx,python-requests,pyopenssl
Many TLS clients and servers consider it reasonable to abruptly close the TCP connection without finishing the TLS disconnect handshake. They may not do it all the time. It may depend on very specific, esoteric network conditions (eg, how quickly certain sends are executed). When this happens, you get the error you've reported. Typically this isn't actually a problem. All application data has been transferred already. Unfortunately you can't be entirely sure about this (that's part of the reason there is a TLS disconnect handshake) but there may also be little or nothing you can do about it. I don't know that nginx's TLS support closes connections this way but if this is the only symptom (in other words, if you're not losing application data) and your server uses Content-Length or Transfer-Encoding: chunked (to offer you some other protection against truncation attacks) this might just be expected behavior.
0
1
0
0
2014-12-03T12:26:00.000
1
0
false
27,271,747
0
0
0
1
After installing pyopenssl, ndg-httpsclient and pyasn1 to support SSL with SNI. I get the following error, for certain https urls: (-1, 'Unexpected EOF') only when running under nginx tried: removing the gzip from nginx.
set thread affinity in python and java
27,275,002
0
1
1,814
0
java,python,c++,multithreading,affinity
I'm not sure I understand what you want exactly, but in Java I remember that I could launch multiple JVMs and run my Java programs in different OS processes, using inter-process communication (sockets, pipes or whatever you want) to do multi-core processing and synchronization. Knowing that, it might be possible to then pin a process (a whole JVM) exclusively to a core. You can get the PID of the JVM.
0
1
0
0
2014-12-03T13:54:00.000
2
0
false
27,273,499
0
0
0
1
When I create a thread with Java or Python I can't find the pid among the operating system threads. In fact get_ident() in Python gives me a very large number that can't be the PID. In fact I need to set the process affinity of all other threads to the first processor core then I want to dedicate the other cores to my specific threads run in a program to create a real time environment. The threads will be less than remaining processor cores. As I have read in stackoverflow it is not possible in Java and there will be the esigence of native calls. Can it be done in Python or I must use C/C++? The program will be installed in a Linux machine.
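On Linux, CPython can pin the current process to specific cores without any native calls, via os.sched_setaffinity (Python 3.3+, Linux only); note this works at process granularity, not per Python thread, so it is only a partial answer to the question:

```python
import os

# Pin this process to cores 1-3, leaving core 0 free for everything else.
os.sched_setaffinity(0, {1, 2, 3})  # pid 0 means the calling process
print(os.sched_getaffinity(0))      # verify the new CPU set
```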
Find C:\Programs path (not scripts path)
27,278,464
2
0
124
0
python,windows,path,directory
I second @iCodez's answer to use os.getenv to get the path string from a system environment variable, but you might want to use the paths defined for APPDATA or LOCALAPPDATA instead. Windows permissions settings on the Program Files directory may prevent a standard user account from writing data to the directory. I believe the APPDATA and LOCALAPPDATA paths were designed for just such a use. On my system, APPDATA = C:\Users\myname\AppData\Roaming and LOCALAPPDATA = C:\Users\myname\AppData\Local. My user account has full read/write permission for both directories.
0
1
0
0
2014-12-03T17:39:00.000
2
0.197375
false
27,278,145
0
0
0
1
I've made a game and I'd like to save the highscore. So I need a place to do that. I chose to put it in the C:\All Programs directory. My problem is that that directory name isn't the same on every computer. For example on mine it's C:/Program Files (x86). So my question: Is there a way, to discover that path on any computer? PROBLEM SOLVED: os.getenv('PROGRAMFILES')
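Following the answer above, a sketch that builds a per-user save location from APPDATA rather than Program Files, avoiding the permission problem entirely (the game folder name is illustrative):

```python
import os

base = os.getenv('APPDATA')  # e.g. C:\Users\name\AppData\Roaming
save_dir = os.path.join(base, 'MyGame')
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)

with open(os.path.join(save_dir, 'highscore.txt'), 'w') as f:
    f.write('12345')
```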
can I combine NDB and mysqldb in one app on google cloud platform
28,197,823
0
1
62
1
google-app-engine,google-cloud-storage,google-cloud-datastore,mysql-python,app-engine-ndb
MySQL commands cannot be run on NoSQL. You will need to do some conversions during manipulation of the data from both DBs.
0
1
0
0
2014-12-03T17:47:00.000
1
0
false
27,278,297
0
0
1
1
Is it just about creating models that use the best fitting data store API? For part of the data I need relations, joins and sum(). For other this is not necessary but nosql way is more appropriate.
Plone Unified Installer missing Python
27,300,059
4
1
226
0
linux,python-2.7,plone,sles
The installer command: ./install.sh standalone --build-python --static-lxml=yes worked perfectly for me. The installer downloaded and built the Python and libxml2/libxslt components necessary to remedy the terribly out-of-date (and vulnerable) versions included with sles11sp3. System packages needed for the build were: gcc-c++ make readline-devel libjpeg-devel zlib-devel patch libopenssl-devel libexpat-devel man All installed via zypper. I'd advise not using sudo for the install. If you want to, you'll need to create the plone_daemon and plone_buildout users and the plone_group group in advance due to oddities in SUSE's adduser implementation.
0
1
0
0
2014-12-04T11:42:00.000
1
0.664037
false
27,293,173
0
0
1
1
I'm trying to install plone 4.3.4 on a SLES 11 SP3 64bit server via the Unified Installer. I've fullfilled all the dependencies listed in the readme.txt, but when I try to get the installer running with the command sudo ./install.sh --password=******* standalone I get the error message: which: no python2.7 in (/usr/bin:/bin:/usr/sbin:/sbin) Unable to find python2.7 on system exec path. I find that rather strange as in the description of the unified installer it is said "The new Zope/Plone install will use its own copy of Python, and the Python installed by the Unified Installer will not replace your system's copy of Python. You may optionally use your system (or some other) Python, and the Unified Installer will use it without modifying it or your site libraries." on the Plone-Website. So - what am I doing wrong??? I've just tried adding the parameter --build-python but had to find out that the libxml2-devel and libxslt-devel libraries that are available for SLES-11-SP-3 are sadly not up-to-date enough 2.7.6 instead of 2.7.8 and 1.1.24 instead of 1.1.26 respectively. So no joy there either. :-( Is there any way to install the current version of plone on SLES 11 SP3 64bit? Kate
/usr/bin/python vs /opt/local/bin/python2.7 on OS X
27,308,244
8
6
10,161
0
python,macos,python-2.7,numpy,matplotlib
Points to keep in mind about Python If a script foobar.py starts with #!/usr/bin/env python, then you will always get the OS X Python. That's the case even though MacPorts puts /opt/local/bin ahead of /usr/bin in your path. The reason is that MacPorts uses the name python2.7. If you want to use env and yet use MacPorts Python, you have to write #!/usr/bin/env python2.7. If a script foobar.py starts explicitly with #!/usr/bin/python or with #!/opt/local/bin/python2.7, then the corresponding Python interpreter will be used. What to keep in mind about pip To install pip for /usr/bin/python, you need to run sudo /usr/bin/easy_install pip. You then call pip (which will not be installed by easy_install in /usr/bin/pip, but rather in /usr/local/bin/pip) To install pip for /opt/local/bin/python2.7, you need to run sudo port install py27-pip. You would then call pip-2.7. You will get the pip in /opt/local/bin. Be careful, because if you type pip2.7 you will get /usr/local/bin/pip2.7 (the OS X pip). Installing networkx and matplotlib To install networkx for the OS X Python you would run sudo /usr/local/bin/pip install networkx. I don't know how to install matplotlib on OS X Lion. It may be that OS X has to stick to numpy 1.5.1 because it uses it internally. To install networkx and matplotlib for MacPorts-Python, call sudo pip-2.7 install networkx and sudo pip-2.7 install matplotlib. matplotlib installs with a lot of warnings, but it passes.
0
1
0
0
2014-12-05T03:25:00.000
2
1.2
true
27,308,234
1
1
0
2
Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts? While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer to /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7) How can I be certain which Python, pip, and Python libraries I am using at any one time? More importantly, it appears that installing matplotlib is not possible on Lion. It fails with Requires numpy 1.6 or later to build. (Found 1.5.1). If I upgrade by running sudo pip install --upgrade numpy, it does not help. Subsequently attempting to install matplotlib (sudo /usr/local/bin/pip install matplotlib) still fails with the same (Requires numpy 1.6...) message. How can I install matplotlib?
/usr/bin/python vs /opt/local/bin/python2.7 on OS X
27,400,616
0
6
10,161
0
python,macos,python-2.7,numpy,matplotlib
May I also suggest using Continuum Analytics' "anaconda" distribution. One benefit of doing so would be that you won't then need to modify the standard OS X python environment.
0
1
0
0
2014-12-05T03:25:00.000
2
0
false
27,308,234
1
1
0
2
Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts? While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer to /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7) How can I be certain which Python, pip, and Python libraries I am using at any one time? More importantly, it appears that installing matplotlib is not possible on Lion. It fails with Requires numpy 1.6 or later to build. (Found 1.5.1). If I upgrade by running sudo pip install --upgrade numpy, it does not help. Subsequently attempting to install matplotlib (sudo /usr/local/bin/pip install matplotlib) still fails with the same (Requires numpy 1.6...) message. How can I install matplotlib?
Log in to Windows from a Python service?
27,330,723
0
0
46
0
python,windows,login
So I've found a way to do it from a Windows service (written in C++), and presumably the ctypes library will permit me to use it. It's as simple as using LogonUser from the Win32 API, so far as I can see. I have yet to actually set it up and test it, but it does seem to be exactly what I need. The difficulty is that session 0 can only be accessed once logged in or via some remote debugging, so getting something like this working is no easy feat.
0
1
0
0
2014-12-05T11:47:00.000
1
1.2
true
27,315,254
1
0
0
1
I'm theory crafting a Python service which will manipulate domain joined machines into running tests as part of a suite. Details of requirements We must be logged in as a domain user, and we must not have the automatic login enabled We need to reboot machines a few times, so it's a requirement for this to be sustainable I'm wondering if it's possible to, from a Python service, somehow convince Windows to log us in? Presumably a Python service runs in Session0, as does Microsoft's Hardware Certification Kit? If that's capable of doing it, Python should also be (so far as I can see). Any suggestions most welcome, I've got a suspicion there's a cheeky Windows API call that does this, but can't seem to find it anywhere.
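A hedged ctypes sketch of the LogonUser call mentioned in the answer; untested here, and note that LogonUser yields an access token rather than an interactive desktop session, so it may not fully solve the session 0 problem on its own (constants per the Win32 documentation):

```python
import ctypes
from ctypes import wintypes

LOGON32_LOGON_INTERACTIVE = 2
LOGON32_PROVIDER_DEFAULT = 0

advapi32 = ctypes.windll.advapi32
token = wintypes.HANDLE()

ok = advapi32.LogonUserW(
    u'testuser', u'MYDOMAIN', u'password',  # illustrative credentials
    LOGON32_LOGON_INTERACTIVE,
    LOGON32_PROVIDER_DEFAULT,
    ctypes.byref(token),
)
if not ok:
    raise ctypes.WinError()
print('logon token handle:', token.value)
```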
Starting a python script at boot and loading GUI after that
27,344,131
0
0
1,590
0
python,linux,boot,raspbian,autostart
Try using the @reboot option in crontab: @reboot python /path/to/pythonfile.py
0
1
0
1
2014-12-06T23:03:00.000
2
0
false
27,337,587
0
0
0
1
Can anyone tel me how to start a python script on boot, and then also load the GUI ? I am debian based Raspbian OS. The reason I want to run the python script on boot is because I need to read key board input from a RFID reader. I am currently using raw_input() to read data from the RFID reader. The 11 character hex value is then compared against a set of values in a txt file. This raw_input() did not work for me on autostarting python script using crontab and also using with LXDE autostart. So, I am thinking to run python script at boot, so that it reads keyboard input. If there are any other ways of reading keyboard input using crontab autostart and LXDE autostart, please let me know.
uWSGI --http :80 doesn't listen IPv6 interface
27,342,634
7
3
1,462
0
http,python-3.x,ipv6,uwsgi
In your INI config file specify something like this [uwsgi] socket = [::]:your_port_number Or from the command line, ./uwsgi -s [::]:your_port_number The server will now listen on all interfaces (including IPv4, if the underlying OS supports dual-stack TCP sockets)
0
1
0
1
2014-12-07T11:41:00.000
1
1.2
true
27,342,256
0
0
1
1
Why doesn't uWSGI listen on IPv6 interface, even if system is 100% IPv6 ready? As far as I could see there aren't parameters nor documentation covering this issue.
Creating installers or executables on Linux for every supported platform for a Kivy game
27,349,249
2
3
464
0
python,python-3.x,packaging,kivy
All are possible, but I'm not sure what people are recommending right now - the Kivy website has instructions for pyinstaller (specifically on windows as I remember, but it works well on other platforms too), with the disadvantage that pyinstaller only supports python2 right now. You can use other tools too, I've seen some activity with e.g. nuitka, but I don't know the current state. Your best bet may be to ask on the kivy mailing list or irc, where some of the people using these tools are most likely to be around to comment. I haven't seen anyone do .deb or .rpm. I'm fairly sure it shouldn't be too hard, though you'd need to do some stuff yourself to make it work since you'd quite likely be forging new ground. Android and iOS are covered only by kivy's own build tools. These are fine on android, I can't comment on iOS.
1
1
0
0
2014-12-07T20:20:00.000
1
1.2
true
27,347,356
0
0
0
1
Is there a tool that creates installers from the source code of a Kivy game for all the different supported platforms with a single button press? Linux: .deb, .rpm or just a portable .zip file that contains a .sh script Windows: .exe (installer or portable executable) Mac: .app (installer or portable executable) and possibly Android and iOS If not, is it possible?
speed limit of syn scanning ports of multiple targets?
29,195,455
0
0
126
0
python,linux,performance
Months ago I found out that this problem is well known as the c10k problem. It has to do, amongst other things, with how the kernel allocates and processes TCP connections internally. The only efficient way to address the issue is to bypass the kernel TCP stack and implement various other low-level things on your own. All good approaches I know of work with low-level async implementations. There are some good ways to deal with the problem depending on the scale. For further information I would recommend searching for the c10k problem.
0
1
1
1
2014-12-08T04:18:00.000
1
1.2
true
27,351,360
0
0
0
1
I've coded a small raw packet syn port scanner to scan a list of ips and find out if they're online. (btw. for Debian in python2.7) The basic intention was to simply check if some websites are reachable and speed up that process by preceding a raw syn request (port 80) but I stumbled upon something. Just for fun I started trying to find out how fast I could get with this (fastest as far as i know) check technique and it turns out that despite I'm only sending raw syn packets on one port and listening for responses on that same port (with tcpdump) the connection reliability quite drops starting at about 1500-2000 packets/sec and shortly thereafter almost the entire networking starts blocking on the box. I thought about it and if I compare this value with e.g. torrent seeding/leeching packets/sec the scan speed is quiet slow. I have a few ideas why this happens but I'm not a professional and I have no clue how to check if I'm right with my assumptions. Firstly it could be that the Linux networking has some fancy internal port forwarding stuff running to keep the sending port opened (maybe some sort of feature of iptables?) because the script seems to be able to receive syn-ack even with closed sourceport. If so, is it possible to prevent or bypass that in some fashion? Another guess is that the python library is simply too dumb to do real proper raw packet management but that's unlikely because its using internal Linux functions to do that as far as I know. Does anyone have a clue why that network blocking is happening? Where's the difference to torrent connections or anything else like that? Do I have to send the packets in another way or anything?
create apscheduler job trigger from cron expression
27,396,550
0
1
3,162
0
python,cronexpression,apscheduler
Given that APScheduler supports a slightly different set of fields, it's not immediately obvious how those expressions would map to CronTrigger's arguments. I should also point out that the preferred method of scheduling jobs does not involve directly instantiating triggers, but giving the arguments to add_job() instead. If you want to do that yourself, you could simply split the expression and map the elements to whichever trigger arguments you want.
0
1
0
0
2014-12-09T11:30:00.000
3
1.2
true
27,377,908
0
0
0
1
I'm trying to run some scheduled jobs using cron expressions in python. I'm new to python and I've already worked with quartz scheduler in java to achieve almost the same thing. Right now, I am trying to work with apscheduler in python. I know that it is possible to do this using crontrig = CronTrigger(minute='*', second = '*'); But, I was working with cron expressions (like "0/5 * * * * *") and I would like to know if there is anything which could directly parse the expression and generate a CronTrigger.
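A hedged sketch of the splitting approach the answer suggests: map a cron expression's fields onto CronTrigger keyword arguments. It assumes the Quartz-style six-field layout (second minute hour day month day_of_week) when six fields are present, and ignores the fact that cron and APScheduler number weekdays differently; Quartz step syntax like 0/5 may also need translating to APScheduler's */5 form:

```python
from apscheduler.triggers.cron import CronTrigger


def cron_to_trigger(expr):
    fields = expr.split()
    if len(fields) == 6:
        # Quartz-style expression with a leading seconds field.
        second, minute, hour, day, month, day_of_week = fields
        return CronTrigger(second=second, minute=minute, hour=hour,
                           day=day, month=month, day_of_week=day_of_week)
    # Standard 5-field cron: minute hour day-of-month month day-of-week.
    minute, hour, day, month, day_of_week = fields
    return CronTrigger(minute=minute, hour=hour, day=day,
                       month=month, day_of_week=day_of_week)


trigger = cron_to_trigger('*/5 * * * * *')
print(trigger)
```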
online documentation for old versions of nose
27,390,659
1
1
41
0
python,nose
You should be able to upgrade nosetests via pip, while still staying with python 2.6. At least, nose 1.3.4 (latest as of this writing) installs cleanly inside the py2.6 virtualenv I just threw together. I don't have any py2.6-compatible code to hand to show that it's working correctly, though.
0
1
0
1
2014-12-09T23:07:00.000
2
0.099668
false
27,390,553
0
0
0
1
The Question Where can I access the documentation for legacy versions of the nose testing framework? Why I have to support some python code that must run against python 2.6 on a Centos 6 system. It is clear from experimentation that nosetests --failed does not work on this system. I'd like to know if I'm just missing a module or not. More generally, I need to know what capabilities of nose that I have grown used to I will have to do without, without having to check for them individually.
How to use docker for deployment and development?
27,399,888
0
2
286
0
python,git,docker
In the development case I would just use docker's -v option to mount the current working copy into a well known location in the container and provide a small wrapper shell script that automates firing up the app in the container.
0
1
0
0
2014-12-10T10:22:00.000
2
0
false
27,398,538
0
0
0
2
Suppose I have a python web app. I can create docker file for installing all dependencies. But then (or before it if I have requirements for pip) I have like two different goals. For deployment I can just download all source code from git through ssh or tarballs and it would work. But for a developer machine it wouldn't work. I would need then work on actual source code. I know that I can 'mirror' any folder/files from host machine to docker container. So ok, I can then remove all source code, that was downloaded when image was built and 'mirror' current source code that exists in developer machine. But if developer machine don't have any source code downloaded with git clone, it wouldn't work. So what to do in that case? I mean except the obvious - clone all repos on developer machine and then 'mirror' it? So what is the right way to use docker not only for deployment but for the development also?
How to use docker for deployment and development?
27,402,917
1
2
286
0
python,git,docker
Providing the developer with a copy of the repository to work with is not docker's responsibility. Many people do it the other way around - you put a Dockerfile or a script to pull (or build) and run your container into the sources of your project.
0
1
0
0
2014-12-10T10:22:00.000
2
0.099668
false
27,398,538
0
0
0
2
Suppose I have a python web app. I can create docker file for installing all dependencies. But then (or before it if I have requirements for pip) I have like two different goals. For deployment I can just download all source code from git through ssh or tarballs and it would work. But for a developer machine it wouldn't work. I would need then work on actual source code. I know that I can 'mirror' any folder/files from host machine to docker container. So ok, I can then remove all source code, that was downloaded when image was built and 'mirror' current source code that exists in developer machine. But if developer machine don't have any source code downloaded with git clone, it wouldn't work. So what to do in that case? I mean except the obvious - clone all repos on developer machine and then 'mirror' it? So what is the right way to use docker not only for deployment but for the development also?
Running Out of Threads: UWSGI + Multithreaded Python Application with GeventHTTPClient
27,436,888
1
1
777
0
python,multithreading,uwsgi,gevent
Mixing non-blocking programming (geventhttpclient) with blocking programming (a uWSGI thread/process) is completely wrong. This is a general rule: even if your app is 99% non-blocking, it is still blocking. This is amplified by the fact that gevent uses stack switching to simulate blocking programming paradigms. It is like cooperative multitasking, managed by the so-called 'gevent hub'. Unfortunately, although your greenlets will be able to make HTTP requests, they will never be terminated, because the gevent hub will never run again once the request is over. If you want to keep the geventhttpclient approach you have to run uWSGI in gevent mode, and you need to be sure that all the modules and techniques used by your app are gevent-friendly (a monkey-patching sketch follows below this record).
0
1
0
0
2014-12-11T21:52:00.000
1
1.2
true
27,433,087
0
0
1
1
I'm currently running a Python web API that is NOT multithreaded with much success on the uWSGI + NGINX stack. Due to new operational needs, I have implemented a new build that includes multithreaded requests to external data sources. However, when I deploy this new multithreaded build under uWSGI with --enable-threads, after a few minutes, the machine runs out of available threads. I was able to isolate the issue to my usage of geventhttpclient for my external HTTP requests by monitoring the thread count using ps -eLf | grep <process id>| wc -l. I currently have 2 worker threads (two external requests) in my application, and as I noticed, every time I hit/make a request from my API, the application thread use count increases by 2. If I swap my use of geventhttpclient with the standard Python Requests module in just one of these worker threads, the thread count only increases by 1. NOTE: I am using HTTPClient.close() to close the connection within each thread. This leads me to suspect that geventhttpclient creates new threads that do not terminate when used in multithreaded uWSGI applications. Is there an easy way around this chokepoint? The performance of geventhttpclient is exceptional in non-multithreaded uWSGI applications, so I would love to continue using this. Thanks and let me know if I can provide any more information.
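A minimal sketch of the gevent-friendly setup described in the accepted answer, assuming uWSGI is started in gevent mode (e.g. with its --gevent option); the host and the WSGI callable body are placeholders:

    # Monkey-patch the stdlib *before* anything imports socket/ssl/threading,
    # so blocking modules cooperate with the gevent hub instead of stalling it.
    from gevent import monkey
    monkey.patch_all()

    from geventhttpclient import HTTPClient

    def application(environ, start_response):
        client = HTTPClient('example.com')  # placeholder external host
        response = client.get('/')          # yields to the gevent hub while waiting
        body = response.read()
        client.close()
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body]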
Python 3.4.2 AND pip Permission Issues
27,470,901
0
1
98
0
python,pip
After many days of trying workarounds, I finally got down to debugging the setup.py script, setuptools, and distutils. I figured out the problem was a missing "svn.exe" on my workstation, which caused the "svn_finder" function in setuptools core to hang. Can someone point me in the right direction as to how I can make the right team aware of the "bug"?
0
1
0
0
2014-12-12T07:45:00.000
1
1.2
true
27,439,027
1
0
0
1
I've been struggling with an issue with Python and pip installs (python version 3.4.2, same with x86 or x64 MSIs, Windows 7 x64). I'm using the CPython installer available from the Python.org website. When I install, I get the UAC prompt, which I approve, and it installs fine to D:\opt\python34 (along with pip, as added in 3.4.2 installations by default). Then, as standard procedure, I add the install path and Scripts subfolder to the user path variable. Now, the issues are as follows: Whenever I run python setup.py install inside any package directory, the prompt hangs at writing ... to top_level.txt or writing to dependency_links.txt or etc. (Same issue happens if I create a virtual environment using python -m venv, activate it, and do python setup.py install). Setup.py never succeeds. Pip install also hangs infinitely after giving a warning "manifest_maker: Standard file '-c' not found." If I remove setuptools, and just use distribute, then "python setup.py install" works. Kindly assist with ideas/solutions.
Send packet and change its source IP
28,396,576
1
7
31,903
0
python,ip,packet,scapy
You basically want to spoof your IP address. I suggest you read up on networking and IP packet headers. This is possible through Python, but you won't be able to see the result, because replies go to the spoofed address rather than to you. To hold a TCP conversation this way, you would also need to predict sequence numbers (a Scapy sketch for the one-way case follows below this record).
0
1
1
0
2014-12-12T17:26:00.000
2
0.099668
false
27,448,905
0
0
0
1
Let's say I have an application written in Python to send a ping or an e-mail. How can I change the source IP address of the sent packet to a fake one, using, e.g., Scapy? Consider that the IP address assigned to my eth0 is 192.168.0.100. My e-mail application will send messages using this IP. However, I want to manipulate this packet, as soon as it is ready to be sent, so that its source IP is not 192.168.0.100 but 192.168.0.101 instead. I'd like to do this without having to implement a MITM.
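A rough Scapy sketch of the one-way spoof discussed above, using the addresses from the question (the destination 192.168.0.1 is a placeholder); it needs root privileges, and any replies go to the spoofed source, not to you:

    from scapy.all import IP, ICMP, send

    # craft a ping whose source address is forged to 192.168.0.101
    pkt = IP(src="192.168.0.101", dst="192.168.0.1") / ICMP()
    send(pkt)  # sent at layer 3 through the kernel's routing table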
How to debug python tornado
27,455,651
1
1
1,832
0
python,tornado
Use Eclipse, PyDev, PyCharm, or whatever to set a breakpoint at the misbehaving line of code and step through your code from there. Tornado applications are relatively difficult to debug because the stack trace is less clear than in multithreaded code, so step through your code carefully. If you use coroutines, you should become familiar with the implementation of gen.Runner so you can understand what your code does during a "yield" (a minimal pdb-based sketch follows below this record).
0
1
0
0
2014-12-13T02:22:00.000
1
1.2
true
27,454,876
1
0
0
1
I already set debug=True; what is next? I use Eclipse + PyDev as my development environment. Any details about Tornado debugging would be very much appreciated.
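A low-tech sketch to complement the IDE debugger suggested in the answer: pause a handler with pdb at the suspect line and step from there. MainHandler and the port are made-up examples:

    import pdb
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            pdb.set_trace()  # execution pauses here; inspect locals, step with 'n'
            self.write("hello")

    app = tornado.web.Application([(r"/", MainHandler)], debug=True)
    app.listen(8888)
    tornado.ioloop.IOLoop.instance().start()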
Configuring Remote Python Interpreter in Pycharm
29,901,664
0
4
2,553
0
python,raspberry-pi,pycharm,remote-debugging,interpreter
It works in PyCharm if you configure a remote SFTP deployment. Tools > Deployment > Add > enter a name and choose SFTP > enter the host, port, root path (I used "/" without quotes), username, and password. Then, when creating a new project, change your interpreter to 'Deployment Configuration' and select your SFTP server. Press OK, then Create. You should be all set to go.
0
1
0
1
2014-12-13T16:07:00.000
1
1.2
true
27,460,843
1
0
0
1
I would like to connect to my Raspberry Pi using a remote interpreter. I managed to do it just fine on Windows 7 using PyCharm, but having recently upgraded to Windows 8.1, it no longer works. I've tried to connect to the Raspberry Pi where it worked under Windows 7, and to another one with a fresh install of Raspbian (released 09-09-2014). I also tried through Ubuntu, but to no avail. Has anyone out there managed to get this right in Windows 8 or any Linux flavour? Should I try a key pair (OpenSSH or PuTTY)? After adding the RSA key to the repository, the process hangs at 'Getting remote interpreter version' ~ 'Connecting to 10.0.0.98'.
Pyephem: time of a planet will be closest to the horizon
29,810,773
0
2
75
0
python,pyephem
Wouldn't that be the, for lack of a better term, anti-transit? It seems to me that if it's circumpolar, what you're looking for would be roughly 12 hours before/after transit (a rough sketch follows below this record).
0
1
0
0
2014-12-15T08:18:00.000
1
0
false
27,479,855
1
0
0
1
Is it possible to calculate the time at which a planet will be closest to the horizon, when PyEphem throws AlwaysUpError or NeverUpError?
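A rough sketch of the 12-hours-from-transit idea in the answer above; the observer coordinates and the choice of Mars are placeholders, and next_transit is the documented PyEphem call:

    import ephem

    obs = ephem.Observer()
    obs.lat, obs.lon = '78.0', '15.0'   # arbitrary high-latitude site
    body = ephem.Mars()

    transit = obs.next_transit(body)    # moment of highest altitude
    obs.date = transit + 0.5            # ephem dates count days, so +0.5 = +12 h
    body.compute(obs)
    print(body.alt)                     # roughly the lowest altitude reached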
Ejabberd server not getting started?
27,539,662
0
0
75
0
python-2.7,ubuntu-14.04,openfire,ejabberd
Solved the issue: the problem was with my settings. I then restarted the server using sudo service ejabberd restart, and it worked.
0
1
0
0
2014-12-15T13:33:00.000
1
1.2
true
27,485,296
0
0
1
1
I have re-installed the ejabberd server on my localhost. When I run sudo service ejabberd restart it's not getting restarted; instead it's producing an error. The following error is shown in erl_crash.dump, and all my configurations in the conf file are correct. Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k I tried everything, and also killed the processes running on the same ports. Is there anything else to do to solve this issue?
Python on system start up
27,494,641
0
0
82
0
python,ssh,raspberry-pi,raspbian
You should modify your Python script to write its output to a file instead of to the screen (which you can't see). That is, I think a log file is your best (possibly only) bet. You can write to a file in /tmp on the Raspberry Pi if you just want a temporary log file that you can check once in a while (a sketch follows below this record). Also, as Tim said, you could try out the Python logging library, but I think just writing to a file is quicker and easier, although you might run into some issues with permissions...
0
1
0
1
2014-12-15T18:59:00.000
2
0
false
27,491,173
0
0
0
2
I am running Python on a Raspberry Pi and everything works great. I have a small script running at system start-up which prints several warning messages (which I actually cannot read, since it is running in the background)... My question is: is there a way via SSH to "open" this running script instance and see what is going on, or is a log file the only way to work with that? Thanks!
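A minimal sketch of the /tmp log file suggested above (the path and messages are arbitrary); flushing after each write lets the file be inspected over SSH while the script runs:

    log = open('/tmp/myscript.log', 'a')

    def warn(msg):
        # append one line and flush so 'tail -f' over SSH sees it immediately
        log.write(msg + '\n')
        log.flush()

    warn('startup: beginning main loop')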
Python on system start up
27,492,567
1
0
82
0
python,ssh,raspberry-pi,raspbian
Try using the Python logging library. You can configure it to save the output to a file, and then you can use tail -f mylogfile.log to watch as content is added (a configuration sketch follows below this record). EDIT: An alternative is to use screen. It allows you to run a command in a virtual console, detach from that console, and then disconnect from the machine. You can later reconnect to the machine, re-attach to that console, and see all the output the process made. I'm not sure about using it on a script that starts when the machine is turned on, though (I simply haven't tried it).
0
1
0
1
2014-12-15T18:59:00.000
2
1.2
true
27,491,173
0
0
0
2
I am running Python on a Raspberry Pi and everything works great. I have a small script running at system start-up which prints several warning messages (which I actually cannot read, since it is running in the background)... My question is: is there a way via SSH to "open" this running script instance and see what is going on, or is a log file the only way to work with that? Thanks!
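A sketch of the logging setup suggested in the answer above, with an arbitrary file name; watch it from an SSH session with tail -f /tmp/mylogfile.log:

    import logging

    # route all messages to a file instead of the invisible console
    logging.basicConfig(filename='/tmp/mylogfile.log',
                        level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(message)s')

    logging.warning('this would otherwise be lost in the background')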
How do I make a python script executable?
27,494,871
118
72
145,067
0
python,command-line
1) Add a shebang line to the top of the script: #!/usr/bin/env python 2) Mark the script as executable: chmod +x myscript.py 3) Add the dir containing it to your PATH variable (if you want it to stick, you'll have to do this in .bashrc or .bash_profile in your home dir): export PATH=/path/to/script:$PATH (a toy script illustrating these steps is sketched below this record)
0
1
0
0
2014-12-15T23:03:00.000
6
1.2
true
27,494,758
1
0
0
2
How can I run a Python script under my own command-line name, like 'myscript', without having to type 'python myscript.py' in the terminal?
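A toy script illustrating the accepted answer's three steps; the name and message are arbitrary. After chmod +x myscript.py and adding its directory to PATH, it runs as plain myscript.py (rename it or symlink it to drop the extension):

    #!/usr/bin/env python
    # myscript.py -- runs directly once it is executable and on PATH
    print("hello from myscript")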