Q_Id int64 337 49.3M | CreationDate stringlengths 23 23 | Users Score int64 -42 1.15k | Other int64 0 1 | Python Basics and Environment int64 0 1 | System Administration and DevOps int64 0 1 | Tags stringlengths 6 105 | A_Id int64 518 72.5M | AnswerCount int64 1 64 | is_accepted bool 2 classes | Web Development int64 0 1 | GUI and Desktop Applications int64 0 1 | Answer stringlengths 6 11.6k | Available Count int64 1 31 | Q_Score int64 0 6.79k | Data Science and Machine Learning int64 0 1 | Question stringlengths 15 29k | Title stringlengths 11 150 | Score float64 -1 1.2 | Database and SQL int64 0 1 | Networking and APIs int64 0 1 | ViewCount int64 8 6.81M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
43,086,920 | 2017-03-29T07:21:00.000 | 0 | 0 | 1 | 0 | python,windows,python-3.x,windows-10,anaconda | 43,109,784 | 1 | true | 0 | 0 | I found the problem.
As it turns out, the module menuinst wasn't automatically installed into the new environment, so I had to install it manually. Now everything works. | 1 | 1 | 0 | I'm using Windows 10 and I have Anaconda with Python 2 installed, so my root environment is Python 2. I created an additional Python 3 environment and, among other packages, installed iPython and Spyder into it. I used the Anaconda Navigator to install the packages.
I can activate and deactivate the environment using Windows CMD just fine. After activating the Python 3 environment in the CMD the ipython command typed into the same CMD starts up Python 3.6.1.
The Anaconda Startmenu folder does contain shortcuts to iPython and Spyder both for Python 2 and Python 3 now. I can use those to start both for Python 2 as before, but the Python 3 versions won't start. And there is no error message or crash or anything.
When clicking on the Python 3 iPython shortcut a command prompt pops up for a split second and immediately closes again. Spyder does not even open a command prompt; it does absolutely nothing, and I presume it's because iPython fails. Checking the task manager shows that there is no Python running in the background at all, so it really does not start.
Now I know that iPython 3 itself is not broken because I can start it from within CMD after switching environments; nonetheless, I uninstalled and reinstalled them both, with no change.
I then went into the shortcut to get the exact command it was executing to write a small batch file with a pause command to see if anything gets displayed when iPython fails. Doing a right-click on the shortcut and executing "open file location" leads me to python.exe in the Python 3 environment base folder, and executing that works fine of course.
So now I'm stumped since I have no leads to solve or even analyze the problem properly, over the entire course of action not a single error message ever appeared anywhere.
Any hints and suggestions are appreciated.
EDIT:
The target of the Python 3 shortcut in the properties looks like this:
C:\Users\My.Name\AppData\Local\Continuum\Anaconda2\envs\Python3\python.exe C:\Users\My.Name\AppData\Local\Continuum\Anaconda2\cwp.py C:\Users\My.Name\AppData\Local\Continuum\Anaconda2\envs\Python3 "C:/Users/My.Name/AppData/Loca
The working shortcut to Python 2 looks pretty much the same:
C:\Users\My.Name\AppData\Local\Continuum\Anaconda2\python.exe C:\Users\My.Name\AppData\Local\Continuum\Anaconda2\cwp.py C:\Users\My.Name\AppData\Local\Continuum\Anaconda2 "C:/Users/My.Name/AppData/Local/Continuum/Anaconda2/pyth | Anaconda on Windows 10: iPython and Spyder fail to start in Python3 environment | 1.2 | 0 | 0 | 845 |
43,092,454 | 2017-03-29T11:40:00.000 | 5 | 0 | 0 | 0 | python,machine-learning,tensorflow,neural-network,artificial-intelligence | 43,098,199 | 3 | true | 0 | 0 | Instead of creating a whole new graph you might be better off creating a graph which has initially more neurons than you need and mask it off by multiplying by a non-trainable variable which has ones and zeros. You can then change the value of this mask variable to allow effectively new neurons to act for the first time. | 1 | 3 | 1 | If I want to add new nodes to one of my tensorflow layers on the fly, how can I do that?
For example if I want to change the amount of hidden nodes from 10 to 11 after the model has been training for a while. Also, assume I know what value I want the weights coming in and out of this node/neuron to be.
I can create a whole new graph, but is there a different/better way? | How to add new nodes / neurons dynamically in tensorflow | 1.2 | 0 | 0 | 2,690 |
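The mask trick from the accepted answer can be seen outside TensorFlow as well; here is a small NumPy sketch of the same mechanics (sizes and seed chosen arbitrarily): an oversized weight matrix paired with a non-trainable 0/1 mask, where flipping one mask entry effectively adds a neuron without rebuilding the graph.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 11))    # leave room for 11 hidden neurons up front
mask = np.array([1.0] * 10 + [0.0])   # non-trainable 0/1 mask: 10 active, 1 dormant

x = rng.normal(size=(1, 4))
hidden = (x @ weights) * mask         # the dormant 11th neuron contributes nothing
assert hidden[0, -1] == 0.0

mask[-1] = 1.0                        # flip the mask: the 11th neuron starts acting
hidden = (x @ weights) * mask
```

In TensorFlow the mask would be a variable marked trainable=False, so the optimizer never touches it while you change its value by assignment.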
43,093,142 | 2017-03-29T12:09:00.000 | 1 | 0 | 0 | 0 | python,django,pagination,django-pagination | 43,093,197 | 1 | true | 1 | 0 | You need to pass to the editing page the page number you were on, in a query parameter. And keep track of it and when editing/updating is successful you redirect back to that page number.
In your form include a next field. Where to redirect in case of success. | 1 | 0 | 0 | I am editing a content from table which has pagination in it. If i update a user which is at the page no 9 and save that user, it will return me to page no 1.
I want it to return to the same page where the user previously was, that is, page no 9.
Any help regarding the same would be appreciated! | Django: How to return a user to correct pagination page after editing or updating? | 1.2 | 0 | 0 | 134 |
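A framework-free sketch of that round trip (the URL shapes and the next_page parameter name are invented for illustration; in Django you would read the value from request.GET and pass it to redirect()):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def edit_url(base, obj_id, page):
    # Link from the list view to the edit page, carrying the page number along.
    return f"{base}/{obj_id}/edit/?" + urlencode({"next_page": page})

def redirect_after_save(list_base, request_url):
    # After a successful save, send the user back to the page they came from.
    query = parse_qs(urlparse(request_url).query)
    page = query.get("next_page", ["1"])[0]
    return f"{list_base}/?" + urlencode({"page": page})

url = edit_url("/users", 42, 9)            # '/users/42/edit/?next_page=9'
back = redirect_after_save("/users", url)  # '/users/?page=9'
```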
43,096,197 | 2017-03-29T14:18:00.000 | 0 | 0 | 0 | 1 | python,shell,subprocess,tcsh | 43,118,394 | 1 | false | 0 | 0 | Knowing the subprocess inherits all the parent process environment and they are supposed to be ran under same environment, making the shell script to not setup any environment, fixed it.
This solves the environment being retained, but now the problem is, the process just hangs! (it does not happen when it is ran directly from shell) | 1 | 0 | 0 | I have a tcsh shell script that sets up all the necessary environment including PYTHONPATH, which then run an executable at the end of it. I also have a python script that gets sent to the shell script as an input. So the following works perfectly fine when it is ran from Terminal:
path to shell script path to python script
Now, the problem occurs when I want to do the same thing from a subprocess. The python script fails to run since it cannot find many of the modules that are already supposed to be set up via the shell script. And clearly, the PYTHONPATH ends up having many missing paths compared to the parent environment the subprocess was run from or the shell script itself! It seems like the subprocess does not respect the environment the shell script sets up.
I've tried all sorts of things already but none help!
cmd = [shell_script_path, py_script_path]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=os.environ.copy())
It makes no difference if env is not given either!
Any idea how to fix this?! | Subprocess not retaining all environment variables | 0 | 0 | 0 | 882 |
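One way to sanity-check that env=os.environ.copy() really reaches the child is to have a Python child echo back a variable added to the copied environment; a small sketch (the variable name is made up):

```python
import os
import subprocess
import sys

# Copy the parent environment and extend it explicitly, rather than relying
# on a shell script to rebuild PYTHONPATH and friends for the child.
env = os.environ.copy()
env["MY_TOOL_HOME"] = "/opt/mytool"   # hypothetical variable, for illustration

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MY_TOOL_HOME'])"],
    capture_output=True, text=True, env=env, check=True,
).stdout.strip()
```

If the variable comes back intact, the loss happens inside the shell script's own setup, not in the subprocess call.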
43,097,013 | 2017-03-29T14:51:00.000 | 0 | 0 | 1 | 0 | python,regex,string,find | 43,097,647 | 1 | true | 0 | 0 | re.search('(?:<[^<]+?>)*'.join('Hello'), 'He said: <p>He<i>ll</i>o how are you?').start() returns 12 (13nth character).
If you are not certain that Hello is in the string, you should check if search returns None before calling start. | 1 | 0 | 0 | Is there any way to achieve this in Python 3+?
I have a string He said: <p>He<i>ll</i>o how are you? which does include those HTML tags as plain text. The method find() returns an index (a position) within the searched string. Is there perhaps any regex version of find() where I could input this <[^<]+?> as a regex for finding a tag enclosed in < > (or perhaps its negative lookahead) - and so ignore them to look for the word Hello but still get the absolute position within the original string?
For example:
String = He said: <p>He<i>ll</i>o how are you?
Function could be foo(String, "<[^<]+?>", "Hello") as in foo(search in this string, exclude characters matching this regex, look for this
..and get 13 as a position of the word Hello in the original string in return? | Discard specific sequence of character when using string method find() | 1.2 | 0 | 0 | 65 |
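Putting the accepted answer's idea into runnable form, building the pattern by joining the letters of the target word with an optional run of tags:

```python
import re

text = "He said: <p>He<i>ll</i>o how are you?"
# Between every two letters of the target word, allow any run of <...> tags.
pattern = "(?:<[^<]+?>)*".join("Hello")

match = re.search(pattern, text)
pos = match.start() if match else -1   # 12: the 'H' of the tag-interrupted "Hello"
```

Note that start() is 0-based, so the "13th character" corresponds to index 12.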
43,098,668 | 2017-03-29T16:03:00.000 | 1 | 0 | 0 | 0 | python,flask,flask-sqlalchemy,flask-mysql | 43,098,951 | 2 | true | 1 | 0 | It's possible, but not recommended. Consider this:
Half of your app will not benefit from anything a proper ORM offers
Adding a field to the table means editing raw SQL in many places, and then changing the model.
Don't forget to keep them in sync.
Alternatively, you can port everything that uses raw mysqldb to use SQLAlchemy:
Need to add a field to your table? Just change the model in one place.
Don't like SQL queries that ORM generates for you? You still have low-level control over this. | 2 | 0 | 0 | So is it possible to mix 2 ORMs in the same web app, and if so how optimal would it be? Why so?
- I'm working on a web app in flask using flask-mysqldb and I came to a point where I need to implement an auth system, and on flask-mysqldb there's no secure way to do it.
- With that said, now I'm trying to implement flask-security, but it only works on flask-sqlalchemy, so I'm trying to mix sqlalchemy with mysqldb, and before that I want to know if it's optimal and if it works. That would lead to using user auth with sqlalchemy and the other data with mysqldb. Thanks! | Is it possible to mix 2 ORMS in same web app? | 1.2 | 1 | 0 | 145
43,098,668 | 2017-03-29T16:03:00.000 | 3 | 0 | 0 | 0 | python,flask,flask-sqlalchemy,flask-mysql | 43,098,934 | 2 | false | 1 | 0 | You can have a module for each orm. One module can be called auth_db and the other can be called data_db. In your main app file just import both modules and initialize the database connections. That being said, this approach will be harder to maintain in the future, and harder for other developers to understand what's going on. I'd recommend moving your flask-mysqldb code to sqlalchemy so that you are only using one ORM. | 2 | 0 | 0 | So is it possible to mix 2 ORMs in the same web app, and if so how optimal would it be? Why so?
- I'm working on a web app in flask using flask-mysqldb and I came to a point where I need to implement an auth system, and on flask-mysqldb there's no secure way to do it.
- With that said, now I'm trying to implement flask-security, but it only works on flask-sqlalchemy, so I'm trying to mix sqlalchemy with mysqldb, and before that I want to know if it's optimal and if it works. That would lead to using user auth with sqlalchemy and the other data with mysqldb. Thanks! | Is it possible to mix 2 ORMS in same web app? | 0.291313 | 1 | 0 | 145
43,099,139 | 2017-03-29T16:27:00.000 | 1 | 0 | 0 | 0 | java,python,scala,ubuntu,hadoop | 43,114,845 | 1 | false | 0 | 0 | You need to install hadoop-2.7 more to whatever you are installing.
Java version is fine.
The mentioned configuration should work with scala 2.12.1. | 1 | 1 | 1 | I am about to install Apache Spark 2.1.0 on Ubuntu 16.04 LTS. My goal is a standalone cluster, using Hadoop, with Scala and Python (2.7 is active)
Whilst downloading I get the choice: Prebuilt for Hadoop 2.7 and later (File is spark-2.1.0-bin-hadoop2.7.tgz)
Does this package actually include HADOOP 2.7 or does it need to be installed separately (first I assume)?
I have Java JRE 8 installed (Needed for other tasks). As the JDK 8 also seems to be a pre requisite as well, I also did a ' sudo apt install default-jdk', which indeed shows as installed:
default-jdk/xenial,now 2:1.8-56ubuntu2 amd64 [installed]
Checking java -version however doesn't show the JDK:
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
Is this sufficient for the installation? Why doesn't it also show the JDK?
I want to use Scala 2.12.1. Does this version work well with the Spark2.1/Hadoop 2.7 combination or is another version more suitable?
Is the Scala SBT package also needed?
Been going back and forth trying to get everything working, but am stuck at this point.
Hope somebody can shed some light :) | Apache Spark: Pre requisite questions | 0.197375 | 0 | 0 | 730 |
43,100,290 | 2017-03-29T17:29:00.000 | 1 | 0 | 0 | 1 | python,tensorflow,ubuntu-16.04,cudnn | 43,239,777 | 1 | false | 0 | 0 | Answering my own question: The issue was not that the library was not installed, the library installed was the wrong version hence it could not find it. In this case it was cudnn 5.0. However even after installing the right version it still didn't work due to incompatibilities between versions of driver, CUDA and cudnn. I solved all this issues by re-installing everything including the driver taking into account tensorflow libraries requisites. | 1 | 0 | 1 | I'm trying to run a tensorflow python script in a google cloud vm instance with GPU enabled. I have followed the process for installing GPU drivers, cuda, cudnn and tensorflow. However whenever I try to run my program (which runs fine in a super computing cluster) I keep getting:
undefined symbol: cudnnCreate
I have added the next to my ~/.bashrc
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64:/usr/local/cuda-8.0/lib64"
export CUDA_HOME="/usr/local/cuda-8.0"
export PATH="$PATH:/usr/local/cuda-8.0/bin"
but still it does not work and produces the same error | undefined symbol: cudnnCreate in ubuntu google cloud vm instance | 0.197375 | 0 | 0 | 304 |
43,102,532 | 2017-03-29T19:28:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,cross-validation,sklearn-pandas | 43,103,742 | 2 | false | 0 | 0 | To expand a bit further on Kelvin's answer, if you want a random train-test split, then don't specify the random_state parameter. If you do not want a random train-test split (i.e. you want an identically-reproducible split each time), specify random_state with an integer of your choice. | 2 | 0 | 1 | i need to know why argument random_state in cross_validation.train_test_split is integer not Boolean, since it's role is to flag random allocation or not? | why argument random_state in cross_validation.train_test_split is integer not boolean | 0 | 0 | 0 | 280 |
43,102,532 | 2017-03-29T19:28:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,cross-validation,sklearn-pandas | 43,103,553 | 2 | false | 0 | 0 | random_state is not only a flag of randomness or not, but which random seed to use. If you choose random_state = 3 you will "randomly" split the dataset, but you are able to reproduce the same split each time. I.e. each call with the same dataset will yield the same split, which is not the case if you don't specify the random_state parameter.
The reason why I use the quotation marks, is that it is actually pseudo random.
Wikipedia explains pseudorandomness like this:
A pseudorandom process is a process that appears to be random but is
not. Pseudorandom sequences typically exhibit statistical randomness
while being generated by an entirely deterministic causal process.
Such a process is easier to produce than a genuinely random one, and
has the benefit that it can be used again and again to produce exactly
the same numbers - useful for testing and fixing software. | 2 | 0 | 1 | i need to know why argument random_state in cross_validation.train_test_split is integer not Boolean, since it's role is to flag random allocation or not? | why argument random_state in cross_validation.train_test_split is integer not boolean | 0.197375 | 0 | 0 | 280 |
43,104,748 | 2017-03-29T21:47:00.000 | 1 | 0 | 1 | 0 | python-3.x,scheduling,apscheduler | 43,112,916 | 1 | false | 0 | 0 | With APScheduler, providing sub-second accuracy is not very feasible. Do you really need the extra features provided by the library? If not, you could just have a loop where you use time.sleep(). | 1 | 1 | 0 | I've used APScheduler in the past to schedule function calls every X seconds with great success. However, I'm looking to call a function multiple times per second, which neither the IntervalTrigger or CronTrigger APScheduler functions seem to allow.
Is there a simple way to set the interval to a fraction of second, or will I need to look at threading options? | High-frequency scheduling with Python 3 | 0.197375 | 0 | 0 | 291 |
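If APScheduler's triggers only go down to whole seconds, a plain loop with time.sleep() as suggested can reach sub-second intervals; a rough sketch (the rate and duration are arbitrary) that sleeps only for the remainder of each tick, to limit drift from the work done inside the tick:

```python
import time

def run_at_hz(func, hz, duration):
    # Call func() roughly `hz` times per second for `duration` seconds.
    interval = 1.0 / hz
    end = time.perf_counter() + duration
    calls = 0
    while time.perf_counter() < end:
        tick = time.perf_counter()
        func()
        calls += 1
        remaining = interval - (time.perf_counter() - tick)
        if remaining > 0:
            time.sleep(remaining)   # sleep only for what is left of this tick
    return calls

n = run_at_hz(lambda: None, hz=20, duration=0.25)
```

This trades APScheduler's job management for precise control over the interval, which is usually the right trade for sub-second work.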
43,104,913 | 2017-03-29T21:58:00.000 | 0 | 0 | 0 | 1 | python,docker,flask,gunicorn,amazon-ecs | 71,013,193 | 4 | false | 1 | 0 | For me, it turned out that the worker was quitting due to one of the containers in my Docker Swarm stack was failing repeatedly, resulting in the rollback process. The gunicorn process received the signal 'term' when the rollback process began. | 2 | 13 | 0 | I have a Python/Flask web application that I am deploying via Gunicorn in a docker image on Amazon ECS. Everything is going fine, and then suddenly, including the last successful request, I see this in the logs:
[2017-03-29 21:49:42 +0000] [14] [DEBUG] GET /heatmap_column/e4c53623-2758-4863-af06-91bd002e0107/ADA
[2017-03-29 21:49:43 +0000] [1] [INFO] Handling signal: term
[2017-03-29 21:49:43 +0000] [14] [INFO] Worker exiting (pid: 14)
[2017-03-29 21:49:43 +0000] [8] [INFO] Worker exiting (pid: 8)
[2017-03-29 21:49:43 +0000] [12] [INFO] Worker exiting (pid: 12)
[2017-03-29 21:49:43 +0000] [10] [INFO] Worker exiting (pid: 10)
...
[2017-03-29 21:49:43 +0000] [1] [INFO] Shutting down: Master
And the processes die off and the program exits. ECS then restarts the service, and the docker image is run again, but in the meanwhile the service is interrupted.
What would be causing my program to get a TERM signal? I can't find any references to this happening on the web. Note that this only happens in Docker on ECS, not locally. | Why are my gunicorn Python/Flask workers exiting from signal term? | 0 | 0 | 0 | 11,790 |
43,104,913 | 2017-03-29T21:58:00.000 | 16 | 0 | 0 | 1 | python,docker,flask,gunicorn,amazon-ecs | 43,105,563 | 4 | true | 1 | 0 | It turned out that after adding a login page to the system, the health check was getting a 302 redirect to /login at /, which was failing the health check. So the container was periodically killed. Amazon support is awesome! | 2 | 13 | 0 | I have a Python/Flask web application that I am deploying via Gunicorn in a docker image on Amazon ECS. Everything is going fine, and then suddenly, including the last successful request, I see this in the logs:
[2017-03-29 21:49:42 +0000] [14] [DEBUG] GET /heatmap_column/e4c53623-2758-4863-af06-91bd002e0107/ADA
[2017-03-29 21:49:43 +0000] [1] [INFO] Handling signal: term
[2017-03-29 21:49:43 +0000] [14] [INFO] Worker exiting (pid: 14)
[2017-03-29 21:49:43 +0000] [8] [INFO] Worker exiting (pid: 8)
[2017-03-29 21:49:43 +0000] [12] [INFO] Worker exiting (pid: 12)
[2017-03-29 21:49:43 +0000] [10] [INFO] Worker exiting (pid: 10)
...
[2017-03-29 21:49:43 +0000] [1] [INFO] Shutting down: Master
And the processes die off and the program exits. ECS then restarts the service, and the docker image is run again, but in the meanwhile the service is interrupted.
What would be causing my program to get a TERM signal? I can't find any references to this happening on the web. Note that this only happens in Docker on ECS, not locally. | Why are my gunicorn Python/Flask workers exiting from signal term? | 1.2 | 0 | 0 | 11,790 |
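The general shape of the fix is to give the load balancer an unauthenticated health endpoint, so the check never sees the 302 redirect to /login. A toy WSGI-level sketch of the idea (the paths and the auth check are invented; in Flask you would simply exempt a /health route from login_required):

```python
def app(environ, start_response):
    # /health answers 200 without authentication, so a health check never
    # receives the 302 that would mark the container unhealthy.
    path = environ.get("PATH_INFO", "/")
    if path == "/health":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]
    if not environ.get("HTTP_AUTHORIZATION"):
        start_response("302 Found", [("Location", "/login")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

def call(path, headers=None):
    # Minimal test harness: invoke the WSGI app and capture status + body.
    status = {}
    environ = {"PATH_INFO": path, **(headers or {})}
    body = app(environ, lambda s, h: status.setdefault("s", s))
    return status["s"], b"".join(body)
```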
43,105,148 | 2017-03-29T22:15:00.000 | 3 | 0 | 1 | 0 | python,multithreading,tensorflow | 43,107,623 | 1 | true | 0 | 0 | After doing some experimentation it appears that each call to sess.run(...) does indeed see a consistent point-in-time snapshot of the variables.
To test this I performed 2 big matrix multiply operations (taking about 10 sec each to complete), and updated a single dependent variable before, between, and after. In another thread I grabbed and printed that variable every 1/10th second to see if it picked up the change that occurred between operations while the first thread was still running. It did not; I only saw its initial and final values. Therefore I conclude that variable changes are only visible outside of a specific call to sess.run(...) at the end of that run.
Will each call see a snapshot of the variables as of the moment run was called, consistent throughout the call? Or will they see dynamic updates to the variables and only guarantee atomic updates to each variable?
I'm considering running test set evaluation on a separate CPU thread and want to verify that it's as trivial as running the inference op on a CPU device in parallel.
I'm having troubles figuring out exactly what guarantees are provided that make sessions "thread safe". | How are variables shared between concurrent `session.run(...)` calls in tensorflow? | 1.2 | 0 | 0 | 646 |
43,107,173 | 2017-03-30T01:57:00.000 | 2 | 0 | 0 | 1 | python,django,wamp | 43,772,665 | 2 | true | 1 | 0 | Ok the answer is basically ericeastwood.com/blog/3/django-setup-for-wamp combined with httpd.apache.org/docs/2.4/vhosts/name-based.html – shadow | 1 | 0 | 0 | I want to test my django app in my WAMP server. The idea is that i want to create a web app for aaa.com and aaa.co.uk, if the user enter the domain aaa.co.uk, my django app will serve the UK version, if the user go to aaa.com, the same django app will serve the US version (different frontend). Basically i will be detecting the host of the user and serve the correct templates.
How do i setup my WAMP so i can test this? right now i am using pyCharm default server which is 127.0.0.1:8000 | how do i setup django in wamp? | 1.2 | 0 | 0 | 3,505 |
43,107,807 | 2017-03-30T03:08:00.000 | 1 | 1 | 0 | 1 | python,amazon-web-services,amazon-ec2,hpc,grid-computing | 43,107,922 | 3 | false | 1 | 0 | Possible: of course it is.
You can use any kind of RPC to implement this. HTTPS requests, xml-rpc, raw UDP packets, and many more. If you're more interested in latency and small amounts of data, then something UDP based could be better than TCP, but you'd need to build extra logic for ordering the messages and retrying the lost ones. Alternatively something like Zeromq could help.
As for the latency: only you can answer that, because it depends on where you're connecting from. Start up an instance in the region closest to you and run ping, or mtr against it to find out what's the roundtrip time. That's the absolute minimum you can achieve. Your processing time goes on top of that. | 2 | 2 | 0 | I'm working on a robot that uses a CNN that needs much more memory than my embedded computer (Jetson TX1) can handle. I was wondering if it would be possible (with an extremely low latency connection) to outsource the heavy computations to EC2 and send the results back to the be used in a Python script. If this is possible, how would I go about it and what would the latency look like (not computations, just sending to and from). | Possible to outsource computations to AWS and utilize results locally? | 0.066568 | 0 | 0 | 99 |
43,107,807 | 2017-03-30T03:08:00.000 | 1 | 1 | 0 | 1 | python,amazon-web-services,amazon-ec2,hpc,grid-computing | 43,107,931 | 3 | true | 1 | 0 | I think it's certainly possible. You would need some scripts or a web server to transfer data to and from. Here is how I think you might achieve it:
Send all your training data to an EC2 instance
Train your CNN
Save the weights and/or any other generated parameters you may need
Construct the CNN on your embedded system and input the weights from the EC2 instance. Since you won't be needing to do any training here and won't need to load in the training set, the memory usage will be minimal.
Use your embedded device to predict whatever you may need
It's hard to give you an exact answer on latency because you haven't given enough information. The exact latency is highly dependent on your hardware, internet connection, amount of data you'd be transferring, software, etc. If you're only training once on an initial training set, you only need to transfer your weights once and thus latency will be negligible. If you're constantly sending data and training, or doing predictions on the remote server, latency will be higher. | 2 | 2 | 0 | I'm working on a robot that uses a CNN that needs much more memory than my embedded computer (Jetson TX1) can handle. I was wondering if it would be possible (with an extremely low latency connection) to outsource the heavy computations to EC2 and send the results back to the be used in a Python script. If this is possible, how would I go about it and what would the latency look like (not computations, just sending to and from). | Possible to outsource computations to AWS and utilize results locally? | 1.2 | 0 | 0 | 99 |
43,111,193 | 2017-03-30T07:26:00.000 | 1 | 1 | 1 | 0 | python,python-2.7,amazon-web-services,aws-lambda,jwplayer | 43,133,422 | 2 | true | 0 | 0 | I am succeed to install jwplatform module locally.
Steps are as follows:
1. Open command line
2. Type 'python' on command line
3. Type command 'pip install jwplatform'
4. Now, you can use jwplatform api.
Above command added module jwplatform in python locally
But my another challenge is to install jwplatform in AWS Lambda.
After research i am succeed to install module in AWS Lambda. I have bundled module and code in a directory then create zip of bundle and upload it in AWS Lambda. This will install module(jwplatform) in AWS Lambda. | 1 | 0 | 0 | I am going to create search api for Android and iOS developers.
Our client have setup a lambda function in AWS.
Now we need to fetch data using jwplatform Api based on search keyword passed as parameter. For this, I have to install jwplatform module in Lambda function or upload zip file of code with dependencies. So that i want to run python script locally and after getting appropriate result i will upload zip in AWS Lambda.
I want to use the videos/list (jwplatform Api) class to search the video library using python but i don't know much about Python. So i want to know how to run python script? and where should i put the pyhton script ? | how to use jwplatform api using python | 1.2 | 0 | 0 | 514 |
43,113,198 | 2017-03-30T09:04:00.000 | 3 | 0 | 0 | 0 | python,postgresql,amazon-web-services,aws-lambda,aws-api-gateway | 43,126,859 | 1 | false | 1 | 0 | If your data is going to live in a postgresql data base anyway I would start with your requests hitting the database and profile the performance. You've made assumptions about it being slow but you haven't stated what your requirements for latency are or what your schema is, so any assertions people would make about whether or not it would fit your case is completely speculative.
If you do decide that after profiling that it is not fast enough, than adding a cache would make sense, though storing the entire contents in the cache seems wasteful unless you can guarantee your clients will always iterate through all results. You may want to consider a mechanism that prefetches blocks of data that would service a few requests rather than trying to cache the whole data.
TL;DR : Don't prematurely optimize your solution. Quantify how you want your system to respond and test and validate your assumptions. | 1 | 1 | 0 | I am thinking to use AWS API Gateway and AWS Lambda(Python) to create a serverless API's , but while designing this i was thinking of some aspects like pagination,security,caching,versioning ..etc
so my question is:
What is the best approach, performance- and cost-wise, to implement API pagination with very big data (1 million records)?
Should I implement the pagination in the PostgreSQL DB? (I think this would be slow.)
Should I skip PostgreSQL pagination and just cache all the results I get from the DB into AWS ElastiCache, and then do server-side pagination in Lambda?
I appreciate your help guys. | AWS API Gateway & Lambda - API Pagination | 0.53705 | 1 | 1 | 1,679 |
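The prefetch-blocks idea from the answer can be sketched with an in-memory stand-in for the database (the page and block sizes are arbitrary): one cached fetch serves several consecutive pages, so most page requests never hit the store:

```python
from functools import lru_cache

PAGE_SIZE = 20
BLOCK_PAGES = 5                     # one cached fetch covers five pages

@lru_cache(maxsize=128)
def fetch_block(block_no):
    # Stand-in for one LIMIT/OFFSET query pulling a whole block of pages.
    start = block_no * PAGE_SIZE * BLOCK_PAGES
    return tuple(range(start, start + PAGE_SIZE * BLOCK_PAGES))  # fake row ids

def get_page(page_no):
    block_no, offset = divmod(page_no * PAGE_SIZE, PAGE_SIZE * BLOCK_PAGES)
    return list(fetch_block(block_no)[offset:offset + PAGE_SIZE])

page3 = get_page(3)                 # rows 60..79, from block 0
```

In a real deployment fetch_block would issue the SQL query and the cache would live in something like ElastiCache rather than process memory.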
43,119,802 | 2017-03-30T13:46:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,tensorboard,bazel | 43,459,614 | 1 | false | 0 | 0 | I tried a bunch of ways to remove the external dependencies used in tensorboard, all of which broke the build since I am not experienced with bazel.
But! here is what I found that did work:
Comment out all build directives for tensorboard and android in ./tensorflow/BUILD
Comment out all build directives for tensorboard in ./tensorflow/tensorboard/BUILD
Delete bower build file: ./tensorflow/tensorboard/bower/BUILD. (Despite commenting tensorboard from the higher level BUILD files, bower just kept wanting to compile)
Here is the semi-tedious part if you don't have internet access:
Manually download required dependencies in ./tensorflow/workspace.bzl from external system and transfer into local system
Then update ./tensorflow/workspace.bzl urls to file path in system | 1 | 3 | 0 | I am trying to build tensorflow in a system that doesn't have internet access. I've downloaded the dependencies listed in tensorflow/workspace.bzl externally. But now the configure is trying to fetch a bunch of dependencies in the WORKSPACE file. They all look like UI packages needed for tensorboard.
Is there a way I can edit the configure to skip over these packages since I wont be needing the tensorboard or android code? | Can I build tensorflow without android and without tensorboard? | 0.197375 | 0 | 0 | 450 |
43,122,092 | 2017-03-30T15:25:00.000 | 1 | 0 | 0 | 0 | python,windows,python-3.x,proxy,ldap | 43,866,290 | 1 | false | 0 | 0 | On Microsoft OSes, the authentication used is Kerberos, so you won't be able to use directly your ID + password.
I'm on Linux, so I can't test it directly but I think that you can create a proxy with fiddler which can negociate the authentication for you, and you can use this proxy with python.
Fiddler's Composer will automatically respond to authentication challenges (including the Negotiate protocol which wraps Kerberos) if you tick the Authentication box on the Options subtab, in the menus. | 1 | 8 | 0 | Today I'm dealing with a Python3 script that has to do a http post request and send a mail.
The Python script is launched on a Windows PC that is in a corporate network protected by Forefront.
The user is logged in with his secret credentials and can access the internet through a proxy.
Like other non-Microsoft applications (e.g. Chrome), I want my script to connect to the internet without prompting the user for his username and password.
How can I do this? | A Python script, a proxy and Microsoft Forefront - Auto-Authentication | 0.197375 | 0 | 1 | 218 |
43,122,818 | 2017-03-30T15:58:00.000 | 2 | 0 | 0 | 0 | python,parsing,split,substring,text-files | 43,122,890 | 1 | true | 0 | 0 | Split by ' ||| '?
your_text.split(' ||| ') would give you a list of elements separated by ' ||| '
So
your_text.split(' ||| ')[1:3] would return ['reflects', 'understand'] | 1 | 0 | 0 | I have a text file in the following format (1 line):
[NN] ||| transplant ||| transplantation ||| PPDB2.0Score=5.24981 PPDB1.0Score=3.295900 -logp(LHS|e1)=0.18597 -logp(LHS|e2)=0.14031 -logp(e1|LHS)=11.83583 -logp(e1|e2)=1.80507 -logp(e1|e2,LHS)=1.46728 -logp(e2|LHS)=11.47593 -logp(e2|e1)=1.49083 -logp(e2|e1,LHS)=1.10738 AGigaSim=0.63439 Abstract=0 Adjacent=0 CharCountDiff=5 CharLogCR=0.40547 ContainsX=0 Equivalence=0.371472 Exclusion=0.000344 GlueRule=0 GoogleNgramSim=0.03067 Identity=0 Independent=0.078161 Lex(e1|e2)=9.64663 Lex(e2|e1)=59.48919 Lexical=1 LogCount=4.67283 MVLSASim=NA Monotonic=1 OtherRelated=0.372735 PhrasePenalty=1 RarityPenalty=0 ForwardEntailment=0.177287 SourceTerminalsButNoTarget=0 SourceWords=1 TargetComplexity=0.98821 TargetFormality=0.98464 TargetTerminalsButNoSource=0 TargetWords=1 UnalignedSource=0 UnalignedTarget=0 WordCountDiff=0 WordLenDiff=5.00000 WordLogCR=0 ||| 0-0 ||| OtherRelated
What I want is to extract transplant and transplantation. How would you do that? Each line in the text file is varying in length for the values between the ||| separator. To illustrate, here is a second example:
[VBZ] ||| reflects ||| understand ||| PPDB2.0Score=3.50769 PPDB1.0Score=21.844910 -logp(LHS|e1)=0.01251 -logp(LHS|e2)=10.87470 -logp(e1|LHS)=6.91653 -logp(e1|e2)=11.53225 -logp(e1|e2,LHS)=4.29729 -logp(e2|LHS)=16.55913 -logp(e2|e1)=10.31266 -logp(e2|e1,LHS)=13.93988 AGigaSim=0.54532 Abstract=0 Adjacent=0 CharCountDiff=2 CharLogCR=0.22314 ContainsX=0 Equivalence=0.006535 Exclusion=0.022332 GlueRule=0 GoogleNgramSim=0 Identity=0 Independent=0.456621 Lex(e1|e2)=62.90141 Lex(e2|e1)=62.90141 Lexical=1 LogCount=0 MVLSASim=NA Monotonic=1 OtherRelated=0.404562 PhrasePenalty=1 RarityPenalty=0.36788 ForwardEntailment=0.109950 SourceTerminalsButNoTarget=0 SourceWords=1 TargetComplexity=0.99354 TargetFormality=1.00000 TargetTerminalsButNoSource=0 TargetWords=1 UnalignedSource=0 UnalignedTarget=0 WordCountDiff=0 WordLenDiff=2.00000 WordLogCR=0 ||| 0-0 ||| Independent
The target words here are reflects and understands. | How to extract substrings from each line in text file? | 1.2 | 0 | 0 | 256 |
43,123,378 | 2017-03-30T16:24:00.000 | 1 | 0 | 0 | 0 | python | 43,189,429 | 1 | false | 0 | 0 | df = pd.concat([a, b])  # stack the two dataframes
df = df.reset_index(drop=True)
df_gpby = df.groupby(list(df.columns))  # group identical rows together
idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1]  # keep rows that occur only once
df1 = df.reindex(idx) | 1 | 1 | 1 | I have two dataframes
ex:
test_1
name1 name2
a1 b1
a1 b2
a2 b1
a2 b2
a2 b3
test_2
name1 name2
a1 b1
a1 b2
a2 b1
I need the difference of two dataframes like
name1 name2
a2 b2
a2 b3 | Difference of two dataframes in python | 0.197375 | 0 | 0 | 63 |
43,123,395 | 2017-03-30T16:24:00.000 | 0 | 0 | 0 | 0 | python | 43,159,587 | 1 | true | 1 | 0 | Answering my own question .... I realised I was using the WebView's load_string function, rather than writing the html to a file and loading that. This means that there is no html file as a base uri to load the image relative to.
The designers have obviously thought of this and provided a base_uri parameter, which I had just set to "File://". Changing this to a path which makes sense in conjunction with the relative path to the image file fixes the problem.
Hope this will be useful to someone who may have the same problem. | 1 | 0 | 0 | I'm using webkit.WebView in Python to display html generated from Markdown. I can display an image from a local file by generating an img tag with an absolute src path, but a relative path doesn't work. The html with the relative path displays the image OK in Firefox. Is this a known problem with webkit and if so is there a solution? | Relative image paths in html displayed by webkit | 1.2 | 0 | 0 | 94 |
43,129,875 | 2017-03-30T23:15:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 43,502,353 | 3 | false | 0 | 0 | I have PyCharm Community Edition 5.0.5 and I successfully added tweepy to PyCharm. Here are the steps: go to PyCharm -> Preferences -> Project: your_project -> Project Interpreter, then at the bottom of the window click the "plus" button and type tweepy. Select tweepy on the left side of the window and click the Install Package button. Once it has installed, press the OK button. Done!
Good luck
-Mauricio | 3 | 1 | 0 | I am trying to install Tweepy on PyCharm. I have the latest version of PyCharm, and I am attempting to clone tweepy from GitHub. I have tried running the code in PyCharm, IDLE, and the python interpreter in the Mac Terminal. None have worked, and any help would be much appreciated.
Thanks | Getting Tweepy in PyCharm | 0.066568 | 0 | 0 | 5,447 |
43,129,875 | 2017-03-30T23:15:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 53,102,161 | 3 | false | 0 | 0 | Your best choice will be the terminal route along with the IDE installation. I always have this issue with Windows machines.
Seems like using both methods completes a "Symlink" of some sort. | 3 | 1 | 0 | I am trying to install Tweepy on PyCharm. I have the latest version of PyCharm, and I am attempting to clone tweepy from GitHub. I have tried running the code in PyCharm, IDLE, and the python interpreter in the Mac Terminal. None have worked, and any help would be much appreciated.
Thanks | Getting Tweepy in PyCharm | 0 | 0 | 0 | 5,447 |
43,129,875 | 2017-03-30T23:15:00.000 | 4 | 0 | 1 | 0 | python,pycharm | 50,724,345 | 3 | false | 0 | 0 | To install tweepy in PyCharm:
press ALT + F12 to open the Terminal
type: pip install tweepy | 3 | 1 | 0 | I am trying to install Tweepy on PyCharm. I have the latest version of PyCharm, and I am attempting to clone tweepy from GitHub. I have tried running the code in PyCharm, IDLE, and the python interpreter in the Mac Terminal. None have worked, and any help would be much appreciated.
Thanks | Getting Tweepy in PyCharm | 0.26052 | 0 | 0 | 5,447 |
43,133,642 | 2017-03-31T06:13:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi | 43,133,737 | 1 | false | 0 | 0 | Someone correct me if I'm wrong, but since Python is interpreted in the most common implementations, I don't believe you're going to be able to make it unreadable.
Assuming that you are running Linux on your Raspberry Pi, you might be able to get the tiniest bit of security using chmod 100 on it, but I do not know enough to confirm or deny this for sure. | 1 | 0 | 0 | 1) I am working on a project on a Raspberry Pi. Once I have finished everything, I want my SD card/code to be properly locked, so that no one is able to read or write the code, just like we lock other small microcontrollers (AVR/PIC). Please help me do that.
2) I am generating logs in my code using the logging library; will I be able to write logs if my SD card/code is read/write protected?
My objective is that no one be able to steal my code or make modifications to it. What should I do to protect my code from being stolen or changed? | Raspberry pi : Make RPI code Read and Write Protected | 0 | 0 | 0 | 172 |
43,136,568 | 2017-03-31T08:59:00.000 | 0 | 0 | 1 | 0 | python,regex,nltk,chunking | 43,199,730 | 1 | false | 0 | 0 | The solution is the following:
grammar = r"CHUNK:{<\(><NNP><CD><\)>}" | 1 | 0 | 0 | I am working with NLTK and I am trying to chunk (AIM 20-40-60) from the following text:
text = for more information refer to the Business Reporting Policy (AIM 20-40-60)
Currently I am using the following chunk pattern grammar = r"CHUNK:{<NN.*><CD>}" which is able to perfectly capture the AIM 20-40-60 part.
Nevertheless, I also want the parentheses () to be part of the chunk, and since I am relatively new to regular expressions and chunking, I don't know the exact regex pattern for capturing the parentheses. | Chunking parenthesis with NLTK | 0 | 0 | 0 | 320 |
43,136,997 | 2017-03-31T09:20:00.000 | 5 | 1 | 0 | 1 | python,autotools,automake | 43,163,111 | 1 | true | 0 | 0 | Create a config.py.in with some contents like MYVAR = '''@MYVAR@''' and add it to AC_CONFIG_FILES in your configure.ac. You can then import config in your other Python scripts.
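A sketch of how the pieces might fit together (MYVAR follows the answer's example; my assumption is that the value is also exported for substitution with AC_SUBST in configure.ac, alongside AC_CONFIG_FILES([config.py])):

```python
# config.py.in — listed in AC_CONFIG_FILES([config.py]) in configure.ac;
# ./configure rewrites @MYVAR@ with the AC_SUBSTed value when it generates config.py
MYVAR = '''@MYVAR@'''
```

Other Python scripts then simply do from config import MYVAR.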
This fulfills much the same function as config.h does for C programs. | 1 | 2 | 0 | I want to pass a constant in a C preprocessor style but with a Python script.
This constant is already declared with AC_DEFINE in my configure.ac file and used in my C program, and now I need to pass it to a Python script too.
I tried with a custom target in my Makefile.am with a sed call to preprocess a specific symbol in my Python script, but it seems dirty-coding to me.
How can I achieve this? | autotools: pass constant from configure.ac to python script | 1.2 | 0 | 0 | 149 |
43,137,423 | 2017-03-31T09:39:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,web-scraping,scrapy | 43,137,750 | 2 | false | 1 | 0 | Yes, it is, you can install it via
pip install scrapy
(You may want to activate your virtual environment first) | 1 | 0 | 0 | Is 'scrapy' compatible with Python 3(or later) on Windows?
If not, then is the only option to use it with Python 2.7?
I need this for a project I need to do.
Thank you. | Is Scrapy compatible with Python 3 on Windows? | 0.099668 | 0 | 0 | 1,291 |
43,139,718 | 2017-03-31T11:36:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,statistics,deep-learning | 43,141,030 | 2 | true | 0 | 0 | If you are sure enough that the alternative-hypothesis data comes from a different distribution than the null-hypothesis data, you can try an unsupervised learning algorithm: e.g. k-means or a GMM with the right number of clusters could yield a good separation of the data. You can then assign a label to the second-class data and train a classifier using it.
This is the general approach of semi-supervised learning.
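A minimal, dependency-light sketch of that first idea (the toy data, the two-cluster count, and the clean separation are all illustrative assumptions on my part):

```python
import numpy as np

rng = np.random.default_rng(0)
# unlabeled pool: a known "null" cluster plus data from an unknown second distribution
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (50, 2))])

# tiny 2-means: alternate nearest-centre assignment and centroid update
centers = X[rng.choice(len(X), 2, replace=False)]
labels = np.zeros(len(X), dtype=int)
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
# the two clusters become pseudo-labels for training an ordinary classifier
```

With the clusters pseudo-labelled, any off-the-shelf classifier can then be trained on them.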
Another idea would be to consider the alternative-hypothesis data as outliers and use an anomaly detection algorithm to find your second-class data points. This is much more difficult to achieve and relies heavily on the supposition that the data comes from a really different distribution. | 1 | 1 | 1 | Basically, I am interested in solving a hypothesis problem, where I am only aware of the data distribution of a null hypothesis and don't know anything about the alternative case.
My concern is how I should train my deep neural network so that it can classify or recognise whether a particular data sample has a distribution similar to the null-hypothesis case or comes from another class (the alternative-hypothesis case).
According to my understanding, it's different from binary classification (the one-vs-all case), because there we know what data we are going to tackle, but here the alternative-hypothesis case can follow any data distribution.
Here I am giving you an example situation, what I want exactly
Suppose I want to predict that a person is likely to have cancer or not
e.g
I have a data set of the factors that cause cancer like,
Parameter A=1,Parameter B=3.87,Parameter C=5.6,Has cancer = yes
But I don't have a data set where
Parameter A=2,Parameter B=1.87,Parameter C=2.6,Has cancer = No
It can be anything like this.
Meaning: I don't know anything that leads to a conclusion of not having cancer. Can I still train my model to recognise whether a person has cancer? | How can I create a deep neural network which has a capability to take a decision for hypothesis? | 1.2 | 0 | 0 | 103 |
43,140,267 | 2017-03-31T12:05:00.000 | 0 | 0 | 0 | 0 | python,graph,shortest-path,chemistry,cheminformatics | 43,140,304 | 2 | true | 0 | 0 | Make sure all edges that would lead to the forbidden node(s) have an infinite cost, and whichever graph-traversing algorithm you use will deal with it automatically.
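Concretely, a dependency-free sketch (the toy graph and the forbidden atom are made up): a plain BFS that never steps onto a forbidden node — the same effect as giving its edges infinite cost:

```python
from collections import deque

def shortest_paths_avoiding(adj, start, forbidden):
    """Hop-count shortest paths from start, never entering forbidden nodes."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist and v not in forbidden:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # atoms absent from dist are only reachable via forbidden nodes

# toy "molecule": D hangs off the forbidden atom F, so it gets filtered out
adj = {"A": ["B", "F"], "B": ["A", "C"], "C": ["B"], "F": ["A", "D"], "D": ["F"]}
dist = shortest_paths_avoiding(adj, "A", forbidden={"F"})
print(dist)  # {'A': 0, 'B': 1, 'C': 2} — D never appears
```

If the bonds carried weights, the same skip-forbidden test drops into Dijkstra unchanged.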
Alternatively, just remove the forbidden nodes from being considered by the graph traversal algorithm. | 1 | 0 | 0 | I would like to implement the following in Python but not sure where to start. Are there good modules for shortest path problems of this type?
I am trying to define the shortest path from a particular atom (node) to all other atoms (nodes) in a given collection of xyz coordinates for a 3D chemical structure (the graph). The bonds between the atoms (nodes) are the edges for which travel from node to node is allowed.
I am trying to filter out certain atoms (nodes) from the molecule (graph) based on the connectivity outward from a selected central node.
**For the paths considered, I want to FORBID certain atoms (nodes) from being crossed. If the shortest path from A to B is through a forbidden node, this answer is disallowed. The shortest path from A to B must not include the forbidden node.**
If the shortest path from the selected centre atom (A) to any other node (B) INCLUDES the forbidden node, AND there is no other path available from A to B through the available edges (bonds), then node B should be deleted from the final set of xyz coordinates (nodes) to be saved.
This should be repeated for A to C, A to D, A to E, etc. for all other atoms (nodes) in the structure (graph).
Thanks in advance for any help you can offer. | Calculating shortest path from set node to all other nodes, with some nodes forbidden from path | 1.2 | 0 | 1 | 380 |
43,141,160 | 2017-03-31T12:51:00.000 | 0 | 0 | 1 | 0 | python-3.x,pandas | 43,141,287 | 2 | false | 0 | 0 | e.g. ord("a") gives 97 in ASCII.
Write:
print(ord("a"))
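Applied to the whole dataframe column in the question (a sketch — I'm assuming pandas and a column literally named 'label'):

```python
import pandas as pd

df = pd.DataFrame({"label": ["b", "m", "n"]})
# ord maps each one-character string to its ASCII code point
df["label"] = df["label"].map(ord)
print(df["label"].tolist())  # [98, 109, 110]
```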
For the single character above, the answer would be 97. | 1 | 1 | 1 | I have a dataframe in which one column, called 'label', holds values like 'b', 'm', 'n' etc.
I want 'label' to instead hold the ascii equivalent of the letter.
How do I do it? | How to convert a column of a dataframe from char to ascii integers? [Pandas] | 0 | 0 | 0 | 3,205 |
43,141,252 | 2017-03-31T12:55:00.000 | 0 | 0 | 0 | 1 | python,multithreading,subprocess | 43,145,849 | 1 | false | 0 | 1 | If you are using subprocess.Popen simply to spin off another process, there is no reason you need to do so from another thread. A sub-process created this way does not block your main thread. You can continue to do other things while the sub-process is running. You simply keep a reference to the Popen object returned.
The Popen object has all the facilities you need for monitoring / interacting with the sub-process. You can read and write to its standard input and output (via stdin and stdout members, if created with PIPE); you can monitor readability / writability of stdin and stdout (with select module); you can check whether the sub-process is still in existence with poll, reap its exit status with wait; you can stop it with terminate (or kill depending on how emphatic you wish to be).
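A minimal sketch of that lifecycle (the child command is only illustrative, and the Tkinter stop-button wiring is left out):

```python
import subprocess
import sys

# launch a long-running child without blocking; keep the Popen reference around
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])
print("pid:", proc.pid)     # known immediately, long before the child exits

assert proc.poll() is None  # None means the child is still running

proc.terminate()            # e.g. from a stop-button callback
proc.wait()                 # reap the exit status
```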
There are certainly times when it might be advantageous to do this from another thread -- for example, if you need significant interaction with the sub-process and implementing that in the main thread would over-complicate your logic. In that case, it would be best to arrange a mechanism whereby you signal to your other "monitoring" thread that it's time to shutdown and allow the monitoring thread to execute terminate or kill on the sub-process. | 1 | 1 | 0 | How can I return a process id of a lengthy process started using Thread in Python before the thread completes its execution?
I'm using Tkinter GUI so I can't start a lengthy process on the main thread so instead I start one on a separate thread.
The thread in turn calls subprocess.Popen. This process should run for 5-6 hours.
But when I press the stop button I need this process to stop, and I am unable to return the process id of the process created using subprocess.Popen.
Is there any solution to this? | How to return a process id of a lengthy process started using Thread in python before the thread completes its execution | 0 | 0 | 0 | 254 |
43,144,802 | 2017-03-31T15:48:00.000 | 0 | 0 | 0 | 1 | python,airflow | 43,146,209 | 1 | false | 0 | 0 | Trigger_dag concept
Let the task that "uses a database hook in a python operator to generate a list" be the task in the controller DAG, and pass each item in the list to the triggered DAG in the params section.
You will find a reference example in the examples folder of your Airflow installation.
Good Luck! | 1 | 5 | 0 | I want to use Airflow to generate client reports, I would like to have one DAG that loops through all clients and launches a task to generate their report. The list of clients is gathered by the first task in the DAG and cannot be hardcoded in.
Basically I have a task that uses a database hook in a python operator to generate a list. Then for each item in the list I would like to execute a task using a python operator with that item being passed as an argument to the python function. Is there a certain pattern I can use to achieve this? | Dynamic task generation in an Airflow DAG | 0 | 0 | 0 | 1,242 |
43,145,332 | 2017-03-31T16:17:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 43,158,973 | 3 | false | 0 | 0 | It is better to create an array of zeros and fill it using if-else. Even though conditions slow your code down, reshaping an empty array or concatenating it with new vectors on every loop iteration is a far slower operation, because each time a new array of the new size is created and the old array is copied into it, together with the new vector, value by value. | 1 | 2 | 1 | I am writing code and efficiency is very important.
Actually I need a 2D array that I am filling with 0s and 1s in a for loop. Which is better and why?
Make an empty array and fill it with "0" and "1" values. (It's pseudocode; my array will be much bigger.)
Make an array filled with zeros, then use an if(): where the value should not be zero, put a one.
So I need to know which is more efficient:
1. Put every element "0" and "1" to empty array
or
2. Make if() (efficiency of 'if') and then put only "1" element. | numpy array of zeros or empty | 0.066568 | 0 | 0 | 5,613 |
43,145,705 | 2017-03-31T16:39:00.000 | 0 | 0 | 0 | 1 | python,tornado | 43,171,572 | 1 | false | 1 | 0 | Yes, there is the tornado.options package, which does pretty much what you need. Keep in mind, however, that the values saved here are not persisted between requests; if you need that kind of functionality, you will have to implement an external persistence solution, which you already have done with SQLite. | 1 | 0 | 0 | I am working on a python/tornado web application.
I have several options to save in my app.
Those options can be changed by the user, and they will be accessed very often.
I have created an SQLite database, but that involves disk operations, and I am asking what the best location for those options is.
Does tornado embed a feature for custom user options ?
Thanks | Where should i save my tornado custom options | 0 | 1 | 0 | 28 |
43,146,194 | 2017-03-31T17:08:00.000 | 1 | 0 | 1 | 0 | python,django,python-3.6,virtualenvwrapper | 47,361,540 | 2 | true | 0 | 0 | virtualenvwrapper-win is only intended for the DOS cmd.exe. From the README:
These scripts should work on any version of Windows (Windows XP, Windows Vista, Windows 7/8/10).
However, they only work in the regular command prompt. They will not work in Powershell. There are other virtualenvwrapper projects out there for Powershell. | 1 | 1 | 0 | When I run pip install virtualenvwrapper-win in a PowerShell console I get the error:
PS C:\Windows\system32> pip install virtualenvwrapper-win
Collecting virtualenvwrapper-win
Using cached virtualenvwrapper-win-1.2.1.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'setuptools'
Command "python setup.py egg_info" failed with error code 1 in
I've tried doing pip install setuptools and also tried uninstalling and reinstalling and update but the error persists.
pip and python executables have been added to environment variables. | pip install error: ModuleNotFoundError No module named 'setuptools' | 1.2 | 0 | 0 | 4,160 |
43,147,818 | 2017-03-31T18:56:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,properties,descriptor | 43,149,096 | 2 | false | 0 | 0 | why do properties/descriptor instances have to be class attributes?
They don't have to be, they just are. This was a design decision that probably has many more reasons to back it up than I can think (simplifying implementation, separating classes from objects).
why can properties/descriptor instances not be instance attributes?
They could be, you can always override __getattribute__ in order to invoke any descriptors accessed on an instance or forbid them altogether if you desire.
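A toy sketch of such an override (purely illustrative — the class names are mine): it manually applies the descriptor protocol to values found in the instance __dict__.

```python
class AllowInstanceDescriptors:
    def __getattribute__(self, name):
        value = object.__getattribute__(self, name)
        in_instance = name in object.__getattribute__(self, "__dict__")
        # if an *instance* attribute defines __get__, invoke the protocol by hand
        if in_instance and hasattr(type(value), "__get__"):
            return type(value).__get__(value, self, type(self))
        return value

class Ten:
    def __get__(self, obj, objtype=None):
        return 10

o = AllowInstanceDescriptors()
o.x = Ten()   # a descriptor stored on the instance…
print(o.x)    # …is actually invoked: prints 10
```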
Keep in mind that the fact that Python won't stop you from doing this doesn't mean it's a good idea. | 1 | 7 | 0 | Recently I have been learning about managed attributes in Python and a common theme with properties and descriptors is, that they have to be assigned as class attributes. But nowhere can I find an explanation of why and especially why they cannot be assigned as instance attributes. So my question has actually two parts:
why do properties / descriptor instances have to be class attributes?
why can properties / descriptor instances not be instance attributes? | Why do properties have to be class attributes in Python? | 0 | 0 | 0 | 135 |
43,149,092 | 2017-03-31T20:22:00.000 | 0 | 0 | 0 | 0 | python,flask,sqlalchemy,flask-sqlalchemy | 43,703,427 | 2 | false | 1 | 0 | I suggest you look at Server-Sent Events (SSE). I am looking for SSE code for Postgres, MySQL, etc.; it is available for Redis. | 1 | 0 | 0 | I have phpMyAdmin to view and edit a database and a Flask + SQLAlchemy app that uses a table from this database. Everything is working fine and I can read/write to the database from the Flask app. However, if I make a change through phpMyAdmin, this change is not detected by SQLAlchemy. The only way to get those changes is by manually refreshing the SQLAlchemy connection.
My question is how to tell SQLAlchemy to reload/refresh its database connection? | Flask App using SQLAlcehmy: How to detect external changes committed to the database? | 0 | 1 | 0 | 665 |
43,149,372 | 2017-03-31T20:42:00.000 | 1 | 0 | 0 | 0 | python,pdf,ghostscript | 43,154,410 | 1 | true | 0 | 0 | If the file renders as expected in Ghostscript then you can run it through GS to the pdfwrite device and create a new PDF file which won't be damaged.
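Something along these lines (an untested sketch; the file names are placeholders — pdfwrite re-emits the document with a freshly built xref):

```sh
gs -o repaired.pdf -sDEVICE=pdfwrite damaged.pdf
```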
Preview is (like Acrobat) almost certainly silently repairing the problem in the background. Ghostscript will be doing the same, but unlike other applications we feel you need to know that the file has a problem. Firstly so that you know its broken, secondly so that if the file renders incorrectly in Ghostscript (or indeed, other applications) you know why.
Note that there are two main reasons for a damaged xref; firstly the developer of the application didn't read the specification carefully enough and the file offsets in the xref are correct, but the format is incorrect (this is not uncommon and a repair by GS will be harmless), secondly the file genuinely has been damaged in transit, or by editing it.
In the latter case there may be other problems and Ghostscript will try to warn you about those too. If you don't get any other warnings or errors, then its probably just a malformed xref table. | 1 | 6 | 0 | Are there any solutions (preferably in Python) that can repair pdfs with damaged xref tables?
I have a pdf that I tried to convert to a png in Ghostscript and received the following error:
**** Error: An error occurred while reading an XREF table.
**** The file has been damaged. This may have been caused
**** by a problem while converting or transfering the file.
However, I am able to open the pdf in Preview on my Mac and when I export the pdf using Preview, I am able to convert the exported pdf.
Is there any way to repair pdfs without having to manually open them and export them? | Repairing pdfs with damaged xref table | 1.2 | 1 | 0 | 8,136 |
43,150,164 | 2017-03-31T21:46:00.000 | 0 | 0 | 0 | 0 | python,feed,xively | 43,162,396 | 1 | false | 0 | 0 | It is probably failing as your weather station software is unable to verify the certificate provided by the server.
I experienced the same today; it seems the last time the data was successfully uploaded was about 2 weeks ago.
I'm using curl, and for the time being I solved it by using the "-k" switch (meaning curl will still establish the connection without verifying the certificate). | 1 | 0 | 0 | I'd appreciate any help that you might offer with debugging my AirPi-based weather station that uploads data to my Xively Personal account. The station has been working and uploading for several months. I checked the Xively graphs and noticed that they had flat-lined. I checked the weather station and it was working as before, other than the status of the Python command that posts the data was coming back as failed.
I changed nothing so am really confused as to why something that worked flawlessly for months suddenly stopped and since that date has refused to work again.
Does anyone have any ideas what I might do to rectify this situation? Many thanks in advance for your attention.
Ian. | Xively Personal - was working but now data does not change | 0 | 0 | 0 | 103 |
43,150,581 | 2017-03-31T22:29:00.000 | 1 | 0 | 1 | 0 | python,virtualenv | 43,150,652 | 1 | false | 0 | 0 | Have you got other Python versions installed? That might be the problem.
Try using pip3 instead of pip | 1 | 2 | 0 | Python3.5 does not locate installed modules when invoked in virtual env.
Create virtual env: python3.5 -m venv autogit/venv && cd autogit
source venv/bin/activate
which python == ...autogit/venv/bin/python
Weird, would expect python3.5
Add my python source code to /autogit and pip freeze>requirements.txt
pip install -r requirements.txt
ls venv/lib/python3.5/site-packages shows request-0-0-0-py3.5.egg-info and some other stuff
Since dependencies are installed under python3.5, and which python revealed python rather than python3.5, let's invoke the python3.5 binary explicitly: venv/bin/python3.5 autogit.py
Get ImportError: No module named 'request
??? Where could python be looking for packages if not in my virtual env?
UPDATE The questions above remain unanswered; here are things I noticed since then and the workaround I used:
pip install produced a file request-0-0-0-py3.5.egg-info. It did NOT produce an actual request directory with the source code or binaries for this module. Also, why is it version 0.0.0? That is fishy.
After some googling I noticed the module I wanted seemed to be named requests, not request, which is what was in my source. I changed it to requests, ran pip install, and everything works. It was hard to see that there was a mistake because pip installing request did not fail. | Python does not find installed modules | 0.197375 | 0 | 0 | 788 |
43,151,105 | 2017-03-31T23:40:00.000 | 0 | 0 | 0 | 0 | windows,ui-automation,python-appium | 63,505,617 | 1 | false | 0 | 0 | I'm surprised that after 3+ years, nobody has responded to this question...
Anyhow, to assist others who may be wondering how to open Action Center > Connect in Windows 10, you can launch this URI from a command-prompt (cmd.exe):
start ms-settings-connectabledevices:devicediscovery
Otherwise, from a PowerShell session:
Start-Process -FilePath ms-settings-connectabledevices:devicediscovery
Or, simply from Windows 10 Start Menu > Run...
ms-settings-connectabledevices:devicediscovery
I'll leave it to others to determine how to invoke this from Appium in Python. | 1 | 1 | 0 | I'm writing an UI test automation for an app connecting laptop screen to TV. I need to open Connect through Action Center to see the list of available receivers. I'm using Appium in Python to test the app but the thing is Appium doesn't support Desktop app. So is there any way that I can open Connect in Action Center panel automatically? Thank you. | How do I open Action Center ---> Connect in Windows 10 for testing? | 0 | 0 | 0 | 168 |
43,151,776 | 2017-04-01T01:22:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,function,global-variables | 43,151,811 | 1 | true | 0 | 0 | Two ideas that I have: either pass a reference to all these variables through functions, or define classes containing the variables. Then all the functions of the class have access to the values defined in the class and you don't need to pass them, or you pass a single pack of variables. | 1 | 0 | 0 | I have made a program where there are many values that are accessed and changed in many classes and functions. I want to know how to use and change a variable without using global, or only using it once. I used global around 20 times throughout my code and it looks ugly and is annoying. | Python Globalling to use and change in many functions | 1.2 | 0 | 0 | 31 |
43,154,781 | 2017-04-01T08:53:00.000 | 1 | 0 | 1 | 0 | python-3.x,ibm-cloud,speech-to-text,watson | 43,174,919 | 2 | false | 0 | 0 | You can install packages when using SoloLearn. You need to ask SoloLearn's administrators to install the package for you.
The python playground includes some of the most popular packages but it's very limited in terms of what you can do if the package you want to use is not there. | 1 | 1 | 0 | I want to install third party package in python online coding environments. Could you please tell me how we can achieve this?
The below line needs to execute
from watson_developer_cloud import SpeechToTestV1 as STT
when I run the above line, I am getting the following error,
Error:
Traceback (most recent call last):
File "..\Playground\", line 1, in
\ufefffrom watson_developer_cloud import SpeechToTestV1 as STT
ImportError: No module named 'watson_developer_cloud'
I even tried the below command in Code Playground, but it throws an incorrect-syntax error.
pip install watson_developer_cloud
Thanks in advance, | How to install new module/package in python coding online environment? | 0.099668 | 0 | 0 | 806 |
43,155,042 | 2017-04-01T09:25:00.000 | 2 | 0 | 0 | 0 | python,jenkins,continuous-integration | 43,264,504 | 1 | true | 1 | 0 | The problem may appear when the job you are trying to build with python-jenkins doesn't require any parameters, so if you try to pass some, it just fails.
Please double-check this.
Hope, it will help | 1 | 1 | 0 | server.build_job(self.job_full_name, parameters=params) when parameters is not None, jenkins.JenkinsException: Error in request. Possibly authentication failed [500]: Server Error occurs. It works when parameters=None. | 500 server error when using jenkins python module build_job(job_name, parameters=) method | 1.2 | 0 | 0 | 2,304 |
43,156,251 | 2017-04-01T11:29:00.000 | 2 | 0 | 1 | 0 | python-3.x,installation,anaconda,ubuntu-16.04 | 43,156,652 | 2 | false | 0 | 0 | Try using python3 in your terminal instead of python. This should start the python 3.X interpreter.
Note that other scripts like pip also have an equivalent pip3 for python 3.X. | 1 | 1 | 0 | I have installed Anaconda 4.3.1, Python 3.6, on my Ubuntu. Now when i run Python it is saying that the version is 2.7 and not 3.6 as i wanted it to be. and there is no mention to Anaconda beside the version. I am quite sure that i installed Anaconda the right way. What could i do? | Not the version that should be when installing Anaconda | 0.197375 | 0 | 0 | 46 |
43,157,877 | 2017-04-01T14:07:00.000 | 0 | 0 | 0 | 0 | python-3.x | 43,157,904 | 1 | true | 0 | 0 | You imported capital 'F' Frog. You may be able to replace frog.Frog with simply Frog. | 1 | 0 | 0 | Fast Question
in my file i wrote
from foggerlib.frog import Frog
my datamember is self.F = frog.Frog(perameters)
line 13, in init()
self.F = frog.Frog(self, x, y, w, h, dx, dy, s, hg, vg)
NameError: name 'frog' is not defined
why is this happening? | When Importing a File, Error States File is not Named. Python | 1.2 | 0 | 0 | 26 |
43,159,488 | 2017-04-01T16:46:00.000 | 2 | 0 | 1 | 1 | python,shell | 47,840,739 | 5 | false | 0 | 0 | If your using Windows 10 just type in idle where it says: "Type here for search" | 3 | 12 | 0 | I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But everytime I open Python from my Desktop or from C:\Python\python.exe it just opens a black command prompt without any Menu options like File menu, Edit Menu, Format Menu etc. I can't see any colors of the code, it's just black screen with white text. I searched about it on internet and came to know that what I am opening is the Editor winodws and I need to open Shell Window in order to have access to all of those options and features. I can't figure out where is the .exe of Shell Window and with what name is it? Please help me.
P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened. | How do I open Python IDLE (Shell WIndow) in WIndows 10? | 0.07983 | 0 | 0 | 68,871 |
43,159,488 | 2017-04-01T16:46:00.000 | 4 | 0 | 1 | 1 | python,shell | 56,585,937 | 5 | false | 0 | 0 | Start menu > type IDLE (Python 3.4.3 <bitnum>-bit).
Replace <bitnum> with 32 if 32-bit, otherwise 64.
Example:
IDLE (Python 3.6.2 64-bit)
I agree with one who says:
just type "IDLE" in the start-menu where it says "Type here to search" and press [{ENTER}] | 3 | 12 | 0 | I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But everytime I open Python from my Desktop or from C:\Python\python.exe it just opens a black command prompt without any Menu options like File menu, Edit Menu, Format Menu etc. I can't see any colors of the code, it's just black screen with white text. I searched about it on internet and came to know that what I am opening is the Editor winodws and I need to open Shell Window in order to have access to all of those options and features. I can't figure out where is the .exe of Shell Window and with what name is it? Please help me.
P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened. | How do I open Python IDLE (Shell WIndow) in WIndows 10? | 0.158649 | 0 | 0 | 68,871 |
43,159,488 | 2017-04-01T16:46:00.000 | 16 | 0 | 1 | 1 | python,shell | 43,159,526 | 5 | true | 0 | 0 | In Windows you will need to right click a .py, and press Edit to edit the file using IDLE. Since the default action of double clicking a .py is executing the file with python on a shell prompt.
To open just IDLE:
Click on that. C:\Python36\Lib\idlelib\idle.bat | 3 | 12 | 0 | I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But everytime I open Python from my Desktop or from C:\Python\python.exe it just opens a black command prompt without any Menu options like File menu, Edit Menu, Format Menu etc. I can't see any colors of the code, it's just black screen with white text. I searched about it on internet and came to know that what I am opening is the Editor winodws and I need to open Shell Window in order to have access to all of those options and features. I can't figure out where is the .exe of Shell Window and with what name is it? Please help me.
P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened. | How do I open Python IDLE (Shell WIndow) in WIndows 10? | 1.2 | 0 | 0 | 68,871 |
43,160,202 | 2017-04-01T17:56:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,numpy,comma | 43,160,280 | 2 | false | 0 | 0 | In your code predictors is a two dimensional array. You're taking a slice of the array. Your output will be all the values with training_indices as their index in the first axis. The : is slice notation, meaning to take all values along the second axis.
This kind of indexing is not common in Python outside of numpy, but it's not completely unique. You can write your own class that has a __getitem__ method, and interpret it however you want. The slice you're asking about will pass a 2-tuple to __getitem__. The first value in the tuple will be training_indices, and the second value will be a slice object. | 1 | 0 | 1 | I am in an online course, and I find I do not understand this expression:
predictors[training_indices,:]
predictors is an np.array of floats.
training_indices is a list of integers known to be indices of predictors, so 0 <= i < len(training_indices).
Is this a special numpy expression?
Thanks! | in Python 3.6 and numpy, what does the comma mean or do in "predictors[training_indices,:]" | 0 | 0 | 0 | 802 |
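A minimal sketch of the indexing described in the answer above (the array values here are invented for illustration):

```python
import numpy as np

# Hypothetical predictors array: 4 samples with 3 features each.
predictors = np.array([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 6.0],
                       [7.0, 8.0, 9.0],
                       [10.0, 11.0, 12.0]])
training_indices = [0, 2]

# The comma separates one index expression per axis:
# training_indices picks rows (axis 0), ":" keeps every column (axis 1).
subset = predictors[training_indices, :]
print(subset)        # rows 0 and 2, all columns
print(subset.shape)  # (2, 3)
```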
43,161,718 | 2017-04-01T20:33:00.000 | 1 | 0 | 0 | 0 | python,sql,django | 43,164,854 | 2 | false | 1 | 0 | You do not have to be a wizard at it but understanding relations between data sets can be extremely helpful especially if you have a complicated data hierarchy.
Just learn as you go. If you want, you can see the SQL Django will execute for each app's migrations by running python manage.py sqlmigrate <app> <migration>. | 1 | 1 | 0 | I am directing this question to experienced Django developers. As in the subject: I have been learning Django since September 2016, but I started to learn it without any knowledge of database syntax. I know basic concepts and definitions, so I can easily implement them in Django models. Summarizing, do I have to know SQL to create web apps in Django? Thanks in advance. | Do I need to know SQL when I work with Django | 0.099668 | 1 | 0 | 1,361 |
43,162,825 | 2017-04-01T22:45:00.000 | 1 | 0 | 1 | 0 | python,arrays,algorithm,loops,for-loop | 43,162,852 | 3 | false | 0 | 0 | You can achieve this using the modulo operator, which returns the remainder of the division of its two operands. In this case, you would do the following: list[i%num_of_elements], where num_of_elements is a variable holding the number of elements in the list. | 1 | 1 | 0 | so I have this list,
List: [0, 0, 1, 0, 1];
And I need to make an algorithm with a for loop that shows the whole list (list[i]).
When I am at the first array position, I can do list[i-2] and list[i-1]; with this I can see the elements at the last position and the position before the last.
Example: list[0] = 0; list[i-1] = list[4] = 1; list[i-2] = list[3] = 0; so I can go to the last position and start from there.
But when I do list[i+1] at the last position I get an IndexError: list index out of range from the terminal.
My question is: if I am at the last position and I want to come back to the first one and keep the for loop going, to see all array elements from any position infinitely, how can I do it?
If the size of my array is 5, and I am at the second position (list[1]) in the loop and want to do list[i + 11], how can I make this represent list[2]?
I am trying to make this in Python. | Infinite Loop and Rotation of Array | 0.066568 | 0 | 0 | 2,357 |
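A small sketch of the modulo approach from the answer above, using the asker's list:

```python
lst = [0, 0, 1, 0, 1]
n = len(lst)

# Any offset wraps back into the valid range 0..n-1, so list[i + 11]
# with i == 1 becomes the same element as list[2]:
i = 1
assert lst[(i + 11) % n] == lst[2]

# Cycling through the list "forever" (12 steps shown) without IndexError:
cycled = [lst[k % n] for k in range(12)]
print(cycled)  # [0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
```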
43,166,420 | 2017-04-02T08:48:00.000 | 1 | 0 | 0 | 0 | python,pandas | 59,593,711 | 2 | false | 0 | 0 | There are 2 options to read a series from a csv file:
pd.Series.from_csv('File_name.csv')
pd.read_csv('File_name.csv', squeeze=True)
My preference is using squeeze=True with read_csv. | 1 | 4 | 1 | When I try to use x = pandas.Series.from_csv('File_name.csv', header = None)
It throws an error saying IndexError: single positional indexer is out-of-bounds.
However, If I read it as dataframe and then extract series, it works fine.
x = pandas.read_csv('File_name.csv', header = None)[0]
What could be wrong with first method? | How to read a csv file as series instead of dataframe in pandas? | 0.099668 | 0 | 0 | 2,872 |
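A sketch of the squeeze approach from the answer above. Note that pd.Series.from_csv was removed in later pandas versions, and on pandas 2.x the squeeze=True keyword is gone too; chaining .squeeze("columns") is the modern equivalent:

```python
import io
import pandas as pd

# io.StringIO stands in for 'File_name.csv' so the example is self-contained.
csv_data = io.StringIO("10\n20\n30\n")

# Read the single-column frame, then squeeze it down to a Series.
s = pd.read_csv(csv_data, header=None).squeeze("columns")
print(type(s).__name__)  # Series
print(s.tolist())        # [10, 20, 30]
```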
43,167,500 | 2017-04-02T10:58:00.000 | 1 | 0 | 1 | 0 | python,tensorflow,python-idle | 47,119,800 | 1 | false | 0 | 0 | IDLE does NOT provide such functionality - it works through idlelib, a package from stdlib so it's executed using pythonw -m idlelib. To change interprenter in IDLE, call it using a different interprenter - "C:\path\to\your\python\interprenter\pythonw.exe" -m idlelib (make sure idlelib is installed for target interprenter). | 1 | 1 | 0 | How do I select an interpreter on my IDE which is Python IDLE? I can't find the options to do that.
I managed to install Tensorflow but it only works when I import it in the terminal, not in my current IDE
What I want: Make my current IDE use the Python.exe that has been provided when I installed Tensorflow on my computer
What I tried: Using PYCHARM, it works (like a charm!) but I can't do that stuff like import module then have " >>> " then issue my commands etc... | how do i select an interpreter on my IDE which is Python IDLE | 0.197375 | 0 | 0 | 264 |
43,168,123 | 2017-04-02T12:08:00.000 | -1 | 0 | 1 | 0 | python,byte | 51,676,604 | 5 | false | 0 | 0 | It has a simple solution like this:
0x0400 = 0x04 × 256 + 0x00 | 2 | 3 | 0 | Say you have b'\x04' and b'\x00' how can you combine them as b'\x0400'? | How to append two bytes in python? | -0.039979 | 0 | 0 | 26,609 |
43,168,123 | 2017-04-02T12:08:00.000 | 0 | 0 | 1 | 0 | python,byte | 51,753,927 | 5 | false | 0 | 0 | In my application I am receiving a stream of bytes from a sensor. I need to combine two of the bytes to create the integer value.
Hossein's answer is the correct solution.
The solution is the same as when one needs to bit-shift binary numbers to combine them. For instance, if we have two nibbles which make a byte, high nibble 0010 and low nibble 0100, we can't just add them together; but if we bit-shift the high nibble to the left four places we can then OR the bits together to create 00100100. By bit-shifting the high nibble we have essentially multiplied it by 16, or 10000 in binary.
In the hex example above we need to shift the high byte over two digits, which in hex (0x100) is equal to 256. Therefore, we can multiply the high byte by 256 and add the low byte. | 2 | 3 | 0 | Say you have b'\x04' and b'\x00' how can you combine them as b'\x0400'? | How to append two bytes in python? | 0 | 0 | 0 | 26,609 |
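Both readings of the question can be sketched in a few lines: concatenating the raw bytes, versus combining them into one integer as the answers above describe:

```python
hi, lo = b'\x04', b'\x00'

# Plain concatenation of the two byte strings:
combined = hi + lo
assert combined == b'\x04\x00'

# Treating the pair as one 16-bit big-endian value,
# i.e. 0x04 * 256 + 0x00 from the first answer:
value = hi[0] * 256 + lo[0]
assert value == 0x0400 == 1024

# The stdlib equivalent of that multiply-and-add:
assert int.from_bytes(combined, "big") == 1024
```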
43,168,438 | 2017-04-02T12:41:00.000 | 0 | 1 | 1 | 0 | python,debugging,reverse-engineering,ida | 43,170,113 | 1 | false | 0 | 0 | NO ONE KNOW ?
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_arm>EDITBIN /REBASE:BASE=0x61000000 mydll.dll | 1 | 0 | 0 | Each time TEST_DEBUG.EXE loaded at 0x04000000 base in IDA-Modules, but
the TEST_DEBUG.DLL file loads at a random base like 0x0C120000, 0x0C710000, 0x0ABC0000.
How do I tell the IDA debugger to load TEST_DEBUG.DLL at base 0x0ABC0000 every time?
PS:
TEST_DEBUG.EXE loads many DLLs, and one of them is TEST_DEBUG.DLL | How IDA will load DLL at constant memory segment at debug process? | 0 | 0 | 0 | 590 |
43,169,118 | 2017-04-02T13:50:00.000 | 0 | 1 | 1 | 0 | python,c++ | 43,173,531 | 3 | false | 0 | 0 | You might consider having each "writer" process write its output to a temporary file, close the file, then rename it to the filename that the "reader" process is looking for.
If the file is present, then the respective reader process knows that it can read from it. | 1 | 0 | 0 | I have a slight problem. I have a project that I'm working on and that requires two programs to read/write into some .txt files.
Python writes into one .txt file, C++ reads from it. C++ does what it needs to do and then writes its own information into another .txt file that Python has to read.
What I want to know is how can I check with C++ if Python has closed the .txt file before opening the same file, as Python may still be writing stuff into it and vice versa?
If you need any extra information about this conundrum, feel free to contact me. | Has .txt file been closed? | 0 | 0 | 0 | 78 |
43,169,738 | 2017-04-02T14:52:00.000 | 0 | 0 | 1 | 0 | android,pycrypto,qpython | 43,861,801 | 1 | false | 0 | 1 | QPython doesn't support pycrypto right now; we will consider supporting it soon after we have delivered the brand new version we are developing. | 1 | 1 | 0 | I have been using pycrypto on my windows machine running python 2.7.
When I tried to install pycrypto on qpython2.7 via pip I got Runtime error ("autoconf error"). For reference I am running qpython on stock android nougat with no root access.
Is there any way to install pycrypto for qpython
Not a problem; I ended up using pyaes instead of pycrypto. | How to install pycrypto on qpython? | 0 | 0 | 0 | 467 |
43,169,766 | 2017-04-02T14:54:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,neural-network,dataset | 43,170,260 | 1 | false | 0 | 0 | I suggest you use the OpenCV library. Whether you use the MNIST data or PIL, once loaded they're all just NumPy arrays. If you want to make MNIST-style datasets fit with your trained model, here's how I did it:
1. Use cv2.imread to load all the images you want to act as training data.
2. Use cv2.cvtColor to convert all the images into grayscale and resize them to 28x28.
3. Divide each pixel in all the datasets by 255.
4. Do the training as usual!
I haven't tried to make it your own format, but theoratically it's the same. | 1 | 4 | 1 | I already know how to make a neural network using the mnist dataset. I have been searching for tutorials on how to train a neural network on your own dataset for 3 months now but I'm just not getting it. If someone can suggest any good tutorials or explain how all of this works, please help.
PS. I won't install NLTK. It seems like a lot of people are training their neural network on text but I won't do that. If I would install NLTK, I would only use it once. | Using your own Data in Tensorflow | 0.379949 | 0 | 0 | 1,168 |
43,175,272 | 2017-04-03T01:21:00.000 | 3 | 0 | 0 | 0 | python,tensorflow,deep-learning | 43,214,452 | 1 | true | 0 | 0 | You can create a third placeholder variable of type boolean to select which branch to use and feed that in at run time.
The logic behind it is that since you are feeding in the placholders at runtime anyways you can determine outside of tensorflow which placeholders will be fed. | 1 | 3 | 1 | Suppose I have two placeholder quantities in tensorflow: placeholder_1 and placeholder_2. Essentially I would like the following computational functionality: "if placeholder_1 is defined (ie is given a value in the feed_dict of sess.run()), compute X as f(placeholder_1), otherwise, compute X as g(placeholder_2)." Think of X as being a hidden layer in a neural network that can optionally be computed in these two different ways. Eventually I would use X to produce an output, and I'd like to backpropagate error to the parameters of f or g depending on which placeholder I used.
One could accomplish this using the tf.where(condition, x, y) function if there was a way to make the condition "placeholder_1 has a value", but after looking through the tensorflow documentation on booleans and asserts I couldn't find anything that looked applicable.
Any ideas? I have a vague idea of how I could accomplish this basically by copying part of the network, sharing parameters and syncing the networks after updates, but I'm hoping for a cleaner way to do it. | check if tensorflow placeholder is filled | 1.2 | 0 | 0 | 1,190 |
43,176,607 | 2017-04-03T04:42:00.000 | 2 | 0 | 0 | 1 | python-3.x,apache-spark,pyspark | 43,182,382 | 1 | true | 0 | 0 | Your streaming job is not supposed to calculate the Daily count/Avg.
Approach 1:
You can store the data consumed from Kafka into persistent storage like a DB/HBase/HDFS, and then run a daily batch which will calculate all the statistics for you, like the daily count or average.
Approach 2:
In order to get that information from the streaming job itself, you need to use accumulators, which will hold the record count and sum, and calculate the average accordingly.
Approach 3:
Use a streaming window; but holding data for a whole day doesn't make much sense. If you need a 5/10-minute average, you can use this.
I think the first method is preferable as it will give you more flexibility to calculate all the analytics you want. | 1 | 1 | 1 | We have a spark job running which consumes data from kafka stream , do some analytics and store the result.
Since data is consumed as it is produced to Kafka, if we want to get a count for the whole day, a count for an hour, or an average for the whole day, that is not possible with this approach. Is there any way we should follow to accomplish such a requirement?
Appreciate any help
Thanks and Regards
Raaghu.K | spark consume from stream -- considering data for longer period | 1.2 | 0 | 0 | 35 |
43,176,661 | 2017-04-03T04:47:00.000 | 1 | 0 | 1 | 0 | python-3.x,boost-python,dlib | 44,420,282 | 3 | false | 0 | 0 | You have to compile it for the platform. I believe PIP alone cannot accomplish that feat. You'll have to navigate the arcanery of bjam and boost build. God speed. | 1 | 2 | 0 | I want to install boost.python for the installation of dlib library. | How to install Boost.Python using pip on Windows 10? | 0.066568 | 0 | 0 | 10,811 |
43,177,320 | 2017-04-03T05:47:00.000 | 2 | 1 | 0 | 1 | python,github,directory | 43,177,411 | 1 | false | 0 | 0 | As long as all of the code used by the script has been compiled and loaded into the Python VM there will be no issue with the source moving since it will remain resident in memory until the process ends or is replaced (or swapped out, but since it is considered dirty data it will be swapped in exactly the same). The operating system, though, may attempt to block the move operation if any files remain open during the process. | 1 | 2 | 0 | I have a github project available to others. One of the scripts, update.py, checks github everyday (via cron) to see if there is a newer version available.
Locally, the script is located at directory /home/user/.Project/update.py
If the version on github is newer, then update.py moves /home/user/.Project/ to /home/user/.OldProject/, clones the github repo and moves/renames the downloaded repo to /home/user/.Project/
It has worked perfectly for me about five times, but I just realized that the script is moving itself while it is still running. Are there any unforeseen consequences to this approach, and it there a better way? | Are there any negative consequences if a python script moves/renames its parent directory? | 0.379949 | 0 | 0 | 24 |
43,178,966 | 2017-04-03T07:32:00.000 | 0 | 0 | 1 | 0 | python-3.x,text-classification,markov-models,hmmlearn | 43,355,585 | 1 | false | 0 | 0 | hmmlearn is designed for unsupervised learning of HMMs, while your problem is clearly supervised: given examples of English and random strings, learn to distinguish between the two. Also, as you've correctly pointed it out, the notion of hidden states is tricky to define for text data, therefore for your problem plain MMs would be more appropriate. I think you should be able to implement them in <100 lines of code in Python. | 1 | 0 | 1 | I want to implement a classic Markov model problem: Train MM to learn English text patterns, and use that to detect English text vs. random strings.
I decided to use hmmlearn so I don't have to write my own. However I am confused about how to train it. It seems to require the number of components in the HMM, but what is a reasonable number for English? Also, can I not do a simple higher order Markov model instead of hidden? Presumably the interesting property is is patterns of ngrams, not hidden states. | How to use hmmlearn to classify English text? | 0 | 0 | 0 | 764 |
43,179,875 | 2017-04-03T08:29:00.000 | 10 | 0 | 0 | 0 | python,django | 54,591,055 | 4 | false | 1 | 0 | You create models for your website. When a new instance is made for a model, django must know where to go when a new post is created or a new instance is created.
Here get_absolute_url comes into the picture. It tells Django where to go when a new post is created. | 1 | 43 | 0 | Django documentation says:
get_absolute_url() method to tell Django how to calculate the canonical URL for an object.
What is canonical URL mean in this is context?
I know from an SEO perspective that canonical URL means picking the best URL from the similar looking URLs (example.com , example.com/index.html). But this meaning doesn't fit in this context.
I know this method provides some additional functionality in Django admin, redirection etc. And I am fully aware of how to use this method.
But what is the philosophy behind it? I have never actually used it in my projects. Does it serve any special purpose? | When to use Django get_absolute_url() method? | 1 | 0 | 0 | 37,142 |
43,183,244 | 2017-04-03T11:16:00.000 | 7 | 0 | 1 | 0 | python,class,python-module | 57,102,475 | 5 | false | 0 | 0 | In the Python world, a module is a Python file (.py) inside a package. A package is a folder that has __init__.py in its root. It is a way to organize your code physically (in files and folders).
On the other hand, a class is an abstraction that gathers data (characteristics) and method (behavior) definitions to represent a specific type of object. It is a way to organize your code logically.
A module can have zero or one or multiple classes. A class can be implemented in one or more .py files (modules).
But often, we can organize a set of variables and functions into a class definition or just simply put them in a .py file and call it a module.
Likewise in system design, you can have elaborate logical modeling or just skip it and jump into physical modeling. But for very complex systems, it is better not to skip the logical modeling. For simpler systems, go KISS.
How to organize your code
This is how I decide to organize my code in classes or modules:
Class is supposed to be a blueprint to create (many) instances of objects based on that blueprint. Moreover, classes can have sub-classes (inheritance).
Therefore, if I need inheritance or (many) instantiations, I gather functions and variables under a class definition (methods and properties).
Otherwise, I Keep It Simple and Stupid (KISS) and use modules.
A good indication of a bad class (that should have been a module): you can rewrite all your object methods and properties with static methods and properties. | 2 | 47 | 0 | Can I assign value to a variable in the module? If yes, what is the difference between a class and module?
PS: I'm a Java guy (in case it helps in the way of explaining). Thanks. | Difference between Module and Class in Python | 1 | 0 | 0 | 65,683 |
43,183,244 | 2017-04-03T11:16:00.000 | 51 | 0 | 1 | 0 | python,class,python-module | 43,183,993 | 5 | false | 0 | 0 | There are huge differences between classes and modules in Python.
Classes are blueprints that allow you to create instances with attributes and bound functionality. Classes support inheritance, metaclasses, and descriptors.
Modules can't do any of this, modules are essentially singleton instances of an internal module class, and all their globals are attributes on the module instance. You can manipulate those attributes as needed (add, remove and update), but take into account that these still form the global namespace for all code defined in that module.
From a Java perspective, classes are not all that different here. Modules can contain more than just one class, however; functions and the result of any other Python expression can be globals in a module too.
So as a general ballpark guideline:
Use classes as blueprints for objects that model your problem domain.
Use modules to collect functionality into logical units.
Then store data where it makes sense to your application. Global state goes in modules (and functions and classes are just as much global state, loaded at the start). Everything else goes into other data structures, including instances of classes. | 2 | 47 | 0 | Can I assign value to a variable in the module? If yes, what is the difference between a class and module?
PS: I'm a Java guy (in case it helps in the way of explaining). Thanks. | Difference between Module and Class in Python | 1 | 0 | 0 | 65,683 |
43,184,937 | 2017-04-03T12:40:00.000 | 2 | 0 | 0 | 0 | python,django | 43,184,995 | 1 | true | 1 | 0 | No. Django views are specifically designed to prevent this. It would be a very bad idea; any instance variables set would be shared by all future users of that process, leading to potential information leakage and other thread-safety bugs.
If you want to store information between requests, use the session. | 1 | 2 | 0 | This is more of a conceptual question. While learning the Django class-based view, I am wondering if it is possible to make a call to a Django view as an initiation call. I mean, after the first call, the following calls from the templates can share the instance variables created by the first one. This avoids passing variables back and forth between the template and server. | Initiation call Django class based view. | 1.2 | 0 | 0 | 60 |
43,187,185 | 2017-04-03T14:23:00.000 | 6 | 0 | 1 | 0 | python,visual-studio | 44,996,726 | 4 | false | 0 | 0 | I am struggling with this as well. There is a Visual Studio Shell command execute file in Python interactive which is bound to Shift+Alt+F5 by default.
This works: if the focus is in a code window then the current file is executed. If the focus is in the Solution Explorer window, the file selected as "Startup item" is executed. There seems to be a glitch however: Some import statements from the specific file which work fine on the standard Ctrl+F5 will fail on Shift+Alt+F5. I need to figure out why this is the case and will report here.
EDIT: Once in the interactive window, change the working directory to the folder containing the project (os.chdir etc.). Then import your-filename works flawlessly. So I assume there is some problem with how the working directory is selected when executing Shift+Alt+F5.
This seems like a question that a simple Google Search could fix, but for some reason I cannot find the answer. I've done some google searching, and I watched the Visual Studio python official Microsoft series about it. One of the videos touched on using the interactive shell, but even in the video, when he clicked the Start (Run) button, the code ran in what looked like the command line.
I have used IDLE in the past, and now I think it is time to make the change to a bigger IDE. I love the code completion and templates of visual studio, and I can't wait to solve this (noob) question.
Thanks | Python 3.6 in Visual Studio 2017 How to Run Program in the Interactive Shell | 1 | 0 | 0 | 10,257 |
43,191,431 | 2017-04-03T18:05:00.000 | 0 | 0 | 1 | 1 | python,python-2.7 | 43,647,525 | 2 | true | 0 | 0 | since i know where it will be installed, you can set env, and then call sub processes.
The issue i was having is that a lot of these executables assign their own path variables which is what i wanted to do. Since i cant relaunch a new console due to security issues, the best course of action would be to navigate to the new applications target bin folder or otherwise and then set the env or pass it into subprocesses by appending it with Env variables. | 1 | 0 | 0 | I install a program through python, git in this case. Immediately after, I will call os.system("git --version") but the call doesn't go through because of the snapshot of variables has not been updated.
Is there a way to refresh the cmd prompt? Maybe just reimport os or something?
The issue I am having is that after installing an application, the app-related cmd commands are not yet recognized keywords.
I have noticed this is a recurring issue in all of my platform configuration installs.
I spent a while reading docs but I haven't seen anything really jumping out at me other than the concept that the env is pulled at the time of importing os, so maybe that means I could dump and reimport it. | import os, trying to refresh event variables after running a script | 1.2 | 0 | 0 | 45 |
43,193,286 | 2017-04-03T20:01:00.000 | 8 | 0 | 0 | 0 | python,html,django,django-templates,naming-conventions | 43,193,334 | 1 | false | 1 | 0 | I think one reason why underscores are better in Python files is so that they can be imported. A dash is interpreted as a minus sign, which can cause problems.
For your Django templates, it's a matter of preference so you will likely be fine using any convention you prefer. | 1 | 5 | 0 | We are using the Django framework and in the book "2 scoops of django" there is a recommendation to use underscore naming for specific things, should this be included in the naming of templates as well? front end developers here are really locked on dashes and I was just wondering? | Django template naming with dashes VS underscores | 1 | 0 | 0 | 913 |
43,196,158 | 2017-04-03T23:53:00.000 | 1 | 0 | 1 | 0 | python,debugging,pycharm | 43,196,188 | 1 | true | 0 | 0 | It is not. You're too late, I'm afraid. | 1 | 1 | 0 | I have a Python script running in PyCharm and I would like to interrupt it, examine a variable and resume. If I was using debug mode, this would be straightforward, but unfortunately I am not (and the script has been running for 24 hours). Is it possible to pause the script and then enter debug mode to examine a variable? | Switch between Run and Debug modes in Pycharm | 1.2 | 0 | 0 | 93 |
43,196,821 | 2017-04-04T01:23:00.000 | 0 | 0 | 1 | 0 | python,import | 43,220,261 | 1 | true | 1 | 0 | I looked at similar questions on SO and found that a common theme was that they had incorrect versions of whatever they were working with. I decided to switch to Python 3.5.3 over 2.7; this fixed the problem, but a new error "ImportError: No module named 'rawpy'" appeared. This is because the module is now in the incorrect directory. I was able to fix this by uninstalling and reinstalling the module using pip. | 1 | 0 | 0 | I downloaded a Python wrapper called rawpy recently using easy_install (pip install did not work). When I imported it and attempted to run code, this error appeared: "ImportError: DLL load failed: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail."
Could anyone offer a fix for this? | ImportError: DLL load failed: The application has failed to start because its side-by-side configuration is incorrect | 1.2 | 0 | 0 | 2,176 |
43,198,084 | 2017-04-04T04:11:00.000 | 1 | 0 | 1 | 1 | python | 43,198,109 | 2 | false | 0 | 0 | A module's location is always available in the __file__ variable. You can use the functions in os.path (I'm mainly thinking of dirname and join) to transform module-relative paths to absolute paths. | 1 | 0 | 0 | I discovered that a script's "current working directory" is, initially, not where the script is located, but rather where the user is when he/she runs the script.
If the script is at /Desktop/Projects/pythonProject/myscript.py, but I'm at /Documents/Arbitrary in my terminal when I run the script, then that's going to be it's present working directory, and an attempt at open('data.txt') is going to give File Not Found because it's not looking in the right directory.
So how is a script supposed to open files if it can't know where it's being run from? How is this handled?
My initial thought was to use absolute paths. Say my script needs to open data.txt which is stored alongside it in its package pythonProject. Then I would just say open('/Desktop/Projects/pythonProject/data.txt').
But then you can't ever move the project without editing every path in it, so this can't be the right solution.
Or is the answer simply that you must be in the directory where the script is located whenever you run the script? That doesn't seem right either.
Is there some simple manipulation for this that I'm not thinking of? Are you just supposed to os.chdir to the script's location at the beginning of the script? | How to handle filepaths? | 0.099668 | 0 | 0 | 61 |
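A short sketch of the script-relative approach described in the answer above (the data.txt name is just the asker's example):

```python
import os

# __file__ is the path of the running script; anchoring on it makes the
# lookup independent of the user's current working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(script_dir, "data.txt")

# open(data_path) would now find the file sitting next to the script,
# no matter where the script was launched from.
print(os.path.isabs(data_path))  # True
```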
43,199,108 | 2017-04-04T05:42:00.000 | 4 | 0 | 0 | 0 | python,machine-learning,scikit-learn,classification,regression | 43,199,249 | 2 | true | 0 | 0 | Generally, for a qualitative problem that is to classify between categories or class, we prefer classification.
for example: to identify if it is night or day.
For Quantitative problems, we prefer regression to solve the problems.
for example: to identify if it's the 0th class or the 1st class.
But in a special case, when we have only two classes, we can use both classification and regression to solve two-class problems, as in your case.
Please note that this explanation is from a two-class or multi-class point of view, though regression is meant to deal with real quantitative problems rather than classes.
Probability has nothing to do specifically with the method. Each method deduces a probability and, on the basis of that, predicts the outcome.
It is better if you explain the reference to predict_proba from your question.
Hope it helps! | 1 | 1 | 1 | Just a quick question, if I want to classify objects into either 0 or 1 but I would like the model to return me a 'likeliness' probability for example if an object is 0.7, it means it has 0.7 chance of being in class 1, do I do a regression or stick to classifiers and use the predict_proba function?
How are regression and the predict_proba function different?
Any help is greatly appreciated!
Thank you! | Regression vs Classifier predict_proba | 1.2 | 0 | 0 | 2,562 |
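A tiny sketch of predict_proba with scikit-learn (the toy data here is invented for illustration):

```python
from sklearn.linear_model import LogisticRegression

# Four one-feature samples with binary labels.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

clf = LogisticRegression().fit(X, y)

# One row per sample: [P(class 0), P(class 1)], summing to 1.
# This gives the "likeliness" the asker wants without switching to regression.
probs = clf.predict_proba([[1.5]])
print(probs.shape)  # (1, 2)
```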
43,199,359 | 2017-04-04T05:59:00.000 | 2 | 0 | 0 | 0 | python-3.x,xlsxwriter | 43,248,203 | 2 | true | 0 | 0 | I was trying to recreate(Thanks to @jmcnamara) the problem and I could figure out where it went wrong.
In my command to write_rich_string, sometimes it was trying to format the empty string.
my_work_sheet.write_rich_string(row_no, col_no,format_1, string_1, format_2, string_2, format_1, string_3)
I came to know that at some point of time the value of one among string_1, string_2 and string_3 becomes ''. Now I use write_rich_string only after ensuring they are not ''. | 1 | 1 | 0 | I'm creating and writing into an excel file using xlsxwriter module. But when I open the excel file, I get this popup:
We found a problem with some content in 'excel_sheet.xlsx'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes. If I click Yes, it says Repaired Records: String properties from /xl/sharedStrings.xml part (Strings) and then I can see the contents.
I found that this occurs because of the cells I wrote using write_rich_string.
my_work_sheet.write_rich_string(row_no, col_no,format_1, "Some text in format 1", format_2, "Text in format 2", format_1, "Again in format 1")
If I write it using write_string this doesn't occur. format_1 and format_2 has font name, color, size and vertical align set.
Can anyone suggest what goes wrong here? | Python xlsxwriter Repaired Records: String properties from /xl/sharedStrings.xml part (Strings) | 1.2 | 1 | 0 | 1,256 |
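One way to apply the fix from the accepted answer is a small helper that drops empty fragments before calling write_rich_string (the helper name is invented, and the strings stand in for real xlsxwriter Format objects):

```python
def rich_fragments(*pairs):
    """Flatten (format, text) pairs into write_rich_string arguments,
    skipping any pair whose text is empty, since empty string segments
    are what corrupted the sharedStrings.xml part above."""
    args = []
    for fmt, text in pairs:
        if text:
            args.extend([fmt, text])
    return args

# Usage sketch:
args = rich_fragments(("fmt1", "part one "), ("fmt2", ""), ("fmt1", "part two"))
print(args)  # ['fmt1', 'part one ', 'fmt1', 'part two']
# if args: my_work_sheet.write_rich_string(row_no, col_no, *args)
```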
43,202,548 | 2017-04-04T08:56:00.000 | 8 | 0 | 0 | 0 | python-3.x,gensim | 46,034,678 | 1 | false | 0 | 0 | kv.vector_size still works; I'm using gensim 2.3.0, which is the latest as I write. (I am assuming kv is your KeyedVectors object.) It appears object properties are not documented on the API page, but auto-complete suggests it, and there is no deprecated warning or anything.
Your question helped me answer my own, which was how to get the number of words: len(kv.index2word) | 1 | 6 | 1 | Im gensims latest version, loading trained vectors from a file is done using KeyedVectors, and dosent requires instantiating a new Word2Vec object. But now my code is broken because I can't use the model.vector_size property. What is the alternative to that? I mean something better than just kv[kv.index2word[0]].size. | gensim KeydVectors dimensions | 1 | 0 | 0 | 4,245 |
43,204,496 | 2017-04-04T10:22:00.000 | 8 | 0 | 0 | 1 | python,sockets,redis,redis-py | 43,210,008 | 2 | true | 0 | 0 | Redis' String data type can be at most 512MB. | 1 | 5 | 1 | We are trying to SET pickled object of size 2.3GB into redis through redis-py package. Encountered the following error.
BrokenPipeError: [Errno 32] Broken pipe
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
I would like to understand the root cause. Is it due to an input/output buffer limitation on the server side or the client side? Is it due to any limitation of the RESP protocol? Is a single value (bytes) of 2.3 GB allowed to be stored in Redis?
import pickle
import redis
r = redis.StrictRedis(host='10.X.X.X', port=7000, db=0)
pickled_object = pickle.dumps(obj_to_be_pickled)
r.set('some_key', pickled_object)
Client Side Error
BrokenPipeError: [Errno 32] Broken pipe
/usr/local/lib/python3.4/site-packages/redis/connection.py(544)send_packed_command()
self._sock.sendall(item)
Server Side Error
31164:M 04 Apr 06:02:42.334 - Protocol error from client: id=95 addr=10.2.130.144:36120 fd=11 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=16384 qbuf-free=16384 obl=42 oll=0 omem=0 events=r cmd=NULL
31164:M 04 Apr 06:07:09.591 - Protocol error from client: id=96 addr=10.2.130.144:36139 fd=11 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=40 qbuf-free=32728 obl=42 oll=0 omem=0 events=r cmd=NULL
Redis Version : 3.2.8 / 64 bit | Broken Pipe Error Redis | 1.2 | 0 | 0 | 9,490 |
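Given the 512 MB string limit noted in the answer, one possible workaround is splitting the pickled payload across several keys. The chunking helper below is only an illustrative sketch (the key-naming scheme is an assumption, not part of the original answer):

```python
CHUNK_SIZE = 512 * 1024 * 1024  # Redis strings are capped at 512 MB

def chunk_bytes(data, size=CHUNK_SIZE):
    """Split a bytes payload into pieces small enough for individual Redis strings."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# hypothetical usage with the question's client:
# for n, part in enumerate(chunk_bytes(pickled_object)):
#     r.set('some_key:%d' % n, part)
```

Reassembly would then concatenate the parts in key order before unpickling.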
43,204,913 | 2017-04-04T10:41:00.000 | 1 | 0 | 1 | 0 | python,django,reactjs | 43,205,392 | 1 | false | 1 | 0 | Just posting @Sami's comment as an answer so you can accept it.
The React site itself has quite good documentation. It doesn't care what the backend is, or if there even is a backend. That's all up to you. So to your question in the title: yes, you can. As for whether you should, that's an opinion-based question and not a good fit for Stack Overflow. | 1 | 0 | 0 | I am writing an API in Python. When I read the React.js documentation it describes a lot about the view layer and JSX, but I didn't find any good tutorial to start with. Apart from that, I'm confused about the technology decision: shall I go with these technologies or not?
Help me in making the right decision. | I am newbie to react.js can i use python as a backend and react as a frontend for data science application | 0.197375 | 0 | 0 | 354 |
43,207,222 | 2017-04-04T12:26:00.000 | -2 | 0 | 0 | 0 | python,numpy,scipy,polynomials,inverse | 43,208,268 | 3 | false | 0 | 0 | Try using the mathematical package Sage | 1 | 1 | 1 | I'm fairly new to Python and I have a question related to polynomials.
Let's have a high-degree polynomial in GF(2), for example:
x^n + x^m + ... + 1, where n, m could be up to 10000.
I need to find the inverse polynomial to this one. What would be the fastest way to do that in Python (probably using numpy)?
Thanks | Find inverse polynomial in python in GF2 | -0.132549 | 0 | 0 | 3,247 |
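As an alternative to Sage, here is a minimal pure-Python sketch (an illustration, not from the answer) of the extended Euclidean algorithm over GF(2)[x]. Polynomials are encoded as integer bitmasks, so x^3 + x + 1 is 0b1011; the inverse only exists modulo some modulus polynomial. For degrees up to 10000 this runs, but the schoolbook multiplication is O(n^2), so a library like Sage will be much faster:

```python
def deg(p):
    """Degree of a GF(2) polynomial stored as an int bitmask (deg(0) == -1)."""
    return p.bit_length() - 1

def gf2_mul(a, b):
    """Carry-less (XOR) multiplication of two GF(2)[x] polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    """Polynomial division in GF(2)[x]: returns (quotient, remainder)."""
    q = 0
    while deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_inverse(a, mod):
    """Inverse of a modulo `mod` in GF(2)[x] via the extended Euclidean algorithm."""
    t0, t1 = 0, 1
    r0, r1 = mod, a
    while r1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ gf2_mul(q, t1)
    if r0 != 1:
        raise ValueError("polynomial is not invertible modulo mod")
    return t0

# example: invert x + 1 in GF(2)[x] / (x^3 + x + 1)
inv = gf2_inverse(0b011, 0b1011)
print(bin(inv))  # 0b110, i.e. x^2 + x
```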
43,207,422 | 2017-04-04T12:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,opencv3.0,face-detection,face-recognition,opencv3.1 | 43,324,614 | 2 | false | 0 | 0 | I guess in your problem you are not actually referring to detection, but recognition; you must know the difference between these two things:
1. Detection does not distinguish between persons; it just detects the facial shape of a person based on the previously trained Haar cascade.
2. Recognition is the case where you first detect a person, then try to distinguish that person from your cropped and aligned database of pictures. I suggest you follow the Philipp Wagner tutorial for that matter. | 1 | 0 | 1 | I trained 472 unique images for a person A for Face Recognition using "haarcascade_frontalface_default.xml".
When I try to detect the face of the same person A in the same images I trained on, I get 20% to 80% confidence; that's fine for me.
But I am also getting 20% to 80% confidence for person B, whom I did not include in the training images. Why is this happening for person B when I am doing face detection?
I am using python 2.7 and OpenCV 3.2.0-dev version. | Why OpenCV face detection recognition the faces for untrained face? | 0 | 0 | 0 | 402 |
43,207,723 | 2017-04-04T12:49:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,cmd,pip | 43,211,167 | 1 | true | 0 | 0 | Done the problem was for AVG PC tuneup. this program was closing python. so python is was opening in new window. I simply removed this program. I tried disable Live optimization. But not worked. | 1 | 0 | 0 | I have installed python 3. When I want to use pip it opens a new window and the messages and logs disappear immediately by closing that cmd window, but in some days ago when i used pip it open that in the same window. I have tried and search many ways but the problem is still remaining. what should I do? | python is not running in the same cmd window | 1.2 | 0 | 0 | 42 |
43,209,135 | 2017-04-04T13:48:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,data-mining | 43,216,616 | 1 | false | 0 | 0 | This can be solved reasonably easily if you go to a transposed matrix.
Of any two features (now rows, originally columns) you compute the intersection. If it's larger than 50, you have a frequent cooccurrence.
If you use an appropriate sparse encoding (now of rows, but originally of columns - so you probably need not only to transpose the matrix, but also to reencode it) this operation takes O(n+m), where n and m are the numbers of nonzero values.
If you have an extremely high number of features this may take a while. But 100000 should be feasible. | 1 | 0 | 1 | The sparse matrix has only 0 and 1 at each entry (i,j) (1 stands for sample i has feature j). How can I estimate the co-occurrence matrix for each feature given this sparse representation of data points? Especially, I want to find pairs of features that co-occur in at least 50 samples. I realize it might be hard to produce the exact result, is there any approximated algorithm in data mining that allows me to do that? | Given a sparse matrix with shape (num_samples, num_features), how do I estimate the co-occurrence matrix? | 0 | 0 | 0 | 101 |
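A small NumPy illustration of the transpose-and-intersect idea from the answer (the toy matrix and threshold of 2 stand in for the question's data and threshold of 50; for a genuinely large sparse matrix, scipy.sparse with the same X.T @ X product would scale better):

```python
import numpy as np

# toy binary sample-by-feature matrix (rows = samples, columns = features)
X = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])

# entry (i, j) counts the samples that have both feature i and feature j
cooc = X.T @ X

threshold = 2  # would be 50 in the question
# upper triangle only, so each unordered pair is reported once
i_idx, j_idx = np.nonzero(np.triu(cooc, k=1) >= threshold)
pairs = list(zip(i_idx.tolist(), j_idx.tolist()))
print(pairs)  # [(0, 1), (1, 2)]
```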
43,213,086 | 2017-04-04T16:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,oop | 43,213,652 | 3 | false | 0 | 0 | Your question is asking about a concept called "dependency injection." You should take some time to read up on it. It details the ways of making one object available to another object that wants to interact with it. While that's too broad to write up here, here are some of the basics:
You could have all objects you care about be global, or contained in a global container. They can all see each other and interact with each other as necessary. This isn't very object-oriented, and is not the best practice. It's brittle (all the objects are tightly bound together, and it's hard to change or replace one), and it's not a good design for testing. But, it is a valid option, if not a good one.
You could have objects that care about each other be passed to each other. This would be the responsibility of something outside of all of the objects (in your case, basically your main function). You can pass the objects in every method that cares (e.g. board.verify_player_position(player1)). This works well, but you may find yourself passing the same parameter into almost every function. Or you could set the parameter either through a set call (e.g. board.set_player1(player1)), or in the constructor (e.g. board = Board(player1, player2)). Any of these are functionally pretty much the same, it's just a question of what seems to flow best for you. You still have the objects pretty tightly bound. That may be unavoidable. But at least testing is easier. You can make stub classes and pass them in to the appropriate places. (Remember that python works well with duck typing: "If it walks like a duck and quacks like a duck, then it's a duck." You can make testing code that has the same functions as your board and player class, and use that to test your functions.)
A frequent pattern is to have these objects be fairly dumb, and to have a "game_logic" or some other kind of controller. This would be given the instances of the board and the two players, and take care of maintaining all of the rules of the game. Then your main function would basically create the board and players, and simply pass them into your controller instance. If you went in this direction, I'm not sure how much code you would need your players or board to have, so you may not like this style.
There are other patterns that will do this, as well, but these are some of the more basic.
To answer your direct questions: yes, the error you're seeing is because you're trying to invoke the class function, and you need it to be on an object. And yes, instantiating in that case would be bad. But no, passing an instance of one class to another is not a bad thing. There's no use in having objects if they don't interact with something; most objects will need to interact with some other object at some point.
You mentioned that you have code available, but it's a good thing to think out your object interactions a little bit before getting too into the coding. So that's the question for you: do you want player1.check_valid_position(board), or board.check_player(player1), or rules.validate_move(player, some_kind_of_position_variable)? They're all valid, and they all have the objects interrelate; it's just a question of which makes the most sense to you to write. | 1 | 1 | 0 |
I've got three files: main.py, board.py and player.py. Board and player each only hold a class Player and Board, main simply starts the game.
However I'm struggling with validating player positions when they are added to the board. What I want is to instantiate the board and consecutively new player object(s) in main.py, but check the board size in player.py when a new player is added to the board, to ensure the player is not outside of bounds upon creation.
As it is now I'm getting a TypeError (getX() missing 1 required positional argument: 'self') when attempting to access the board's size inside of player.py.
Most likely because the board isn't instantiated in that scope. But if I instantiate it in the scope that will be counted as a new object, won't it? And if I pass the board to the player as a variable that would surely be counted as bad practice, wouldn't it?
So how do I go about accessing the instance variables of one class from another class? | Accessing variable of class-object instantiated in other file | 0 | 0 | 0 | 305 |
43,214,668 | 2017-04-04T18:13:00.000 | 0 | 0 | 0 | 0 | python,selenium | 43,214,788 | 1 | true | 0 | 0 | You need to wait after the action before you validate. Web drivers provide "wait conditions" that you can use to validate you reached your desired checkpoint before performing other validation operations.
We are not talking about system waits, we are talking about wait conditions where the driver can poll the browser until a given condition is met. There are many conditions and many options for these, and you will need them fairly universally to accomplish your goals. That is why I'm not providing an example. | 1 | 0 | 0 | I'm new to selenium. I have this situation and i don't know how to solve it:
If i use direct link in driver.get() i can find and count elements w/o problems using:
element.driver.find_elements_by_xpath();
print(len(element))
I get correct printed result
if I use home page instead in driver.get():
locate search button;
send keys and submit;
element.driver.find_elements_by_xpath();
print(len(element))
Test is passed but result is 0. Any idea what I'm doing wrong? | Python Selenium: Geting different results using driver.find_elements | 1.2 | 0 | 1 | 82 |
43,215,312 | 2017-04-04T18:48:00.000 | 0 | 0 | 1 | 0 | python-3.x | 43,215,354 | 3 | false | 0 | 0 | I think you need to use a regex to solve your problem. | 1 | 0 | 0 | I was wondering if there was a way to use a .split() function to split a string up using 2 parameters.
For example in the maths equation:
x^2+6x-9
Is it possible to split it using the + and -?
So that it ends up as the list:
[x^2, 6x, 9] | How to split a string using 2 parameters - python 3.5 | 0 | 0 | 0 | 160 |
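A concrete sketch of the regex suggestion from the answer, using re.split with a character class covering both operators:

```python
import re

expr = "x^2+6x-9"
terms = re.split(r"[+-]", expr)
print(terms)  # ['x^2', '6x', '9']
```

Note that a leading sign produces an empty first element (re.split(r"[+-]", "-x") gives ['', 'x']), so filter out empty strings if the expression can start with an operator.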
43,215,443 | 2017-04-04T18:55:00.000 | 1 | 0 | 0 | 0 | python,influxdb,grafana | 52,570,244 | 2 | false | 0 | 0 | I believe this is currently available via kapacitor, but assume a more elegant solution will be readily accomplished using FluxQL.
Consuming the influxdb measurements into kapacitor will allow you to force equivalent time buckets and present the data once normalized. | 2 | 0 | 1 | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to to do so with a large amount of data points? Thanks. | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 0.099668 | 1 | 0 | 569 |
43,215,443 | 2017-04-04T18:55:00.000 | 0 | 0 | 0 | 0 | python,influxdb,grafana | 43,306,424 | 2 | false | 0 | 0 | I can confirm from my grafana instance that it's not possible to add a shift to one timeseries and not the other in one panel.
To change the timestamp, I'd just simply do it the obvious way. Load a few thousand entries at a time into Python, change the timestamps and write them to a new measurement (and indicate the shift in the measurement name). | 2 | 0 | 1 | So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels in Grafana and adding a delay to one, but this doesn't give a good representation as the graphs are not on the same panel so it is more difficult to see the differences. I am currently working on a script to copy the databases in question and alter the timestamps so that the two newly created databases look like the data was taken at the same time. I am wondering if anyone has any idea how to change the timestamp, and if so, what would be the best way to do so with a large amount of data points? Thanks. | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 0 | 1 | 0 | 569 |
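The shift step itself can be sketched in plain Python; the point format here is an assumption, and the actual InfluxDB read/write client calls are deliberately omitted:

```python
from datetime import datetime, timedelta

def shift_points(points, offset):
    """Return copies of the points with their timestamps moved by `offset`."""
    return [{**p, "time": p["time"] + offset} for p in points]

points = [{"measurement": "cpu", "time": datetime(2017, 1, 1), "value": 0.5}]
shifted = shift_points(points, timedelta(days=30))
print(shifted[0]["time"])  # 2017-01-31 00:00:00
```

Processing a few thousand points per batch this way keeps memory bounded, as the answer suggests.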
43,217,058 | 2017-04-04T20:26:00.000 | 1 | 0 | 0 | 0 | wxpython | 43,230,207 | 1 | true | 0 | 1 | The easiest solutions (without seeing the code) would probably be to
1. Bind the frame to EVT_KILL_FOCUS and then call frame.SetFocus() from the bound event. The downside is that having multiple widgets on that frame can complicate things as you would have to bind to each widget. To get the frame that has the focus call wx.GetActiveWindow().
2. Bind the other windows to EVT_ACTIVATE and then call frame.SetFocus() to re-activate the correct frame.
3. Try to call frame.ShowWithoutActivating on the other frames that you are showing to prevent them from receiving the focus.
4. Some combination of the above | 1 | 1 | 0 | I can keep a frame on top of a parent using style:
wx.FRAME_FLOAT_ON_PARENT
But it loses focus if this parent opens other child windows.
Is there a way to keep it on top of all windows of this given application?
I cannot use wx.STAY_ON_TOP because when I Alt-Tab to other process it's always on top. | How do I keep wx.Frame on top not only it's parent frame but all other child frames parent opens? | 1.2 | 0 | 0 | 489 |
43,217,916 | 2017-04-04T21:22:00.000 | 8 | 0 | 0 | 0 | python,pandas | 43,217,958 | 3 | false | 0 | 0 | Your data is stored with the precision corresponding to your dtype (np.float16, np.float32, np.float64).
pd.options.display.precision - allows you to change the precision for printing the data | 1 | 24 | 1 | By default the numerical values in data frame are stored up to 6 decimals only. How do I get the full precision.
For example
34.98774564765 is stored as 34.987746. I do want the full value.
and 0.00000565 is stored as 0. .
Apart from applying formats to each data frame, is there any global setting that helps preserve the precision?
Thanks | Pandas data precision | 1 | 0 | 0 | 80,695 |
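A quick demonstration of the point made in the answer: the underlying float64 values keep their full precision, and display.precision only affects printing (the values are the question's own examples):

```python
import pandas as pd

pd.set_option("display.precision", 12)  # printing precision only
df = pd.DataFrame({"x": [34.98774564765, 0.00000565]})

print(df)               # printed using the higher display precision
print(df["x"].iloc[0])  # the stored value is still the full float64
```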
43,219,217 | 2017-04-04T23:10:00.000 | 0 | 1 | 0 | 1 | python,windows,executable | 43,219,241 | 2 | false | 0 | 0 | You don't have shell scripts on Windows, you have batch or powershell.
If your reading is teaching Unix things, get a virtual machine running (insert popular Linux distribution here).
Regarding python, you just execute python script.py | 1 | 0 | 0 | I'm reading from a "bookazine" which I purchased from WHSmiths today and its said
during the setup I need to type in these commands into the terminal (or the Command Prompt in my case) in order to make a script without needing to do it manually. One of these commands is chmod +x (file name) but because this is based of Linux or Mac and I am on Windows I am not sure how to make my script executable, how do I?
Thanks in advance. | How would I go about making a Python script into an executable? | 0 | 0 | 0 | 492 |
43,219,641 | 2017-04-04T23:54:00.000 | 0 | 0 | 0 | 0 | python,pandas | 43,219,694 | 2 | false | 0 | 0 | I get that the mean of that particular group is NAN when a NAN value
is present
FALSE! :)
the mean will only consider non null values. You are safe my man. | 1 | 0 | 1 | I have a dataset consisting of multiple columns and I want to calculate the average by using the groupby function in Python. However, since some of the values are NAN I get that the mean of that particular group is NAN when a NAN value is present. I would like to omit this value, not set it to zero or fill it with any statistical variable, just omit.
Any idea how I can achieve this?
Thanks in advance! | How to Omit NaN values when applying groupyby in Pandas | 0 | 0 | 0 | 266 |
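A quick check of the skip-NaN behavior described in the answer above, on toy data (mean skips NaN by default, via skipna=True):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, np.nan, 3.0]})
means = df.groupby("g")["v"].mean()
print(means)  # group "a" averages to 1.0: the NaN is simply ignored
```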
43,219,679 | 2017-04-04T23:58:00.000 | 1 | 0 | 1 | 1 | python,ubuntu,anaconda,navigator | 53,868,885 | 5 | false | 0 | 0 | I had the same issue when I installed the OpenCV library using conda. Most probably downgrading something caused this issue. Just type:
conda update --all | 2 | 6 | 1 | I have recently installed Anaconda for Python 3.6, but it shows the error "Segmentation fault" whenever I try to run Anaconda-Navigator.
I've tried just typing Anaconda-Navigator in the terminal and also going to my Anaconda3 folder and trying to execute it inside bin.
The only solution that works so far is accessing the previously mentioned bin folder as root. My problem is that I need to activate TensorFlow before I run anything in my console, but that is impossible as a root user.
I've already tried to upgrade both Anaconda and Navigator and to reinstall them, but nothing occurs.
Does anyone here have any idea what is happening? | Segmentation fault when I try to run Anaconda Navigator | 0.039979 | 0 | 0 | 9,783 |
43,219,679 | 2017-04-04T23:58:00.000 | 0 | 0 | 1 | 1 | python,ubuntu,anaconda,navigator | 47,718,983 | 5 | false | 0 | 0 | I had the same problem.I solved it by adding /lib to mt LD_LIBRARY_PATH.
Note: On my system the Anaconda installation path is /home/pushyamik/anaconda3. | 2 | 6 | 1 | I have recently installed Anaconda for Python 3.6, but it shows the error "Segmentation fault" whenever I try to run Anaconda-Navigator.
I've tried just typing Anaconda-Navigator in the terminal and also going to my Anaconda3 folder and trying to execute it inside bin.
The only solution that works so far is accessing the previously mentioned bin folder as root. My problem is that I need to activate TensorFlow before I run anything in my console, but that is impossible as a root user.
I've already tried to upgrade both Anaconda and Navigator and to reinstall them, but nothing occurs.
Does anyone here have any idea what is happening? | Segmentation fault when I try to run Anaconda Navigator | 0 | 0 | 0 | 9,783 |