Dataset schema: qid (int64, 46k – 74.7M), question (string, 54 – 37.8k chars), date (string, 10 chars), metadata (list, 3 items), response_j (string, 29 – 22k chars), response_k (string, 26 – 13.4k chars), __index_level_0__ (int64, 0 – 17.8k)
62,374,607
I have a list 2,3,4,3,5,9,4,5,6. I want to iterate over the list until I get the first highest number that is followed by a lower number, then iterate over the rest of the numbers until I get the lowest number followed by a higher number, then the next highest number that is followed by a lower number, and so on. The result I want is 2,4,3,9,4,6. Here is my last attempt; I seem to be going round in circles. ``` #!/usr/bin/env python value = [] high_hold = [0] low_hold = [20] num = [4,5,20,9,8,6,2,3,5,10,2,] def high(): for i in num: if i > high_hold[-1]: high_hold.append(i) def low(): for i in num: if i < low_hold[-1]: low_hold.append(i) high() a = high_hold[-1] value.append(a) high_hold = high_hold[1:] b = len(high_hold) -1 num = num[b:] low() c = len(low_hold) -1 num = num[c:] value.append(b) print('5: ', value, '(this is what we want)') print(num) high_hold = [0] def high(): for i in num: if i > high_hold[-1]: high_hold.append(i) high() a = high_hold[-1] print(a) print('1: ', high_hold, 'high hold') ```
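A minimal sketch of the alternating turning-point logic the question describes: keep the first value, then every local peak (higher than both neighbours) or valley (lower than both), then the last value. The helper name `turning_points` is hypothetical, not from the original attempt.

```python
def turning_points(nums):
    # Keep the first value, every local peak or valley, and the last value.
    result = [nums[0]]
    for i in range(1, len(nums) - 1):
        peak = nums[i - 1] < nums[i] > nums[i + 1]
        valley = nums[i - 1] > nums[i] < nums[i + 1]
        if peak or valley:
            result.append(nums[i])
    result.append(nums[-1])
    return result

print(turning_points([2, 3, 4, 3, 5, 9, 4, 5, 6]))  # [2, 4, 3, 9, 4, 6]
```

This reproduces the desired output 2,4,3,9,4,6 for the example list, assuming the first and last elements are always wanted.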
2020/06/14
[ "https://Stackoverflow.com/questions/62374607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13744765/" ]
You need to add the new user ONLY after you have checked all of them. Instead you have it in the middle of your for loop, so it's going to add it over and over. Try this: ``` var doesExistFlag = false; for (let i = 0; i < this.users.length; i++) { if (this.users[i].user == this.adminId) { doesExistFlag = true; } } if(!doesExistFlag) this.users.push({user: this.adminId, permissions: this.g.admin_rights.value}); ``` An even better solution would be to generate a random ID based on a timestamp.
`id` is the name of the field ...which is never being compared to. `adminId` probably should be `userId`, for the sake of readability. Frankly speaking, though, just sort it on the server side already.
13,264
58,031,373
I have a queue of 500 processes that I want to run through a python script, I want to run every N processes in parallel. What my python script does so far: It runs N processes in parallel, waits for all of them to terminate, then runs the next N files. What I need to do: When one of the N processes is finished, another process from the queue is automatically started, without waiting for the rest of the processes to terminate. Note: I do not know how much time each process will take, so I can't schedule a process to run at a particular time. Following is the code that I have. I am currently using subprocess.Popen, but I'm not limited to its use. ``` for i in range(0, len(queue), N): batch = [] for _ in range(int(jobs)): batch.append(queue.pop(0)) for process in batch: p = subprocess.Popen([process]) ps.append(p) for p in ps: p.communicate() ```
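One standard-library way to get the behaviour described (a new process starts as soon as any of the N running ones exits) is a thread pool of size N where each worker blocks on one subprocess. This is a hedged sketch, not the asker's code; the queued commands here are harmless stand-ins for the real 500 jobs.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

N = 4  # how many subprocesses may run at once

def run(cmd):
    # Each worker thread blocks on one child process; when the child
    # exits, the pool immediately hands this worker the next command.
    return subprocess.call(cmd)

# Stand-in for the real queue of commands.
queue = [[sys.executable, "-c", "print('done')"] for _ in range(10)]

with ThreadPoolExecutor(max_workers=N) as pool:
    return_codes = list(pool.map(run, queue))

print(return_codes)  # one exit code per command, in queue order
```

Threads work here because each worker spends its time blocked in `subprocess.call`, so the GIL is not a bottleneck.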
2019/09/20
[ "https://Stackoverflow.com/questions/58031373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11597586/" ]
Advanced PDF Template is not yet supported in SuiteBundle.
Update: I noticed that when I create a new advanced template by customizing a standard one, I can see the new template in the bundle creation process. If I start from a "saved search", I don't... It is weird, isn't it?
13,267
60,740,554
I try to implement Apache Airflow with the CeleryExecutor. For the database I use Postgres, for the celery message queue I use Redis. When using LocalExecutor everything works fine, but when I set the CeleryExecutor in the airflow.cfg and want to set the Postgres database as the result\_backend ``` result_backend = postgresql+psycopg2://airflow_user:*******@localhost/airflow ``` I get this error when running the Airflow scheduler no matter which DAG I trigger: ``` [2020-03-18 14:14:13,341] {scheduler_job.py:1382} ERROR - Exception when executing execute_helper Traceback (most recent call last): File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__ return obj.__dict__[self.__name__] KeyError: 'backend' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1380, in _execute self._execute_helper() File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1441, in _execute_helper if not self._validate_and_run_task_instances(simple_dag_bag=simple_dag_bag): File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1503, in _validate_and_run_task_instances self.executor.heartbeat() File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/base_executor.py", line 130, in heartbeat self.trigger_tasks(open_slots) File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 205, in trigger_tasks cached_celery_backend = tasks[0].backend File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/local.py", line 146, in __getattr__ return getattr(self._get_current_object(), name) File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/task.py", line 1037, in backend return self.app.backend File 
"<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__ value = obj.__dict__[self.__name__] = self.__get(obj) File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 1227, in backend return self._get_backend() File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 944, in _get_backend self.loader) File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 74, in by_url return by_name(backend, loader), url File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 60, in by_name backend, 'is a Python module, not a backend class.')) celery.exceptions.ImproperlyConfigured: Unknown result backend: 'postgresql'. Did you spell that correctly? ('is a Python module, not a backend class.') ``` The exact same parameter to direct to the database works ``` sql_alchemy_conn = postgresql+psycopg2://airflow_user:*******@localhost/airflow ``` Setting Redis as the celery result\_backend works, but I read it is not the recommended way. ``` result_backend = redis://localhost:6379/0 ``` Does anyone see what I am doing wrong?
2020/03/18
[ "https://Stackoverflow.com/questions/60740554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4296244/" ]
You need to add the `db+` prefix to the database connection string: ```py f"db+postgresql+psycopg2://{user}:{password}@{host}/{database}" ``` This is also mentioned in the docs: <https://docs.celeryproject.org/en/stable/userguide/configuration.html#database-url-examples>
You need to add the `db+` prefix to the database connection string: ``` result_backend = db+postgresql://airflow_user:*******@localhost/airflow ```
13,268
5,556,360
I'm having a problem getting matplotlib to work in Ubuntu 10.10. First I installed matplotlib using apt-get, and later I found that the version is 0.99 and some examples on the official site just won't work. Then I downloaded the 1.0.1 version and installed it without uninstalling the 0.99 version. To make the situation more specific, here is the configuration: ``` BUILDING MATPLOTLIB matplotlib: 1.0.1 python: 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5] platform: linux2 REQUIRED DEPENDENCIES numpy: 1.6.0b1 freetype2: 12.2.6 OPTIONAL BACKEND DEPENDENCIES libpng: 1.2.44 Tkinter: no * Using default library and include directories for * Tcl and Tk because a Tk window failed to open. * You may need to define DISPLAY for Tk to work so * that setup can determine where your libraries are * located. Tkinter present, but header files are not * found. You may need to install development * packages. wxPython: no * wxPython not found pkg-config: looking for pygtk-2.0 gtk+-2.0 * Package pygtk-2.0 was not found in the pkg-config * search path. Perhaps you should add the directory * containing `pygtk-2.0.pc' to the PKG_CONFIG_PATH * environment variable No package 'pygtk-2.0' found * Package gtk+-2.0 was not found in the pkg-config * search path. Perhaps you should add the directory * containing `gtk+-2.0.pc' to the PKG_CONFIG_PATH * environment variable No package 'gtk+-2.0' found * You may need to install 'dev' package(s) to * provide header files. Gtk+: no * Could not find Gtk+ headers in any of * '/usr/local/include', '/usr/include', '.' Mac OS X native: no Qt: no Qt4: no Cairo: 1.8.8 OPTIONAL DATE/TIMEZONE DEPENDENCIES datetime: present, version unknown dateutil: 1.4.1 pytz: 2010b OPTIONAL USETEX DEPENDENCIES dvipng: no ghostscript: 8.71 latex: no pdftops: 0.14.3 [Edit setup.cfg to suppress the above messages] ``` Now I can import matplotlib, but once I run the example code, it just terminates and I get no results. 
I tried a 'clean install' several times, which means I deleted all the files, including .matplotlib and the matplotlib directory under dist-packages, but I still can't get things done. What makes it weirder is that after I reinstall the 0.99 version, it works pretty well. Any ideas?
2011/04/05
[ "https://Stackoverflow.com/questions/5556360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/693500/" ]
Ben Gamari has [packaged](https://launchpad.net/~bgamari/+archive/matplotlib-unofficial) matplotlib 1.0 for Ubuntu.
Try installing it with `pip`: ``` sudo apt-get install python-pip sudo pip install matplotlib ``` I just tested this and it should install matplotlib 1.0.1.
13,269
45,803,713
Presently we have a big-data cluster built using Cloudera virtual machines. By default the Python version on the VM is 2.7. For one of my programs I need Python 3.6. My team is very skeptical about two installations and afraid of breaking the existing cluster/VM. I was planning to follow this article and install both versions: <https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4> Is there a way I can package the Python 3.6 version in my project, and set the Python home path to my project folder, so that no installation needs to be done on the existing virtual machine? Since we have to download Python and build it from source for the Unix version, I want to skip this part on the VM, and instead ship the folder which has Python 3.6
2017/08/21
[ "https://Stackoverflow.com/questions/45803713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864598/" ]
It seems that [`miniconda`](https://conda.io/miniconda.html) is what you need. Using it you can manage multiple Python environments with different versions of Python. **to install miniconda3 just run:** ----------------------------------- ``` # this will download & install miniconda3 on your home dir wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh chmod +x Miniconda3-latest-Linux-x86_64.sh ./Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3 ``` **then, create a new python3.6 env:** ----------------------------------- ``` conda create -y -n myproject 'python>3.6' ``` **now, enter the new python3.6 env** ------------------------------------ ``` source activate myproject python3 ``` `miniconda` can also install Python packages, including pip packages and compiled packages. You can also copy envs from one machine to another. I encourage you to take a deeper look into it.
ShmulikA's suggestion is pretty good. Here I'd like to add another one - I use Python 2.7.x, but for a few prototypes I had to go with Python 3.x. For this I used the **`pyenv`** utility. Once installed, all you have to do is: ``` pyenv install 3.x.x ``` You can list all the installed Python versions: ``` pyenv versions ``` To use a specific version, while at the project root, execute the following: ``` pyenv local 3.x.x ``` It'll create a file .python-version at the project root, having the version as its content: ``` [nahmed@localhost ~]$ cat some-project/.python-version 3.5.2 ``` Example: ``` [nahmed@localhost ~]$ pyenv versions * system (set by /home/nahmed/.pyenv/version) 3.5.2 3.5.2/envs/venv_scrapy venv_scrapy [nahmed@localhost ~]$ pyenv local 3.5.2 [nahmed@localhost ~]$ pyenv versions system * 3.5.2 (set by /home/nahmed/.python-version) 3.5.2/envs/venv_scrapy venv_scrapy ``` --- I found it very simple to use. Here's a [post](http://devopspy.com/python/pyenv-setup/) regarding the installation and basic usage (blog post by me). --- For the part: > > Since we have to download python and build source for the Unix > version, I want to skip this part on VM, and instead ship the folder > which has Python 3.6 > > > You might look into ways to embed a Python interpreter with your Python application: > > And for both Windows and Linux, there's [**bbfreeze**](https://pypi.python.org/pypi/bbfreeze/) or also [**pyinstaller**](http://www.pyinstaller.org/) > > > from - [SOAnswer](https://stackoverflow.com/questions/2441172/embed-python-interpreter-in-a-python-application/2441184#2441184).
13,271
66,588,659
I have a variable that saves the user's input. If the user inputs a list, e.g. `["oranges","apples","pears"]`, Python seems to take this as a string and prints every character, instead of every word, which the code would print if fruit were simply a list. How do I get the code to do this? Here is what I've tried... ``` fruit = input("What is your favourite fruit?") fruit = list(fruit) for i in fruit: print(i) ```
2021/03/11
[ "https://Stackoverflow.com/questions/66588659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Python takes input as one huge string, so instead of being a list it is just a string that looks like this ```py '["oranges","apples","pears"]' ``` Turning this into a list with `list()` will just look like ```py ['[', '"', 'o', 'r', 'a', 'n', 'g', 'e', 's', '"', ',', '"', 'a', 'p', 'p', 'l', 'e', 's', '"', ',', '"', 'p', 'e', 'a', 'r', 's', '"', ']'] ``` Instead of this, try something like the following, which asks for favourite fruits until you enter an empty line ``` Fruits = [] while True: temp = input() if temp == "": break else: Fruits.append(temp) ``` and then output the values ``` for x in Fruits: print(x) ```
You will have to split the input string into a list on the commas: ``` fruit = input("What is your favourite fruit?") fruit = fruit.split(",") for i in fruit: print(i) ```
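If the input really is typed as a Python-style list literal (brackets and quotes included), another standard-library option is `ast.literal_eval`, which safely parses it into a real list. This assumes that exact input format, which neither answer above requires.

```python
import ast

user_text = '["oranges","apples","pears"]'  # what input() would return
fruit = ast.literal_eval(user_text)         # safely parse the literal
for item in fruit:
    print(item)
```

Unlike `eval`, `ast.literal_eval` only accepts literals (strings, numbers, lists, dicts, ...), so untrusted input cannot execute arbitrary code.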
13,272
62,616,736
I've been using the Fermipy conda environment on Python 2.7.14 64-bit on macOS Catalina 10.15.5 and overnight received the error "r.start is not a function" when trying to connect to the Jupyter server through VSCode (if I try in Jupyter Notebook/Lab the server instantly dies). I had a bunch of clutter on my system, so I ended up formatting it and reinstalling all the needed dependencies (such as Conda through Homebrew, Fermitools through Conda and Fermipy through the install script on their site), but I still get the same error, although I was previously running Python scripts just fine. It gives me no other error or output; if it did I would attach it here. [This is the error I get.](https://i.stack.imgur.com/mW02y.png) Edit: I get the same error using any version of Python 2.7.XX, but not for Python 3.7.XX.
2020/06/27
[ "https://Stackoverflow.com/questions/62616736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13820618/" ]
As answered here, <https://github.com/microsoft/vscode-python/issues/12355#issuecomment-652515770> VSCode changed how it launches jupyter kernels, and the new method is incompatible with python 2.7. Add this line to your VSCode settings.json file and restart. ``` "python.experiments.optOutFrom": ["LocalZMQKernel - experiment"] ```
I got the same message (r.start is not a function). I had an old uninstalled version of Anaconda on the computer which had left behind a folder containing its Python version. Jupyter was supposed to be running from the new venv after setting both the Python and Jupyter paths in VSCode. I fully deleted the remaining files from the old Anaconda install; the message went away and the notebook ran fine. Maybe try getting rid of all conda stuff, then pip install jupyter and anything else you need.
13,273
72,285,267
I have the dictionaries below, with the IP addresses of each application for the respective regions. Region is a user-input variable; based on the input I need to process the IPs in the rest of my script. ``` app_list=["puppet","dns","ntp"] dns={'apac':["172.118.162.93","172.118.144.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]} ntp={'asia':["172.118.162.93","172.118.148.93"],'euro':["172.118.76.93","172.118.204.93","172.118.236.93"],'cana':["172.118.48.93","172.118.172.93"]} puppet={'asia':["172.118.162.2251","1932.1625.254.2493"],'euro':["172.118.76.21","1932.1625.254.2493"],'cana':["172.118.76.21","193.1625.254.249"]} ``` ***Code Tried*** ``` region=raw_input("entee the region:") for appl in app_list: for ip in appl[region]: <<<rest of operations with IP in script>> ``` When I tried the above code, I got the error mentioned below. I tried to convert the str object to a dict using the json and ast modules but still did not succeed. **Error received** ``` TypeError: string indices must be integers, not str ``` As I'm new to Python, I'm unsure how to get the IP list based on the region
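A hedged sketch of the lookup the question seems to be after: the `TypeError` comes from indexing a string (`appl[region]` where `appl` is `"dns"`), so map each name to its dict first. The dicts here are shortened stand-ins for the question's data; note the original dicts use inconsistent region keys ('apac' vs 'asia'), which is why `.get` with a default is used.

```python
# Shortened stand-ins for the dicts in the question.
dns = {'euro': ["172.118.76.93", "172.118.204.93"], 'apac': ["172.118.162.93"]}
ntp = {'euro': ["172.118.76.93"], 'asia': ["172.118.162.93"]}
puppet = {'euro': ["172.118.76.21"], 'asia': ["172.118.162.225"]}

# Map application names (strings) to the actual dict objects.
apps = {"puppet": puppet, "dns": dns, "ntp": ntp}

region = "euro"  # stand-in for the raw_input() value
for name in ["puppet", "dns", "ntp"]:
    # .get avoids a KeyError when a region key is missing for an app.
    for ip in apps[name].get(region, []):
        print(name, ip)
```

The inner loop now iterates over a list of IP strings rather than over the characters of an application name.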
2022/05/18
[ "https://Stackoverflow.com/questions/72285267", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3619226/" ]
``` # use a nested dict comprehension # use enumerate for the index of the list items and add it to the key using f-string {key: {f"{k}.{i}": e for k, v in val.items() for i, e in enumerate(v)} for key, val in my_dict.items()} {'Ka': {'Ka.0': '0.80', 'Ka.1': '0.1', 'Ba.0': '0.50', 'Ba.1': '1.1', 'FC.0': '0.78', 'FC.1': '0.0', 'AA.0': '0.66', 'AA.1': '8.1'}, 'AL': {'AR.0': '2.71', 'AR.1': '7.3', 'KK.0': '10.00', 'KK.1': '90.0'}} ```
``` from collections import defaultdict new = defaultdict(dict) for k, values in d.items(): for sub_key, values in values.items(): for value in values: existing_key_count = sum(1 for existing_key in new[k].keys() if existing_key.startswith(sub_key)) new_key = f"{sub_key}.{existing_key_count}" new[k][new_key] = value ``` > > {'Ka': {'Ka.0': '0.80', > 'Ka.1': '0.1', > 'Ba.0': '0.50', > 'Ba.1': '1.1', > 'FC.0': '0.78', > 'FC.1': '0.0', > 'AA.0': '0.66', > 'AA.1': '8.1'}, > 'AL': {'AR.0': '2.71', 'AR.1': '7.3', 'KK.0': '10.00', 'KK.1': '90.0'}} > > >
13,274
45,732,286
I am working with protein sequences. My goal is to create a convolutional network which will predict three angles for each amino acid in the protein. I'm having trouble debugging a TFLearn DNN model that requires a reshape operation. The input data describes (currently) 25 proteins of varying lengths. To use Tensors I need to have uniform dimensions, so I pad the empty input cells with zeros. Each amino acid is represented by a 4-dimensional code. The details of that are probably unimportant, other than to help you understand the shapes of the Tensors. The output of the DNN is six numbers, representing the sines and cosines of three angles. To create ordered pairs, the DNN graph reshapes a [..., 6] Tensor to [..., 3, 2]. My target data is encoded the same way. I calculate the loss using cosine distance. I built a non-convolutional DNN which showed good initial learning behavior which is very similar to the code I will post here. But that model treated three adjacent amino acids in isolation. I want to treat *each protein* as a unit -- with sliding windows 3 amino acids wide at first, and eventually larger. Now that I am converting to a convolutional model, I can't seem to get the shapes to match. 
Here are the working portions of my code: ``` import tensorflow as tf import tflearn as tfl from protein import ProteinDatabase # don't worry about its details def backbone_angle_distance(predict, actual): with tf.name_scope("BackboneAngleDistance"): actual = tfl.reshape(actual, [-1,3,2]) # Supply the -1 argument for axis that TFLearn can't pass loss = tf.losses.cosine_distance(predict, actual, -1, reduction=tf.losses.Reduction.MEAN) return loss # Training data database = ProteinDatabase("./data") inp, tgt = database.training_arrays() # DNN model, convolution only in topmost layer for now net = tfl.input_data(shape=[None, None, 4]) net = tfl.conv_1d(net, 24, 3) net = tfl.conv_1d(net, 12, 1) net = tfl.conv_1d(net, 6, 1) net = tfl.reshape(net, [-1,3,2]) net = tf.nn.l2_normalize(net, dim=2) net = tfl.regression(net, optimizer="sgd", learning_rate=0.1, \ loss=backbone_angle_distance) model = tfl.DNN(net) # Generate a prediction. Compare shapes for compatibility. out = model.predict(inp) print("\ninp : {}, shape = {}".format(type(inp), inp.shape)) print("out : {}, shape = {}".format(type(out), out.shape)) print("tgt : {}, shape = {}".format(type(tgt), tgt.shape)) print("tgt shape, if flattened by one dimension = {}\n".\ format(tgt.reshape([-1,3,2]).shape)) ``` The output at this point is: ``` inp : <class 'numpy.ndarray'>, shape = (25, 543, 4) out : <class 'numpy.ndarray'>, shape = (13575, 3, 2) tgt : <class 'numpy.ndarray'>, shape = (25, 543, 3, 2) tgt shape, if flattened by one dimension = (13575, 3, 2) ``` So if I reshape the 4D Tensor **tgt**, flattening the outermost dimension, **out** and **tgt** should match. Since TFLearn's code makes the batches, I try to intercept and reshape the Tensor **actual** in the first line of backbone\_angle\_distance(), my custom loss function. 
If I add a few lines to attempt model fitting as follows: ``` e, b = 1, 5 model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True) ``` I get the following extra output and error: ``` --------------------------------- Run id: EEG6JW Log directory: /tmp/tflearn_logs/ --------------------------------- Training samples: 20 Validation samples: 5 -- -- Traceback (most recent call last): File "exp54.py", line 252, in <module> model.fit(inp, tgt, n_epoch=e, batch_size=b, validation_set=0.2, show_metric=True) File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit callbacks=callbacks) File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit show_metric) File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train feed_batch) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run run_metadata_ptr) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (5, 543, 3, 2) for Tensor 'TargetsData/Y:0', which has shape '(?, 3, 2)' ``` Where in my code am I SPECIFYING that TargetsData/Y:0 has shape (?, 3, 2)? I know it won't be. According to the traceback, I never actually seem to reach my reshape operation in backbone\_angle\_distance(). Any advice is appreciated, thanks!
2017/08/17
[ "https://Stackoverflow.com/questions/45732286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9376487/" ]
There is a newer plugin (released about a year ago) called [chartjs-plugin-piechart-outlabels](https://www.npmjs.com/package/chartjs-plugin-piechart-outlabels). Just import the source `<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-piechart-outlabels"></script>` and use it with the outlabeledPie type ``` var randomScalingFactor = function() { return Math.round(Math.random() * 100); }; var ctx = document.getElementById("chart-area").getContext("2d"); var myDoughnut = new Chart(ctx, { type: 'outlabeledPie', data: { labels: ["January", "February", "March", "April", "May"], ... plugins: { legend: false, outlabels: { text: '%l %p', color: 'white', stretch: 45, font: { resizable: true, minSize: 12, maxSize: 18 } } } }) ```
The real problem lies with the overlapping of the labels when the slices are small. You can use [PieceLabel.js](https://emn178.github.io/Chart.PieceLabel.js/samples/demo/), which solves the issue of overlapping labels by hiding them. You mentioned that you **cannot hide labels**, so use legends, which will display the names of all slices. Or if you want the exact behavior you can go with [highcharts](https://www.highcharts.com/demo/pie-basic), but it requires a licence for commercial use. ```js var randomScalingFactor = function() { return Math.round(Math.random() * 100); }; var ctx = document.getElementById("chart-area").getContext("2d"); var myDoughnut = new Chart(ctx, { type: 'pie', data: { labels: ["January", "February", "March", "April", "May"], datasets: [{ data: [ 250, 30, 5, 4, 2, ], backgroundColor: ['#ff3d67', '#ff9f40', '#ffcd56', '#4bc0c0', '#999999'], borderColor: 'white', borderWidth: 5, }] }, showDatapoints: true, options: { tooltips: { enabled: false }, pieceLabel: { render: 'label', arc: true, fontColor: '#000', position: 'outside' }, responsive: true, legend: { position: 'top', }, title: { display: true, text: 'Testing', fontSize: 20 }, animation: { animateScale: true, animateRotate: true } } }); ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart.min.js"></script> <script src="https://cdn.rawgit.com/emn178/Chart.PieceLabel.js/master/build/Chart.PieceLabel.min.js"></script> <canvas id="chart-area"></canvas> ``` [Fiddle](https://jsfiddle.net/deep3015/zadn9j1j/1/) demo
13,276
58,523,431
`driver.getWindowHandles()` returns a Set, so if we want to choose a window by index, we have to wrap the Set in an ArrayList: ``` var tabsList = new ArrayList<>(driver.getWindowHandles()); var nextTab = tabsList.get(1); driver.switchTo().window(nextTab); ``` In Python we can access windows by index immediately: ``` next_window = driver.window_handles[1] driver.switch_to.window(next_window) ``` What is the purpose of choosing a Set here?
2019/10/23
[ "https://Stackoverflow.com/questions/58523431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11705114/" ]
Window Handles -------------- In a discussion regarding [window-handles](/questions/tagged/window-handles "show questions tagged 'window-handles'"), Simon (the creator of WebDriver) clearly mentioned that: > > While the datatype used for storing the list of handles may be ordered by insertion, the order in which the WebDriver implementation iterates over the window handles to insert them has no requirement to be stable. The ordering is arbitrary. > > > --- Background ---------- In the discussion [What is the difference between Set and List?](https://stackoverflow.com/questions/1035008/what-is-the-difference-between-set-and-list) @AndrewHare explained: [`List<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/List.html) > > An ordered collection (also known as a sequence). The user of this interface has precise control over where in the list each element is inserted. The user can access elements by their integer index (position in the list) and search for elements in the list. > > > [`Set<E>:`](http://docs.oracle.com/javase/8/docs/api/java/util/Set.html) > > A collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction. > > > --- Conclusion ---------- So considering the above definitions, in the presence of multiple window handles, the best possible approach would be to use a **`Set<>`** --- References ---------- You can find a couple of working examples in: * [Best way to keep track and iterate through tabs and windows using WindowHandles using Selenium](https://stackoverflow.com/questions/46251494/best-way-to-keep-track-and-iterate-through-tabs-and-windows-using-windowhandles/46346324#46346324) * [Open web in new tab Selenium + Python](https://stackoverflow.com/questions/28431765/open-web-in-new-tab-selenium-python/51893230#51893230)
One comment: keep in mind that the order of a Set is not fixed, so the usage above may return an arbitrary window rather than the one you expect.
13,281
53,372,966
While executing the following python script using cloud-composer, I get `*** Task instance did not exist in the DB` under the `gcs2bq` task Log in Airflow Code: ``` import datetime import os import csv import pandas as pd import pip from airflow import models #from airflow.contrib.operators import dataproc_operator from airflow.operators.bash_operator import BashOperator from airflow.operators.python_operator import PythonOperator from airflow.utils import trigger_rule from airflow.contrib.operators import gcs_to_bq from airflow.contrib.operators import bigquery_operator print('''/-------/--------/------/ -------/--------/------/''') yesterday = datetime.datetime.combine( datetime.datetime.today() - datetime.timedelta(1), datetime.datetime.min.time()) default_dag_args = { # Setting start date as yesterday starts the DAG immediately when it is # detected in the Cloud Storage bucket. 'start_date': yesterday, # To email on failure or retry set 'email' arg to your email and enable # emailing here. 
'email_on_failure': False, 'email_on_retry': False, # If a task fails, retry it once after waiting at least 5 minutes 'retries': 1, 'retry_delay': datetime.timedelta(minutes=5), 'project_id': 'data-rubrics' #models.Variable.get('gcp_project') } try: # [START composer_quickstart_schedule] with models.DAG( 'composer_agg_quickstart', # Continue to run DAG once per day schedule_interval=datetime.timedelta(days=1), default_args=default_dag_args) as dag: # [END composer_quickstart_schedule] op_start = BashOperator(task_id='Initializing', bash_command='echo Initialized') #op_readwrite = PythonOperator(task_id = 'ReadAggWriteFile', python_callable=read_data) op_load = gcs_to_bq.GoogleCloudStorageToBigQueryOperator( \ task_id='gcs2bq',\ bucket='dr-mockup-data',\ source_objects=['sample.csv'],\ destination_project_dataset_table='data-rubrics.sample_bqtable',\ schema_fields = [{'name':'a', 'type':'STRING', 'mode':'NULLABLE'},{'name':'b', 'type':'FLOAT', 'mode':'NULLABLE'}],\ write_disposition='WRITE_TRUNCATE',\ dag=dag) #op_write = PythonOperator(task_id = 'AggregateAndWriteFile', python_callable=write_data) op_start >> op_load ```
2018/11/19
[ "https://Stackoverflow.com/questions/53372966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6039925/" ]
I have stumbled on this issue also. What helped for me was to do this line: ``` spark.sql("SET spark.sql.hive.manageFilesourcePartitions=False") ``` and then use `spark.sql(query)` instead of using dataframe. I do not know what happens under the hood, but this solved my problem. Although it might be too late for you (since this question was asked 8 months ago), this might help for other people.
I know the topic is quite old, but: 1. I received the same error, but the actual source of the problem was hidden much deeper in the logs. If you're facing the same problem as me, go to the end of your stack trace and you might find the actual reason the job is failing. In my case: a. `org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:865)\n\t... 142 more\nCaused by: MetaException(message:Rate exceeded (Service: AWSGlue; Status Code: 400; Error Code: ThrottlingException ...` which basically means I exceeded the AWS Glue Data Catalog quota **OR**: b. `MetaException(message:1 validation error detected: Value '(<my filtering condition goes here>' at 'expression' failed to satisfy constraint: Member must have length less than or equal to 2048` which means that the filtering condition I put in my dataframe definition is too long. Long story short, dig deep into your logs, because the reason for your error might be really simple; the top message is just far from clear. 2. If you are working with tables that have a huge number of partitions (in my case hundreds of thousands) I would strongly recommend against setting `spark.sql.hive.manageFilesourcePartitions=False`. Yes, it will resolve the issue, but the performance degradation is enormous.
13,284
70,339,321
The decision variable of my optimization problem (which I am aiming to keep linear) is a binary placement vector, where the value in each position is either 0 or 1 (two different possible locations of item i). One component of the objective function is this: [![XOR](https://i.stack.imgur.com/SFvgz.png)](https://i.stack.imgur.com/SFvgz.png) C_T is the cost of transferring N items. k is the iteration in which I am currently solving the problem, and k-1 is the current displacement of items (the result of solving iteration k-1 of the problem). I have an initial condition (k=0). N is "how many positions of x are different between the current displacement (k-1) and the outcome of the optimization problem (the future optimal displacement x^k)". How can I keep this component of the objective function linear? In other words, how can I replace the XOR operator? I thought about using the absolute difference as an alternative, but I'm not sure it will help. Is there a linear way to do this? I will implement this problem with PuLP in Python, so maybe there is something that can help over there as well...
2021/12/13
[ "https://Stackoverflow.com/questions/70339321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11422437/" ]
My notation is: `xprev[i]` is the previous solution and `x[i]` is the current one. I assume `xprev[i]` is a binary constant and `x[i]` is a binary variable. Then we can write ``` sum(i, |xprev[i]-x[i]|) =sum(i|xprev[i]=0, x[i]) + sum(i|xprev[i]=1, 1-x[i]) =sum(i, x[i]*(1-xprev[i]) + (1-x[i])*xprev[i]) ``` Both the second and third lines can be implemented directly in Pulp. Note that | in the second line is 'such that'. --- Below we have a comment that claims this is wrong. So let's write my expression as `B*(1-A)+(1-B)*A`. The following truth table can be constructed: ``` A B A xor B B*(1-A)+(1-B)*A 0 0 0 0 + 0 0 1 1 1 + 0 1 0 1 0 + 1 1 1 0 0 + 0 ``` Note that `A xor B = A*not(B) + not(A)*B` is a well-known identity. --- Note. Here I used the assumption that `xprev[i]` (or `A`) is a constant so things are linear. If both are (boolean) variables (let's call them x and y), then we need to do something differently. We can linearize the construct `z = x xor y` using four inequalities: ``` z <= x + y z >= x - y z >= y - x z <= 2 - x - y ``` This is now linear can be used inside a MIP solver.
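A brute-force check of these identities over all 0/1 inputs (plain Python; a sketch independent of any solver):

```python
# Verify that b*(1-a) + (1-b)*a reproduces XOR on binary inputs,
# and that the four inequalities force z = x xor y at the 0/1 corners.
for a in (0, 1):
    for b in (0, 1):
        assert b * (1 - a) + (1 - b) * a == (a ^ b)

for x in (0, 1):
    for y in (0, 1):
        feasible = [z for z in (0, 1)
                    if z <= x + y and z >= x - y and z >= y - x and z <= 2 - x - y]
        assert feasible == [x ^ y]

print("all identities check out")
```

Inside an actual MIP model these same relations become linear constraints; the loop above only confirms they are tight at the binary corner points.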
**UPDATE**: If what you need to replace is an XOR gate, then you could use a combination of other gates, which are linear, to replace it. Here are some of them: <https://en.wikipedia.org/wiki/XOR_gate#Alternatives>. Example: `A XOR B = (A OR B) AND (NOT A OR NOT B)`. When A and B are binary, and NOT X is encoded arithmetically as `1 - X`, that translates mathematically to: ``` (A + B - A*B) * ((1-A) + (1-B) - (1-A)*(1-B)) ``` --- Why not use multiplication? ``` AND table 0 0 = 0 0 1 = 0 1 0 = 0 1 1 = 1 ``` ``` Multiplication table 0*0 = 0 0*1 = 0 1*0 = 0 1*1 = 1 ``` I think that does it. If it doesn't, then more details are needed, I suppose.
13,285
65,493,246
I am using python through a secure shell. When I use the pydot and graphviz packages, it shows the error `[Errno 2] dot not found in path`. I searched many solutions. People suggest 'sudo apt install graphviz' or 'sudo apt-get install graphviz'. But when I use 'sudo', it shows 'username is not in the sudoers file.This incident will be reported'. I also tried to add the graphviz folder location to the PATH variable using 'export PATH=$PATH:/..../lib/python3.8/site-packages/graphviz' (the exact path is shown in the picture), but it doesn't work. Could anyone help please? Thank you very much. I added a screenshot. I understand that I need to add a path including 'bin' [enter image description here](https://i.stack.imgur.com/WXtx8.png), but then I didn't find the bin folder. I know what that folder looks like on Windows. When I use FileZilla to check this graphviz folder, it doesn't have this 'bin' folder. I installed graphviz using 'pip3 install graphviz'. When I search "How do I install Graphviz on Linux?", they all say 'sudo .....', which doesn't work for me apparently. Could anyone help please?
2020/12/29
[ "https://Stackoverflow.com/questions/65493246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14644632/" ]
You can use `_source` to limit what's retrieved: ``` POST indexname/_search { "_source": "scores.a/*" } ``` Alternatively, you could employ `script_fields`, which does exactly the same but also leaves room for modifying the values: ``` POST indexname/_search { "script_fields": { "scores_prefixed_with_a": { "script": { "source": """params._source.scores.entrySet() .stream() .filter(e->e.getKey().startsWith(params.prefix)) .collect(Collectors.toMap(e->e.getKey(),e->e.getValue()))""", "params": { "prefix": "a/" } } } } } ```
Use [`.filter()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) - [`.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) on [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) ```js const data = [{"scores": {"a/b": 1.231, "a/c": 23.11, "x/a": 1232.1}}, {"scores": {"a/d": 3.1}}]; const allowed = 'a/'; const res = data.map((d) => Object.keys(d.scores) .filter(key => key.startsWith(allowed))) .reduce((prev, curr) => prev.concat(curr)); console.log(res) ``` ``` [ "a/b", "a/c", "a/d" ] ``` --- --- Original version, with objects instead of just keys: ```js const data = [{"scores": {"a/b": 1.231, "a/c": 23.11, "x/a": 1232.1}}, {"scores": {"a/d": 3.1}}]; const allowed = 'a/'; const res = data.map((d) => Object.keys(d.scores) .filter(key => key.startsWith(allowed)) .reduce((obj, key) => { obj[key] = d.scores[key]; return obj; }, {})); console.log(res); ```
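If only the matching key names are needed, the filter-plus-reduce combination can also be collapsed with `flatMap` (a sketch on the same sample data):

```javascript
const data = [{scores: {"a/b": 1.231, "a/c": 23.11, "x/a": 1232.1}},
              {scores: {"a/d": 3.1}}];

// flatMap merges the per-item key arrays in a single pass
const keys = data.flatMap(d =>
  Object.keys(d.scores).filter(k => k.startsWith("a/")));

console.log(keys);   // [ 'a/b', 'a/c', 'a/d' ]
```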
13,286
10,234,575
I first installed pymongo using easy\_install, that didn't work so I tried with pip and it is still failing. This is fine in the terminal: ``` Macintosh:etc me$ python Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 14:13:39) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import pymongo >>> ``` But on line 10 of my script ``` import pymongo ``` throws the following error: > > File "test.py", line 10, >in <module> > import pymongo > ImportError: No module named pymongo > > > I'm using the standard Lion builds of Apache and Python. Is there anybody else who has experienced this? Thanks EDIT: I should also mention that during install it throws the following error ``` Downloading/unpacking pymongo Downloading pymongo-2.1.1.tar.gz (199Kb): 199Kb downloaded Running setup.py egg_info for package pymongo Installing collected packages: pymongo Running setup.py install for pymongo building 'bson._cbson' extension gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -Ibson -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bson/_cbsonmodule.c -o build/temp.macosx-10.3-fat-2.7/bson/_cbsonmodule.o unable to execute gcc-4.0: No such file or directory command 'gcc-4.0' failed with exit status 1 ************************************************************** WARNING: The bson._cbson extension module could not be compiled. No C extensions are essential for PyMongo to run, although they do result in significant speed improvements. If you are seeing this message on Linux you probably need to install GCC and/or the Python development package for your version of Python. Python development package names for popular Linux distributions include: RHEL/CentOS: python-devel Debian/Ubuntu: python-dev Above is the ouput showing how the compilation failed. 
************************************************************** building 'pymongo._cmessage' extension gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -Ibson -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pymongo/_cmessagemodule.c -o build/temp.macosx-10.3-fat-2.7/pymongo/_cmessagemodule.o unable to execute gcc-4.0: No such file or directory command 'gcc-4.0' failed with exit status 1 ************************************************************** WARNING: The pymongo._cmessage extension module could not be compiled. No C extensions are essential for PyMongo to run, although they do result in significant speed improvements. If you are seeing this message on Linux you probably need to install GCC and/or the Python development package for your version of Python. Python development package names for popular Linux distributions include: RHEL/CentOS: python-devel Debian/Ubuntu: python-dev Above is the ouput showing how the compilation failed. ************************************************************** ``` And then goes on to say ``` Successfully installed pymongo Cleaning up... Macintosh:etc me$ ``` Very odd. 
My sys.path in the script is returned as:

```
['/Library/WebServer/Documents/',
 '/Library/Python/2.7/site-packages/tweepy-1.7.1-py2.7.egg',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload',
 '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC',
 '/Library/Python/2.7/site-packages']
```

And in the interpreter:

```
['',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/SQLObject-1.2.1-py2.7.egg',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/FormEncode-1.2.4-py2.7.egg',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c12dev_r88846-py2.7.egg',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.1-py2.7.egg',
 '/Library/Python/2.7/site-packages/tweepy-1.7.1-py2.7.egg',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload',
 '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages',
 '/Library/Python/2.7/site-packages']
```
2012/04/19
[ "https://Stackoverflow.com/questions/10234575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4533572/" ]
Found it! Required a path append before importing of the pymongo module ``` sys.path.append('/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages') import pymongo ``` Would ideally like to find a way to append this to the pythonpath permanently, but this works for now!
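For reference, a defensive variant of the same fix (the long path below is specific to that machine; substitute your own `site-packages` directory). For a permanent version, the standard mechanisms are the `PYTHONPATH` environment variable or a one-line `.pth` file dropped into a directory that is already on the path:

```python
import sys

# This machine's extra site-packages directory (an assumption; use your own path)
extra = "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages"

if extra not in sys.path:   # guard against appending a duplicate entry
    sys.path.append(extra)
```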
I'm not sure why the last message says "Successfully installed pymongo" but it obviously failed due to the fact that you don't have gcc installed on your system. You need to do the following: RHEL/Centos: sudo yum install gcc python-devel Debian/Ubuntu: sudo apt-get install gcc python-dev Then try and install pymongo again.
13,287
1,941,894
I'm trying to get virtualenv to work on my machine. I'm using python2.6, and after installing pip, and using pip to install virtualenv, running "virtualenv --no-site-packages cyclesg" results in the following: ``` New python executable in cyclesg/bin/python Installing setuptools.... Complete output from command /home/nubela/Workspace/cyclesg...ython -c "#!python \"\"\"Bootstrap setuptoo... " /usr/lib/python2.6/site-packag...6.egg: error: invalid Python installation: unable to open /home/nubela/Workspace/cyclesg_dep/cyclesg/include/multiarch-i386-linux/python2.6/pyconfig.h (No such file or directory) ---------------------------------------- ...Installing setuptools...done. New python executable in cyclesg/bin/python Installing setuptools.... Complete output from command /home/nubela/Workspace/cyclesg...ython -c "#!python \"\"\"Bootstrap setuptoo... " /usr/lib/python2.6/site-packag...6.egg: error: invalid Python installation: unable to open /home/nubela/Workspace/cyclesg_dep/cyclesg/include/multiarch-i386-linux/python2.6/pyconfig.h (No such file or directory) ---------------------------------------- ...Installing setuptools...done. ``` Any idea how I can remedy this? Thanks!
2009/12/21
[ "https://Stackoverflow.com/questions/1941894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236267/" ]
Are you on Mandriva? In order to support multilib (mixing x86/x86_64), Mandriva messes up your Python installation. They patched Python, which breaks virtualenv; instead of fixing Python, they then proceeded to patch virtualenv. This is useless if you are using your own virtualenv installed from pip. Here is the bug: <https://qa.mandriva.com/show_bug.cgi?id=42808>
Are you on a Linux-based system? It looks like virtualenv is trying to build a new python executable but can't find the files to do that. Try installing the `python-dev` package.
13,288
65,465,114
I am new to python programming. Following the AWS learning path: <https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/?trk=el_a134p000003yWILAA2&trkCampaign=DS_SageMaker_Tutorial&sc_channel=el&sc_campaign=Data_Scientist_Hands-on_Tutorial&sc_outcome=Product_Marketing&sc_geo=mult> I am getting an error when executing the following block (in conda_python3): ``` test_data_array = test_data.drop(['y_no', 'y_yes'], axis=1).values #load the data into an array xgb_predictor.content_type = 'text/csv' # set the data type for an inference xgb_predictor.serializer = csv_serializer # set the serializer type predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict! predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array print(predictions_array.shape) ``` > > AttributeError Traceback (most recent call last) > in > 1 test_data_array = test_data.drop(['y_no', 'y_yes'], axis=1).values #load the data into an array > ----> 2 xgb_predictor.content_type = 'text/csv' # set the data type for an inference > 3 xgb_predictor.serializer = csv_serializer # set the serializer type > 4 predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict! > 5 predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array > > > > > AttributeError: can't set attribute > > > I have looked at several prior questions but couldn't find much information related to this error when it comes to creating data types. Thanks in advance for any help.
2020/12/27
[ "https://Stackoverflow.com/questions/65465114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2601359/" ]
If you just remove it, the prediction will work. Therefore, I recommend removing this code line: `xgb_predictor.content_type = 'text/csv'`
Removing `xgb_predictor.content_type = 'text/csv'` will work. But the best way is to first check the attributes of the object: ``` xgb_predictor.__dict__.keys() ``` This way, you will know which attributes can be set.
13,289
63,767,925
I'm really new to programming (two days old), so excuse my python dumbness. I've recently run into a problem with adding up two numbers from a list. I've managed to come up with this program: ``` list_nums = ["17", "3"] num1 = list_nums[0] num2 = list_nums[1] sum = (num1) + (num2) print(sum) ``` The problem is that instead of adding up num1 and num2 (17+3=20), Python combines both numbers (i.e. "173"). What can I do in order to add up the numbers instead of combining them?
2020/09/06
[ "https://Stackoverflow.com/questions/63767925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14231446/" ]
`"17"` and `"3"` are strings; if you remove the double quotes from them, they become the integers `17` and `3`. So if you want to add two numbers, they have to be `int` or `float` in Python. Just remove the double quotes in the list: `list_nums = [17, 3]`
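A minimal demonstration of the two behaviours side by side, using the question's own numbers:

```python
num_list = ["17", "3"]

# + on two strings concatenates them
assert num_list[0] + num_list[1] == "173"

# + on two integers adds them
assert int(num_list[0]) + int(num_list[1]) == 20
```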
Your `num1` and `num2` variables contain string values `'17'` and `'3'`. Operator `+` for strings works as a concatenation, e.g. `'17' + '3' == '173'`. If you need to get 20 out of it, you need to work with numeric types, like integers. For that, you either need to remove quotes from your 17 and 3 literals: ``` list_nums = [17, 3] num1 = list_nums[0] num2 = list_nums[1] acc = num1 + num2 print(acc) ``` ...or convert strings to integers on the fly: ``` list_nums = ["17", "3"] num1 = list_nums[0] num2 = list_nums[1] acc = int(num1) + int(num2) print(acc) ``` **P.S.** `sum` is the name of a built-in function in python. It's in general a good idea to avoid overriding such names. Other common names to avoid: `id`, `type`, `min`, `max`, etc.
13,290
40,851,872
Can I get python to print the source code for `__builtins__` directly? OR (more preferably): What is the pathname of the source code for `__builtins__`? --- I at least know the following things: * `__builtins__` is a module, by typing `type(__builtins__)`. * I have tried the best-answer-suggestions to a more general case of this SO question: ["Finding the source code for built-in Python functions?"](https://stackoverflow.com/questions/8608587/finding-the-source-code-for-built-in-python-functions). But no luck: + `print inspect.getdoc(__builtins__)` just gives me a description. + `inspect.getfile(__builtins__)` just gives me an error: `TypeError: <module '__builtin__' (built-in)> is a built-in module` + <https://hg.python.org/cpython/file/c6880edaf6f3/#> does not seem to contain an entry for `__builtins__`. I've tried "site:" search and browsed several of the directories but gave up after a few.
2016/11/28
[ "https://Stackoverflow.com/questions/40851872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The `__builtin__` module is implemented in [`Python/bltinmodule.c`](https://github.com/python/cpython/blob/2.7/Python/bltinmodule.c), a rather unusual location for a rather unusual module.
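The split between C-implemented and pure-Python modules is easy to observe with `inspect` (shown on Python 3, where the module is named `builtins`; `json` is used purely as a contrasting example of a module that does have a source file):

```python
import inspect
import builtins   # Python 3 name for Python 2's __builtin__
import json       # a pure-Python stdlib module, for contrast

try:
    inspect.getfile(builtins)
except TypeError as exc:
    print("no source file:", exc)   # built-in modules have no .py on disk

print(inspect.getfile(json))        # pure-Python modules do
```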
I can't try it right now, but IDLE, Python's default IDE, is able to open core modules easily (I tried with math and some others): <https://docs.python.org/2/library/idle.html>. It's in the menus: Open Module.
13,291
8,275,793
I have managed to write some simple scripts in python for android using sl4a. I can also create shortcuts on my home screen for them. But the icon chosen for this is always the python sl4a icon. Can I change this so different scripts have different icons?
2011/11/26
[ "https://Stackoverflow.com/questions/8275793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1024495/" ]
You can change it if you build the .APK file from your computer and pick the icon there. You develop the Python SL4A application and you pick the logo in the /res/drawable folder.
I guess it depends on your launcher. With ADW launcher, you can do a long press on your shortcut from your home screen and then select the icon you want to use by pressing the icon button. For other launchers I've no idea.
13,292
51,952,761
In tkinter, when a button has the focus, you can press the space bar to execute the command associated with that button. I'm trying to make pressing the Enter key do the same thing. I'm certain I've done this in the past, but I can't find the code, and what I'm doing now isn't working. I'm using python 3.6.1 on a Mac. Here is what I've tried ``` self.startButton.bind('<Return>', self.startButton.invoke) ``` Pressing the Enter key has no effect, but pressing the space bar activates the command bound to `self.startButton`. I've tried binding to `<KeyPress-KP_Enter>` with the same result. I also tried just binding to the command I want to execute: ``` self.startButton.bind('<Return>', self.start) ``` but the result was the same. **EDIT** Here is a little script that exhibits the behavior I'm talking about. ``` import tkinter as tk root = tk.Tk() def start(): print('started') startButton.configure(state=tk.DISABLED) clearButton.configure(state=tk.NORMAL) def clear(): print('cleared') clearButton.configure(state=tk.DISABLED) startButton.configure(state=tk.NORMAL) frame = tk.Frame(root) startButton = tk.Button(frame, text = 'Start', command = start, state=tk.NORMAL) clearButton = tk.Button(frame, text = 'Clear', command = clear, state = tk.DISABLED) startButton.bind('<Return>', start) startButton.pack() clearButton.pack() startButton.focus_set() frame.pack() root.mainloop() ``` In this case, it works when I press the space bar and fails when I press Enter. I get an error message when I press Enter, saying that an argument was passed, but none is required. When I change the definition of `start` to take a dummy argument, pressing Enter works, but pressing the space bar fails, because of a missing argument. I'm having trouble understanding how wizzwizz4's answer gets both to work. Also, I wasn't seeing the error message when I pressed Enter in my actual script, but that's way too long to post. 
That makes things plain.
2018/08/21
[ "https://Stackoverflow.com/questions/51952761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/908293/" ]
The only thing your Ajax is sending is this.refs.search.value - not the name "search" / not url encoded / not multi-part encoded. Indeed, you seem to have invented your own encoding system. Try: ``` xhr.open('get','//localhost:80/ReactStudy/travelReduxApp/public/server/search.php?search=' + value,true); ``` in Ajax.js
``` <?php header('Access-Control-Allow-Origin:* '); /*shows warning without isset*/ /*$form = $_GET["search"]; echo $form;*/ /*with isset shows not found*/ if(isset($_POST["search"])){ $form = $_GET["search"]; echo $form; }else{ echo "not found"; } ?> ```
13,294
37,357,896
I am using Sublime to automatically word-wrap Python code lines that go beyond the 79 characters that PEP 8 defines. Initially I was pressing return to not go beyond the limit. The only downside with that is that anyone else not having word wrap active wouldn't have the limitation. So should I keep actually word-wrapping, or is the visual word wrap OK? [![enter image description here](https://i.stack.imgur.com/Wz4ko.png)](https://i.stack.imgur.com/Wz4ko.png)
2016/05/21
[ "https://Stackoverflow.com/questions/37357896", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1767754/" ]
PEP8 wants you to perform an actual word wrap. The point of PEP8’s stylistic rules is that the file looks the same in every editor, so you cannot rely on editor visualizations to satisfy PEP8. This also makes you choose the point where to break deliberately. For example, Sublime will do a pretty basic job in wrapping that line; but you could do it in a more readable way, e.g.: ``` x = os.path.split(os.path.split(os.path.split( os.path.split(os.path.split(path)[0])[0] )[0])[0]) ``` Of course that’s not necessarily pretty (I blame that mostly on this example code though), but it makes clear what belongs to what. That being said, a good strategy is to simply avoid having to wrap lines. For example, you are using `os.path.split` over and over; so you could change your import: ``` from os.path import split x = split(split(split(split(split(path)[0])[0])[0])[0]) ``` And of course, if you find yourself doing something over and over, maybe there’s a better way to do this, for example using Python 3.4’s `pathlib`: ``` import pathlib p = pathlib.Path(path).parents[2] print(p.parent.absolute(), p.name) ```
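A quick check of the `pathlib` idea using a pure path (a made-up example path, so it behaves identically on any OS; Python 3.4+):

```python
from pathlib import PurePosixPath

# A hypothetical path, just to show the parents mechanics
path = PurePosixPath("/a/b/c/d/e/file.txt")

p = path.parents[2]       # three levels above the file: /a/b/c
print(p.parent, p.name)   # /a/b c
```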
In-file word wrapping would let your code conform to Pep-8 most consistently, even if other programmers are looking at your code using different coding environments. That seems to me to be the best solution to keeping to the standard, particularly if you are expecting that others will, at some point, be looking at your code. If you are working with a set group of people on a project, or even in a company, it may be possible to coordinate with the other programmers to find what solution you are all most satisfied with. For personal projects that you really aren't expecting anyone else to ever look at, I'm sure it's fine to use the visual word wrapping, but enforcing it yourself would certainly help to build on a good habit.
13,295
43,630,195
`A = [[[1,2,3],[4]],[[1,4],[2,3]]]` Here I want to find the lists in A in which no sublist sums to more than 5. The result should be `[[1,4],[2,3]]`. I have tried for a long time to solve this problem in Python, but I still can't figure out the right solution; I got stuck breaking out of multiple loops. My code is as follows, but it's obviously wrong. How can I correct it? ``` A = [[[1,2,3],[4]],[[1,4],[2,3]]] z = [] for l in A: for list in l: sum = 0 while sum < 5: for i in list: sum+=i else: break else: z.append(l) print z ``` Asking for help~
2017/04/26
[ "https://Stackoverflow.com/questions/43630195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5702561/" ]
A simple solution which you can think of would be like this - ``` A = [[[1,2,3],[4]],[[1,4],[2,3]]] r = [] # this will be our result for list in A: # Iterate through each item in A f = True # This is a flag we set for a favorable sublist for item in list: # Here we iterate through each list in the sublist if sum(item) > 5: # If the sum is greater than 5 in any of them, set flag to false f = False if f: # If the flag is set, it means this is a favorable sublist r.append(list) print r ``` But I'm assuming the nesting level would be the same. <http://ideone.com/hhr9uq>
This should work for your problem: ``` >>> for alist in A: ... if max(sum(sublist) for sublist in alist) <= 5: ... print(alist) ... [[1, 4], [2, 3]] ```
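The same condition can be folded into a single list comprehension over the whole input, with `all` expressing "no sublist may sum past 5":

```python
A = [[[1, 2, 3], [4]], [[1, 4], [2, 3]]]

# keep a group only if every one of its sublists sums to at most 5
kept = [group for group in A if all(sum(sub) <= 5 for sub in group)]
print(kept)   # [[[1, 4], [2, 3]]]
```

The first group is dropped because `[1, 2, 3]` sums to 6.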
13,296
54,093,050
I'm following this code example from a [python course](https://www.python-course.eu/python3_properties.php): ``` class P: def __init__(self,x): self.x = x @property def x(self): return self.__x @x.setter def x(self, x): if x < 0: self.__x = 0 elif x > 1000: self.__x = 1000 else: self.__x = x ``` And I tried to implement this pattern to my own code: ``` class PCAModel(object): def __init__(self): self.M_inv = None @property def M_inv(self): return self.__M_inv @M_inv.setter def set_M_inv(self): M = self.var * np.eye(self.W.shape[1]) + np.matmul(self.W.T, self.W) self.__M_inv = np.linalg.inv(M) ``` Note that I want the `M_inv` property to be `None` before I have run the setter the first time. Also, the setter solely relies on other properties of the class object, and not on input arguments. The setter decorator generates an error: ``` NameError: name 'M_inv' is not defined ``` Why is this?
2019/01/08
[ "https://Stackoverflow.com/questions/54093050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128156/" ]
Your setter method should be like below: ``` @M_inv.setter def M_inv(self): M = self.var * np.eye(self.W.shape[1]) + np.matmul(self.W.T, self.W) self.__M_inv = np.linalg.inv(M) ``` The decorator `@M_inv.setter` and the function `def M_inv(self):` must have the same name.
The example is wrong. EDIT: The example was using a setter in `__init__` on purpose. Getters and setters, even though they act like properties, are just methods that access a private attribute. That attribute **must exist**. In the example, `self.__x` is never created. Here is my suggested use: ``` class PCAModel(object): def __init__(self): # We create a private variable self.__M_inv = None @property def M_inv(self): # Accessing M_inv returns the value of the previously created variable return self.__M_inv @M_inv.setter def M_inv(self): # Keep the same name as your property M = self.var * np.eye(self.W.shape[1]) + np.matmul(self.W.T, self.W) self.__M_inv = np.linalg.inv(M) ```
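Worth adding (this is my illustration, not from either answer): a setter defined with only `self` raises a `TypeError` as soon as the attribute is assigned, because Python passes the assigned value as a second argument. A minimal sketch with the conventional two-argument setter:

```python
class Model:
    def __init__(self):
        self._m_inv = None          # backing field; starts out as None

    @property
    def M_inv(self):
        return self._m_inv

    @M_inv.setter
    def M_inv(self, value):         # a setter always receives the assigned value
        self._m_inv = value

m = Model()
assert m.M_inv is None              # property reads as None before any assignment
m.M_inv = 42                        # plain assignment goes through the setter
assert m.M_inv == 42
```

A value that is computed purely from other attributes, as in the question, is usually better expressed as a read-only property (a getter with no setter at all).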
13,301
58,798,388
I feel silly having to ask this question, but my memory evades me of better alternatives. Two approaches that spring to mind: First: ``` def f1(v): return sum(2**i for i,va in enumerate(v) if va) >>> f1([True, False, True]) 5 ``` Second: ``` def f2(v): return int('0b' + "".join(str(int(va)) for va in v),2) >>> f2([True, False, True]) 5 ``` I feel that f1 is almost too clunky to be pythonic, and f2 is plainly too ugly as I'm jumping between multiple datatypes. Maybe it's my age...?
2019/11/11
[ "https://Stackoverflow.com/questions/58798388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1186019/" ]
Using booleans in arithmetic operations (also lambda functions) is very pythonic: ``` lst = [True, False, True] func = lambda x: sum(2 ** num * i for num, i in enumerate(x)) print(func(lst)) # 5 ```
This is another hacky way I came up with: ``` def f1(v): return int(''.join(str(int(b)) for b in v), 2) ``` Example: ``` >>> def f1(v): ... return int(''.join(str(int(b)) for b in v), 2) ... >>> f1([True, False, True]) 5 >>> ``` Another identical example using `map` (more readable in my view): ``` def f1(v): return int(''.join(map(str, map(int, v))), 2) ```
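One subtlety worth noting (my observation, not from either answer): the enumerate-based and join-based formulations read the list in opposite bit orders, and they agree on `[True, False, True]` only because that list is symmetric:

```python
flags = [True, False, True]

# enumerate-based: index 0 is the least-significant bit
lsb_first = sum(1 << i for i, b in enumerate(flags) if b)

# join-based: the first element ends up as the most-significant bit
msb_first = int(''.join('1' if b else '0' for b in flags), 2)

assert lsb_first == msb_first == 5   # symmetric input, so the orders agree

# An asymmetric input shows the difference:
assert sum(1 << i for i, b in enumerate([True, False, False]) if b) == 1
assert int('100', 2) == 4
```

So whichever formulation is chosen, it is worth deciding first which end of the list should be the low bit.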
13,302
54,757,300
I have an existing python array instantiated with zeros. How do I iterate through and change the values? I can't iterate through and change elements of a Python array? ``` num_list = [1,2,3,3,4,5,] mu = np.mean(num_list) sigma = np.std(num_list) std_array = np.zeros(len(num_list)) for i in std_array: temp_num = ((i-mu)/sigma) std_array[i]=temp_num ``` This the error: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
2019/02/19
[ "https://Stackoverflow.com/questions/54757300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7671993/" ]
In your code you are iterating over the elements of the `numpy.array` `std_array`, but then using these elements as indices to dereference `std_array`. An easy solution would be the following. ``` num_arr = np.array(num_list) for i,element in enumerate(num_arr): temp_num = (element-mu)/sigma std_array[i]=temp_num ``` where I am assuming you wanted to use the value of the `num_list` in the first line of the loop when computing `temp_num`. Notice that I created a new `numpy.array` called `num_arr` though. This is because rather than looping, we can use alternative solution that takes advantage of [broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html): ``` std_array = (num_arr-mu)/sigma ``` This is equivalent to the loop, but faster to execute and simpler.
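A runnable sketch of the broadcasting route on the question's numbers (assumes only that NumPy is installed); the whole loop collapses into one vectorised expression:

```python
import numpy as np

num_list = [1, 2, 3, 3, 4, 5]
arr = np.array(num_list, dtype=float)

# one vectorised expression replaces the explicit loop entirely
std_array = (arr - arr.mean()) / arr.std()
print(std_array)
```

By construction the result has mean 0 and (population) standard deviation 1.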
Your `i` is an element from `std_array`, which is a `float`. `Numpy` is therefore complaining that you are trying to index with a `float`, where: > > only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) > and integer or boolean arrays are valid indices > > > If you don't have to use `for`, then `numpy` can broadcast the calculations for you: ``` (std_array - mu)/sigma # array([-2.32379001, -2.32379001, -2.32379001, -2.32379001, -2.32379001, -2.32379001]) ```
13,311
32,200,565
I've got this exception when using `returnValue` in a function:

```
@inlineCallbacks
def my_func(id):
    yield somefunc(id)

@inlineCallbacks
def somefunc(id):
    somevar = yield func(id)
    returnValue(somevar)
```

The traceback:

```
    returnValue(somevar)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1105, in returnValue
    raise _DefGen_Return(val)
twisted.internet.defer._DefGen_Return:
```

The function works fine, but raises an exception. How can I avoid this exception? I just need to return some value from the function.
2015/08/25
[ "https://Stackoverflow.com/questions/32200565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4349456/" ]
Just download and install the [wp-pagenavi](https://wordpress.org/plugins/wp-pagenavi/) plugin and then use:

```
if(method_exists('wp_pagenavi')){
    wp_pagenavi(array('query' => $query));
}
```

Pass your query object in the `wp_pagenavi` method argument.
I guess you are seeking numbered pagination for a custom query; try this article: [Kvcodes](http://www.kvcodes.com/2015/08/how-to-add-numeric-pagination-in-your-wordpress-theme-without-plugin/). Here is the code.

```
function kvcodes_pagination_fn($pages = '', $range = 2){
    $showitems = ($range * 2)+1; // This is the items range, which we can pass as a parameter as necessary.
    global $paged; // Global variable to catch the page counts
    if(empty($paged)) $paged = 1;
    if($pages == '') { // paged is not defined, so it's the first page. just assign it the first page.
        global $wp_query;
        $pages = $wp_query->max_num_pages;
        if(!$pages) $pages = 1;
    }
    if(1 != $pages) { // For other pages, make the pagination work on other page queries
        echo "<div class='kvc_pagination'>";
        if($paged > 2 && $paged > $range+1 && $showitems < $pages) echo "<a href='".get_pagenum_link(1)."'>&laquo;</a>";
        if($paged > 1 && $showitems < $pages) echo "<a href='".get_pagenum_link($paged - 1)."'>&lsaquo;</a>";
        for ($i=1; $i <= $pages; $i++) {
            if (1 != $pages &&( !($i >= $paged+$range+1 || $i <= $paged-$range-1) || $pages <= $showitems )) echo ($paged == $i)? "<span class='current'>".$i."</span>":"<a href='".get_pagenum_link($i)."' class='inactive' >".$i."</a>";
        }
        if ($paged < $pages && $showitems < $pages) echo "<a href='".get_pagenum_link($paged + 1)."'>&rsaquo;</a>";
        if ($paged < $pages-1 && $paged+$range-1 < $pages && $showitems < $pages) echo "<a href='".get_pagenum_link($pages)."'>&raquo;</a>";
        echo "</div>\n";
    }
}
```

Place the function in your current theme's functions.php and use it in loop.php or index.php:

```
kvcodes_pagination_fn();
```

And for the `WP_Query` example:

```
$custom_query = new WP_Query("post_type=receipes&author=kvcodes");
while ($custom_query->have_posts()) :
    $custom_query->the_post();
    // Show loop content...
endwhile;
kvcodes_pagination_fn($custom_query->max_num_pages);
```

That's it.
13,312
59,723,005
For my report, I'm creating a special color plot in jupyter notebook. There are two parameters, `x` and `y`. ``` import numpy as np x = np.arange(-1,1,0.1) y = np.arange(1,11,1) ``` with which I compute a third quantity. Here is an example to demonstrate the concept: ``` values = [] for i in range(len(y)) : z = y[i] * x**3 # in my case the value z represents phases of oscillators # so I will transform the computed values to the intervall [0,2pi) values.append(z) values = np.array(values) % 2*np.pi ``` I'm plotting `y` vs `x`. For each `y = 1,2,3,4...` there will be a horizontal line with total length two. For example: The coordinate `(0.5,8)` stands for a single point on line 8 at position `x = 0.5` and `z(0.5,8)` is its associated value. Now I want to represent each point on all ten lines with a unique color that is determined by `z(x,y)`. Since `z(x,y)` takes only values in `[0,2pi)` I need a color scheme that starts at zero (for example `z=0` corresponds to blue). For increasing z the color continuously changes and in the end at `2pi` it takes the same color again (so at `z ~ 2pi` it becomes blue again). Does someone know how this can be done in python?
2020/01/13
[ "https://Stackoverflow.com/questions/59723005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
A known (reasonably) numerically-stable version of the geometric mean is: ```py import torch def gmean(input_x, dim): log_x = torch.log(input_x) return torch.exp(torch.mean(log_x, dim=dim)) x = torch.Tensor([2.0] * 1000).requires_grad_(True) print(gmean(x, dim=0)) # tensor(2.0000, grad_fn=<ExpBackward>) ``` This kind of implementation can be found, for example, in SciPy ([see here](https://github.com/scipy/scipy/blob/1cc8beed5362ed290f5a8ddf4e99db49b4de6286/scipy/stats/mstats_basic.py#L268-L271)), which is a quite stable lib. --- The implementation above does not handle zeros and negative numbers. Some will argue that the geometric mean with negative numbers is not well-defined, at least when not all of them are negative.
torch.prod() helps:

```
import torch
x = torch.FloatTensor(3).uniform_().requires_grad_(True)
print(x)
y = x.prod() ** (1.0/x.shape[0])
print(y)
y.backward()
print(x.grad)

# tensor([0.5692, 0.7495, 0.1702], requires_grad=True)
# tensor(0.4172, grad_fn=<PowBackward0>)
# tensor([0.2443, 0.1856, 0.8169])
```

EDIT: what about

```
y = (x.abs() ** (1.0/x.shape[0]) * x.sign() ).prod()
```
13,313
5,268,391
Is it possible to pipe numpy data (from one python script ) into the other? suppose that `script1.py` looks like this: `x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})` `print x` Suppose that from the linux command, I run the following: `python script1.py | script2.py` Will `script2.py` get the piped numpy data as an input (stdin)? will the data still be in the same format of numpy? (so that I can, for example, perform numpy operations on it from within `script2.py`)?
2011/03/11
[ "https://Stackoverflow.com/questions/5268391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/540009/" ]
No, data is passed through a pipe as text. You'll need to serialize the data in `script1.py` before writing, and deserialize it in `script2.py` after reading.
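A minimal sketch of one way to do that serialization, using numpy's own `.npy` format as the wire format; `io.BytesIO` stands in for the pipe here so the example is self-contained (in the real scripts you would write to `sys.stdout.buffer` and read from `sys.stdin.buffer`):

```python
# Sketch: serialize a structured numpy array the way script1.py could write it
# to stdout, and script2.py could read it back from stdin.
import io

import numpy as np

x = np.zeros(3, dtype={'names': ['col1', 'col2'], 'formats': ['i4', 'f4']})

pipe = io.BytesIO()   # in script1.py: np.save(sys.stdout.buffer, x)
np.save(pipe, x)

pipe.seek(0)          # in script2.py: y = np.load(sys.stdin.buffer)
y = np.load(pipe)

print(y.dtype.names)                   # ('col1', 'col2')
print(bool((y['col1'] == x['col1']).all()))  # True
```

`np.save` writes a small header plus the raw bytes, so the structured dtype survives the round trip.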
Check out the `save` and `load` functions. I don't think they would object to being passed a pipe instead of a file.
13,314
26,373,356
I am not sure why I am getting an error that game is not defined: ``` #!/usr/bin/python # global variables wins = 0 losses = 0 draws = 0 games = 0 # Welcome and get name of human player print 'Welcome to Rock Paper Scissors!!' human = raw_input('What is your name?') print 'Hello ',human # start game game() def game(): humanSelect = raw_input('Enter selection: R - Rock, P - Paper, S - Scissors, Q - Quit: ') while humanSelect not in ['R', 'P', 'S', 'Q']: print humanSelect, 'is not a valid selection' humanSelect = raw_input('Enter a valid option please') return humanSelect main() ```
2014/10/15
[ "https://Stackoverflow.com/questions/26373356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4147288/" ]
You have to define the function `game` before you can call it. ``` def game(): ... game() ```
Okay, I spent some time tinkering with this today and now have the following:

```
import random
import string

# global variables
global wins
wins = 0
global losses
losses = 0
global draws
draws = 0
global games
games = 0

# Welcome and get name of human player
print 'Welcome to Rock Paper Scissors!!'
human = raw_input('What is your name? ')
print 'Hello ',human

def readyToPlay():
    ready = raw_input('Ready to Play? <Y> or <N> ')
    ready = string.upper(ready)
    if ready == 'Y':
        game()
    else:
        if games == 0:
            print 'Thanks for playing'
            exit
        else:
            gameResults(games, wins, losses, draws)
    return

def game():
    global games
    games += 1
    human = humanChoice()
    computer = computerChoice()
    playResults(human, computer)
    readyToPlay()

def humanChoice():
    humanSelect = raw_input('Enter selection: R - Rock, P - Paper, S - Scissors: ')
    while humanSelect not in ['R', 'P', 'S']:
        print humanSelect, 'is not a valid selection'
        humanSelect = raw_input('Enter a valid option please')
    return humanSelect

def computerChoice():
    computerInt = random.randint(1, 3)
    # randint returns an int, so compare against ints (not the strings '1'/'2')
    if computerInt == 1:
        computerSelect = 'R'
    elif computerInt == 2:
        computerSelect = 'P'
    else:
        computerSelect = 'S'
    return computerSelect

def playResults(human, computer):
    global draws
    global wins
    global losses
    if human == computer:
        print 'Draw'
        draws += 1
    elif human == 'R' and computer == 'P':
        print 'My Paper wrapped your Rock, you lose.'
        losses += 1
    elif human == 'R' and computer == 'S':
        print 'Your Rock smashed my Scissors, you win!'
        wins += 1
    elif human == 'P' and computer == 'S':
        print 'My Scissors cut your paper, you lose.'
        losses += 1
    elif human == 'P' and computer == 'R':
        print 'Your Paper covers my Rock, you win!'
        wins += 1
    elif human == 'S' and computer == 'R':
        print 'My Rock smashes your Scissors, you lose.'
        losses += 1
    elif human == 'S' and computer == 'P':
        print 'Your Scissors cut my Paper, you win!'
        wins += 1

def gameResults(games, wins, losses, draws):
    print 'Total games played', games
    print 'Wins: ', wins, ' Losses: ',losses, ' Draws: ', draws
    exit

readyToPlay()
```

I am going to work on forcing the humanSelect variable to upper case in the same manner that I did with ready, `ready = string.upper(ready)`. I ran into indentation errors earlier today, but will iron that out later tonight. I do have a question. Is it possible to use a variable between the () of a raw_input function, similar to this:

```
if game == 0:
    greeting = 'Would you like to play Rock, Paper, Scissors?'
else:
    greeting = 'Play again?'
ready = raw_input(greeting)
```
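On the closing question: yes — the prompt of `raw_input()` (or `input()` in Python 3) is just an ordinary string argument, so a variable or a one-line conditional expression works there. A small sketch, written as a testable helper:

```python
# The prompt passed to raw_input()/input() can come from any expression.
def pick_greeting(games):
    # one-line conditional expression instead of the if/else block
    return ('Would you like to play Rock, Paper, Scissors? '
            if games == 0 else 'Play again? ')

print(pick_greeting(0))  # Would you like to play Rock, Paper, Scissors?
print(pick_greeting(3))  # Play again?

# at the prompt you would then write: ready = raw_input(pick_greeting(games))
```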
13,317
53,350,132
I'm trying to understand how to pull a specific item from the code below.

```
var snake = [[{x : 20, y : 30}],[{x : 40, y: 50}]];
```

Coming from python I found this to be useful when dealing with for loops to have all my objects in an array within an array. Say for instance I want to pull the first `x:` value from the first object container. I thought `snake[0][0].x` would return `20`, and `snake[1][1].y` would return `50`, but instead I receive: `Uncaught TypeError: Cannot read property 'x' of undefined`

```js
var snake = [[{x : 20, y : 30}],[{x : 40, y: 50}]];

snake[0][0].x;
snake[1][1].y;
```

I'm new to JavaScript and trying to understand why this doesn't work and if there is a way to write this so that it may. Thank you for your help in advance.
2018/11/17
[ "https://Stackoverflow.com/questions/53350132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10444342/" ]
You are deleting a large number of rows. That is the problem. There is lots of overhead in deletions. If you are deleting a significant number of rows in a table -- and significant might only be a few percent -- then it is often faster to recreate the table: ``` select b.* into temp_b -- actually, I wouldn't use a temporary table in case the server goes down from b where b.id = (select max(a.id) from b b2 where b2.id = b.a_id); truncate table b; insert into b select * from temp_b; ``` Before attempting this, be sure that you have backed up `b` or at least stashed a copy of it somewhere. Note that I changed the structure of the `NOT IN`. I strongly discourage the use of `NOT IN`, because the semantics are not intuitive when the subquery returns `NULL` values. If there were a single `NULL` value, then the `WHERE` would never evaluate to TRUE. Even if `NULL` values are not a problem in this case, I strongly recommend using other alternatives so you won't have a problem when `NULL`s are a possibility. For performance on the `SELECT`, you want an index on `b(a_id, id)`. You might find that such an index helps on your original query.
Your query looks fine to me. Your problem seems to be that you have a very large amount of data and need ways to optimize performance. What you can do is materialize your subquery, and make sure max\_id is indexed, for example by making it a primary key. So create a temporary table `Max_B`, and store the results of your sub query in this table. Then perform the delete and drop the temp table afterwards.
13,319
51,869,152
Supposing that I have this dict with the keys and some ranges:

```
d = {"x": (0, 2), "y": (2, 4)}
```

I need to create dicts using the ranges above; I will get:

```
>>> keys = [k for k,v in d.items()]
>>>
>>> def newDict(keys,array):
...     return dict(zip(keys,array))
...
>>> for i in range(0,2):
...     for j in range(2,4):
...         dd = newDict(keys, [i,j])
...         print (dd)
...
{'x': 0, 'y': 2}
{'x': 0, 'y': 3}
{'x': 1, 'y': 2}
{'x': 1, 'y': 3}
```

My question is how to iterate **using the ranges** and create the new dicts in a more Pythonic way. Supposing that I add one more key **z**:

```
d = {"x": (0, 2), "y": (2, 4), "z": (3, 5)}
```

then I would need to add one more nested for loop. Is there another approach?
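One way to avoid adding a nested loop per key is `itertools.product`, which accepts any number of ranges, so the loop depth no longer depends on how many keys `d` has — a sketch (assuming every combination is wanted, as in the example above):

```python
# itertools.product replaces one nested loop per key: adding "z" here
# required no change to the loop structure.
from itertools import product

d = {"x": (0, 2), "y": (2, 4), "z": (3, 5)}

keys = list(d)
ranges = [range(lo, hi) for lo, hi in d.values()]

dicts = [dict(zip(keys, combo)) for combo in product(*ranges)]
print(dicts[0])    # {'x': 0, 'y': 2, 'z': 3}
print(len(dicts))  # 8  (2 * 2 * 2 combinations)
```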
2018/08/16
[ "https://Stackoverflow.com/questions/51869152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2452792/" ]
This is known behavior that came about a few versions ago (I think 2016). This `#{style}` interpolation is not supported in attributes: > > Caution > > > Previous versions of Pug/Jade supported an interpolation syntax such > as: > > > a(href="/#{url}") Link This syntax is no longer supported. > Alternatives are found below. (Check our migration guide for more > information on other incompatibilities between Pug v2 and previous > versions.) > > > For more see: <https://pugjs.org/language/attributes.html> You should be able to use regular [template literals](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals): ``` a(href=`${originalUrl}`) ```
There is an easy way to do that: write the variable directly, without quotes, brackets, $, !, or #, like this:

```
a(href=originalUrl) !{originalUrl}
```

The result of this is a link with the text from originalUrl.

Example: if originalUrl = 'www.google.es'

```
a(href='www.google.es') www.google.es
```

Finally you get the link: [www.google.es](http://www.google.es)
13,320
58,928,062
```
import pandas as pd
from sqlalchemy import create_engine

host='user@127.0.0.1'
port=10000
schema ='result'
table='new_table'

engine = create_engine(f'hive://{host}:{port}/{schema}')
conn=engine.connect()
engine.execute('CREATE TABLE ' + table + ' (year int, GDP_rate int, GDP string)')

data = {
    'year': [2017, 2018],
    'GDP_rate': [31, 30],
    'GDP': ['1.73M', '1.83M']
}
df = pd.DataFrame(data)

df.to_sql(name=table, con=engine, schema='result', index=False, if_exists='append', chunksize=5000)
conn.close()
```

This is my code to build a pandas dataframe and save it to a Hive table, but when I run it I get an error message like this:

```
  File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/pyhive/hive.py", line 380, in _fetch_more
    raise ProgrammingError("No result set")
sqlalchemy.exc.ProgrammingError: (pyhive.exc.ProgrammingError) No result set
[SQL: INSERT INTO TABLE `result`.`new_table` VALUES (%(year)s, %(GDP_rate)s, %(GDP)s)]
[parameters: ({'year': 2017, 'GDP_rate': 31, 'GDP': '1.73M'}, {'year': 2018, 'GDP_rate': 30, 'GDP': '1.83M'})]
(Background on this error at: http://sqlalche.me/e/f405)
```

Actually I don't know why this happens, and only one row of the dataframe ends up saved in the Hive table. If someone knows the reason, please teach me. Thank you.
2019/11/19
[ "https://Stackoverflow.com/questions/58928062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12300690/" ]
Kindly add `method='multi'` for batch insert: `df.to_sql("table_name", con=engine, index=False, method='multi')`
A likely pyhive bug. See <https://github.com/dropbox/PyHive/issues/250>. The problem happens when inserting multiple rows.
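A self-contained sketch of the `method='multi'` call (assuming pandas is installed) — SQLite stands in for Hive here, since `to_sql` accepts a plain `sqlite3` connection, so this only illustrates the parameter, not the pyhive behavior:

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({
    'year': [2017, 2018],
    'GDP_rate': [31, 30],
    'GDP': ['1.73M', '1.83M'],
})

conn = sqlite3.connect(':memory:')
# method='multi' packs every row of a chunk into a single INSERT statement
df.to_sql('new_table', con=conn, index=False, if_exists='append', method='multi')

rows = conn.execute('SELECT year, GDP_rate, GDP FROM new_table ORDER BY year').fetchall()
print(rows)  # [(2017, 31, '1.73M'), (2018, 30, '1.83M')]
```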
13,321
48,524,013
So i'm starting to use Django but i had some problems trying to run my server. I have two versions of python installed. So in my mysite package i tried to run `python manage.py runserver` but i got this error: ``` Unhandled exception in thread started by <function wrapper at 0x058E1430> Traceback (most recent call last): File "C:\Python27\lib\site-packages\django\utils\autoreload.py", line 228, in wrapper fn(*args, **kwargs) File "C:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run self.check(display_num_errors=True) File "C:\Python27\lib\site-packages\django\core\management\base.py", line 359, in check include_deployment_checks=include_deployment_checks, File "C:\Python27\lib\site-packages\django\core\management\base.py", line 346, in _run_checks return checks.run_checks(**kwargs) File "C:\Python27\lib\site-packages\django\core\checks\registry.py", line 81, in run_checks new_errors = check(app_configs=app_configs) File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 16, in check_url_config return check_resolver(resolver) File "C:\Python27\lib\site-packages\django\core\checks\urls.py", line 26, in check_resolver return check_method() File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 254, in check for pattern in self.url_patterns: File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 405, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "C:\Python27\lib\site-packages\django\utils\functional.py", line 35, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Python27\lib\site-packages\django\urls\resolvers.py", line 398, in urlconf_module return import_module(self.urlconf_name) File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module 
    __import__(name)
  File "C:\Users\Davide\Desktop\django-proj\mysite\mysite\urls.py", line 17, in <module>
    from django.urls import path
ImportError: cannot import name path
```

Is it because I have two versions installed?
2018/01/30
[ "https://Stackoverflow.com/questions/48524013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9217311/" ]
Not sure which Django version you are currently using, but if you are working with Django 2.0 then Python 2 won't work (Django 2.0 supports only Python 3.4+). So if you are on Django 2.0 (assuming you already have the latest version of Python installed on your machine), you should run the following command:

```
python3 manage.py runserver
```

Or install Python 3 instead of Python 2 in your virtual environment. Then your command

```
python manage.py runserver
```

should work perfectly.
Have you tried to upgrade django with pip or pip3? ``` pip install --upgrade django --user ```
13,322
29,858,752
I am using selenium with python and have downloaded the chromedriver for my windows computer from this site: <http://chromedriver.storage.googleapis.com/index.html?path=2.15/> After downloading the zip file, I unpacked the zip file to my downloads folder. Then I put the path to the executable binary (C:\Users\michael\Downloads\chromedriver\_win32) into the Environment Variable "Path". However, when I run the following code: ``` from selenium import webdriver driver = webdriver.Chrome() ``` ... I keep getting the following error message: ``` WebDriverException: Message: 'chromedriver' executable needs to be available in the path. Please look at http://docs.seleniumhq.org/download/#thirdPartyDrivers and read up at http://code.google.com/p/selenium/wiki/ChromeDriver ``` But - as explained above - the executable is(!) in the path ... what is going on here?
2015/04/24
[ "https://Stackoverflow.com/questions/29858752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4474430/" ]
*For Linux and OSX* **Step 1: Download chromedriver** ``` # You can find more recent/older versions at http://chromedriver.storage.googleapis.com/ # Also make sure to pick the right driver, based on your Operating System wget http://chromedriver.storage.googleapis.com/81.0.4044.69/chromedriver_mac64.zip ``` For debian: `wget https://chromedriver.storage.googleapis.com/2.41/chromedriver_linux64.zip` **Step 2: Add chromedriver to `/usr/local/bin`** ``` unzip chromedriver_mac64.zip sudo mv chromedriver /usr/local/bin sudo chown root:root /usr/local/bin/chromedriver sudo chmod +x /usr/local/bin/chromedriver ``` --- You should now be able to run ``` from selenium import webdriver browser = webdriver.Chrome() browser.get('http://localhost:8000') ``` without any issues
Had this issue with Mac Mojave running Robot test framework and Chrome 77. This solved the problem. Kudos @Navarasu for pointing me to the right track. ``` $ pip install webdriver-manager --user # install webdriver-manager lib for python $ python # open python prompt ``` Next, in python prompt: ``` from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(ChromeDriverManager().install()) # ctrl+d to exit ``` This leads to the following error: ``` Checking for mac64 chromedriver:xx.x.xxxx.xx in cache There is no cached driver. Downloading new one... Trying to download new driver from http://chromedriver.storage.googleapis.com/xx.x.xxxx.xx/chromedriver_mac64.zip ... TypeError: makedirs() got an unexpected keyword argument 'exist_ok' ``` * I now got the newest download link + Download and unzip chromedriver to where you want + For example: `~/chromedriver/chromedriver` Open `~/.bash_profile` with editor and add: ``` export PATH="$HOME/chromedriver:$PATH" ``` Open new terminal window, ta-da
13,324
52,566,756
I tried to draw a decision tree in Jupyter Notebook this way. ``` mglearn.plots.plot_animal_tree() ``` But I didn't make it in the right way and got the following error message. ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-65-45733bae690a> in <module>() 1 ----> 2 mglearn.plots.plot_animal_tree() ~\Desktop\introduction_to_ml_with_python\mglearn\plot_animal_tree.py in plot_animal_tree(ax) 4 5 def plot_animal_tree(ax=None): ----> 6 import graphviz 7 if ax is None: 8 ax = plt.gca() ModuleNotFoundError: No module named 'graphviz ``` So I downloaded [Graphviz Windows Packages](https://graphviz.gitlab.io/_pages/Download/Download_windows.html) and installed it. And I added the PATH installed path(C:\Program Files (x86)\Graphviz2.38\bin) to USER PATH and (C:\Program Files (x86)\Graphviz2.38\bin\dot.exe) to SYSTEM PATH. And restarted my PC. But it didnt work. I still can't get it working. So I searched over the internet and got another solution that, I can add the PATH in my code like this. ``` import os os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin' ``` But it didn't work. So I do not know how to figure it out now. I use the Python3.6 integrated into Anacode3. And I ALSO tried installing graphviz via PIP like this. ``` pip install graphviz ``` BUT it still doesn't work. Hope someone can help me, sincerely.
2018/09/29
[ "https://Stackoverflow.com/questions/52566756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5574794/" ]
In Anaconda, install:

* python-graphviz
* pydot

This will fix your problem.
In case your operating system is **Ubuntu**, I recommend trying this command:

```
sudo apt-get install -y graphviz libgraphviz-dev
```
13,334
58,838,759
I have multiple csv files containing item and invoicing data (proprietary and edifact files). They look roughly like this:

```
0001;12345;Item1
0002;12345;EUR;1.99
0003;12345;EUR;1.99
```

They always start with 0001 but do not necessarily have more than one row. How do I group them efficiently? Currently I read them line by line, split them by ';', and add them all to one list until the first value is again 0001. Should I first split them using regular expressions and then continue parsing? What is the most pythonic way?
2019/11/13
[ "https://Stackoverflow.com/questions/58838759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11766755/" ]
With my knowledge of EDIFACT-style files, they're basically hierarchical, with some row code (`0001` here) acting as a "start-of-group" symbol. So yeah – something like this is a fast, Pythonic way to group by that symbol. (`input_file` can just as well be a disk file, but for the sake of a self-contained example, it's an `io.StringIO()`.) This particular implementation has the extra feature of crashing if the file smells invalid, i.e. doesn't start with an `0001` record. ```py import io from pprint import pprint import csv input_file = io.StringIO(""" 0001;12345;Item1 0002;12345;EUR;1.99 0003;12345;EUR;1.99 0001;12345;Item2 0002;12345;EUR;1.99 0003;12345;EUR;1.99 0001;12345;Item3 0002;12345;EUR;1.99 0003;12345;EUR;1.99 0001;12345;Item4 0002;12345;EUR;1.99 0003;12345;EUR;1.99 """.strip()) groups = [] for line in csv.reader(input_file, delimiter=";"): if line[0] == "0001": groups.append([]) groups[-1].append(line) pprint(groups) ``` The output is a list of lists of split rows: ``` [[['0001', '12345', 'Item1'], ['0002', '12345', 'EUR', '1.99'], ['0003', '12345', 'EUR', '1.99']], [['0001', '12345', 'Item2'], ['0002', '12345', 'EUR', '1.99'], ['0003', '12345', 'EUR', '1.99']], [['0001', '12345', 'Item3'], ['0002', '12345', 'EUR', '1.99'], ['0003', '12345', 'EUR', '1.99']], [['0001', '12345', 'Item4'], ['0002', '12345', 'EUR', '1.99'], ['0003', '12345', 'EUR', '1.99']]] ```
If the files have the same columns, you could read each one into a DataFrame and append them: `df1 = pd.read_csv(Path+File1, sep=';')`, `df2 = pd.read_csv(Path+File2, sep=';')`, then `df2.append(df1, ignore_index=True, sort=False)`. Afterward, you can just sort by the first column, which contains the '0001' markers: `df.sort_values(by=['col1'], ascending=False)`.
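Another sketch for the same grouping idea, using `itertools.groupby` with a key that increments at each '0001' record (sample data inlined so it runs standalone):

```python
# Group rows into records: a new group starts whenever column 0 is '0001'.
import csv
import io
from itertools import groupby

raw = ("0001;12345;Item1\n"
       "0002;12345;EUR;1.99\n"
       "0003;12345;EUR;1.99\n"
       "0001;12345;Item2\n"
       "0002;12345;EUR;1.99\n")

counter = 0
def group_key(row):
    global counter
    if row[0] == "0001":
        counter += 1   # bump the key so groupby opens a new group
    return counter

rows = csv.reader(io.StringIO(raw), delimiter=";")
groups = [list(g) for _, g in groupby(rows, key=group_key)]
print(len(groups))      # 2
print(groups[0][0][2])  # Item1
```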
13,343
52,676,660
I am totally new to Jupyter Notebook. Currently, I am using the notebook with R and it is working well. Now, I tried to use it with Python and I receive the following error.

> [I 09:00:52.947 NotebookApp] KernelRestarter: restarting kernel (4/5),
> new random ports
>
> Traceback (most recent call last):
>
> File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
>
> File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
> exec(code, run_globals)
>
> File "/home/frey/.local/lib/python3.6/site-packages/ipykernel_launcher.py",
> line 15, in <module> from ipykernel import kernelapp as app
>
> File "/home/frey/.local/lib/python3.6/site-packages/ipykernel/__init__.py",
> line 2, in <module> from .connect import *
>
> File "/home/frey/.local/lib/python3.6/site-packages/ipykernel/connect.py",
> line 13, in <module> from IPython.core.profiledir import ProfileDir
>
> File "/home/frey/.local/lib/python3.6/site-packages/IPython/__init__.py",
> line 55, in <module> from .terminal.embed import embed
>
> File "/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/embed.py",
> line 16, in <module> from IPython.terminal.interactiveshell import
> TerminalInteractiveShell
>
> File "/home/frey/.local/lib/python3.6/site-packages/IPython/terminal/interactiveshell.py",
> line 20, in <module> from prompt_toolkit.formatted_text import PygmentsTokens
> ModuleNotFoundError: No module named 'prompt_toolkit.formatted_text'
>
> [W 09:00:55.956 NotebookApp] KernelRestarter: restart failed
> [W 09:00:55.956 NotebookApp] Kernel 24117cd7-38e5-4978-8bda-d1b84f498051
> died, removing from map.

Hopefully, someone can help me.
2018/10/06
[ "https://Stackoverflow.com/questions/52676660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10464893/" ]
> ipython 7.0.1 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you'll have prompt-toolkit 1.0.15, which is incompatible
>
> <https://github.com/jupyter/jupyter_console/issues/158>

Upgrading `prompt-toolkit` will fix the problem:

```
pip install --upgrade prompt-toolkit
```
It's more stable to create a kernel with an Anaconda virtualenv. Follow these steps. 1. Execute Anaconda prompt. 2. Type `conda create --name $ENVIRONMENT_NAME R -y` 3. Type `conda activate $ENVIRONMENT_NAME` 4. Type `python -m ipykernel install` 5. Type `ipython kernel install --user --name $ENVIRONMENT_NAME` Then, you'll have a new jupyter kernel named 'R' with R installed.
13,344
2,623,524
As asked and answered in [this post](https://stackoverflow.com/questions/2595119/python-glob-and-bracket-characters), I need to replace '[' with '[[]', and ']' with '[]]'. I tried to use s.replace(), but as it's not an in-place change, I ran the following and got a wrong answer.

```
path1 = "/Users/smcho/Desktop/bracket/[10,20]"
path2 = path1.replace('[','[[]')
path3 = path2.replace(']','[]]')
pathName = os.path.join(path3, "*.txt")
print pathName
--> /Users/smcho/Desktop/bracket/[[[]]10,20[]]/*.txt
```

* How can I do the multiple replace in python?
* Or how can I replace '[' and ']' at the same time?
2010/04/12
[ "https://Stackoverflow.com/questions/2623524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I would use code like ``` path = "/Users/smcho/Desktop/bracket/[10,20]" replacements = {"[": "[[]", "]": "[]]"} new_path = "".join(replacements.get(c, c) for c in path) ```
``` import re path2 = re.sub(r'(\[|\])', r'[\1]', path1) ```
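For what it's worth, later Python versions (3.4+, so not available when the question was asked) ship a helper for exactly this escaping: `glob.escape`. Note it only wraps `*`, `?` and `[` — a `]` without a matching `[` needs no escaping for `fnmatch`-style matching. A sketch:

```python
# glob.escape wraps each glob metacharacter in brackets so it matches literally.
import fnmatch
import glob

path1 = "/Users/smcho/Desktop/bracket/[10,20]"

escaped = glob.escape(path1)
print(escaped)  # /Users/smcho/Desktop/bracket/[[]10,20]

pattern = escaped + "/*.txt"
print(fnmatch.fnmatch(path1 + "/a.txt", pattern))  # True
```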
13,353
41,186,818
The [uuid4()](https://docs.python.org/2/library/uuid.html#uuid.uuid4) function of Python's module `uuid` generates a random UUID, and seems to generate a different one every time:

```
In [1]: import uuid

In [2]: uuid.uuid4()
Out[2]: UUID('f6c9ad6c-eea0-4049-a7c5-56253bc3e9c0')

In [3]: uuid.uuid4()
Out[3]: UUID('2fc1b6f9-9052-4564-9be0-777e790af58f')
```

I would like to be able to generate the same random UUID every time I run a script - that is, I'd like to seed the random generator in `uuid4()`. Is there a way to do this? (Or achieve this by some other means)?

### What I've tried so far

I've tried to generate a UUID using the `uuid.UUID()` method with a random 128-bit integer (from a seeded instance of `random.Random()`) as input:

```
import uuid
import random

rd = random.Random()
rd.seed(0)
uuid.UUID(rd.getrandbits(128))
```

However, `UUID()` seems not to accept this as input:

```
Traceback (most recent call last):
  File "uuid_gen_seed.py", line 6, in <module>
    uuid.UUID(rd.getrandbits(128))
  File "/usr/lib/python2.7/uuid.py", line 133, in __init__
    hex = hex.replace('urn:', '').replace('uuid:', '')
AttributeError: 'long' object has no attribute 'replace'
```

Any other suggestions?
2016/12/16
[ "https://Stackoverflow.com/questions/41186818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/995862/" ]
[Faker](https://github.com/joke2k/faker "Faker") makes this easy ``` >>> from faker import Faker >>> f1 = Faker() >>> f1.seed(4321) >>> print(f1.uuid4()) cc733c92-6853-15f6-0e49-bec741188ebb >>> print(f1.uuid4()) a41f020c-2d4d-333f-f1d3-979f1043fae0 >>> f1.seed(4321) >>> print(f1.uuid4()) cc733c92-6853-15f6-0e49-bec741188ebb ```
Simple solution based on the answer of @user10229295, with a comment about the seed. The Edit queue was full, so I opened a new answer: ``` import hashlib import uuid seed = 'Type your seed_string here' #Read comment below m = hashlib.md5() m.update(seed.encode('utf-8')) new_uuid = uuid.UUID(m.hexdigest()) ``` **Comment about the string *'seed'***: It will be the seed from which the UUID will be generated: from the same seed string will be always generated the same UUID. You can convert integer with some significance as string, concatenate different strings and use the result as your seed. With this you will have control on the UUID generated, which means you will be able to reproduce your UUID knowing the seed you used: with the same seed, the UUID generated from it will be the same.
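On the error in the question's own attempt: `uuid.UUID` treats a positional argument as a hex string, so the seeded 128-bit integer has to be passed through the `int=` keyword instead (a sketch; `version=4` just stamps the version/variant bits):

```python
import random
import uuid

rd = random.Random()
rd.seed(0)
# int= is the right keyword for a 128-bit integer input
seeded = uuid.UUID(int=rd.getrandbits(128), version=4)
print(seeded)

# same seed -> same UUID
rd.seed(0)
again = uuid.UUID(int=rd.getrandbits(128), version=4)
print(seeded == again)  # True
```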
13,363
14,946,639
Say I have the following code:

```
if request.POST:
    id = request.POST.get('id')
    # block of code to use variable id
    do_work(id)
    do_other_work(id)
```

Is there a shortcut (one line of code) that tests request.POST for the conditional block and also assigns the variable id for the block to use? I read [Is there a Python shortcut for variable checking and assignment?](https://stackoverflow.com/questions/1207333/is-there-a-python-shortcut-for-variable-checking-and-assignment) but it doesn't really answer my question.
2013/02/18
[ "https://Stackoverflow.com/questions/14946639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/342553/" ]
No, you can't assign anything in an `if` test expression. If you didn't have the rest of the `if` block,

```
id = request.POST and request.POST.get('id')
```

would work. It doesn't make much sense to do it, though, because `id = request.POST.get('id')` works just fine with an empty `request.POST`. Please remember that `request.POST` can be empty, even if the method was `POST`. This is what most people would write:

```
if request.method == 'POST':
    id = request.POST.get('id')
    # rest of block
```
I like this: ``` id = request.POST.get('id', False) if id is not False: # do something ```
13,373
66,534,294
I am using python 3.7 and have installed IPython. I am using the ipython shell in django like ``` python manage.py shell_plus ``` and then ``` [1]: %load_ext autoreload [2]: %autoreload 2 ``` and then I am doing ``` [1]: from boiler.tasks import add [2]: add(1,2) "testing" ``` `change add function` ``` def add(x,y): print("testing2") ``` and then I am doing ``` [1]: from boiler.tasks import add [2]: add(1,2) "testing" ``` So here I found it's not updating.
2021/03/08
[ "https://Stackoverflow.com/questions/66534294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2897115/" ]
> > Will this cause issues on the memory side? > > > Which side of what, and which side of it is the memory side? It may use more memory than necessary. > > What happens to that extra memory? > > > It remains unused. > > Does it have to be manually freed or is there a way to do it automatically? > > > In a general-purpose operating system, the system will recover memory when your process terminates. If you want allocated memory to be available for reuse before that, you must free it. > > Should I just not bother messing around and just figure out the right size beforehand? > > > Appropriate strategies depend on circumstances. Some possibilities are: * You allocate a large amount of memory, perform the operations to get data into it and then, having learned the size of the data, use `realloc` to release the excess memory. * You make a fair estimate of the amount of memory needed, being sure to be at or over the requirement, not under, allocate that memory, and let the small excess be wasted. * You allocate some memory and start operations to put data into it. As you acquire data, you watch for it filling the allocated amount. When more is needed, you use `realloc` to get more.
For your purposes, the memory allocator doesn't know, nor does it really care about how much memory you actually use in a block you malloc. The key here is to never use *more* memory than you malloc. The extra memory just sits there, available for your use if you want it. Note that allocating 10 bytes vs 4 bytes won't make much, if any difference for you, assuming you're not performing this allocation thousands of times. The only validation / size calculation I'd recommend you never skip is ensuring that you never allocate too little memory for the item you're placing in the allocated space. You don't technically have to free the memory (as the OS will clean up the mess you've made after your application exits), but you absolutely should. If you don't free memory after you're done using it, this is called a memory leak and will cause your application to use more RAM than it has to. Additionally, be very careful never to use memory again after you've freed it. That's called a use-after-free, and can be a dangerous bug. Memory management in C is very much something you have to do manually, as opposed to some other languages.
13,374
26,953,153
Beginner python coder here, keep things simple, please. So, I need this code below to scramble two letters without scrambling the first or last letters. Everything seems to work right up until the `scrambler()` function. ``` from random import randint def wordScramble(string): stringArray = string.split() for word in stringArray: if len(word) >= 4: letter = randint(1,len(word)-2) point = letter while point == letter: point = randint(1, len(word)-2) word = switcher(word,letter,point) ' '.join(stringArray) return stringArray def switcher(word,letter,point): word = list(word) word[letter],word[point]=word[point],word[letter] return word print(wordScramble("I can't wait to see how this turns itself out")) ``` The outcome is always: `I can't wait to see how this turns itself out`
2014/11/16
[ "https://Stackoverflow.com/questions/26953153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4257122/" ]
If the labels 0, 100, 200 belong to one axis and the texts "Day One", ... to the other one you can set the colors of the labels of the first axis to transparent like this ``` axis.TextColor = OxyColors.Transparent; ``` Hope this helps.
If using XAML: ``` <oxy:Plot.Axes> <oxy:LinearAxis Position="Left" TextColor="Transparent"/> </oxy:Plot.Axes> ``` If using code: ``` // Create a plot model PlotModel = new PlotModel { Title = "Updating by task running on the UI thread" }; // Add the axes, note that MinimumPadding and AbsoluteMinimum should be set on the value axis. PlotModel.Axes.Add(new LinearAxis { Position = AxisPosition.Bottom, TextColor = OxyColors.Transparent}); ```
13,375
18,782,584
How can I perform post processing on my SQL3 database via python? The following code doesn't work, but what I am trying to do is first create a new database if not exists already, then insert some data, and finally execute the query and close the connection. But I want to do so separately, so as to add additional functionality later on, such as delete / update / etc... Any ideas? ``` class TitlesDB: # initiate global variables conn = None c = None # perform pre - processing def __init__(self, name): import os os.chdir('/../../') import sqlite3 conn = sqlite3.connect(name) c = conn.cursor() c.execute('CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)') # insert a bunch of new titles def InsertTitles(self, list): c.executemany('INSERT OR IGNORE INTO titles VALUES (?)', list) # perform post - processing def __fina__(self): conn.commit() conn.close() ```
2013/09/13
[ "https://Stackoverflow.com/questions/18782584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2295350/" ]
It would seem that FB has made some changes to its redirection script when it detects a Windows Phone webbrowser control. What the C# SDK does is generate the login page as "<http://www.facebook.com>....". When you open this URL in the webbrowser control, it gets redirected to "<http://m.facebook.com>...", which displays the mobile version of the FB login page. This previously caused no issue, but recently, when FB does the redirection, it also strips the parameter "display=page" from the URL. What happens then is that when a successful FB login is made, the "login\_success.html" page is opened without this parameter. Without the "display=page" parameter passed in, it defaults to "display=touch". This URL unfortunately does not append the token string, hence the page shown in the very first thread is displayed. The solution is to amend the code that generates the login URL. Original: ``` Browser.Navigate(_fb.GetLoginUrl(parameters)); ``` Amended: ``` var URI = _fb.GetLoginUrl(parameters).ToString().Replace("www.facebook.com", "m.facebook.com"); Browser.Navigate(new Uri(URI)); ```
In my project I just listened for the WebView's navigated event. If it happens, it means that user did something on the login page (i.e. pressed login button). Then I parsed the uri of the page you mentioned which should contain OAuth callback url, if it is correct and the result is success I redirect manually to the correct page: ``` //somewhere in the app private readonly FacebookClient _fb = new FacebookClient(); private void webBrowser1_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e) { FacebookOAuthResult oauthResult; if (!_fb.TryParseOAuthCallbackUrl(e.Uri, out oauthResult)) { return; } if (oauthResult.IsSuccess) { var accessToken = oauthResult.AccessToken; //you have an access token, you can proceed further FBLoginSucceded(accessToken); } else { // errors when logging in MessageBox.Show(oauthResult.ErrorDescription); } } ``` If you abstract logging in an async function, you expect it to behave asynchronously, so events are ok. Sorry for my English. The code for the full page: ``` public partial class LoginPageFacebook : PhoneApplicationPage { private readonly string AppId = Constants.FacebookAppId; private const string ExtendedPermissions = "user_birthday,email,user_photos"; private readonly FacebookClient _fb = new FacebookClient(); private Dictionary<string, object> facebookData = new Dictionary<string, object>(); UserIdentity userIdentity = App.Current.Resources["userIdentity"] as UserIdentity; public LoginPageFacebook() { InitializeComponent(); } private void webBrowser1_Loaded(object sender, RoutedEventArgs e) { var loginUrl = GetFacebookLoginUrl(AppId, ExtendedPermissions); webBrowser1.Navigate(loginUrl); } private Uri GetFacebookLoginUrl(string appId, string extendedPermissions) { var parameters = new Dictionary<string, object>(); parameters["client_id"] = appId; parameters["redirect_uri"] = "https://www.facebook.com/connect/login_success.html"; parameters["response_type"] = "token"; parameters["display"] = "touch"; // add the 
'scope' only if we have extendedPermissions. if (!string.IsNullOrEmpty(extendedPermissions)) { // A comma-delimited list of permissions parameters["scope"] = extendedPermissions; } return _fb.GetLoginUrl(parameters); } private void webBrowser1_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e) { if (waitPanel.Visibility == Visibility.Visible) { waitPanel.Visibility = Visibility.Collapsed; webBrowser1.Visibility = Visibility.Visible; } FacebookOAuthResult oauthResult; if (!_fb.TryParseOAuthCallbackUrl(e.Uri, out oauthResult)) { return; } if (oauthResult.IsSuccess) { var accessToken = oauthResult.AccessToken; FBLoginSucceded(accessToken); } else { // user cancelled MessageBox.Show(oauthResult.ErrorDescription); } } private void FBLoginSucceded(string accessToken) { var fb = new FacebookClient(accessToken); fb.GetCompleted += (o, e) => { if (e.Error != null) { Dispatcher.BeginInvoke(() => MessageBox.Show(e.Error.Message)); return; } var result = (IDictionary<string, object>)e.GetResultData(); var id = (string)result["id"]; userIdentity.FBAccessToken = accessToken; userIdentity.FBID = id; facebookData["Name"] = result["first_name"]; facebookData["Surname"] = result["last_name"]; facebookData["Email"] = result["email"]; facebookData["Birthday"] = DateTime.Parse((string)result["birthday"]); facebookData["Country"] = result["locale"]; Dispatcher.BeginInvoke(() => { BitmapImage profilePicture = new BitmapImage(new Uri(string.Format("https://graph.facebook.com/{0}/picture?type={1}&access_token={2}", id, "square", accessToken))); facebookData["ProfilePicture"] = profilePicture; userIdentity.FBData = facebookData; userIdentity.ProfilePicture = profilePicture; ARLoginOrRegister(); }); }; fb.GetAsync("me"); } private void ARLoginOrRegister() { WebService.ARServiceClient client = new WebService.ARServiceClient(); client.GetUserCompleted += client_GetUserCompleted; client.GetUserAsync((string)facebookData["Email"]); client.CloseAsync(); } void 
client_GetUserCompleted(object sender, WebService.GetUserCompletedEventArgs e) { if (e.Result == null) NavigationService.Navigate(new Uri("/RegisterPageFacebook.xaml", UriKind.RelativeOrAbsolute)); else if (e.Result.AccountType != (int)AccountType.Facebook) { MessageBox.Show("This account is not registered with facebook!"); NavigationService.Navigate(new Uri("/LoginPage.xaml", UriKind.RelativeOrAbsolute)); } else { userIdentity.Authenticated += userIdentity_Authenticated; userIdentity.FetchARSocialData((string)facebookData["Email"]); } } void userIdentity_Authenticated(bool success) { NavigationService.Navigate(new Uri("/MenuPage.xaml", UriKind.RelativeOrAbsolute)); } } ```
13,376
809,859
Personal preferences aside, is there an optimal tab size (2 spaces? 3 spaces? 8 spaces?) for code readability? In the different projects I've worked on, people seem to have vastly different standards. I can't seem to read 2 space indents, but companies like Google use it as a standard. Can anyone point to documentation, studies, or well-reasoned arguments for the optimal size of a tab? If we want to get specific, I work mostly in python. The goal of this question is to pick a standard for the team I work on.
2009/05/01
[ "https://Stackoverflow.com/questions/809859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/85271/" ]
[Four spaces and no hard tabs](https://david.goodger.org/projects/pycon/2007/idiomatic/handout.html#whitespace-1), if you're a Pythonista.
``` 2 space 4 busy coder 3 space for heavy if statement using script kiddies 4 space for those who make real money pressing space 4 times 8 space for the man in ties and suit who doesn't need to code ```
13,378
32,309,177
How do we do a DNS query, especially an MX query, in Python without installing any third-party libs? I want to query the MX record of a domain; however, it seems that `socket.getaddrinfo` can only query the A record. I have tried this: ``` python -c "import socket; print socket.getaddrinfo('baidu.com', 25, socket.AF_INET, socket.SOCK_DGRAM)" ``` This prints ``` [(2, 2, 17, '', ('220.181.57.217', 25)), (2, 2, 17, '', ('123.125.114.144', 25)), (2, 2, 17, '', ('180.149.132.47', 25))] ``` However, we can not telnet it with `telnet 220.181.57.217 25` or `telnet 123.125.114.144 25` or `telnet 180.149.132.47 25`.
2015/08/31
[ "https://Stackoverflow.com/questions/32309177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1889327/" ]
Here's some rough low-level code for making a dns request using just the standard library if anyone's interested. ``` import secrets import socket # https://datatracker.ietf.org/doc/html/rfc1035 # https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#table-dns-parameters-4 def dns_request(name, qtype=1, addr=('127.0.0.53', 53), timeout=1): # A 1, NS 2, CNAME 5, SOA 6, NULL 10, PTR 12, MX 15, TXT 16, AAAA 28, NAPTR 35, * 255 name = name.rstrip('.') queryid = secrets.token_bytes(2) # Header. 1 for Recursion Desired, 1 question, 0 answers, 0 ns, 0 additional request = queryid + b'\1\0\0\1\0\0\0\0\0\0' # Question for label in name.rstrip('.').split('.'): assert len(label) < 64, name request += int.to_bytes(len(label), length=1, byteorder='big') request += label.encode() request += b'\0' # terminates with the zero length octet for the null label of the root. request += int.to_bytes(qtype, length=2, byteorder='big') # QTYPE request += b'\0\1' # QCLASS = 1 with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: s.sendto(request, addr) s.settimeout(timeout) try: response, serveraddr = s.recvfrom(4096) except socket.timeout: raise TimeoutError(name, timeout) assert serveraddr == addr, (serveraddr, addr) assert response[:2] == queryid, (response[:2], queryid) assert response[2] & 128 # QR = Response assert not response[2] & 4 # No Truncation assert response[3] & 128 # Recursion Available error_code = response[3] % 16 # 0 = no error, 1 = format error, 2 = server failure, 3 = does not exist, 4 = not implemented, 5 = refused qdcount = int.from_bytes(response[4:6], 'big') ancount = int.from_bytes(response[6:8], 'big') assert qdcount <= 1 # parse questions qa = response[12:] for question in range(qdcount): domain, qa = parse_qname(qa, response) qtype, qa = parse_int(qa, 2) qclass, qa = parse_int(qa, 2) # parse answers answers = [] for answer in range(ancount): domain, qa = parse_qname(qa, response) qtype, qa = parse_int(qa, 2) qclass, qa = parse_int(qa, 2) ttl, 
qa = parse_int(qa, 4) rdlength, qa = parse_int(qa, 2) rdata, qa = qa[:rdlength], qa[rdlength:] if qtype == 1: # IPv4 address rdata = '.'.join(str(x) for x in rdata) if qtype == 15: # MX mx_pref, rdata = parse_int(rdata, 2) if qtype in (2, 5, 12, 15): # NS, CNAME, MX rdata, _ = parse_qname(rdata, response) answer = (qtype, domain, ttl, rdata, mx_pref if qtype == 15 else None) answers.append(answer) return error_code, answers def parse_int(byts, ln): return int.from_bytes(byts[:ln], 'big'), byts[ln:] def parse_qname(byts, full_response): domain_parts = [] while True: if byts[0] // 64: # OFFSET pointer assert byts[0] // 64 == 3, byts[0] offset, byts = parse_int(byts, 2) offset = offset - (128 + 64) * 256 # clear out top 2 bits label, _ = parse_qname(full_response[offset:], full_response) domain_parts.append(label) break else: # regular QNAME ln, byts = parse_int(byts, 1) label, byts = byts[:ln], byts[ln:] if not label: break domain_parts.append(label.decode()) return '.'.join(domain_parts), byts ```
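The QNAME wire format that the request-building loop above emits can be isolated into a tiny self-contained helper (the function name is mine, not from the answer):

```python
def encode_qname(name: str) -> bytes:
    """Encode a dotted hostname as DNS length-prefixed labels (RFC 1035)."""
    out = b""
    for label in name.rstrip(".").split("."):
        assert len(label) < 64, "DNS labels are limited to 63 octets"
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"  # the zero-length root label terminates the name

# 'example.com' becomes a 7-byte label, a 3-byte label, then the root.
assert encode_qname("example.com") == b"\x07example\x03com\x00"
```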
First install dnspython: ``` import dns.resolver answers = dns.resolver.query('dnspython.org', 'MX') for rdata in answers: print('Host', rdata.exchange, 'has preference', rdata.preference) ```
13,388
38,414,650
I've recently found this page: [Making PyObject\_HEAD conform to standard C](https://www.python.org/dev/peps/pep-3123/) and I'm curious about this paragraph: > > Standard C has one specific exception to its aliasing rules precisely designed to support the case of Python: a value of a struct type may also be accessed through a pointer to the first field. E.g. **if a struct starts with an int , the struct \* may also be cast to an int \* , allowing to write int values into the first field**. > > > So I wrote this code to check with my compilers: ``` struct with_int { int a; char b; }; int main(void) { struct with_int *i = malloc(sizeof(struct with_int)); i->a = 5; ((int *)&i)->a = 8; } ``` but I'm getting `error: request for member 'a' in something not a struct or union`. Did I get the above paragraph right? If no, what am I doing wrong? Also, if someone knows where C standard is referring to this rule, please point it out here. Thanks.
2016/07/16
[ "https://Stackoverflow.com/questions/38414650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5960237/" ]
Your interpretation1 is correct, but the code isn't. The pointer `i` already points to the object, and thus to the first element, so you only need to cast it to the correct type: ``` int* n = ( int* )i; ``` then you simply dereference it: ``` *n = 345; ``` Or in one step: ``` *( int* )i = 345; ``` --- 1 (Quoted from: ISO:IEC 9899:201X 6.7.2.1 Structure and union specifiers 15) Within a structure object, the non-bit-field members and the units in which bit-fields reside have addresses that increase in the order in which they are declared. A pointer to a structure object, suitably converted, points to its initial member (or if that member is a bit-field, then to the unit in which it resides), and vice versa. There may be unnamed padding within a structure object, but not at its beginning.
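Since this is a Python-centric thread, the same first-member aliasing can be observed from Python via `ctypes` (a sketch of my own, not part of the original answer):

```python
import ctypes

class WithInt(ctypes.Structure):
    # Mirrors: struct with_int { int a; char b; };
    _fields_ = [("a", ctypes.c_int), ("b", ctypes.c_char)]

s = WithInt(5, b"x")
# A pointer to the struct, cast to int*, reaches the first member.
p = ctypes.cast(ctypes.pointer(s), ctypes.POINTER(ctypes.c_int))
p.contents.value = 8
assert s.a == 8  # the write through int* landed in the first field
```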
You have a few issues, but this works for me: ``` #include <stdlib.h> #include <stdio.h> struct with_int { int a; char b; }; int main(void) { struct with_int *i = malloc(sizeof(struct with_int)); i->a = 5; *(int *)i = 8; printf("%d\n", i->a); free(i); } ``` Output is: 8
13,389
65,521,446
When typing a word in a dash input I would like to get autosuggestions; an example of what I mean is this CLI app I made in the past. [![enter image description here](https://i.stack.imgur.com/SPzmM.png)](https://i.stack.imgur.com/SPzmM.png) A link to the documentation: <https://python-prompt-toolkit.readthedocs.io/en/master/pages/asking_for_input.html?highlight=suggestions#autocompletion> Thank you in advance!
2020/12/31
[ "https://Stackoverflow.com/questions/65521446", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14008858/" ]
`:` is missing after the third `while` statement; also, the `except` and `print` statements have the same indentation level. You can use `try-except` without an additional `while` loop: check if the input number is less than `11` and append the input to the list, and if not, break the while loop. ***Example***: ``` while True: try: grade = int(input("Please enter the student grade, enter '11' to quit this program:")) if grade >= 11: break student_list.append(grade) except ValueError: print("Please input integer between 1-10") ```
flows answered your question appropriately. Because I think you would like to ask for pairs of name and grade, I modified your program a little. ``` def student_data(): student_list = [] while True: # Ask for the name of the student student_name = input("Please enter the student name, press 'q' to quit this program: ") # When "q" is entered exit the loop. if student_name.lower() == 'q': break # Keep asking about the student's grade until a valid answer has been entered. while True: try: student_grade = int(input("Please enter a grade between 1-10: ")) # If an error occurs during the conversion, the following lines # are not executed. A direct jump is made to the except block. if 0 < student_grade <= 10: # If the value is within the expected limits add a tuple of # name and grade to the list and exit the nested loop. student_list.append((student_name, student_grade)) break except ValueError: # Ignore errors when converting the string to integer. pass # Return all pairs of name and grade as the result of the function. return student_list for name, grade in student_data(): print('You entered: ', name, grade) ```
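The validation step both answers share can be factored into a small pure function that is easy to test (the helper name and the None-return convention are mine):

```python
def parse_grade(text, lo=1, hi=10):
    """Return the grade as an int if it is a valid number in [lo, hi], else None."""
    try:
        value = int(text)
    except ValueError:
        return None
    return value if lo <= value <= hi else None

assert parse_grade("5") == 5
assert parse_grade("abc") is None   # non-numeric input is rejected
assert parse_grade("11") is None    # out-of-range input is rejected
```

The input loop then reduces to calling `parse_grade(input(...))` until it returns something other than None.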
13,392
67,655,396
I'm trying to migrate my custom user model and I run makemigrations command to make migrations for new models. But when I run migrate command it throws this error : > > conn = \_connect(dsn, connection\_factory=connection\_factory, > \*\*kwasync) django.db.utils.OperationalError > > > **Trace back:** ``` (venv_ruling) C:\Users\enosh\venv_ruling\ruling>python manage.py migrate Traceback (most recent call last): File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 219, in ensure_connection self.connect() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 200, in connect self.connection = self.get_new_connection(conn_params) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\postgresql\base.py", line 187, in get_new_connection connection = Database.connect(**conn_params) File "C:\Users\enosh\venv_ruling\lib\site-packages\psycopg2\__init__.py", line 127, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\enosh\venv_ruling\ruling\manage.py", line 22, in <module> main() File "C:\Users\enosh\venv_ruling\ruling\manage.py", line 18, in main execute_from_command_line(sys.argv) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\__init__.py", line 419, in execute_from_command_line utility.execute() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\__init__.py", line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\base.py", 
line 354, in run_from_argv self.execute(*args, **cmd_options) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\base.py", line 398, in execute output = self.handle(*args, **options) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\base.py", line 89, in wrapped res = handle_func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\commands\migrate.py", line 75, in handle self.check(databases=[database]) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\management\base.py", line 419, in check all_issues = checks.run_checks( File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\checks\registry.py", line 76, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\core\checks\model_checks.py", line 34, in check_all_models errors.extend(model.check(**kwargs)) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\models\base.py", line 1290, in check *cls._check_indexes(databases), File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\models\base.py", line 1680, in _check_indexes connection.features.supports_covering_indexes or File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\postgresql\features.py", line 93, in is_postgresql_11 return self.connection.pg_version >= 110000 File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\postgresql\base.py", line 329, in pg_version with self.temporary_connection(): File "C:\Users\enosh\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 117, in __enter__ return next(self.gen) 
File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 603, in temporary_connection with self.cursor() as cursor: File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 259, in cursor return self._cursor() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 235, in _cursor self.ensure_connection() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 219, in ensure_connection self.connect() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 219, in ensure_connection self.connect() File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\base\base.py", line 200, in connect self.connection = self.get_new_connection(conn_params) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\utils\asyncio.py", line 26, in inner return func(*args, **kwargs) File "C:\Users\enosh\venv_ruling\lib\site-packages\django\db\backends\postgresql\base.py", line 187, in get_new_connection connection = Database.connect(**conn_params) File "C:\Users\enosh\venv_ruling\lib\site-packages\psycopg2\__init__.py", line 127, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) django.db.utils.OperationalError ``` **models.py** ``` from django.contrib.auth.models import AbstractUser class CustomUser(AbstractUser): """extend usermodel""" class Meta: 
verbose_name_plural = 'CustomUser' ``` **settings.py** ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'rulings', 'USER': 'xxxxxxx', 'PASSWORD': 'xxxxxxx', 'HOST': '', 'PORT': '', } } AUTH_USER_MODEL = 'accounts.CustomUser' ``` The postgresql's database is empty.(ver.12.6) I just mentioned user model and settings in this question but still if more code is required then tell me I'll update my question with that information. Thank you
2021/05/23
[ "https://Stackoverflow.com/questions/67655396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14472949/" ]
> > I'm trying to understand Linux OS library dependencies to effectively run python 3.9 and imported pip packages to work. > > > Your questions may have pretty broad answers and depend on a bunch of input factors you haven't mentioned. > > Is there a requirement for GCC to be installed for pip modules with c extention modules to run? > > > It depends how the package is built and shipped. If it is available only as a *source distribution* (`sdist`), then yes. Obviously a compiler is needed to take the `.c` files and produce a loadable binary extension (ELF or DLL). Some packages ship *binary distributions*, where the publisher does the compilation for you. Of course this is more of a burden on the publisher, as they must support many possible target machines. > > What system libraries does Python's interpreter depends on? > > > It depends on a number of things, including which interpreter (there are multiple!) and how it was built and packaged. Even constraining the discussion to CPython (the canonical interpreter), this may vary widely. The simplest thing to do is whatever your Linux distro has decided for you; just `apt install python3` or whatever, and don't think too hard about it. Most distros ship dynamically-linked packages; these will depend on a small number of "common" libraries (e.g. `libc`, `libz`, etc). Some distros will statically-link the Python *library* into the interpreter -- IOW the `python3` executable will *not* depend on `libpython3.so`. Other distros will dynamically link against libpython. What dependencies will external modules (e.g. from PyPI) have? Well that *completely* depends on the package in question! Hopefully this helps you understand the limitations of your question. If you need more specific answers, you'll need to either do your own research, or provide a more specific question.
Python depends on compilers and a lot of other tools if you're going to compile the source (from the repository). This is from the offical repository, telling you what you need to compile it from source, [check it out](https://devguide.python.org/setup/#install-dependencies). > > **1.4. Install dependencies** > This section explains how to install additional extensions (e.g. zlib) on Linux and macOs/OS X. On Windows, extensions are already included and built automatically. > > > **1.4.1. Linux** > > > > > > > For UNIX based systems, we try to use system libraries whenever available. This means optional components will only build if the relevant system headers are available. The best way to obtain the appropriate headers will vary by distribution, but the appropriate commands for some popular distributions are below. > > > > > > > > > However, if you just want to **run python programs**, all you need is the python binary (and the libraries your script wants to use). The binary is usually at `/usr/bin/python3` or `/usr/bin/python3.9` [Python GitHub Repository](https://github.com/python/cpython) For individual packages, it depends on the package. Further reading: * [What is PIP?](https://realpython.com/what-is-pip/) * [Official: Managing application dependencies](https://packaging.python.org/tutorials/managing-dependencies/)
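To see which interpreter binary is actually running and how it was built, the standard library can report it directly (the exact paths printed will differ per machine):

```python
import sys
import sysconfig

print(sys.executable)            # path of the running binary, e.g. /usr/bin/python3
print(sys.version_info[:3])      # interpreter version
print(sysconfig.get_platform())  # build platform tag
```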
13,393
51,346,677
``` ERROR: build step 1 "gcr.io/gae-runtimes/nodejs8_app_builder:nodejs8_20180618_RC02" failed: exit status 1 ERROR Finished Step #1 - "builder" Step #1 - "builder": Permission denied for "be8392bdf4a2c92301391a124a5b72078453db3c15fcfc71f923e3c63d1bd8ea" from request "/v2/PROJECT_ID/app-engine-build-cache/node-cache/manifests/be8392bdf4a2c92301391a124a5b72078453db3c15fcfc71f923e3c63d1bd8ea". : None Step #1 - "builder": containerregistry.client.v2_2.docker_http_.V2DiagnosticException: response: {'status': '403', 'content-length': '291', 'x-xss-protection': '1; mode=block', 'transfer-encoding': 'chunked', 'server': 'Docker Registry', '-content-encoding': 'gzip', 'docker-distribution-api-version': 'registry/2.0', 'cache-control': 'private', 'date': 'Sun, 15 Jul 2018 08:26:14 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json'} Step #1 - "builder": File "/ftl-v0.4.0.par/containerregistry/client/v2_2/docker_http_.py", line 364, in Request Step #1 - "builder": File "/ftl-v0.4.0.par/containerregistry/client/v2_2/docker_image_.py", line 250, in _content Step #1 - "builder": File "/ftl-v0.4.0.par/containerregistry/client/v2_2/docker_image_.py", line 293, in manifest Step #1 - "builder": File "/ftl-v0.4.0.par/containerregistry/client/v2_2/docker_image_.py", line 279, in exists Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/common/cache.py", line 166, in getEntryFromCreds Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/common/cache.py", line 143, in _getLocalEntry Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/common/cache.py", line 128, in _getEntry Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/common/cache.py", line 110, in Get Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/node/layer_builder.py", line 55, in BuildLayer Step #1 - "builder": File "/ftl-v0.4.0.par/__main__/ftl/node/builder.py", line 38, in Build Step #1 - "builder": File "/ftl-v0.4.0.par/__main__.py", line 52, in main Step #1 - "builder": File 
"/ftl-v0.4.0.par/__main__.py", line 61, in Step #1 - "builder": exec code in run_globals Step #1 - "builder": File "/usr/lib/python2.7/runpy.py", line 72, in _run_code Step #1 - "builder": "__main__", fname, loader, pkg_name) Step #1 - "builder": File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main Step #1 - "builder": Traceback (most recent call last): Step #1 - "builder": INFO full build took 0 seconds Step #1 - "builder": INFO build process for FTL image took 0 seconds Step #1 - "builder": INFO checking_cached_packages_json_layer took 0 seconds Step #1 - "builder": DEBUG Checking cache for cache_key be8392bdf4a2c92301391a124a5b72078453db3c15fcfc71f923e3c63d1bd8ea Step #1 - "builder": INFO starting: checking_cached_packages_json_layer Step #1 - "builder": INFO starting: build process for FTL image Step #1 - "builder": INFO builder initialization took 0 seconds Step #1 - "builder": INFO Loading Docker credentials for repository 'asia.gcr.io/PROJECT_ID/app-engine/default/20180715t135547:6382e88f-f9db-4087-9eca-2a1aee5881d6' Step #1 - "builder": INFO Loading Docker credentials for repository 'gcr.io/gae-runtimes/nodejs8:nodejs8_20180618_RC02' Step #1 - "builder": INFO starting: builder initialization Step #1 - "builder": INFO starting: full build Step #1 - "builder": INFO FTL arg passed: verbosity NOTSET Step #1 - "builder": INFO FTL arg passed: destination_path /srv Step #1 - "builder": INFO FTL arg passed: entrypoint None Step #1 - "builder": INFO FTL arg passed: directory /workspace Step #1 - "builder": INFO FTL arg passed: output_path None Step #1 - "builder": INFO FTL arg passed: base gcr.io/gae-runtimes/nodejs8:nodejs8_20180618_RC02 Step #1 - "builder": INFO FTL arg passed: upload True Step #1 - "builder": INFO FTL arg passed: cache True Step #1 - "builder": INFO FTL arg passed: global_cache False Step #1 - "builder": INFO FTL arg passed: name asia.gcr.io/PROJECT_ID/app-engine/default/20180715t135547:6382e88f-f9db-4087-9eca-2a1aee5881d6 Step #1 
- "builder": INFO FTL arg passed: builder_output_path /builder/outputs Step #1 - "builder": INFO FTL arg passed: tar_base_image_path None Step #1 - "builder": INFO FTL arg passed: cache_repository asia.gcr.io/PROJECT_ID/app-engine-build-cache Step #1 - "builder": INFO FTL arg passed: exposed_ports None Step #1 - "builder": INFO Beginning FTL build for node Step #1 - "builder": INFO FTL version node-v0.4.0 Step #1 - "builder": Status: Downloaded newer image for gcr.io/gae-runtimes/nodejs8_app_builder:nodejs8_20180618_RC02 Step #1 - "builder": Digest: sha256:f937017daa12ccde31d70836e640ef0eaf327436695d05da38e4290c2eb2eb70 Step #1 - "builder": nodejs8_20180618_RC02: Pulling from gae-runtimes/nodejs8_app_builder Step #1 - "builder": Pulling image: gcr.io/gae-runtimes/nodejs8_app_builder:nodejs8_20180618_RC02 Starting Step #1 - "builder" Finished Step #0 - "fetcher" Step #0 - "fetcher": 2018/07/15 08:26:12 ****************************************************** Step #0 - "fetcher": 2018/07/15 08:26:12 Total time: 2.04 s Step #0 - "fetcher": 2018/07/15 08:26:12 Time for manifest: 888.14 ms Step #0 - "fetcher": 2018/07/15 08:26:12 MiB/s throughput: 0.28 MiB/s Step #0 - "fetcher": 2018/07/15 08:26:12 MiB downloaded: 0.33 MiB Step #0 - "fetcher": 2018/07/15 08:26:12 GCS timeouts: 0 Step #0 - "fetcher": 2018/07/15 08:26:12 Total retries: 0 Step #0 - "fetcher": 2018/07/15 08:26:12 Total files: 16 Step #0 - "fetcher": 2018/07/15 08:26:12 Actual workers: 16 Step #0 - "fetcher": 2018/07/15 08:26:12 Requested workers: 200 Step #0 - "fetcher": 2018/07/15 08:26:12 Completed: 2018-07-15T08:26:12Z Step #0 - "fetcher": 2018/07/15 08:26:12 Started: 2018-07-15T08:26:10Z Step #0 - "fetcher": 2018/07/15 08:26:12 Status: SUCCESS Step #0 - "fetcher": 2018/07/15 08:26:12 ****************************************************** Step #0 - "fetcher": 2018/07/15 08:26:12 Fetched gs://staging.PROJECT_ID.appspot.com/5e9210875e15a2eb7d50c666136266837638eb03 (322831B in 1.145458029s, 0.27MiB/s) Step #0 
- "fetcher": 2018/07/15 08:26:12 Fetched gs://staging.PROJECT_ID.appspot.com/058b7e502bf6750c6f01453ef947d5dd7e854e07 (1279B in 861.521735ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:12 Fetched gs://staging.PROJECT_ID.appspot.com/5d87534d139519ba7cec4d48d2c3ba27b99e80b0 (319B in 846.233597ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:12 Fetched gs://staging.PROJECT_ID.appspot.com/668461d157d199b783be12fd2f2ba9c6d154130c (1271B in 838.8342ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/8dd0bca621558e6ce972f38d8aa9765e49436172 (327B in 589.603096ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/c6b4f62664bbe2779fdff10261bc384708e73e7d (36B in 588.36617ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/c271c61f49150008a5eeae99a9bd09570bd5d549 (2558B in 588.802856ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/f293a6701e6de41513ba08b50bbbacb58ffbc19a (119B in 586.215815ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/88ef0fc94347f5e38160bdacef44fedc51b85877 (2305B in 584.714922ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/73c3a01c14c7b696454020d6dc917ea32a50872c (53B in 584.291891ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/757cebb200f220928061cdabddd6126d67a46984 (1683B in 583.3675ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/1bbf5d56ff53e82b6444cab2a87d43b14d23a8b3 (217B in 584.334096ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/42a994f47562d763d59a4b64822554d24f0ffce7 (28B in 577.748008ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched 
gs://staging.PROJECT_ID.appspot.com/f9185bdfc5b57bef24b52a9bbeb09cdf981f279f (2495B in 579.458821ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/0c676a042d5ad4e493046f23965ed648293225be (3162B in 577.802151ms, 0.01MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/5bf6fe209f6ec1a459ae628d8f94e6aedbf3abae (2675B in 571.413697ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:11 Processing 16 files. Step #0 - "fetcher": 2018/07/15 08:26:11 Fetched gs://staging.PROJECT_ID.appspot.com/ae/6382e88f-f9db-4087-9eca-2a1aee5881d6/manifest.json (3576B in 888.138003ms, 0.00MiB/s) Step #0 - "fetcher": 2018/07/15 08:26:10 Fetching manifest gs://staging.PROJECT_ID.appspot.com/ae/6382e88f-f9db-4087-9eca-2a1aee5881d6/manifest.json. Step #0 - "fetcher": Already have image (with digest): gcr.io/cloud-builders/gcs-fetcher Starting Step #0 - "fetcher" BUILD FETCHSOURCE starting build "88e0cc7f-62b7-496a-aa31-230e793b5ea1" ```
2018/07/15
[ "https://Stackoverflow.com/questions/51346677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8025518/" ]
> ### Troubleshooting
>
> If you find 403 (access denied) errors in your build logs, try the following steps:
>
> Disable the Cloud Build API and re-enable it. Doing so should give your service account access to your project again.

Fixed an issue for me.
It shows in the first few lines of the log that the image couldn't be pulled from the registry because unauthorized user credentials were used to access it. Did you check the credentials? If you have a token-based login, check that the token is not expired.
13,394
45,417,077
I've got a module called `core`, which contains a number of python files. If I do: ``` from core.curve import Curve ``` Does `__init__.py` get called? Can I move import statements that apply to all core files into `__init__.py` to save repeating myself? What **should** go into `__init__.py`?
2017/07/31
[ "https://Stackoverflow.com/questions/45417077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/222151/" ]
> Is \_\_init\_\_.py run every time I import anything from that module?

According to [docs](https://docs.python.org/2/tutorial/modules.html#importing-from-a-package) in most cases **yes**, it is.
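To see it happen, here is a throwaway sketch that builds a dummy `core` package in a temp directory at runtime (the file contents are illustrative, not your real project):

```python
import os
import sys
import tempfile

# Build a disposable package on disk: core/__init__.py and core/curve.py.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "core")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("print('core/__init__.py ran')\nINIT_RAN = True\n")
with open(os.path.join(pkg, "curve.py"), "w") as f:
    f.write("class Curve(object):\n    pass\n")

sys.path.insert(0, tmp)
from core.curve import Curve  # runs core/__init__.py first, then core/curve.py

import core
print(core.INIT_RAN)  # names defined in __init__.py live on the package object
```

Note that `__init__.py` runs only on the *first* import in a process; subsequent imports reuse the cached module from `sys.modules`.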
You can re-export the classes and functions you want to use from your package directory

```
- core
    - __init__.py
```

In this `__init__.py`, add your class and function imports like

```
from .curve import Curve
from .some import SomethingElse
```

and wherever you want to use your class, just reference it like

```
from core import Curve
```
13,397
2,046,912
I am seeing some weird behavior while parsing shared paths (shared paths on a server, e.g. `\\storage\Builds`).

I am reading a text file which contains directory paths which I want to process further. In order to do so I do as below:

```
def toWin(path):
    return path.replace("\\", "\\\\")

for line in open(fileName):
    l = toWin(line).strip()
    if os.path.isdir(l):
        print l  # os.listdir(l) etc..
```

This works for local directories but fails for paths specified on a shared system.

```
e.g.
E:\Test                 -- works
\\StorageMachine\Test   -- fails [internally converts to \\\\StorageMachine\\Test]
\\StorageMachine\Test\  -- fails [internally converts to \\\\StorageMachine\\Test\\]
```

But if I open a python shell, import the script and invoke the function with the same path string, then it works! It seems that parsing Windows shared paths behaves differently in the two cases. Any ideas/suggestions?
2010/01/12
[ "https://Stackoverflow.com/questions/2046912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62056/" ]
This may not be your actual issue, but your UNC paths are actually not correct - they should start with a double backslash, but internally only use a single backslash as a divider. I'm not sure why the same thing would be working within the shell. **Update:** I suspect that what's happening is that in the shell, your string is being interpreted by the shell (with replacements happening) while in your code it's being treated as seen for the first time - basically, specifying the string in the shell is different from reading it from an input. To get the same effect from the shell, you'd need to specify it as a raw string with `r"string"`
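To make the raw-string point concrete, a quick sketch you can run anywhere (no network share needed):

```python
# The same UNC path written two ways - both produce identical data.
escaped = "\\\\StorageMachine\\Test"  # escaped literal, as typed in source code
raw = r"\\StorageMachine\Test"        # raw-string literal
print(escaped == raw)  # True

# A line read from a text file already contains single backslashes,
# so no further replacement is needed before calling os.path.isdir().
print(raw)              # \\StorageMachine\Test
print(raw.split("\\"))  # ['', '', 'StorageMachine', 'Test']
```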
I had to convert the input to forward slashes (Unix-style) for the `os.*` modules to parse it correctly. I changed the code as below:

```
def toUnix(path):
    return path.replace("\\", "/")
```

Now all modules parse correctly.
13,400
31,620,161
I am trying to run a python script using NRPE to monitor RabbitMQ. Inside the script is a command, `sudo rabbitmqctl list_queues`, which gives me a message count on each queue. However this results in Nagios giving this message:

```
CRITICAL - Command '['sudo', 'rabbitmqctl', 'list_queues']' returned non-zero exit status 1
```

I thought this might be a permissions issue, so I proceeded in the following manner.

/etc/group:

```
ec2-user:x:500:
rabbitmq:x:498:nrpe,nagios,ec2-user
nagios:x:497:
nrpe:x:496:
rpc:x:32:
```

/etc/sudoers:

```
%rabbitmq ALL=NOPASSWD: /usr/sbin/rabbitmqctl
```

nagios configuration:

```
command[check_rabbitmq_queuecount_prod]=/usr/bin/python27 /etc/nagios/check_rabbitmq_prod -a queues_count -C 3000 -W 1500
```

check_rabbitmq_prod:

```python
#!/usr/bin/env python
from optparse import OptionParser
import shlex
import subprocess
import sys


class RabbitCmdWrapper(object):
    """So basically this just runs rabbitmqctl commands and returns
    parsed output. Typically this means you need root privs for this
    to work. Made this it's own class so it could be used in other
    monitoring tools if desired."""

    @classmethod
    def list_queues(cls):
        args = shlex.split('sudo rabbitmqctl list_queues')
        cmd_result = subprocess.check_output(args).strip()
        results = cls._parse_list_results(cmd_result)
        return results

    @classmethod
    def _parse_list_results(cls, result_string):
        results = result_string.strip().split('\n')
        # remove text fluff
        results.remove(results[-1])
        results.remove(results[0])
        return_data = []
        for row in results:
            return_data.append(row.split('\t'))
        return return_data


def check_queues_count(critical=1000, warning=1000):
    """
    A blanket check to make sure all queues are within count parameters.
    TODO: Possibly break this out so test can be done on individual queues.
    """
    try:
        critical_q = []
        warning_q = []
        ok_q = []
        results = RabbitCmdWrapper.list_queues()

        for queue in results:
            if queue[0] == 'SFS_Production_Queue':
                count = int(queue[1])
                if count >= critical:
                    critical_q.append("%s: %s" % (queue[0], count))
                elif count >= warning:
                    warning_q.append("%s: %s" % (queue[0], count))
                else:
                    ok_q.append("%s: %s" % (queue[0], count))
        if critical_q:
            print "CRITICAL - %s" % ", ".join(critical_q)
            sys.exit(2)
        elif warning_q:
            print "WARNING - %s" % ", ".join(warning_q)
            sys.exit(1)
        else:
            print "OK - %s" % ", ".join(ok_q)
            sys.exit(0)
    except Exception, err:
        print "CRITICAL - %s" % err
        sys.exit(2)


USAGE = """Usage: ./check_rabbitmq -a [action] -C [critical] -W [warning]
Actions:
  - queues_count: checks the count in each of the queues in rabbitmq's list_queues"""

if __name__ == "__main__":
    parser = OptionParser(USAGE)
    parser.add_option("-a", "--action", dest="action", help="Action to Check")
    parser.add_option("-C", "--critical", dest="critical", type="int", help="Critical Threshold")
    parser.add_option("-W", "--warning", dest="warning", type="int", help="Warning Threshold")
    (options, args) = parser.parse_args()

    if options.action == "queues_count":
        check_queues_count(options.critical, options.warning)
    else:
        print "Invalid action: %s" % options.action
        print USAGE
```

At this point I'm not sure what is preventing the script from running. It runs fine via the command line. Any help is appreciated.
2015/07/24
[ "https://Stackoverflow.com/questions/31620161", "https://Stackoverflow.com", "https://Stackoverflow.com/users/811220/" ]
The "non-zero exit code" error is often associated with `requiretty` being applied to all users by default in your sudoers file.

Disabling `requiretty` in your sudoers file for the user that runs the check is safe, and may potentially fix the issue. E.g. (assuming nagios/nrpe are the users)

@ /etc/sudoers

```
Defaults:nagios !requiretty
Defaults:nrpe !requiretty
```
I guess what Mr @EE1213 mentions is right. If you have the permission to see /var/log/secure, the log probably contains error messages regarding sudoers. Like: ``` "sorry, you must have a tty to run sudo" ```
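If you can't read /var/log/secure, you can also surface the error in the Nagios message itself by capturing stderr in the check script's `subprocess` call. A minimal Python 3 sketch, where the failing command is simulated with a hypothetical inline script rather than the real `rabbitmqctl`:

```python
import subprocess
import sys

# Simulate a command that writes the sudo error to stderr and exits 1.
cmd = [sys.executable, "-c",
       "import sys; sys.stderr.write('sorry, you must have a tty to run sudo\\n'); sys.exit(1)"]
try:
    subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    message = "OK"
except subprocess.CalledProcessError as err:
    # err.output now carries stderr, so the alert explains *why* it failed
    # instead of just "returned non-zero exit status 1".
    message = "CRITICAL - %s: %s" % (err, err.output.decode().strip())
print(message)
```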
13,402
24,494,437
I am using the Facebook Ads API and am wondering about Ad Image creation. This page, <https://developers.facebook.com/docs/reference/ads-api/adimage/#create>, makes it look pretty simple, except I'm not sure what's going on with the 'test.jpg=@test.jpg'. What is the @ for and how does it work? I currently make the post request as described in the API docs (the above link) with a parameter 'pic.jpg' with value '@<https://s3.amazonaws.com/path/to/my/image.jpg>' but the response is an empty array instead of the JSON object with the 'images' key as shown in the API docs (the above link). Can someone explain the idea/process/syntax of specifying files in HTTP requests, or perhaps more specific to Facebook Ads API? I'm not sure what is at play here. EDIT: (I should mention that I'm using python requests library.)
2014/06/30
[ "https://Stackoverflow.com/questions/24494437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3391108/" ]
First, if you want to upload an image via the Facebook Ads API, you can't give a URL; you have to provide the real path to the file. In many cases you can use `image_url` directly and give the URL of the image, but in some cases you will need to upload the image to Facebook and use the HASH it returns.

PHP/cURL implementation:

```
$myPhoto = realpath('c.jpg');
$ch = curl_init();
$data = array('name' => "@{$myPhoto}", 'access_token' => "{$access_token}");
$urlStr = "https://graph.facebook.com/act_{$ad_account_id}/adimages";
curl_setopt($ch, CURLOPT_URL, $urlStr);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
```

To learn how to POST a file with Python requests, refer to <http://docs.python-requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file>.

Also, you can use this library, which does all the work for you: <https://github.com/jgorset/facepy>
The @ feature is for uploading a file. It is supported by curl and PHP, for example. So if you pass an https link instead, it is not strange that it doesn't work.
13,403
10,925,410
In a single character string when I try to fetch second character python fails quite verbosely which is correct and expected behavior. ``` print 'a'[1] ``` However when I try to fetch an invalid sub sequence range, it fails silently. ``` print 'a'[1:] ``` What is the reason for the difference in behavior?
2012/06/07
[ "https://Stackoverflow.com/questions/10925410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1137587/" ]
Actually, `slicing` never generates any `error` in the case of `strings` and `lists`:

**for eg.**

```
>>> s = 'abc'
>>> s[-1000:1000]
'abc'
```

works fine.

On the other hand, accessing `indexes` that are not defined will always raise an `IndexError` in both `strings` and `lists`:

```
>>> s[4]

Traceback (most recent call last):
  File "<pyshell#6>", line 1, in <module>
    s[4]
IndexError: string index out of range
```
The semantics differ:

```
print 'a'[1]
```

tries to index into a non-existent (i.e. invalid) index/location, which *is* an error.

```
print 'a'[1:]
```

simply returns, based on the specified range, an empty string (`''`), which is *not* an error. I.e.,

```
In [175]: 'a'[1]
---------------------------------------------------------------------------
----> 1 'a'[1]

IndexError: string index out of range

In [176]: 'a'[1:]
Out[176]: ''
```
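The underlying rule is that slice bounds are clamped to the sequence length, while plain indexing is strict. A runnable demonstration:

```python
s = "a"
# Out-of-range slice bounds are silently trimmed - never an error.
print(repr(s[1:]))            # ''
print(repr(s[5:100]))         # ''
print(repr("abc"[-999:999]))  # 'abc'

# Plain indexing, by contrast, raises.
try:
    s[1]
except IndexError as exc:
    print(exc)                # string index out of range
```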
13,404
55,441,517
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a python image, I would like to set up a mongodb instance. I decided on a tag, and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile).

I have the docker file inside an otherwise empty folder (the content is the same as in the link). I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:

```
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
<none>              <none>              838852e4e564        32 minutes ago      409MB
ubuntu              xenial              9361ce633ff1        2 weeks ago         118MB
```

I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`. What am I doing wrong?
2019/03/31
[ "https://Stackoverflow.com/questions/55441517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4341439/" ]
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
You have to define the variables as `ObservableField`s, as below:

```
public final ObservableField<String> name = new ObservableField<>();
public final ObservableField<String> family = new ObservableField<>();
```
13,413
72,671,082
I am very new to VBA. Suppose I have a 6 by 2 array with the values shown on the right, and I have an empty 2 by 3 array (excluding the header). My goal is to make the array on the left look as shown.

```
(Header)  1 2 3        1 a
          a c e        1 b
          b d f        2 c
                       2 d
                       3 e
                       3 f
```

Since the array on the right is already sorted, I noticed that it can be faster if I just let the 1st column of the 2 by 3 array take the first 2 values (a and b), the 2nd column take the following 2 values (c and d), and so on. This way, it can avoid using a nested for loop to populate the left array. However, I was unable to find a way to populate a specific column of an array.

Another way to describe my question is: is there a way in VBA to replicate this code from Python, which directly modifies a specific column of an array? Thanks!

```
array[:, 0] = [a, b]
```
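For concreteness, the column-major fill I am describing can be sketched in plain Python (no NumPy; toy data, not part of my actual attempt):

```python
values = ["a", "b", "c", "d", "e", "f"]  # second column of the sorted source array
rows, cols = 2, 3

# Column c of the destination takes the next `rows` values in order.
dest = [[values[c * rows + r] for c in range(cols)] for r in range(rows)]
print(dest)  # [['a', 'c', 'e'], ['b', 'd', 'f']]
```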
2022/06/18
[ "https://Stackoverflow.com/questions/72671082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19365488/" ]
Populate Array With Values From Another Array
---------------------------------------------

* It is always a nested loop, but in Python, it is obviously 'under the hood' i.e. not seen by the end-user. They have integrated this possibility (written some code) into the language.
* The following is a simplified version of what you could do in VBA since there is just too much hard-coded data with 'convenient' numbers in your question.
* The line of your interest is:

  ```
  PopulateColumn dData, c, sData, SourceColumn
  ```

  to populate column `c` in the destination array (`dData`) using one line of code. It's just shorter, not faster.
* Sure, it has no loop, but if you look at the called procedure, `PopulateColumn`, you'll see that there actually is one (`For dr = 1 To drCount`).
* You can even go further with simplifying the life of the end-user by using classes but that's 'above my paygrade', and yours at the moment since you're saying you're a noob.
* Copy the code into a standard module, e.g. `Module1`, and run the `PopulateColumnTEST` procedure.
* Note that the results are written to the Visual Basic Immediate window (`Ctrl`+`G`).

**The Code**

```vb
Option Explicit

Sub PopulateColumnTEST()

    Const SourceColumn As Long = 2

    ' Populate the source array.
    Dim sData As Variant: ReDim sData(1 To 6, 1 To 2)
    Dim r As Long
    For r = 1 To 6
        sData(r, 1) = Int((r + 1) / 2) ' kind of irrelevant
        sData(r, 2) = Chr(96 + r)
    Next r

    ' Print source values.
    DebugPrintCharData sData, "Source:" & vbLf & "R 1 2"

    ' Populate the destination array.
    Dim dData As Variant: ReDim dData(1 To 2, 1 To 3)
    Dim c As Long
    ' Loop through the columns of the destination array.
    For c = 1 To 3
        ' Populate the current column of the destination array
        ' with the data from the source column of the source array
        ' by calling the 'PopulateColumn' procedure.
        PopulateColumn dData, c, sData, SourceColumn
    Next c

    ' Print destination values.
    DebugPrintCharData dData, "Destination:" & vbLf & "R 1 2 3"

End Sub

Sub PopulateColumn( _
        ByRef dData As Variant, _
        ByVal dDataCol As Long, _
        ByVal sData As Variant, _
        ByVal sDataCol As Long)
    Dim drCount As Long: drCount = UBound(dData, 1)
    Dim dr As Long
    For dr = 1 To drCount
        dData(dr, dDataCol) = sData(drCount * (dDataCol - 1) + dr, sDataCol)
    Next dr
End Sub

Sub DebugPrintCharData( _
        ByVal Data As Variant, _
        Optional Title As String = "", _
        Optional ByVal ColumnDelimiter As String = " ")
    If Len(Title) > 0 Then Debug.Print Title
    Dim r As Long
    Dim c As Long
    Dim rString As String
    For r = LBound(Data, 1) To UBound(Data, 1)
        For c = LBound(Data, 2) To UBound(Data, 2)
            rString = rString & ColumnDelimiter & Data(r, c)
        Next c
        rString = r & rString
        Debug.Print rString
        rString = vbNullString
    Next r
End Sub
```

**The Results**

```
Source:
R 1 2
1 1 a
2 1 b
3 2 c
4 2 d
5 3 e
6 3 f
Destination:
R 1 2 3
1 a c e
2 b d f
```
**Alternative avoiding loops**

For the *sake of the art* and in order to *approximate* your requirement to find a way replicating Python's code

```
array[:, 0] = [a, b]
```

in VBA without nested loops, you could try the following function combining several column value inputs (via a ParamArray), returning a combined 2-dim array.

*Note that the function*

* *will return a* **1-based** *array by using `Application.Index` and*
* *will be slower than any combination of array loops.*

```
Function JoinColumnValues(ParamArray cols()) As Variant
'Purp: change ParamArray containing "flat" 1-dim column values to 2-dim array !!
'Note: Assumes 1-dim arrays (!) as column value inputs into ParamArray
'      returns a 1-based 2-dim array
    Dim tmp As Variant
    tmp = cols
    With Application
        tmp = .Transpose(.Index(tmp, 0, 0))
    End With
    JoinColumnValues = tmp
End Function
```

[![Display in VB Editor's local window](https://i.stack.imgur.com/U6OnB.png)](https://i.stack.imgur.com/U6OnB.png)

**Example call**

Assumes "flat" 1-dim array inputs with identical element boundaries:

```
Dim arr
arr = JoinColumnValues(Array("a", "b"), Array("c", "d"), Array("e", "f"))
```
13,423
74,478,463
I am developing a deployment via DBX to Azure Databricks. In this regard I need a data job written in SQL to run every day. The job is located in the file `data.sql`. I know how to do it with a python file. Here I would do the following:

```
build:
  python: "pip"

environments:
  default:
    workflows:
      - name: "workflow-name"
        #schedule:
        quartz_cron_expression: "0 0 9 * * ?" # every day at 9.00
        timezone_id: "Europe"
        format: MULTI_TASK
        #
        job_clusters:
          - job_cluster_key: "basic-job-cluster"
            <<: *base-job-cluster

        tasks:
          - task_key: "task-name"
            job_cluster_key: "basic-job-cluster"
            spark_python_task:
              python_file: "file://filename.py"
```

But how can I change it so I can run a SQL job instead? I imagine it is the last two lines of code (`spark_python_task:` and `python_file: "file://filename.py"`) which need to be changed.
2022/11/17
[ "https://Stackoverflow.com/questions/74478463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13219123/" ]
There are various ways to do that.

(1) One of the simplest is to add a SQL query in the Databricks SQL lens, and then reference this query via `sql_task` as described [here](https://dbx.readthedocs.io/en/latest/reference/deployment/?h=sql_task#configuring-complex-deployments).

(2) If you want to have a Python project that re-uses SQL statements from a static file, you can [add this file](https://dbx.readthedocs.io/en/latest/guides/python/packaging_files/) to your Python package and then call it from your package, e.g.:

```py
sql_statement = ...  # code to read from the file
spark.sql(sql_statement)
```

(3) A third option is to use the DBT framework with Databricks. In this case you probably would like to use `dbt_task` as described [here](https://dbx.readthedocs.io/en/latest/reference/deployment/?h=dbt_task#configuring-complex-deployments).
I found a simple workaround (although might not be the prettiest) to simply change the `data.sql` to a python file and run the queries using spark. This way I could use the same `spark_python_task`.
13,424
59,989,572
I'm working with a table called `international_education` from the `world_bank_intl_education` dataset of `bigquery-public-data`.

```
FIELDS
country_name
country_code
indicator_name
indicator_code
value
year
```

My aim is to plot a line graph of the countries that have had the biggest and smallest change in Population growth (annual %) (one of the `indicator_name` values).

I have done this below using two partitions finding the first and last value of the year for each country, but I'm rough on my SQL and am wondering if there is a way to optimize this query.

```
query = """
        WITH differences AS
        (
            SELECT
                country_name,
                year,
                value,
                FIRST_VALUE(value) OVER (
                    PARTITION BY country_name
                    ORDER BY year
                    RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
                ) AS small_value,
                LAST_VALUE(value) OVER (
                    PARTITION BY country_name
                    ORDER BY year
                    RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
                ) AS large_value
            FROM `bigquery-public-data.world_bank_intl_education.international_education`
            WHERE indicator_name = 'Population growth (annual %)'
            ORDER BY year
        )
        SELECT
            country_name,
            year,
            (large_value - small_value) AS total_range,
            value
        FROM differences
        ORDER BY total_range
        """
```

Convert to a pandas dataframe:

```
df = wbed.query_to_pandas_safe(query)
df.head(10)
```

Resulting table:

```
           country_name  year  total_range      value
0  United Arab Emirates  1970   -13.195183  14.446942
1  United Arab Emirates  1971   -13.195183  16.881671
2  United Arab Emirates  1972   -13.195183  17.689814
3  United Arab Emirates  1973   -13.195183  17.695296
4  United Arab Emirates  1974   -13.195183  17.125615
5  United Arab Emirates  1975   -13.195183  16.211873
6  United Arab Emirates  1976   -13.195183  15.450884
7  United Arab Emirates  1977   -13.195183  14.530119
8  United Arab Emirates  1978   -13.195183  13.033461
9  United Arab Emirates  1979   -13.195183  11.071306
```

I would then plot this with python as follows.

```
all_countries = df.groupby('country_name', as_index=False).max().sort_values(by='total_range').country_name.values
countries = np.concatenate((all_countries[:3], all_countries[-4:]))

plt.figure(figsize=(16, 8))
sns.lineplot(x='year', y='value', data=df[df.country_name.isin(countries)], hue='country_name')
```
2020/01/30
[ "https://Stackoverflow.com/questions/59989572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4459665/" ]
You don't need the CTE and you don't need the window frame definitions. So this should be equivalent:

```
SELECT country_name, year, value,
       (FIRST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year DESC) -
        FIRST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year)
       ) AS total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)';
```

Note that `LAST_VALUE()` is finicky with window frame definitions, so I routinely just use `FIRST_VALUE()` with the order by reversed.

If you want just one row per country, then you need aggregation. BigQuery doesn't have "first" and "last" aggregation functions, but they are very easy to emulate with arrays:

```
SELECT country_name,
       ((ARRAY_AGG(value ORDER BY year DESC LIMIT 1))[ORDINAL(1)] -
        (ARRAY_AGG(value ORDER BY year LIMIT 1))[ORDINAL(1)]
       ) AS total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
GROUP BY country_name
ORDER BY total_range;
```
If I understand correctly what you are trying to calculate, I wrote a query that does everything in BigQuery without the need to do anything in pandas. This query returns all the rows for each country that ranks in the top 3 or bottom 3 by change in Population growth.

```
WITH differences AS (
  SELECT
    country_name,
    year,
    value,
    LAST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year)
      - FIRST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year) AS total_range,
  FROM `bigquery-public-data.world_bank_intl_education.international_education`
  WHERE indicator_name = 'Population growth (annual %)'
  ORDER BY year
),
differences_with_ranks AS (
  SELECT
    country_name,
    year,
    value,
    total_range,
    ROW_NUMBER() OVER (PARTITION BY country_name ORDER BY total_range) AS rank,
  FROM differences
),
top_bottom AS (
  SELECT country_name
  FROM (
    SELECT country_name
    FROM differences_with_ranks
    WHERE rank = 1
    ORDER BY total_range DESC
    LIMIT 3
  )
  UNION DISTINCT
  SELECT country_name
  FROM (
    SELECT country_name
    FROM differences_with_ranks
    WHERE rank = 1
    ORDER BY total_range ASC
    LIMIT 3
  )
)
SELECT *
FROM differences
WHERE country_name IN (SELECT country_name FROM top_bottom)
```

I don't really understand what you mean by "optimize"; this query runs very fast (1.5 seconds). If you need a solution with lower latency, BigQuery is not the right tool.
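As a sanity check, the last-minus-first logic both answers rely on can be mimicked on a toy sample in plain Python (made-up numbers, not the real World Bank data):

```python
from itertools import groupby

rows = [  # (country_name, year, value)
    ("United Arab Emirates", 1970, 14), ("United Arab Emirates", 1979, 1),
    ("Oman", 1970, 2), ("Oman", 1979, 3),
]
rows.sort(key=lambda r: (r[0], r[1]))

# total_range = last value minus first value, per country, ordered by year.
total_range = {}
for country, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    total_range[country] = grp[-1][2] - grp[0][2]
print(total_range)  # {'Oman': 1, 'United Arab Emirates': -13}
```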
13,425
29,833,789
I am learning python, and I get this error:

```
getattr(args, args.tool)(args)
AttributeError: 'Namespace' object has no attribute 'cat'
```

if I execute my script like this:

```
myscript.py -t cat
```

What I want is to print

```
Run cat here
```

Here is my full code:

```
#!/usr/bin/python
import sys, argparse

parser = argparse.ArgumentParser(str(sys.argv[0]))
parser.add_argument('-t', '--tool', help='Input tool name.', required=True,
                    choices=["dog", "cat", "fish"])
args = parser.parse_args()

# Call function requested by user
getattr(args, args.tool)(args)

def dog(args):
    print 'Run something dog here'

def cat(args):
    print 'Run cat here'

def fish(args):
    print 'Yes run fish here'

print "Bye !"
```

Thanks for your help :D
2015/04/23
[ "https://Stackoverflow.com/questions/29833789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4358977/" ]
EvenLisle's answer gives the correct idea, but you can easily generalize it by using `args.tool` as the key into `globals()`. Moreover, to simplify validation, you can use the `choices` argument of `add_argument` so that you know the possible values of `args.tool`. If someone provides an argument other than dog, cat, or fish for the -t command line option, your parser will automatically notify them of the usage error.

Thus, your code would become:

```
#!/usr/bin/python
import sys, argparse

parser = argparse.ArgumentParser(str(sys.argv[0]))
parser.add_argument('-t', '--tool', help='Input tool name.', required=True,
                    choices=["dog", "cat", "fish"])
args = parser.parse_args()

def dog(args):
    print 'Run something dog here'

def cat(args):
    print 'Run cat here'

def fish(args):
    print 'Yes run fish here'

if callable(globals().get(args.tool)):
    globals()[args.tool](args)
```
This: ``` def cat(args): print 'Run cat here' if "cat" in globals(): globals()["cat"]("arg") ``` will print "Run cat here". You should consider making a habit of having your function definitions at the top of your file. Otherwise, the above snippet would not have worked, as your function `cat` would not yet be in the dictionary returned by `globals()`.
13,428
41,531,571
I am working on a GUI in python 3.5 with PyQt5 for a small chat bot. The problem i have is that the pre-processing, post-processing and brain are taking too much time to give back the answer for the user provided input. The GUI is very simple and looks like this: <http://prntscr.com/dsxa39> it loads very fast without connecting it to other modules. I mention that using sleep before receiving answer from brain module will still make it unresponsive. `self.conversationBox.append("You: "+self.textbox.toPlainText()) self.textbox.setText("") time.sleep(20) self.conversationBox.append("Chatbot: " + "message from chatbot")` this is a small sample of code, the one that i need to fix. And this is the error I encounter: <http://prnt.sc/dsxcqu> I mention that I've searched for the solution already and everywhere I've found what I've already tried, to use sleep. But again, this won't work as it makes the program unresponsive too.
2017/01/08
[ "https://Stackoverflow.com/questions/41531571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5956553/" ]
Slow functions, such as `sleep`, will always block unless they are running asynchronously in another thread. If you want to avoid threads a workaround is to break up the slow function. In your case it might look like: ``` for _ in range(20): sleep(1) self.app.processEvents() ``` where `self.app` is a reference to your QApplication instance. This solution is a little hacky as it will simply result in 20 short hangs instead of one long hang. If you want to use this approach for your brain function then you'll need it to break it up in a similar manner. Beyond that you'll need to use a threaded approach.
``` import sys from PyQt5 import QtCore, QtGui from PyQt5.QtWidgets import QMainWindow, QGridLayout, QLabel, QApplication, QWidget, QTextBrowser, QTextEdit, \ QPushButton, QAction, QLineEdit, QMessageBox from PyQt5.QtGui import QPalette, QIcon, QColor, QFont from PyQt5.QtCore import pyqtSlot, Qt import threading import time textboxValue = "" FinalAnsw = "" class myThread (threading.Thread): print ("Start") def __init__(self): threading.Thread.__init__(self) def run(self): def getAnswer(unString): #do brain here time.sleep(10) return unString global textboxValue global FinalAnsw FinalAnsw = getAnswer(textboxValue) class App(QWidget): def __init__(self): super().__init__() self.title = 'ChatBot' self.left = 40 self.top = 40 self.width = 650 self.height = 600 self.initUI() def initUI(self): self.setWindowTitle(self.title) self.setGeometry(self.left, self.top, self.width, self.height) pal = QPalette(); pal.setColor(QPalette.Background, QColor(40, 40, 40)); self.setAutoFillBackground(True); self.setPalette(pal); font = QtGui.QFont() font.setFamily("FreeMono") font.setBold(True) font.setPixelSize(15) self.setStyleSheet("QTextEdit {color:#3d3838; font-size:12px; font-weight: bold}") historylabel = QLabel('View your conversation history here: ') historylabel.setStyleSheet('color: #82ecf9') historylabel.setFont(font) messagelabel = QLabel('Enter you message to the chat bot here:') messagelabel.setStyleSheet('color: #82ecf9') messagelabel.setFont(font) self.conversationBox = QTextBrowser(self) self.textbox = QTextEdit(self) self.button = QPushButton('Send message', self) self.button.setStyleSheet( "QPushButton { background-color:#82ecf9; color: #3d3838 }" "QPushButton:pressed { background-color: black }") grid = QGridLayout() grid.setSpacing(10) self.setLayout(grid) grid.addWidget(historylabel, 1, 0) grid.addWidget(self.conversationBox, 2, 0) grid.addWidget(messagelabel, 3, 0) grid.addWidget(self.textbox, 4, 0) grid.addWidget(self.button, 5, 0) # connect button to function 
on_click self.button.clicked.connect(self.on_click) self.show() def on_click(self): global textboxValue textboxValue = self.textbox.toPlainText() self.conversationBox.append("You: " + textboxValue) th = myThread() th.start() th.join() global FinalAnsw self.conversationBox.append("Rocket: " + FinalAnsw) self.textbox.setText("") if __name__ == '__main__': app = QApplication(sys.argv) ex = App() app.exec_() ``` So creating a simple thread solved the problem, the code above will still freeze because of the sleep function call, but if you replace that with a normal function that lasts long it won't freeze anymore. It was tested by the brain module of my project with their functions. For a simple example of building a thread use <https://www.tutorialspoint.com/python/python_multithreading.htm> and for the PyQt GUI I've used examples from this website to learn <http://zetcode.com/gui/pyqt5/>
13,429
39,501,277
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to python and want to calculate a rolling 12month beta for each stock, I found a post to calculate rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)) however when used in my code below takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes this is too slow. How can I improve the performance of my below code to match that of SQL? I understand Pandas/python has that capability. My current method loops over each row which I know slows performance but I am unaware of any aggregate way to perform a rolling window beta calculation on a dataframe. Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only takes ~20seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'. Your help would be much appreciated! 
Thank you :) ``` import pandas as pd, numpy as np import datetime import ntpath pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY start_time=datetime.datetime.now() MarketIndex = 'XAO' period = 250 MinBetaPeriod = period # *********************************************************************************************** # CALC RETURNS # *********************************************************************************************** for File in FilesLoaded: FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change() # *********************************************************************************************** # CALC BETA # *********************************************************************************************** def calc_beta(df): np_array = df.values m = np_array[:,0] # market returns are column zero from numpy array s = np_array[:,1] # stock returns are column one from numpy array covariance = np.cov(s,m) # Calculate covariance between stock and market beta = covariance[0,1]/covariance[1,1] return beta #Build Custom "Rolling_Apply" function def rolling_apply(df, period, func, min_periods=None): if min_periods is None: min_periods = period result = pd.Series(np.nan, index=df.index) for i in range(1, len(df)+1): sub_df = df.iloc[max(i-period, 0):i,:] if len(sub_df) >= min_periods: idx = sub_df.index[-1] result[idx] = func(sub_df) return result #Create empty BETA dataframe with same index as RETURNS dataframe df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index) df_join['market'] = FilesLoaded[MarketIndex]['Return'] df_join['stock'] = np.nan for File in FilesLoaded: df_join['stock'].update(FilesLoaded[File]['Return']) df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf") df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf") df_join = df_join.fillna(0) #get rid of the NaNs in the return data FilesLoaded[File]['Beta'] = 
rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod) # *********************************************************************************************** # CLEAN-UP # *********************************************************************************************** print('Run-time: {0}'.format(datetime.datetime.now() - start_time)) ```
2016/09/14
[ "https://Stackoverflow.com/questions/39501277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6107994/" ]
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved. The following optimizes only the subdivision of the data set into rolling windows: ``` def numpy_betas(x_name, window, returns_data, intercept=True): if intercept: ones = numpy.ones(window) def lstsq_beta(window_data): x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]] beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data) return beta_arr[0] indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)] return DataFrame( data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices] , columns=list(returns_data.columns) , index=returns_data.index[window - 1::1] ) ``` The following also optimizes the beta calculation itself: ``` def custom_betas(x_name, window, returns_data): window_inv = 1.0 / window x_sum = returns_data[x_name].rolling(window, min_periods=window).sum() y_sum = returns_data.rolling(window, min_periods=window).sum() xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum() xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum() xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0) x_var = xx_sum - window_inv * numpy.square(x_sum) betas = xy_cov.divide(x_var, axis=0)[window - 1:] betas.columns.name = None return betas ``` Comparing the performance of the two different calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first: [![enter image description here](https://i.stack.imgur.com/A5204.png)](https://i.stack.imgur.com/A5204.png) Comparing the performance to that of @piRSquared's implementation, the custom method takes roughly 350 millis to evaluate compared to over 2 seconds.
Created a simple python package [finance-calculator](https://finance-calculator.readthedocs.io/en/latest/usage.html) based on numpy and pandas to calculate financial ratios including beta. I am using the simple formula ([as per investopedia](https://www.investopedia.com/ask/answers/070615/what-formula-calculating-beta.asp)): ``` beta = covariance(returns, benchmark returns) / variance(benchmark returns) ``` Covariance and variance are directly calculated in pandas which makes it fast. Using the api in the package is also simple: ``` import finance_calculator as fc beta = fc.get_beta(scheme_data, benchmark_data, tail=False) ``` which will give you a dataframe of date and beta or the last beta value if tail is true.
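The beta formula quoted above (covariance of the returns over variance of the benchmark returns) can be checked with a small pure-Python sketch; no numpy or pandas needed, and the sample return series here is made up purely for illustration:

```python
def beta(stock_returns, benchmark_returns):
    """beta = cov(stock, benchmark) / var(benchmark), population moments."""
    n = len(benchmark_returns)
    mean_s = sum(stock_returns) / n
    mean_b = sum(benchmark_returns) / n
    cov = sum((s - mean_s) * (b - mean_b)
              for s, b in zip(stock_returns, benchmark_returns)) / n
    var = sum((b - mean_b) ** 2 for b in benchmark_returns) / n
    return cov / var

# Sanity check: a stock that always moves exactly twice as much
# as the market should have a beta of 2.
market = [0.01, -0.02, 0.015, 0.005]
stock = [2 * r for r in market]
print(beta(stock, market))
```

Note that sample (n-1) versus population (n) normalization cancels out here, since the same factor appears in both covariance and variance.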
13,430
35,257,550
I want to migrate from sqlite3 to MySQL in Django. First I used below command: ``` python manage.py dumpdata > datadump.json ``` then I changed the settings of my Django application and configured it with my new MySQL database. Finally, I used the following command: ``` python manage.py loaddata datadump.json ``` but I got this error : > > integrityError: Problem installing fixtures: The row in table > 'django\_admin\_log' with primary key '20' has an invalid foregin key: > django\_admin\_log.user\_id contains a value '19' that does not have a > corresponding value in auth\_user.id. > > >
2016/02/07
[ "https://Stackoverflow.com/questions/35257550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5214998/" ]
You have a consistency error in your data: the django\_admin\_log table refers to an auth\_user row that does not exist. sqlite does not enforce foreign key constraints, but mysql does. You need to fix the data, and then you can import it into mysql.
I had to move my database from a postgres to a MySql-Database. This worked for me: Export (old machine): ``` python manage.py dumpdata --natural --all --indent=2 --exclude=sessions --format=xml > dump.xml ``` Import (new machine): (note that for older versions of Django you'll need **syncdb** instead of migrate) ``` manage.py migrate --no-initial-data ``` Get SQL for resetting Database: ``` manage.py sqlflush ``` setting.py: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'asdf', 'USER': 'asdf', 'PASSWORD': 'asdf', 'HOST': 'localhost', #IMPORTANT!! 'OPTIONS': { "init_command": "SET foreign_key_checks = 0;", }, } python manage.py loaddata dump.xml ```
13,440
44,481,386
I have a python script which is used to remove noise from background of image. When I am calling this script from terminal it is working fine without any error. I am calling that script as below from terminal: ``` /usr/bin/python noise.py 1.png 100 ``` But When I tried to calling it from PHP using apache it is giving me below error: ``` Traceback (most recent call last): File "./noise.py", line 2, in from PIL import Image, ImageFilter ImportError: No module named PIL ``` Can someone help me in resolving this issue? I tried to give permission to www-data user to that script, like this: ``` sudo chown www-data:www-data noise.py ``` But it doesn't help. Please help me.
2017/06/11
[ "https://Stackoverflow.com/questions/44481386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3198113/" ]
Use this as a drawable ``` <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/listview_background_shape"> <stroke android:width="2dp" android:color="@android:color/transparent" /> <padding android:left="2dp" android:top="2dp" android:right="2dp" android:bottom="2dp" /> <corners android:radius="5dp" /> <solid android:color="@android:color/transparent" /> </shape> ``` and put it as background for the ImageButton
You can also make the android:background="@null" and remove android:cropToPadding="false": ``` <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <ImageButton android:id="@+id/imageButton" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:layout_centerHorizontal="true" android:adjustViewBounds="true" android:background="@null" android:scaleType="fitXY" app:srcCompat="@mipmap/ic_launcher" /> <ImageButton android:id="@+id/imageButton6" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/imageButton" android:layout_centerHorizontal="true" android:adjustViewBounds="true" android:background="@null" android:scaleType="fitXY" app:srcCompat="@mipmap/ic_launcher" /> <ImageButton android:id="@+id/imageButton4" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/imageButton6" android:layout_centerHorizontal="true" android:adjustViewBounds="true" android:background="@null" android:scaleType="fitXY" app:srcCompat="@mipmap/ic_launcher" /> </RelativeLayout> ```
13,441
55,939,474
I am trying to deploy the lambda function along with the `serverless.yml` file to AWS, but it throws the error below. The following is the function defined in the YAML file: ``` functions: s3-thumbnail-generator: handler:handler.s3_thumbnail_generator events: - s3: bucket: ${self:custom.bucket} event: s3.ObjectCreated:* rules: - suffix: .png plugins: - serverless-python-requirements ``` Error I am getting: > > can not read a block mapping entry; a multiline key may not be an implicit key in serverless.yml" at line 45, column 10: > > > How do I fix this issue in the YAML file so that I can deploy the function to AWS?
2019/05/01
[ "https://Stackoverflow.com/questions/55939474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11040619/" ]
The problem is that there is no value indicator (`:`) at the end of the line: ``` handler:handler.s3_thumbnail_generator ``` so the parser continues to try and gather a multi-line plain scalar by adding `events` followed by a value indicator. But a multi-line plain scalar cannot be a key in YAML. It is unclear what your actual error is. It might be that you need to add the value indicator and have a colon embedded in your key: ``` functions: s3-thumbnail-generator: handler:handler.s3_thumbnail_generator: events: - s3: bucket: ${self:custom.bucket} event: s3.ObjectCreated:* rules: - suffix: .png plugins: - serverless-python-requirements ``` Or it could be that that colon should have been a value indicator (which usually needs a following space) and you were sloppy with indentation: ``` functions: s3-thumbnail-generator: handler: handler.s3_thumbnail_generator events: - s3: bucket: ${self:custom.bucket} event: s3.ObjectCreated:* rules: - suffix: .png plugins: - serverless-python-requirements ```
If this is your original file, there is a syntax error in your YAML file. I added a note under the possibly erroneous line: ``` functions: s3-thumbnail-generator: handler:handler.s3_thumbnail_generator events: - s3: bucket: ${self:custom.bucket} event: s3.ObjectCreated:* rules: - suffix: .png ^^^ this line should be indented one level plugins: - serverless-python-requirements ```
13,444
11,743,378
I'm trying to talk to `supervisor` over xmlrpc. Based on [`supervisorctl`](https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py) (especially [this line](https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1512)), I have the following, which seems like it should work, and indeed it works, in so far as it connects enough to receive an error from the server: ``` #socketpath is the full path to the socket, which exists # None and None are the default username and password in the supervisorctl options In [12]: proxy = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport(None, None, serverurl='unix://'+socketpath)) In [13]: proxy.supervisor.getState() ``` Resulting in this error: ``` --------------------------------------------------------------------------- ProtocolError Traceback (most recent call last) /home/marcintustin/webapps/django/oneclickcosvirt/oneclickcos/<ipython-input-13-646258924bc2> in <module>() ----> 1 proxy.supervisor.getState() /usr/local/lib/python2.7/xmlrpclib.pyc in __call__(self, *args) 1222 return _Method(self.__send, "%s.%s" % (self.__name, name)) 1223 def __call__(self, *args): -> 1224 return self.__send(self.__name, args) 1225 1226 ## /usr/local/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params) 1576 self.__handler, 1577 request, -> 1578 verbose=self.__verbose 1579 ) 1580 /home/marcintustin/webapps/django/oneclickcosvirt/lib/python2.7/site-packages/supervisor/xmlrpc.pyc in request(self, host, handler, request_body, verbose) 469 r.status, 470 r.reason, --> 471 '' ) 472 data = r.read() 473 p, u = self.getparser() ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized> ``` This is the `unix_http_server` section of `supervisord.conf`: ``` [unix_http_server] file=/home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock ; (the path to the socket file) ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; 
socket file uid:gid owner ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) ``` So, there should be no authentication problems. It seems like my code is in all material respects identical to the equivalent code from `supervisorctl`, but `supervisorctl` actually works. What am I doing wrong?
2012/07/31
[ "https://Stackoverflow.com/questions/11743378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21640/" ]
Your code looks substantially correct. I'm running Supervisor 3.0 with Python 2.7, and given the following: ``` import supervisor.xmlrpc import xmlrpclib p = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport( None, None, 'unix:///home/lars/lib/supervisor/tmp/supervisor.sock')) print p.supervisor.getState() ``` I get: ``` {'statename': 'RUNNING', 'statecode': 1} ``` Are you certain that your running Supervisor instance is using the configuration file you think it is? What if you run `supervisord` in debug mode, do you see the connection?
I don't use the ServerProxy from xmlrpclib, I use the Server class instead and I don't have to define any transports or paths to sockets. Not sure if your purposes require that, but here's a thin client I use fairly frequently. It's pretty much straight out of the docs. ``` python -c "import xmlrpclib;\ supervisor_client = xmlrpclib.Server('http://localhost:9001/RPC2');\ print( supervisor_client.supervisor.stopProcess(<some_proc_name>) )" ```
13,445
26,476,939
What's a concise python way to say ``` if <none of the elements of this array are None>: # do a bunch of stuff once ```
2014/10/21
[ "https://Stackoverflow.com/questions/26476939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/554807/" ]
The [`all`](https://docs.python.org/2/library/functions.html#all) builtin is nice for this. Given an iterable, it returns `True` if all elements of the iterable evaluate to `True`. ``` if all(x is not None for x in array): # your code ```
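One design note on the `all(...)` pattern above: the explicit `is not None` test matters. Writing just `all(array)` would also reject perfectly valid falsy elements like `0`, `''`, or `False`. A quick demonstration (the sample list is made up):

```python
array = [0, '', False]  # contains no None, but every element is falsy

# Correct check: no element is None, so this is True.
assert all(x is not None for x in array)

# Wrong tool for the job: truthiness trips on the falsy values.
assert not all(array)

# all() is also vacuously True for an empty iterable,
# so an empty array passes the "no Nones" check.
assert all(x is not None for x in [])

print('ok')
```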
You could use `all`: ``` all(i is not None for i in l) ```
13,448
59,888,355
I am having issues with having Conda install the library at this link: <https://github.com/ozgur/python-firebase> I am running: `conda install python-firebase` This is the response I get: ``` Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - python-firebase Current channels: - https://repo.anaconda.com/pkgs/main/win-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-64 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-64 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. ``` Does anyone have a solution? I successfully installed it through `pip`, but I can't get the package in the `Conda` environment. Python 3.7.4
2020/01/23
[ "https://Stackoverflow.com/questions/59888355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11964771/" ]
You have to run this ``` conda install -c auto python-firebase ``` Take a look at [this](https://anaconda.org/auto/python-firebase)
Try doing `conda install -c auto python-firebase` Check <https://anaconda.org/auto/python-firebase> for further information
13,453
14,321,679
I'm a new to programming and I chose python as my first language because its easy. But I'm confused here with this code: ``` option = 1 while option != 0: print "/n/n/n************MENU************" #Make a menu print "1. Add numbers" print "2. Find perimeter and area of a rectangle" print "0. Forget it!" print "*" * 28 option = input("Please make a selection: ") #Prompt user for a selection if option == 1: #If option is 1, get input and calculate firstnumber = input("Enter 1st number: ") secondnumber = input("Enter 2nd number: ") add = firstnumber + secondnumber print firstnumber, "added to", secondnumber, "equals", add #show results elif option == 2: #If option is 2, get input and calculate length = input("Enter length: ") width = input("Enter width: ") perimeter = length * 2 + width * 2 area = length * width print "The perimeter of your rectangle is", perimeter #show results print "The area of your rectangle is", area else: #if the input is anything else its not valid print "That is not a valid option!" ``` Okay Okay I get every thing below the `Option` variable. I just want to know why we assigned the value of `Option=1`, why we added it on the top of the program ,and what is its function. Also can we change its value. Please make me understand it in simple language as I'm new to programming.
2013/01/14
[ "https://Stackoverflow.com/questions/14321679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1977722/" ]
If you didn't create the variable `option` at the start of the program, the line ``` while option != 0: ``` would break, because no `option` variable would yet exist. As for how to change its value, notice that it is changed every time the line: ``` option = input("Please make a selection: ") ``` happens- that is reassigning its value to the user's input.
Python requires variables to be declared before they can be used. In this case, a decision is being made whether `option` is set to `1` or `2` (so we set it to one of those values, ordinarily we could just as easily set it to `0` or an empty string). While some languages are less stringent on variable declaration (PHP comes to mind), most require variables to exist prior to use. Python does not require variables to be explicitly declared, only that they are given a value to reserve memory space. VB.NET, by default, on the other hand, requires variables to be explicitly declared... ``` Dim var as D ``` Which sets the variables `type` but doesn't give it an initial value. See [the Python docs](http://www.tutorialspoint.com/python/python_variable_types.htm).
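The point both answers make, that `option` must already exist when the `while` test first runs, can be seen directly in a minimal sketch (the variable name matches the question; the rest is illustrative):

```python
# Referencing option before any assignment fails, which is why
# the script primes it with option = 1 before the loop.
try:
    while option != 0:   # no prior assignment anywhere above this point
        break
    reached_loop = True
except NameError:        # UnboundLocalError is a subclass of NameError
    reached_loop = False

# Priming the variable with any value that enters the loop fixes it;
# 1 is arbitrary, any non-zero value would do.
option = 1
iterations = 0
while option != 0:
    iterations += 1
    option = 0  # stand-in for the user eventually choosing "Forget it!"
print(reached_loop, iterations)
```

Once inside the loop, the `option = input(...)` line reassigns the variable on every pass, which is what eventually makes the `while option != 0` test false.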
13,455
59,951,747
i tried the example project of the Flask-MQTT (<https://github.com/stlehmann/Flask-MQTT>) with my local mosquitto broker. But unfortunatly it is not working. Subscription and publish are not forwared correctly. so i've added some logger messages: ``` def handle_connect(client, userdata, flags, rc): print("CLIENT CONNECTED") @mqtt.on_disconnect() def handle_disconnect(): print("CLIENT DISCONNECTED") @mqtt.on_log() def handle_logging(client, userdata, level, buf): print(level, buf) ``` > > 16 Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k30) client\_id=b'flask\_mqtt' > > CLIENT DISCONNECTED > > 16 Received CONNACK (0, 0) > > CLIENT CONNECTED > > 16 Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k30) client\_id=b'flask\_mqtt' > > CLIENT DISCONNECTED > > 16 Received CONNACK (0, 0) > > > > The mosquitto Broker shows that it disconnects the flask app because of the client is already connected: > > 1580163250: New connection from 127.0.0.1 on port 1883. > > 1580163250: Client flask\_mqtt already connected, closing old connection. > > 1580163250: New client connected from 127.0.0.1 as flask\_mqtt (p2, c1, k30). > > 1580163250: No will message specified. > > 1580163250: Sending CONNACK to flask\_mqtt (0, 0) > > 1580163251: New connection from 127.0.0.1 on port 1883. > > 1580163251: Client flask\_mqtt already connected, closing old connection. > > 1580163251: New client connected from 127.0.0.1 as flask\_mqtt (p2, c1, k30). > > 1580163251: No will message specified. > > 1580163251: Sending CONNACK to flask\_mqtt (0, 0) > > 1580163251: Socket error on client flask\_mqtt, disconnecting. > > > > i also tested a simple python.paho mqtt client example without flask and it works as expected. i also changed tried several loop starts inside the flask-mqtt code `self.client.loop_start() --> self.client.loop_forever()` ... did not change anything. so any idea where's the problem? i also debugged the flask-mqtt code and cannot find issues. 
(My Python version is Python 3.6.9 (default, Nov 7 2019, 10:44:02); my host system is elementary Linux.) Maybe the Flask-MQTT lib is deprecated? Any hint or idea is appreciated!
2020/01/28
[ "https://Stackoverflow.com/questions/59951747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5409884/" ]
The reason this is failing is in the mosquitto logs. ``` 1580163250: New connection from 127.0.0.1 on port 1883. 1580163250: Client flask_mqtt already connected, closing old connection. 1580163250: New client connected from 127.0.0.1 as flask_mqtt (p2, c1, k30). 1580163250: No will message specified. 1580163250: Sending CONNACK to flask_mqtt (0, 0) ``` Every client that connects to the broker must have a unique client id. In this case the flask client is trying to make multiple connections to the broker with the same client id. When the second connection starts, the broker sees that client id is the same and automatically disconnects the first. You've not actually supplied any code showing how you are settings up the client connections so we can't make any suggestions on how to actually fix it. Did you pay attention to the comment at the end of the last example in the README.md on the github page?
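Given the diagnosis above, one minimal way to satisfy the broker's unique-client-id requirement, assuming (as the logs suggest) that two processes both connect as `flask_mqtt`, is to append a random suffix per process. The config key matches the question's code; the suffix scheme itself is my own suggestion, not part of Flask-MQTT:

```python
import uuid

def unique_client_id(base='flask_mqtt'):
    # e.g. 'flask_mqtt-3f9c1a2b': each process gets its own id, so the
    # broker no longer closes the "old" connection on every reconnect.
    return '{}-{}'.format(base, uuid.uuid4().hex[:8])

# In the Flask app configuration this would look like:
# app.config['MQTT_CLIENT_ID'] = unique_client_id()
print(unique_client_id())
```

Making the id unique treats the symptom; if the duplicate connection comes from Flask's reloader spawning a second process, disabling the reloader addresses the cause as well.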
thanks for your fast reply! this helped me a lot and fixes the problem: The code is ``` """ A small Test application to show how to use Flask-MQTT. """ import eventlet import json from flask import Flask, render_template from flask_mqtt import Mqtt from flask_socketio import SocketIO from flask_bootstrap import Bootstrap eventlet.monkey_patch() app = Flask(__name__) app.config['SECRET'] = 'my secret key' app.config['TEMPLATES_AUTO_RELOAD'] = True app.config['MQTT_BROKER_URL'] = '127.0.0.1' app.config['MQTT_BROKER_PORT'] = 1883 app.config['MQTT_CLIENT_ID'] = 'flask_mqtt' #app.config['MQTT_USERNAME'] = '' #app.config['MQTT_PASSWORD'] = '' app.config['MQTT_KEEPALIVE'] = 30 #app.config['MQTT_TLS_ENABLED'] = False #app.config['MQTT_REFRESH_TIME'] = 1.0 # refresh time in seconds #app.config['MQTT_LAST_WILL_TOPIC'] = 'home/lastwill' #app.config['MQTT_LAST_WILL_MESSAGE'] = 'bye' #app.config['MQTT_LAST_WILL_QOS'] = 2 # Parameters for SSL enabled # app.config['MQTT_BROKER_PORT'] = 8883 # app.config['MQTT_TLS_ENABLED'] = True # app.config['MQTT_TLS_INSECURE'] = True # app.config['MQTT_TLS_CA_CERTS'] = 'ca.crt' mqtt = Mqtt(app) socketio = SocketIO(app) bootstrap = Bootstrap(app) @app.route('/') def index(): return render_template('index.html') @socketio.on('publish') def handle_publish(json_str): data = json.loads(json_str) mqtt.publish(data['topic'], data['message'], data['qos']) @socketio.on('subscribe') def handle_subscribe(json_str): data = json.loads(json_str) mqtt.subscribe(data['topic'], data['qos']) @socketio.on('unsubscribe_all') def handle_unsubscribe_all(): mqtt.unsubscribe_all() @mqtt.on_message() def handle_mqtt_message(client, userdata, message): data = dict( topic=message.topic, payload=message.payload.decode(), qos=message.qos, ) socketio.emit('mqtt_message', data=data) @mqtt.on_log() def handle_logging(client, userdata, level, buf): # print(level, buf) pass @mqtt.on_connect() def handle_connect(client, userdata, flags, rc): print("CLIENT CONNECTED") 
@mqtt.on_disconnect() def handle_disconnect(): print("CLIENT DISCONNECTED") @mqtt.on_log() def handle_logging(client, userdata, level, buf): print(level, buf) if __name__ == '__main__': socketio.run(app, host='0.0.0.0', port=5000, use_reloader=True, debug=True) ``` Changing it to `use_reloader=False` solves the problem! In the example it is set to True... maybe that should be fixed. By the way, what does `use_reloader` mean? (I am new to Flask.) Thanks a lot!
13,457
53,932,357
When installing packages with sudo apt-get install or building libraries from source inside a python virtual environment (I am not talking about pip install), does doing it inside a python virtual environment isolate the applications being installed? I mean do they exist only inside the python virtual environment?
2018/12/26
[ "https://Stackoverflow.com/questions/53932357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2651062/" ]
Things that a virtual environment gives you an isolated version of:

* You get a separate `PATH` entry, so unqualified command-line references to `python`, `pip`, etc., will refer to the selected Python distribution. This can be convenient if you have many copies of Python installed on the system (common on developer workstations). This means that a shebang line like `#!/usr/bin/env python` will "do the right thing" inside of a virtualenv (on a Unix or Unix-like system, at least).
* You get a separate `site-packages` directory, so Python packages (installed using `pip` or built locally inside this environment using e.g. `setup.py build`) are installed locally to the virtualenv and not in a system-wide location. This is especially useful on systems where the core Python interpreter is installed in a place where unprivileged users are not allowed to write files, as it allows each user to have their own private virtualenvs with third-party packages installed, without needing to use `sudo` or equivalent to install those third-party packages system-wide.

... and that's about it.

A virtual environment will *not* isolate you from:

* Your operating system (Linux, Windows) or machine architecture (x86).
* Scripts that reference a particular Python interpreter directly (e.g. `#!/usr/bin/python`).
* Non-Python things on your system `PATH` (e.g. third party programs or utilities installed via your operating system's package manager).
* Non-Python libraries or headers that are installed into an operating system specific location (e.g. `/usr/lib`, `/usr/include`, `/usr/local/lib`, `/usr/local/include`).
* Python packages that are installed using the operating system's package manager (e.g. `apt`) rather than a Python package manager (`pip`) might not be visible from the virtualenv's `site-packages` folder, but the "native" parts of such packages (in e.g. `/usr/lib`) will (probably) still be visible.
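A quick way to check the prefix/`site-packages` isolation described above from inside the interpreter (a minimal sketch, not part of the original answer; `sys.base_prefix` requires Python 3.3+):

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base interpreter installation.
# Outside any venv the two are equal.
in_venv = sys.prefix != sys.base_prefix
print("running inside a virtualenv:", in_venv)
```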
As per the comment by @deceze, virtual environments have no influence over `apt` operations. When building from source, any compiled binaries will be linked to the python binaries of that environment. So if your virtualenv python version varies from the system version, and you use the system python (path problems usually), you can encounter runtime linking errors. As for isolation, this same property (binary compatibility) isolates you from system upgrades which might change your system python binaries. Generally we're stable in the 2.x and 3.x, so it isn't likely to happen. But has, and can. And of course, when building from source inside a virtualenv, installed packages are stashed in that virtualenv; no other python binary will have access to those packages, unless you are manipulating your path or PYTHONPATH in strange ways.
13,458
13,728,325
I'm trying to use Z3 from its python interface, but I would prefer not to do a system-wide install (i.e. sudo make install). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's python directory. Best case, I would like to run z3py directly from the build directory, in the same way I use the z3 binary (build/z3). Does anyone know how to, or have a script to, run z3py directly from the build directory, without doing an install?
2012/12/05
[ "https://Stackoverflow.com/questions/13728325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1406686/" ]
Yes, you can do it by including the build directory in your `LD_LIBRARY_PATH` and `PYTHONPATH` environment variables.
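A hedged sketch of how those two variables can be set up programmatically for a child interpreter — the `~/z3/build` path is a placeholder for wherever your build tree actually lives, and the final import is shown commented out since it only works once z3 is really built there:

```python
import os
import subprocess
import sys

# Placeholder path -- point this at your real z3 build directory.
z3_build = os.path.expanduser('~/z3/build')

# Build an environment where a child interpreter can find both the
# shared library (LD_LIBRARY_PATH) and the z3py modules (PYTHONPATH).
env = dict(os.environ)
env['LD_LIBRARY_PATH'] = z3_build + os.pathsep + env.get('LD_LIBRARY_PATH', '')
env['PYTHONPATH'] = z3_build + os.pathsep + env.get('PYTHONPATH', '')

# With a real build in place, a child process could then do:
# subprocess.run([sys.executable, '-c', 'import z3'], env=env)
print(env['PYTHONPATH'].split(os.pathsep)[0])
```

(Note that `LD_LIBRARY_PATH` must be in the environment before the process that loads the shared library starts, which is why a subprocess is used here rather than mutating `os.environ` in the running interpreter.)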
If you don't care about the python interface, edit the `build/Makefile` and comment out or delete the following lines in the `install` target:

```
@cp libz3$(SO_EXT) /usr/lib/python2.7/dist-packages/libz3$(SO_EXT)
@cp z3*.pyc /usr/lib/python2.7/dist-packages
```
13,459
40,222,971
The answer presented here: [How to work with surrogate pairs in Python?](https://stackoverflow.com/questions/38147259/how-to-work-with-surrogate-pairs-in-python) tells you how to convert a surrogate pair, such as `'\ud83d\ude4f'` into a single non-BMP unicode character (the answer being `"\ud83d\ude4f".encode('utf-16', 'surrogatepass').decode('utf-16')`). I would like to know how to do this in reverse. How can I, using Python, find the equivalent surrogate pair from a non-BMP character, converting `'\U0001f64f'` () back to `'\ud83d\ude4f'`. I couldn't find a clear answer to that.
2016/10/24
[ "https://Stackoverflow.com/questions/40222971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6555884/" ]
You'll have to manually replace each non-BMP point with the surrogate pair. You could do this with a regular expression:

```
import re

_nonbmp = re.compile(r'[\U00010000-\U0010FFFF]')

def _surrogatepair(match):
    char = match.group()
    assert ord(char) > 0xffff
    encoded = char.encode('utf-16-le')
    return (
        chr(int.from_bytes(encoded[:2], 'little')) +
        chr(int.from_bytes(encoded[2:], 'little')))

def with_surrogates(text):
    return _nonbmp.sub(_surrogatepair, text)
```

Demo:

```
>>> with_surrogates('\U0001f64f')
'\ud83d\ude4f'
```
It's a little complex, but here's a one-liner to convert a single character (remember to `import struct` first):

```
>>> emoji = '\U0001f64f'
>>> ''.join(chr(x) for x in struct.unpack('>2H', emoji.encode('utf-16be')))
'\ud83d\ude4f'
```

To convert a mix of characters requires surrounding that expression with another:

```
>>> emoji_str = 'Here is a non-BMP character: \U0001f64f'
>>> ''.join(c if c <= '\uffff' else ''.join(chr(x) for x in struct.unpack('>2H', c.encode('utf-16be'))) for c in emoji_str)
'Here is a non-BMP character: \ud83d\ude4f'
```
13,460
52,264,354
I have the following dataframe:

```
                                                       Sentence
0                                             Cat is a big lion
1                                  Dogs are descendants of wolf
2                                       Elephants are pachyderm
3  Pachyderm animals include rhino, Elephants and hippopotamus
```

I need to create a python code which looks at the words in sentence above and calculates the sum of scores for each based on following distinct data frame.

```
Name          Score
cat           1
dog           2
wolf          2
lion          3
elephants     5
rhino         4
hippopotamus  5
```

For example, for row 0, the score will be 1 (cat) + 3 (lion) = 4

I am looking to create an output that looks like following.

```
                                                       Sentence  Value
0                                             Cat is a big lion      4
1                                  Dogs are descendants of wolf      4
2                                       Elephants are pachyderm      5
3  Pachyderm animals include rhino, Elephants and hippopotamus     14
```
2018/09/10
[ "https://Stackoverflow.com/questions/52264354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9244542/" ]
As a first effort, you can try a `split` and `map`-based approach, and then compute the score using `groupby`.

```
v = df1['Sentence'].str.split(r'[\s.!?,]+', expand=True).stack().str.lower()
df1['Value'] = (
    v.map(df2.set_index('Name')['Score'])
     .sum(level=0)
     .fillna(0, downcast='infer'))
```

```
df1

                                            Sentence  Value
0                                  Cat is a big lion      4
1                       Dogs are descendants of wolf      4   # s/dog/dogs in df2
2                            Elephants are pachyderm      5
3  Pachyderm animals include rhino, Elephants and...     14
```
### `nltk`

You may need to download stuff

```
import nltk
nltk.download('punkt')
```

Then set up stemming and tokenizing

```
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

ps = PorterStemmer()
```

Create a handy dictionary

```
m = dict(zip(map(ps.stem, scores.Name), scores.Score))
```

And generate scores

```
def f(s):
    return sum(filter(None, map(m.get, map(ps.stem, word_tokenize(s)))))

df.assign(Score=[*map(f, df.Sentence)])

                                            Sentence  Score
0                                  Cat is a big lion      4
1                       Dogs are descendants of wolf      4
2                            Elephants are pachyderm      5
3  Pachyderm animals include rhino, Elephants and...     14
```
13,461
22,590,892
I have a python list of string tuples of the form: `lst = [('xxx', 'yyy'), ...etc]`. The list has around `8154741` tuples. I used a profiler and it says that the list takes around 500 MB in memory. Then I wrote all tuples in the list into a text file and it took around 72MB on disk size. I have three questions: * Why the memory consumption is different from disk usage? * And is it logical to consume 500MB of memory for such a list? * Is there a way/technique to reduce the size of the list?
2014/03/23
[ "https://Stackoverflow.com/questions/22590892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2464658/" ]
you have `8154741` tuples, that means your list, assuming 8 byte pointers, already contains `62 MB` of pointers to tuples.

Assuming each tuple contains two ascii strings in python2, that's another `124 MB` of pointers inside the tuples. Then you still have the overhead for the tuple and string objects: each object has a reference count, and assuming that is an 8 byte integer you have another `186 MB` of reference count storage.

That is already `372 MB` of overhead for the `46 MB` of data you would have with two 3 byte long strings in size 2 tuples. Under python3 your data is unicode and may be larger than 1 byte per character too.

So yes, it is expected that this type of structure consumes a large amount of excess memory.

If your strings are all of similar length and the tuples all have the same length, a way to reduce this is to use numpy string arrays. They store the strings in one continuous memory block, avoiding the object overheads. But this will not work well if the strings vary in size a lot, as numpy does not support ragged arrays.

```
>>> d = [("xxx", "yyy") for i in range(8154741)]
>>> a = numpy.array(d)
>>> print a.nbytes/1024**2
46
>>> print a[2,1]
yyy
```
Python objects can take much more memory than the raw data in them. This is because to achieve the features of Python's advanced and superfast data structures, you have to create some intermediate and temporary objects. Read more [here](http://deeplearning.net/software/theano/tutorial/python-memory-management.html). Working around this issue has several ways, see a case study [here](http://guillaume.segu.in/blog/code/487/optimizing-memory-usage-in-python-a-case-study/). In most cases, it is enough to find the best suitable python data type for your application (would it not be better to use a numpy array instead of a list in your case?). For more optimizing, you can move to Cython where you can directly declare the types (and so, the sizes) of your variables, like in C. There are also packages like [IOPro](https://store.continuum.io/cshop/iopro/) that try to optimize memory usage (this one is commercial though, does anyone know a free package for this?).
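To make the per-object overhead both answers describe concrete, `sys.getsizeof` shows it directly (a rough sketch — exact byte counts vary by Python version and platform, and `getsizeof` on a container does not include the objects it references):

```python
import sys

pair = ('xxx', 'yyy')

raw_data = len(pair[0]) + len(pair[1])              # 6 bytes of actual characters
tuple_size = sys.getsizeof(pair)                    # tuple header + two pointers
string_sizes = sum(sys.getsizeof(s) for s in pair)  # string object headers included

print(raw_data, tuple_size, string_sizes)
# With ~8 million such tuples, the header/pointer overhead dwarfs the raw data.
```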
13,463
14,088,294
I'm trying to create multithreaded web server in python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?

```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep

class ThreadingServer(ThreadingMixIn, HTTPServer):
    pass

class RequestHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        sleep(5)
        response = 'Slept for 5 seconds..'
        self.send_header('Content-length', len(response))
        self.end_headers()
        self.wfile.write(response)

ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
2012/12/30
[ "https://Stackoverflow.com/questions/14088294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1937459/" ]
Check [this](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) post from Doug Hellmann's blog.

```
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading

class Handler(BaseHTTPRequestHandler):

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        message = threading.currentThread().getName()
        self.wfile.write(message)
        self.wfile.write('\n')
        return

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""

if __name__ == '__main__':
    server = ThreadedHTTPServer(('localhost', 8080), Handler)
    print 'Starting server, use <Ctrl-C> to stop'
    server.serve_forever()
```
I have developed a PIP Utility called [ComplexHTTPServer](https://github.com/vickysam/ComplexHTTPServer) that is a multi-threaded version of SimpleHTTPServer.

To install it, all you need to do is:

```
pip install ComplexHTTPServer
```

Using it is as simple as:

```
python -m ComplexHTTPServer [PORT]
```

(By default, the port is 8000.)
13,466
51,106,340
I am trying to create an application in appengine that searches for a list of keys and then uses this list to delete these records from the datastore. This service has to be a generic service, so I could not use a model — I can only search by the name of the kind. Is it possible to do this through appengine features? Below is my code, but it requires that I have a model.

```
import httplib
import logging
from datetime import datetime, timedelta

import webapp2
from google.appengine.api import urlfetch
from google.appengine.ext import ndb

DEFAULT_PAGE_SIZE = 100000
DATE_PATTERN = "%Y-%m-%dT%H:%M:%S"

def get_date(amount):
    date = datetime.today() - timedelta(days=30 * amount)
    date = date.replace(hour=0, minute=0, second=0)
    return date

class Purge(webapp2.RequestHandler):
    def get(self):
        kind = self.request.get('kind')
        datefield = self.request.get('datefield')
        amount = self.request.get('amount', default_value=3)

        date = get_date(amount)
        logging.info('Executando purge para Entity {}, mantendo periodo de {} meses.'.format(kind, amount))

        # create the query
        query = ndb.Query(kind=kind, namespace='development')
        logging.info('Setando o filtro [{} <= {}]'.format(datefield, date.strftime(DATE_PATTERN)))

        # create a filter
        query.filter(ndb.DateTimeProperty(datefield) <= date)
        query.fetch_page(DEFAULT_PAGE_SIZE)

        while True:
            # run the query
            keys = query.fetch(keys_only=True)
            logging.info('Encontrados {} {} a serem exluidos'.format(len(keys), kind))

            # delete using the keys
            ndb.delete_multi(keys)

            if len(keys) < DEFAULT_PAGE_SIZE:
                logging.info('Nao existem mais registros a serem excluidos')
                break

app = webapp2.WSGIApplication(
    [
        ('/cloud-datastore-purge', Purge),
    ],
    debug=True)
```

Trace

```
Traceback (most recent call last):
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
    rv = self.handle_exception(request, response, e)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
    rv = self.router.dispatch(request, response)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
    return route.handler_adapter(request, response)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
    return handler.dispatch()
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
    return self.handle_exception(e, self.app.debug)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
    return method(*args, **kwargs)
  File "/base/data/home/apps/p~telefonica-dev-155211/cloud-datastore-purge-python:20180629t150020.410785498982375644/purge.py", line 38, in get
    query.fetch_page(_DEFAULT_PAGE_SIZE)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/utils.py", line 160, in positional_wrapper
    return wrapped(*args, **kwds)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1362, in fetch_page
    return self.fetch_page_async(page_size, **q_options).get_result()
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
    self.check_success()
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
    value = gen.throw(exc.__class__, exc, tb)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1380, in _fetch_page_async
    while (yield it.has_next_async()):
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
    value = gen.throw(exc.__class__, exc, tb)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1793, in has_next_async
    yield self._fut
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 890, in helper
    batch, i, ent = yield inq.getq()
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 969, in run_to_queue
    batch = yield rpc
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
    result = rpc.get_result()
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
    return self.__get_result_hook(self)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2951, in __query_result_hook
    self.__results = self._process_results(query_result.result_list())
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2984, in _process_results
    for result in results]
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 194, in pb_to_query_result
    return self.pb_to_entity(pb)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 690, in pb_to_entity
    modelclass = Model._lookup_model(kind, self.default_model)
  File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3101, in _lookup_model
    kind)
KindError: No model class found for kind 'Test'. Did you forget to import it?
```
2018/06/29
[ "https://Stackoverflow.com/questions/51106340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8484943/" ]
The problem was found on the line where `fetch_page` is called. The fix was replacing this line

```
query.fetch_page(DEFAULT_PAGE_SIZE)
```

with this

```
keys = query.fetch(limit=_DEFAULT_LIMIT, keys_only=True)
```
To run a datastore query without a model class available in the environment, you can use the [`google.appengine.api.datastore.Query`](https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.datastore#google.appengine.api.datastore.Query) class from the low-level [datastore API](https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.datastore). See [this question](https://stackoverflow.com/questions/54900142/datastore-query-without-model-class) for other ideas.
13,475
26,529,791
this is the first time I am trying to code in python and I am implementing the Apriori algorithm. I have generated till 2-itemsets and below is the function I have to generate 2-itemsets by combining the keys of the 1-itemset. How do I go about making this function generic? I mean, by passing the keys of a dictionary and the number of elements required in the tuple, the algorithm should generate all possible n-number (k+1) subsets using the keys. I know that union on sets is a possibility, but is there a way to do a union of tuples, which is essentially the keys of a dictionary?

```
# generate 2-itemset candidates by joining the 1-itemset candidates
def candidate_gen(keys):
    adict = {}
    for i in keys:
        for j in keys:
            #if i != j and (j,i) not in adict:
            if j > i:
                #call join procedure which will generate f(k+1) keys
                #call has_infrequent_subset --> generates all possible k+1 itemsets and checks if k itemsets are present in f(k) keys
                adict[tuple([min(i,j), max(i,j)])] = 0
    return adict
```

For example, if my initial dictionary looks like: {key, value} --> value is the frequency

```
{'382': 1163, '298': 560, '248': 1087, '458': 720, '118': 509, '723': 528, '390': 1288}
```

I take the keys of this dictionary and pass it to the candidate_gen function mentioned above; it will generate the subsets of 2-itemsets and give the output of keys. I will then pass the keys to a function to find the frequency by comparing against the original database to get this output:

```
{('390', '723'): 65, ('118', '298'): 20, ('298', '390'): 70, ('298', '458'): 35, ('248', '382'): 88, ('248', '458'): 76, ('248', '723'): 26, ('382', '723'): 203, ('390', '458'): 33, ('118', '458'): 26, ('458', '723'): 26, ('248', '390'): 87, ('118', '248'): 54, ('298', '382'): 47, ('118', '723'): 41, ('382', '390'): 413, ('382', '458'): 57, ('248', '298'): 64, ('118', '382'): 40, ('298', '723'): 36, ('118', '390'): 52}
```

How do I generate 3-itemset subsets from the above keys?
2014/10/23
[ "https://Stackoverflow.com/questions/26529791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4077331/" ]
I assume that, given your field, you can benefit very much from the study of python's [itertools](https://docs.python.org/3/library/itertools.html) library. In your use case you can directly use the itertools `combinations` or wrap it in a helper function

```
from itertools import combinations

def ord_comb(l, n):
    return list(combinations(l, n))

#### TESTING ####
a = [1, 2, 3, 4, 5]
print(ord_comb(a, 1))
print(ord_comb(a, 5))
print(ord_comb(a, 6))
print(ord_comb([], 2))
print(ord_comb(a, 3))
```

**Output**

```
[(1,), (2,), (3,), (4,), (5,)]
[(1, 2, 3, 4, 5)]
[]
[]
[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (2, 3, 4), (2, 3, 5), (2, 4, 5), (3, 4, 5)]
```

Please note that the order of the elements in the `n`-uples depends on the order that you used in the iterable that you pass to `combinations`.
This?

```
In [12]: [(x, y) for x in keys for y in keys if y>x]
Out[12]:
[('382', '723'),
 ('382', '458'),
 ('382', '390'),
 ('458', '723'),
 ('298', '382'),
 ('298', '723'),
 ('298', '458'),
 ('298', '390'),
 ('390', '723'),
 ('390', '458'),
 ('248', '382'),
 ('248', '723'),
 ('248', '458'),
 ('248', '298'),
 ('248', '390'),
 ('118', '382'),
 ('118', '723'),
 ('118', '458'),
 ('118', '298'),
 ('118', '390'),
 ('118', '248')]
```
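Neither answer shows the 3-itemset step the question ends with. One generic sketch (the `next_candidates` name is mine, not from either answer) joins frequent k-itemsets into (k+1)-candidates using `itertools.combinations` plus the classic Apriori pruning rule that every k-subset of a candidate must itself be frequent:

```python
from itertools import combinations

def next_candidates(prev_itemsets):
    """Generate (k+1)-itemset candidates from frequent k-itemsets.

    prev_itemsets: iterable of sorted tuples, all of the same length k.
    """
    prev = set(prev_itemsets)
    k = len(next(iter(prev)))
    items = sorted({item for itemset in prev for item in itemset})
    candidates = []
    for cand in combinations(items, k + 1):
        # Apriori pruning: keep the candidate only if every k-subset is frequent.
        if all(sub in prev for sub in combinations(cand, k)):
            candidates.append(cand)
    return candidates

pairs = [('118', '248'), ('118', '298'), ('248', '298')]
print(next_candidates(pairs))  # [('118', '248', '298')]
```

The same function works for any k, so it can drive each round of candidate generation after counting frequencies.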
13,477
35,799,809
I am playing around with `unicode` in python. So there is a simple script:

```
# -*- coding: cp1251 -*-

print 'юникод'.decode('cp1251')
print unicode('юникод', 'cp1251')
print unicode('юникод', 'utf-8')
```

In cmd I've switched encoding to `Active code page: 1251`. And there is the output:

```
СЋРЅРёРєРѕРґ
СЋРЅРёРєРѕРґ
юникод
```

I am a little bit confused. Since I've specified encoding to `cp1251` I expect that it would be decoded correctly. But as result there is some trash code points were interpreted. I am understand that `'юникод'` is just a bytes like: `'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'`. But there is a way to get correct output in terminal with `cp1251`? Should I build byte string manually? Seems like I misunderstood something.
2016/03/04
[ "https://Stackoverflow.com/questions/35799809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3990145/" ]
I think I can understand what happened to you. The last line gave me the hint, and your *trash codepoints* confirmed it: you are trying to display cp1251 characters, but your editor is configured to use utf8.

The `# -*- coding: cp1251 -*-` is only used by the Python interpreter to convert characters from source python files that are outside of the ASCII range. And anyway it is only used for unicode literals, because bytes from the original source give er... exactly the same bytes in byte strings.

Some text editors are kind enough to automagically use this line (the IDLE editor is), but I'm little confident in that and always switch **manually** to the proper encoding when I use gvim for example. Short story: `# -*- coding: cp1251 -*-` is unused in your code and can only mislead a reader since it is not the actual encoding.

If you want to be sure of what lies in your source, you'd better use explicit escapes. In code page 1251, this word `юникод` is composed by those characters: `'\xfe\xed\xe8\xea\xee\xe4'`

If you write this source:

```
txt = '\xfe\xed\xe8\xea\xee\xe4'
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```

and execute it in a console configured to use CP1251 charset, the first three lines will output `юникод`, and the last one will throw a UnicodeDecodeError exception because the input is no longer valid 'utf8'.

Alternatively, if you feel comfortable with your current editor, you could write:

```
# -*- coding: utf8 -*-
txt = 'юникод'.decode('utf8').encode('cp1251')
# or simply
txt = u'юникод'.encode('cp1251')
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```

which should give same results - but now the declared source encoding should be the actual encoding of your python source.

---

BTW, a Python 3.5 IDLE that natively uses unicode confirmed that:

```
>>> 'СЋРЅРёРєРѕРґ'.encode('cp1251').decode('utf8')
'юникод'
```
Just use the following, but **ensure** you save the source code in the declared encoding. It can be *any* encoding that supports the characters you want to print. The terminal can be in a different encoding, as long as it *also* supports the characters you want to print:

```
#coding:utf8
print u'юникод'
```

The advantage is that you don't need to know the terminal's encoding. Python will normally1 detect the terminal encoding and encode the print output correctly.

1Unless your terminal is misconfigured.
13,478
58,959,226
I am trying to install a package which needs `psycopg2` as a dependency, so I installed `psycopg2-binary` using `pip install psycopg2-binary`, but when I try to `pip install django-tenant-schemas` I get this error:

```
In file included from psycopg/psycopgmodule.c:27:0:
./psycopg/psycopg.h:34:10: fatal error: Python.h: No such file or directory
 #include <Python.h>
          ^~~~~~~~~~
compilation terminated.

You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages
required for the build and try again.

For further information please check the 'doc/src/install.rst' file (also at
<http://initd.org/psycopg/docs/install.html>).

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
ERROR: Command errored out with exit status 1: /home/david/PycharmProjects/clearpath/venv/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-pi6j7x5l/install-record.txt --single-version-externally-managed --compile --install-headers /home/david/PycharmProjects/clearpath/venv/include/site/python3.7/psycopg2 Check the logs for full command output.
```

When I go into my project's repo settings (using PyCharm) I can see psycopg2-binary is installed. I assume this has something to do with the PATH but I can't seem to figure out how to solve the issue.

`which psql`: /usr/bin/psql

`which pg_config`: /usr/bin/pg_config

I am not comfortable doing much in the Environment variables as I really don't want to break something.
2019/11/20
[ "https://Stackoverflow.com/questions/58959226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796680/" ]
This takes a whole 1 line fewer. Whether it's cleaner or easier to understand is up to you ....

```
int sides[3];
for (int i = 0; i < 3; i++) {
    cout << "Enter side " << i+1 << endl;
    cin >> sides[i];
}
```

It's good to write short code where it makes it clearer, so do keep considering how you can do that. Making it look pretty is a worthy consideration too - again as long as it makes what you're doing clearer. Clarity is everything!!
To make the code more maintainable and readable:

1) Use more meaningful variable names, or if you would name them consecutively, use an array e.g. `int numbers[3]`

2) Similarly, when you are taking prompts like this, consider having the prompts in a parallel array for the questions, or if they are the same prompt use something similar to [noelicus](https://stackoverflow.com/a/58959334/8760895) answer.

I would do something like this:

```
int numbers[3];
string prompts[3] = {"put your", "prompts", "here"};
for (int i = 0; i < 3; i++) {
    cout << prompts[i] << endl;
    cin >> numbers[i];
}
//do math
//print output
```

also, you may want to check to make sure the user has entered a number using [this](https://stackoverflow.com/questions/5655142/how-to-check-if-input-is-numeric-in-c).
13,481
61,643,039
When I run the cv.Canny edge detector on drawings, it detects hundreds of little edges densely packed in the shaded areas. How can I get it to stop doing that, while still detecting lighter features like eyes and nose? I tried blurring too.

Here's an example, compared with an [online photo tool](https://online.rapidresizer.com/photograph-to-pattern.php). [Original image](https://i.stack.imgur.com/8VcUa.jpg). [Output of online tool](https://i.stack.imgur.com/JYhjJ.png). [My python program](https://i.stack.imgur.com/cFJ5E.png)

Here's my code:

```
def outline(image, sigma=5):
    image = cv.GaussianBlur(image, (11, 11), sigma)
    ratio = 2
    lower = .37 * 255
    upper = lower * ratio
    outlined = cv.Canny(image, lower, upper)
    return outlined
```

How can I improve it?
2020/05/06
[ "https://Stackoverflow.com/questions/61643039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12346436/" ]
Here is one way to do that in Python/OpenCV.

**Morphologic edge out is the absolute difference between a mask and the dilated mask**

* Read the input
* Convert to gray
* Threshold (as mask)
* Dilate the thresholded image
* Compute the absolute difference
* Invert its polarity as the edge image
* Save the result

Input:

[![enter image description here](https://i.stack.imgur.com/bM0Wn.jpg)](https://i.stack.imgur.com/bM0Wn.jpg)

```
import cv2
import numpy as np

# read image
img = cv2.imread("cartoon.jpg")

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)[1]

# morphology edgeout = dilated_mask - mask
# morphology dilate
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel)

# get absolute difference between dilate and thresh
diff = cv2.absdiff(dilate, thresh)

# invert
edges = 255 - diff

# write result to disk
cv2.imwrite("cartoon_thresh.jpg", thresh)
cv2.imwrite("cartoon_dilate.jpg", dilate)
cv2.imwrite("cartoon_diff.jpg", diff)
cv2.imwrite("cartoon_edges.jpg", edges)

# display it
cv2.imshow("thresh", thresh)
cv2.imshow("dilate", dilate)
cv2.imshow("diff", diff)
cv2.imshow("edges", edges)
cv2.waitKey(0)
```

Thresholded image:

[![enter image description here](https://i.stack.imgur.com/KejXu.jpg)](https://i.stack.imgur.com/KejXu.jpg)

Dilated threshold image:

[![enter image description here](https://i.stack.imgur.com/kUZjt.jpg)](https://i.stack.imgur.com/kUZjt.jpg)

Difference image:

[![enter image description here](https://i.stack.imgur.com/HXtdh.jpg)](https://i.stack.imgur.com/HXtdh.jpg)

Edge image:

[![enter image description here](https://i.stack.imgur.com/3xC95.jpg)](https://i.stack.imgur.com/3xC95.jpg)
I was successfully able to make `cv.Canny` give satisfactory results by changing the kernel dimension from (11, 11) to (0, 0), allowing the kernel to be dynamically determined by sigma. By doing this and tuning sigma, I got pretty good results. Also, `cv.imshow` distorts images, so when I was using it to test, the results looked significantly worse than they actually were.
13,483
38,451,831
I am using Zeppelin and matplotlib to visualize some data. I try them but fail with the error below. Could you give me some guidance how to fix it? ``` %pyspark import matplotlib.pyplot as plt plt.plot([1,2,3,4]) plt.ylabel('some numbers') plt.show() ``` And here is the error I've got ``` Traceback (most recent call last): File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module> eval(compiledCode) File "<string>", line 1, in <module> File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module> new_figure_manager, draw_if_interactive, show = pylab_setup() File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup globals(),locals(),[backend_name]) File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module> from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\ File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module> import gtk; gdk = gtk.gdk File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module> _init() File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init _gtk.init_check() RuntimeError: could not open display ``` I also try to add these lines, but still cannot work ``` import matplotlib matplotlib.use('Agg') ```
2016/07/19
[ "https://Stackoverflow.com/questions/38451831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6151388/" ]
The following works for me with Spark & Python 3: ``` %pyspark import matplotlib import io # If you use the use() function, this must be done before importing matplotlib.pyplot. Calling use() after pyplot has been imported will have no effect. # see: http://matplotlib.org/faq/usage_faq.html#what-is-a-backend matplotlib.use('Agg') import matplotlib.pyplot as plt def show(p): img = io.StringIO() p.savefig(img, format='svg') img.seek(0) print("%html <div style='width:600px'>" + img.getvalue() + "</div>") plt.plot([1,2,3,4]) plt.ylabel('some numbers') show(plt) ``` The Zeppelin [documentation](https://zeppelin.apache.org/docs/0.6.0/interpreter/python.html#matplotlib-integration) suggests that the following should work: ``` %python import matplotlib.pyplot as plt plt.figure() (.. ..) z.show(plt) plt.close() ``` This doesn't work for me with Python 3, but looks to be addressed with the soon-to-be-merged [PR #1213](https://github.com/apache/zeppelin/pull/1213).
As per @eddies suggestion, I tried and this is what worked for me on Zeppelin 0.6.1 python 2.7 ``` %python import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.figure() plt.plot([1,2,3,4]) plt.ylabel('some numbers') z.show(plt, width='500px') plt.close() ```
13,484
8,510,615
I have ubuntu 11.10. I apt-get installed pypy from this launchpad repository: <https://launchpad.net/~pypy> the computer already has python on it, and python has its own pip. How can I install pip for pypy and how can I use it differently from that of python?
2011/12/14
[ "https://Stackoverflow.com/questions/8510615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1098562/" ]
To keep a separate installation, you might want to create a [virtualenv](http://pypi.python.org/pypi/virtualenv) for PyPy. Within the virtualenv, you can then just run `pip install whatever` and it will install it for PyPy. When you create a virtualenv, it automatically installs pip for you. Otherwise, you will need to work out where PyPy will import from and install distribute and pip in one of those locations. [pip's installer](http://www.pip-installer.org/en/latest/installing.html) should do this automatically when run with PyPy. Be careful with this option - if it decides to install in your system Python directories, it could break other things.
The problem with the `pip` installed from `pypy` (at least when installing `pypy` via `apt-get`) is that it is installed into the system path:

```
$ whereis pip
pip: /usr/local/bin/pip /usr/bin/pip
```

So after such an install, the `pypy` pip is executed by default (/usr/local/bin/pip) instead of the `python` pip (/usr/bin/pip), which may break subsequent updates of the whole Ubuntu system.

The problem with `virtualenv` is that you have to remember where and what env you created.

A convenient alternative is `conda` (miniconda), which manages not only Python deployments: <http://conda.pydata.org/miniconda.html>.

Comparison of `conda`, `pip` and `virtualenv`: <http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html>
13,493
63,191,779
I've created previously a python script that creates an author index. To spare you the details, (since extracting text from a pdf was pretty hard) I created a minimal reproducible example. My current status is I get a new line for each author and a comma separated list of the pages on which the author appears. However I would like to sort the list of pages in ascending manner. ``` import pandas as pd import csv words = ["Autor1","Max Mustermann","Max Mustermann","Autor1","Bertha Musterfrau","Author2"] pages = [15,13,5,1,17,20] str_pages = list(map(str, pages)) df = pd.DataFrame({"Autor":words,"Pages":str_pages}) df = df.drop_duplicates().sort_values(by="Autor").reset_index(drop=True) df = df.groupby("Autor")['Pages'].apply(lambda x: ','.join(x)).reset_index() df ``` This produces the desired output (except the sorting of the pages). ``` Autor Pages 0 Author2 20 1 Autor1 15,1 2 Bertha Musterfrau 17 3 Max Mustermann 13,5 ``` I tried to vectorize the `Pages` column to string, split by the comma and applied a lambda function that is supposed to sort the resulting list. ``` df["Pages"] = df["Pages"].str.split(",").apply(lambda x: sorted(x)) df ``` However this only worked for `Autor1` but not for `Max Mustermann`. I cant seem to figure out why this is the case ``` Autor Pages 0 Author2 [20] 1 Autor1 [1, 15] 2 Bertha Musterfrau [17] 3 Max Mustermann [13, 5] ```
2020/07/31
[ "https://Stackoverflow.com/questions/63191779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7318488/" ]
`str.split` returns lists of strings, so `lambda x: sorted(x)` still sorts by strings, not integers. You can try:

```
df['Pages'] = (df.Pages.str.split(',')
                 .explode().astype(int)
                 .sort_values()
                 .groupby(level=0).agg(list)
              )
```

Output:

```
 Autor Pages
0 Author2 [20]
1 Autor1 [1, 15]
2 Bertha Musterfrau [17]
3 Max Mustermann [5, 13]
```
If you want to use your existing approach, ``` df.Pages = ( df.Pages.str.split(",") .apply(lambda x: sorted(x, key=lambda x: int(x))) ) ``` --- ``` Autor Pages 0 Author2 [20] 1 Autor1 [1, 15] 2 Bertha Musterfrau [17] 3 Max Mustermann [5, 13] ```
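The difference between the two sort orders can be checked without pandas; the sample value below is a hypothetical stand-in for one entry of the `Pages` column:

```python
# "13,5" split on commas yields strings, so a plain sorted() call
# compares them character by character ('1' < '5' puts '13' first).
pages = "13,5".split(",")

as_strings = sorted(pages)           # lexicographic order
as_numbers = sorted(pages, key=int)  # numeric order, values stay strings

print(as_strings)            # ['13', '5']
print(as_numbers)            # ['5', '13']
print(",".join(as_numbers))  # 5,13
```

Keeping `key=int` (rather than converting with `astype`/`int()`) means the values stay strings, so they can be joined straight back into the comma-separated `Pages` format.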
13,498
53,796,705
why so in python 3.6.1 with simple code like: ``` print(f'\xe4') ``` Result: ``` Traceback (most recent call last): File "<pyshell#16>", line 1, in <module> print(f'\xe4') File "<pyshell#13>", line 1, in <lambda> print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file) File "<pyshell#13>", line 1, in <lambda> print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file) File "<pyshell#13>", line 1, in <lambda> print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file) [Previous line repeated 990 more times] RecursionError: maximum recursion depth exceeded ```
2018/12/15
[ "https://Stackoverflow.com/questions/53796705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10540454/" ]
So let's recap: you have overridden the built-in `print` function with this: ``` print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file) ``` Which is the same as ``` def print(text, end='\n', file=sys.stdout): print(text, end=end, file=file) ``` As you can see, this function calls itself recursively, but there is no recursion base, no condition when it finishes. You end up with a classic example of infinite recursion. This has absolutely nothing to do with Unicode or formatting. Simply do not name your functions after builtins: ``` def my_print(text, end='\n', file=sys.stdout): print(text, end=end, file=file) my_print('abc') # works ``` Or at least keep the reference to the original: ``` print_ = print def print(text, end='\n', file=sys.stdout): print_(text, end=end, file=file) print('abc') # works as well ``` Note: if the function is already overwritten, you will have to run `del print` (or restart the interpreter) to get back the original builtin.
Works for me as well. But maybe it'll work for you with: ``` print(chr(0xe4)) ```
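A quick sanity check (run in a fresh interpreter, before `print` has been shadowed) shows that `'\xe4'` and `chr(0xe4)` denote the same one-character string, which supports the diagnosis that the escape sequence itself was never the problem:

```python
# Both spellings build the single character U+00E4 ('ä').
s = '\xe4'
print(s == chr(0xe4))  # True
print(len(s))          # 1
```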
13,499
50,653,208
What I want to achieve is simple, in R I can do things like `paste0("https\\",1:10,"whatever",11:20)`, how to do such in Python? I found some things [here](https://stackoverflow.com/questions/28046408/equivalent-of-rs-paste-command-for-vector-of-numbers-in-python), but only allow for : `paste0("https\\",1:10)`. Anyone know how to figure this out, this must be easy to do but I can not find how.
2018/06/02
[ "https://Stackoverflow.com/questions/50653208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6113825/" ]
**@Jason**, I suggest either of the following 2 ways to do this task.

✓ By creating a list of texts using a **list comprehension** and the **zip()** function.

> **Note:** To print `\` on screen, use the escape sequence `\\`. See [List of escape sequences and their use](https://msdn.microsoft.com/en-us/library/h21280bw.aspx).
>
> Please comment if you think this answer doesn't satisfy your problem. I will change the answer based on your inputs and expected outputs.

```
texts = ["https\\\\" + str(num1) + "whatever" + str(num2) for num1, num2 in zip(range(1, 10), range(11, 20))]

for text in texts:
    print(text)

"""
https\\1whatever11
https\\2whatever12
https\\3whatever13
https\\4whatever14
https\\5whatever15
https\\6whatever16
https\\7whatever17
https\\8whatever18
https\\9whatever19
"""
```

✓ By defining a simple function **paste0()** that implements the above logic to return a list of texts.

```
import json

def paste0(string1, range1, string2, range2):
    texts = [string1 + str(num1) + string2 + str(num2) for num1, num2 in zip(range1, range2)]
    return texts

texts = paste0("https\\\\", range(1, 10), "whatever", range(11, 20))

# Pretty printing the obtained list of texts using the json module
print(json.dumps(texts, indent=4))
"""
[
    "https\\\\1whatever11",
    "https\\\\2whatever12",
    "https\\\\3whatever13",
    "https\\\\4whatever14",
    "https\\\\5whatever15",
    "https\\\\6whatever16",
    "https\\\\7whatever17",
    "https\\\\8whatever18",
    "https\\\\9whatever19"
]
"""
```
**Based on the link you provided,** this should work: ``` ["https://" + str(i) + "whatever" + str(i) for i in xrange(1,11)] ``` Gives the following output: ``` ['https://1whatever1', 'https://2whatever2', 'https://3whatever3', 'https://4whatever4', 'https://5whatever5', 'https://6whatever6', 'https://7whatever7', 'https://8whatever8', 'https://9whatever9', 'https://10whatever10'] ``` **EDIT:** This should work for `paste0("https\\",1:10,"whatever",11:20)` ``` paste_list = [] for i in xrange(1,11): # replace {0} with the value of i first_half = "https://{0}".format(i) for x in xrange(1,21): # replace {0} with the value of x second_half = "whatever{0}".format(x) # Concatenate the two halves of the string and append them to paste_list[] paste_list.append(first_half+second_half) print paste_list ```
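For completeness, a compact Python 3 sketch of the pairwise `paste0` behaviour from R, using `zip()` and f-strings; the prefix and ranges below are the ones from the question (R's `1:10` includes 10, hence `range(1, 11)`):

```python
# Pairwise concatenation, like R's paste0("https\\", 1:10, "whatever", 11:20).
def paste0(prefix, range1, infix, range2):
    return [f"{prefix}{a}{infix}{b}" for a, b in zip(range1, range2)]

texts = paste0("https\\\\", range(1, 11), "whatever", range(11, 21))
print(texts[0])   # https\\1whatever11
print(texts[-1])  # https\\10whatever20
```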
13,500
29,219,814
Im kinda new to python, im trying to the basic task of splitting string data from a file using a double backslash (\\) delimiter. Its failing, so far: ``` from tkinter import filedialog import string import os #remove previous finalhostlist try: os.remove("finalhostlist.txt") except Exception as e: print (e) root = tk.Tk() root.withdraw() print ("choose hostname target list") file_path = filedialog.askopenfilename() with open("finalhostlist.txt", "wt") as rawhostlist: with open(file_path, "rt") as finhostlist: for line in finhostlist: ## rawhostlist.write("\n".join(line.split("\\\\"))) rawhostlist.write(line.replace(r'\\', '\n')) ``` I need the result to be from e.g. `\\Accounts01\\Accounts02` to `Accounts01 Accounts02` Can someone help me with this? I'm using python 3. **EDIT**: All good now, `strip("\\")` on its own did it for me. Thanks guys!
2015/03/23
[ "https://Stackoverflow.com/questions/29219814", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3014488/" ]
`write` expects a string and you have passed it a list; if you want the contents written, use `str.join`:

```
rawhostlist.write("\n".join(line.split("\\")))
```

You also don't need to call close when you use `with`; it closes your file automatically. (You actually never call close anyway, as you are missing parens: `rawhostlist.close -> rawhostlist.close()`.)

It is not clear if you actually have 2, 3 or 4 backslashes. Your original code has two, your edit has three, so whichever it is, you need to use the same amount to split.

```
In [66]: s = "\\Accounts01\\Accounts02"

In [67]: "\n".join(s.split("\\\\"))
Out[67]: '\\Accounts01\\Accounts02'

In [68]: s = "\\\Accounts01\\\counts02"

In [69]: "\n".join(s.split("\\\\"))
Out[69]: '\nAccounts01\ncounts02'
```

If it varies, then split with `\\` and filter out empty strings.

Looking at the file you posted, you have a single element on each line, so simply use strip:

```
with open("finalhostlist.txt", "wt") as f_out, open(infile, "rt") as f_in:
    for line in f_in:
        f_out.write(line.strip("\\"))
```

Output:

```
ACCOUNTS01
EXAMS01
EXAMS02
RECEPTION01
RECEPTION02
RECEPTION03
RECEPTION04
RECEPTION05
TEACHER01
TEACHER02
TEACHER03
TESTCENTRE-01
TESTCENTRE-02
TESTCENTRE-03
TESTCENTRE-04
TESTCENTRE-05
TESTCENTRE-06
TESTCENTRE-07
TESTCENTRE-08
TESTCENTRE-09
TESTCENTRE-10
TESTCENTRE-11
TESTCENTRE-12
TESTCENTRE-13
TESTCENTRE-14
TESTCENTRE-15
```
if you want them written on separate lines: ``` for sub in line.split("\\"):rawhostlist.write(sub) ```
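The `strip("\\")` approach from the accepted answer can be checked on a sample line modeled after the posted file; with an explicit character set, `strip` removes only backslashes from the ends of the string, so the trailing newline is preserved and one hostname per line is written out:

```python
# A sample line like the ones in the posted host list: \\ACCOUNTS01
line = "\\\\ACCOUNTS01\n"

cleaned = line.strip("\\")  # strips backslashes from both ends only
print(repr(cleaned))        # 'ACCOUNTS01\n'
```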
13,501
20,369,642
I'm trying to get the keyboard code of a character pressed in python. For this, I need to see if a keypad number is pressed. *This is not what I'm looking for*: ``` import tty, sys tty.setcbreak(sys.stdin) def main(): tty.setcbreak(sys.stdin) while True: c = ord(sys.stdin.read(1)) if c == ord('q'): break if c: print c ``` which outputs the ascii code of the character. this means, i get the same ord for a keypad 1 as as a normal 1. I've also tried a similar setup using the `curses` library and `raw`, with the same results. I'm trying to get the raw keyboard code. How does one do this?
2013/12/04
[ "https://Stackoverflow.com/questions/20369642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1224926/" ]
As synthesizerpatel said, I need to go to a lower level. Using pyusb:

```
import usb.core, usb.util, usb.control

dev = usb.core.find(idVendor=0x045e, idProduct=0x0780)

try:
    if dev is None:
        raise ValueError('device not found')

    cfg = dev.get_active_configuration()
    interface_number = cfg[(0,0)].bInterfaceNumber
    intf = usb.util.find_descriptor(
        cfg, bInterfaceNumber=interface_number)

    # detach the kernel driver so this process can read the endpoint
    if dev.is_kernel_driver_active(intf):
        dev.detach_kernel_driver(intf)

    ep = usb.util.find_descriptor(
        intf,
        custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_IN)

    while True:
        try:
            # lsusb -v : find wMaxPacketSize (8 in my case)
            a = ep.read(8, timeout=2000)
        except usb.core.USBError:
            continue
        print a
except:
    raise
```

This gives you an output: `array('B', [0, 0, 0, 0, 0, 0, 0, 0])`

array pos:

0: bitmask (bitwise OR) of modifier keys (1 - control, 2 - shift, 4 - meta, 8 - super)

1: No idea

2-7: key codes of keys pushed.

so:

```
[3, 0, 89, 90, 91, 92, 93, 94]
```

is:

```
ctrl+shift+numpad1+numpad2+numpad3+numpad4+numpad5+numpad6
```

If anyone knows what the second index stores, that would be awesome.
To get raw keyboard input from Python you need to snoop at a lower level than reading stdin. For OSX check this answer: [OS X - Python Keylogger - letters in double](https://stackoverflow.com/questions/13806829/os-x-python-keylogger-letters-in-double) For Windows, this might work: <http://www.daniweb.com/software-development/python/threads/229564/python-keylogger> [monitor keyboard events with python in windows 7](https://stackoverflow.com/questions/3476183/monitor-keyboard-events-with-python-in-windows-7)
13,504
54,229,785
How to check whether a folder exists in google drive with name using python? I have tried with the following code: ``` import requests import json access_token = 'token' url = 'https://www.googleapis.com/drive/v3/files' headers = { 'Authorization': 'Bearer' + access_token } response = requests.get(url, headers=headers) print(response.text) ```
2019/01/17
[ "https://Stackoverflow.com/questions/54229785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10506357/" ]
* You want to know whether a folder exists in Google Drive using the folder name.
* You want to achieve it using the access token and `requests.get()`.

If my understanding is correct, how about this modification? Please think of this as just one of several answers.

### Modification points:

* You can search for the folder using the query for filtering the files of drive.files.list.
	+ In your case, the query is as follows.
		- `name='filename' and mimeType='application/vnd.google-apps.folder'`
	+ If you don't want to search in the trash box, please add `and trashed=false` to the query.
* In order to confirm whether the folder exists, in this case, it checks the `files` property. This property is an array. If the folder exists, the array has elements.

### Modified script:

```
import requests
import json

foldername = '#####' # Put folder name here.
access_token = 'token'
url = 'https://www.googleapis.com/drive/v3/files'
headers = {'Authorization': 'Bearer ' + access_token} # Modified
query = {'q': "name='" + foldername + "' and mimeType='application/vnd.google-apps.folder'"} # Added
response = requests.get(url, headers=headers, params=query) # Modified
obj = response.json() # Added
if obj['files']: # Added
    print('Existing.') # Folder exists.
else:
    print('Not existing.') # Folder does not exist.
```

### References:

* [drive.files.list](https://developers.google.com/drive/api/v3/reference/files/list)
* [Search for Files](https://developers.google.com/drive/api/v3/search-parameters)

If I misunderstood your question, please tell me. I would like to modify it.
You may see this [sample code](https://gist.github.com/jmlrt/f524e1a45205a0b9f169eb713a223330) on how to check if destination folder exists and return its ID. ``` def get_folder_id(drive, parent_folder_id, folder_name): """ Check if destination folder exists and return it's ID """ # Auto-iterate through all files in the parent folder. file_list = GoogleDriveFileList() try: file_list = drive.ListFile( {'q': "'{0}' in parents and trashed=false".format(parent_folder_id)} ).GetList() # Exit if the parent folder doesn't exist except googleapiclient.errors.HttpError as err: # Parse error message message = ast.literal_eval(err.content)['error']['message'] if message == 'File not found: ': print(message + folder_name) exit(1) # Exit with stacktrace in case of other error else: raise # Find the the destination folder in the parent folder's files for file1 in file_list: if file1['title'] == folder_name: print('title: %s, id: %s' % (file1['title'], file1['id'])) return file1['id'] ``` Also from this [tutorial](https://www.programcreek.com/python/example/88814/pydrive.drive.GoogleDrive), you can check if folder exists and if not, then create one with the given name.
13,507
43,223,017
I am attempting to understand the excellent Code given as a guide by Andrej Karpathy: <https://gist.github.com/karpathy/d4dee566867f8291f086> I am new to python, still learning! I am doing the best I can to understand the following code from the link: ``` # perform parameter update with Adagrad for param, dparam, mem in zip([Wxh, Whh, Why, bh, by], [dWxh, dWhh, dWhy, dbh, dby], [mWxh, mWhh, mWhy, mbh, mby]): mem += dparam * dparam param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update ``` I have read up on the [zip function](https://docs.python.org/3.3/library/functions.html#zip) and done some short tests to try to understand how this works. What I know so far, 5 Iterations, param == Wxh on the first iteration but not there on... Ideally I am trying to convert this code to C#, and to do that I need to understand it. In referring to [Python iterator and zip](https://stackoverflow.com/questions/38024554/python-iterator-and-zip) it appears as we are multiplying each item of each array: ``` param = Wxh * dWxh * mWxh ``` But then the variables `param` `dparam` and `mem` are being modified outside the zip function. How do these variables function in this for loop scenario?
2017/04/05
[ "https://Stackoverflow.com/questions/43223017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1183804/" ]
Writing a simple for loop with zip will help you learn a lot. For example:

```
for a, b, c in zip([1, 2, 3], [4, 5, 6], [7, 8, 9]):
    print(a)
    print(b)
    print(c)
    print("/")
```

This will print:

```
1
4
7
/
2
5
8
/
3
6
9
/
```

So the zip function just puts those three lists together, and the three variables param, dparam, mem then refer to the different lists. In each iteration, each of the three variables refers to one item of its corresponding list, just like `i` in `for i in [1, 2, 3]:`. In this way, you only need to write one for loop instead of three to update the grads for all the parameters: Wxh, Whh, Why, bh, by.

In the first iteration, only Wxh is updated, using dWxh and mWxh, following the adagrad rule. In the second, Whh is updated using dWhh and mWhh, and so on.
Python treats the variables merely as *labels* or name tags. Since you have zipped mutable objects (the numpy arrays) into a list of lists, it doesn't matter where they are: as long as you address them through their label, the in-place updates (`+=`) modify the same underlying objects, so the changes are visible outside the loop.

Kindly note, this may not work for immutable types like `int` or `str`, etc., where `+=` rebinds the name instead of mutating the object. Refer to this answer for more explanation - [Immutable vs Mutable types](https://stackoverflow.com/questions/8056130/immutable-vs-mutable-types).
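A minimal sketch of that label behaviour, with plain one-element lists standing in for the numpy arrays in Karpathy's code:

```python
# In-place updates through a zip() loop mutate the shared objects,
# so the changes are visible in the original lists afterwards.
params = [[1.0], [2.0]]
grads = [[1.0], [0.5]]

for param, dparam in zip(params, grads):
    param[0] += -0.5 * dparam[0]   # in-place, like numpy's `param += ...`

print(params)  # [[0.5], [1.75]]

# Rebinding the loop variable, by contrast, changes nothing outside:
for param, dparam in zip(params, grads):
    param = [0.0]                  # only rebinds the local label

print(params)  # still [[0.5], [1.75]]
```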
13,508
73,425,359
I am running Ubuntu 22.04 with xorg. I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here <https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html>. After a lot of debugging, I got to this point: <https://pastebin.com/MGShD31N> However, the file platform.h does exist. ``` sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ``` At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg. Does anyone have any idea if this is possible? Thanks.
2022/08/20
[ "https://Stackoverflow.com/questions/73425359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12625930/" ]
Mu and the uflash command are able to retrieve your Python code from .hex files. Using uflash you can do the following, for example:

```
uflash my_script.py
```

I think what you want is possible, but it's harder than just using their web Python editor: <https://python.microbit.org/v/2>
**Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware** After trying to setup the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found <https://github.com/carlosperate/docker-microbit-toolchain> [at this commit](https://github.com/carlosperate/docker-microbit-toolchain/blob/ad2e52620d47f98c225647855432a854f65d9140/requirements.txt) from Carlos Atencio, a Micro:Bit foundation employee, and that just absolutely worked: ``` # Get examples. git clone https://github.com/bbcmicrobit/micropython cd micropython git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660 # Get Docker image. docker pull ghcr.io/carlosperate/microbit-toolchain:latest # Build setup to be run once. docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all # Build one example. docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \ tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex # Build all examples. docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \ bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done' ``` And you can then flash the example you want to run with: ``` cp build/counter.hex "/media/$USER/MICROBIT/" ``` Some further comments at: [Generating micropython + python code `.hex` file from the command line for the BBC micro:bit](https://stackoverflow.com/questions/52691853/generating-micropython-python-code-hex-file-from-commandline/73877468#73877468)
13,516
9,725,737
> > **Possible Duplicate:** > > [Tool to convert python indentation from spaces to tabs?](https://stackoverflow.com/questions/338767/tool-to-convert-python-indentation-from-spaces-to-tabs) > > > I have a number of python files (>1000) that need to be reformatted so indentation is done only with tabs (yes, i know PEP-8, but this is a coding standard of a company). What is the easiest way to do so? Maybe some script in Python that will `os.walk` over files and do some magic? I can't just `grep` file content since file can be malformed (mixed tab and spaces, different amount of spaces) but Python will still run it and i get it back working after conversion.
2012/03/15
[ "https://Stackoverflow.com/questions/9725737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69882/" ]
I would suggest using this [Reindent](http://pypi.python.org/pypi/Reindent/0.1.0) script on PyPI to convert all of your horribly inconsistent files to a consistent PEP-8 (4-space indents) version. At this point try one more time to convince whoever decided on tabs that the company coding standard is stupid and PEP-8 style should be used, if this fails then you could use sed (as in hc\_'s answer) or create a Python script to replace 4 spaces with a single tab at the beginning of every line.
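If the fallback route is taken, here is a hedged sketch of such a script. It assumes the files have already been normalized to clean 4-space indents (e.g. by Reindent); it will not untangle files with mixed tabs and spaces:

```python
import os

def spaces_to_tabs(text, width=4):
    """Convert each run of `width` leading spaces to a tab."""
    out = []
    for line in text.splitlines(keepends=True):
        body = line.lstrip(" ")
        indent = len(line) - len(body)
        # whole indent levels become tabs; any remainder is kept as spaces
        out.append("\t" * (indent // width) + " " * (indent % width) + body)
    return "".join(out)

def convert_tree(root):
    # Walk the tree (as the question suggests) and rewrite .py files in place.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    src = f.read()
                with open(path, "w") as f:
                    f.write(spaces_to_tabs(src))
```

Note that, like the sed approach below, this blindly rewrites leading whitespace everywhere, including inside triple-quoted strings.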
How about

```
find . -type f -iname \*.py -print0 | xargs -0 sed -i 's/^    /\t/'
```

This command finds all .py files below the current directory and replaces four consecutive spaces at the start of each line with a tab. Note that the pattern is anchored with `^` and has no `g` flag, so only the first four leading spaces (the outermost indentation level) are converted on each line.

Just noticed Spacedman's comment. This approach will not handle spaces at the beginning of a line inside a `"""` string correctly.
13,519
37,817,559
I've packaged a [this simple flask app](https://github.com/SimplyAhmazing/pyinstaller-tut) using PyInstaller but my OSX executable fails to run and shows the following executable, ``` Error loading Python lib '/Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/Python': dlopen(/Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/Python, 10): image not found ``` My guess is that PyInstaller is not packaging Python with my app. Here's what I ran, ``` $ pyinstaller hello_flask.spec --onedir 83 INFO: PyInstaller: 3.2 83 INFO: Python: 3.4.3 87 INFO: Platform: Darwin-13.4.0-x86_64-i386-64bit 89 INFO: UPX is not available. 90 INFO: Extending PYTHONPATH with paths ['/Users/ahmed/Code/play/py-install-tut', '/Users/ahmed/Code/play/py-install-tut'] 90 INFO: checking Analysis 99 INFO: checking PYZ 104 INFO: checking PKG 105 INFO: Building because toc changed 105 INFO: Building PKG (CArchive) out00-PKG.pkg 144 INFO: Bootloader /opt/boxen/pyenv/versions/3.4.3/Python.framework/Versions/3.4/lib/python3.4/site-packages/PyInstaller/bootloader/Darwin-64bit/run_d 144 INFO: checking EXE 145 INFO: Building because toc changed 145 INFO: Building EXE from out00-EXE.toc 145 INFO: Appending archive to EXE /Users/ahmed/Code/play/py-install-tut/build/hello_flask/hello_flask 155 INFO: Fixing EXE for code signing /Users/ahmed/Code/play/py-install-tut/build/hello_flask/hello_flask 164 INFO: checking COLLECT WARNING: The output directory "/Users/ahmed/Code/play/py-install-tut/dist/hello_flask" and ALL ITS CONTENTS will be REMOVED! Continue? (y/n)y 1591 INFO: Removing dir /Users/ahmed/Code/play/py-install-tut/dist/hello_flask 1597 INFO: Building COLLECT out00-COLLECT.toc 2203 INFO: checking BUNDLE WARNING: The output directory "/Users/ahmed/Code/play/py-install-tut/dist/myscript.app" and ALL ITS CONTENTS will be REMOVED! Continue? 
(y/n)y 3947 INFO: Removing dir /Users/ahmed/Code/play/py-install-tut/dist/myscript.app 3948 INFO: Building BUNDLE out00-BUNDLE.toc 3972 INFO: moving BUNDLE data files to Resource directory ``` When I open the contents of the packaged app in OSX I get the following files, ``` myscript.app/Contents/MacOS/ _struct.cpython-34m.so hello_flask zlib.cpython-34m.so ``` When I double click the above `hello_flask` executable I get the following output in my terminal, ``` /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask ; exit; PyInstaller Bootloader 3.x LOADER: executable is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask LOADER: homepath is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS LOADER: _MEIPASS2 is NULL LOADER: archivename is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask LOADER: Extracting binaries LOADER: Executing self as child LOADER: set _MEIPASS2 to /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS PyInstaller Bootloader 3.x LOADER: executable is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask LOADER: homepath is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS LOADER: _MEIPASS2 is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS LOADER: archivename is /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask LOADER: Already in the child - running user's code. 
LOADER: Python library: /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/Python Error loading Python lib '/Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/Python': dlopen(/Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/Python, 10): image not found LOADER: Back to parent (RC: 255) LOADER: Doing cleanup LOADER: Freeing archive status for /Users/ahmed/Code/play/py-install-tut/dist/myscript.app/Contents/MacOS/hello_flask [Process completed] ``` I've also tried running this on a co-workers mac OSX and I get the same issue.
2016/06/14
[ "https://Stackoverflow.com/questions/37817559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/772401/" ]
The minimum deployment target with Xcode 8 is iOS 8. To target the iOS 7.x SDK and below, use Xcode 7. If you try to use a deployment target of iOS 7.x or below, Xcode will suggest you change your target to iOS 8: [![Xcode Warning](https://i.stack.imgur.com/LGe5e.png)](https://i.stack.imgur.com/LGe5e.png)
Apple has changed so much between iOS 7 and now. The easiest way of not having to deal with backward compatibility is to make the old OSes obsolete. ~~So you have 2 choices. You can leave the setting as is and deal with the warning message,~~ or you can change the setting and not support iOS 7 or lower any longer. There are pros and cons to each... ~~Leave the setting: If you choose to leave the Min OS setting as is, your app will have a larger user base. But since the new-OS adoption rate is very very high, this is not as much of an issue with iOS devices as it would be with Android devices. You would also have to deal with supporting iOS 7. That means that if you decide to use any new features not available in iOS 7, you would have to deal with the iOS 7 case. Possible app crashes, inconsistent UI, etc...~~ Change the setting: If you choose to change the setting, then you no longer have to support iOS 7 (you can create much simpler and more consistent code with new features). You also slightly shrink your customer base (very very slightly). It's up to you what you would like to do, but really all devices that can run 7 can also run 8. So if they want your app they can just upgrade OSes and be fine (not like the iPad 1 that stopped at iOS 5). My customers are all large businesses that need to run through lots of red tape to upgrade their fleet of devices. So I have to support iOS 7 (for now; Xcode 8 may give me the sway to force those who haven't to upgrade).
13,520
67,117,219
i am new to coding and python and i was wondering how to create a regex that will match all ip addresses that start with 192.168.1.xxx I have been looking online and have not yet been able to find a match. Here is some some sample data that i am trying to match them from. ``` /index.html HTTP/1.1" 404 208 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET / HTTP/1.1" 403 4897 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Light/OpenSans-Light.woff HTTP/1.1" 404 241 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Bold/OpenSans-Bold.woff HTTP/1.1" 404 239 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Light/OpenSans-Light.ttf HTTP/1.1" 404 240 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Bold/OpenSans-Bold.ttf HTTP/1.1" 404 238 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:53 -0400] "GET /first HTTP/1.1" 404 203 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "GET /HNAP1/ HTTP/1.1" 404 204 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "GET / HTTP/1.1" 403 4897 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "POST /JNAP/ HTTP/1.1" 404 203 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "POST /JNAP/ HTTP/1.1" 404 203 "-" "-" ```
2021/04/15
[ "https://Stackoverflow.com/questions/67117219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12920080/" ]
Here you go. Also, check out <https://regexr.com/>: `^192\.168\.1\.[0-9]{1,3}$`
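A quick runnable sketch of this pattern in action (assuming you scan the log token by token, since the `^`/`$` anchors require a token to be exactly the address):

```python
import re

# Matches a token that is exactly a 192.168.1.x address
pattern = re.compile(r'^192\.168\.1\.[0-9]{1,3}$')

log = ('192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET / HTTP/1.1" 403 4897 "-" "-" '
       '192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "GET /HNAP1/ HTTP/1.1" 404 204 "-" "-"')

# Split on whitespace and keep only tokens the anchored pattern accepts
matches = [tok for tok in log.split() if pattern.match(tok)]
print(matches)  # ['192.168.1.142', '192.168.1.1']
```

If you want to pull addresses out of arbitrary text instead (without splitting first), drop the anchors and use `re.findall`.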
I think here its best to use a combination of `regex` to grab any valid IP address from your data, row by row. Then use `ipaddress` to check if the address sits within the network you're looking for. This will provide much more flexibility in the case you need to check different networks, instead of rewriting the `regex` every single time, you can create an `ip_network` object instead. We could also create multiple networks, and check for existence in all of them. ``` import ipaddress import re data = '''/index.html HTTP/1.1" 404 208 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET / HTTP/1.1" 403 4897 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Light/OpenSans-Light.woff HTTP/1.1" 404 241 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Bold/OpenSans-Bold.woff HTTP/1.1" 404 239 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Light/OpenSans-Light.ttf HTTP/1.1" 404 240 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:43 -0400] "GET /noindex/css/fonts/Bold/OpenSans-Bold.ttf HTTP/1.1" 404 238 "http://optiplex360/noindex/css/open-sans.css" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.142 - - [30/Sep/2016:16:18:53 -0400] "GET /first HTTP/1.1" 404 203 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "GET /HNAP1/ HTTP/1.1" 404 204 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 
-0400] "GET / HTTP/1.1" 403 4897 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "POST /JNAP/ HTTP/1.1" 404 203 "-" "-" 192.168.1.1 - - [30/Sep/2016:16:19:00 -0400] "POST /JNAP/ HTTP/1.1" 404 203 "-" "-"''' network = ipaddress.ip_network('192.168.1.0/24') # Pattern that matches any valid ipv4 address pattern = r'^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$' for row in data.split(): if (ip := re.search(pattern, row)): if ipaddress.IPv4Address(ip.group()) in network: print(f'{ip.group()} exists in {network}') ``` --- Output ``` 192.168.1.142 exists in 192.168.1.0/24 192.168.1.142 exists in 192.168.1.0/24 192.168.1.142 exists in 192.168.1.0/24 192.168.1.142 exists in 192.168.1.0/24 192.168.1.142 exists in 192.168.1.0/24 192.168.1.142 exists in 192.168.1.0/24 192.168.1.1 exists in 192.168.1.0/24 192.168.1.1 exists in 192.168.1.0/24 192.168.1.1 exists in 192.168.1.0/24 192.168.1.1 exists in 192.168.1.0/24 ```
13,526
16,024,041
I'm having issues sending unicode to SQL Server via pymssql: ``` In [1]: import pymssql conn = pymssql.connect(host='hostname', user='me', password='password', database='db') cursor = conn.cursor() In [2]: s = u'Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood' In [3]: s Out [3]: u'Monsieur le Cur\xe9 of the \xabNotre-Dame-de-Gr\xe2ce\xbb neighborhood' In [4]: cursor.execute("INSERT INTO MyTable VALUES(%s)", s.encode('utf-8')) cursor.execute("INSERT INTO MyTable VALUES(" + s.encode('utf-8') + "')") conn.commit() ``` Both execute statements yield the same garbled text on the SQL Server side: ``` 'Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood' ``` Maybe something is wrong with the way I'm encoding, or with my syntax. Someone suggested a stored procedure, but I'm hoping not to have to go that route. [This](https://stackoverflow.com/questions/4791114/python-to-mssql-encoding-problem) seems to be a very similar problem, with no real response.
2013/04/15
[ "https://Stackoverflow.com/questions/16024041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1599229/" ]
Ended up using pypyodbc instead. Needed some assistance to [connect](https://stackoverflow.com/questions/16024956/connecting-to-sql-server-with-pypyodbc), then used the [doc recipe](https://code.google.com/p/pypyodbc/wiki/A_HelloWorld_sample_to_access_mssql_with_python) for executing statements: ``` import pypyodbc conn = pypyodbc.connect("DRIVER={SQL Server};SERVER=my_server;UID=MyUserName;PWD=MyPassword;DATABASE=MyDB") cur = conn.cursor cur.execute('''INSERT INTO MyDB(rank,text,author) VALUES(?,?,?)''', (1, u'Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood', 'Charles S.')) cur.commit() ```
Here is something which worked for me: ``` # -*- coding: utf-8 -*- import pymssql conn = pymssql.connect(host='hostname', user='me', password='password', database='db') cursor = conn.cursor() s = u'Monsieur le Curé of the «Notre-Dame-de-Grâce» neighborhood' cursor.execute("INSERT INTO MyTable(col1) VALUES(%s)", s.encode('latin-1', "ignore")) conn.commit() cursor.close() conn.close() ``` *MyTable* is of collation: Latin1\_General\_CI\_AS and the column *col1* in it is of type varchar(MAX) My environment is: SQL Server 2008 & Python 2.7.10
13,528
40,700,192
The Virt-Manager is capable of modifying network interfaces of running domains, for example changing the connected network. I want to script this in Python with the libvirt API. ``` import libvirt conn = libvirt.open('qemu:///system') deb = conn.lookupByName('Testdebian') xml = deb.XMLDesc() xml = xml.replace('old-network-name', 'new-network-name') deb.undefine() deb = conn.defineXML(xml) ``` But that doesn't work. The network isn't changed. Can someone give me a tip on how to modify a running domain with libvirt? I couldn't find anything about that in the docs. But it must be possible, as the Virt-Manager can do it. Thanks for any help. Edit: I managed to perform the network change via virsh: ``` virsh update-device 16 Testdebian.xml ``` Testdebian.xml must contain the interface device only, not the whole domain XML. But how can I do this via the libvirt API? There seems to be no method to perform update-device through the API....
2016/11/20
[ "https://Stackoverflow.com/questions/40700192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6204346/" ]
This is very easy in [c++17](/questions/tagged/c%2b%2b17 "show questions tagged 'c++17'"). ``` template<class Tuple> decltype(auto) sum_components(Tuple const& tuple) { auto sum_them = [](auto const&... e)->decltype(auto) { return (e+...); }; return std::apply( sum_them, tuple ); }; ``` or `(...+e)` for the opposite fold direction. In previous versions, the right approach would be to write your own `apply` rather than writing a bespoke implementation. When your compiler updates, you can then delete code. In [c++14](/questions/tagged/c%2b%2b14 "show questions tagged 'c++14'"), I might do this: ``` // namespace for utility code: namespace utility { template<std::size_t...Is> auto index_over( std::index_sequence<Is...> ) { return [](auto&&f)->decltype(auto){ return decltype(f)(f)( std::integral_constant<std::size_t,Is>{}... ); }; } template<std::size_t N> auto index_upto() { return index_over( std::make_index_sequence<N>{} ); } } // namespace for semantic-equivalent replacements of `std` code: namespace notstd { template<class F, class Tuple> decltype(auto) apply( F&& f, Tuple&& tuple ) { using dTuple = std::decay_t<Tuple>; auto index = ::utility::index_upto< std::tuple_size<dTuple>{} >(); return index( [&](auto...Is)->decltype(auto){ auto target=std::ref(f); return target( std::get<Is>( std::forward<Tuple>(tuple) )... ); } ); } } ``` which is pretty close to `std::apply` in [c++14](/questions/tagged/c%2b%2b14 "show questions tagged 'c++14'"). (I abuse `std::ref` to get `INVOKE` semantics). (It does not work perfectly with rvalue invokers, but that is very corner case). In [c++11](/questions/tagged/c%2b%2b11 "show questions tagged 'c++11'"), I would advise upgrading your compiler at this point. In [c++03](/questions/tagged/c%2b%2b03 "show questions tagged 'c++03'") I'd advise upgrading your job at this point. --- All of the above do right or left folds. In some cases, a binary tree fold might be better. This is trickier. 
If your `+` does expression templates, the above code won't work well due to lifetime issues. You may have to add another template type for "afterwards, cast-to" to cause the temporary expression tree to evaluate in some cases.
With C++1z it's pretty simple with [fold expressions](http://en.cppreference.com/w/cpp/language/fold). First, forward the tuple to an `_impl` function and provide it with index sequence to access all tuple elements, then sum: ``` template<typename T, size_t... Is> auto sum_components_impl(T const& t, std::index_sequence<Is...>) { return (std::get<Is>(t) + ...); } template <class Tuple> int sum_components(const Tuple& t) { constexpr auto size = std::tuple_size<Tuple>{}; return sum_components_impl(t, std::make_index_sequence<size>{}); } ``` [demo](http://melpon.org/wandbox/permlink/abxtgr4crzUyxy3Q) --- A C++14 approach would be to recursively sum a variadic pack: ``` int sum() { return 0; } template<typename T, typename... Us> auto sum(T&& t, Us&&... us) { return std::forward<T>(t) + sum(std::forward<Us>(us)...); } template<typename T, size_t... Is> auto sum_components_impl(T const& t, std::index_sequence<Is...>) { return sum(std::get<Is>(t)...); } template <class Tuple> int sum_components(const Tuple& t) { constexpr auto size = std::tuple_size<Tuple>{}; return sum_components_impl(t, std::make_index_sequence<size>{}); } ``` [demo](http://melpon.org/wandbox/permlink/2l5ivZY0sb2qwYaF) A C++11 approach would be the C++14 approach with custom implementation of `index_sequence`. For example from [here](https://stackoverflow.com/a/17426611/2456565). --- As @ildjarn pointed out in the comments, the above examples are both employing right folds, while many programmers expect left folds in their code. The C++1z version is trivially changeable: ``` template<typename T, size_t... Is> auto sum_components_impl(T const& t, std::index_sequence<Is...>) { return (... + std::get<Is>(t)); } ``` [demo](http://melpon.org/wandbox/permlink/uOvNgrUU240Ub6wt) And the C++14 isn't much worse, but there are more changes: ``` template<typename T, typename... Us> auto sum(T&& t, Us&&... us) { return sum(std::forward<Us>(us)...) + std::forward<T>(t); } template<typename T, size_t... 
Is> auto sum_components_impl(T const& t, std::index_sequence<Is...>) { constexpr auto last_index = sizeof...(Is) - 1; return sum(std::get<last_index - Is>(t)...); } ``` [demo](http://melpon.org/wandbox/permlink/J1LHuZKYFbn5AmYV)
13,531
43,628,733
I wrote this code to display contents of a list in grid form . It works fine for the alphabet list . But when i try to run it with a randomly generated list it gives an list index out of range error . Here is the full code: import random ``` #barebones 2d shell grid generator ''' Following list is a place holder you can add any list data to show in a grid pattern with this tool ''' lis = ['a','b','c','d','e','f','g','h','j','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] newLis = [] #generates random list def lisGen(): length = 20 # random.randint(10,20) for i in range(length): value = random.randint(1,9) newLis.append(str(value)) lisGen() askRow = input('Enter number of rows :') askColumns = input('Enter number of columns :') def gridGen(row,column): j=0 cnt = int(row) while (cnt>0): for i in range(int(column)): print(' '+'-',end='') print('\n',end='') #this is the output content loop for i in range(int(column)): if j<len(lis): print('|'+newLis[j],end='') j += 1 else: print('|'+' ',end='') print('|',end='') print('\n',end='') cnt -= 1 for i in range(int(column)): print(' '+'-',end='') print('\n',end='') gridGen(askRow,askColumns) ``` The expected/correct output ,using the alphabet list(lis): ``` Enter number of rows :7 Enter number of columns :7 - - - - - - - |a|b|c|d|e|f|g| - - - - - - - |h|j|i|j|k|l|m| - - - - - - - |n|o|p|q|r|s|t| - - - - - - - |u|v|w|x|y|z| | - - - - - - - | | | | | | | | - - - - - - - | | | | | | | | - - - - - - - | | | | | | | | - - - - - - - ``` The error output when used randomly generated list ( newLis ): ``` Enter number of rows :7 Enter number of columns :7 - - - - - - - |9|2|1|4|7|5|4| - - - - - - - |9|7|7|3|2|1|3| - - - - - - - |7|5|4|1|2|3Traceback (most recent call last): File "D:\01-Mywares\python\2d shell graphics\gridGen.py", line 56, in <module> gridGen(askRow,askColumns) File "D:\01-Mywares\python\2d shell graphics\gridGen.py", line 40, in gridGen print('|'+newLis[j],end='') IndexError: list index out 
of range ```
2017/04/26
[ "https://Stackoverflow.com/questions/43628733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5698361/" ]
I suggest you use the function '.load' rather than '.csv', something like this: ``` data = sc.read.load(path_to_file, format='com.databricks.spark.csv', header='true', inferSchema='true').cache() ``` Of you course you can add more options. Then you can simply get you want: ``` data.columns ``` Another way of doing this (to get the columns) is to use it this way: ``` data = sc.textFile(path_to_file) ``` And to get the headers (columns) just use ``` data.first() ``` Looks like you are trying to get your schema from your csv file without opening it! The above should help you to get them and hence manipulate whatever you like. Note: to use '.columns' your 'sc' should be configured as: ``` spark = SparkSession.builder \ .master("yarn") \ .appName("experiment-airbnb") \ .enableHiveSupport() \ .getOrCreate() sc = SQLContext(spark) ``` Good luck!
It would be good if you can provide some sample data next time. How should we know how your csv looks like. Concerning your question, it looks like that your csv column is not a decimal all the time. InferSchema takes the first row and assign a datatype, in your case, it is a [DecimalType](http://spark.apache.org/docs/2.1.0/api/python/_modules/pyspark/sql/types.html) but then in the second row you might have a text so that the error would occur. If you don't infer the schema then, of course, it would work since everything will be cast as a StringType.
13,532
48,761,673
I want to solve a case where I know what the contents of the string output will be, but I am not sure about the order of the contents inside the output. Say the expected contents of my output are `['this','output','can','be','jumbled','in','any','order']`, and the output can be `'this can in any order jumbled output be'` or `'this order in any can output jumbled be'`. How do I write a regular expression in Python to solve this case?
2018/02/13
[ "https://Stackoverflow.com/questions/48761673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7534349/" ]
Use [`contains(where:)`](https://developer.apple.com/documentation/swift/sequence/2905153-contains) on the dictionary values: ``` // Enable button if at least one value is not nil: button.isEnabled = dict.values.contains(where: { $0 != nil }) ``` Or ``` // Enable button if no value is nil: button.isEnabled = !dict.values.contains(where: { $0 == nil }) ```
You can use [`filter`](https://developer.apple.com/documentation/swift/sequence/2905694-filter) to check if any value is nil in a dictionary. ``` button.isEnabled = dict.filter { $1 == nil }.isEmpty ```
13,535
53,140,438
How to create cumulative sum (new\_supply)in dataframe python from demand column from table ``` item Date supply demand A 2018-01-01 0 10 A 2018-01-02 0 15 A 2018-01-03 100 30 A 2018-01-04 0 10 A 2018-01-05 0 40 A 2018-01-06 50 50 A 2018-01-07 0 10 B 2018-01-01 0 20 B 2018-01-02 0 30 B 2018-01-03 20 60 B 2018-01-04 0 20 B 2018-01-05 100 10 B 2018-01-06 0 20 B 2018-01-07 0 30 ``` New Desired table from the above table ``` item Date supply demand new_supply A 2018-01-01 0 10 0 A 2018-01-02 0 15 0 A 2018-01-03 100 30 55 A 2018-01-04 0 10 0 A 2018-01-05 0 40 0 A 2018-01-06 50 50 100 A 2018-01-07 0 10 0 B 2018-01-01 0 20 0 B 2018-01-02 0 30 0 B 2018-01-03 20 60 110 B 2018-01-04 0 20 0 B 2018-01-05 100 10 140 B 2018-01-06 0 20 0 B 2018-01-07 0 30 0 ```
2018/11/04
[ "https://Stackoverflow.com/questions/53140438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10603056/" ]
Make the Python file executable: `chmod +x Test.py`
Why do you have to include the logic inside a class?

```
note = 10
if note >= 10:
    print("yes")
else:
    print("NO")
```

Just this will do, remove the class
13,540
3,885,846
I'd like to call a .py file from within Python. It is in the same directory. Effectively, I would like the same behavior as calling `python foo.py` from the command line without using any of the command-line tools. How should I do this?
2010/10/07
[ "https://Stackoverflow.com/questions/3885846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450054/" ]
It's not quite clear (at least to me) what you mean by using "none of the command-line tools". To run a program in a subprocess, one usually uses the `subprocess` module. However, if both the calling and the callee are python scripts, there is another alternative, which is to use the `multiprocessing` module. For example, you can organize foo.py like this: ``` def main(): ... if __name__=='__main__': main() ``` Then in the calling script, test.py: ``` import multiprocessing as mp import foo proc=mp.Process(target=foo.main) proc.start() # Do stuff while foo.main is running # Wait until foo.main has ended proc.join() # Continue doing more stuff ```
``` execfile('foo.py') ``` See also: * [Further reading on execfile](http://docs.python.org/library/functions.html#execfile)
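Note that `execfile` exists only in Python 2; it was removed in Python 3. A hedged sketch of the usual Python 3 equivalents (the `foo.py` here is a throwaway file created just for the demo):

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway foo.py just for this demo.
with tempfile.TemporaryDirectory() as d:
    foo = os.path.join(d, "foo.py")
    with open(foo, "w") as f:
        f.write("print('hello from foo')\n")

    # Closest to running `python foo.py` on the command line:
    # the script runs in a fresh interpreter process.
    result = subprocess.run([sys.executable, foo],
                            capture_output=True, text=True)
    print(result.stdout.strip())  # hello from foo

    # In-process alternative, closest to what execfile() did:
    import runpy
    runpy.run_path(foo, run_name="__main__")  # prints: hello from foo
```

`subprocess` matches the question's "same behavior as `python foo.py`" most literally; `runpy.run_path` runs the file inside the current interpreter, so it shares the caller's process (and its `if __name__ == '__main__'` block still fires thanks to `run_name="__main__"`).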
13,542
61,270,154
I used to have my app on Heroku and the way it worked there was that I had 2 buildpacks. One for NodeJS and one for Python. Heroku ran `npm run build` and then Django served the files from the `build` folder. I use Code Pipeline on AWS to deploy a new version of my app every time there is a new push on my GitHub repository. Since I couldn't figure out how to run `npm run build` in a python environment in EB, I had a workaround. I ran `npm run build` and pushed it to my repository (removed the `build` folder from .gitignore) and then Django served the files on EB. However, this is not the best solution and I was wondering if anyone knows how to run `npm run build` the way Heroku can do it with their NodeJS buildpack for a python app on EB.
2020/04/17
[ "https://Stackoverflow.com/questions/61270154", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11804213/" ]
So I figured out one solution that worked for me. Since I want to create the build version of my app on the server the way Heroku does it with the NodeJS buildpack, I had to create a command that installs node like this: ``` container_commands: 01_install_node: command: "curl -sL https://rpm.nodesource.com/setup_12.x | sudo bash - && sudo yum install nodejs" ignoreErrors: false ``` And then to create the build version of the react app on a Python Environment EB, I added the following command: ``` container_commands: 02_react: command: "npm install && npm run build" ignoreErrors: false ``` So of course, after the build version is created, you should collect static files, so here is how my working config file looked at the end: ``` option_settings: aws:elasticbeanstalk:container:python: WSGIPath: <project_name>/wsgi.py aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: <project_name>.settings aws:elasticbeanstalk:container:python:staticfiles: /static/: staticfiles/ container_commands: 01_install_node: command: "curl -sL https://rpm.nodesource.com/setup_12.x | sudo bash - && sudo yum install nodejs" ignoreErrors: false 02_react: command: "npm install && npm run build" ignoreErrors: false 03_collectstatic: command: "django-admin.py collectstatic --noinput" ``` Hope this helps anyone who encounters the same
I don't know exactly Python but I guess you can adapt for you case. Elastic Beanstalk for Node.js platform use by default `app.js`, then `server.js`, and then `npm start` (in that order) to start your application. You can change this behavior with **configuration files**. Below the steps to accomplish with Node.js: 1. Create the following file `.ebextensions/<your-config-file-name>.config` with the following content: ``` option_settings: aws:elasticbeanstalk:container:nodejs: NodeCommand: "npm run eb:prod" ``` 2. Edit your `package.json` to create the `eb:prod` command. For instance: ``` "scripts": { "start": "razzle start", "build": "razzle build", "test": "razzle test --env=jsdom", "start:prod": "NODE_ENV=production node build/server.js", "eb:prod": "npm run build && npm run start:prod" } ``` 3. You may faced permission denied errors during your build. To solve this problem you can create `.npmrc` file with the following content: ``` # Force npm to run node-gyp also as root unsafe-perm=true ``` If you need more details, I wrote a blogpost about it: [I deployed a server-side React app with AWS Elastic Beanstalk. Here’s what I learned.](https://medium.com/@johanrin/i-deployed-a-server-side-react-app-with-aws-elastic-beanstalk-heres-what-i-learned-34c8399079c5?source=friends_link&sk=6e05dee451fe68f3b290ff70c582bfb5)
13,544