| input_text_instruct | output_text |
|---|---|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python tqdm process_map: Append list shared between processes?<p>I want to share a list to append output from parallel threads, started by <code>process_map</code> from <code>tqdm</code>. (The reason why I want to use <code>process_map</code> is the nice progress indicator and the <code>max_workers=</code> option.)</p>
<p>I have tried to use <code>from multiprocessing import Manager</code> to create the shared list, but I am doing something wrong here: My code prints an empty <code>shared_list</code>, but it should print a list with 20 numbers, correct order is not important.</p>
<p>Any help would be greatly appreciated, thank you in advance!</p>
<pre><code>import time
from tqdm.contrib.concurrent import process_map
from multiprocessing import Manager

shared_list = []

def worker(i):
    global shared_list
    time.sleep(1)
    shared_list.append(i)

if __name__ == '__main__':
    manager = Manager()
    shared_list = manager.list()
    process_map(worker, range(20), max_workers=5)
    print(shared_list)
</code></pre>
|
<p>You didn't specify what platform you are running under (you are supposed to tag your question with your platform whenever you tag a question with <code>multiprocessing</code>), but it appears you are running under a platform that uses <code>spawn</code> to create new processes (such as Windows). This means that when a new process is launched, an empty address space is created, a new Python interpreter is launched and the source is re-executed from the top.</p>
<p>So although the block that begins <code>if __name__ == '__main__':</code> assigns a managed list to <code>shared_list</code>, each process created for the pool will re-execute <code>shared_list = []</code>, clobbering that initial assignment.</p>
<p>You can pass <code>shared_list</code> as the first argument to your worker function:</p>
<pre class="lang-py prettyprint-override"><code>import time
from tqdm.contrib.concurrent import process_map
from multiprocessing import Manager
from functools import partial

def worker(shared_list, i):
    time.sleep(1)
    shared_list.append(i)

if __name__ == '__main__':
    manager = Manager()
    shared_list = manager.list()
    process_map(partial(worker, shared_list), range(20), max_workers=5)
    print(shared_list)
</code></pre>
<p>If <code>process_map</code> supported the <em>initializer</em> and <em>initargs</em> arguments in the same way as the <code>ProcessPoolExecutor</code> class does (<strong>it appears that it does not</strong>), then you could do:</p>
<pre class="lang-py prettyprint-override"><code>import time
from tqdm.contrib.concurrent import process_map
from multiprocessing import Manager

def init_pool(the_list):
    global shared_list
    shared_list = the_list

def worker(i):
    time.sleep(1)
    shared_list.append(i)

if __name__ == '__main__':
    manager = Manager()
    shared_list = manager.list()
    process_map(worker, range(20), max_workers=5, initializer=init_pool, initargs=(shared_list,))
    print(shared_list)
</code></pre>
<p><strong>Comment</strong></p>
<p>This has nothing to do <em>per se</em> with your original problem, but for this type of problem consider a different structure. Your worker function (coincidentally named <code>worker</code>) appends elements to the managed list in a non-deterministic order, since you have no control over the scheduling of the pool processes. Instead, you could use a <code>multiprocessing.Array</code> instance initialized as follows:</p>
<pre class="lang-py prettyprint-override"><code>shared_list = Array('i', [0] * 20, lock=False)
</code></pre>
<p>Your worker function then becomes:</p>
<pre class="lang-py prettyprint-override"><code>def worker(i):
    time.sleep(1)
    shared_list[i] = i
</code></pre>
<p>Here the array is being stored in shared memory and there is not even a need for locked access since each invocation of <code>worker</code> is accessing a different index of the array. Accessing elements of a shared memory Array is <em>much</em> faster than accessing elements of a managed list. The only problem is that references to shared memory cannot be passed as arguments and we saw that <code>process_map</code> does not support the <em>initializer</em> and <em>initargs</em> arguments. So you would have to use lower-level methods. For example:</p>
<pre class="lang-py prettyprint-override"><code>import time
from multiprocessing import Pool, Array
from tqdm import tqdm

def init_pool(the_list):
    global shared_list
    shared_list = the_list

def worker(i):
    time.sleep(1)
    shared_list[i] = i

if __name__ == '__main__':
    # Preallocate 20 slots for the array in shared memory.
    # We don't require a lock since each worker invocation accesses a different index:
    args = range(20)
    shared_list = Array('i', [0] * len(args), lock=False)
    with tqdm(total=len(args)) as pbar:
        pool = Pool(5, initializer=init_pool, initargs=(shared_list,))
        for result in pool.imap_unordered(worker, args):
            pbar.update(1)
    # print out elements one at a time:
    for elem in shared_list:
        print(elem)
    # print out all elements at once (must first convert to a regular list):
    print(list(shared_list))
</code></pre>
<p><strong>Comment 2</strong></p>
<p>I would avoid using <code>process_map</code>. This function is based on the <code>ProcessPoolExecutor.map</code> method, which is required to return results in an order corresponding to the elements of the <em>iterable</em> being passed, <em>not</em> in the order of completion. Imagine what happens if, for some reason, the process handling the first submitted task (<code>i</code> value 0 in our case) takes a very long time and turns out to be the last task completed. You would see the <code>tqdm</code> progress bar do nothing for a long time until that first task completed. But at that point all the other submitted tasks have already completed, so the progress bar would jump from 0 to 100% instantaneously. Modify function <code>worker</code> as follows to see this in action:</p>
<pre class="lang-py prettyprint-override"><code>def worker(shared_list, i):
    if i == 0:
        time.sleep(5)
    else:
        time.sleep(.25)
    shared_list.append(i)
</code></pre>
<p>The code version I offered above that uses <code>Pool.imap_unordered</code> is allowed to return results out of order, and with the default <em>chunksize</em> value of 1 it will return them in the order of completion. The progress bar will advance much more smoothly.</p>
<p><strong>Comment 3</strong></p>
<p>There also seems to be a bug in <code>tqdm</code>. The following program demonstrates how you would use the low-level <code>tqdm</code> calls, this time with the <code>concurrent.futures</code> module. Unfortunately, its <code>ProcessPoolExecutor</code> class (for multiprocessing) and <code>ThreadPoolExecutor</code> class (for multithreading) have no equivalent to the <code>imap_unordered</code> method. You have to use the <code>submit</code> method (whose <code>multiprocessing.pool.Pool</code> analog is the <code>apply_async</code> method), which returns a <code>Future</code> instance on which you can call the <code>result</code> method to block until completion and get the result of the submitted task. You would <code>submit</code> a number of tasks, store the returned <code>Future</code> instances in a list, and then use the <code>as_completed</code> function to iterate over the <code>Future</code> instances in order of completion.</p>
<p>This demo uses threading, creates a thread pool whose size is 20, and submits 20 tasks, so all tasks should start at the same time. The sleep time for <code>worker1</code> varies so that the smaller the value of the <code>i</code> argument, the longer the sleep. This program creates the pool and submits the tasks 4 times. The first time, the return values are just printed. The second time a <code>tqdm</code> progress bar is used; the results are as you would expect. The third time <code>worker2</code> is used with the <code>tqdm</code> progress bar. The difference is that for all values of <code>i != 0</code> the sleep time is a constant (.25 seconds), so the tasks for <code>i</code> values 1, 2, ... 19 should complete at roughly the same time. You would therefore expect the progress bar to jump to 95% within a very short time and then wait for the <code>i == 0</code> task to complete. What you observe, however, is the opposite: the progress bar goes to 5%, hangs there for a long time, and then jumps to 100%. The fourth case uses <code>worker2</code> with my own "progress bar", which behaves as you would expect.</p>
<p>This is <code>tqdm</code> 4.61.1 under Python 3.8.5. I have tested this under Windows and Linux. Does anyone have an explanation for this behavior?</p>
<pre class="lang-py prettyprint-override"><code>import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from tqdm import tqdm
import sys

class MyProgressBar:
    def __init__(self, n_tasks):
        self._task_count = n_tasks
        self._completed = 0
        self.update()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print(file=sys.stderr)
        return False

    def update(self, count=0):
        self._completed += count
        print(f'\r{self._completed} of {self._task_count} task(s) completed.', end='', flush=True)

def worker1(i):
    if i == 0:
        time.sleep(8)
    else:
        time.sleep(5 - i/5)
    return i

def worker2(i):
    if i == 0:
        time.sleep(8)
    else:
        time.sleep(.25)
    return i

if __name__ == '__main__':
    args = range(20)

    with ThreadPoolExecutor(max_workers=20) as pool:
        futures = [pool.submit(worker1, arg) for arg in args]
        for future in as_completed(futures):
            print(future.result())

    with ThreadPoolExecutor(max_workers=20) as pool:
        with tqdm(total=len(args)) as pbar:
            futures = [pool.submit(worker1, arg) for arg in args]
            for future in as_completed(futures):
                future.result()
                pbar.update(1)

    with ThreadPoolExecutor(max_workers=20) as pool:
        with tqdm(total=len(args)) as pbar:
            futures = [pool.submit(worker2, arg) for arg in args]
            for future in as_completed(futures):
                future.result()
                pbar.update(1)

    # Works with my progress "bar":
    with ThreadPoolExecutor(max_workers=20) as pool:
        with MyProgressBar(len(args)) as pbar:
            futures = [pool.submit(worker2, arg) for arg in args]
            for future in as_completed(futures):
                future.result()
                pbar.update(1)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Calculating the range of values in a Pandas DataFrame using groupby function<p>I have a dataset which has feature 'abdomcirc' that has multiple values per ChildID, like so:</p>
<pre><code> ChildID abdomcirc
0 1 273
1 1 267
2 1 294
3 2 136
4 2 248
</code></pre>
<p>I want to calculate the range of <code>abdomcirc</code> values per <code>ChildID</code>. So I want to get these results:</p>
<pre><code> ChildID range
0 1 27
1 2 112
</code></pre>
<p>So I first tried this:</p>
<pre class="lang-py prettyprint-override"><code>df["range"] = df.groupby('ChildID')["mussabdomcirc"].transform('range')
</code></pre>
<p>But I got this error <strong>ValueError: 'range' is not a valid function name for transform(name)</strong></p>
<p>So, as suggested in the answer to this <a href="https://stackoverflow.com/questions/39560578/python-calculating-the-rangehighest-lowest-in-different-group">question</a>, I tried the following line:</p>
<pre class="lang-py prettyprint-override"><code>df["range"] = df.groupby('ChildID').apply(lambda x: x.High.max() - x.Low.min())
</code></pre>
<p>But I got this error:
<strong>AttributeError: 'DataFrame' object has no attribute 'High'</strong></p>
<p>Not sure why I am getting this error. Any suggestion on how to successfully calculate the range of a group of values in a dataframe?</p>
|
<p><code>High</code> is not a column in <code>df</code>; replace <code>High</code> (and <code>Low</code>) with your actual column name:</p>
<pre><code>df.groupby("ChildID").apply(lambda x: x['abdomcirc'].max() - x['abdomcirc'].min())
</code></pre>
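<p>For completeness, here is a runnable sketch of the full solution (assuming the column really is named <code>abdomcirc</code>, as in the sample data, and that you want the result as its own small frame):</p>

```python
import pandas as pd

df = pd.DataFrame({"ChildID": [1, 1, 1, 2, 2],
                   "abdomcirc": [273, 267, 294, 136, 248]})

# max - min per group; reset_index(name=...) names the resulting column
result = (df.groupby("ChildID")["abdomcirc"]
            .agg(lambda s: s.max() - s.min())
            .reset_index(name="range"))
print(result)  # ChildID 1 -> 27, ChildID 2 -> 112
```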
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
remove parent node depending on the child element values<p>I have many XML files like below sample input file.</p>
<p>What I want to get is that eliminating <code><b></code> nodes which are not including the value of banana in the child element node.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><a>
<header>
fruit
</header>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
<fruitlist>
<d>apple</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>lemon</d>
</fruitlist>
<fruitlist>
<d>tomato</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>lemon</d>
</fruitlist>
<fruitlist>
<d>kiwi</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>strawberry</d>
</fruitlist>
</b>
</a></code></pre>
</div>
</div>
</p>
<p>This is what I want:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><a>
<header>
fruit
</header>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
<fruitlist>
<d>apple</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
</b>
</a></code></pre>
</div>
</div>
</p>
<p>I make my code like this:</p>
<pre><code>def removebanana(diretories):
    xmlFiles = diretories + "/*.xml"
    dirloc = directories + "/result"
    for fname in glob.glob(xmlFiles):
        name = os.path.basename(fname)
        content = open(fname, "rt", encoding="utf-8", errors="ignore")
        root = tree.getroot()
        for b in root.findall("b"):
            dlist = []
            for b.find("d") is not None:
                d = str(drug.find("d").text)
                dlist.append(d)
            for dd in dlist:
                dd = dd.strip()
                if dd.lower() == "banana":
                    cnt += 1
            if cnt == 0:
                root.remove(b)
                num += 0
            filename = f"{dirloc}/{name}"
            cnt += 1
            tree.write(filename)
</code></pre>
<p>However, the result is the same as the sample input file.</p>
|
<p>If I understand you correctly, this is what you need to do:</p>
<pre><code>fruits = """[your code above]"""

import xml.etree.ElementTree as ET

tree = ET.fromstring(fruits)
targets = tree.findall('.//b')
for target in targets:
    f_list = [t.text for t in target.findall('.//d')]
    if "banana" not in f_list:
        tree.remove(target)
print(ET.tostring(tree).decode())

# to write to file:
tree = ET.ElementTree(tree)
tree.write("test.xml", encoding="utf-8")
</code></pre>
<p>Output:</p>
<pre><code><a>
<header>
fruit
</header>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
<fruitlist>
<d>apple</d>
</fruitlist>
</b>
<b>
<fruitlist>
<d>banana</d>
</fruitlist>
</b>
</a>
</code></pre>
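<p>One caveat worth adding (my note, not part of the original answer): <code>tree.remove(target)</code> only works here because every <code><b></code> element is a direct child of the root; <code>Element.remove</code> must be called on the direct parent. If the nodes could be nested deeper, you would need a parent map. A sketch:</p>

```python
import xml.etree.ElementTree as ET

xml = "<a><b><fruitlist><d>banana</d></fruitlist></b><b><fruitlist><d>lemon</d></fruitlist></b></a>"
root = ET.fromstring(xml)

# map every element to its parent so we can remove a node at any depth
parent = {child: p for p in root.iter() for child in p}
for b in root.findall('.//b'):
    if "banana" not in [d.text for d in b.findall('.//d')]:
        parent[b].remove(b)

print(ET.tostring(root).decode())
# <a><b><fruitlist><d>banana</d></fruitlist></b></a>
```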
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to change numbers in jsonl file and save it<p>I have a jsonl file with content</p>
<p>How do I read the file, change the number after each <code>label</code> key to a random 0 or 1, and save the converted file back, in Python?</p>
<pre><code>{"idx": 0, "passage": {"questions": [{"idx": 0, "answers": [{"idx": 0, "label": 0}, {"idx": 1, "label": 0}, {"idx": 2, "label": 0}, {"idx": 3, "label": 0}]}, {"idx": 1, "answers": [{"idx": 4, "label": 0}, {"idx": 5, "label": 0}, {"idx": 6, "label": 0}, {"idx": 7, "label": 0}]}, {"idx": 2, "answers": [{"idx": 8, "label": 1}, {"idx": 9, "label": 0}, {"idx": 10, "label": 0}, {"idx": 11, "label": 0}]}, {"idx": 3, "answers": [{"idx": 12, "label": 0}, {"idx": 13, "label": 0}, {"idx": 14, "label": 0}, {"idx": 15, "label": 0}]}, {"idx": 4, "answers": [{"idx": 16, "label": 0}, {"idx": 17, "label": 0}, {"idx": 18, "label": 0}, {"idx": 19, "label": 0}, {"idx": 20, "label": 0}]}, {"idx": 5, "answers": [{"idx": 21, "label": 0}, {"idx": 22, "label": 0}, {"idx": 23, "label": 0}, {"idx": 24, "label": 0}, {"idx": 25, "label": 0}]}, {"idx": 6, "answers": [{"idx": 26, "label": 0}, {"idx": 27, "label": 0}, {"idx": 28, "label": 0}, {"idx": 29, "label": 0}, {"idx": 30, "label": 0}]}, {"idx": 7, "answers": [{"idx": 31, "label": 0}, {"idx": 32, "label": 0}, {"idx": 33, "label": 0}, {"idx": 34, "label": 0}, {"idx": 35, "label": 0}]}, {"idx": 8, "answers": [{"idx": 36, "label": 0}, {"idx": 37, "label": 0}, {"idx": 38, "label": 0}, {"idx": 39, "label": 0}, {"idx": 40, "label": 0}]}, {"idx": 9, "answers": [{"idx": 41, "label": 0}, {"idx": 42, "label": 0}, {"idx": 43, "label": 0}, {"idx": 44, "label": 0}, {"idx": 45, "label": 0}]}]}}
{"idx": 1, "passage": {"questions": [{"idx": 10, "answers": [{"idx": 46, "label": 0}, {"idx": 47, "label": 0}, {"idx": 48, "label": 0}, {"idx": 49, "label": 0}, {"idx": 50, "label": 0}]}, {"idx": 11, "answers": [{"idx": 51, "label": 0}, {"idx": 52, "label": 0}, {"idx": 53, "label": 0}, {"idx": 54, "label": 0}, {"idx": 55, "label": 0}]}]}}
{"idx": 2, "passage": {"questions": [{"idx": 12, "answers": [{"idx": 56, "label": 0}, {"idx": 57, "label": 0}, {"idx": 58, "label": 0}, {"idx": 59, "label": 0}, {"idx": 60, "label": 0}]}, {"idx": 13, "answers": [{"idx": 61, "label": 0}, {"idx": 62, "label": 0}, {"idx": 63, "label": 0}, {"idx": 64, "label": 0}, {"idx": 65, "label": 0}]}, {"idx": 14, "answers": [{"idx": 66, "label": 0}, {"idx": 67, "label": 0}, {"idx": 68, "label": 0}, {"idx": 69, "label": 0}, {"idx": 70, "label": 0}]}, {"idx": 15, "answers": [{"idx": 71, "label": 0}, {"idx": 72, "label": 0}, {"idx": 73, "label": 0}, {"idx": 74, "label": 0}, {"idx": 75, "label": 0}]}, {"idx": 16, "answers": [{"idx": 76, "label": 0}, {"idx": 77, "label": 0}, {"idx": 78, "label": 0}, {"idx": 79, "label": 0}, {"idx": 80, "label": 0}]}, {"idx": 17, "answers": [{"idx": 81, "label": 0}, {"idx": 82, "label": 0}, {"idx": 83, "label": 0}, {"idx": 84, "label": 0}, {"idx": 85, "label": 0}]}, {"idx": 18, "answers": [{"idx": 86, "label": 0}, {"idx": 87, "label": 0}, {"idx": 88, "label": 0}, {"idx": 89, "label": 0}, {"idx": 90, "label": 0}]}, {"idx": 19, "answers": [{"idx": 91, "label": 0}, {"idx": 92, "label": 0}, {"idx": 93, "label": 0}, {"idx": 94, "label": 0}, {"idx": 95, "label": 0}]}]}}
{"idx": 3, "passage": {"questions": [{"idx": 20, "answers": [{"idx": 96, "label": 0}, {"idx": 97, "label": 0}, {"idx": 98, "label": 0}, {"idx": 99, "label": 0}]}, {"idx": 21, "answers": [{"idx": 100, "label": 0}, {"idx": 101, "label": 0}, {"idx": 102, "label": 0}, {"idx": 103, "label": 0}]}, {"idx": 22, "answers": [{"idx": 104, "label": 0}, {"idx": 105, "label": 0}, {"idx": 106, "label": 0}, {"idx": 107, "label": 0}]}, {"idx": 23, "answers": [{"idx": 108, "label": 0}, {"idx": 109, "label": 0}, {"idx": 110, "label": 0}, {"idx": 111, "label": 0}]}, {"idx": 24, "answers": [{"idx": 112, "label": 0}, {"idx": 113, "label": 0}, {"idx": 114, "label": 0}, {"idx": 115, "label": 0}]}, {"idx": 25, "answers": [{"idx": 116, "label": 0}, {"idx": 117, "label": 0}, {"idx": 118, "label": 0}, {"idx": 119, "label": 0}]}, {"idx": 26, "answers": [{"idx": 120, "label": 0}, {"idx": 121, "label": 0}, {"idx": 122, "label": 0}, {"idx": 123, "label": 0}]}, {"idx": 27, "answers": [{"idx": 124, "label": 0}, {"idx": 125, "label": 0}, {"idx": 126, "label": 0}, {"idx": 127, "label": 0}]}]}}
{"idx": 4, "passage": {"questions": [{"idx": 28, "answers": [{"idx": 128, "label": 1}, {"idx": 129, "label": 1}, {"idx": 130, "label": 1}, {"idx": 131, "label": 1}, {"idx": 132, "label": 1}]}, {"idx": 29, "answers": [{"idx": 133, "label": 0}, {"idx": 134, "label": 1}, {"idx": 135, "label": 1}, {"idx": 136, "label": 0}, {"idx": 137, "label": 1}]}, {"idx": 30, "answers": [{"idx": 138, "label": 0}, {"idx": 139, "label": 0}, {"idx": 140, "label": 1}, {"idx": 141, "label": 0}, {"idx": 142, "label": 0}]}, {"idx": 31, "answers": [{"idx": 143, "label": 0}, {"idx": 144, "label": 0}, {"idx": 145, "label": 0}, {"idx": 146, "label": 0}, {"idx": 147, "label": 0}]}, {"idx": 32, "answers": [{"idx": 148, "label": 0}, {"idx": 149, "label": 0}, {"idx": 150, "label": 0}, {"idx": 151, "label": 0}, {"idx": 152, "label": 0}]}, {"idx": 33, "answers": [{"idx": 153, "label": 0}, {"idx": 154, "label": 1}, {"idx": 155, "label": 1}, {"idx": 156, "label": 1}, {"idx": 157, "label": 1}]}, {"idx": 34, "answers": [{"idx": 158, "label": 0}, {"idx": 159, "label": 0}, {"idx": 160, "label": 0}, {"idx": 161, "label": 0}, {"idx": 162, "label": 0}]}, {"idx": 35, "answers": [{"idx": 163, "label": 0}, {"idx": 164, "label": 1}, {"idx": 165, "label": 1}, {"idx": 166, "label": 0}, {"idx": 167, "label": 0}]}, {"idx": 36, "answers": [{"idx": 168, "label": 0}, {"idx": 169, "label": 0}, {"idx": 170, "label": 1}, {"idx": 171, "label": 0}, {"idx": 172, "label": 0}]}, {"idx": 37, "answers": [{"idx": 173, "label": 1}, {"idx": 174, "label": 0}, {"idx": 175, "label": 0}, {"idx": 176, "label": 0}, {"idx": 177, "label": 0}]}, {"idx": 38, "answers": [{"idx": 178, "label": 0}, {"idx": 179, "label": 1}, {"idx": 180, "label": 1}, {"idx": 181, "label": 0}, {"idx": 182, "label": 1}]}, {"idx": 39, "answers": [{"idx": 183, "label": 1}, {"idx": 184, "label": 1}, {"idx": 185, "label": 1}, {"idx": 186, "label": 0}, {"idx": 187, "label": 0}]}, {"idx": 40, "answers": [{"idx": 188, "label": 0}, {"idx": 189, "label": 0}, {"idx": 190, 
"label": 1}, {"idx": 191, "label": 0}, {"idx": 192, "label": 0}]}, {"idx": 41, "answers": [{"idx": 193, "label": 0}, {"idx": 194, "label": 0}, {"idx": 195, "label": 0}, {"idx": 196, "label": 0}, {"idx": 197, "label": 0}]}]}}
{"idx": 5, "passage": {"questions": [{"idx": 42, "answers": [{"idx": 198, "label": 0}, {"idx": 199, "label": 0}]}, {"idx": 43, "answers": [{"idx": 200, "label": 1}, {"idx": 201, "label": 0}]}, {"idx": 44, "answers": [{"idx": 202, "label": 0}, {"idx": 203, "label": 0}, {"idx": 204, "label": 0}, {"idx": 205, "label": 0}]}, {"idx": 45, "answers": [{"idx": 206, "label": 0}, {"idx": 207, "label": 0}, {"idx": 208, "label": 0}, {"idx": 209, "label": 0}]}, {"idx": 46, "answers": [{"idx": 210, "label": 0}, {"idx": 211, "label": 0}, {"idx": 212, "label": 0}, {"idx": 213, "label": 0}]}, {"idx": 47, "answers": [{"idx": 214, "label": 0}, {"idx": 215, "label": 0}, {"idx": 216, "label": 0}, {"idx": 217, "label": 0}]}, {"idx": 48, "answers": [{"idx": 218, "label": 0}, {"idx": 219, "label": 0}, {"idx": 220, "label": 0}, {"idx": 221, "label": 0}, {"idx": 222, "label": 0}]}, {"idx": 49, "answers": [{"idx": 223, "label": 1}, {"idx": 224, "label": 0}, {"idx": 225, "label": 0}, {"idx": 226, "label": 0}, {"idx": 227, "label": 0}]}, {"idx": 50, "answers": [{"idx": 228, "label": 1}, {"idx": 229, "label": 0}, {"idx": 230, "label": 0}, {"idx": 231, "label": 0}, {"idx": 232, "label": 0}]}, {"idx": 51, "answers": [{"idx": 233, "label": 0}, {"idx": 234, "label": 0}, {"idx": 235, "label": 0}, {"idx": 236, "label": 0}, {"idx": 237, "label": 0}]}, {"idx": 52, "answers": [{"idx": 238, "label": 1}, {"idx": 239, "label": 0}, {"idx": 240, "label": 0}, {"idx": 241, "label": 0}, {"idx": 242, "label": 0}]}, {"idx": 53, "answers": [{"idx": 243, "label": 0}, {"idx": 244, "label": 0}, {"idx": 245, "label": 0}, {"idx": 246, "label": 1}, {"idx": 247, "label": 1}]}, {"idx": 54, "answers": [{"idx": 248, "label": 1}, {"idx": 249, "label": 1}, {"idx": 250, "label": 1}, {"idx": 251, "label": 1}, {"idx": 252, "label": 0}]}, {"idx": 55, "answers": [{"idx": 253, "label": 0}, {"idx": 254, "label": 0}, {"idx": 255, "label": 0}, {"idx": 256, "label": 0}, {"idx": 257, "label": 0}]}]}}
{"idx": 6, "passage": {"questions": [{"idx": 56, "answers": [{"idx": 258, "label": 1}, {"idx": 259, "label": 0}, {"idx": 260, "label": 1}, {"idx": 261, "label": 0}]}, {"idx": 57, "answers": [{"idx": 262, "label": 1}, {"idx": 263, "label": 1}, {"idx": 264, "label": 1}]}, {"idx": 58, "answers": [{"idx": 265, "label": 1}, {"idx": 266, "label": 1}, {"idx": 267, "label": 1}, {"idx": 268, "label": 1}]}, {"idx": 59, "answers": [{"idx": 269, "label": 1}, {"idx": 270, "label": 1}, {"idx": 271, "label": 1}, {"idx": 272, "label": 1}]}, {"idx": 60, "answers": [{"idx": 273, "label": 1}, {"idx": 274, "label": 1}, {"idx": 275, "label": 1}, {"idx": 276, "label": 1}]}, {"idx": 61, "answers": [{"idx": 277, "label": 1}, {"idx": 278, "label": 1}, {"idx": 279, "label": 1}, {"idx": 280, "label": 1}]}]}}
{"idx": 7, "passage": {"questions": [{"idx": 62, "answers": [{"idx": 281, "label": 0}, {"idx": 282, "label": 1}, {"idx": 283, "label": 1}, {"idx": 284, "label": 1}, {"idx": 285, "label": 0}]}, {"idx": 63, "answers": [{"idx": 286, "label": 0}, {"idx": 287, "label": 0}, {"idx": 288, "label": 0}, {"idx": 289, "label": 0}, {"idx": 290, "label": 1}]}, {"idx": 64, "answers": [{"idx": 291, "label": 0}, {"idx": 292, "label": 0}, {"idx": 293, "label": 0}, {"idx": 294, "label": 0}, {"idx": 295, "label": 0}]}, {"idx": 65, "answers": [{"idx": 296, "label": 1}, {"idx": 297, "label": 1}, {"idx": 298, "label": 1}, {"idx": 299, "label": 1}, {"idx": 300, "label": 1}]}, {"idx": 66, "answers": [{"idx": 301, "label": 1}, {"idx": 302, "label": 0}, {"idx": 303, "label": 1}, {"idx": 304, "label": 0}, {"idx": 305, "label": 1}]}, {"idx": 67, "answers": [{"idx": 306, "label": 0}, {"idx": 307, "label": 0}, {"idx": 308, "label": 0}, {"idx": 309, "label": 1}, {"idx": 310, "label": 1}]}, {"idx": 68, "answers": [{"idx": 311, "label": 0}, {"idx": 312, "label": 0}, {"idx": 313, "label": 0}, {"idx": 314, "label": 1}, {"idx": 315, "label": 0}]}, {"idx": 69, "answers": [{"idx": 316, "label": 1}, {"idx": 317, "label": 1}, {"idx": 318, "label": 1}, {"idx": 319, "label": 1}, {"idx": 320, "label": 1}]}]}}
{"idx": 8, "passage": {"questions": [{"idx": 70, "answers": [{"idx": 321, "label": 0}, {"idx": 322, "label": 0}, {"idx": 323, "label": 0}, {"idx": 324, "label": 0}]}, {"idx": 71, "answers": [{"idx": 325, "label": 1}, {"idx": 326, "label": 0}, {"idx": 327, "label": 0}, {"idx": 328, "label": 0}]}, {"idx": 72, "answers": [{"idx": 329, "label": 0}, {"idx": 330, "label": 0}, {"idx": 331, "label": 0}, {"idx": 332, "label": 0}, {"idx": 333, "label": 0}]}, {"idx": 73, "answers": [{"idx": 334, "label": 0}, {"idx": 335, "label": 0}, {"idx": 336, "label": 0}, {"idx": 337, "label": 0}, {"idx": 338, "label": 0}]}, {"idx": 74, "answers": [{"idx": 339, "label": 0}, {"idx": 340, "label": 0}, {"idx": 341, "label": 0}, {"idx": 342, "label": 1}, {"idx": 343, "label": 1}]}, {"idx": 75, "answers": [{"idx": 344, "label": 1}, {"idx": 345, "label": 1}, {"idx": 346, "label": 0}, {"idx": 347, "label": 0}, {"idx": 348, "label": 0}]}, {"idx": 76, "answers": [{"idx": 349, "label": 0}, {"idx": 350, "label": 1}, {"idx": 351, "label": 0}, {"idx": 352, "label": 0}, {"idx": 353, "label": 0}]}, {"idx": 77, "answers": [{"idx": 354, "label": 0}, {"idx": 355, "label": 0}, {"idx": 356, "label": 0}, {"idx": 357, "label": 1}, {"idx": 358, "label": 0}]}, {"idx": 78, "answers": [{"idx": 359, "label": 0}, {"idx": 360, "label": 1}, {"idx": 361, "label": 0}, {"idx": 362, "label": 0}, {"idx": 363, "label": 0}]}, {"idx": 79, "answers": [{"idx": 364, "label": 0}, {"idx": 365, "label": 0}, {"idx": 366, "label": 0}, {"idx": 367, "label": 0}, {"idx": 368, "label": 0}]}, {"idx": 80, "answers": [{"idx": 369, "label": 0}, {"idx": 370, "label": 0}, {"idx": 371, "label": 0}, {"idx": 372, "label": 0}, {"idx": 373, "label": 0}]}, {"idx": 81, "answers": [{"idx": 374, "label": 0}, {"idx": 375, "label": 0}, {"idx": 376, "label": 0}, {"idx": 377, "label": 0}, {"idx": 378, "label": 0}]}]}}
{"idx": 9, "passage": {"questions": [{"idx": 82, "answers": [{"idx": 379, "label": 0}, {"idx": 380, "label": 0}, {"idx": 381, "label": 0}, {"idx": 382, "label": 0}]}, {"idx": 83, "answers": [{"idx": 383, "label": 0}, {"idx": 384, "label": 1}, {"idx": 385, "label": 0}, {"idx": 386, "label": 0}]}, {"idx": 84, "answers": [{"idx": 387, "label": 0}, {"idx": 388, "label": 0}, {"idx": 389, "label": 0}, {"idx": 390, "label": 0}]}, {"idx": 85, "answers": [{"idx": 391, "label": 1}, {"idx": 392, "label": 0}, {"idx": 393, "label": 1}, {"idx": 394, "label": 1}]}, {"idx": 86, "answers": [{"idx": 395, "label": 0}, {"idx": 396, "label": 0}, {"idx": 397, "label": 0}, {"idx": 398, "label": 0}]}, {"idx": 87, "answers": [{"idx": 399, "label": 0}, {"idx": 400, "label": 0}, {"idx": 401, "label": 0}, {"idx": 402, "label": 1}]}, {"idx": 88, "answers": [{"idx": 403, "label": 0}, {"idx": 404, "label": 0}, {"idx": 405, "label": 1}, {"idx": 406, "label": 0}]}, {"idx": 89, "answers": [{"idx": 407, "label": 0}, {"idx": 408, "label": 0}, {"idx": 409, "label": 0}, {"idx": 410, "label": 1}]}]}}
</code></pre>
|
<pre><code>import json
import random

# read each decoded JSON line into a list
with open('test.jsonl', encoding='utf8') as f:
    data = [json.loads(line) for line in f]

# walk the structure and change the labels
for item in data:
    for q in item['passage']['questions']:
        for a in q['answers']:
            a['label'] = random.randint(0, 1)

# write each JSON line back to a new file
with open('test2.jsonl', 'w', encoding='utf8') as f:
    for item in data:
        json.dump(item, f)
        print(file=f)  # add a newline
</code></pre>
<p>You could write back to the same file, but it is safer to write to a new file and only rename it over the original once it has been written successfully.</p>
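<p>That write-then-rename pattern can be sketched like this (the function name and file paths are my own placeholders):</p>

```python
import json
import os
import random
import tempfile

def rewrite_labels(path):
    """Randomize every 'label' in a jsonl file, replacing it atomically."""
    with open(path, encoding='utf8') as f:
        data = [json.loads(line) for line in f]
    for item in data:
        for q in item['passage']['questions']:
            for a in q['answers']:
                a['label'] = random.randint(0, 1)
    # write to a temporary file in the same directory, then rename over the original
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix='.jsonl')
    with os.fdopen(fd, 'w', encoding='utf8') as f:
        for item in data:
            f.write(json.dumps(item) + '\n')
    os.replace(tmp_path, path)  # atomic on both POSIX and Windows
```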
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
perform multiprocessing on the same method called two times in each iteration<p>I'm trying to optimize the performance of my code which detects and tracks two small cars from two cameras. the cameras provide a bird's-eye view and the two cars drives from one Field of View (FOV) into the other.</p>
<p>Below is the structure of my code</p>
<pre><code>while camera1.isOpened and camera2.isOpened:
    if red_car1_in_FOV1:
        red_centroid, red_centroid_transformed = model.process( passed args)
    elif red_car1_in_FOV2:
        red_centroid, red_centroid_transformed = model.process( passed args)
    if blue_car1_in_FOV1:
        blue_centroid, blue_centroid_transformed = model.process( passed args)
    elif blue_car1_in_FOV2:
        blue_centroid, blue_centroid_transformed = model.process( passed args)
</code></pre>
<p>As you can see, the method <code>model.process( passed args)</code> is called twice in each iteration (once for the red car and once for the blue car).</p>
<p>So I believe I can run each call on its own CPU core in parallel, instead of running both on one core.</p>
<p>I searched for this and found the <code>multiprocessing</code> package, which contains classes like <code>Pool</code> and <code>Process</code>. However, I got confused about how to use them in my code.</p>
<p>Any help or suggestion is appreciated, thanks in advance.</p>
|
<p>If you are going to be executing these calls over and over again, then it is more efficient to create a process pool of processes to which you can submit "jobs". In this way you are not constantly creating a new process each time you need to do a new unit of work. It is also the easiest way of getting a return value back from another process. You can use either class <code>Pool</code> from module <code>multiprocessing</code> or class <code>ProcessPoolExecutor</code> from module <code>concurrent.futures</code>. Here is an example:</p>
<pre><code>from multiprocessing import Pool

def process(pool):
    """
    pool is a process pool of size 2
    """
    while camera1.isOpened and camera2.isOpened:
        if red_car1_in_FOV1:
            # submit work:
            red_result = pool.apply_async(model.process, args)
        elif red_car1_in_FOV2:
            # submit work:
            red_result = pool.apply_async(model.process, args)
        if blue_car1_in_FOV1:
            # submit work:
            blue_result = pool.apply_async(model.process, args)
        elif blue_car1_in_FOV2:
            # submit work:
            blue_result = pool.apply_async(model.process, args)
        # wait for completion:
        red_centroid, red_centroid_transformed = red_result.get()
        blue_centroid, blue_centroid_transformed = blue_result.get()

# required for Windows and other platforms that do not have a fork() call:
if __name__ == '__main__':
    # We only need a pool size of 2 for this usage:
    with Pool(2) as pool:
        process(pool)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python - Using recursive to replace string in python<p>I have written a recursive to filter the string , but the output isn't my expected, could you please help assist this ?</p>
<pre><code>s2 = "Hello"
def remove_text(s):
t = s.find(s2)
t2 = len(s2)
if t == -1:
return s
else:
if t == 0:
s = s[t2+1:]
else:
s = s[0:t-1] + s[t+len(s2):]
if s.find(s2) >= 0: #if still found s2 in s
remove_text(s)
s1 = "Hello my name is XXX Hello 12345 6 "
s3 = remove_text(s1)
print(s3)
</code></pre>
<p>the output i got always is "None" . I expected the output is:</p>
<pre><code>my name is XXX 12345 6
</code></pre>
|
<pre><code>s2 = "Hello"
def remove_text(s):
t = s.find(s2)
t2 = len(s2)
if t == -1:
return s
else:
if t == 0:
s = s[t2+1:]
else:
s = s[0:t-1] + s[t+len(s2):]
if s.find(s2) >= 0: #if still found s2 in s
s = remove_text(s)
return s
s1 = "Hello my name is XXX Hello 12345 6 "
s3 = remove_text(s1)
print(s3)
</code></pre>
<p>You forgot to write return. It is not enough to add return only at the end, so it is better to assign the result of a recursive call to the string s, and then return s</p>
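<p>For completeness: if recursion isn't a requirement, <code>str.replace</code> does the whole job in one call. Replacing the word together with the space after it mirrors what the slicing above effectively removes:</p>

```python
s2 = "Hello"
s1 = "Hello my name is XXX Hello 12345 6 "

# replace every occurrence of "Hello " (word plus trailing space) at once
result = s1.replace(s2 + " ", "")
print(result)  # -> "my name is XXX 12345 6 "
```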
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to count exact words in Python<p>I want to search a text and count how many times selected words occur. For simplicity, I'll say the text is "Does it fit?" and the words I want to count are "it" and "fit".</p>
<p>I've written the following code:</p>
<pre><code>mystring = 'Does it fit?'
search_words = 'it', 'fit'
for sw in search_words:
frequency = {}
count = mystring.count(sw.strip())
output = (sw + ',{}'.format(count))
print(output)
</code></pre>
<p>The output is</p>
<pre><code>it,2
fit,1
</code></pre>
<p>because the code counts the 'it' in 'fit' towards the total for 'it'.</p>
<p>The output I want is</p>
<pre><code>it,1
fit,1
</code></pre>
<p>I've tried changing line 5 to <code>count = mystring.count('\\b'+sw+'\\b'.strip())</code> but the count is then zero for each word. How can I get this to work?</p>
|
<p>Note that <code>search_words = 'it', 'fit'</code> actually creates a tuple, not a list, though it iterates the same way. Here's one way to count exact words:</p>
<pre><code>bad_chars = [';', ':', '!', '*', '?', '.']
res = {}
# strip punctuation once, then compare whole words
string = ''.join(filter(lambda i: i not in bad_chars, "does it fit?"))
words = string.split(" ")
for word in ["it", "fit"]:
    res[word] = sum(1 for w in words if w == word)
print(res)
</code></pre>
<p>By using <code>str.count</code> you were counting substring occurrences, so the <code>it</code> inside <code>fit</code> was counted as well — that's why you got 2 occurrences of <code>it</code>.</p>
<p>Here the code directly compares whole words after removing punctuation/special characters.</p>
<p>output:</p>
<pre><code>{'it': 1, 'fit': 1}
</code></pre>
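<p>Your <code>\b</code> attempt in the question failed because <code>str.count</code> treats the pattern as a literal string, not a regex. With the <code>re</code> module the word-boundary idea works directly:</p>

```python
import re

mystring = 'Does it fit?'
search_words = ('it', 'fit')

for sw in search_words:
    # \b matches a word boundary, so the 'it' inside 'fit' is not counted
    count = len(re.findall(r'\b' + re.escape(sw) + r'\b', mystring))
    print(f'{sw},{count}')
```

<p><code>re.escape</code> is there in case a search word ever contains a regex metacharacter.</p>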
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Most commonly used description use in each row for each unique id<p>I have a problem. I have articles and these articles have a unique <code>id</code>. The problem is, however, the <code>article</code> description aka <code>article</code> - this is not unique.
I would like to try changing the name of the <code>article</code> description <code>article</code> so that there is only one description left. I always want to use the <code>article</code> name that occurs most often.</p>
<p>I have tried something, however I don't know how to access that with <code>.apply(lambda x: ...</code> and write the most used item out to me and then set that as the name.</p>
<p>How can I change the description of the article so that only the most frequently mentioned article description is included?</p>
<pre class="lang-py prettyprint-override"><code> id article cost
0 1 Bendge 15.30
1 1 Bendge 15.30
2 1 Bendge V2 15.30
3 1 Bendge - volumne 2 15.30
4 5 SEF 14.89
5 1 Bendge 15.30
6 2 DFH 4.56
7 2 DFH 4.56
8 2 DFH V2 4.56
9 2 DFH - volumne 2 4.56
10 2 DFH 4.56
</code></pre>
<p>Code</p>
<pre class="lang-py prettyprint-override"><code>d = {'id': [1, 1, 1, 1, 5, 1, 2, 2, 2, 2, 2],
'article': ['Bendge', 'Bendge', 'Bendge V2', 'Bendge - volumne 2', 'SEF', 'Bendge',
'DFH', 'DFH', 'DFH V2', 'DFH - volumne 2', 'DFH'],
'cost': [15.30, 15.30, 15.30, 15.30, 14.89, 15.30,
4.56, 4.56, 4.56, 4.56, 4.56],
}
df = pd.DataFrame(data=d)
display(df)
df.loc[df['id'] == 1, ['id','article']].value_counts().head(1)
[OUT]
id article
1 Bendge 3
dtype: int64
df['article'] = df.apply(lambda x: x[['id','article']].value_counts().head(1))
[OUT]
KeyError: "None of [Index(['id', 'article'], dtype='object')] are in the [index]"
</code></pre>
<p>What I want</p>
<pre class="lang-py prettyprint-override"><code> id article cost
0 1 Bendge 15.30
1 1 Bendge 15.30
2 1 Bendge 15.30
3 1 Bendge 15.30
4 5 SEF 14.89
5 1 Bendge 15.30
6 2 DFH 4.56
7 2 DFH 4.56
8 2 DFH 4.56
9 2 DFH 4.56
10 2 DFH 4.56
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with a lambda function that calls <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> and takes the first index value (the most frequent article):</p>
<pre><code>df['article'] = df.groupby('id')['article'].transform(lambda x: x.value_counts().index[0])
print (df)
id article cost
0 1 Bendge 15.30
1 1 Bendge 15.30
2 1 Bendge 15.30
3 1 Bendge 15.30
4 5 SEF 14.89
5 1 Bendge 15.30
6 2 DFH 4.56
7 2 DFH 4.56
8 2 DFH 4.56
9 2 DFH 4.56
10 2 DFH 4.56
</code></pre>
<p>Another solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mode.html" rel="nofollow noreferrer"><code>Series.mode</code></a>, taking its first value:</p>
<pre><code>df['article'] = df.groupby('id')['article'].transform(lambda x: x.mode().iat[0])
print (df)
id article cost
0 1 Bendge 15.30
1 1 Bendge 15.30
2 1 Bendge 15.30
3 1 Bendge 15.30
4 5 SEF 14.89
5 1 Bendge 15.30
6 2 DFH 4.56
7 2 DFH 4.56
8 2 DFH 4.56
9 2 DFH 4.56
10 2 DFH 4.56
</code></pre>
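<p>An equivalent approach is to aggregate the mode once per <code>id</code> and then broadcast it back onto the column with <code>map</code>:</p>

```python
import pandas as pd

d = {'id': [1, 1, 1, 1, 5, 1, 2, 2, 2, 2, 2],
     'article': ['Bendge', 'Bendge', 'Bendge V2', 'Bendge - volumne 2', 'SEF', 'Bendge',
                 'DFH', 'DFH', 'DFH V2', 'DFH - volumne 2', 'DFH'],
     'cost': [15.30, 15.30, 15.30, 15.30, 14.89, 15.30,
              4.56, 4.56, 4.56, 4.56, 4.56]}
df = pd.DataFrame(d)

# most frequent article per id, then broadcast back with map
mapping = df.groupby('id')['article'].agg(lambda x: x.mode().iat[0])
df['article'] = df['id'].map(mapping)
print(df)
```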
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
pandas matching database with string keeping index of database<p>I have a database with strings and the index as below.</p>
<pre><code>df0
idx name_id_code string_line_0
0 0.01 A
1 0.5 B
2 77.6 C
3 29.8 D
4 56.2 E
5 88.1000005 F
6 66.4000008 G
7 2.1 H
8 99 I
9 550.9999999 J
df1
idx string_line_1
0 A
1 F
2 J
3 G
4 D
</code></pre>
<p>Now, I want to match the df1 with df0, taking the values where df1 = df 0 but, keeping the index of df0 true as below</p>
<pre><code>df_result name_id_code string_line_0
0 0.01 A
5 88.1000005 F
9 550.9999999 J
6 66.4000008 G
3 29.8 D
</code></pre>
<p>I tried the code below, but it didn't work for strings and only matched on the index:</p>
<pre><code>c = df0['name_id_code'] + ' (' + df0['string_line_0'].astype(str) + ')'
out = df1[df2['string_line_1'].isin(s)]
</code></pre>
<p>I also tried to keep simple just last column match like</p>
<pre><code>c = df0['string_line_0'].astype(str) + ')'
out = df1[df1['string_line_1'].isin(s)]
</code></pre>
<p>but blank output.</p>
|
<p>Because you are filtering the <code>df0</code> DataFrame, its index values are unchanged if you use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> with <code>df1['string_line_1']</code>; only the row order follows the original <code>df0</code>:</p>
<pre><code>out = df0[df0['string_line_0'].isin(df1['string_line_1'])]
print (out)
name_id_code string_line_0
idx
0 0.010000 A
3 29.800000 D
5 88.100001 F
6 66.400001 G
9 551.000000 J
</code></pre>
<p>Or, if you use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a>, then to avoid losing <code>df0.index</code> it is necessary to add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>; this also keeps <code>df1</code>'s row order:</p>
<pre><code>out = (df1.rename(columns={'string_line_1':'string_line_0'})
.merge(df0.reset_index(), on='string_line_0'))
print (out)
string_line_0 idx name_id_code
0 A 0 0.010000
1 F 5 88.100001
2 J 9 551.000000
3 G 6 66.400001
4 D 3 29.800000
</code></pre>
<p>A similar solution without renaming — it keeps both the <code>string_line_0</code> and <code>string_line_1</code> columns:</p>
<pre><code>out = (df1.merge(df0.reset_index(), left_on='string_line_1', right_on='string_line_0'))
print (out)
string_line_1 idx name_id_code string_line_0
0 A 0 0.010000 A
1 F 5 88.100001 F
2 J 9 551.000000 J
3 G 6 66.400001 G
4 D 3 29.800000 D
</code></pre>
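<p>To get exactly the frame shown in the question (<code>df1</code>'s row order with <code>df0</code>'s original index), you can finish the merge with <code>set_index('idx')</code>. A self-contained sketch rebuilding the sample data:</p>

```python
import pandas as pd

df0 = pd.DataFrame({
    'name_id_code': [0.01, 0.5, 77.6, 29.8, 56.2,
                     88.1000005, 66.4000008, 2.1, 99, 550.9999999],
    'string_line_0': list('ABCDEFGHIJ')})
df0.index.name = 'idx'
df1 = pd.DataFrame({'string_line_1': list('AFJGD')})

# merge on the string columns, then restore df0's index
out = (df1.merge(df0.reset_index(),
                 left_on='string_line_1', right_on='string_line_0')
          .set_index('idx')[['name_id_code', 'string_line_0']])
print(out)
```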
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Getting some error while performing map/reduce on pyspark RDD<p>I am just trying to learn the PySpark, but confused about the difference between the following two RDDs, I know one is type set and one is list but both are RDDs</p>
<pre><code>rdd = sc.parallelize([('a', 1), ('b', 1), ('a', 3)])
type(rdd)
</code></pre>
<p>and</p>
<pre><code>rdd = sc.parallelize(['a, 1', 'b, 1', 'a, 3'])
type(rdd)
</code></pre>
<p>Code for processing map and reduce functions:</p>
<pre><code>priceMap= s.map(lambda o: (o.split(",")[0], float(o.split(",")[1])))
priceMap.reduceByKey(add).take(10)
</code></pre>
<p>I can easily perform the map/reduce function on the second rdd data, but when I try to perform the map or reduce I get the following error: so how can we convert the first rdd to second rdd data, or if there is any way to resolve the following error please help. thanks</p>
<blockquote>
<p>Py4JJavaError: An error occurred while calling
z:org.apache.spark.api.python.PythonRDD.runJob. :
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 162.0 failed 1 times, most recent failure: Lost task
0.0 in stage 162.0 (TID 3850, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent
call last):</p>
</blockquote>
|
<p>For the first rdd, you can replace the map function:</p>
<pre><code>from operator import add

rdd = sc.parallelize([('a', 1), ('b', 1), ('a', 3)])
rdd.map(lambda o: (o[0], float(o[1]))).reduceByKey(add).collect()
</code></pre>
<p>That's because <code>split</code> is a string method — it doesn't exist on tuples. The first RDD already contains <code>(key, value)</code> tuples, so you can index into them directly instead of splitting.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
SQLite : Return true if duplicate values are found in table<p>I want to know how you can find duplicate values in a table and make it return True. I have seen alot of questions about this but none of them helped, Thanks!</p>
<p>Here's my example :</p>
<pre><code>import hashlib
import sqlite3
con = sqlite3.connect('users/accounts.db')
cur = con.cursor()
info = cur.execute("SELECT * FROM accounts;").fetchall()
print("Sign Up.")
username = input("Input your username : ")
password = input("Input your password : ")
email = input("Input your email: ")
result = hashlib.sha256(password.encode("utf-8"))
result_2 = str(result.digest)
cur.execute("insert into accounts (username, password, email) values(?,?,?)", (username, result_2, email))
con.commit()
print(info)
con.close()
</code></pre>
<hr />
<h2>Disclaimer</h2>
<p>To those wondering, No this will not be used in a production environment, It is unsafe and can be easily exploited. Haven't even salted the hashes.</p>
|
<p>If your goal is to prevent inserting a new user record with a username or email that is already used by someone else, then an exists query provides one option:</p>
<pre class="lang-sql prettyprint-override"><code>INSERT INTO accounts (username, password, email)
SELECT ?, ?, ?
WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE username = ? OR email = ?);
</code></pre>
<p>That is, in a single statement we can also check whether the username or email provided already appears somewhere in the <code>accounts</code> table.</p>
<p>Updated Python code:</p>
<pre class="lang-py prettyprint-override"><code>sql = """
INSERT INTO accounts (username, password, email)
SELECT ?, ?, ?
WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE username = ? OR email = ?)
"""
cur.execute(sql, (username, result_2, email, username, email))
</code></pre>
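<p>Another common approach — assuming you can add <code>UNIQUE</code> constraints to the columns — is to let SQLite reject the duplicate and catch <code>sqlite3.IntegrityError</code>, which gives you the True/False result directly:</p>

```python
import sqlite3

con = sqlite3.connect(':memory:')  # in-memory DB for the demo
cur = con.cursor()
cur.execute("""CREATE TABLE accounts (
                   username TEXT UNIQUE,
                   password TEXT,
                   email    TEXT UNIQUE)""")

def insert_account(username, password, email):
    """Return True if inserted, False if username/email already exists."""
    try:
        cur.execute("INSERT INTO accounts (username, password, email) VALUES (?, ?, ?)",
                    (username, password, email))
        con.commit()
        return True
    except sqlite3.IntegrityError:
        return False

print(insert_account('alice', 'hash1', 'a@example.com'))  # True
print(insert_account('alice', 'hash2', 'b@example.com'))  # False (duplicate username)
```

<p>With the <code>INSERT ... WHERE NOT EXISTS</code> form above, you can instead inspect <code>cur.rowcount</code> after the <code>execute</code> call: it is <code>1</code> if the row was inserted and <code>0</code> if a duplicate blocked it.</p>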
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pytorch dot product across rows from different arrays<p>I am trying to code up something similar to the positional encoding in the transformers paper.
In order to do so I need to do the following:</p>
<p>For the following three matrices, I want to concatenate them at row level (i.e. the first row from each one stacked together, the second rows together, etc.), and then apply dot product between each matrix and its transpose, and finally, flatten them and stack them together. I'll clarify this in the following example:</p>
<pre><code>x = torch.tensor([[1,1,1,1],
[2,2,2,2],
[3,3,3,3]])
y = torch.tensor([[0,0,0,0],
[0,0,0,0],
[0,0,0,0]])
z = torch.tensor([[4,4,4,4],
[5,5,5,5],
[6,6,6,6]])
concat = torch.cat([x, y, z], dim=-1).view(-1, x.shape[-1])
print(concat)
</code></pre>
<blockquote>
<pre><code>tensor([[1, 1, 1, 1],
[0, 0, 0, 0],
[4, 4, 4, 4],
[2, 2, 2, 2],
[0, 0, 0, 0],
[5, 5, 5, 5],
[3, 3, 3, 3],
[0, 0, 0, 0],
[6, 6, 6, 6]])
</code></pre>
</blockquote>
<pre><code># Here I get each three rows together, and then apply dot product, flatten, and stack them.
concat = torch.stack([
torch.flatten(
torch.matmul(
concat[i:i+3, :], # 3 is the number of tensors (x,y,z)
torch.transpose(concat[i:i+3, :], 0, 1))
)
for i in range(0, concat.shape[0], 3)
])
print(concat)
</code></pre>
<blockquote>
<pre><code>tensor([[ 4, 0, 16, 0, 0, 0, 16, 0, 64],
[ 16, 0, 40, 0, 0, 0, 40, 0, 100],
[ 36, 0, 72, 0, 0, 0, 72, 0, 144]])
</code></pre>
</blockquote>
<p>Finally, I was able to get the final matrix that I want. My question is, is there a way to achieve this without using a loop as I did in the final step? I want everything to be in tensors.</p>
|
<p>The loop you introduce only needs to be there to get a "list of slices" of the data, which is practically the same as reshaping it. You are basically introducing an additional dimension with 3 entries, going from shape <code>[3m, k]</code> to <code>[m, 3, k]</code>.<br />
For working directly with tensors, you can just call <code>.reshape</code> to get the same shape. After that, the rest of your code works almost exactly the same; only the transpose has to change slightly, due to the extra dimension.</p>
<p>All in all, what you want can be achieved with:</p>
<pre class="lang-py prettyprint-override"><code>concat2 = concat.reshape((-1, 3, concat.shape[1]))
torch.flatten(
torch.matmul(
concat2,
concat2.transpose(1,2)
),
start_dim=1,
)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Django - OperationalError after deleting migration files<p>In the earliest days of my system, before i knew how to utilize django's built-in User model, i had my own class named myUser which i intended to use for the same purpose. i ended up deleting this one and instead using django's User model. all was fine for a while. then i tried to change the id of all my custom models to a UUIDField, and got the syntax wrong, which i unfortunately only realized after i ran makemigrations and migrate. i corrected it, but there was still a bug because the migration file of the wrongly-syntaxed UUIDField still lingered in migrations. that's what i understood from googling for a while, anyway. since i wasn't sure <em>which</em> migration file caused the error, i just deleted the last 5 migration files (in hindsight, not the best idea i've ever had). Unfortunately, one of those migration files were the one which covered the deletion of the myUser model.</p>
<p>now, whenever i try to migrate, i get the following error:</p>
<p>django.db.utils.OperationalError: no such table: phoneBook_myUser</p>
<p>it seems this error is stopping <em>any</em> of my new migrations to go through. i've googled for a while, and it seems my two options are to either dive deep down into django details and try to mess around until i fix it -which i had hoped to avoid, considering i've only been using Django for 2 weeks- or to reset everything and build the system from scratch. That is not too ideal either.</p>
<p>i have a backed-up version of the myUSer model. is reinstating it, migrating, deleting it and then migrating 'safe' or will it make an even bigger mess of things? Is there any other course of action i can take to fix this issue?</p>
<p>update:
result from python manage.py showmigrations:</p>
<pre><code> admin
[X] 0001_initial
[X] 0002_logentry_remove_auto_add
[X] 0003_logentry_add_action_flag_choices
auth
[X] 0001_initial
[X] 0002_alter_permission_name_max_length
[X] 0003_alter_user_email_max_length
[X] 0004_alter_user_username_opts
[X] 0005_alter_user_last_login_null
[X] 0006_require_contenttypes_0002
[X] 0007_alter_validators_add_error_messages
[X] 0008_alter_user_username_max_length
[X] 0009_alter_user_last_name_max_length
[X] 0010_alter_group_name_max_length
[X] 0011_update_proxy_permissions
[X] 0012_alter_user_first_name_max_length
contenttypes
[X] 0001_initial
[X] 0002_remove_content_type_name
phoneBook
[X] 0001_initial
[X] 0002_auto_20210707_1306
[X] 0003_informationrating_owner_ownerset_comments_connection/
_numbermetadata_source
[X] 0004_rename_comments_comment
[X] 0005_auto_20210707_1632
[X] 0006_auto_20210707_1634
[X] 0007_auto_20210707_1637
[X] 0008_alter_company_company_type
[X] 0009_owner_owner_type
[X] 0010_numbermetadata_published_at
[X] 0011_auto_20210713_0957
[X] 0012_auto_20210713_1014
[X] 0013_connectionrating
[X] 0014_informationrating
[X] 0015_delete_informationrating
[X] 0016_alter_owner_owner_type
[X] 0017_alter_owner_owner_type
[X] 0018_auto_20210713_1349
[X] 0019_auto_20210713_1525
[X] 0020_auto_20210713_1529
[X] 0021_auto_20210713_1533
[X] 0022_auto_20210713_1535
[X] 0023_alter_numbermetadata_owner_set_id
[X] 0024_auto_20210719_0948
[X] 0025_auto_20210719_1555
[ ] 0026_auto_20210720_1528
[ ] 0027_connection_connection_status
sessions
[X] 0001_initial
</code></pre>
<p>my migration files:</p>
<pre><code> 0001_initial.py
0002_auto_20210707_1306.py
0003_informationrating_owner_ownerset_comments_connection/
_numbermetadata_source.py
0004_rename_comments_comment.py
0005_auto_20210707_1632.py
0006_auto_20210707_1634.py
0007_auto_20210707_1637.py
0008_alter_company_company_type.py
0009_owner_owner_type.py
0010_numbermetadata_published_at.py
0011_auto_20210713_0957.py
0012_auto_20210713_1014.py
0013_connectionrating.py
0014_informationrating.py
0015_delete_informationrating.py
0016_alter_owner_owner_type.py
0017_alter_owner_owner_type.py
0018_auto_20210713_1349.py
0019_auto_20210713_1525.py
0020_auto_20210713_1529.py
0021_auto_20210713_1533.py
0022_auto_20210713_1535.py
0023_alter_numbermetadata_owner_set_id.py
0024_auto_20210719_0948.py
0025_auto_20210719_1555.py
0026_auto_20210720_1528.py
0027_connection_connection_status.py
__init__.py
</code></pre>
|
<p>I ended up creating a new version of the project — a perfect copy of all the files, but with a fresh database — and everything now works as it should.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to extend functions of another class automatically?<p>I have a class <code>A</code> with dozen of functions. Now I want to extend those functions in class <code>B</code> but apply to a list of <code>A</code>. See the example below.</p>
<pre><code>class A:
def __init__(self):
# initiate self.data
def func1(self, *args, **kargs):
# do things with self.data
# there are a lot more functions like func1
class B:
def __init__(self, n):
self.list_of_A = [A() for i in range n]
def func1(self, *args, **kargs):
# do something here
for a in self.list_of_A:
a.func1(*args, **kargs)
# do something here
</code></pre>
<p>The requirements are:</p>
<ol>
<li>Because <code>A</code> has a lot of functions and it is impractical to rewrite all of them in <code>B</code>, I would like to have <code>B.func1</code> to be generated automatically (e.g. through some wrapper function mechanism).</li>
<li><code>B.func1</code> can have the same docstring as <code>A.func1</code>.</li>
</ol>
<p>Can someone suggest how to do this? Thanks!</p>
|
<p>There are many ways you could approach this. Here's <em>one</em> approach that I would hesitate to use because it highly couples both of these classes, but I'm showing it to you because it is an example of generating methods dynamically:</p>
<pre><code>class B:
def __init__(self, n):
self.list_of_A = [A() for i in range(n)]
def _apply_a_method(self, method_name, *args, **kargs):
# do something here
for a in self.list_of_A:
getattr(a, method_name)(*args, **kargs)
# do something here
@classmethod
def build_api(cls, method_names):
def _make_wrapper(method_name):
def _wrapper(self, *args, **kwargs):
return self._apply_a_method(method_name, *args, **kwargs)
return _wrapper
for method_name in method_names:
setattr(cls, method_name, _make_wrapper(method_name))
B.build_api(["func1", "func2", "func3"]) # choose the functions you want
</code></pre>
<p>Note, this has <em>nothing to do with inheritance</em>. Inheritance is something that happens when you <em>subclass</em> from another class. That isn't what you are doing here. You are basically asking "how do I dynamically wrap various methods from class A into methods in class B".</p>
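<p>Another option, if you don't want to register method names up front, is <code>__getattr__</code>, which Python calls only when normal attribute lookup fails. The sketch below assumes a toy <code>A</code> with a single method, since the real class isn't shown:</p>

```python
class A:
    def func1(self, x):
        """Add x to the stored data."""
        return x + 1

class B:
    def __init__(self, n):
        self.list_of_A = [A() for _ in range(n)]

    def __getattr__(self, name):
        # only reached when B itself doesn't define `name`
        def broadcast(*args, **kwargs):
            # do something here, then fan the call out to every A
            return [getattr(a, name)(*args, **kwargs) for a in self.list_of_A]
        # requirement 2 from the question: reuse A's docstring
        broadcast.__doc__ = getattr(A, name).__doc__
        return broadcast

b = B(3)
print(b.func1(10))        # [11, 11, 11]
print(b.func1.__doc__)    # Add x to the stored data.
```

<p>This keeps the two classes decoupled (B never needs an explicit list of A's methods), at the cost of the wrapped methods not appearing in <code>dir(B)</code>.</p>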
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to calculate the PageRank and shortest path algorithm with gremlin in Amazon Neptune?<p>Is there any way to calculate <code>PageRank</code> and <code>Shortest Path</code> algorithm with gremlin in Amazon Neptune?
As it said in gremlin documentation <code>PageRank centrality can be calculated with Gremlin with the pageRank()-step which is designed to work with GraphComputer (OLAP) based traversals</code>.</p>
<p>I have try to create a traversal with gremlinpython through this code: <code>g = graph.traversal().withComputer().withRemote(remoteConn)</code> but I got this error: <code>GremlinServerError: 499: {"code":"UnsupportedOperationException","requestId":"4493df8b-b09f-47b1-b230-b83cfe1afa76","detailedMessage":"Graph does not support graph computer"}</code></p>
<p>So is it possible to use GraphComputer traversal in Amazon Neptune?</p>
|
<p>Amazon Neptune does not currently support the Apache TinkerPop GraphComputer interface. You have a few options.</p>
<ol>
<li>In some cases it is possible to use the example queries in the <a href="https://tinkerpop.apache.org/docs/current/recipes/" rel="nofollow noreferrer">Gremlin Recipes</a> document to calculate connected components etc.</li>
<li>Export the data using the Neptune Export tool and run the analysis you need to do using Spark (Glue and EMR are good options). This is quite commonly done today.</li>
<li>For modest size datasets you can import the data into NetworkX and run the analysis all from a Jupyter Notebook.</li>
</ol>
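<p>For option 3, here is a minimal sketch of what that looks like with NetworkX — the edges are toy data standing in for your exported Neptune graph:</p>

```python
import networkx as nx

# toy directed graph standing in for exported Neptune data
G = nx.DiGraph()
G.add_edges_from([('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'a')])

pagerank = nx.pagerank(G)                 # dict: node -> PageRank score
shortest = nx.shortest_path(G, 'a', 'c')  # e.g. ['a', 'c']
print(pagerank)
print(shortest)
```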
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python Sphinx WARNING: Definition list ends without a blank line; unexpected unindent<p>My doc is like this:</p>
<pre><code> def segments(self, start_time=1, end_time=9223372036854775806, offset=0, size=20):
"""Get segments of the model
:parameter
offset: - optional int
size: - optional int
start_time: - optional string,Segments
end_time: - optional string,Segments
:return: Segments Object
"""
</code></pre>
<p>When I make html, it turns out:</p>
<pre><code> WARNING: Definition list ends without a blank line; unexpected unindent.
</code></pre>
<p>I have no idea where I should add a blank line?</p>
<p>I searched SO, but can not find any similar case as mine. </p>
|
<p>The question was already answered by <em>jonrsharpe</em> in a comment, but I want to complete it here.</p>
<p>You are using the default Sphinx style, <a href="https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html" rel="nofollow noreferrer">reStructuredText</a>.</p>
<p>You have to add ":parameter" on every line:</p>
<pre><code> """Get segments of the model
:parameter offset: optional int
:parameter size: optional int
:parameter start_time: optional string,Segments
:parameter end_time: optional string,Segments
:return: Segments Object
"""
</code></pre>
<p>The original docstring was parsed as a definition list, and a definition list must be followed by a blank line — that is what the warning was about. With proper field lists like the above, the blank line is no longer required.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to average an off-set array<p>I have an array that looks something like this:</p>
<pre><code>A = [.36, .4, .43, .48, .53, .58, .63, .68, .72, .77, .82, .86, .93, .95, .97, .99, 1, 0.99, .97, .95, .92, .88, .85, .80, .76, .71, .66, .61, .56, .51, .47, .43, .40]
</code></pre>
<p>I want to be able to take the value to the left and right of 1, and find their average, and then extend +1 in each direction of 1. So that my new array is</p>
<pre><code> A =[1, (.99 +.99)/2, (.97 + .97)/2, (.95 + .95)/2, ......, (.36 + .40)/2]
</code></pre>
<p>Initially I was trying to do this:</p>
<pre><code>import numpy as np
import itertools
A = [.36, .4, .43, .48, .53, .58, .63, .68, .72, .77, .82, .86, .93, .95, .97, .99, 1, 0.99, .97, .95, .92, .88, .85, .80, .76, .71, .66, .61, .56, .51, .47, .43, .40]
A = np.sort(A)
A = list(itertoools.chain.from_iterable[i]*n for i in [sum(A[i:i+n])/n for i in range(0, len(A)),n)]))
A = A[1::2]
A = A[::-1]
</code></pre>
<p>So what this does is re-order my list from least to greatest (sort), and then take the average of each value next to it (the <code>iter</code>tools and <code>A[1::2]</code> command), then re-order backwards (the <code>A[::-1]</code> line). But i've found this does that incorrectly, more particularly in the average section due to the resort. It re-sorts it properly, but then that causes the average to be taken improperly as <code>.4</code> moves next to <code>.36</code>, and then I have two <code>0.43s</code> right next to each other after that, which leads my tail values to have incorrect averages.</p>
<p>I'd be happy to supply any more information for any particular questions.</p>
<p>Thanks!</p>
|
<p>My simple take on this:</p>
<pre><code>A = [.36, .4, .43, .48, .53, .58, .63, .68, .72, .77, .82, .86, .93, .95, .97, .99,
1, 0.99, .97, .95, .92, .88, .85, .80, .76, .71, .66, .61, .56, .51, .47, .43, .40]
center = A.index(1)
right = A[center + 1:]
A.reverse()
left = A[center + 1:]
out = [round((l + r) / 2, 2) for l, r in zip(left, right)]
out.insert(0, 1) # or A[center]
print(out)
</code></pre>
<p>Output:</p>
<pre><code>[1, 0.99, 0.97, 0.95, 0.93, 0.87, 0.83, 0.79, 0.74, 0.7, 0.65, 0.59, 0.55, 0.49, 0.45, 0.42, 0.38]
</code></pre>
<p>I made it so it rounds each division to 2 decimals. I tried to make this as easy as possible to understand.</p>
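<p>For what it's worth, a vectorized NumPy version of the same idea — find the peak, fold the left half onto the right half, and average:</p>

```python
import numpy as np

A = np.array([.36, .4, .43, .48, .53, .58, .63, .68, .72, .77, .82, .86, .93, .95,
              .97, .99, 1, 0.99, .97, .95, .92, .88, .85, .80, .76, .71, .66, .61,
              .56, .51, .47, .43, .40])

c = int(np.argmax(A))            # index of the peak (the 1)
left = A[:c][::-1]               # values left of the peak, nearest first
right = A[c + 1:]                # values right of the peak
n = min(len(left), len(right))   # guard against unequal tails
out = np.concatenate(([A[c]], (left[:n] + right[:n]) / 2))
print(np.round(out, 2))
```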
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Hydrogen kernel not detected<p>When starting hydrogen in atom to execute python code, I usually get asked which kernel I want to use. I have three kernels. The one I use on my current project is the standard python3 kernel, where I have all the required libraries installed.</p>
<p>Today, when I tried to run some code, this kernel was not in the list. The two other environments (created with conda, I think) were still detected by Hydrogen.</p>
<p>I ran <code>$ jupyter kernelspec list</code> in my terminal (on macOS), and I got this :</p>
<pre><code> env1 /Users/me/Library/Jupyter/kernels/env1
env2 /Users/me/Library/Jupyter/kernels/env2
python3 /Applications/anaconda3/share/jupyter/kernels/python3
</code></pre>
<p>I have no idea what caused Hydrogen to stop detecting the python3 kernel.</p>
<p>Restarting atom did not solve this.</p>
<p>How can I make Hydrogen detect the python3 kernel?
Any idea of what could have happened?</p>
|
<p>My hypothesis was that the problem had something to do with the python3 kernel not being in the same directory as the two still-detected kernels.</p>
<p>I created a symbolic link to <code>python3</code> in the directory of <code>env1</code> and <code>env2</code> in Terminal:</p>
<pre class="lang-sh prettyprint-override"><code>cd /Users/me/Library/Jupyter/kernels
ln -s /Applications/anaconda3/share/jupyter/kernels/python3
</code></pre>
<p>I then restarted atom, and it now works as previously.</p>
<p>However, I don't know what caused the problem, and I'm not even entirely sure it is this symbolic link that solved it.</p>
<hr />
<p>Note : when doing <code>jupyter kernelspec list</code>, the python3 kernel is now listed under the <code>/Users/me/Library/Jupyter/kernels</code> directory.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Keras: Make functional model accept multiple batches for LSTM<pre><code>import tensorflow as tf
import numpy as np
import tensorflow.keras.layers as layers
import tensorflow.keras as keras
from tensorflow.keras.optimizers import Adam
def get_model():
inputs = layers.Input(batch_shape=(1, 200, 12)) #input
x = layers.LSTM(12, return_sequences=True, stateful=True)(inputs)
outputs = layers.TimeDistributed(layers.Dense(2, activation="softmax"))(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
print(model.summary())
return model
model = get_model()
x_train = np.ones((10,200,12))
x_val = np.ones((10,200,12))
y_train = np.ones((10,200,2))
y_val = np.ones((10,200,2))
history = model.fit(x_train, y_train,
epochs=40,
validation_data=(x_val, y_val),
verbose=2)
</code></pre>
<p>Gives me this output</p>
<pre><code>InvalidArgumentError: [_Derived_] Specified a list with shape [1,12] from a tensor with shape [10,12]
[[{{node TensorArrayUnstack/TensorListFromTensor}}]]
[[model_19/lstm_47/StatefulPartitionedCall]] [Op:__inference_train_function_78329]
</code></pre>
<p>If I make x_train and x_val shape (1,200,12) it works fine. How do I make the input object accept multiple batches?</p>
|
<p>Turns out I needed to specify <code>batch_size</code>. Because the model is built with <code>batch_shape=(1, 200, 12)</code> and a stateful LSTM, the batch size is fixed at 1, so <code>fit</code> has to feed exactly one sample per batch:</p>
<pre><code>history = model.fit(x_train, y_train,
epochs=40,
validation_data=(x_val, y_val),
verbose=2,
batch_size=1
)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Constrained non-linear optimisation in GEKKO, objective function not matching expected solution<p>I've got a fairly straightforward constrained non-linear optimisation problem, maximising revenue, given some spend constraints (keep overall spend constant, and increase/decrease spend on each channel by upto 50%) and an objective function which is revenue = spend * roi (where roi is calculated using the log-log model coefficients for each channel).</p>
<p>Firstly the solution doesn't match what I believe to be the optimal solution. Also, when take the optimal spend values suggested by GEKKO, and put them into the objective function, this also doesn't match what GEKKO gives as the optimal objective function value.</p>
<p>If the above problem isn't articulated very well, the example code below should bring to life the problem...</p>
<p>Anyone know what I'm missing?</p>
<p>Code below:</p>
<pre><code>from gekko import GEKKO
import numpy as np
import pandas as pd
df1 = pd.DataFrame({'channel': ['fb', 'tv', 'ppc'],
'value': [5000, 5000, 5000],
'alpha': [1.00, 1.00, 1.00],
'beta': [0.03, 0.02, 0.01]
})
total_spend = df1['value'].sum()
m = GEKKO()
# Constraint parameters
spend_inc = 0.00
spend_dec = 0.00
channel_inc = 0.50
channel_dec = 0.50
channels = len(df1)
# Initialise decision variables
x = m.Array(m.Var, (channels), integer=True)
i = 0
for xi in x:
xi.value = df1['value'][i]
xi.lower = df1['value'][i] * (1 - channel_dec)
xi.upper = df1['value'][i] * (1 + channel_inc)
i += 1
# Initialise alpha
a = m.Array(m.Param, (channels))
i = 0
for ai in a:
ai.value = df1['alpha'][i]
i += 1
# Initialise beta
b = m.Array(m.Param, (channels))
i = 0
for bi in b:
bi.value = df1['beta'][i]
i += 1
# Initial global contraints
spend_min = total_spend * (1 - spend_dec)
spend_max = total_spend * (1 + spend_dec)
# Print out variabales
print(f'spend min: {spend_min}')
print(f'spend max: {spend_max}')
print('')
for i in range(0, channels):
print(f'x{i+1} value: {str(x[i].value)}')
print(f'x{i+1} lower: {str(x[i].lower)}')
print(f'x{i+1} upper: {str(x[i].upper)}')
print(f'x{i+1} alpha: {str(a[i].value)}')
print(f'x{i+1} beta: {str(b[i].value)}')
print('')
# Constraints
m.Equation(sum(x) >= spend_min)
m.Equation(sum(x) <= spend_max)
# Log-Log model
def roi(a, b, x):
roi = a + b * m.log(x[0])
roi = m.exp(roi[0])
return roi
# Objective function
m.Maximize(m.sum(x * roi(a, b, x)))
m.options.IMODE = 3
m.solve()
for i in range(0, channels):
print(f'x{i+1}: {str(x[i].value[0])}')
print('')
print(f'optimal solution: {str(m.options.objfcnval)}')
# THIS DOESN'T MATCH THE OBJECTIVE FUNCTION VALUE SUGGESTED BY GEKKO
opt = 0
for i in range(0, channels):
opt += x[i].value[0] * (np.exp((a[0].value[0] + b[i].value[0] * np.log(x[i].value[0]))))
print(f'optimal solution: {opt}')
# THIS IS THE EXPECETD SOLUTION, WHICH ALSO DOESN'T MATCH THE OBJECTIVE FUNCTION VALUE SUGGESTED BY GEKKO
7500 * np.exp(1.00 + 0.03 * np.log(7500)) + 5000 * np.exp(1.00 + 0.02 * np.log(5000)) + 2500 * np.exp(1.00 + 0.01 * np.log(2500))
</code></pre>
|
<p>There was a problem with how the objective function is defined. The fix to the problem is to use <code>a[i]</code>, <code>b[i]</code>, <code>x[i]</code> one at a time in the functions. Gekko allows more than one objective function definition. When maximizing, it converts to a minimization problem with <code>min = -max</code>. The maximized objective is therefore <code>-m.options.OBJFCNVAL</code>.</p>
<pre class="lang-py prettyprint-override"><code># Log-Log model
def roi(ai, bi, xi):
return m.exp(ai + bi * m.log(xi))
# Objective function
for i,xi in enumerate(x):
m.Maximize(xi * roi(a[i], b[i], x[i]))
</code></pre>
<p>It also helps to define a new variable <code>sum_x</code> with upper <code>spend_max</code> and lower <code>spend_min</code> bounds.</p>
<pre class="lang-py prettyprint-override"><code># Constraints
sum_x = m.Var(lb=spend_min,ub=spend_max)
m.Equation(sum_x==m.sum(x))
</code></pre>
<p>Here is the complete script.</p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
import numpy as np
import pandas as pd
df1 = pd.DataFrame({'channel': ['fb', 'tv', 'ppc'],
'value': [5000, 5000, 5000],
'alpha': [1.00, 1.00, 1.00],
'beta': [0.03, 0.02, 0.01]
})
total_spend = df1['value'].sum()
m = GEKKO()
# Constraint parameters
spend_inc = 0.00
spend_dec = 0.00
channel_inc = 0.50
channel_dec = 0.50
channels = len(df1)
# Initialise decision variables
x = m.Array(m.Var, (channels), integer=True)
i = 0
for xi in x:
xi.value = df1['value'][i]
xi.lower = df1['value'][i] * (1 - channel_dec)
xi.upper = df1['value'][i] * (1 + channel_inc)
i += 1
# Initialise alpha
a = m.Array(m.Param, (channels))
i = 0
for ai in a:
ai.value = df1['alpha'][i]
i += 1
# Initialise beta
b = m.Array(m.Param, (channels))
i = 0
for bi in b:
bi.value = df1['beta'][i]
i += 1
# Initial global constraints
spend_min = total_spend * (1 - spend_dec)
spend_max = total_spend * (1 + spend_dec)
# Print out variables
print(f'spend min: {spend_min}')
print(f'spend max: {spend_max}')
print('')
for i in range(0, channels):
print(f'x{i+1} value: {str(x[i].value)}')
print(f'x{i+1} lower: {str(x[i].lower)}')
print(f'x{i+1} upper: {str(x[i].upper)}')
print(f'x{i+1} alpha: {str(a[i].value)}')
print(f'x{i+1} beta: {str(b[i].value)}')
print('')
# Constraints
sum_x = m.Var(lb=spend_min,ub=spend_max)
m.Equation(sum_x==m.sum(x))
# Log-Log model
def roi(ai, bi, xi):
return m.exp(ai + bi * m.log(xi))
# Objective function
for i,xi in enumerate(x):
m.Maximize(xi * roi(a[i], b[i], x[i]))
m.options.IMODE = 3
m.solve()
for i in range(0, channels):
print(f'x{i+1}: {str(x[i].value[0])}')
print('')
print(f'optimal solution: {str(-m.options.objfcnval)}')
opt = 0
for i in range(0, channels):
opt += x[i].value[0] * (np.exp((a[i].value[0] \
+ b[i].value[0] \
* np.log(x[i].value[0]))))
print(f'optimal solution: {opt}')
# THIS IS THE EXPECTED SOLUTION
# IT NOW MATCHES THE OBJECTIVE FUNCTION VALUE SUGGESTED BY GEKKO
sol = 7500 * np.exp(1.00 + 0.03 * np.log(7500)) \
+ 5000 * np.exp(1.00 + 0.02 * np.log(5000)) \
+ 2500 * np.exp(1.00 + 0.01 * np.log(2500))
print(f'expected optimal solution: {sol}')
</code></pre>
<p>The solution is shown below.</p>
<pre><code>EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is -50108.7613900549
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 9.499999985564500E-003 sec
Objective : -50108.7611889392
Successful solution
---------------------------------------------------
x1: 7500.0
x2: 4999.9999497
x3: 2500.0
optimal solution: 50108.761189
optimal solution: 50108.7611890521
expected optimal solution: 50108.76135441651
</code></pre>
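<p>As an independent sanity check (a sketch that does not use Gekko: plain <code>scipy.optimize.minimize</code> with the same bounds and total-spend constraint), the same optimum falls out:</p>

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([1.00, 1.00, 1.00])
b = np.array([0.03, 0.02, 0.01])

def neg_revenue(x):
    # negative because minimize() minimizes; revenue = sum(x * exp(a + b*log(x)))
    return -np.sum(x * np.exp(a + b * np.log(x)))

# keep total spend fixed at 15000, each channel within +/- 50% of 5000
cons = {"type": "eq", "fun": lambda x: np.sum(x) - 15000}
bounds = [(2500.0, 7500.0)] * 3

res = minimize(neg_revenue, x0=[5000.0] * 3, bounds=bounds, constraints=cons)
print(res.x)     # approximately [7500, 5000, 2500]
print(-res.fun)  # approximately 50108.76, matching the Gekko objective
```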
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to have a randomly generated unique ID in multiple rows in pandas?<p>I have tried to wrap my head around this but I am at a roadblock. I have a script here that creates a data frame with a pre-defined <code>requestTypes</code> and a randomly hashed <code>u_id</code>. I would like to edit the data_generator function so that the <code>u_id</code> can have multiple request types.</p>
<pre><code>requestTypes = ["type1","type2","type3","type"]
def rowValue(x, y):
return {'request_type': x,
'u_id': y
}
def data_generator():
temp_list = []
for i in range(0, 5):
temp_requestType = random.choice(requestTypes)
uid_value = hashlib.sha256((str(i)).encode('UTF-8')).hexdigest()
temp_list.append(rowValue(temp_requestType, uid_value))
return pd.DataFrame(temp_list)
df = data_generator()
df
</code></pre>
<p>This is the result I am getting;</p>
<pre><code> request_type u_id
0 type1 6cd5b6e51936a442b973660c21553dd22bd72ddc875113...
1 type2 b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d6...
2 type3 535fa30d7e25dd8a49f1536779734ec8286108d115da50...
3 type2 0b918943df0962bc7a1824c0555a389347b4febdc7cf9d...
4 type3 73475cb40a568e8da8a045ced110137e159f890ac4da88...
</code></pre>
<p><strong>This is the result I want;</strong></p>
<pre><code> request_type u_id
0 type1 6cd5b6e51936a442b973660c21553dd22bd72ddc875113...
1 type2 b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d6...
2 type3 6cd5b6e51936a442b973660c21553dd22bd72ddc875113...
3 type2 0b918943df0962bc7a1824c0555a389347b4febdc7cf9d...
4 type3 0b918943df0962bc7a1824c0555a389347b4febdc7cf9d...
</code></pre>
|
<p>If you just want to generate some random UUID for each request type, you can try this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from uuid import uuid4
requestTypes = ["type1","type2","type3","type5"]
hashes = [uuid4() for _ in range(len(requestTypes))]
t = np.random.choice(len(requestTypes), 5)
df = pd.DataFrame({
"request_type": np.array(requestTypes)[t],
"u_id": np.array(hashes)[t]
})
</code></pre>
<hr />
<p>If you want to use your original hashing algorithm:</p>
<pre><code>import hashlib
import numpy as np
import pandas as pd
requestTypes = ["type1","type2","type3","type5"]
hashes = [hashlib.sha256((str(i)).encode('UTF-8')).hexdigest() for i in range(len(requestTypes))]
t = np.random.choice(len(requestTypes), 5)
df = pd.DataFrame({
"request_type": np.array(requestTypes)[t],
"u_id": np.array(hashes)[t]
})
</code></pre>
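<p>A variant worth considering (a sketch; the seeded RNG and column names are just illustrative): hash the request-type name itself and attach the ids with <code>Series.map</code>, so each type always maps to the same <code>u_id</code> without tracking index positions:</p>

```python
import hashlib
import numpy as np
import pandas as pd

request_types = ["type1", "type2", "type3", "type5"]

# One stable hash per request type, derived from the type name itself
type_to_uid = {t: hashlib.sha256(t.encode("utf-8")).hexdigest() for t in request_types}

rng = np.random.default_rng(0)  # seeded only so the example is reproducible
df = pd.DataFrame({"request_type": rng.choice(request_types, 5)})
df["u_id"] = df["request_type"].map(type_to_uid)
```

This also survives re-running with a different number of rows, since the id depends only on the type name.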
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Interpolate 2d numpy array to change shape<p>I've a numpy array of shape <code>(960, 2652)</code>, I want to change its size to <code>(1000, 1600)</code> using linear/cubic interpolation.</p>
<pre><code>>>> print(arr.shape)
(960, 2652)
</code></pre>
<p>I've check <a href="https://stackoverflow.com/a/38065352/6210807">this</a> and <a href="https://stackoverflow.com/a/33261924/6210807">this</a> answer, which recommends to use <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.interp2d.html" rel="nofollow noreferrer"><code>scipy.interpolate.interp2d</code></a>, but what should I provide as <code>x</code> and <code>y</code>?</p>
<pre><code>from scipy import interpolate
f = interpolate.interp2d(x, y, arr, kind='cubic')
</code></pre>
|
<p>You should pass the coordinates of the data points on the old domain as <code>x</code> and <code>y</code>, e.g. normalized to <code>[0, 1]</code>:</p>
<pre><code>import numpy as np
from scipy import interpolate
from scipy import misc
import matplotlib.pyplot as plt
arr = misc.face(gray=True)
x = np.linspace(0, 1, arr.shape[0])
y = np.linspace(0, 1, arr.shape[1])
f = interpolate.interp2d(y, x, arr, kind='cubic')
x2 = np.linspace(0, 1, 1000)
y2 = np.linspace(0, 1, 1600)
arr2 = f(y2, x2)
arr.shape # (768, 1024)
arr2.shape # (1000, 1600)
plt.figure()
plt.imshow(arr)
plt.figure()
plt.imshow(arr2)
</code></pre>
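<p>Note that <code>interp2d</code> has since been deprecated (and removed in SciPy 1.14). If all you need is to resize the array with cubic interpolation, <code>scipy.ndimage.zoom</code> is a simpler alternative (a sketch; the target shape is taken from the question):</p>

```python
import numpy as np
from scipy.ndimage import zoom

arr = np.random.rand(960, 2652)

# Zoom factor per axis: target_size / current_size; order=3 gives cubic interpolation
arr2 = zoom(arr, (1000 / arr.shape[0], 1600 / arr.shape[1]), order=3)

print(arr2.shape)  # (1000, 1600)
```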
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas equivalent of filter followed by groupby function in dplyr<p>There are 1000 rows and 50 columns in the data frame df. The following code in R's <code>dplyr</code> results in a tibble of 1000*50 and ID [1000] as there are 1000 distinct IDs in this df.</p>
<pre><code>df1 = df %>% group_by(ID) %>% filter(row_number()==n())
</code></pre>
<p>I want to execute the same code in Pandas and the result should be a data frame. I got the groups with groupby command in Pandas:</p>
<pre><code>df_groups = df.groupby(by=['ID'])
</code></pre>
<p>How to get <code>df1</code> after this step? After getting the <code>df1</code> the next step is to include one more column from another data frame.</p>
|
<p>It's easy to do it with <a href="https://github.com/pwwang/datar" rel="nofollow noreferrer"><code>datar</code></a>, without learning pandas APIs:</p>
<pre class="lang-py prettyprint-override"><code>>>> from datar.datasets import mtcars
>>> from datar.all import f, group_by, row_number, n, filter
>>> mtcars >> group_by(f.cyl) >> filter(row_number() == n())
mpg cyl disp hp drat wt qsec vs am gear carb
<float64> <int64> <float64> <int64> <float64> <float64> <float64> <int64> <int64> <int64> <int64>
0 19.7 6 145.0 175 3.62 2.77 15.5 0 1 5 6
1 15.0 8 301.0 335 3.54 3.57 14.6 0 1 5 8
2 21.4 4 121.0 109 4.11 2.78 18.6 1 1 4 2
[Groups: cyl (n=3)]
</code></pre>
<p>I am the author of the package. Feel free to submit issues if you have any questions.</p>
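<p>For reference, the same thing in plain pandas (a sketch with made-up data): <code>dplyr</code>'s <code>filter(row_number() == n())</code> keeps the last row of each group, which is exactly what <code>GroupBy.tail(1)</code> does:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2, 2, 2, 3],
    "value": [10, 11, 20, 21, 22, 30],
})

# Last row per ID, returned as a regular DataFrame (original row order kept)
df1 = df.groupby("ID").tail(1)
print(df1)
#    ID  value
# 1   1     11
# 4   2     22
# 5   3     30
```

After this step you can merge in the extra column from the other data frame with <code>df1.merge(other, on="ID")</code>.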
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
All possible combinations of 2D numpy array<p>I have four numpy arrays, and example given below:</p>
<pre><code>a1=np.array([[-24.4925, 295.77 ],
[-24.4925, 295.77 ],
[-14.3925, 295.77 ],
[-16.4125, 295.77 ],
[-43.6825, 295.77 ],
[-22.4725, 295.77 ]])
a2=np.array([[-26.0075, 309.39 ],
[-24.9975, 309.39 ],
[-14.8975, 309.39 ],
[-17.9275, 309.39 ],
[-46.2075, 309.39 ],
[-23.9875, 309.39 ]])
a3=np.array([[-25.5025, 310.265 ],
[-25.5025, 310.265 ],
[-15.4025, 310.265 ],
[-17.4225, 310.265 ],
[-45.7025, 310.265 ],
[-24.4925, 310.265 ]])
a4=np.array([[-27.0175, 326.895 ],
[-27.0175, 326.895 ],
[-15.9075, 326.895 ],
[-18.9375, 326.895 ],
[-48.2275, 326.895 ],
[-24.9975, 326.895 ]])
</code></pre>
<p>I want to make all possible combinations between arrays and at the same time concatenate, for example:</p>
<pre><code>array[-24.4925, 295.77, -26.0075, 309.39, -25.5025, 310.265, -27.0175, 326.895]
</code></pre>
<p>and </p>
<pre><code>array[-24.4925, 295.77, -26.0075, 309.39, -25.5025, 310.265, -27.0175, 326.895]
</code></pre>
<p>that is <code>[a1[0],a2[0],a3[0],a4[0]]</code>, <code>[a1[0],a2[0],a3[0],a4[1]]</code> and so on</p>
<p>what is the fastest method to it other than to loop over the four arrays?!</p>
|
<p>Here is a <code>numpy</code> solution, based on the Cartesian product implementation from <a href="https://stackoverflow.com/questions/11144513/cartesian-product-of-x-and-y-array-points-into-single-array-of-2d-points">here</a>.</p>
<pre><code>arr = np.stack([a1, a2, a3, a4])
print(arr.shape) # (4, 6, 2)
n, m, k = arr.shape
# from https://stackoverflow.com/questions/11144513/cartesian-product-of-x-and-y-array-points-into-single-array-of-2d-points
def cartesian_product(*arrays):
la = len(arrays)
dtype = np.result_type(*arrays)
arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype)
for i, a in enumerate(np.ix_(*arrays)):
arr[...,i] = a
return arr.reshape(-1, la)
inds = cartesian_product(*([np.arange(m)] * n))
res = np.take_along_axis(arr, inds.T[...,None], 1).swapaxes(0,1).reshape(-1, n*k)
print(res[0])
# [-24.4925 295.77 -26.0075 309.39 -25.5025 310.265 -27.0175 326.895 ]
</code></pre>
<p>In this example, the <code>inds</code> array looks as follows:</p>
<pre><code>print(inds[:10])
# [[0 0 0 0]
# [0 0 0 1]
# [0 0 0 2]
# [0 0 0 3]
# [0 0 0 4]
# [0 0 0 5]
# [0 0 1 0]
# [0 0 1 1]
# [0 0 1 2]
# [0 0 1 3]]
</code></pre>
<p>We can then use <code>np.take_along_axis</code> to select appropriate elements for each combination.</p>
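<p>For comparison, a much shorter (though slower for large inputs) sketch with <code>itertools.product</code>, which iterates over the rows of each array directly; the tiny arrays here are only for illustration:</p>

```python
import itertools
import numpy as np

a1 = np.array([[1.0, 2.0], [3.0, 4.0]])
a2 = np.array([[5.0, 6.0], [7.0, 8.0]])
a3 = np.array([[9.0, 10.0], [11.0, 12.0]])
a4 = np.array([[13.0, 14.0], [15.0, 16.0]])

# One output row per combination: concatenate one row from each array
res = np.array([np.concatenate(rows)
                for rows in itertools.product(a1, a2, a3, a4)])

print(res.shape)  # (16, 8): 2*2*2*2 combinations of four length-2 rows
```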
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Check a Checkbox if it is not checked in Selenium Python<p>I do have the Below Page Source:</p>
<pre><code>1. <li_ngcontent-shv-c123 mat-menu-item role="presentation" class="mat-focus-indicator dropdown-item mat-menu-item ng-star-inserted" tabindex="0" aria-disabled="false">
2. <mat-checkbox_ngcontent-shv-c123 class= "mat-checkbox example-margin mat-accent _mat-animation-noopable" id="53oo32be-6855-4yt4-965d-y71078b642">
3. <label class="mat-checkbox-layout" for="53oo32be-6855-4yt4-965d-y71078b642-input">
4. <div class="mat-checkbox-inner-container">.
5. <input type="checkbox" class="mat-checkbox-input cdk-visually-hidden" id="53oo32be-6855-4yt4-965d-y71078b642-input" tabindex="0" value="53oo32be-6855-4yt4-965d-y71078b642" name="Item List" aria-checked="true">
6. <div matripple="" class="mat-ripple mat-checkbox-ripple mat-focus-indicator">
7. <div class="mat-ripple-element mat-checkbox-persistent-ripple">
8. </div>
9. </div>
10. <div class="mat-checkbox-frame">
11. </div>
12. <div class="mat-checkbox-background">
13. <svg version="1.1" focusable="false" viewBox="0 0 31 31" xml:space="preserve" class="mat-checkbox-checkmark">
14. <path fill="none" stroke="white" d="M2.1,89.5 9,78.6 3.3,77.1" class="mat-checkbox-checkmark-path">
15. </Path>
16. </svg><div class="mat-checkbox-mixedmark">
17. </div>
18. </div>
19. </div>
20. <span class="mat-checkbox-label">
21. <span style="display none;">&nbsp;</span>XYZ Element</span>
22. </label>
23. </mat-checkbox><div matripple="" class="mat-ripple mat-menu-ripple">
24. </div>
25. </li>
</code></pre>
<p>In this <strong>Item List</strong> (Line 5) is an Drop Down List having finite number of elements with checkboxes.
What I need to check is for element <strong>XYZ Element</strong> (Line21) in the drop down list and check the checkbox with it if and only if the checkbox is unchecked.</p>
<p>If the checkbox is already checked leave it as such.</p>
<p>I tried with the below code:</p>
<pre><code>test= "//span[contains(text(),'XYZ Element')]"
try:
driver.finde_element_by_xpath(test).is_selected()
pass
except:
driver.find_element_by_xpath(test).click()
</code></pre>
<p>The above code is not yielding the require result.</p>
<p>Is there any alternative way to do this?</p>
<p>Thanks in Advance.</p>
|
<p>Use <code>find_elements</code> to check whether the checkbox exists at all; if it does, check <code>is_selected()</code> and click only when it is not already selected, like below:</p>
<p><strong>code :</strong></p>
<pre><code>try:
if len(driver.find_elements_by_xpath("//span[contains(text(),'XYZ Element')]/../descendant::input")) > 0 :
print("Element is present")
if driver.find_element_by_xpath("//span[contains(text(),'XYZ Element')]/../descendant::input").is_selected():
print("it's already selected")
else:
driver.find_element_by_xpath("//span[contains(text(),'XYZ Element')]/../descendant::input").click()
else:
print("Element is not present at all")
except:
print("Check out the code again")
pass
</code></pre>
<p><strong>Update 1 :</strong></p>
<p>to make <code>xyz element</code> variable :</p>
<pre><code>some_str = "XYZ Element"
driver.find_element_by_xpath(f"//span[contains(text(),'{some_str}')]")
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How should I deploy a web application to Debian?<p>Ideally I’d like to build a package to deploy to Debian. Ideally the installation process would check the system has the required dependencies installed, as well as configure Cronjobs, set up users etc.</p>
<p>I’ve tried googling around and I understand a .deb is the format I can distribute in - but that is as far as I got since I’m getting confused now with the tooling I need to get up to speed with. The other option is to just git clone on the server and configure the environment manually… but that’s not preferable for obvious reasons.</p>
<p>How can I get started with building a Debian package and is that the right direction for deploying web applications? If anyone could point me in the right direction tools-wise and perhaps a tutorial that would be massively appreciated :) also if you advise to just take the simple route with git, happy to take that advice as well if you explain why. if it makes any difference I’m deploying one nodejs and one python web application</p>
|
<p>You can for sure package everything as a Linux application; for example using <a href="https://www.pyinstaller.org" rel="nofollow noreferrer">pyinstaller</a> for your python webapp.</p>
<p>Besides that, it depends on your use case.</p>
<p>I will focus on the second part of your question,</p>
<blockquote>
<p>How can I get started with building a Debian package and is that the right direction for deploying web applications?</p>
</blockquote>
<p>as that seems to be what you are after when considering other alternatives to .dev already in your question.</p>
<h2>I want to deploy 1-2 websites on my linux server</h2>
<p>In this case, I'd say manually git clone and configure everything. It's totally fine when you know there won't be much more running on the server, and it's pretty hassle-free.
Why spend time packaging when no one will ever need the package again after you have installed it on your server?</p>
<h2>I want to distribute my webapps to others on Debian</h2>
<p>Here a .deb would make total sense. For example <a href="https://www.plex.tv/media-server-downloads/" rel="nofollow noreferrer">Plex media server</a> and other applications are shipped like this.
If the <a href="https://wiki.debian.org/HowToPackageForDebian" rel="nofollow noreferrer">official Debian wiki</a> is too abstract, there are also <a href="https://www.internalpointers.com/post/build-binary-deb-package-practical-guide" rel="nofollow noreferrer">other more hands on guides</a> to get you started quickly. You could also get other .deb Packages and extract them to see what they are made up from. You mentioned one of your websites is using python, so I just suspect it might be flask or Django. If it's Django, there is an <a href="https://github.com/codeinthehole/django-in-a-deb-file" rel="nofollow noreferrer">example repository</a> you might want to check out.</p>
<h2>I want to run a lot of stuff on my server / distribute to other devs and platforms / or scale soon</h2>
<p>In this case I would make the webapps into <a href="https://docs.docker.com/samples/" rel="nofollow noreferrer">docker containers</a>. They are easy to build, share, and deploy. On top you can easily bundle all dependencies and scripts to make sure everything is setup right. Also they are easy to run and stop. So you have a simple "on/off" switch if your server is running low on resources while you want to run something else. I highly favour this solution, as it also allows you to easily control what is running on what ip when you deploy more and more applications to your server. But, as you pointed out, it runs with a bit of overhead and is not the best solution on weak hardware.
Also, if you know for sure what will be running on the server long term and don't need the flexibility, I would probably skip Docker as well.</p>
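<p>To make the Docker option concrete, a minimal sketch of a Dockerfile for the python webapp (assuming a Flask-style app in <code>app.py</code> with its dependencies listed in <code>requirements.txt</code>; adjust the names to your project):</p>

```dockerfile
# Minimal image for the Python webapp (file names are illustrative)
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first so Docker caches this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

The nodejs app gets an analogous file based on a <code>node</code> base image, and both containers can then be started, stopped, and updated independently.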
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
replace alphanumeric values in a column dataframe<p>I have a big data frame that looks like this:</p>
<pre><code>a, b, c
4f5t-4656, x, y
3jsu-56hj, x, y
gfhdu670-9, x, y
fgfj-6fhf, x, y
ELE, x, y
ELE, x, y
</code></pre>
<p>My goal is to replace all the alphanumeric values in column a by the the letters 'LCD'. I have tried:</p>
<pre><code>df['a']=df['a'].replace([a-z0-9-], 'LCD', regex=True)
</code></pre>
<p>but I am getting the "SyntaxError: invalid syntax"</p>
<p>What's the problem with the code? can anyone help?</p>
|
<p>My bet is that you don't want to replace each matching character with LCD, but the whole run of characters; thus you probably want to add a <code>+</code> quantifier to your regex (in addition to the missing quotes, which are what give you the SyntaxError):</p>
<pre><code>df['a'] = df['a'].replace('[a-z0-9-]+', 'LCD', regex=True)
</code></pre>
<p>Output:</p>
<pre><code> a b c
0 LCD x y
1 LCD x y
2 LCD x y
3 LCD x y
4 ELE x y
5 ELE x y
</code></pre>
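<p>If you want to be stricter and only replace values that consist <em>entirely</em> of lowercase letters, digits and dashes (rather than replacing every matching run inside a value), a masked assignment with <code>str.fullmatch</code> is a sketch of an alternative:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "a": ["4f5t-4656", "3jsu-56hj", "gfhdu670-9", "fgfj-6fhf", "ELE", "ELE"],
    "b": ["x"] * 6,
    "c": ["y"] * 6,
})

# True only where the whole value matches the pattern
mask = df["a"].str.fullmatch(r"[a-z0-9-]+")
df.loc[mask, "a"] = "LCD"
print(df["a"].tolist())  # ['LCD', 'LCD', 'LCD', 'LCD', 'ELE', 'ELE']
```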
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how to make the page redirects to the same page after the event occured<p>I am trying to make a upvote and downvote functionality on my website. however there is a particular behaviour that i don't like which is that whenever a user clicks the button, he should not be redirect to another page but should remain on the same page.</p>
<p>What happens after clicking the upvote button is that it goes to the url <code>http://localhost:8001/upvote/2/</code> which i don't want. I want it to remain on the same page which is <code>http://localhost:8001/view-supplier/</code></p>
<p>models.py</p>
<pre><code>class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(max_length=254, unique=True)
# CUSTOM USER FIELDS
firstname = models.CharField(max_length=30)
lastname = models.CharField(max_length=30)
upvotes = models.IntegerField(default=0)
downvotes = models.IntegerField(default=0)
objects = UserManager()
def get_absolute_url(self):
return "/users/%i/" % (self.pk)
def get_email(self):
return self.email
</code></pre>
<p>views.py</p>
<pre><code>def Viewsupplier(request):
title = "All Suppliers"
suppliers = User.objects.filter(user_type__is_supplier=True)
context = {"suppliers":suppliers, "title":title}
return render(request, 'core/view-suppliers.html', context)
@login_required
def upvote(request, pk):
supplier_vote = get_object_or_404(User, id=pk)
supplier_vote.upvotes += 1
supplier_vote.save()
upvote_count = supplier_vote.upvotes
context = {"supplier_vote":supplier_vote, "upvote_count":upvote_count}
return render(request, "core/view-suppliers.html", context)
@login_required
def downvote(request, pk):
supplier_vote = get_object_or_404(User, id=pk)
supplier_vote.downvotes -= 1
supplier_vote.save()
downvote_count = supplier_vote.downvotes
context = {"supplier_vote":supplier_vote, "downvote_count":downvote_count}
return render(request, "core/view-suppliers.html", context)
</code></pre>
<p>urls.py</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path('upvote/<int:pk>/', views.upvote, name='upvote'),
path('downvote/<int:pk>/', views.downvote, name='downvote'),
]
</code></pre>
<p>view-supplier.html</p>
<pre><code><table class="table table-borderless table-data3">
<thead>
<tr>
<th>No</th>
<th>Country</th>
<th>Votes</th>
</tr>
</thead>
<tbody>
{% for supplier in suppliers %}
<tr>
<td>{{forloop.counter}}</td>
<td>{{supplier.country}}</td>
<td>
<div class="table-data-feature">
<a href="{% url 'upvote' supplier.id %}" class="m-r-10">
<button class="item" data-toggle="tooltip" data-placement="top" title="Like">
<i class="zmdi zmdi-thumb-up"></i>{{upvote_count}}</button>
</a>
<a href="{% url 'downvote' supplier.id %}">
<button class="item" data-toggle="tooltip" data-placement="top" title="Dislike">
<i class="zmdi zmdi-thumb-down"></i>{{downvote_count}}</button>
</a>
</div>
</td>
</tr>
{% empty %}
<tr><td class="text-center p-5" colspan="7"><h4>No supplier available</h4></td></tr>
{% endfor %}
</tbody>
</table>
</code></pre>
|
<p>You need to implement an API (application programming interface) to send the upvote and downvote asynchronously. <a href="https://www.django-rest-framework.org/" rel="nofollow noreferrer">Django REST framework</a> is the way to go to create your very own API. You can watch hours of video tutorials on that subject on YouTube. The docs for Django REST framework are really great and easy to read as well. Django is a server-side web framework, which means that it can only help you if you submit to the server. You can definitely reload the same page:</p>
<pre><code>return HttpResponseRedirect(reverse('<app_name>:<url_name>'))
</code></pre>
<p>However, there will be an interruption. So, the recommended way to handle this type of behavior is through asynchronous call using JavaScript's APIs such as the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API" rel="nofollow noreferrer">Fetch API</a> to the REST framework.</p>
<p>If you decide to send data to your server the old-fashioned way instead, you can always use your upvote and downvote views to submit user data and update the counts. Then, in your <code>Viewsupplier</code> view, you need to get the updated vote counts and pass them to the context. So, your upvote view changes the upvote count and redirects back to the <code>Viewsupplier</code> view; inside <code>Viewsupplier</code>, you get the counts and add them to the context:</p>
<pre><code># in your template
<a href="{% url 'upvote' supplier.id %}" class="m-r-10">
<button class="item" data-toggle="tooltip" data-placement="top" title="Like">
<i class="zmdi zmdi-thumb-up"></i>{{upvote_count}}</button>
</a>
# in your view
def Viewsupplier(request):
title = "All Suppliers"
suppliers = User.objects.filter(user_type__is_supplier=True)
# Get the updated count:
suppliers_votes_count = {}
for supplier in suppliers:
upvote_count = supplier.upvotes
downvote_count = supplier.downvotes
supplier_count = {supplier: {'upvote': upvote_count, 'downvote': downvote_count } }
suppliers_votes_count.update(supplier_count)
context = {"suppliers":suppliers, "title":title, "suppliers_votes_count": suppliers_votes_count }
return render(request, 'core/view-suppliers.html', context)
@login_required
def upvote(request, pk):
supplier_vote = get_object_or_404(User, id=pk)
supplier_vote.upvotes += 1
supplier_vote.save()
upvote_count = supplier_vote.upvotes
context = {"supplier_vote":supplier_vote, "upvote_count":upvote_count}
return HttpResponseRedirect(reverse('core:view_supplier'))
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Efficient way to load big heterogeneous dataset in a numpy array?<p>I have to load a dataset into a big array with <code>p</code> instances where each instance has 2 dimensions <code>(n_i, m)</code>. The length of the first dimension <code>n_i</code> is variable.</p>
<p>My first approach was to pad all the instances to the <code>max_len</code> over the first dimension, initialize an array of size <code>(p, max_len, m)</code> and then broadcast each instance into the big array as follows <code>big_array[i*max_len:i*max_len+max_len] = padded_i_instance</code>. This is fast and works well, the problem is that I only have 8Gb of RAM and I get <code>(interrupted by signal 9: SIGKILL) error</code> when I try to load the whole dataset. It also feels very wasteful since the shortest instance is almost 10 times shorter than the <code>max_len</code> so some instances are 90% padding.</p>
<p>My second approach was to use <code>np.vstack</code> and then build the <code>big_array</code> iteratively. Something like this:</p>
<pre><code>big_array = np.zeros([1,l])
for i in range(1,n):
big_array = np.vstack([big_array, np.full([i,l], i)])
</code></pre>
<p>this feels less "wasteful" but it actually takes 100x longer to execute for only 10000 instances, it is unfeasible to use for 100k+.</p>
<p>So I was wondering if there was a method that is both more memory efficient than approach 1 and more computationally efficient than approach 2. I read about <code>np.append</code> and <code>np.insert</code> but they seem to be other versions of <code>np.vstack</code> so I assume the would take roughly as much time.</p>
|
<p>The slow repeated vstack:</p>
<pre><code>In [200]: n=5; l=2
...: big_array = np.zeros([1,l])
...: for i in range(1,n):
...: big_array = np.vstack([big_array, np.full([i,l], i)])
...:
In [201]: big_array
Out[201]:
array([[0., 0.],
[1., 1.],
[2., 2.],
[2., 2.],
[3., 3.],
[3., 3.],
[3., 3.],
[4., 4.],
[4., 4.],
[4., 4.],
[4., 4.]])
</code></pre>
<p>list append is faster:</p>
<pre><code>In [202]: alist = []
In [203]: for i in range(1,n):
...: alist.append(np.full([i,l], i))
...:
...:
In [204]: alist
Out[204]:
[array([[1, 1]]),
array([[2, 2],
[2, 2]]),
array([[3, 3],
[3, 3],
[3, 3]]),
array([[4, 4],
[4, 4],
[4, 4],
[4, 4]])]
In [205]: np.vstack(alist)
Out[205]:
array([[1, 1],
[2, 2],
[2, 2],
[3, 3],
[3, 3],
[3, 3],
[4, 4],
[4, 4],
[4, 4],
[4, 4]])
</code></pre>
<p>filling a preallocated array:</p>
<pre><code>In [210]: arr = np.zeros((10,2),int)
...: cnt=0
...: for i in range(0,n):
...: arr[cnt:cnt+i,:] = np.full([i,l],i)
...: cnt += i
...:
In [211]: arr
Out[211]:
array([[1, 1],
[2, 2],
[2, 2],
[3, 3],
[3, 3],
[3, 3],
[4, 4],
[4, 4],
[4, 4],
[4, 4]])
</code></pre>
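<p>If memory is the main constraint, you can also preallocate the exact total size up front (the sum of the first dimensions) instead of padding to <code>max_len</code>, so no space is wasted on padding (a sketch with small made-up instances):</p>

```python
import numpy as np

# Variable-length instances of shape (n_i, m), here m = 2
instances = [np.full((i, 2), i) for i in range(1, 5)]

# Allocate exactly the rows needed -- no padding waste
total_rows = sum(a.shape[0] for a in instances)
big_array = np.empty((total_rows, 2), dtype=instances[0].dtype)

cnt = 0
for a in instances:
    big_array[cnt:cnt + a.shape[0]] = a
    cnt += a.shape[0]

print(big_array.shape)  # (10, 2), since 1 + 2 + 3 + 4 = 10
```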
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Accessing index using list comprehesion in Python<p>Hello recently I am trying to figure out list comprehension. And it looks like I suck at this :/</p>
<p>Here is my code I am trying to remake using list comprehension</p>
<pre><code>base = datetime.datetime.today()
date_list = [base - datetime.timedelta(days=x) for x in range(61)]
del date_list[0]
date_list.reverse()
weekend = []
dates = []
for idx, val in enumerate(date_list):
dates.append(str(val)[0:10])
dates.append(str(val)[0:10])
case = val.isoweekday()
if case == 6 or case == 7:
weekend.append(str(val)[0:10])
</code></pre>
<p>I looked it out how to use enumerate using this method and i found this:</p>
<p><code>[val for idx, val in enumerate(date_list)]</code></p>
<p>But i don't know how to continue this idea.</p>
<p>I would really appreciate some help :)</p>
|
<p>A list comprehension with a filtering condition has the form <code>[f(x) for x in items if condition(x)]</code>; the form <code>[f(x) if condition(x) else g(x) for x in items]</code> is a conditional expression that chooses between two values, not a filter.
So for your case, I would say:</p>
<pre><code>base = datetime.datetime.today()
date_list = [base - datetime.timedelta(days=x) for x in range(1,61)]
date_list.reverse()
weekend = [date.strftime("%m/%d/%Y, %H:%M:%S") for date in date_list if (date.weekday()==5 or date.weekday()==6)]
dates=[]
for date in date_list:
dates.extend([date.strftime("%m/%d/%Y, %H:%M:%S"),date.strftime("%m/%d/%Y, %H:%M:%S")])
</code></pre>
<p>It's better to use strftime to transform datetime in the format you want: <a href="https://docs.python.org/fr/3.6/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">https://docs.python.org/fr/3.6/library/datetime.html#strftime-and-strptime-behavior</a></p>
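<p>For completeness, the doubled <code>dates</code> list can also be built with a comprehension. A minimal sketch, using a fixed base date instead of <code>today()</code> so the output is reproducible:</p>

```python
import datetime

base = datetime.datetime(2021, 6, 1)
date_list = [base - datetime.timedelta(days=x) for x in range(1, 61)]
date_list.reverse()

# Each date appears twice, in order: the nested comprehension flattens the pairs.
dates = [d.strftime("%Y-%m-%d") for d in date_list for _ in range(2)]

# Weekend filter in one comprehension (isoweekday: Mon=1 ... Sun=7).
weekend = [d.strftime("%Y-%m-%d") for d in date_list if d.isoweekday() in (6, 7)]

print(len(dates), len(weekend))
```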
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pyspark - adding new column with values by using function - group by and max<p>I've got a scenario where I have to take the results from a group by and max and create a new column:</p>
<p>For example, say I have this data:</p>
<pre><code>|employee_name|department|state|salary|
+-------------+----------+-----+------+
| James| Sales| NY| 90000|
| Michael| Sales| NY| 86000|
| Robert| Sales| CA| 81000|
| Maria| Finance| CA| 90000|
| Raman| Finance| CA| 99000|
| Scott| Finance| NY| 83000|
| Jeff| Marketing| CA| 80000|
| Kumar| Marketing| NY| 91000|
</code></pre>
<p>My output should look like:</p>
<pre><code>|employee_name|department|state|salary|max(salary by department)
+-------------+----------+-----+------+---
| James| Sales| NY| 90000| 90000
| Michael| Sales| NY| 86000| 90000
| Robert| Sales| CA| 81000| 90000
| Maria| Finance| CA| 85000| 88000
| Raman| Finance| CA| 88000| 88000
| Scott| Finance| NY| 83000| 88000
| Jeff| Marketing| CA| 80000| 91000
| Kumar| Marketing| NY| 91000| 91000
</code></pre>
<p>Any tips? Will be of great help.</p>
|
<p>You can also use a window <code>partitionBy</code> instead of a group-by and join:</p>
<pre><code>from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = df.withColumn('max_in_dept',
                   F.max('salary').over(Window.partitionBy('department')))
df.show(5, False)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to create new csv files from a directories with python?<p>I have some csv files that i have filtered with this code and it works:</p>
<pre><code>with open('path' , 'r')as f:
for lines in f:
if '2020-12-31' in lines:
line_data = lines.split(';')
filtered_list.append(line_data)
newfile.write(lines)
</code></pre>
<p>Firstly i would like do this but for ALL csv file in my folder.</p>
<p>Secondly i would like to do this in prompt command line if possible( with sys?).</p>
<p>i tried:</p>
<pre><code>import os
from os import walk
from pathlib import Path
dir = r'myPathFolder1'
target = r'myPathFolder2'
filtered_list=[]
for filenames in os.listdir(dir):
for f in filenames:
if f.endswith(".csv"):
newfile = open(dir + f, 'w')
with open(f , 'r') as t:
for lines in t:
if '2020-12-31' in lines:
line_data = lines.split(';')
filtered_list.append(line_data)
newfile.write(lines)
</code></pre>
<p>But it doesnt work.</p>
|
<p>Here is the full code; I tried it and it writes the filtered lines of every CSV into another folder:</p>
<pre><code>import os, fnmatch

dir = "C:\\Users\\Frederic\\Desktop\\"

def find(pattern, path):
    result = []
    for root, dirs, files in os.walk(path):
        for name in files:
            if fnmatch.fnmatch(name, pattern):
                result.append(os.path.join(root, name))
    return result

csv_files = find('*.csv', dir)
print(csv_files)

for f in csv_files:
    base_dir_pair = os.path.split(f)
    address = "C:\\Users\\Frederic\\Desktop\\aa\\" + base_dir_pair[1]
    print(address)
    # 'with' closes both files even if something goes wrong;
    # also note: never append to the list you are iterating over
    with open(f, 'r') as t, open(address, 'w') as newfile:
        for lines in t:
            if '2020-12-31' in lines:
                newfile.write(lines)
</code></pre>
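<p>If you are on Python 3.4+, <code>pathlib</code> makes the recursive walk shorter. A minimal runnable sketch; the real source and target folders are placeholders, so temp directories are used here:</p>

```python
from pathlib import Path
import tempfile

# Hypothetical source/target folders; temp dirs keep the sketch runnable anywhere.
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp())

# Create a sample CSV in a sub folder to filter.
(src / "sub").mkdir()
(src / "sub" / "a.csv").write_text("x;1\n2020-12-31;2\ny;3\n")

for csv_path in src.rglob("*.csv"):          # rglob recurses into all sub folders
    kept = [ln for ln in csv_path.read_text().splitlines() if "2020-12-31" in ln]
    (dst / csv_path.name).write_text("\n".join(kept))

print((dst / "a.csv").read_text())  # 2020-12-31;2
```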
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Using Python on the shared Windows Gitlab runner<p><a href="https://about.gitlab.com/blog/2020/01/21/windows-shared-runner-beta/" rel="nofollow noreferrer">More than a year ago</a> Gitlab announced a beta of shared runners for windows.</p>
<p>I would like to use this feature to bundle a Python application into an executable with <code>pyinstaller</code>, but I fail to get the most basic build environment. My <code>.gitlab-ci.yml</code> looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>.shared_windows_runners:
tags:
- shared-windows
- windows
- windows-1809
build:
extends:
- .shared_windows_runners
stage: build
script:
- wget.exe https://www.python.org/ftp/python/3.9.4/python-3.9.4-amd64.exe
- .\python-3.9.4-amd64.exe /quiet InstallAllUsers=1 PrependPath=1
- $env:Path += "C:\Program Files\Python39\"
- python.exe -m pip install .
- python.exe -m pyinstaller my_app.py -F
artifacts:
paths:
- my_app.exe
</code></pre>
<p>This fails at the <code>pip install</code> step with the following error message:</p>
<pre><code>python.exe : The term 'python.exe' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python.exe -m pip install .
+ ~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (python.exe:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
<p>When I use this instead:</p>
<pre class="lang-yaml prettyprint-override"><code>script:
- wget.exe https://www.python.org/ftp/python/3.9.4/python-3.9.4-amd64.exe
- .\python-3.9.4-amd64.exe /quiet InstallAllUsers=1 PrependPath=1
- '"C:\Program Files\Python39\python.exe" -m pip install .'
</code></pre>
<p>At least <code>python.exe</code> can be found (I guess) but the arguments are not taken as expected.</p>
<pre><code>At line:1 char:40
+ "C:\Program Files\Python39\python.exe" -m pip install .
+ ~~
Unexpected token '-m' in expression or statement.
At line:1 char:43
+ "C:\Program Files\Python39\python.exe" -m pip install .
+ ~~~
Unexpected token 'pip' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : UnexpectedToken
</code></pre>
<p>Can you give me some pointers how to correctly pass the arguments for PowerShell?</p>
|
<p>This <a href="https://stackoverflow.com/a/62833355/5119485">answer</a> has the solution. Python should be installed with <code>choco</code>.</p>
<p>For this case the correct <code>.gitlab-ci.yml</code> is this:</p>
<pre class="lang-yaml prettyprint-override"><code>.shared_windows_runners:
tags:
- shared-windows
- windows
- windows-1809
build:
extends:
- .shared_windows_runners
stage: build
script:
- choco install python --version=3.9.4 -y -f
- "C:\\Python39\\python.exe -m pip install ."
- "C:\\Python39\\Scripts\\pyinstaller.exe my_app.py -F"
artifacts:
paths:
- dist/my_app.exe
</code></pre>
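<p>As a side note, the quoting error in the original attempt can also be avoided without choco: PowerShell needs the call operator <code>&amp;</code> to execute a program whose path is given as a quoted string. A sketch of that variant (path taken from the question, so adjust as needed):</p>

```yaml
script:
  - '& "C:\Program Files\Python39\python.exe" -m pip install .'
```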
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python PyTorch Pyro - Multivariate Distributions<p>How does one sample a multivariate distribution in Pyro? I just want a <code>(M, N)</code> Beta distribution, but the following doesn't work:</p>
<pre><code>impor torch
import pyro
with pyro.plate("theta_plate", M):
theta = pyro.sample("theta",
pyro.distributions.Beta(concentration0=torch.ones(N),
concentration1=torch.ones(N)))
</code></pre>
|
<p>Use <code>to_event(n)</code> to declare the rightmost <code>n</code> dimensions as dependent (event) dimensions.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import pyro
import pyro.distributions as dist
def model(N, M):
with pyro.plate("theta_plate", M):
theta = pyro.sample("theta", dist.Beta(torch.ones(N),1.).to_event(1))
return theta
if __name__ == '__main__':
    print(model(10,12).shape)  # torch.Size([12, 10]): plate dim M on the left, event dim N on the right
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how to pass a list to a Data Frame?<p>I'm getting this error <code>pyspark.sql.utils.Illegal Argument Exception: requirement failed: The number of columns doesn't match.</code></p>
<p>Old column names (6): <code>vin</code>, <code>age</code>, <code>var</code>, <code>rim</code>, <code>cap</code>, <code>cur</code>.</p>
<p>New column names (2): <code>vin</code>, <code>age</code> for the code below:</p>
<pre class="lang-py prettyprint-override"><code>
schema = StructType([
StructField( 'vin', StringType(), True),StructField( 'age', IntegerType(), True),StructField( 'var', IntegerType(), True),StructField( 'rim', IntegerType(), True),StructField( 'cap', IntegerType(), True),StructField( 'cur', IntegerType(), True)
])
data = [['tom', 10,54,87,23,90], ['nick', 15,63,23,11,65], ['juli', 14,87,9,43,21]]
df=spark.createDataFrame(data,schema)
use=['vin','age']
df1=df.toDF(*use)
df1.show()
</code></pre>
|
<p>To select certain columns from a dataframe using a list of column names, use <code>select</code>, not <code>toDF</code>:</p>
<pre><code>use = ['vin','age']
df1 = df.select(*use)
</code></pre>
<p><code>toDF</code> is only suitable for renaming all columns in a dataframe. It's not suitable for selecting certain columns.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to change the height of each image grid with mpl_toolkits.axes_grid1.Imagegrid<p>I'm plotting an image with 4*6 grids with mpl_toolkits.axes_grid1.Imagegrid:</p>
<pre><code>target = np.reshape(target, (12, 41, 81))
output = np.reshape(output, (12, 41, 81))
pred_error = target - output
# target: C x D x H x W
sfmt = ticker.ScalarFormatter(useMathText=True)
sfmt.set_powerlimits((-2, 2))
cmap = 'jet'
fig = plt.figure(1, (11, 5.5))
axes_pad = 0.1
cbar_pad = 0.1
label_size = 6
plt.rcParams["mpl_toolkits.legacy_colorbar"] = False
subplots_position = [(4,6,i) for i in range(1, 25)]
for i, subplot_i in enumerate(subplots_position):
if i in [i for i in range(6)]+[i for i in range(12, 18)]:
# share one colorbar
grid = ImageGrid(fig, subplot_i, # as in plt.subplot(111)
nrows_ncols=(2, 1),
axes_pad=axes_pad,
share_all=False,
cbar_location="right",
cbar_mode="single",
cbar_size="3%",
cbar_pad=cbar_pad,
)
if i <6:
data = (target[i], output[i])
else:
data = (target[i-6], output[i-6])
channel = np.concatenate(data)
vmin, vmax = np.amin(channel), np.amax(channel)
# Add data to image grid
for j, ax in enumerate(grid):
im = ax.imshow(data[j], vmin=vmin, vmax=vmax, cmap=cmap)
ax.set_axis_off()
# ticks=np.linspace(vmin, vmax, 10)
#set_ticks, set_ticklabels
cbar = grid.cbar_axes[0].colorbar(im, format=sfmt)
# cbar.ax.set_yticks((vmin, vmax))
cbar.ax.yaxis.set_offset_position('left')
cbar.ax.tick_params(labelsize=label_size)
cbar.ax.toggle_label(True)
else:
grid = ImageGrid(fig, subplot_i, # as in plt.subplot(111)
nrows_ncols=(1, 1),
axes_pad=axes_pad,
# share_all=True,
# aspect=True,
cbar_location="right",
cbar_mode="single",
cbar_size="6%",
cbar_pad=cbar_pad,
)
data = [pred_error[i%12]]
for j, ax in enumerate(grid):
im = ax.imshow(data[j], cmap=cmap)
ax.set_axis_off()
ax.set_axes_locator
cbar = grid.cbar_axes[j].colorbar(im, format=sfmt)
grid.cbar_axes[j].tick_params(labelsize=label_size)
grid.cbar_axes[j].toggle_label(True)
plt.tight_layout()##pad=0.25, w_pad=0.25, h_pad=0.25)
# fig.subplots_adjust(wspace=0.075, hspace=0.075)
plt.show()
plt.savefig('test.pdf',
dpi=300, bbox_inches='tight')
plt.show()
plt.close(fig)
</code></pre>
<p><em>target</em> is (2,6,41,81) shape array, resized to (12,41,81) when plotting, the <em>output</em> is the same dimension, <em>pred_error</em> is the difference between them. I want to show <em>target</em> in the 1st and 4th rows, <em>output</em> in 2nd and 5th rows with the same colorbar, <em>pred_error</em> in 3rd and 6th rows.</p>
<p><strong>The image I'm getting now:</strong> I annotated the grid boxes,
<a href="https://i.stack.imgur.com/YOPLD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YOPLD.png" alt="the red boxes are my annotation, each box is a grid, I plot it this way so the first two rows can share the colorbar" /></a></p>
<p><strong>The image I want is without the huge gaps:</strong>
<a href="https://i.stack.imgur.com/hNA2y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hNA2y.png" alt="I want to resize the grids in the green images rows so the big gap could shrink" /></a></p>
<p>I know the problem is that the grids in my image are all the same size, but I didn't find the way to edit the height of those green images grids. I appreciate your help!!!</p>
|
<p>I ended up making the figures the way I wanted (still keeping the colorbar-sharing property) using plain subplots:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

target = np.random.randn(12, 41, 81)
target[6:,:,:] *= 2
output = np.random.randn(12, 41, 81)
output[6:,:,:] *= 2
target = np.reshape(target, (12, 41, 81))
output = np.reshape(output, (12, 41, 81))
pred_error = target - output
fig, axes = plt.subplots(nrows=6, ncols=6, figsize=(11, 5))
axes = axes.flat
index = [[i, i+6] for i in range(6)] + [[i, i+6] for i in range(18, 18+6)]
error_index = list(np.arange(12, 18)) + list(np.arange(30, 36))
y_ind = 0
axes_pad = 0.1
cbar_pad = 0.1
label_size = 6
for ind_pair in index:
axy = axes[ind_pair[0]]
data = (target[y_ind], output[y_ind])
vmin = np.min(data)
vmax = np.max(data)
imy = axy.imshow(data[0], vmin=vmin, vmax=vmax, cmap = 'jet')
axout = axes[ind_pair[1]]
imout = axout.imshow(data[1], vmin=vmin, vmax=vmax, cmap = 'jet')
axy.set_axis_off()
axout.set_axis_off()
v1 = np.linspace(vmin, vmax, 7, endpoint=True)
cbar = fig.colorbar(imout, ax=[axes[ind_pair[0]], axes[ind_pair[1]]],
format='%.2f', aspect=20, shrink = 0.95)
cbar.set_ticks(v1)
cbar.ax.tick_params(labelsize=label_size)
y_ind += 1
e_ind = 0
for ind_error in error_index:
axe = axes[ind_error]
ime = axe.imshow(pred_error[e_ind], cmap = 'jet')
axe.set_axis_off()
v1 = np.linspace(np.min(pred_error[e_ind]),np.max(pred_error[e_ind]), 4, endpoint=True)
cbar = fig.colorbar(ime, ax=axes[ind_error], format='%.2f', aspect=8, shrink = 0.85)
cbar.set_ticks(v1)
cbar.ax.tick_params(labelsize=label_size)
# plt.tight_layout()
plt.savefig('a.pdf',
dpi=300, bbox_inches='tight')
plt.show()
plt.close(fig)
</code></pre>
<p>The figure looks like this:
<a href="https://i.stack.imgur.com/QoMwE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QoMwE.jpg" alt="enter image description here" /></a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how to crop area of an image inside a rectangle or a squre?<p><a href="https://i.stack.imgur.com/WwsqC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WwsqC.jpg" alt="Image"></a></p>
<p><strong>First of all I take the picture and then I draw a rectangle over it. Now I just want to crop the image inside the rectangle. I tried drawing contours but that didn't work out in my case. I am stuck on it.</strong></p>
<pre><code>import cv2
import numpy as np
img = cv2.imread("C:/Users/hp/Desktop/segmentation/abc.jpg", 0);
h, w = img.shape[:2]
kernel = np.ones((15,15),np.uint8)
e = cv2.erode(img,kernel,iterations = 2)
d = cv2.dilate(e,kernel,iterations = 1)
ret, th = cv2.threshold(d, 150, 255, cv2.THRESH_BINARY_INV)
mask = np.zeros((h+2, w+2), np.uint8)
# cv2.floodFill(th, mask, (200,200), 255); # position = (200,200)
out = cv2.bitwise_not(th)
out= cv2.dilate(out,kernel,iterations = 3)
cnt, h = cv2.findContours(out,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for i in range(len(cnt)):
area = cv2.contourArea(cnt[i])
if(area>100):
mask = np.zeros_like(img)
cv2.drawContours(mask, cnt, i, 255, -1)
x,y,w,h = cv2.boundingRect(cnt[i])
crop= img[ y:h+y,x:w+x]
cv2.imshow("snip",crop )
if(cv2.waitKey(0))==27:break
cv2.destroyAllWindows()
</code></pre>
|
<p>Assuming the rectangle is black and that no other large black elements are connected to it, here is code that crops the area inside the rectangle:</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread("abc.jpg", 0);
h, w = img.shape[:2]
# print(img.shape)
kernel = np.ones((3,3),np.uint8)
img2 = img.copy()
img2[img2!=0]=255
img2 = 255 - img2
img2 = cv2.dilate(img2, kernel)
img2 = cv2.medianBlur(img2, 9)
img2 = cv2.medianBlur(img2, 9)
position = np.where(img2 !=0)
x0 = position[0].min()
x1 = position[0].max()
y0 = position[1].min()
y1 = position[1].max()
print(x0,x1,y0,y1)
result = img[x0:x1,y0:y1]
result = cv2.resize(result,(800,800))
# rect = cv2.resize(np.hstack((img,img2)),(1000,700))
cv2.imshow('anything', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
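<p>The core of this, independent of OpenCV, is computing the bounding box of the non-zero mask pixels with NumPy. A minimal sketch on a synthetic mask:</p>

```python
import numpy as np

# Synthetic mask: zeros everywhere, one filled rectangle of 255s.
mask = np.zeros((100, 120), dtype=np.uint8)
mask[20:60, 30:90] = 255

rows, cols = np.where(mask != 0)
x0, x1 = rows.min(), rows.max()
y0, y1 = cols.min(), cols.max()

crop = mask[x0:x1 + 1, y0:y1 + 1]   # +1 so the max index is included
print(crop.shape)  # (40, 60)
```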
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Click share button under youtube video with selenium<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.by import By
import pyautogui
titleVideo = input("Enter the title of the video: ")
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)
# Add experimental options to remove "Google Chrome is controlled by automated software" notification
chrome_options.add_experimental_option("useAutomationExtension", False)
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
driver = webdriver.Chrome(r'C:\Users\iwanh\Desktop\Drivers\chromedriver.exe', options=chrome_options)
driver.get("https://www.youtube.com/")
# We use driver.find_element with the help of the By import instead of find_element_by_name or id
accept_all = driver.find_element(By.XPATH, '/html/body/ytd-app/ytd-consent-bump-v2-lightbox/tp-yt-paper-dialog/div['
'4]/div/div[6]/div[1]/ytd-button-renderer['
'2]/a/tp-yt-paper-button/yt-formatted-string')
accept_all.click()
time.sleep(5)
search_box = driver.find_element(By.XPATH, value='/html/body/ytd-app/div[1]/div/ytd-masthead/div[3]/div['
'2]/ytd-searchbox/form/div[1]/div[1]/input')
search_box.send_keys(titleVideo)
searchGo = driver.find_element(By.XPATH, value='/html/body/ytd-app/div[1]/div/ytd-masthead/div[3]/div['
'2]/ytd-searchbox/button/yt-icon')
searchGo.click()
time.sleep(3)
pyautogui.press('tab')
pyautogui.press('tab') # Slopppy way to click on the first recommended
pyautogui.press('enter')# video will fix later
time.sleep(3)
shareButton = driver.find_element(By.XPATH, "/html/body/ytd-app/div[1]/ytd-page-manager/ytd-watch-flexy/div[5]/div["
"1]/div/ytd-watch-metadata/div/div[2]/div["
"2]/div/div/ytd-menu-renderer/div[1]/ytd-button-renderer["
"1]/a/yt-icon-button/yt-interaction")
time.sleep(3)
shareButton.click()
time.sleep(2.5)
copyButton = driver.find_element(By.XPATH, value="/html/body/ytd-app/ytd-popup-container/tp-yt-paper-dialog["
"1]/ytd-unified-share-panel-renderer/div["
"2]/yt-third-party-network-section-renderer/div["
"2]/yt-copy-link-renderer/div/yt-button-renderer/a/tp-yt-paper"
"-button/yt-formatted-string")
copyButton.click()
</code></pre>
<p>When it executes the shareButton part it says that it is "Unable to locate element". Two things that I am suspiscious might be happening.</p>
<ol>
<li>I am not copying the right XPATH</li>
<li>The XPATH changes everytime I open a new chrome tab <strong>OR</strong> everytime i rerun the program.</li>
</ol>
<p>Output:</p>
<p><img src="https://i.stack.imgur.com/qK3q9.png" alt="" /></p>
<p>Share button I want its XPATH:</p>
<p><img src="https://i.stack.imgur.com/O9V7v.png" alt="" /></p>
<p>P.S. I have tried to find element with other ways than XPATH but same result, if someones manages to do it with another way, its perfect.</p>
|
<p>you can click the button to share with:</p>
<pre><code>driver.find_element(By.XPATH, value='//*[@id="top-level-buttons-computed"]/ytd-button-renderer[1]/a').click()
</code></pre>
<p>A major issue, though, is that the element is not yet loaded when you first navigate to the page, so you have to wait for the share element to appear. You can do that in a couple of ways. The easier method:</p>
<pre><code>import time
time.sleep(10) # to wait 10 seconds
</code></pre>
<p>or (the recommended way)</p>
<p><a href="https://stackoverflow.com/questions/26566799/wait-until-page-is-loaded-with-selenium-webdriver-for-python">Wait until page is loaded with Selenium WebDriver for Python</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to add a column with conditions on another Dataframe?<p>Motivation: I want to check if customers have bought anything during 2 months since first purchase. (retention)</p>
<p>Resources: I have 2 tables:</p>
<ol>
<li>Buy date, ID and purchase code</li>
<li>Id and first day he bought</li>
</ol>
<p>Sample data:</p>
<pre><code>Table1
Date ID Purchase_code
2019-01-01 1 AQT1
2019-01-02 1 TRR1
2019-03-01 1 QTD1
2019-02-01 2 IGJ5
2019-02-05 2 ILW2
2019-02-20 2 WET2
2019-02-28 2 POY6
Table 2
ID First_Buy_Date
1 2019-01-01
2 2019-02-01
</code></pre>
<p>The expected result:</p>
<pre><code>ID First_login_date Retention Frequency_buy_at_first_month
1 2019-01-01 1 2
2 2019-02-01 0 4
</code></pre>
|
<p>First convert columns to datetimes if necessary, then add first days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a> and create new columns by compare with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.le.html" rel="nofollow noreferrer"><code>Series.le</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a> and converting to integers:</p>
<pre><code>df1['Date'] = pd.to_datetime(df1['Date'])
df2['First_Buy_Date'] = pd.to_datetime(df2['First_Buy_Date'])
df = df1.merge(df2, on='ID', how='left')
df['Retention'] = (df['First_Buy_Date'].add(pd.DateOffset(months=2))
.le(df['Date'])
.astype(int))
df['Frequency_buy_at_first_month'] = (df['First_Buy_Date'].add(pd.DateOffset(months=1))
.gt(df['Date'])
.astype(int))
</code></pre>
<p>Last aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> and <code>max</code> (if need only <code>0</code> or <code>1</code> output) and <code>sum</code> for count values:</p>
<pre><code>df1 = (df.groupby(['ID','First_Buy_Date'], as_index=False)
.agg({'Retention':'max', 'Frequency_buy_at_first_month':'sum'}))
print (df1)
ID First_Buy_Date Retention Frequency_buy_at_first_month
0 1 2019-01-01 1 2
1 2 2019-02-01 0 4
</code></pre>
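<p>Putting it all together on the sample data from the question, as a self-contained script:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'Date': ['2019-01-01', '2019-01-02', '2019-03-01',
             '2019-02-01', '2019-02-05', '2019-02-20', '2019-02-28'],
    'ID': [1, 1, 1, 2, 2, 2, 2],
    'Purchase_code': ['AQT1', 'TRR1', 'QTD1', 'IGJ5', 'ILW2', 'WET2', 'POY6'],
})
df2 = pd.DataFrame({'ID': [1, 2], 'First_Buy_Date': ['2019-01-01', '2019-02-01']})

df1['Date'] = pd.to_datetime(df1['Date'])
df2['First_Buy_Date'] = pd.to_datetime(df2['First_Buy_Date'])

df = df1.merge(df2, on='ID', how='left')
# bought anything 2+ months after the first purchase?
df['Retention'] = df['First_Buy_Date'].add(pd.DateOffset(months=2)).le(df['Date']).astype(int)
# purchase within the first month
df['Frequency_buy_at_first_month'] = (
    df['First_Buy_Date'].add(pd.DateOffset(months=1)).gt(df['Date']).astype(int)
)

out = (df.groupby(['ID', 'First_Buy_Date'], as_index=False)
         .agg({'Retention': 'max', 'Frequency_buy_at_first_month': 'sum'}))
print(out)
```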
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
conditional editing in django model<p>I am trying to find a way to update fields values automatically like when a user update the Type field , Action, Status and finish time fields get updated also.for example:</p>
<pre><code>if Type == 'On store': Then
Action == 'Requested'
Status == 'Pending'
Time_Finished == Today
</code></pre>
<p>The model goes like this </p>
<pre><code>class order(models.Model):
Time_Registered = models.DateField(blank=False)
Number = models.CharField(max_length=500)
Type = models.ForeignKey(Type, on_delete=models.CASCADE)
Action = models.CharField(max_length=500, blank=True, null=True, choices=ACTION)
Status = models.ForeignKey(Status, on_delete=models.CASCADE)
Time_Finished = models.DateField(blank=False)
class Status(models.Model):
    class Meta:
        verbose_name_plural = "Status"

    ID = models.IntegerField(max_length=250)
    # contains three values: Pending, Under Process and Delivered
    status = models.CharField(max_length=250)
</code></pre>
|
<p>You can override your model's <code>save()</code> method:</p>
<pre><code>from datetime import datetime

class order(models.Model):
Time_Registered = models.DateField(blank=False)
Number = models.CharField(max_length=500)
Type = models.ForeignKey(Type, on_delete=models.CASCADE)
Action = models.CharField(max_length=500, blank=True, null=True, choices=ACTION)
Status = models.ForeignKey(Status, on_delete=models.CASCADE)
Time_Finished = models.DateField(blank=False)
def save(self, *args, **kwargs):
if self.Type == 'On store':
self.Action = 'Requested'
self.Status = Status.objects.get(status='Pending')
self.Time_Finished = datetime.utcnow()
super().save(*args, **kwargs)
</code></pre>
<p>For automatically set dates, you can also take a look at the <code>auto_now</code> and <code>auto_now_add</code> attributes of <code>DateField</code>.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Converting a datetime to local time based on timezone in Python3<p>I have a question related to dates and time in Python.</p>
<p><strong>Problem:</strong></p>
<pre><code>date = datetime.datetime.strptime(str(row[1]), "%Y-%m-%d %H:%M:%S")
localtime = date.astimezone(pytz.timezone("Europe/Brussels"))
formattedDate = localtime.strftime("%Y-%m%-%d")
</code></pre>
<ol>
<li>In the code above, <code>str(row[1])</code> gives back a UTC datetime coming from a mysql database: <code>2022-02-28 23:00:00</code></li>
<li>I parse this as a datetime and change the timezone to Europe/Brussels.</li>
<li>I then format it back to a string.</li>
</ol>
<p><strong>Expected result:</strong></p>
<p>I'd like to return the date in local time. Europe/Brussels adds one hour so I would expect that strftime returns <code>2022-03-01</code>, but it keeps returning <code>2022-02-28</code>.</p>
<p>Can somebody help?</p>
|
<p><code>date</code> is a <em>naïve</em> datetime, without a timezone, because no timezone information was in the string you parsed. Be careful with <code>astimezone</code> here: when called on a naïve datetime, Python presumes it represents the <em>system local</em> time and converts from that; it does not treat the value as UTC. That is why the Brussels conversion does not give the result you expect.</p>
<p>This also already contains the answer: explicitly mark the <code>date</code> as UTC first, before converting it to a different timezone:</p>
<pre><code>date = datetime.datetime.strptime(...).replace(tzinfo=datetime.timezone.utc)
</code></pre>
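<p>A complete runnable version (Python 3.9+ for <code>zoneinfo</code>), using <code>replace</code> to attach UTC explicitly and showing the date rolling over to March 1st:</p>

```python
import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

raw = "2022-02-28 23:00:00"  # UTC string from the database
date = (datetime.datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
        .replace(tzinfo=datetime.timezone.utc))  # mark as UTC, no conversion

localtime = date.astimezone(ZoneInfo("Europe/Brussels"))  # UTC+1 in winter
formatted = localtime.strftime("%Y-%m-%d")
print(formatted)  # 2022-03-01
```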
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to bind a mouse click with a tkinter witdges?<p>I want to clear text box as soon as i click on it to write, Please help me to do this</p>
<pre><code>from tkinter import *
from tkinter import ttk
root = Tk()
fileLable = Text(root, height = 1, width = 26 )
fileLable.insert(INSERT, "this text")
fileLable.grid(row = 0, column = 0,columnspan = 2, padx = (10,0),pady = (3,0))
submit = ttk.Button(root, text = "Submit", command = some_method)
submit.grid(row = 1, column = 1, padx = (10,0), pady = (10,0))
root.mainloop()
</code></pre>
<p>I know that this one :</p>
<pre><code>def some_callback(event): # note that you must include the event as an arg, even if you don't use it.
e.delete(0, "end")
return None
</code></pre>
<p>So how do I catch this event when the text box is clicked, so that I can include the code above in my main code? Please help, thanks in advance.</p>
|
<p>You could try this below, bind events <code><FocusIn></code> and <code><FocusOut></code>:</p>
<pre><code>from tkinter import *
from tkinter import ttk
def delText(event=None):
fileLable.delete("1.0", END)
root = Tk()
fileLable = Text(root, height = 1, width = 26 )
fileLable.insert(INSERT, "this text")
fileLable.grid(row = 0, column = 0,columnspan = 2, padx = (10,0),pady = (3,0))
fileLable.bind("<FocusIn>", delText)
submit = ttk.Button(root, text = "Submit")
submit.grid(row = 1, column = 1, padx = (10,0), pady = (10,0))
root.mainloop()
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
read all files in sub folder with pandas<p>My notebook is in the home folder where I also have another folder "<strong>test</strong>". In the <strong>test</strong> folder, I have 5 sub folders. Each of the folder contains a .shp file. I want to iterate in all sub folders within test and open all .shp files. It doesn't matter if they get overwritten.</p>
<pre><code>data = gpd.read_file("./test/folder1/file1.shp")
data.head()
</code></pre>
<p>How can I do so? I tried this</p>
<pre><code>path = os.getcwd()
files = glob.glob(os.path.join(path + "/test/", "*.shp"))
print(files)
</code></pre>
<p>but this would only go in 1 layer deep.</p>
|
<p>You can use the <code>os.walk</code> method from the <code>os</code> library; it recurses into every sub folder. Note that shapefiles are read with <code>geopandas</code> (as in your snippet), not pandas:</p>
<pre><code>import os
import geopandas as gpd

for root, dirs, files in os.walk("./test"):
    for name in files:
        if name.endswith(".shp"):
            fpath = os.path.join(root, name)
            data = gpd.read_file(fpath)
</code></pre>
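<p>Alternatively, <code>glob</code> itself can recurse with the <code>**</code> pattern and <code>recursive=True</code>. A self-contained sketch that builds a throwaway directory tree so it runs anywhere:</p>

```python
import glob
import os
import tempfile

# Build a fake "test" tree with one .shp file per sub folder.
base = tempfile.mkdtemp()
for i in range(5):
    sub = os.path.join(base, f"folder{i + 1}")
    os.makedirs(sub)
    open(os.path.join(sub, f"file{i + 1}.shp"), "w").close()

# '**' with recursive=True walks every sub folder, any depth.
shp_files = sorted(glob.glob(os.path.join(base, "**", "*.shp"), recursive=True))
print(len(shp_files))  # 5
```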
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python kivy Problems with rewriting python in kivy<p>I have some problems with rewriting python code in kivy format</p>
<p>Here is the class "Settings" which has method to change screen.</p>
<pre><code>class SettingsMenu(Screen):
def on_touch_down(self, touch):
if touch.button == 'right':
self.parent.transition.direction = "right"
self.parent.transition.duration = 0.6
self.parent.current = "MainMenu"
</code></pre>
<p>And I want to rewrite it in kivy this way (or something like that):</p>
<pre><code><SettingsMenu>
name: "SettingsMenu"
on_touch_down:
if button == "right":
root.parent.transition.direction = "right"
root.parent.transition.duration = 0.6
root.parent.current = "MainMenu"
</code></pre>
<p>How should I do it correctly?</p>
<p>(Edit) Here is the full code. I just create two screens and when we are on SettingsMenu screen, I want to switch back to MainMenu screen with <em>right mouse button</em></p>
<p>(Commented on_touch_down in SettingsMenu python class works correctly, but when I try to make it in kivy way, I switch the screen with any mouse button, but desired was <em>right mouse button</em>)</p>
<p>Python:</p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.config import Config
Config.set('input', 'mouse', 'mouse,multitouch_on_demand')
class MainMenu(Screen):
pass
class SettingsMenu(Screen):
pass
# def on_touch_down(self, touch):
# if touch.button == 'right':
# self.parent.transition.direction = "right"
# self.parent.transition.duration = 0.6
# self.parent.current = "MainMenu"
class MenuManager(ScreenManager):
pass
main_kv = Builder.load_file("test.kv")
class THEApp(App):
def build(self):
return main_kv
THEApp().run()
</code></pre>
<p>This is the Kivy file (indentation may be broken during copy-paste, but I had no problems with syntax):</p>
<pre><code>MenuManager:
MainMenu:
SettingsMenu:
<MainMenu>
name: "MainMenu"
FloatLayout:
size: root.width, root.height
Button:
text: "Button 1 on Main Menu Screen"
on_release:
root.manager.transition.direction = 'left'
root.manager.transition.duration = 0.6
root.manager.current = "SettingsMenu"
<SettingsMenu>
name: "SettingsMenu"
button: "right"
on_touch_down:
if root.button == "right": \
root.parent.transition.direction = "right"; \
root.parent.transition.duration = 0.6; \
root.parent.current = "MainMenu"
FloatLayout:
size: root.width, root.height
Label:
text: "Label 1 on SettingsMenu"
</code></pre>
|
<p>You can do python code/logic for an <code>on_touch_down:</code> attribute in a <code>kv</code> file, but it is a bit goofy. I think your <code>kv</code> file will work like this:</p>
<pre><code><SettingsMenu>
name: "SettingsMenu"
on_touch_down:
if args[1].button == "right": \
root.parent.transition.direction = "right"; \
root.parent.transition.duration = 0.6; \
root.parent.current = "MainMenu"
</code></pre>
<p>In the <code>kivy</code> language, you can access the <code>args</code> of an <code>on_<action></code> method (see <a href="https://kivy.org/doc/stable/api-kivy.lang.html#value-expressions-on-property-expressions-ids-and-reserved-keywords" rel="nofollow noreferrer">documentation</a>). So the code above checks the <code>button</code> attribute of the <code>touch</code> arg.</p>
<p>Note that the <code>if</code> statement in the <code>kv</code> file must be on a single line. That is the reason for the <code>\</code> characters (to escape endlines) and the <code>;</code> (to delimit the lines of code).</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Flipping a binary in a string<p>I am new to python and coding in general. So what I am trying to achieve is to flip each bit in a binary string (for eg. if I input '110' the output should be '001'). I specifically need to use the <strong>while loop</strong> and I need to define it as a function. Here is what I've tried so far:</p>
<pre><code>def flip(binary_string):
new_string= ''
i=0
while i<len(binary_string):
if i == '0':
new_string=new_string+ '1'
if i == '1':
new_string= new_string+ '0'
i=i+1
return new_string
</code></pre>
<p>however it just returns the empty new_string as defined in the beginning. What's wrong with my code? any help would be greatly appreciated</p>
|
<p>You were comparing the loop index <code>i</code> (an integer) with the characters <code>'0'</code> and <code>'1'</code>, instead of the character at that index, <code>binary_string[i]</code>. Fixed that in the code below.</p>
<pre><code>def flip(binary_string):
new_string= ''
i=0
while i<len(binary_string):
if binary_string[i] == '0':
new_string=new_string+ '1'
if binary_string[i] == '1':
new_string= new_string+ '0'
i=i+1
return new_string
>>> flip("100")
'011'
</code></pre>
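<p>As a side note, outside the constraints of the exercise (which requires a <code>while</code> loop), the same flip can be done in one line with <code>str.translate</code>. This is just an illustrative sketch, not a replacement for the required form:</p>

```python
def flip_translate(binary_string):
    # str.maketrans builds a character-translation table: '0' -> '1', '1' -> '0'
    return binary_string.translate(str.maketrans("01", "10"))

print(flip_translate("110"))  # prints 001
```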
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
can't import python dictionary from a text file<p>I'm trying to import data from a text file as a dictionary in Python, but I'm stuck.</p>
<pre><code>{001 : {"firstname":'A',"field-lastname":'B',"field-birthday":'01011970'},
002 : {"field-firstname":'C','field-lastname':'D',"field-birthday":'01011970'}}
</code></pre>
<p>I tried with <code>json.load</code> and <code>json.loads</code> but it's raising errors.</p>
<p>I would like to import the file as a dict. Check if some data <code>003 : {"firstname":'Z',"field-lastname":'Y',"field-birthday":'01011970'}</code> is already in the data set. If not, append it to the dict and write the new data set in the file.</p>
|
<p>This text is not valid JSON; it looks like a Python <code>dict</code> that was printed out (and only almost: keys with leading zeros like <code>001</code> are not valid Python literals, so it would need <code>1</code> and <code>2</code> instead).</p>
<p>I suggest writing this data in JSON format, but if you want to read the existing file as a <code>dict</code> you can use the <code>eval</code> function:</p>
<pre class="lang-py prettyprint-override"><code>with open("data.dict", 'r') as f:
mydata = eval(f.read())
print(mydata)
</code></pre>
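<p>A safer alternative to <code>eval</code> is <code>ast.literal_eval</code>, which only accepts Python literals and refuses to execute arbitrary code. Note that keys with leading zeros such as <code>001</code> are not valid Python 3 literals, so (as with <code>eval</code>) this only works once they are written as plain integers. A sketch under that assumption:</p>

```python
import ast

text = """{1: {"firstname": 'A', "field-lastname": 'B', "field-birthday": '01011970'},
2: {"field-firstname": 'C', 'field-lastname': 'D', "field-birthday": '01011970'}}"""

# literal_eval parses literals only; it raises an error on anything executable
mydata = ast.literal_eval(text)
print(mydata[1]["firstname"])  # prints A
```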
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Sorting PDF reports into proper directories "TypeError: expected str, bytes or os.PathLike object, not list"<p>I am currently getting an error:</p>
<pre><code>line 24, in <module> pdfFileobj = open(pdfFiles, 'rb') TypeError: expected str, bytes or os.PathLike object, not list"
</code></pre>
<p>Trying to automate part of my job.
<br>
At my job I am constantly creating new PDF reports for clients.
<br>
My goal is to sort all of the PDF reports from my download directory, parse each report for 3 pieces of data (first name, last name and report type), then compare that data to the appointments on my shared Outlook calendar to get the date of the client's appointment. Then I need to move the reports to our clients directory on the shared drive, creating a client-specific sub-directory if it does not exist. Lastly I need to rename the reports in this format: <strong>LastnameDD-MM-YY Firstname Report type</strong></p>
<pre><code>import os
import winreg
import PyPDF2 as p2
import glob
def get_download_path():
"""Returns the default downloads path for linux or windows"""
if os.name == 'nt':
sub_key = r'SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders'
downloads_guid = '{374DE290-123F-4565-9164-39C4925E467B}'
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, sub_key) as key:
location = winreg.QueryValueEx(key, downloads_guid)[0]
return location
else:
return os.path.join(os.path.expanduser('~'), 'downloads')
os.chdir(get_download_path())
pdfFiles = [glob.glob("*.pdf")]
pdfs = []
while pdfFiles:
pdfFileobj = open(pdfFiles, 'rb')
pdfReader = p2.PdfFileReader(pdfFileobj)
pdfFiles.pop(-1)
</code></pre>
|
<ul>
<li><p>First of all, <code>glob.glob(str)</code> already returns a list, so the extra square brackets you added are unnecessary.</p>
</li>
<li><p>Second, <code>open()</code> takes in a str, bytes or os.PathLike object as an argument, not a list.</p>
</li>
</ul>
<p><strong>Change this part:</strong></p>
<pre><code>while pdfFiles:
pdfFileobj = open(pdfFiles, 'rb')
pdfReader = p2.PdfFileReader(pdfFileobj)
pdfFiles.pop(-1)
</code></pre>
<p><strong>to:</strong></p>
<pre><code>for file in pdfFiles:
pdfFileobj = open(file, 'rb')
pdfReader = p2.PdfFileReader(pdfFileobj)
</code></pre>
<p><em>(also note that <code>pop()</code> by default removes the <code>-1</code> index, so you don't have to pass in the argument)</em></p>
<hr />
<p>All together:</p>
<pre><code>import os
import winreg
import PyPDF2 as p2
import glob
def get_download_path():
"""Returns the default downloads path for linux or windows"""
if os.name == 'nt':
sub_key = r'SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders'
downloads_guid = '{374DE290-123F-4565-9164-39C4925E467B}'
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, sub_key) as key:
location = winreg.QueryValueEx(key, downloads_guid)[0]
return location
else:
return os.path.join(os.path.expanduser('~'), 'downloads')
os.chdir(get_download_path())
pdfFiles = glob.glob("*.pdf")
pdfs = []
for file in pdfFiles:
pdfFileobj = open(file, 'rb')
pdfReader = p2.PdfFileReader(pdfFileobj)
</code></pre>
<hr />
<p><strong>Updated for looping:</strong></p>
<pre><code>import os
import winreg
import PyPDF2 as p2
import glob
from time import sleep
def get_download_path():
"""Returns the default downloads path for linux or windows"""
if os.name == 'nt':
sub_key = r'SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders'
downloads_guid = '{374DE290-123F-4565-9164-39C4925E467B}'
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, sub_key) as key:
location = winreg.QueryValueEx(key, downloads_guid)[0]
return location
else:
return os.path.join(os.path.expanduser('~'), 'downloads')
os.chdir(get_download_path())
while True:
pdfFiles = glob.glob("*.pdf")
pdfs = []
for file in pdfFiles:
pdfFileobj = open(file, 'rb')
pdfReader = p2.PdfFileReader(pdfFileobj)
sleep(300) # stop the program for 300 seconds
</code></pre>
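<p>One more thing worth mentioning: the loops above open each PDF but never close it, so file handles leak on every pass. A <code>with</code> block closes the file automatically. The sketch below is a stdlib-only illustration (the PyPDF2 <code>PdfFileReader</code> call from the original code is left as a comment, since it needs the library installed); it just checks each file's magic bytes:</p>

```python
import glob
import os

def collect_pdfs(directory):
    """Return the PDF filenames in `directory`, skipping files that
    do not start with the %PDF magic bytes."""
    valid = []
    for path in glob.glob(os.path.join(directory, "*.pdf")):
        # `with` guarantees the handle is closed even if an exception occurs
        with open(path, "rb") as f:
            if f.read(4) == b"%PDF":
                # pdfReader = p2.PdfFileReader(f)   # parsing would happen here
                valid.append(os.path.basename(path))
    return valid
```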
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Inserting hour:minute entry in PostgreSQL<p>I'm using python with SQLAlchemy. I simply cannot insert a hour:minute pair. See the example:</p>
<p>SQL query 1:</p>
<pre><code>CREATE TABLE IF NOT EXISTS test_time (time_col _time)
</code></pre>
<p>(this works fine)</p>
<p>SQL query 2:</p>
<pre><code>INSERT INTO test_time(time_col) VALUES('17:30');
</code></pre>
<p>This fails and shows the error:</p>
<pre><code>LINE 1: INSERT INTO test_time(time_col) VALUES('17:30');
^
DETAIL: ARRAY value must start with "{" or dimension information
</code></pre>
<p>I have no idea what <code>dimension information</code> could be. But adding <code>{}</code> in any form I could think of didn't help either.</p>
<p>What is the correct form to insert a _time value in PostgreSQL with an SQL query?</p>
|
<p>The error is an indication that <code>time_col</code> is an <code>ARRAY</code> type. In <code>CREATE TABLE IF NOT EXISTS test_time (time_col _time)</code>, the type <code>_time</code> is PostgreSQL's internal alias for the array type <code>time[]</code>, so the column was created as an <em>array</em> of times (the "dimension information" in the error refers to the optional dimension prefix of an array literal). It would be better written as <code>CREATE TABLE IF NOT EXISTS test_time (time_col time[])</code> to make it clearer you are working with an array. To insert into that column you would need to enter an array value, e.g. <code>ARRAY['17:30'::time]</code>.</p>
<p><strong>UPDATE</strong></p>
<p><code>_time</code> is an alias for the array type <code>time[]</code>. If you just want a plain <code>time</code> field then:</p>
<p><code>CREATE TABLE IF NOT EXISTS test_time (time_col time);</code></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Snake Head Isn't Moving<p>I was coding a basic snake game in the turtle module in Python and when I tested it, I saw that the snake head wasn't moving.</p>
<p>here is the code:</p>
<p>import modules:</p>
<pre><code>import turtle
import random
import time
delay=0.1
# Background
win = turtle.Screen()
win.title("snake")
win.bgcolor("black")
win.setup(width=800, height=800)
win.tracer(0)
</code></pre>
<p>I tested it and it worked so I moved onto the snake head:</p>
<pre><code># snake head
head = turtle.Turtle()
head.speed(0)
head.shape("square")
head.color("white")
head.penup()
head.goto(0, 50)
head.direction = "stop"
</code></pre>
<p>moving the snake head:</p>
<pre><code># snake movement
def move():
if head.direction == "up":
y = head.ycor()
head.sety(y + 20)
if head.direction == "down":
y = head.ycor()
head.sety(y - 20)
if head.direction == "right":
x = head.xcor()
head.setx(x + 20)
if head.direction == "left":
x = head.xcor()
head.setx(x - 20)
# keyboard setting
def go_up():
if head.direction != "down":
head.direction = "up"
def go_down():
if head.direction != "up":
head.direction = "down"
def go_right():
if head.direction != "left":
head.direction = "right"
def go_left():
if head.direction != "right":
head.direction = "left"
</code></pre>
<p>Main game loop:</p>
<pre><code># main game loop
while True:
win.update()
move()
time.sleep(delay)
</code></pre>
<p>And keyboard binding:</p>
<pre><code># keyboard binding
win.listen()
win.onkey(go_up, "w")
win.onkey(go_down, "s")
win.onkey(go_right, "d")
win.onkey(go_left, "a")
</code></pre>
<p>Also I have an error on the line <code>win.listen()</code> saying "This code is unreachable". If anybody knows what I did wrong please let me know.</p>
|
<p>The reason it says the code is unreachable is because the key-binding code, including <code>win.listen()</code>, comes after the <code>while True:</code> loop. Since you don't have any way to exit the loop, the code after it will never run. Without any key bindings, the snake never changes from its initial <code>"stop"</code> direction.</p>
<p>You should move the main game loop to the very end of your code, so everything else gets a chance to run.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How can I find some words in files with regex?<p>I have many files and need to categorize them into the words that come up there.</p>
<p>ex) [..murder..murderAttempted..] or [murder, murderAttempted] etc..</p>
<p>I tried this code, but not all of them came out. I want to match "murder" and "murderAttempted" in files, surrounded by "[ ]".</p>
<pre><code>def func(root_dir):
for files in os.listdir(root_dir):
pattern = r'\[.+murder.+murderAttempted.+'
if "txt" in files:
f = open(root_dir + files, 'rt', encoding='UTF8')
for i, line in enumerate(f):
for match in re.finditer(pattern, line):
print(match.group())
</code></pre>
|
<p>This appears to work for me: <code>pattern = r'\[.*murder.*murderAttempted.*\]'</code> instead of <code>pattern = r'\[.+murder.+murderAttempted.+'</code>. I believe it returns all occurrences of "murder" and "murderAttempted" in files surrounded by "[]". The <code>+</code> requires 1 or more occurrence whereas <code>*</code> could have 0. Also note the addition of the end <code>\]</code>. This ensures you only capture strings that are enclosed in brackets.</p>
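<p>Here is a small self-contained check of the corrected pattern (the sample lines are made up for illustration):</p>

```python
import re

pattern = r'\[.*murder.*murderAttempted.*\]'

lines = [
    "case 1: [..murder..murderAttempted..]",       # matches
    "case 2: [murder, murderAttempted]",           # matches
    "case 3: [murderAttempted only]",              # no match: order is wrong
    "case 4: murder murderAttempted no brackets",  # no match: not inside [ ]
]

for line in lines:
    for match in re.finditer(pattern, line):
        print(match.group())
```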
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Search in different type of list of dictionary by keys<p>I am requesting data from different APIs. All of them provide similar information but output they provide is a little bit different with regard to the structure. Every output is contained in list of dictionary but with different organisation and structure. One output can be a list with just one dictionary, other more than one dictionary and also dictionary as value of another dictionary.</p>
<p>Here I show one example output</p>
<pre><code>[{'allele_string': 'G/A',
'transcript_consequences': [{'protein_end': 663,
'gene_symbol_source': 'HGNC',
'protein_start': 663,
'gene_symbol': 'MYH7',
'amino_acids': 'R/H',
'codons': 'cGc/cAc',
'biotype': 'protein_coding',
'hgnc_id': 'HGNC:7577',
'cds_end': 1988,
'cds_start': 1988,
'polyphen_score': 0.782,
'transcript_id': 'ENST00000355349',
'cdna_start': 2093,
'impact': 'MODERATE',
'consequence_terms': ['missense_variant'],
'variant_allele': 'A',
'cdna_end': 2093,
'sift_score': 0.06,
'gene_id': 'ENSG00000092054',
'sift_prediction': 'tolerated',
'polyphen_prediction': 'possibly_damaging',
'strand': -1}],
'input': 'NM_000257.3:c.1988G>A',
'start': 23426833,
'end': 23426833,
'colocated_variants': [{'phenotype_or_disease': 1,
'allele_string': 'HGMD_MUTATION',
'strand': 1,
'id': 'CM993620',
'seq_region_name': '14',
'end': 23426833,
'start': 23426833},
{'allele_string': 'C/T',
...
</code></pre>
<p>Independently of the structure of the list of dictionaries, I need to get for example the value of the key 'gene_symbol' and 'allele_string'. These values can be in the first dictionary of the list or in the last one or in a dictionary inside another dictionary. So I think that what I need is to read key by key of the complete list and find the key I am looking for and then save its value in one variable for example</p>
<pre><code>gene_symbol = 'value_found'
</code></pre>
<p>Is this the best approach to do this? and How can I do that?</p>
|
<p>One way is to use a recursive function to walk all the key-value pairs in your structure. This is more elegant, but more involved as well.
See here for some ideas:</p>
<p><a href="https://stackoverflow.com/questions/39233973/get-all-keys-of-a-nested-dictionary">Get all keys of a nested dictionary</a></p>
<p>Another approach is to handle your structure as a text. If your result is always pretty similar, like 'gene_symbol': 'MYH7', inside your structure, you can use something like the following (result is based on the structure you provided):</p>
<pre><code>s=str(your_structure)
s2=s[s.find("'gene_symbol'")+13:] #13 is the length of "'gene_symbol'"
s3=s2[s2.find("'")+1:]
res=s3[:s3.find("',")]
>>> res
'MYH7'
</code></pre>
<p>Similarly for 'allele_string':</p>
<pre><code>s2=s[s.find("'allele_string'")+15:] #15 is the length of "'allele_string'"
s3=s2[s2.find("'")+1:]
res=s3[:s3.find("',")]
>>> res
'G/A'
</code></pre>
<p>You can easily adjust your code if results may vary slightly</p>
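<p>For completeness, here is a sketch of the recursive approach from the first suggestion: walk the nested dicts/lists and return the first value found for a given key. This is my own illustration (it treats a stored <code>None</code> value as "not found"), tested only on data shaped like the sample above:</p>

```python
def find_key(obj, key):
    """Depth-first search for `key` in nested dicts/lists.
    Returns the first value found, or None if the key is absent."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        for value in obj.values():
            found = find_key(value, key)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_key(item, key)
            if found is not None:
                return found
    return None

data = [{'allele_string': 'G/A',
         'transcript_consequences': [{'gene_symbol': 'MYH7',
                                      'strand': -1}]}]
gene_symbol = find_key(data, 'gene_symbol')
print(gene_symbol)  # prints MYH7
print(find_key(data, 'allele_string'))  # prints G/A
```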
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
And, == operators in the same python if statement<p>Given a list of integers:</p>
<pre><code>old_list = [1,1,2,-2,5,2,4,4,-1,-2,5]
</code></pre>
<p>I run the following code to get all the integers grouped in a list of lists:</p>
<pre><code>old_list.sort()
new_list = []
for i in old_list:
if new_list == []:
new_list.append([i])
elif new_list[-1][0] == i:
new_list[-1].append(i)
else:
new_list.append([i])
print(new_list)
</code></pre>
<p>Returning my desired output:</p>
<pre><code>[[-2, -2], [-1], [1, 1], [2, 2], [4, 4], [5, 5]]
</code></pre>
<p>On the one hand I found I could condense the if elif statement using <code>and</code>, while retrieving the same output:</p>
<pre><code>old_list.sort()
new_list = []
for i in old_list:
if new_list and new_list[-1][0] == i:
new_list[-1].append(i)
else:
new_list.append([i])
print(new_list)
</code></pre>
<p>On the other hand if I try to remove the <code>new_list and</code> from the if statement the following error arises:</p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>Why isn't the <code>if new_list and new_list[-1][0] == i:</code> statement firing the same index error?</p>
|
<p>Because <code>and</code> short-circuits: the left operand <code>new_list</code> is evaluated first, and when it is empty (falsy) the right operand <code>new_list[-1][0] == i</code> is never evaluated at all, so no indexing happens. The index is only accessed when the list is guaranteed to have a last element. When you remove this check, <code>new_list</code> is initially empty, so <code>new_list[-1]</code> is out of range.</p>
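<p>You can see the short-circuit behaviour directly in a small snippet: when the left side of <code>and</code> is falsy, the right side is simply never evaluated:</p>

```python
new_list = []

# Right side is never evaluated, so no IndexError is raised;
# the expression evaluates to the falsy left operand itself.
result = new_list and new_list[-1][0] == 1
print(result)  # prints []

# Evaluating the index directly does raise:
try:
    new_list[-1]
except IndexError as e:
    print("IndexError:", e)
```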
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Return does not return information<p>When I try to return a value with <code>return</code> and use it later in a variable, nothing is returned to me.</p>
<pre><code>#Actualizacion de informacion
def messageHandler(update: Update, context: CallbackContext):
userName = update.effective_user['username']
#text = update.message.text
Usuario = userName
ConsultarServer = ("Ingresa tu nombre de troncal correcta de lo contrario no se te regresara ninguna informacion.")
if ConsultarSaldoSw2 in update.message.text:
context.bot.send_message(chat_id=update.effective_chat.id, text="Ingresa tu nombre de troncal correcta de lo contrario no se te regresara ninguna informacion.")
text2 = update.message.text
return text2
print (text2)
if text2 in update.message.text:
context.bot.send_message(chat_id=update.effective_chat.id, text="Ingresa el servidor sw o sw2")
text3 = update.message.text
return text3
print (text3)
</code></pre>
|
<p>If the function is exhausted, and it doesn't meet any of your conditions, it will <code>return None</code> by default. Consider this shortened example:</p>
<pre><code>def messageHandler(update: Update, context: CallbackContext):
if ConsultarSaldoSw2 in update.message.text:
return text2
if text2 in update.message.text:
return text3
# None of these if conditions are met, so nothing is returned
</code></pre>
<p>If you don't enter any of these <code>if</code> statements, you never return a value. Also note that in your original code the <code>print(text2)</code> and <code>print(text3)</code> calls come <em>after</em> a <code>return</code> statement, so they can never run.</p>
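<p>A minimal demonstration of the implicit <code>None</code> return (the function below is hypothetical, just for illustration):</p>

```python
def handler(text):
    if "hello" in text:
        return "greeting"
    if "bye" in text:
        return "farewell"
    # no condition matched: the function falls off the end and returns None

print(handler("hello there"))     # prints greeting
print(handler("nothing matches")) # prints None
```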
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Django display list of dictionary in template - dict key is a 'variable'<p>I am using Django 4.0 to display a frontend page whose source data is a list of dicts.
I want to order the keys of the dicts and then display all the dicts in the list in that same order.
Here is my views.py:</p>
<pre><code>def UserGoalstatus(request, promise_token):
print("__UserGoalstatus__")
from cmd_utils import Retrieve_goal
data = Retrieve_goal(promise_token)
keys = set()
for item in data:
keys.update(set(item))
key_order = sorted(keys)
context = {
"data": data,
"key_order": key_order,
}
return render(request, 'json_table.html', context)
</code></pre>
<p>Here is the content of my 'data' variable:</p>
<pre><code>[
{'goal_key': '286815', 'goal_type': 'hotelreservation', 'goal_id': 16149845, 'promise_token': '9ba51cbc-830b-64d603904099', 'campaign_id': 1002204, 'properties': {'price': 100, 'created': '2022-06-13 10:48:34', 'checkout': '2022-06-13', 'currency_code': 'USD', 'completed_booking_status': 1}},
{'goal_key': '1208107', 'goal_type': 'hotelreservation', 'goal_id': 16149846, 'promise_token': '9ba51cbc-830b-64d603904099', 'campaign_id': 1002204, 'properties': {'price': 100, 'created': '2022-06-13 10:48:35', 'checkout': '2022-06-13', 'currency_code': 'USD', 'completed_booking_status': 1}}
]
</code></pre>
<p>Here is my html file which I would like to print all content in data in the order of 'key_order'</p>
<pre><code><table id="dtBasicExample" class="table table-hover table-striped table-bordered" cellspacing="0" width="100%">
<thead>
<tr>
{% for key in key_order %}
<th>{{ key }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for item in data %}
<tr>
{% for key in key_order %}
<td>{{ item.get(key) }}</td>
{% endfor %}
</tr>
{% endfor %}
</tbody>
</table>
</code></pre>
<p>This part seems not right :<strong>{{ item.get(key) }}</strong> , anyone can suggest the right way to access the value mapping to the specific <strong>key</strong> ?</p>
|
<p>Here is my solution.</p>
<p>I needed to define a custom Django template filter of my own.</p>
<p>The key part is <code>get_item</code>: it lets the template look up a dictionary value using a variable as the key.
For more detailed information refer to the links below:</p>
<p><a href="https://docs.djangoproject.com/en/4.0/howto/custom-template-tags/#writing-custom-template-tags" rel="nofollow noreferrer">Django guide</a></p>
<p><a href="https://stackoverflow.com/a/8000091/4568140">stackoverflow answer</a></p>
<p>views.py</p>
<pre><code># customized template for html
from django.template.defaulttags import register
@register.filter
def get_item(dictionary, key):
return dictionary.get(key)
</code></pre>
<p>in json_table.html</p>
<pre><code><table id="dtBasicExample" class="table table-hover table-striped table-bordered" cellspacing="0" width="100%">
<thead>
<tr>
{% for key in key_order %}
<th>{{ key }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for item in data %}
<tr>
{% for key in key_order %}
<td>{{ item | get_item:key}}</td>
{% endfor %}
</tr>
{% endfor %}
</tbody>
</table>
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python Tkinter Closing Windows<p>In creating a Python Tkinter program, I wish to create a button that will close the program. I have tried the </p>
<pre><code>#with master = Tk()
master.quit()
</code></pre>
<p>method. And it did absolutely nothing to my program - apart from stopping anything from working, although I received no Tracebacks.</p>
<p>The other method I have tried is:</p>
<pre><code>#with master = Tk()
master.destroy()
</code></pre>
<p>This again did nothing to my program - it did give me a traceback error though which was:</p>
<pre><code>_tkinter.TclError: can't invoke "button" command: application has been destroyed
</code></pre>
<p>My full code is:</p>
<pre><code>from tkinter import *
master = Tk()
exitbutton = Button(master,text="Exit",(all the other personalization stuff here),command=(master.quit()))
#or I used master.destroy() in the command area.
exitbutton.grid(column=0,row=0)
</code></pre>
<p><strong>None of the above methods have worked.</strong></p>
<p>Many Thanks
(For the future)</p>
|
<p>You want to pass a function object into the <code>command</code> keyword, so don't use parentheses: <code>command=master.quit()</code> calls the function immediately while the button is being created, instead of when it is clicked. Also, prefer <code>destroy</code> over <code>quit</code> for Tkinter here, since <code>quit</code> only stops the mainloop while <code>destroy</code> actually closes the window.</p>
<pre><code>exitbutton = Button(master,text="Exit",(all the other personalization stuff here),command=master.destroy)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Download file from curl command into python script don't work<p>I would like to run a complex curl command from a Python (3.9.7) script to download an output .tif file.
I use a TimeNum variable because I want to download different files and clip the .tif to a personal area of interest (I want to use a for loop).</p>
<p>The curl command is something like this:</p>
<p>curl --data '{"productType":"VMI","productDate":"1648710600000"}' -H "Content-Type: application/json" -X POST <a href="https://example.it/wide/product/downloadProduct" rel="nofollow noreferrer">https://example.it/wide/product/downloadProduct</a> --output hrd2.tif</p>
<p>I try different solutions:</p>
<p>1)</p>
<pre><code>import shlex
import subprocess
TimeNum=1648710600000
cmd ='''curl --data \'{"productType":"VMI","productDate":"%s"}\' -H "Content-Type: application/json" -X POST https://example.it/wide/product/downloadProduct --output %s.tif''' % (TimeNum,TimeNum)
args = shlex.split(cmd)
process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
</code></pre>
<p>The download works, but the output takes up 1KB instead of about 11MB, and an error occurs when I try to open it in Windows.</p>
<ol start="2">
<li>I write a txt file with a list of curl command line by line and then:</li>
</ol>
<pre><code>file = open('CURL_LIST.txt', 'r')
lines = file.readlines()
for line in enumerate(lines):
os.system(line.strip())
</code></pre>
<p>but don't work fine and i have the same output of case 1 above.
Now i try to use urllib.request but i'm not able to use it very well.</p>
<p>Someone have a suggestion?
Thanks in advance for your help</p>
|
<p><strong>Important info</strong><br />
There is a self-signed certificate on that server so you get a warning (and this is also the reason why you get small files).<br />
In the example below I disabled certificate checking but this is <strong>DANGEROUS</strong> and you should use it only if you understood the risks and it is ok for you anyway (e.g. you are the owner of example.it).<br />
Given the nature of example.it I'd assume that you are just using it only for learning purpose but please be careful and read more about the <a href="https://www.globalsign.com/en/ssl-information-center/dangers-self-signed-certificates" rel="nofollow noreferrer">risks of self-signed certificates</a> anyway.<br />
The proper solution from a risk / security standpoint for a similar problem is NOT connecting to such a server.</p>
<p>Once this is clear, just for the sake of testing / learning
I would suggest using Python's requests library (please note the <code>verify=False</code> to disable certificate check):</p>
<pre><code>import requests
time_num = 1648710600000
headers = {
# Already added when you pass json=
# 'Content-Type': 'application/json',
}
json_data = {
'productType': 'VMI',
'productDate': time_num,
}
response = requests.post('https://example.it/wide/product/downloadProduct', headers=headers, json=json_data, verify=False)
with open(f'{time_num}.tif', 'wb') as f:  # str() needed: time_num is an int
f.write(response.content)
</code></pre>
<p>If you prefer the approach you posted it is possible to disable cert check also in curl (<code>-k</code> option):</p>
<pre><code>import shlex
import subprocess
TimeNum=1648710600000
cmd ='''curl -k --data \'{"productType":"VMI","productDate":"%s"}\' -H "Content-Type: application/json" -X POST https://example.it/wide/product/downloadProduct --output %s.tif''' % (TimeNum,TimeNum)
args = shlex.split(cmd)
process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
</code></pre>
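<p>Since you mentioned <code>urllib.request</code>: an equivalent request can be built with the standard library alone. The sketch below only constructs the request (the actual <code>urlopen</code> call is commented out because it needs network access); the same self-signed-certificate caveat applies, and disabling verification is equally risky here:</p>

```python
import json
import ssl
import urllib.request

time_num = 1648710600000
payload = json.dumps({"productType": "VMI", "productDate": time_num}).encode("utf-8")

req = urllib.request.Request(
    "https://example.it/wide/product/downloadProduct",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# DANGEROUS: disables certificate verification, mirroring curl's -k option
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# with urllib.request.urlopen(req, context=ctx) as resp:
#     with open(f"{time_num}.tif", "wb") as f:
#         f.write(resp.read())
```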
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
remove inverted commas from python string<p>I have a very annoying string formatting issue. I tried different approaches but I seem to be lost.
This is my expected output: ["aa","b","c","dd"].</p>
<p>Sample code:</p>
<pre><code>mylist = ['b','c']
mylisttmp = ','.join('"{0}"'.format(x) for x in mylist)
finalstr='"aa"' +","+"{}".format(mylisttmp) +","+'"dd"'
print([finalstr])
OUTPUT:['"aa","b","c","dd"'] #How to get rid of the end quotes,which is causing issues?
</code></pre>
<p>I did a lot of string splitting, joining etc but I am going round and round the same issue.
I intended to use the formatted output with a tkinter property, as follows:</p>
<pre><code>myComboBox['values']= ["aa","b","c","dd"]
</code></pre>
<p>Please direct me. Thank you</p>
|
<p>You don't actually want a string at all. You want a list that contains four strings.</p>
<pre><code>mylist = ['b','c']
finalset = ["aa"] + mylist + ["dd"]
print(finalset)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>>>> print(finalset)
['aa', 'b', 'c', 'dd']
>>>
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to upload python script in django form and use it for some processing?<p>I want to upload a Python file through a Django form, read the function available inside, and use it for processing.</p>
<p>Till now what i had done is:
I have taken the file from the user and saved it in the media folder, and
taken the function name (so that I can use it for calling the function if required).</p>
<p>Index.py</p>
<pre><code><form method="POST" enctype="multipart/form-data" action="/result">Enter function name
{% csrf_token %}
<input type="text" name="functionname"><br>
Upload your .py file here
<input type="file" name="functionfile">
<input type="submit" value="Submit">
</form>
</code></pre>
<p>views.py</p>
<pre><code>def result(request):
if request.method == 'POST':
functionname=request.POST['functionname']
functionfile=request.FILES['functionfile']
fs= FileSystemStorage()
modulename=fs.save(functionfile.name,functionfile)
url=fs.url(modulename)
print(url)
return render(request,'result.html')
</code></pre>
<p>I don't have any clue how to use the function from the uploaded file in the backend.</p>
<p>Desired result would be something like.</p>
<ol>
<li><p>for eg. example.py file contains a function</p>
<p>def add(data):
p=10+data
return p</p>
</li>
<li><p>i upload a example.py file</p>
</li>
<li><p>suppose in background i have d = 100</p>
</li>
<li><p>django calls result=add(d)</p>
</li>
<li><p>print the result</p>
</li>
</ol>
<p>Any reference or resource will also be helpful.</p>
<p>Thanks</p>
|
<p>A simple approach would be to use the normal Django file upload to get the code in a view, then run it as a subprocess inside a Docker container (with Python 3 installed) on the server side where your Django application runs.</p>
<p>Why Docker?
It is safer to run even malicious code inside a Docker container than on the server machine itself.</p>
<p>Other ideas:
you can run the code and get the result through online APIs.</p>
<p>REF : <a href="https://github.com/saikat007/Online-Compiler-using-Django-python/blob/master/src/ide/views.py" rel="nofollow noreferrer">https://github.com/saikat007/Online-Compiler-using-Django-python/blob/master/src/ide/views.py</a></p>
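<p>If you do accept the risk (for example in a trusted internal tool, ideally inside the Docker sandbox suggested above), the loading step itself can be done with <code>importlib</code>. This is a minimal sketch of loading a function by name from an uploaded <code>.py</code> file; note that importing the module executes its top-level code:</p>

```python
import importlib.util

def load_function(path, function_name):
    """Load `function_name` from the Python file at `path`.
    WARNING: this executes the file's top-level code; only use on trusted input."""
    spec = importlib.util.spec_from_file_location("uploaded_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the uploaded file!
    return getattr(module, function_name)

# Usage inside the view would look roughly like:
#   func = load_function(fs.path(modulename), functionname)
#   result = func(100)
```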
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
unicode_literals everywhere in python 2.7<p>Is there a way to get the behaviour of </p>
<pre><code>from __future__ import unicode_literals
</code></pre>
<p>to apply project-wide, apart from putting this import in the top of each and every module?</p>
<p>I would like to just define it in one place, like the <code>__init__.py</code> of the package directory of project root say, and have it recursively apply to subpackages and submodules. </p>
|
<h1>TL;DR - No</h1>
<p>Command-line option <strong><code>-U</code></strong> enables unicode globally. From <a href="https://docs.python.org/2/using/cmdline.html#id4" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Turns all string literals into unicodes globally. Do not be tempted to use this option as it will probably break your world. It also produces <code>.pyc</code> files with a different magic number than normal.</p>
</blockquote>
<p>You can't use it in practice, though, because it will mess up even the standard library. The docs advise just to enable unicode literals on a per-module basis using the future import.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
discord.py 8ball command but without bot prefix<p>I'm coding a bot for a friend and they have asked me to make an 8ball command. You think it seems easy.. but they don't want the prefix included the command. So it would be like this:
<code>BotName, do you think today will be a good day?</code></p>
<p>I've tried using <code>@client.event</code> but I don't know how to make it so that the user can say their own question, but have the bots name at the front of the question.
So like this:
<code>BotName, do you think today will be a good day?</code></p>
<p>The <code>BotName</code> part will need to be included to trigger the event. Then they can say what they want. Like this (example was already given above):</p>
<p><code>BotName, do you think today will be a good day?</code></p>
<p>Here is some code I tried:</p>
<pre class="lang-py prettyprint-override"><code>import discord
from discord.ext import commands
import random
class eightball(commands.Cog):
def __init__(self, client):
self.client = client
@commands.Cog.listener()
async def on_message(self,message):
#botname = [f"BotName, {message}?"] #tried this too
# if message.content in botname: #tried this too
if f'BotName, {message.author.content}?' in message.content:
responses = ['Responses here.']
await message.channel.send(f'{random.choice(responses)}')
await self.client.process_commands(message)
def setup(client):
client.add_cog(eightball(client))
</code></pre>
<p>If it's not possible then do not worry! (Sorry if I didn't explain as well as I could and if I sound dumb or something.)</p>
|
<p>I guess you can make it work with a bit more logic.</p>
<pre class="lang-py prettyprint-override"><code>if message.content.startswith('BotName,'):
#rest of the code
</code></pre>
<p>Consider that if they @mention your bot, the message content would be <code><@1235574931>, do you think today will be a good day?</code>
So it will only work if they type out exactly whatever <code>BotName</code> is.</p>
<p>Also, cog listeners don't need <code>await self.client.process_commands(message)</code></p>
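<p>One way to keep the listener readable is to factor the prefix check into a small helper. This is just a sketch, and <code>BOT_NAME</code> is a placeholder for whatever the bot is actually called:</p>

```python
BOT_NAME = "BotName"  # placeholder for the bot's real name

def extract_question(content):
    """Return the question text if the message is addressed to the bot,
    otherwise None."""
    prefix = f"{BOT_NAME},"
    if not content.startswith(prefix):
        return None
    return content[len(prefix):].strip()

print(extract_question("BotName, do you think today will be a good day?"))
# -> do you think today will be a good day?
print(extract_question("hello there"))  # -> None
```

<p>Inside <code>on_message</code> you would then call <code>question = extract_question(message.content)</code> and only send a random response when it is not <code>None</code>.</p>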
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Not able to fetch bs4 table contents in python<p>I want to fetch all the user handles present in this link <a href="https://practice.geeksforgeeks.org/leaderboard/" rel="nofollow noreferrer">https://practice.geeksforgeeks.org/leaderboard/</a></p>
<p>This is the code which tried,</p>
<pre><code>import requests
from bs4 import BeautifulSoup
URL = 'https://practice.geeksforgeeks.org/leaderboard/'
def getdata(url):
r = requests.get(url)
return r.text
htmldata = getdata(URL)
soup = BeautifulSoup(htmldata, 'html.parser')
table= soup.find_all('table',{"id":"leaderboardTable"})
print(table[0].find_all('tbody')[1])
print(table[0].find_all('tbody')[1].tr)
</code></pre>
<p>Output:</p>
<pre><code><tbody id="overall_ranking">
</tbody>
None
</code></pre>
<p>The code is fetching the table but when i try to print the tr or td tags present in the table it is showing None. I tried another approach also using pandas, the same is happening.</p>
<p>I just want all the user handles present in this table (<a href="https://practice.geeksforgeeks.org/leaderboard/" rel="nofollow noreferrer">https://practice.geeksforgeeks.org/leaderboard/</a>)
<a href="https://i.stack.imgur.com/EzlEk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EzlEk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/Ceoj4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ceoj4.png" alt="enter image description here" /></a></p>
<p>Any solution for this problem will be will be highly appreciated.</p>
|
<p>The page is rendered dynamically with JavaScript, which BeautifulSoup cannot execute. However, the data is loaded from an API, so you can query that API directly:</p>
<pre><code>import requests
api_url='https://practiceapi.geeksforgeeks.org/api/v1/leaderboard/ranking/?ranking_type=overall&page={page}'
for page in range(1,11):
data=requests.get(api_url.format(page=page)).json()
for handle in data:
print(handle['user_handle'])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Ibrahim Nash
blackshadows
mb1973
Quandray
akhayrutdinov
saiujwal13083
shivendr7
kirtidee18
mantu_singh
cfwong8
harshvardhancse1934
sgupta9519
sanjay05
samiranroy0407
Maverick_H
sreerammuthyam999
gfgaccount
sushant_a
verma_ji
balkar81199
marius_valentin_dragoi
ishu2001mitra
_tony_stark_01
ta7anas17113011638
yups0608
himanshujainmalpura
yujjwal9700
parthabhunia_04
KshamaGupta
the_coder95
ayush_gupta4
khushbooguptaciv18
aditya dhiman
dilipsuthar00786
adityajain9560
dharmsharma0811
Aegon_Targeryan
1032180422
mangeshagarwal1974
naveedaamir484
raj_271
Pulkit__Sharma__
aroranayan999
surbhi_7
ruchika1004ajmera
cs845418
shadymasum
lonewolf13325
user_1_4_13_19_22
SubhankarMajumdar
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Suitable Python modules for navigating a website<p>I am looking for a python module that will let me navigate searchbars, links etc of a website.
For context I am looking to do a little webscraping of this website [<a href="https://www.realclearpolitics.com/]" rel="nofollow noreferrer">https://www.realclearpolitics.com/]</a>
I simply want to take information on each state (polling data etc) in relation to the 2020 election and organize it all in a collection of a database.
Obviously there are a lot of states to go through and each is on a seperate webpage. So im looking for a method in python in which i could quickly navigate the site and take the data of each page etc aswell as update and add to existing data. So finding a method of quickly navigating links and search bars with my inputted data would be very helpful.
Any suggestions would be greatly appreciated.</p>
<pre><code># a simple list that contains the names of each state
states = ["Alabama", "Alaska" ,"Arizona", "....."]
for state in states:
#code to look up the state in the searchbar of website
#figures being taken from website etc
break
</code></pre>
<p>Here is the rough idea i have</p>
|
<p>There are many options to accomplish this with Python. As @LD mentioned, you can use Selenium. Selenium is a good option if you need to interact with a website's UI via a headless browser, e.g. clicking a button, entering text into a search bar, etc. If your needs aren't that complex, for instance if you just need to quickly fetch the raw content of a web page and process it, then you should use the popular third-party <code>requests</code> library.</p>
<p>For processing raw content from a crawl, I would recommend <a href="https://pypi.org/project/beautifulsoup4/" rel="nofollow noreferrer"><strong>beautiful soup</strong></a>.</p>
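<p>To make the fetch-then-parse flow concrete, here is a sketch using only the standard library; the hardcoded snippet stands in for <code>requests.get(url).text</code> (the URLs are made up), and in practice BeautifulSoup would replace the hand-rolled parser:</p>

```python
from html.parser import HTMLParser

# Hardcoded snippet standing in for requests.get(url).text, so this
# sketch runs without network access; the URLs are illustrative only.
html = """
<html><body>
  <a href="/epolls/2020/president/al/alabama.html">Alabama</a>
  <a href="/epolls/2020/president/ak/alaska.html">Alaska</a>
</body></html>
"""

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append(dict(attrs).get("href"))

parser = LinkCollector()
parser.feed(html)
print(parser.links)
```

<p>With beautifulsoup4 installed, the parsing step would simply be <code>BeautifulSoup(html, "html.parser").find_all("a")</code>.</p>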
<p>Hope that helps!</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Inputting CSV Data to specific columns in SQLite<p>I have table <em>Emp</em> which have primary key which is auto increment. My <em>csv</em> file has 2 columns and table <em>Emp</em> has 3 columns(Autoincrement primary key and 2 columns from <em>csv</em> file).</p>
<p>I have below code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd, csv, sqlite3
con = sqlite3.connect("CleanData.db")
cur = con.cursor()
a_file = open("EmpC.csv")
rows = csv.reader(a_file)
cur.executemany("INSERT INTO emp VALUES (?, ?)", rows)
cur.execute("SELECT * FROM data")
print(cur.fetchall())
con.commit()
con.close()
</code></pre>
<p>Kindly help!</p>
|
<p>Write your query with named columns:</p>
<pre><code>cur.executemany("INSERT INTO emp (column1, column2) VALUES (?, ?)", rows)
</code></pre>
<p>where <code>column1</code> and <code>column2</code> are the two non-autoincrement column names.</p>
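<p>Here is a self-contained sketch of the whole flow with an in-memory database; the column names <code>name</code> and <code>salary</code> are made up, since the real schema isn't shown in the question:</p>

```python
import sqlite3

# In-memory database standing in for CleanData.db.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""
    CREATE TABLE emp (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        salary INTEGER
    )
""")

# Stands in for rows = csv.reader(a_file): each row has the two CSV columns.
rows = [("alice", 50000), ("bob", 60000)]
cur.executemany("INSERT INTO emp (name, salary) VALUES (?, ?)", rows)
con.commit()

cur.execute("SELECT * FROM emp")
result = cur.fetchall()
print(result)  # the id column is filled in automatically
con.close()
```

<p>Note the <code>SELECT</code> should target <code>emp</code>, not <code>data</code> as in the question's code.</p>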
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Digging deeper into python if statements<p>It might be a really silly question, but I was not able to find the answer to it anywhere else, I've looked at SO, but there is nothing I could find related to my question.</p>
<p>Question:</p>
<p>In python, don't know about other languages, whenever we call if statements for a builtin class it returns something which the if statement interprets, For example,</p>
<pre class="lang-py prettyprint-override"><code>a = 0
if a: print("Hello World")
</code></pre>
<p>The above statement does not print anything as the <code>if a</code> is <code>False</code>. Now which method returned that it is <code>False</code>, or is there a method the if statement calls in order to know it ??</p>
<p><strong>Or more precisely, how does the if statement work in python in a deeper level ?</strong></p>
|
<p>Objects have <code>__bool__</code> methods that are called when an object needs to be treated as a boolean value. You can see that with a simple test:</p>
<pre><code>class Test:
def __bool__(self):
print("Bool called")
return False
t = Test()
if t: # Prints "Bool called"
pass
</code></pre>
<p><code>bool(0)</code> gives <code>False</code>, so <code>0</code> is considered to be a "falsey" value.</p>
<p>A class can <a href="https://docs.python.org/3/reference/datamodel.html#object.__bool__" rel="nofollow noreferrer">also be considered to be truthy or falsey based on its reported length</a> as well:</p>
<pre><code>class Test:
def __len__(self):
print("Len called")
return 0
t = Test()
if t:
pass
</code></pre>
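<p>One detail worth knowing: when a class defines both methods, <code>__bool__</code> takes precedence and <code>__len__</code> is only used as a fallback:</p>

```python
class Both:
    def __init__(self):
        self.calls = []

    def __bool__(self):
        self.calls.append("bool")
        return True

    def __len__(self):
        self.calls.append("len")
        return 0

t = Both()
truthy = bool(t)
print(truthy, t.calls)  # True ['bool'] -- __len__ was never consulted
```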
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
get a list of tuples with the columns that have only two same strings:<p>I have the following problem:</p>
<p>In case I want to get the columns of a dataframe which have all same strings I use the code that follows:</p>
<p>Let's create the dataframe first: <code>df_example1 = pd.DataFrame({'A':[1,2,3],'B':[1,2,3]})</code> . Now let's look for the columns that have exactly the same strings:
<code>[(i, j) for i,j in combinations(df_example1, 2) if df_example1[i].equals(df_example1[j])]</code>
The code returns the tuple [('A', 'B')]</p>
<p>My problem is: In case I want to get a tuple of columns which have ONLY two of the strings the same what code should I use? Let's say that my dataframe is the following:
<code>df_example2 = pd.DataFrame({'A':[2,3,4],'B':[1,2,3]})</code> and it should return the tuple [('A', 'B')].</p>
<p>Thank you in advance :)</p>
|
<p>You want the <em>intersection</em> of both columns to contain two (or more?) values. You can use the <a href="https://docs.python.org/3/library/stdtypes.html#set" rel="nofollow noreferrer"><code>set</code> class</a> and its operations for this.</p>
<pre><code>df_example2 = pd.DataFrame({'A':[2,3,4],'B':[1,2,3]})
intersect = set(df_example2['A']).intersection(df_example2['B'])
# {2, 3}
</code></pre>
<p>Now, if <code>intersect</code> has 2 (or more?) elements, you want to select the tuple <code>('A', 'B')</code>.</p>
<pre><code>[(i, j)
for i,j in combinations(df_example2, 2)
if len(set(df_example2[i]).intersection(df_example2[j])) == 2
]
# Or >= 2 if you want 2 or more
# [('A', 'B')]
</code></pre>
<p>Note: The elements of the columns need to be hashable types to be able to create a set</p>
<blockquote>
<p>But in the end this solution returns nothing if you increase the columns of the dataframe. For example, if <code>df_example4 = pd.DataFrame({'A':[2,3,4], 'B':[1,2,3], 'C':[2,3,5], 'D':[3,5,6]})</code>, I would expect to get <code>[('A','B','C'), ('C', 'D')]</code>. Instead, I get an empty list.</p>
</blockquote>
<p>In this case, you need to also change the number of columns you select in <code>combinations()</code>. It's easier to write out a function that takes <em>any</em> number of sets and returns the intersection of them all:</p>
<pre><code>def intersect(cols_list):
cols_iter = iter(cols_list) # Create an iterator
r = set(next(cols_iter)) # Set the return value to the first column
for s in cols_iter:
r = r.intersection(s) # Intersect this with every other column
return r
</code></pre>
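<p>As a side note, the same helper can be written in one line with the unbound <code>set.intersection</code>, which accepts any number of iterables (behaviour is identical for non-empty input):</p>

```python
def intersect(cols_list):
    # set.intersection takes any number of iterables, so the explicit
    # loop collapses to a single call.
    return set.intersection(*map(set, cols_list))

result = intersect([[2, 3, 4], [1, 2, 3], [2, 3, 5]])
print(result)  # {2, 3}
```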
<p>Let's write the list comprehension as a regular loop first to help us understand what is happening:</p>
<pre><code>matching_cols = []
for ncols in range(2, len(df_example4.columns)+1):
    for col_names in combinations(df_example4, ncols):
        cols = [df_example4[cname] for cname in col_names]
        if len(intersect(cols)) == 2:
            matching_cols.append(col_names)
</code></pre>
<p>Or, as a comprehension:</p>
<pre><code>[col_names
 for ncols in range(2, len(df_example4.columns)+1)
 for col_names in combinations(df_example4, ncols)
 if len(intersect(df_example4[cname] for cname in col_names)) == 2]
</code></pre>
</code></pre>
<p>Which gives:</p>
<pre><code>[('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D'), ('A', 'B', 'C')]
</code></pre>
<p>Now, we need to remove the pairs that are already part of a bigger combination:</p>
<pre><code>final = []
for c in matching_cols:
sc = set(c)
exclude = False
for c2 in matching_cols:
sc2 = set(c2)
# If c is a subset of c2 and both aren't equal, then we need to exclude c from the final result
if sc != sc2 and sc.issubset(sc2):
exclude = True
break
if not exclude:
final.append(c)
</code></pre>
<p>Which gives <code>[('C', 'D'), ('A', 'B', 'C')]</code></p>
<p>There is probably a more efficient approach but I will leave that as an exercise for you :)</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python on Linux: Iterating through a list with If Elif Elif Elif when the list is unknown<p>I am trying to make a python script that automatically moves files from my internal drive to any usb drive that is plugged in. However this destination path is unpredictable because I am not using the same usb drives everytime. With Raspbian Buster full version, the best I can do so far is automount into /media/pi/xxxxx, where that xxxxxx part is unpredictable. I am trying to make my script account for that. I can get the drive mounting points with</p>
<pre><code>drives = os.listdir("/media/pi/")
</code></pre>
<p>but I am worried some will be invalid because of not being unmounted before they're yanked out (I need to run this w/o a monitor or keyboard or VNC or any user input other than replacing USB drives). So I'd think I'd need to do a series of try catch statements perhaps in an if elif elif elif chain, to make sure that the destination is truly valid, but I don't know how to do that w/o hardcoding the names in. The only way I know how to iterate thru a set of names I don't know is</p>
<pre><code>for candidate_drive in drives:
</code></pre>
<p>but I don't know how to make it go onto the next candidate drive only if the current one is throwing an exception.</p>
<p>System: Raspberry Pi 4, Raspbian buster full, Python 3.7.</p>
<p>Side note: I am also trying this on Buster lite w/ usbmount, which does have predictable mounting names, but I can't get exfat and ntfs to mount and that is question for another post.</p>
<p>Update: I was thinking about this more and maybe I need a try, except, else statement where the except is pas and the else is break? I am trying it out.</p>
<p>Update2: I rethought my problem and maybe instead of looking for the exception to determine when to try the next possible drive, perhaps I could instead look for a successful transfer and break the loop if so.</p>
<pre><code>import os
import shutil
files_to_send = os.listdir("/home/outgoing_files/")
source_path = "/home/outgoing_files/"
possible_USB_drives = os.listdir("/media/")
for a_possible_drive in possible_USB_drives:
try:
destpath = os.path.join("/media/", a_possible_drive)
for a_file in files_to_send:
shutil.copy(source_path + a_file, destpath)
except:
pass # transfer to possible drive didn't work
else:
break # Stops this entire program bc the transfer worked!
</code></pre>
|
<p>If you have an unknown number of directories nested inside a directory, you cannot solve this with a fixed number of nested <code>for</code> loops; you need a recursive function. When a directory contains directories that themselves contain more directories, the only way to visit them all is a function that calls itself on each subdirectory until it finds the file.</p>
<p>Lets see an example, you have a path structure like this:</p>
<pre><code>path
    dir0
        dir2
            file3
    dir1
        file2
    file0
    file1
</code></pre>
<p>There is no way to know in advance how many <code>for</code> loops would be required to iterate over all the elements of this structure. Instead, you iterate over the elements of a single directory and then do the same for the elements inside those elements. In this structure, <code>dirN</code> (where <code>N</code> is a number) is a directory and <code>fileN</code> is a file.</p>
<p>You can use <code>os.listdir()</code> function to get the contents of a directory:</p>
<pre><code>print(os.listdir("path/"))
</code></pre>
<p>returns:</p>
<pre><code>['dir0', 'dir1', 'file0.txt', 'file1.txt']
</code></pre>
<p>Using this function on every directory, you can reach all the elements of the structure. You only need to recurse into the <strong>directories</strong>, specifically directories, because if you call it on a file:</p>
<pre><code>print(os.listdir("path/file0.txt"))
</code></pre>
<p>you get an error:</p>
<pre><code>NotADirectoryError: [WinError 267]
</code></pre>
<p>But remember that Python has comprehensions, which will help with the filtering below.</p>
<p><strong>String work</strong></p>
<p>If you have a <code>mainpath</code>, you need to access a directory inside it with a full string reference: <code>"path/dirN"</code>. You cannot directly access a file that is not in the scope of the <code>.py</code> script:</p>
<pre><code>print(os.listdir("dir0/"))
</code></pre>
<p>gets an error</p>
<pre><code>FileNotFoundError: [WinError 3]
</code></pre>
<p>So you always need to join the initial main path with the current path; this way <strong>you can access all the elements of the structure</strong>.</p>
<p><strong>Filtering</strong></p>
<p>As mentioned, you can use a comprehension to keep just the directories of the structure, recursively. Let's look at a function:</p>
<pre><code>import os

def searchfile(paths: list, search: str):
    for path in paths:
        print("Actual path: {}".format(path))
        contents = os.listdir(path)
        print("Contents: {}".format(contents))
        dirs = ["{}{}/".format(path, x) for x in contents if os.path.isdir("{}{}/".format(path, x))]
        print("Directories: {} \n".format(dirs))
        searchfile(dirs, search)
</code></pre>
<p>In this function we get the contents of the current path with <code>os.listdir()</code> and then filter them with a list comprehension to keep only the directories. The function then calls itself on the directories of the current path: <code>searchfile(dirs,search)</code></p>
<p>This algorithm works on any file structure, because the <code>paths</code> argument is a list, so it handles directories nested inside directories to any depth.</p>
<p>If you want to find a specific file, use the second argument, <code>search</code>. You can add a conditional to get the path of the file once it is found:</p>
<pre><code>if search in contents:
print("File found! \n")
print("{}".format(os.path.abspath(search)))
sys.exit()
</code></pre>
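<p>Finally, note that the standard library already implements this recursion as <code>os.walk</code>, which is usually preferable to a hand-rolled version. A short sketch, recreating the example structure in a temporary directory:</p>

```python
import os
import tempfile

# Recreate the example structure from above in a temporary directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "dir0", "dir2"))
os.makedirs(os.path.join(root, "dir1"))
for rel in ("dir0/dir2/file3", "dir1/file2", "file0", "file1"):
    open(os.path.join(root, *rel.split("/")), "w").close()

# os.walk yields (dirpath, dirnames, filenames) for every level, so no
# hand-written recursion is needed.
found = None
for dirpath, dirnames, filenames in os.walk(root):
    if "file3" in filenames:
        found = os.path.join(dirpath, "file3")
print(found)
```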
<p>I hope this helps.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Compute median of column in pyspark<p>I have a dataframe as shown below:</p>
<pre><code>+-----------+------------+
|parsed_date| count|
+-----------+------------+
| 2017-12-16| 2|
| 2017-12-16| 2|
| 2017-12-17| 2|
| 2017-12-17| 2|
| 2017-12-18| 1|
| 2017-12-19| 4|
| 2017-12-19| 4|
| 2017-12-19| 4|
| 2017-12-19| 4|
| 2017-12-20| 1|
+-----------+------------+
</code></pre>
<p>I want to compute median of the entire 'count' column and add the result to a new column.</p>
<p>I tried:</p>
<pre><code>median = df.approxQuantile('count',[0.5],0.1).alias('count_median')
</code></pre>
<p>But of course I am doing something wrong as it gives the following error:</p>
<pre><code>AttributeError: 'list' object has no attribute 'alias'
</code></pre>
<p>Please help.</p>
|
<p>You need to add a column with <code>withColumn</code> because <code>approxQuantile</code> returns a list of floats, not a Spark column.</p>
<pre><code>import pyspark.sql.functions as F
df2 = df.withColumn('count_media', F.lit(df.approxQuantile('count',[0.5],0.1)[0]))
df2.show()
+-----------+-----+-----------+
|parsed_date|count|count_media|
+-----------+-----+-----------+
| 2017-12-16| 2| 2.0|
| 2017-12-16| 2| 2.0|
| 2017-12-17| 2| 2.0|
| 2017-12-17| 2| 2.0|
| 2017-12-18| 1| 2.0|
| 2017-12-19| 4| 2.0|
| 2017-12-19| 4| 2.0|
| 2017-12-19| 4| 2.0|
| 2017-12-19| 4| 2.0|
| 2017-12-20| 1| 2.0|
+-----------+-----+-----------+
</code></pre>
<p>You can also use the <a href="https://spark.apache.org/docs/latest/api/sql/index.html#approx_percentile" rel="nofollow noreferrer"><code>approx_percentile</code></a> / <a href="https://spark.apache.org/docs/latest/api/sql/index.html#percentile_approx" rel="nofollow noreferrer"><code>percentile_approx</code></a> function in Spark SQL:</p>
<pre><code>import pyspark.sql.functions as F
df2 = df.withColumn('count_media', F.expr("approx_percentile(count, 0.5, 10) over ()"))
df2.show()
+-----------+-----+-----------+
|parsed_date|count|count_media|
+-----------+-----+-----------+
| 2017-12-16| 2| 2|
| 2017-12-16| 2| 2|
| 2017-12-17| 2| 2|
| 2017-12-17| 2| 2|
| 2017-12-18| 1| 2|
| 2017-12-19| 4| 2|
| 2017-12-19| 4| 2|
| 2017-12-19| 4| 2|
| 2017-12-19| 4| 2|
| 2017-12-20| 1| 2|
+-----------+-----+-----------+
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
how to get value 50 rows before condition is true<p>I am using this code , all other things are right but at the end condition is not true but still giving flag 1</p>
<pre><code>df=pd.DataFrame({'A':[1,8.5,5.2,7,8,9,0,4,5,6],'B':[1,2,2,2,3.1,3.2,3,2,1,2]})
df['flag']=np.where((df['B']>3).shift(-2),1,0)
</code></pre>
<p>here are result at the end B not greater than 3 but still giving flag 1</p>
<pre><code> A B flag
0 1.0 1.0 0
1 8.5 2.0 0
2 5.2 2.0 1
3 7.0 2.0 1
4 8.0 3.1 0
5 9.0 3.2 0
6 0.0 3.0 0
7 4.0 2.0 0
8 5.0 1.0 1
9 6.0 2.0 1
</code></pre>
<p>desired output</p>
<pre><code> A B flag
0 1.0 1.0 0
1 8.5 2.0 0
2 5.2 2.0 1
3 7.0 2.0 1
4 8.0 3.1 0
5 9.0 3.2 0
6 0.0 3.0 0
7 4.0 2.0 0
8 5.0 1.0 0
9 6.0 2.0 0
</code></pre>
|
<p>Try shifting first and comparing afterwards. In your version, <code>(df['B']>3).shift(-2)</code> compares first, so the shift leaves <code>NaN</code> in the last two rows, and <code>np.where</code> treats <code>NaN</code> as truthy, which is where the stray 1s come from. Check whether the value is greater than 3 after shifting:</p>
<pre><code>df['flag']=np.where((df['B'].shift(-2).gt(3)),1,0)
</code></pre>
<p>OR</p>
<p>If you want to include shifted rows as well in your condition then use <code>fillna()</code>:</p>
<pre><code>df['flag']=np.where((df['B'].shift(-2).fillna(df['B'].iloc[-2:]).gt(3)),1,0)
</code></pre>
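<p>To see the difference concretely, compare both forms on the question's data; the compare-then-shift version reproduces the unwanted trailing 1s, because the shifted boolean column ends in <code>NaN</code> and <code>np.where</code> treats <code>NaN</code> as truthy:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 8.5, 5.2, 7, 8, 9, 0, 4, 5, 6],
                   'B': [1, 2, 2, 2, 3.1, 3.2, 3, 2, 1, 2]})

# Compare first, shift afterwards: the last two rows become NaN,
# which np.where treats as truthy.
broken = np.where((df['B'] > 3).shift(-2), 1, 0)

# Shift first, compare afterwards: NaN > 3 is simply False.
fixed = np.where(df['B'].shift(-2).gt(3), 1, 0)

print(broken.tolist())  # [0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
print(fixed.tolist())   # [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
```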
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Converting dtype: object to integer for a column that has numbers with spaces between them<p>I have the following dataframe that I am trying to remove the spaces between the numbers in the value column and then use pd.to_numeric to change the dtype. THe current dtype of value is an object.</p>
<pre><code> periodFrom value
1 17.11.2020 28 621 240
2 18.11.2020 30 211 234
3 19.11.2020 33 065 243
4 20.11.2020 34 811 330
</code></pre>
<p>I have tried multiple variations of this but can't work it out:</p>
<pre><code>df['value'] = df['value'].str.strip()
df['value'] = df['value'].str.replace(',', '').astype(int)
df['value'] = df['value'].astype(str).astype(int)
</code></pre>
|
<p>One option is to apply <code>.str.split()</code> first to split on whitespace (even when a gap is more than one character long), then concatenate the pieces with <code>''.join()</code> and convert to integers with <code>int()</code>, for example:</p>
<pre><code>df['value'] = [int(''.join(parts)) for parts in df['value'].str.split()]
</code></pre>
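<p>A vectorized alternative that avoids the Python-level loop entirely is to strip the spaces with <code>str.replace</code> and cast the whole column at once:</p>

```python
import pandas as pd

df = pd.DataFrame({'value': ['28 621 240', '30 211 234',
                             '33 065 243', '34 811 330']})

# Remove every space in one vectorized pass, then cast the dtype.
df['value'] = df['value'].str.replace(' ', '', regex=False).astype('int64')
print(df['value'].tolist())  # [28621240, 30211234, 33065243, 34811330]
```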
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
My Ordered Dict is not working as I expected<p>So basically I have this code:</p>
<pre><code>from collections import OrderedDict as OD
person = OD({})
for num in range(10):
person[num] = float(input())
tall = max(person.values())
short = min(person.values())
key_tall = max(person.keys())
key_short = min(person.keys())
print(f'The shortest person is the person number {key_short} who is {short}meters tall')
print(f'The tallest person is the person number {key_tall} who is {tall}meters tall')
</code></pre>
<p>And in theory when I put 10 people on my dictionary, being the first number 1, going all the way to 9, and the last one being 0, the output should be:</p>
<pre><code>The shortest person is the person number 9 who is 0.0m meters tall
The tallest person is the person number 8 who is 9.0m meters tall
</code></pre>
<p>But in fact it prints:</p>
<pre><code>The shortest person is the person number 0 who is 0.0m meters tall
The tallest person is the person number 9 who is 9.0m meters tall
</code></pre>
<p>And for some reason when the values of my dictionary go to 1 all the way to 10, it works fine.</p>
<p>Any ideas on why this is happening and how to fix it?</p>
|
<pre><code>key_tall = max(person.keys())
key_short = min(person.keys())
</code></pre>
<p>Your <em>keys</em> are the integers <code>0..9</code> so it's expected that you'll get <code>9</code> and <code>0</code> for these two values, since you're asking for the min/max key without regard to the values.</p>
<p>You <em>seem</em> to be after the key of the person that has the highest/lowest value, but that's not what that code will give you.</p>
<p>If you're after the indexes of the items with the largest values, you can do something like:</p>
<pre><code>indexes_tall = [idx for idx in range(len(person)) if person[idx] == max(person.values())]
</code></pre>
</code></pre>
<p>This will give you a list of the indexes matching the highest value, which you can then process as you see fit. As one example:</p>
<pre><code>from collections import OrderedDict as OD
person = OD({})
for num in range(10):
person[num] = float((num + 1) % 10) # effectively your input
tall = max(person.values())
short = min(person.values())
keys_tall = [str(idx + 1) for idx in range(len(person)) if person[idx] == tall]
keys_short = [str(idx + 1) for idx in range(len(person)) if person[idx] == short]
print(f'The shortest height of {short}m is held by: {" ".join(keys_short)}')
print(f'The tallest height of {tall}m is held by: {" ".join(keys_tall)}')
</code></pre>
<p>will give you:</p>
<pre><code>The shortest height of 0.0m is held by: 10
The tallest height of 9.0m is held by: 9
</code></pre>
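<p>If you only need a single index per extreme (ignoring ties), <code>max</code>/<code>min</code> with a key function express "the key whose value is largest" directly:</p>

```python
person = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0,
          5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 0.0}

# key=person.get makes max/min compare the *values* but return the *key*.
key_tall = max(person, key=person.get)
key_short = min(person, key=person.get)
print(key_tall, key_short)  # 8 9
```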
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Running setup script in gunicorn prior to running WSGI server<p>Say I have a module <code>setup.py</code> and some and flask application code in <code>server.py</code>.</p>
<p>Say I want to run <code>gunicorn -b :$PORT server:app</code>. But I want to run the logic in <code>setup.py</code> before setting up the WSGI server and running any other logic in <code>server.py</code>.</p>
<p>There doesn't seem to be any gunicorn flag documented that let's me do this (unless I'm missing something). Basically I want to do something like <code>gunicorn --RunThisFirst setup.py -b :$PORT server:app</code></p>
|
<p>Using a <code>gunicorn.conf.py</code> file should do the trick.</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import os
###########
# INSERT / IMPORT setup code
#
# maybe like this...
#
# from src.setup import run_setup
#
# run_setup()
###########
port = os.getenv("PORT", 8000)
bind = f"127.0.0.1:{port}"
workers = multiprocessing.cpu_count() * 2 + 1
</code></pre>
<p>Gunicorn automatically loads <code>gunicorn.conf.py</code> from the directory you launch it in. To validate the configuration without starting the server:</p>
<pre class="lang-sh prettyprint-override"><code>gunicorn --check-config server:app
</code></pre>
<p>In your unique use case you may need to direct <code>gunicorn</code> where to find it (see <a href="https://docs.gunicorn.org/en/latest/settings.html#config" rel="nofollow noreferrer">docs</a>):</p>
<pre class="lang-sh prettyprint-override"><code>gunicorn -c /path/to/gunicorn.conf.py server:app
</code></pre>
<p>Link to docs related to using <code>gunicorn.conf.py</code>: <a href="https://docs.gunicorn.org/en/latest/configure.html#configuration-file" rel="nofollow noreferrer">https://docs.gunicorn.org/en/latest/configure.html#configuration-file</a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to calculate TP/TN/FP/FN from binary value pairs in Python?<p>I want to write a Python function that on a binary value input pair (truth, prediction) gives back True Positive/True Negative/False Positive/False Negative values according their input. So far I have reached the required output with this:</p>
<pre><code>def func(truth, prediction):
if prediction == 1:
if truth == 1:
return "TP"
else:
return "FP"
elif truth == 1:
return "FN"
else:
return "TN"
</code></pre>
<p>However, this seems a but clunky solution, is there a shorter, more elegant way?</p>
<p>(The input pair is supposed to be a binary integer 0/1)</p>
|
<p>The comment from <a href="https://stackoverflow.com/users/669576/johnny-mopp">Johnny Mopp</a> is pretty cool (though I think the order should be <code>['TN', 'FN', 'FP', 'TP']</code>, but if I came across it in code I'd have to think twice. (I can count on one hand the number of times I've seen bit-shift operations in production code.)</p>
<p>There's a new way to handle things like this in Python 3.10: <a href="https://docs.python.org/3/whatsnew/3.10.html#pep-634-structural-pattern-matching" rel="nofollow noreferrer">structural pattern matching</a>. Now, this is the first time I've tried to use this new feature, but here's how it might look:</p>
<pre class="lang-py prettyprint-override"><code>def get_result(true, pred):
"""Decide if TP, FP, TN or FN"""
match [true, pred]:
case [1, 1]: return 'TP'
case [1, 0]: return 'FN'
case [0, 0]: return 'TN'
case [0, 1]: return 'FP'
</code></pre>
<p>Seems to work:</p>
<pre><code>>>> y = [1, 0, 1, 0] # TP, FP, FN, TN
>>> ŷ = [1, 1, 0, 0]
>>> for yi, ŷi in zip(y, ŷ):
...     print(get_result(yi, ŷi))
TP
FP
FN
TN
</code></pre>
<p>If you need to use Python 3.9 or below, then you could compactify your approach with something like this:</p>
<pre class="lang-py prettyprint-override"><code>def get_result(true, pred):
"""Decide if TP, FP, TN or FN"""
if pred:
return 'TP' if true else 'FP'
return 'FN' if true else 'TN'
</code></pre>
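<p>If you need aggregate counts across many pairs rather than one label per pair, one sketch (assuming 0/1 integer inputs, as in your question) is to feed the per-pair labels into <code>collections.Counter</code>:</p>

```python
from collections import Counter

def get_result(true, pred):
    """Decide if TP, FP, TN or FN"""
    if pred:
        return 'TP' if true else 'FP'
    return 'FN' if true else 'TN'

y     = [1, 0, 1, 0, 1]
y_hat = [1, 1, 0, 0, 1]

# Count each outcome across all (truth, prediction) pairs
counts = Counter(get_result(t, p) for t, p in zip(y, y_hat))
print(counts)  # Counter({'TP': 2, 'FP': 1, 'FN': 1, 'TN': 1})
```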
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to get tkinter text widget coordinates from index<p>I’m a newbie in Python and for practice, I’m trying to make a listbox follow my text cursor in a Text widget using tkinter. (The moving listbox will basically serve as a autocomplete tool.) Here’s the closest question to mine: <a href="https://stackoverflow.com/questions/31625127/is-it-possible-to-convert-tkinter-text-index-to-coordinates">Is it possible to convert tkinter text index to coordinates?</a> but unfortunately there’s a better workaround for the thread poster’s problem, so the main question wasn’t answered.</p>
<p>I also tried to use Text.window_create(), which uses index to position windows, but the window/widget is treated like a text and gets deleted if I press backspace/delete. Tried looking into its internal code but I can’t understand :D</p>
<p>Here’s my code:</p>
<pre><code>def textChanged(event):
index = text1.index(tk.INSERT)
# coord = from index ??
# list1.place(coord[0],coord[1]
if __name__=='__main__':
root = Tk()
text1 = Text(root)
text1.place(x=0,y=0)
text1.bind("<KeyRelease>",textChanged)
list1 = Listbox(root)
</code></pre>
<p>Thanks!</p>
|
<p>You can use <code>text1.bbox(tk.INSERT)[:2]</code> to get the coordinates (relative to <code>text1</code>) of the insertion cursor. However since <code>list1</code> is a child of <code>root</code> window, it is better to offset this coordinates by the position of <code>text1</code> inside <code>root</code> window:</p>
<pre class="lang-py prettyprint-override"><code>def textChanged(event):
# get the coordinates of text box inside root window
x0, y0 = text1.winfo_x(), text1.winfo_y()
# get the coordinates of the insertion cursor inside text box
x1, y1 = text1.bbox(INSERT)[:2]
# the coordinates of the insertion cursor relative to root window
# will be (x0+x1, y0+y1)
list1.place(x=x0+x1, y=y0+y1)
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
What is the relation between stack frame and a scope?<p>Recently I am learning about scoping in Python. I understand what is a stack frame but I am confused about the relation and difference between a stack frame and a scope. I study Python through the book 《Introduction to Computation and Programming Using Python》. It does not specifically clarify these two terms.</p>
|
<p>Scope is just one of the LEGB: Local, Enclosing, Global and Built-in. They are namespaces that Python uses to look up names. LEGB is the order in which that lookup is performed: first the local scope is checked for a name, then the enclosing scope, then global, then built-in, and if it's never found, you end up with an exception.</p>
<p>This ordering is the reason for 'shadowing': if you define something in local scope, it shadows global, because local scope is checked before global scope. The definition doesn't overwrite the previous one, it shadows it. If you redefine a variable in the same scope, it overwrites the previous variable and you can't get it back.</p>
<p>A stack frame is created every time a function is called (and a global frame every time a module is loaded). The stack frame handles the local variables for the function. Every time another function is called, a new stack frame is created, creating a new local scope. This allows every call to a function to have its own set of local variables, without access to the local scope of previous calls. Every time a function returns, that stack frame is destroyed, and you end back up in the previous stack frame (so, it's a 'stack').</p>
<p>So 'stack frame' is related to 'scope' in that the local scope is on the top-most stack frame. A stack frame contains the local scope for a function call, and a global frame contains the global scope for a module.</p>
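<p>A minimal sketch to make this concrete (the names here are just illustrative): each call to <code>outer()</code> gets its own frame with its own local <code>x</code>, which shadows (but never overwrites) the global <code>x</code>:</p>

```python
x = "global"

def outer():
    x = "enclosing"          # local to this frame of outer()
    def inner():
        x = "local"          # shadows the enclosing and global x
        return x
    return inner(), x        # inner's assignment didn't touch outer's x

print(outer())  # ('local', 'enclosing')
print(x)        # 'global', the global scope was never modified
```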
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
arlpy.bf.steering doesn't exist in ARL py tools package<p>I am trying to run this example from ARL py Tools <a href="https://arlpy.readthedocs.io/en/latest/bf.html" rel="nofollow noreferrer">documentation</a> for generating
Barlett Beampattern which mentions usage of <code>arlpy.bf.steering()</code>; but when I try to run it says <code>steering()</code> not found.</p>
<pre class="lang-py prettyprint-override"><code>sd = arlpy.bf.steering(np.linspace(0, 5, 11), 1500, np.linspace(-np.pi/2, np.pi/2, 181))
bp = arlpy.bf.bartlett_beampattern(90, 1500, sd, show=True)
</code></pre>
<p>Error:</p>
<pre class="lang-sh prettyprint-override"><code>AttributeError: module 'arlpy.bf' has no attribute 'steering'
</code></pre>
<p>The documentation notes from April 2020 also mention the usage for the same function but doesn't show definition of that function anywhere.</p>
<p>Refer: Page 16 at <a href="https://arlpy.readthedocs.io/_/downloads/en/latest/pdf/" rel="nofollow noreferrer">https://arlpy.readthedocs.io/_/downloads/en/latest/pdf/</a><br />
The version of <code>arlpy</code> I am using is <strong>1.7.0</strong> which appears to be the latest.</p>
<p>Please advise what should be done to fix it?</p>
|
<p>This has been resolved now: it seems that <code>arlpy.bf.steering()</code> is left over from a previous version and is now outdated; it will be replaced by <code>arlpy.bf.steering_plane_wave()</code> in the next release.</p>
<p>Check my issue post on their github for more information: <a href="https://github.com/org-arl/arlpy/issues/61" rel="nofollow noreferrer">https://github.com/org-arl/arlpy/issues/61</a> which has been closed now.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to sum values in list?<pre><code>list = ['1297546327:0', '1297546327:1', '1297546327:1', '1297546327:0', '1297546327:0', '1297546327:0', '1297546327:0', '1297546327:1', '1297546327:1', '1297546327:1', '5138875960:0', '5138875960:1', '5138875960:0', '5138875960:0', '5138875960:1']
</code></pre>
<p>I have a list like this and I need to add values after ":" To get it like this</p>
<pre><code>total = ['1297546327:5','5138875960:2']
</code></pre>
<p>How can you do that??</p>
|
<p>Use <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow noreferrer"><code>itertools.groupby</code></a>, for example:</p>
<pre><code>from itertools import groupby

alist = ['1297546327:0', '1297546327:1', '1297546327:1', '1297546327:0', '1297546327:0', '1297546327:0', '1297546327:0', '1297546327:1', '1297546327:1', '1297546327:1', '5138875960:0', '5138875960:1', '5138875960:0', '5138875960:0', '5138875960:1']
alist = [obj.split(':') for obj in alist]
result = {}
for k, g in groupby(sorted(alist), key=lambda x: x[0]):
result[k] = sum(int(v) for _, v in g)
print(result)
# if you want num in ascending order (for desending order pass reverse=True to the sorted function):
# result = dict(sorted(result.items(), key=lambda obj: obj[1]))
print([f'{k}:{v}' for k, v in result.items()])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>{'1297546327': 5, '5138875960': 2}
['1297546327:5', '5138875960:2']
</code></pre>
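<p>Since <code>groupby</code> only groups consecutive items (hence the <code>sorted()</code> call), a simpler alternative that avoids sorting altogether is a plain running total with <code>collections.defaultdict</code>:</p>

```python
from collections import defaultdict

alist = ['1297546327:0', '1297546327:1', '1297546327:1', '1297546327:0',
         '5138875960:0', '5138875960:1', '5138875960:1']

totals = defaultdict(int)
for item in alist:
    key, value = item.split(':')
    totals[key] += int(value)   # accumulate the part after ':'

print([f'{k}:{v}' for k, v in totals.items()])
# ['1297546327:2', '5138875960:2']
```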
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Drop all the columns containing some string but excepting some with the same string<p>I have several columns containing a string "luds" I want to drop all these columns except exactly 2 columns containing the same string</p>
<p>For example if I have the following columns in my DataFrame:</p>
<pre><code> luds_mean
luds_std
luds_var
luds_corr
luds_out
made_mean
made_std
made_var
</code></pre>
<p>I want to retain luds_mean and luds_std columns and drop all the columns containing the string luds and left with following columns:</p>
<pre><code> luds_mean
luds_std
made_mean
made_std
made_var
</code></pre>
|
<p>You can use <code>isin()</code>+<code>loc</code> accessor:</p>
<pre><code>df=df.loc[:,~df.columns.isin(['luds_var','luds_corr','luds_out'])]
</code></pre>
<p>OR</p>
<p>If there are more columns named luds_... then use:</p>
<pre><code>s=df.columns.isin(['luds_mean','luds_std']) | ~df.columns.str.contains('luds_')
df=df.loc[:,s]
</code></pre>
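<p>A quick check with your column names (an empty frame, just to show which columns survive the mask):</p>

```python
import pandas as pd

df = pd.DataFrame(columns=['luds_mean', 'luds_std', 'luds_var', 'luds_corr',
                           'luds_out', 'made_mean', 'made_std', 'made_var'])

# keep the two whitelisted luds_ columns, plus everything without 'luds_'
keep = df.columns.isin(['luds_mean', 'luds_std']) | ~df.columns.str.contains('luds_')
df = df.loc[:, keep]

print(list(df.columns))
# ['luds_mean', 'luds_std', 'made_mean', 'made_std', 'made_var']
```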
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Add same key and values while for loop in python dictionnary<p>I am a python beginner and I am working on a small project but I can't get the output wanted.</p>
<p>This is the entry value that I have, I have to keep only the email's and associate them with the project without duplicated values.
This my entry:</p>
<pre><code>my_dictionnary = [
{
"email": ["email1", "email2"],
"project":"projet1",
"project_name":"Project1"
},
{
"email": ["email1", "email2"],
"project":"projet2",
"project_name":"Project2"
},
{
"email": ["email2"],
"project":"projet3",
"project_name":"Project3"
}
]
</code></pre>
<p>I would like to create the ouput as follows:</p>
<pre><code>my_dictionnary_parsed = {
"email2": ["projet1", "projet2", "projet3"],
"email1": ["projet1", "projet2"]
}
</code></pre>
<p>I have the following function:</p>
<pre><code>def formating_email(list_of_email):
dictionnary={}
for email_data in list_of_email_data:
for cp in email_data['cp_related']:
print(cp,email_data['ticket_related_project'])
dictionnary[cp] = email_data
</code></pre>
<p>But the values keep overriding...</p>
|
<p>Assuming your dictionary structure is as posted (note that the keys your function references, like <code>cp_related</code>, don't actually exist in your sample data), the core problem is that you directly set <code>dictionnary[cp] = email_data</code>, which overwrites the previous value on every iteration. Since each value should be a list, you should append instead.</p>
<p>Also, avoid generic names like <code>dictionnary</code>. Give variables names that describe their purpose, such as <code>projects_per_email</code>.</p>
<pre><code>def formating_email(list_of_email_data):
    dictionnary = {}
    for email_data in list_of_email_data:
        for cp in email_data['cp_related']:
            print(cp, email_data['ticket_related_project'])
            if dictionnary.get(cp) is None:
                dictionnary[cp] = [email_data]
            else:
                dictionnary[cp].append(email_data)
    return dictionnary
</code></pre>
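<p>Applied to the exact input structure you posted (keys <code>email</code> and <code>project</code>), a sketch of the full transformation looks like this:</p>

```python
entries = [
    {"email": ["email1", "email2"], "project": "projet1", "project_name": "Project1"},
    {"email": ["email1", "email2"], "project": "projet2", "project_name": "Project2"},
    {"email": ["email2"], "project": "projet3", "project_name": "Project3"},
]

projects_per_email = {}
for entry in entries:
    for email in entry["email"]:
        # setdefault creates the list the first time this email is seen
        projects = projects_per_email.setdefault(email, [])
        if entry["project"] not in projects:   # avoid duplicated values
            projects.append(entry["project"])

print(projects_per_email)
# {'email1': ['projet1', 'projet2'], 'email2': ['projet1', 'projet2', 'projet3']}
```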
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Three lists in a dictionary<p>I have three lists which I want to convert into a list of dictionaries following this structure:</p>
<p>Expected output:</p>
<pre><code>[{'Chamber pop': {'url': '/wiki/Chamber_pop', 'description': 'xxx'},
{'Dance-punk': {'url': '/wiki/Dance-punk', 'description': 'yyy'},
{'Dream pop': {'url': '/wiki/Dream_pop', 'description': 'zzz'},
{'Dunedin Sound': {'url': 'Dunedin_Sound', 'description': 'aaa'}]
</code></pre>
<p>Lists:</p>
<pre><code>names = ['Chamber pop',
'Dance-punk',
'Dream pop',
'Dunedin Sound',]
urls = ['/wiki/Chamber_pop',
'/wiki/Dance-punk',
'/wiki/Dream_pop',
'/wiki/Dunedin_Sound']
description = ["xxx","yyy","zzz","aaa"]
</code></pre>
<p>What I've tried so far is:</p>
<pre><code>res = {}
for x in names:
for y in url:
res[x] = {}
res[x]["url"] = y
</code></pre>
<p>However the output of this code is:</p>
<pre><code>{'Chamber pop': {'url': '/wiki/Twee_pop'},
'Dance-punk': {'url': '/wiki/Twee_pop'},
'Dream pop': {'url': '/wiki/Twee_pop'},
'Dunedin Sound': {'url': '/wiki/Twee_pop'}}
</code></pre>
<p>As you may see, the url value keeps repeating. I guess it's because somewhere is overwriting the value. Also, it does not have the expected structure as it all falls inside a dictionary.</p>
<p>What am I doing wrong? Would appreciate any help.</p>
<p>Thank you</p>
|
<p>Typically you would <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip()</code></a> your lists allowing you to iterate over them in parallel. Then you could simply use a dict comprehension:</p>
<pre><code>names = ['Chamber pop','Dance-punk','Dream pop', 'Dunedin Sound',]
urls = ['/wiki/Chamber_pop', '/wiki/Dance-punk', '/wiki/Dream_pop','/wiki/Dunedin_Sound']
description = ["xxx","yyy","zzz","aaa"]
res = {n: {'url':u, 'description':d}
for n, u, d in zip(names, urls, description)}
</code></pre>
<p>which makes res =</p>
<pre><code>{'Chamber pop': {'url': '/wiki/Chamber_pop', 'description': 'xxx'},
'Dance-punk': {'url': '/wiki/Dance-punk', 'description': 'yyy'},
'Dream pop': {'url': '/wiki/Dream_pop', 'description': 'zzz'},
'Dunedin Sound': {'url': '/wiki/Dunedin_Sound', 'description': 'aaa'}}
</code></pre>
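<p>If you really want the list-of-single-key-dicts shape from your expected output (rather than one flat dict, which is usually easier to work with), wrap each pair in its own dict:</p>

```python
names = ['Chamber pop', 'Dance-punk']
urls = ['/wiki/Chamber_pop', '/wiki/Dance-punk']
description = ['xxx', 'yyy']

res_list = [{n: {'url': u, 'description': d}}
            for n, u, d in zip(names, urls, description)]

print(res_list[0])
# {'Chamber pop': {'url': '/wiki/Chamber_pop', 'description': 'xxx'}}
```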
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Equivalent of flatMap in Python's concurrency framework<p>I have a piece of code like this:</p>
<pre><code>for x in range(10):
for v in f(x):
print(v)
</code></pre>
<p>I would like to parallelize it, so I might do</p>
<pre><code>ex = ProcessPollExecutor()
for vs in ex.map(f, range(10)):
for v in vs:
print(v)
</code></pre>
<p>However, <code>f</code> is a generator, so the above code doesn't really work.
I can change <code>f</code> to return a list, but this list will be too big to fit in memory.</p>
<p>Ideally I would like something like <code>flatMap</code> in pyspark.
However using pyspark directly like <code>sc.parallelize(range(10)).flatMap(f).toLocalIterator()</code>
doesn't seem to work. At least I can't get it to utilize more than one processor when the initial list is that short.
(I've tried all of the stuff in <a href="https://stackoverflow.com/q/26828987/205521">Why is this simple Spark program not utlizing multiple cores?</a> with no luck.)</p>
<p>I can probably roll something by myself using queues, but I wonder if there is an intended way to parallelize such code in the Python concurrency framework?</p>
|
<p>I ended up coding my own small library using <code>multiprocessing</code>: <a href="https://github.com/thomasahle/pystreams" rel="nofollow noreferrer">PyStreams</a>.</p>
<p>It has pretty efficient flatmap support via buffering, and supports other Spark-like features, like this:</p>
<pre><code>>>> sentences = ["a word is a word", "all words are words"]
>>> (Stream(sentences)
... .flatmap(lambda sentence: sentence.split())
... .chunk_by_key(lambda x: hash(x) % 10)
... .reduce_once(lambda chunk: len(set(chunk)))
... .sum())
6
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Detect Changes between two XML files using Python<p>I'm working on a project where I need to detect changes in two XML files, generated at different times.</p>
<p>The XML files contain airport data, runways, obstacles etc. so a lot of nested information, and only a fraction of the airports in the file are relevant.</p>
<p>I've tried the xmldiff module but it takes a long time to run and obviously returns information not relevant to the airports I'm interested in.</p>
<p>I've decided to write a python script to extract only the airports I need from the XML's, and store the data in two separate dictionaries, I will then compare them and try to identify the changes.</p>
<p>Could anyone advise on whether this is the correct way to go about the problem? Any advice at all is much appreciated!</p>
<p>Thanks</p>
|
<p>Extracting the airports and saving them in a data structure that you can compare feels like a good way to approach the problem to me. Depending on how you handle duplicates you could think about using dictionaries (one key - multiple values) or sets (no duplicates).
<a href="https://betterprogramming.pub/a-visual-guide-to-set-comparisons-in-python-6ab7edb9ec41" rel="nofollow noreferrer">Here</a> is a pretty good explanation on how to compare the contents of two different sets.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to check the nearest matching value between two fields in same table and add data to the third field using Pandas?<p>I have one table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Month_1</th>
<th>Month_2</th>
<th>Paid</th>
</tr>
</thead>
<tbody>
<tr>
<td>01</td>
<td>12</td>
<td>10</td>
<td></td>
</tr>
<tr>
<td>02</td>
<td>09</td>
<td>03</td>
<td></td>
</tr>
<tr>
<td>03</td>
<td>02</td>
<td>04</td>
<td></td>
</tr>
<tr>
<td>04</td>
<td>01</td>
<td>08</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>The output should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Month_1</th>
<th>Month_2</th>
<th>Paid</th>
</tr>
</thead>
<tbody>
<tr>
<td>01</td>
<td>12</td>
<td>10</td>
<td>Yes</td>
</tr>
<tr>
<td>02</td>
<td>09</td>
<td>03</td>
<td></td>
</tr>
<tr>
<td>03</td>
<td>02</td>
<td>04</td>
<td>Yes</td>
</tr>
<tr>
<td>04</td>
<td>01</td>
<td>08</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>Logic: Add 'Yes' to the Paid field whose Month_1 and Month_2 are nearby</p>
|
<p>You can subtract the columns, take absolute values and compare whether the result is less than or equal to a threshold, e.g. <code>2</code>, and then set values with <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>df['Paid'] = np.where(df['Month_1'].sub(df['Month_2']).abs().le(2), 'Yes','')
print (df)
Index Month_1 Month_2 Paid
0 01 12 10 Yes
1 02 9 3
2 03 2 4 Yes
3 04 1 8
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to return data from R notebook task to Python task in databricks<p>I have a simple function in an R notebook (notebook A) that aggregates some data. I want to call notebook A from another notebook (notebook B) and interogate the aggregated data from notebook A in notebook B.</p>
<p>So far I can run notebook A from notebook B no problem, but cannot see any returned data, variables or functions.</p>
<p>Code in notebook A:</p>
<pre><code>function_to_aggregate_data = function(x,y){
...some code...
}
aggregated_data = function_to_aggregate_data(x,y)
</code></pre>
<p>Code in notebook B:</p>
<pre><code>%python
dbutils.notebook.run("path/to/notebook_A", 60)
</code></pre>
|
<p>When you use <code>dbutils.notebook.run</code>, that notebook is executed as a separate job, so no variables, etc. are available for the caller notebook, or in the called notebook. You can return some data from the notebook using <code>dbutils.notebook.exit</code>, but it's limited to 1024 bytes (as I remember). But you can return data by registering temp view, and then accessing data in this temp view - here is an example of doing that (although using Python for both).</p>
<p><a href="https://github.com/alexott/databricks-nutter-repos-demo/blob/master/Code1.py#L8" rel="nofollow noreferrer">Notebook B</a>:</p>
<pre class="lang-py prettyprint-override"><code>def generate_data1(n=1000, name='my_cool_data'):
df = spark.range(0, n)
df.createOrReplaceTempView(name)
</code></pre>
<p>The caller notebook (your notebook B):</p>
<pre class="lang-py prettyprint-override"><code>dbutils.notebook.run('./Code1', default_timeout)
df = spark.sql("select * from my_cool_data")
assert(df.count() == 1000)
</code></pre>
<p>P.S. You can't directly share data between R & Python code, only by using temp views, etc.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Error in using instance name as dict values and object attribute as dict keys<p>I want to create boutiqueDict as type dict which has Boutique objects tagged to different types.
eg: b1, b2, b3 are 3 boutique objects where b1,b2 belong to "shirt" type and b3 "dress" type.
Then boutiqueDict dictionary has values in format
<code>{'shirt':b1,b2, 'dress':b3}</code>
Here is the code i've written</p>
<pre><code>class Boutique:
def __init__(self, boutiqueid, boutiquename, boutiquetype, boutiquerating, points):
self.boutiqueid = boutiqueid
self.boutiquename = boutiquename
self.boutiquetype = boutiquetype
self.boutiquerating = boutiquerating
self.points = points
class OnlineBoutique:
def __init__(self, *argv):
self.boutiqueDict = {}
for arg in argv:
if arg.boutiquetype not in self.boutiqueDict.keys():
self.boutiqueDict = {arg.boutiquetype : list(arg)}
else:
(self.boutiqueDict[arg.boutiquetype]).append(arg)
</code></pre>
<p>And this is the error i am getting</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "hello.py", line 14, in __init__
self.boutiqueDict = {arg.boutiquetype : list(arg)}
TypeError: 'Boutique' object is not iterable
</code></pre>
<p>I am not sure where/why it is trying to iterate.</p>
|
<p>From the code above, <strong>OnlineBoutique</strong> is meant to be passed multiple instances of <strong>Boutique</strong>, but the problem is that you are trying to convert such an instance to a list with the line</p>
<pre><code>list(arg)
</code></pre>
<p>Your <strong>Boutique</strong> class doesn't implement the <strong>__iter__()</strong> method, so its instances are not iterable, hence the error.</p>
<p>Even so, this goes against your initial idea in the first place. What you should do instead is store the first object in a one-element list, so that later objects can be appended to it:</p>
<pre><code>self.boutiqueDict[arg.boutiquetype] = [arg]
</code></pre>
<p>Since you said the values in the dict should be grouped by type, in the format <code>{'shirt': [b1, b2]}</code>.</p>
<p>Thus things will now look like</p>
<pre><code>    for arg in argv:
        if arg.boutiquetype not in self.boutiqueDict:
            self.boutiqueDict[arg.boutiquetype] = [arg]
        else:
            self.boutiqueDict[arg.boutiquetype].append(arg)
</code></pre>
<p>That way, if the <strong>boutiquetype</strong> key isn't present yet, it's created with a one-element list; otherwise the new object is appended to the list already stored under that key.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Convert date time to day and time<p>I have a variable in a df that looks like this</p>
<pre><code>Datetime
10/27/2020 2:28:28 PM
8/2/2020 3:30:18 AM
6/15/2020 5:38:19 PM
</code></pre>
<p>How can I change it to this using python?</p>
<pre><code>Date Time
10/27/2020 14:28:28
8/2/2020 3:30:18
6/15/2020 17:38:19
</code></pre>
<p>I understand how to separate date and time, but unsure of how to convert it to 24 hour time.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer">pd.to_datetime</a> to convert a scalar, array-like, Series or DataFrame/dict-like to a pandas datetime object. Then, you can use the accessor object for datetimelike properties of the Series values (<code>Series.dt</code>) to obtain the time, which will already be in 24-hour format.</p>
<p>You can also use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer">dt.strftime</a> to format the output string which supports the same string format as the python standard library.</p>
<pre class="lang-py prettyprint-override"><code>df['Datetime'] = pd.to_datetime(df.Datetime)
df['Date'] = df.Datetime.dt.strftime('%m/%d/%Y')
df['Time'] = df.Datetime.dt.time
print(df)
</code></pre>
<pre class="lang-py prettyprint-override"><code> Datetime Date Time
0 2020-10-27 14:28:28 10/27/2020 14:28:28
1 2020-08-02 03:30:18 08/02/2020 03:30:18
2 2020-06-15 17:38:19 06/15/2020 17:38:19
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to keep rows with more than three columns greater than (in Pandas)<p>i'd like to know how to keep rows in a Pandas dataframe in which more than 3 of its columns have values greater than 0.8
Here is an example:</p>
<pre><code>companyInfo = pd.DataFrame()
companyInfo['col1'] = [0,0,0,0,0]
companyInfo['col2'] = [0,0.9,0,0,0]
companyInfo['col3'] = [0,0,0.85,0,0]
companyInfo['col4'] = [0,0,0,0,0]
companyInfo['col5'] = [0,0.2,0,0,0.09]
companyInfo['col6'] = [0,0,0.3,0,0.87]
companyInfo['col7'] = [0,0,0.2,0.4,0.82]
</code></pre>
<p>In this case, only the last row would be kept as it has at least 3 columns greater than 0.8</p>
|
<p>You can create the masking for values greater than 0.8 and then call <code>sum()</code> on <code>axis=1</code> and then check if the sum is greater than 3</p>
<pre class="lang-py prettyprint-override"><code>companyInfo[(companyInfo>0.8).sum(axis=1)>3]
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code>Columns: [col1, col2, col3, col4, col5, col6, col7]
Index: []
</code></pre>
<p><em>Empty because you don't have any value matching this criteria</em></p>
<p>But for some other criteria:</p>
<pre class="lang-py prettyprint-override"><code>companyInfo[(companyInfo>=0.2).sum(axis=1)>=3]
#output
col1 col2 col3 col4 col5 col6 col7
2 0 0.0 0.85 0 0.0 0.3 0.2
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
User ModelForm not getting saved in database (Django)<p>I have a ModelForm that creates an instance of a custom user (AbstractUser model). Whenever I try to submit the form, nothing happens except a reload of the page (because of the action), and nothing is getting saved in the database. The code is as following:</p>
<pre><code>def sign_up(request):
if request.method == 'POST':
print("test")
form = UserForm(request.POST)
if form.is_valid():
form.save()
data = form.cleaned_data
user = authenticate(
username=data['username'],
password=data['password']
)
user_login(request, user)
else:
form = UserForm()
return render(request, 'schedules/signup/signup.html', {'form':form})
</code></pre>
<pre><code>class ClientUser(AbstractUser):
classes = models.ManyToManyField(ScheduleClass)
personal_changes = models.TextField(name="personal_changes", default="none")
schedule = models.ManyToManyField(Schedule)
def get_classes(self):
return self.classes
</code></pre>
<pre><code>class UserForm(forms.ModelForm):
class Meta:
model = ClientUser
fields = ['username', 'password', 'classes', 'schedule']
</code></pre>
<pre><code> <form method="post" action="http://127.0.0.1:8000/">
{% csrf_token %}
{{ form }}
<input type="submit">
</form>
</code></pre>
<p>For some reason, it's also not even printing the line I wrote on line 3 of the <code>sign_up</code> func, even when I post. Is there something I am doing wrong?</p>
|
<p>Ok, so you said your print isn't showing, so first off make your form action relative to the site. Using Django's <code>{% url %}</code> template tag is often a good idea here, or you can just leave the action blank to post to the current URL:</p>
<pre><code> <form method="post" action="">
{% csrf_token %}
{{ form }}
<input type="submit">
</form>
</code></pre>
<p>Now if you hit the view with your POST request and the form isn't valid, what you're doing at the moment won't include any errors when the page is rendered again so it'll look like nothing happened when in actual fact you did try to validate the form, you just didn't return the errors to the user.</p>
<p>Once you call <code>form.is_valid()</code>, errors are added to that <code>form</code> instance if it isn't valid. So the state of the form instance is really important and you need to pass it back to the user rather than creating a new one which is how you're getting rid of the errors.</p>
<p>You'd do something like this;</p>
<pre><code>def sign_up(request):
# Create the form for a GET request
form = UserForm()
if request.method == 'POST':
# Recreate the form instance with the POST data
form = UserForm(request.POST)
# Run form validation
if form.is_valid():
form.save()
data = form.cleaned_data
user = authenticate(
username=data['username'],
password=data['password']
)
user_login(request, user)
# if you end up here on a POST request, the form isn't valid but still contains the errors
return render(request, 'schedules/signup/signup.html', {'form':form})
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Loading the binary data to a NumPy array<p>I am having trouble reading the binary file. I have a NumPy array as,</p>
<pre><code>data = array([[ 0. , 0. , 7.821725 ],
[ 0.05050505, 0. , 7.6358337 ],
[ 0.1010101 , 0. , 7.453858 ],
...,
[ 4.8989897 , 5. , 16.63227 ],
[ 4.949495 , 5. , 16.88153 ],
[ 5. , 5. , 17.130795 ]], dtype=float32)
</code></pre>
<p>I wrote this array to a file in binary format.</p>
<pre><code>file = open('model_binary', 'wb')
data.tofile(file)
</code></pre>
<p>Now, I am unable to get back the data from the saved binary file. I tried using <code>numpy.fromfile()</code> but it didn't work out for me.</p>
<pre><code>file = open('model_binary', 'rb')
data = np.fromfile(file)
</code></pre>
<p>When I printed the data I got <code>[0.00000000e+00 2.19335211e-13 8.33400000e+04 ... 2.04800049e+03 2.04800050e+03 5.25260241e+07]</code> which is absolutely not what I want.</p>
<p>I ran the following code to check what was in the file,</p>
<pre><code>for line in file:
print(line)
break
</code></pre>
<p>I got the output as <code>b'\x00\x00\x00\x00\......\c1\x07@\x00\x00\x00\x00S\xc5{@j\xfd\n'</code> which I suppose is in binary format.</p>
<p>I would like to get the array back from the binary file as it was saved. Any help will be appreciated.</p>
|
<p>As Kevin noted, adding the dtype is required. You might also need to reshape (you have 3 columns in your example). So</p>
<pre><code>file = open('model_binary', 'rb')
data = np.fromfile(file, dtype=np.float32).reshape((-1, 3))
</code></pre>
<p>should work for you.</p>
<p>As an aside, I think <code>np.save</code> <strong>does</strong> save to binary format, and should avoid these issues.</p>
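<p>A sketch of the <code>np.save</code> round trip, using an in-memory buffer here instead of a real file so it is self-contained. The <code>.npy</code> header records the dtype and shape, so nothing extra is needed on load:</p>

```python
import io
import numpy as np

data = np.array([[0.0,        0.0, 7.821725],
                 [0.05050505, 0.0, 7.6358337]], dtype=np.float32)

buf = io.BytesIO()
np.save(buf, data)      # writes a .npy header (dtype, shape) plus the raw bytes
buf.seek(0)
restored = np.load(buf)

print(np.array_equal(restored, data), restored.dtype)  # True float32
```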
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How do I use python to turn two lists into a dictionary, and the value is the maximum value<p>I have two lists here</p>
<pre><code>L = ["a","b","b","c","c","c"]
L_02 = [1,3,2,4,6,""]
</code></pre>
<p>I want to turn two lists into a dictionary, and the value is the maximum value</p>
<pre><code>dic = {"a":1,"b":3,"c":6}
</code></pre>
<p>how can I do this?</p>
|
<p>We can first get the indices of each element in the list, get the corresponding values in the second list, and find the maximums and make a dictionary.</p>
<pre><code>dic = {}
for element in set(L):
    indices = [i for i, x in enumerate(L) if x == element]
    corresponding = [L_02[i] for i in indices if isinstance(L_02[i], int)]
    dic[element] = max(corresponding)
</code></pre>
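<p>Alternatively, a single pass over the paired lists with <code>zip()</code> does the same thing without the repeated <code>enumerate</code> scans (the empty string is skipped the same way):</p>

```python
L = ["a", "b", "b", "c", "c", "c"]
L_02 = [1, 3, 2, 4, 6, ""]

dic = {}
for key, value in zip(L, L_02):
    if isinstance(value, int):              # skip non-numeric entries like ""
        dic[key] = max(dic.get(key, value), value)

print(dic)  # {'a': 1, 'b': 3, 'c': 6}
```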
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Maximum volume inscribed ellipsoid in a polytope/set of points<p><strong>Later Edit</strong>: I uploaded <a href="https://send.firefox.com/download/5897807ea7bddc1e/#H4Zc7WIqCtkCCE3WBxz0og" rel="nofollow noreferrer">here</a> a sample of my original data. It's actually a segmentation image in the DICOM format. The volume of this structure as it is it's ~ 16 mL, so I assume the inner ellipsoid volume should be smaller than that. to extract the points from the DICOM image I used the following code:</p>
<pre><code>import os
import numpy as np
import SimpleITK as sitk

def get_volume_ml(image):
    x_spacing, y_spacing, z_spacing = image.GetSpacing()
    image_nda = sitk.GetArrayFromImage(image)
    imageSegm_nda_NonZero = image_nda.nonzero()
    num_voxels = len(list(zip(imageSegm_nda_NonZero[0],
                              imageSegm_nda_NonZero[1],
                              imageSegm_nda_NonZero[2])))
    if 0 >= num_voxels:
        print('The mask image does not seem to contain an object.')
        return None
    volume_object_ml = (num_voxels * x_spacing * y_spacing * z_spacing) / 1000
    return volume_object_ml

def get_surface_points(folder_path):
    """
    :param folder_path: path to folder where DICOM images are stored
    :return: surface points of the DICOM object
    """
    # DICOM Series
    reader = sitk.ImageSeriesReader()
    dicom_names = reader.GetGDCMSeriesFileNames(os.path.normpath(folder_path))
    reader.SetFileNames(dicom_names)
    reader.MetaDataDictionaryArrayUpdateOn()
    reader.LoadPrivateTagsOn()
    try:
        dcm_img = reader.Execute()
    except Exception:
        print('Non-readable DICOM Data: ', folder_path)
        return None
    volume_obj = get_volume_ml(dcm_img)
    print('The volume of the object in mL:', volume_obj)
    contour = sitk.LabelContour(dcm_img, fullyConnected=False)
    contours = sitk.GetArrayFromImage(contour)
    vertices_locations = contours.nonzero()
    vertices_unravel = list(zip(vertices_locations[0], vertices_locations[1], vertices_locations[2]))
    vertices_list = [list(vertices_unravel[i]) for i in range(0, len(vertices_unravel))]
    surface_points = np.array(vertices_list)
    return surface_points

folder_path = r"C:\Users\etc\TTT [13]\20160415 114441\Series 052 [CT - Abdomen WT 1 0 I31f 3]"
points = get_surface_points(folder_path)
</code></pre>
<p>I have a set of points (n > 1000) in 3D space that describe a hollow ovoid like shape. <strong>What I would like is to fit an ellipsoid (3D) that is inside all of the points.</strong> I am looking for the maximum volume ellipsoid fitting inside the points.</p>
<p>I tried to adapt the code from <a href="https://stackoverflow.com/questions/14016898/port-matlab-bounding-ellipsoid-code-to-python">Minimum Enclosing Ellipsoid</a> (aka outer bounding ellipsoid)<br>
by modifying the threshold <code>err > tol</code>, with my logic begin that all points should be smaller than < 1 given the ellipsoid equation. But no success.</p>
<p>I also tried the Loewner-John adaptation on <a href="https://docs.mosek.com/9.2/pythonfusion/case-studies-ellipsoids.html" rel="nofollow noreferrer">mosek</a>, but I didn't figure how to describe the intersection of a hyperplane with 3D polytope (the Ax <= b representation) so I can use it for the 3D case. So no success again.</p>
<p><a href="https://i.stack.imgur.com/eU2S2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eU2S2.png" alt="inscribed ellipsoid"></a></p>
<p>The code from the outer ellipsoid:</p>
<pre><code>import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

pi = np.pi
sin = np.sin
cos = np.cos

def plot_ellipsoid(A, centroid, color, ax):
    """
    :param A: matrix
    :param centroid: center
    :param color: color
    :param ax: axis
    :return:
    """
    centroid = np.asarray(centroid)
    A = np.asarray(A)
    U, D, V = la.svd(A)
    rx, ry, rz = 1. / np.sqrt(D)
    u, v = np.mgrid[0:2 * np.pi:20j, -np.pi / 2:np.pi / 2:10j]
    x = rx * np.cos(u) * np.cos(v)
    y = ry * np.sin(u) * np.cos(v)
    z = rz * np.sin(v)
    E = np.dstack((x, y, z))
    E = np.dot(E, V) + centroid
    x, y, z = np.rollaxis(E, axis=-1)
    ax.plot_wireframe(x, y, z, cstride=1, rstride=1, color=color, alpha=0.2)
    ax.set_zlabel('Z-Axis')
    ax.set_ylabel('Y-Axis')
    ax.set_xlabel('X-Axis')

def mvee(points, tol = 0.001):
    """
    Finds the ellipse equation in "center form"
    (x-c).T * A * (x-c) = 1
    """
    N, d = points.shape
    Q = np.column_stack((points, np.ones(N))).T
    err = tol+1.0
    u = np.ones(N)/N
    while err > tol:
        # assert u.sum() == 1 # invariant
        X = np.dot(np.dot(Q, np.diag(u)), Q.T)
        M = np.diag(np.dot(np.dot(Q.T, la.inv(X)), Q))
        jdx = np.argmax(M)
        step_size = (M[jdx]-d-1.0)/((d+1)*(M[jdx]-1.0))
        new_u = (1-step_size)*u
        new_u[jdx] += step_size
        err = la.norm(new_u-u)
        u = new_u
    c = np.dot(u, points)
    A = la.inv(np.dot(np.dot(points.T, np.diag(u)), points)
               - np.multiply.outer(c, c))/d
    return A, c

folder_path = r""  # path to a DICOM img folder
points = get_surface_points(folder_path)  # or some random pts
A, centroid = mvee(points)
U, D, V = la.svd(A)
rx_outer, ry_outer, rz_outer = 1./np.sqrt(D)

# PLOT
fig = plt.figure()
ax1 = fig.add_subplot(111, projection='3d')
ax1.scatter(points[:, 0], points[:, 1], points[:, 2], c='blue')
plot_ellipsoid(A, centroid, 'green', ax1)
</code></pre>
<p>Which gives me this result for the outer ellipsoid on my sample points:
<a href="https://i.stack.imgur.com/odiXl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/odiXl.png" alt="enter image description here"></a></p>
<p><strong>The main question: How do I fit an ellipsoid (3D) inside a cloud of 3D points using Python?</strong> </p>
<p>Is it possible to modify the algorithm for the outer ellipsoid to get the maximum inscribed (inner) ellipsoid?</p>
<p>I am looking for code in <code>Python</code> ideally.</p>
|
<h1>Problem statement</h1>
<p>Given a number of points <code>v₁, v₂, ..., vₙ</code>, find a large ellipsoid satisfying two constraints:</p>
<ol>
<li>The ellipsoid is in the convex hull ℋ = ConvexHull(v₁, v₂, ..., vₙ).</li>
<li>None of the points v₁, v₂, ..., vₙ is within the ellipsoid.</li>
</ol>
<p>I propose an iterative procedure to find a large ellipsoid satisfying these two constraints. In each iteration we need to solve a semidefinite programming problem. This iterative procedure is guaranteed to converge, however it is not guaranteed to converge to the <em>globally</em> largest ellipsoid.</p>
<h1>Approach</h1>
<h2>Find a single ellipsoid</h2>
<p>The core of our iterative procedure is that in each iteration, we find an ellipsoid satisfying 3 conditions:</p>
<ul>
<li>The ellipsoid is contained within ConvexHull(v₁, v₂, ..., vₙ) = {x | Ax<=b}.</li>
<li>For a set of points u₁, ... uₘ (where {v₁, v₂, ..., vₙ} ⊂ {u₁, ... uₘ}, namely the given points in the point cloud belong to this set of points u₁, ... uₘ), the ellipsoid doesn't contain any point in u₁, ... uₘ. We call this set u₁, ... uₘ the "outside points".</li>
<li>For a set of points w₁,..., wₖ (where {w₁,..., wₖ} ∩ {v₁, v₂, ..., vₙ} = ∅, namely none of the points v₁, v₂, ..., vₙ belongs to {w₁,..., wₖ}), the ellipsoid contains all of the points w₁,..., wₖ. We call this set w₁,..., wₖ the "inside points".</li>
</ul>
<p>The intuitive idea is that the "inside points" w₁,..., wₖ indicate the volume of the ellipsoid. We will append new points to the "inside points" so as to increase the ellipsoid volume.</p>
<p>To find such an ellipsoid through convex optimization, we parameterize the ellipsoid as</p>
<pre><code>{x | xᵀPx + 2qᵀx ≤ r}
</code></pre>
<p>and we will search for <code>P, q, r</code>.</p>
<p>The condition that the "outside points" u₁, ... uₘ are all outside of the ellipsoid is formulated as</p>
<pre><code>uᵢᵀPuᵢ + 2qᵀuᵢ >= r ∀ i=1, ..., m
</code></pre>
<p>this is a linear constraint on <code>P, q, r</code>.</p>
<p>The condition that the "inside points" w₁,..., wₖ are all inside the ellipsoid is formulated as</p>
<pre><code>wᵢᵀPwᵢ + 2qᵀwᵢ <= r ∀ i=1, ..., k
</code></pre>
<p>This is also a linear constraint on <code>P, q, r</code>.</p>
<p>We also impose the constraint</p>
<pre><code>P is positive definite
</code></pre>
<p><code>P</code> being positive definite, together with the constraint that there exists points wᵢ satisfying wᵢᵀPwᵢ + 2qᵀwᵢ <= r guarantees that the set {x | xᵀPx + 2qᵀx ≤ r} is an ellipsoid.</p>
<p>We also have the constraint that the ellipsoid is inside the convex hull ℋ={x | aᵢᵀx≤ bᵢ, i=1,...,l} (namely there are <code>l</code> halfspaces as the H-representation of ℋ). Using <a href="https://en.wikipedia.org/wiki/S-procedure" rel="nofollow noreferrer">s-lemma</a>, we know that a necessary and sufficient condition for the halfspace <code>{x|aᵢᵀx≤ bᵢ}</code> containing the ellipsoid is that</p>
<pre><code>∃ λᵢ >= 0,
s.t [P            q -λᵢaᵢ/2] is positive semidefinite.
    [(q-λᵢaᵢ/2)ᵀ  λᵢbᵢ-r   ]
</code></pre>
<p>Hence we can solve the following semidefinite programming problem to find the ellipsoid that contains all the "inside points", doesn't contain any "outside points", and is within the convex hull ℋ</p>
<pre><code>find P, q, r, λ
s.t uᵢᵀPuᵢ + 2qᵀuᵢ >= r ∀ i=1, ..., m
    wᵢᵀPwᵢ + 2qᵀwᵢ <= r ∀ i=1, ..., k
    P is positive definite.
    λ >= 0,
    [P            q -λᵢaᵢ/2] is positive semidefinite.
    [(q-λᵢaᵢ/2)ᵀ  λᵢbᵢ-r   ]
</code></pre>
<p>We call this <code>P, q, r = find_ellipsoid(outside_points, inside_points, A, b)</code>.</p>
<p>The volume of this ellipsoid is proportional to (r + qᵀP⁻¹q)/power(det(P), 1/3).</p>
<h2>Iterative procedure.</h2>
<p>We initialize "outside points" as all the points v₁, v₂, ..., vₙ in the point cloud, and "inside points" as a single point <code>w₁</code> in the convex hull ℋ. In each iteration, we use the <code>find_ellipsoid</code> function from the previous sub-section to find the ellipsoid within ℋ that contains all "inside points" but doesn't contain any "outside points". Depending on the result of the SDP in <code>find_ellipsoid</code>, we do the following</p>
<ul>
<li>If the SDP is feasible. We then compare the newly found ellipsoid with the largest ellipsoid found so far. If this new ellipsoid is larger, then accept it as the largest ellipsoid found so far.</li>
<li>If the SDP is infeasible, then we remove the last point in "inside points", and add this point to the "outside points".</li>
</ul>
<p>In both cases, we then take a new sample point in the convex hull ℋ, add that sample point to "inside points", and then solve the SDP again.</p>
<p>The complete algorithm is as follows</p>
<ol>
<li>Initialize "outside points" to v₁, v₂, ..., vₙ, initialize "inside points" to a single random point in the convex hull ℋ.</li>
<li>while iter < max_iterations:</li>
<li>Solve the SDP <code>P, q, r = find_ellipsoid(outside_points, inside_points, A, b)</code>.</li>
<li>If SDP is feasible and volume(Ellipsoid(P, q, r)) > largest_volume, set <code>P_best = P, q_best=q, r_best = r</code>.</li>
<li>If SDP is infeasible, pt = inside_points.pop_last(), outside_points.push_back(pt).</li>
<li>Randomly sample a new point in ℋ, append the point to "inside points", iter += 1. Go to step 3.</li>
</ol>
<h1>Code</h1>
<pre><code>from scipy.spatial import ConvexHull, Delaunay
import scipy
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import dirichlet
from mpl_toolkits.mplot3d import Axes3D # noqa
def get_hull(pts):
    dim = pts.shape[1]
    hull = ConvexHull(pts)
    A = hull.equations[:, 0:dim]
    b = hull.equations[:, dim]
    return A, -b, hull

def compute_ellipsoid_volume(P, q, r):
    """
    The volume of the ellipsoid xᵀPx + 2qᵀx ≤ r is proportional to
    power(r + qᵀP⁻¹q, dim/2)/sqrt(det(P))
    We return this number.
    """
    dim = P.shape[0]
    return np.power(r + q @ np.linalg.solve(P, q), dim/2) / \
        np.sqrt(np.linalg.det(P))
def uniform_sample_from_convex_hull(deln, dim, n):
    """
    Uniformly sample n points in the convex hull Ax<=b
    This is copied from
    https://stackoverflow.com/questions/59073952/how-to-get-uniformly-distributed-points-in-convex-hull
    @param deln Delaunay of the convex hull.
    """
    vols = np.abs(np.linalg.det(deln[:, :dim, :] - deln[:, dim:, :]))\
        / np.math.factorial(dim)
    sample = np.random.choice(len(vols), size=n, p=vols / vols.sum())
    return np.einsum('ijk, ij -> ik', deln[sample],
                     dirichlet.rvs([1]*(dim + 1), size=n))

def centered_sample_from_convex_hull(pts):
    """
    Sample a random point z that is in the convex hull of the points
    v₁, ..., vₙ. z = (w₁v₁ + ... + wₙvₙ) / (w₁ + ... + wₙ) where wᵢ are all
    uniformly sampled from [0, 1]. Notice that by central limit theorem, the
    distribution of this sample is centered around the convex hull center, and
    also with small variance when the number of points are large.
    """
    num_pts = pts.shape[0]
    pts_weights = np.random.uniform(0, 1, num_pts)
    z = (pts_weights @ pts) / np.sum(pts_weights)
    return z
def find_ellipsoid(outside_pts, inside_pts, A, b):
    """
    For a given sets of points v₁, ..., vₙ, find the ellipsoid satisfying
    three constraints:
    1. The ellipsoid is within the convex hull of these points.
    2. The ellipsoid doesn't contain any of the points.
    3. The ellipsoid contains all the points in @p inside_pts
    This ellipsoid is parameterized as {x | xᵀPx + 2qᵀx ≤ r }.
    We find this ellipsoid by solving a semidefinite programming problem.
    @param outside_pts outside_pts[i, :] is the i'th point vᵢ. The point vᵢ
    must be outside of the ellipsoid.
    @param inside_pts inside_pts[i, :] is the i'th point that must be inside
    the ellipsoid.
    @param A, b The convex hull of v₁, ..., vₙ is Ax<=b
    @return (P, q, r, λ) P, q, r are the parameterization of this ellipsoid. λ
    is the slack variable used in constraining the ellipsoid inside the convex
    hull Ax <= b. If the problem is infeasible, then returns
    None, None, None, None
    """
    assert(isinstance(outside_pts, np.ndarray))
    (num_outside_pts, dim) = outside_pts.shape
    assert(isinstance(inside_pts, np.ndarray))
    assert(inside_pts.shape[1] == dim)
    num_inside_pts = inside_pts.shape[0]

    constraints = []
    P = cp.Variable((dim, dim), symmetric=True)
    q = cp.Variable(dim)
    r = cp.Variable()

    # Impose the constraint that v₁, ..., vₙ are all outside of the ellipsoid.
    for i in range(num_outside_pts):
        constraints.append(
            outside_pts[i, :] @ (P @ outside_pts[i, :]) +
            2 * q @ outside_pts[i, :] >= r)
    # P is strictly positive definite.
    epsilon = 1e-6
    constraints.append(P - epsilon * np.eye(dim) >> 0)

    # Add the constraint that the ellipsoid contains @p inside_pts.
    for i in range(num_inside_pts):
        constraints.append(
            inside_pts[i, :] @ (P @ inside_pts[i, :]) +
            2 * q @ inside_pts[i, :] <= r)

    # Now add the constraint that the ellipsoid is in the convex hull Ax<=b.
    # Using s-lemma, we know that the constraint is
    # ∃ λᵢ > 0,
    # s.t [P            q -λᵢaᵢ/2] is positive semidefinite.
    #     [(q-λᵢaᵢ/2)ᵀ  λᵢbᵢ-r   ]
    num_faces = A.shape[0]
    lambda_var = cp.Variable(num_faces)
    constraints.append(lambda_var >= 0)
    Q = [None] * num_faces
    for i in range(num_faces):
        Q[i] = cp.Variable((dim+1, dim+1), PSD=True)
        constraints.append(Q[i][:dim, :dim] == P)
        constraints.append(Q[i][:dim, dim] == q - lambda_var[i] * A[i, :]/2)
        constraints.append(Q[i][-1, -1] == lambda_var[i] * b[i] - r)

    prob = cp.Problem(cp.Minimize(0), constraints)
    try:
        prob.solve(verbose=False)
    except cp.error.SolverError:
        return None, None, None, None
    if prob.status == 'optimal':
        P_val = P.value
        q_val = q.value
        r_val = r.value
        lambda_val = lambda_var.value
        return P_val, q_val, r_val, lambda_val
    else:
        return None, None, None, None
def draw_ellipsoid(P, q, r, outside_pts, inside_pts):
    """
    Draw an ellipsoid defined as {x | xᵀPx + 2qᵀx ≤ r }
    This ellipsoid is equivalent to
    |Lx + L⁻¹q| ≤ √(r + qᵀP⁻¹q)
    where L is the symmetric matrix satisfying L * L = P
    """
    fig = plt.figure()
    dim = P.shape[0]
    L = scipy.linalg.sqrtm(P)
    radius = np.sqrt(r + q@(np.linalg.solve(P, q)))
    if dim == 2:
        # first compute the points on the unit sphere
        theta = np.linspace(0, 2 * np.pi, 200)
        sphere_pts = np.vstack((np.cos(theta), np.sin(theta)))
        ellipsoid_pts = np.linalg.solve(
            L, radius * sphere_pts - (np.linalg.solve(L, q)).reshape((2, -1)))
        ax = fig.add_subplot(111)
        ax.plot(ellipsoid_pts[0, :], ellipsoid_pts[1, :], c='blue')

        ax.scatter(outside_pts[:, 0], outside_pts[:, 1], c='red')
        ax.scatter(inside_pts[:, 0], inside_pts[:, 1], s=20, c='green')
        ax.axis('equal')
        plt.show()
    if dim == 3:
        u = np.linspace(0, np.pi, 30)
        v = np.linspace(0, 2*np.pi, 30)

        sphere_pts_x = np.outer(np.sin(u), np.sin(v))
        sphere_pts_y = np.outer(np.sin(u), np.cos(v))
        sphere_pts_z = np.outer(np.cos(u), np.ones_like(v))
        sphere_pts = np.vstack((
            sphere_pts_x.reshape((1, -1)), sphere_pts_y.reshape((1, -1)),
            sphere_pts_z.reshape((1, -1))))
        ellipsoid_pts = np.linalg.solve(
            L, radius * sphere_pts - (np.linalg.solve(L, q)).reshape((3, -1)))
        ax = plt.axes(projection='3d')
        ellipsoid_pts_x = ellipsoid_pts[0, :].reshape(sphere_pts_x.shape)
        ellipsoid_pts_y = ellipsoid_pts[1, :].reshape(sphere_pts_y.shape)
        ellipsoid_pts_z = ellipsoid_pts[2, :].reshape(sphere_pts_z.shape)
        ax.plot_wireframe(ellipsoid_pts_x, ellipsoid_pts_y, ellipsoid_pts_z)
        ax.scatter(outside_pts[:, 0], outside_pts[:, 1], outside_pts[:, 2],
                   c='red')
        ax.scatter(inside_pts[:, 0], inside_pts[:, 1], inside_pts[:, 2], s=20,
                   c='green')
        ax.axis('equal')
        plt.show()
def find_large_ellipsoid(pts, max_iterations):
    """
    We find a large ellipsoid within the convex hull of @p pts but not
    containing any point in @p pts.
    The algorithm proceeds iteratively
    1. Start with outside_pts = pts, inside_pts = z where z is a random point
       in the convex hull of @p outside_pts.
    2. while num_iter < max_iterations
    3.   Solve an SDP to find an ellipsoid that is within the convex hull of
         @p pts, not containing any outside_pts, but contains all inside_pts.
    4.   If the SDP in the previous step is infeasible, then remove z from
         inside_pts, and append it to the outside_pts.
    5.   Randomly sample a point in the convex hull of @p pts, if this point is
         outside of the current ellipsoid, then append it to inside_pts.
    6.   num_iter += 1
    When the iterations limit is reached, we report the ellipsoid with the
    maximal volume.
    @param pts pts[i, :] is the i'th points that has to be outside of the
    ellipsoid.
    @param max_iterations The iterations limit.
    @return (P, q, r) The largest ellipsoid is parameterized as
    {x | xᵀPx + 2qᵀx ≤ r }
    """
    dim = pts.shape[1]
    A, b, hull = get_hull(pts)
    hull_vertices = pts[hull.vertices]
    deln = hull_vertices[Delaunay(hull_vertices).simplices]

    outside_pts = pts
    z = centered_sample_from_convex_hull(pts)
    inside_pts = z.reshape((1, -1))

    num_iter = 0
    max_ellipsoid_volume = -np.inf
    while num_iter < max_iterations:
        (P, q, r, lambda_val) = find_ellipsoid(outside_pts, inside_pts, A, b)
        if P is not None:
            volume = compute_ellipsoid_volume(P, q, r)
            if volume > max_ellipsoid_volume:
                max_ellipsoid_volume = volume
                P_best = P
                q_best = q
                r_best = r
            else:
                # Adding the last inside_pts doesn't increase the ellipsoid
                # volume, so remove it.
                inside_pts = inside_pts[:-1, :]
        else:
            outside_pts = np.vstack((outside_pts, inside_pts[-1, :]))
            inside_pts = inside_pts[:-1, :]

        # Now take a new sample that is outside of the ellipsoid.
        sample_pts = uniform_sample_from_convex_hull(deln, dim, 20)
        is_in_ellipsoid = np.sum(sample_pts.T*(P_best @ sample_pts.T), axis=0)\
            + 2 * sample_pts @ q_best <= r_best
        if np.all(is_in_ellipsoid):
            # All the sampled points are in the ellipsoid, the ellipsoid is
            # already large enough.
            return P_best, q_best, r_best
        else:
            inside_pts = np.vstack((
                inside_pts, sample_pts[np.where(~is_in_ellipsoid)[0][0], :]))
            num_iter += 1

    return P_best, q_best, r_best

if __name__ == "__main__":
    pts = np.array([[0., 0.], [0., 1.], [1., 1.], [1., 0.], [0.2, 0.4]])
    max_iterations = 10
    P, q, r = find_large_ellipsoid(pts, max_iterations)
</code></pre>
<p>I also put the code in the <a href="https://github.com/hongkai-dai/large_inscribed_ellipsoid" rel="nofollow noreferrer">github repo</a></p>
<h1>Results</h1>
<p>Here is the result on a simple 2D example, say we want to find a large ellipsoid that doesn't contain the five red points in the figure below. Here is the result after the first iteration
<a href="https://i.stack.imgur.com/k3Sxc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k3Sxc.png" alt="iteration1_result" /></a>. The red points are the "outside points" (the initial outside points are v₁, v₂, ..., vₙ), the green point is the initial "inside points".</p>
<p>In the second iteration, the ellipsoid becomes</p>
<p><a href="https://i.stack.imgur.com/ND942.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ND942.png" alt="iteration2_result" /></a>. By adding one more "inside point" (green dot), the ellipsoid gets larger.</p>
<p>This gif shows the animation of the 10 iterations.<a href="https://i.stack.imgur.com/rNBWk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rNBWk.gif" alt="all_iterations_result" /></a></p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Build a data frame from the existing data frame which has the distinct id<p>I have a data frame like in the figure.</p>
<p>I have multiple id with different columns.</p>
<p>I want to build another dataframe that only has the distinct id and also the dataframes has all the columns for that id.</p>
<p>could you help me with that? Thank you so much</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame()
df['id'] = [2, 3, 3,2,1]
df['c2'] = [1, 10, 6, 7, 4]
df['c3'] = [11, 23, 9, 10,23]
print(df)
dfn = pd.DataFrame()
dfn['id'] = [2, 3,1]
dfn['c2'] = [1, 10, 4]
dfn['c3'] = [11, 23, 23]
dfn['c4'] = [7, 6, 0]
dfn['c5'] = [10, 9, 0]
dfn
</code></pre>
<p><a href="https://i.stack.imgur.com/V8hmm.png" rel="nofollow noreferrer">The first one is that I have and the second one is that I want</a></p>
|
<p>You can use <code>set_index</code> / <code>stack</code> as the starting point of the transformation to flatten your dataframe. You can execute the following code step-by-step to follow the transformation to your final dataframe:</p>
<pre><code>out = (
    df.set_index('id').rename_axis(columns='col')
      .stack().rename('val').reset_index()
      .assign(col=lambda x: x.groupby('id').cumcount().add(2))
      .pivot_table('val', 'id', 'col', fill_value=0, sort=False)
      .add_prefix('c').reset_index().rename_axis(columns=None)
)
</code></pre>
<p>Output:</p>
<pre><code>>>> out
   id  c2  c3  c4  c5
0   2   1  11   7  10
1   3  10  23   6   9
2   1   4  23   0   0
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Python - How to plot a function with conditions?<p>I am trying to plot a reflectance curve when light travels from glass to air with Python.</p>
<p>Here is the code I wrote:</p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
import numpy as np

def t_angle(theta):
    n = 1.5
    return np.arcsin(n * np.sin(theta))

def reflection(theta):
    t = t_angle(np.deg2rad(theta))
    up = 1.5 * np.cos(np.deg2rad(theta)) - np.cos(t)
    down = 1.5 * np.cos(np.deg2rad(theta)) + np.cos(t)
    return np.power(up,2)/np.power(down,2)

x = np.arange(0, 90, 0.01)
y1 = reflection(x)

matplotlib.rc('axes.formatter', useoffset=False)
fig, ax = plt.subplots()
ax.set(xlabel=r'Incidence angle', ylabel=r'R')
plt.axis([0, 90, 0, 1])
plt.plot(x, y1, 'r', label='R')
plt.legend(loc='upper right')
plt.show()
</code></pre>
<p><code>t_angle</code> is a function that calculates the transmission angle, and <code>reflection</code> is a function that calculates the reflectance.</p>
<p>The thing is, at <code>theta = 41.81</code> deg, <code>reflection</code> breaks because the argument of <code>arcsin</code> becomes bigger than 1.</p>
<p>So, I am trying to make <code>reflection</code> to return the value 1 if <code>theta</code> is bigger or the same as 41.81.</p>
<p>I first thought maybe putting a condition inside <code>reflection</code> like below might be the solution:</p>
<pre><code>def reflection(theta):
    t = t_angle(np.deg2rad(theta))
    up = 1.5 * np.cos(np.deg2rad(theta)) - np.cos(t)
    down = 1.5 * np.cos(np.deg2rad(theta)) + np.cos(t)
    if(theta > 41.81):
        return np.power(up,2)/np.power(down,2)
    return 1
</code></pre>
<p>However the Python console showed the following error:</p>
<pre><code>Warning (from warnings module):
  File "C:/Users/Users/Desktop/plotting2.py", line 7
    return np.arcsin(n * np.sin(theta))
RuntimeWarning: invalid value encountered in arcsin
Traceback (most recent call last):
  File "C:/Users/Users/Desktop/plotting2.py", line 29, in <module>
    y1 = reflection(x)
  File "C:/Users/Users/Desktop/plotting2.py", line 14, in reflection
    if(theta > 41.81):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I had some thought and I thought the error came because x is an array?? But I'm not really sure...</p>
<p>Is there anyway to make the program plot 1 if <code>reflection</code> breaks?</p>
|
<p>You can do the usual calculations and at the end, put <code>1</code>s to the output array's entries that correspond to <code>theta > 41.81</code> via masking:</p>
<pre class="lang-py prettyprint-override"><code>def reflection(theta):
    t = t_angle(np.deg2rad(theta))
    up = 1.5 * np.cos(np.deg2rad(theta)) - np.cos(t)
    down = 1.5 * np.cos(np.deg2rad(theta)) + np.cos(t)
    output = np.power(up,2)/np.power(down,2)
    # mask and fill with 1
    output[theta > 41.81] = 1
    return output
</code></pre>
<p>To see the effect, you might want to expand top y-axis limit a bit i.e.</p>
<pre><code>plt.axis([0, 90, 0, 1.1]) # 1 -> 1.1
</code></pre>
<p>and we get:</p>
<p><a href="https://i.stack.imgur.com/UoP5g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UoP5g.png" alt="img" /></a></p>
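<p>As a side note, the masking above still triggers the <code>arcsin</code> RuntimeWarning for angles past total internal reflection. A hedged variant (a sketch, not the only way to do it) silences the warning and fills the resulting NaNs, instead of hard-coding the 41.81° threshold:</p>

```python
import numpy as np

def reflection(theta):
    theta_rad = np.deg2rad(theta)
    with np.errstate(invalid='ignore'):          # suppress the arcsin warning
        t = np.arcsin(1.5 * np.sin(theta_rad))   # NaN past the critical angle
        up = 1.5 * np.cos(theta_rad) - np.cos(t)
        down = 1.5 * np.cos(theta_rad) + np.cos(t)
        out = np.power(up, 2) / np.power(down, 2)
    return np.where(np.isnan(out), 1.0, out)     # total reflection -> R = 1

print(reflection(np.array([0.0, 80.0])))
```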
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Most common bigram words distribution in a data<p>I got a dataset with utterances text column:</p>
<pre><code>**utterance**
Where Arizona state located?
how to find Arizona state
is the united stated is the biggest country around the world?
Arizona state borders
united stated borders
</code></pre>
<p>I would like to get a bigram keyword distribution output:</p>
<pre><code> Arizona state 3
United stated 2
</code></pre>
<p>This code is for unigram/one word:
<code>df.loc['utterances'].explode().value_counts()</code></p>
<p>How can I do this for bigram?</p>
|
<p>In case you don't want to perform any pre-processing (like lower-casing the characters), an easy implementation would be like the one below:</p>
<pre><code>import pandas as pd
from collections import Counter
from functools import reduce
df = pd.DataFrame({'utterances':
    ['Where Arizona state located?', 'how to find Arizona state', 'is the united stated is the biggest country around the world?', 'Arizona state borders', 'united stated borders']
})
df['bigrams'] = df['utterances'].apply(lambda item:Counter([bg for bg in zip(item.split(), item.split()[1:])]))
total = reduce(lambda a, b: a+b , df['bigrams'].to_list())
total.most_common()
</code></pre>
<p>output:</p>
<pre><code>[(('Arizona', 'state'), 3),
 (('is', 'the'), 2),
 (('united', 'stated'), 2),
 (('Where', 'Arizona'), 1),
 (('state', 'located?'), 1),
 (('how', 'to'), 1),
 (('to', 'find'), 1),
 (('find', 'Arizona'), 1),
 (('the', 'united'), 1),
 (('stated', 'is'), 1),
 (('the', 'biggest'), 1),
 (('biggest', 'country'), 1),
 (('country', 'around'), 1),
 (('around', 'the'), 1),
 (('the', 'world?'), 1),
 (('state', 'borders'), 1),
 (('stated', 'borders'), 1)]
</code></pre>
<p>In case you want to add something more sophisticated, it needs to be done before counting the bigrams. Adding trigrams and beyond is easy then, as the update shows :)</p>
<p><strong>UPDATE</strong></p>
<pre><code>import pandas as pd
from collections import Counter
from functools import reduce
df = pd.DataFrame({'utterances':
    ['Where Arizona state located?', 'how to find Arizona state', 'is the united stated is the biggest country around the world?', 'Arizona state borders', 'united stated borders']
})
df['bigrams'] = df['utterances'].apply(lambda item:Counter([bg for bg in zip(item.split(), item.split()[1:])]))
df['trigrams'] = df['utterances'].apply(lambda item:Counter([bg for bg in zip(item.split(), item.split()[1:], item.split()[2:])]))
total_bigram = reduce(lambda a, b: a+b , df['bigrams'].to_list())
total_trigram = reduce(lambda a, b: a+b , df['trigrams'].to_list())
print(total_bigram.most_common())
print(total_trigram.most_common())
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
How to match latex bibliography in python?<p>I am trying to get plain text from LaTeX source code and would like to remove bibliography. For example,</p>
<pre><code>\begin{thebibliography}{99}
\bibitem{b0} J.Dunietz, J.Hauser, J.L.Rosner, Phys. Rev. {\bf D}35 (1987)
2166
\end{thebibliography}
</code></pre>
<p>I found out the <code>detex</code> module for the extraction but I'm still trying to remove the bibliography first (using python <code>re</code>). What I have right now is:</p>
<pre><code>>>> b = '\newpage\begin{thebibliography}{99}\bibitem{b0} J.Dunietz, J.Hauser\end{thebibliography}'
>>> re.sub('\\\\begin\{thebibliography\}(.*?)\\\\end\{thebibliography\}', ' ', b)
'\newpage\x08egin{thebibliography}{99}\x08ibitem{b0} J.Dunietz, J.Hauser\\end{thebibliography}'
</code></pre>
<p>The ideal result here should be: <code>\newpage</code>. I wonder what am I doing wrong here? Thank you!</p>
|
<p>You can use raw-strings, to greatly reduce the mental load of dealing with escapes (especially when dealing with escaping at both the string and the regex level). Let's define <code>b</code> as a raw string (note the <code>r''</code>):</p>
<pre><code>b = r'\newpage\begin{thebibliography}{99}\bibitem{b0} J.Dunietz, J.Hauser\end{thebibliography}'
</code></pre>
<p>Let's look at what we have in the string:</p>
<pre><code>>>> b
'\\newpage\\begin{thebibliography}{99}\\bibitem{b0} J.Dunietz, J.Hauser\\end{thebibliography}'
>>> print(b)
\newpage\begin{thebibliography}{99}\bibitem{b0} J.Dunietz, J.Hauser\end{thebibliography}
</code></pre>
<p>The raw-strings seems to have escaped the backslashes properly. Now we can simply <code>re.sub</code> the pattern with an empty string. Again, we will use a raw-string to define the pattern to simplify the escaping:</p>
<pre><code>>>> result = re.sub(r'\\begin\{thebibliography\}.*?\\end\{thebibliography\}', '', b)
>>> result
'\\newpage'
>>> print(result)
\newpage
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pandas fill missing Time-Series data. Only if more than one day is missing<p>I have two time-series with different frequencies. Would like to fill values using the lower frequency data.</p>
<p>Here is what I mean. Hope it is clear this way:</p>
<pre><code>index = [pd.datetime(2022,1,10,1),
         pd.datetime(2022,1,10,2),
         pd.datetime(2022,1,12,7),
         pd.datetime(2022,1,14,12),]
df1 = pd.DataFrame([1,2,3,4],index=index)

2022-01-10 01:00:00    1
2022-01-10 02:00:00    2
2022-01-12 07:00:00    3
2022-01-14 12:00:00    4

index = pd.date_range(start=pd.datetime(2022,1,9),
                      end = pd.datetime(2022,1,15),
                      freq='D')
df2 = pd.DataFrame([n+99 for n in range(len(index))],index=index)

2022-01-09     99
2022-01-10    100
2022-01-11    101
2022-01-12    102
2022-01-13    103
2022-01-14    104
2022-01-15    105
</code></pre>
<p>The final df should only fill values if more than one day is missing under df1. So the result should be:</p>
<pre><code>2022-01-09 00:00:00     99
2022-01-10 01:00:00      1
2022-01-10 02:00:00      2
2022-01-11 00:00:00    101
2022-01-12 07:00:00      3
2022-01-13 00:00:00    103
2022-01-14 12:00:00      4
2022-01-15 00:00:00    105
</code></pre>
<p>Any idea how to do this?</p>
|
<p>You can filter <code>df2</code> to keep only the new dates and <code>concat</code> to <code>df1</code>:</p>
<pre><code>import numpy as np
idx1 = pd.to_datetime(df1.index).date
idx2 = pd.to_datetime(df2.index).date
df3 = pd.concat([df1, df2[~np.isin(idx2, idx1)]]).sort_index()
</code></pre>
<p>Output:</p>
<pre><code>                       0
2022-01-09 00:00:00   99
2022-01-10 01:00:00    1
2022-01-10 02:00:00    2
2022-01-11 00:00:00  101
2022-01-12 07:00:00    3
2022-01-13 00:00:00  103
2022-01-14 12:00:00    4
2022-01-15 00:00:00  105
</code></pre>
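<p>A hedged variant of the same idea (a sketch using <code>DatetimeIndex.normalize</code>, which works here because pandas converts a list of datetimes into a <code>DatetimeIndex</code>):</p>

```python
import pandas as pd

df1 = pd.DataFrame([1, 2, 3, 4],
                   index=pd.DatetimeIndex(['2022-01-10 01:00', '2022-01-10 02:00',
                                           '2022-01-12 07:00', '2022-01-14 12:00']))
df2 = pd.DataFrame(range(99, 106),
                   index=pd.date_range('2022-01-09', '2022-01-15', freq='D'))

# normalize() floors each timestamp to midnight, so comparing calendar days
# becomes a plain isin() on the two indexes
mask = ~df2.index.normalize().isin(df1.index.normalize())
df3 = pd.concat([df1, df2[mask]]).sort_index()
print(df3)
```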
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
numpy.dot - Shapes error - Neural Network<p>I am trying to multiply these <code>A1</code> and <code>W2</code> matrices (<code>Z2 = W2.dot(A1)</code>):</p>
<pre><code>A1 : [[0.42940542]
      [0.55013895]]

W2 : [[-0.4734037  -0.39642393 -0.05440914 -0.24011293 -0.03670913 -0.37523234]
      [-0.45501004  0.23881832  0.21831658  0.32237388  0.25674681  0.27956714]]
</code></pre>
<p>But I am getting this error <code>shapes (2,6) and (2,1) not aligned: 6 (dim 1) != 2 (dim 0)</code>, why? Isn't it normal to multiply a (2,1) with a (2,6) matrix?</p>
<p>Because I have a <code>hidden layer with 2 nodes</code> and output layer with <code>6 nodes</code></p>
|
<p>Mathematically this is impossible: you are multiplying a (2, 6) matrix by a (2, 1) matrix, and the inner dimensions (6 and 2) do not match. All you need to do is transpose <code>W2</code>.</p>
<p><strong>P.S.:</strong> Note that in linear algebra <code>np.dot(W2.T, A1)</code> is not the same as <code>np.dot(A1.T, W2)</code></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
A1 = np.asarray([[0.42940542], [0.55013895]])
W2 = np.asarray([[
-0.4734037, -0.39642393, -0.05440914, -0.24011293, -0.03670913, -0.37523234
], [-0.45501004, 0.23881832, 0.21831658, 0.32237388, 0.25674681, 0.27956714]])
print(W2.shape, A1.shape) # (2, 6), (2, 1)
Z2 = W2.T @ A1
print(Z2)
</code></pre>
<p>The result would be: [[-0.45360086] [-0.03884332] [ 0.09674087] [ 0.07424463] [ 0.12548332] [-0.00732603]]</p>
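<p>To make the P.S. concrete: the two products contain the same numbers but in transposed layouts, which matters when the result is fed into the next layer. A small sketch (with random weights, just to show the shapes):</p>
<pre><code>import numpy as np

A1 = np.array([[0.42940542], [0.55013895]])            # shape (2, 1)
W2 = np.random.default_rng(0).standard_normal((2, 6))  # shape (2, 6)

r1 = W2.T @ A1   # (6, 2) @ (2, 1) -> (6, 1), a column vector
r2 = A1.T @ W2   # (1, 2) @ (2, 6) -> (1, 6), a row vector

# Same values, transposed layout
assert np.allclose(r1, r2.T)
</code></pre>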
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
With Django, how to display User friendly names in the template using grouper?<p>My model offers a choice list:</p>
<pre><code>class Interview(models.Model):
class InterviewStatus(models.Model):
PENDING = '1-PDG'
XED = '0-XED'
DONE = '2-DON'
COMPLETE = '3-COM'
ITW_STATUS = [
(PENDING, "Interview Planned"),
(XED, "Interview Cancelled"),
(DONE, "Interview Done"),
(COMPLETE, "Interview Done and Assessed")
]
</code></pre>
<p>which is implemented in the model fields:</p>
<pre><code>status = models.CharField(max_length=5,
blank=False,
null=False,
default=InterviewStatus.PENDING,
choices=InterviewStatus.ITW_STATUS,
verbose_name="Statut de l'interview")
</code></pre>
<p>When creating a new object, everything is OK.
My template is written as such:</p>
<pre><code>{% regroup object_list by status as status_list %}
<h1 id="topic-list">Interview List</h1>
<ul>
{% for status in status_list %}
<li><h2>{{ status.grouper }}</h2></li>
<ul>
{% for interview in status.list %}
<li><a href="{{ interview.get_absolute_url }}">{{ interview.official }}{{ interview.date_effective }} </a></li>
{% endfor %}
</ul>
</code></pre>
<p>What I get as an outcome in my browser is:</p>
<p><strong>. 1-PDG</strong></p>
<ul>
<li>item1</li>
<li>item2</li>
</ul>
<p>etc.</p>
<p>My question is:
how could I obtain the user-friendly names instead of the values, which are supposed to be displayed only in the HTML field:</p>
<pre><code><option value="1-PDG" selected>Interview Planned</option>
</code></pre>
|
<p>The <code>group.list</code> is just a Python <code>list</code>, and in templates you can index into a list using <code>variable.<index></code>, so you can get the first object in the list and use its <a href="https://docs.djangoproject.com/en/3.2/ref/models/instances/#django.db.models.Model.get_FOO_display" rel="nofollow noreferrer"><code>get_FOO_display</code> method [Django docs]</a> to display the human-friendly name:</p>
<pre><code>{% regroup object_list by status as status_list %}
<h1 id="topic-list">Interview List</h1>
<ul>
{% for status in status_list %}
<!-- Use `status.list.0.get_status_display` here -->
<li><h2>{{ status.list.0.get_status_display }}</h2></li>
<ul>
{% for interview in status.list %}
<li><a href="{{ interview.get_absolute_url }}">{{ interview.official }}{{ interview.date_effective }} </a></li>
{% endfor %}
</ul>
</code></pre>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Pulling API data into Django Template that has <a></a> tags embedded, is there a way of wrapping the text in an HTML tag?<p>I'm reading a very large API. One of the fields I need has "a" tags embedded in the dictionary item, and when I pull it into my template and display it, the "a" tags show up as literal text.</p>
<p>exp:</p>
<pre><code>"Bitcoin uses the <a href="https://www.coingecko.com/en?hashing_algorithm=SHA-256">SHA-256</a> hashing... ...such as <a href="https://www.coingecko.com/en/coins/litecoin">Litecoin</a>, <a href="https://www.coingecko.com/en/coins/peercoin">Peercoin</a>, <a href="https://www.coingecko.com/en/coins/primecoin">Primecoin</a>*"
</code></pre>
<p>I would like to wrap this in HTML so when it displays on the page it has the actual links rather than the 'a' tags and the URL.</p>
<p>What I'm looking to get:
"Bitcoin uses the <a href="https://www.coingecko.com/en?hashing_algorithm=SHA-256" rel="nofollow noreferrer">SHA-256</a> hashing... ...such as <a href="https://www.coingecko.com/en/coins/litecoin" rel="nofollow noreferrer">Litecoin</a>, Peercoin, Primecoin*"</p>
|
<p>I figured it out: I used the <code>|safe</code> filter. (Strictly speaking, <code>safe</code> is a built-in template filter, so the <code>humanize</code> app is not required for it; enable humanize only if you also want its formatting filters.)</p>
<p>In <code>settings.py</code>, add <code>'django.contrib.humanize'</code> to <code>INSTALLED_APPS</code>:</p>
<pre><code>INSTALLED_APPS = [
    ...
    'django.contrib.humanize',
]
</code></pre>
<p>At the top of the HTML template, add:</p>
<pre><code>{% load humanize %}
</code></pre>
<p>For the data you want to format use |safe</p>
<pre><code>{{ location.of.data|safe }}
</code></pre>
<p>This will render the text as HTML. Be aware that <code>|safe</code> disables autoescaping, so only use it on API content you trust.</p>
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
FileNotFound error when loading assets in Pygame<p>I am learning how to use Pygame and when I am loading the assets to use in my project it gives me a FileNotFound error. The code it gives me an error at is <code>NORMAL_BINGUS = pygame.image.load(os.path.join('assets', 'Bingus_Normal.jpg'))</code></p>
<p>The assets folder is in the same folder my code is in, and the image name is exactly the same as I typed above, so I do not know what the cause for this could be.</p>
|
<p>Your code may be run from a different working directory than the script's folder, so the relative path <code>'assets'</code> is resolved in the wrong place.</p>
<p>You need to build an absolute path based on the folder that contains the code.</p>
<pre><code>BASE = os.path.dirname(os.path.abspath(__file__))
</code></pre>
<p>And later use it to create an absolute path to every asset:</p>
<pre><code>os.path.join(BASE, 'assets', 'Bingus_Normal.jpg')
</code></pre>
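<p>Putting the two pieces together, a minimal sketch (the <code>pygame.image.load</code> call is left commented out so the path logic can be shown on its own):</p>
<pre><code>import os

# Folder containing this script; fall back to the CWD when __file__
# is unavailable (e.g. in an interactive session).
BASE = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()

# Absolute path to the asset, independent of where Python was launched from
ASSET = os.path.join(BASE, 'assets', 'Bingus_Normal.jpg')

# In the actual game you would then load it:
# NORMAL_BINGUS = pygame.image.load(ASSET)
</code></pre>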
|
Please answer the following Stackoverflow question on Python. Answer it like you are a developer answering Stackoverflow questions.
Stackoverflow question:
Flask deploy to heroku errors<p>When I try to open my website <a href="https://pomocnikprofesora.herokuapp.com/" rel="nofollow noreferrer">https://pomocnikprofesora.herokuapp.com/</a> it doesn't work.
These are my logs.
Any advice would be greatly appreciated, thank you!</p>
<pre><code>$ heroku logs --tail
» Warning: heroku update available from 7.37.0 to 7.47.3.
2020-11-30T16:12:03.048409+00:00 app[api]: Initial release by user rozplochowskipawel9@gmail.com
2020-11-30T16:12:03.048409+00:00 app[api]: Release v1 created by user rozplochowskipawel9@gmail.com
2020-11-30T16:12:03.170216+00:00 app[api]: Release v2 created by user rozplochowskipawel9@gmail.com
2020-11-30T16:12:03.170216+00:00 app[api]: Enable Logplex by user rozplochowskipawel9@gmail.com
2020-11-30T16:12:22.236825+00:00 heroku[router]: at=info code=H81 desc="Blank app" method=GET path="/" host=fathomless-coast-67373.herokuapp.com request_id=49dbe3e9-8b50-49cc-94b0-9ec7fde34228 fwd="31.60.48.45" dyno= connect= service= status=502 bytes= protocol=https
2020-11-30T16:12:22.983607+00:00 heroku[router]: at=info code=H81 desc="Blank app" method=GET path="/favicon.ico" host=fathomless-coast-67373.herokuapp.com request_id=4b30a4cb-d870-42eb-9834-b5aaa006c1b5 fwd="31.60.48.45" dyno= connect= service= status=502 bytes= protocol=https
2020-11-30T16:13:57.834416+00:00 heroku[router]: at=info code=H81 desc="Blank app" method=GET path="/" host=pomoncikprofesora.herokuapp.com request_id=ca16ec59-2bf7-4741-a873-67096832462f fwd="31.60.48.45" dyno= connect= service= status=502 bytes= protocol=https
2020-11-30T16:16:16.950761+00:00 app[web.1]: Traceback (most recent call last):
2020-11-30T16:16:16.950764+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 209, in run
2020-11-30T16:16:16.951127+00:00 app[web.1]: self.sleep()
2020-11-30T16:16:16.951128+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 357, in sleep
2020-11-30T16:16:16.951411+00:00 app[web.1]: ready = select.select([self.PIPE[0]], [], [], 1.0)
2020-11-30T16:16:16.951412+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
2020-11-30T16:16:16.951616+00:00 app[web.1]: self.reap_workers()
2020-11-30T16:16:16.951617+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
2020-11-30T16:16:16.951902+00:00 app[web.1]: raise HaltServer(reason, self.APP_LOAD_ERROR)
2020-11-30T16:16:16.951941+00:00 app[web.1]: gunicorn.errors.HaltServer: <HaltServer 'App failed to load.' 4>
2020-11-30T16:16:16.951944+00:00 app[web.1]:
2020-11-30T16:16:16.951944+00:00 app[web.1]: During handling of the above exception, another exception occurred:
2020-11-30T16:16:16.951945+00:00 app[web.1]:
2020-11-30T16:16:16.951945+00:00 app[web.1]: Traceback (most recent call last):
2020-11-30T16:16:16.951949+00:00 app[web.1]: File "/app/.heroku/python/bin/gunicorn", line 8, in <module>
2020-11-30T16:16:16.952070+00:00 app[web.1]: sys.exit(run())
2020-11-30T16:16:16.952071+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 58, in run
2020-11-30T16:16:16.952219+00:00 app[web.1]: WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
2020-11-30T16:16:16.952220+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 228, in run
2020-11-30T16:16:16.952395+00:00 app[web.1]: super().run()
2020-11-30T16:16:16.952396+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run
2020-11-30T16:16:16.952520+00:00 app[web.1]: Arbiter(self).run()
2020-11-30T16:16:16.952521+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 229, in run
2020-11-30T16:16:16.952689+00:00 app[web.1]: self.halt(reason=inst.reason, exit_status=inst.exit_status)
2020-11-30T16:16:16.952691+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 342, in halt
2020-11-30T16:16:16.952897+00:00 app[web.1]: self.stop()
2020-11-30T16:16:16.952898+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 393, in stop
2020-11-30T16:16:16.953149+00:00 app[web.1]: time.sleep(0.1)
2020-11-30T16:16:16.953150+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
2020-11-30T16:16:16.953324+00:00 app[web.1]: self.reap_workers()
2020-11-30T16:16:16.953325+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 528, in reap_workers
2020-11-30T16:16:16.953633+00:00 app[web.1]: raise HaltServer(reason, self.APP_LOAD_ERROR)
2020-11-30T16:16:16.953634+00:00 app[web.1]: gunicorn.errors.HaltServer: <HaltServer 'App failed to load.' 4>
2020-11-30T16:16:17.052618+00:00 heroku[web.1]: Process exited with status 1
2020-11-30T16:16:17.117406+00:00 heroku[web.1]: State changed from starting to crashed
2020-11-30T17:01:23.850624+00:00 heroku[web.1]: State changed from crashed to starting
2020-11-30T17:01:30.144003+00:00 heroku[web.1]: Starting process with command `gunicorn app:PomocnikProfesora.py`
2020-11-30T17:01:33.283660+00:00 app[web.1]: [2020-11-30 17:01:33 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-11-30T17:01:33.291826+00:00 app[web.1]: [2020-11-30 17:01:33 +0000] [4] [INFO] Listening at: http://0.0.0.0:43889 (4)
2020-11-30T17:01:33.300878+00:00 app[web.1]: [2020-11-30 17:01:33 +0000] [10] [INFO] Booting worker with pid: 10
2020-11-30T17:01:33.318087+00:00 app[web.1]: [2020-11-30 17:01:33 +0000] [11] [INFO] Booting worker with pid: 11
2020-11-30T17:01:34.497407+00:00 app[web.1]: Failed to parse 'PomocnikProfesora.py' as an attribute name or function call.
2020-11-30T17:01:34.498343+00:00 app[web.1]: [2020-11-30 17:01:34 +0000] [11] [INFO] Worker exiting (pid: 11)
2020-11-30T17:01:34.520370+00:00 app[web.1]: Failed to parse 'PomocnikProfesora.py' as an attribute name or function call.
2020-11-30T17:01:34.521180+00:00 app[web.1]: [2020-11-30 17:01:34 +0000] [10] [INFO] Worker exiting (pid: 10)
2020-11-30T17:01:34.690436+00:00 app[web.1]: [2020-11-30 17:01:34 +0000] [4] [INFO] Shutting down: Master
2020-11-30T17:01:34.690611+00:00 app[web.1]: [2020-11-30 17:01:34 +0000] [4] [INFO] Reason: App failed to load.
2020-11-30T17:01:34.770227+00:00 heroku[web.1]: Process exited with status 4
2020-11-30T17:01:34.808969+00:00 heroku[web.1]: State changed from starting to crashed
</code></pre>
|
<p>As per the logs, your app crashed while loading:</p>
<pre><code>2020-11-30T16:21:28.904773+00:00 heroku[router]: at=error code=H10 desc="App crashed"
</code></pre>
<p>The error also says that gunicorn failed to parse <code>PomocnikProfesora.py</code> as an application reference:</p>
<pre><code>Failed to parse 'PomocnikProfesora.py' as an attribute name or function call
</code></pre>
<ol>
<li><p>Check that <code>PomocnikProfesora.py</code> runs correctly on its own.</p>
</li>
<li><p>Fix how gunicorn is invoked. Gunicorn expects <code>MODULE_NAME:VARIABLE_NAME</code>: the module name (the filename without the <code>.py</code> suffix) followed by the name of the WSGI callable. Assuming the Flask instance inside <code>PomocnikProfesora.py</code> is named <code>app</code>, the Procfile should read:</p>
<pre><code>web: gunicorn PomocnikProfesora:app
</code></pre>
</li>
</ol>
|