QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,211,183
| 7,114,235
|
What does Pydantic ORM mode exactly do?
|
<p>According to the <a href="https://docs.pydantic.dev/usage/models/#orm-mode-aka-arbitrary-class-instances" rel="noreferrer">docs</a>, Pydantic "ORM mode" (enabled with <code>orm_mode = True</code> in <code>Config</code>) is needed to enable the <code>from_orm</code> method in order to create a model instance by reading attributes from another class instance. If ORM mode is not enabled, the <code>from_orm</code> method raises an exception.</p>
<p>My questions are:</p>
<ol>
<li>Are there any other effects (in functionality, performance, etc.) in enabling ORM mode?</li>
<li>If not, why is it an opt-in feature?</li>
</ol>
|
<python><pydantic>
|
2023-01-23 14:51:54
| 1
| 1,504
|
rrobby86
|
75,211,035
| 62,898
|
When using subprocess.run, python script fails to find module
|
<p>I have a python script that launches several other scripts inside PowerShell.
Every script works fine when I run it with python in PowerShell,
but when I use subprocess.run inside the master script it does not work.</p>
<p>This is how I invoke the scripts:</p>
<pre><code>subprocess.run(["powershell", "-Command", self.ScriptPath], capture_output=False)
</code></pre>
<p>The specific script has the following imports at the top</p>
<pre><code>import os
import sys
from datetime import datetime
import time
import pandas as pd
import pandas_ta
import requests
</code></pre>
<p>The one that fails is the pandas_ta import, with the following error:</p>
<pre><code>Traceback (most recent call last):
File "XXX", line 7, in <module>
import pandas_ta as pandasTA
ModuleNotFoundError: No module named 'pandas_ta'
</code></pre>
<p>The pandas import works, which is very strange.
Any thoughts?</p>
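A common cause of this symptom is that the `python` PowerShell resolves inside the subprocess is a different interpreter (with different site-packages) than the one running the master script. A sketch of a workaround: launch the child script with the exact same interpreter via `sys.executable` (`child.py` below is a hypothetical stand-in for `self.ScriptPath`):

```python
import subprocess
import sys

script_path = "child.py"  # hypothetical; stands in for self.ScriptPath

# sys.executable is the interpreter running the master script, so the
# child resolves the same site-packages (where pandas_ta is installed)
cmd = [sys.executable, script_path]
# subprocess.run(cmd, capture_output=False)  # uncomment to actually launch
print(cmd)
```

Printing `sys.executable` from both the master script and the failing child is a quick way to confirm whether two different environments are in play.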
|
<python><subprocess>
|
2023-01-23 14:39:29
| 0
| 5,554
|
Amit Raz
|
75,210,946
| 10,689,857
|
Python requests how to know what parameter to pass
|
<p>I have the following code. I am trying to access to <a href="https://api.github.com/users/jtorre94" rel="nofollow noreferrer">https://api.github.com/users/jtorre94</a> via the requests library.</p>
<pre><code>import requests
api_url = "https://api.github.com/users"
response = requests.get(api_url, params={'login': 'jtorre94'})
response.json()
</code></pre>
<p>However, the response is something I do not recognize at all, as if it were not filtered by the jtorre94 parameter.</p>
<pre><code>[{'login': 'mojombo',
'id': 1,
'node_id': 'MDQ6VXNlcjE=',
'avatar_url': 'https://avatars.githubusercontent.com/u/1?v=4',
'gravatar_id': '',
'url': 'https://api.github.com/users/mojombo',
'html_url': 'https://github.com/mojombo',
'followers_url': 'https://api.github.com/users/mojombo/followers',
'following_url': 'https://api.github.com/users/mojombo/following{/other_user}',
'gists_url': 'https://api.github.com/users/mojombo/gists{/gist_id}',
'starred_url': 'https://api.github.com/users/mojombo/starred{/owner}{/repo}',
'subscriptions_url': 'https://api.github.com/users/mojombo/subscriptions',
'organizations_url': 'https://api.github.com/users/mojombo/orgs',
'repos_url': 'https://api.github.com/users/mojombo/repos',
'events_url': 'https://api.github.com/users/mojombo/events{/privacy}',
'received_events_url': 'https://api.github.com/users/mojombo/received_events',
'type': 'User',
'site_admin': False},
{'login': 'defunkt',
'id': 2,
'node_id': 'MDQ6VXNlcjI=',
'avatar_url': 'https://avatars.githubusercontent.com/u/2?v=4',
'gravatar_id': '',
'url': 'https://api.github.com/users/defunkt',
'html_url': 'https://github.com/defunkt',
'followers_url': 'https://api.github.com/users/defunkt/followers',
'following_url': 'https://api.github.com/users/defunkt/following{/...
</code></pre>
<p>How can I retrieve the json for username jtorre94?</p>
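The `/users` endpoint lists all users and ignores a `login` query parameter; a single user lives at `/users/<username>` instead. A sketch (the network call is left commented out here):

```python
import requests

username = "jtorre94"
api_url = f"https://api.github.com/users/{username}"

# response = requests.get(api_url)
# response.raise_for_status()
# user = response.json()  # a single dict whose 'login' is 'jtorre94'
print(api_url)
```

In general the endpoint's documentation, not a guessed query parameter, tells you whether filtering happens via the path or the query string.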
|
<python><python-requests>
|
2023-01-23 14:31:49
| 1
| 854
|
Javi Torre
|
75,210,845
| 4,909,923
|
Python: Recursively split strings when longer than max allowed characters, by the last occurrence of a delimiter found before max allowed characters
|
<p>I have a text transcript of dialogue, consisting of strings of variable length. The string lengths can be anywhere from a few characters to thousands of characters.</p>
<p>I want Python to transform the text so that any line is maximally <em>n</em> characters. To make the partitioning natural, I want to recursively partition the lines by the last occurrence of any of the delimiters <code>. </code>, <code>, </code>, <code>? </code>, <code>! </code>. For example, let's assume that the below 72-character string is above a threshold of 36 characters:</p>
<blockquote>
<p>This, is a long, long string. It is around(?) 72 characters! Pretty cool</p>
</blockquote>
<p>Since the string is longer than 36 characters, the function should recursively partition the string by the last occurrence of any delimiter within 36 characters. Recursively meaning that if the resulting partitioned strings are longer than 36 characters, they should also be split according to the same rule. In this case, it should result in a list like:</p>
<p><code>['This, is a long, long string. ', 'It is around(?) 72 characters! ', 'Pretty cool']</code></p>
<p>The list items are respectively 30, 31, and 11 characters. None were allowed to be over 36 characters long. Note that the partitions in this example do not occur at a <code>, </code> delimiter, because those weren't the last delimiters within the 36+ character threshold.</p>
<p>The partition sequence would've been something like:</p>
<pre><code>'This, is a long, long string. It is around(?) 72 characters! Pretty cool' # 72
['This, is a long, long string. ', 'It is around(?) 72 characters! Pretty cool'] # 30 + 42
['This, is a long, long string. ', 'It is around(?) 72 characters! ', ' Pretty cool'] # 30 + 31 + 11
</code></pre>
<p>In the odd situation that there are no delimiters in the string or resulting recursive partitions, the strings should be wrapped using something like <code>textwrap.wrap()</code> to max 36 characters, which produces a list which in the absence of delimiters would be:</p>
<pre><code>['There are no delimiters here so I am', ' partitioned at 36 characters'] # 36 + 29
</code></pre>
<p>I've tried to work out a Python algorithm to achieve this, but it has been difficult. I spent a long time in ChatGPT and couldn't get it to work despite many prompts.</p>
<p>Is there a Python module function that can achieve this already, or alternatively can you suggest a function that will solve this problem?</p>
<hr />
<p><strong>NB:</strong> Character count online tool: <a href="https://www.charactercountonline.com/" rel="nofollow noreferrer">https://www.charactercountonline.com/</a></p>
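For reference, a minimal recursive sketch of the rule described above (not a library function; `36` is the threshold from the example, and `textwrap.wrap` is the stated fallback when no delimiter exists):

```python
import textwrap

DELIMS = (". ", ", ", "? ", "! ")

def split_max(s, n=36):
    """Split s so no piece exceeds n chars, cutting at the last
    delimiter that ends within the first n characters."""
    if len(s) <= n:
        return [s]
    # latest position where some delimiter ends inside s[:n]
    cut = max(
        (s.rfind(d, 0, n) + len(d) for d in DELIMS if s.rfind(d, 0, n) != -1),
        default=-1,
    )
    if cut <= 0:
        # no delimiter found: fall back to a hard wrap
        return textwrap.wrap(s, n)
    return [s[:cut]] + split_max(s[cut:], n)

text = "This, is a long, long string. It is around(?) 72 characters! Pretty cool"
print(split_max(text))
```

On the 72-character example this produces the three pieces of 30, 31, and 11 characters shown above. Note that `textwrap.wrap` strips whitespace at wrap points, so the no-delimiter fallback may differ slightly from the hand-written example.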
|
<python><string><recursion><split>
|
2023-01-23 14:23:40
| 1
| 5,942
|
P A N
|
75,210,554
| 3,764,619
|
Importing from folders with the same name gives ImportError in PyCharm
|
<p>Given the following project structure:</p>
<pre><code>test/
data/
__init__.py
a/
data/
__init__.py
main.py
__init__.py
</code></pre>
<p>In <code>test/data/__init__.py</code></p>
<pre><code>from pathlib import Path
DATA_DIR = Path(__file__).parent
</code></pre>
<p>In <code>main.py</code></p>
<pre><code>from data import DATA_DIR
if __name__ == "__main__":
print(DATA_DIR)
</code></pre>
<p>When running from the terminal, it works fine. When running from PyCharm, it gives the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/<USER>/code/test/a/main.py", line 1, in <module>
from data import DATA_DIR
ImportError: cannot import name 'DATA_DIR' from 'data'
(/Users/<USER>/code/test/a/data/__init__.py)
</code></pre>
<p>Has anyone found a way to get around this?</p>
|
<python><pycharm>
|
2023-01-23 13:59:27
| 2
| 680
|
Casper Lindberg
|
75,210,489
| 821,995
|
How to get a consistent WeakSet length during finalization?
|
<p>Consider the following code which uses a <code>WeakSet</code> at the same time as a finalizer:</p>
<pre class="lang-py prettyprint-override"><code>>>> import weakref
>>> import gc
>>> class A:
... pass
>>> class Parent:
... def __init__(self, a):
... self.a = a
>>> ws = weakref.WeakSet()
>>> def finalizer(a):
... print(f"Finalizing {a}")
... print(f"Contents of the WeakSet: {ws}")
... print(f"List of elements in the WeakSet: {list(ws)}")
... print(f"Length of the WeakSet: {len(ws)}")
</code></pre>
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>>>> a = A()
>>> p = Parent(a)
>>> ws.add(p)
>>> weakref.finalize(p, finalizer, a)
<finalize object at 0x1c2ca0d7060; for 'Parent' at 0x1c2ca0aff40>
>>> del p
>>> gc.collect()
Finalizing <__main__.A object at 0x000001C2CA239310>
Contents of the WeakSet: {<weakref at 0x000001C2CA270090; dead>}
List of elements in the WeakSet: []
Length of the WeakSet: 1
</code></pre>
<p>When swapping the creation of the finalizer and the addition to the <code>WeakSet</code>, on the other hand:</p>
<pre class="lang-py prettyprint-override"><code>>>> a = A()
>>> p = Parent(a)
>>> weakref.finalize(p, finalizer, a)
<finalize object at 0x1c2ca0d7060; for 'Parent' at 0x1c2ca0aff40>
>>> ws.add(p)
>>> del p
>>> gc.collect()
Finalizing <__main__.A object at 0x000001C2CA519370>
Contents of the WeakSet: set()
List of elements in the WeakSet: []
Length of the WeakSet: 0
</code></pre>
<p>Why are these results different? Is there a way to get a consistent value for <code>len(ws)</code> as the object is being finalized?</p>
|
<python><garbage-collection><weak-references>
|
2023-01-23 13:53:47
| 1
| 7,455
|
F.X.
|
75,210,362
| 14,787,964
|
Image artefacts when using cyclic colormaps for periodic data
|
<p>I am currently trying to visualize the phase of an electromagnetic field which is 2pi-periodic. To visualize that e.g. 1.9 pi is almost the same as 0, I am using a cyclic colormap (twilight). However, when I plot my images, there are always lines at the sections where the phase jumps from (almost) 2pi to 0. When you zoom in on these lines, these artefacts vanish.</p>
<p>Here is a simple script and example images that demonstrate this issue.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-3,3,501)
x,y = np.meshgrid(x,x)
data = x**2+y**2
data = np.mod(data, 2)
plt.set_cmap('twilight')
plt.imshow(data)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/ACeDFm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ACeDFm.png" alt="image with artefacts" /></a></p>
<p><a href="https://i.sstatic.net/vWxKkm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vWxKkm.png" alt="enter image description here" /></a></p>
<p>I tested it with "twilight_shifted" and "hsv" as well and got the same issue. The problem also occurs after saving the image via plt.savefig(). I also tried other image formats like svg but it did not change anything.</p>
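The lines are a resampling artefact rather than a data problem: when `imshow` shrinks the 501x501 array to screen pixels, neighbouring values just below 2 and just above 0 get averaged into a mid-range value, which a cyclic colormap renders as a visible seam; zooming in removes the averaging, so the lines vanish. A sketch of one way around it, assuming the non-interactive Agg backend here: pass `interpolation='nearest'` so no values are blended across the wraparound:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 501)
x, y = np.meshgrid(x, x)
data = np.mod(x**2 + y**2, 2)

# 'nearest' picks one sample per screen pixel instead of averaging
# data values across the 2 -> 0 wrap
plt.imshow(data, cmap="twilight", interpolation="nearest")
plt.savefig("phase.png", dpi=150)
```

On newer matplotlib (3.5+) another option is `interpolation_stage='rgba'`, which interpolates colors after the colormap is applied instead of interpolating the periodic data values.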
|
<python><matplotlib><colormap><cyclic>
|
2023-01-23 13:42:37
| 1
| 867
|
Oskar Hofmann
|
75,210,241
| 12,304,000
|
How to parse nested XML and extract attributes + tag text both?
|
<p>My XML looks like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<main_heading timestamp="20220113">
<details>
<offer id="11" new_id="12">
<level>1&amp;1</level>
<typ>Green</typ>
<name>Alpha</name>
<visits>
<name>DONT INCLUDE</name>
</visits>
</offer>
<offer id="12" new_id="31">
<level>1&amp;1</level>
<typ>Yellow</typ>
<name>Beta</name>
<visits>
<name>DONT INCLUDE</name>
</visits>
</offer>
</details>
</main_heading>
</code></pre>
<p>I want to parse certain fields into a dataframe.</p>
<p>Expected Output</p>
<pre><code>timestamp id new_id level name
20220113 11 12 1&amp;1 Alpha
20220113 12 31 1&amp;1 Beta
</code></pre>
<p>where NAME nested within the "visits" tag is not included. I just want to consider the outer "name" tag.</p>
<pre><code>timestamp = soup.find('main_heading').get('timestamp')
df['timestamp'] = timestamp
</code></pre>
<p>This solves one part.</p>
<p>The rest I can do like this:</p>
<pre><code>typ = []
for i in (soup.find_all('typ')):
typ.append(i.text)
</code></pre>
<p>but I don't want to create a separate for loop for every new field</p>
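A hedged alternative sketch using only the standard library's ElementTree (the same idea carries over to BeautifulSoup): `findtext` with a plain tag name only inspects direct children of `<offer>`, so the `<name>` nested inside `<visits>` is never touched, and one loop collects every field at once:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8" ?>
<main_heading timestamp="20220113">
  <details>
    <offer id="11" new_id="12">
      <level>1&amp;1</level><typ>Green</typ><name>Alpha</name>
      <visits><name>DONT INCLUDE</name></visits>
    </offer>
    <offer id="12" new_id="31">
      <level>1&amp;1</level><typ>Yellow</typ><name>Beta</name>
      <visits><name>DONT INCLUDE</name></visits>
    </offer>
  </details>
</main_heading>"""

root = ET.fromstring(xml_text)
rows = [
    {
        "timestamp": root.get("timestamp"),
        "id": offer.get("id"),
        "new_id": offer.get("new_id"),
        "level": offer.findtext("level"),  # direct child only
        "name": offer.findtext("name"),    # outer <name>, not the nested one
    }
    for offer in root.iter("offer")
]
print(rows)
```

`pd.DataFrame(rows)` then yields the expected table in one go, with no per-field loops.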
|
<python><python-3.x><xml><beautifulsoup><xml-parsing>
|
2023-01-23 13:33:23
| 4
| 3,522
|
x89
|
75,209,994
| 14,744,714
|
Date search and date output from the same class name when parsing
|
<p>I'm trying to parse a site. It has several identically named classes, and the number of such classes varies from page to page. I'm interested in the class that contains the date, which is written in the following pattern: <code>24 January 2020</code>.</p>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup
import re
from user_agent import generate_user_agent
import time
my_user_agent = generate_user_agent(os=('mac', 'linux'))
headers = {
"Accept": "*/*",
"user-agent": my_user_agent}
source = 'https://coronavirus-graph.ru/kanada'
req = requests.get(source, headers=headers)
html = req.content
soup = BeautifulSoup(html, 'lxml')
items_list = soup.find_all('div', class_='rr')
print(items_list)
[<div class="rr"><span>In Canada, ≈</span><b>11</b><span> people die out of </span><b>1000</b><span> infected ( 1.1%)</span></div>, <div class="rr"><b>99</b><span> a person in serious condition</span></div>, <div class="rr"><span>First person infected</span><b>24 January 2020</b></div>]
</code></pre>
<p>In this case, I'm interested in the last class and its value in <code><b></code>.
That is the date. How can I get it?</p>
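Since only one of the `rr` divs contains a date, one sketch is to match the `day Month year` pattern against the `<b>` contents (shown here directly on an HTML string so it runs without BeautifulSoup; with the soup object, the same pattern can be tested against `div.b.text` for each item in `items_list`):

```python
import re

html = (
    '<div class="rr"><span>First person infected</span>'
    "<b>24 January 2020</b></div>"
)

# "day MonthName year" inside a <b> tag
m = re.search(r"<b>(\d{1,2}\s+[A-Za-z]+\s+\d{4})</b>", html)
date = m.group(1) if m else None
print(date)
```

Matching on the date's shape rather than the class position also keeps the code working when the number of `rr` divs changes between pages.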
|
<python><html><parsing><beautifulsoup>
|
2023-01-23 13:09:20
| 2
| 717
|
kostya ivanov
|
75,209,817
| 12,304,000
|
filter non-nested tag values from XML
|
<p>I have an xml that looks like this.</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<main_heading timestamp="20220113">
<details>
<offer id="11" parent_id="12">
<name>Alpha</name>
<pos>697</pos>
<kat_pis>
<pos kat="2">112</pos>
</kat_pis>
</offer>
<offer id="12" parent_id="31">
<name>Beta</name>
<pos>099</pos>
<kat_pis>
<pos kat="2">113</pos>
</kat_pis>
</offer>
</details>
</main_heading>
</code></pre>
<p>I am parsing it using BeautifulSoup. Upon doing this:</p>
<pre><code>soup = BeautifulSoup(file, 'xml')
pos = []
for i in (soup.find_all('pos')):
pos.append(i.text)
</code></pre>
<p>I get a list of all <strong>POS</strong> tag values, even the ones that are nested within the tag <strong>kat_pis</strong>.</p>
<p>So I get (697, 112, 099, 113).</p>
<p>However, I only want to get the POS values of the non-nested tags.</p>
<p><strong>Expected desired result is (697, 099).</strong></p>
<p>How can I achieve this?</p>
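A sketch with the standard library's ElementTree showing the direct-children idea (with BeautifulSoup the analogous tool would be `offer.find('pos', recursive=False)`): `findtext('pos')` on each `<offer>` only looks at direct children, so the `<pos>` nested inside `<kat_pis>` is skipped:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8" ?>
<main_heading timestamp="20220113">
  <details>
    <offer id="11" parent_id="12">
      <name>Alpha</name><pos>697</pos>
      <kat_pis><pos kat="2">112</pos></kat_pis>
    </offer>
    <offer id="12" parent_id="31">
      <name>Beta</name><pos>099</pos>
      <kat_pis><pos kat="2">113</pos></kat_pis>
    </offer>
  </details>
</main_heading>"""

root = ET.fromstring(xml_text)
# a bare tag name as the path matches direct children of <offer> only
pos = [offer.findtext("pos") for offer in root.iter("offer")]
print(pos)
```

Note the values come back as strings (`"099"` keeps its leading zero), which is usually what you want for identifiers.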
|
<python><python-3.x><xml><beautifulsoup>
|
2023-01-23 12:53:46
| 2
| 3,522
|
x89
|
75,209,722
| 5,353,753
|
Extract phone numbers from email-signature (regex)
|
<p>I'm trying to parse an email signature, that can contain multiple phone numbers in different formats. I've managed to come up with this Regex:</p>
<pre><code>(?<![%_])\b(\+?\d{1,}[\s.-]?\(?\d{1,}\)?[\s.-]?\(?\d{1,}\)?[\s.-]?\d{1,}[\s.-]?\d{1,}[\s.-]?\d{3,})
</code></pre>
<p>This matches almost any phone number. (The reason there are many digit classes is that the number can arrive like this: <code>+86 21 3387 6 532</code>.) I need two things though.</p>
<p>1. I don't know how to force the match to come from a single line. For example:</p>
<pre><code>Newtown Square, PA 19073
360.751.1471 Some Other Text Here
</code></pre>
<p>The regex matches <code>19073 360.751.1471</code> instead of <code>360.751.1471</code>.</p>
<p>2. There are different types of numbers, and I want to catch each type with a different regex, i.e. Cell, Fax, Office, Mobile, etc. Let's focus on <code>Mobile</code>. It can come in different formats each time, like the following (I want to catch them all with the same regex):</p>
<pre><code>M: 360.123.1471
M 360.123.1471
(M): 360.123.1471
(M):360.123.1471
M:360.123.1471
</code></pre>
<p>Every prefix can be either lower or upper case.</p>
<p>The reason I need to capture them by the prefix, is because I get signatures such as this:</p>
<pre><code>Jane Doe
Ocean Export Agent
Some Company, Inc.
Celebrating 100 years!
p:
410-123-3 123 m: 410-123-1234
a:
111 Cromwell Park Drive, Glen Burnie, MD 21061
w:
website.com e: janedoe@somecompany.com
</code></pre>
<p>And I want to capture the mobile number out of it.</p>
<p>How can I change my regex to solve both cases?</p>
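A hedged sketch for the Mobile case: anchor on the `m`/`M` prefix (optionally parenthesised, optional colon), and keep `\n` out of the number's character class so a match can never spill across lines, which also addresses point 1 for prefixed numbers. The character classes may need tuning for other signature styles:

```python
import re

MOBILE_RE = re.compile(
    r"(?<![A-Za-z])"          # the 'm' must not be part of a word
    r"\(?[Mm]\)?\s*:?\s*"     # M:, m:, (M):, (M): , or bare M
    r"(\+?\d[\d .()-]*\d)"    # the number; no \n, so it stays on one line
)

samples = [
    "M: 360.123.1471",
    "M 360.123.1471",
    "(M):360.123.1471",
    "p:\n410-123-3 123 m: 410-123-1234\na:",
]
for s in samples:
    print(MOBILE_RE.search(s).group(1))
```

Duplicating the pattern with `[Cc]`, `[Ff]`, `[Pp]`, etc. in place of `[Mm]` gives one regex per number type, which keeps each capture unambiguous even in dense multi-number signatures like the one above.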
|
<python><regex><phone-number>
|
2023-01-23 12:45:12
| 2
| 40,569
|
sagi
|
75,209,609
| 2,179,795
|
Selenium Python: Not finding element which clearly exists
|
<p>I am trying to click through the levels of a site's navigation using python and selenium. The navbar contains list items that have subelements within them.</p>
<p>Here is the html of the navbar:</p>
<p><a href="https://i.sstatic.net/UrbyL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UrbyL.png" alt="enter image description here" /></a></p>
<p>The objective here is to find the element with id="ts_time", to hover over it and to click on the element within it.</p>
<p>So far I have tried the following selection types:</p>
<ul>
<li>ID</li>
<li>XPath</li>
<li>Class_Name</li>
</ul>
<p>Here is the ID.</p>
<pre><code>time_menu_button = driver.find_element(By.ID, "ts_time")
ActionChains(driver).move_to_element(time_menu_button)
time.sleep(2.5)
</code></pre>
<p>This results in a <code>NoSuchElementException</code></p>
|
<python><selenium><selenium-webdriver><mousehover><nosuchelementexception>
|
2023-01-23 12:36:17
| 1
| 1,247
|
Merv Merzoug
|
75,209,473
| 1,574,551
|
How to convert list columns into single columns and then concat in pandas dataframe
|
<p>I would like to convert each list column into one single column and then concat, for the below dataframe:</p>
<pre><code> data = {'labels': ['[management,workload,credibility]','[ethic,hardworking,profession]'],
'Score': [[0.55,0.36,0.75],[0.41,0.23,0.14]]}
# Create DataFrame
df = pd.DataFrame(data)
</code></pre>
<p>new dataframe output should look like this</p>
<pre><code> labels Score
management 0.55
workload 0.36
credibility 0.75
ethic 0.41
hardworking 0.23
profession 0.14
</code></pre>
<p>Thank you</p>
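Note that the labels column above holds bracketed strings, not lists, so one sketch (assuming pandas 1.3+ for multi-column `explode`) first turns them into real lists, then explodes both columns together:

```python
import pandas as pd

data = {
    "labels": ["[management,workload,credibility]", "[ethic,hardworking,profession]"],
    "Score": [[0.55, 0.36, 0.75], [0.41, 0.23, 0.14]],
}
df = pd.DataFrame(data)

# turn the bracketed strings into real lists, then explode both
# columns in lockstep (list-of-columns explode needs pandas >= 1.3)
df["labels"] = df["labels"].str.strip("[]").str.split(",")
out = df.explode(["labels", "Score"], ignore_index=True)
print(out)
```

Exploding both columns in one call keeps each label aligned with its score; exploding them separately would produce a cross product instead.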
|
<python><pandas><list>
|
2023-01-23 12:25:03
| 1
| 1,332
|
melik
|
75,209,432
| 13,285,583
|
Cannot import tensorflow after a week of no use, no changes were made
|
<p>What I did:</p>
<ol>
<li>uninstall and <code>pip install tensorflow-macos</code></li>
<li>uninstall and <code>pip install tensorflow-metal</code></li>
<li><code>import tensorflow as tf</code></li>
</ol>
<p>The expected result is no error.</p>
<p>However, I got an error after importing tensorflow.</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import tensorflow as tf
3 # from keras import datasets, layers, models
4 # import matplotlib.pyplot as plt
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/__init__.py:37
34 import sys as _sys
35 import typing as _typing
---> 37 from tensorflow.python.tools import module_util as _module_util
38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/__init__.py:45
43 from tensorflow.python import distribute
44 # from tensorflow.python import keras
---> 45 from tensorflow.python.feature_column import feature_column_lib as feature_column
46 # from tensorflow.python.layers import layers
47 from tensorflow.python.module import module
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/feature_column/feature_column_lib.py:18
15 """FeatureColumns: tools for ingesting and representing features."""
17 # pylint: disable=unused-import,line-too-long,wildcard-import,g-bad-import-order
---> 18 from tensorflow.python.feature_column.feature_column import *
...
(...)
3248 :returns: The PKCS12 object
3249 """
TypeError: deprecated() got an unexpected keyword argument 'name'
</code></pre>
|
<python><python-3.x><tensorflow>
|
2023-01-23 12:21:02
| 0
| 2,173
|
Jason Rich Darmawan
|
75,209,418
| 7,048,760
|
How to make flask app with projects written in two different Python versions work in endpoints?
|
<p>I’m building a flask api in which I have to use two different Python projects written in 2 different versions. One is a Python project built on version 3.8+, and another is a proprietary package compatible for version <3.7. I get errors if I use a single version and try to run it, and fixing errors would be too much and potentially break the system behavior. The flask api runs on docker. What are my options to make both Python projects work on the single flask app?</p>
|
<python><flask><docker-compose>
|
2023-01-23 12:19:41
| 1
| 865
|
Kuni
|
75,209,415
| 5,510,713
|
Unable to decode Aztec barcode
|
<p>I'm trying to decode an Aztec barcode using the following script:</p>
<pre><code>import zxing
reader = zxing.BarCodeReader()
barcode = reader.decode("test.png")
print(barcode)
</code></pre>
<p><strong>Here is the input image:</strong></p>
<p><a href="https://i.sstatic.net/FTlJG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FTlJG.png" alt="enter image description here" /></a></p>
<p><strong>Following is the output:</strong></p>
<blockquote>
<p>BarCode(raw=None, parsed=None,
path='/Users/dhiwatdg/Desktop/test.png', format=None, type=None,
points=None)</p>
</blockquote>
<p>I'm sure it is a valid Aztec barcode, but the script is not able to decode it.</p>
|
<python><image-processing><zxing><barcode-scanner><aztec-barcode>
|
2023-01-23 12:19:25
| 2
| 776
|
DhiwaTdG
|
75,209,261
| 10,667,216
|
How to install a Python package inside a docker image?
|
<p>Is there a way to install a python package without rebuilding the docker image? I have tried it this way:</p>
<pre><code>docker compose run --rm web sh -c "pip install requests"
</code></pre>
<p>but if I list the packages using</p>
<pre><code>docker-compose run --rm web sh -c "pip freeze"
</code></pre>
<p>I don't get the new one.
It looks like that is installed in the container but not in the image.</p>
<p>My question is what is the best way to install a new python package after building the docker image?
Thanks in advance</p>
|
<python><docker>
|
2023-01-23 12:06:43
| 2
| 483
|
Davood
|
75,209,161
| 100,214
|
Does python ecdsa NIST-256p provide recovery code (byte)?
|
<p>I am using the python <code>ecdsa</code> library for signing messages for blockchain transactions. In the blockchain specification it says, for <code>secp256r1</code> that the signature should have a length of 65 bytes where:</p>
<blockquote>
<p>The signature must be of length 65 bytes in the form of [r, s, v] where the first 32 bytes are r, the second 32 bytes are s and the last byte is v.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>The v represents the recovery ID, which must be normalized to 0, 1, 2 or 3. Note that unlike EIP-155 chain ID is not used to calculate the v value</p>
</blockquote>
<p>I consistently get the first 64 bytes using:
<code>sign_deterministic(data, hashfunc=hashlib.sha256)</code></p>
<p>But I am not sure where <code>ecdsa</code> holds the <code>v</code> byte, or how I can compute it.</p>
<p>I see in the source code (Rust fastcrypto) used on the blockchain it is doing:</p>
<pre class="lang-rust prettyprint-override"><code>// Compute recovery id and normalize signature
let is_r_odd = y.is_odd();
let is_s_high = sig.s().is_high();
let is_y_odd = is_r_odd ^ is_s_high;
let sig_low = sig.normalize_s().unwrap_or(sig);
let recovery_id = RecoveryId::new(is_y_odd.into(), false);
</code></pre>
<p>But in <code>ecdsa</code> all that is hidden behind the signing function noted above.</p>
|
<python><python-3.x><ecdsa><ecdsasignature>
|
2023-01-23 11:57:02
| 1
| 8,185
|
Frank C.
|
75,209,065
| 5,775,358
|
xarray dataset extract values select
|
<p>I have a xarray dataset from which I would like to extract points based on their coordinates. When <code>sel</code> is used for two coordinates it returns a 2D array. Sometimes this is what I want and it is the intended behavior, but I would like to extract a line from the dataset.</p>
<pre><code>import xarray as xr
import numpy as np
ds = xr.Dataset(
{'data': (('y', 'x'), np.linspace(1, 9, 9).reshape(3, 3))},
coords={
'x': [0, 1, 2],
'y': [0, 1, 2]
}
)
"""
<xarray.Dataset>
Dimensions: (x: 3, y: 3)
Coordinates:
* x (x) int32 0 1 2
* y (y) int32 0 1 2
Data variables:
data (y, x) float64 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0
"""
xx = np.array([0, 1])
yy = np.array([1, 2])
print(ds.sel(x=xx, y=yy).data.values)
"""
[[4. 5.]
[7. 8.]]
"""
[ds.sel(x=x, y=y).data.item() for x, y in zip(xx, yy)]
"""
[4.0, 8.0]
"""
</code></pre>
<p>The example is given for <code>sel</code>. Ideally I would like to use the <code>interp</code> option of the dataset in the same way.</p>
<pre><code>xx = np.array([0.25, 1.25])
yy = np.array([0.75, 1.75])
ds.interp(x=xx, y=yy).data.values
"""
array([[3.5, 4.5],
[6.5, 7.5]])
"""
[ds.interp(x=x, y=y).data.item() for x, y in zip(xx, yy)]
"""
[3.5, 7.5]
"""
</code></pre>
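Point-wise ("vectorized") selection is done by wrapping the indexers in `DataArray`s that share a dimension; `sel` then walks the points pairwise instead of forming the outer product. A sketch:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"data": (("y", "x"), np.linspace(1, 9, 9).reshape(3, 3))},
    coords={"x": [0, 1, 2], "y": [0, 1, 2]},
)

# indexers sharing the 'points' dim -> pairwise selection, not a 2x2 grid
xx = xr.DataArray(np.array([0, 1]), dims="points")
yy = xr.DataArray(np.array([1, 2]), dims="points")
line = ds.sel(x=xx, y=yy).data.values
print(line)
```

The same pattern works with `ds.interp(x=xx, y=yy)` for the interpolated line (giving `[3.5, 7.5]` for the fractional coordinates above), assuming scipy is installed for the interpolation backend.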
|
<python><arrays><slice><interpolation><python-xarray>
|
2023-01-23 11:47:33
| 1
| 2,406
|
3dSpatialUser
|
75,209,058
| 7,800,760
|
Utterly lost importing functions for pytest
|
<p>I know there are several pytest and importing questions and answers here but I'm not able to find a solution :(</p>
<p>PS: After rereading import tutorials I have changed my project folder structure a bit and edited the examples below.</p>
<p>My simple directory structure for a demo python project is as follows:</p>
<pre><code>.
├── src
│ └── addsub
│ ├── __init__.py
│ └── addsub.py
└── tests
├── test_add.py
└── test_subtract.py
</code></pre>
<p>src/addsub/addsub.py has the following content:</p>
<pre><code>""" a demo module """
def add(a, b):
""" sum two numbers or concat two strings """
return a + b
def subtract(a: int, b:int) -> int:
""" subtract b from a --- bugged for the demo """
return a + b
def main():
print(f"Adding 3 to 4 makes: {add(3,4)}")
print(f"Subtracting 4 from 3 makes: {subtract(3,4)}")
if __name__ == "__main__":
main()
</code></pre>
<p>while my test files are under the tests directory with test_add.py having the following content:</p>
<pre><code>from src.addsub import add
def test_add():
assert add(2, 3) == 5
assert add('space', 'ship') == 'spaceship'
</code></pre>
<p>Now when I run pytest from the project's root directory I get the following errors:</p>
<pre><code>====================================================== test session starts ======================================================
platform darwin -- Python 3.8.15, pytest-7.2.1, pluggy-1.0.0
rootdir: /Users/bob/Documents/work/code/ci-example
plugins: anyio-3.6.2
collected 0 items / 1 error
============================================================ ERRORS =============================================================
______________________________________________ ERROR collecting tests/test_add.py _______________________________________________
ImportError while importing test module '/Users/bob/Documents/work/code/ci-example/tests/test_add.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/Users/bob/opt/miniconda3/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_add.py:1: in <module>
from src.addsub import add
E ModuleNotFoundError: No module named 'src'
==================================================== short test summary info ====================================================
ERROR tests/test_add.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================= 1 error in 0.06s ========================================================
</code></pre>
<p>Thanks in advance for any clarification.</p>
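One minimal sketch of a fix (a judgment call; the more standard route is packaging the project with a `pyproject.toml` and `pip install -e .`, so `addsub` imports without any path tricks) is a `conftest.py` at the project root that puts `src/` on `sys.path` before pytest collects tests. Tests then import `from addsub.addsub import add` rather than `from src.addsub import add`:

```python
# conftest.py at the project root (next to src/ and tests/)
import sys
from pathlib import Path

# the directory containing this conftest.py is the project root;
# prepending its src/ folder makes `addsub` importable during collection
ROOT = Path(__file__).resolve().parent
sys.path.insert(0, str(ROOT / "src"))
```

pytest imports the root `conftest.py` before collecting any test module, which is why the path change takes effect in time.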
|
<python><pytest>
|
2023-01-23 11:46:40
| 1
| 1,231
|
Robert Alexander
|
75,208,912
| 10,967,961
|
Grouping together lists in pandas
|
<p>I have a database of patents citing other patents looking like this:</p>
<pre><code>{'index': {0: 0, 1: 1, 2: 2, 12: 12, 21: 21},
'docdb_family_id': {0: 57904406,
1: 57904406,
2: 57906556,
12: 57909419,
21: 57942222},
'cited_docdbs': {0: [15057621,
16359315,
18731820,
19198211,
19198218,
19198340,
19550248,
19700609,
20418230,
22144166,
22513333,
22800966,
22925564,
23335606,
23891186,
25344297,
25345599,
25414615,
25495423,
25588955,
26530649,
27563473,
34277948,
36626718,
38801947,
40454852,
40885675,
40957530,
41249600,
41377563,
41378429,
41444278,
41797413,
42153280,
42340085,
42340086,
42678557,
42709962,
42709963,
42737942,
43648036,
44691991,
44947081,
45352855,
45815534,
46254922,
46382961,
47830116,
49676686,
49912209,
54191614],
1: [15057621,
16359315,
18731820,
19198211,
19198218,
19198340,
19550248,
19700609,
20418230,
22144166,
22513333,
22800966,
22925564,
23335606,
23891186,
25344297,
25345599,
25414615,
25495423,
25588955,
26530649,
27563473,
34277948,
36626718,
38801947,
40454852,
40885675,
40957530,
41249600,
41377563,
41378429,
41444278,
41797413,
42153280,
42340085,
42340086,
42678557,
42709962,
42709963,
42737942,
43648036,
44691991,
44947081,
45352855,
45815534,
46254922,
46382961,
47830116,
49676686,
49912209,
54191614],
2: [6078355,
8173164,
14235835,
16940834,
18152411,
18704525,
27343995,
45467248,
46172598,
49878759,
50995553,
52668238],
12: [6293366,
7856452,
16980051,
23177359,
26477802,
27453602,
41135094,
53004244,
54332594,
55018863],
21: [7913900,
13287798,
18834564,
23971781,
26904791,
27304292,
29720924,
34622252,
35197847,
37766575,
39873073,
42075013,
44508652,
44530218,
45571357,
48222848,
48747089,
49111776,
49754218,
50024241,
50474222,
50545849,
52580625,
58800268]},
'doc_std_name': {0: 'SEEO INC',
1: 'BOSCH GMBH ROBERT',
2: 'SAMSUNG SDI CO LTD',
12: 'NAGAI TAKAYUKI',
21: 'SAMSUNG SDI CO LTD'}}
</code></pre>
<p>Now, what I would like to do is perform a groupby by firm as follows:</p>
<pre><code>df_grouped_byfirm=data_min.groupby("doc_std_name").agg(publn_nrs=('docdb_family_id',"unique")).reset_index()
</code></pre>
<p>but merging together the lists of cited_docdbs. So, for instance in the example above, for SAMSUNG SDI CO LTD the final list of cited_docdbs should become a mega list where all the cited docdbs of both ids of SAMSUNG SDI CO LTD are merged together:</p>
<pre><code>[6078355,
8173164,
14235835,
16940834,
18152411,
18704525,
27343995,
45467248,
46172598,
49878759,
50995553,
52668238,
7913900,
13287798,
18834564,
23971781,
26904791,
27304292,
29720924,
34622252,
35197847,
37766575,
39873073,
42075013,
44508652,
44530218,
45571357,
48222848,
48747089,
49111776,
49754218,
50024241,
50474222,
50545849,
52580625,
58800268]
</code></pre>
<p>Thank you</p>
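A sketch of one way, shown on a small stand-in frame: an explicit lambda concatenates the per-row lists inside each group (deduplication could be added with `dict.fromkeys` if the same citation may appear under several family ids):

```python
import pandas as pd

df = pd.DataFrame({
    "doc_std_name": ["SAMSUNG SDI CO LTD", "NAGAI TAKAYUKI", "SAMSUNG SDI CO LTD"],
    "docdb_family_id": [57906556, 57909419, 57942222],
    "cited_docdbs": [[6078355, 8173164], [6293366], [7913900, 8173164]],
})

merged = (
    df.groupby("doc_std_name")
      .agg(
          publn_nrs=("docdb_family_id", "unique"),
          # sum(lists, []) concatenates the lists in row order
          cited_docdbs=("cited_docdbs", lambda s: sum(s, [])),
      )
      .reset_index()
)
print(merged)
```

`sum(s, [])` is quadratic in the number of lists; for very large groups, `itertools.chain.from_iterable` inside the lambda is the faster equivalent.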
|
<python><pandas><list>
|
2023-01-23 11:32:43
| 2
| 653
|
Lusian
|
75,208,906
| 3,815,773
|
How do I catch the PyQt5 QSplitter.resizeEvent?
|
<p>I can catch the user resizing or repositioning the window simply by defining functions:</p>
<pre><code>def resizeEvent(self, event): ...
def moveEvent(self, event): ...
</code></pre>
<p>But I also have 2 QSplitters, and I would like to know any new split position the user has applied, to make it the default for the next start. QSplitter also has a resizeEvent(), but I can't catch it, because it is already taken by the apparently higher-ranking functions above.</p>
<p>How can I get hold of QSplitter's resizeEvent?</p>
<p>I now use as a workaround <code>def paintEvent(self, event):...</code> which does allow me to get the needed info, but it feels a bit clumsy :-/</p>
|
<python><events><pyqt5><qsplitter>
|
2023-01-23 11:32:23
| 0
| 505
|
ullix
|
75,208,629
| 9,929,725
|
Odoo - Return record lists on form view
|
<p>I use Odoo 14 Enterprise.
From a wizard form, I want to return a form view whose records are restricted by a domain.</p>
<p>If I use a tree view, this action works fine. But I want to show the result directly in a form view.</p>
<p>Can you help me ?</p>
<p>My code :</p>
<pre><code>return {
'type': 'ir.actions.act_window',
'views': [(False, 'tree'), (False, 'form')],
'res_model': 'annual.control.base',
'domain': [('id', 'in', [174, 175])],
'target': 'current',
# 'context': ctx,
}
</code></pre>
<p>Thanks !</p>
|
<python><odoo>
|
2023-01-23 11:08:09
| 1
| 321
|
PseudoWithK
|
75,208,539
| 7,658,051
|
Ansible: The error was: 'dict object' has no attribute 'stdout_lines', but changing `stdout_lines` to `results` does not solve the problem
|
<p>I am a beginner with ansible.</p>
<p>I have copied a playbook managed by third parties and extended it into the following one; then I run it in debug mode (<code>-vvv</code>) via</p>
<pre><code>/home/.../ansible-venv/bin/ansible-playbook -vvv playbooks/dump_data_into_csv.yml >/dump_data_into_csv.log 2>&1
</code></pre>
<p>The playbook runs fine when the snippet</p>
<pre><code>- hosts: myhosts
# - hosts:
# - myhost-010_c1
# - myhost-010_c3
# - myhost-014_c4
</code></pre>
<p>is uncommented and the snippet</p>
<pre><code>- hosts: myhosts
</code></pre>
<p>is commented.</p>
<p>When the case is the opposite, as you can read in the following code, running the playbook raises this error:</p>
<pre><code>Dump data to csv...
host-001_c1 failed: {
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to be in '/home/.../dump_data_into_csv.yml': line 64, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Dump data to csv\n ^ here\n"
}
</code></pre>
<p>where "line 64, column 7" is indicated by my comment into the playbook below.</p>
<p>I have searched on the internet this message</p>
<blockquote>
<p>The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'</p>
</blockquote>
<p>and from what I read on <a href="https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/#Why_this_error" rel="nofollow noreferrer">this article</a> I understand that this error is raised when one tries to access the <code>stdout_lines</code> object of a dictionary, which does not have that object, and which instead has the object <code>results</code>.</p>
<p>So I have substituted in my playbook</p>
<pre><code>{% if host.grep_output.stdout_lines %}
{% for line in host.grep_output.stdout_lines %}
</code></pre>
<p>with</p>
<pre><code>{% if host.grep_output.results %}
{% for line in host.grep_output.results %}
</code></pre>
<p>but the error shows up again.</p>
<p>What am I doing wrong?</p>
<pre><code>- hosts: myhosts
# - hosts:
#     - myhost-010_c1
#     - myhost-010_c3
#     - myhost-014_c4
  order: sorted
  gather_facts: False
  vars:
    current_date: "{{ '%Y-%m-%d' | strftime }}"
  tasks:

    - name: Grep over syslog
      become: true
      ansible.builtin.shell:
        cmd: |
          tomorrow_date=$(date '+%Y-%m-%d' -d "$input_date + 1 day")
          six_days_ago_date=$(date '+%Y-%m-%d' -d "$input_date - 6 days")
          last_seven_days_syslogs=$(find /var/log/ -type f -name "syslog*" -newermt "$six_days_ago_date" ! -newermt "$tomorrow_date" | sort)
          sudo zgrep -aHF "my_pattern" $last_seven_days_syslogs | cut -d'[' -f 1
      register: grep_output
      ignore_errors: true
      no_log: true

    - name: Dump data to csv
      # ^ this is line 64, column 7
      copy:
        dest: /tmp/data_{{ '%Y-%m-%d' | strftime }}.csv
        content: |
          {{ ['idm', 'idc', 'mese', 'giorno', 'orario'] | map('trim') | join(';') }}
          {% for host in hosts_list %}
          {% set idm = host.inventory_hostname.split('_')[0].split('-')[1] %}
          {% set idm_padded = '%03d' % idm|int %}
          {% set idm_padded = '"' + idm_padded + '"' %}
          {% set idc = host.inventory_hostname.split('_')[1].upper() %}
          {% if host.grep_output.stdout_lines %}
          {% for line in host.grep_output.stdout_lines %}
          {% set month = line.strip().split(':')[1].replace('  ', ' ').split(' ')[0] %}
          {% set day = line.strip().replace('  ', ' ').split(' ')[1] %}
          {% set time = line.strip().replace('  ', ' ').split(' ')[2] %}
          {{ [idm_padded, idc, month, day, time] | map('trim') | join(';') }}
          {% endfor %}
          {% endif %}
          {% endfor %}
      vars:
        hosts_list: "{{ ansible_play_hosts | map('extract', hostvars) | list }}"
      delegate_to: localhost
      register: csv_content
      run_once: yes
</code></pre>
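<p>A sketch of the likely cause, demonstrated with the jinja2 library outside Ansible: hosts whose task failed or was skipped simply have no <code>stdout_lines</code> key in their registered variable, which is exactly what raises the "dict object has no attribute" error, while <code>results</code> only exists on looped tasks, so renaming does not help. Guarding with the <code>default</code> filter makes the template tolerant:</p>

```python
from jinja2 import Environment

# The `default([])` filter substitutes an empty list when stdout_lines
# is missing on a host, instead of raising an "undefined" error.
env = Environment()
template = env.from_string(
    "{% for host in hosts %}"
    "{% for line in host.grep_output.stdout_lines | default([]) %}{{ line }};{% endfor %}"
    "{% endfor %}"
)

hosts = [
    {"grep_output": {"stdout_lines": ["Jan 1 10:00:00", "Jan 2 11:00:00"]}},
    {"grep_output": {"failed": True}},  # a host where grep registered no stdout_lines
]
print(template.render(hosts=hosts))
```

<p>In the playbook itself the equivalent change is <code>{% if host.grep_output.stdout_lines | default([]) %}</code> and the matching inner <code>for</code>.</p>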
|
<python><bash><dictionary><ansible>
|
2023-01-23 10:59:48
| 0
| 4,389
|
Tms91
|
75,208,468
| 16,250,224
|
Creating cumulative product with reset to 1 at each end of the month
|
<p>I am trying to create a new column <code>CumReturn</code> in a DataFrame <code>df</code> with the cumulative product over the month. I am trying to reset the <code>cumprod()</code> to 1 at the end of each month (if EndMonth == 1) and start anew with the cumulative product.</p>
<pre><code>df:
Date EndMonth ID1 Return
2023-01-30 0 A 0.95
2023-01-30 0 B 0.98
2023-01-31 1 A 1.01
2023-01-31 1 B 1.02
2023-02-01 0 A 1.05
2023-02-01 0 B 0.92
2023-02-02 0 A 0.97
2023-02-02 0 B 0.99
</code></pre>
<p>I tried to do it with: <code>df['CumReturn'] = np.where(df['EndMonth'] == 1, 1, df['Return'].groupby('ID1').cumprod())</code></p>
<p>When I do that, I get for <code>2023-02-02</code> the cumulative product over the whole period and not only since the start of February.</p>
<p>For reproducability:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'Date':['2023-01-30', '2023-01-30', '2023-01-31', '2023-01-31', '2023-02-01', '2023-02-01', '2023-02-02', '2023-02-02'],
'EndMonth':[0, 0, 1, 1, 0, 0, 0, 0],
'ID1':['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B'],
'Return':[0.95, 0.98, 1.01, 1.02, 1.05, 0.92, 0.97, 0.99]})
df1 = df1.set_index('Date')
</code></pre>
<p>Many thanks!</p>
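<p>One approach that could work: within each <code>ID1</code>, shift the <code>EndMonth</code> flag down one row and take its running sum, so every row after a month end lands in a new block; then take <code>cumprod</code> per (ID1, block). A sketch on the question's data:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'Date': ['2023-01-30', '2023-01-30', '2023-01-31', '2023-01-31',
             '2023-02-01', '2023-02-01', '2023-02-02', '2023-02-02'],
    'EndMonth': [0, 0, 1, 1, 0, 0, 0, 0],
    'ID1': ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B'],
    'Return': [0.95, 0.98, 1.01, 1.02, 1.05, 0.92, 0.97, 0.99]})
df1 = df1.set_index('Date')

# Shift the flag so the row *after* a month end starts a new block,
# then cumulative-sum the flags to get a block id per ID1.
df1['block'] = df1.groupby('ID1')['EndMonth'].transform(
    lambda s: s.shift(fill_value=0).cumsum())
df1['CumReturn'] = df1.groupby(['ID1', 'block'])['Return'].cumprod()
print(df1)
```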
|
<python><pandas>
|
2023-01-23 10:53:23
| 2
| 793
|
fjurt
|
75,208,331
| 19,580,067
|
Win32 not recognising the outlook email in Jupyter notebook
|
<p>I tried to read Outlook email in a Jupyter notebook for creating an ML algorithm, but win32 is not recognising my Outlook account. It was working fine yesterday, but somehow the same code is not working today.</p>
<p>Any suggestions please?</p>
<p>Attached my code below.</p>
<pre><code>import win32com #.client
import pyttsx3
#other libraries to be used in this script
import os
from datetime import datetime, timedelta
outlook = win32com.client.Dispatch('outlook.application').GetNamespace("MAPI")
# mapi = outlook.GetNamespace("MAPI")
# mapi
# for account in outlook.Accounts:
# print(account.DeliveryStore.DisplayName)
# account
# inbox = outlook.GetDefaultFolder(6)
outlook.Accounts
</code></pre>
<p>The result I'm getting is
<a href="https://i.sstatic.net/O6H36.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O6H36.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook><outlook><pywin32><win32com>
|
2023-01-23 10:40:16
| 2
| 359
|
Pravin
|
75,208,301
| 12,304,000
|
a bytes-like object is required, not 'str' while parsing XML files
|
<p>I am trying to parse an XML file that looks like this. I want to extract information regarding the kategorie elements, i.e. ID, parent ID, etc.:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<test timestamp="20210113">
<kategorien>
<kategorie id="1" parent_id="0">
Sprache
</kategorie>
</kategorien>
</test>
</code></pre>
<p>I am trying this</p>
<pre><code>fields = ['id', 'parent_id']

with open('output.csv', 'wb') as fp:
    writer = csv.writer(fp)
    writer.writerow(fields)
    tree = ET.parse('./file.xml')
    # from your example Locations is the root and Location is the first level
    for elem in tree.getroot():
        writer.writerow([(elem.get(name) or '').encode('utf-8')
                         for name in fields])
</code></pre>
<p>but I get this error:</p>
<pre><code>in <module>
writer.writerow(fields)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>even though I am already using <code>encode('utf-8') </code> in my code. How can I get rid of this error?</p>
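<p>A sketch of the likely fix: in Python 3 the <code>csv</code> module wants a text-mode file and plain <code>str</code> values, so the <code>'wb'</code> mode and the <code>.encode('utf-8')</code> both have to go (encoding is set on <code>open()</code> instead). Using an in-memory buffer so the snippet is self-contained:</p>

```python
import csv
import io
import xml.etree.ElementTree as ET

xml_data = """<?xml version="1.0" encoding="UTF-8"?>
<test timestamp="20210113">
  <kategorien>
    <kategorie id="1" parent_id="0">
      Sprache
    </kategorie>
  </kategorien>
</test>"""

fields = ['id', 'parent_id']
tree = ET.parse(io.StringIO(xml_data))

# csv.writer needs a text stream ('w', newline='') and str cells,
# not a binary file and bytes -- that mismatch is what raised TypeError.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(fields)
for elem in tree.getroot().iter('kategorie'):
    writer.writerow([elem.get(name) or '' for name in fields])
print(out.getvalue())
```

<p>With a real file, the equivalent is <code>open('output.csv', 'w', newline='', encoding='utf-8')</code>.</p>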
|
<python><python-3.x><xml><utf-8><encode>
|
2023-01-23 10:39:06
| 2
| 3,522
|
x89
|
75,208,261
| 19,716,381
|
Applying custom functions to groupby objects pandas
|
<p>I have the following pandas dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"bird_type": ["falcon", "crane", "crane", "falcon"],
"avg_speed": [np.random.randint(50, 200) for _ in range(4)],
"no_of_birds_observed": [np.random.randint(3, 10) for _ in range(4)],
"reliability_of_data": [np.random.rand() for _ in range(4)],
}
)
# The dataframe looks like this.
bird_type avg_speed no_of_birds_observed reliability_of_data
0 falcon 66 3 0.553841
1 crane 159 8 0.472359
2 crane 158 7 0.493193
3 falcon 161 7 0.585865
</code></pre>
<p>Now, I would like to have the weighted average (according to the number_of_birds_surveyed) for the average_speed and reliability variables. For that I have a simple function as follows, which calculates the weighted average.</p>
<pre class="lang-py prettyprint-override"><code>def func(data, numbers):
    ans = 0
    for a, b in zip(data, numbers):
        ans = ans + a*b
    ans = ans / sum(numbers)
    return ans
</code></pre>
<p>How can I apply the function of <code>func</code> to both average speed and reliability variables?</p>
<p>I expect the answer to be a dataframe like follows</p>
<pre class="lang-py prettyprint-override"><code> bird_type avg_speed no_of_birds_observed reliability_of_data
0 falcon 132.5 10 0.5762578
# how (66*3 + 161*7)/(3+7) (3+10) (0.553841×3+0.585865×7)/(3+7)
1 crane 158.53 15 0.4820815
# how (159*8 + 158*7)/(8+7) (8+7) (0.472359×8+0.493193×7)/(8+7)
</code></pre>
<p>I saw <a href="https://stackoverflow.com/questions/55644578/groupby-apply-custom-function-pandas">this question</a>, but could not generalize the solution / understand it completely. I thought of not asking the question, but according to <a href="https://stackoverflow.blog/2009/04/29/handling-duplicate-questions/#:%7E:text=We%20rely%20on%20Stack%20Overflow%20users%20to%20link%20these%20questions,answers%20attached%20to%20each%20question.">this blog post</a> by SO and <a href="https://meta.stackexchange.com/questions/10841/how-does-duplicate-closing-work-when-is-a-question-a-duplicate-and-how-should">this meta question</a>, with a different example, I think this question can be considered a "borderline duplicate". An answer will benefit me and probably some others will also find this useful. So finally decided to ask.</p>
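<p>A sketch of one way to generalize it: call <code>apply</code> on each group and build a <code>Series</code> that runs <code>func</code> once per weighted column, summing the observation counts themselves. Fixed numbers from the question's printed frame are used so the expected values are checkable:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "bird_type": ["falcon", "crane", "crane", "falcon"],
    "avg_speed": [66, 159, 158, 161],
    "no_of_birds_observed": [3, 8, 7, 7],
    "reliability_of_data": [0.553841, 0.472359, 0.493193, 0.585865],
})

def func(data, numbers):
    # weighted average of `data`, weighted by `numbers`
    return sum(a * b for a, b in zip(data, numbers)) / sum(numbers)

# One Series per group: weighted means for the value columns,
# a plain sum for the counts.
out = df.groupby("bird_type").apply(lambda g: pd.Series({
    "avg_speed": func(g["avg_speed"], g["no_of_birds_observed"]),
    "no_of_birds_observed": g["no_of_birds_observed"].sum(),
    "reliability_of_data": func(g["reliability_of_data"], g["no_of_birds_observed"]),
})).reset_index()
print(out)
```

<p>Recent pandas versions may emit a deprecation warning about grouping columns inside <code>apply</code>; the result is unaffected here.</p>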
|
<python><pandas><dataframe>
|
2023-01-23 10:35:37
| 2
| 484
|
berinaniesh
|
75,208,221
| 839,837
|
Python replace nested dictionary in a dictionary
|
<p>I'd like to replace a dictionary in a dictionary, but when I try I keep getting quotes and slashes around the added dictionary.</p>
<pre><code>current_dict = {"header": {"from": "/app/off_grid_control/subscribe",
"messageId": "ef6b8e50620ac768569f1f7abc6507a5", "method": "SET",
"namespace": "Appliance.Control.ToggleX", "payloadVersion": 1,
"sign": "e48c24e510044d7e2d248c68ff2c10ca", "timestamp": 1601908439,
"triggerSrc": "Android"}, "payload": {"togglex": {"channel": 0, "onoff": 1}}}
raw_payload = {"togglex": {"channel": 0, "onoff": 1}}
payload = json.dumps(raw_payload)
</code></pre>
<p>From a print statement I get:</p>
<pre><code>payload = {"togglex": {"channel": 0, "onoff": 0}}
</code></pre>
<p>So that looks fine.</p>
<p>Then I try and add the new dictionary part into the original dictionary:</p>
<pre><code>current_dict["payload"] = payload
</code></pre>
<p>And get this:</p>
<pre><code>current_dict = {"header": {"from": "/app/off_grid_control/subscribe",
"messageId": "ef6b8e50620ac768569f1f7abc6507a5", "method": "SET",
"namespace": "Appliance.Control.ToggleX", "payloadVersion": 1,
"sign": "e48c24e510044d7e2d248c68ff2c10ca", "timestamp": 1601908439,
"triggerSrc": "Android"}, "payload": "{\"togglex\": {\"channel\": 0, \"onoff\": 0}}"}
</code></pre>
<p>Note all the added <code>"</code> and <code>\</code> around the payload values.
Can someone please help with how to add a different dictionary to "payload" cleanly?</p>
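<p>A sketch of the fix: <code>json.dumps()</code> returns a <em>string</em>, and nesting that string is what produced the escaped quotes. Assign the dict itself and serialize only once, when the whole structure leaves Python:</p>

```python
import json

current_dict = {"header": {"method": "SET",
                           "namespace": "Appliance.Control.ToggleX"},
                "payload": {"togglex": {"channel": 0, "onoff": 1}}}
raw_payload = {"togglex": {"channel": 0, "onoff": 0}}

# Keep it a dict -- no json.dumps() here. Dump the complete structure
# at the very end (e.g. right before sending it over the wire).
current_dict["payload"] = raw_payload
print(json.dumps(current_dict))
```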
|
<python><json><dictionary>
|
2023-01-23 10:31:30
| 1
| 727
|
Markus
|
75,208,152
| 15,296,575
|
Why use "db.*" in SqlAlchemy?
|
<p>I am working on a project using Flask and SQLAlchemy. My colleagues and I found two ways to define a table. Both work, but what is the difference?</p>
<p>Possibility I</p>
<pre><code>base = declarative_base()
class Story(base):
__tablename__ = 'stories'
user_id = Column(Integer, primary_key=True)
email = Column(String(100), unique=True)
password = Column(String(100), unique=True)
</code></pre>
<p>Possibility II</p>
<pre><code>db = SQLAlchemy()
class Story(db.Model):
__tablename__ = 'stories'
user_id = db.Column(Integer, primary_key=True)
email = db.Column(String(100), unique=True)
password = db.Column(String(100), unique=True)
</code></pre>
<p>We want to choose one option, but which one?
It is obvious that both classes inherit from a different parent class, but what are these two possibilities used for?</p>
|
<python><sql><python-3.x><sqlalchemy>
|
2023-01-23 10:24:46
| 1
| 332
|
Shraft
|
75,208,050
| 853,710
|
xml parsing in python with XPath
|
<p>I am trying to parse an XML file in Python with the built-in xml module and ElementTree, but whatever I try to do according to the documentation, it does not give me what I need.
I am trying to extract all the <strong>value</strong> tags into a list.</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
<fullName>testPicklist__c</fullName>
<externalId>false</externalId>
<label>testPicklist</label>
<required>false</required>
<trackFeedHistory>false</trackFeedHistory>
<type>Picklist</type>
<valueSet>
<restricted>true</restricted>
<valueSetDefinition>
<sorted>false</sorted>
<value>
<fullName>a 32</fullName>
<default>false</default>
<label>a 32</label>
</value>
<value>
<fullName>23 432;:</fullName>
<default>false</default>
<label>23 432;:</label>
</value>
</code></pre>
<p>Here is the example code that I can't get to work. It's very basic and the only issue I have is the XPath.</p>
<pre><code>from xml.etree.ElementTree import ElementTree
field_filepath= "./testPicklist__c.field-meta.xml"
mydoc = ElementTree()
mydoc.parse(field_filepath)
root = mydoc.getroot()
print(root.findall(".//value"))
print(root.findall(".//*/value"))
print(root.findall("./*/value"))
</code></pre>
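<p>A sketch of the likely cause: the file declares a default namespace (<code>xmlns="http://soap.sforce.com/2006/04/metadata"</code>), so every element's real tag is <code>{http://...}value</code> and a bare <code>.//value</code> matches nothing. Qualify the tag through a prefix map (using an inline copy of the XML so the snippet is self-contained):</p>

```python
import io
import xml.etree.ElementTree as ET

xml_data = """<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
  <valueSet>
    <valueSetDefinition>
      <value><fullName>a 32</fullName></value>
      <value><fullName>23 432;:</fullName></value>
    </valueSetDefinition>
  </valueSet>
</CustomField>"""

root = ET.parse(io.StringIO(xml_data)).getroot()

# Map a prefix of your choosing to the document's default namespace
# and use it in every path step.
ns = {"md": "http://soap.sforce.com/2006/04/metadata"}
values = root.findall(".//md:value", ns)
names = [v.findtext("md:fullName", namespaces=ns) for v in values]
print(names)
```

<p>On Python 3.8+ the wildcard form <code>root.findall(".//{*}value")</code> also works without a prefix map.</p>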
|
<python><xml><xpath>
|
2023-01-23 10:12:36
| 1
| 1,767
|
user853710
|
75,208,047
| 6,409,572
|
How can I define plain console output of a class object (vs. __str__)?
|
<p>When defining my own class, I can override <code>__str__</code> to define its <code>print(my_class)</code> behavior. What do I have to do to override the behavior when just calling a <code>my_class</code> object?</p>
<p>What I get:</p>
<pre><code>> obj = my_class("ABC") # define
> print(obj) # call with print
'my class with ABC'
> obj # call obj only
'<__console__.my_class object at 0x7fedf752398532d9d0>'
</code></pre>
<p>What I want (e.g. <code>obj</code> returning the same as <code>print(obj)</code> or some other manually defined text).</p>
<pre><code>> obj # when obj is called plainly, I want to define its default
'my class with ABC (or some other default representation of the class object)'
</code></pre>
<p>With:</p>
<pre><code>class my_class:
    def __init__(self, some_string_argument):
        self.some_string = some_string_argument

    def __str__(self):
        return f"my_class with {self.some_string}"
</code></pre>
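<p>A sketch of the standard answer: the interactive console shows <code>repr(obj)</code> for a bare <code>obj</code>, so defining <code>__repr__</code> (here simply delegating to <code>__str__</code>) gives both forms the same text:</p>

```python
class my_class:
    def __init__(self, some_string_argument):
        self.some_string = some_string_argument

    def __str__(self):
        return f"my_class with {self.some_string}"

    # __repr__ is what the console prints for a bare `obj`;
    # aliasing it to __str__ makes both outputs identical.
    __repr__ = __str__

obj = my_class("ABC")
print(str(obj))
print(repr(obj))
```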
|
<python>
|
2023-01-23 10:12:06
| 1
| 3,178
|
Honeybear
|
75,207,803
| 10,689,857
|
Calling a SQL script from python
|
<p>I have a sql script called myscript.sql that looks like this:</p>
<pre><code>-- Comment that I have in my sql script.
MERGE INTO my_table i using
(select several_columns
from my_other_table f
where condition
) f on (my join conditions)
WHEN MATCHED THEN
UPDATE SET whatever;
COMMIT;
</code></pre>
<p>I have tried to call it from python the same way I do from a SQL Developer worksheet, which is:</p>
<pre><code>cursor().execute(r'''@"path_to_my_script\myscript.sql"''')
</code></pre>
<p>But it does not work, the following error is raised:</p>
<pre><code>DatabaseError: ORA-00900: invalid SQL statement
</code></pre>
<p>How could I execute the script?</p>
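<p>A sketch of one workaround: <code>@script.sql</code> is SQL*Plus/SQL Developer syntax, not SQL, so the driver rejects it with ORA-00900. Read the file yourself, drop <code>--</code> comment lines, split on <code>;</code> and execute each statement; the cursor is whatever a cx_Oracle/python-oracledb <code>connection.cursor()</code> gives you (that part is assumed, not shown):</p>

```python
def run_sql_script(path, cursor):
    """Naively run a .sql file statement by statement.

    Assumes plain SQL statements separated by ';' -- it will break on
    PL/SQL blocks or semicolons inside string literals.
    """
    with open(path) as f:
        text = f.read()
    # drop "--" comment lines, then split the rest into statements
    lines = [ln for ln in text.splitlines()
             if not ln.lstrip().startswith('--')]
    statements = [s.strip() for s in '\n'.join(lines).split(';') if s.strip()]
    for stmt in statements:
        cursor.execute(stmt)
    return statements
```

<p>For scripts that need full SQL*Plus semantics, running <code>sqlplus @myscript.sql</code> via <code>subprocess</code> is the safer alternative.</p>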
|
<python><sql><oracle-database>
|
2023-01-23 09:49:46
| 3
| 854
|
Javi Torre
|
75,207,749
| 264,136
|
importlib.import_module is not a package
|
<p>update.py:</p>
<pre><code>import argparse
import importlib

parser = argparse.ArgumentParser()
parser.add_argument("--data_file")
job_args = parser.parse_known_args()[0]
imported = importlib.import_module(job_args.data_file)
</code></pre>
<p>CURIE_BLR.py:</p>
<pre><code>testbed = "CURIE_BLR"
sockets = [
    "10.64.127.135:2005:Fugazi",
    "10.64.127.135:2006:Radium",
    "10.64.127.135:2007:Thallium",
    "10.64.127.135:2008:Thorium",
    "10.64.127.135:2009:Uranium",
    "10.64.127.135:2011:Neptune",
    "10.64.127.135:2033:Promethium"
]
</code></pre>
<p><code>python update_topo_status.py --data_file CURIE_BLR.py</code></p>
</blockquote>
<pre><code>ModuleNotFoundError: No module named 'CURIE_BLR.py'; 'CURIE_BLR' is not a package
</code></pre>
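<p>A sketch of the likely fix: <code>import_module()</code> takes a <em>module name</em> (<code>"CURIE_BLR"</code>), not a filename (<code>"CURIE_BLR.py"</code>); the dot makes Python look for a package named <code>CURIE_BLR</code> containing a submodule <code>py</code>. Strip the extension first (the temp-directory demo below just makes the snippet self-contained):</p>

```python
import importlib
import os
import sys
import tempfile

def import_data_file(data_file):
    # "CURIE_BLR.py" -> "CURIE_BLR": drop the directory and the extension
    module_name = os.path.splitext(os.path.basename(data_file))[0]
    return importlib.import_module(module_name)

# Self-contained demo: write a tiny CURIE_BLR.py and import it by the
# same "--data_file CURIE_BLR.py" style argument.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "CURIE_BLR.py"), "w") as f:
    f.write('testbed = "CURIE_BLR"\nsockets = ["10.64.127.135:2005:Fugazi"]\n')
sys.path.insert(0, tmpdir)

imported = import_data_file("CURIE_BLR.py")
print(imported.testbed)
```

<p>Alternatively, pass <code>--data_file CURIE_BLR</code> (no extension) and keep the original call unchanged.</p>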
|
<python>
|
2023-01-23 09:43:34
| 1
| 5,538
|
Akshay J
|
75,207,721
| 2,178,942
|
change the factorplot of seaborn to include dots
|
<p>I have a pandas dataframe that looks like this:</p>
<pre><code> feat roi sbj alpha test_type acc
0 cnn2 LOC Subject1 normal_space imagery 0.260961
1 cnn2 LOC Subject1 0.4 imagery 0.755594
2 cnn4 LOC Subject1 normal_space imagery 0.282238
3 cnn4 LOC Subject1 0.4 imagery 0.726485
4 cnn6 LOC Subject1 normal_space imagery 0.087359
5 cnn6 LOC Subject1 0.4 imagery 0.701167
6 cnn8 LOC Subject1 normal_space imagery 0.209444
7 cnn8 LOC Subject1 0.4 imagery 0.612597
8 glove LOC Subject1 normal_space imagery 0.263176
9 glove LOC Subject1 0.4 imagery 0.659182
10 cnn2 FFA Subject1 normal_space imagery 0.276830
11 cnn2 FFA Subject1 0.4 imagery 0.761014
12 cnn4 FFA Subject1 normal_space imagery 0.288127
13 cnn4 FFA Subject1 0.4 imagery 0.727325
14 cnn6 FFA Subject1 normal_space imagery 0.113507
15 cnn6 FFA Subject1 0.4 imagery 0.732963
16 cnn8 FFA Subject1 normal_space imagery 0.264455
17 cnn8 FFA Subject1 0.4 imagery 0.615467
18 glove FFA Subject1 normal_space imagery 0.245950
19 glove FFA Subject1 0.4 imagery 0.640502
20 cnn2 PPA Subject1 normal_space imagery 0.344078
...
</code></pre>
<p>For plotting it, I wrote:</p>
<pre><code>ax = sns.factorplot(x="feat", y="acc", col="roi", hue="alpha", alpha = 0.9, data=df_s_pt, kind="bar").set(title = "perception, scene wise correlation")
</code></pre>
<p>The result look like this:</p>
<p><a href="https://i.sstatic.net/h9yit.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h9yit.png" alt="enter image description here" /></a></p>
<p>I want to upgrade it so it can look like the one in this answer (so it has the dots of each subject (i.e., Subject1, Subject2, ...))</p>
<p>Also, I want to control the color.</p>
<p>I couldn't use the code in that <a href="https://stackoverflow.com/questions/68154123/matplotlib-grouped-bar-chart-with-individual-data-points">answer</a>. How can I add the dots and change the colors in factorplot?</p>
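<p>A sketch of one approach (small stand-in data with two subjects; the palette colours are arbitrary): <code>catplot</code> is <code>factorplot</code>'s replacement, <code>palette</code> controls the bar colours, and mapping a <code>stripplot</code> onto each facet overlays one dot per subject.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import pandas as pd
import seaborn as sns

# Small stand-in for df_s_pt so the overlay is visible.
df = pd.DataFrame({
    "feat": ["cnn2", "cnn2", "glove", "glove"] * 2,
    "roi": ["LOC"] * 4 + ["FFA"] * 4,
    "alpha": ["normal_space", "0.4"] * 4,
    "acc": [0.26, 0.75, 0.26, 0.65, 0.27, 0.76, 0.24, 0.64],
    "sbj": ["Subject1", "Subject1", "Subject2", "Subject2"] * 2,
})

# Bars per facet, coloured by alpha via an explicit palette ...
g = sns.catplot(data=df, x="feat", y="acc", col="roi", hue="alpha",
                kind="bar", palette=["#4c72b0", "#dd8452"], alpha=0.9)
# ... then one black dot per underlying row (i.e. per subject) on top.
g.map_dataframe(sns.stripplot, x="feat", y="acc", hue="alpha",
                dodge=True, palette="dark:black", size=4)
```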
|
<python><matplotlib><plot><seaborn><bar-chart>
|
2023-01-23 09:40:20
| 1
| 1,581
|
Kadaj13
|
75,207,414
| 4,462,086
|
How to properly type check alembic with mypy
|
<p>I am using alembic in a project. Everything works as expected, but when checking the types with mypy I am getting:</p>
<pre><code>error: Module "alembic" has no attribute "op" [attr-defined]
error: Module "alembic" has no attribute "context" [attr-defined]
</code></pre>
<p>Any idea how to resolve this? Do I need to install types? Is there something else that I am missing?</p>
|
<python><mypy><alembic>
|
2023-01-23 09:10:13
| 0
| 2,525
|
Imre_G
|
75,207,366
| 5,353,753
|
Regex for URL without path
|
<p>I know there are many solutions, articles and libraries for this, but I couldn't find one to match my case. I'm trying to write a regex to extract a URL (which represents the website) from a text (a person's email signature), which has multiple cases:</p>
<ul>
<li>Could contain http(s):// , or not</li>
<li>Could contain www. , or not</li>
<li>Could have multiple TLD such as "test.com.cn"</li>
</ul>
<p>Here are some examples:</p>
<pre><code>www.test.com
https://test.com.cn
http://www.test.com.cn
test.com
test.com.cn
</code></pre>
<p>I've come up with the following regex:</p>
<pre><code>(https?://)?(www\.)?\w{2,}\.[a-zA-Z]{2,}(\.[a-zA-Z]{2,})?$
</code></pre>
<p>But there are two main problems with this, because the signature can contain an email address:</p>
<ol>
<li>It (wrongly) capture the TLDs of emails like this one: name.surname@test2.com</li>
<li>It doesn't capture URLS in the middle of a line, and if I remove the $ sign at the end, it captures the <code>name.surname</code> part of the last example</li>
</ol>
<p>For (1) I tried using <code>negative lookbehind</code>, adding this <code>(?<!@)</code> to the beginning, the problem is that now it captures <code>est2.com</code> instead of not matching it at all.</p>
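<p>A sketch of one way around both problems: instead of fighting lookbehinds, blank out email addresses first, then collect URLs from what remains (so matching mid-line works without the <code>$</code> anchor). The TLD handling, one optional second level like <code>.com.cn</code>, mirrors the question's pattern and is an assumption:</p>

```python
import re

# First pattern removes e-mail addresses; second finds URLs with an
# optional scheme, optional "www." and an optional second-level TLD.
EMAIL = re.compile(r'[\w.+-]+@[\w-]+(?:\.[\w-]+)+')
URL = re.compile(r'(?:https?://)?(?:www\.)?[\w-]{2,}\.[A-Za-z]{2,}(?:\.[A-Za-z]{2,})?')

def extract_urls(text):
    # Blanking e-mails prevents both "test2.com" and "name.surname"
    # from surviving as false positives.
    return URL.findall(EMAIL.sub(' ', text))

signature = ("Jane Doe | name.surname@test2.com | "
             "see www.test.com or https://test.com.cn for details")
print(extract_urls(signature))
```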
|
<python><regex><url>
|
2023-01-23 09:05:25
| 1
| 40,569
|
sagi
|
75,207,310
| 8,967,152
|
How do I set dropdown colors using Google SpreadSheet API?
|
<p>I am using the gspread library in python to send api requests.<br>
The request is to set the dropdown.<br>
However, I could not figure out how to set the following.</p>
<ol>
<li>set a color for each value in the dropdown</li>
<li>set the display style to "Chip"</li>
</ol>
<p><a href="https://i.sstatic.net/8UDtN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8UDtN.png" alt="cells" /></a></p>
<p>Here is the code.</p>
<pre class="lang-py prettyprint-override"><code>options = ["Apple", "Banana", "Orange"]
values = [{"userEnteredValue": option} for option in options]
request = {
"requests": [
{
"setDataValidation": {
"range": {
"sheetId": 0,
"startRowIndex": 1,
"endRowIndex": 4,
"startColumnIndex": 1,
"endColumnIndex": 2,
},
"rule": {
"condition": {"type": "ONE_OF_LIST", "values": values},
"strict": False,
"showCustomUi": True
},
}
}
]
}
spreadSheet.batch_update(request)
</code></pre>
<p>Please let me know if there is a way to do this.
Thanks!</p>
|
<python><google-sheets-api><gspread>
|
2023-01-23 08:59:13
| 1
| 369
|
karutt
|
75,207,298
| 998,070
|
Drawing an arc tangent to two lines segments in Python
|
<p>I'm trying to draw an arc of n number of steps between two points so that I can bevel a 2D shape. This image illustrates what I'm looking to create (the blue arc) and how I'm trying to go about it:</p>
<ol>
<li>move by the radius away from the target point (red)</li>
<li>get the normals of those lines</li>
<li>get the intersections of the normals to find the center of the circle</li>
<li>Draw an arc between those points from the circle's center</li>
</ol>
<p><a href="https://i.sstatic.net/sxkKm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sxkKm.jpg" alt="Beveled Edge" /></a></p>
<p>This is what I have so far:</p>
<p><a href="https://i.sstatic.net/es19c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/es19c.png" alt="Matplotlib" /></a></p>
<p>As you can see, the circle is not tangent to the line segments. I think my approach may be flawed thinking that the two points used for the normal lines should be moved by the circle's radius. Can anyone please tell me where I am going wrong and how I might be able to find this arc of points? Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
#https://stackoverflow.com/questions/51223685/create-circle-tangent-to-two-lines-with-radius-r-geometry
def travel(dx, x1, y1, x2, y2):
    a = {"x": x2 - x1, "y": y2 - y1}
    mag = np.sqrt(a["x"]*a["x"] + a["y"]*a["y"])
    if (mag == 0):
        a["x"] = a["y"] = 0
    else:
        a["x"] = a["x"]/mag*dx
        a["y"] = a["y"]/mag*dx
    return [x1 + a["x"], y1 + a["y"]]

def plot_line(line, color="go-", label=""):
    plt.plot([p[0] for p in line],
             [p[1] for p in line], color, label=label)

def line_intersection(line1, line2):
    xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
    ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])

    def det(a, b):
        return a[0] * b[1] - a[1] * b[0]

    div = det(xdiff, ydiff)
    if div == 0:
        raise Exception('lines do not intersect')

    d = (det(*line1), det(*line2))
    x = det(d, xdiff) / div
    y = det(d, ydiff) / div
    return x, y
line_segment1 = [[1,1],[4,8]]
line_segment2 = [[4,8],[8,8]]
line = line_segment1 + line_segment2
plot_line(line,'k-')
radius = 2
l1_x1 = line_segment1[0][0]
l1_y1 = line_segment1[0][1]
l1_x2 = line_segment1[1][0]
l1_y2 = line_segment1[1][1]
new_point1 = travel(radius, l1_x2, l1_y2, l1_x1, l1_y1)
l2_x1 = line_segment2[0][0]
l2_y1 = line_segment2[0][1]
l2_x2 = line_segment2[1][0]
l2_y2 = line_segment2[1][1]
new_point2 = travel(radius, l2_x1, l2_y1, l2_x2, l2_y2)
plt.plot(line_segment1[1][0], line_segment1[1][1],'ro',label="Point 1")
plt.plot(new_point2[0], new_point2[1],'go',label="radius from Point 1")
plt.plot(new_point1[0], new_point1[1],'mo',label="radius from Point 1")
# normal 1
dx = l1_x2 - l1_x1
dy = l1_y2 - l1_y1
normal_line1 = [[new_point1[0]+-dy, new_point1[1]+dx],[new_point1[0]+dy, new_point1[1]+-dx]]
plot_line(normal_line1,'m',label="normal 1")
# normal 2
dx2 = l2_x2 - l2_x1
dy2 = l2_y2 - l2_y1
normal_line2 = [[new_point2[0]+-dy2, new_point2[1]+dx2],[new_point2[0]+dy2, new_point2[1]+-dx2]]
plot_line(normal_line2,'g',label="normal 2")
x, y = line_intersection(normal_line1,normal_line2)
plt.plot(x, y,'bo',label="intersection") #'blue'
theta = np.linspace( 0 , 2 * np.pi , 150 )
a = x + radius * np.cos( theta )
b = y + radius * np.sin( theta )
plt.plot(a, b)
plt.legend()
plt.axis('square')
plt.show()
</code></pre>
<p>Thanks a lot!</p>
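<p>A sketch of the geometric fix: the flaw is indeed in step 1. For a tangent ("fillet") arc of radius r, the tangent points do not sit at distance r from the corner; they sit at <code>r / tan(half_angle)</code> along each edge, and the centre sits on the corner's angle bisector at distance <code>r / sin(half_angle)</code>. The arc sampling below is naive and may need angle unwrapping for reflex corners:</p>

```python
import numpy as np

def fillet(p_prev, corner, p_next, r, steps=30):
    p_prev, corner, p_next = map(np.asarray, (p_prev, corner, p_next))
    # unit vectors pointing from the corner along each edge
    v1 = (p_prev - corner) / np.linalg.norm(p_prev - corner)
    v2 = (p_next - corner) / np.linalg.norm(p_next - corner)
    half = np.arccos(np.clip(v1 @ v2, -1.0, 1.0)) / 2
    # tangent points: r / tan(half) along each edge from the corner
    t1 = corner + v1 * (r / np.tan(half))
    t2 = corner + v2 * (r / np.tan(half))
    # centre: r / sin(half) along the angle bisector
    bisector = (v1 + v2) / np.linalg.norm(v1 + v2)
    centre = corner + bisector * (r / np.sin(half))
    a1 = np.arctan2(t1[1] - centre[1], t1[0] - centre[0])
    a2 = np.arctan2(t2[1] - centre[1], t2[0] - centre[0])
    theta = np.linspace(a1, a2, steps)
    arc = np.c_[centre[0] + r * np.cos(theta), centre[1] + r * np.sin(theta)]
    return centre, t1, t2, arc

# corner from the question: (1,1) -> (4,8) -> (8,8), radius 2
centre, t1, t2, arc = fillet((1, 1), (4, 8), (8, 8), r=2)
print(centre, t1, t2)
```

<p>Plotting <code>arc</code> with the existing <code>plot_line</code> helper should show the blue arc touching both segments.</p>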
|
<python><numpy><matplotlib><trigonometry>
|
2023-01-23 08:57:40
| 2
| 424
|
Dr. Pontchartrain
|
75,207,215
| 6,494,707
|
Generating random datasets in a dictionary
|
<p>I have two dataset dictionaries like this:</p>
<pre><code>D1 = {'query': torch.tensor([(1, 2)], dtype=torch.float32),
'support': torch.tensor([(2, 4), (3, 1)], dtype=torch.float32)} # (x, y) pairs for query (Q1) and support (S1) set.
D2 = {'query': torch.tensor([(4, 1)], dtype=torch.float32), 'support': torch.tensor([(5, 3), (6, 0)], dtype=torch.float32)}
</code></pre>
<p>in which it has two datapoints for support (subset) used in training and one datapoint for query subset (used during the test/validation). I wanna extend my experiment with more datasets and increase the number of random samples in each subset. Is there any way to automate the generation of datasets to 100 and data points?</p>
<p>Thanks</p>
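<p>A sketch of one way to automate it (the uniform [0, 10) range is a placeholder; substitute whatever distribution your experiment needs):</p>

```python
import torch

def make_datasets(n_tasks=100, n_support=2, n_query=1, low=0.0, high=10.0):
    # One dict per task, each holding (n, 2) tensors of random (x, y) pairs.
    return [
        {
            "support": torch.empty(n_support, 2).uniform_(low, high),
            "query": torch.empty(n_query, 2).uniform_(low, high),
        }
        for _ in range(n_tasks)
    ]

datasets = make_datasets(n_tasks=100, n_support=5, n_query=3)
print(len(datasets), datasets[0]["support"].shape, datasets[0]["query"].shape)
```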
|
<python><dictionary><tensor><torch>
|
2023-01-23 08:47:40
| 1
| 2,236
|
S.EB
|
75,207,087
| 13,238,456
|
Loading a conda environment in PyCharm - File or directory not found
|
<p>TLDR: PyCharm can't load my conda environment with a crypted error message:</p>
<p><code>Error code 2. *path*: can't open the file 'info' [ErrNo 2] No such file or directory</code></p>
<p>And does not list my conda environment in its run configuration.</p>
<p>I have a PyCharm Professional and Anaconda installation on Windows 11 and have created the conda environment using the shell, not with PyCharm.</p>
<p>Using the shell, I can run my Python code</p>
<pre><code>conda activate fav-env
python mycode.py
</code></pre>
<p>Now I would much rather like to set up the interpreter in PyCharm to get rid of the red squiggly lines.
To do so, I click <code>Add Local Interpreter -> Conda Environment</code> and select the environment executable in
<code>C:\Users\myuser\anaconda3\envs\pythonproject\python.exe</code> (see image)</p>
<p><a href="https://i.sstatic.net/JP1Im.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JP1Im.png" alt="PyCharm UI screenshot displaying the failed environment loading" /></a></p>
<p>So far, PyCharm can detect the .exe. Once I click <code>Load Environments</code> it displays me the error message <code>Error code 2. *path*: can't open the file 'info' [ErrNo 2] No such file or directory</code> No idea why it would require a file 'info' when it asks me to open a Conda executable.</p>
<p>I tried a different way, by changing the <code>Run Configuration</code>. Here one can select the Python interpreter, including those from conda environments. Unfortunately, it only lists 2 out of 3 conda environments, the missing one being the one that I need in this project.
<a href="https://i.sstatic.net/Qp0p7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qp0p7.png" alt="PyCharm UI displaying the runtime configuratoin" /></a></p>
|
<python><pycharm><anaconda><conda>
|
2023-01-23 08:29:13
| 0
| 493
|
Dustin
|
75,207,049
| 796,634
|
Mercurial 'hg commit --logfile' working from commandline but not Python
|
<p>I'm auto-generating 'hg add' then 'hg commit' commands and executing from Python 3.8 on Win 10.</p>
<p>The automation writes the commit template out to a temp file then passes it to the hg commit command using the --logfile / -l parameter. If I run from the commandline 'hg commit' works fine and uses the template stored in the temp file:</p>
<pre><code>hg commit -l path/to/my/GeneratedCommitTemplate
</code></pre>
<p>However if I attempt to run 'hg commit' using Python, hg seemingly ignores the -l param and pops up notepad with a generic commit template:</p>
<pre><code>subprocess.run(["hg", "commit", "-l", "path/to/my/GeneratedCommitTemplate"], cwd=repoBasePath, check=True)
</code></pre>
<p>I'm executing 'hg add' from Python the same way and that works fine.</p>
<p>What could cause 'hg commit' to not detect the -l param or the template file when executed from Python?</p>
|
<python><mercurial>
|
2023-01-23 08:24:11
| 0
| 2,194
|
Geordie
|
75,206,988
| 4,961,888
|
Split dataframe and plot subsets with a for loop in jupyter notebook
|
<p>I am trying to split a dataframe and to plot the created subsets using a for loop in a Jupyter notebook:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.DataFrame(np.random.rand(120, 10))
for i in range(len(data) // 10):
    split_data = data.iloc[i:i*(len(data) // 10)]
    for i, row in split_data.iterrows():
        row.plot(figsize=(15,5))
    plt.show()
</code></pre>
<p>Here the figures being plotted are not reset; rather, the curves of the first figure appear again on the second one, and so on.</p>
<p>How can I reset the plot so that I have only 10 curves per plot?</p>
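<p>A sketch of a possible fix, addressing two separate issues: the slice <code>i:i*(len(data) // 10)</code> does not select consecutive 10-row chunks, and each <code>row.plot</code> draws on the current axes, so a fresh figure must be created (and closed) per chunk:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.rand(120, 10))
chunk = 10

for i in range(len(data) // chunk):
    split_data = data.iloc[i * chunk:(i + 1) * chunk]  # rows i*10 .. i*10+9
    fig, ax = plt.subplots(figsize=(15, 5))            # a fresh figure per chunk
    for _, row in split_data.iterrows():
        ax.plot(row.values)
    plt.show()
    plt.close(fig)  # release it so curves never accumulate across figures
```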
|
<python><pandas><matplotlib><jupyter-notebook>
|
2023-01-23 08:15:01
| 1
| 5,182
|
jim jarnac
|
75,206,920
| 7,095,530
|
Safely store data from GET request - Django
|
<p>Alright,</p>
<p>Let's say we need to create a website with Django where people can book a lodge for the weekends.</p>
<p>We add a search form on the homepage where people can fill in the check-in and check-out date
to filter for all available lodges.</p>
<p>We use a generic Listview to create an overview of all the lodges and we overwrite the queryset to grab the search parameters from the GET request to create a filtered view.</p>
<p><strong>views.py</strong></p>
<pre><code>class ListingsView(ListView):
    """Return all listings"""
    model = Listing
    template_name = 'orders/listings.html'

    def get_queryset(self):
        """
        Visitors can either visit the page with- or without a search query
        appended to the url. They can either use the form to perform a search
        or supply an url with the appropriate parameters.
        """
        # Get the start and end dates from the url
        check_in = self.request.GET.get('check_in')
        check_out = self.request.GET.get('check_out')

        queryset = Calendar.objects.filter(
            date__gte=check_in,
            date__lt=check_out,
            is_available=True
        )

        return queryset
</code></pre>
<p>Now this code is simplified for readability, but what I would like to do is store the check-in and check-out dates people are searching for.</p>
<p><strong>Updated views.py</strong></p>
<pre><code>class ListingsView(ListView):
    """Return all listings"""
    model = Listing
    template_name = 'orders/listings.html'

    def get_queryset(self):
        """
        Visitors can either visit the page with- or without a search query
        appended to the url. They can either use the form to perform a search
        or supply an url with the appropriate parameters.
        """
        # Get the start and end dates from the url
        check_in = self.request.GET.get('check_in')
        check_out = self.request.GET.get('check_out')

        queryset = Calendar.objects.filter(
            date__gte=check_in,
            date__lt=check_out,
            is_available=True
        )

        Statistics.objects.create(
            check_in=check_in,
            check_out=check_out
        )

        return queryset
</code></pre>
<p>We created a "Statistics" model to store all dates people are looking for.</p>
<p>We essentially add data to a model using a GET request, and I'm wondering if this is the right way of doing things. Aren't we creating any vulnerabilities?</p>
<p>The search form uses hidden text inputs, so there's always the possibility of not knowing what data is coming in. Is cleaning or checking the datatype of these inputs enough, or will this always be in string format?</p>
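<p>GET parameters always arrive as strings (or <code>None</code> when absent), so validating them before they reach a queryset or a saved row is the safer route. A minimal sketch (the helper name is illustrative, not part of the original code):</p>

```python
from datetime import datetime

def parse_iso_date(value):
    """Return a date for a 'YYYY-MM-DD' string, or None for missing/bad input."""
    if not value:
        return None
    try:
        return datetime.strptime(value, "%Y-%m-%d").date()
    except ValueError:
        return None
```

<p>In <code>get_queryset</code> you would then only filter (and only create the <code>Statistics</code> row) when both dates parse; a Django form or DRF serializer can do the same job declaratively.</p>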
<p>Any ideas?</p>
<p>Greetz,</p>
|
<python><django><get>
|
2023-01-23 08:07:10
| 0
| 315
|
Kevin D.
|
75,206,754
| 11,895,506
|
Python: parse JSON with 2 arrays via json_normalize
|
<p>Could you please help me parse a two-array JSON document in Python via json_normalize?</p>
<p>Here is the code:</p>
<pre><code>import json
import pandas as pd
from pandas import json_normalize  # pandas.io.json.json_normalize is deprecated
data5 = {
"id": "0001",
"type": "donut",
"name": "Cake",
"ppu": 0.55,
"batters":
{
"batter":
[
{ "id": "1001", "type": "Regular" },
{ "id": "1002", "type": "Chocolate" },
{ "id": "1003", "type": "Blueberry" },
{ "id": "1004", "type": "Devil's Food" }
]
},
"topping":
[
{ "id": "5001", "type": "None" },
{ "id": "5002", "type": "Glazed" },
{ "id": "5005", "type": "Sugar" },
{ "id": "5007", "type": "Powdered Sugar" },
{ "id": "5006", "type": "Chocolate with Sprinkles" },
{ "id": "5003", "type": "Chocolate" },
{ "id": "5004", "type": "Maple" }
]
}
df2 = json_normalize(data5
, record_path = ['topping']
, meta = ['id', 'type', 'name', 'ppu', 'batters']
, record_prefix='_'
, errors='ignore'
)
</code></pre>
<p>This parses the "topping" object but doesn't parse the "batters".
To parse the "batters", the following code can be applied:</p>
<pre><code># parse part of the JSON string into another dataframe
df3 = json_normalize(data5,
                     record_path=['batters', 'batter'])
# cross join the 2 dataframes
df2['key_'] = 1
df3['key_'] = 1
result = pd.merge(df2, df3, on='key_').drop(columns='key_')
</code></pre>
<p>But this looks complicated.
Is it possible to combine the two steps above into one query? E.g.:</p>
<pre><code> df2 = json_normalize(data5
, record_path = ['topping', ['batters', 'batter']]
, meta = ['id', 'type', 'name', 'ppu', ]
, record_prefix='_'
, errors='ignore'
)
</code></pre>
<p>Thank you.</p>
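<p><code>json_normalize</code> accepts a single <code>record_path</code>, so the two independent arrays cannot be flattened in one call — but the cross join can be written without the dummy key on pandas >= 1.2 via <code>how='cross'</code>. A sketch with a trimmed copy of the sample data:</p>

```python
import pandas as pd

data5 = {
    "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
    "batters": {"batter": [{"id": "1001", "type": "Regular"},
                           {"id": "1002", "type": "Chocolate"}]},
    "topping": [{"id": "5001", "type": "None"},
                {"id": "5002", "type": "Glazed"}],
}

df_topping = pd.json_normalize(data5, record_path=["topping"],
                               meta=["id", "type", "name", "ppu"],
                               record_prefix="topping_", errors="ignore")
df_batter = pd.json_normalize(data5, record_path=["batters", "batter"],
                              record_prefix="batter_")
# cartesian product of the two record sets, no helper key column needed
result = df_topping.merge(df_batter, how="cross")   # pandas >= 1.2
```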
|
<python><json><json-normalize>
|
2023-01-23 07:44:01
| 1
| 737
|
Semyon-coder
|
75,206,732
| 11,608,962
|
SparseTermSimilarityMatrix().inner_product() throws "cannot unpack non-iterable bool object"
|
<p>While working with cosine similarity, I am facing an issue calculating the inner product of two vectors.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from gensim.similarities import (
WordEmbeddingSimilarityIndex,
SparseTermSimilarityMatrix
)
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_index = WordEmbeddingSimilarityIndex(w2v_model)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
score = similarity_matrix.inner_product(
X = [
(0, 1), (1, 1), (2, 1), (3, 2), (4, 1),
(5, 1), (6, 1), (7, 1), (8, 1), (9, 1),
(10, 1), (11, 1), (12, 1), (13, 1), (14, 1),
(15, 1), (16, 3)
],
Y = [(221, 1), (648, 1), (8238, 1)],
normalized = True
)
</code></pre>
<p>Error:</p>
<pre class="lang-bash prettyprint-override"><code>TypeError Traceback (most recent call last)
Input In [77], in <cell line: 1>()
----> 1 similarity_matrix.inner_product(
2 [(0, 1), (1, 1), (2, 1), (3, 2), (4, 1), (5, 1), (6, 1), (7, 1),
3 (8, 1), (9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 1), (15, 1), (16, 3)],
4 [(221, 1), (648, 1), (8238, 1)], normalized=True)
File ~\Anaconda3\lib\site-packages\gensim\similarities\termsim.py:558, in SparseTermSimilarityMatrix.inner_product(self, X, Y, normalized)
555 if not X or not Y:
556 return self.matrix.dtype.type(0.0)
--> 558 normalized_X, normalized_Y = normalized
559 valid_normalized_values = (True, False, 'maintain')
561 if normalized_X not in valid_normalized_values:
TypeError: cannot unpack non-iterable bool object
</code></pre>
<p>I am not sure why it says <code>bool</code> objects when both X and Y are <code>list</code>.</p>
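<p>The traceback shows the cause: in newer gensim versions <code>inner_product</code> unpacks <code>normalized</code> into a pair <code>(normalized_X, normalized_Y)</code>, so a bare bool fails to unpack. Passing a tuple such as <code>normalized=(True, True)</code> should fix the call. A dependency-free sketch of the unpacking that raises:</p>

```python
def unpack_normalized(normalized):
    # mirrors the line inside gensim's inner_product that raises
    normalized_X, normalized_Y = normalized
    return normalized_X, normalized_Y

try:
    unpack_normalized(True)                  # old-style bool argument
except TypeError as err:
    print(err)                               # cannot unpack non-iterable bool object

print(unpack_normalized((True, True)))       # a pair unpacks fine
```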
|
<python><nlp><nltk><gensim><cosine-similarity>
|
2023-01-23 07:39:44
| 1
| 1,427
|
Amit Pathak
|
75,206,490
| 5,807,808
|
add new column to dataframe with values matching the keys with existing column in dataframe in python
|
<p>I'm fairly new to Python and pandas/NumPy. I have a data frame and a dictionary. I need to add a new column to the data frame whose values come from the dictionary wherever a dictionary key matches the existing column, with an empty string for non-matching values.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Name</th>
<th style="text-align: left;">Fonder</th>
<th style="text-align: right;">Year</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">company1</td>
<td style="text-align: left;">person1</td>
<td style="text-align: right;">1999</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">company2</td>
<td style="text-align: left;">person2</td>
<td style="text-align: right;">2000</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">company3</td>
<td style="text-align: left;">person3</td>
<td style="text-align: right;">2001</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">company4</td>
<td style="text-align: left;">person4</td>
<td style="text-align: right;">2002</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">company5</td>
<td style="text-align: left;">person5</td>
<td style="text-align: right;">2003</td>
</tr>
</tbody>
</table>
</div>
<pre><code>col_dict = {'company2': 'abc', 'company4': 'xyz',}
</code></pre>
<p>I want the following</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Name</th>
<th style="text-align: left;">Fonder</th>
<th style="text-align: right;">Year</th>
<th style="text-align: left;">new_col</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">company1</td>
<td style="text-align: left;">person1</td>
<td style="text-align: right;">1999</td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">company2</td>
<td style="text-align: left;">person2</td>
<td style="text-align: right;">2000</td>
<td style="text-align: left;">abc</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">company3</td>
<td style="text-align: left;">person3</td>
<td style="text-align: right;">2001</td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">company4</td>
<td style="text-align: left;">person4</td>
<td style="text-align: right;">2002</td>
<td style="text-align: left;">xyz</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">company5</td>
<td style="text-align: left;">person5</td>
<td style="text-align: right;">2003</td>
<td style="text-align: left;"></td>
</tr>
</tbody>
</table>
</div>
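<p>A common approach is <code>Series.map</code> plus <code>fillna</code> — a sketch using the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["company1", "company2", "company3", "company4", "company5"],
    "Fonder": ["person1", "person2", "person3", "person4", "person5"],
    "Year": [1999, 2000, 2001, 2002, 2003],
})
col_dict = {"company2": "abc", "company4": "xyz"}

# map() looks each Name up in the dict; fillna("") blanks the misses
df["new_col"] = df["Name"].map(col_dict).fillna("")
```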
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-01-23 07:06:52
| 1
| 401
|
peter
|
75,206,392
| 19,238,204
|
How to Calculate the Volume and Surface Area From the 3D Plot Using Matplotlib and Numpy?
|
<p>Hi all, I want to know how to calculate the volume and the surface area of this 3D shape.</p>
<p>I am using the correct sizes for the width, length, and height.</p>
<p>This is my code:</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# vertices of a prism
v = np.array([[-2, -2, -2.5], [2, -2, -2.5], [2, 2, -2.5], [-2, 2, -2.5], [2,0,2.5],
[-2,0,2.5]])
ax.scatter3D(v[:, 0], v[:, 1], v[:, 2])
# generate list of sides' polygons of our prism
verts = [ [v[0],v[1],v[4],v[5]], [v[0],v[3],v[5]],
[v[2],v[1],v[4]], [v[2],v[3],v[5],v[4]], [v[0],v[1],v[2],v[3]]]
# plot sides
ax.add_collection3d(Poly3DCollection(verts,
facecolors='cyan', linewidths=1, edgecolors='r', alpha=.25))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/5Sjny.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Sjny.png" alt="1" /></a></p>
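<p>Since the prism is convex, one option is <code>scipy.spatial.ConvexHull</code>, whose <code>volume</code> and <code>area</code> attributes give the volume and total surface area of a 3D hull — a sketch using the same vertices (assumes SciPy is installed):</p>

```python
import numpy as np
from scipy.spatial import ConvexHull

# same six vertices as the plot above
v = np.array([[-2, -2, -2.5], [2, -2, -2.5], [2, 2, -2.5], [-2, 2, -2.5],
              [2, 0, 2.5], [-2, 0, 2.5]])
hull = ConvexHull(v)
# triangular cross-section (0.5 * 4 * 5 = 10) extruded over length 4 -> volume 40
print(hull.volume)
# base square + two slanted rectangles + two triangular ends
print(hull.area)
```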
|
<python><numpy><matplotlib>
|
2023-01-23 06:53:35
| 2
| 435
|
Freya the Goddess
|
75,206,372
| 16,997,421
|
How to filter emails based on received date using microsoft graph api with python
|
<p>I'm working on a Python script to retrieve emails based on receivedDateTime. When I run my script, I get the error below.</p>
<p><a href="https://i.sstatic.net/gPAVi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gPAVi.png" alt="enter image description here" /></a></p>
<p>Below is my whole script.</p>
<pre><code>import msal
import json
import requests
def get_access_token():
tenantID = 'yyy'
authority = 'https://login.microsoftonline.com/' + tenantID
clientID = 'xxx'
clientSecret = 'xxx'
scope = ['https://graph.microsoft.com/.default']
app = msal.ConfidentialClientApplication(clientID, authority=authority, client_credential = clientSecret)
access_token = app.acquire_token_for_client(scopes=scope)
return access_token
# token block
access_token = get_access_token()
token = access_token['access_token']
# Set the parameters for the email search
# received 2023-01-22T21:13:24Z
date_received = '2023-01-22T21:13:24Z'
mail_subject = 'Test Mail'
mail_sender = 'bernardberbell@gmail.com'
mailuser = 'bernardmwanza@bernardcomms.onmicrosoft.com'
# Construct the URL for the Microsoft Graph API
url = "https://graph.microsoft.com/v1.0/users/{}/Messages?$filter=(ReceivedDateTime ge '{}')&$select=subject,from".format(mailuser, date_received)
# Set the headers for the API call
headers = {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json"
}
# Send the API request and get the response
response = requests.get(url, headers=headers)
print(response)
# Parse the response as JSON
data = json.loads(response.text)
print(data)
</code></pre>
<p>I tried to modify the API call as below, but I still get the same error. What am I doing wrong here?</p>
<pre><code>url = "https://graph.microsoft.com/v1.0/users/{}/Messages?$filter=ReceivedDateTime ge '{}'&$select=subject,from".format(mailuser, date_received)
</code></pre>
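<p>In OData <code>$filter</code> clauses, <code>DateTimeOffset</code> literals are written without surrounding quotes — quoting the value makes Graph treat it as a string and reject the filter. A sketch of the corrected URL construction (the mailbox address is a placeholder):</p>

```python
date_received = "2023-01-22T21:13:24Z"
mailuser = "user@example.com"   # placeholder mailbox

# DateTimeOffset literals in OData $filter clauses take no quotes
url = ("https://graph.microsoft.com/v1.0/users/{}/messages"
       "?$filter=receivedDateTime ge {}&$select=subject,from"
       ).format(mailuser, date_received)
print(url)
```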
|
<python><python-3.x><microsoft-graph-api><filtering><microsoft-graph-mail>
|
2023-01-23 06:49:44
| 1
| 352
|
Bernietechy
|
75,206,293
| 743,086
|
WinError 10061 and WinError 10022 in socket programming only on Windows
|
<p>I have very simple Python code to <code>bind</code> or <code>connect</code> to a port. It works without any error on <code>Ubuntu</code> and <code>CentOS</code>, but I get an error on <code>Windows 10</code>. I turned off the firewall and antivirus, but it didn't help.</p>
<p>my code:</p>
<pre><code>import socket
port = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname()
try:
s.connect((host,port))
except:
s.bind((host, port))
s.listen(1)
print("I'm a server")
clientsocket, address = s.accept()
else:
print("I'm a client")
</code></pre>
<p>error on windows 10:</p>
<pre><code>Traceback (most recent call last):
File "win.py", line 11, in <module>
s.connect((host,port))
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
During the handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "win.py", line 13, in <module>
s.bind((host, port))
OSError: [WinError 10022] An invalid argument was supplied
</code></pre>
<p>Edit:</p>
<p>I found that the problem is in the <code>try... except</code> part: if I put this code in two files, the problem goes away. But why? Does <code>try/except</code> not work correctly on Windows?</p>
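<p>The two errors are connected: <code>connect</code> fails with 10061 simply because nothing is listening yet, and Windows then refuses to <code>bind</code> the same socket object that already went through a failed connect (10022); Linux is just more forgiving here. Creating a fresh socket inside the <code>except</code> block sidesteps this — a sketch on the loopback address:</p>

```python
import socket

def serve_or_connect(host="127.0.0.1", port=54617):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))
        return "client", s
    except OSError:
        # A socket that failed to connect cannot be reused for bind() on
        # Windows (WinError 10022) -- make a fresh one instead.
        s.close()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((host, port))
        s.listen(1)
        return "server", s

role, sock = serve_or_connect()
print(role)        # "server" when nothing is listening on the port yet
sock.close()
```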
|
<python><linux><sockets><window><try-except>
|
2023-01-23 06:36:52
| 1
| 2,295
|
Ehsan
|
75,206,092
| 7,177,478
|
FastAPI Sqlalchemy sqlalchemy.exc.InvalidRequestError: Could not refresh instance
|
<p>I'm new to SQLAlchemy.
I was trying to do a simple insert,
but I get an InvalidRequestError.</p>
<blockquote>
<p>sqlalchemy.exc.InvalidRequestError: Could not refresh instance '<Block
at 0x2464fd85670>'</p>
</blockquote>
<p>It seems like there is something wrong with the refresh().
Any advice would be appreciated.
Below is my code.</p>
<pre><code>@router.post("/{user_id}")
async def block(
user_id: int,
session: Session = Depends(get_db_session),
user: UserDB = Depends(get_current_user)):
block = BlockCreate(blocker=user.id, blocked=user_id)
db_block = db_models.Block(**block.dict())
session.add(db_block)
session.commit()
session.refresh(db_block)
return {"block": db_block}
class BlockBase(BaseModel):
blocker: int
blocked: int
class BlockCreate(BlockBase):
pass
</code></pre>
<h2>DB setting</h2>
<pre><code>class Block(Base):
__tablename__ = "block"
id = Column(Integer, primary_key=True, autoincrement=True)
blocker = Column(Integer, ForeignKey("user.id"), nullable=False, primary_key=True)
blocked = Column(Integer, ForeignKey("user.id"), nullable=False, primary_key=True)
</code></pre>
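<p>The likely culprit is the model: <code>id</code>, <code>blocker</code>, and <code>blocked</code> are all flagged <code>primary_key=True</code>, creating a composite key, so after the INSERT the session cannot locate the row to refresh it. Keeping a single surrogate key and enforcing the pair with a unique constraint is one fix — a self-contained sketch against in-memory SQLite (the foreign keys are dropped to keep it runnable):</p>

```python
from sqlalchemy import Column, Integer, UniqueConstraint, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Block(Base):
    __tablename__ = "block"
    __table_args__ = (UniqueConstraint("blocker", "blocked"),)
    id = Column(Integer, primary_key=True, autoincrement=True)  # single surrogate PK
    blocker = Column(Integer, nullable=False)  # ForeignKey("user.id") in the real app
    blocked = Column(Integer, nullable=False)

engine = create_engine("sqlite://")            # in-memory database for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

b = Block(blocker=1, blocked=2)
session.add(b)
session.commit()
session.refresh(b)      # succeeds: the row is identified by its single-column PK
```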
|
<python><sqlalchemy><fastapi>
|
2023-01-23 06:02:27
| 1
| 420
|
Ian
|
75,205,725
| 17,778,275
|
No search results while using API Key to retrieve information using Python
|
<p>Mouser is a website where electronic components can be bought and are listed with their details and technical parameters.</p>
<p>To automate the search of parts from this website, I am trying to automate the processing using the <a href="https://api.mouser.com/api/docs/ui/index" rel="nofollow noreferrer">Mouser API Key for part search</a>.</p>
<p>I am trying to retrieve part number details from <a href="https://www.mouser.com/" rel="nofollow noreferrer">mouser.com</a> using an API key, but I get no search results.
Below is the Python script.</p>
<pre><code>
import requests
import json
api_key = "my API Key"
part_number = "LM258AMDREP" #Part Number
headers = {
"Content-Type": "application/json",
"Accept": "application/json"
}
data = {
"SearchByPartnumberRequest": {
"MouserPartNumber": part_number
}
}
url = f"https://api.mouser.com/api/v1/search/partnumber?apikey={api_key}"
try:
response = requests.post(url, headers=headers, json=data, verify=False)
response.raise_for_status()
data = response.json()
print(data)
except requests.exceptions.HTTPError as err:
print ("Error: " + str(err))
except requests.exceptions.RequestException as e:
# catastrophic error. bail.
print ("Error: " + str(e))
</code></pre>
<p>The output I get is:</p>
<pre><code>{'Errors': [{'Id': 0, 'Code': 'Required', 'Message': 'Required', 'ResourceKey': 'Required', 'ResourceFormatString': None,
'ResourceFormatString2': None, 'PropertyName': 'Request'}], 'SearchResults': None}
</code></pre>
<p>If I remove the <code>verify=False</code>, I get following error</p>
<pre><code>Error: HTTPSConnectionPool(host='api.mouser.com', port=443): Max retries exceeded with url: /api/v1/search/partnumber?apikey=e9226156-491c-4635-bfcd-5285f80244cf (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED]
certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)')))
</code></pre>
<p>Is there some other version/procedure to get the part details?</p>
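<p>The error (<code>'Required' ... 'PropertyName': 'Request'</code>) suggests the request wrapper key is not what the API expects; reading the Mouser docs, the body key appears to be <code>SearchByPartRequest</code> with a <code>mouserPartNumber</code> field — this is an assumption, so verify it against your account's docs page. For the SSL error, exporting your network's root CA and passing it via <code>verify='path/to/ca.pem'</code> is safer than <code>verify=False</code>. A sketch of the request body:</p>

```python
import json

part_number = "LM258AMDREP"
# Wrapper/field names are my reading of the error message plus the Mouser docs;
# confirm them on the API docs page for your account before relying on this.
data = {"SearchByPartRequest": {"mouserPartNumber": part_number,
                                "partSearchOptions": ""}}
body = json.dumps(data)
print(body)
```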
|
<python><search><ssl-certificate>
|
2023-01-23 04:42:57
| 2
| 354
|
spd
|
75,205,674
| 17,990,405
|
Have pydantic object dict() method return custom representation for non-pydantic type
|
<p>I have a pydantic object with some attributes that are custom types. I was able to create validators so pydantic can validate this type; however, I want to get a string representation of the object whenever I call the pydantic dict() method. Here's an example:</p>
<pre><code>class MongoId(ObjectId):
@classmethod
def __get_validators__(cls):
yield cls.validate
@classmethod
def validate(cls, v):
if isinstance(v, str):
return ObjectId(v)
if not isinstance(v, ObjectId):
raise ValueError("Invalid ObjectId")
return v
class User(BaseModel):
uid: MongoId
user = User(...)
</code></pre>
<p>When I call <code>user.dict()</code> I want to get a string representation of the <code>uid</code> attribute, not an object representation.</p>
<p>I tried having the custom type inherit the BaseModel class and then implementing a dict() method inside, hoping it would override it, but it doesn't seem to get called.</p>
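<p><code>.dict()</code> keeps whatever object the validator returned, so the simplest route is to have <code>validate</code> return a <code>str</code> subclass. The sketch below uses a stand-in for <code>bson.ObjectId</code> (an assumption, so the example runs without MongoDB installed) and exercises the validator without pydantic:</p>

```python
class FakeObjectId:
    """Stand-in for bson.ObjectId (an assumption) so the sketch runs anywhere."""
    def __init__(self, value):
        self._value = str(value)
    def __str__(self):
        return self._value

class MongoId(str):
    """Validator returns a str subclass, so BaseModel.dict() emits a plain string."""
    @classmethod
    def __get_validators__(cls):          # pydantic v1 hook, as in the question
        yield cls.validate
    @classmethod
    def validate(cls, v):
        if isinstance(v, (str, FakeObjectId)):
            return cls(str(v))
        raise ValueError("Invalid ObjectId")
```

<p>Since the stored value is already a plain string, <code>user.dict()</code> has nothing to convert.</p>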
|
<python><pydantic>
|
2023-01-23 04:26:12
| 1
| 325
|
syntactic
|
75,205,618
| 5,966,097
|
winxpgui.SetLayeredWindowAttributes throwing error pywintypes.error: (87, 'SetLayeredWindowAttributes', 'The parameter is incorrect.')
|
<p>While I am trying to make a window's background transparent using win32gui, I am using following code.</p>
<pre><code>hwnd = win32gui.FindWindow(None, "My App Name")
win32gui.SetWindowLong (hwnd, win32con.GWL_EXSTYLE, win32gui.GetWindowLong (hwnd, win32con.GWL_EXSTYLE ) | win32con.WS_EX_LAYERED )
winxpgui.SetLayeredWindowAttributes(hwnd, win32api.RGB(0,0,0), 180, win32con.LWA_ALPHA)
</code></pre>
<p>But I am getting an error as</p>
<pre><code>pywintypes.error: (87, 'SetLayeredWindowAttributes', 'The parameter is incorrect.')
</code></pre>
<p>I tried different documentation, but could not really find anything very specific or helpful. Would anyone like to give some suggestion or code snippet or idea to fix the issue.</p>
<p>Thanks in advance.</p>
|
<python><pywinauto><win32gui><win32con>
|
2023-01-23 04:12:48
| 1
| 2,052
|
KOUSIK MANDAL
|
75,205,556
| 2,580,505
|
Focal Loss in Pytorch implementation
|
<p>Where can I find a reliable PyTorch implementation of Focal Loss for a multi-class object detection problem? I have seen some implementations on GitHub, but I am looking for the official PyTorch version, similar to <code>nn.CrossEntropyLoss()</code>.</p>
<p>My prediction has a shape of <code>(b, c)</code> and the target has a shape of <code>(b)</code>, where <code>b</code> is the batch size and <code>c</code> is the number of classes. If there is no official implementation from the PyTorch team, does anyone know a good GitHub implementation that supports different shapes for target and prediction?</p>
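<p>There is no <code>nn.FocalLoss</code> in core PyTorch as of recent releases; torchvision ships <code>sigmoid_focal_loss</code> for the binary/multi-label case. For the <code>(b, c)</code>-logits / <code>(b,)</code>-targets multi-class shape described above, a common sketch built on <code>cross_entropy</code> (following the Lin et al. formulation) is:</p>

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, reduction="mean"):
    """Multi-class focal loss for (b, c) logits and (b,) integer targets."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per sample
    pt = torch.exp(-ce)                                      # p_t
    loss = (1.0 - pt) ** gamma * ce
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss
```

<p>With <code>gamma=0</code> this reduces exactly to cross-entropy, which is a handy sanity check.</p>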
|
<python><deep-learning><pytorch><neural-network><computer-vision>
|
2023-01-23 03:56:19
| 0
| 969
|
Infintyyy
|
75,205,435
| 935,376
|
Extraction of values from pandas column which is a list of dictionaries
|
<p>I have a pandas dataframe called top_chart_movies with a column, <strong>genres</strong>, that holds a list of dictionaries, as shown in the picture below <a href="https://i.sstatic.net/b2bQG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b2bQG.png" alt="enter image description here" /></a></p>
<p>The column values has varying number of dictionary items within a list.</p>
<p>How can I extract the values as a list into another column <strong>genres1</strong>, where</p>
<p>top_chart_movies['genres1'].head(2)</p>
<pre><code>1881 ["Drama","Crime"]
3337 ["Drama","Crime"]
</code></pre>
<p>I tried this following code, but it didn't work.</p>
<pre><code>top_chart_movies['genres1'] = [value for key, value in top_chart_movies['genres']]
</code></pre>
<p>Edit:
When I type the following code</p>
<pre><code>top_chart_movies['genres'].iloc[1]
</code></pre>
<p>I get:</p>
<pre><code>'[{"id": 18, "name": "Drama"}, {"id": 80, "name": "Crime"}]'
</code></pre>
<p>So, the values are stored as a string.</p>
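<p>Since each cell is a string, <code>ast.literal_eval</code> (or <code>json.loads</code>, given the double quotes) can turn it back into a list of dicts before extracting the names — a sketch with two sample rows:</p>

```python
import ast
import pandas as pd

top_chart_movies = pd.DataFrame({"genres": [
    '[{"id": 18, "name": "Drama"}, {"id": 80, "name": "Crime"}]',
    '[{"id": 35, "name": "Comedy"}]',
]})

# literal_eval turns the stored string back into Python objects safely
top_chart_movies["genres1"] = top_chart_movies["genres"].apply(
    lambda s: [d["name"] for d in ast.literal_eval(s)])
```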
|
<python><pandas><dictionary>
|
2023-01-23 03:19:32
| 2
| 2,064
|
Zenvega
|
75,205,383
| 6,676,101
|
How can we print a list of all printable ASCII characters to the console using python?
|
<p>Some ASCII characters are printable and some ASCII characters are not printable.</p>
<p>The non-printable ASCII characters are shown below:</p>
<pre class="lang-None prettyprint-override"><code> 0 NUL (Ctrl-@) NULL
1 SOH (Ctrl-A) START OF HEADING
2 STX (Ctrl-B) START OF TEXT
3 ETX (Ctrl-C) END OF TEXT
4 EOT (Ctrl-D) END OF TRANSMISSION
5 ENQ (Ctrl-E) ENQUIRY
6 ACK (Ctrl-F) ACKNOWLEDGE
7 BEL (Ctrl-G) BELL (Beep)
8 BS (Ctrl-H) BACKSPACE
9 HT (Ctrl-I) HORIZONTAL TAB
10 LF (Ctrl-J) LINE FEED
11 VT (Ctrl-K) VERTICAL TAB
12 FF (Ctrl-L) FORM FEED
13 CR (Ctrl-M) CARRIAGE RETURN
14 SO (Ctrl-N) SHIFT OUT
15 SI (Ctrl-O) SHIFT IN
16 DLE (Ctrl-P) DATA LINK ESCAPE
17 DC1 (Ctrl-Q) DEVICE CONTROL 1 (XON)
18 DC2 (Ctrl-R) DEVICE CONTROL 2
19 DC3 (Ctrl-S) DEVICE CONTROL 3 (XOFF)
20 DC4 (Ctrl-T) DEVICE CONTROL 4
21 NAK (Ctrl-U) NEGATIVE ACKNOWLEDGE
22 SYN (Ctrl-V) SYNCHRONOUS IDLE
23 ETB (Ctrl-W) END OF TRANSMISSION BLOCK
24 CAN (Ctrl-X) CANCEL
25 EM (Ctrl-Y) END OF MEDIUM
26 SUB (Ctrl-Z) SUBSTITUTE
27 ESC (Ctrl-[) ESCAPE
28 FS (Ctrl-\) FILE SEPARATOR
29 GS (Ctrl-]) GROUP SEPARATOR
30 RS (Ctrl-^) RECORD SEPARATOR
31 US (Ctrl-_) UNIT SEPARATOR
</code></pre>
<p>The question is, how do we print the printable ASCII characters to the console using Python?</p>
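<p>The printable ASCII characters are exactly codes 32 through 126, so a <code>chr</code> comprehension covers them; <code>string.printable</code> is close but also includes whitespace control characters:</p>

```python
import string

# chr(32) .. chr(126) are exactly the printable ASCII characters
printable = "".join(chr(code) for code in range(32, 127))
print(printable)

# string.printable additionally contains whitespace controls (\t, \n, \r, ...)
print(string.printable)
```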
|
<python><python-3.x><ascii>
|
2023-01-23 03:01:09
| 1
| 4,700
|
Toothpick Anemone
|
75,205,307
| 787,463
|
Python iterating through file, "Unresolved attribute reference 'strip' for class 'int'" warning: is it PyCharm or is it me?
|
<p>As part of something I'm writing, when iterating through a file (using enumerate so I can get line numbers) PyCharm gives me the warning <code>Unresolved attribute reference 'strip' for class 'int'</code> when I try to strip my strings.<br />
Stripped down to the essentials, the code is:</p>
<pre class="lang-python prettyprint-override"><code>with open("testfile.txt", "r") as my_list:
for line, line_num in enumerate(my_list, start=1):
line = line.strip() # < Warning is here
my_bits = line.split("|")
# What I'm doing here isn't important
# I'm just showing I use string methods on it without further errors
</code></pre>
<p>I can select to ignore it in PyCharm, so this isn't a world-ending situation. But I would like to know: could I be coding it better or is PyCharm just too quick to make assumptions?</p>
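<p>PyCharm is right: <code>enumerate</code> yields <code>(index, item)</code> pairs, so the loop binds the line <em>number</em> to <code>line</code> and the text to <code>line_num</code>. Swapping the two names fixes both the warning and the code — a sketch using <code>io.StringIO</code> in place of the file:</p>

```python
import io

my_list = io.StringIO("a|b\nc|d\n")                  # stands in for the opened file
for line_num, line in enumerate(my_list, start=1):   # (index, item) order
    line = line.strip()                              # line really is a str now
    my_bits = line.split("|")
```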
|
<python><pycharm>
|
2023-01-23 02:39:10
| 1
| 325
|
Slashee the Cow
|
75,205,306
| 2,635,209
|
Why can't I use `statistics.correlation`?
|
<p>I want to compute correlations in Python. My IDE suggests using <code>statistics.correlation</code>, but when I try it:</p>
<pre><code>import statistics
x = [1, 2, 3, 4, 5, 6]
p = statistics.correlation(x, x)
print(p)
</code></pre>
<p>I get an error that says <code>AttributeError: module 'statistics' has no attribute 'correlation'</code>.</p>
<p>Why? How can I fix it?</p>
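<p><code>statistics.correlation</code> was added in Python 3.10, so the attribute error means the interpreter running the code (perhaps the one PyCharm selected for the project) is older. Either point the project at 3.10+ or compute Pearson's r with the standard library — a sketch:</p>

```python
import math
import statistics
import sys

def pearson(x, y):
    """Pearson correlation coefficient, stdlib only (works on Python 3.8+)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5, 6]
print(pearson(x, x))                        # 1.0
if sys.version_info >= (3, 10):
    print(statistics.correlation(x, x))     # same result on 3.10+
```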
|
<python><pycharm>
|
2023-01-23 02:38:39
| 1
| 6,104
|
Thomas Carlton
|
75,205,254
| 14,499,516
|
How to fix the proxy server error ERR_TUNNEL_CONNECTION_FAILED with free-proxy?
|
<p>I have followed a tutorial to put all required modules for a modified Selenium webdriver into one single class. However, as I implemented the code, the following error keeps happening:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\s1982\Documents\GitHub\Preventing-Selenium-from-being-detected\master.py", line 149, in <module>
main()
File "c:\Users\s1982\Documents\GitHub\Preventing-Selenium-from-being-detected\master.py", line 140, in main
driverinstance.get("https://bot.sannysoft.com")
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 455, in get
self.execute(Command.GET, {"url": url})
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Python311\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_TUNNEL_CONNECTION_FAILED
(Session info: chrome=109.0.5414.75)
Stacktrace:
Backtrace:
(No symbol) [0x007F6643]
(No symbol) [0x0078BE21]
(No symbol) [0x0068DA9D]
(No symbol) [0x00689E22]
(No symbol) [0x0067FCFD]
(No symbol) [0x00681101]
(No symbol) [0x0067FFDD]
(No symbol) [0x0067F3BC]
(No symbol) [0x0067F2D8]
(No symbol) [0x0067DC68]
(No symbol) [0x0067E512]
(No symbol) [0x0068F75B]
(No symbol) [0x006F7727]
(No symbol) [0x006DFD7C]
(No symbol) [0x006F6B09]
(No symbol) [0x006DFB76]
(No symbol) [0x006B49C1]
(No symbol) [0x006B5E5D]
GetHandleVerifier [0x00A6A142+2497106]
GetHandleVerifier [0x00A985D3+2686691]
GetHandleVerifier [0x00A9BB9C+2700460]
GetHandleVerifier [0x008A3B10+635936]
(No symbol) [0x00794A1F]
(No symbol) [0x0079A418]
(No symbol) [0x0079A505]
(No symbol) [0x007A508B]
BaseThreadInitThunk [0x751E7D69+25]
RtlInitializeExceptionChain [0x76F1BB9B+107]
RtlClearBits [0x76F1BB1F+191]
</code></pre>
<p>The whole code is provided below. Though I suspect the problem comes from the <code>free-proxy</code> class, I am unsure of the actual cause. However, I believe the issue is at least reproducible.</p>
<pre><code>try:
import sys
import os
from fp.fp import FreeProxy
from fake_useragent import UserAgent
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time
print('all module are loaded ')
except Exception as e:
print("Error ->>>: {} ".format(e))
class Spoofer(object):
def __init__(self, country_id=['US'], rand=True, anonym=True):
self.country_id = country_id
self.rand = rand
self.anonym = anonym
self.userAgent, self.ip = self.get()
def get(self):
ua = UserAgent()
proxy = FreeProxy(country_id=self.country_id,
rand=self.rand, anonym=self.anonym).get()
ip = proxy.split("://")[1]
return ua.random, ip
class DriverOptions(object):
def __init__(self):
self.options = Options()
self.options.add_argument('--no-sandbox')
self.options.add_argument('--start-maximized')
self.options.add_argument('--disable-dev-shm-usage')
self.options.add_argument("--incognito")
self.options.add_argument('--disable-blink-features')
self.options.add_argument(
'--disable-blink-features=AutomationControlled')
self.options.add_experimental_option(
"excludeSwitches", ["enable-automation"])
self.options.add_experimental_option('useAutomationExtension', False)
self.options.add_argument('disable-infobars')
self.options.add_experimental_option(
"excludeSwitches", ["enable-logging"])
self.helperSpoofer = Spoofer()
self.options.add_argument(
'user-agent={}'.format(self.helperSpoofer.userAgent))
self.options.add_argument('--proxy-server=%s' % self.helperSpoofer.ip)
class WebDriver(DriverOptions):
def __init__(self, path=''):
DriverOptions.__init__(self)
self.driver_instance = self.get_driver()
def get_driver(self):
print("""
IP:{}
UserAgent: {}
""".format(self.helperSpoofer.ip, self.helperSpoofer.userAgent))
PROXY = self.helperSpoofer.ip
webdriver.DesiredCapabilities.CHROME['proxy'] = {
"httpProxy": PROXY,
"ftpProxy": PROXY,
"sslProxy": PROXY,
"noProxy": None,
"proxyType": "MANUAL",
"autodetect": False,
}
webdriver.DesiredCapabilities.CHROME['acceptSslCerts'] = True
os.environ['PATH'] += r"C:\Program Files (x86)"
driver = webdriver.Chrome(service=Service(), options=self.options)
driver.execute_script(
"Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
"source":
"const newProto = navigator.__proto__;"
"delete newProto.webdriver;"
"navigator.__proto__ = newProto;"
})
return driver
def main():
driver = WebDriver()
driverinstance = driver.driver_instance
driverinstance.get("https://www.expressvpn.com/what-is-my-ip")
time.sleep(5)
user_agent_check = driverinstance.execute_script(
"return navigator.userAgent;")
print(user_agent_check)
print("done")
if __name__ == "__main__":
main()
</code></pre>
<p>I have also checked this two questions on StackOverFlow, but both answers fail to solve the problem in my scenario: <a href="https://stackoverflow.com/questions/65341279/error-with-proxy-selenium-webdriver-python-err-tunnel-connection-failed">Error with Proxy Selenium Webdriver Python : ERR_TUNNEL_CONNECTION_FAILED</a> and <a href="https://stackoverflow.com/questions/65156932/selenium-proxy-server-argument-unknown-error-neterr-tunnel-connection-faile">Selenium proxy server argument - unknown error: net::ERR_TUNNEL_CONNECTION_FAILED</a>.</p>
<p>Thank you for all the help, and please let me know what I could have corrected in the code.</p>
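<p><code>ERR_TUNNEL_CONNECTION_FAILED</code> almost always means the proxy itself is dead or refuses HTTPS CONNECT — <code>FreeProxy</code> hands back addresses without guaranteeing they still work. Probing candidates before giving one to Selenium helps; a dependency-free sketch of the retry loop (in the real script, <code>check</code> would be a <code>requests.get</code> through the proxy with a short timeout):</p>

```python
def first_working_proxy(candidates, check, max_tries=10):
    """Return the first proxy address that `check` accepts, else None."""
    for proxy in candidates[:max_tries]:
        if check(proxy):
            return proxy
    return None

# In the real script, `check` could be something like:
#   lambda p: requests.get("https://httpbin.org/ip",
#                          proxies={"http": p, "https": p}, timeout=5).ok
```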
|
<python><selenium><proxy>
|
2023-01-23 02:22:58
| 2
| 699
|
hokwanhung
|
75,205,211
| 12,097,553
|
django channels: notifications of message not working properly
|
<p>I am writing a Django module that handles real-time messages paired with notifications. So far:<br />
a conversation can only take place between at most 2 users;
a notification should be sent after each message.</p>
<p>I am currently working on getting the notifications to show up, and the issue is that the notification gets rendered in the sender's profile page, not the recipient's. I can't see where my error is.</p>
<p>Here is what I have done:
consumers.py</p>
<pre><code>import json
from channels.generic.websocket import AsyncWebsocketConsumer
from channels.db import database_sync_to_async
from .models import Chat, ChatRoom
from accounts.models import User
from asgiref.sync import sync_to_async
from django.contrib.auth import get_user_model
from django.shortcuts import get_object_or_404
from asgiref.sync import async_to_sync

class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.room_id = self.scope['url_route']['kwargs']['room_id']
        self.room_group_name = 'chat_%s' % self.room_id
        await self.channel_layer.group_add(
            self.room_group_name,
            self.channel_name
        )
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(
            self.room_group_name,
            self.channel_name
        )

    async def receive(self, text_data):
        text_data_json = json.loads(text_data)
        message = text_data_json['message']
        recipient = text_data_json['recipient']
        self.user_id = self.scope['user'].id
        # Find room object
        room = await database_sync_to_async(ChatRoom.objects.get)(pk=self.room_id)
        print('ok1')
        # Create new chat object
        chat = Chat(
            content=message,
            sender=self.scope['user'],
            room=room,
        )
        print('ok2')
        await database_sync_to_async(chat.save)()
        print("ok3")
        # get the recipient user
        recipient_user = await database_sync_to_async(User.objects.get)(id=recipient)
        print("ok4")
        await sync_to_async(chat.recipient.add)(recipient_user.id)
        print("ok5")
        await database_sync_to_async(chat.save)()
        await self.channel_layer.group_send(
            self.room_group_name,
            {
                'type': 'chat_message',
                'message': message,
                'user_id': self.user_id
            })
        # Send a notification to the recipient
        await self.channel_layer.send(
            recipient_user.username,
            {
                'type': 'notification',
                'message': message
            }
        )
        await self.send_notification(f'New message from {self.user_id}')
        print('notification has been created')

    async def chat_message(self, event):
        message = event['message']
        user_id = event['user_id']
        await self.send(text_data=json.dumps({
            'message': message,
            'user_id': user_id
        }))

    async def send_notification(self, message):
        await self.send(text_data=json.dumps({
            'type': 'notification',
            'message': message
        }))
</code></pre>
<p>and here is my room.js code, which is the javascript code handling the logic to display the messages logic and the notifications:</p>
<pre><code>chatSocket.onmessage = function(e) {
    const data = JSON.parse(e.data);
    console.log("data", data)
    console.log("datatype", data.type)
    var message_type = data.type;
    console.log("message type", message_type)
    if (message_type === 'notification') {
        $("#notification-bar2").text(data.message);
        $("#notification-bar2").show();
    }
    if (message_type !== 'notification') {
        const messageElement = document.createElement('div')
        const userId = data['user_id']
        const loggedInUserId = JSON.parse(document.getElementById('user_id').textContent)
        console.log(loggedInUserId)
        messageElement.innerText = data.message
        if (userId === loggedInUserId) {
            messageElement.classList.add('relative', 'max-w-xl', 'px-4', 'py-2', 'text-gray-700', 'bg-gray-100', 'rounded', 'shadow', 'flex', 'justify-end', 'message', 'sender', 'block')
        } else {
            messageElement.classList.add('relative', 'max-w-xl', 'px-4', 'py-2', 'text-gray-700', 'bg-gray-100', 'rounded', 'shadow', 'flex', 'justify-start', 'message', 'receiver', 'block')
        }
        chatLog.appendChild(messageElement)
        if (document.querySelector('#emptyText')) {
            document.querySelector('#emptyText').remove()
        }
    }
};
</code></pre>
<p>I am super confused about why that would be, and being fairly new to channels, there is still quite a bit I don't understand well, so any kind of help is greatly appreciated! I am more than happy to provide additional code if necessary.</p>
|
<python><django><django-channels>
|
2023-01-23 02:12:51
| 2
| 1,005
|
Murcielago
|
75,205,191
| 3,561,314
|
Python thread calling won't finish when closing tkinter application
|
<p>I am making a timer using tkinter in python. The widget simply has a single button. This button doubles as the element displaying the time remaining. The timer has a thread that simply updates what time is shown on the button.</p>
<p>The thread simply uses a while loop that should stop when an <a href="https://stackoverflow.com/a/48611452">event</a> is set.
When the window is closed, I use <a href="https://stackoverflow.com/a/111160">protocol</a> to call a function that sets this event then attempts to join the thread. This works <em>most</em> of the time. However, if I close the program just as a certain call is being made, this fails and the thread continues after the window has been closed.</p>
<p>I'm aware of the <a href="https://stackoverflow.com/questions/43738907/thread-wont-let-tkinter-application-close">other</a> <a href="https://stackoverflow.com/questions/23739453/cannot-close-multithreaded-tkinter-app-on-x-button">similar</a> threads about closing threads when closing a tkinter window. But these answers are old, and I would like to avoid using thread.stop() if possible.</p>
<p>I tried reducing this as much as possible while still showing my intentions for the program.</p>
<pre><code>import tkinter as tk
from tkinter import TclError, ttk
from datetime import timedelta
import time
import threading
from threading import Event


def strfdelta(tdelta):
    # Includes microseconds
    hours, rem = divmod(tdelta.seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return str(hours).rjust(2, '0') + ":" + str(minutes).rjust(2, '0') + \
        ":" + str(seconds).rjust(2, '0') + ":" + str(tdelta.microseconds).rjust(6, '0')[0:2]


class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.is_running = False
        is_closing = Event()
        self.start_time = timedelta(seconds=4, microseconds=10, minutes=0, hours=0)
        self.current_start_time = self.start_time
        self.time_of_last_pause = time.time()
        self.time_of_last_unpause = None

        # region guisetup
        self.time_display = None
        self.geometry("320x110")
        self.title('Replace')
        self.resizable(False, False)
        box1 = self.create_top_box(self)
        box1.place(x=0, y=0)
        # endregion guisetup

        self.timer_thread = threading.Thread(target=self.timer_run_loop, args=(is_closing, ))
        self.timer_thread.start()

        def on_close():  # This occasionally fails when we try to close.
            is_closing.set()  # This used to be a boolean property self.is_closing. Making it an event didn't help.
            print("on_close()")
            try:
                self.timer_thread.join(timeout=2)
            finally:
                if self.timer_thread.is_alive():
                    self.timer_thread.join(timeout=2)
                    if self.timer_thread.is_alive():
                        print("timer thread is still alive again..")
                    else:
                        print("timer thread is finally finished")
                else:
                    print("timer thread finished2")
            self.destroy()  # https://stackoverflow.com/questions/111155/how-do-i-handle-the-window-close-event-in-tkinter

        self.protocol("WM_DELETE_WINDOW", on_close)

    def create_top_box(self, container):
        box = tk.Frame(container, height=110, width=320)
        box_m = tk.Frame(box, bg="blue", width=320, height=110)
        box_m.place(x=0, y=0)
        self.time_display = tk.Button(box_m, text=strfdelta(self.start_time), command=self.toggle_timer_state)
        self.time_display.place(x=25, y=20)
        return box

    def update_shown_time(self, time_to_show: timedelta = None):
        print("timer_run_loop must finish. flag 0015")  # If the window closes at this point, everything freezes
        self.time_display.configure(text=strfdelta(time_to_show))
        print("timer_run_loop must finish. flag 016")

    def toggle_timer_state(self):
        # update time_of_last_unpause if it has never been set
        if not self.is_running and self.time_of_last_unpause is None:
            self.time_of_last_unpause = time.time()
        if self.is_running:
            self.pause_timer()
        else:
            self.start_timer_running()

    def pause_timer(self):
        pass  # Uses self.time_of_last_unpause, Alters self.is_running, self.time_of_last_pause, self.current_start_time

    def timer_run_loop(self, event):
        while not event.is_set():
            if not self.is_running:
                print("timer_run_loop must finish. flag 008")
                self.update_shown_time(self.current_start_time)
                print("timer_run_loop must finish. flag 018")
        print("timer_run_loop() ending")

    def start_timer_running(self):
        pass  # Uses self.current_start_time; Alters self.is_running, self.time_of_last_unpause


if __name__ == "__main__":
    app = App()
    app.mainloop()
</code></pre>
<p>You don't even have to press the button for this bug to manifest, but it does take trial and error. I just run it and hit Alt+F4 until it happens.</p>
<p>If you run this and encounter the problem, you will see that "timer_run_loop must finish. flag 0015" is the last thing printed before we check if the thread has ended.
That means,
<code>self.time_display.configure(text=strfdelta(time_to_show))</code> hasn't finished yet. I think closing the tkinter window while a thread is using this tkinter button inside of it is somehow causing a problem.</p>
<p>There seems to be very little solid documentation about the configure method in tkinter.
Python's official <a href="https://docs.python.org/3/library/tkinter.html?highlight=configure" rel="noreferrer">documentation of tkinter</a> mentions the function only in passing. It's just used as a read-only dictionary.
<br>A tkinter style class gets a <a href="https://docs.python.org/3/library/tkinter.ttk.html#tkinter.ttk.Style.configure" rel="noreferrer">little bit of detail</a> about its configure method, but this is unhelpful. <br>The <a href="https://tkdocs.com/pyref/onepage.html" rel="noreferrer">tkdocs</a> lists configure aka config as one of the methods available for all widgets. <br>This <a href="https://coderslegacy.com/python/tkinter-config/" rel="noreferrer">tutorial article</a> seems to be the only place that shows the function actually being used. But it doesn't mention any possible problems or exceptions the method could encounter.
<p>Is there some resource sharing pattern I'm not using? Or is there a better way to end this thread?</p>
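<p>For reference, a pattern that sidesteps this kind of freeze is to never call widget methods (such as <code>configure</code>) from the worker thread at all: the worker only publishes strings to a <code>queue.Queue</code>, and the Tk main loop drains the queue via <code>after()</code>. A Tk-free sketch of that handoff (the <code>after()</code> rescheduling is indicated only in a comment, since it needs a running main loop, and the names here are illustrative, not from the question):</p>

```python
import queue
import threading
import time

def timer_worker(updates: queue.Queue, stop: threading.Event) -> None:
    """Worker never touches Tk; it only publishes formatted strings."""
    while not stop.is_set():
        updates.put(time.strftime("%H:%M:%S"))
        stop.wait(0.05)  # also keeps the loop responsive to the stop event

def drain(updates: queue.Queue) -> list:
    """In a real app this body runs in the Tk thread, rescheduled with
    root.after(50, poll); here it is called directly for illustration."""
    items = []
    while True:
        try:
            items.append(updates.get_nowait())
        except queue.Empty:
            return items

updates: queue.Queue = queue.Queue()
stop = threading.Event()
t = threading.Thread(target=timer_worker, args=(updates, stop))
t.start()
time.sleep(0.2)
stop.set()
t.join(timeout=2)   # joins promptly: the worker only waits on the event
shown = drain(updates)
```

<p>Because the worker never calls <code>configure()</code>, closing the window only has to set the event; the <code>join()</code> can no longer block behind a widget call.</p>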
|
<python><python-3.x><multithreading><tkinter>
|
2023-01-23 02:07:18
| 1
| 323
|
wimworks
|
75,205,126
| 679,081
|
Why does lldb only show "dyld" in each stack frame on macOS Ventura?
|
<p>I maintain a Python library that's written in C++ (using <a href="https://github.com/pybind/pybind11" rel="nofollow noreferrer">Pybind11</a>). For the past couple of years, I've been able to debug it just fine with <code>lldb</code>, just by compiling the extension in debug mode (i.e.: disabling optimization and including symbols, with <code>-g</code>).</p>
<p>However, as of <code>lldb-1300.0.42.3</code> on macOS Ventura 13.1 (22C65), <code>lldb</code> is completely unable to find the symbols and reports every stack frame as being from dyld:</p>
<pre><code>(lldb) bt
* thread #2, stop reason = signal SIGSTOP
frame #0: 0x0000000187110a00
frame #1: 0x00000001003700fc dyld
frame #2: 0x00000001003e2614 dyld
frame #3: 0x00000001003dfbac dyld
frame #4: 0x00000001003e3188 dyld
frame #5: 0x000000010033df20 dyld
frame #6: 0x000000010033d754 dyld
frame #7: 0x00000001003dfe28 dyld
frame #8: 0x00000001003e3188 dyld
frame #9: 0x000000010033df20 dyld
* frame #10: 0x000000010033d754 dyld
frame #11: 0x00000001003dfe28 dyld
frame #12: 0x00000001003e3188 dyld
frame #13: 0x000000010033df20 dyld
frame #14: 0x00000001003e2614 dyld
...
</code></pre>
<p>I've tried:</p>
<ul>
<li>Ensuring that debug symbols are compiled into the executable</li>
<li>Adding a breakpoint on <code>dlopen</code> (which does not get hit)</li>
<li>Using <code>image list</code> to show the loaded binary images (which does not include my Python extension)</li>
</ul>
|
<python><lldb><pybind11><macos-ventura>
|
2023-01-23 01:46:30
| 1
| 2,557
|
Peter Sobot
|
75,205,081
| 7,403,752
|
Identifying websites that use a paywall dynamically
|
<p>I am interested in identifying websites that use a paywall, not to bypass the paywall but for research purposes. I use the following code, which looks okay for some websites but not for all:</p>
<pre><code>import requests
url = "wsj.com"
response = requests.get(url)
if "pay wall" in response.text.lower() or "paywall" in response.text.lower() or "pay-wall" in response.text.lower():
print("paywall")
else:
print("no")
</code></pre>
<p>How can I capture dynamically loaded paywalls?</p>
|
<python><selenium><request>
|
2023-01-23 01:33:11
| 1
| 2,326
|
edyvedy13
|
75,205,015
| 874,380
|
How can I get the same WordNet output from the terminal in Python/NLTK?
|
<p>I have WordNet installed on my machine, and when I run the terminal command</p>
<pre><code>wn funny -synsa
</code></pre>
<p>I get the following output:</p>
<p><a href="https://i.sstatic.net/VWDs7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VWDs7.png" alt="enter image description here" /></a></p>
<p>Now I would like to get the same information within Python using the NLTK package. For example, if I run</p>
<pre><code>synset_name = 'amusing.s.02'
for l in wordnet.synset(synset_name).lemmas():
    print('Lemma: {}'.format(l.name()))
</code></pre>
<p>I get all the lemmas I see in the terminal output (i.e.: amusing, comic, comical, funny, laughable, mirthful, risible). However, what does the <code>"=> humorous (vs. humorless), humourous"</code> part in the terminal output mean and how can I get this with NLTK? It looks kind of like a hypernym, but adjectives don't have hypernym relationships.</p>
|
<python><nltk><wordnet>
|
2023-01-23 01:17:24
| 1
| 3,423
|
Christian
|
75,204,971
| 4,180,276
|
django.db.utils.ProgrammingError: relation "pdf_conversion" does not exist
|
<p>I am trying to add a Conversion model that is referenced by the FileUrlVal model, which in turn references my CustomUser model. But for some reason, I get the following error when loading the admin panel.</p>
<pre><code>Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Internal Server Error: /admin/pdf/conversion/
Traceback (most recent call last):
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "pdf_conversion" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "pdf_conversion"
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/options.py", line 616, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/sites.py", line 232, in inner
return view(request, *args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/options.py", line 1697, in changelist_view
cl = self.get_changelist_instance(request)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/options.py", line 736, in get_changelist_instance
return ChangeList(
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/views/main.py", line 100, in __init__
self.get_results(request)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/contrib/admin/views/main.py", line 235, in get_results
result_count = paginator.count
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/utils/functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/core/paginator.py", line 97, in count
return c()
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/models/query.py", line 412, in count
return self.query.get_count(using=self.db)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/models/sql/query.py", line 519, in get_count
number = obj.get_aggregation(using, ['__count'])['__count']
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/models/sql/query.py", line 504, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/john/PycharmProjects/pdf/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "pdf_conversion" does not exist
LINE 1: SELECT COUNT(*) AS "__count" FROM "pdf_conversion"
^
</code></pre>
<p>And these are my models</p>
<pre><code>class Conversion(models.Model):
    user = models.ForeignKey(CustomUser, on_delete=models.SET_NULL, related_name='user', null=True, blank=False)
    conversion_id = models.CharField(default=None, max_length=200)
    api_domain = models.CharField(max_length=250, null=True)


class FileUrlVal(models.Model):
    filename = models.CharField(max_length=250, null=True)
    url = models.CharField(max_length=250, null=True)
    tool = models.CharField(max_length=250, null=True)
    expired = models.BooleanField(default=False)
    created_at = models.DateTimeField(default=timezone.now)
    conversion = models.ForeignKey(Conversion, on_delete=models.SET_NULL, null=True)
</code></pre>
<p>Please note, I have already deleted all the migration files and tried to re-run them (<code>python manage.py makemigrations</code>, then <code>python manage.py migrate</code>); the migration runs fine. It just fails when I try to load it in the admin. My admin call looks like so:</p>
<pre><code>class ConversionAdmin(admin.ModelAdmin):
    list_display = (
        'api_domain',
    )
</code></pre>
<p>And when I run <code>manage.py showmigrations --verbosity 2 pdf</code> I get <code>pdf [X] 0001_initial (applied at 2021-10-31 02:17:08)</code>.
I would appreciate any help.</p>
|
<python><django><postgresql>
|
2023-01-23 01:01:54
| 0
| 2,954
|
nadermx
|
75,204,896
| 811,359
|
Google BigQuery SELECT count(*) on API returns 0 when the same count(*) query in the GBQ Console returns a count
|
<p>I'm examining the NOAA severe storms dataset in Google BigQuery, and I wanted to iterate over the tables to get the storms with property damage per year between 1960 and 2022.</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("secret.json")

for year in range(1960, 2023):
    query = f"""
        SELECT count(*) AS ct
        FROM `bigquery-public-data.noaa_historic_severe_storms.storms_{year}`
        WHERE damage_property > 0
    """
    query_job = client.query(query).result()
    for row in query_job:
        print(row)
</code></pre>
<p>The print statement prints results like:</p>
<pre><code>Row((552,), {'ct': 0})
Row((620,), {'ct': 0})
Row((401,), {'ct': 0})
</code></pre>
<p>It returns 0 for the count, but executing the same query in the GBQ console gets me the correct result. If I use a different query:</p>
<pre><code> query = f"""
SELECT *
FROM `bigquery-public-data.noaa_historic_severe_storms.storms_{year}`
WHERE damage_property > 0
"""
</code></pre>
<p>with the API, it works fine and gives me the correct results, the same results I get running it in the GBQ console. Why does <code>count(*)</code> behave unexpectedly, and how do I get the correct results?</p>
|
<python><google-bigquery>
|
2023-01-23 00:37:59
| 2
| 474
|
ahalbert
|
75,204,880
| 3,195,413
|
pandas RollingGroupBy agg 'size' of rolling group (not 'count')
|
<p>It is possible to perform a</p>
<pre><code>df.groupby.rolling.agg({'any_df_col': 'count'})
</code></pre>
<p>But how about a size agg?</p>
<p>'count' will produce a series with the 'running count' of rows that match the groupby condition (1, 1, 1, 2, 3...), but I would like to know, for all of those rows, the total number of rows that match the groupby (so, 1, 1, 3, 3, 3) in that case.</p>
<p>Usually in pandas I think this is achieved by using size instead of count.</p>
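<p>For reference, the plain (non-rolling) groupby distinction mentioned here: <code>count</code> counts non-NA values per column, while <code>size</code> counts rows regardless of NA. A quick sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1.0, None, 3.0]})
counts = df.groupby('g')['v'].count()  # non-NA values per group: a -> 1, b -> 1
sizes = df.groupby('g').size()         # rows per group:          a -> 2, b -> 1
```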
<p>This code may illustrate.</p>
<pre><code>import datetime as dt
import pandas as pd

df = pd.DataFrame({
    'time_ref': [
        dt.datetime(2023, 1, 1, 0, 30),
        dt.datetime(2023, 1, 1, 0, 30),
        dt.datetime(2023, 1, 1, 1),
        dt.datetime(2023, 1, 1, 2),
        dt.datetime(2023, 1, 1, 2, 15),
        dt.datetime(2023, 1, 1, 2, 16),
        dt.datetime(2023, 1, 1, 4),
    ],
    'value': [1, 2, 1, 10, 10, 10, 10],
    'type': [0, 0, 0, 0, 0, 0, 0]
})
df = df.set_index(pd.DatetimeIndex(df['time_ref']), drop=True)

by = ['value']
window = '1H'
gb_rolling = df.groupby(by=by).rolling(window=window)
agg_d = {'type': 'count'}
test = gb_rolling.agg(agg_d)
print(test)
# this works
</code></pre>
<pre><code> type
value time_ref
1 2023-01-01 00:30:00 1.0
2023-01-01 01:00:00 2.0
2 2023-01-01 00:30:00 1.0
10 2023-01-01 02:00:00 1.0
2023-01-01 02:15:00 2.0
2023-01-01 02:16:00 3.0
2023-01-01 04:00:00 1.0
</code></pre>
<pre><code># but this doesn't
agg_d = {'type': 'size'}
test = gb_rolling.agg(agg_d)
# AttributeError: 'size' is not a valid function for 'RollingGroupby' object
</code></pre>
<p>my desired output is to get the SIZE of the group ... this:</p>
<pre><code> type
value time_ref
1 2023-01-01 00:30:00 2
2023-01-01 01:00:00 2
2 2023-01-01 00:30:00 1
10 2023-01-01 02:00:00 3
2023-01-01 02:15:00 3
2023-01-01 02:16:00 3
2023-01-01 04:00:00 1
</code></pre>
<p>I cannot think of a way to do what I need without using the rolling functionality, because the relevant windows of my data are not determined by calendar time but by the time of the events themselves... if that assumption is wrong, and I can do that and get a 'size' without using rolling, that is OK, but as far as I know I have to use rolling since the time_ref of the event is the important thing for grouping with subsequent rows, not pure calendar time.</p>
<p>Thanks.</p>
|
<python><pandas><group-by><rolling-computation>
|
2023-01-23 00:33:21
| 3
| 599
|
10mjg
|
75,204,860
| 19,130,803
|
Dash: apply 3rd-party or self custom css and js to dash elements
|
<p>I am developing a dash application which is a multi-page app. I am using bootstrap theme.</p>
<p><strong>project structure</strong></p>
<pre><code>- proj
- app.py
- pages/
- index.py
- demo.py
- assets/
- 1_third_party_style.css
- 2_third_party_script.js
- 3_custom_script.js
</code></pre>
<p>In app.py, I have a sidebar containing nav items for the home (index) and demo pages, and a div that loads the contents of whichever page the user has clicked.</p>
<p><strong>app.py</strong></p>
<pre><code>// design
server = Flask(__name__)
app = Dash(
    __name__,
    server=server,
    suppress_callback_exceptions=True,
)


def serve_sidebar() -> html.Div:
    return html.Div(
        dbc.Nav(
            [
                dbc.NavLink("Home", href="/", active="exact"),
                dbc.NavLink("Demo", href="/demo", active="exact"),
            ],
            vertical=True,
            pills=True,
        ),
    )


def serve_content() -> html.Div:
    return html.Div(id="page-content", style=CONTENT_STYLE)


def serve_layout() -> html.Div:
    return html.Div(
        [
            dcc.Location(id="url", refresh=True),  // set true to refresh the page
            serve_sidebar(),
            serve_content(),
        ],
        id="div_main",
    )


app.layout = serve_layout


// callback
@callback(
    Output(component_id="page-content", component_property="children"),
    Input(component_id="url", component_property="pathname"),
)
def display_page(pathname: str) -> html.Div:
    if pathname == "/":
        return index.layout
    elif pathname == "/demo":
        return demo.layout
</code></pre>
<p><strong>index.py</strong></p>
<pre><code>// design
def serve_layout() -> html.Div:
    return html.Div(
        [
            "Welcome to my Dash app",
        ]
    )


layout = serve_layout()
</code></pre>
<p><strong>demo.py</strong></p>
<pre><code>// design
def serve_layout() -> html.Div:
    return html.Div(
        [
            html.Form(
                id="form_demo",
                name="form_demo",
                method="POST",
                action="/baz",
                children=html.Div(
                    id="div_demo",
                    children="This is Form contents",
                    className="bar",  // this is from 3rd party css
                ),
            )
        ]
    )


layout = serve_layout()
</code></pre>
<p><strong>custom_script.js</strong></p>
<pre><code>if (window.location.pathname == "/demo") {
    alert(window.location.pathname);
    if (window.PageTransitionEvent) {
        alert("ok-page-transitioned")
        if (window.document.getElementById("#form_demo")) {
            alert("ok-form-loaded")
            if (window.document.getElementById("#div_demo")) {
                alert("ok-div-demo-loaded")
            }
        }
    }
}
</code></pre>
<p>Once app is launched the default index(home) is loaded and displayed.</p>
<p>My goal is that when the user clicks on the demo page, a third-party style class (CSS) that I have defined in <code>className</code>, a third-party script (JS), and my own custom script (JS) are attached to the <strong>div_demo element</strong>. All assets are <strong>local</strong>.</p>
<p>Expected:</p>
<ul>
<li>all 4 alerts should pop-up (just for testing purpose, thereby assuring all assets are working(my goal)</li>
</ul>
<p><strong>case-1:</strong> (normal flow)</p>
<ul>
<li>Action:
<ul>
<li>User is on index page, and now clicks on demo page from sidebar.</li>
</ul>
</li>
<li>Observed:
<ul>
<li>Not a single alert is displayed</li>
<li>but page-contents for demo page is loaded</li>
</ul>
</li>
</ul>
<p><strong>case-2:</strong></p>
<ul>
<li>Action:
<ul>
<li>User is on index page, and now clicks on demo page from sidebar.</li>
<li>Now clicks on refresh button of the web browser.</li>
</ul>
</li>
<li>Observed:
<ul>
<li>alert for pathname and page transition is displayed</li>
<li>page-contents of demo page is loaded</li>
<li>but alert for form and div-demo is not displayed</li>
</ul>
</li>
</ul>
<p><strong>case-3:</strong></p>
<ul>
<li>Action:
<ul>
<li>User directly visit url as localhost/demo in the address bar and hits enter.</li>
</ul>
</li>
<li>Observed:
<ul>
<li>alert for pathname and page transition is displayed</li>
<li>page-contents of demo page is loaded</li>
<li>but alert for form and div-demo is not displayed</li>
</ul>
</li>
</ul>
<p>Why are the alerts for the form and div-demo not firing?
How can I apply third-party or my own CSS and JS (all local, inside <code>assets</code>) to Dash elements on different pages?
I searched a lot but had no luck.</p>
|
<javascript><python><html><plotly-dash>
|
2023-01-23 00:27:45
| 0
| 962
|
winter
|
75,204,805
| 11,649,973
|
Define multiple templates for a predefined block wagtail CRX
|
<p>I was moving a site over to wagtail and decided to use the <a href="https://docs.coderedcorp.com/wagtail-crx/" rel="nofollow noreferrer">codered extensions</a>. The library comes with a image-gallery content-block. I want to use this but define a few templates you can choose from in the admin UI.</p>
<p>You usually define a template in the meta section, but I noticed a dropdown in the admin UI for a template.
How do I add a template to that dropdown? Link to <a href="https://github.com/coderedcorp/coderedcms/blob/dev/coderedcms/blocks/content_blocks.py#L83" rel="nofollow noreferrer">the content block I want to change</a></p>
<p>I am interested in adding an HTML template and not inheriting from the content-block to change behaviour. (Unless inheriting is the only way to add a template to the dropdown.)</p>
|
<python><django><wagtail>
|
2023-01-23 00:11:10
| 2
| 425
|
GreatGaja
|
75,204,791
| 12,224,591
|
Faster Way to Implement Gaussian Smoothing? (Python 3.10, NumPy)
|
<p>I'm attempting to implement a Gaussian smoothing/flattening function in my <code>Python 3.10</code> script to flatten a set of XY-points. For each data point, I'm creating a Y buffer and a Gaussian kernel, which I use to flatten each one of the Y-points based on its neighbours.</p>
<p>Here are some sources on the Gaussian-smoothing method:</p>
<ul>
<li><a href="https://towardsdatascience.com/gaussian-smoothing-in-time-series-data-c6801f8a4dc3" rel="nofollow noreferrer">Source 1</a></li>
<li><a href="https://matthew-brett.github.io/teaching/smoothing_intro.html" rel="nofollow noreferrer">Source 2</a></li>
</ul>
<p>I'm using the <code>NumPy</code> module for my data arrays, and <code>MatPlotLib</code> to plot the data.</p>
<p>I wrote a minimal reproducible example, with some randomly-generated data, and each one of the arguments needed for the Gaussian function listed at the top of the <code>main</code> function:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import time


def main():
    dataSize = 1000
    yDataRange = [-4, 4]
    reachPercentage = 0.1
    sigma = 10
    phi = 0
    amplitude = 1

    testXData = np.arange(stop = dataSize)
    testYData = np.random.uniform(low = yDataRange[0], high = yDataRange[1], size = dataSize)

    print("Flattening...")
    startTime = time.time()
    flattenedYData = GaussianFlattenData(testXData, testYData, reachPercentage, sigma, phi, amplitude)
    totalTime = round(time.time() - startTime, 2)
    print("Flattened! (" + str(totalTime) + " sec)")

    plt.title(str(totalTime) + " sec")
    plt.plot(testXData, testYData, label = "Original Data")
    plt.plot(testXData, flattenedYData, label = "Flattened Data")
    plt.legend()
    plt.show()
    plt.close()


def GaussianFlattenData(xData, yData, reachPercentage, sigma, phi, amplitude):
    flattenedYData = np.empty(shape = len(xData), dtype = float)

    # For each data point, create a Y buffer and a Gaussian kernel, and flatten it based on its neighbours
    for i in range(len(xData)):
        gaussianCenter = xData[i]
        baseReachEdges = GetGaussianValueX((GetGaussianValueY(0, 0, sigma, phi, amplitude) * reachPercentage), 0, sigma, phi, amplitude)
        reachEdgeIndices = [FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[0]), xData)),
                            FindInArray(xData, GetClosestNum((gaussianCenter + baseReachEdges[1]), xData))]
        currDataScanNum = reachEdgeIndices[0] - reachEdgeIndices[1]

        # Creating Y buffer and Gaussian kernel...
        currYPoints = np.empty(shape = currDataScanNum, dtype = float)
        kernel = np.empty(shape = currDataScanNum, dtype = float)
        for j in range(currDataScanNum):
            currYPoints[j] = yData[j + reachEdgeIndices[1]]
            kernel[j] = GetGaussianValueY(j, (i - reachEdgeIndices[1]), sigma, phi, amplitude)

        # Dividing kernel by its sum...
        kernelSum = np.sum(kernel)
        for j in range(len(kernel)):
            kernel[j] = (kernel[j] / kernelSum)

        # Acquiring the current flattened Y point...
        newCurrYPoints = np.empty(shape = len(currYPoints), dtype = float)
        for j in range(len(currYPoints)):
            newCurrYPoints[j] = currYPoints[j] * kernel[j]
        flattenedYData[i] = np.sum(newCurrYPoints)

    return flattenedYData


def GetGaussianValueX(y, mu, sigma, phi, amplitude):
    x = ((sigma * np.sqrt(-2 * np.log(y / (amplitude * np.cos(phi))))) + mu)
    return [x, (mu - (x - mu))]


def GetGaussianValueY(x, mu, sigma, phi, amplitude):
    y = ((amplitude * np.cos(phi)) * np.exp(-np.power(((x - mu) / sigma), 2) / 2))
    return y


def GetClosestNum(base, nums):
    closestIdx = 0
    closestDiff = np.abs(base - nums[0])
    idx = 1
    while (idx < len(nums)):
        currDiff = np.abs(base - nums[idx])
        if (currDiff < closestDiff):
            closestDiff = currDiff
            closestIdx = idx
        idx += 1
    return nums[closestIdx]


def FindInArray(arr, value):
    for i in range(len(arr)):
        if (arr[i] == value):
            return i
    return -1


if (__name__ == "__main__"):
    main()
</code></pre>
<p>In the example above, I generate 1,000 random data points, between the ranges of -4 and 4. The <code>reachPercentage</code> variable is the percentage of the Gaussian amplitude above which the Gaussian values will be inserted into the kernel. The <code>sigma</code>, <code>phi</code> and <code>amplitude</code> variables are all inputs to the Gaussian function which will actually generate the Gaussians for each Y-data point to be smoothed.</p>
<p>I wrote some additional utility functions which I needed as well.</p>
<p>The script above works to smoothen the generated data, and I get the following plot:</p>
<p><a href="https://i.sstatic.net/j6K7g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j6K7g.png" alt="" /></a></p>
<p>Blue being the original data, and Orange being the flattened data.</p>
<p>However, it takes a surprisingly long time to smooth even small amounts of data. In the example above I generated 1,000 data points, and it takes ~8 seconds to flatten that. With datasets exceeding 10,000 in number, it can easily take over 10 minutes.</p>
<p>Since this is a very popular and well-known way of smoothing data, I was wondering why this script runs so slowly. I originally implemented this with standard Python <code>list</code>s, calling <code>append</code>, and it was extremely slow. I hoped that using <code>NumPy</code> arrays instead, without calling <code>append</code>, would make it faster, but that is not really the case.</p>
<p>Is there a way to speed up this process? Is there a Gaussian-smoothing function that already exists out there, that takes in the same arguments, and that could do the job faster?</p>
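<p>As a point of comparison, when the kernel width is the same for every point, the per-point Python loops can typically be collapsed into a single vectorized convolution (this is essentially what <code>scipy.ndimage.gaussian_filter1d</code> does). A minimal NumPy sketch, assuming evenly spaced X-points and a symmetric kernel truncated at a fixed number of sigmas; note that near the edges <code>mode='same'</code> zero-pads, unlike the shrinking windows in the loop version:</p>

```python
import numpy as np

def gaussian_smooth(y: np.ndarray, sigma: float, truncate: float = 4.0) -> np.ndarray:
    radius = int(truncate * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                 # normalize so flat data stays flat
    return np.convolve(y, kernel, mode='same')
```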
<p>Thanks for reading my post, any guidance is appreciated.</p>
|
<python><numpy><data-analysis>
|
2023-01-23 00:08:26
| 2
| 705
|
Runsva
|
75,204,703
| 5,671,447
|
Using KDDockWidgets in PySide2 with QML
|
<p>I am trying to get <a href="https://github.com/KDAB/KDDockWidgets/tree/2.0" rel="nofollow noreferrer">KDDockWidgets 2.0</a> into my PySide2 Python 3.10 application. I have followed the <a href="https://github.com/KDAB/KDDockWidgets/blob/2.0/README-bindings.md" rel="nofollow noreferrer">Python bindings guide</a> they provide (with great confusion on a Windows machine) and now am able to build KDDockWidgets with CMake, however, it is absolutely unclear what I do from here to actually be able to use the bindings in my QML code. I see in one of their <a href="https://github.com/KDAB/KDDockWidgets/blob/2.0/examples/qtquick/dockwidgets/main.qml" rel="nofollow noreferrer">examples</a> what would be imported in QML, but how would PySide2 know where to look for it?</p>
<p>The guide gives me the impression that it would generate a Python module in my virtual environment's site-packages, but there is no evidence of it doing that, nor do I see any errors at all.</p>
<p>My build script:</p>
<pre><code>setlocal
set VS_YEAR="2022"
set KDDW_REPO="KDDW"
set PY_ENV_NAME="env"
set PY_SITE_PKG="F:\Projects\Code\Python\boslin\env\Lib\site-packages\"
call "C:\Program Files\Microsoft Visual Studio\%VS_YEAR%\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
cd %KDDW_REPO%
cmake . -DCMAKE_BUILD_TYPE=Release -DKDDockWidgets_PYTHON_BINDINGS=True -DKDDockWidgets_PYTHON_BINDINGS_INSTALL_PREFIX=%PY_SITE_PKG%
endlocal
</code></pre>
|
<python><qt><cmake><qml><pyside2>
|
2023-01-22 23:44:57
| 0
| 359
|
Jacob Jewett
|
75,204,560
| 2,648,481
|
Consuming TaskGroup response
|
<p>In Python 3.11 it's suggested to use <code>TaskGroup</code> for spawning tasks rather than <code>gather</code>. Given that <code>gather</code> also returns the results of the coroutines, what's the best approach with <code>TaskGroup</code>?</p>
<p>Currently I have</p>
<pre class="lang-py prettyprint-override"><code>async with TaskGroup() as tg:
r1 = tg.create_task(foo())
r2 = tg.create_task(bar())
res = [r1.result(), r2.result()]
</code></pre>
<p>Is there a more concise approach that can be used to achieve the same result?</p>
|
<python><python-asyncio>
|
2023-01-22 23:10:19
| 2
| 1,106
|
johng
|
75,204,512
| 1,816,135
|
How to get more details on streaming data using Tweepy StreamingClient of Twitter APIv2
|
<p>I used Tweepy with the v1.1 Twitter streaming API to follow a Twitter account. When someone mentions this user under any tweet (say, a download-this-video bot), I get a lot of details about the tweet the user mentioned and the tweet that has the video (tweet info, video links, etc.).</p>
<p>used to be something like this:</p>
<pre><code>class StdOutListener(tweepy.Stream):
def on_data(self, data):
# process stream data here
struct = json.loads(data)
</code></pre>
<p><code>struct</code> has a lot of data in form of json</p>
<p>In API v2, I have the following:</p>
<pre><code>class StdOutListener(tweepy.StreamingClient):
def on_tweet(self, tweet):
print(tweet)
print(tweet.data)
print(tweet.entities)
print(f"{tweet.id} {tweet.created_at} ({tweet.author_id}): {tweet.text}")
</code></pre>
<p>I am getting only a little information, as follows:</p>
<pre><code>INFO:tweepy.streaming:Stream connected
@2hvQqjddfgfdUY96Ah5yW @7bdsfdsfds3h_bot
{'edit_history_tweet_ids': ['1617291424910688874'], 'id': '1617291424910688874', 'text': '@2hvQqjddfgfdUY96Ah5yW @7bdsfdsfds3h_bot'}
None
1617291424910688874 None (None): @2hvQqjddfgfdUY96Ah5yW @7bdsfdsfds3h_bot
</code></pre>
<p>How can I get all the data as it was in v1.1?</p>
<p>The rest of my code is like this:</p>
<pre><code>printer = StdOutListener(bearer_token)
# add new rules
rule = StreamRule(value="@7bdsfdsfds3h_bot")
printer.add_rules(rule)
printer.filter()
</code></pre>
|
<python><tweepy><twitter-api-v2>
|
2023-01-22 22:59:10
| 2
| 1,002
|
AKMalkadi
|
75,204,372
| 9,855,588
|
where to define variables for startup events fastapi
|
<p>In reference to -> <a href="https://fastapi.tiangolo.com/advanced/events/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/advanced/events/</a></p>
<pre><code>from fastapi import FastAPI
app = FastAPI()
items = {}
@app.on_event("startup")
async def startup_event():
items["foo"] = {"name": "Fighters"}
items["bar"] = {"name": "Tenders"}
@app.get("/items/{item_id}")
async def read_items(item_id: str):
return items[item_id]
</code></pre>
<p>Why is it better to define a dict at the root level and populate the keys from the startup_event function, instead of making a <code>global</code> declaration from within the startup_event function?</p>
|
<python><python-3.x><fastapi>
|
2023-01-22 22:30:32
| 0
| 3,221
|
dataviews
|
75,204,318
| 6,077,239
|
How to do conditional scaling in polars?
|
<p>I have a Polars dataframe and I want to scale a column depending on whether its value is positive or negative, dividing by the respective (positive or negative) sum, windowed over another column.</p>
<p>Below is an example for illustration for what I tried.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"foo": [1, 3, -2, -1, 1, 2],
"bar": [1, 1, 1, 1, 2, 2],
}
)
foo_is_neg = pl.col("foo") < 0
df.with_columns(
pl.when(foo_is_neg)
.then(pl.col("foo") / pl.col("foo").filter(foo_is_neg).sum())
.otherwise(pl.col("foo") / pl.col("foo").filter(~foo_is_neg).sum())
.over("bar")
.alias("foo2")
)
df
</code></pre>
<p>After running through the code above, I got the following output which seems to be what I want.</p>
<pre><code>shape: (6, 3)
┌─────┬─────┬──────────┐
│ foo ┆ bar ┆ foo2 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪══════════╡
│ 1 ┆ 1 ┆ 0.25 │
│ 3 ┆ 1 ┆ 0.75 │
│ -2 ┆ 1 ┆ 0.666667 │
│ -1 ┆ 1 ┆ 0.333333 │
│ 1 ┆ 2 ┆ 0.333333 │
│ 2 ┆ 2 ┆ 0.666667 │
└─────┴─────┴──────────┘
</code></pre>
<p>Is there a canonical/better way to do it? If so, what should it be like?</p>
<p>Thanks for your help!</p>
|
<python><python-polars>
|
2023-01-22 22:19:37
| 1
| 1,153
|
lebesgue
|
75,204,298
| 10,400,238
|
Find index of longest consectuive timespan in pandas time series
|
<p>I have a time series with gaps (NaN) and I want to find the start and stop index of the longest consecutive sequence in which no NaN occurs. I have no clue how to do that.</p>
<pre><code>values = [5468.0,
5946.0,
np.nan,
6019.0,
5797.0,
5879.0,
np.nan,
5706.0,
5986.0,
6228.0,
6285.0,
np.nan,
5667.0,
5886.0,
6380.0,
5988.0,
6290.0,
5899.0,
6429.0,
6177.0]
dates = [Timestamp('2018-10-17 13:30:00'),
Timestamp('2018-10-17 14:00:00'),
Timestamp('2018-10-17 14:30:00'),
Timestamp('2018-10-17 15:00:00'),
Timestamp('2018-10-17 15:30:00'),
Timestamp('2018-10-17 16:00:00'),
Timestamp('2018-10-17 16:30:00'),
Timestamp('2018-10-17 17:00:00'),
Timestamp('2018-10-17 17:30:00'),
Timestamp('2018-10-17 18:00:00'),
Timestamp('2018-10-17 18:30:00'),
Timestamp('2018-10-17 19:00:00'),
Timestamp('2018-10-17 19:30:00'),
Timestamp('2018-10-17 20:00:00'),
Timestamp('2018-10-17 20:30:00'),
Timestamp('2018-10-17 21:00:00'),
Timestamp('2018-10-17 21:30:00'),
Timestamp('2018-10-17 22:00:00'),
Timestamp('2018-10-17 22:30:00'),
Timestamp('2018-10-17 23:00:00')]
</code></pre>
<p>I found a lot of solutions here on Stack Overflow, but they all use daily data and count with ±1 day, which doesn't work with my 30-minute frequency.</p>
<p>I know that I can get True/False with <code>isnull()</code> and then <code>groupby()</code>, or use <code>dates.diff()[1:]</code>, but I don't know enough to turn that into a solution.</p>
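<p>To illustrate the direction I was thinking of, here is a toy-sized sketch along the <code>isnull()</code>/<code>cumsum()</code> lines (sample values are made up; I'm not sure this is the idiomatic way):</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(
    [5468.0, 5946.0, np.nan, 6019.0, 5797.0, 5879.0, np.nan, 5706.0],
    index=pd.date_range("2018-10-17 13:30", periods=8, freq="30min"),
)

valid = s.notna()
run_id = (~valid).cumsum()[valid]         # same id for consecutive non-NaN rows
longest = run_id.value_counts().idxmax()  # id of the longest run
run = s[valid][run_id == longest]
start, stop = run.index[0], run.index[-1]
```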
|
<python><pandas><time-series><subsequence>
|
2023-01-22 22:15:14
| 1
| 488
|
till Kadabra
|
75,204,255
| 16,845
|
How to force a platform wheel using build and pyproject.toml?
|
<p>I am trying to force a Python3 non-universal wheel I'm building to be a platform wheel, despite not having any native build steps that happen during the distribution-packaging process.</p>
<p>The wheel will include an OS-specific shared library, but that library is built and copied into my package directory by a larger build system that my package knows nothing about. By the time my Python3 package is ready to be built into a wheel, my build system has already built the native shared library and copied it into the package directory.</p>
<p><a href="https://stackoverflow.com/questions/45150304/how-to-force-a-python-wheel-to-be-platform-specific-when-building-it">This</a> SO post details a solution that works for the now-deprecated <code>setup.py</code> approach, but I'm unsure how to accomplish the same result using the new and now-standard <code>build</code> / <code>pyproject.toml</code> system:</p>
<pre><code>mypackage/
mypackage.py # Uses platform.system and importlib to load the local OS-specific library
pyproject.toml
mysharedlib.so # Or .dylib on macOS, or .dll on Windows
</code></pre>
<p>Based on the host OS performing the build, I would like the resulting wheel to be <code>manylinux</code>, <code>macos</code>, or <code>windows</code>.</p>
<p>I build with <code>python3 -m build --wheel</code>, and that always emits <code>mypackage-0.1-py3-none-any.whl</code>.</p>
<p>What do I have to change to force the build to emit a platform wheel?</p>
|
<python><setuptools><python-packaging><python-wheel><pep517>
|
2023-01-22 22:05:44
| 1
| 1,216
|
Charles Nicholson
|
75,204,254
| 14,090,167
|
Build conda package cryptography:37.0.2 with python 3.8.0
|
<p>I am attempting to build a conda package for <a href="https://pypi.org/project/cryptography/37.0.2/" rel="nofollow noreferrer">cryptography</a> 37.0.2 for osx-arm64.
The build fails if I try to build it with a dependency on Python 3.8; the same build is successful if I use Python 3.10. Can anyone help me build the cryptography 37.0.2 library (conda package) compatible with Python 3.8 for the osx-arm64 architecture? I am completely blocked and not sure how to proceed.</p>
<p>I am using the following conda recipe:</p>
<pre><code>package:
name: cryptography
version: 37.0.2
source:
url: https://files.pythonhosted.org/packages/80/e2/89a180c6dc1c3fe33f7f8965da6401cf0b31f440f4e59e9b024b6f54eb0c/cryptography-37.0.2-cp36-abi3-macosx_10_10_universal2.whl
build:
number: '0'
script: python -m pip install --no-deps https://files.pythonhosted.org/packages/80/e2/89a180c6dc1c3fe33f7f8965da6401cf0b31f440f4e59e9b024b6f54eb0c/cryptography-37.0.2-cp36-abi3-macosx_10_10_universal2.whl
string: py38_0
requirements:
build:
- python 3.8
- ca-certificates
- cffi 1.15
- pycparser 2.21
- python_abi 3.10
- openssl 1.1
run:
- python 3.8
- ca-certificates
- cffi 1.15
- pycparser 2.21
- python_abi 3.10
- openssl 1.1
test:
imports:
- cryptography
about:
description: https://pypi.org/project/cryptography/#files
dev_url: https://pypi.org/project/cryptography/#files
home: https://pypi.org/project/cryptography/#files
extra:
copy_test_source_files: true
final: true
</code></pre>
<p>Details on my system:</p>
<pre><code>% conda --version
conda 22.11.1
% python --version
Python 3.8.15
% uname -a
Darwin TESTUSER-KPQ05P 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:20:05 PDT 2022; root:xnu-8020.140.49~2/RELEASE_ARM64_T8101 arm64
MacOS Monterey
Version 12.6.1
</code></pre>
<p>Error message received when running <code>conda build</code>
(in my experience the error message doesn't point in the right direction the majority of the time):</p>
<pre><code>Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
Leaving build/test directories:
Work:
/Users/TESTUSER/miniconda3/envs/conda-build/conda-bld/work
Test:
/Users/TESTUSER/miniconda3/envs/conda-build/conda-bld/test_tmp
Leaving build/test environments:
Test:
source activate /Users/TESTUSER/miniconda3/envs/conda-build/conda-bld/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold
Build:
source activate /Users/TESTUSER/miniconda3/envs/conda-build/conda-bld/_build_env
Traceback (most recent call last):
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/environ.py", line 796, in get_install_actions
actions = install_actions(prefix, index, specs, force=True)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/common/io.py", line 84, in decorated
return f(*args, **kwds)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/plan.py", line 470, in install_actions
txn = solver.solve_for_transaction(prune=prune, ignore_pinned=not pinned)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/core/solve.py", line 136, in solve_for_transaction
unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier,
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/core/solve.py", line 179, in solve_for_diff
final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned,
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/core/solve.py", line 303, in solve_final_state
ssc = self._run_sat(ssc)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/common/io.py", line 84, in decorated
return f(*args, **kwds)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/core/solve.py", line 848, in _run_sat
ssc.solution_precs = ssc.r.solve(tuple(final_environment_specs),
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/common/io.py", line 84, in decorated
return f(*args, **kwds)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/resolve.py", line 1326, in solve
self.find_conflicts(specs, specs_to_add, history_specs)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda/resolve.py", line 354, in find_conflicts
raise UnsatisfiableError(bad_deps, strict=strict_channel_priority)
conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package pycparser conflicts for:
pycparser=2.21
cffi=1.15 -> pycparser
Package zlib conflicts for:
python=3.8 -> zlib[version='>=1.2.11,<1.3.0a0']
cffi=1.15 -> python[version='>=3.9,<3.10.0a0'] -> zlib[version='>=1.2.11,<1.3.0a0']
pycparser=2.21 -> python[version='2.7.*|>=3.4'] -> zlib[version='>=1.2.11,<1.3.0a0']
python=3.8 -> sqlite[version='>=3.37.1,<4.0a0'] -> zlib[version='>=1.2.12,<1.3.0a0']
Package ca-certificates conflicts for:
python=3.8 -> openssl[version='>=1.1.1s,<1.1.2a'] -> ca-certificates
ca-certificates
Package openssl conflicts for:
cffi=1.15 -> python[version='>=3.10,<3.11.0a0'] -> openssl[version='>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1o,<1.1.2a|>=1.1.1q,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.5,<4.0a0|>=3.0.3,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']
openssl=1.1
pycparser=2.21 -> python[version='2.7.*|>=3.4'] -> openssl[version='>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1o,<1.1.2a|>=1.1.1q,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.5,<4.0a0|>=3.0.3,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']
python=3.8 -> openssl[version='>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']
python_abi=3.10 -> python=3.10 -> openssl[version='>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1o,<1.1.2a|>=1.1.1q,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.5,<4.0a0|>=3.0.3,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']
Package python conflicts for:
python=3.8
python_abi=3.10 -> python=3.10
cffi=1.15 -> python[version='3.8.12|3.9.10|>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.8,<3.9.0a0|>=3.11,<3.12.0a0|>=3.11,<3.12.0a0',build='0_73_pypy|*_cpython|0_73_pypy']
pycparser=2.21 -> python[version='2.7.*|>=3.4']
cffi=1.15 -> pycparser -> python[version='2.7.*|>=3.4|3.10.*|3.9.*|3.8.*|3.11.*']
Package tzdata conflicts for:
pycparser=2.21 -> python[version='2.7.*|>=3.4'] -> tzdata
cffi=1.15 -> python[version='>=3.10,<3.11.0a0'] -> tzdata
python_abi=3.10 -> python=3.10 -> tzdata
Package python_abi conflicts for:
cffi=1.15 -> python_abi[version='3.10.*|3.9.*|3.8.*|3.11.*|3.9|3.8',build='*_pypy39_pp73|*_cp311|*_cp310|*_cp39|*_cp38|*_pypy38_pp73']
python_abi=3.10
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/TESTUSER/miniconda3/envs/conda-build/bin/conda-build", line 11, in <module>
sys.exit(main())
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 495, in main
execute(sys.argv[1:])
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 475, in execute
outputs = api.build(
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/api.py", line 180, in build
return build_tree(
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/build.py", line 3097, in build_tree
packages_from_this = build(metadata, stats,
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/build.py", line 2126, in build
create_build_envs(top_level_pkg, notest)
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/build.py", line 1963, in create_build_envs
build_actions = environ.get_install_actions(m.config.build_prefix,
File "/Users/TESTUSER/miniconda3/envs/conda-build/lib/python3.8/site-packages/conda_build/environ.py", line 798, in get_install_actions
raise DependencyNeedsBuildingError(exc, subdir=subdir)
conda_build.exceptions.DependencyNeedsBuildingError: Unsatisfiable dependencies for platform osx-arm64: {"zlib[version='>=1.2.12,<1.3.0a0']", "python[version='2.7.*|>=3.4|3.10.*|3.9.*|3.8.*|3.11.*']", 'tzdata', "python_abi[version='3.10.*|3.9.*|3.8.*|3.11.*|3.9|3.8',build='*_pypy39_pp73|*_cp311|*_cp310|*_cp39|*_cp38|*_pypy38_pp73']", "openssl[version='>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1o,<1.1.2a|>=1.1.1q,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.5,<4.0a0|>=3.0.3,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']", 'pycparser', "zlib[version='>=1.2.11,<1.3.0a0']", "openssl[version='>=1.1.1g,<1.1.2a|>=1.1.1h,<1.1.2a|>=1.1.1i,<1.1.2a|>=1.1.1j,<1.1.2a|>=1.1.1k,<1.1.2a|>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']", "python[version='2.7.*|>=3.4']", "openssl[version='>=1.1.1l,<1.1.2a|>=1.1.1n,<1.1.2a|>=1.1.1o,<1.1.2a|>=1.1.1q,<1.1.2a|>=1.1.1s,<1.1.2a|>=3.0.7,<4.0a0|>=3.0.5,<4.0a0|>=3.0.3,<4.0a0|>=3.0.2,<4.0a0|>=3.0.0,<4.0a0']", "python[version='3.8.12|3.9.10|>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.8,<3.9.0a0|>=3.11,<3.12.0a0|>=3.11,<3.12.0a0',build='0_73_pypy|*_cpython|0_73_pypy']", 'python=3.10', 'ca-certificates'}
(conda-build) TESTUSER@TESTUSER-KPQ05P cryptography %
</code></pre>
|
<python><conda><pypi><conda-forge>
|
2023-01-22 22:05:37
| 1
| 717
|
Chinmaya Biswal
|
75,204,243
| 3,132,087
|
Python Openpyxl library cannot deal with automatic row height
|
<p>I'm currently using openpyxl in a Django project to create an Excel report.
I'm starting from a blank Excel template in which column C has text wrap enabled.
In fact, when I open the template and manually populate a cell, I correctly get this:</p>
<p><a href="https://i.sstatic.net/kg7ta.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kg7ta.png" alt="enter image description here" /></a></p>
<p>But when I run this trial code</p>
<pre><code>wb = openpyxl.load_workbook(fileExcel)
sh = wb["Rapportini"]
sh["C3"]="very very very very very very very very very very long row"
wb.save(fileExcel)
</code></pre>
<p>this is the result</p>
<p><a href="https://i.sstatic.net/9Vwil.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Vwil.png" alt="enter image description here" /></a></p>
<p>I know openpyxl (strangely) cannot set automatic row height. I also tried to set <code>wrap_text = True</code> on the cell, but with no luck... any ideas?</p>
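<p>For completeness, this is a fuller version of my trial, re-applying wrapping explicitly and estimating the height by hand (the 15 pt per line and ~30 characters per line are guesses of mine):</p>

```python
from openpyxl import Workbook
from openpyxl.styles import Alignment

wb = Workbook()  # stand-in for my loaded template
sh = wb.active
cell = sh["C3"]
cell.value = "very very very very very very very very very very long row"
cell.alignment = Alignment(wrap_text=True)  # re-apply wrapping after writing

# openpyxl cannot auto-size rows, so estimate the height manually
lines = len(str(cell.value)) // 30 + 1
sh.row_dimensions[3].height = 15 * lines
```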
|
<python><django><openpyxl>
|
2023-01-22 22:04:44
| 1
| 829
|
Stefano Losi
|
75,204,137
| 16,997,421
|
How to retrieve email based on sender, sent date and subject using graph api with python3
|
<p>I'm working on a simple Python script to help me retrieve email from an Office 365 user mailbox based on the following parameters: <code>sentdatetime, sender or from address and subject</code>.</p>
<p>Currently I am able to get the access token using MSAL; however, the email API call does not work and I get an <code>error 401</code>. The query works from Graph Explorer, but it does not work in the script.</p>
<p>My app registration is assigned application permissions for Mail; I selected everything under Mail permissions. See the permissions below:
<a href="https://i.sstatic.net/L9lWj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L9lWj.png" alt="enter image description here" /></a></p>
<p>Below is my script so far. What am I doing wrong?</p>
<pre><code>import msal
import json
import requests
def get_access_token():
tenantID = '9a13fbbcb90fa2'
authority = 'https://login.microsoftonline.com/' + tenantID
clientID = 'xxx'
clientSecret = 'yyy'
scope = ['https://outlook.office365.com/.default']
app = msal.ConfidentialClientApplication(clientID, authority=authority, client_credential = clientSecret)
access_token = app.acquire_token_for_client(scopes=scope)
return access_token
# token block
access_token = get_access_token()
token = access_token['access_token']
# Set the parameters for the email search
date_sent = "2023-01-22T21:13:24Z"
mail_subject = "Test Mail"
sender = "bernardberbell@gmail.com"
mailuser = "bernardmwanza@bernardcomms.onmicrosoft.com"
# Construct the URL for the Microsoft Graph API
url = "https://graph.microsoft.com/v1.0/users/{}/mailFolders/Inbox/Messages?$select=id,sentDateTime,subject,from&$filter=contains(subject, '{}') and from/emailAddress/address eq '{}' and SentDateTime gt '{}'".format(mailuser, mail_subject, sender, date_sent)
# Set the headers for the API call
headers = {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json"
}
# Send the API request and get the response
response = requests.get(url, headers=headers)
print(response)
# # Parse the response as JSON
# data = json.loads(response.text)
# print(data)
</code></pre>
<p>Below is the error
<a href="https://i.sstatic.net/L4Zwc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L4Zwc.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><email><microsoft-graph-api><microsoft-graph-mail>
|
2023-01-22 21:44:30
| 1
| 352
|
Bernietechy
|
75,204,066
| 13,142,245
|
Python: Extract all possible mutually exclusive pairs?
|
<p>I have an assignment problem where I have N participants and need to find all possible 2-person assignments where every participant is assigned to exactly one pair.</p>
<p>When I use <code>list(combinations(range(100), 2))</code> I get a "flat" list of 4,950 items, each a pair of the form (i, j).</p>
<p>This is not what I'm looking for; there needs to be three levels of this array:
<code>[ [(1,2), (3,4),...], [(1,3), (2,4),...], ... ]</code></p>
<p>What's the best way to accomplish this effect in Python?</p>
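<p>To illustrate the structure I'm after, here is a sketch I've been experimenting with, based on the circle method of round-robin scheduling (the function name is mine):</p>

```python
def round_robin_rounds(players):
    """Rounds of pairs; each round pairs every player exactly once."""
    players = list(players)
    if len(players) % 2:
        players.append(None)  # odd count: one player sits out per round
    n = len(players)
    rounds = []
    for _ in range(n - 1):
        rounds.append(
            [(players[i], players[n - 1 - i]) for i in range(n // 2)
             if players[i] is not None and players[n - 1 - i] is not None]
        )
        # keep the first player fixed, rotate the rest one step
        players = [players[0], players[-1]] + players[1:-1]
    return rounds

rounds = round_robin_rounds(range(1, 5))
# [[(1, 4), (2, 3)], [(1, 3), (4, 2)], [(1, 2), (3, 4)]]
```

<p>Across all rounds, every unordered pair appears exactly once, so for 100 participants this yields 99 rounds of 50 pairs covering all 4,950 pairs.</p>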
|
<python><graph><combinations><combinatorics>
|
2023-01-22 21:31:57
| 2
| 1,238
|
jbuddy_13
|
75,203,970
| 4,936,725
|
How to add all requirements.txt dependencies to py_library in Bazel BUILD file @rules_python
|
<p>When using Bazel @rules_python, there is this handy target-generator helper <a href="https://github.com/bazelbuild/rules_python/blob/1722988cc407b08a4e7770295452076706823f9d/docs/pip.md#pip_parse" rel="nofollow noreferrer">pip_parse</a> that generates library targets for every dependency in <code>requirements.txt</code>, which is nice:</p>
<pre><code>load("@rules_python//python:pip.bzl", "pip_parse")
pip_parse(
name = "pip_deps",
requirements_lock = ":requirements.txt",
)
load("@pip_deps//:requirements.bzl", "install_deps")
install_deps()
</code></pre>
<p>Is there a way to import all those deps into a <code>py_library</code> in one step? Otherwise we essentially duplicate the <code>requirements.txt</code> content:</p>
<pre><code>load("@pip_deps//:requirements.bzl", "requirement")
py_library(
name = "bar",
...
deps = [
"//my/other:dep",
requirement("requests"), # <----------------- duplicate
requirement("numpy"), # <----------------- duplicate
],
)
</code></pre>
<p>Edit: I found the helper <code>pip_repositories</code>, which <a href="https://github.com/bazelbuild/rules_python/blob/3c4ed56db7de85b76d870ffab9852e376197d30f/docs/pip_repository.md#pip_repository" rel="nofollow noreferrer">can be used as follows</a>:</p>
<pre><code>load("@foo//:requirements.bzl", "all_requirements")
py_binary(
name = "baz",
...
deps = [
":foo",
] + all_requirements,
)
</code></pre>
<p>but what is the difference?</p>
|
<python><pip><dependencies><bazel>
|
2023-01-22 21:15:30
| 0
| 410
|
manews
|
75,203,893
| 3,971,855
|
Not getting output when using terms to filter on multiple values
|
<p>I have a table in OpenSearch in which the type of every field is "text".</p>
<p>This is how my table looks like</p>
<p><a href="https://i.sstatic.net/B0F15.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B0F15.png" alt="enter image description here" /></a></p>
<p>Now, the <strong>query (q1)</strong> that I am running in OpenSearch looks like this; I am not getting any output. But when I run <strong>query q2</strong>, I do get output.</p>
<pre><code>q1 = {"size":10,"query":{"bool":{"must":[{"multi_match":{"query":"cen","fields":["name","alias"],"fuzziness":"AUTO"}}],"filter":[{"match_phrase":{"category":"Specialty"}},{"match_phrase":{"prov_type":"A"}},{"match_phrase":{"prov_type":"C"}}]}}}
q2 = {"size":10,"query":{"bool":{"must":[{"multi_match":{"query":"cen","fields":["name","alias"],"fuzziness":"AUTO"}}],"filter":[{"match_phrase":{"category":"Specialty"}},{"match_phrase":{"prov_type":"A"}}]}}}
</code></pre>
<p>Now I want to apply multiple filters on <code>prov_type</code>. I have also tried using <code>terms</code> with <code>prov_type</code> as a list like <code>['A','B']</code>.</p>
<p>Can anyone please explain how to apply <strong>multiple filters</strong> on values of a single column in OpenSearch/Elasticsearch? <strong>The datatype of every field is text</strong>.
I have already tried this - <a href="https://stackoverflow.com/questions/48614104/how-to-filter-with-multiple-fields-and-values-in-elasticsearc">How to filter with multiple fields and values in elasticsearch?</a></p>
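<p>For reference, the <code>terms</code> variant I tried looks roughly like this, targeting the <code>.keyword</code> subfield from the mapping below (I'm not sure that's the right field to target):</p>

```python
# OR across values of one field via "terms", instead of AND-ed match_phrase
q3 = {
    "size": 10,
    "query": {
        "bool": {
            "must": [{"multi_match": {"query": "cen",
                                      "fields": ["name", "alias"],
                                      "fuzziness": "AUTO"}}],
            "filter": [
                {"match_phrase": {"category": "Specialty"}},
                {"terms": {"prov_type.keyword": ["A", "C"]}},
            ],
        }
    },
}
```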
<p>Mapping for the index</p>
<pre><code>GET index/_mapping
{
"spec_proc_comb_exp" : {
"mappings" : {
"properties" : {
"alias" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"category" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"prov_type" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"specialty_code" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
</code></pre>
<p>Please let me know in case you need anymore information</p>
|
<python><elasticsearch><opensearch><amazon-elasticsearch><amazon-opensearch>
|
2023-01-22 21:03:05
| 1
| 309
|
BrownBatman
|
75,203,642
| 9,822,004
|
Import problems in python unittest
|
<p>I'm new to Python and trying to get a unit test working, but the imports in the test files don't work.
Folder structure:</p>
<pre><code>toyProjects
└── pythonProj
├── mainpack
│ ├── __init__.py
│ ├── MyClass.py
│ └── setup.py
└── tests
├── __init__.py
└── test_MyClass.py
</code></pre>
<p>MyClass.py:</p>
<pre><code>class MyClass:
def __init__(self, x):
self.x = x
def someMethod(self):
print("Called MyClass.someMethod");
</code></pre>
<p>setup.py:</p>
<pre><code>def myFunction():
print("Called setup.myFunction");
</code></pre>
<p>test_MyClass.py:</p>
<pre><code>import unittest
import setup
from MyClass import MyClass
class TestMyClass(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.my = MyClass(4);
def test_something(self):
self.my.someMethod();
setup.myFunction();
</code></pre>
<p>In case it matters: there is a mainfile.py in pythonProj (not shown in the diagram above because I assume it's not relevant) which runs fine and finds all the files. Its contents are:</p>
<pre><code>from mainpack import setup
from mainpack.MyClass import MyClass
print("Program start");
setup.myFunction()
myInst = MyClass(4);
myInst.someMethod();
</code></pre>
<p>I run the test with</p>
<pre><code>cd toyProjects\pythonProj\tests
python -m unittest test_MyClass.py
</code></pre>
<p>I tried pretty much all the solutions from <a href="https://stackoverflow.com/questions/1896918/running-unittest-with-typical-test-directory-structure">here</a>. And <a href="https://iq-inc.com/importerror-attempted-relative-import/" rel="nofollow noreferrer">this page</a> says "Many resources on the web will have you fiddling with your sys.path but in most cases, this is a code smell that indicates you are not properly using modules and/or packages.", so hopefully I can learn the proper way to do it (but I'll use sys.path if there's no other solution; so far, all the versions of that I've tried haven't worked either).</p>
<p>More specifically, different types of imports I tried in the test_MyClass file:</p>
<pre><code>import setup -->ModuleNotFoundError: No module named 'setup'
from MyClass import MyClass -->ModuleNotFoundError: No module named 'MyClass'
from mainpack import setup -->ModuleNotFoundError: No module named 'mainpack'
from ..mainpack import setup -->Unresolved reference mainpack
import mainpack -->ModuleNotFoundError: No module named 'mainpack'
from mainpack.setup import someFunction -->Unresolved reference mainpack
from .. import mainpack -->ImportError: attempted relative import with no known parent package
</code></pre>
|
<python><python-import><python-unittest>
|
2023-01-22 20:23:24
| 1
| 400
|
MJL
|
75,203,558
| 20,130,220
|
How can I count the number of occurrences of a given string in a string array in pandas
|
<p>I want to see which tags occur most frequently in my dataset. When I try to do this on my own, I get something like this:</p>
<pre><code>df['tags'].value_counts()
</code></pre>
<blockquote>
<p>['Startup'] 80<br />
['Bitcoin'] 79<br />
['The Daily Pick'] 78<br />
['Addiction', 'Health', 'Body', 'Alcohol', 'Mental Health'] 62</p>
</blockquote>
<p>Some articles have many tags, but
I would like to count the occurrences of each tag separately.</p>
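<p>For context, here is a toy version of what I'm after (assuming the cells hold list-like strings, which I'm not certain of):</p>

```python
import pandas as pd
from ast import literal_eval

df = pd.DataFrame({"tags": ["['Startup']",
                            "['Bitcoin']",
                            "['Addiction', 'Health']"]})

tag_counts = (df["tags"]
              .apply(literal_eval)  # drop this line if the cells are real lists
              .explode()
              .value_counts())
```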
|
<python><pandas><dataframe>
|
2023-01-22 20:09:48
| 2
| 346
|
IvonaK
|
75,203,436
| 14,673,832
|
Unexpected output while sorting the list of IP address
|
<p>I am trying to sort the following list of IP addresses.</p>
<pre><code>IPlist= ['209.85.238.4', '216.239.51.98', '64.233.173.198', '64.3.17.208', '64.233.173.238']
</code></pre>
<p>#1st case</p>
<pre><code>tmp1 = [list (map (str, ip.split("."))) for ip in IPlist]
tmp1.sort()
print(tmp1)
</code></pre>
<p>When I run this snippet, I get the following output:</p>
<pre><code>[['209', '85', '238', '4'], ['216', '239', '51', '98'], ['64', '233', '173', '198'], ['64', '233', '173', '238'], ['64', '3', '17', '208']]
</code></pre>
<p>#second case</p>
<pre><code>tmp = [tuple (map (int, ip.split("."))) for ip in IPlist]
# print(tmp)
tmp.sort ()
print(tmp)
</code></pre>
<p>When I run the second case, I get the following output:</p>
<pre><code>[(64, 3, 17, 208), (64, 233, 173, 198), (64, 233, 173, 238), (209, 85, 238, 4), (216, 239, 51, 98)]
</code></pre>
<p>The only difference I found between the two cases is the conversion to either string or int in the list comprehension. But even if the initial values are strings, doesn't the sort() function work in the same way?</p>
<p>For eg:</p>
<pre><code>lst = ['23', '33', '11', '7', '55']
# Using sort() function with key as int
lst.sort(key = int)
print(lst)
Output:
['7', '11', '23', '33', '55']
</code></pre>
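<p>For context, a sketch of the distinction: sorting strings compares them character by character (so "216" sorts before "64"), while <code>key=int</code> in your last example compares whole numbers. Dotted addresses need a tuple-of-ints key to get numeric ordering while keeping the original strings:</p>

```python
IPlist = ['209.85.238.4', '216.239.51.98', '64.233.173.198', '64.3.17.208', '64.233.173.238']

# Convert each address to a tuple of ints for comparison only;
# the list itself still holds the original strings
sorted_ips = sorted(IPlist, key=lambda ip: tuple(map(int, ip.split("."))))
print(sorted_ips)
```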
|
<python><list><sorting>
|
2023-01-22 19:51:16
| 1
| 1,074
|
Reactoo
|
75,203,370
| 6,876,149
|
Find indices of closest samples in distance matrix
|
<pre><code>import random as rnd  # used below to pick a random index

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

myTransform = transforms.Compose([
    transforms.Resize((160, 160)),
    transforms.ToTensor(),
])

image_datasets = datasets.ImageFolder(data_dir, transform=myTransform)  # data_dir defined elsewhere
randomIdx = rnd.randint(0, len(image_datasets.imgs) - 1)

outputs = []  # populated by the forward hook below

def get_embeddings(module, input, output):
    # forward hook, registered on the model's embedding layer (model elided here)
    output = output.detach().cpu().numpy().tolist()
    outputs.append(output)
from sklearn.decomposition import PCA
from scipy.spatial import distance_matrix
list_embeddings = [item for sublist in outputs for item in sublist]
myPCA = PCA(n_components = int(N_DIMS_95))
components_images = myPCA.fit_transform(list_embeddings)
distanceMatrix = distance_matrix(components_images, components_images)
</code></pre>
<p>This is how we create distance matrix of images</p>
<p>How do I find the closest images in the distance matrix for the image index <code>randomIdx</code>? I am looking for something similar to this Stack Overflow thread, but written in Python.</p>
<p><a href="https://stackoverflow.com/questions/14355853/find-indices-of-5-closest-samples-in-distance-matrix">Find indices of 5 closest samples in distance matrix</a></p>
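<p>A sketch of the usual NumPy approach (the toy matrix below stands in for <code>distanceMatrix</code>; <code>k</code> is an assumed neighbour count): sort the row for the query index and skip position 0, which is the image's zero distance to itself.</p>

```python
import numpy as np

# Toy distance matrix standing in for distanceMatrix (5 points in 2-D)
rng = np.random.default_rng(0)
pts = rng.random((5, 2))
distanceMatrix = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

randomIdx, k = 2, 3
# argsort the row; index 0 of the sorted order is the image itself (distance 0)
closest = np.argsort(distanceMatrix[randomIdx])[1:k + 1]
print(closest)
```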
|
<python><matrix><distance><distance-matrix>
|
2023-01-22 19:41:30
| 0
| 2,826
|
C.Unbay
|
75,203,248
| 1,019,129
|
Deep reference ..? | Intertools.product()
|
<p>The following (pseudo) code should insert <strong>only one</strong> ZERO per pair</p>
<pre class="lang-py prettyprint-override"><code>from itertools import product
In [321]:
for p in product([[1],[2]],[[4],[5]]):
p[0].insert(0,0)
print(p)
Out [321]
([0, 1], [4])
([0, 0, 1], [5])
([0, 2], [4])
([0, 0, 2], [5])
In [322]:
for p in product([[1],[2]],[[4],[5]]):
q = p[:]
q[0].insert(0,0)
print(q)
Out [322]
([0, 1], [4])
([0, 0, 1], [5])
([0, 2], [4])
([0, 0, 2], [5])
</code></pre>
<p>I know that the reason is that <code>product()</code> reuses the same inner list objects (they are passed by reference), but I can't change its behavior.</p>
<p>Any solution?</p>
<hr />
<p><a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>itertools.product(*iterables, repeat=1)</code></a></p>
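<p>The shallow copy <code>p[:]</code> copies the tuple but not the inner lists, which is why both attempts mutate the originals. A sketch using <code>copy.deepcopy</code>, which copies the inner lists too:</p>

```python
import copy
from itertools import product

results = []
for p in product([[1], [2]], [[4], [5]]):
    # deep-copy the tuple so its inner lists are independent of the originals
    q = copy.deepcopy(p)
    q[0].insert(0, 0)
    results.append(q)
print(results)
```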
|
<python><product><pass-by-reference><python-itertools>
|
2023-01-22 19:24:14
| 0
| 7,536
|
sten
|
75,203,120
| 3,881,486
|
Pandas aggregation groupby and min
|
<p>I have the following data set and I want to return the <em>minimum</em> of <code>vol</code> grouped by <code>year</code>, but I also want to know on which day (<code>date</code> column) this minimum occurred. This is a part of a bigger function.</p>
<p>For the example below, the return should be:</p>
<pre><code>1997-07-14 1162876
</code></pre>
<p><a href="https://i.sstatic.net/hUZ66.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hUZ66.png" alt="enter image description here" /></a></p>
<p>The first thing I tried was:</p>
<pre class="lang-py prettyprint-override"><code>df_grouped_vol = pandas_df.groupby(pandas_df['year']).min()[['date', 'vol']]
</code></pre>
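<p>The attempt above takes the minimum of each column independently, so the date no longer matches the vol. A sketch of the usual <code>idxmin</code> pattern (the toy frame below is assumed from the screenshot; column names are illustrative):</p>

```python
import pandas as pd

# Toy frame standing in for pandas_df
pandas_df = pd.DataFrame({
    "date": ["1997-07-11", "1997-07-14", "1998-01-05", "1998-03-09"],
    "vol": [2000000, 1162876, 1500000, 1700000],
    "year": [1997, 1997, 1998, 1998],
})

# idxmin() returns, per year, the row label of the minimum vol;
# .loc then pulls the whole row, so date and vol stay paired
idx = pandas_df.groupby("year")["vol"].idxmin()
result = pandas_df.loc[idx, ["date", "vol"]]
print(result)
```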
|
<python><pandas><dataframe><pyspark>
|
2023-01-22 19:08:09
| 1
| 703
|
Pirvu Georgian
|
75,203,044
| 19,079,397
|
How to create polygon from bbox data in python?
|
<p>I have written R code that extracts a <code>bbox</code> from a list of points and then creates a polygon using <code>st_as_sfc</code>. Now I am trying to do the same in Python: I was able to get the <code>bbox</code> coordinates from the list of points, and then tried to create a polygon from the two corner points with <code>shapely.geometry</code>, which throws an error. Is there an alternative to <code>st_as_sfc</code> in Python, or how can I create a polygon from the obtained <code>bbox</code> coordinates?</p>
<p>R code:-</p>
<pre><code>print(st_as_sfc(st_bbox(sgrid)))
result:-
Geometry set for 1 feature
Geometry type: POLYGON
Dimension: XY
Bounding box: xmin: -9574057 ymin: 3590448 xmax: -9494057 ymax: 3670448
Projected CRS: WGS 84 / Pseudo-Mercator
POLYGON ((-9574057 3590448, -9494057 3590448, -...
</code></pre>
<p>Python (data frame built from the bbox corner coordinates xmin, ymin and xmax, ymax):</p>
<pre><code>from shapely import geometry
print(df)
x y geometry
0 3.418932e+06 -2.088506e+07 POINT (-20885056.99629 3418931.63321)
1 3.478932e+06 -2.082506e+07 POINT (-20825056.99629 3478931.63321)
poly = geometry.Polygon([[p.x, p.y] for p in gdf_p['geometry'].tolist()])
print(poly.wkt)
</code></pre>
<p>error:-</p>
<pre><code>ValueError: A linearring requires at least 4 coordinates.
</code></pre>
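<p>The error occurs because a polygon ring needs at least four coordinates and only the two corner points are supplied. One way this is commonly handled with shapely is <code>shapely.geometry.box</code>, which builds the rectangle directly from the four bbox values (coordinate values below are placeholders):</p>

```python
from shapely.geometry import box

# bbox corners (placeholder values in the same projected CRS as the data)
xmin, ymin = -20885056.996, 3418931.633
xmax, ymax = -20825056.996, 3478931.633

poly = box(xmin, ymin, xmax, ymax)  # rectangle polygon from the bbox
print(poly.wkt)
```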
|
<python><r><polygon><geopandas><point>
|
2023-01-22 18:57:35
| 2
| 615
|
data en
|
75,202,868
| 12,860,924
|
Split labelling image parts into sub labelled images using python
|
<p>I am working on medical images. I want to segment each image into sub-parts according to the labelled letter in each part of the image. I have tried multiple pieces of code and functions, but I don't know how to do it.</p>
<p><strong>Example of my dataset</strong></p>
<p>The original image</p>
<p><a href="https://i.sstatic.net/Rgm9K.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rgm9K.jpg" alt="enter image description here" /></a></p>
<p>I want to segment the image into subsegments as the image.</p>
<p><a href="https://i.sstatic.net/UAkYx.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UAkYx.jpg" alt="enter image description here" /></a></p>
<p><strong>Trial Code :</strong></p>
<pre><code>import os
from itertools import product

from PIL import Image

def tile(filename, dir_in, dir_out, d):
name, ext = os.path.splitext(filename)
img = Image.open(os.path.join(dir_in, filename))
w, h = img.size
grid = product(range(0, h-h%d, d), range(0, w-w%d, d))
for i, j in grid:
box = (j, i, j+d, i+d)
out = os.path.join(dir_out, f'{name}_{i}_{j}{ext}')
img.crop(box).save(out)
dir_in="/content/drive/MyDrive/Images"
dir_out="/content/drive/MyDrive/output_segments"
filename='image 2.jpg'
d=100
output=tile(filename,dir_in,dir_out,d)
</code></pre>
<p>The above code performs general segmentation into fixed-size tiles, not segmentation according to the labelled letter found in the original image.</p>
<p>Any help would be appreciated.</p>
<p>Thanks in advance.</p>
|
<python><image-processing><image-segmentation><labeling><east-text-detector>
|
2023-01-22 18:26:07
| 0
| 685
|
Eda
|
75,202,810
| 13,741,789
|
How to parse html such that the nesting that is implicit by header levels becomes explicit?
|
<p>I run into the same problem in a different form while web-scraping, over and over, and for some reason I can't seem to push my head through it.</p>
<p>The core of it is basically this:</p>
<p>HTML has a relatively flat organizational structure with some nesting implicit. I want to make that explicit.</p>
<p>To show what I mean consider the following fictional menu snippet:</p>
<blockquote>
<h1>Motley Mess Menu</h1>
<h2>Breakfast</h2>
<h3>Omelets</h3>
<h4>Cheese</h4>
<p>$7</p>
<p>American style omelet containing your choice of Cheddar, Swiss, Feta,
Colby Jack or all four!</p>
<h4>Sausage Mushroom</h4>
<p>$8</p>
<p>American style omelet containing sausage, mushroom and Swiss cheese</p>
<h4>Build-Your-Own</h4>
<p>$8</p>
<p>American style omelet containing…you tell me!</p>
<p>Options (+50 cents after 3):</p>
<ul>
<li>Cheddar</li>
<li>Swiss</li>
<li>Feta</li>
<li>Colby Jack</li>
<li>Bacon Bits</li>
<li>Sausage</li>
<li>Onion</li>
<li>Hamburger</li>
<li>Jalapenos</li>
<li>Hash Browns</li>
</ul>
<h3>Combos</h3>
<p>...</p>
</blockquote>
<p>When we read this menu we know that "Sausage Mushroom" is a type of "Omelet" served for "Breakfast" at the "Motley Mess." We understand the nesting just fine, however if this were represented via html (or in markdown for that matter) all of those headers are flat, and without adding a series of divs that nesting is all implicit. If I'm web scraping I have no control over whether or not those divs are present.</p>
<p>I want to parse the html to make that nesting explicit. This is a problem I have come across time and time again scraping websites and I always find another way to solve the problem. I feel that this should be a relatively basic problem to solve, if not simple, but for some reason I can't get past the weird dynamic recursion that ends up necessary, and I think I'm grossly overcomplicating it.</p>
<p>This last snippet is an HTML/JSON pair showing the kind of output I would be happy with from a hypothetical html_unpacker function:</p>
<pre class="lang-py prettyprint-override"><code>html_string = """
<h1>Motley Mess Menu</h1>
<h2>Breakfast</h2>
<h3>Omelets</h3>
<h4>Cheese</h4>
<p>$7</p>
<p>American style omelet containing your choice of Cheddar, Swiss, Feta, Colby Jack or all four!</p>
<h4>Sausage Mushroom</h4>
<p>$8</p>
<p>American style omelet containing sausage, mushroom and Swiss cheese</p>
<h4>Build-Your-Own</h4>
<p>$8</p>
<p>American style omelet containing…you tell me!</p>
<p>Options (+50 cents after 3):</p>
<ul>
<li>Cheddar</li>
<li>Swiss</li>
<li>Feta</li>
<li>Colby Jack</li>
<li>Bacon Bits</li>
<li>Sausage</li>
<li>Onion</li>
<li>Hamburger</li>
<li>Jalapenos</li>
<li>Hash Browns</li>
</ul>
<h3>Combos</h3>
<p>Each come with your choice of two sides</p>
<h4>Eggs and Bacon</h4>
<p>$8</p>
<p>Eggs cooked your way and crispy bacon. Sausage substitution is fine</p>
<h4>Glorious Smash</h4>
<p>$10</p>
<p>Your favorite breakfast of two pancakes, two eggs cooked your way, two sausages and two bacon, free of all trademark infringement! If you think you can finish it all then you forgot about the choice of two sides!</p>
"""

html_unpacker(html_string)
</code></pre>
<p>output:</p>
<pre class="lang-json prettyprint-override"><code>{
"Motley Mess Menu": {
"Breakfast": {
"Omelets": {
"Cheese": {
"p1": "$7",
"p2": "American style omelet containing your choice of Cheddar, Swiss, Feta, Colby Jack or all four!"
},
"Sausage Mushroom": {
"p1": "$8",
"p2": "American style omelet containing sausage, mushroom and Swiss cheese"
},
"Build-Your-Own": {
"p1": "$8",
"p2": "American style omelet containing…you tell me!",
"p3": "Options (+50 cents after 3):",
"ul1": {
"li1": "Cheddar",
"li2": "Swiss",
"li3": "Feta",
"li4": "Colby Jack",
"li5": "Bacon Bits",
"li6": "Sausage",
"li7": "Onion",
"li8": "Hamburger",
"li9": "Jalapenos",
"li10": "Hash Browns"
}
}
},
"Combos": {
"p1": "Each come with your choice of two sides",
"Eggs and Bacon": {
"p1": "$8",
"p2": "Eggs cooked your way and crispy bacon. Sausage substitution is fine"
},
"Glorious Smash": {
"p1": "$10",
"p2": "Your favorite breakfast of two pancakes, two eggs cooked your way, two sausages and two bacon, free of all trademark infringement! If you think you can finish it all then you forgot about the choice of two sides!"
}
}
}
}
}
</code></pre>
<p>I don't necessarily need that exact style of output, just something that makes the nesting explicit and maintains the type and order of non-header elements. Existing explicit nesting needs to be preserved (lists within lists and whatnot); I just need to add explicit nesting based on header levels.</p>
<p>I'm not asking for someone to build a function from scratch, this just seems so basic I feel like something of this nature must already exist and my google-fu must just be failing me.</p>
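<p>Nothing in the standard library does this directly as far as I know, but a stack-based pass captures the core idea: on each header, pop back to the nearest strictly shallower header and attach a new dict. The sketch below is stdlib-only and deliberately simplified — it flattens nested structures like <code>&lt;ul&gt;</code> into numbered leaves and uses naive key numbering, so it is a starting point rather than a full solution:</p>

```python
from html.parser import HTMLParser

class HeaderNester(HTMLParser):
    """Turn flat h1-h6 levels into nested dicts; other tags become numbered leaves."""

    def __init__(self):
        super().__init__()
        self.root = {}
        self.stack = [(0, self.root)]  # (header level, dict); level 0 = document root
        self.text = ""
        self.collecting = False

    def handle_starttag(self, tag, attrs):
        self.text = ""
        self.collecting = True

    def handle_data(self, data):
        if self.collecting:
            self.text += data

    def handle_endtag(self, tag):
        text = self.text.strip()
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            # pop back to the nearest strictly shallower header
            while self.stack[-1][0] >= level:
                self.stack.pop()
            child = {}
            self.stack[-1][1][text] = child
            self.stack.append((level, child))
        elif text:
            parent = self.stack[-1][1]
            # naive numbering: count existing keys with this tag prefix
            n = sum(1 for k in parent if k.startswith(tag))
            parent[f"{tag}{n + 1}"] = text
        self.text = ""
        self.collecting = False

def html_unpacker(html_string):
    parser = HeaderNester()
    parser.feed(html_string)
    return parser.root

sample = "<h1>Menu</h1><h2>Breakfast</h2><p>$7</p><h2>Lunch</h2><p>$9</p>"
result = html_unpacker(sample)
print(result)
```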
|
<python><html><json><beautifulsoup>
|
2023-01-22 18:17:22
| 1
| 312
|
psychicesp
|
75,202,679
| 10,633,402
|
How do I resolve AttributeError: "Pages" object has no attribute "pages"
|
<p>I have a few custom classes that look like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from typing_extensions import Self
class Page:
def __init__(self, search_id: str, page_num: int) -> None:
self.search_id = search_id
self.page_num = page_num
self.isLast = False
def mark_as_last(self):
self.isLast = True
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Pages:
def __new__(cls: Self, search_id: str, range_of_pages: List[int]):
instance = super(Pages, cls).__new__(cls)
return instance.pages
def __init__(self, search_id: str, range_of_pages: List[int]):
self.search_id = search_id
self.ranges_of_pages = range_of_pages
self.pages = Pages.create_pages(self.ranges_of_pages, self.search_id)
@staticmethod
def create_pages(range_of_pages: List[int], search_id: str) -> List[Page]:
pages = []
for page_num in range_of_pages:
page = Page(search_id, page_num)
if page_num == range_of_pages[-1]:
page.mark_as_last()
pages.append(page)
return pages
def __getitem__(self, item):
return self.pages[item]
</code></pre>
<p>When 'Pages' is called like <code>Pages('123', [1, 2, 3, 4])</code>, I want to return a list of pages - see <code>return instance.pages</code></p>
<p>Well... when I get to this point, I get an error. Specifically this error:</p>
<pre class="lang-none prettyprint-override"><code>def __new__(cls: Self, search_id: str, range_of_pages: List[int]):
instance = super(Pages, cls).__new__(cls)
return instance.pages
E AttributeError: 'Pages' object has no attribute 'pages'
</code></pre>
<p>Am I missing something? This should work. I have no idea what is wrong here.</p>
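<p>For what it's worth: inside <code>__new__</code>, <code>__init__</code> has not run yet, so <code>instance.pages</code> cannot exist at that point — and returning something that is not a <code>Pages</code> instance from <code>__new__</code> would skip <code>__init__</code> entirely. A minimal sketch of a plain factory function that sidesteps the ordering problem (names reused from the question for illustration):</p>

```python
from typing import List

class Page:
    def __init__(self, search_id: str, page_num: int) -> None:
        self.search_id = search_id
        self.page_num = page_num
        self.isLast = False

def create_pages(search_id: str, range_of_pages: List[int]) -> List[Page]:
    # Build the list directly instead of fighting __new__/__init__ ordering
    pages = [Page(search_id, n) for n in range_of_pages]
    if pages:
        pages[-1].isLast = True  # mark the final page
    return pages

pages = create_pages("123", [1, 2, 3, 4])
print([p.page_num for p in pages], pages[-1].isLast)
```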
|
<python><class>
|
2023-01-22 17:57:33
| 2
| 1,919
|
prismo
|
75,202,656
| 14,282,714
|
How to change python version in r-reticulate environment?
|
<p>I would like to upgrade my python version in my <code>r-reticulate</code> environment. First I activate the right environment like this:</p>
<pre><code>conda activate /Users/quinten/Library/r-miniconda/envs/r-reticulate
</code></pre>
<p>Let's check the python version:</p>
<pre><code>python3 --version
Python 3.7.11
</code></pre>
<p>So I installed the newest <a href="https://www.python.org" rel="nofollow noreferrer">python</a> version for macOS, which is 3.11.1. After that, I tried to change the Python version as described here (<a href="https://stackoverflow.com/questions/59163078/how-to-change-python-version-of-existing-conda-virtual-environment">How to change Python version of existing conda virtual environment?</a>) using this:</p>
<pre><code>conda install python=3.11
</code></pre>
<p>This installed successfully, but when I check the version again with <code>python3 --version</code>, it still returns <code>3.7.11</code>. Does anyone know how to change the Python version in an r-reticulate environment? I would like to use this in Quarto.</p>
|
<python><r><conda><reticulate><quarto>
|
2023-01-22 17:55:10
| 1
| 42,724
|
Quinten
|
75,202,619
| 8,964,393
|
How to calculate the maximum sum in a reverse way in pandas list
|
<p>I have this list:</p>
<pre><code>balance = [300,400,250,100,50,1,2,0,10,15,25,20,10,1,0,10,15]
</code></pre>
<p>I need to calculate the maximum consecutive increase in balance over a certain period of time.
The first element on the right is the most recent.</p>
<p>For example, I need to calculate the maximum consecutive increases in balance over the most recent 10 occurrences.
From the list above, I'd take the most recent 10 occurrences:</p>
<p><code>[0,10,15,25,20,10,1,0,10,15]</code></p>
<p>Count the consecutive increases (by adding 1 every time there is an increase, else reset the counter):</p>
<pre><code>[0,1,2,3,0,0,0,0,1,2]
</code></pre>
<p>And then take the maximum (which is 3).</p>
<p>Does anyone know how to code it in Python?</p>
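<p>A plain-Python sketch of the counting described above (the window size and the reset-on-decrease behaviour are taken from the example; the function name is illustrative):</p>

```python
balance = [300, 400, 250, 100, 50, 1, 2, 0, 10, 15, 25, 20, 10, 1, 0, 10, 15]

def max_consecutive_increases(values, window):
    recent = values[-window:]  # most recent `window` occurrences
    best = streak = 0
    for prev, cur in zip(recent, recent[1:]):
        streak = streak + 1 if cur > prev else 0  # add 1 on an increase, else reset
        best = max(best, streak)
    return best

print(max_consecutive_increases(balance, 10))
```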
|
<python><pandas><list><counter>
|
2023-01-22 17:50:24
| 1
| 1,762
|
Giampaolo Levorato
|
75,202,610
| 21,061,285
|
TypeError: 'type' object is not subscriptable Python
|
<p>Whenever I try to type-hint a list of strings, such as</p>
<pre><code>tricks: list[str] = []
</code></pre>
<p>, I get <em>TypeError: 'type' object is not subscriptable</em>. I follow a course where they use the same code, but it works for them. So I guess the problem is one of the differences between my environment and the course's environment. I use:</p>
<ul>
<li>vs code</li>
<li>anaconda</li>
<li>python 3.8.15</li>
<li>jupyter notebook</li>
</ul>
<p>Can someone help me fix that?</p>
<p>I used the same code in normal .py files and it still doesn't work, so that is probably not it.
The Python version should also not be the problem, as this is fairly basic syntax.
Anaconda should not cause such error messages either.
That leaves the difference between VS Code and PyCharm, which would also be strange.
So I don't know what to try.</p>
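<p>It is worth noting that subscripting the built-in <code>list</code> (as in <code>list[str]</code>) only became legal at runtime in Python 3.9, and the environment listed above reports 3.8.15, so the course is presumably running a newer interpreter. Two patterns that work on 3.8 are sketched here:</p>

```python
from __future__ import annotations  # PEP 563: annotations are not evaluated, 3.7+

from typing import List

tricks: List[str] = []        # typing.List works on every supported Python version
more_tricks: list[str] = []   # legal on 3.8 thanks to the __future__ import above
tricks.append("roll over")
print(tricks)
```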
|
<python><visual-studio-code>
|
2023-01-22 17:48:01
| 4
| 423
|
Sebastian
|
75,202,551
| 147
|
Segmentation Fault running wxPython 4.2.0 on macOS Monterey with Python 3.9 or 3.10
|
<p>I am trying to use wxPython 4.2.0. I have Python 3.9.12 and 3.10.7 on my system, and the results are identical with both. It's a 2021 MacBook Pro with an M1 chip.</p>
<p>I cannot run even the simplest app as my regular user, it fails on <code>import wx</code> with a segmentation fault. If I run under <code>sudo</code> it works perfectly.</p>
<p>I have a large codebase but this happens with even the simplest example:</p>
<pre><code>import faulthandler
faulthandler.enable()
import wx
app = wx.App(False)
frame = wx.Frame(None, wx.ID_ANY, "Hello World")
frame.Show(True)
app.MainLoop()
</code></pre>
<p>The output from fault handler is:</p>
<pre><code> File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1173 in create_module
File "<frozen importlib._bootstrap>", line 565 in module_from_spec
File "<frozen importlib._bootstrap>", line 666 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/wx/core.py", line 12 in <module>
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 850 in exec_module
File "<frozen importlib._bootstrap>", line 680 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/wx/__init__.py", line 17 in <module>
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 850 in exec_module
File "<frozen importlib._bootstrap>", line 680 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "/Users/garethsimpson/test.py", line 5 in <module>
[1] 63037 segmentation fault python3.9 test.py
</code></pre>
<p>This was working on my now-dead intel MacBook.</p>
<p>For completeness: I have just tried a Homebrew install of Python 3.9 and got the same results.</p>
|
<python><macos><wxpython>
|
2023-01-22 17:38:53
| 1
| 37,821
|
Gareth Simpson
|
75,202,479
| 2,299,245
|
Merge overlapping rasters using GDAL
|
<p>I have around 211 rasters, one for each area of the world. The gdalinfo for one of them is below. They are all the same except for the area of concern. Values are always between 1-6.</p>
<p><a href="https://i.sstatic.net/PS3CA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PS3CA.png" alt="enter image description here" /></a></p>
<p>I would like to merge them into one giant raster for the world. I have managed to successfully do this by first building a VRT, and then writing the VRT to a single file:</p>
<pre><code>gdalbuildvrt.exe -b 1 -q -input_file_list my_files.txt global_file.vrt
gdal_translate.exe -q -co PREDICTOR=2 -co COMPRESS=LZW -of GTiff -co BIGTIFF=YES -co TILED=YES -co NUM_THREADS=ALL_CPUS global_file.vrt global_file.tif
</code></pre>
<p>The result was around 15GB in size.</p>
<p>However, my issue is that there are often areas where the rasters for neighbouring countries overlap. In those cases I need to take the maximum raster/pixel value, but gdal_translate does not do that; it just keeps the last value that was written.</p>
<p>I read about the PixelFunction, (<a href="https://gis.stackexchange.com/questions/350233/how-can-i-modify-a-vrtrasterband-sub-class-etc-from-python">https://gis.stackexchange.com/questions/350233/how-can-i-modify-a-vrtrasterband-sub-class-etc-from-python</a>) and tried to implement it, but I kept getting memory issues.</p>
<p>Does anyone have any ideas on a memory-safe way to combine a large list of rasters and to take the maximum value where they overlap? If the best way is the PixelFunction then let me know and I will provide more details about the errors.</p>
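<p>One option worth checking is <code>gdal_calc.py</code> with <code>--calc="maximum(A,B)"</code>, which applies NumPy's per-pixel maximum between inputs (it would need to be applied iteratively for 211 rasters). The underlying per-pixel logic is sketched below on toy arrays; real rasters would be read and merged in chunks to stay memory-safe:</p>

```python
import numpy as np

# Toy 3x3 "rasters" standing in for two overlapping tiles; 0 marks nodata
a = np.array([[1, 2, 0], [3, 0, 0], [6, 5, 4]])
b = np.array([[0, 4, 2], [1, 5, 6], [0, 0, 1]])

merged = np.maximum(a, b)  # per-pixel maximum resolves the overlap
print(merged)
```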
<p>Thanks, James</p>
|
<python><gdal>
|
2023-01-22 17:28:21
| 1
| 949
|
TheRealJimShady
|
75,202,475
| 1,761,907
|
joblib persistence across sessions/machines
|
<p>Is joblib (<a href="https://joblib.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">https://joblib.readthedocs.io/en/latest/index.html</a>) expected to be reliable across different machines, or ways of running functions, even different sessions on the same machine over time?</p>
<p>For concreteness if you run this code in a Jupyter notebook, or as a python script, or piped to the stdin a python interpreter, you get different cache entries. The piped version seems to be a special case where you get a JobLibCollisionWarning that leads to it running every time and never reading from the cache. The other two though, end up having a different path saved in the joblib cache dir, and inside each one the same hash directory (fb65b1dace3932d1e66549411e3310b6) exists.</p>
<pre><code>from joblib import Memory
memory = Memory('./cache', verbose=0)
@memory.cache
def job(x):
print(f'Running with {x}')
return x**2
print(job(2))
</code></pre>
<p>you get multiple cache entries. These entries are in folders whose names contain path information (including what appears to be a tmp directory for the notebook entry, e.g. <code>__main__--var-folders-3q-ht_2mtk52hl7ydxrcr87z2gr0000gn-T-ipykernel-3189892766</code>), so it looks like if I transferred to another machine the jobs would all be run again. I don't see how that path is reliable in the long run; it seems likely the tmpdir could change, or the ipykernel could have some other number associated with it.</p>
<p>Is this expected?</p>
|
<python><joblib>
|
2023-01-22 17:27:46
| 1
| 2,453
|
John Kitchin
|