QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
76,012,644
| 508,907
|
FastAPI / uvicorn (or hypercorn): where is my root-path?
|
<p>Based on a few FastAPI tutorials, including this, I made a simple FastAPI app:</p>
<pre><code>from fastapi import FastAPI, Request

app = FastAPI()  # also tried FastAPI(root_path="/api/v1")

@app.get("/app")
def read_main(request: Request):
    return {"message": "Hello World", "root_path": request.scope.get("root_path")}
</code></pre>
<p>Which I want to have at a path other than root (e.g. <code>/api/v1</code>). Again, based on most tutorials and common sense, I tried to start it with e.g.:</p>
<pre><code>uvicorn main:app --root-path /api/v1
</code></pre>
<p>The service comes up ok (on <code>http://127.0.0.1:8000</code>), however, the <code>root-path</code> seems to be ignored, i.e., any <code>GET</code> request to <code>http://127.0.0.1:8000/</code> gives:</p>
<pre><code>message "Hello World"
root_path "/api/v1"
</code></pre>
<p>and any <code>GET</code> request to <code>http://127.0.0.1:8000/api/v1</code> gives:</p>
<pre><code>detail "Not Found"
</code></pre>
<p>I would expect the requests to produce the reverse outcomes. What is going on here?!</p>
<p>I also tried initializing FastAPI with <code>FastAPI(root_path="/api/v1")</code>, as well as switching to <code>hypercorn</code>, to no avail.</p>
<p>Version details (I may have tried a few others as well, but these are the most recent):</p>
<pre><code>python 3.9.7 hf930737_3_cpython conda-forge
fastapi 0.85.1 pyhd8ed1ab_0 conda-forge
uvicorn 0.20.0 py39h06a4308_0
hypercorn 0.14.3 py39hf3d152e_1 conda-forge
</code></pre>
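For reference, `--root-path` is not supposed to change routing at all: it only injects the prefix into the ASGI `scope` (for proxy-aware URL generation), while routes are still matched against the raw request path. A stdlib-only toy model of that dispatch logic (hypothetical, not FastAPI internals) reproduces exactly what the question observed:

```python
def dispatch(path: str, root_path: str = "/api/v1") -> dict:
    """Toy model of ASGI routing: root_path is carried in the scope,
    but route matching still uses the request path unchanged."""
    routes = {
        "/app": lambda scope: {"message": "Hello World",
                               "root_path": scope["root_path"]},
    }
    scope = {"path": path, "root_path": root_path}
    handler = routes.get(path)
    return handler(scope) if handler else {"detail": "Not Found"}

# The un-prefixed path matches (and reports the configured root_path),
# while the prefixed path is a 404 -- the behavior described above.
print(dispatch("/app"))
print(dispatch("/api/v1/app"))
```

To actually serve the app under a prefix without a reverse proxy, the usual options are mounting a sub-application or adding the prefix to an `APIRouter`.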
|
<python><fastapi><uvicorn><hypercorn>
|
2023-04-14 07:38:20
| 1
| 14,360
|
ntg
|
76,012,505
| 11,712,575
|
Pytest Python SDK Positional Arguments
|
<p>I have a directory like this</p>
<pre><code>Platform
-test_1.py
-test_2.py
Another_folder
-test_3.py
Test
-start_test.py
</code></pre>
<p>Inside start_test.py, I have this code to start pytest:</p>
<pre><code>retcode = pytest.main(["/Platform/test_*.py /Platform/Another_folder/test_*.py", "--rootdir=/Platform", "--tb=auto"])
</code></pre>
<p>But it seems like pytest cannot find any of the tests; it says this:</p>
<pre><code>platform darwin -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /Platform
plugins: requests-mock-1.10.0, cov-4.0.0
collected 0 items
</code></pre>
<p>However, when I go to the terminal, <code>cd /Platform/Test</code>, and then run <code>python3 -m pytest /Platform/test_*.py /Platform/Another_folder/test_*.py --tb=auto</code>, there are no problems, and I get this:</p>
<pre><code>platform darwin -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir:/Platform
plugins: requests-mock-1.10.0, cov-4.0.0
collected 194 items
{bunch of test outputs}
</code></pre>
<p>How do I pass positional arguments to tell pytest programmatically where to scan for the test files?</p>
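For reference, `pytest.main` receives an argv-style list: it does not run a shell, so a single string containing two space-separated paths stays one (nonexistent) path, and wildcards are never expanded. A sketch using the question's paths (assumed layout; on another machine the globs simply come back empty):

```python
import glob

# One list element per path/option; expand the wildcards ourselves,
# since only a shell would normally do that.
paths = (glob.glob("/Platform/test_*.py")
         + glob.glob("/Platform/Another_folder/test_*.py"))
args = paths + ["--rootdir=/Platform", "--tb=auto"]

# retcode = pytest.main(args)   # uncomment where pytest is installed
```

Passing directories (e.g. `pytest.main(["/Platform", "--tb=auto"])`) also works, since pytest collects `test_*.py` files recursively on its own.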
|
<python><python-3.x><pytest>
|
2023-04-14 07:18:43
| 0
| 389
|
Jason Chan
|
76,012,502
| 7,242,912
|
Can't reach RestAPI (FastAPI) from my Flutter web - Cross-Origin Request Blocked
|
<p>I have a Linux server. On it I have two Docker containers. In the first one I am deploying my Flutter Web app, and in the other one I am running my REST API with FastAPI.</p>
<p>I set both the Docker containers in the same Network, so the communication should work. I also set origins with <code>origins = ['*']</code> (Wildcard). I reverse proxy my Flutter web with <code>nginx</code> from the Linux server. I also include <code>*.crt</code> and <code>*.key</code> with nginx to my Flutter Web.</p>
<p>Now, obviously, since my Flutter Web app is served over <code>https</code>, I can't make <code>http</code> calls. When I try to make a call over <code>https</code>, I get the error (from catch): <code>"XMLHttpRequest error"</code>, and in the browser console I get:</p>
<blockquote>
<p>Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at <a href="https://172.21.0.2:8070/" rel="nofollow noreferrer">https://172.21.0.2:8070/</a>. (Reason: CORS request
did not succeed). Status code: (null).</p>
</blockquote>
<p>(172.21.0.2 is the IP of the Docker container and 8070 the port the REST API is running on)</p>
<p>I am new to the REST API world; I normally develop only frontend, but I wanted to give it a try, so I'm sorry if I expressed some things wrong. I have been searching for days but can't find a solution to my problem. I would be grateful for any help! (If I missed some information or you need more, feel free to write in the comments, and I will update the question immediately!)</p>
<p>Thank You!</p>
|
<python><flutter><docker><nginx><fastapi>
|
2023-04-14 07:17:48
| 1
| 1,234
|
MrOrhan
|
76,012,400
| 6,076,861
|
Using CONCAT in column alias
|
<p>I have a bunch of repeating queries in a Python accessed SQL notebook. I've tested the below</p>
<pre class="lang-py prettyprint-override"><code>colum_to_agg = f'''data_col_1'''
sql_aggregates_query=f'''
avg({colum_to_agg}) as (concat({colum_to_agg},'_mean')),
max({colum_to_agg}) as (concat({colum_to_agg},'_max')),
'''
print(sql_aggregates_query)
</code></pre>
<p>Which returns</p>
<pre><code>avg(data_col_1) as (concat(data_col_1,'_mean')),
max(data_col_1) as (concat(data_col_1,'_max')),
</code></pre>
<p>Where I would like:</p>
<pre class="lang-sql prettyprint-override"><code>avg(data_col_1) as data_col_1_mean,
max(data_col_1) as data_col_1_max,
</code></pre>
<p>I can get the query to run with</p>
<pre class="lang-py prettyprint-override"><code>colum_to_agg = f'''data_col_1'''
sql_aggregates_query=f'''
avg({colum_to_agg}) as {colum_to_agg}{'_mean'},
max({colum_to_agg}) as {colum_to_agg}{'_max'},
'''
print(sql_aggregates_query)
</code></pre>
<p>But I am interested to know whether using <code>concat</code> in an alias is possible.</p>
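For reference, a SQL alias must be a bare (or quoted) identifier, so an expression such as `concat(...)` cannot follow `as`; building the alias text in Python, as the working version above does, is the usual approach. A sketch that also removes the repetition (column and suffix names taken from the question):

```python
# Map each aggregate function to the alias suffix the question wants.
column = "data_col_1"
suffixes = {"avg": "mean", "max": "max"}

# The alias is assembled in Python, where string concatenation is legal,
# and only the finished identifier is placed after `as`.
select_list = ",\n".join(
    f"{fn}({column}) as {column}_{suffix}" for fn, suffix in suffixes.items()
)
print(select_list)
```

This prints `avg(data_col_1) as data_col_1_mean,` followed by `max(data_col_1) as data_col_1_max`, matching the desired output.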
|
<python><python-3.x><postgresql><ipython-magic><ipython-sql>
|
2023-04-14 07:03:50
| 1
| 2,045
|
mapping dom
|
76,012,360
| 7,624,196
|
pip: yanked release is somehow accessible as default
|
<p>When I try to install a python package from pypi, somehow the yanked version is installed by default.</p>
<p>According to the history of the terminado package: <a href="https://pypi.org/project/terminado/#history" rel="nofollow noreferrer">https://pypi.org/project/terminado/#history</a> version 0.13.0 is yanked, but when running the following</p>
<blockquote>
<p>h-ishida@0bbb747d2765:~$ pip install terminado==foo ERROR: Could not
find a version that satisfies the requirement terminado==foo (from
versions: 0.1, 0.2, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.4, 0.5, 0.6, 0.7, 0.8,
0.8.1, 0.8.2, 0.8.3, 0.13.0) ERROR: No matching distribution found for terminado==foo</p>
</blockquote>
<p>the top listed version is 0.13.0, which is yanked, and when I try to install it without any version specification, 0.13.0 is installed.</p>
<p>Note that the pip version is <code>9.0.1</code> for python2. The problem is that 0.13.0 is no longer compatible with python2, and thus the error occurred during installation.</p>
<p>What is the cause of this bug? Is it a <code>pip</code> bug or a PyPI bug? Or did I do something wrong?</p>
|
<python><pip><pypi>
|
2023-04-14 06:58:11
| 1
| 1,623
|
HiroIshida
|
76,012,237
| 262,875
|
How to type a class decorator adding additional methods to a class
|
<p>I have a class decorator that is adding additional methods to a class:</p>
<pre><code>def add_method(cls):
    def new_method(self):
        pass
    setattr(cls, 'new_method', new_method)
    return cls

@add_method
class Klass():
    pass

k = Klass()
k.new_method() # Typechecker complains about lack of new_method
</code></pre>
<p>Now I understand <em>why</em> I get the error. But I can't wrap my head around how to avoid it. How do I tell the typing system that the thing decorated by <code>add_method</code> has a <code>new_method</code> member (and what its signature is)?</p>
<p>I tried several approaches with Union types and Protocols and what not, but I didn't really get anything to work. The only solution I can come up with is to ditch the decorator and go the inheritance/mixin route with <code>Klass</code> inheriting from a <code>MethodMixin</code> class or similar, but would prefer a decorator based solution -- or at least wonder if one exists.</p>
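One workaround (a sketch, not the only answer): Python's type system has no intersection type, so a decorator's return type cannot express "the input class plus `new_method`" — which is exactly why the Protocol/Union attempts fail and why the mixin route is usually recommended. A `Protocol` plus `cast` at least gives the checker a typed view of the added method, at the cost of hiding `Klass`'s own members behind the cast:

```python
from typing import Protocol, cast

class HasNewMethod(Protocol):
    """The interface the decorator adds at runtime."""
    def new_method(self) -> None: ...

def add_method(cls):
    # Runtime behavior identical to the question's decorator.
    def new_method(self) -> None:
        pass
    setattr(cls, "new_method", new_method)
    return cls

@add_method
class Klass:
    pass

# cast() tells the checker the instance satisfies HasNewMethod;
# it does nothing at runtime.
k = cast(HasNewMethod, Klass())
k.new_method()  # no checker complaint on this line
```

The trade-off: through `HasNewMethod` the checker no longer sees attributes defined on `Klass` itself, so for classes with their own API the mixin approach stays cleaner.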
|
<python><types>
|
2023-04-14 06:41:02
| 2
| 11,089
|
Daniel Baulig
|
76,012,206
| 16,383,578
|
How do I properly handle all possible exceptions by the requests module if a site is not reachable?
|
<p>I live in China, behind the infamous Great Firewall of China, and I use VPNs.</p>
<p>A simple observation is that while the VPN is connected, I can access <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a>. And if the VPN isn't connected, then I cannot access Google. So I can check if I have an active VPN connection by accessing Google.</p>
<p>My ISP really loves to disconnect my VPN, so I have to routinely check whether I have an active VPN connection, and I have already found a way to do this programmatically.</p>
<p>I am connected to the VPN right now, and if I do the following:</p>
<pre class="lang-py prettyprint-override"><code>import requests
google = requests.get('https://www.google.com', timeout=3)
print(google.status_code == 200)
</code></pre>
<p>Everything is fine.</p>
<p>But if I don't have an active VPN connection, then all hell breaks loose.</p>
<p>I do this check precisely because my connection will be disconnected, and I need a function to return <code>False</code> when that happens. But <code>requests</code> really loves to throw exceptions; they stop the execution of my script, and they come one after another:</p>
<pre><code>...
ReadTimeoutError: HTTPSConnectionPool(host='www.google.com', port=443): Read timed out. (read timeout=3)
During handling of the above exception, another exception occurred:
ReadTimeout Traceback (most recent call last)
...
</code></pre>
<p>I have imported a bunch of exceptions just so <code>requests</code> doesn't panic and stop my script when VPN is disconnected:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from requests.exceptions import ConnectionError, ConnectTimeout, ReadTimeout, Timeout
from socket import gaierror
from requests.packages.urllib3.exceptions import MaxRetryError, NewConnectionError, ReadTimeoutError

def google_accessible():
    try:
        google = requests.get('https://www.google.com', timeout=3)
        if google.status_code == 200:
            return True
    except (ConnectionError, ConnectTimeout, gaierror, MaxRetryError, NewConnectionError, ReadTimeout, ReadTimeoutError, TimeoutError):
        pass
    return False
</code></pre>
<p>I thought I caught all exceptions previously, but that isn't the case because I failed to catch the above exceptions (<code>ReadTimeout</code>, <code>ReadTimeoutError</code>, <code>TimeoutError</code>).</p>
<p>I know I can use <code>except Exception</code> to catch them all, but that would catch exceptions that aren't intended to be caught, and I would rather let those exceptions stop the execution than risking bugs.</p>
<p>How do I use a minimal number of exception classes to catch all exceptions that are VERY likely to occur when a request fails?</p>
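For what it's worth, every exception that `requests` raises itself derives from `requests.exceptions.RequestException` — the underlying urllib3 and socket errors are wrapped into requests' own types before they reach the caller — so a single clause covers the whole timeout/connection family without swallowing unrelated bugs. A sketch:

```python
import requests

def google_accessible() -> bool:
    """True iff https://www.google.com answers 200 within 3 seconds."""
    try:
        return requests.get("https://www.google.com", timeout=3).status_code == 200
    except requests.exceptions.RequestException:
        # Base class of ConnectionError, ConnectTimeout, ReadTimeout, etc.
        # Programming errors (TypeError, KeyError, ...) are NOT subclasses,
        # so they still propagate and stop the script as desired.
        return False
```

Unlike a bare `except Exception`, this still lets genuine bugs crash the script.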
|
<python><python-3.x><network-programming><python-requests>
|
2023-04-14 06:36:15
| 1
| 3,930
|
Ξένη Γήινος
|
76,012,189
| 10,544,599
|
How to config credential to login a website successfully using requests in python3?
|
<p>I am trying to log in to the following site using the python3 requests module. I have tried maintaining cookies, and using it with and without a session, but I am not getting the page that appears after login.</p>
<p>Visit the source below, then right-click and select the Translate to English option. Then enter the credentials below and click submit.</p>
<p><strong>Source</strong> - <a href="https://igrsup.gov.in/igrsup/userServicesLogin?request_locale=hi" rel="nofollow noreferrer">https://igrsup.gov.in/igrsup/userServicesLogin?request_locale=hi</a></p>
<p><strong>Id -</strong> 'shreeram'</p>
<p><strong>pass -</strong> 'Shreeram@1'</p>
<p>This is the password encryption script saved as <strong>encrypt.py</strong> which can be used directly by importing it.</p>
<p><a href="https://drive.google.com/file/d/18-ebQwr-P6b7O-xD4K9M3r7I63Nwx31X/view" rel="nofollow noreferrer">https://drive.google.com/file/d/18-ebQwr-P6b7O-xD4K9M3r7I63Nwx31X/view</a></p>
<p>Following is the main script importing above encrypt.py</p>
<pre><code># For requests
import requests
from bs4 import BeautifulSoup
import re

# For pass encryption
import encrypt

# For image processing
from io import BytesIO
from PIL import Image
from pytesseract import image_to_string

def get_soup(resp_content):
    return BeautifulSoup(resp_content, 'lxml')

def get_hidden_inputs(soup):
    if soup is None:
        return None
    data = {}
    hidden_inputs = soup.find_all("input", {"type": "hidden"})
    if hidden_inputs:
        for input_key in hidden_inputs:
            if input_key.has_attr('value'):
                data[input_key['name']] = input_key['value']
            else:
                data[input_key['name']] = ''
    return data

def parse_image(img_response):
    img = Image.open(BytesIO(img_response.content))
    text = image_to_string(img)
    return text

def get_captcha_text(resp):
    image_resp = sess.get('https://igrsup.gov.in/igrsup/CaptchaImageAction', cookies=resp.cookies)
    print(image_resp)
    return parse_image(image_resp)

def get_salt_value(soup):
    raw_salt = soup.find('script', text=re.compile('var salt')).text
    salt_value = re.findall('[a-z0-9]{16}', raw_salt)[0]
    return salt_value
</code></pre>
<blockquote>
<p>Creating session & fetching login page</p>
</blockquote>
<pre><code>sess = requests.session()
# Login Page (phase-1)
resp = sess.get('https://igrsup.gov.in/igrsup/userServicesLogin?request_locale=hi')
print(resp)
</code></pre>
<blockquote>
<p>Fetching captcha image</p>
</blockquote>
<pre><code>img_header = {'Accept': 'image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-GB,en;q=0.9',
'Connection': 'keep-alive',
'Host': 'igrsup.gov.in',
'Referer': 'https://igrsup.gov.in/igrsup/userServicesLogin?request_locale=hi',
'sec-ch-ua': '"Google Chrome";v="111", "Not(A:Brand";v="8", "Chromium";v="111"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Linux"',
'Sec-Fetch-Dest': 'image',
'Sec-Fetch-Mode': 'no-cors',
'Sec-Fetch-Site': 'same-origin',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'}
# Captcha request (phase-2)
image_resp = sess.get('https://igrsup.gov.in/igrsup/CaptchaImageAction', headers = img_header, cookies = resp.cookies)
print(image_resp)
captcha_text = parse_image(image_resp)
print(captcha_text)
</code></pre>
<blockquote>
<p>Fetching hidden inputs and encrypting password</p>
</blockquote>
<pre><code>resp_soup = get_soup(resp.content)
# Get hidden inputs
input_data = get_hidden_inputs(resp_soup)
# Get salt value
input_salt = get_salt_value(resp_soup)
input_salt
# Password encrytion
pass_word = 'Shreeram@1'
login_pass = encrypt.PyJsHoisted_hash_sha_(pass_word,input_salt)
print(login_pass)
# Updating input parameters
data = {'login_id': 'shreeram',
'login_password': login_pass,
'enteredCaptcha': captcha_text,
'label.log_in': 'प्रवेश करें',
}
data.update(input_data)
print(data)
</code></pre>
<blockquote>
<p>Sending post data to get after login page</p>
</blockquote>
<pre><code>post_headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-GB,en;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'Content-Length': '352',
'Content-Type': 'application/x-www-form-urlencoded',
'Host': 'igrsup.gov.in',
'Origin': 'https://igrsup.gov.in',
'Referer': 'https://igrsup.gov.in/igrsup/userServicesLogin?request_locale=hi',
'sec-ch-ua': '"Google Chrome";v="111", "Not(A:Brand";v="8", "Chromium";v="111"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Linux"',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'same-origin',
'Sec-Fetch-User': '?1',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36,'}
# Submit login details (phase-3)
resp2 = sess.post('https://igrsup.gov.in/igrsup/us_secureIgrsUserLogin',data = data, headers = post_headers, cookies = resp.cookies)
print(resp2)
with open('html_out.html', 'wb+') as f:
    f.write(resp2.content)
</code></pre>
<p>The status code of the last response, i.e. resp2, is 200; however, on a successful login the status code of resp2 should be 302. When we open the saved HTML response, it shows the same login page with an "invalid inputs" error. I feel like the cookies are being used differently somewhere, because logging in via the browser with the same credentials works perfectly. Can someone please help me find a way to get this done?</p>
|
<python><html><python-3.x><web-scraping><python-requests>
|
2023-04-14 06:34:36
| 0
| 379
|
David
|
76,012,178
| 5,631,598
|
ruamel.yaml: insert a value given dynamic index and unknown depth
|
<p>Using ruamel, I need to add new values to a yaml file where the index (and depth) is not known until runtime. Is there a way to insert a value if I'm given the entire path for the index? (I can decide the format for the index)</p>
<p>i.e. Given the following yaml:</p>
<pre><code>app:
  datasource:
    url: example.org
    username: myuser
    password: mypw
</code></pre>
<p>I need to add a new value to this yaml file. If I'm given the index <code>app:datasource:port</code> and value <code>myport</code>, then I want the output to be:</p>
<pre><code>app:
  datasource:
    url: example.org
    username: myuser
    password: mypw
    port: myport
</code></pre>
<p><strong>What I tried</strong></p>
<p>I tried just putting the entire index inside square brackets <code>[]</code> or use <code>.insert()</code>, but neither of these work:</p>
<pre><code>import sys
from ruamel.yaml import YAML

yaml = YAML()
with open('myfile.yaml', 'r') as stream:
    code = yaml.load(stream)

# assume this index isn't known until runtime
index = 'app:datasource:port'
other_index = 'app:datasource:other_port'

code[index] = 'myport'
code.insert(1, other_index, 'otherport')

yaml.dump(code, sys.stdout)
</code></pre>
<p>it produces the following output, which isn't what I want. It just treats them as top-level keys rather than nested ones:</p>
<pre><code>app:
  datasource:
    url: example.org
    username: myuser
    password: mypw
app:datasource:port: myport
app:datasource:other_port: otherport
</code></pre>
<p><strong>What can I do to insert a value at a dynamically provided index?</strong></p>
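One possible approach (a sketch, assuming `:` as the separator and that brand-new intermediate levels may be created as plain dicts, which ruamel.yaml can still dump): split the index and walk the nested mappings before assigning the leaf.

```python
def set_nested(data, dotted_key, value, sep=":"):
    """Set data['a']['b']['c'] = value given the key 'a:b:c'.

    Existing mappings (including ruamel's CommentedMap, a dict subclass)
    are reused, so comments and ordering on them are preserved; missing
    intermediate levels are created as plain dicts.
    """
    *parents, leaf = dotted_key.split(sep)
    node = data
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

# Demonstrated on a plain dict standing in for the loaded YAML:
cfg = {"app": {"datasource": {"url": "example.org"}}}
set_nested(cfg, "app:datasource:port", "myport")
```

With the question's script, `set_nested(code, index, 'myport')` would replace the `code[index] = 'myport'` line.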
|
<python><ruamel.yaml>
|
2023-04-14 06:32:52
| 1
| 381
|
Calseon
|
76,012,070
| 2,848,049
|
paypal personal account payment integration with flask
|
<p>I am developing a website using flask and I want to accept user payments with PayPal.</p>
<p>For testing purpose, I want to let users pay a small amount using paypal, which will be converted into credits of website. <strong>I don't have a PayPal business account.</strong></p>
<p>However, I failed to find a detailed and up-to-date example of doing this with flask and PayPal: the ones I found seem to be outdated or deprecated, such as <a href="https://pypi.org/project/paypalrestsdk/" rel="nofollow noreferrer">paypalrestsdk</a>.</p>
<p>Also, PayPal's examples are in nodejs/javascript and most of them require a business account.</p>
<p>I noticed that one can easily create a PayPal button and embed it in your own website. But how can we detect and verify the payment using flask if we do it this way?</p>
<p>Do you have any suggestions or any other alternatives? Thanks</p>
|
<python><flask><paypal>
|
2023-04-14 06:15:11
| 1
| 574
|
wildcolor
|
76,012,038
| 843,075
|
ImportError while importing test module
|
<p>When I run my Playwright test, I get the following error:</p>
<pre><code>ImportError while importing test module 'C:\Users\fsdam\OneDrive\Documents\python_work\playwright-python-tutorial\tests\test_search.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
C:\Users\fsdam\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test_search.py:2: in <module>
from pages.search import DuckDuckGoSearchPage
E ModuleNotFoundError: No module named 'pages'
</code></pre>
<p>It complains that there is no module named <code>pages</code>. But <code>pages</code> is a package. I am following a tutorial and I do not see any difference between my code and folder structure and the tutorial's. Any help would be great.</p>
<p>Here is my folder structure:</p>
<pre><code>.
└── playwright_python_tutorial /
├── pages/
│ ├── __init__.py
│ ├── result.py
│ └── search.py
└── tests/
└── test_search.py
</code></pre>
<p>Here is my code that contains the test containing the imports:</p>
<pre><code>from playwright.sync_api import expect, Page
from pages.search import DuckDuckGoSearchPage
from pages.result import DuckDuckGoResultPage

def test_basic_duckduckgo_search(page: Page) -> None:
    search_page = DuckDuckGoSearchPage(page)
    result_page = DuckDuckGoResultPage(page)

    # Given the DuckDuckGo home page is displayed
    search_page.load()

    # When the user searches for a phrase
    page.locator('id=searchbox_input').fill('panda')
    # page.fill('id=searchbox_input', 'panda')
    page.locator('xpath=//button[@aria-label="Search"]').click()

    # Then the search result query is the phrase
    expect(result_page.search_input).to_have_value('panda')
    # Alternative way: assert 'panda' == page.input_value('id=search_form_input')

    # And the search result links pertain to the phrase
    assert result_page.result_link_titles_contain_phrase('panda')

    # And the search result title contains the phrase
    expect(page).to_have_title('panda at DuckDuckGo')
</code></pre>
|
<python><playwright>
|
2023-04-14 06:10:48
| 1
| 304
|
fdama
|
76,011,925
| 6,846,071
|
How to make python pandas less redundant?
|
<p>I have a few lines of code that categorize video lengths into different brackets. They look like this, but I don't know how to make them less redundant. It seems very repetitive, and maybe it's slower at run time? I'm not sure how this could be improved. Any advice is appreciated, thanks.</p>
<pre><code>df_preroll.loc[(df_preroll['Video length'] > 0) & (df_preroll['Video length'] <= 6), 'Video_Length_Bracket'] = '0-6'
df_preroll.loc[(df_preroll['Video length'] >= 7) & (df_preroll['Video length'] <= 14), 'Video_Length_Bracket'] = '7-14'
df_preroll.loc[(df_preroll['Video length'] >= 15) & (df_preroll['Video length'] <= 29), 'Video_Length_Bracket'] = '15-29'
df_preroll.loc[(df_preroll['Video length'] >= 30), 'Video_Length_Bracket'] = '30+'
df_preroll.loc[(df_preroll['Video length'] == None), 'Video_Length_Bracket'] = None
</code></pre>
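For reference, this bracketing is the textbook use case for `pandas.cut`. Note also that the original `== None` row never matches (NaN compares unequal to everything, including `None`), whereas `pd.cut` simply leaves missing lengths as NaN. A sketch with sample data standing in for `df_preroll`:

```python
import pandas as pd

# Sample data standing in for the real df_preroll (assumed numeric lengths).
df_preroll = pd.DataFrame({"Video length": [3, 10, 29, 45, None]})

# One pd.cut call replaces the chain of .loc assignments; right-closed bins
# (0, 6], (6, 14], (14, 29], (29, inf] reproduce the original brackets,
# and missing lengths simply stay NaN.
df_preroll["Video_Length_Bracket"] = pd.cut(
    df_preroll["Video length"],
    bins=[0, 6, 14, 29, float("inf")],
    labels=["0-6", "7-14", "15-29", "30+"],
)
print(df_preroll)
```

Adding a bracket later only means extending the `bins` and `labels` lists.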
|
<python><pandas>
|
2023-04-14 05:45:28
| 0
| 395
|
PiCubed
|
76,011,899
| 11,926,527
|
Python code to remove line breaks in documents is not working
|
<p>I have multiple Word documents in a directory, and I am using python-docx to clean them up. It's a long script, but one small part of it that you'd think would be the easiest is not working. After making some edits, I need to remove all line breaks and carriage returns, but the following code doesn't do the job. I've tried different workarounds, such as using a for loop to iterate over each character, with no results. However, when I tried doing it manually in Notepad++, \r was easily found and replaced.</p>
<pre class="lang-py prettyprint-override"><code>def remove_line_breaks(document):
    for paragraph in document.paragraphs:
        paragraph.text = paragraph.text.replace('\r', ' ').replace('\n', ' ')
</code></pre>
|
<python><ms-word><data-cleaning>
|
2023-04-14 05:41:07
| 2
| 392
|
Leila
|
76,011,891
| 16,935,119
|
Read only specific columns from text file in Python
|
<p>Hi all, I have a few values in my text file, as shown below.</p>
<p><a href="https://i.sstatic.net/6LyUN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6LyUN.png" alt="enter image description here" /></a></p>
<p>I need to extract only the first column, "ColA". Is there a way to do that?
Below is the code that currently reads all the columns:</p>
<pre><code>import pandas as pd
pd.read_csv('New Text Document.txt', sep = "|")
Out[7]:
ColA ColB
0 3 F
1 6 G
</code></pre>
<p>Expected output</p>
<pre><code>Out[7]:
ColA
0 3
1 6
</code></pre>
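For reference, `read_csv` accepts a `usecols` parameter that restricts parsing to the named columns. A sketch with the sample data inlined in place of the file:

```python
import io
import pandas as pd

# Stand-in for 'New Text Document.txt' (same layout as the question).
text = "ColA|ColB\n3|F\n6|G\n"

# usecols keeps only the listed columns at parse time.
df = pd.read_csv(io.StringIO(text), sep="|", usecols=["ColA"])
print(df)
```

With the real file it would be `pd.read_csv('New Text Document.txt', sep="|", usecols=["ColA"])`.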
|
<python>
|
2023-04-14 05:38:37
| 1
| 1,005
|
manu p
|
76,011,794
| 924,677
|
How to decode a COCO RLE binary mask to an image in javascript?
|
<p>This is an example of a COCO RLE mask - <a href="https://pastebin.com/ZhE2en4C" rel="nofollow noreferrer">https://pastebin.com/ZhE2en4C</a></p>
<p>It's an output from a YOLOv8 validation run, taken from the generated predictions.json file.</p>
<p>I'm trying to decode this string in JavaScript and render it on a canvas. The encoded string is valid, because in python I can do this:</p>
<pre class="lang-py prettyprint-override"><code>from pycocotools import mask as coco_mask
from PIL import Image
import numpy as np

example_prediction = {
    "image_id": "102_jpg",
    "category_id": 0,
    "bbox": [153.106, 281.433, 302.518, 130.737],
    "score": 0.8483,
    "segmentation": {
        "size": [640, 640],
        "counts": "<RLE string here>"
    }
}

def rle_to_bitmap(rle):
    bitmap = coco_mask.decode(rle)
    return bitmap

def show_bitmap(bitmap):
    img = Image.fromarray(bitmap.astype(np.uint8) * 255, mode='L')
    img.show()
    input("Press Enter to continue...")
    img.close()

mask_bitmap = rle_to_bitmap(example_prediction["segmentation"])
show_bitmap(mask_bitmap)
</code></pre>
<p>And I can see the decoded mask.</p>
<p>Is there a library I can use to decode the same string in JavaScript and convert it to an <code>Image</code>? I tried digging into the source code of pycocotools, but I couldn't work it out.</p>
|
<javascript><python><yolo><pycocotools>
|
2023-04-14 05:16:10
| 1
| 7,347
|
Nikolay Dyankov
|
76,011,734
| 16,156,882
|
OR Tools taking infinite time for MVRP
|
<p>I am trying to solve an MVRP (multi-vehicle routing problem) using Google OR-Tools, with 1000 points and 30 vehicles. This seems to take an indefinite amount of time, but when I reduce the fleet to 1 vehicle it gives me a solution.</p>
<p>This gives me a fair idea that TSP works well in Google OR-Tools, but MVRP gets stuck and takes a huge amount of time.</p>
<p>Can anyone suggest better approaches, research papers, blogs, or articles where people have scaled up MVRP problems?</p>
|
<python><or-tools><traveling-salesman><vehicle-routing>
|
2023-04-14 05:01:51
| 0
| 519
|
Varun Singh
|
76,011,623
| 10,035,190
|
how to make dataframe from indexed dictionary?
|
<p>How do I create a DataFrame from an index-keyed dictionary like the one shown in the code below?</p>
<pre><code>import pandas as pd
details = {
0:{'name':'Ankit','age':'23','college':'bhu'},
1:{'name':'Aishwarya','age':'24','college':'jnu'}
}
df = pd.DataFrame(details)
df
</code></pre>
<p>I want a table like the following, but that is not what I get:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>age</th>
<th>college</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ankit</td>
<td>23</td>
<td>bhu</td>
</tr>
<tr>
<td>Aishwarya</td>
<td>24</td>
<td>jnu</td>
</tr>
</tbody>
</table>
</div>
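For reference, `pd.DataFrame.from_dict` with `orient="index"` treats each top-level key as a row label instead of a column, which produces the desired table (equivalently, `pd.DataFrame(details).T`):

```python
import pandas as pd

details = {
    0: {"name": "Ankit", "age": "23", "college": "bhu"},
    1: {"name": "Aishwarya", "age": "24", "college": "jnu"},
}

# orient="index": outer keys become the row index, inner keys the columns.
df = pd.DataFrame.from_dict(details, orient="index")
print(df)
```

The plain `pd.DataFrame(details)` call in the question instead makes the outer keys columns, which is why the table came out transposed.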
|
<python><python-3.x><pandas><dataframe><dictionary>
|
2023-04-14 04:34:12
| 3
| 930
|
zircon
|
76,011,595
| 3,595,843
|
Ray Cluster Launch - Cannot switch off rsync during Cluster Launch with ray up
|
<h3>What happened</h3>
<p>I’m trying to launch a single-node ray cluster using ray up.</p>
<p>I have two nodes. One is the node I run ray up from, and the other is to be the head node of the ray cluster. I’ve confirmed that the first node can SSH into the second one.</p>
<p>I’m using exactly the same config.yaml as the one found here - <a href="https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/local/example-full.yaml" rel="nofollow noreferrer">https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/local/example-full.yaml</a>. When I run <code>ray up config.yaml</code>, I get the following error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'rsync'
</code></pre>
<p>I don’t really need the rsync stuff since all the nodes I’m working with have a shared NAS mounted on them. So, I commented out the following fields:</p>
<ol>
<li><code>file_mounts</code></li>
<li><code>file_mounts_sync_continuously</code></li>
<li><code>rsync_filter</code></li>
<li><code>cluster_synced_files</code></li>
</ol>
<p>and ran <code>ray up config.yaml</code> again, but got the same error.</p>
<p>So, here’s my question - how can I switch off file syncing when running the Cluster Launcher? Or is there an easy way to make my error go away?</p>
<h3>Versions / Dependencies</h3>
<p>I'm using Ray 2.3.1 with Python 3.8.9</p>
<h3>Reproduction script</h3>
<p>This is the Ray Cluster Launch config that I used (with all the file syncing stuff removed):</p>
<pre><code>cluster_name: default

provider:
    type: local
    head_ip: YOUR_HEAD_NODE_HOSTNAME
    worker_ips: []

auth:
    ssh_user: root

min_workers: 0
max_workers: 0
upscaling_speed: 1.0
idle_timeout_minutes: 5

initialization_commands: []
setup_commands: []
head_setup_commands: []
worker_setup_commands: []

head_start_ray_commands:
    - ray stop
    - ulimit -c unlimited && ray start --head --port=6379

worker_start_ray_commands:
    - ray stop
    - ray start --address=$RAY_HEAD_IP:6379
</code></pre>
|
<python><cluster-computing><ray>
|
2023-04-14 04:26:59
| 1
| 861
|
shinvu
|
76,011,502
| 14,679,834
|
pip list not showing packages installed in dockerfile in dev container
|
<p>I'm currently trying to develop a Python application inside a container and am using Docker. I'm under the impression that the packages installed through the <code>dockerfile</code> should be available in the container but when running <code>pip list</code> it doesn't show any of the packages mentioned in the <code>dockerfile</code>. Here's my <code>dockerfile</code>.</p>
<pre><code>FROM python:3.10-slim-buster
# Update package lists
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 gcc g++ git build-essential libpoppler-cpp-dev pkg-config poppler-utils tesseract-ocr libtesseract-dev -y
# Make working directories
RUN mkdir -p /intellecs-backend
WORKDIR /intellecs-backend
# Copy the requirements.txt file to the container
COPY requirements.txt .
# Install dependencies
RUN pip install --upgrade pip
RUN pip install torch torchvision torchaudio
RUN pip install -r requirements.txt
RUN pip install 'git+https://github.com/facebookresearch/detectron2.git@v0.4#egg=detectron2'
# Copy the .env file to the container
COPY .env .
# Copy every file in the source folder to the created working directory
COPY . .
# Expose the port that the application will run on
EXPOSE 8000
# Start the application
CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>and here's my devcontainer.json:</p>
<pre><code>// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-dockerfile
{
"name": "Existing Dockerfile",
"build": {
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerfile": "../dockerfile"
},
"features": {
"ghcr.io/devcontainers/features/python:1": {
"installTools": true,
"version": "3.10"
}
}
}
</code></pre>
<p>Because of this I can't develop inside the container unless I reinstall the packages in the container itself, which were supposed to have been installed when the image was built.</p>
<p>The app does work, though, when developing on my local system and using <code>docker build</code> and <code>docker run</code>.</p>
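A plausible cause (an assumption — the question doesn't confirm it): the `ghcr.io/devcontainers/features/python` feature installs its own Python toolchain into the container, so the `pip`/`python` first on PATH inside the dev container is not the image interpreter that the Dockerfile's `RUN pip install` lines populated. Since `python:3.10-slim-buster` already ships Python 3.10, dropping the feature block leaves the image interpreter as the default:

```json
{
    "name": "Existing Dockerfile",
    "build": {
        "context": "..",
        "dockerfile": "../dockerfile"
    }
}
```

After a "Rebuild Container", `pip list` run in the dev container should then report the packages installed during the image build.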
|
<python><docker><pip><dockerfile><vscode-devcontainer>
|
2023-04-14 04:02:41
| 1
| 526
|
neil_ruaro
|
76,011,471
| 21,295,456
|
Cannot install Ultralytics using pip in PythonAnywhere
|
<p>I was trying to install the module <code>ultralytics</code> (for YOLOv8) in a Bash console on the hosting service PythonAnywhere.</p>
<p>I tried:</p>
<p><code>pip install ultralytics</code></p>
<p>But get:</p>
<blockquote>
<p>ERROR: Could not find a version that satisfies the requirement ultralytics (from versions: none)
ERROR: No matching distribution found for ultralytics</p>
</blockquote>
<p>I can install it with the same command on my local machine, but I'm having trouble in PythonAnywhere.</p>
<p>I upgraded pip, but it made no difference.</p>
<p>Why is this happening?</p>
|
<python><pip><pythonanywhere>
|
2023-04-14 03:52:19
| 1
| 339
|
akashKP
|
76,011,372
| 5,455,532
|
Errors trying to flatten a response object from Google's Entity Sentiment Analysis in Python
|
<p><strong>Goal</strong></p>
<p>I'm trying to flatten a response object from google's entity sentiment analysis that's in a field named <code>entitysentiment</code> in a pandas dataframe (<code>df</code>) in a python notebook. A sample of one of the response object entries for a single row's <code>entitysentiment</code> field is below[1].</p>
<p>It needs to loop through each row of the df, find the <code>entitysentiment</code> field, and flatten the object and its nested objects.</p>
<p>The latest function I've tried is below [2], and the resulting error message is in [3].</p>
<p>In [2], my previously attempted version is commented out, but it received the error message <code>AttributeError: Unknown field for AnalyzeEntitySentimentResponse: split</code>.</p>
<p>Any input on how to approach this or thoughts on what I'm doing wrong will be greatly appreciated.</p>
<p>[1]
<code>entities {\n name: "login page"\n type_: OTHER\n salience: 0.5467509031295776\n mentions {\n text {\n content: "login page"\n begin_offset: 24\n }\n type_: COMMON\n sentiment {\n magnitude: 0.4000000059604645\n score: -0.4000000059604645\n }\n }\n sentiment {\n magnitude: 0.4000000059604645\n score: -0.4000000059604645\n }\n}\nentities {\n name: "app"\n type_: CONSUMER_GOOD\n salience: 0.45324909687042236\n mentions {\n text {\n content: "app"\n begin_offset: 52\n }\n type_: COMMON\n sentiment {\n magnitude: 0.4000000059604645\n score: -0.4000000059604645\n }\n }\n sentiment {\n magnitude: 0.4000000059604645\n score: -0.4000000059604645\n }\n}\nlanguage: "en"\n</code></p>
<p>[2]</p>
<pre><code>"""
# Define a function to extract entity mentions from entitysentiment
def extract_entities(text):
entities = []
for line in text.split('\n'):
if 'content:' in line:
entity = line.strip().split(':')[-1].strip().replace("'", "")
entities.append(entity)
return entities
"""
def extract_entities(text):
entities = []
if 'entity_mentions' not in text:
return entities
for entity in text['entity_mentions']:
entities.append(entity['content'])
return entities
# Apply the function to the entitysentiment column
df['entity_mentions'] = df['entitysentiment'].apply(extract_entities)
# Convert the entity mentions to separate columns
entity_mentions_df = pd.DataFrame(df['entity_mentions'].to_list(), columns=['entity_mention_1', 'entity_mention_2', 'entity_mention_3'])
# Concatenate the original dataframe with the entity mentions dataframe
result = pd.concat([df, entity_mentions_df], axis=1)
# Drop the original entitysentiment and entity_mentions columns
result.drop(['entitysentiment', 'entity_mentions'], axis=1, inplace=True)
# Show the result
print(result)
</code></pre>
<p>[3]</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_1/3330714854.py in <module>
22
23 # Apply the function to the entitysentiment column
---> 24 df['entity_mentions'] = df['entitysentiment'].apply(extract_entities)
25
26 # Convert the entity mentions to separate columns
/opt/conda/lib/python3.7/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwargs)
4355 dtype: float64
4356 """
-> 4357 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
4358
4359 def _reduce(
/opt/conda/lib/python3.7/site-packages/pandas/core/apply.py in apply(self)
1041 return self.apply_str()
1042
-> 1043 return self.apply_standard()
1044
1045 def agg(self):
/opt/conda/lib/python3.7/site-packages/pandas/core/apply.py in apply_standard(self)
1099 values,
1100 f, # type: ignore[arg-type]
-> 1101 convert=self.convert_dtype,
1102 )
1103
/opt/conda/lib/python3.7/site-packages/pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
/tmp/ipykernel_1/3330714854.py in extract_entities(text)
13 def extract_entities(text):
14 entities = []
---> 15 if 'entity_mentions' not in text:
16 return entities
17 for entity in text['entity_mentions']:
/opt/conda/lib/python3.7/site-packages/proto/message.py in __contains__(self, key)
686 wire serialization.
687 """
--> 688 pb_value = getattr(self._pb, key)
689 try:
690 # Protocol buffers "HasField" is unfriendly; it only works
AttributeError: entity_mentions
</code></pre>
|
<python><pandas><flatten>
|
2023-04-14 03:20:40
| 1
| 301
|
dsx
|
76,011,304
| 2,095,858
|
Tkinter/Matplotlib inconsistent behaviour between debugging and 'release'
|
<h1>Preliminaries</h1>
<ul>
<li>Python: 3.11.1 x64</li>
<li>tkinter: 8.6</li>
<li>matplotlib: 3.6.2</li>
<li>VsCode 1.77.2 on Windows 10 21H2</li>
</ul>
<h1>Summary</h1>
<p>I have a simple app that plots a graph on a tkinter canvas, records the image, and then restores it using <code>copy_from_bbox</code> and <code>restore_region</code> respectively. When I debug and step through the code, I can see the graph appear, then get cleared, and then get restored. If I run the app normally, none of these steps seem to happen. What am I doing wrong?</p>
<h1>Code</h1>
<p>The code is complete and should run as-is when pasted.</p>
<pre><code>import tkinter
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
import numpy as np
import time
root = tkinter.Tk()
# Step 1: create a plot and copy from bbox
fig = Figure(figsize=(5, 4), dpi=100)
t = np.arange(0, 3, .01)
fig.add_subplot(111).plot(t, 2 * np.sin(2 * np.pi * t))
canvas = FigureCanvasTkAgg(fig, master=root) # A tk.DrawingArea.
canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
canvas.draw()
bg = canvas.copy_from_bbox(fig.axes[0].bbox)
time.sleep(3)
# Step 2: clear the axes
fig.axes[0].clear()
canvas.draw()
time.sleep(3)
# Step 3: restore the plot
canvas.restore_region(bg)
canvas.blit(fig.axes[0].bbox)
canvas.flush_events()
# Step 4: enter tkinter loop
tkinter.mainloop()
</code></pre>
|
<python><matplotlib><tkinter><tkinter-canvas><matplotlib-animation>
|
2023-04-14 03:03:38
| 0
| 421
|
craigB
|
76,011,221
| 4,503,546
|
How to round down in a for loop when using .loc in Python
|
<p>I am trying to turn a float into an integer by rounding down to the nearest whole number. Normally, I use pandas' <code>.apply(np.floor)</code> on a dataframe column and it works. However, in this instance I am iterating in a for loop with this code:</p>
<pre><code>f1.loc[today,'QuantityRounded'] = f1.loc[today,'QuantityNotRounded'].apply(np.floor)
</code></pre>
<p>And I get this error:</p>
<pre><code>AttributeError: 'numpy.float64' object has no attribute 'apply'
</code></pre>
<p>It seems that when using a for loop and <code>.loc</code>, the <code>.apply</code> call doesn't work.</p>
<p>What is the easiest way to round down in a for loop using <code>.loc</code> (sorry, my vocabulary here is lacking)?</p>
<p>Thanks</p>
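For what it's worth, <code>f1.loc[today, 'QuantityNotRounded']</code> with a single row label returns a scalar <code>numpy.float64</code>, and scalars have no <code>.apply</code>; <code>np.floor</code> can be called on the scalar directly. A minimal sketch (the frame and label here are made up for illustration):

```python
import numpy as np
import pandas as pd

# hypothetical frame standing in for f1
f1 = pd.DataFrame({'QuantityNotRounded': [3.7, 5.2]},
                  index=['2023-04-12', '2023-04-13'])
today = '2023-04-13'

# .loc with a single row label yields a scalar, so call np.floor on it directly
f1.loc[today, 'QuantityRounded'] = np.floor(f1.loc[today, 'QuantityNotRounded'])
print(f1.loc[today, 'QuantityRounded'])  # 5.0
```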
|
<python><dataframe><numpy><for-loop><numpy-slicing>
|
2023-04-14 02:42:07
| 1
| 407
|
GC123
|
76,011,210
| 6,534,818
|
Pandas: Aggregate to longest set
|
<p>How can I get the unique entries from a dataframe such as the following, recognizing that many of the paths overlap (one is a prefix of another) and thus should not be counted in the final output? I feel like this is perhaps a substring-search problem, but I am unclear as to what might be a good approach.</p>
<pre><code>df = pd.DataFrame({"id": [1, 1, 1, 1, 1, 1, 2, 2],
"filepath": ['src', 'src/abc', 'src/abc/cde',
'src/abc/cde/main', 'src/abc/cde/main/detach', 'dl/path',
'src', 'dl/path']})
id filepath
0 1 src
1 1 src/abc
2 1 src/abc/cde
3 1 src/abc/cde/main
4 1 src/abc/cde/main/detach
5 1 dl/path
6 2 src
7 2 dl/path
</code></pre>
<pre><code>expected = pd.DataFrame({"id": [1, 2],
"filepath": ['src/abc/cde/main/detach, dl/path', 'src, dl/path']})
id filepath
0 1 src/abc/cde/main/detach, dl/path
1 2 src, dl/path
</code></pre>
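One possible approach, sketched here under the assumption that "overlapping" means one path is a <code>/</code>-separated prefix of another: keep only the paths in each group that are not a prefix of a longer path.

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 1, 1, 1, 2, 2],
                   "filepath": ['src', 'src/abc', 'src/abc/cde',
                                'src/abc/cde/main', 'src/abc/cde/main/detach',
                                'dl/path', 'src', 'dl/path']})

def longest_paths(paths):
    # drop any path that is a strict '/'-prefix of another path in the group
    keep = [p for p in paths
            if not any(q != p and (q + '/').startswith(p + '/') for q in paths)]
    return ', '.join(keep)

expected = (df.groupby('id')['filepath']
              .apply(lambda s: longest_paths(s.tolist()))
              .reset_index())
# id 1 -> 'src/abc/cde/main/detach, dl/path'; id 2 -> 'src, dl/path'
print(expected)
```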
|
<python><pandas>
|
2023-04-14 02:39:57
| 2
| 1,859
|
John Stud
|
76,010,709
| 3,380,902
|
How to obtain underlying data from pydeck HexagonLayer?
|
<p>Is there a method or any other way to obtain the underlying binned data from a <code>HexagonLayer</code>? I'd like to get the aggregate counts and the hexagon centers (longitude and latitude) or geometry. I am using pydeck to produce the visualization.</p>
<pre><code>import pydeck as pdk
# Create the HexagonLayer
layer = pdk.Layer(
'HexagonLayer',
df,
get_position=['lng', 'lat'],
auto_highlight=True,
radius=radius,
elevation_scale=200,
pickable=True,
extruded=True,
wireframe=True,
coverage=1,
colorAggregation='SUM'
)
# Render the layer
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
r.show()
# Get the underlying data
data = layer.get_data()
print(data.head())
</code></pre>
|
<python><geospatial><deck.gl><pydeck>
|
2023-04-14 00:14:35
| 1
| 2,022
|
kms
|
76,010,660
| 11,331,843
|
Click a link based on value in another column of a table - web scraping python
|
<p>I am trying to scrape a website for a test case: <a href="https://cdicloud.insurance.ca.gov/cal/LicenseNumberSearch" rel="nofollow noreferrer">link</a>. When I enter a certain value in the search box, it returns multiple rows in the table, and I want to click on the link in the 'License Number' column when the value in the 'License Name' column matches the string I am looking for.</p>
<p>For example, I would want to click on the License Number <strong>0747600</strong> when the value in the column <strong>License Name == 'AON Consulting, Inc.'</strong></p>
<p>So far I have gotten to the point below:</p>
<pre><code>
driver = webdriver.Chrome('YOUR_PATH_TO_chromedriver.exe_FILE')
form_url = "https://cdicloud.insurance.ca.gov/cal/LicenseNumberSearch?handler=Search"
driver.get(form_url)
search_box = driver.find_element('id','SearchLicenseNumber').send_keys("0747600")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, "btnSearch"))).click()
click_search=driver.find_element('id','btnSearch')
click_search.send_keys(Keys.ENTER)
page_source = driver.page_source
soup = BeautifulSoup(page_source, "html.parser")
tables = soup.find('table')
# Looking for the table with the classes 'wikitable' and 'sortable'
table = soup.find('table', id='searchResult')
df = pd.DataFrame(columns=['license_name', 'license_number'])
license_name = []
license_number =[]
# Collecting data
for row in table.tbody.find_all('tr'):
# Find all data for each column
columns = row.find_all('td')
if(columns != []):
#license_name = columns[0].text.strip()
license_name.append(columns[0].text.strip())
license_number.append(columns[1].text.strip())
select_finder = "//td[contains(text(), 'AON Consulting, Inc.')]/td[1]/a"
driver.find_element("xpath",select_finder).click()
</code></pre>
<p>The last 2 lines of the code are throwing this error:</p>
<pre><code>Message: no such element: Unable to locate element:
</code></pre>
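As a side note on the failing XPath: <code>//td[...]/td[1]/a</code> looks for a <code>td</code> nested inside another <code>td</code>, but the two cells are siblings inside the same <code>tr</code>. A sketch of a row-based selector (the column positions are assumptions based on the scraping loop above, where column 0 is the name and column 1 is the number):

```python
# go through the row that contains the matching name cell,
# then pick the anchor in the license-number cell of that same row
select_finder = ("//tr[td[1][contains(normalize-space(.), "
                 "'AON Consulting, Inc.')]]/td[2]/a")
print(select_finder)
```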
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-04-14 00:01:11
| 2
| 631
|
anonymous13
|
76,010,410
| 16,655,290
|
Edge modal blocking login automation using Selenium
|
<p>I am writing a python script that logs into edge using msedge-selenium-tools. When I run the script, a modal appears, and I can't seem to get any css selectors to use in Selenium. Here is my current code</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from msedge.selenium_tools import EdgeOptions, Edge
from time import sleep
import requests
from pprint import pprint
import random
import os
options = EdgeOptions()
options.use_chromium = True
def wait_for(sec=2):
sleep(sec)
driver = Edge(executable_path="./edgedriver_win64/msedgedriver.exe", options=options)
try:
driver.get('https://login.live.com/')
wait_for(5)
email_input = driver.find_element_by_id("i0116")
email_input.clear()
email_input.send_keys(os.environ['email'])
email_input.send_keys(Keys.ENTER)
wait_for(10)
password_input = driver.find_element_by_id("i0118")
password_input.clear()
password_input.send_keys(os.environ['pass'])
password_input.send_keys(Keys.ENTER)
except Exception as e:
print(e)
wait_for(3)
</code></pre>
<p>I attached an image of the modal I receive.<a href="https://i.sstatic.net/t6Inm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t6Inm.png" alt="Edge Modal" /></a></p>
<p>Any assistance would be greatly appreciated. When the webdriver launches the Edge browser and displays the modal, I tried right-click > inspect, or ctrl + shift + i to get into the front-end to inspect but could not perform these actions.</p>
|
<python><selenium-webdriver><modal-dialog><microsoft-edge>
|
2023-04-13 22:55:31
| 1
| 351
|
Daikyu
|
76,010,335
| 417,896
|
Setting julia project when using pyjulia
|
<p>I am using pyjulia to call a Julia project from Python. I want to use all the packages that are part of my Julia project, so I want to init the Julia runtime from my project directory. Using <code>jl_init_path</code> doesn't seem to work for this. Is there a pyjulia option, or should I use <code>Pkg.activate()</code>?</p>
<pre><code>import julia
j = julia.Julia(jl_init_path="/path/to/project/")
</code></pre>
|
<python><julia>
|
2023-04-13 22:38:09
| 1
| 17,480
|
BAR
|
76,010,331
| 21,787,377
|
Django: how to generate random number inside a slug field
|
<p>I have a model called <code>Product</code>. When the user creates an object of the <code>Product</code> model, I want to generate a random number inside the <code>Product</code> model's <code>slug</code> field, like this:</p>
<pre><code>https://stackoverflow.com/questions/76010275/cryptography-python-installation-error-with-rush-airgapped-installation
</code></pre>
<p>Take a look at this Stack Overflow URL: you can see the domain name first, a path segment called <code>questions</code>, and a random number after <code>questions</code>. I think it would look like this in a Django <code>path</code>:</p>
<pre><code>path('question/random_number/<slug:slug>/', views.some_views_name, name='Redirect-name')
</code></pre>
<p>All the examples I found on the internet don't do something like this, so how can I do this in Django?</p>
<p>model:</p>
<pre><code>class Product(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
name = models.CharField(max_length=50)
image = models.ImageField(upload_to='product-image')
price = models.FloatField(max_length=11)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
slug = models.SlugField(max_length=100, unique=True)
def get_user_public_url(self):
return reverse('Public-Profile', kwargs={'slug': self.user.profile.slug})
def save(self, *args, **kwargs):
self.slug = slugify(self.name)
super().save(*args, **kwargs)
def __str__(self):
return str(self.user)
</code></pre>
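For illustration, one common pattern is to build the slug in <code>save()</code> from a random number plus the slugified name. The sketch below uses a stdlib stand-in for <code>django.utils.text.slugify</code> so it can run anywhere; in a real Django project you would use the imported <code>slugify</code> and should also handle collisions against the <code>unique=True</code> constraint:

```python
import random
import re

def slugify(value):
    # minimal stand-in for django.utils.text.slugify, for illustration only
    value = re.sub(r'[^\w\s-]', '', value.lower())
    return re.sub(r'[-\s]+', '-', value).strip('-')

def make_slug(name):
    # prepend an 8-digit random number, mirroring the
    # /questions/<number>/<slug> shape of Stack Overflow URLs
    return f"{random.randint(10_000_000, 99_999_999)}-{slugify(name)}"

print(make_slug("My Great Product"))  # e.g. '48205113-my-great-product'
```

In the model this would replace `self.slug = slugify(self.name)` inside `save()`, and the URL pattern could then stay `path('question/<slug:slug>/', ...)` since the number is part of the slug itself.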
|
<python><django>
|
2023-04-13 22:37:42
| 1
| 305
|
Adamu Abdulkarim Dee
|
76,010,127
| 3,259,258
|
Slack Bot Actions (update form data on dropdown)
|
<p>I am working on fixing a python bot that another developer wrote. The code is a <code>python-bolt</code> slack bot that creates a <code>shortcut</code> on <code>messages</code>. When clicked it will pop up a Modal Form that allows you to create a Redmine Issue via the redminelib api. The initial developer grabbed all the projects from Redmine and put them in one drop down, and grabbed all the users from Redmine and put it in a second dropdown. This worked (kinda).</p>
<p>The problem is that if you attempt to assign a user to a project that they are not a part of ... the redmine api (correctly) gives you an error.</p>
<p>I have attempted to adapt the code to pull ONLY the users for a project. This functionality works ... the problem is I need this to dynamically update whenever the projects dropdown changes.</p>
<p>To my understanding, the code below "should" work ... but <code>__handle_project_select__</code> is never called.</p>
<pre class="lang-py prettyprint-override"><code># Helper function to print to stderr
def eprint(*args, **kwargs):
print(*args, file=sys.stderr, **kwargs)
class SlackApp:
def __init__(self, config, app, handler):
eprint('__init__')
# self.__logger__ = logging.getLogger(__name__)
redmine_api_key = config["REDMINE_API_TOKEN_KEY"]
redmine_cert_path = config["REDMINE_CERT_FILE_PATH_KEY"]
self.__redmine_url__ = config["REDMINE_URL_KEY"]
self.__slack_instance__ = config["SLACK_INSTANCE_KEY"]
self.__slackBoltApp__ = app # App(token=slackBot_token)
self.__slackHandler__ = handler # SocketModeHandler(self.__slackBoltApp__, slackApp_token)
self.__slackClient__ = app.client # WebClient(token=slackBot_token)
self.__redmineClient__ = RedmineClient(self.__redmine_url__, redmine_api_key, redmine_cert_path)
self.__slackBoltApp__.shortcut("ssb_redmine_create_issue")(self.__handle_shortcut_redmine__)
self.__slackBoltApp__.view("ssb_redmine_issue_submission")(self.__handle_view_submission__)
self.__slackBoltApp__.action("ssb_redmine_project_select_action")(self.__handle_project_select__)
def __showform__(self, trigger_id, metadata):
eprint('__showform__')
try:
projects = self.__redmineClient__.get_projects()
users = self.__redmineClient__.get_users()
response = self.__slackClient__.views_open(
trigger_id=trigger_id,
view = json.dumps(SlackModalForm.getFormNewIssue(trigger_id, metadata, projects, users))
)
except Exception as e:
# self.__logger__.exception(f"Error opening view: {e}")
raise
def __handle_project_select__(self, ack, body, logger):
eprint('__handle_project_select__')
ack()
# Extract the selected project ID
selected_project_id = body['actions'][0]['selected_option']['value']
# Get the updated list of users based on the selected project (Assuming you have a method for this)
updated_users = self.__redmineClient__.get_users_for_project(selected_project_id)
# Create the updated users dropdown options
updated_user_options = [
{
"text": {
"type": "plain_text",
"text": user_name
},
"value": str(user_id)
}
for user_name, user_id in updated_users.items()
]
# Update the view with the new options for the users dropdown
self.__slackClient__.views_update(
view_id=body['container']['view_id'],
view=json.dumps(
SlackModalForm.getFormNewIssue(
body['trigger_id'],
body['view']['private_metadata'],
projects, # Pass the original project list
updated_users
)
)
)
</code></pre>
<p>The <a href="https://api.slack.com/tutorials/message-action" rel="nofollow noreferrer">slack documentation I found</a> seems to have been superseded by the websocket approach, and the "Settings > Interactivity & Shortcuts" menu specifically states:</p>
<blockquote>
<p>Socket Mode is enabled. You won’t need to specify an Options Load URL.</p>
</blockquote>
<p>And while I have found a <a href="https://api.slack.com/methods?filter=views" rel="nofollow noreferrer">reference</a> to the <code>views</code> API, I don't see anything involving views in <code>Bot Token Scopes</code>.</p>
<p>I even tried:</p>
<pre class="lang-py prettyprint-override"><code>@app.action("*")
def handle_all_actions(ack, action, logger):
eprint('Is this even being called?')
ack()
</code></pre>
<p>but am not getting anything.</p>
<p>Gut feel is that I am missing a permission or setting somewhere ... but have no idea where.</p>
|
<python><slack-api>
|
2023-04-13 21:54:02
| 1
| 823
|
CaffeineAddiction
|
76,010,100
| 5,240,473
|
XGBoost callback
|
<p>I'm following <a href="https://xgboost.readthedocs.io/en/stable/python/examples/callbacks.html" rel="nofollow noreferrer">this</a> example to understand how callbacks work with xgboost.</p>
<p>I modified the code to run without gpu_hist and use hist only (otherwise I get an error):</p>
<p><code>'tree_method': 'hist'</code></p>
<p>I'm facing two problems:</p>
<ol>
<li>The matplotlib plot opens but does not update and shows not-responding.</li>
<li>I attempted to write a custom print statement. The statements are printed only after code execution is completed.</li>
</ol>
<p>Note that if I stop in the debugger and manually execute line by line everything works as expected.</p>
<p>My guess is I need to somehow force a "draw"?</p>
<p>Any help solving either problem is appreciated. Below are my system specs:</p>
<ul>
<li>OS: Windows 10</li>
<li>python: 3.10.9</li>
<li>xgboost version: 1.7.4</li>
</ul>
|
<python><python-3.x><xgboost>
|
2023-04-13 21:50:07
| 1
| 374
|
Ricardo
|
76,010,038
| 1,509,264
|
Type checking calling identical overloaded signatures in base class from derived class
|
<p>I am using MyPy 1.2.0 with the settings:</p>
<pre class="lang-toml prettyprint-override"><code>[tool.mypy]
show_column_numbers = true
disallow_any_expr = true
disallow_untyped_defs = true
</code></pre>
<p>The code has attributes where the type of one class attribute is correlated to the value of another attribute:</p>
<pre class="lang-python prettyprint-override"><code>from typing import List, Literal, overload
class Base:
null: bool
enum: List[str | None] | List[str] | None
@overload
def __init__(
self: "Base", null: Literal[True] = ..., enum: List[str | None] | None = ...,
) -> None:
pass
@overload
def __init__(
self: "Base", null: Literal[False] = ..., enum: List[str] | None = ...,
) -> None:
pass
def __init__(
self: "Base", null: bool = False, enum: List[str | None] | List[str] | None = None,
) -> None:
self.null = null
self.enum = enum
class Derived(Base):
@overload
def __init__(
self: "Derived", null: Literal[True] = ..., enum: List[str | None] | None = ...,
) -> None:
pass
@overload
def __init__(
self: "Derived", null: Literal[False] = ..., enum: List[str] | None = ...,
) -> None:
pass
def __init__(
self: "Derived", null: bool = False, enum: List[str | None] | List[str] | None = None,
) -> None:
super().__init__(null=null, enum=enum)
</code></pre>
<p>MyPy gives the error:</p>
<pre class="lang-error prettyprint-override"><code>overload-with-types.py:43:9: error: No overload variant of "__init__" of "Base" matches argument types "bool", "Union[List[Optional[str]], List[str], None]" [call-overload]
overload-with-types.py:43:9: note: Possible overload variants:
overload-with-types.py:43:9: note: def __init__(self, null: Literal[True] = ..., enum: Optional[List[Optional[str]]] = ...) -> None
overload-with-types.py:43:9: note: def __init__(self, null: Literal[False] = ..., enum: Optional[List[str]] = ...) -> None
</code></pre>
<p>I can fix the error using explicit casts for each possible overload:</p>
<pre class="lang-python prettyprint-override"><code> def __init__(
self: "Derived", null: bool = False, enum: List[str | None] | List[str] | None = None,
) -> None:
if null:
super().__init__(null=null, enum=cast(List[str | None] | None, enum))
else:
super().__init__(null=null, enum=cast(List[str] | None, enum))
</code></pre>
<p>However, if there are n correlated sets of binary type choices in the method signature, then there will be 2<sup>n</sup> overloads and a corresponding number of different calls to <code>super().__init__</code> from the derived class. The number of overloads is expected, but MyPy (and pyright) failing to recognise that the base and derived classes have the same overloaded signatures, and therefore requiring explicit casts, is not expected.</p>
<p>Is there a pythonic way to ensure that the corresponding signatures of base- and derived-classes are recognised during type checking that does not require writing out multiple <code>super().__init__(...)</code> and use <code>cast</code> for each possible overloaded signature? (Without relaxing the strict options used during type checking)</p>
|
<python><overloading><mypy><python-typing>
|
2023-04-13 21:40:37
| 0
| 172,539
|
MT0
|
76,010,030
| 12,361,700
|
tf Dataset does not seem to apply map
|
<p>I might be missing something very fundamental, but I have the following code:</p>
<pre><code>train_dataset = (tf.data.Dataset.from_tensor_slices((data_train[0:1], labels_train[0:1]))
.shuffle(500)
.batch(64)
.map(lambda x,y: (tf.cast(x, tf.float32), y))
.map(lambda x,y: (tf.image.random_flip_left_right(x), y))
.map(lambda x,y: (tf.image.random_contrast(x, 0.99, 0.999), y))
.map(lambda x,y: (tf.clip_by_value(x, 0., 255.)/255., y))
.prefetch(10))
for x,y in train_dataset.take(1):
plt.imshow(x[0])
</code></pre>
<p>Why is it always displaying the same image? It seems that no preprocessing has been applied.</p>
|
<python><tensorflow><dataset>
|
2023-04-13 21:39:39
| 0
| 13,109
|
Alberto
|
76,009,885
| 2,093,802
|
Flask Redirect From Jinja HTML Template
|
<p>I can call the redirect method from .py file like this</p>
<pre><code>@app.route('/logout', methods=['POST', 'GET'])
def logout_page():
logout_user()
flash('Anda telah logout', category='success')
return redirect(url_for('login_page'))
</code></pre>
<p>How can I call that redirect method from the Jinja HTML template?</p>
<p>I've tried something like this</p>
<pre><code>{% if not current_user.is_authenticated %}
{% redirect(url_for('login_page')) %}
{% else
...
%}
</code></pre>
<p>But it doesn't work.</p>
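Jinja has no <code>redirect</code> statement; redirects are normally issued in the Python view, before the template is ever rendered, often via a decorator. Below is a framework-free sketch of that decorator pattern (in a real app the check would use <code>flask_login.current_user</code> and return <code>flask.redirect(url_for('login_page'))</code> — the names here are stand-ins so the sketch is self-contained):

```python
from functools import wraps

def login_required_redirect(is_authenticated, login_url):
    # decide in the view layer, not the template: redirect when the
    # user is not authenticated, otherwise run the view as normal
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            if not is_authenticated():
                return ('redirect', login_url)  # stand-in for flask.redirect(...)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@login_required_redirect(lambda: False, '/login')
def dashboard():
    return 'dashboard page'

print(dashboard())  # ('redirect', '/login')
```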
|
<python><flask><jinja2>
|
2023-04-13 21:15:31
| 1
| 346
|
Herahadi An
|
76,009,612
| 17,638,323
|
Numpy `matmul` performs ~100 times worse than `dot` on array views
|
<p>It was brought to my attention that the <code>matmul</code> function in numpy is performing significantly worse than the <code>dot</code> function when multiplying array views. In this case my array view is the real part of a complex array. Here is some code which reproduces the issue:</p>
<pre><code>import numpy as np
from timeit import timeit
N = 1300
xx = np.random.randn(N, N) + 1j
yy = np.random.randn(N, N) + 1J
x = np.real(xx)
y = np.real(yy)
assert np.shares_memory(x, xx)
assert np.shares_memory(y, yy)
dot = timeit('np.dot(x,y)', number = 10, globals = globals())
matmul = timeit('np.matmul(x,y)', number = 10, globals = globals())
print('time for np.matmul: ', matmul)
print('time for np.dot: ', dot)
</code></pre>
<p>On my machine the output is as follows:</p>
<pre><code>time for np.matmul: 23.023062199994456
time for np.dot: 0.2706864000065252
</code></pre>
<p>This clearly has something to do with the shared memory as replacing <code>np.real(xx)</code> with <code>np.real(xx).copy()</code> makes the performance discrepancy go away.</p>
<p>Trawling the numpy docs was not particularly helpful, as the listed differences do not discuss implementation details when dealing with memory views.</p>
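As a workaround sketch, consistent with the observation that <code>.copy()</code> removes the discrepancy: materialize the real part into a contiguous array before calling <code>matmul</code>. <code>np.real</code> of a complex array is a strided view into the interleaved real/imaginary buffer, which can push <code>matmul</code> off the fast BLAS path:

```python
import numpy as np

xx = np.random.randn(4, 4) + 1j
x_view = np.real(xx)                   # strided view into the complex buffer
x_copy = np.ascontiguousarray(x_view)  # contiguous copy; eligible for fast BLAS

assert np.shares_memory(x_view, xx)
assert not np.shares_memory(x_copy, xx)
# np.matmul(x_copy, x_copy) should now perform comparably to np.dot
```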
|
<python><numpy><performance>
|
2023-04-13 20:32:08
| 1
| 324
|
Simon Tartakovksy
|
76,009,595
| 25,197
|
Why does this Python docstring not have newlines in the VS Code hover info?
|
<p>I have this docstring:</p>
<pre class="lang-py prettyprint-override"><code>def get_crypto_price(crypto: str, currency: str, max_attempts: int = 3) -> float | None:
"""
Retrieves the price of a cryptocurrency in a specified currency using the CoinGecko API.
Args:
crypto (str): The cryptocurrency to retrieve the price for (e.g., 'bitcoin').
currency (str): The currency to retrieve the price in (e.g., 'usd').
max_attempts (int): The maximum number of attempts to make before giving up (default: 3).
Returns:
float | None: The price of the cryptocurrency in the specified currency, or None if the price could not be retrieved.
"""
</code></pre>
<p>But it shows up like this in VS Code, with no newline between the currency and max_attempts lines:<br />
<a href="https://i.sstatic.net/IUiZR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IUiZR.png" alt="docsctring context menu" /></a></p>
<p>How do I get the max_attempts line to be on a new line in the hover info?</p>
|
<python><visual-studio-code>
|
2023-04-13 20:29:39
| 3
| 27,200
|
GollyJer
|
76,009,540
| 19,980,284
|
Pandas to_csv but remove NaNs on individual cell level without dropping full row or column
|
<p>I have a dataframe of comments from a survey. I want to export the dataframe as a csv file and remove the NaNs without dropping any rows or columns (unless an entire row is NaN, for instance). Here is a sample of the dataframe:</p>
<p><a href="https://i.sstatic.net/e1HCc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e1HCc.png" alt="enter image description here" /></a></p>
<p>I don't care about maintaining the index, so I'm fine with dropping individual cells with NaNs and shifting each column's values up instead of dropping entire rows. That way I'd have a nicely compressed output csv file without any empty cells.</p>
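One way to get that shape, sketched with a tiny made-up frame: drop NaNs column by column and re-index each column from zero, so values shift up independently of their original rows; all-NaN tail rows disappear on their own (the filename is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'q1': ['a', np.nan, 'c'],
                   'q2': [np.nan, 'b', np.nan]})

# dropna per column, then rebuild each column on a fresh 0..n index
compact = df.apply(lambda col: pd.Series(col.dropna().to_numpy()))
# compact now holds q1 -> ['a', 'c'], q2 -> ['b', NaN]
compact.to_csv('comments.csv', index=False)  # remaining NaNs become empty cells
```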
|
<python><pandas><dataframe><export-to-csv><missing-data>
|
2023-04-13 20:22:19
| 1
| 671
|
hulio_entredas
|
76,009,381
| 395,857
|
How can I convert a dictionary with list values into a pandas dataframe with one column for keys and one column for values?
|
<p>I have this dictionary:</p>
<pre><code>data = {'key1': [2, 6, 7],
'key2': [9, 5, 3]}
</code></pre>
<p>How can I convert it into this pandas dataframe</p>
<pre><code> key value
0 key1 2
1 key1 6
2 key1 7
3 key2 9
4 key2 5
5 key2 3
</code></pre>
<p>?</p>
<hr />
<p>I tried:</p>
<pre><code>import pandas as pd
data = {'key1': [2, 6, 7], 'key2': [9, 5, 3]}
df = pd.DataFrame(data.items())
print(df)
</code></pre>
<p>but it returns:</p>
<pre><code> 0 1
0 key1 [2, 6, 7]
1 key2 [9, 5, 3]
</code></pre>
<p>I also tried:</p>
<pre><code>import pandas as pd
data = {'key1': [2, 6, 7], 'key2': [9, 5, 3]}
df = pd.DataFrame.from_dict(data)
print(df)
</code></pre>
<p>but it returns:</p>
<pre><code> key1 key2
0 2 9
1 6 5
2 7 3
</code></pre>
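For comparison, one way to reach the expected long key/value shape (a sketch; `Series.explode` requires pandas ≥ 0.25):

```python
import pandas as pd

data = {'key1': [2, 6, 7], 'key2': [9, 5, 3]}

# build a Series of lists keyed by dict key, then explode one row per element
df = (pd.Series(data, name='value')
        .explode()
        .rename_axis('key')
        .reset_index())
print(df)
```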
|
<python><pandas><dataframe><dictionary>
|
2023-04-13 20:00:06
| 3
| 84,585
|
Franck Dernoncourt
|
76,009,262
| 5,032,387
|
Incorrect prediction using LSTM many-to-one architecture
|
<p>I'm predicting 12 months of data based on a sequence of 12 months. The architecture I'm using is a many-to-one LSTM, where the output is a vector of 12 values. The problem is that the predictions of the model are way out of line with what's expected - the values in the time series are around 0.96, whereas the predictions are in the 0.08 - 0.12 range.</p>
<p>After generating the 72 random values, I use the function <code>subsequences</code> to create overlapping sequences 12 timesteps long, so my X dimensions are (number of 12-timestep sequences, 12, 1).</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
import numpy as np
import pandas as pd
def subsequences(ts, window):
shape = (ts.size - window + 1, window)
strides = ts.strides * 2
return np.lib.stride_tricks.as_strided(ts, shape=shape, strides=strides)
test_arr = np.random.normal(0.975, 0.008, size=72)
ts = 12
temp = subsequences(test_arr, ts)
X = temp.reshape(temp.shape[0], temp.shape[1], 1)
y = subsequences(pd.Series(test_arr).shift(-ts).to_numpy(), ts)
X_predict = X[-ts, :, :]
# Lop off the timestamps that have missing values
X = X[:(X.shape[0] - ts - 1), :, :]
y = y[:(y.shape[0] - ts - 1), :]
output = 12
model = Sequential()
model.add(LSTM(12, return_sequences=True, input_shape=(ts, 1)))
model.add(LSTM(12))
model.add(Dense(output))
model.compile(loss='mae', optimizer='adam')
model.summary()
model.fit(X, y, batch_size = 24, epochs = 50)
yhat = model.predict(X_predict, verbose=0)
</code></pre>
<p>These are some of the predictions I get:</p>
<pre><code>array([[0.11727975, 0.08559777, 0.09116013, 0.09350648, 0.13221847,
0.08328149, 0.12618074, 0.12006135, 0.11579201, 0.10330589,
0.11863852, 0.13914064],
[0.1172842 , 0.08561642, 0.091188 , 0.0935307 , 0.1322322 ,
0.08330169, 0.12619175, 0.12007872, 0.11579723, 0.10331573,
0.11865252, 0.13916571],
[0.11731011, 0.08572483, 0.09135018, 0.09367174, 0.13231221,
0.08341929, 0.12625587, 0.12017974, 0.1158275 , 0.10337301,
0.11873408, 0.13931161],
</code></pre>
<p>If I increase epochs to 1000, the value range does increase, but the maximum is still low - 0.20.</p>
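One hedged guess consistent with the symptoms (outputs creeping upward from ~0.1 as epochs increase): the targets sit near 0.975 while a freshly initialised output layer produces values near 0, and with MAE loss and Adam's default learning rate the outputs move toward the targets very slowly. Standardising the series before fitting, then inverting the transform on the predictions, usually removes this offset. A standalone sketch of just the scaling step (the `model.predict` call is replaced by a placeholder, since this snippet doesn't build the network):

```python
import numpy as np

# Standalone sketch of the scaling step; model.predict is replaced by a
# zero placeholder since this snippet doesn't build the network.
series = np.random.normal(0.975, 0.008, size=72)
mu, sigma = series.mean(), series.std()

scaled = (series - mu) / sigma      # feed this into subsequences()/model.fit()

yhat_scaled = np.zeros(12)          # placeholder for model.predict(X_predict)
yhat = yhat_scaled * sigma + mu     # invert the transform -> original units
```

After scaling, an untrained network's near-zero outputs are already in the right ballpark, so 50 epochs is far more likely to be enough.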
|
<python><keras><lstm>
|
2023-04-13 19:42:02
| 0
| 3,080
|
matsuo_basho
|
76,008,794
| 11,712,575
|
Python Boto3 not connecting to localstack
|
<p>I am trying to simulate DynamoDB on LocalStack. My Python code runs in a Docker container, and LocalStack runs with the environment variables SERVICES=dynamodb,sqs.</p>
<p>I also have another docker container running python and it is running all in the same network.</p>
<p>Here is a general overview:</p>
<pre><code>Network: test-net-1
-Container: Localstack-1
-Container: Python-1
</code></pre>
<p>Inside my python script my code looks like this to create the dynamoDB:</p>
<pre><code>self.dynamodb = boto3._get_default_session().resource('dynamodb', endpoint_url='Localstack-1')
</code></pre>
<p>and I get this error: ValueError: Invalid endpoint: Localstack-1</p>
<p>However, going into my docker container, if I do <code>ping Localstack-1</code>, it returns with a valid response:</p>
<pre><code>64 bytes from Localstack-1.test-net-1 (172.22.0.2): icmp_seq=4 ttl=64 time=0.239 ms
</code></pre>
<p>What is going on?</p>
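boto3 raises `Invalid endpoint` because `endpoint_url` must be a full URL (scheme, host, port), not a bare container name - `ping` resolving `Localstack-1` only proves DNS works. A small stdlib sketch of the difference (4566 is LocalStack's default edge port; adjust if you've remapped it):

```python
from urllib.parse import urlparse

# A bare container name has no scheme, which is exactly what
# "Invalid endpoint: Localstack-1" is complaining about.
bad = urlparse("Localstack-1")                # scheme == "" -> rejected
good = urlparse("http://Localstack-1:4566")   # 4566: LocalStack default edge port

print(bad.scheme or "<none>", "|", good.scheme, good.netloc)
```

With that, the original call becomes something like `boto3.resource('dynamodb', endpoint_url='http://Localstack-1:4566', region_name='us-east-1', aws_access_key_id='test', aws_secret_access_key='test')` - dummy credentials and any region are fine for LocalStack.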
|
<python><amazon-web-services><docker><boto3><localstack>
|
2023-04-13 18:40:09
| 0
| 389
|
Jason Chan
|
76,008,753
| 8,372,455
|
How is Pandas rolling average calculated?
|
<p>I have a project that logs data approximately every second into a Pandas df for a few columns of numeric sensor values, plus a timestamp column. A project requirement is taking in data on a 5 minute rolling average, running some fault checks, and then clearing the data frame.</p>
<p>If I am clearing the data every 5 minutes (basically only using Pandas for the rolling avg feature) after the fault check would I still need this:</p>
<pre><code>df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.set_index("timestamp")
df = df.rolling("5T").mean()
</code></pre>
<p>Or for my columns could I just take a column mean instead of the last 5 minutes of data?</p>
<pre><code>for col in df.columns:
print(f"df column: {col} - {df[col].mean()}")
</code></pre>
<p>Now I am rethinking what a 5 minute rolling average even means on exactly 5 minutes of data. My understanding (which could be wrong) is that a 5 minute rolling average over 5 minutes of data would boil the dataframe down to a single row representing the mean values of the previous 5 minutes.</p>
<p>Maybe Pandas isn't a good fit for this application and just using a list would be better.</p>
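If the frame is cleared after every 5-minute window, the last row of the time-based rolling mean and the plain column mean coincide - the rolling version additionally yields all the intermediate partial means, which only matters if you want faults checked continuously rather than once per window. A sketch with synthetic 1 Hz data (note `"T"` is deprecated in recent pandas; `"min"` is the current alias):

```python
import numpy as np
import pandas as pd

# ~5 minutes of synthetic 1 Hz sensor data
idx = pd.date_range("2023-01-01", periods=300, freq="s")
df = pd.DataFrame({"sensor": np.arange(300.0)}, index=idx)

rolling_last = df.rolling("5min").mean().iloc[-1]["sensor"]  # last windowed mean
plain = df["sensor"].mean()                                  # plain column mean
print(rolling_last, plain)  # identical when the window spans the whole frame
```

So with a clear-every-5-minutes design, `df[col].mean()` alone does the job; keep the rolling form only if you want a fault check on every new sample.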
|
<python><pandas>
|
2023-04-13 18:34:16
| 1
| 3,564
|
bbartling
|
76,008,731
| 12,300,981
|
How to generate numpy arrays of different sizes (e.g. objects) but then apply mathematical operations on them
|
<p>I am attempting to do a minimization via sum of squares, but the global minimization I am doing involves arrays of different sizes. I am trying to minimize using all data sets, even though they have different sizes.</p>
<p>I.E.</p>
<pre><code>data_1=[[1,2],[2,3]]
data_2=[[3,4],[5,6],[6,7]]
model_1=[[1,1],[2,2]]
model_2=[[3,3],[5,5],[6,6]]
</code></pre>
<p>Now the data/model are the same size, so doing</p>
<pre><code>sum((data_1-model_1)**2)
</code></pre>
<p>works. The issue is I'm trying to combine all the data and all the models for a global fit.</p>
<pre><code>total_data=[[[1,2],[2,3]],[[3,4],[5,6],[6,7]]]
total_model=[[[1,1],[2,2]],[[3,3],[5,5],[6,6]]]
</code></pre>
<p>Now you can make lists of lists like this, but I'd like to make a numpy array so that I can then do mathematical operations such as</p>
<pre><code>np.sum((total_data-total_model)**2)
</code></pre>
<p>You can make numpy objects, but they are handled as lists, and as such mathematical numpy operations do not apply to list objects.</p>
<p>Any advice for how to do this would be greatly appreciated! I'm just trying to consolidate my data and do a global fit rather than making multiple nested loops and treating each one individually.</p>
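Since the groups have different lengths, one workaround that avoids object arrays entirely is to flatten each data/model pair and concatenate the residuals into a single 1-D vector before squaring and summing - a sketch using the example values above:

```python
import numpy as np

# Flatten each data/model pair and concatenate the residuals into one
# 1-D vector; a single sum of squares then covers all datasets at once.
data_1 = np.array([[1, 2], [2, 3]], dtype=float)
data_2 = np.array([[3, 4], [5, 6], [6, 7]], dtype=float)
model_1 = np.array([[1, 1], [2, 2]], dtype=float)
model_2 = np.array([[3, 3], [5, 5], [6, 6]], dtype=float)

residuals = np.concatenate(
    [(d - m).ravel() for d, m in [(data_1, model_1), (data_2, model_2)]]
)
total = np.sum(residuals ** 2)
print(total)  # 5.0
```

This residual-vector shape is also what `scipy.optimize.least_squares` expects, should you later move the global fit to a proper solver.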
|
<python><numpy>
|
2023-04-13 18:31:28
| 2
| 623
|
samman
|
76,008,688
| 1,966,871
|
In python 3, how can one add an assertion about the value of particular class attributes at testing time via mocking?
|
<p>I have a value that is set early on in processing a fairly complex HTTP request, and I need my unit test to assert that this value is set properly in a private attribute of a class deep in the bowels of the implementation.</p>
<p>It seems to me that the easiest way to do this is to mock out a function in that class that is always called after that value is set, and replace it with a new function that intercepts 'self', performs the assertion, and then calls the original function. However, I have been unable to find documentation on how to get access to the original context of a mocked-out function.</p>
<p>Surely this is a case that people run into all the time on nontrivial projects, so what's the best practice for this kind of test?</p>
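One common pattern for exactly this (class and attribute names below are illustrative, not from any real codebase): patch the later-called method with `autospec=True` so the replacement receives `self`, assert on or record the private attribute, then delegate to the saved original - a self-contained sketch:

```python
from unittest import mock

# Illustrative stand-in for the class deep in the implementation.
class Deep:
    def __init__(self):
        self._value = None

    def set_and_go(self, v):
        self._value = v
        self.process()          # always called after _value is set

    def process(self):
        return "processed"

original = Deep.process         # keep a reference to the real method
captured = []

def spy(self):                  # autospec=True means self is passed through
    captured.append(self._value)   # the assertion point
    return original(self)          # delegate to the original behaviour

with mock.patch.object(Deep, "process", autospec=True, side_effect=spy):
    Deep().set_and_go(42)

print(captured)  # [42]
```

`autospec=True` is the key detail: it makes the mock behave like a real method, so `side_effect` sees `self` and can inspect instance state before handing control back to the original.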
|
<python><python-3.x><unit-testing>
|
2023-04-13 18:24:14
| 0
| 306
|
enkiv2
|
76,008,649
| 8,228,382
|
Python cronjob won't run in Docker container
|
<p>I am trying to install Python dependencies using <code>Poetry</code> and then run a Python script in a <code>Docker</code> container using a <code>cronjob</code>. However, the Python script doesn't execute as expected, and I can't figure out which part of my Docker/crontab setup is causing the problem. I have followed several different StackOverflow threads (like <a href="https://stackoverflow.com/questions/37458287/how-to-run-a-cron-job-inside-a-docker-container">this one</a>) on how to configure <code>cron</code> in a container, but none of them are consistently working in my use case. Here is the <code>Dockerfile</code> definition:</p>
<pre><code>FROM python:3.11
RUN apt-get update && apt-get -y install cron
# Install Poetry
RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python3 && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
WORKDIR /app
# Copy using poetry.lock* in case it doesn't exist yet
COPY ./app/poetry.lock* ./app/pyproject.toml /app/
RUN poetry install --no-root --no-dev
COPY ./app/app/example.py /app/
# Copy hello-cron file to the cron.d directory
COPY ./app/cronjob /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
RUN chmod a+x /app/example.py
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
</code></pre>
<p>The <code>cronjob</code> file that is being used to create the <code>crontab</code>:</p>
<pre><code># Example crontab
* * * * * /app/example.py 2>&1
* * * * * echo $(date) >> /app/example.log 2>&1
# empty line
</code></pre>
<p>And the simple Python script (example.py):</p>
<pre><code>#!/usr/local/bin/python
from loguru import logger
from pathlib import Path
f = Path("/app/info.log")
f = str(f) if f.exists() else "./info.log"
logger.add(
f,
format="{time:YYYY-MM-DD at HH:mm:ss} {level} {message}",
level="INFO",
rotation="10 MB",
)
logger.info("working")
</code></pre>
<p>I have tried several different options to get the python script to run every minute, such as removing the shebang and explicitly specifying the python path (<code>/usr/local/bin/python</code>) in the crontab, but have had no luck. Interestingly, the second line of the <code>crontab</code> that writes the time to <code>/app/example.log</code> works as expected. Additionally, manually running the script from the container's <code>bash</code> shell also works and creates the <code>/app/info.log</code> file.</p>
<p>Are there any obvious issues with the Dockerfile/crontab setup?</p>
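One thing worth noting about the first crontab line: `2>&1` merges stderr into stdout, but neither is redirected to a file, so any traceback from `example.py` is silently discarded. Appending `>> /var/log/cron.log 2>&1` (like the working second line effectively does) makes failures visible. Cron also runs jobs with a minimal environment, which is a frequent source of "works in my shell, not in cron" behaviour; a quick probe, run under cron with output redirected, makes the difference visible:

```python
import os
import sys

# Minimal probe: run this under cron (with output redirected to a file) and
# compare against an interactive shell to spot environment differences.
print("interpreter:", sys.executable)
print("PATH:", os.environ.get("PATH", "<unset>"))
print("HOME:", os.environ.get("HOME", "<unset>"))
print("cwd:", os.getcwd())
```

If the PATH or working directory differs from your shell, that would explain the script failing under cron while running fine interactively.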
|
<python><docker><cron><python-poetry>
|
2023-04-13 18:19:07
| 1
| 675
|
ColinB
|
76,008,563
| 4,764,787
|
How to complete FastAPI Google OAuth flow
|
<p>I followed the <a href="https://fastapi.tiangolo.com/tutorial/security/" rel="nofollow noreferrer">official tutorial</a> and was able to set up the security system for a username and password. But instead of (or in addition to) username and password, I want to set up a Google OAuth flow. For that I've followed this <a href="https://blog.authlib.org/2020/fastapi-google-login" rel="nofollow noreferrer">blog post</a> and <a href="https://github.com/authlib/demo-oauth-client/blob/master/fastapi-google-login/app.py" rel="nofollow noreferrer">repo</a>, but I get to the <code>/auth/google</code> endpoint and don't know what to do next. That endpoint looks like this:</p>
<pre><code>@app.get("/auth/google", response_model=None)
async def auth_via_google(request: Request) -> HTMLResponse | RedirectResponse:
try:
token: dict[str, Any] = await oauth.google.authorize_access_token(request)
except OAuthError as error:
return HTMLResponse(f"<h1>{error.error}</h1>")
user: dict[str, Any] = token["userinfo"]
# return dict(token) # blog post has it like this
if user: # repo does this
request.session["user"] = dict(user)
return RedirectResponse(url="/")
</code></pre>
<p>First of all, the blog post returns the <code>token</code> whereas the repo adds <code>user</code> to <code>request.session</code> and returns <code>RedirectResponse</code>. What's the difference and which should I use?</p>
<p>For context, the google side is properly set up because I am able to go through the OAuth consent screen and can even visualize the <code>token</code> as</p>
<pre><code>{"access_token": <some_token>,"expires_in":3599,"scope":"openid https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile","token_type":"Bearer","id_token":<other_token>,"expires_at":1681411911,"userinfo":{"iss":"https://accounts.google.com", ... ,"sub":<some_number>,"email":"me@gmail.com","email_verified":true, ... ,"name":"Me Myself", ... ,"exp":1681411914}}
</code></pre>
<p>This <a href="https://nilsdebruin.medium.com/fastapi-google-as-an-external-authentication-provider-3a527672cf33" rel="nofollow noreferrer">article</a> mentions swapping google's token for my own but do I need to do that or does my flow end at <code>/auth/google</code>?</p>
<p>The official FastAPI tutorial makes you create some functions (<code>authenticate_user</code>, <code>create_access_token</code>, <code>get_current_user</code>, and <code>get_current_active_user</code>), as well as the <code>/token</code> endpoint. Is any of that still needed when using google oauth?</p>
<p>If I try to redirect to e.g. <code>/home</code> and <code>/home</code> is protected by the original tutorial's username-and-password system, I currently get an <code>Unauthorized</code> error.</p>
<p>Should <code>/auth/google</code> actually redirect to the <code>/token</code> endpoint I had set up in the tutorial? How can I replace (or integrate with) this new google flow with my existing user and password security system?</p>
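On the last questions: the Google flow typically ends with you issuing your own credential - either a server-side session (which is what the repo's `request.session["user"] = ...` line does) or your own token that `get_current_user` then validates (the tutorial's approach, minus `authenticate_user` and password hashing, which Google replaces). As a hedged, stdlib-only illustration of "swap Google's token for your own" - not the tutorial's python-jose JWT, which a real app should use, and `SECRET` is a placeholder:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # placeholder; load from config in a real app

def mint_token(email, ttl=3600):
    # sign an HMAC over the base64-encoded claims
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": email, "exp": int(time.time()) + ttl}).encode()
    )
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token):
    payload_b64, sig_b64 = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig_b64, expected):
        return None  # tampered
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims if claims["exp"] > time.time() else None  # None if expired

token = mint_token("me@gmail.com")
print(verify_token(token)["sub"])  # me@gmail.com
```

In this scheme `/auth/google` calls something like `mint_token(user["email"])` and returns it to the client, and a `get_current_user` dependency calls `verify_token` on protected routes - structurally the same as the tutorial's `/token` endpoint, just with Google replacing the password check.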
|
<python><security><google-oauth><fastapi>
|
2023-04-13 18:06:11
| 0
| 381
|
Jaime Salazar
|
76,008,561
| 1,691,186
|
Why is Mace4 valuation not working in my Python program?
|
<p>Here is a very simple example of the use of Mace4, taken directly from the NLTK Web site:</p>
<pre><code>from nltk.sem import Expression
from nltk.inference import MaceCommand
read_expr = Expression.fromstring
a = read_expr('(see(mary,john) & -(mary = john))')
mb = MaceCommand(assumptions=[a])
mb.build_model()
print(mb.valuation)
</code></pre>
<p>When instead of <code>mb.valuation</code> I print the return value of <code>mb.build_model()</code> I get <code>True</code> so the model is correctly built. But when I ask <code>print(mb.valuation)</code>I get</p>
<pre><code>Traceback (most recent call last):
File "test2.py", line 8, in <module>
print(mb.valuation)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 51, in valuation
return mbc.model("valuation")
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/api.py", line 355, in model
return self._decorate_model(self._model, format)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 185, in _decorate_model
return self._convert2val(valuation_str)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 60, in _convert2val
valuation_standard_format = self._transform_output(valuation_str, "standard")
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 206, in _transform_output
return self._call_interpformat(valuation_str, [format])[0]
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 220, in _call_interpformat
self._interpformat_bin = self._modelbuilder._find_binary(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/prover9.py", line 177, in _find_binary
return nltk.internals.find_binary(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 675, in find_binary
return next(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 661, in find_binary_iter
yield from find_file_iter(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 620, in find_file_iter
raise LookupError(f"\n\n{div}\n{msg}\n{div}")
LookupError:
===========================================================================
NLTK was unable to find the interpformat file!
Use software specific configuration parameters or set the PROVER9 environment variable.
Searched in:
- /usr/local/bin/prover9
- /usr/local/bin/prover9/bin
- /usr/local/bin
- /usr/bin
- /usr/local/prover9
- /usr/local/share/prover9
For more information on interpformat, see:
<https://www.cs.unm.edu/~mccune/prover9/>
===========================================================================
</code></pre>
<p>I installed Prover9 through <code>brew install</code> and everything went fine. Am I doing something wrong?</p>
<p>========================</p>
<p>Following advice by Grimlock I ran <code>brew --prefix prover9</code>, which gave me the path <code>/usr/local/opt/prover9</code>, and tried several methods to communicate it to Mace: I added the environment variable <code>PROVER9</code> to <code>.bash_profile</code>, I added an <code>os.environ['PROVER9']</code> assignment to my script, and I even added the path to the Python script <code>prover9.py</code>. Nothing changed; the path now appears in the list of searched paths, but otherwise I still get the error message below:</p>
<pre><code>Traceback (most recent call last):
File "test-mace.py", line 10, in <module>
print(mb.valuation)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 51, in valuation
return mbc.model("valuation")
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/api.py", line 355, in model
return self._decorate_model(self._model, format)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 185, in _decorate_model
return self._convert2val(valuation_str)
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 60, in _convert2val
valuation_standard_format = self._transform_output(valuation_str, "standard")
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 206, in _transform_output
return self._call_interpformat(valuation_str, [format])[0]
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/mace.py", line 220, in _call_interpformat
self._interpformat_bin = self._modelbuilder._find_binary(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/inference/prover9.py", line 178, in _find_binary
return nltk.internals.find_binary(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 675, in find_binary
return next(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 661, in find_binary_iter
yield from find_file_iter(
File "/Users/yannis/miniconda3/lib/python3.8/site-packages/nltk/internals.py", line 620, in find_file_iter
raise LookupError(f"\n\n{div}\n{msg}\n{div}")
LookupError:
===========================================================================
NLTK was unable to find the interpformat file!
Use software specific configuration parameters or set the PROVER9 environment variable.
Searched in:
- /usr/local/bin/prover9
- /usr/local/bin/prover9/bin
- /usr/local/bin
- /usr/bin
- /usr/local/prover9
- /usr/local/share/prover9
- /usr/local/opt/prover9
For more information on interpformat, see:
<https://www.cs.unm.edu/~mccune/prover9/>
===========================================================================
</code></pre>
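One detail worth checking, offered as an assumption to verify on your machine: NLTK treats the `PROVER9` variable as a location to search for each binary, so it must point at the directory that actually contains `interpformat`, `mace4`, `prover9`, etc. For Homebrew kegs that is the `bin/` subdirectory under the prefix, not the keg root that `brew --prefix prover9` prints - and `/usr/local/opt/prover9` (without `/bin`) is exactly what the failing search list shows:

```python
import os

# Hypothetical path derived from the brew prefix above -- confirm it exists
# on your machine. Note the trailing /bin: that's where the binaries live.
binary_dir = "/usr/local/opt/prover9/bin"
os.environ["PROVER9"] = binary_dir

# sanity check before importing nltk.inference:
# print(os.path.exists(os.path.join(binary_dir, "interpformat")))
```

The assignment has to run before the first NLTK call that needs the binary (i.e. before `mb.valuation` triggers `interpformat`).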
|
<python><nltk><inference>
|
2023-04-13 18:06:04
| 1
| 938
|
yannis
|
76,008,523
| 16,935,119
|
Merge based on common values
|
<p>Is there a way to merge columns based on common values and return NaN if there is no match? I tried the code below, but the output is weird: even though there is no match, values are still returned.</p>
<pre><code>import pandas as pd
data2 = {'Name' : ['Tom', 'Nick', 'f']}
d2 = pd.DataFrame(data2)
data1 = {'Name' : ['Tom', 'Nick', 'h', 'g']}
d1 = pd.DataFrame(data1)
</code></pre>
<p>(d1 on d2)</p>
<pre><code>pd.merge(d2, d1, left_index=True, right_index=True, how='left')
Name_x Name_y
0 Tom Tom
1 Nick Nick
2 f h
</code></pre>
<p>But expected output (d1 on d2)</p>
<pre><code> Name_x Name_y
0 Tom Tom
1 Nick Nick
2 f NaN
</code></pre>
<p>Similarly(d2 on d1)</p>
<pre><code>pd.merge(d1, d2, left_index=True, right_index=True, how='left')
Out[17]:
Name_x Name_y
0 Tom Tom
1 Nick Nick
2 h f
3 g NaN
</code></pre>
<p>Expected output (d2 on d1)</p>
<pre><code> Name_x Name_y
0 Tom Tom
1 Nick Nick
2 h NaN
3 g NaN
</code></pre>
<p>So basically, it should compare the 2 dataframe and depending on mismatch values, it should return NaN</p>
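The index-based merge pairs rows by *position*, which is why `f` lines up with `h`. If the goal is "keep the value only when it also exists in the other frame", `isin` plus `where` expresses that directly - a sketch using the frames above (swap `d1` and `d2` for the other direction):

```python
import pandas as pd

d2 = pd.DataFrame({"Name": ["Tom", "Nick", "f"]})
d1 = pd.DataFrame({"Name": ["Tom", "Nick", "h", "g"]})

# keep Name_y only where the value exists somewhere in the other frame
out = d2.rename(columns={"Name": "Name_x"})
out["Name_y"] = out["Name_x"].where(out["Name_x"].isin(d1["Name"]))
print(out)
#   Name_x Name_y
# 0    Tom    Tom
# 1   Nick   Nick
# 2      f    NaN
```

This matches the expected output because the comparison is by membership, not by row position.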
|
<python><pandas>
|
2023-04-13 18:00:33
| 3
| 1,005
|
manu p
|
76,008,514
| 3,482,266
|
Comparing two lists of instances of the same class, by a specific instance attribute only
|
<p>I have two lists of instances of the class below. I want to compare whether the two lists contain the same data (order is unimportant) in the doc_data attribute.</p>
<pre><code>class Class_Test:
type: str
def __init__(self, doc_data: Dict, meta_data: Dict):
self.doc_data = doc_data
self.meta_data = meta_data
</code></pre>
<p>I thought of sorting a list of instances of Class_Test by the values of the keys of doc_data only.
The problem is that not all instances have the same keys...
All instances' keys belong to a superset -> {key_1, ..., key_n},
but some instances have only key_1, key_7, while others may only have key_2, key_4.
I suspect it may not be possible to sort consistently in this situation. Nevertheless, someone may have a better idea, such as using set().</p>
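One sketch of an order-insensitive comparison that sidesteps sorting entirely: reduce each instance's `doc_data` to a hashable canonical form and compare multisets. This assumes the dict *values* are hashable; unhashable values (nested dicts, lists) would need recursive canonicalisation first.

```python
from collections import Counter

class Class_Test:
    def __init__(self, doc_data, meta_data):
        self.doc_data = doc_data
        self.meta_data = meta_data

def doc_multiset(instances):
    # frozenset of items = canonical, key-order-free form of one doc_data;
    # Counter = multiset, so duplicates matter but list order doesn't
    return Counter(frozenset(i.doc_data.items()) for i in instances)

a = [Class_Test({"key_1": 1, "key_7": 2}, {}), Class_Test({"key_2": 3}, {})]
b = [Class_Test({"key_2": 3}, {}), Class_Test({"key_7": 2, "key_1": 1}, {})]

print(doc_multiset(a) == doc_multiset(b))  # True
```

Differing key sets per instance are no problem here, because each instance is reduced independently before comparison.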
|
<python>
|
2023-04-13 17:59:20
| 1
| 1,608
|
An old man in the sea.
|
76,008,438
| 17,596,237
|
Firebase realtime database permission error even though rules playground simulation succeeds
|
<p>My database has a <code>users</code> child with fields <code>email</code> and <code>name</code>, with the key being the users uid. Here are the database rules:</p>
<pre><code>{
"rules": {
"users": {
"$uid": {
"email": {
".read": "auth.uid == $uid",
".write": "auth.uid == $uid"
},
"name": {
".read": "auth.uid == $uid",
".write": "auth.uid == $uid"
}
}
}
}
}
</code></pre>
<p>I am using pyrebase to access the firebase project. Here is the statement that tries to add them to <code>users</code>:</p>
<pre><code>db.child("users").child(self.user["localId"]).set({"name": name, "email" : email})
</code></pre>
<p>While creating a new user, I get a permission denied error. The user is still able to sign in though. When I make a simulated request with the rules playground, it succeeds.</p>
<p>edit: Here is a reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import pyrebase
config = {
"apiKey": "",
"authDomain": "",
"projectId": "",
"databaseURL": "",
"storageBucket": "",
"messagingSenderId": "",
"appId": "",
"measurementId": "",
}
firebase = pyrebase.initialize_app(config)
user = firebase.auth().sign_in_with_email_and_password("test@gmail.com", "password")
db = firebase.database()
db.child("users").child(user["localId"]).set({"name": "test", "email": "test@gmail.com"}, token=user["localId"])
</code></pre>
|
<python><database><firebase><firebase-realtime-database><pyrebase>
|
2023-04-13 17:51:01
| 2
| 678
|
Jacob
|
76,008,308
| 9,018,649
|
What does "func.QueueMessage) -> None:" do in python?
|
<p>In a Python Azure Function that I need to upgrade, I have:</p>
<pre><code>def main(msg: func.QueueMessage) -> None:
</code></pre>
<p><code>def main</code> is the entry point of the program, and <code>main</code> is a function name. But other than that I'm blank.</p>
<p>What does the rest of the code actually mean/do?</p>
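Those are Python type annotations (PEP 3107 / PEP 484): `msg: func.QueueMessage` documents that `msg` will be a `QueueMessage` object from the `azure.functions` module, and `-> None` declares that `main` returns nothing. Python itself does not enforce them at runtime; they are metadata for tooling (and for the Functions runtime's binding checks). A tiny illustration, with `str` standing in for `func.QueueMessage`:

```python
def main(msg: str) -> None:   # str stands in for func.QueueMessage here
    print(msg)

print(main.__annotations__)   # {'msg': <class 'str'>, 'return': None}
main(123)                     # still runs: annotations are not enforced
```

So the line you quoted only changes documentation and tooling behaviour, not what the function does when called.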
|
<python><azure>
|
2023-04-13 17:31:18
| 0
| 411
|
otk
|
76,008,125
| 2,893,712
|
SQL INSERT but increment ID if exist without having to do separate SELECT query
|
<p>I have a table with Primary Key on two columns: <code>ID</code> (Users ID) and <code>SQ</code> (Sequence, this starts at 1 and increments +1 for each <code>ID</code>). A user should not have multiple instances of the same sequence #. Here is how the table looks:</p>
<pre><code>ID SQ Code
------ -- ----
123456 1 123
654321 1 369
123456 2 234
</code></pre>
<p>I am trying to insert a value in column <code>Code</code> but before I do that I need to check if a specific user already has that code and if so, receive the Sequence #.</p>
<p>Currently, I run:</p>
<pre><code># CodeExists is 1 if user has code already. 0 otherwise
# Max_SQ gives the maximum SQ that exists for this user
cursor.execute("SELECT MAX(CASE WHEN Code = ? THEN 1 ELSE 0 END) AS 'CodeExists', MAX(SQ) AS 'Max_SQ' FROM TABLENAME WHERE ID = ? GROUP BY ID", Code, User_PID)
data = cursor.fetchone()
if data is None: # This user does not have any codes, SQ starts at 1
SQ = 1
else:
CodeExists, SQ = data # Unpack query values
if int(CodeExists) == 1: # User already has the code we are trying to insert, skip
continue
else:
SQ = int(SQ) + 1 # Increment SQ
cursor.execute("INSERT INTO TABLENAME (ID,SQ,Code) VALUES(?,?,?)", User_PID, SQ, Code)
cursor.commit()
</code></pre>
<p>Examples:</p>
<ul>
<li>Trying to insert Code=123 for User=123456, should not insert. This user has code 123 already.</li>
<li>Trying to insert Code=123 for User=654321, should insert with SQ=2</li>
<li>Trying to insert Code=123 for User=999999, should insert with SQ=1</li>
</ul>
<p>Given a Code # and User ID, is there a way to combine this into a single INSERT query?</p>
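Yes - the duplicate check and the sequence computation can fold into one `INSERT ... SELECT ... WHERE NOT EXISTS`. A sketch demonstrated against SQLite for runnability (the same statement shape works in SQL Server via pyodbc's `?` placeholders); note it is still racy under concurrent inserts for the same user unless you add locking or rely on the primary key to reject the loser:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tab (ID INTEGER, SQ INTEGER, Code INTEGER, PRIMARY KEY (ID, SQ))"
)
conn.executemany(
    "INSERT INTO tab VALUES (?, ?, ?)",
    [(123456, 1, 123), (654321, 1, 369), (123456, 2, 234)],
)

# insert only if (ID, Code) doesn't exist yet; SQ = MAX(SQ)+1 or 1
sql = """
INSERT INTO tab (ID, SQ, Code)
SELECT ?, COALESCE((SELECT MAX(SQ) FROM tab WHERE ID = ?), 0) + 1, ?
WHERE NOT EXISTS (SELECT 1 FROM tab WHERE ID = ? AND Code = ?)
"""

def insert_code(user, code):
    conn.execute(sql, (user, user, code, user, code))

insert_code(123456, 123)   # duplicate -> no row inserted
insert_code(654321, 123)   # existing user, new code -> SQ = 2
insert_code(999999, 123)   # brand-new user -> SQ = 1
print(conn.execute("SELECT * FROM tab ORDER BY ID, SQ").fetchall())
```

This reproduces all three expected behaviours from the examples in a single round trip per insert.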
|
<python><sql><sql-server><sql-insert>
|
2023-04-13 17:08:00
| 2
| 8,806
|
Bijan
|
76,008,084
| 11,720,193
|
SOAP Response XML Error in AWS Glue job (Python)
|
<p>I am very new to AWS Glue. I have written the following Glue script, which sends a SOAP request to a website and stores its response in S3.
Even though the job runs successfully, the XML response that is received (and saved to the S3 object) contains an error. However, the same program runs perfectly from PyCharm. The Glue script is given below as well.</p>
<p>XML response (Error):</p>
<pre><code><soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<soap:Fault>
<soap:Code>
<soap:Value>soap:Receiver</soap:Value>
</soap:Code>
<soap:Reason>
<soap:Text xml:lang="en">Server was unable to process request. ---> Unexpected XML declaration. The XML declaration must be the first node in the document, and no white space characters are allowed to appear before it. Line 2, position 10.</soap:Text>
</soap:Reason>
<soap:Detail/>
</soap:Fault>
</soap:Body>
</soap:Envelope>
</code></pre>
<p>The glue job is as follows:</p>
<pre><code>import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import requests
import boto3
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
print("Imported Libraries")
url = "https://www.w3schools.com/xml/tempconvert.asmx"
data ="""
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope
xmlns:xsi="http://w3.org/2002/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap12="http://schemas.xmlsoap.org/soap/envelope/">
<soap12:Body>
<CelsiusToFahrenheit xmlns="https://www.w3schools.com/xml/">
<Celsius>20</Celsius>
</CelsiusToFahrenheit>
</soap12:Body>
</soap12:Envelope>"""
headers = {
'Content-Type': 'text/xml; charset=utf-8'
}
response = requests.request("POST", url, headers=headers, data=data)
var = response.text
print(f"Response: {var}")
client = boto3.client('s3')
client.put_object(Body=var, Bucket='my-bucket', Key='data/soap_inbound.xml')
print("S3 object created")
job.commit()
</code></pre>
<p>Can anyone please help fix this error?</p>
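The fault message can be read literally: the triple-quoted `data` string starts with a newline (and possibly indentation) before `<?xml ...>`, and the XML declaration must be the very first bytes of the request body. Stripping the string before sending is the minimal fix - and it's worth checking whether the script run from PyCharm builds the string differently. A sketch with a shortened envelope:

```python
# Shortened stand-in for the SOAP body built in the Glue script.
data = """
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope>...</soap12:Envelope>"""

assert not data.startswith("<?xml")  # leading newline -> server rejects it
data = data.strip()
assert data.startswith("<?xml")      # declaration is now the first bytes
```

Equivalently, start the payload on the same line as the opening `"""` so no leading whitespace is ever present.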
|
<python><soap><aws-glue>
|
2023-04-13 17:03:22
| 1
| 895
|
marie20
|
76,007,953
| 12,990,915
|
Read HDF - Dealing with Hierarchy via Pandas
|
<p>I would like to read an hdf5 file <code>2D_rdb_NA_NA.h5</code>. The file has parent groups: <code>0000</code> <code>0001</code> <code>0002</code> etc. Each parent group has child groups <code>data</code> and <code>grid</code>.</p>
<p>Here is what I have attempted so far:</p>
<pre><code>import h5py
import pandas as pd
data = h5py.File('2D_rdb_NA_NA.h5', 'r')
print(list(data.keys()))
</code></pre>
<pre><code>['0000', '0001', '0002', '0003', '0004', '0005', '0006', '0007', '0008', '0009', '0010', ...
</code></pre>
<pre><code>print(list(data['0000'].keys()))
</code></pre>
<pre><code>['data', 'grid']
</code></pre>
<pre><code>data['0000']['data']
</code></pre>
<pre><code><HDF5 dataset "data": shape (101, 128, 128, 1), type "<f4">
</code></pre>
<pre><code>df = pd.read_hdf('2D_rdb_NA_NA.h5', key='0000')
</code></pre>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 df = pd.read_hdf('2D_rdb_NA_NA.h5', '0000')
File ~/lib/python3.10/site-packages/pandas/io/pytables.py:442, in read_hdf(path_or_buf, key, mode, errors, where, start, stop, columns, iterator, chunksize, **kwargs)
437 raise ValueError(
438 "key must be provided when HDF5 "
439 "file contains multiple datasets."
440 )
441 key = candidate_only_group._v_pathname
--> 442 return store.select(
443 key,
444 where=where,
445 start=start,
446 stop=stop,
447 columns=columns,
448 iterator=iterator,
449 chunksize=chunksize,
450 auto_close=auto_close,
451 )
452 except (ValueError, TypeError, KeyError):
453 if not isinstance(path_or_buf, HDFStore):
454 # if there is an error, close the store if we opened it.
...
1679 )
1680 else:
1681 if isinstance(value, Series):
TypeError: cannot create a storer if the object is not existing nor a value are passed
</code></pre>
<p><code>df = pd.read_hdf('2D_rdb_NA_NA.h5', key='0000/data')</code> returns the same error. If it is not possible to use Pandas for this, an h5py solution will have to do.</p>
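That error is expected: `pd.read_hdf` only understands the PyTables layout that pandas itself writes, while this file is generic HDF5. Reading with h5py yields a NumPy array, which can then be sliced into DataFrames. A sketch against a small stand-in file mirroring the described structure (group per frame, a `data` dataset; real shape would be `(101, 128, 128, 1)`):

```python
import os
import tempfile

import h5py
import numpy as np
import pandas as pd

# Stand-in file with the described layout (small shapes for the demo)
path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("0000/data", data=np.arange(24.0).reshape(2, 3, 4, 1))

with h5py.File(path, "r") as f:
    arr = f["0000"]["data"][()]        # read the dataset into a numpy array

df = pd.DataFrame(arr[0, :, :, 0])     # e.g. one timestep as a 2-D frame
print(df.shape)  # (3, 4)
```

For the real file, `f["0000"]["data"][()]` gives a `(101, 128, 128, 1)` array, and how to flatten it into 2-D frames depends on what the four axes mean.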
|
<python><pandas><h5py>
|
2023-04-13 16:47:55
| 2
| 383
|
user572780
|
76,007,874
| 735,926
|
How to test a FastAPI route that retries a SQLAlchemy insert after a rollback?
|
<p>I have a route where I want to retry an insert if it failed due to an <code>IntegrityError</code>. I’m trying to test it using pytest and httpx but I get an error when I reuse the session to retry the insert after the rollback of the previous one. It works fine if I test with <code>curl</code>.</p>
<p>I use Python 3.10 with the latest FastAPI (0.95) and SQLAlchemy (2.0). I have a tests setup based on <a href="https://dev.to/jbrocher/fastapi-testing-a-database-5ao5" rel="nofollow noreferrer">this blog post</a> that works well for other tests but not this one.</p>
<p>Here is a minimal reproducible example (I left out the <code>import</code>s to reduce the code):</p>
<p><code>database.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>async_engine = create_async_engine(f"sqlite+aiosqlite:///:memory:")
async_session_maker = async_sessionmaker(bind=async_engine, class_=AsyncSession, expire_on_commit=False)
async def get_async_db_session():
async with async_session_maker() as session:
yield session
class Base(DeclarativeBase):
pass
class Animal(Base):
__tablename__ = "animals"
id: Mapped[int] = mapped_column(Integer, primary_key=True)
name: Mapped[str] = mapped_column(String, nullable=False, unique=True)
</code></pre>
<p><code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>app = FastAPI()
@app.post("/add")
async def root(session=Depends(get_async_db_session)):
for name in ("Max", "Cody", "Robby"):
session.add(Animal(name=name))
try:
await session.flush()
except IntegrityError:
await session.rollback()
continue # retry
await session.commit()
return name
return None
</code></pre>
<p><code>tests.py</code>:</p>
<pre class="lang-py prettyprint-override"><code># test setup based on https://dev.to/jbrocher/fastapi-testing-a-database-5ao5
@pytest.fixture(scope="session")
def event_loop():
loop = asyncio.get_event_loop_policy().new_event_loop()
yield loop
loop.close()
@pytest.fixture(scope="session")
async def db_engine():
engine = create_async_engine("sqlite+aiosqlite:///:memory:")
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
yield engine
@pytest.fixture(scope="function")
async def db(db_engine):
async with db_engine.connect() as connection:
async with connection.begin() as transaction:
db_session = AsyncSession(bind=connection)
yield db_session
await transaction.rollback()
@pytest.fixture(scope="function")
async def client(db):
app.dependency_overrides[get_async_db_session] = lambda: db
async with AsyncClient(app=app, base_url="http://test") as c:
yield c
async def test_add(client):
r = await client.post("/add")
assert r.json() == "Max"
r = await client.post("/add")
assert r.json() == "Cody"
</code></pre>
<p>I run the tests with <code>pytest --asyncio-mode=auto tests.py</code>.</p>
<p>The test simulates two requests to the endpoint. The first one succeeds, but the second one fails with the following error:</p>
<blockquote>
<p>Can't operate on closed transaction inside context manager. Please complete the context manager before emitting further commands.</p>
</blockquote>
<p>The traceback points to the line with <code>await session.flush()</code> in <code>main.py</code>.</p>
<p>I don’t understand what I need to change in the tests setup (or the route?) to make this work.</p>
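A plausible explanation, offered as an assumption to verify: the `db` fixture hands the route a session bound to a connection whose outer transaction the *test* owns; the route's `await session.rollback()` rolls back that outer transaction, so the second request operates on a closed transaction. SQLAlchemy 2.0's documented recipe for joining a session to an external transaction is `join_transaction_mode="create_savepoint"`, which makes the session's rollbacks target a SAVEPOINT instead of the test's transaction. A fixture configuration sketch (`db_engine` is the existing fixture from `tests.py`):

```python
import pytest
from sqlalchemy.ext.asyncio import AsyncSession

@pytest.fixture(scope="function")
async def db(db_engine):
    async with db_engine.connect() as connection:
        async with connection.begin() as transaction:
            # With create_savepoint, session.rollback()/commit() inside the
            # app only release a savepoint; the outer test transaction stays
            # open and is rolled back once at teardown.
            db_session = AsyncSession(
                bind=connection,
                join_transaction_mode="create_savepoint",
            )
            yield db_session
            await transaction.rollback()
```

This mirrors the "Joining a Session into an External Transaction (such as for test suites)" pattern from the SQLAlchemy 2.0 docs; `curl` works because the production dependency creates a fresh session per request with no outer transaction to break.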
|
<python><sqlalchemy><fastapi><httpx><pytest-asyncio>
|
2023-04-13 16:38:21
| 1
| 21,226
|
bfontaine
|
76,007,760
| 6,087,667
|
Inserting values into multiindexed dataframe with slice(None)
|
<p>I am trying to insert entries at each first level, but it fails:</p>
<pre><code>import string
alph = string.ascii_lowercase
n=5
inds = pd.MultiIndex.from_tuples([(i,j) for i in alph[:n] for j in range(1,n)])
t = pd.DataFrame(data=np.random.randint(0,10, len(inds)), index=inds).sort_index()
# inserting value np.nan on every alphabetical level at index 0 on the second level
t.loc[(slice(None), 0), :]=np.nan
</code></pre>
<p>KeyError: 0</p>
<p>Expected output:</p>
<p><a href="https://i.sstatic.net/JboVs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JboVs.png" alt="enter image description here" /></a></p>
<p>How can I do these <code>np.nan</code> insertions on the second level as shown on the picture?</p>
<p>This code piece works, indicating that something is wrong with the <code>slice(None)</code>:</p>
<pre><code>t.loc[('a', 0), :]=np.nan
t = t.sort_index()
</code></pre>
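That is consistent with how `.loc` assignment works: it can *enlarge* a frame only when given a single concrete key (which is why `('a', 0)` succeeds), whereas a `slice(None)` lookup selects existing labels and raises `KeyError` because `0` is absent from the second level. One way to add the missing rows in bulk is to build the enlarged index explicitly and `reindex`, which fills new rows with NaN:

```python
import string

import numpy as np
import pandas as pd

alph = string.ascii_lowercase
n = 5
inds = pd.MultiIndex.from_tuples([(i, j) for i in alph[:n] for j in range(1, n)])
t = pd.DataFrame(np.random.randint(0, 10, len(inds)), index=inds).sort_index()

# enlarge the second level to include 0; rows that didn't exist become NaN
new_index = pd.MultiIndex.from_product([list(alph[:n]), range(0, n)])
t = t.reindex(new_index).sort_index()
```

After this, `t.loc[(slice(None), 0), :]` selects the new NaN rows fine, since the labels now exist.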
|
<python><pandas><dataframe>
|
2023-04-13 16:26:03
| 1
| 571
|
guyguyguy12345
|
76,007,636
| 3,210,927
|
How can I (and should I?) acceptance test a simple Flask app without importing unnecessary libraries?
|
<p>I'm relatively new to Python and entirely new to Flask. I'm trying to create a Flask app that at least initially consists of little more than a couple of forms, some number crunching based on the values you submit and then a page displaying the outcome. It will probably never have a database.</p>
<p>The in-progress version based on the Flask 'getting started' guide plus importing the logic, looks like this:</p>
<pre><code>from flask import Flask, request, render_template
import pdb
import calculators.simple_calc.simple_calc as sc
from calculators.website.forms.simple_calc import SimpleCalcForm
app = Flask(__name__)
app.config['SECRET_KEY'] = 'seekrit'
@app.route('/', methods=['GET', 'POST'])
def simple_calc_page():
    form = SimpleCalcForm()
    calc = None
    if form.validate_on_submit():
        calc = sc.SimpleCalc(request.form)
    return render_template('simple_calc_page.html', form=form, calc=calc)
</code></pre>
<p>And this works fine. But when I look up guides to acceptance testing in Flask, eg <a href="https://testdriven.io/blog/flask-pytest/" rel="nofollow noreferrer">this</a> and <a href="https://pytest-flask.readthedocs.io/en/latest/" rel="nofollow noreferrer">this</a>, I end up down a rabbithole: we start with a create_app() function, which the docs haven't mentioned until now. Looking at the <a href="https://flask.palletsprojects.com/en/2.2.x/patterns/appfactories/" rel="nofollow noreferrer">guide for that</a>, I'm still unsure where to put the function (GPT tells me in the <code>__init__.py</code> file, but that instruction seems to have been removed from all the docs in which GPT found it), or where to call it from; and it's now telling me to import a further library, Blueprint, just to create this file.</p>
<p>This all feels very complicated for what I'm trying to do. Should I nonetheless do it anyway? If so, where <em>should</em> I put the create_app() function, and where should I call it from? Or is it better to stick with a simpler approach? And if so, how do I actually do acceptance testing?</p>
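<p>For scale, the factory pattern is smaller than the guides make it look, and no Blueprint is required. A minimal sketch (the route body and config keys here are placeholders standing in for your form and calculator logic), followed by an acceptance-style check using Flask's built-in test client:</p>

```python
from flask import Flask

def create_app(config=None):
    # App factory: tests build their own app with test config,
    # production code calls create_app() with no arguments
    app = Flask(__name__)
    app.config["SECRET_KEY"] = "seekrit"
    if config:
        app.config.update(config)

    @app.route("/")
    def index():  # placeholder for your simple_calc_page view
        return "ok"

    return app

# Acceptance-style check: no server, no browser, just the test client
app = create_app({"TESTING": True, "WTF_CSRF_ENABLED": False})
client = app.test_client()
resp = client.get("/")
print(resp.status_code)  # 200
```

<p>The factory can live in the same module as the routes for an app this small, or in a package's <code>__init__.py</code> once it grows.</p>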
|
<python><flask><pytest>
|
2023-04-13 16:12:23
| 1
| 889
|
Arepo
|
76,007,606
| 7,479,376
|
Bigquery: cannot keep field description
|
<p>I've created an empty table in Bigquery:</p>
<pre><code>import datetime

from google.cloud import bigquery
schema = [
bigquery.SchemaField(name="id",
field_type="INTEGER",
description='user ID',
mode="REQUIRED"),
bigquery.SchemaField(name="field1",
field_type="INTEGER",
description='my field 1',
mode="NULLABLE")
]
table = bigquery.Table('myproject.mydataset.mytable', schema=schema)
table.description="A test table."
table.expires=datetime.datetime(2021, 1, 1)
table.labels={"label": "mylabel"}
table = bq_client.create_table(table)
</code></pre>
<p>However, when I add new data in <code>WRITE_TRUNCATE</code> mode, the field descriptions are lost and all fields become <code>NULLABLE</code>.
<pre><code>table_id='myproject.mydataset.mytable'
job_config = bigquery.QueryJobConfig(destination=table_id, write_disposition='WRITE_TRUNCATE')
query_job = bq_client.query("SELECT * FROM my_other_table", job_config=job_config)
query_job.result()
table = bigquery.Table('myproject.mydataset.mytable')
table.schema = schema
table = bq_client.update_table(table, ["schema"])
print("Query results loaded to the table {}".format(table_id))
</code></pre>
<p>How to keep the description in the schema fields, as well as the type <code>REQUIRED</code> while writing to the table?</p>
|
<python><google-cloud-platform><google-bigquery>
|
2023-04-13 16:09:17
| 1
| 3,353
|
Galuoises
|
76,007,394
| 18,018,869
|
Repeat given array to more complex shape
|
<p>I want to create an array of shape <code>(3, 3, 4)</code>. The data to populate the array with is given.</p>
<p>My solution right now works perfectly fine but feels like I am missing a numpy lesson here. I do not want to do multiple <code>.repeat()</code>s over and over.</p>
<pre class="lang-py prettyprint-override"><code>start = np.linspace(start=10, stop=40, num=4)
arr = np.repeat([start], 3, axis=0)
arr = np.repeat([arr], 3, axis=0)
arr
# output
array([[[10., 20., 30., 40.],
[10., 20., 30., 40.],
[10., 20., 30., 40.]],
[[10., 20., 30., 40.],
[10., 20., 30., 40.],
[10., 20., 30., 40.]],
[[10., 20., 30., 40.],
[10., 20., 30., 40.],
[10., 20., 30., 40.]]])
</code></pre>
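<p>Two one-step alternatives worth sketching: <code>np.tile</code> copies the row into the leading <code>(3, 3)</code> shape in a single call, and <code>np.broadcast_to</code> produces the same values as a zero-copy (read-only) view:</p>

```python
import numpy as np

start = np.linspace(start=10, stop=40, num=4)

# One call instead of repeated np.repeat: tile the row into shape (3, 3, 4)
arr = np.tile(start, (3, 3, 1))

# Zero-copy alternative: a read-only broadcast view of the same values
arr2 = np.broadcast_to(start, (3, 3, 4))

print(arr.shape)  # (3, 3, 4)
```

<p>Use <code>np.tile</code> (or <code>broadcast_to(...).copy()</code>) if the result needs to be writable.</p>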
|
<python><numpy><multidimensional-array>
|
2023-04-13 15:45:26
| 3
| 1,976
|
Tarquinius
|
76,007,383
| 4,422,095
|
How to use `@task` decorator in Airflow to return outputs from wrapped function
|
<p>I'm confused how the <code>@task</code> decorator is supposed to be used in Airflow when working with large amounts of data.</p>
<p>For most situations where I work with data, I am typically dealing with several transformations of that data. I use various assorted methods for this and the methods I use typically have an output.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchvision.models as models
import torchvision.transforms as transforms
# Load the pre-trained ResNet-50 model
model = models.resnet50(pretrained=True)
# Create a tensor representing an image
image_tensor = torch.rand(1, 3, 224, 224)
# Define the transformation to apply to the image tensor
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
# Apply the transformation to the image tensor
input_tensor = transform(image_tensor)
# Pass the input tensor through the pre-trained model
output_tensor = model(input_tensor)
</code></pre>
<p>This is an example of the kind of transformation I'm referring to.</p>
<p>The problem I'm having with Airflow is that the <code>@task</code> decorator appears to wrap all the outputs of my functions and makes their output value of type <code>PlainXComArg</code>.</p>
<p>But consider the following</p>
<blockquote>
<p>Knowing the size of the data you are passing between Airflow tasks is important when deciding which implementation method to use. As you'll learn, XComs are one method of passing data between tasks, but they are only appropriate for small amounts of data. Large data sets require a method making use of intermediate storage and possibly utilizing an external processing framework.</p>
</blockquote>
<p>and also</p>
<blockquote>
<p>XComs should be used to pass small amounts of data between tasks. For example, task metadata, dates, model accuracy, or single value query results are all ideal data to use with XCom.</p>
<p>While there is nothing stopping you from passing small data sets with XCom, be very careful when doing so. This is not what XCom was designed for, and using it to pass data like pandas dataframes can degrade the performance of your DAGs and take up storage in the metadata database.</p>
<p>XCom cannot be used for passing large data sets between tasks. The limit for the size of the XCom is determined by which metadata database you are using:</p>
<p>Postgres: 1 Gb</p>
<p>SQLite: 2 Gb</p>
<p>MySQL: 64 Kb</p>
<p>You can see that these limits aren't very big. And even if you think your data might meet the maximum allowable limit, don't use XComs. Instead, use intermediary data storage, which is more appropriate for larger amounts of data.</p>
</blockquote>
<p>Source: <a href="https://docs.astronomer.io/learn/airflow-passing-data-between-tasks" rel="nofollow noreferrer">https://docs.astronomer.io/learn/airflow-passing-data-between-tasks</a></p>
<p>Based on this, it seems using XCom is a bad idea for many data tasks I'd be considering. The link I mentioned offered some example code for using intermediate storage</p>
<pre class="lang-py prettyprint-override"><code>from pendulum import datetime, duration
from io import StringIO
import pandas as pd
import requests
from airflow.decorators import dag, task
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
S3_CONN_ID = "aws_conn"
BUCKET = "myexamplebucketone"
@task
def upload_to_s3(cat_fact_number):
# Instantiate
s3_hook = S3Hook(aws_conn_id=S3_CONN_ID)
# Base URL
url = "http://catfact.ninja/fact"
# Grab data
res = requests.get(url).json()
# Convert JSON to csv
res_df = pd.DataFrame.from_dict([res])
res_csv = res_df.to_csv()
# Take string, upload to S3 using predefined method
s3_hook.load_string(
res_csv,
"cat_fact_{0}.csv".format(cat_fact_number),
bucket_name=BUCKET,
replace=True,
)
@task
def process_data(cat_fact_number):
"""Reads data from S3, processes, and saves to new S3 file"""
# Connect to S3
s3_hook = S3Hook(aws_conn_id=S3_CONN_ID)
# Read data
data = StringIO(
s3_hook.read_key(
key="cat_fact_{0}.csv".format(cat_fact_number), bucket_name=BUCKET
)
)
df = pd.read_csv(data, sep=",")
# Process data
processed_data = df[["fact"]]
print(processed_data)
# Save processed data to CSV on S3
s3_hook.load_string(
processed_data.to_csv(),
"cat_fact_{0}_processed.csv".format(cat_fact_number),
bucket_name=BUCKET,
replace=True,
)
@dag(
start_date=datetime(2021, 1, 1),
max_active_runs=1,
schedule="@daily",
default_args={"retries": 1, "retry_delay": duration(minutes=1)},
catchup=False,
)
def intermediary_data_storage_dag():
upload_to_s3(cat_fact_number=1) >> process_data(cat_fact_number=1)
intermediary_data_storage_dag()
</code></pre>
<p>However, I notice that this code side-steps the issue I'm focused on because neither functions referenced with the <code>@task</code> decorator involve return statements. By contrast, my task below does have a return statement. But as you see, this will create a problem.</p>
<pre class="lang-py prettyprint-override"><code>import torch
from typing import Optional
from airflow.decorators import task
@task
def generate_normal_vector(k: int, filepath: Optional[str] = None) -> torch.Tensor:
"""
Generates a vector of length k with normally distributed random numbers.
Parameters:
k (int): Length of the vector.
filepath (str, optional): Path to save the tensor as a file. If None, the tensor is not saved.
Returns:
torch.Tensor: A tensor of shape (k,).
"""
tensor = torch.randn(k)
if filepath:
torch.save(tensor, filepath)
return tensor
</code></pre>
<p>and I have testing</p>
<pre><code>class TestGenerateNormalVector(unittest.TestCase):
def test_generate_normal_vector_shape(self):
# Test that the function returns a tensor of the correct shape
k = 10
vector = generate_normal_vector(k)
self.assertEqual(vector.shape, torch.Size([k]))
def test_generate_normal_vector_values(self):
# Test that the function generates normally distributed random numbers
k = 10000
vector = generate_normal_vector(k)
mean = vector.mean()
std = vector.std()
self.assertAlmostEqual(mean, 0, delta=0.1)
self.assertAlmostEqual(std, 1, delta=0.1)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>When I run this, I get an error</p>
<pre><code>======================================================================
ERROR: test_generate_normal_vector_shape (__main__.TestGenerateNormalVector)
----------------------------------------------------------------------
Traceback (most recent call last):
File "...<filepath>...\helper_functions_test.py", line 65, in test_generate_normal_vector_shape
self.assertEqual(vector.shape, torch.Size([k]))
AttributeError: 'PlainXComArg' object has no attribute 'shape'
======================================================================
ERROR: test_generate_normal_vector_values (__main__.TestGenerateNormalVector)
----------------------------------------------------------------------
Traceback (most recent call last):
File "...<filepath>...\helper_functions_test.py", line 71, in test_generate_normal_vector_values
mean = vector.mean()
AttributeError: 'PlainXComArg' object has no attribute 'mean'
</code></pre>
<p>So you can see the problem I have, which is that the <code>@task</code> wrapper yields this <code>PlainXComArg</code>. So when I try to treat it as a torch tensor, the torch methods don't work. My original functions returned torch tensors and this works without the decorator. So the problem is that the <code>@task</code> decorator is converting them to type <code>PlainXComArg</code>.</p>
<p>While I understand I can save every output from all my functions to disk, is it possible to somehow use the <code>@task</code> decorator while still returning the outputs of my functions normally?</p>
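<p>One common pattern (a sketch under the assumption that unit tests should never see Airflow objects): keep the heavy logic as a plain function that returns normally, and wrap it for the DAG separately. Tests then call the plain callable, and only DAG definitions touch the decorated task; recent Airflow versions also expose the undecorated callable on a decorated task via <code>.function</code>, if I recall the API correctly. The list-of-floats stand-in below replaces the torch tensor only to keep the sketch dependency-free:</p>

```python
import random

def generate_normal_vector(k):
    # Plain logic: returns a real value, no XCom involved
    return [random.gauss(0.0, 1.0) for _ in range(k)]

try:
    # Only wrap for the DAG; guarding the import keeps tests Airflow-free
    from airflow.decorators import task
    generate_normal_vector_task = task(generate_normal_vector)
except ImportError:
    generate_normal_vector_task = None

vec = generate_normal_vector(10)
print(len(vec))  # 10
```

<p>Inside a DAG you would call <code>generate_normal_vector_task(...)</code>; everywhere else, including <code>unittest</code>, you call <code>generate_normal_vector(...)</code> and get real return values back.</p>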
|
<python><airflow>
|
2023-04-13 15:44:41
| 2
| 2,244
|
Stan Shunpike
|
76,007,323
| 19,198,552
|
Why is the generated tkinter button-1 event not recognized?
|
<p>I have 2 canvas rectangles, which have a binding for Button-1. Sometimes I want that when one rectangle gets this event, then the other rectangle should also get this event. So I have to duplicate the Button-1 event by the widget method "event_generate".</p>
<p>My example code is kind of minimal, so you can press the left mouse button only once. When you press the left mouse button over the red rectangle, this event is recognized, which is proved by the message "button1 red" showing up. Because I have added a second binding to each of the rectangles, the method "create_event_for_rectangle" is also started, which is proved by the message "Create event for other rectangle" showing up. This method also generates a new Button-1 event at the green rectangle. The coordinates of the generated event seem to be correct, as a small rectangle is additionally created at the calculated coordinates. This generated event at the green rectangle should now produce the message "button1 green 45 45", but nothing happens.</p>
<p>This is my code:</p>
<pre><code>import tkinter as tk
class rectangle():
def __init__(self, factor, color):
self.canvas_id=canvas.create_rectangle(factor*10,factor*10,
(factor+1)*10,(factor+1)*10,fill=color)
canvas.tag_bind(self.canvas_id, "<Button-1>", self.button1)
self.color = color
def button1(self, event):
print("button1", self.color, event.x, event.y)
def get_canvas_id(self):
return self.canvas_id
def get_center(self):
coords = canvas.coords(self.canvas_id)
return (coords[0]+coords[2])/2, (coords[1]+coords[3])/2
def create_event_for_rectangle(not_clicked):
print("Create event for other rectangle")
canvas.tag_unbind(red.get_canvas_id() , "<Button-1>", func_id_red)
canvas.tag_unbind(green.get_canvas_id(), "<Button-1>", func_id_green)
not_clicked_center_x, not_clicked_center_y = not_clicked.get_center()
canvas.event_generate("<Button-1>",
x=not_clicked_center_x, y=not_clicked_center_y)
canvas.create_rectangle(not_clicked_center_x-1, not_clicked_center_y-1,
not_clicked_center_x+1, not_clicked_center_y+1)
root = tk.Tk()
canvas = tk.Canvas(width=100, height=100)
canvas.grid()
red = rectangle(1, "red" )
green = rectangle(4, "green")
func_id_red = canvas.tag_bind(red.get_canvas_id() , "<Button-1>",
lambda event: create_event_for_rectangle(green), add="+multi")
func_id_green = canvas.tag_bind(green.get_canvas_id(), "<Button-1>",
lambda event: create_event_for_rectangle(red ), add="+multi")
root.mainloop()
</code></pre>
<p>What am I doing wrong?</p>
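<p>One likely explanation (hedged — I have not traced Tk's dispatch in detail): canvas <code>tag_bind</code> handlers fire for the item currently under the mouse pointer, and <code>event_generate</code> with <code>x</code>/<code>y</code> does not actually move the pointer unless <code>warp=True</code>, so the green rectangle never becomes the "current" item and its binding never runs. A sketch that sidesteps event generation entirely by invoking the other rectangle's handler with a synthetic event object (the stub class stands in for your canvas rectangles, so this runs without a display):</p>

```python
from types import SimpleNamespace

class Rect:
    # Minimal stand-in for the question's rectangle class
    def __init__(self, color):
        self.color = color
        self.clicks = []

    def get_center(self):
        return (45.0, 45.0)

    def button1(self, event):
        self.clicks.append((event.x, event.y))

def forward_click(other):
    # Call the other rectangle's handler directly instead of event_generate
    cx, cy = other.get_center()
    other.button1(SimpleNamespace(x=int(cx), y=int(cy)))

green = Rect("green")
forward_click(green)
print(green.clicks)  # [(45, 45)]
```

<p>In the real program, <code>forward_click</code> would replace the <code>canvas.event_generate(...)</code> call inside <code>create_event_for_rectangle</code>.</p>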
|
<python><tkinter><events><canvas>
|
2023-04-13 15:36:59
| 2
| 729
|
Matthias Schweikart
|
76,007,313
| 11,926,527
|
Python code to remove the end of document is not working
|
<p>I am using python-docx to clean up multiple Word documents.</p>
<p>The following code is supposed to find paragraphs which contain only one word and the word is among the list provided, case-insensitive, and then remove the remaining text from the document. However, it is not working. I can't figure out the reason!</p>
<pre class="lang-py prettyprint-override"><code>import os
import re
from docx import Document
def remove_end(document):
    for paragraph in document.paragraphs:
        text = paragraph.text.strip().lower()
        words_to_check = ['references', 'acknowledgements', 'note', 'notes']
        if text in words_to_check and len(paragraph.text.split()) <= 2:
            if paragraph not in document.paragraphs:
                continue
            idx = document.paragraphs.index(paragraph)
            del document.paragraphs[idx+1:]
            break
    document.save(file_path)
</code></pre>
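<p>One detail that may explain the silent failure: in python-docx, <code>document.paragraphs</code> is a property that builds a brand-new list on every access, so <code>del document.paragraphs[idx+1:]</code> only mutates a throwaway list and never touches the document. A dependency-free sketch of the effect:</p>

```python
# A stand-in for python-docx's behavior: .paragraphs is rebuilt per access
class FakeDocument:
    def __init__(self, items):
        self._body = list(items)

    @property
    def paragraphs(self):
        # python-docx builds a brand-new list each time you read .paragraphs
        return list(self._body)

doc = FakeDocument(["intro", "references", "ref 1", "ref 2"])
snapshot = doc.paragraphs
del snapshot[2:]       # only mutates the throwaway snapshot...
print(doc.paragraphs)  # ...the document still has all four paragraphs
```

<p>To actually truncate the document you have to remove the underlying XML elements, e.g. something along the lines of <code>p._element.getparent().remove(p._element)</code> for each trailing paragraph (that relies on python-docx internals, so treat it as an assumption to verify).</p>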
|
<python><ms-word>
|
2023-04-13 15:36:06
| 1
| 392
|
Leila
|
76,007,296
| 9,314,961
|
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'angry'
|
<p>I am new to python and I was trying the SER solution at '<a href="https://github.com/mldmxm/CTL-MTNet" rel="nofollow noreferrer">https://github.com/mldmxm/CTL-MTNet</a>'. I downloaded the publicly available 'SAVEE' and 'EMODB' speech emotion datasets from the internet and added them to the 'CAAM_Code' folder as shown in the image. <a href="https://i.sstatic.net/H7KMp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H7KMp.png" alt="enter image description here" /></a></p>
<p>When I execute the main file it gives the error, '<strong>FileNotFoundError: [WinError 2] The system cannot find the file specified: 'angry'</strong>'<a href="https://i.sstatic.net/MAMa3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MAMa3.png" alt="This is the error" /></a>. What I understood was that, there is not an 'angry' folder to search in the dataset though it has used 'angry' in the tuple in Utils.py file. Does that mean that I have to modify the file names in the datasets?</p>
<p>Please note that the SAVEE dataset contains angry speech tracks as 'a01, a02,... etc'. and the EMODB uses a different file naming structure.</p>
|
<python><speech-recognition>
|
2023-04-13 15:34:47
| 0
| 311
|
RMD
|
76,007,288
| 2,478,485
|
How to check if system is installed with latest python version (python3.10)?
|
<p>I am using the following <code>shell</code> command to check which <code>python</code> version is installed:</p>
<pre><code>$ python3 -c 'import sys; print(sys.version_info)'
sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
</code></pre>
<p>But this command returns the default Python version (3.8) that <code>python3</code> points to, instead of the higher Python version that is installed (3.10).</p>
<p><strong>How to check if python3.10 version is installed in a host ?</strong></p>
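<p>A portable sketch (the function name is my own): probe <code>PATH</code> for the specific interpreter name rather than asking <code>python3</code>, which only ever reports whatever it happens to point to:</p>

```shell
# Probe for a specific interpreter name on PATH and print its version.
# Returns non-zero when that interpreter is absent.
find_python() {
  if command -v "$1" >/dev/null 2>&1; then
    "$1" -c 'import sys; print("%d.%d.%d" % sys.version_info[:3])'
  else
    echo "$1 not found" >&2
    return 1
  fi
}

find_python python3.10 || echo "install python3.10 first"
```

<p>The same function works for <code>python3.11</code>, <code>python3.9</code>, and so on, so a loop over candidate names can report everything installed on the host.</p>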
|
<python><python-3.x><bash><shell>
|
2023-04-13 15:34:02
| 2
| 3,355
|
Lava Sangeetham
|
76,007,269
| 1,668,622
|
How to avoid mypy complaints when inheriting from a built-in collection type?
|
<p>Running <code>mypy</code> on code like this</p>
<pre class="lang-py prettyprint-override"><code>class MySpecialList(list):
    # funky extra functionality
    ...
</code></pre>
<p>gives me</p>
<pre><code>my_file.py:42: error: Missing type parameters for generic type "list" [type-arg]
</code></pre>
<p>I can avoid this by not inheriting from <code>list</code> at all or ignoring this error.</p>
<p>But how <em>should</em> I deal with this? Isn't this a bug in mypy?</p>
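<p>It is not a mypy bug: <code>list</code> is generic, and under strict settings you are asked to state the element type. A sketch of the two usual options (the class names are mine, and subscripting the built-in <code>list</code> assumes Python ≥ 3.9; older versions would use <code>typing.List</code> instead):</p>

```python
from typing import TypeVar

T = TypeVar("T")

# Option 1: commit to a concrete element type
class IntList(list[int]):
    def total(self) -> int:
        return sum(self)

# Option 2: stay generic yourself, so MySpecialList[str] etc. type-checks
class MySpecialList(list[T]):
    def second(self) -> T:
        return self[1]

print(IntList([1, 2, 3]).total())          # 6
print(MySpecialList(["a", "b"]).second())  # b
```

<p>Either form silences the <code>type-arg</code> error while keeping mypy able to check what the list contains.</p>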
|
<python><python-3.x><mypy>
|
2023-04-13 15:30:58
| 1
| 9,958
|
frans
|
76,007,259
| 8,602,367
|
Question and answer over multiple csv files in langchain
|
<p>I have a folder with multiple CSV files, and I'm trying to figure out a way to load them all into LangChain and ask questions over all of them.</p>
<p>Here's what I have so far.</p>
<pre><code>from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders.csv_loader import CSVLoader
import magic
import os
import nltk
os.environ['OPENAI_API_KEY'] = '...'
loader = DirectoryLoader('../data/', glob='**/*.csv', loader_cls=CSVLoader)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=400, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])
docsearch = Chroma.from_documents(texts, embeddings)
qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch)
query = "how many females are present?"
qa.run(query)
</code></pre>
|
<python><python-3.x><langchain>
|
2023-04-13 15:29:23
| 3
| 1,605
|
Dave Kalu
|
76,007,256
| 8,849,755
|
Importing Python module in C++ not in main
|
<p>I want to use a Python module within C++. In all examples I find (<a href="https://docs.python.org/3/extending/embedding.html" rel="nofollow noreferrer">doc</a>, <a href="https://stackoverflow.com/a/24687260/8849755">SO1</a>, <a href="https://stackoverflow.com/a/39840385/8849755">SO2</a>) they do things like <code>Py_Initialize()</code> and <code>Py_FinalizeEx()</code>, among other things, within the <code>main</code> function. In my application, however, I am writing a small part of a larger system and have no access to <code>main</code>. I am just writing a function that will be called many times somewhere in the program. What would be the way of doing all the initialization and finalization of the Python stuff in this case? I guess I could do everything within my function, but initializing the interpreter, importing, etc. every time the function is called will probably become very slow. I am not very experienced in C++, not sure if something like the following pseudocode is possible:</p>
<pre class="lang-cpp prettyprint-override"><code>void my_func(void) {
if (my_func_is_called_for_the_first_time()) {
Py_Initialize();
module = load_python_module();
etc_related_to_init_stuff();
}
if (exiting_program()) { // Don't know how this would work, but you get the idea... Something like the `atexit` functionality from Python.
clean_everything_related_to_python_and_the_module();
Py_FinalizeEx();
}
// Actual code of my function using the package goes here...
}
</code></pre>
<h1>Edit 1</h1>
<p>Pseudocode:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <Python.h>
class MyPackage:
def constructor:
Py_Initialize();
self.module = load_python_module();
// Do other things related to the initialization here.
def destructor:
Py_FinalizeEx();
package = MyPackage(); // This is now global
void my_func(void) {
blah();
package.whatever();
}
</code></pre>
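<p>If a function-local static is acceptable, one common idiom (a sketch: the Python C-API calls are left as comments, and the counter exists only to make the one-time behavior visible) ties the interpreter's lifetime to a Meyers singleton — constructed on the first call to your function, destroyed automatically at program exit, with no access to <code>main</code> required:</p>

```cpp
#include <cassert>

static int init_count = 0;  // demo only: counts constructions

struct PyEnv {
    PyEnv() {
        // Py_Initialize();
        // module_ = PyImport_ImportModule("mymodule");
        ++init_count;
    }
    ~PyEnv() {
        // Py_FinalizeEx();  // runs once, during static destruction at exit
    }
};

PyEnv& py_env() {
    static PyEnv instance;  // thread-safe one-time init since C++11
    return instance;
}

void my_func() {
    py_env();  // first call initializes; later calls are cheap
    // ... call into the Python module here ...
}
```

<p>One caveat to verify for your program: finalizing the interpreter during static destruction can be fragile if other static objects still touch Python, since destruction order across translation units is unspecified.</p>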
|
<python><c++>
|
2023-04-13 15:29:02
| 1
| 3,245
|
user171780
|
76,007,207
| 889,742
|
Python3 and VSCode ModuleNotFoundError: No module named '_ctypes'
|
<p>I'm working with Visual Code and Python 3 and am getting a</p>
<blockquote>
<p>ModuleNotFoundError: No module named '_ctypes' error.</p>
</blockquote>
<p>error on my laptop. ( A similar setup works on my desktop )</p>
<p>It turns out that VS code is pointing to a python version installed by chocolatey.</p>
<p>I was able to fix it by changing to a different local python installation Following this
<a href="https://stackoverflow.com/questions/56658553/module-not-found-error-in-vs-code-despite-the-fact-that-i-installed-it">Module not found error in VS code despite the fact that I installed it</a></p>
<p>but the next time I restarted VS Code the problem returned.
Can I either make VS Code keep using the new installation, or otherwise fix the chocolatey installation?</p>
<p>( the problem with chocolatey python may be related to this question, but I'm not sure how to fix it in chocolatey )
<a href="https://stackoverflow.com/questions/27022373/python3-importerror-no-module-named-ctypes-when-using-value-from-module-mul">Python3: ImportError: No module named '_ctypes' when using Value from module multiprocessing</a></p>
<p>I would love some help with either one of these directions.</p>
|
<python><visual-studio-code><modulenotfounderror>
|
2023-04-13 15:21:59
| 2
| 6,187
|
Gonen I
|
76,007,165
| 12,292,254
|
Speeding up the process of list-appending via if else statements
|
<p>I have a list that contains strings that are a combination of "A", "B" and "C".</p>
<p>For example:</p>
<pre><code>abc_list = ["AB", "AC", "AB", "BC", "BB", ...]
</code></pre>
<p>I now want to create a new list that translates every element again to "A", "B" and "C" with a simple rule.</p>
<p>The rule is as follows:</p>
<p>If the element is "AA", "AB" or "BA", then the new string element has to become "A".
If the element is "BB", "AC” or "CA”, then the new string element has to become "B".
If the element is "CC", "CB" or "BC", then the new string element has to become "C".</p>
<p>So the new list becomes:</p>
<pre><code>new_abc_list = ["A", "B", "A", "C", "B", ...]
</code></pre>
<p>I'm doing it as follow for the moment, but I got a feeling it can be done much more efficient.</p>
<pre><code>new_abc_list = []
for element in abc_list:
    if element == "AA":
        new_abc_list.append("A")
    elif element == "AB":
        new_abc_list.append("A")
    elif element == "BA":
        new_abc_list.append("A")
    elif element == "CB":
        new_abc_list.append("C")
    elif element == "CC":
        new_abc_list.append("C")
    elif element == "BC":
        new_abc_list.append("C")
    else:
        new_abc_list.append("B")
return new_abc_list
</code></pre>
<p>What is a more efficient way of doing this?</p>
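<p>A table-lookup sketch: build the mapping once and translate with a list comprehension. The dict literal mirrors the three rules, and anything absent falls through to <code>"B"</code> just like the <code>else</code> branch above:</p>

```python
# Map each two-letter code straight to its result; anything absent maps to "B"
RULES = {
    "AA": "A", "AB": "A", "BA": "A",
    "CC": "C", "CB": "C", "BC": "C",
}

def translate(abc_list):
    return [RULES.get(element, "B") for element in abc_list]

print(translate(["AB", "AC", "AB", "BC", "BB"]))  # ['A', 'B', 'A', 'C', 'B']
```

<p>Dict lookup is O(1) per element, so the whole translation is a single linear pass with no branch chain.</p>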
|
<python><numpy>
|
2023-04-13 15:18:56
| 1
| 460
|
Steven01123581321
|
76,007,123
| 807,149
|
Error b'' trying to install psycopg2 on AIX 7.2
|
<p>I am trying to install psycopg2==2.9.6 on AIX 7.2. The first thing I did was install postgresql 13.10 from source in order to get pg_config.</p>
<p>Now when I try to run <code>setup.py build</code> I get this blank error message:</p>
<pre><code>python -v setup.py build
...
running build_ext
# /opt/freeware/lib64/python3.7/__pycache__/_sysconfigdata_m_aix7_.cpython-37.pyc matches /opt/freeware/lib64/python3.7/_sysconfigdata_m_aix7_.py
# code object from '/opt/freeware/lib64/python3.7/__pycache__/_sysconfigdata_m_aix7_.cpython-37.pyc'
import '_sysconfigdata_m_aix7_' # <_frozen_importlib_external.SourceFileLoader object at 0xa000000008b7f08>
Error: b''
</code></pre>
<p>I get the same error when trying to install directly with pip. My pip, setuptools and wheel packages are all up to date. I'm not sure where to go from here, any ideas?</p>
|
<python><psycopg2><aix>
|
2023-04-13 15:15:29
| 1
| 958
|
Nolan
|
76,007,112
| 14,269,252
|
Finding the words or sentence that is followed by a search word and put them into a dictionary python
|
<p>I have to find the words or sentence that follow a search word and put them into a dictionary. My data is in a PDF, which I have already extracted to text using the PyPDF2 library. I am new to NLP and I don't know how to implement this part of the code.
I know how to find one word that follows the search word, but sometimes it is a single word and sometimes it is a sentence, which can be identified by <strong>\n</strong>.</p>
<blockquote>
<p>the text example:</p>
<p>["CODE: ID\nStudy of Men's brain ID\nBased upon 16", '3 valid cases
out of 76', '33 total cases.\n•Mean: 54695.29\n•Minimum:
8.00\nVariable Type: 'numeric \nHEALTH: h1 - Health in general\n xxx', ' ccc']</p>
</blockquote>
<pre><code>import PyPDF2
search_keywords=['CODE','LVI','HEALTH']
pdfFileObj = open('df.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
pageObj = pdfReader.getPage(5)
text=(pageObj.extractText())
text=text.split(",")
text
</code></pre>
<p>The output should be:</p>
<pre><code>{"CODE":"ID"
"VIS":none
"HEALTH":"h1 - Health in general"}
</code></pre>
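<p>A regex sketch of the keyword-to-end-of-line idea: <code>.</code> stops at <code>\n</code> by default, which matches the rule that a value ends at <strong>\n</strong>. The sample text is abbreviated from the question, and I use <code>VIS</code> (as in the desired output) rather than <code>LVI</code> from the code, so adjust the keyword list to your real data:</p>

```python
import re

text = "CODE: ID\nStudy of Men's brain ID\nHEALTH: h1 - Health in general\n xxx"
search_keywords = ["CODE", "VIS", "HEALTH"]

result = {}
for kw in search_keywords:
    # Capture everything after the keyword (optional colon) up to end of line
    m = re.search(rf"{kw}\s*:?\s*(.+)", text)
    result[kw] = m.group(1).strip() if m else None

print(result)  # {'CODE': 'ID', 'VIS': None, 'HEALTH': 'h1 - Health in general'}
```

<p>Keywords that never appear come out as <code>None</code>, matching the desired dictionary shape.</p>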
|
<python><nlp>
|
2023-04-13 15:14:37
| 2
| 450
|
user14269252
|
76,007,004
| 2,015,669
|
How to calculate individual documents score BM25F score for each document?
|
<p>I am trying to implement BM25F from scratch in Python.
Here is the simplified formulation of BM25F:</p>
<p><a href="https://i.sstatic.net/pXLnf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pXLnf.jpg" alt="enter image description here" /></a></p>
<p>However, after implementing it, I am getting the same score for all the documents in the collection.</p>
<p>After a careful inspection, the only intuition I can understand is that the formulation is considering the stream scores and not the individual document scores.</p>
<p>Which part of the formulation am I missing that considers the individual scores for each document?</p>
<p>I have been consciously trying to decode this since last week without any luck; any help would be immensely appreciated!</p>
|
<python><ranking><information-retrieval><ranking-functions>
|
2023-04-13 15:04:18
| 0
| 640
|
HQuser
|
76,007,000
| 19,392,385
|
Captions of slider not visible (VPython)
|
<p>I have created a little 3D simulation with VPython, along with sliders to control the positions and colors of objects. However, I can't find a way to display a caption indicating what each slider is about.</p>
<p>I added a <code>wtitle</code> argument when defining the slider. I don't get any error message, so I assume it is correctly defined, but the caption is probably hidden or I am missing a step somewhere.</p>
<p>Here is my code:</p>
<pre><code>""" Import libraries """
import vpython as vp
""" Create Canvas """
canvas = vp.canvas(width=1080, height=720)
scene = vp.scene
""" Create objects """
# Define the planet
planet_radius = 5
# planet_texture = 'planet.jpg' # You can replace this with your own custom picture
planet = vp.sphere(radius=planet_radius, texture=vp.textures.earth, shininess=0)
def atm_opacity(wavelength):
return atm_density * (550 / wavelength)**4
# Define the atmosphere
atm_thickness = 3.0 # You can modify this using a slide cursor
atm_density = 0.5 # You can modify this using a slide cursor
atm_radius = planet.radius+atm_thickness
atm = vp.sphere(radius=atm_radius, opacity=0.25)
atm.opacity_function = atm_opacity
# Define the light source
def light_color(wavelength):
r, g, b = 0, 0, 0
if wavelength >= 400 and wavelength < 440:
r = -(wavelength - 440) / (440 - 400)
b = 1.0
elif wavelength >= 440 and wavelength < 490:
g = (wavelength - 440) / (490 - 440)
b = 1.0
elif wavelength >= 490 and wavelength < 510:
g = 1.0
b = -(wavelength - 510) / (510 - 490)
elif wavelength >= 510 and wavelength < 580:
r = (wavelength - 510) / (580 - 510)
g = 1.0
elif wavelength >= 580 and wavelength < 645:
r = 1.0
g = -(wavelength - 645) / (645 - 580)
elif wavelength >= 645 and wavelength <= 700:
r = 1.0
return vp.vector(r, g, b)
light_type = 'red' # You can modify this using a drop-down menu
# light_wavelength = 550 # You can modify this using a slide cursor
light_size = 2 # You can modify this using a slide cursor
light_intensity = 1.0 # You can modify this using a slide cursor
pos_drift = 50
light_xpos = atm_radius+pos_drift
light_ypos = atm_radius+pos_drift
light_zpos = atm_radius+pos_drift
light_pos = vp.vector(light_xpos, light_ypos, light_zpos)
wavelength = 400
light = vp.local_light(pos=light_pos, color= light_color(wavelength), radius=light_size)
light.color_function = light_color
# Define the camera
scene.autoscale = False
scene.range = 10
scene.forward = vp.vector(0, 0, -1)
scene.up = vp.vector(0, 1, 0)
scene.caption = 'Click and drag on the light source to move it. Use the drop-down menu and slide cursors to adjust its properties.'
""" Define the event handlers """
def on_light_down(evt):
global light_dragging, light_drag_pos
light_dragging = True
light_drag_pos = evt.pos
def on_light_move(evt):
global light_dragging, light_drag_pos
if light_dragging:
light.pos += evt.pos - light_drag_pos
light_drag_pos = evt.pos
def on_light_up(evt):
global light_dragging
light_dragging = False
# Bind the event handlers
vp.scene.bind('mousedown', on_light_down)
vp.scene.bind('mousemove', on_light_move)
vp.scene.bind('mouseup', on_light_up)
def on_mouse_down(event):
global dragging, last_mouse_pos
obj = scene.mouse.pick
if obj == light:
dragging = True
last_mouse_pos = scene.mouse.pos
canvas.bind('mousedown', on_mouse_down)
# Create the slider to rotate the light source around
def set_rotation_angle(slider):
# light.rotate(angle=slider.value, axis=vp.vector(0, 1, 0), origin=planet.pos)
# Calculate the rotation angle
angle = vp.radians(slider.value)
# Calculate the new position of the object
x = atm_radius+pos_drift * vp.cos(angle)
y = 0
z = atm_radius+pos_drift * vp.sin(angle)
# Set the position of the object
light.pos = vp.vector(x, y, z)
# Create the slider
slider = vp.slider(wtitle='Rotate light source', min=0, max=360, step=1, value=0, bind=set_rotation_angle)
# Create slider to change light source wavelength
def WL_cursor(slider):
val = slider.value
new_wavelength = light_color(val)
global light
light.color = new_wavelength
slider = vp.slider(wtitle='Change light source color', min=400, max=700, length=250, bind=WL_cursor)
# Define the white light button
def white():
global light
light.color = vp.color.white
# Create the exit button
white_button = vp.button(bind=white, text="White light")
""" Run the simulation """
while True:
vp.rate(30)
# Update the atmosphere
atm.radius = planet.radius+atm_thickness
atm.opacity = 0.25*atm_density
# Update the light source
light.radius = light_size
light.intensity = light_intensity
light.pos = light.pos
# Update the camera
vp.scene.center = planet.pos
</code></pre>
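The snippet calls a `light_color` helper that is not shown in the excerpt above. As a hedged, self-contained sketch of what such a wavelength-to-colour helper typically looks like (the mapping below is a rough visible-spectrum approximation, not the asker's actual function):

```python
def wavelength_to_rgb(nm: float) -> tuple[float, float, float]:
    # Rough piecewise map of visible wavelengths (380-780 nm) to RGB in [0, 1];
    # anything outside the visible band falls back to black.
    if 380 <= nm < 440:
        return (440 - nm) / 60, 0.0, 1.0      # violet -> blue
    if 440 <= nm < 490:
        return 0.0, (nm - 440) / 50, 1.0      # blue -> cyan
    if 490 <= nm < 510:
        return 0.0, 1.0, (510 - nm) / 20      # cyan -> green
    if 510 <= nm < 580:
        return (nm - 510) / 70, 1.0, 0.0      # green -> yellow
    if 580 <= nm < 645:
        return 1.0, (645 - nm) / 65, 0.0      # yellow -> red
    if 645 <= nm <= 780:
        return 1.0, 0.0, 0.0                  # red
    return 0.0, 0.0, 0.0
```

In the scene above, the resulting triple would be wrapped in `vp.vector(r, g, b)` before being assigned to `light.color`.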
|
<python><3d><slider><vpython>
|
2023-04-13 15:04:03
| 2
| 359
|
Chris Ze Third
|
76,006,978
| 21,420,742
|
How to find past values by ID in Python
|
<p>I have a dataset that looks at all employees' history. The goal is to see an employee's current manager, and their previous manager only if that previous manager left without being replaced. To identify whether a manager has left, look at the <strong>ManagerPositionNum</strong>: this column is unique to a manager, and if another manager shows up with that number then they are filling in for a vacant role. I am running pandas and numpy with this file.</p>
<p>Here is a sample of what I have:</p>
<pre><code>EmpID Date Job_Title ManagerName ManagerPositionNum
101 May 2021 Sales Rep John Doe 1111
101 June 2021 Sales Rep John Doe 1111
102 February 2022 Tech Support Mary Sue 2111
102 March 2022 Tech Support Mary Sue 2111
102 April 2022 Tech Support John Doe 2111
103 October 2022 HR Advisor Sarah Long 3111
103 November 2022 HR Advisor Michael Scott 4111
103 December 2022 HR Advisor John Doe 4111
103 January 2023 HR Advisor John Doe 4111
</code></pre>
<p>Desired Output:</p>
<pre><code> EmpID Date Job_Title ManagerName ManagerPositionNum Vacated Manager
101 May 2021 Sales Rep John Doe 1111
101 June 2021 Sales Rep John Doe 1111
102 February 2022 Tech Support Mary Sue 2111 Mary Sue
102 March 2022 Tech Support Mary Sue 2111 Mary Sue
102 April 2022 Tech Support John Doe 2111 Mary Sue
103 October 2022 HR Advisor Sarah Long 3111
103 November 2022 HR Advisor Michael Scott 4111
103 December 2022 HR Advisor John Doe 4111 Michael Scott
103 January 2023 HR Advisor John Doe 4111 Michael Scott
</code></pre>
<p>Just for clarification:</p>
<p>1111 is unique to John Doe</p>
<p>2111 is unique to Mary Sue</p>
<p>3111 is unique to Sarah Long</p>
<p>4111 is unique to Michael Scott</p>
<p>Code I have tried:</p>
<pre><code> reportid = df.groupby('ManagerName')['ManagerPositionNum'].transform('first')m =
~df['ManagerPositionNum'].eq(reportid) df.loc[m,'ManagerName']
</code></pre>
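One hedged way to derive the <em>Vacated Manager</em> column, assuming rows are date-sorted so that the first manager recorded for a <code>ManagerPositionNum</code> is its rightful owner (per the stated rule that the number is unique to one manager):

```python
import pandas as pd

df = pd.DataFrame({
    "EmpID": [101, 101, 102, 102, 102],
    "ManagerName": ["John Doe", "John Doe", "Mary Sue", "Mary Sue", "John Doe"],
    "ManagerPositionNum": [1111, 1111, 2111, 2111, 2111],
})

# The rightful owner of each position number is the first manager seen for it.
owner = df.groupby("ManagerPositionNum")["ManagerName"].transform("first")

# A position has been vacated if anyone other than its owner ever held it.
vacated = df.loc[df["ManagerName"].ne(owner), "ManagerPositionNum"].unique()

# Stamp every row of a vacated position with the departed owner's name.
df["Vacated Manager"] = owner.where(df["ManagerPositionNum"].isin(vacated), "")
```

Rows for position 2111 all get "Mary Sue", while John Doe's own position 1111 stays blank, matching the desired output above.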
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-13 15:01:43
| 1
| 473
|
Coding_Nubie
|
76,006,901
| 12,362,355
|
How to use local Apple M1 GPU in a Colab note in a local run time
|
<p>Hi, I'm new to Colab and deep learning.
I'm practising a piece of Colab code (from the web) that imports a bunch of packages and checks for GPU access.</p>
<pre><code>!nvcc --version
!nvidia-smi
import os, shutil
import numpy as np
import matplotlib.pyplot as plt
from cellpose import core, utils, io, models, metrics
from glob import glob
use_GPU = core.use_gpu()
yn = ['NO', 'YES']
print(f'>>> GPU activated? {yn[use_GPU]}')
</code></pre>
<p>Now I would like to run this locally on my Mac M1 Pro, and I am able to connect Colab to a local runtime. The problem is: how can I access the M1 chip's GPU?</p>
<p>Running the same code will only give me :</p>
<pre><code>zsh:1: command not found: nvcc
zsh:1: command not found: nvidia-smi
</code></pre>
<p>Which kind of makes sense, since I don't have nvidia-smi on the M1.</p>
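nvcc and nvidia-smi are CUDA tools, so they will never exist on Apple silicon; the M1 GPU is exposed through Metal instead. A hedged sketch of the equivalent check, assuming the local runtime has PyTorch >= 1.12 installed (cellpose can run on a torch backend):

```python
# Probe the Metal Performance Shaders (MPS) backend instead of CUDA.
try:
    import torch
    use_GPU = bool(torch.backends.mps.is_available())
except ImportError:  # torch not installed in this environment -> CPU fallback
    use_GPU = False

yn = ['NO', 'YES']
print(f'>>> GPU activated? {yn[use_GPU]}')
```

Whether cellpose itself can use MPS depends on its version; the check above only tells you the backend exists.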
|
<python><deep-learning><gpu><google-colaboratory><apple-m1>
|
2023-04-13 14:53:37
| 0
| 415
|
ML33M
|
76,006,794
| 2,758,414
|
Is someone trying to hack into my Django app?
|
<p>I have a Django app (personal project) running live in production on Azure VM.</p>
<p>I have looked in <code>/var/log/django.log</code> and I can see a long list of warnings. These look like someone is trying to scan my VM/app in order to find my <code>.env</code> file, login credentials, etc.</p>
<pre><code>2023-04-13 16:19:12 [WARNING ] (log.log_response) Not Found: /.env
2023-04-13 16:19:12 [WARNING ] (log.log_response) Not Found: /.env
2023-04-13 16:19:14 [WARNING ] (log.log_response) Not Found: /.env.save
2023-04-13 16:19:14 [WARNING ] (log.log_response) Not Found: /.env.save
2023-04-13 16:19:14 [WARNING ] (log.log_response) Not Found: /.env.old
2023-04-13 16:19:14 [WARNING ] (log.log_response) Not Found: /.env.old
2023-04-13 16:19:16 [WARNING ] (log.log_response) Not Found: /.env.prod
2023-04-13 16:19:16 [WARNING ] (log.log_response) Not Found: /.env.prod
2023-04-13 16:19:20 [WARNING ] (log.log_response) Not Found: /.env.production
2023-04-13 16:19:20 [WARNING ] (log.log_response) Not Found: /.env.production
</code></pre>
<pre><code>2023-04-13 05:35:17 [WARNING ] (log.log_response) Not Found: /owa/auth/logon.aspx
2023-04-13 05:35:17 [WARNING ] (log.log_response) Not Found: /owa/auth/logon.aspx
2023-04-13 06:02:18 [WARNING ] (log.log_response) Not Found: /login
2023-04-13 06:02:18 [WARNING ] (log.log_response) Not Found: /login
</code></pre>
<p>Is this something I should be concerned about?</p>
<p>It seems like the actor is scanning for files and directories. What if they were successful in locating my <code>.env</code> file? Would they be able to retrieve it?</p>
<p>Also, does the presence of these warnings indicate that my security settings are somehow weak?</p>
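Probes like these are routine background noise against any public IP; they only matter if one of the probed paths is actually served. If the 404 warnings drown out real problems, one hedged option (a sketch assuming the warnings arrive through Django's standard logging) is a logging filter that drops the scanner noise:

```python
import logging

class SkipScannerNoise(logging.Filter):
    """Drop 'Not Found' warnings for paths only vulnerability scanners request."""
    SCAN_PATHS = ("/.env", "/owa/", "/wp-admin")

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        # Keep the record only if it mentions none of the scanner paths.
        return not any(path in message for path in self.SCAN_PATHS)
```

This would be attached via the "filters" key of the LOGGING dict in settings.py; real 404s for your own URLs still come through.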
|
<python><django><azure><security><.env>
|
2023-04-13 14:43:50
| 2
| 2,747
|
LLaP
|
76,006,789
| 11,462,274
|
How to adjust the image to meet the minimum requirements and avoid receiving the Telegram API Bad Request error: PHOTO_INVALID_DIMENSIONS?
|
<p>Summary: Telegram has requirements for images that are sent, and to avoid failures, I add huge white borders far above what would be the minimum necessary. I would like to know how I could create a method so that the image is adjusted at once but with the minimum borders required.</p>
<p>From the documentation I couldn't work out exactly how to compute that value; here is the relevant part of <code>sendPhoto</code>:</p>
<blockquote>
<p>Photo to send. Pass a file_id as a String to send a photo that exists
on the Telegram servers (recommended), pass an HTTP URL as a String
for Telegram to get a photo from the Internet, or upload a new photo
using multipart/form-data. The photo must be at most 10 MB in size.
The photo's width and height must not exceed 10000 in total. Width and
height ratio must be at most 20.</p>
</blockquote>
<p>My sample code creates this image that generates an error when sending it to Telegram because it does not meet the minimum requirements:</p>
<p><a href="https://i.sstatic.net/7r8q3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7r8q3.png" alt="enter image description here" /></a></p>
<p>Currently, the code adds borders above and below a size that I know works, and it looks like this:</p>
<pre class="lang-python prettyprint-override"><code>def create_image():
df = pd.DataFrame(columns=[f"column_letter_{chr(97 + i)}" for i in range(9)])
dfi.export(df, 'telegram_image.png', table_conversion="selenium", max_rows=-1)
def adjust_image():
fil_img = Image.open("telegram_image.png")
min_img = 600
difference = min_img - fil_img.size[1]
new_image = Image.new("RGB", (fil_img.size[0], min_img), (255, 255, 255))
new_image.paste(fil_img, (0, difference // 2))
new_image.save("telegram_image.png")
</code></pre>
<p><a href="https://i.sstatic.net/I56Cw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I56Cw.png" alt="enter image description here" /></a></p>
<p>In this additional method I tried, the problem is that the image shrinks and thus the quality also decreases. I don't want the quality to decrease at all; I just want to adjust the borders so that the image keeps its original size while meeting the minimum required proportion:</p>
<pre class="lang-python prettyprint-override"><code>def adjust_image():
image = Image.open('telegram_image.png')
width, height = image.size
if width / height > 20 or height / width > 20:
new_width = min(width, height * 20)
new_height = min(height, width * 20)
image = image.resize((new_width, new_height), Image.ANTIALIAS)
image.save('telegram_image.png', quality=100, subsampling=0)
</code></pre>
<p><a href="https://i.sstatic.net/mYniZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mYniZ.png" alt="enter image description here" /></a></p>
<p>How would it be possible to create an image adjustment that is not fixed to a specific number of borders to add, and instead adds only the minimum amount necessary?</p>
<p><strong>Complete Code:</strong></p>
<pre class="lang-python prettyprint-override"><code>import dataframe_image as dfi
from PIL import Image
import pandas as pd
import requests
import json
def send_image_telegram(ids_chat, bot_token, text_msg, file_photo, buttons=None, parse_mode=None):
if not isinstance(ids_chat, list):
ids_chat = [ids_chat]
list_for_return = []
for chat_id in ids_chat:
telegram_api_url = f'https://api.telegram.org/bot{bot_token}/sendPhoto'
message_data = {
"chat_id": chat_id,
"caption": text_msg
}
if parse_mode is not None:
message_data['parse_mode'] = parse_mode
if buttons is not None:
message_data['reply_markup'] = json.dumps({'inline_keyboard': buttons})
with open(file_photo, "rb") as imageFile:
response = requests.post(telegram_api_url, files={"photo": imageFile}, data=message_data)
list_for_return.append(json.loads(response.text))
return list_for_return
def create_image():
df = pd.DataFrame(columns=[f"column_letter_{chr(97 + i)}" for i in range(9)])
dfi.export(df, 'telegram_image.png', table_conversion="selenium", max_rows=-1)
def adjust_image():
fil_img = Image.open("telegram_image.png")
min_img = 600
difference = min_img - fil_img.size[1]
new_image = Image.new("RGB", (fil_img.size[0], min_img), (255, 255, 255))
new_image.paste(fil_img, (0, difference // 2))
new_image.save("telegram_image.png")
def to_telegram():
send = send_image_telegram('123456789','XXXXXXXXXX','test','telegram_image.png')
if not send[0]['ok'] and send[0]['description'] == 'Bad Request: PHOTO_INVALID_DIMENSIONS':
adjust_image()
send_image_telegram('123456789','XXXXXXXXXX','test','telegram_image.png')
def main():
create_image()
to_telegram()
if __name__ == '__main__':
main()
</code></pre>
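Since the documented constraints are "width + height must not exceed 10000" and "ratio must be at most 20", the minimum padding can be computed directly instead of hard-coding a 600 px canvas. A hedged sketch of the geometry (pure arithmetic; pasting onto a centred white canvas then works exactly as in <code>adjust_image</code>):

```python
def min_padded_size(w: int, h: int, max_ratio: int = 20) -> tuple[int, int]:
    """Smallest canvas >= (w, h) whose aspect ratio is at most max_ratio.

    Only border pixels are added (never resizing), so quality is untouched.
    """
    if w > h * max_ratio:             # too wide -> grow the height
        return w, -(-w // max_ratio)  # ceil(w / max_ratio)
    if h > w * max_ratio:             # too tall -> grow the width
        return -(-h // max_ratio), h
    return w, h
```

For a 2000x50 strip (ratio 40) this yields a 2000x100 canvas, and a compliant image is returned unchanged. If Telegram also rejects very small sides in practice, the result can additionally be clamped to a minimum side length as an empirical safeguard.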
|
<python><python-imaging-library><telegram><telegram-bot>
|
2023-04-13 14:43:18
| 1
| 2,222
|
Digital Farmer
|
76,006,716
| 1,897,722
|
Performance of Scipy lsim/lsim2
|
<p>In a Python script of mine, I currently use the <a href="https://scipy.github.io/devdocs/reference/generated/scipy.signal.lsim.html#scipy.signal.lsim" rel="nofollow noreferrer">lsim</a> function to simulate a system model. An issue I encountered recently is that lsim spawns a lot of subprocesses on multiple cores, and together they cause heavy CPU load. That fact also shows in the profiling log; I attached the relevant snippet below. I run this script on a processing machine, which I share with multiple people. If I instead use lsim2, it seems that no subprocesses are spawned; however, running the script takes unbearably long. Does anyone have an idea how I can run lsim fast while using fewer resources/just one core?</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
3740 25.422 0.007 51.062 0.014 /grid/common/pkgs/python/v3.7.2/lib/python3.7/site-packages/scipy/signal/ltisys.py:1870(lsim)
26753891 21.519 0.000 21.519 0.000 {built-in method numpy.dot}
12 0.001 0.000 21.450 1.788 /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:431(run)
12 0.000 0.000 21.265 1.772 /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:895(communicate)
24 0.000 0.000 21.265 0.886 /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:985(wait)
24 0.000 0.000 21.265 0.886 /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:1592(_wait)
12 0.000 0.000 21.264 1.772 /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:1579(_try_wait)
</code></pre>
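lsim itself is single-process Python; the multi-core load usually comes from the multithreaded BLAS behind the <code>numpy.dot</code> calls visible in the profile. A hedged sketch of pinning those thread pools to one core (the environment variables must be set before numpy/scipy are first imported in the process):

```python
import os

# Cap the common BLAS/threading pools to a single thread each. This must
# run before the first `import numpy` in the process to take effect.
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS",
            "NUMEXPR_NUM_THREADS", "VECLIB_MAXIMUM_THREADS"):
    os.environ[var] = "1"

# ...then import numpy / scipy.signal and call lsim as before.
```

The threadpoolctl package offers the same control at runtime via a context manager, without relying on import order.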
|
<python><performance><scipy>
|
2023-04-13 14:36:31
| 1
| 377
|
Daiz
|
76,006,687
| 601,862
|
Python updating 1 single element cause the entire column updated in a 2 dimentional array
|
<p>I have a 10 by 10 matrix which has a dict as each element.
When I only want to update a single value, it ends up updating all values in that column.
Why, and how do I fix it?</p>
<pre><code>m = [[{}] * 10]*10
m[0][0] = {"a":1}
</code></pre>
<p><a href="https://i.sstatic.net/VACCj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VACCj.png" alt="enter image description here" /></a></p>
<p>Is there an equivalent NumPy style to do the same thing (if it would be quicker using NumPy)?</p>
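The behaviour can be sketched directly: <code>[[{}] * 10] * 10</code> repeats <em>references</em>, so every cell of a row is the same dict object and every row is the same list object; building each cell fresh removes the aliasing:

```python
# Build a fresh dict for every cell so no two cells alias each other.
m = [[{} for _ in range(10)] for _ in range(10)]
m[0][0] = {"a": 1}
# Only that one cell changed; its row and column neighbours are untouched.
```

NumPy does not help here: an object array of dicts loses vectorisation, so the nested comprehension is the idiomatic fix.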
|
<python>
|
2023-04-13 14:33:44
| 0
| 7,147
|
Franva
|
76,006,681
| 10,638,529
|
AssertionError: 400 != 201 Unit Test Django REST Framework with drf-yasg
|
<p>I am creating a unit test for a POST endpoint that I created:</p>
<p>Endpoint:</p>
<pre><code>class AddJokeAPI(generics.ListAPIView):
serializer_class = JokeSerializer
queryset = Joke.objects.all()
query_param = openapi.Parameter('text', openapi.IN_QUERY,
description="Nuevo chiste",
type=openapi.TYPE_STRING)
@swagger_auto_schema(
operation_description="Guarda en una base de datos el chiste el texto pasado por parámetro",
manual_parameters=[query_param]
)
def post(self, request):
serializer = self.get_serializer(data={"joke": self.request.GET.get('text')})
serializer.is_valid(raise_exception=True)
joke = serializer.save()
return Response({
"joke": JokeSerializer(joke, context=self.get_serializer_context()).data,
"message": "New joke added",
"success": True
}, status=status.HTTP_201_CREATED)
</code></pre>
<p>As you can see, in the POST method I have to send a query param called "text" to register it in the model "Joke" and receive a 201 HTTP status.</p>
<p>This is my test.py method to call that endpoint:</p>
<pre><code>class JokesTestCase(TestCase):
def setUp(self):
self.client = APIClient()
def test_post_joke(self):
payload = {
'text': 'Test Joke'
}
response = self.client.post('/api/jokes/add', payload)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
</code></pre>
<p>the error that i am getting is:</p>
<pre><code>self.assertEqual(response.status_code, status.HTTP_201_CREATED) AssertionError: 400 != 201
</code></pre>
<p>As you can see, the error is that I am making a bad request, but I don't know exactly why I am getting it.</p>
<p>If I make the request with this path instead, the test passes as I expect:</p>
<pre><code>response = self.client.post('/api/jokes/add?text=Test%20Joke')
</code></pre>
<p>UPDATE: This is my serializers.py</p>
<pre><code>class JokeSerializer(serializers.ModelSerializer):
class Meta:
model = Joke
fields = ('id', 'joke')
def create(self, validated_data):
joke = Joke.objects.create(**validated_data)
return joke
</code></pre>
<p>Can anyone help me with this? Thank you.</p>
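The mismatch is visible in the two calls: the view reads <code>self.request.GET</code> (the query string), while <code>self.client.post('/api/jokes/add', payload)</code> puts <code>payload</code> in the POST body, so the serializer sees <code>joke=None</code> and returns 400. A hedged sketch of building the query-string URL the view expects (path and parameter names taken from the question):

```python
from urllib.parse import urlencode

payload = {'text': 'Test Joke'}
# Encode the payload into the query string, since the view reads request.GET.
url = '/api/jokes/add?' + urlencode(payload)
# In the test: response = self.client.post(url)
```

The cleaner long-term fix would be to have the view read <code>request.data</code>, so that a normal POST body works.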
|
<python><django><unit-testing><django-rest-framework><drf-yasg>
|
2023-04-13 14:33:23
| 1
| 664
|
Luis Bermúdez
|
76,006,674
| 4,036,532
|
Get from Pandas dataframe column to features for scikit-learn model
|
<p>Let's say I have a dataframe that looks like this:</p>
<pre><code>import pandas as pd
import numpy as np
vectors = pd.Series([[1.0, 2.0, 3.0], [0.5, 1.5, 2.5], [0.1, 1.1, 2.1]], name='vector')
output = pd.Series([True, False, True], name='target')
data = pd.concat((vectors, output), axis=1)
</code></pre>
<p><code>data</code> looks like this: a Series of lists of floats, and a Series of booleans:</p>
<pre><code> vector target
0 [1.0, 2.0, 3.0] True
1 [0.5, 1.5, 2.5] False
2 [0.1, 1.1, 2.1] True
</code></pre>
<p>Now, I want to fit a simple scikit-learn LogisticRegression model on top of the vectors to predict the target output.</p>
<pre><code>from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X=data['vector'], y=data['target'])
</code></pre>
<p>This does not work, with the error:</p>
<pre><code>ValueError: setting an array element with a sequence
</code></pre>
<p>I tried casting my vector data to an np array first, with</p>
<pre><code>data['vector'].apply(np.array)
</code></pre>
<p>But this yields the same error as before.</p>
<p>I can get it to work by executing the following:</p>
<pre><code>input_vectors = np.array(data['vector'].to_list())
clf.fit(X=input_vectors, y=data['target'])
</code></pre>
<p>But this seems quite clunky and bulky: I turn the entire pandas Series into a list, then turn that into a NumPy array.</p>
<p>I'm wondering if there is a better method here for converting this data format into one that is acceptable to scikit-learn. In reality, my datasets are much larger and this transformation is expensive. Given how compatible scikit-learn and pandas normally are, I imagine I might be missing something.</p>
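A hedged middle ground: <code>np.vstack</code> can consume the Series directly and stack the row lists in one step, skipping the intermediate Python list from <code>.to_list()</code> (one copy is still unavoidable when the source is a column of Python lists):

```python
import numpy as np
import pandas as pd

vectors = pd.Series([[1.0, 2.0, 3.0], [0.5, 1.5, 2.5], [0.1, 1.1, 2.1]],
                    name='vector')

# Stack the per-row lists into an (n_samples, n_features) float array.
X = np.vstack(vectors)
```

If the vectors are fixed-length, storing them as separate float columns (or a 2-D array) from the start avoids the conversion entirely.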
|
<python><pandas><numpy><scikit-learn>
|
2023-04-13 14:32:58
| 2
| 2,202
|
Katya Willard
|
76,006,647
| 4,253,946
|
Payload clarification for Langchain Embeddings with OpenAI and Chroma
|
<p>I have created the following piece of code using <a href="https://en.wikipedia.org/wiki/IPython#Project_Jupyter" rel="nofollow noreferrer">Jupyter Notebook</a> and <code>langchain==0.0.134</code> (which in my case comes with <code>openai==0.27.2</code>). The code takes a CSV file and loads it in <a href="https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html" rel="nofollow noreferrer">Chroma</a> using OpenAI Embeddings.</p>
<p><strong>CSV</strong></p>
<pre><code>COLUMN1;COLUMN2
Hello;World
From;CSV
</code></pre>
<p><strong>Jupyter Notebook</strong></p>
<pre><code>#!/usr/bin/env python
# coding: utf-8
get_ipython().run_line_magic('load_ext', 'dotenv')
get_ipython().run_line_magic('dotenv', '')
# ### CSV Load
from langchain.document_loaders.csv_loader import CSVLoader
csv_args = {"delimiter": ";",
"quotechar": '"',
'fieldnames': ['COLUMN1','COLUMN2']}
loader = CSVLoader(file_path='./data/stack-overflow-test.csv', csv_args=csv_args)
# ### Load in Chroma
from langchain.vectorstores import Chroma
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings.openai import OpenAIEmbeddings
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
vectorstore_kwargs= {"collection_name": "collection"}
)
# This is the line of code that is recorded with the "packet analyzer"
indexWrapper = index_creator.from_loaders([loader])
</code></pre>
<p>If I check the request (<a href="https://wiki.wireshark.org/TLS#tls-decryption" rel="nofollow noreferrer">using Wireshark</a>), I obtain the following:</p>
<p><strong>Request</strong></p>
<pre><code>POST /v1/engines/text-embedding-ada-002/embeddings HTTP/1.1
Host: api.openai.com
User-Agent: OpenAI/v1 PythonBindings/0.27.2
Content-Type: application/json
{
"input": [
[82290, 16, 25, 76880, 82290, 16, 40123, 17, 25, 40123, 17],
[82290, 16, 25, 22691, 40123, 17, 25, 4435],
[82290, 16, 25, 5659, 40123, 17, 25, 28545]
],
"encoding_format": "base64"
}
</code></pre>
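Two things stand out in the capture: the <code>input</code> lists appear to be token IDs (the CSV rows pre-tokenised client-side), and <code>"encoding_format": "base64"</code> makes the reply return each embedding as base64-encoded little-endian float32s rather than a JSON list of numbers. A hedged stdlib sketch of decoding such a payload (assuming that float32 packing, which is how the openai client decodes it):

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Decode a base64 string of little-endian float32s into Python floats."""
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Round-trip a small known vector to show the packing.
vec = [0.25, -1.5, 3.0]
encoded = base64.b64encode(struct.pack(f"<{len(vec)}f", *vec)).decode()
assert decode_embedding(encoded) == vec
```

The values above are exactly representable in float32, so the round trip is lossless; real embedding components would round to float32 precision.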
<p><strong>Reply</strong></p>
<pre><code>openai-version: 2020-10-01
Content-Type: application/json
{
"object": "list",
"data": [
{
"object": "embedding",
"index": 0,
"embedding": "YX/vu7kSBzxjbeW7l7wkvLckEb1r3KM8i9ZCvLD+a7xp0v67d7kAvehysDysNao8y++0u/B8pjyQ+0e8FKXPO0PPCT13Cx+8ZMkoPOlpq7zjBJK8ns+fPAy3CLz2Ktk75h/yOuEWnDwg1Eo8sVqvvPch1DxIj0Y7lc4uu559gbzeMSu7apMKvWXAI7z6aw084B8hvHzC1jsGmwg72MwRPM7B+zxA6hg8+2KIu/eGHDsdAoS8QnPGuq9suTzHE8m76A1ou7V/NLpCKq06ykrYPE1irbzTlOK8EdMIvGFJgLy4UXu8HVSiOwpkyryDOpq7NqmTOjrXnbpjiZS8ufZXPO48Er30WJK8ACNFvClwc7xfW4q8EwBzO4xx+jy90sM8AhE7PPBgd7zuPJI8dvj0uyhDCbz/kJK7vdLDu9WVgrzsThw8VkfvvJmXcLzocrC7+M+1PLL/Cz1frag6hmiku9nDjLyVfBC8LfpAPC/oNj2hT8g8tTYbPOkE4zt1y4o8ChKsuoJDHz3HE8k6suzhvCiMIjyy7OG77KC6vIQxFbz24T+98SGDOzWyGD0zxCK6B3/ZPFW1vLxEs9q4PGDLPCmDHTy8Lec6YeQ3PCjewDsPic88AGzeu/YqWbxemv67NHKEuUHO6Tzd1ec6o+sfvV5kjzzGgRY8cji4u9TwpbwDUU88FuXjO22uaj2UhRU8+cawPDRWVTxtZdG7jWj1O16afrzclVM8GiaYvIR6rrwwMdA8LlaEPJqO6zvF3Dk6/n3oPL6AJTsyzSc7S8ZVPLhR+zs2lum71eegPH3VgLxFDx48VVB0OxwLibstX4k8tdHSO/40zzwqzDY84c0CvOvyWDn3hhw8JacxO0j0Dryoq1w8xJMgPTVNULxCxWS88SGDvJhOV7wyaN+4vTeMPJTXs7urkM08o6KGvGF/bzwrw7E7K8OxvDWymLzclVO84qjOu+WNP7yvGps7of2pPJmX8LuEMZW6mZfwPHKB0TttZdE80HmCOn5nszvJZgc9m+ouPCLCwDtDISi/GNPZvCX5z7uB+oW8lrJ/u94xqzx/DJA6Hy/uPGvAdLsbHRM8lClSvEO83zspHlW7OtcduBfvCL3Rpuy8pH1SPJBNZjoYOKI8suzhO9vn8buezx88QNduPLGjSDxBzuk7/yvKPLn2Vzwpg528UT6ZN2qTCj3Fipu8RgaZPA7ulzzbTLq7OIRfPWXAozud2KS8iZ+zOnmUzDtkZOA8wWUWvOOfybx91QA8qv6avBiBO7y7Nuw8ndikPHQK/zlemv47Q7xfPEHhEzokZx06/3RjuiEd5DtRPpk8HabAPOFoOj3rqb+8Hy9uOgU/xTtX9dC7x2VnvJgFvryGusK83fGWvLlboDtj0q28l2qGuyO5uzwaJhi8P6oEvE5ZKLo38iy6R5hLvFg15TsbZiw8ptAQPbFaL7wCWlS86CmXPNWVgjulK7S6uUj2vMfKL7yvbDk8cuaZvJFXC72AVak7UtBLvAd/2Ts/8x08adJ+u7ARlrwKZMq8bNOeuyiMojxrd9u8yMGqOaM9vjypB6C80K/xO7tJFrxnSVG6jl/wOysMyzwyzSc7/I9yvLdtqjxAhdA8riOgvGUSQjynx4u8dsKFvKa95juR8kI7RMYEvVUaBT2l2RU8pM9wuyjewDve6BE9MoSOPJHyQry+Lge9bsGUPKJGwzxqLsI75jshvEVYNzubM8i8QNduPHPdlLyh6n87fHk9vAsJJzzpaSu8QsVkvEwG6rvvMw08I7k7vEDqmLwuVgQ83PqbPM7UpbxaI1s8LUzfvIIw9bwKEqw72RWrvDIfxjyVfJC74RYcvBtmLLxcEVE8TBmUu5BgEDzsoDq8VL5BvBfcXrxUIwq92rqHO3QK/ztNEA+9+H2XvHo5qbtDzwm719WWu8cTSTyudb478VfyvKYiLzww3zG7BpuIvPRYkjuiRsO7m4XmOU4HCrzr8lg8pH
1SPJep+rokZx08h1+fPC+fnTsGmwi74LpYO2USQjz3hpw6FpPFu0CFULwAiI08ZrcevNZ50zsg1Mq7VQfbO0EzsrvESoc8ITAOvbtJlrvyTm26E2W7PEy90DzSVM47fR6aO3wnH7uMcfo8L+g2vUSzWrxHRq28fdUAPbP2BjyEei48Q88JvfBgd7yRRGG82vB2PMXcuTyma8i8EsqDOzCWGDzmhLo7OeCiPLvkzTtTYn48dWbCuxWcSrzQFDq7KdW7O+tXoTw5jgS7pcZrvGF/77u2di+66bvJPAFjWTwpgx27IDkTPcw4zjwIyHK7J5WnPJgFvjtB4ZO8T/4EPQWkjTsH0Xe7yl2CO/chVLyGukI9e4LCPEsPb7w4l4k7eObqvO7XSTsRd8W8evAPveVEpjx2FKS8rIdIO8GurztHmMs86/LYPIxxejwrXum76hcNulf1ULsl+U87j2mVOk/rWjzq+925LGiOvBblYzwuQ1o8Yi3RO7kShzwhHeS8HUH4u53YJDxKz1q86w6IvPFXcjyPVus83fEWvNKwkbwE/7A72vD2PGJ26rkrcZO8w5ylvBT37buefQG87KC6OvRYkjxrd1s83N7sOqUrNLtldwo8dl29O4LxAD0KEiy6y0FTPKOihjr89Lo7u5s0PMTlvrszsXg7jg1SPKTPcDyymkO8iZ+zvJx8YTzlKHe8VloZPS861btCxWS75DF8u74bXbsREn06h7G9O+1FFzx3C588jWj1u67aBrzQeYK8kfJCu44N0jxuCi68tySRPJhhAbxnSdE8DAAiu4Qe6zsFpA28Y23lvJKXnzsY5oO8coHRvNdwTrzIXGI8Ppfau2y3bzxY7Eu8+muNO8Bum7zE5b48NqmTPHRvR7xIj0Y8L58dPbkShzz6aw08dNSPvBMTnTyovgY9Tf1kPMFllrzG07S7n8aaPO9pfLtFqlW60MKbvGvA9Lu90sO8KhXQu0rP2jugBq87ZwA4PD28DjyFw0c8VhGAvBESfTzg1oe8kjvcu66+1zwI25w7dG9HvB6duzsByKG6EC4sPBIJ+Lzpaau7EIBKPK51vrwVU7G7JV4YvEEzsrsW+I28+2IIvQ+Jzzuv0QE820w6vFUaBbzxvLq8fMLWu7ckEb2FDGE8ikQQO1+tqLyudT68aZyPuqi+Bjw0DTw5TqtGOqOihjtB4RO9N43kPLBjNLt2+PS8hQxhvLqkOb2Q+0e8GS+dOyQCVbyu2gY8izuLvG7BlDtXo7K7eZTMu1RZ+TprioW8b5xgPP6ZF7wREv05TgeKvP+QEjp2woW8GS8dvKXZFbzNlBG8fwwQvH5nszsIyHI7Jp6su020SzxVYx48xUECvP6ZF7xqyXm86bvJuSWnsbwyun27vhtdPDIfRjwpHtW77UUXO1VjnrzbA6E6P+BzvPaPoTwJGzE89o+hvAAjxToZylS8vJIvvGwcuDutx1w8UYeyO/h9F7v32Do8CCQ2PFiaLb26Uhs7MJaYO4hWmjwjcCK8UdlQvGzTHrxJhkE6YKQjPJmXcDvzYZe8kk4Gve48krt01I87oFhNvNRCxDzRpmw8LUxfPLdtqrzqYKa7kjtcO7ySr7xoQEy8jruzvGZuBbt61GA8NpbpPAoSrDysh8g8/FmDvOGxUzwt+sA8VCOKu1UahbtO9N+8Fe5oPM2UkTzhzYI8o4bXPL4uBzyRRGE8/n1ovCLCwLq3v8g7Q8+JvDDfsbtN/WS7He/ZO8zmr7oNm1m8rwfxvKL0JLuntOG782GXvCrMtjyIVpo8dCauPA6S1Lw1n+483dXnvE703zuTjhq4LqiiOiiMIrxh5Le8JGedPK4QdjzDiXs84wSSO7mtPrzq+9053sxiOx8vbrve6JG6aPcyuxjT2bylxuu7SdjfvE5ZKLwVARO9KTqEPBUBkzxH/ZM7gFUpPMhvDDoW5eO8Shj0O/bhv7wCETs99kYIvGmcjzzFd/E8+H0XvBommLze6JE80a
bsu0dGLbt0Cv88MY0TPaSZAbti27K8QtiOPCk6hLxXURQ8qv6avFojWzw5KTw8oAYvuxsdk7zwKgi9fdUAvTc7Rrt2FKQ7FvgNu4ZoJLz2Roi8JlWTvODWhzxUIwq9xhxOuzRyBDyBOXq8DFLAuz6XWrzUngc8gvEAvLPj3LsuQ9o8d+9vvL8SWLwQ3A08yVPdO3CvCr0wlhg8TlkovP904zzJU928qL6GvJJOBry9iSo8dQF6POkgErwpOgS7TRAPvN6DybyZ/Li8KXBzPD/g8ztgP1u8evCPvMRKh7tNYi291eeguzgyQTzsoLo7YKQjvZSFlbwCv5w8ouH6PMpdAjypB6C1rwfxO0395DyLen889U+NvI1odTwrXmm8KmfuO4vWwryEzMw8AlpUvG8BqbvIwSo8d7kAvfEhA7yy/ws8MDHQO44NUjzOwXu8RBijvBNlOz3WeVM80bmWOyAmabywYzS8i3p/PFVQdDyDJ/C7U8fGO0CF0LzSnec70Qs1O2wcuLxcyDe7oer/vDrXHbz+fWi8HVSiPBXu6Luzkb68oFhNvLD+a7wOQDa8l6n6PPBg97u9JOK6V1GUPIexvbxMGZQ6Ar8cPM6LDLwTE528gkMfvEah0LyYoPW8V1EUvK4QdjsoeXi6Yi1RPD5OwbtLxtW89uE/PJGgJL1QR568iejMOzMWQbwzX9o8WSxgPIUMYTvGgZY8HAsJPICeQjz+4rC8dR2pu2PSrTyTRQG8U2L+PJHyQjwyzSe8fHm9vKKrizz5xrC7U2J+O9WVAjyfDzS8VFn5u0gqfrw1n+66uj/xu2wcuDwg1Mq6BK0SvZ9hUruEHms8yvg5PEf9kzxst+87xdw5u73SQ7xynYA80K9xvGeuGbwiwkC7WjYFPCk6hLx2FCQ8Y21lvM7BezxQ4tU6okZDPFI1lLyjPT46Sw/vvMVBAryQqSm7r2y5u45yGjttypm7WeNGu3XLirxjbeU6iejMvLckEbw/qgS8uFH7PLvkzTzl8ge9k46avMgKxLxmtx67/FkDPLhkpby4ZCU8QDw3vHemVrz1Tw084qjOupGgpLpUbKO7Da4DPJBNZjvcQzU7U2J+PuY7IbxbGlY8K8OxPHzehTyx9eY8ndikPFS+wTv16sQ6dR2pPHdUuLxSfi08UEeePJ2Ghjr32Do8Tlmou4Qe67xlW9u8kE3mvIPV0bwabzE7Zm6Fu4dfH7wgJum8/9mrPDaW6bsUChi8O84YO0y90DyFDGE6QtiOvGBSBbwc+N47B9F3PAy3iLyDgzO8S3S3PIiouLsDUU88mqEVPEj0Dj13C5+8GXg2vMhcYryNKYE7K17pPLo/cbv7Ygi78XOhu8C3NLuwEZa827GCPGV3Cromniw9j1ZrPLs27DtA1+48WOzLu325UbyIDYE8OeCiO2vA9Dz/dGO8qL6GPM8dv7vJZgc7oquLvBdBJ7sjuTu8HK/FvGnurby10VK8voAlvFCQN7tTYn688mqcOtn5ezwngn08Da6DvB2mQDzhFpy8ENwNvFt/njvd8Ra99PPJu7YtFr2Fw0e7S3Q3PMbTtLx01I+7o6KGPFPHxrtQRx670p3nO/7isDwqZ248rxobOxES/TzwKgi8x3gRvFk/Cr0zFkE9yArEPDSo8zvDifu8nI8LPMYcTjzxIQM9/uIwOwaI3ju8kq+7/aIcvKHq/7v080m8X62oPBWcSrzJAb88gFWpvH0emjs7F7K8fwwQPN8oJr3Pb907Btr8PPBg97v0WJK71PClunGmBTyy7OG6ce8evWqTijzESoe79FiSOxaTRTzKXQI8S8bVO74bXTzHyi886w4Ivb03DLsRJac8zS9Ju0eYyzxrwHQ8t20qu0gq/rt61GC7ITCOu9DCm7zU8CW9jWj1Oycw37vPyyA8I3AiuyiMojwgOZO8i3p/vPBgd70JGzG853s1vCp6mLtSNZS8aZyPPA2ug7yfYVK8saNIvBpvMb6xWq88KE
OJPCfnRTwuQ9o7kum9u/ZGiDzz/M67LbGnO2RkYDuw/ms8c3jMvFMsj7xR2dC7x8ovPDN7Cby/Eti8Bu2mPGGbnjyEHuu7ykrYPErihLxqyXk8oep/uqa9ZjxuwZS6rxqbvF9bCj2SToa83jGrOZyPi7sQLiy8Rk+yPBAuLDzSsJE70lTOvIYfC73e6JE5Pun4u5epejz5D8o87UWXPF6a/juh/ak8pdkVPI9W6zwcSn06OtcdvANRzzxynYC8r7XSPG9TR70w37E8xOW+u49W6zxeZI88NA28vC7xOzyVIM27CsmSug5Atju+gKW88BdeO6zskLy/dyC8LFVkvDmOBLzrDog86MTOvKTPcDsTrtS8ENyNvNSL3TkKtui806eMPIyEJLvnMpy8gkOfPABs3jvPuPY7of2pvJrzszzuPBK80K9xPLxAkbxA1+48/T3UvCwDRjxYNWU8DAAiPAjI8jxfkXm8+5j3vLlbILxDzwk9NQQ3PO7XyTsKtmg8V6Oyu9NLSTu4UXs7Pul4u7TtAb0iJwk8eUKuPG0TMzw98v08mAW+PDpyVTzrDog7M19avN96RDy3JBE88rM1PZ3YpDsR04i8CCS2vKmi17xUbCM8qQegPE5ZqDyj2PW7RGE8ukPPiTxuwRQ8L+i2uxrBT738j3K8bIEAPQfkIT1qyfm8Bu2mPFsaVrxH/ZM8sfXmvHXLCj1EGCO8LqiivPtP3rxw+CM8VL5BPeEWnLo0Dby8xYqbulwRUbzHE0k8cK+KuVasN7yDJ3C7tIg5vT78IjuAnsI8xEqHvKyHSDx37288l2oGuxnKVDxKK568Qc7pPKG0EDrXcM68WOzLuuaEuryYs5+83EO1u4Qe67wsA0a83oPJPHbChbyzkb68HF0nPKM9PryXvCS86A1oPJBgEDzWMLo7RGG8OwCIDbzKXQK8LfrAvOYfcjwx1qw8RqFQPGj3sjxUvkG8He9Zu7BjtLyTRQE8nOEpPO1FFz2tfkM7CMhyuwDRJrwZLx28jHF6PKbQkLz7mHe8QsVkPLD+67wdQfg8cUE9vJuFZjtemv68gZW9vFqIozwAiA28DfecuouNqbyIqLg7IcvFvPxZAz0yhI48ZBtHPO48kry67VI8XQjMvJizH7zezOI8/5ASPeBxP72jPT68SPSOvMSAdjxWWpk7ZICPPN3VZ7w+TkG7rtoGvU5ZqL0OktQ81t4bvH4VlTuC8QA4WSxgvLzbyDvlKHc7mLMfvJmqmjvxc6G8uj/xPImfs7xYSI87TAbqvBIJeDvvzkQ7pDS5PDb7MTwbZqw8HQIEu2rlqLu07YG7CNscPOkE47wn58U8Z2WAvIwfXD19cLi8jHH6OwtbxTwzewm9dcsKvBAurDy4UXu8AIgNvT3y/bnmH/I8tO0BPBommDypULm8pr3mvGqTijyDgzO8FFw2u7wt5zxJIfm8LFXkPMcTyTpkgI+8UjWUumRkYDyhT0g8KnqYvPJqnLqQTWa8WiNbPJgFPrx754o8WJqtvPoGRT0MACI8kk4GPWXAI7wSCfi7h1+fvI1odbuI8dE8/I/yO8hc4rxIj8a7Gx2TvD2g37z088k80lTOPMjBKj1rJb05M8SiuiIniTrM5q87WxpWPXswJLwGNsC8DKRePO9pfDykz/A81eegvNBmWDt4SzO8gt5WvBZKrLyfxpo82rqHu6diQ7yQqak88rO1ua7aBr0/8x08V/VQPIODszxB4RO8a4oFPY4NUrwu8bu7xhxOvCaeLDx9Hhq9o9h1vAba/DpmbgU86mAmvIXDxzubmBC786owPB4487ya87M8iPFRvMbTtLuWF8i8k0UBPH1wuLwiJwk64RYcPIE5eryLen+7WSzgPDDfsbw1spi8a9yjPB8v7rnHeJG7hrrCPNvncbxCxWS866k/Op0hPjt5+ZS8M3sJPEwGajwcXac9VRqFPKuQzbwpgx08J5Wnu0/+BDys7BC8Q8+JPLXRUrznFu28NF
bVPPeGHLzIXGK8YyTMvAfkIb2iRsO7/uIwvP90Y7wSHKI7PMWTPNGm7Dur9ZU8mvMzPGnSfjwE/7C8B9H3u5iznzydIb46rr5Xu0LF5LyVfBA8YO08vIBC/7x372+7sP5rPKs+L7z16sQ6fdWAvCX5Tzz6BkW7hSiQvMV38Tyh6v+8RqHQO1E+Gby5SHa7BK0Su88dv7ytLCW7"
},
{
"object": "embedding",
"index": 1,
"embedding": "EqQ0u2k6ojybByG8nfyXvIQ3ubwKmKA87peEvJ400Dn6jYa8owWnvKTegTxAebc8qwOtu1paMjy8ygU8AIkHu+MD5TxyYlK8sijYOwijqbsOnqq81dIlOzFWBrzP16Q8z9eku/XKvbpAh0U8WlqyvCJ2ljwko0W7E9xsO3xjXbyt3Ae6B7zAvEaQ1LsmbpI6ue6lun48uDtBUpI88qscuqvnED1+PLg7gE1Lu6ZO8rsTwNC869dAO8MLzTxpmX+8dHNlO28nlTtwX805vQI+PPiYD73onAO8nFt1vJXwA7w9TIi8cEOxPAx/CbzKMPi6cxEDPLEayjsahh68iFlfPEZmKr1qgGi8HJcxPLRH+bpqZMy8ala+PB6oRDyoJ808cYl3Ox5wjLtyYtI7UpTkulCRX7xWcES6O6jgO9sFX7xOgMw8NlwQvcAhX7yFLDC7XTYSPcjOlTyaPNS6xfI1PF85l7w2ap6848ssPP6vLD1xKpo8BKutPDG1Y7toYcc8TlYiPLvjHD1JQoo8sEHvvLzKhbtvJ5W8amTMvO6zoLvSwRK94KwLPKBh/zwQoS88ysOMPMUAxLuwQW87lwGXPLEMPDx8VU+8FJmru8jOlTtHTRM8rC1XvGpWvry/6Sa8XaN9Ogu0PD0Up7k7Qm4uvePZOjyt3Ic7CJWbvCp0nLz7xb48ZUKmurgxZz2hAiI5DI0XPHUiFjk5pVu8y/vEPMv7xLzNDFg8WTCIvKsDrbyYR908zdQfPPajmDygReO7pEttOuvlTjzqnwg5H7ZSuqTegTyNDhq8hR4iPDVOgrtkTS87hQIGPM3UnzyiLMw8Q7T0u9Ym+jxGkFQ84dY1vNHoN7vMxhE8lytBO35YVLx9BIA86LgfPVVGmjtceVM849m6vAKoqLyJFh689JKFPO3o07yCUNA8F+L2uy9TgboemrY8Jb9hvMH6ubsioMC8iRYevLcHvTxKUBi7LqTQPMHenTs6jES8La/ZPKXsjzt4XdM8pRa6ux5+GjiNDho9xCfpPIE0tDvyuSq/+8W+vJMlNzvwqBe8LqTQOkJ8PDwD/Py5+JgPPS9TAbz4BXs78cSzu1GtezvXDWO5B+bqu2B/3bz1yr28bRYCPM67CLy+zQq7rfijPFJ4SLzD4aI8fyMhPKfvFDw3vvK7y9+oPCKgQDs6fra8LGkTvCJ2Fj3ul4S8pEttOyxpkzxhPJy8zhpmPfPVxjxzjHy8Jb/hu1B1QzugReM8pk5yvNIg8Lx6RLy7vRBMvGOQcLzBFlY8cy0fPJw/WTw0Zxk8gmxsO3clmzupDja6qFH3unMtnzvhAGA8De/5PO6zID08j0m8fzEvO/fbUDsmfCC7BeNlOmCNa7yKTta7VlSovLX2qTzeFvK7LKHLuyusVDzAIV+8OF8VO1pMJDtxife7mwehvCKgQDzFDtI8JoquPFxdt7tvJ5W6fEfBPKw7ZTxZnfM7PGWfvBPAULwPyFQ8p+GGvKZO8rz0vC88Hc/pu8Yq7jstr9k8cF/Nux/Sbrwko8W8ApqaPIcTGTzIzpW8amTMOzKAMDzvwS686gz0OxKWprx4XdM608+gPH0gHD2v7Zq8Bq6yvPvh2jxcXTc8pjJWvI0cKDzMuIO7aR4GvVyHYTw2eCw7wiTkvLzKBT2sLVc8JHmbuq4+arx9Eg499QJ2PBqiOrzctA+8R55iPKpUfDxLNwG76JwDvKTegbv8rKe7BdXXPCiNM7xPPQs8jQCMu1xdtzvD7zC8xMgLvHkMBLyRBhY8NbvtO+XOMbwP8n67zLgDPKEevrxPPYs84hz8vAqmLr1JoWc8DdNduuvz3LvZ5r26jQCMu0p6Qrz0koU7iRYePPHSwbsN73m8ue6lvCHV87zWxxy90iDwu73mITz6jYa8rxdFvJ/xDjxuhnI7oR4+vIMNjzwD0tI72divvN6pBjzkpIe8w/0+vOyiDbwgc5G8reoVPPHEs7x+Ska7bk
CsusA9+7sjrk48Smy0OLfdkrrF1pk7BeNlPByJIzpLNwE84wPlO0aCxrx2duo70L4NO4Js7Dsh1XO8wNCPO444xDqWRNg7N1EHvawtV7xLN4E8nQqmPNv30Dy12o080B3ru25OOrwaaoI8rwm3vDxzrbvx7t28E87ePCh/JTwsha88NJFDvFszDbzzx7g7nRi0PNymAT2pAKi8O0mDvKZO8rs1u+073qmGPKPpijzHA8k7C8LKOzKAsDv0oBM8vh7aO4sZIz1qVj67pQisvJEGlrxKUBi8KVgAPQyNFztca8U6TG85PDdRBz0ty3W8/bq1PH5m4juIPcO8LpbCPLXajTwN0928ew+Ju9bHnDoLtDw9g/+APL0CPrvlBmo8Yli4vA3veTo6jMS8CenvvNC+DTxWYra8yC1zvAKoKDgUfQ89c4z8PGZsUDx/FZO80dopPAnN07heUi68zMaROo5GUjwX4va6lGv9uwvQ2DtAldO7xeQnO4Mpqzw/oNy8llJmvERxszwskz28YSCAvGZ6XjxzjPw83/1au48DEbyxGso788e4PAKamrvD/T68J8LmvKRLbbxrL5m7Fd/xu006Bjw8gbs8+KadOkpeJrx/BwU8Hpo2PPfNwjxRMgI8Bbm7O6BhfzqNAAw8jkZSPM8P3bskeRs8fjw4PO/rWLrR6Lc7Jdv9vDlUDDuyRHS8tdqNPEp6wjthIIC8gxudO3czKTtebso7InaWu1md8zv50Mc8hlZau9XSpbvx7t06MpxMvClYgDzYvBM8aEUrPOcJ77vqu6Q8U0OVvHEqmjxFSg68RUoOvUtFD7xwQ7G87/nmvN4Wcrz4ph08llJmu30SjjxFm927YzGTunEqmryXARc8sQy8PPyembxEf0E5AIkHPdfxRjyVKDw7ZFu9vB3PaTwshS8888c4uxXD1bzv+Wa77KKNPKBFYzxcT6m8XlKuOu32YTx0ZVe8jwMRPCBlA7yV8AO7az0nPEee4jvKw4y78qucvAH59zsQhZO7B7xAvKvnED22zwS7oeaFvJX+kTwL0Fi8CoqSPCpmjrxAaym8mwehOwK2trzt6FO8CLE3vDFWBrySTFy84QDgvEJ8PDxnlvo7NIO1vLZKfrzHEde8Zl7CO3OM/LzH9bo8SIXLuwP8/LxZnXO8aR4GPHs5szxoYUc8A/x8u8DCgTsIhw29k/uMPAe8QLtlk/W8ts8EvFhXrbzN1B+80sESu4Mbnbr6t7A8gl7evENHCTz8kIu8EqS0uZg5zzvWJvq7WZ3zPI8tO7sFx0k86p+Iu5P7DDqYVes5gGnnOAXj5bsD/Py7zfA7vKkcRDzqu6Q8+AX7O7MBszxRrfs75qcMu6fvFLvd3jm8YEclPLzKhbwCqCg7CLE3PGxn0Twdz2k8OmIaPCaKrryNAAy7qlT8vFZiNjzT3a4797+0vIkIEDyKXOS6GJGnu0tFj7tIhUu8HqhEuzZcELxKUBg8RFWXPHclG709TIi7GcnfOzFWhjyTCRu8kmh4vH8HhbwlsdO6MIs5PLXom7sKihK9qFH3vK0GMrxDtHQ8mwehuklCijwWcoY8glDQO/6TkLwof6U6II+tO11EoLwmmDy80sESvN6phjzivR48jRwoPdgpfzzQHes8C8LKuzJkFDwQkyE8nfyXO1Gte7wGkpa80sESPFBntTwGhAg8GK1DPP3ybTy6NGw8GmoCvIMbnTqJ+gE8nkLevPfNwruV8IO8hlZaPJ0YNLz/9XK83+E+uyfCZrqEb/E7ne6JvD5opDxIWyE8AowMPdTEl7zzx7g8KHGXvHBfTTwQhZO8MGGPO6RLbTsYnzW8rwk3PPqNhjxkW708PUyIu0lCirvRzJu70sGSPAijqbyUa327swEzusPhorzP5bI6yMAHvWJKKrxDRwm9AcE/vBiRJzwWcga8KcVrPJ5eerzctA+9ZSYKu1yH4bw8j0k9/JCLuqfhhjxzLR88hnJ2vHJwYLyZ9o08fE
fBu35mYryy1wg9DKmzPBylv7sWgJS8pN4BOtiuhbyx4hE8jRyovEBdGzyeNNA89pUKO7JE9LxpmX+8DJslvVszjTvnw6i6PnYyvOUGarwUmSs7O6hgvPUCdjydJkK9kyU3PLf5rjxFm928280mvMy4g7tBUpI81CN1vExhK7yhELA8BIGDvDhtI7wCqKg7Cc1TO6JI6LyT+4w7VF+xujyBuzwwfau8ZxuBvIcFi7toYUc8II+tPFug+LtAebe75wlvvLMBs7yQSdc6+AX7PB+20jvN1J+7/fLtuyx3obsdz+m8BePlvNQjdTwpxes7qfKZvGhhR7yk3gE8fmbiPGCNazuy1wi5YHHPO8fnrDyxDLw8BoQIvG0yHrvEJ+k6stcIPJv5Er2YVWs8seKRvF5uyruyRPQ7EIUTvf/18roJ6e87BpIWOxxtBzvA0I+8ayELvKYy1jxhLg49cF9NPMnqMbxLRQ+8JnygPCXbfTsN0906S6Tsu2ZQtLzupRI8NGcZO+XqzbwD4GC7dlrOvOSyFbp4Qbe76dQ7PNy0jzu78aq8fS6qvJfzCLqATUu8lSi8PPysJzq92BO8vxNRPLDUg7wHys48VG0/OwagJDv4pp28rjDcu55e+rwvst68PIE7vOvz3DwWgBQ8Vy0DPJsVL7wF4+W8QpjYuta5Dr2CbOy83hbyurYuYru6xwA9MG+dPLgj2TtKUJg8Zl5CPIJQUDvf07C8veahO8Ik5Dw/Tw280wfZPAvs9DvWuY689rEmvIUeIjwWqr47xQ7Suvvv6Dut3Ae7oEXju9LBkrzWq4C7iSSsuw/I1Dznw6i5oizMvFyHYbtoUzm6owWnO+XOMTwGhIg7SaHnO48Rn7xJQoo84+dIvG0kkLqqKtI7kEnXu4xfaTs3otY7hQIGPAvQ2DwSiJi7zfA7PGh947vKw4w688e4vOvz3Lp7K6W6XlKuu5vrBLwYdQu7fH95vPyQi7y95iG8w9MUvf6TkLyx8B+8hjq+PI8DkTw6fja9tdqNvM/lsryOYm67zrsIPAXj5bsty/U6S0UPvM0M2LyKXOQ7fH/5u/6FgrpfKwk8eE/FOy5smDt6RLw8WZ1zPqcZv7wxtWM8P67qPAm/xTw/oNw8SzeBPOHy0TvJ+D88wD17POHy0bxZnfO5enx0PKgnzTuCbOw7MoAwPJH4B73+hQK9aSwUvYchJ73ouB+8qjjgvPPj1Lx4XdO8MVYGPAyNF7xVOIy8wCHfOc7JFj2sO+U7hDe5vJRPYbyEb3E8DJulPBvMZLxsZ9G7pewPPSHV8zigReM81w1jOjRnmTwZu9G8JmCEvPgF+7uv3ww8yxfhPHM7rbuZBJw72+lCPDpimjvKwwy8C8JKPDSRwzy7G1U9u+OcPKg1WzxPqnY8VZfpOoJsbLxwQzE8VFEjvByJozzV4DO8GpQsPOCsi7w7xHy7+AX7vPSSBTuhHr66z/PAvBHZZ7wh1fM6nD9ZvLomXrz/9fK8fRIOPOAZdzyUM0U7MbXjOn0SjjxBsW+8G8zkuks3gbtKUJi80dqpvCxpE72gReO6cxEDPFU4DL299C+8XyuJPOnwV7zD4SK7PmikPPimnTxvGYc8/9lWvFCD0TydCia8HG2HvAfY3LwJzdM8zMYRPIJCQrzv+ea8S0WPO1ea7jvJBs47RmaqOzJkFLr12Eu8VnBEvHMRg7yiSOi7hRAUPPvh2jrWx5w8yC1zvCpmDjyLC5W8rfgjPHBRv7z50Ec8hSywPPSgkzg0Zxm8YEelOy5smDvv3cq7pQgsvZEisjzAwoG8fSAcPNi8Ezzp1Ls7o+kKPA/y/jsAiQc8/J4ZvSXb/Tr4ioE86gz0u+TAozz0koU8sx1PvM3iLby3FUs8Q7R0vCPY+Ls7qOC8czutvLcVS7yGVlq7IpIyvN/vzDysO+W81sccvIxDTb1Klt67s/MktyyTPbzh5EM7u9UOPdymAbyoUXe8rQayvE5kML4D/Pw8EJ
Mhu48DEbw5VIw8RGMlPIsLFbt3JRs8vxNRPD6EQDwkhyk85fjbvECVU7ywQW+8pxk/O40cKLwiaIi8vxNRPCqCKj1RQBA8/KynPC9TgbwEq6082eY9O93sxztwQzG8MG8dvMHsKz06jMS8iTI6vOUG6rrB3p28xxFXPH5m4jsT3Gw8hDe5uw/y/rzqrZa8yC3zu/XmWTyaWPA8GbvRPCSjRTxAXRs8EpYmvByJozyKXOQ7GJEnvEqI0DzUI/U7XHnTPM7JFr15GpI8tSDUO4Bb2TyuMNw8EKGvuztJgzzusyA7blxIOqw75TvOu4i8/JALPG5cSLxob9W74wPlu9H2xbshq8k8hR6ivNqxijxCfLy8iQgQvIZIzLvyjwC9aoBoPNfVKrtxHAy9EoiYPJUaLrt7D4m5vxPRvLPzpDy95qE7NbvtO5cds7ylJMg8fH95vIBNyzy99K+8lfCDOl6K5jxMi9W7xdaZvJXwg7ySTNw8Dp4qPNvbNDs4baM8hnL2u3Z26rpch+E6EqS0Oh3B27wFx0k8n/GOPIn6gTzR2ik8Fd9xPDZckDxpmf+7RHGzvGJ01DzZyiE8YHFPPe/5ZjxXLYO74KyLvEW3ebzjA+U8n/8cPNS2CT2IWV+8IpKyvGpkTLu57qU7VFEjPGeWer3ZEOi8N1GHPO324Tzz/3C8GoaePL3mobzPAU88rj5qvIpAyDyQSde7ydyjvJ38l7zB+jk7Umo6PewPebwYnzW8ikDIO9EE1LxwX808qFH3u5g5z7vUI/W73qkGveKvEDwlsVM8rj7qvHEcDD2fGzk8Vy2DvN361blEY6W8LqRQPEVKDrwF42W8oizMOl02Er1UX7G7++/ou+HWNb1TQxU85KQHPSR5m7xefNi8QopKPAx/iTvexSK8sx3PPOcJbzw+drI77KKNu1CRX7xvNaO7RoLGu3UwJDwbzGQ85+3SPDKOvjyuPmq7nRi0OxZyhryk3oG7DoIOO7UEOD1JQgo7E8DQO+icg7znCW88VZdpPJ5e+rntzLe8vh5aPGpWPrxzEQM9qtmCvKkAKLtRMoK8s/OkvCPY+DsOdAC6oRAwvP+9urzN4q27HnCMvA50AD1KekI8cSqau0KY2LtLRY88cYn3vCCdu7yfG7k82fTLPJJo+LxGgsa7FoAUvHpgWDyWUuY7xr0CPRB3hbyIWV+8ZmzQvBCFk73p1Ls8PmgkvBXf8bs3vvK7Umq6vFKU5LswfSs708+gvMPvsLvR2im8T6p2PCfC5rsskz08f4L+vAnNUznusyA8akiwPIpOVjwggZ87lRouO7UEOLytIs67bINtPFszjbx6Uko8DoIOvcrDDD17HRe8kyW3uznB9zzD7zC9xdaZOixpEz1PPYu8sihYvHsdlzpgR6U8F+L2O5RrfTyqKtK8dnbqvFkwiDznCW88ikBIvM3irTySaHi8ZTQYPEFEBDyWUma6+IqBvCW/4Tw2XBC8CelvvJ/jALyyKFi8ixmjuz9PjbsUi508MdH/vKYyVj3UI3U8/J4ZPcLFhjsaojq8W0GbvM7JlrzIzpW7fkrGu7Io2Ly54Jc74cinvBJ6irwkh6k8iFnfPH8jIT0qZo67u+OcOwXHyby9Aj48OHsxPevzXLwB3du8lE9hOrBB7zxHuv48zhpmvDaGOrx7HRc8rwk3vPgFe7xuQKy71+M4vIg9QzwZ5Xs8lwEXOsIk5Lw5VAw8+9NMPPXm2Tv+oZ68tQS4PFR7TbyjBae76p+IvK/fjDubByG9YS6OvD5opLsH2Nw80dopvFFAkDyn4QY8FpwwO/6FAr0Fx0k86gz0OnJiUrsYgxm9YGPBPAe8wLxFSo66qirSPCSjRbxpmf87SlCYPMoweLxlk/W7dlrOPPCaCbx+WNQ4pk7yO8oweLxKbLS8SIXLOwXHSTxIhcu8++HaPMjABztbQZs99rGmO94W8rw0dSc8vRDMOS56pju2LmI8EJMhPIn6gbzxxDO97d
pFPM/zwLpOgMy8lEHTvIg9w7wejCg7oixMvPqNhrxHTZO7I7zcPB/S7jtMYSs8nw2rPM/lsjzWqwC9VG2/vDuoYDyj6Qo84r2eupXwA70f0m484Bl3vPqNBr0Ub4G7PloWPKfhBjp1Ipa7jx+tvDB9KzzioQK8ayGLO3JURDwWqr68rDtlvD+g3DrIwAe8NFmLvNQj9bzpxi28"
},
{
"object": "embedding",
"index": 2,
"embedding": "gvwZu7VcFTt68FG7NauAvG17/LxSh+w8GzwjvAQUyjkE8kC8Lz8BvTRuYTwItwQ9OhQOvPV5eTyfl2W7j9QYPEDjtjwKwem7P5+kPMAxorxa9t284PHoO4vOtLxfX2s8NjAzO2PI+DwWsYw878zZvJT8hTzJoJM7/kQhPJgJXbswYQq8jZCGvLC26LuWIQG7eEkWvCeW4jvvaTA7k37GO5QeDz2EQCy8dchkvPKwNDktAmK8iqyrOz9ehDzbwoi84CsWuwV3c7xGkFY8jrKPPKwMu7zklKO8L6Iquu0lnrtwoPe7qAZXPGCcCr0Urhq7EzBbPDYOKrkbPCO8Ra/tupimM72kAHO8Cz8pu8mgk7ybig68/yWKO7+R2TxZDgI9wZRLu1TLfjy+y4Y8PN1SvLWdtbxttSm8XbgvPBgXqLwvBdQ7DqVEvPkcNLyczqC7wVOrPF9fazsiyqu6YKP9PIhoGbzDVp27LQLiPHisPz0BzcU8CsHpPI118DsR7Eg8z2+8Ow9FDT0hTOw8OfIEvabCRLyZK2a87SUevTIqz7tIk0i9QOO2PL0MpzyedVy8gvwZPK+U37yus/Y7aEwcPejbpzy/UDk8+PqquLP9bLwpmVQ8uaOZvJqpJb3UNIC8K//vuCGGmTxDCLK5bPZJvYipOTuh1IQ8pFypvGMkL7x5a588JM0dO4XFXj37w++7odSEPJdDCrwIW867x1wBPbTe1bx3ira7jrKPvCjTAb0EFMo84lARPNdgbjwlMMe77KfeO1zXRjzWmhs8m4qOvDx6KTzT9+C7/qfKPEx3I7wE8kA86t4ZvFzXRjwT77o8nfCpO124rzwGtJI8IYYZuy2fOLzUNAA8eWsfOWxSALziLgg8W3QdPUSN5LdKM5E69dWvvBKrqLw8OQk8AA7mOzYws7yQWcs8nTHKOeyn3ru8bN47WNFivKBWRbwTMNu8+X/dvKRcqTzJwpw8iEaQPL8PGTnGH2K8EYmfPMI0FLybLtg8wfCBOwJLBbsmtfk8hyt6u1bpBjxmSSq/VWvHvIHaELw3teW8iAxjPNVWCTwMBXy7mgxPPMqIb7xnrFM8kdeKvKJZN7r3GUI85FMDPOY737xZssu8CJzuOwPQN7srOZ08HPsCPaXh27wSq6g8re2jPAZY3DwmdFk7CdkNPKHUBDq1Ogy9s/3suxJPcjyf85u8ROkaPHOE0jxCaGm8rAw7PbVBfzrXvCS7U6KCO3YFBDwYWMg8/AAPvPDu4rzvKBA7hednPB+Dp7zYewQ8Xx5LPFgtmTv11S88e26RPNFyLrt3y1Y7GXrRO2kS77vm+r48fRXNPCvd5jzNSsG8UwUsPIqsK7xDSVK8kXvUOYfqWbxOepW8N5Pcu5pN77sIt4S8Iy3Vu7kGwzzfbLa8CJxuPBHKv7p04Ig8/yz9uw9FjTyrhwg8aW4lPZKd3bsr/288iKk5PFMFrDu8bF478Iu5vPErAjuC/Jk8XJYmvJzOIL3cR7s7YWJdvBGJn7s4MyW8N+8SPIamxzoBjKW8kd59u36TjDwx5ry8ZwgKPHG7jToGWNy8nzQ8Ow2Du7yjOiA89RbQOnlrHz3hb6g61HWgvI/UGLucrBc9y0fPvBndejswJ1272SLAvDKGBTweYR47uoQCvZimMzxr1EC7Frj/OoyvnbyaTe88wRKLPDZxUzoC78689xlCO90GGz3frVY7GzyjvJL5kzvzbxS8nTHKPGDdqrsnluI601MXvJ6vibnyjiu8sni6u0oY+7vCGf4747O6O0yZrLz0Nec7hkMeu2EhvbwyhoU8YySvvO7r8Lw9QHw7zo5TvDYOKrvWeJK7z/H8u5ajQbp/1547CsFpvDuZwLpfu6E7PVuSvCAI2rwpmVS9mGWTvBOMkTzfCY28evBRO7/tjzux8wc8Jq6Gu6JZNzwWVVY7msuuvAic7jxnzly77KfeOOMW5DqivGC8BTbTPP4DAb2NdfA5S7
hDPPCLuTux8wc8A48XOwl9V7yoxbY7p+RNPMYfYjyTfsY7uot1u4ZDHr2Dwuw8DAV8vHUkGzuNdXC8PDkJu3QCkju8juc8o51JvVbweTxlBRg8axXhOyGGGT3HnSE8QaIWvGfO3LuPN8I85pcVvZpohbxM2sy8qUrpPLHzhzw2MLM8uCXavEzaTLxjwQW8+F3UO+bYtTxclia88O7iuwMz4TuWBus8KBQiPPKwNDzLowU9JKuUOiZ0Wbz5uQo85Xz/uzONeDyIRhC8EGeWvNY+ZTwP6Va7t18HPaXhWzt+tZU8uCVaPKS/0jzciNu6y0dPPN1pRLzUdaC8c4TSPHYFBDyvlF+8ueQ5u10bWbyoKGA9p6OtPOJQEbw3k9w7EGeWvJ6viTyz/ey8Dyp3vH3UrDzO6om867+CvMe/KjxkRrg8iu3LPNrhnzsdPxW9Y8EFPESNZDwwYQq8hyQHu0TpGjw41+47MGGKvB7ER7szqA67D0WNu2CjfTytrIO8hyt6vH1xgzz+AwG7pBuJPFduOTxnCIo84PFovDx6KbtrcZc8nnVcPCoXFLyzN5q7kFnLvNkiQDwyKk88vbBwvLhm+jv6YMY8UuOiu3149jrfz1+7wVOrPFRJPjze5wO7VQiePJC1gTxW6Ya7sBIfPMESizqqyCg8XJYmPROMETymJW48KBQivLtKVTy96h28N1I8PHRDMroSDlK8UiRDO7xs3ruetvw7IGSQuieWYjzgjr88dOAIvH6TjLzjFuS8u0rVuktVGj3HXIE71/3EPG6WEjy9DKc8pABzvBl6Ubu7pgs7EYkfvU27tbqt7SO8s5pDvIJfQ7vXH848Spa6vFbphrqrh4i75ti1vDoUjryMrx08q477OnyyI7u34Ue84tLRPIsx3jzldYw8EYkfvEdPtjz7w288LwVUPANtDrxjyHi8Rm7NPK2sgzvJoJM7O5nAOFTmlLu8K747Qmjpu2CcCjpp0U48uot1O4C4hzsPRY074bDIvF7auDs5ls689XIGvKK84DyoxTa8hefnvIZlp7v73gU8wthdPKYDZbxqT467DqXEO/l/3btfeoE73u52OgsdILyR3v27qUrpvAOPFz1CivI60XIuvBBnFrxte/y8KTarOVOiAr0pmVS6BLEgvEWv7bx+tZW8Ri0tPEDjtjtJEYg8YuAcPCTNnTkgZBC9QidJO/xBr7wp2vS8MMSzO30Vzbw6FA693otNPMIZfrwD0Lc8G33DOvg7Szwh6cK80RZ4OgAOZrvmO1+889K9PMFTK7jAMSI878xZvDLHJTxP/0e80C6cPCGGGb2qK9K7EKg2vDWQajylPZI8QMEtO3mNKDvyTYs8QCTXOyfymLsgxzm7tToMu4uNlDp9cYM7yWZmuVtSlDz3GUK8flnfO1ECujsyhgW8BXdzOwVwADzqQUM8Vo1Qu2kSbzwGtJK7X1/rvE1YDLxsUgC7zQmhPAi3hDolMEe7AKs8PHFfV731coa7XDpwPFjRYjxTxAu8A9A3vEGiFrtDpYi7lF8vOxmcWjxe2ri8m+23vBKrqDppLQU6EexIO89vvDzTU5c89dWvPLGX0bx0QzK8B9YbvGSpYbyPmuu80/dgPDbNiTynoy08U2jVPF7aODzWPuU8cPwtvEbKAzxrcRc83Ec7PPuCT7xMmSy8Frj/OjNM2DwFcIA7yOGzPHWHxDyR3v07wbZUuo4VuTvtyWe7DMTbu056FbweYR47+duTPPBKmbtZFfW7SbXRvOV8f7xEa1s8z/H8vMR4pjxPYnE8d8vWPLUAX7y/7Y88nrb8vJfnUzxc18a847M6PMlEXbwOZKS8TnqVPB2ivrvKiG88FdCju6rIqLzCdbS6rnJWO+SUI7v/JQq8NauAPH6TjLxPYvE5F/UevZJcvbtf/MG8lPyFuywahjwnVcI5iYoiPDAnXbxGygO9BvUyPM0JIb3Gexg9T76nup513Dxy/588hyv6O7oozLtKdDE7bx
vFO0iTyLyaqSU8ZIfYPPcZwjrW27u8g4HMPA/pVrw+vjs7YN2qujoUjjzA1es7DGGyO1bw+bzeKKS87qrQvNFQpTxBBcC7rrP2u2SHWDw7mUC8ywavvMP65jxG7Iy8QoryO/9mqjsiC8y8SHE/vK6zdrx1h8Q8GXpRvE++J7xwoHc8HePeu1bpBrrJA7266P2wu8nCnLyiWTc85jvfu+MW5DxCJ0m89pSPvHHdFrojiYs8T5wePEyZLDtemZi8za3qu6RcKb1c18a8lUCYPCnadDv+p0q8r5RfO8znF7t26u27p6MtvBc2vzzuRyc817ykvMkDvbsm0A88y6MFPb1NxzoEFMq7pl+bO6M6oDx2RqQ8ODOluzdSvDsxpRw7KVg0OpD2Ib3ikbE8ROkavJdDirtZsss70pS3vArBaTskcee7jPA9u3UkGzwcXiy8fzpIvMqI7zt+kww9rs4MPGtxl7tlaMG86oJjPDx6KTwHOUW81DQAOwJLBb2lPRI8DYM7O/KOq7xfeoG7LZ+4vI7zrznuiEe8G31DPBR07TsUEUS8hGK1Ok+cHrxMNoO8/4gzPK1QzTsqF5S83u52PLqLdbu2v748XXcPPNP3YLymA+W8a3EXvOe5Hryf85u8qgnJu1iQQjxpLYW8qaYfPL0Mp7xAgA29apAuO6Wgu7xw/C288c9LPGktBbxUit48ES3pPOArljmU/IU8nA9BvBJqCDzCNJS8AksFPEhxPzwRiR88piXuPIPdgjqV5OG8FrEMvGPI+LkD0Lc80pS3PNrhHzx0ptu7QsQfPFLjorxRAjq8GdaHO3nOyDuh1IQ7A20OvQd6ZbxIMB88aW4lPGluJTzV+tI7wfCBO1gtmbxcOnA8XpmYvHG7jbw/nyS7nq8JPCx9rzkTzbE8q6mROd6LzTyoBtc7+F1UPJhlk7yF5+c7QopyvBQRxLz7w++7tBgDOxQRRDoM/oi8LTyPu5HefbzshdW7pT0SvTjXbjucUGG8l0OKPG21qTyIDGO9pFwpvL8PGTuoKGC8iu1Lue4Gh7zK5KU7EYmfvNm/lrwEsaA8tZ21vNkiwLvLadi6SREIvLvnqznbZlI8jlZZPmcqk7umJe48dWW7PPPSvTwiZ4I8ntGSuwCrvDyAuIe7veodPTeTXLwD0Le65Xx/PHs0ZLubio4812BuPEey37y4Jdq8Eco/vOMW5LwFNtM7R7Lfu1lPorymA+W88vFUPFsY57oqu128eKy/O2w36jzs4Qs8SVKovMByQjs8eqm612BuPPHPS7vc5BG7ofaNPDlVrrsTjJE8j9QYu/NvFD1UST69AEgTvUkRiLviLoi7pABzPOETcryZhxw8ZkmqN8Bywrve54O8F5noO7LbY7ttdAk9rg+tPIRiNbuOsg89UcGZuyGGGTjAckI8VOYUvL2w8Dx+k4y72oXpOxxerLshKuM68lT+vE1YDLy+b1C7H4MnvUfz/7sgZBC8I09evEGiFrvs4Yu80xnquprLrjztZj48VQieupN+xjzOKyq9e9E6vGz2Sbyrh4i89pSPvClYNL1L+WM8BXAAvKAVpbyHJAe8aEycPNFQpbyR1wq7xnuYPB3jXjyJy0I8BznFvF96AT0fJ3E75xxIvHyyI70InG49RsqDPHsS27tTogK9Ra/tuqPeaTzrIqw7fpMMPPd867sEFEq8g8JsvAfWm7xjyPi7HAJ2u/+IMzsHemU7qee/vBnWhzyO8y+8ONduPMolRrxKMxE8zCi4PBQRxDnQkcU482+UvOV8fzxEa9u8WJDCvGlupTy6i3W8ZOMOPCx9r7qh2/c7m+03PAV3czzLBq+7zQmhvL2wcLzysDQ8wRKLvPxjuDyMEsc7ZkkqPKE3LrzPDJM8JA4+vEzazLyTPSa9fPNDvFbweThWKqe8c8Xyu6vqsTyjOiC86JoHvALvTr126u26n5dlvMkDPbzqguO7nhKzPEkRCLwPKve7UEPavCp6Pb5p0c48Pu
DEOh3j3ju/Dxk8NauAPKYlbjwxpRw8oBWlPJmHHDwbn8w8UkbMvMqIb7yjOqC7bjpcPNvJe7yRe9S8gdoQPPm5Cj3297g6VQgePVjRYrx3Jw09LFsmPLB1yLvBUys86t6ZvJVAGD3z0r28e26RvPFsIrp1h8Q7YuAcPKtN27t1Zbu7nHLqu6Wgu7zNbMq8y0fPu361lTuQGKs8Kzkduya1+TsSaog8wHLCu2UFGD3pvJC7dcjkvLWdtTzjFmS7ofaNPJHXCr2X59M8H4MnPNUcXDyx84c8Gd16vOm8kDz29zg8b9qkvN+t1juTfsa7odQEPEtVmrvBEgu9f9eeO+OzOjwp2vQ8h8hQu+0lnjwgZBC9C4DJPDh0RTzbyfu8OfKEPDXsoLwp2nS8wZTLPMjhMzzYQVc7hcXevPT0xjwbn0y69LOmO/aUj7sMYTI8WPPrvD+fJDwxpRw8FdAjOqfkzTwgZJC8DP4IvS+iqrzJA708FdCjOzx6qTusyxq7CPgku2DdKrv32KE78k2Lu1JGzLy4Zno8SjORPF96AT0jiYu7JzO5O1sY5zwDjxe7lyh0vF13jzxU5hQ8ekyIPN5KrTvwrUI6PuDEvLK5WrwZ3fo76bwQPAQUyjwNQhu7828UuhR0bby1Qf87ZSchPD9ld73sA5W76yKsPC6AITwn8pi8Pr67O+9psLwHOcU8Yb6TvDCDEz19Fc285DjtvMzFjrzWmhs8UCHRPFVrRzxsUgC9Yb6TOvNvFLwyCEY8T5wevEx3o7vfCY24/EEvvLqEAjyh2/c8NMoXvVr23TxP/8c8lqNBvI83Qjy7CTW8VOaUPMdcAbw2Diq9SRGIvLU6DL0PKve7KBSiu3ZGJL2xNCg8eEkWPClYNLv7H6a8xt5BPDQtQbyhmte8WvbdPD4hZTx3ira7Y2VPOxoaGrwT7zq868Z1u1duOTu3X4c8MGj9upmHnDs+4MS6DGGyvAZYXL0z6S470pQ3vDtYID2HK/o8piVuPP3GYbyIaBm83ucDuwQUSrsmEbC8ROkaPBq+47w9/9s8HB0MvGrzV7oqFxS93QabuxndejwST/K7UcEZO4hGEL3PsNy71pobvH031jt3aC083ucDO5pNb7vUNIA8dycNvfvehTvbwgg99jjZPL8PGb2UHg+8cPytO5VAmDzy8dQ8eKy/Oxf1nrx70bq8bjpcu1Qntb3uBoc7OfKEO1WsZ7wUdG06w/pmvGcICjuEQCy8EGeWvEgwHzx04Ai8l0MKPSoXFL13ijY8CsFpuwwF/Lt68NG5vbBwPElSqDs2zYm7B3plPFPEC7zm2LU7QuaoPDQLODv/LP08zsiAvGmvRT2QtQG8eWsfvKuHCD1kh9i8uijMOVTmFD2xNCi8paC7vKIYFzxQIVE7cbuNPDGlHDx9N1a7tHssvLooTLp9ePY7FrEMvL/tjzxP/8e7VWvHPA1CG7wQZ5Y7dAISPHCgdzxhvhM6cv8fvDQtwTtrFWG86D5RPFbphrxo8GU89pSPvLtKVT3UNAA8XXcPPZYG67tEx5G8RI3kvHmNqLlpEm+8sng6O86O07zqguM6YoRmvNvCiLvldYw85xzIPOqkbDzsp168xHimOS5eGDzo/bC70Q8FPQ8jhLy9Tce8WjALPEzazDwbn8w8WbLLu72wcDtvXGW88lT+u7S8TLvLaVg8LH0vO4ZDHrzKiG88sZdRPEiTyLycrJe6Shj7u/j6KjyqCcm8H+bQPJYhAbyzNxq8aPBlvClYNDy+ywa9YySvvFu1vbsMotI8cPwtO400ULyaqaW8sNhxPLqEAr2zN5o868Z1vJSgz7xmisq8bdcyPIamR7yv8JU7511oPNIxDrqJLmy7liEBPbASn7yU/AW8T5yePB/mUDr7Hya7bRjTPNFQpTtZFXW8rnLWuzFJ5rp9eHa8Ra9tukx3IzsX05U9QUbgO4RArLxYkMK71DSAvHrwUTxKdDE743IaOy6AIbw58gS9X1
/rPLkGQ7tzYsm8wbbUvEyZrDt2qU083OQRO1aN0LvB8AG8kdeKPIWEPjvXH048bXt8O1dMsDyxl9G8sBIfunTgiDy1Qf87OHTFOxR07bx6TAg8hEAsvIZDHr3hsMi71HUgPF7auLuOso+8VOaUvD8CTjwTMNs7GBeouxwC9jviLgi9leRhPGpPjrxGyoO8CFvOO4yvHb2h9g06"
}
],
"model": "text-embedding-ada-002-v2",
"usage": {
"prompt_tokens": 27,
"total_tokens": 27
}
}
</code></pre>
<p>I want to understand what is happening behind the scenes as this kind of debugging is useful for troubleshooting. As you can see, the payload has an input field with a matrix of numbers, but it does not make sense to me (<a href="https://platform.openai.com/docs/guides/embeddings/how-to-get-embeddings?lang=curl" rel="nofollow noreferrer">it does not match the documentation</a>).</p>
<p>So I have two questions:</p>
<ol>
<li>Why does the input field have this matrix of numbers?</li>
<li>How can I decode the answer? I couldn't create the vector I am supposed to receive when I decode the embedding field from the answer using <a href="https://en.wikipedia.org/wiki/Base64" rel="nofollow noreferrer">Base64</a>.</li>
</ol>
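<p>On the decoding point: a common gotcha (and an assumption worth checking against the payload above) is that the base64 string is not encoded text at all but the raw little-endian <code>float32</code> bytes of the embedding vector, so it must be reinterpreted as floats rather than decoded to a string. A minimal round-trip sketch:</p>

```python
import base64
import numpy as np

# Stand-in vector; the real one would come from the "embedding" field above.
vec = np.array([0.1, -0.2, 0.3], dtype=np.float32)

# Encode the raw float32 bytes the way the payload appears to be encoded.
b64 = base64.b64encode(vec.tobytes()).decode()

# Decoding: base64 -> bytes -> float32 array.
decoded = np.frombuffer(base64.b64decode(b64), dtype=np.float32)
print(decoded)  # recovers the original vector
```

If a plain Base64-to-text decode was attempted, this byte-level reinterpretation would explain why no readable vector came out.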
<p>It looks like the Python client from OpenAI uses an <a href="https://help.openai.com/en/articles/6283125-what-happened-to-engines" rel="nofollow noreferrer">older version of the API</a> (can be that the reason? I didn't use the API before).</p>
<p>ChatGPT mentioned</p>
<blockquote>
<p>The tokens are represented by numerical IDs such as 82290, 16, 25, etc., which likely correspond to a vocabulary or tokenization scheme used by OpenAI</p>
</blockquote>
<p>However, it does not provide references and I would like to have them. It might be related to one of this tools <a href="https://github.com/openai/tiktoken" rel="nofollow noreferrer">Tiktoken</a>, <a href="https://huggingface.co/docs/transformers/main_classes/tokenizer" rel="nofollow noreferrer">Huggingface Tokenizer</a></p>
|
<python><wireshark><openai-api><chatgpt-api><langchain>
|
2023-04-13 14:29:59
| 0
| 376
|
Edu
|
76,006,453
| 3,447,369
|
How do I make the calculation for this distance matrix faster?
|
<p>I am working on a clustering task with geospatial data. I want to compute my own distance matrix that combines both geographical and temporal distance. My data (<code>np.array</code>) contains latitude, longitude, and timestamp. A sample of my DataFrame <code>df</code> (<a href="https://justpaste.it/bc1fr" rel="nofollow noreferrer">dict to reproduce</a>):</p>
<pre><code> latitude longitude timestamp
412671 52.506136 6.068709 2017-01-01 00:00:23.518
412672 52.503316 6.071496 2017-01-01 00:01:30.764
412673 52.505122 6.068912 2017-01-01 00:02:30.858
412674 52.501792 6.068605 2017-01-01 00:03:38.194
412675 52.508105 6.075160 2017-01-01 00:06:41.116
</code></pre>
<p>I currently use the following code:</p>
<pre><code>np_data = df.to_numpy()
# convert latitudes and longitudes to radians
lat_lon_rad = np.radians(np_data[:,:2].astype(float))
# compute Haversine distance matrix
haversine_matrix = haversine_distances(lat_lon_rad)
haversine_matrix /= np.max(haversine_matrix)
# compute time difference matrix
timestamps = np_data[:,2]
time_matrix = np.abs(np.subtract.outer(timestamps, timestamps)) # This line is SLOW
time_matrix /= np.max(time_matrix)
combined_matrix = 0.5 * haversine_matrix + 0.5 * time_matrix
</code></pre>
<p>This produces the desired result. However, when my data set is 1000 rows, this code takes about 25 seconds to complete, mainly due to the calculation of the <code>time_matrix</code> (the haversine matrix is very fast). The problem is: I have to work with data sets of roughly 200-500k rows. Using only the Haversine function is then still fine, but calculating my <code>time_matrix</code> will take way too long.</p>
<p>My question: <strong>how do I speed up the calculation of the <code>time_matrix</code>?</strong> I cannot find any way to perform the <code>np.subtract.outer(timestamps, timestamps)</code> calculation faster.</p>
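<p>For what it's worth, the usual culprit in cases like this (an assumption, since the dtypes aren't shown) is that <code>np_data[:,2]</code> is an object array of timestamps, so <code>np.subtract.outer</code> falls back to slow Python-level subtraction per pair. Converting the timestamps to plain float seconds first keeps the whole outer difference in fast numeric NumPy code:</p>

```python
import numpy as np

# Hypothetical timestamps standing in for df["timestamp"].
ts = np.array(["2017-01-01T00:00:23", "2017-01-01T00:01:30"],
              dtype="datetime64[ms]")

# View as integer milliseconds since the epoch, then as float seconds.
secs = ts.astype(np.int64) / 1000.0

# Broadcasting replaces np.subtract.outer on an object dtype.
time_matrix = np.abs(secs[:, None] - secs[None, :])
```

The broadcasted version produces the same pairwise |t_i - t_j| matrix, but entirely in C-level float arithmetic.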
|
<python><scikit-learn><cluster-analysis><distance>
|
2023-04-13 14:11:42
| 1
| 1,490
|
sander
|
76,006,442
| 7,741,772
|
How to get per-word occurrence counts and row counts from a table column
|
<p>We have a source database table with some million records. Sample data (extracted to a text file for this example):</p>
<pre><code> denied the payment
the payment successfull and incident reported successfull
Incident is been reported
</code></pre>
<p>While trying to get the distinct word counts across these 3 records, we replaced each space with a newline character and then applied <code>sort</code> and <code>uniq</code>:</p>
<pre><code>sed 's/ /\n/g' file|sort|uniq -c >> new.txt
output:
denied 1
the 2
payment 2
successfull 2
Incident 2
is 1
been 1
reported 2
</code></pre>
<p>How can we also get the number of rows containing each word in the above output, something like:</p>
<pre><code>values iterationcount countofrows
denied 1 1
the 2 2
payment 2 2
successfull 2 1 (although this word appears twice, it occurs in only 1 row)
Incident 2 2
is 1 1
been 1 1
reported 2 2
</code></pre>
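<p>As a cross-check of the desired output, the two counts can be expressed directly in Python over the three sample rows (a sketch; like <code>sort | uniq</code>, it is case-sensitive): total occurrences per word, and the number of rows containing the word at least once.</p>

```python
from collections import Counter

rows = [
    "denied the payment",
    "the payment successfull and incident reported successfull",
    "Incident is been reported",
]

# iterationcount: every occurrence of the word.
total = Counter(word for row in rows for word in row.split())

# countofrows: count each word at most once per row via set().
row_count = Counter(word for row in rows for word in set(row.split()))

for word in total:
    print(word, total[word], row_count[word])
```

The same two aggregations could also be pushed into SQL with a split-to-rows step followed by <code>COUNT(*)</code> and <code>COUNT(DISTINCT row_id)</code> grouped by word.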
|
<python><sql><bash>
|
2023-04-13 14:09:51
| 1
| 873
|
Ravi
|
76,006,381
| 1,913,115
|
How to add a shortcut to a custom script in pyproject.toml (using poetry)
|
<p>I recently switched to Poetry from Pipenv. I'm used to having this section in my Pipfile:</p>
<pre><code>[scripts]
test="pytest -s"
test:watch="ptw --runner 'pytest -s'"
</code></pre>
<p>so I can easily run my tests without typing out the full command or entering the shell, e.g.:</p>
<pre class="lang-bash prettyprint-override"><code>pipenv run test:watch
</code></pre>
<p>When I try something similar in pyproject.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.scripts]
watch = "ptw --runner 'pytest -s'"
</code></pre>
<p>I get an error:</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry run watch
not enough values to unpack (expected 2, got 1)
</code></pre>
<p>Is there a different section in the pyproject.toml that I should be using for this?</p>
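<p>For context on the error: unlike Pipenv's <code>[scripts]</code>, <code>[tool.poetry.scripts]</code> expects <code>"module:function"</code> entry points rather than shell command strings, which is the likely reason the string fails to unpack into two values. One hedged workaround is a small wrapper module (all names here are hypothetical):</p>

```python
# scripts.py -- referenced from pyproject.toml as, e.g.:
#   [tool.poetry.scripts]
#   watch = "scripts:watch"
import subprocess
import sys

def build_watch_command():
    # The command that `poetry run watch` should execute.
    return ["ptw", "--runner", "pytest -s"]

def watch():
    # Run ptw and propagate its exit code.
    sys.exit(subprocess.call(build_watch_command()))
```

With such a wrapper in place, <code>poetry run watch</code> resolves the <code>scripts:watch</code> entry point and calls the function.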
|
<python><python-poetry><pyproject.toml>
|
2023-04-13 14:03:48
| 2
| 6,363
|
ierdna
|
76,006,319
| 4,837,637
|
Python: dynamically change a JSON key's value
|
<p>I have a JSON file that I read, dynamically changing the value of a key. I'm using this code:</p>
<pre class="lang-py prettyprint-override"><code>with open('payload/run_job.json', 'r+') as f:
data = json.load(f)
data['name'] = 'test'
result = json.dumps(data, sort_keys=True)
pprint(data)
</code></pre>
<p>and this is my JSON file</p>
<pre class="lang-json prettyprint-override"><code>{
"name":"",
"template": {
"containers": [
{
"name_job": "",
"env": [
{
"name": "COUNTRY",
"value": ""
},
{
"name": "NAME",
"value": ""
},
{
"name": "ISTANCE_UUID",
"value": ""
}
]
}
]
}
}
</code></pre>
<p>Now I also have to change the value associated with the name <code>ISTANCE_UUID</code> inside the <code>env</code> array. How can I do this?</p>
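<p>Since <code>env</code> is a list of <code>{"name": ..., "value": ...}</code> objects, the usual approach is to iterate and match on <code>name</code>. A sketch against the structure shown above (the UUID value is made up):</p>

```python
import json

raw = '''{"name": "", "template": {"containers": [{"name_job": "",
  "env": [{"name": "COUNTRY", "value": ""},
          {"name": "NAME", "value": ""},
          {"name": "ISTANCE_UUID", "value": ""}]}]}}'''
data = json.loads(raw)

# Walk every container's env list and update the matching entry.
for container in data["template"]["containers"]:
    for var in container["env"]:
        if var["name"] == "ISTANCE_UUID":
            var["value"] = "1234-abcd"  # hypothetical value

result = json.dumps(data, sort_keys=True)
```

Because <code>json.load</code> returns plain dicts and lists, mutating the nested entry in place is enough before re-serialising.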
|
<python><arrays><json>
|
2023-04-13 13:57:40
| 1
| 415
|
dev_
|
76,006,315
| 4,019,495
|
Is there a Pydantic convention for a Model attribute that will default to a function of another attribute?
|
<p>I want to define a Pydantic <code>BaseModel</code> with the following properties:</p>
<ul>
<li>Two <code>int</code> attributes <code>a</code> and <code>b</code>.</li>
<li><code>a</code> is a required attribute</li>
<li><code>b</code> is optional, and will default to <code>a+1</code> if not set.</li>
</ul>
<p>Is there a way to achieve this? This is what I've tried.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
from pydantic import BaseModel, validator
class A(BaseModel):
a: int
b: Optional[int] = None # want to default to a+1 if not set
@validator('b', always=True)
def default_b_value(cls, b, values):
if b is None and 'a' in values:
return values['a']+1
return b
</code></pre>
<p>My problem with this is it doesn't match the guarantees of <code>A</code>. For any <code>A</code> that's actually constructed, <code>b</code> will never be <code>None</code> so its type should be <code>int</code>, not <code>Optional[int]</code>.</p>
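<p>Setting pydantic aside for a moment, the guarantee being asked for ("<code>b</code> is always an <code>int</code> after construction") can be sketched with a plain dataclass and <code>__post_init__</code>; whether pydantic has an idiomatic equivalent is the open question:</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class A:
    a: int
    b: Optional[int] = None  # sentinel only; always an int after __post_init__

    def __post_init__(self) -> None:
        # Default b to a + 1 when the caller did not supply it.
        if self.b is None:
            self.b = self.a + 1
```

Usage: <code>A(a=3).b</code> gives <code>4</code>, while an explicit <code>A(a=3, b=10).b</code> stays <code>10</code>. The annotation still reads <code>Optional[int]</code>, which is the same type-level wrinkle the question describes.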
|
<python><python-typing><pydantic>
|
2023-04-13 13:57:15
| 1
| 835
|
extremeaxe5
|
76,006,275
| 14,022,582
|
How to drop duplicate rows using value_counts and also using a condition that uses the actual value in a column using pandas?
|
<p>I have the following dataframe. I want to group by <code>a</code> first. Within each group, I need to do a value count based on <code>c</code> and only pick the one with most counts if the value in <code>c</code> is not <code>EMP</code>. If the value in <code>c</code> is <code>EMP</code>, then I want to pick the one with the second most counts. If there is no other value than <code>EMP</code>, then it should be <code>EMP</code> as in the case where a = 4.</p>
<pre><code>a c
1 EMP
1 y
1 y
1 z
2 z
2 z
2 EMP
2 z
2 a
2 a
3 EMP
3 EMP
3 k
4 EMP
4 EMP
4 EMP
</code></pre>
<p>The expected result would be</p>
<pre><code>a c
1 y
2 z
3 k
4 EMP
</code></pre>
<p>This is what I have tried so far: I have managed to sort it in the order I need so I could take the first row. However, I cannot figure out how to implement the condition for <code>EMP</code> using a lambda function with the <code>drop_duplicates</code> function as there is only the <code>keep=first</code> or <code>keep=last</code> option.</p>
<pre><code>df = df.iloc[df.groupby(['a', 'c']).c.transform('size').mul(-1).argsort(kind='mergesort')]
</code></pre>
<p><strong>Edit:</strong></p>
<p>The <code>mode</code> solution worked, I have an additional question. My dataframe contains about 50 more columns, and I would like to have all of these columns in the end result as well, with the values corresponding to the rows picked using the mode operation and the first value that occurred for the EMP rows. How would I do that? Is there an easier solution than what is mentioned <a href="https://stackoverflow.com/questions/47360510/pandas-groupby-and-aggregation-output-should-include-all-the-original-columns-i">here</a> where you create a dict of functions and pass it to <code>agg</code>? Using SQL for this is also fine.</p>
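<p>Regarding the approach hinted at in the edit, the "<code>EMP</code> only as a fallback" rule can be written as a single <code>groupby</code> aggregation (a sketch over a reduced version of the sample data):</p>

```python
import pandas as pd

def pick(s: pd.Series) -> str:
    # Most frequent value, ignoring EMP unless nothing else exists.
    counts = s.value_counts()
    non_emp = counts.drop("EMP", errors="ignore")
    return non_emp.index[0] if len(non_emp) else "EMP"

df = pd.DataFrame({
    "a": [1, 1, 1, 1, 4, 4, 4],
    "c": ["EMP", "y", "y", "z", "EMP", "EMP", "EMP"],
})
result = df.groupby("a")["c"].agg(pick)
```

To carry the other ~50 columns along, one option is to compute this per-group pick first and then merge it back, or use <code>groupby(...).apply</code> to select a representative row per group, rather than listing an aggregation per column.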
|
<python><sql><pandas>
|
2023-04-13 13:53:17
| 3
| 969
|
user42
|
76,006,269
| 19,500,571
|
Python Dash: Fitting a table and a graph in one row of a grid
|
<p>I am trying to fit a table and a graph in one single row of a grid. I have tried to resize both of them to fit by setting the <code>style</code>, but for some odd reason the table is placed beneath the graph.</p>
<p>Here is the code. What is causing this?</p>
<pre><code>from dash import Dash, dcc, html, Input, Output, no_update, dash_table
import plotly.express as px
import dash_bootstrap_components as dbc
import plotly.graph_objects as go
import pandas as pd
from collections import OrderedDict
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
data = OrderedDict(
[
("Date", ["2015-01-01", "2015-10-24", "2016-05-10", "2017-01-10", "2018-05-10", "2018-08-15"]),
("Region", ["Montreal", "Toronto", "New York City", "Miami", "San Francisco", "London"]),
("Temperature", [1, -20, 3.512, 4, 10423, -441.2]),
("Humidity", [10, 20, 30, 40, 50, 60]),
("Pressure", [2, 10924, 3912, -10, 3591.2, 15]),
]
)
df_a = pd.DataFrame(OrderedDict([(name, col_data * 10) for (name, col_data) in data.items()]))
df_a = df_a.iloc[0:20].copy()
df_b = px.data.gapminder().query("year == 2007")
dropdown_country_a = dcc.Dropdown(
id="dropdown-a",
options=df_b.country,
value="Turkey",
)
dropdown_country_b = dcc.Dropdown(
id="dropdown-b",
options=df_b.country,
value="Canada"
)
f1 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
f2 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
f3 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
aa = {'width': '40%', 'display': 'inline-block'}
app.layout = html.Div([
html.H1("USD", style={'textAlign': 'center'}),
html.Div(children=[
html.Div([dcc.Graph(id="1", figure=f1, style=aa), dcc.Graph(id="2", figure=f2, style=aa)]),
html.Div([
html.Div([
dash_table.DataTable(id="table", data=df_a.to_dict('records'), columns=[{"name": i, "id": i} for i in df_a.columns])], style=aa),
html.Div([dbc.Container([
dbc.Row(dbc.Col([dropdown_country_a, dropdown_country_b], lg=6, sm=12)),
dbc.Row(dbc.Col(dcc.Graph(id="asd"), lg=6, sm=12))])], style=aa)
])
])
])
# Callback for line_geo graph
@app.callback(
Output("asd", "figure"),
Input("dropdown-a", "value"),
Input("dropdown-b", "value"),
)
def make_line_geo_graph(country_a, country_b):
dff = df_b[df_b.country.isin([country_a, country_b])]
fig = px.line_geo(
dff,
locations="iso_alpha",
projection="orthographic",
)
fig_locations = px.line_geo(
dff, locations="iso_alpha", projection="orthographic", fitbounds="locations"
)
fig.update_traces(
line_width=3,
line_color="red",
)
fig_locations.update_traces(
line_width=3,
line_color="red",
)
return fig
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
|
<python><html><plotly><plotly-dash><dashboard>
|
2023-04-13 13:52:22
| 1
| 469
|
TylerD
|
76,006,099
| 1,884,953
|
In Python, how to override the string representation of a module?
|
<p>When one types the name of a module and then hits return, one gets a string
like below</p>
<pre><code>import pandas as pd
pd
<module 'pandas' from '....'>
</code></pre>
<p>How can I override this string for my own module?
(Note that I am referring to a module, not a class.)
For example, adding text like the following:</p>
<pre><code><module 'mymodule' from '....'>
my additional custom text
</code></pre>
<p>I tried to add this to mymodule:</p>
<pre><code>def __str__():
print('my additional custom text')
</code></pre>
<p>and also</p>
<pre><code>def __str__():
return 'my additional custom text'
def __repr__():
print("my additional custom text")
return "my additional custom text"
</code></pre>
<p>but nothing happens</p>
<p>Of course, if I type</p>
<pre><code>mymodule.__str__()
</code></pre>
<p>I get 'my additional custom text'</p>
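<p>For completeness: the string shown at the prompt comes from the module <em>object's</em> <code>__repr__</code>, and module-level functions named <code>__str__</code>/<code>__repr__</code> are never consulted, which is why nothing happens. What does work (Python 3.5+) is swapping the module's class for a <code>types.ModuleType</code> subclass. A sketch (inside <code>mymodule.py</code> one would assign to <code>sys.modules[__name__].__class__</code>):</p>

```python
import types

class CustomReprModule(types.ModuleType):
    def __repr__(self) -> str:
        # Keep the default "<module ...>" line and append custom text.
        return super().__repr__() + "\nmy additional custom text"

# Simulate what `sys.modules[__name__].__class__ = CustomReprModule`
# would do inside mymodule.py:
mod = types.ModuleType("mymodule")
mod.__class__ = CustomReprModule
print(repr(mod))
```

After the class swap, typing the module name at the REPL shows the extra line, because the REPL prints <code>repr()</code> of the object.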
|
<python><string><overwrite>
|
2023-04-13 13:36:15
| 2
| 303
|
minivip
|
76,006,042
| 2,302,262
|
Dictionary unpacking in python
|
<p>For a list, I can split it with one compact line of code:</p>
<pre class="lang-py prettyprint-override"><code>i_ate = ['apple', 'beetroot', 'cinnamon', 'donut']
# Elaborate
first = i_ate[0]
rest = [item for j, item in enumerate(i_ate) if j != 0]
# Shortcut
first, *rest = i_ate
# Result in both cases:
first # 'apple'
rest # ['beetroot', 'cinnamon', 'donut']
</code></pre>
<p><strong>Does someting similar exist for dictionaries?</strong></p>
<pre class="lang-py prettyprint-override"><code>i_ate = {'apple': 2, 'beetroot': 0, 'cinnamon': 3, 'donut': 8}
# Elaborate
apples = i_ate['apple']
rest = {k: v for k, v in i_ate.items() if k != 'apple'}
# Shortcut??
# -- Your line of code here --
# Result in both cases:
apples # 2
rest # {'beetroot': 0, 'cinnamon': 3, 'donut': 8}
</code></pre>
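<p>There is no dict-literal equivalent of <code>first, *rest</code>, but the closest compact idiom is <code>pop</code> on a shallow copy:</p>

```python
i_ate = {'apple': 2, 'beetroot': 0, 'cinnamon': 3, 'donut': 8}

rest = dict(i_ate)          # shallow copy, original left untouched
apples = rest.pop('apple')  # remove the key and return its value

print(apples)  # 2
print(rest)    # {'beetroot': 0, 'cinnamon': 3, 'donut': 8}
```

Two lines instead of one, but it avoids rebuilding the dict with a comprehension and leaves the source dict intact.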
|
<python><dictionary>
|
2023-04-13 13:30:30
| 1
| 2,294
|
ElRudi
|
76,006,023
| 8,510,613
|
How to export a C++ class using the template method pattern with a virtual interface to Python via pybind11?
|
<p>We have a class hierarchy that follows the template method pattern. The <code>Interface</code> class has a pure virtual method <code>process()</code>. The class <code>AbstractImpl</code> inherits it and fills in the <code>process</code> body with calls to pure virtual <code>stepX</code> methods. Finally, the derived class <code>Derived</code> implements those <code>stepX</code> methods to provide custom behavior.</p>
<pre class="lang-cpp prettyprint-override"><code>class Interface {
public:
virtual int process() = 0;
};
class AbstractImpl : public Interface {
public:
int process() override {
step1();
// do something
step2();
return 0;
}
protected:
virtual void step1() = 0;
virtual void step2() = 0;
};
class Derived : public AbstractImpl {
protected:
void step1() final {
// do something
}
void step2() final {
// do something
}
};
</code></pre>
<p>In the C++ world, I can easily use it like:</p>
<pre class="lang-cpp prettyprint-override"><code>Interface* obj = new Derived();
int res = obj->process();
</code></pre>
<p>Now we need to create Python bindings for the <code>Derived</code> class for scaffolding/test usage via pybind11. Ideally, in the Python world we could write:</p>
<pre class="lang-py prettyprint-override"><code>obj = Derived()
res = obj.process()
</code></pre>
<p>I've checked the pybind11 document class <a href="https://pybind11.readthedocs.io/en/latest/advanced/classes.html" rel="nofollow noreferrer">section</a>, which provide some examples like <code>PYBIND11_OVERRIDE_PURE</code>.</p>
<p>However, the <code>Derived</code> class is not derived directly from the <code>Interface</code> class, and we only want to export the public <code>process</code> method, not the protected <code>step</code> methods, to the Python world.</p>
<p>Is there any way to export the <code>Derived</code> class and its public <code>process</code> interface to the Python world without adding much intrusive code like <code>PYBIND11_OVERRIDE_PURE</code>?</p>
|
<python><c++><pybind11>
|
2023-04-13 13:28:24
| 1
| 1,282
|
user8510613
|
76,006,019
| 9,601,748
|
How can I get the index number of a row in a pivoted table with python/pandas?
|
<p>So I have this pivot table, 'movieUser_df':</p>
<p><a href="https://i.sstatic.net/1r4uc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1r4uc.png" alt="enter image description here" /></a></p>
<p>And I need to be able to get the index number of any row based on userID.</p>
<p>I can locate the row easily enough like this:</p>
<pre><code>movieUser_df.loc[movieUser_df.index.values == "641c87d06a97e629837fc079"]
</code></pre>
<p>But it only returns the row data.</p>
<p>I thought just <code>movieUser_df.index == "641c87d06a97e629837fc079"</code> might work, but it just returns this:</p>
<p><a href="https://i.sstatic.net/z4Xqt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z4Xqt.png" alt="enter image description here" /></a></p>
<p>I'm assuming these are booleans indicating, for each position, whether the value is 0 or not, but seemingly in reverse (the first item in the returned array is true, despite the only non-zero value in that row of movieUser_df being the very last datapoint), so I'm not sure.</p>
<p>I've looked around for solutions/other people with this problem, but all I could find was <a href="https://stackoverflow.com/questions/60646153/pandas-pivot-table-get-index-of-certain-row">this</a>, but unless I'm missing something, they end up asking a completely different question in the questions body.</p>
<p>Anyway, does anyone know how I can get the row index number by userID in this situation? Let me know if there's any more info I can provide that could be helpful, thanks for any help.</p>
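<p>A likely fit here (assuming the userIDs form the pivot table's index) is <code>Index.get_loc</code>, which maps a label to its integer position; the boolean array seen above is simply the elementwise result of <code>index == label</code>. A small sketch with made-up labels:</p>

```python
import pandas as pd

# Hypothetical pivot table with user IDs as the index.
df = pd.DataFrame({"movie1": [0, 0, 5]}, index=["u1", "u2", "u3"])

pos = df.index.get_loc("u2")  # integer position of that row label
print(pos)  # 1
```

So in the question's terms, <code>movieUser_df.index.get_loc("641c87d06a97e629837fc079")</code> should return the row's integer index directly.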
|
<python><pandas>
|
2023-04-13 13:28:18
| 1
| 311
|
Marcus
|
76,005,968
| 5,091,467
|
OneHotEncoder -- keep feature names after encoding categorical variables
|
<h3>Question</h3>
<p>After encoding categorical columns as numbers and pivoting LONG to WIDE into a sparse matrix, I am trying to retrieve the category labels for column names. I need this information to interpret the model in a latter step.</p>
<h3>Solution</h3>
<p>Below is my solution, which is really convoluted, please let me know if you have a better way:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.preprocessing import OneHotEncoder
# Example dataframe
data = {
'id':[13,13,14,14,14,15],
'name':['alex', 'mary', 'alex', 'barry', 'john', 'john'],
'categ': ['dog', 'cat', 'dog', 'ant', 'fox', 'seal'],
'size': ['big', 'small', 'big', 'tiny', 'medium', 'big']
}
df = pd.DataFrame(data)
# Create dictionaries from original dataframe to save categories
# Part of the convoluted solution
dcts = []
df_cols = ['categ', 'size']
for col in df_cols:
cats = df[col].astype('category')
dct = dict(enumerate(cats.cat.categories))
dcts.append(dct)
# Change into category codes, otherwise sparse matrix cannot be built
for col in ['categ', 'size']:
df[col] = df[col].astype('category').cat.codes
# Group by into sparse columns
piv = df.groupby(['id', 'name'])[['categ', 'size']].first().astype('Sparse[int]')
# Unstack keeps sparse format
piv = piv.unstack(fill_value=0)
piv.columns = piv.columns.to_flat_index().str.join('_')
# Encoding gives poor column names
encoder = OneHotEncoder(sparse_output=True)
piv_enc = encoder.fit_transform(piv)
piv_fin = pd.DataFrame.sparse.from_spmatrix(
piv_enc, columns=encoder.get_feature_names_out())
</code></pre>
<p>The column names look like this: <code>'categ_alex_-', 'categ_alex_2.0', 'categ_barry_-', 'categ_barry_0.0'</code>, while we need the original category labels, i.e. <code>'categ_alex_-', 'categ_alex_dog', 'categ_barry_-', 'categ_barry_ant'</code>.</p>
<h3>Convoluted part I need advice on</h3>
<pre class="lang-py prettyprint-override"><code># Fixing column names
piv_cols = list(piv_fin.columns)
for (dct, df_col) in zip(dcts, df_cols):
print(df_col, dct)
for i, piv_col in enumerate(piv_cols):
if df_col in piv_col:
if piv_col[-1:] != '-':
piv_cols[i] = piv_col[:-2] + '_' + dct[int(piv_col[-1:])]
piv_fin.columns = piv_cols
</code></pre>
<p>I'm sure there's a better way, perhaps OneHotEncoder can use category labels directly? Thanks for help!</p>
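<p>On the "better way" question: one hedged alternative is to one-hot encode the string columns before ever converting them to category codes, e.g. with <code>pd.get_dummies</code>, so the generated column names carry the category labels directly and the fix-up loop disappears:</p>

```python
import pandas as pd

df = pd.DataFrame({"categ": ["dog", "cat", "dog", "ant"]})

# Column names embed the original labels: categ_ant, categ_cat, categ_dog.
dummies = pd.get_dummies(df, columns=["categ"])
print(sorted(dummies.columns))
```

Whether this fits depends on the sparse pivoting step; <code>get_dummies</code> also accepts <code>sparse=True</code>, which may let the string labels survive the whole pipeline.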
|
<python><pandas><one-hot-encoding>
|
2023-04-13 13:23:47
| 1
| 714
|
Dudelstein
|
76,005,807
| 6,658,422
|
Tornado plot using seaborn objects
|
<p>I would like to create a tornado plot using seaborn objects and was wondering if there was an efficient way to do this.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn.objects as so
# create a dataframe with the columns id, gender, and age with random values
df = pd.DataFrame({'id':np.random.randint(0, 1000, 100), "gender":np.random.choice(['m', 'f'], 100), "age":np.random.randint(18, 100, 100)})
so.Plot(data=df, x="age", y='id', color='gender').add(so.Bars(), so.Hist(), so.Stack())
</code></pre>
<p>This code generates a stacked histogram that looks similar to this:</p>
<p><a href="https://i.sstatic.net/lOLT7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lOLT7.png" alt="enter image description here" /></a></p>
<p>Now, rather than having the bars stacked, I would like to have one color on the positive axis and the other on the negative axis. Is there a way to get there without having to create two plots and gluing them together?</p>
<p>I understand that maybe there is not, because the solution would not be generic to any number of different items in the <code>gender</code> column, but then again, maybe there is.</p>
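<p>One way to avoid gluing two plots together is to precompute signed counts (flipping one group's counts to negative) and draw plain bars from that tidy frame; this sidesteps <code>so.Hist</code>. A sketch, with the seaborn call left as a comment; as the question suspects, the sign flip only generalizes to exactly two groups.</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 100, 100),
    "gender": rng.choice(["m", "f"], 100),
})

# Bin ages and count per (bin, gender) combination.
df["age_bin"] = pd.cut(df["age"], bins=np.arange(10, 110, 10))
counts = (df.groupby(["age_bin", "gender"], observed=False)
            .size().rename("n").reset_index())

# Flip one group onto the negative axis -- the tornado trick,
# which inherently assumes exactly two groups.
counts.loc[counts["gender"] == "m", "n"] *= -1

# Then, for example:
# so.Plot(counts, x="n", y="age_bin", color="gender").add(so.Bar())
```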
|
<python><seaborn><seaborn-objects>
|
2023-04-13 13:06:51
| 0
| 2,350
|
divingTobi
|
76,005,798
| 37,213
|
NumPy SVD Does Not Agree With R Implementation
|
<p>I saw a <a href="https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python">question about inverting a singular matrix</a> on Stack Overflow using NumPy. I wanted to see if NumPy SVD could provide an acceptable answer.</p>
<p>I've demonstrated using <a href="https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525">SVD in R</a> for another Stack Overflow answer. I used that known solution to make sure that my NumPy code was working correctly before applying it to the new question.</p>
<p>I was surprised to learn that the NumPy solution did not match the R answer. I didn't get an identity back when I substituted the NumPy solution back into the equation.</p>
<p>The U matrices from R and NumPy have the same shape (3x3) and the same values, but some signs differ. Here is the U matrix I got from NumPy:</p>
<p><a href="https://i.sstatic.net/jyZ3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyZ3L.png" alt="enter image description here" /></a></p>
<p>The D matrices are identical for R and NumPy. Here is D after the large diagonal element is zeroed out:</p>
<p><a href="https://i.sstatic.net/2OeqI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2OeqI.png" alt="enter image description here" /></a></p>
<p>The V matrix I get from NumPy has shape 3x4; R gives me a 4x3 matrix. The values are similar, but the signs are different, as they were for U. Here is the V matrix I got from NumPy:</p>
<p><a href="https://i.sstatic.net/DWxjP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DWxjP.png" alt="enter image description here" /></a></p>
<p>The R solution vector is:</p>
<pre><code>x = [2.41176,-2.28235,2.15294,-3.47059]
</code></pre>
<p>When I substitute this back into the original equation <code>A*x = b</code> I get the RHS vector from my R solution:</p>
<pre><code>b = [-17.00000,28.00000,11.00000]
</code></pre>
<p>NumPy gives me this solution vector:</p>
<pre><code>x = [2.55645,-2.27029,1.98412,-3.23182]
</code></pre>
<p>When I substitute the NumPy solution back into the original equation <code>A*x = b</code> I get this result:</p>
<pre><code>b = [-15.93399,28.04088,12.10690]
</code></pre>
<p>Close, but not correct.</p>
<p>I repeated the experiment using NumPy <code>np.linalg.pinv</code> pseudo-inverse method. It agrees with the R solution.</p>
<p>Here is my complete Python script:</p>
<pre><code># https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python
import numpy as np
def pseudo_inverse_solver(A, b):
A_inv = np.linalg.pinv(A)
x = np.matmul(A_inv, b)
error = np.matmul(A, x) - b
return x, error, A_inv
def svd_solver(A, b):
U, D, V = np.linalg.svd(A, full_matrices=False)
D_diag = np.diag(np.diag(np.reciprocal(D)))
D_zero = np.array(D_diag)
D_zero[D_zero >= 1.0e15] = 0.0
D_zero = np.diag(D_zero)
A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U)
x = np.matmul(A_inv, b)
error = np.matmul(A, x) - b
return x, error, A_inv
if __name__ == '__main__':
"""
Solution from my SO answer
https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525
Example showing how to use NumPy SVD
https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition
"""
np.set_printoptions(20)
A = np.array([
[0.0, 1.0, -2.0, 3.0],
[5.0, -3.0, 1.0, -2.0],
[5.0, -2.0, -1.0, 1.0]
])
b = np.array([-17.0, 28.0, 11.0]).T
x_svd, error_svd, A_inv_svd = svd_solver(A, b)
error_svd_L2 = np.linalg.norm(error_svd)
x_pseudo, error_pseudo, A_inv_pseudo = pseudo_inverse_solver(A, b)
error_pseudo_L2 = np.linalg.norm(error_pseudo)
</code></pre>
<p>Any advice on what I've missed with NumPy SVD? Did I make a mistake at this line?</p>
<pre><code> A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U)
</code></pre>
<p>Update: Chrysophylaxs pointed out my error: I needed to transpose U:</p>
<pre><code> A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), np.transpose(U))
</code></pre>
<p>This change solves the problem. Thank you so much!</p>
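<p>To summarize the fix as runnable code (a condensed sketch of the corrected solver, not the exact script above): <code>np.linalg.svd</code> returns <code>U</code>, the singular values, and <code>V^T</code> as its third output, and the pseudo-inverse is <code>A+ = V diag(1/s) U^T</code>, so <code>U</code> must be transposed as well.</p>

```python
import numpy as np

A = np.array([[0., 1., -2., 3.],
              [5., -3., 1., -2.],
              [5., -2., -1., 1.]])
b = np.array([-17., 28., 11.])

# np.linalg.svd returns U, the singular values s, and V^T (not V).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Zero out near-zero singular values instead of inverting them.
s_inv = np.zeros_like(s)
big = s > 1e-10 * s.max()
s_inv[big] = 1.0 / s[big]

# Pseudo-inverse: A+ = V diag(1/s) U^T -- note the transpose on U,
# which was the missing piece in the original svd_solver.
A_pinv = Vt.T @ np.diag(s_inv) @ U.T
x = A_pinv @ b
```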
|
<python><numpy><svd>
|
2023-04-13 13:06:21
| 1
| 309,526
|
duffymo
|
76,005,681
| 3,595,907
|
Want to save image using OpenCV but Matplotlib insists I save a figure instead
|
<p>OpenCV 4.7, Matplotlib 3.7.1, Spinnaker-Python 3.0.0.118, Python 3.10, Win 10 x64</p>
<p>I'm acquiring images from a FLIR thermal camera via their <a href="https://www.flir.co.uk/products/spinnaker-sdk/" rel="nofollow noreferrer">Spinnaker API</a>.</p>
<p>I am displaying the images using OpenCV and concurrently displaying a histogram of the images using Matplotlib. The relevant code snippet (which sits in a function that also contains camera setup and error handling):</p>
<pre><code>cam.BeginAcquisition()
print('Acquiring images...')
print('Press enter to close the program..')
fig = plt.figure(1)
fig.canvas.mpl_connect('close_event', handle_close)
acq_count = 0 # number of saved images
# Retrieve and display images
while(continue_recording):
try:
image_result = cam.GetNextImage(1000)
# Ensure image completion
if image_result.IsIncomplete():
print('Image incomplete with image status %d ...' % image_result.GetImageStatus())
else:
# Getting the image data as a numpy array
image_data = image_result.GetNDArray()
image_data = cv2.rotate(image_data, cv2.ROTATE_90_COUNTERCLOCKWISE)
image_data = cv2.flip(image_data, 1)
image_copy = image_data
count, bins = np.histogram(image_data.flatten())
cv2.imshow("FLIR", image_data)
plt.stairs(count, bins)
plt.pause(0.001)
plt.clf()
# if 's' pressed then save image data
if cv2.waitKey(100) & 0xFF == ord('s'):
f_name = "test_" + str(acq_count) + '.png'
cv2.imwrite(f_name, image_data)
acq_count += 1
# If user presses enter, close the program
if keyboard.is_pressed('ENTER'):
print('Program is closing...')
cv2.destroyAllWindows()
plt.close('all')
input('Done! Press Enter to exit...')
continue_recording=False
image_result.Release() # clears camera buffer
except PySpin.SpinnakerException as ex:
print('Error: %s' % ex)
return False
cam.EndAcquisition()
</code></pre>
<p>When <code>s</code> is pressed, though, a blocking popup appears asking me where I want to save <code>figure 1</code>, and it then proceeds to save the histogram but not the image!</p>
<p>EDIT :: I should also mention that it all works fine as long as there is no Matplotlib figure.</p>
|
<python><opencv><matplotlib><flir>
|
2023-04-13 12:53:43
| 1
| 3,687
|
DrBwts
|
76,005,506
| 8,182,504
|
pyqtgraph's GraphicsWidgetAnchor class incompatible with PySide6
|
<p>I've written an application that uses <strong>PySide6</strong> and <strong>pyqtgraph</strong>. After upgrading all my Python packages (<em>PySide 6.5.0</em> and <em>pyqtgraph 0.13.2</em>, <em>Python 3.10</em>), that code crashes.
I looked into what changed, and apparently the following line is the problem:</p>
<pre class="lang-py prettyprint-override"><code>self.scope_original = pg.PlotWidget(title="Scope")
</code></pre>
<p>which leads to the error message:</p>
<pre><code>Traceback (most recent call last):
File "C:\--\testings\test_pyqtgraph.py", line 15, in <module>
plot = pg.PlotWidget()
File "C:\--\.venv\lib\site-packages\pyqtgraph\widgets\PlotWidget.py", line 55, in __init__
self.plotItem = PlotItem(**kargs)
File "C:\--\.venv\lib\site-packages\pyqtgraph\graphicsItems\PlotItem\PlotItem.py", line 154, in __init__
self.titleLabel = LabelItem('', size='11pt', parent=self)
File "C:\--\.venv\lib\site-packages\pyqtgraph\graphicsItems\LabelItem.py", line 19, in __init__
GraphicsWidget.__init__(self, parent)
File "C:\--\.venv\lib\site-packages\pyqtgraph\graphicsItems\GraphicsWidget.py", line 18, in __init__
QtWidgets.QGraphicsWidget.__init__(self, *args, **kwargs)
TypeError: GraphicsWidgetAnchor.__init__() takes 1 positional argument but 2 were given
</code></pre>
<p>Is there a workaround that allows me to use PySide6 and pyqtgraph?</p>
<hr />
<h2>Note/Working example:</h2>
<p>To check that nothing has changed in pyqtgraph and the problem is indeed PySide6 and not my application, I verified the behavior with the <a href="https://pyqtgraph.readthedocs.io/en/latest/getting_started/qtcrashcourse.html" rel="nofollow noreferrer">pyqtgraph crash course</a> code sample, where I just replaced <code>PyQt6</code> with PySide6:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6 import QtWidgets # Should work with PyQt5 / PySide2 / PySide6 as well
import pyqtgraph as pg
## Always start by initializing Qt (only once per application)
app = QtWidgets.QApplication([])
## Define a top-level widget to hold everything
w = QtWidgets.QWidget()
w.setWindowTitle('PyQtGraph example')
## Create some widgets to be placed inside
btn = QtWidgets.QPushButton('press me')
text = QtWidgets.QLineEdit('enter text')
listw = QtWidgets.QListWidget()
plot = pg.PlotWidget()
## Create a grid layout to manage the widgets size and position
layout = QtWidgets.QGridLayout()
w.setLayout(layout)
## Add widgets to the layout in their proper positions
layout.addWidget(btn, 0, 0) # button goes in upper-left
layout.addWidget(text, 1, 0) # text edit goes in middle-left
layout.addWidget(listw, 2, 0) # list widget goes in bottom-left
layout.addWidget(plot, 0, 1, 3, 1) # plot goes on right side, spanning 3 rows
## Display the widget as a new window
w.show()
## Start the Qt event loop
app.exec() # or app.exec_() for PyQt5 / PySide2
</code></pre>
<p>which again leads to the same error message above. <strong>PyQt6 works without a problem.</strong> I also can't just easily change PySide6 to PyQt6, due to the scale of the project.</p>
|
<python><pyqtgraph><pyside6>
|
2023-04-13 12:35:39
| 0
| 1,324
|
agentsmith
|
76,005,401
| 8,248,194
|
Can't install xmlsec via pip
|
<p>I'm getting the following when running <code>pip install xmlsec</code> in macOS Big Sur 11.3.1:</p>
<pre><code>Building wheels for collected packages: xmlsec
Building wheel for xmlsec (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /Users/davidmasip/.pyenv/versions/3.9.9/bin/python3.9 /Users/davidmasip/.pyenv/versions/3.9.9/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/tmpm51b1yso
cwd: /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d
Complete output (65 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-11.3-x86_64-cpython-39
creating build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
copying src/xmlsec/py.typed -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
copying src/xmlsec/tree.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
copying src/xmlsec/__init__.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
copying src/xmlsec/constants.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
copying src/xmlsec/template.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec
running build_ext
building 'xmlsec' extension
creating build/temp.macosx-11.3-x86_64-cpython-39
creating build/temp.macosx-11.3-x86_64-cpython-39/private
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d
creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -D__XMLSEC_FUNCTION__=__func__ -DXMLSEC_NO_FTP=1 -DXMLSEC_NO_MD5=1 -DXMLSEC_NO_GOST=1 -DXMLSEC_NO_GOST2012=1 -DXMLSEC_NO_CRYPTO_DYNAMIC_LOADING=1 -DXMLSEC_CRYPTO_OPENSSL=1 -DMODULE_NAME=xmlsec -DMODULE_VERSION=1.3.13 -I/usr/local/Cellar/libxmlsec1/1.3.0/include/xmlsec1 -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/openssl@1.1/include/openssl -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libxml -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libxslt -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libexslt -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/extlibs -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/__pycache__ -I/Users/davidmasip/.pyenv/versions/3.9.9/include/python3.9 -c /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c -o build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.o -g -std=c99 -fPIC -fno-strict-aliasing -Wno-error=declaration-after-statement 
-Werror=implicit-function-declaration -Os
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:319:5: error: use of undeclared identifier 'xmlSecSoap11Ns'
PYXMLSEC_ADD_NS_CONSTANT(Soap11Ns, "SOAP11");
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT'
tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:19:19: note: expanded from macro 'JOIN'
#define JOIN(X,Y) DO_JOIN1(X,Y)
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:20:23: note: expanded from macro 'DO_JOIN1'
#define DO_JOIN1(X,Y) DO_JOIN2(X,Y)
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:21:23: note: expanded from macro 'DO_JOIN2'
#define DO_JOIN2(X,Y) X##Y
^
<scratch space>:23:1: note: expanded from here
xmlSecSoap11Ns
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:320:5: error: use of undeclared identifier 'xmlSecSoap12Ns'; did you mean 'xmlSecXPath2Ns'?
PYXMLSEC_ADD_NS_CONSTANT(Soap12Ns, "SOAP12");
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT'
tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:19:19: note: expanded from macro 'JOIN'
#define JOIN(X,Y) DO_JOIN1(X,Y)
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:20:23: note: expanded from macro 'DO_JOIN1'
#define DO_JOIN1(X,Y) DO_JOIN2(X,Y)
^
/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:21:23: note: expanded from macro 'DO_JOIN2'
#define DO_JOIN2(X,Y) X##Y
^
<scratch space>:25:1: note: expanded from here
xmlSecSoap12Ns
^
/usr/local/Cellar/libxmlsec1/1.3.0/include/xmlsec1/xmlsec/strings.h:34:33: note: 'xmlSecXPath2Ns' declared here
XMLSEC_EXPORT_VAR const xmlChar xmlSecXPath2Ns[];
^
2 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for xmlsec
Failed to build xmlsec
ERROR: Could not build wheels for xmlsec which use PEP 517 and cannot be installed directly
WARNING: You are using pip version 21.2.4; however, version 23.0.1 is available.
You should consider upgrading via the '/Users/davidmasip/.pyenv/versions/3.9.9/bin/python3.9 -m pip install --upgrade pip' command.
</code></pre>
<p>I've also run before:</p>
<pre class="lang-bash prettyprint-override"><code>brew install libxml2 libxmlsec1 pkg-config xz
</code></pre>
<p>And I get:</p>
<pre class="lang-bash prettyprint-override"><code>Warning: libxml2 2.10.4 is already installed and up-to-date.
To reinstall 2.10.4, run:
brew reinstall libxml2
Warning: libxmlsec1 1.3.0 is already installed and up-to-date.
To reinstall 1.3.0, run:
brew reinstall libxmlsec1
Warning: pkg-config 0.29.2_3 is already installed and up-to-date.
To reinstall 0.29.2_3, run:
brew reinstall pkg-config
Warning: xz 5.4.2 is already installed and up-to-date.
To reinstall 5.4.2, run:
brew reinstall xz
</code></pre>
<p>How can I install xmlsec on macOS?</p>
|
<python><macos><pip><homebrew><xmlsec>
|
2023-04-13 12:24:18
| 4
| 2,581
|
David Masip
|
76,005,396
| 15,845,509
|
How to understand print result of byte data read from a pickle file?
|
<p>I am trying to get data from a pickle file. As I understand it, serialization converts the data into a byte stream. When I read the data as binary using this code:</p>
<pre><code>f = open("alexnet.pth", "rb")
data = f.read()
</code></pre>
<p>I got this result</p>
<blockquote>
<p>b'PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x12\x00archive/data.pklFB\x0e\x00ZZZZZZZZZZZZZZ\x80\x02ccollections\nOrderedDict\nq\x00)Rq\x01(X\x11\x00\x00\x00features.0.weightq\x02ctorch._utils\n_rebuild_tensor_v2\nq\x03((X\x07\x00\x00\x00storageq\x04ctorch\nFloatStorage\nq\x05X\r\x00\x00\x002472041505024q\x06X\x03\x00\x00\x00cpuq\x07M\xc0Ztq\x08QK\x00(K@K\x03K\x0bK\x0btq\t(Mk\x01KyK\x0bK\x01tq\n\x89h\x00)Rq\x0btq\x0cRq\rX\x0f\x00\x00\x00features.0.biasq\x0eh\x03((h\x04h\x05X\r\x00\x00\x002472041504928q\x0fh\x07K@tq\x10QK\x00K@\x85q\x11K\x01\x85q\x12\x89h\x00)Rq\x13tq\x14Rq\x15X\x11\x00\x00\x00features.3.weightq\x16h\x03((h\x04h\x05X\r\x00\x00\x002472041505120q\x17h\x07J\x00\xb0\x04\x00tq\x18QK\x00(K\xc0K@K\x05K\x05tq\x19(M@\x06K\x19K\x05K\x01tq\x1a\x89h\x00)Rq\x1btq\x1cRq\x1dX\x0f\x00\x00\x00features.3.biasq\x1eh\x03((h\x04h\x05X\r\x00\x00\x002472041507136q\x1fh\x07K\xc0tqQK\x00K\xc0\x85q!K\x01\x85q"\x89h\x00)Rq#tq$Rq%X\x11\x00\x00\x00features.6.weightq&h\x03((h\x04h\x05X\r\x00\x00\x002472041509056q'h\x07J\x00 \n\x00tq(QK\x00(M\x80\x01K\xc0K\x03K\x03tq)(M\xc0\x06K\tK\x03K\x01tq*\x89h\x00)Rq+tq,Rq-X\x0f\x00\x00\x00features.6.biasq.h\x03((h\x04h\x05X\r\x00\x00\x002472041505312q/h\x07M\x80\x01tq0QK\x00M\x80\x01\x85q1K\x01\x85q2\x89h\x00)Rq3tq4Rq5X\x11\x00\x00\x00features.8.weightq6h\x03((h\x04h\x05X\r\x00\x00\x002472041508192q7h\x07J\x00\x80\r\x00tq8QK\x00(M\x00\x01M\x80\x01K\x03K\x03tq9(M\x80\rK\tK\x03K\x01tq:\x89h\x00)Rq;tq<Rq=X\x0f\x00\x00\x00features.8.biasq>h\x03((h\x04h\x05X\r\</p>
</blockquote>
<p>I know those are hexadecimal escapes. Does each escape such as <code>\x02</code> represent exactly one byte? How should I read this dump in terms of bytes? I also notice some readable English fragments, such as "ctorch._utils" and "_rebuild_tensor_v2", mixed in with the escapes. What does this mixture of hexadecimal and text mean?</p>
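<p>A sketch of how to read that repr, using a short made-up prefix of the dump: each <code>\xNN</code> escape is exactly one byte (two hex digits), and bytes whose values happen to be printable ASCII are shown as characters instead of escapes. That is why pickled names such as <code>ctorch._utils</code> appear as readable text inside the byte stream.</p>

```python
data = b'PK\x03\x04ctorch._utils'

# A bytes object is a sequence of integers 0-255; one \xNN escape
# in the repr is one byte, written as two hex digits.
assert data[0] == 0x50            # 'P' -- printable, shown as a char
assert data[2] == 0x03            # non-printable, shown as \x03
assert len(b'\x03\x04') == 2      # two escapes == two bytes

# Readable runs like "ctorch._utils" are just printable ASCII bytes;
# in the dump they are pickle data naming a module and attribute.
assert data[4:].decode("ascii") == "ctorch._utils"
```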
|
<python><hex><pickle>
|
2023-04-13 12:24:08
| 1
| 369
|
ryan chandra
|
76,005,347
| 10,282,088
|
Ace linters - All annotations are of type error
|
<p>I'm using ace-linters with the Ace editor to lint languages like Python, SQL, and JSON.</p>
<p>The linting works well with the help of web workers and returns proper annotations.
However, all annotations are marked as type "error", even though some of them should, in my view, be warnings. Please see the example below.</p>
<p>Is there a way to fix this, either by controlling the error codes or by adjusting the annotations as they are returned?</p>
<pre><code>[
{
"row": 0,
"column": 7,
"text": "F401 `pandas` imported but unused",
"type": "error"
},
{
"row": 2,
"column": 18,
"text": "F401 `pandas` imported but unused",
"type": "error"
},
{
"row": 2,
"column": 44,
"text": "F401 `numpy` imported but unused",
"type": "error"
},
{
"row": 3,
"column": 88,
"text": "E501 Line too long (136 > 88 characters)",
"type": "error"
},
{
"row": 6,
"column": 88,
"text": "E501 Line too long (112 > 88 characters)",
"type": "error"
},
{
"row": 8,
"column": 88,
"text": "E501 Line too long (93 > 88 characters)",
"type": "error"
},
{
"row": 10,
"column": 88,
"text": "E501 Line too long (173 > 88 characters)",
"type": "error"
},
{
"row": 12,
"column": 88,
"text": "E501 Line too long (90 > 88 characters)",
"type": "error"
},
{
"row": 14,
"column": 88,
"text": "E501 Line too long (101 > 88 characters)",
"type": "error"
},
{
"row": 18,
"column": 88,
"text": "E501 Line too long (128 > 88 characters)",
"type": "error"
}
]
</code></pre>
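<p>If the language-server worker itself cannot be configured, one fallback is to post-process the annotation list before handing it to the editor, downgrading selected lint codes to warnings. A Python sketch of the remapping logic (the real editor-side code would be JavaScript, and the set of codes to downgrade is an assumption, not something ace-linters defines):</p>

```python
# Codes we choose to treat as warnings -- an assumption for this
# sketch, not a convention ace-linters itself provides.
WARNING_CODES = ("E501", "F401")

def downgrade(annotations):
    """Return a copy with matching lint codes marked as warnings."""
    return [
        {**a, "type": "warning"} if a["text"].startswith(WARNING_CODES) else a
        for a in annotations
    ]

anns = [
    {"row": 0, "column": 7,
     "text": "F401 `pandas` imported but unused", "type": "error"},
    {"row": 1, "column": 0,
     "text": "E999 SyntaxError", "type": "error"},
]
fixed = downgrade(anns)
# fixed[0]["type"] is now "warning"; fixed[1] is untouched.
```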
|
<python><angular><ace-editor>
|
2023-04-13 12:18:41
| 1
| 538
|
Abdul K Shahid
|