QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
77,188,680 | 1,482,271 | How do I form a suds request in Python using a WSDL that uses the "any" type? | <p>Note, this is not a duplicate of <a href="https://stackoverflow.com/questions/77189479/how-to-pass-any-type-parameter-in-soap-request-using-zeep-in-python">How to pass "Any" type parameter in SOAP request using zeep in Python</a>.</p>
<p>This question relates to using suds, the other to zeep, and the problem and issues encountered are different.</p>
<p>I have a WSDL that uses the "any" type for the core element (Element) in all SOAP operations. Note that I have trimmed this down as it's quite big.</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<definitions targetNamespace="urn:xtk:queryDef" xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="urn:xtk:queryDef" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
<types>
<s:schema elementFormDefault="qualified" targetNamespace="urn:xtk:queryDef">
<s:complexType name="Element">
<s:sequence>
<s:any processContents="lax"/>
</s:sequence>
</s:complexType>
<s:element name="ExecuteQuery">
<s:complexType>
<s:sequence>
<s:element maxOccurs="1" minOccurs="1" name="sessiontoken" type="s:string" />
<s:element maxOccurs="1" minOccurs="1" name="entity" type="tns:Element" />
</s:sequence>
</s:complexType>
</s:element>
<s:element name="ExecuteQueryResponse">
<s:complexType>
<s:sequence>
<s:element maxOccurs="1" minOccurs="1" name="pdomOutput" type="tns:Element" />
</s:sequence>
</s:complexType>
</s:element>
</s:schema>
</types>
<message name="ExecuteQueryIn">
<part element="tns:ExecuteQuery" name="parameters" />
</message>
<message name="ExecuteQueryOut">
<part element="tns:ExecuteQueryResponse" name="parameters" />
</message>
<portType name="queryDefMethodsSoap">
<operation name="ExecuteQuery">
<input message="tns:ExecuteQueryIn" />
<output message="tns:ExecuteQueryOut" />
</operation>
</portType>
<binding name="queryDefMethodsSoap" type="tns:queryDefMethodsSoap">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
<operation name="ExecuteQuery">
<soap:operation soapAction="xtk:queryDef#ExecuteQuery" style="document" />
<input>
<soap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" use="literal" />
</input>
<output>
<soap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" use="literal" />
</output>
</operation>
</binding>
<service name="XtkQueryDef">
<port binding="tns:queryDefMethodsSoap" name="queryDefMethodsSoap">
<soap:address location="https://xxxxxxxxxxxxxx/nl/jsp/soaprouter.jsp" />
</port>
</service>
</definitions>
</code></pre>
<p>I cannot seem to form the correct parameters to call the <code>ExecuteQuery</code> service using suds-jurko in Python 3.</p>
<p>I want to send the equivalent of this payload:</p>
<pre><code><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:xtk:queryDef">
<soapenv:Header/>
<soapenv:Body>
<urn:ExecuteQuery>
<urn:sessiontoken>xxxxxxx</urn:sessiontoken>
<urn:entity>
<queryDef schema="nms:recipient" operation="select">
<select>
<node expr="@email"/>
<node expr="@lastName+'-'+@firstName"/>
<node expr="Year(@birthDate)"/>
</select>
<orderBy>
<node expr="@birthDate" sortDesc="true"/>
</orderBy>
</queryDef>
</urn:entity>
</urn:ExecuteQuery>
</soapenv:Body>
</soapenv:Envelope>
</code></pre>
<p>So I have this code:</p>
<pre><code>import urllib.parse
import urllib.request
from suds.client import Client
import os
# Executes a query and returns the result set
def execute_query():
# Load the WSDL locally - not authorised to get from server
wsdl_url = urllib.parse.urljoin('file:', urllib.request.pathname2url(os.path.abspath("querydef_dev.wsdl")))
session_token = "xxxxxxxxxxx"
# Init the client
query_client = Client(wsdl_url)
# Construct the query def
query_def = {
"queryDef": {
"select": {
"node": [
{
"_expr": "@email"
},
{
"_expr": "@lastName+'-'+@firstName"
},
{
"_expr": "Year(@birthDate)"
}
]
},
"orderBy": {
"node": {
"_expr": "@birthDate",
"_sortDesc": "true"
}
},
"_schema": "nms:recipient",
"_operation": "select"
}
}
try:
response = query_client.service.ExecuteQuery(sessiontoken=session_token, entity=query_def)
except:
print("Failed!")
print(query_client.last_sent())
if __name__ == '__main__':
execute_query()
</code></pre>
<p>However, that results in an incorrect XML payload that messes with the attributes:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:xtk:queryDef">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:ExecuteQuery>
<ns1:sessiontoken>xxxxxxxxxxx</ns1:sessiontoken>
<ns1:entity>
<ns1:queryDef>
<ns1:select>
<ns1:node>
<ns1:_expr>@email</ns1:_expr>
</ns1:node>
<ns1:node>
<ns1:_expr>@lastName+&apos;-&apos;+@firstName</ns1:_expr>
</ns1:node>
<ns1:node>
<ns1:_expr>Year(@birthDate)</ns1:_expr>
</ns1:node>
</ns1:select>
<ns1:orderBy>
<ns1:node>
<ns1:_expr>@birthDate</ns1:_expr>
<ns1:_sortDesc>true</ns1:_sortDesc>
</ns1:node>
</ns1:orderBy>
<ns1:_schema>nms:recipient</ns1:_schema>
<ns1:_operation>select</ns1:_operation>
</ns1:queryDef>
</ns1:entity>
</ns1:ExecuteQuery>
</ns0:Body>
</SOAP-ENV:Envelope>
</code></pre>
<p>There is no complexType definition of "Element" that I can use with <code>client.factory.create</code>, so I'm stuck as to how I form the payload I need.</p>
| <python><xml><wsdl><suds> | 2023-09-27 15:01:37 | 0 | 335 | mroshaw |
77,188,408 | 1,106,951 | Pandas Excel to DF throwing the unexpected keyword argument 'index' Error | <p>Importing an Excel file into a Pandas DataFrame without the index is giving me this error</p>
<blockquote>
<p>TypeError: read_excel() got an unexpected keyword argument 'index'</p>
</blockquote>
<p>As far as I can see, I am not doing anything wrong here</p>
<pre><code>import pandas as pd
df = pd.read_excel('Prod_User.xlsx', sheet_name='PROD', index=False)
print(df)
</code></pre>
<p>but I am getting the error! What might I be doing wrong here?</p>
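<p>For what it's worth, the function signatures themselves show where <code>index</code> is accepted; a quick sketch, assuming a reasonably recent pandas:</p>

```python
import inspect

import pandas as pd

# read_excel() controls the index via index_col (which column to *use* as the
# index on the way in); it has no `index` keyword, hence the TypeError.
read_params = inspect.signature(pd.read_excel).parameters
print("index" in read_params)       # False
print("index_col" in read_params)   # True

# index=False belongs on the writing side, DataFrame.to_excel():
write_params = inspect.signature(pd.DataFrame.to_excel).parameters
print("index" in write_params)      # True
```

<p>So dropping <code>index=False</code> from the <code>read_excel</code> call should be enough.</p>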
| <python><pandas> | 2023-09-27 14:26:20 | 1 | 6,336 | Behseini |
77,188,326 | 3,501,622 | Set lineplot marker for only one line | <p>I am trying to plot a seaborn lineplot with 2 lines: one with markers and one without. In the documentation (<a href="https://seaborn.pydata.org/generated/seaborn.lineplot.html" rel="nofollow noreferrer">https://seaborn.pydata.org/generated/seaborn.lineplot.html</a>), it says that, in order to achieve this, I have to pass the <code>markers</code> parameter as well as set the <code>style</code>.</p>
<p>I tried 2 different approaches:</p>
<p>In the first one, I set <code>markers</code> as <code>True/False</code> and pass <code>marker='o'</code> to set the default marker. The problem with this approach is that it does not use the marker; it seems to use a <code>'-'</code> white marker as the default one.</p>
<pre><code>fig, ax = plt.subplots(figsize=(7, 5))
sns.lineplot(data=my_data, x='Date', y='Value', markers={'Series 1': True, 'Series 2': False}, style='Name', marker = 'o')
</code></pre>
<p>In the second approach, I set the <code>markers</code> as <code>"o" and None</code>, but it raises a <code>ValueError: Filled and line art markers cannot be mixed</code>.</p>
<pre><code>fig, ax = plt.subplots(figsize=(7, 5))
sns.lineplot(data=my_data, x='Date', y='Value', markers={'Series 1': 'o', 'Series 2': None}, style='Name')
</code></pre>
<p>What is the correct way to achieve the result I want?</p>
| <python><seaborn><linechart> | 2023-09-27 14:17:20 | 1 | 671 | Daniel |
77,188,170 | 19,155,645 | TypeError: 'NoneType' object is not iterable when using ucimlrepo | <p>I want to use the <a href="https://archive.ics.uci.edu/dataset/2/adult" rel="nofollow noreferrer">Adult dataset from the UCI ML Repo</a>.</p>
<p>For this I'm following the "import in python" option in the page, which gives this code:</p>
<pre><code>pip install ucimlrepo
from ucimlrepo import fetch_ucirepo
# fetch dataset
adult = fetch_ucirepo(id=2)
# data (as pandas dataframes)
X = adult.data.features
y = adult.data.targets
# metadata
print(adult.metadata)
# variable information
print(adult.variables)
</code></pre>
<p>But that raises the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/Users/.../playground.ipynb Cell 3 line 4
1 from ucimlrepo import fetch_ucirepo
3 # fetch the adult dataset
----> 4 adult = fetch_ucirepo(id=2)
6 # convert the dataset to a Pandas DataFrame
7 df = pd.DataFrame(adult.data.features, columns=adult.variables.names, missing_values=["?"])
File ~/anaconda3/envs/myEnv/lib/python3.11/site-packages/ucimlrepo/fetch.py:148, in fetch_ucirepo(name, id)
142 # alternative usage?:
143 # variables.age.role or variables.slope.description
144 # print(variables) -> json-like dict with keys [name] -> details
145
146 # make nested metadata fields accessible via dot notation
147 metadata['additional_info'] = dotdict(metadata['additional_info'])
--> 148 metadata['intro_paper'] = dotdict(metadata['intro_paper'])
150 # construct result object
151 result = {
152 'data': dotdict(data),
153 'metadata': dotdict(metadata),
154 'variables': variables
155 }
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>I know that I can download this dataset and then load it as a pandas df, but this is messier because I then need to do some extra parsing, and it does not look good: first load <code>adult.data</code>, and second add the headers from the specific lines in <code>adult.names</code> (after splitting everything after the ":" in each line).</p>
| <python><dataset><typeerror><data-analysis> | 2023-09-27 13:56:19 | 4 | 512 | ArieAI |
77,188,137 | 2,123,706 | Can multiprocessing decrease latency of sqlalchemy writes to the server | <p>I have a SQLAlchemy session, and want to insert 300k rows by 24 columns (~300 MB) of data</p>
<pre><code>database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con)
con = engine.connect()
df.to_sql(
name="data",
con=con,
if_exists="append",
index=False
)
con.commit()
</code></pre>
<p>I find that it is rather slow.</p>
<p>Is it possible to set up a multithreading/parallel-processing session to improve the write time to the DB? If so, how would I go about setting this up?</p>
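<p>Before reaching for multiprocessing, it's usually worth tuning <code>to_sql</code> itself: chunked, batched inserts (and, for mssql+pyodbc specifically, <code>create_engine(..., fast_executemany=True)</code>) often give the biggest win. A runnable sketch against an in-memory SQLite stand-in; the URL, table name, and sizes are placeholders:</p>

```python
import pandas as pd
import sqlalchemy as sa

# In-memory SQLite stands in for the MSSQL server here; with mssql+pyodbc you
# would additionally pass fast_executemany=True to create_engine().
engine = sa.create_engine("sqlite://")

df = pd.DataFrame({f"col{i}": range(1000) for i in range(5)})

with engine.begin() as con:
    # chunksize caps rows per round trip; method="multi" packs many rows into
    # each INSERT statement instead of one statement per row.
    df.to_sql("data", con, if_exists="append", index=False,
              chunksize=100, method="multi")

with engine.connect() as con:
    print(con.execute(sa.text("SELECT COUNT(*) FROM data")).scalar())  # 1000
```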
| <python><sqlalchemy><parallel-processing> | 2023-09-27 13:51:41 | 0 | 3,810 | frank |
77,187,866 | 226,342 | Refreshing layout in multi page Dash app due to watchdog file event | <p>I have a multi page Dash app.</p>
<p>app.py:</p>
<pre><code>app = dash.Dash(__name__, use_pages=True, suppress_callback_exceptions=True)
...
</code></pre>
<p>I then have a page:</p>
<p>pages/resources.py:</p>
<pre><code>dash.register_page(__name__, order=4, title='ποΈ Resources')
</code></pre>
<p>Layout is defined as (note <code>table</code>, which is a variable containing an HTML table):</p>
<pre><code>def layout(**other_unknown_query_strings):
return dbc.Container([... + table...])
</code></pre>
<p>I put an observer on a file (watchdog):</p>
<pre><code>event_handler = FileChangeHandler()
observer = Observer()
observer.schedule(event_handler, path='resources_reserve.yaml', recursive=False)
observer.start()
</code></pre>
<p>Now in <code>FileChangeHandler</code> I change <code>table</code> and then want to trigger a refresh of the layout.</p>
<p>Is there any elegant way of achieving this?</p>
| <python><plotly-dash><python-watchdog> | 2023-09-27 13:14:19 | 1 | 2,005 | Henrik |
77,187,839 | 4,244,347 | How to serialize Jinja2 template in PySpark? | <p>I want to use a Jinja2 template to create a column in a df using PySpark. For example, if I have a column <code>name</code>, use the following template to create another column called <code>new_name</code>.</p>
<pre><code>from jinja2 import Template
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

TEMPLATE = """
Hello {{ customize(name) }}!
"""

def customize(name):
    return name + "san"

template = Template(source=TEMPLATE)
template.globals["customize"] = customize

def udf_foo(name):
    return template.render(name=name)

convertUDF = udf(lambda z: udf_foo(z), StringType())

df = df.select(df.name)
df1 = df.withColumn("new_name", convertUDF(col("name")))
</code></pre>
<p>Executing the code, I get the following error which I think is because the template cannot be serialized successfully.</p>
<pre><code>An exception was thrown from the Python worker. Please see the stack trace below.
'pyspark.serializers.SerializationError: Caused by Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 189, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 541, in loads
return cloudpickle.loads(obj, encoding=encoding)
TypeError: Template.__new__() missing 1 required positional argument: 'source''. Full traceback below:
Traceback (most recent call last):
File "/databricks/spark/python/pyspark/serializers.py", line 189, in _read_with_length
return self.loads(obj)
File "/databricks/spark/python/pyspark/serializers.py", line 541, in loads
return cloudpickle.loads(obj, encoding=encoding)
TypeError: Template.__new__() missing 1 required positional argument: 'source'
</code></pre>
<p>I have tried using other serializers like Pickle, Kryo etc but the error persists.</p>
<ol>
<li>Does anyone think it might not be serialization related error?</li>
<li>Do you know how to fix this so that we can use Jinja2 with Pyspark?</li>
</ol>
<p>Thanks in advance!</p>
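<p>One common workaround, shown here without Spark since the idea is driver-agnostic: serialize only the template <em>source</em>, which is a plain string, and build the <code>Template</code> lazily inside the function that ships to the executors. Names are illustrative:</p>

```python
from jinja2 import Template

TEMPLATE_SOURCE = "Hello {{ customize(name) }}!"

def customize(name):
    return name + "san"

def udf_foo(name):
    # The Template is constructed on whatever worker runs this function, so
    # only the source string (trivially picklable) crosses the serialization
    # boundary -- the Template object itself is never pickled.
    template = Template(source=TEMPLATE_SOURCE)
    template.globals["customize"] = customize
    return template.render(name=name)

print(udf_foo("smaug"))  # Hello smaugsan!
```

<p>Wrapping <code>udf_foo</code> with <code>udf(..., StringType())</code> should then serialize cleanly, since cloudpickle only has to capture the string and the function.</p>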
| <python><apache-spark><pyspark><serialization><jinja2> | 2023-09-27 13:11:33 | 2 | 936 | smaug |
77,187,827 | 11,925,053 | Xcode project (originally Python Kivy) not building to iPhone | <p>Short summary:
The app, created with Python Kivy and converted to an Xcode project using kivy-ios/toolchain, runs on the simulator and in some cases can be built to my iPhone. But I don't understand what I am doing differently on the occasions where it does not run on my iPhone.</p>
<pre><code>-Mac M1 arm64
-Ventura 13.5.2
-Xcode15
-iOS17
</code></pre>
<p>Here are two of the errors I get when I fail to build. The second one is always the same; the first one seems to change the file name on occasion.</p>
<p>Error 1:</p>
<pre><code>Building for 'iOS', but linking in dylib (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics.tbd) built for 'iOS-simulator'
</code></pre>
<p>*Sometimes Error 1 will be for <code>AudioToolbox.framework/AudioToolbox.tbd</code> file.</p>
<p>Error2:</p>
<pre><code>Linker command failed with exit code 1 (use -v to see invocation)
</code></pre>
<p>I have created a text file that lists the settings of each Xcode project; there are a few differences. Below is a screenshot comparing these files. Oddly enough, the one on the left builds to the iPhone. I do not know where to find the <code>ARCHS = arm64</code> setting in Xcode.
<a href="https://i.sstatic.net/Owaob.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Owaob.png" alt="enter image description here" /></a></p>
<p>To make this file I used:</p>
<pre><code>xcodebuild -project openmindset.xcodeproj -target openmindset -configuration openmindset -showBuildSettings > openmindset_settings_02.txt
</code></pre>
<p>If there is something else better, please share.</p>
<p>Here is an abbreviated version of my log with my most recent attempts to try to make sense of what is going on.
<a href="https://i.sstatic.net/o554M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o554M.png" alt="enter image description here" /></a></p>
<p>Full version:
<a href="https://1drv.ms/x/s!AmCs1-5fbd9gncEVVGmWSY8B3QyGSA?e=n4fla2" rel="nofollow noreferrer">https://1drv.ms/x/s!AmCs1-5fbd9gncEVVGmWSY8B3QyGSA?e=n4fla2</a></p>
<p>I have seen posts on this topic that have suggested I need to remove arm64 from Excluded Architectures, since I am using an arm64 machine, but that doesn't seem to matter either. Or maybe I've not set the right argument for that parameter?
<a href="https://i.sstatic.net/lX9ID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lX9ID.png" alt="enter image description here" /></a></p>
<p>Honestly, I don't think I'm searching in the right ballpark; this is mainly to show the avenues I've explored. So if there are any ideas, even in a different ballpark, I'd be grateful.</p>
<p>Thanks in advance.</p>
| <python><ios><xcode><kivy> | 2023-09-27 13:09:35 | 3 | 309 | costa rica |
77,187,721 | 17,471,060 | Best way to filter a Polars dataframe by finding elements within multiple columns | <p>I would like to know an elegant way to filter a dataframe based on the condition that elements of a list are found in multiple columns of the dataframe. For example, I want to filter <code>df</code> to the rows where values from <code>to_keep</code> are found in column <code>c1</code> or <code>c2</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"id": range(11)})
df = df.with_columns(
pl.format("row_{}", "id"),
pl.linear_space(0, 20, 11).alias("c1"),
pl.linear_space(0, 10, 11).alias("c2")
)
to_keep = [5, 7, 16, 18]
cond1 = df['c1'].map_elements(lambda ser: True if ser in to_keep else False)
cond2 = df['c2'].map_elements(lambda ser: True if ser in to_keep else False)
print(df.filter(cond1 | cond2))
</code></pre>
<p>Returns the following -</p>
<pre><code>shape: (4, 3)
┌───────┬──────┬─────┐
│ id    ┆ c1   ┆ c2  │
│ ---   ┆ ---  ┆ --- │
│ str   ┆ f64  ┆ f64 │
╞═══════╪══════╪═════╡
│ row_5 ┆ 10.0 ┆ 5.0 │
│ row_7 ┆ 14.0 ┆ 7.0 │
│ row_8 ┆ 16.0 ┆ 8.0 │
│ row_9 ┆ 18.0 ┆ 9.0 │
└───────┴──────┴─────┘
</code></pre>
| <python><dataframe><python-polars> | 2023-09-27 12:55:50 | 0 | 344 | beta green |
77,187,616 | 2,956,276 | Python: Parse numbers with non-breaking space as group separator | <p>I want to parse numbers written according to a specific locale. I need to support any locale.</p>
<p>It seems that <code>locale.atof</code> is the proper solution:</p>
<pre class="lang-py prettyprint-override"><code>from locale import setlocale, LC_NUMERIC, atof
setlocale(LC_NUMERIC, ('en_US', 'utf-8'))
print(atof("1,234.56")) # output is 1234.56
</code></pre>
<p>But it does not work, for example, for the Czech locale:</p>
<pre class="lang-py prettyprint-override"><code>from locale import setlocale, LC_NUMERIC, atof
setlocale(LC_NUMERIC, ('cs_CZ', 'utf-8'))
print(atof("1 234,56")) # ValueError: could not convert string to float: '1 234.56'
</code></pre>
<p>The reason is that the Czech locale uses 'NARROW NO-BREAK SPACE' (U+202F) as the group separator.
When I try to run <code>atof("1\u202f234,56")</code>, the output is <code>1234.56</code>, as expected.</p>
<p>But almost nobody in Czech types this kind of space in numbers; people use the simple space (0x20) because it is much easier to type.
I can replace the regular space with a non-breaking space, or remove the regular space, before the <code>atof</code> call, but that would be a solution specific to the Czech locale only. And I worry that there can be similar issues in other locales too.</p>
<p>My question is: <strong>Is there some less strict version of <code>atof</code> method</strong> to get string with number in a format that people are used to using (in specified locale) and convert it to float number?</p>
<p>For example for <code>cs_CZ</code> locale it can take the string <code>"1 234,56"</code> and convert it to the float number <code>1234.56</code>?</p>
<p>I'm looking for a generic solution, not only a Czech-specific one.</p>
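<p>I don't know of a built-in lenient <code>atof</code>, but as a sketch of one generic approach: strip every Unicode whitespace character before parsing, since whitespace group separators across locales are all whitespace variants, and take the official separators from <code>locale.localeconv()</code> by default. The function name and parameters are mine:</p>

```python
import re
from locale import localeconv

def lenient_atof(text, decimal_sep=None, group_sep=None):
    """Locale-aware float parsing that treats any Unicode whitespace
    (regular space, NBSP, narrow NBSP, ...) as a group separator."""
    conv = localeconv()
    decimal_sep = decimal_sep or conv["decimal_point"]
    group_sep = group_sep or conv["thousands_sep"]
    # Drop all whitespace flavours first (\s matches U+00A0 and U+202F for str
    # patterns), then the official group separator, then normalize the decimal
    # separator to '.'.
    cleaned = re.sub(r"\s+", "", text)
    if group_sep:
        cleaned = cleaned.replace(group_sep, "")
    return float(cleaned.replace(decimal_sep, "."))

# Works whether the user typed U+0020 or the official U+202F:
print(lenient_atof("1 234,56", decimal_sep=",", group_sep="\u202f"))       # 1234.56
print(lenient_atof("1\u202f234,56", decimal_sep=",", group_sep="\u202f"))  # 1234.56
```

<p>The caveat is that this accepts some strings a strict parser would reject, which is exactly the leniency being asked for here.</p>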
| <python> | 2023-09-27 12:41:15 | 0 | 1,313 | eNca |
77,187,551 | 755,229 | 'map' object is not a mapping --- why does Python raise such an exception | <p>I think this may rather be an English grammar issue, but I would still like to understand why
Python says the result of a <code>map</code> operation is not a mapping.</p>
<pre><code>In [60]: a=[1,2,3,4]
In [61]: b=['a','b','c','d']
In [62]: {**map(a,b)}
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[62], line 1
----> 1 {**map(a,b)}
TypeError: 'map' object is not a mapping
In [63]:
</code></pre>
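<p>As an aside, if the goal was to build a dict from the two lists, the pairing tool is <code>zip</code>, not <code>map</code>:</p>

```python
a = [1, 2, 3, 4]
b = ['a', 'b', 'c', 'd']

# map(a, b) is also wrong for another reason: map's first argument must be a
# callable. zip() pairs the lists up, and dict() accepts pairs directly.
d = dict(zip(a, b))
print(d)  # {1: 'a', 2: 'b', 3: 'c', 4: 'd'}

# ** unpacking now works, because a dict satisfies the Mapping protocol
# (keys() plus __getitem__), which a lazy map iterator does not:
print({**d, 5: 'e'})  # {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}
```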
| <python><dictionary><mapping> | 2023-09-27 12:33:40 | 1 | 4,424 | Max |
77,187,538 | 14,649,310 | How to map a field to a different key for serialization and de-serialization with a Python dataclass | <p>I want to serialize and de-serialize some data with a dataclass, but one field in the received data has a different name than the class attribute I want to map it to. For example, the received data is
<code>{"accountId": 123, "shortKey": 54}</code> and I want to map it to a dataclass like:</p>
<pre><code>from dataclasses import dataclass
from dataclasses import field
from typing import Optional

@dataclass
class MySchema:
    # dataclass fields must carry type annotations, or @dataclass raises
    account_id: Optional[int] = field(default=None, metadata=dict(required=False))
    short_key: Optional[int] = field(default=None, metadata=dict(required=False))
</code></pre>
<p>but I want to map between accountId<->account_id and shortKey<->short_key. How can this be done? I saw the <code>data_key</code> option, but I read conflicting things about whether it works for both serialization and de-serialization. What is the way to do it?</p>
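<p>For reference, marshmallow's <code>data_key</code> does cover both directions: it is the external name used as input to <code>load</code> and as output of <code>dump</code>. If you'd rather stay with plain dataclasses, here is a small hand-rolled sketch of the same idea, storing the external name in the field metadata (the <code>from_dict</code>/<code>to_dict</code> helpers are mine):</p>

```python
from dataclasses import dataclass, field, fields
from typing import Optional

@dataclass
class MySchema:
    account_id: Optional[int] = field(default=None, metadata={"data_key": "accountId"})
    short_key: Optional[int] = field(default=None, metadata={"data_key": "shortKey"})

    @classmethod
    def from_dict(cls, data):
        # De-serialization: look each field up under its external name.
        return cls(**{
            f.name: data[f.metadata["data_key"]]
            for f in fields(cls)
            if f.metadata.get("data_key") in data
        })

    def to_dict(self):
        # Serialization: emit each field back under its external name.
        return {f.metadata["data_key"]: getattr(self, f.name) for f in fields(self)}

obj = MySchema.from_dict({"accountId": 123, "shortKey": 54})
print(obj)            # MySchema(account_id=123, short_key=54)
print(obj.to_dict())  # {'accountId': 123, 'shortKey': 54}
```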
| <python><python-dataclasses><marshmallow> | 2023-09-27 12:31:49 | 1 | 4,999 | KZiovas |
77,187,497 | 2,112,406 | No member named 'replace' in namespace 'std::ranges' message in github actions, even with -std=c++20 | <p>I have a Python package on GitHub that runs C++ code with pybind11. I figured I'd add GitHub Actions to run tests. The build fails on macos-latest (the irony being that it compiles just fine on my local macOS machine) with the following message:</p>
<pre><code>FAILED: CMakeFiles/sequence_analysis_cpp.dir/src/sequence.cpp.o
/Applications/Xcode_14.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DVERSION_INFO=0.1.0 -Dsequence_analysis_cpp_EXPORTS -isystem /Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -isystem /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-build-env-148llo6b/overlay/lib/python3.11/site-packages/pybind11/include -O3 -DNDEBUG -std=gnu++20 -isysroot /Applications/Xcode_14.2.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.1.sdk -mmacosx-version-min=12.6 -fPIC -MD -MT CMakeFiles/sequence_analysis_cpp.dir/src/sequence.cpp.o -MF CMakeFiles/sequence_analysis_cpp.dir/src/sequence.cpp.o.d -o CMakeFiles/sequence_analysis_cpp.dir/src/sequence.cpp.o -c /Users/runner/work/sequence_analysis/sequence_analysis/src/sequence.cpp
/Users/runner/work/sequence_analysis/sequence_analysis/src/sequence.cpp:101:18: error: no member named 'replace' in namespace 'std::ranges'
std::ranges::replace(codon, 'T', 'U');
~~~~~~~~~~~~~^
1 error generated.
</code></pre>
<p>It does set <code>-std=gnu++20</code> as expected. I have <code>#include <algorithm></code> in <code>sequence.cpp</code>.</p>
<p><code>CMakeLists.txt</code> has:</p>
<pre><code>set(CMAKE_CXX_STANDARD 20)
</code></pre>
<p><code>pip.yml</code> has:</p>
<pre><code>name: Pip
on:
workflow_dispatch:
pull_request:
push:
branches:
- master
jobs:
build:
strategy:
fail-fast: false
matrix:
platform: [windows-latest, macos-latest, ubuntu-latest]
python-version: ["3.11"]
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v4
with:
submodules: true
- uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Add requirements
run: python -m pip install --upgrade wheel setuptools biopython numpy
- name: Build and install
run: pip install --verbose .[test]
- name: Test
run: python -m pytest
</code></pre>
<p>What am I missing? <a href="https://github.com/sodiumnitrate/sequence_analysis/tree/main" rel="nofollow noreferrer">Here's the repo</a> in case I'm missing some critical info. This issue does not arise with ubuntu-latest, and windows-latest fails for other reasons (that I'm working on).</p>
| <python><c++><cmake><github-actions> | 2023-09-27 12:26:29 | 1 | 3,203 | sodiumnitrate |
77,187,461 | 662,509 | Getting ODBC Driver 17 error while connecting to azure SQL | <p>I'm getting the following error while connecting my code to Azure SQL from my local machine (Windows) and also when deployed to an Azure Function (Python/Linux)</p>
<blockquote>
<p>sqlalchemy.exc.OperationalError: (pyodbc.OperationalError) ('08001',
'[08001] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: A
non-recoverable error occurred during a database lookup.\r\n (11003)
(SQLDriverConnect); [08001] [Microsoft][ODBC Driver 17 for SQL
Server]Login timeout expired (0); [08001] [Microsoft][ODBC Driver 17
for SQL Server]A network-related or instance-specific error has
occurred while establishing a connection to SQL Server. Server is not
found or not accessible. Check if instance name is correct and if SQL
Server is configured to allow remote connections. For more information
see SQL Server Books Online. (11003)')</p>
</blockquote>
<p>I'm able to connect to the SQL server using SQL Server management studio</p>
<p>my sample code for connecting to SQL server</p>
<pre><code>from sqlalchemy import create_engine
connection_string = "mssql+pyodbc://user:password@azuresqlserver.database.windows.net/mydatabase?driver=ODBC+Driver+17+for+SQL+Server"
self.engine = create_engine(connection_string, pool_size=100, max_overflow=20)
self.engine.connect()
</code></pre>
<ul>
<li>I have SQL Server (2019) installed on my machine, and have also installed ODBC Driver 17 and 18</li>
<li>Downgraded python from 3.11 to 3.9 ( <a href="https://stackoverflow.com/questions/76764452/azure-functions-cant-open-lib-odbc-driver-17-for-sql-server">similar issue</a>)</li>
<li>Using SQL Server Management Studio, I'm able to connect to the SQL server from my Windows machine</li>
<li>I'm using VS Code to deploy to an Azure Function (serverless), hence a Docker image is not supported</li>
</ul>
| <python><sqlalchemy><azure-sql-database><pyodbc> | 2023-09-27 12:21:34 | 1 | 673 | AsitK |
77,187,280 | 8,790,507 | Is there any way to filter a multi-indexed pandas DataFrame using a dict? | <p>Please consider the following DataFrame:</p>
<pre><code>mi = pd.MultiIndex(
levels = [[1, 2, 3], ['red', 'green', 'blue'], ['a', 'b', 'c']],
codes = [[1,0,1,0], [0,1,1,2], [1,0,0,1]],
names = ["Key1", "Key2", "Key3"])
df = pd.DataFrame({
"values": [1, 2, 3, 4]
}, index = mi)
</code></pre>
<p>... which looks like this:
<a href="https://i.sstatic.net/3Do9a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Do9a.png" alt="enter image description here" /></a></p>
<p>Now I know how to filter this by values of the index levels, eg:</p>
<pre><code>df[
df.index.get_level_values("Key1").isin([1]) &
df.index.get_level_values("Key2").isin(["green"])
]
</code></pre>
<p>I'm trying to write a function which makes this operation less verbose, so I'd like to pass in a dict like: <code>{"Key1":1, "Key2":"green"}</code> to do the same thing.</p>
<p>The solution shouldn't hardcode the number of levels being filtered on, so that later I might filter by only one of the conditions, passing in <code>{"Key1":1}</code> or <code>{"Key2":"green"}</code>.</p>
<p>I don't know the syntax for constructing the predicate inside the <code>df[ ... ]</code> on-the-fly from a <code>dict</code>. Is this possible?</p>
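<p>One way to build that predicate on the fly is to AND the per-level masks together with <code>functools.reduce</code>. A sketch (the function name is mine):</p>

```python
from functools import reduce
from operator import and_

import pandas as pd

def filter_levels(df, criteria):
    """Filter a MultiIndexed frame by a {level_name: value-or-list} dict."""
    masks = (
        df.index.get_level_values(level).isin(
            vals if isinstance(vals, (list, tuple, set)) else [vals]
        )
        for level, vals in criteria.items()
    )
    # Combine however many masks the dict produced into one boolean mask.
    return df[reduce(and_, masks)]

mi = pd.MultiIndex(
    levels=[[1, 2, 3], ['red', 'green', 'blue'], ['a', 'b', 'c']],
    codes=[[1, 0, 1, 0], [0, 1, 1, 2], [1, 0, 0, 1]],
    names=["Key1", "Key2", "Key3"])
df = pd.DataFrame({"values": [1, 2, 3, 4]}, index=mi)

print(filter_levels(df, {"Key1": 1, "Key2": "green"})["values"].tolist())  # [2]
print(filter_levels(df, {"Key2": "green"})["values"].tolist())             # [2, 3]
```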
| <python><pandas> | 2023-09-27 11:54:34 | 3 | 1,594 | butterflyknife |
77,187,028 | 12,590,879 | Should I create a separate requests.Session object for each domain I access? | <p>Assuming that in my backend I'm hitting multiple different domains/IPs, should I create a different <code>requests.Session</code> object for each domain, to avoid ever having something like this happen?</p>
<pre><code>domain A is hit using the session, new TCP connection is created
domain A is hit using the session, the existing connection is used
domain B is hit using the session, no existing connection so a new TCP connection is created
domain A is hit using the session, it will attempt to re-establish a connection with domain A
</code></pre>
<p>Based on my understanding, if I instead have two different session objects, one for domain A and B, the connection to each server should not have to be reestablished anew that way (except when the server terminates it). Something like this:</p>
<pre><code>domain A is hit using sessionA, new TCP connection is created
domain B is hit using sessionB, new TCP connection is created
domain A is hit using sessionA, existing session's TCP connection is used
</code></pre>
<p><strong>Bonus question:</strong> Should I be handling any session termination myself or should I let the requests library to handle any session termination and have it automatically reestablish a new one if needed?</p>
<p>Thanks.</p>
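<p>Worth noting: a single <code>Session</code> already maintains a separate urllib3 connection pool per host, so the first scenario's re-establishment only happens when the per-host pool is exhausted or the server closes the connection. If explicit isolation is still wanted, a per-host session map is a small sketch (the class and method names are mine):</p>

```python
from urllib.parse import urlsplit

import requests

class PerHostSessions:
    """Lazily creates and reuses one requests.Session per host."""

    def __init__(self):
        self._sessions = {}

    def session_for(self, url):
        host = urlsplit(url).netloc
        # setdefault only constructs a Session the first time a host is seen.
        return self._sessions.setdefault(host, requests.Session())

    def get(self, url, **kwargs):
        return self.session_for(url).get(url, **kwargs)

    def close(self):
        for session in self._sessions.values():
            session.close()

pool = PerHostSessions()
# Same host -> same Session object; different host -> its own Session.
print(pool.session_for("https://a.example/x") is pool.session_for("https://a.example/y"))  # True
print(pool.session_for("https://a.example/x") is pool.session_for("https://b.example/"))   # False
```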
| <python><http><session><python-requests> | 2023-09-27 11:20:36 | 0 | 325 | Pol |
77,187,025 | 15,341,457 | Scrapy - xpath returns empty list | <p>I'm scraping restaurant reviews from yelp, specifically from this <a href="https://www.yelp.it/biz/roscioli-roma-4?rr=4" rel="nofollow noreferrer">url</a></p>
<p>I'm trying to get the list of review containers and, after testing with the chrome console, that would be given by the following xpath expression:</p>
<p><code>//li/div[@class='css-1qn0b6x']</code></p>
<p>However, by testing with scrapy shell, the following command returns an empty list</p>
<p><code>response.xpath("//li/div[@class='css-1qn0b6x']").extract()</code></p>
| <python><http><web-scraping><xpath><scrapy> | 2023-09-27 11:20:11 | 1 | 332 | Rodolfo |
77,187,001 | 12,881,307 | Clean Architecture: entities depending on entities | <p>I am writing a Python application following Robert Cecil Martin's Clean Architecture book. I have the following entities:</p>
<pre><code>@dataclass
class Factory():
name: str
latitude: float
longitude: float
@dataclass
class Event():
message: str
timestamp: datetime
source: Factory
</code></pre>
<p>I have also implemented a repository interface, <code>ReadOnlyRepository</code>, to read both types of entities from a database.</p>
<p>Due to the dependency rule, I understand that my entities should not store IDs of any kind.</p>
<p><strong>The Problem</strong></p>
<p>My current repository implementations (<code>EventRepository</code>, <code>FactoryRepository</code>) communicate with a PostgreSQL database in which I store the <code>source</code> field of my <code>Event</code> entity as an integer. The only way I can think of to get an <code>Event</code> from <code>EventRepository</code> is to use the stored source id and then use <code>FactoryRepository</code> to get the <code>Factory</code> entity. I'm sure this is not the intended way to access my data (no single responsibility, excess calls to the database, etc.), but I haven't found any way to solve this implementation issue.</p>
<p>I'm thinking I could try to get rid of the <code>source</code> field in <code>Event</code>, but the idea does not convince me. How can I get an <code>Event</code> entity without calling the <code>FactoryRepository</code>? Am I understanding something wrong about the entities?</p>
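<p>One option that keeps IDs out of the entities: let <code>EventRepository</code> perform a joined query internally and assemble the <code>Factory</code> itself, so callers only ever receive fully built <code>Event</code> objects in a single database round trip. A toy sketch, with an in-memory row list standing in for the SQL join (every name beyond the two entities is illustrative):</p>

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Factory:
    name: str
    latitude: float
    longitude: float

@dataclass
class Event:
    message: str
    timestamp: datetime
    source: Factory

class EventRepository:
    """Maps joined event+factory rows to entities in one place."""

    def __init__(self, rows):
        # rows stands in for `SELECT ... FROM events JOIN factories ...`
        self._rows = rows

    def list_events(self):
        return [
            Event(
                message=row["message"],
                timestamp=row["timestamp"],
                source=Factory(row["factory_name"], row["lat"], row["lon"]),
            )
            for row in self._rows
        ]

repo = EventRepository([{
    "message": "overheating", "timestamp": datetime(2023, 9, 27, 12, 0),
    "factory_name": "Plant A", "lat": 41.9, "lon": 12.5,
}])
events = repo.list_events()
print(events[0].source.name)  # Plant A
```

<p>The integer foreign key then lives only in the persistence layer's row format, never in the entities.</p>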
| <python><clean-architecture> | 2023-09-27 11:18:00 | 1 | 316 | Pollastre |
77,186,985 | 6,327,202 | Datadog python tracing floods with logs and errors | <p>I tried to remove Datadog Python tracing on my Ubuntu server with <code>pip uninstall ddtrace</code>, but it seems something went wrong.</p>
<p>When I try commands like <code>pip freeze</code> or even <code>supervisorctl status</code></p>
<p>I receive these logs constantly:</p>
<pre><code>INFO:datadog.autoinstrumentation(pid: 2467695): user-installed ddtrace not found, configuring application to use injection site-packages
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'ddtrace.bootstrap'
</code></pre>
<p>On <code>pip freeze</code> I also receive this error:</p>
<pre><code>ERROR: Exception:
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/commands/freeze.py", line 98, in run
    for line in freeze(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/operations/freeze.py", line 43, in freeze
    req = FrozenRequirement.from_dist(dist)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/operations/freeze.py", line 236, in from_dist
    editable = dist.editable
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/base.py", line 338, in editable
    return bool(self.editable_project_location)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/base.py", line 183, in editable_project_location
    egg_link_path = egg_link_path_from_sys_path(self.raw_name)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/base.py", line 429, in raw_name
    return self.metadata.get("Name", self.canonical_name)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/base.py", line 406, in metadata
    return self._metadata_cached()
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/base.py", line 393, in _metadata_cached
    metadata = self._metadata_impl()
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 201, in _metadata_impl
    metadata = self.read_text(metadata_name)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 180, in read_text
    content = self._dist.get_metadata(name)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1519, in get_metadata
    value = self._get(path)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1727, in _get
    with open(path, 'rb') as stream:
PermissionError: [Errno 13] Permission denied: '/opt/datadog/apm/library/python/ddtrace_pkgs/site-packages-ddtrace-py3.10-manylinux2014/protobuf-4.24.0.dist-info/METADATA'
</code></pre>
<p>Does anybody know how I can remove Datadog Python tracing completely?</p>
| <python><ubuntu><datadog> | 2023-09-27 11:15:40 | 1 | 1,165 | Snobby |
77,186,601 | 18,140,022 | Can I parameterise tables using Pandas read_sql function? | <p>I have a database with multiple projects which follows a structure similar to the following:</p>
<ul>
<li>Schemas for our client Bob: bob_legal, bob_docs, bob_misc</li>
<li>Schemas for our client Jess: jess_legal, jess_docs, jess_misc</li>
</ul>
<p>I am currently using f-strings to build the query:</p>
<pre class="lang-py prettyprint-override"><code>query = f"SELECT * FROM {client}_legal.my_table WHERE country = '{country_name}';"
result = pd.read_sql(query, con)
</code></pre>
<p>But I want to parameterise building the query. I tried using the <code>params</code> parameter in pandas' <code>read_sql</code> function, but it doesn't work, as it outputs the query like this:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM 'bob'_legal.my_table WHERE country = 'canada';
</code></pre>
<p>The schema prefix <code>bob</code> is now wrapped in single quotes, which creates a SQL syntax error. Is there a different way to achieve this while still using the <code>read_sql</code> function? There does not seem to be an option for parameterising table names.</p>
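<p>To illustrate why <code>params</code> behaves this way: placeholders quote <em>values</em> only, while identifiers (schemas, tables) have to be validated separately. Below is a minimal sketch with stdlib <code>sqlite3</code> (which has no schemas, so a <code>bob_legal</code> table stands in for <code>bob_legal.my_table</code>; the client names are made up): the identifier is checked against an allow-list before being formatted in, and the value goes through a placeholder. <code>pd.read_sql</code> accepts <code>params</code> in the same spirit.</p>

```python
import sqlite3

ALLOWED_CLIENTS = {"bob", "jess"}  # identifiers must come from a fixed allow-list

def fetch_legal(con, client, country_name):
    if client not in ALLOWED_CLIENTS:
        raise ValueError(f"unknown client: {client!r}")
    # The identifier is formatted in (after validation); the value is a placeholder.
    query = f"SELECT * FROM {client}_legal WHERE country = ?"
    return con.execute(query, (country_name,)).fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bob_legal (country TEXT, doc TEXT)")
con.execute("INSERT INTO bob_legal VALUES ('canada', 'contract')")
rows = fetch_legal(con, "bob", "canada")
```

<p>The allow-list keeps the f-string safe: anything not in the fixed set of known clients is rejected before it reaches the SQL text.</p>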
| <python><pandas><sqlalchemy> | 2023-09-27 10:23:15 | 1 | 405 | user18140022 |
77,186,338 | 12,285,101 | Trying to read list of tuples with date-pairs in bash is failing using argparse | <p>I have a Python script where one of the parameters is a list of tuples:</p>
<pre><code>def my_func(my_date_list: list = None):
    # List of dates to be used. Example: [('2017-07-02', '2017-08-15'), ('2019-09-02', '2019-11-21')]
    ...
</code></pre>
<p>This function works when I run it from my Jupyter notebook.
However, when I try to run it from bash, it can't access the dates correctly.
This is what I did with argparse, plus the bash command:</p>
<pre><code>import argparse
import ast

def parse_args():
    '''Parser for my script'''
    parser = argparse.ArgumentParser()
    parser.add_argument('--date_list', type=str, default=None, required=False,
                        help="List of dates to be used. Example: [('2017-07-02', '2017-08-15'), ('2019-09-02', '2019-11-21')]")
    args = parser.parse_args()
    if args.date_list is not None:
        date_list = ast.literal_eval(args.date_list)
        print(date_list)
</code></pre>
<p>my bash command:</p>
<pre><code>python3 main_func.py --date_list "[('2017-07-02','2020-03-02')]"
#this fails when the script gets to the part where it should work with the dates
>>>end_date=date_range[LOC_1],
>>>IndexError: string index out of range
</code></pre>
<p>It's important to mention that <strong>this script does work when run from my Jupyter notebook, so I believe the problem is with parsing the argument</strong>.<br />
I am looking for a solution for how to use the date-list parameter when I run the script from bash.</p>
| <python><bash><list><tuples><argparse> | 2023-09-27 09:45:19 | 1 | 1,592 | Reut |
77,186,131 | 1,618,465 | Detect if connection is HTTP or HTTPS and handle it in python | <p>I have a socket listening on a port and I don't know what kind of connections I'm getting on it. It can be HTTP, HTTPS or something different. I would like my server to handle at least HTTP and HTTPS by identifying the first bytes of traffic that it receives. I wrote some code that reads the first received byte to infer the connection type.</p>
<p>So far I have:</p>
<pre><code>from enum import Enum

class PacketType(Enum):
    HTTP = 1
    HTTPS = 2
    HTTP2 = 3
    OTHER = 4

def guessPacketType(csock):
    HTTP2_PREAMBLE = b"PRI * HTTP/2.0"
    firstByte = csock.recv(1)
    if firstByte == b"\x16":  # TLS handshake record type
        return (PacketType.HTTPS, firstByte)
    elif firstByte == HTTP2_PREAMBLE[:1]:
        dataRead = firstByte + csock.recv(len(HTTP2_PREAMBLE) - 1)
        if dataRead == HTTP2_PREAMBLE:
            return (PacketType.HTTP2, dataRead)
        else:
            try:
                dataRead.decode('ascii')
            except UnicodeDecodeError:
                return (PacketType.OTHER, dataRead)
            else:
                return (PacketType.HTTP, dataRead)
    elif firstByte in [b"G", b"H", b"P", b"D", b"C", b"O", b"T"]:
        # HTTP more likely
        return (PacketType.HTTP, firstByte)
    else:
        # something else
        return (PacketType.OTHER, firstByte)
if __name__ == '__main__':
    import socket
    import struct

    SO_ORIGINAL_DST = 80
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('0.0.0.0', 2000))
    s.listen(10)
    while True:
        csock, caddr = s.accept()
        orig_dst = csock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        packetType, readBytes = guessPacketType(csock)
        data = bytearray()
        while True:
            packet = csock.recv(1024)
            if not packet:
                break
            data.extend(packet)
        csock.close()
        orig_port = struct.unpack('>H', orig_dst[2:4])
        orig_addr = socket.inet_ntoa(orig_dst[4:8])
        print('connection from', caddr)
        print('connection to', (orig_addr, orig_port))
</code></pre>
<p>How can I handle an SSL connection as soon as I detect that the client is requesting SSL? I've checked the <a href="https://docs.python.org/3/library/ssl.html" rel="nofollow noreferrer">SSL/TLS socket wrapper library</a>, but it seems you need to create an SSL-wrapped socket before listening, which in my case I can't. For HTTP I can just read the entire data and parse it out later, but what about SSL? How should I do the TLS negotiation?</p>
<p>I've <a href="https://httptoolkit.com/blog/http-https-same-port/" rel="nofollow noreferrer">based my code on this post</a>.</p>
<p>Any ideas?</p>
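<p>One possible direction (a sketch, not a full solution): peek at the first byte with <code>MSG_PEEK</code> so it is not consumed, and only then decide whether to hand the accepted socket to <code>ssl.SSLContext.wrap_socket</code>, which does work on an already-accepted socket with <code>server_side=True</code>. The certificate path here is a hypothetical placeholder:</p>

```python
import socket
import ssl

def upgrade_if_tls(csock, certfile="server.pem"):
    # MSG_PEEK looks at pending bytes without removing them from the buffer,
    # so the TLS ClientHello (or HTTP request) is still intact afterwards.
    first = csock.recv(1, socket.MSG_PEEK)
    if first == b"\x16":  # TLS handshake record
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile)  # hypothetical cert/key bundle
        return ctx.wrap_socket(csock, server_side=True)
    return csock  # plain HTTP (or something else): leave the socket as-is
```

<p>The returned object exposes the same <code>recv</code>/<code>send</code> interface either way, so the rest of the handler does not need to care which branch was taken.</p>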
| <python><http><sockets><ssl><https> | 2023-09-27 09:17:07 | 0 | 1,961 | user1618465 |
77,186,059 | 2,178,942 | Generating an array of random numbers with conditions | <p>I have a list of length 64, such as <code>first_list = [64, 27, 99, 133, 0, 41, ... ]</code>. The numbers in it are a random selection from <code>0</code> to <code>150</code> (the minimum value is <code>0</code> and the maximum value is <code>149</code>).</p>
<p>I want to generate several other lists of random numbers with the same minimum and maximum condition (between <code>0</code> and <code>149</code>), where additionally, in each new list (let's call it <code>new_list</code>), <code>new_list[0] != first_list[0]</code>, <code>new_list[1] != first_list[1]</code>, <code>new_list[2] != first_list[2]</code>, ...</p>
<p>Is there any fast way of implementing this?</p>
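<p>A sketch of one fast way (assuming values are the integers 0–149 inclusive): add a random non-zero offset modulo 150 to each element, which is guaranteed to land on a different value without any rejection loop:</p>

```python
import random

N_VALUES = 150  # values are 0..149

def shifted_list(first_list, rng=random):
    # x + k (mod 150) with k in 1..149 can never equal x,
    # and every other value in 0..149 is equally likely.
    return [(x + rng.randint(1, N_VALUES - 1)) % N_VALUES for x in first_list]

first_list = [random.randrange(N_VALUES) for _ in range(64)]
new_list = shifted_list(first_list)
```

<p>Each position draws uniformly from the 149 allowed values, so no re-rolling is ever needed, and generating many such lists is just repeated calls to <code>shifted_list</code>.</p>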
| <python><arrays><python-3.x><list><random> | 2023-09-27 09:07:48 | 3 | 1,581 | Kadaj13 |
77,186,034 | 4,445,832 | How to replace string keeping associated info in python | <p>Given an <strong>ordered</strong> list of words with associated info (or a list of tuples). I want to replace some of the strings with others but keep track of the associated info.</p>
<p>Let's say that we have a simple case where our input data is two list:</p>
<pre><code>words = ["hello", "I", "am", "I", "am", "Jone", "101"]
info = ["1", "3", "23", "4", "6", "5", "12"]
</code></pre>
<p>input could also be just a list of tuples:</p>
<pre><code>list_tuples = list(zip(words, info))
</code></pre>
<p>Each item of <code>words</code> has an associated item (with the same index) from <code>info</code>, e.g. "hello" corresponds to "1" and the second "I" corresponds to "4".</p>
<p>I want to apply some normalization rules to transform them into:</p>
<pre><code>words = ["hello", "I'm", "I'm", "Jone", "one hundred and one"]
info = ["1", ["3", "23"], ["4", "6"], "5", "12"]
</code></pre>
<p>or to another possible solution:</p>
<pre><code>words = ["hello", "I'm", "I'm", "Jone", "one", "hundred", "and", "one"]
info = ["1", ["3", "23"], ["4", "6"], "5", "12", "12", "12", "12"]
</code></pre>
<p>Note this is a simple case, and the idea is to apply multiple normalization rules (numbers to words, substitutions, other contractions, etc.). I know how to transform one string into another using regex, but in that case I lose the associated information:</p>
<pre><code>import re

def normalize_texts_loosing_info(text):
    # Normalization rules
    text = re.sub(r"I am", "I\'m", text)
    text = re.sub(r"101", "one hundred and one", text)
    # other normalization rules, e.g.
    # text = re.sub(r"we\'ll", "we will", text)
    # text = re.sub(r"you are", "you\'re", text)
    # ....
    return text.split()

words = ["hello", "I", "am", "I", "am", "Jone", "101"]
print(words)
print(" ".join(words))
output = normalize_texts_loosing_info(" ".join(words))
print(output)
</code></pre>
<p>The question is: how can I apply some transformations to an <strong>ordered</strong> string/list of words but keep the associated info of those words?</p>
<p>PS: Thank you for all the useful comments.</p>
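<p>A sketch of one possible approach (the rule set is illustrative): apply the rules at the token level over the list of (word, info) tuples, so each rule maps a window of source tokens to replacement words, and the matched tokens' info travels with the output — merged into a list on contraction, duplicated on expansion:</p>

```python
def normalize_with_info(pairs, rules):
    # rules: {tuple of source words: tuple of replacement words}
    out = []
    i = 0
    while i < len(pairs):
        for src, dst in rules.items():
            window = tuple(w for w, _ in pairs[i:i + len(src)])
            if window == src:
                infos = [inf for _, inf in pairs[i:i + len(src)]]
                merged = infos if len(infos) > 1 else infos[0]
                for word in dst:        # expansion duplicates the info
                    out.append((word, merged))
                i += len(src)
                break
        else:                           # no rule matched: keep the pair as-is
            out.append(pairs[i])
            i += 1
    return out

rules = {
    ("I", "am"): ("I'm",),
    ("101",): ("one", "hundred", "and", "one"),
}
pairs = list(zip(["hello", "I", "am", "I", "am", "Jone", "101"],
                 ["1", "3", "23", "4", "6", "5", "12"]))
result = normalize_with_info(pairs, rules)
```

<p>This reproduces the second desired output above; regex-based rules could be supported the same way by matching against the joined window text instead of exact tuples.</p>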
| <python><regex> | 2023-09-27 09:04:51 | 4 | 714 | ivangtorre |
77,185,760 | 7,692,855 | Python mock and call assertion | <p>I am trying to write a Python unit test to assert that a scoped_session's <code>.commit()</code> is called.</p>
<p>main.py</p>
<pre><code>from database import DBSession

def deactivate_user(user_id):
    db_session = DBSession()
    user = User(db_session).get(user_id)
    user.is_active = False
    db_session.commit()

def get_account(user_id):
    pass
</code></pre>
<p>database.py</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

import settings

session_factory = sessionmaker(bind=create_engine(settings.CONNECTION_STRING))
DBSession = scoped_session(session_factory)
</code></pre>
<p>unit_test.py</p>
<pre><code>from main import deactivate_user

def test_deactivate_user(mocker):
    mocked_account = mocker.patch('main.get_account', return_value=get_test_account())
    mocked_session = mocker.patch('database.DBSession', autospec=True)
    deactivate_user(123)
    mocked_account.assert_called()
    mocked_session.commit.assert_called()
</code></pre>
<p><code>mocked_account.assert_called()</code> works correctly.</p>
<p>However, <code>mocked_session.commit.assert_called()</code> does not work.</p>
<p><code>E AssertionError: Expected 'commit' to have been called.</code></p>
<p>Other relevant information:</p>
<p><code>type(mocked_session) == <class 'unittest.mock.MagicMock'></code></p>
<p><code>mocked_session.mock_calls == []</code></p>
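<p>For reference, a stdlib-only sketch of how <code>MagicMock</code> records such calls (no pytest-mock involved): when the production code does <code>DBSession()</code> and commits on the result, the call is recorded on the mock's <code>return_value</code>, not on the mock itself:</p>

```python
from unittest import mock

# Stand-in for a patched DBSession factory (hypothetical, not the real scoped_session)
factory = mock.MagicMock()

# What deactivate_user effectively does:
session = factory()   # DBSession()
session.commit()      # db_session.commit()

# The commit call lives on the object the factory returned:
factory.commit.assert_not_called()
factory.return_value.commit.assert_called_once()
```

<p>So even when a patch is targeted at the right name, <code>mocked_session.commit</code> is the wrong place to look; <code>mocked_session.return_value.commit</code> is where a recorded call would show up.</p>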
| <python><unit-testing><mocking><python-unittest><python-unittest.mock> | 2023-09-27 08:28:38 | 2 | 1,472 | user7692855 |
77,185,692 | 7,437,143 | How to resolve E0611: No name 'Test_HC' in module 'test' (no-name-in-module)? | <h2>Context</h2>
<p>I have a pip package called <code>something</code> with tree structure:</p>
<pre><code>src/something/__main__.py
src/something/__init__.py
src/something/hardcoded.py
src/something/bike/ride.py
test/__init__.py
test/bike/test_ride.py
test/Test_HC.py
</code></pre>
<p>There is a class named <code>Hardcoded_testdata</code> within <code>Test_HC.py</code>.
When I run pylint from pre-commit:</p>
<pre><code># Performs static code analysis to check for programming errors.
- repo: local
  hooks:
    - id: pylint
      name: pylint
      entry: pylint
      language: system
      types: [python]
      args:
        [
          # "--init-hook='import sys; sys.path.append(\"/home/name/git/something/test/\")'",
          "--init-hook='from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))'",
          # "--init-hook='from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))'"
          "-rn", # Only display messages
          "-sn", # Don't display the score
        ]
      exclude: test/test_files/
</code></pre>
<p>I get the error:</p>
<pre><code>test/bike/test_ride.py:4:0: E0611: No name 'Test_HC' in module 'test' (no-name-in-module)
</code></pre>
<p>on import:</p>
<pre class="lang-py prettyprint-override"><code>from test.hardcoded_testdata import TestHC
</code></pre>
<p>Looking at <a href="https://pypi.org/project/flake8/#files" rel="nofollow noreferrer">flake8</a>, it seems standard practice to not include the test files in the pip package, hence putting them in a separate folder (named <code>tests</code>) at the root of the repository.
Accordingly, I would like to also not pollute my pip package source code with helper files for tests (like <code>hardcoded_testdata.py</code>).</p>
<h2>Approaches</h2>
<p>The first solution I tried is to "tell pylint where to look for <code>hardcoded_testdata.py</code>". So after applying <a href="https://stackoverflow.com/a/39207275/7437143">this</a> answer like:</p>
<pre class="lang-yaml prettyprint-override"><code># Performs static code analysis to check for programming errors.
- repo: local
hooks:
- id: pylint
name: pylint
entry: pylint
language: system
types: [python]
args:
[
# "--init-hook='import sys; sys.path.append(\"/home/name/git//something/test/\")'",
"--init-hook='from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))'",
# "--init-hook='from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))'"
"-rn", # Only display messages
"-sn", # Don't display the score
]
exclude: test/test_files/
</code></pre>
<p>I still get the error. I even tried hardcoding the filepath (which is not desirable, because I would like other developers to also be able to work on this project), and that did not work either.</p>
<p>I also included the <code>__init__.py</code> file in the <code>/test/</code> directory; however, that does not resolve the issue.</p>
<p>When I run <code>python -m pytest</code>, it imports the file perfectly fine.</p>
<h2>Question</h2>
<p>How can I ensure pylint is able to find the <code>test/hardcoded_testdata.py</code> file and/or resolve the E0611 error?</p>
| <python><pylint> | 2023-09-27 08:18:09 | 1 | 2,887 | a.t. |
77,185,672 | 10,713,813 | Insert image in pdf file using python | <p>I want to insert an image into a PDF using a Python script. I have both a JPG and a PNG file. I want to be able to specify a page number, x and y coordinates, and the height and width of the image. The image should then be inserted at the given point.</p>
<p>I have found this question: <a href="https://stackoverflow.com/questions/13276409/how-to-add-image-to-pdf-file-in-python">How to add image to PDF file in Python?</a>, but the answer is only providing a way to overlay another pdf page on an existing one.</p>
| <python><image><pdf> | 2023-09-27 08:13:25 | 2 | 320 | wittn |
77,185,621 | 11,167,163 | Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas | <p>I have the code below, which currently works as expected but will stop working in a future version of pandas:</p>
<pre><code>total.name = 'New_Row'
total_df = total.to_frame().T
total_df.at['New_Row', 'CURRENCY'] = ''
total_df.at['New_Row', 'MANDATE'] = Portfolio
total_df.at['New_Row', 'COMPOSITE'] = 'GRAND TOTAL'
total_df.set_index('COMPOSITE',inplace=True)
</code></pre>
<p>since the following warning is raised:</p>
<pre><code>FutureWarning: Setting an item of incompatible dtype is deprecated and
will raise in a future error of pandas. Value 'GRAND TOTAL' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
total_df.at['New_Row', 'COMPOSITE'] = 'GRAND TOTAL'
</code></pre>
<p>How can I fix this?</p>
<p>The variable <code>total</code> is:</p>
<pre><code>CURRENCY
MANDATE Mandate_Test
USD AMOUNT 123
LOCAL AMOUNT 12
Beg. Mkt 123
End. Mkt 456
Name: New_Row, dtype: object
</code></pre>
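<p>One possible fix, shown here on a toy frame (column names borrowed from above, values made up): explicitly cast the float column to <code>object</code> before assigning the string, exactly as the warning suggests:</p>

```python
import pandas as pd

total_df = pd.DataFrame({"COMPOSITE": [1.5], "CURRENCY": [2.0]}, index=["New_Row"])

# Cast first, so the string assignment no longer changes the column's dtype
total_df["COMPOSITE"] = total_df["COMPOSITE"].astype(object)
total_df.at["New_Row", "COMPOSITE"] = "GRAND TOTAL"
```

<p>After the cast, the <code>.at</code> assignment stores the string without any implicit dtype change, so the <code>FutureWarning</code> no longer applies.</p>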
| <python><pandas> | 2023-09-27 08:03:42 | 3 | 4,464 | TourEiffel |
77,185,610 | 11,932,905 | Pandas: Assign same cluster id to records based on common groups in different columns | <p>I have a dataframe like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: left;">A</th>
<th style="text-align: left;">B</th>
<th style="text-align: left;">C</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">111</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">222</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">555</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">444</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">333</td>
<td style="text-align: left;">111</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">333</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: left;">333</td>
<td style="text-align: left;">555</td>
<td style="text-align: left;">777</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">666</td>
<td style="text-align: left;">777</td>
</tr>
</tbody>
</table>
</div>
<pre><code>df = pd.DataFrame({
    'A': [111,111,111,222,222,222,333,444],
    'B': [222,222,111,222,333,444,555,666],
    'C': [111,222,555,444,111,333,777,777]
})
</code></pre>
<p>I want to create a new column 'cluster' and assign the same id to records which are connected directly or through a common group in one of the columns.<br />
For example, the first 3 records are connected by the same value in 'A', but they are also connected to other records that share the values '222' and '111' in column 'B', and to all records that share '111', '222' or '555' in column 'C'.<br />
So basically, the first 6 records should all have the same cluster id.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: left;">A</th>
<th style="text-align: left;">B</th>
<th style="text-align: left;">C</th>
<th style="text-align: left;">cluster</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">555</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">333</td>
<td style="text-align: left;">111</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">222</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">333</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: left;">333</td>
<td style="text-align: left;">555</td>
<td style="text-align: left;">777</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">666</td>
<td style="text-align: left;">777</td>
<td style="text-align: left;">2</td>
</tr>
</tbody>
</table>
</div>
<p>Records 4-6 are connected to 1-3 as they form a group in column A and they are connected to previous records through columns B and C.</p>
<p>I was playing with multiple consecutive apply functions on pairs of columns, but am now thinking of applying connected components here; I just can't figure out how to do that.</p>
<p>Also, the main problem is that this dataset is huge, > 30 000 000 records.</p>
<p>Appreciate any help.</p>
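<p>A sketch of the connected-components idea with a stdlib union-find (no graph library): treat each row and each <em>namespaced</em> column value as a node (so <code>'A:111'</code> and <code>'B:111'</code> stay distinct), and union every row with its three values:</p>

```python
def cluster_rows(rows, columns=("A", "B", "C")):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Union each row node with its namespaced column-value nodes
    for i, row in enumerate(rows):
        for col, val in zip(columns, row):
            union(("row", i), (col, val))

    # Relabel roots as consecutive cluster ids, in row order
    ids, labels = {}, []
    for i in range(len(rows)):
        root = find(("row", i))
        labels.append(ids.setdefault(root, len(ids) + 1))
    return labels

rows = list(zip([111, 111, 111, 222, 222, 222, 333, 444],
                [222, 222, 111, 222, 333, 444, 555, 666],
                [111, 222, 555, 444, 111, 333, 777, 777]))
clusters = cluster_rows(rows)
```

<p>Union-find runs in near-linear time, so 30M rows stay tractable; with pandas, the row tuples can come from <code>df[['A', 'B', 'C']].itertuples(index=False)</code> and the labels be assigned back as the 'cluster' column.</p>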
| <python><pandas><graph><group-by><connected-components> | 2023-09-27 08:01:25 | 1 | 608 | Alex_Y |
77,185,476 | 1,294,704 | Why BeautifulSoup find_all not returning elements with <br> in them? | <p>Environment:</p>
<ul>
<li>Python 3.9.4</li>
<li>beautifulsoup4==4.12.2</li>
</ul>
<p>Code:</p>
<pre><code>from bs4 import BeautifulSoup
test_content = '''<html><head></head><body><p>123</p><p>123<br>123</p></body></html>'''
bs = BeautifulSoup(test_content, 'html.parser')
</code></pre>
<p>Why does <code>bs.find_all('p')</code> return all elements, while <code>bs.find_all('p', string=True)</code> only returns elements without <code><br></code> in them?</p>
<pre><code>>>> bs.find_all('p')
[<p>123</p>, <p>123<br/>123</p>]
>>> bs.find_all('p', string=True)
[<p>123</p>]
>>> import re
>>> bs.find_all('p', string=re.compile('.+'))
[<p>123</p>]
</code></pre>
<p>I've searched through docs of BeautifulSoup yet found nothing related.</p>
<p>My question is: why does adding <code>string=True</code> make <code>find_all</code> not return elements with <code><br></code> tags?</p>
<p>And how can I find all elements (with or without <code><br></code> tags)? Not passing the <code>string</code> arg doesn't help here, because my actual need is to find elements with certain keywords, e.g. <code>string=re.compile('KEYWORD')</code>.</p>
| <python><python-3.x><beautifulsoup><web-crawler> | 2023-09-27 07:39:09 | 2 | 801 | wings |
77,185,386 | 13,975,077 | Python Rich Live not working in Intellij IDE | <p>I have the following example of Rich Live from the official examples of Rich. (<a href="https://github.com/Textualize/rich/blob/master/examples/layout.py" rel="nofollow noreferrer">layout.py</a>)</p>
<p><strong>Code</strong></p>
<pre><code>from datetime import datetime
from time import sleep

from rich.align import Align
from rich.console import Console
from rich.layout import Layout
from rich.live import Live
from rich.text import Text

console = Console()
layout = Layout()

layout.split(
    Layout(name="header", size=1),
    Layout(ratio=1, name="main"),
    Layout(size=10, name="footer"),
)

layout["main"].split_row(Layout(name="side"), Layout(name="body", ratio=2))

layout["side"].split(Layout(), Layout())

layout["body"].update(
    Align.center(
        Text(
            """This is a demonstration of rich.Layout\n\nHit Ctrl+C to exit""",
            justify="center",
        ),
        vertical="middle",
    )
)

class Clock:
    """Renders the time in the center of the screen."""

    def __rich__(self) -> Text:
        return Text(datetime.now().ctime(), style="bold magenta", justify="center")

layout["header"].update(Clock())

with Live(layout, screen=True, redirect_stderr=False) as live:
    try:
        while True:
            sleep(1)
    except KeyboardInterrupt:
        pass
</code></pre>
<p><strong>Problem</strong></p>
<p>This works as expected when I run <code>python layout.py</code> in PowerShell,
but if I click Run in the IntelliJ IDE, it does not work.</p>
<p>The following is my configuration
<a href="https://i.sstatic.net/MyrBd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MyrBd.png" alt="enter image description here" /></a></p>
| <python><intellij-idea><pycharm><rich> | 2023-09-27 07:24:22 | 1 | 800 | Yogesh |
77,185,287 | 1,236,858 | Pymongo: Handling SQL Injection | <p>I'm using Pymongo for database operations, and it seems that the command I'm using is being flagged as having an injection vulnerability.</p>
<p>Here is my original query:</p>
<pre><code>e = client[db_name]['request'].find_one_and_update(filter={'_id': ObjectId(request_id)},
                                                   update={'$unset': reset_data},
                                                   return_document=ReturnDocument.AFTER)
</code></pre>
<p>I'm changing it to use mongosanitizer to sanitize the query, so it becomes:</p>
<pre><code>from mongosanitizer.sanitizer import sanitize
...
find_query = {'_id': ObjectId(request_id)}
sanitize(find_query)
e = client[db_name]['request'].find_one_and_update(filter=find_query,
                                                   update={'$unset': reset_data},
                                                   return_document=ReturnDocument.AFTER)
</code></pre>
<p>This seems to work fine. But there is a problem if the query contains a list, for example:</p>
<pre><code>find_query = {"_id": {"$in": req_ids}}
</code></pre>
<p>where <code>req_ids</code> is an array. Basically, the <code>sanitize</code> function removes everything that has a <code>$</code>, but in some cases the operator is actually necessary. So how do I sanitize such a query?</p>
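<p>A sketch of one possible direction (a hypothetical helper, not part of mongosanitizer): keep an explicit allow-list of operators you trust, and strip only <code>$</code>-prefixed keys outside it:</p>

```python
ALLOWED_OPERATORS = {"$in"}  # extend deliberately, operator by operator

def sanitize_query(query):
    # Recursively drop $-prefixed keys unless they are explicitly allowed
    if isinstance(query, dict):
        return {k: sanitize_query(v) for k, v in query.items()
                if not k.startswith("$") or k in ALLOWED_OPERATORS}
    if isinstance(query, list):
        return [sanitize_query(v) for v in query]
    return query

clean = sanitize_query({"_id": {"$in": [1, 2]}, "name": {"$where": "1 == 1"}})
```

<p>Untrusted input should still never end up in operator position at all; values such as <code>req_ids</code> can additionally be validated element-wise (e.g. each one must construct a valid <code>ObjectId</code>) before the query is built.</p>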
| <python><mongodb><pymongo> | 2023-09-27 07:06:00 | 0 | 7,307 | rcs |
77,185,218 | 18,904,265 | Is it possible to redirect a get request to another API? | <p>I want to use FastAPI as a translator/interface to an InfluxDB database. This lets me simplify the endpoints other apps/clients need to call.</p>
<p>The main endpoints would be /preview, where you could get a small preview of the datasets you want, and /data, where you would get a full copy of the data. Those datasets, however, can be quite large (sometimes up to hundreds of MB or a few GB). For that reason, I don't want to first load the data from InfluxDB into FastAPI and then from FastAPI to the client, but instead "redirect" the client's request, with the proper parameters set, to InfluxDB. Is something like that possible at all?</p>
<p>I made a picture to illustrate what I mean:
<a href="https://i.sstatic.net/RwrCU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RwrCU.jpg" alt="requests flow" /></a></p>
| <python><fastapi><influxdb> | 2023-09-27 06:54:08 | 0 | 465 | Jan |
77,184,639 | 2,975,438 | Button to download a file with Plotly Dash | <p>I've built a Plotly Dash app that allows users to navigate through directories and download files. The files are .log files and are converted into .csv format before download.</p>
<p>The issue I'm facing is with the download functionality. When I first click the download button, it downloads the previously requested file (or, on the very first click, it downloads an HTML page instead). Only when I click the download button a second time does it download the correct file.</p>
<p>Here's the code, where file_path is the path to the log file to be converted and downloaded (note that the <code>update_download_link</code> callback is the one that does not work correctly):</p>
<pre><code>import datetime
import os
from pathlib import Path
import dash_bootstrap_components as dbc
import pandas as pd
from dash import ALL, Dash, Input, Output, State, callback_context, html, dcc
from dash.exceptions import PreventUpdate
from icons import icons
import io
import time
import uuid
def serve_layout():
    app_layout = html.Div([
        html.Link(
            rel="stylesheet",
            href="https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.3/gh-fork-ribbon.min.css"),
        html.Br(), html.Br(),
        dbc.Row([
            dbc.Col(lg=1, sm=1, md=1),
            dbc.Col([
                dcc.Store(id='stored_cwd', data=os.getcwd()),
                html.H1('File Browser'),
                html.Hr(), html.Br(), html.Br(), html.Br(),
                html.H5(html.B(html.A("⬆️ Parent directory", href='#',
                                      id='parent_dir'))),
                html.H3([html.Code(os.getcwd(), id='cwd')]),
                html.Br(), html.Br(),
                html.Div(id='cwd_files',
                         style={'height': 500, 'overflow': 'scroll'}),
            ], lg=10, sm=11, md=10)
        ]),
        dcc.Download(id="download"),
        html.A(
            "Download CSV",
            id="download_csv",
            className="btn btn-outline-secondary btn-sm",
            href="",
            download=""
        )
    ] + [html.Br() for _ in range(15)])
    return app_layout
@app.callback(
    Output('cwd', 'children'),
    Input('stored_cwd', 'data'),
    Input('parent_dir', 'n_clicks'),
    Input('cwd', 'children'),
    prevent_initial_call=True)
def get_parent_directory(stored_cwd, n_clicks, currentdir):
    triggered_id = callback_context.triggered_id
    if triggered_id == 'stored_cwd':
        return stored_cwd
    parent = Path(currentdir).parent.as_posix()
    return parent
@app.callback(
    Output('cwd_files', 'children'),
    Input('cwd', 'children'))
def list_cwd_files(cwd):
    path = Path(cwd)
    all_file_details = []
    if path.is_dir():
        files = sorted(os.listdir(path), key=str.lower)
        for i, file in enumerate(files):
            filepath = Path(file)
            full_path = os.path.join(cwd, filepath.as_posix())
            is_dir = Path(full_path).is_dir()
            link = html.A([
                html.Span(
                    file, id={'type': 'listed_file', 'index': i},
                    title=full_path,
                    style={'fontWeight': 'bold', 'fontSize': 18} if is_dir else {}
                )], href='#')
            details = file_info(Path(full_path))
            details['filename'] = link
            if is_dir:
                details['extension'] = html.Img(
                    src=app.get_asset_url('icons/default_folder.svg'),
                    width=25, height=25)
            else:
                details['extension'] = icon_file(details['extension'][1:])
            all_file_details.append(details)
    df = pd.DataFrame(all_file_details)
    df = df.rename(columns={"extension": ''})
    table = dbc.Table.from_dataframe(df, striped=False, bordered=False,
                                     hover=True, size='sm')
    return html.Div(table)
@app.callback(
    Output('stored_cwd', 'data'),  # note the change here
    Input({'type': 'listed_file', 'index': ALL}, 'n_clicks'),
    State({'type': 'listed_file', 'index': ALL}, 'title'))
def store_clicked_file(n_clicks, title):
    if not n_clicks or set(n_clicks) == {None}:
        raise PreventUpdate
    ctx = callback_context
    index = ctx.triggered_id['index']
    file_path = title[index]
    return file_path  # always returning the file path now
@app.callback(
    Output('download_csv', 'href'),
    Output('download_csv', 'download'),
    Input('stored_cwd', 'data'),
    Input('download_csv', 'n_clicks'),
    prevent_initial_call=True
)
def update_download_link(file_path, n_clicks):
    # when there is no click, do not proceed
    if n_clicks is None:
        raise PreventUpdate
    if file_path.endswith(".log"):
        with open(file_path, "r") as f:
            log_content = f.read()
        csv_data = import__(log_content)
        temp_filename = save_file(csv_data)
        # delay and then rename the temp file
        time.sleep(10)
        filename = f'{uuid.uuid1()}.csv'
        os.rename(os.path.join('downloads', temp_filename), os.path.join('downloads', filename))
        download_link = f'/download_csv?value={filename}'
        return download_link, filename
    else:
        return "#", ""
</code></pre>
<p>I am using <code>temp_filename</code> because without it, files bigger than 1 MB do not get downloaded at all for some reason.</p>
<p>helper functions:</p>
<pre><code>def import__(file_content):
# Convert the file content string to a StringIO object
file_io = io.StringIO(file_content)
# Split the file content into lines
lines = file_content.splitlines()
# Search for the header row number
headerline = 0
for n, line in enumerate(lines):
if "Header" in line:
headerline = n
break
# Go back to the start of the StringIO object before reading with pandas
file_io.seek(0)
# Read the content using pandas
# Use the StringIO object (file_io) and set the 'skiprows' parameter
data = pd.read_csv(file_io, sep='|', header = headerline) # header=None, skiprows=headerline)
data = data.drop(data.index[-1])
return data
def save_file(df):
"""Save DataFrame to a .csv file and return the file's name."""
filename = f'{uuid.uuid1()}.csv'
filepath = os.path.join('downloads', filename) # assuming the script has permission to write to this location
print(f"Saving to {filepath}")
df.to_csv(filepath, index=False)
return filename
</code></pre>
<p>also Flask API is:</p>
<pre><code>@app.server.route('/download_csv')
def download_csv():
"""Provide the DataFrame for csv download."""
value = request.args.get('value')
file_path = os.path.join('downloads', value) # Compute the file path
df = pd.read_csv(file_path) # Read the CSV data
csv = df.to_csv(index=False, encoding='utf-8') # Convert DataFrame to CSV
# Create a string response
return Response(
csv,
mimetype="text/csv",
headers={"Content-disposition": f"attachment; filename={value}"}
)
</code></pre>
<p>Here are screenshots:</p>
<p>1</p>
<p><a href="https://i.sstatic.net/G0c4rm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G0c4rm.png" alt="1" /></a></p>
<p>2</p>
<p><a href="https://i.sstatic.net/phuYTm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phuYTm.png" alt="2" /></a></p>
<p>3</p>
<p><a href="https://i.sstatic.net/2k5yAm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2k5yAm.png" alt="3" /></a></p>
<p>4</p>
<p><a href="https://i.sstatic.net/8w5pjm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8w5pjm.png" alt="4" /></a></p>
<p>5</p>
<p><a href="https://i.sstatic.net/gv9VDm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gv9VDm.png" alt="5" /></a></p>
<p>I'm not sure why the file ready for download is always one step behind. I put some sort of delay <code>time.sleep(10)</code> to ensure the file write operation is completed before the download begins, but it does not work.</p>
<p>Is there any way I can ensure that the correct file is downloaded on the first button click?</p>
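<p>One hedged idea (the names here are illustrative, not taken from the question's app): instead of sleeping, write the CSV under a temporary name, force the bytes to disk, and atomically rename it before building the link, so the link can never point at a half-written file:</p>

```python
import os
import tempfile

def save_file_atomically(text, final_path):
    # Write to a temp file in the same directory, flush and fsync so the
    # bytes really hit the disk, then atomically rename over the final path.
    dir_name = os.path.dirname(final_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, final_path)  # atomic on POSIX and Windows
    finally:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)

target = os.path.join(tempfile.gettempdir(), "example_out.csv")
save_file_atomically("a,b\n1,2\n", target)
```

<p>With this in place the <code>time.sleep(10)</code> call becomes unnecessary; by the time <code>save_file_atomically</code> returns, the file at the final path is complete.</p>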
| <python><flask><download><plotly-dash> | 2023-09-27 04:11:58 | 1 | 1,298 | illuminato |
77,184,545 | 2,793,602 | Apply function to values in a dataframe column | <p>I have a function that calls a mapping API and returns longs and lats given unstructured address data. This works, and I can pass</p>
<pre><code>address = "12 & 14 CHIN BEE AVENUE,, SINGAPORE 619937"
lat, lon = get_coordinates(api_key, address)
print(lat, lon)
</code></pre>
<p>and get a result like <code>1.3332439 103.7118193</code></p>
<p>Before this I have a SQL Query that populates a dataframe with all the addresses that I want geocodes for. What would be the best way to apply the function to every value in the dataframe, and store the longs and lats in separate columns in the dataframe?</p>
<p>I have tried creating a brand new dataframe and using apply, but this ran for an abnormally long time: <code>df2 = df.apply(get_coordinates(api_key, df['DeliveryAddress']))</code></p>
<p>I have also tried <code>df['coords'] = df['DeliveryAddress'].apply(get_coordinates(api_key, df['DeliveryAddress']))</code> based on this answer to a <a href="https://stackoverflow.com/questions/61652558/apply-function-to-dataframe-column">Similar Question</a> but I think the way I am passing parameters to the function is wrong. Please assist with pointing me in the right direction.</p>
<p>EDIT:</p>
<p>This is the code I am using at the moment, when passing doing a single address:</p>
<pre><code>def get_coordinates(api_key, address):
base_url = "http://someURL.net/REST/v1/Locations"
params = {
"query": address,
"key": api_key
}
response = requests.get(base_url, params=params)
response.raise_for_status()
data = response.json()
coordinates = data["resourceSets"][0]["resources"][0]["point"]["coordinates"]
return coordinates
address = "12 & 14 CHIN BEE AVENUE,, SINGAPORE 619937"
lat, lon = get_coordinates(api_key, address)
print(lat, lon)
</code></pre>
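<p>A hedged sketch of one common pattern: apply the geocoder row by row with a <code>lambda</code> that fixes the extra <code>api_key</code> argument, then split the resulting (lat, lon) pairs into two columns. The stub below stands in for the real HTTP call so the shape of the code is clear:</p>

```python
import pandas as pd

def get_coordinates(api_key, address):
    # Stub standing in for the real HTTP geocoder in the question.
    return (1.0, 2.0)

api_key = "dummy-key"  # placeholder
df = pd.DataFrame({"DeliveryAddress": ["addr one", "addr two"]})

# apply() expects a function of one value; a lambda closes over api_key.
coords = df["DeliveryAddress"].apply(lambda a: get_coordinates(api_key, a))
df[["lat", "lon"]] = pd.DataFrame(coords.tolist(), index=df.index)
print(df)
```

<p>Note that this still issues one HTTP request per row, so for large frames a rate-limited loop, caching, or a batch geocoding endpoint may be needed; the slowness is usually the network, not pandas.</p>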
| <python><pandas><dataframe> | 2023-09-27 03:34:03 | 1 | 457 | opperman.eric |
77,184,513 | 2,497,309 | Call CDK Deploy in another AWS Account from Fargate Task | <p>I have two accounts A and B. I want to deploy a stack using CDK in Account A by running cdk synth and cdk deploy in an ECS Task in Account B.</p>
<p>I created a role in account A with administrator access and granted permissions to Account B to be able to assume the role. Then the ECS task spins up in Account B and runs the following:</p>
<pre><code>role_arn = "arn of Account A role which has admin access"
print("Synthesizing stack...")
sb.run(f"cdk --role-arn {role_arn} synth", shell=True)
print("Deploying stack...")
sb.run(f"cdk --role-arn {role_arn} deploy", shell=True)
</code></pre>
<p>This fails with the following error:</p>
<pre><code>Could not assume role in target account using current credentials (which are for account 614863243217) User: arn:aws:sts::<B_account_id>:assumed-role/ecs-fargate-role/65dd98a9c327410 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::<A_account_id>:role/cdk-hnb659fds-deploy-role-473038482210-us-east-2 . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
</code></pre>
<p>Looks like it's not using the role I created to deploy the stack. Is there a way to assume the role I created and call CDK synth and deploy?</p>
<p>The role I created in Account A looks like this along with it's trust policy:</p>
<p><strong>Permissions Policy</strong></p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
</code></pre>
<p><strong>Trust Relationships</strong></p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AccountB>:root"
},
"Action": "sts:AssumeRole"
}
]
}
</code></pre>
<p>I've also tried the following to "assume" the role in ecs.</p>
<pre><code>role_arn = stack_json["role_arn"]
aws_region = stack_json["region"]
assumed_role_object = sts_client.assume_role(
RoleArn=role_arn,
RoleSessionName="AssumeRoleSession1"
)
credentials = assumed_role_object['Credentials']
p = Popen(['aws configure'], stdin=PIPE, shell=True)
aws_configure_str = f"{credentials['AccessKeyId']}\n{credentials['SecretAccessKey']}\n{aws_region}\njson\n"
p.communicate(input=bytes(aws_configure_str, 'utf-8'))
</code></pre>
<p>This writes the credentials for the default aws cli.</p>
<p>I've also tried passing in the credentials like this:</p>
<pre><code>sb.run(f"cdk deploy", shell=True, env={
"AWS_ACCESS_KEY_ID": credentials['AccessKeyId'],
"AWS_SECRET_ACCESS_KEY": credentials['SecretAccessKey'],
"AWS_DEFAULT_REGION": aws_region,
})
</code></pre>
<p>But when I look at the logs it says:</p>
<pre><code>AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]: Loaded stack_json
</code></pre>
<p>and the error says:
<code>Deployment failed: Error: Need to perform AWS calls for account , but no credentials have been configured</code></p>
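<p>Two things worth noting about the last attempt (sketched below with placeholder values): passing <code>env=</code> to <code>subprocess</code> replaces the child's entire environment instead of extending it, so PATH, HOME, etc. vanish, and temporary credentials returned by <code>assume_role</code> are only valid together with <code>AWS_SESSION_TOKEN</code>, which the snippet omits:</p>

```python
import os
import subprocess
import sys

# Illustrative values; in the real task these come from sts_client.assume_role.
credentials = {"AccessKeyId": "AKIAEXAMPLE",
               "SecretAccessKey": "example-secret",
               "SessionToken": "example-token"}
aws_region = "us-east-2"

# Merge os.environ back in rather than replacing it, and include the
# session token that assumed-role credentials require.
env = {
    **os.environ,
    "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
    "AWS_SESSION_TOKEN": credentials["SessionToken"],
    "AWS_DEFAULT_REGION": aws_region,
}
proc = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['AWS_DEFAULT_REGION'], 'PATH' in os.environ)"],
    env=env, capture_output=True, text=True,
)
print(proc.stdout.strip())
```

<p>The child process here is a stand-in for the <code>cdk deploy</code> invocation; the point is only that the merged environment keeps PATH and carries all three credential variables.</p>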
| <python><amazon-web-services><amazon-ecs><aws-cdk><aws-fargate> | 2023-09-27 03:23:06 | 1 | 947 | asm |
77,184,491 | 11,996,266 | Psycopg2: how to deal with special characters in password? | <p>I am trying to connect to a db instance, but my password has the following special characters: backslash, plus, dot, asterisk/star and at symbol. For example, 12@34\56.78*90 (regex nightmare lol)</p>
<p>How do I safe pass it to the connection string? My code looks like that:</p>
<pre><code>connection_string = f'user={user} password={pass} host={host} dbname={dbname} port={port}'
connection = psg2.connect(connection_string)
</code></pre>
<p>It gives me a wrong password/username error. However, I tried this combination directly on the db and it works, and I tried another combination in the python code and it worked as well. So it looks like the problem is the password being passed weirdly to the connection.</p>
<p>I tried urllib escaping, I tried double quotes around the password, nothing works so far :(</p>
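<p>Two hedged options: pass the fields as keyword arguments to <code>psycopg2.connect(user=..., password=..., ...)</code>, which sidesteps DSN parsing entirely, or quote the value per the libpq connection-string rule, which wraps a value in single quotes and backslash-escapes any embedded <code>'</code> or <code>\</code>. A small sketch of that rule (the helper name is mine):</p>

```python
def quote_dsn_value(value):
    # libpq rule: wrap the value in single quotes and backslash-escape
    # any backslash or single quote inside it.
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{escaped}'"

password = r"12@34\56.78*90"
connection_string = (
    f"user=me password={quote_dsn_value(password)} "
    f"host=localhost dbname=db port=5432"
)
print(connection_string)
```

<p>URL-style percent-encoding only applies to <code>postgresql://</code> URIs, not to the space-separated key=value DSN form used here, which may be why urllib escaping did not help.</p>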
| <python><postgresql><psycopg2> | 2023-09-27 03:15:06 | 2 | 1,495 | Roni Antonio |
77,184,478 | 8,708,364 | Not sure how to use Application Default Credentials | <p>I am currently working on a Flask project and I want to host it on Google Cloud App Engine. Since I want to store sessions, I came across the service named FireStore. To use that service, I need to use Google Application Credentials, which I am not sure how to use.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>...
from google.cloud import firestore
from google.oauth2 import service_account
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'application_default_credentials.json'
app = Flask(__name__ )
db = firestore.Client()
sessions = db.collection('sessions')
...
</code></pre>
<p>But once I use my software, I get the following error, which I've been struggling to fix for the last few hours.</p>
<blockquote>
<p>AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.oauth2.credentials.Credentials'> just contains a token. see <a href="https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account" rel="nofollow noreferrer">https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account</a> for more details.</p>
</blockquote>
<p>I clicked on the link and read the documentation, but it didn't really help me understand the problem, let along fixing it.</p>
<p>Any help appreciated.</p>
| <python><flask><google-cloud-platform><google-app-engine><google-cloud-firestore> | 2023-09-27 03:10:44 | 2 | 71,788 | U13-Forward |
77,184,472 | 4,286,383 | Breaking a python loop using threads | <p>I would like to have a thread that constantly monitors the status of a value in a text file, another thread that executes some code inside a loop, and I would like that when the value in the text file changes, immediately break the loop and execute an else statement regardless of which part in the loop the program is. Is it possible to do this?</p>
<p>This is my python3 code:</p>
<pre><code>import threading
import time
text_list = ['a']
def readfile():
global text_list
while True:
with open("data.txt") as f:
text_list = f.readlines()
# Removing new line "\n" character
text_list = [x.strip() for x in text_list]
def printloop():
while str(text_list[0]) == 'a':
for n in range(0,5):
print('File Character: '+str(text_list[0])+', Iteration: '+str(n))
time.sleep(5)
else:
print('File Character: '+str(text_list[0])+', Loop Broken')
t1 = threading.Thread(target=readfile, daemon=True)
t2 = threading.Thread(target=printloop)
t1.start()
t2.start()
</code></pre>
<p>and this is the content of data.txt</p>
<pre><code>a
</code></pre>
<p>The problem is that when I execute the code, I have to wait for the time.sleep function to finish. I would like to break the loop as soon as my variable changes. I think a workaround might be to make the time.sleep function wait less time and increase the number of iterations, or to use something like the millis() function in Arduino, but I would prefer to keep the time.sleep calls as they are, because my real loop is much bigger and uses more than one time.sleep call. So, is there a way to break a loop when a variable changes, regardless of which part of the loop the program is in?</p>
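<p>One common pattern, sketched below with illustrative names: replace <code>time.sleep()</code> with <code>threading.Event.wait(timeout)</code>, which sleeps for up to the timeout but returns immediately (with <code>True</code>) the moment another thread calls <code>set()</code>:</p>

```python
import threading
import time

stop_event = threading.Event()

def printloop():
    for n in range(5):
        print(f"Iteration {n}")
        # Sleeps up to 5 s, but wakes instantly if stop_event is set.
        if stop_event.wait(timeout=5):
            print("Loop broken")
            return
    print("Loop finished normally")

t = threading.Thread(target=printloop)
t.start()
time.sleep(0.2)     # let the loop enter its first wait
stop_event.set()    # the file-watcher thread would call this on a change
t.join()
```

<p>In the question's setup, <code>readfile()</code> would call <code>stop_event.set()</code> when it sees the character change, and every <code>time.sleep(...)</code> in the big loop becomes <code>stop_event.wait(...)</code>, so any of them can be interrupted.</p>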
| <python><python-3.x><multithreading><loops> | 2023-09-27 03:08:41 | 2 | 471 | Nau |
77,184,120 | 6,702,598 | `sam build` suddenly fails with `pip executable not found in your python environment` on Mac | <p>I'm building an aws lambda function, written in Python3.10, with <code>sam build</code>.</p>
<p>After some changes (code changes, files added, modules added) - nothing fancy, my build suddenly stops working. I get the error message <code>Error: PythonPipBuilder:ResolveDependencies - pip executable not found in your python environment</code>.</p>
<p>The files I changed</p>
<pre><code>modified: projectroot/aa/bb.py
deleted: projectroot/cc/dd.py
modified: projectroot/ee/ff.py
added: projectroot/types/__init__.py
</code></pre>
<p><em>What I've tried</em></p>
<ul>
<li>I checked if pip was there. It is: in the virtual env, as well as the system.</li>
<li>I tried re-initializing the venv.</li>
<li>I tried rebooting (expecting it to one of Mac's hickups)</li>
</ul>
| <python><aws-lambda> | 2023-09-27 00:46:20 | 1 | 3,673 | DarkTrick |
77,184,058 | 2,515,265 | Save a Pandas DataFrame to a CSV file without adding extra double quotes | <p>I want to save a Pandas dataframe to a CSV file in such a way that no additional double quotes or any other characters are added to these formulas. Here is my attempt:</p>
<pre><code>import pandas as pd
data = {
"Column1": [1, 2, 3],
"Column2": ["A", "B", "C"],
"Formula": ['"=HYPERLINK(""https://www.yahoo.com"",""See Yahoo"")"', '"=HYPERLINK(""https://www.google.com"",""See Google"")"', '"=HYPERLINK(""https://www.bing.com"",""See Bing"")"']
}
df = pd.DataFrame(data)
# Save the DataFrame to a CSV file without adding extra double quotes
df.to_csv("output.csv", index=False, doublequote=False)
</code></pre>
<p>But this throws this error:
<code> File "pandas/_libs/writers.pyx", line 75, in pandas._libs.writers.write_csv_rows _csv.Error: need to escape, but no escapechar set</code></p>
<p>How can I bypass this? I need it so that the hyperlink shows in Excel as a clickable link.</p>
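<p>A hedged sketch using only the standard <code>csv</code> module to show the convention Excel expects: store the formula once, with single (not manually doubled) quotes, and let the writer's default quoting wrap the field and double the quotes on output. The same applies to <code>to_csv</code> if <code>doublequote=False</code> is dropped:</p>

```python
import csv
import io

rows = [
    ["Column1", "Column2", "Formula"],
    [1, "A", '=HYPERLINK("https://www.yahoo.com","See Yahoo")'],
]

buf = io.StringIO()
# Default QUOTE_MINIMAL quotes any field containing commas or quotes,
# doubling the embedded quotes exactly as Excel expects.
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

<p>So rather than suppressing quoting, the fix is usually to remove the hand-doubled quotes from the DataFrame values and keep pandas' default quoting, which produces clickable hyperlinks when the CSV is opened in Excel.</p>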
| <python><pandas><dataframe><excel-formula><quoting> | 2023-09-27 00:24:25 | 1 | 2,657 | Javide |
77,184,016 | 6,032,140 | yaml/ruamel load and dumped out file is missing variables of same value | <ol>
<li>I have a string and using ruamel to load the string and dump the file output in yaml format.</li>
<li>The string contains arrays of same value.</li>
<li>If the values are the same, it drops the duplicates, but if the values are different, it prints them all.</li>
</ol>
<p>Code:</p>
<pre><code>import sys
import json
import ruamel.yaml
import re
dit="{p_d: {p: a0, nb: 0, be: {ar: {1, 1, 1, 1}}, bb: {tt: {dt: {10, 10}, vl: {0}, rl: {0}, sf: {10, 20}, ef: {10, 20}}}}}"
yaml_str=dit
print(yaml_str)
dict_yaml_str = yaml_str.split('\n')
print('#### full block style')
yaml = ruamel.yaml.YAML(typ='safe') #
yaml.default_flow_style = False
yaml.allow_duplicate_keys = True
data = ""
fileo = open("yamloutput.yaml", "w")
for dys in dict_yaml_str:
data = yaml.load(dys)
print("data: {}".format(data))
yaml.dump(data, fileo)
fileo.close()
</code></pre>
<p>Output:</p>
<pre><code>p_d:
bb:
tt:
dt:
10: null
ef:
10: null
20: null
rl:
0: null
sf:
10: null
20: null
vl:
0: null
be:
ar:
1: null
nb: 0
p: a0
</code></pre>
<p>Expected Output:</p>
<pre><code>p_d:
bb:
tt:
dt:
10: null
10: null
ef:
10: null
20: null
rl:
0: null
sf:
10: null
20: null
vl:
0: null
be:
ar:
1: null
1: null
1: null
1: null
nb: 0
p: a0
</code></pre>
<p>Is there some YAML config knob that I am missing? Please share your inputs.</p>
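<p>A note on why this may be unavoidable: in YAML, <code>{1, 1, 1, 1}</code> is a flow <em>mapping</em> whose entries have null values, and a mapping holds each key at most once, so the duplicates collapse on load; the expected output above, with the same key repeated, is not valid YAML. A sequence (<code>[1, 1, 1, 1]</code>) is what preserves repetition. The same collapse happens in a plain Python dict:</p>

```python
# Duplicate keys collapse in any mapping, including Python dicts:
as_mapping = {1: None, 1: None, 1: None, 1: None}
print(len(as_mapping))   # 1

# A sequence keeps every occurrence:
as_sequence = [1, 1, 1, 1]
print(len(as_sequence))  # 4
```

<p>So if the repeated values matter, the input would need to be rewritten with square brackets before loading; no loader setting can round-trip duplicate mapping keys.</p>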
| <python><yaml><ruamel.yaml> | 2023-09-27 00:04:13 | 2 | 1,163 | Vimo |
77,183,906 | 3,553,923 | Python: Robustly remove all line breaks and indentations from HTML (replacing some inside tags with spaces as the browser would do for rendering) | <p>As BeautifulSoup doesn't seem to work well with indented content and breaks inside tags (also see <a href="https://stackoverflow.com/questions/77183750/how-do-i-make-beautifulsoup-ignore-any-indents-in-original-html-when-getting-tex">How do I make BeautifulSoup ignore any indents in original HTML when getting text</a>), I'd like to preprocess my HTML file.</p>
<p>Something like:</p>
<pre><code> <p>
Test text with something in it
Test text with something in it
<i>and italic text</i> inside that text.
Test text with something in it.
</p>
<p>
Next paragraph with more text.
</p>
</code></pre>
<p>should turn into:</p>
<pre><code><p>Test text with something in it Test text with something in it <i>and italic text</i> inside that text. Test text with something in it.</p><p>Next paragraph with more text.</p>
</code></pre>
<p>Is there a library for this, or should I write my own function using find-and-replace with a regex pattern?</p>
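<p>I'm not aware of a dedicated library for exactly this, but for simple documents a small regex pass is enough (a naive sketch; note it would also collapse whitespace inside preformatted blocks, so hedge accordingly):</p>

```python
import re

html = """<p>
    Test text with something in it
    <i>and italic text</i> inside that text.
</p>
<p>
    Next paragraph with more text.
</p>"""

# Collapse every run of whitespace to a single space, as a browser would,
# then strip the spaces that sit directly around <p> and </p> boundaries.
compact = re.sub(r"\s+", " ", html).strip()
compact = re.sub(r"\s*(</?p>)\s*", r"\1", compact)
print(compact)
```

<p>The second pattern only targets paragraph tags, so the meaningful space between inline elements like <code>&lt;/i&gt;</code> and the following word survives; extend the alternation (or use <code>&lt;/?p[^&gt;]*&gt;</code>) for tags with attributes or other block elements.</p>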
| <python><html><beautifulsoup><format> | 2023-09-26 23:20:26 | 1 | 323 | clel |
77,183,750 | 3,553,923 | How do I make BeautifulSoup ignore any indents in original HTML when getting text | <p>I think, I basically want the reverse of what the <code>prettify()</code> function does.</p>
<p>When one has HTML code (excerpt) like:</p>
<pre><code> <p>
Test text with something in it
Test text with something in it
<i>and italic text</i> inside that text.
Test text with something in it.
</p>
<p>
Next paragraph with more text.
</p>
</code></pre>
<p>How can one get the text inside without the line breaks and indentations? This all while looping recursively over the tree to also be able to cover nested tags?</p>
<p>The result after parsing and processing should be something like:</p>
<pre><code>Test text with something in it Test text with something in it \textit{and italic text} inside that text. Test text with something in it.
Next paragraph with more text.
</code></pre>
<p>Also, for further processing, it would be good to get the content of italic tags separately in Python.</p>
<p>That means (simplified; in reality, I want to call <code>pylatex</code> functions to compose a document):</p>
<pre><code>result = ""
for child in soup.children:
for subchild in child.children:
# Some processing
result += subchild.string
</code></pre>
<p>This should also work for more complex examples, obviously:</p>
<pre><code> <p>
Test text with something in it
Test text with <i>something <b>in
<em>it</em> test</b></i>
<i>and <em>italic</em> text</i> inside that text.
Test text with something in it.
</p>
</code></pre>
<p>Most of this is not that complicated, but how can one deal correctly with line breaks and spaces for the nested text?</p>
<p>The browser seems to render this correctly.</p>
<p>If not possible with BeautifulSoup, another Python library doing this is also fine.</p>
<p>I was quite shocked that this isn't dealt with by default in BeautifulSoup and I also didn't find any function doing what I want.</p>
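<p>If BeautifulSoup's handling is the obstacle, the whitespace normalisation itself is easy to do by hand while walking the tree: split each text node on arbitrary whitespace and re-join with single spaces, which is what a browser does when rendering. A sketch with the standard library's <code>html.parser</code> (no bs4 required; names are mine):</p>

```python
from html.parser import HTMLParser

class ParagraphText(HTMLParser):
    """Collects each <p>'s text with all indentation and newlines collapsed."""
    def __init__(self):
        super().__init__()
        self.paragraphs = []
        self._words = []
    def handle_endtag(self, tag):
        if tag == "p" and self._words:
            self.paragraphs.append(" ".join(self._words))
            self._words = []
    def handle_data(self, data):
        # str.split() with no arguments collapses any run of whitespace.
        self._words.extend(data.split())

parser = ParagraphText()
parser.feed("""<p>
    Test text with something in it
    <i>and italic text</i> inside that text.
</p>
<p>
    Next paragraph with more text.
</p>""")
print(parser.paragraphs)
```

<p>To treat italics specially (e.g. wrapping their content for LaTeX output), a <code>handle_starttag</code>/<code>handle_endtag</code> pair for the <code>i</code> tag can push markers into the same word stream, which also handles arbitrarily nested tags since the parser fires events in document order.</p>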
| <python><html><parsing><beautifulsoup> | 2023-09-26 22:27:17 | 3 | 323 | clel |
77,183,668 | 10,004,072 | Can I write behavioural unit tests using "Given-When-Then" in Pytest (Python)? | <p>We currently have many unit tests in Python and we're using Pytest. Ideally I'd like to stay with Pytest as that's what the company has elected to be their testing framework of choice. I wonder if people recommend fixtures in Pytest or some other good way to do this.</p>
<p>When doing a quick Google I see people using docstrings to explain their scenarios.</p>
<pre class="lang-py prettyprint-override"><code>def test_all(self):
"""
GIVEN a PhoneBook with a records property
WHEN the all method is called
THEN all numbers should be returned in ascending order
"""
</code></pre>
<p>I wonder if someone has more experience / real world examples in this area and can guide me a bit. In my previous company we wrote methods in C# a rough example being:</p>
<pre><code>authedUser = givenAnAuthenticatedUser()
transactionResult = whenIReconcileATransaction(user, transaction)
result = thenTransactionStatusIsReconciled(transactionResult)
assert result == True
</code></pre>
<p>Cheers</p>
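<p>One option I've seen (hedged: this is a style choice, not a framework requirement): keep the docstring convention and mirror it with small helper functions, so plain pytest functions read as Given/When/Then without any plugin; <code>pytest-bdd</code> exists if full feature files are wanted later:</p>

```python
# Illustrative domain objects; pytest discovers test_* functions as usual.
class PhoneBook:
    def __init__(self, records):
        self.records = records
    def all(self):
        return sorted(self.records)

def given_a_phonebook_with_records(records):
    return PhoneBook(records)

def when_all_is_called(phonebook):
    return phonebook.all()

def then_numbers_are_ascending(numbers):
    return numbers == sorted(numbers)

def test_all_returns_numbers_in_ascending_order():
    """
    GIVEN a PhoneBook with a records property
    WHEN the all method is called
    THEN all numbers should be returned in ascending order
    """
    phonebook = given_a_phonebook_with_records([30, 10, 20])
    result = when_all_is_called(phonebook)
    assert then_numbers_are_ascending(result)

test_all_returns_numbers_in_ascending_order()
```

<p>Fixtures can play the "Given" role too (a <code>@pytest.fixture</code> that returns the populated PhoneBook), which keeps the setup shared across scenarios.</p>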
| <python><unit-testing><testing><pytest><acceptance-testing> | 2023-09-26 22:05:28 | 0 | 1,623 | Leslie Alldridge |
77,183,648 | 8,508 | Find all the ManyToManyField targets that are not connected to a ManyToManyField haver | <p>Suppose I have 2 models connected by a many to many relation</p>
<pre><code>from django.db import models
class Record(models.Model):
name = models.CharField(max_length=64)
class Batch(models.Model):
records = models.ManyToManyField(Record)
</code></pre>
<p>Now I want to find all the Records that are not connected to a Batch.</p>
<p>I would have thought it would be one of</p>
<pre><code>Record.objects.filter(batch=[])
#TypeError: Field 'id' expected a number but got [].
Record.objects.filter(batch__count=0)
Record.objects.filter(batch__len=0)
#FieldError: Related Field got invalid lookup: count
</code></pre>
<p>Or something like that. But those don't work. They seem to act like they expect batch to be singular rather then a set.</p>
<p>What is the correct way to do this?</p>
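<p>For what it's worth, Django's reverse many-to-many relations support the <code>isnull</code> lookup, so (a sketch, assuming the default reverse accessor name <code>batch</code>) the unattached records should be reachable with:</p>

```python
# Records that no Batch points at: the reverse M2M join comes back NULL.
orphans = Record.objects.filter(batch__isnull=True)

# If other joins introduce duplicates, add .distinct():
orphans = Record.objects.filter(batch__isnull=True).distinct()
```

<p>This runs as a single LEFT JOIN query rather than filtering in Python, which is why the list-style lookups above fail: the filter operates on joined rows, not on a collection value.</p>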
| <python><django><database><django-models> | 2023-09-26 22:01:50 | 1 | 15,639 | Matthew Scouten |
77,183,618 | 8,372,455 | Dockerfile to install scikit-learn on rasp pi hardware | <p>How to make a Dockerfile for Raspbian hardware that can install scikit-learn? I haven't had any luck with debian-slim or Alpine Linux images.</p>
<pre><code># Use a base image for Raspberry Pi with Alpine Linux
FROM arm32v6/alpine:3.14
# Update package repositories and install necessary dependencies
RUN apk --no-cache update && \
apk --no-cache add python3 python3-dev py3-pip build-base gcc gfortran wget freetype-dev libpng-dev openblas-dev && \
ln -s /usr/include/locale.h /usr/include/xlocale.h
# Install scikit-learn and bacpypes3 using pip3
RUN pip3 install scikit-learn bacpypes3
# Clean up by removing unnecessary packages and cache
RUN apk del python3-dev py3-pip build-base gcc gfortran && \
rm -rf /var/cache/apk/*
# Set the working directory
WORKDIR /app
# Start your application
CMD ["python3", "bacnet_server.py"]
</code></pre>
| <python><docker><numpy><scikit-learn><raspbian> | 2023-09-26 21:55:33 | 1 | 3,564 | bbartling |
77,183,615 | 1,423,217 | Tracking the results of the multiprocessing version of the for-loop | <p>I have a function that I apply sequentially to a list of objects and which returns a score for each object, as follows:</p>
<pre><code>def get_score(a):
// do something
return score
objects = [obj0, obj1, obj3]
results = np.zeros(len(objects))
index = 0
for i in range(len(results)):
results[i]=get_score(objects[i])
</code></pre>
<p>I want to parallelize the execution of this function with the multiprocessing library, but I have a question: how can I tell which score corresponds to which object, since I will not have a shared results list?</p>
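<p>For what it's worth, <code>Pool.map</code> already does this bookkeeping: it returns results in the same order as its input, even when workers finish out of order, so <code>results[i]</code> still corresponds to <code>objects[i]</code>. A sketch (shown with a thread pool so it runs anywhere; swap in <code>multiprocessing.Pool</code> for CPU-bound scoring):</p>

```python
from multiprocessing.pool import ThreadPool  # same API as multiprocessing.Pool

import numpy as np

def get_score(obj):
    # Stand-in for the real scoring function.
    return obj * 2

objects = [3, 1, 4]

with ThreadPool(processes=4) as pool:
    # map() preserves input order regardless of completion order.
    results = np.array(pool.map(get_score, objects))

print(results)
```

<p>If completion-order streaming is needed instead, <code>pool.imap</code> (ordered) and <code>pool.imap_unordered</code> cover both cases; only the unordered variant would need explicit (index, score) pairs.</p>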
| <python><multiprocessing> | 2023-09-26 21:54:56 | 1 | 327 | ZchGarinch |
77,183,499 | 1,185,790 | How can I use the Palantir Foundry REST API to get a list of datasets within a directory | <p>I need to check to see if a dataset exists within a Palantir Foundry directory, and if it doesn't exist, initiate the dataset creation process. I specifically want to look for a specified table name within the directory, and if it exists, return the dataset RID associated with that table. However, I'm having difficulty doing the first step.
I have the following code:</p>
<pre><code>def list_datasets_in_foundry_directory(
token,
base_url,
parent_folder_rid):
headers = {
"authorization": "Bearer {}".format(token)
}
response = requests.get(f'{base_url}/api/v1/directories/{parent_folder_rid}/datasets', headers=headers)
datasets = response.json()
return datasets
</code></pre>
<p>But the response returns a <code>404</code> error.</p>
| <python><palantir-foundry><palantir-foundry-api> | 2023-09-26 21:23:21 | 1 | 723 | baobobs |
77,183,391 | 382,200 | Search a string's unknown text and replace it | <p>I want to search for a section of a string in a data file and replace it.</p>
<p>No problem doing it if the exact text in the section of interest is known, but I seem unable to do it if the exact text is not fully known.</p>
<p>The contents of the string(s) may be different (may be numbers or names or mixed).</p>
<p>The string(s) are read from a file, get replaced, and are written back to the file.</p>
<p>I tried many combinations of regular expression syntax and get close, but never what I need...</p>
<p>I want to replace any/all number-pairs in the section of interest with 0.0 0.0 (no comma)</p>
<p><strong>Example:</strong>
<em>Results</em> for the code below are:</p>
<pre class="lang-none prettyprint-override"><code>The Original String
(cat 5.34 8.763) kenneled in:
The Replaced String
(dog 0.0 0.0)5.34 8.763) kenneled in:
</code></pre>
<p>I want Replaced String to be:</p>
<pre class="lang-none prettyprint-override"><code>(dog 0.0 0.0) kenneled in:
</code></pre>
<p>Here is my code attempt:</p>
<pre><code>data = '(cat 5.34 8.763) kenneled in:' # Section of the String
pattern = '[(cat *? )]+' # test the String
repl = '(dog 0.0 0.0)' # replace it with this
print('The Original String')
print(data + '\n')
result = re.sub(pattern, repl, data, count=1)
print('The Replaced String')
print(result)
</code></pre>
<p>All was good until I selected a file containing multiple similar strings - these extra strings can be different / similar / identical and I <em>don't</em> want them changed.</p>
<p>The problem is that all of the text except the first string gets deleted.</p>
<p>I added <code>count=1</code> but it didn't work...</p>
<pre><code>data = re.sub(r"[^(]+ \d+\.?\d* \d+\.?\d*", "at 0.0 0.0", s1, count=1)
</code></pre>
<p>Example of the text I want to keep without being affected:</p>
<pre class="lang-none prettyprint-override"><code>(hound (cat 3.34 5.67)
(hound (cat 3.37 1.67)
(hound (cat 9.85 4.3)
(puppy (cat 6.76 0.123)
</code></pre>
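<p>For the record, a more tightly anchored pattern combined with <code>count=1</code> behaves as intended: match the literal <code>(cat</code>, then exactly two numbers, through the closing parenthesis, and replace only the first occurrence (a sketch; adjust the label to taste):</p>

```python
import re

data = """(cat 5.34 8.763) kenneled in:
(hound (cat 3.34 5.67)
(puppy (cat 6.76 0.123)"""

# \( and \) are literal parentheses; \d+\.?\d* matches ints or decimals.
pattern = r"\(cat \d+\.?\d* \d+\.?\d*\)"
result = re.sub(pattern, "(dog 0.0 0.0)", data, count=1)
print(result)
```

<p>The character class <code>[(cat *? )]+</code> in the attempt above matches any run of the individual characters <code>(</code>, <code>c</code>, <code>a</code>, <code>t</code>, space, <code>*</code>, <code>?</code>, and <code>)</code>, which is why it consumed far more than intended; square brackets do not group a literal sequence.</p>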
| <python><regex> | 2023-09-26 21:01:02 | 1 | 551 | headscratch |
77,183,339 | 9,962,007 | Installing torch with GPU support but without downloading 3 GB of duplicated siloed CUDA libraries? | <p>I'm trying to make CUDA containers for DNN less heavy. PyTorch goes against my efforts, as it seems to come bundled with its own siloed copy of a large subset of CUDA libraries. But what if we have them already (possibly newer) and want to install just <code>torch</code>?</p>
<p>And bonus question: why cannot PyTorch detect or accept your system CUDA libraries which match its own desired (and bundled in) major and minor CUDA version (e.g. 11.8)? Why does it have to force <code>pip</code> to download its own hard-coded CUDA 11.8.x when your system already has 11.8.y and y>x (i.e. slightly newer build of the same version)? After all, <code>tensorflow</code> can accept such minor differences in CUDA builds, and avoid unwanted duplication of these heavy dependencies (measured in gigabytes).</p>
<hr />
<p>A more concrete illustration of the problem</p>
<p><em>(run under the reasonably new GPU driver - supporting the latest CUDA 12.2 - and the official <code>nvidia/cuda-11.8-cudnn8-devel-ubuntu22.04:latest</code> container with CUDA 11.8.0 installed inside)</em>:</p>
<pre><code>!nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
[..]
</code></pre>
<pre><code>!env | grep CU | sort
CUDA_VERSION=11.8.0
[..]
NV_CUDA_LIB_VERSION=11.8.0-1
[..]
NV_CUDNN_PACKAGE=libcudnn8=8.9.0.131-1+cuda11.8
NV_CUDNN_PACKAGE_DEV=libcudnn8-dev=8.9.0.131-1+cuda11.8
[..]
NV_LIBCUBLAS_DEV_VERSION=11.11.3.6-1
[..]
NV_LIBCUBLAS_VERSION=11.11.3.6-1
NV_LIBCUSPARSE_DEV_VERSION=11.7.5.86-1
NV_LIBCUSPARSE_VERSION=11.7.5.86-1
</code></pre>
<p>If you just intuitively try to install <code>pip install torch</code>, it will not download CUDA itself, but it will download the remaining NVIDIA libraries: its own (older) cuDNN (0.5 GB) and (older) NCCL, as well as various <code>cu11*</code> packages, including CUDA runtime (for an older version of CUDA - 11.7 instead of 11.8):</p>
<pre><code>!pip install torch
Collecting torch
Downloading torch-2.0.1-cp310-cp310-manylinux1_x86_64.whl (619.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 619.9/619.9 MB 5.3 MB/s eta 0:00:0000:0100:02
[..]
Collecting nvidia-cudnn-cu11==8.5.0.96
Downloading nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl (557.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 557.1/557.1 MB 5.1 MB/s eta 0:00:0000:0100:02
Collecting nvidia-cufft-cu11==10.9.0.58
Downloading nvidia_cufft_cu11-10.9.0.58-py3-none-manylinux1_x86_64.whl (168.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.4/168.4 MB 8.7 MB/s eta 0:00:0000:0100:01
Collecting nvidia-cusolver-cu11==11.4.0.1
Downloading nvidia_cusolver_cu11-11.4.0.1-2-py3-none-manylinux1_x86_64.whl (102.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 102.6/102.6 MB 9.7 MB/s eta 0:00:0000:0100:01
[..]
Collecting nvidia-cublas-cu11==11.10.3.66
Downloading nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl (317.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 317.1/317.1 MB 7.2 MB/s eta 0:00:0000:0100:01
Collecting nvidia-curand-cu11==10.2.10.91
Downloading nvidia_curand_cu11-10.2.10.91-py3-none-manylinux1_x86_64.whl (54.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.6/54.6 MB 10.2 MB/s eta 0:00:0000:0100:01
[..]
Collecting nvidia-nccl-cu11==2.14.3
Downloading nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl (177.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 177.1/177.1 MB 8.7 MB/s eta 0:00:0000:0100:01
Collecting nvidia-cuda-cupti-cu11==11.7.101
Downloading nvidia_cuda_cupti_cu11-11.7.101-py3-none-manylinux1_x86_64.whl (11.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.8/11.8 MB 11.4 MB/s eta 0:00:0000:0100:01
[..]
Collecting nvidia-nvtx-cu11==11.7.91
Downloading nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl (98 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.6/98.6 KB 10.7 MB/s eta 0:00:00
Collecting nvidia-cuda-runtime-cu11==11.7.99
Downloading nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl (849 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 849.3/849.3 KB 11.0 MB/s eta 0:00:00a 0:00:01
[..]
Successfully installed cmake-3.27.5 filelock-3.12.4 lit-17.0.1 mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 sympy-1.12 torch-2.0.1 triton-2.0.0
</code></pre>
<p>But if you try to install <code>torch</code> for CUDA 11.8 (specifying <code>--index-url https://download.pytorch.org/whl/cu118</code>), then <code>torch</code> will make <code>pip</code> download it's own CUDA 11.8 bundle (2.3 GB, nearly 4 times larger than the <code>torch</code> wheel alone), probably not even checking if one is already installed (and available e.g. for <code>tensorflow</code>):</p>
<pre><code>!pip install torch --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
Collecting torch
Downloading https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl (2267.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 GB 2.3 MB/s eta 0:00:00:00:0100:06
[..]
Installing collected packages: mpmath, lit, cmake, sympy, networkx, filelock, triton, torch
Successfully installed cmake-3.25.0 filelock-3.9.0 lit-15.0.7 mpmath-1.3.0 networkx-3.0 sympy-1.12 torch-2.0.1+cu118 triton-2.0.0
</code></pre>
<p>So what was inside <code>torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl</code>? We have some 3 GB (unpacked) of goodies there: cuDNN and several dynamic libraries with "cu" in their names, including the biggest one called <code>libtorch_cuda.so</code>, which - judging by its name and the sheer size - is PyTorch's own distribution of CUDA (or just a subset?):</p>
<pre><code>148K libcudnn.so.8
680K libcudart-d0da41ae.so.11.0
1.3M libc10_cuda.so
72M libcudnn_ops_train.so.8
91M libcublas.so.11
94M libcudnn_ops_infer.so.8
98M libcudnn_cnn_train.so.8
116M libcudnn_adv_train.so.8
125M libcudnn_adv_infer.so.8
241M libtorch_cuda_linalg.so
548M libcublasLt.so.11
621M libcudnn_cnn_infer.so.8
1.3G libtorch_cuda.so
</code></pre>
<p>So it seems we don't need to install CUDA if it's PyTorch alone we are after, as it will download its own. Unless we make <code>torch</code> notice the pre-installed CUDA and cuDNN, the duplication of these libraries is unavoidable in other scenarios, because we will need the official NVIDIA version for the remaining DNN packages, such as Tensorflow or Hugging Face <code>transformers</code>.</p>
<p>Note: adding <code>--no-deps</code> argument to <code>pip install</code> would miss even the required C++ dynamic libraries (not just the bundled CUDA).</p>
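<p>To see where the duplication actually lives, you can scan an environment's <code>site-packages</code> for CUDA-looking shared libraries. This is a minimal sketch (the glob patterns and the <code>find_cuda_libs</code> helper are my own, not part of any tool mentioned above); one copy under <code>nvidia/</code> and another inside <code>torch/lib/</code> means the libraries are duplicated:</p>

```python
import pathlib
import sysconfig

def find_cuda_libs(root):
    """Recursively list shared libraries that look CUDA-related under root."""
    patterns = ("libcud*", "libcublas*", "libnccl*", "libnvrtc*")
    hits = []
    for pattern in patterns:
        hits.extend(str(p) for p in pathlib.Path(root).rglob(pattern))
    return sorted(hits)

# Point this at your environment's site-packages directory.
site_packages = sysconfig.get_paths()["purelib"]
for lib in find_cuda_libs(site_packages):
    print(lib)
```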
| <python><pip><pytorch><gpu> | 2023-09-26 20:52:33 | 3 | 7,211 | mirekphd |
77,183,190 | 2,195,440 | How does the `torch.einsum` API work? | <p>How does the <code>torch.einsum</code> API work?</p>
<p>I am trying to understand how
<code>torch.einsum("ac,bc->ab",norm_max_func_embedding,norm_nl_embedding)</code> is calculating the similarity.</p>
<p>I understand this is doing manipulation of tensors.</p>
<p>I think "ac" specifies a tensor with dimensions (a,c), but what is "bc->ab" doing? Also, how is it calculating the similarity? I presume similarity can be calculated by cosine similarity or Euclidean distance.</p>
<pre><code># Encode maximum function
func = "def f(a,b): if a>b: return a else return b"
tokens_ids = model.tokenize([func],max_length=512,mode="<encoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
tokens_embeddings,max_func_embedding = model(source_ids)
# Encode minimum function
func = "def f(a,b): if a<b: return a else return b"
tokens_ids = model.tokenize([func],max_length=512,mode="<encoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
tokens_embeddings,min_func_embedding = model(source_ids)
norm_max_func_embedding = torch.nn.functional.normalize(max_func_embedding, p=2, dim=1)
norm_min_func_embedding = torch.nn.functional.normalize(min_func_embedding, p=2, dim=1)
norm_nl_embedding = torch.nn.functional.normalize(nl_embedding, p=2, dim=1)
max_func_nl_similarity = torch.einsum("ac,bc->ab",norm_max_func_embedding,norm_nl_embedding)
min_func_nl_similarity = torch.einsum("ac,bc->ab",norm_min_func_embedding,norm_nl_embedding)
</code></pre>
<p>I am referring to this github repository: <a href="https://github.com/microsoft/CodeBERT/tree/master/UniXcoder" rel="nofollow noreferrer">https://github.com/microsoft/CodeBERT/tree/master/UniXcoder</a></p>
<p>What kind of similarity is it measuring?</p>
<p>Any help or pointers to documentation is highly appreciated.</p>
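<p>For intuition, NumPy's <code>einsum</code> uses the same subscript notation, so a small self-contained sketch (random matrices standing in for the embeddings, which is my assumption here) shows that <code>"ac,bc->ab"</code> is just the matrix of pairwise dot products, i.e. <code>A @ B.T</code>; since a dot product of L2-normalized rows is exactly cosine similarity, the two lines at the end of the question compute cosine-similarity matrices:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 4))  # stand-in for norm_max_func_embedding: a rows, c dims
B = rng.normal(size=(3, 4))  # stand-in for norm_nl_embedding: b rows, c dims

# "ac,bc->ab": multiply entries sharing the c index and sum over c,
# leaving an (a, b) matrix of pairwise dot products.
sim = np.einsum("ac,bc->ab", A, B)
print(np.allclose(sim, A @ B.T))  # True: it is just A @ B.T

# After L2 row normalization, each entry is a cosine similarity in [-1, 1].
A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
cos = np.einsum("ac,bc->ab", A_n, B_n)
```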
| <python><pytorch><word-embedding> | 2023-09-26 20:24:22 | 2 | 3,657 | Exploring |
77,183,185 | 5,676,198 | How to stop my Jupyter notebook from printing the ANSI color code with the variable output | <p>I am using <code>Jupyter notebook v2023.8.1002501831</code> with <code>VsCode v1.82.2</code>.</p>
<p>Every time I attribute a value to a variable, the default output of a variable also prints the ANSI colors:</p>
<p><a href="https://i.sstatic.net/TvOXY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TvOXY.png" alt="enter image description here" /></a></p>
<p>I want the output of <code>color</code> to be the same as <code>print(color)</code>.</p>
<p>How to do that?</p>
<p>(It was working, and 'suddenly' started this behaviour).</p>
| <python><visual-studio-code><jupyter-notebook> | 2023-09-26 20:23:20 | 1 | 1,061 | Guilherme Parreira |
77,183,132 | 5,503,494 | Unable to import azureml in VS Code | <p>I've installed Python 3.8. From the VS Code terminal I created an env and seem to have successfully installed azureml as below:</p>
<pre><code>C:\Python38\python.exe -m venv tempml
tempml\scripts\activate
pip install azureml-core
</code></pre>
<p>I then go into python and type</p>
<pre><code>import azureml
</code></pre>
<p>but I get the following error:
<code>ModuleNotFoundError: No module named 'azureml'</code></p>
<p>If I go back to the terminal and type</p>
<pre><code>pip show azureml-core
</code></pre>
<p>I can indeed see it:</p>
<pre><code>Name: azureml-core
Version: 1.53.0
Summary: Azure Machine Learning core packages, modules, and classes
Home-page: https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py
Author: Microsoft Corp
Author-email: None
License: https://aka.ms/azureml-sdk-license
Location: c:\python38\tempml\lib\site-packages
Requires: docker, msal, azure-common, paramiko, msrestazure, azure-mgmt-resource, msrest, PyJWT, azure-graphrbac, azure-core, ndg-httpsclient, contextlib2, jmespath, backports.tempfile, adal, argcomplete, urllib3, azure-mgmt-network, cryptography, knack, pkginfo, python-dateutil, jsonpickle, azure-mgmt-keyvault, azure-mgmt-containerregistry, azure-mgmt-storage, humanfriendly, azure-mgmt-authorization, pathspec, pytz, msal-extensions, requests, pyopenssl, packaging, SecretStorage
Required-by:
</code></pre>
<p>if someone could please help it would be much appreciated. Thank you</p>
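<p>A quick way to confirm whether the interpreter you launched is actually the venv's one (a common cause of exactly this symptom, though only an assumption about your setup) is to inspect <code>sys.executable</code>. If the printed paths point at <code>C:\Python38</code> rather than <code>tempml</code>, the <code>import</code> ran outside the venv even though <code>pip</code> installed into it:</p>

```python
import sys
import sysconfig

print("interpreter  :", sys.executable)  # expect something like ...\tempml\Scripts\python.exe
print("site-packages:", sysconfig.get_paths()["purelib"])

# True only when running inside a virtual environment
in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("inside a venv:", in_venv)
```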
| <python><azure><visual-studio-code><azure-devops><azureml-python-sdk> | 2023-09-26 20:12:27 | 2 | 469 | tezzaaa |
77,182,937 | 9,983,652 | TypeError: get_loc() got an unexpected keyword argument 'method' | <p>In pandas 2 with the following code:</p>
<pre class="lang-py prettyprint-override"><code>for time in tfo_dates:
dt=pd.to_datetime(time)
indx_time.append(df.index.get_loc(dt,method='nearest'))
</code></pre>
<p>I get this error:</p>
<pre><code>TypeError: get_loc() got an unexpected keyword argument 'method'
</code></pre>
<p>This worked in <a href="https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer">version 1.5</a> but if we look at the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer">version 2 documentation</a> there is no method argument anymore.</p>
<p>What method can I use now to get the nearest index of a timestamp inside the time index?</p>
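<p>A hedged sketch of the documented replacement: <code>Index.get_indexer</code> still accepts <code>method="nearest"</code> in pandas 2, but it takes a list-like of targets (and returns positions) rather than a single label. The toy index below is my own example, not the asker's data:</p>

```python
import pandas as pd

idx = pd.date_range("2023-01-01", periods=4, freq="h")
df = pd.DataFrame({"v": range(4)}, index=idx)

targets = pd.to_datetime(["2023-01-01 00:20", "2023-01-01 02:50"])
positions = df.index.get_indexer(targets, method="nearest")
print(positions)  # [0 3]
```

<p>In the loop from the question this would be <code>indx_time.append(df.index.get_indexer([dt], method="nearest")[0])</code>, or one vectorized call over all of <code>tfo_dates</code>.</p>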
| <python><pandas> | 2023-09-26 19:35:26 | 3 | 4,338 | roudan |
77,182,787 | 5,507,389 | Force type in abstract property from abstract class | <p>The following code illustrates one way to create an abstract property within an abstract base class (<code>A</code> here) in Python:</p>
<pre><code>from abc import ABC, abstractmethod
class A(ABC):
@property
@abstractmethod
def my_abstract_property(self):
pass
class B(A):
my_abstract_property = my_list = ["a", "b", "c"]
if __name__ == "__main__":
b = B()
print(b.my_abstract_property)
</code></pre>
<p>One thing to note here is that the class attribute <code>my_abstract_property</code> declared in <code>B</code> could be any Python object. I can as validly set it to the above list, or a string, or an integer, or a dictionary, etc.</p>
<p>So my question is: how can I force the type of the abstract property in <code>A</code> such that the class attribute in <code>B</code> has to be of that type? Taking the above example: is there a way I can force it to be a list?</p>
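<p>One hedged option (a sketch of a different technique, not the only approach): replace the abstract property with an annotated class attribute for static checkers, plus a runtime check in <code>__init_subclass__</code> so any concrete subclass whose attribute is not a list fails at class-creation time:</p>

```python
from abc import ABC

class A(ABC):
    my_abstract_property: list  # type expected from concrete subclasses

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        value = getattr(cls, "my_abstract_property", None)
        if not isinstance(value, list):
            raise TypeError(
                f"{cls.__name__}.my_abstract_property must be a list, "
                f"got {type(value).__name__}"
            )

class B(A):
    my_abstract_property = ["a", "b", "c"]  # OK

try:
    class Bad(A):
        my_abstract_property = "not a list"
except TypeError as exc:
    print(exc)
```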
| <python><class><abstract-class> | 2023-09-26 19:04:57 | 0 | 679 | glpsx |
77,182,777 | 481,061 | Paste strings into larger string in Python | <p>I have a pretty large logfile (1.3G) which is more or less tabular with spaces padding the individual fields. I also have a list of indices in that file where I would like to overwrite a field's contents with some other string. For the sake of simplicity, assume that the field is previously empty and my replacement is not too long for the cell, so I only have to replace <code>len(replacement)</code> characters starting at <code>index</code> with <code>replacement</code>.</p>
<p>The number of replacements I need to perform in that file is around 10'000. How do I do this efficiently? Does Python have a data structure like a C array where I can just overwrite data?</p>
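<p>Python's closest analogue to "a C array over the file" is <code>mmap</code>: it lets you overwrite byte ranges in place without rewriting the 1.3 GB file. A sketch (assuming a single-byte encoding such as ASCII, so character indices equal byte offsets; the helper name is mine):</p>

```python
import mmap

def patch_file(path, edits):
    """Overwrite bytes in place: each edit is (byte_offset, replacement)."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mm:
            for index, replacement in edits:
                mm[index:index + len(replacement)] = replacement

# Example: write into two previously blank fields
# patch_file("big.log", [(12, b"OK"), (4096, b"FIXED")])
```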
| <python><replace> | 2023-09-26 19:03:31 | 0 | 14,622 | Felix Dombek |
77,182,764 | 2,112,406 | How to integrate C++ tests into a scikit-build pybind module | <p>I'm making a python module based on <a href="https://github.com/pybind/scikit_build_example" rel="nofollow noreferrer">the official pybind scikit build example</a>. I (and the example) have some unit tests for the python part in <code>tests</code>, that run with <code>pytest</code>. In my case, I have some libraries that I don't expose to python, so I want to have C++ unit tests to check those. How do I do this? For instance, how would I go about merging the stuff <a href="https://matgomes.com/integrate-google-test-into-cmake/" rel="nofollow noreferrer">in this tutorial using gtest</a> with my module?</p>
<p><em>EDIT:</em> reading more about this leads me to think that I have to build the tests separately and then run them, which is totally fine, as I could have a different GitHub action that does that. I'm having trouble setting it up with proper cmake files that would work with <code>pip install .</code>, when called from the <code>toml</code> file, and that would work with a <code>cmake</code> command. It seems that at the very least, I have to also separately fetch <code>pybind11</code> as a library, on top of what <code>pip</code> does. Not sure if I'm complicating things unnecessarily.</p>
| <python><c++><unit-testing><cmake> | 2023-09-26 19:01:38 | 0 | 3,203 | sodiumnitrate |
77,182,715 | 1,852,526 | PyInstaller Tree Cannot find the path specified | <p>I want to include a folder and its contents within pyinstaller. I have a folder named 'etc' and it has a file named sbom.json. I am following the post mentioned <a href="https://stackoverflow.com/questions/20602727/pyinstaller-generate-exe-file-folder-in-onefile-mode/20677118#20677118">tree</a>, but I keep getting an error saying <code>The system cannot find the path specified: '..\\etc</code> when I run <code>pyinstaller fetch_sbom_packages.spec</code> command.</p>
<p>(The fetch_sbom_packages.py has dependencies with dsapi.py and shared.py).</p>
<p>Here is my folder structure. I want to include the etc folder and its contents</p>
<p><a href="https://i.sstatic.net/mHtwu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHtwu.png" alt="folder." /></a></p>
<p>Here is the fetch_sbom_packages.spec</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
a = Analysis(
['fetch_sbom_packages.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
noarchive=False,
)
pyz = PYZ(a.pure)
exe = EXE(
pyz,
a.scripts,
a.binaries,
Tree('..\\etc', prefix='etc\\'),
a.datas,
[],
name='fetch_sbom_packages',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
</code></pre>
| <python><path><pyinstaller> | 2023-09-26 18:51:39 | 1 | 1,774 | nikhil |
77,182,518 | 17,301,834 | pcolormesh with alternating cell widths | <p>I would like to make a matplotlib colourmesh which is scaled irregularly to make some rows and columns wider than others.</p>
<p>Keep in mind that I'm not very familiar with matplotlib, and am likely overlooking an obvious solution.</p>
<p>Say I have the following code:</p>
<pre><code>plt.axes().set_aspect("equal")
maze = plt.pcolormesh(arr)
plt.show()
</code></pre>
<p>Where <code>arr</code> is a 2d array of bits, producing a colourmesh like so:
<a href="https://i.sstatic.net/Pockr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pockr.png" alt="colourmesh representing a maze" /></a></p>
<p>I'd like to reduce the width of every alternate row and column to compress the walls of the maze above.</p>
<p>Any help would be appreciated, thanks.</p>
<p><strong>Edit</strong>: here is a sample of the data. It's basically just a 2 dimensional array of 1's and 0's but I've converted it to a string for readability.</p>
<pre><code>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0
0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0
0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0
0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 0 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0
0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0
0 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 0 1 1 1 0
0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0
0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 1 0 1 0
0 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 1 1 0
0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0
0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
</code></pre>
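<p><code>pcolormesh</code> accepts explicit cell-edge coordinates, so irregular widths are just cumulative sums of per-row/column widths. A sketch of the edge computation only (the 1:4 narrow/wide ratio, and the assumption that walls sit on even indices as in the sample data, are mine); you would then pass the edges as <code>plt.pcolormesh(x, y, arr)</code>:</p>

```python
import numpy as np

def alternating_edges(n_cells, narrow=0.25, wide=1.0):
    """Edge coordinates where even-indexed rows/columns (the walls) are narrow."""
    widths = np.where(np.arange(n_cells) % 2 == 0, narrow, wide)
    return np.concatenate([[0.0], np.cumsum(widths)])

x = alternating_edges(33)  # n_cells + 1 edge coordinates
y = alternating_edges(33)
# plt.pcolormesh(x, y, arr); plt.gca().set_aspect("equal"); plt.show()
```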
| <python><matplotlib> | 2023-09-26 18:12:41 | 1 | 459 | user17301834 |
77,182,421 | 6,087,087 | Does PyTorch support stride_tricks as in numpy.lib.stride_tricks.as_strided? | <p>It is possible to make cool things by changing the strides of an array in Numpy like this:</p>
<pre><code>import numpy as np
from numpy.lib.stride_tricks import as_strided
a = np.arange(15).reshape(3,5)
print(a)
# [[ 0 1 2 3 4]
# [ 5 6 7 8 9]
# [10 11 12 13 14]]
b = as_strided(a, shape=(3,3,3), strides=(a.strides[-1],)+a.strides)
print(b)
# [[[ 0 1 2]
# [ 5 6 7]
# [10 11 12]]
# [[ 1 2 3]
# [ 6 7 8]
# [11 12 13]]
# [[ 2 3 4]
# [ 7 8 9]
# [12 13 14]]]
# Get 3x3 sums of a, for example
print(b.sum(axis=(1,2)))
# [54 63 72]
</code></pre>
<p>I searched for a similar method in PyTorch and found <a href="https://pytorch.org/docs/stable/generated/torch.as_strided.html" rel="nofollow noreferrer">as_strided</a>, but it does not support strides that make an element have multiple indices referring to it, as the warning says:</p>
<blockquote>
<p>The constructed view of the storage must only refer to elements within the storage or a runtime error will be thrown, and if the view is βoverlappedβ (with multiple indices referring to the same element in memory) its behavior is undefined.</p>
</blockquote>
<p>In particular it says that the behavior is undefined for the example above where elements have multiple indices.</p>
<p>Is there a way to make this work with documented, specified behavior?</p>
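<p>On the NumPy side there is a documented, safe equivalent of that trick: <code>sliding_window_view</code> returns a read-only overlapping view with fully specified behavior. A sketch reproducing the example above (PyTorch's closest documented counterpart is <code>Tensor.unfold</code>, not shown here):</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(15).reshape(3, 5)

# 3-wide windows sliding along the columns; shape (rows, offsets, window)
w = sliding_window_view(a, window_shape=3, axis=1)

# Rearrange to (offset, row, window) to match b from the question
b = w.transpose(1, 0, 2)
print(b.shape)             # (3, 3, 3)
print(b.sum(axis=(1, 2)))  # [54 63 72]
```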
| <python><arrays><numpy><pytorch><stride> | 2023-09-26 17:59:34 | 1 | 499 | FΔ±rat KΔ±yak |
77,182,278 | 3,802,813 | Type hints for matplotlib Axes3D | <p>I have a function that takes a matplotlib 3D axis:</p>
<pre class="lang-py prettyprint-override"><code>from mpl_toolkits.mplot3d.axes3d import Axes3D
def plot(ax: Axes3D):
pass
</code></pre>
<p>I know how to add a type hint for a 2D axis. Just <code>ax: plt.Axes</code> works perfectly. However, with this <code>mpl_toolkits.mplot3d.axes3d.Axes3D</code>, pylance complains that:</p>
<pre><code>Expected type expression but received (...) -> Unknown Pylance
</code></pre>
<p>Sadly, there is no <code>plt.Axes3D</code> equivalent to <code>plt.Axes</code>.</p>
<p>I verified that <code>mpl_toolkits.mplot3d.axes3d.Axes3D</code> works as expected. The following code does not give an error:</p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure()
Axes3D(fig)
</code></pre>
<p>I went to the definition of <code>Axes3D</code> and found that the problem is the <code>@_docstring.interpd</code> decorator. If I remove it, the pylance warning goes away. The <code>@_api.define_aliases</code> seems to be fine.</p>
<p>I am using python 3.10.12 and matplotlib 3.8.0</p>
<p>Edit: The motivation for the type being <code>Axes3D</code> and not <code>Axes</code> is that otherwise pylance shows a diagnostic that <code>Axes</code> has no method <code>set_zlabel()</code>, etc.</p>
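<p>One workaround worth trying (a sketch; whether it silences this particular Pylance diagnostic depends on the matplotlib and Pylance versions) is importing <code>Axes3D</code> only for the type checker, which keeps <code>set_zlabel()</code> and friends known to the checker while costing nothing at runtime:</p>

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime.
    from mpl_toolkits.mplot3d.axes3d import Axes3D

def plot(ax: Axes3D) -> None:
    # ax.set_zlabel("z")  # now known to the checker as a 3D-only method
    pass
```

<p>With <code>from __future__ import annotations</code> the annotation is never evaluated at runtime, so the module loads even without matplotlib installed.</p>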
| <python><matplotlib><python-typing><pylance> | 2023-09-26 17:37:47 | 0 | 1,231 | Marcel |
77,182,230 | 382,982 | How can I use django-filter's DateTimeFromToRangeFilter with Graphene? | <p>I'm attempting to use an instance of django-filter's DateTimeFromToRangeFilter in conjunction with a custom <code>FilterSet</code>. However, this does not work when I attempt to do the following:</p>
<pre><code>class CustomFilterSet(FilterSet):
modified = django_filters.IsoDateTimeFromToRangeFilter()
class Meta:
fields = "__all__"
model = Custom
</code></pre>
<p>This does not result in additional fields or annotations being created, like I'd expect based on <a href="https://django-filter.readthedocs.io/en/stable/ref/filters.html?highlight=datetime#isodatetimefromtorangefilter" rel="nofollow noreferrer">the docs</a>. Something like:</p>
<pre><code>f = F({'modified_after': '...', 'modified_before': '...'})
</code></pre>
<p>If I inspect the single field (<code>modified</code>) which has been added to my DjangoFilterConnectionField, I see the following:</p>
<pre><code>{'creation_counter': 395, 'name': None, '_type': <String meta=<ScalarOptions name='String'>>, 'default_value': Undefined, 'description': None, 'deprecation_reason': None}
</code></pre>
<p>So, how do I configure this filter such that I can write a query like the following?</p>
<pre><code>query {
allTheSmallThings(
modified_before: "2023-09-26 17:21:22.921692+00:00"
) {
edges {
node {
id
}
}
}
}
</code></pre>
<p>UPDATE: I can confirm that I'm able to use the FilterSet subclass as documented. The issue seems to be with the fields that are/not being generated by Graphene.</p>
| <python><django><graphql><django-filter><graphene-django> | 2023-09-26 17:29:48 | 0 | 10,529 | pdoherty926 |
77,182,223 | 2,100,039 | Xtick Label Shift Problem with 2 Plots Overlaid | <p>I am trying to make a combined plot using seaborn boxplot and scatterplot. The problem is that my xtick labels are not aligned and I've tried many solutions. The red data values are the values that are not aligned with the boxplot xtick labels and correspond to the LAST row in the df_concat df shown below. The truncated boxplot data is this: df_concat.iloc[:-5,:]. I'll show you my data and then my incorrect plot.</p>
<p>df_concat:</p>
<pre><code> 1 2 3 4 5 6 7 8 9 10 11 12
0 704.8 785.4 743.2 945.2 650.5 775.8 841.8 561.2 548.5 690.2 785.8 822.5
1 833.8 827.0 734.7 819.1 668.4 612.5 745.4 472.1 368.6 644.8 893.0 652.5
2 776.6 762.1 825.0 873.5 954.3 705.4 612.1 388.8 444.8 660.5 786.5 693.3
3 942.0 914.3 862.8 928.7 1136.5 645.8 638.2 552.3 514.7 736.5 850.9 851.9
4 829.1 738.3 733.4 823.4 599.0 635.3 603.4 586.0 419.5 662.2 732.7 818.5
5 845.7 798.6 924.4 762.7 831.8 1033.7 626.9 426.1 461.8 725.9 713.2 763.7
6 932.7 885.9 880.1 1028.0 855.3 758.8 573.9 476.8 455.4 537.7 574.1 800.5
7 827.6 932.2 898.1 871.2 939.7 836.2 631.6 455.7 548.4 710.1 803.6 790.8
8 777.7 839.3 680.6 995.1 807.1 656.2 592.8 536.2 474.6 694.2 678.0 768.1
9 891.6 772.7 932.5 961.0 947.1 633.9 617.6 551.8 547.1 548.4 714.6 857.6
10 685.5 828.3 669.1 890.0 795.0 611.8 536.2 447.2 471.4 529.9 807.5 818.6
11 751.7 760.7 780.8 823.1 874.8 693.2 532.2 543.7 411.6 747.7 744.8 774.1
12 784.8 686.4 787.8 876.3 713.4 726.0 537.1 485.5 475.1 547.5 764.2 784.8
13 854.9 858.4 1009.9 930.2 757.3 552.8 479.1 532.3 529.3 648.0 759.9 780.7
14 749.1 763.7 842.9 825.6 674.0 634.3 443.8 420.0 402.2 586.4 664.3 886.0
15 924.9 938.0 1016.6 957.2 914.8 848.2 539.3 434.9 447.4 569.3 681.2 923.6
16 816.9 990.5 958.6 994.2 763.1 742.7 714.1 593.5 425.7 814.2 599.2 745.2
17 798.4 696.2 843.6 898.5 880.9 758.0 555.7 459.3 509.2 586.3 885.1 834.6
18 742.0 978.1 912.6 1173.1 1081.3 831.5 596.2 536.1 476.1 638.7 926.8 798.6
19 835.7 777.0 909.1 848.8 775.0 634.2 585.0 564.0 518.9 724.7 657.8 848.7
20 815.2 825.9 886.9 906.8 899.3 715.2 540.5 472.0 519.7 664.4 752.5 744.3
21 843.2 818.8 830.9 910.5 880.5 887.3 562.8 555.5 490.4 567.0 843.8 695.8
22 706.6 740.0 550.6 726.7 869.0 514.6 690.1 490.0 441.0 590.3 785.0 848.6
23 772.8 810.4 835.3 730.8 720.3 490.0 748.9 505.5 462.7 545.0 654.9 825.2
24 923.1 902.4 874.9 911.3 844.3 609.6 554.7 616.5 558.8 631.4 735.5 721.2
25 780.9 875.5 854.5 884.1 805.4 803.7 552.6 610.1 421.7 694.2 730.5 823.8
26 787.4 787.8 824.8 861.4 919.0 693.1 652.8 627.4 555.2 756.1 750.3 744.5
27 857.4 856.0 943.0 796.8 824.3 688.4 664.2 460.9 588.9 658.2 657.9 806.5
28 783.4 910.7 866.2 850.2 798.1 621.5 446.9 518.8 504.1 626.2 662.4 894.6
29 769.9 792.2 820.6 1027.8 1007.9 706.0 677.0 539.4 349.1 569.3 779.7 778.4
30 276.9 415.7 435.1 538.7 561.7 583.4 580.5 503.8 364.2 353.4 320.0 245.1
31 292.6 220.4 259.4 171.4 188.3 196.0 175.2 146.9 233.5 280.3 305.3 270.4
32 280.9 241.2 280.6 263.2 204.7 164.7 120.1 111.2 144.8 225.9 264.0 267.2
33 322.9 267.5 324.9 301.3 274.2 249.1 195.2 188.8 252.7 300.9 329.5 321.0
34 545.3 496.3 533.2 450.0 492.8 394.6 362.0 330.6 289.4 367.8 423.5 528.5
</code></pre>
<p>Here is the bad plot as a result of the code shown below.</p>
<p><a href="https://i.sstatic.net/Ra2ts.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ra2ts.png" alt="enter image description here" /></a></p>
<p>Here is my faulty code:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
# make the boxplot of data for rows 1-29 only
df_concat = df_concat.reset_index(drop=True)
plt.figure(figsize=(12,6))
g = sns.boxplot(data=df_concat.iloc[:-5,:],
orient='v', color='y',whis=100)
g.set_xticks(range(0,12))
g.set_xticklabels(['1','2','3','4','5','6','7','8','9','10','11','12'])
#add scatterplot BUDGET data...using only last row in df_concat
g = sns.scatterplot(x=df_concat.iloc[-1,:].index,
y=df_concat.iloc[-1,:], color='red',marker='o',label='2023 Budget')
g.set_xticklabels(['1','2','3','4','5','6','7','8','9','10','11','12'])
plt.show()
</code></pre>
| <python><seaborn><axis-labels><xticks> | 2023-09-26 17:28:10 | 0 | 1,366 | user2100039 |
77,182,026 | 1,080,189 | Type hinting Python enum staticmethod where literal types are defined | <p>In the code below, the <code>direction</code> attribute of the <code>Horizontal</code> dataclass is type hinted to only allow 3 of the 5 values from the <code>Direction</code> enum. I want to be able to return an enum value from dynamic input using the <code>from_string</code> static method which falls back to Direction.UNKNOWN if the string doesn't match any of the enum values. In practice the code works but <code>mypy</code> complains with the following error:</p>
<blockquote>
<p>27: error: Incompatible types in assignment (expression has type "Direction", variable has type "Literal[Direction.LEFT, Direction.RIGHT, Direction.UNKNOWN]") [assignment]
Found 1 error in 1 file (checked 1 source file)</p>
</blockquote>
<p>This is understandable because we're getting a generic <code>Direction</code> instance back from the static method rather than one of the specific 3 that are type hinted. Are there any ways round this that would satisfy <code>mypy</code> other than removing the return type of the <code>from_string</code> method?</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import auto, StrEnum
from typing import Literal
class Direction(StrEnum):
DOWN = auto()
LEFT = auto()
RIGHT = auto()
UP = auto()
UNKNOWN = auto()
@staticmethod
def from_string(value: str) -> 'Direction':
try:
return Direction(value.lower())
except ValueError:
pass
return Direction.UNKNOWN
@dataclass
class Horizontal:
direction: Literal[Direction.LEFT, Direction.RIGHT, Direction.UNKNOWN] = Direction.UNKNOWN
horizontal = Horizontal()
horizontal.direction = Direction.LEFT
print(f'{horizontal=}')
horizontal.direction = Direction.from_string(input('Enter a direction: '))
print(f'{horizontal=}')
</code></pre>
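<p>One way to satisfy mypy without dropping the return type (a sketch: the <code>HorizontalDir</code> alias and helper are my own names, and a plain <code>str</code>-mixin <code>Enum</code> stands in for <code>StrEnum</code> so it also runs before Python 3.11) is to parse first, then narrow to the subset with identity checks, which mypy understands:</p>

```python
from enum import Enum
from typing import Literal

class Direction(str, Enum):
    DOWN = "down"
    LEFT = "left"
    RIGHT = "right"
    UP = "up"
    UNKNOWN = "unknown"

HorizontalDir = Literal[Direction.LEFT, Direction.RIGHT, Direction.UNKNOWN]

def horizontal_from_string(value: str) -> HorizontalDir:
    """Parse, then narrow to the subset the dataclass accepts."""
    try:
        d = Direction(value.lower())
    except ValueError:
        return Direction.UNKNOWN
    if d is Direction.LEFT or d is Direction.RIGHT:
        return d  # mypy narrows d via the identity checks
    return Direction.UNKNOWN
```

<p>Mapping non-horizontal directions such as <code>up</code> to <code>UNKNOWN</code> here is a design choice; raising instead would also keep mypy happy.</p>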
| <python><mypy><python-typing> | 2023-09-26 16:55:18 | 1 | 1,626 | gratz |
77,181,805 | 5,695,336 | Define an enum that is a subset of another enum | <p>My program has an <code>enum</code> called <code>Platform</code></p>
<pre><code>class Platform(StrEnum):
binance = auto()
kraken = auto()
trading_view = auto()
</code></pre>
<p>Now I want to define another <code>enum</code> called <code>Exchange</code> that only contains platforms that are exchanges. Something like</p>
<pre><code>class Exchange(Platform):
binance = auto()
kraken = auto()
</code></pre>
<p>But apparently this doesn't work because <code>enum</code>s are final; inheriting from another <code>enum</code> is not allowed.</p>
<p>I want to have functions like this:</p>
<pre><code>def exchange_func(ex: Exchange):
pass
def platform_func(p: Platform):
pass
</code></pre>
<p><code>Exchange</code> can enter both functions, but non-exchange <code>Platform</code>s can only enter the second function, assuming strict type checking is required.</p>
<p>If I don't define <code>Exchange</code>, just use <code>Platform</code>, then non-exchange platforms may accidentally enter the first function without showing errors.</p>
<p>If <code>Exchange</code> doesn't inherit <code>Platform</code>, exchanges cannot enter the second function.</p>
<p>As mentioned above, the main problem is that enums are final, so inheriting from another enum is impossible. How can this be solved?</p>
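<p>One hedged pattern (the names and the <code>str</code> mixin are my choices; Python 3.11's <code>StrEnum</code> would work the same way): define <code>Exchange</code> as its own enum with matching string values, accept the union where any platform is allowed, and normalize through the value:</p>

```python
from enum import Enum
from typing import Union

class Platform(str, Enum):
    BINANCE = "binance"
    KRAKEN = "kraken"
    TRADING_VIEW = "trading_view"

class Exchange(str, Enum):
    BINANCE = "binance"
    KRAKEN = "kraken"

def exchange_func(ex: Exchange) -> None:
    pass

def platform_func(p: Union[Platform, Exchange]) -> Platform:
    # Every Exchange value is also a valid Platform value, so normalize.
    return Platform(p.value) if isinstance(p, Exchange) else p

print(platform_func(Exchange.KRAKEN).name)        # KRAKEN
print(platform_func(Platform.TRADING_VIEW).name)  # TRADING_VIEW
# exchange_func(Platform.TRADING_VIEW)  # flagged by a strict type checker
```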
| <python><enums> | 2023-09-26 16:21:21 | 2 | 2,017 | Jeffrey Chen |
77,181,786 | 3,086,470 | How to turn off spellchecking in ipywidgets Textarea? | <p>On ipywidgets version 8, I've tried to turn off spellchecking (which is a CSS attribute) like this:</p>
<pre class="lang-py prettyprint-override"><code>area = ipywidgets.Textarea(rows=10, style={"spellchecking": "false"})
</code></pre>
<p>But the created widget still gets spellchecked (Chrome 116.0.5845.111)</p>
| <python><css><ipywidgets> | 2023-09-26 16:19:30 | 0 | 850 | marscher |
77,181,761 | 2,426,888 | Build a python package with two independent modules where one is a git submodule | <p>I am trying to build a Python package for distribution that contains two independent modules (<code>lkh</code> and <code>tsplib95</code>), but with a non-standard directory layout.</p>
<p>The two modules are contained in different repos (<a href="https://github.com/ben-hudson/pylkh" rel="nofollow noreferrer">pylkh</a> and <a href="https://github.com/ben-hudson/tsplib95" rel="nofollow noreferrer">tsplib95</a>), and both have the standard layout. However, I want to distribute them together.</p>
<p><strong>How can I build this package?</strong></p>
<ul>
<li>I can use any build system, so long as it produces a package that can be published to PyPi.</li>
<li>I <em>can't</em> change the file/directory structure of <a href="https://github.com/ben-hudson/tsplib95" rel="nofollow noreferrer">tsplib95</a> (it is a fork of another repo that I want to keep synced).</li>
<li>I <em>can</em> change the directory structure of <a href="https://github.com/ben-hudson/pylkh" rel="nofollow noreferrer">pylkh</a>.</li>
<li>I don't want to include <code>tsplib95</code> as a submodule of <code>lkh</code>. I would like to have them both as top-level packages.</li>
<li>I would like to avoid multi-step builds.</li>
</ul>
<h1>Where I'm at</h1>
<p>The (current) directory structure is:</p>
<pre><code>pylkh
βββ pyproject.toml
βββ lkh
βΒ Β βββ __init__.py
βΒ Β βββ ...
βββ tsplib95 (git submodule)
βββ setup.py
βββ tsplib95
βββ __init__.py
βββ ...
</code></pre>
<p>pyproject.toml is (based on the <a href="https://python-poetry.org/docs/pyproject/#packages" rel="nofollow noreferrer">Poetry docs</a>):</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "lkh"
...
packages = [
{ include = "lkh" },
{ include = "tsplib95/*.py", from="tsplib95"},
]
...
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>This builds a package with the following structure:</p>
<pre><code>lkh-x.x.x.tar.gz
βββ LICENSE.txt
βββ lkh
βΒ Β βββ __init__.py
βΒ Β βββ ...
βββ PKG-INFO
βββ pyproject.toml
βββ README.md
βββ tsplib95
βββ tsplib95
βββ __init__.py
βββ ...
</code></pre>
<p><em>However</em>, this does not import <code>tsplib95</code> properly because there is an extra subdirectory.</p>
<h1>Edit</h1>
<ul>
<li>Add module names to make the question easier to understand.</li>
</ul>
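<p>If the goal is for the inner <code>tsplib95/tsplib95</code> package to land at the top level of the wheel, Poetry's <code>from</code> key (which names the directory prefix to strip) suggests including the package directory itself rather than <code>*.py</code> globs. A sketch I have not verified against this exact layout:</p>

```toml
[tool.poetry]
packages = [
    { include = "lkh" },
    # strip the leading "tsplib95" submodule directory so the inner
    # package ends up as a top-level "tsplib95" in the distribution
    { include = "tsplib95", from = "tsplib95" },
]
```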
| <python><pypi><python-packaging><python-poetry> | 2023-09-26 16:15:00 | 2 | 794 | beenjaminnn |
77,181,551 | 3,768,871 | How to generate uniform random numbers in Python close to the true mean and standard deviation? | <p>I am trying to generate uniform random numbers as close as possible to their theoretical definition, particularly in Python. (i.e., I am familiar with the concept of Pseudo-Random Generators (PRGs) in programming languages.)</p>
<p>I am using the following code for this matter (a widely known solution):</p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np
rands = []
rng = random.Random(5)
for i in range(10000):
rands.append(rng.uniform(0,1))
print(f"mean: {np.mean(rands)}")
print(f"std: {np.std(rands)}")
</code></pre>
<p>The result is:</p>
<pre><code>mean: 0.501672056714862
std: 0.2880418652775188
</code></pre>
<p>By changing the initial seed, we can observe that we will get approximately the same values.</p>
<p>On the other hand, from the theoretical aspect, we know that the mean and standard deviation (std) of a uniform random variable between [0, 1] are equal to 0.5 and 1/12 (~0.08333), respectively.</p>
<p>As we can observe, the std of generated random numbers is more than 1/4 (3 times more than the theoretical one).</p>
<p>Hence, a plausible question is "how should I adjust this implementation to get a closer std to the theoretical one?"</p>
<p>I understand that the rationale behind this difference originated in the core implementation of the PRG used in the <code>random</code> function. But, I am looking for any other method to resolve this issue.</p>
<h2>Update:</h2>
<p>It is just a confusion between variance and std, as answered in the following!</p>
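<p>The update can be checked numerically with the standard library alone: 1/12 is the variance of U(0, 1), so the theoretical standard deviation is its square root, about 0.2887, which matches the observed ~0.288:</p>

```python
import math
import random
import statistics

theoretical_var = 1 / 12                      # variance of U(0, 1)
theoretical_std = math.sqrt(theoretical_var)  # ~0.2887

rng = random.Random(5)
sample = [rng.uniform(0, 1) for _ in range(10_000)]

print(f"theoretical std: {theoretical_std:.4f}")
print(f"sample std     : {statistics.pstdev(sample):.4f}")
```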
| <python><math><random><probability> | 2023-09-26 15:48:23 | 1 | 19,015 | OmG |
77,181,515 | 11,231,520 | Python not found when trying to execute git-clang-format on Windows | <p>I am trying to set up <code>git-clang-format</code> on Windows 10. I have following programs installed:</p>
<ul>
<li>git</li>
<li>python 3.11.5</li>
<li>LLVM 16.0.4 (which includes <code>clang-format</code> and <code>git-clang-format</code>)</li>
</ul>
<p>Both python executable and LLVM's bin folder (containing <code>clang-format</code> executable and <code>git-clang-format</code> python script) are in the path. I can run the following commands without any issue</p>
<pre class="lang-none prettyprint-override"><code>$ git --version
git version 2.42.0.windows.2
$ python --version
Python 3.11.5
$ clang-format --version
clang-format version 16.0.4
</code></pre>
<p>But for some reason, this command doesn't work</p>
<pre class="lang-none prettyprint-override"><code>$ git clang-format -h
Python not found. Run without argument [...]
</code></pre>
<p>How can I solve this issue?</p>
| <python><windows><git><clang-format> | 2023-09-26 15:43:48 | 1 | 509 | Vincent |
77,181,503 | 4,472,856 | How to understand the output of scipy's quadratic_assignment function? | <p>I'm trying to use scipy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.quadratic_assignment.html" rel="nofollow noreferrer">quadratic_assignment</a> function but I can't understand how the output can describe an optimal solution. Here is a minimal example where I compare a small matrix to itself:</p>
<pre><code>import numpy as np
from scipy.optimize import quadratic_assignment
# Parameters
n = 5
p = np.log(n)/n # Approx. 32%
# Definitions
A = np.random.rand(n,n)<p
# Quadratic assignment
res = quadratic_assignment(A, A)
print(res.col_ind)
</code></pre>
<p>and the results seem to be random assignments:</p>
<pre><code>[3 0 1 4 2]
[3 2 4 1 0]
[3 2 1 0 4]
[4 3 1 0 2]
[2 3 0 1 4]
...
</code></pre>
<p>However, according to the docs <code>col_ind</code> is supposed to be the <em>Column indices corresponding to the best permutation found of the nodes of B.</em> Since the input matrices are equal (B==A), I would thus expect the identity assignment <code>[0 1 2 3 4]</code> to pop out. Changing <code>n</code> to larger values does not help.</p>
<p>Is there something I am getting wrong?</p>
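<p>A plausible explanation, worth checking against the scipy docs: <code>quadratic_assignment</code> <em>minimizes</em> its trace objective by default, so when matching a matrix against itself the identity is actually the worst permutation; the default <code>'faq'</code> method accepts a <code>maximize</code> option for the graph-matching use case. A sketch (note that even then, the solver is approximate and any graph automorphism scores as well as the identity):</p>

```python
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
n = 5
A = (rng.random((n, n)) < np.log(n) / n).astype(float)

# ask for the maximum-overlap permutation instead of the minimum
res = quadratic_assignment(A, A, options={"maximize": True})

P = np.eye(n)[res.col_ind]            # permutation matrix from col_ind
score = np.trace(A.T @ P @ A @ P.T)   # overlap achieved by the found match
identity_score = np.trace(A.T @ A)    # identity is optimal by Cauchy-Schwarz
print(score, identity_score)
```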
| <python><scipy-optimize> | 2023-09-26 15:41:33 | 1 | 5,185 | LowPolyCorgi |
77,181,404 | 7,257,731 | How to update multiple columns of a Pandas dataframe at once | <p>Given a Pandas dataframe with columns A, B and C, I want to update A and B as the result of applying a function over C. This function has one parameter and two return values.</p>
<pre><code>import pandas as pd

# Create a sample dataframe
df = pd.DataFrame({
    'A': [1, 2, 3],
    'B': [10, 20, 30],
    'C': [100, 200, 300]
})

# Define a function to apply
def my_function(x):
    return x*2, x*3  # This function returns a tuple

# Update
df[['A', 'B']] = ?
</code></pre>
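<p>One possible approach (a sketch, not the only idiom): let <code>map</code> produce a Series of per-row tuples and transpose it with <code>zip</code> so each tuple position becomes a column:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30], 'C': [100, 200, 300]})

def my_function(x):
    return x * 2, x * 3

# map() yields a Series of tuples; zip(*...) transposes them into two sequences
df['A'], df['B'] = zip(*df['C'].map(my_function))
print(df['A'].tolist(), df['B'].tolist())  # [200, 400, 600] [300, 600, 900]
```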
| <python><pandas> | 2023-09-26 15:27:12 | 2 | 392 | Samuel O.D. |
77,181,227 | 711,006 | ModuleNotFoundError when trying to debug a test | <p>I am trying to debug a failing test method in VS Code but when I use the <code>Debug Test</code> action from the Testing sidebar or the context menu for the test method, it fails with the following error (found in the Debug Console):</p>
<pre><code>E
======================================================================
ERROR: test_clients (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: test_clients
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/loader.py", line 154, in loadTestsFromName
    module = __import__(module_name)
  File "/home/melebius/git/my-project/my_project/tests/test_clients.py", line 9, in <module>
    from my_project.clients.base import Data
ModuleNotFoundError: No module named 'my_project'
----------------------------------------------------------------------
Ran 1 test in 0.007s
FAILED (errors=1)
</code></pre>
<p>The <code>my_project</code> folder is evidently not available to the Python interpreter. Unfortunately, I cannot find any information on where to define the path for a <code>Debug Test</code> run.</p>
<p>The ways I tried:</p>
<ul>
<li>I can <code>Debug Test</code> by <a href="https://stackoverflow.com/a/60118494/711006">appending the path in my test file</a> but I find this just a workaround because such a code should not be committed to Git.</li>
<li>I can run "plain" debug (without involving <code>unittest</code>) using the <code>PYTHONPATH</code> configuration in my <code>.code-workspace</code> file.
<ul>
<li>This is the most common advice I have found (<a href="https://stackoverflow.com/questions/53290328/cant-get-vscode-python-debugger-to-find-my-project-modules">1</a>, <a href="https://stackoverflow.com/questions/53323647/vscode-python-debug-no-module-named-xx-when-using-module-attribute">2</a>). I have multiple debug launch configurations (I don't know <em>which</em> one is used by the <code>Debug Test</code> action) but have the <code>PYTHONPATH</code> in all of them.</li>
</ul>
</li>
<li>I have defined <code>PYTHONPATH</code> in the <code>.env</code> file of my project.</li>
<li><code>Run Test</code> works well for my test method.</li>
<li>I can run my tests in the command line (<code>python -m unittest</code>) from the project's root directory. If I want to run them directly from the <code>tests</code> folder, I have to <code>export PYTHONPATH=/home/melebius/git/my-project</code> first.</li>
</ul>
<p>Additional information:</p>
<ul>
<li>I use Python and Pylance extensions by Microsoft to provide the testing environment in VS Code.</li>
<li>I use <code>venv</code> for my project which seems to be set up correctly. (I have resolved all <code>ModuleNotFoundError</code>'s for external libraries by running <code>pip install</code> in the <code>venv</code>.)</li>
<li>I have multiple projects (folders) in my workspace and the debug launch configurations are currently set on the workspace level. I tried to create a <code>launch.json</code> file on the project level and put the <code>PYTHONPATH</code> there, too, but nothing changed. Workspace structure (simplified, showing just directories and mentioned files):</li>
</ul>
<pre><code>.
|-my-project
| |-my_project
| | |-tests
| | |-clients
| | |-tools
| |-.git
| |-.vscode
| | |-launch.json
|-my-other-project
| |-.git
| |-...
|-my-yet-another-project
| |-.git
| |-...
|-my-projects.code-workspace
</code></pre>
<p>Where can I define the path for the <code>Debug Test</code> action in VS Code?</p>
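<p>One place worth checking is the test-discovery configuration of the Microsoft Python extension in <code>settings.json</code>: unittest's <code>-t</code> flag sets the top-level directory, which is what ends up on <code>sys.path</code> during discovery and debugging. A hypothetical sketch (key names taken from the extension's documented settings; paths adjusted to this workspace and to be verified):</p>

```json
{
    "python.testing.unittestEnabled": true,
    "python.testing.unittestArgs": ["-v", "-s", "./my_project/tests", "-t", "."],
    "python.envFile": "${workspaceFolder}/.env"
}
```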
| <python><visual-studio-code><debugging><testing><python-unittest> | 2023-09-26 15:04:30 | 1 | 6,784 | Melebius |
77,181,041 | 4,443,378 | Trying to change the first character of every row in a pandas series | <p>I have a series list1:</p>
<pre><code>1 listId
95 LaBc.defGabc
97 Kasd.defGabc
99 aaSd.defGabc
101 Basd.defGabc
103 Lasd.defGabc
105 Lasd.defGabc
</code></pre>
<p>I want to <code>lower()</code> only the first character of every item so it looks like:</p>
<pre><code>1 listId
95 laBc.defGabc
97 kasd.defGabc
99 aaSd.defGabc
101 basd.defGabc
103 lasd.defGabc
105 lasd.defGabc
</code></pre>
<p>I'm trying:</p>
<pre><code>list1 = list1.map(str).str[0].lower()
</code></pre>
<p>but I get error</p>
<pre><code>AttributeError: 'Series' object has no attribute 'lower'
</code></pre>
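<p>A possible fix (a sketch): <code>.str[0]</code> already returns a Series, so the lowercasing must also go through the <code>.str</code> accessor, and the rest of each string has to be concatenated back:</p>

```python
import pandas as pd

list1 = pd.Series(['LaBc.defGabc', 'Kasd.defGabc', 'aaSd.defGabc'])

# lower-case only the first character, keep the remainder unchanged
result = list1.str[0].str.lower() + list1.str[1:]
print(result.tolist())  # ['laBc.defGabc', 'kasd.defGabc', 'aaSd.defGabc']
```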
| <python><pandas> | 2023-09-26 14:40:28 | 3 | 596 | Mitch |
77,180,927 | 1,145,808 | python reportlab - how to test whether a particular font (core or TTF) can encode/render a given glyph in a PDF? | <p>I've seen <a href="https://stackoverflow.com/questions/69261202/how-to-determine-if-a-glyph-can-be-displayed">How to determine if a Glyph can be displayed?</a>, which doesn't seem to work for the core fonts.</p>
<p><a href="https://docs.reportlab.com/reportlab/userguide/ch3_fonts/" rel="nofollow noreferrer">https://docs.reportlab.com/reportlab/userguide/ch3_fonts/</a> talks about <code>reportlab.rl_config.warnOnMissingFontGlyphs</code>, but that seems to simply print a warning, which I don't see how to trap.</p>
<p>I'd be grateful for any insight.</p>
| <python><reportlab> | 2023-09-26 14:24:05 | 1 | 829 | DobbyTheElf |
77,180,833 | 5,924,264 | Am I violating encapsulation by having unit test access path to sql database file and creating a connector to it? | <p>I have a python class that manages initilization, read, and writes to a sql database:</p>
<pre><code>class Database:
    def __init__(self, path2db):
        # path2db is created within this constructor
        self._connector = ...  # create a sqlite3 connector pointing to an input path2db

    def query_col1(self, query_sql):
        # execute query_sql and return rows corresponding to the query

    def query_col2(self, query_sql):
        # execute query_sql and return rows corresponding to the query

    def write(self, rows):
        # write rows to db
</code></pre>
<p>For simplicity, let's assume this database table has 3 columns, <code>col1, col2, col3</code>, and we only have getters for 2 of them.</p>
<p>In the unit test, I want to query for <code>col3</code>, but we don't have a public facing getter for that, so I instead wrote my own getter:</p>
<pre><code>def _query_col3(path2db):
    # implementation not included

def unit_test(tempdir):
    path2db = tempdir
    inputs = ...  # define some inputs
    # call some methods and pass in inputs (some of the inputs are written to the db);
    # within those methods, a Database instance is generated and a path2db is formed and populated
    # now I want to test whether the table in path2db has the correct col3
    assert _query_col3(path2db) == (col3 from inputs)
</code></pre>
<p>I had a colleague suggest that I was breaking encapsulation by accessing <code>tempdir</code>, creating a connector to it that's outside of a <code>Database</code> instance.</p>
| <python><sql><oop><encapsulation> | 2023-09-26 14:12:09 | 0 | 2,502 | roulette01 |
77,180,768 | 10,071,473 | Linux process defunct if started from flask | <p>I would need my flask application to manage the execution of some commands via a specific endpoint. I would also like to have a second endpoint that allows me to monitor the processes triggered by the flask application.</p>
<p>I already have an idea of how to do it; I am facing only one problem: after the process execution finishes, the spawned process remains <strong>defunct</strong>. How can I solve it?</p>
<p><a href="https://i.sstatic.net/A3CPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A3CPP.png" alt="enter image description here" /></a></p>
<p>I know this depends on the fact that the application that creates the process (i.e. my flask application) continues to exist</p>
<p>I tried to work around the problem by trying to start the process differently and also by trying to force-terminate it, but that didn't work.</p>
<pre><code>try:
    res = Popen("my_script.sh",
                universal_newlines=True,
                start_new_session=True,
                stdout=PIPE,
                stderr=STDOUT,
                cwd="/",
                shell=True,
                env=environ.copy(),
                close_fds=True
                )
    # this function tries to handle the process termination;
    # it must be a different process, as the application must be able to
    # return a response while the process runs in the background
    multiprocessing.Process(target=_ManagerExecutor.__handle_process, args=(res,)).start()
    return result
except Exception as e:
    ...
</code></pre>
<pre><code>@classmethod
def __handle_process(cls, process: Popen):
    while True:
        output = process.stdout.readline()
        if not output:
            break
        # ... other app logic
    process.poll()
    result: bool = process.returncode != 0
    try:
        # try force kill, but doesn't work
        killpg(getpgid(process.pid), SIGTERM)
    except Exception:
        pass
</code></pre>
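<p>A hedged note on the likely cause: a child stays <code>&lt;defunct&gt;</code> until its <em>parent</em> calls <code>wait()</code> or a reaping <code>poll()</code> after it exits, and a separate <code>multiprocessing.Process</code> cannot reap a child it did not spawn, so the <code>wait()</code> has to happen in the Flask process that created the <code>Popen</code>. A minimal sketch of reaping in the spawning process (using <code>echo</code> as a stand-in for <code>my_script.sh</code>):</p>

```python
import subprocess

proc = subprocess.Popen(["echo", "done"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    pass  # ... consume output, as in __handle_process ...

proc.wait()  # reaps the child; without this the kernel keeps a <defunct> entry
print(proc.returncode)  # 0
```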
| <python><linux><flask><process> | 2023-09-26 14:04:15 | 0 | 2,022 | Matteo Pasini |
77,180,626 | 2,437,080 | Kubernetes CronJob to copy, modify and create a secret | <p>I am currently trying to debug the following setup in Kubernetes:
With a CronJob, I want to read a secret from one namespace, delete some labels in the secret, and then create the new secret in another namespace. For some reason, the following code (see CronJob) does not do what it's supposed to, and I can't figure out why. Strangely, I do not see the pod being created. Earlier, while debugging the roles, I could always see the pods in CrashLoopBackOff; now I do not even see the pod being created.</p>
<p>Executing the exact commands that you can see in the CronJob manually works perfectly well, the secret gets copied, modified and then the new secret gets created.</p>
<p>These are my Roles:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: monitoring
  name: secrets-editor
rules:
  - apiGroups: [""]
    resources:
      - secrets
    verbs:
      - 'patch'
      - 'get'
      - 'create'
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: testns
  name: secrets-editor
rules:
  - apiGroups: [""]
    resources:
      - secrets
    verbs:
      - 'patch'
      - 'get'
      - 'create'
</code></pre>
<p>These are my Rolebindings:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: cronjob-runner-binding
  namespace: testns
roleRef:
  apiGroup: ""
  kind: Role
  name: secrets-editor
subjects:
  - kind: ServiceAccount
    name: sa-cronjob-runner
    namespace: testns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: cronjob-runner-binding
  namespace: monitoring
roleRef:
  apiGroup: ""
  kind: Role
  name: secrets-editor
subjects:
  - kind: ServiceAccount
    name: sa-cronjob-runner
    namespace: monitoring
</code></pre>
<p>ServiceAccount:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cronjob-runner
  namespace: testns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cronjob-runner
  namespace: monitoring
</code></pre>
<p>delete_unneeded_entries.py</p>
<pre><code>import yaml

new_namespace = "monitoring"
labels_not_to_delete = ["app.kubernetes.io/instance"]

with open('secret.yaml', 'r') as file:
    secret_yaml = yaml.safe_load(file)

for k, v in list(secret_yaml["metadata"]["labels"].items()):
    if k not in labels_not_to_delete:
        del secret_yaml["metadata"]["labels"][k]

try:
    del secret_yaml["metadata"]["ownerReferences"]
except KeyError:
    print('Key ownerReferences does not exist. Continue.')

secret_yaml["metadata"]["namespace"] = new_namespace

with open('new_secret.yaml', 'w') as file:
    yaml.safe_dump(secret_yaml, file)
</code></pre>
<p>CronJob:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cronjob-runner
          containers:
            - name: hello
              image: haraldott/harri-test1:latest
              command:
                - /bin/sh
                - "-c"
                - |
                  /bin/bash <<'EOF'
                  kubectl get secret mysecret -n testns -oyaml > secret.yaml
                  python3 delete_unneeded_entries.py
                  kubectl apply -f new_secret.yaml
                  EOF
          restartPolicy: OnFailure
</code></pre>
<p>Dockerfile to have a bitnami/kubectl image and also python</p>
<pre><code>FROM bitnami/kubectl:latest as kubectl
FROM ubuntu:latest
# Install Python and any necessary dependencies
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN pip3 install pyyaml
COPY --from=kubectl /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/
COPY delete_unneeded_entries.py /app/delete_unneeded_entries.py
# Set the working directory
WORKDIR /app
</code></pre>
<p>The secret looks something like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2023-09-26T12:02:36Z"
  labels:
    app.kubernetes.io/instance: testinstance
    app.kubernetes.io/managed-by: test1
    app.kubernetes.io/name: test2
    app.kubernetes.io/part-of: test3
  name: mysecret
  namespace: testns
  resourceVersion: "4702"
  uid: 7f5f3c3e-5c73-11ee-8c99-0242ac120002
type: Opaque
</code></pre>
| <python><bash><kubernetes> | 2023-09-26 13:47:19 | 1 | 336 | zappa |
77,180,580 | 2,741,091 | Unable to parse string for Int64 in pd read_csv | <p>It doesn't seem as if Pandas 2.0.0 properly accounts for <code>thousands=','</code> when parsing <code>Int64</code> objects:</p>
<pre><code>import io
import pandas as pd

pd.read_csv(io.StringIO('''a\n22,922'''), sep='\t', dtype={'a': 'Int64'}, thousands=',')
</code></pre>
<p>The specific error is:</p>
<pre><code>Traceback (most recent call last):
  File pandas/_libs/lib.pyx:2280 in pandas._libs.lib.maybe_convert_numeric
ValueError: Unable to parse string "22,922"
</code></pre>
<p>Is there a work around that doesn't involve going back to un-nullable <code>int</code> or converting to <code>float</code>? I've confirmed this works for the old dtypes <code>dtype={'a': 'int'}</code> and <code>dtype={'a': 'float'}</code>.</p>
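<p>One workaround (a sketch, converting to the nullable dtype only after parsing): read the column as a string, strip the thousands separators manually, then cast to <code>Int64</code>:</p>

```python
import io

import pandas as pd

# parse as string first, since the Int64 parser chokes on the separator
df = pd.read_csv(io.StringIO('a\n22,922'), sep='\t', dtype='string')
df['a'] = df['a'].str.replace(',', '', regex=False).astype('Int64')
print(df['a'].tolist(), df['a'].dtype)  # [22922] Int64
```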
| <python><pandas> | 2023-09-26 13:41:15 | 2 | 5,390 | ifly6 |
77,180,480 | 19,251,203 | Pandas get_dummies altering shape | <p>I want to one-hot encode the categorical features of my Pandas dataframe. Previously, each column's values were stored in variables of shape (60,). See code below:</p>
<pre><code>ohe_features = ["Gender", "Married", "Self_Employed"]
num_features = ["Dependents"]
df = pd.get_dummies(df, columns=ohe_features, dtype=int)
</code></pre>
<p>After calling <code>get_dummies</code> the <code>df</code> now has columns with the following shape:</p>
<pre><code>Column 'Gender_Female' has shape (60, 2)
Column 'Gender_Male' has shape (60, 2)
Column 'Married_No' has shape (60, 2)
Column 'Married_Yes' has shape (60, 2)
Column 'Self_Employed_No' has shape (60, 2)
Column 'Self_Employed_Yes' has shape (60, 2)
</code></pre>
<p>How do I encode the categorical variables without altering the original dimensions of the feature?</p>
<p><strong>Reproducible Example:</strong></p>
<pre><code>Dependents Gender Married Self_Employed
0 Female Yes No
</code></pre>
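<p>A plausible cause (an assumption worth checking against the real data): the frame ends up with duplicated column labels, so selecting one label yields a two-column DataFrame slice of shape <code>(60, 2)</code> instead of a <code>(60,)</code> Series. A minimal sketch reproducing and fixing the symptom:</p>

```python
import pandas as pd

# duplicated labels make single-label selection return a DataFrame
df = pd.DataFrame([[0, 1]], columns=['Gender_Female', 'Gender_Female'])
print(df['Gender_Female'].shape)  # (1, 2)

# keep only the first occurrence of each label
deduped = df.loc[:, ~df.columns.duplicated()]
print(deduped['Gender_Female'].shape)  # (1,)
```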
| <python><python-3.x><pandas><dataframe> | 2023-09-26 13:28:39 | 1 | 392 | user19251203 |
77,180,475 | 209,834 | Symbols have mismatching columns error when loading data into vectorbt from a pandas dataframe | <p>I have downloaded historical data and stored it in a CSV file on disk. I am loading this CSV file into a pandas dataframe. From there, I am trying to load this dataframe into vectorbt. Here is my code</p>
<pre class="lang-py prettyprint-override"><code>import vectorbt as vbt
import pandas as pd
prices_1m = pd.read_csv('../hist/eurusd_m1.csv',
index_col='timestamp')
prices_1m.index = pd.to_datetime(prices_1m.index)
vprices_m1 = vbt.Data.from_data(prices_1m)
#vprices_m1 = vbt.YFData.download('AAPL', missing_index='drop')
print(prices_1m.items())
</code></pre>
<p>But I am getting the following error when I run this code</p>
<p><code>ValueError: Symbols have mismatching columns</code></p>
<p>What am I doing wrong here?</p>
| <python><vectorbt> | 2023-09-26 13:28:07 | 1 | 8,498 | Suhas |
77,180,434 | 726,730 | Can I use pyodbc or anything similar without having MS Access installed on my computer? | <p>Can I open a *.mdb Access database to display the table rows (in cmd for example) if I haven't installed Microsoft Access on my computer?</p>
| <python><ms-access><driver><pyodbc> | 2023-09-26 13:24:01 | 1 | 2,427 | Chris P |
77,179,988 | 2,043,397 | How to animate the translation of a sphere in vispy | <p>I'm trying to visualize the translation of a sphere in Vispy without success.</p>
<p>I have used the code from <a href="https://vispy.org/gallery/scene/sphere.html#sphx-glr-gallery-scene-sphere-py" rel="nofollow noreferrer">Vispy Documentation examples - Draw a Sphere</a> to generate the sphere and then tried to update its position using the <em>vispy.visuals.transforms.STTransform</em> method.</p>
<p>However, when I run the script presented below, the sphere only appears at the last position and I cannot see any animation.</p>
<pre><code>import sys

import numpy as np
import vispy
from vispy import scene
from vispy.visuals.transforms import STTransform

canvas = scene.SceneCanvas(keys='interactive',
                           size=(800, 600),
                           show=True)
view = canvas.central_widget.add_view()
view.camera = 'arcball'
view.camera = 'turntable'

# Add grid
grid_3D = scene.visuals.GridLines(color="white",
                                  scale=(0.5, 0.5))
view.add(grid_3D)

# Add a 3D axis to keep us oriented
axis = scene.visuals.XYZAxis(parent=view.scene)

# Add a sphere
sphere1 = scene.visuals.Sphere(radius=0.5,
                               method='latitude',
                               parent=view.scene,
                               edge_color='black')

for _ in np.arange(0, 5, 0.5):
    sphere1.transform = STTransform(translate=[_, 0, 0])

view.camera.set_range(x=[-10, 10])

if __name__ == '__main__' and sys.flags.interactive == 0:
    canvas.app.run()
</code></pre>
<p>Thank you in advance for any help.</p>
| <python><animation><vispy> | 2023-09-26 12:23:21 | 2 | 662 | TMoover |
77,179,906 | 305,712 | Rendering a PDF from HTML that refers to another PDF as an <object> tag | <p>I have a generated HTML page that might include images or references to PDF pages. I would like to convert that HTML to PDF and include the referenced PDF files in the generated PDF using Playwright for Python.</p>
<p>An example:</p>
<pre><code><html>
<body>
<h1>Hello World</h1>
<p>The following PDF should be inserted inline</p>
<figure>
<object data="file:///path/to/file.pdf" type=application/pdf width="100%" height="500px">
<embed type="application/pdf" src="file:///path/to/file.pdf" />
</object>
</figure>
<p>There might be other content, images or PDFs</p>
</body>
</html>
</code></pre>
<p>Is this even possible or do I have to render the PDF separately? When I try it using the Playwright for Python it generates a message "Couldn't load plugin." instead of the tag.</p>
| <python><pdf><playwright> | 2023-09-26 12:12:51 | 1 | 335 | Radim Novotny |
77,179,651 | 3,447,369 | How to make a discrete colorbar for plotly.graph_objects.Heatmap? | <p>I want to visualize <a href="https://pastebin.com/HdyujvXw" rel="nofollow noreferrer">this</a> spatio-temporal data in a Plotly heatmap. A working example that includes <a href="https://pastebin.com/HdyujvXw" rel="nofollow noreferrer">this</a> data:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.graph_objects as go

df = pd.DataFrame(d)  # use the data dict here
n_locations = df.location_encoded.nunique()

pivot_table = df.pivot_table(index='day', columns='time', values='location_encoded', aggfunc='first')
heatmap_data = pivot_table.values
x_labels = pivot_table.columns
y_labels = pivot_table.index

cmap = plt.get_cmap('viridis', n_locations)
color_map = [cmap(i) for i in range(n_locations)]

fig = go.Figure(data=go.Heatmap(
    z=heatmap_data,
    x=x_labels,
    y=y_labels,
    colorscale=[[i / (n_locations - 1), f"rgba{color_map[i]}"] for i in range(n_locations)],
    colorbar=dict(
        tickvals=np.arange(n_locations),
        title='Location'
    ),
))

fig.update_layout(
    xaxis=dict(title='Time of Day'),
    yaxis=dict(title='Date'),
    title='Heatmap of Location Data',
)

fig.show()
</code></pre>
<p>This produces the following heatmap:
<a href="https://i.sstatic.net/Lgpgw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lgpgw.png" alt="enter image description here" /></a></p>
<p>The colorbar on the right is now a continuous scale, but I want it to be a discrete scale, i.e., all locations have their own color. <a href="https://plotly.com/python/colorscales/#constructing-a-discrete-or-discontinuous-color-scale" rel="nofollow noreferrer">Examples on Plotly's website</a> show the usage of <code>color_continuous_scale</code>, however, this parameter is only available for <code>plotly.express</code> visuals. How can the colorbar be made discrete for a <code>go.Heatmap</code>?</p>
<p><strong>UPDATE:</strong>
The following code seems to work perfectly (thanks to the comments):</p>
<pre><code># First we create a list of np.linspace values where each value repeats twice,
# except for the beginning (0) and the ending (1)
vals = np.r_[np.array(0), np.repeat(list(np.linspace(0, 1, self.n_locations+1))[1:-1], 2), np.array(1)]
# Then we make a list that contains lists of the values and the corresponding colors.
cc_scale = [[j, colors[i//2]] for i, j in enumerate(vals)]

# Create the heatmap using Plotly
self.fig = go.Figure(data=go.Heatmap(
    z=self.heatmap_data,
    x=x_labels,
    y=y_labels,
    colorscale=cc_scale,
    colorbar=dict(
        tickvals=np.linspace(1/self.n_locations/2, 1 - 1/self.n_locations/2, self.n_locations) * (self.n_locations - 1),  # Center the ticks
        ticktext=self.location_labels,
        title='Location'
    ),
))
</code></pre>
| <python><plotly><heatmap> | 2023-09-26 11:34:28 | 0 | 1,490 | sander |
77,179,447 | 18,091,040 | Access Hyperledger Indy Node deployed in virtual machine using local workstation | <p>I'm trying to follow <a href="https://github.com/hyperledger/indy-sdk/blob/main/docs/how-tos/write-did-and-query-verkey/README.md" rel="nofollow noreferrer">this tutorial</a> and I have a virtual machine running an Indy pool (ip: 192.168.15.177 internal net: 10.217.0.18). It is a simple docker command to run the pool in the VM:</p>
<pre><code>docker build -f ci/indy-pool.dockerfile -t indy_pool .
docker run -itd -p 9701-9708:9701-9708 indy_pool
</code></pre>
<p>In my local workstation, I try to run the code present in the link <code>write_did.py</code>, which call a functions to access the pool with the genesis transactions and it has the pool_ip:</p>
<pre><code>def pool_genesis_txn_data():
    pool_ip = environ.get("TEST_POOL_IP", "192.168.15.177")
    return "\n".join([
        # Gen transactions
    ])
</code></pre>
<p>But after a moment, I get a <code>PoolLedgerTimeout</code> error, which is described as:</p>
<blockquote>
<p>Make sure that the pool of local nodes in Docker is running on the
same ip/ports as in the docker_pool_transactions_genesis (for further
details see <a href="https://github.com/hyperledger/indy-sdk/blob/master/README.md#how-to-start-local-nodes-pool-with-docker" rel="nofollow noreferrer">How to start local nodes pool with docker</a>)</p>
</blockquote>
<p>I wonder how the access the virtual machine since the router translates the ip: 192.168.15.177 to the internal net: 10.217.0.18. Looking at the documentation, I didn't find anything similar to solve this problem.</p>
| <python><docker><ip><nat><hyperledger-indy> | 2023-09-26 11:09:33 | 1 | 640 | brenodacosta |
77,179,247 | 8,329,213 | Pythonic way of aggregating but without grouping | <p>I have a <code>df</code> as follows, where I wish to find <code>Sum of orders</code> and <code>Number of unique order sizes</code>, but I don't want to compress the <code>df</code>:</p>
<pre><code>list_of_lists = [['11','Berlin',2],['11','Berlin',2],['11','Berlin',3],
                 ['22','Munich',4],['22','Munich',4]]
df = pd.DataFrame(list_of_lists, columns=['ID', 'City', 'Order Size'])
ID City Order Size
0 11 Berlin 2
1 11 Berlin 2
2 11 Berlin 3
3 22 Munich 4
4 22 Munich 4
</code></pre>
<p>I want output be:</p>
<pre><code> ID City Order Size Sum of orders Number of unique order sizes
0 11 Berlin 2 7 2
0 11 Berlin 2 7 2
0 11 Berlin 3 7 2
3 22 Munich 4 8 1
4 22 Munich 4 8 1
</code></pre>
<p>I could have easily used <code>.groupby(['ID','City'])</code> but that would reduce my <code>df</code> to an aggregated <code>df</code> of two rows, where I could have done the <code>left-join</code> to the original <code>df</code>. I want a simpler approach, a pythonic approach.</p>
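<p>One idiomatic option (a sketch on the sample data): <code>groupby(...).transform(...)</code> broadcasts each group's aggregate back onto the original rows, so the frame is never compressed and no join is needed:</p>

```python
import pandas as pd

df = pd.DataFrame([['11', 'Berlin', 2], ['11', 'Berlin', 2], ['11', 'Berlin', 3],
                   ['22', 'Munich', 4], ['22', 'Munich', 4]],
                  columns=['ID', 'City', 'Order Size'])

# transform keeps the original index/shape while filling in group aggregates
g = df.groupby(['ID', 'City'])['Order Size']
df['Sum of orders'] = g.transform('sum')
df['Number of unique order sizes'] = g.transform('nunique')
print(df['Sum of orders'].tolist())  # [7, 7, 7, 8, 8]
```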
| <python><pandas><aggregate> | 2023-09-26 10:42:45 | 0 | 7,707 | cph_sto |
77,179,224 | 4,358,785 | Multiple ONNX outputs (Get Intermediate layer output for ONNX) in python | <p>How can I export model to ONNX so that I'll get intermediate layers' output as well as the layer? (I've seen a similar question that went unanswered <a href="https://stackoverflow.com/questions/69658166/get-intermediate-layer-output-for-onnx-mode">here</a>)</p>
<p>Consider I have a model, <code>model</code>. The model is a torch model and I'd like to have multiple outputs: the last layer as well as one of the intermediate layers: specifically, one of the convolutions that happens in the process.</p>
<pre><code>import torch
import onnx

device = 'cpu'
dummy_input = torch.randn(1, 3, 320, 320).to(device)
input_key_name = 'input'
output_key_name = 'output'
torch.onnx.export(model, dummy_input, model_output_name,
                  input_names=[input_key_name], output_names=[output_key_name])
</code></pre>
<p>My questions are:</p>
<ol>
<li>Is it possible to get multiple layers output? How?</li>
<li>How would I know the output layer name I'm supposed to provide? Is it possible to use Netron to elucidate the names? (or other tools?)</li>
</ol>
<p>Right now my code works correctly for the last layer, but I'm not sure how to go from here to get an additional layer as well.</p>
| <python><deep-learning><pytorch><onnx> | 2023-09-26 10:39:16 | 1 | 971 | Ruslan |
77,179,134 | 9,131,089 | Loading input as a string with Langchain | <p>I've been trying to load the dynamic user queries and the answers that I receive for them into Chroma DB in LangChain. I only have options to load input as documents in LangChain. Is there a possibility to load input strings in LangChain?</p>
<p>I have attached the function in which I'm trying to pass the query and answer as parameters received from earlier functions.</p>
<pre><code>def store_query_and_answer(query, answer):
    embeddings = OpenAIEmbeddings()
    Chroma.from_documents(documents=query, embedding=embeddings, persist_directory="./")
    Chroma.from_documents(documents=answer, embedding=embeddings, persist_directory="./")
</code></pre>
<p>Instead of loading this as a document, I want to load the queries as a line of strings. For example if I asked a query and got an answer like below from another function,</p>
<p>query : What is your name ?</p>
<p>answer : My name is XYZ.</p>
<p>Please post your suggestions on this; I truly appreciate your time and efforts ❤</p>
| <python><openai-api><langchain><gpt-3><chromadb> | 2023-09-26 10:26:57 | 1 | 431 | Nuju |
77,179,122 | 9,284,651 | Check if part of the string exists in different dataframe and get this part only | <p>My DF_1 looks like below:</p>
<pre><code>id x
1 eu continent hamburg
2 asia singapore
3 austrlia hedland
</code></pre>
<p>I have a second DF_2 that looks like below:</p>
<pre><code>name
germany hamburg
singapore china
west australia hedland
</code></pre>
<p>I want to check if there are similar names and get them. So the output should be like:</p>
<pre><code>id x name
1 eu continent hamburg hamburg
2 asia singapore singapore
3 austrlia hedland hedland
</code></pre>
<p>How could I do that? I was looking for some solutions and use <code>str.contains</code> but the problem was that I need to check through the whole string in both df.</p>
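<p>One possible approach (a sketch; it matches on whole words, which covers the examples shown): build a vocabulary from every word in <code>DF_2</code> and return the first word of each <code>x</code> that appears in it:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'id': [1, 2, 3],
                    'x': ['eu continent hamburg', 'asia singapore', 'austrlia hedland']})
df2 = pd.DataFrame({'name': ['germany hamburg', 'singapore china', 'west australia hedland']})

# set of every word appearing anywhere in df2['name']
vocab = set(df2['name'].str.split().explode())

def shared_word(text):
    # return the first word of `text` that also appears in df2
    for word in text.split():
        if word in vocab:
            return word
    return None

df1['name'] = df1['x'].map(shared_word)
print(df1['name'].tolist())  # ['hamburg', 'singapore', 'hedland']
```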
| <python><pandas><dataframe> | 2023-09-26 10:25:15 | 2 | 403 | Tmiskiewicz |
77,178,959 | 3,416,774 | How can I use gkeepapi to retrieve and print out the labels associated with my Google Keep notes? | <h1>Background</h1>
<p>I am using <code>gkeepapi</code> to get the labels of my Google Keep notes:</p>
<pre class="lang-py prettyprint-override"><code>import gkeepapi
keep = gkeepapi.Keep()
keep.login('user@gmail.com', 'password')
labels = keep.labels()
</code></pre>
<p>I want to read the data from <code>labels</code>. I expect it should be as easy as <code>console.log(labels)</code> in JavaScript.</p>
<h1>Problem</h1>
<p>To read the <code>labels</code> code, I try various suggestions from <a href="https://stackoverflow.com/q/1006169/3416774">How do I look inside a Python object?</a>:</p>
<pre class="lang-py prettyprint-override"><code>print("\nlabels:\n", labels)
print("\ntype(labels):\n", type(labels))
print("\nlist(labels):\n", list(labels))
print("\nlabels.__dir__:\n", labels.__dir__)
print("\ndir(labels):\n", dir(labels))
print("\nlist(labels.values())\n:", list(labels.values()))
print("\nlabels.__dict__:\n", labels.__dict__)
</code></pre>
<p>The result:</p>
<pre class="lang-py prettyprint-override"><code>labels:
dict_values([<gkeepapi.node.Label object at 0x0000023C546C40D0>, <gkeepapi.node.Label object at 0x0000023C54BB5750>, <gkeepapi.node.Label object at 0x0000023C54BB7E90>, <gkeepapi.node.Label object at 0x0000023C54BC4310>])
type(labels):
<class 'dict_values'>
list(labels):
[<gkeepapi.node.Label object at 0x0000023C546C40D0>, <gkeepapi.node.Label object at 0x0000023C54BB5750>, <gkeepapi.node.Label object at 0x0000023C54BB7E90>, <gkeepapi.node.Label object at 0x0000023C54BC4310>]
labels.__dir__:
<built-in method __dir__ of dict_values object at 0x0000023C54F057E0>
dir(labels):
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'mapping']
Traceback (most recent call last):
  File "D:\QC supplements\Code\Apps\Trấn Kỳ\test.py", line 24, in <module>
    print("\nlist(labels.values())\n:", list(labels.values()))
                                             ^^^^^^^^^^^^^
AttributeError: 'dict_values' object has no attribute 'values'
</code></pre>
<p>None of them actually show me the data. What should I do?</p>
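<p>For reference, the generic pattern for peeking inside such objects is to iterate over the view and inspect each item with <code>vars()</code>, or read its public attributes directly — gkeepapi's <code>Label</code> objects expose fields such as <code>name</code> (an assumption here; check the gkeepapi docs). A minimal sketch using a stand-in class:</p>

```python
# Stand-in for gkeepapi.node.Label, used only to illustrate the pattern;
# the real class and its attribute names come from the gkeepapi library.
class FakeLabel:
    def __init__(self, name, label_id):
        self.name = name
        self.id = label_id

# keep.labels() returns a dict_values view; this mimics that shape.
labels = {"a": FakeLabel("Work", "1"), "b": FakeLabel("Home", "2")}.values()

# vars(obj) returns the instance's __dict__, i.e. its data attributes.
for label in labels:
    print(vars(label))

# Or collect just the fields you care about:
names = [label.name for label in labels]
print(names)  # ['Work', 'Home']
```

<p>The default <code>repr</code> (<code>&lt;gkeepapi.node.Label object at 0x...&gt;</code>) only means the class defines no custom <code>__repr__</code>; the data is still there on the instances.</p>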
| <python><google-keep-api> | 2023-09-26 10:02:46 | 1 | 3,394 | Ooker |
77,178,957 | 3,243,499 | How to get info of class call order in Python? | <p>How can I get information about the classes <code>A</code> and <code>B</code>, and the order of their invocations (<code>A</code> followed by <code>B</code>), in the following Python code?</p>
<pre><code>class A:
def __init__(self, x, y):
self.x = x
self.y = y
def __call__(self):
return self.x + self.y
class B:
def __init__(self, x, y):
self.x = x
self.y = y
def __call__(self):
return self.x - self.y
class C:
def __init__(self):
pass
def __call__(self):
a = A(2, 3)
b = B(a(), 3)
return b
</code></pre>
<p>Given <code>C</code>, how can I find out that the two classes <code>C()</code> uses are <code>A</code> and <code>B</code>, and that the order of their execution is <code>A</code> followed by <code>B</code>?</p>
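<p>One way to observe the order at runtime (a sketch, not the only approach — <code>sys.settrace</code> or static inspection of <code>C.__call__</code>'s bytecode via the <code>dis</code> module are alternatives) is to wrap each class's <code>__init__</code> so instantiations are logged:</p>

```python
calls = []

def record_init(cls):
    """Wrap cls.__init__ so each instantiation appends the class name to calls."""
    original = cls.__init__
    def traced(self, *args, **kwargs):
        calls.append(type(self).__name__)
        original(self, *args, **kwargs)
    cls.__init__ = traced
    return cls

class A:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __call__(self):
        return self.x + self.y

class B:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __call__(self):
        return self.x - self.y

class C:
    def __call__(self):
        a = A(2, 3)
        b = B(a(), 3)
        return b

record_init(A)
record_init(B)

result = C()()
print(calls)  # ['A', 'B'] — the order the two classes were instantiated in
```

<p>Note that inside <code>C.__call__</code> only <code>a()</code> is actually invoked; <code>b</code> is returned uncalled, which is why tracing <code>__init__</code> rather than <code>__call__</code> captures the "A then B" order the question describes.</p>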
| <python> | 2023-09-26 10:02:41 | 0 | 3,201 | user3243499 |
77,178,883 | 3,619,498 | Python PDM backend complains about conflict which is not a conflict | <p>In my team, we are using pip with the PDM backend to manage our Python packages, and we've run into a problem with the PDM backend.</p>
<p>We've managed to reproduce with the following "mickey mouse" example.</p>
<p>Directory structure:</p>
<pre><code>- package-a
- pyproject.toml
- package-b
- pyproject.toml
</code></pre>
<p>Contents of .toml file for package-a:</p>
<pre><code>[build-system]
build-backend = "pdm.backend"
requires = [
"pdm-backend >= 2.1.6"
]
[project]
authors = [
{name = "author"},
]
dependencies = []
description = "test"
name = "package-a"
version = "0.0.0"
</code></pre>
<p>Contents of .toml file for package-b is the same as for package-a except this dependency:</p>
<pre><code>dependencies = [
# Works
#"package-a @ file:///C:/temp/pythontest/pdm/package-a",
# Fails
"package-a @ file:///${PROJECT_ROOT}/../package-a",
]
</code></pre>
<p>As you can see, there is a dependency from package-b on package-a. If we write this dependency using ${PROJECT_ROOT}/../, it fails. With an absolute path, it works.</p>
<p>Log:</p>
<pre><code>C:\temp\pythontest\pdm>pip install ./package-a ./package-b
Processing c:\temp\pythontest\pdm\package-a
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Processing c:\temp\pythontest\pdm\package-b
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Processing c:\temp\pythontest\pdm\package-a (from package-b==0.0.0)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
INFO: pip is looking at multiple versions of package-b to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install package-a 0.0.0 (from C:\temp\pythontest\pdm\package-a) and package-b==0.0.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested package-a 0.0.0 (from C:\temp\pythontest\pdm\package-a)
package-b 0.0.0 depends on package-a 0.0.0 (from C:\temp\pythontest\pdm\package-b\..\package-a)
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>Any ideas? Is this a bug in PDM backend?</p>
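<p>One hypothesis (an assumption, not confirmed from the log alone): pip appears to compare direct-URL requirements textually, so <code>.../package-a</code> and <code>.../package-b/../package-a</code> look like two different requirements even though they normalize to the same directory. The normalization itself is easy to verify:</p>

```python
import posixpath

# The two spellings of the dependency path from the log above
direct = "C:/temp/pythontest/pdm/package-a"
via_parent = "C:/temp/pythontest/pdm/package-b/../package-a"

# normpath collapses the "..", showing both spellings name the same directory
print(posixpath.normpath(direct) == posixpath.normpath(via_parent))  # True
```

<p>If that is indeed the cause, pre-resolving the path before pip sees it (i.e. the absolute-path variant that already works) is the practical workaround until the URLs are normalized upstream.</p>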
| <python><pip><package><python-pdm> | 2023-09-26 09:53:04 | 0 | 817 | jgreen81 |
77,178,879 | 13,768,998 | python wget.download() throws OSError: Address family not supported by protocol | <p>I'm trying to download a model weights file from Hugging Face with Python's wget library. Here is what I ran:</p>
<pre><code>import wget
wget.download("https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/pytorch_model.bin")
</code></pre>
<p>Here is what I got:</p>
<pre><code>Traceback (most recent call last):
File "/opt/conda/lib/python3.8/urllib/request.py", line 1354, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/opt/conda/lib/python3.8/http/client.py", line 1256, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/conda/lib/python3.8/http/client.py", line 1302, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/opt/conda/lib/python3.8/http/client.py", line 1251, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/opt/conda/lib/python3.8/http/client.py", line 1011, in _send_output
self.send(msg)
File "/opt/conda/lib/python3.8/http/client.py", line 951, in send
self.connect()
File "/opt/conda/lib/python3.8/http/client.py", line 1418, in connect
super().connect()
File "/opt/conda/lib/python3.8/http/client.py", line 922, in connect
self.sock = self._create_connection(
File "/opt/conda/lib/python3.8/socket.py", line 808, in create_connection
raise err
File "/opt/conda/lib/python3.8/socket.py", line 791, in create_connection
sock = socket(af, socktype, proto)
File "/opt/conda/lib/python3.8/socket.py", line 231, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "wget_to_oss.py", line 15, in <module>
wget.download(args.url, out=args.save_dir)
File "/opt/conda/lib/python3.8/site-packages/wget.py", line 526, in download
(tmpfile, headers) = ulib.urlretrieve(binurl, tmpfile, callback)
File "/opt/conda/lib/python3.8/urllib/request.py", line 247, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/opt/conda/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/opt/conda/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/opt/conda/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/opt/conda/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/opt/conda/lib/python3.8/urllib/request.py", line 1397, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/opt/conda/lib/python3.8/urllib/request.py", line 1357, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 97] Address family not supported by protocol>
</code></pre>
<p>I have no idea what this error means, or how to handle it, due to my lack of internet-protocol knowledge.</p>
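<p>For context, errno 97 usually means the process tried to create a socket for an address family the kernel or container does not support — commonly an IPv6 (<code>AF_INET6</code>) socket in an IPv4-only environment (a hypothesis for this setup, not a certainty). A sketch for checking which families name resolution offers and keeping only IPv4:</p>

```python
import socket

def ipv4_addrinfo(host, port):
    """Return only the AF_INET (IPv4) results from name resolution."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return [info for info in infos if info[0] == socket.AF_INET]

# "localhost" resolves without network access; for the real case you would
# inspect e.g. ipv4_addrinfo("huggingface.co", 443).
print(ipv4_addrinfo("localhost", 80))
```

<p>If the environment turns out to be IPv4-only while resolution prefers IPv6, forcing IPv4 (for example downloading with <code>curl -4</code>, or using an HTTP client configured for <code>AF_INET</code>) tends to sidestep the error.</p>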
| <python><wget> | 2023-09-26 09:52:44 | 0 | 807 | Flicic Suo |
77,178,497 | 968,861 | Mysterious missing dependency in AWS Lambda docker image | <p>I have been struggling on this for a few days. I have been following this page: <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/python-image.html</a></p>
<p><strong>What do I try to accomplish?</strong></p>
<p>I want to have a lambda function in AWS to convert text into vectors in order to put those in a vector database.
To do this, I would like to use the all-MiniLM-L6-v2 model from sentence-transformers.
(If there is an easier way, I'm all ears.)
Note: I can't define this lib as a layer in AWS as this lib is too big.</p>
<p><strong>What do I want?</strong></p>
<p>I want to install sentence-transformers in the /tmp folder as this seems to be the only writable folder in AWS Lambda.
I need this as otherwise, I get errors because the package tries to write within the packages folder, even after defining the TRANSFORMERS_CACHE env variable.</p>
<p><strong>Why am I stuck?</strong></p>
<p>When I test this locally by running docker run -p 9000:8080 image:tag, it works well.
But once deployed, I get the following error:</p>
<blockquote>
<p>Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'sentence_transformers'</p>
</blockquote>
<p>The right folder /tmp/packages is in the path as print(sys.path) gives:</p>
<blockquote>
<p>['/var/task', '/var/runtime', '/var/task', '/tmp/packages', '/var/lang/lib/python311.zip', '/var/lang/lib/python3.11', '/var/lang/lib/python3.11/lib-dynload', '/var/lang/lib/python3.11/site-packages']</p>
</blockquote>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM public.ecr.aws/lambda/python:3.11
# get the dependencies, sentence-transformers for instance
COPY requirements.txt ${LAMBDA_TASK_ROOT}
# The actual lambda function
COPY lambda_function.py ${LAMBDA_TASK_ROOT}
# Install the specified packages, target should be /tmp as it is the only writable directory
RUN pip install -r requirements.txt --target=/tmp/packages
# Add the new packages folder to the PYTHONPATH so it can be imported in the script
ENV PYTHONPATH "${PYTHONPATH}:/tmp/packages"
# set the handler function as the starting point for the lambda function
CMD [ "lambda_function.handler" ]
</code></pre>
<p><strong>lambda_function.py</strong></p>
<pre><code>import sys
print(sys.path)
import os
# important to change the cache folder to a writable folder (only the tmp folder is writable on AWS Lambda)
# this must be before import SentenceTransformer
os.environ['TRANSFORMERS_CACHE'] = '/tmp/cache/huggingface/models'
os.environ['HF_DATASETS_CACHE'] = '/tmp/cache/huggingface/datasets'
os.environ['HF_HOME'] = '/tmp/cache/huggingface/home'
import json
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
def handler(event, context):
return {
"statusCode": 200,
"body": "res"
}
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>sentence-transformers
</code></pre>
<p>What am I missing?</p>
<p>Edit: when I list the folders in /tmp/packages locally, I get all the expected dependencies, but on AWS the folder does not exist.</p>
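<p>A likely explanation (an assumption to verify, not a confirmed diagnosis): the <code>/tmp</code> written during the image build is not the <code>/tmp</code> seen at invocation — Lambda mounts a fresh, empty <code>/tmp</code> per execution environment, so anything <code>pip install --target=/tmp/packages</code> baked in at build time is gone, which matches the edit above. A common workaround is to install into a readable image path and, if a writable copy is really needed, copy it into <code>/tmp</code> at cold start. A sketch of that copy step (the paths are illustrative):</p>

```python
import os
import shutil

def ensure_packages(baked_dir, runtime_dir):
    """Copy packages baked into the image into writable /tmp at cold start.

    baked_dir: a read-only path that exists in the image (e.g. /var/task/packages)
    runtime_dir: the writable target (e.g. /tmp/packages)
    """
    if not os.path.isdir(runtime_dir):
        shutil.copytree(baked_dir, runtime_dir)
    return runtime_dir

# e.g. at module import time, before appending runtime_dir to sys.path:
# ensure_packages("/var/task/packages", "/tmp/packages")
```

<p>For sentence-transformers specifically, installing into <code>${LAMBDA_TASK_ROOT}</code> (readable at runtime) and pointing only the cache environment variables at <code>/tmp</code> is often enough, since only the model cache needs write access.</p>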
| <python><docker><aws-lambda><sentence-transformers> | 2023-09-26 08:58:41 | 1 | 1,217 | dyesdyes |
77,178,489 | 3,104,974 | Using explode() as Aggregation Function | <p>How can I <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer">explode</a> duplicate index rows in a <a href="https://pandas.pydata.org/docs/reference/api/pandas.pivot_table.html#pandas.pivot_table" rel="nofollow noreferrer"><code>pd.pivot_table()</code></a>?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
"group": [1,2,2,3,1,2,3],
"panel": [1,1,1,1,2,2,2],
"value": [0,1,2,3,4,5,6]
})
pd.pivot_table(df, index="group", columns="panel", aggfunc="explode")
</code></pre>
<p>However, <code>"explode"</code> is not a valid aggregation function. I want to get this result:</p>
<pre><code> value
panel 1 2
group
1 0 4
2 1 5
2 2 NaN
3 3 6
</code></pre>
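<p>For reference, a possible approach (a sketch, since <code>aggfunc="explode"</code> does not exist and <code>pivot_table</code> always aggregates duplicates): make each repeated <code>(group, panel)</code> pair unique with a <code>cumcount</code> counter, then use plain <code>pivot</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "group": [1, 2, 2, 3, 1, 2, 3],
    "panel": [1, 1, 1, 1, 2, 2, 2],
    "value": [0, 1, 2, 3, 4, 5, 6],
})

# Number repeated (group, panel) pairs so each row gets a unique index entry,
# which lets pivot() place them on separate rows instead of aggregating.
out = (
    df.assign(idx=df.groupby(["group", "panel"]).cumcount())
      .pivot(index=["group", "idx"], columns="panel", values="value")
)
print(out)
```

<p>Dropping the helper level afterwards with <code>out.droplevel("idx")</code> leaves the duplicated <code>group</code> index shown in the expected output.</p>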
| <python><pandas><aggregate><pivot-table> | 2023-09-26 08:57:12 | 1 | 6,315 | ascripter |
77,178,477 | 8,761,554 | Adding a paragraph to a PDF at specific coordinates | <p>I am using Python to modify PDFs, and I have a PDF template I am trying to automate. I am using PyPDF2 and reportlab; for single sentences I use reportlab's drawString function.
However, I'd now like to create and place a paragraph at specific coordinates on the first page. The paragraph should be bounded by 2 cm on the left and 2 cm on the right, and should start 10 cm from the top. The bottom is not limited, as the paragraph's length is not known up front.</p>
<p>How can I achieve this using reportlab and PyPDF2?</p>
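<p>For reference, reportlab's platypus <code>Frame</code> plus <code>Paragraph</code> is the usual tool here: a <code>Frame(x, y, width, height)</code> is a rectangle in points (origin at the bottom-left) into which flowables are poured. The geometry described above converts to points like this (pure arithmetic shown; the 2 cm bottom margin is an arbitrary choice standing in for the unspecified bottom bound):</p>

```python
# Points-per-centimetre conversion; reportlab exposes the same constant as
# reportlab.lib.units.cm. A4 is 595.28 x 841.89 points.
CM = 72 / 2.54
PAGE_W, PAGE_H = 595.28, 841.89

left = 2 * CM                      # 2 cm left margin
width = PAGE_W - 2 * CM - 2 * CM   # bounded by 2 cm on each side
top = PAGE_H - 10 * CM             # paragraph starts 10 cm from the top

# The bottom is unbounded in the question; one option is to let the frame
# run down to a bottom margin (2 cm here, an assumption).
bottom = 2 * CM
height = top - bottom

print(left, width, top, height)
```

<p>With reportlab you would then do roughly <code>Frame(left, bottom, width, height).addFromList([Paragraph(text, style)], canvas)</code> on a fresh canvas page, and merge that page onto the template's first page with PyPDF2 — the standard overlay pattern, to be adapted to your template.</p>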
| <python><pdf><formatting><pypdf> | 2023-09-26 08:55:46 | 0 | 341 | Sam333 |
77,178,438 | 4,993,513 | How to use a custom toml file path for Streamlit secrets? | <p>Assume I have a toml file which contains secrets at <code>/some/filepath/secrets.toml</code>. How do I specify this path for Streamlit to use?</p>
<p>Currently, the Streamlit docs mention <code>.streamlit/secrets.toml</code> as the default filepath for secrets. However, I cannot find any documentation explaining how to use a custom filepath.</p>
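<p>For reference, absent a documented Streamlit option for this (so treat the following as a workaround sketch, not official API), one can load the custom file directly with the standard-library <code>tomllib</code> and use the resulting dict in place of <code>st.secrets</code>:</p>

```python
try:
    import tomllib  # standard library on Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib  # backport for older Pythons

def load_secrets(path):
    """Parse a TOML secrets file from an arbitrary path into a dict."""
    with open(path, "rb") as f:
        return tomllib.load(f)

# secrets = load_secrets("/some/filepath/secrets.toml")
# secrets["db_password"], etc.
```

<p>An alternative that keeps <code>st.secrets</code> working unchanged is to copy or symlink the custom file to <code>.streamlit/secrets.toml</code> at startup.</p>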
| <python><streamlit><toml> | 2023-09-26 08:50:21 | 1 | 11,141 | Dawny33 |
77,178,429 | 17,541,416 | How to create new column with new names that has value from other column and keep values for the previous columns? | <p>I have the following dataframe_1 I would like to transform:</p>
<pre><code> Time Not_paid Paid
0 morning 30 10
1 afternoon 60 20
2 night 90 30
</code></pre>
<p>I would like to combine the first column of dataframe_1 with the other columns to create new ones.
For example, a column 'Not_paid' + 'morning' that takes the value 30, a column 'Not_paid' + 'afternoon' that takes the value 60, and so on.</p>
<p>Expected output:</p>
<p>I would like to have the following df_2:</p>
<pre><code>  Not_paid_morning Not_paid_afternoon Not_paid_night Paid_morning Paid_afternoon  Paid_night
0 30 60 90 10 20 30
</code></pre>
<p>How can I do this?</p>
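<p>For reference, a possible approach (a sketch): set <code>Time</code> as the index, <code>unstack()</code> into a MultiIndex Series, flatten the index into combined column names, and transpose back into a one-row frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Time": ["morning", "afternoon", "night"],
    "Not_paid": [30, 60, 90],
    "Paid": [10, 20, 30],
})

# unstack() turns the frame into a Series indexed by (column, Time) pairs
s = df.set_index("Time").unstack()

# Flatten the MultiIndex into "column_time" names
s.index = [f"{col}_{time}" for col, time in s.index]

# One-row frame with the combined column names
df_2 = s.to_frame().T
print(df_2)
```
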
| <python><pandas><dataframe> | 2023-09-26 08:49:16 | 1 | 327 | codelifevcd |
77,178,406 | 10,755,032 | How to add tags to S3 buckets from a dataframe in python | <p>I have a dataframe as follows:</p>
<pre><code>bucket_name tags
name1 tag1
name2 tag2
name3 tag3
</code></pre>
<p>I want to use the above to insert tags into the respective s3 bucket. How can I do this using boto3?</p>
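<p>For reference, boto3's <code>put_bucket_tagging</code> takes a <code>Tagging={"TagSet": [{"Key": ..., "Value": ...}]}</code> payload per bucket. A sketch that iterates the frame's rows and takes the client as a parameter (so it can be exercised with a stub; the tag key name <code>"tag"</code> is an assumption — use whatever key your tags should carry):</p>

```python
def tag_buckets(rows, s3_client):
    """Apply one tag per bucket.

    rows: iterable of (bucket_name, tag_value) pairs, e.g.
          df[["bucket_name", "tags"]].itertuples(index=False)
    s3_client: a boto3 S3 client (or a stub with the same method)
    """
    for bucket_name, tag_value in rows:
        s3_client.put_bucket_tagging(
            Bucket=bucket_name,
            # "tag" as the key is a placeholder assumption
            Tagging={"TagSet": [{"Key": "tag", "Value": tag_value}]},
        )

# With the real client:
#   import boto3
#   tag_buckets(df[["bucket_name", "tags"]].itertuples(index=False),
#               boto3.client("s3"))
```

<p>Note that <code>put_bucket_tagging</code> replaces the bucket's entire tag set, so merge in the result of <code>get_bucket_tagging</code> first if existing tags must survive.</p>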
| <python><amazon-web-services><boto3> | 2023-09-26 08:46:48 | 1 | 1,753 | Karthik Bhandary |