| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
77,745,876
| 2,801,669
|
What is the json_schema attribute of core_schema.json_or_python_schema used for in pydantic
|
<p>Assume I have an external class <code>OriginalExternalClass</code> whose code I cannot modify. I want to use that external class as a field of another class called <code>PydanticClassWithExternalClassMember</code>, which derives from pydantic's <code>BaseModel</code>.</p>
<p>I want to avoid using the arbitrary-types flag, as I want to implement type checking and serialization capabilities for the <code>OriginalExternalClass</code> field. The code works and looks like the following:</p>
<pre><code>from typing import Any

from pydantic_core import core_schema

import pydantic


class OriginalExternalClass:
    def __init__(self, a: int, b: str) -> None:
        self.a = a
        self.b = b


class PydanticExternalClass(OriginalExternalClass):
    @classmethod
    def __get_pydantic_core_schema__(
        cls,
        _source_type: Any,
        _handler: pydantic.GetCoreSchemaHandler,
    ) -> core_schema.CoreSchema:
        def json_schema(value: dict) -> OriginalExternalClass:
            # NEVER CALLED!!!
            print("json_schema called")
            result = OriginalExternalClass(**value)
            return result

        def python_schema(
            value: OriginalExternalClass,
        ) -> OriginalExternalClass:
            print("python_schema called")
            if isinstance(value, dict):
                value = OriginalExternalClass(**value)
            if not isinstance(value, OriginalExternalClass):
                raise pydantic.ValidationError(
                    msg="Expected a `ExternalClass` instance",
                    loc=("third_party_type",),
                    type=OriginalExternalClass,
                )
            return value

        def serialization(value: OriginalExternalClass) -> dict:
            print("serialization called")
            return {"a": value.a, "b": value.b}

        return core_schema.json_or_python_schema(
            json_schema=core_schema.no_info_plain_validator_function(json_schema),
            python_schema=core_schema.no_info_plain_validator_function(python_schema),
            serialization=core_schema.plain_serializer_function_ser_schema(
                serialization
            ),
        )


class PydanticClassWithExternalClassMember(pydantic.BaseModel):
    x: PydanticExternalClass
</code></pre>
<p>However, what I do not understand is: when is the <code>json_schema</code> attribute of <code>JsonOrPythonSchema</code>, generated by <code>json_schema=core_schema.no_info_plain_validator_function(json_schema)</code>, used? What is it needed for? The function <code>json_schema</code> is never called when executing:</p>
<pre><code>print(
"m1 = PydanticClassWithExternalClassMember(x=PydanticExternalClass(1, 'basgsadf'))"
)
m1 = PydanticClassWithExternalClassMember(x=PydanticExternalClass(1, "basgsadf"))
print("d = m1.model_dump()")
d = m1.model_dump()
print("m1 = PydanticClassWithExternalClassMember(**d)")
m1 = PydanticClassWithExternalClassMember(**d)
</code></pre>
<p>And the output is</p>
<pre><code>m1 = PydanticClassWithExternalClassMember(x=PydanticExternalClass(1, 'basgsadf'))
python_schema called
d = m1.model_dump()
serialization called
m1 = PydanticClassWithExternalClassMember(**d)
python_schema called
</code></pre>
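<p>For reference, the <code>json_schema</code> branch of <code>json_or_python_schema</code> is exercised when validation starts from a raw JSON string rather than from Python objects, e.g. via <code>model_validate_json</code>. A minimal sketch (class names here are illustrative, not from the question):</p>

```python
from typing import Any

import pydantic
from pydantic_core import core_schema


class Point:
    def __init__(self, a: int, b: str) -> None:
        self.a, self.b = a, b


class PydanticPoint(Point):
    @classmethod
    def __get_pydantic_core_schema__(
        cls, _source_type: Any, _handler: pydantic.GetCoreSchemaHandler
    ) -> core_schema.CoreSchema:
        return core_schema.json_or_python_schema(
            # Runs when validating raw JSON input (model_validate_json).
            json_schema=core_schema.no_info_plain_validator_function(
                lambda value: Point(**value)
            ),
            # Runs when validating Python objects (constructor, model_validate).
            python_schema=core_schema.no_info_plain_validator_function(
                lambda value: value if isinstance(value, Point) else Point(**value)
            ),
            serialization=core_schema.plain_serializer_function_ser_schema(
                lambda value: {"a": value.a, "b": value.b}
            ),
        )


class Model(pydantic.BaseModel):
    x: PydanticPoint


# The JSON path runs the json_schema validator:
m = Model.model_validate_json('{"x": {"a": 1, "b": "hi"}}')
```

<p>With this, <code>Model.model_validate_json(...)</code> goes through the JSON validator, while the constructor, <code>model_validate</code>, and <code>Model(**d)</code> go through the Python one, which matches the output shown in the question.</p>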
|
<python><python-3.x><pydantic-v2>
|
2024-01-02 11:48:00
| 1
| 1,080
|
newandlost
|
77,745,851
| 2,827,181
|
a col("Masse hydrique").divide(lit(100)) succeeds with Scala, but fails with Python, with a "'Column' object is not callable" error
|
<p>I'm converting a Zeppelin notebook written in Scala into a Jupyter one in Python.</p>
<p>This piece of code is working with Scala:</p>
<pre class="lang-scala prettyprint-override"><code>import org.apache.spark.sql.types.DateType;
val poidsCsvFile = "/home/lebihan/correctionPoids.csv";
var poids = spark.read.format("csv")
[...]
.load(poidsCsvFile);
poids = poids.withColumn("pctMasseHydrique", round(col("Masse hydrique").divide(lit(100)), 3))
</code></pre>
<p>But what I believe to be its translation with Python:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit
spark = SparkSession.builder.getOrCreate()
poidsCsvFile = "/home/lebihan/correctionPoids.csv";
poids = (spark.read.format("csv")
[...]
.load(poidsCsvFile))
poids = poids.withColumn("pctMasseHydrique", round(col("Masse hydrique").divide(lit(100)), 3))
</code></pre>
<p>fails with the error:</p>
<pre><code>TypeError: 'Column' object is not callable
</code></pre>
<p>and it underlines the <code>col("Masse hydrique").divide(lit(100))</code> part of my statement.</p>
<p>I don't understand the message, nor why the same statement works in one environment and not in the other.</p>
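<p>One likely explanation (worth verifying against your setup): PySpark's <code>Column</code> does not expose Scala's <code>divide</code>/<code>multiply</code>/<code>minus</code> methods. An unknown attribute on a <code>Column</code> is treated as struct-field access and returns another <code>Column</code>, and calling that result raises exactly this error. A minimal mock of the mechanism:</p>

```python
# Mock of how an unknown attribute behaves on a pyspark Column
# (assumption: the real Column.__getattr__ treats it as field access).
class FakeColumn:
    def __init__(self, name):
        self.name = name

    def __getattr__(self, item):
        # Unknown attributes come back as another "column", not a method.
        return FakeColumn(f"{self.name}.{item}")


caught = None
try:
    FakeColumn("Masse hydrique").divide(100)  # .divide is a FakeColumn, then called
except TypeError as exc:
    caught = str(exc)
```

<p>In Python, use the operators instead, and import <code>round</code> (and <code>to_date</code>) from <code>pyspark.sql.functions</code> so the builtin <code>round</code> is not used, e.g. <code>round(col("Masse hydrique") / lit(100), 3)</code>.</p>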
<hr />
<p>The full code:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit
spark = SparkSession.builder.getOrCreate()
poidsCsvFile = "/home/lebihan/dev/Java/garmin/correctionPoids.csv";
#
# Durée,Poids,Variation,IMC,Masse grasse,Masse musculaire squelettique,Masse osseuse,Masse hydrique
# 11/02/2022,72.2,0.1,22.8,25.6,29.7,3.5,54.3
#
poids = (spark.read.format("csv")
    .option("inferSchema", "true")       # detect field types
    .option("header", "true")            # the file has column headers
    .option("quote", "\"")               # text values may be quoted
    .option("escape", "\"")              # don't interpret quoted content
    .option("dec", ".")                  # the decimal separator is the dot '.'
    .option("dateFormat", "dd/MM/yyyy")  # dates are in day/month/year format
    .option("nullValue", "null")         # avoid bad type detection on missing values
    .option("sep", ",")                  # the csv field separator is the comma
    .load(poidsCsvFile))
poids = poids.withColumn("pctMasseHydrique", round(col("Masse hydrique").divide(lit(100)), 3)) \
.withColumn("masseEau", round(col("Poids").multiply(col("pctMasseHydrique")), 1)) \
.withColumn("poidsSansEau", round(col("Poids").minus(col("Poids").multiply(col("pctMasseHydrique"))), 1)) \
.withColumn("pctMasseGrasse", round(col("Masse grasse").divide(lit(100)),3)) \
.withColumn("masseGrasse", round(col("Poids").multiply(col("pctMasseGrasse")), 1)) \
.withColumnRenamed("Poids", "poids") \
.withColumnRenamed("Masse osseuse", "masseOsseuse") \
.withColumnRenamed("Masse musculaire squelettique", "masseMusculaireSquelettique") \
.withColumn("date", to_date(col("Durée"), "dd/MM/yyyy")) \
.withColumn("ratioGraisse", round(col("masseGrasse").divide(col("poids")), 3)) \
.withColumn("ratioGraisseSansEau", round(col("masseGrasse").divide(col("poidsSansEau")), 3)) \
.withColumn("ratioMasseMusculaire", round(col("masseMusculaireSquelettique").divide(col("poids")), 3)) \
.withColumn("ratioMasseOsseuse", round(col("masseOsseuse").divide(col("poids")), 3)) \
.withColumnRenamed("Masse grasse", "masseGrasseKg") \
.withColumnRenamed("Masse hydrique", "masseHydrique");
</code></pre>
<p>with csv content:</p>
<pre><code>Durée,Poids,Variation,IMC,Masse grasse,Masse musculaire squelettique,Masse osseuse,Masse hydrique,
24/04/2022,70.1,0.0,22.1,24.3,29.1,3.5,55.2,
23/04/2022,70.1,0.1,22.1,24.5,29.1,3.5,55.1,
22/04/2022,70.0,0.1,22.1,25.3,29.1,3.4,54.5,
21/04/2022,69.9,0.0,22.1,24.8,29.0,3.4,54.9,
20/04/2022,69.9,0.4,22.1,24.3,29.0,3.5,55.2,
19/04/2022,70.3,0.1,22.2,23.9,29.1,3.5,55.5,
18/04/2022,70.4,0.4,22.2,24.5,29.2,3.5,55.1,
17/04/2022,70.8,0.8,22.3,24.7,29.3,3.5,54.9,
16/04/2022,70.0,0.3,22.1,24.4,29.1,3.5,55.2,
15/04/2022,69.7,0.3,22,24.7,29.0,3.4,55.0,
14/04/2022,70.0,0.2,22.1,24.9,29.1,3.4,54.9,
13/04/2022,70.2,0.3,22.2,24.9,29.1,3.4,54.8,
12/04/2022,70.5,0.4,22.3,24.7,29.2,3.5,55.0,
11/04/2022,70.9,0.5,22.4,25.4,29.3,3.5,54.5,
10/04/2022,70.4,0.2,22.2,25.4,29.2,3.4,54.5,
09/04/2022,70.6,0.6,22.3,25.3,29.2,3.4,54.5,
</code></pre>
|
<python><apache-spark>
|
2024-01-02 11:41:26
| 2
| 3,561
|
Marc Le Bihan
|
77,745,786
| 1,533,811
|
Can merchants disable Shopee's Buyer Info masking functionality?
|
<p>I am consuming the Shopee API "get_order_detail" and I'm getting masked data in the shipping section of the response, as shown below:</p>
<pre><code>{'name': '****', 'phone': '****', 'town': '****', 'district': '****', 'city': '****',
'state': '****', 'region': '****', 'zipcode': '****', 'full_address': '****'}.
</code></pre>
<p>May I know if merchants have the ability, or any alternative, to unmask this data? We use this <code>shipping_address</code> data for creating an order.</p>
|
<python><shopee>
|
2024-01-02 11:30:49
| 0
| 794
|
balaji
|
77,745,673
| 5,594,008
|
Poetry add command removes category from lock file
|
<p>My current poetry.lock file looks like this (part of it)</p>
<pre><code># This file is automatically @generated by Poetry and should not be changed by hand.
[[package]]
name = "annotated-types"
version = "0.6.0"
description = "Reusable constraint types to use with typing.Annotated"
category = "main"
optional = false
python-versions = ">=3.8"
files = [
{file = "annotated_types-0.6.0-py3-none-any.whl", hash = "sha256:0641064de18ba7a25dee8f96403ebc39113d0cb953a01429249d5c7564666a43"},
{file = "annotated_types-0.6.0.tar.gz", hash = "sha256:563339e807e53ffd9c267e99fc6d9ea23eb8443c08f112651963e24e22f84a5d"},
]
[[package]]
name = "asgiref"
version = "3.7.2"
description = "ASGI specs, helper code, and adapters"
category = "main"
optional = false
python-versions = ">=3.7"
files = [
{file = "asgiref-3.7.2-py3-none-any.whl", hash = "sha256:89b2ef2247e3b562a16eef663bc0e2e703ec6468e2fa8a5cd61cd449786d4f6e"},
{file = "asgiref-3.7.2.tar.gz", hash = "sha256:9e0ce3aa93a819ba5b45120216b23878cf6e8525eb3848653452b4192b92afed"},
]
</code></pre>
<p>For every package there is a <strong>category</strong></p>
<p>Now I need to add an extra package. I run <code>poetry add --group main beautifulsoup4</code>, but the <strong>category</strong> field is removed from every package entry. What should I do to 1) install a new package and 2) keep the category fields unchanged?</p>
|
<python><python-poetry>
|
2024-01-02 11:07:59
| 1
| 2,352
|
Headmaster
|
77,745,537
| 746,703
|
pyside6-designer - libpython3.12.dylib (no such file)
|
<p>I want to start pyside6-designer. I installed Python 3.12.1 via pyenv, which was successful, and I also installed PySide6 via pip, also successfully.</p>
<p>Now, when I want to start the designer with <code>pyside6-designer</code>, I get this error:</p>
<pre><code>Error: dyld[28253]: terminating because inserted dylib 'libpython3.12.dylib' could not be loaded: tried: 'libpython3.12.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OSlibpython3.12.dylib' (no such file), 'libpython3.12.dylib' (no such file), '/Users/xxx123/libpython3.12.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/xxx123/libpython3.12.dylib' (no such file), '/Users/xxx123/libpython3.12.dylib' (no such file)
dyld[28253]: tried: 'libpython3.12.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OSlibpython3.12.dylib' (no such file), 'libpython3.12.dylib' (no such file), '/Users/xxx123/libpython3.12.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/xxx123/libpython3.12.dylib' (no such file), '/Users/xxx123/libpython3.12.dylib' (no such file)
while executing '/Users/xxx123/.pyenv/versions/3.12.1/lib/python3.12/site-packages/PySide6/Designer.app/Contents/MacOS/Designer'
</code></pre>
<p>I am running macOS 14.2.1 (23C71).</p>
<p>How can I fix this?</p>
|
<python><qt-designer><pyside6>
|
2024-01-02 10:42:04
| 2
| 545
|
Danzzz
|
77,745,266
| 1,851,425
|
Create a local settings file using poetry
|
<p>I have a Python script that checks files in a certain directory. It uses a <code>settings.ini</code> file with settings local to your environment: database settings, the root directory of the files to check, and some Git settings.</p>
<p>Until now this file lived in the same directory as the script.
In the code I used the following to access it:</p>
<pre><code>config = configparser.ConfigParser()
config.read('settings.ini')
</code></pre>
<p>Now I want to change my script to a Poetry project.
I want to change the layout to:</p>
<pre><code>my-Package
|
|-- Source dir
| |- script.py
|-- Settings dir
| |- settings.ini
|-- pyproject.toml
</code></pre>
<p>What is the best way to find my settings.ini in my code?</p>
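<p>A common sketch (the directory names below mirror the layout above and are assumptions) is to resolve the path relative to the script file rather than the current working directory:</p>

```python
import configparser
from pathlib import Path

# Assumption: script.py lives in <project>/source_dir/ and the settings
# live in <project>/settings_dir/settings.ini, as in the layout above.
SETTINGS_FILE = Path(__file__).resolve().parent.parent / "settings_dir" / "settings.ini"

config = configparser.ConfigParser()
config.read(SETTINGS_FILE)  # ConfigParser.read() silently skips missing files
```

<p>If the settings file is shipped as package data instead, <code>importlib.resources</code> is the packaging-friendly alternative.</p>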
<p>Kind regards</p>
|
<python><python-3.x><python-poetry>
|
2024-01-02 09:49:42
| 1
| 2,139
|
nightfox79
|
77,745,248
| 9,814,710
|
MaxPooling operation on temporal data - select signal with the highest amplitude
|
<p>Is there a clean way to perform a max-pooling operation on temporal data, i.e. so that the signal with the highest amplitude is the output?</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code># sample four sin signals
a = 2*tf.math.sin(tf.linspace(0, 10, 200))
b = 0.1*tf.math.sin(2*tf.linspace(0, 10, 200))
c = 3*tf.math.sin(0.5*tf.linspace(0, 10, 200))
d = 1*tf.math.sin(5*tf.linspace(0, 10, 200))
# stack the signals
data = tf.stack([a, b, c, d], -1)
# reshape to appropriate timeseries of 2D feature-maps
# (batch_size, sequence length, feature_dim1, feature_dim2, channels)
data = tf.reshape(data, [1, 200, 2, 2, 1])
</code></pre>
<p><code>data</code> will look something like this:</p>
<p><a href="https://i.sstatic.net/LcDt8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LcDt8.png" alt="enter image description here" /></a></p>
<p>Now, I want to perform something similar to a <code>MaxPooling2D((2,2))</code> operation on <code>data</code> to get only <code>c</code> (as it has the highest amplitude). Clearly, we cannot use <code>MaxPooling3D</code> and <code>TimeDistributed</code> layers directly, as they will perform pooling at each timestep. I tried my luck with alternatives using <code>tf.math.reduce_max()</code> and <code>tf.nn.max_pool_with_argmax</code>, but they were not straightforward.</p>
<p>Any suggestions or comments are appreciated. Thanks in advance :)</p>
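<p>For what it's worth, the selection logic (peak amplitude over the whole time axis, then gather the winner) can be sketched in NumPy; in TensorFlow the same steps map to <code>tf.reduce_max</code>, <code>tf.argmax</code>, and <code>tf.gather</code>:</p>

```python
import numpy as np

t = np.linspace(0, 10, 200)
# Four signals with amplitudes 2, 0.1, 3, 1 -> signal index 2 should win.
signals = np.stack(
    [2 * np.sin(t), 0.1 * np.sin(2 * t), 3 * np.sin(0.5 * t), 1 * np.sin(5 * t)],
    axis=-1,
)  # shape (200, 4)

amplitude = np.abs(signals).max(axis=0)  # peak |value| per signal, shape (4,)
winner = int(amplitude.argmax())         # index of the loudest signal
pooled = signals[:, winner]              # keep that entire time series
```

<p>This pools over the whole sequence rather than per timestep, which is the distinction that makes <code>TimeDistributed</code> pooling unsuitable here.</p>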
|
<python><tensorflow><conv-neural-network>
|
2024-01-02 09:46:32
| 1
| 511
|
Vigneswaran C
|
77,744,784
| 5,572,627
|
Python multiprocess pool.map takes too long for initialization
|
<pre><code>%%time
import pandas as pd
import numpy as np
from datetime import datetime
from functools import partial
from multiprocessing import Pool


def get_id(x):
    return x['motor_id']


def parallel_load(df, get_id):
    print('parallel_load: ', datetime.now())
    return df.apply(lambda x: get_id(x), axis=1)


def parallel_df(df, func, n_cores):
    print('start: ', datetime.now())
    df_split = np.array_split(df, n_cores)
    print('splitted: ', datetime.now())
    pool = Pool(n_cores)
    # df = pd.concat(pool.map(func, df_split))
    s = datetime.now()
    print('pool start: {}'.format(s))
    data = pool.map(func, df_split)
    print('pool end: {}, {}s'.format(datetime.now(), (datetime.now() - s).seconds))
    pool.close()
    pool.join()
    print('pool join: {}s'.format((datetime.now() - s).seconds))
    return data


new_func = partial(parallel_load, get_id=get_id)
ndf = pd.DataFrame({'motor_id': np.arange(10000)})
parallel_df(ndf, new_func, 4)
</code></pre>
<p>With the sample code above,
I get this result</p>
<pre><code>start: 2024-01-02 16:58:09.390751
splitted: 2024-01-02 16:58:09.391897
parallel_load: parallel_load: parallel_load: parallel_load: parallel_load: parallel_load: 2024-01-02 16:58:12.453533parallel_load: parallel_load: 2024-01-02 16:58:12.456901parallel_load: parallel_load:
2024-01-02 16:58:12.456053 2024-01-02 16:58:12.4578302024-01-02 16:58:12.457674 2024-01-02 16:58:12.457189
2024-01-02 16:58:12.457663
2024-01-02 16:58:12.4583492024-01-02 16:58:12.458657
2024-01-02 16:58:12.457537
pool start: 2024-01-02 16:58:12.448087
pool end: 2024-01-02 16:58:12.504469, 0s
pool join: 0s
CPU times: user 52.5 ms, sys: 3.48 s, total: 3.53 s
Wall time: 3.39 s
</code></pre>
<p>As you can see, it seems to take about 3 seconds for each process to start running the actual code after <code>pool.map</code> is called.</p>
<p>Check the time between "splitted" and "parallel_load".</p>
<p>Is there a way to shorten this time?
It seems like <code>pool.map</code> is not a good fit for small jobs like this, since getting started takes far more time than the actual processing.</p>
|
<python><multiprocess>
|
2024-01-02 08:01:51
| 0
| 581
|
Isaac Sim
|
77,744,319
| 785,581
|
Telethon: login not saved
|
<p>I want to write my own client for the Telegram messenger in Python. For this I use the <a href="https://github.com/LonamiWebs/Telethon" rel="nofollow noreferrer">Telethon</a> library. I take the example code <a href="https://stackoverflow.com/questions/77744149/telethon-how-to-get-information-about-yourself-correctly/77744168#77744168">from here</a> (the example code from the Telethon main page crashes on startup):</p>
<pre><code>from telethon import TelegramClient
import asyncio

api_id = my_id
api_hash = 'my_ha'


async def main():
    client = TelegramClient('Test2Session', api_id, api_hash)
    await client.start()
    me = await client.get_me()
    print(me.stringify())


if __name__ == '__main__':
    asyncio.run(main())
</code></pre>
<p>When I launch the app for the first time, Telethon asks me to enter a phone number. The code is then sent to the account associated with that phone number. The library asks to enter this code and I enter it. The app then prints out information about me.</p>
<p>After finishing the application I have the following problems:</p>
<ol>
<li>Telegram logs out on all devices, all sessions (including official clients) and I need to log in again.</li>
<li>When I restart the application, it does not automatically log in and asks to re-enter my phone number and code. But the code is not received and the application remains stuck at the stage of entering the code. I have to interrupt it manually. Even though the Test2Session.session file is created upon first launch.</li>
<li>If I change the first argument in the TelegramClient function (session name), then the code arrives and the application logs in, but the problems from points 1 and 2 are repeated again.</li>
</ol>
<p>Please tell me what I’m doing wrong and how to log in to Telegram using Telethon correctly?</p>
|
<python><telegram><telethon>
|
2024-01-02 05:37:35
| 1
| 1,154
|
Andrey Epifantsev
|
77,744,169
| 3,291,077
|
Docstring doesn't show on hover in vscode jupyter notebook
|
<p>When I import a function into a vscode notebook, instead of getting a docstring I just get this:</p>
<p><a href="https://i.sstatic.net/7RktN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7RktN.png" alt="enter image description here" /></a></p>
<p>Here is the function:</p>
<pre><code>import numpy as np
import pandas as pd


def balance_binary_target(df, target="", ratio=None):
    """
    Takes data frame X and target string
    Returns dataframe with upsampled balanced classes
    """
    classes = df[target].value_counts()
    class_differential = classes[0] - classes[1]
    if class_differential < 0:
        class_value = 0
        upsample = np.abs(class_differential)
        if ratio:
            upsample = int(ratio * upsample)
        sample = df[df[target] == class_value].sample(upsample, replace=True)
        df = pd.concat([sample, df])
    elif class_differential > 0:
        class_value = 1
        upsample = np.abs(class_differential)
        if ratio:
            upsample = int(ratio * upsample)
        sample = df[df[target] == class_value].sample(upsample, replace=True)
        df = pd.concat([sample, df])
    return df
</code></pre>
<p>When I hover over the same function in the simple file text editor I get this:</p>
<p><a href="https://i.sstatic.net/R3lVh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R3lVh.png" alt="enter image description here" /></a></p>
<p>How can I get the same detail showing in the jupyter notebook as I do in the base editor?</p>
<p>Here is my version of vscode:</p>
<p><a href="https://i.sstatic.net/A255k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A255k.png" alt="enter image description here" /></a></p>
<p>Here is my version of the Python extension:</p>
<p><a href="https://i.sstatic.net/3MIA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3MIA6.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><autocomplete><docstring>
|
2024-01-02 04:32:59
| 0
| 4,465
|
rgalbo
|
77,744,149
| 785,581
|
Telethon: how to get information about yourself correctly?
|
<p>I want to write my own client for the Telegram messenger in Python. I use the <a href="https://github.com/LonamiWebs/Telethon" rel="nofollow noreferrer">Telethon</a> library. I took the code example from their main page:</p>
<pre><code> from telethon import TelegramClient
api_id = my_id
api_hash = 'my_hash'
client = TelegramClient('Test2Session', api_id, api_hash)
client.start()
print(client.get_me().stringify())
</code></pre>
<p>This code should output information about me to the console. When I run this code I get the error:</p>
<pre><code> AttributeError: 'coroutine' object has no attribute 'stringify'
</code></pre>
<p>in line</p>
<pre><code> print(client.get_me().stringify())
</code></pre>
<p>How to fix this error? How to receive information about yourself?</p>
|
<python><telegram><telethon>
|
2024-01-02 04:21:11
| 2
| 1,154
|
Andrey Epifantsev
|
77,744,086
| 1,285,061
|
Numpy use one array as mask for another array
|
<p>I am trying to do these steps in NumPy. It was easy to do this with Python lists using <code>sort()</code> and <code>argsort()</code>.</p>
<p>How do I do this in NumPy?</p>
<pre><code>a = np.array([10,30,20,40,50])
a_sorted = np.array([10,20,30,40,50])
</code></pre>
<p>Get mask of a_sorted</p>
<pre><code>b = np.array(['one','three','two','four','five'])
</code></pre>
<p>Apply the mask to b</p>
<p>Expected array sorted according to a_sorted:</p>
<pre><code>b_sorted = np.array(['one','two','three','four','five'])
</code></pre>
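<p>A sketch of the approach the question is reaching for: <code>np.argsort(a)</code> yields the index array ("mask" in the question's wording) that sorts <code>a</code>, and fancy indexing applies it to <code>b</code>:</p>

```python
import numpy as np

a = np.array([10, 30, 20, 40, 50])
b = np.array(['one', 'three', 'two', 'four', 'five'])

order = np.argsort(a)  # indices that would sort a -> [0, 2, 1, 3, 4]
a_sorted = a[order]    # same as np.sort(a)
b_sorted = b[order]    # b rearranged into a's sort order
```
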
|
<python><arrays><numpy><sorting><mask>
|
2024-01-02 03:46:40
| 0
| 3,201
|
Majoris
|
77,744,025
| 10,200,497
|
Keep the previous maximum value and create a new column
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [1, 9, 8, 10, 7, 11, 20]
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>b</code>:</p>
<pre><code> a b
0 1 1
1 9 9
2 8 9
3 10 10
4 7 10
5 11 11
6 20 20
</code></pre>
<p>This is actually a problem when I am converting code from Pine Script to Python. I am not used to thinking in this way.</p>
<p>I explain the issue with an example:</p>
<p>Row <code>0</code> in <code>b</code> is always <code>df.a.iloc[0]</code>. Now in order to get the next row in <code>b</code>, row number <code>1</code> in <code>a</code> should be compared with row number <code>0</code> in <code>b</code>. The greater one is selected for <code>b</code>. In this case 9 > 1 so 9 is selected.</p>
<p>For the next row in <code>b</code>, row number <code>2</code> in <code>a</code> is compared with row number <code>1</code> in <code>b</code>; 9 > 8, so 9 is selected. And so on to the end. This image clarifies the issue:</p>
<p><a href="https://i.sstatic.net/kVNY6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kVNY6.png" alt="enter image description here" /></a></p>
<p>I tried a lot of solutions. But since I am not used to this kind of logic, it feels like I'm running in circles. This is one of my attempts:</p>
<pre><code>df["b"] = np.fmax(df["a"], df["a"].shift(1))
</code></pre>
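<p>For reference, the rule described above ("compare each new value against the previous result, keep the greater") is a running maximum, which is exactly what <code>Series.cummax</code> computes, so one sketch is:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 9, 8, 10, 7, 11, 20]})
df['b'] = df['a'].cummax()  # each row: max of a up to and including that row
```

<p>Unlike the <code>shift(1)</code> attempt, which only looks one row back, <code>cummax</code> carries the maximum over the whole prefix, matching the expected output.</p>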
|
<python><pandas><dataframe>
|
2024-01-02 03:12:37
| 1
| 2,679
|
AmirX
|
77,743,898
| 10,200,497
|
Keep the previous maximum value after the streak ends
|
<p>This is my dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [110, 115, 112, 180, 150, 175, 160, 145, 200, 205, 208, 203, 206, 207, 208, 209, 210, 215],
'b': [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>c</code>.</p>
<pre><code> a b c
0 110 1 110
1 115 1 115
2 112 0 115
3 180 1 180
4 150 0 180
5 175 1 180
6 160 0 180
7 145 0 180
8 200 1 200
9 205 1 205
10 208 1 208
11 203 0 208
12 206 1 208
13 207 1 208
14 208 1 208
15 209 1 209
16 210 1 210
17 215 1 215
</code></pre>
<p>When <code>df.a > df.a.shift(1)</code>, <code>b</code> is 1; otherwise it is 0.</p>
<p>Steps needed:</p>
<p>a) Find where the streak of 1 in <code>b</code> ends.</p>
<p>b) Keep the maximum value of the streak.</p>
<p>c) Put that value in <code>c</code> until a greater value is found in <code>a</code>.</p>
<p>For example when 180 is found in <code>b</code>:</p>
<p>a) Row <code>3</code> has streak of 1.</p>
<p>b) Maximum value of the streak is 180.</p>
<p>c) <code>df.c = 180</code> until a greater value is found in <code>a</code>. In this case it is 200 at row <code>8</code>.</p>
<p>It was not easy to elaborate the problem. Maybe I have described the problem with wrong words. So If there are any questions feel free to ask in the comments.</p>
<p>And I would really appreciate it if you could show a built-in or clean way to create column <code>b</code>; I put those 1s and 0s in manually.</p>
<p>This is what I have tried. But it does not feel like a correct approach.</p>
<pre><code>df['streak'] = df['b'].ne(df['b'].shift()).cumsum()
df['max'] = df.groupby('streak')['a'].max()
</code></pre>
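<p>Since <code>c</code> never decreases and only jumps when <code>a</code> sets a new high, it matches a running maximum for the sample output above; and <code>b</code> can be derived with <code>diff()</code> (the first row has no predecessor, so it is set by hand to match the expected output):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {'a': [110, 115, 112, 180, 150, 175, 160, 145, 200, 205, 208, 203, 206,
           207, 208, 209, 210, 215]}
)
df['b'] = df['a'].diff().gt(0).astype(int)  # 1 where a rose vs the previous row
df.loc[0, 'b'] = 1                          # first row, per the expected output
df['c'] = df['a'].cummax()                  # previous maximum carried forward
```
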
|
<python><pandas><dataframe>
|
2024-01-02 02:07:36
| 1
| 2,679
|
AmirX
|
77,743,753
| 5,484,417
|
Python print function stopped working after installing super-gradients in Google Colab
|
<p>After installing super-gradients in different ways in Google Colab, including <code>!pip install -qq super_gradients==3.5</code>, and importing it, the Python <code>print()</code> function stopped working: calling it displays no output.</p>
|
<python><google-colaboratory>
|
2024-01-02 00:34:33
| 2
| 440
|
CVDE
|
77,743,561
| 3,486,773
|
How do I left align data in pandas dataframe in tkinter label?
|
<p>I am trying to display a pandas dataframe in a tkinter label in a popup window so that the user can easily see the data.</p>
<p>Here is the code:</p>
<pre><code>from tkinter import Toplevel, Button, Tk
import tkinter as tk
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', 50)
pd.set_option('display.width',200)
root = Tk()
root.geometry("550x600+500+200")
df = pd.DataFrame.from_dict({'title': {0: 'Sober', 1: 'Sober (8-Bit Tool Emulation) - 8-Bit Tool Emulation', 2: 'Sober (1993) [8-Bit Tool Emulation]', 3: 'Tool No. 2 (Sober)', 4: 'Sober (Made Famous by Tool)'}, 'artist': {0: 'Tool', 1: '8-Bit Arcade', 2: '8-Bit Arcade', 3: 'UltraRock', 4: 'Baby Lullaby Ensemble'}, 'endPos': {0: 38235, 1: 46546, 2: 56462, 3: 65970, 4: 74870}, 'match': {0: 100, 1: 29, 2: 37, 3: 57, 4: 35}, 'url': {0: 'https://www.musixmatch.com/lyrics/Tool-4/sober?utm_source=application&utm_campaign=api&utm_medium=musixmatch-community%3A1409608317702', 1: 'https://www.musixmatch.com/lyrics/8-Bit-Arcade/Sober-8-Bit-Tool-Emulation-8-Bit-Tool-Emulation?utm_source=application&utm_campaign=api&utm_medium=musixmatch-community%3A1409608317702', 2: 'https://www.musixmatch.com/lyrics/8-Bit-Arcade/Sober-1993-8-Bit-Tool-Emulation?utm_source=application&utm_campaign=api&utm_medium=musixmatch-community%3A1409608317702', 3: 'https://www.musixmatch.com/lyrics/UltraRock/Tool-No-2-Sober?utm_source=application&utm_campaign=api&utm_medium=musixmatch-community%3A1409608317702', 4: 'https://www.musixmatch.com/lyrics/Baby-Lullaby-Ensemble/Sober-Made-Famous-by-Tool?utm_source=application&utm_campaign=api&utm_medium=musixmatch-community%3A1409608317702'}}
)
def new_window():
    win = Toplevel()
    win.geometry("%dx%d+%d+%d" % (1050, 270, root.winfo_x(), root.winfo_y() + 600 / 4))
    tk.Label(win, text=df).grid(row=0, sticky=(tk.E))


Button(root, text="open", command=new_window).pack()
root.mainloop()
</code></pre>
<p>but this is what the output looks like:</p>
<p><a href="https://i.sstatic.net/NmvXR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NmvXR.png" alt="enter image description here" /></a></p>
<p>How do I make the values and headers left-aligned? Or is there another way to display this more nicely? I am not interested in using actual tables that look like Excel; I just want to make this more readable.</p>
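<p>One sketch, assuming a fixed-width font is acceptable (the helper names below are hypothetical): render the frame with <code>to_string()</code>, use a monospaced font so columns line up, and set <code>justify="left"</code>/<code>anchor="w"</code> on the label:</p>

```python
import pandas as pd
import tkinter as tk


def df_to_label_text(df: pd.DataFrame) -> str:
    # justify="left" here left-justifies the column headers in the string.
    return df.to_string(index=False, justify="left")


def show_df(parent, df):
    return tk.Label(
        parent,
        text=df_to_label_text(df),
        justify="left",        # left-align the lines of the multi-line text
        anchor="w",            # pin the text block to the left edge
        font=("Courier", 10),  # fixed-width font keeps columns aligned
    )


def main():
    df = pd.DataFrame({"title": ["Sober", "Tool No. 2 (Sober)"], "match": [100, 57]})
    root = tk.Tk()
    show_df(root, df).grid(row=0, sticky="w")
    root.mainloop()

# main()  # uncomment to open the window
```
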
|
<python><pandas><dataframe><tkinter><label>
|
2024-01-01 22:46:22
| 1
| 1,278
|
user3486773
|
77,743,557
| 338,479
|
Python3 curses segfault on stdscr.refresh()
|
<p>Here is a very small program that demonstrates the problem:</p>
<pre><code>#!/usr/bin/env python3

import curses
import sys
import time


def main():
    try:
        stdscr = curses.initscr()
        stdscr.clear()
        stdscr.addstr(0, 0, "This is the prompt")
        stdscr.refresh()
        time.sleep(3)
    finally:
        curses.endwin()
    return 0


if __name__ == '__main__':
    sys.exit(main())
</code></pre>
<p>If I change <code>python3</code> to <code>python</code>, it works perfectly. With python3, I get a segmentation fault in the call to <code>stdscr.refresh()</code>.</p>
|
<python><python-3.x><macos><curses><macos-ventura>
|
2024-01-01 22:45:29
| 2
| 10,195
|
Edward Falk
|
77,743,536
| 19,366,064
|
How to remove trailing comma for Python code formatter Ruff
|
<p>Before applying the ruff formatter</p>
<pre><code>def method(arg1, arg2, arg3, ..., arg10):
</code></pre>
<p>After applying the ruff formatter</p>
<pre><code>def method(
    arg1,
    arg2,
    arg3,
    ...,
    arg10,
)
</code></pre>
<p>Is there a way to configure the ruff formatter to remove the trailing comma next to arg10 by passing an argument to ruff.format.args?</p>
<pre><code>"ruff.format.args": ["remove-trailing-comma"]
</code></pre>
|
<python><visual-studio-code><ruff>
|
2024-01-01 22:32:56
| 2
| 544
|
Michael Xia
|
77,743,437
| 14,735,451
|
How to plot lists with different x axis and number of points?
|
<p>I have a collection of dictionaries:</p>
<pre><code>my_dict_1 = {"x=0": 0.33282876064333017, "x=50": 0.3466414380321665, "x=110": 0.3540208136234626, "x=120": 0.350236518448439, "x=130": 0.35042573320719017, "x=140": 0.33415326395458844, "x=150": 0.33245033112582784, "x=450": 0.33345033112582784,}
my_dict_2 = {"x=0": 0.24751431153962036, "x=60": 0.2752335040674902, "x=130": 0.2799035854172944, "x=140": 0.28321783669780054, "x=170": 0.31048508586923773, "x=180": 0.3110876770111479, "x=200": 0.2946670683940946, "x=210": 0.30491111780656827, "x=220": 0.29873455860198855, "x=230": 0.299939740885809, "x=240": 0.3005423320277192, "x=260": 0.29873455860198855, "x=270": 0.2933112383247966, "x=280": 0.2963241940343477, "x=290": 0.2927086471828864, "x=300": 0.2927086471828864, "x=310": 0.2937631816812293, "x=320": 0.2960228984633926, "x=330": 0.2889424525459476, "x=340": 0.28984633925881287, "x=350": 0.28969569147333535, "x=360": 0.2770412774932208, "x=370": 0.26303103344380835, "x=380": 0.27628803856583306, "x=390": 0.2146730943055137, "x=400": 0.2124133775233504, "x=410": 0.2124133775233504}
my_dict_3 = {"x=0": 0.248, "x=50": 0.26, "x=110": 0.281, "x=120": 0.282, "x=130": 0.281, "x=140": 0.292, "x=150": 0.28, "x=160": 0.268, "x=170": 0.274, "x=180": 0.295, "x=190": 0.307, "x=210": 0.28, "x=230": 0.291, "x=240": 0.299, "x=250": 0.266, "x=260": 0.269, "x=270": 0.288, "x=290": 0.283, "x=300": 0.285, "x=310": 0.28, "x=320": 0.243, "x=330": 0.253, "x=340": 0.256, "x=350": 0.246, "x=360": 0.24, "x=370": 0.23, "x=380": 0.228, "x=390": 0.23, "x=400": 0.221, "x=410": 0.222, "x=420": 0.239, "x=430": 0.24, "x=440": 0.222, "x=450": 0.219, "x=460": 0.219, "x=470": 0.219, "x=480": 0.217, "x=490": 0.203, "x=500": 0.214, "x=510": 0.227, "x=520": 0.227, "x=530": 0.209, "x=540": 0.22, "x=550": 0.246, "x=560": 0.226, "x=570": 0.247, "x=580": 0.241, "x=590": 0.267}
</code></pre>
<p>I can convert each one to a list with <code>my_dict.values()</code>, which I can plot with <code>plt.plot(my_dict.values())</code>.</p>
<p>But how can I set a fixed x axis, say from 0 to 500 with increments of 10, and plot each list's values at the correct x-axis positions? I want to plot all the lists together on the same plot.</p>
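<p>One sketch: parse the numeric part of each key (<code>"x=130"</code> → <code>130</code>) and pass explicit x values to <code>plot</code>; each line then lands at the right positions on a shared axis regardless of how many points it has, and fixed ticks come from <code>plt.xticks</code> (shortened stand-in dictionaries below):</p>

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend; omit this line in a notebook
import matplotlib.pyplot as plt

# Shortened stand-ins for the dictionaries above
my_dict_1 = {"x=0": 0.333, "x=50": 0.347, "x=110": 0.354, "x=450": 0.333}
my_dict_2 = {"x=0": 0.248, "x=60": 0.275, "x=130": 0.280, "x=410": 0.212}

for d, label in [(my_dict_1, "dict 1"), (my_dict_2, "dict 2")]:
    xs = [int(k.split("=")[1]) for k in d]  # "x=130" -> 130
    plt.plot(xs, list(d.values()), marker="o", label=label)

plt.xticks(range(0, 501, 50))  # shared fixed axis (every 10 is usually too dense)
plt.legend()
plt.savefig("curves.png")
```
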
|
<python><matplotlib>
|
2024-01-01 21:42:21
| 1
| 2,641
|
Penguin
|
77,743,320
| 12,178,630
|
Read black and white image with OpenCV
|
<p>I tried to simply read a binary mask image that I have created. When I read it using</p>
<pre><code>bw_mask = cv2.imread('image.png')
</code></pre>
<p>the image is read as a numpy array of zeros that looks like:</p>
<pre><code>array([[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
...,
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]], dtype=uint8)
</code></pre>
<p>Could you please explain why? I really want to understand the reason in depth, but in clear layman's terms. The colored image opens just as a typical image (not an all-zero array).</p>
<p>Why is it possible to read it, if these operations are applied?</p>
<pre><code>bw_mask = cv2.imread('image.png', cv2.IMREAD_ANYDEPTH)
bw_mask = bw_mask.view(np.int32)
img_cv_16bit = bw_mask.astype(np.uint16)
img_cv_8bit = np.clip(img_cv_16bit // 16, 0, 255).astype(np.uint8)
result = cv2.resize(img_cv_8bit, (800, 600), interpolation=cv2.INTER_CUBIC)
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))
plt.show()
</code></pre>
<p>I still do not understand why this works, I would appreciate your explanation and answering.</p>
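<p>One likely explanation, sketched below with plain NumPy (an assumption on my part: the mask PNG is 16-bit with small label values such as 0 and 1). The default <code>cv2.imread</code> converts 16-bit input down to 8-bit, which collapses small labels to 0, while <code>cv2.IMREAD_ANYDEPTH</code> preserves the original values so you can rescale them yourself for display:</p>

```python
import numpy as np

# hypothetical 16-bit mask whose foreground label is the small value 1
mask16 = np.zeros((4, 4), dtype=np.uint16)
mask16[1:3, 1:3] = 1

# roughly what the default imread does with 16-bit input: reduce to 8-bit
as_8bit = (mask16 // 256).astype(np.uint8)
print(as_8bit.max())        # 0 -> the mask looks completely black

# with IMREAD_ANYDEPTH the 16-bit values survive; rescale them yourself
rescaled = np.where(mask16 > 0, 255, 0).astype(np.uint8)
print(rescaled.max())       # 255 -> the mask is visible again
```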
|
<python><numpy><opencv><image-processing>
|
2024-01-01 20:54:28
| 1
| 314
|
Josh
|
77,743,218
| 1,319,998
|
SQLite locking for xFileSize and xTruncate
|
<p>I'm writing a <a href="https://www.sqlite.org/c3ref/io_methods.html" rel="nofollow noreferrer">VFS</a> for SQLite using <a href="https://rogerbinns.github.io/apsw/vfs.html" rel="nofollow noreferrer">APSW's VFS layer</a>: <a href="https://github.com/michalc/sqlite-memory-vfs" rel="nofollow noreferrer">https://github.com/michalc/sqlite-memory-vfs</a>, and trying to make sure it allows concurrent access safely</p>
<p>I <em>think</em> that xRead and xWrite are protected by the SQLite+VFS locking mechanism - xRead is protected by the SHARED lock, and xWrite by the EXCLUSIVE lock.</p>
<p>But: what about xFileSize and xTruncate? Are these protected by SHARED and EXCLUSIVE locks as well? Asking because the underlying data structures used in the VFS to store the files are <em>not</em> thread safe.</p>
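<p>Whatever guarantees SQLite itself gives, one defensive option is to serialize access to the VFS's internal storage with a lock of your own. A minimal sketch (not modeled on APSW's actual VFS base classes; the names are only illustrative):</p>

```python
import threading

class GuardedFile:
    """Illustrative only: guard non-thread-safe storage inside xFileSize/xTruncate."""
    def __init__(self, data=b""):
        self._data = bytearray(data)
        self._lock = threading.Lock()

    def xFileSize(self):
        with self._lock:
            return len(self._data)

    def xTruncate(self, newsize):
        with self._lock:
            del self._data[newsize:]

f = GuardedFile(b"0123456789")
f.xTruncate(4)
print(f.xFileSize())  # 4
```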
|
<python><python-3.x><multithreading><sqlite><locking>
|
2024-01-01 20:17:05
| 1
| 27,302
|
Michal Charemza
|
77,743,164
| 8,372,755
|
Orange Data Mining Python Script for LGBM w/optimization. Not showing parameters
|
<p>I tried to replicate the behavior of the Orange widgets for models like XGBoost. In this case, I managed to get Test&Score to show results using LightGBM with Bayesian optimization. I had to create a wrapper, and for now I print the optimized parameters to the console, but I intend for them to be viewable in a data table. Finally, although the Data Table connected to Test&Score displays the probability and classification columns well, when I connect a Data Table directly to the widget, it hangs. If anyone has experience with this, please review the code to ensure it adheres to best practices, so I have a template for writing scripts with other models. Thank you very much!</p>
<pre><code>import Orange
import numpy as np
import lightgbm as lgb
from Orange.data import Table, Domain, ContinuousVariable, StringVariable
from Orange.classification import Learner, Model
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
class LightGBMLearner(Learner):
"""
A wrapper learner for LightGBM classifier with Bayesian Optimization.
"""
def __init__(self, preprocessors=None):
super().__init__(preprocessors=preprocessors)
self.name = 'LightGBM BayesOpt'
def fit_storage(self, data):
return LightGBMModel(data)
class LightGBMModel(Model):
"""
A wrapper model for LightGBM classifier with Bayesian Optimization.
"""
def __init__(self, data):
super().__init__(data.domain)
self.domain = data.domain
self.lgbm, self.best_params = self.bayesian_optimization(data)
def bayesian_optimization(self, data):
# Define the hyperparameter search space
search_space = {
'learning_rate': Real(0.01, 0.5),
'n_estimators': Integer(100, 1000),
'num_leaves': Integer(20, 150),
'max_depth': Integer(3, 10),
'min_child_weight': Integer(1, 10),
'colsample_bytree': Real(0.1, 1.0)
}
# Setup LightGBM classifier within BayesSearchCV
lgbm = lgb.LGBMClassifier()
optimizer = BayesSearchCV(lgbm, search_space, n_iter=32, random_state=0, cv=3)
# Fit the model
optimizer.fit(data.X, data.Y.flatten())
# Return the best estimator and best parameters
return optimizer.best_estimator_, optimizer.best_params_
def predict(self, X):
return self.lgbm.predict(X)
def predict_storage(self, data):
X = data.X
predictions = self.predict(X)
probabilities = self.lgbm.predict_proba(X)
return predictions, probabilities
# Create the learner and train the classifier
if in_data:
out_learner = LightGBMLearner()
out_classifier = out_learner(in_data)
# Convert the optimal parameters to an Orange table
param_names = [StringVariable("Parameter")]
param_values = [ContinuousVariable("Value")]
domain = Domain(param_names, None, metas=param_values)
rows = [[name, np.array([value])] for name, value in out_classifier.best_params.items()]
out_params = Table.from_list(domain, rows)
# Get predictions and probabilities
predictions, probabilities = out_classifier.predict_storage(in_data)
# Create new variables for predictions and probabilities
prediction_var = StringVariable("Prediction")
prob_vars = [ContinuousVariable(f'P(Class={i})') for i in range(probabilities.shape[1])]
# Update the data table domain
new_domain = Domain(in_data.domain.attributes, in_data.domain.class_vars,
in_data.domain.metas + tuple([prediction_var] + prob_vars))
# Create the new table with predictions and probabilities
new_metas = np.hstack((in_data.metas,
predictions.reshape(-1, 1),
probabilities))
out_data = Table(new_domain, in_data.X, in_data.Y, new_metas)
else:
out_learner = None
out_classifier = None
out_data = None
out_params = None
</code></pre>
<p>I tried the above code and I expect to see the parameters optimized as coefficients in Curve Fit widget.</p>
|
<python><orange>
|
2024-01-01 19:54:08
| 0
| 409
|
Antonio Velazquez Bustamante
|
77,743,159
| 827,927
|
Create a curve in which the space between points is balanced
|
<p>If I want to plot the curve of a function, say 1/x, between 0.1 and 10, I can do it like this:</p>
<pre><code>xx = np.linspace(0.1,10,1000)
yy = 1.0/xx
plt.plot(xx,yy)
</code></pre>
<p>The problem is that the spacing between points is not balanced along the curve. Specifically, at the left of the curve, where x&lt;y, the points are very sparse (only about 10% of the points are there), and at the right of the curve, where x&gt;y, the points are much denser (about 90% of the points are there, even though the curve is symmetric in both parts).</p>
<p>How can I create the arrays <code>xx</code> and <code>yy</code> (in general, not only for this particular function) such that the spacing between adjacent points is similar throughout the entire curve?</p>
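<p>One general approach, as a sketch: sample the function densely, compute the cumulative arc length along that dense polyline, then interpolate back to x values at equally spaced arc lengths:</p>

```python
import numpy as np

def arclength_resample(f, a, b, n_points=1000, n_dense=100_000):
    # dense parameterization of the curve
    x = np.linspace(a, b, n_dense)
    y = f(x)
    # cumulative arc length along the dense polyline
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))])
    # pick x values at equally spaced arc lengths
    s_even = np.linspace(0.0, s[-1], n_points)
    xx = np.interp(s_even, s, x)
    return xx, f(xx)

xx, yy = arclength_resample(lambda x: 1.0 / x, 0.1, 10, 1000)
```

<p>Now <code>plt.plot(xx, yy)</code> gives points whose spacing along the curve is approximately uniform; increase <code>n_dense</code> for more accuracy.</p>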
|
<python><numpy><curve>
|
2024-01-01 19:51:59
| 1
| 37,410
|
Erel Segal-Halevi
|
77,743,056
| 20,266,647
|
Python OSError: [Errno 22] Invalid argument for datetime.timestamp()
|
<p>I got the issue in Python 3.9 under Windows, for datetime.timestamp():</p>
<pre><code>Traceback (most recent call last):
File "C:\Python\qgate-sln-mlrun\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-1a3ac1c0825d>", line 1, in <module>
datetime.datetime(year=3021, month=11, day=13).timestamp()
OSError: [Errno 22] Invalid argument
</code></pre>
<p>For this part of code:</p>
<pre><code>import datetime
datetime.datetime(year=2021, month=11, day=13).timestamp()
datetime.datetime(year=1021, month=11, day=13).timestamp()
datetime.datetime(year=3021, month=11, day=13).timestamp()
</code></pre>
<p>The strange thing is, it works correctly under Linux. Did you solve the same issue?</p>
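<p>For reference, a sketch of a portable workaround: a naive datetime's <code>.timestamp()</code> goes through the platform's local-time conversion, which on Windows rejects dates outside its supported range, while an aware datetime's timestamp is computed by pure arithmetic:</p>

```python
from datetime import datetime, timezone

# works on Windows and Linux alike; note this interprets the date as UTC,
# so the value differs from a local-time interpretation
ts = datetime(3021, 11, 13, tzinfo=timezone.utc).timestamp()
print(ts)

ts_old = datetime(1021, 11, 13, tzinfo=timezone.utc).timestamp()
print(ts_old)  # negative: before the 1970 epoch
```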
|
<python><timestamp>
|
2024-01-01 19:12:04
| 1
| 1,390
|
JIST
|
77,742,927
| 5,800,969
|
Selenium chromedriver not able to download file on virtual machine but downloading on local in headless mode using google-chrome
|
<p>I am downloading a file to the default download folder on a download-button click, using the code below.</p>
<pre><code> prefs = {"download.default_directory": file_download_folder_path,
"download.prompt_for_download": False,
"download.directory_upgrade": True,
"safebrowsing_for_trusted_sources_enabled": False,
"safebrowsing.enabled": False,
"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)
</code></pre>
<p>It works locally in both headless and non-headless mode. But when I tried it on a virtual machine (Linux Ubuntu 20.04), it can't download. It clicks the download button, but the file does not get downloaded.</p>
<p>I checked chromedriver version and google-chrome version are same and latest on virtual machine.</p>
<pre><code># google-chrome --version
Google Chrome 120.0.6099.129
# chromedriver --version
2024/01/01 23:55:13.650613 cmd_run.go:1055: WARNING: cannot start document portal: write unix @->/run/user/1001/bus: write: broken pipe
mkdir: cannot create directory ‘/run/user/0’: Permission denied
ChromeDriver 120.0.6099.71 (9729082fe6174c0a371fc66501f5efc5d69d3d2b-refs/branch-heads/6099_56@{#13})
</code></pre>
<p>The script runs in the <code>/opt</code> directory as root, and the download folder is a sub-directory of <code>/opt</code>. Since the script runs as root, it has the necessary permission to write to the given path inside <code>/opt</code>. It doesn't print any error after clicking the download button, either.</p>
<p>Can anyone help me out here? I am new to selenium. Thanks in advance.</p>
|
<python><selenium-webdriver><selenium-chromedriver><virtual-machine>
|
2024-01-01 18:29:42
| 1
| 2,071
|
iamabhaykmr
|
77,742,640
| 16,136,190
|
MoviePy: CompositeVideoClip generates blank, unplayable video
|
<p>I'm trying to generate a video: the image at the top, the text in the center and the video at the bottom.
I don't want any gap between them. The image is resized to become the first half, so the height is set to <code>960</code> (the final video is <code>1080 x 1920</code>). Similarly, the video is resized and cropped. The text should be on top of the whole thing. But what it generates is a blank, unplayable video (<code>2570 x 960</code>: why is the width <code>2570</code>?); no errors, either. I've tried many combinations over the past few days, but none worked.</p>
<p>Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
from moviepy.editor import VideoFileClip, concatenate_videoclips, TextClip, ImageClip, CompositeVideoClip, clips_array
from moviepy.video.fx.crop import crop
from moviepy.video.fx.resize import resize
from moviepy.config import change_settings
change_settings({"IMAGEMAGICK_BINARY": r"E:\User\ImageMagick\ImageMagick-7.1.1-Q16-HDRI\magick.exe"})
def generate_video(image_path, video_path, output_path, text, text_options: dict = None):
image_clip = ImageClip(image_path)
image_clip = resize(image_clip, width=1080, height=960)
video_clip = VideoFileClip(video_path)
video_clip = resize(video_clip, height=960)
video_clip = crop(video_clip, x_center=video_clip.w/2, width=1080)
if text_options is None:
text_options = {'fontsize': 30, 'color': 'white', 'bg_color': 'black', 'font': 'Arial'}
text_clip = TextClip(text, text_options)
text_clip = text_clip.set_position(("center", "center"))
image_clip = image_clip.set_duration(video_clip.duration)
text_clip = text_clip.set_duration(video_clip.duration)
image_clip = image_clip.set_position((0.0, 0.0), relative=True)
video_clip = video_clip.set_position((0.0, 0.5), relative=True)
final_clip = CompositeVideoClip([image_clip, text_clip, video_clip])
final_clip.write_videofile(output_path, codec="libx264", audio=False)
image_clip.close()
text_clip.close()
video_clip.close()
</code></pre>
<p>It's worked before (so the installation is fine; when the script was smaller), the paths are correct, and I've called the function with valid arguments. Why doesn't this code work? I'm also curious to know how the code would be if <code>clips_array</code> or <code>concatenate_videoclips</code> were used (the output should be exactly the same).</p>
|
<python><mp4><moviepy><video-editing>
|
2024-01-01 17:00:40
| 1
| 859
|
The Amateur Coder
|
77,742,624
| 22,466,650
|
How to group items of a list based on their type transition?
|
<p>My input is a list:</p>
<pre><code>data = [
-1, 0,
'a','b', 1, 2, 3,
'c', 6,
'd', 'e', .4, .5,
'a', 'b', 4,
'f', 'g',
]
</code></pre>
<p>I'm trying to form groups (dictionary) where the keys are the strings and the values are the numbers right after them.</p>
<p>There are however three details I should consider:</p>
<ul>
<li>The list of data I receive can sometimes have leading non-string values that should be ignored</li>
<li>The number of strings for each group is variable but the minimum is always 1</li>
<li>Some groups can appear multiple times (example: <code>a/b</code>)</li>
</ul>
<p>For all of that I made the code below:</p>
<pre><code>start = list(map(type, data)).index(str)
wanted = {}
for i in data[start:]:
strings = []
if type(i) == str:
strings.append(i)
numbers = []
else:
numbers.append(i)
wanted['/'.join(strings)] = numbers
</code></pre>
<p>This gives me nearly what I'm looking for:</p>
<pre><code>{'a': [], 'b': [4], '': [4], 'c': [6], 'd': [], 'e': [0.4, 0.5], 'f': [], 'g': []}
</code></pre>
<p>Can you show me how to fix my code?</p>
<p>My expected output is this:</p>
<pre><code>{'a/b': [1, 2, 3, 4], 'c': [6], 'd/e': [0.4, 0.5], 'f/g': []}
</code></pre>
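<p>For comparison, here is a sketch using <code>itertools.groupby</code>, which chunks the list at every str/non-str transition; it reproduces the expected output above, including merging repeated groups like <code>a/b</code>:</p>

```python
from itertools import groupby

data = [
    -1, 0,
    'a', 'b', 1, 2, 3,
    'c', 6,
    'd', 'e', .4, .5,
    'a', 'b', 4,
    'f', 'g',
]

wanted = {}
key = None
for is_str, chunk in groupby(data, key=lambda v: isinstance(v, str)):
    chunk = list(chunk)
    if is_str:
        key = '/'.join(chunk)
        wanted.setdefault(key, [])   # keep earlier values if the group repeats
    elif key is not None:            # skip leading non-string values
        wanted[key].extend(chunk)

print(wanted)
# {'a/b': [1, 2, 3, 4], 'c': [6], 'd/e': [0.4, 0.5], 'f/g': []}
```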
|
<python>
|
2024-01-01 16:54:43
| 4
| 1,085
|
VERBOSE
|
77,742,544
| 17,932,951
|
Maximize daily profit problem using Python and Pulp
|
<p>I am having trouble while trying to maximize the daily profit of a chocolate factory.</p>
<p>Problem: Maximize the daily profit of a factory that produces individual chocolates (each one having a profit and a maximum daily production capacity) and special packs (each of them containing 3 chocolates and a profit). Example:</p>
<pre><code># Limit Variables
# n - number of chocolates
# p - the number of special packs
# max_chocolates - the total maximum chocolate production
# capacity in a day
n = 5
p = 2
max_chocolates = 150
# For the chocolates: Each chocolate (1,2,3,4,5) has a profit and a maximum daily production capacity
profit_chocolates = [50, 30, 45, 40, 35]
capacity_chocolates = [27, 33, 30, 37, 35]
# For the special packs: Each pack is made of 3 chocolates and a profit for said pack
pack_components = [(1, 3, 5), (2, 3, 4)]
profit_packs = [130, 130]
</code></pre>
<p>Next, I present the code to get the maximum daily profit:</p>
<pre><code># Problem initialization
prob = LpProblem("Maximize_chocolates_and_packs_profit", LpMaximize)
# Decision Variables:
chocolate_vars = [LpVariable(f"x{i}", lowBound=0, cat="Integer") for i in range(n)]
packs_vars = [LpVariable(f"p{i}", lowBound=0, cat="Integer") for i in range(p)]
# Objective Function
prob += lpSum(profit_chocolates[i] * chocolate_vars[i] for i in range(n)) + \
lpSum(profit_packs[i] * packs_vars[i] for i in range(p))
# Constraints:
# For the maximum number of chocolates we are allowed to produce in a day
prob += lpSum(chocolate_vars[i] for i in range(n)) <= max_chocolates
# For the maximum daily production of each chocolate
for i in range(n):
prob += chocolate_vars[i] <= capacity_chocolates[i]
# For the special packs:
# Profit Constraint: The profit of each pack should not be lower or equal to the
# sum of the profits of the chocolates said pack contains
for i in range(p):
prob += lpSum(profit_chocolates[j - 1] for j in pack_components[i]) >= profit_packs[i]
# Capacity Constraint for the packs: Each pack should take into consideration the
# capacity of the chocolates it contains - if the capacity of single chocolate of the pack
# exceeds the daily production limit of that chocolate then we are not able to produce the pack
for i in range(p):
for j in pack_components[i]:
prob += (chocolate_vars[j - 1]) <= capacity_chocolates[j - 1]
# Each decision variable must be greater or equal to 0
for i in range(n):
prob += chocolate_vars[i] >= 0
for i in range(p):
prob += packs_vars[i] >= 0
prob.solve()
for i in range(len(chocolate_vars)):
print(chocolate_vars[i].varValue)
for i in range(len(packs_vars)):
print(packs_vars[i].varValue)
print(f"Maximum daily profit: {int(prob.objective.value())}")
</code></pre>
<p>For the above input data, the expected result (maximum profit) should be 6440. However, the result I get is 6035.</p>
<p>I think the difference in results has to do with the special packs, since they depend on each chocolate.</p>
<p>Could you please help me find out what I'm missing / doing wrong?</p>
<p>Thanks</p>
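<p>For what it's worth, here is a stdlib-only brute-force check of one interpretation of the problem (an assumption on my part: each pack consumes one unit of daily capacity from each of its three component chocolates, and its 3 chocolates count toward the 150-per-day limit). Under that interpretation the optimum is indeed 6440, which suggests the missing piece in the PuLP model is linking <code>packs_vars</code> to the chocolate capacities and the total production limit:</p>

```python
from itertools import product

profit_c = [50, 30, 45, 40, 35]
cap_c = [27, 33, 30, 37, 35]
packs = [((0, 2, 4), 130), ((1, 2, 3), 130)]   # 0-based component indices
MAX_TOTAL = 150

best = 0
for p1, p2 in product(range(31), range(31)):
    counts = [p1, p2]
    rem = cap_c[:]                 # capacity left for individual chocolates
    for (comps, _), n in zip(packs, counts):
        for c in comps:
            rem[c] -= n
    slots = MAX_TOTAL - 3 * (p1 + p2)
    if min(rem) < 0 or slots < 0:
        continue
    profit = sum(n * pr for (_, pr), n in zip(packs, counts))
    # fill the remaining slots with the most profitable chocolates first
    for pr, r in sorted(zip(profit_c, rem), reverse=True):
        take = min(r, slots)
        profit += pr * take
        slots -= take
    best = max(best, profit)

print(best)  # 6440 under the stated assumptions
```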
|
<python><pulp>
|
2024-01-01 16:26:16
| 2
| 325
|
Jay Craig
|
77,742,521
| 4,177,926
|
call python class method from c++ in pybind11
|
<p>Consider a Callback class in Python.</p>
<pre><code>class My_Callback():
def __init__(self):
pass
def do_something(self, something):
print(something)
</code></pre>
<p>This Class is used in a DLL wrapper which is using pybind11 to expose it to python. A reference to the Class is stored in an ENUM and passed to a C++ function.</p>
<pre><code>void callPy(py::object f) {
//call f.do_something("test");
}
// ...
callPy(userData::myCallback);
</code></pre>
<p>There are many examples on how to call Python functions from C++ (<a href="https://pybind11.readthedocs.io/en/stable/advanced/pycpp/object.html" rel="nofollow noreferrer">pybind11 doc</a>), but what is the intended way to call class methods? It feels like I should cast the <code>py::object</code> to a <code>My_Callback</code>, but I don't see how to do it.</p>
<p>The following lines seems to do the job, but is this really the intended way to do it?</p>
<pre><code>void callPy(py::object f) {
py::object mycallback = f.attr("do_something");
mycallback("test");
}
</code></pre>
|
<python><c++><pybind11>
|
2024-01-01 16:17:13
| 1
| 2,283
|
user_na
|
77,742,431
| 7,563,454
|
Function that takes either a single argument or a tuple treated as multiple arguments
|
<p>There is a circumstance where I have a get function which returns a tuple of items. I have a set function and I want it to take either a single item, or the resulting tuple as multiple arguments. However if I use <code>*args</code> as the parameter of the set function to allow a variable number of results, it won't work as desired in both situations.</p>
<pre><code>def get():
return 0, 1, 2, 3
def set(*numbers):
for i in range(len(numbers)):
print(numbers[i])
single = 0
multiple = get()
set(single)
set(multiple)
</code></pre>
<p><code>set(single)</code> works as intended and prints one <code>0</code>, <code>set(multiple)</code> does not and prints a single <code>(0, 1, 2, 3)</code> instead of iterating through the values and printing <code>0, 1, 2, 3</code>. What is the most elegant way to address this, without the user having to manually decompose the results from <code>get</code> before piping them into <code>set</code>?</p>
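<p>Two common options, sketched below (the function is renamed <code>set_numbers</code> to avoid shadowing the built-in <code>set</code>): unpack at the call site with <code>*</code>, or have the function flatten a lone tuple/list argument itself:</p>

```python
def get():
    return 0, 1, 2, 3

def set_numbers(*numbers):
    # if the single argument is itself a tuple/list, treat its items
    # as if they had been passed individually
    if len(numbers) == 1 and isinstance(numbers[0], (tuple, list)):
        numbers = numbers[0]
    for n in numbers:
        print(n)

set_numbers(0)        # prints 0
set_numbers(get())    # prints 0 1 2 3, one per line
set_numbers(*get())   # call-site unpacking works even without the isinstance check
```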
|
<python>
|
2024-01-01 15:50:40
| 2
| 1,161
|
MirceaKitsune
|
77,742,336
| 1,293,127
|
Serialize SqlAlchemy query across process boundary
|
<p>Using <code>Python==3.10.12</code> and <code>sqlalchemy==2.0.23</code> (and postgres 12 under Ubuntu, if that makes any difference):</p>
<p><strong>How do I serialize an existing SqlAlchemy query so I can deserialize it and fetch its results in another process?</strong></p>
<p>In other words, I have a SqlAlchemy <code>query</code> in process=1, but want to run <code>for result in query.yield_per(1000): …</code> and <code>query.count()</code> in process=2.</p>
<p>I'm not looking to serialize the query results. I'm looking to serialize & deserialize the query itself.</p>
<p>It is essential the result set is never buffered/materialized at any point, because it's too large for RAM.</p>
|
<python><sqlalchemy><orm>
|
2024-01-01 15:20:41
| 2
| 8,721
|
user124114
|
77,742,298
| 2,372,954
|
Where should I store a bunch of strings I intend on using in a single feature (for a short amount of time)?
|
<p>I have a (relatively) big and complex (Django) app with multiple database tables.</p>
<p>Now, I'm building the landing page of a new feature that includes a section displaying 3 random sentences from a collection of 10 predefined strings.</p>
<p>I want to be able to update and change these strings without having to rebuild and redeploy the app each time.</p>
<p>Should I create another table in the database just to store this data or are there other ways to do it (properly)?</p>
|
<python><django><database><design-patterns>
|
2024-01-01 15:08:43
| 1
| 360
|
ahmed
|
77,742,010
| 9,135,359
|
Bad Request received in trying to get Flask form to display
|
<p>I have a simple Flask form that takes the username but it doesn't work. I keep getting a <code>Bad Request</code> error. Here's my code:</p>
<p>This is my <code>main.py</code>:</p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route("/", methods=["GET", "POST"])
def index():
variable = request.form["recipe"]
return render_template("index.html")
if __name__=='__main__':
app.run()
</code></pre>
<p>This is my <code>index.html</code> that the form resides in:</p>
<pre><code>{% extends "base.html" %}
{% block content %}
<h1>Cooking By Myself</h1>
<form action="/" method="POST">
<h3>Add Recipe</h3>
<p>
<label for="recipe">Name:</label>
<input type="text" name="recipe"/>
</p>
<p><input type="submit" name="submit_recipe"/></p>
</form>
{% endblock %}
</code></pre>
<p>And this is my <code>base.html</code> file:</p>
<pre><code><!DOCTYPE html>
<html>
<body>
{% block content %}
{% endblock %}
</body>
</html>
</code></pre>
<p>Where am I going wrong?</p>
|
<python><flask>
|
2024-01-01 13:19:18
| 1
| 844
|
Code Monkey
|
77,741,969
| 10,415,970
|
SQLAlchemy access session within Base class method?
|
<p>I'm wanting to access a session from within my Base class's method without passing it in as an argument like this:</p>
<pre class="lang-py prettyprint-override"><code>class Job(Base):
__tablename__ = 'jobs'
# id = ..., etc.
def evaluate(self):
keywords = session.query(Keyword).all() # <- I want session already available.
# Do something with keywords
</code></pre>
<p>How can I have the session ready to be used within a Base class's method?</p>
|
<python><sqlalchemy>
|
2024-01-01 12:59:15
| 1
| 4,320
|
Zack Plauché
|
77,741,925
| 453,851
|
Does python expose an interface for parsing package metadata?
|
<p><em>Note: I am <strong>NOT</strong> looking for a way to interrogate installed packages in the virtual environment and do not want to load packages in order to parse them.</em></p>
<p>I'm looking for a way to parse meta data from files [wheels and tar.gz] downloaded from pypi. The format isn't so complex so I'm not really against writing my own. But given that python must be parsing this info inside <code>importlib</code> I wanted to see if theres a way that doesn't involve re-inventing the wheel.</p>
<p>I can see that <code>importlib.metadata</code> lets me get this information for installed packages. But I want to do this for a lot of versions of the same packages, many of which may not be compatible with the current virtual environment or even the system architecture.</p>
<p>Does python (>=3.10) offer any interface for parsing wheel and tar.gz metadata without actually installing it into the virtual environment?</p>
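<p>For the wheel case, a stdlib-only sketch (no third-party API assumed): a <code>.whl</code> is a zip archive, and its <code>METADATA</code> file is an RFC 822-style message that <code>email.parser</code> can read without installing anything:</p>

```python
import zipfile
from email.parser import Parser

def wheel_metadata(path_or_file):
    """Parse the METADATA file out of a wheel without installing it."""
    with zipfile.ZipFile(path_or_file) as whl:
        # the *.dist-info directory name varies per package/version
        meta_name = next(n for n in whl.namelist()
                         if n.endswith('.dist-info/METADATA'))
        return Parser().parsestr(whl.read(meta_name).decode('utf-8'))

# md = wheel_metadata('some_pkg-1.0-py3-none-any.whl')   # hypothetical file
# md['Name'], md['Version'], md.get_all('Requires-Dist')
```

<p>For sdists (<code>tar.gz</code>) the same parser applies to the <code>PKG-INFO</code> member, reachable via <code>tarfile</code>.</p>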
|
<python><pypi><python-wheel>
|
2024-01-01 12:40:13
| 2
| 15,219
|
Philip Couling
|
77,741,075
| 16,319,191
|
Merge dfs but avoid duplication of columns and maintain the order in Pandas
|
<p>All the dfs have a key col "id".
pd.merge is not a viable option even with the suffix option.
There are over 40k cols in each of the dfs so column binding and deleting later (suffix_x) is not an option. Exactly 50k (common) rows in each of the dfs identified by "id" col.</p>
<p>Minimal example with two common cols:</p>
<pre><code>df1 = pd.DataFrame({
'id': ['a', 'b', 'c'],
'col1': [123, 121, 111],
'col2': [456, 454, 444],
'col3': [786, 787, 777],
})
df2 = pd.DataFrame({
'id': ['a', 'b', 'c'],
'col1': [123, 121, 111],
'col2': [456, 454, 444],
'col4': [11, 44, 77],
})
df3 = pd.DataFrame({
'id': ['a', 'b', 'c'],
'col1': [123, 121, 111],
'col2': [456, 454, 444],
'col5': [1786, 1787, 1777],
})
</code></pre>
<p>Final answer:</p>
<pre><code>finaldf = pd.DataFrame({
'id': ['a', 'b', 'c'],
'col1': [123, 121, 111],
'col2': [456, 454, 444],
'col3': [786, 787, 777],
'col4': [11, 44, 77],
'col5': [1786, 1787, 1777],
})
</code></pre>
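<p>One sketch of a way to do this with <code>functools.reduce</code>: at each step, merge in only the columns the accumulated frame doesn't already have, so shared columns are never duplicated and the left-to-right column order is kept. This assumes the shared columns really hold identical values across the dfs, since only the first copy survives:</p>

```python
from functools import reduce
import pandas as pd

df1 = pd.DataFrame({'id': ['a', 'b', 'c'], 'col1': [123, 121, 111],
                    'col2': [456, 454, 444], 'col3': [786, 787, 777]})
df2 = pd.DataFrame({'id': ['a', 'b', 'c'], 'col1': [123, 121, 111],
                    'col2': [456, 454, 444], 'col4': [11, 44, 77]})
df3 = pd.DataFrame({'id': ['a', 'b', 'c'], 'col1': [123, 121, 111],
                    'col2': [456, 454, 444], 'col5': [1786, 1787, 1777]})

def merge_new_cols(left, right):
    # keep only "id" plus the columns we have not seen yet
    new_cols = [c for c in right.columns if c not in left.columns]
    return left.merge(right[['id'] + new_cols], on='id', how='inner')

finaldf = reduce(merge_new_cols, [df1, df2, df3])
print(list(finaldf.columns))  # ['id', 'col1', 'col2', 'col3', 'col4', 'col5']
```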
|
<python><pandas><dataframe>
|
2024-01-01 05:58:20
| 1
| 392
|
AAA
|
77,740,923
| 10,200,497
|
Replacing a value with its previous value in a column if it is greater
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [101, 90, 11, 120, 1]
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>y</code>:</p>
<pre><code> a y
0 101 101.0
1 90 101.0
2 11 90.0
3 120 120.0
4 1 120.0
</code></pre>
<p>Basically, values in <code>a</code> are compared with their previous value, and the greater one is selected.</p>
<p>For example for row <code>1</code>, 90 is compared with 101. 101 is greater so it is selected.</p>
<p>I have done it in this way:</p>
<pre><code>df['x'] = df.a.shift(1)
df['y'] = df[['a', 'x']].max(axis=1)
</code></pre>
<p>Is there a cleaner or some kind of built-in way to do it?</p>
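<p>One possible built-in one-liner is a rolling window of size 2 (with <code>min_periods=1</code> so the first row compares only with itself):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [101, 90, 11, 120, 1]})
# max over each value and its predecessor
df['y'] = df['a'].rolling(2, min_periods=1).max()
print(df)
#      a      y
# 0  101  101.0
# 1   90  101.0
# 2   11   90.0
# 3  120  120.0
# 4    1  120.0
```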
|
<python><pandas><dataframe>
|
2024-01-01 04:02:28
| 2
| 2,679
|
AmirX
|
77,740,864
| 6,383,910
|
How to generate unique sequences?
|
<p>I am trying to solve <a href="https://leetcode.com/problems/combination-sum-iv/" rel="nofollow noreferrer">377. Combination Sum 4</a>.</p>
<pre><code>Output: 7
Explanation:
The possible combination ways are:
(1, 1, 1, 1)
(1, 1, 2)
(1, 2, 1)
(1, 3)
(2, 1, 1)
(2, 2)
(3, 1)
</code></pre>
<p>Here's my solution so far:</p>
<pre><code>class Solution(object):
def combinationSum4(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: int
"""
result = []
def dfs(i, curr, total):
# print(i)
if total==target:
result.append(curr[::])
return
if total>target or i>=len(nums):
return
curr.append(nums[i])
dfs(i, curr, total+nums[i])
curr.pop()
dfs(i+1,curr,total)
dfs(0,[],0)
print(result)
return len(result)
</code></pre>
<p>It generates only one of (1,1,2),(1,2,1)(2,1,1) solutions or one of (1,3)(3,1). I am not sure how to construct the recursive tree such that all solutions would be produced. In general, how to think about building recursive trees for different solutions, like unique combinations (which would contain just one of the above possible solutions), unique sequences etc.? Any help is greatly appreciated!</p>
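<p>A sketch of one way to build the tree so every ordering is produced: instead of an index <code>i</code> that only moves forward, let every candidate number be tried at every position. Forward-only indices generate <em>combinations</em> (each multiset once); restarting the choice at each level generates <em>sequences</em> (all orderings):</p>

```python
def all_sequences(nums, target):
    result = []
    def dfs(curr, total):
        if total == target:
            result.append(curr[:])
            return
        for n in nums:                 # every number is a candidate at every level
            if total + n <= target:
                curr.append(n)
                dfs(curr, total + n)
                curr.pop()
    dfs([], 0)
    return result

print(len(all_sequences([1, 2, 3], 4)))  # 7
```

<p>For the LeetCode problem itself only the count is needed, so a DP over totals (<code>dp[t] = sum(dp[t - n] for n in nums)</code> with <code>dp[0] = 1</code>) gives the same number without materializing the sequences.</p>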
|
<python><python-3.x>
|
2024-01-01 03:02:50
| 2
| 2,132
|
Gingerbread
|
77,740,674
| 11,850,322
|
Pandas Groupby - Run Self Function - Then Transform(Apply)
|
<p>I need to run regression for each group, then pass the coefficients into the new column <code>b</code>. Here is my code:</p>
<p><strong>Self-defined function:</strong></p>
<pre><code>def simplereg(g, y, x):
try:
xvar = sm.add_constant(g[x])
yvar = g[y]
model = sm.OLS(yvar, xvar, missing='drop').fit()
b = model.params[x]
return pd.Series([b*100]*len(g))
except Exception as e:
return pd.Series([np.NaN]*len(g))
</code></pre>
<p><strong>Create sample data:</strong></p>
<pre><code>import pandas as pd
import numpy as np
# Setting the parameters
gvkeys = ['A', 'B', 'C', 'D'] # Possible values for gvkey
years = np.arange(2000, 2020) # Possible values for year
# Number of rows for each gvkey, ensuring 5-7 observations for each
num_rows_per_gvkey = np.random.randint(5, 8, size=len(gvkeys))
total_rows = sum(num_rows_per_gvkey)
# Creating the DataFrame
np.random.seed(0) # For reproducibility
df = pd.DataFrame({
'gvkey': np.repeat(gvkeys, num_rows_per_gvkey),
'year': np.random.choice(years, size=total_rows),
'y': np.random.rand(total_rows),
'x': np.random.rand(total_rows)
})
df.sort_values(by='year', ignore_index=True, inplace=True) # make sure if the code can handle even data without sort
</code></pre>
<p><strong>Run <code>groupby</code> code:</strong></p>
<pre><code>df['b'] = df.groupby('gvkey').apply(simplereg, y='y', x='x')
</code></pre>
<p>However, the code returns column <code>b</code> with all NaN values. May I ask where the issue is and how to fix it?</p>
<p>Thank you</p>
|
<python><python-3.x><pandas><group-by>
|
2024-01-01 00:22:02
| 1
| 1,093
|
PTQuoc
|
77,740,623
| 2,821,586
|
How do I get access to Python doctest verbose flag inside a test?
|
<p>How do I get access to Python 3.8 doctest verbose flag inside a test?
I see it as <code>DocTestRunner._verbose</code> but I would need an instance to grab it from.
I'm trying to do something like</p>
<pre><code>class MyClass:
"""
>>> if doctest.DocTestRunner.instance._verbose: printFancyStuffToStdErr() # how do I get instance?
"""
...
if __name__ == '__main__':
import doctest, os, sys
if someCondition(): # n.b. this is simplification and not available inside test
res = doctest.testmod(verbose=True)
else:
res = doctest.testmod(verbose=False)
</code></pre>
<p>NB this is NOT the same as <a href="https://stackoverflow.com/questions/43001768/how-can-a-test-in-python-unittest-get-access-to-the-verbosity-level">How can a test in Python unittest get access to the verbosity level?</a> the answer to which grabs the '-v' or '--verbose' from the command line</p>
|
<python><doctest><verbose>
|
2023-12-31 23:42:28
| 0
| 1,230
|
brewmanz
|
77,740,559
| 1,278,584
|
Using BeautifulSoup, is there a way to target all elements that contain a class name except the ones that contain an additional class name?
|
<p>I am looking to see if there is a way to <code>find_all</code> elements that have a certain class name, but remove the elements that contain an additional class name.</p>
<pre><code><div class="abc">1</div>
<div class="abc def">2</div> #Do not select
<div class="abc">3</div>
<div class="abc">4</div>
<div class="abc">5</div>
<div class="abc">6</div>
<div class="abc">7</div>
<div class="abc def">8</div> #Do not select
<div class="abc">9</div>
<div class="abc">10</div>
</code></pre>
<p>I know I can grab all of the class elements like this:</p>
<pre><code>elements = soup.find_all('div', {'class': 'abc'})
</code></pre>
<p>But is there a way to limit the results so that elements which also contain the class <code>def</code> are not included?</p>
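<p>Two possible sketches (assuming a bs4 version with the bundled soupsieve CSS support): a CSS <code>:not()</code> selector, or filtering the <code>find_all</code> results on each tag's class list:</p>

```python
from bs4 import BeautifulSoup

html = """
<div class="abc">1</div>
<div class="abc def">2</div>
<div class="abc">3</div>
"""
soup = BeautifulSoup(html, 'html.parser')

# Option 1: CSS selector
only_abc = soup.select('div.abc:not(.def)')

# Option 2: post-filter find_all on the class list
only_abc2 = [el for el in soup.find_all('div', class_='abc')
             if 'def' not in el.get('class', [])]

print([el.text for el in only_abc])   # ['1', '3']
```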
|
<python><python-3.x><beautifulsoup>
|
2023-12-31 23:00:18
| 2
| 724
|
zeropsi
|
77,740,515
| 9,582,542
|
Scrapy Shell extract text only from div class element
|
<p>I am trying to pull only the dates values from this site <a href="http://www.nflweather.com/" rel="nofollow noreferrer">http://www.nflweather.com/</a></p>
<p>I believe I have the code but I need to clean up the result a little bit</p>
<pre><code>response.xpath('//div[@class="fw-bold text-wrap"]/text()').extract()
</code></pre>
<p>My result still contains <code>\n</code> and <code>\t</code> characters:</p>
<pre><code>'\n\t\t\t12/28/23 08:15 PM EST\n\t\t'
</code></pre>
<p>I am looking to get just a nice clean date and time. I have seen other versions here that do it in a script; I would like to be able to do it from the Scrapy shell.</p>
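<p>A sketch of the usual cleanup (the selector below is copied from the question; I haven't re-checked it against the live page): Python's <code>str.strip()</code> removes the surrounding whitespace, and it can be applied to each extracted string right in the shell:</p>

```python
# plain-Python part, runnable anywhere:
raw = '\n\t\t\t12/28/23 08:15 PM EST\n\t\t'
print(raw.strip())  # prints 12/28/23 08:15 PM EST

# in the Scrapy shell (same selector as above):
# dates = [t.strip() for t in
#          response.xpath('//div[@class="fw-bold text-wrap"]/text()').getall()]
```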
|
<python><scrapy>
|
2023-12-31 22:32:06
| 1
| 690
|
Leo Torres
|
77,740,462
| 2,817,602
|
Deconvolution of distributions defined by histograms
|
<p>I'm reading an excellent paper (<a href="https://onlinelibrary.wiley.com/doi/full/10.1002/sim.9173" rel="nofollow noreferrer">here</a>) in which the authors begin with a dataset of effect estimates from randomized control trials.</p>
<p>In theory, these numbers are a convolution between an unknown distribution and a standard normal. That is to say, each number can be thought of as a draw from the unknown distribution plus some white noise.</p>
<p>They claim they can recover the unknown distribution by deconvolving the data with a standard gaussian. I'd like to do this myself in a toy example, but an struggling to obtain sensible results. In the code below I:</p>
<ul>
<li>Draw <code>1e5</code> random numbers from a gamma distribution</li>
<li>To each draw, I add white noise</li>
<li>I compute a histogram of these new draws, and</li>
<li>let the "signal" be the height of the histogram at the midpoints of the bin edges (as defined by numpy and matplotlib)</li>
</ul>
<p>My code to generate the data is as follows</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.stats import norm, gamma
from scipy.signal import convolve, deconvolve
import matplotlib.pyplot as plt
# First, create the original signal
N = int(1e5)
X = gamma(a=4, scale=1/2).rvs(N)
# Corrupt with gaussian noise
E = norm().rvs(N)
Y = X + E
height, edge, ax = plt.hist(Y, edgecolor='white', bins = 50);
mid = (edge[:-1] + edge[1:])/2
</code></pre>
<p>Now that I have the signal, I'd like to deconvolve it with a gaussian. The result should hopefully be a gamma distribution (or something close to the gamma density I've used above). However, I'm not sure how to set up the "impulse" in the <code>scipy.signal.deconvolve</code> function call. What length should this be, and at what points should I evaluate the gaussian density?</p>
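<p>One plausible way to build the impulse (a sketch, not taken from the paper: evaluate the standard normal density on a grid with the same spacing as the histogram bins, scale by the bin width, and truncate at ±4 standard deviations; the bin width of 0.2 below is an assumption, in practice it would be <code>mid[1] - mid[0]</code>):</p>

```python
import numpy as np

dx = 0.2  # assumed histogram bin width; in the code above it is mid[1] - mid[0]
x = np.arange(-4, 4 + dx, dx)                          # cover +/- 4 std devs
impulse = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * dx  # N(0,1) density * width
print(round(impulse.sum(), 2))  # ~1.0, so the kernel preserves total mass

# scipy.signal.deconvolve(height, impulse) then returns a quotient of length
# len(height) - len(impulse) + 1, so the impulse must be shorter than the signal.
```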
|
<python><scipy><signal-processing>
|
2023-12-31 21:59:50
| 1
| 7,544
|
Demetri Pananos
|
77,740,412
| 12,160,503
|
how to make a portable version of chrome for testing for linux
|
<p>After downloading chrome for testing from the official channel, I tried running it on my WSL2 as a test for work but it seems that it requires certain dependencies. Is there any way to create a portable version of it with all required dependencies? It being installed without its dependencies seems absurd but it is what it is.</p>
|
<python><google-chrome><testing><chrome-for-testing>
|
2023-12-31 21:31:33
| 1
| 1,720
|
Isaac Anatolio
|
77,740,183
| 3,748,679
|
Use the same attributes of one class from the super class python
|
<p>I have two python classes <code>A</code> and <code>B(A)</code>. I want to use the variables of <code>A</code> in <code>B</code>. I mean when I create an instance of A with <code>a=16</code>, and an instance of <code>B</code> as the code shows, I want to have <code>b.a=16</code>. What is the problem here?</p>
<p>I am sure that this problem has been solved, but I am not able to find it on the Internet.</p>
<pre><code>class A():
def __init__(self, a=12) -> None:
self.a = a
class B(A):
def __init__(self) -> None:
super().__init__()
a = A(a=16)
b = B()
print(b.a) # why 12 not 16?
</code></pre>
<p><strong>EDIT</strong></p>
<p>I tried to be as simple as possible in my question. I did not mean to create an XY problem (it is the first time I have heard of the XY problem, so thanks for pointing it out; I have learnt something).</p>
<p>It turns out that by simplifying my problem, I turned it into an XY problem. Ok, here is my real problem.</p>
<p>I am developing an algorithm to solve an optimization problem in scheduling. I modelled the scheduling system with a python class that is given by my <code>class A</code> in the example I showed. This class (the scheduling system) has some parameters: a set of machines, a set of jobs, processing capacities, etc. So, those are the attributes of my class.</p>
<p>Now, I created <code>class B</code> to develop the environment of my algorithm to solve the problem. So <code>class B</code> should see all the parameters of the scheduling system. Since I am new to OOP, I thought I can do this simply by calling <code>super().__init__</code>. But it turns out that this is not possible.</p>
<p>So, to summarize, I want two classes: <code>class A</code> for the scheduling system and <code>class B</code> for the algorithm environment. <code>class B</code> has some methods inside to solve the scheduling problem and should be able to see <code>class A</code>'s parameters.</p>
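<p>For what it's worth, a sketch of the two usual shapes (the names <code>SchedulingSystem</code> and <code>Algorithm</code> are hypothetical stand-ins for <code>A</code> and <code>B</code>): either forward the constructor arguments through <code>super().__init__</code>, or hold an already-configured instance by composition.</p>

```python
class SchedulingSystem:
    def __init__(self, a=12):
        self.a = a

# Option 1: inheritance -- forward the parameters instead of
# relying on the parent's defaults
class Algorithm(SchedulingSystem):
    def __init__(self, a=12):
        super().__init__(a=a)

b = Algorithm(a=16)
print(b.a)  # 16

# Option 2: composition -- the algorithm wraps an existing, configured system
class Algorithm2:
    def __init__(self, system: SchedulingSystem):
        self.system = system

b2 = Algorithm2(SchedulingSystem(a=16))
print(b2.system.a)  # 16
```

<p>Composition is often the better fit here: the scheduling system exists independently of the algorithm, and the algorithm merely reads its parameters.</p>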
|
<python>
|
2023-12-31 19:18:52
| 1
| 1,594
|
Jika
|
77,740,070
| 1,914,781
|
white line show after add background rect to second subplot
|
<p>I have subplots with a rect shape as background. I would like to:</p>
<ol>
<li>remove the white line above the x-axis in the second subplot.</li>
<li>remove the white area near the y-axis in the second subplot.</li>
</ol>
<pre><code>import pandas as pd
def save_fig(fig,pngname):
fig.write_image(pngname,format="png",width=600,height=400, scale=1)
print("[[%s]]"%pngname)
#fig.show()
return
from plotly.subplots import make_subplots
import plotly.graph_objects as go
def plot(df1,df2,pngname):
title = "my title"
xlabel = "x1"
ylabel = "y"
xlabel2 = "x2"
color = "blue"
df2 = df2[df2['ts'].diff()>=0]
fig = make_subplots(
#x_title=xlabel,
y_title=ylabel,
rows=1, cols=2,
column_widths=[0.80,0.20],
row_heights=[1],
shared_yaxes=True,
horizontal_spacing=0)
fig.add_trace(
go.Scattergl(
x=df1['name'],
y=df1['value'],
error_y=error_bar(df1['value']),
marker=dict(color=color,size=1),
mode='markers'), row=1, col=1)
update_axis(fig,xlabel,1,1)
add_bgcolor(fig)
fig.add_trace(
go.Scattergl(
x=df2['value'],
y=df2['ts'],
error_x=error_bar(df2['value']),
marker=dict(color=color,size=.1),
#fill='tozeroy',
mode='markers'), row=1, col=2)
update_axis(fig,xlabel2,1,2)
fig.update_layout(
title_text=title,
showlegend=False,
legend=dict(y=1, x=0.1),
margin=dict(l=40,t=40,r=0,b=0),
title_x=0.5,
#title_y=1,
paper_bgcolor='white',
plot_bgcolor='white',
)
save_fig(fig,pngname)
return
def add_bgcolor(fig):
bgcolor=dict(
type="rect",
xref="x2",
yref="paper",
x0=0,
y0=0,
x1=100,
y1=1,
fillcolor="pink",
opacity=0.5,
layer="below",
line_width=0,
)
fig.add_shape(bgcolor)
return
def error_bar(arrayminus):
error=dict(
type='data',
symmetric=False,
arrayminus=arrayminus,
array=[0] * len(arrayminus),
thickness=5,
width=0,
)
return error
def update_axis(fig,xlabel,row,col):
fig.update_xaxes(
title=xlabel,
tickangle=25,
side='bottom',
showline=True,
linewidth=.5,
linecolor='black',
gridcolor="rgba(38,38,38,0.15)",row=row,col=col
)
fig.update_yaxes(
automargin=True,
side="left",
showline=True,
linewidth=.5,
linecolor='black',
gridcolor="rgba(38,38,38,0.15)",row=row,col=col
)
return
def main():
data1 = [
['AAA',10],
['BBB',20],
['CCC',30],
['DDD',40],
['EEE',50],
]
df1 = pd.DataFrame(data1,columns=['name','value'])
data2 = [
[30,10],
[40,20],
[50,30],
[60,40],
[70,50]
]
df2 = pd.DataFrame(data2,columns=['ts','value'])
pngname = "/media/sf_work/demo.png"
plot(df1,df2,pngname)
return
main()
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/qmXtB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qmXtB.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-12-31 18:10:51
| 0
| 9,011
|
lucky1928
|
77,739,972
| 7,563,454
|
Get the frame number for a looping animation range based only on system time
|
<p>I'm looking for a way to animate a sprite using just the system time, without having to increment and store the active frame or time passed as a variable. I should be able to achieve this by properly stretching and looping the ms to get a range, accounting for the animation speed, the total number of frames, and the range I want to fetch. My project is based on Pygame; unless it has a more useful function for this, I'm trying to achieve it with the generic Python <code>time.time()</code> value.</p>
<p>Let's say I have a sprite with 30 frames and I currently want to loop frames 10 to 20 at a speed of 2 frames per second (advance every 0.5 seconds). I have my "milliseconds passed since 1 January 1970" value: I need to convert it into an integer between 10 and 19 to indicate what the current frame is. For example: If I call the function now I get <code>18</code>, if 0.25 seconds pass I still get <code>18</code>, if another 0.25 seconds pass this time I get <code>19</code>, I call the function again 0.5 seconds later and get <code>10</code> as it restarts, if I wait a full second then call it again I now get <code>12</code>.</p>
<p>Please suggest the cheapest solution if possible, I will use this in a place where calculations are frequent and expensive. In normal circumstances I'd store the old time then compare how much time passed since the last call, but I'm doing this in a thread pool which can't affect outside variables so I can't store permanent changes on the main thread. Only disadvantage to this technique is the count can start from anywhere so animations won't always play from the beginning of the range... this could be worked around by giving my sprite class the current time before threads start working then offsetting based on that, but this isn't a big deal so if it's too complex this part can be left out.</p>
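<p>The mapping described above boils down to a single multiply and modulo (a sketch; <code>now</code> defaults to <code>time.time()</code> but is a parameter so the arithmetic is easy to check):</p>

```python
import time

def current_frame(first, last, fps, now=None):
    """Looping frame index in [first, last) computed purely from wall time."""
    if now is None:
        now = time.time()
    return first + int(now * fps) % (last - first)

# Frames 10..19 at 2 fps: advances every 0.5 s and wraps 19 -> 10.
print(current_frame(10, 20, 2, now=0.0))   # 10
print(current_frame(10, 20, 2, now=0.49))  # 10
print(current_frame(10, 20, 2, now=0.5))   # 11
print(current_frame(10, 20, 2, now=5.0))   # 10 (one full loop completed)
```

<p>Subtracting a stored per-sprite start time from <code>now</code> before the multiply makes the loop begin at <code>first</code>, which covers the offset idea mentioned at the end.</p>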
|
<python><python-3.x><pygame><game-engine><game-development>
|
2023-12-31 17:32:38
| 1
| 1,161
|
MirceaKitsune
|
77,739,328
| 19,336,534
|
Code Infilling fine-tuning with llama code
|
<p>I have a dataset of Java methods and I want to fine-tune a code LLM to provide accurate method names. Right now the dataset is a .txt file with the methods separated by a delimiter (<code>###del###</code>).<br />
To do this I thought about using CodeLLaMa and more specifically code infilling.<br />
From the original documentation:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16
).to("cuda")
prompt = '''def remove_non_ascii(s: str) -> str:
""" <FILL_ME>
return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")
output = model.generate(
input_ids,
max_new_tokens=200,
)
output = output[0].to("cpu")
filling = tokenizer.decode(output[input_ids.shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
</code></pre>
<p>If I set <code>max_new_tokens=4</code> and replace the method name with <code><FILL_ME></code>, I get a valid method name when I run inference on the model.<br />
My problem is with fine-tuning.<br />
How am I supposed to format the dataset (as a supervised task) to fine-tune such a model?</p>
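<p>One plausible shape for the supervised samples is a sketch like the following. The <code><PRE></code>/<code><SUF></code>/<code><MID></code> sentinel strings follow the Code Llama infilling convention, but their exact spelling should be taken from the tokenizer's special tokens; <code>make_infill_sample</code> and the delimiter handling are hypothetical helpers for the .txt layout described above.</p>

```python
DELIM = "###del###"

def make_infill_sample(method_src: str, name: str) -> str:
    """Build one fill-in-the-middle training string for a method.

    Everything before the name becomes the prefix, everything after it the
    suffix, and the name itself is the completion the model should learn.
    """
    prefix, _, suffix = method_src.partition(name)
    return f"<PRE> {prefix} <SUF>{suffix} <MID> {name}"

# Hypothetical .txt layout: methods joined by the delimiter
raw = "public int add(int a, int b) { return a + b; }" + DELIM + "..."
first_method = raw.split(DELIM)[0]
print(make_infill_sample(first_method, "add"))
```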
|
<python><pytorch><huggingface-transformers><huggingface>
|
2023-12-31 13:49:18
| 0
| 551
|
Los
|
77,739,005
| 13,241,651
|
Langchain OpenAI ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
|
<pre><code>!pip install langchain openai --use-deprecated=legacy-resolver
</code></pre>
<pre class="lang-py prettyprint-override"><code>from langchain.llms import OpenAI
llm = OpenAI()
</code></pre>
<p>ERROR:</p>
<pre class="lang-bash prettyprint-override"><code>ImportError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/llms/openai.py in validate_environment(cls, values)
298 try:
--> 299 import openai
300 except ImportError:
10 frames
ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/llms/openai.py in validate_environment(cls, values)
299 import openai
300 except ImportError:
--> 301 raise ImportError(
302 "Could not import openai python package. "
303 "Please install it with `pip install openai`."
ImportError: Could not import openai python package. Please install it with `pip install openai`.
</code></pre>
|
<python><openai-api><langchain><py-langchain>
|
2023-12-31 11:27:08
| 2
| 3,200
|
Niteesh
|
77,738,976
| 11,748,924
|
vscode python autoformat indentation to 2 spaces instead of 4 spaces when saving the file
|
<p>How do I force my Python source code to use 2-space indentation when saving the file? ChatGPT-generated code always produces 4 spaces, which is why I am asking. I don't feel comfortable with 4-space indentation in Python. I tried the autopep8 formatter extension but it didn't help.</p>
<p>Here is my current <code>.vscode/settings.json</code></p>
<pre class="lang-json prettyprint-override"><code>{
"editor.tabSize": 2,
"editor.defaultFormatter": "ms-python.autopep8",
"editor.autoIndent": "brackets",
"editor.indentSize": "tabSize",
"editor.detectIndentation": true,
"editor.insertSpaces": true,
"autopep8.args": [
"--indent-size=2"
]
}
</code></pre>
<p>I expect from here:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == '__main__':
asyncio.run(main())
</code></pre>
<p>To the here after save the file:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == '__main__':
asyncio.run(main())
</code></pre>
<p>Replying to @Tim240's answer:
Here are the screenshots I attached; the behavior seems the same:
<a href="https://i.sstatic.net/eVZ5N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVZ5N.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/HQfZT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HQfZT.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/2O6FA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2O6FA.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><indentation><auto-indent>
|
2023-12-31 11:17:08
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
77,738,852
| 8,443,985
|
pyspark print in for loop not working properly and skips some values
|
<p>I wrote a pyspark code like this:</p>
<pre><code>for current_date in dates_list:
print(f'loop started: {current_date}')
loop_start_date = current_date - timedelta(days=90)
dates_ = [loop_start_date + timedelta(days=i) for i in range((current_date - loop_start_date).days + 1)]
dates = ','.join(map(lambda date: date.strftime("%Y-%m-%d"), dates_))
res= spark.read.parquet(f'my_path/day={{{dates}}}')
print(f'loop continued: {current_date}')
print('----------')
</code></pre>
<p>In the output I expect to get the <strong>same values</strong> in the first and second print, since I do not change <code>current_date</code>. The output is like this:</p>
<pre><code>loop started: 2023-01-01
loop continued: 2023-01-01
----------
loop started: 2023-01-02
loop continued: 2023-01-03
----------
loop started: 2023-01-04
loop continued: 2023-01-04
----------
</code></pre>
<p>In the next run, the mismatch does not necessarily happen on the same date, and the "loop started" and "loop continued" dates may differ by more than one day.
Why is this happening, and how can I solve it?</p>
|
<python><pyspark><jupyter-notebook><bigdata>
|
2023-12-31 10:26:19
| 0
| 796
|
mohammad hassan bigdeli shamlo
|
77,738,486
| 1,165,201
|
Why can't I import from autoawq which was already installed?
|
<p>For this code</p>
<pre><code>from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }
# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
</code></pre>
<p>The error is:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-1-10f3d88ac51c> in <cell line: 1>()
----> 1 from awq import AutoAWQForCausalLM
2 from transformers import AutoTokenizer
3
4 model_path = 'facebook/opt-6.7b'
5 quant_path = "/Content/drive/models/opt-6.7b-awq"
3 frames
/usr/local/lib/python3.10/dist-packages/awq/__init__.py in <module>
1 __version__ = "0.1.8"
----> 2 from awq.models.auto import AutoAWQForCausalLM
/usr/local/lib/python3.10/dist-packages/awq/models/__init__.py in <module>
----> 1 from .mpt import MptAWQForCausalLM
2 from .llama import LlamaAWQForCausalLM
3 from .opt import OptAWQForCausalLM
4 from .falcon import FalconAWQForCausalLM
5 from .bloom import BloomAWQForCausalLM
/usr/local/lib/python3.10/dist-packages/awq/models/mpt.py in <module>
72 from awq.utils.utils import set_module_name
73 from awq.modules.fused.block import MPTBlock
---> 74 from awq.modules.fused.model import MPTModel
75
76 class MptFuser:
/usr/local/lib/python3.10/dist-packages/awq/modules/fused/model.py in <module>
3 from typing import List
4 from awq.utils import fused_utils
----> 5 from transformers.modeling_outputs import BaseModelOutputWithPast, MoeModelOutputWithPast
6 from awq.modules.fused.block import MPTBlock, FalconDecoderLayer, LlamaLikeBlock, MixtralBlock
7
ImportError: cannot import name 'MoeModelOutputWithPast' from 'transformers.modeling_outputs' (/usr/local/lib/python3.10/dist-packages/transformers/modeling_outputs.py)
</code></pre>
|
<python><huggingface-transformers><large-language-model>
|
2023-12-31 07:13:50
| 1
| 1,531
|
susanna
|
77,738,358
| 4,555,858
|
Find max row value using .shift and .apply in pandas
|
<p>I have the following data frame:</p>
<pre><code>df = pd.DataFrame({'A': [2.001, 4.001, 8.001, 0.001],
'B': [2.001, 0.001, 0.001, 0.001],
'C': [11.001, 12.001, 11.001, 8.001],
'D': [12.001, 23.001, 12.001, 8.021],
'E': [11.001, 24.001, 18.001, 8.0031]})
</code></pre>
<p>I can find the max value (in each row) between columns A, B, E and E (shifted by -1) using the below-mentioned method:</p>
<pre><code>df["e_shifted"] = df["E"].shift(-1)
df.apply(lambda x: max(x['A'], x['B'], x['E'], x['e_shifted']),axis = 1)
</code></pre>
<p>But this creates a temporary column (i.e. e_shifted) in the dataframe.</p>
<p>How can <code>.apply()</code> and <code>shift(-1)</code> be used together without creating a temporary column?</p>
<p>For example, using the below code:</p>
<pre><code>df.apply(lambda x: max(x['A'], x['B'], x['E'], x['E'].shift(-1)),axis = 1)
</code></pre>
<p>gives an error as below:</p>
<pre><code>AttributeError: 'numpy.float64' object has no attribute 'shift'
</code></pre>
<p>As per the solution provided by @Corralien(shown below), the code handles the shifting of a single column:</p>
<pre><code>out = df.assign(E_shift=df['E'].shift(-1))[['A', 'B', 'E', 'E_shift']].max(axis=1)
</code></pre>
<p>But can the solution provided by @Corralien be modified to handle the shifting of multiple columns?</p>
<p>For example:</p>
<pre><code>out = df.assign(E_shift=df['E'].shift(-1), A_shift=df['A'].shift(-1))[['A', 'B', 'E', 'E_shift', 'A_Shift']].max(axis=1)
</code></pre>
<p>Doing so gives the following error:</p>
<pre><code>KeyError: "['A_Shift'] not in index"
</code></pre>
<p>Solution (after @mozway pointed out the error in typo):</p>
<pre><code>out = df.assign(E_shift=df['E'].shift(-1), A_shift=df['A'].shift(-1))[['A', 'B', 'E', 'E_shift', 'A_shift']].max(axis=1)
</code></pre>
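<p>The corrected pattern generalizes to any number of shifted columns; a sketch using a dict comprehension with <code>assign</code> (column names and data below are just illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [2.001, 4.001, 8.001, 0.001],
                   'B': [2.001, 0.001, 0.001, 0.001],
                   'E': [11.001, 24.001, 18.001, 8.0031]})

# Build all shifted columns at once, then take the row-wise max;
# max(axis=1) skips the NaNs that shift(-1) leaves in the last row.
shift_cols = ['E', 'A']
out = (df.assign(**{f'{c}_shift': df[c].shift(-1) for c in shift_cols})
         .max(axis=1))
print(out.tolist())
```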
|
<python><pandas>
|
2023-12-31 06:09:50
| 4
| 483
|
Prateek Daniels
|
77,738,283
| 13,759,058
|
Is there a way in python to "emulate" running a python file?
|
<p>I want to be able to programmatically run a file, arbitrarily give it inputs (stdin) at any time, and periodically poll for any stdout. I also want to be able to kill the process whenever I want.</p>
<p>Here's what I have tried:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
from threading import Thread
class Runner:
def __init__(self):
self.process = subprocess.Popen(
["python", "x.py"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
def run(self):
self.process.wait()
def poll(self):
print("got stdout:", self.process.stdout.readline().decode(), end="")
def give_input(self, text=""):
return self.process.stdin.write(bytes(text))
def kill(self):
self.process.kill()
r = Runner()
t = Thread(target=r.run)
t.start()
r.poll() # should be "hi"
r.poll() # should be "your name:"
r.give_input("hi\n")
r.kill()
t.join()
</code></pre>
<p>Here's the code in <code>x.py</code></p>
<pre class="lang-py prettyprint-override"><code>print("hi")
input("Your name: ")
</code></pre>
<p>So in my demo, I start a thread that will call <code>run</code> which will run the process. Then, I <code>poll</code> for stdout twice. It should print</p>
<pre><code>got stdout: hi
got stdout: Your name:
</code></pre>
<p>Afterwards, I give an stdin to the process -- <code>hi\n</code>.</p>
<p>At this point, the program should terminate, and I make sure of it by doing <code>r.kill()</code>.</p>
<p>However, the program does not work as expected.</p>
<p>Instead, the program freezes at the second <code>r.poll()</code>, and I'm not sure why. I suspect that since there is no newline, the program will keep reading, but I'm not sure how to prevent it from doing this.</p>
<p>Any ideas?</p>
|
<python><subprocess><execution>
|
2023-12-31 05:24:45
| 1
| 315
|
Coder100
|
77,738,266
| 14,818,796
|
How to make pydantic class fields immutable?
|
<p>I am trying to create a pydantic class with Immutable class fields (not instance fields).</p>
<p>Here is my base code:</p>
<pre><code>from pydantic import BaseModel
class ImmutableModel(BaseModel):
_name: str = "My Name"
_age: int = 25
ImmutableModel._age = 26
print("Class variables:")
print(f"Name: {ImmutableModel._name}")
print(f"Age: {ImmutableModel._age}")
</code></pre>
<p>Output:</p>
<pre><code>Class variables:
Name: My Name
Age: 26
</code></pre>
<p>I tried using the <code>Config</code> class inside my <code>ImmutableModel</code> to make fields immutable. But it seems like it only works for instance class fields.</p>
<pre><code>class Config:
allow_mutation = False
</code></pre>
<p>FYI, I use <code>Python 3.6.13</code> and <code>pydantic==1.9.2</code></p>
|
<python><python-3.6><pydantic>
|
2023-12-31 05:14:18
| 3
| 1,001
|
Arud Seka Berne S
|
77,737,988
| 313,756
|
How to create a temporary directory for a feature in python's behave (cucumber)?
|
<p>I'm trying to do a python port of some ruby code that uses <a href="https://cucumber.io/docs/cucumber/api/?lang=ruby" rel="nofollow noreferrer">Cucumber</a> for testing. I'm attempting to use the exact same feature files in the new port. One of the features looks a bit like:</p>
<pre><code>Feature: <whatever>
@with_tmpdir
Scenario: generating a file
Given <some setup stuff>
And file 'test.out' does not exist
When I call <something> with argument 'test.out'
Then a file called 'test.out' is created
And the file called 'test.out' contains:
"""
<some contents>
"""
</code></pre>
<p>To support this, the original code has the following in <code>features/support/hooks.rb</code>:</p>
<pre class="lang-rb prettyprint-override"><code>Around('@with_tmpdir') do |scenario, block|
old_pwd = Dir.getwd
Dir.mktmpdir do |dir|
Dir.chdir(dir)
block.call
end
Dir.chdir(old_pwd)
end
</code></pre>
<p>Now, I'd like to figure out how to do this sort of thing in <a href="https://behave.readthedocs.io/en/latest/" rel="nofollow noreferrer">behave</a>, presumably/ideally utilizing <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory" rel="nofollow noreferrer"><code>with tempfile.TemporaryDirectory as tmpdir</code></a> in place of ruby's <a href="https://ruby-doc.org/current/stdlibs/tmpdir/Dir.html#method-c-mktmpdir" rel="nofollow noreferrer"><code>Dir.mktmpdir do |dir|</code></a>.</p>
<p>Alas, I don't see anything to indicate support for <code>Around</code> hooks. It seems like maybe this is the sort of thing that <a href="https://behave.readthedocs.io/en/latest/fixtures/?highlight=fixtures" rel="nofollow noreferrer">fixtures</a> are meant to do? Can fixtures do what I'm hoping for? If so, how?</p>
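<p>The Around hook's logic can be expressed as a plain context manager (a sketch using only the standard library); the commented part shows how it would plug into behave's fixture machinery via <code>use_fixture</code> in a <code>before_tag</code> hook, which is the behave-idiomatic replacement for Around hooks — I have not verified that wiring against a live behave project, so treat it as an assumption:</p>

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def with_tmpdir():
    """Mirror of the Ruby Around hook: chdir into a fresh temp dir, restore after."""
    old_pwd = os.getcwd()
    with tempfile.TemporaryDirectory() as tmpdir:
        os.chdir(tmpdir)
        try:
            yield tmpdir
        finally:
            os.chdir(old_pwd)  # leave the dir before it is deleted

# In behave, the same shape becomes a fixture, registered per tag
# (in features/environment.py):
#
#   from behave import fixture, use_fixture
#
#   @fixture
#   def tmpdir_fixture(context):
#       with with_tmpdir() as d:
#           context.tmpdir = d
#           yield d
#
#   def before_tag(context, tag):
#       if tag == "with_tmpdir":
#           use_fixture(tmpdir_fixture, context)
```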
|
<python><cucumber><python-behave>
|
2023-12-31 01:50:13
| 1
| 10,469
|
lindes
|
77,737,904
| 1,625,455
|
Apply an arbitrary function to Pandas dataframe groupby
|
<p>How do I apply an arbitrary function group-wise to Pandas dataframe? The function should be able to access an entire group df at once, as if it were a full pandas dataframe.</p>
<pre><code>import pandas as pd
def arbitrary_function(df):
"""This function acts on groups of a df. It can see every row and column of a group df."""
# for example
# making a new column by accessing other columns in the df
df['new_col'] = df['data_col'].sum()
# return the original df with the new column
return df
df = pd.DataFrame([[1, 2], [1, 3], [2, 6], [2, 1]], columns=["group_col", "data_col"])
</code></pre>
<p>Before the group operation:</p>
<pre><code>df
group_col data_col
0 1 2
1 1 3
2 2 6
3 2 1
</code></pre>
<pre><code># group the dataframe by group_col
# run arbitrary_function() on the df groups
# the first run of arbitrary_function can see one group df as such:
# group_col data_col
# 0 1 2
# 1 1 3
# return to the original data - no more groups
</code></pre>
<p>Expected output:</p>
<pre><code>df
group_col data_col new_col
0 1 2 5
1 1 3 5
2 2 6 7
3 2 1 7
</code></pre>
<p>This should be done:</p>
<ol>
<li>Without a lambda function.</li>
<li>Without "simplifying" the problem into one that can be done with a column-wise or element-wise operation. This solution should be generalizable to anything you can do on a pandas dataframe.</li>
</ol>
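<p>For the record, a sketch of the grouped call that produces the expected output (note <code>group_keys=False</code>, which keeps the original index so the result lines up with the input rows):</p>

```python
import pandas as pd

def arbitrary_function(df):
    """Sees one whole group df at a time; adds a column from another column."""
    df = df.copy()  # avoid mutating the group view in place
    df['new_col'] = df['data_col'].sum()
    return df

df = pd.DataFrame([[1, 2], [1, 3], [2, 6], [2, 1]],
                  columns=["group_col", "data_col"])

out = df.groupby('group_col', group_keys=False).apply(arbitrary_function)
print(out)
```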
|
<python><pandas><function><group-by>
|
2023-12-31 01:02:00
| 1
| 346
|
Atomic Tripod
|
77,737,859
| 12,104,604
|
How to pass results calculated in an external class to other classes and constantly update the values?
|
<p>I am writing code in Python with a GUI library called Kivy. The code below automatically recalculates the font size when the UI size changes.</p>
<pre><code>#-*- coding: utf-8 -*-
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.core.text.markup import MarkupLabel
from kivy.app import App
from kivy.properties import NumericProperty
from kivy.uix.widget import Widget
from kivy.lang import Builder
Builder.load_string("""
<TestWidget>:
BoxLayout:
size: root.size
orientation: 'vertical'
BoxLayout:
ResizableLabel:
text: "test1"
BoxLayout:
Label:
text: "test2"
font_size: root.resized_font_size
BoxLayout:
Label:
text: "test3"
font_size: root.resized_font_size
BoxLayout:
Label:
text: "test4"
font_size: root.resized_font_size
BoxLayout:
Label:
text: "test5"
font_size: root.resized_font_size
BoxLayout:
Label:
text: "test6"
font_size: root.resized_font_size
BoxLayout:
Label:
text: "test7"
font_size: root.resized_font_size
""")
class ResizableLabel(Label):
markup = True
def on_text(self,*args,**kwargs):
self.adjust_font_size()
def on_size(self,*args,**kwargs):
self.adjust_font_size()
def adjust_font_size(self):
font_size = self.font_size
lbl_available_height = self.height*0.9
lbl_available_width = self.width*0.6
while True:
lbl = MarkupLabel(font_name=self.font_name, font_size=font_size, text=self.text)
lbl.refresh()
if font_size > lbl_available_height:
font_size = lbl_available_height
elif lbl.content_width > lbl_available_width or \
lbl.content_height > lbl_available_height:
font_size *= 0.95
else:
break
while True:
lbl = MarkupLabel(font_name=self.font_name, font_size=font_size, text=self.text)
lbl.refresh()
if lbl.content_width * 1.1 < lbl_available_width and \
lbl.content_height * 1.1 < lbl_available_height:
font_size *= 1.05
else:
break
self.font_size = font_size
class TestWidget(Widget):
resized_font_size = NumericProperty()
def __init__(self, **kwargs):
super(TestWidget, self).__init__(**kwargs)
self.resized_font_size = 22
def printTest(self):
print("printTest")
class TestApp(App):
def __init__(self, **kwargs):
super(TestApp, self).__init__(**kwargs)
def build(self):
return TestWidget()
if __name__ == '__main__':
TestApp().run()
</code></pre>
<p>In this code, within the <code>ResizableLabel</code> class, the font size is calculated to fit the GUI size. However, I am applying the values obtained from <code>ResizableLabel</code> only to the topmost text in the GUI.</p>
<p>This is because performing resizing calculations for all fonts would result in a high computational load.</p>
<p>I want to pass the values calculated for the top font to the other fonts as well. For the second font onwards, I am passing the value using a variable called '<code>resized_font_size</code>.'</p>
<p>How can I always pass the values calculated within the <code>ResizableLabel</code> class to the '<code>resized_font_size</code>' variable in the <code>TestWidget</code> class?</p>
<p><a href="https://i.sstatic.net/q7d6q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q7d6q.png" alt="enter image description here" /></a></p>
|
<python><kivy>
|
2023-12-31 00:34:10
| 1
| 683
|
taichi
|
77,737,797
| 5,036,928
|
Generating uniformly spaced points along a Spline
|
<p>Maybe this post is better suited for Code Review SE but here goes:</p>
<p>I have a series of 3-5 points that are fairly distant. I would like to generate a spline using these points and sample N points along the spline. The only tool/algorithm I've found for this is <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjEjdKpqbiDAxUiAzQIHa85A2sQFnoECBAQAQ&url=https%3A%2F%2Fwww.mathworks.com%2Fmatlabcentral%2Ffileexchange%2F34874-interparc&usg=AOvVaw3O3JzgQdzdH7sYWek48sVK&opi=89978449" rel="nofollow noreferrer">interparc</a>, which claims to implement the following logic (explained <a href="https://stackoverflow.com/a/19118984/5036928">here</a>):</p>
<ul>
<li>Compute the piecewise linear arclength from point to point along the curve. Call it t.</li>
<li>Generate a pair of cubic splines, x(t), y(t).</li>
<li>Differentiate x and y as functions of t. Since these are cubic segments, this is easy. The derivative functions will be piecewise quadratic.</li>
<li>Use an ode solver to move along the curve, integrating the differential arclength function. In MATLAB, ODE45 worked nicely.</li>
</ul>
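<p>Those four steps can also be approximated without an ODE solver (a sketch, assuming SciPy is available: fit parametric cubic splines on the chordal arclength, evaluate them densely, then invert the cumulative arclength by linear interpolation — the function name and the <code>dense</code> resolution are my own choices):</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline

def uniform_resample(points, n, dense=2000):
    """Resample a curve (m x dim array) at n points roughly uniform in arclength."""
    pts = np.asarray(points, dtype=float)
    # 1. piecewise-linear (chordal) arclength parameter t, scaled to [0, 1]
    chord = np.sqrt((np.diff(pts, axis=0) ** 2).sum(axis=1))
    t = np.concatenate([[0.0], np.cumsum(chord)])
    t /= t[-1]
    # 2. parametric cubic splines x(t), y(t), ... (one spline over all columns)
    spline = CubicSpline(t, pts)
    # 3-4. instead of integrating with an ODE solver, evaluate the spline on a
    # dense grid and invert the cumulative arclength by linear interpolation
    tt = np.linspace(0.0, 1.0, dense)
    dd = np.sqrt((np.diff(spline(tt), axis=0) ** 2).sum(axis=1))
    s = np.concatenate([[0.0], np.cumsum(dd)])
    targets = np.linspace(0.0, s[-1], n)
    return spline(np.interp(targets, s, tt))

# Demo: unequally spaced points on a unit circle, resampled uniformly
theta = np.linspace(0, 2 * np.pi, 15)
circle = np.c_[np.cos(theta), np.sin(theta)]
resampled = uniform_resample(circle, 8)
```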
<p>So I sought to port the MATLAB code to Python:</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline, PchipInterpolator
import plotly.graph_objects as go
def interparc(t, data, method='linear'):
"""interparc: interpolate points along a curve in 2 or more dimensions
usage: pt = interparc(t,px,py) % a 2-d curve
usage: pt = interparc(t,px,py,pz) % a 3-d curve
usage: pt = interparc(t,px,py,pz,pw,...) % a 4-d or higher dimensional curve
usage: pt = interparc(t,px,py,method) % a 2-d curve, method is specified
usage: [pt,dudt,fofthandle] = interparc(t,px,py,...) % also returns derivatives, and a function handle
Interpolates new points at any fractional point along
the curve defined by a list of points in 2 or more
dimensions. The curve may be defined by any sequence
of non-replicated points.
arguments: (input)
t - vector of numbers, 0 <= t <= 1, that define
the fractional distance along the curve to
interpolate the curve at. t = 0 will generate
the very first point in the point list, and
t = 1 yields the last point in that list.
Similarly, t = 0.5 will yield the mid-point
on the curve in terms of arc length as the
curve is interpolated by a parametric spline.
If t is a scalar integer, at least 2, then
it specifies the number of equally spaced
points in arclength to be generated along
the curve.
px, py, pz, ... - vectors of length n, defining
points along the curve. n must be at least 2.
Exact Replicate points should not be present
in the curve, although there is no constraint
that the curve has replicate independent
variables.
method - (OPTIONAL) string flag - denotes the method
used to compute the points along the curve.
method may be any of 'linear', 'spline', or 'pchip',
or any simple contraction thereof, such as 'lin',
'sp', or even 'p'.
method == 'linear' --> Uses a linear chordal
approximation to interpolate the curve.
This method is the most efficient.
method == 'pchip' --> Uses a parametric pchip
approximation for the interpolation
in arc length.
method == 'spline' --> Uses a parametric spline
approximation for the interpolation in
arc length. Generally for a smooth curve,
this method may be most accurate.
method = 'csape' --> if available, this tool will
allow a periodic spline fit for closed curves.
ONLY use this method if your points should
represent a closed curve.
If the last point is NOT the same as the
first point on the curve, then the curve
will be forced to be periodic by this option.
That is, the first point will be replicated
onto the end.
If csape is not present in your matlab release,
then an error will result.
DEFAULT: 'spline'
arguments: (output)
pt - Interpolated points at the specified fractional
distance (in arc length) along the curve.
dudt - when a second return argument is required,
interparc will return the parametric derivatives
(dx/dt, dy/dt, dz/dt, ...) as an array.
fofthandle - a function handle, taking numbers in the interval [0,1]
and evaluating the function at those points.
Extrapolation will not be permitted by this call.
Any values of t that lie outside of the interval [0,1]
will be clipped to the endpoints of the curve.
Example:
% Interpolate a set of unequally spaced points around
% the perimeter of a unit circle, generating equally
% spaced points around the perimeter.
theta = sort(rand(15,1))*2*pi;
theta(end+1) = theta(1);
px = cos(theta);
py = sin(theta);
% interpolate using parametric splines
pt = interparc(100,px,py,'spline');
% Plot the result
plot(px,py,'r*',pt(:,1),pt(:,2),'b-o')
axis([-1.1 1.1 -1.1 1.1])
axis equal
grid on
xlabel X
ylabel Y
title 'Points in blue are uniform in arclength around the circle'
Example:
% For the previous set of points, generate exactly 6
% points around the parametric splines, verifying
% the uniformity of the arc length interpolant.
pt = interparc(6,px,py,'spline');
% Convert back to polar form. See that the radius
% is indeed 1, quite accurately.
[TH,R] = cart2pol(pt(:,1),pt(:,2))
% TH =
% 0.86005
% 2.1141
% -2.9117
% -1.654
% -0.39649
% 0.86005
% R =
% 1
% 0.9997
% 0.9998
% 0.99999
% 1.0001
% 1
% Unwrap the polar angles, and difference them.
diff(unwrap(TH))
% ans =
% 1.2541
% 1.2573
% 1.2577
% 1.2575
% 1.2565
% Six points around the circle should be separated by
% 2*pi/5 radians, if they were perfectly uniform. The
% slight differences are due to the imperfect accuracy
% of the parametric splines.
2*pi/5
% ans =
% 1.2566
See also: arclength, spline, pchip, interp1
Author: John D'Errico
e-mail: woodchips@rochester.rr.com
Release: 1.0
Release date: 3/15/2010"""
# ===============================================
# subfunction for evaluation at any point externally
# ===============================================
def foft(t, spl):
# tool allowing the user to evaluate the interpolant at any given point for any values t in [0,1]
pdim = len(spl)
f_t = np.zeros((len(t), pdim))  # shape must be passed as a tuple
# convert t to a column vector, clipping it to [0,1] as we do.
t = np.clip(t, 0, 1).reshape(-1, 1)
# just loop over the splines in the cell array of splines
for i in range(pdim):
f_t[:, i] = spl[i](t.flatten())
return f_t
# ===============================================
# nested function for the integration kernel
# ===============================================
def segkernel(t, y, polyarray):
# arc length integrand: the speed sqrt(sum_j (dx_j/dt)^2), one derivative polynomial per dimension
val = 0.0
for j in range(polyarray.shape[0]):
val = val + np.polyval(polyarray[j, :], t)**2
return np.sqrt(val)
# ===============================================
# ode45 solver function to return needed quantities
# ===============================================
def ode45(fun, t_span, y0, opts, args):
sol = solve_ivp(
fun,
t_span=t_span,
y0=y0,
method='RK45',
# t_eval=,
args=(args,),
vectorized=True,
events=opts['events'],
rtol=opts['reltol']
)
return sol.t, sol.y, sol.t_events, sol.y_events
# ===============================================
# PHASE 1:
# Assemble information and handle the linear case
# t specifies the number of points to be generated equally spaced in arclength
if isinstance(t, (int, np.integer)) and t > 1:
t = np.linspace(0, 1, t)
elif np.any(np.asarray(t) < 0) or np.any(np.asarray(t) > 1):
raise ValueError('All elements of t must be 0 <= t <= 1')
else:
t = np.asarray(t, dtype=float)
# how many points will be interpolated?
nt = len(t)
# the number of points on the curve itself
n = len(data)
if len(data) < 2:
raise ValueError('px and py must be vectors of length at least 2')
# compose px and py into a single array. this way, if more dimensions are provided, the extension is trivial.
pxy = data
ndim = np.ndim(pxy)
# method may be any of {'linear', 'pchip', 'spline', 'csape'}
if method not in ['linear', 'pchip', 'spline']:
raise ValueError("Invalid Method")
# anything that remains in varargin must add
# an additional dimension on the curve/polygon
# for i = 1:numel(varargin)
# pz = varargin{i};
# pz = pz(:);
# if numel(pz) ~= n:
# error('ARCLENGTH:improperpxorpy', ...
# 'pz must be of the same size as px and py')
# pxy = [pxy,pz]; %#ok
# the final number of dimensions provided
ndim = pxy.shape[1] # how does this overwrite previous ndim
# # if csape, then make sure the first point is replicated at the end.
p1 = pxy[ 0,:]
pend = pxy[-1,:]
# # get a tolerance on whether the first point is replicated.
# if np.linalg.norm(p1 - pend) > 10*np.spacing(np.linalg.norm(max(abs(pxy),1))):
# # the two end points were not identical, so wrap the curve
# pxy = np.vstack((pxy, p1))
if not np.allclose(p1, pend):
pxy = np.vstack((pxy, p1))
t = np.append(t, 1)
nt = nt + 1
# preallocate the result, pt
pt = np.full((nt,ndim), np.nan)
# Compute the chordal (linear) arclength of each segment. This will be needed for any of the methods.
chordlen = np.sqrt(np.sum(np.diff(pxy, axis=0)**2, axis=1))
# Normalize the arclengths to a unit total
chordlen = chordlen / np.sum(chordlen)
# cumulative arclength
cumarc = np.append(0, np.cumsum(chordlen))
###############################################################################################
# PHASE 2:
# Handle the linear case if applicable
# The linear interpolant is trivial. do it as a special case
if method == 'linear':
# which interval did each point fall in (in terms of t)?
tbins = np.digitize(t, cumarc)
# catch any problems at the ends
tbins[np.bitwise_or(tbins <= 0, t <= 0)] = 1
tbins[np.bitwise_or(tbins >= n, t >= 1)] = n - 1
# interpolate
s = (t - cumarc[tbins-1]) / chordlen[tbins-1]
# interpolate within each segment; tbins from np.digitize is 1-based, so segment k spans points k-1 and k
pt = pxy[tbins-1,:] + (pxy[tbins,:] - pxy[tbins-1,:]) * s[:, np.newaxis]
# parametric derivatives on each (piecewise linear) segment
dudt = (pxy[tbins,:] - pxy[tbins-1,:]) / chordlen[tbins-1][:, np.newaxis]
# do we need to create the spline as a piecewise linear function?
# spl = cell(1,ndim)
# for i in range(ndim):
# coefs = [np.diff(pxy(:,i))./diff(cumarc),pxy(1:(end-1),i)]
# spl{i} = mkpp(cumarc, coefs)
spl = [CubicSpline(cumarc, pxy[:, i]) for i in range(ndim)]
# create a function handle for evaluation, passing in the splines
fofthandle = lambda t: foft(t,spl)
# we are done at this point
return pt, dudt, fofthandle
###############################################################################################
# PHASE 3:
# Handle the spline or csape or pchip interpolant
# compute parametric splines
spl = [CubicSpline(cumarc, pxy[:, i]) for i in range(ndim)] # list of splines wrt to each dimension of data
spld = [spline.derivative() for spline in spl]
# catch the case where there were exactly three points in the curve, and spline was used to generate the
# interpolant. In this case, spline creates a curve with only one piece, not two.
if (len(cumarc) == 3) and method == 'spline':
cumarc = spl[0].x
n = len(cumarc)
chordlen = np.sum(chordlen)
# Generate the total arclength along the curve by integrating each segment and summing the
# results. The integration scheme does its job using an ode solver.
seglen = np.zeros(len(cumarc) - 1)
for i in range(len(spl[0].x) - 1):
# polyarray here contains the derivative polynomials for each spline in a given segment
polyarray = np.zeros((ndim, 3))
for j in range(ndim):
# polyarray[j, :] = spld[j].c[i, :] # This is original logic but looks fishy to me
polyarray[j, :] = spld[j].c[:, i] # This is modified logic
# integrate the arclength for the i'th segment using ode45 for the integral. I could have
# done this part with quad too, but then it would not have been perfectly (numerically)
# consistent with the next operation in this tool.
t_out, y_out, te, ye = ode45(fun=segkernel,
t_span=[0, chordlen[i]],
y0=[0],
opts={'reltol': 1.e-9, 'events': None},
args=polyarray) # careful with this
# print(y_out)
seglen[i] = y_out.flatten()[-1] # Not sure why i need to flatten this
# and normalize the segments to have unit total length
totalsplinelength = np.sum(seglen)
cumseglen = np.append(0, np.cumsum(seglen))
# which interval did each point fall into, in terms of t, but relative to the cumulative
# arc lengths along the parametric spline?
tbins = np.digitize(t*totalsplinelength, cumseglen)
# catch any problems at the ends
tbins[np.bitwise_or(tbins <= 0, t <= 0)] = 1
tbins[np.bitwise_or(tbins >= n, t >= 1)] = n - 1
############################################################################
# PHASE 4:
# Do the fractional integration within each segment for the interpolated points. t is the parameter
# used to define the splines. It is defined in terms of a linear chordal arclength. This works nicely when
# a linear piecewise interpolant was used. However, what is asked for is an arclength interpolation
# in terms of arclength of the spline itself. Call the arclength traveled along the spline s.
s = totalsplinelength*t
event = lambda t, y, polyarray: y[0]  # must return a scalar for solve_ivp event handling
event.terminal = True
event.direction = 1
opts = {'reltol': 1.e-9, 'events': event}
ti = t.copy()  # copy, so the assignments to ti below do not mutate t
for i in range(nt - 1):
# si is the piece of arc length that we will look for in this spline segment.
si = s[i] - cumseglen[tbins[i] - 1]  # tbins from np.digitize is 1-based
# extract polynomials for the derivatives in the interval the point lies in
for j in range(ndim):
# polyarray[j,:] = spld[j].c[tbins[i],:] # Again this looks fishy
polyarray[j, :] = spld[j].c[:, tbins[i] - 1] # coefficients are transposed vs. MATLAB, and tbins is 1-based
# we need to integrate in t, until the integral crosses the specified value of si. Because we
# have defined totalsplinelength, the lengths will be normalized at this point to a unit length.
#
# Start the ode solver at -si, so we will just look for an event where y crosses zero.
tout,yout,te,ye = ode45(fun=segkernel, # segkernel takes t, y
t_span=[0, chordlen[tbins[i] - 1]],
y0=[-si],
opts=opts,
args=polyarray)
print(te)
# we only need the point where the zero crossing occurred; if no crossing was found, look at each end
if te is not None and len(te) > 0 and len(te[0]) > 0:
ti[i] = te[0][0] + cumarc[tbins[i] - 1]
else:
# a crossing must have happened at the very beginning or the end, and the ode solver
# missed it, not trapping that event.
if abs(yout.flatten()[0]) < abs(yout.flatten()[-1]):
# the event must have been at the start.
ti[i] = tout[0] + cumarc[tbins[i] - 1]
else:
# the event must have been at the end.
ti[i] = tout[-1] + cumarc[tbins[i] - 1]
# Interpolate the parametric splines at ti to get our interpolated value.
for L in range(ndim):
pt[:,L] = spl[L](ti)
# do we need to compute first derivatives here at each point?
dudt = np.zeros((nt,ndim))
for L in range(ndim):
dudt[:,L] = spld[L](ti)
# create a function handle for evaluation, passing in the splines
fofthandle = lambda t: foft(t, spl)
return pt, dudt, fofthandle
T = np.linspace(0,1,500)**3
x = np.sin(2*np.pi*T)
y = np.sin(np.pi*T)
z = np.cos(3*x + y)
pt, _, _ = interparc(t=100,
data=np.column_stack((x, y, z)),
method='spline')
fig = go.Figure([go.Scatter3d(x=pt[:,0], y=pt[:,1], z=pt[:,2], name='equispace', mode='markers',
marker=dict(size=10, color='red')),
go.Scatter3d(x=x, y=y, z=z, name='original', mode='markers',
marker=dict(size=10, color='blue'))])
fig.show()
</code></pre>
<p>Using the <a href="https://stackoverflow.com/a/18244715/5036928">example</a>, the code ran (to my surprise) but did not generate the result I expected:</p>
<p><a href="https://i.sstatic.net/LyqrW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LyqrW.png" alt="enter image description here" /></a></p>
<p>Then I realized that the SO answer provided was for a linear interpolant so I decided to compare that also:</p>
<p><a href="https://i.sstatic.net/s8jO7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s8jO7.png" alt="enter image description here" /></a></p>
<p>And I suppose the result is better, but in this case there were a lot of points defining the original curve in blue. Since I'm only working with 3-5 points, I'm worried a linear interpolant will not reproduce the curvature I'm trying to simulate with the spline. Did I incorrectly port some logic from MATLAB for the spline case (i.e., does it appear that I correctly implemented the logic steps bulleted above)?</p>
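<p>One porting pitfall worth checking, independent of the spline math: MATLAB's bin indices (from <code>histc</code>) are 1-based, and <code>np.digitize</code> also returns 1-based-style bin numbers, while the NumPy arrays they index are 0-based. Every use of <code>tbins</code> as an index therefore needs a consistent <code>-1</code>. A minimal sketch of the convention (illustrative values):</p>

```python
import numpy as np

# Cumulative (normalized) chordal arc length of a 4-point curve.
cumarc = np.array([0.0, 0.25, 0.6, 1.0])
t = np.array([0.1, 0.3, 0.9])

# np.digitize returns k with cumarc[k-1] <= t < cumarc[k],
# i.e. a 1-based segment number, just like MATLAB's histc.
tbins = np.digitize(t, cumarc)
print(tbins)  # [1 2 3]

# Consistent 0-based indexing: segment k spans points k-1 and k.
seg_len = np.diff(cumarc)
s = (t - cumarc[tbins - 1]) / seg_len[tbins - 1]  # fraction within segment
print(np.all((s >= 0) & (s <= 1)))  # True
```

Mixing <code>tbins</code> and <code>tbins - 1</code> in the same expression (for example <code>pxy[tbins, :]</code> next to <code>chordlen[tbins - 1]</code>) shifts the interpolated point by one whole segment.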
|
<python><matlab><scipy><interpolation>
|
2023-12-31 00:02:46
| 0
| 1,195
|
Sterling Butters
|
77,737,772
| 5,942,779
|
How to eliminate duplicate legends in Plotly with subplot (Python)?
|
<p>The following codes generate 2 plotly charts and lay them out in subplots. Each subplot contains legends that partially overlap with another subplot. As a result, there are duplicated legends on the right. Does anyone know a better way to eliminate duplicated legends in this situation? Thanks.</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
f1 = go.Figure([
go.Scatter(x=[1, 2, 3, 4, 5], y=[1, 2, 3, 4, 5], name="A"),
go.Scatter(x=[1, 2, 3, 4, 5], y=[5, 4, 3, 2, 1], name="B")
])
f2 = go.Figure([
go.Scatter(x=[1, 2, 3, 4, 5], y=[1, 2, 5, 4, 5], name="B"),
go.Scatter(x=[1, 2, 3, 4, 5], y=[5, 4, 1, 2, 1], name="C")
])
fig = make_subplots(rows=1, cols=2, subplot_titles=['F1', 'F2'])
for ea in f1.data:
fig.add_trace(ea, row=1, col=1)
for ea in f2.data:
fig.add_trace(ea, row=1, col=2)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/avCEJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/avCEJ.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2023-12-30 23:52:17
| 1
| 689
|
Scoodood
|
77,737,760
| 4,440,436
|
Pandas join with multi-index and NaN
|
<p>I am using Pandas 2.1.3.</p>
<p>I am trying to join two DataFrames on multiple index levels, and one of the index levels has NA's. The minimum reproducible example looks something like this:</p>
<pre><code>a = pd.DataFrame({
'idx_a':['A', 'A', 'B'],
'idx_b':['alpha', 'beta', 'gamma'],
'idx_c': [1.0, 1.0, 1.0],
'x':[10, 20, 30]
}).set_index(['idx_a', 'idx_b', 'idx_c'])
b = pd.DataFrame({
'idx_b':['gamma', 'delta', 'epsilon', np.nan, np.nan],
'idx_c': [1.0, 1.0, 1.0, 1.0, 1.0],
'y':[100, 200, 300, 400, 500]
}).set_index(['idx_b', 'idx_c'])
c = a.join(
b,
how='inner',
on=['idx_b', 'idx_c']
)
print(a)
x
idx_a idx_b idx_c
A alpha 1.0 10
beta 1.0 20
B gamma 1.0 30
print(b)
y
idx_b idx_c
gamma 1.0 100
delta 1.0 200
epsilon 1.0 300
NaN 1.0 400
1.0 500
print(c)
x y
idx_a idx_b idx_c
B gamma 1.0 30 100
1.0 30 400
1.0 30 500
</code></pre>
<p>I would have expected:</p>
<pre><code>print(c)
x y
idx_a idx_b idx_c
B gamma 1.0 30 100
</code></pre>
<p>Why is <code>join</code> matching on the <code>NaN</code> values?</p>
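<p>For what it's worth, one way to sidestep the behavior (a sketch built on the example above) is to drop the rows whose key level is <code>NaN</code> before joining, so a missing key can never be treated as a match:</p>

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({
    'idx_a': ['A', 'A', 'B'],
    'idx_b': ['alpha', 'beta', 'gamma'],
    'idx_c': [1.0, 1.0, 1.0],
    'x': [10, 20, 30],
}).set_index(['idx_a', 'idx_b', 'idx_c'])

b = pd.DataFrame({
    'idx_b': ['gamma', 'delta', 'epsilon', np.nan, np.nan],
    'idx_c': [1.0, 1.0, 1.0, 1.0, 1.0],
    'y': [100, 200, 300, 400, 500],
}).set_index(['idx_b', 'idx_c'])

# Drop rows whose 'idx_b' key is missing before joining, so a NaN key
# can never match anything on the other side.
b_clean = b[b.index.get_level_values('idx_b').notna()]
c = a.join(b_clean, how='inner', on=['idx_b', 'idx_c'])
print(c)
```

This yields the single expected <code>gamma</code> row.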
|
<python><pandas><dataframe>
|
2023-12-30 23:47:29
| 3
| 413
|
tcquinn
|
77,737,562
| 9,346,630
|
Scraping XML data from ClinicalTrials API V2
|
<p>I am trying to reproduce the results of these questions:
<a href="https://stackoverflow.com/questions/70011389/python-beautifulsoup-scraping-xml-data-from-clinicaltrials-gov-api-parse-d">Scraping xml data from clinicaltrials</a> and <a href="https://stackoverflow.com/questions/70029265/python-beautifulsoup-extracting-xml-data-from-clinicaltrials-gov-only-able">extracting xml data from clinicaltrials</a> using the new <a href="https://clinicaltrials.gov/data-about-studies/api-migration" rel="nofollow noreferrer">ClinicalTrials V2 API</a>. According to it, <code>/api/query/full_studies</code> should be replaced with <code>/api/v2/studies</code></p>
<p>However, when I try it, I received 0 results</p>
<pre><code>import requests
import pandas as pd
def download(keyword):
#base_url = "https://clinicaltrials.gov/api/query/full_studies"
base_url = "https://clinicaltrials.gov/api/v2/studies"
params = {
"expr": palabra_clave,
"min_rnk": 1,
"max_rnk": 40,
"fmt": "json"
}
response = requests.get(base_url, params=params)
if response.status_code == 200:
return response.json()
else:
return None
result = download("diabetes")
result
</code></pre>
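<p>A likely cause is that the v1 parameter names (<code>expr</code>, <code>min_rnk</code>, <code>max_rnk</code>, <code>fmt</code>) do not exist in the v2 API, so they are ignored. A sketch using the v2-style names from the migration guide (verify the exact parameters against the v2 docs; <code>query.term</code> and <code>pageSize</code> are the assumptions here):</p>

```python
import requests

BASE_URL = "https://clinicaltrials.gov/api/v2/studies"

def download(keyword: str):
    # Assumed v2 parameter names: 'query.term' replaces the v1 'expr',
    # and 'pageSize' replaces the min_rnk/max_rnk ranking window.
    # JSON is the default response format in v2.
    params = {"query.term": keyword, "pageSize": 40}
    response = requests.get(BASE_URL, params=params)
    response.raise_for_status()
    return response.json()

# Inspect the URL that would be sent, without hitting the network:
prepared = requests.Request(
    "GET", BASE_URL, params={"query.term": "diabetes", "pageSize": 40}
).prepare()
print(prepared.url)
```

Printing the prepared URL is a quick way to confirm which parameters actually reach the server.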
|
<python><xml><web-scraping><beautifulsoup>
|
2023-12-30 22:11:29
| 1
| 493
|
torakxkz
|
77,737,462
| 11,268,857
|
Using macro keyboard as app control board
|
<p>A while back, I bought a small 12-key (+2 rotary toggles) macro keyboard that can be programmed with various keys (<a href="https://github.com/jonnytest1/minikeyboard" rel="nofollow noreferrer">https://github.com/jonnytest1/minikeyboard</a>)</p>
<p>Now I want to use that keyboard to control arbitrary apps, but without interfering with anything a "normal keyboard" does (i.e., without using any of the keys that could do something in Windows or other apps). To achieve that, my idea was to:</p>
<ol>
<li>overwrite the "keyboard drivers" in the device manager to generic HID devices</li>
<li>Write my own keyboard driver in-app</li>
<li>Handle whatever stuff I want it to do</li>
</ol>
<p>my initial attempt (which sort of worked):
use pywinusb to open the device(s) for</p>
<pre><code>VENDOR_ID = 0x1189
PRODUCT_ID = 0x8890
</code></pre>
<p>And then set a</p>
<pre><code>device.set_raw_data_handler(devicebbound_handler(device))
</code></pre>
<p>which semi-worked:</p>
<ul>
<li>it works for the media keys (I get events for those keys, but they also have a Windows effect, so it may not have worked)</li>
<li>if I set it to any "normal keys" like A B C or 1 2 3, I don't get any
<code>raw_data_handler</code> - events</li>
</ul>
<p>My current intuition is that I have to either</p>
<ul>
<li><strong>A)</strong> manually query for key presses</li>
</ul>
<p>or</p>
<ul>
<li><strong>B)</strong> Send a report of some sort to start the keyboard to send interrupt signals</li>
</ul>
<p>Unfortunately, the HID documentation is quite sparse, so I don't know what kind of report I have to send to get either</p>
<p>(I'm pretty sure I have to use a device with usagePage: 12 for the keyboard, which seems to exist at least)</p>
<p>current state of the app:</p>
<pre><code>deviceList: Any = hid.HidDeviceFilter(vendor_id=VENDOR_ID,
product_id=PRODUCT_ID,).get_devices()
devices: list["hid.HidDevice"] = deviceList
def devicebbound_handler(device: hid.HidDevice):
def sample_handler(data):
print(device.product_name+" "+str(device.product_id))
print(data)
pass
return sample_handler
def isPLugged():
pl = False
for device in devices:
pl = pl | device.is_plugged()
return pl
for device in devices:
device.open()
device.set_raw_data_handler(devicebbound_handler(device))
while isPLugged():
sleep(0.01)
</code></pre>
|
<python><keyboard><usb><hid><pywinusb>
|
2023-12-30 21:29:57
| 0
| 881
|
jonathan Heindl
|
77,737,440
| 1,064,197
|
Why selenium can't find element to click using xpath or css selector?
|
<p>I am trying to click on a ">" on the page here but the chrome XPATH or CSS Selector can't find it. I am trying to navigate through dates and get a table per date.
<a href="https://theanalyst.com/na/2023/08/opta-football-predictions/" rel="nofollow noreferrer">https://theanalyst.com/na/2023/08/opta-football-predictions/</a></p>
<p>Should chrome full xpath just work for selenium?</p>
<pre><code>import sys
import os
import pandas as pd
from selenium import webdriver
from pyvirtualdisplay import Display
from bs4 import BeautifulSoup
import chromedriver_binary
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
def parse_website():
# Start a virtual display
display = Display(visible=0, size=(800, 600))
display.start()
try:
# Set Chrome options with the binary location
chrome_options = webdriver.ChromeOptions()
chrome_options.binary_location = "/usr/bin/google-chrome"
# Initialize Chrome driver
driver = webdriver.Chrome()
# Open the desired URL
url = "https://theanalyst.com/na/2023/08/opta-football-predictions/"
driver.get(url)
# Wait for the page to load completely (adjust the time as needed)
# Parse the page source using BeautifulSoup
predictions = WebDriverWait(driver, 10).until(
EC.presence_of_all_elements_located(
(By.CSS_SELECTOR, "iframe[src*=predictions]")
)
)
element = driver.find_element(By.CSS_SELECTOR,"#pos_3.div.fixtures-header.button:nth-child(3)")
element.click()
except Exception as e:
print(f"An error occurred: {e}")
</code></pre>
<p>This code gets this error:</p>
<pre><code> An error occurred: Message: no such element: Unable to locate element: {"method":"css selector","selector":"#pos_3.div.fixtures-header.button:nth-child(3)"}
</code></pre>
<p>Edit: I needed a wait in the code. Thank you Yaroslavm.</p>
|
<python><selenium-webdriver><xpath><css-selectors><selenium-chromedriver>
|
2023-12-30 21:22:59
| 2
| 2,625
|
Michael WS
|
77,737,430
| 893,254
|
Multiprocessing in python where only one thread (process) may use one of a set of values at one time
|
<p>I am in the process of attempting to automate some slow processes by multithreading (multiprocessing - parallelizing) them with Python.</p>
<p>However I have a problem. Each running process must take a piece of data as an argument. For each piece of data, only one instance of each piece of data must be in use at any one time.</p>
<p>To explain this more clearly:</p>
<ul>
<li>Each process connects to an API and requires the use of an API key</li>
<li>I have a fixed number of API keys</li>
<li>Two processes <strong>must not</strong> use the same API key simultaniously</li>
</ul>
<p>I am stuck and can't figure a way around this problem. (Other than the "dumb" solution which I will explain later.)</p>
<p>The issue with multiprocess is that one defines a fixed number of workers which execute as part of a pool. Each worker runs a function which expects to receive some arguments. The arguments are initialized as a list, and workers are dispatched with one entry from the list.</p>
<p>Imagine a list like this:</p>
<pre><code>[a, b, c, d, e, f, g, h, i, ...]
</code></pre>
<p>and a pool of 3 workers.</p>
<p>When the pool first launches, the values <code>a</code>, <code>b</code>, <code>c</code> will be passed to three processes. There is no guarantee about how long each one will take.</p>
<p>It is therefore possible that process 1 finishes, and consumes data <code>d</code>. It is possible this process finishes again before either process 2 or 3 has finished processing data <code>b</code> or <code>c</code>.</p>
<p>If that happens, process 1 will consume data <code>e</code>.</p>
<p><em>It should now be obvious why putting the api key data into the same list as the rest of the data will not work.</em></p>
<p>In the above example, processes 1 and 2 will be processing data <code>e</code> and <code>b</code> respectively. If the api keys had been part of the list feeding the processes with data then elements <code>b</code> and <code>e</code> would contain the same api key. (presumably)</p>
<p>Is there a way to explicitly "pin" some data (like an api key) to each process spawned by <code>pool.map()</code> and thus solve this problem?</p>
|
<python><multithreading><algorithm><parallel-processing><multiprocess>
|
2023-12-30 21:17:05
| 4
| 18,579
|
user2138149
|
77,737,429
| 519,422
|
Python Pandas: why are some data missing from my dict when I get data from JSON-formatted text via dict(line.strip().split(None, 1))?
|
<p>I need to get data (property names and values) from JSON-formatted files. I can get what I need from the files as a list of strings. I.e., this always works:</p>
<pre><code>f = open("f.txt", "r")
data = json.load(f);
f.close()
get = data["payload"]["blob"]["rawLines"]
</code></pre>
<p>so that when I print "get" I see something like this (below) that contains all of the values and property names I need:</p>
<pre><code>get [' name property1 ', ' 98.00000 property2 ', ' 3.00000 property3 ', ' 500.66300 property4 ', ' -50000.9999 property5 ', ' 100.45200 property6 ', ' 59.75258 property7 ', ' 9.66543 property8 ', ' 0.00000 property9 ', ' 100.07655 property10 ', ' 0.00000 property11 ', ' 0.00000 property12 ', ' 0.00000 property13 ', ' 0.00000 property14 ', ' 8.88888 property15 ', ' 1.00000 property16 ', ' 0.00000 property17 ...
</code></pre>
<p>However, when I then make a dict via a strip and split:</p>
<pre><code>mydict = dict(line.strip().split(None, 1) for line in get)
</code></pre>
<p>some of the property name-values pairs are missing. For example, property16 and its value are always missing.</p>
<p>I can't post the data but am hoping that someone may know of a more robust method for dealing with the strip and split step. Today I have been looking at previous posts (such as <a href="https://stackoverflow.com/questions/45547263/convert-split-into-dict-after-calling-strip">this</a> one) but have not been getting far.</p>
<hr />
<p>The solution from @Tim Roberts has solved the fundamental problem. By using</p>
<pre><code>mydict = dict(reversed(line.split(None, 1)) for line in get)
</code></pre>
<p>I get all of the property name–value pairs that were previously missing.</p>
<p>There is one minor issue. The commas persist. So when I turn the dict into a dataframe</p>
<pre><code>mydataframe = pd.DataFrame(mydict.items(), columns=["name", "value"])
</code></pre>
<p>I get</p>
<pre><code> name value
0 Methane CH4
1 Thing2word1 Thing2word2 ... 25.07700
2 Thing3word3 Thing3word3 ... 11.33000
...
</code></pre>
<p>Is there a way to get rid of the commas so that there are only two columns, the property name and value? Thank you.</p>
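<p>If the stray commas are simply trailing separators on each raw line (an assumption; the sample lines below are illustrative), stripping them from each token before building the dict keeps both columns clean:</p>

```python
# Illustrative raw lines in the same "value name" layout as the question,
# each ending with a trailing comma.
get = [' name property1, ', ' 98.00000 property2, ', ' 3.00000 property3, ']

def parse(line):
    # Split once on whitespace, then strip commas/spaces from both tokens.
    value, name = (tok.strip(', ') for tok in line.strip().split(None, 1))
    return name, value

mydict = dict(parse(line) for line in get)
print(mydict)  # {'property1': 'name', 'property2': '98.00000', 'property3': '3.00000'}
```

After that, <code>pd.DataFrame(mydict.items(), columns=["name", "value"])</code> yields the two clean columns with no commas attached.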
|
<python><json><python-3.x><pandas><dataframe>
|
2023-12-30 21:16:32
| 1
| 897
|
Ant
|
77,737,387
| 477,969
|
Build Python as a static library: basic modules are not included
|
<p>I am trying to get the ultimate performance in my application based on python.</p>
<p>I am already using Cython and everything works compiling it in C++.
I am even able to pack everything with PyInstaller and binary dist works.
Now I want to reduce the overhead of interactions between Python DLL and my module PYDs (modules compiled from C++). To achieve this I want to link both my app C++ cythonized modules and Python built as a static library with LTO enabled.</p>
<p>I already went to MSYS package recipe and was able to create the static python library.
However, I noticed the basic modules are missing in that binary (like _ssl, etc).
Those modules are still built as PYDs which reference a Python DLL instead of being included in libpython3.11.a</p>
<p>Is there exist any method to embed those modules in the static library?</p>
<p>This is related to this question, but it is outdated: <a href="https://stackoverflow.com/questions/27872864/static-python-build-missing-modules">Static python build missing modules</a></p>
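<p>CPython's own build system can fold extension modules into the static library via <code>Modules/Setup</code>: module lines listed after a <code>*static*</code> marker are compiled into <code>libpython</code> instead of being built as shared modules. A sketch (the module lines and flags are illustrative and release-dependent; <code>_ssl</code> in particular needs OpenSSL include/lib settings appropriate for your MSYS toolchain):</p>

```
# Modules/Setup (excerpt) -- everything after *static* is linked into libpython
*static*

math mathmodule.c
_socket socketmodule.c
_ssl _ssl.c $(OPENSSL_INCLUDES) $(OPENSSL_LDFLAGS) $(OPENSSL_LIBS)
```

After editing <code>Modules/Setup</code>, re-run configure/make so the Makefile is regenerated with these modules built in.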
|
<python><c++><python-3.x><build><static-linking>
|
2023-12-30 20:58:52
| 0
| 2,114
|
JairoV
|
77,737,358
| 1,234,434
|
How is a list element being converted to a Tuple
|
<p>I am learning python. My research using google hasn't hit on any description on what is occurring here. I'm doing some basic exercises.</p>
<p>A person on a the codewars website put this solution down for a problem:</p>
<pre><code>data=[[18, 20], [45, 2], [61, 12], [37, 6], [21, 21], [78, 9]]
def openOrSenior(data):
return ["Senior" if age >= 55 and handicap >= 8 else "Open" for (age, handicap) in data]
</code></pre>
<p>I'm unable to understand how they can use a tuple to access the sublist elements. What is this called in Python and how does it work?</p>
<pre><code>for (age, handicap) in data
</code></pre>
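<p>This is iterable unpacking in the <code>for</code> statement: each element of <code>data</code> is itself a two-element list, so Python unpacks it into the two loop targets on every iteration (the parentheses around <code>(age, handicap)</code> are optional). It is the loop form of an ordinary unpacking assignment:</p>

```python
data = [[18, 20], [45, 2], [61, 12]]

# The loop target is a tuple of names; each 2-element sublist is unpacked.
for age, handicap in data:
    print(age, handicap)

# Equivalent longhand: one unpacking assignment per iteration.
for entry in data:
    age, handicap = entry
    print(age, handicap)
```

If any sublist does not have exactly two elements, the unpacking raises a <code>ValueError</code>.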
|
<python><for-loop>
|
2023-12-30 20:47:59
| 3
| 1,033
|
Dan
|
77,737,323
| 1,397,061
|
Why does parallel reading of an HDF5 Dataset max out at 100% CPU, but only for large Datasets?
|
<p>I'm using Cython to read a single Dataset from an HDF5 file using 64 threads. Each thread calculates a start index <code>start</code> and chunk size <code>size</code>, and reads from that chunk into a common buffer <code>buf</code>, which is a memoryview of a NumPy array. Crucially, each thread opens its own copy of the file and Dataset. Here's the code:</p>
<pre><code>def read_hdf5_dataset(const char* file_name, const char* dataset_name,
long[::1] buf, int num_threads):
cdef hsize_t base_size = buf.shape[0] // num_threads
cdef hsize_t start, size
cdef hid_t file_id, dataset_id, mem_space_id, file_space_id
cdef int thread
for thread in prange(num_threads, nogil=True):
start = base_size * thread
size = base_size + buf.shape[0] % num_threads \
if thread == num_threads - 1 else base_size
file_id = H5Fopen(file_name, H5F_ACC_RDONLY, H5P_DEFAULT)
dataset_id = H5Dopen2(file_id, dataset_name, H5P_DEFAULT)  # third arg is a dataset access property list, not a file-access flag
mem_space_id = H5Screate_simple(1, &size, NULL)
file_space_id = H5Dget_space(dataset_id)
H5Sselect_hyperslab(file_space_id, H5S_SELECT_SET, &start,
NULL, &size, NULL)
H5Dread(dataset_id, H5Dget_type(dataset_id), mem_space_id,
file_space_id, H5P_DEFAULT, <void*> &buf[start])
H5Sclose(file_space_id)
H5Sclose(mem_space_id)
H5Dclose(dataset_id)
H5Fclose(file_id)
</code></pre>
<p>Although it reads the Dataset correctly, the CPU utilization maxes out at exactly 100% on a float32 Dataset of ~10 billion entries, even though it uses all 64 CPUs (albeit only at ~20-30% utilization due to the I/O bottleneck) on a float32 Dataset of ~100 million entries. I've tried this on two different computing clusters with the same result. Maybe it has something to do with the size of the Dataset being greater than INT32_MAX?</p>
<p>What's stopping this code from running in parallel on extremely large datasets, and how can I fix it? Any other suggestions to improve the code's clarity or efficiency would also be appreciated.</p>
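<p>One thing worth ruling out (an assumption to test, not a diagnosis): HDF5's global library lock. A thread-safe HDF5 build serializes every API call behind a single lock, so once reads stop being purely I/O bound, 64 threads inside <code>H5Dread</code> can collapse to roughly one core of useful work; separate processes each get their own copy of the library and its lock. A process-based sketch using h5py (chunking and dtype are illustrative):</p>

```python
import multiprocessing as mp
import numpy as np
import h5py

def read_chunk(args):
    # Each worker opens its own file handle; HDF5 state is per-process here,
    # so no library-wide lock is shared between readers.
    file_name, dataset_name, start, stop = args
    with h5py.File(file_name, "r") as f:
        return start, f[dataset_name][start:stop]

def parallel_read(file_name, dataset_name, n, dtype, num_workers=8):
    bounds = np.linspace(0, n, num_workers + 1).astype(np.int64)
    tasks = [(file_name, dataset_name, int(a), int(b))
             for a, b in zip(bounds[:-1], bounds[1:])]
    out = np.empty(n, dtype=dtype)
    with mp.Pool(num_workers) as pool:
        for start, chunk in pool.imap_unordered(read_chunk, tasks):
            out[start:start + len(chunk)] = chunk
    return out
```

Comparing this against the threaded Cython version on the ~10-billion-entry dataset would show whether the lock, rather than the dataset size, is the bottleneck.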
|
<python><cython><hdf5><hdf>
|
2023-12-30 20:36:42
| 2
| 27,225
|
1''
|
77,737,131
| 17,365,694
|
`UNIQUE constraint failed at: pharmcare_patient.id` in my pharmcare app
|
<p>I have a Django app called pharmcare in the project I am currently working on, and I keep hitting a unique-constraint error when I override the <code>save()</code> method to compute and store the <code>total</code> payment made by the patient, so that the pharmacist creating or updating the patient's record doesn't have to do the math manually. When I use the form from <code>CreateView</code> in views.py without overriding the save method, everything works, but the computed total doesn't appear in the admin panel. If I instead compute it in <code>save()</code> in the app's models.py, I get the error below; if I remove the <code>save()</code> override from the model, the error disappears, but that's not what I want, since I want the total saved to the database as soon as the pharmacist finishes creating or modifying the form. I tried the <code>get_or_create</code> method, but that didn't help, and another option I don't want to use is <code>SET_NULL</code> for the foreign key's <code>on_delete</code>, since I want the site owner to be able to easily delete the pharmacist/organizer along with the records they created. The database I am using is PostgreSQL, btw.</p>
<p>Here is my pharmcare model of the patient table:</p>
<pre class="lang-py prettyprint-override"><code>
class Patient(models.Model):
    medical_charge = models.PositiveBigIntegerField(
        blank=True, null=True,
        verbose_name="amount paid (medical charge if any)")
    notes = models.TextField(null=True, blank=True)
    pharmacist = models.ForeignKey(
        "Pharmacist", on_delete=models.SET_NULL, null=True, blank=True)
    organization = models.ForeignKey(
        'leads.UserProfile', on_delete=models.CASCADE)
    user = models.ForeignKey(
        'songs.User', on_delete=models.CASCADE)
    patient = models.ForeignKey(
        'PatientDetail', on_delete=models.CASCADE,
        verbose_name='Patient-detail')
    medical_history = models.ForeignKey(
        'MedicationHistory',
        on_delete=models.CASCADE)
    total = models.PositiveBigIntegerField(
        editable=True, blank=True,
        null=True, verbose_name="Total (auto-add)")
    slug = models.SlugField(null=True, blank=True)
    date_created = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ['id', '-date_created']

    def __str__(self):
        return self.patient.first_name

    def get_total_charge(self) -> int:
        total = 0
        # check whether there are additional charges, e.g. drug price;
        # if so, add the medical charge to the total
        if self.medical_charge:
            amount_charged = self.patient.consultation + self.medical_charge
            total += amount_charged
            return total
        total += self.patient.consultation
        return total

    # where the error is coming from
    def save(self, *args, **kwargs):
        """Override the save method to dynamically store the total and
        slug in the db whenever form_valid() of the patient view runs."""
        self.total = self.get_total_charge()
        self.slug = slug_modifier()
        return super().save(self, *args, **kwargs)
</code></pre>
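<p>One detail in the override above that may be worth isolating: <code>super().save(self, *args, **kwargs)</code> passes <code>self</code> explicitly, so it lands in the first positional parameter of the parent's <code>save()</code>, which in Django 4.2 is <code>force_insert</code>. A plain-Python sketch of how that binding happens (no Django required; <code>Base</code> and <code>Child</code> are illustrative stand-in names):</p>

```python
# Stand-ins for django.db.models.Model and the Patient model above;
# the names Base/Child are illustrative only.
class Base:
    def save(self, force_insert=False, force_update=False, **kwargs):
        # Django's Model.save() takes force_insert as its first
        # positional parameter after self.
        return force_insert

class Child(Base):
    def save(self, *args, **kwargs):
        # Same shape as the override above: `self` is passed explicitly,
        # so the parent receives it as force_insert.
        return super().save(self, *args, **kwargs)

result = Child().save()
# force_insert ended up bound to the Child instance -- a truthy value --
# so a save written this way would always attempt an INSERT.
assert bool(result) is True
```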
<p>My View:</p>
<pre class="lang-py prettyprint-override"><code>
class PatientCreateView(OrganizerPharmacistLoginRequiredMixin, CreateView):
    """ Handles the request-response cycle made by the admin/pharmacists
    to create a patient """
    template_name = 'pharmcare/patients/patient-info-create.html'
    form_class = PatientModelForm
    # queryset = Patient.objects.all()

    def get_queryset(self):
        organization = self.request.user.userprofile
        user = self.request.user
        if user.is_organizer or user.is_pharmacist:
            queryset = Patient.objects.filter(
                organization=organization)
        else:
            queryset = Patient.objects.filter(
                pharmacist=user.pharmacist.organization
            )
            queryset = queryset.filter(pharmacist__user=user)
        return queryset

    def form_valid(self, form: BaseModelForm) -> HttpResponse:
        user = self.request.user
        form = form.save(commit=False)
        form.user = user
        form.organization = user.userprofile
        form.save()
        # Patient.objects.get_or_create(pharmacist=user.pharmacist.organization)
        return super(PatientCreateView, self).form_valid(form)

    def get_success_url(self) -> str:
        return reverse('pharmcare:patient-info')
</code></pre>
<p>The error message from my dev environment:</p>
<pre class="lang-py prettyprint-override"><code>
IntegrityError at /pharmcare/patient-info-create/
UNIQUE constraint failed: pharmcare_patient.id
Request Method: POST
Request URL: http://127.0.0.1:8000/pharmcare/patient-info-create/
Django Version: 4.2
Exception Type: IntegrityError
Exception Value:
UNIQUE constraint failed: pharmcare_patient.id
Exception Location: C:\Users\USER\Desktop\dj-tests\env\Lib\site-packages\django\db\backends\sqlite3\base.py, line 328, in execute
Raised during: pharmcare.views.patients.PatientCreateView
Python Executable: C:\Users\USER\Desktop\dj-tests\env\Scripts\python.exe
Python Version: 3.12.0
Python Path:
['C:\\Users\\USER\\Desktop\\dj-tests',
'C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312\\python312.zip',
'C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312\\DLLs',
'C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312\\Lib',
'C:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python312',
'C:\\Users\\USER\\Desktop\\dj-tests\\env',
'C:\\Users\\USER\\Desktop\\dj-tests\\env\\Lib\\site-packages',
'C:\\Users\\USER\\Desktop\\dj-tests\\env\\Lib\\site-packages\\win32',
'C:\\Users\\USER\\Desktop\\dj-tests\\env\\Lib\\site-packages\\win32\\lib',
'C:\\Users\\USER\\Desktop\\dj-tests\\env\\Lib\\site-packages\\Pythonwin']
Server time: Sat, 30 Dec 2023 18:02:51 +0000
</code></pre>
<p>I have also tried using the <code>self.object</code> attribute provided by Django's CBVs, and I have looked at previous questions about this problem here on Stack Overflow, but none of them solved it; about 80% of them relied on SET_NULL for the foreign key, which I want to avoid.</p>
<p>I would really appreciate help making this work, because it has stalled my progress for a day now. Thank you!</p>
|
<python><django><postgresql><django-orm><django-class-based-views>
|
2023-12-30 19:13:33
| 2
| 474
|
Blaisemart
|
77,737,099
| 5,378,816
|
pylint without user site directory?
|
<p>My installation of <code>pylint</code> started to behave strangely. It complained that packages that were present could not be imported. I found the problem in the <code>pylint</code> script's shebang line:</p>
<pre><code>#! /usr/bin/python3 -sP
</code></pre>
<p>where the <code>-s</code> option has following meaning:</p>
<blockquote>
<pre><code>-s Don't add user site directory to sys.path
</code></pre>
</blockquote>
<p>It does not make any sense to me. I removed the <code>-s</code> and I'm able to check my programs again. Could somebody please explain the logic behind it?</p>
<p>The system in question is Fedora 39, <code>pylint</code> was installed from Fedora RPMs.</p>
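<p>For reference, the effect of <code>-s</code> can be observed directly: it disables the user site directory, which the <code>site</code> module reports via <code>ENABLE_USER_SITE</code>. A minimal stdlib-only check:</p>

```python
import subprocess
import sys

# Run the current interpreter with -s and ask the site module whether
# the user site directory is enabled.
with_s = subprocess.run(
    [sys.executable, "-s", "-c", "import site; print(site.ENABLE_USER_SITE)"],
    capture_output=True, text=True,
).stdout.strip()

# Under -s, packages installed with `pip install --user` become
# invisible, which matches the "present but not importable" symptom.
assert with_s == "False"
```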
<hr />
<p>UPDATE: <strong>It was a bug</strong> and Fedora will release a fixed version soon.</p>
|
<python><pylint>
|
2023-12-30 19:01:32
| 0
| 17,998
|
VPfB
|
77,737,062
| 6,759,459
|
AttributeError: module 'redis' has no attribute 'Redis'
|
<h2>Problem</h2>
<p>I installed the <code>redis</code> module, but I cannot create the client as described in the documentation.
This is a Redis client for a Redis cluster hosted in a Docker container.</p>
<p>Following <a href="https://github.com/redis/redis-py" rel="nofollow noreferrer">documentation</a>, as you can see:</p>
<pre><code>>>> import redis
>>> r = redis.Redis(host='localhost', port=6379, db=0)
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
</code></pre>
<h3>Error</h3>
<pre><code>backend-web-1 | from redis.conversation_memory import redis_client, initiate_user_memory
backend-web-1 | File "/app/redis/conversation_memory.py", line 5, in <module>
backend-web-1 | redis_client = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)
backend-web-1 | ^^^^^^^^^^^
backend-web-1 | AttributeError: module 'redis' has no attribute 'Redis'
</code></pre>
<p><strong>Question</strong></p>
<p>What's wrong with my implementation?</p>
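<p>A possibly relevant observation: the traceback path <code>/app/redis/conversation_memory.py</code> suggests the project contains its own <code>redis/</code> package, which would shadow the installed library on <code>sys.path</code>, so <code>import redis</code> picks up the local directory (which has no <code>Redis</code> attribute). A stdlib-only sketch of the shadowing mechanism, using a made-up package name <code>shadowme</code>:</p>

```python
import importlib
import os
import sys
import tempfile

# Build a local package whose name could collide with an installed one;
# `shadowme` is a hypothetical name purely for illustration.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "shadowme"))
open(os.path.join(tmp, "shadowme", "__init__.py"), "w").close()

sys.path.insert(0, tmp)  # project dirs land here, ahead of site-packages
importlib.invalidate_caches()
mod = importlib.import_module("shadowme")

# The import resolved to the local directory, not any installed package.
# Printing redis.__file__ is the analogous diagnostic for the real error.
assert mod.__file__.startswith(tmp)
```

<p>If that is the cause, renaming the project's <code>redis/</code> directory (and updating its imports) would be the usual fix.</p>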
|
<python><caching><docker-compose><redis><redis-cluster>
|
2023-12-30 18:50:40
| 1
| 926
|
Ari
|
77,736,948
| 6,934,489
|
How to exclude values from a dataframe based on values in a second dataframe
|
<p>I have two dataframes, df1 and df2, example:</p>
<p><code>df1</code>:</p>
<pre><code>Date ID Value
1/2/2013 001 1
2/5/2013 002 15
3/4/2013 001 2
1/1/2014 005 17
2/5/2014 004 1
7/1/2016 002 2
7/1/2016 001 4
8/1/2016 007 4
</code></pre>
<p><code>df2</code>:</p>
<pre><code>Year ID
2013 001
2014 005
2014 004
2016 001
</code></pre>
<p>I want to get a dataframe where the rows of df1 whose Year and ID combination appears in df2 are removed. For example, since df2 contains 2013 and 001, I would like to remove all rows from df1 that have a date in 2013 and an ID of 001.</p>
<p>The resulting df3 would be:</p>
<p><code>df3</code>:</p>
<pre><code>Date ID Value
2/5/2013 002 15
7/1/2016 002 2
8/1/2016 007 4
</code></pre>
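<p>One common approach, shown here as a sketch on a reduced version of the data, is a left anti-join: derive a <code>Year</code> column from <code>Date</code>, merge against <code>df2</code> with <code>indicator=True</code>, and keep only the rows that found no match:</p>

```python
import pandas as pd

# Reduced stand-ins for df1 and df2 from the question.
df1 = pd.DataFrame({
    "Date": ["1/2/2013", "2/5/2013", "7/1/2016", "8/1/2016"],
    "ID": ["001", "002", "001", "007"],
    "Value": [1, 15, 4, 4],
})
df2 = pd.DataFrame({"Year": [2013, 2016], "ID": ["001", "001"]})

# Derive the Year key, then left-merge with an indicator column.
df1["Year"] = pd.to_datetime(df1["Date"], format="%m/%d/%Y").dt.year
merged = df1.merge(df2, on=["Year", "ID"], how="left", indicator=True)

# Keep only rows that had no (Year, ID) match in df2.
df3 = merged[merged["_merge"] == "left_only"].drop(columns=["Year", "_merge"])
assert list(df3["ID"]) == ["002", "007"]
```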
|
<python><pandas><dataframe>
|
2023-12-30 18:10:31
| 2
| 755
|
DPdl
|
77,736,879
| 1,938,410
|
Two images do not perfect align when using Pillow putalpha to set image transparency
|
<p>I am learning to use the Pillow library to process images, and I am following this tutorial (<a href="https://note.nkmk.me/en/python-pillow-putalpha/" rel="nofollow noreferrer">https://note.nkmk.me/en/python-pillow-putalpha/</a>) to use the <code>putalpha()</code> method. I was able to use <code>horse.png</code> and <code>horse_r.png</code> to make different parts of the image transparent. However, I found that when I put these two images together, one above the other, in a Microsoft Word or PowerPoint document or in Photoshop, a white contour line of the horse always appears. I checked the pixel values of both images at the coordinates where the white line appears, and they all seem correct; there is no pixel whose alpha is 0 in both images. I wonder where this white line comes from and how to make it disappear?</p>
<p>Here are the two images for the code:</p>
<p><a href="https://i.sstatic.net/hetc7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hetc7.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/mPNfp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mPNfp.png" alt="enter image description here" /></a></p>
<p>And here is the code I have used to generate the two images:</p>
<pre><code>from PIL import Image, ImageDraw, ImageFilter, ImageOps
im_check = Image.open('horse.png').convert('1')
im_check.save('horse.png')
im_check_r = ImageOps.invert(im_check)
im_check_r.save('horse_r.png')
im_rgb = Image.open('lena.jpg')
im_a = Image.open('horse.png').convert('1').resize(im_rgb.size)
im_rgba = im_rgb.copy()
im_rgba.putalpha(im_a)
im_rgba.save('1.png')
im_a = Image.open('horse_r.png').convert('1').resize(im_rgb.size)
im_rgba = im_rgb.copy()
im_rgba.putalpha(im_a)
im_rgba.save('2.png')
</code></pre>
<p><a href="https://i.sstatic.net/z5ACv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z5ACv.png" alt="The first picture setting the horse part transparent." /></a>
<a href="https://i.sstatic.net/TrmfY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TrmfY.png" alt="The second picture setting the reverse-horse part transparent." /></a>
<a href="https://i.sstatic.net/eu109.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eu109.jpg" alt="When I place one picture above another." /></a></p>
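<p>If the end goal is the combined picture, one seam-free alternative is to composite in Pillow with a single mask instead of stacking two complementary-alpha PNGs in an external editor. A sketch with tiny solid-colour placeholder images (the sizes and colours are made up for illustration; in practice the horse mask and lena image would take their places):</p>

```python
from PIL import Image

# Tiny solid-colour stand-ins for the two source pictures.
img_a = Image.new("RGB", (4, 4), (200, 50, 50))
img_b = Image.new("RGB", (4, 4), (50, 50, 200))

# A single mask playing the role of horse.png: white = take img_a.
mask = Image.new("L", (4, 4), 0)
mask.paste(255, (0, 0, 2, 4))  # left half white

# Each output pixel comes from exactly one source image, so there is no
# boundary where two partially transparent layers have to overlap.
combined = Image.composite(img_a, img_b, mask)
assert combined.getpixel((0, 0)) == (200, 50, 50)
assert combined.getpixel((3, 0)) == (50, 50, 200)
```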
|
<python><python-imaging-library><png>
|
2023-12-30 17:45:22
| 1
| 507
|
SamTest
|
77,737,654
| 1,725,974
|
`columns[summary.sort_values() > 0]` behaviour not making sense
|
<p>I have a result of <code>isna()</code> like this:</p>
<pre><code>data.isna().sum().sort_values()
</code></pre>
<pre><code>index 0
Longtitude 0
Latitude 0
Landsize 0
Bathroom 0
Bedroom2 0
Regionname 0
Distance 0
Postcode 0
SellerG 0
Method 0
Price 0
Type 0
Rooms 0
Address 0
Suburb 0
Date 0
Propertycount 0
Car 25
CouncilArea 553
YearBuilt 2130
BuildingArea 2542
dtype: int64
</code></pre>
<p>What I'd like to do is to get the column names in a list in ascending order of values where they're non-zero - so in the case above the last four. So essentially, I do this, but it gives me the wrong result:</p>
<pre><code>>>> list(data.columns[data.isna().sum().sort_values() > 0])
['Latitude', 'Longtitude', 'Regionname', 'Propertycount']
</code></pre>
<p>If I do not sort it, it works as expected:</p>
<pre><code>>>> list(data.columns[data.isna().sum() > 0]) # no `.sort_values()`
['Car', 'BuildingArea', 'YearBuilt', 'CouncilArea']
</code></pre>
<p>but I'd like the list to be sorted.</p>
<p>BTW it's the same behaviour with <code>isnull()</code></p>
<p>My questions are these:</p>
<ol>
<li>Why is the above happening? Why does sorting the result give some weird output (and it's the same every time - no matter how many times you run it)?</li>
<li>How may I get the names of the columns in ascending order in a list?</li>
</ol>
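<p>For context on (1): <code>data.columns[mask]</code> applies a boolean mask positionally, so a mask whose rows were reordered by <code>sort_values()</code> no longer lines up with the columns. For (2), filtering to non-zero first and then sorting avoids the misalignment entirely; the result's index is already the column names in ascending order. A sketch with a small stand-in Series:</p>

```python
import pandas as pd

# Small stand-in for data.isna().sum() from the question.
counts = pd.Series({"Suburb": 0, "Car": 25, "BuildingArea": 2542,
                    "Date": 0, "YearBuilt": 2130})

# Filter to non-zero first, then sort; take the index at the end.
result = list(counts[counts > 0].sort_values().index)
assert result == ["Car", "YearBuilt", "BuildingArea"]
```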
|
<python><pandas>
|
2023-12-30 17:27:01
| 1
| 1,539
|
Anonymous Person
|
77,736,713
| 1,089,412
|
PyGObject failed to install because of pycairo failing
|
<p>I'm trying to install <code>PyGObject</code> system-wide, but I'm struggling with a strange error that pops up when the installer tries to build the <code>pycairo</code> wheel. I tried upgrading and reinstalling <code>setuptools</code> with <code>sudo python3 -m pip install --upgrade --force-reinstall setuptools</code>, but that did not help. I also tried setting <code>export SETUPTOOLS_USE_DISTUTILS=stdlib</code>, but that did not work either. The error is:</p>
<pre><code> File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 290, in set_undefined_options
setattr(self, dst_option, getattr(src_cmd_obj, src_option))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: install_layout. Did you mean: 'install_platlib'?
</code></pre>
<p>It seems the pycairo installer tries to access a method/attribute that somehow does not exist.</p>
<p>How to fix this?</p>
<p>I'm using Python 3.11</p>
<pre><code>$ python3 --version
Python 3.11.7
</code></pre>
<p>Full install log below</p>
<pre><code>$ sudo python3 -m pip install --upgrade --force-reinstall PyGObject
Collecting PyGObject
Using cached PyGObject-3.46.0.tar.gz (723 kB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [121 lines of output]
Collecting setuptools
Using cached setuptools-69.0.3-py3-none-any.whl (819 kB)
Collecting wheel
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Collecting pycairo
Using cached pycairo-1.25.1.tar.gz (347 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: pycairo
Building wheel for pycairo (pyproject.toml): started
Building wheel for pycairo (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for pycairo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [93 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.11
creating build/lib.linux-x86_64-3.11/cairo
copying cairo/__init__.py -> build/lib.linux-x86_64-3.11/cairo
copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.11/cairo
copying cairo/py.typed -> build/lib.linux-x86_64-3.11/cairo
running build_ext
building 'cairo._cairo' extension
creating build/temp.linux-x86_64-3.11
creating build/temp.linux-x86_64-3.11/cairo
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/bufferproxy.c -o build/temp.linux-x86_64-3.11/cairo/bufferproxy.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/cairomodule.c -o build/temp.linux-x86_64-3.11/cairo/cairomodule.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/context.c -o build/temp.linux-x86_64-3.11/cairo/context.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/device.c -o build/temp.linux-x86_64-3.11/cairo/device.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/enums.c -o build/temp.linux-x86_64-3.11/cairo/enums.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/error.c -o build/temp.linux-x86_64-3.11/cairo/error.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/font.c -o build/temp.linux-x86_64-3.11/cairo/font.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/glyph.c -o build/temp.linux-x86_64-3.11/cairo/glyph.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/matrix.c -o build/temp.linux-x86_64-3.11/cairo/matrix.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/misc.c -o build/temp.linux-x86_64-3.11/cairo/misc.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/path.c -o build/temp.linux-x86_64-3.11/cairo/path.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/pattern.c -o build/temp.linux-x86_64-3.11/cairo/pattern.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/rectangle.c -o build/temp.linux-x86_64-3.11/cairo/rectangle.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/region.c -o build/temp.linux-x86_64-3.11/cairo/region.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/surface.c -o build/temp.linux-x86_64-3.11/cairo/surface.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/textcluster.c -o build/temp.linux-x86_64-3.11/cairo/textcluster.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DPYCAIRO_VERSION_MAJOR=1 -DPYCAIRO_VERSION_MINOR=25 -DPYCAIRO_VERSION_MICRO=1 -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/python3.11 -c cairo/textextents.c -o build/temp.linux-x86_64-3.11/cairo/textextents.o -Wall -Warray-bounds -Wcast-align -Wconversion -Wextra -Wformat=2 -Wformat-nonliteral -Wformat-security -Wimplicit-function-declaration -Winit-self -Winline -Wmissing-format-attribute -Wmissing-noreturn -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wreturn-type -Wshadow -Wsign-compare -Wstrict-aliasing -Wundef -Wunused-but-set-variable -Wswitch-default -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-command-line-argument -fno-strict-aliasing -fvisibility=hidden -std=c99
x86_64-linux-gnu-gcc -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -g -fwrapv -O2 build/temp.linux-x86_64-3.11/cairo/bufferproxy.o build/temp.linux-x86_64-3.11/cairo/cairomodule.o build/temp.linux-x86_64-3.11/cairo/context.o build/temp.linux-x86_64-3.11/cairo/device.o build/temp.linux-x86_64-3.11/cairo/enums.o build/temp.linux-x86_64-3.11/cairo/error.o build/temp.linux-x86_64-3.11/cairo/font.o build/temp.linux-x86_64-3.11/cairo/glyph.o build/temp.linux-x86_64-3.11/cairo/matrix.o build/temp.linux-x86_64-3.11/cairo/misc.o build/temp.linux-x86_64-3.11/cairo/path.o build/temp.linux-x86_64-3.11/cairo/pattern.o build/temp.linux-x86_64-3.11/cairo/rectangle.o build/temp.linux-x86_64-3.11/cairo/region.o build/temp.linux-x86_64-3.11/cairo/surface.o build/temp.linux-x86_64-3.11/cairo/textcluster.o build/temp.linux-x86_64-3.11/cairo/textextents.o -L/usr/lib/x86_64-linux-gnu -lcairo -o build/lib.linux-x86_64-3.11/cairo/_cairo.cpython-311-x86_64-linux-gnu.so
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
Traceback (most recent call last):
File "/tmp/tmpieb1oamc_in_process.py", line 363, in <module>
main()
File "/tmp/tmpieb1oamc_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpieb1oamc_in_process.py", line 261, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 230, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
self.run_setup()
File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 549, in <module>
main()
File "setup.py", line 513, in main
setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 335, in run
self.run_command('install')
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 68, in run
return orig.install.run(self)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/command/install.py", line 622, in run
self.run_command(cmd_name)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 985, in run_command
cmd_obj.ensure_finalized()
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "setup.py", line 416, in finalize_options
du_install_lib.finalize_options(self)
File "/usr/lib/python3/dist-packages/setuptools/command/install_lib.py", line 17, in finalize_options
self.set_undefined_options('install',('install_layout','install_layout'))
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 290, in set_undefined_options
setattr(self, dst_option, getattr(src_cmd_obj, src_option))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: install_layout. Did you mean: 'install_platlib'?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycairo
Failed to build pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
|
<python><pip><setuptools><pycairo>
|
2023-12-30 16:55:48
| 0
| 3,306
|
piotrekkr
|
77,736,611
| 5,703,539
|
Cannot access Flask Rest api with docker anywhere on internet via VPS
|
<p>I have a basic REST API implemented in Flask. I containerized it with Docker and it works well locally. I then pushed the project to a VPS because I want the Flask API to be accessible from anywhere on the internet.
But I haven't been able to do so: I can't access the Flask API on the host server. I'm completely new to Docker and VPS hosting, but based on what I was able to figure out on various forums, this is what I have set up.</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.11
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY . /app
RUN python3 -m pip install -r /app/requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py"]
</code></pre>
<p>requirements.txt</p>
<pre><code>aiohttp==3.8.6
aiohttp-retry==2.8.3
aiosignal==1.3.1
async-timeout==4.0.3
attrs==23.1.0
blinker==1.6.3
certifi==2023.7.22
charset-normalizer==3.3.1
click==8.1.7
distlib==0.3.7
filelock==3.12.4
Flask==2.3.0
Flask-Cors==4.0.0
flask-marshmallow==0.14.0
Flask-MySQL==1.5.2
Flask-MySQLdb==2.0.0
Flask-SQLAlchemy==3.1.1
frozenlist==1.4.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
marshmallow-sqlalchemy==0.29.0
multidict==6.0.4
mysqlclient==2.2.0
packaging==23.2
platformdirs==3.11.0
PyJWT==2.8.0
PyMySQL==1.1.0
python-dotenv==1.0.0
requests==2.31.0
six==1.16.0
SQLAlchemy==2.0.22
twilio==8.10.0
typing_extensions==4.8.0
urllib3==2.0.7
virtualenv==20.24.5
Werkzeug==3.0.0
yarl==1.9.2
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask, request, jsonify, json
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.event import listens_for
from flaskext.mysql import MySQL
from flask_cors import CORS
from dataclasses import dataclass
from sqlalchemy import text
from sqlalchemy.pool import QueuePool  # needed by the engine_options below
from urllib.parse import quote
app = Flask(__name__)
CORS(app, origins=["http://localhost:3000", "http://localhost:3000"])
db = SQLAlchemy()
db = SQLAlchemy(engine_options={"pool_size": 10, 'pool_recycle': 280, "poolclass":QueuePool, "pool_pre_ping":True})
mysql =MySQL()
@dataclass
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
@dataclass
class User(db.Model):
__tablename__ = 'user'
__table_args__ = {'extend_existing': True}
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://username:pwd@127.0.0.1/test'
db.init_app(app)
with app.app_context():
db.create_all()
@app.route('/users', methods=['GET'])
def get_user():
users = User.query.all()
return jsonify(users)
@app.route('/user/<firstname>', methods=['GET'])
def user_byfirstname(firstname):
user = User.query.filter_by(firstname = firstname).first()
return jsonify(user.as_dict())
if __name__ == '__main__':
app.run(debug=True, host="0.0.0.0")
</code></pre>
<p>I then went on the VPS Terminal, and ran <code>$ docker build -t myapp:latest</code>.</p>
<p>The build is successful, and I can see my app listed in the Docker VPS app</p>
<p>I then ran</p>
<pre><code>$ docker run --rm -it -p 8080:5000 myapp:latest
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000
Press CTRL+C to quit
* Restarting with stat
/usr/local/lib/python3.11/site-packages/flask_sqlalchemy/model.py:144: SAWarning: This declarative base already contains a class with the same class name and module name as __main__.User, and will be replaced in the string-lookup table.
super().__init__(name, bases, d, **kwargs)
* Debugger is active!
* Debugger PIN: 581-248-767
</code></pre>
<p>Docker VPS shows myapp is in use.</p>
<p>So far, so good. But this is where I start running into issues.</p>
<p>I run the cmd to know my vps host IP :</p>
<pre><code>$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ad37e8ffe744 //container Id
</code></pre>
<p>it returned <code>172.17.0.2</code></p>
<p>I then used Postman on my local machine to test it:</p>
<pre><code> 172.17.0.2:8080/users
</code></pre>
<p>Postman throws the error : <code>Error: Request timed out</code></p>
<p>I really don't know what to do or where to go from here; every source I've tried has given me a slight variation on what I have already attempted, and I'm no closer to getting this to work. Please help: how can I access my Docker server from anywhere across the internet? Thanks.</p>
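<p>For reference: <code>172.17.0.2</code> is the container's internal bridge IP, reachable only from the VPS itself. From your local machine you need the VPS's <em>public</em> IP plus the published host port (<code>8080</code> here), and that port must be open in the VPS firewall. A small hedged helper to check whether a host/port is reachable at all (the helper name is mine, not part of the question):</p>

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("203.0.113.10", 8080) from your laptop, with 203.0.113.10
# standing in for the VPS's public IP (a documentation address, not a real host).
```

<p>If this returns <code>False</code> for the public IP but <code>True</code> when run on the VPS against <code>localhost:8080</code>, the problem is the firewall or security-group rules, not Docker.</p>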
|
<python><docker><flask><vps>
|
2023-12-30 16:26:24
| 1
| 1,665
|
kabrice
|
77,736,516
| 2,923,617
|
Django DRF use same URL pattern for different views
|
<p>In Django Rest Framework, I have two separate views for the same model. I would like to use the same url pattern to access both views, and differentiate between the two based on the method that is used. So I have something like:</p>
<pre><code>class MyObjectListAPIView(generics.ListAPIView):
pass
class MyObjectCreateAPIView(generics.CreateAPIView):
pass
</code></pre>
<p>Both views obviously would have different logic. The url pattern for both would be <code>'myObjects/'</code>, and depending on the method that is used (<code>GET</code> or <code>POST</code>), it would need to refer to the appropriate view (<code>MyObjectListAPIView</code> or <code>MyObjectCreateAPIView</code> respectively). Is there a way to achieve this (without reverting to plain Django and losing the functionality of DRF)?</p>
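<p>For context: DRF ships a combined generic for exactly this pairing, <code>generics.ListCreateAPIView</code>, which serves <code>GET</code> with list logic and <code>POST</code> with create logic on one URL. Conceptually that is just per-method dispatch; a plain-Python sketch of the idea (illustrative only, not actual DRF code):</p>

```python
# Minimal sketch of dispatching one URL to different handlers by HTTP method,
# mimicking what a DRF view's dispatch() does. Names here are hypothetical.
class MethodDispatchView:
    def dispatch(self, method):
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return 405, "Method Not Allowed"
        return handler()

class MyObjectView(MethodDispatchView):
    def get(self):            # list logic (the ListAPIView side)
        return 200, ["obj1", "obj2"]

    def post(self):           # create logic (the CreateAPIView side)
        return 201, "created"
```

<p>In DRF terms, overriding <code>get_queryset</code>/<code>get_serializer_class</code> per method on one combined view keeps the two behaviours separate while still sharing the URL.</p>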
|
<python><django><django-rest-framework><django-views>
|
2023-12-30 15:57:27
| 1
| 486
|
NiH
|
77,736,459
| 12,178,630
|
Sequential pattern with numpy array
|
<p>I would like to automatically fill a (3x3x3) numpy array of zeros with the pattern shown below. It starts at the coordinate <code>(0.0, 0.0, 0.0)</code>, then sequentially, in the order shown, increases <code>z</code> by 3.5, <code>y</code> by 6.5, and <code>x</code> by 6.5, so that the result is the set of nodes of a lattice in 3D space:</p>
<pre><code>0.0 6.5 1.0
0.0 6.5 4.5
0.0 6.5 8.0
0.0 13.5 1.0
0.0 13.5 4.5
0.0 13.5 8.0
0.0 21.0 1.0
0.0 21.0 4.5
0.0 21.0 8.0
6.5 6.5 1.0
6.5 6.5 4.5
6.5 6.5 8.0
6.5 13.5 1.0
6.5 13.5 4.5
6.5 13.5 8.0
13.5 21.0 1.0
13.5 21.0 4.5
13.5 21.0 8.0
21.0 6.5 1.0
21.0 6.5 4.5
21.0 6.5 8.0
21.0 13.5 1.0
21.0 13.5 4.5
21.0 13.5 8.0
21.0 21.0 1.0
21.0 21.0 4.5
21.0 21.0 8.0
</code></pre>
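<p>One hedged way to generate such lattice nodes is a Cartesian product of per-axis value arrays. The axis values below are read off the listing above (they are assumptions, since the listed spacings are not perfectly uniform, so explicit arrays are safer than a fixed step):</p>

```python
import numpy as np

# Axis values taken from the question's listing (assumptions).
xs = np.array([0.0, 6.5, 21.0])
ys = np.array([6.5, 13.5, 21.0])
zs = np.array([1.0, 4.5, 8.0])

# Cartesian product with x varying slowest and z fastest,
# matching the order shown in the listing.
grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)
```

<p><code>grid</code> is then a (27, 3) array of <code>(x, y, z)</code> node coordinates; it can be reshaped to <code>(3, 3, 3, 3)</code> if a per-axis layout is preferred.</p>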
|
<python><python-3.x><numpy><numpy-ndarray>
|
2023-12-30 15:40:31
| 2
| 314
|
Josh
|
77,736,397
| 194,735
|
LIBUSB_ERROR_NOT_FOUND in python when trying jdmtool
|
<p>I am trying to use JDMtool in macOS Mojave (python 3.11):</p>
<p><a href="https://github.com/dimaryaz/jdmtool" rel="nofollow noreferrer">https://github.com/dimaryaz/jdmtool</a></p>
<p>to write my GNS430 datacards from my Jeppesen subscription.
The app partly works: I can log in and download my subscription navigation data file, but when I try to access the card reader I get the following error:</p>
<pre><code>(jdmtool) ~ ➤ jdmtool detect
Found device: Bus 020 Device 004: ID 0e39:1250
Traceback (most recent call last):
File "/Users/icordoba/anaconda3/envs/jdmtool/bin/jdmtool", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/icordoba/anaconda3/envs/jdmtool/lib/python3.11/site-packages/jdmtool/main.py", line 442, in main
func(**kwargs)
File "/Users/icordoba/anaconda3/envs/jdmtool/lib/python3.11/site-packages/jdmtool/main.py", line 66, in wrapper
with handle.claimInterface(0):
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/icordoba/anaconda3/envs/jdmtool/lib/python3.11/site-packages/usb1/__init__.py", line 1146, in claimInterface
mayRaiseUSBError(
File "/Users/icordoba/anaconda3/envs/jdmtool/lib/python3.11/site-packages/usb1/__init__.py", line 127, in mayRaiseUSBError
__raiseUSBError(value)
File "/Users/icordoba/anaconda3/envs/jdmtool/lib/python3.11/site-packages/usb1/__init__.py", line 119, in raiseUSBError
raise __STATUS_TO_EXCEPTION_DICT.get(value, __USBError)(value)
usb1.USBErrorNotFound: LIBUSB_ERROR_NOT_FOUND [-5]
</code></pre>
<p>I am pretty new to Python so I don't know how to install the required USB access library for JDMTool to be able to read and write the data card reader. Any indications please?</p>
|
<python>
|
2023-12-30 15:24:33
| 1
| 1,959
|
icordoba
|
77,736,263
| 16,626,443
|
Pandas .loc() method using "not" and "in" operators
|
<p>I have a data frame and I want to remove some rows if their value is not equal to some values that I have stored in a list.</p>
<p>So I have a list variable stating the values of objects I want to keep:</p>
<pre class="lang-py prettyprint-override"><code>allowed_values = ["value1", "value2", "value3"]
</code></pre>
<p>And I am attempting to remove rows from my dataframe if a certain column does not contain 1 of the <code>allowed_values</code>. At first I was using a <code>for</code> loop and <code>if</code> statement like this:</p>
<pre class="lang-py prettyprint-override"><code>for index, row in df.iterrows():
if row["Type"] not in allowed_values:
# drop the row, was about to find out how to do this, but then I found out about the `.loc()` method and thought it would be better to use this.
</code></pre>
<p>So using the <code>.loc()</code> method, I can do something like this to only keep objects that have a <code>Type</code> value equal to <code>value1</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = df.loc[df["Type"] == "value1"]
</code></pre>
<p>But I want to keep all objects that have a <code>Type</code> in the <code>allowed_values</code> list. I tried to do this:</p>
<pre class="lang-py prettyprint-override"><code>df = df.loc[df["Type"] in allowed_values]
</code></pre>
<p>but this gives me the following error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I would expect this to still work as using the <code>in</code> or a combination of <code>not in</code> operators still results in a boolean, so I'm not sure why the <code>.loc()</code> method doesn't like these operators.</p>
<p>What exactly is wrong with using <code>in</code> or <code>not</code> operators in the <code>.loc()</code> method and how can I create a logical statment that will drop rows if their <code>Type</code> value is not in the <code>allowed_values</code> list?</p>
<p><strong>EDIT</strong>: I found <a href="https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o">this</a> question asking about the same error I got and the answer was that you need to use bitwise operators only (e.g. <code>==</code>, <code>!=</code>, <code>&</code>, <code>|</code>, etc) and <code>not</code> and <code>in</code> are not bitwise operators and require something called "truth-values". So I think the only way to get the functionality I want is to just have a lengthy bitwise logical operator, something like:</p>
<pre class="lang-py prettyprint-override"><code>df = df.loc[(df["Type"] == "value1") | (df["Type"] == "value2") | (df["Type"] == "value3")]
</code></pre>
<p>Is there no other way to check each value is in the <code>allowed_values</code> list? This would make my code a lot neater (I have more than 3 values in the list, so this is a lengthy line).</p>
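<p>For what it's worth, pandas' vectorized counterpart of <code>in</code> is <code>Series.isin</code>, which returns exactly the element-wise boolean mask that <code>.loc</code> expects, and <code>~</code> negates it. A short sketch (the data here is made up for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({"Type": ["value1", "value4", "value2"], "x": [1, 2, 3]})
allowed_values = ["value1", "value2", "value3"]

# Series.isin is the element-wise version of `in`: it returns a boolean
# Series, which .loc (or plain boolean indexing) accepts.
kept = df.loc[df["Type"].isin(allowed_values)]

# The negation (rows to drop) uses the bitwise ~, not the `not` keyword.
dropped = df.loc[~df["Type"].isin(allowed_values)]
```

<p>This stays one line regardless of how many values are in the list, unlike the chained <code>|</code> comparisons.</p>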
|
<python><pandas><dataframe>
|
2023-12-30 14:40:18
| 1
| 760
|
Mo0rBy
|
77,736,224
| 8,106,583
|
Inconsistent Behavior: Printing C++ Struct Properties in Python using ctypes gives unexpected results
|
<p>I'm writing a c++ library that I'm importing in python with <code>ctypes</code>. The library implements a function <code>Wtf returnwtf()</code> that returns an object of type <code>Wtf</code> and I want to print the objects properties in Python. The code however behaves strangely.</p>
<p>Here is a minimum non-working example:</p>
<pre class="lang-cpp prettyprint-override"><code>// wtf.cpp
#include <cstdint>
extern "C" {
struct Wtf {
uint32_t z = 513;
Wtf(Wtf const &state);
Wtf(){
z = 514;
}
};
Wtf returnwtf(){
Wtf w = Wtf();
return w;
}
Wtf::Wtf(Wtf const &state) = default;
}
</code></pre>
<pre class="lang-py prettyprint-override"><code># testwtf.py
import ctypes
class Wtf(ctypes.Structure):
_fields_ = [
("z", ctypes.c_uint32),
]
my_lib = ctypes.CDLL("./libwtf.so")
returnwtf = my_lib.returnwtf
returnwtf.restype = Wtf
print(returnwtf().z)
</code></pre>
<pre class="lang-bash prettyprint-override"><code>g++ -c -Ofast -fPIC -o wtf/wtf.o wtf/wtf.
g++ -Ofast -shared -o wtf/libwtf.so wtf/wtf.o
python3.10 testwtf.py
</code></pre>
<p>I would expect it to print <code>514</code> but instead it prints random numbers like <code>143882176</code> or <code>1011496896</code>.</p>
<p>Now when I move the implementation of the copy constructor inside the struct like so</p>
<pre class="lang-cpp prettyprint-override"><code>struct Wtf {
uint32_t z = 513;
Wtf(Wtf const &state) = default;
Wtf(){
z = 514;
}
};
</code></pre>
<p>or remove the copy constructor completely it works fine and prints <code>514</code> as expected.
It also changes it's behavior when the struct consists of multiple members. If for example the class consists of 5 uint32_t like so:</p>
<pre><code>struct Wtf {
uint32_t z = 513;
uint32_t a = 513;
uint32_t b = 513;
uint32_t c = 513;
uint32_t d = 513;
Wtf(Wtf const &state); // implementation outside of struct like in the first example
Wtf(){
z = 514;
a = 1;
b = 1;
c = 1;
d = 1;
}
};
</code></pre>
<p>it works perfectly fine where it wouldn't with only one member. But when I substitute <code>uint32_t</code> with <code>uint16_t</code> (and c_uint32 with c_uint16 in Python) I get a <code>SIGSEGV</code></p>
<p>I'm out of ideas what I am doing wrong because I'm not able to narrow down the problem. I think it has something to do with the way the compiler structures the return data but I can't get a hold of the issue.</p>
<p>How can I get a consistent result?</p>
<pre class="lang-none prettyprint-override"><code>gcc version 11.4.0
Ubuntu 22.04
</code></pre>
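<p>A likely explanation, stated as a hypothesis: declaring the copy constructor in-class and defining it out of line (even as <code>= default</code>) makes it <em>user-provided</em>, so <code>Wtf</code> is no longer trivially copyable. Under the Itanium C++ ABI such a class is returned through a hidden pointer (sret) instead of in a register, while ctypes' <code>restype = Wtf</code> assumes plain C struct-by-value return, so Python reads garbage. Writing <code>= default</code> on the first declaration (or removing the declaration) keeps the class trivially copyable and the C return convention intact. The memory layout is plain C either way, as this Python-side check illustrates:</p>

```python
import ctypes

class Wtf(ctypes.Structure):
    # Mirrors the C++ struct's data members. The layout is identical whether
    # or not the C++ class has a user-provided copy constructor -- only the
    # *return convention* of returnwtf() changes, which is what breaks.
    _fields_ = [("z", ctypes.c_uint32)]

w = Wtf.from_buffer_copy((514).to_bytes(4, "little"))
```

<p>If changing the C++ side is acceptable, another common workaround is to return a <code>Wtf*</code> (with lifetime managed on the C++ side) and declare <code>returnwtf.restype = ctypes.POINTER(Wtf)</code>, which sidesteps the by-value return convention entirely.</p>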
|
<python><c++><ctypes>
|
2023-12-30 14:27:35
| 0
| 2,437
|
wuerfelfreak
|
77,735,997
| 485,330
|
DynamoDB Auto-increment
|
<p>Since DynamoDB does not natively support <strong>sequential auto-increment</strong> for "user_id" like MySQL; what's the best way to increment the "user_id" to the next in sequence?</p>
<p>I'm querying the latest "user_id" and incrementing it; however, this feels pretty lame.</p>
<p>Is there a better way? :-)</p>
<pre><code>def get_latest_user_id():
try:
# Scan the table to find the highest user_id
response = table.scan(
ProjectionExpression="user_id",
FilterExpression=Attr("user_id").gt(0),
Limit=1,
ScanIndexForward=False # Sort in descending order
)
items = response.get('Items', [])
if not items:
return 0 # Return 0 if table is empty
return max(item['user_id'] for item in items)
except ClientError as e:
logger.error(f"Error getting latest user ID: {e}")
raise
except Exception as e:
logger.error(f"Unexpected error: {e}")
raise
</code></pre>
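<p>The usual replacement for this scan is a dedicated atomic-counter item updated with DynamoDB's <code>ADD</code> action, which increments and returns the new value atomically in a single request, with no race between readers. The item layout below (a <code>counters</code> table keyed by <code>counter_name</code>) is a hypothetical convention of mine; the helper only builds the arguments so the boto3 call stays a one-liner:</p>

```python
def build_counter_update(counter_name: str) -> dict:
    """Arguments for boto3's table.update_item(**kwargs): atomically add 1
    to a counter item and return its new value. The 'counters' item layout
    keyed by counter_name is a hypothetical convention, not AWS-mandated."""
    return {
        "Key": {"counter_name": counter_name},
        "UpdateExpression": "ADD current_value :inc",
        "ExpressionAttributeValues": {":inc": 1},
        "ReturnValues": "UPDATED_NEW",
    }

# new_id = table.update_item(**build_counter_update("user_id"))
#              ["Attributes"]["current_value"]
```

<p>Note the trade-off: the counter item becomes a hot key, and IDs may have gaps if a caller increments but then fails to write its user item.</p>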
|
<python><database><amazon-web-services><amazon-dynamodb><dynamodb-queries>
|
2023-12-30 13:01:44
| 1
| 704
|
Andre
|
77,735,923
| 17,580,381
|
itertools.pairwise backwards compatibility and a dubious Pylance warning
|
<p><a href="https://docs.python.org/3/library/itertools.html#itertools.pairwise" rel="nofollow noreferrer">The <code>pairwise</code> function</a> was added to <code>itertools</code> in Python version 3.10.</p>
<p>I would like to use that function or, where not available, define my own function based on the documented recipe.</p>
<p>I did this:</p>
<pre><code>import itertools
try:
_pairwise = itertools.pairwise # Pylance doesn't like this
except AttributeError:
def _pairwise(iterable):
a, b = itertools.tee(iterable)
next(b, None)
return zip(a, b)
</code></pre>
<p>In this way, I can call <code>_pairwise</code>, which will be the out-of-the-box implementation on Python versions from 3.10 onwards. Earlier versions will use the recipe implementation.</p>
<p>I use VS Code and Pylance.</p>
<p>Pylance underlines <code>itertools.pairwise</code> and reports as follows:</p>
<pre><code>Expression of type "type[pairwise[_T_co@pairwise]]" cannot be assigned to declared type "(iterable: Unknown) -> zip[tuple[Unknown, Unknown]]"
</code></pre>
<p>However, having tested this code on 3.9 and 3.12 I deduce that it's not a problem at all.</p>
<p>I know I can suppress the warning with <code>#type: ignore</code></p>
<p>Is there a technique that I could use (maybe <em>should</em> use) that would not induce this warning or do I just have to live with it?</p>
|
<python><python-itertools><python-typing>
|
2023-12-30 12:33:51
| 1
| 28,997
|
Ramrab
|
77,735,912
| 5,865,393
|
Sort list of dictionaries based on relative time
|
<p>I have a CSV file that I create from an API response. I get it unsorted and the API response does not have a query parameter to sort the returned payload. The CSV looks like below, where column A has some device identifiers and column B has the configuration last sync of these devices. The time is represented in relative time as seen in the screenshot.</p>
<p><a href="https://i.sstatic.net/dipHF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dipHF.png" alt="CSV file" /></a></p>
<p>How can I naturally sort the CSV file to have the rows ordered in the following sequence <code>seconds</code>, <code>minutes</code>, <code>hours</code>, <code>days</code>, <code>weeks</code>, <code>months</code>, <code>years</code>, and <code>Never</code>?</p>
<p>I have tried to sort the entries using <code>natsort</code> but I didn't get what I am trying to achieve:</p>
<pre class="lang-py prettyprint-override"><code>import csv
import os
from natsort import natsort, natsort_keygen
natsort_key = natsort_keygen(key=lambda k: k["last_sync"])
fieldnames = ["device", "last_sync"]
with open("Devices_161846.csv", "rt") as csvfile:
csvreader = csv.DictReader(f=csvfile, fieldnames=fieldnames)
next(csvreader)
identities = list(csvreader)
sorted_identities = natsort.natsorted(identities, key=natsort_key)
new_file, ext = os.path.splitext(csvfile.name)
with open(f"{new_file}_Sorted.{ext}", "wt") as f:
csvwriter = csv.DictWriter(f=f, fieldnames=fieldnames)
csvwriter.writeheader()
csvwriter.writerows(sorted_identities)
</code></pre>
<p>I expect to get the same result, but sorted from newest to oldest.</p>
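<p>A note on why <code>natsort</code> alone cannot do this: it compares embedded digits naturally but knows nothing about time units, so "2 hours" will never outrank "30 seconds". A custom key that ranks the unit first and the number second handles it; the exact strings below ("12 seconds ago", "Never", ...) are assumptions about what the CSV contains:</p>

```python
import re

_UNIT_RANK = {"second": 0, "minute": 1, "hour": 2, "day": 3,
              "week": 4, "month": 5, "year": 6}

def relative_key(text: str):
    """Sort key for relative times like '12 seconds ago'; 'Never' (or any
    unparsable string) sorts after every real time."""
    m = re.match(r"\s*(\d+)\s+(second|minute|hour|day|week|month|year)s?\b",
                 text)
    if m is None:
        return (len(_UNIT_RANK), 0)
    return (_UNIT_RANK[m.group(2)], int(m.group(1)))

# identities.sort(key=lambda row: relative_key(row["last_sync"]))
```

<p>Pass <code>reverse=True</code> to get newest-to-oldest with <code>Never</code> first, or keep the key as-is for oldest-last.</p>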
|
<python><csv><natural-sort><natsort>
|
2023-12-30 12:31:51
| 1
| 2,284
|
Tes3awy
|
77,735,868
| 11,426,624
|
stack multiple columns in a pandas dataframe
|
<p>I have a pandas data frame and would like to stack 4 columns to 2. So I have this</p>
<pre><code>df = pd.DataFrame({'date':['2023-12-01', '2023-12-05', '2023-12-07'],
'other_col':['a', 'b', 'c'],
'right_count':[4,7,9], 'right_sum':[2,3,5],
'left_count':[1,8,5], 'left_sum':[0,8,4]})
</code></pre>
<pre><code> date other_col right_count right_sum left_count left_sum
0 2023-12-01 a 4 2 1 0
1 2023-12-05 b 7 3 8 8
2 2023-12-07 c 9 5 5 4
</code></pre>
<p>and would like to get this</p>
<pre><code> date other_col side count sum
0 2023-12-01 a right 4 2
1 2023-12-05 b right 7 3
2 2023-12-07 c right 9 5
3 2023-12-01 a left 1 0
4 2023-12-05 b left 8 8
5 2023-12-07 c left 5 4
</code></pre>
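<p>One way to get there (among several; <code>pd.wide_to_long</code> is another) is to split the column names into a two-level <code>(side, stat)</code> column index and stack the side level into the rows:</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ['2023-12-01', '2023-12-05', '2023-12-07'],
                   'other_col': ['a', 'b', 'c'],
                   'right_count': [4, 7, 9], 'right_sum': [2, 3, 5],
                   'left_count': [1, 8, 5], 'left_sum': [0, 8, 4]})

out = df.set_index(['date', 'other_col'])
# Split 'right_count' -> ('right', 'count') into a two-level column index...
out.columns = out.columns.str.split('_', expand=True)
# ...then move the first level (the side) down into the rows.
out = (out.stack(level=0)
          .rename_axis(index=['date', 'other_col', 'side'])
          .reset_index())
```

<p>The row order may come out side-alphabetical rather than right-then-left; a final <code>sort_values</code> fixes that if the exact ordering matters.</p>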
|
<python><pandas><dataframe>
|
2023-12-30 12:11:52
| 1
| 734
|
corianne1234
|
77,735,843
| 893,254
|
How do I change the table schema with SQL Alchemy?
|
<p>I have some code which defines and creates SQL database tables using the SQL Alchemy declarative style.</p>
<p>The reference for the declarative style can be found <a href="https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#accessing-table-and-metadata" rel="nofollow noreferrer">here</a>.</p>
<p>Tables are created with this code:</p>
<pre><code>engine = create_engine(db_url)
Base.metadata.create_all(engine)
</code></pre>
<p><code>Base</code> inherits from <code>DeclarativeBase</code>, you can find an example of how to use this to define the tables to be created on the same <a href="https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#declarative-table-with-mapped-column" rel="nofollow noreferrer">documentation page</a>.</p>
<p>For convenience, here is that example:</p>
<pre><code>from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "user"
id = mapped_column(Integer, primary_key=True)
name = mapped_column(String(50), nullable=False)
fullname = mapped_column(String)
nickname = mapped_column(String(30))
</code></pre>
<p>If I run the code to create this table while connected to Postgres, the table is created in the default "public" schema.</p>
<p>How can I change this so that I can control which schema the tables to be created are associated with?</p>
|
<python><sqlalchemy>
|
2023-12-30 11:59:13
| 1
| 18,579
|
user2138149
|
77,735,761
| 18,100,562
|
How to draw a circular arc / hollow sector in pyglet
|
<p>I am desperately trying to draw a circular arc / hollow sector in pyglet, e.g. as in this post where <code>ArcType.ROUND</code> is shown:
<a href="https://github.com/pyglet/pyglet/issues/349" rel="nofollow noreferrer">https://github.com/pyglet/pyglet/issues/349</a></p>
<p>This gives me a filled sector:
<code>Sector(x=200, y=200, radius=200, segments=200, angle=22.5, start_angle=0)</code></p>
<p>This gives me only the circular arc line, without the two radius lines back to the center:
<code>Arc(px, py, radius, 50, tau, 0)</code></p>
<p>What I need is:
<a href="https://i.sstatic.net/TrG5P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TrG5P.png" alt="enter image description here" /></a></p>
<p>PS: It comes to my attention that <code>pip install pyglet=2.0.10</code> with python 3.10 give me a "broken" package on my windows 10 machine. The <code>Arc</code> could not be drawn at all. Installing pyglet from source solved this issue.</p>
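<p>If no built-in shape fits, a fallback is to compute the outline vertices yourself and draw them as consecutive line segments (e.g. with <code>pyglet.shapes.Line</code> between neighbours, or an arc's vertex list). The vertex math is plain trigonometry and independent of pyglet; the function name below is mine:</p>

```python
import math

def sector_outline(cx, cy, radius, start_angle, angle, segments=32):
    """Vertices of an unfilled sector outline: center -> along the arc ->
    back to center. Angles are in radians; drawing consecutive vertex pairs
    as line segments yields the hollow sector shown in the question."""
    pts = [(cx, cy)]
    for i in range(segments + 1):
        a = start_angle + angle * i / segments
        pts.append((cx + radius * math.cos(a), cy + radius * math.sin(a)))
    pts.append((cx, cy))
    return pts
```

<p>Increasing <code>segments</code> smooths the arc; 32 is usually enough for a quarter circle.</p>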
|
<python><pyglet>
|
2023-12-30 11:27:39
| 1
| 507
|
mister_kanister
|
77,735,728
| 5,703,539
|
Flask + Docker => No connection
|
<p>I have a basic REST API implemented in Flask. I want to try using Docker to containerize it. I'm completely new to Docker, but based on what I was able to figure out on various forums, this is what I have set up.</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.11
WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
COPY .env /app/.env
COPY . /app
RUN python3 -m pip install -r /app/requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py", "--host=0.0.0.0"]
</code></pre>
<p>requirements.txt</p>
<pre><code>aiohttp==3.8.6
aiohttp-retry==2.8.3
aiosignal==1.3.1
async-timeout==4.0.3
attrs==23.1.0
blinker==1.6.3
certifi==2023.7.22
charset-normalizer==3.3.1
click==8.1.7
distlib==0.3.7
filelock==3.12.4
Flask==2.3.0
Flask-Cors==4.0.0
flask-marshmallow==0.14.0
Flask-MySQL==1.5.2
Flask-MySQLdb==2.0.0
Flask-SQLAlchemy==3.1.1
frozenlist==1.4.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
marshmallow-sqlalchemy==0.29.0
multidict==6.0.4
mysqlclient==2.2.0
packaging==23.2
platformdirs==3.11.0
PyJWT==2.8.0
PyMySQL==1.1.0
python-dotenv==1.0.0
requests==2.31.0
six==1.16.0
SQLAlchemy==2.0.22
twilio==8.10.0
typing_extensions==4.8.0
urllib3==2.0.7
virtualenv==20.24.5
Werkzeug==3.0.0
yarl==1.9.2
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask, request, jsonify, json
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.event import listens_for
from flaskext.mysql import MySQL
from flask_cors import CORS
from dataclasses import dataclass
from sqlalchemy import text
from urllib.parse import quote
app = Flask(__name__)
CORS(app, origins=["http://localhost:3000", "http://localhost:3000"])
db = SQLAlchemy()
mysql =MySQL()
@dataclass
class User(db.Model):
__tablename__ = 'user'
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
@dataclass
class User(db.Model):
__tablename__ = 'user'
__table_args__ = {'extend_existing': True}
id = db.Column(db.Integer, primary_key=True)
firstname = db.Column(db.String(46), nullable=False)#1
lastname = db.Column(db.String(46), nullable=False)#1
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def as_dict(self):
excluded_fields = ['id']
return {field.name:getattr(self, field.name) for field in self.__table__.c if field.name not in excluded_fields}
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://username:pwd@127.0.0.1/test'
db.init_app(app)
with app.app_context():
db.create_all()
@app.route('/users', methods=['GET'])
def get_user():
users = User.query.all()
return jsonify(users)
@app.route('/user/<firstname>', methods=['GET'])
def user_byfirstname(firstname):
user = User.query.filter_by(firstname = firstname).first()
return jsonify(user.as_dict())
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>App tree:</p>
<p><a href="https://i.sstatic.net/SKbdt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SKbdt.png" alt="enter image description here" /></a></p>
<p>I then go to the Terminal, and run <code>$ docker build -t myapp:latest</code> .</p>
<p>The build is successful, and I can see my app listed in the Docker Desktop app</p>
<p>I then run</p>
<pre><code>$ docker run --rm -it -p 8080:5000 myapp:latest
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
* Restarting with stat
/usr/local/lib/python3.11/site-packages/flask_sqlalchemy/model.py:144: SAWarning: This declarative base already contains a class with the same class name and module name as __main__.User, and will be replaced in the string-lookup table.
super().__init__(name, bases, d, **kwargs)
* Debugger is active!
* Debugger PIN: 581-248-767
</code></pre>
<p>Docker Desktop shows myapp is in use.</p>
<p>So far, so good. But this is where I start running into issues.</p>
<p>From host machine (through Postman) I cannot access the app on port 8080 with:</p>
<pre><code>127.0.0.1:8080/users
</code></pre>
<p>Postman throws the error : <code>Error: read ECONNRESET</code></p>
<p>I really don't know what to do or where to go from here, every source I've tried has given me a slight variation on what I have already attempted, and I'm no closer to getting this to work. Please help, thanks.</p>
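<p>Two likely culprits, stated as hypotheses: <code>app.run(debug=True)</code> binds to <code>127.0.0.1</code> <em>inside</em> the container, so the published port forwards to a socket nothing external can reach (it needs <code>host="0.0.0.0"</code>); and the <code>--host=0.0.0.0</code> in <code>CMD</code> is just a string in <code>sys.argv</code> that <code>app.py</code> never parses, since Flask does not read the command line for you. If the host is to stay on the command line, <code>app.py</code> must parse it itself, e.g.:</p>

```python
import argparse

# Flask does not read sys.argv; app.py must parse --host itself and pass
# the value on to app.run(host=...).
parser = argparse.ArgumentParser()
parser.add_argument("--host", default="127.0.0.1")
parser.add_argument("--port", type=int, default=5000)
args = parser.parse_args(["--host=0.0.0.0"])  # simulating the container CMD

# app.run(debug=True, host=args.host, port=args.port)
```

<p>Separately, <code>127.0.0.1</code> in <code>SQLALCHEMY_DATABASE_URI</code> points at the container itself, not the machine running MySQL, which will cause a second failure once the port binding is fixed.</p>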
|
<python><docker><flask><flask-restful>
|
2023-12-30 11:14:04
| 2
| 1,665
|
kabrice
|
77,735,694
| 275,552
|
Show state borders but not county borders on US Choropleth map
|
<p>I have some US county-level data and I want a choropleth map that does show state borders but not county borders. Following <a href="https://community.plotly.com/t/adding-state-lines-to-county-level-geojson-based-choropleth/36269" rel="nofollow noreferrer">this solution</a>, I end up with this:</p>
<p><a href="https://i.sstatic.net/VIMW3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VIMW3.png" alt="enter image description here" /></a></p>
<p>As you can see it seems the state borders are layered below the data. My attempt to solve this was to make a second choropleth with the state borders and a transparent colorscale, and layer that on top of the map with the data. First here's the borders-only figure:</p>
<pre><code>state_borders = px.choropleth(df,
geojson=counties,
locations='fips',
color='increase_12yr',
color_continuous_scale=["rgba(1,1,1,0)" ,"rgba(1,0,0,0)"],
range_color=(0, 350),
center = {"lat": 37.0902, "lon": -95.7129},
scope='usa',
basemap_visible=False
)
state_borders.update_traces(marker_line_width=0, marker_opacity=1)
state_borders.update_geos(
    showsubunits=True, subunitcolor="black"
)
state_borders.show()
</code></pre>
<p><a href="https://i.sstatic.net/luPCu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/luPCu.png" alt="enter image description here" /></a></p>
<p>So far so good, but when I try to add this as a trace to the original map, I end up with this:</p>
<p><a href="https://i.sstatic.net/10CFn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/10CFn.png" alt="enter image description here" /></a></p>
<p>Here is the full code minus my data specific stuff:</p>
<pre><code>with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response:
counties = json.load(response)
fig = px.choropleth(df,
geojson=counties,
locations='fips',
color='increase_12yr',
color_continuous_scale="Reds",
range_color=(0, 350),
center = {"lat": 37.0902, "lon": -95.7129},
scope='usa',
basemap_visible=False
)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
state_borders = px.choropleth(df,
geojson=counties,
locations='fips',
color='increase_12yr',
color_continuous_scale=["rgba(1,1,1,0)" ,"rgba(1,0,0,0)"],
range_color=(0, 350),
center = {"lat": 37.0902, "lon": -95.7129},
scope='usa',
basemap_visible=False
)
state_borders.update_traces(marker_line_width=0, marker_opacity=1)
state_borders.update_geos(showsubunits=True, subunitcolor="black")
fig.add_trace(state_borders.data[0])
fig.show()
</code></pre>
<p>So anyone know how I can layer the state borders on top of my data?</p>
|
<python><pandas><plotly><choropleth>
|
2023-12-30 11:04:42
| 2
| 16,225
|
herpderp
|
77,735,535
| 3,291,993
|
How to fill a pandas dataframe from two 2d numpy arrays in an efficient way?
|
<pre><code>import pandas as pd
import numpy as np
s = [ "S" + str(i) for i in range(1,101)]
c = [ "C" + str(i) for i in range(1,51)]
arr1 = np.random.randn(len(c),len(s))
arr2 = np.random.randn(len(c),len(s))
</code></pre>
<p>How can I create and fill a pandas DataFrame <code>df</code> with 100 * 50 = 5000 rows, one for each possible (s, c) pair,
such that <strong>arr1_col</strong> holds <strong>arr1[s,c]</strong> and
<strong>arr2_col</strong> holds <strong>arr2[s,c]</strong>?</p>
<pre><code>df = pd.DataFrame({'S':s, 'C':c, 'arr1_col':arr1[s,c] , 'arr2_col':arr2[s,c]})
</code></pre>
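<p>To make the pairing concrete, here is a sketch of what I mean using repeat/tile (I have not benchmarked it; the ordering assumptions are in the comments):</p>

```python
import numpy as np
import pandas as pd

s = ["S" + str(i) for i in range(1, 101)]
c = ["C" + str(i) for i in range(1, 51)]
arr1 = np.random.randn(len(c), len(s))
arr2 = np.random.randn(len(c), len(s))

# arr1.ravel() walks row-major: C1 paired with S1..S100, then C2, and so on,
# so the label columns must repeat C per row and tile S across rows to match.
df = pd.DataFrame({
    "C": np.repeat(c, len(s)),
    "S": np.tile(s, len(c)),
    "arr1_col": arr1.ravel(),
    "arr2_col": arr2.ravel(),
})
```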
|
<python><pandas><dataframe><numpy>
|
2023-12-30 10:05:21
| 1
| 1,147
|
burcak
|
77,735,200
| 8,444,568
|
Why do I get NaN when calculating the corr of those two series?
|
<p>I have two pandas Series objects:</p>
<pre><code>print(s1)
print("="*80)
print(s2)
# output
0 -0.443538
1 -0.255012
2 -0.582948
3 -0.393485
4 0.430831
5 0.232216
6 -0.014269
7 -0.133158
8 0.127162
9 -1.855860
Name: s1, dtype: float64
================================================================================
29160 -0.650857
29161 -0.135428
29162 0.039544
29163 0.241506
29164 -0.793352
29165 -0.054500
29166 0.901152
29167 -0.660474
29168 0.098551
29169 0.822022
Name: s2, dtype: float64
</code></pre>
<p>And I want to calculate the correlation of these two series:</p>
<pre><code>s1.corr(s2)
#output
nan
</code></pre>
<p>I don't know why I get 'nan' here; using numpy gives the correct result:</p>
<pre><code>np.corrcoef(s1,s2)[0][1]
#output
-0.4918385039519204
</code></pre>
<p>Did I do something wrong in the above code?</p>
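<p>A reproducible sketch of the difference, in case the index is what matters here (the values below are random stand-ins for mine): <code>Series.corr</code> aligns the two series on their index before computing anything, while <code>np.corrcoef</code> pairs values positionally.</p>

```python
import numpy as np
import pandas as pd

s1 = pd.Series(np.random.randn(10), name="s1")                       # index 0..9
s2 = pd.Series(np.random.randn(10), index=range(29160, 29170), name="s2")

# The indices are disjoint, so alignment leaves zero overlapping pairs -> NaN
assert np.isnan(s1.corr(s2))

# Dropping s2's index restores positional pairing, matching np.corrcoef
r = s1.corr(s2.reset_index(drop=True))
assert np.isclose(r, np.corrcoef(s1, s2)[0, 1])
```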
|
<python><pandas><dataframe>
|
2023-12-30 07:38:54
| 1
| 893
|
konchy
|
77,735,153
| 10,200,497
|
Creating a new column by giving multiple conditions priority
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [10, 20, 55],
'b': [3, 40, 33],
'c': [4, 50, 10]
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>d</code>.</p>
<pre><code> a b c d
0 10 3 4 1.0
1 20 40 50 0.0
2 55 33 10 1.0
</code></pre>
<p>This is actually a weird and basic issue that I have. These are the conditions for creating column <code>d</code>:</p>
<pre><code>df.loc[df.a > df.b, 'd'] = 1
df.loc[df.a > df.c, 'd'] = 2
df['d'] = df.d.fillna(0)
</code></pre>
<p>If I do it in order, <code>d</code> gets overwritten. I want it this way: if, for example, <code>df.a > df.b</code>, then <code>d</code> is 1 and we are done; don't check the other conditions.
This was my attempt to create <code>d</code>, but it obviously fails.</p>
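<p>To make the priority concrete: something like <code>np.select</code>, which evaluates the conditions in order and takes the first one that holds for each row (a sketch on the sample data; I have not benchmarked it):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 55], "b": [3, 40, 33], "c": [4, 50, 10]})

# Conditions are checked in order; the first True per row wins, so a row
# where a > b never falls through to the a > c rule.
df["d"] = np.select([df.a > df.b, df.a > df.c], [1, 2], default=0)
```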
|
<python><pandas>
|
2023-12-30 07:16:37
| 0
| 2,679
|
AmirX
|
77,734,966
| 9,915,864
|
What settings am I missing in VS Code so that Flask print statement output shows up in terminal?
|
<p>I would like to get <code>print</code> statements to show up in my VS Code terminal. I'm running Flask 3.0 and Python 3.12 from VS Code 1.85.1.</p>
<p>Part A. I've read through a bunch of different posts and have assembled the following settings, mainly from <a href="https://stackoverflow.com/questions/49171144/how-do-i-debug-flask-app-in-vs-code">How do I debug Flask App in VS Code</a>. My <code>launch.json</code>, <code>settings.json</code>, <code>app.py</code>, and <code>views.py</code> files are below. One print statement is in the <code>index()</code> function of the <code>views.py</code> file, and a few more scattered throughout the file. My app and HTML files run fine but I'm about to start making changes and want to know how to get <code>print</code> statements to show up in the VS Code terminal. I also checked my Debug Console just to be sure- nothing there.</p>
<p>Part B to this question is: if I wanted to use <code>app.logger</code> in <code>views.py</code>, how would I do that? <br/><a href="https://stackoverflow.com/questions/44405708/flask-doesnt-print-to-console" rel="nofollow noreferrer">Flask doesn't print to console</a></p>
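<p>For Part B, my understanding is that <code>app.logger</code> is an ordinary <code>logging.Logger</code>, so a stdlib-only sketch of the setup (the logger name here is just a stand-in for the app's):</p>

```python
import logging

# Flask's app.logger is a standard logging.Logger (named after the app);
# setting a level and a stream handler on it is what routes messages to
# the terminal that launched the dev server.
log = logging.getLogger("app")           # stands in for app.logger
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler())  # write records to stderr/terminal
log.debug("calling index...")
```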
<p>Thanks for pointers!</p>
<p>Windows environment variable: <br />
<code>FLASK_APP .\app.py</code></p>
<p>launch.json</p>
<pre><code>{
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Flask",
"type": "python",
"request": "launch",
"module": "app",
"env": {
"FLASK_APP": "${workspaceFolder}\\app.py",
"FLASK_ENV": "development",
"FLASK_DEBUG": "1"
},
"args": [
// "run",
// "--no-debugger",
// "--no-reload"
],
"jinja": true
}
]
}
</code></pre>
<p>settings.json</p>
<pre><code>{
"editor.defaultFoldingRangeProvider": "samuelcolvin.jinjahtml",
"editor.defaultFormatter": "formulahendry.code-runner"
}
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask
import views
app = Flask(__name__)
app.secret_key = 'BAD_SECRET_KEY'
app.add_url_rule('/', view_func=views.index)
app.add_url_rule('/index', view_func=views.index)
app.add_url_rule('/page1', view_func=views.page_one_func)
app.add_url_rule('/page1_processed', view_func=views.page_one_add_input)
if __name__ == "__main__":
    app.config['SERVER_NAME'] = "127.0.0.1:5000"
    app.run(debug=True, use_reloader=True)
</code></pre>
<p>I tried setting <code>debug</code> and <code>use_reloader</code> to False with no difference.</p>
<p>views.py</p>
<pre><code>from flask import render_template, request

def index():
    print('calling index...')
    return render_template('index.html.jinja')

def page_one_func():
    vars = {'exp_dict': exp_dict}
    return render_template('page1.html.jinja', vars=vars)

def page_one_add_input():
    vars = {'exp_dict': exp_dict}
    print(f"{type(request.args)}")
    # result = request.args.get('userinput')
    # vars['user_input']= request.form.get('userinput')
    return render_template('page1.html.jinja', vars=vars)
</code></pre>
|
<python><flask>
|
2023-12-30 05:35:15
| 2
| 341
|
Meghan M.
|
77,734,721
| 106,906
|
How to disable a subset of Pylint checks for modules matching a string or regex?
|
<p><strong>I know how to disable a check entirely:</strong></p>
<pre class="lang-ini prettyprint-override"><code>disable = ["missing-module-doctring", ...]
</code></pre>
<p><strong>But I'm trying to do the equivalent of this:</strong></p>
<pre class="lang-ini prettyprint-override"><code>disable = [ {name="missing-module-docstring", glob="**/*/models.py"}, ...]
</code></pre>
<p><strong>One example:</strong> We have many SQL Alchemy <code>models.py</code> files in our repo. It's obvious from our directory structure what the purpose of each <code>models.py</code> is. I'd really like to disable <code>missing-module-docstring</code> for any module <code>x.y.z.models</code>. A doc string will be superfluous noise. A per-file disable comment is about the same.</p>
<p>I have a few more scenarios like this for other checks and other module / python file naming conventions.</p>
<p>I looked through the config docs and didn't find anything. Is there a plugin that supports this?</p>
<p>If I have to go to the "two-pass solution", I guess I could run pylint once excluding <code>models.py</code> files. Then run it again filtering out everything <em>but</em> <code>models.py</code>?</p>
<p>For context: I run pylint at the command line, in CI, and in VS Code. I configure it in the <code>project.toml</code> but I'm open to changing this.</p>
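<p>For the record, the closest thing I have found is a third-party plugin, <code>pylint-per-file-ignores</code>. This is a sketch of its <code>pyproject.toml</code> configuration from memory, so the exact key names should be checked against the plugin's README:</p>

```toml
[tool.pylint.MASTER]
load-plugins = ["pylint_per_file_ignores"]

[tool.pylint-per-file-ignores]
# path fragment (matched against each file's path) -> comma-separated checks
"/models.py" = "missing-module-docstring"
```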
|
<python><pylint>
|
2023-12-30 02:57:45
| 1
| 16,978
|
Dogweather
|
77,734,670
| 1,982,032
|
What does "above exception" point to in the exception info that is thrown?
|
<p>The simplest built-in exception as below:</p>
<pre><code>>>> 1/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
</code></pre>
<p>The exception info is printed, and then the whole process is done.</p>
<p>Define function <code>f1</code> and not to define <code>myException</code>:</p>
<pre><code>def f1():
    try:
        print(1/0)
    except:
        raise myException('my exception')
</code></pre>
<p>Call <code>f1()</code>:</p>
<pre><code>>>> f1()
Traceback (most recent call last):
File "<stdin>", line 3, in f1
ZeroDivisionError: division by zero
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in f1
NameError: name 'myException' is not defined
</code></pre>
<p>When the info</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 3, in f1
ZeroDivisionError: division by zero
</code></pre>
<p>thrown out,exception of <code>print(1/0)</code> is handled as we have seen that in the beginning.<br />
Why the statement in the middle line is <code>During handling of the above exception, another exception occurred:</code> ,instead of <code>After handling of the above exception, another exception occurred:</code>?<br />
What is <code>above exception</code> in the statement <code>During handling of the above exception</code>?It points to <code>try ... except</code> structure in <code>f1</code> ,instead of exception resulted by <code>1/0</code>?<br />
If <code>above exception</code> points to exception resulted by <code>1/0</code>,then <code>After handling of the above exception, another exception occurred</code> is better to describe all the exceptions here!</p>
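<p>A small sketch of the mechanism behind that wording: the <code>except</code> block has not finished when the second exception is raised, so the first exception is still being handled (hence "During"), and Python stashes it on the new exception as <code>__context__</code>:</p>

```python
# Implicit exception chaining: raising inside an except block records the
# original exception as __context__ on the new one.
try:
    try:
        1 / 0
    except ZeroDivisionError:
        raise NameError("name 'myException' is not defined")
except NameError as err:
    ctx = err.__context__

# The "above exception" in the traceback is exactly this original error
assert isinstance(ctx, ZeroDivisionError)
```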
|
<python><exception><try-except>
|
2023-12-30 02:19:22
| 1
| 355
|
showkey
|
77,734,349
| 10,200,497
|
Creating a new column by a condition and selecting the maximum value by shift
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [10, 20, 30, 400, 50, 60],
'b': [897, 9, 33, 4, 55, 65]
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>c</code>.</p>
<pre><code> a b c
0 10 897 NaN
1 20 9 897.0
2 30 33 NaN
3 400 4 400.0
4 50 55 NaN
5 60 65 NaN
</code></pre>
<p>These are the steps needed:</p>
<p>a) Find rows that <code>df.a > df.b</code></p>
<p>b) From the above rows compare the value from <code>a</code> to its previous value from <code>b</code>. If it was more than previous <code>b</code> value, put <code>a</code> in column <code>c</code> otherwise put the previous <code>b</code>.</p>
<p>For example:</p>
<p>a) Rows <code>1</code> and <code>3</code> met <code>df.a > df.b</code></p>
<p>b) From row <code>1</code>, 20 is less than 897 so 897 is chosen. However in row <code>3</code>, 400 is greater than 33 so it is selected.</p>
<p>This image clarifies the point:</p>
<p><a href="https://i.sstatic.net/7t9tb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7t9tb.png" alt="enter image description here" /></a></p>
<p>This is what I have tried but it does not work:</p>
<pre><code>df.loc[df.a > df.b, 'c'] = max(df.a, df.b.shift(1))
</code></pre>
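<p>To spell out the two steps as vectorized operations, this sketch reproduces the desired column on the sample data (element-wise max of <code>a</code> and the shifted <code>b</code>, masked to the rows where <code>a > b</code>):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30, 400, 50, 60],
                   "b": [897, 9, 33, 4, 55, 65]})

# Step b): element-wise max of `a` and the previous row's `b` ...
candidate = np.maximum(df.a, df.b.shift(1))
# Step a): ... kept only on rows where a > b, NaN elsewhere
df["c"] = candidate.where(df.a > df.b)
```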
|
<python><pandas><dataframe>
|
2023-12-29 23:24:57
| 4
| 2,679
|
AmirX
|
77,734,086
| 2,142,728
|
Unclear how to import from a module defined in another target (python bazel)
|
<p>I have the following file structure</p>
<pre><code>.
├── bedrock
│ ├── BUILD
│ └── bedrock
│ ├── __init__.py
│ └── main.py
└── boilerplate
├── BUILD
└── main.py
</code></pre>
<pre class="lang-py prettyprint-override"><code># boilerplate/BUILD
py_binary(
    name = "main",
    main = "main.py",
    srcs = ["main.py", "//bedrock:main"],
)
</code></pre>
<p>And</p>
<pre class="lang-py prettyprint-override"><code># bedrock/BUILD
package(default_visibility = ["//visibility:public"])
py_library(
    name = "main",
    srcs = glob(["**/*.py"]),
)
</code></pre>
<p>And</p>
<pre class="lang-py prettyprint-override"><code># bedrock/bedrock/main.py
def some_method():
    return "some_value"
</code></pre>
<p>How do I access <code>some_method</code> in <code>boilerplate/main.py</code>?</p>
<p>Bazel's Python documentation (and, frankly, any of its documentation) is very poor!</p>
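<p>For the record, this is the wiring I would expect to need (target names and the import path are assumptions from the layout above): the binary should reference the library through <code>deps</code>, not <code>srcs</code>.</p>

```starlark
# boilerplate/BUILD (sketch): depend on the library instead of listing its srcs
py_binary(
    name = "main",
    srcs = ["main.py"],
    deps = ["//bedrock:main"],
)
```

<p>With that in place, <code>boilerplate/main.py</code> would import the module by its workspace-relative path, something like <code>from bedrock.bedrock.main import some_method</code> (this changes if the library sets an <code>imports = [...]</code> attribute).</p>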
|
<python><bazel>
|
2023-12-29 21:33:49
| 1
| 3,774
|
caeus
|
77,734,040
| 519,422
|
Python re: why isn't my code for replacing values in a file throwing errors or changing the values?
|
<p>I need to replace certain values in an external file ("<code>file.txt</code>") with new values from a Pandas dataframe. The external file contents look like:</p>
<pre><code>(Many lines of comments, then)
identifier1 label2 = i \ label3 label4
label5
A1 = -5563.88 B2 = -4998 C3 = -203.8888 D4 = 5926.8
E5 = 24.99876 F6 = 100.6666 G7 = 30.008 H8 = 10.9999
J9 = 1000000 K10 = 1.0002 L11 = 0.1
M12
identifier2 label2 = i \ label3 label4
label5
A1 = -788 B2 = -6554 C3 = -100.23 D4 = 7526.8
E5 = 20.99876 F6 = 10.6666 G7 = 20.098 H8 = 10.9999
J9 = 1000000 K10 = 1.0002 L11 = 0.000
M12
...
</code></pre>
<p>From previous posts here, <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">this</a> resource, and Python's "re", I'm trying:</p>
<pre><code>import re

findThisIdentifierInFile = "identifier1"  # I want the data immediately below this identifier in the external file
with open("file.txt", "r") as file:
    string1 = file.read()

i = -500  # New A1 value (i.e., I want to replace the A1 value in the file with -500).
j = 100   # New C3 value.
string1 = re.sub(
    rf"^({findThisIdentifierInFile}\s.*?)A1 = \S+ C3 = \S+",
    rf"\g<1>A1 = {i} C3 = {j}",
    string1,
    flags=re.M | re.S,
)
</code></pre>
<p>When I run this, there are no errors, but nothing happens. For example, when I print "<code>string1</code>", the data are identical to those in the original "<code>file.txt</code>". I can't provide more of the code but hope that someone who is experienced with RegEx and re (Python) will be able to spot where I have gone wrong. I apologize in advance because I'm certain to have done something silly.</p>
<p>Sometimes I will also want to replace the <code>B2</code> value and the <code>E5</code> - <code>H8</code> values and values on the other lines. I'm wondering whether there's a more foolproof/newbie-friendly method I could use to do any possible replacement of values immediately below a particular identifying label.</p>
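<p>Regarding a more foolproof approach: one pattern per key, anchored to the identifier, sidesteps having to describe everything between <code>A1</code> and <code>C3</code> (I now suspect my original pattern can never match, since it requires <code>C3</code> to follow the <code>A1</code> value after a single space, while <code>B2 = ...</code> sits between them). A sketch with a hypothetical helper; the names and file layout are stand-ins based on the excerpt above:</p>

```python
import re

# Toy stand-in for file.txt; the real layout/whitespace may differ
text = """identifier1 label2 = i
A1 = -5563.88 B2 = -4998 C3 = -203.8888 D4 = 5926.8
identifier2 label2 = i
A1 = -788 B2 = -6554 C3 = -100.23 D4 = 7526.8
"""

def set_value(text, identifier, key, new_value):
    # Hypothetical helper: rewrite the first `key = value` after `identifier`
    pattern = rf"({re.escape(identifier)}\b.*?{re.escape(key)}\s*=\s*)\S+"
    return re.sub(pattern, rf"\g<1>{new_value}", text, count=1, flags=re.S)

text = set_value(text, "identifier1", "A1", -500)
text = set_value(text, "identifier1", "C3", 100)
```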
|
<python><python-3.x><pandas><replace><python-re>
|
2023-12-29 21:15:19
| 1
| 897
|
Ant
|
77,733,721
| 14,029,775
|
Why does Pandas str.fullmatch blank values default to else?
|
<p>Let's say we have a dataframe col df['Old'] that looks like this</p>
<pre><code>0 NaN
1 NEWARK, NJ
</code></pre>
<p>And we apply this to fill out the values of a new col</p>
<pre><code>df['New']=np.where(df['Old'].str.fullmatch('.*,...')==False,'Value','Else Value')
</code></pre>
<p>The result is</p>
<pre><code>0 Else Value
1 Else Value
</code></pre>
<p>This makes sense for the second row: the regex matches, so the fullmatch evaluates to True and we return the else value. But for the NaN, why does it also return the else value? The NaN does not match the regex, so shouldn't the fullmatch evaluate to False and return 'Value'?</p>
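<p>A reproducible sketch of what I suspect is happening: the <code>.str</code> accessor propagates NaN instead of coercing it to False, so the comparison with <code>False</code> fails and <code>np.where</code> falls through to the else value:</p>

```python
import numpy as np
import pandas as pd

old = pd.Series([np.nan, "NEWARK, NJ"], name="Old")

m = old.str.fullmatch(".*,...")
# NaN in, NaN out: the missing value is never turned into False here
assert pd.isna(m.iloc[0]) and m.iloc[1] == True

# NaN == False evaluates to False, so both rows take the else branch
new = np.where(m == False, "Value", "Else Value")
```

<p>Filling the match result first, e.g. <code>m.fillna(False)</code>, is one way to make the NaN row behave like a non-match instead.</p>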
|
<python><pandas><regex><string>
|
2023-12-29 19:38:59
| 1
| 365
|
Jonathan Chen
|