QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (stringlengths 15 to 150) | QuestionBody (stringlengths 40 to 40.3k) | Tags (stringlengths 8 to 101) | CreationDate (stringdate 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (stringlengths 3 to 30)
|---|---|---|---|---|---|---|---|---|
78,826,639
| 1,914,781
|
remove zero data at the end of a string
|
<p>I would like to remove the <code>0x00</code> bytes at the end of a string with Python 3. I tried <code>rstrip()</code>, but it does not work.</p>
<pre><code>x = bytearray([0x30,0x31,0x32,0x33,0x00,0x00])
y = x.decode()
z = y.rstrip()
print(z)
</code></pre>
<p>How can I get the output of <code>z</code> as <code>123</code>?</p>
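A hedged fix sketch (not part of the original post): `rstrip()` with no argument only strips whitespace; passing the NUL character explicitly removes the trailing `0x00` bytes. Note that for this bytearray the decoded string is `0123`, not `123`.

```python
# rstrip() with no argument removes only whitespace; pass the NUL
# character explicitly to strip trailing 0x00 bytes after decoding.
x = bytearray([0x30, 0x31, 0x32, 0x33, 0x00, 0x00])
z = x.decode().rstrip("\x00")
print(z)  # -> 0123
```

An alternative that avoids decoding the padding at all is `bytes(x).rstrip(b"\x00").decode()`.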
|
<python>
|
2024-08-02 17:39:33
| 3
| 9,011
|
lucky1928
|
78,826,617
| 4,515,940
|
docker - install python-pip on php-fpm-alpine is extremely slow
|
<p>I'm trying to install <code>python-pip</code> on my app</p>
<pre><code>FROM php:7.2-fpm
....
RUN apt-get install -y python-pip
</code></pre>
<p>This is taking more than 2 hours and it's still not complete. Checking <code>htop</code> shows no problem with RAM or disk consumption.</p>
<p>I found an article saying that Linux Alpine has a problem related to Python: <a href="https://pythonspeed.com/articles/alpine-docker-python/" rel="nofollow noreferrer">https://pythonspeed.com/articles/alpine-docker-python/</a></p>
<p>So, is there any way to make this install faster?</p>
<p>Or is there any way to use <code>php-fpm</code> image without Alpine?</p>
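A hedged sketch (not from the original post): the Dockerfile shown uses <code>php:7.2-fpm</code>, which is Debian-based, so <code>apt-get</code> is the right tool but needs <code>apt-get update</code> first or installs can hang on stale package lists; the Alpine variant named in the title uses <code>apk</code> instead. Package names below are assumptions to check against the distro release in use.

```dockerfile
# Debian-based image: refresh the package index before installing.
FROM php:7.2-fpm
RUN apt-get update \
    && apt-get install -y --no-install-recommends python-pip \
    && rm -rf /var/lib/apt/lists/*

# For the Alpine variant, apk is the package manager instead:
# FROM php:7.2-fpm-alpine
# RUN apk add --no-cache py-pip
```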
|
<python><linux><docker><alpine-linux>
|
2024-08-02 17:32:15
| 1
| 1,835
|
anderlaini
|
78,826,482
| 718,529
|
Fail to initialize model using GemmaVertexAIModelGarden
|
<p>I am trying to initialize an <code>llm</code>, following an official tutorial in Colab: <a href="https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/gemma/docs/integrations/langchain.ipynb#scrollTo=bhIHsFGYjtFt" rel="nofollow noreferrer">Get started with Gemma and LangChain</a>. The tutorial instructs that I deploy a model from <em>Model Garden</em> and, when the endpoint is ready, copy its project ID, endpoint ID, and location, and enter them in a Colab cell like this:</p>
<p><a href="https://i.sstatic.net/Fh1BUBVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fh1BUBVo.png" alt="enter image description here" /></a></p>
<p>I follow the instruction and collect three strings from the following places (as highlighted in yellow):</p>
<p><a href="https://i.sstatic.net/iVbgVokj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVbgVokj.png" alt="enter image description here" /></a></p>
<p>So, at this point the Colab cell that accepts parameters looks like this:</p>
<p><a href="https://i.sstatic.net/GGslV1QE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GGslV1QE.png" alt="enter image description here" /></a></p>
<p>The problem arises when I run the model. Here is code from the cell in which the problem occurs and its traceback:</p>
<pre><code>from langchain_google_vertexai import GemmaVertexAIModelGarden, GemmaChatVertexAIModelGarden
llm = GemmaVertexAIModelGarden(
endpoint_id=endpoint_id,
project=project,
location=location,
)
output = llm.invoke("What is the meaning of life?")
print(output)
</code></pre>
<pre><code>_InactiveRpcError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
75 try:
---> 76 return callable_(*args, **kwargs)
77 except grpc.RpcError as exc:
11 frames
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid Endpoint name: projects/use-gemma/locations/us-east1/endpoints/google_gemma-1_1-2b-it-mg-one-click-deploy."
debug_error_string = "UNKNOWN:Error received from peer ipv4:173.194.215.95:443 {created_time:"2024-08-02T15:38:59.269976555+00:00", grpc_status:3, grpc_message:"Invalid Endpoint name: projects/use-gemma/locations/us-east1/endpoints/google_gemma-1_1-2b-it-mg-one-click-deploy."}"
>
The above exception was the direct cause of the following exception:
InvalidArgument Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs)
76 return callable_(*args, **kwargs)
77 except grpc.RpcError as exc:
---> 78 raise exceptions.from_grpc_error(exc) from exc
79
80 return error_remapped_callable
InvalidArgument: 400 Invalid Endpoint name: projects/use-gemma/locations/us-east1/endpoints/google_gemma-1_1-2b-it-mg-one-click-deploy.
</code></pre>
<p>Can anyone spot the cause of the problem, or what I have done wrong? Thank you in advance for any help.</p>
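A hedged observation (not from the original post): the rejected resource name in the traceback ends in <code>google_gemma-1_1-2b-it-mg-one-click-deploy</code>, which looks like the deployment's display name, whereas Vertex AI endpoint resource names end in a numeric endpoint ID. A small illustrative check, with hypothetical values:

```python
# Hypothetical sanity check: Vertex AI endpoint resource names must end in
# the numeric endpoint ID, not the model's display name.
def endpoint_resource_name(project: str, location: str, endpoint_id: str) -> str:
    if not endpoint_id.isdigit():
        raise ValueError(
            f"endpoint_id {endpoint_id!r} is not numeric; copy the ID from the "
            "Endpoints page, not the deployment's display name"
        )
    return f"projects/{project}/locations/{location}/endpoints/{endpoint_id}"

print(endpoint_resource_name("use-gemma", "us-east1", "1234567890"))
```

If this diagnosis is right, copying the numeric ID column from the Online prediction > Endpoints page into <code>endpoint_id</code> would be the fix.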
|
<python><google-colaboratory><langchain><google-cloud-vertex-ai><gemma>
|
2024-08-02 16:39:55
| 1
| 687
|
chanp
|
78,826,381
| 8,436,290
|
Why Langchain HuggingFaceEmbeddings model dimension is not the same as stated on HuggingFace
|
<p>I was using the langchain <code>HuggingFaceEmbeddings</code> model dunzhang/stella_en_1.5B_v5.
When I look at <a href="https://huggingface.co/spaces/mteb/leaderboard" rel="nofollow noreferrer">https://huggingface.co/spaces/mteb/leaderboard</a>, I can see that the model is listed as 8192.
But when I do</p>
<pre><code>len(embed_model.embed_query("hey you"))
</code></pre>
<p>It gives me 1024.
Why this difference, please?</p>
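A hedged aside (not from the original post): one plausible explanation is that the leaderboard figure refers to the model's maximum embedding (or sequence) dimension, while the loaded checkpoint serves a smaller pooled output, 1024 here, by default. Matryoshka-style models are trained so that a prefix of the full vector is itself a usable embedding after renormalization; the sketch below only illustrates that truncation mechanic with random data, not the actual model.

```python
import numpy as np

# Illustrative only: Matryoshka-style truncation keeps a prefix of a
# longer embedding and renormalizes it. The dimension a checkpoint
# actually serves is a config choice, not the leaderboard number.
full = np.random.default_rng(0).normal(size=8192)
full /= np.linalg.norm(full)

truncated = full[:1024]
truncated /= np.linalg.norm(truncated)  # renormalize the prefix
print(truncated.shape)  # (1024,)
```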
|
<python><langchain><large-language-model><huggingface>
|
2024-08-02 16:12:09
| 1
| 467
|
Nicolas REY
|
78,826,349
| 769,933
|
Polars cross join without reversed or equal entries
|
<p>I'm joining two identical columns and am only interested in combinations (not permutations). Currently, I can perform a full cross-join and, subsequently, filter out the unwanted rows.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.DataFrame({"a": range(3)})
df2 = pl.DataFrame({"b": range(3)})
expected_output = df1.join(df2, how="cross").filter(pl.col("a") < pl.col("b"))
</code></pre>
<pre><code>shape: (3, 2)
┌─────┬─────┐
│ a   │ b   │
│ --- │ --- │
│ i32 │ i32 │
╞═════╪═════╡
│ 0   │ 1   │
│ 0   │ 2   │
│ 1   │ 2   │
└─────┴─────┘
</code></pre>
<p>Is there a way to do this without generating all combinations in the first place?</p>
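One way to avoid materializing the full cross product (a sketch, not from the original post) is to enumerate unordered pairs directly with the standard library and load only those rows into the DataFrame:

```python
from itertools import combinations

# combinations() yields each unordered pair exactly once, so neither
# reversed nor equal entries are ever generated in the first place.
values = range(3)
pairs = list(combinations(values, 2))
print(pairs)  # [(0, 1), (0, 2), (1, 2)]
```

The pairs can then be loaded with something like `pl.DataFrame(pairs, schema=["a", "b"], orient="row")`. Newer Polars releases also expose a dedicated inequality join, `join_where`, which is worth checking in the installed version's docs as a native alternative.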
|
<python><dataframe><python-polars><cross-join>
|
2024-08-02 16:04:35
| 2
| 2,396
|
gggg
|
78,826,249
| 4,810,328
|
modify global variable in a decorated function
|
<p>I have a decorator that inits a global dict, which I want to modify within the decorated function. I tried an implementation like the one below:</p>
<pre><code>def decorator():
def inner_function(func):
global_dict = {}
print("Inner function")
ret = func()
return ret
return inner_function
@decorator()
def test_function():
global global_dict
global_dict["something"] = "something"
print("Test function")
return True
</code></pre>
<p>This gives me an error: <code>name 'global_dict' is not defined</code> and I am not really sure why. How would I go about doing this?</p>
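A hedged fix sketch (not from the original post): the assignment <code>global_dict = {}</code> inside <code>inner_function</code> creates a local variable, so the <code>global global_dict</code> lookup in <code>test_function</code> finds nothing at module level. Declaring it global in both places makes them refer to the same name. This keeps the asker's structure, where <code>func()</code> runs at decoration time and the decorated name ends up bound to its return value; the conventional pattern would instead return a wrapper function.

```python
# Assignment inside inner_function creates a *local* variable; declaring
# it global in both places puts the dict where the decorated function's
# `global` statement looks: the module namespace.
def decorator():
    def inner_function(func):
        global global_dict
        global_dict = {}
        print("Inner function")
        return func()
    return inner_function

@decorator()
def test_function():
    global global_dict
    global_dict["something"] = "something"
    print("Test function")
    return True
```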
|
<python><decorator><python-decorators>
|
2024-08-02 15:36:44
| 0
| 711
|
Tarique
|
78,826,206
| 7,791,036
|
NameError: name '_LanguageModel' is not defined
|
<p>I am trying to initialize a Vertex AI LLM in the LangChain library.</p>
<pre><code>from langchain_community.llms.vertexai import VertexAI
llm = VertexAI(
max_output_tokens=1024,
temperature=0.2,
top_p=0.8,
top_k=40,
verbose=True,
)
</code></pre>
<p>I get the following error with this.</p>
<blockquote>
<p>ConfigError: field "client" not yet prepared so the type is still a
ForwardRef, you might need to call VertexAI.update_forward_refs().</p>
</blockquote>
<p>And when I called this function, I got:</p>
<blockquote>
<p>NameError: name '_LanguageModel' is not defined</p>
</blockquote>
<p>I also went through the source code and tried importing <strong>_LanguageModel</strong> manually just before the code, but it's not working.
How can I resolve this?</p>
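A hedged note (not from the original post): this ForwardRef/<code>_LanguageModel</code> error pattern typically points at a missing or outdated Vertex AI SDK behind the <code>langchain_community</code> wrapper, or at the wrapper itself, which has been superseded by a dedicated integration package. A sketch of the usual remedies; package names are assumptions to verify:

```shell
# Assumption: the ForwardRef for `client` fails to resolve because the
# underlying SDK class cannot be imported or is too old.
pip install -U google-cloud-aiplatform
# The dedicated integration package replaces the community wrapper:
pip install -U langchain-google-vertexai
```

After which the import would become <code>from langchain_google_vertexai import VertexAI</code>.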
|
<python><langchain><large-language-model><google-cloud-vertex-ai>
|
2024-08-02 15:24:11
| 2
| 1,125
|
vivekpadia70
|
78,826,183
| 5,043,192
|
Can't import custom python package in my project
|
<p>I'm trying to create a package in Python using Poetry and then use it in another project. However, after importing the package, I'm unable to use it; the project doesn't recognize it.</p>
<p><strong>Library</strong></p>
<p>pyproject.toml:</p>
<pre><code>[tool.poetry]
name = "python-lib-example"
version = "0.5.0"
description = ""
authors = ["xxx"]
readme = "README.md"
packages = [{include = "python_lib_example"}]
[tool.poetry.dependencies]
python = "^3.12"
[tool.poetry.group.dev.dependencies]
devpi-client = "^7.1.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>Test Class:</p>
<pre><code>class PersonInfo:
def __init__(self, name, age):
self.name = name
self.age = age
def get_name(self):
return self.name
def get_age(self):
return self.age
def get_info(self):
return f'{self.name} is {self.age} years old'
def __str__(self):
return self.get_info()
</code></pre>
<p><code>__init__.py</code> is empty</p>
<p>Project structure:</p>
<p><a href="https://i.sstatic.net/wjyoJMfY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjyoJMfY.png" alt="Project structure" /></a></p>
<p>I build the package this way and upload it to DevPi, which is running in Docker. I then install it into the target project from DevPi.</p>
<p><strong>Target project</strong></p>
<p>pyproject.toml:</p>
<pre><code>[tool.poetry]
name = "python-app-example"
version = "0.0.1"
description = ""
authors = ["xxx"]
readme = "README.md"
package-mode = false
[tool.poetry.dependencies]
python = "^3.12"
python-lib-example = {version = "^0.5.0", source = "mjana"}
[[tool.poetry.source]]
name = "mjana"
url = "http://localhost:3141/mjana/mjana/+simple"
priority = "supplemental"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>And now I'm trying to import</p>
<pre><code>from python_lib_example import PersonInfo
</code></pre>
<p>But it seems that the project doesn't recognize the package, even though the installation from DevPi completes without any issues. Where could I be making a mistake?</p>
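A hedged guess (not from the original post): with an empty <code>__init__.py</code>, <code>PersonInfo</code> only exists in whichever submodule defines it, so <code>from python_lib_example import PersonInfo</code> fails even though the package installs fine. Re-exporting the class in <code>__init__.py</code> restores the short import. The sketch below builds a throwaway copy of the package on disk to demonstrate; the submodule name <code>person_info</code> is hypothetical, since the post doesn't show the module file's name.

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a stand-in for the installed package: one submodule plus an
# __init__.py that re-exports the class (the hypothesized fix).
root = Path(tempfile.mkdtemp())
pkg = root / "python_lib_example"
pkg.mkdir()
(pkg / "person_info.py").write_text(textwrap.dedent("""
    class PersonInfo:
        def __init__(self, name, age):
            self.name, self.age = name, age
"""))
# The re-export is what makes the short import work:
(pkg / "__init__.py").write_text(
    "from python_lib_example.person_info import PersonInfo\n"
)
sys.path.insert(0, str(root))

from python_lib_example import PersonInfo
print(PersonInfo("Ada", 36).name)  # -> Ada
```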
|
<python><package><python-poetry>
|
2024-08-02 15:20:00
| 1
| 839
|
Michal
|
78,826,115
| 12,785,645
|
How to make a barplot with a double grouped axis using Pandas
|
<p>I am working on a plot where I want to show two groups on one axis and a third group as the fill value.
The problem is that when I plot it, the y-axis shows values as tuples:</p>
<pre><code>data_dict = {'major_group': list(np.array([['A']*10, ['B']*10]).flat),
'minor_group': ['q ','r ','s ','t ']*5,
'legend_group':np.repeat(['d','e','f','g','h','i'],[7,3,1,5,1,3])}
(pd.DataFrame(data= data_dict)
.groupby(['major_group', 'minor_group','legend_group'], observed = True)
.size()
.unstack()
.plot(kind='barh', stacked=True))
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/jj6fRxFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jj6fRxFd.png" alt="plot result of the code included" /></a></p>
<p>However, I'm looking for something like this:</p>
<p><a href="https://i.sstatic.net/82JnJH8T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82JnJH8T.png" alt="Desired result" /></a></p>
<p>How can this be achieved?
Is there some major and minor axis label that can be set?</p>
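A hedged sketch (not from the original post): the tuple labels come from the MultiIndex that <code>groupby(...).size().unstack()</code> leaves on the rows. Pandas has no built-in two-level bar axis, but one workaround is to build the counts first and hand-craft the tick labels, showing each major group only on its first row. The label format here is an assumption about the desired look; minor-group strings are shown without the trailing spaces of the original for readability.

```python
import numpy as np
import pandas as pd

data_dict = {
    "major_group": list(np.array([["A"] * 10, ["B"] * 10]).flat),
    "minor_group": ["q", "r", "s", "t"] * 5,
    "legend_group": np.repeat(["d", "e", "f", "g", "h", "i"], [7, 3, 1, 5, 1, 3]),
}
counts = (
    pd.DataFrame(data_dict)
    .groupby(["major_group", "minor_group", "legend_group"])
    .size()
    .unstack()
)
# Show each major label only on the first row of its block; minor labels
# on every row. These strings then replace the tuple tick labels.
labels = [
    (major if i == 0 or counts.index[i - 1][0] != major else "") + "  " + minor
    for i, (major, minor) in enumerate(counts.index)
]
print(labels)
```

After `ax = counts.plot(kind="barh", stacked=True)`, applying them with `ax.set_yticklabels(labels)` would give a readable two-level axis.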
|
<python><pandas><bar-chart>
|
2024-08-02 15:02:19
| 2
| 463
|
saQuist
|
78,826,085
| 14,649,310
|
Elasticsearch RequestError(400) 'search_phase_execution_exception'
|
<p>I have a simple python app that uses Elasticsearch to store documents for Pokemon using this mapping:</p>
<pre><code>{
"mappings": {
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "object"
},
"type": {
"type": "keyword"
},
"base": {
"type": "object"
},
"species": {
"type": "text"
},
"description": {
"type": "text"
},
"evolution": {
"type": "object"
},
"profile": {
"type": "nested",
"properties": {
"ability": {
"type": "keyword"
}
}
},
"image": {
"type": "object"
},
"embedding": {
"type": "dense_vector",
"dims": 384
}
}
}
}
</code></pre>
<p>When I query by a property, for example the English name, from my Python code:</p>
<pre><code>term_query = {
"size": 5,
"query": {
"term": {
"name.english": "Pikachu"
}
}
}
response = es_client.search(index="pokemon", body=term_query)
print(response)
</code></pre>
<p>I do get back the document for Pikachu. This is the <code>response</code>:</p>
<pre><code>{'took': 16, 'timed_out': False, '_shards': {'total': 1, 'successful': 1, 'skipped': 0, 'failed': 0}, 'hits': {'total': {'value': 1, 'relation': 'eq'}, 'max_score': 2.5902672, 'hits': [{'_index': 'pokemon', '_type': '_doc', '_id': 'uTSUE5EB_ikN9OuVrCYz', '_score': 2.5902672, '_source': {'id': 25, 'name': {'english': 'Pikachu', 'japanese': 'ピカチュウ', 'chinese': '皮卡丘', 'french': 'Pikachu'}, 'type': ['Electric'], 'base': {'HP': 35, 'Attack': 55, 'Defense': 40, 'Sp. Attack': 50, 'Sp. Defense': 50, 'Speed': 90}, 'species': 'Mouse Pokémon', 'description': 'While sleeping, it generates electricity in the sacs in its cheeks. If it’s not getting enough sleep, it will be able to use only weak electricity.', 'evolution': {'prev': ['172', 'high Friendship'], 'next': [['26', 'use Thunder Stone']]}, 'profile': {'height': '0.4 m', 'weight': '6 kg', 'egg': ['Field', 'Fairy'], 'ability': [['Static', 'false'], ['Lightning Rod', 'true']], 'gender': '50:50'}, 'image': {'sprite': 'https://raw.githubusercontent.com/Purukitto/pokemon-data.json/master/images/pokedex/sprites/025.png', 'thumbnail': 'https://raw.githubusercontent.com/Purukitto/pokemon-data.json/master/images/pokedex/thumbnails/025.png', 'hires': 'https://raw.githubusercontent.com/Purukitto/pokemon-data.json/master/images/pokedex/hires/025.png'}, 'embedding': [-0.08073285967111588, 0.06939519941806793, 0.12187427282333374, ... (384 floats elided) ..., 0.04111223667860031]}}]}}
</code></pre>
<p>I checked all the properties and they look correct. The embedding also has the right length, <code>384</code>, which is what the model I am using produces: <code>embeddings_model = SentenceTransformer("paraphrase-MiniLM-L6-v2")</code></p>
<p>However when I try to run an embedding cosine similarity query like:</p>
<pre><code>query = "Pikachu"
query_embedding = embeddings_model.encode([query])[0].tolist()
script_query_cosine = {
"size": 5,
"query": {
"script_score": {
"query": {
"match_all": {}
},
"script": {
"source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
"params": {
"query_vector": query_embedding
}
}
}
}
}
response = es_client.search(index="pokemon", body=script_query_cosine)
print(response)
</code></pre>
<p>I get this error:</p>
<p><code>RequestError(400, 'search_phase_execution_exception', 'runtime error')</code></p>
<p>I checked the embedding size and it is correct. I checked whether any documents are missing the embedding field and found none. I have no clue what this error is.</p>
|
<python><elasticsearch><huggingface><text-embedding-ada-002>
|
2024-08-02 14:55:23
| 1
| 4,999
|
KZiovas
|
78,826,008
| 11,163,122
|
Why does __init__.py break namespace packaging?
|
<p>This package structure works fine for namespace packaging:</p>
<pre class="lang-none prettyprint-override"><code>├── pkg
│   ├── module
│   │   └── a.py
│   └── pyproject.toml
└── subfolder
    ├── pkg
    │   └── module
    │       └── b.py
    └── pyproject.toml
</code></pre>
<p>However, when I add an <code>__init__.py</code> to <code>pkg/module/</code> like so:</p>
<pre class="lang-none prettyprint-override"><code>├── pkg
│   ├── module
│   │   ├── __init__.py
│   │   └── a.py
</code></pre>
<p>Namespace packaging imports like <code>from pkg.module.b import something</code> become broken:</p>
<pre class="lang-none prettyprint-override"><code>ModuleNotFoundError: No module named 'pkg.module.b'
</code></pre>
<p>My question is:</p>
<ol>
<li>Why is <code>__init__.py</code> breaking namespace packaging installs?</li>
<li>Is there some way to work around this, such that I can have <code>pkg/module/__init__.py</code> as well as working namespace package installs?</li>
</ol>
<hr />
<p>Here is <code>pkg/pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
build-backend = "setuptools.build_meta"
requires = ["setuptools>=61"]
[project]
dependencies = []
name = "my-pkg"
version = "0.1.0"
[tool.setuptools.packages.find]
include = ["pkg*"]
where = [".."]
</code></pre>
<p>Here is <code>subfolder/pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
build-backend = "setuptools.build_meta"
requires = ["setuptools>=61"]
[project]
dependencies = []
name = "my-pkg-extension"
version = "0.1.0"
[tool.setuptools.packages.find]
include = ["pkg*"]
</code></pre>
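A hedged note on the likely mechanism (not from the original post): an <code>__init__.py</code> makes <code>pkg/module</code> a regular package, and a regular package is resolved from the first matching directory on <code>sys.path</code>, shadowing every other installed portion; implicit namespace merging only happens for directories with no <code>__init__.py</code>. One known workaround, sketched here and not verified against this exact layout, is a pkgutil-style <code>__init__.py</code> that explicitly extends the package's search path:

```python
# pkg/module/__init__.py -- pkgutil-style namespace package:
# extend_path() appends every pkg/module directory found on sys.path,
# so b.py from the extension distribution stays importable.
__path__ = __import__("pkgutil").extend_path(__path__, __name__)
```

The trade-off is that all portions must then use the same pkgutil-style <code>__init__.py</code> rather than mixing it with implicit namespace portions.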
|
<python><setuptools><python-packaging><pyproject.toml><namespace-package>
|
2024-08-02 14:36:48
| 0
| 2,961
|
Intrastellar Explorer
|
78,825,950
| 13,757,692
|
How to calculate gradient of data on non-orthogonal ordered grid?
|
<p>I have a grid of xy-coordinates that I generate using <code>np.meshgrid</code>. The spacing in x-direction is constant, the y-coordinates are scaled, as a function of x. In this specific case, this results in a cloud of points that lie on a trapezoid, with a side length of 2 on the left and 1 on the right side. I have a function <code>f(x, y)</code>, and I want to get the gradient of that function in y-direction and in the direction of each row of points. How can I use <code>np.gradient</code> with xy-arguments to do this? I find the documentation somewhat confusing.</p>
<pre><code>x = np.linspace(0, 1, num=20)
y = np.linspace(-1/2, 1/2, num=10)
x_grid, y_grid = np.meshgrid(x, y)
y_grid = y_grid * (2 - x_grid)
f = x_grid ** 2 - y_grid ** 3 # example function
gradient = np.gradient(f, x_grid, y_grid)  # something like this?
# with one component being in the row-direction (in xy-space) and the other in the y-direction
</code></pre>
<p>EDIT: I found one way to do this, as <code>np.gradient</code> can take arguments of coordinate positions, but only as 1D arrays, meaning that the gradient must be calculated for each row and each column independently. For my specific case, where the x-spacing is constant in each row, the gradient can be calculated as follows:</p>
<pre><code>df_dy = np.array([np.gradient(f[:, j], y_grid[:, j], edge_order=2) for j in range(y_grid.shape[1])]).T
df_drow = np.array([np.gradient(f[j], np.linalg.norm([x_grid[j] - x_grid[j, 0], y_grid[j] - y_grid[j, 0]], axis=0), edge_order=2) for j in range(x_grid.shape[0])])
</code></pre>
<p>Basically, each row must be treated as a new axis, with the spacing in this axis being calculated using the distance between the points (which I calculate using <code>np.linalg.norm</code>)</p>
<p>Since this uses loops, which I would like to avoid, I will keep the question open for more efficient solutions.</p>
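One loop-free possibility for the y-derivative on this particular grid (a sketch, not from the original post): because y varies monotonically along axis 0 for each fixed column, the chain rule gives df/dy = (df/dj) / (dy/dj), where j is the row index, and both index-space gradients are fully vectorized calls.

```python
import numpy as np

# Grid from the question: uniform x, y scaled per column.
x = np.linspace(0, 1, num=20)
y = np.linspace(-1 / 2, 1 / 2, num=10)
x_grid, y_grid = np.meshgrid(x, y)
y_grid = y_grid * (2 - x_grid)
f = x_grid ** 2 - y_grid ** 3  # example function, df/dy = -3 y**2

# Chain rule through the row index j: df/dy = (df/dj) / (dy/dj).
df_dy = (
    np.gradient(f, axis=0, edge_order=2)
    / np.gradient(y_grid, axis=0, edge_order=2)
)
```

The row-direction derivative could be handled the same way along axis 1, dividing by an index-space gradient of the cumulative arc length instead. For this grid the column-wise y-spacing happens to be uniform, so the chain-rule result matches the per-column `np.gradient(f[:, j], y_grid[:, j])` loop.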
|
<python><numpy><gradient>
|
2024-08-02 14:21:41
| 0
| 466
|
Alex V.
|
78,825,756
| 3,415,543
|
Python Arcade 2.6.17 UIInputText register UITextEvent handler
|
<p>How do you register an event handler for the above?</p>
<p>I create one as follows:</p>
<pre><code>inputText: UIInputText = UIInputText(text='5', height=18, width=100, font_size=12, text_color=color.BLACK)
</code></pre>
<p>But I cannot figure out how to register a callback/handler that gets called whenever a user types text into the text box.</p>
|
<python><arcade>
|
2024-08-02 13:30:52
| 1
| 555
|
Sequestered1776Vexer
|
78,825,554
| 1,418,090
|
How to install TensorFlow 2.16 on Macbook Pro M2?
|
<p>I currently use TensorFlow 2.13.1 (<code>tensorflow-macos</code>) with TF-metal (1.0.0). I want to migrate to TensorFlow 2.16.1 to keep up with the updates.</p>
<p>In the <a href="https://blog.tensorflow.org/2024/03/whats-new-in-tensorflow-216.html" rel="nofollow noreferrer">update website</a>, they say the following:</p>
<blockquote>
<p>Apple Silicon</p>
<p>If you previously installed TensorFlow using <code>pip install tensorflow-macos</code>, please update your installation method. Use <code>pip install tensorflow</code> from now on. <code>tensorflow-macos</code> package will no longer receive updates. Future updates will be released to <code>tensorflow</code>.</p>
</blockquote>
<p>That sounds great, but I have a few questions:</p>
<ol>
<li>Do I need to install <code>tensorflow-metal</code> for GPU acceleration? It doesn't seem to be possible.</li>
<li>Is it stable?</li>
</ol>
<p>My tests so far have been unsuccessful.</p>
<ol>
<li>It keeps asking for Keras, even though it is installed.</li>
<li>When I solved the Keras issue, a simple NN worked well. However, a <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">simple GAN</a> makes my Jupyter Notebook kernel die consistently.</li>
</ol>
<p>My computer is a 2023 Macbook Pro with an M2 Max chip. The OS is up-to-date.</p>
<p><em><strong>Do you have any suggestions as to what may be going on? Is there a better way to perform this update?</strong></em></p>
<p>Here are the specs for the environment that seems to work best for my machine:</p>
<pre><code>conda create -n myenv python=3.9.18
conda activate myenv
conda install -c apple tensorflow-deps
pip install matplotlib==3.7.4
pip install numpy==1.24.3
pip install pandas==2.1.4
pip install scipy==1.11.4
pip install typing-extensions==4.5.0
pip install seaborn==0.13.0
pip install tensorflow-macos==2.13.1
pip install tensorflow-metal==1.0.0
pip install plotly==5.17.0
pip install scikit-learn pyarrow
conda install -c conda-forge notebook
conda install ipykernel
python -m ipykernel install --user --name=myenv --display-name "Python (myenv)"
</code></pre>
<p>Here is the most general spec for the update:</p>
<pre><code>conda create -n myenv python==3.11.9
conda activate myenv
conda install matplotlib
conda install numpy
conda install pandas
conda install scipy
conda install typing-extensions
conda install seaborn
conda install tensorflow
conda install plotly
conda install scikit-learn pyarrow
conda install -c conda-forge notebook
conda install ipykernel
python -m ipykernel install --user --name=myenv --display-name "Python (myenv)"
</code></pre>
<p>I tried mixing and matching <code>pip</code> with <code>conda</code> (when appropriate) and also tried changing package versions to see if that was the issue, but no success.</p>
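A hedged sketch of the install path the quoted blog post implies (not from the original post): on Apple Silicon, TF 2.16+ comes from the plain <code>tensorflow</code> wheel, and GPU acceleration still comes from the separate <code>tensorflow-metal</code> pip package; whether the current plugin release supports a given TF version is worth checking on its PyPI page before upgrading.

```shell
python -m pip install -U tensorflow
python -m pip install -U tensorflow-metal
# quick check that the Metal GPU device is visible:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```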
|
<python><tensorflow><pip><conda><apple-silicon>
|
2024-08-02 12:40:19
| 1
| 438
|
Umberto Mignozzetti
|
78,825,285
| 8,040,928
|
Running adb reboot command from python
|
<p>Running the adb reboot command from code always produces an error. The device actually reboots, and when <code>adb reboot</code> is run directly from the terminal everything is OK, but I don't know how to handle this correctly in Python code (unfortunately, all ChatGPT suggestions fail). Probably the connection is lost when rebooting and then an error is returned...</p>
<p>Do you have any suggestions or do you know how to handle it?</p>
<p>Full code:</p>
<pre><code>import subprocess
def execute_adb(adb_command):
# print(adb_command)
result = subprocess.run(
adb_command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
if result.returncode == 0:
return result.stdout.strip()
print(f"returncode {result.returncode}, stderr: {result.stderr.lower()}, stdout: {result.stdout}")
return "ERROR"
class AndroidController:
def __init__(self, device):
self.device = device
def is_device_online(self):
adb_command = f"adb -s {self.device} get-state"
result = execute_adb(adb_command)
return result == "device"
def reset_device(self):
adb_command = f"adb -s {self.device} shell reboot"
ret = execute_adb(adb_command)
return ret
controller = AndroidController("emulator-5554")
print(controller.is_device_online())
controller.reset_device()
</code></pre>
<p>Output:</p>
<pre><code>True
returncode 255, stderr: , stdout:
Process finished with exit code 0
</code></pre>
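<p>A minimal sketch of one way to handle this (my assumption being that returncode 255 with an otherwise empty stderr is just the adb link dying when the device goes down): treat that case as success for reboot-style commands. The <code>exit 255</code> shell command below is a stand-in for the real <code>adb</code> call.</p>

```python
import subprocess

def execute_adb(adb_command, ok_codes=(0, 255)):
    # For reboot-style commands, adb loses the connection when the device
    # goes down, so returncode 255 is expected rather than a failure.
    result = subprocess.run(
        adb_command,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    if result.returncode in ok_codes:
        return result.stdout.strip()
    return "ERROR"

# Stand-in for `adb -s emulator-5554 shell reboot`, which exits with 255:
print(execute_adb("exit 255"))  # empty string, not "ERROR"
```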
|
<python><android><android-emulator><subprocess><adb>
|
2024-08-02 11:36:47
| 1
| 603
|
Janek Podwysocki
|
78,825,246
| 21,348,174
|
Revit API how to filter elements from Revit Links
|
<p>I'm using Python, pyRevit and Revit 2021.</p>
<p><strong>Main goal</strong></p>
<p>I want to use the <code>FilteredElementCollector</code> in order to collect specific elements within Revit Links linked in my project.</p>
<p><strong>My problem</strong></p>
<p>My question is: how do I collect only the elements that are in my current view and belong to Revit Links?
I'm not sure about what I tried, because I am working on a big file with multiple Revit Links, and when I try to print the elements I get an endless list of elements inside every Link, which doesn't seem right given that my current view is a section without a lot of elements in it.</p>
<p><code>link_doc.ActiveView.Id</code> gets a NoneType error...
But when not passing an active view I get that endless list of elements I mentioned.</p>
<p><strong>My script</strong></p>
<pre><code>#######################################
# VARIABLES
#######################################

doc = __revit__.ActiveUIDocument.Document  # type: Document
uidoc = __revit__.ActiveUIDocument  # type: UIDocument
selection = uidoc.Selection  # type: Selection

#######################################
# MAIN
#######################################

# Collect all Revit Links instances
revit_link_instances_collector = FilteredElementCollector(doc, active_view.Id).OfClass(RevitLinkInstance).ToElements()

for link in revit_link_instances_collector:
    # Get the doc for current Link
    link_doc = link.GetLinkDocument()
    if link_doc:
        # collect all FamilyInstances
        linked_elemens = FilteredElementCollector(link_doc, link_doc.ActiveView.Id).OfClass(FamilyInstance).WhereElementIsNotElementType().ToElements()
        for element in linked_elemens:
            print(element)
</code></pre>
|
<python><revit-api><revitpythonshell><pyrevit>
|
2024-08-02 11:29:32
| 1
| 435
|
IdoBa
|
78,825,079
| 159,072
|
autocorrelation using FFT
|
<p>I am trying to translate this Python routine into C#:</p>
<pre><code>import numpy as np

def autocorr(x):
    if not hasattr(x[0], "__len__"):  # Check if one-dimensional array
        length = len(x)
        N_padding = 2 ** np.ceil(np.log2(2 * len(x) - 1)).astype('int')  # padding size is next power of 2 from 2*length of x - 1
        x = np.pad(x, (N_padding,))  # pad with zeroes on each side
        fft = np.fft.fft(x)
        cfft = np.conjugate(fft)
        coefs = np.fft.ifft(fft * cfft)[:length]  # Get the coefficients from the inverse fft of the product of fft and cfft
    else:  # For two-dimensional array
        x = np.transpose(x)
        length = len(x[0])
        coefs = []
        N_padding = 2 ** np.ceil(np.log2(2 * len(x[0]) - 1)).astype('int')  # padding size is next power of 2 from 2*length of x - 1
        x_bis = []
        for i in range(len(x)):  # Do the padding along dimension 0
            x_bis.append(np.pad(x[i], (N_padding,)))  # pad with zeroes on each side
        fft = np.fft.fft(x_bis)
        cfft = np.conjugate(fft)
        coefs = np.fft.ifft(fft * cfft)[:, :length].tolist()  # Get the coefficients from the inverse fft of the product of fft and cfft
    return coefs

coefs = autocorr([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]])
if hasattr(coefs[0], "__len__"):
    for i in range(len(coefs)):
        coefs[i] /= np.arange(len(coefs[i]), 0, -1)  # Divide each coefficient by the size of the vector minus its index
        coefs[i] /= coefs[i][0]
print(coefs)
</code></pre>
<p>Output:</p>
<pre><code>[array([1. +0.j, 0.90909091+0.j, 0.78787879+0.j, 0.63636364+0.j,
0.45454545+0.j]), array([1. +0.j, 0.90909091+0.j, 0.78787879+0.j, 0.63636364+0.j,
0.45454545+0.j]), array([1. +0.j, 0.90909091+0.j, 0.78787879+0.j, 0.63636364+0.j,
0.45454545+0.j])]
</code></pre>
<p>C# version</p>
<pre><code>using System;
using System.Linq;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Complex;
using MathNet.Numerics.LinearAlgebra.Double;

public class AutoCorrelation
{
    public static double[][] Autocorr(double[][] x)
    {
        int rows = x.Length;
        int cols = x[0].Length;

        // Convert input array to Matrix
        var matrix = Matrix<double>.Build.DenseOfRowArrays(x).Transpose();

        // Create a new matrix to hold the padded data
        int length = cols;
        int N_padding = (int)Math.Pow(2, Math.Ceiling(Math.Log(2 * length - 1, 2)));
        var paddedMatrix = Matrix<Complex>.Build.Dense(matrix.RowCount, N_padding, Complex.Zero);

        // Pad the matrix rows
        for (int i = 0; i < matrix.RowCount; i++)
        {
            for (int j = 0; j < length; j++)
            {
                paddedMatrix[i, j] = new Complex(matrix[i, j], 0);
            }
        }

        // Apply FFT to each row
        var fftMatrix = paddedMatrix.Clone();
        for (int i = 0; i < fftMatrix.RowCount; i++)
        {
            var rowArray = fftMatrix.Row(i).ToArray();
            Fourier.Forward(rowArray, FourierOptions.NoScaling);
            for (int j = 0; j < rowArray.Length; j++)
            {
                fftMatrix[i, j] = rowArray[j];
            }
        }

        // Apply conjugate FFT
        var cfftMatrix = fftMatrix.Conjugate();

        // Multiply fft and cfft matrices element-wise
        var productMatrix = fftMatrix.PointwiseMultiply(cfftMatrix);

        // Apply inverse FFT to each row
        for (int i = 0; i < productMatrix.RowCount; i++)
        {
            var rowArray = productMatrix.Row(i).ToArray();
            Fourier.Inverse(rowArray, FourierOptions.NoScaling);
            for (int j = 0; j < rowArray.Length; j++)
            {
                productMatrix[i, j] = rowArray[j];
            }
        }

        // Extract the coefficients
        var coefsMatrix = productMatrix.SubMatrix(0, productMatrix.RowCount, 0, length);

        // Convert to jagged array and normalize the coefficients
        var coefs = coefsMatrix.ToRowArrays().Select(row => row.Select(c => c.Real).ToArray()).ToArray();
        for (int i = 0; i < coefs.Length; i++)
        {
            var firstElement = coefs[i][0];
            for (int j = 0; j < coefs[i].Length; j++)
            {
                coefs[i][j] /= (length - j);
            }
            for (int j = 0; j < coefs[i].Length; j++)
            {
                coefs[i][j] /= firstElement;
            }
        }

        return coefs;
    }

    public static void Main()
    {
        double[][] data = new double[][]
        {
            new double[] { 1, 1, 1 },
            new double[] { 2, 2, 2 },
            new double[] { 3, 3, 3 },
            new double[] { 4, 4, 4 },
            new double[] { 5, 5, 5 }
        };

        double[][] coefs = Autocorr(data);

        // Print the coefficients
        for (int i = 0; i < coefs.Length; i++)
        {
            for (int j = 0; j < coefs[i].Length; j++)
            {
                System.Console.Write(coefs[i][j] + " ");
            }
            System.Console.WriteLine();
        }
    }
}
</code></pre>
<p>Output:</p>
<pre><code>0.333333333333333 0.285714285714286 0.214285714285714
0.333333333333333 0.285714285714286 0.214285714285714
0.333333333333333 0.285714285714286 0.214285714285714
Press any key to continue . . .
</code></pre>
<p>The outputs are different - C# version is giving the incorrect output.</p>
<p>How can I fix the C# version?</p>
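<p>One way to debug a port like this is to compare both implementations against a direct (non-FFT) reference. The sketch below reproduces the normalized coefficients from the Python output with plain dot products, so either version can be checked against it:</p>

```python
import numpy as np

def autocorr_ref(x):
    # Direct O(n^2) autocorrelation: raw[k] = sum_t x[t] * x[t+k],
    # followed by the same normalization as the FFT version above.
    x = np.asarray(x, dtype=float)
    n = len(x)
    raw = np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])
    coefs = raw / np.arange(n, 0, -1)  # divide by the number of overlapping terms
    return coefs / coefs[0]

print(autocorr_ref([1, 2, 3, 4, 5]))
# [1.         0.90909091 0.78787879 0.63636364 0.45454545]
```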
|
<python><c#><fft>
|
2024-08-02 10:49:48
| 1
| 17,446
|
user366312
|
78,825,070
| 3,333,319
|
Pydantic unknown type during serialization
|
<p>I need to serialize a python dataclass (<code>MainObj</code> in the MWE below) which contains other plain-python nested objects, however the snippet below throws an exception.</p>
<p>mwe.py (entry point)</p>
<pre class="lang-py prettyprint-override"><code>from mainobj import MAIN_OBJ_TA, MainObj
from myobj import MyObj, MyNestedObj
obj = MainObj(a=1, b="asd", c=MyObj(x=10, y="lol", z=MyNestedObj(100, "qwerty")))
j = MAIN_OBJ_TA.dump_json(obj)
print(j)
</code></pre>
<p>mainobj.py</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import TypeAdapter
from pydantic.dataclasses import dataclass

from myobj import MyObj

@dataclass(config={'arbitrary_types_allowed': True})
class MainObj:
    a: int
    b: str
    c: MyObj

MAIN_OBJ_TA = TypeAdapter(MainObj)
</code></pre>
<p>myobj.py</p>
<pre class="lang-py prettyprint-override"><code># from dataclasses import dataclass

# @dataclass
class MyNestedObj:
    h: int
    k: str

    def __init__(self, h, k):
        self.h = h
        self.k = k

# @dataclass
class MyObj:
    x: int
    y: str
    z: MyNestedObj

    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
</code></pre>
<p>If I run <code>python mwe.py</code> at line <code>j = MAIN_OBJ_TA.dump_json(obj)</code> I get</p>
<pre class="lang-py prettyprint-override"><code>pydantic_core._pydantic_core.PydanticSerializationError: Unable to serialize unknown type: <class 'myobj.MyObj'>
</code></pre>
<p>However, if I turn the python classes in <code>myobj.py</code> into python dataclasses (removing the comments), it works and I see</p>
<pre class="lang-py prettyprint-override"><code>b'{"a":1,"b":"asd","c":{"x":10,"y":"lol","z":{"h":100,"k":"qwerty"}}}'
</code></pre>
<p>Is this the intended behaviour? I would like to be able to serialize the objects in <code>myobj.py</code> without turning them into dataclasses (as in my actual use case I have no control over them).</p>
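<p>For what it's worth, a stdlib-only sketch of the direction I could fall back to (converting the foreign objects to dicts before serialization); <code>Outer</code> and <code>Nested</code> below are stand-ins for the real classes I don't control:</p>

```python
import json

def to_jsonable(obj):
    # Recursively turn plain Python objects into JSON-compatible data.
    if hasattr(obj, "__dict__"):
        return {k: to_jsonable(v) for k, v in vars(obj).items()}
    if isinstance(obj, (list, tuple)):
        return [to_jsonable(v) for v in obj]
    return obj

class Nested:
    def __init__(self, h, k):
        self.h = h
        self.k = k

class Outer:
    def __init__(self, x, z):
        self.x = x
        self.z = z

print(json.dumps(to_jsonable(Outer(10, Nested(100, "qwerty")))))
# {"x": 10, "z": {"h": 100, "k": "qwerty"}}
```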
|
<python><serialization><python-dataclasses><pydantic-v2>
|
2024-08-02 10:46:28
| 1
| 973
|
Sirion
|
78,825,029
| 7,478,839
|
How should I correctly configure tasks when using sockets in python?
|
<p>I have this small snippet which I intend to function as a server; I have managed to get the socket to connect to the client, but I cannot send data due to errors.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import json
import socketio
from aiohttp import web

ALLOWED_ORIGINS = ["http://localhost:45100"]

sio = socketio.AsyncServer(cors_allowed_origins=ALLOWED_ORIGINS)
app = web.Application()  # socketio.ASGIApp(sio)
sio.attach(app)

@sio.event
async def connect(sid, environ):
    origin = environ.get('HTTP_ORIGIN', '')
    if origin not in ALLOWED_ORIGINS:
        print(f'Connection from {origin} rejected')
        await sio.disconnect(sid)
    else:
        print(f'Allowing connection from {origin}')

@sio.event
async def disconnect(sid):
    print('Disconnected', sid)

async def send_gpio_data():
    print('Going to send data')
    try:
        while True:
            data = {'speed': 100}
            await sio.emit('ecuData', json.dumps(data))
            print("Data sent, sleeping...")
            await asyncio.sleep(1)
    except Exception as error:
        print(f'Error in sending data: {error}')

async def main():
    print('Init main')
    asyncio.create_task(send_gpio_data())

    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, '0.0.0.0', 8090)
    print("Web app running on http://0.0.0.0:8090")
    await site.start()

    await asyncio.Event().wait()

if __name__ == '__main__':
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("Done!")
</code></pre>
<p>Here is the output:</p>
<pre><code>Init main
Web app running on http://0.0.0.0:8090
Going to send data
Data sent, sleeping...
Allowing connection from http://localhost:45100
Error in sending data: Passing coroutines is forbidden, use tasks explicitly.
/Users/keronei/StudioProjects/Side Projects/rotor/server.py:35: RuntimeWarning: coroutine 'AsyncServer._emit_internal' was never awaited
print(f'Error in sending data: {error}')
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>The execution stops right after the crash. I have also tried <code>sio.start_background_task(send_gpio_data)</code>, but I get the same error.</p>
|
<python><coroutine><python-socketio>
|
2024-08-02 10:34:44
| 2
| 330
|
KE Keronei
|
78,825,011
| 4,451,521
|
RunnableSequence instead of LLMChain throws an error (updating from deprecated langchain)
|
<p>When I have this code first</p>
<pre><code>from langchain_community.llms import HuggingFacePipeline
from transformers import AutoTokenizer
import transformers
import torch

model = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    max_length=1000,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id
)

llm = HuggingFacePipeline(pipeline=pipeline, model_kwargs={'temperature': 0})

from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate(input_variables=["book_name"],
                                 template="Provide me a concise summary of the book {book_name}")
</code></pre>
<p>and then I complete it with</p>
<pre><code>from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
response= chain.run("Alchemist")
print(response)
</code></pre>
<p>I get a response with the summary I wanted, but I get deprecation warnings.
So, following the warnings, I try to replace the second part with</p>
<pre><code>chain = prompt | llm
response = chain.invoke("The name of the rose")
print(response)
</code></pre>
<p>but I get the error</p>
<pre><code>TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'str'>
</code></pre>
<p>What am I doing wrong?</p>
<p>I have used something similar but with llm being a <code>HuggingFaceEndPoint</code> and in that case it worked, so I suspect that it has to do with llm being a <code>HuggingFacePipeline</code> but can someone tell me how to correct the code?</p>
<p>Edit:
I tried</p>
<pre><code>chain2 = prompt | llm | StrOutputParser()
response2 = chain2.invoke({"bookname":"The name of the rose"})
# response2 = chain2.invoke("The name of the rose")
print(response2)
</code></pre>
<p>but the error persists</p>
|
<python><langchain><huggingface>
|
2024-08-02 10:29:18
| 2
| 10,576
|
KansaiRobot
|
78,824,996
| 1,779,973
|
Run an async function from a sync function within an already-running event loop
|
<p>In my Python application, I have a sync function <code>boo()</code> that is called inside a <strong>running event loop</strong>. <code>boo()</code> has to get some data from <code>foo(arg1, arg2)</code>, which is an <strong>async</strong> function.</p>
<p>Unfortunately, I can't turn <code>boo()</code> into an async function. It must stay synchronized. (This constraint is out of my hands).</p>
<p>How can I call <code>foo(arg1, arg2)</code> from within <code>boo()</code>, wait until it completes, and continue the execution?</p>
<h1>Minimal Reproducible Example</h1>
<p>I tried to create a minimal reproducible example. This is the closest I could get. The real application is big and complex, and may behave differently.</p>
<pre class="lang-py prettyprint-override"><code>import time
import asyncio

async def work_for_data():
    time.sleep(3)
    return 42

# sync function, calling async function
def get_number():
    return asyncio.get_event_loop().run_until_complete(work_for_data())

async def get_data():
    return get_number()

async def run():
    loop = asyncio.get_event_loop()
    task = asyncio.create_task(get_data())
    loop.run_until_complete(task)

if __name__ == "__main__":
    asyncio.run(run())
</code></pre>
<p>This code raises:</p>
<pre><code> File "./minimal_example.py", line 9, in get_number
return asyncio.get_event_loop().run_until_complete(work_for_data())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 629, in run_until_complete
self._check_running()
File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 588, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
</code></pre>
<h1>Attempts To Solve The Problem</h1>
<p>I made a lot of attempts to solve it, all of them didn't work.</p>
<h3>Attempt 1</h3>
<pre><code>data = asyncio.run(foo(arg1, arg2))
</code></pre>
<p>Raised the following exception:</p>
<pre><code> data = asyncio.run(foo(arg1, arg2))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pycharm_helpers/pydevd_asyncio/pydevd_nest_asyncio.py", line 143, in run
loop.run_until_complete(task)
File "uvloop/loop.pyx", line 1511, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1504, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1377, in uvloop.loop.Loop.run_forever
File "uvloop/loop.pyx", line 518, in uvloop.loop.Loop._run
RuntimeError: this event loop is already running.
</code></pre>
<h3>Attempt 2</h3>
<pre><code>loop = asyncio.get_event_loop()
data = loop.run_until_complete(foo(arg1, arg2))
</code></pre>
<p>Raised the following exception:</p>
<pre><code> data = loop.run_until_complete(foo(arg1, arg2))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1511, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1504, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1377, in uvloop.loop.Loop.run_forever
File "uvloop/loop.pyx", line 518, in uvloop.loop.Loop._run
RuntimeError: this event loop is already running.
</code></pre>
<h3>Attempt 3</h3>
<pre><code>loop = asyncio.get_running_loop()
with ThreadPoolExecutor() as executor:
    future = executor.submit(lambda: asyncio.run_coroutine_threadsafe(foo(arg1, arg2), loop).result())
    data = future.result()
</code></pre>
<p>The interpreter got stuck when executing <code>future.result()</code></p>
<h3>Attempt 4</h3>
<pre><code>loop = asyncio.get_event_loop()
future = asyncio.Future()

def callback(task):
    if task.exception():
        future.set_exception(task.exception())
    else:
        future.set_result(task.result())

task = asyncio.run_coroutine_threadsafe(foo(arg1, arg2), loop)
task.add_done_callback(callback)
result = task.result()  ## Stuck here
return result
</code></pre>
<p>The interpreter got stuck when executing <code>task.result()</code></p>
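<p>For reference, here is a sketch of one workaround that does complete, with the caveat that it blocks the outer loop while waiting: run the coroutine on a fresh event loop in a worker thread, so the already-running loop is never re-entered. <code>foo</code>/<code>boo</code> below are simplified stand-ins for the real functions.</p>

```python
import asyncio
import threading

def run_async_in_thread(coro):
    # Run `coro` on its own event loop in a worker thread; the running
    # loop in the current thread is never touched, so no RuntimeError.
    result = {}

    def runner():
        result["value"] = asyncio.run(coro)

    t = threading.Thread(target=runner)
    t.start()
    t.join()  # blocks this thread (and its loop) until the coroutine finishes
    return result["value"]

async def foo(a, b):
    await asyncio.sleep(0.01)
    return a + b

def boo():
    return run_async_in_thread(foo(20, 22))

async def main():
    return boo()

print(asyncio.run(main()))  # 42
```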
|
<python><async-await><python-asyncio><future><coroutine>
|
2024-08-02 10:26:32
| 3
| 536
|
Ido
|
78,824,983
| 2,810,305
|
Python str subclass with lazy evaluation of its value (for argparse)
|
<p>I am building a command-line program that uses <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow noreferrer"><code>argparse</code></a>.
In the (assumed-to-be) rare case of a wrong call, <code>argparse</code> will show a description string supplied when creating the <code>ArgumentParser</code>.</p>
<p>I want this description to show the version number of my program.
I want to extract this from the <code>pyproject.toml</code> file via <a href="https://docs.python.org/3/library/tomllib.html" rel="nofollow noreferrer"><code>tomllib</code></a>.
Since this is an expensive operation (and even more so since I want to learn how to do it), <strong>I would like the description string to be evaluated lazily: only when it is actually to be printed</strong>.</p>
<p>I have not yet found a way to do it even though I am willing to build a one-trick-pony object specialized for this particular value:</p>
<ul>
<li><a href="https://docs.python.org/3/library/collections.html#collections.UserString" rel="nofollow noreferrer"><code>collections.UserString</code></a> could provide the lazy evaluation (via overriding <code>__getattribute__</code> for the <code>data</code> attribute), but, alas, some code in <code>argparse</code> uses <code>re.sub()</code> on it, which appears to check <code>isinstance(x, str)</code>, which a <code>UserString</code> does not fulfill.</li>
<li>a subclass of <code>str</code> can override any operation done on a string -- but not perform lazy evaluation for a plain use of the entire string. (Is this true?)</li>
<li>if <code>ArgumentParser</code> would use <code>str(description)</code> instead of <code>description</code> when it is about to print the description, one could supply an object that performs the lazy evaluation in its <code>__str__</code> method. But, alas, <code>ArgumentParser</code> does not do this.</li>
</ul>
<p>Is there any approach that does the job?</p>
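<p>To illustrate the second bullet, a small experiment showing why the <code>str</code> subclass route is not lazy: the value must be supplied in <code>__new__</code>, so the expensive computation runs at construction time, before anything is printed (the version string below is a stand-in for the real pyproject.toml lookup).</p>

```python
class LazyStr(str):
    # A str subclass must supply its value in __new__, so the supposedly
    # lazy computation actually runs eagerly, at construction time.
    def __new__(cls, thunk):
        return super().__new__(cls, thunk())

calls = []

def expensive():
    calls.append(1)           # record when the computation happens
    return "myprog 1.2.3"     # stand-in for the pyproject.toml lookup

s = LazyStr(expensive)
print(len(calls))  # 1 -- already evaluated before anyone printed s
print(s)           # myprog 1.2.3
```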
|
<python>
|
2024-08-02 10:21:56
| 1
| 40,070
|
Lutz Prechelt
|
78,824,890
| 5,865,411
|
SQLAlchemy: Error can't emit change event
|
<p>So, I have these 2 dto classes:</p>
<pre class="lang-py prettyprint-override"><code>class ProjectDTO(DB):
    __tablename__ = 'projects'
    __table_args__ = {'schema': 'postgres'}

    id = Column(Integer, primary_key=True)
    serial = Column(UUID(as_uuid=True), unique=True, nullable=False, server_default=text('(uuid_generate_v4())'))
    team_engineer_serial = Column(UUID(as_uuid=True), ForeignKey('postgres.team.serial'))
    team_tester_serial = Column(UUID(as_uuid=True), ForeignKey('postgres.team.serial'))

    statuses = relationship('ProjectStatusDTO', back_populates='project', foreign_keys='[ProjectStatusDTO.project_serial]')
    team_engineer = relationship('TeamDTO', backref='project_as_engineer', foreign_keys=[team_engineer_serial])
    team_tester = relationship('TeamDTO', backref='project_as_tester', foreign_keys=[team_tester_serial])

class ProjectStatusDTO(DB):
    __tablename__ = 'project_statuses'
    __table_args__ = {'schema': 'postgres'}

    id = Column(Integer, primary_key=True)
    serial = Column(UUID(as_uuid=True), unique=True, nullable=False, server_default=text('(uuid_generate_v4())'))
    project_serial = Column(UUID(as_uuid=True), ForeignKey('postgres.projects.serial'))
    name = Column(String(255), nullable=False)
    is_initial = Column(Boolean)

    project = relationship('ProjectDTO', back_populates='statuses', foreign_keys=[project_serial])
</code></pre>
<p>Then, I have a function to get the project details, including the list of statuses, using function below.</p>
<pre class="lang-py prettyprint-override"><code>    def _get_project(self, ctx: Context, project_code: str = None, team_engineer_serial: str = None, with_initial_status: bool = False):
        if not project_code and not team_engineer_serial:
            raise BadRequestError("one of 'project_code' or 'team_engineer_serial' must be provided")

        query = ctx.get_db_catalog().query(ProjectDTO)

        if project_code:
            query = query.filter(ProjectDTO.code == project_code)
        else:
            query = query.filter(ProjectDTO.team_engineer_serial == team_engineer_serial)

        if with_initial_status:
            query = query.join(ProjectDTO.statuses).filter(ProjectStatusDTO.is_initial == True)

        query = query.options(
            joinedload(ProjectDTO.statuses),
            joinedload(ProjectDTO.team_engineer)
        )

        result = query.first()
        if not result:
            return None

        return ProjectEntity().from_dto(result)
</code></pre>
<p>Unfortunately, the function above produce error <code>raise orm_exc.ObjectDereferencedError( sqlalchemy.orm.exc.ObjectDereferencedError: Can't emit change event for attribute 'ProjectDTO.statuses' - parent object of type <ProjectDTO> has been garbage collected.</code></p>
<p>What's wrong with my function?
Thank you.</p>
|
<python><sqlalchemy>
|
2024-08-02 09:58:51
| 0
| 909
|
dev-x
|
78,824,885
| 1,504,016
|
Python : filter set/list of tuple based on cardinality property
|
<p>I'm looking for a way to filter a set (or list) of tuples based on the number of times an
item appears in one or the other position of the tuple.</p>
<p>My current goal is a bit complex so I divided the problem in three smaller steps.</p>
<p><strong>1. Let's start with the simplest case, only a single value which applies only to the first element of the tuple</strong></p>
<p>For instance:</p>
<pre><code>my_filter([(1,2),(1,3),(2,4),(3,1),(3,4),(3,5),(5,2),(5,4)], 2)
</code></pre>
<p>Should return:</p>
<pre><code>[(1,2),(1,3),(5,2),(5,4)]
</code></pre>
<p>Because these are the only tuples whose first item appears exactly twice (as a first element) in the whole list.</p>
<p>The naive way of doing it is: for each first element of a tuple in the list, count the number of times this element appears as the first element across all tuples and, if the count matches the chosen number, add all tuples having this element in first position.</p>
<p>But I feel like this is suboptimal, as I have to iterate over the list for each possible value; I'm surely missing a better way of doing it.</p>
<p><strong>2. Make it reciprocal</strong></p>
<p>Ideally it would like to be able to apply the same treatment based on the second element of the tuple, with another cardinality parameter</p>
<p>For instance:</p>
<pre><code>my_filter([(1,2),(1,3),(2,4),(3,1),(3,4),(3,5),(5,2),(5,4)], 2, 1)
</code></pre>
<p>Here we want to keep only tuples in which the first element appears exactly twice but with the second element appearing only once (intersection of the two conditions). This should return:</p>
<pre><code>[(1,3)]
</code></pre>
<p><strong>3. Generalizing to multiple values</strong></p>
<pre><code>my_filter([(1,2),(1,3),(2,4),(3,1),(3,4),(3,5),(5,2),(5,4)], 2, [1,3])
</code></pre>
<p>In this case, we allow the cardinality filter to take multiple possible values. In this example, we want to keep tuples for which the first element appears exactly twice (in first position) and the second element appears either once or three times (in the second position). This should return:</p>
<pre><code>[(1,3),(5,4)]
</code></pre>
<p>Once again, I have no problem writing a naive solution that would simply iterate over each allowed values and join result sets, but I'm looking for something smarter.</p>
<p>I feel like there could be some useful functions in the itertools library, but I'm not comfortable enough with it. Any advice? Thanks.</p>
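<p>For what it's worth, here is a sketch of the direction I am considering, counting each position once up front with <code>collections.Counter</code> instead of re-scanning the list per value (my own attempt, not the clever itertools solution I am asking about):</p>

```python
from collections import Counter

def my_filter(pairs, first_card, second_cards=None):
    # Count each position once, then filter in a single pass.
    if second_cards is not None and not isinstance(second_cards, (list, set, tuple)):
        second_cards = [second_cards]
    first_counts = Counter(a for a, _ in pairs)
    second_counts = Counter(b for _, b in pairs)
    return [
        (a, b) for a, b in pairs
        if first_counts[a] == first_card
        and (second_cards is None or second_counts[b] in second_cards)
    ]

pairs = [(1, 2), (1, 3), (2, 4), (3, 1), (3, 4), (3, 5), (5, 2), (5, 4)]
print(my_filter(pairs, 2))          # [(1, 2), (1, 3), (5, 2), (5, 4)]
print(my_filter(pairs, 2, 1))       # [(1, 3)]
print(my_filter(pairs, 2, [1, 3]))  # [(1, 3), (5, 4)]
```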
|
<python><list><functional-programming><tuples><python-itertools>
|
2024-08-02 09:58:13
| 2
| 2,649
|
ibi0tux
|
78,824,882
| 386,861
|
Voila output to html - can't see output file
|
<p>This is a simple query about the useful voila module in python</p>
<p>I've got a jupyter notebook called voila_testing.ipynb - although it could be any notebook.</p>
<p>It is hosted on SharePoint, and all I want to do is output the Voila results into an HTML file, as not too many people can access the Jupyter notebook on our system.</p>
<p>I've tried running it and then using the command line in vscode to run</p>
<pre><code>voila voila_testing.ipynb --template=lab --export=output.html</code></pre>
<p>But although the html page comes up it doesn't generate an output.html file.</p>
<p>Otherwise it would be a simple solution for my dashboard work.</p>
|
<python><pandas><visual-studio-code><jupyter-notebook><voila>
|
2024-08-02 09:57:35
| 0
| 7,882
|
elksie5000
|
78,824,872
| 2,641,187
|
distinguishing homogeneous and heterogeneous tuples in python function overloads
|
<p>Suppose I have an interface <code>Base</code> with a lot of implementations</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC
class Base(ABC): ...
class A(Base): ...
class B(Base): ...
class C(Base): ...
# ...
class Z(Base): ...
</code></pre>
<p>Now I want to define a composite class that holds a frozenset of such objects. There is a common interface <code>Product</code> and two implementations which take either a heterogeneous frozenset (<code>MixedProduct</code>) or a homogeneous frozenset of <code>Z</code>s (<code>ZProduct</code>)</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from dataclasses import dataclass

class Product(ABC):
    @property
    @abstractmethod
    def items(self) -> frozenset[Base]: ...

@dataclass(frozen=True)
class MixedProduct(Product):
    items: frozenset[Base]

@dataclass(frozen=True)
class ZProduct(Product):
    items: frozenset[Z]
</code></pre>
<p>there is a factory function that takes an arbitrary number of <code>Base</code> objects and returns the correct <code>Product</code> object</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Iterable
from typing_extensions import TypeGuard

def check_all_z(items: tuple[Base, ...]) -> TypeGuard[tuple[Z, ...]]:
    return all([isinstance(item, Z) for item in items])

def make_product(*items: Base) -> MixedProduct | ZProduct:
    # `items` is a tuple[Base, ...]
    if check_all_z(items):  # the TypeGuard tells MyPy that items: tuple[Z, ...] in this clause
        return ZProduct(frozenset(items))
    return MixedProduct(frozenset(items))
</code></pre>
<p>so this function returns a <code>ZProduct</code> only if all input items are <code>Z</code> and <code>MixedProduct</code> otherwise. Now I would like to narrow the return type of <code>make_product</code> as a Union doesn't capture the feasible input - return type relations. What I want would be sth like this</p>
<pre class="lang-py prettyprint-override"><code>reveal_type(make_product(Z())) # note: Revealed type is "ZProduct"
reveal_type(make_product(A())) # note: Revealed type is "MixedProduct"
reveal_type(make_product(Z(), Z())) # note: Revealed type is "ZProduct"
reveal_type(make_product(B(), A())) # note: Revealed type is "MixedProduct"
reveal_type(make_product(B(), Z())) # note: Revealed type is "MixedProduct" # also contains one Z!!
</code></pre>
<p>I go ahead and define two overloads</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload

@overload
def make_product(*items: Base) -> MixedProduct: ...
@overload
def make_product(*items: Z) -> ZProduct: ...
def make_product(*items):
    if check_all_z(items):  # the TypeGuard tells MyPy that items: tuple[Z, ...] in this clause
        return ZProduct(frozenset(items))
    return MixedProduct(frozenset(items))
</code></pre>
<p>so the first overload is the "catch all" while the second one is the specialization for the only case where you would get a <code>ZProduct</code>. But now MyPy complains with</p>
<pre><code>error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader [misc]
</code></pre>
<p>So my question is, is there a way to just specialize the annotations for <code>make_product</code> for this one particular case that would return <code>ZProduct</code> in any other way? With <code>overload</code> it seems to only be possible if all the involved types have no overlaps whatsoever. That would mean I would have to define a Union of all other implementations of <code>Base</code> <em>except</em> <code>Z</code> and use that as input for the <code>MixedProduct</code> variant. But that also doesn't work, because you <em>can</em> have <code>Z</code> in the input items for the <code>MixedProduct</code> variant, just not all of them (see last reveal_type example above). FWIW using a Union of all implementations of <code>Base</code> (including <code>Z</code>) for the <code>MixedProduct</code> variant throws the same MyPy error.</p>
<p>How else would I be able to differentiate between homogeneous and heterogeneous tuples with type annotations to capture the correct input - return type relations in my case?</p>
<p>To be clear: the actual runtime code does what I intend, I just can't get the type annotations right.</p>
|
<python><python-typing><mypy>
|
2024-08-02 09:56:35
| 1
| 931
|
Darkdragon84
|
78,824,869
| 4,596,240
|
Changes in child processes variables not reflected in parent object
|
<p>I have a parent class in Python that can start a process in a child class. The child class has a <em>multiprocessing.Process</em> that changes some variables. I would expect the changes to be visible to the parent class, since the object is created there, but somehow the variables are not shared. The process is started as a fork.</p>
<p>Here is an example code:</p>
<pre><code>import multiprocessing
from multiprocessing import Queue, Process
import time

class Parent:
    def __init__(self):
        print(multiprocessing.get_start_method())
        self.child = Child()

    def start(self):
        self.child.start_child()

    def print_child_variable(self):
        print('Parent: Child Variable: ' + str(self.child.some_variable))

class Child:
    def __init__(self):
        self.some_variable = None
        self.status = False

    def start_child(self):
        self.status = True
        do_something_process = Process(target=self.do_something)
        do_something_process.start()

    def do_something(self):
        self.some_variable = True
        print('Child: some variable after changing: ' + str(self.some_variable))

    def print_some_variable(self):
        print('Child: some variable: ' + str(self.some_variable))

if __name__ == '__main__':
    parent = Parent()
    parent.print_child_variable()
    time.sleep(1)
    parent.start()
    parent.print_child_variable()
    time.sleep(1)
    parent.print_child_variable()
    time.sleep(1)
    parent.child.print_some_variable()
</code></pre>
<p>The output:</p>
<pre><code>fork
Parent: Child Variable: None
Parent: Child Variable: None
Child: some variable after changing: True
Parent: Child Variable: None
Child: some variable: None
</code></pre>
<p>I would expect that after the change, the variable <em>some_variable</em> will be true even if checked from the parent object. Can anyone help me understand what is going on?</p>
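<p>For comparison, a minimal sketch of the shared-memory variant: a forked child gets a copy-on-write copy of the object, so plain attribute writes stay in the child, whereas <code>multiprocessing.Value</code> lives in shared memory and is visible to both sides.</p>

```python
import multiprocessing

def worker(flag):
    flag.value = 1  # write to shared memory, visible to the parent

if __name__ == "__main__":
    flag = multiprocessing.Value("i", 0)  # 'i' = C int, initially 0
    p = multiprocessing.Process(target=worker, args=(flag,))
    p.start()
    p.join()
    print(flag.value)  # 1
```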
|
<python><multithreading><multiprocessing>
|
2024-08-02 09:55:21
| 1
| 838
|
Ignacio
|
78,824,644
| 10,200,497
|
What is the best way to return the group that has the largest streak of negative numbers in a column?
|
<p>My DataFrame is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100],
}
)
</code></pre>
<p>Expected output:</p>
<pre><code> a
0 -3
1 -1
2 -2
3 -5
</code></pre>
<p>Logic:</p>
<p>I want to return the largest streak of negative numbers. And if there are more than one streak that are the largest, I want to return the first streak. In <code>df</code> there are two negative streaks with size of 4, so the first one is returned.</p>
<p>This is my attempt but whenever I use <code>idxmax()</code> in my code, I want to double check because it gets tricky sometimes in some scenarios.</p>
<pre><code>import numpy as np
df['sign'] = np.sign(df.a)
df['sign_streak'] = df.sign.ne(df.sign.shift(1)).cumsum()
m = df.sign.eq(-1)
group_sizes = df.groupby('sign_streak').size()
largest_group = group_sizes.idxmax()
largest_group_df = df[df['sign_streak'] == largest_group]
</code></pre>
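<p>An alternative attempt I considered, restricted to negative runs only (so a positive streak can never win, and <code>idxmax</code> deterministically returns the first maximal streak because the run ids increase along the frame):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100]})

neg = df['a'].lt(0)
streak = neg.ne(neg.shift()).cumsum()                 # label consecutive runs
sizes = streak[neg].value_counts().sort_index()       # sizes of negative runs, in order of appearance
first_longest = sizes.idxmax()                        # idxmax returns the first maximum
out = df[streak.eq(first_longest)]
print(out)
```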
|
<python><pandas><dataframe>
|
2024-08-02 09:10:10
| 2
| 2,679
|
AmirX
|
78,824,489
| 1,195,001
|
Python: wait for all threads to reach one point in the code before continuing
|
<pre><code>def setsystemdate(epoch):
# set system date to epoch
# this function uses a huge amount of file pointers that I want to keep open for efficiency
def longrunningfunc(elem1, start_time, end_time):
for newdate in range(start_time, end_time, 100000):
setsystemdate(newdate)
# need to change the date of the machine to create file with correct metadata
# copy a huge amount of file to a new folder
# create huge amount of link
elems = ["elem1","elem2","elem3"....."elem150"]
# that is working perfectly
for elem in elems:
longrunningfunc(elem, start_epoch, end_epoch)
# Now i'd like to be able to run code like this.
# My issue is how to synchronize my threads so they wait for each other to set the system date.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
executor.map(lambda kargs: longrunningfunc(**kargs), elems)
</code></pre>
<p>This code takes a list of folders, and for each <code>elem</code> it creates a backlog of files that are copies of the current element; to keep it consistent, I need to change the system date so that each file's birth date is set correctly.</p>
<p>How can I make all my threads wait for each other before starting a new epoch, so they can write files in parallel?<br />
I don't really want to switch to processing all folders for one epoch at a time, because doing one <code>elem</code> at a time allows me to keep 50k file descriptors open per <code>elem</code>, making the copies many times faster.</p>
<p>I was thinking of creating a separate thread to manage date change, but then I have no idea how all the threads could synchronize each other and wait for the date to be set.</p>
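<p>To illustrate the kind of synchronization I mean, <code>threading.Barrier</code> seems close: every worker blocks until all have arrived, exactly one of them changes the "date", and a second barrier releases the rest. A toy sketch with an appended log standing in for the real <code>setsystemdate</code> (all names here are made up):</p>

```python
import threading

N_WORKERS = 3
barrier = threading.Barrier(N_WORKERS)
log = []  # stand-in for real work; list.append is thread-safe under the GIL

def fake_set_system_date(epoch):
    log.append(('set-date', epoch))

def worker(name, epochs):
    for epoch in epochs:
        i = barrier.wait()            # everyone waits until all workers reach this epoch
        if i == 0:                    # exactly one thread gets index 0 ...
            fake_set_system_date(epoch)   # ... and changes the "system date"
        barrier.wait()                # nobody proceeds until the date is set
        log.append((name, epoch))     # now all threads work on this epoch in parallel

threads = [threading.Thread(target=worker, args=(f'w{n}', [100, 200]))
           for n in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

<p>A <code>Barrier</code> resets after each generation, so the same object can be reused for every epoch in the loop.</p>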
|
<python><python-multithreading>
|
2024-08-02 08:28:59
| 1
| 344
|
Kiwy
|
78,823,898
| 7,959,614
|
Measure balanceness of a weighted numpy array
|
<p>I have player <code>A</code> and <code>B</code> who both played against different opponents.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>player</th>
<th>opponent</th>
<th>days ago</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>C</td>
<td>1</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>2</td>
</tr>
<tr>
<td>A</td>
<td>D</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>F</td>
<td>100</td>
</tr>
<tr>
<td>A</td>
<td>F</td>
<td>101</td>
</tr>
<tr>
<td>A</td>
<td>F</td>
<td>102</td>
</tr>
<tr>
<td>A</td>
<td>G</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>C</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>C</td>
<td>2</td>
</tr>
<tr>
<td>B</td>
<td>D</td>
<td>10</td>
</tr>
<tr>
<td>B</td>
<td>F</td>
<td>100</td>
</tr>
<tr>
<td>B</td>
<td>F</td>
<td>101</td>
</tr>
<tr>
<td>B</td>
<td>F</td>
<td>102</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>2</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>3</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>4</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>5</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>6</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>7</td>
</tr>
<tr>
<td>B</td>
<td>G</td>
<td>8</td>
</tr>
</tbody>
</table></div>
<p>First, I want to find the most common opponent. My definition of "most common" is not the total number of matches but rather how balanced the match counts are.
If, for example, players <code>1</code> and <code>2</code> played 99 times and 1 time respectively against player <code>3</code>, I prefer opponent <code>4</code>, against whom <code>A</code> and <code>B</code> both played 49 times.</p>
<p>In order to measure the "balanceness" I write the following function:</p>
<pre><code>import numpy as np
from collections import Counter
def balanceness(array: np.ndarray):
classes = [(c, cnt) for c, cnt in Counter(array).items()]
m = len(classes)
n = len(array)
H = -sum([(cnt / n) * np.log((cnt / n)) for c, cnt in classes])
return H / np.log(m)
</code></pre>
<p>This functions works as expected:</p>
<pre><code>>> balanceness(array=np.array([0, 0, 0, 1, 1, 1]))
1.0
</code></pre>
<p>If I run the function on the different opponents I see the following results:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>opponent</th>
<th>balanceness</th>
<th>n_matches</th>
</tr>
</thead>
<tbody>
<tr>
<td>C</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>D</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>F</td>
<td>1</td>
<td>6</td>
</tr>
<tr>
<td>G</td>
<td>0.5032583347756457</td>
<td>9</td>
</tr>
</tbody>
</table></div>
<p>Clearly, opponent <code>F</code> is the most common one. However, the matches of <code>A</code> and <code>B</code> against <code>F</code> are relatively old.</p>
<p>How should I incorporate a recency-factor into my calculation to find the "most recent common opponent"?</p>
<p><strong>Edit</strong></p>
<p>After thinking more about it I decided to weight each match using the following function</p>
<pre><code>def weight(days_ago: int, epsilon: float=0.005) -> float:
    return np.exp(-1 * days_ago * epsilon)
</code></pre>
<p>I sum the weight of all the matches against each opponent</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>opponent</th>
<th>balanceness</th>
<th>n_matches</th>
<th>weighted_n_matches</th>
</tr>
</thead>
<tbody>
<tr>
<td>C</td>
<td>1</td>
<td>4</td>
<td>3.9701246258837</td>
</tr>
<tr>
<td>D</td>
<td>1</td>
<td>2</td>
<td>1.90245884900143</td>
</tr>
<tr>
<td>F</td>
<td>1</td>
<td>6</td>
<td>3.62106362790388</td>
</tr>
<tr>
<td>G</td>
<td>0.5032583347756457</td>
<td>9</td>
<td>8.81753570603108</td>
</tr>
</tbody>
</table></div>
<p>Now, opponent <code>C</code> is the "most-recent balanced opponent".</p>
<p>Nevertheless, this method ignores the "recentness" on a player-level because we sum the values. There could be a scenario where player <code>1</code> played recently a lot of matches against player <code>3</code> whereas player <code>2</code> faced player <code>3</code> in the distant past.</p>
<p>How can we find the opponent that is</p>
<ol>
<li>the most balanced / equally-distributed between two players</li>
<li>the opponent with the most recent matches against the two players</li>
</ol>
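<p>One idea I am considering to combine both criteria: weight each match first, then compute the entropy over the <em>per-player weighted sums</em>, so an opponent only scores high if both players have recent matches against them. A sketch (the function names are mine):</p>

```python
import numpy as np

def weight(days_ago, epsilon=0.005):
    return np.exp(-days_ago * epsilon)

def weighted_balanceness(days_by_player):
    # days_by_player: {player: [days_ago of each match]} for one opponent
    totals = {p: sum(weight(d) for d in days) for p, days in days_by_player.items()}
    grand = sum(totals.values())
    shares = [t / grand for t in totals.values()]
    H = -sum(s * np.log(s) for s in shares if s > 0)
    # returns (balanceness of the weighted shares, total weighted match volume)
    return H / np.log(len(shares)), grand

# opponent C: both players played on the same recent days -> perfectly balanced
score_c, volume_c = weighted_balanceness({'A': [1, 2], 'B': [1, 2]})
# skewed case: A has one old match, B several recent ones -> balanceness drops
score_s, volume_s = weighted_balanceness({'A': [300], 'B': [1, 2, 3]})
```

<p>Ranking opponents by this pair (balanceness first, weighted volume as a tie-breaker) would penalize the scenario where one player's matches are all in the distant past.</p>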
|
<python><numpy><counter>
|
2024-08-02 05:29:45
| 2
| 406
|
HJA24
|
78,823,831
| 4,915,008
|
How to LEFT OUTER JOIN without foreign key using django orm
|
<p>I have following Models:</p>
<pre class="lang-py prettyprint-override"><code>class User(Model):
###
user fields
###
class CaseModel(Model):
id = # django build in id field
owner = models.ForeignKey('User')
###
other fields
###
class DraftMessageModel(Model):
entity_id = models.IntegerField(null=True, blank=True) # CaseModel.id is linked to this
###
other fields
###
</code></pre>
<h2>I need to construct queryset that generates following SQL:</h2>
<pre class="lang-sql prettyprint-override"><code>SELECT
C.id,
C.some_field,
D.another_field,
U.username,
< some extra fields >
FROM
CaseModel AS C
LEFT OUTER JOIN User AS U ON (C.owner_id = U.id)
LEFT OUTER JOIN DraftMessageModel AS D ON (C.id = D.entity_id)
WHERE
C.owner_id = 100
AND < some extra condition >
</code></pre>
<p>I know it's a trivial question, but I couldn't get it working with <code>queryset.extra</code> or <code>queryset.annotate(draft_message=FilteredRelation())</code>.</p>
<p>Here is what I've tried so far:</p>
<ul>
<li>queryset.annotate</li>
</ul>
<pre class="lang-py prettyprint-override"><code>queryset = CaseModel.objects.select_related('owner').filter(owner_id=100)
queryset = queryset.annotate(
draft_message=FilteredRelation(
'draftmessagemodel',
condition=Q(draftmessagemodel__entity_id=F('id'))
)
)
# error: Cannot resolve keyword 'draftmessagemodel' into field. Choices are: <CaseModel's other fields>
</code></pre>
<ul>
<li>queryset.extra</li>
</ul>
<pre class="lang-py prettyprint-override"><code>queryset = CaseModel.objects.select_related('owner').filter(owner_id=100)
queryset = queryset.extra(
select={
'draftmessagemodel__id': 'draftmessagemodel.id',
'draftmessagemodel__text': 'draftmessagemodel.text',
},
tables=['draftmessagemodel'],
where=[
'casemodel.id = draftmessagemodel.entity_id'
]
)
</code></pre>
<p>generating undesired sql:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
"casemodel"."id",
"user"."username",
(draftmessagemodel.id) AS "draftmessagemodel__id",
(draftmessagemodel.text) AS "draftmessagemodel__text",
<some other fields>
FROM
"casemodel"
LEFT OUTER JOIN "user" ON (
"casemodel"."owner_id" = "user"."id"
    ), -- I don't understand where this comma comes from
"draftmessagemodel"
WHERE
(
"casemodel"."owner_id" = 100
AND (
casemodel.id = draftmessagemodel.entity_id
)
)
</code></pre>
|
<python><django><postgresql><django-queryset><django-orm>
|
2024-08-02 04:56:36
| 1
| 1,165
|
Kholdarbekov
|
78,823,804
| 2,707,864
|
Where is it specified the path to look for kernel.json, to select the jupyter kernel in anaconda?
|
<p>I am launching an Anaconda prompt from a Windows shortcut.
Its target is <code>%windir%\System32\cmd.exe "/K" %PYTHONDIR%\Scripts\activate.bat root</code>,
with <code>PYTHONDIR=C:\Users\User1\Anaconda\</code>.</p>
<p>At the prompt I get</p>
<pre><code>(base) C:\Users\User1\Documents> jupyter kernelspec list
[ListKernelSpecs] WARNING | Config option `kernel_spec_manager_class` not recognized by `ListKernelSpecs`.
Available kernels:
python3 C:\Users\User1\AppData\Roaming\jupyter\kernels\python3
(base) C:\Users\User1\Documents>
</code></pre>
<p>In the corresponding <code>C:\Users\User1\AppData\Roaming\jupyter\kernels\python3\kernel.json</code> (K1) I have</p>
<pre><code>{
"argv": [
"C:\\Users\\user1\\Documents\\appls_mydocs\\anaconda3\\python.exe",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3",
"language": "python"
}
</code></pre>
<p>And in another <code>C:\Users\User1\Anaconda\share\jupyter\kernels\python3\kernel.json</code> (K2) I have</p>
<pre><code>{
"argv": [
"C:/Users/User1/Anaconda\\python.exe",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3",
"language": "python"
}
</code></pre>
<p>Where is it stated which <code>kernel.json</code> to use to select the jupyter kernel (in this case, K1)?
I actually mean to use K2 whenever I launch an Anaconda prompt in that same directory.</p>
<p>If I rename K1 to something else and launch another Anaconda prompt, it points to K2, so there seems to be a search path of directories to look in.</p>
|
<python><json><anaconda><kernel><jupyter>
|
2024-08-02 04:42:33
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,823,339
| 20,087,266
|
Type hinting list of child classes when the original list is of the parent's type in Python
|
<p>Given a variable with the type hint <code>list[ParentItem]</code>, how can one assign another list to it with the type hint <code>list[ChildItem]</code>, where <code>ChildItem</code> is derived from <code>ParentItem</code>, without triggering linter type checking errors?</p>
<p>Consider the following contrived, minimal example in which the <a href="https://github.com/microsoft/pyright" rel="nofollow noreferrer">pyright</a> linter throws the following argument type checking error:</p>
<blockquote>
<p>Argument of type <code>list[ChildItem]</code> cannot be assigned to parameter <code>items</code> of type <code>list[ParentItem]</code> in function <code>__init__</code><br />
<code>list[ChildItem]</code> is incompatible with <code>list[ParentItem]</code>. Type parameter <code>_T@list</code> is invariant, but <code>ChildItem</code> is not the same as <code>ParentItem</code>. Consider switching from <code>list</code> to <code>Sequence</code> which is covariant</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>class ParentItem:
def __init__(self) -> None:
pass
class ChildItem(ParentItem):
def __init__(self) -> None:
super().__init__()
class ParentGroup:
def __init__(self, items: list[ParentItem]) -> None:
self._items = items
def add(self, item: ParentItem) -> None:
self._items.append(item)
class ChildGroup(ParentGroup):
def __init__(self, child_items: list[ChildItem]) -> None:
super().__init__(child_items) # pyright argument type checking error here
</code></pre>
<p>Changing the line:</p>
<pre class="lang-py prettyprint-override"><code> def __init__(self, child_items: list[ChildItem]) -> None:
</code></pre>
<p>to</p>
<pre class="lang-py prettyprint-override"><code> def __init__(self, child_items: list[ParentItem]) -> None:
</code></pre>
<p>superficially resolves the error, but this doesn't provide the desired type hints when the <code>ChildGroup</code> class is used. The error reappears later anyway when one attempts to create a <code>ChildGroup</code> instance using a list of <code>ChildItem</code> instances, e.g.:</p>
<pre><code>i1 = ChildItem()
i2 = ChildItem()
ilist = [i1, i2]
g_with_child_items = ChildGroup(ilist) # same error as before, but regarding assignment to "child_items" parameter instead of "items" parameter
</code></pre>
<p>How can the <code>child_items</code> variable of the <code>ChildGroup</code> class be correctly type annotated as <code>list[ChildItem]</code>?</p>
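<p>Following the hint in the error message itself, one direction I tried is accepting a covariant <code>Sequence</code> at the boundary and copying it into a private list — a sketch, not necessarily the idiomatic answer:</p>

```python
from typing import Sequence

class ParentItem:
    pass

class ChildItem(ParentItem):
    pass

class ParentGroup:
    def __init__(self, items: Sequence[ParentItem]) -> None:
        # Copy into a private list, so appending ParentItems here can never
        # corrupt the caller's (more specifically typed) list.
        self._items: list[ParentItem] = list(items)

    def add(self, item: ParentItem) -> None:
        self._items.append(item)

class ChildGroup(ParentGroup):
    def __init__(self, child_items: Sequence[ChildItem]) -> None:
        super().__init__(child_items)  # OK: Sequence is covariant

g = ChildGroup([ChildItem(), ChildItem()])
g.add(ParentItem())
```

<p>The copy is what makes the covariance safe: <code>add</code> mutates only the internal list, never the sequence the caller handed in.</p>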
|
<python><python-typing>
|
2024-08-01 23:32:51
| 1
| 4,086
|
Kyle F. Hartzenberg
|
78,823,052
| 13,578,682
|
What does "python3 -t" do?
|
<pre><code>$ python3 -t -c 'print("hello world")'
hello world
</code></pre>
<p>What does <code>-t</code> do? It's not mentioned in <code>python3 --help</code>.</p>
<p>Usually unknown options cause a non-zero exit code, like</p>
<pre><code>$ python3 -r
Unknown option: -r
usage: python3 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
</code></pre>
|
<python><command-line>
|
2024-08-01 21:09:11
| 1
| 665
|
no step on snek
|
78,822,863
| 3,711,985
|
pip and system python (3.12.4) don't work properly anymore after installing pyenv
|
<p>I installed pyenv to run some old code on Python 3.7.17. Now I need to go back to version 3.12.4, which is the system Python (from before installing pyenv). But right now I can't even run pip :/</p>
<pre><code>โ ~ pip install <CLI-package-name>
pyenv: pip: command not found
The `pip' command exists in these Python versions:
3.7.17
3.7.17/envs/glue_3.7.17
glue_3.7.17
Note: See 'pyenv help global' for tips on allowing both
python2 and python3 to be found.
โ ~ which pip
/Users/fmdnst/.pyenv/shims/pip
</code></pre>
<p>Same thing for running python:</p>
<pre><code>โ ~ python
pyenv: python: command not found
The `python' command exists in these Python versions:
3.7.17
3.7.17/envs/glue_3.7.17
glue_3.7.17
Note: See 'pyenv help global' for tips on allowing both
python2 and python3 to be found.
โ ~ which python
/Users/fmdnst/.pyenv/shims/python
</code></pre>
<p>Here is the output of pyenv versions, so I am on the correct python version (I guess):</p>
<pre><code>โ ~ pyenv versions
* system (set by /Users/fmdnst/.pyenv/version)
3.7.17
3.7.17/envs/glue_3.7.17
glue_3.7.17 --> /Users/fmdnst/.pyenv/versions/3.7.17/envs/glue_3.7.17
</code></pre>
<p>I also tried to install the package as follows, but it didn't work:</p>
<pre><code>โ ~ pip3.12 install <CLI-package-name>
error: externally-managed-environment
ร This environment is externally managed
โฐโ> To install Python packages system-wide, try brew install
xyz, where xyz is the package you are trying to
install.
If you wish to install a Python library that isn't in Homebrew,
use a virtual environment:
python3 -m venv path/to/venv
source path/to/venv/bin/activate
python3 -m pip install xyz
If you wish to install a Python application that isn't in Homebrew,
it may be easiest to use 'pipx install xyz', which will manage a
virtual environment for you. You can install pipx with
brew install pipx
You may restore the old behavior of pip by passing
the '--break-system-packages' flag to pip, or by adding
'break-system-packages = true' to your pip.conf file. The latter
will permanently disable this error.
If you disable this error, we STRONGLY recommend that you additionally
pass the '--user' flag to pip, or set 'user = true' in your pip.conf
file. Failure to do this can result in a broken Homebrew installation.
Read more about this behavior here: <https://peps.python.org/pep-0668/>
</code></pre>
<p>So following that instructions, I ran the following command:</p>
<pre><code>โ ~ python3.12 -m pip install --break-system-packages <CLI-package-name>
</code></pre>
<p>And this worked and said the package is installed successfully, but I can't find it anywhere and it's not available in bash. I'm quite lost and don't know what's happening. Could someone please help me to understand how to install a package as before?!
If I try to run python from bash, I can see the following options are available:</p>
<pre><code>โ ~ python
python python-config python3-config python3.12-config python3.7-config python3.7m
python-build python3 python3.12 python3.7 python3.7-gdb.py python3.7m-config
</code></pre>
<p>And if I run <code>python3.12</code> it gives me the correct python shell, but running <code>python</code> command says that <code>pyenv: python: command not found</code></p>
<p>Also, I added the following to my zshrc:</p>
<pre><code>โ ~ echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zshrc
โ ~ echo '[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc
โ ~ echo 'eval "$(pyenv init -)"' >> ~/.zshrc
</code></pre>
|
<python><pyenv><pyenv-virtualenv>
|
2024-08-01 20:05:19
| 1
| 5,842
|
Birish
|
78,822,817
| 759,991
|
TemplateSyntaxError Could not parse the remainder
|
<p>I have this bit of jinja2 in my Django template:</p>
<pre><code>{% for filesystem, total_quota, total_usage, df_usage in totals_by_filesystem %}
<tr>
<td>{{ filesystem }}</td>
<td>{{ total_quota | filesizeformat }}</td>
<td>{{ total_usage | filesizeformat }}</td>
<td>{{ df_usage * 100 }}</td>
</tr>
{% endfor %}
</code></pre>
<p>When I run it I get this error message:</p>
<pre><code>Exception Type: TemplateSyntaxError
Exception Value:
Could not parse the remainder: ' * 100' from 'df_usage * 100'
</code></pre>
<p>I am pretty sure my syntax <code>{{ df_usage * 100 }}</code> is correct. What am I missing here?</p>
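<p>In case it matters: unlike Jinja2, Django's own template language does not support arithmetic inside <code>{{ }}</code>, which is what this error usually means. A hedged sketch of doing the math in the view instead (the data here is made up):</p>

```python
# Hypothetical view-side workaround: precompute df_usage * 100 in Python,
# since Django's template language cannot evaluate `{{ df_usage * 100 }}`.
totals_by_filesystem = [
    ('/', 100, 50, 0.5),       # (filesystem, total_quota, total_usage, df_usage)
    ('/home', 200, 20, 0.25),
]

rows = [
    (fs, quota, usage, df_usage * 100)
    for fs, quota, usage, df_usage in totals_by_filesystem
]
```

<p>(Django's built-in <code>{% widthratio %}</code> tag can also compute a ratio times 100 directly in the template.)</p>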
|
<python><django><jinja2>
|
2024-08-01 19:47:45
| 2
| 10,590
|
Red Cricket
|
78,822,798
| 20,122,390
|
Can explicitly invoking the garbage collector in Python have side effects in this case?
|
<p>can anyone help me with this?
When I call the heal_client method, and gc.collect() is executed, it successfully closes my database connection (just what I need). Is there something wrong with this approach, specifically with calling gc.collect() ?
(This code is part of a larger context involving asyncio and other libraries that got me to this point. It is expected that this method should almost never be called, but it has to be done because I have no other way to close the client and it is a long-running program.)
In conclusion, this approach seems to solve my problem, but I want to know if there might be a side effect that I am not seeing.</p>
<pre><code>class RedisNotifier(INotifier):
def __init__(self):
self.__redis_client = redis.Redis(
host=settings.REDIS_PUBSUB_HOST,
port=settings.REDIS_PUBSUB_PORT,
password=settings.REDIS_PUBSUB_PASSWORD,
)
async def send_notification(self, room: str, notification: NotificationMessage) -> None:
await self.__redis_client.publish(room, notification.json())
def heal_client(self) -> None:
print("Healing redis client")
del self.__redis_client
gc.collect()
self.__redis_client = redis.Redis(
host=settings.REDIS_PUBSUB_HOST,
port=settings.REDIS_PUBSUB_PORT,
password=settings.REDIS_PUBSUB_PASSWORD,
)
print("done")
</code></pre>
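<p>As a point of comparison for why this "works": dropping the last reference lets the object's finalizer run, while an explicit <code>close()</code> is deterministic and does not depend on the collector at all. A toy sketch with a stand-in class (not redis):</p>

```python
import gc

class FakeClient:
    closed = []  # records close calls, for demonstration only

    def __init__(self, name):
        self.name = name

    def close(self):
        FakeClient.closed.append(self.name)

    def __del__(self):
        # finalizer: runs only once the last reference is gone
        self.close()

c = FakeClient('implicit')
del c            # in CPython, refcounting frees this immediately ...
gc.collect()     # ... gc.collect() mainly matters for reference cycles

c2 = FakeClient('explicit')
c2.close()       # deterministic: no reliance on garbage collection
```

<p>For redis clients specifically, recent redis-py versions expose an explicit <code>close()</code>/<code>aclose()</code> on the client, which would avoid needing <code>gc.collect()</code> at all — though I'd double-check the API for the version in use.</p>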
|
<python><garbage-collection>
|
2024-08-01 19:41:11
| 0
| 988
|
Diego L
|
78,822,778
| 798,099
|
In python, is there a way to print the user provided name (not the value) of a variable that is supplied to a function?
|
<p>I would like to print out the name of a variable that is supplied when a function is used in Python.</p>
<p>The goal is to use the supplied name in some printed text.</p>
<p>There are multiple questions dealing with similar topics, but I am unable to find an exact solution or approach.</p>
<p>I am running Python 3 on Windows.</p>
<p><code>(Python version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]</code></p>
<hr />
<p><strong>Previous questions and answers:</strong></p>
<p><a href="https://stackoverflow.com/questions/23725524/how-to-get-the-literal-value-of-a-parameter-passed-to-a-python-function">How to get literal value</a></p>
<p><a href="https://stackoverflow.com/questions/544919/how-to-print-original-variables-name-in-python-after-it-was-returned-from-a-fun">How to print original variables name</a></p>
<p><a href="https://stackoverflow.com/questions/32000934/print-a-variables-name-and-value/57225950#57225950">Print a name and value</a></p>
<p><a href="https://stackoverflow.com/questions/18425225/getting-the-name-of-a-variable-as-a-string/59364138#59364138">Getting the name as a string</a></p>
<hr />
<p><strong>Example:</strong></p>
<pre><code>import pandas as pd
data_ok = {'col_a':[1,1,1,1],\
'col_b':[2,2,2,2],\
'col_c':[3,3,3,3]}
data_no = {'col_a':[1,1,1,1],\
'col_b':[2,2,2,2],\
'col_d':[4,4,4,4]}
df_ok = pd.DataFrame(data_ok)
df_no = pd.DataFrame(data_no)
print(df_ok)
print(df_no)
def df_check(df_in):
need_cols = ['col_a', 'col_b', 'col_c']
have_cols = df_in.columns.tolist()
check_cols = all(e in have_cols for e in need_cols)
assert check_cols == True, f"---------- Import Error - Check dataframe {df_in} columns in file. The columns must include : {need_cols} ----------"
if check_cols == True:
print("\n\n", "-"*80, '(required columns found!' , need_cols)
print("\n\n", "-" * 80, '(continuing with analysis)' )
df_check(df_ok)
df_check(df_no)
-------------------------------------------------------------------------------- (required columns found! ['col_a', 'col_b', 'col_c'] )
-------------------------------------------------------------------------------- (continuing with analysis)
#...
assert check_cols == True, f"---------- Import Error - Check dataframe {df_in} columns in file. The columns must include : {need_cols} ----------"
^^^^^^^^^^^^^^^^^^
AssertionError: ---------- Import Error - Check dataframe col_a col_b col_d
0 1 2 4
1 1 2 4
2 1 2 4
3 1 2 4 columns in file. The columns must include : ['col_a', 'col_b', 'col_c'] ----------
</code></pre>
<hr />
<p><strong>Goal:</strong></p>
<p>Is there a way to have the printed message report the <em>name</em> of the provided dataframe as opposed to printing the data frame itself?</p>
<pre><code>AssertionError: ---------- Import Error - Check dataframe "df_no" columns in file. The columns must include : ['col_a', 'col_b', 'col_c'] ----------
</code></pre>
|
<python><variables><debugging>
|
2024-08-01 19:34:49
| 1
| 401
|
blue and grey
|
78,822,692
| 1,609,514
|
How to register dependencies programmatically in a Python DVC pipeline
|
<p>I want to run a sequence of experiments and each experiment will use certain input data files (dependencies), each of which I want to prepare when an experiment is run. (Some experiments will use the same input data sets so they won't need to be re-generated during subsequent experiments).</p>
<p>At first I thought I could do this with one 'master' pipeline that loops over each experiment:</p>
<p>dvc.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>stages:
prepare_data:
foreach: ${experiment_names}
do:
cmd: python stages/prepare_data.py "${item}"
deps:
- source_data
- stages/prepare_data.py
params:
- prepare_data
outs:
- input_data
run_simulation:
foreach: ${experiment_names}
do:
cmd: python stages/run_simulation.py "${item}"
deps:
- input_data
- stages/run_simulation.py
params:
- run_simulation
outs:
- results
</code></pre>
<p>The specific source data file used by each experiment may be different; it will be determined in <code>prepare_data.py</code> based on the experiment name that I pass to it, together with some <code>exp_spec.yaml</code> file that it will load, or perhaps from the <code>params.yaml</code> file.</p>
<p>What I'm struggling with is how to register the specific dependencies so that
(i) when an experiment requires an input data file that has already been prepared, it isn't regenerated, and
(ii) when one of the source data files is changed, only the simulations that use that file are re-run.</p>
<p>Obviously this can't be done in the above dvc.yaml file because it relates to all the experiments.</p>
<p>Do I need to build separate pipeline for each experiment to register the specific dependencies? If so, can this be done programmatically, or do I need to build them all by hand?</p>
<p><strong>UPDATE</strong></p>
<p>I completed the pipeline by writing simple scripts for <code>prepare_data.py</code>, <code>run_simulation.py</code> and <code>params.yaml</code> and tried to run it with the above <code>dvc.yaml</code> file.</p>
<p>This is the output:</p>
<pre class="lang-none prettyprint-override"><code>Reproducing experiment 'lobar-snob'
Building workspace index |1.46k [00:00, 3.76kentry/s]
Comparing indexes |1.42k [00:00, 5.59kentry/s]
WARNING: No file hash info found for '/dvc_pipelines/test_pipeline/results'. It won't be created.
WARNING: No file hash info found for '/dvc_pipelines/test_pipeline/input_data'. It won't be created.
Applying changes |0.00 [00:00, ?file/s]
ERROR: output 'input_data' is specified in:
- prepare_data@test_exp_2
- prepare_data@test_exp_1
Use `dvc remove` with any of the above targets to stop tracking the overlapping output.
</code></pre>
<p>As I expected, the problem seems to be related to the use of folder names, <code>input_data</code> and <code>results</code>, instead of specific files.</p>
|
<python><dependencies><pyyaml><dvc>
|
2024-08-01 19:08:17
| 2
| 11,755
|
Bill
|
78,822,565
| 892,621
|
Inlining external jsonschema references with Python
|
<p>I have a question similar to <a href="https://stackoverflow.com/questions/47054088/fully-expanding-ref-references-in-a-json-schema-with-python">this one</a> except all I want to do is take external references and inline them.</p>
<p>For example, let's say I have a schema like this one:</p>
<pre><code>{
"$defs": {
"JSONSchema": {
"$ref": "https://json-schema.org/draft/2020-12/schema"
}
},
"properties": {
"config_1_jsonschema": {
"$ref": "#/$defs/JSONSchema"
},
"config_2_jsonschema": {
"$ref": "#/$defs/JSONSchema"
}
}
}
</code></pre>
<p>I have two fields which reference the <code>JSONSchema</code> definition that in turn references a URL to the JSONSchema meta-schema. I would like to dereference the URL, inlining that schema, and recursively inlining all of its external references, but not inlining the local references for the two properties.</p>
<p>How would I go about this while keeping the schema valid? Is there an existing library in Python that would let me do this?</p>
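<p>Before reaching for a library, I sketched what such an inliner would need to do: resolve only refs that don't start with <code>#</code>, recursively, with a cycle guard. Note a real implementation must also rebase any local <code>#/...</code> refs <em>inside</em> a fetched document (libraries like <code>jsonref</code> or <code>referencing</code> handle that); <code>fetch</code> here is a stand-in for an HTTP getter:</p>

```python
def inline_external_refs(schema, fetch, _seen=frozenset()):
    """Recursively replace external $refs via fetch(url); local '#...' refs are kept."""
    if isinstance(schema, dict):
        ref = schema.get('$ref')
        if isinstance(ref, str) and not ref.startswith('#'):
            if ref in _seen:              # cycle guard: leave a repeated external ref as-is
                return dict(schema)
            return inline_external_refs(fetch(ref), fetch, _seen | {ref})
        return {k: inline_external_refs(v, fetch, _seen) for k, v in schema.items()}
    if isinstance(schema, list):
        return [inline_external_refs(v, fetch, _seen) for v in schema]
    return schema

# Fake "web": maps URLs to schemas, so the sketch stays offline.
docs = {
    'https://example.test/meta': {
        'type': 'object',
        'properties': {'n': {'$ref': 'https://example.test/inner'}},
    },
    'https://example.test/inner': {'type': 'string'},
}

schema = {
    '$defs': {'JSONSchema': {'$ref': 'https://example.test/meta'}},
    'properties': {'config_1_jsonschema': {'$ref': '#/$defs/JSONSchema'}},
}

result = inline_external_refs(schema, docs.__getitem__)
```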
|
<python><json><reference><jsonschema><inlining>
|
2024-08-01 18:30:24
| 1
| 682
|
papercrane
|
78,822,540
| 18,769,241
|
How to grab an object from the ground placed in front of NAO?
|
<p>I want to make NAO grab a ball placed on the ground in front of him at (0,0) in the (y,z) reference frame. I want the robot to grab it with the right hand (the ball being fit to its hand). For that I have developed the following code (using joint-angle-based arm movement as stated in the docs: <a href="http://doc.aldebaran.com/2-8/family/nao_technical/joints_naov6.html#naov6-joints-right-arm-joints" rel="nofollow noreferrer">http://doc.aldebaran.com/2-8/family/nao_technical/joints_naov6.html#naov6-joints-right-arm-joints</a>)</p>
<pre><code> nao_ip = "127.0.0.1"
nao_port = 9559
motion_proxy = ALProxy("ALMotion", nao_ip, nao_port)
posture_proxy = ALProxy("ALRobotPosture", nao_ip, nao_port)
posture_proxy.goToPosture("StandInit", 1.0)
motion_proxy.wakeUp()
posture_proxy.goToPosture("Crouch", 1.0)
#bend over to try to grab the ball placed on the ground
motion_proxy.changeAngles("LHipYawPitch", -0.4, 0.1)
time.sleep(5)
#moving the arm accordingly to make it reach the ball isn't sufficient
motion_proxy.setAngles(['RShoulderPitch', 'RShoulderRoll', 'RElbowYaw', 'RElbowRoll', 'RWristYaw', 'RHand'],
[0.6, 0.2, 0.0, 0.0349, 0.0, 0.05], 0.1)
time.sleep(10)
</code></pre>
<p>The problem is that the robot cannot seem to reach where the ball is placed. Is there a more effective way to make the robot grab the ball based on joint angles?</p>
<p>PS: the code is easily testable using Choregraph using a virtual robot (with the <code>IP=127.0.0.1</code>)</p>
|
<python><nao-robot><choregraphe>
|
2024-08-01 18:25:17
| 0
| 571
|
Sam
|
78,822,168
| 2,287,458
|
Use polars when-then-otherwise on multiple output columns at once
|
<p>Assume I have this dataframe</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'item': ['CASH', 'CHECK', 'DEBT', 'CHECK', 'CREDIT', 'CASH'],
'quantity': [100, -20, 0, 10, 0, 0],
'value': [99, 47, None, 90, None, 120],
'value_other': [97, 57, None, 91, None, 110],
'value_other2': [94, 37, None, 93, None, 115],
})
</code></pre>
<pre><code>โโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโ
โ item โ quantity โ value โ value_other โ value_other2 โ
โ --- โ --- โ --- โ --- โ --- โ
โ str โ i64 โ i64 โ i64 โ i64 โ
โโโโโโโโโโชโโโโโโโโโโโชโโโโโโโโชโโโโโโโโโโโโโโชโโโโโโโโโโโโโโโก
โ CASH โ 100 โ 99 โ 97 โ 94 โ
โ CHECK โ -20 โ 47 โ 57 โ 37 โ
โ DEBT โ 0 โ null โ null โ null โ
โ CHECK โ 10 โ 90 โ 91 โ 93 โ
โ CREDIT โ 0 โ null โ null โ null โ
โ CASH โ 0 โ 120 โ 110 โ 115 โ
โโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโดโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโ
</code></pre>
<p>Now I want to set all value columns to <code>0</code> for all rows where <code>value is null</code> and <code>quantity == 0</code>.</p>
<p>Right now I have this solution</p>
<pre class="lang-py prettyprint-override"><code>cols = ['value', 'value_other', 'value_other2']
df = df.with_columns(
pl.when(pl.col('value').is_null() & (pl.col('quantity') == 0))
.then(0)
.otherwise(pl.col(col))
.alias(col)
for col in cols
)
</code></pre>
<p>which correctly gives</p>
<pre><code>โโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโโโโฌโโโโโโโโโโโโโโโ
โ item โ quantity โ value โ value_other โ value_other2 โ
โ --- โ --- โ --- โ --- โ --- โ
โ str โ i64 โ i64 โ i64 โ i64 โ
โโโโโโโโโโชโโโโโโโโโโโชโโโโโโโโชโโโโโโโโโโโโโโชโโโโโโโโโโโโโโโก
โ CASH โ 100 โ 99 โ 97 โ 94 โ
โ CHECK โ -20 โ 47 โ 57 โ 37 โ
โ DEBT โ 0 โ 0 โ 0 โ 0 โ
โ CHECK โ 10 โ 90 โ 91 โ 93 โ
โ CREDIT โ 0 โ 0 โ 0 โ 0 โ
โ CASH โ 0 โ 120 โ 110 โ 115 โ
โโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโดโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโ
</code></pre>
<p>However, I feel this is very inefficient as my <code>when</code> condition is executed for every value column. Is there a way to achieve this using only polar internal functions & without the native for-loop?</p>
|
<python><dataframe><python-polars>
|
2024-08-01 16:56:20
| 2
| 3,591
|
Phil-ZXX
|
78,822,093
| 6,503,917
|
On VSCode, when using #%% cells, shift-enter no longer executes a cell in the interactive window and moves to the next cell
|
<p>On VSCode, when using #%% cells, shift-enter no longer executes a cell in the interactive window and moves to the next cell. It used to work for many years, but now shift-enter on some line of code throws an error in a new window titled 'Python REPL'. I can still use control-enter to execute a cell in the interactive window, but that does not move the cursor to the next cell. Could a VSCode update have changed the short-cut functionality?</p>
|
<python><visual-studio-code>
|
2024-08-01 16:37:46
| 3
| 419
|
javid
|
78,821,975
| 1,924,830
|
Can non-displayed fields of a Django model be altered during custom form validation but before committing model's data?
|
<p>For the last few weeks I've been working on my first Django project: a database to register all incoming orders from customers for a generic enterprise. Let's assume that each one of the sales reps can be assigned to a specific customer for a whole natural year, and assignments may vary from one year to another (i.e. agent A assigned to customer A in 2024, whereas in 2025, agent B might be in charge of customer A).</p>
<p>As a start, I decided to use a FK inside the Orders model to refer to the associated Product and Customer. At the same time, the Assignments model would also keep a couple of FKs to the Customer and Agent tables, so everything could be connected and accessible, and data loops were avoided in my relational diagram. Still, this solution made it really painful to retrieve data, especially when I needed filtered data for the final templates... so I decided to turn everything upside down and take a different approach:</p>
<pre><code>class Order(models.Model):
    assignment = models.ForeignKey("Assignment", on_delete=models.RESTRICT)
    product = models.ForeignKey("Product", on_delete=models.RESTRICT)
    quantity = models.PositiveIntegerField(default=1)
    order_date = models.DateField()
    price = models.DecimalField(max_digits=10, decimal_places=2)


class Assignment(models.Model):
    assignment_year = models.PositiveSmallIntegerField()
    customer = models.ForeignKey("Customer", on_delete=models.CASCADE)
    agent = models.ForeignKey("Agent", on_delete=models.CASCADE)

    class Meta:
        constraints = [
            UniqueConstraint(
                # must reference the actual field name, "assignment_year"
                fields=['assignment_year', 'customer'], name='Primary_Key_Assignment'
            )
        ]


class Customer(models.Model):
    name = models.CharField(max_length=64)
    address = models.CharField(max_length=64)
    city = models.CharField(max_length=32)
    working_time_start = models.TimeField(blank=True, null=True)
    working_time_end = models.TimeField(blank=True, null=True)

    class Meta:
        verbose_name_plural = "Customers"
        constraints = [
            CheckConstraint(
                check=Q(working_time_start__isnull=True, working_time_end__isnull=True) | Q(working_time_start__isnull=False, working_time_end__isnull=False, working_time_start__lte=F('working_time_end')),
                name='Wrong working time limits',
            ),
        ]
</code></pre>
<p>So Orders now directly reference a specific assignment, which I admit has some pros and cons, and I'm not completely comfortable with having the assignment's year and the order's date messing around in the same table, due to redundancies and potential inconsistencies.</p>
<p>That said, I prefer to hide all these tricky details from the user, and would like to have an admin view for Orders which hides the assignment FK field but still allows the user to select a Customer id. Afterwards, I would pick the customer's id and the year based on the order's date, and build up a valid assignment FK before the record is saved into the database.</p>
<p>Something like the following:</p>
<pre><code>from django.contrib import admin
from django.db.models import F, CharField
from django import forms
from django.db import models
from .models import Customer, Agent, Product, Order, Assignment


class OrderForm(forms.ModelForm):
    customers_list = forms.ModelChoiceField(queryset=Customer.objects.all())

    def clean(self):
        cleaned_data = self.cleaned_data
        form_date = cleaned_data['order_date']
        customer_form = cleaned_data['customers_list']
        cleaned_data['assignment'] = Assignment.objects.get(assignment_year=form_date.year, customer=customer_form.id)
        super(OrderForm, self).clean()
        return cleaned_data

    class Meta:
        model = Order
        fields = '__all__'


class OrderAdmin(admin.ModelAdmin):
    form = OrderForm
    fieldsets = (
        ('None', {
            'fields': ('customers_list', 'product', 'quantity', 'order_date', 'price', )
        }),
    )


admin.site.register(Order, OrderAdmin)
</code></pre>
<p>The clean method presented above complains because I am not including the assignment FK in the form, so I cannot modify it before starting validations at model's level (e.g. CheckConstraints, model's customized clean and save method, if any, etc.).</p>
<p>So my question is: is there any way I could build a valid Order model based on the inputs retrieved from the custom Order form, and append my pending assignment as a valid FK before the record is inserted into the database, even when no assignment field is displayed to the user?</p>
|
<python><django><django-models><django-forms>
|
2024-08-01 16:08:51
| 2
| 303
|
grover999
|
78,821,828
| 18,595,760
|
Intellisense issue when importing requests module in vscode
|
<pre><code>import sys
import requests
print(sys.executable)
r = requests.get("https://google.com")
print(r.status_code)
</code></pre>
<p>I wrote some simple code as above. IntelliSense works well when typing "requests.get()", but it stops working when typing "r.status_code". It seems that VSCode recognizes the type of the variable "r" as "Any" instead of "Response". I'm afraid that is the reason why IntelliSense doesn't work when typing "r.". How can I fix it?</p>
|
<python><visual-studio-code><python-requests><pylance>
|
2024-08-01 15:35:06
| 1
| 317
|
test1229
|
78,821,744
| 20,944,710
|
How to Scale YOLOv8 Inference using Databricks
|
<p>I have successfully trained a YOLOv8 model using the Ultralytics Python package and now aim to run inference on 100 million images stored in an S3 bucket. Currently, I have a Databricks notebook with GPU acceleration that performs inference, but I don't know how to scale this.</p>
<p>From the Databricks documentation, I gathered that using Databricks Autoloader to fetch images from S3 and MLflow to manage the model could help in scaling the batch inference process.</p>
<p>How can I efficiently scale the batch inference process for 100 million images in Databricks?
Should I use MLflow to manage and scale the inference jobs?</p>
<p>The current setup is running multiple notebooks with dedicated compute, which seems inefficient.</p>
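Whatever the orchestration layer (Autoloader, a `pandas_udf` over a Spark DataFrame of S3 paths, or plain parallel jobs), large-scale batch inference usually reduces to partitioning the object keys into fixed-size batches that independent tasks can process. A framework-agnostic sketch of that partitioning, with a stand-in <code>predict</code> function (the model call and key listing here are made up, not Databricks APIs):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive fixed-size batches from any iterable of S3 keys."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def predict(batch):
    # stand-in for the real call, e.g. YOLO(weights)(local_paths)
    return [f"result-for-{key}" for key in batch]

keys = (f"s3://bucket/img_{i}.jpg" for i in range(10))  # hypothetical listing
results = [r for batch in chunked(keys, 4) for r in predict(batch)]
```

Each batch is independent, so the same pattern maps directly onto Spark partitions, with the model loaded once per executor rather than once per image.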
|
<python><databricks><yolo><mlflow><yolov8>
|
2024-08-01 15:20:03
| 1
| 316
|
Bindestrich
|
78,821,557
| 9,571,463
|
Assigning an Attribute to a @staticmethod in Python
|
<p>I have a scenario where I have objects with static methods. The static methods are all built by an outside <code>build_hello()</code> factory function and assigned as class variables.</p>
<pre><code>def build_hello(name: str):
    @staticmethod
    def hello_fn():
        return "hello my name is "

    # Assign an attribute to the staticmethod so it can be used across all classes
    hello_fn.first_name = name
    print(hello_fn() + hello_fn.first_name)  # This works
    return hello_fn


class World:
    hello_fn = build_hello("bob")


# Error, function object has no attribute "first_name"
World.hello_fn.first_name
</code></pre>
<p>What is happening here? I am able to access the attribute of <code>hello_fn()</code> within the <code>build_hello()</code> function call, but when it's added to my class, that attribute is no longer present.</p>
<p>Also, if I call <code>dir()</code> on the static method, I do not see it present:</p>
<pre><code>dir(World.hello_fn)
['__annotations__',
'__builtins__',
'__call__',
'__class__',
'__closure__',
'__code__',
'__defaults__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__get__',
'__getattribute__',
'__getstate__',
'__globals__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__kwdefaults__',
'__le__',
'__lt__',
'__module__',
'__name__',
'__ne__',
'__new__',
'__qualname__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__type_params__']
</code></pre>
|
<python><class><static-methods>
|
2024-08-01 14:42:22
| 2
| 1,767
|
Coldchain9
|
78,821,465
| 1,471,980
|
How do you perform a division calculation on a number of columns over a common column in pandas
|
<p>I have this data frame:</p>
<pre><code>Node Band 5-May 12-May 19-May 26-May 2-June
Server1 10000 800 1000 12000 12500 500
Server2 30000 600 3000 12000 12500 500
Server3 17000 1500 12000 12500 500 1000
</code></pre>
<p>I need to calculate usage by dividing the values of the columns to the right of the "Band" column by the values in the "Band" column, in pandas.</p>
<p>Resulting data frame should look like this:</p>
<pre><code>df1
Node Band 5-May 12-May 19-May 26-May 2-June
Server1 10000 0.08 0.10 1.2 1.25 0.05
Server2 30000 0.02 0.1 0.4 0.42 0.02
etc
</code></pre>
<p>Then I have to count how many times each node's usage falls within certain percentage bands across the date columns. If, on any date, the usage for a node is between 70% and 80%, each such occurrence adds one to that node's count for that band. In this example, for node "Server1", the number of times it breached 100% is 2.</p>
<p>For example:</p>
<p>All summary:</p>
<pre><code>Node 70%-80% 80%-90% 90%-100% 100%+
Server1 0 0 0 2
Server2 0 0 0 0
</code></pre>
<p>I can do this by explicitly dividing each column by the Band column, but the number of columns is not known in advance; there could be many columns.</p>
<pre><code>df['5-May_%']=df['5-May']/df['Band']
</code></pre>
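A possible vectorized sketch, using the sample data from the question: divide every column to the right of Band at once with <code>DataFrame.div(..., axis=0)</code>, then melt and bucket with <code>pd.cut</code>. The bin edges here are assumptions — adjust <code>bins</code>/<code>labels</code> to the real bands:

```python
import pandas as pd

df = pd.DataFrame({
    'Node': ['Server1', 'Server2', 'Server3'],
    'Band': [10000, 30000, 17000],
    '5-May': [800, 600, 1500],
    '12-May': [1000, 3000, 12000],
    '19-May': [12000, 12000, 12500],
    '26-May': [12500, 12500, 500],
    '2-June': [500, 500, 1000],
})

value_cols = df.columns[2:]                      # everything right of "Band"
usage = df[value_cols].div(df['Band'], axis=0)   # row-wise division by Band
usage.insert(0, 'Node', df['Node'])

bins = [0.7, 0.8, 0.9, 1.0, float('inf')]
labels = ['70%-80%', '80%-90%', '90%-100%', '100%+']
melted = usage.melt(id_vars='Node', value_name='pct')
melted['bucket'] = pd.cut(melted['pct'], bins=bins, labels=labels, right=False)
summary = (melted.groupby(['Node', 'bucket'], observed=False)
                 .size().unstack(fill_value=0))
print(summary)
```

Values below 70% fall outside all bins (NaN bucket) and are dropped by the groupby, so only breaches are counted.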
|
<python><pandas>
|
2024-08-01 14:19:48
| 2
| 10,714
|
user1471980
|
78,821,429
| 7,343,051
|
Diagonal line from start to endpoints in log x-scale matplotlib
|
<p>I am trying to plot a diagonal line from point (0, 0) to (1, 1) in <code>matplotlib</code> using a logarithmic x-scale. This is the MWE:</p>
<pre class="lang-py prettyprint-override"><code>plt.plot([0, 1], [0, 1])
plt.xscale('log')
plt.show()
</code></pre>
<p>I would expect to see an exponential-looking curve in such a semi-log plane, something that I do get using</p>
<pre class="lang-py prettyprint-override"><code>x = np.linspace(0, 1, 100)
plt.plot(x, x)
</code></pre>
<p>but I see a ~horizontal line at <code>y=1</code>. It is similar for the <code>plt.yscale('log')</code>, but it is expectedly a ~vertical line at <code>x=1</code> this time. Is this expected behaviour?</p>
|
<python><matplotlib><xscale>
|
2024-08-01 14:13:19
| 1
| 314
|
gasar8
|
78,821,371
| 2,562,927
|
Django query to sum the n largest related values
|
<p>I have a db that tracks players' scores over a number of games. I want to create a leaderboard using each player's 4 highest scoring games.</p>
<p>Here are my models:</p>
<pre><code>class Event(models.Model):
name = models.CharField(max_length=120)
class Player(models.Model):
name = models.CharField(max_length=45)
class Game(models.Model):
name = models.CharField(max_length=15)
class Score(models.Model):
player = models.ForeignKey(Player, on_delete=models.CASCADE)
game = models.ForeignKey(Game, on_delete=models.CASCADE)
event = models.ForeignKey(Event, on_delete=models.CASCADE)
score = models.IntegerField()
class Meta:
unique_together = ["player", "event", "game"]
</code></pre>
<p>I've tried using a subquery, but it seems like my slice is being ignored and the sum returned is all of the player's scores, not just their top 4 scores:</p>
<pre><code>scores = Score.objects.filter(player=OuterRef('pk'), event=myEvent).order_by('-score')[:4]
scores = scores.annotate(total=Func(F("score"), function="SUM")).values('total')
leaders = Player.objects.annotate(total=Subquery(scores)).order_by('-total')
</code></pre>
<p>Any ideas on how to get the subquery working? Or a better way to approach the problem?</p>
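For context, the usual culprit is that aggregating a sliced subquery makes the ORM apply the <code>SUM</code> over the unsliced rows, so the slice has to be applied before aggregation (e.g. by sub-selecting the top-4 ids first). As a sanity check of the target result — not the ORM solution being asked for — the leaderboard can be computed client-side once the <code>(player, score)</code> pairs are fetched; the data below is made up:

```python
from collections import defaultdict
from heapq import nlargest

# rows as if fetched via:
#   Score.objects.filter(event=my_event).values_list('player__name', 'score')
rows = [
    ('alice', 10), ('alice', 50), ('alice', 30), ('alice', 40), ('alice', 20),
    ('bob', 5), ('bob', 7),
]

by_player = defaultdict(list)
for name, score in rows:
    by_player[name].append(score)

# sum each player's 4 highest scores, sort descending
leaders = sorted(
    ((name, sum(nlargest(4, scores))) for name, scores in by_player.items()),
    key=lambda pair: -pair[1],
)
print(leaders)  # [('alice', 140), ('bob', 12)]
```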
|
<python><sql><django>
|
2024-08-01 14:01:14
| 1
| 1,133
|
desired login
|
78,821,355
| 10,517,777
|
Clone git repository and install python packages in a shared folder path
|
<p>I have a Python project in a git repository, and I am creating an Azure pipeline to do the following tasks for that project:</p>
<ol>
<li>If git repository does not exist on a shared folder path, then clone it, else pull the last code.</li>
<li>If virtual environment folder (.venv) does not exist on a shared folder path, then create and activate the virtual environment, else activate the current environment.</li>
<li>Install python packages located in the file requirements.txt</li>
</ol>
<p>This is my current YAML file to reach the previous steps:</p>
<pre><code>trigger:
- main

pool:
  vmImage: 'windows-latest'
  name: 'On Premise Windows'
  demands: Agent.Name -equals [Agent_name]

variables:
- group: Variable_Group
- name: PAT
  value: $[variables.PAT]

steps:
- checkout: self
  displayName: 'Checkout Repository'

- powershell: |
    $parent_folder = '\\server\my\shared\folder\path'
    $target_folder = Join-Path -Path $parent_folder -ChildPath '[project_name]'
    $target_folder_exists = Test-Path -Path $target_folder
    if ($target_folder_exists) {
        cd $target_folder
        git pull
    } else {
        git clone 'https://$(PAT)@dev.azure.com/my/git/project' $target_folder
    }
  enabled: True
  displayName: 'Clone or Pull Git repository'

- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.11'

- powershell: |
    $parent_folder = '\\server\my\shared\folder\path\[project_name]'
    $target_folder = Join-Path -Path $parent_folder -ChildPath '.venv'
    $requirements_path = Join-Path -Path $parent_folder -ChildPath 'requirements.txt'
    $target_folder_exists = Test-Path -Path $target_folder
    if (-not $target_folder_exists) {
        cd $parent_folder
        & C:\"Program Files"\Python311\python.exe -m venv .venv
    }
    pip install -r $requirements_path
  displayName: 'Setup Python Environment and Install Dependencies'
</code></pre>
<p>The agent I am using is intalled On-Premise server and my shared folder path is located in an Azure VM. I am getting the following error:</p>
<blockquote>
<p>========================== Starting Command Output =========================== "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo
-NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command ". '....'" fatal:
could not create leading directories of
'\server\my\shared\folder\path':
No such file or directory
##[error]PowerShell exited with code '1'. Finishing: Clone or Pull Git repository</p>
</blockquote>
<p>It seems the on-premise server must have access to that shared folder path. Therefore, I have the following questions:</p>
<ul>
<li>Could the on-premise server gain access to the shared folder path if I provide a user which has access to it? If so, how should I pass the user and password to access that path? If not, what would be the proper way to achieve it?</li>
<li>Is PowerShell a good approach to clone my repository and then install the Python packages? If not, what would be the right approach?</li>
</ul>
|
<python><azure-pipelines-yaml>
|
2024-08-01 13:57:58
| 2
| 364
|
sergioMoreno
|
78,821,132
| 401,173
|
How do I get the AWS Glue Job CDK construct to update the script asset?
|
<p>I have a Glue Job and an S3 Asset defined in CDK. The resources create correctly but the update of the script file does not trigger the Glue Job to update its script in the resource.</p>
<p>How do I get the job to update when I update the content of the asset?</p>
<pre><code> dynamodb_etl_job_script = s3_assets.Asset(self, "dynamodb-etl-job", path="assets/dynamodb-etl-job.py")
etl_job = glue.CfnJob(self,
f"etl-{dynamodb_table_name}-glue-job",
name=f"ETL-{dynamodb_table_name}-glue-job",
role=f"arn:aws:iam::{self.account}:role/{crawler_role_name}",
command=glue.CfnJob.JobCommandProperty(
name="glueetl",
script_location=dynamodb_etl_job_script.s3_object_url,
python_version="3"
),
default_arguments={
"--enable-metrics": "true",
"--enable-spark-ui": "true",
"--extra-py-files": "s3://aws-glue-studio-transforms-510798373988-prod-us-east-1/gs_common.py,s3://aws-glue-studio-transforms-510798373988-prod-us-east-1/gs_flatten.py",
"--spark-event-logs-path": f"s3://{glue_asset_bucket.bucket_name}/sparkHistoryLogs/",
"--enable-job-insights": "true",
"--enable-observability-metrics": "true",
"--enable-glue-datacatalog": "true",
"--enable-continuous-cloudwatch-log": "true",
"--job-bookmark-option": "job-bookmark-enable",
"--job-language": "python",
"--TempDir": f"s3://{glue_asset_bucket.bucket_name}/temporary/",
"--enable-auto-scaling": "true",
"--glue-asset-bucket-name": glue_asset_bucket_name,
"--glue-database-name": glue_database_name,
"--glue-table-name": table_prefix+dynamodb_table_name,
"--dynamodb-table-arn": boto3_dynamodb_table.table_arn,
"--dynamodb-table-name": dynamodb_table_name,
"--glue-export-bucket-name": glue_export_bucket_name,
},
max_retries=0,
timeout=glue_job_timeout,
number_of_workers=glue_job_number_of_workers,
worker_type=glue_job_worker_type,
glue_version="4.0"
)
</code></pre>
|
<python><aws-glue><aws-cdk>
|
2024-08-01 13:05:58
| 0
| 3,241
|
Josh Russo
|
78,820,957
| 9,885,747
|
Understanding File Access in Databricks File System (DBFS) versus Volumes with Python and Spark
|
<p>I am currently trying to read and display a file from the Databricks File System (DBFS), but I encountered an issue. Here is the code I was using:</p>
<pre><code>file_path = "/dbfs/cluster-logs/use_case/default_job_cluster/cluster_id/init_scripts/cluster_id/20240801_proxy-init.sh.stderr.log"

with open(file_path, 'r') as file:
    contents = file.read()

print(contents)
</code></pre>
<p>However, interestingly I get the following error:</p>
<pre><code>bash: line 11: /Volumes/landing/default/artifacts/projects/use_case/databricks/scripts/proxy-init.sh: No such file or directory
</code></pre>
<p>As you can see, the path in the error did not match the original input.
In the end, I was able to correctly read and display the log file content with the following code:</p>
<pre><code>file_path = "/dbfs/cluster-logs/use_case/default_job_cluster/cluster_id/init_scripts/cluster_id/20240801_proxy-init.sh.stderr.log"

from pyspark.sql import functions as F
from pyspark.sql.functions import collect_list

if dbutils.fs.ls(file_path):
    file_df_to_check = spark.read.text(file_path).agg(collect_list("value").alias("all_lines"))
    display(file_df_to_check)
</code></pre>
<p>Questions:</p>
<ol>
<li>Why does the first code snippet produce an error referring to the volume path?</li>
<li>What does it mean in the documentation that DBFS provides a <a href="https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-volumes" rel="nofollow noreferrer">scheme for volumes</a>? Shouldn't the first snippet work then?</li>
<li>Why can the file only be read using Spark and not with the standard Python open function?</li>
</ol>
<p>Thank you for your assistance.</p>
|
<python><apache-spark><file-io><filesystems><databricks>
|
2024-08-01 12:26:26
| 0
| 1,685
|
DataBach
|
78,820,951
| 1,749,980
|
EasyOCR inconsistent in recognizing letter "e" when by itself
|
<p>I have a function in Python which reads text in Portuguese with EasyOCR. For some reason it doesn't always recognize the <strong>"e"</strong> between bigger words, which is a <strong>common connector word in this language.</strong></p>
<p>Is there any way in which I could force EasyOCR to recognize the "e" as part of the phrase?</p>
<pre class="lang-python prettyprint-override"><code>def getOCRResults(imgFP) -> list[tuple[tuple[int,int,int,int]|None, str|None, float|None]]:
    reader = easyocr.Reader(['pt'], gpu=False)
    result = reader.readtext(imgFP, width_ths=0.7, add_margin=0.2, height_ths=0.8)
    return result


def draw_bounding_boxes(image, detections, threshold=0.25):
    for bbox, text, score in detections:
        if score > threshold:
            cv2.rectangle(image, tuple(map(int, bbox[0])), tuple(map(int, bbox[2])), (0, 255, 0), 1)
            cv2.putText(image, text, tuple(map(int, bbox[0])), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (255, 0, 0), 1)


OCRResults = getOCRResults(image_path)
img = cv2.imread(image_path)
draw_bounding_boxes(img, OCRResults, 0.25)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGBA))
plt.show()
</code></pre>
<p>Part of the image where the letter "<strong>e</strong>" is not recognized as text when it stands alone.
<a href="https://i.sstatic.net/Um98u2DE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um98u2DE.png" alt="Bounding Box of recognized texts" /></a></p>
<p><strong>Another area in the same image</strong> where it recognized the "e" as part of the text in some of the options (2º and 4º). The 1º and 3º options in the combobox also show that the "e" hasn't been recognized.</p>
<p><a href="https://i.sstatic.net/0kpS7VEC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kpS7VEC.png" alt="Another area in the same image" /></a></p>
|
<python><ocr><easyocr>
|
2024-08-01 12:24:47
| 1
| 448
|
Anika
|
78,820,861
| 17,194,313
|
Fold or "unpivot" similar column names (with matching prefix) into structs using Polars
|
<p>I have a wide polars data frame that follows a very consistent pattern</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

pl.Config(tbl_cols=10)

schema = [
    'x-latest', 'x-mean', 'x-std',
    'y-latest', 'y-mean', 'y-std',
    'z-latest', 'z-mean', 'z-std'
]

df = pl.DataFrame([range(n, n + 3) for n in range(0, 27, 3)], schema=schema)
</code></pre>
<pre><code>โโโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโ
โ x-latest โ x-mean โ x-std โ y-latest โ y-mean โ y-std โ z-latest โ z-mean โ z-std โ
โ --- โ --- โ --- โ --- โ --- โ --- โ --- โ --- โ --- โ
โ i64 โ i64 โ i64 โ i64 โ i64 โ i64 โ i64 โ i64 โ i64 โ
โโโโโโโโโโโโชโโโโโโโโโชโโโโโโโโชโโโโโโโโโโโชโโโโโโโโโชโโโโโโโโชโโโโโโโโโโโชโโโโโโโโโชโโโโโโโโก
โ 0 โ 3 โ 6 โ 9 โ 12 โ 15 โ 18 โ 21 โ 24 โ
โ 1 โ 4 โ 7 โ 10 โ 13 โ 16 โ 19 โ 22 โ 25 โ
โ 2 โ 5 โ 8 โ 11 โ 14 โ 17 โ 20 โ 23 โ 26 โ
โโโโโโโโโโโโดโโโโโโโโโดโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโดโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโดโโโโโโโโ
</code></pre>
<p>And I am looking to "fold it" into structs</p>
<pre><code>โโโโโโโโโโโโโฌโโโโโโโโโโโโโฌโโโโโโโโโโโโโ
โ x-stats โ y-stats โ z-stats โ
โ --- โ --- โ --- โ
โ struct[3] โ struct[3] โ struct[3] โ
โโโโโโโโโโโโโชโโโโโโโโโโโโโชโโโโโโโโโโโโโก
โ {0,3,6} โ {9,12,15} โ {18,21,24} โ
โ {1,4,7} โ {10,13,16} โ {19,22,25} โ
โ {2,5,8} โ {11,14,17} โ {20,23,26} โ
โโโโโโโโโโโโโดโโโโโโโโโโโโโดโโโโโโโโโโโโโ
</code></pre>
<p>I can do this in a python list comprehension as such:</p>
<pre class="lang-py prettyprint-override"><code>cols = ['x', 'y', 'z']

df.select(*[
    pl.struct(
        latest=f'{c}-latest',
        mean=f'{c}-mean',
        std=f'{c}-std'
    ).alias(f'{c}-stats')
    for c in cols
])
</code></pre>
<p>But I may be missing the existence of some polars functionality - some "match by prefix fold" or similar.</p>
<p>Does this exist?</p>
|
<python><dataframe><python-polars>
|
2024-08-01 12:04:13
| 1
| 3,075
|
MYK
|
78,820,475
| 7,766,155
|
Cannot import OllamaEmbedding from llama_index.embeddings.ollama
|
<p>I get error:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 from llama_index.embeddings.ollama import OllamaEmbedding
ModuleNotFoundError: No module named 'llama_index.embeddings.ollama'
</code></pre>
<p>I have both <code>llama-index</code>, <code>llama-index-embeddings-ollama</code> installed using pip.</p>
|
<python><python-3.x><pip><llama-index><ollama>
|
2024-08-01 10:38:23
| 2
| 301
|
AmirWG
|
78,820,362
| 4,941,009
|
VSCode quick fix does not suggest imports from a Python package in the same workspace
|
<h2> Overview </h2>
I have a VSCode workspace which contains two directories. In one is my main application, in another is a common package used by this application and a couple of others.
<pre><code> # main-app.code-workspace
"folders": [
{
"path": "."
},
{
"path": "../common/shared_package"
}
],
</code></pre>
<p>I am able to import classes from the shared package if I manually type out the import string (<code>from shared_package.utils.clock import SuperClock</code>), but VSCode's quick-fix isn't offering any help with this.</p>
<p><a href="https://i.sstatic.net/6cF5NZBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6cF5NZBM.png" alt="Image showing that quick fix does not offer suggestion when I type out SuperClock" /></a></p>
<ol>
<li>Imports from other dependencies typically work with quick-fix (eg if I type <code>trio</code> I get an Auto-Import suggestion from VS Code)</li>
<li>Imports from within main-app also work fine with quick-fix</li>
<li>Manually typing out the import to pull in classes from shared_package is fine</li>
<li>Quick fix does not work with shared_package classes</li>
</ol>
<h2> Additional Info </h2>
<ol>
<li>This is a Python project which uses Poetry for dependency management</li>
<li>my pyproject.toml file includes this line</li>
</ol>
<pre><code>[tool.poetry.dependencies]
shared-hub-be = { path = "../common/shared_package", develop = true }
</code></pre>
<ol start="3">
<li>I think it might be a red herring, but I am seeing this error (and only this error) in output when I open VSCode</li>
</ol>
<pre><code>2024-08-01 10:50:48.793 [error] Error: spawn pixi ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn pixi',
path: 'pixi',
spawnargs: [ '--version' ]
}
</code></pre>
<ol start="4">
<li><code>"python.languageServer": "Pylance"</code> is set in workspace file</li>
</ol>
<h2> What have I tried so far </h2>
I did what any of us would do and spent 6 months manually typing out the import statements.
<p>Beyond that I tried some back and forth with ChatGPT - it suggested</p>
<ol>
<li>Add the following to both my VSCode settings file and the Workspace settings file.</li>
</ol>
<pre><code>"python.analysis.extraPaths": ["../common/shared_package"]
</code></pre>
<ol start="2">
<li>Update PYTHONPATH to include <code>/absolute/path/to/common/shared_package</code></li>
<li>Try removing develop=true and then rerunning <code>poetry install</code></li>
</ol>
<p>None of this stuff helped and has now been reverted.</p>
|
<python><visual-studio-code><python-poetry><pylance>
|
2024-08-01 10:09:30
| 0
| 468
|
EmmaO91
|
78,820,328
| 3,186,922
|
How to add the instana span context to HTTP headers?
|
<p>Service A has an Instana span (say <code>spanX</code>), and I need to send it to service B via an HTTP call so that a child span (say <code>spanY</code>) can be made out of it. I read through the docs and found that in OpenTelemetry a <code>TextMapPropagator</code> can be used to inject a span context into HTTP headers, but I couldn't find a doc or code snippet for it.</p>
<pre><code>Service A
======
SpanX = tracer.start_span("parent")
part1. create a span context of SpanX to put in the http header
-> HTTP call to service B
SERVICE B
======
part2. Read the span context of SpanX from the HTTP header and create a child span, span Y out of it
</code></pre>
<p>How to do Part1 and Part2 is what I am exactly looking for. Any help is greatly appreciated. Thanks in advance</p>
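For reference, with OpenTelemetry this is typically one call each way: <code>opentelemetry.propagate.inject(headers)</code> inside the active span on service A, and <code>ctx = opentelemetry.propagate.extract(request.headers)</code> on service B, passed as <code>context=</code> when starting spanY. Under the hood, the default W3C TraceContext propagator just writes and parses a single <code>traceparent</code> header; a dependency-free sketch of that wire format (the ids below are made up):

```python
# Part 1 (service A): serialize spanX's context into a 'traceparent' header.
# This is what opentelemetry.propagate.inject(headers) writes for you.
trace_id = "0af7651916cd43dd8448eb211c80319c"   # hypothetical 16-byte trace id (hex)
span_id = "b7ad6b7169203331"                    # hypothetical 8-byte span id (hex)
headers = {"traceparent": f"00-{trace_id}-{span_id}-01"}

# Part 2 (service B): parse the header back (what propagate.extract does)
# and use the recovered ids as the parent context when starting spanY
version, parent_trace_id, parent_span_id, flags = headers["traceparent"].split("-")
print(parent_trace_id, parent_span_id)
```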
|
<python><open-telemetry><instana>
|
2024-08-01 10:00:53
| 1
| 6,332
|
Hari Krishnan
|
78,820,308
| 2,697,895
|
How can I call a Java class method from Python?
|
<p>I am making an Android app in Python using <code>briefcase</code> from <code>BeeWare</code> that must start a service. And I have this code...</p>
<p>This is the relevant code from file <strong>MainActivity.java</strong>:</p>
<pre><code>package org.beeware.android;

import com.chaquo.python.Kwarg;
import com.chaquo.python.PyException;
import com.chaquo.python.PyObject;
import com.chaquo.python.Python;
import com.chaquo.python.android.AndroidPlatform;

public class MainActivity extends AppCompatActivity {

    public static MainActivity singletonThis;

    protected void onCreate(Bundle savedInstanceState) {
        singletonThis = this;
        // ... start Python
    }

    public void startMyService() {
        Intent intent = new Intent(this, MyService.class);
        startService(intent);
    }
</code></pre>
<p>And this is the relevant code from <strong>app.py</strong> that my intuition came up with:</p>
<pre><code>from chaquopy import Java


class Application(toga.App):
    # ...UI code here

    def start_tcp_service(self, widget):
        msg = 'START pressed !'
        print(msg); self.LogMessage(msg)
        self.CallJavaMethod('startMyService')

    def CallJavaMethod(self, method_name):
        MainActClass = Java.org.beeware.android.MainActivity
        MainActivity = MainActClass.singletonThis
        method = getattr(MainActivity, method_name)
        method()
</code></pre>
<p>Now, when I try to run the project with <code>briefcase run android -u</code> on my Android phone, through the USB debugging bridge, I get the error:</p>
<blockquote>
<p>E/AndroidRuntime: java.lang.RuntimeException: Unable to start activity
ComponentInfo{com.example.myapp/org.beeware.android.MainActivity}:
com.chaquo.python.PyException: ModuleNotFoundError: No module named
'chaquopy'</p>
</blockquote>
<p>It seems that there isn't any module with the name <code>chaquopy</code>. I tried to install it with pip, but it is not found. But then, how can I access the MainActivity methods from Python? What is the correct module to import?</p>
<p>I found <a href="https://chaquo.com/chaquopy/doc/current/python.html" rel="nofollow noreferrer">here some documentation</a> that says "The <code>java</code> module provides facilities to use Java classes and objects from Python code.". I tried to <code>import java</code> but this is not found either... It seems that this page explains how to access Java from Python, but I don't understand all that is there, because this is my first interaction with Java and Android...</p>
|
<python><java><chaquopy>
|
2024-08-01 09:56:28
| 1
| 3,182
|
Marus Gradinaru
|
78,820,057
| 10,200,497
|
How can I find the maximum value of a dynamic window and the minimum value below it?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'a': [3, 1, 2, 5, 10, 3, 13, 3, 2],
    }
)
</code></pre>
<p>Expected output is creating a <code>a_max</code> and <code>a_min</code>:</p>
<pre><code> a a_max a_min
0 3 NaN NaN
1 1 3 1
2 2 3 1
3 5 3 1
4 10 3 1
5 3 10 3
6 13 10 3
7 3 13 3
8 2 13 2
</code></pre>
<p>Logic:</p>
<p>I explain the logic row by row. There is an expanding window over this <code>df</code>: the first instance of the window considers only the first row, the second instance the first two rows, and so on, as below:</p>
<p><a href="https://i.sstatic.net/fzJm1uT6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzJm1uT6.png" alt="enter image description here" /></a></p>
<p>These are the first four windows. It expands accordingly.</p>
<p>For each window I need to find the maximum value and after that I need to find the minimum value BELOW that maximum value.</p>
<p>I start explaining it from the yellow window. For this window the max value is 3 and the min value BELOW it is 1. So that is why <code>a_max</code> and <code>a_min</code> for this window is 3 and 1.</p>
<p>Now, for the orange window the maximum value is 5, but since there are no values in this window below (after) that maximum, the previous <code>a_max</code> and <code>a_min</code> are repeated.</p>
<p>And the logic continues for the rest of rows.</p>
<p>This is my attempt:</p>
<pre><code>df['a_max'] = df.a.cummax()
df['a_min'] = df.a.cummin()
</code></pre>
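Reading the rules literally — for each expanding prefix, take the position of the (first) maximum, then the minimum of the values after that position, carrying the previous pair forward when nothing follows the maximum — a plain-loop sketch reproduces the expected output:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [3, 1, 2, 5, 10, 3, 13, 3, 2]})

a = df['a'].to_numpy()
a_max, a_min = [], []
prev = (np.nan, np.nan)
for i in range(len(a)):
    window = a[: i + 1]          # expanding window up to and including row i
    k = int(window.argmax())     # position of the (first) maximum
    after = window[k + 1:]       # values BELOW the maximum
    if len(after):
        prev = (window[k], after.min())
    a_max.append(prev[0])
    a_min.append(prev[1])

df['a_max'] = a_max
df['a_min'] = a_min
print(df)
```

This is O(n²) in the worst case; it is meant to pin down the rule, not to be the vectorized answer.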
|
<python><pandas><dataframe>
|
2024-08-01 09:07:51
| 1
| 2,679
|
AmirX
|
78,820,055
| 546,465
|
Why is attn_mask in PyTorch' MultiheadAttention specified for each head separately?
|
<p>PyTorch <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention" rel="nofollow noreferrer">MultiheadAttention</a> allows specifying the attention mask as either 2D or 3D. The former is broadcast over all N batch entries; the latter allows specifying a distinct mask for each example in the batch. All of this makes sense. However, from the documentation, the 3D mask is defined as follows:</p>
<blockquote>
<p>(N⋅num_heads, L, S), where N is the batch size, L is the target sequence length, and S is the source sequence length.</p>
</blockquote>
<p>Two questions arise:</p>
<ol>
<li>Why would anyone want a different mask for different heads?</li>
<li>If we have to do it this way anyway, how are they ordered? i.e. is it (Example1, Head1), (Example1, Head2),... etc OR is it (Example1, Head1), (Example2, Head1),... ? This is also asked in the comments at this <a href="https://stackoverflow.com/questions/62629644/what-the-difference-between-att-mask-and-key-padding-mask-in-multiheadattnetion">question</a>.</li>
</ol>
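One empirical way to probe question 2 (a sketch; `average_attn_weights` requires a reasonably recent PyTorch): mask exactly one slot of the 3D mask and see which (example, head) pair of the returned per-head weights got zeroed:

```python
import torch
from torch import nn

torch.manual_seed(0)
N, L, E, H = 2, 3, 8, 2                    # batch, seq len, embed dim, heads
mha = nn.MultiheadAttention(E, H, batch_first=True)
q = torch.randn(N, L, E)

# 3D mask of shape (N * num_heads, L, S); True = position is masked out.
mask = torch.zeros(N * H, L, L, dtype=torch.bool)
mask[1, :, 1:] = True                      # mask slot 1 (all but one column)

# average_attn_weights=False returns per-head weights: (N, num_heads, L, S)
_, w = mha(q, q, q, attn_mask=mask, average_attn_weights=False)

# If slot 1 turns out to be (example 0, head 1), the ordering is batch-major:
# index = example * num_heads + head.
print(bool((w[0, 1, :, 1:] == 0).all()))
```

This matches how the implementation reshapes projections to `(bsz * num_heads, ...)`, which interleaves heads within each example, but checking it empirically like this guards against version differences.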
|
<python><pytorch><large-language-model><transformer-model><multihead-attention>
|
2024-08-01 09:07:23
| 1
| 4,712
|
Bastiaan
|
78,819,839
| 6,036,210
|
Stripe Webhook timeout on localhost with FastAPI
|
<p>I have an issue with a stripe webhook, which I am running locally for the test.</p>
<pre><code>async def my_webhook_view(request: Request, db: Session = Depends(get_db)):
payload = await request.body()
sig_header = request.headers.get('stripe-signature')
event = None
try:
event = stripe.Webhook.construct_event(
payload, sig_header, ENDPOINT_SECRET
)
except ValueError as e:
# Invalid payload
raise HTTPException(status_code=400, detail="Invalid payload")
except stripe.error.SignatureVerificationError as e:
# Invalid signature
raise HTTPException(status_code=400, detail="Invalid signature")
if event['type'] == 'checkout.session.completed':
#email = event['data']['object']['customer_details']['email']
# Check if the event has already been processed
event_id = event['id']
if db.query(ProcessedEvent).filter_by(id=event_id).first():
return {"message": "Event already processed"}
# Process the event
session = event['data']['object']
db_event = ProcessedEvent(id=event_id)
db.add(db_event)
db.commit()
msg = fulfill_checkout(event['data']['object']['id'])
if msg == "Event received":
print("Event received")
return JSONResponse(status_code=200, content={"message": "Success"})
else:
print("Event unprocessed")
return JSONResponse(status_code=200, content={"message": "Cancel"})
</code></pre>
<p>There is no delay (about 1 s): the event is received immediately, and the debugger does reach <code>return JSONResponse(status_code=200, content={"message": "Success"})</code>, but nothing happens on the Stripe payment page.
After some time I get a timeout on the success URL, even though everything was processed correctly in the background tasks.</p>
<p>The checkout properly sets the success and cancel urls.</p>
<pre><code>#Stripe Payment
@app.post('/create-checkout-session')
async def create_checkout_session(price_id: str = Form(...)):
try:
checkout_session = stripe.checkout.Session.create(
line_items=[
{
# Provide the exact Price ID (for example, pr_1234) of the product you want to sell
'price': price_id,
'quantity': 1,
},
],
mode='payment',
success_url= f'{DOMAIN}/success',
cancel_url= f'{DOMAIN}/cancel',
billing_address_collection = 'required'
)
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
return RedirectResponse(checkout_session.url,303)
</code></pre>
<p>I am providing additional details</p>
<p>My webhook is running locally and is listening for checkout.session.completed events only</p>
<p>stripe listen --events checkout.session.completed --forward-to
localhost:8000/webhook</p>
<p>On the stripe dashboard page, my local webhook is properly configured.<a href="https://i.sstatic.net/QszER0Rn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QszER0Rn.png" alt="stripe local webhook" /></a></p>
<p>I noticed the following when I did a payment with a test card.</p>
<p>The "Events" tab for today ( 1st August )
<a href="https://i.sstatic.net/UmQbya2E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmQbya2E.png" alt="stripe events" /></a></p>
<p>In the logs, I can see the following
<a href="https://i.sstatic.net/fzYZP986.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzYZP986.png" alt="stripe logs" /></a></p>
<p>The checkout.session in the logs indicate a payment_status of "unpaid"
<a href="https://i.sstatic.net/f5O5Xrp6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5O5Xrp6.png" alt="stripe checkout.session" /></a></p>
<p>But my local webhook which received the stripe checkout.session has a status of "paid"
<a href="https://i.sstatic.net/ylyHYK0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ylyHYK0w.png" alt="stripe webhook checkout.session" /></a></p>
<p>To summarise, Stripe shows two different payment statuses for the same session: paid in the Events, and unpaid in the Logs.</p>
|
<python><stripe-payments><fastapi><webhooks>
|
2024-08-01 08:19:57
| 0
| 435
|
TropicalViking
|
78,819,386
| 10,576,322
|
Python docker: remove unused dependencies to reduce image size
|
<p>I am dockerizing a Python app that depends on a package which in turn depends on numpy, pandas, scipy, etc.</p>
<p>In the actual app, only part of those dependencies is used.</p>
<p>But when I install the application in the Dockerfile, all the wheels are downloaded and unpacked, which leads to a layer of roughly 500 MB in size.</p>
<p>I already used <code>pip --no-cache-dir</code> to make it a bit smaller, but of course the unpacked dependencies are still there.</p>
<p>Can one do some magic to only keep the actually used modules instead of all those huge packages?</p>
<p>To give a more specific example: I use one interpolation method from SciPy, so the SciPy package will be in my site-packages, taking 114 MB unpacked on its own. Most of this is never used. So I wondered whether there is some option to filter out unused stuff.</p>
<p>I found a tutorial that suggested recompiling all the packages with debug functionality removed, but that only reduces the size by about 40% and, in a first attempt, caused more problems, because one then needs to provide even more build tools like Rust.</p>
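As a starting point for judging what is dead weight, one rough technique (a sketch, demonstrated with a stdlib package since SciPy may not be present; substitute <code>"scipy"</code> inside the image) is to import only what the app actually uses and then inspect <code>sys.modules</code> to see which submodules were pulled in. Note this only captures runtime imports, so deleting files from site-packages based on it is fragile:

```python
# Rough way to see which submodules one import actually pulls in.
import importlib
import sys

pkg = "email"                    # stand-in for "scipy" in the real image
importlib.import_module(f"{pkg}.parser")
loaded = sorted(m for m in sys.modules if m.split(".")[0] == pkg)
print(f"{len(loaded)} {pkg} modules loaded")
```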
|
<python><docker>
|
2024-08-01 06:18:49
| 1
| 426
|
FordPrefect
|
78,819,169
| 10,294,812
|
Comparing different methods of rolling back database changes in pytest tests for SQLAlchemy
|
<p>I'm working on a project that uses FastAPI and SQLAlchemy asynchronously. <br>
I've written pytest tests for this project and have successfully implemented database rollback after each test run. <br>
I've found two different implementation methods, but I'm unsure about the differences between them. Are both methods correct, or is one potentially problematic?<br></p>
<h1>conftest.py</h1>
<pre class="lang-py prettyprint-override"><code># pyproject.toml
#
# pytest = "^8.3.2"
# pytest-asyncio = "==0.21.2"
# #pytest-dependency = "^0.6.0"
# pytest-order = "^1.2.1"
#
# [tool.pytest.ini_options]
# addopts = "-s"
# asyncio_mode = "auto"
import asyncio
from urllib.parse import urlparse
import pytest
from sqlalchemy import NullPool
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from config import settings
from depends.db import async_session as api_async_session
from main import app
url = urlparse(settings.db)._replace(scheme="postgresql+asyncpg").geturl()
@pytest.fixture(scope="session")
def event_loop(request):
loop = asyncio.get_event_loop_policy().new_event_loop()
yield loop
loop.close()
@pytest.fixture
async def async_session():
async_db = create_async_engine(url, echo=False, poolclass=NullPool)
async with async_db.connect() as connection:
async with connection.begin() as transaction:
async with AsyncSession(bind=connection) as s:
app.dependency_overrides[api_async_session] = lambda: s
yield s
await transaction.rollback()
# Instead of using a connection pool, bind a specific connection to the event loop.
# If you don't set an event loop policy or using pytest-asyncio 0.23,
# each test will start a new event loop, causing asyncpg triggering exceptions.
@pytest.fixture
async def async_session2():
async_db = create_async_engine(url, echo=False, poolclass=NullPool)
async with async_db.connect() as connection:
transaction = await connection.begin()
async with AsyncSession(bind=connection, join_transaction_mode="create_savepoint") as s:
app.dependency_overrides[api_async_session] = lambda: s
yield s
await transaction.rollback()
</code></pre>
<p>I've also checked the official documentation for <code>create_savepoint</code>, but it's too difficult to understand. Even after looking into the source code of AsyncTransaction's <code>__aenter__</code> method, I'm still uncertain.</p>
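For context, both fixtures are variants of SQLAlchemy's "join a Session into an external transaction" recipe. The sync sketch below (an illustration using in-memory SQLite as a stand-in, not code from the question) shows the core mechanism: everything the test does rides on one connection-level transaction that is rolled back at the end. `join_transaction_mode="create_savepoint"` additionally makes `Session.commit()` release a SAVEPOINT instead of ending that outer transaction, so code under test may commit freely and the fixture's rollback still undoes it:

```python
from sqlalchemy import create_engine, text

# In-memory SQLite as a stand-in for Postgres (the same DBAPI connection is
# reused within a thread, so the table survives across engine.connect()).
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))

# External-transaction pattern: the test's writes ride on one
# connection-level transaction that is rolled back afterwards.
conn = engine.connect()
trans = conn.begin()
conn.execute(text("INSERT INTO t VALUES (1)"))
assert conn.execute(text("SELECT count(*) FROM t")).scalar() == 1
trans.rollback()                    # undoes everything the "test" did
assert conn.execute(text("SELECT count(*) FROM t")).scalar() == 0
conn.close()
```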
|
<python><postgresql><sqlalchemy><fastapi>
|
2024-08-01 04:46:16
| 1
| 492
|
ACE Fly
|
78,819,158
| 2,256,085
|
Analytical solution suffering from lack of precision
|
<p>I'm attempting to use python to solve the following analytical solution (eq. 4 <a href="https://onepetro.org/SPEATCE/proceedings-abstract/10ATCE/All-10ATCE/SPE-134670-MS/102057" rel="nofollow noreferrer">from here</a>) for a heat transport problem to verify the results calculated by a numerical model:</p>
<p><a href="https://i.sstatic.net/0klJ0btC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0klJ0btC.png" alt="enter image description here" /></a></p>
<p>The small reproducible example below invokes the <code>quad</code> function in <code>scipy.integrate</code> to solve the integral. So far, so good. The result from the integration is then multiplied by what I'm loosely referring to as a prefix term to give the final "Delta T" (change in temperature) result - ultimately what I need.</p>
<p>Unfortunately, when I multiply the integral result, a monotonically increasing value, by the prefix term, a monotonically decreasing value, I don't get the expected smooth solution. I'm wondering if Python offers tricks for forcing greater precision in the multiplication operation (i.e., <code>result = prefix * integral[0]</code>)?</p>
<p>The example script below ends with a 3-part plot highlighting what I think is the problem. The left-most plot shows the monotonicity of the prefix term, the middle plot shows the monotonicity of the integral result, and the right-most plot shows that multiplying the ever-larger integral values by ever-smaller prefix values gives a non-smooth result - possibly from a lack of precision? Is there a fix for this?</p>
<p>x, y, and t values are related to space and time. I've chosen an arbitrarily small set of values for this small reproducible example. The larger <code>t</code> gets, the less smooth the results look.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.integrate import quad
import math
X = [val + 0.5 for val in np.arange(20)]
Y = [3.59]
days2seconds = 86400
times = [200, 400] # days
v = 9.553639342343923e-06 # m/s (velocity)
D = 2.9059885319758944e-06 # m^2/s (diffusivity)
D_prime = 1.4150943396226415e-06 # m^2/s (diffusivity overburden)
h_prime = 0.8372827804107424 # unitless (heat capacity ratio)
H = 100.0 # m (aquifer thickness)
T0 = 80.0 # deg C (initial temperature)
T1 = 30.0 # deg C (injected temperature)
# Some functions for calculating the Barends analytical solution
def barends_eqn4(sigma, x, y, t):
exponent = -1 * sigma ** 2 - ((x * v) / (4 * D * sigma)) ** 2
term1 = math.exp(exponent)
term2 = x ** 2 * h_prime * math.sqrt(D_prime) / (8 * D * H * sigma ** 2)
term3 = y / (2 * math.sqrt(D_prime))
term4 = t - x ** 2 / (4 * D * sigma ** 2)
# put it all together
eqn4_val = term1 * math.erfc((term2 + term3) * (term4) ** (-0.5))
return eqn4_val
def calc_analytical_sln(times):
times = [tm * days2seconds for tm in times]
#
analytical_sln = np.zeros((len(times), len(X)))
integral_rslt = np.zeros_like(analytical_sln)
prefix_rslt = np.zeros_like(analytical_sln)
for t_idx, t in enumerate(times):
for k, y in enumerate(Y): # 1 row
for j, x in enumerate(X): # columns
lower_lim = x / (2 * math.sqrt(D * t))
integral = quad(barends_eqn4, lower_lim, np.inf, args=(x, y, t))
integral_rslt[t_idx, j] = integral[0]
# multiply the prefix by the solution to the integral
prefix = (2 * (T1 - T0)) / (math.sqrt(np.pi)) * math.exp((x * v) / (2 * D))
prefix_rslt[t_idx, j] = prefix
result = prefix * integral[0]
# store result for plotting
analytical_sln[t_idx, j] = result + T0
return analytical_sln, integral_rslt, prefix_rslt
analytical_answers, integral_rslt, prefix_rslt = calc_analytical_sln(times)
# Because the values in prefix_rslt are negative, need to flip the sign
# for a log-scaled plot
fig, (ax1, ax2, ax3) = plt.subplots(1, 3)
fig.set_size_inches(12, 3)
ax1.plot(X, -prefix_rslt[0], color='r', label="prefix term")
ax1.set_yscale("log")
ax1.legend(loc='lower right')
ax2.plot(X, integral_rslt[0], color='b', label="integral result")
ax2.set_yscale("log")
ax2.legend(loc='lower left')
ax3.plot(X, analytical_answers[0], 'k', label="analytical solution (product)")
ax3.yaxis.tick_right()
ax3.set_yscale("log")
ax3.legend(loc='lower right')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/TMoqAeUJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMoqAeUJ.png" alt="enter image description here" /></a></p>
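One standard fix for this kind of huge-times-tiny cancellation (a sketch, not the full Barends solution): fold the prefix exponential into the integrand analytically before integrating. Since xv/(2D) − σ² − (xv/(4Dσ))² = −(σ − xv/(4Dσ))², the combined exponent is bounded and never overflows; the erfc factor from eq. 4 is omitted below but would multiply the integrand unchanged:

```python
import math
from scipy.integrate import quad

v = 9.553639342343923e-06   # m/s (velocity, as in the question)
D = 2.9059885319758944e-06  # m^2/s (diffusivity, as in the question)

def stable_integrand(sigma, x):
    # exp(x*v/(2*D)) * exp(-sigma**2 - (x*v/(4*D*sigma))**2)
    # rewritten as a single, never-overflowing exponential
    u = sigma - (x * v) / (4 * D * sigma)
    return math.exp(-u * u)

x, t = 10.5, 200 * 86400
lower = x / (2 * math.sqrt(D * t))
val, err = quad(stable_integrand, lower, math.inf, args=(x,))
print(val)   # finite; no separate huge prefix * tiny integral product
```

With the exponent folded in, the remaining multiplication by 2(T1 − T0)/√π is well-conditioned, so the jaggedness in the right-most plot should disappear.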
|
<python><scipy>
|
2024-08-01 04:39:59
| 1
| 469
|
user2256085
|
78,819,087
| 154,911
|
How to use mouse left button drag to pan on matplotlib's polar coordinate axis
|
<p>It looks like the pan feature works badly on the polar axis: it does not pan the plot when I drag with the left mouse button. See the screenshot below:</p>
<p><a href="https://i.sstatic.net/2foX6HkM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2foX6HkM.png" alt="mouse drag to pan" /></a></p>
<p>The pan feature works OK on a matplotlib's Cartesian axis.</p>
<p>Here is a very simple test code for the polar plot:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
r = np.arange(0, 2, 0.01)
theta = 2 * np.pi * r
fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.plot(theta, r)
ax.set_rmax(2)
ax.set_rticks([0.5, 1, 1.5, 2]) # Less radial ticks
ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line
ax.grid(True)
ax.set_title("A line plot on a polar axis", va='bottom')
plt.show()
</code></pre>
<p>What I want is to pan and zoom into the polar axis, so I can see the details of the plot. Currently, zooming only happens around the center, so I need a pan feature.</p>
<p>Any ideas? Thanks.</p>
<p><strong>EDIT:</strong></p>
<p><a href="https://matplotlib.org/3.7.5/gallery/axisartist/demo_floating_axis.html" rel="nofollow noreferrer">https://matplotlib.org/3.7.5/gallery/axisartist/demo_floating_axis.html</a></p>
<p>It looks like in the above example code, a float polar axis is drawn inside a Cartesian axis, I can use the mouse drag to pan the whole outer axis, but I'm not sure how to draw some curves inside the inner polar axis.</p>
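A possible workaround sketch (assuming "pan" means shifting the radial limits, which is the only pan direction with an obvious meaning on a polar axis): connect the mouse events yourself and move rmin/rmax during a left-button drag:

```python
import matplotlib.pyplot as plt
import numpy as np

r = np.arange(0, 2, 0.01)
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(2 * np.pi * r, r)

state = {"y": None}                         # radial anchor of the drag

def on_press(event):
    if event.inaxes is ax:
        state["y"] = event.ydata            # radial coordinate under cursor

def on_motion(event):
    if state["y"] is None or event.inaxes is not ax or event.ydata is None:
        return
    dr = state["y"] - event.ydata           # drag distance in radial units
    ax.set_rmin(ax.get_rmin() + dr)         # shift both limits so the
    ax.set_rmax(ax.get_rmax() + dr)         # anchored point follows the mouse
    fig.canvas.draw_idle()

def on_release(event):
    state["y"] = None

fig.canvas.mpl_connect("button_press_event", on_press)
fig.canvas.mpl_connect("motion_notify_event", on_motion)
fig.canvas.mpl_connect("button_release_event", on_release)
plt.show()
```

The anchor is deliberately not updated inside `on_motion`: after the limits shift, the anchored radius sits under the cursor again, so the next motion event measures only the new movement.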
|
<python><matplotlib><polar-coordinates>
|
2024-08-01 04:00:41
| 2
| 1,220
|
ollydbg23
|
78,819,007
| 990,639
|
Unable to send a POST form data in python
|
<p>I am trying to use Python default library http.client to send a form-data. The reason is that I am unable to install other packages like requests on the system and it does not have even the urllib3.</p>
<p>Python code here</p>
<pre><code>import http.client
import uuid
def send_email_http_only(from_email, reply_to, send_to, subject, mail_content, filename, attachment_path):
# Prepare the file content
file_path = attachment_path
with open(file_path, "rb") as f:
file_content = f.read()
# Define boundary and headers
boundary = str(uuid.uuid4())
headers = {
"User-Agent": f"MyApp/1.0",
"Content-Type": f"multipart/form-data; boundary=----{boundary}",
}
# Create HTTP connection
conn = http.client.HTTPConnection("localhost", 8088)
# Create multipart/form-data payload
payload = (
# f"----{boundary}\r\n"
# f"Content-Disposition: form-data; name=\"attachment\"; filename=\"" + filename + "\"\r\n"
# f"{file_content.decode('utf-8')}\r\n"
f"----{boundary}\r\n"
f"Content-Disposition: form-data; name=\"emailContent\"\r\n"
f"Content-Type: application/json\r\n\r\n"
f"{{\"replyTo\": \""+reply_to+"\", \"from\": \""+from_email+"\", \"sendTo\": \""+send_to+"\", \"subject\": \""+subject+"\", \"mailContent\": \""+mail_content+"\"}\r\n"
f"----{boundary}--\r\n"
)
emailContent = {
'replyTo': reply_to,
'from': from_email,
'sendTo': send_to,
'subject': subject,
'mailContent': mail_content
}
# print(f"request_data: {request_data}")
# print(f"payload: {payload}")
# print(f"headers: {headers}")
# Send the request
conn.request("POST", "/sendEmail", body=payload, headers=headers)
# Get the response
response = conn.getresponse()
data = response.read()
# Close the connection
conn.close()
# Print response
print("Program response:")
print(response.status, response.reason)
print(data.decode("utf-8"))
</code></pre>
<p>After running this method, I keep getting a Bad Request response saying "emailContent" is not present.
I have also tried adding <code>http.client.HTTPConnection.debuglevel = 1</code>, but the output does not look invalid.</p>
<pre><code>send: b'POST /sendEmail HTTP/1.1\r\nHost: localhost:8088\r\nAccept-Encoding: identity\r\nContent-Length: 340\r\nUser-Agent: MyApp/1.0\r\nContent-Type: multipart/form-data; boundary=----18e9255e-7597-42ef-814f-f10b4f0a2c6e\r\n\r\n'
send: b'----18e9255e-7597-42ef-814f-f10b4f0a2c6e\r\nContent-Disposition: form-data; name="emailContent"\r\nContent-Type: application/json\r\n\r\n{"replyTo": "x@gmail.com", "from": "y@outlook.com", "sendTo": "y@outlook.com", "subject": "Test subject", "mailContent": "Test main content"}\r\n----18e9255e-7597-42ef-814f-f10b4f0a2c6e--\r\n'
reply: 'HTTP/1.1 400 \r\n'
</code></pre>
<p>Appreciate any help here :)</p>
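The likely problem is the boundary: the header declares <code>boundary=----{uuid}</code>, and per RFC 2046 every delimiter line in the body must be "--" plus that exact value, i.e. start with six dashes, while the body above uses only four, so the server never finds the <code>emailContent</code> part. The simplest fix (a sketch with hypothetical content) is to declare the boundary without extra dashes and always prepend "--" in the body:

```python
import uuid

boundary = uuid.uuid4().hex
body_json = '{"replyTo": "x@example.com"}'    # hypothetical content

# RFC 2046: every delimiter line is "--" + boundary, and the closing
# delimiter adds a trailing "--".
payload = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="emailContent"\r\n'
    "Content-Type: application/json\r\n"
    "\r\n"
    f"{body_json}\r\n"
    f"--{boundary}--\r\n"
)
headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
print(headers["Content-Type"])
```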
|
<python><http.client>
|
2024-08-01 03:21:33
| 0
| 1,147
|
Eugene
|
78,818,979
| 4,802,259
|
Watchdog: Change Schedule after start
|
<p>I'm attempting to write a <a href="https://docs.stackstorm.com/sensors.html" rel="nofollow noreferrer">StackStorm Sensor</a> that leverages <a href="https://python-watchdog.readthedocs.io/en/stable/quickstart.html" rel="nofollow noreferrer">Watchdog</a> for its file watching capabilities. There is a builtin sensor, but it doesn't provide the capabilities that I'm looking for. However, I'm running into some confusing thread blocking behavior while attempting to implement the sensor.</p>
<p>I've implemented most of a sensor, but for some reason when starting the WatchDog observer -- which is essentially just a threading.Thread -- the process hangs.</p>
<pre class="lang-py prettyprint-override"><code>"""StackStorm sensor to to watch for file changes"""
import json
from typing import TypedDict, Callable, Any
from logging import Logger
import re
from watchdog.observers import Observer
from watchdog.observers.api import BaseObserver
from watchdog.events import FileSystemEventHandler, FileSystemEvent
import eventlet
from st2reactor.sensor.base import Sensor
import time
class _FileWatchTriggerParams(TypedDict):
watch_directory: str
filename_regex: str
recursive: bool
class _FileWatchTrigger(TypedDict):
uid: str
id: str
ref: str
name: str
pack: str
type: str
parameters: _FileWatchTriggerParams
class _FileWatcherHandler(FileSystemEventHandler):
dispatch_trigger: Callable[[str, Any], None]
file_regex: re.Pattern | None
def __init__(
self,
dispatch_trigger: Callable[[str, Any], None],
trigger_ref: str,
file_regex: str,
logger: Logger,
):
self.dispatch_trigger = dispatch_trigger
self.trigger_ref = trigger_ref
self.logger = logger
if file_regex and file_regex != ".*":
self.file_regex = re.compile(file_regex)
else:
self.file_regex = None
def on_any_event(self, event: FileSystemEvent) -> None:
self.logger.info(
"%s event occurred on file %s", event.event_type.title(), event.src_path
)
if event.is_directory:
return
self.logger.info(
"Trigger %s Evaluating event on file %s", self.trigger_ref, event.src_path
)
if self.file_regex is not None:
if event.dest_path and not self.file_regex.match(event.dest_path):
return
if not self.file_regex.match(event.src_path):
return
payload = {
"event_type": event.event_type,
"src_path": event.src_path,
"dest_path": event.dest_path,
"watch_path": self.file_regex,
}
self.logger.info(
"Trigger %s dispatching event %s",
self.trigger_ref,
json.dumps(payload, indent=2),
)
self.dispatch_trigger(self.trigger_ref, payload)
class FileWatchSensor(Sensor):
"""Sensor allowing for configurable file watchers using the Watchdog library"""
_logger: Logger
_reload_needed: bool # whether the observer needs to be reloaded.
_triggers: dict[str, _FileWatchTrigger] # the triggers that are being watched
_observer: BaseObserver
def __init__(self, sensor_service, config=None):
super().__init__(sensor_service=sensor_service, config=config)
self._logger = self.sensor_service.get_logger(type(self).__qualname__)
self._reload_needed = False
self._observer = None
self._triggers = {}
def setup(self):
pass
def run(self):
while True:
try:
self._logger.info("Polling...")
if self._reload_needed:
self.reload_observer()
eventlet.sleep(1)
time.sleep(1)
except Exception: # pylint: disable=broad-exception-caught
self._logger.exception(
"Unexpected exception while running %s", type(self).__name__
)
def cleanup(self):
if self._observer:
self._observer.stop()
self._observer.join()
def add_trigger(self, trigger: _FileWatchTrigger):
try:
self._logger.info(
"Adding watch on dir [%s] for file [%s]",
trigger["parameters"].get("watch_directory", "No Watch Directory"),
trigger["parameters"].get("filename_regex", "No File Pattern"),
)
self._triggers[trigger["ref"]] = trigger
self._reload_needed = True
except Exception: # pylint: disable=broad-exception-caught
self._logger.exception(
"Unexpected exception while adding trigger: %s",
json.dumps(trigger, indent=2),
)
def update_trigger(self, trigger: _FileWatchTrigger):
try:
if trigger["ref"] not in self._triggers:
self._triggers[trigger["ref"]] = trigger
else:
self._triggers[trigger["ref"]].update(trigger)
except Exception: # pylint: disable=broad-exception-caught
self._logger.exception(
"Unexpected exception while updating trigger: %s",
json.dumps(trigger, indent=2),
)
def remove_trigger(self, trigger: _FileWatchTrigger):
try:
if trigger["ref"] in self._triggers:
removed_trigger = self._triggers.pop(trigger["ref"])
self._logger.info(
"Removing trigger watch %s on dir [%s] for file [%s]",
trigger["ref"],
removed_trigger["parameters"].get(
"watch_directory", "No Watch Directory"
),
removed_trigger["parameters"].get("filename_regex", ""),
)
self._reload_needed = True
except Exception: # pylint: disable=broad-exception-caught
self._logger.exception(
"Unexcpected exception while removing trigger: %s",
json.dumps(trigger, indent=2),
)
def reload_observer(self):
"""Build a new observer with the current set of watches"""
# The observer objects are just threads.
# Threads can only be started once.
# Schedules cannot be modified on started threads.
old_observer = None
if self._observer:
old_observer = self._observer
self._observer = Observer()
watches = []
for trigger in self._triggers.values():
watch_dir = trigger["parameters"].get("watch_directory")
if not watch_dir:
self._logger.error(
"Trigger missing `watch_directory` parameter: %s for ",
json.dumps(trigger, indent=2),
)
continue
pattern = trigger["parameters"].get("filename_regex")
event_handler = _FileWatcherHandler(
self.sensor_service.dispatch,
trigger["ref"],
pattern,
self._logger,
)
recursive = trigger["parameters"].get("recursive", False)
self._observer.schedule(
event_handler,
watch_dir,
recursive,
)
watches.append(
(watch_dir, "recursive" if recursive else "standard", pattern or ".*")
)
if old_observer is not None:
self._logger.info("Stopping old observer")
old_observer.stop()
old_observer.join()
self._logger.info("Old observer stopped")
self._logger.info("Starting observer with %d watches", len(watches))
self._observer.start() # Main process gets stuck right here
self._logger.info("Observer thread started.")
self._reload_needed = False
self._logger.info("Watching files: %s", json.dumps(watches, indent=2))
</code></pre>
<p>You'll notice there are a lot of log statements around the end of the "reload observer" function -- that's where the issue appears to be.</p>
<p>In implementing this, I discovered that once started, an observer's scheduled watches cannot be modified. Similar to the current state, attempting to do so will block the main thread indefinitely.</p>
<p>I'm running this on StackStorm 3.8.0 using Python 3.11.3</p>
<p>Here are the logs that the sensor generates while attempting to run:</p>
<pre><code>2024-07-31 21:58:51 2024-08-01 02:58:51,369 INFO [-] Sensor ava_core.FileWatchSensor updated. Reloading sensor.
2024-07-31 21:58:52 2024-08-01 02:58:52,387 INFO [-] Sensor ava_core.FileWatchSensor reloaded.
2024-07-31 21:58:53 2024-08-01 02:58:53,200 INFO [-] No config found for sensor "FileWatchSensor"
2024-07-31 21:58:53 2024-08-01 02:58:53,201 INFO [-] Watcher started
2024-07-31 21:58:53 2024-08-01 02:58:53,201 INFO [-] Running sensor initialization code
2024-07-31 21:58:53 2024-08-01 02:58:53,201 INFO [-] Running sensor in passive mode
2024-07-31 21:58:53 2024-08-01 02:58:53,201 INFO [-] Polling...
2024-07-31 21:58:53 2024-08-01 02:58:53,231 INFO [-] Adding watch on dir [/appl/appcloud] for file [.*\\.txt]
2024-07-31 21:58:53 2024-08-01 02:58:53,237 INFO [-] Connected to amqp://guest:**@rabbitmq:5672//
2024-07-31 21:58:55 2024-08-01 02:58:55,204 INFO [-] Polling...
2024-07-31 21:58:55 2024-08-01 02:58:55,205 INFO [-] Starting observer with 1 watches
</code></pre>
<p>For what it's worth, removing one sleep or the other (eventlet or time) doesn't seem to make a difference. With both removed, it's just a busy-wait loop that burns CPU time and still doesn't register file events.</p>
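A likely cause (an assumption, not confirmed by the logs): StackStorm runs sensor processes under eventlet monkey-patching, which turns watchdog's Observer (a `threading.Thread` subclass) into a green thread; blocking C-level waits inside it then never yield to the green scheduler, so `observer.start()` appears to hang. A quick diagnostic sketch to confirm whether the sensor process is patched:

```python
# Diagnostic sketch: check whether the stdlib thread module has been
# monkey-patched by eventlet in this process (True under StackStorm sensors).
try:
    from eventlet import patcher
    patched = patcher.is_monkey_patched("thread")
except ImportError:
    patched = None            # eventlet not installed in this environment

print("thread monkey-patched:", patched)
```

If it prints `True`, the observer needs to run outside the patched threading machinery (e.g. in a separate unpatched process), rather than being started from within the sensor's green thread.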
|
<python><multithreading><python-watchdog><stackstorm>
|
2024-08-01 03:04:30
| 0
| 2,864
|
David Culbreth
|
78,818,917
| 6,541,639
|
Sphinx doc toctree click "jumps" to nav anchor instead of top
|
<p>When I click a toctree (navbar) item, the page "jumps/jolts" to the toc element instead of the top of the new page as if it's aligning with the toctree anchor (instead of the body h1 or just generally to the top page):</p>
<p><a href="https://i.sstatic.net/M6Zm5s8p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6Zm5s8p.png" alt="Toctree click anchor to toc instead of body" /></a></p>
<p>This makes the page feel unstable, causing sudden jumps down the page.</p>
<p>If I'm scrolled all the way up before clicking and the body is a 1-pager, you can't repro this. However, if your body has a scrollbar, this is easily repro'd.</p>
<p><strong>Expected:</strong> Clicking a toctree element should bump me to the top of the new page</p>
<p><strong>Actual:</strong> I'm unexpectedly bumped <em>down</em> the new page to where the toctree element was anchored. New pages never start at the top, but 1/2 way down (unless it's a 1-pager doc).</p>
<p>Is there a way to stop navigating to the toctree anchor on click? I was expecting it to bump to the h1 or page top.</p>
<p>__</p>
<p><strong>EDIT 1: TL;DR</strong> using this site as an example: When I click on StackOverflow "Questions" section, I can still see "Home" above (the body moved, but the navbar didn't move -- expected). In my Sphinx build, if I were to dupe StackOverflow's navbar and clicked "Questions", I could no longer see "Home" since the navbar would anchor down. (CC'd from comment that felt important enough to edit in here)</p>
<p><strong>EDIT 2:</strong> Here's a GIF showing what I mean: <a href="https://i.imgur.com/oE25Siw.gif" rel="nofollow noreferrer">https://i.imgur.com/oE25Siw.gif</a></p>
|
<python><python-sphinx><restructuredtext>
|
2024-08-01 02:34:56
| 1
| 1,063
|
dylanh724
|
78,818,655
| 3,314,925
|
How to automate downloading data from webpage?
|
<p>I am trying to automate the downloading of an Excel file from an open Government website using Python and Selenium. I've tried XPath to select and click on buttons, but the script doesn't select the button correctly. The manual process is to select:</p>
<ol>
<li>Ship Movements (top left of page)</li>
<li>Next 7 days (right of page)</li>
<li>Tools (right of page)</li>
<li>Export to Excel (right of page)</li>
</ol>
<p>Any suggestions would be useful.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time
xpath_ship_movements="//*[@id='Traffic']"
# xpath_ship_movements="xpath_full='/html/body/form/div/div[2]/div[1]/div[2]/ul/li[2]/a'"
xpath_days='//*[@id="rptGrid"]/div/div[2]/div[1]/div[2]/a[1]'
xpath_tools='//*[@id="rptGrid"]/div/div[2]/div[1]/div[2]/a[3]'
xpath_export='//*[@id="MSQ-WEB-0001"]'
url="https://qships.tmr.qld.gov.au/webx/#"
driver = webdriver.Edge()
# Open the webpage
driver.get(url)
driver.maximize_window()
wait = WebDriverWait(driver, 20)
# Wait for and click the "Ship Movements" button
ship_movements_button = wait.until(EC.element_to_be_clickable((By.XPATH, xpath_ship_movements)))
ship_movements_button.click()
# Wait for and click the "Next 7 days" button
next_7_days_button = wait.until(EC.element_to_be_clickable((By.XPATH, xpath_days)))
next_7_days_button.click()
# Wait for and click the Tools button
tools_button = wait.until(EC.element_to_be_clickable((By.XPATH, xpath_tools)))
tools_button.click()
# Wait for and click the Export to Excel option
export_to_excel = wait.until(EC.element_to_be_clickable((By.XPATH, xpath_export)))
export_to_excel.click()
# Wait for the export to complete
time.sleep(10)
# Close the browser
driver.quit()
</code></pre>
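Separate from the locator problem, the fixed `time.sleep(10)` at the end is fragile. A sketch that polls the download directory instead (this assumes you have pointed Edge's download folder somewhere known, e.g. via `EdgeOptions` prefs; `.crdownload` is Chromium's partial-download marker, which Chromium-based Edge uses):

```python
import os
import time

def wait_for_download(folder: str, suffix: str = ".xlsx", timeout: float = 60) -> str:
    """Poll `folder` until a finished file with `suffix` appears (a sketch)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        done = [f for f in os.listdir(folder)
                if f.endswith(suffix) and not f.endswith(".crdownload")]
        if done:
            return os.path.join(folder, done[0])
        time.sleep(0.5)
    raise TimeoutError(f"no {suffix} file appeared in {folder}")
```

Calling `wait_for_download(download_dir)` after the export click returns as soon as the file lands, and fails loudly instead of silently quitting while the download is still in flight.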
|
<python><selenium-webdriver><automation><download>
|
2024-07-31 23:49:37
| 1
| 1,610
|
Zeus
|
78,818,544
| 11,790,979
|
custom python package not importing
|
<p>I wrote a package to collect data and now I'm trying to interact with it to store and serve said data, but it's not importing, even though I have installed it with pip. This is the PyPI page: <a href="https://pypi.org/project/smog-usage-stats/" rel="nofollow noreferrer">https://pypi.org/project/smog-usage-stats/</a> and the repo is available here <a href="https://github.com/Stu-Gotz/smog_usage_stats" rel="nofollow noreferrer">https://github.com/Stu-Gotz/smog_usage_stats</a>. Sorry, this is kind of against the minimal reproducible example rule, but there's no short way to include an entire package.</p>
<p><a href="https://i.sstatic.net/AWUDAu8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWUDAu8J.png" alt="pip list and import error message" /></a></p>
<pre class="lang-py prettyprint-override"><code>from smog_usage_stats import UsageStatsLookup
import requests
if requests:
print("yes")
else:
print("no")
</code></pre>
<p>Testing with this prints "yes" (when the first line is commented out), so other packages are working correctly. I crossed all my t's and dotted all my i's, and I have no prior experience writing Python packages, so I am not sure what I may have done wrong.</p>
<p>This is the project's file structure and below that I have pasted in my <code>pyproject.toml</code></p>
<p><a href="https://i.sstatic.net/cWyYq45g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWyYq45g.png" alt="project file structure" /></a></p>
<pre class="lang-ini prettyprint-override"><code># pyproject.toml
[build-system]
requires=["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "smog_usage_stats"
version = "1.0.4"
dependencies = [
"beautifulsoup4",
"pathlib",
"psycopg==3.1.12",
"psycopg-binary==3.1.12",
"psycopg2==2.9.5",
"python-dateutil",
"python-dotenv",
"requests",
"soupsieve",
"typing_extensions",
]
readme = "README.md"
authors = [{ name = "stu.gotz.dev", email = "gotz.stu.dev@gmail.com" }]
license = { file = "LICENSE" }
classifiers = [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
]
keywords = ["pokemon", "usage", "pokemon showdown", "smogon"]
requires-python = ">=3.7"
[project.optional-dependencies]
dev = ["black", "bumpver", "isort", "pip-tools", "pytest"]
[tool.bumpver]
current_version = "1.0.4"
version_pattern = "MAJOR.MINOR.PATCH"
commit_message = "bump version {old_version} -> {new_version}"
commit = true
tag = true
push = true
[tool.bumpver.file_patterns]
"pyproject.toml" = [
'current_version = "{version}"',
'version = "{version}"'
]
"src/smog_usage_stats/__init__.py" = ["{version}"]
"setup.py" = [
"{version}",
"{pep440_version}",
]
"README.md" = [
"{version}",
"{pep440_version}",
]
</code></pre>
<p>and here is the packages <code>smog_usage_stats/src/smog_usage_stats/__init__.py</code> contents:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os
__version__ = "1.0.4"
__author__ = ""
# Get the parent directory
parent_dir = os.path.dirname(os.path.realpath(__file__))
# Add the parent directory to sys.path
sys.path.append(parent_dir)
</code></pre>
<h3>EDIT</h3>
<p>I took some people's advice from comments and replies, and it only seemed to break things further, so I am not sure what is happening. The package does import, but when I run the script from the "play button" in VS Code I get <code>ModuleNotFoundError</code>, whereas when I run <code>py script.py</code> in the terminal (<code>venv</code> is always active) it gives me a printout and no <code>ModuleNotFoundError</code>.</p>
<pre><code>(venv) PS C:\dev\gssp> & c:/dev/gssp/.venv/Scripts/python.exe c:/dev/gssp/data_collection.py
Traceback (most recent call last):
File "c:\dev\gssp\data_collection.py", line 1, in <module>
from smog_usage_stats import Search
ModuleNotFoundError: No module named 'smog_usage_stats'
(venv) PS C:\dev\gssp> py data_collection.py
yes
</code></pre>
<p>and as another reply suggested, this is my <code>setup.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
setup()
</code></pre>
<h3>EDIT 2:</h3>
<p>I am a moron, it wasn't working because it was pointing to the wrong venv.</p>
<pre class="lang-py prettyprint-override"><code>from smog_usage_stats import Usage
</code></pre>
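<p>For anyone landing here with the same symptom: a quick stdlib-only way to confirm which interpreter (and therefore which venv and site-packages) is actually executing a script — which is what revealed the wrong-venv problem here — is:</p>

```python
import sys
import sysconfig

# The interpreter actually executing this script. If the editor's "play button"
# and the terminal print different paths, two different venvs are in play.
print(sys.executable)

# The site-packages directory that imports resolve against for this interpreter.
print(sysconfig.get_paths()["purelib"])
```

Running this both from the editor and from the terminal makes any interpreter mismatch obvious immediately.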
|
<python><packaging><pypi>
|
2024-07-31 22:43:39
| 1
| 713
|
nos codemos
|
78,818,526
| 3,361,462
|
Creating a decorator with dynamic parameter
|
<p>I have some simple decorator that gets an argument:</p>
<pre><code>def deco(name):
def outer(f):
def inner(*args, **kwargs):
print(name)
return f(*args, **kwargs)
return inner
return outer
</code></pre>
<p>and a class:</p>
<pre><code>class A:
def __init__(self, name):
self.name = name
@deco(self.name) # This is my goal
def foo(self):
pass
</code></pre>
<p>I want to be able to create few instances of <code>A</code> and each of it will have it's own version of foo (decorated with <code>deco(name)</code>).</p>
<p>This obviously doesn't work, as functions are defined at class scope, not instance scope.
I thought about a dynamic approach, like:</p>
<pre><code>class A:
def __init__(self, name):
self.name = name
A.foo = deco(name)(A._foo)
def _foo(self):
pass
</code></pre>
<p>However it has a problem when we create more instances.</p>
<p>Finally I went for that:</p>
<pre><code>class A:
def __init__(self, name):
self.name = name
self.foo = deco(name)(partial(A._foo, self))
def _foo(self):
pass
</code></pre>
<p>Which seems to work; however, I have a feeling it's too hacky. Am I missing something? My real decorator calculates execution time and stores it in a database, and it's used in many other places like that, so inlining it is not an option.</p>
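As a side note, the <code>partial</code> indirection may be avoidable: attribute access on the instance already produces a bound method, so decorating <code>self._foo</code> directly should behave the same. A minimal sketch (using the toy <code>deco</code> from above, not the real timing decorator):

```python
from functools import wraps

def deco(name):
    def outer(f):
        @wraps(f)
        def inner(*args, **kwargs):
            print(name)
            return f(*args, **kwargs)
        return inner
    return outer

class A:
    def __init__(self, name):
        self.name = name
        # self._foo is already bound to this instance, so no partial is needed;
        # each instance gets its own independently decorated foo
        self.foo = deco(name)(self._foo)

    def _foo(self):
        return "ran"

a = A("alpha")
a.foo()  # prints "alpha", then runs _foo
```

Each instance carries its own wrapped <code>foo</code>, so two instances with different names print different things.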
|
<python><inheritance>
|
2024-07-31 22:37:16
| 2
| 7,278
|
kosciej16
|
78,818,501
| 21,540,734
|
subprocess.Popen with a pipe "|" symbol isn't working
|
<p>I'm trying to list just the IP Address of my Wi-Fi network adapter to be able to detect if it is connected and has an IP address attached to it.</p>
<p>With this by itself it is working...</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
Popen([
'netsh',
'interface',
'ip',
'show',
'addresses',
'Wi-Fi'
]).communicate()
</code></pre>
<p>Output:</p>
<pre><code>Configuration for interface "Wi-Fi"
DHCP enabled: No
IP Address: 192.168.1.200
Subnet Prefix: 192.168.1.0/24 (mask 255.255.255.0)
Default Gateway: 192.168.1.1
Gateway Metric: 0
InterfaceMetric: 2
</code></pre>
<p>But, with this...</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
Popen([
'netsh',
'interface',
'ip',
'show',
'addresses',
'Wi-Fi',
'|',
'findstr',
'/ir',
'IP Address'
]).communicate()
</code></pre>
<p>with the pipe <code>|</code> symbol in the list, it generates this...</p>
<pre><code>Usage: show addresses [[name=]<string>]
Parameters:
Tag Value
name - The name or index of a specific interface.
Remarks: Displays the IP address configuration for an interface or
interfaces.
The information displayed for this command consists of:
Field Description
----- -----------
DHCP enabled Shows whether the address comes from static or DHCP
configuration.
IP Address Shows the IP address configured for an interface.
Subnet Mask Shows the subnet mask associated with the IP address.
Default Gateway Shows the IP address of a default gateway for the interface.
Gateway Metric Shows the metric for the default gateway shown above.
Only applies if multiple default gateways are configured.
Interface Metric Shows the metric for an interface.
Only applies if multiple interfaces are configured.
Examples:
show addresses "Wired Ethernet Connection"
</code></pre>
<p>indicating that I typed in the wrong name of the adapter.</p>
<p>I've tried many combinations of the arguments of <code>netsh</code> in the list without any luck.</p>
<p>Does anyone have any insight on this?</p>
<p>My best guess at the moment is that Popen doesn't know how to process the pipe <code>|</code> symbol.</p>
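For context on why this fails: the <code>|</code> is shell syntax, so when it appears inside the argument list it is passed to <code>netsh</code> as one more literal argument, and <code>netsh</code> responds with its usage text. Two common fixes are passing one command string with <code>shell=True</code>, or chaining two <code>Popen</code> objects. A portable sketch of the chaining pattern, using <code>python -c</code> child processes as stand-ins for <code>netsh</code> and <code>findstr</code>:

```python
import subprocess
import sys

# Producer stands in for `netsh interface ip show addresses Wi-Fi`.
producer = subprocess.Popen(
    [sys.executable, "-c",
     "print('DHCP enabled: No'); print('IP Address: 192.168.1.200')"],
    stdout=subprocess.PIPE, text=True,
)

# Consumer stands in for `findstr /i "IP Address"`: keep only matching lines.
consumer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; [print(l, end='') for l in sys.stdin if 'IP Address' in l]"],
    stdin=producer.stdout, stdout=subprocess.PIPE, text=True,
)
producer.stdout.close()  # allow the producer to stop if the consumer exits early

out, _ = consumer.communicate()
print(out)  # only the 'IP Address' line survives the filter
```

On Windows the same pattern applies with <code>["netsh", ...]</code> feeding <code>["findstr", "/i", "IP Address"]</code>, or you can pass the whole pipeline as a single string with <code>shell=True</code> so <code>cmd.exe</code> interprets the <code>|</code> itself.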
|
<python><subprocess><popen>
|
2024-07-31 22:23:57
| 1
| 425
|
phpjunkie
|
78,818,387
| 7,211,014
|
Elastic python request timeout error: Pool reached maximum size and no more connections are allowed
|
<p>I am using the Elasticsearch Python module. I am trying to set up a connection to the server like this:</p>
<pre><code>es = Elasticsearch([config.endpoint],
api_key=config.key,
request_timeout=config.request_timeout )
</code></pre>
<p>The server connects, then I try to execute enrichment policies.</p>
<pre><code>es.enrich.execute_policy(name=policy)
</code></pre>
<p>But they all fail with this error:</p>
<pre><code>{'policy': 'enrich-1', 'status': 'failed', 'error': "Connection error caused by: ConnectionError(Connection error caused by: FullPoolError(HTTPConnectionPool(host='our.server.internal', port=9200): Pool reached maximum size and no more connections are allowed.))"}
</code></pre>
<p>If I remove the <code>request_timeout</code> parameter, the enrichment tries to run but times out. If I put the parameter back, then I get this error.</p>
<p>Why is this happening? I tried reading <a href="https://elasticsearch-py.readthedocs.io/en/v8.14.0/api/elasticsearch.html" rel="nofollow noreferrer">the documentation</a> but it's not clear what any of these parameters actually do. Is there somewhere that details exactly what each of these parameters does? I tried using <code>connections_per_node=50</code>; it didn't help, same error.</p>
<p>What am I doing wrong?</p>
|
<python><http><elasticsearch><timeout>
|
2024-07-31 21:24:10
| 0
| 1,338
|
Dave
|
78,818,365
| 7,698,116
|
Java sshtools generated EDDSA signature not matching with Python's pycryptome's generated signature
|
<p>I have a python library that uses <code>pycryptodome</code> library to sign data using Ed25519 algorithm using an openssh format ED25519 private key. The signature then needs to be verified in a Java application using <code>sshtools</code> library with corresponding public key. However the signature verification is failing.</p>
<p><strong>Constraint</strong>: It's important to read the private/public keys from files. I cannot change the Python code and/or the keys used.</p>
<p>To debug, I wrote an implementation to generate the signature in Java as well, along with validating the Python-generated signature. However, the two signatures come out different.</p>
<p>My Python implementation to sign the data is as follows:</p>
<pre class="lang-py prettyprint-override"><code>from Crypto.Hash import SHA512
from Crypto.PublicKey import ECC
from Crypto.Signature import eddsa
import base64
import json
def generate_signature_v1(message):
message = message.replace(" ", "")
h = SHA512.new(message.encode("utf-8"))
with open("private", "r") as f:
key = ECC.import_key(f.read())
signer = eddsa.new(key, "rfc8032")
signature = signer.sign(h)
str_signature = base64.standard_b64encode(signature).decode("utf-8")
return str_signature
</code></pre>
<p>My Java implementation to generate and verify the signature.</p>
<pre class="lang-java prettyprint-override"><code>import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
import java.io.File;
import java.io.IOException;
import com.google.gson.Gson;
import com.sshtools.common.publickey.InvalidPassphraseException;
import com.sshtools.common.publickey.SshKeyUtils;
import com.sshtools.common.ssh.components.SshKeyPair;
import com.sshtools.common.ssh.components.SshPrivateKey;
import com.sshtools.common.ssh.components.SshPublicKey;
public class StackOverflow {
private static final Gson gson = new Gson();
public static boolean verifyV1Signature(String message, String signature) {
try {
byte[] messageBytes = message.getBytes(StandardCharsets.UTF_8);
MessageDigest digest = MessageDigest.getInstance("SHA-512");
byte[] hash = digest.digest(messageBytes);
// read public key
SshPublicKey readPublicKey = SshKeyUtils.getPublicKey(new File("public.pub"));
// verify signature
Base64.Decoder decoder = Base64.getDecoder();
byte[] signatureDecoded = decoder.decode(signature);
boolean isVerified = readPublicKey.verifySignature(signatureDecoded, hash);
System.out.println("signature is valid: " + isVerified);
return isVerified;
} catch (Exception e) {
return false;
}
}
public static String generateV1Signature(String message)
throws NoSuchAlgorithmException, IOException, InvalidPassphraseException {
byte[] messageBytes = message.getBytes(StandardCharsets.UTF_8);
MessageDigest digest = MessageDigest.getInstance("SHA-512");
byte[] hash = digest.digest(messageBytes);
// create signature
SshKeyPair readKeyPair = SshKeyUtils.getPrivateKey(new File("private"));
SshPrivateKey readPrivateKey = readKeyPair.getPrivateKey();
byte[] signature = readPrivateKey.sign(hash);
Base64.Encoder encoder = Base64.getEncoder();
return encoder.encodeToString(signature);
}
public static void main(String[] args) {
Map<String, String> data = new HashMap<>();
data.put("key", "value");
String message = gson.toJson(data);
String pythonSignature = "5Sdt3bIKFbLBhbZ2JLzQP+8MNX6/uzFtxHTkBa/UIpBbjtwKfNu+wfcMHmxksQkmzI5OMhEpY46hVlkM0P5nAA==";
verifyV1Signature(message, pythonSignature);
try {
String javaSignature = generateV1Signature(message);
System.out.println(javaSignature);
} catch (NoSuchAlgorithmException | IOException | InvalidPassphraseException e) {
e.printStackTrace();
}
}
}
</code></pre>
<p>Running Python code for message <code>json.dumps({"key": "value"})</code> gives <code>5Sdt3bIKFbLBhbZ2JLzQP+8MNX6/uzFtxHTkBa/UIpBbjtwKfNu+wfcMHmxksQkmzI5OMhEpY46hVlkM0P5nAA==</code></p>
<p>Running Java Code gives <code>xHgYq8/nUYOkpbGzCsUkei9Vw0O1/XKoYZlLAbsUPpQF3cTMQ96ROL/ZHSH+cUUNJlmTI2Qb2thAU3kEqvdHBQ==</code> and also verification fails.</p>
<p>The <code>private</code> key looks like <code>-----BEGIN OPENSSH PRIVATE KEY-----<suff>-----END OPENSSH PRIVATE KEY-----</code> and the public key looks like <code>ssh-ed25519 <stuff></code></p>
<p>Why is the signature not matching? I have also tried <code>bouncycastle</code> and the signatures still do not match.</p>
|
<python><java><cryptography><pycryptodome><eddsa>
|
2024-07-31 21:10:33
| 1
| 368
|
ATK
|
78,818,244
| 850,781
|
Get a single row in a tuple-indexed DataFrame
|
<p>I have a pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer"><code>DataFrame</code></a>:</p>
<pre><code>>>> f = pd.DataFrame.from_dict({"r0":{"c0":1,"c1":2},("r",1):{"c0":3,"c1":4}},orient="index")
c0 c1
r0 1 2
(r, 1) 3 4
</code></pre>
<p>I can get the 1st row:</p>
<pre><code>>>> list(f.loc["r0"].items())
[('c0', 1), ('c1', 2)]
</code></pre>
<p>but not the second row because <code>f.loc[("r",1)]</code> raises <code>KeyError</code>.</p>
<p>I suppose I can do</p>
<pre><code>>>> list(f.loc[[("r",1)]].iloc[0].items())
[('c0', 3), ('c1', 4)]
</code></pre>
<p>but this is unspeakably ugly.</p>
<p>What is the right way?</p>
<p>PS. No, I do <em><strong>not</strong></em> want to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.html" rel="nofollow noreferrer"><code>MultiIndex</code></a> here.</p>
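One possibly less ugly route (assuming the index is unique) is to resolve the tuple positionally with <code>Index.get_loc</code>, which hashes the tuple as a single label instead of interpreting it the way <code>.loc</code> does:

```python
import pandas as pd

f = pd.DataFrame.from_dict(
    {"r0": {"c0": 1, "c1": 2}, ("r", 1): {"c0": 3, "c1": 4}}, orient="index"
)

# get_loc treats the tuple as one opaque label, side-stepping .loc's
# tuple-as-hierarchical-key interpretation; iloc then takes the row by position
row = f.iloc[f.index.get_loc(("r", 1))]
print(list(row.items()))
```

This keeps it to a single positional lookup rather than the list-wrap-then-<code>iloc[0]</code> dance.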
|
<python><pandas><dataframe><indexing>
|
2024-07-31 20:26:21
| 2
| 60,468
|
sds
|
78,818,176
| 3,446,619
|
how to read Matlab duration object from Python?
|
<p>I create a Matlab duration object and save it to a .mat file:</p>
<pre><code>timeend = seconds(123);
save('time.mat', 'timeend', '-v7.3');
</code></pre>
<p>Then I read it from Python:</p>
<pre><code>with h5py.File('time.mat', 'r') as f:
var = f['timeend'][:]
print(list(var))
</code></pre>
<p>Then <code>var</code> is a np object:</p>
<pre><code>[array([3707764736, 2, 1, 1, 1,
1], dtype=uint32)]
</code></pre>
<p>How to convert this Numpy array to a timedelta object or an np.array of the time in seconds?</p>
|
<python><numpy><matlab><duration>
|
2024-07-31 20:04:41
| 1
| 645
|
Xin Niu
|
78,817,860
| 1,832,942
|
Piping python to file gives UnicodeEncodeError: 'charmap' codec can't encode character '...' in position ...: character maps to <undefined>
|
<p>In the Windows terminal, Python 3.x printing a Unicode character works fine; it correctly outputs 📂:</p>
<pre><code>python -c "print('📂')"
</code></pre>
<p>But piping the same command to a file:</p>
<pre><code>python -c "print('📂')" > log.txt
</code></pre>
<p>causes:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda3\envs\env6\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f4c2' in position 0: character maps to <undefined>
</code></pre>
<p>How to avoid this <code>UnicodeEncodeError</code>?</p>
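For reference: when stdout is redirected on Windows, Python falls back to the console's legacy code page (cp1252 here) rather than UTF-8. Besides setting <code>PYTHONUTF8=1</code> or <code>PYTHONIOENCODING=utf-8</code> in the environment, the stream can be switched from inside the script (Python 3.7+):

```python
import sys

# Force UTF-8 on stdout regardless of where it is redirected to.
sys.stdout.reconfigure(encoding="utf-8")
print("\U0001F4C2")  # the folder emoji from the question, now encodable
```

After <code>reconfigure</code>, the same <code>python -c "..." > log.txt</code> invocation writes the character as UTF-8 instead of raising <code>UnicodeEncodeError</code>.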
|
<python><windows><unicode><command-line-interface>
|
2024-07-31 18:22:34
| 1
| 14,828
|
Michael B. Currie
|
78,817,759
| 3,486,684
|
Extracting from a column containing a list of structs, using another column containing values a field of the structs much match
|
<p>Consider the example:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
[
pl.Series(
"id",
["alpha", "beta"],
),
pl.Series(
"s",
[
[{"x": 0, "y": "a"}, {"x": 1, "y": "b"}, {"x": 0, "y": "c"}],
[{"x": 0, "y": "b"}, {"x": 1, "y": "a"}, {"x": 1, "y": "c"}],
],
),
pl.Series("selector", [0, 1]),
]
)
print(df)
# shape: (2, 3)
# โโโโโโโโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฌโโโโโโโโโโโ
# โ id โ s โ selector โ
# โ --- โ --- โ --- โ
# โ str โ list[struct[2]] โ i64 โ
# โโโโโโโโโชโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโชโโโโโโโโโโโก
# โ alpha โ [{0,"a"}, {1,"b"}, {0,"c"}] โ 0 โ
# โ beta โ [{0,"b"}, {1,"a"}, {1,"c"}] โ 1 โ
# โโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโ
first_try = df.with_columns(
extracted=pl.col("s")
.list.eval(
pl.when(pl.element().struct.field("x").eq(pl.col("selector")))
.then(pl.element().struct.field("y"))
.otherwise(None)
)
.list.drop_nulls()
)
print(first_try)
# error: ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")`
other_try = df.join(
df.explode("s")
.filter(pl.col("s").struct.field("x").eq(pl.col("selector")))
.with_columns(extracted=pl.col("s").struct.field("y"))
.group_by("id")
.agg(pl.col("extracted")),
on="id",
)
print(other_try)
# shape: (2, 4)
# โโโโโโโโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโโโโโโ
# โ id โ s โ selector โ extracted โ
# โ --- โ --- โ --- โ --- โ
# โ str โ list[struct[2]] โ i64 โ list[str] โ
# โโโโโโโโโชโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโชโโโโโโโโโโโชโโโโโโโโโโโโโก
# โ alpha โ [{0,"a"}, {1,"b"}, {0,"c"}] โ 0 โ ["a", "c"] โ
# โ beta โ [{0,"b"}, {1,"a"}, {1,"c"}] โ 1 โ ["a", "c"] โ
# โโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโโโโโ
</code></pre>
<p><code>first_try</code> does not work, but <code>other_try</code> does. The error with <code>first_try</code> is:</p>
<pre><code>ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")`
</code></pre>
<p>Which led me to: <a href="https://github.com/pola-rs/polars/issues/7210" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/7210</a></p>
<p>Where the suggestion is to use <code>group_by</code>, which is what led me to <code>other_try</code>. I am wondering if I might have misunderstood the suggestion, though, or if there's another way? Perhaps something like how the "walrus operator" is used <a href="https://stackoverflow.com/a/78814327/3486684">here</a>?</p>
|
<python><python-polars>
|
2024-07-31 17:54:34
| 3
| 4,654
|
bzm3r
|
78,817,564
| 17,101,330
|
Unable to build pySFML - 'Time' is not a type identifier
|
<p>I'm unsuccessfully trying to build <strong>pySFML</strong> from <a href="https://github.com/intjelic/python-sfml" rel="nofollow noreferrer">https://github.com/intjelic/python-sfml</a> on my <strong>Windows</strong> machine with <strong>Python 3.12</strong> (aswell as 3.11, 3.10, 3.9).</p>
<p>I have downloaded <strong>SFML 2.3.2</strong> from <a href="https://www.sfml-dev.org/download/sfml/2.3.2/" rel="nofollow noreferrer">https://www.sfml-dev.org/download/sfml/2.3.2/</a> and have <strong>Visual Studio 22</strong> installed.</p>
<p>After extracting SFML-2.3.2 to <strong>C:\libraries</strong>, i have set the Environment-Variables:</p>
<pre><code>SFML_DIR = C:\libraries\SFML-2.3.2
SFML_HEADERS = C:\libraries\SFML-2.3.2\include
SFML_LIBRARIES = C:\libraries\SFML-2.3.2\lib
</code></pre>
<p>and added to Path:</p>
<pre><code>C:\libraries\SFML-2.3.2\bin
</code></pre>
<p>I then <strong>git-cloned</strong> the repo <a href="https://github.com/intjelic/python-sfml.git" rel="nofollow noreferrer">https://github.com/intjelic/python-sfml.git</a> into a <strong>Python 3.12 venv</strong>.</p>
<p>And installed the requirements:</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>(which installed <strong>Cython-Version 3.0.10</strong>)</p>
<p>after that I did:</p>
<pre><code>pip install --upgrade setuptools wheel
</code></pre>
<p>When I then at first tried:</p>
<pre><code>pip install .
</code></pre>
<p>I got:</p>
<pre><code>Error compiling Cython file:
------------------------------------------------------------
...
from libcpp.string cimport string
cimport time
^
------------------------------------------------------------
include\Includes\sfml\sfml.pxd:9:8: 'time.pxd' not found
</code></pre>
<p>but the file is in the directory and there is also an empty <code>__init__.py</code> file, but anyway I changed it to:</p>
<pre><code>from . cimport Time
</code></pre>
<p>It now finds the file but I get:</p>
<pre><code>Error compiling Cython file:
------------------------------------------------------------
...
from sfml cimport Time
cdef extern from "SFML/System.hpp" namespace "sf::Time":
cdef Time Zero
^
------------------------------------------------------------
include\Includes\sfml\Time.pxd:10:9: 'Time' is not a type identifier
</code></pre>
<p>The actual traceback is much longer (the same as above for every other class) and can be seen here: <a href="https://raw.githubusercontent.com/ai-cr/pySFML-traceback/main/pySFML_traceback.txt" rel="nofollow noreferrer">https://raw.githubusercontent.com/ai-cr/pySFML-traceback/main/pySFML_traceback.txt</a></p>
<p>I understand that the project is no longer maintained, but shouldn't a specific release with a specific Python/Cython version work, as it worked in the past?</p>
<p>Did anyone manage to install pySFML? Help would be much appreciated..!</p>
|
<python><cython><sfml>
|
2024-07-31 17:00:09
| 1
| 530
|
jamesB
|
78,817,555
| 534,238
|
How to avoid `setattr` (and `getattr`) when using Python? And is it necessary to avoid
|
<p>If I want to add a value to a field in a protocol buffer that isn't known at compile time, I'm currently doing <code>setattr</code>. I normally don't like using <code>setattr</code> because it seems less secure. But when I know the object is a protobuf, I'm thinking it is fine, because the value I'm setting it to must be of the type that the protobuf allows. So maybe it isn't really unsafe??</p>
<p>Let me explain by example. First, assume I have this protobuf:</p>
<pre><code>message Person {
    string first_name = 1;
    string last_name = 2;
    int32 age = 3;
}
</code></pre>
<p>Then I have some code that uses the above protobuf:</p>
<pre class="lang-py prettyprint-override"><code>from person_pb2 import Person
my_info = {"first_name": "Mike", "last_name": "example", "age": 999}
me = Person()
for field, value in my_info.items():
setattr(me, field, value)
</code></pre>
<p>This is a very flexible way to handle the protobuf. I cannot, for instance, specify it like I would in a <code>dict</code>, saying <code>me[field] = value</code>. Yet, <code>setattr(me, field, value)</code> is perfectly safe. If the value is of the wrong type for the field/attribute when using <code>setattr</code>, I'll get an error: <code>TypeError: bad argument type for built-in operation</code></p>
<p>So, I'm tempted to say that for protobufs, using <code>setattr</code> is completely fine and, in fact, is really the only way to programmatically add values to the protobuf fields. Is this correct? Is there a better or safer way? I cannot do something like <code>me.first_name = "Mike"</code> because I need it to be programmatic.</p>
|
<python><protocol-buffers><setattr>
|
2024-07-31 16:58:39
| 1
| 3,558
|
Mike Williamson
|
78,817,543
| 8,382,067
|
GridDB TQL Invalid Column
|
<p>I'm currently working with GridDB for a project involving IoT data, and I'm facing an issue with executing SQL-like queries using GridDB's TQL (Time Series SQL-like Query Language).</p>
<p>Here is a brief description of what I am trying to achieve:</p>
<p>I have a container in GridDB which stores IoT sensor data.
I am trying to query this data using TQL to fetch records based on certain conditions.
Here is a sample of my container schema and the data insertion code:</p>
<pre><code>import griddb_python as griddb
factory = griddb.StoreFactory.get_instance()
gridstore = factory.get_store(
host='127.0.0.1',
port=10001,
cluster_name='defaultCluster',
username='admin',
password='admin'
)
# Define container schema
conInfo = griddb.ContainerInfo(
name="sensorData",
column_info_list=[
["TimeStamp", griddb.Type.TIMESTAMP],
["Sensor_id", griddb.Type.STRING],
["Value", griddb.Type.DOUBLE]
],
type=griddb.ContainerType.TIME_SERIES,
row_key=True
)
# Create container
ts = gridstore.put_container(conInfo)
ts.set_auto_commit(False)
# Insert sample data
import datetime
ts.put([datetime.datetime.now(), "sensor_1", 25.5])
ts.put([datetime.datetime.now(), "sensor_2", 26.7])
ts.commit()
</code></pre>
<p>Now, I am trying to execute the following TQL query to fetch records:</p>
<pre><code>query = ts.query("SELECT * FROM sensorData WHERE value > 26")
rs = query.fetch()
while rs.has_next():
data = rs.next()
print(data)
</code></pre>
<p>I'm getting the following error though:</p>
<pre><code>InvalidColumnException: Column (value) not found
</code></pre>
<p>I've checked the schema and <code>Value</code> exists, so I'm not sure if it is talking about some other column or whether something is wrong with <code>value</code> specifically. Any help would be appreciated.</p>
|
<python><iot><griddb>
|
2024-07-31 16:55:07
| 1
| 2,099
|
Josh Adams
|
78,817,443
| 11,505,680
|
EngFormatter for minor ticks on a log scale
|
<p>I want to change the formatting of tick labels without changing which ones are displayed. This code gets me halfway there:</p>
<pre class="lang-py prettyprint-override"><code>plt.plot(range(5), [5e3, 7e3, 9e3, 11e3, 13e3], marker='o')
plt.yscale('log')
plt.gca().yaxis.set_major_formatter(ticker.EngFormatter())
</code></pre>
<p><a href="https://i.sstatic.net/H3oN2iWO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3oN2iWO.png" alt="Major tick is formatted as an integer; one minor tick is in scientific notation; other minor ticks are unlabeled" /></a></p>
<p>But if I try to change the formatting of the minor tick, I get a bunch of additional labels that I don't necessarily want:</p>
<pre class="lang-py prettyprint-override"><code>plt.plot(range(5), [5e3, 7e3, 9e3, 11e3, 13e3], marker='o')
plt.yscale('log')
plt.gca().yaxis.set_major_formatter(ticker.EngFormatter())
plt.gca().yaxis.set_minor_formatter(ticker.EngFormatter())
</code></pre>
<p><a href="https://i.sstatic.net/pBTP82nf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBTP82nf.png" alt="All ticks are labeled as integers" /></a></p>
<p>To recap, how do I get a y-axis where the labels are '10 k' and '6 k'?</p>
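One possible workaround, sketched under the assumption that you are willing to choose the labelled mantissas yourself (6 here, matching the figure), is to wrap <code>EngFormatter</code> in a <code>FuncFormatter</code> that blanks every other minor tick:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

eng = ticker.EngFormatter()

def sparse_eng(y, pos):
    # Label only minor ticks whose mantissa is 6 (an arbitrary choice to
    # reproduce the '6 k' label from the question); blank everything else.
    if y <= 0:
        return ""
    mantissa = y / 10 ** np.floor(np.log10(y))
    return eng(y) if np.isclose(mantissa, 6) else ""

plt.plot(range(5), [5e3, 7e3, 9e3, 11e3, 13e3], marker="o")
plt.yscale("log")
plt.gca().yaxis.set_major_formatter(eng)
plt.gca().yaxis.set_minor_formatter(ticker.FuncFormatter(sparse_eng))
```

The downside is that the "which minors get labels" decision is hard-coded rather than delegated to matplotlib's default density logic, so it is a sketch rather than a general solution.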
|
<python><matplotlib>
|
2024-07-31 16:26:41
| 1
| 645
|
Ilya
|
78,817,421
| 1,028,270
|
How do I merely hide some properties from a dataclass's constructor but still access them in methods?
|
<p>I just want to prevent users from being able to set some properties, but all of the options I see don't seem to work.</p>
<p>Using <code>__post_init__</code> and <code>init=False</code> I'm not able to access and update those properties from methods I have attached to the class.</p>
<p>When I define them with <code>@property</code> like this I get <code>AttributeError: property 'my_prop' of 'MyClass' object has no setter</code>:</p>
<pre><code>@property
def my_prop(self) -> str:
return self.my_prop
def my_method(self):
self.my_prop = "sdfsdfsd"
</code></pre>
<p>Is there no simple way to merely "hide" properties so users can't mistakenly set them in the constructor? I have a dataclass with a handful of properties, and only one that I want this behavior for.</p>
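For comparison: in a quick test here, <code>field(init=False)</code> together with a default does stay readable and writable from methods. A minimal sketch (hypothetical names):

```python
from dataclasses import dataclass, field

@dataclass
class MyClass:
    visible: str = "hello"
    # Excluded from __init__, but an ordinary attribute otherwise,
    # so methods can read and assign it freely.
    my_prop: str = field(init=False, default="")

    def my_method(self):
        self.my_prop = "sdfsdfsd"

obj = MyClass()
obj.my_method()
print(obj.my_prop)       # "sdfsdfsd"
# MyClass(my_prop="x")   # TypeError: unexpected keyword argument 'my_prop'
```

Users who try to pass <code>my_prop</code> to the constructor get a <code>TypeError</code>, while methods keep full access; no <code>@property</code> boilerplate is needed for the one hidden field.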
|
<python><python-dataclasses>
|
2024-07-31 16:21:30
| 1
| 32,280
|
red888
|
78,817,346
| 6,166,453
|
How to resolve the crewai error: Input should be a valid dictionary or instance of BaseAgent?
|
<p>I am using crewai to set up 3 agents with one as manager agent and 2 worker agents. This works perfectly fine when I use a sequential process but when I switch to hierarchical processing, I see the following error</p>
<pre><code>manager_agent
Input should be a valid dictionary or instance of BaseAgent [type=model_type, input_value=<bound method memoize.<lo... object at 0x10bd9e390>>, input_type=method]
For further information visit https://errors.pydantic.dev/2.8/v/model_type
</code></pre>
<p>Here is the code</p>
<pre><code>from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI
import os
# Uncomment the following line to use an example of a custom tool
# from sample.tools.custom_tool import MyCustomTool
# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool
load_dotenv()
azure_llm = AzureChatOpenAI(
azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
api_key=os.environ.get("AZURE_OPENAI_KEY"),
api_version=os.environ.get("AZURE_OPENAI_VERSION"),
)
@CrewBase
class SampleCrew():
"""Sample crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def manager(self) -> Agent:
return Agent(
config=self.agents_config['manager'],
# tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
verbose=True,
allow_delegation=True,
llm=azure_llm
)
@agent
def field_engineer(self) -> Agent:
return Agent(
config=self.agents_config['field_engineer'],
# tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
verbose=True,
llm=azure_llm
)
@agent
def data_scientist(self) -> Agent:
return Agent(
config=self.agents_config['data_scientist'],
verbose=True,
llm=azure_llm
)
@task
def manager_task(self) -> Task:
return Task(
config=self.tasks_config['manager_task'],
agent=self.manager()
)
@task
def field_engineer_task(self) -> Task:
return Task(
config=self.tasks_config['field_engineer_task'],
agent=self.field_engineer()
)
@task
def data_scientist_task(self) -> Task:
return Task(
config=self.tasks_config['data_scientist_task'],
agent=self.data_scientist(),
output_file='report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the Sample crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
# process=Process.sequential,
verbose=2,
manager_agent=self.manager,
memory=True,
# manager_llm=azure_llm,
process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
)
</code></pre>
|
<python><openai-api><crewai>
|
2024-07-31 16:03:54
| 2
| 1,073
|
Diablo3093
|
78,817,335
| 3,598,205
|
ffmpeg python causing deadlock
|
<p>I am facing issues using ffmpeg-python to process camera frames. My first approach with <code>process.communicate()</code> worked well, but has latency issues.</p>
<pre><code>process = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.filter(<filter_params>)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True)
)
out, err = process.communicate(input=img.tobytes())
output_image = np.frombuffer(out, np.uint8).reshape((height, width, channels))
</code></pre>
<p>To reduce the latency, I'm trying to keep the ffmpeg process open and feed in camera frames for processing. This runs fine for a couple of minutes with acceptable latency values but eventually ends up in a deadlock. What is the best way to fix this?</p>
<pre><code>import cv2
import numpy as np
import math
import ffmpeg
def start_ffmpeg_process_async(width, height):
return (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.filter('<filter variables>')
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True)
)
def main():
cap = cv2.VideoCapture(0)
if not cap.isOpened():
print("Camera can't open \nExiting...")
return -1
ffmpeg_process_async = start_ffmpeg_process_async(cap.get(cv2.CAP_PROP_FRAME_WIDTH),
cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
try:
while True:
success, img = cap.read()
if success:
height, width, channels = img.shape
ffmpeg_process_async.stdin.write(img.tobytes())
raw_output = ffmpeg_process_async.stdout.read(width * height * channels)
output_image = np.frombuffer(raw_output, np.uint8).reshape((height, width, channels))
cv2.imshow('Webcam', output_image)
else:
print ("Camera read error")
if cv2.waitKey(1) == ord('q'):
print ("Exiting . . .")
break
finally:
print ("Finalizing . . .")
ffmpeg_process_async.stdin.close()
ffmpeg_process_async.wait()
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
</code></pre>
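One common cause of exactly this "runs for a while, then freezes" pattern is the unread stderr pipe: <code>pipe_stderr=True</code> is requested but stderr is never consumed, so once ffmpeg fills the OS pipe buffer with log output it blocks, and the <code>stdin.write</code>/<code>stdout.read</code> loop deadlocks with it. A minimal, portable sketch of draining stderr in a background thread (using a <code>python -c</code> child as a stand-in for the ffmpeg process):

```python
import subprocess
import sys
import threading

def drain(pipe):
    # Read and discard chunks until EOF so the OS pipe buffer never fills.
    for _ in iter(lambda: pipe.read(4096), b""):
        pass
    pipe.close()

# Stand-in child: floods stderr (like ffmpeg's log output), then writes stdout.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stderr.write('x' * 1_000_000); sys.stdout.write('frame')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
threading.Thread(target=drain, args=(child.stderr,), daemon=True).start()

out = child.stdout.read()  # without the drain thread this read can hang forever
child.wait()
print(out)                 # b'frame'
```

With ffmpeg-python the equivalent would be starting a drain thread on <code>ffmpeg_process_async.stderr</code> right after <code>run_async</code>, or simply not passing <code>pipe_stderr=True</code> at all so ffmpeg's log output goes to the terminal instead of an unread pipe.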
|
<python><ffmpeg><subprocess><ffmpeg-python>
|
2024-07-31 16:02:02
| 0
| 457
|
sa_penguin
|
78,817,193
| 12,016,688
|
How is "type" not a keyword in Python?
|
<p>In Python 3.12 we have type aliases like this:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.12.4+ (heads/3.12:99bc8589f0, Jul 27 2024, 11:20:07) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> type S = str
>>> S
S
</code></pre>
<p>By this syntax I assumed that, from now on, the word <code>type</code> is considered a keyword, but it's not:</p>
<pre class="lang-py prettyprint-override"><code>>>> type = 2
>>>
</code></pre>
<p>and also:</p>
<pre><code>>>> import keyword
>>> keyword.iskeyword('type')
False
</code></pre>
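As I understand it, <code>type</code> (like <code>match</code> and <code>case</code>) is a <em>soft</em> keyword: it acts as a keyword only in the specific grammar position of a <code>type</code> alias statement, and remains an ordinary name everywhere else — which is why <code>type = 2</code> still works. The <code>keyword</code> module tracks this separately from hard keywords:

```python
import keyword

# Hard keywords (like `class` or `def`) can never be used as names.
print(keyword.iskeyword("type"))   # False — `type` is not a hard keyword

# Soft keywords are keywords only in particular grammar contexts;
# on Python 3.12+ this list includes 'type' (alongside 'match', 'case', '_').
print(keyword.softkwlist)
```

So <code>keyword.iskeyword('type')</code> returning <code>False</code> is expected; <code>keyword.issoftkeyword('type')</code> is the check that reflects the new syntax on 3.12+.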
|
<python><python-typing><keyword>
|
2024-07-31 15:32:08
| 1
| 2,470
|
Amir reza Riahi
|
78,817,087
| 5,858,995
|
Programatically get name of dataclass property
|
<p>Let's say I have a dataclass like this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(init=True)
class TestDataclass:
property1: str
</code></pre>
<p>I want to fetch the name of this property like <code>TestDataclass.property1.__name__</code>, however this throws an <code>AttributeError</code>, saying that the class/type does not have this attribute.</p>
<p>The reason I want this is that I use dependency injection based on the attribute names, and I don't want to use hardcoded strings in case someone decides to change them in the future.
Here's the difference:</p>
<pre class="lang-py prettyprint-override"><code>di["property1"] = "THIS IS BAD"
di[TestDataclass.property1.__name__] = "I wish this worked"
</code></pre>
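One refactor-safe alternative, as a sketch: `dataclasses.fields()` returns `Field` objects whose `.name` attribute carries the attribute name at runtime, so a rename by a future maintainer is picked up automatically:

```python
from dataclasses import dataclass, fields

@dataclass(init=True)
class TestDataclass:
    property1: str

# field names are available programmatically, no hardcoded strings needed
names = [f.name for f in fields(TestDataclass)]
print(names)  # -> ['property1']

di = {}
di[fields(TestDataclass)[0].name] = "works without a string literal"
```

This doesn't give the exact `TestDataclass.property1.__name__` spelling, but it keeps the dependency-injection keys tied to the dataclass definition.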
|
<python><python-dataclasses>
|
2024-07-31 15:09:10
| 0
| 462
|
Dzeri96
|
78,816,744
| 14,190,819
|
Failed while converting '.h5' model into '.tflite' using TensorFlow 2.0
|
<p>I get an error when converting the model from <strong>.h5</strong> to <strong>.tflite</strong>.</p>
<p><strong>Code I'm using:</strong></p>
<pre><code>import torch
import torch.nn as nn
import detectron2
from detectron2.modeling import build_model
from detectron2.modeling import build_model
from torch.ao.quantization import (
get_default_qconfig_mapping,
get_default_qat_qconfig_mapping,
QConfigMapping,
)
import torch.ao.quantization.quantize_fx as quantize_fx
import copy
model_path = '/kaggle/input/vehicleobjectdetection/pytorch/v1/2/model_final.h5'
cfg.MODEL.WEIGHTS = model_path
# Instantiate the model (adjust parameters as necessary)
model = build_model(cfg)
from keras.models import load_model
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model_path)
tfmodel = converter.convert()
</code></pre>
<p>Getting this error:</p>
<pre><code>**AttributeError** Traceback (most recent call last)
Cell In[54], line 28
25 import tensorflow as tf
27 converter = tf.lite.TFLiteConverter.from_keras_model(model_path)
---> 28 **tfmodel = converter.convert()**
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/lite.py:1139, in _export_metrics.<locals>.wrapper(self, *args, **kwargs)
1136 @functools.wraps(convert_func)
1137 def wrapper(self, *args, **kwargs):
1138 # pylint: disable=protected-access
-> 1139 return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/lite.py:1093, in TFLiteConverterBase._convert_and_export_metrics(self, convert_func, *args, **kwargs)
1091 self._save_conversion_params_metric()
1092 start_time = time.process_time()
-> 1093 result = convert_func(self, *args, **kwargs)
1094 elapsed_time_ms = (time.process_time() - start_time) * 1000
1095 if result:
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/lite.py:1606, in TFLiteKerasModelConverterV2.convert(self)
1602 if saved_model_convert_result:
1603 return saved_model_convert_result
1605 graph_def, input_tensors, output_tensors, frozen_func = (
-> 1606 self._freeze_keras_model()
1607 )
1609 graph_def = self._optimize_tf_model(
1610 graph_def, input_tensors, output_tensors, frozen_func
1611 )
1613 return super(TFLiteKerasModelConverterV2, self).convert(
1614 graph_def, input_tensors, output_tensors
1615 )
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py:215, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
213 except Exception as error:
214 report_error_message(str(error))
--> 215 raise error from None
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py:205, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
202 @functools.wraps(func)
203 def wrapper(*args, **kwargs):
204 try:
--> 205 return func(*args, **kwargs)
206 except ConverterError as converter_error:
207 if converter_error.errors:
File /opt/conda/lib/python3.10/site-packages/tensorflow/lite/python/lite.py:1543, in TFLiteKerasModelConverterV2._freeze_keras_model(self)
1537 input_signature = None
1538 # If the model's call is not a `tf.function`, then we need to first get its
1539 # input signature from `model_input_signature` method. We can't directly
1540 # call `trace_model_call` because otherwise the batch dimension is set
1541 # to None.
1542 # Once we have better support for dynamic shapes, we can remove this.
-> 1543 if not isinstance(self._keras_model.call, _def_function.Function):
1544 # Pass `keep_original_batch_size=True` will ensure that we get an input
1545 # signature including the batch dimension specified by the user.
1546 # TODO(b/169898786): Use the Keras public API when TFLite moves out of TF
1547 input_signature = _model_input_signature(
1548 self._keras_model, keep_original_batch_size=True
1549 )
1551 # TODO(b/169898786): Use the Keras public API when TFLite moves out of TF
AttributeError: 'str' object has no attribute 'call'
</code></pre>
|
<python><tensorflow><keras><pytorch><tflite>
|
2024-07-31 13:52:09
| 1
| 1,323
|
Muhammad Ammar
|
78,816,685
| 13,946,204
|
How to set method for instance dynamically in Python?
|
<p>I need to update a method on an instance at runtime, and my hypothetical code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def set_method(where_to_set: str):
def the_method(self):
print(where_to_set, '->', self)
return the_method
class A:
one = set_method('as class attr `one`')
two = None
def __init__(self, necessary_variable):
self.two = set_method(f'as instance overriden attr `two` with {necessary_variable}')
self.three = set_method(f'as instance attr `three` with {necessary_variable}')
a = A('I`m important')
try:
a.one()
except TypeError as e:
print('one ->', e)
try:
a.two()
except TypeError as e:
print('two ->', e)
try:
a.three()
except TypeError as e:
print('three ->', e)
</code></pre>
<p>And output will be:</p>
<pre><code>as class attr `one` -> <__main__.A object at 0x7f0a1638e250>
two -> set_method.<locals>.the_method() missing 1 required positional argument: 'self'
three -> set_method.<locals>.the_method() missing 1 required positional argument: 'self'
</code></pre>
<p>Here, method <code>one</code> works as expected, but <code>two</code> and <code>three</code> raise <code>TypeError</code>.</p>
<p>However, <code>necessary_variable</code> is only available at instantiation time, so I can't set the method outside of <code>__init__</code> or before the constructor is called.<br />
So the question is: how do I set the method on an instance inside the <code>__init__</code> constructor, and why does this happen at all? All these methods are attached to the instance, so why is <code>self</code> not found?</p>
<p>PS: in real life, <code>set_method</code> is part of an external library, so it cannot be changed.</p>
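The reason `self` goes missing: binding happens through the descriptor protocol, which only runs for functions stored on the *class*; a plain function stored on an instance attribute is called as-is, with no implicit first argument. One common workaround is `types.MethodType`, sketched here with a stand-in for the external `set_method`:

```python
import types

def set_method(where_to_set: str):
    # stand-in for the external library's factory
    def the_method(self):
        return f"{where_to_set} -> {self!r}"
    return the_method

class A:
    def __init__(self, necessary_variable):
        # functions assigned to an instance are plain functions, not bound
        # methods; types.MethodType performs the binding explicitly
        self.two = types.MethodType(
            set_method(f"instance `two` with {necessary_variable}"), self
        )

a = A("important")
print(a.two())  # self is now supplied automatically
```

The factory itself is untouched; only the assignment inside `__init__` wraps its return value.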
|
<python>
|
2024-07-31 13:38:48
| 0
| 9,834
|
rzlvmp
|
78,816,652
| 2,287,458
|
Multiply polars columns of number type with object type (which supports __mul__)
|
<p>I have the following code.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
class Summary:
def __init__(self, value: float, origin: str):
self.value = value
self.origin = origin
def __repr__(self) -> str:
return f'Summary({self.value},{self.origin})'
def __mul__(self, x: float | int) -> 'Summary':
return Summary(self.value * x, self.origin)
def __rmul__(self, x: float | int) -> 'Summary':
return self * x
mapping = {
'CASH': Summary( 1, 'E'),
'ITEM': Summary(-9, 'A'),
'CHECK': Summary(46, 'A'),
}
df = pl.DataFrame({'quantity': [7, 4, 10], 'type': mapping.keys(), 'summary': mapping.values()})
</code></pre>
<p>The dataframe <code>df</code> looks as follows.</p>
<pre><code>shape: (3, 3)
โโโโโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโโโโโโ
โ quantity โ type โ summary โ
โ --- โ --- โ --- โ
โ i64 โ str โ object โ
โโโโโโโโโโโโชโโโโโโโโชโโโโโโโโโโโโโโโโก
โ 7 โ CASH โ Summary(1,E) โ
โ 4 โ ITEM โ Summary(-9,A) โ
โ 10 โ CHECK โ Summary(46,A) โ
โโโโโโโโโโโโดโโโโโโโโดโโโโโโโโโโโโโโโโ
</code></pre>
<p>Especially, the <code>summary</code> column contains a <code>Summary</code> class object, which supports multiplication. Now, I'd like to multiply this column with the <code>quantity</code> column.</p>
<p>However, the naive approach raises an error.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.col('quantity').mul(pl.col('summary')).alias('qty_summary'))
</code></pre>
<pre class="lang-py prettyprint-override"><code>SchemaError: failed to determine supertype of i64 and object
</code></pre>
<p>Is there a way to multiply these columns?</p>
|
<python><dataframe><multiplication><python-polars>
|
2024-07-31 13:33:07
| 2
| 3,591
|
Phil-ZXX
|
78,816,599
| 10,003,538
|
How do I pass a PyDub AudioSegment object into a pyannote.audio Pipeline?
|
<p>Here is my code so far:</p>
<pre><code># SETUP
from pydub import AudioSegment
import torch
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained(
"pyannote/speaker-diarization-3.1",
use_auth_token="hf_#####")
pipeline.to(torch.device("cuda"))
audio = AudioSegment.from_file(file_path)
# DO SOMETHING
audio.export("temp_file.wav", format="wav") # I want to skip this line
diarization = pipeline("temp_file.wav")
</code></pre>
<p>I want to skip the step of exporting to a temp file before the Pipeline processes it, so I don't need to bother removing the temp file afterwards.</p>
|
<python><pytorch><pydub><audiosegment>
|
2024-07-31 13:22:15
| 1
| 1,225
|
Chau Loi
|
78,816,264
| 9,808,792
|
How to Optimize Rolling Correlation Calculation Across All Column Pairs in a Large Pandas DataFrame
|
<p>I want to compute the rolling Pearson correlation between all pairs of columns in a large Pandas DataFrame. Here's the current implementation I am using:</p>
<pre><code>import itertools
import pandas as pd
from tqdm import tqdm
def pairwise_rolling_correlations_naive(df, window_size):
column_pairs = list(itertools.combinations(df.columns, 2))
results = {}
for col1, col2 in tqdm(column_pairs):
results[(col1, col2)] = df[col1].rolling(window=window_size).corr(df[col2])
return pd.DataFrame(results) # DataFrame with multi-level column index
</code></pre>
<p>While the <code>.rolling().corr()</code> method itself is quite fast, the overall performance is not satisfactory when dealing with dataframes that contain hundreds of columns (and thousands of rows).</p>
<p>Are there more efficient approaches or optimizations that can be applied to this function to handle large dataframes more effectively?</p>
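One standard optimization is to compute correlations from rolling moments, which turns each window's work into a handful of vectorized operations; a sketch for one pair (the same moments can be batched across all columns at once, removing the per-pair Python overhead):

```python
import numpy as np
import pandas as pd

def rolling_corr_from_moments(x: pd.Series, y: pd.Series, w: int) -> pd.Series:
    # Pearson correlation written in terms of rolling moments:
    #   corr = (E[xy] - E[x]E[y]) / (std(x) * std(y))
    # The ddof choice cancels in the ratio, so ddof=0 matches pandas' corr.
    mx, my = x.rolling(w).mean(), y.rolling(w).mean()
    cov = (x * y).rolling(w).mean() - mx * my
    return cov / (x.rolling(w).std(ddof=0) * y.rolling(w).std(ddof=0))

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["a", "b", "c"])
manual = rolling_corr_from_moments(df["a"], df["b"], 10)
builtin = df["a"].rolling(10).corr(df["b"])
```

To batch fully: compute every column's rolling mean and std once, plus the rolling mean of each pairwise product, rather than issuing one `.rolling().corr()` call per pair.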
|
<python><pandas><dataframe><optimization>
|
2024-07-31 12:13:46
| 1
| 3,830
|
micycle
|
78,816,181
| 13,806,869
|
How can I link the records in the training dataset to the corresponding model predictions?
|
<p>Using scikit-learn, I've set up a regression model to predict customers' maximum spend per transaction. The dataset I'm using looks a bit like this; the target column is maximum spend per transaction during the previous year:</p>
<pre><code>customer_number | metric_1 | metric_2 | target
----------------|----------|----------|-------
111 | A | X | 15
222 | A | Y | 20
333 | B | Y | 30
</code></pre>
<p>I split the dataset into training & testing sets, one-hot encode the features, train the model, and make some test predictions:</p>
<pre><code>target = pd.DataFrame(dataset, columns = ["target"])
features = dataset.drop("target", axis = 1)
train_features, test_features, train_target, test_target = train_test_split(features, target, test_size = 0.25)
train_features = pd.get_dummies(train_features)
test_features = pd.get_dummies(test_features)
model = RandomForestRegressor()
model.fit(X = train_features, y = train_target)
test_prediction = model.predict(X = test_features)
</code></pre>
<p>I can output various measures of the model's accuracy (mean absolute error, mean squared error, etc.) using the relevant functions in scikit-learn. However, I'd like to be able to tell which customers' predictions are the most inaccurate. So I want to be able to create a dataframe which looks like this:</p>
<pre><code>customer_number | target | prediction | error
----------------|--------|----------- |------
111 | 15 | 17 | 2
222 | 20 | 19 | 1
333 | 30 | 50 | 20
</code></pre>
<p>I can use this to investigate if there is any correlation between the features and the model making inaccurate predictions. In this example, I can see that customer 333 has the biggest error by far, so I could potentially infer that customers with metric_1 = B end up with less accurate predictions.</p>
<p>I think I can calculate errors like this (please correct me if I'm wrong on this), but I don't know how to tie them back to customer number.</p>
<pre><code>error = abs(test_target - test_prediction)
</code></pre>
<p>How can I get the desired result?</p>
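One approach, as a sketch: when `train_test_split` is given pandas objects, the returned pieces keep the original row index, so `test_target.index` identifies exactly which customers are in the test set and can be used to look their numbers back up (here a placeholder array stands in for the real model's predictions):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# toy stand-in for the real dataset
df = pd.DataFrame({
    "customer_number": [111, 222, 333, 444],
    "metric_1": [1, 1, 2, 2],
    "target": [15, 20, 30, 25],
})
features = df.drop(columns=["target", "customer_number"])
target = df["target"]
train_X, test_X, train_y, test_y = train_test_split(
    features, target, test_size=0.5, random_state=0
)

# predictions come back as a bare array, but test_y still carries the index
test_prediction = np.full(len(test_y), 20.0)  # placeholder for model.predict(test_X)
result = pd.DataFrame({
    "customer_number": df.loc[test_y.index, "customer_number"],
    "target": test_y,
    "prediction": test_prediction,
})
result["error"] = (result["target"] - result["prediction"]).abs()
```

The same pattern works with `customer_number` left in the split (dropped only before fitting), as long as the index is preserved.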
|
<python><pandas><scikit-learn><regression>
|
2024-07-31 12:00:01
| 2
| 521
|
SRJCoding
|
78,815,926
| 2,287,458
|
Apply Python dict to Polars column (using replace_strict)
|
<p>I have a <code>dict</code> and a polars <code>DataFrame</code> and want to map one column to the values of the dict:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'label': ['AA', 'BB', 'AA', 'CC'],
'type': ['CASH', 'ITEM', 'CHECK', 'CHECK'],
})
mapping = {
'CASH': {'qty': 1, 'origin': 'E'},
'ITEM': {'qty': -9, 'origin': 'A'},
'CHECK': {'qty': 46, 'origin': 'A'},
}
df.with_columns(pl.col('type').replace_strict(mapping).alias('mapped'))
</code></pre>
<p>This outputs</p>
<pre><code>shape: (4, 3)
โโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโโ
โ label โ type โ mapped โ
โ --- โ --- โ --- โ
โ str โ str โ struct[2] โ
โโโโโโโโโชโโโโโโโโชโโโโโโโโโโโโก
โ AA โ CASH โ {1,"E"} โ
โ BB โ ITEM โ {-9,"A"} โ
โ AA โ CHECK โ {46,"A"} โ
โ CC โ CHECK โ {46,"A"} โ
โโโโโโโโโดโโโโโโโโดโโโโโโโโโโโโ
</code></pre>
<p>The problem is, it only takes the values of the dict and entirely drops the keys.</p>
<p>So I tried using <code>replace_strict(mapping, return_dtype=pl.Object)</code>, but this gives error</p>
<pre><code>File site-packages\polars\lazyframe\frame.py:2026,
in LazyFrame.collect(self, type_coercion, predicate_pushdown, projection_pushdown,
simplify_expression, slice_pushdown, comm_subplan_elim,
comm_subexpr_elim, cluster_with_columns, no_optimization,
streaming, engine, background, _eager, **_kwargs)
2024 # Only for testing purposes
2025 callback = _kwargs.get("post_opt_callback", callback)
-> 2026 return wrap_df(ldf.collect(callback))
InvalidOperationError: casting from Int64 to Unknown not supported
</code></pre>
<p>Ultimately, the output I am after is the table below. How do I achieve this?</p>
<pre><code>shape: (4, 3)
โโโโโโโโโฌโโโโโโโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ label โ type โ mapped โ
โ --- โ --- โ --- โ
โ str โ str โ object โ
โโโโโโโโโชโโโโโโโโชโโโโโโโโโโโโโโโโโโโโโโโโโโก
โ AA โ CASH โ {"qty":1,"origin":"E"} โ
โ BB โ ITEM โ {"qty":-9,"origin":"A"} โ
โ AA โ CHECK โ {"qty":46,"origin":"A"} โ
โ CC โ CHECK โ {"qty":46,"origin":"A"} โ
โโโโโโโโโดโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโ
</code></pre>
<p><em>I am using polars==1.3.0</em></p>
|
<python><dataframe><dictionary><replace><python-polars>
|
2024-07-31 11:02:41
| 1
| 3,591
|
Phil-ZXX
|
78,815,656
| 22,221,987
|
How to add priority task to celery queue without fetching disabling
|
<p>I have a celery worker, with <strong>concurrency set to 1</strong>, which takes tasks from RabbitMQ. I want to make a system with only one queue in a single-concurrency setup, so all tasks are added to the main queue.</p>
<p>About the task: it's just a loop where we update the state with <code>task.update_state()</code>.</p>
<pre><code>@c_app.task(bind=True)
def task(self):
n = 20
for i in range(0, n):
self.update_state(state='PROGRESS', meta={'done': i, 'total': n})
print('working')
time.sleep(1)
return n
</code></pre>
<p>In parallel I have two services.</p>
<ul>
<li><strong>Celery-beat</strong> service, which creates 1000 tasks (amount as example).</li>
<li><strong>FastAPI</strong> service, which provides two endpoints:
<ul>
<li>create task with TOP priority and add it to the main queue</li>
<li>get actual info about active task and scheduled tasks (by using <code>inspect()</code>)</li>
</ul>
</li>
</ul>
<p>So, FastAPI can be asked about:</p>
<ul>
<li>current active task progress - <code>inspect().active()</code></li>
<li>how many tasks remain - <code>inspect().scheduled()</code></li>
</ul>
<p><strong>Question</strong>: How can I add a task with a higher priority to a queue that already has tasks scheduled to the worker?</p>
<p>Here is what I've tried:</p>
<p>Celery config:</p>
<pre><code>from celery.schedules import crontab
from kombu import Queue, Exchange
broker_url = 'amqp://guest:guest@localhost//'
result_backend = 'db+postgresql://admin:root@localhost/celery_test_db'
worker_concurrency = 1
timezone = 'Europe/Moscow'
enable_utc = False
result_extended = True
beat_schedule = {
'add-5-tasks-every-month': {
'task': 'celery_app.tasks.add_5_tasks',
'options': {'queue': 'celery_q'},
'schedule': 20.0
},
}
broker_transport_options = {'queue_order_strategy': 'priority'}
task_queues = (
Queue("celery_q", Exchange("celery_q"), routing_key="celery_q", queue_arguments={'x-max-priority': 9}),
)
</code></pre>
<p>Here is my Celery-Beat task for adding a large number of tasks with low priority:</p>
<pre><code>@c_app.task
def add_5_tasks():
for _ in range(800):
task.apply_async(countdown=1, queue='celery_q', priority=1)
</code></pre>
<p>Here is my FastAPI endpoint for adding a high-priority task, which, as I expect, should be executed right after the current task completes.</p>
<pre><code>@f_app.post("/add-task/")
def add_task():
task_ = task.apply_async(priority=9, countdown=1, queue='celery_q')
print('Task added with high priority:', task_.id)
return {'task_id': task_.id,
'message': 'Task added with high priority'}
</code></pre>
<p>And the "core" of the "current_progress" end-point which returns current progress and scheduled tasks:</p>
<pre><code>i = c_app.control.inspect()
scheduled = i.scheduled()
reserved = i.reserved()
active = i.active()
</code></pre>
<p><strong>Problem</strong>: the prioritisation doesn't work as I expected.<br />
It works only if I add these settings to the config:</p>
<pre><code>worker_prefetch_multiplier = 1
task_acks_late = True
</code></pre>
<p>But it makes <code>inspect().scheduled()</code> useless, as we prefetch only one task at a time, so the worker thinks there is only a single task scheduled. So, instead of the full list of tasks, we see a single task in <code>inspect().scheduled()</code>.</p>
<p><strong>MAIN QUESTION</strong>: How to enable prioritisation and get all info about scheduled tasks from <code>inspect().scheduled()</code>?</p>
|
<python><python-3.x><rabbitmq><celery><fastapi>
|
2024-07-31 09:58:06
| 0
| 309
|
Mika
|
78,815,544
| 3,378,204
|
How to correctly save a fine tuned model using apple MLX framework
|
<p>We're using <a href="https://github.com/ml-explore/mlx" rel="nofollow noreferrer">MLX</a> to fine tune a model fetched from hugging face.</p>
<pre><code>from transformers import AutoModel
model = AutoModel.from_pretrained('deepseek-ai/deepseek-coder-6.7b-instruct')
</code></pre>
<p>We fine tuned the model with command like <code>python -m mlx_lm.lora --config lora_config.yaml</code> and the config file looks like:</p>
<pre><code># The path to the local model directory or Hugging Face repo.
model: "deepseek-ai/deepseek-coder-6.7b-instruct"
# Save/load path for the trained adapter weights.
adapter_path: "adapters"
</code></pre>
<p>When the adapter files generated after fine tuning, we evaluated the model by scripts like</p>
<pre><code>from mlx_lm.utils import *
model,tokenizer = load(path_or_hf_repo ="deepseek-ai/deepseek-coder-6.7b-instruct",
adapter_path = "adapters" # path to new trained adaptor
)
text = "Tell sth about New York"
response = generate(model, tokenizer, prompt=text, verbose=True, temp=0.01, max_tokens=100)
</code></pre>
<p>and it works as expected.</p>
<p>However, after we saved the model and evaluated it with <code>mlx_lm.generate</code>, it performed poorly (the behavior is completely different from invoking the model with <code>generate(model, tokenizer, prompt=text, verbose=True, temp=0.01, max_tokens=100)</code>).</p>
<pre><code>mlx_lm.fuse --model "deepseek-ai/deepseek-coder-6.7b-instruct" --adapter-path "adapters" --save-path new_model
mlx_lm.generate --model new_model --prompt "Tell sth about New York" --adapter-path "adapters" --temp 0.01
</code></pre>
|
<python><machine-learning><deep-learning><large-language-model>
|
2024-07-31 09:35:34
| 1
| 11,155
|
Eugene
|
78,815,102
| 270,043
|
Unable to filter away dataframes in huge dataset in PySpark
|
<p>I have a huge PySpark dataframe that contains 1.5B rows, including the column <code>fieldA</code>. I have a list of 8.8M unique <code>fieldA</code> values, that I want to filter out of the 1.5B rows. However, I think due to the large data size, I keep getting errors like <code>StackOverflowError</code> or <code>OutOfMemoryError</code>.</p>
<p>I've tried to split the 8.8M list into smaller lists of 20K values, and also split the 1.5B dataframes into smaller dataframes of 15M rows each. Then for each dataframe of 15M rows, continuously (in a loop) filter away different 20K of the <code>fieldA</code> values (<code>temp_df = temp_df.filter(~col('fieldA').isin(fieldA_part_list))</code>) until all 8.8M values were filtered away, then write the final <code>temp_df</code> to parquet files. Repeat for the next 15M rows of dataframes. However, I think this resulted in hundreds of <code>.filter()</code>, and that might be what gave me the <code>StackOverflowError</code> when I tried to write to parquet files on the first 15M dataframe.</p>
<p>I then tried to filter away the full 8.8M values from each 15M dataframe. For each 15M dataframe, I would write the filtered results to parquet files. However, when I tried to write to parquet files, I got the <code>OutOfMemoryError</code> on the first 15M dataframe.</p>
<p>How can I filter away rows that match any of the 8.8M <code>fieldA</code> values from the 1.5B rows of dataframe, in an efficient manner?</p>
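A single anti-join usually scales far better than hundreds of chained `filter`/`isin` calls (which is what inflates the query plan and the stack): put the 8.8M values into their own Spark DataFrame and call `big_df.join(values_df, on='fieldA', how='left_anti')`, which keeps only rows whose key is absent from the values. The same anti-join idea illustrated with pandas (a small stand-in sketch, since Spark itself isn't run here):

```python
import pandas as pd

# stand-in frames: `big` plays the 1.5B-row table, `exclude` the 8.8M values
big = pd.DataFrame({"fieldA": ["a", "b", "c", "d"], "v": [1, 2, 3, 4]})
exclude = pd.DataFrame({"fieldA": ["b", "d"]})

# left anti join: keep only rows of `big` whose key is absent from `exclude`
merged = big.merge(exclude, on="fieldA", how="left", indicator=True)
anti = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
print(anti["fieldA"].tolist())  # -> ['a', 'c']
```

In Spark, broadcasting the 8.8M-value side (if it fits in executor memory) can further avoid a shuffle of the 1.5B-row table.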
|
<python><pandas><pyspark><out-of-memory>
|
2024-07-31 08:00:20
| 1
| 15,187
|
Rayne
|
78,815,010
| 2,826,018
|
SVM most important coefficient doesn't have dependency on target class
|
<p>I've trained an SVM classifier on my data, looked at the coefficients, and then plotted the most important training feature against my target class. However, I found there to be no dependency (the x axis is the class, the y axis is the most important feature). I trained the SVM on whether the class value is 0 or greater than 0, but the image shows all class values in my data set.</p>
<p><a href="https://i.sstatic.net/t3WN0ryf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t3WN0ryf.png" alt="enter image description here" /></a></p>
<p>My code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics, svm, preprocessing
from sklearn.model_selection import train_test_split
targetRaw = df[ target ]
correlationRaw = df[ mostImportantFeature ]
y = df[ target ]
X = df.drop( columns=[ target ] )
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size= 0.4 )
scaler = preprocessing.StandardScaler()
X_train = scaler.fit_transform( X_train )
X_test = scaler.fit_transform( X_test )
svc = svm.SVC( kernel='linear' )
svc.fit( X_train, y_train )
y_pred = svc.predict( X_test )
print( "Testing Accuracy:", metrics.accuracy_score( y_test, y_pred ) )
coefficients = 15
imp, names = zip( *sorted( zip( svc.coef_[ 0 ], X.columns.values ) ) )
plt.figure( 0 )
plt.xlabel( "Attribution" )
plt.ylabel( "Input Features" )
plt.barh( range( len( names[ -coefficients: ] ) ), imp[ -coefficients: ], align='center' )
plt.yticks( range( len( names[ -coefficients: ] ) ), names[ -coefficients: ] )
plt.savefig( 'SVMCoefficientsAttribution.png', bbox_inches='tight', dpi=100 )
plt.figure( 1 )
plt.xlabel( target )
plt.ylabel( "Feature" )
plt.scatter( targetRaw.tolist(), correlationRaw.tolist(), s= 4.0 )
plt.savefig( 'SVMTargetFeatureCorrelation.png', bbox_inches='tight', dpi=100 )
</code></pre>
|
<python><scikit-learn><svm>
|
2024-07-31 07:42:10
| 1
| 1,724
|
binaryBigInt
|
78,814,860
| 1,259,406
|
Adding Status text to a (Textual) Footer
|
<p>I'm trying to create an editor where the Footer contains the usual bindings on the left and some status information on the right, for example the line number.
The Footer in Textual is very simple, so I thought I'd extend it, but I'm unable to see both my label and the bindings of the base Footer.
This is my code:</p>
<pre><code>class MyFooter(Footer):
DEFAULT_CSS = """
MyFooter {
.right-label {
text-align: right;
}
}
"""
def compose(self) -> ComposeResult:
for widget in super().compose():
yield widget
yield Label("This is the right side label", id="right-label")
</code></pre>
<p>To test it, you can use the first example of the tutorial:</p>
<pre><code>from textual.app import App, ComposeResult
from textual.widgets import Header, Footer,Label
class MyFooter(Footer):
DEFAULT_CSS = """
MyFooter {
.right-label {
text-align: right;
}
}
"""
def compose(self) -> ComposeResult:
"""Create child widgets for the footer."""
for widget in super().compose():
yield widget
yield Label("This is the right side label", id="right-label")
class StopwatchApp(App):
"""A Textual app to manage stopwatches."""
BINDINGS = [("d", "toggle_dark", "Toggle dark mode")]
def compose(self) -> ComposeResult:
"""Create child widgets for the app."""
yield Header()
yield MyFooter()
def action_toggle_dark(self) -> None:
"""An action to toggle dark mode."""
self.dark = not self.dark
if __name__ == "__main__":
app = StopwatchApp()
app.run()
</code></pre>
|
<python><textual><python-textual>
|
2024-07-31 07:09:38
| 1
| 1,328
|
maugch
|