QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,045,791 | 5,249,203 | executing bash scripts from R Shiny | <p>On an HPC cluster, I have created a workflow pipeline. To execute the pipeline, I need to execute <code>commands.sh</code>, which basically has a series of bash commands such as</p>
<pre><code>module load python
module load java/11.0.8
</code></pre>
<p>It works as expected when I invoke the file from HPC command line <code>./commands.sh</code></p>
<p>I have created a Shiny app within the same folder, and trying to execute the above command using a button from shiny UI as follows</p>
<pre><code> observeEvent(input$run_extern,{
system(command = "../commands.sh 2>&1 | tee ../output.log", intern = FALSE, wait = FALSE)
})
</code></pre>
<p>I get error,</p>
<blockquote>
<p>line 27: module: command not found</p>
</blockquote>
<p>It seems that the environment from R Shiny is different from the one I get on the HPC command line. How do I solve this? Even a Python solution would be helpful.</p>
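<p>Since a Python route was welcomed: a minimal sketch of one common workaround, assuming the <code>module</code> command is defined by the login-shell startup files on the cluster. Running the command through <code>bash -lc</code> (a login shell) sources those files first; the <code>echo</code> below is a hypothetical stand-in for <code>./commands.sh</code>.</p>

```python
import subprocess

# Run the command through a login shell ("-l") so that /etc/profile (and, on
# many HPC systems, the Environment Modules init script) is sourced before
# the script executes. "echo pipeline-done" is a placeholder for
# "./commands.sh 2>&1 | tee output.log".
result = subprocess.run(
    ["bash", "-lc", "echo pipeline-done"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

<p>The equivalent idea from R Shiny would be <code>system("bash -lc './commands.sh ...'")</code>, though whether the login shell actually defines <code>module</code> depends on the cluster's configuration.</p>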
| <python><r><bash><shiny> | 2023-09-05 15:23:33 | 0 | 4,678 | user5249203 |
77,045,776 | 11,578,282 | Why are my async functions not running in parallel? | <p>I'm trying to understand asyncio in Python. I'm expecting my two async functions to run at the same time with the output similar to below.</p>
<pre><code>foo
bar
foo
bar
foo
bar
foo
bar
</code></pre>
<p>Instead, I'm getting a list of 'foo'.</p>
<pre><code>async def print_foo():
while True:
print("foo")
async def print_bar():
while True:
print("bar")
async def main():
async with asyncio.TaskGroup() as tg:
task1 = tg.create_task(print_foo())
task2 = tg.create_task(print_bar())
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Could someone explain why I see this behaviour and how I can get the two async functions to run at the same time?</p>
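<p>For context, the loops above never <code>await</code> anything, so the first task never hands control back to the event loop. A minimal sketch of the usual fix, with bounded loops and <code>asyncio.gather</code> instead of the 3.11-only <code>TaskGroup</code> so it also runs on older interpreters:</p>

```python
import asyncio

async def print_foo(out, n):
    for _ in range(n):
        out.append("foo")
        await asyncio.sleep(0)  # zero-delay sleep: yield control to the event loop

async def print_bar(out, n):
    for _ in range(n):
        out.append("bar")
        await asyncio.sleep(0)

async def main():
    out = []
    await asyncio.gather(print_foo(out, 3), print_bar(out, 3))
    return out

out = asyncio.run(main())
print(out)  # the two coroutines now alternate: foo, bar, foo, bar, ...
```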
| <python><asynchronous><python-asyncio> | 2023-09-05 15:21:58 | 1 | 467 | Eind997 |
77,045,763 | 447,738 | Filter a Pandas dataframe if cell values exist in another dataframe but with a range | <p>I have two pandas DataFrame, with the same structure. Dataframe B is a subset of DataFrame A.</p>
<p>I want to filter DataFrame B, only if the Price value appears in DataFrame A, <strong>or it is within 1% of a value in DataFrame A</strong>.</p>
<p>For example, even if the exact price is not present, I want to keep the value if there is a row in A with a price +/- 1%.</p>
<p>DataFrame A:</p>
<pre><code>index price
0 20.23
1 10.34
2 5.28
3 12.25
4 12.32
</code></pre>
<p>DataFrame B:</p>
<pre><code>index price
0 0.23
1 10.34
2 5.26
</code></pre>
<p>Desired Result of filtering:</p>
<pre><code>index price
0 10.34
1 5.26
</code></pre>
<pre><code>import pandas as pd
dfA = pd.DataFrame({'index': [0, 1, 2, 3, 4],
'price': [20.23, 10.34, 5.28, 12.25, 12.32]})
dfB = pd.DataFrame({'index': [0, 1, 2],
'price': [0.23, 10.34, 5.26]})
</code></pre>
<p>The following will only give me the exact matches.</p>
<pre><code>dfB[dfB['price'].isin(dfA['price'])]
</code></pre>
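<p>One possible vectorised sketch, comparing every price in B against every price in A with NumPy broadcasting (assuming the 1% tolerance is taken relative to the prices in A):</p>

```python
import numpy as np
import pandas as pd

dfA = pd.DataFrame({'index': [0, 1, 2, 3, 4],
                    'price': [20.23, 10.34, 5.28, 12.25, 12.32]})
dfB = pd.DataFrame({'index': [0, 1, 2],
                    'price': [0.23, 10.34, 5.26]})

a = dfA['price'].to_numpy()
b = dfB['price'].to_numpy()

# |b - a| <= 1% of a, for at least one a in dfA  ->  keep that row of B
mask = (np.abs(b[:, None] - a[None, :]) <= 0.01 * a[None, :]).any(axis=1)
result = dfB[mask]
print(result['price'].tolist())  # [10.34, 5.26]
```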
| <python><pandas> | 2023-09-05 15:19:58 | 4 | 2,357 | cksrc |
77,045,722 | 6,742,553 | Python's replace() function's count property is vague | <p>I am relatively new to Python and don't really grasp the idea of <code>replace</code> method's third parameter (count).</p>
<p>So this:</p>
<pre><code>print("This is is it!".replace("is", "are", __count=1))
print("This is is it!".replace("is", "are", __count=2))
</code></pre>
<p>produces:</p>
<pre><code>Thare is is it!
Thare are is it!
</code></pre>
<p>How does count work in this case? Because according to my research and understanding, count is <a href="https://www.learnbyexample.org/python-string-replace-method/" rel="nofollow noreferrer">An integer specifying number of replacements to perform</a>. So, in this case, shouldn't the output be:</p>
<pre><code>Thare are is it!
Thare are are it!
</code></pre>
<p>Any help for my understanding would be appreciated!</p>
<p>PS: I am using <code>Python version: 3.11.4</code>. As per this version, <code>count=1</code> is invalid!</p>
<p>Yet another PS: I did <strong>not</strong> see the <code>is</code> in <code>This</code> for some reason. So replace makes sense now. It replaces the <code>is</code> in <code>This</code>, which is why the second <code>is</code> is not replaced in the first example. I just overlooked that detail :/</p>
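<p>For reference, <code>count</code> is positional-only in CPython, and it simply caps the number of left-to-right replacements; the <code>is</code> inside <code>This</code> is the first hit:</p>

```python
s = "This is is it!"

# count limits how many occurrences, scanning left to right, are replaced;
# the first "is" found is the one inside "This".
print(s.replace("is", "are", 1))  # Thare is is it!
print(s.replace("is", "are", 2))  # Thare are is it!
```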
| <python><python-3.x><replace> | 2023-09-05 15:14:30 | 1 | 708 | Serhat |
77,045,716 | 15,673,147 | How to extract score from similarity search when doing RAG with LangChain, GPT and Chainlit UI? | <p>I'm doing RAG (retrieval-augmented generation) using LangChain and OpenAI's GPT, through Chainlit UI. The chain_type I'm using is "map_rerank".</p>
<p>I'm already able to extract the answer and the source document.</p>
<p>But I can't find a way to extract the score from the similarity search and print it in the message for the UI.</p>
<p>Follows the code.</p>
<p>First I set the variables:</p>
<pre><code>@on_chat_start
def init():
llm = AzureChatOpenAI(
deployment_name=saci_constants.AZURE_OPENAI_DEPLOYMENT_NAME,
model_name=saci_constants.AZURE_OPENAI_MODEL_NAME,
openai_api_base=saci_constants.AZURE_OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=saci_constants.AZURE_OPENAI_DEPLOYMENT_VERSION,
openai_api_key=saci_constants.AZURE_OPENAI_API_KEY,
openai_api_type=saci_constants.AZURE_OPEN_API_TYPE,
temperature=saci_constants.TEMPERATURE,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
embeddings = OpenAIEmbeddings(
deployment=saci_constants.AZURE_OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=saci_constants.AZURE_OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=saci_constants.AZURE_OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_key=saci_constants.AZURE_OPENAI_API_KEY,
openai_api_type=saci_constants.AZURE_OPEN_API_TYPE,
chunk_size=saci_constants.AZURE_CHUNK_SIZE,
)
faiss_db = FAISS.load_local(
saci_constants.FAISS_DATABASE_PATH,
embeddings,
)
retriever = faiss_db.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.5, "k": 3},
)
question_generator = LLMChain(
llm=llm,
prompt=CONDENSE_QUESTION_PROMPT,
verbose=True,
)
doc_chain = load_qa_with_sources_chain(
llm,
chain_type="map_rerank",
return_intermediate_steps=False,
verbose=True,
)
memory = ConversationBufferMemory(
llm=llm,
memory_key="chat_history",
return_messages=True,
input_key="question",
output_key="answer",
max_token_limit=1000,
# k=1,
)
conversational_chain = ConversationalRetrievalChain(
retriever=retriever,
question_generator=question_generator,
combine_docs_chain=doc_chain,
return_source_documents=True,
memory=memory,
rephrase_question=False,
verbose=True,
)
# # Set chain as a user session variable
cl.user_session.set("conversation_chain", conversational_chain)
</code></pre>
<p>Some observations on the code above: I can't set <code>return_intermediate_steps</code> to True, since I would get more than one output and the UI run doesn't accept it.</p>
<p>After setting the variables, I run the chain, get the results, extract the sources from the metadata and pass to the UI:</p>
<pre><code>@on_message
async def main(message: str):
chat_history = []
# Read chain from user session variable
chain = cl.user_session.get("conversation_chain")
# Run the chain
res = chain({"question": message, "chat_history": chat_history})
# Extract sources from the documents' metadata
sources = [doc.metadata.get("source") for doc in res["source_documents"]]
# Send the answer and the text elements to the UI
await cl.Message(content=f'ANSWER: {res["answer"]}, SOURCES: {set(sources)}').send()
</code></pre>
<p>I'm having a hard time getting the score output. It prints in the terminal, but I can't save it or get the UI to show it. I tried asking the prompt to extract it, but half of the time it gets ignored; I want to extract the value myself.</p>
| <python><openai-api><information-retrieval><langchain><large-language-model> | 2023-09-05 15:13:46 | 0 | 343 | Guilherme Giuliano Nicolau |
77,045,640 | 3,087,409 | Colour and transparency in plotly plot based on two columns in pandas dataframe | <p>I'm making a stacked+grouped bar chart in plotly. I have a dict of colours for each category, but I would like each of the groups to have a different transparency to distinguish them.</p>
<p>The data looks like this:</p>
<pre><code> year subplot plot_label value
0 2019 Projected Category 1 3.1
1 2020 Projected Category 1 3.0
2 2021 Projected Category 1 2.8
3 2019 Projected Category 2 3.1
4 2020 Projected Category 2 3.0
5 2021 Projected Category 2 2.8
6 2019 Actual Category 1 3.1
7 2020 Actual Category 1 3.0
8 2021 Actual Category 1 2.8
9 2019 Actual Category 2 3.1
10 2020 Actual Category 2 3.0
11 2021 Actual Category 2 2.8
</code></pre>
<p>And the code for plotting:</p>
<pre><code>colour_dict = {'Category 1': '#123', 'Category 2': '#456'}
fig = px.bar(df, x='subplot', y='value', facet_col='year', color='plot_label',
facet_col_spacing=0.01, color_discrete_map=colour_dict)
</code></pre>
<p>At the moment the <code>Projected</code> and <code>Actual</code> bars have the same colour scheme based on the <code>plot_label</code> categories, but I want to change the alpha of one of them to distinguish the side-by-side bars. Can it be done?</p>
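<p>It can: Plotly accepts <code>rgba()</code> colour strings, so one hedged sketch is a small helper (hypothetical, not part of Plotly) that adds an alpha channel to the hex colours, giving a second, faded colour map for one of the groups:</p>

```python
def hex_to_rgba(hex_color, alpha):
    """Expand '#rgb' or '#rrggbb' into an 'rgba(r,g,b,a)' string Plotly accepts."""
    h = hex_color.lstrip('#')
    if len(h) == 3:                      # '#123' -> '112233'
        h = ''.join(c * 2 for c in h)
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return f'rgba({r},{g},{b},{alpha})'

colour_dict = {'Category 1': '#123', 'Category 2': '#456'}
faded = {k: hex_to_rgba(v, 0.5) for k, v in colour_dict.items()}
print(faded['Category 1'])  # rgba(17,34,51,0.5)
```

<p>With <code>px.bar</code> there is only one colour per <code>plot_label</code>, so applying a different alpha per group likely means building the traces yourself with <code>go.Bar</code> (solid colours for <code>Actual</code>, the <code>faded</code> map for <code>Projected</code>).</p>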
| <python><pandas><plotly> | 2023-09-05 15:02:03 | 2 | 2,811 | thosphor |
77,045,623 | 9,112,151 | XML - Find without namespace | <p>With xml below:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<ns0:mt_queryRequest xmlns:ns0="http://x5.ru/mdm/common">
<messageHeader>
<reqId>6437636</reqId>
<targetSystem>GGG</targetSystem>
<serviceInterface>NS</serviceInterface>
<interfaceNamespace/>
</messageHeader>
<Item xmlns="http://www.ibm.com/mdm/entry01">
<MDM_ID xmlns="">897346</MDM_ID>
<NAME xmlns="">
<ru_RU is_delta="True" xmlns="http://www.ibm.com/mdm/entry01">NANANN</ru_RU>
</NAME>
<SAPNAME xmlns="">
<ru_RU xmlns="http://www.ibm.com/mdm/entry01">NANANN</ru_RU>
</SAPNAME>
<FULLNAME xmlns="">
<ru_RU xmlns="http://www.ibm.com/mdm/entry01">NANANN</ru_RU>
</FULLNAME>
<IS_RESIDENT xmlns="">Yes</IS_RESIDENT>
<INN xmlns="">549348892</INN>
<KPP xmlns="">693628032</KPP>
<CONTRAGENT_FORM xmlns="">UL</CONTRAGENT_FORM>
<CONTRAGENT_GROUP occ="6543234" xmlns="">
<GROUP xmlns="http://www.ibm.com/mdm/entry01">Sub (Z013)</GROUP>
<SAPCODE xmlns="http://www.ibm.com/mdm/entry01">8012312</SAPCODE>
<IS_ARCHIVE xmlns="http://www.ibm.com/mdm/entry01">No</IS_ARCHIVE>
<GROUP_ID xmlns="http://www.ibm.com/mdm/entry01">P013</GROUP_ID>
<IS_ARCHIVE_ID xmlns="http://www.ibm.com/mdm/entry01">0</IS_ARCHIVE_ID>
</CONTRAGENT_GROUP>
<NAMES_HISTORY occ="1715174529" xmlns="">
<SAPNAME xmlns="http://www.ibm.com/mdm/entry01">
<ru_RU xmlns="">Ghcsjdhc</ru_RU>
</SAPNAME>
</NAMES_HISTORY>
<NORMALIZATION_STATUS xmlns="">Normalized</NORMALIZATION_STATUS>
<CONTRAGENT_FORM_ID xmlns="">3</CONTRAGENT_FORM_ID>
<IS_RESIDENT_ID xmlns="">1</IS_RESIDENT_ID>
</Item>
</ns0:mt_queryRequest>
</code></pre>
<p>I had to use namespace in path:</p>
<pre><code>import xml.etree.ElementTree as ET
xml = ET.fromstring(xml_string)
res = xml.find('.//{http://www.ibm.com/mdm/entry01}Item/CONTRAGENT_GROUP/{http://www.ibm.com/mdm/entry01}GROUP')
</code></pre>
<p>As you can see, the namespace in the path is hardcoded, and I'm afraid that the namespace could change or that there will be more than one namespace (somehow, I don't know).</p>
<p>Is it possible to get rid of the namespace? The only way I found is to convert the XML to a dict with the <a href="https://github.com/martinblech/xmltodict" rel="nofollow noreferrer">xmltodict</a> lib and work with the dict.</p>
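<p>Since Python 3.8, <code>ElementTree</code> also supports the <code>{*}</code> wildcard, which matches a tag in any namespace (or none), so the URIs need not be hardcoded. A reduced sketch of the XML above:</p>

```python
import xml.etree.ElementTree as ET

xml_string = (
    '<ns0:mt_queryRequest xmlns:ns0="http://x5.ru/mdm/common">'
    '<Item xmlns="http://www.ibm.com/mdm/entry01">'
    '<CONTRAGENT_GROUP xmlns="">'
    '<GROUP xmlns="http://www.ibm.com/mdm/entry01">Sub (Z013)</GROUP>'
    '</CONTRAGENT_GROUP>'
    '</Item>'
    '</ns0:mt_queryRequest>'
)

root = ET.fromstring(xml_string)
# {*} matches the element name in any (or no) namespace (Python 3.8+)
res = root.find('.//{*}Item/{*}CONTRAGENT_GROUP/{*}GROUP')
print(res.text)  # Sub (Z013)
```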
| <python><xml><elementtree><xml-namespaces> | 2023-09-05 14:59:24 | 3 | 1,019 | Альберт Александров |
77,045,573 | 9,490,769 | Converting list function to generator using yield | <p>I am trying to convert a for loop into an iterator using yield, but I have failed in my attempts. I don't understand exactly why the <code>yield</code> isn't giving me the expected output. Does anyone know what the problem is?</p>
<p>Attempt at using yield:</p>
<pre class="lang-py prettyprint-override"><code>def iteration_order(dimensions):
for dim in range(dimensions):
order = [0, dim, 0]
yield order
for j in range(6):
sgn = 1 if j % 2 == 0 else -1
idx = j % 3
for _ in range(dim if j < 5 else dim-1):
order[idx] += sgn
yield order
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(list(iteration_order(2))
>>> [[0, 0, 0], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1]]
</code></pre>
<p>The code as it should work (when not using yield):</p>
<pre class="lang-py prettyprint-override"><code>def iteration_order(dimensions):
full_order = []
for dim in range(dimensions):
order = [[0, dim, 0]]
for j in range(6):
sgn = 1 if j % 2 == 0 else -1
idx = j % 3
for _ in range(dim if j < 5 else dim-1):
nxt = list(order[-1])
nxt[idx] += sgn
order.append(nxt)
full_order.extend(order)
return full_order
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(iteration_order(2))
>>> [[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1]]
</code></pre>
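<p>The likely culprit in the attempt above: the generator yields the <em>same</em> list object every time and mutates it in place, so <code>list(...)</code> ends up holding seven references to the final state. Yielding a snapshot restores the expected output; a sketch:</p>

```python
def iteration_order(dimensions):
    for dim in range(dimensions):
        order = [0, dim, 0]
        yield list(order)              # yield a copy, not the mutable list itself
        for j in range(6):
            sgn = 1 if j % 2 == 0 else -1
            idx = j % 3
            for _ in range(dim if j < 5 else dim - 1):
                order[idx] += sgn
                yield list(order)      # snapshot after each in-place step

print(list(iteration_order(2)))
# [[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1]]
```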
| <python><python-3.x><yield> | 2023-09-05 14:53:30 | 5 | 3,345 | oskros |
77,046,197 | 4,120,431 | Read EC Public Key, works in python, error in java | <p>I have an issue moving some code from Python to Java.
The following public key:</p>
<blockquote>
<p>MDkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDIgAC2X07fCab+nIPAWBb5eRlhdOfR0Bkrhx7TgM3cGbR31g=</p>
</blockquote>
<p>can be successfully read from this Python code:</p>
<pre class="lang-py prettyprint-override"><code>key = bytearray.fromhex(
"3039301306072a8648ce3d020106082a8648ce3d030107032200"
) + bytearray([0x03]) + bytearray.fromhex("d97d3b7c269bfa720f01605be5e46585d39f474064ae1c7b4e03377066d1df58")
device_public_key = load_der_public_key(
key
)
</code></pre>
<p>printing out device_public_key I see:</p>
<blockquote>
<p><cryptography.hazmat.backends.openssl.ec._EllipticCurvePublicKey object at 0x102372cd0></p>
</blockquote>
<p>When trying to do the same in Java, every attempt I make fails:</p>
<pre class="lang-kt prettyprint-override"><code> fun loadKey() {
val key = Hex.decode("3039301306072a8648ce3d020106082a8648ce3d03010703220003d97d3b7c269bfa720f01605be5e46585d39f474064ae1c7b4e03377066d1df58")
try {
read(key)
} catch (e: Exception) {
println("Failed to load for sign $sign: $e")
}
}
private fun read(publicKey: ByteArray) {
val spec: ECNamedCurveParameterSpec = ECNamedCurveTable.getParameterSpec("prime256v1")
val kf = KeyFactory.getInstance("ECDSA", BouncyCastleProvider())
val params = ECNamedCurveSpec("prime256v1", spec.curve, spec.g, spec.n)
val point: ECPoint = ECPointUtil.decodePoint(params.curve, publicKey)
val pubKeySpec = ECPublicKeySpec(point, params)
val t = kf.generatePublic(pubKeySpec) as ECPublicKey
println(t)
}
</code></pre>
<p>The exception i get is:</p>
<blockquote>
<p>java.lang.IllegalArgumentException: Invalid point encoding 0x30</p>
</blockquote>
<p>I can't understand what's wrong, since the key is correct (it works in Python).
I tried to Base64 encode/decode it, but it always fails.</p>
<p>This post is linked to my previous question (next step):
<a href="https://security.stackexchange.com/questions/272048/parsing-and-loading-ec-private-key-curve-secp256r1">https://security.stackexchange.com/questions/272048/parsing-and-loading-ec-private-key-curve-secp256r1</a></p>
<p>The public key is different, the one i'm trying to load is defined as a concatenation of a fixed header, a signbyte and the variable I receive:</p>
<blockquote>
<p>Compressed ephemeral public key used for ECDH w/ reader private key, 32 byte X coordinate. Note: when uncompressing, the Y coordinate is always even</p>
</blockquote>
<p>EDIT: Thanks to @dave_thompson_085 for the explanation!
Indeed the code is not Java but Kotlin (JVM compatible); I forgot to mention that the code runs on Android.
My solution is similar to the 2nd option, but the curve is obtained from the private key, and the compressed key (as explained) is only the sign byte (0x02) plus the body (as I understand it, the body is the X coordinate of the point).</p>
<pre class="lang-kt prettyprint-override"><code>fun load_compressed_public_key(privateKey: ECPrivateKey, compressedKey: ByteArray): ECPublicKey {
val decodePoint = org.bouncycastle.jce.ECPointUtil.decodePoint(privateKey.params.curve, compressedKey)
val spec = ECNamedCurveTable.getParameterSpec("secp256r1")
val kf = java.security.KeyFactory.getInstance("ECDSA", BouncyCastleProvider())
val params = ECNamedCurveSpec("secp256r1", spec.curve, spec.g, spec.n)
val pubKeySpec = java.security.spec.ECPublicKeySpec(decodePoint, params)
val uncompressed = kf.generatePublic(pubKeySpec) as ECPublicKey
return uncompressed
}
</code></pre>
<p>I also tried approach 1 (it was my first attempt), but I failed just because I did not add BouncyCastleProvider when obtaining the KeyFactory! In a quick test it works with the "BC" provider, and without it it fails with "Invalid EC Key".
Below is the working test (in Kotlin):</p>
<pre class="lang-kt prettyprint-override"><code> @Test
fun `test load ephemeral x509`(){
val spki: ByteArray = Hex.decode(
"3039301306072a8648ce3d020106082a8648ce3d030107032200"
+ "02d97d3b7c269bfa720f01605be5e46585d39f474064ae1c7b4e03377066d1df58"
)
val kf1: KeyFactory = KeyFactory.getInstance("EC", BouncyCastleProvider())
val pub1: PublicKey = kf1.generatePublic(X509EncodedKeySpec(spki))
println(pub1)
}
</code></pre>
<p>Thanks again for the help and for the explanation, I appreciate it instead of raw code!</p>
| <java><python><key> | 2023-09-05 14:40:02 | 1 | 3,540 | N Dorigatti |
77,045,475 | 6,068,731 | Numpy `random.choice` but using different weights each time we sample | <p>I have a <code>(N, T+1)</code> array of weights. Its rows are normalized, meaning</p>
<pre><code>np.array_equal(W.sum(axis=1), np.ones(N))
</code></pre>
<p>returns <code>True</code>. Now I want to get <code>N</code> samples from <code>np.arange(T+1)</code> where to select the <code>i</code>th sample, I use the <code>i</code>th row of <code>W</code>. I could of course do this with a for loop:</p>
<pre><code>import numpy as np
# Settings
N = 100
T = 20
# Create some normalized weights
W = np.random.rand(N, T+1)
W = W / W.sum(axis=1).reshape(-1, 1)
# Use a for loop to sample
samples = np.zeros(N)
for i in range(N):
samples[i] = np.random.choice(a=np.arange(T+1), size=1, p=W[i, :])
</code></pre>
<p>However I was wondering if there is a way to do this already in numpy/scipy or perhaps using some other library. I am hoping with something like this:</p>
<pre><code># or perhaps a=np.repeat(np.arange(T+1).reshape(-1,1), N, axis=1).T
samples = some_function(a=np.arange(T+1), size=N, p=W)
</code></pre>
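<p>As far as I know NumPy has no per-row <code>p</code> support, but the inverse-CDF trick vectorises the loop: draw one uniform number per row and locate where it falls in that row's cumulative weights. A sketch:</p>

```python
import numpy as np

N, T = 100, 20
W = np.random.rand(N, T + 1)
W = W / W.sum(axis=1, keepdims=True)

# One uniform draw per row; the sample is the first index where the row's
# cumulative weight reaches the draw (argmax returns the first True).
u = np.random.rand(N, 1)
samples = (W.cumsum(axis=1) >= u).argmax(axis=1)
print(samples.shape)  # (100,)
```

<p>(Floating-point round-off can in principle leave the last cumulative value just below 1; forcing the final column of the cumsum to exactly 1 guards against that edge case.)</p>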
| <python><numpy> | 2023-09-05 14:39:00 | 1 | 728 | Physics_Student |
77,045,451 | 7,838,925 | How can I fix the deformed piecharts on a map? | <p>I am plotting a pie chart per country with some statistics. A sample of my data (where all the columns except for geometry and size are categories to be shown by the pie charts) looks like this:</p>
<p><a href="https://i.sstatic.net/mPNTL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mPNTL.png" alt="sample data" /></a></p>
<p>Using the approach in <a href="https://stackoverflow.com/a/71900664/7838925">this answer</a>, I am able to produce a scatter plot with pie charts from the 'sample' data that looks reasonable:</p>
<p><a href="https://i.sstatic.net/WGGyq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGGyq.jpg" alt="plot" /></a></p>
<p>This is the code:</p>
<pre><code>import geopandas as gpd
import numpy as np
import shapely
import matplotlib.pyplot as plt
import contextily as cx  # needed for cx.add_basemap below
# utility function inspired by https://stackoverflow.com/questions/56337732/how-to-plot-scatter-pie-chart-using-matplotlib
def draw_pie(dist, xpos, ypos, size, ax):
# for incremental pie slices
cumsum = np.cumsum(dist)
cumsum = cumsum / cumsum[-1]
pie = [0] + cumsum.tolist()
size = math.sqrt(size)
colors = ["blue", "red", "yellow", "pink", "purple", "black", "red", "yellow", "pink"]
for i, (r1, r2) in enumerate(zip(pie[:-1], pie[1:])):
angles = np.linspace(2 * np.pi * r1, 2 * np.pi * r2)
x = [0] + np.cos(angles).tolist()
y = [0] + np.sin(angles).tolist()
xy = np.column_stack([x, y])
ax.scatter([xpos], [ypos], marker=xy, s=size, color=colors[i], alpha=1)
return ax
fig, ax = plt.subplots(figsize=(20,40))
for idx, row in sample.iterrows():
ax = draw_pie([row[cols].tolist()], row.geometry.x, row.geometry.y, row["size"], ax)
cx.add_basemap(ax, crs=countries_centers.crs, source=cx.providers.OpenStreetMap.Mapnik)
</code></pre>
<p>However, upon zooming in, I see that the pie charts are broken:</p>
<p><a href="https://i.sstatic.net/cy37D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cy37D.png" alt="zoomedin" /></a></p>
<p>This is for a publication so the quality needs to be perfect. I have no idea what's causing this and how to fix it even after going through the code. Any ideas?</p>
| <python><python-3.x><matplotlib><geopandas> | 2023-09-05 14:35:05 | 0 | 1,223 | Jan Pisl |
77,045,231 | 19,238,204 | How to Plot Surface Plot / 3-D Laplace Transform for x(t) = exp(-t)sin(t) with SymPy | <p>This is what I want to achieve:</p>
<p><a href="https://i.sstatic.net/5BOe3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5BOe3.png" alt="3" /></a></p>
<p>I have this function <code>x(t) = exp(-t)*sin(t)</code> that will undergo a Laplace transform X(s):</p>
<p><a href="https://i.sstatic.net/CBdLK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CBdLK.png" alt="2" /></a></p>
<p>and</p>
<p><a href="https://i.sstatic.net/ow68S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ow68S.png" alt="1" /></a></p>
<p>I have this code, but I am unable to do the 3-D plot of the Laplace transform of the function <code>x(t)</code>, or to see how to break <code>s</code> into <code>α + iω</code>.</p>
<pre><code>import sympy
sympy.init_printing()
import matplotlib.pyplot as plt
t, s = sympy.symbols('t, s')
a = sympy.symbols('a', real=True, positive=True)
def L(f):
return sympy.laplace_transform(f, t, s, noconds=True)
def invL(F):
return sympy.inverse_laplace_transform(F, s, t)
f1 = sympy.exp(-t)*sympy.sin(t)
F1 = sympy.integrate(f1*sympy.exp(-s*t), (t, 0, sympy.oo))
print('The Laplace Transform of')
sympy.pretty_print(f1)
print('is:')
sympy.pretty_print(F1)
print('or:')
F = sympy.laplace_transform(f1, t, s, noconds=True)
sympy.pretty_print(F)
print('Plot of f(t)')
sympy.plot(f1);
invF = invL(F).subs({a: 2})
print('Surface Plot of the Laplace Transform of f(t):')
sympy.pretty_print(invF)
p = sympy.plot(f1.subs({a: 2}), invL(F).subs({a: 2}),
xlim=(0, 20), ylim=(0, 3), show=False)
p[1].line_color = 'red'
p.show()
</code></pre>
<p><a href="https://www.youtube.com/watch?v=n2y7n6jw5d0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=n2y7n6jw5d0</a></p>
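<p>For reference, a minimal NumPy/Matplotlib sketch of such a surface plot, evaluating the closed form <code>X(s) = 1/((s+1)² + 1)</code> on a grid of <code>s = α + iω</code> (the magnitude is clipped near the poles at <code>s = −1 ± i</code> for display):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")           # non-interactive backend
import matplotlib.pyplot as plt

alpha = np.linspace(-3, 1, 200)
omega = np.linspace(-5, 5, 200)
A, Wg = np.meshgrid(alpha, omega)
S = A + 1j * Wg                 # s = alpha + i*omega

X = 1.0 / ((S + 1) ** 2 + 1)    # Laplace transform of exp(-t)*sin(t)
mag = np.clip(np.abs(X), 0, 3)  # clip the poles at s = -1 +/- i

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(A, Wg, mag, cmap="viridis")
ax.set_xlabel(r"$\alpha$")
ax.set_ylabel(r"$\omega$")
ax.set_zlabel(r"$|X(s)|$")
```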
| <python><sympy> | 2023-09-05 14:05:36 | 1 | 435 | Freya the Goddess |
77,045,203 | 15,673,855 | How to improve the speed of an n body simulation type project? | <p>I am making a particle simulation, where each particle has an attraction to each other particle, scaling with distance. In addition to this, I have added some other features such as eating food and using energy for the particles to move. I currently am simulating 100 particles, but the speed of the simulation is much slower than I would like at around 5 frames per second. I have optimised most of the calculations (for example, I no longer use trig - just square roots and basic operations), and am using JIT and vectorisation to speed up the maths and operate on all particles simultaneously. I am working in python, but the JIT should convert the intensive maths to a faster language.</p>
<p>To speed up the code I have made it so that the calculations are only made when particles are within a reasonable distance. I have also heard of splitting the simulation into chunks and simulating one chunk at a time, but this seems similar to what I am doing now and would involve a massive rewrite for mediocre performance gains compared to what I have.</p>
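<p>For what it's worth, the "chunks" idea is usually called a cell list: bin particles into a grid of cutoff-sized cells, then only test pairs in the same or adjacent cells, which drops the pairwise pass from O(N²) to roughly O(N) for short-ranged interactions. A minimal 2-D sketch (plain Python/NumPy, separate from the simulation code):</p>

```python
import numpy as np

def cell_list_pairs(pos, cutoff):
    """Index pairs (i, j), i < j, closer than cutoff, via a uniform grid."""
    cells = {}
    for i, key in enumerate(map(tuple, np.floor(pos / cutoff).astype(int))):
        cells.setdefault(key, []).append(i)
    pairs = []
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), ()):
                    for i in members:
                        # i < j avoids double counting across neighbouring cells
                        if i < j and np.hypot(*(pos[i] - pos[j])) < cutoff:
                            pairs.append((i, j))
    return pairs

pos = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
print(cell_list_pairs(pos, 1.0))  # [(0, 1)]
```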
<p>Does anyone know of a way I can improve runtime? I would like to get it at 60 fps, so I am quite far out, but I think I have seen similar simulations which run much faster.</p>
<p>I have a lot of poorly written code, and I don't think that you can gain much from it, but here are the relevant bits:</p>
<pre><code>
@guvectorize([(float64[:], float64[:], int32, float64[:], float64[:])],'(m),(n),(),(p)->(n)', nopython=True)
def mathStuff(ipos, j, interact_range, matrix, vels2):
vels2[2] = 0
vels2[3] = 0
if (ipos[0] != j[0] and ipos[1] != j[1]):
delta1 = (ipos[0]-j[0])
delta2 = (ipos[1]-j[1])
dist = (math.sqrt(delta1**2 + delta2**2))
if in_range(dist, interact_range): # if j is in range
dx,sector = distance_calc(dist, interact_range) # get sector the particles are in
if sector <= 4: # if close by, calculate remainder of the sector and repel based on that + inverse square law
#dy/dx = 3
dx = (dx + sector +1)/5
vel_mag = -(1/dx)**2
elif sector > 4 and sector <= 16: #medium distance: scale from 0 attraction to attraction matrrix attraction
dx = dx + sector-5
grad = matrix[int(j[4])]/12
vel_mag = grad*dx
elif sector >16 and sector <= 26: # large distance: scale from attraction matrix attraction down to 0
dx = dx + sector-16
grad = matrix[int(j[4])]
vel_mag = grad -(grad/10)*dx
else: # too far - 0 velocity
vel_mag= 0
vels = trig(delta2,delta1, vel_mag) # get what the velocity should be
vels2[0] = vels[0]
vels2[1] = vels[1]
else:
vels2[0] = 0.0
vels2[1] = 0.0
if in_range(dist, interact_range/4):
vels2[2] = int(j[4])+1
else:
vels2[0] = 0.0
vels2[1] = 0.0
</code></pre>
<pre><code>
for c1,i in enumerate(particles): # for each particle
i.timestep(positions[c1][0],positions[c1][1]) # update position
positions[c1][2] = 0
positions[c1][3] = 0
matrixes = [i.attraction_matrix[key] for key in i.attraction_matrix]
expand_pos = np.concatenate((positions,np.asarray([types]).T), axis=1)
v = np.asarray(mathStuff([(y) for y in positions[c1]],expand_pos, int(interact_range), matrixes))
</code></pre>
<pre><code>
@jit(nopython=True)
def in_range(dist, interact_range):
if dist < math.sqrt(2*interact_range**2):
return True
else:
return False
# calculate the sector that a particle is in (based on distance split into 32 sectors)
@jit(nopython=True)
def distance_calc(other_dist, interact_range):
other_dist = other_dist/(math.sqrt(2*interact_range**2)/32)
dx = other_dist%1
sector = other_dist-dx
return dx,sector
# get velx and vely from velTotal and posx and posy
@jit(nopython=True)
def trig(a,b, vel_mag):
angle = a/b
sign = np.sign(b)
sqrt = math.sqrt(1+angle**2)
self_vel0 = (-vel_mag/sqrt)*sign
self_vel1 = self_vel0*angle
return self_vel0,self_vel1
</code></pre>
| <python><performance><optimization><simulation> | 2023-09-05 14:03:18 | 2 | 310 | Mercury |
77,045,095 | 2,725,810 | Cached build with SAM downloads all package dependencies | <p>I am new to the topic and might not be using all the correct terminology, so please forgive it.</p>
<p>I am building a container to be deployed at AWS Lambda. The requirements include the <code>sentence-transformers</code> package, which is very large, so that building the docker image takes a long time.</p>
<p>I would like to be able to change the lambda function and rebuild the container as fast as possible, which means avoiding downloading the requirements that have not changed.</p>
<p>Based on my research, I should be able to do this with:</p>
<p><code>sam build --cached --beta-features</code></p>
<p>However, this does not seem to be working. Namely, all the requirements are being downloaded just like during the very first build. Note that, due to the difficulty discussed in <a href="https://stackoverflow.com/questions/77039005/aws-sam-build-is-stuck-waiting-for-lock#comment135817691_77039965">another post</a>, I had to downgrade to version 1.60.0 of <code>sam-cli</code>.</p>
<p>How do I make this work, so that <code>sam build</code> should not do the unnecessary work of downloading requirements that were not changed since the last build?</p>
| <python><docker><aws-lambda><aws-sam><aws-sam-cli> | 2023-09-05 13:49:43 | 1 | 8,211 | AlwaysLearning |
77,045,048 | 10,771,559 | Python Regex for subsetting columns from dataframe | <p>I have a dataframe with various column names and want to pull all those that end with '-A' followed by either 1 or 2 numbers. For instance, a column name I would want returned is 'Charlie-A14'.</p>
<p>I tried the following</p>
<pre><code>data.columns[data.columns.str.match('\A+\d')]
</code></pre>
<p>However, it does not return the column '2-But-1,1,1,3,3-A5'.</p>
<p>The columns I would want to return are: 'Hex-A14',
'2-But-1,1,1,3,3-A5', 'Hep-A16','2-Pen-1,1,1,3,3-A5','Oct-A18',
'Dec-A22', 'Dod-A26',
'(2RS,5R)-Ment-2,6,6-A3'</p>
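<p>A hedged sketch of one way to do it: <code>str.match</code> anchors at the start of the string, so a suffix pattern needs <code>str.contains</code> with a <code>$</code> anchor (the DataFrame below is made up to mirror the column names above):</p>

```python
import pandas as pd

cols = ['Hex-A14', '2-But-1,1,1,3,3-A5', 'Hep-A16', '2-Pen-1,1,1,3,3-A5',
        'Oct-A18', 'Dec-A22', 'Dod-A26', '(2RS,5R)-Ment-2,6,6-A3', 'Unrelated']
data = pd.DataFrame(columns=cols)

# '-A' followed by 1 or 2 digits at the very end of the name
matched = data.columns[data.columns.str.contains(r'-A\d{1,2}$')]
print(list(matched))
```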
| <python><pandas><regex> | 2023-09-05 13:42:13 | 2 | 578 | Niam45 |
77,044,930 | 15,673,147 | Get source_documents and score with ConversationalRetrievalChain, Stuff and Chailit UI | <p>I'm having trouble trying to export the source documents and score from this code. I tried a bunch of things, but I can't retrieve them. The most I could do is pass my request into the prompt so the LLM returns it to me, but sometimes it just ignores me or hallucinates (e.g., it gives me a source link from inside the text). If I change chain_type to map_rerank and adapt a bit, I can get the source documents from a JSON (based on the original metadata saved in my vectordb), but I want to use Stuff instead. Is there a way to get the source_documents and score from the answer? Or any metadata I originally have in the vectordb?</p>
<pre><code>@on_chat_start
def init():
llm = AzureChatOpenAI(
deployment_name=saci_constants.AZURE_OPENAI_DEPLOYMENT_NAME,
model_name=saci_constants.AZURE_OPENAI_MODEL_NAME,
openai_api_base=saci_constants.AZURE_OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=saci_constants.AZURE_OPENAI_DEPLOYMENT_VERSION,
openai_api_key=saci_constants.AZURE_OPENAI_API_KEY,
openai_api_type=saci_constants.AZURE_OPEN_API_TYPE,
temperature=saci_constants.TEMPERATURE,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
)
embeddings = OpenAIEmbeddings(
deployment=saci_constants.AZURE_OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=saci_constants.AZURE_OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=saci_constants.AZURE_OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_key=saci_constants.AZURE_OPENAI_API_KEY,
openai_api_type=saci_constants.AZURE_OPEN_API_TYPE,
chunk_size=saci_constants.AZURE_CHUNK_SIZE,
)
faiss_db = FAISS.load_local(
saci_constants.FAISS_DATABASE_PATH,
embeddings,
)
retriever = faiss_db.as_retriever()
messages = [SystemMessagePromptTemplate.from_template(custom_prompts.SPARK)]
messages.append(HumanMessagePromptTemplate.from_template("{question}"))
spark_prompt = ChatPromptTemplate.from_messages(messages)
question_generator = LLMChain(
llm=llm,
prompt=CONDENSE_QUESTION_PROMPT,
verbose=True,
)
doc_chain = load_qa_with_sources_chain(
llm,
chain_type="stuff",
prompt=spark_prompt,
verbose=True,
)
memory = ConversationBufferMemory(
llm=llm,
memory_key="chat_history",
return_messages=True,
input_key="question",
# output_key="answer",
max_token_limit=1000,
# k=1,
)
conversational_chain = ConversationalRetrievalChain(
retriever=retriever,
question_generator=question_generator,
combine_docs_chain=doc_chain,
memory=memory,
rephrase_question=False,
verbose=True,
# output_key="answer",
)
# # Set chain as a user session variable
cl.user_session.set("conversation_chain", conversational_chain)
@on_message
async def main(message: str):
chat_history = []
# Read chain from user session variable
chain = cl.user_session.get("conversation_chain")
# Run the chain asynchronously with an async callback
res = chain(
{"question": message, "chat_history": chat_history},
callbacks=[cl.AsyncLangchainCallbackHandler()],
)
print("aaaaaaaa", res)
# Send the answer and the text elements to the UI
await cl.Message(content=f"ANSWER: {res['answer']}").send()
</code></pre>
<p>Right now, the print I've got from res is this:</p>
<blockquote>
<p>aaaaaaaa {'question': 'Do I need to pay for OpenAI when doing RAG?', 'chat_history': [HumanMessage(content='Do I need to pay for OpenAI when doing RAG?', additional_kwargs={}, example=False), AIMessage(content="I'm not sure, but according to an article on Towards Data Science, setting up RAG can be a large initial investment, covering the integration, database access, and possibly even licensing fees. However, there is no mention of paying for OpenAI specifically. Would you like me to look up more information?", additional_kwargs={}, example=False)], 'answer': "I'm not sure, but according to an article on Towards Data Science, setting up RAG can be a large initial investment, covering the integration, database access, and possibly even licensing fees. However, there is no mention of paying for OpenAI specifically. Would you like me to look up more information?"}</p>
</blockquote>
| <python><openai-api><information-retrieval><langchain><large-language-model> | 2023-09-05 13:28:37 | 2 | 343 | Guilherme Giuliano Nicolau |
77,044,832 | 19,238,204 | Fourier Transform Plot for exp(-t)sin(t) | <p>I want to achieve this:</p>
<p><a href="https://i.sstatic.net/HPk0R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HPk0R.png" alt="1" /></a></p>
<p>My code here:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.fftpack
# Number of samplepoints
N = 600
t = np.arange(0.0, 20.0, 0.1)
# sample spacing
T = 3.0
x = np.linspace(-3.0, N*T, N)
y = np.exp(-x)*np.sin(x)
yf = scipy.fftpack.fft(y)
xf = np.linspace(-3.0, T, N//2)
yt = np.exp(-t)*np.sin(t)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14,7))
# Plot the X(ω)
ax1.plot(-4*np.pi*xf, 2.0/N * np.abs(yf[:N//2]))
ax1.set_title("$FT\{f(x)\}$")
# Plot the x(t)
ax2.plot(t,yt)
ax2.set_title("$f(x) = exp(-x)sin(x)$")
ax2.set_xticks([0, np.pi, 2 * np.pi])
plt.show()
</code></pre>
<p>and this is the plot that I get; it is far from my goal:
<a href="https://i.sstatic.net/JfxYI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfxYI.png" alt="2" /></a></p>
<p>I think the problem may be the linspace or something similar; I copied the code from here and modified it a bit:
<a href="https://stackoverflow.com/questions/25735153/plotting-a-fast-fourier-transform-in-python">Plotting a fast Fourier transform in Python</a></p>
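<p>Not a drop-in fix for the posted code, but a sketch of how the sampling and frequency axis might be set up so the numerical spectrum matches the analytic transform of <code>exp(-t)sin(t)</code> (assumptions: the signal starts at t=0 and <code>dt</code> is the real sample spacing):</p>

```python
import numpy as np
from scipy.fftpack import fft, fftfreq, fftshift

dt = 0.01                      # actual sample spacing of the time grid
t = np.arange(0.0, 20.0, dt)
y = np.exp(-t) * np.sin(t)

# scale by dt so the DFT approximates the continuous-time integral
yf = fftshift(fft(y)) * dt
w = 2 * np.pi * fftshift(fftfreq(t.size, d=dt))   # angular frequency axis

# |X(0)| should be close to the analytic value 1/2 for this signal
print(np.abs(yf)[np.argmin(np.abs(w))])
```

<p>Plotting <code>w</code> against <code>np.abs(yf)</code> should then reproduce the magnitude curve in the target figure.</p>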
| <python><numpy> | 2023-09-05 13:14:20 | 0 | 435 | Freya the Goddess |
77,044,821 | 253,986 | Celery Periodic Tasks with Dynamic Arguments | <p>I want to schedule a Celery task that can accept dynamic arguments at each schedule. In essence, the arguments should be derived from the results of a database query. Is there a way to achieve this?</p>
<p>To elaborate, my goal is to distribute Celery tasks in parallel, depending on the number of rows retrieved from a database query. I plan to divide the rows into batches, with each batch serving as an argument for a separate Celery task.</p>
<p>I haven't found an option in Celery's beat schedule that supports this requirement. Are there any effective alternatives to achieve this, apart from using Celery? Alternatively, am I missing any Celery-specific configuration or feature that can support this dynamic argument feature?</p>
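<p>One common pattern (not a built-in beat feature) is to have beat schedule a small argument-free "dispatcher" task, which queries the database at run time and fans out one task per batch. A sketch, where the Celery task names and the query helper are illustrative:</p>

```python
def make_batches(rows, batch_size):
    """Split query results into fixed-size batches for fan-out."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

# Celery side (sketch -- `dispatch` is what beat schedules; it takes no
# arguments itself and derives the arguments for the worker tasks):
#
# @app.task
# def dispatch():
#     rows = fetch_rows_from_db()          # hypothetical query helper
#     for batch in make_batches(rows, 100):
#         process_batch.delay(batch)       # one parallel task per batch
#
# @app.task
# def process_batch(batch):
#     ...                                  # handle one batch of rows

print(make_batches(list(range(10)), 4))   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```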
| <python><celery><scheduled-tasks> | 2023-09-05 13:13:34 | 1 | 6,144 | Achaius |
77,044,795 | 4,675,426 | Will list append create a copy? | <p>I am a newbie to Python. I came across this code.</p>
<pre><code>simple_list = ['a', 'b', 'c']
dict_list = []
for x in simple_list:
local_dict = {'key': x}
dict_list.append(local_dict)
print(dict_list)
</code></pre>
<p>The for loop creates a dictionary for each element in the list <code>simple_list</code> and then appends the newly created dictionary to another list <code>dict_list</code>.</p>
<p>My question is related to the append function. We have a local variable called <code>local_dict</code> inside the for loop. When we call append, will it copy <code>local_dict</code>, or store a reference?</p>
<p>I assume that it would copy <code>local_dict</code> because that variable will go out of scope after the for loop.
Is my understanding correct?</p>
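<p>A small experiment makes the behaviour visible: <code>append</code> stores a reference, not a copy, and a loop variable does not go out of scope after the loop in Python:</p>

```python
simple_list = ['a', 'b', 'c']
dict_list = []
for x in simple_list:
    local_dict = {'key': x}
    dict_list.append(local_dict)

# local_dict still exists after the loop and refers to the last dict created
print(dict_list[-1] is local_dict)   # True -- same object, not a copy

# mutating it through either name changes the same dict in the list
local_dict['key'] = 'changed'
print(dict_list[-1])                 # {'key': 'changed'}
```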
| <python><list><dictionary> | 2023-09-05 13:09:49 | 1 | 1,000 | Karthick |
77,044,763 | 1,474,895 | Programmatically Automate Microsoft Outlook Settings | <p>Is there any way to set Microsoft Outlook settings, specifically Options-->Mail-->Stationery and Fonts? Our organization has rules set up that reset them to Arial 12 after every restart of the PC, which I do often. It only takes me 30 seconds to set them back to what I like, but it would be nice if I could use some sort of programming to automate this. Python is my bread-and-butter language, so accessing the API or hacking the settings with it somehow would be best for me.</p>
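<p>One avenue: Outlook typically stores the stationery fonts as binary registry values under the current user's MailSettings key. The key path and value names below are assumptions to verify with regedit on your Office version (16.0 covers Office 2016/2019/365). A practical approach is to set the font once by hand, export the binary values, and replay them with a sketch like this:</p>

```python
import sys

# Assumed registry locations -- confirm them on your machine first:
MAIL_SETTINGS_KEY = r"Software\Microsoft\Office\16.0\Common\MailSettings"
FONT_VALUES = ("ComposeFontComplex", "ReplyFontComplex", "TextFontComplex")

def restore_font_value(value_name: str, blob: bytes) -> None:
    """Replay a previously exported REG_BINARY font blob (Windows only)."""
    if sys.platform != "win32":
        raise OSError("Outlook stores these settings in the Windows registry")
    import winreg  # imported lazily so the module loads on any platform
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, MAIL_SETTINGS_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_BINARY, blob)
```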
| <python><outlook> | 2023-09-05 13:05:30 | 1 | 1,483 | Cody Brown |
77,044,698 | 4,451,521 | Numpy vectorize with signature | <p>Can someone explain how the signature works with numpy's vectorize?</p>
<p>I have read <a href="https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html" rel="nofollow noreferrer">the documentation</a>. It is confusing and has only a tiny part dedicated to signatures. I have googled "numpy vectorize signature" and all I could find were very particular examples without a general explanation. I could not find any documentation where the rules for using <code>signature</code> are explained.</p>
<p>For example what does this mean?</p>
<pre><code>ff=np.vectorize(a_function,signature="(),(),(),()->(2)")
</code></pre>
<p>where the function is</p>
<pre><code>def a_function(a,b,c,d):
# something here
</code></pre>
<p>I am guessing (because it is not explained anywhere) that the first four () correspond to the arguments of the function?? Alright... but what does the <code>(2)</code> mean? Is it a dimension?</p>
<p>Any link to some kind of tutorial or documentation <strong>to signatures</strong> specifically would help too (not to vectorize in general)</p>
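<p>For what it's worth, your reading seems right: the four <code>()</code> mean four scalar (0-d) core inputs, and <code>(2)</code> means the function's core output is a 1-D array of fixed length 2. Everything outside the core dimensions is broadcast. A toy illustration (the function body is made up):</p>

```python
import numpy as np

def a_function(a, b, c, d):
    # illustrative body: returns a length-2 vector for scalar inputs
    return np.array([a + b, c + d])

# "(),(),(),()->(2)": four scalar core inputs -> one length-2 core output
ff = np.vectorize(a_function, signature="(),(),(),()->(2)")

out = ff(np.arange(3), 10, 20, 30)   # first argument broadcasts over 3 values
print(out.shape)                     # (3, 2): one length-2 vector per element
```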
| <python><numpy> | 2023-09-05 12:56:18 | 2 | 10,576 | KansaiRobot |
77,044,671 | 19,238,204 | How to Plot Fourier Transform of cos(2x) correctly with Numpy and Python | <p>I watched a video about the Fourier Transform; in the Fourier Transform plot of the function <code>cos(2x)</code> the peaks should be at <code>ω=-2</code> and <code>ω=2</code>, but mine is different:</p>
<p><a href="https://i.sstatic.net/TA1jh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TA1jh.png" alt="myplot" /></a></p>
<p><a href="https://i.sstatic.net/5Vsfv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Vsfv.png" alt="therealplot" /></a></p>
<p>this is the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.fftpack
# Number of samplepoints
N = 600
t = np.arange(0.0, 20.0, 0.1)
# sample spacing
T = -1.0 / 1.0
x = np.linspace(0.0, N*T, N)
yt = np.cos(2.0*t)
y = np.cos(2.0*x)
yf = scipy.fftpack.fft(y)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
fig, ax = plt.subplots(figsize=(14, 7))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(xf, 2.0/N * np.abs(yf[:N//2]))
ax1.set_title("$FT\{f(x)\}$")
ax2.plot(t,yt)
ax2.set_title("$f(x) = cos(2x)$")
ax2.set_xticks([0, np.pi, 2 * np.pi])
plt.show()
</code></pre>
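<p>For comparison, a sketch with a positive sample spacing and a shifted two-sided angular-frequency axis, which puts the peaks at ω = ±2 as in the video (note the negative <code>T</code> in the posted code flips the axis):</p>

```python
import numpy as np
from scipy.fftpack import fft, fftfreq, fftshift

dt = 0.01                          # positive sample spacing
t = np.arange(0.0, 20.0, dt)
y = np.cos(2.0 * t)

yf = fftshift(fft(y))
w = 2 * np.pi * fftshift(fftfreq(t.size, d=dt))  # two-sided axis in rad/s

peak_w = w[np.argmax(np.abs(yf))]
print(peak_w)   # within one frequency bin of ±2 rad/s
```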
| <python><numpy> | 2023-09-05 12:52:52 | 1 | 435 | Freya the Goddess |
77,044,659 | 9,958,954 | sqlalchemy proper way of using connection | <p>This is basically the skeleton of what I am using for executing sql statements</p>
<pre><code>engine = "engine_conn_here"
with engine.connect() as connection:
with connection.begin() as transaction:
try:
merge_sql_string = """MERGE SQL QUERY COMMAND HERE"""
connection.execute(text(merge_sql_string))
connection.execute(text("DROP TMP_TABLE ..."))
except:
transaction.rollback()
raise
else:
transaction.commit()
</code></pre>
<p>This works fine, but I just want to check whether there is a better way, or any gotchas that I might have missed.</p>
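<p>One simplification worth knowing: <code>engine.begin()</code> opens a connection and a transaction in one step, commits on success, and rolls back on any exception, so the manual try/except/commit bookkeeping disappears. A sketch (the SQL strings are placeholders, demonstrated here against in-memory SQLite):</p>

```python
from sqlalchemy import create_engine, text

def run_merge(engine, merge_sql: str, cleanup_sql: str) -> None:
    # begin() yields a connection inside a transaction; it commits when the
    # block exits normally and rolls back if an exception escapes it.
    with engine.begin() as connection:
        connection.execute(text(merge_sql))
        connection.execute(text(cleanup_sql))

# demo with an in-memory SQLite database
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE tmp (x INTEGER)"))
run_merge(engine, "INSERT INTO tmp VALUES (1)", "INSERT INTO tmp VALUES (2)")
```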
| <python><sql-server><sqlalchemy> | 2023-09-05 12:51:21 | 0 | 710 | LearningNoob |
77,044,597 | 2,030,532 | How to override the hash function of python data classes? | <p>I am trying to write a base class for Python <code>dataclass</code>es with a custom hash function as follows. However, when calling the child class's <code>hash</code>, it does not use the custom hash function of the parent class.</p>
<pre><code>import dataclasses
import joblib
@dataclasses.dataclass(frozen=True)
class HashableDataclass:
def __hash__(self):
print("Base class hash was called!")
fields = dataclasses.fields(self)
values = tuple(getattr(self, field.name) for field in fields)
return int(joblib.hash(values), 16)
@dataclasses.dataclass(frozen=True)
class MyDataClass1(HashableDataclass):
field1: int
field2: str
obj1 = MyDataClass1(1, "Hello")
print(hash(obj1))
</code></pre>
<p>Is there a way to override the hash function of dataclasses?</p>
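<p>The catch is that <code>@dataclass(frozen=True)</code> (with the default <code>eq=True</code>) synthesizes a new <code>__hash__</code> for each subclass, shadowing the inherited one; a class that defines <code>__hash__</code> explicitly is left alone, so re-binding the parent's method in the child body works. A sketch, with the joblib hash swapped for the built-in <code>hash</code> only to keep it self-contained:</p>

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class HashableDataclass:
    def __hash__(self):
        values = tuple(getattr(self, f.name) for f in dataclasses.fields(self))
        return hash(("custom", values))   # stand-in for the joblib-based hash

@dataclasses.dataclass(frozen=True)
class MyDataClass1(HashableDataclass):
    field1: int
    field2: str
    # an explicitly defined __hash__ stops @dataclass from generating one
    __hash__ = HashableDataclass.__hash__

obj1 = MyDataClass1(1, "Hello")
print(hash(obj1) == hash(("custom", (1, "Hello"))))   # True
```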
| <python><inheritance><hash><python-dataclasses> | 2023-09-05 12:42:13 | 1 | 3,874 | motam79 |
77,044,540 | 1,652,219 | How to filter out rows in pandas without creating a mask? | <p>Is there not a pretty pipe-way to do filtering in Python's Pandas like in R's tidyverse?</p>
<p>Most examples in Python vs R</p>
<pre><code># Pandas
Iris[(Iris.Petal_Width > 1) & (Iris.Species=='setosa')][['Sepal_Length','Petal_Length']]
# Tidyverse
Iris %>%
filter(Petal_Width > 1, Species=='setosa') %>%
select(Sepal_Length, Petal_Length)
</code></pre>
<p>It is very ugly syntax in Pandas, and there must be a better way. Python is pretty well known for its piping, so why not here?</p>
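<p>The closest built-in analogue is <code>DataFrame.query</code> plus method chaining (and <code>.pipe</code> for arbitrary functions); no boolean mask variable is needed. A sketch on a tiny made-up frame:</p>

```python
import pandas as pd

iris = pd.DataFrame({
    "Sepal_Length": [5.1, 6.0, 5.7],
    "Petal_Length": [1.4, 4.5, 4.2],
    "Petal_Width":  [0.2, 1.5, 1.2],
    "Species": ["setosa", "versicolor", "setosa"],
})

result = (
    iris
    .query("Petal_Width > 1 and Species == 'setosa'")  # filter(...)
    .loc[:, ["Sepal_Length", "Petal_Length"]]          # select(...)
)
print(result)
```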
| <python><r><pandas><tidyverse> | 2023-09-05 12:30:45 | 0 | 3,944 | Esben Eickhardt |
77,044,491 | 7,648,650 | Python Dash: How to use Input from a dynamically created Dropdown? | <p>I have an app that contains a button with a callback function to create an unlimited number of dropdowns, which are automatically given the ids 'dropdown-i'.
The struggle is that I don't seem to be able to actually use the values I enter in these dropdowns in another callback function (which is only trying to print them).</p>
<p>How can I retrieve these values or how would you do this?</p>
<p>Apparently the part <code>value=dcc.Dropdown(id=dropdown_id).value</code> doesn't work.</p>
<pre><code>import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
app = dash.Dash(__name__)
app.layout = html.Div([
html.Button("Add Dropdown and Number Field", id="add-button"),
html.Div(id="input-container", children=[]),
html.Div(id="output"),
])
@app.callback(
Output("input-container", "children"),
Input("add-button", "n_clicks"),
State("input-container", "children")
)
def add_input(n_clicks, existing_children):
if n_clicks is None:
return existing_children
new_input = dbc.Row([
dbc.Col(dcc.Dropdown(
options=[
{'label': 'Option 1', 'value': 'option-1'},
{'label': 'Option 2', 'value': 'option-2'},
# Add more dropdown options as needed
],
value='option-1',
id=f'dropdown-{n_clicks}'
)),
dbc.Col(dcc.Input(
type='number',
value=0,
id=f'weight-{n_clicks}'
)),
])
existing_children.append(new_input)
return existing_children
@app.callback(
Output("output", "children"),
Input("add-button", "n_clicks"),
State("input-container", "children")
)
def process_dropdowns(n_clicks, dropdown_children):
if n_clicks is None:
return []
# Create a list to store the selected values from each dropdown
selected_values = []
# Iterate through the dropdowns to retrieve their values
for i, child in enumerate(dropdown_children):
dropdown_id = f'dropdown-{i+1}'
selected_value = dcc.Dropdown(id=dropdown_id).value
selected_values.append(selected_value)
# Process the selected values or use them as needed
return f"Selected Dropdown Values: {', '.join(selected_values)}"
if __name__ == "__main__":
app.run_server(debug=False)
</code></pre>
| <python><callback><dropdown><plotly-dash> | 2023-09-05 12:24:29 | 1 | 1,248 | Quastiat |
77,044,480 | 9,484,595 | Counting the points below a coordinate | <p>I have the following problem, for which the solution I use at the moment is too slow.</p>
<p><strong>Instance:</strong> a numpy array <code>b</code> of shape <code>(B,2)</code>, a sorted numpy array <code>x</code> of shape <code>(X)</code>, and a sorted numpy array <code>y</code> of shape <code>(Y)</code>.</p>
<p>Note that <code>x = np.unique(b[:,0])</code> and <code>y = np.unique(b[:,1])</code>, if that makes a difference for the problem.</p>
<p><strong>Task:</strong> Build the <code>(X,Y)</code>-array <code>H</code> such that <code>H[i,j]</code> is the number of rows in <code>b</code> whose first entry is less than or equal to <code>x[i]</code> and whose second entry is less than or equal to <code>y[j]</code> (matching the <code><=</code> comparisons in the code below).</p>
<p>The following example code solves this:</p>
<pre><code>import numpy as np
b = np.random.random((2000,2))
x = np.unique(b[:,0])
y = np.unique(b[:,1])
H = np.count_nonzero(
np.logical_and(
b[:,0,None,None] <= x[None,:,None],
b[:,1,None,None] <= y[None,None,:]
),
axis=0
)
</code></pre>
<p>but this gets quite slow if <code>b</code> and thus <code>x</code> and <code>y</code> have a few thousand entries.</p>
<p>How can I do this more efficiently?</p>
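<p>A much faster route is to bucket each point onto the (x, y) grid with <code>searchsorted</code>, histogram the buckets, and take cumulative sums along both axes — roughly O(B + X·Y) work instead of the O(B·X·Y) broadcast:</p>

```python
import numpy as np

def count_dominated(b, x, y):
    """H[i, j] = number of rows of b with b[:,0] <= x[i] and b[:,1] <= y[j]."""
    ix = np.searchsorted(x, b[:, 0])          # grid index of each first entry
    iy = np.searchsorted(y, b[:, 1])
    counts = np.zeros((x.size, y.size), dtype=np.int64)
    np.add.at(counts, (ix, iy), 1)            # 2-D histogram of the points
    return counts.cumsum(axis=0).cumsum(axis=1)

b = np.random.random((2000, 2))
x = np.unique(b[:, 0])
y = np.unique(b[:, 1])
H = count_dominated(b, x, y)
print(H[-1, -1])   # 2000: every point is <= the largest (x, y)
```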
| <python><arrays><numpy> | 2023-09-05 12:22:32 | 1 | 893 | Bubaya |
77,044,403 | 5,371,582 | Type hint function accepting a Union | <p>Here is my (much simplified) code:</p>
<pre><code>def myfun(X:list[str|int]):
for x in X:
print(x)
X = [1,2,3]
myfun(X)
</code></pre>
<p>Pyright complains on the last line because I provide a list of <code>int</code> while the function requires <code>list[int|str]</code>.</p>
<ul>
<li>What is the best way to deal with that case?</li>
<li>Is there a way to tell pyright to accept "subtypes"?</li>
</ul>
<p>Constraints:</p>
<ul>
<li><p>I do not want to define <code>X</code> as <code>X:list[str|int]=[1,2,3]</code> because, in my real case, I want <code>X</code> to be understood as list of <code>int</code>.</p>
</li>
<li><p>I can call the function with <code>myfun(list[str|int](X))</code> but it is really annoying.</p>
</li>
</ul>
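<p>The underlying issue is that <code>list</code> is invariant: a <code>list[int]</code> is not a <code>list[int | str]</code>, because the function could append a <code>str</code> to it. Accepting the read-only, covariant <code>Sequence</code> is the usual fix and needs no call-site changes — a sketch:</p>

```python
from __future__ import annotations

from typing import Sequence

def myfun(X: Sequence[str | int]) -> None:
    for x in X:
        print(x)

X: list[int] = [1, 2, 3]
myfun(X)          # accepted by pyright: Sequence[str | int] is covariant
myfun(["a", 1])   # a mixed list is still fine too
```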
| <python><python-typing><pyright> | 2023-09-05 12:10:52 | 1 | 705 | Laurent Claessens |
77,044,352 | 2,725,810 | Decoding pickled body in AWS Lambda | <p>I have created a container for the following AWS Lambda function:</p>
<pre class="lang-py prettyprint-override"><code>import pickle
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
def get_embeddings_raw(texts):
try:
return model.encode(texts)
except:
print("It failed!")
return None
def lambda_handler(event, context):
texts = pickle.loads(event['body'])
result = get_embeddings_raw(texts)
return {
"statusCode": 200,
"result": pickle.dumps(result),
}
</code></pre>
<p>I have deployed this container locally with <code>sam local start-api &</code> and am testing with:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import pickle
texts = ['My first sentence', 'My second sentence']
my_url = "http://localhost:3000/wherewasit-embeddings/"
response = requests.post(my_url, data=pickle.dumps(texts))
print(response)
</code></pre>
<p>I get the 502 response with the error:</p>
<pre><code>UnicodeDecodeError while processing HTTP request: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte2023-09-05 14:56:31 127.0.0.1
</code></pre>
<p>Why is it trying to use <code>utf-8</code> when I use <code>pickle.loads</code>? How do I fix this?</p>
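<p>The decode error happens before your handler runs: the API gateway / local emulator treats the body as text, and raw pickle bytes are not valid UTF-8 (hence the <code>0x80</code>, pickle's protocol-2+ magic byte). The usual fix is to base64-encode on both legs; a sketch of the round trip:</p>

```python
import base64
import pickle

def encode_body(obj) -> str:
    """Client side: pickle, then base64, so the HTTP body is plain text."""
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def decode_body(event) -> object:
    """Lambda side: reverse the encoding before unpickling."""
    return pickle.loads(base64.b64decode(event["body"]))

# The handler's return value needs the same treatment: pickle.dumps(result)
# is bytes, which won't JSON-serialize -- return encode_body(result) instead.

texts = ["My first sentence", "My second sentence"]
event = {"body": encode_body(texts)}
print(decode_body(event) == texts)   # True
```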
| <python><http><aws-lambda><pickle><aws-sam> | 2023-09-05 12:04:30 | 0 | 8,211 | AlwaysLearning |
77,043,967 | 13,518,907 | Accelerometer Data Classification | <p>I am trying to classify accelerometer data (sampled with a frequency of 100Hz) into 4 different transportation modes (0,1,2,3). I have 41 different CSV files, each representing a time series. I stored every file in a list called subjects. Each CSV-file looks as follows:</p>
<pre><code> # Check if the label mapping worked
test = subjects[0]
print(test.head())
print(test.info())
print(len(test))
</code></pre>
<pre><code> x y z label
0 -0.154881 0.383397 -0.653029 0
1 -0.189302 0.410185 -0.597840 0
2 -0.202931 0.408217 -0.490296 0
3 -0.205011 0.407853 -0.360820 0
4 -0.196665 0.430047 -0.147033 0
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 128628 entries, 0 to 128627
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 x 128628 non-null float64
1 y 128628 non-null float64
2 z 128628 non-null float64
3 label 128628 non-null int64
dtypes: float64(3), int64(1)
memory usage: 3.9 MB
None
128628
</code></pre>
<p>At first, I would like to start by implementing a Random Forest algorithm. However, I am not sure how to create the train and test datasets for this, as I have multiple CSV files.</p>
<p>How can I create the train and test files for this task? At first I thought about concat all CSV-files together, but as each file represents a time series, I am not sure if this is the correct way to do this.</p>
<p>Thanks in advance for helping!</p>
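<p>Concatenating raw rows would mix subjects and discard the temporal structure. A common recipe for accelerometer classification is: (1) cut each subject's recording into fixed-length windows, (2) compute per-window features and a window label, and (3) split train/test by whole subjects. A sketch of the windowing step (window length and feature set are illustrative choices, not the only correct ones):</p>

```python
import numpy as np
import pandas as pd

def window_features(df: pd.DataFrame, win: int = 200) -> pd.DataFrame:
    """Non-overlapping windows (win=200 = 2 s at 100 Hz) with simple
    per-axis statistics and the window's majority label."""
    rows = []
    for start in range(0, len(df) - win + 1, win):
        w = df.iloc[start:start + win]
        row = {f"{a}_{s}": getattr(w[a], s)() for a in "xyz"
               for s in ("mean", "std", "min", "max")}
        row["label"] = w["label"].mode().iloc[0]
        rows.append(row)
    return pd.DataFrame(rows)

# train/test on disjoint subjects, then pool the windows:
# train = pd.concat([window_features(s) for s in subjects[:29]], ignore_index=True)
# test  = pd.concat([window_features(s) for s in subjects[33:]], ignore_index=True)
```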
| <python><classification><random-forest><accelerometer><train-test-split> | 2023-09-05 11:02:58 | 1 | 565 | Maxl Gemeinderat |
77,043,946 | 16,716,173 | How to calculate binomial probabilities in Python with very small numbers? | <p>I'm trying to calculate the likelihood of successfully guessing a password in Python. For example, if we take a 10 character lowercase password (26**10 possible passwords) and we can make 1 billion guesses a second, we can calculate the probability of successfully guessing the password in one hour with:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import binom
(1 - binom.pmf(k=0, n=1000000000*3600, p=1 / 26**10))
</code></pre>
<p>Which gives us the the result of <code>0.02525515384826793</code> (i.e, 2.5%). However, this doesn't work as we increase the length of the password (or more strictly, as <code>p</code> gets closer to zero). For instance if we increase the length of the password to 12 characters:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import binom
(1 - binom.pmf(k=0, n=1000000000*3600, p=1 / 26**12))
</code></pre>
<p>Then the returned value is just <code>0.0</code>, which is incorrect - presumably due to the float rounding down to zero at some point. How can I calculate this to get a more precise answer?</p>
<hr />
<p>Edit: this was with SciPy 1.10.1. Testing on the latest version (1.11.2 at time of writing) gave the correct value - so it looks like this was an issue with the older version.</p>
| <python><scipy> | 2023-09-05 11:00:42 | 2 | 327 | Gh0stFish |
77,043,936 | 18,904,265 | Can/Should I define a schema for a requests call using pydantic? | <p>I am writing a class to handle get/post/patch/delete requests to an API. For this, I am writing one method for each (e.g. <code>def get_request()</code>). The requests need quite a few parameters, which would result in a long list of parameters for each function. Therefore I want to define the schema some other way and pass it as a single variable.</p>
<p>Of course I could do this using a regular dict, but since I am using pydantic anyhow to parse the return of the request, I was wondering if I could (and should) use a pydantic model to pass the parameters to the request. Is there a "pythonic way" to handle these kind of functions?</p>
<p>In result, the code may look something like this (didn't test this code):</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
import requests
class Params(BaseModel):
param_1: str
param_2: int
param_3: list[str]
class ApiCall:
def get_request(self, url: str, params: Params):
response = requests.get(url, params = params.dict())
</code></pre>
<p>Is this a valid way to handle this kind of thing? Or is there a different, usual way to handle this kind of situation?</p>
<p>Thanks!</p>
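<p>For what it's worth, passing a validated model instead of a long parameter list is an idiomatic pattern. Two small notes: <code>.dict()</code> is the pydantic v1 spelling (v2 renamed it <code>model_dump()</code>), and you may want <code>exclude_none=True</code> so unset parameters stay out of the query string. A sketch (the network call is commented out):</p>

```python
from pydantic import BaseModel

class Params(BaseModel):
    param_1: str
    param_2: int
    param_3: list[str]

class ApiCall:
    def get_request(self, url: str, params: Params):
        if hasattr(params, "model_dump"):        # pydantic v2
            query = params.model_dump(exclude_none=True)
        else:                                     # pydantic v1
            query = params.dict(exclude_none=True)
        # return requests.get(url, params=query)
        return url, query

print(ApiCall().get_request("https://example.com",
                            Params(param_1="a", param_2=2, param_3=["x"])))
```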
| <python><python-requests><pydantic> | 2023-09-05 10:59:08 | 0 | 465 | Jan |
77,043,882 | 8,887,907 | SSH key asks for passphrase when created using subprocess in python | <p>I have a Python project that is supposed to create an SSH key with ssh-keygen and add it to some remote servers. When I run the command <code>ssh-keygen -f .ssh/id_rsa -t rsa -N ""</code> from the command line, everything works fine, and I can simply run the rest of the Python code to transfer my key to the remote servers. After that, SSH connections can be made using the key successfully, either from <em>paramiko</em> in Python or from the command line.</p>
<p>However, when I run the very same command to create the key using the subprocess library in Python, I can't use the key to make connections, since it prompts for a passphrase, like <code>Enter passphrase for key 'C:\Users\<my_user>/.ssh/id_rsa':</code>. My Python code for building the SSH key is as follows:</p>
<pre><code>from pathlib import Path
import subprocess
LOCAL_SSH_KEY_FOLDER = Path().home() / ".ssh"
command = f'-f {LOCAL_SSH_KEY_FOLDER}/id_rsa -t rsa -N ""'
p = subprocess.Popen(["ssh-keygen"] + command.split(' '), stdout=subprocess.PIPE,
stderr=subprocess.PIPE, text=True)
result, errors = p.communicate()
</code></pre>
<p>One way to make it work is to remove the <code>-N ""</code> part and ask the user of the Python code to input an empty passphrase. That works, but I don't want my user to be prompted for this passphrase.</p>
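<p>The likely culprit is <code>command.split(' ')</code>: without a shell, the quotes are not interpreted, so ssh-keygen receives the two-character string <code>""</code> as the passphrase — which would explain why the key later prompts for one. Passing a real argument list with a genuinely empty string avoids this; a sketch:</p>

```python
import subprocess
from pathlib import Path

# what the posted code actually passes as the passphrase:
print('-N ""'.split(' ')[-1])   # '""'  (two quote characters, not empty)

def generate_key(key_path: Path) -> subprocess.CompletedProcess:
    # no shell, no quoting: the final "" really is an empty passphrase
    args = ["ssh-keygen", "-f", str(key_path), "-t", "rsa", "-N", ""]
    return subprocess.run(args, capture_output=True, text=True)

# generate_key(Path.home() / ".ssh" / "id_rsa")
```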
| <python><ssh><subprocess><ssh-keygen> | 2023-09-05 10:52:54 | 1 | 583 | Amir Zare |
77,043,881 | 8,831,165 | Aiohttp: `async with session.get(...)` block blocks until receiving the response: why? | <p>I have the following code snippet:</p>
<pre><code>import aiohttp
import asyncio
async def main():
async with aiohttp.ClientSession() as session:
async with session.get("https://hub.dummyapis.com/delay?seconds=1") as r:
print("Here!")
text = await r.text()
print(text)
asyncio.run(main())
</code></pre>
<p>The url I'm calling simply waits for one second, and then returns a <code>200</code>.</p>
<p>I thought the <code>Here!</code> would be printed immediately, then 1 second would pass (or maybe a tiny bit more, including the overhead to send and receive the request), and then the text would be printed. After all, I'm only <code>await</code>ing after the <code>print("Here!")</code> statement. But that's not the case: the <code>Here!</code> is only printed after 1 second. So I guess that the <code>async with session.get(...) as r</code> is blocking? My questions:</p>
<ol>
<li>What's going on here? Is it an implementation detail of <code>aiohttp</code> to wait for the response to be received, or am I misunderstanding <code>async</code> here? Can I make the <code>session.get(...) as r</code> unblocking?</li>
<li>More specifically, I would like to be able to:</li>
</ol>
<ul>
<li>First send the request, saving a Future/awaitable (or something else?) that will resolve to a response for later.</li>
<li>Do some work - which uses that Future/awaitable (specifically I would like to inspect the state of the request when it's in-flight)</li>
<li>wait for the response to come through, and return</li>
</ul>
<p>Is that possible with <code>aiohttp</code>?</p>
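<p>On point 1: <code>async with session.get(...)</code> awaits until the response <em>headers</em> arrive (your test server delays them by 1 s), which is why <code>Here!</code> waits. On point 2: wrap the coroutine in a task — <code>req = asyncio.create_task(session.get(url))</code> — do your work, then <code>r = await req</code>. The pattern, shown here with a stand-in coroutine so it runs without a network:</p>

```python
import asyncio

async def fake_get():
    # stand-in for session.get(...): "headers" arrive after a delay
    await asyncio.sleep(0.2)
    return "response"

async def main():
    req = asyncio.create_task(fake_get())   # request is now in flight
    print("Here!")                          # printed immediately
    print(req.done())                       # False: inspect it mid-flight
    return await req                        # resolves when "headers" arrive

print(asyncio.run(main()))                  # response
```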
| <python><aiohttp> | 2023-09-05 10:52:44 | 2 | 2,744 | Frederik Bode |
77,043,650 | 9,947,412 | Pylint and flake8: quality score | <p>I was wondering if it is possible to get a code quality score using flake8, like pylint does by default. I cannot find a flake8 configuration option for it, if one exists.</p>
<p>I want to be able to compare my file with this insights over the time.</p>
<p>Thank you!</p>
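<p>flake8 has no built-in score. If you want a comparable number over time, one option is to reproduce pylint's documented default evaluation formula from counts flake8 can give you (<code>flake8 --statistics --count</code>, plus a statement count from e.g. <code>radon raw</code>). A sketch — treating E-codes as errors and W-codes as warnings is my assumption, not a standard mapping:</p>

```python
def pylint_style_score(errors: int, warnings: int, statements: int) -> float:
    """pylint's default: 10.0 - 10 * (5*error + warning + refactor +
    convention) / statements (refactor/convention folded into warnings here)."""
    return 10.0 - ((5 * errors + warnings) / max(statements, 1)) * 10

print(pylint_style_score(0, 0, 100))   # 10.0
print(pylint_style_score(1, 4, 100))   # 9.1
```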
| <python><pylint><flake8> | 2023-09-05 10:19:12 | 0 | 907 | PicxyB |
77,043,614 | 6,008,271 | Structed logging for uWSGI HARAKIRI messages | <p>My observability/logging platform requires that logs be structured in a specific format in order for them to be ingested and available in the right place.</p>
<p>This involves JSON-structuring the logs with some metadata like which service it's coming from.</p>
<p>For normal uwsgi request logs I can solve it like this:</p>
<pre class="lang-xml prettyprint-override"><code> <req-logger>app syslog:uwsgi-req</req-logger>
<log-req-encoder>json {"service":"foo","message":"${msg}"}</log-req-encoder>
<log-req-encoder>nl</log-req-encoder>
</code></pre>
<p>However, uWSGI has more than just request logs, there's also stuff I want to keep like Harakiri logs (when a worker has been processing longer than <code>HARAKIRI_TIMEOUT</code> seconds and gets killed):</p>
<pre><code>HARAKIRI: -- syscall> running
HARAKIRI: -- wchan> 0
Tue Sep 5 10:10:39 2023 - HARAKIRI !!! worker 2 status !!!
Tue Sep 5 10:10:39 2023 - HARAKIRI [core 0] 10.52.241.81 - GET /foo/bar since 1693908338
Tue Sep 5 10:10:39 2023 - HARAKIRI !!! end of worker 2 status !!!
DAMN ! worker 1 (pid: 68) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 468)
DAMN ! worker 2 (pid: 85) died, killed by signal 9 :( trying respawn ...
Respawned uWSGI worker 2 (new pid: 485)
</code></pre>
<p>These logs don't end up getting structured, so they don't get ingested and are missed.</p>
<p>I tried setting a <code>log-encoder</code> to capture these like so:</p>
<pre class="lang-xml prettyprint-override"><code> <logger>requests syslog:uwsgi-main</logger>
<logformat>%(status) %(method) %(uri) | ip=%(var.HTTP_X_REAL_IP), dT=%(msecs)ms</logformat>
<log-encoder>json {"service":"foo","message":"${msg}"}</log-encoder>
<log-encoder>nl</log-encoder>
</code></pre>
<p>but the problem is that this <em>also</em> includes the service logs, which themselves are <em>already</em> JSON-structured, so application logs end up double-wrapped and looking like this:</p>
<pre><code>{"service":"foo","msg":"{\"service\":\"foo\",\"msg\":\"application log\"}"}
</code></pre>
<p>Ideally I'd like to apply the same <code>log-req-encoder</code> json structure to <em>every</em> log created by uWSGI so <em>nothing</em> is missed. Is there a good way to do this?</p>
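<p>uWSGI's logging docs describe two pieces that can be combined here: encoders can be bound to a specific named logger (<code>json:loggername</code>), and <code>log-route</code> sends lines matching a regex to a given logger. Routing lines that already start with <code>{</code> past the encoder avoids the double wrapping — a sketch (untested; logger names are illustrative):</p>

```xml
    <!-- two plain loggers -->
    <logger>apponly syslog:uwsgi-app</logger>
    <logger>other syslog:uwsgi-main</logger>

    <!-- lines that already look like JSON go to "apponly" untouched;
         everything else (HARAKIRI, DAMN, respawns...) goes to "other" -->
    <log-route>apponly ^\{</log-route>
    <log-route>other ^[^\{]</log-route>

    <!-- wrap only the non-JSON logger -->
    <log-encoder>json:other {"service":"foo","message":"${msg}"}</log-encoder>
    <log-encoder>nl:other</log-encoder>
```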
| <python><logging><uwsgi> | 2023-09-05 10:14:46 | 0 | 739 | OscarVanL |
77,043,540 | 17,835,656 | How can I add a QR image in pyqt5? | <p>I am working on a project that works with QR images, and I have to convert a link into a QR image.</p>
<p>After that, I have to show the image in the window.</p>
<p>I am using the <strong>qrcode</strong> library. It provides a way to convert the image into a QImage, so I can set it directly on the window without saving it as a file and then reading it back.</p>
<p>I do not want to save it as a file; I only want to use it directly.</p>
<p>But when I tried to do it, I got an error.</p>
<p>This is the code for a test:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets
from PyQt5 import QtGui
import sys
import qrcode
qr_coda_as_an_image = qrcode.make("Link")
qr_coda_as_an_image = qr_coda_as_an_image.get_image()
qr_coda_as_an_image = qr_coda_as_an_image.toqimage()
app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QWidget()
window.resize(800,500)
window_layout = QtWidgets.QGridLayout()
image = QtGui.QPixmap.fromImage(qr_coda_as_an_image)
first_widget = QtWidgets.QLabel()
first_widget.setPixmap(image)
window_layout.addWidget(first_widget)
window.setLayout(window_layout)
window.show()
app.exec()
</code></pre>
<p>It was working when I made it, but after a few edits I broke it, and I have been trying for 5 hours to find the problem or solve it, but I failed.</p>
<p>This is the error:</p>
<pre><code>TypeError: fromImage(QImage, flags: Union[Qt.ImageConversionFlags, Qt.ImageConversionFlag] = Qt.AutoColor): argument 1 has unexpected type 'ImageQt'
</code></pre>
<p>thanks</p>
| <python><python-3.x><pyqt><pyqt5><qr-code> | 2023-09-05 10:02:43 | 1 | 721 | Mohammed almalki |
77,043,496 | 1,422,096 | Is there a split function that returns [] in this case? | <p>Of course we have:</p>
<pre><code>"1,2,3".split(",") # ["1", "2", "3"]
"1".split(",") # ["1"]
</code></pre>
<p>but also this, which is sometimes problematic in some situations (*):</p>
<pre><code>"".split(",") # [""]
</code></pre>
<p>Is there a built-in way (maybe with a parameter, or a specific function) to have:</p>
<pre><code>"".split(",", allow_empty=True) # []
</code></pre>
<p>?</p>
<p>This would (sometimes) make sense: <strong>the input is empty, so the output list should be empty.</strong></p>
<p>(*) Example situation:</p>
<pre><code>for element in s.split(","):
print(f"we have the element {element}")
# works for s = "1, 2, 3"
# works for s = "1"
# doesn't work for s = "" => the loop should be empty
</code></pre>
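<p>There's no built-in flag for this (though note that <code>"".split()</code> with no separator does already return <code>[]</code>). A tiny helper gives the semantics you want — empty input, empty output:</p>

```python
def split_nonempty(s: str, sep: str) -> list[str]:
    """Like s.split(sep), but an empty input yields an empty list."""
    return s.split(sep) if s else []

print("1,2,3".split(","))        # ['1', '2', '3']
print(split_nonempty("", ","))   # []
print("".split())                # [] -- the no-separator form already does this

for element in split_nonempty("", ","):
    print(f"we have the element {element}")   # loop body never runs
```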
| <python><list><split> | 2023-09-05 09:55:22 | 5 | 47,388 | Basj |
77,043,346 | 13,836,083 | Proper way to retrieve last item from JSON array, stored in file | <p>Below is what my JSON file looks like. I am continuously watching this file because I want to retrieve every new item appended to the JSON array in Python.</p>
<p>I want to know what could be a better approach.</p>
<pre><code>[
{key1:value1,key2:value2},
{key1:value1,key2:value2}
]
</code></pre>
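<p>If you control the writer, note that a JSON array rewritten in place forces a full re-parse on every change. A common alternative is JSON Lines (one object per line, append-only): the watcher just remembers its byte offset and parses only what was appended. A sketch:</p>

```python
import json

def read_new_items(path: str, offset: int):
    """Return (new_items, new_offset): parse only lines appended since
    the last call, resuming at the remembered byte offset."""
    items = []
    with open(path, "rb") as f:        # binary mode: offsets are plain bytes
        f.seek(offset)
        for line in f:
            if line.strip():
                items.append(json.loads(line))
        offset = f.tell()
    return items, offset
```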
| <python><json> | 2023-09-05 09:33:24 | 1 | 540 | novice |
77,043,285 | 188,331 | NLTK sentence_bleu() returns 0 while evaluating Chinese sentences | <p>I'm trying to evaluate Chinese sentence BLEU scores with NLTK's <code>sentence_bleu()</code> function. The code is as follows:</p>
<pre><code>import nltk
import jieba
from transformers import AutoTokenizer, BertTokenizer, BartForConditionalGeneration
src = '樓上漏水耍花招不處理可以怎麼做'
ref = '上層漏水耍手段不去處理可以怎麼做'
checkpoint = 'fnlp/bart-base-chinese'
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)
hypothesis_translations = []
for sentence in [src]:
inputs = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=100, return_token_type_ids=False)
outputs = model.generate(**inputs)
translated_sentence = tokenizer.decode(outputs[0], skip_special_tokens=True)
hypothesis_translations.append(translated_sentence)
# for Reference tokenization
inputs_ref = tokenizer(ref, return_tensors="pt", truncation=True, max_length=100, return_token_type_ids=False)
outputs_ref = model.generate(**inputs_ref)
tokenized_ref = tokenizer.decode(outputs_ref[0], skip_special_tokens=True)
nltk_bleu = nltk.translate.bleu_score.sentence_bleu(tokenized_ref, hypothesis_translations)
print(nltk_bleu)
</code></pre>
<p>The output of printing <code>nltk_bleu</code> is <code>0</code>.</p>
<p>But when I use the <code>corpus_score()</code> of <code>SacreBLEU</code> library, it returns normal and expected results:</p>
<pre><code>import evaluate
from sacrebleu.metrics import BLEU
bleu = BLEU()
bleu_score = bleu.corpus_score(references=tokenized_ref, hypotheses=hypothesis_translations)
print(bleu_score)
</code></pre>
<p>which returns:</p>
<blockquote>
<p>BLEU = 4.79 73.3/3.6/1.9/1.0 (BP = 1.000 ratio = 15.000 hyp_len = 15 ref_len = 1)</p>
</blockquote>
<p>How can I make the NLTK <code>sentence_bleu</code> function return correct results?</p>
<hr />
<p><strong>UPDATE</strong> After adding NLTK's Method 3 into consideration:</p>
<pre><code>from nltk.translate.bleu_score import SmoothingFunction
smooth_fn = SmoothingFunction()
nltk_bleu = nltk.translate.bleu_score.sentence_bleu(tokenized_ref, hypothesis_translations, smoothing_function=smooth_fn.method3)
</code></pre>
<p>the value of <code>nltk_bleu</code> is still <code>0</code>.</p>
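<p>The zero likely comes from how the arguments are shaped: <code>sentence_bleu</code> expects <code>references</code> to be a <em>list of tokenized references</em> (each a list of tokens) and the hypothesis to be <em>one</em> token list — passing raw strings makes NLTK iterate over the wrong things. For Chinese you also need explicit tokenization (characters below; <code>jieba.lcut</code> words work the same way). A sketch:</p>

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

ref = '上層漏水耍手段不去處理可以怎麼做'
hyp = '樓上漏水耍花招不處理可以怎麼做'

references = [list(ref)]   # ONE reference, tokenized into characters
hypothesis = list(hyp)     # the hypothesis is a single token list

smooth = SmoothingFunction().method3
score = sentence_bleu(references, hypothesis, smoothing_function=smooth)
print(score)   # small but non-zero
```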
| <python><nltk><cjk><bleu> | 2023-09-05 09:25:09 | 2 | 54,395 | Raptor |
77,043,088 | 11,417,794 | How to create this alphabetical pattern in Python | <p>I want to create this alphabetical pattern:</p>
<pre><code>A
B C
D E
B C
A
</code></pre>
<p>So far I have done this:</p>
<pre><code>n = 4
ch = 65
for i in range(0, n):
for j in range(0, i+1):
if j == 0 or j == 1:
print(chr(ch), end =" ")
ch += 1
print()
for i in range(n, 0, -1):
for j in range(0, i):
if j == 0 or j == 1:
ch -= 1
print(chr(ch), end=" ")
print()
</code></pre>
<p>But the output is something like this: (For n =4)</p>
<pre><code>A
B C
D E
F G
G F
E D
C B
A
</code></pre>
<p>This does not mirror as per my expectation.
I have got this problem in the comment section of one of my blog and I can't solve it: <a href="https://www.codespeedy.com/python-program-to-print-alphabetical-pattern/" rel="nofollow noreferrer">Alphabetical patterns in Python</a></p>
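<p>For reference, the exact shape I am after can be produced by building the descending rows once and mirroring them, instead of recomputing letters on the way back up. A sketch (the <code>levels</code> parameter name is my own):</p>

```python
def pattern(levels=3):
    # build the top half: the first row has 1 letter, every later row has 2
    rows, ch = [], ord("A")
    for i in range(levels):
        width = min(i + 1, 2)
        rows.append(" ".join(chr(ch + k) for k in range(width)))
        ch += width
    # mirror everything except the middle row
    rows += rows[-2::-1]
    return "\n".join(rows)

print(pattern(3))
```

<p><code>pattern(3)</code> prints the five-line A / B C / D E / B C / A shape from the top of the question.</p>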
| <python> | 2023-09-05 08:53:18 | 1 | 898 | CodeSpeedy |
77,043,082 | 13,518,907 | Human Activity Recognition - Train-Test-Split and Modeling | <p>I have a list of CSV files that contains accelerometer (Frequency 100Hz) data from several subjects. I first read in all CSV-files into a list called "subjects". Here is a short snipped on how each dataset in the "subject" list looks like:</p>
<pre><code>test = subjects[0]
print(test.head())
print(test.info())
x y z label
0 0.000964 -0.001134 0.006626 0
1 0.001184 -0.001213 0.009387 0
2 0.000443 -0.001731 0.008007 0
3 -0.000256 -0.000379 0.006897 0
4 0.000328 0.000040 0.005098 0
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1229597 entries, 0 to 1229596
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 x 1229597 non-null float64
1 y 1229597 non-null float64
2 z 1229597 non-null float64
3 label 1229597 non-null int64
dtypes: float64(3), int64(1)
memory usage: 37.5 MB
None
</code></pre>
<p>Now I want to build a classification model (e.g. Random Forest) to be able to predict the label (range: 0-3).</p>
<p>At this point, I am not sure how to split my data into train and test datasets. As it is time series data, I think I can't use the classic train-test-split function from scikit-learn.</p>
<p>So how should I make the split for this task?</p>
<p>My first thought was to use the first 70% of the subjects as train data, the next 10% as validation data, and the remaining 20% as test data:</p>
<pre><code># 41 subjects in total
train_data = subjects[:29]
validation_data = subjects[29:33]
test_data = subjects[33:]
</code></pre>
<p>I am not sure if this is the correct way and don't know how I should proceed here to build a classification model.</p>
<p>Thanks in advance!</p>
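<p>For reproducibility, here is a stdlib-only sketch of the subject-wise split described above: shuffle the subject indices first, then slice by the 70/10/20 proportions. The function name and seed are my own choices:</p>

```python
import random

def split_subjects(n_subjects, train_frac=0.7, val_frac=0.1, seed=42):
    # shuffle subject indices so the split is not tied to file order
    idx = list(range(n_subjects))
    random.Random(seed).shuffle(idx)
    n_train = int(n_subjects * train_frac)
    n_val = int(n_subjects * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_subjects(41)
print(len(train_idx), len(val_idx), len(test_idx))  # 28 4 9
```

<p>Each subject ends up in exactly one split, so no subject's recordings leak between train and test.</p>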
| <python><classification><accelerometer><train-test-split> | 2023-09-05 08:52:45 | 0 | 565 | Maxl Gemeinderat |
77,043,028 | 7,758,765 | Use pip with authentication via a proxy server without storing the password as plaintext | <p>I am currently using a machine within an organization that requires passing through a proxy with authentication. I am running Red Hat 7 and need to use pip. However, it seems that the only ways to use pip through a proxy with authentication involve storing the password as plaintext in the config file or an environmental variable, which is really unsafe.</p>
<p>Is there a more secure solution available? I have come across <a href="https://stackoverflow.com/q/76075175/7758765">a similar post for Windows</a>, but unfortunately, it doesn't provide any solutions.</p>
| <python><pip> | 2023-09-05 08:45:21 | 0 | 833 | Welgriv |
77,042,924 | 2,992,055 | pandas read_csv can't handle additional commas in double quotes | <p>Seems like this has been addressed so many times, yet I'm not able to resolve it.
Here's a minimal example of my CSV:</p>
<pre><code>Issue, Content
Test, "A, B"
Test, "A, B, C"
</code></pre>
<p>Here's the read_csv code (tried all sorts of combinations regarding parameters):</p>
<pre><code>df = pd.read_csv('data.csv', delimiter=",", quotechar='"', encoding="utf-8")
</code></pre>
<p>Here's the error:</p>
<blockquote>
<p>ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4</p>
</blockquote>
<p>I created the CSV file with a plain text editor. I'm also wondering why the parser expects 3 fields.</p>
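<p>One parameter that targets exactly this layout: because every field starts with a space after the comma, the opening quote is not the first character of the field, so the parser treats the quotes as literal text. <code>skipinitialspace=True</code> makes the quote immediately follow the delimiter again. A self-contained reproduction of the sample:</p>

```python
import io
import pandas as pd

data = 'Issue, Content\nTest, "A, B"\nTest, "A, B, C"\n'

# Without skipinitialspace this raises the "Expected 3 fields" ParserError
df = pd.read_csv(io.StringIO(data), skipinitialspace=True)

assert df.shape == (2, 2)
assert df.loc[1, "Content"] == "A, B, C"
```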
| <python><pandas><csv> | 2023-09-05 08:31:12 | 2 | 514 | Pete |
77,042,910 | 17,580,381 | Join on multiprocessing pool deadlock due to [Errno 61] Connection refused | <p>Here's a contrived piece of code that runs exactly as expected:</p>
<pre><code>from multiprocessing import Pool, Manager
from os import cpu_count
PROCESSES = cpu_count() // 2
N = 250
def add_to_list(the_list, the_value):
try:
the_list.append(the_value)
except Exception as e:
print(e)
def main():
with Manager() as manager:
myList = manager.list()
pool = Pool(PROCESSES)
for i in range(N):
pool.apply_async(add_to_list, (myList, i))
print('Closing...')
pool.close()
print('Joining...')
pool.join()
print(myList)
if __name__ == '__main__':
main()
</code></pre>
<p>Note that cpu_count() == 20 on my system.</p>
<p>The output is a list of N values in the inclusive range 0-249. This is exactly what I expect.</p>
<p>I then change the code so that:</p>
<pre><code>pool = Pool()
</code></pre>
<p>In other words the pool size will default to 20</p>
<p>If I do so, I get this:</p>
<pre><code>[Errno 61] Connection refused
</code></pre>
<p>...multiple times (it varies) from the exception handler in the add_to_list() function.</p>
<p>The join() then blocks indefinitely - i.e., the program never ends</p>
<p>I have determined that this unwanted behaviour arises when PROCESSES >= 18</p>
<p>If I Ctrl-C to stop the program, I see in the stack multiple references to:</p>
<pre><code>return self._semlock.__enter__()
</code></pre>
<p>I deduce therefore that there's some kind of issue with Python's semaphore implementation that <em>may</em> be platform specific.</p>
<p>Is there a way to maximise the number of processes in the pool (cross platform) that would ensure that this situation doesn't arise?</p>
<p>Platform details:</p>
<pre><code>macOS 13.5.1
3 GHz 10-Core Intel Xeon W
32 GB 2666 MHz DDR4
Python 3.11.5
</code></pre>
| <python><multiprocessing> | 2023-09-05 08:28:25 | 1 | 28,997 | Ramrab |
77,042,700 | 18,620,559 | Django Database Routing using USER from request | <p>I am trying to implement a PER USER database routing in Django and things are not going that smoothly.</p>
<p>Databases are predefined and they have identical structure.</p>
<p>This is what I have right now and IT WORKS but I was wondering if there is anything better for what I want :</p>
<pre><code>class DatabaseMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if str(request.company) == 'company1':
            request.database = 'COMPANY_X'
        else:
            request.database = 'default'
        return self.get_response(request)


# This has security risks, but I don't think there is anything better out there for now
from threadlocals.threadlocals import get_current_request

class UserDatabaseRouter:
    def db_for_read(self, model, user_id=None, **hints):
        request = get_current_request()
        if request is not None:
            return request.database
        return None

    def db_for_write(self, model, user_id=None, **hints):
        request = get_current_request()
        if request is not None:
            return request.database
        return None
</code></pre>
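<p>A variant of the same idea I have been considering, using a module-level <code>threading.local</code> directly (this is essentially what the <code>threadlocals</code> package wraps), so the router only ever sees the database name rather than the whole request. A stdlib-only sketch with a simulated request:</p>

```python
import threading

_state = threading.local()

class UserDatabaseRouter:
    """Route reads and writes to whatever database the middleware recorded."""

    def db_for_read(self, model, **hints):
        return getattr(_state, "database", None)

    # writes follow the same per-request choice as reads
    db_for_write = db_for_read

# The middleware would set this once per request; simulated here:
_state.database = "COMPANY_X"

router = UserDatabaseRouter()
assert router.db_for_read(None) == "COMPANY_X"
assert router.db_for_write(None) == "COMPANY_X"
```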
| <python><django><database><routes> | 2023-09-05 07:56:11 | 0 | 343 | Lerex |
77,042,626 | 498,690 | How to pivot dataframe by columns from compound index | <p>My dataframe represents time series. It has three columns: ts, code and count. I try to pivot it.</p>
<pre class="lang-py prettyprint-override"><code># set index by code, ts to get
# code_01 1
# code_01 2
# ...
# code_42 1
# code_42 2
df_count = df[['code', 'count', 'ts']]
df_count = df_count.set_index(['code', 'ts'])
# fails with KeyError: 'ts'
df_pivot = df_count.pivot(index='ts', values='count', columns='code')
</code></pre>
<p>I've duplicated columns ts, code</p>
<pre><code>df_count['ts_pivot'] = df_count['ts']
df_count['code_pivot'] = df_count['code']
df_pivot = df_count.pivot(index='ts_pivot', values='count', columns='code_pivot')
</code></pre>
<p>Is there a more elegant solution?</p>
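<p>For comparison, <code>pivot</code> works directly on the flat frame as long as <code>ts</code> and <code>code</code> are still ordinary columns; the <code>KeyError</code> above comes from moving them into the index first. A minimal sketch with made-up values:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "code": ["code_01", "code_01", "code_42", "code_42"],
    "ts": [1, 2, 1, 2],
    "count": [10, 20, 30, 40],
})

# pivot straight from the long format; no set_index or duplicated columns needed
wide = df.pivot(index="ts", columns="code", values="count")

assert wide.loc[1, "code_01"] == 10
assert wide.loc[2, "code_42"] == 40
```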
| <python><pandas><dataframe> | 2023-09-05 07:46:15 | 0 | 3,818 | Capacytron |
77,042,247 | 2,009,558 | How can I collect output from a parallelised Python operation in a single list? | <p>I want to run an operation in parallel and collect the output in a global list. I include the list as a variable that is passed to the parallelised function. However, the output doesn't accumulate there. Why doesn't this work?</p>
<p>This is the 1st cell in Jupyter Notebook:</p>
<pre><code>from multiprocessing import Pool
import os
def tenfold(input_value,output_list):
output_value = input_value *10
print(input_value, output_value)
output_list.append(output_value)
# list to collect output
out_list = []
#create input tuples
tups = []
for i in range(0,5): tups.append((i, out_list))
# parallel
def main():
cores = os.cpu_count()
workers = 30
print('n cores: ', cores, ', n workers: ', workers)
with Pool(workers) as p:
p.map(tenfold, tups) # second variable must be a tuple
if __name__ == '__main__':
main()
</code></pre>
<p>And then when I run this cell:</p>
<pre><code>print(out_list)
</code></pre>
<p>I get an error in the terminal:</p>
<blockquote>
<p>AttributeError: Can't get attribute 'tenfold' on <module '__main__' (built-in)></p>
</blockquote>
<p>And the kernel just sits there doing nothing until I restart it. What am I doing wrong? Is it even possible to collect the output from a function run in parallel in a global list?</p>
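<p>One pattern I have since found is to collect results from the pool's return values instead of appending to a shared list; the plain <code>out_list</code> above lives only in the parent process, so the workers never touch it. A sketch of that pattern, shown here with <code>multiprocessing.pool.ThreadPool</code> so it also runs inside a notebook (process pools pickle the worker function, which is often what triggers the AttributeError above on notebook kernels; with processes, <code>Pool.map</code> is used the same way):</p>

```python
from multiprocessing.pool import ThreadPool

def tenfold(value):
    # return the result instead of appending to a shared list
    return value * 10

with ThreadPool(4) as pool:
    out_list = pool.map(tenfold, range(5))  # map preserves input order

print(out_list)  # [0, 10, 20, 30, 40]
```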
| <python><parallel-processing> | 2023-09-05 06:46:21 | 2 | 341 | Ninja Chris |
77,042,189 | 22,070,773 | Wrapping GLFWwindow with pybind11 | <p><code>GLFWwindow</code> is forward declared in <code>glfw.h</code>, and as such, <code>pybind11</code> tells me it cant properly wrap it without a definition, for example, this:</p>
<pre><code>#include <GLFW/glfw3.h>
#include <pybind11/pybind11.h>
PYBIND11_MAKE_OPAQUE(GLFWwindow);
class Application {
GLFWwindow *window;
public:
GLFWwindow* getWindow() {
return window;
}
};
namespace py = pybind11;
PYBIND11_MODULE(python_example, m) {
py::class_<Application>(m, "Application")
      .def("window", &Application::getWindow);
}
</code></pre>
<p>produces the error message:</p>
<blockquote>
<p>error: invalid use of incomplete type ‘struct GLFWwindow’</p>
</blockquote>
<p>I've tried a few different things with <code>py::capsule</code>s but on the whole haven't found a solution.
I pretty much need access to the window somehow, it doesn't necessarily have to be a raw pointer, but I do want to be able to pass it as an argument to things like <code>glfwMakeContextCurrent</code> or <code>glfwGetKey</code> from Python for example. So, I tried something like this:</p>
<pre><code>nb::capsule get_window = []() {
return nb::capsule(app->window);
};
void makeContextCurrent(nb::capsule a_cap) {
glfwMakeContextCurrent((GLFWwindow*)a_cap.data());
};
...
.def_prop_ro("window", []() {
return nb::capsule((void *)app->window);
})
m.def("make_context_current", &makeContextCurrent);
</code></pre>
<p>But when I try to instantiate an <code>Application</code> and print <code>self.window</code> or call <code>make_context_current(self.window)</code>, I'm getting:</p>
<blockquote>
<p>print (self.window)
TypeError: (): incompatible function arguments. The following argument types are supported:</p>
<ol>
<li>() -> capsule</li>
</ol>
</blockquote>
<p>Also, the above is <code>nanobind</code>, since I want to solve it with <code>nanobind</code>, but I figured if there was a solution in <code>pybind11</code> it would also apply to <code>nanobind</code>.</p>
<p>Anyway, thanks in advance for any help or suggestions.</p>
| <python><c++><pybind11> | 2023-09-05 06:33:43 | 1 | 451 | got here |
77,042,177 | 9,813,860 | Is there an R version of Python's asyncio.sleep() non-blocking wait function | <p>I am using the websocket package for R to receive messages sent from Discord. For this communication to work, Discord requires R to send a message every ~40 seconds (what they call the heartbeat).</p>
<p>To do this, I create an infinite while loop like this:</p>
<pre><code>LastTime=Sys.time()
Send=1
while (Send==1){
TimeLapse=as.numeric(Sys.time()-LastTime)
Sys.sleep(1) #added to slow down
if(TimeLapse>=40){
ws$send(JsonKeepAliveMessage)
LastTime=Sys.time()
}
}
</code></pre>
<p>This works, but the computer gets crazy hot without the Sys.sleep() command. If I include that line the computer runs normally, but then messages are not received while the process is sleeping.</p>
<p>While looking into this, I found that Python has two ways to have a process wait: time.sleep() blocks, while asyncio.sleep() waits without blocking. Does R have a non-blocking wait function?</p>
| <python><r><websocket><discord> | 2023-09-05 06:31:45 | 1 | 449 | Camilo |
77,041,917 | 116 | How do I tell mypy to load dotenv from my site-packages? | <p>How do I tell mypy the location of a package? The package is in site-packages.</p>
<p>Details:</p>
<p>I am getting this error</p>
<pre><code>% mypy t.py
t.py:10: error: Cannot find implementation or library stub for module named "dotenv" [import]
</code></pre>
<p>on this code</p>
<pre><code>import dotenv
dotenv.load_dotenv(dotenv.find_dotenv())
</code></pre>
<p>I see the dotenv module in my sys.path:</p>
<pre><code>>>> sys.path
['', '/Users/mark/miniconda3/lib/python311.zip', '/Users/mark/miniconda3/lib/python3.11', '/Users/mark/miniconda3/lib/python3.11/lib-dynload', '/Users/mark/miniconda3/lib/python3.11/site-packages']
% ls /Users/mark/miniconda3/lib/python3.11/site-packages/dotenv
__init__.py __pycache__/ ipython.py parser.py variables.py
__main__.py cli.py main.py py.typed version.py
</code></pre>
<p>What can I do to tell mypy about the location of this package?</p>
| <python><mypy> | 2023-09-05 05:25:25 | 0 | 305,996 | Mark Harrison |
77,041,913 | 9,644,490 | how to split Django urls.py into multiple files | <p>I have a function based view in my app, and I am trying to split my urls.py as follows</p>
<pre><code>my_project/
|- my_project/
|- urls.py
|- my_app/
|- urls
|- __init__.py
|- dogs.py
|- cats.py
|- views
|- __init__.py
|- dogs
|- index.py
...
</code></pre>
<pre><code># my_project/my_app/urls/__init__.py
from .dogs import urlpatterns as dog_urls
from .cats import urlpatterns as cat_urls
app_name = "pets"
urlpatterns = dog_urls + cat_urls
</code></pre>
<pre><code># my_project/my_app/urls/dogs.py
from .. import views
urlpatterns = [
path("dogs/", views.dogs.index, name="dogs")
...
]
</code></pre>
<pre><code># my_project/my_project/urls.py
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import include, path
from . import views
urlpatterns = [
path("pets/", include("pets.urls")),
...
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>With this set up I am getting this error</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: The included URLconf '<module 'pets.urls' (namespace)>' does not appear to have any patterns in it. If you see the 'urlpatterns' variable with valid patterns in the file then the issue is probably caused by a circular import.
</code></pre>
<p>I do not see where in my code I am creating a circular dependency, nor do I understand what I am doing wrong with urlpatterns.</p>
| <python><django> | 2023-09-05 05:24:02 | 0 | 409 | kyuden |
77,041,700 | 12,093,885 | Cannot pip install CUDA-enabled pytorch in nvidia/cuda docker container: not a supported wheel on this platform | <p>I have an image (<code>./app/Dockerfile</code>) defined like this:</p>
<pre><code>FROM nvidia/cuda:11.3.1-base-ubuntu20.04
ENV TZ="Asia/Kuala_Lumpur"
################################################################################
#
# Ubuntu 20.04, Cuda 11.3, python3.10
#
################################################################################
SHELL ["/bin/bash", "-c"]
# apt update
RUN apt update
# install python 3.10
RUN apt install -y software-properties-common && \
add-apt-repository -y ppa:deadsnakes/ppa && \
apt install -y python3.10 && \
apt-get install -y \
python3-pip python3.10-dev python3.10-distutils python3.10-venv && \
cd /usr/local/bin && \
ln -s /usr/bin/python3.10 python
</code></pre>
<p>Then, in a docker compose file, I have a service defined like this:</p>
<pre><code>services:
app:
runtime: nvidia
build:
context: ./app
image: ubuntu-cuda
</code></pre>
<p>Then when I try to pip install this version of pytorch specifically (from inside the container):</p>
<pre><code>pip install --upgrade pip
pip install https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl
</code></pre>
<p>I get the error:</p>
<pre><code>ERROR: torch-1.12.1+cu113-cp310-cp310-linux_x86_64.whl is not a supported wheel on this platform.
</code></pre>
<p>I checked that python3.10 is successfully installed in the container:</p>
<p><code>python --version</code></p>
<pre><code>Python 3.10.13
</code></pre>
<p>But I can't run nvidia-smi (from inside the container) to check for some reason ... But when I do:</p>
<pre><code>docker run --rm --runtime=nvidia --gpus all ubuntu-cuda nvidia-smi
</code></pre>
<p>I have the output:</p>
<pre><code>Tue Sep 5 12:03:02 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03 Driver Version: 470.161.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
</code></pre>
<p>Then, <code>uname -a</code> from inside the container:</p>
<pre><code>Linux 0fe451225d84 5.4.0-135-generic #152-Ubuntu SMP Wed Nov 23 20:19:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>So it's not the wrong Python installation, and it's not the wrong platform ... what could be the problem?</p>
<p>I can run <code>pip install https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-linux_x86_64.whl</code> fine on the host, which has a similar system configuration.</p>
<p>On the host:</p>
<p><code>uname -a</code></p>
<pre><code>Linux opus 5.4.0-135-generic #152-Ubuntu SMP Wed Nov 23 20:19:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p><code>nvidia-smi</code></p>
<pre><code>Tue Sep 5 12:08:22 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03 Driver Version: 470.161.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
</code></pre>
<p><code>python --version</code></p>
<pre><code>Python 3.10.9
</code></pre>
| <python><docker><pip><pytorch> | 2023-09-05 04:12:51 | 1 | 603 | Hajar Razip |
77,041,555 | 11,277,108 | How to re-scrape a page if there is an error in parse method? | <p>The first action in my <code>parse</code> method is to extract a dictionary from a JSON string contained in the HTML. I've noticed that I sometimes get an error as the web page doesn't display correctly and thus doesn't contain the JSON string. If I rerun the spider then the same page displays fine and on it carries on until another random JSON error.</p>
<p>I'd like to check that I've got the error handling correct:</p>
<pre><code>def parse(self, response):
json_str = response.xpath("<xpath_to_json>").get()
try:
items = json.loads(json_str)["items"]
    except json.JSONDecodeError:
return response.follow(url=response.url, callback=self.parse)
for i in items:
# do stuff
</code></pre>
<p>I'm pretty sure this will work OK, but I wanted to check a couple of things:</p>
<ol>
<li>If this hits a 'genuinely bad' page where there is no JSON will the spider get stuck in a loop or does scrapy give up after trying a given URL a certain number of times?</li>
<li>I've used a <code>return</code> instead of a <code>yield</code> because I don't want to continue running the method. Is this ok?</li>
</ol>
<p>Any other comments are welcome too!!</p>
| <python><scrapy> | 2023-09-05 03:15:55 | 1 | 1,121 | Jossy |
77,041,384 | 3,247,006 | How to test Django with multiple browsers together with Selenium to reduce the code? | <p>I installed <a href="https://github.com/pytest-dev/pytest-django" rel="nofollow noreferrer">pytest-django</a> and <a href="https://github.com/SeleniumHQ/selenium" rel="nofollow noreferrer">selenium</a> in Django as shown below.</p>
<pre class="lang-none prettyprint-override"><code>pip install pytest-django && pip install selenium
</code></pre>
<p>Then, I created <a href="https://docs.pytest.org/en/7.3.x/reference/customize.html#pytest-ini" rel="nofollow noreferrer">pytest.ini</a>, <code>test_1.py</code> and <code>__init__.py</code>(Empty file) in <code>tests</code> folder as shown below:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| └-settings.py
|-my_app1
|-my_app2
|-pytest.ini
└-tests
|-__init__.py
└-test_1.py
</code></pre>
<p>Then, I set the code below to <code>pytest.ini</code>:</p>
<pre class="lang-ini prettyprint-override"><code># "pytest.ini"
[pytest]
DJANGO_SETTINGS_MODULE = core.settings
testpaths = tests
</code></pre>
<p>Finally, I could test the Django admin with Google Chrome, Microsoft Edge and Firefox separately, but not together, with Selenium as shown below, so there is a lot of duplicated code:</p>
<pre class="lang-py prettyprint-override"><code># "tests/test_1.py"
import pytest
from selenium import webdriver
from django.test import LiveServerTestCase
""" Google Chrome Test Begin """
@pytest.fixture(scope="class")
def chrome_driver_init(request):
chrome_driver = webdriver.Chrome()
request.cls.driver = chrome_driver
yield
chrome_driver.close()
@pytest.mark.usefixtures("chrome_driver_init")
class Test_URL_Chrome(LiveServerTestCase):
def test_open_url(self):
self.driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in self.driver.title
""" Google Chrome Test End """
""" Microsoft Edge Test Begin """
@pytest.fixture(scope="class")
def edge_driver_init(request):
edge_driver = webdriver.Edge()
request.cls.driver = edge_driver
yield
edge_driver.close()
@pytest.mark.usefixtures("edge_driver_init")
class Test_URL_Edge(LiveServerTestCase):
def test_open_url(self):
self.driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in self.driver.title
""" Microsoft Edge Test End """
""" Firefox Test Begin """
@pytest.fixture(scope="class")
def firefox_driver_init(request):
firefox_driver = webdriver.Firefox()
request.cls.driver = firefox_driver
yield
firefox_driver.close()
@pytest.mark.usefixtures("firefox_driver_init")
class Test_URL_Firefox(LiveServerTestCase):
def test_open_url(self):
self.driver.get(("%s%s" % (self.live_server_url, "/admin/")))
assert "Log in | Django site admin" in self.driver.title
""" Firefox Test End """
</code></pre>
<p>So, how can I test Django with multiple browsers together with Selenium to reduce the code?</p>
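<p>The direction I have been experimenting with is a single parametrized fixture, so one test class runs once per browser. A sketch (the names are my own, and <code>selenium</code> is imported lazily so test collection works on machines without the drivers):</p>

```python
import pytest

BROWSERS = ["chrome", "edge", "firefox"]

@pytest.fixture(params=BROWSERS, scope="class")
def any_driver_init(request):
    from selenium import webdriver  # lazy import; drivers may not exist everywhere
    factories = {
        "chrome": webdriver.Chrome,
        "edge": webdriver.Edge,
        "firefox": webdriver.Firefox,
    }
    driver = factories[request.param]()
    request.cls.driver = driver
    yield
    driver.quit()
```

<p>One caveat I ran into while reading: fixture parametrization is not supported inside <code>unittest.TestCase</code> subclasses, so this would pair with a plain test class plus pytest-django's <code>live_server</code> fixture rather than <code>LiveServerTestCase</code>.</p>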
| <python><django><selenium-webdriver><browser><pytest-django> | 2023-09-05 02:12:16 | 1 | 42,516 | Super Kai - Kazuya Ito |
77,041,323 | 1,471,980 | how do you apply a function to create a column and insert value in pandas | <p>I have a data frame like this:</p>
<pre><code>df
Server Model
Server1dc xp2
server2dc cc4
server4uat dd4
</code></pre>
<p>I have this function to determine the number of slots given the model:</p>
<pre><code>def total_slot(row):
if "xp" in row["Model"]:
return 5
if "cc" in row["Model"]:
return 10
return 0
</code></pre>
<p>I need to apply this total_slot function to my data frame (df) and create a new column called total_slots:</p>
<pre><code>df['total_slots']=df.apply(total_slot, axis=1)
</code></pre>
<p>It works but I get this working message:</p>
<p>A value is trying to be set on a copy of a slice from a dataframe.</p>
<p>How else I could do this?</p>
<p>resulting df needs to look like this:</p>
<pre><code>Server Model total_slots
Server1dc xp2 5
server2dc cc4 10
server4uat dd4 0
</code></pre>
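<p>The same column can also be built without <code>apply</code> at all, e.g. with boolean masks and <code>.loc</code>; this is vectorized and, as long as <code>df</code> is not itself a slice of another frame, avoids the copy warning. A sketch with the sample rows:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Server": ["Server1dc", "server2dc", "server4uat"],
    "Model": ["xp2", "cc4", "dd4"],
})

df["total_slots"] = 0  # default for models that match nothing
df.loc[df["Model"].str.contains("xp"), "total_slots"] = 5
df.loc[df["Model"].str.contains("cc"), "total_slots"] = 10

print(df["total_slots"].tolist())  # [5, 10, 0]
```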
| <python><pandas> | 2023-09-05 01:52:13 | 2 | 10,714 | user1471980 |
77,041,241 | 10,994,166 | Shuffle array of arrays pyspark columns | <p>I have a pyspark column like this:</p>
<pre class="lang-none prettyprint-override"><code> gm_array
[[1, 4, 6,...], [2, 7, 8,...], [3, 5, 7,...],...]
[[8, 11, 9,...], [7, 2, 6,...], [10, 9, 8,...],...]
[[90, 13, 67,...], [55, 6, 98,...], [1, 6, 2,...],...]
.
.
</code></pre>
<p>Now I want to shuffle this outer array and also each array inside it, and then I want to pick the first element from each of the first 5 inner arrays.</p>
<p>1st output, the randomly shuffled arrays:</p>
<pre class="lang-none prettyprint-override"><code> gm_array
[[19, 6, 1,...], [9, 80, 5,...], [30, 7, 3,...],...]
[[7, 9, 11,...], [6, 8, 7,...], [18, 7, 10,...],...]
[[90, 1, 7,...], [8, 9, 81,...], [6, 5, 1,...],...]
.
.
</code></pre>
<p>2nd output, the first element of each of the first 5 arrays inside the main array:</p>
<pre class="lang-none prettyprint-override"><code>[19, 9, 30,...]
[7, 6, 18,...]
[90, 8, 6,...]
.
.
</code></pre>
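<p>To make the target transform concrete, here it is on plain Python lists (stdlib only, seeded for repeatability); the open question is how to express the same thing per row over a pyspark array column:</p>

```python
import random

rng = random.Random(0)
row = [[1, 4, 6], [2, 7, 8], [3, 5, 7], [8, 11, 9], [7, 2, 6], [10, 9, 8]]

# shuffle the outer list, and each inner list, without mutating the input
shuffled = [rng.sample(inner, len(inner)) for inner in rng.sample(row, len(row))]

# then take the first element of each of the first 5 inner arrays
firsts = [inner[0] for inner in shuffled[:5]]

assert len(firsts) == 5
all_values = {x for inner in row for x in inner}
assert all(f in all_values for f in firsts)
```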
| <python><apache-spark><pyspark><apache-spark-sql><shuffle> | 2023-09-05 01:17:01 | 2 | 923 | Chris_007 |
77,041,235 | 2,954,547 | Functional difference between `coverage run -m pytest` and `pytest --cov=...`? | <p>The Coverage tool supports generating code coverage data from Pytest tests with <code>coverage run -m pytest ...</code>. However, there is also the Pytest-Cov plugin, which invokes Coverage and generates coverage data by adding the <code>--cov=</code> option to Pytest.</p>
<p>However the Pytest-Cov documentation doesn't seem to explain anywhere how this differs from just using <code>coverage run</code>. Is there any practical difference, or is it just a matter of the options/configuration choices that are available?</p>
| <python><pytest><coverage.py><pytest-cov> | 2023-09-05 01:12:00 | 1 | 14,083 | shadowtalker |
77,041,143 | 15,587,184 | Best Approach to Split, Explode, and Tidy Data Using Regex in Python and Pandas | <p>I have a dataset that requires splitting, exploding, and tidying using regular expressions (regex) in Python and Pandas. The dataset consists of logs from multiple users sent through an old machine to a server API. Each cell may contain multiple messages, and my goal is to transform the data into a structured and tidy format.</p>
<p>Here's a sample of the dataset:</p>
<pre><code>data = {
'text_plain': [
"5:57:11 H2045: Estatus OK updated (19:48:34) Mark P.: No Defects found on parcel",
"11:04:38 Jill : Travel on Time 2:11:30 YHXO: Wheater conds OK",
"6:53:07 Jill : Stable Props 22:38:15 Carl V : OK Status 6:15:34 IUJO-65: Door Open",
"18:44:38 Van UHJ: Spider on site Alert",
"/10:37:43/ H2046 : Movie Purchase Rejected",
"10:33:46 Mark P.: Alert by Super User overwrite 21:55:22 Jill: push sent 6:54:41 YHXO: pull received",
"23:20:04 Jill : Windows Closed",
"5:16:58 Carl V: Is someone on the Front door?",
"(17:11:49) IUJO-66 : No Response on Deck (5:10:43) Van UHJ : Flights delay 8:34:08 H2047: Buy Concert Tickets 9:05:42 Mark P.: Gen. OK",
"7:00:15 Jill : Status not ok updated 21:22:34 YHXO: Front desk clear"
],
'id': [1,2,3,4,5,6,7,8,9,10]
}
</code></pre>
<p>As you can see, the data needs to be split based on the pseudo-timestamp pattern in the "<code>text_plain</code>" column (thankfully always in 24-hour format), followed by a username and a message. The timestamp can be enclosed in parentheses, sometimes in slashes, and sometimes not enclosed at all, <strong>BUT</strong> the recurring pattern that can always be used to split the data is</p>
<pre><code>r'[(/]?(\d{1,2}:\d{2}:\d{2})[/)]?'
</code></pre>
<p>My desired output will be:</p>
<pre><code>id timestamp user msg
1 5:57:11 H2045 Estatus OK updated
1 19:48:34 Mark P. No Defects found on parcel
2 11:04:38 Jill Travel on Time
2 2:11:30 YHXO Weather conds OK
3 6:53:07 Jill Stable Props
3 22:38:15 Carl V OK Status
3 6:15:34 IUJO-65 Door Open
4 18:44:38 Van UHJ Spider on site Alert
5 10:37:43 H2046 Movie Purchase Rejected
6 10:33:46 Mark P. Alert by Super User overwrite
6 21:55:22 Jill Push sent
6 6:54:41 YHXO Pull received
7 23:20:04 Jill Windows Closed
8 5:16:58 Carl V Is someone on the Front door?
9 17:11:49 IUJO-66 No Response on Deck
9 5:10:43 Van UHJ Flights delay
9 8:34:08 H2047 Buy Concert Tickets
9 9:05:42 Mark P. Gen. OK
10 7:00:15 Jill Status not ok updated
10 21:22:34 YHXO Front desk clear
</code></pre>
<p>My thought process tells me that I should</p>
<p>first split the col "<code>text_Plain</code>" by my regex pattern (without REMOVING IT) so that I can get something like this:</p>
<pre><code>['5:57:11 H2045: Estatus OK updated', '(19:48:34) Mark P.: No Defects found on parcel']
['11:04:38 Jill : Travel on Time', '2:11:30 YHXO: Wheater conds OK', '6:53:07 Jill : Stable Props']
['22:38:15 Carl V : OK Status',' 6:15:34 IUJO-65: Door Open', '18:44:38 Van UHJ: Spider on site Alert', '/10:37:43/ H2046 : Movie Purchase Rejected']
</code></pre>
<p>To then extract my timestamp, user, msg and get the id (afterwards exploding my plain_text col)</p>
<p>But here is the catch: I have to deal with 5-20 thousand log entries a day, and I need code that is actually faster or more efficient than what I'm using. I'm new to programming, and I have read that regex can be very slow and inefficient, so I am keen to learn a better or faster way.</p>
<p>This is my workaround at the moment but is taking WAY TOO long for my PC to handle:</p>
<pre><code>pattern = '(?<=[(/])(\d{1,2}:\d{2}:\d{2})(?=[/)])'
df['new_text'] = df['text_plain'].str.split(pattern)
df['new_text'] = df['new_text'].apply(lambda x: [s for s in x if s.strip() != ''])
df = df.explode('new_text')
</code></pre>
<p>But it is failing: it does not keep the pattern after splitting, so data is lost along the way. It also uses <code>.apply</code>, and I have read that we shouldn't combine that with regex because it makes the operation much slower; if you can clarify that too, it would be awesome.</p>
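<p>One direction I am evaluating is a single <code>str.extractall</code> with named groups, which splits, captures and explodes in one vectorized pass with no <code>apply</code>; a lookahead ends each message at the next timestamp. A sketch on two of the sample rows (the pattern would need tuning if a message can itself contain a time):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "text_plain": [
        "5:57:11 H2045: Estatus OK updated (19:48:34) Mark P.: No Defects found on parcel",
        "11:04:38 Jill : Travel on Time 2:11:30 YHXO: Wheater conds OK",
    ],
})

pattern = (
    r"[(/]?(?P<timestamp>\d{1,2}:\d{2}:\d{2})[/)]?\s*"   # optionally enclosed time
    r"(?P<user>[^:]+?)\s*:\s*"                           # user name up to the colon
    r"(?P<msg>.*?)(?=\s*[(/]?\d{1,2}:\d{2}:\d{2}|\s*$)"  # message up to next time
)

tidy = (
    df.set_index("id")["text_plain"]
      .str.extractall(pattern)
      .reset_index(level="match", drop=True)
      .reset_index()
)

print(tidy["user"].tolist())  # ['H2045', 'Mark P.', 'Jill', 'YHXO']
```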
| <python><pandas><regex> | 2023-09-05 00:32:10 | 1 | 809 | R_Student |
77,041,133 | 4,465,454 | Mixing mapped and calculated columns in SQLAlchemy 2.0 ORM | <p>I am trying to create an SQLAlchemy 2.0 ORM mapping that allows me to mix Mapped columns that exist in my database with "calculated" columns from a query.</p>
<p>Assume I have two tables product_catalog and amount_catalog. The product_catalog lists a series of products I am selling. The amount_catalog lists the volumes options for each product. The two tables are connected by a foreign key column called product_catalog_id.</p>
<pre><code>product_catalog
| id | name |
| -- | ------------- |
| 1 | Apple Juice |
| 2 | Cherry Juice |
amount_catalog
| id | amount | unit | product_catalog_id |
| -- | ------ | ----------- | ------------------ |
| 1 | 500 | milliliters | 1 |
| 2 | 1000 | milliliters | 1 |
| 3 | 500 | milliliters | 2 |
</code></pre>
<p>What I am trying to do is create a class for product_catalog that also contains a calculated <em>amount_count</em> column in my python backend.</p>
<pre><code>class ProductCatalog(IdBase):
"""Defines the details of the product catalog table in our SQL database."""
__tablename__ = "product_catalog"
id: Mapped[Integer] = mapped_column(primary_key=True, autoincrement=True)
name: Mapped[String]
amount_count:[Integer] # please note this is not mapped
</code></pre>
<p>My plan is to dynamically generate the <em>amount_count</em> column in a query like the following</p>
<pre><code>def get_products(self) -> list[ProductCatalog]:
    amount_counts: Subquery = (
        select(
            AmountCatalog.product_catalog_id,
            func.count(AmountCatalog.id).label("amount_count"),
        )
        .select_from(AmountCatalog)
        .group_by(AmountCatalog.product_catalog_id)
        .subquery()
    )

    result = session.execute(
        select(
            ProductCatalog.id,
            ProductCatalog.name,
            amount_counts.c.amount_count,
        )
        .select_from(ProductCatalog)
        .outerjoin(
            amount_counts,
            ProductCatalog.id == amount_counts.c.product_catalog_id,
        )
    )

    return [ProductCatalog(**row._mapping) for row in result]  # type: ignore
</code></pre>
<p>Unfortunately pytest is now throwing the following error at me:</p>
<blockquote>
<p>sqlalchemy.exc.ArgumentError: Type annotation for "ProductCatalog.amount_count" can't be correctly interpreted for Annotated Declarative Table form. ORM annotations should normally make use of the <code>Mapped[]</code> generic type, or other ORM-compatible generic type, as a container for the actual type, which indicates the intent that the attribute is mapped. Class variables that are not intended to be mapped by the ORM should use ClassVar[]. To allow Annotated Declarative to disregard legacy annotations which don't use Mapped[] to pass, set "<strong>allow_unmapped</strong> = True" on the class or a superclass this class. (Background on this error at: <a href="https://sqlalche.me/e/20/zlpr" rel="nofollow noreferrer">https://sqlalche.me/e/20/zlpr</a>)
Check Python Test Logs for more details.</p>
</blockquote>
<p><strong>Any thoughts on how to best solve the above situation?</strong></p>
| <python><sqlalchemy> | 2023-09-05 00:27:00 | 1 | 1,642 | Martin Reindl |
77,040,978 | 21,575,627 | Python class variable (immutable) not properly updating | <p>Here is a code snippet:</p>
<pre><code> class h:
v = 0
def __init__(self):
print(self.v)
def add(self):
self.v += 1
def show(self):
print(self.v)
>>> h.v
0
>>> a = h()
0
>>> a.add()
>>> a.show()
1
>>> h.v
0
<p>In other words, when the instance of <code>h</code> updates the class variable, they no longer share it. Is this because the type of <code>int</code> is immutable? How can you share something like an <code>int</code> without the usage of a global?</p>
<p>Edit for clarity: <code>h.v</code> did not get updated with <code>a.add()</code>. I had expected it to.</p>
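<p><em>For reference, a minimal runnable sketch (plain Python; <code>H</code> and the method names are illustrative) of what happens here: <code>self.v += 1</code> reads the class attribute, but the assignment creates a new instance attribute that shadows it. Writing through the class keeps the value shared:</em></p>

```python
class H:
    v = 0  # class attribute, shared by all instances

    def add_shadowing(self):
        # reads H.v, but the assignment creates a NEW instance attribute
        self.v += 1

    def add_shared(self):
        # writing through the class keeps the counter truly shared
        type(self).v += 1

a = H()
a.add_shadowing()
print(a.v, H.v)  # instance attribute is 1, class attribute still 0
b = H()
b.add_shared()
print(H.v, b.v)  # class attribute is now 1, visible to every instance
```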
| <python> | 2023-09-04 23:06:27 | 0 | 1,279 | user129393192 |
77,040,944 | 11,922,765 | Python Dataframe Normalization ValueError: Columns must be same length as key | <p>I am trying to create a function for normalizing a given dataframe for the requested frequency.</p>
<p><strong>code:</strong></p>
<pre><code>import numpy as np
import pandas as pd
def timeseries_dataframe_normalized(df, normalization_freq = 'complete'):
"""
Input:
df : dataframe
input dataframe
normalization_freq : string
'daily', 'weekly', 'monthly','quarterly','yearly','complete' (default)
Return: normalized dataframe
"""
# auxiliary dataframe
adf = df.copy()
# convert columns to float
# Ref: https://stackoverflow.com/questions/15891038/change-column-type-in-pandas
adf = adf.astype(float)
# normalized columns
nor_cols = adf.columns
# add suffix to columns and create new names for maximum columns
max_cols = adf.add_suffix('_max').columns
# initialize maximum columns
adf.loc[:,max_cols] = np.nan
# check the requested frequency
if normalization_freq =='complete':
adf[max_cols] = adf[nor_cols].max()
# compute and return the normalized dataframe
print(adf[nor_cols])
print(adf[max_cols])
adf[nor_cols] = adf[nor_cols]/adf[max_cols]
# return the normalized dataframe
return adf[nor_cols]
# Example
df2 = pd.DataFrame(data={'A':[20,10,30],'B':[1,2,3]})
timeseries_dataframe_normalized(df2)
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code>df2 =
A B
0 0.666667 0.333333
1 0.333333 0.666667
2 1.000000 1.000000
</code></pre>
<p><strong>Present output:</strong></p>
<p>I am surprised to get the following error. However, when I compute <code>df2/df2.max()</code> I get the expected output, but this function gives me an error.</p>
<pre><code>ValueError: Columns must be same length as key
</code></pre>
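<p>For what it's worth, a minimal sketch of one likely explanation: <code>adf[nor_cols] / adf[max_cols]</code> aligns on column <em>labels</em>, so "A" never matches "A_max" and the division yields a wider all-NaN frame that cannot be assigned back to the two-column key (hence the ValueError). Dividing by the raw NumPy array sidesteps label alignment (variable names follow the question's code):</p>

```python
import pandas as pd

df2 = pd.DataFrame({"A": [20, 10, 30], "B": [1, 2, 3]})

adf = df2.astype(float).copy()
nor_cols = df2.columns
for col in nor_cols:
    adf[col + "_max"] = adf[col].max()  # broadcast each column's maximum
max_cols = [col + "_max" for col in nor_cols]

# adf[nor_cols] / adf[max_cols] would align "A" against "A_max" (all NaN);
# dividing by the underlying array is purely positional instead.
adf[nor_cols] = adf[nor_cols] / adf[max_cols].to_numpy()
print(adf[nor_cols])
```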
| <python><pandas><dataframe><numpy><normalization> | 2023-09-04 22:53:03 | 1 | 4,702 | Mainland |
77,040,912 | 17,653,423 | How to flat and merge a list of dictionaries in Python? | <p>How can I convert a list of dictionaries into a single dictionary? Given:</p>
<pre><code>data = [
{
"name" : "name1",
"type" : "type1",
},
{
"name" : "name2",
"type" : "type2",
}
]
</code></pre>
<p>I want to turn the above into a single dictionary:</p>
<pre><code>data = {
"name" : "name1 | name2",
"type" : "type1 | type2"
}
</code></pre>
<p>Note: I am getting that list from an API request, so there can be more than 2 dictionaries in the list.</p>
<p>I know how to do it using Pandas; in this case I want to measure which approach is faster.</p>
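<p>For comparison, a plain-Python sketch (stdlib only) that collects the values per key and joins them:</p>

```python
from collections import defaultdict

data = [
    {"name": "name1", "type": "type1"},
    {"name": "name2", "type": "type2"},
]

# gather every value under its key, in list order
merged = defaultdict(list)
for record in data:
    for key, value in record.items():
        merged[key].append(value)

# join each key's values with " | "
result = {key: " | ".join(values) for key, values in merged.items()}
print(result)  # {'name': 'name1 | name2', 'type': 'type1 | type2'}
```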
| <python><dictionary> | 2023-09-04 22:38:44 | 4 | 391 | Luiz |
77,040,907 | 5,176,158 | Selenium "ERR_CONNECTION_RESET" using headless on Heroku | <p>I have a Django + Selenium app I'm trying to deploy to Heroku. I have a management command I call that activates a Selenium Webdriver to use.</p>
<p>Whenever I run it locally it's totally fine (without <code>headless</code>) however upon deploying to Heroku no matter what I try I just get:</p>
<pre><code>Message: unknown error: net::ERR_CONNECTION_RESET
(Session info: headless chrome=116.0.5845.140)
</code></pre>
<p>I instantiate my webdriver as follows:</p>
<pre class="lang-py prettyprint-override"><code>...
logger.info("Starting selenium webdriver...")
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("--enable-logging")
options.add_argument("--incognito")
options.add_argument("--ignore-certificate-errors")
options.add_argument("--disable-extensions")
options.add_argument("--dns-prefetch-disable")
self.webdriver = webdriver.Chrome(
service=Service(ChromeDriverManager().install()), options=options
)
</code></pre>
<p>I think the issue is the <code>headless</code> argument; adding it locally is what breaks things, at least.</p>
<p>This is running on Heroku so I need the <code>headless</code> to work.</p>
<p>I'm really stumped. Any help/advice is appreciated - thank you!</p>
| <python><django><selenium-webdriver><heroku> | 2023-09-04 22:36:25 | 2 | 517 | Daniel N. |
77,040,889 | 19,238,204 | Heat Equation Plot: IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices | <p>I get the code from here:</p>
<p><a href="https://stackoverflow.com/questions/49463985/solving-heat-equation-with-python-numpy">Solving heat equation with python (NumPy)</a></p>
<p>But I get an error when running this:</p>
<pre><code># solve the heat equation for a metal rod as one end is kept at 100 °C
# and the other at 0 °C
# u(0,t) = 100 (x=0)
# u(L,t) = 0 (x=L)
import numpy as np
import matplotlib.pyplot as plt
dt = 0.0005
dy = 0.0005
k = 10**(-4)
y_max = 0.04
t_max = 1
T0 = 100
def FTCS(dt,dy,t_max,y_max,k,T0):
s = k*dt/dy**2
y = np.arange(0,y_max+dy,dy)
t = np.arange(0,t_max+dt,dt)
r = len(t)
c = len(y)
T = np.zeros([r,c])
T[:,0] = T0
for n in range(0,r-1):
for j in range(1,c-1):
T[n+1,j] = T[n,j] + s*(T[n,j-1] - 2*T[n,j] + T[n,j+1])
return y,T,r,s
y,T,r,s = FTCS(dt,dy,t_max,y_max,k,T0)
plot_times = np.arange(0.01,1.0,0.01)
for t in plot_times:
plt.plot(y,T[t/dt,:])
</code></pre>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[36], line 25
23 plot_times = np.arange(0.01,1.0,0.01)
24 for t in plot_times:
---> 25 plt.plot(y,T[t/dt,:])
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>How to solve this?</p>
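<p>A minimal sketch of the apparent fix: <code>t/dt</code> is a float (and can carry floating-point error), so it cannot be used as an array index directly; rounding it to the nearest integer step, e.g. <code>T[round(t/dt), :]</code>, makes it a valid index:</p>

```python
import numpy as np

dt = 0.0005
t = 0.01
# t/dt is a float (e.g. 19.999... or 20.000...04 due to FP error), so it
# cannot index an array; round to the nearest integer time step instead.
idx = round(t / dt)
T = np.zeros((2001, 81))  # r x c grid as in the question's FTCS setup
row = T[idx, :]           # works: idx is a plain int
print(idx, row.shape)
```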
| <python><numpy><matplotlib> | 2023-09-04 22:28:14 | 1 | 435 | Freya the Goddess |
77,040,787 | 9,915,864 | Tkinter: subpanel is not showing up in the main window | <p><em>Closing this question, as Part 1 was my primary problem at the time.</em></p>
<p>I'm writing a small app to show menu buttons on a left side panel and the functionality on the right.</p>
<p><strong>Part 1:</strong> The main app window shows up fine (first image) and the AppDownloader() works by itself (second image). The goal is for the AppDownloader() frame to show up in the right-hand panel. <strong>ANSWER:</strong> The call to draw the panel wasn't in the <code>__init__</code>. Adding the existing <code>draw_downloader_pane()</code> method to <code>AppDownloader.__init__()</code> fixed this part of the issue. The <code>app_downloader.py</code> code is at the bottom. <em>I'm still working on the Download pane widget placements, separate from this.</em></p>
<p><strong>Part 2:</strong> However, it's still not switching between panes, even after double-checking that the <code>AppTransform.__init__()</code> method didn't have the same problem as Part 1.</p>
<p>I've checked my <code>master</code> references to make sure they're calling the correct frames. I think the <code>tkraise()</code> is acting on the right objects. When I click the buttons in the main app window, I get the following debugging info, but it doesn't switch the pages:</p>
<p><a href="https://i.sstatic.net/ZoksL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZoksL.png" alt="enter image description here" /></a></p>
<p><em>UPDATE: revised AppDownloader() code with Part 1 solution</em></p>
<p><code>app_main.py</code></p>
<pre><code>import tkinter as tk
from tkinter import Misc
import customtkinter as ctk
## my modules
from app_downloader import AppDownloader
from app_transform import AppTransform
from app_toolkit import CTK_DARK_MODE, CTK_COLOR_SCHEME, PADX, PADY, COLOR_THEME
# log formatting
from my_logging import setup_logging
logger = setup_logging('app_main.py')
ctk.set_appearance_mode(CTK_DARK_MODE)
ctk.set_default_color_theme(CTK_COLOR_SCHEME)
class AppMain(ctk.CTk):
def __init__(self, *args, **kwargs):
ctk.CTk.__init__(self, *args, **kwargs)
WIDTH = 680
HEIGHT = 600
MENU_WIDTH = 280
self.title("Switch Frames")
self.geometry(f'{WIDTH}x{HEIGHT}')
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
self.left_master = ctk.CTkFrame(master=self, **COLOR_THEME, width=MENU_WIDTH, corner_radius=0)
self.left_master.pack(side=tk.LEFT, fill=tk.Y, expand=False, padx=PADX, pady=PADY)
self.right_master = ctk.CTkFrame(master=self, **COLOR_THEME, corner_radius=0)
self.right_master.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
def draw_left_menu(self):
row_i=1
menu_heading = ctk.CTkLabel(master=self.left_master, text="Activity Menu", font=("Roboto Medium", -16))
menu_heading.grid(row=row_i, column=0, padx=PADX, pady=PADY)
row_i += 1
BTN_PADX = 9
BTN_PADY = 8
BTN_IPADY = 5
self.menu_buttons = [
{'btn_name': 'download', 'btn_label': 'Download Shelf', 'frame_name': 'AppDownloader'},
{'btn_name': 'transform', 'btn_label': 'Transform Data', 'frame_name': 'AppTransform'},
{'btn_name': 'updates', 'btn_label': 'Find Updates (Unimplemented)', 'frame_name': None},
{'btn_name': 'ids', 'btn_label': 'Find Book IDs (Unimplemented)', 'frame_name': None},
{'btn_name': 'save', 'btn_label': 'Save Results (Unimplemented)', 'frame_name': None},
{'btn_name': 'quit', 'btn_label': 'Quit App', 'frame_name': None},
]
for data in self.menu_buttons:
str_var = tk.StringVar()
str_var.set(data['btn_label'])
btn = ctk.CTkButton(self.left_master, textvariable=str_var,
corner_radius=6, width=130)
btn._text_label.configure(wraplength=140)
data['btn_object'] = btn
if 'quit' in data['btn_label']:
btn.grid(row=row_i, column=0, padx=BTN_PADX, pady=(40,0), ipady=BTN_IPADY)
else:
btn.grid(row=row_i, column=0, padx=BTN_PADX, pady=BTN_PADY, ipady=BTN_IPADY)
row_i += 1
def draw_right_container(self):
self.frames = {}
self.right_container = ctk.CTkFrame(self.right_master)
self.right_container.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
for F in (AppDownloader, AppTransform):
page_name = F.__name__
frame = F(parent=self.right_container, controller=self)
self.frames[page_name] = frame
frame.pack(side="top", fill=tk.BOTH, expand=True)
def show_frame(self, frame_name):
frame = self.frames[frame_name]
## debug statement
logger.debug(f"{frame}")
frame.tkraise()
def configure_buttons(self):
self.menu_buttons[0]['btn_object'].configure(command=lambda:self.show_frame('AppDownloader'))
self.menu_buttons[1]['btn_object'].configure(command=lambda:self.show_frame('AppTransform'))
def start_app(self):
self.draw_left_menu()
self.draw_right_container()
self.configure_buttons()
a = AppMain()
a.start_app()
a.mainloop()
</code></pre>
<p>Here's the corrected <code>AppDownloader</code> setup in <code>app_downloader.py</code>:</p>
<pre><code>class AppDownloader(ctk.CTkFrame, InitValues):
def __init__(self, parent, controller):
ctk.CTkFrame.__init__(self, parent, **COLOR_THEME)
InitValues.__init__(self)
self.parent = parent
self.controller = controller
self.grid_columnconfigure(1, weight=0, uniform='auto')
# init code here
self.draw_downloader_pane()
# functionality methods snipped
def draw_downloader_pane(self):
self.frame_body = ctk.CTkFrame(master=self.parent, **COLOR_THEME)
banner_frame = ctk.CTkFrame(master=self.parent, **COLOR_THEME)
banner_frame.pack(side=tk.TOP, fill=tk.X, expand=True, padx=PADX, pady=PADY)
banner = ctk.CTkLabel(master=banner_frame, text="Download and Save Files",
**COLOR_THEME)
banner.pack(side=tk.TOP, fill=tk.X, expand=True, padx=PADX, pady=PADY)
self.draw_frame_body()
self.frame_body.pack(side=tk.TOP, fill=tk.BOTH, expand=True, padx=PADX)
</code></pre>
| <python><tkinter> | 2023-09-04 21:53:21 | 1 | 341 | Meghan M. |
77,040,573 | 11,001,493 | How to create new column based on mean of specific window (number of rows)? | <p>I have a dataframe like this:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({"ID":[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
"A":[30, 20, 10, 20, 60, 80, 90, 70, 120, 150, 120, 140]})
</code></pre>
<p>I would like to create a new column "B" with the mean of every 4 rows (df["A"]) as a window. And the mean values should be repeated in those 4 rows, but as another column. So the result should be like this:</p>
<pre><code> df
Out[6]:
ID A B
0 1 30 20.0
1 2 20 20.0
2 3 10 20.0
3 4 20 20.0
4 5 60 75.0
5 6 80 75.0
6 7 90 75.0
7 8 70 75.0
8 9 120 132.5
9 10 150 132.5
10 11 120 132.5
11 12 140 132.5
</code></pre>
<p>I tried something like this: <code>df["B"] = df.rolling(window=4)['A'].mean()</code>, but it didn't work as expected. Could anyone help me?</p>
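<p>One possible sketch (assuming consecutive blocks of 4 rows): <code>rolling</code> computes a sliding window, whereas a block mean can be obtained by grouping on the integer-divided row position and broadcasting each group's mean back with <code>transform</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": range(1, 13),
                   "A": [30, 20, 10, 20, 60, 80, 90, 70, 120, 150, 120, 140]})

# Rows 0-3 form block 0, rows 4-7 block 1, etc.; transform broadcasts each
# block's mean back onto every row of that block.
blocks = df.index // 4
df["B"] = df.groupby(blocks)["A"].transform("mean")
print(df)
```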
| <python><pandas><average><scaling> | 2023-09-04 20:59:24 | 1 | 702 | user026 |
77,040,389 | 1,471,980 | how do you merge two data frames on two common columns and fill empty cells | <p>I have 2 data frames, df1 and df2, that I need to join based on the server and row columns (both df1 and df2 have the same server and row values). final_df needs to have data from both df1 and df2; rows where grid = available need to be inserted into final_df with their row and slot numbers. final_df should not have empty cells: once the new grid and slot numbers are inserted, all the other column values should be the same as in the last row for that server. final_df needs to be sorted by server and slot (slot going 1, 2, 3, etc.).</p>
<pre><code>df1
Env location lob grid row server model make slot
Prod USA Market AB3 bc2 Server123 Hitachi dcs 1
Prod USA Market AB3 bc2 Server123 Hitachi dcs 2
Prod USA Market AB3 bc2 Server123 Hitachi dcs 3
Prod USA Market AB3 bc2 Server123 Hitachi dcs 4
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 3
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 4
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 5
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 5
Dev EMEA Ins AB6 bc4 Serverabc IBM abc 6
UAT PAC Retail AB6 bc4 Serverzzz Cisco ust 3
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 4
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 5
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 6
</code></pre>
<p>and</p>
<pre><code>df2
grid row server slot
AB3 bc2 Server123 1
AB3 bc2 Server123 2
AB3 bc2 Server123 3
AB3 bc2 Server123 4
AB6 bc4 Serverabc 3
AB6 bc4 Serverabc 4
AB6 bc4 Serverabc 5
AB6 bc4 Serverabc 6
available bc4 Serverabc 7
available bc4 Serverabc 8
available bc4 Serverabc 9
available bc4 Serverabc 10
AB6 bc4 Serverzzz 3
BB6 bc4 Serverzzz 4
BB6 bc4 Serverzzz 5
BB6 bc4 Serverzzz 6
available bc4 Serverzzz 7
available bc4 Serverzzz 8
</code></pre>
<p>final_df need to be like this:</p>
<pre><code>Env location lob grid row server model make slot
Prod USA Market AB3 bc2 Server123 Hitachi dcs 1
Prod USA Market AB3 bc2 Server123 Hitachi dcs 2
Prod USA Market AB3 bc2 Server123 Hitachi dcs 3
Prod USA Market AB3 bc2 Server123 Hitachi. dcs 4
Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 3
Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 4
Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 5
Dev EMEA Ins. AB6 bc4 Serverabc IBM abc 6
Dev EMEA Ins. available bc4 Serverabc IBM abc 7
Dev EMEA Ins. available bc4 Serverabc IBM abc 8
Dev EMEA Ins. available bc4 Serverabc IBM abc 9
Dev EMEA Ins. available bc4 Serverabc IBM abc 10
UAT PAC Retail AB6 bc4 Serverzzz Cisco ust 3
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 4
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 5
UAT PAC Retail BB6 bc4 Serverzzz Cisco ust 6
UAT PAC Retail available bc4 Serverzzz Cisco ust 7
UAT PAC Retail available bc4 Serverzzz Cisco ust 8
</code></pre>
<p>I tried this:</p>
<pre><code>final_df = pd.merge(df1, df2, on=["server", "row"], how="outer")
</code></pre>
<p>I am getting empty cells. Any ideas how I can merge the 2 data frames without having any empty cells?</p>
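<p>A minimal sketch of one possible pattern, on tiny illustrative frames (not the full tables above): outer-merge on every column the two frames share so the "available" rows survive as new rows, sort, then forward-fill the descriptive columns within each server group so no cells stay empty:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"server": ["s1", "s1"], "row": ["r1", "r1"],
                    "grid": ["AB6", "AB6"], "slot": [3, 4],
                    "Env": ["Dev", "Dev"]})
df2 = pd.DataFrame({"server": ["s1", "s1", "s1"], "row": ["r1", "r1", "r1"],
                    "grid": ["AB6", "AB6", "available"], "slot": [3, 4, 5]})

# Merge on all shared columns; df2-only rows appear with NaN descriptors.
out = df1.merge(df2, on=["server", "row", "grid", "slot"], how="outer")
out = out.sort_values(["server", "slot"]).reset_index(drop=True)
# Fill descriptive columns downward within each server.
out["Env"] = out.groupby("server")["Env"].ffill()
print(out)
```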
| <python><pandas> | 2023-09-04 20:14:05 | 1 | 10,714 | user1471980 |
77,040,373 | 10,431,629 | Pandas Create a column based on first occurrence of keyword strings from another column | <p>I am not sure how challenging the problem is, but this is my problem:</p>
<p>I have a data frame say as below:</p>
<pre><code> df-in
ColA ColB ColC ColD
PA 1 2 c
PB 3 3 d
PA 5 11 x
Ind1 Suma 20 14 z
QA 3 7 a
QB 3 7 b
QC 5 8 c
QD 5 12 c
Ind2 Dat 202 3 y
RA 12 1 a
RB 13 1 v
RC 14 1 q
Ind3 CapT 120 3 t
Other 10 4 x
</code></pre>
<p>So the problem is that I have to create a column (say ColN) which will contain only the indicator values (Ind1, Ind2, or Ind3, as in ColA), with the following rule: ColN will have Ind1 for all rows up to and including the row where ColA is Ind1, Ind2 for the rows between Ind1 and Ind2, and Ind3 for the rows between Ind2 and Ind3.</p>
<p>(Just to emphasize: Ind1, Ind2, and Ind3 can be different strings like 'star', 'planet', 'moon', but they will always remain fixed. They are not going to change.)</p>
<p>So my output df will look like the following:</p>
<pre><code> df-out
ColA ColB ColC ColD ColN
PA 1 2 c Ind1
PB 3 3 d Ind1
PA 5 11 x Ind1
Ind1 Suma 20 14 z Ind1
QA 3 7 a Ind2
QB 3 7 b Ind2
QC 5 8 c Ind2
QD 5 12 c Ind2
Ind2 Dat 202 3 y Ind2
RA 12 1 a Ind3
RB 13 1 v Ind3
RC 14 1 q Ind3
Ind3 CapT 120 3 t Ind3
Other 10 4 x Ind3
</code></pre>
<p>So how can I achieve that? I have tried iterating over ColA and putting it in a list, but nothing seems to work. Any help will be appreciated.</p>
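<p>One hedged sketch of a possible approach (small illustrative frame, not the full data): keep ColA only on the indicator rows, back-fill upward so each block receives its indicator, then forward-fill so trailing rows like "Other" inherit the last indicator:</p>

```python
import pandas as pd

df = pd.DataFrame({"ColA": ["PA", "PB", "Ind1", "QA", "Ind2", "Other"]})

indicators = {"Ind1", "Ind2", "Ind3"}
# where() keeps ColA only on indicator rows (NaN elsewhere); bfill pulls
# each indicator up over its block; ffill covers rows after the last one.
df["ColN"] = df["ColA"].where(df["ColA"].isin(indicators)).bfill().ffill()
print(df)
```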
| <python><pandas><multiple-columns><calculated-columns><indicator> | 2023-09-04 20:10:45 | 3 | 884 | Stan |
77,040,311 | 2,030,532 | PyCharm does not recognize the return type of a decorated function? | <p>I am trying to decorate a method of a python class:</p>
<pre><code>import functools
from typing import Any, Callable, TypeVar
T = TypeVar('T')
def my_decorator(func: Callable[..., T]) -> Callable[..., T]:
@functools.wraps(func)
def wrapped(*args: Any) -> T:
print("decorator")
print("Before the method is called")
result = func(*args)
print("After the method is called")
return result
return wrapped
class Foo:
def __init__(self, param):
super().__init__()
self.param = param
@my_decorator
def bar(self, x: int) -> str:
return str(self.param + x)
if __name__ == '__main__':
a = Foo(2)
res = a.bar(1)
</code></pre>
<p>However, I realized PyCharm does not recognize the return type of the method (the hint/autocomplete does not work). However if I do the following instead</p>
<pre><code> def bar(self, x: int) -> str:
return str(self.param + x)
bar = my_decorator(bar)
</code></pre>
<p>PyCharm recognizes the return type correctly. What is the difference between these two cases? Is there a way to make PyCharm deduce the types in the first (more compact) implementation?</p>
| <python><pycharm><decorator><python-typing> | 2023-09-04 19:55:29 | 0 | 3,874 | motam79 |
77,040,207 | 10,292,330 | Unable to properly capture mp3 stream frames | <p>I can only get streams to play for between 1 and 10ish frames before the player crashes. I've noticed that every time the player crashes, it has far fewer bytes to work with for that frame. This means there is something wrong with my sync logic. It's finding a sync too early. I don't know how to solve this. I've done some research on the mp3 specs and found examples in forums that are completely unrelated to Python. They seem to agree on doing a word search in the 12 most significant bits. I've done that, and it even works ... for less than a second. What am I missing?</p>
<p>This is the entire code. <code>z</code> is tossed in to make sure the loop doesn't run on forever. It's not intended to be permanent.</p>
<pre class="lang-py prettyprint-override"><code>import requests, io, pygame
pygame.init()
pygame.mixer.init()
def stream(url:str):
s = requests.Session()
d = bytearray()
z = 0
with s.get(url, headers=None, stream=True) as resp:
for b in resp.iter_content():
d.append(int.from_bytes(b)) #append this byte
sync = int.from_bytes(d[-2:]) #get last two bytes
#if we have enough bytes and sync
if (len(d) > 12) and (sync == 0xFFF0):
pygame.mixer.music.load(io.BytesIO(bytes(d)))
pygame.mixer.music.play()
pygame.event.wait()
d = bytearray() #clear for next frame
z+=1
if z>100: return
stream('http://198.27.120.235:8450')
</code></pre>
<p><strong>edit</strong></p>
<p>I grabbed the header and inserted it at the beginning of each new frame. It works for longer, but it seems to just quit after a few seconds. It teeters between no error and <code>pygame.error: music_drmp3: corrupt mp3 file (bad stream).</code> How do I reliably "chunk" the mp3 data?</p>
<pre><code>import requests, io, pygame
pygame.init()
pygame.mixer.init()
def stream(url:str):
s = requests.Session()
h = bytearray()
d = bytearray()
with s.get(url, headers=None, stream=True) as resp:
for b in resp.iter_content():
if len(h) < 4: h.append(int.from_bytes(b)) #get header
else : d.append(int.from_bytes(b)) #append this byte
sync = int.from_bytes(d[-2:]) #get last two bytes
#if we have enough bytes and sync
if (L:=len(d)) and (sync == 0xFFF0):
#length, first 4 bytes, last 4 bytes (looking for patterns)
print(L, hex(int.from_bytes(d[:4])), hex(int.from_bytes(d[-4:])))
pygame.mixer.music.load(io.BytesIO(h+d))
pygame.mixer.music.play()
d = bytearray()
stream('http://198.27.120.235:8450')
</code></pre>
<p>As you can see from a print, the size of the data (left column) fluctuates wildly, but all of those lines, up to the last one, played just fine. However, the last one always has significantly fewer bytes than the rest of the frames. This can only mean that I am not finding the sync bits properly. Apparently, the signature of the sync bits can also exist in the audio data. How do I differentiate between the two?</p>
<pre class="lang-none prettyprint-override"><code>7175 0x7f0a747d 0x83fffff0
20035 0x41dffe4d 0xfffffff0
1114 0xcde3fff3 0x1016fff0
5039 0x7f29ff68 0x7ffffff0
9337 0x28358c14 0x71dffff0
2262 0x8bc3caa1 0xfffffff0
18097 0x8894f5a0 0x659ffff0
19282 0xd09ababd 0xfffffff0
8136 0xe75fa462 0xfc41fff0
1336 0xc7ffa8f9 0x77fffff0
485 0x683a5b88 0xcbbbfff0
Traceback (most recent call last):
File "C:\Users\Michael\Documents\SlimPad_ftk\scratch.py", line 25, in <module>
stream('http://198.27.120.235:8450')
File "C:\Users\Michael\Documents\SlimPad_ftk\scratch.py", line 21, in stream
pygame.mixer.music.load(io.BytesIO(h+d))
pygame.error: music_drmp3: corrupt mp3 file (bad stream).
</code></pre>
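<p>For reference, a minimal sketch of the standard sync test (function names are illustrative): an MP3 frame sync is 11 set bits, i.e. a <code>0xFF</code> byte followed by a byte whose top three bits are set; a robust splitter additionally derives the frame length from the header fields so it can skip exactly one frame rather than scanning payload bytes for sync-like patterns:</p>

```python
def is_frame_sync(b1: int, b2: int) -> bool:
    # 11-bit sync: first byte 0xFF, top 3 bits of the second byte set.
    return b1 == 0xFF and (b2 & 0xE0) == 0xE0

def find_sync(data: bytes, start: int = 0) -> int:
    """Return the offset of the next plausible frame header, or -1."""
    for i in range(start, len(data) - 1):
        if is_frame_sync(data[i], data[i + 1]):
            return i
    return -1

# fabricated example bytes: sync pattern 0xFF 0xFB begins at offset 2
buf = bytes([0x00, 0x12, 0xFF, 0xFB, 0x90, 0x64])
print(find_sync(buf))  # 2
```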
| <python><pygame> | 2023-09-04 19:33:42 | 1 | 5,561 | OysterShucker |
77,040,164 | 498,690 | Grouped column diff, based on another column | <p>Here is my dataframe:</p>
<pre><code>metric_name ts count
m_1 1 10
m_2 1 20
m_1 2 12
m_2 2 19
</code></pre>
<p>I would like to get the diff of the <code>count</code> value for each metric_name:</p>
<pre><code>m_1 1 0 # start for m_1
m_2 1 0 # start for m_2
m_1 2 2 # 12 - 10
m_2 2 -1 # 19 - 20
</code></pre>
<p>How can I do it?</p>
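<p>A minimal sketch of one possible approach: group by <code>metric_name</code>, take the per-group <code>diff</code>, and fill the first row of each group with 0:</p>

```python
import pandas as pd

df = pd.DataFrame({"metric_name": ["m_1", "m_2", "m_1", "m_2"],
                   "ts": [1, 1, 2, 2],
                   "count": [10, 20, 12, 19]})

# diff within each metric; the first row of each group is NaN, so fill 0
df["diff"] = df.groupby("metric_name")["count"].diff().fillna(0)
print(df)
```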
| <python><pandas><group-by> | 2023-09-04 19:22:34 | 2 | 3,818 | Capacytron |
77,040,058 | 11,188,140 | Making nested if statements more readable and compact | <p>Consider the 1D numpy array <code>arrRow</code> shown below. It is composed of 1s and 0s, but the important feature is that the row has exactly two 0-islands. The right and left ends of each 0-island are called Edge cells (marked E), and the cells immediately beyond each 0-island are called Border cells (marked B).</p>
<pre><code>indices: [ 0 1 2 3 4 5 6 7 8 9 10 11 12]
B E E B B E E B
arrRow : [ 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1]
</code></pre>
<p>For a given <code>arrRow</code>, I already have code that gives me the index of each Edge cell and each Border cell. In the example shown, the index names and values are:<br>
Edge cells: <code>isl1_start=2, isl1_end=5, isl2_start=9, isl2_end=11</code><br>
Border cells: <code>isl1_start-1=1, isl1_end+1=6, isl2_start-1=8, isl2_end+1=12</code> <br><br>
I need to randomly choose two indices, <code>idx1 and idx2</code>, taken from <strong>an Edge cell of one 0-island</strong> and <strong>a Border cell of the other 0-island.</strong></p>
<p>In the example above, I have 8 choices for [idx1, idx2]: <code>[2,8], [2,12], [5,8], [5,12], [9,1], [9,6], [11,1], [11,6]</code></p>
<p>In rows where the left-most 0-island starts in column 0, there will be just 6 choices. (same if the right-most 0-island ends in the last row cell).</p>
<p>In cases where both the left-most 0-island starts in column 0 <strong>AND</strong> the right-most 0-island ends in the last column, there are just 4 choices.</p>
<p><strong>The QUESTION: Is there a fast and, hopefully, compact way to randomly choose these two indices?</strong></p>
<p>I currently use nested if/elif/else statements that deal with all 8+6+6+4 = 24 possibilities, like this:</p>
<pre><code>if (isl1_start == 0) and (isl2_end_idx == len(arrRow) - 1):
rand = random.randint(0, 3)
if rand == 0:
idx1 = isl1_start
idx2 = isl2_start - 1
elif rand == 1:
idx1 = isl1_end
idx2 = isl2_start - 1
elif rand == 2:
idx1 = isl1_end + 1
idx2 = isl2_start
elif rand == 3:
idx1 = isl1_end + 1
idx2 = isl2_end
elif isl1_start == 0:
rand = random.randint(0, 5)
if rand == 0:
idx1 = isl1_start
idx2 = isl2_start - 1
etc...
</code></pre>
<p>This works well, but I'm hoping for a 'tighter' solution. I'll be extending the ideas described above to cases where I have more than two 0-islands, and the nested 'ifs' could start getting out of hand.</p>
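<p>One hedged sketch of a tighter alternative (the function name is illustrative): build the candidate (edge, border) pairs explicitly and filter out-of-range borders, so the 8/6/4-choice cases collapse into one pair of list comprehensions:</p>

```python
import random

def pick_pair(isl1_start, isl1_end, isl2_start, isl2_end, row_len):
    """Randomly pick (edge of one island, border of the other island)."""
    def in_row(i):
        return 0 <= i < row_len
    isl1_edges = [isl1_start, isl1_end]
    isl2_edges = [isl2_start, isl2_end]
    # borders that fall outside the row are simply dropped
    isl1_borders = [i for i in (isl1_start - 1, isl1_end + 1) if in_row(i)]
    isl2_borders = [i for i in (isl2_start - 1, isl2_end + 1) if in_row(i)]
    pairs = [(e, b) for e in isl1_edges for b in isl2_borders]
    pairs += [(e, b) for e in isl2_edges for b in isl1_borders]
    return random.choice(pairs)

# example row from the question: 0-islands at [2..5] and [9..11], length 13
idx1, idx2 = pick_pair(2, 5, 9, 11, 13)
print(idx1, idx2)
```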
| <python><numpy> | 2023-09-04 18:58:22 | 1 | 746 | user109387 |
77,039,900 | 2,153,235 | Perusing Python help() details and getting them in Spyder's Help pane | <p>I have Spyder running with Spark as per the details in <a href="https://stackoverflow.com/questions/77019827">this posted
question</a> (Annex A and
<code>SpyderSpark.cmd</code> in Annex B). Before accessing Spark
classes, methods, objects, and help, I first had to import the module and
create the top-level object:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("SparkExamples.com").getOrCreate()
</code></pre>
<p>With Spark set up, when I enter <code>spark</code> into Spyder's
Help pane, I get a very short description:</p>
<pre><code>The entry point to programming Spark with the Dataset and
DataFrame API.
A SparkSession can be used to create DataFrame, register DataFrame
as tables, execute SQL over tables, cache tables, and read parquet
files. To create a SparkSession, use the following builder
pattern:
Changed in version 3.4.0: Supports Spark Connect.
Examples
Create a Spark session.
>>> spark = (
... SparkSession.builder
... .master("local")
... .appName("Word Count")
... .config("spark.some.config.option", "some-value")
... .getOrCreate()
... )
Create a Spark session with Spark Connect.
>>> spark = (
... SparkSession.builder
... .remote("sc://localhost")
... .appName("Word Count")
... .config("spark.some.config.option", "some-value")
... .getOrCreate()
... )
</code></pre>
<p>In contrast, if I issue <code>help(spark)</code> in the console pane, I get much
more information -- more than can fit into the cached console lines,
even if I pan to the top.</p>
<ol>
<li><p>How can I enable a pager for the more detailed help so that I can
see more than just the bottom-most text for which the console buffer
has room for?</p>
</li>
<li><p>How can I get the more detailed help in the Spyder Help pane?
Perhaps it will allow one to view the entire help text.</p>
</li>
</ol>
<p><strong>Afternote:</strong> Aside from the inability to
peruse the entirety of the detailed help,
I found other oddities in using the Spyder Help pane. For example,
If I enter <code>SparkSesssion</code>, the Help pane explains this as:</p>
<pre><code>Definition : SparkSession(...)
Type : Function of pyspark.sql.session module
</code></pre>
<p>However, it isn't a function, but rather, a class. I'm wondering how
much I have to take the wording of the Help pane with a grain of salt.
Subtleties like that determine how I go about searching for further
details and can cause unintended detours for newbies (among which I
count myself).</p>
<p><strong>This is potentially an old problem:</strong> IPython's general lack of a pager seems to go back <a href="https://github.com/spyder-ide/spyder/issues/7311" rel="nofollow noreferrer">many years</a>.</p>
<p><strong>Work-around</strong> Using <code>pandas.DataFrame</code> as an example, if you imported <code>pandas</code> as <code>pd</code>, then issuing <code>pd.DataFrame?</code> at the console gives the same short help as in the Help pane, but it additionally provides the source file path. You can then delve into the file to peruse the class definition, methods, properties, attributes, etc.</p>
<p>Yes, it seems odd to have to locate and read source files to understand how to use classes and methods. After all, OOP is about encapsulation, detail hiding, and separating interface from implementation. However, one can just focus on the doc strings and the function prototypes to see the interface aspects of a class, similar to what one would see in user level documentation.</p>
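<p>For what it's worth, the full <code>help()</code> text can also be captured as a plain string with the stdlib <code>pydoc</code> module and then written to a file or viewed in an editor pane, sidestepping the console buffer entirely:</p>

```python
import pydoc

# render_doc returns the same text help() would print, as one string
text = pydoc.render_doc(str, renderer=pydoc.plaintext)
print(text.splitlines()[0])  # title line of the full documentation
# pydoc.pager(text)          # would page it interactively in a terminal
```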
| <python><spyder> | 2023-09-04 18:21:26 | 0 | 1,265 | user2153235 |
77,039,648 | 913,098 | How to log checkpoint location every time one is saved with pytorch lightning? | <p>I am currently saving a checkpoint every validation epoch like so</p>
<pre><code>checkpoint_callback = ModelCheckpoint(
# dirpath=checkpoints_path,
filename="fa_classifier_{epoch:02d}",
every_n_epochs=val_every_n_epochs,
save_top_k=-1,
# monitor=
)
</code></pre>
<p>and</p>
<pre><code> trainer = Trainer(
callbacks=[checkpoint_callback],
# overfit_batches=1,
# fast_dev_run=True,
default_root_dir=checkpoints_path,
check_val_every_n_epoch=val_every_n_epochs,
max_epochs=max_epochs,
gpus=1
)
</code></pre>
<p>I would like to interfere with the process, and simply add a log line after a checkpoint has been saved, with the location in which it was saved.</p>
<p>I don't know how to access that data, or how to hook in at the right time.</p>
| <python><pytorch><pytorch-lightning> | 2023-09-04 17:25:57 | 1 | 28,697 | Gulzar |
77,039,507 | 6,293,211 | Multiple python versions causing pip failure | <p>See <strong>Photo1</strong>. On executing the <code>--version</code>, <code>which</code>, and <code>where</code> commands for <code>python</code> and <code>python3</code>, I discover that <code>python</code> is linked to version 3.11 and <code>python3</code> is linked to 3.9.</p>
<p>See <strong>Photo2</strong>. I searched all Python-related paths and see a few 3.9 paths; the majority are 3.11 paths.</p>
<p><strong>Question</strong>:</p>
<p>What is the best way to 'gracefully' delete 3.9 and all its references?</p>
<p>a). Should I simply manually delete all the 3.9 folders shown in Photo2? I fear I may break something else unintentionally...</p>
<p>b). Also, would like to make <code>python</code> and <code>python3</code> point to the 3.11</p>
<p>I believe this issue is the root cause of my previous post on pycaret installation: <a href="https://stackoverflow.com/q/77031054/6293211">Pycaret installation on Mac unsuccessful via pip</a></p>
<p><a href="https://i.sstatic.net/BzDdj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BzDdj.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/kYXzU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kYXzU.jpg" alt="enter image description here" /></a></p>
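<p>For anyone reproducing my inspection, these are roughly the commands behind the screenshots (the framework path is macOS-specific, and the exact output is from my machine):</p>

```shell
# List every interpreter on PATH, in resolution order, with its version
for cmd in python python3; do
  echo "== $cmd =="
  command -v "$cmd" && "$cmd" --version || echo "$cmd: not on PATH"
done

# Framework installs the python.org macOS installer may have left behind
ls /Library/Frameworks/Python.framework/Versions 2>/dev/null || true
```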
| <python><pip><python-install> | 2023-09-04 16:57:51 | 1 | 466 | Sinha |
77,039,365 | 1,087,942 | Outdated - CS50's Introduction to Programming with Python: Code does not break | <p>I'm using CS50's tutorials to teach myself Python. I am trying to solve the 'Outdated' problem in problem set 3. The code works when I input the date as 9/8/1636 (i.e., it correctly transforms the date to yyyy-mm-dd and outputs the result) but not when I input September 8, 1636; it just prompts me for input again. I cannot figure out where I went wrong.</p>
<p>The context to the problem can be found <a href="https://cs50.harvard.edu/python/2022/psets/3/outdated/" rel="nofollow noreferrer">here</a>, and the code I tried is copied below:</p>
<pre><code># Initiate the list
month_li = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December"
]
# Start infinite loop that breaks when a valid date is entered
while True:
# Prompt the user for a date (format: 9/8/1636 or september 8, 1636)
input_date = input("Date (in month/day/year order): ").strip()
# Attempt to transform date into yyyy-mm-dd format
try:
# Split month, day and year by "/" separator and convert to integer
month, day, year = map(int, input_date.split("/"))
# Check whether month and day fall within expected ranges
if 1 <= month <= 12 and 1 <= day <= 31:
# If so, then print year, month, and date
print(year,f"{month:02}",f"{day:02}", sep = "-")
# Break out of the loop
break
else:
# Check if date string starts not with a number but a letter
if not input_date[0].isdigit():
# Split the date by space
month, day, year = input_date.split(" ")
# A comma (,) will remain, so need to get rid of that
day = day.replace(',', '')
# Convert day and year to integers; will deal with month after
day = int(day)
year = int(year)
# Capitalize month for case insensitivity
month = month.capitalize()
# Need to check if the month exists in the list
if month in month_li and (1 <= day <= 31):
print(year,f"{month_li.index(month) + 1:02}", f"{day:02}", sep = "-")
break
except:
pass
</code></pre>
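<p>To make the failing branch easier to discuss in isolation, here is a stripped-down sketch of just the long-format parsing I am aiming for (the function name and structure are my own, not the assignment's required shape):</p>

```python
# Stripped-down sketch of only the "September 8, 1636" branch,
# pulled out of the loop so it can be tested on its own.
MONTHS = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]

def parse_long_date(s):
    """Parse 'September 8, 1636' into 'yyyy-mm-dd'; return None if invalid."""
    try:
        month, day, year = s.strip().split(" ")
        day = int(day.replace(",", ""))
        year = int(year)
    except ValueError:
        return None
    month = month.capitalize()
    if month in MONTHS and 1 <= day <= 31:
        return f"{year}-{MONTHS.index(month) + 1:02}-{day:02}"
    return None

print(parse_long_date("September 8, 1636"))  # 1636-09-08
```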
| <python><cs50> | 2023-09-04 16:28:48 | 5 | 1,152 | FlyingDutch |
77,039,302 | 17,889,328 | pyinstaller with pydantic v2 - `pydantic:compiled` has been removed in V2.` | <p>I have a Python app which I build with pyinstaller, and which I recently brought pydantic v2 into.</p>
<p>pydantic v1 was never used or, as far as I am aware, installed on my system.</p>
<p>pyinstaller now fails to build with this error:</p>
<pre><code>23401 INFO: Loading module hook 'hook-pydantic.py' from 'C:\\paul\\AmDesp\\venv\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
pydantic.errors.PydanticImportError: `pydantic:compiled` has been removed in V2.
</code></pre>
<p>My app works fine if I run it as a Python script, but I am unable to build it into a portable executable...</p>
<p>I'm a novice, but as far as I can tell this is an issue with pyinstaller not being pydantic-v2-ready, rather than my code relying on v1 features or being otherwise wrong.</p>
<p>I waited to start learning pydantic because v2 was imminent by the time I was ready to roll, and I don't really want to have to roll it back and learn v1.</p>
<p>Can someone confirm whether pyinstaller is pydantic v2 ready? And if not, give some advice about how I might overcome this? If I can contribute to making it ready I am happy to get involved, but will need pointing at the actual problem space and a brief intro... otherwise I guess I'm looking at other build platforms?</p>
| <python><pyinstaller><pydantic> | 2023-09-04 16:17:03 | 1 | 704 | prosody |
77,039,278 | 116,906 | How do I have a Python 3.11.5 process suspend itself on Monterey 12.6.8? | <pre><code>#!/usr/bin/python3
import os
import time
os.system('/bin/kill -19 ' + str(os.getpid()))
time.sleep(2)
</code></pre>
<p>On Linux this script returns immediately, suspended. On Mac this script waits for two seconds and exits without suspending itself.</p>
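<p>One thing I did check: the stdlib <code>signal</code> module exposes the platform's numeric signal values, so the hard-coded 19 can at least be compared against the symbolic names:</p>

```python
import signal

# Numeric values of the stop/continue-related signals on this platform;
# POSIX fixes the names, not the numbers, so they can differ across OSes.
stop_signals = {
    name: int(getattr(signal, name))
    for name in ("SIGSTOP", "SIGTSTP", "SIGCONT")
    if hasattr(signal, name)
}
print(stop_signals)  # e.g. {'SIGSTOP': 19, ...} on Linux; numbers differ on macOS

# os.kill(os.getpid(), signal.SIGSTOP) with the symbolic constant would
# suspend this very process, so it is left commented out in this sketch.
```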
<p>What can I do to make the suspension behavior available on Mac?</p>
<p>TIA,</p>
| <python><python-3.x><signals><macos-monterey> | 2023-09-04 16:12:31 | 1 | 6,021 | Christos Hayward |
77,039,054 | 4,406,532 | Changing ranges in timeline | <p>Following <a href="https://stackoverflow.com/questions/68386624/plot-timeline-in-python">this question</a> I would like to change the X range (dates) so that it runs from 1970, 1980, ... to 2020, and the events are a, b, etc. (I have a predefined list of events: <code>'event': ['a', 'b','c','d','e','f']</code>). I use the following code, whose dates start with 1969 and end with 1970 (instead of 1970 and 2020 respectively). I tried 2 options.
First option:</p>
<pre><code>'date':pd.date_range(start='1970', periods=6)
</code></pre>
<p>second option:</p>
<pre><code>'date': ['1970', '1980','1990','2000','2010','2020']
</code></pre>
<p>Here is the complete code:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import matplotlib.dates as mdates
from datetime import datetime
df = pd.DataFrame(
{
'event': ['a', 'b','c','d','e','f'],
'date': ['1970','1980','1990','2000','2010','2020']
# -> 'date':pd.date_range(start='1970', periods=6)
}
)
df['date'] = pd.to_datetime(df['date'])
levels = np.tile(
[-5, 5, -3, 3, -1, 1],
int(np.ceil(len(df)/6))
)[:len(df)]
fig, ax = plt.subplots(figsize=(12.8, 4), constrained_layout=True);
ax.set(title="A series of events")
ax.vlines(df['date'], 0, levels, color="tab:red"); # The vertical stems.
ax.plot( # Baseline and markers on it.
df['date'],
np.zeros_like(df['date']),
"-o",
color="k",
markerfacecolor="w"
);
# annotate lines
for d, l, r in zip(df['date'], levels, df['event']):
ax.annotate(
r,
xy=(d, l),
xytext=(-3, np.sign(l)*3),
textcoords="offset points",
horizontalalignment="right",
verticalalignment="bottom" if l > 0 else "top"
);
# format xaxis with 4 month intervals
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=4));
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"));
plt.setp(ax.get_xticklabels(), rotation=30, ha="right");
# remove y axis and spines
ax.yaxis.set_visible(False);
ax.spines["left"].set_visible(False)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.margins(y=0.1);
plt.show();
</code></pre>
<p>As explained above, I would like to see on the X axis only the years 1970 to 2020 (with no month and day) and, correspondingly, their events (a to f).</p>
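<p>For what it's worth, this standalone sketch is the locator/formatter combination I have been experimenting with (the Agg backend and toy data are just to keep it self-contained):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just so this sketch runs anywhere
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

dates = pd.to_datetime(["1970", "1980", "1990", "2000", "2010", "2020"])
fig, ax = plt.subplots()
ax.plot(dates, range(len(dates)), "o")

# One major tick every 10 years, labelled with the year only
ax.xaxis.set_major_locator(mdates.YearLocator(10))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y"))
fig.canvas.draw()  # force tick computation so the labels can be read back
print([t.get_text() for t in ax.get_xticklabels()])
```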
| <python><matplotlib><time-series><timeline> | 2023-09-04 15:34:27 | 1 | 2,293 | Avi |
77,039,010 | 6,734,243 | how to ignore folders contained in tests with pytest? | <p>I checked the <a href="https://stackoverflow.com/questions/11117062/how-to-tell-py-test-to-skip-certain-directories">following question</a> and it's many answers. It's very outdated (as OP is referencing py.test) and no answer is specific enough for my use case.</p>
<p>I have the following test structure:</p>
<pre><code>.
└── tests/
├── test_number/
│ ├── test_Number.py
│ └── test_Float.py
└── legacy/
├── test_toto.py
├── ...
└── test_tutu.py
</code></pre>
<p>I want to run my tests with a simple command, to avoid any reproducibility issues in downstream pipelines or local dev.</p>
<pre><code>pytest
</code></pre>
<p>So I want pytest to avoid the entire content of the legacy folder.</p>
<p>I'm using a <code>pyproject.toml</code> configuration so as per the recommendation of pytest docs I wrote:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options]
testpaths = "tests"
norecursedirs = "tests/legacy/*"
</code></pre>
<p>Problem: the files are still parsed and tested. I also tried making the path relative to <code>testpaths</code>, and got the same result.</p>
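<p>For completeness, this is the variant of the configuration I am now experimenting with, based on my reading that <code>norecursedirs</code> matches directory basenames rather than paths (so no <code>tests/</code> prefix or glob), plus an <code>addopts</code> alternative; treat both as hypotheses rather than a confirmed fix:</p>

```toml
[tool.pytest.ini_options]
testpaths = "tests"
norecursedirs = "legacy"
# alternative I am also trying:
# addopts = "--ignore=tests/legacy"
```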
| <python><python-3.x><pytest> | 2023-09-04 15:27:16 | 0 | 2,670 | Pierrick Rambaud |
77,038,895 | 5,547,553 | How to count null values in a summary statistics in python polars? | <p>How do you count null values in a column?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'prod': ['apple','apple','banana','lemon','lemon','lemon'], 'price': [11, None, 14, 21, None, 23]
})
</code></pre>
<pre><code>shape: (6, 2)
┌────────┬───────┐
│ prod ┆ price │
│ --- ┆ --- │
│ str ┆ i64 │
╞════════╪═══════╡
│ apple ┆ 11 │
│ apple ┆ null │
│ banana ┆ 14 │
│ lemon ┆ 21 │
│ lemon ┆ null │
│ lemon ┆ 23 │
└────────┴───────┘
</code></pre>
<p>I tried this, but it does not give what I want:</p>
<pre class="lang-py prettyprint-override"><code>df.group_by('prod').agg(pl.count().alias('COUNT'),
pl.when(pl.col('price').is_null()).then(1).count().alias('NULLS')
).sort(by='prod')
</code></pre>
<pre><code>shape: (3, 3)
┌────────┬───────┬───────┐
│ prod ┆ COUNT ┆ NULLS │
│ --- ┆ --- ┆ --- │
│ str ┆ u32 ┆ u32 │
╞════════╪═══════╪═══════╡
│ apple ┆ 2 ┆ 1 │
│ banana ┆ 1 ┆ 0 │
│ lemon ┆ 3 ┆ 1 │
└────────┴───────┴───────┘
</code></pre>
| <python><dataframe><python-polars> | 2023-09-04 15:09:01 | 1 | 1,174 | lmocsi |
77,038,836 | 10,251,146 | polars subset struct variable field name | <p>I have a struct column and want to access fields based on another column</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = {'my_struct': [{'field1': 1, 'field2': 'A'}, {'field1': 2, 'field2': 'B'}],
'field_name': ['field1', 'field2']}
df = pl.DataFrame(data)
</code></pre>
<pre><code>shape: (2, 2)
┌───────────┬────────────┐
│ my_struct ┆ field_name │
│ --- ┆ --- │
│ struct[2] ┆ str │
╞═══════════╪════════════╡
│ {1,"A"} ┆ field1 │
│ {2,"B"} ┆ field2 │
└───────────┴────────────┘
</code></pre>
<p>I am looking for something like</p>
<pre class="lang-py prettyprint-override"><code>df.select( pl.col("my_struct").struct.field(pl.col("field_name")) )
</code></pre>
<p>But I get an error:</p>
<pre><code># TypeError: argument 'name': 'Expr' object cannot be converted to 'PyString'
</code></pre>
<p>How, then, can I use the value of <code>pl.col("field_name")</code> as the field name?</p>
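<p>To make the intent concrete, this is the per-row lookup I am describing, sketched with plain dicts rather than polars:</p>

```python
# Plain-Python sketch of the row-wise lookup I want polars to perform.
rows = [
    {"my_struct": {"field1": 1, "field2": "A"}, "field_name": "field1"},
    {"my_struct": {"field1": 2, "field2": "B"}, "field_name": "field2"},
]

# For each row, pick the struct field whose name is in that row's field_name
picked = [row["my_struct"][row["field_name"]] for row in rows]
print(picked)  # [1, 'B']
```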
| <python><python-polars> | 2023-09-04 15:00:21 | 1 | 459 | linus heinz |
77,038,811 | 2,837,887 | Recursive regex in Python to match balanced brackets and replace | <p>I have some LaTeX source and have some annotations: three types:</p>
<ul>
<li><code>\ins{who}{what}</code></li>
<li><code>\rem{who}{what}</code></li>
<li><code>\rpl{who}{whatold}{whatnew}</code></li>
</ul>
<p>The goal is to run through the source files and replace all occurences in the following manner:</p>
<pre><code>\ins{who}{what}->what
\rem{who}{what}->''
\rpl{who}{whatold}{whatnew}->whatnew
</code></pre>
<p>My initial idea was <code>python</code> + <code>regex</code>, but as you can see in the example below, one quickly runs into the balanced-braces issue. Based on the discussion <a href="https://stackoverflow.com/questions/546433/regular-expression-to-match-balanced-parentheses">here</a>, I came up with the replace regex <code>\\rpl\{[a-z]{2}\}(\{((?>[^}{]+|(?1))*)\})(\{((?>[^}{]+|(?3))*)\})</code>, replacing with the fourth group. I verified it on regex101 with the PCRE2 flavour, but I am having a hard time getting it to work in Python.</p>
<p>The simplistic test is:</p>
<pre><code>import regex
text = 'Velit sed ullamcorper morbi tincidunt ornare. Lorem dolor sed viverra ipsum nunc aliquet bibendum enim.\rpl{dd}{ Pellentesque \qtyrange{10}{20}{\celsius} habitant morbi \qtyrange{10}{20}{\celsius} tristique senectus \gls{abc}.}{Some new text with \qtyrange{10}{20}{\celsius} and more \gls{cdf}.}'
regx = r'\\rpl\{[a-z]{2}\}(\{((?>[^}{]+|(?1))*)\})(\{((?>[^}{]+|(?3))*)\})'
print(regex.findall(regx, text, overlapped=False))
</code></pre>
<p>The expected output from the test should be:</p>
<pre><code>'Velit sed ullamcorper morbi tincidunt ornare. Lorem dolor sed viverra ipsum nunc aliquet bibendum enim.Some new text with \qtyrange{10}{20}{\celsius} and more \gls{cdf}.'
</code></pre>
<p>In the discussions mentioned, some people advised against using regex and suggested implementing bracket counting instead. I am trying to avoid that; it feels like too much programming when the tool is there and seems to work in other flavours.</p>
<p>Any advice on how to modify the regex, or on whether to use a language feature other than regex that can do this? The advantage of regex is that I can easily match what should be around.</p>
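<p>In case regex turns out to be a dead end here, this is my rough attempt at the bracket-counting approach people suggested; it is only lightly tested and assumes the input braces are balanced:</p>

```python
def read_group(text, i):
    """Read one {...} group starting at text[i] == '{'.

    Returns (content, index_after_closing_brace); assumes balanced braces.
    """
    depth, j = 1, i + 1
    start = j
    while depth:
        if text[j] == "{":
            depth += 1
        elif text[j] == "}":
            depth -= 1
        j += 1
    return text[start:j - 1], j

def replace_annotations(text):
    """Resolve \\ins/\\rem/\\rpl annotations by brace counting."""
    # (command, number of arguments, index of the argument to keep or None)
    commands = (("\\ins", 2, 1), ("\\rem", 2, None), ("\\rpl", 3, 2))
    out, i = [], 0
    while i < len(text):
        for cmd, n_args, keep in commands:
            if text.startswith(cmd + "{", i):
                j = i + len(cmd)
                args = []
                for _ in range(n_args):
                    arg, j = read_group(text, j)
                    args.append(arg)
                if keep is not None:
                    out.append(args[keep])
                i = j
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(replace_annotations(r"x \ins{ab}{kept} y \rem{ab}{dropped} z"))  # x kept y  z
```

Nested commands inside an argument (like <code>\qtyrange{10}{20}{\celsius}</code>) survive because <code>read_group</code> only counts braces; it does not interpret them.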
| <python><regex><recursion><regex-group> | 2023-09-04 14:57:06 | 1 | 1,400 | atapaka |
77,038,678 | 6,178,507 | How to extract metadata from .heic image files on Windows 11 with Python? | <p>I'm running Python 3.10 on Windows 11. I need to extract metadata from <code>.heic</code> image files. Here is what I tried:</p>
<h2>1. ExifRead</h2>
<p>I tried with <strong>ExifRead</strong> (see <a href="https://pypi.org/project/ExifRead/" rel="nofollow noreferrer">https://pypi.org/project/ExifRead/</a>) but that failed:</p>
<pre class="lang-bash prettyprint-override"><code>>>> import exifread
>>> f = open("path/to/img.heic", 'rb')
>>> tags = exifread.process_file(f)
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\exifread\heic.py", line 171, in get_parser
return defs[box.name]
KeyError: 'hdlr'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python310\lib\site-packages\exifread\__init__.py", line 137, in process_file
offset, endian, fake_exif = _determine_type(fh)
File "C:\Python310\lib\site-packages\exifread\__init__.py", line 109, in _determine_type
offset, endian = heic.find_exif()
File "C:\Python310\lib\site-packages\exifread\heic.py", line 268, in find_exif
meta = self.expect_parse('meta')
File "C:\Python310\lib\site-packages\exifread\heic.py", line 159, in expect_parse
return self.parse_box(box)
File "C:\Python310\lib\site-packages\exifread\heic.py", line 177, in parse_box
probe(box)
File "C:\Python310\lib\site-packages\exifread\heic.py", line 195, in _parse_meta
psub = self.get_parser(box)
File "C:\Python310\lib\site-packages\exifread\heic.py", line 173, in get_parser
raise NoParser(box.name) from err
exifread.heic.NoParser: hdlr
</code></pre>
<h2>2. pyheif</h2>
<p>I tried to install the <code>pyheif</code> module, but there is no build for Windows.</p>
<h2>3. pillow</h2>
<p>I tried with the <code>pillow</code> module (aka <code>PIL</code>):</p>
<pre class="lang-bash prettyprint-override"><code>>>> from PIL import Image
>>> img = Image.open("path/to/img.HEIC")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python310\lib\site-packages\PIL\Image.py", line 3280, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file 'C:/Backup/Pictures_2023/IMG_0620.HEIC'
</code></pre>
| <python><python-3.x><image><windows-11><heic> | 2023-09-04 14:41:04 | 4 | 9,770 | K.Mulier |
77,038,525 | 10,667,216 | Which co-occurrence matrix generation method is better for a list of words? | <p>I am working on a project where I need to generate a co-occurrence matrix for a given list of words. I have found two different methods for generating this matrix, but I'm not sure which one would be more suitable for my task.</p>
<p>The list of words I am working with is quite extensive (an example is provided below), and I need to make sure that the method I choose is efficient and accurate for generating the co-occurrence matrix.</p>
<pre><code>word_list = ['geben', 'interessieren', 'bringen', 'lassen', 'stellen', 'sehen', 'hand', 'erfahren', 'nehmen', 'steigen', 'super', 'kandidatinnen', 'veroffentlicht', 'weiß', 'kampfen', 'stehen', 'wort', 'stark', 'schließlich', 'einfach', 'sogar', 'weg', 'bleiben', 'direkt', 'gehort', 'zeigen', 'madels', 'clip_s', 'schaffen', 'begeistern', 'erklart', 'schwierig', 'kandidatin', 'wohnen', 'fragen', 'sagen', 'topmodel', 'letzt', 'backstage', 'meinung', 'erfahrung', 'selbstbewusst', 'sache', 'wichtig', 'germany', 'emotional', 'aussehen', 'fuhlt', 'freuen', 'familie', 'nathalie', 'std', 'umstyling_std', 'justine', 'casting_edition', 'tamara', 'theresia', 'maribel', 'untertitel', 'enisa', 'edition', 'modelloft', 'transformation', 'lijana', 'simone', 'tatjana', 'thoma', 'maureen', 'sayana', 'staffel_episode', 'gebardensprache', 'sendung', 'nacktshooting', 'linda', 'social', 'streit', 'sender', 'high_fashion', 'madchen', 'duellwoche', 'michael', 'jasmin', 'preview', 'makeover', 'team', 'kostenlos', 'dream_edition', 'sixx', 'bodypainting', 'nackt', 'anmelden', 'nachher', 'episode', 'celine', 'sedcard', 'anna', 'schuh', 'wasser', 'live', 'jacky', 'kostenlos', 'sender', 'anmelden', 'edition', 'high_fashion', 'transformation', 'duellwoche', 'dream_edition', 'casting_edition', 'gebardensprache', 'untertitel', 'film', 'modelloft', 'std', 'umstyling_std', 'nacktshooting', 'cover', 'ander', 'erhalten', 'hollywood', 'berlin', 'social', 'show', 'entdecken', 'fashion_week', 'tv', 'einzug', 'live', 'nachwuchsmodels', 'video', 'pose', 'shooting', 'catwalk', 'halbfinale', 'episode', 'bodypainting', 'juror', 'ansehen', 'sedcard', 'schnell', 'highlights', 'zuschauer', 'heidi_klum', 'gastjurorinnen', 'trailer', 'gntm', 'gewinnerinnen', 'madchen', 'staffel_episode', 'gast']
</code></pre>
<p>Here are the two methods I've come across:</p>
<p>Method 1: Custom Python code</p>
<pre><code>
def _co_occurrence_n_gram(self, word_list, window_size):
co_occurence_counts = defaultdict(int)
vocab = set()
# Iterate over the words in the word list
for word in word_list:
# iterate over tokens in the word
for i, token in enumerate(word):
vocab.add(token) # add the token to the vocab
# Get the next tokens within the window size
next_tokens = word[i + 1 : i + 1 + window_size]
for next_token in next_tokens:
# Create a tuple of the sorted tokens and increment the co-occurrence count
co_occurrence_key = tuple(sorted([next_token, token]))
co_occurence_counts[co_occurrence_key] += 1
# Create a DataFrame to represent the co-occurrence counts
    vocab = sorted(vocab)  # sort the vocab
data_frame = pd.DataFrame(
data=np.zeros((len(vocab), len(vocab)), dtype=np.int16), index=vocab, columns=vocab
)
    # populate the DataFrame with the co-occurrence counts
for co_occurrence_key, count in co_occurence_counts.items():
data_frame.at[co_occurrence_key[0], co_occurrence_key[1]] = count
data_frame.at[co_occurrence_key[1], co_occurrence_key[0]] = count
return data_frame
</code></pre>
<p>Method 2: Using scikit-learn's CountVectorizer</p>
<pre><code> def _co_occurrence_n_gram(self, word_list):
# Flatten the list of words
all_words = " ".join(word_list)
        # Create a CountVectorizer with ngram_range=(1, 1)
cv = CountVectorizer(ngram_range=(1, 1))
# Convert the list of words into a matrix of token counts
X = cv.fit_transform([all_words])
# Calculate the co-occurrence matrix
Xc = X.T * X
Xc.setdiag(0) # Set the diagonal elements to zero
# Get the feature names (tokens)
names = cv.get_feature_names_out()
# Create a DataFrame to represent the co-occurrence matrix
data_frame = pd.DataFrame(data=Xc.toarray(), columns=names, index=names)
return data_frame
</code></pre>
<p>I want to know the differences between these two methods and which one is more suitable for generating a co-occurrence matrix for a large list of words. Specifically, I'm interested in factors like efficiency, accuracy, and ease of use.</p>
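<p>For context, the core counting that both methods approximate can be written with a plain <code>Counter</code>; this simplified sliding-window sketch (my own toy version, over an already-flat token list) is the mental model I am comparing them against:</p>

```python
from collections import Counter
from itertools import islice

def co_occurrence_counts(tokens, window_size=2):
    """Count unordered token pairs within a forward sliding window."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        # Look only at the next `window_size` tokens to avoid double counting
        for nxt in islice(tokens, i + 1, i + 1 + window_size):
            counts[tuple(sorted((tok, nxt)))] += 1
    return counts

print(co_occurrence_counts(["a", "b", "a", "c"], window_size=2))
```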
<p>Could someone please provide insights or recommendations on which method to choose and why? Any examples or comparisons would be greatly appreciated. Thank you!</p>
| <python><nlp> | 2023-09-04 14:21:33 | 0 | 483 | Davood |
77,038,494 | 3,899,975 | How to customize the handles and labels for a legend | <p>I am trying to customize the ax from a matplotlib plot. Here I am using the surpyval package to fit the data and then plot it. The plot method in the surpyval package does not accept arguments other than the ax=ax that I provided. My problem is that I can't match the handles to the labels, as you can see from this example:</p>
<pre><code>import surpyval as surv
import numpy as np
import matplotlib.pyplot as plt
y_a = np.array([181, 183, 190,190, 195, 195, 198, 198, 198, 201,202, 202, 202,
204, 205, 205, 206,206, 206, 206,207, 209 , 213, 214, 218, 219])
y_s = np.array([161, 179, 196,196, 197, 198, 204, 205, 209, 211,215, 218, 227,
230, 231, 232, 232 ,236, 237, 237,240, 243, 244, 246, 252, 255])
model_1 = surv.Weibull.fit(y_a)
model_2 = surv.Weibull.fit(y_s)
ax=plt.gca()
model_1.plot(ax=ax)
model_2.plot(ax=ax)
p_a = ['A', 'a_u_CI','a_l_CI', 'a_fit']
p_s= ['S', 's_u_CI','s_l_CI', 's_fit']
p_t = p_a + p_s
ax.legend(labels=p_t[0:5:4])
</code></pre>
<p><a href="https://i.sstatic.net/IVRWP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IVRWP.png" alt="enter image description here" /></a></p>
| <python><matplotlib><legend><legend-properties> | 2023-09-04 14:17:34 | 1 | 1,021 | A.E |
77,038,407 | 17,082,611 | GridSearchCV: You must compile your model before training/testing. But my model is already compiled | <p>I am trying to use <code>GridSearchCV</code> for tuning the hyper-parameter <code>epochs</code> of my model. This is the minimal, reproducible example:</p>
<pre><code>(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
latent_dimension = 25
encoder = Encoder(latent_dimension, (28, 28, 1))
decoder = Decoder()
vae = VAE(encoder, decoder)
vae.compile(Adam()) # Compiled here
param_grid = {'epochs': [10, 20, 30]}
grid = GridSearchCV(vae, param_grid, scoring=mean_absolute_error, cv=2)
grid.fit(x_train, y_train)
</code></pre>
<p>But unfortunately <code>grid.fit(x_train, y_train)</code> gives:</p>
<blockquote>
<p>RuntimeError: You must compile your model before training/testing. Use <code>model.compile(optimizer, loss)</code>.</p>
</blockquote>
<p>But I already compiled my model. How can I fix the problem?</p>
<p>This is <code>VAE</code>, <code>Encoder</code> and <code>Decoder</code> implementation:</p>
<pre><code>import tensorflow as tf
from keras import layers
from keras.optimizers.legacy import Adam
from sklearn.base import BaseEstimator
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from tensorflow import keras
def sample(z_mean, z_log_var):
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.random.normal(shape=(batch, dim))
stddev = tf.exp(0.5 * z_log_var)
return z_mean + stddev * epsilon
class VAE(keras.Model, BaseEstimator):
def __init__(self, encoder, decoder, epochs=None, **kwargs):
super().__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.epochs = epochs
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
def call(self, inputs, training=None, mask=None):
_, _, z = self.encoder(inputs)
outputs = self.decoder(z)
return outputs
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
with tf.GradientTape() as tape:
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Compute gradient
grads = tape.gradient(total_loss, self.trainable_weights)
# Update weights
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
# Update my own metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
def test_step(self, data):
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Update my own metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
class Encoder(keras.Model):
def __init__(self, latent_dimension, input_shape):
super(Encoder, self).__init__()
self.latent_dim = latent_dimension
self.conv_block1 = keras.Sequential([
layers.Input(shape=input_shape),
layers.Conv2D(filters=64, kernel_size=3, activation="relu", strides=2, padding="same"),
layers.BatchNormalization()
])
self.conv_block2 = keras.Sequential([
layers.Conv2D(filters=128, kernel_size=3, activation="relu", strides=2, padding="same"),
layers.BatchNormalization()
])
self.conv_block3 = keras.Sequential([
layers.Conv2D(filters=256, kernel_size=3, activation="relu", strides=2, padding="same"),
layers.BatchNormalization()
])
self.flatten = layers.Flatten()
self.dense = layers.Dense(units=100, activation="relu")
self.z_mean = layers.Dense(latent_dimension, name="z_mean")
self.z_log_var = layers.Dense(latent_dimension, name="z_log_var")
self.sampling = sample
def call(self, inputs, training=None, mask=None):
x = self.conv_block1(inputs)
x = self.conv_block2(x)
x = self.conv_block3(x)
x = self.flatten(x)
x = self.dense(x)
z_mean = self.z_mean(x)
z_log_var = self.z_log_var(x)
z = self.sampling(z_mean, z_log_var)
return z_mean, z_log_var, z
class Decoder(keras.Model):
def __init__(self):
super(Decoder, self).__init__()
self.dense1 = keras.Sequential([
layers.Dense(units=4096, activation="relu"),
layers.BatchNormalization()
])
self.dense2 = keras.Sequential([
layers.Dense(units=1024, activation="relu"),
layers.BatchNormalization()
])
self.dense3 = keras.Sequential([
layers.Dense(units=4096, activation="relu"),
layers.BatchNormalization()
])
self.reshape = layers.Reshape((4, 4, 256))
self.deconv1 = keras.Sequential([
layers.Conv2DTranspose(filters=256, kernel_size=3, activation="relu", strides=2, padding="same"),
layers.BatchNormalization()
])
self.deconv2 = keras.Sequential([
layers.Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=1, padding="same"),
layers.BatchNormalization()
])
self.deconv3 = keras.Sequential([
layers.Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=2, padding="valid"),
layers.BatchNormalization()
])
self.deconv4 = keras.Sequential([
layers.Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=1, padding="valid"),
layers.BatchNormalization()
])
self.deconv5 = keras.Sequential([
layers.Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=2, padding="valid"),
layers.BatchNormalization()
])
self.deconv6 = layers.Conv2DTranspose(filters=1, kernel_size=2, activation="sigmoid", padding="valid")
def call(self, inputs, training=None, mask=None):
x = self.dense1(inputs)
x = self.dense2(x)
x = self.dense3(x)
x = self.reshape(x)
x = self.deconv1(x)
x = self.deconv2(x)
x = self.deconv3(x)
x = self.deconv4(x)
x = self.deconv5(x)
decoder_outputs = self.deconv6(x)
return decoder_outputs
</code></pre>
<p>And this is the full traceback of the error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/del.py", line 195, in <module>
grid.fit(x_train, y_train)
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/sklearn/model_selection/_search.py", line 874, in fit
self._run_search(evaluate_candidates)
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/sklearn/model_selection/_search.py", line 1388, in _run_search
evaluate_candidates(ParameterGrid(self.param_grid))
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/sklearn/model_selection/_search.py", line 851, in evaluate_candidates
_warn_or_raise_about_fit_failures(out, self.error_score)
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/sklearn/model_selection/_validation.py", line 367, in _warn_or_raise_about_fit_failures
raise ValueError(all_fits_failed_message)
ValueError:
All the 6 fits failed.
It is very likely that your model is misconfigured.
You can try to debug the error by setting error_score='raise'.
Below are more details about the failures:
--------------------------------------------------------------------------------
6 fits failed with the following error:
Traceback (most recent call last):
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/sklearn/model_selection/_validation.py", line 686, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/venv/lib/python3.11/site-packages/keras/src/engine/training.py", line 3875, in _assert_compile_was_called
raise RuntimeError(
RuntimeError: You must compile your model before training/testing. Use `model.compile(optimizer, loss)`.
</code></pre>
| <python><tensorflow><machine-learning><keras><deep-learning> | 2023-09-04 14:04:31 | 1 | 481 | tail |
77,038,393 | 5,873,325 | Launch a web application locally | <p>As part of a project, a web application was passed to me. This web application has a front end and a back end. The front end was developed with React JS. Here's what the front end folder looks like.</p>
<p><a href="https://i.sstatic.net/eXa4C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eXa4C.png" alt="enter image description here" /></a></p>
<p>And the the back end contains a bunch of .py files.</p>
<p>In order to launch this web application (front and back end separately), my former colleagues used Docker to create images. They followed these steps :</p>
<ol>
<li>docker build --pull --rm -f ".Dockerfile" -t myapp "."</li>
<li>docker save myapp | gzip > myapp.tar.gz</li>
<li>docker load < myapp.tar.gz</li>
<li>docker run -p 3000:3000 myapp</li>
</ol>
<p>Now my question is: how can I test this web application locally (meaning with a local web server, front and back end), so I can test changes faster without needing to create and run Docker images every time?</p>
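<p>In case it clarifies what I mean by "faster", an interim setup I have pictured (service names, ports and mount paths here are my guesses, not from the handover) is a docker-compose style layout with bind mounts, so source edits are picked up without rebuilding images each time:</p>

```yaml
# Hypothetical docker-compose sketch; folder names, ports and mount paths
# are my assumptions, not taken from the project that was handed over.
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src   # bind mount: edits show up without rebuilding
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
```

Even so, I would still prefer a plain local dev server workflow if that is feasible.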
| <python><node.js><reactjs><docker> | 2023-09-04 14:02:31 | 3 | 640 | Mejdi Dallel |
77,038,382 | 2,502,126 | plotly express sunburst segment color control | <p>I have a set of data like this which is vended from a database:</p>
<pre><code>xdata = [
["ZT","Met",1],
["ZT","Compensates",0],
["ZT","Planned",0],
["ZT","To Do",0],
["ZT Basic","Met",6],
["ZT Basic","Compensates",6],
["ZT Basic","Planned",2],
["ZT Basic","To Do",1],
...
</code></pre>
<p>I wish to construct a sunburst with a white hole and where the level 1 ring (<code>xdata[n][0]</code> above) colors are picked from a suitably large palette but the level 2 segments ('Met','Planned', etc.) are always the same for each value, e.g. <code>Met</code> is lightblue for all <code>Met</code>.</p>
<p>I am getting close with</p>
<pre><code>level2_colors = {
"Met": "blue",
"Compensates": "purple",
"Planned": "#AACC96",
"To Do": "orange"
}
df = pd.DataFrame(xdata, columns=['Area', 'Status', 'Value'])
fig = px.sunburst(df.assign(hole=" "),
path=['hole','Area', 'Status'],
values='Value',
color_discrete_map=level2_colors
)
</code></pre>
<p>but this approach turns all level 1 segments and the hole into the same color. Attempts to keep the white hole and randomize level 1 segments breaks the coloring consistency at level 2. Additionally, I would like all labels to be radially oriented. I am not wed to <code>express</code> and happy to use the lower-level <code>plotly</code> SDK. Suggestions? This is plotly v5.16.1.</p>
| <python><plotly> | 2023-09-04 14:01:06 | 1 | 7,633 | Buzz Moschetti |
77,038,132 | 22,497,747 | Python pillow/PIL doesn't recognize the attribute "textsize" of the object "imagedraw" | <p>I already checked the Python version in my environment (Sublime Text), which is 3.11.0, the latest, and the Pillow version, which is 10.0.0, also the latest. My code looks similar to other examples online.</p>
<p>The code has a part in Italian, but it's pretty understandable.</p>
<p>The problem is at <code>disegno.textsize(testo, font=font)</code>.</p>
<p>After I run the code:</p>
<pre><code>line 14, in metti_testo_su_sfondo
text_width, text_height = disegno.textsize(testo, font=font)
^^^^^^^^^^^^^^^^
AttributeError: 'ImageDraw' object has no attribute 'textsize'
</code></pre>
<p>It's strange, because ImageDraw should have the textsize attribute. I'm a novice; I hope I didn't miss anything blatant.</p>
<hr />
<pre><code>from PIL import Image, ImageDraw, ImageFont
def metti_testo_su_sfondo(testo, sfondo, posizione=(10, 10), colore_testo=(0, 0, 0), dimensione_font=25):
    # Open the background image
immagine_sfondo = Image.open(sfondo)
disegno = ImageDraw.Draw(immagine_sfondo)
font = ImageFont.truetype("ARIAL.TTF", dimensione_font)
text_width, text_height = disegno.textsize(testo, font=font)
    # Compute the coordinates of the centered text
x = (immagine_sfondo.width - text_width) // 2
y = (immagine_sfondo.height - text_height) // 2
disegno.text((x, y), testo, fill=colore_testo, font=font)
immagine_sfondo.save("spotted.png")
testo_da_inserire = "Ciao, mondo!"
sfondo_da_utilizzare = "spotted_bianco.jpg"
metti_testo_su_sfondo(testo_da_inserire, sfondo_da_utilizzare)
</code></pre>
<p>The objective is a script that generates images automatically without needing to edit them manually. I checked the build system, Python version, and Pillow version. When I run the code through cmd, though, it gives me this error:</p>
<pre><code>from PIL import Image, ImageDraw, ImageFont
ModuleNotFoundError: No module named 'PIL'
</code></pre>
| <python><python-3.x><python-imaging-library> | 2023-09-04 13:27:28 | 7 | 491 | carokann17 |
77,038,120 | 7,339,624 | How to create a 2D tensor of points with PyTorch, each dimension going from 0 to 1? | <p>I'm trying to create a 2D tensor where each dimension ranges from 0 to 1.</p>
<p>For a 1D tensor, I can use:</p>
<pre class="lang-py prettyprint-override"><code>torch.arange(0, 1, 0.2)
</code></pre>
<p>This gives me:</p>
<pre><code>tensor([0.0, 0.2, 0.4, 0.6, 0.8])
</code></pre>
<p>But, I want to extend this to 2D points. My desired output is [with the shape (25, 2)]:</p>
<pre><code>tensor([
[0.0, 0.0], [0.0, 0.2], [0.0, 0.4], [0.0, 0.6], [0.0, 0.8],
[0.2, 0.0], [0.2, 0.2], [0.2, 0.4], [0.2, 0.6], [0.2, 0.8],
[0.4, 0.0], [0.4, 0.2], [0.4, 0.4], [0.4, 0.6], [0.4, 0.8],
[0.6, 0.0], [0.6, 0.2], [0.6, 0.4], [0.6, 0.6], [0.6, 0.8],
[0.8, 0.0], [0.8, 0.2], [0.8, 0.4], [0.8, 0.6], [0.8, 0.8]
])
</code></pre>
<p>How can I achieve this using PyTorch?</p>
| <python><machine-learning><deep-learning><pytorch><tensor> | 2023-09-04 13:25:27 | 3 | 4,337 | Peyman |
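One way to get exactly that shape is `torch.cartesian_prod`, which behaves like `itertools.product` over 1D tensors, varying the last coordinate fastest. A sketch:

```python
import torch

t = torch.arange(0, 1, 0.2)          # tensor([0.0, 0.2, 0.4, 0.6, 0.8])
# Every (x, y) pair from t x t, in the row order shown in the question
points = torch.cartesian_prod(t, t)  # shape (25, 2)
```

For more than two dimensions, `torch.cartesian_prod(t, t, t)` extends the same idea.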
77,037,897 | 13,801,302 | How to create an embeddings model in langchain | <p>I want to pass the hidden_states of llama-2 as an embeddings model to the method <code>FAISS.from_documents(<filepath>, <embedding_model>)</code>.
Currently, I have the llama-2 model and get embeddings for a string.</p>
<pre class="lang-py prettyprint-override"><code>model_config = transformers.AutoConfig.from_pretrained(
model_id,
output_hidden_states=True,
use_auth_token=auth_token,
)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Input data to test the code
input_text = "Hello World!"
encoded_input = tokenizer(input_text, return_tensors='pt')
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
trust_remote_code=True,
config=model_config,
quantization_config=bnb_config,
device_map='auto',
use_auth_token=auth_token
)
outputs = model(**encoded_input)
hidden_states = outputs.hidden_states
print(len(hidden_states)) # 33 for Llama-2: 1 (embeddings) + 32 (layers)
print(hidden_states[0].shape) # Shape of the embeddings
print(hidden_states[2])
</code></pre>
<p>Print outputs:</p>
<pre><code>33
torch.Size([1, 4, 4096])
tensor([[[ 0.0373, -0.5762, -0.0180, ..., 0.0962, -0.1099, 0.3767],
[ 0.0676, 0.0400, -0.0033, ..., 0.0655, 0.0278, -0.0079],
[-0.0160, 0.0157, 0.0478, ..., -0.0224, -0.0341, 0.0093],
[ 0.0229, -0.0104, 0.0217, ..., -0.0080, -0.0012, -0.0342]]],
dtype=torch.float16, grad_fn=<ToCopyBackward0>)
</code></pre>
<p>Now, I want to build the embeddings of my documents with Llama-2:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.vectorstores import FAISS
# <clean> is the file-path
FAISS.from_documents(clean, model)
</code></pre>
<pre><code>AttributeError: 'LlamaForCausalLM' object has no attribute 'embed_documents'
</code></pre>
<p>How can I solve it and how can I use Llama-2-Hidden-States for embedding?</p>
| <python><word-embedding><data-retrieval><large-language-model><llama> | 2023-09-04 12:56:04 | 0 | 621 | Christian01 |
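The `AttributeError` above hints at the fix: `FAISS.from_documents` expects a LangChain embeddings object exposing `embed_documents` and `embed_query`, not a raw `LlamaForCausalLM`. A duck-typed sketch of that interface, where `toy_encode` is a placeholder for running the model and pooling a hidden-state layer into one vector per text (in real use you would subclass LangChain's `Embeddings` base class and mean-pool e.g. `hidden_states[-1]`):

```python
class HiddenStateEmbeddings:
    """Minimal embeddings wrapper: FAISS.from_documents only needs
    embed_documents() and embed_query() on whatever object it is handed."""

    def __init__(self, encode_fn):
        # encode_fn: text -> list[float], e.g. mean-pooled Llama hidden states
        self._encode = encode_fn

    def embed_query(self, text):
        return self._encode(text)

    def embed_documents(self, texts):
        return [self._encode(t) for t in texts]


# Toy stand-in for "tokenize, run the model, mean-pool hidden_states[-1]"
def toy_encode(text):
    return [len(text) / 10.0, text.count(" ") * 1.0]


emb = HiddenStateEmbeddings(toy_encode)
vectors = emb.embed_documents(["Hello World!", "foo"])
```

With a real encoder plugged in, `FAISS.from_documents(docs, emb)` should then find the methods it is looking for.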
77,037,838 | 1,021,306 | Get the hex color code for each value of a column based on a cmap | <p>I have a dataframe <code>df_slope</code> with <code>Year</code> and <code>Slope</code> columns:</p>
<pre><code> Year Slope
0 1970 0.283962
1 1971 0.146310
2 1972 -0.548499
3 1973 0.680444
4 1974 0.801810
5 1975 0.354132
... ... ...
51 2021 0.180423
52 2022 0.597477
53 2023 0.097228
</code></pre>
<p>I would like to assign a hex color code to each Year from its Slope value based on a matplotlib cmap, e.g. "jet".</p>
<pre><code>norm = plt.Normalize(df_slope['Slope'].min(), df_slope['Slope'].max())
cmap = 'jet'  # 'Reds'
</code></pre>
<p><a href="https://i.sstatic.net/zQr5p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zQr5p.png" alt="enter image description here" /></a></p>
<p>What I wanted to achieve is to assign a color to a Year, where a color should be a value somewhere between the min (dark blue = min of Slope) and max (dark red = max of Slope) of the color map of "jet" based on the Slope value accordingly.</p>
<pre><code> Year Slope color_hex
0 1970 0.283962 ?
1 1971 0.146310 ?
2 1972 -0.548499 ?
3 1973 0.680444 ?
4 1974 0.801810 ?
5 1975 0.354132 ?
... ... ...
51 2021 0.180423 ?
52 2022 0.597477 ?
53 2023 0.097228 ?
</code></pre>
| <python><pandas> | 2023-09-04 12:46:53 | 2 | 3,579 | alextc |
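Note that `Normalize` takes `vmin` and `vmax` as two separate arguments. With that in place, one approach is to normalize each Slope into [0, 1], feed it through the colormap, and convert the RGBA tuple with `matplotlib.colors.to_hex`. A sketch on a few sample rows:

```python
import pandas as pd
from matplotlib import colormaps
from matplotlib.colors import Normalize, to_hex

df_slope = pd.DataFrame({
    "Year": [1970, 1971, 1972, 1973],
    "Slope": [0.283962, 0.146310, -0.548499, 0.801810],
})

# Map the Slope range onto [0, 1], then onto the "jet" colormap
norm = Normalize(vmin=df_slope["Slope"].min(), vmax=df_slope["Slope"].max())
cmap = colormaps["jet"]

# cmap(norm(s)) -> RGBA tuple; to_hex -> "#rrggbb"
df_slope["color_hex"] = df_slope["Slope"].map(lambda s: to_hex(cmap(norm(s))))
```

The minimum Slope lands on jet's dark blue end and the maximum on its dark red end, as described in the question.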
77,037,764 | 11,261,546 | Using from, import, as and dot on the same object | <p>I have a folder with several <code>.py</code> files, each of them has several functions, but I would like to import just one function (or any object) from each file.</p>
<p>Using this tree:</p>
<pre class="lang-none prettyprint-override"><code>.
├── caller.py
└── functions
├── A.py
├── B.py
└── C.py
</code></pre>
<p>In file <code>caller</code> I would like to do something like:</p>
<pre><code>from functions import (
A.funcA as func_a,
B.funcA as func_b,
C.funcA as func_c
)
</code></pre>
<p>This gives a syntax error at <code>A.funcA</code> on the dot. Is there a way of achieving this without repeating the package path every time?:</p>
<pre><code>from functions.A import funcA as func_a
from functions.B import funcB as func_b
from functions.C import funcC as func_c
</code></pre>
<p>Note: in my case <code>functions.A</code> is longer, something like <code>X.Y.Z.functions.A</code>, that's why I really want to avoid rewriting.</p>
| <python><python-3.x> | 2023-09-04 12:36:04 | 2 | 1,551 | Ivan |
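Python's grammar does not allow dotted names inside `from ... import (...)`, but `importlib` can express the same intent in a single loop. A sketch using standard-library modules as stand-ins for the hypothetical `X.Y.Z.functions.A` package (swap in your own module paths and attribute names):

```python
from importlib import import_module

# (module path, attribute, local alias) - stdlib stand-ins for
# ("X.Y.Z.functions.A", "funcA", "func_a") etc.
_imports = [
    ("json", "dumps", "func_a"),
    ("os.path", "join", "func_b"),
]

for module_path, attr, alias in _imports:
    # Equivalent to: from <module_path> import <attr> as <alias>
    globals()[alias] = getattr(import_module(module_path), attr)
```

The long common prefix is written once in the `_imports` table (or factored into a variable), so nothing is repeated per line.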
77,037,676 | 14,777,704 | How can I read the contents of an xlsx file stored on Sharepoint? | <p>I need to read a sheet called "Input" from the file data.xlsx present in a Sharepoint location.</p>
<p>Code snippet -</p>
<pre><code>import openpyxl
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.client_credential import ClientCredential
from office365.sharepoint.files.file import File
import io
import errno
ctx_auth = AuthenticationContext("something.com/sites/mainLocation/NextLocation/dir?csf=something&web=something&e=something")
print(ctx_auth.acquire_token_for_user(Username, Password))
if ctx_auth.acquire_token_for_user(Username, Password):
ctx = ClientContext(site_url, ctx_auth)
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Authentication successful")
response = File.open_binary(ctx, "data.xlsx")
df = pd.read_excel(io.BytesIO(response.content),sheet_name="Input", header=1, usecols=["c1","c3","c4","c9"],engine='openpyxl')
print(df)
</code></pre>
<p>Output -</p>
<pre><code>Authentication successful
<_io.BytesIO object at 0x0000024930156E30>
Traceback (most recent call last):
File "C:\Users\xx\Doc\Store.py", line 42, in <module>
df = pd.read_excel(io.BytesIO(response.content),sheet_name="Input",engine='openpyxl')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandas\io\excel\_base.py", line 478, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandas\io\excel\_base.py", line 1513, in __init__
self._reader = self._engines[engine](self._io, storage_options=storage_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandas\io\excel\_openpyxl.py", line 549, in __init__
super().__init__(filepath_or_buffer, storage_options=storage_options)
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandas\io\excel\_base.py", line 540, in __init__
self.book = self.load_workbook(self.handles.handle)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandas\io\excel\_openpyxl.py", line 560, in load_workbook
return load_workbook(
^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openpyxl\reader\excel.py", line 344, in load_workbook
reader = ExcelReader(filename, read_only, keep_vba,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openpyxl\reader\excel.py", line 123, in __init__
self.archive = _validate_archive(fn)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openpyxl\reader\excel.py", line 95, in _validate_archive
archive = ZipFile(filename, 'r')
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\zipfile.py", line 1302, in __init__
self._RealGetContents()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1520.0_x64__qbz5n2kfra8p0\Lib\zipfile.py", line 1369, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
</code></pre>
<p>I am not allowed to convert the required data to .csv. I have to read the sheet from the xlsx file only, and keep it in a pandas DataFrame.</p>
<p>I can read the file very well when it is kept in my local folder using -</p>
<pre><code>df = pd.read_excel("localpath/data.xlsx",sheet_name="Input", header=1, usecols=["c1","c3","c4","c9"])
</code></pre>
<p>When reading from the SharePoint location, it's causing an issue, although authentication is successful.
Please help.</p>
<p>Adding -</p>
<p>As suggested by Abdul, I printed response.status_code. It showed 200. But the error is persisting in the line - df=pd.read_excel(....). Kindly help.</p>
<p>EDIT -</p>
<p>I tried another way but that also did not work. Here is the code</p>
<pre><code>site_url="direct link to the file data.xlsx which I found by opening the file in SharePoint, then clicking Share -> Copy Link"
ctx_auth = AuthenticationContext(site_url)
print(ctx_auth.acquire_token_for_user(Username, Password))
if ctx_auth.acquire_token_for_user(Username, Password):
ctx = ClientContext(site_url, ctx_auth)
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Authentication successful")
response = File.open_binary(ctx, site_url)
print(response.status_code)
#save data to BytesIO stream
bytes_file_obj = io.BytesIO()
bytes_file_obj.write(response.content)
bytes_file_obj.seek(0) #set file object to start
#read excel file and each sheet into pandas dataframe
df = pd.read_excel(bytes_file_obj, sheet_name="Input",header=1, usecols=["c1","c3","c4","c9"], engine='openpyxl')
print(df.shape)
</code></pre>
<p>Authentication was successful. Response status was 200. But the same error persisted -</p>
<pre><code>zipfile.BadZipFile: File is not a zip file
</code></pre>
<p>Please someone kindly advise.</p>
<p>EDIT -</p>
<p>I used xlrd instead of openpyxl -</p>
<pre><code>df = pd.read_excel(io.BytesIO(response.content),sheet_name="Input", header=1, usecols=["c1","c3","c4","c9"],engine='xlrd')
</code></pre>
<p>Error is as follows -</p>
<pre><code>in bof_error
raise XLRDError('Unsupported format, or corrupt file: ' + msg)
xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\r\n<!DOCT'
</code></pre>
<p>I also tried this -</p>
<pre><code>df = pd.read_html(bytes_file_obj)
</code></pre>
<p>Error -</p>
<pre><code>ValueError: No tables found
</code></pre>
<p>Please help.</p>
<p>Edit -</p>
<p>As suggested, I have printed <code>response.content.decode()</code>.</p>
<p>Given below is the beginning of the response -</p>
<pre><code><!DOCTYPE html>
<html lang="en-us".....
</code></pre>
| <python><pandas><excel><sharepoint> | 2023-09-04 12:22:25 | 1 | 375 | MVKXXX |
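The decoded response beginning with `<!DOCTYPE html>` shows the server returned an HTML page (typically a login or redirect page) rather than the workbook, which is why every Excel engine rejects the bytes. A cheap sanity check before handing bytes to pandas is to look for the ZIP magic bytes that every `.xlsx` file starts with; the two byte strings below are illustrative stand-ins for `response.content`:

```python
def looks_like_xlsx(data: bytes) -> bool:
    # .xlsx files are ZIP archives and always start with "PK\x03\x04"
    return data[:4] == b"PK\x03\x04"


# Stand-ins for what File.open_binary might return
html_page = b'<!DOCTYPE html>\r\n<html lang="en-us">...'
real_xlsx = b"PK\x03\x04" + b"\x00" * 20

html_ok = looks_like_xlsx(html_page)  # False: an HTML login/redirect page
xlsx_ok = looks_like_xlsx(real_xlsx)  # True: actual workbook bytes
```

When the check fails, the fix lies in the request itself: `File.open_binary` wants the server-relative path of the file (e.g. `/sites/.../Shared Documents/data.xlsx`), not a "Copy Link" sharing URL.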
77,037,590 | 7,274,764 | Running python with -c flag in C# | <p>I'm trying to run a Python command from within a C# process. This command is a one-liner and not a script. But strangely enough, when starting the Python process with the <code>-c</code> flag, i.e. the command flag instead of the script flag, I encounter some problems.</p>
<p>First an example of my code that runs a script and works fine:</p>
<pre class="lang-cs prettyprint-override"><code>using (Process python = new Process())
{
python.StartInfo.FileName = "python";
python.StartInfo.Arguments = @"-u C:\path\to\script.py";
python.StartInfo.UseShellExecute = false;
python.StartInfo.RedirectStandardOutput = true;
python.StartInfo.RedirectStandardError = true;
python.StartInfo.CreateNoWindow = true;
python.ErrorDataReceived += Print;
python.OutputDataReceived += Print;
python.Start();
python.BeginErrorReadLine();
python.BeginOutputReadLine();
python.WaitForExit();
}
static void Print(object sender, DataReceivedEventArgs e)
{
Console.WriteLine(e.Data);
}
</code></pre>
<p>With the following script as an example:</p>
<pre class="lang-py prettyprint-override"><code>import time
for i in range(5):
print("Hello World " + str(i))
time.sleep(1)
</code></pre>
<p>The output is retrieved as it is created (not waiting until the process is finished):</p>
<pre><code>Hello World 0
Hello World 1
Hello World 2
Hello World 3
Hello World 4
null
null
</code></pre>
<p>Then, if I change the StartInfo.Arguments to <code>"-uc 'print(1)'"</code>, the only two output lines I get are the null values at the end, but the <code>1</code> you would expect to retrieve from the print statement is missing:</p>
<pre><code>null
null
</code></pre>
<p>And lastly, if I add any space to the executing command (such as <code>"-uc 'print(1); print(2)'"</code>), I get an EOL message on StdErr:</p>
<pre><code> File "<string>", line 1
'print(1);
^
SyntaxError: EOL while scanning string literal
null
null
</code></pre>
<p>And for completeness; running that from the command line works just fine:</p>
<pre><code>$> python -uc 'print(1); print(2)'
1
2
</code></pre>
<p>Any clues how to solve both these problems?</p>
| <python><c#> | 2023-09-04 12:11:02 | 1 | 2,380 | Jurgy |
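A likely culprit: on Windows, arguments pass through `CreateProcess` quoting rules, where single quotes have no special meaning, so `'print(1);` and `print(2)'` arrive as two separate arguments with the quote characters included. Using escaped double quotes in the C# `Arguments` string (e.g. `"-uc \"print(1); print(2)\""`) should fix it. As an analogous illustration in Python, passing the command as a single list element sidesteps quoting entirely, because each list item becomes one argv entry:

```python
import subprocess
import sys

# The whole one-liner is a single argv entry; no shell quoting is involved.
result = subprocess.run(
    [sys.executable, "-uc", "print(1); print(2)"],
    capture_output=True, text=True, check=True,
)
```

The same idea applies in C# via `ProcessStartInfo.ArgumentList` (available on .NET Core 2.1+), which quotes each argument for you.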
77,037,560 | 18,308,621 | How to count work days between date columns with Polars | <p>I have the following DataFrame.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌────────────┬───────────────┐
│ date ┆ maturity_date │
│ --- ┆ --- │
│ date ┆ date │
╞════════════╪═══════════════╡
│ 2000-01-04 ┆ 2000-01-17 │
│ 2000-01-04 ┆ 2000-02-15 │
│ 2000-01-04 ┆ 2000-03-15 │
│ 2000-01-04 ┆ 2000-04-17 │
│ 2000-01-04 ┆ 2000-05-15 │
└────────────┴───────────────┘
""")
</code></pre>
<p>I'm trying to get the number of workdays between <code>date</code> and <code>maturity_date</code> (not counting Saturday and Sunday).</p>
<p>I'd also like to calculate diff days that use a given calendar like a trade date calendar of stock market which is different from a normal calendar.</p>
<p>I use this <code>date_ranges</code> to count workdays, but it seems only a little faster than <code>map_elements</code></p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.date_ranges("date", "maturity_date")
.list.eval(pl.element().dt.weekday() <= 5)
.list.count_matches(True)
.alias("workdays_diff")
# pl.concat_list("date", "maturity_date").map_elements(lambda x: get_work_days(x[0], x[1]))
# .alias("workdays_diff")
)
</code></pre>
<pre><code>shape: (5, 3)
┌────────────┬───────────────┬───────────────┐ # ┌────────────────┐
│ date ┆ maturity_date ┆ workdays_diff │ # │ tradedate_diff │
│ --- ┆ --- ┆ --- │ # │ --- │
│ date ┆ date ┆ u32 │ # │ i64 │
╞════════════╪═══════════════╪═══════════════╡ # ╞════════════════╡
│ 2000-01-04 ┆ 2000-01-17 ┆ 10 │ # │ 10 │
│ 2000-01-04 ┆ 2000-02-15 ┆ 31 │ # │ 21 │
│ 2000-01-04 ┆ 2000-03-15 ┆ 52 │ # │ 42 │
│ 2000-01-04 ┆ 2000-04-17 ┆ 75 │ # │ 65 │
│ 2000-01-04 ┆ 2000-05-15 ┆ 95 │ # │ 80 │
└────────────┴───────────────┴───────────────┘ # └────────────────┘
</code></pre>
<p>Is there a faster way?</p>
<p>Is there also a way to calculate <code>tradedate_diff</code>?</p>
| <python><dataframe><python-polars> | 2023-09-04 12:07:28 | 4 | 331 | Hakase |
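If NumPy is acceptable as an intermediate step, `np.busday_count` does both jobs vectorized: weekend-only counting by default, and arbitrary trading calendars via its `holidays` argument. A sketch on the first two rows (note the `+1` day, since `busday_count` excludes the end date while the `workdays_diff` column above counts both endpoints):

```python
import numpy as np

starts = np.array(["2000-01-04", "2000-01-04"], dtype="datetime64[D]")
ends = np.array(["2000-01-17", "2000-02-15"], dtype="datetime64[D]")

# +1 day so the maturity date itself is counted, matching the question's output
workdays = np.busday_count(starts, ends + np.timedelta64(1, "D"))

# For an exchange calendar, pass the non-trading weekdays as holidays:
# np.busday_count(starts, ends + 1, holidays=["2000-01-17", ...])
```

The arrays can come straight from the DataFrame (`df["date"].to_numpy().astype("datetime64[D]")`), and the resulting array can be attached back as a new column.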
77,037,340 | 1,619,706 | Python Stored procedure sql injection | <p><strong>Question:</strong><br />
I would like to know if the stored procedure below is safe from SQL injection. I know that the general guideline is to use parametrized queries; however, I am not sure how to use parametrized queries in a stored procedure.</p>
<p><strong>Context</strong>:<br />
I am using Flask/Python to get the data based on the parameters. The data is retrieved from a stored procedure, and for the database connection, the code is using the pymysql library.</p>
<pre><code>param1 = request.get('param1')
param2 = request.get('param2')
param3 = request.get('param3')
cursor.callproc('STORED_PROCEDURE_NAME', (param1, param2, param3))
</code></pre>
<p>In the stored procedure definition, the parameters are defined and a select query is written. Based on the other parameters, other parameters will be added in the select query.</p>
<pre><code>CREATE PROCEDURE `PROCEDURE_NAME`(PARAM1 VARCHAR(500), PARAM2 VARCHAR(500), PARAM3 VARCHAR(500))
BEGIN
set @query = concat("select * from TABLE1 WHERE column1 = SOME_VALUE");
IF PARAM1 is not null THEN
    set @query = concat(@query, " AND COLUMN2 like '%", PARAM1, "%'");
END IF;
IF PARAM2 is not null THEN
    set @query = concat(@query, " AND COLUMN3 like '%", PARAM2, "%'");
END IF;
IF PARAM3 is not null THEN
    set @query = concat(@query, " AND COLUMN4 like '%", PARAM3, "%'");
END IF;
SELECT @query;
prepare st from @query;
execute st;
END
</code></pre>
| <python><mysql><stored-procedures><sql-injection> | 2023-09-04 11:29:05 | 1 | 331 | jackysatpal |
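Concatenating parameter values into the query text, as the procedure above does, is injectable. The safe pattern is to keep user-supplied values out of the SQL text and bind them as parameters, either inside the procedure (build the text with `?` or `@p` placeholders only, then `PREPARE st FROM @query; EXECUTE st USING @p1, ...;`) or on the Python side. A sketch of the Python side, reusing the question's table and column names; the driver call is commented out since there is no live connection here:

```python
def build_query(param1, param2, param3):
    # User values never enter the SQL text; only %s placeholders do.
    sql = "SELECT * FROM TABLE1 WHERE column1 = %s"
    params = ["SOME_VALUE"]
    for column, value in [("COLUMN2", param1),
                          ("COLUMN3", param2),
                          ("COLUMN4", param3)]:
        if value is not None:
            sql += f" AND {column} LIKE %s"
            params.append(f"%{value}%")  # wildcards go in the bound value
    return sql, params


sql, params = build_query("a'; DROP TABLE users;--", None, "x")
# With pymysql: cursor.execute(sql, params)  -- the driver escapes the values
```

Even a hostile `param1` like the one above stays inert, because it travels as data rather than as SQL text.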
77,037,297 | 9,615,185 | Creating a Databricks Job with Separate Tasks and Parameters from within a notebook | <p>I'm trying to create a Databricks job using the Databricks Python SDK, and I want to set up separate tasks within the job that call a Python file with specific parameters. Here's a simplified version of my code:</p>
<pre><code>import os
import time
import json
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs
# Initialize the Databricks Workspace client
w = WorkspaceClient()
# Define the path to my Databricks notebook
script_path = f'jobs/sync_clients.py'
# Define job parameters
config1 = {} # pydantic
config2 = {} # pydantic
some_string = "/path/to/something"
client = {}
job_params = {
"config1": config1.model_dump(),
"config2": config2.model_dump(),
"some_string": some_string,
"client": client
}
# Create the job with separate tasks
created_job = w.jobs.create(name=f'client-{time.time_ns()}',
tasks=[
jobs.Task(description=f"{client.ID}",
existing_cluster_id=dbutils.notebook.entry_point.getDbutils().notebook().getContext().tags().get("clusterId"),
spark_python_task=jobs.SparkPythonTask(python_file=script_path, parameters=job_params),
task_key=client.ID,
timeout_seconds=0,
)
])
# Run the job
run_by_id = w.jobs.run_now(job_id=created_job.job_id).result()
# Delete the job (optional)
w.jobs.delete(job_id=created_job.job_id)
</code></pre>
<p>but I am getting this error when trying to create it:</p>
<p><code>TypeError: Object of type JavaObject is not JSON serializable</code></p>
<p>Ideally I would serialize the job parameters as a string with <code>json.dumps()</code>
and load it on the job with <code>json.loads()</code></p>
<p>In addition, this will ultimately run on a new or different cluster, but since that cluster is not created yet, I'm trying to run it on the same cluster.</p>
<p>I'm quite new to Databricks and feel lost in the SDK documentation.</p>
<p>full error:</p>
<pre><code>TypeError Traceback (most recent call last)
File <command-22633648340896>:18
10 notebook_path = f'jobs/sync_clients_metadata_tables.py'
12 job_params = {
13 "config1": config1.model_dump(),
14 "config2": config2.model_dump(),
15 "some_string": some_string,
16 "client": first_client
17 }
---> 18 created_job = w.jobs.create(name=f'client-metadata-ingestion-{time.time_ns()}',
19 tasks=[
20 jobs.Task(description=f"{first_client.ID}",
21 existing_cluster_id= dbutils.notebook.entry_point.getDbutils().notebook().getContext().tags().get("clusterId"),
22 spark_python_task=jobs.SparkPythonTask(python_file=notebook_path,parameters=job_params),
23 task_key=first_client.ID,
24 timeout_seconds=0,
25 )
26 ])
28 run_by_id = w.jobs.run_now(job_id=created_job.job_id).result()
30 w.jobs.delete(job_id=created_job.job_id)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/databricks/sdk/service/jobs.py:2902, in JobsAPI.create(self, access_control_list, compute, continuous, email_notifications, format, git_source, health, job_clusters, max_concurrent_runs, name, notification_settings, parameters, run_as, schedule, tags, tasks, timeout_seconds, trigger, webhook_notifications)
2900 if webhook_notifications is not None: body['webhook_notifications'] = webhook_notifications.as_dict()
2901 headers = {'Accept': 'application/json', 'Content-Type': 'application/json', }
-> 2902 res = self._api.do('POST', '/api/2.1/jobs/create', body=body, headers=headers)
2903 return CreateResponse.from_dict(res)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/databricks/sdk/core.py:997, in ApiClient.do(self, method, path, query, headers, body, raw, files, data)
995 headers = {}
996 headers['User-Agent'] = self._user_agent_base
--> 997 response = self._session.request(method,
998 f"{self._cfg.host}{path}",
999 params=self._fix_query_string(query),
1000 json=body,
1001 headers=headers,
1002 files=files,
1003 data=data,
1004 stream=raw)
1005 try:
1006 self._record_request_log(response, raw=raw or data is not None or files is not None)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/requests/sessions.py:575, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
562 # Create the Request.
563 req = Request(
564 method=method.upper(),
565 url=url,
(...)
573 hooks=hooks,
574 )
--> 575 prep = self.prepare_request(req)
577 proxies = proxies or {}
579 settings = self.merge_environment_settings(
580 prep.url, proxies, stream, verify, cert
581 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/requests/sessions.py:486, in Session.prepare_request(self, request)
483 auth = get_netrc_auth(request.url)
485 p = PreparedRequest()
--> 486 p.prepare(
487 method=request.method.upper(),
488 url=request.url,
489 files=request.files,
490 data=request.data,
491 json=request.json,
492 headers=merge_setting(
493 request.headers, self.headers, dict_class=CaseInsensitiveDict
494 ),
495 params=merge_setting(request.params, self.params),
496 auth=merge_setting(auth, self.auth),
497 cookies=merged_cookies,
498 hooks=merge_hooks(request.hooks, self.hooks),
499 )
500 return p
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/requests/models.py:371, in PreparedRequest.prepare(self, method, url, headers, files, data, params, auth, cookies, hooks, json)
369 self.prepare_headers(headers)
370 self.prepare_cookies(cookies)
--> 371 self.prepare_body(data, files, json)
372 self.prepare_auth(auth, url)
374 # Note that prepare_auth must be last to enable authentication schemes
375 # such as OAuth to work on a fully prepared request.
376
377 # This MUST go after prepare_auth. Authenticators could add a hook
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/requests/models.py:511, in PreparedRequest.prepare_body(self, data, files, json)
508 content_type = "application/json"
510 try:
--> 511 body = complexjson.dumps(json, allow_nan=False)
512 except ValueError as ve:
513 raise InvalidJSONError(ve, request=self)
File /usr/lib/python3.9/json/__init__.py:234, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
232 if cls is None:
233 cls = JSONEncoder
--> 234 return cls(
235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
238 **kw).encode(obj)
File /usr/lib/python3.9/json/encoder.py:199, in JSONEncoder.encode(self, o)
195 return encode_basestring(o)
196 # This doesn't pass the iterator directly to ''.join() because the
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
File /usr/lib/python3.9/json/encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
252 else:
253 _iterencode = _make_iterencode(
254 markers, self.default, _encoder, self.indent, floatstr,
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
File /usr/lib/python3.9/json/encoder.py:179, in JSONEncoder.default(self, o)
160 def default(self, o):
161 """Implement this method in a subclass such that it returns
162 a serializable object for ``o``, or calls the base implementation
163 (to raise a ``TypeError``).
(...)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
TypeError: Object of type JavaObject is not JSON serializable
</code></pre>
| <python><databricks><jobs> | 2023-09-04 11:22:07 | 0 | 1,592 | Dor Lugasi-Gal |
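A likely cause of `Object of type JavaObject is not JSON serializable`: the `dbutils ... tags().get("clusterId")` lookup returns a Py4J `JavaObject`, and `SparkPythonTask.parameters` must be a list of plain strings, so the request body cannot be serialized. Converting the tag with `str()` and shipping the whole dict as one JSON string addresses both. A runnable sketch with a stand-in class, since the real object comes from Py4J inside a notebook:

```python
import json


class FakeJavaObject:
    """Hypothetical stand-in for the Py4J object that tags().get() returns."""

    def __str__(self):
        return "0904-123456-abcdefg"


# str() forces a plain Python string, which json can serialize
cluster_id = str(FakeJavaObject())

job_params = {"config1": {"a": 1}, "some_string": "/path/to/something"}
# SparkPythonTask.parameters is a list of strings, so pack the dict as JSON;
# inside sync_clients.py: job_params = json.loads(sys.argv[1])
parameters = [json.dumps(job_params)]
```

In the question's code, that means `existing_cluster_id=str(dbutils...get("clusterId"))` and `SparkPythonTask(python_file=script_path, parameters=[json.dumps(job_params)])`.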