QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,026,689 | 22,070,773 | What is the nanobind equivalent of the Boost.Python function manage_new_object? | <p>I'm trying to wrap a function that exposes a raw, heap-allocated pointer, and I want Python to be responsible for its lifetime. This is usually achieved in Boost.Python by using the <code>return_value_policy</code> of <code>manage_new_object</code> when wrapping it. Is there a similar way to achieve this in nanobind?</p>
| <python><c++><nanobind> | 2023-09-02 01:54:51 | 1 | 451 | got here |
77,026,565 | 5,287,011 | Count certain occurrences with multiple conditions: groupby(), count or len | <p>I have the following dataframe tran:</p>
<pre><code>SourceLoc DestLoc SourceEq DestEq TypeT
Loc1 Loc1 Eq1 Eq2 Make
Loc1 Loc2 Eq1 Eq1 Transfer
Loc1 Loc3 Eq2 Eq2 Transfer
Vend Loc2 Eq1 Eq1 Buy
Loc2 Loc3 Eq1 Eq2 MakeTransfer
Loc2 Loc3 Eq2 Eq2 Transfer
</code></pre>
<p>I want to count the number of rows with the following conditions:</p>
<ol>
<li>TypeT is ONLY Transfer OR MakeTransfer</li>
<li>It must be counted by:
SourceLoc AND DestLoc (for all SourceEq AND DestEq combinations)</li>
</ol>
<p>My code (the results are stored in the new column "count"):</p>
<pre><code>tran=tran.groupby(['SourceLoc','DestLoc']).size().reset_index().rename(columns={0:'count'})
</code></pre>
<p>I do not know how to incorporate the condition tran['TypeT'].isin(['Transfer', 'MakeTransfer'])</p>
<p>It must be done BEFORE the count() function is used. Ideally, I want a new dataframe with the column "count".</p>
<p>The result must be:</p>
<pre><code>SourceLoc DestLoc count
Loc1 Loc2 1
Loc1 Loc3 1
Loc2 Loc3 2
</code></pre>
<p>Appreciate your help very much!!</p>
| <python><pandas><dataframe><group-by><count> | 2023-09-02 00:52:06 | 2 | 3,209 | Toly |
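A sketch of one way to do this, filtering on the condition first and then grouping (the sample data is reproduced from the question):

```python
import pandas as pd

tran = pd.DataFrame({
    "SourceLoc": ["Loc1", "Loc1", "Loc1", "Vend", "Loc2", "Loc2"],
    "DestLoc":   ["Loc1", "Loc2", "Loc3", "Loc2", "Loc3", "Loc3"],
    "SourceEq":  ["Eq1", "Eq1", "Eq2", "Eq1", "Eq1", "Eq2"],
    "DestEq":    ["Eq2", "Eq1", "Eq2", "Eq1", "Eq2", "Eq2"],
    "TypeT":     ["Make", "Transfer", "Transfer", "Buy", "MakeTransfer", "Transfer"],
})

# Apply the TypeT condition BEFORE counting, then count per
# (SourceLoc, DestLoc) pair regardless of the equipment columns.
mask = tran["TypeT"].isin(["Transfer", "MakeTransfer"])
out = (tran[mask]
       .groupby(["SourceLoc", "DestLoc"])
       .size()
       .reset_index(name="count"))
print(out)
```

`reset_index(name="count")` names the count column directly, avoiding the `rename(columns={0: 'count'})` step.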
77,026,453 | 13,250,589 | Python's collections.Counter equivalent in C# or .NET | <p>While solving problems in Python I often had to use Python's <code>collections.Counter</code> object.</p>
<p>When I started solving the same problems in C# I couldn't find an equivalent method or class for it.</p>
<p>I was just wondering if there is any?</p>
<h3>Explanation of <code>collections.Counter</code></h3>
<p>Python's <code>collections.Counter</code> takes in an iterable and returns a hash-map, where the keys are all the unique elements of the original iterable and the corresponding value is the number of times that key appears in the original iterable.</p>
| <python><c#><dictionary><hashmap><counter> | 2023-09-01 23:56:54 | 3 | 885 | Hammad Ahmed |
77,026,441 | 14,509,604 | How do filtering/slicing in pandas work? Why do they always occupy RAM? | <p>I've been doing data science for a few years, but only just now have I realized that when I filter columns with <strong>any</strong> method, it occupies RAM.</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
df = pd.DataFrame({'A' : ['spam', 'eggs', 'spam', 'eggs'] * 175000,
'B' : ['alpha', 'beta', 'gamma' , 'foo'] * 175000,
'C' : [np.random.choice(pd.date_range(datetime.datetime(2013,1,1),datetime.datetime(2013,1,3))) for i in range(700000)],
'D' : np.random.randn(700000),
'E' : np.random.random_integers(0,4,700000)})
</code></pre>
<p>Now run any of these methods a few times.</p>
<pre><code>df.iloc[:, :4]
df.filter(regex="A|B|C|D")
df.loc[:, ["A", "B", "C", "D"]]
df[["A", "B", "C", "D"]]
df[df["A"]=="spam"]
</code></pre>
<p>Every time you run it, the df is loaded in memory and never released. Why? I'm just exploring a dataset; this should not happen.</p>
<p>By contrast, these methods do not increase memory usage:</p>
<pre><code>df.iloc[0:500000]
df.loc[:500000]
</code></pre>
<p>So it seems that whenever I use columns in the filtering criteria, the df is stored in RAM. How am I supposed to do data exploration with large datasets?</p>
| <python><pandas> | 2023-09-01 23:50:55 | 1 | 329 | juanmac |
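What this question is describing is that column selection and boolean masks return new DataFrames (copies), and in an interactive session the REPL's output history keeps references to each displayed result, so the memory appears never to be freed. A sketch distinguishing the cases (exact copy/view behavior can differ slightly under pandas Copy-on-Write):

```python
import gc

import numpy as np
import pandas as pd

df = pd.DataFrame({"A": np.arange(1_000_000), "B": np.arange(1_000_000)})

# Column selection and boolean filtering materialize new copies:
subset = df[["A", "B"]]
filtered = df[df["A"] % 2 == 0]

# A plain positional row slice is a view over the same buffers,
# which is why it doesn't add memory:
view = df.iloc[0:500_000]
print(np.shares_memory(view["A"].to_numpy(), df["A"].to_numpy()))

# Copies stay alive as long as something references them. In a REPL or
# notebook, the display history (e.g. `_`, `Out[n]`) keeps a hidden
# reference to each displayed result, which is what makes repeated
# filtering look like a leak. Dropping the references releases the RAM:
del subset, filtered
gc.collect()
```

For genuinely large datasets, chunked reading (`pd.read_csv(..., chunksize=...)`) or selecting only the needed columns at load time avoids materializing full copies in the first place.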
77,026,403 | 2,905,762 | AES CTR encrypt on Flutter and failed to decrypt on python | <p>I have a project that needs to encrypt text on the front end using Flutter and decrypt it on Python.</p>
<p>After I searched, I found the way to use AES CTR in this link
<a href="https://stackoverflow.com/questions/62580708/same-encryption-between-flutter-and-python-aes">same encryption between flutter and python [ AES ]</a></p>
<p>After I tried that, it worked on Flutter, but it didn't work in Python because I failed to <code>pip install pycrypto</code>. Then I found another way at <a href="https://cryptobook.nakov.com/symmetric-key-ciphers/aes-encrypt-decrypt-examples" rel="nofollow noreferrer">https://cryptobook.nakov.com/symmetric-key-ciphers/aes-encrypt-decrypt-examples</a> that I thought could work.</p>
<p>When I tried it, it failed too.</p>
<p>Here is my flutter code to encrypt:</p>
<pre><code>final plainText = "abcde";//encoded;
final key = enc.Key.fromUtf8('12345678901234567890123456789012');
final iv = enc.IV.fromUtf8('12345');
final encrypter = enc.Encrypter(enc.AES(key, mode: enc.AESMode.ctr, padding: null));
final encrypted = encrypter.encrypt(plainText, iv: iv);
print(encrypted.base64); // prints: yJpzWuw=
</code></pre>
<p>and here is my Python code:</p>
<pre><code> print(text) => yJpzWuw=
plaintext = text.encode('utf-8')
key = '12345678901234567890123456789012'.encode("utf-8")
iv = '12345'.encode('utf-8')
iv_int = int.from_bytes(iv)
aes = pyaes.AESModeOfOperationCTR(key, pyaes.Counter(iv_int))
decrypted = aes.decrypt(plaintext)
print('Decrypted:', decrypted)
</code></pre>
<p>This is the result when I print the decrypted</p>
<pre><code>Decrypted: b'\xe7:\xcb\x94\xbe\xbb\xf8\x1c'
</code></pre>
<p>Am I missing something? Or is there another way to encrypt in Flutter and then decrypt in Python? I would be glad for any help.</p>
<p><strong>== Solution from the comments ==
Replace the Python code with the code below:</strong></p>
<pre><code>ciphertext = base64.b64decode('yJpzWuw=') # Base64 decode the Base64 encoded ciphertext
key = '12345678901234567890123456789012'.encode("utf-8")
iv = '12345\0\0\0\0\0\0\0\0\0\0\0'.encode('utf-8') # Pad the IV to 16 bytes with 0x00 values
iv_int = int.from_bytes(iv, 'big') # Convert the binary data into an int (big endian order)
aes = pyaes.AESModeOfOperationCTR(key, pyaes.Counter(iv_int))
decrypted = aes.decrypt(ciphertext)
print('Decrypted:', decrypted)
</code></pre>
| <python><flutter><encryption><aes><ctr> | 2023-09-01 23:33:26 | 0 | 694 | Hansen |
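The two key points of the solution can be checked with the standard library alone (a sketch; pyaes or pycryptodome still performs the actual decryption):

```python
import base64

# 1) The Flutter side zero-pads the 5-byte IV '12345' to the 16-byte AES
#    block size; the Python side must build its CTR counter from the same
#    16 bytes, or the keystreams diverge.
iv = b"12345"
iv_padded = iv.ljust(16, b"\x00")          # '12345' + eleven 0x00 bytes
iv_int = int.from_bytes(iv_padded, "big")  # big-endian, as in the fix above

# 2) The ciphertext is Base64 text, so it must be decoded to raw bytes
#    before decryption. Decrypting the Base64 string itself (as in the
#    failing attempt) yields garbage bytes.
ciphertext = base64.b64decode("yJpzWuw=")
print(len(ciphertext))  # 5 -- matching the 5-byte plaintext "abcde"
```

Note also that `int.from_bytes(iv)` without a byteorder argument only works on Python 3.11+; passing `'big'` explicitly is both portable and matches the counter layout.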
77,026,147 | 3,821,009 | Jpype java to python type conversions | <p>Say I create a simple java type with jpype:</p>
<pre><code>jt = jpype.java.lang.System.currentTimeMillis()
print(jt)
print(type(jt))
pt = int(jt)
print(pt)
print(type(pt))
</code></pre>
<p>This produces:</p>
<pre><code>1693604984710
<java class 'JLong'>
1693604984710
<class 'int'>
</code></pre>
<p>as expected.</p>
<p>Now my problem here is that I have a class with a bunch of getters that I want to call and put the results into a Python dict. That part works fine; however, I don't get <code>int</code>s, I get <code>JLong</code>s (and similarly for other Java classes), which causes issues downstream.</p>
<p>Is there something in jpype that allows standard java types (including boxed types such as <code>Integer</code>) to be converted to python types or do I have to do it manually?</p>
| <python><jpype> | 2023-09-01 21:53:00 | 0 | 4,641 | levant pied |
77,025,944 | 4,224,575 | Temporal topological sort of NetworkX graph | <p>I have a <a href="https://networkx.org/documentation/stable/index.html" rel="nofollow noreferrer">NetworkX graph</a> which was constructed using simple node and edge types, with the addition of a timestamp field on nodes:</p>
<pre class="lang-py prettyprint-override"><code>class Node:
key: str
timestamp: int
class Edge:
n1: Node # Source node
n2: Node # Target node
</code></pre>
<p>Here's an example of constructing such a graph assuming I already have a list of <code>edges</code> and a list of <code>nodes</code>:</p>
<pre class="lang-py prettyprint-override"><code>graph = nx.DiGraph()
for e in edges:
graph.add_edge(e.n1, e.n2)
for n in nodes:
graph.add_node(n.key, timestamp=n.timestamp)
</code></pre>
<ul>
<li><p>The graph is a "git graph-like" entity and the timestamp represents a "commit date". I want a topological sort that also "respects" dates, so am I correct to assume that <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.lexicographical_topological_sort.html" rel="nofollow noreferrer">lexicographical topological sort</a> is the tool for the job, i.e. it sorts topologically and breaks ties using the date?</p>
</li>
<li><p>What is the correct way to invoke this algorithm? I'm trying:</p>
<pre class="lang-py prettyprint-override"><code>networkx.lexicographical_topological_sort(graph, key=lambda n: n.timestamp)
</code></pre>
<p>but the <code>timestamp</code> information was provided as keyword argument to the node constructor. The documentation only mentions cases where the lambda directly utilizes the node. Am I correct to assume I can access its members, or is it converted to a string (or hash) by the time my lambda function accesses it?</p>
</li>
</ul>
| <python><networkx> | 2023-09-01 20:54:57 | 1 | 5,968 | Lorah Attkins |
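On the second point: the values handed to the `key` lambda are the graph's node objects themselves, i.e. the keys passed to `add_node` (here, strings), not wrappers carrying the attributes. The timestamp therefore has to be looked up via the attribute dict, e.g. `key=lambda n: graph.nodes[n]['timestamp']` (and, assuming edges are added with the same keys, `graph.add_edge(e.n1.key, e.n2.key)`). The algorithm itself is just Kahn's method with a min-heap on the tie-break key, which this stdlib-only sketch illustrates:

```python
import heapq
from collections import defaultdict

def timestamped_toposort(edges, timestamps):
    """Topological order; among ready nodes, smallest timestamp first.

    edges: iterable of (src, dst) node keys
    timestamps: mapping {node key: timestamp} covering every node
    """
    succ = defaultdict(list)
    indeg = {k: 0 for k in timestamps}
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    # Heap keyed by (timestamp, key) so ties break deterministically.
    ready = [(timestamps[k], k) for k, d in indeg.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for nxt in succ[node]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(ready, (timestamps[nxt], nxt))
    if len(order) != len(timestamps):
        raise ValueError("graph has a cycle")
    return order

# Two roots: 'b' has the earlier commit date, so it is emitted first.
print(timestamped_toposort([("a", "c"), ("b", "c")],
                           {"a": 20, "b": 10, "c": 30}))  # ['b', 'a', 'c']
```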
77,025,869 | 18,150,609 | Can't Connect Python Service to Mongo Server in Docker | <p>I have a Docker Compose file with two services running on the same network.
The purpose is to scrape data and store it in a database.</p>
<p>When I attempt to run the program <code>db_connector.py</code> inside service <code>block_scraper</code> I get an error (Timeout):</p>
<pre><code>pymongo.errors.ServerSelectionTimeoutError: storage:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 64f24e6e941b3aa3dea67cc9, topology_type: Unknown, servers: [<ServerDescription ('storage', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('storage:27017: [Errno 111] Connection refused')>]>
</code></pre>
<p>As seen in the error, my most recent attempts are using the Docker service name, <code>storage</code>, to refer to the service.</p>
<p>I was able to get this to work fine when running both services natively outside Docker. How can I get them to talk to each other inside Docker?</p>
<hr />
<p>Project Tree:<br />
|-Dockerfile<br />
|-docker-compose.yml<br />
|-requirements.txt<br />
|-app/<br />
|-|-main.py<br />
|-|-db_connector.py</p>
<p>The <code>docker-compose.yml</code> file:</p>
<pre><code>version: "3.2"
services:
storage:
image: mongo:7.0.1-rc0
container_name: mongodb
restart: unless-stopped
ports:
- 27017-27019:27017-27019
volumes:
- ./data:/data/db
networks:
- scraper_network
block_scraper:
build: .
depends_on:
- storage
volumes:
- ./app:/app
environment:
- STORAGE_CONNECTION_STR=mongodb://storage:27017
networks:
- scraper_network
networks:
scraper_network:
</code></pre>
<p>The Dockerfile (for <code>block_scraper</code> service):</p>
<pre><code># Use the official Python 3.11 image as a parent image
FROM python:3.11-slim-buster
# Set the working directory inside the container
WORKDIR /app
# Copy in and install requirements
COPY ./requirements.txt /app/requirements.txt
RUN pip install -Ur requirements.txt
# Copy the application code into the container
COPY . /app
# Specify the command to run your main program
CMD ["python", "main.py"]
</code></pre>
<p>The <code>db_connector.py</code> file:</p>
<pre><code>import os
import pymongo
# Create a single MongoClient instance and reuse it across calls to get_collection
mongo_host = os.environ.get("STORAGE_CONNECTION_STR")
client = pymongo.MongoClient(mongo_host)
def get_collection(mongo_db_name="blocks101", mongo_collection_name="block"):
database = client[mongo_db_name]
return database[mongo_collection_name]
def close():
client.close()
</code></pre>
<p>For simplicity, lets say this is the <code>main.py</code> program:</p>
<pre><code>from db_connector import get_collection
get_collection()
</code></pre>
| <python><mongodb><docker><docker-compose> | 2023-09-01 20:34:25 | 1 | 364 | MrChadMWood |
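One likely culprit here: `depends_on` only orders container startup; it does not wait until `mongod` is actually accepting connections, so the scraper can race ahead and hit `Connection refused`. A common workaround is to retry the first connection with backoff. This is a generic sketch (in real code the attempt would call something like `client.admin.command('ping')`; here a stub stands in for the database):

```python
import time

def retry(attempt, retries=5, base_delay=0.5, exceptions=(Exception,)):
    """Call attempt() until it succeeds, sleeping base_delay * 2**n between tries."""
    for n in range(retries):
        try:
            return attempt()
        except exceptions:
            if n == retries - 1:
                raise
            time.sleep(base_delay * 2 ** n)

# Demo with a stub that fails twice before succeeding, mimicking a
# database that isn't ready yet when the client container starts.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionRefusedError("storage:27017 not ready yet")
    return "connected"

print(retry(flaky, base_delay=0.01))  # connected
```

Alternatives include a `healthcheck` on the `storage` service combined with `depends_on: condition: service_healthy`, or simply relying on pymongo's own `serverSelectionTimeoutMS` being long enough for mongod to come up.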
77,025,860 | 2,673,149 | Setuptools -- Replacing Manifest.in with pyproject.toml | <p>Does setuptools support replacing the MANIFEST.in file, which specifies files that should only be included in the sdist distribution, with a declaration in pyproject.toml?</p>
| <python><setuptools><pep517><sdist> | 2023-09-01 20:29:31 | 1 | 400 | Spitfire19 |
77,025,859 | 9,947,159 | How to parallelize a for loop that takes a dataframe as input | <p>I have a function that takes a dataframe as an input and loops through each row to perform a bunch of operations and writes the final output to a file in append mode. I want to parallelize the execution of this function such that instead of iterating through all the rows of the dataframe one by one, it will loop over slices or chunks of my original dataframe in parallel. To help with creating slices, col1 of the dataframe includes numbers to indicate slices of rows. I am using this column to create array of dataframes to pass to <code>executor.submit</code> and also in my function to create slice specific output file names. That way each process writes to file associated with a specific slice without contention.</p>
<p>The problem is that I keep running into this error.</p>
<p><code>AttributeError: Can't get attribute 'my_df_func' on <module '__main__' (built-in)</code></p>
<p>Here's what a simplified version of my code looks like.</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor
import pandas as pd
import os
def my_df_func(my_df_slice):
for col1, col2, col3, col4 in zip(my_df_slice.col1, my_df_slice.col2, my_df_slice.col3, my_df_slice.col4):
file_df = 'info_'+str(col2)+'&id='+col3+'&abc='+col4
output_file = os.path.join(output_dir, f'out_{col1}.csv')
file_df.to_csv(output_file, mode = 'a', header=False if os.path.isfile(output_file) else True, index = False)
if __name__ == '__main__':
with ProcessPoolExecutor(max_workers=4) as executor:
my_df_array = [x for __, x in my_df.groupby('col1')]
[executor.submit(my_df_func, my_df_slice) for my_df_slice in my_df_array]
</code></pre>
<p>Oddly, this works but it executes the function for each slice sequentially.</p>
<pre><code>if __name__ == '__main__':
with ProcessPoolExecutor(max_workers=4) as executor:
my_df_array = [x for __, x in my_df.groupby('col1')]
[executor.submit(my_df_func(my_df_slice)) for my_df_slice in my_df_array]
</code></pre>
<p>Any idea what might be going on?</p>
| <python><parallel-processing> | 2023-09-01 20:29:17 | 1 | 5,863 | Rajat |
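Two things are going on here. The `AttributeError` (note `<module '__main__' (built-in)>`) means the worker processes cannot import `my_df_func` — it must be defined in an importable module, not in an interactive session or notebook, because `ProcessPoolExecutor` pickles a *reference* to the function and each worker re-imports it. And the "working" second variant is not parallel at all: `executor.submit(my_df_func(my_df_slice))` calls the function immediately in the main process and submits only its return value. The correct call shape is `submit(fn, arg)`. A runnable sketch of that shape (using `ThreadPoolExecutor`, which shares the same API, so it runs anywhere without pickling concerns):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

# submit(fn, arg): the callable and its argument are handed to the pool,
# so the work runs in the workers. submit(fn(arg)) would run fn right
# here, sequentially, and submit only its return value.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(square, x) for x in range(8)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

With `ProcessPoolExecutor` the pattern is identical; the extra requirements are that the submitted function live at module top level in an importable file and that its arguments be picklable.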
77,025,766 | 6,843,153 | Post hyperlink to slack using slack sdk for python | <p>I want to post to slack a string that includes a hyperlink, and I'm using slack SDK for python. The string is made up this way:</p>
<pre><code>f":notepadicon: *{doc_to_report['desc']}*: {doc_to_report['file_name']} [link]({doc_to_report['url']})\n"
</code></pre>
<p>I know the markdown code for hyperlink is <code>[text](url)</code>, and it should post the text with a hyperlink to the URL string, but my code is posting this:</p>
<pre><code>Description of file: name_of_the_file.csv [link](https://url_string.com)
</code></pre>
<p>You can see that the markdown hyperlink code (<code>[]()</code>) is not being considered.</p>
<p>The funny part is that if I copy what the script posts to Slack and paste it into a Slack post, it is properly formatted.</p>
<p>What am I doing wrong?</p>
| <python><hyperlink><markdown><slack-api> | 2023-09-01 20:08:28 | 1 | 5,505 | HuLu ViCa |
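The explanation is that messages sent through the API are rendered as Slack's own `mrkdwn`, not standard Markdown, and `mrkdwn` writes links as `<url|text>` rather than `[text](url)` (pasting into the client goes through a different, Markdown-aware composer, which is why the copy-paste works). A sketch of building the string that way, with the dict contents invented to mirror the question:

```python
doc_to_report = {
    "desc": "Description of file",
    "file_name": "name_of_the_file.csv",
    "url": "https://url_string.com",
}

# mrkdwn link syntax: <URL|display text>
line = (f":notepadicon: *{doc_to_report['desc']}*: "
        f"{doc_to_report['file_name']} <{doc_to_report['url']}|link>\n")
print(line)
```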
77,025,762 | 1,397,061 | Idiomatic way to check if any elements in a polars DataFrame are null | <p>Is there a more idiomatic way than:</p>
<p><code>df.select(pl.any_horizontal(pl.all().null_count()))[0, 0]</code></p>
<p>to check if any elements in a polars DataFrame <code>df</code> are null?</p>
| <python><dataframe><python-polars> | 2023-09-01 20:06:44 | 2 | 27,225 | 1'' |
77,025,537 | 11,332,999 | I can use random to get a random value from a range. How do I make a math evaluator that does the opposite and also has access to math modules? | <p>I want to build a math evaluator for private use (no security limitations when doing eval operations) that can give me the possible outcomes of mathematical statements. Assume random() here refers to range(), and note that the built-in Python range doesn't work with float values. While it should do normal evals by default, return values should be either a list or a set.</p>
<p>So, it should eval basic math.</p>
<pre><code>"3" -> [3]
"round(419.9)" -> [420]
"round(3.14, 1)" -> [3.1]
</code></pre>
<p>It should also do list evals for all possible outcomes.</p>
<pre><code>"[1,2,3]" -> [1,2,3]
</code></pre>
<p>It should also recursively evaluate multiple lists. For further clarification, the evaluator should break down expressions containing lists and use a for loop to substitute values, then call evaluate_expression(for_loop_substituted_expression), which returns a list; the results should then be combined to get the resultant list. See the comment below.</p>
<pre><code>"[2,1] - 1" -> [1,0]
"[2,1] - [0,1]" -> [2, 1, 0] # With internal evaluation being i) split into multiple expression via for, ii) If code can pass above test case simply call the function recursively for "[2,1] - 0" and "[2,1] - 1" iii) Combine the results combine([2,1], [1,0]) -> [2,1,0].
"[2,1] * [1,2]" -> [4,2,1]
</code></pre>
<p>Then one just needs to code the random function as a range and return a list, which when evaluated gives the answer.</p>
<pre><code>"random(3)" -> [0, 1, 2]
"random(3,4)" -> [3]
"round(random(0.1, 0.5), 1)" -> [0.1, 0.2, 0.3, 0.4]
</code></pre>
<p>Finally, it should also have variables and functions from stdlib math modules like</p>
<pre><code>"log(e)" -> [1] # math.log(math.e)
"tau/pi" -> [2.0] # math.tau/math.pi
</code></pre>
<p>While this test case is out of the scope of the question, it would still be cool if something like this could be coded. I have certainly seen some evaluators process this fine, whereas when I tried using sympy, ast-based eval and normal eval, there were significant errors.</p>
<pre><code>"2(3)" -> [6]
"2(3(3))" -> [18]
</code></pre>
<p>I managed to come up with code that passes some test cases, but it's hard to get it all correct.</p>
<pre class="lang-py prettyprint-override"><code>import random
def evaluate_expression(expression):
# Define safe functions and constants to use within eval
def custom_function(expression, roundval=0):
if isinstance(expression, list):
return [round(item, roundval) for item in expression]
else:
return round(expression, roundval)
safe_dict = {
'random': lambda *args: list(range(int(args[0]), int(args[1]) + 1)),
'round': custom_function,
}
# Add some common mathematical constants
safe_dict.update({
'pi': 3.141592653589793,
'e': 2.718281828459045
})
# Try to evaluate the expression
try:
result = eval(expression, {"__builtins__": None}, safe_dict)
# If the result is a single number, return it in a list
if isinstance(result, (int, float)):
return [result]
# If the result is a list, return it as is
elif isinstance(result, list):
return result
else:
raise ValueError("Unsupported result type")
except (SyntaxError, NameError, TypeError, ValueError) as e:
return str(e)
</code></pre>
<p>These are the test cases</p>
<pre><code># Test cases
assert evaluate_expression("3") == [3]
assert evaluate_expression("round(419.9)") == [420]
assert evaluate_expression("round(3.14, 1)") == [3.1]
assert evaluate_expression("[1,2,3]") == [1,2,3]
assert evaluate_expression("[2,1] - 1") == [1,0]
assert evaluate_expression("[2,1] - [0,1]") == [2, 1, 0]
assert evaluate_expression("[2,1] * [1,2]") == [4,2,1]
assert evaluate_expression("random(3)") == [0,1,2]
assert evaluate_expression("random(3, 4)") == [3]
assert evaluate_expression("round(random(0.1, 0.5), 1)") == [0.1, 0.2, 0.3, 0.4]
assert evaluate_expression("log(e)") == [1]
assert evaluate_expression("tau/pi") == [2.0]
#out of scope
assert evaluate_expression("2(3)") == [6]
assert evaluate_expression("2(3(3))") == [18]
</code></pre>
<p>I think if this test case passes, something like "random(1,9) * random(1,9)" shouldn't error out and should produce "[1,2,3,4,5,6,7,8]*[1,2,3,4,5,6,7,8]", which, when evaluated, should generate a big list. As a side note, I also managed to write a custom random range generator.</p>
<pre class="lang-py prettyprint-override"><code>def custom_random(start, end):
ten = 10**len(str(start).split('.')[-1])
if isinstance(start, int):
mul = 1
elif isinstance(start, float):
if len(str(start)) == 3:
mul = 0.1
elif len(str(start)) == 4:
mul = 0.01
if isinstance(start, int) and isinstance(end, int):
return list(range(start, end))
elif isinstance(start, float) and isinstance(end, float):
return [round(i * mul, len(str(start).split('.')[1])) for i in range(int(start * ten), int(end * ten) + 1)]
else:
raise TypeError("Unsupported input type")
print(custom_random(1, 5))
print(custom_random(3, 4))
print(custom_random(10, 50))
print(custom_random(0.1, 0.5))  # also prints 0.5, but it should only go up to 0.4; not a big bug anyway
print(custom_random(0.01, 0.05))  # also prints 0.05, but it should only go up to 0.04; not a big bug anyway
print(custom_random(0.05, 0.09))
</code></pre>
<h2>Edit</h2>
<p>Answers need to pass most test cases to be accepted; the out-of-scope ones needn't pass. custom_random can be improved, but it doesn't necessarily need to be. My current code passes the basic tests but not expressions with two functions at once or expressions containing a list.</p>
<h2>Edit</h2>
<p>I posted an answer. I am keen to see how to update it to pass these two test cases without hurting the other test cases.</p>
<pre><code>assert evaluate_expression("2(3)") == [6]
assert evaluate_expression("2(3(3))") == [18]
</code></pre>
| <python><math><range><expression><eval> | 2023-09-01 19:15:36 | 2 | 629 | Machinexa |
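A stdlib-only sketch of the recursive approach described in the question: walk the AST, let every node evaluate to a list of possible values, and take the Cartesian product at operators and calls. Float ranges for `random()` and the implicit-multiplication cases are deliberately left out of scope here, and result lists come back in first-seen order, so a couple of the question's expected lists match only as sets:

```python
import ast
import itertools
import math
import operator

# Names the evaluator may use; everything else is rejected with a KeyError.
SAFE = {n: getattr(math, n) for n in ("log", "sin", "cos", "sqrt", "e", "pi", "tau")}
SAFE["round"] = round
SAFE["random"] = lambda *a: list(range(*(int(x) for x in a)))  # integer ranges only

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _dedup(values):
    out = []
    for v in values:
        if v not in out:
            out.append(v)
    return out

def _ev(node):
    """Recursively evaluate a node to a list of possible values."""
    if isinstance(node, ast.Expression):
        return _ev(node.body)
    if isinstance(node, ast.Constant):
        return [node.value]
    if isinstance(node, ast.Name):
        return [SAFE[node.id]]
    if isinstance(node, ast.List):
        return _dedup(v for elt in node.elts for v in _ev(elt))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return [-v for v in _ev(node.operand)]
    if isinstance(node, ast.BinOp):
        return _dedup(_OPS[type(node.op)](a, b)
                      for a, b in itertools.product(_ev(node.left), _ev(node.right)))
    if isinstance(node, ast.Call):
        func = SAFE[node.func.id]
        results = []
        for args in itertools.product(*map(_ev, node.args)):
            r = func(*args)
            results.extend(r if isinstance(r, list) else [r])
        return _dedup(results)
    raise ValueError(f"unsupported syntax: {ast.dump(node)}")

def evaluate_expression(expression):
    return _ev(ast.parse(expression, mode="eval"))

print(evaluate_expression("[2,1] - [0,1]"))  # [2, 1, 0]
print(evaluate_expression("random(3)"))      # [0, 1, 2]
print(evaluate_expression("tau/pi"))         # [2.0]
```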
77,025,425 | 10,859,585 | Squeeze x-axis dates on altair line chart | <p>I have a burndown graph that shows dates on the x-axis and remaining objects on the y-axis. Sometimes I have a large amount of "stagnant" data (no objects completed for X-amount of time). The stagnant data is overpowering the line chart and viewers are more interested in the end part of the line chart. Given two known dates, is there a way to squeeze a range of x-axis dates?</p>
<p>There is much more data, but here is a quick reproducible example (using streamlit as well).</p>
<pre><code>import datetime
import streamlit as st
import altair as alt
import pandas as pd
x_values = [datetime.date(2022, 1, 7), datetime.date(2022, 1, 28), datetime.date(2022, 1, 29), datetime.date(2022, 1, 30), datetime.date(2022, 1, 31), datetime.date(2022, 2, 1), datetime.date(2022, 2, 2), datetime.date(2022, 2, 3), datetime.date(2022, 2, 4), datetime.date(2022, 2, 5), datetime.date(2022, 2, 6), datetime.date(2022, 2, 7), datetime.date(2022, 2, 8), datetime.date(2022, 2, 9), datetime.date(2022, 2, 10), datetime.date(2022, 2, 11), datetime.date(2022, 2, 12),datetime.date(2022, 12, 8), datetime.date(2023, 8, 20), datetime.date(2023, 8, 21), datetime.date(2023, 8, 22), datetime.date(2023, 8, 23), datetime.date(2023, 8, 24), datetime.date(2023, 8, 25), datetime.date(2023, 8, 26), datetime.date(2023, 8, 27), datetime.date(2023, 8, 28), datetime.date(2023, 8, 29), datetime.date(2023, 8, 30), datetime.date(2023, 8, 31), datetime.date(2023, 9, 1), datetime.date(2023, 9, 2), datetime.date(2023, 9, 3), datetime.date(2023, 9, 4), datetime.date(2023, 9, 5), datetime.date(2023, 9, 6), datetime.date(2023, 9, 7), datetime.date(2023, 9, 8), datetime.date(2023, 9, 9)]
y_values = [3521, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 2200, 1808.695652173913, 1417.391304347826, 1417.391304347826, 1417.391304347826, 1026.086956521739, 634.782608695652, 243.47826086956502, 0]
# Burndown line graph
line_data = pd.DataFrame({'x': x_values, 'y': y_values})
chart_width = 400
chart_height = 600
line_chart = alt.Chart(line_data).mark_line().encode(
x=alt.X('x:T', axis=alt.Axis(title='Date', grid=True, ticks=True, tickCount=12,
format='%Y-%m-%d', labelAngle=-45, tickColor='pink', tickSize=10, tickWidth=3)),
y=alt.Y('y:Q', axis=alt.Axis(title='Tiles'))
)
st.altair_chart(line_chart, use_container_width=True)
</code></pre>
<p><a href="https://i.sstatic.net/2r3LI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2r3LI.png" alt="enter image description here" /></a></p>
| <python><datetime><streamlit><altair> | 2023-09-01 18:49:08 | 1 | 414 | Binx |
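Altair's temporal scale is linear, so the long flat stretch dominates the plot. One workaround (a sketch; the function and its parameters are my own invention) is to map dates onto a compressed numeric axis before plotting — shrinking the known quiet interval by some factor — and encode that value as quantitative (`:Q`) instead of temporal, adding tick labels back manually:

```python
import datetime

def compress_dates(dates, start, end, factor=0.05):
    """Map dates to day offsets, shrinking the span [start, end] by `factor`.

    Days before `start` and after `end` keep their true spacing; days
    inside the quiet interval are squeezed by `factor`. The result stays
    monotonic, so line order is preserved.
    """
    origin = min(dates)
    span = (end - start).days
    out = []
    for d in dates:
        days = (d - origin).days
        if d <= start:
            out.append(float(days))
        elif d <= end:
            out.append((start - origin).days + (d - start).days * factor)
        else:
            out.append((start - origin).days + span * factor + (d - end).days)
    return out

dates = [datetime.date(2022, 1, 7), datetime.date(2022, 6, 1),
         datetime.date(2023, 8, 20), datetime.date(2023, 8, 27)]
x = compress_dates(dates, datetime.date(2022, 2, 12), datetime.date(2023, 8, 20))
print(x)
```

An alternative with less bookkeeping is to encode the dates as ordinal (`x='x:O'`), which spaces the observed dates evenly and drops the empty gap entirely.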
77,025,354 | 8,139,261 | With Pydantic V2 and model_validate, how can I create a "computed field" from an attribute of an ORM model that IS NOT part of the Pydantic model | <p>The context here is that I am using FastAPI and have a <code>response_model</code> defined for each of the paths. The endpoint code returns a SQLAlchemy ORM instance which is then passed, I believe, to <code>model_validate</code>. The <code>response_model</code> is a Pydantic model that filters out many of the ORM model attributes (internal ids, etc.) and performs some transformations and adds some <code>computed_field</code>s. This all works just fine so long as all the attributes you need are part of the Pydantic model. It seems like <code>__pydantic_extra__</code> along with <code>model_config = ConfigDict(from_attributes=True, extra='allow')</code> would be a great way to hold on to some of the extra attributes from the ORM model and use them to compute new fields; however, it seems that when <code>model_validate</code> is used to create the instance, <code>__pydantic_extra__</code> remains empty. Is there some trick to getting this behavior in a clean way?</p>
<p>I have a way to make this work, but it involves dynamically adding new attributes to my ORM model, which leaves me with a bad feeling and a big <code>FIXME</code> in my code.</p>
<p>Here is some code to illustrate the problem. Note that the second test case fails.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any
from pydantic import BaseModel, ConfigDict, computed_field, model_validator
class Foo:
def __init__(self):
self.original_thing = "foo"
class WishThisWorked(BaseModel):
"""
__pydantic_extra__ does not pick up the additional attributes when model_validate is used to instantiate
"""
model_config = ConfigDict(from_attributes=True, extra='allow')
@computed_field
@property
def computed_thing(self) -> str:
try:
return self.__pydantic_extra__["original_thing"] + "_computed"
except Exception as e:
print(e)
return None
model = WishThisWorked(original_thing="bar")
print(f'WishThisWorked (original_thing="bar") worked: {model.computed_thing == "bar_computed"}')
# this is the case that I actually want to work
model_orm = WishThisWorked.model_validate(Foo())
print(f'WishThisWorked model_validate(Foo()) worked: {model_orm.computed_thing == "foo_computed"}')
class WorksButKludgy(BaseModel):
"""
I don't like having to modify the instance passed to model_validate
"""
model_config = ConfigDict(from_attributes=True)
computed_thing: str
@model_validator(mode="before")
@classmethod
def _set_fields(cls, values: Any) -> Any:
if type(values) is Foo:
# This is REALLY gross
values.computed_thing = values.original_thing + "_computed"
elif type(values) is dict:
values["computed_thing"] = values["original_thing"] + "_computed"
return values
model = WorksButKludgy(original_thing="bar")
print(f'WorksButKludgy (original_thing="bar") worked: {model.computed_thing == "bar_computed"}')
model_orm = WorksButKludgy.model_validate(Foo())
print(f'WorksButKludgy model_validate(Foo()) worked: {model_orm.computed_thing == "foo_computed"}')
</code></pre>
| <python><fastapi><pydantic> | 2023-09-01 18:32:31 | 1 | 1,908 | jsnow |
77,025,330 | 378,622 | Rewriting complex exponential as trig functions | <p>I want to rewrite complex exponentials as trig functions like you see in this picture: <a href="https://i.sstatic.net/PDuPc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PDuPc.png" alt="enter image description here" /></a></p>
<p>My attempt with e^(2 pi I / 8) is not working, because it instantly converts it to square roots:</p>
<pre><code>from sympy import E, pi, I, re
root = E**(2 * pi * I / 8)
print(root)
print(root.rewrite(cos))
print(re(root).rewrite(cos))
</code></pre>
<p>From these print statements I get:</p>
<blockquote>
<p>exp(I*pi/4)</p>
<p>sqrt(2)/2 + sqrt(2)*I/2</p>
<p>sqrt(2)/2</p>
</blockquote>
<p>I want to get cos(2 pi/8) for the real part, and sin(2 pi/8) for the imaginary part. But as you can see, it directly evaluates to square roots. How can I get the trig representation?</p>
| <python><sympy><trigonometry><complex-numbers> | 2023-09-01 18:25:57 | 1 | 26,851 | Ben G |
77,025,285 | 823,633 | Python expressions in f-strings do not return same result as str() conversion for subclasses? | <p>For example, we override the <code>__str__</code> method in a <code>Decimal</code> subclass and then call it in two different ways:</p>
<pre><code>from decimal import Decimal
class D(Decimal):
def __str__(self):
return super().normalize().__str__()
num = D('1.0')
print(f"{num}")
print(f"{str(num)}")
</code></pre>
<p>outputs</p>
<pre><code>1.0
1
</code></pre>
<p>But they should both output <code>1</code>. It seems like the first <code>print</code> is calling <code>Decimal.__str__</code> instead of <code>D.__str__</code> for some reason. Why does this happen and how do I fix the behavior?</p>
| <python><string><overriding><f-string> | 2023-09-01 18:17:38 | 1 | 1,410 | goweon |
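The reason is that f-strings don't call `str()` at all: `f"{num}"` is equivalent to `format(num, "")`, which dispatches to `__format__`, and `Decimal` supplies its own `__format__` that never consults the subclass's `__str__`. Overriding `__format__` as well makes both spellings agree:

```python
from decimal import Decimal

class D(Decimal):
    def __str__(self):
        return str(self.normalize())

    # f"{num}" calls format(num, ""), which goes through __format__;
    # Decimal defines its own __format__ that bypasses this class's
    # __str__, so it must be overridden too.
    def __format__(self, spec):
        return format(self.normalize(), spec)

num = D('1.0')
print(f"{num}")       # 1
print(f"{str(num)}")  # 1
```

Format specs still work through the normalized value, e.g. `f"{num:.2f}"` gives `1.00`.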
77,025,228 | 1,466,897 | How do you collect performance metrics from Chrome DevTools Protocol (cdp) with the python selenium RemoteWebDriver? | <p>I have an existing set of frontend automated tests that run in python and use pytest. I would like to add an automated test that, when run against a chromium based browser, collect performance metrics.</p>
<p>Assume that <code>browser</code> is a custom wrapper around <code>RemoteWebdriver</code> and that I am not able to create a <code>Chrome</code> driver inside these tests.</p>
<pre><code>import pytest
@pytest.mark.chrome_only
def test_lighthouse_performance(browser):
session = browser.bidi_connection()
# What do I do now?
</code></pre>
<p>The example <a href="https://www.selenium.dev/documentation/webdriver/bidirectional/chrome_devtools/#collect-performance-metrics" rel="nofollow noreferrer">at seleniumHQ</a> mentions <code>async</code>, but that isn't compatible with my current test framework and those methods don't exist on the <code>RemoteWebdriver</code> class.</p>
<p>In addition, there isn't any documentation on using the <a href="https://github.com/SeleniumHQ/selenium/blob/trunk/py/selenium/webdriver/remote/webdriver.py#L1030" rel="nofollow noreferrer"><code>bidi_connection()</code></a> method on the remote webdriver. The <a href="https://www.selenium.dev/documentation/webdriver/bidirectional/bidi_api/" rel="nofollow noreferrer">bidi_api doc page</a> is similarly unhelpful.</p>
| <python><selenium-webdriver><selenium-chromedriver><ui-automation> | 2023-09-01 18:07:25 | 1 | 392 | Muttonchop |
77,025,034 | 11,001,493 | How to horizontally stretch a signal | <p>I have a dataframe with two columns. This is an example of a signal that varies along depth:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame({"DEPTH":[5, 10, 15, 20, 25, 30, 35, 40, 45, 50],
"PROPERTY":[23, 29, 26, 17, 15, 20, 28, 25, 20, 17]})
plt.figure()
plt.plot(df["PROPERTY"],
df["DEPTH"])
plt.xlabel("PROPERTY")
plt.ylabel("DEPTH")
plt.gca().invert_yaxis()
plt.grid()
</code></pre>
<p><a href="https://i.sstatic.net/QJz5U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QJz5U.png" alt="enter image description here" /></a></p>
<p>But I would like to stretch my signal horizontally so that it appears wider. In my real data, my signal is smoothed (almost flat). For example, high values should become higher and low ones lower, like this new orange signal:</p>
<p><a href="https://i.sstatic.net/7W8tz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7W8tz.png" alt="enter image description here" /></a></p>
<p>Does anyone know how I could do it? If I multiply by a number, all the values would be higher.</p>
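One possible approach, assuming the goal is to exaggerate variation rather than shift the curve: stretch the values around their mean, so points above the mean move up and points below move down, while the center stays put. The factor <code>k</code> here is a hypothetical tuning knob, not from the question:

```python
import pandas as pd

df = pd.DataFrame({"DEPTH": [5, 10, 15, 20, 25, 30, 35, 40, 45, 50],
                   "PROPERTY": [23, 29, 26, 17, 15, 20, 28, 25, 20, 17]})

k = 2  # stretch factor (hypothetical; tune to taste)
center = df["PROPERTY"].mean()  # 22.0 for this data
# values above the mean move further up, values below move further down;
# the mean itself is preserved
df["STRETCHED"] = center + k * (df["PROPERTY"] - center)
```

Plotting <code>df["STRETCHED"]</code> against <code>DEPTH</code> works exactly like the original plot. If instead a fixed output range is wanted, a min-max rescale <code>(x - x.min()) / (x.max() - x.min()) * (hi - lo) + lo</code> would be an alternative.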
| <python><matplotlib><signal-processing><scaling> | 2023-09-01 17:28:38 | 1 | 702 | user026 |
77,024,952 | 10,574,250 | Sending email with python 3.10 not working with error EOF occurred in violation of protocol (_ssl.c:1007) | <p>I am trying to send an email using Python 3.10 but am getting an error that I can't find a solution to. I should mention I am using a company proxy, but this seems to work with Python 3.8, so I'm not sure why it doesn't now. The code looks as such:</p>
<pre><code>def send_email():
text = self.message.as_string()
with smtplib.SMTP(smtp_host, smtp_port) as server:
server.ehlo()
if server.has_extn('STARTTLS'):
server.starttls()
server.ehlo()
print(f"Sending email")
server.sendmail(sender_email, receiver_email, text)
print('Email sent')
</code></pre>
<p>The error is this:</p>
<pre><code>File "path\mail.py", line 102, in send_email
server.starttls()
File "path\env\lib\smtplib.py", line 790, in starttls
self.sock = context.wrap_socket(self.sock,
File "path\env\lib\ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "path\env\lib\ssl.py", line 1071, in _create
self.do_handshake()
File "path\env\lib\ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)
</code></pre>
<p>I think this is linked to OPENSSL but I can't get it to work with any changes. Does anyone know how I could fix this? Thanks</p>
<p>Update</p>
<p>The source of the error is that by default Python 3.8 uses a lower version of OpenSSL, which has a default cipher list (<code>ssl._DEFAULT_CIPHERS</code>, including <code>'DES-CBC3-SHA'</code>) that is considered weak. I have OpenSSL 3.0.10, thus it fails. I assume the handshake is failing due to authentication and ciphers (I'm no expert on this so I may be misunderstanding).</p>
<p>I could try to downgrade OpenSSL to 1.1.1 which I know works but I believe this is not recommended?</p>
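An alternative to downgrading, as a hedged sketch: OpenSSL 3 raises the default security level, which rejects legacy ciphers and small keys that some internal mail servers still offer. Instead of replacing OpenSSL, one can build an <code>SSLContext</code> that lowers the security level for this one connection and pass it to <code>starttls()</code> (whether this is acceptable depends on your security policy; the commented-out lines are only for servers without a verifiable certificate):

```python
import ssl

# OpenSSL 3 rejects legacy ciphers/key sizes by default; SECLEVEL=1 re-allows them
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT@SECLEVEL=1")
# For an internal server/proxy without a valid certificate you may also need:
# ctx.check_hostname = False
# ctx.verify_mode = ssl.CERT_NONE
# then pass the context explicitly:
# server.starttls(context=ctx)
```

This keeps the weakening scoped to one client instead of downgrading OpenSSL system-wide.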
| <python><email><ssl><openssl><starttls> | 2023-09-01 17:14:35 | 2 | 1,555 | geds133 |
77,024,850 | 11,922,765 | Download all xlsx files and metainformation from a website | <p>I am using Kaggle in the browser, and I am looking to see whether all of the below can be done in a Kaggle notebook.</p>
<p>Website url: <a href="https://www.eia.gov/electricity/gridmonitor/dashboard/electric_overview/US48/US48" rel="nofollow noreferrer">click here</a></p>
<p>Website screenshot:</p>
<p><a href="https://i.sstatic.net/v0h2p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v0h2p.png" alt="enter image description here" /></a></p>
<p>The downloadable files on the website are updated hourly and daily. I don't think any information on this website is going to change except the <code>xlsx</code> file content, as you can see on the website.</p>
<p>I want to download two things from this url: meta information and <code>xlsx</code> files you see in the screenshot.</p>
<p>First, I want to download this meta information and make it a dataframe as given below.
Right now I am manually selecting the values and copying them here, but I want to do it from the url:</p>
<pre><code>url_meta_df =
ID Type Name URL
CAL Region California https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_CAL.xlsx
CAR Region Carolinas https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_CAR.xlsx
CENT Region Central https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_CENT.xlsx
FLA Region Florida https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_FLA.xlsx
</code></pre>
<p>Second: download each <code>xlsx</code> file, save them.</p>
<p>My code: I have tried following based on an answer here in SO</p>
<pre><code>from bs4 import BeautifulSoup
import requests
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)
for link in soup.find_all('a'):
print(link.get('href'))
</code></pre>
<p>Present output:</p>
<pre><code>None
https://twitter.com/EIAgov
None
https://www.facebook.com/eiagov
None
#page-sub-nav
/
#
/petroleum/
/petroleum/weekly/
/petroleum/supply/weekly/
/naturalgas/
http://ir.eia.gov/ngs/ngs.html
/naturalgas/weekly/
/electricity/
/electricity/monthly/
....
</code></pre>
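One likely reason the <code>xlsx</code> links don't appear in the output above: the EIA dashboard builds that part of the page with JavaScript, so <code>requests</code> + BeautifulSoup only see the static navigation links; a browser-driven tool (selenium/playwright) or the site's underlying data API may be needed to get the real hrefs. Once a list of hrefs is available, the dataframe-building step could look like the sketch below. The <code>Region_XXX.xlsx</code> filename pattern and the <code>names</code> mapping are assumptions read off the screenshot, not verified against the live page:

```python
import re
import pandas as pd

# hrefs as collected from soup.find_all('a') (sample values shown)
hrefs = [
    None,
    "https://twitter.com/EIAgov",
    "https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_CAL.xlsx",
    "https://www.eia.gov/electricity/gridmonitor/knownissues/xls/Region_FLA.xlsx",
]

names = {"CAL": "California", "FLA": "Florida"}  # would come from each link's text
rows = []
for h in hrefs:
    if h and h.endswith(".xlsx"):
        m = re.search(r"Region_([A-Z0-9]+)\.xlsx$", h)
        if m:
            rows.append({"ID": m.group(1), "Type": "Region",
                         "Name": names.get(m.group(1)), "URL": h})
url_meta_df = pd.DataFrame(rows)

# downloading each file would then be roughly:
# for _, row in url_meta_df.iterrows():
#     r = requests.get(row["URL"])
#     with open(row["ID"] + ".xlsx", "wb") as f:
#         f.write(r.content)
```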
| <python><python-3.x><web-scraping><beautifulsoup> | 2023-09-01 16:55:40 | 2 | 4,702 | Mainland |
77,024,812 | 2,441,441 | Avoiding Spark withColumn when using complex column derivation logic | <p>I went through the link here <a href="https://stackoverflow.com/questions/70381876/why-is-my-code-repo-warning-me-about-using-withcolumn-in-a-for-while-loop">Why is my Code Repo warning me about using withColumn in a for/while loop?</a></p>
<p>which says that we need to avoid withColumn, and use this instead:</p>
<pre><code> my_other_columns = [...]
df = df.select(
*[col_name for col_name in df.columns if col_name not in my_other_columns],
*[F.col(col_name).alias(col_name + "_suffix") for col_name in my_other_columns]
)
</code></pre>
<p>This avoids any SO issues. My question is more of a followup. I have a complex nested when.otherwise logic that I'd like to use above.
So I have tried:</p>
<pre><code> df = df.select(
*[col_name for col_name in df.columns if col_name not in column_list],
*[when(...).otherwise(
when(...).otherwise(when(...).otherwise(...)))
.alias(col_name + "_new_name") for col_name in column_list],
*[df["result"] + abs(df[col_name])
.alias("result") for col_name in column_list]
)
</code></pre>
<p>I also tried from this helpful <a href="https://stackoverflow.com/questions/75373583/replace-withcolumn-with-a-df-select?rq=3">link</a>:</p>
<pre><code>cols_to_keep = [c for c in df.columns if c not in column_list]
cols_transformed1 = [when(...).otherwise(
when(....).otherwise(when(...).otherwise(...))).alias(c + "_new_name") for c in cols_to_compare]
cols_transformed2 = [df["result"] + abs(df[col_name]).alias("result") for col_name in cols_to_compare]
df.select(*cols_to_keep, *cols_transformed1, *cols_transformed2)
#throws invalid syntax at * at *cols_transformed1
</code></pre>
<p>The when part works when put inside a for loop, like:</p>
<pre><code>for col_name in column_list:
df = df.withColumn(col_name + "_new_name",
when(...).otherwise(
when(...).otherwise(when(...).otherwise(...)))
)
</code></pre>
<p>But it does not work when plugged into the top example. I get a syntax error at the '*' in the *[when(...)...] line.
I have tried various combinations, including removing the *, etc., but none have worked.</p>
<p>Is it possible to have this kind of complex when logic using the example at the top?
I'm not an expert in Python and struggling to get this working.</p>
<p>Update: It looks like the unpacking operator <code>*</code> doesn't work in calls like this since I'm using Python 2.7?
What would be a workaround if that is the case?</p>
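A note on the likely cause, sketched with plain lists: Python 2.7 allows at most one <code>*</code> unpacking per call, which would explain the SyntaxError at the second <code>*</code>. Since PySpark's <code>DataFrame.select</code> also accepts a single list, concatenating the lists first avoids multiple unpackings entirely (the column names and the <code>select</code> stand-in below are illustrative, not PySpark itself):

```python
cols_to_keep = ["a", "b"]
cols_transformed1 = ["c_new"]       # in real code: when(...).otherwise(...).alias(c + "_new_name")
cols_transformed2 = ["result_new"]  # in real code: (df["result"] + abs(df[c])).alias("result")

# one concatenated list instead of several * unpackings (valid on Python 2.7)
all_cols = cols_to_keep + cols_transformed1 + cols_transformed2

def select(*cols):  # stand-in for DataFrame.select, which also accepts a plain list
    return list(cols)

combined = select(*all_cols)  # or simply df.select(all_cols) in PySpark
```

Separately, note that in <code>df["result"] + abs(df[col_name]).alias("result")</code> the <code>.alias</code> binds only to the <code>abs(...)</code> term; the whole expression probably needs parentheses: <code>(df["result"] + abs(df[col_name])).alias("result")</code>.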
| <python><python-2.7><apache-spark><pyspark><apache-spark-sql> | 2023-09-01 16:47:55 | 1 | 1,397 | user2441441 |
77,024,785 | 22,434,294 | How to clone lines with a specific condition? | <p>I have some hyperlinks in a text file. I want to compare the link on the first line with the next adjacent line and create the intermediate links according to the trailing number.
For example,</p>
<p>Consider below adjacent links</p>
<pre><code>https://gp.to/ab/394/las69-02-09-2020/
https://gp.to/ab/394/las69-02-09-2020/4/
</code></pre>
<p>Here the output file will be:</p>
<pre><code>https://gp.to/ab/394/las69-02-09-2020/
https://gp.to/ab/394/las69-02-09-2020/2/
https://gp.to/ab/394/las69-02-09-2020/3/
https://gp.to/ab/394/las69-02-09-2020/4/
</code></pre>
<p>Similarly I need to do for other lines....</p>
<p>Example input:</p>
<pre><code>https://gp.to/ab/394/las69-02-09-2020/
https://gp.to/ab/394/las69-02-09-2020/4/
https://gp.to/ab/563/dimp-02-07-2023/
https://gp.to/ab/39443/omegs-02-07-2023/
https://gp.to/ab/39443/omegs-02-07-2023/3/
https://gp.to/ab/39443/lis-22-04-2018/
https://gp.to/ab/39443/lis-22-04-2018/2/
https://gp.to/ab/39443/madi-22-04-2018/
https://gp.to/ab/39443/madi-22-04-2018/5/
</code></pre>
<p>Example output:</p>
<pre><code>https://gp.to/ab/394/las69-02-09-2020/
https://gp.to/ab/394/las69-02-09-2020/2/
https://gp.to/ab/394/las69-02-09-2020/3/
https://gp.to/ab/394/las69-02-09-2020/4/
https://gp.to/ab/563/dimp-02-07-2023/
https://gp.to/ab/39443/omegs-02-07-2023/
https://gp.to/ab/39443/omegs-02-07-2023/2/
https://gp.to/ab/39443/omegs-02-07-2023/3/
https://gp.to/ab/39443/lis-22-04-2018/
https://gp.to/ab/39443/lis-22-04-2018/2/
https://gp.to/ab/39443/madi-22-04-2018/
https://gp.to/ab/39443/madi-22-04-2018/2/
https://gp.to/ab/39443/madi-22-04-2018/3/
https://gp.to/ab/39443/madi-22-04-2018/4/
https://gp.to/ab/39443/madi-22-04-2018/5/
</code></pre>
<p>I tried..</p>
<pre><code># Function to extract the number from a URL
def extract_number(url):
parts = url.split('/')
for part in parts[::-1]:
if part.isdigit():
return int(part)
return None
# Read the input file
with open('input.txt', 'r') as input_file:
lines = input_file.readlines()
output_lines = []
# Iterate through the input lines and generate output lines
for i in range(len(lines)):
current_url = lines[i].strip()
output_lines.append(current_url)
if i + 1 < len(lines):
next_url = lines[i + 1].strip()
current_number = extract_number(current_url)
next_number = extract_number(next_url)
if current_number is not None and next_number is not None:
for num in range(current_number + 1, next_number):
new_url = current_url.rsplit('/', 1)[0] + '/' + str(num) + '/'
output_lines.append(new_url)
# Write the output to a file
with open('output.txt', 'w') as output_file:
output_file.writelines(output_lines)
</code></pre>
<p>But I did not get desired output.</p>
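One likely issue with the attempt above: <code>extract_number</code> scans the path from the right and returns the first all-digit segment, so for a base URL like <code>.../ab/394/las69-.../</code> it returns <code>394</code> rather than "no trailing page number". A sketch that instead checks whether the next line is exactly the current URL plus a trailing <code>N/</code>, and fills in <code>2 .. N-1</code>:

```python
import re

def expand(lines):
    out = []
    for i, url in enumerate(lines):
        out.append(url)
        if i + 1 < len(lines):
            # does the next line equal this URL plus a trailing page number?
            m = re.fullmatch(re.escape(url) + r"(\d+)/", lines[i + 1])
            if m:
                # fill in the intermediate pages 2 .. N-1 (page N is the next line itself)
                out.extend("%s%d/" % (url, k) for k in range(2, int(m.group(1))))
    return out

lines = ["https://gp.to/ab/394/las69-02-09-2020/",
         "https://gp.to/ab/394/las69-02-09-2020/4/"]
result = expand(lines)
```

Reading/writing the files would stay as in the original (note: <code>writelines</code> does not add newlines, so write <code>line + "\n"</code> per entry).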
| <python><r> | 2023-09-01 16:43:03 | 2 | 577 | nicholaspooran |
77,024,725 | 7,086,220 | Convert the path string from VSCode python into windows style | <p>The standard Windows path string uses a backslash '\' to divide folders. In Python, it is either converted to a forward slash '/' or escaped to a double backslash '\\'. Is there a VSCode built-in trick or extension to convert between them? For now I have to copy the path, paste it into a temporary new file, search/replace, and then copy it again, which is too troublesome.
(Please note I'm asking how to do the operation with the IDE/editor, instead of how to program such a conversion.)</p>
| <python><visual-studio-code><path><copy-paste> | 2023-09-01 16:33:29 | 1 | 343 | jerron |
77,024,638 | 13,086,128 | Logarithm to base 3 in python | <p>I am trying to find the logarithm of a number to base 3 using python.</p>
<p>Here is the problem :</p>
<p><a href="https://leetcode.com/problems/power-of-three/description/" rel="nofollow noreferrer">https://leetcode.com/problems/power-of-three/description/</a></p>
<p>I am using this code:</p>
<pre><code>import math
math.log(k,3)
</code></pre>
<p>However, it is failing for some test cases like (there are some more test cases):</p>
<pre><code>math.log(243,3)
#output
4.999999999999999
math.log(59049,3)
#output
9.999999999999998
</code></pre>
<p>I do not want to use <code>round()</code> as it will round other numbers which are not the powers of 3.</p>
<p>Here is the full code which I am using:</p>
<pre><code>class Solution:
def isPowerOfThree(self, n: int) -> bool:
import math
k = abs(n)
if n<=0:
return False
return math.log(k,3) == int(math.log(k,3))
</code></pre>
<p>Note: I am looking for solutions involving logarithms. Please feel free to ask for any clarifications.</p>
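A note on the float comparison: a log-based check can still be exact if <code>round()</code> is only used to pick the nearest candidate exponent, with the final verification done in integer arithmetic, so non-powers of 3 are still rejected. A sketch:

```python
import math

def is_power_of_three(n: int) -> bool:
    if n <= 0:
        return False
    k = round(math.log(n, 3))  # nearest candidate exponent (may be off by float error)
    return 3 ** k == n         # exact integer check, no float equality involved
```

For example, 45 rounds to the candidate exponent 3, but <code>3 ** 3 == 27 != 45</code>, so it is correctly rejected; 243 rounds to 5 and <code>3 ** 5 == 243</code> passes.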
| <python><python-3.x><math><logarithm> | 2023-09-01 16:17:20 | 2 | 30,560 | Talha Tayyab |
77,024,623 | 16,383,578 | How to find extrema per cell in 3 dimensional array with Numba? | <p>I have recently written a script to convert BGR arrays of [0, 1] floats to HSL and back. I posted it on <a href="https://codereview.stackexchange.com/questions/286798/numpy-script-to-convert-bgr-to-hsl-and-back">Code Review</a>. There is currently one answer but it doesn't improve performance.</p>
<p>I have benchmarked my code against <code>cv2.cvtColor</code> and found my code to be inefficient, so I want to compile the code with Numba to make it run faster.</p>
<p>I have tried wrapping every function with <code>@nb.njit(cache=True, fastmath=True)</code>, and this doesn't work.</p>
<p>So I have tested every NumPy syntax and NumPy functions I have used individually, and found two functions that don't work with Numba.</p>
<p>I need to find the maximum channel of each pixel (<code>np.max(img, axis=-1)</code>) and the minimum channel of each pixel (<code>np.min(img, axis=-1)</code>), and the <code>axis</code> argument doesn't work with Numba.</p>
<p>I have searched Google for this, but the only thing even remotely relevant I found is <a href="https://stackoverflow.com/questions/61304720/workaround-for-numpy-np-all-axis-argument-compatibility-with-numba">this</a>; however, it only implements <code>np.any</code> and <code>np.all</code>, and only works for two-dimensional arrays, whereas here the arrays are three-dimensional.</p>
<p>I can write a for loop based solution but I won't write it, because it is bound to be inefficient and against the purpose of using NumPy and Numba in the first place.</p>
<p>Minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
import numpy as np
@nb.njit(cache=True, fastmath=True)
def max_per_cell(arr):
return np.max(arr, axis=-1)
@nb.njit(cache=True, fastmath=True)
def min_per_cell(arr):
return np.min(arr, axis=-1)
img = np.random.random((3, 4, 3))
max_per_cell(img)
min_per_cell(img)
</code></pre>
<p>Exception:</p>
<pre><code>In [2]: max_per_cell(img)
---------------------------------------------------------------------------
TypingError Traceback (most recent call last)
Cell In[2], line 1
----> 1 max_per_cell(img)
File C:\Python310\lib\site-packages\numba\core\dispatcher.py:468, in _DispatcherBase._compile_for_args(self, *args, **kws)
464 msg = (f"{str(e).rstrip()} \n\nThis error may have been caused "
465 f"by the following argument(s):\n{args_str}\n")
466 e.patch_message(msg)
--> 468 error_rewrite(e, 'typing')
469 except errors.UnsupportedError as e:
470 # Something unsupported is present in the user code, add help info
471 error_rewrite(e, 'unsupported_error')
File C:\Python310\lib\site-packages\numba\core\dispatcher.py:409, in _DispatcherBase._compile_for_args.<locals>.error_rewrite(e, issue_type)
407 raise e
408 else:
--> 409 raise e.with_traceback(None)
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function amax at 0x0000014E306D3370>) found for signature:
>>> amax(array(float64, 3d, C), axis=Literal[int](-1))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'npy_max': File: numba\np\arraymath.py: Line 541.
With argument(s): '(array(float64, 3d, C), axis=int64)':
Rejected as the implementation raised a specific error:
TypingError: got an unexpected keyword argument 'axis'
raised from C:\Python310\lib\site-packages\numba\core\typing\templates.py:784
During: resolving callee type: Function(<function amax at 0x0000014E306D3370>)
During: typing of call at <ipython-input-1-b3894b8b12b8> (10)
File "<ipython-input-1-b3894b8b12b8>", line 10:
def max_per_cell(arr):
return np.max(arr, axis=-1)
^
</code></pre>
<p>How to fix this?</p>
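One hedged note: inside <code>@nb.njit</code>, explicit loops compile to machine code, so the usual workaround for the unsupported <code>axis</code> argument is to write the reduction out by hand (unlike plain Python, this is typically fast under Numba). A sketch; the <code>try/except</code> fallback only exists so the logic runs even where numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:                      # fallback so the sketch runs without numba
    def njit(**kwargs):
        return lambda f: f

@njit(cache=True, fastmath=True)
def max_per_cell(arr):
    h, w, c = arr.shape
    out = np.empty((h, w), dtype=arr.dtype)
    for i in range(h):
        for j in range(w):
            m = arr[i, j, 0]
            for k in range(1, c):        # reduce over the last (channel) axis
                if arr[i, j, k] > m:
                    m = arr[i, j, k]
            out[i, j] = m
    return out

img = np.random.random((3, 4, 3))
res = max_per_cell(img)
```

<code>min_per_cell</code> is the same loop with the comparison flipped.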
| <python><numpy><numba><jit> | 2023-09-01 16:14:07 | 2 | 3,930 | Ξένη Γήινος |
77,024,602 | 13,129,731 | How to execute two different functions concurrently in Python? | <p>I have two different functions (<code>funcA</code> and <code>funcB</code>) that I want to be executed concurrently to cut down on overall execution time.</p>
<p><code>funcA</code> is an API call that takes somewhere between 5 to 7 seconds.</p>
<p><code>funcB</code> is a CPU intensive operation that uses ML algorithms and takes somewhere between 7 to 10 seconds.</p>
<p>A simple dummy version of <code>funcA</code> and <code>funcB</code> would be:</p>
<pre class="lang-py prettyprint-override"><code>def funcA(inpA):
res1 = apiCall1(inpA)
res2 = apiCall2(res1)
if res2['requireAPICall3']:
res3 = apiCall3(res2)
return res3
else:
return res2
</code></pre>
<pre class="lang-py prettyprint-override"><code>def funcB(inpB):
modelPath = downloadModel()
model = loadModel(modelPath)
prediction = predict(inpB)
return prediction
</code></pre>
<h3>For context:</h3>
<p>This will all be done on a python server.</p>
<p>The request will contain two parts, let's say <code>inpA</code> and <code>inpB</code>. <code>inpA</code> will be passed to <code>funcA</code> and <code>inpB</code> will be passed to <code>funcB</code>.</p>
<p>Currently the overall execution is as follows:</p>
<pre class="lang-py prettyprint-override"><code>def processRequest(request):
response = {}
response['outA'] = funcA(request['inpA'])
response['outB'] = funcB(request['inpB'])
return response
</code></pre>
<p>So, the overall execution takes somewhere between 12 and 17 seconds.
But considering both functions take separate inputs and are not dependent on each other, if they could be executed simultaneously the overall request would take much less time (7 to 10 seconds for this example).</p>
<p>Every function being used is synchronous. I can make <code>funcA</code> async but making <code>funcB</code> async is not an option right now. Maybe it can be wrapped in an <code>async</code> function. I'm not totally sure about asynchronous programming in Python so any help would be greatly appreciated. Maybe <code>multithreading</code> can be an option as well.</p>
<p>I tried using <code>asyncio</code> library and it's <code>gather</code> function. But for some reason, it worked the same as the <code>processRequest</code> function described above.</p>
<p>Below is the code I tried using <code>asyncio</code>:</p>
<pre class="lang-py prettyprint-override"><code>async def funcA(inpA):
return apiCall(inpA)
async def funcB(inpB):
return predict(inpB)
async def processRequest(req):
t1 = asyncio.create_task(funcA(req['inpA']))
t2 = asyncio.create_task(funcB(req['inpB']))
return await asyncio.gather(t1, t2)
</code></pre>
<p>I updated the <code>apiCall</code> function as below to test Anentropic's Answer.</p>
<pre class="lang-py prettyprint-override"><code>async def apiCall(inpA):
res1 = await apiCall1(inpA)
res2 = await apiCall2(res1)
if res2['requireAPICall3']:
res3 = await apiCall3(res2)
return res3
else:
return res2
</code></pre>
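A note on why <code>asyncio.gather</code> behaved the same: wrapping blocking functions in <code>async def</code> does not make them concurrent; coroutines that never <code>await</code> real I/O still run one after another on a single thread. A thread pool does overlap them, because the API call releases the GIL while waiting on the network (for the CPU-bound <code>funcB</code>, <code>ProcessPoolExecutor</code> would be the stronger option). A sketch with placeholder bodies standing in for the real calls:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def funcA(inpA):            # placeholder for the blocking API-call chain
    time.sleep(0.05)
    return inpA + "-api"

def funcB(inpB):            # placeholder for the ML prediction
    time.sleep(0.05)
    return inpB + "-pred"

def processRequest(request):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa = pool.submit(funcA, request["inpA"])  # both run at the same time
        fb = pool.submit(funcB, request["inpB"])
        return {"outA": fa.result(), "outB": fb.result()}

response = processRequest({"inpA": "a", "inpB": "b"})
```

If <code>funcB</code> holds the GIL the whole time (pure-Python number crunching), swap <code>ThreadPoolExecutor</code> for <code>ProcessPoolExecutor</code>; most NumPy/ML libraries release the GIL, in which case threads suffice.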
| <python><python-asyncio><python-multiprocessing><python-multithreading> | 2023-09-01 16:10:54 | 2 | 640 | eNeM |
77,024,542 | 12,334,809 | Linking span to parent span using trace_id and span_id | <p>I have a go process which starts a parent span up. I then call an export function to save the span_id and trace_id in a json file.</p>
<pre><code>tracer := otel.GetTracerProvider().Tracer("")
_, span := tracer.Start(ctx, "parent span name")
exportSpanContextToJSON(span)
...
defer span.End()
</code></pre>
<p>The go process eventually runs a python script. I need to start spans in the python process that are all children of the parent span from go.</p>
<p>The python script is not so straightforward.</p>
<p>I am instantiating a class <code>MainStreamEngine</code> that creates the tracer. It has a function called <code>new_stream</code> that creates an instance of the <code>ChildStream</code>, which takes stuff in from the <code>MainStreamEngine</code>. It looks something like this:</p>
<pre><code>class MainStreamEngine:
def __init__(self):
resource = Resource(attributes={
"service.name": "python_service"
})
trace.set_tracer_provider(TracerProvider(resource=resource))
self.tracer = trace.get_tracer(__name__)
otlp_exporter = OTLPSpanExporter(
endpoint="http://localhost:4317", insecure=True)
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
self.span_context = trace.SpanContext(
is_remote=True,
span_id=int(span_ctx_from_json["spanId"], 16)),
trace_id=int(span_ctx_from_json["traceId"], 16))
def new_stream(self):
stream = ChildStream(self.tracer, self.span_context)
class ChildStream:
def __init__(self, tracer, span_context):
self.tracer = tracer
ctx = trace.set_span_in_context(trace.NonRecordingSpan(span_context))
self.tracing_span = tracer.start_span(self.name, ctx)
def close(self):
self.tracing_span.close()
</code></pre>
<p>The issue is that the spans in the python process are being created with their own <code>trace_id</code> and not under the <code>trace_id</code> from the go process (and I also need them to be child spans of the span created in go).</p>
| <python><go><open-telemetry> | 2023-09-01 15:59:32 | 1 | 372 | Randy Maldonado |
77,024,475 | 6,655,075 | Pandas dataframe update column for lowest x values in group? | <p>I have some race data in a csv file that looks like this:</p>
<p><a href="https://i.sstatic.net/iCajh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iCajh.png" alt="" /></a></p>
<p>My aim is to update the 'FAV' for each race in the table relative to the 'BSP' value - lowest BSP = 1, 2nd lowest = 2. (full data has 1k+ races)</p>
<p>Using a dataframe I found a solution with the following lines:</p>
<p><code>df.loc[df.sort_values(['BSP'],ascending=True).groupby(['MENU_HINT','EVENT_DT']).head(2).index, ['FAV']] = 2</code></p>
<p>to first number the lowest 2 'BSP' values each with a '2' in the 'FAV' column, then<br />
<code>df.loc[df.sort_values(['TRAP'],ascending=True).groupby(['MENU_HINT','EVENT_DT']).BSP.idxmin(), ['FAV']] = 1</code>
to update the 'FAV' to 1 for the lowest 'BSP' value row (I sorted by trap to cover the instance where there are 2 with the same 'BSP' value - in this scenario lowest trap number takes priority)</p>
<p>I have limited experience with Pandas and dataframes, so this was achieved through a lot of trial and error. I feel there is a more straightforward solution to achieve this, but I am not sure what it would be. Ideally, if there is a simpler solution, would it be possible to make it so I can change the top X number, say to 3, so it numbers the lowest 3 prices 1, 2 and 3?</p>
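One common pattern, sketched on hypothetical toy data (the real data is only shown as an image): sort once by group keys plus <code>BSP</code> then <code>TRAP</code> (so trap breaks BSP ties), then number every row per group with <code>cumcount</code>. This ranks all runners in one pass, so there is no "top X" limit to manage; clip afterwards if only the first X should be numbered:

```python
import pandas as pd

df = pd.DataFrame({
    "MENU_HINT": ["r1", "r1", "r1", "r2", "r2"],
    "EVENT_DT":  ["d1", "d1", "d1", "d1", "d1"],
    "TRAP":      [1, 2, 3, 1, 2],
    "BSP":       [4.0, 2.0, 2.0, 9.0, 3.0],
})

# TRAP as a secondary key resolves equal-BSP ties deterministically
df = df.sort_values(["MENU_HINT", "EVENT_DT", "BSP", "TRAP"])
df["FAV"] = df.groupby(["MENU_HINT", "EVENT_DT"]).cumcount() + 1
df = df.sort_index()  # restore the original row order
```

For only the top 3 per race, e.g. <code>df.loc[df["FAV"] > 3, "FAV"] = pd.NA</code> afterwards.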
| <python><pandas><dataframe> | 2023-09-01 15:47:27 | 1 | 474 | Jeremy |
77,024,117 | 11,136,668 | My Firestore query always takes just over 20 seconds. Is there a hidden timeout or similar? | <p>I am having an issue with a Firestore query taking a long time to run.</p>
<p>There are currently 2 records in the Firestore collection, but this query is always taking 20-21 seconds.</p>
<p>This suggests to me that there is a sleep or a timeout that lasts 20 seconds somewhere.</p>
<pre class="lang-py prettyprint-override"><code> start_first_query = time.time()
db = firestore.client()
doc_ref = db.collection('users').document(uid)
doc = doc_ref.get(['my_value'])
doc_dict = doc.to_dict()
print(f'first query took: {time.time() - start_first_query}')
</code></pre>
<p>Example outputs:</p>
<pre><code>first query took: 20.2761070728302
first query took: 20.213554859161377
first query took: 20.25418519973755
</code></pre>
<p>I also have a second slightly more complex query with an <code>in</code> operator, which has a similar just over 20 second execution time. Does anyone have any ideas why this could be happening?</p>
<p>Happy to share more details if required, as the code I have provided is basically straight from the documentation.</p>
<p>TLDR; I successfully wrote code to extract data from the DB. It always hangs for 20 seconds but I normally get sub 1-second latency when doing similar queries.</p>
<p>Edit: I have rerun with <code>logging.basicConfig(level=logging.DEBUG)</code>, and there is definitely a 20-second wait at the start of <code>doc_ref.get(['my_value'])</code> as I wait 20 seconds, then the following debug logs appear.</p>
<pre class="lang-bash prettyprint-override"><code>> DEBUG:google.auth.transport.requests:Making request: POST https://oauth2.googleapis.com/token
> DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): oauth2.googleapis.com:443
> DEBUG:urllib3.connectionpool:https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 200 None
</code></pre>
| <python><database><firebase><google-cloud-firestore> | 2023-09-01 14:51:32 | 1 | 904 | Jeremy Savage |
77,024,110 | 14,508,690 | pynput: getting different outputs with pynput.keyboard.Listener.running (in "non-blocking fashion") | <p>In the "non-blocking fashion" of starting a <code>pynput</code> <code>Listener</code>, I'm getting different behaviors with the code below:</p>
<pre><code>from time import sleep
import pynput
def stop(k):
if k:
print("end")
l.stop()
l = pynput.keyboard.Listener(on_press=stop)
l.start()
print(l.running)
while True:
if l.running:
print("running\n")
sleep(1)
else: break
</code></pre>
<p>Sometimes I get <code>False</code> and the program ends. Other times, I get <code>False</code> and then the loop runs.<br />
If I duplicate <code>print(l.running)</code>, I might even get <code>True</code> for both statements.<br />
Adding code after <code>l.start()</code> (e.g. duplicating <code>print(l.running)</code>) seems to increase the odds of getting <code>True</code> from <code>l.running</code>.
Finally, debugging instead of just running guarantees getting <code>True</code>.<br />
It feels almost as if I need to provide enough time to the interpreter in order for <code>running</code> to be able to "catch" the listener working.</p>
<p>I've seen some posts on getting different outputs when running vs debugging, but they involved generating random values, usually in C++, or even misuse of variable scope.<br />
They didn't seem very useful, at least to my poor knowledge.</p>
<p>Question is as straightforward as it gets: what is happening here?</p>
<p>I'm using VSCode on Windows11.</p>
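A hedged note on the likely cause: reading <code>l.running</code> immediately after <code>start()</code> races against the listener thread's startup, which is why extra statements or a debugger "fix" it. pynput's <code>Listener</code> exposes a <code>wait()</code> method that blocks until the listener is ready (using it as a context manager does the same). The race and its fix can be sketched with plain threads, so the example runs without pynput installed:

```python
import threading
import time

class Listener(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self._ready = threading.Event()

    def run(self):
        self._ready.set()          # only now is the listener really "running"
        time.sleep(0.2)            # stand-in for the event loop

    def wait(self):                # what pynput's Listener.wait() provides
        self._ready.wait()

l = Listener()
l.start()
# checking a "running" flag right here would race with thread startup;
# waiting for the ready signal removes the race:
l.wait()
ready_after_wait = l._ready.is_set()
```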
| <python><debugging><listener><pynput> | 2023-09-01 14:50:20 | 1 | 399 | 101is5 |
77,024,058 | 11,197,301 | find the time difference between adjacent values in pandas | <p>I have this dataframe</p>
<pre><code>date qq q_t l_t
1956-01-01 1 4 True
1956-01-02 2 5 True
1956-01-03 3 1 False
1956-01-04 4 1 False
1956-01-05 5 1 False
1956-01-06 6 10 True
1956-01-07 7 11 True
1956-01-08 8 12 True
1956-01-09 9 5 False
1956-01-10 10 3 False
1956-01-11 11 3 False
1956-01-12 12 3 False
1956-01-13 13 50 True
1956-01-14 14 51 True
1956-01-15 15 52 True
1956-01-16 16 53 True
1956-01-17 17 1 False
1956-01-18 18 23 True
1956-01-19 19 1 False
</code></pre>
<p>I would like to find the number of successive days with true values and store the initial and final date. In other words, I would like to create a new dataframe as:</p>
<pre><code>index days start end
1 2 1956-01-01 1956-01-02
2 3 1956-01-06 1956-01-08
3 4 1956-01-13 1956-01-16
4 1 1956-01-18 1956-01-18
</code></pre>
<p>I thought to work with np.where and then do a cycle all over the dataframe, a sort of</p>
<pre><code> _fref = np.where(dfr_['l_t'] == 'True')
for i in range(0, len(_fref)-1):
i_1 = _fref[i]
i_2 = _fref[i+1]
deltat = i_2-i_1
if deltat == 1:
...
...
</code></pre>
<p>It seems not very elegant, and I am pretty sure that there are different ways to do that.
What do you think? Is it better to stick with the loop strategy?</p>
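A loop-free sketch of the standard pattern: take the cumulative sum over the points where <code>l_t</code> changes (this labels every run of equal values), then aggregate only the <code>True</code> runs:

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("1956-01-01", periods=19, freq="D"),
    "l_t":  [True, True, False, False, False, True, True, True,
             False, False, False, False, True, True, True, True,
             False, True, False],
})

# a new label starts at every change of l_t
run_id = (df["l_t"] != df["l_t"].shift()).cumsum()

runs = (df[df["l_t"]]                     # keep only the True rows
        .groupby(run_id)                  # ...grouped by their run label
        .agg(days=("l_t", "size"),
             start=("date", "min"),
             end=("date", "max"))
        .reset_index(drop=True))
```

Each row of <code>runs</code> is one block of consecutive <code>True</code> days with its length and its first/last date.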
| <python><pandas><date><operation> | 2023-09-01 14:43:10 | 1 | 623 | diedro |
77,023,991 | 1,020,139 | AWS Lambda Python 3.11: Cannot import lxml: libxslt.so.1: cannot open shared object file: No such file or directory | <p>I have a Python function on AWS Lambda that depends on <code>lxml</code>. The dependent layer includes the result of <code>poetry install lxml</code>, yet I receive the following error at runtime:</p>
<pre><code> "errorMessage": "Unable to import module 'app.dim.application.watchdog': libxslt.so.1: cannot open shared object file: No such file or directory",
</code></pre>
<p>I have checked the Python 3.11 base image, and it doesn't include <code>libxslt.so.1</code>:</p>
<p><code>docker run -ti --rm --entrypoint bash public.ecr.aws/lambda/python:3.11</code></p>
<p>How can I include all shared libraries that packages, like <code>lxml</code>, depend on? Can I say to <code>pip</code> or <code>poetry</code> that they should include them in the installed package? It's a nightmare to selectively copy each shared library into a layer.</p>
<p><strong>Runtime</strong>: Python 3.11</p>
<p><strong>Architecture</strong>: ARM64</p>
| <python><amazon-web-services><aws-lambda><arm><lxml> | 2023-09-01 14:34:02 | 3 | 14,560 | Shuzheng |
77,023,539 | 2,725,810 | Transaction affects time of computation? | <p>Consider:</p>
<pre class="lang-py prettyprint-override"><code>with transaction.atomic():
job = Job.objects.select_for_update().get(id=job_id)
texts = pickle.loads(job.job_body)
timer = Timer()
print("Computing embedding for:", texts, flush=True)
job.started = ms_now()
result = get_embeddings_raw(texts) # no database communication.
job.finished = ms_now()
print("Done computing embedding for:", texts, timer.stop(), flush=True)
job.result = pickle.dumps(result)
job.save()
</code></pre>
<p>The computation within the transaction, performed by <code>get_embeddings_raw</code>, does not communicate with the database. Nonetheless, when I do not use concurrency, this computation takes ~70ms, but when there are up to ten simultaneous workers, it takes ~4,000ms. Can the transaction cause this slowdown in any way?</p>
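A hedged note on a likely mechanism: <code>select_for_update()</code> takes a row lock that is held until the transaction commits, so any workers that touch the same row (or, depending on the database, contended index/page ranges) queue up behind the whole computation, even though the computation itself never talks to the database. The effect can be demonstrated with a plain lock:

```python
import threading
import time

lock = threading.Lock()          # stands in for the row lock select_for_update takes

def worker():
    with lock:                   # the lock is held for the whole "transaction"...
        time.sleep(0.05)         # ...including the computation done inside it

threads = [threading.Thread(target=worker) for _ in range(4)]
t0 = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - t0  # >= 4 * 0.05 s: the workers ran one after another
```

If that is the cause here, moving <code>get_embeddings_raw(texts)</code> outside the <code>atomic()</code> block (compute first, then open a short transaction just to lock and save) keeps the lock window small.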
| <python><django><performance><concurrency><transactions> | 2023-09-01 13:32:41 | 0 | 8,211 | AlwaysLearning |
77,023,312 | 1,552,080 | Python multithreading blocks unexpectedly | <p>I am developing a Python particle swarm optimizer application which exchanges messages with a Kafka broker. I want to send a message to the broker, where another application (will be JAVA) pick up the message, processes it and returns a result. The python applications needs to wait until the result is available. Currently, the code looks like this:</p>
<pre><code>class KafkaConnector(Thread):
    __consumption_thread = None
    __kafka_consumer: KafkaConsumer = None
    __kafka_producer: KafkaProducer = None
    __pso_optimizer: GlobalBestPSO = None
    __device_names: np.array = None

    def __init__(self):
        Thread.__init__(self)
        self.__kafka_consumer = KafkaConsumer('A_TOPIC', bootstrap_servers=['my.kafka.server:9092'])
        self.__kafka_producer = KafkaProducer(bootstrap_servers=['my.kafka.server:9092'])
        self.__result_received = Event()

    def consume_messages(self):
        assert self.__kafka_consumer is not None
        self.__kafka_consumer.subscribe(['A_TOPIC'])
        assert self.__kafka_consumer.subscription() is not None
        self.__consumption_thread = KafkaListener(self)
        self.__consumption_thread.start()
        print('[INFO] Kafka consumer thread started')

    def stop_listener(self):
        self.__consumption_thread.stop_consumer()
        self.__kafka_consumer.unsubscribe()
        self.__kafka_consumer.close()

    def message_received(self, message: ConsumerRecord):
        '''
        Method switching message content by use case
        '''
        print('[INFO] Received message:', message)
        if ('mode', b'INIT') in message.headers:  # INIT comes from device automator
            self.init_pso(message.value)  # <--- this call works as expected
        elif ('mode', b'RESULT') in message.headers:  # RESULT comes from device automator
            print('perform step')
        elif ('mode', b'SET') in message.headers:  # SET is sent by PSO
            print('SET received')
            self.__result_received.set()
        else:
            print('unknown command', message.headers)

    def send_message(self, message_content, headers: list):
        self.__kafka_producer.send(topic='A_TOPIC', value=message_content, headers=headers)

    def init_pso(self, json_string: str):
        [...]
        self.__pso_optimizer = GlobalBestPSO(n_particles=10 * parameter_count, dimensions=parameter_count,
                                             options=options, bounds=bounds)
        # start optimization run
        best_cost, best_position = self.__pso_optimizer.optimize(self.objective_function, iters=100)
        print(best_position, best_cost)

    def objective_function(self, new_settings):
        result = np.empty(len(new_settings))
        for i in range(len(new_settings)):
            setting_dict = []
            result[i] = 0
            for j in range(len(self.__device_names)):
                parameter_value = {self.__device_names[j]: new_settings[i][j]}
                setting_dict.append(parameter_value)
            # send new setting values to Kafka / LSA client
            self.__kafka_producer.send('A_TOPIC', value=json.dumps(setting_dict).encode('UTF-8'),
                                       headers=[('mode', 'SET'.encode('UTF-8'))])
            # wait for result to return
            if self.__result_received.is_set():
                self.__result_received.clear()
            self.__result_received.wait()  # <--- this wait blocks everything, including KafkaListener::run(); without wait(), the code executes as expected
            # append to result array
            # result[i] += new_settings[i][j]
        return result

    def get_optimizer(self):
        return self.__pso_optimizer


'''
This is a class supposed to listen in a parallel thread to messages coming from Kafka
'''
class KafkaListener(Thread):
    __stop_event: Event = None
    __caller: KafkaConnector = None

    def __init__(self, caller):
        Thread.__init__(self)
        self.__caller = caller
        self.setDaemon(True)
        self.__stop_event = Event()

    def run(self):
        while not self.__stop_event.is_set():
            for message in self.__caller.getKafkaConsumer():
                print('[INFO] KafkaListener received:', message)
                self.__caller.message_received(message)
                if self.__stop_event.is_set():
                    break

    def stop_consumer(self):
        self.__stop_event.set()
</code></pre>
<p>Ultimately I want to wait for the result coming via Kafka and then proceed with the next loop</p>
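<p>A hedged observation: <code>optimize()</code> is started from <code>message_received</code>, which runs on the <code>KafkaListener</code> thread, so the thread that would eventually call <code>set()</code> is the same one blocked in <code>wait()</code>. A minimal, Kafka-free sketch (all names hypothetical) of the request/response pattern with the waiter and the deliverer on different threads:</p>

```python
import threading

class ResultWaiter:
    """Hands a result from a listener thread to a blocked worker thread."""

    def __init__(self):
        self._event = threading.Event()
        self._result = None

    def deliver(self, result):
        # Called from the listener thread when the reply message arrives.
        self._result = result
        self._event.set()

    def wait_for_result(self, timeout=None):
        # Called from the worker thread; must NOT be the listener thread itself.
        if not self._event.wait(timeout):
            raise TimeoutError("no result arrived in time")
        self._event.clear()  # re-arm for the next request/response round trip
        return self._result

waiter = ResultWaiter()
threading.Timer(0.1, waiter.deliver, args=(42,)).start()  # simulated listener
answer = waiter.wait_for_result(timeout=5)
print(answer)  # 42
```

<p>The key point is that <code>wait_for_result</code> runs on a thread that is not the one delivering, so the event can actually be set while someone is waiting on it.</p>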
| <python><multithreading><events> | 2023-09-01 12:58:25 | 0 | 1,193 | WolfiG |
77,023,270 | 15,923,186 | Mask RGBA image with another one | <p>Consider there are two opencv images / numpy arrays of different shapes.
Smaller one is called <code>small</code> and is <code>(150,150,4)</code>, bigger one is called <code>big</code> and is <code>(300,300,4)</code>, both are RGBA images.
I'm trying to get the smaller one to be "on top" of the bigger one, exactly in the middle (this is done and works fine), but I need only the pixels of the smaller one, which are not transparent (their alpha channel is not equal 0, or is equal 255 to be exact), otherwise I want to keep the pixels from the bigger one (and this is the issue). Something like a ternary operator, but with broadcasting and in numpy/opencv. I thought of <code>np.where</code> and masking, but got stuck.
I know how to get the <code>mask</code> of my smaller shape, but I don't know how to properly assign / broadcast it into the bigger one.</p>
<p>My approach is:</p>
<ul>
<li>get the rectangle where the assignment should take place (offsets)</li>
<li>get the mask of the smaller one, to have only the shape's coordinates</li>
<li><strong>assign / broadcast the pixels</strong> - The problem is I need pixel values of corresponding coordinates of the bigger image, not a single value. The part I'm missing, currently using some dummy value</li>
<li>assign the modified smaller one into the coordinates of the bigger one</li>
</ul>
<pre class="lang-py prettyprint-override"><code>BACKGROUND_SIZE = 300, 300
LOGO_SIZE = 150, 150

def get_offset(bigger_shape, smaller_shape):
    logo_x_offset = int((bigger_shape[0] - smaller_shape[0]) / 2)
    logo_y_offset = int((bigger_shape[1] - smaller_shape[1]) / 2)
    return (
        logo_y_offset,
        logo_y_offset + smaller_shape[1],
        logo_x_offset,
        logo_x_offset + smaller_shape[0],
    )

small = cv2.imread("./small.png", flags=cv2.IMREAD_UNCHANGED)
big = cv2.imread("./big.png", flags=cv2.IMREAD_UNCHANGED)
offset = get_offset(BACKGROUND_SIZE, LOGO_SIZE)
y0, y1, x0, x1 = offset

DUMMY_BLUE = (255, 0, 0, 255)

mask = np.where(small[:, :, 3] != 255)  # this is a proper mask
small[mask] = DUMMY_BLUE  # Here instead of the dummy blue I need the "bigger" image pixels
big[y0:y1, x0:x1] = small
</code></pre>
<p>Tried:</p>
<pre class="lang-py prettyprint-override"><code> small[mask] = big[y0:y1, x0:x1]
</code></pre>
<p>But that fails with (please ignore the 2nd shape, the <code>big</code> might be actually bigger):</p>
<pre><code>ValueError: shape mismatch: value array of shape (150,150,4) could not be broadcast to indexing result of shape (14899,4)
</code></pre>
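<p>One way to express the ternary-like assignment is to index both images with the same boolean mask, taken over the target region of the big image. A toy-sized sketch (shapes shrunk for readability; pixel contents are made up):</p>

```python
import numpy as np

big = np.zeros((6, 6, 4), dtype=np.uint8)          # stand-in background
small = np.full((2, 2, 4), 255, dtype=np.uint8)    # stand-in logo
small[0, 0, 3] = 0                                 # one fully transparent pixel
y0, y1, x0, x1 = 2, 4, 2, 4                        # placement rectangle

region = big[y0:y1, x0:x1]      # a view into big, same shape as small
opaque = small[:, :, 3] == 255  # boolean mask, shape (2, 2)
region[opaque] = small[opaque]  # copy only the opaque pixels; writes through to big

assert big[2, 2, 3] == 0    # transparent pixel kept the background value
assert big[3, 3, 3] == 255  # opaque pixel was copied over
```

<p>Because <code>region</code> is a basic slice, it is a view, so assigning through the boolean mask modifies <code>big</code> in place and both sides of the assignment have the same <code>(N, 4)</code> shape.</p>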
| <python><numpy><opencv> | 2023-09-01 12:51:01 | 1 | 1,245 | Gameplay |
77,023,212 | 4,844,184 | Dynamically decorating method of instance in Python | <p>I have recently come across <code>MethodType</code> which allows to do arguably hacky but fun stuff like:</p>
<pre><code>from types import MethodType

class MyClass():
    def __init__(self, n : int):
        self.list_of_items = list(range(n))

    def __getitem__(self, idx):
        return self.list_of_items[idx]

    def foo(self):
        return "foo"

def bar(self):
    return "bar"

a = MyClass(10)
a.foo = MethodType(bar, a)
a.foo()
## prints bar

a[0]
# 0 (int) is printed
</code></pre>
<p>But what if I wanted to dynamically decorate the <code>__getitem__</code> of <code>a</code> in a similar spirit ?
The first idea that comes to mind is to do:</p>
<pre><code>def new_getitem(self, idx):
    return str(self.__getitem__(idx))

a.__getitem__ = MethodType(new_getitem, a)
</code></pre>
<p>However, this has two problems. First, <code>__getitem__</code> isn't changed, in the sense that the <code>[]</code> syntax is not modified.</p>
<pre><code>a[0]
# 0 (int) is printed
</code></pre>
<p>Which might be explained by some funny business happening in the init of the instance to tie <code>[]</code> to <code>__getitem__</code>.
What is worse is that explicitly calling <code>__getitem__</code>:</p>
<pre><code>a.__getitem__(0)
</code></pre>
<p>runs into infinite recursion as we are redefining <code>__getitem__</code> infinitely.</p>
<p>I was wondering if there was a syntax allowing to perform this hack so that:</p>
<pre><code>a[0] = "0" (string)
</code></pre>
<p>I am open to any solution that allows to dynamically affect the instance's <code>[]</code> (using MethodType is not a requirement).</p>
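<p>One pattern that works, because Python looks special methods up on the type rather than the instance, is to give the single instance a one-off subclass and override <code>__getitem__</code> there. A minimal sketch:</p>

```python
class MyClass:
    def __init__(self, n):
        self.list_of_items = list(range(n))

    def __getitem__(self, idx):
        return self.list_of_items[idx]

a = MyClass(10)

# Dunder lookup happens on type(a), so patching the instance attribute has no
# effect on []; swapping in a dedicated subclass changes only this instance.
patched = type("PatchedMyClass", (type(a),), {
    "__getitem__": lambda self, idx: str(MyClass.__getitem__(self, idx)),
})
a.__class__ = patched

assert a[0] == "0"            # [] now goes through the wrapper
assert isinstance(a, MyClass) # still a MyClass as far as callers can tell
```

<p>Calling <code>MyClass.__getitem__</code> explicitly inside the wrapper also avoids the infinite recursion from the <code>MethodType</code> attempt.</p>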
| <python><methods><dynamic><instance> | 2023-09-01 12:42:14 | 0 | 2,566 | jeandut |
77,023,101 | 13,219,123 | '__init__.py' file in tests folder | <p>I am having problems with pytest my directory looks like below:</p>
<pre><code>├── modules
| ├── __init__.py
│ ├── data
| | ├── __init__.py
│ | └── tables.py
├── tests
│ └── test_tabels.py
└── main.py
</code></pre>
<p>My problem was that I was getting a <code>Pytest Discovery Error</code>. When reading the error output I get <code>ModuleNotFoundError: No module named modules</code>. In my <code>test_tables.py</code> file I obviously need to import classes and definitions to test them, so I would for instance run <code>from modules.data import GetData</code>.</p>
<p>By adding a <code>__init__.py</code> file to the <code>tests</code> folder I am not getting the error anymore and can run my tests. However, I do not understand why this works. I have other repos where I do not have an <code>__init__.py</code> file in the <code>tests</code> folder where I do not have the issue. This makes me wonder if I have actually solved the problem and if it makes sense to add the <code>__init__.py</code> file.</p>
<p>UPDATE:</p>
<p>I have, prior to writing the question, read explanations about the <code>__init__.py</code> file from other posts and understand the purpose. My question is more focused on why adding a <code>__init__.py</code> solves my problem, I am not importing code from the <code>tests</code> folder anywhere, it is only used for unit testing.</p>
<p>Also I can run the tests in the CLI but I cannot use the debugger or see the test in VS codes <em>Testing</em> tab. When opening the tab I see the message <code>Pytest Discovery Error</code>.</p>
| <python><pytest> | 2023-09-01 12:25:22 | 0 | 353 | andKaae |
77,023,072 | 1,329,744 | How to run a fully container-based Python dev environment in VSCode | <p>TL;DR: Why does this need to be started twice before it works?</p>
<p>I am trying to set up a Python development environment in my VSCode. The app will run in a container in production, so why hassle with Python versions, virtualenvs and so on on my dev machine, when my editor is able to also use that very same container environment to actually develop the app? So I used some of VSCode's auto-created config files plus crawled a couple of tutorials. It sort of works, but with one major drawback: I need to hit "Start debugging (F5)" twice, before the debugger manages to connect to the container.</p>
<p>This is only an example project. In my real Python project I can not even get it to connect at all. I assume this setup is not quite optimal yet, so I am herewith asking for help to get this straight. Here is what I have:</p>
<p><code>Dockerfile</code>:</p>
<pre><code># For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "-m", "testproj"]
EXPOSE 80
</code></pre>
<p><code>docker-compose.debug.yml</code>:</p>
<pre><code>version: '3.4'

services:
  testtasks:
    image: testtasks
    build:
      context: .
      dockerfile: ./Dockerfile
    command:
      [
        "sh",
        "-c",
        "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen 0.0.0.0:5678 -m uvicorn testproj.testproj:app --host 0.0.0.0 --port 80 --reload"
      ]
    ports:
      - 80:80
      - 5678:5678
    volumes:
      - ${PWD}:/app
</code></pre>
<p><code>requirements.txt</code></p>
<pre><code>fastapi
uvicorn
</code></pre>
<p><code>testproj/__main__.py</code>:</p>
<pre><code>import uvicorn
uvicorn.run("testproj.testproj:app", host="0.0.0.0", port=80)
</code></pre>
<p><code>testproj/testproj.py</code>:</p>
<pre><code>from typing import Union

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    return {"Hello": "World"}
</code></pre>
<p><code>.vscode/tasks.json</code>:</p>
<pre><code>{
    "version": "2.0.0",
    "tasks": [
        {
            "type": "docker-build",
            "label": "docker-build",
            "platform": "python",
            "dockerBuild": {
                "tag": "testtasks:latest",
                "dockerfile": "${workspaceFolder}/Dockerfile",
                "context": "${workspaceFolder}",
                "pull": true
            }
        },
        {
            "label": "Run compose up",
            "type": "docker-compose",
            "dockerCompose": {
                "up": {
                    "detached": true,
                    "build": true
                },
                "files": [
                    "${workspaceFolder}/docker-compose.debug.yml"
                ]
            }
        },
        {
            "label": "Run compose down",
            "type": "docker-compose",
            "dockerCompose": {
                "down": {},
                "files": [
                    "${workspaceFolder}/docker-compose.debug.yml"
                ]
            }
        }
    ]
}
</code></pre>
<p><code>.vscode/launch.json</code>:</p>
<pre><code>{
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/app"
                }
            ],
            "preLaunchTask": "Run compose up",
            "postDebugTask": "Run compose down",
            "justMyCode": true
        }
    ]
}
</code></pre>
<p>The entire test project is also available here to clone, improve and request merges: <a href="https://gitlab.com/mcnesium/testtasks" rel="nofollow noreferrer">https://gitlab.com/mcnesium/testtasks</a></p>
<p>And here is a screencast with me trying to run this: <a href="https://imgur.com/a2UkRy7" rel="nofollow noreferrer">https://imgur.com/a2UkRy7</a>
The first hit on the green play button in the top left causes the container to launch, but it fails to connect the debugger to it, as far as I understand this. Then (at about 14sec of the video) with the second hit of the play button it does connect, indicated by the footer bar turning red. The task to stop the entire thing seems to work as expected:</p>
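<p>A hedged guess at the two-attempts symptom: <code>compose up</code> in detached mode returns before debugpy is actually listening on 5678, so the first attach races the container startup. One mitigation is to gate the attach on the port accepting connections, e.g. with a small stdlib helper (hypothetical wiring; it would run as an extra step between compose up and the attach):</p>

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until something accepts TCP connections on (host, port)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.3)
    return False
```

<p>Calling <code>wait_for_port("localhost", 5678)</code> before the debugger attaches should remove the need for the second F5 press, regardless of how fast the container comes up.</p>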
<p>Thanks in advance for any suggestions. Also, please feel free to improve this question's readability.</p>
| <python><docker><remote-debugging><vscode-debugger><vscode-remote> | 2023-09-01 12:21:48 | 1 | 1,553 | mcnesium |
77,022,836 | 13,100,938 | Apache Beam: Wait N seconds after window closes to execute DoFn | <p>I have real time data being published and read into a Dataflow pipeline in a synchronous manner.</p>
<p>I collect the data, window it (1 second fixed) with accumulation then write it to a Firestore DB.
This DB is being watched by a front end and when new data arrives the front end automatically pulls in the snapshot.</p>
<p>The behaviour I'm seeing is that the front end data arrival is not synchronous, with inconsistent delays between data arrival.</p>
<p>I've been looking into stateful and timely functions and I think it could solve my problem, but I can't work out how to implement it since I'm not batching any data which to my understanding means I should just use a timely <code>DoFn</code>.</p>
<p>The data has been windowed and grouped by key, then aggregated with a custom function. The structure of the data being passed into the <code>DoFn</code> is a single nested python dictionary with multiple different data structures inside it.</p>
<pre class="lang-py prettyprint-override"><code># class to write to firestore using DoFn
class WriteToFirestore(beam.DoFn):
    TIMER = userstate.TimerSpec('timer', userstate.TimeDomain.WATERMARK)
    BUFFER_STATE = userstate.BagStateSpec('buffer', WindowedValueCoder(MapCoder(str, str)))

    def setup(self):
        from google.cloud import firestore
        self.firestore_client = firestore.Client(project='project ID')

    def process(self, element,
                timer=beam.DoFn.TimerParam(TIMER),
                window=beam.DoFn.WindowParam,
                buffer_state=beam.DoFn.StateParam(BUFFER_STATE)):
        logging.info("PROCESSING FUNCTION")
        timer.set(window.end + Duration(0.5))  # expire 0.5 seconds after window closes
        buffer_state.add(element)

    @userstate.on_timer(TIMER)
    def expiry(self):
        logging.info("FIRED")
        #collection_ref = self.firestore_client.collection('collection id')
        #collection_ref.add(event)
</code></pre>
<p>Pipeline Steps:</p>
<pre class="lang-py prettyprint-override"><code># define pipeline steps
data = (
    pipeline
    | 'Read from PubSub' >> beam.io.ReadFromPubSub(topic=self.topic_path)
    | 'Proto to Dict' >> beam.Map(lambda pb_message: proto_to_dict(pb_message, msgs_pb2.Message))
    | 'Add Key' >> beam.Map(lambda message: add_group_key(message, SETTINGS))
    | 'Window' >> beam.WindowInto(
        window.FixedWindows(self.window_length),
        accumulation_mode=beam.trigger.AccumulationMode.ACCUMULATING,
    )
    | 'Group' >> beam.GroupByKey()
    | 'Fuse' >> beam.Map(lambda group_messages: fuse(group_messages, SETTINGS, NODES))
)

# branch 1: Write to BigQuery
data | 'Write to BigQuery' >> beam.io.WriteToBigQuery(
    'table ID',
    dataset='dataset ID',
    schema=TABLE_SCHEMA,
    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED
)

# branch 2: Publish to PubSub
data | 'Write to Firestore' >> beam.ParDo(WriteToFirestore())
</code></pre>
<p>This is the current code; it throws <code>Number of components doesn't match number of coders</code> - surely I only have 1 component?</p>
<p>What am I doing wrong?</p>
| <python><google-cloud-platform><google-cloud-firestore><google-cloud-dataflow><apache-beam> | 2023-09-01 11:46:38 | 0 | 2,023 | Joe Moore |
77,022,582 | 12,040,751 | Type hint for a special argument | <p>Consider the following function:</p>
<pre><code>from datetime import date

def days_between(start_date: date, end_date: date) -> int:
    if start_date == "initial":
        start_date = date(2023, 9, 1)

    delta = end_date - start_date
    return delta.days
</code></pre>
<p>The type hint of <code>start_date</code> is mostly fine, but it does not cover the case where <code>"initial"</code> is passed. This is the only string allowed; how would you express that?</p>
<p>The options I thought about are:</p>
<ul>
<li><code>start_date: date | str</code> which seems a bit of an overkill</li>
<li><code>start_date: date | "initial"</code> maybe fine but I have not come across anything similar</li>
<li>no change and simply add a description in the docstring</li>
</ul>
<p>For completeness, I am more interested in how to convey the use of the function to a user compared to formal correctness for static type checking.</p>
| <python><python-typing> | 2023-09-01 11:05:29 | 1 | 1,569 | edd313 |
77,022,574 | 3,312,274 | Flask is not reacting to the FLASK_ENV variable setting | <p>I am trying to activate debug mode in Flask.</p>
<p>What I have tried so far:</p>
<ul>
<li>set FLASK_ENV=development directly in Windows cmd</li>
<li>pip installed python-dotenv successfully and set FLASKENV=development in .env file</li>
<li>ensured that there is no dotenv package globally and within virtual env</li>
<li>pip force uninstall/reinstall python-dotenv a few times</li>
<li>with python-dotenv, tried <code>load_dotenv()</code> and <code>os.getenv('FLASK_ENV')</code> and it shows that the value of FLASK_ENV is <code>development</code></li>
</ul>
<p>None of the above enabled Flask's debug mode. The FLASK_APP variable is correctly set and read, though. Only running <code>flask --debug run</code> activates the debug mode.</p>
<p>Why is the FLASK_ENV variable not recognized by Flask?</p>
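<p>Worth noting (verify against your Flask version): <code>FLASK_ENV</code> was deprecated in Flask 2.2 and removed in 2.3, which would explain why <code>flask --debug run</code> works while the variable is ignored. The supported switch is <code>FLASK_DEBUG</code> (or <code>debug=True</code> in <code>app.run</code>). A Flask-free stand-in showing roughly how such a flag is read (an approximation, not Flask's actual code):</p>

```python
import os

def debug_enabled(env=os.environ):
    """Roughly how modern Flask decides debug mode: FLASK_DEBUG only."""
    value = env.get("FLASK_DEBUG", "")
    return value.lower() not in ("", "0", "false", "no")

assert debug_enabled({"FLASK_ENV": "development"}) is False  # ignored in Flask >= 2.3
assert debug_enabled({"FLASK_DEBUG": "1"}) is True
```

<p>So putting <code>FLASK_DEBUG=1</code> in the <code>.env</code> file (python-dotenv is picked up by the <code>flask</code> CLI) should have the same effect as <code>--debug</code>.</p>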
| <python><flask><python-dotenv> | 2023-09-01 11:04:42 | 1 | 565 | JeffP |
77,022,479 | 11,197,301 | compare two dataframe according to the year and month | <p>I have two dataframe:</p>
<p>dfr1</p>
<pre><code> qq
date
1956-01-01 685.519348
1956-01-02 731.868500
1956-01-03 510.579375
1956-01-04 412.347250
1956-01-05 358.297625
2010-12-27 3.992000
2010-12-28 1.099583
2010-12-29 104.428958
2010-12-30 4.932750
2010-12-31 101.737292
2013-12-27 7.992000
2013-12-28 105.099583
2013-12-29 104.428958
2013-12-30 102.932750
2013-12-31 101.737292
</code></pre>
<p>and</p>
<p>dfr2</p>
<pre><code> q_t
01-01 61.629342
01-02 61.409750
01-03 61.309208
01-04 61.161462
01-05 61.020508
12-27 69.065375
12-28 68.935908
12-29 68.603104
12-30 68.474458
12-31 68.209075
</code></pre>
<p>As you can notice, the days and months are the same in both, but the first one has a year while the second one does not.</p>
<p>I would like to compare the first with the second. In particular, I would like to know when the qq value in the first is less than or equal to the value in the second, matched by the day and month of the second. This is thus what I expect:</p>
<pre><code>1956-01-01 685.519348 61.629342 False
1956-01-02 731.8685 61.40975 False
1956-01-03 510.579375 61.309208 False
1956-01-04 412.34725 61.161462 False
1956-01-05 358.297625 61.020508 False
2010-12-27 3.992 69.065375 True
2010-12-28 1.099583 68.935908 True
2010-12-29 104.428958 68.603104 False
2010-12-30 4.93275 68.474458 True
2010-12-31 101.737292 68.209075 False
2013-12-27 7.992 69.065375 True
2013-12-28 105.099583 68.935908 False
2013-12-29 104.428958 68.603104 False
2013-12-30 102.93275 68.474458 False
2013-12-31 101.737292 68.209075 False
</code></pre>
<p>I tried compared and, as expected, I got an error:</p>
<pre><code>dfr1.compare(dfr2)
*** ValueError: Can only compare identically-labeled (both index and columns) DataFrame objects
</code></pre>
<p>I tried also:</p>
<pre><code>dfr1['new'] = dfr1 < dfr2
*** ValueError: Can only compare identically-labeled (both index and columns) DataFrame objects
</code></pre>
<p>I see thus two problem:</p>
<ol>
<li>the two dataframes have different dimension</li>
<li>the two dataframes have different indexes.</li>
</ol>
<p>Specifically, I am not able to properly align the two indexes.</p>
<p>What do you think?</p>
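<p>One possible approach (a sketch with abbreviated data, not the full frames): derive an "MM-DD" key from the dated index with <code>strftime</code>, use it to <code>reindex</code> the second frame, and compare column-wise. This sidesteps the "identically-labeled" restriction because the comparison then happens between two columns of the same frame:</p>

```python
import pandas as pd

dfr1 = pd.DataFrame(
    {"qq": [685.519348, 3.992000, 105.099583]},
    index=pd.to_datetime(["1956-01-01", "2010-12-27", "2013-12-28"]),
)
dfr2 = pd.DataFrame(
    {"q_t": [61.629342, 69.065375, 68.935908]},
    index=["01-01", "12-27", "12-28"],
)

# Map each full date to its month-day key, then pull the matching threshold.
keys = dfr1.index.strftime("%m-%d")
dfr1["q_t"] = dfr2["q_t"].reindex(keys).to_numpy()
dfr1["flag"] = dfr1["qq"] <= dfr1["q_t"]
print(dfr1)
```

<p><code>reindex</code> repeats the climatology values for every year that shares the same month-day, which is exactly the broadcast the expected output needs.</p>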
| <python><pandas><dataframe><datetime><operation> | 2023-09-01 10:48:19 | 1 | 623 | diedro |
77,021,891 | 1,455,474 | Parallel workers initialized with individual objects in Python | <p>I need a number of workers in Python, each initialized with its own object. The main process will send commands+arguments to the workers, which should then execute methods on their object and return the result in parallel.</p>
<p>I have tried with multiprocessing.pool, but when calling pool.map, it seems random which process is executing with which argument, even when the pool is initialized with N processes and chunksize is set to 1.</p>
<pre><code>import multiprocessing

def init(a):
    global myA
    myA = a

def get_value(_):
    global myA
    return myA.value

class A():
    def __init__(self, value):
        self.value = value

if __name__ == '__main__':
    N = 4
    a_lst = [A(i) for i in range(N)]
    pool = multiprocessing.Pool(N)
    pool.map(init, a_lst, chunksize=1)
    print(pool.map(get_value, range(N), chunksize=1))
</code></pre>
<p>output</p>
<pre><code>[3, 1, 3, 1]
</code></pre>
<p>Can I do it with multiprocessing.pool or how can I do it?</p>
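<p><code>Pool.map</code> gives no control over which worker picks up which task, so per-worker state set via an init task ends up shuffled, as observed. One alternative sketch: one <code>Process</code> per object, each with its own command and result queue (this uses the <code>fork</code> start method, so it assumes a Unix host; on Windows you would use <code>spawn</code> plus a <code>__main__</code> guard):</p>

```python
import multiprocessing as mp

class A:
    def __init__(self, value):
        self.value = value

    def get_value(self):
        return self.value

def worker(a, inbox, outbox):
    # Each process owns exactly one A instance and serves commands for it.
    for method_name, args in iter(inbox.get, None):  # None = shutdown sentinel
        outbox.put(getattr(a, method_name)(*args))

ctx = mp.get_context("fork")  # assumes Unix; use "spawn" + __main__ guard elsewhere
channels = []
for i in range(4):
    inbox, outbox = ctx.Queue(), ctx.Queue()
    ctx.Process(target=worker, args=(A(i), inbox, outbox), daemon=True).start()
    channels.append((inbox, outbox))

for inbox, _ in channels:
    inbox.put(("get_value", ()))
results = [outbox.get() for _, outbox in channels]
print(results)  # [0, 1, 2, 3]

for inbox, _ in channels:
    inbox.put(None)  # shut the workers down
```

<p>Because each worker has a dedicated queue pair, the mapping from object to process is fixed, unlike with a shared pool.</p>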
| <python><parallel-processing><multiprocessing> | 2023-09-01 09:13:54 | 2 | 623 | Mads M Pedersen |
77,021,872 | 4,193,573 | How do I find lines containing a specific string when it is not prefixed by another string with regex | <p>I am parsing some html files using a python script.</p>
<p>As part of this parsing, I need to detect text which contains "<...#id=xxx>" or "<a id=xxx>", but do not want to find text which contains "<....#bshid=xxx>" or "<a bshid=xxx>"</p>
<p>In other words, in the below, I want to find line1 and line2, but NOT line3 or line4</p>
<pre><code>line1='<p><a id=somegoodstuff>SomeGoodStuff</a></p>'
line2='<p><a href="../../../blah/blahblah/abc.htm#id=somegreatstuff">SomeGreatStuff</a></p>'
line3='<p><a bshid=somebadstuff">SomeBadStuff</a></p>'
line4='<p><a href="../../../blah/blahblah/abc.htm#bshid=someawfulstuff">SomeAwfulStuff</a></p>'
</code></pre>
<p>I want to do this with minimum changes to the code, as it has been working fine so far, until "bshid=xxxx" links started appearing recently in some new html files. So with only a change to the regex doing the check if possible.</p>
<p>So here is the python script:</p>
<pre><code>import re

id_check = re.compile(r'(<\w+ ([^>]*)id=([^>]*)>)')

line1='<p><a id=somegoodstuff>SomeGoodStuff</a></p>'
line2='<p><a href="../../../blah/blahblah/abc.htm#id=somegreatstuff">SomeGreatStuff</a></p>'
line3='<p><a bshid=somebadstuff">SomeBadStuff</a></p>'
line4='<p><a href="../../../blah/blahblah/abc.htm#bshid=someawfulstuff">SomeAwfulStuff</a></p>'

if id_check.findall(line1):
    print(line1)
if id_check.findall(line2):
    print(line2)
if id_check.findall(line3):
    print(line3)
if id_check.findall(line4):
    print(line4)
</code></pre>
<p>I have tried to do the following, to exclude "bshid" but it will then exclude anything containing the characters 'b', 's' or 'h' before "id", so that is not what I want to do:</p>
<pre><code>id_check = re.compile(r'(<\w+ ([^>]*)^[bsh]id=([^>]*)>)')
</code></pre>
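<p>Assuming "bshid" is the only prefix to exclude, a fixed-width negative lookbehind keeps the rest of the pattern intact: <code>(?<!bsh)</code> rejects a match of <code>id=</code> only when the three preceding characters are literally <code>bsh</code> (unlike <code>[bsh]</code>, which is a character class matching any single one of those letters). A sketch against the four sample lines:</p>

```python
import re

# Same pattern as before, with a negative lookbehind inserted before "id=".
id_check = re.compile(r'(<\w+ ([^>]*)(?<!bsh)id=([^>]*)>)')

lines = [
    '<p><a id=somegoodstuff>SomeGoodStuff</a></p>',
    '<p><a href="../../../blah/blahblah/abc.htm#id=somegreatstuff">SomeGreatStuff</a></p>',
    '<p><a bshid=somebadstuff">SomeBadStuff</a></p>',
    '<p><a href="../../../blah/blahblah/abc.htm#bshid=someawfulstuff">SomeAwfulStuff</a></p>',
]
matches = [bool(id_check.findall(line)) for line in lines]
print(matches)  # [True, True, False, False]
```

<p>This is a single-character change to the existing regex apart from the lookbehind group, so the surrounding parsing code stays as-is.</p>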
| <python><regex> | 2023-09-01 09:10:59 | 1 | 461 | didjek |
77,021,797 | 9,751,398 | How to bundle an FMU with a png-file that illustrates the system? | <p>I use FMUs a lot for simulation of Modelica models in a Python environment. It would be nice to bundle the FMU obtained from compilation with a png-file that shows the system. Is that possible?</p>
<p>And how to access that png-file and show it in a Jupyter notebook?</p>
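<p>An FMU is a plain zip archive, and the FMI standard reserves a <code>resources/</code> directory for extra files, so a png can travel inside the archive. A self-contained sketch with an in-memory stand-in FMU (all file names hypothetical); in a notebook the bytes can then be shown with <code>IPython.display.Image(data=png_bytes)</code>:</p>

```python
import io
import zipfile

# Build a stand-in "FMU" purely for the demo; a real one comes from compilation.
fmu_bytes = io.BytesIO()
with zipfile.ZipFile(fmu_bytes, "w") as fmu:
    fmu.writestr("modelDescription.xml", "<fmiModelDescription/>")
    # Bundle the illustration under resources/ (real png bytes in practice):
    fmu.writestr("resources/system_diagram.png", b"\x89PNG\r\n\x1a\n...")

# Reading the image back out of the archive:
with zipfile.ZipFile(fmu_bytes) as fmu:
    png_bytes = fmu.read("resources/system_diagram.png")
# In Jupyter: from IPython.display import Image; Image(data=png_bytes)
```

<p>For an existing <code>.fmu</code> file on disk, opening it with <code>zipfile.ZipFile(path, "a")</code> and writing into <code>resources/</code> works the same way.</p>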
| <python><jupyter-notebook><fmi> | 2023-09-01 09:00:15 | 3 | 1,156 | janpeter |
77,021,769 | 13,911,870 | Pytest fixtures skipping test | <p>I have this factory as a fixture:</p>
<pre><code>@pytest.fixture
def user_factory(db):
    def create_app_user(
        username: str,
        password: str = None,
        first_name: str = "firstname",
        last_name: str = "lastname",
        email: str = "user@g.com",
        is_staff: str = False,
        is_superuser: str = False,
        is_active: str = True
    ):
        user_f = User.objects.create_user(
            username = username,
            password = password,
            first_name = first_name,
            last_name = last_name,
            email = email,
            is_staff = is_staff,
            is_superuser = is_superuser,
            is_active = is_active
        )
        return user_f

    return create_app_user


@pytest.fixture
def new_user(db, user_factory):
    return user_factory("myusername", "mypassword", "myfirstname")
</code></pre>
<p>I tried to use the factory to run this test which is located in test_name.py file:</p>
<pre><code>def create_new_user(new_user):
    print(new_user.first_name)
    assert new_user.first_name == "myfirstname"
</code></pre>
<p>However, the test didn't run and no error message was produced. I have the factory in my root inside the conftest.py file, as mentioned in the documentation. I also have these settings in my pytest.ini file:
<code>python_files = tests.py test_*.py *_test.py</code></p>
<p>What could have happened please?</p>
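<p>A likely cause (worth checking first): pytest collects functions by name, and the default <code>python_functions</code> pattern is <code>test_*</code>, so <code>create_new_user</code> is silently ignored rather than reported; renaming it to <code>test_create_new_user</code> should make it run. A tiny illustration of the matching rule:</p>

```python
import fnmatch

# pytest's default python_functions pattern is "test_*"; functions that do not
# match are skipped during collection with no error or warning.
def is_collected(func_name: str, pattern: str = "test_*") -> bool:
    return fnmatch.fnmatch(func_name, pattern)

assert is_collected("test_create_new_user") is True
assert is_collected("create_new_user") is False  # never collected, nothing shown
```

<p>Note that <code>python_files</code> only controls which files are scanned; function and class names are filtered separately by <code>python_functions</code> and <code>python_classes</code>.</p>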
| <python><pytest><pytest-django><pytest-fixtures> | 2023-09-01 08:54:17 | 1 | 360 | DevolamiTech |
77,021,739 | 5,300,978 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe2 in position 147: invalid continuation byte | <p>I am trying to send a file and params to a FastAPI endpoint, but I am getting the error shown in the title.</p>
<p>My code</p>
<pre><code>response = requests.post(url=f'{os.environ["URL"]}/summarize',
                         files=[('files', open(file[0].name, 'rb'))],
                         params={"oneparamter": oneparamter})
</code></pre>
<p>Endpoint</p>
<pre><code>@app.post("/summarize")
async def get_summarize(request: Request):
    mJson = await request.json()
</code></pre>
<p>How can I fix it? Also, I would like to send a very large text through the json or data parameter of requests.post, but it looks like that is impossible together with the <code>files=</code> parameter. Thank you</p>
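<p>Background on the error (a sketch, not the exact payload): <code>requests</code> encodes <code>files=</code> as <code>multipart/form-data</code>, whose body may contain arbitrary bytes, while <code>request.json()</code> first utf-8 decodes the whole body. Reproducing the failure on stand-in multipart bytes:</p>

```python
import json

# A multipart-style body with a stray non-utf-8 byte sequence (\xe2\x28),
# exactly the "invalid continuation byte" case from the traceback.
body = b'--boundary\r\nContent-Disposition: form-data; name="files"\r\n\r\n\xe2\x28\xa1\r\n--boundary--\r\n'
try:
    json.loads(body.decode("utf-8"))
    reason = None
except UnicodeDecodeError as exc:
    reason = exc.reason
print(reason)  # invalid continuation byte
```

<p>On the server side, declaring the parameters as <code>UploadFile</code> / <code>Form</code> (FastAPI's documented multipart handling) avoids calling <code>request.json()</code> entirely; values sent via <code>params=</code> go in the query string and never touch the body, though very large text is better carried as a form field than a query parameter.</p>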
| <python><fastapi> | 2023-09-01 08:49:19 | 1 | 1,324 | M. Mariscal |
77,021,698 | 10,194,070 | python + run sed command in python | <p>I created a python3 module file (<code>sed_util_conf.py</code>) that includes a <code>sed</code> Linux command that replaces an <strong>old</strong> string with a <strong>new</strong> one (for example, the key <code>http-server.https.port</code> and its value)</p>
<pre><code>def run_cli(action_details, command_to_run):
    print(action_details)
    print(command_to_run)
    errCode = os.system(command_to_run)
    if errCode != 0:
        print('failed, ' + command_to_run)
        sys.exit(-1)
    print(action_details + ' - action done')


def replace_line_do(File, OLD, NEW):
    .
    .
    .
    run_cli('run_sed_cli', f'sed -ie s/^{OLD}/{NEW}/ {File}')
</code></pre>
<p>In another python3 script I used the following syntax in order to replace the "<code>OLD</code>" string with the "<code>NEW</code>" one:</p>
<pre><code> sed_util_conf.replace_line_do(f'{file_path}', 'http-server.https.port=.*', f'http-server.https.port={val}')
</code></pre>
<p>example:</p>
<pre><code> sed_util_conf.replace_line_do(f'/etc/zomer_app/conf/config.zomer', 'http-server.https.port=.*', f'http-server.https.port={val}')
</code></pre>
<p>The script works and replaces the OLD string with the NEW string in the file, but additionally we get another file that ends with "<code>e</code>".</p>
<p>for example</p>
<p>if we replaced the line on file - <code>/etc/zomer_app/conf/config.zomer</code></p>
<p>then we get two files</p>
<pre><code>/etc/zomer_app/conf/config.zomer
/etc/zomer_app/conf/config.zomere
</code></pre>
<p>The reason for that is the "<code>e</code>" character in the syntax - <code>run_cli('sed', f'sed -ie s/^{OLD}/{NEW}/ {File}')</code></p>
<p>From the sed man page, "<code>e</code>" means "<code>-e script, --expression=script, add the script to the commands to be executed</code>".</p>
<p><strong>But I do not understand</strong> how this "<code>e</code>" in <code>sed</code> creates the additional file - <code>/etc/zomer_app/conf/config.zomere</code>?</p>
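<p>A probable explanation (verify against your sed version): GNU sed parses <code>-ie</code> as <code>-i</code> with the optional backup SUFFIX <code>e</code> (the syntax is <code>-i[SUFFIX]</code>), not as <code>-i -e</code>. The original file is saved under its name plus the suffix, hence <code>config.zomere</code>. A sketch of building the command with the flags separated and the script quoted via <code>shlex</code> (the replacement value is hypothetical):</p>

```python
import shlex

# "sed -i -e SCRIPT FILE" edits in place with no backup; "sed -ie" would
# instead create FILE + "e" as a backup copy of the original.
old = 'http-server.https.port=.*'
new = 'http-server.https.port=8443'   # hypothetical value
path = '/etc/zomer_app/conf/config.zomer'
command = f"sed -i -e {shlex.quote(f's/^{old}/{new}/')} {shlex.quote(path)}"
print(command)
```

<p>Quoting the script also protects the <code>*</code> and <code>^</code> from the shell, which <code>os.system</code> would otherwise hand to glob/history expansion unescaped.</p>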
| <python><python-3.x><linux><sed> | 2023-09-01 08:45:29 | 1 | 1,927 | Judy |
77,021,670 | 3,678,257 | Storing a list of floats or integers in Redis Search | <p>I'm testing the Redis vector search solution. I have installed the Redis server and have been following these tutorials and documentation:</p>
<ol>
<li><a href="https://github.com/openai/openai-cookbook/blob/main/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb" rel="nofollow noreferrer">https://github.com/openai/openai-cookbook/blob/main/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb</a></li>
<li><a href="https://redis.io/docs/interact/search-and-query/" rel="nofollow noreferrer">https://redis.io/docs/interact/search-and-query/</a></li>
</ol>
<p>Here is the index structure that I would like to create:</p>
<pre class="lang-py prettyprint-override"><code>id = NumericField(name="id")
user_id = NumericField(name="user_id")
title = TextField(name="title")
seo = NumericField(name="seo")
email = TextField(name="email")
tariff = NumericField(name="tariff")
cash = NumericField(name="cash")
title_embedding = VectorField("embeddings",
    "HNSW", {
        "TYPE": "FLOAT32",
        "DIM": VECTOR_DIM,
        "DISTANCE_METRIC": DISTANCE_METRIC,
        "INITIAL_CAP": VECTOR_NUMBER,
    }
)
rubrics = ???
</code></pre>
<p>It seems like there is a VERY limited number of data types supported by this "version" of Redis (<em>it's still not clear to me whether this vector based Redis is any different from a regular Redis</em>).<br />
So based on the <a href="https://redis.io/docs/interact/search-and-query/basic-constructs/field-and-type-options/" rel="nofollow noreferrer">docs</a> <em>this</em> Redis supports only the following data types:</p>
<pre><code>Number Fields, Geo Fields, Vector Fields, Tag Fields, Text Fields
</code></pre>
<p>How should I store a list of ints in <em>this</em> version of Redis?<br />
I want to store a list of ints in the <code>rubrics</code> field, the list looks like this: <code>[1, 22, 33]</code></p>
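<p>One common workaround, assuming exact-match filtering on individual ids is enough: serialize the list to a delimiter-joined string and index the field as a TAG (in redis-py's search module that would be <code>TagField(name="rubrics", separator=",")</code>, enabling queries like <code>@rubrics:{22}</code> - treat the exact API spelling as an assumption for your client version). The (de)serialization itself is plain Python:</p>

```python
# Store [1, 22, 33] as the string "1,22,33"; a TAG field with separator=","
# then indexes each element as its own searchable tag.
rubrics = [1, 22, 33]
serialized = ",".join(map(str, rubrics))       # value written to the hash field
restored = [int(x) for x in serialized.split(",")]

assert serialized == "1,22,33"
assert restored == rubrics
```

<p>Numeric range queries over the elements are not possible this way; for that, the usual pattern is one document per (item, rubric) pair or a separate numeric field per rubric.</p>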
| <python><search><redis> | 2023-09-01 08:41:06 | 1 | 664 | ruslaniv |
77,021,604 | 2,876,079 | How to switch between local and installed version of python package, including IDE support (PyCharm)? | <p>I develop a package fhg_isi and currently have installed version 0.0.1.</p>
<p>What is the recommended way to switch between the local version 0.0.2 (being in development) and the installed version?</p>
<p>I am already able to load the local version by modifying the system path (and If I want to use the installed version, I can comment out the modification of the system path)</p>
<pre><code>import sys
# If you want to use the local version of fhg_isi package,
# insert its absolute path to the python system path, for example:
local_fhg_isi_path = 'C:/python_env/workspace/fhg_isi/src'
sys.path.insert(1, local_fhg_isi_path) # out comment this line to use installed version
from fhg_isi.gis.geojson_factory import GeojsonFactory # This import stays the same for both cases
...
</code></pre>
<p>Also see: <a href="https://stackoverflow.com/questions/47819398/how-can-you-import-a-local-version-of-a-python-package">How can you import a local version of a python package?</a></p>
<p>However, <strong>PyCharm does not recognize the switch of the libraries</strong>. If I Ctrl-Click on <code>GeojsonFactory</code>, I still navigate to the installed version instead of local version.</p>
<p>=> Is there a more comfortable way to switch between the packages, including IDE support, without modification of all the import statements?</p>
| <python><import><pycharm><package-management> | 2023-09-01 08:30:25 | 1 | 12,756 | Stefan |
77,021,415 | 7,973,386 | ModuleNotFoundError: No module named 'pytorch3d.io' in colab | <p>I was running this Google Colab notebook today; everything was working fine, but eventually I started getting these errors in the Set Up Environment step. I can't find a fix. Any help would be appreciated; let me know if I need to provide more info.</p>
<p>Here is the link for the colab <a href="https://colab.research.google.com/drive/15-vJXyGQvfMSOgPneQqwNyrBh5rnBaV-?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/15-vJXyGQvfMSOgPneQqwNyrBh5rnBaV-?usp=sharing</a></p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-36-84909fa89079> in <cell line: 48>()
46 clear_output()
47 #render video
---> 48 from lib.colab_util import generate_video_from_obj, set_renderer, video
49
50 renderer = set_renderer()
/content/pifuhd/lib/colab_util.py in <module>
33
34 # Util function for loading meshes
---> 35 from pytorch3d.io import load_objs_as_meshes
36
37 from IPython.display import HTML
ModuleNotFoundError: No module named 'pytorch3d.io'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
</code></pre>
| <python><google-colaboratory> | 2023-09-01 07:59:07 | 1 | 1,370 | Balaji Venkatraman |
77,021,379 | 5,821 | How to validate deeply nested data structures using Pydantic? | <p>Suppose I have the following data structure:</p>
<pre class="lang-py prettyprint-override"><code>{
"code": 123,
"a": {
"b": {
"c": {
"d": {
"value": "hello world"
}
}
}
}
}
</code></pre>
<p>And I want to validate its structure using Pydantic, and access the <code>code</code> and <code>value</code> fields. For this example, the interesting data is only at the root level, and at the innermost nesting level.</p>
<p>To handle the nesting I could create a few helper model classes:</p>
<pre><code>from pydantic import BaseModel

sample = {"code": 123, "a": {"b": {"c": {"d": {"value": "hello world"}}}}}

class D(BaseModel):
    value: str

class C(BaseModel):
    d: D

class B(BaseModel):
    c: C

class A(BaseModel):
    b: B

class MyModel(BaseModel):
    code: int
    a: A

obj = MyModel(**sample)
print(obj.code, obj.a.b.c.d.value)
</code></pre>
<p>Is there a way to "shortcut" the "a" - "d" nesting levels somehow, so I wouldn't need as many helper classes?</p>
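<p>For comparison, without any validation the access is just nested dict indexing, which is effectively what I am trying to replicate with fewer helper classes:</p>

```python
sample = {"code": 123, "a": {"b": {"c": {"d": {"value": "hello world"}}}}}

# Plain dict access: no validation, but also no helper classes.
code = sample["code"]
value = sample["a"]["b"]["c"]["d"]["value"]
print(code, value)  # 123 hello world
```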
| <python><pydantic> | 2023-09-01 07:53:08 | 2 | 45,522 | Pēteris Caune |
77,021,203 | 2,678,074 | My hex values are converted to int while passing as args | <p>I am creating a USB connection, and this works.</p>
<pre><code>device = usb.core.find(idVendor=0x47f, idProduct=0x2e6)
</code></pre>
<p>According to the signature of the method, those arguments are passed to the method as <em>args</em>:</p>
<pre><code>def find(find_all=False, backend = None, custom_match = None, **args):
</code></pre>
<p>I want to make a class that uses this method and can take the same input, something like:</p>
<pre><code> class MyNewClass(object):
def __init__(self, find_all=False, backend=None, custom_match=None, **args):
self.find_all = find_all
self.backend = backend
self.custom_match = custom_match
self.args = args
def __enter__(self):
self.device = usb.core.find(find_all=self.find_all, backend=self.backend, custom_match=self.custom_match,
args=self.args)
return self.device
</code></pre>
<p>Unfortunately, that approach translates the input hex values to int, and this is something that <em>usb.core.find</em> cannot take, therefore it returns <em>None</em>.</p>
<pre><code> with MyNewClass(idVendor=0x47f, idProduct=0x2e6) as aaa:
aaa = ''
</code></pre>
<p><a href="https://i.sstatic.net/9Wp9Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Wp9Y.png" alt="enter image description here" /></a></p>
<p>I would like to avoid that conversion, and I am looking for ideas.</p>
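<p>For context, as far as I can tell a hex literal in Python is already a plain int; the hex notation is lost the moment the literal is parsed, which may be what I am seeing here:</p>

```python
# A hex literal is just an int written in base 16; the value is identical.
id_vendor = 0x47f
print(id_vendor)          # 1151
print(id_vendor == 1151)  # True
print(hex(id_vendor))     # '0x47f'
```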
| <python><integer><hex> | 2023-09-01 07:17:37 | 1 | 845 | user2678074 |
77,021,145 | 17,896,651 | Migrate a Django project to a new server with an existing PostgreSQL database of another Django project | <p>I have 2 Django apps on separate servers.</p>
<p>Both have completely different models</p>
<p>The database name of app1 is xxxx1</p>
<p>The database name of app2 is xxxx2</p>
<p>How do I migrate xxxx2 to server1 without damaging anything?</p>
| <python><django><database><postgresql> | 2023-09-01 07:06:42 | 1 | 356 | Si si |
77,021,140 | 1,852,526 | pygithub search and read specific files | <p>I am using pyGithub to go through the files in the GitHub repository. The problem is, with this code <code>my_code.get_contents("")</code>, it goes through each and every file in all the folders and subfolders in the repo. Is there a way to make this code more efficient? I am only interested in parsing the .csproj files and the packages.config files where they are found. But these files are scattered in multiple places.</p>
<pre><code>from github import Github
import pathlib
import xml.etree.ElementTree as ET
def processFilesInGitRepo():
while len(contents)>0:
file_content = contents.pop(0)
if file_content.type=='dir':
contents.extend(my_code.get_contents(file_content.path))
else :
path=pathlib.Path(file_content.path)
file_name=path.name
extention=path.suffix
if(file_name=='packages.config'):
parseXMLInPackagesConfig(file_content.decoded_content.decode())
if(extention=='.csproj'):
parseXMLInCsProj(file_content.decoded_content.decode())
print(file_content)
my_git=Github("MyToken")
my_code=my_git.get_repo("BeclsAutomation/Echo65XPlus")
contents=my_code.get_contents("") #empty string i.e. ("") gives all the items in the Repository. But can I specify some kind of a search term here saying I need only .csproj and packages.config files.
processFilesInGitRepo()
</code></pre>
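<p>The name check itself is simple; it is the recursive <code>get_contents</code> traversal that is slow. For reference, the filter I apply can be reduced to this (pure pathlib, no GitHub access needed):</p>

```python
import pathlib

def is_interesting(path_str):
    # True only for the two file kinds I want to parse.
    path = pathlib.PurePosixPath(path_str)
    return path.name == 'packages.config' or path.suffix == '.csproj'

print(is_interesting('src/App/App.csproj'))       # True
print(is_interesting('src/App/packages.config'))  # True
print(is_interesting('src/App/Program.cs'))       # False
```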
| <python><pygithub> | 2023-09-01 07:05:23 | 1 | 1,774 | nikhil |
77,021,106 | 10,669,819 | Is there any way to generate docstrings for Python code using GitHub Copilot (VS Code)? | <p>Is there any way to generate a docstring using GitHub Copilot?</p>
<p>I have code and I want to generate a docstring for it.</p>
<pre><code>def make_chat_content(self,chat_uuid,text,db_session):
import uuid
all_content = ChatContent.query.filter_by(chat_uuid=chat_uuid).all()
chat_json = dump(all_content)
chat_json.append({"role":"user","content":text})
response, total_words_generated = self.chat.get_response(chat_json)
</code></pre>
<p>For example, a docstring that describes the function:</p>
<pre><code>"""
Add the user's text to the chat content and generate a response.
This function takes the chat UUID, user's text, and a database session as input.
It appends the user's content to the existing chat content, generates a response
using a chat model, and returns the response along with the total words generated.
Args:
chat_uuid (str): The UUID of the chat session.
text (str): The text content provided by the user.
db_session: The database session for querying chat content.
Returns:
tuple: A tuple containing the response generated by the chat model and
the total number of words generated in the response.
"""
</code></pre>
| <python><visual-studio-code><github-copilot> | 2023-09-01 06:56:20 | 2 | 580 | Usman Rafiq |
77,021,026 | 12,466,687 | How to expand list columns into rows in polars? | <p>I am trying to <strong>expand the list column data</strong> into a non-list format in <code>Polars</code> but am not able to do it.</p>
<p><strong>dataframe</strong></p>
<pre><code>import polars as pl
pl_mtcars = pl.DataFrame({'model': ['Mazda RX4','Mazda RX4 Wag','Datsun 710','Hornet 4 Drive',
'Hornet Sportabout','Valiant','Duster 360','Merc 240D','Merc 230','Merc 280'],
'mpg': [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2],
'cyl': [6, 6, 4, 6, 8, 6, 8, 4, 4, 6],
'disp': [160.0,160.0,108.0,258.0,360.0,225.0,360.0,146.7,140.8,167.6],
'hp': [110, 110, 93, 110, 175, 105, 245, 62, 95, 123],
'drat': [3.9, 3.9, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92],
'wt': [2.62, 2.875, 2.32, 3.215, 3.44, 3.46, 3.57, 3.19, 3.15, 3.44],
'qsec': [16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.0, 22.9, 18.3],
'vs': [0, 0, 1, 1, 0, 1, 0, 1, 1, 1],
'am': [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
'gear': [4, 4, 4, 3, 3, 3, 3, 4, 4, 4],
'carb': [4, 4, 1, 1, 2, 1, 4, 2, 2, 4]}
)
</code></pre>
<p><strong>creating list column dataset below</strong></p>
<pre><code>pl_mtcars2 = .select([
"model","mpg","cyl",
pl.col("mpg").mean().over("cyl").alias("mean_mpg_by_cyl"),
pl.col("mpg").over("cyl", mapping_strategy="join").alias("mpg_by_cyl") #.flatten()
])
pl_mtcars2.head()
</code></pre>
<pre><code>┌────────────────┬──────┬─────┬─────────────────┬──────────────────────┐
│ model ┆ mpg ┆ cyl ┆ mean_mpg_by_cyl ┆ mpg_by_cyl │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ f64 ┆ i64 ┆ f64 ┆ list[f64] │
╞════════════════╪══════╪═════╪═════════════════╪══════════════════════╡
│ Mazda RX4 ┆ 21.0 ┆ 6 ┆ 20.14 ┆ [21.0, 21.0, … 19.2] │
│ Mazda RX4 Wag ┆ 21.0 ┆ 6 ┆ 20.14 ┆ [21.0, 21.0, … 19.2] │
│ Datsun 710 ┆ 22.8 ┆ 4 ┆ 23.333333 ┆ [22.8, 24.4, 22.8] │
│ Hornet 4 Drive ┆ 21.4 ┆ 6 ┆ 20.14 ┆ [21.0, 21.0, … 19.2] │
│ … ┆ … ┆ … ┆ … ┆ … │
│ Duster 360 ┆ 14.3 ┆ 8 ┆ 16.5 ┆ [18.7, 14.3] │
│ Merc 240D ┆ 24.4 ┆ 4 ┆ 23.333333 ┆ [22.8, 24.4, 22.8] │
│ Merc 230 ┆ 22.8 ┆ 4 ┆ 23.333333 ┆ [22.8, 24.4, 22.8] │
│ Merc 280 ┆ 19.2 ┆ 6 ┆ 20.14 ┆ [21.0, 21.0, … 19.2] │
</code></pre>
<p>Getting an <strong>error</strong> when trying to <strong>expand</strong> <code>mpg_by_cyl</code> into rows instead of a list:</p>
<pre><code>pl_mtcars2.select(pl.all(),
pl.exclude('mpg_by_cyl'),
pl.col('mpg_by_cyl').explode()).head(10)
</code></pre>
| <python><dataframe><python-polars> | 2023-09-01 06:44:10 | 1 | 2,357 | ViSa |
77,020,991 | 10,170,808 | Why is the path different between the Python terminal in the code editor vs after compiling into an exe? | <p>So I have this simple code in <code>C:\Work\Python Scripts\dfr_monthly\myfile.py</code>:</p>
<pre><code>import os
print(os.getcwd())
print(os.path.dirname(__file__))
</code></pre>
<p>When I run this file from code editor (VSCode), the output in the python terminal is</p>
<pre><code>(base) C:\Work>C:/Users/coco/Anaconda3/python.exe "c:/Work/Python Scripts/dfr_monthly/myfile.py"
C:\Work
c:\Work\Python Scripts\dfr_monthly
</code></pre>
<p>Then I compiled it into an exe, but when I run it the output is:
<a href="https://i.sstatic.net/wkAAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wkAAo.png" alt="enter image description here" /></a></p>
<p>Why is it different between the exe output and the Python terminal in VSCode? I'm kind of confused about what's happening here.</p>
| <python> | 2023-09-01 06:37:21 | 1 | 791 | random student |
77,020,971 | 1,503,984 | How to properly terminate or close a background task in aiohttp application? | <p>I'm writing an app using aiohttp and asyncio. One of the parts of the app is supposed to continuously monitor a specific device for incoming data. It works as expected, except that I cannot close the app if no data comes from the device. Here's a stripped version of the code:</p>
<pre class="lang-py prettyprint-override"><code>async def background_tasks(app):
app['scaner_listener'] = asyncio.create_task(watchscaner(app))
yield
app['scaner_listener'].cancel()
await app['scaner_listener']
async def shutdown(app):
try:
app['scaner_listener'].cancel()
except BaseException as Err:
logging.error(f'Error in the shutdown cleanup: {type(Err)} {Err.args}')
def main():
app = web.Application()
app.cleanup_ctx.append(background_tasks)
app.on_shutdown.append(shutdown)
web.run_app(app)
</code></pre>
<p>The code above is basically a copy-paste from official documentation while the function below is the one that (supposedly) causes the problem:</p>
<pre class="lang-py prettyprint-override"><code>async def watchscaner(app):
try:
async with aiofiles.open("/dev/scaner0", "r") as dev:
buffer = ""
b = await dev.read(1) # line 1
while b: # line 2
if b == "\n":
buffer = ""
elif b == "\x00":
if len(buffer) > 0:
logging.info(f"Scanned {buffer=}")
buffer = ""
else:
buffer += b
b = await dev.read(1) # line 3
except asyncio.CancelledError:
logging.warning('Cancelled error')
finally:
logging.info('Finally came here')
</code></pre>
<p>Also I tried the following approach for the commented "line1-2"</p>
<pre class="lang-py prettyprint-override"><code> while True:
b = await dev.read(1)
</code></pre>
<p>with or without the "line 3".</p>
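<p>For reference, the buffering logic itself, separated from the async I/O, is just this (a synchronous sketch of the same loop):</p>

```python
def parse_stream(chars):
    # Collect characters into NUL-terminated buffers; '\n' resets the buffer.
    scanned = []
    buffer = ""
    for b in chars:
        if b == "\n":
            buffer = ""
        elif b == "\x00":
            if buffer:
                scanned.append(buffer)
            buffer = ""
        else:
            buffer += b
    return scanned

print(parse_stream("ABC\x00partial\nDEF\x00"))  # ['ABC', 'DEF']
```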
<p>In all the cases, the application works fine, and the data from the scanner is received without any problems. The only problem is that when I exit the app, the <code>watchscaner</code> task is still waiting for any incoming data from <code>/dev/scaner0</code> and exits only after that.</p>
<p>It looks like I'm not awaiting the data properly, but this is my assumption only.</p>
<p>Any suggestions are highly appreciated.</p>
| <python><python-3.x><python-asyncio><aiohttp> | 2023-09-01 06:34:26 | 0 | 446 | miwa |
77,020,925 | 10,232,932 | Running a for-loop for the same function several times for a filtered dataframe, pythonic way to do that | <p>I have the following example pandas dataframe df:</p>
<pre><code>TypeA TypeB timepoint value
A AB 1 10
A AB 2 10
A AC 1 5
A AC 2 15
A AC 3 10
...
D DB 1 1
D DB 2 1
</code></pre>
<p>How can I run a function several times on the unique combinations of 'TypeA' and 'TypeB' and store the results in a new dataframe? Let's assume the following function, which stands in for many other functions I want to run on it:</p>
<pre><code>import numpy as np
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / float(N)
</code></pre>
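<p>For example, on a small list the helper produces a window-2 moving average (the averages of each adjacent pair):</p>

```python
import numpy as np

def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)

# Averages of each adjacent pair: 10.0, 7.5, 10.0
result = running_mean([10, 10, 5, 15], 2)
print(result)
```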
<p>Right now, I use a <code>for-loop</code>, but I think that is not a good idea. Is there a more pythonic way to do that?</p>
<pre><code>df4 = pd.DataFrame()
for i in df['TypeA'].unique().tolist():
    df2 = df[df['TypeA'] == i]
    for j in df2['TypeB'].unique().tolist():
        df3 = df2[df2['TypeB'] == j]
        moving_av = running_mean(df3['value'].values, 2)
        df3['moving_av'] = 0
        df3.iloc[1:1+len(moving_av), df3.columns.get_loc('moving_av')] = moving_av
        df4 = pd.concat([df4, df3])
df = pd.merge(df, df4, how='left', on=['TypeA', 'TypeB', 'timepoint'])
</code></pre>
<p>my desired output is:</p>
<pre><code>TypeA TypeB timepoint value moving_av
A AB 1 10 0
A AB 2 10 10
A AC 1 5 0
A AC 2 15 10
A AC 3 10 12.5
...
D DB 1 1 0
D DB 2 1 1
</code></pre>
<p>Please note that the simple function is only an example; I am searching for a solution that also works for a bigger function.</p>
| <python><pandas><dataframe> | 2023-09-01 06:25:23 | 1 | 6,338 | PV8 |
77,020,917 | 22,070,773 | How to add a submodule to an extension with nanobind? | <p>I want to be able to add a submodule to my extension, so that I can do:</p>
<p><code>import my_ext.sub_ext</code></p>
<p>The goal is to split up the extension so that different sub-components of the extension can be imported separately.</p>
<p>This is done in <code>Boost::python</code> like this:</p>
<pre><code>bp::object class1Module(bp::handle< (bp::borrowed(PyImport_AddModule("my_ext.sub_ext"))));
bp::scope().attr("sub_ext") = class1Module;
// set the current scope to the new sub-module
bp::scope io_scope = class1Module;
// any classes defined will now be in my_ext.sub_ext
...
</code></pre>
<p>How is this to be achieved in nanobind?</p>
| <python><c++><nanobind> | 2023-09-01 06:23:53 | 1 | 451 | got here |
77,020,794 | 2,556,795 | How to get last date along with its multiple rows | <p>I have a dataframe as below</p>
<pre><code>import pandas as pd
from io import StringIO
df = pd.read_csv(StringIO("""
id date1 date2
101 2021-10-01 2021-09-01
101 2021-11-01 2021-10-01
101 2021-11-01 2021-12-01
102 2022-09-01 2022-09-01
102 2022-11-01 2023-01-01
103 2021-10-01 2021-11-01
103 2021-10-01 2021-11-01
103 2021-12-01 2021-11-01
103 2021-12-01 2021-12-01
103 2021-12-01 2022-01-01"""), sep="\s+", parse_dates=["date1", "date2"])
</code></pre>
<p>I am trying to get the latest <code>date1</code> for each <code>id</code> along with its multiple entries. My expected result is as below</p>
<pre><code>id date1 date2
101 2021-11-01 2021-10-01
101 2021-11-01 2021-12-01
102 2022-11-01 2023-01-01
103 2021-12-01 2021-11-01
103 2021-12-01 2021-12-01
103 2021-12-01 2022-01-01
</code></pre>
<p>In this case <code>drop_duplicates</code> with <code>keep='last'</code> will only give me the last entry for each id but I need all last dates in each id. And <code>groupby</code> with <code>tail</code> will not work because the value for <code>tail</code> is not static and can differ for each id. Is there any way to get this?</p>
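<p>For example, this is what I tried with <code>drop_duplicates</code>; it collapses each id to a single row, which is not what I want:</p>

```python
import pandas as pd
from io import StringIO

df = pd.read_csv(StringIO("""
id date1 date2
101 2021-10-01 2021-09-01
101 2021-11-01 2021-10-01
101 2021-11-01 2021-12-01
102 2022-09-01 2022-09-01
102 2022-11-01 2023-01-01"""), sep=r"\s+", parse_dates=["date1", "date2"])

# keep='last' keeps only one row per id, losing the other rows
# that share the same latest date1
last_rows = df.drop_duplicates(subset="id", keep="last")
print(len(last_rows))  # 2, but id 101 has two rows with date1 == 2021-11-01
```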
| <python><python-3.x><pandas> | 2023-09-01 05:59:45 | 3 | 1,370 | mockash |
77,020,618 | 882,512 | XAMPP on Windows Server: trying to run a Python script from PHP | <p>I am trying to execute a Python script from PHP on Windows Server.
I have read tutorials and answers here on SO but I cannot make it work.</p>
<p>My Script is:</p>
<pre><code>$cmd = escapeshellcmd('c:/Windows/py.exe c:/xampp/htdocs/python.py');
echo shell_exec($cmd);
</code></pre>
<p>But it does not work.</p>
<p>I also tried</p>
<pre><code>$cmd = escapeshellcmd('c:/xampp/htdocs/python.py');
echo shell_exec($cmd);
</code></pre>
<p>I also moved the python file to where php.exe is and tried:</p>
<pre><code>$cmd = escapeshellcmd('python.py');
echo shell_exec($cmd);
</code></pre>
<p>But nothing of the above works.</p>
<p>Any help is appreciated.</p>
| <python><php> | 2023-09-01 05:02:33 | 1 | 753 | Christoforos |
77,020,444 | 22,070,773 | How to manually build a nanobind extension? | <p>I am trying to manually build the example extension for nanobind:</p>
<pre><code>// main.cpp
#include <nanobind/nanobind.h>
int add(int a, int b) { return a + b; }
NB_MODULE(my_ext, m) {
m.def("add", &add);
}
</code></pre>
<blockquote>
<p>g++ -I/usr/include/python3.8 -I./nanobind/include main.cpp libnanobind.a /usr/lib/python3.8/config-3.8-x86_64-linux-gnu/libpython3.8-pic.a -std=c++17 -shared -fPIC -O3 -o my_ext.so</p>
</blockquote>
<pre><code># main.py
import my_ext
print (my_ext.add(1, 2))
</code></pre>
<p>But, I am getting this error:</p>
<pre><code>Fatal Python error: _PyInterpreterState_Get(): no current thread state
Current thread 0x00007f77aea1e740 (most recent call first):
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1043 in create_module
File "<frozen importlib._bootstrap>", line 583 in module_from_spec
File "<frozen importlib._bootstrap>", line 670 in _load_unlocked
File "<frozen importlib._bootstrap>", line 967 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 983 in _find_and_load
File "main.py", line 1 in <module>
Aborted (core dumped)
</code></pre>
<p>I have looked around and it seems this is due to a mismatch between Python versions. I am building <code>libnanobind.a</code> with <code>-I /usr/include/python3.8</code> and, as far as I can tell, the versions are the same, so I am not sure what else to check.</p>
<p>Does anyone have any ideas?</p>
<p>Thanks</p>
| <python><c++><nanobind> | 2023-09-01 04:05:18 | 0 | 451 | got here |
77,020,430 | 8,075,540 | Logging from QueueHandler not appearing until future.result() is called | <p>I'm playing around with <code>logging.handlers.QueueHandler</code> (I'm trying to integrate it into my pytest suite). Here's my MRE:</p>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
import logging
import logging.handlers
import multiprocessing
import threading
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
def init_job(log_queue):
logging.getLogger().handlers = [logging.handlers.QueueHandler(log_queue)]
def func():
logger.info('Here')
def thread_func(log_queue):
while (record := log_queue.get()) is not None:
logger.info('Handling record')
logger.handle(record)
def main():
log_queue = multiprocessing.Queue()
thread = threading.Thread(target=thread_func, args=(log_queue,))
thread.start()
with concurrent.futures.ProcessPoolExecutor(initializer=init_job, initargs=(log_queue,)) as executor:
future = executor.submit(func)
future.result()
log_queue.put(None)
thread.join()
</code></pre>
<p>This works as expected. However, I notice that the thread function doesn't receive the record until after <code>future.result()</code> is called. That is, if I put <code>future.result()</code> after <code>thread.join()</code>, nothing gets logged.</p>
<p>How do I get my records in real time?</p>
| <python><logging><multiprocessing> | 2023-09-01 04:00:58 | 1 | 6,906 | Daniel Walker |
77,020,338 | 9,300,659 | Robot framework keywords are showing "No keyword with name xyz" | <p>I am new to Robot Framework and Python. I am facing a problem with keywords, which show me the error <code>No keyword with name xyz</code>. For example, in the file <code>csv_handler.py</code>:</p>
<pre><code>import all library and key words
class csv_handler:
@keyword("Extract text from CSV file")
def extract_text_from_CSV_file(self, file_name):
execution_mode = BuiltIn().get_variable_value("${EXECUTION_MODE}")
download_folder = config_file[execution_mode]["download_folder"]
userName = getpass.getuser()
localUserName = userName.strip("adm-")
download_folder = download_folder.replace("<local_user>", localUserName)
data = []
with open(download_folder.strip() + file_name, 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append(row)
return data
</code></pre>
<p>I needed a new keyword in the file <code>csv_handler.py</code>, so I just copied the old keyword and modified it to make the new keyword. Now the file looks like this:</p>
<pre><code>import all library and key words
class csv_handler:
@keyword("Extract text from CSV file")
def extract_text_from_CSV_file(self, file_name):
execution_mode = BuiltIn().get_variable_value("${EXECUTION_MODE}")
download_folder = config_file[execution_mode]["download_folder"]
userName = getpass.getuser()
localUserName = userName.strip("adm-")
download_folder = download_folder.replace("<local_user>", localUserName)
data = []
with open(download_folder.strip() + file_name, 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append(row)
return data
@keyword("Extract text from created CSV file")
def extract_text_from_CSV_file(self, file_name):
execution_mode = BuiltIn().get_variable_value("${EXECUTION_MODE}")
cwd = os.getcwd()
upload_folder = cwd + config_file[execution_mode]["upload_folder"]
userName = getpass.getuser()
localUserName = userName.strip("adm-")
upload_folder = upload_folder.replace("<local_user>", localUserName)
data = []
with open(upload_folder.strip() + file_name, 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
data.append(row)
return data
</code></pre>
<p>After making my new keyword in the file <code>csv_handler.py</code>, if I go to the <strong>.robot</strong> file and try to run the test script where I used the old keyword, it throws the error "No keyword with name old keyword" and as a result the script fails.</p>
<p>I tried restarting my VSCode and my laptop but did not get a chance to fix it. Your help will be appreciated. Thank you.</p>
| <python><python-3.x><selenium-webdriver><robotframework><selenium-server> | 2023-09-01 03:23:51 | 0 | 558 | Satyajit Behera |
77,020,319 | 7,058,869 | Having Python files in an R package - where do I put them? | <p>I am working on an R package and for various reasons I need to write some code in Python.</p>
<p>For my current workflow I have the various Python scripts sitting in the root directory, which I run in my various R scripts with <code>reticulate::py_run_file()</code> followed by <code>reticulate::py$my_function()</code> (not ideal, I know).</p>
<p>So my question is:</p>
<p><strong>Where do I place my Python files when developing an R package?</strong></p>
| <python><r><r-package><reticulate><file-structure> | 2023-09-01 03:17:47 | 1 | 1,076 | Bensstats |
77,020,278 | 12,931,358 | How to load a huggingface dataset from local path? | <p>Take a simple example on Hugging Face: <a href="https://huggingface.co/datasets/Dahoas/rm-static" rel="nofollow noreferrer">Dahoas/rm-static</a></p>
<p>If I want to load this dataset online, I just directly use:</p>
<pre class="lang-py prettyprint-override"><code>from datasets import load_dataset
dataset = load_dataset("Dahoas/rm-static")
</code></pre>
<p>Now I want to load the dataset from a local path, so first I downloaded the files, keeping the same folder structure from the Hugging Face <code>Files and versions</code> tab:</p>
<pre><code>rm-static
├── data
│ ├── test-00000-of-00001-bf4c733542e35fcb.parquet
│ └── train-00000-of-00001-2a1df75c6bce91ab.parquet
├── .gitattributes
├── README.md
└── dataset_infos.json
</code></pre>
<p>Then I put them into my folder, but it shows an error when loading:</p>
<pre class="lang-py prettyprint-override"><code>dataset_path = "/data/coco/dataset/Dahoas/rm-static"
tmp_dataset = load_dataset(dataset_path)
</code></pre>
<p>It shows:</p>
<blockquote>
<p>FileNotFoundError: No (supported) data files or dataset script found in /data/coco/dataset/Dahoas/rm-static.</p>
</blockquote>
| <python><huggingface><huggingface-datasets><huggingface-hub> | 2023-09-01 03:03:06 | 4 | 2,077 | 4daJKong |
77,020,135 | 955,091 | How to create and compare syrupy snapshots in hypothesis's stateful testing? | <p>I want to create a <code>hypothesis.stateful.RuleBasedStateMachine</code> to run <a href="https://hypothesis.readthedocs.io/en/latest/stateful.html" rel="nofollow noreferrer">stateful testing</a> in Python. When the test is running, I want it to be deterministic and either update some snapshots or compare with existing, so that I can manually review whether the changes in the snapshot makes sense.</p>
<p>Normally I can use <a href="https://github.com/tophat/syrupy" rel="nofollow noreferrer">syrupy</a>'s <code>snapshot</code> assertion fixture to compare and update snapshot files.</p>
<p>Unfortunately, <code>hypothesis</code>'s stateful testing does not support fixtures, therefore I cannot directly use the <code>snapshot</code> fixture.</p>
<p>What should I do to use <code>syrupy</code> with <code>hypothesis</code> in stateful testing?</p>
| <python><testing><python-hypothesis><property-based-testing><snapshot-testing> | 2023-09-01 02:10:40 | 1 | 3,773 | Yang Bo |
77,020,014 | 17,197,068 | How to select an option from a FIBA page using Playwright | <p>I am trying to get the FIBA World Cup 2023 team stats; my country (the Philippines) is one of the host nations.</p>
<p>The url of the page is: <a href="https://www.fiba.basketball/basketballworldcup/2023/teamstats" rel="nofollow noreferrer">https://www.fiba.basketball/basketballworldcup/2023/teamstats</a></p>
<p>The select option in the page is:</p>
<pre><code>"""
<select id="type_select" style="display: none;">
<option id="type_select_points_per_game" value="PPG">Points per Game</option>
<option id="type_select_points" value="PTS">Total Points</option>
<option id="type_select_field_goals" value="FG">Field Goal Shooting</option>
<option id="type_select_2_points" value="FG2">2 Point Field Goals</option>
<option id="type_select_3_points" value="FG3">3 Point Field Goals</option>
<option id="type_select_free_throws" value="FT">Free-Throws</option>
<option id="type_select_rebounds" value="REB">Rebounds</option>
<option id="type_select_blocks" value="BL">Blocks</option>
<option id="type_select_assists" value="ASS">Assists</option>
<option id="type_select_steals" value="ST">Steals</option>
<option id="type_select_turn_overs" value="TO">Turn Overs</option>
<option id="type_select_fouls" value="FO">Fouls</option>
<option id="type_select_minutes" value="MIN">Minutes</option>
<option id="type_select_efficiency" value="EFF">Efficiency</option>
<option id="type_select_double_doubles" value="DD">Double-Doubles</option>
</select>
"""
</code></pre>
<p><a href="https://i.sstatic.net/IcK7b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IcK7b.png" alt="enter image description here" /></a></p>
<p>I used Playwright to select one of the options but could not get it right.</p>
<p>This is my code to select that <code>select</code> element with <code>type_select</code> id.</p>
<h3>Code to select</h3>
<pre><code>try:
page.locator('select#type_select').select_option(value='FG', timeout=60000)
except Exception as exc:
print(f'Unexpected exception: {repr(exc)}')
</code></pre>
<h3>Error message</h3>
<pre><code>Unexpected exception: TimeoutError('Timeout 60000ms exceeded.\n=========================== logs ===========================\nwaiting for locator("select#type_select")\n locator resolved to <select id="type_select">…</select>\n selecting specified option(s)\n element is not visible - waiting...\n============================================================')
</code></pre>
<h3>Full code</h3>
<p>The code can only extract table data from the default option.</p>
<pre><code>"""Extract fiba world cup 2023 team stats.
url = 'https://www.fiba.basketball/basketballworldcup/2023/teamstats'
Install playwright:
pip install playwright
Install the required browsers:
playwright install
"""
from playwright.sync_api import sync_playwright
def scrape_team_stats(url):
with sync_playwright() as p:
browser = p.chromium.launch(headless=False) # True and fiba might block it
page = browser.new_page()
page.goto(url)
# Select option.
"""
<select id="type_select" style="display: none;">
<option id="type_select_points_per_game" value="PPG">Points per Game</option>
<option id="type_select_points" value="PTS">Total Points</option>
<option id="type_select_field_goals" value="FG">Field Goal Shooting</option>
<option id="type_select_2_points" value="FG2">2 Point Field Goals</option>
<option id="type_select_3_points" value="FG3">3 Point Field Goals</option>
<option id="type_select_free_throws" value="FT">Free-Throws</option>
<option id="type_select_rebounds" value="REB">Rebounds</option>
<option id="type_select_blocks" value="BL">Blocks</option>
<option id="type_select_assists" value="ASS">Assists</option>
<option id="type_select_steals" value="ST">Steals</option>
<option id="type_select_turn_overs" value="TO">Turn Overs</option>
<option id="type_select_fouls" value="FO">Fouls</option>
<option id="type_select_minutes" value="MIN">Minutes</option>
<option id="type_select_efficiency" value="EFF">Efficiency</option>
<option id="type_select_double_doubles" value="DD">Double-Doubles</option>
</select>
"""
try:
page.locator('select#type_select').select_option(value='FG', timeout=60000)
except Exception as exc:
print(f'Unexpected exception: {repr(exc)}')
page.wait_for_selector("#team_stat_table")
table = page.query_selector("table.comparative")
rows = table.query_selector_all("tr")
team_stats = []
for row in rows:
cells = row.query_selector_all("td")
if cells:
cell_values = []
for cell in cells:
cell_text = cell.text_content()
cell_values.append(cell_text)
team_stats.append(cell_values)
browser.close()
return team_stats
url = "https://www.fiba.basketball/basketballworldcup/2023/teamstats"
team_stats = scrape_team_stats(url)
print(team_stats)
# [['1.', 'Canada', '3', '200.0', '108.0', ...
</code></pre>
<p>I am expecting the select to function properly so that I can get the other team stats aside from the default option.</p>
| <python><web-crawler><playwright-python> | 2023-09-01 01:16:48 | 1 | 5,094 | ferdy |
77,019,852 | 433,202 | Numpy `.astype` rounding up | <p>Similar to <a href="https://stackoverflow.com/questions/43910477/numpy-astype-rounding-to-wrong-value">Numpy astype rounding to wrong value</a> but that seemed like the opposite issue and is actually what I want (truncating). In my real world case I'm doing various calculations where some values could get very very close to the next whole number and then get converted to integers. I <em>want</em> the numbers to be truncated and I expect it to be equivalent to the <code>floor</code> operation. I end up using the results as indexes. However, it seems to be rounding up when I do <code>.astype(np.int32)</code>. What's going on here:</p>
<pre class="lang-py prettyprint-override"><code>In [2]: import numpy as np
...
In [49]: np.array([4319.9997], dtype=np.float32).astype(np.int32)
Out[49]: array([4319], dtype=int32)
In [50]: np.array([4319.9998], dtype=np.float32).astype(np.int32)
Out[50]: array([4320], dtype=int32)
</code></pre>
<p>I understand 32-bit versus 64-bit floating-point precision, but I don't understand the internal operations of what <code>astype</code> is doing here.</p>
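<p>For what it's worth, the rounding seems to happen already when the 64-bit literal is narrowed to <code>float32</code>, before the int conversion. This can be reproduced with only the standard library by round-tripping through a 4-byte IEEE 754 single:</p>

```python
import struct

def to_float32(x):
    # Round-trip a Python float through a 4-byte IEEE 754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(4319.9997))  # 4319.99951171875 -> int() truncates to 4319
print(to_float32(4319.9998))  # 4320.0           -> int() truncates to 4320
```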
| <python><numpy><floating-point> | 2023-08-31 23:56:42 | 1 | 3,695 | djhoese |
77,019,827 | 2,153,235 | PYTHONPATH not propagating from CMD to Spyder | <p>I installed PySpark under Anaconda by issuing the following commands
at a Conda prompt:</p>
<pre><code>conda create -n py39 python=3.9 anaconda
conda activate py39
conda install openjdk
conda install pyspark
conda install -c conda-forge findspark
</code></pre>
<p>As can be seen, this is all within the <code>py39</code> environment.
Additionally, I fetched <strong>Hadoop 2.7.1</strong> from
<a href="https://github.com/steveloughran/winutils" rel="nofollow noreferrer">GitHub</a> and created
<code>c:%HOMEPATH%\AppData\Local\Hadoop\2.7.1</code> to contain the corresponding
<code>README.md</code> file and <code>bin</code> subfolder [1]. Here, <code>%HOMEPATH%</code> is
<code>\Users\User.Name</code>. Finally, I had to create file
<code>%SPARK_HOME%/conf/spark-defaults.conf</code> (Annex A).</p>
<p>With the above setup, I could launch PySpark using the following
<code>myspark.cmd</code> script located in
<code>c:%HOMEPATH%\anaconda3\envs\py39\bin\</code>:</p>
<pre><code>set "PYSPARK_DRIVER_PYTHON=python"
set "PYSPARK_PYTHON=python"
set "HADOOP_HOME=c:%HOMEPATH%\AppData\Local\Hadoop\2.7.1"
pyspark
</code></pre>
<p>I am now following <a href="https://sparkbyexamples.com/pyspark/setup-and-run-pyspark-on-spyder-ide" rel="nofollow noreferrer">this
page</a>
to be able to use Spyder instead of the Conda command line. I am
using the following <code>SpyderSpark.cmd</code> script to set the the variables
and launch Spyder:</p>
<pre><code>set "HADOOP_HOME=c:%HOMEPATH%\AppData\Local\Hadoop\2.7.1"
set "JAVA_HOME=C:%HOMEPATH%\anaconda3\envs\py39\Library"
set "SPARK_HOME=C:%HOMEPATH%\anaconda3\envs\py39\lib\site-packages\pyspark"
set "PYSPARK_DRIVER_PYTHON=Python"
set "PYSPARK_PYTHON=Python"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;%PYTHONPATH%"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python;%PYTHONPATH%"
C:%HOMEPATH%\anaconda3\pythonw.exe ^
C:%HOMEPATH%\anaconda3\cwp.py ^
C:%HOMEPATH%\anaconda3\envs\py39 ^
C:%HOMEPATH%\anaconda3\envs\py39\pythonw.exe ^
C:%HOMEPATH%\anaconda3\envs\py39\Scripts\spyder-script.py
</code></pre>
<p>Some points that may not be clear:</p>
<ul>
<li><p>Folder <code>%JAVA_HOME%\bin</code> contains <code>java.exe</code> and <code>javac.exe</code></p>
</li>
<li><p>The second half of the above code block is the command that is
executed by Anaconda's shortcut for <code>Spyder (py39)</code></p>
</li>
</ul>
<p>As I am still trying to get <code>SpyderSpark.cmd</code> to work, I execute it
from the Conda prompt, specifically the <code>py39</code> environment. This way,
it inherits environment variables that I may have missed in
<code>SpyderSpark.cmd</code>. Issuing <code>SpyderSpark.cmd</code> launches the Spyder GUI,
but Spark commands aren't recognized at the console. Here is a
transcript of the response to the first few lines of code from
<a href="https://sparkbyexamples.com/pyspark/different-ways-to-create-dataframe-in-pyspark" rel="nofollow noreferrer">this
tutorial</a>:</p>
<pre><code>In [1]: columns = ["language","users_count"]
...: data = [("Java", "20000"), ("Python", "100000"), ("Scala", "3000")]
In [2]: spark = SparkSession.builder.appName('SparkByExamples.com').getOrCreate()
NameError: name 'SparkSession' is not defined
</code></pre>
<p>The likely cause is that all but the <code>PYTHONPATH</code> variable propagated
their values into the Spyder session. From the Spyder console:</p>
<pre><code>import os
print(os.environ.get("HADOOP_HOME"))
print(os.environ.get("JAVA_HOME"))
print(os.environ.get("SPARK_HOME"))
print(os.environ.get("PYSPARK_DRIVER_PYTHON"))
print(os.environ.get("PYSPARK_PYTHON"))
print(os.environ.get("PYTHONPATH"))
c:\Users\User.Name\AppData\Local\Hadoop\2.7.1
C:\Users\User.Name\anaconda3\envs\py39\Library
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark
Python
Python
None
</code></pre>
<p><strong>Why isn't <code>PYTHONPATH</code> propagating into the Spyder session, and how
can I fix this?</strong></p>
<p>I don't think that <a href="https://stackoverflow.com/questions/45086454">this
Q&A</a> explains the
problem because I <em>am</em> launching Spyder from a CMD environment <em>after</em>
setting the variable. Furthermore, all the other variables succeed in
propagating to the Spyder session.</p>
<p><strong>Notes</strong></p>
<p><strong>[1]</strong> Using Cygwin, I found that for all the files in
<code>c:%HOMEPATH%\AppData\Local\Hadoop\2.7.1\bin</code>, the permission bits for
execution were disabled and needed to be explicitly enabled.</p>
<p><strong>Afternote 2023-09-02:</strong></p>
<p>Respondents posted helpful hints on how to get Spark commands
recognized in Spyder, i.e., to first issue <code>from pyspark.sql import SparkSession</code>. I didn't see this tutorial code because it was in a screen capture and the image was blocked by AdBlocker. Also, it was not needed after issuing <code>pyspark</code> from the Conda prompt of the <code>py39</code> environment. It was needed after issuing <code>SpyderSpark.cmd</code>, as I found from the comments, and this allowed the Spark statements to be recognized. I assume, therefore, that <code>pyspark</code> imports <code>SparkSession</code> on the user's behalf, making it unnecessary to explicitly import it after launching <code>pyspark</code> from the Conda prompt.</p>
<p>As useful as it was to know that SparkSession needs to be imported from within Spyder, it doesn't answer the question of why 1 of 6
environment variables fail to propagate from <code>SpyderSpark.cmd</code> to Spyder,
i.e., variable <code>PYTHONPATH</code>. Admittedly, it solved the real
showstopper for me at present, which is to get Spark working from Spyder, for which I thank the respondents.
I would still be interested in why <code>PYTHONPATH</code> doesn't propagate.</p>
<p>On a separate but related issue, I found it tricky to create a shortcut to
<code>SpyderSpark.cmd</code> that doesn't leave a redundant terminal on the
desktop. The solution turned out to be to prefix the Spyder launching
command with <a href="https://stackoverflow.com/a/70713834/2153235"><code>start</code></a>:</p>
<pre><code>set "HADOOP_HOME=%USERPROFILE%\AppData\Local\Hadoop\2.7.1"
set "JAVA_HOME=%USERPROFILE%\anaconda3\envs\py39\Library"
set "SPARK_HOME=%USERPROFILE%\anaconda3\envs\py39\lib\site-packages\pyspark"
set "PYSPARK_DRIVER_PYTHON=Python"
set "PYSPARK_PYTHON=Python"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;%PYTHONPATH%"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python;%PYTHONPATH%"
start "" ^
%USERPROFILE%\anaconda3\pythonw.exe ^
%USERPROFILE%\anaconda3\cwp.py ^
%USERPROFILE%\anaconda3\envs\py39 ^
%USERPROFILE%\anaconda3\envs\py39\pythonw.exe ^
%USERPROFILE%\anaconda3\envs\py39\Scripts\spyder-script.py
</code></pre>
<p>All the arguments starting with <code>%USERPROFILE%</code> would ideally be
enclosed in double-quotes in case they expand to include
non-alphanumeric characters. For some reason, I couldn't do that
without incurring the incorrect behaviour in Annex B (below). Therefore, I did not adorn the arguments with double-quotes.</p>
<p>With SpyderSpark as revised above, the <code>Target</code> field of the Windows shortcut
should contain:</p>
<pre><code>%SystemRoot%\System32\cmd.exe /D /C "%USERPROFILE%\anaconda3\envs\py39\bin\SpyderSpark.cmd"
</code></pre>
<p>I found it handy to simply copy the Spyder shortcut and modify the
<code>Target</code> field. For the sake of readability, here is the same command
broken into two physical lines (which isn't suitable for the <code>Target</code>
field of a shortcut):</p>
<pre><code>%SystemRoot%\System32\cmd.exe /D /C ^
"%USERPROFILE%\anaconda3\envs\py39\bin\SpyderSpark.cmd"
</code></pre>
<p>Thanks to <em>Mofi</em> for advice on having improved this afternote.</p>
<p><strong>Further troubleshooting 2023-09-03</strong></p>
<p>To further troubleshoot the propagation of environment variable <code>PYTHONPATH</code> into Spyder, I followed Mofi's advice and revised <code>SpyderSpark.cmd</code> to use the
console oriented <code>python</code> rather than GUI-oriented <code>pythonw</code>:</p>
<pre><code>set "HADOOP_HOME=%USERPROFILE%\AppData\Local\Hadoop\2.7.1"
set "JAVA_HOME=%USERPROFILE%\anaconda3\envs\py39\Library"
set "SPARK_HOME=%USERPROFILE%\anaconda3\envs\py39\lib\site-packages\pyspark"
set "PYSPARK_DRIVER_PYTHON=Python"
set "PYSPARK_PYTHON=Python"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;%PYTHONPATH%"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python;%PYTHONPATH%"
set PYTHONPATH & REM HHHHHHHHHHHHHHHHH
%USERPROFILE%\anaconda3\python.exe ^
%USERPROFILE%\anaconda3\cwp-debug.py ^
%USERPROFILE%\anaconda3\envs\py39 ^
%USERPROFILE%\anaconda3\envs\py39\python.exe ^
%USERPROFILE%\anaconda3\envs\py39\Scripts\spyder-script.py
</code></pre>
<p>As can be seen from above, <code>PYTHONPATH</code> is also displayed to the screen prior to the Spyder launching command. Furthermore, <code>SpyderSpark.cmd</code> was revised
to use a modified <code>cwp.py</code>, dubbed <code>cwp-debug.py</code>, wherein
<code>PYTHONPATH</code> is printed out twice:</p>
<pre><code>import os
import sys
import subprocess
from os.path import join, pathsep
from menuinst.knownfolders import FOLDERID, get_folder_path, PathNotFoundException
# call as: python cwp.py PREFIX ARGs...
prefix = sys.argv[1]
args = sys.argv[2:]
new_paths = pathsep.join([prefix,
join(prefix, "Library", "mingw-w64", "bin"),
join(prefix, "Library", "usr", "bin"),
join(prefix, "Library", "bin"),
join(prefix, "Scripts")])
print(os.environ["PYTHONPATH"]) ###################
env = os.environ.copy()
env['PATH'] = new_paths + pathsep + env['PATH']
env['CONDA_PREFIX'] = prefix
documents_folder, exception = get_folder_path(FOLDERID.Documents)
if exception:
documents_folder, exception = get_folder_path(FOLDERID.PublicDocuments)
if not exception:
os.chdir(documents_folder)
print(env["PYTHONPATH"]) ######################
sys.exit(subprocess.call(args, env=env))
</code></pre>
<p>When <code>SpyderSpark.cmd</code> is executed from a CMD console, the expected
<code>PYTHONPATH</code> is printed out by <code>SpyderSpark.cmd</code> <em>and</em> at both
locations in <code>cwp-debug.py</code>. Furthermore, <code>PYTHONPATH</code> is echoed
to the screen when it is prepended to in <code>SpyderSpark.cmd</code>. I have
lumped together lines in the session transcript so that the different
echoings of <code>PYTHONPATH</code> are easier to recognize:</p>
<pre><code>C:\Users\User.Name> C:\Users\User.Name\anaconda3\envs\py39\bin\SpyderSpark.cmd
C:\Users\User.Name> set "HADOOP_HOME=C:\Users\User.Name\AppData\Local\Hadoop\2.7.1"
C:\Users\User.Name> set "JAVA_HOME=C:\Users\User.Name\anaconda3\envs\py39\Library"
C:\Users\User.Name> set "SPARK_HOME=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark"
C:\Users\User.Name> set "PYSPARK_DRIVER_PYTHON=Python"
C:\Users\User.Name> set "PYSPARK_PYTHON=Python"
C:\Users\User.Name> set "PYTHONPATH=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;"
C:\Users\User.Name> set "PYTHONPATH=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python;C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;"
C:\Users\User.Name> set PYTHONPATH & REM HHHHHHHHHHHHHHHHH
PYTHONPATH=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python;C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;
C:\Users\User.Name> C:\Users\User.Name\anaconda3\python.exe C:\Users\User.Name\anaconda3\cwp-debug.py C:\Users\User.Name\anaconda3\envs\py39 C:\Users\User.Name\anaconda3\envs\py39\python.exe C:\Users\User.Name\anaconda3\envs\py39\Scripts\spyder-script.py
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python;C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python;C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;
fromIccProfile: failed minimal tag size sanity
C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\paramiko\transport.py:219: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
C:\Users\User.Name>
</code></pre>
<p>The final warnings about <code>fromIccProfile</code> and <code>Blowfish</code> are
innocuous. Explanations about the <code>fromIccProfile</code> warning can be
found <a href="https://stackoverflow.com/questions/65463848">here</a> and
<a href="https://github.com/spyder-ide/spyder/issues/18026" rel="nofollow noreferrer">here</a> while the
<code>Blowfish</code> warning is just about deprecation. Therefore, the modifications to <code>SpyderSpark</code> and <code>cwp.py</code> (in the form of <code>cwp-debug.py</code>) did not reveal why <code>PYTHONPATH</code> fails to propagate to Spyder.</p>
<p>The next step was to check whether <code>PYTHONPATH</code> was being clobbered
by <code>spyder-script.py</code>, which is a very short script:</p>
<pre><code>import re
import sys
from spyder.app.start import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
</code></pre>
<p>I'm actually trying to spin up on Python, so I'm wondering whether
anyone can help decipher this code.</p>
<p><strong>Further troubleshooting 2023-09-06</strong></p>
<p><a href="https://chat.stackoverflow.com/transcript/message/56650189#56650189">Mofi</a>
explained that the regular expression substitution in
<code>spyder-script.py</code> above simply strips away a suffix <code>-script.py[w]</code>
or <code>.exe</code> from the script name, which merely affects the file identification
shown in diagnostic messages.</p>
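<p>The substitution Mofi describes can be exercised in isolation; it just strips a launcher suffix from the program name:</p>

```python
import re

# Same pattern as spyder-script.py: optionally remove a trailing
# "-script.py", "-script.pyw", or ".exe" from the script name.
pattern = r'(-script\.pyw?|\.exe)?$'
print(re.sub(pattern, '', 'spyder-script.py'))  # spyder
print(re.sub(pattern, '', 'spyder.exe'))        # spyder
print(re.sub(pattern, '', 'spyder'))            # spyder (no suffix, unchanged)
```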
<p>I noticed that the ensuing statement invokes <code>main()</code> from module
<code>spyder.app.start</code>. I examined
<code>%USERPROFILE%\anaconda3\envs\py39\Lib\site-packages\spyder\app\start.py</code>,
with emphasis on <code>main()</code>. I found preamble code that removes <code>PYTHONPATH</code>
paths from <code>sys.path</code>. I confirmed this from within Spyder: <code>sys.path</code>
contains neither of the PySpark paths that are added to <code>PYTHONPATH</code> by
<code>SpyderSpark.cmd</code>. <code>PYTHONPATH</code> is empty before running
<code>SpyderSpark.cmd</code>, so there are no other paths to check.</p>
<p>As for the disappearance of <code>PYTHONPATH</code> itself, I could see no code in
<code>start.py</code> that modifies <code>os.environ['PYTHONPATH']</code> or removes that
variable from the environment. However, it doesn't really matter,
as <code>PYTHONPATH</code> merely contributes to <code>sys.path</code> and <code>start.py</code>
explicitly removes <code>PYTHONPATH</code> paths from <code>sys.path</code>.</p>
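<p>That <code>PYTHONPATH</code> feeds <code>sys.path</code> at interpreter startup is easy to confirm with the standard library alone (the entry name below is made up):</p>

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH set and ask whether the
# entry surfaced in its sys.path (entries may be absolutized at startup,
# hence the substring check).
env = dict(os.environ, PYTHONPATH="demo_pythonpath_entry")
out = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; print(any('demo_pythonpath_entry' in p for p in sys.path))"],
    env=env, text=True,
)
print(out.strip())  # True
```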
<p>I lack the experience to appreciate why this is done. Spyder is supposed to provide a development IDE, but it's hard to use if it removes the paths in PYTHONPATH.</p>
<h1>Annex A: %SPARK_HOME%/conf/spark-defaults.conf</h1>
<p>Here, <code>%SPARK_HOME%</code> is
<code>C:%HOMEPATH%\anaconda3\envs\py39\lib\site-packages\pyspark</code>:</p>
<pre><code>spark.eventLog.enabled true
spark.eventLog.dir C:\\Users\\User.Name\\anaconda3\\envs\\py39\\PySparkLogs
spark.history.fs.logDirectory C:\\Users\\User.Name\\anaconda3\\envs\\py39\\PySparkLogs
spark.sql.autoBroadcastJoinThreshold -1
</code></pre>
<h1>Annex B: Incorrect behaviour when <code>start</code> arguments are double-quoted in <code>SpyderSpark.cmd</code></h1>
<p>When <code>SpyderSpark.cmd</code> is run, a terminal console appears with the following
messages:</p>
<pre><code>C:\Users\User.Name\Documents\Python Scripts>set "HADOOP_HOME=C:\Users\User.Name\AppData\Local\Hadoop\2.7.1"
C:\Users\User.Name\Documents\Python Scripts>set "JAVA_HOME=C:\Users\User.Name\anaconda3\envs\py39\Library"
C:\Users\User.Name\Documents\Python Scripts>set "SPARK_HOME=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark"
C:\Users\User.Name\Documents\Python Scripts>set "PYSPARK_DRIVER_PYTHON=Python"
C:\Users\User.Name\Documents\Python Scripts>set "PYSPARK_PYTHON=Python"
C:\Users\User.Name\Documents\Python Scripts>set "PYTHONPATH=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;"
C:\Users\User.Name\Documents\Python Scripts>set "PYTHONPATH=C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python;C:\Users\User.Name\anaconda3\envs\py39\lib\site-packages\pyspark\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;"
C:\Users\User.Name\Documents\Python Scripts>start "" "C:\Users\User.Name\anaconda3\pythonw.exe" ^
C:\Users\User.Name\Documents\Python Scripts>"C:\Users\User.Name\anaconda3\cwp.py" "C:\Users\User.Name\anaconda3\envs\py39" ^
[main 2023-09-02T23:29:02.117Z] update#setState idle
[main 2023-09-02T23:29:04.434Z] WSL is not installed, so could not detect WSL profiles
</code></pre>
<p>The VS Code app then appears, opened to a file <code>cwp.py</code> (the 2nd
argument supplied to <code>start</code> in <code>SpyderSpark.cmd</code>). When I exit VS Code, the following
additional messages are printed to the terminal console, followed by
the appearance of the Spyder app:</p>
<pre><code>[main 2023-09-02T23:29:09.998Z] Extension host with pid 21404 exited with code: 0, signal: unknown.
C:\Users\User.Name\Documents\Python Scripts>"C:\Users\User.Name\anaconda3\envs\py39\pythonw.exe" "C:\Users\User.Name\anaconda3\envs\py39\Scripts\spyder-script.py"
</code></pre>
<p>When I exit Spyder, the terminal console then disappears.</p>
<p><strong>2023-09-06 afternote:</strong> According to <a href="https://chat.stackoverflow.com/transcript/message/56640702#56640702">Mofi</a>, the cause of all this unexpected behaviour is incorrect parsing of the Spyder launching command as a multi-line statement. Specifically, the caret symbol at the end of a physical line indicates the continuation of the statement on the next line, and this caret should <em>not</em> be preceded by a space. Rather, the next physical line, which the statement continues onto, should <em>start</em> with a space. With this fix, arguments to <code>start</code> can be double-quoted and the script still launches Spyder in the expected manner. Here is the revised and properly working <code>SpyderSpark.cmd</code>:</p>
<pre><code>set "HADOOP_HOME=%USERPROFILE%\AppData\Local\Hadoop\2.7.1"
set "JAVA_HOME=%USERPROFILE%\anaconda3\envs\py39\Library"
set "SPARK_HOME=%USERPROFILE%\anaconda3\envs\py39\lib\site-packages\pyspark"
set "PYSPARK_DRIVER_PYTHON=Python"
set "PYSPARK_PYTHON=Python"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python\lib\py4j-0.10.9.7-src.zip;%PYTHONPATH%"
set "PYTHONPATH=%SPARK_HOME%\python\lib\site-packages\pyspark\python;%PYTHONPATH%"
start ""^
"%USERPROFILE%\anaconda3\pythonw.exe"^
"%USERPROFILE%\anaconda3\cwp.py"^
"%USERPROFILE%\anaconda3\envs\py39"^
"%USERPROFILE%\anaconda3\envs\py39\pythonw.exe"^
"%USERPROFILE%\anaconda3\envs\py39\Scripts\spyder-script.py"
</code></pre>
<p>I have not seen this prescription described anywhere: avoid a space before the caret and start the continuation line with a space. However, it works. In this specific case, the need to start a continuation line with a space could be due to the fact that the first character is <code>"</code>, which is meant to delimit a file path but is not part of the file path. Since the first character of a continuation line is <a href="https://stackoverflow.com/a/21000752/2153235">automatically escaped</a>, we do not want the <code>"</code> to be the first character, or else it loses its special meaning.</p>
| <python><pyspark><cmd><anaconda><spyder> | 2023-08-31 23:46:36 | 1 | 1,265 | user2153235 |
77,019,642 | 1,620,040 | Converting custom string to dictionary python | <p>I have an example string from</p>
<pre><code> kubectl get nodes -o wide
</code></pre>
<p>command output, which looks like</p>
<pre><code> NAME STATUS ROLES AGE VERSION INTERNAL-IP
cp-node-1 Ready control-plane xxx xxx xxx
cp-node-2 Ready control-plane xxx xxx yyy
cp-node-3 Ready control-plane xxx xxx zzz
</code></pre>
<p>What would be the most effective and quickest way to extract the output into a dictionary of the form</p>
<pre><code> node_ip['cp-node-1'] = xxx (Internal IP)
</code></pre>
<p>I tried json and ast, but I believe custom parsing with a regex is required.</p>
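<p>For reference, a plain whitespace-split sketch, assuming no column value contains embedded spaces (true for this <code>kubectl</code> output), where the header row supplies the column index:</p>

```python
# Stand-in for the captured command output; the IPs below are made up.
sample = """\
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP
cp-node-1  Ready    control-plane   xxx   xxx       10.0.0.1
cp-node-2  Ready    control-plane   xxx   xxx       10.0.0.2
cp-node-3  Ready    control-plane   xxx   xxx       10.0.0.3"""

lines = sample.splitlines()
ip_col = lines[0].split().index("INTERNAL-IP")  # locate the column by header
node_ip = {parts[0]: parts[ip_col]
           for parts in (line.split() for line in lines[1:])}
print(node_ip["cp-node-1"])  # 10.0.0.1
```

<p>Running <code>kubectl get nodes -o json</code> instead would avoid hand parsing entirely, since that output can be fed straight to the <code>json</code> module.</p>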
| <python><string> | 2023-08-31 22:45:14 | 2 | 5,362 | Lakshmi Narayanan |
77,019,628 | 21,891,079 | How to order facets when using the Seaborn objects interface | <p>I am trying to order facets in a plot produced by the <a href="https://seaborn.pydata.org/tutorial/objects_interface.html" rel="nofollow noreferrer"><code>seaborn</code> objects interface</a>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import seaborn as sns
import seaborn.objects as so
import matplotlib.pyplot as plt
df = sns.load_dataset("iris")
df["species"] = df["species"].astype("category")
df["species"] = df["species"].cat.codes
rng = np.random.default_rng(seed=0)
df["subset"] = rng.choice(['A','B','C'], len(df), replace=True)
fig = plt.figure(figsize=(6.4 * 2.0, 4.8 * 2.0))
_ = (
so.Plot(df, x="sepal_length", y="sepal_width")
.facet(row="species", col="subset")
.add(so.Dot())
.on(fig)
.plot()
)
</code></pre>
<p><a href="https://i.sstatic.net/NCkLz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NCkLz.png" alt="A plot with facets row-wise and column-wise. The columns are, left to right--C, B, A--and the rows are, top to bottom--0, 1, 2." /></a></p>
<p>However, if <code>col_order</code> or <code>row_order</code> is passed as a parameter to the <code>.facet()</code> call, an "unexpected keyword argument" <code>TypeError</code> is raised.</p>
<pre class="lang-py prettyprint-override"><code>_ = (
so.Plot(df, x="sepal_length", y="sepal_width")
.facet(
row="species",
col="subset",
row_order=['A','C','B'],
col_order=[0,2,1]
)
.add(so.Dot())
.on(fig)
.plot()
)
</code></pre>
<pre><code>TypeError: Plot.facet() got an unexpected keyword argument 'row_order'
</code></pre>
<p>How should facets be ordered when using the <code>seaborn.objects</code> interface?</p>
<p>Note that this question is very similar to <a href="https://stackoverflow.com/questions/61541776/seaborn-ordering-of-facets">"Seaborn ordering of facets"</a> which is the same question when the plot is generated using <code>seaborn</code> but not the <code>seaborn.objects</code> module.</p>
<p>Ideally, an answer should also work when using the <code>wrap</code> parameter of <code>facet()</code> in the <code>seaborn.objects</code> interface.</p>
| <python><seaborn><seaborn-objects> | 2023-08-31 22:41:41 | 1 | 1,051 | Joshua Shew |
77,019,475 | 6,928,914 | How to clear memory from a QlikView document programmatically? | <p>I have lots of QlikView documents in various folders. I would like to clear the memory of all these documents programmatically, say, for example, using a for loop in a Python program. How can I do that? Are there other ways to do it?</p>
| <python><qlikview> | 2023-08-31 21:56:03 | 2 | 719 | Kay |
77,019,330 | 22,407,544 | Why is my Docx converter returning 'None' | <p>I am new to web development and am trying to create a language-translation app in Django that translates uploaded documents. It relies on a series of interconversions between PDF and docx. When my code outputs the translated document, it cannot be opened.</p>
<ol>
<li><p>When I inspected the file type, I saw it identified as XML/docx, and it could be opened and read by MS Word when I changed the extension to .docx (but it couldn't be read by any PDF reader).</p>
</li>
<li><p>When I used Python code to analyze the file by printing its type and contents, I got NoneType and None.</p>
</li>
<li><p>A working PDF of the file is found in the mysite/mysite folder, but the one output by my reConverter function and sent to the browser is the problem file.</p>
</li>
<li><p>I tried manually converting it using:</p>
</li>
</ol>
<pre><code>wordObj = win32com.client.Dispatch('Word.Application')
docObj = wordObj.Documents.Open(wordFilename)
docObj.SaveAs(pdfFilename, FileFormat=wdFormatPDF)
docObj.Close()
wordObj.Quit()
</code></pre>
<p>but got a CoInitialization error.
So I've completely narrowed it down to the reConverter function returning a NoneType.
Here is my code:</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_protect
from .models import TranslatedDocument
from .forms import UploadFileForm
from django.core.files.storage import FileSystemStorage
import docx
from pdf2docx import parse
from docx2pdf import convert
import time #remove
# Create your views here.
#pythoncom.CoInitialize()
@csrf_protect
def translateFile(request) :
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
uploaded_file = request.FILES['file']
fs = FileSystemStorage()
filename = fs.save(uploaded_file.name, uploaded_file)
uploaded_file_path = fs.path(filename)
file = (converter(uploaded_file_path))
response = HttpResponse(file, content_type='application/pdf')
response['Content-Disposition'] = 'attachment; filename="' + filename + '"'
return response
else:
form = UploadFileForm()
return render(request, 'translator_upload/upload.html', {'form': form})
def reConverter(inputDocxPath):
#reconvert docx to pdf
print('reConverter: '+str(inputDocxPath))
outputPdfPath = inputDocxPath.replace('.docx', '.pdf')
test = convert(inputDocxPath, outputPdfPath)
print(type(test))
print('test: '+str(test))
return test
def translateDocx(aDocx, stringOfDocPath):
#translation logic
docx_file = stringOfDocPath
myDoc = docx.Document(docx_file)
print('translateDocx: '+str(docx_file))
print('translateDocx: '+str(myDoc))
for paragraphNum in range(len(myDoc.paragraphs)):
#TRANSLATION LOGIC
myDoc.save(docx_file)
return reConverter(docx_file)
#stringOfDocPath is used as convert() requires file path, not file object(myDoc)
def converter(inputPdfPath):
# convert pdf to docx
pdf_file = inputPdfPath
docx_file = inputPdfPath.replace('.pdf', '.docx')
print('file types saved: '+docx_file+'. Converting to docx')
parse(pdf_file, docx_file) #, start=0, end=3)
myDoc = docx.Document(docx_file)
print('converter '+str(myDoc))
return translateDocx(myDoc, docx_file)
</code></pre>
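<p>For reference, a generic sketch of the pattern the view seems to need: converters in the <code>docx2pdf</code> style write their result to the output path and, as far as I can tell, return <code>None</code>, so it is the file on disk that should be read back. The <code>fake_convert</code> stand-in below is hypothetical, used only so the demo runs without MS Word:</p>

```python
import os
import tempfile

def run_converter_and_read(convert_fn, src_path, dst_path):
    # The converter writes to dst_path; its return value is not the file.
    convert_fn(src_path, dst_path)
    with open(dst_path, "rb") as f:
        return f.read()

def fake_convert(src_path, dst_path):
    # Hypothetical stand-in converter: just copies the bytes across.
    with open(src_path, "rb") as s, open(dst_path, "wb") as d:
        d.write(s.read())

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "in.docx")
    dst = os.path.join(tmp, "out.pdf")
    with open(src, "wb") as f:
        f.write(b"%demo-bytes")
    data = run_converter_and_read(fake_convert, src, dst)

print(data)  # b'%demo-bytes'
```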
| <python><django><pdf><ms-word><pywin32> | 2023-08-31 21:18:21 | 1 | 359 | tthheemmaannii |
77,019,152 | 17,653,423 | How to flatten a Series list of dictionaries in a DataFrame? | <p>How can I convert a Series list of dictionaries into a DataFrame? Given:</p>
<pre><code>import pandas as pd
data = [
{
"id": 1,
"detail": [
{
"name" : "name1",
"type" : "type1",
},
{
"name" : "name2",
"type" : "type2",
},
]
},
{
"id": 2,
"detail": [
{
"name" : "name3",
"type" : "type3",
},
{
"name" : "name4",
"type" : "type4",
},
]
},
]
df = pd.DataFrame(data)
</code></pre>
<p>I want to turn the above into a DataFrame:</p>
<pre><code> id detail_name detail_type
0 1 name1 | name2 type1 | type2
1 2 name3 | name4 type3 | type4
</code></pre>
<p>Note: the <code>|</code> is just a separator for the concatenated records; it can be any character.</p>
<p>I tried using <code>pd.json_normalize</code> and joining afterwards, but <code>json_normalize</code> generates a (4, 2)-shaped dataframe.</p>
<pre><code>details = pd.json_normalize(data,record_path=['detail'],record_prefix='detail_',)
df = df.join([details])
</code></pre>
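<p>For comparison, one way to get the joined-string layout directly, assuming the <code>" | "</code> join shown in the expected output, is to aggregate each list of dicts before building the frame:</p>

```python
import pandas as pd

# Same structure as the question's data.
data = [
    {"id": 1, "detail": [{"name": "name1", "type": "type1"},
                         {"name": "name2", "type": "type2"}]},
    {"id": 2, "detail": [{"name": "name3", "type": "type3"},
                         {"name": "name4", "type": "type4"}]},
]

def join_field(detail, key):
    # Concatenate one key across the list of dicts with " | ".
    return " | ".join(d[key] for d in detail)

out = pd.DataFrame({
    "id": [rec["id"] for rec in data],
    "detail_name": [join_field(rec["detail"], "name") for rec in data],
    "detail_type": [join_field(rec["detail"], "type") for rec in data],
})
print(out)
```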
| <python><pandas><dataframe> | 2023-08-31 20:39:58 | 3 | 391 | Luiz |
77,019,129 | 7,347,925 | How to pcolormesh an RGB array with cartopy? | <p>I have asked a <a href="https://stackoverflow.com/q/70541279/7347925">similar question</a> about plotting an RGB array on a normal axis. The idea is to plot one channel of the RGB array and set the facecolors from the combined RGB values.</p>
<p>I want to apply this method to an axis with a projection.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
np.random.seed(100)
x = np.arange(10, 20)
y = np.arange(0, 10)
x, y = np.meshgrid(x, y)
img = np.random.randint(low=0, high=255, size=(10, 10, 3))
ax = plt.axes(projection=ccrs.PlateCarree())
mesh = ax.pcolormesh(x, y, img[:, :,0], facecolors=img.reshape(-1, 3)/255)
mesh.set_array(None)
plt.show()
</code></pre>
<p>However, I got this error because of the <code>None</code> value:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[59], line 17
14 ax = plt.axes(projection=ccrs.PlateCarree())
16 mesh = ax.pcolormesh(x, y, img[:, :,0], facecolors=img.reshape(-1, 3)/255)
---> 17 mesh.set_array(None)
18 plt.show()
File ~/miniconda3/envs/enpt/lib/python3.10/site-packages/cartopy/mpl/geocollection.py:29, in GeoQuadMesh.set_array(self, A)
27 def set_array(self, A):
28 # raise right away if A is 2-dimensional.
---> 29 if A.ndim > 1:
30 raise ValueError('Collections can only map rank 1 arrays. '
31 'You likely want to call with a flattened array '
32 'using collection.set_array(A.ravel()) instead.')
34 # Only use the mask attribute if it is there.
AttributeError: 'NoneType' object has no attribute 'ndim'
</code></pre>
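<p>For contrast, the same facecolors trick goes through on a plain matplotlib axis without a projection, which suggests the failure is specific to cartopy's <code>GeoQuadMesh.set_array</code>. A headless sketch (note the coordinate arrays are one larger than the colour array so the default flat shading is satisfied):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this demo
import matplotlib.pyplot as plt

np.random.seed(100)
x, y = np.meshgrid(np.arange(10, 21), np.arange(0, 11))
img = np.random.randint(0, 255, size=(10, 10, 3))

fig, ax = plt.subplots()
mesh = ax.pcolormesh(x, y, img[:, :, 0], facecolors=img.reshape(-1, 3) / 255)
mesh.set_array(None)  # a plain QuadMesh accepts None here
fig.canvas.draw()     # force a render to show nothing raises
```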
| <python><matplotlib><cartopy> | 2023-08-31 20:34:58 | 0 | 1,039 | zxdawn |
77,018,924 | 259,538 | Log Viewer with Error Highlighting in Scrollbar | <p>I am working on a custom notepad-style log viewer in Python 3.11 using PySide6.</p>
<p>The ui is created with QtDesigner and must be compiled into python with pyuic command and is imported into the following code with import "main_ui".</p>
<p>The application loads a file called sample.log into a QPlainTextEdit field.
Then the user can click a button to find and highlight all lines containing "Error" substring.</p>
<p>The relative position of each highlighted line is indicated in a custom scrollbar with red lines.</p>
<p><strong>My problem is how to properly calculate the positions for the red lines in the scrollbar?
My current logic works, but the positions are not accurate.</strong></p>
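<p>As a baseline, a pure-Python proportional mapping with made-up groove metrics: <code>QScrollBar.maximum()</code> is <em>not</em> the number of lines (it is roughly the total lines minus the page step), and the usable groove excludes the arrow buttons, so both offsets have to enter the formula:</p>

```python
# groove_top / groove_height stand in for the real style metrics, which
# QStyle can report exactly; the numbers below are illustrative only.
def line_to_pixel(lineno, total_lines, groove_top, groove_height):
    frac = lineno / max(total_lines - 1, 1)  # 0.0 for first line, 1.0 for last
    return groove_top + int(frac * groove_height)

print(line_to_pixel(0, 100, 15, 300))   # 15: first line maps to the groove top
print(line_to_pixel(99, 100, 15, 300))  # 315: last line maps to the groove bottom
```

<p>For the exact pixel Qt itself would use, <code>QStyle.sliderPositionFromValue(minimum, maximum, value, span)</code> performs this mapping with the style's own rounding.</p>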
<p>This is what the interface looks like:</p>
<p><a href="https://i.sstatic.net/dgXZK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dgXZK.png" alt="enter image description here" /></a></p>
<p>All necessary code is provided in this post, but additionally can be downloaded via this zip file <a href="https://drive.google.com/uc?export=download&id=16KemTIgG6v69frpGNo8WA-Vv6z6KfZW-" rel="nofollow noreferrer">logtool.zip</a> which includes these files:</p>
<ol>
<li>logtool.py</li>
<li>main.ui</li>
<li>main_ui.py (built from main.ui using Python6/Scripts/pyside6-uic.exe)</li>
<li>sample.log</li>
</ol>
<p>This is what the main program logtool.py looks like:</p>
<pre><code>import sys
import os
import os.path
from PySide6 import QtWidgets
from PySide6.QtCore import Qt, Slot
from PySide6.QtWidgets import QMainWindow, QScrollBar
from PySide6.QtGui import QTextCursor, QTextCharFormat, QColor, QPainter, QPen
import main_ui
substring_list = ["error", "exception", "fail"] # Strings to look for in log file that indicate an error
class MyScroll(QScrollBar):
def __init__(self, parent=None):
super(MyScroll, self).__init__(parent)
def paintEvent(self, event):
super().paintEvent(event)
painter = QPainter(self)
pen = QPen()
pen.setWidth(3)
pen.setColor(Qt.red)
pen.setStyle(Qt.SolidLine)
painter.setPen(pen)
        if not hasattr(self, "values"):  # return if values is not set
            return
for value in self.values:
painter.drawLine(0, value, 15, value)
def draw(self, values):
self.values = values
self.update()
class MainWindow(QMainWindow, main_ui.Ui_main):
def __init__(self, parent=None):
super(MainWindow, self).__init__(parent)
self.setupUi(self) # setup the GUI --> function generated by pyuic4
self.txtEdit.setCenterOnScroll(True)
fullpath = "sample.log" # aos_diag log file
self.readfile(fullpath)
self.lines = []
self.scroll = MyScroll()
self.txtEdit.setVerticalScrollBar(self.scroll)
self.show()
def calc_drawline_pixel(self, lineno):
widget = self.txtEdit.verticalScrollBar()
total_lines = widget.maximum()
scroll_pixel_height = widget.height()
factor = lineno / total_lines
uparrow_height = 15
draw_at_pixel = int(factor * scroll_pixel_height) + uparrow_height
return draw_at_pixel
@Slot()
def on_btnMarkErrors_clicked(self):
self.lines = [] # lines is a list of line numbers which match search
self.pos = 0
string = self.txtEdit.toPlainText()
reclist = string.split("\n")
for lineno, line in enumerate(reclist):
flag = any(substring.lower() in line.lower() for substring in substring_list) # if any substring in list is in the line then return true
if flag:
self.markline(lineno)
self.lines.append(lineno)
self.lines = sorted(self.lines)
# fill values list and pass to draw method of scrollbar
values = []
for lineno in self.lines:
pixel = self.calc_drawline_pixel(lineno)
values.append(pixel)
self.scroll.draw(values)
def markline(self, line_number):
"""marks line with red highlighter"""
widget = self.txtEdit
cursor = QTextCursor(widget.document().findBlockByLineNumber(line_number)) # position cursor on the given line
cursor.select(QTextCursor.LineUnderCursor) # Select the line.
fmt = QTextCharFormat()
RED = QColor(228,191,190)
fmt.setBackground(RED)
cursor.mergeCharFormat(fmt) # Apply the format to the selected text
def readfile(self, fullpath):
fullpath = fullpath.replace("\\", "/") # flip all backslashes to forward slashes because python prefers
path,file = os.path.split(fullpath) # extract path and file from fullpath
fin = open(fullpath, encoding="utf8", errors='ignore')
content = fin.read()
fin.close()
self.txtEdit.setPlainText(content)
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
app.setStyle("Fusion")
myapp = MainWindow()
rc = app.exec()
sys.exit(rc)
</code></pre>
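<p>A side note on the mapping done in <code>calc_drawline_pixel</code> above: one common source of inaccuracy is that <code>QScrollBar.maximum()</code> is not the total line count (it is roughly total lines minus the visible page), and the drawable groove excludes the arrow buttons at both ends. As intuition only, here is a minimal sketch of the line-to-pixel mapping as a pure function; the groove geometry numbers are assumptions for illustration, and in real code they should be queried from the widget's style rather than hard-coded:</p>

```python
def line_to_scrollbar_y(lineno, total_lines, groove_top, groove_height):
    """Map a 0-based line number to a y pixel inside the scrollbar groove.

    groove_top / groove_height describe the drawable area between the two
    arrow buttons (hypothetical values below); dividing by total_lines - 1
    makes the first line land at the top of the groove and the last line
    at the bottom.
    """
    frac = lineno / max(total_lines - 1, 1)
    return groove_top + int(frac * groove_height)

# first and last line of a 100-line document, 400 px groove starting at y=15
print(line_to_scrollbar_y(0, 100, 15, 400))   # 15
print(line_to_scrollbar_y(99, 100, 15, 400))  # 415
```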
<p>This is what main_ui.py looks like:</p>
<pre><code>from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QLocale,
    QMetaObject, QObject, QPoint, QRect,
    QSize, QTime, QUrl, Qt)
from PySide6.QtGui import (QBrush, QColor, QConicalGradient, QCursor,
    QFont, QFontDatabase, QGradient, QIcon,
    QImage, QKeySequence, QLinearGradient, QPainter,
    QPalette, QPixmap, QRadialGradient, QTransform)
from PySide6.QtWidgets import (QApplication, QFrame, QHBoxLayout, QMainWindow,
    QPlainTextEdit, QPushButton, QSizePolicy, QSpacerItem,
    QStatusBar, QVBoxLayout, QWidget)

class Ui_main(object):
    def setupUi(self, main):
        if not main.objectName():
            main.setObjectName(u"main")
        main.resize(602, 463)
        self.centralwidget = QWidget(main)
        self.centralwidget.setObjectName(u"centralwidget")
        self.verticalLayout = QVBoxLayout(self.centralwidget)
        self.verticalLayout.setObjectName(u"verticalLayout")
        self.frame = QFrame(self.centralwidget)
        self.frame.setObjectName(u"frame")
        self.frame.setFrameShape(QFrame.NoFrame)
        self.frame.setFrameShadow(QFrame.Raised)
        self.horizontalLayout = QHBoxLayout(self.frame)
        self.horizontalLayout.setObjectName(u"horizontalLayout")
        self.horizontalLayout.setContentsMargins(0, 0, 0, 0)
        self.btnMarkErrors = QPushButton(self.frame)
        self.btnMarkErrors.setObjectName(u"btnMarkErrors")
        self.horizontalLayout.addWidget(self.btnMarkErrors)
        self.horizontalSpacer_2 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
        self.horizontalLayout.addItem(self.horizontalSpacer_2)
        self.verticalLayout.addWidget(self.frame)
        self.txtEdit = QPlainTextEdit(self.centralwidget)
        self.txtEdit.setObjectName(u"txtEdit")
        font = QFont()
        font.setFamilies([u"Consolas"])
        self.txtEdit.setFont(font)
        self.txtEdit.setLineWrapMode(QPlainTextEdit.NoWrap)
        self.verticalLayout.addWidget(self.txtEdit)
        main.setCentralWidget(self.centralwidget)
        self.statusbar = QStatusBar(main)
        self.statusbar.setObjectName(u"statusbar")
        main.setStatusBar(self.statusbar)
        self.retranslateUi(main)
        QMetaObject.connectSlotsByName(main)
    # setupUi

    def retranslateUi(self, main):
        main.setWindowTitle(QCoreApplication.translate("main", u"Log Viewer", None))
        self.btnMarkErrors.setText(QCoreApplication.translate("main", u"Mark Errors", None))
</code></pre>
| <python><pyside6><pyqt6> | 2023-08-31 19:58:47 | 1 | 7,959 | panofish |
77,018,917 | 12,466,687 | How to convert column to list over groups in expressions in Polars? | <p>I am not able to create a <code>list</code> of values for each group from another column.</p>
<p><strong>Data</strong></p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"A": [1,4,4,7,7,10,10,13,16],
"B": [2,5,5,8,18,11,11,14,17],
"C": [3,6,6,9,9,12,12,15,18]
}
)
</code></pre>
<p><strong>Tried</strong></p>
<pre><code>df.with_columns(pl.col('B').implode().over('A').alias("group_list"))
</code></pre>
<p><strong>Error</strong></p>
<p><strong>Update</strong>
Expected output:</p>
<pre><code>┌─────┬─────┬─────┬────────────┐
│ A   ┆ B   ┆ C   ┆ group_list │
│ --- ┆ --- ┆ --- ┆ ---        │
│ i64 ┆ i64 ┆ i64 ┆ list[i64]  │
╞═════╪═════╪═════╪════════════╡
│ 1   ┆ 2   ┆ 3   ┆ [2]        │
│ 4   ┆ 5   ┆ 6   ┆ [5, 5]     │
│ 4   ┆ 5   ┆ 6   ┆ [5, 5]     │
│ 7   ┆ 8   ┆ 9   ┆ [8, 18]    │
│ 7   ┆ 18  ┆ 9   ┆ [8, 18]    │
│ 10  ┆ 11  ┆ 12  ┆ [11, 11]   │
│ 10  ┆ 11  ┆ 12  ┆ [11, 11]   │
│ 13  ┆ 14  ┆ 15  ┆ [14]       │
│ 16  ┆ 17  ┆ 18  ┆ [17]       │
└─────┴─────┴─────┴────────────┘
</code></pre>
| <python><dataframe><python-polars> | 2023-08-31 19:57:53 | 2 | 2,357 | ViSa |
77,018,742 | 12,708,740 | Use column names of one df to map values into rows of another df | <p>I am trying to map the values of one df (based on its column names) into the values of another df. Code below – however, here is also a screenshot to illustrate my question:</p>
<p><a href="https://i.sstatic.net/aynaQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aynaQ.png" alt="enter image description here" /></a></p>
<p>For example, row 0 of the "desired" df matches the <em>category</em> of the row values in the "keys" df. But the specific values are taken from the "values" df. For example, in the desired df, <code>desired['block_1'][0]</code> is <code>values['apple'][0]</code>, <code>desired['block_2'][0]</code> is <code>values['orange'][0]</code>, <code>desired['block_3'][0]</code> is <code>values['pear'][0]</code>, and <code>desired['block_4'][0]</code> is <code>values['berry'][0]</code>.</p>
<p>Is there a way to do this cleanly? Thank you! Code below.</p>
<p>keys df:</p>
<pre><code>block_1 = ['apple', 'apple', 'apple', 'apple', 'apple', 'apple']
block_2 = ['orange', 'berry', 'pear', 'pear', 'orange', 'berry']
block_3 = ['pear', 'pear', 'orange', 'berry', 'berry', 'orange']
block_4 = ['berry', 'orange', 'berry', 'orange', 'pear', 'pear']
keys = pd.DataFrame({'block_1': block_1, 'block_2': block_2, 'block_3': block_3, 'block_4': block_4})
</code></pre>
<p>values df:</p>
<pre><code>apple = [('apple_1', 'apple_3', 'apple_2'),
('apple_2', 'apple_3', 'apple_1'),
('apple_3', 'apple_1', 'apple_2'),
('apple_3', 'apple_2', 'apple_1'),
('apple_1', 'apple_2', 'apple_3'),
('apple_2', 'apple_1', 'apple_3')]
pear = [('pear_1', 'pear_3', 'pear_2'),
('pear_2', 'pear_3', 'pear_1'),
('pear_3', 'pear_1', 'pear_2'),
('pear_3', 'pear_2', 'pear_1'),
('pear_1', 'pear_2', 'pear_3'),
('pear_2', 'pear_1', 'pear_3')]
orange = [('orange_1', 'orange_3', 'orange_2'),
('orange_2', 'orangee_3', 'orange_1'),
('orange_3', 'orange_1', 'orange_2'),
('orange_3', 'orange_2', 'orange_1'),
('orange_1', 'orange_2', 'orange_3'),
('orange_2', 'orange_1', 'orange_3')]
berry = [('berry_1', 'berry_3', 'berry_2'),
('berry_2', 'berry_3', 'berry_1'),
('berry_3', 'berry_1', 'berry_2'),
('berry_3', 'berry_2', 'berry_1'),
('berry_1', 'berry_2', 'berry_3'),
('berry_2', 'berry_1', 'berry_3')]
values = pd.DataFrame({'apple': apple, 'pear': pear, 'orange': orange, 'berry': berry})
</code></pre>
<p>desired output:</p>
<pre><code>desired = pd.DataFrame({'block_1': [('apple_1', 'apple_3', 'apple_2'),
('apple_2', 'apple_3', 'apple_1'),
('apple_3', 'apple_1', 'apple_2'),
('apple_3', 'apple_2', 'apple_1'),
('apple_1', 'apple_2', 'apple_3'),
('apple_2', 'apple_1', 'apple_3')],
'block_2': [('orange_1', 'orange_3', 'orange_2'),
('berry_2', 'berry_3', 'berry_1'),
('pear_3', 'pear_1', 'pear_2'),
('pear_3', 'pear_2', 'pear_1'),
('orange_1', 'orange_2', 'orange_3'),
('berry_2', 'berry_1', 'berry_3')],
'block_3': [('pear_1', 'pear_3', 'pear_2'),
('pear_2', 'pear_3', 'pear_1'),
('orange_3', 'orange_1', 'orange_2'),
('berry_3', 'berry_2', 'berry_1'),
('berry_1', 'berry_2', 'berry_3'),
('orange_2', 'orange_1', 'orange_3')],
'block_4': [('berry_1', 'berry_3', 'berry_2'),
('orange_2', 'orangee_3', 'orange_1'),
('berry_3', 'berry_1', 'berry_2'),
('orange_3', 'orange_2', 'orange_1'),
('pear_1', 'pear_2', 'pear_3'),
('pear_2', 'pear_1', 'pear_3')]})
</code></pre>
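<p>Not part of the original question, but a hedged sketch of one approach: since <code>keys</code> and <code>values</code> share the same row index, each cell of <code>keys</code> can be treated as a (row, column) lookup into <code>values</code> via <code>DataFrame.at</code>. Shown on a trimmed two-row version of the frames above:</p>

```python
import pandas as pd

keys = pd.DataFrame({
    "block_1": ["apple", "apple"],
    "block_2": ["orange", "berry"],
})
values = pd.DataFrame({
    "apple":  [("apple_1",), ("apple_2",)],
    "orange": [("orange_1",), ("orange_2",)],
    "berry":  [("berry_1",), ("berry_2",)],
})

# for each keys-column, read values[category] at the matching row index
desired = keys.apply(lambda col: pd.Series(
    [values.at[i, cat] for i, cat in col.items()], index=col.index))
```

For example, <code>desired.at[1, "block_2"]</code> ends up as <code>("berry_2",)</code>, because row 1 of <code>keys["block_2"]</code> is "berry" and row 1 of <code>values["berry"]</code> holds that tuple.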
| <python><pandas><dataframe><dictionary> | 2023-08-31 19:27:31 | 3 | 675 | psychcoder |
77,018,692 | 12,611,330 | Solving environment: failed when install RAPIDS using conda | <p>In order to install <code>RAPIDS</code>, I get the command from the site below and run it, but the following error occurs.</p>
<p><a href="https://docs.rapids.ai/install" rel="nofollow noreferrer">https://docs.rapids.ai/install</a></p>
<pre><code>conda create --solver=libmamba -n rapids-23.08 -c rapidsai -c conda-forge -c nvidia rapids=23.08 python=3.10 cuda-version=12.0
</code></pre>
<hr />
<pre><code>Channels:
- rapidsai
- conda-forge
- nvidia
- defaults
Platform: win-64
Collecting package metadata (repodata.json): - DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): conda.anaconda.org:443
\ DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/noarch/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/win-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/r/noarch/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/main/win-64/repodata.json HTTP/1.1" 304 0
/ DEBUG:urllib3.connectionpool:https://conda.anaconda.org:443 "GET /rapidsai/noarch/repodata.json HTTP/1.1" 304 0
- DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/msys2/win-64/repodata.json HTTP/1.1" 304 0
DEBUG:urllib3.connectionpool:https://repo.anaconda.com:443 "GET /pkgs/msys2/noarch/repodata.json HTTP/1.1" 304 0
\ DEBUG:urllib3.connectionpool:https://conda.anaconda.org:443 "GET /rapidsai/win-64/repodata.json HTTP/1.1" 304 0
done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- rapids=23.08*
Current channels:
- https://conda.anaconda.org/rapidsai
- https://conda.anaconda.org/conda-forge
- https://conda.anaconda.org/nvidia
- defaults
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
<hr />
<p>OS : Windows Server 2022 Datacenter</p>
<p>Python : 3.10.12</p>
<p>conda : 23.7.3</p>
<p>cuda : 12.0</p>
| <python><conda><rapids> | 2023-08-31 19:17:51 | 1 | 1,012 | Tio |
77,018,639 | 4,470,365 | ModuleNotFoundError: No module named pyjanitor despite pyjanitor being installed | <p>I've confirmed the pyjanitor package is installed - it shows up in <code>pip list</code> and I get a confirmation if I try to reinstall with <code>pip install pyjanitor</code>. But then when I run <code>import pyjanitor</code> I get the error:</p>
<blockquote>
<p>No module named 'pyjanitor'</p>
</blockquote>
<p>What am I doing wrong?</p>
| <python><pyjanitor> | 2023-08-31 19:10:25 | 1 | 23,346 | Sam Firke |
77,018,585 | 181,238 | Checking if "protected" links resolve | <p>I wrote a simple python script that goes over local HTML documents and text files and tries to check if all the links in them resolve.</p>
<p>Basically, something similar to this (very simplified):</p>
<pre><code>import re, requests
with open(filename, "r", encoding="utf-8") as file:
for line in file:
for m in re.finditer(r'https?://[^\s"]+', line):
try:
requests.head(m[0]).raise_for_status()
except Exception as err:
print(err)
</code></pre>
<p>Unfortunately, it does not work for links on Twitter (which requires the user to be logged in) and with some other sites that use captcha protection.</p>
<p>I also tried writing a small JS code inside the document but ran into CORS issues.</p>
<p>Can anyone help me overcome these issues?<br />
Specifically for Twitter using the python script - how do I programmatically log in?</p>
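<p>For sites that require a login, one hedged sketch (the cookie name and value below are placeholders, and Twitter specifically may still block non-browser clients regardless): reuse a <code>requests.Session</code> carrying a browser-like User-Agent plus the cookies copied from an already-logged-in browser session, and fall back from HEAD to GET since many servers reject HEAD:</p>

```python
import requests

session = requests.Session()
session.headers["User-Agent"] = "Mozilla/5.0"  # look like a browser
# hypothetical: copy the auth cookie value from your browser's dev tools
session.cookies.set("auth_token", "PASTE_VALUE_FROM_BROWSER")

def check(url):
    """Return True if the URL resolves using the authenticated session."""
    try:
        r = session.head(url, allow_redirects=True, timeout=10)
        if r.status_code >= 400:  # some servers reject HEAD; retry with GET
            r = session.get(url, timeout=10)
        r.raise_for_status()
        return True
    except requests.RequestException as err:
        print(url, err)
        return False
```

This avoids scripting an actual login flow (which for Twitter involves JavaScript and anti-bot checks); the session simply replays cookies a real browser already obtained.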
| <javascript><python><html><python-requests> | 2023-08-31 18:57:58 | 1 | 1,976 | Alex O |
77,018,538 | 12,466,687 | How to convert column to list in expressions in Polars? | <p>I was able to convert a column to a <code>list</code> earlier, but it stopped working after the latest version update.</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"A": [1,4,4,7,7,10,10,13,16],
"B": [2,5,5,8,18,11,11,14,17],
"C": [3,6,6,9,9,12,12,15,18]
}
)
</code></pre>
<p>I have also referred to <a href="https://pola-rs.github.io/polars-book/user-guide/expressions/lists/#creating-a-list-column" rel="nofollow noreferrer">polars_list_link</a>, but the code below is not working:</p>
<pre><code>df.with_columns(
pl.col('B').sum().over('A').alias("sum[B]/A_groups"),
pl.col("A").list.head(2).alias("list_A"),
pl.col("A").list.lengths().alias("length"),
)
</code></pre>
<p><strong>Error</strong>
SchemaError: invalid series dtype: expected <code>List</code>, got <code>i64</code></p>
<p><strong>Update</strong>
Expected Results</p>
<pre><code>┌─────┬─────┬─────┬─────────────────┬───────────┬────────┐
│ A   ┆ B   ┆ C   ┆ sum[B]/A_groups ┆ list_A    ┆ length │
│ --- ┆ --- ┆ --- ┆ ---             ┆ ---       ┆ ---    │
│ i64 ┆ i64 ┆ i64 ┆ i64             ┆ list[i64] ┆ u32    │
╞═════╪═════╪═════╪═════════════════╪═══════════╪════════╡
│ 1   ┆ 2   ┆ 3   ┆ 2               ┆ [1, 4]    ┆ 9      │
│ 4   ┆ 5   ┆ 6   ┆ 10              ┆ [1, 4]    ┆ 9      │
│ 4   ┆ 5   ┆ 6   ┆ 10              ┆ [1, 4]    ┆ 9      │
│ 7   ┆ 8   ┆ 9   ┆ 26              ┆ [1, 4]    ┆ 9      │
│ 7   ┆ 18  ┆ 9   ┆ 26              ┆ [1, 4]    ┆ 9      │
│ 10  ┆ 11  ┆ 12  ┆ 22              ┆ [1, 4]    ┆ 9      │
│ 10  ┆ 11  ┆ 12  ┆ 22              ┆ [1, 4]    ┆ 9      │
│ 13  ┆ 14  ┆ 15  ┆ 14              ┆ [1, 4]    ┆ 9      │
│ 16  ┆ 17  ┆ 18  ┆ 17              ┆ [1, 4]    ┆ 9      │
└─────┴─────┴─────┴─────────────────┴───────────┴────────┘
</code></pre>
<p>Later I will use this with window partitioning for these to make more sense.</p>
| <python><python-polars> | 2023-08-31 18:50:25 | 1 | 2,357 | ViSa |
77,018,508 | 5,858,752 | Iterating over dict of dicts vs (some other data structure) of dicts | <p>I'm looking at a codebase, and I see them iterating over only the values of a dict of dicts <code>d</code>. Everywhere <code>d</code> is used, it's used with <code>itervalues()</code> (the code iterates over only the values of the dict), so the keys of <code>d</code> are never used; I think they're just using the outer dict to hold all the inner dicts.</p>
<p>Is there an advantage to iterating over the values of a dict as opposed to just storing the dicts in a sequential data structure like a NumPy array or a list, which might be more performant? With an array especially, I think we could take advantage of cache locality.</p>
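<p>For intuition only, a tiny self-contained sketch (the data is made up): iterating a dict's values and iterating a list holding the same dicts visit the identical objects, so both ultimately chase pointers to separately allocated dict objects; the cache-locality win of a list of dicts is therefore limited compared to a true array of scalars, and any difference is mostly container iteration overhead.</p>

```python
d = {i: {"x": i} for i in range(1_000)}
as_list = list(d.values())

# the same objects are visited either way; only the container differs
assert all(a is b for a, b in zip(d.values(), as_list))

total_from_dict = sum(item["x"] for item in d.values())
total_from_list = sum(item["x"] for item in as_list)
assert total_from_dict == total_from_list == sum(range(1_000))
```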
| <python><arrays><list><dictionary> | 2023-08-31 18:42:52 | 1 | 699 | h8n2 |
77,018,494 | 4,443,378 | How to normalize nested fields with json_normalize()? | <p>I have a JSON with nested objects (nested lists of objects):</p>
<pre><code>{
"uniqueIdentifier": {
"identity": {
"textIdentifier": "MysticFoliage",
"encodedIdentifier": "<p>MysticFoliage</p>"
}
},
"languagePreference": {
"chosenLanguage": {
"displayText": "Enigmatic Tongue",
"languageCode": "<p>Enigmatic Tongue</p>"
}
},
"specialCode": 42,
"creationTimestamp": "1630411200000",
"creatorAlias": "whisperingShadow",
"unusualStatus": "UNEXPLORED",
"identifierCode": "XyzAbc123",
"nativeTongue": [],
"modificationTimestamp": "1670160000000",
"modifierAlias": "arcaneTraveler",
"designatedNameWithComma": {
"designation": {
"displayText": "Foliage, Mystic",
"encodedText": "<p>Foliage, Mystic</p>"
}
},
"otherRecognizedUniqueIdentifiers": [],
"otherKnownLanguages": [],
"otherKnownCodes": [],
"territories": [
{
"identityCode": "DefGhi456",
"designationEn": "Enchanted Forest",
"designationLanguage": "Enchanted Forest"
}
],
"scientificDesignation": {
"generatedScientificDesignation": {
"designation": {
"displayText": "Mysticus plantae enigma",
"encodedText": "<em>Mysticus</em> <em>plantae</em> enigma"
},
"status": "MYSTERIOUS"
},
"genusInfo": {
"code": "ZwxYvu789",
"designation": "Mysticus"
},
"speciesInfo": {
"code": "RstUvw123",
"designation": "plantae"
},
"subSpeciesInfo": {
"code": "KlmNop456",
"designation": "enigma"
},
"subSpeciesPrefixes": [
"sub."
],
"varietyPrefixes": [
"var."
]
},
"statusState": "UNKNOWN",
"currentCondition": "ETERNAL",
"classificationGroup": {
"groupCode": "PqrStu789",
"designation": "Enigma Kingdom",
"designationLanguage": "Enigmatic Realm"
}
}
</code></pre>
<p>I used <code>pd.json_normalize(json)</code> on it but some of the fields are still nested, such as "territories":</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">territories</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">[{'identityCode': 'DefGhi456', 'designationEn': 'Enchanted Forest', 'designationLanguage': 'Enchanted Forest'}]</td>
</tr>
</tbody>
</table>
</div>
<p>what I want is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">territories.identityCode</th>
<th style="text-align: left;">territories.designationEn</th>
<th style="text-align: left;">territories.designationLanguage</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">'DefGhi456'</td>
<td style="text-align: left;">'Enchanted Forest'</td>
<td style="text-align: left;">'Enchanted Forest'</td>
</tr>
</tbody>
</table>
</div>
<p>I tried <code>pd.json_normalize(json, "territories")</code>, but this only gives the output for "territories" and not the rest of the JSON (it does normalize 'territories' correctly).</p>
<p>From this <a href="https://stackoverflow.com/questions/71648478/nested-list-after-json-normalize">Nested list after json_normalize</a> answer it says to do e.g:</p>
<pre><code>metadata = ['name', 'period', 'title', 'description', 'id']
out = pd.json_normalize(data_read['data'], 'values', metadata)
</code></pre>
<p>But I have a lot of column titles and several other jsons will more fields, it would require me to create many lists manually.
I tried just getting the column names using df.columns by doing:</p>
<pre><code>fileReader = json.loads(data)
df = pd.DataFrame(fileReader)
metadata = list(df.columns)
j2 = pd.json_normalize(fileReader, "ranges", metadata)
</code></pre>
<p>But I get error:</p>
<pre><code>KeyError: "Key 'note' not found. To replace missing values of 'note' with np.nan, pass in errors='ignore'"
</code></pre>
<p>Trying with <code>errors='ignore'</code> I get:</p>
<pre><code>ValueError: Conflicting metadata name id, need distinguishing prefix
</code></pre>
<p>Which seems to mean I'd still need to manually set the column names for some of the "id" columns, which is what I want to avoid.</p>
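<p>Not an authoritative recipe, but one sketch that avoids listing metadata columns by hand: normalize everything once (lists survive as-is), then explode the still-nested list column, normalize just that piece, and join it back by index. Shown on a trimmed-down version of the JSON above:</p>

```python
import pandas as pd

data = {
    "specialCode": 42,
    "creatorAlias": "whisperingShadow",
    "territories": [
        {"identityCode": "DefGhi456",
         "designationEn": "Enchanted Forest",
         "designationLanguage": "Enchanted Forest"}
    ],
}

flat = pd.json_normalize(data)                 # nested dicts flattened, lists kept
terr = flat.pop("territories").explode()       # one row per list element, index preserved
terr_flat = pd.json_normalize(terr.tolist()).add_prefix("territories.")
terr_flat.index = terr.index                   # realign so the join matches rows
out = flat.join(terr_flat)
print(out["territories.identityCode"].iloc[0])  # DefGhi456
```

Because the join is by index, this also behaves sensibly when a row has several territories (the scalar columns are simply repeated), though rows with an empty list would need a <code>dropna()</code> on the exploded series first.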
| <python><json><pandas><json-normalize> | 2023-08-31 18:40:55 | 1 | 596 | Mitch |
77,018,449 | 1,852,526 | ET.fromstring gives ParseError | <p>I am trying to parse an XML string, and I want just the PackageReference Include attribute details and its Version. When I call ET.fromstring(xml), it raises an error like <code>xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 284, column 84</code>. I verified the XML on multiple websites and it appears to be valid and well formed.</p>
<p><a href="https://i.sstatic.net/aRLck.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aRLck.png" alt="xml parse error" /></a></p>
<p>Code:</p>
<pre><code>import xml.etree.ElementTree as ET

thisDict = {}

def parseXMLInCsProj(xml):
    tree = ET.fromstring(xml)
    for packageReference in tree.findall('.//PackageReference'):
        if packageReference is None:
            continue
        if 'Include' in packageReference:
            key = packageReference.get('Include')
            child_item = packageReference.find('.//Version').text
            thisDict[key] = child_item
xmlstr=f'''<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Castle.Core" version="5.1.1" targetFramework="net481" />
<package id="Moq" version="4.18.4" targetFramework="net481" />
<package id="System.Runtime.CompilerServices.Unsafe" version="4.5.3" targetFramework="net481" />
<package id="xunit.runner.visualstudio" version="2.4.5" targetFramework="net481" developmentDependency="true" />
</packages>'''
xmlstr1=f'''<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProductVersion>8.0.30703</ProductVersion>
<SchemaVersion>2.0</SchemaVersion>
<ProjectGuid>fdgfdgfg</ProjectGuid>
<OutputType>Exe</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>Labcyte.Echo.Server</RootNamespace>
<AssemblyName>EchoServer</AssemblyName>
<TargetFrameworkVersion>v4.8.1</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<PublishUrl>publish\</PublishUrl>
<Install>true</Install>
<InstallFrom>Disk</InstallFrom>
<UpdateEnabled>false</UpdateEnabled>
<UpdateMode>Foreground</UpdateMode>
<UpdateInterval>7</UpdateInterval>
<UpdateIntervalUnits>Days</UpdateIntervalUnits>
<UpdatePeriodically>false</UpdatePeriodically>
<UpdateRequired>false</UpdateRequired>
<MapFileExtensions>true</MapFileExtensions>
<ApplicationRevision>0</ApplicationRevision>
<ApplicationVersion>1.0.0.%2a</ApplicationVersion>
<IsWebBootstrapper>false</IsWebBootstrapper>
<UseApplicationTrust>false</UseApplicationTrust>
<BootstrapperEnabled>true</BootstrapperEnabled>
<TargetFrameworkProfile />
<SccProjectName>
</SccProjectName>
<SccLocalPath>
</SccLocalPath>
<SccAuxPath>
</SccAuxPath>
<SccProvider>
</SccProvider>
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>TRACE;DEBUG</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<Prefer32Bit>false</Prefer32Bit>
<PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
<DebugType>none</DebugType>
<Optimize>true</Optimize>
<OutputPath>bin\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<Prefer32Bit>false</Prefer32Bit>
<PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
<PropertyGroup>
<StartupObject>
</StartupObject>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x86'">
<DebugSymbols>true</DebugSymbols>
<OutputPath>bin\\x86\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<DebugType>full</DebugType>
<PlatformTarget>x86</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
<Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x86'">
<OutputPath>bin\\x86\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<DebugType>pdbonly</DebugType>
<PlatformTarget>x86</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<PropertyGroup>
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'">
<DebugSymbols>true</DebugSymbols>
<OutputPath>bin\\x64\Debug\</OutputPath>
<DefineConstants>TRACE;DEBUG</DefineConstants>
<DebugType>full</DebugType>
<PlatformTarget>x64</PlatformTarget>
<LangVersion>7.3</LangVersion>
<ErrorReport>prompt</ErrorReport>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
<OutputPath>bin\\x64\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<PlatformTarget>x64</PlatformTarget>
<LangVersion>7.3</LangVersion>
<ErrorReport>prompt</ErrorReport>
</PropertyGroup>
<ItemGroup>
<Reference Include="InstallerUtil">
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\InstallerUtil.dll</HintPath>
</Reference>
<Reference Include="Labcyte.Core.CommonInterface">
<HintPath>..\..\..\Library\Core\CoreClient\Labcyte.Core.CommonInterface.dll</HintPath>
</Reference>
<Reference Include="Labcyte.Core.EventServer">
<HintPath>..\..\..\Library\Core\CoreClient\Labcyte.Core.EventServer.dll</HintPath>
</Reference>
<Reference Include="Labcyte.Core.Interface">
<HintPath>..\..\..\Library\Core\CoreClient\Labcyte.Core.Interface.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.Common, Version=0.0.1.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.Common.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.Data, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.Data.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.EchoNETScripts, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.EchoNETScripts.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.HostServices">
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.HostServices.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.Instrument, Version=0.0.1.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.Instrument.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.Interface, Version=0.0.1.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.Interface.dll</HintPath>
</Reference>
<Reference Include="Labcyte.EchoNET.ServiceEndpoints, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
<SpecificVersion>False</SpecificVersion>
<HintPath>..\..\..\Library\EchoNET\EchoNETServer\Labcyte.EchoNET.ServiceEndpoints.dll</HintPath>
</Reference>
<Reference Include="Microsoft.CSharp" />
<Reference Include="System" />
<Reference Include="System.ComponentModel.Composition" />
<Reference Include="System.Configuration" />
<Reference Include="System.Configuration.Install" />
<Reference Include="System.IdentityModel" />
<Reference Include="System.Runtime.Serialization" />
<Reference Include="System.ServiceModel" />
<Reference Include="System.ServiceProcess" />
<Reference Include="System.Xml" />
</ItemGroup>
<ItemGroup>
<Compile Include="Config.cs" />
<Compile Include="EchoCallBack_HostService.cs" />
<Compile Include="Echo_HostService.cs" />
<Compile Include="Host.cs">
<SubType>Component</SubType>
</Compile>
<Compile Include="Program.cs" />
<Compile Include="ProjectInstaller.cs">
<SubType>Component</SubType>
</Compile>
<Compile Include="ProjectInstaller.designer.cs">
<DependentUpon>ProjectInstaller.cs</DependentUpon>
</Compile>
<Compile Include="Properties\AssemblyInfo.cs" />
<Compile Include="Properties\Settings.Designer.cs">
<AutoGen>True</AutoGen>
<DesignTimeSharedInput>True</DesignTimeSharedInput>
<DependentUpon>Settings.settings</DependentUpon>
</Compile>
</ItemGroup>
<ItemGroup>
<None Include="app.config">
<SubType>Designer</SubType>
</None>
<None Include="Properties\Settings.settings">
<Generator>SettingsSingleFileGenerator</Generator>
<LastGenOutput>Settings.Designer.cs</LastGenOutput>
</None>
</ItemGroup>
<ItemGroup>
<BootstrapperPackage Include=".NETFramework,Version=v4.0">
<Visible>False</Visible>
<ProductName>Microsoft .NET Framework 4 %28x86 and x64%29</ProductName>
<Install>true</Install>
</BootstrapperPackage>
<BootstrapperPackage Include="Microsoft.Net.Client.3.5">
<Visible>False</Visible>
<ProductName>.NET Framework 3.5 SP1 Client Profile</ProductName>
<Install>false</Install>
</BootstrapperPackage>
<BootstrapperPackage Include="Microsoft.Net.Framework.3.5.SP1">
<Visible>False</Visible>
<ProductName>.NET Framework 3.5 SP1</ProductName>
<Install>false</Install>
</BootstrapperPackage>
<BootstrapperPackage Include="Microsoft.Windows.Installer.3.1">
<Visible>False</Visible>
<ProductName>Windows Installer 3.1</ProductName>
<Install>true</Install>
</BootstrapperPackage>
</ItemGroup>
<ItemGroup>
<WCFMetadata Include="Service References\" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\..\Pico\Medman\AppsWrapper\AppsWrapper.vcxproj">
<Project>hghjgj</Project>
<Name>AppsWrapper</Name>
</ProjectReference>
<ProjectReference Include="..\..\..\Pico\Medman\FluidTransferInterface\FluidTransferInterface.csproj">
<Project>434554</Project>
<Name>FluidTransferInterface</Name>
</ProjectReference>
<ProjectReference Include="..\..\..\Pico\Medman\HarrierFluidTransfer\HarrierFluidTransfer.csproj">
<Project>7686877</Project>
<Name>HarrierFluidTransfer</Name>
</ProjectReference>
<ProjectReference Include="..\..\EchoServerAPI\EchoServerAPI.csproj">
<Project>89798798879</Project>
<Name>EchoServerAPI</Name>
</ProjectReference>
<ProjectReference Include="..\..\Labcyte.Echo.EchoScripts\Labcyte.Echo.EchoScripts.csproj">
<Project>7687687878787</Project>
<Name>Labcyte.Echo.EchoScripts</Name>
</ProjectReference>
<ProjectReference Include="..\EchoServerAPI.Server\EchoServerAPI.Server.csproj">
<Project>76567687887</Project>
<Name>EchoServerAPI.Server</Name>
</ProjectReference>
<ProjectReference Include="..\Labcyte.Echo.Commands\Labcyte.Echo.Commands.csproj">
<Project>878989808899</Project>
<Name>Labcyte.Echo.Commands</Name>
</ProjectReference>
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.ML">
<Version>2.0.1</Version>
</PackageReference>
<PackageReference Include="Microsoft.ML.CpuMath">
<Version>2.0.1</Version>
</PackageReference>
<PackageReference Include="Microsoft.ML.DataView">
<Version>2.0.1</Version>
</PackageReference>
<PackageReference Include="Microsoft.ML.OnnxRuntime">
<Version>1.14.1</Version>
</PackageReference>
<PackageReference Include="Microsoft.ML.OnnxTransformer">
<Version>2.0.1</Version>
</PackageReference>
<PackageReference Include="Mono.pdb2mdb">
<Version>0.1.0.20130128</Version>
</PackageReference>
<PackageReference Include="System.Runtime.InteropServices.RuntimeInformation">
<Version>4.3.0</Version>
</PackageReference>
</ItemGroup>
<ItemGroup>
<EmbeddedResource Include="ProjectInstaller.resx">
<DependentUpon>ProjectInstaller.cs</DependentUpon>
</EmbeddedResource>
</ItemGroup>
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<PropertyGroup>
<PreBuildEvent>rmdir /S /Q "$(TargetDir)ScriptClasses"
mkdir "$(TargetDir)ScriptClasses"
rmdir /S /Q "$(SolutionDir)..\Library\Echo\EchoServer"
mkdir "$(SolutionDir)..\Library\Echo\EchoServer"
mkdir "$(SolutionDir)..\Library\Echo\EchoServer\ScriptClasses"
</PreBuildEvent>
</PropertyGroup>
<PropertyGroup>
<PostBuildEvent>copy "$(TargetDir)*.dll" "$(SolutionDir)..\Library\Echo\EchoServer"\"
copy "$(TargetDir)*.pdb" "$(SolutionDir)..\Library\Echo\EchoServer"\"
copy "$(SolutionDir)..\Library\EchoNET\EchoNETServer\ScriptClasses\*.cs" "$(TargetDir)ScriptClasses\"
copy "$(TargetDir)ScriptClasses\" "$(SolutionDir)..\Library\Echo\EchoServer"\ScriptClasses\"
copy "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\\v4.8.1\Facades\System.Linq.*" "$(TargetDir)"
copy "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\\v4.8.1\System.Core.dll" "$(TargetDir)"
copy "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\\v4.8.1\Facades\\netstandard.dll" "$(TargetDir)"
</PostBuildEvent>
</PropertyGroup>
<!-- To modify your build process, add your task inside one of the targets below and uncomment it.
Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
</Project>'''
parseXMLInCsProj(xmlstr1)
</code></pre>
<h2>Edit</h2>
<p>As suggested I updated my XML string. Now I don't see the error. But as soon as the debugger comes to the line <code>for packageReference in tree.findall('.//PackageReference'):</code>, the program terminates. What can I try next?</p>
| <python><xml><elementtree> | 2023-08-31 18:32:59 | 2 | 1,774 | nikhil |
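A frequent reason `tree.findall('.//PackageReference')` silently matches nothing (or a script dies at that point) is a default `xmlns` on the `<Project>` element, which older csproj files declare. A minimal sketch, using a made-up csproj fragment, that matches elements regardless of namespace:

```python
import xml.etree.ElementTree as ET

# Made-up csproj fragment; the MSBuild default namespace below is what
# makes plain tag names like 'PackageReference' fail to match.
xmlstr = """<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <PackageReference Include="Microsoft.ML">
      <Version>2.0.1</Version>
    </PackageReference>
    <PackageReference Include="Microsoft.ML.OnnxRuntime">
      <Version>1.14.1</Version>
    </PackageReference>
  </ItemGroup>
</Project>"""

root = ET.fromstring(xmlstr)
# '{*}' (Python 3.8+) matches any namespace, so the search works whether
# or not the file declares one.
packages = {
    pr.get("Include"): pr.findtext("{*}Version")
    for pr in root.findall(".//{*}PackageReference")
}
print(packages)
```

On older Pythons without the `{*}` wildcard, the equivalent is spelling out the full `{http://schemas.microsoft.com/developer/msbuild/2003}PackageReference` tag.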
77,018,367 | 3,412,607 | XSLT Pandas - How to Pull the Grandchild value to a Dataframe | <p>I'm trying to flatten an XML structure into a Pandas (Python) Dataframe using a custom XSLT with the read_xml function.</p>
<p>Given the following XML Structure:</p>
<pre><code><ml:Meals xmlns:ml="http://www.food.com">
<ml:Meal>
<ml:type>lunch</ml:type>
<ml:main_course>turkey sandwich</ml:main_course>
<ml:dessert>
<ml:name>Cookie</ml:name>
</ml:dessert>
</ml:Meal>
</ml:Meals>
</code></pre>
<p>I ultimately want my dataframe to look like this:</p>
<pre><code>+-----+---------------+-------+
| type| main_course|dessert|
+-----+---------------+-------+
|lunch|turkey sandwich| Cookie|
+-----+---------------+-------+
</code></pre>
<p>Struggling a bit with my XSLT syntax. I feel like I'm close with this but not quite there. Any help would be appreciated! Thanks in advance.</p>
<pre><code><xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:ml="http://www.food.com">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="*[@ml:dessert]">
<xsl:copy>
<xsl:value-of select="@ml:name"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
</code></pre>
| <python><pandas><xml><xslt><parent-child> | 2023-08-31 18:20:14 | 2 | 815 | daniel9x |
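If the stylesheet keeps fighting back, the same flattening can be done without XSLT and the result fed to `pd.DataFrame`. A stdlib sketch, reusing the namespace URI from the question; the grandchild value is pulled up with a two-step path:

```python
import xml.etree.ElementTree as ET

xml = """<ml:Meals xmlns:ml="http://www.food.com">
  <ml:Meal>
    <ml:type>lunch</ml:type>
    <ml:main_course>turkey sandwich</ml:main_course>
    <ml:dessert>
      <ml:name>Cookie</ml:name>
    </ml:dessert>
  </ml:Meal>
</ml:Meals>"""

ns = {"ml": "http://www.food.com"}
rows = []
for meal in ET.fromstring(xml).findall("ml:Meal", ns):
    rows.append({
        "type": meal.findtext("ml:type", namespaces=ns),
        "main_course": meal.findtext("ml:main_course", namespaces=ns),
        # the grandchild text is reached via a two-step path
        "dessert": meal.findtext("ml:dessert/ml:name", namespaces=ns),
    })
print(rows)
```

`pd.DataFrame(rows)` then yields the one-row frame shown above.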
77,018,312 | 8,844,970 | Pytest: Specify different environments for different tests | <p>I am writing a long workflow for scientific data processing that uses Snakemake. Along the workflow, there are multiple scripts that use different environments. The folder looks something like this:</p>
<pre><code>package
|-- scripts
|   |-- script1.py
|   |-- script2.py
|-- tests
|   |-- test1.py
|   |-- test2.py
|-- envs
|   |-- env1.yml
|   |-- env2.yml
</code></pre>
<p><code>test1</code> uses the conda environment in <code>env1.yml</code> to test <code>script1</code>, and so on. Currently, I can run individual tests with:</p>
<pre><code>conda activate env1 && python -m pytest tests/test1.py && conda deactivate
</code></pre>
<p>Is there a way I can use pytest to pair the tests with the right environment automatically? Right now I am thinking about creating a bash script to activate env and run the test with one line per test. Alternatively, I was also looking into multiple pytest configurations as <a href="https://stackoverflow.com/questions/71515440/specify-multiple-pytest-configurations">described in this SO link</a>.</p>
<p>I was hoping to do this natively with pytest instead of bash scripts, if that is possible.</p>
| <python><pytest> | 2023-08-31 18:08:10 | 0 | 369 | spo |
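One hedged alternative to the activate/deactivate dance is `conda run`, which executes a command inside a named environment without touching the current shell. A sketch that pairs test files with envs; the env names come from the question, the mapping itself is hypothetical, and the script only prints the commands (drop the `echo` to execute them):

```shell
#!/usr/bin/env bash
# Map each test file to the conda env it needs; emit the command to run.
cmd_for() {
  case "$1" in
    tests/test1.py) env=env1 ;;
    tests/test2.py) env=env2 ;;
    *) echo "no env mapped for $1" >&2; return 1 ;;
  esac
  # `conda run -n` works in non-interactive shells, unlike `conda activate`
  echo "conda run -n $env python -m pytest $1"
}

cmd_for tests/test1.py
cmd_for tests/test2.py
```

This keeps the pairing in one place; a pytest-native route would instead need a plugin or per-env `tox`/`nox` sessions.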
77,018,288 | 6,099,211 | sqlalchemy: how to customize standard type like DateTime() param binding processing for dialect? | <p>Given the following snippet</p>
<pre><code>t = Table(
    "foo",
    MetaData(),
    Column("bar", DateTime()),
)

engine.execute(t.insert((datetime(1900, 1, 1),)))
engine.execute(t.insert(("1900-01-01",)))
</code></pre>
<p>The last statement works well for PostgreSQL but fails for Spark, e.g.:</p>
<pre><code>Cannot safely cast 'bar': string to timestamp
[SQL: INSERT INTO TABLE `foo` VALUES (%(bar)s)]
[parameters: {'bar': '1900-01-01'}]
</code></pre>
<p>I can manage it with custom type like</p>
<pre><code>class MyDateTime(TypeDecorator):
    impl = DateTime

    def process_bind_param(self, value, dialect):
        if dialect.name == "hive" and isinstance(value, str):
            return datetime.strptime(value, "%Y-%m-%d")
        return value

t = Table(
    "foo",
    MetaData(),
    Column("bar", MyDateTime()),
)
</code></pre>
<p>but solution</p>
<ul>
<li>it seems hacky, since it directly checks the dialect name</li>
<li>I need to customize the existing type for the dialect, not implement a new one, because our code base already uses the <code>DateTime</code> type</li>
</ul>
<p>Is there a way in SQLAlchemy to customize an existing type per dialect?</p>
| <python><apache-spark-sql><sqlalchemy> | 2023-08-31 18:03:18 | 2 | 1,200 | Anton Ovsyannikov |
77,018,174 | 9,318,372 | Is it OK to have a mutable default argument if it's annotated as immutable? | <p>It is generally, for good reason, considered unsafe to use mutable default arguments in python. On the other hand, it is quite annoying to always have to wrap everything in <code>Optional</code> and do the little unpacking dance at the start of the function.</p>
<p>In the situation when one wants to allow passing <code>**kwargs</code> to a subfunction, it appears there is an alternative option:</p>
<pre class="lang-py prettyprint-override"><code>def foo(
    x: int,
    subfunc_args: Sequence[Any] = (),
    subfunc_kwargs: Mapping[str, Any] = {},
) -> R:
    ...
    subfunc(*subfunc_args, **subfunc_kwargs)
</code></pre>
<p>Obviously, <code>{}</code> is a mutable default argument and hence considered unsafe. HOWEVER, since <code>subfunc_kwargs</code> is annotated as <code>Mapping</code>, and not <code>dict</code> or <code>MutableMapping</code>, a type-checker would raise an error if we do end up mutating.</p>
<p><strong>The question is: would this be considered OK to do, or still a horrible idea?</strong></p>
<p>It would be really nice not having to do the little <code>subfunc_kwargs = {} if subfunc_kwargs is None else subfunc_kwargs</code> dance and having neater signatures.</p>
<p><strong>Note:</strong> <code>**subfunc_kwargs</code> is not an option since this potentially clashes with other keys and leads to issues if the kwargs of <code>subfunc</code> get changed.</p>
| <python><default-parameters> | 2023-08-31 17:42:00 | 2 | 1,721 | Hyperplane |
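If runtime enforcement is wanted on top of the type-checker guarantee, `types.MappingProxyType` gives a default that is immutable in fact, not just in annotation. A sketch (the `_NO_KWARGS` sentinel name is made up):

```python
from types import MappingProxyType
from typing import Any, Mapping, Sequence

# A read-only shared default: mutation raises TypeError at runtime.
_NO_KWARGS: Mapping[str, Any] = MappingProxyType({})

def foo(x: int,
        subfunc_args: Sequence[Any] = (),
        subfunc_kwargs: Mapping[str, Any] = _NO_KWARGS) -> tuple:
    # Even code that ignores the annotations cannot corrupt the default.
    return (x, list(subfunc_args), dict(subfunc_kwargs))

print(foo(1))
print(foo(2, (3,), {"a": 4}))
```

This keeps the tidy signature while closing the gap for callers who never run a type checker.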
77,018,149 | 38,975 | How to read and store vector (List[float]) in Dask DataFrame? | <p>I am trying to have a "vector" column in a Dask DataFrame, built from a large np.array of vectors (at this point it is a 500k × 1536 array).</p>
<p>With Pandas DataFrame code would look something like this:</p>
<pre><code>import pandas as pd
import numpy as np

vectors = np.array([
    np.array([1, 2, 3]),
    np.array([4, 5, 6]),
    np.array([7, 8, 9])
])

df = pd.DataFrame({
    "vector": vectors.tolist()
})
df
</code></pre>
<p>The resulting df structure looks good. However, it takes 34 GB of memory just to load.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>vector</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>[1, 2, 3]</td>
</tr>
<tr>
<td>1</td>
<td>[4, 5, 6]</td>
</tr>
<tr>
<td>2</td>
<td>[7, 8, 9]</td>
</tr>
</tbody>
</table>
</div>
<p>I tried a few options:</p>
<p><strong>Option #1</strong></p>
<pre><code>import dask.dataframe as dd
import dask.array as da
import numpy as np

vectors = np.array([
    np.array([1, 2, 3]),
    np.array([4, 5, 6]),
    np.array([7, 8, 9])
])

vectors = da.from_array(vectors)
df = dd.from_dask_array(vectors)
df
</code></pre>
</code></pre>
<p>This one results in a df where each element of the vector has its own column.</p>
<p><strong>Option #2</strong></p>
<pre><code>import dask.dataframe as dd
import dask.array as da
import numpy as np

# vectors = np.load(dataset_path / "vectors.npy")
vectors = np.array([
    np.array([1, 2, 3]),
    np.array([4, 5, 6]),
    np.array([7, 8, 9])
])

df = dd.from_dask_array(da.from_array(vectors))
columns_to_drop = df.columns.tolist()
df["vector"] = df.apply(lambda row: tuple(row), axis=1, meta=(None, 'object'))
df = df.drop(columns=columns_to_drop)
df
</code></pre>
<p>This one produces correct results but looks cumbersome and is probably not efficient.</p>
| <python><pandas><dataframe><dask> | 2023-08-31 17:37:42 | 3 | 26,818 | Mike Chaliy |
77,018,145 | 1,708,550 | python3 -m pip: Unverified HTTPS request is being made to host 'pypi.org' | <p>I am running <code>python3</code>:</p>
<pre><code>python3 --version
Python 3.9.2
</code></pre>
<p>I was going to update local <code>pip</code> packages and for that I first checked which were outdated:</p>
<pre><code>python3 -m pip list --outdated
</code></pre>
<p>This produced a warning message for each specific package (they are all the same so I am showing only a sample):</p>
<pre><code>/usr/share/python-wheels/urllib3-1.26.5-py2.py3-none-any.whl/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pypi.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/usr/share/python-wheels/urllib3-1.26.5-py2.py3-none-any.whl/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pypi.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/usr/share/python-wheels/urllib3-1.26.5-py2.py3-none-any.whl/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pypi.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/usr/share/python-wheels/urllib3-1.26.5-py2.py3-none-any.whl/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pypi.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/usr/share/python-wheels/urllib3-1.26.5-py2.py3-none-any.whl/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pypi.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
</code></pre>
<p>At the end this produced a list of packages, but I want to understand why <code>pip</code> is showing this behavior.</p>
<p>I checked the <a href="https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings" rel="nofollow noreferrer">recommended page</a> in the warning, but it does not provide any useful information for my case, as it is focused on programming use cases which use the <code>urllib3</code> and <code>requests</code> packages directly.</p>
<p>There are suggestions to ignore this warning (with all the perils it entails) with a command-line argument passed to <code>pip</code>.</p>
<p>I have no idea why this warning started appearing.</p>
| <python><pip><urllib3> | 2023-08-31 17:37:22 | 1 | 1,848 | Blitzkoder |
77,018,079 | 9,942,944 | Query parent objects and filter child lists without excluding parents | <p>I managed to query a list of <code>Parent</code> objects while at the same time filtering each parent's list of children using <code>contains_eager()</code>.</p>
<pre><code>stmt = select(Parent)
stmt = stmt.outerjoin(Parent.child)
stmt = stmt.filter(Child.a == None)
stmt = stmt.options(contains_eager(Parent.child))
parents = session.scalars(stmt).unique().all()
</code></pre>
<p>For each <code>Parent</code> in my <code>parents</code> result set, the list of children is filtered by <code>a == None</code> which is what I want.</p>
<p>However, the <code>parents</code> result set only contains <code>Parents</code> where at least one child with <code>a == None</code> exists.</p>
<p>What I want to achieve is that <code>parents</code> are not filtered at all. If none of the parent's children meets the condition, I still want the <code>Parent</code> to be in the result set with an empty list of children.</p>
<p>Is this possible with <code>contains_eager()</code>? Are there alternative approaches, like <code>subquery()</code>? (I tried that but couldn't get it to work.)</p>
| <python><join><select><sqlalchemy><orm> | 2023-08-31 17:25:42 | 0 | 919 | Peter Petrus |
77,018,045 | 8,378,817 | How to load pdf files from Azure Blob Storage with LangChain PyPDFLoader | <p>I am currently trying to implement LangChain functionality to talk with PDF documents.
I have a bunch of PDF files stored in Azure Blob Storage, and I am trying to use the LangChain PyPDFLoader to load them into an Azure ML notebook. However, I have not been able to get it working. With a PDF stored locally it is no problem, but to scale up I have to connect to the blob store. I have not found anything useful in the LangChain or Azure documentation. I wonder if any of you have run into a similar problem.</p>
<p>Thank you</p>
<p>Below is an example of code i am trying:</p>
<pre><code>from azureml.fsspec import AzureMachineLearningFileSystem
fs = AzureMachineLearningFileSystem("<path to datastore>")
from langchain.document_loaders import PyPDFLoader
with fs.open('*/.../file.pdf', 'rb') as fd:
    loader = PyPDFLoader(fd)
    data = loader.load()

# Error: TypeError: expected str, bytes or os.PathLike object, not StreamInfoFileObject
</code></pre>
<p>Another example tried:</p>
<pre><code>from langchain.document_loaders import UnstructuredFileLoader
with fs.open('*/.../file.pdf', 'rb') as fd:
    loader = UnstructuredFileLoader(fd)
    documents = loader.load()

# Error: TypeError: expected str, bytes or os.PathLike object, not StreamInfoFileObject
</code></pre>
| <python><azure><azure-machine-learning-service><langchain><azureml-python-sdk> | 2023-08-31 17:19:50 | 1 | 365 | stackword_0 |
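The error itself is the clue: `PyPDFLoader` expects a filesystem path, while `fs.open` yields a stream object. A hedged workaround is to spool the stream to a local temp file and hand that path to the loader. `load_via_temp_file` and `EchoLoader` are made-up names for this sketch; with the real libraries you would pass `PyPDFLoader` as `loader_cls` and `lambda: fs.open(remote_path, 'rb')` as `open_remote`:

```python
import io
import tempfile

def load_via_temp_file(open_remote, loader_cls, suffix=".pdf"):
    """Spool a remote binary stream to a local temp file, then hand the
    path to a loader that insists on a filesystem path."""
    with open_remote() as src, tempfile.NamedTemporaryFile(
        suffix=suffix, delete=False
    ) as dst:
        dst.write(src.read())
        path = dst.name
    return loader_cls(path).load()

# Stand-ins for the demo; in real use open_remote would wrap fs.open(...)
# and loader_cls would be PyPDFLoader.
class EchoLoader:
    def __init__(self, path):
        self.path = path
    def load(self):
        with open(self.path, "rb") as f:
            return f.read()

docs = load_via_temp_file(lambda: io.BytesIO(b"%PDF-demo"), EchoLoader)
print(docs)
```

The temp-file copy costs one extra write per document but scales to any loader that takes a path, and the temp files can be cleaned up afterwards.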
77,017,948 | 12,466,687 | How to filter duplicates based on multiple columns in Polars? | <p>I was earlier able to <strong>filter duplicates</strong> based on <strong>multiple columns</strong> using <code>df.filter(pl.col(['A','C']).is_duplicated())</code> but after the latest version update this is not working.</p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "A": [1, 4, 4, 7, 7, 10, 10, 13, 16],
        "B": [2, 5, 5, 8, 18, 11, 11, 14, 17],
        "C": [3, 6, 6, 9, 9, 12, 12, 15, 18]
    }
)
</code></pre>
<pre><code>df.filter(pl.col(['A','C']).is_duplicated())
</code></pre>
<p>This raises an error.</p>
<pre><code>df.filter(
    df.select(
        pl.col(['A','C']).is_duplicated()
    )
)
</code></pre>
<p>This also raises an error.</p>
| <python><python-polars> | 2023-08-31 17:05:48 | 2 | 2,357 | ViSa |
77,017,910 | 10,909,217 | Why does the sum function not work with pyarrow fields? | <p>Consider the following list of fields.</p>
<pre><code>import pyarrow.dataset as ds
fields = [ds.field('a'), ds.field('b'), ds.field('c')]
</code></pre>
<p><code>sum(fields)</code> produces the following error:</p>
<pre><code>TypeError: Argument 'self' has incorrect type (expected pyarrow._compute.Expression, got int)
</code></pre>
<p>However, interestingly, <code>functools.reduce</code> gets the job done.</p>
<pre><code>>>> import functools
>>> import operator
>>> functools.reduce(operator.add, fields)
<pyarrow.compute.Expression add_checked(add_checked(a, b), c)>
</code></pre>
<p>What's going on here?</p>
| <python><pyarrow> | 2023-08-31 16:59:09 | 1 | 1,290 | actual_panda |
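The built-in `sum` starts from the integer `0`, so the first operation it attempts is `0 + Expression`; pyarrow's compiled `__add__` rejects a non-Expression operand, which is where the "got int" message comes from. The same failure mode can be reproduced with a plain stand-in class, along with two fixes that avoid the integer start value:

```python
import functools
import operator

class Expr:
    """Stand-in for pyarrow's Expression: only Expr + Expr is defined."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        if not isinstance(other, Expr):
            return NotImplemented  # so 0 + Expr raises TypeError, like pyarrow
        return Expr(f"({self.name} + {other.name})")

fields = [Expr("a"), Expr("b"), Expr("c")]

# sum(fields) would fail just like the pyarrow case: it computes 0 + fields[0].
combined = functools.reduce(operator.add, fields)   # no start value at all
explicit = sum(fields[1:], start=fields[0])          # or give sum a real start
print(combined.name, explicit.name)
```

Both spellings should also work with real `pyarrow` expressions, since each only ever adds two Expression objects together.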
77,017,856 | 5,561,472 | How to run transaction on every document in collection in Firestore | <p>I need to update the documents in a collection, but I would like to run a transaction for every document:</p>
<pre class="lang-py prettyprint-override"><code>async def example():
    @google.cloud.firestore.async_transactional
    async def update_in_transaction(transaction: AsyncTransaction, city_ref, data):
        transaction.get(city_ref)
        transaction.update(city_ref, data)

    db: google.cloud.firestore.AsyncClient = AsyncClient()
    await db.collection("1").document("1").set({"a": 1})
    await db.collection("1").document("2").set({"a": 2})

    docs = db.collection("1").stream()
    async for doc in docs:
        transaction = db.transaction()
        await update_in_transaction(transaction, doc, {"a": 3})

asyncio.run(example())
</code></pre>
<p>But I think I read each document twice under this approach: the first time in the <code>async for</code> loop and the second in <code>get()</code>.</p>
<p>Is there any way to avoid the excessive read in my case?</p>
| <python><google-cloud-firestore> | 2023-08-31 16:52:23 | 0 | 6,639 | Andrey |