| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
77,929,930
| 14,256,643
|
Django rest how to show hierarchical data in api response
|
<p>I am getting the category id in the GET response, but I want to show the text in a hierarchical structure like child1 -> child -> child3.</p>
<p>My expected response should look like this:</p>
<pre><code>{
'category': 'child1 -> child -> child3'
}
</code></pre>
<p>Currently I am getting a response like this:</p>
<pre><code>{
'category': 1
}
</code></pre>
<p>Here is how my model shows the category name in hierarchical order:
<a href="https://i.sstatic.net/HEGAT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HEGAT.png" alt="enter image description here" /></a></p>
<p>My category model:</p>
<pre><code>class Product_category(models.Model):
domain = models.CharField(max_length=250)
parent_category = models.ForeignKey('self',on_delete=models.CASCADE,null=True, blank=True)
category_name = models.CharField(max_length=200,null=True,blank=True,)
category_icon = models.ImageField(upload_to=dynamic_icon_upload_path,blank=True,null=True)
def get_hierarchical_name(self):
if self.parent_category:
return f"{self.parent_category.get_hierarchical_name()}->{self.category_name}"
else:
return f"{self.category_name}"
def __str__(self):
return self.get_hierarchical_name()
</code></pre>
<p>My product model:</p>
<pre><code>class Product(models.Model):
title = models.CharField(max_length=200,blank=True,null=True)
category = models.ForeignKey(Product_category,blank=True,null=True,on_delete=models.CASCADE)
</code></pre>
<p>My serializer:</p>
<pre><code>class ParentProductSerializer(serializers.ModelSerializer):
class Meta:
model = Product
fields = ['category',...others product fields]
</code></pre>
<p>See the screenshot: right now I am getting the id instead of the hierarchical text.
<a href="https://i.sstatic.net/Bve7C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bve7C.png" alt="enter image description here" /></a></p>
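The recursion in `get_hierarchical_name()` can be exercised without Django at all. A minimal stand-in (plain classes, names mirroring the question) shows the string the model method builds:

```python
# Minimal Django-free stand-in for the model's recursive
# get_hierarchical_name(); attribute names and "->" separator
# mirror the Product_category model in the question.
class Category:
    def __init__(self, category_name, parent_category=None):
        self.category_name = category_name
        self.parent_category = parent_category

    def get_hierarchical_name(self):
        # Walk up through parents, joining names with "->".
        if self.parent_category:
            return f"{self.parent_category.get_hierarchical_name()}->{self.category_name}"
        return self.category_name

root = Category("child1")
mid = Category("child", root)
leaf = Category("child3", mid)
```

On the serializer side, one hedged option (assuming the model's `__str__` stays as posted) could be `serializers.StringRelatedField()` for the `category` field, since it serializes the related object via `str()`, which here delegates to `get_hierarchical_name()`.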
|
<python><python-3.x><django><django-models><django-rest-framework>
|
2024-02-02 21:28:38
| 1
| 1,647
|
boyenec
|
77,929,691
| 4,134,881
|
Why doesn't this simple while loop work with sympy?
|
<p>Consider the following commands</p>
<pre class="lang-py prettyprint-override"><code>from sympy import init_session()
init_session
def test(v, th):
va = v
val = 5*ln((2*(10**4))/(2*(10**4)+va-10))
while (va-val > th):
va=va-0.01
val = 5*ln((2*10**4)/(2*10**4+va-10))
return val
</code></pre>
<p>Mathematically, we have the equation</p>
<p><code>va=5*ln((2*(10**4))/(2*(10**4)+va-10))</code></p>
<p>and we are trying, in a very rudimentary way, to find <code>va</code> iteratively.</p>
<p>I am not worried about the math. I don't understand, however, why the above code does not run.</p>
<p><a href="https://i.sstatic.net/JRj8Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JRj8Z.png" alt="enter image description here" /></a></p>
<p>In <code>.pyenv/versions/3.12.0/lib/python3.12/site-packages/sympy/testing/runtests.py", line 545, in _test</code></p>
<p><a href="https://i.sstatic.net/MdwVz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MdwVz.png" alt="enter image description here" /></a></p>
<p>and in <code>.pyenv/versions/3.12.0/lib/python3.12/site-packages/sympy/testing/runtests.py", line 134, in convert_to_native_paths</code></p>
<p><a href="https://i.sstatic.net/4OjN2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4OjN2.png" alt="enter image description here" /></a></p>
|
<python><while-loop><sympy>
|
2024-02-02 20:27:27
| 1
| 3,514
|
xoux
|
77,929,667
| 386,861
|
Trying to understand Bayes theory with direct application
|
<p>I'm trying to apply Bayes theorem to a problem of my own so I can understand the methodology and how to set up the numbers.</p>
<p>Essentially, I've got some data on four artists and the number of objects that they make every month over 80 periods.</p>
<p>I'm interested in 'h' and believe there are three possibilities - equally likely - for the last update: 1) Have left work, 2) Have been promoted and have split time between making and managing others, 3) Have been working on a project.</p>
<p>I've used Allen Downey's Think Bayes code to work through the process.</p>
<pre><code>from empiricaldist import Pmf
# Define hypotheses
hypos = ["left", "managing", "project"]
# Define prior probabilities - they are all equally likely
prior = Pmf(1/len(hypos), hypos)
# Display prior probabilities
print("Prior probabilities:")
print(prior)
</code></pre>
<p>The result:</p>
<pre><code>Prior probabilities:
left 0.333333
managing 0.333333
project 0.333333
dtype: float64
</code></pre>
<p>Code:</p>
<pre><code># Normalize the data to calculate the likelihoods
normalized_data = df.div(df.sum(axis=1), axis=0)
</code></pre>
<p>Normalized Data:</p>
<pre><code> h j n t
0 0.000000 1.000000 0.000000 0.0
1 0.666667 0.333333 0.000000 0.0
2 0.571429 0.428571 0.000000 0.0
3 0.769231 0.230769 0.000000 0.0
4 0.700000 0.300000 0.000000 0.0
</code></pre>
<p>Now I get confused.</p>
<pre><code>from empiricaldist import Pmf
# Define hypotheses
hypos = ["left", "managing", "project"]
# Calculate the likelihoods using the normalized data
likelihoods = {}
for hypo in hypos:
likelihoods[hypo] = normalized_data.apply(lambda row: row if hypo == "left" else 1, axis=1)
# Perform Bayesian update to obtain the posterior probabilities
posterior_history = [prior]
for hypo in hypos:
posterior = prior.copy() # Create a copy of the prior probabilities
if hypo == "left":
# Ensure alignment of row labels and perform element-wise multiplication
for index, row in likelihoods[hypo].iterrows():
if index in posterior.index:
posterior.loc[index] *= row
posterior /= posterior.sum() # Normalize the posterior probabilities
posterior_history.append(posterior)
</code></pre>
<p>The output is:</p>
<pre><code>[left 0.333333
managing 0.333333
project 0.333333
dtype: float64,
left 0.333333
managing 0.333333
project 0.333333
dtype: float64,
left 0.333333
managing 0.333333
project 0.333333
dtype: float64,
left 0.333333
managing 0.333333
project 0.333333
dtype: float64]
</code></pre>
<p>I was confused by the output for two reasons: 1) the posterior is the same as the prior, and 2) there are four outputs.</p>
<p>Maybe I'm over-complicating this and should just update the values just using the one column of the normalized data.</p>
<p>What can I try next?</p>
<p>I've created a dict of the data, data_to_dict, thus:</p>
<pre><code>{'h': {0: 0.0,
1: 2.0,
2: 4.0,
3: 10.0,
4: 7.0,
5: 6.0,
6: 4.0,
7: 10.0,
8: 11.0,
9: 3.0,
10: 4.0,
11: 6.0,
12: 3.0,
13: 4.0,
14: 8.0,
15: 9.0,
16: 6.0,
17: 5.0,
18: 6.0,
19: 5.0,
20: 4.0,
21: 1.0,
22: 3.0,
23: 4.0,
24: 0.0,
25: 2.0,
26: 6.0,
27: 4.0,
28: 8.0,
29: 2.0,
30: 4.0,
31: 2.0,
32: 2.0,
33: 3.0,
34: 2.0,
35: 3.0,
36: 2.0,
37: 3.0,
38: 3.0,
39: 1.0,
40: 4.0,
41: 2.0,
42: 1.0,
43: 3.0,
44: 3.0,
45: 1.0,
46: 1.0,
47: 1.0,
48: 5.0,
49: 2.0,
50: 2.0,
51: 4.0,
52: 4.0,
53: 2.0,
54: 3.0,
55: 4.0,
56: 2.0,
57: 2.0,
58: 1.0,
59: 4.0,
60: 3.0,
61: 3.0,
62: 3.0,
63: 1.0,
64: 3.0,
65: 2.0,
66: 2.0,
67: 4.0,
68: 2.0,
69: 2.0,
70: 1.0,
71: 0.0,
72: 5.0,
73: 0.0,
74: 3.0,
75: 3.0,
76: 2.0,
77: 2.0,
78: 2.0,
79: 4.0,
80: 1.0,
81: 2.0,
82: 0.0},
'j': {0: 2.0,
1: 1.0,
2: 3.0,
3: 3.0,
4: 3.0,
5: 2.0,
6: 1.0,
7: 9.0,
8: 7.0,
9: 4.0,
10: 0.0,
11: 3.0,
12: 6.0,
13: 2.0,
14: 5.0,
15: 4.0,
16: 1.0,
17: 2.0,
18: 2.0,
19: 3.0,
20: 6.0,
21: 6.0,
22: 3.0,
23: 4.0,
24: 5.0,
25: 3.0,
26: 2.0,
27: 1.0,
28: 4.0,
29: 0.0,
30: 1.0,
31: 0.0,
32: 0.0,
33: 2.0,
34: 2.0,
35: 1.0,
36: 0.0,
37: 4.0,
38: 2.0,
39: 0.0,
40: 0.0,
41: 2.0,
42: 2.0,
43: 1.0,
44: 2.0,
45: 1.0,
46: 1.0,
47: 2.0,
48: 0.0,
49: 1.0,
50: 1.0,
51: 2.0,
52: 0.0,
53: 0.0,
54: 0.0,
55: 1.0,
56: 2.0,
57: 1.0,
58: 0.0,
59: 1.0,
60: 0.0,
61: 1.0,
62: 1.0,
63: 1.0,
64: 2.0,
65: 0.0,
66: 2.0,
67: 2.0,
68: 5.0,
69: 1.0,
70: 2.0,
71: 2.0,
72: 3.0,
73: 0.0,
74: 3.0,
75: 0.0,
76: 1.0,
77: 2.0,
78: 5.0,
79: 3.0,
80: 1.0,
81: 4.0,
82: 2.0},
'n': {0: 0.0,
1: 0.0,
2: 0.0,
3: 0.0,
4: 0.0,
5: 0.0,
6: 0.0,
7: 0.0,
8: 0.0,
9: 0.0,
10: 0.0,
11: 0.0,
12: 0.0,
13: 0.0,
14: 0.0,
15: 0.0,
16: 0.0,
17: 0.0,
18: 0.0,
19: 0.0,
20: 0.0,
21: 0.0,
22: 0.0,
23: 0.0,
24: 0.0,
25: 0.0,
26: 0.0,
27: 0.0,
28: 0.0,
29: 0.0,
30: 0.0,
31: 0.0,
32: 0.0,
33: 0.0,
34: 0.0,
35: 0.0,
36: 0.0,
37: 0.0,
38: 0.0,
39: 0.0,
40: 0.0,
41: 0.0,
42: 0.0,
43: 0.0,
44: 0.0,
45: 0.0,
46: 0.0,
47: 0.0,
48: 0.0,
49: 0.0,
50: 0.0,
51: 0.0,
52: 0.0,
53: 0.0,
54: 0.0,
55: 0.0,
56: 0.0,
57: 0.0,
58: 0.0,
59: 0.0,
60: 0.0,
61: 0.0,
62: 0.0,
63: 0.0,
64: 0.0,
65: 0.0,
66: 0.0,
67: 0.0,
68: 0.0,
69: 0.0,
70: 0.0,
71: 0.0,
72: 0.0,
73: 1.0,
74: 3.0,
75: 6.0,
76: 8.0,
77: 2.0,
78: 3.0,
79: 2.0,
80: 2.0,
81: 5.0,
82: 2.0},
't': {0: 0.0,
1: 0.0,
2: 0.0,
3: 0.0,
4: 0.0,
5: 0.0,
6: 0.0,
7: 0.0,
8: 6.0,
9: 3.0,
10: 4.0,
11: 8.0,
12: 2.0,
13: 5.0,
14: 5.0,
15: 3.0,
16: 7.0,
17: 3.0,
18: 4.0,
19: 2.0,
20: 5.0,
21: 1.0,
22: 2.0,
23: 2.0,
24: 2.0,
25: 1.0,
26: 1.0,
27: 6.0,
28: 4.0,
29: 5.0,
30: 2.0,
31: 3.0,
32: 6.0,
33: 1.0,
34: 2.0,
35: 1.0,
36: 2.0,
37: 1.0,
38: 2.0,
39: 1.0,
40: 0.0,
41: 2.0,
42: 2.0,
43: 2.0,
44: 2.0,
45: 2.0,
46: 3.0,
47: 0.0,
48: 2.0,
49: 5.0,
50: 3.0,
51: 4.0,
52: 0.0,
53: 1.0,
54: 1.0,
55: 0.0,
56: 3.0,
57: 1.0,
58: 1.0,
59: 0.0,
60: 1.0,
61: 1.0,
62: 1.0,
63: 2.0,
64: 0.0,
65: 1.0,
66: 1.0,
67: 0.0,
68: 0.0,
69: 0.0,
70: 0.0,
71: 0.0,
72: 0.0,
73: 0.0,
74: 0.0,
75: 0.0,
76: 0.0,
77: 0.0,
78: 0.0,
79: 0.0,
80: 0.0,
81: 0.0,
82: 0.0}}
df = pd.DataFrame(data_to_dict)
</code></pre>
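The flat posteriors arise because the likelihood ends up being effectively constant across hypotheses, so every update multiplies the prior by the same factor; the four entries in `posterior_history` come from seeding the list with the prior and then appending once per hypothesis. A hedged, library-free sketch of what a discriminating update looks like (the likelihood numbers below are invented purely for illustration):

```python
# Hedged sketch of a Bayesian update with plain dicts; the
# hypothesis-specific likelihood values are made-up assumptions.
hypos = ["left", "managing", "project"]
prior = {h: 1 / len(hypos) for h in hypos}

# Illustrative likelihoods of the observed output share under each
# hypothesis -- in practice these come from a model of the data.
likelihood = {"left": 0.05, "managing": 0.5, "project": 0.3}

# Bayes update: posterior is proportional to prior * likelihood.
unnorm = {h: prior[h] * likelihood[h] for h in hypos}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
```

Once the likelihoods actually differ across hypotheses, the posterior moves away from the uniform prior after a single update.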
|
<python><pandas><bayesian>
|
2024-02-02 20:22:50
| 0
| 7,882
|
elksie5000
|
77,929,620
| 1,058,090
|
Could not switch to new window and verify content using Selenium python
|
<p>I am automating a SaaS application using Selenium with Python, and I need to verify the content of a new window after clicking a button. Below is a screenshot of the window.</p>
<p><a href="https://i.sstatic.net/qu5Zt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qu5Zt.png" alt="enter image description here" /></a></p>
<p>In the image, I am clicking on the Printable Version button; the new window is the one with the title about:blank - Google Chrome.</p>
<p>I tried the code below, but none of it worked:</p>
<pre><code>all_windows = driver.window_handles
driver.switch_to.window(all_windows[1])
</code></pre>
<p>I also tried this:</p>
<pre><code>driver.switch_to.window(driver.window_handles[1])
</code></pre>
<p>I even tried to loop over the handles, but that did not work either:</p>
<pre><code>all_windows = driver.window_handles
for window_handle in all_windows:
if window_handle != driver.current_window_handle:
driver.switch_to.window(window_handle)
break
</code></pre>
<p>All of these freeze when trying to switch to the newly opened window; the system times out without executing any other code.</p>
<p>Any idea will be helpful.</p>
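One common fix (an assumption here, since the freeze may have other causes) is to wait until the new handle actually exists before switching, e.g. Selenium's `WebDriverWait` with `expected_conditions.number_of_windows_to_be(2)`. The underlying poll-until-true pattern can be sketched without a browser:

```python
import time

# Browser-free sketch of the poll-until-condition pattern that
# WebDriverWait implements: call `condition` repeatedly until it
# returns a truthy value or the timeout elapses.
def wait_for(condition, timeout=5.0, poll=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within timeout")

# Hypothetical stand-in for driver.window_handles growing to 2 entries
# once the Printable Version window has finished opening.
handles = ["w1"]

def second_window_open():
    handles.append("w2")          # simulate the new window appearing
    return len(handles) >= 2
```

With a real driver, the condition would be `lambda: len(driver.window_handles) >= 2`, and only after it passes would the code switch to the new handle.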
|
<python><selenium-webdriver>
|
2024-02-02 20:09:55
| 1
| 534
|
Vincent
|
77,929,607
| 816,566
|
How can I reduce the namespace qualifiers When using inner Enums?
|
<p>I'm using Python 3.10.12.
When I use an inner <code>Enum</code>, my code has many redundant namespace qualifiers, and they require a lot of horizontal space. How can I reduce them? For example:</p>
<pre><code>class Outer:
class InnerEnum(Enum):
INNER_VALUE_ONE = 1
INNER_VALUE_TWO = 2
def outer_method(self):
# I'd prefer "some_local: InnerEnum = INNER_VALUE"
some_local: Outer.InnerEnum = Outer.InnerEnum.INNER_VALUE_ONE # Can I reduce the namespace qualifiers?
</code></pre>
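One hedged option is to bind a short local alias to the inner enum class inside the method; the alias is an ordinary local name, so members need only one qualifier:

```python
from enum import Enum

class Outer:
    class InnerEnum(Enum):
        INNER_VALUE_ONE = 1
        INNER_VALUE_TWO = 2

    def outer_method(self):
        # Alias the enum class locally; "E.INNER_VALUE_ONE" replaces
        # "Outer.InnerEnum.INNER_VALUE_ONE". The annotation is quoted
        # because Outer is still being defined at this point.
        E = Outer.InnerEnum
        some_local: "Outer.InnerEnum" = E.INNER_VALUE_ONE
        return some_local
```

A module-level alias after the class body (`InnerEnum = Outer.InnerEnum`) achieves the same shortening for annotations as well.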
|
<python><enums>
|
2024-02-02 20:06:13
| 1
| 1,641
|
Charlweed
|
77,929,603
| 7,259,176
|
Element-wise comparison of SymPy Matrices
|
<p>Is there a convenient way to compare two equal sized SymPy matrices element-wise?</p>
<p>Here's what I'm attempting:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Matrix
def element_wise_matrix_comparison(matrix1, matrix2):
result_matrix = Matrix([[matrix1[i, j] == matrix2[i, j] for j in range(matrix1.shape[1])] for i in range(matrix1.shape[0])])
return result_matrix
</code></pre>
<p>While this code works as intended, I wonder if there's a more concise or efficient way to perform this task within SymPy's framework.</p>
<p>Ideally, the comparison should return a boolean Matrix/list/np.array...</p>
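The nested comprehension can also be written over rows rather than index pairs; a hedged, library-free sketch (plain nested lists standing in for the SymPy matrices) that yields the boolean matrix:

```python
# Library-free sketch: zip the two matrices row by row, then entry by
# entry, yielding a nested list of booleans with the same shape as the
# element_wise_matrix_comparison result in the question.
def element_wise_compare(m1, m2):
    return [[a == b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

result = element_wise_compare([[1, 2], [3, 4]], [[1, 0], [3, 4]])
```

SymPy `Matrix` objects are iterable row-wise via `.tolist()`, so the same helper could be applied to `matrix1.tolist()` and `matrix2.tolist()`.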
|
<python><sympy>
|
2024-02-02 20:04:46
| 3
| 2,182
|
upe
|
77,929,560
| 590,552
|
Python Loading JSON config file on __init__
|
<p>I am new to Python and I am trying to create a class that handles loading my config values.
So far, this is what I have:</p>
<pre><code>import os, json
class ConfigHandler:
CONFIG_FILE = "/config.json"
def __init__(self):
print("init config")
if os.path.exists(self.CONFIG_FILE):
print("Loading config file")
self.__dict__ = json.load(open(self.CONFIG_FILE))
else:
print("Config file not loaded")
</code></pre>
<p>Then in my main application class, I am doing :</p>
<pre><code>from ConfigHandler import ConfigHandler
class MainApp:
config = ???
</code></pre>
<p>I have tried several ways to get the ConfigHandler to initialize, but it never hits the print statements, and I am not sure how to call the ConfigHandler to get the dictionary.
Any help is appreciated!</p>
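Two things worth noting: `__init__` only runs when the class is instantiated (so the class attribute needs `config = ConfigHandler()`, with parentheses), and `"/config.json"` points at the filesystem root, which usually isn't where the file lives. A hedged sketch with the path as a constructor argument, demonstrated against a throwaway file:

```python
import json
import os
import tempfile

# Hedged sketch: __init__ never runs until the class is instantiated,
# and the config path is a parameter instead of the root-level
# "/config.json" from the question.
class ConfigHandler:
    def __init__(self, config_file="config.json"):
        self.values = {}                       # safe default if missing
        if os.path.exists(config_file):
            with open(config_file) as f:
                self.values = json.load(f)

# Demonstrate with a temporary config file.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    with open(path, "w") as f:
        json.dump({"debug": True}, f)
    loaded = ConfigHandler(path).values
```

In the main application, `config = ConfigHandler()` inside `MainApp` would then run the loader at class-definition time and expose the dictionary as `MainApp.config.values`.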
|
<python><json><python-3.x>
|
2024-02-02 19:54:39
| 2
| 312
|
evolmonster
|
77,929,554
| 1,316,702
|
databricks - how to debug parameters when calling function
|
<p>In Databricks, I have 2 notebooks. One notebook holds a function; the other notebook calls the function in the first notebook.</p>
<p>This is how I am calling the function:</p>
<pre><code>from pyspark.sql.functions import col
df_adatposUpload = joined_df.select(
'PATBKG'
,varMyBloombergFunction(col('pctym'), col('ExchCode'), col('BBSymbol'), col('BBYellow'), col('OptCode'), col('YearDigits'), col('WeeklyOptions'), col('psubty'), col('pstrik'), col('admmultstrike')).alias('BBSymbol')
)
</code></pre>
<p>I am receiving a long error, but the bottom line is that it points to line 58 in the function:</p>
<pre><code>PythonException: 'TypeError: object of type 'NoneType' has no len()', from <command-2467610093078830>, line 58
</code></pre>
<p>The line is:</p>
<pre><code>if len(BBSymbol) == 1:
</code></pre>
<p>I think that in this particular case BBSymbol is blank or NULL.</p>
<p>Is there a way to run some kind of debugger where it shows me all the values that are being fed into the function? There are ten parameters for each row of data.</p>
<p>This is how I create the function in the first notebook:</p>
<pre><code>def CreateBloombergSymbol(pctym,ExchCode, BBSymbol, BBYellow, OptCode, YearDigits, WeeklyOptions, psubty, pstrik, admmultstrike):
digitMonth = pctym[4:6]
digitYear1 = pctym[3:4]
digitYear2 = pctym[2:4]
match_dict = {
"01": "F",
"02": "G",
"03": "H",
"04": "J",
"05": "K",
"06": "M",
"07": "N",
"08": "Q",
"09": "U",
"10": "V",
"11": "X",
"12": "Z",
"foo": "missingValue"
}
charMonth = match_dict.get(digitMonth, "foo")
</code></pre>
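A per-row debugger isn't available inside a Spark UDF, so a common workaround is a guard that logs the raw inputs whenever one of them is None, before `len()` can raise. A hedged, Spark-free sketch of that guard (the two-parameter signature is a simplification of the ten-parameter function):

```python
# Spark-free sketch of a defensive UDF body: log the offending inputs
# and bail out on None instead of letting len(None) raise in the
# executor. Only two of the ten parameters are shown for brevity.
def create_symbol_debug(pctym, bbsymbol):
    if pctym is None or bbsymbol is None:
        # In a real UDF this print lands in the executor stderr logs,
        # making the failing row's values visible.
        print(f"bad row: pctym={pctym!r} bbsymbol={bbsymbol!r}")
        return None
    if len(bbsymbol) == 1:
        return pctym[4:6] + bbsymbol   # e.g. month digits + symbol
    return bbsymbol

ok = create_symbol_debug("202403", "A")
bad = create_symbol_debug("202403", None)
```

Wrapping each input this way (or printing all ten at the top of the function) turns the opaque `NoneType has no len()` failure into a log line showing exactly which row and parameter is null.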
|
<python><tags><databricks>
|
2024-02-02 19:53:37
| 1
| 1,277
|
solarissf
|
77,929,475
| 50,385
|
Is it possible to check if an asyncio.Task is blocked vs ready?
|
<p>I'm working on a make-like system written in Python and I want to be able to throttle how many cores are in use for parallel building, similar to the <a href="https://www.gnu.org/software/make/manual/html_node/Parallel.html" rel="nofollow noreferrer"><code>-j</code>/<code>--jobs</code> option supported by GNU make</a>. Each build "job" is an <code>asyncio.Task</code>, and may spawn subprocesses as part of its work. I require jobs to only spawn external processes through a function I provide, so I can track how many external processes are running at any given time, add/subtract from the in-use core count accordingly (for now I naively assume every external process is single threaded), and block waiting for more cores, if necessary, using a semaphore.</p>
<p>Separate from the external processes there is a question of CPython itself. A build job may launch a subprocess asynchronously, in which case 2 cores should be required, one for CPython and one for the external job. However, if the build job launches a subprocess and waits on it, and no other build jobs are running (all other <code>asyncio.Task</code>s are blocked), then only 1 core should be required, since CPython as a whole is blocked.</p>
<p>In other words, since the CPython interpreter is single threaded even when using <code>asyncio.Task</code>s, I think CPython itself should only ever count as consuming either exactly 0 cores (all tasks blocked) or 1 cores (at least 1 task unblocked). However this requires being able to query from within the current task if any other tasks are currently runnable; if not, and we are about to block waiting on a process to complete, we should temporarily decrement the in-use core count since CPython is about to sleep, and then we can increment it back as soon as any task resumes running.</p>
<p>Is this possible with asyncio? I see I can query if a task is finished but I don't see a runnable concept. Do I need my own event loop implementation for this? How would I do it?</p>
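asyncio exposes no public runnable-vs-blocked query on tasks, so one workable (if manual) approach is bookkeeping: each task increments a blocked counter around its waits, and CPython counts as using a core only while some registered task is outside a wait. A hedged sketch of that idea:

```python
import asyncio

# Manual bookkeeping sketch: since asyncio has no "runnable" state,
# each registered task reports when it enters and leaves a blocking
# wait, and the tracker derives the 0-or-1 CPython core count.
class CoreTracker:
    def __init__(self):
        self.total = 0
        self.blocked = 0

    def cpython_cores(self):
        # 1 while any registered task could be running Python code.
        return 1 if self.blocked < self.total else 0

async def job(tracker, delay):
    tracker.total += 1
    tracker.blocked += 1            # entering a wait (e.g. a subprocess)
    await asyncio.sleep(delay)      # stands in for awaiting the process
    tracker.blocked -= 1            # resumed: running Python again

async def main():
    t = CoreTracker()
    task = asyncio.create_task(job(t, 0.05))
    await asyncio.sleep(0.01)       # the job is now parked in its wait
    during = t.cpython_cores()      # every registered task blocked -> 0
    await task
    return during
```

In the real system the increments/decrements would live inside the provided process-spawning function, bracketing the `await` on the subprocess.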
|
<python><async-await><python-asyncio><event-loop>
|
2024-02-02 19:37:35
| 1
| 22,294
|
Joseph Garvin
|
77,929,347
| 1,029,902
|
Insert row from CSV file2 in file1 if row is unique in file1 (according to a common key)
|
<p>I have two CSV files that have the same structure: <code>tops.csv</code> and <code>yesterday.csv</code>. I first sort yesterday.csv by a specific column, then take the row with the highest value in that column. For this row, I check three of its columns (the 3rd, 4th, and 5th) to see if there is a row in tops.csv with the same data in the same columns; that is, I am checking for duplicates in those 3 columns. If a row with that data already exists, the row should be skipped and I proceed to the next highest valid row by the sort column until one is found that does not already exist in <code>tops.csv</code>. Once found, I want to write the row to a CSV called <code>for_email.csv</code> and also append it to <code>tops.csv</code>, where we checked for its existence.</p>
<p>This is the script I have so far with comments for each step</p>
<pre><code>import csv
import pandas as pd
# Replace 'your_file.csv' with the actual path to your CSV file
file_path = 'yesterday.csv'
# Read the CSV file into a DataFrame
df = pd.read_csv(file_path, header=None)
# Sort the DataFrame by last column and then by the 2nd column from the left
df_sorted = df.sort_values(by=[df.columns[-1], df.columns[1]], ascending=[False, True])
# Drop duplicates based on all columns except the first one, keeping the first occurrence
df_sorted_no_duplicates = df_sorted.drop_duplicates(subset=df_sorted.columns[2:])
# Read the 'tops.csv' file to check for existing data
tops_df = pd.read_csv('tops.csv', header=None)
# Initialize a counter for the number of rows written to 'for_email.csv'
rows_written = 0
# Open the 'for_email.csv' file for writing
with open('for_email.csv', 'w', newline='') as for_email_file:
for index, row in df_sorted_no_duplicates.iterrows():
# Check if the third, fourth, and fifth columns of the current row exist in 'tops.csv'
if not tops_df[tops_df.columns[2:]].eq(row.iloc[2:]).all(axis=1).any():
# If not, write the row to 'for_email.csv'
for_email_file.write(','.join(map(str, row)) + '\n')
rows_written += 1
# Break the loop when 3 rows have been written
if rows_written == 1:
break
print(f"{rows_written} rows have been written to 'for_email.csv'")
# Open the 'for_email.csv' file for reading
with open('for_email.csv', 'r') as for_email_file:
# Read the content of 'for_email.csv'
for_email_content = for_email_file.read()
# Append the content of 'for_email.csv' to 'tops.csv'
with open('tops.csv', 'a', newline='') as tops_file:
tops_file.write(for_email_content)
print(f"{rows_written} rows have been appended to 'tops.csv'")
</code></pre>
<p>Here is a sample of my tops.csv</p>
<pre><code>Ale Mary's,3.45,Barrel Aged Wham Whams - Coffee,Prison City Pub & Brewery,174906,Stout - Imperial / Double,11.0,4.7619,21,1,4.63971
Coopers Seafood House,2.99,Kentucky Breakfast Stout (KBS) (2015),Founders Brewing Co.,549,Stout - Imperial / Double Coffee,11.2,4.52617,58194,0,4.52608
Ale Mary's,3.45,Wendigo - Double Oaked (Batch 2 - 2021),Anchorage Brewing Company,13756,Barleywine - Other,15.5,4.48663,2569,0,4.4862
Coopers Seafood House,2.99,Kentucky Breakfast Stout (KBS) (2015),Founders Brewing Co.,549,Stout - Imperial / Double Coffee,11.2,4.52617,58194,0,4.52604
</code></pre>
<p>Here is a sample of my <code>yesterday.csv</code></p>
<pre><code>Ale Mary's,3.45,Barrel Aged Wham Whams - Coffee,Prison City Pub & Brewery,174906,Stout - Imperial / Double,11.0,4.7619,21,1,4.63971
Coopers Seafood House,2.99,Kentucky Breakfast Stout (KBS) (2015),Founders Brewing Co.,549,Stout - Imperial / Double Coffee,11.2,4.52617,58194,0,4.52604
Ale Mary's,3.45,Wendigo - Double Oaked (Batch 2 - 2021),Anchorage Brewing Company,13756,Barleywine - Other,15.5,4.48663,2569,0,4.48604
Ale Mary's,3.45,Bourbon County Brand Stout (2018) 14.7%,Goose Island Beer Co.,2898,Stout - Imperial / Double,14.7,4.46902,49672,0,4.46887
Coopers Seafood House,2.99,Kentucky Breakfast Stout (KBS) (2017),Founders Brewing Co.,549,Stout - Imperial / Double Coffee,11.8,4.44884,88500,0,4.44872
Bartari,3.22,Utopias Barrel-Aged 120 Minute IPA,Dogfish Head Craft Brewery,459,IPA - Imperial / Double,17.0,4.44246,7565,1,4.44017
</code></pre>
<p>When I run it with this data, I am getting back this row:</p>
<p><code>Coopers Seafood House,2.99,Kentucky Breakfast Stout (KBS) (2015),Founders Brewing Co.,549,Stout - Imperial / Double Coffee,11.2,4.52617,58194,0,4.52604</code></p>
<p>which already exists, yet I should be getting back this one:</p>
<p><code>Ale Mary's,3.45,Bourbon County Brand Stout (2018) 14.7%,Goose Island Beer Co.,2898,Stout - Imperial / Double,14.7,4.46902,49672,0,4.46887</code></p>
<p>I am not sure where I am going wrong. Any assistance would be appreciated.</p>
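One thing worth checking: `tops_df[tops_df.columns[2:]]` compares every column from the third onward, not just the 3rd through 5th, so rows that differ only in a trailing float (like the two KBS 2015 lines) can make the check behave inconsistently; the intended key is positions 2:5. A hedged, pandas-free sketch of keying on exactly those three columns (sample rows are invented):

```python
# Library-free sketch: key each row by columns 2-4 only, then take the
# first yesterday row whose key is absent from tops. Rows are invented
# stand-ins with the same positional layout as the CSVs.
tops_rows = [["Shop A", "3.45", "Beer X", "Brewery X", "111", "9"]]
yesterday_rows = [
    ["Shop B", "2.99", "Beer X", "Brewery X", "111", "8"],   # key already in tops
    ["Shop C", "3.22", "Beer Y", "Brewery Y", "222", "7"],   # new key
]

seen = {tuple(r[2:5]) for r in tops_rows}
new_row = next((r for r in yesterday_rows if tuple(r[2:5]) not in seen), None)
```

The pandas equivalent of this key would be `tops_df[tops_df.columns[2:5]]` rather than `tops_df[tops_df.columns[2:]]`.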
|
<python><pandas><dataframe><csv>
|
2024-02-02 19:11:08
| 2
| 557
|
Tendekai Muchenje
|
77,929,264
| 2,812,625
|
Python Dictionary from pd.DataFrame
|
<p>How come this method results in a nested dictionary, which I then cannot use to rename columns?</p>
<pre><code># Rename columns
cen_columns = cen_columns[['VARIABLE','LABEL_CLEAN']].set_index("VARIABLE").to_dict()
census_age.columns = census_age.columns.to_series().map(cen_columns['LABEL_CLEAN'])
</code></pre>
<p>Resulting in:</p>
<pre><code>{'LABEL_CLEAN': {'B01001_001E': 'Total',
'B01001_002E': 'Male',
'B01001_003E': 'Male_Under_5',
</code></pre>
<p>whereas this method produces a single-level dictionary that I can then use to rename columns:</p>
<pre><code>#Rename Columns
cen_columns = dict(zip(cen_columns['VARIABLE'], cen_columns['LABEL_CLEAN']))
census_age.rename(columns=cen_columns,inplace=True)
census_age.head(2)
</code></pre>
<p>Resulting in a straightforward dict</p>
<pre><code>{'B01001_001E': 'Total',
'B01001_002E': 'Male',
'B01001_003E': 'Male_Under_5'....}
</code></pre>
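The nesting is `DataFrame.to_dict()`'s default behavior: it maps `{column -> {index -> value}}`, while calling `to_dict()` on a single selected column (a Series) is flat. A short sketch (column names borrowed from the question, values invented):

```python
import pandas as pd

df = pd.DataFrame({"VARIABLE": ["B01001_001E", "B01001_002E"],
                   "LABEL_CLEAN": ["Total", "Male"]})

# DataFrame.to_dict() nests {column: {index: value}} ...
nested = df.set_index("VARIABLE").to_dict()

# ... while Series.to_dict() on a single selected column is flat.
flat = df.set_index("VARIABLE")["LABEL_CLEAN"].to_dict()
```

So selecting `["LABEL_CLEAN"]` before `.to_dict()` (or indexing the nested result with `['LABEL_CLEAN']`, as the first snippet already does) yields the mapping `rename`/`map` expects.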
|
<python><pandas><dictionary>
|
2024-02-02 18:54:42
| 1
| 446
|
Tinkinc
|
77,929,243
| 14,735,451
|
Find unique answers from list of strings
|
<p>I have 2 lists:</p>
<pre><code>texts = ["this is string 1", "this is string 2", "this is string 3"]
answers = ["this", "string 1", "string 3"]
</code></pre>
<p>I need to create a function that returns as many possible answers from <code>answers</code> (and their corresponding text) that exist in (exact string match) exactly one of the <strong>returned texts</strong>. That is, each of the returned answers must be in exactly one of the returned texts.</p>
<p>It should return a list of lists, where each sublist has the <code>[answer, corresponding text]</code>.</p>
<p>For example, here only <code>"string 1"</code> appears in the first text (<code>"this is string 1"</code>), only <code>"string 3"</code> appears in the last text (<code>"this is string 3"</code>), and the first answer (<code>"this"</code>) appears in all of the texts, so it shouldn't be returned.</p>
<p>Another example:</p>
<pre><code>texts = ["this is string 1 this is string 2", "this is string 1", "this is string 3"]
answers = ["this", "string 1", "string 3"]
</code></pre>
<p>The returned output in this case can either be</p>
<p><code>[["string 1", "this is string 1"],["string 3", "this is string 3"]]</code></p>
<p>Or</p>
<p><code>[["string 1", "this is string 1 this is string 2"],["string 3", "this is string 3"]]</code></p>
<p>As "string 1" does not appear in the returned text "this is string 3".</p>
<p>I currently have the following code:</p>
<pre><code>def find_unique_answers(texts, answers):
result = []
for answer in answers:
text_found = None
for text in texts:
if answer in text:
if text_found is None:
text_found = text
else:
text_found = None
break
if text_found is not None:
result.append([answer, text_found])
return result
texts = ["this is string 1", "this is string 2", "this is string 3"]
answers = ["this", "string 1", "string 3"]
output = find_unique_answers(texts, answers)
print(output) # Output: [['string 1', 'this is string 1'], ['string 3', 'this is string 3']]
</code></pre>
<p>But, if I change the lists to</p>
<pre><code>texts = ["this is string 1 this is string 2", "this is string 1", "this is string 3"]
answers = ["this", "string 1", "string 3"]
</code></pre>
<p>then the output is only <code>[['string 3', 'this is string 3']]</code>.</p>
<p>Not entirely sure what I'm doing wrong.</p>
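A quick way to see why `string 1` vanishes: count, per answer, how many of the input texts contain it. The posted loop only keeps answers whose count is exactly 1 across all input texts (not just the returned ones), and in the second example `string 1` occurs in two texts. A small diagnostic sketch:

```python
# Diagnostic sketch using the second example's inputs: per-answer
# occurrence counts across ALL texts, which is what the posted
# function's inner loop effectively filters on.
texts = ["this is string 1 this is string 2", "this is string 1",
         "this is string 3"]
answers = ["this", "string 1", "string 3"]

counts = {a: sum(a in t for t in texts) for a in answers}
```

Since `counts["string 1"]` is 2, the function resets `text_found` to None and drops it, even though the desired constraint only concerns the texts that end up being returned.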
|
<python><string><list>
|
2024-02-02 18:49:45
| 1
| 2,641
|
Penguin
|
77,929,212
| 823,859
|
Uninstall package located in /Library/Python/3.9/bin
|
<p>I would like to install a package (streamlit) only in a <code>conda</code> environment, not system-wide. However, when I go to <code>pip install</code> it in the environment, it says that the requirements were already satisfied in <code>/Users/adamg/Library/Python/3.9/bin</code>. How do I remove this instance, so that streamlit is installed only within conda environments?</p>
<p>Here is my output when I install it in a conda environment:</p>
<pre><code>(streamlit0) 22:28:34 ~
○ pip install streamlit
zsh: correct 'streamlit' to '.streamlit' [nyae]? n
Defaulting to user installation because normal site-packages is not writeable
Collecting streamlit
Using cached streamlit-1.31.0-py2.py3-none-any.whl.metadata (8.1 kB)
Requirement already satisfied: altair<6,>=4.0 in ./Library/Python/3.9/lib/python/site-packages (from streamlit) (5.2.0)
...
Using cached streamlit-1.31.0-py2.py3-none-any.whl (8.4 MB)
Installing collected packages: streamlit
WARNING: The script streamlit is installed in '/Users/adamg/Library/Python/3.9/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed streamlit-1.31.0
</code></pre>
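The line `Defaulting to user installation because normal site-packages is not writeable` in the log is the clue: pip placed streamlit into the per-user site directory of the system Python 3.9, so running `python3.9 -m pip uninstall streamlit` outside any conda environment should remove that copy (an assumption based on that log line). The two directories involved can be inspected from Python:

```python
import site
import sysconfig

# The user site is where "Defaulting to user installation" puts
# packages; purelib is the active interpreter's own site-packages
# (inside a conda env, the env's directory).
user_site = site.getusersitepackages()
env_site = sysconfig.get_paths()["purelib"]
```

Comparing the path in pip's "Requirement already satisfied" messages against these two values shows which install is shadowing the conda one.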
|
<python><pip><anaconda><streamlit>
|
2024-02-02 18:41:38
| 1
| 7,979
|
Adam_G
|
77,929,174
| 1,313,890
|
Pyspark adding an extra row with `.select`
|
<p>I have a pyspark dataframe that I am building out to get the unique values of a column in a table. I have run scripts on this table to update any NULL values in the column in question and have confirmed that there are no NULL values in the table. Here's where things get a weird.</p>
<p>I use <code>.groupBy</code> to get a distinct list of the values. I then add a couple of columns using <code>.withColumn</code>. If I check the dataframe at this point, I get 9 rows with no NULL values. I then use <code>.select</code> to alias certain column names and put them in the correct order (as I then use <code>.union</code> to join it with another dataframe and drop duplicates, essentially so I am picking up new values from the source table and can write these new values into my destination table). But when I use <code>.select</code>, I suddenly have 10 rows with one row having a null value in the field that comes from the source table. See below:</p>
<pre><code> dfCalls = spark.read.table('foo.bar.my_table') \
.groupBy([category_name]) \
.count() \
.withColumn('categoryName', lit(category_name)) \
.withColumn('dimCategoryId', lit(None)) \
.select('dimCategoryId', 'categoryName', col(category_name).alias('categoryValue')) \
.alias('cm')
</code></pre>
<p>If I check the results of the above without the <code>.select</code> statement, I get 9 rows with no nulls in the <code>col(category_name)</code> column, but once I add the <code>.select</code> clause, I get 10 rows with a <code>NULL</code> entry in <code>col(category_name)</code>. Why is this happening, and how can I fix it (short of adding a <code>.where</code> clause to exclude nulls, which shouldn't be necessary)?</p>
<p>EDIT:
To add to the weirdness, I ran this same code against a few other columns (i.e. different "category_name" values which are essentially columns in <code>foo.bar.my_table</code>). For 3 of the values, this code executed as expected (no extra null values). For 2 other values, 1 of them is adding an extra null value (same as above) and the other is adding 2 rows: one null value and one value <code>== ''</code> (i.e. empty string). I have checked both columns and neither of them have empty string OR null values in them.</p>
|
<python><pyspark><databricks><delta-live-tables>
|
2024-02-02 18:33:07
| 0
| 547
|
Shane McGarry
|
77,929,077
| 1,663,382
|
How to maintain Polars DataFrame metadata through processing, serialization and deserialization in Apache Arrow IPC format in Python?
|
<p>I would like to be able to track metadata related to a <a href="https://docs.pola.rs/user-guide/concepts/data-structures/#dataframe" rel="nofollow noreferrer">polars DataFrame</a> through processing, serialization, and deserialization using the Apache Arrow IPC format in python-polars. How can this be accomplished?</p>
<p>A common use case for DataFrame metadata is to store data about how the DataFrame was generated or metadata about the data contained within its columns such as the database it was loaded from, the timezone associated or possibly the CRS or coordinates in the DataFrame.</p>
|
<python><python-3.x><dataframe><metadata><python-polars>
|
2024-02-02 18:12:32
| 1
| 718
|
Liquidgenius
|
77,929,049
| 2,038,912
|
Convert a DataFrame of 2 columns into a dict ColA -> ColB
|
<p>If I have a DataFrame of 2 columns (a and b),
how do I convert it into a dictionary mapping a->b?
(None of the DataFrame.to_dict() options does what I want.)</p>
<pre><code>pd.DataFrame(data={ 'a':[1,3], 'b':[2,4]})
a b
0 1 2
1 3 4
</code></pre>
<p>and I want a resulting dictionary like</p>
<pre><code>{ 1:2, 3:4 }
</code></pre>
<p>This works but there must be a better way?</p>
<pre><code>>>> df = pd.DataFrame(data={ 'a':[1,3], 'b':[2,4]})
>>> { r[1][0] : r[1][1] for r in df.iterrows() }
{1: 2, 3: 4}
</code></pre>
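A hedged, more direct route than iterating rows: set column a as the index, select column b as a Series, and use `Series.to_dict()`:

```python
import pandas as pd

df = pd.DataFrame(data={"a": [1, 3], "b": [2, 4]})

# set_index("a") makes column a the keys; selecting "b" yields a Series
# mapping index -> value, which to_dict() returns as a flat dict.
mapping = df.set_index("a")["b"].to_dict()
```

`dict(zip(df["a"], df["b"]))` is an equivalent alternative that avoids building the intermediate index.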
|
<python><pandas><dictionary>
|
2024-02-02 18:04:40
| 2
| 593
|
9mjb
|
77,928,941
| 10,437,727
|
How to run a FileSensor inside of an Airflow task?
|
<p>Airflow version: 2.8</p>
<p>I'm struggling to run a <code>FileSensor</code> inside a DAG in Airflow. I built a generic <code>FileSensor</code> <em>generator</em>:</p>
<pre class="lang-py prettyprint-override"><code>def generic_human_feedback_sensor(step_name: str) -> FileSensor:
prefix = ""
run_id = str(os.getenv("AIRFLOW_CTX_DAG_RUN_ID"))
if os.getenv("FILE_STORAGE_LAYER_ENV") == "local":
pythonpath = os.getenv("PYTHONPATH")
prefix = f"{pythonpath}/data/s3/"
fs_connection_id = "fs_default"
else:
fs_connection_id = "aws_s3_log"
definitions_validated_by_humans = prefix + f"input/{step_name}/{run_id}/**.json"
return FileSensor(
task_id=f"{step_name}_human_feedback",
filepath=definitions_validated_by_humans,
mode="poke",
timeout=300,
poke_interval=60,
fs_conn_id=fs_connection_id,
recursive=True,
)
</code></pre>
<p>This method should be able to return a <code>FileSensor</code> operator to the DAG it runs in:</p>
<pre class="lang-py prettyprint-override"><code>@dag(
dag_id="main",
schedule=None,
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
tags=["main"],
params={"pdf_path": Param("path to a pdf to run the DAG on.")},
)
def main():
...
    definitions_sensor = generic_human_feedback_sensor(step_name="text_to_definitions")
definitions_sensor.set_upstream(previous_task)
if definitions_sensor is True:
logging.error("File found")
</code></pre>
<p>Here's the thing: I don't know how to check whether the sensor has found the file or not. The documentation indicates that the sensor returns True when the file is found, but my log line never fires even though the task logs show the file was found:</p>
<pre><code>[2024-02-02, 18:35:16 CET] {base.py:83} INFO - Using connection ID 'fs_default' for task execution.
[2024-02-02, 18:35:16 CET] {filesystem.py:66} INFO - Poking for file data/s3/input/<step name>/None/**.json
[2024-02-02, 18:35:16 CET] {filesystem.py:71} INFO - Found File data/s3/input/<step name>/None/some_file.json last modified: 20240201184050
</code></pre>
<p>P.S.: I know that Airflow injects some env vars; does somebody know why I can't seem to extract the <code>AIRFLOW_CTX_DAG_RUN_ID</code> env var?</p>
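On that P.S.: the `None` segment in the logged path (`input/<step name>/None/**.json`) is consistent with `os.getenv` returning `None` at DAG-parse time and `str(None)` silently becoming the string `"None"` (as far as I know, the `AIRFLOW_CTX_*` variables are only injected while a task instance is actually executing, not when the DAG file is parsed — an assumption worth verifying):

```python
import os

# Simulate DAG-parse time, where Airflow's context env vars are absent.
os.environ.pop("AIRFLOW_CTX_DAG_RUN_ID", None)

run_id = str(os.getenv("AIRFLOW_CTX_DAG_RUN_ID"))
print(run_id)  # None  (the *string* "None", not the value None)
print(f"input/step/{run_id}/file.json")  # input/step/None/file.json
```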
<p>Thanks in advance for your help!</p>
|
<python><airflow>
|
2024-02-02 17:43:01
| 0
| 1,760
|
Fares
|
77,928,740
| 1,576,208
|
Python sockets in a multithreaded program with classes/objects interfere with each other
|
<p>As an aid to my work, I am developing a communication simulator between devices, based on TCP sockets, in Python 3.12 (with an object-oriented approach).
Basically, the difference between a SERVER-type communication channel and a CLIENT-type one lies merely in how the sockets are instantiated: the server ones listen for and accept connection requests, while the client ones actively connect to their endpoint.
Once the connection is established, either party can begin to transmit something, which the other receives, processes, and then responds to (on the same socket pair, of course).
The simulator has a simple interface based on <code>Tkinter</code>.
You can create up to 4 channels in a grid layout; in this case we have two:</p>
<p><a href="https://i.sstatic.net/AWV5O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWV5O.png" alt="enter image description here" /></a></p>
<p>When the user clicks on <code>CONNECT</code> button, this is what happens in the listener of that button in the frame class:</p>
<pre><code>class ChannelFrame(tk.Frame):
channel = None #istance of channel/socket type
def connectChannel(self):
port = self.textPort.get();
if self.socketType.get() == 'SOCKET_SERVER':
self.channel = ChannelServerManager(self,self.title,port)
elif self.socketType.get() == 'SOCKET_CLIENT':
ipAddress = self.textIP.get()
self.channel = ChannelClientManager(self,self.title,ipAddress,port)
</code></pre>
<p>Then I have an implementation of a channel of type Server and one for type Client. Their constructors basically collect the received data and create a main thread whose aim is to create socket and then:</p>
<p>1a) connect to the counterpart in case of socket client</p>
<p>1b) waiting for requests of connections in case of socket server</p>
<p>2.) enter a main loop using <code>select.select</code> and trace in the text area of their frame the received and sent data</p>
<p>Here is the code for main thread Client</p>
<pre><code>class ChannelClientManager():
establishedConn = None
receivedData = None
eventMainThread = None #set this event when user clicks on DISCONNECT button
def threadClient(self):
self.socketsInOut.clear()
self.connected = False
while True:
if (self.eventMainThread.is_set()):
print(f"threadClient() --> ChannelClient {self.channelId}: Socket client requested to shut down, exit main loop")
break;
if(not self.connected):
try :
self.establishedConn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.establishedConn.connect((self.ipAddress, int(self.port)))
self.channelFrame.setConnectionStateChannel(True)
self.socketsInOut.append(self.establishedConn)
self.connected = True
#keep on trying to connect to my counterpart until I make it
except socket.error as err:
print(f'socket.error threadClient() --> ChannelClient {self.channelId}: Error while connecting to server: {err}')
time.sleep(0.5)
continue
except socket.timeout as sockTimeout:
print(f'socket.timeout threadClient() --> ChannelClient {self.channelId}: Timeout while connecting to server: {sockTimeout}')
continue
except Exception as e:
print(f'Exception on connecting threadClient() --> ChannelClient {self.channelId}: {e}')
continue
if(self.connected):
try:
r, _, _ = select.select(self.socketsInOut, [], [], ChannelClientManager.TIMEOUT_SELECT)
if len(r) > 0: #socket ready to be read with incoming data
for fd in r:
data = fd.recv(1)
if data:
self.manageReceivedDataChunk(data)
else:
print(f"ChannelClient {self.channelId}: Received not data on read socket, server connection closed")
self.closeConnection()
else:
#timeout
self.manageReceivedPartialData()
except ConnectionResetError as crp:
print(f"ConnectionResetError threadClient() --> ChannelClient {self.channelId}: {crp}")
self.closeConnection()
except Exception as e:
print(f'Exception on selecting threadClient() --> ChannelClient {self.channelId}: {e}')
</code></pre>
<p>Here is the code for main thread Server</p>
<pre><code>class ChannelServerManager():
socketServer = None #user to listen/accept connections
establishedConn = None #represents accepted connections with the counterpart
receivedData = None
eventMainThread = None
socketsInOut = []
def __init__(self, channelFrame, channelId, port):
self.eventMainThread = Event()
self.socketsInOut.clear()
self.socketServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socketServer.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socketServer.bind(('', int(port))) #listen on any network interface; with 127.0.0.1 it would listen only on loopback
self.socketServer.listen(1) #accepting one connection from client
self.socketsInOut.append(self.socketServer)
self.mainThread = Thread(target = self.threadServer)
self.mainThread.start()
def threadServer(self):
self.receivedData = ''
while True:
if (self.eventMainThread.is_set()):
print("threadServer() --> ChannelServer is requested to shut down, exit main loop\n")
break;
try:
r, _, _ = select.select(self.socketsInOut, [], [], ChannelServerManager.TIMEOUT_SELECT)
                if len(r) > 0: #sockets ready to be read
for fd in r:
if fd is self.socketServer:
#if the socket ready is my socket server, then we have a client wanting to connect --> let's accept it
clientsock, clientaddr = self.socketServer.accept()
self.establishedConn = clientsock
print(f"ChannelServer {self.channelId} is connected from client address {clientaddr}")
self.socketsInOut.append(clientsock)
self.channelFrame.setConnectionStateChannel(True)
self.receivedData = ''
elif fd is self.establishedConn:
data = fd.recv(1)
if not data:
print(f"ChannelServer {self.channelId}: Received not data on read socket, client connection closed")
self.socketsInOut.remove(fd)
self.closeConnection()
else:
self.manageReceivedDataChunk(data)
else: #timeout
self.manageReceivedPartialData()
except Exception as e:
print(f"Exception threadServer() --> ChannelServer {self.channelId}: {traceback.format_exc()}")
</code></pre>
<p>I don't know why, but these frames/sockets appear to interfere with each other or "share data".
Disconnecting and closing a channel via the button in its own frame also drives the other channel into error, or makes it close/crash too.
These two frames/objects should each live their own life and interact only with their own counterpart as long as it is connected; instead they interfere.
As you can see from this screenshot:</p>
<p><a href="https://i.sstatic.net/KnQES.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnQES.png" alt="enter image description here" /></a></p>
<p>From a medical device (which is the server), I am sending this data</p>
<pre><code><VT>MSH|^~\&|KaliSil|KaliSil|AM|HALIA|20240130182136||OML^O33^OML_O33|1599920240130182136|P|2.5<CR>PID|1||A20230522001^^^^PI~090000^^^^CF||ESSAI^Halia||19890522|M|||^^^^^^H|||||||||||||||<CR>PV1||I||||||||||||A|||||||||||||||||||||||||||||||<CR>SPM|1|072401301016^072401301016||h_san^|||||||||||||20240130181800|20240130181835<CR>ORC|NW|072401301016||A20240130016|saisie||||20240130181800|||^^|CP1A^^^^^^^^CP1A||20240130182136||||||A^^^^^ZONA<CR>TQ1|1||||||||0||<CR>OBR|1|072401301016||h_GLU_A^^T<CR>OBX|1|NM|h_GLU_A^^T||||||||||||||||<CR>BLG|D<CR><FS>
</code></pre>
<p>only to the channel on port 10001, but part of this data is received on one socket client and the rest on the other (right) socket client. This is not a problem of rendering the text in the right frame: the log of the received data also shows that some data is received in Channel 0 and some in Channel 1.
Why does this happen? If instead I start 2 instances of the simulator with only one channel each, everything works perfectly, but this defeats our purpose of working with up to 4 channels in parallel from a single window.
Do you have any ideas? At first I had implemented <code>ChannelServerManager</code> and <code>ChannelClientManager</code> as subclasses of a <code>ChannelAbstractManager</code> with common methods and data structures, based on the Python <code>ABC</code> library.
Then I read that inheritance in Python is not the same as in Java, so I thought the different instances were sharing some attributes. I removed the abstract class and replicated the code and resources in both classes, but this has not solved it.
Any suggestions?</p>
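One Python-specific pitfall consistent with these symptoms, and independent of inheritance: attributes assigned at class level (like `socketsInOut = []` in `ChannelServerManager` above) are created once and shared by every instance of the class, whereas attributes assigned through `self` in `__init__` are per-instance. A minimal sketch, stripped of any socket code:

```python
class SharedBug:
    sockets = []                 # class attribute: ONE list for all instances

    def add(self, s):
        self.sockets.append(s)   # found on the class, so every instance sees it


class PerInstance:
    def __init__(self):
        self.sockets = []        # instance attribute: a fresh list each time

    def add(self, s):
        self.sockets.append(s)


a, b = SharedBug(), SharedBug()
a.add("conn-A")
print(b.sockets)  # ['conn-A'] -- b "received" a's data

c, d = PerInstance(), PerInstance()
c.add("conn-C")
print(d.sockets)  # []
```

If two server channels mutate one shared `socketsInOut` list that both pass to `select.select`, data for one channel can surface in the other — worth ruling out here.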
|
<python><java><multithreading><sockets><tcp>
|
2024-02-02 17:06:31
| 1
| 5,467
|
SagittariusA
|
77,928,449
| 3,170,559
|
How to get data from REST API over self-hosted integration runtime on Synapse Analytics?
|
<p>In my company I got access to a data source via a REST API which can only be accessed from within the company network. When I run my code for testing on my PC, it works as expected. However, I want to run the code on Synapse Analytics, which won't work because Synapse is in Azure, outside the company network. We got a self-hosted integration runtime for our Synapse Analytics workspace to fix this issue, but I couldn't find documentation or a code snippet showing how to adjust my code.</p>
<p>What do I need to add to my code to make it work?</p>
<pre><code>import requests
import base64
# User und Password
username = "user"
password = "pw"
# URL
url = "https://test.server.com/gate/pull/"
# Auth
auth_string = f"{username}:{password}"
auth_header = f"Basic {base64.b64encode(auth_string.encode()).decode()}"
headers = {
"Authorization": auth_header,
"Content-Type": "application/json"
}
# GET
req = requests.get(url, headers=headers)
json_data = req.json()
</code></pre>
|
<python><azure><rest><azure-data-factory><azure-synapse>
|
2024-02-02 16:20:47
| 1
| 717
|
stats_guy
|
77,928,407
| 3,160,284
|
NumPy C-API: Undefined reference error trying to support numpy.float16
|
<p>I am trying to build one of the <code>ufunc</code> examples from NumPy's documentation (see <a href="https://numpy.org/doc/stable/user/c-info.ufunc-tutorial.html#example-numpy-ufunc-with-multiple-dtypes" rel="nofollow noreferrer">here</a>), but I am unable to get the support for half-precision floating point numbers to work.</p>
<p>This is my minimal example:</p>
<p><code>pyproject.toml</code></p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
build-backend = "mesonpy"
requires = ["meson-python==0.15", "numpy==1.26.3"]
[project]
name = "mypackage"
version = "0.1.0"
dependencies = ["numpy==1.26.3"]
</code></pre>
<p><code>meson.build</code> - There's an <a href="https://github.com/mesonbuild/meson/issues/9598" rel="nofollow noreferrer">open issue</a> on GitHub regarding adding NumPy as a build dependency in Meson - the custom lookup in the code below follows what seems to be the <a href="https://github.com/scipy/scipy/blob/730010e77319606718b3d50a8fc402ba9cc629bc/scipy/meson.build#L30-L77" rel="nofollow noreferrer">current recommendation</a> from the SciPy project on how to add the NumPy headers.</p>
<pre><code>project(
'mypackage',
'c',
version: '0.1.0'
)
py = import('python').find_installation(pure: false)
incdir_numpy = meson.get_external_property('numpy-include-dir', 'not-given')
if incdir_numpy == 'not-given'
incdir_numpy = run_command(
py,
['-c', '''
import os
import numpy
try:
incdir = os.path.relpath(numpy.get_include())
except Exception:
incdir = numpy.get_include()
print(incdir)
'''
],
check: true
).stdout().strip()
_incdir_numpy_abs = run_command(
py,
['-c', '''
import os
import numpy
print(numpy.get_include())
'''
],
check: true
).stdout().strip()
else
_incdir_numpy_abs = incdir_numpy
endif
inc_np = include_directories(incdir_numpy)
numpy_nodepr_api = ['-DNPY_NO_DEPRECATED_API=NPY_1_26_API_VERSION']
np_dep = declare_dependency(include_directories: inc_np, compile_args: numpy_nodepr_api)
py.extension_module(
'npufunc',
'npufunc.c',
dependencies: np_dep,
install: true
)
</code></pre>
<p><code>npufunc.c</code> - This is just straightforward copy & paste from the <a href="https://numpy.org/doc/stable/user/c-info.ufunc-tutorial.html#example-numpy-ufunc-with-multiple-dtypes" rel="nofollow noreferrer">example</a>. I removed the methods for other data types (single, double, etc.), since they don't cause trouble.</p>
<pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/halffloat.h"
#include <math.h>
static PyMethodDef LogitMethods[] = {
{NULL, NULL, 0, NULL}
};
static void half_float_logit(char **args, const npy_intp *dimensions,
const npy_intp *steps, void *data)
{
npy_intp i;
npy_intp n = dimensions[0];
char *in = args[0], *out = args[1];
npy_intp in_step = steps[0], out_step = steps[1];
float tmp;
for (i = 0; i < n; i++) {
tmp = npy_half_to_float(*(npy_half *)in);
tmp /= 1 - tmp;
tmp = logf(tmp);
*((npy_half *)out) = npy_float_to_half(tmp);
in += in_step;
out += out_step;
}
}
PyUFuncGenericFunction funcs[1] = {&half_float_logit};
static char types[2] = {NPY_HALF, NPY_HALF};
static struct PyModuleDef moduledef = {
PyModuleDef_HEAD_INIT,
"npufunc",
NULL,
-1,
LogitMethods,
NULL,
NULL,
NULL,
NULL
};
PyMODINIT_FUNC PyInit_npufunc(void)
{
PyObject *m, *logit, *d;
import_array();
import_umath();
m = PyModule_Create(&moduledef);
if (!m) {
return NULL;
}
logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 1, 1, 1,
PyUFunc_None, "logit",
"logit_docstring", 0);
d = PyModule_GetDict(m);
PyDict_SetItemString(d, "logit", logit);
Py_DECREF(logit);
return m;
}
</code></pre>
<p>But I am getting an error from the linker when using GCC 13.2 (via <a href="https://www.msys2.org/" rel="nofollow noreferrer">MSYS2</a> - I'm working on Windows 10):</p>
<pre><code>[2/2] Linking target npufunc.cp312-win_amd64.pyd
FAILED: npufunc.cp312-win_amd64.pyd
"cc" -o npufunc.cp312-win_amd64.pyd npufunc.cp312-win_amd64.pyd.p/npufunc.c.obj "-Wl,--allow-shlib-undefined" "-Wl,-O1" "-shared" "-Wl,--start-group" "-Wl,--out-implib=npufunc.cp312-win_amd64.dll.a" "C:\Program Files\Python312\python312.dll" "l" "-lkernel32" "-luser32" "-lgdi32" "-lwinspool" "-lshell32" "-lole32" "-loleaut32" "-luuid" "-lcomdlg32" "-ladvapi32"-,-" "-Wl,--end-group"
C:/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: npufunc.cp312-win_amd_amd64.pyd.p/npufunc.c.obj:npufunc.c:(.text+0x45): undefined reference to `npy_half_to_float'
C:/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: npufunc.cp312-win_amd_amd64.pyd.p/npufunc.c.obj:npufunc.c:(.text+0x5a): undefined reference to `npy_float_to_half'
collect2.exe: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
</code></pre>
<p>Switching to MSVC 19.38 doesn't help either:</p>
<pre><code>[2/2] Linking target npufunc.cp312-win_amd64.pyd
FAILED: npufunc.cp312-win_amd64.pyd
"link" /MACHINE:x64 /OUT:npufunc.cp312-win_amd64.pyd npufunc.cp312-win_amd64.pyd.p/npufunc.c.obj "/release" "/nologo" "/OPT:REF" "/DLL" "/IMPLIB:npufunc.cp312-win_amd64.lib" "C:\Program Files\Python312\libs\python312.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "comdlg32.lib" "advapi32.lib"
Creating library npufunc.cp312-win_amd64.lib and object npufunc.cp312-win_amd64.exp
npufunc.c.obj : error LNK2019: unresolved external symbol npy_half_to_float referenced in function half_float_logit
npufunc.c.obj : error LNK2019: unresolved external symbol npy_float_to_half referenced in function half_float_logit
npufunc.cp312-win_amd64.pyd : fatal error LNK1120: 2 unresolved externals
ninja: build stopped: subcommand failed.
</code></pre>
<p>I understand that dealing with 16-bit floating point numbers can be somewhat "iffy", due to a lack of hardware support on most CPUs. But I'm including the correct header <code>"numpy/halffloat.h"</code>, that defines the two conversion functions <code>npy_half_to_float</code> and <code>npy_float_to_half</code>.</p>
<p>Why am I seeing this error? Any pointers in the right direction would be very much appreciated.</p>
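The two unresolved symbols live in NumPy's bundled static math library (`libnpymath`), which ships next to the headers and is not linked automatically; including `halffloat.h` only declares them. A hedged sketch of how the `meson.build` above could pick it up, following the same pattern SciPy uses (the `../lib` layout relative to the include directory is an assumption to verify for a given NumPy install):

```meson
cc = meson.get_compiler('c')

# npymath is a static library installed at <numpy-include>/../lib
npymath_path = _incdir_numpy_abs / '..' / 'lib'
npymath_lib = cc.find_library('npymath', dirs: npymath_path)

np_dep = declare_dependency(
  include_directories: inc_np,
  compile_args: numpy_nodepr_api,
  dependencies: npymath_lib,
)
```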
|
<python><c><numpy><linker-errors><meson-build>
|
2024-02-02 16:14:17
| 0
| 335
|
Roland Seubert
|
77,928,345
| 715,348
|
Delete possible despite Foreign Key
|
<p>I'm facing a behavior that I don't technically understand.</p>
<p>Let's assume I have a data model that looks like the following.</p>
<pre><code>class MastoList(db.Model):
__tablename__ = 'mastolist'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255), nullable=False)
list_id = db.Column(db.String(255), nullable=False)
base_url = db.Column(db.String(255), nullable=False)
max_id = db.Column(db.String(100), nullable=False, default='0')
updated_at = db.Column(db.DateTime, nullable=True)
class Toot(db.Model):
id = db.Column(db.String(255), primary_key=True)
list_id = db.Column(db.Integer, db.ForeignKey('mastolist.id'), nullable=False)
account = db.Column(db.String(255))
account_displayname = db.Column(db.String(255))
created_at = db.Column(db.DateTime, default=datetime.utcnow)
uri = db.Column(db.String(255), unique=True, nullable=False)
content = db.Column(db.Text, nullable=True)
</code></pre>
<p>The generated DB structure looks like the following.</p>
<pre><code>CREATE TABLE mastolist (
id INTEGER NOT NULL,
name VARCHAR(255) NOT NULL,
list_id VARCHAR(255) NOT NULL,
base_url VARCHAR(255) NOT NULL,
max_id VARCHAR(100) NOT NULL, updated_at DATETIME,
PRIMARY KEY (id)
)
CREATE TABLE toot (
id VARCHAR(255) NOT NULL,
list_id INTEGER NOT NULL,
account VARCHAR(255),
account_displayname VARCHAR(255),
created_at DATETIME,
uri VARCHAR(255) NOT NULL,
content TEXT,
reblogged BOOLEAN NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(list_id) REFERENCES mastolist (id),
UNIQUE (uri)
)
</code></pre>
<p>The database storing the data contains one row for <code>MastoList</code> with <code>id</code> = 1. <code>Toot</code> contains several rows all having set <code>list_id</code> = 1. If I try to delete the row from <code>MastoList</code> using DB Browser for SQLite I get the expected <code>FOREIGN KEY constraint failed</code> error.</p>
<p>Trying to delete the row using the following code works without any error.</p>
<pre><code>session = Session(db)
masto_list = session.execute(select(MastoList).filter(MastoList.id == id)).scalar_one()
session.delete(masto_list)
session.commit()
</code></pre>
<p>Afterwards all rows in <code>Toot</code> still have <code>list_id</code> to 1.</p>
<p>Only if I add</p>
<pre><code>toots = db.Relationship('Toot', backref='MastoList')
</code></pre>
<p>to the model of <code>MastoList</code> I also get the expected foreign key error when trying to delete the row with my code.</p>
<p>Can anybody explain why it's possible to delete the row using my code meaning the DB constraint is simply ignored?</p>
|
<python><sqlite><sqlalchemy>
|
2024-02-02 16:04:28
| 1
| 318
|
André
|
77,928,256
| 9,983,652
|
how to union multiple index?
|
<p>I'd like to find the union of the indexes of multiple dataframes. In pandas, union can only be applied to 2 indexes at a time, so I have to chain it multiple times like below. I am wondering if I can do it in only one line? Thanks</p>
<pre><code>NewIdx = pd.Index(Idx1.union(Idx2),
name = 'depth')
NewIdx=pd.Index(NewIdx.union(Idx3),
name = 'depth')
NewIdx=pd.Index(NewIdx.union(Idx4),
name = 'depth')
NewIdx=pd.Index(NewIdx.union(Idx5),
name = 'depth')
</code></pre>
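For what it's worth, the chained unions collapse to a single `functools.reduce` call (the index values here are stand-ins for the real depth indexes):

```python
from functools import reduce

import pandas as pd

idx1 = pd.Index([10, 20], name="depth")
idx2 = pd.Index([20, 30], name="depth")
idx3 = pd.Index([40], name="depth")

# reduce applies Index.union pairwise across the whole list.
new_idx = reduce(pd.Index.union, [idx1, idx2, idx3]).rename("depth")
print(list(new_idx))  # [10, 20, 30, 40]
```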
|
<python><pandas>
|
2024-02-02 15:49:13
| 2
| 4,338
|
roudan
|
77,928,197
| 2,812,625
|
A more Pythonic Way to clean up strings
|
<p>I am looking to clean up some strings to make them column names, and the easiest method I found is a series of replace statements. Is there a cleaner way to achieve this outcome?</p>
<p>Sample String for clarity:</p>
<pre><code>0 Estimate!!Total:
1 Estimate!!Total:!!Male:
2 Estimate!!Total:!!Male:!!Under 5 years
3 Estimate!!Total:!!Male:!!5 to 9 years
</code></pre>
<p>Outcome:</p>
<pre><code>0 Total:
1 Male:
2 Male_Under_5
3 Male_5_to_9
4 Male_10_to_14
5 Male_15_to_17
</code></pre>
<p>Code:</p>
<pre><code>test['LABEL'].str.replace("Estimate!!Total:!!", "").str.replace("Estimate!!", "")\
.str.replace(':!!','_').str.replace('years','').str.replace('years','').str.replace(' ','_').str.rstrip('_')
</code></pre>
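One way to condense the chain is a single cleaning function built on `re` (a sketch against the sample strings above; the rules mirror the replace chain, not an official spec):

```python
import re

labels = [
    "Estimate!!Total:",
    "Estimate!!Total:!!Male:",
    "Estimate!!Total:!!Male:!!Under 5 years",
    "Estimate!!Total:!!Male:!!5 to 9 years",
]

def clean(label):
    # Drop the leading boilerplate, then normalise separators and spaces.
    label = re.sub(r"^Estimate!!(Total:!!)?", "", label)
    label = label.replace(":!!", "_").replace(" years", "")
    return re.sub(r"\s+", "_", label).rstrip("_")

print([clean(s) for s in labels])
# ['Total:', 'Male:', 'Male_Under_5', 'Male_5_to_9']
```

Applied to the frame, that becomes `test['LABEL'].map(clean)` instead of the long `.str.replace` chain.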
|
<python><string><replace>
|
2024-02-02 15:38:18
| 3
| 446
|
Tinkinc
|
77,928,132
| 1,911,722
|
How to install .conda file downloaded from anaconda package site?
|
<p>For example, I want to install fish shell using conda. But the server has no internet connection.</p>
<p>On <a href="https://anaconda.org/conda-forge/fish/files" rel="nofollow noreferrer">https://anaconda.org/conda-forge/fish/files</a>, many versions are provided, but the latest few versions are almost always in .conda format.</p>
<p>I downloaded <code>linux-64/fish-3.7.0-hdab1d28_0.conda</code> and install it using</p>
<pre><code>conda install linux-64/fish-3.7.0-hdab1d28_0.conda
</code></pre>
<p>But this does not work. It shows a long error report like</p>
<pre><code>...
FileNotFoundError: [Errno 2] No such file or directory:
'...../miniconda3/pkgs/linux-64_fzf-0.46.1-ha8f183a_0/info/repodata_record.json'
FileNotFoundError: [Errno 2] No such file or directory: '...../miniconda3/pkgs/linux-64_fzf-0.46.1-ha8f183a_0/info/index.json'
....
etc
</code></pre>
<p>But an old version in tar.bz2 format like <code>linux-64/fish-3.4.1-h682823d_0.tar.bz2</code> installed just fine.</p>
<p>So how to correctly conda install .conda file?</p>
|
<python><anaconda><conda>
|
2024-02-02 15:28:45
| 1
| 2,657
|
user15964
|
77,927,935
| 2,986,153
|
multiple hue in same kdeplot
|
<pre><code>import polars as pl
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
TRIALS = int(10e3)
RATE_A = 0.65
RATE_B = 0.66
SIMS = int(4e3)
np.random.seed(42)
df_sim = pl.DataFrame(
{
"A": np.random.binomial(TRIALS, RATE_A, SIMS) / TRIALS,
"B": np.random.binomial(TRIALS, RATE_B, SIMS) / TRIALS,
}
)
df_sim = df_sim.with_columns(
(pl.col("B") - pl.col("A")).alias("treat_B")
)
df_melt = df_sim.melt(
value_vars=["A", "B", "treat_B"],
variable_name='distribution',
value_name='mean')
df_melt = df_melt.with_columns(
(pl.when(df_melt['mean'] > 0) # & df_melt['distribution'] == 'treat_B')
.then(pl.lit("pos"))
.otherwise(pl.lit("neg"))
.alias('direction')
)
)
sb.set(style="whitegrid")
sb.set(font_scale=2)
dens_all = sb.kdeplot(
data=df_melt.filter(pl.col("distribution")=='treat_B'),
x="mean",
#hue="direction",
fill=True,
common_norm=False,
alpha=.5)
dens_all.set_xlabel("Simulated Sample Means for treat_B")
dens_all.xaxis.set_major_formatter(mtick.PercentFormatter(1, 0))
</code></pre>
<p><a href="https://i.sstatic.net/cgFkD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgFkD.png" alt="enter image description here" /></a></p>
<p>I would like to display this single density plot with all density left of 0 as red and all density right of 0 as blue. However if I add an argument for <code>hue</code> I get two separate density plots.</p>
<p>How can I control fill color while maintaining a single density plot?</p>
<p><a href="https://i.sstatic.net/V1cvV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V1cvV.png" alt="enter image description here" /></a></p>
<p>The <code>common_norm</code> and <code>multiple</code> arguments bring me closer, but not to the direct split I want.</p>
<pre><code>dens_all = sb.kdeplot(
data=df_melt.filter(pl.col("distribution")=='treat_B'),
x="mean",
hue="direction",
fill=True,
common_norm=True,
multiple='stack',
alpha=.5)
dens_all.set_xticks(np.arange(-0.02, 0.04, 0.01))
dens_all.set_title('4e3 Simulated Samples for treat_B')
dens_all.set_xlabel("Simulated Sample Means for treat_B")
dens_all.xaxis.set_major_formatter(mtick.PercentFormatter(1, 0))
plt.axvline(0, color='red')
</code></pre>
<p><a href="https://i.sstatic.net/oxnH0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oxnH0.png" alt="enter image description here" /></a></p>
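One way to keep a single density curve while colouring the two sides differently is to compute the curve once and call `fill_between` twice with `where=` masks split at 0. A numpy/matplotlib sketch with a hand-rolled Gaussian KDE (the synthetic data and Scott's-rule bandwidth are placeholders, not the simulation above; seaborn's `kdeplot` returns an Axes, so the same `fill_between` trick can also be applied to `ax.lines[0].get_data()`):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
samples = rng.normal(0.01, 0.007, 4000)  # stand-in for treat_B means

# Minimal Gaussian KDE (Scott's rule) so the example needs only numpy.
h = samples.std() * len(samples) ** (-1 / 5)
xs = np.linspace(samples.min(), samples.max(), 400)
dens = np.exp(-0.5 * ((xs[:, None] - samples) / h) ** 2).sum(1)
dens /= len(samples) * h * np.sqrt(2 * np.pi)

fig, ax = plt.subplots()
ax.plot(xs, dens, color="black")
ax.fill_between(xs, dens, where=xs < 0, color="red", alpha=0.5)
ax.fill_between(xs, dens, where=xs >= 0, color="blue", alpha=0.5)
ax.axvline(0, color="red")
fig.savefig("split_density.png")
```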
|
<python><seaborn>
|
2024-02-02 15:00:31
| 1
| 3,836
|
Joe
|
77,927,837
| 11,908,438
|
import function from folder in python
|
<p>I'm trying to import a function located in one folder into my Python file, which is located in another folder.</p>
<p>Folder structure looks like this</p>
<pre><code>Project:
-dbNameFolder
+ __init__.py
+ dbNames.py
-FolderIwantToImportTo
+ file.py
</code></pre>
<p>For example, I tried this with a basic add function:</p>
<pre><code>def Add(a,b):
return (a+b)
</code></pre>
<p>and imported it like so</p>
<pre><code>from solve.basic import Add
</code></pre>
<p>NB: I'm importing from one folder into another folder, not from the root of the project.</p>
<p>I'm getting an error saying the function is not found. I tried using sys, but it only works in VS Code; if I do the same in PyCharm I get errors.</p>
<p><a href="https://i.sstatic.net/ItIwT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItIwT.png" alt="Preview of file" /></a></p>
<p><a href="https://i.sstatic.net/xsS21.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xsS21.png" alt="enter image description here" /></a></p>
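For reference, Python resolves imports against `sys.path` (the entry script's directory plus `PYTHONPATH`), so a sibling folder has to be imported relative to the project root rather than referenced directly. A self-contained sketch that rebuilds the layout from the question in a temp directory:

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway project mirroring the question's structure.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "dbNameFolder")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "dbNames.py"), "w") as f:
    f.write(textwrap.dedent("""
        def Add(a, b):
            return a + b
    """))

# Make the PROJECT ROOT importable (not the sibling folder itself),
# then import relative to that root.
sys.path.insert(0, root)
from dbNameFolder.dbNames import Add

print(Add(2, 3))  # 5
```

In PyCharm, marking the project root as a Sources Root (or running `python -m FolderIwantToImportTo.file` from the root) has the same effect without touching `sys.path` by hand.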
|
<python><import><module>
|
2024-02-02 14:45:36
| 1
| 545
|
Zayne komichi
|
77,927,819
| 11,039,749
|
PulseAudio Create Virtual Mic and use it in Python
|
<p>I am working on creating a virtual mic that mirrors the speakers, pretty much a loopback of the speakers into a mic.
I was able to create this mic successfully, and pavucontrol shows the mic copying the same sound waves as the speakers. BUT when I connect it to a Python script, no matter which of the available inputs I try, it doesn't read or hear anything.</p>
<p>I'm not 100% sure I set up this virtual mic correctly.</p>
<p>Here is what I did.</p>
<p>I create the new virtual mic:</p>
<pre><code>pactl load-module module-pipe-source source_name=virtual_mic file=/tmp/virtual_mic format=s16le rate=44100 channels=2
</code></pre>
<p>I get the index of the new mic:</p>
<pre><code>pactl list sources
index: 2
name: <virtual_mic>
driver: <module-pipe-source.c>
module: 22
properties:
device.string = "/tmp/virtual_mic"
device.description = "Unix FIFO source /tmp/virtual_mic"
device.icon_name = "audio-input-microphone"
</code></pre>
<p>I get the index of my speakers</p>
<pre><code>Sink #2
State: IDLE
Name: alsa_output.pci-0000_02_02.0.analog-stereo
Description: ES1371/ES1373 / Creative Labs CT2518 (Audio PCI 64V/128/5200 / Creative CT4810/CT5803/CT5806 [Sound Blaster PCI]) Analog Stereo
Driver: module-alsa-card.c
</code></pre>
<p>I create a loopback between the mic and speakers</p>
<pre><code>pactl load-module module-loopback source=virtual_mic sink=alsa_output.pci-0000_02_02.0.analog-stereo
</code></pre>
<p>I run this Python script to get all of my sources</p>
<pre><code>import pyaudio
import subprocess
def get_pulseaudio_sources():
result = subprocess.run(['pacmd', 'list-sources'], capture_output=True, text=True)
sources = result.stdout.split('\n')
# Extract source indexes and names
source_info = [line.strip() for line in sources if line.startswith('index') or line.startswith('device.description')]
source_info = [info.split(':')[-1].strip() for info in source_info]
source_indexes = [int(source_info[i]) for i in range(0, len(source_info), 2)]
source_names = [source_info[i] for i in range(1, len(source_info), 2)]
return dict(zip(source_indexes, source_names))
audio = pyaudio.PyAudio()
inputdevice = 0
pulseaudio_sources = get_pulseaudio_sources()
print("\nStarting Audio Devices \n")
# Get all Audio Devices
for i in range(audio.get_device_count()):
device_info = audio.get_device_info_by_index(i)
device_name = device_info['name']
device_index = device_info['index']
print(f"Device {i}: {device_name}" )
print(f" Index: {device_index}")
for pulseaudio_index, pulseaudio_name in pulseaudio_sources.items():
if pulseaudio_name in device_name:
print(f" Matches PulseAudio Source Index: {pulseaudio_index}")
break
print("-----")
</code></pre>
<p>This is what I get:
Starting Audio Devices</p>
<pre><code>Device 0: Ensoniq AudioPCI: ES1371 DAC1 (hw:0,1)
Index: 0
-----
Device 1: pulse
Index: 1
-----
Device 2: default
Index: 2
-----
</code></pre>
<p>When I try different indexes in my code, it doesn't find any sound (I tried them all).</p>
<pre><code>import sounddevice as sd
import numpy as np
from transformers import pipeline
def record_audio_callback(indata, frames, time, status):
if status:
print(status)
# Process the audio data if needed
else:
print("inside record")
try:
# Convert the recorded audio to text
recognizer = pipeline("automatic-speech-recognition", model="openai/whisper-medium")
result = recognizer(np.squeeze(indata))
# Print the entire 'result' for debugging
print("Full result:", result)
# Check if 'transcription' is present in the result
#if 'transcription' in result[0]:
#if result and isinstance(result, list) and result[0].get('transcription'):
if result and isinstance(result, dict) and result.get('text'):
#transcription = result[0]['transcription']
transcription = result['text']
print("Transcription:", transcription)
else:
print ("No valid Transcription found in result")
except Exception as e:
print("Error during transcription: ", e)
# Set the audio parameters
channels = 1 # Mono audio
sample_rate = 44100
input_device = 1
# Start recording
with sd.InputStream(callback=record_audio_callback, channels=channels, samplerate=sample_rate, device=input_device):
print("Inside Live Audio Listen")
try:
        sd.sleep(20000)  # Record for 20 seconds (adjust as needed)
except KeyboardInterrupt:
print("Keyboard Stopped Recording")
</code></pre>
|
<python><pulseaudio>
|
2024-02-02 14:42:33
| 1
| 529
|
Bigbear
|
77,927,706
| 7,972,989
|
Dash datatable ignores zeros at beginning of string in filter input
|
<p>I have a dash datatable and I activated the filter option.</p>
<p>I want to filter on "col2", which is a column of character values, not numeric.</p>
<p>However, when I put "00500" in the filter input at the top of the table, the table shows 2 rows: the ones containing "00500" and "75001", as if I had filtered on "500" and not on "00500".</p>
<p>It looks like the filter ignores all the zeros at the beginning of the string.
How can I solve this so that only the "00500" row remains when I put "00500" in the filter input?</p>
<pre><code>from dash import Dash, dash_table
import pandas as pd
my_df = pd.DataFrame(
{
"col1": ["aabbcc", "bbddcc", "ccddee"],
"col2": ["00500", "75001", "75013"],
}
)
app = Dash(__name__, suppress_callback_exceptions=True)
app.layout = dash_table.DataTable(
data=my_df.to_dict("records"),
columns=[{"name": i, "id": i} for i in my_df.columns],
filter_action="native",
filter_options={"placeholder_text": "Filtrer", "case": "insensitive"},
)
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
|
<python><plotly-dash>
|
2024-02-02 14:25:36
| 0
| 2,505
|
gdevaux
|
77,927,700
| 7,956,959
|
Python. SeleniumWebDriver. How to operate with {Webelement} as iterable unit?
|
<p>I have a problem using plain selenium.webdriver.
I want to work with a single {WebElement} taken from a list of WebElements.
Is that possible at all?</p>
<pre><code>...
for button in web_driver.find_elements(By.XPATH, path):
try:
wait = WebDriverWait(self.web_driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, button.get_attribute("xpath"))))
...
</code></pre>
<p>But in reality, "button" exposes nothing useful for this.
Is there a way to get its "xpath", or to somehow check "element_to_be_clickable" during iteration?
Thanks in advance!</p>
|
<python><selenium-webdriver>
|
2024-02-02 14:25:21
| 1
| 518
|
Kaonashi
|
77,927,600
| 616,460
|
ProxyFix does not seem to be working (Flask behind nginx)
|
<p>I have a server running nginx. I'm trying to set up a Flask application at a specific sub-URL (e.g. <code>http://server/app/</code> maps to Flask application's <code>/</code>). If it matters, ultimately I intend for multiple Flask applications to run at various other sub-URLs on this same server.</p>
<p>I'm new to all of this so I've been running through tutorials, in particular I've followed these guides / posts:</p>
<ul>
<li><a href="https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-22-04" rel="nofollow noreferrer">Flask + Gunicorn + Nginx Guide</a></li>
<li><a href="https://flask.palletsprojects.com/en/3.0.x/deploying/nginx/" rel="nofollow noreferrer">Flask docs re: nginx</a></li>
<li><a href="https://flask.palletsprojects.com/en/3.0.x/deploying/proxy_fix/" rel="nofollow noreferrer">Flask docs re: ProxyFix</a></li>
<li><a href="https://stackoverflow.com/a/75123044">https://stackoverflow.com/a/75123044</a></li>
</ul>
<p>I have a section in my nginx <em>sites-available/default</em> file like this:</p>
<pre><code> location /firmware {
proxy_pass http://localhost:6001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Prefix /firmware;
}
</code></pre>
<p>It seems to be working fine (the Flask app is on 6001):</p>
<ul>
<li><p>I've confirmed that requests to <code>/firmware</code> are forwarded to the Flask app.</p>
</li>
<li><p>I've confirmed that the HTTP headers are present in the request and that they all contain the expected values:</p>
<pre><code>X-Forwarded-For: 192.168.98.113
X-Forwarded-Proto: http
X-Forwarded-Host: 192.168.99.223
X-Forwarded-Prefix: /firmware
</code></pre>
</li>
</ul>
<p>Also, I'm using <code>ProxyFix</code> in the Flask app to let the app know it's behind a proxy. I've got this bit of code when creating the application:</p>
<pre><code>from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix
app = Flask(__name__)
app.wsgi_app = ProxyFix(
app.wsgi_app,
x_for=1,
x_proto=1,
x_host=1,
x_prefix=1
)
@app.route("/")
def ... ():
...
</code></pre>
<p>However, the Flask application still sees the request as <code>/firmware/</code>, rather than <code>/</code>. I've verified this by checking the output logs when running the app via <code>flask</code>, and also by adding a route for <code>/firmware/</code> to test if it gets hit (it does). The <code>/</code> route does not get a hit.</p>
<p>The behavior is the same whether I run the application via <code>flask</code> or <code>gunicorn</code>, that doesn't seem to matter.</p>
<p>What step have I missed / what am I doing wrong here? Why is the Flask application still getting requests for <code>/firmware/</code> even though I've set up <code>ProxyFix</code> and the HTTP headers?</p>
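<p>One hedged explanation: <code>ProxyFix</code> only copies the forwarded headers into the WSGI environ (e.g. <code>SCRIPT_NAME</code> from <code>X-Forwarded-Prefix</code>); it does not strip the prefix from the request path, so <code>PATH_INFO</code> still carries <code>/firmware/</code>. The proxy has to strip it, for example with a trailing slash on <code>proxy_pass</code> so nginx substitutes the matched prefix (a sketch, untested against this exact setup):</p>

```nginx
# Trailing slash on proxy_pass makes nginx replace the matched
# "/firmware/" prefix with "/", so Flask receives "/" not "/firmware/"
location /firmware/ {
    proxy_pass http://localhost:6001/;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Prefix /firmware;
}
```

<p>With the prefix stripped at the proxy, <code>ProxyFix</code>'s <code>x_prefix=1</code> then restores <code>/firmware</code> as <code>SCRIPT_NAME</code> so <code>url_for</code> generates correct external URLs.</p>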
|
<python><nginx><flask>
|
2024-02-02 14:10:36
| 1
| 40,602
|
Jason C
|
77,927,589
| 7,529,256
|
The IF - OR condition does not work with Float(nan) in python. How to write this in a single line?
|
<p>There are 4 conditions mainly :</p>
<ol>
<li>str(row["Vch Type*"]) == "Payment"</li>
<li>str(row["Vch Type*"]) == "Contra" and row["Withdrawal*"] is not null</li>
<li>str(row["Vch Type*"]) == "Receipt"</li>
<li>str(row["Vch Type*"]) == "Contra" and str(row["Deposit*"]) is not null</li>
</ol>
<p>For 1 and 2, the value should be
amount.text = str(row["Withdrawal*"] * -1)</p>
<p>But for 3,4 the value should be amount.text = str(row["Deposit*"] * -1)</p>
<p>So, I am trying hard to combine them into 2 conditions, as below, but it always fails to identify the "and" condition in the if statement properly and gives me a NaN for condition 2.</p>
<p>What is wrong in this code?</p>
<pre><code>if str(row["Vch Type*"]) == "Payment" or (str(row["Vch Type*"]) == "Contra" and row["Withdrawal*"] != ''):
amount.text = str(row["Withdrawal*"] * -1)
elif str(row["Vch Type*"]) == "Receipt" or (str(row["Vch Type*"]) == "Contra" and str(row["Deposit*"]) != ''):
amount.text = str(row["Deposit*"])
else:
x = 0
</code></pre>
<p>Note: In debug mode, I can see that row["Withdrawal*"] is NaN in condition 2 and row["Deposit*"] is NaN in condition 4. I saw some posts about comparing a float NaN using the snippet below, but even that is not working. Is there any way to combine this into a single if statement? If not, at least into 2 instead of 4?</p>
<p>float('nan') == float('nan')</p>
<blockquote>
<p>False</p>
</blockquote>
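<p>For reference, one hedged way to handle the NaN cells (assuming pandas is already in use for the rows): test them with <code>pd.notna</code>, since <code>NaN != ''</code> is always true and <code>NaN == NaN</code> is always false. A sketch with a hypothetical stand-in row:</p>

```python
import pandas as pd

# hypothetical row standing in for one row of the real dataframe
row = {"Vch Type*": "Contra", "Withdrawal*": float("nan"), "Deposit*": 100.0}

vch = str(row["Vch Type*"])
# pd.notna() is the reliable "not empty" test for float("nan") cells;
# comparing against '' or against float('nan') never matches NaN
if vch == "Payment" or (vch == "Contra" and pd.notna(row["Withdrawal*"])):
    amount = str(row["Withdrawal*"] * -1)
elif vch == "Receipt" or (vch == "Contra" and pd.notna(row["Deposit*"])):
    amount = str(row["Deposit*"] * -1)
else:
    amount = None
```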
|
<python><pandas>
|
2024-02-02 14:09:16
| 0
| 1,584
|
Viv
|
77,927,531
| 9,608,860
|
set_permission_overrides not assigning permission with dpytest
|
<p>I am trying to configure a testing bot with dpytest. I want to create a bot which has permission to manage channels
Here is my code base:</p>
<pre><code>@pytest_asyncio.fixture(params=BOT_FIXTURE)
async def bot(request) -> AsyncGenerator[Tuple[commands.Bot, str], None]:
b = commands.Bot(
command_prefix=settings.DISCORD_COMMAND_PREFIX,
intents=intents,
strip_after_prefix=True,
help_command=None,
)
dpytest.configure(b)
guild = dpytest.get_config().guilds[0]
await dpytest.set_permission_overrides(
target=guild.members[0], channel=guild.text_channels[0], manage_channels=True
)
# this statement returns false
print('permissions?', guild.members[0].guild_permissions.manage_channels)
</code></pre>
<p>Can anyone tell me what I am doing wrong?</p>
|
<python><discord.py><pytest>
|
2024-02-02 14:00:01
| 0
| 405
|
Aarti Joshi
|
77,927,401
| 3,071,626
|
How to integrate pydantic with pymongo?
|
<p>I get a simple Json from my mongodb with pymongo which I want to validate using pydantic:</p>
<pre><code>from pydantic import TypeAdapter, BaseModel, Json
# contents is the mongodb collection and returns this
# content = [{"content": {}}]
content = list(contents.find(projection={'_id': 0}))
class Content(BaseModel):
content: Json
print(TypeAdapter(list[Content]).validate_python(content))
</code></pre>
<p>This raises this error:</p>
<pre><code>0.content
JSON input should be string, bytes or bytearray [type=json_type, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/json_type
</code></pre>
<p>I understand that pydantic wants the content value to be an escaped JSON string, but pymongo returns a dict. How can I solve this without iterating over all content and manually calling json.dumps() on each item?</p>
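<p>For what it's worth, one hedged fix (assuming pydantic v2): <code>Json</code> tells pydantic to parse a JSON <em>string</em>, while pymongo hands back already-parsed dicts, so a plain dict annotation avoids the round-trip entirely:</p>

```python
# Sketch: validate pymongo's parsed documents as dicts, not JSON strings.
from typing import Any
from pydantic import BaseModel, TypeAdapter


class Content(BaseModel):
    # pymongo returns parsed documents, so dict is the right annotation;
    # Json would only fit if the field held a JSON-encoded string
    content: dict[str, Any]


docs = [{"content": {}}]  # stand-in for list(contents.find(projection={'_id': 0}))
validated = TypeAdapter(list[Content]).validate_python(docs)
```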
|
<python><json><pymongo><pydantic>
|
2024-02-02 13:37:58
| 0
| 575
|
Tom
|
77,927,370
| 4,948,719
|
Typing a class member that gets overriden by a property in subclass
|
<p>Here is an example code of what I have:</p>
<pre class="lang-py prettyprint-override"><code>class A:
value: int
def __init__(self, value):
self.value = value
class B(A):
@property
def value(self) -> int:
return 3
</code></pre>
<p>Dynamically this works perfectly, but pyright complains that <code>Type "property" cannot be assigned to type "int"</code></p>
<p>Is there a way to type this properly? I want to be able to set the value in methods of other subclasses of <code>A</code>.</p>
<p>More precisely, I am looking for a way to express that accessing <code>a.value</code> (when <code>insinstance(a, A)</code>) should always yield an <code>int</code>, but that the implementation <strong>may or may not</strong> be a <code>property</code>.</p>
<p>Thanks!</p>
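<p>One hedged workaround (a sketch, not necessarily the only way): declare <code>value</code> as a property with a setter already in the base class, so subclasses may override the getter while other subclasses can still assign through the setter:</p>

```python
# Sketch: the base class owns a settable property; B narrows the getter.
class A:
    def __init__(self, value: int) -> None:
        self._value = value

    @property
    def value(self) -> int:
        return self._value

    @value.setter
    def value(self, v: int) -> None:
        self._value = v


class B(A):
    @property
    def value(self) -> int:  # read-only override; assignment on B fails at runtime
        return 3
```

<p>Note that overriding a settable property with a read-only one may itself draw a different type-checker complaint, so whether this fits depends on whether subclasses like <code>B</code> ever need assignment.</p>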
|
<python><python-3.x><inheritance><python-typing>
|
2024-02-02 13:33:38
| 1
| 1,834
|
tbrugere
|
77,927,215
| 8,771,082
|
Understanding scikit learn import variants
|
<p>Scikit learn import statements in their tutorials are on the form</p>
<pre><code>from sklearn.decomposition import PCA
</code></pre>
<p>Another versions that works is</p>
<pre><code>import sklearn.decomposition
pca = sklearn.decomposition.PCA(n_components = 2)
</code></pre>
<p>However</p>
<pre><code>import sklearn
pca = sklearn.decomposition.PCA(n_components = 2)
</code></pre>
<p>does not, and complains</p>
<pre><code>AttributeError: module 'sklearn' has no attribute 'decomposition'
</code></pre>
<p>Why is this, and how can I predict which ones will work and not so i don't have to test around? If the understanding and predictiveness extends to python packages in general that would be the best.</p>
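<p>For what it's worth, the general rule (which holds for Python packages beyond scikit-learn) is that <code>import pkg</code> only runs <code>pkg/__init__.py</code>; a submodule becomes an attribute of <code>pkg</code> only if that <code>__init__.py</code> imports it, or if you import the submodule explicitly. A small stdlib sketch using the <code>xml</code> package (assuming a fresh interpreter where nothing has pre-imported <code>xml.etree</code>):</p>

```python
import xml  # runs xml/__init__.py, which does not import xml.etree itself

try:
    xml.etree  # expected to raise AttributeError in a fresh interpreter
    auto_available = True
except AttributeError:
    auto_available = False

import xml.etree.ElementTree  # importing the submodule binds xml.etree

now_available = hasattr(xml, "etree")  # True: the attribute now exists
```

<p>So <code>sklearn.decomposition.PCA</code> fails after a bare <code>import sklearn</code> for the same reason, and you can predict it by checking whether the package's <code>__init__.py</code> imports the submodule you need.</p>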
|
<python><scikit-learn><python-import>
|
2024-02-02 13:06:29
| 1
| 449
|
Anton
|
77,926,884
| 101,022
|
AWS S3 generate_presigned_post loses content-type after formData POST
|
<p>Using Python's boto3 library, I create a presigned URL and return it to the front end:</p>
<pre class="lang-py prettyprint-override"><code>s3_client = boto3.client('s3')
response = s3_client.generate_presigned_post(
"bucket",
"logo",
Fields={
"Content-Type": "image/png"
},
Conditions=None,
ExpiresIn=3600
)
</code></pre>
<p>...here's an example response:</p>
<pre class="lang-json prettyprint-override"><code>{
"url": "https://s3.amazonaws.com/bucket,
"fields": {
"Content-Type": "image/png",
"key": "logo",
"AWSAccessKeyId": "ABC",
"policy": "blahblah",
"signature": "XYZ123"
}
}
</code></pre>
<p>I then use the response in the front end to POST the image to S3:</p>
<pre class="lang-js prettyprint-override"><code>const formData = new FormData();
for (const key in json.fields) {
formData.append(key, json.fields[key]);
}
formData.append("file", image);
return fetch(json.url, {
method: "POST",
body: formData
});
</code></pre>
<p>...which uploads the file as expected....except the Content-Type is reset to <code>binary/octet-stream</code>
<a href="https://i.sstatic.net/fQVhA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fQVhA.png" alt="enter image description here" /></a></p>
<p>How do I ensure that the Content-Type is <code>image/png</code> and not <code>binary/octet-stream</code>?</p>
|
<javascript><python><amazon-web-services><amazon-s3>
|
2024-02-02 12:12:40
| 0
| 1,480
|
timborden
|
77,926,865
| 9,074,767
|
Is it a good practice to define datatypes in `__init__.py`?
|
<p>I have a folder structure like this:</p>
<pre><code>nlp
├── generation
│ ├── io_operations
│ ├── __init__.py
│ ├── generator.py
│ ├── processing.py
│ └── prompting.py
│ ├── prompts (folder)
│ ├── ressources (folder)
│ ├── __init__.py
│ ├── classifier.py
│ ├── clustering.py
│ ├── dsos.py
│ ├── mapper.py
│ └── utils.py
</code></pre>
<p>and in both <code>nlp/generation/__init__.py</code> and <code>nlp/__init__.py</code> I have custom datatypes defined. The datatypes in <code>generation</code> are used across the entire folder structure (so, e.g., something from there can be used in <code>classifier</code>). They are therefore "globally" available and easy to import by just calling <code>from nlp.generation import structure</code>. The data structures defined in <code>nlp/__init__.py</code> are not used in the <code>generation</code> sub-directory but only on that level (and sometimes from outside). Here, too, importing them is just <code>from nlp import structure</code>.</p>
<p>My question is whether that is a good practice or not, and if not, why. Thanks!</p>
|
<python><architecture><design-principles>
|
2024-02-02 12:10:15
| 0
| 1,183
|
Marcel Braasch
|
77,926,829
| 7,959,614
|
How to write "probability that result of objective function is lower than y" in LinearConstraint using scipy.optimize
|
<p>I have the following working optimization problem:</p>
<pre><code>import random
import numpy as np
from scipy.optimize import minimize, LinearConstraint
random.seed(123)
# assumptions
d = np.array([1.3, 10])
p = np.array([random.randint(700, 950) / 1000 for _ in range(100)])
scenarios = np.array([[1],
[0]])
r = scenarios * d - 1 # returns per scenario
def obj(f):
return -1 * np.sum(p * np.log(np.sum(1 + r * f, axis=1))[:, None])
f0 = [0, 0]
constr_1 = LinearConstraint(np.ones((1, 2)), lb=0, ub=1)
constr_2 = LinearConstraint(np.eye(2), lb=0, ub=1)
constraints = [constr_1, constr_2]
res = minimize(fun=obj, x0=f0, method='trust-constr', constraints=constraints)
print(res.x)
</code></pre>
<p>This is an attempt to implement the <a href="https://arxiv.org/pdf/1701.02814.pdf" rel="nofollow noreferrer">Kelly-ECC method</a>:</p>
<p><a href="https://i.sstatic.net/5M1V5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5M1V5.png" alt="ECC method" /></a></p>
<p>I would like to introduce a <code>LinearConstraint</code> that makes sure that for <code>x</code>% of the estimates in <code>p</code> the result of the objective function is lower than <code>y</code>.
So, the constraint would look like</p>
<blockquote>
<p>P[y >= obj] >= (1 - x)</p>
</blockquote>
<p>How do I create such a <code>LinearConstraint</code>?</p>
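<p>A hedged observation: a chance constraint of the form P[y >= obj] >= 1 - x is generally not linear in <code>f</code>, so <code>LinearConstraint</code> cannot express it directly. One sketch (with a hypothetical <code>per_estimate_values</code> standing in for the per-estimate objective terms of the real model) is a <code>NonlinearConstraint</code> on the empirical fraction:</p>

```python
import numpy as np
from scipy.optimize import NonlinearConstraint

y, x = 0.05, 0.10                # hypothetical threshold and tail probability
p = np.linspace(0.7, 0.95, 100)  # stand-in for the question's estimates


def per_estimate_values(f):
    # hypothetical per-estimate objective terms; replace with the real
    # -p_i * log(...) expressions from the model above
    return p * (f[0] - f[1])


def chance_fraction(f):
    # fraction of estimates whose objective value is at most y
    return np.mean(per_estimate_values(f) <= y)


# require that at least (1 - x) of the estimates satisfy obj <= y
constr = NonlinearConstraint(chance_fraction, lb=1 - x, ub=1.0)
```

<p>Note that the empirical fraction is piecewise constant, so gradient-based solvers such as <code>trust-constr</code> may stall on it; a smooth sigmoid surrogate of the indicator is a common workaround.</p>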
|
<python><scipy><scipy-optimize>
|
2024-02-02 12:02:42
| 1
| 406
|
HJA24
|
77,926,630
| 2,123,706
|
compare row equality by linking values between columns in pandas
|
<p>I have a df:</p>
<pre><code>rdata = {'id': {0: 'a01', 1: 'a02', 2: 'a03', 3: 'a04', 4: 'a05', 5: 'a06', 6: 'a07', 7: 'a08', 8: 'a09', 9: 'a10', 10: 'a11'}, 'col1': {0: 123456, 1: 1234567, 2: 123456, 3: 123456, 4: 1234, 5: 123456, 6: 123456, 7: 1234, 8: 1234, 9: 12345, 10: 12345}, 'col2': {0: 'a', 1: 'c', 2: 'a', 3: 'a', 4: 'a', 5: 'a', 6: 'a', 7: 'a', 8: 'a', 9: 'b', 10: 'b'}, 'col3': {0: 'AAA', 1: 'BBB', 2: 'CCC', 3: 'DDD', 4: 'EEE', 5: 'EEE', 6: 'EEE', 7: 'EEE', 8: 'EEE', 9: 'GGG', 10: 'GGG'}, 'col4': {0: 555555, 1: 111111, 2: 222222, 3: 222222, 4: 666666, 5: 111111, 6: 111111, 7: 111111, 8: 111111, 9: 444444, 10: 333333}}
df = pd.DataFrame(rdata)
</code></pre>
<p>visually (makes more sense):</p>
<p><a href="https://i.sstatic.net/GaQX3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GaQX3.png" alt="enter image description here" /></a></p>
<p>It consists of 11 samples and 4 categorical columns. I need to determine if samples are the same or not based on if the columns contain the same values.</p>
<p>Starting from column 4, there are 6 groups. Ids a02, a06, a07, a08, and a09 are in the same group, <code>111111</code>.</p>
<p>We see that 111111 is linked to BBB and EEE in column 3. EEE is linked to 666666 as well, so now a05 is included in the main group.</p>
<p>BBB from column 3 is linked to c in column2, and EEE is linked to a in column2. Now all a's and all c's from column two are in the main group.</p>
<p>a and c from column2 are linked to 123456, 1234567, and 1234 from col1. Now samples a01-a09 are in the same group.</p>
<p>samples a10 and a11 are a different group.</p>
<p>I am lost trying to set this up in python, and would like to return a list of what samples are in each group.</p>
<pre><code>def same_samples(df):
stuff happens
grps=[[a01,a02,a03...a09],[a10,a11],[]]
return(grps)
</code></pre>
<p>Are there any suggestions on what needs to go in the <code>stuff happens</code> section? Note that there may be 1 group, up to n groups.</p>
<p>expected output of above</p>
<pre><code>same_samples(df)
[ [a01,a02,a03...a09],
[a10,a11]
]
</code></pre>
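<p>For reference, one hedged way to implement <code>stuff happens</code> without extra dependencies (a sketch of the connected-components idea): treat each <code>(column, value)</code> pair as a graph node, link all pairs that co-occur in a row with a union-find, then group ids by their component root:</p>

```python
# Union-find over (column, value) nodes: values that co-occur in a row are
# linked, and ids whose values fall in one component form one group.
rdata = {'id': {0: 'a01', 1: 'a02', 2: 'a03', 3: 'a04', 4: 'a05', 5: 'a06', 6: 'a07', 7: 'a08', 8: 'a09', 9: 'a10', 10: 'a11'}, 'col1': {0: 123456, 1: 1234567, 2: 123456, 3: 123456, 4: 1234, 5: 123456, 6: 123456, 7: 1234, 8: 1234, 9: 12345, 10: 12345}, 'col2': {0: 'a', 1: 'c', 2: 'a', 3: 'a', 4: 'a', 5: 'a', 6: 'a', 7: 'a', 8: 'a', 9: 'b', 10: 'b'}, 'col3': {0: 'AAA', 1: 'BBB', 2: 'CCC', 3: 'DDD', 4: 'EEE', 5: 'EEE', 6: 'EEE', 7: 'EEE', 8: 'EEE', 9: 'GGG', 10: 'GGG'}, 'col4': {0: 555555, 1: 111111, 2: 222222, 3: 222222, 4: 666666, 5: 111111, 6: 111111, 7: 111111, 8: 111111, 9: 444444, 10: 333333}}

parent = {}

def find(node):
    parent.setdefault(node, node)
    while parent[node] != node:
        parent[node] = parent[parent[node]]  # path halving
        node = parent[node]
    return node

def union(a, b):
    parent[find(a)] = find(b)

cols = ["col1", "col2", "col3", "col4"]
for i, sample in rdata["id"].items():
    nodes = [(c, rdata[c][i]) for c in cols]
    union(("id", sample), nodes[0])
    for a, b in zip(nodes, nodes[1:]):  # chain-link all values in the row
        union(a, b)

groups = {}
for sample in rdata["id"].values():
    groups.setdefault(find(("id", sample)), []).append(sample)

grps = sorted(sorted(g) for g in groups.values())
```

<p>This runs in near-linear time in the number of cells, so it scales well beyond 11 samples.</p>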
|
<python><pandas>
|
2024-02-02 11:21:22
| 1
| 3,810
|
frank
|
77,926,591
| 1,216,183
|
Pillow raise OSError when fail to read image (not real OSError), how to workaround this?
|
<p>Some images pass the <code>image.verify()</code> method but still fail to be read and the following errors can be raised :</p>
<pre><code>OSError
broken data stream when reading image file
</code></pre>
<pre><code>OSError
image file is truncated (XXX bytes not processed)
</code></pre>
<p>I've read many issues on github on the problem with raising OSError which does not make it possible to know whether the error comes from the image (bad format) or really from the OS (hardware problem, filesystem, full disk, etc.) :</p>
<ul>
<li><a href="https://github.com/python-pillow/Pillow/issues/1643" rel="nofollow noreferrer">https://github.com/python-pillow/Pillow/issues/1643</a></li>
<li><a href="https://github.com/python-pillow/Pillow/issues/7489" rel="nofollow noreferrer">https://github.com/python-pillow/Pillow/issues/7489</a></li>
<li><a href="https://github.com/python-pillow/Pillow/issues/7425" rel="nofollow noreferrer">https://github.com/python-pillow/Pillow/issues/7425</a></li>
</ul>
<p>Since the problem has been present for many years and will not be resolved soon, I ask myself how others handle this. So here is my question:</p>
<p>How can we disambiguate an OSError raised by Pillow, to determine whether it is a real OS error or a file parsing/reading error from the library (because of a malformed file that still passes the <code>.verify</code> check)?</p>
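<p>For reference, one hedged heuristic (an assumption, not a documented Pillow guarantee): OS-level failures normally carry an <code>errno</code>, while Pillow's decode errors are raised with <code>errno</code> unset, so inspecting <code>e.errno</code> can disambiguate most cases:</p>

```python
# Heuristic sketch only: classify why opening/decoding an image failed.
from PIL import Image, UnidentifiedImageError


def classify_open_failure(path):
    """Return "ok", "bad-image", or "os-error" for the given path."""
    try:
        with Image.open(path) as im:
            im.load()  # force a full decode; verify() alone is not enough
        return "ok"
    except UnidentifiedImageError:
        return "bad-image"  # unrecognized or unreadable format
    except OSError as e:
        # assumption: real OS errors (ENOENT, EIO, ENOSPC, ...) carry an
        # errno, while "broken data stream" / "truncated" are raised without one
        return "os-error" if e.errno is not None else "bad-image"
```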
|
<python><python-imaging-library>
|
2024-02-02 11:15:59
| 0
| 2,213
|
fabien-michel
|
77,926,337
| 6,546,694
|
Why is it that setting a value on a dataframe that is (I think) a copy changing the original dataframe?
|
<p>I create a dataframe and then (I think) a copy of part of it. I believe it is a copy, since its <code>._is_view</code> property is <code>False</code>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a':[1,2,3],'b':['x','y','z']})
df.loc[[True, True, False],'a']._is_view
output: False
</code></pre>
<p>So setting this copy equal to some value should not change the original dataframe, right?</p>
<p>However, the following changes the original dataframe</p>
<pre><code>df.loc[[True, True, False],'a'] = 'abcd'
print(df)
output:
a b
0 abcd x
1 abcd y
2 3 z
</code></pre>
<p>Can you please help me understand this behavior by pandas?</p>
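<p>For reference, a hedged way to see the distinction: reading through <code>.loc</code> returns a new object, but <em>assigning</em> through <code>.loc</code> is a <code>__setitem__</code> call on <code>df</code> itself, so no intermediate copy is ever involved:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# df.loc[mask, col] as an *expression* builds a new Series (a copy here)...
sub = df.loc[[True, True, False], "a"]
sub.iloc[0] = 99
unchanged = int(df.loc[0, "a"])  # df is untouched: still 1

# ...but df.loc[mask, col] = value is df.loc.__setitem__, executed on df,
# so it writes straight into df's own data
df.loc[[True, True, False], "a"] = 100
```

<p>In other words, <code>_is_view</code> describes the object the indexing expression returned, while the assignment form never materializes that object at all.</p>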
|
<python><pandas><dataframe>
|
2024-02-02 10:39:14
| 2
| 5,871
|
figs_and_nuts
|
77,926,327
| 355,401
|
How to see current exception in VS Code Python Debugger in except clause
|
<p>I am using VS Code to debug a Python application.<br />
If I have the following code</p>
<pre class="lang-py prettyprint-override"><code>try:
do_something()
except Exception:
handle_exception() # <--- program is here
</code></pre>
<p>Is there a way in VS Code to watch the caught exception even though I didn't assign it to a variable?</p>
|
<python><vscode-debugger>
|
2024-02-02 10:37:42
| 1
| 11,544
|
Ido Ran
|
77,926,266
| 9,443,693
|
Google OR-Tools: Different cost for same node base on position in route
|
<p>I have a list of costs for each node based on the position of a node in a route. The depot is positioned at 0.</p>
<pre><code>costs = [
{"part": 0, "final": 0},
{"part": 2, "final": 3},
{"part": 4, "final": 2},
{"part": 1, "final": 3},
]
</code></pre>
<p>It means if node 1 is <strong>part</strong> of a route, it will cost only 2, but if it is the <strong>final</strong> destination, it will cost 3.</p>
<p>So the optimized route should be: <code>0 -> 1 -> 3 -> 2</code>, which will cost <strong>part</strong> of 1, <strong>part</strong> of 3, and <strong>final</strong> of 2, total cost is 5.</p>
<p>Currently, I can not find a proper way to set a <em><strong>dynamic</strong></em> cost for a node based on its position in a route. Any idea is appreciated.</p>
<p>I'm using Python with the Google OR-Tools package.</p>
<p>Many thanks.</p>
|
<python><python-3.x><or-tools>
|
2024-02-02 10:26:26
| 3
| 863
|
Nguyen Hoang Vu
|
77,926,226
| 10,815,899
|
In AND condition which side evaluate first in python?
|
<p>I have an AND statement; the right-side condition can be checked only if the first one returns true.</p>
<pre><code>if foo is not None and foo.passes_the_test():
# do something
</code></pre>
<p>Does Python skip the right-side condition check if the left side returns false, thus preventing the exception that would occur if foo is None and its method is called?</p>
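<p>For reference, a small sketch confirming the short-circuit behavior (operands are evaluated left to right, and <code>and</code> stops at the first falsy one):</p>

```python
calls = []


def right_side():
    # records that it ran, so we can observe whether `and` reached it
    calls.append("evaluated")
    return True


foo = None
if foo is not None and right_side():
    pass

# `and` stopped at the False left operand, so right_side() never ran
# and calls is still empty
```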
|
<python>
|
2024-02-02 10:19:22
| 1
| 831
|
Viljami
|
77,926,208
| 1,059,549
|
What's the difference StreamingResponse or EventSourceResponse?
|
<p>I'm looking to stream back server-side events generated by an LLM.
I noticed that LiteLLM and LangChain use the built-in StreamingResponse.
I also came across <a href="https://github.com/sysid/sse-starlette" rel="nofollow noreferrer">https://github.com/sysid/sse-starlette</a> ServerSideEvent</p>
<p>Has anyone experimented with both?
Differences and advantages / disadvantages to using one or the other?
I'm curious what's your input / suggestion.</p>
|
<python><streaming><fastapi><starlette>
|
2024-02-02 10:16:16
| 0
| 10,767
|
Dory Zidon
|
77,925,955
| 10,750,537
|
Unexpected behavior of numpy invert
|
<p>I am wondering why the following Python code</p>
<pre><code>import numpy as np
a = np.array([0], dtype=np.int8)
print(np.invert(a))
</code></pre>
<p>prints</p>
<pre><code>[-1]
</code></pre>
<p>instead of [0]. According to the <a href="https://numpy.org/devdocs/reference/generated/numpy.invert.html" rel="nofollow noreferrer">documentation</a>, the two's complement of 0 should be returned.</p>
<p>What am I getting wrong?</p>
<p>Ps: numpy version = 1.23.4</p>
<p><strong>Edit:</strong> the <a href="https://numpy.org/doc/stable/reference/generated/numpy.invert.html" rel="nofollow noreferrer">official documentation of version 2.1</a> reports that</p>
<blockquote>
<p>In a two’s-complement system, this operation effectively flips all
the bits, resulting in a representation that corresponds to the
negative of the input plus one.</p>
</blockquote>
<p>As suggested by <a href="https://stackoverflow.com/a/77926058">9769953's comment</a>, it should be read as "...a representation that corresponds to the negative of (the input plus one)" to avoid misunderstanding.</p>
|
<python><numpy>
|
2024-02-02 09:35:47
| 1
| 381
|
JtTest
|
77,925,889
| 15,227,516
|
How can I align text by width using python-docx library
|
<p>I am generating a report document using <code>python-docx>=1.1.0</code> and I need to justify text (align it by width).
I don't understand which constant from <code>WD_PARAGRAPH_ALIGNMENT</code> I should use.</p>
<p>I tried to use <code>WD_PARAGRAPH_ALIGNMENT.DISTRIBUTE</code> and all the constants with the prefix <code>JUSTIFY_</code>, but I still can't achieve my goal.</p>
|
<python>
|
2024-02-02 09:27:32
| 0
| 369
|
Andrei Gurko
|
77,925,761
| 4,742,357
|
How to make Python Azure Function (v2) Block Requests without functionkey/code
|
<p>I have created my first Python Azure Function (v2) and noticed that it accepts all requests, regardless of whether they include the "code" (function key) parameter used in .NET Azure Functions.</p>
<p>How can I make the function block all requests that don't have a valid key attached?</p>
|
<python><azure-functions>
|
2024-02-02 09:00:20
| 1
| 1,207
|
Nelssen
|
77,925,755
| 2,971,409
|
Jump ahead n random generations on the specific seed
|
<p>I have a function that returns a set of random choices from a collection, where I want the generation to be similar when called with close values of the input variable <code>time</code> (not actual time, just when that parameter is close to what it was last time).
I have the implementation here in Python, which works as expected, but I'm not satisfied with how I jump ahead using:</p>
<pre><code>for _ in range(split):
random.random()
</code></pre>
<p>This seems like a waste. <strong>Is there a way to jump ahead N steps, as if random had generated N values?</strong> Or perhaps my algorithm should be completely different. Note that split could be very large.</p>
<pre><code>def time_choices(collection: [], time: int, k: int, seed: int) -> []:
seed_with_time = time // k
split = time % k
random.seed(seed + seed_with_time)
for _ in range(split):
random.random()
first = random.choices(collection, k=k - split)
random.seed(seed + seed_with_time + 1)
second = random.choices(collection, k=split)
return first + second
</code></pre>
<p>This could then be called like this:</p>
<pre><code>for i in range(6):
print(time_choices([
"Anna", "Bruno", "Clara", "Dan", "Emma", "Fabian",
"Sara", "Greta", "Hugo", "Iris", "Julia", "Kevin",
"Lena", "Marco", "Nora", "Oscar", "Paula", "Quinn",
"Rose", "Tor", "Uno", "Van", "Will", "Yuri", "Zoe"
], i, 4, 1337))
</code></pre>
<p>Example output:</p>
<pre><code>['Oscar', 'Marco', 'Iris', 'Nora']
['Marco', 'Iris', 'Nora', 'Van']
['Iris', 'Nora', 'Van', 'Quinn']
['Nora', 'Van', 'Quinn', 'Sara']
['Van', 'Quinn', 'Sara', 'Paula']
['Quinn', 'Sara', 'Paula', 'Oscar']
</code></pre>
<p>Note how each name stays for the same amount of time. Most of the time this function won't be called that often with a similar time parameter.</p>
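<p>For what it's worth, the stdlib <code>random</code> (Mersenne Twister) has no jump-ahead since Python 3 removed <code>jumpahead()</code>, but NumPy's bit generators do. A hedged sketch using <code>PCG64.advance</code>, which skips N draws in O(log N) instead of discarding them one by one:</p>

```python
import numpy as np

seed, split = 1337, 10_000

bit_gen = np.random.PCG64(seed)
bit_gen.advance(split)             # advance the state as if `split` draws happened
rng = np.random.Generator(bit_gen)
value = rng.random()               # next draw after the skipped block
```

<p>The same idea should carry over to the <code>random.choices</code>-style sampling above by switching to <code>rng.choice</code>, though the exact draws will of course differ from the stdlib generator's.</p>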
|
<python><random>
|
2024-02-02 08:58:48
| 0
| 464
|
Dwadelfri
|
77,925,670
| 5,168,463
|
Pyspark to_timestamp date format parsing error
|
<p>In my pyspark dataframe, there is a column <code>CallDate</code> with datatype as <code>string</code> containing the values shown below:</p>
<pre><code>2008-04-01T00:00:00
2008-04-01T00:00:00
2008-04-01T00:00:00
2008-04-01T00:00:00
2008-04-01T00:00:00
2008-04-01T00:00:00
</code></pre>
<p>I am trying to convert this columns from datatype <code>string</code> to <code>timestamp</code> using <code>pyspark.sql.functions.to_timestamp()</code>.</p>
<p>When I am running this code:</p>
<pre><code>df.withColumn('IncidentDate', to_timestamp(col('CallDate'), 'yyyy/MM/dd')).select('CallDate', 'IncidentDate').show()
</code></pre>
<p>... I am getting this output:</p>
<pre><code>+-------------------+------------+
| CallDate|IncidentDate|
+-------------------+------------+
|2008-04-01T00:00:00| NULL|
|2008-04-01T00:00:00| NULL|
|2008-04-01T00:00:00| NULL|
|2008-04-01T00:00:00| NULL|
|2008-04-01T00:00:00| NULL|
|2008-04-01T00:00:00| NULL|
</code></pre>
<p>I believe the <code>NULL</code> values are due to the fact that the format specified for the date is not consistent with the actual date string, and since no match is found, the <code>NULL</code> values are returned.</p>
<p>But when I run this code:</p>
<pre><code>df.withColumn('IncidentDate', to_timestamp(col('CallDate'), 'yyyy-MM-dd')).select('CallDate', 'IncidentDate').show()
</code></pre>
<p>I am getting an error saying that</p>
<pre><code>Caused by: org.apache.spark.SparkUpgradeException: [INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER] You may get a different result due to the upgrading to Spark >= 3.0:
Fail to parse '2008-04-01T00:00:00' in the new parser. You can set "spark.sql.legacy.timeParserPolicy" to "LEGACY" to restore the behavior before Spark 3.0, or set to "CORRECTED" and treat it as an invalid datetime string.
</code></pre>
<p>I know the correct parse format should be <code>"yyyy-MM-dd'T'HH:mm:ss"</code> as shown below:</p>
<pre><code>df.withColumn('IncidentDate', to_timestamp(col('CallDate'), "yyyy-MM-dd'T'HH:mm:ss")).select('CallDate', 'IncidentDate').show()
</code></pre>
<p>But my question is why is it that when I give the date parse format as <code>yyyy/MM/dd</code>, Spark returns <code>NULL</code> values but when I give it as <code>yyyy-MM-dd</code>, it is throwing an error?</p>
|
<python><apache-spark><pyspark><to-timestamp>
|
2024-02-02 08:43:55
| 2
| 515
|
DumbCoder
|
77,925,333
| 5,790,653
|
my python code show last day of previous month instead of current month
|
<p>This is my python code:</p>
<pre><code>import datetime
import jdatetime
jcurrent_year = jdatetime.datetime.now().year
jcurrent_month = jdatetime.datetime.now().month
jcurrent_day = jdatetime.datetime.now().day
jnow = jdatetime.datetime.now()
jlast_day_of_current_month = jdatetime.date(jnow.year, 1 if jnow.month==12 else jnow.month, 1) - jdatetime.timedelta(days=1)
dcurrent_year = datetime.datetime.now().year
dcurrent_month = datetime.datetime.now().month
dcurrent_day = datetime.datetime.now().day
dnow = datetime.datetime.now()
dlast_day_of_current_month = datetime.date(dnow.year, 1 if dnow.month==12 else dnow.month, 1) - datetime.timedelta(days=1)
</code></pre>
<p>I use <code>jdatetime</code> for the Jalali date format, but to check whether something was wrong with that module, I tried <code>datetime</code> as well, and it's wrong too.</p>
<p>I googled and saw different answers on Stack Overflow and other websites, but they're either using modules other than <code>datetime</code>, or they're showing last month's last day.</p>
<p>I'm not sure what's wrong with this code; I'm trying to dynamically find the last day of the current month in the Jalali calendar.</p>
<p>The date in Jalali is now 13th day of 11th month, the year is 1402 and this month has 30 days.</p>
<p>As you see, the <code>dlast_day_of_current_month</code> shows the 31st of January, instead of 29th of February.</p>
<pre><code>>>> jlast_day_of_current_month
jdatetime.date(1402, 10, 30)
>>> dlast_day_of_current_month
datetime.date(2024, 1, 31)
</code></pre>
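<p>For reference, the expression above computes the last day of the <em>previous</em> month: it builds the first day of the <em>current</em> month and subtracts one day (and the <code>1 if month==12</code> branch also maps December to January of the same year). Two hedged fixes with the Gregorian <code>datetime</code>; the roll-forward variant should carry over to <code>jdatetime</code> as well, since it supports the same <code>replace</code>/<code>timedelta</code> arithmetic:</p>

```python
import calendar
import datetime


def last_day_of_month(d: datetime.date) -> datetime.date:
    # monthrange returns (weekday of day 1, number of days in the month)
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])


def last_day_of_month_roll(d: datetime.date) -> datetime.date:
    # first day of the *next* month minus one day; works for any month length
    first_of_next = (d.replace(day=1) + datetime.timedelta(days=32)).replace(day=1)
    return first_of_next - datetime.timedelta(days=1)


print(last_day_of_month(datetime.date(2024, 2, 2)))  # 2024-02-29
```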
|
<python>
|
2024-02-02 07:32:18
| 1
| 4,175
|
Saeed
|
77,925,274
| 3,727,079
|
How can I append one text file to another in Python, if there might be overlap between the two?
|
<p>Say I have two .txt files. The first one is:</p>
<pre><code>2024-01-26 09:00:00, Alice, 4, Win
2024-01-27 09:00:00, Bob, 6, Loss
2024-01-28 09:00:00, Charlie, 2, Loss
2024-01-29 09:00:00, Denise, 3, Loss
</code></pre>
<p>The second one is:</p>
<pre><code>2024-01-29 09:00:00, Denise, 3, Loss
2024-01-30 09:00:00, Eve, 3, Win
2024-01-31 09:00:00, Frank, 15, Win
</code></pre>
<p>How can I merge into this?</p>
<pre><code>2024-01-26 09:00:00, Alice, 4, Win
2024-01-27 09:00:00, Bob, 6, Loss
2024-01-28 09:00:00, Charlie, 2, Loss
2024-01-29 09:00:00, Denise, 3, Loss
2024-01-30 09:00:00, Eve, 3, Win
2024-01-31 09:00:00, Frank, 15, Win
</code></pre>
<p>I know of my data:</p>
<ul>
<li>Both files have their data in chronological order</li>
<li>It's possible there will be overlap, in which case the two text files should have exactly the same data, and I want to delete the overlap</li>
<li>The files are very big, so efficiency matters</li>
<li>Order is important; I want the output in chronological order</li>
</ul>
<p>(Basically, it's like taking two chunks of data from the same Jan-Dec dataset, say Jan-Mar and Mar-Jun, and then merging them back into a Jan-Jun block. No guarantees there'll actually be an overlap as well, e.g. if one block is Jan-Mar and the other is Jun-Aug.)</p>
<p>Things I've considered/tried ...</p>
<ul>
<li>It's easy enough to directly append the two files. I could conceivably append them and then delete duplicate lines. However deleting duplicate lines does not look quick, since the <a href="https://stackoverflow.com/questions/15830290/remove-duplicates-from-text-file">code</a> would search through the entire file even though I know the duplication only occurs in the middle (at the end of the first file and the start of the second file).</li>
<li>Because the order is important, <code>set</code> does not work well to detect overlaps. I could conceivably use it anyway and then sort the output before saving, but destroying the order before reassembling it also looks inefficient.</li>
<li>The human way to do this would be to take the last line of the first file and see if it's before the first line of the second file; if yes then there's no problem with direct merging, while if no I would skip until I find the first line in the second file that's after that date. But this looks complicated, because I'd have to extract the last line first, convert all the text before the first comma into date/time, and compare the two. (With more <code>while</code> loops afterwards.)</li>
<li>Bing suggested I use <code>shutil.copyfileobj</code>, but that appears to just append one file into the other without looking for overlaps.</li>
</ul>
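<p>A minimal sketch of the "human way" described above (assuming each line starts with a timestamp before the first comma, here in <code>YYYY-MM-DD HH:MM:SS</code> format):</p>

```python
from datetime import datetime

def merge_files(path1, path2, out_path, fmt="%Y-%m-%d %H:%M:%S"):
    # timestamp of a line = the text before the first comma
    def ts(line):
        return datetime.strptime(line.split(",", 1)[0], fmt)

    with open(path1) as f1:
        lines1 = f1.readlines()
    with open(path2) as f2:
        lines2 = f2.readlines()

    # if the files overlap, drop the duplicated head of the second file
    if lines1 and lines2 and ts(lines1[-1]) >= ts(lines2[0]):
        cutoff = ts(lines1[-1])
        lines2 = [line for line in lines2 if ts(line) > cutoff]

    with open(out_path, "w") as out:
        out.writelines(lines1 + lines2)
```

<p>This still reads both files into memory, though; for very large files the same comparison would presumably have to be done line by line.</p>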
<p>Is there a better way to do this? I'm actually kind of shocked I've not been able to Google for an answer to this, because it seems like it should be a common problem.</p>
|
<python><datetime><merge>
|
2024-02-02 07:20:22
| 1
| 399
|
Allure
|
77,925,185
| 19,123,103
|
ValueError: Expected EmbeddingFunction.__call__ to have the following signature
|
<p>When I try to pass a Chroma Client to Langchain that uses <code>OpenAIEmbeddings</code>, I get a ValueError:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
</code></pre>
<p>How do I resolve this error?</p>
<p>The error seems to be related to the fact that langchain's embedding function implementation doesn't meet the new requirements introduced by Chroma's latest update because the issue showed up after upgrading Chroma.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import chromadb
from langchain_openai import OpenAIEmbeddings
client = chromadb.PersistentClient()
collection = client.get_or_create_collection(
name='chroma',
embedding_function=OpenAIEmbeddings()
)
</code></pre>
<p>I have langchain==0.1.1, langchain-openai==0.0.3 and chromadb==0.4.22. Looking into github issues, it seems downgrading chromadb to 0.4.15 solves the issue but since these libraries will upgrade even more in the coming months, I don't want to downgrade chroma but find a solution that works in the current version.</p>
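<p>One workaround I'm considering (untested against future versions) is a small adapter whose <code>__call__</code> has the exact <code>(self, input)</code> signature Chroma now validates, delegating to langchain's <code>embed_documents</code>:</p>

```python
class ChromaEmbeddingAdapter:
    """Wraps a langchain Embeddings object so it matches the signature
    Chroma validates for embedding functions: (self, input)."""

    def __init__(self, langchain_embeddings):
        self._embeddings = langchain_embeddings

    def __call__(self, input):
        # langchain's batch-embedding entry point returns a list of vectors
        return self._embeddings.embed_documents(input)
```

<p>Usage would then be <code>client.get_or_create_collection(name='chroma', embedding_function=ChromaEmbeddingAdapter(OpenAIEmbeddings()))</code>, but I don't know if this is the intended approach.</p>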
|
<python><valueerror><langchain><chromadb><openaiembeddings>
|
2024-02-02 06:59:32
| 2
| 25,331
|
cottontail
|
77,925,087
| 9,394,465
|
combine 2 dataframes with a duplicate common identifier in both
|
<p>I have two dataframes with exactly the same column names, and I am trying to combine them, preferring the second df's rows where the data conflicts.</p>
<p>df1:</p>
<pre><code>A,B,C,D
123,xyz,S1,1111
123,xyz,S1,1111
234,mno,S3,2222
999,abc,S9,9999
999,abc,S9,9999
</code></pre>
<p>df2:</p>
<pre><code>A,B,C,D
123,abc,S1,1234
123,abc,S1,1234
123,abc,S1,1234
123,cde,S2,2345
234,nop,S3,
567,pqr,S5,5555
</code></pre>
<p>I wanted to have the following output:</p>
<pre><code>A,B,C,D
123,abc,S1,1234
123,abc,S1,1234
123,abc,S1,1234
123,cde,S2,2345
234,nop,S3,
567,pqr,S5,5555
999,abc,S9,9999
999,abc,S9,9999
</code></pre>
<p>i.e., use column A as a common identifier in both, and take data from both df1 and df2 for non-conflicting rows (preserving duplicates) and for conflicting rows, prefer the df2's rows (preserving duplicates)</p>
<p>I tried doing the combine operation like this:</p>
<pre><code>df1 = pd.read_csv(StringIO(csv1_data), dtype=str, keep_default_na=False)
df2 = pd.read_csv(StringIO(csv2_data), dtype=str, keep_default_na=False)
df1.set_index(['A'], inplace=True)
df2.set_index(['A'], inplace=True)
result_df = df2.combine_first(df1)
result_df.reset_index(inplace=True)
print(result_df)
</code></pre>
<p>but getting the following result:</p>
<pre><code> A B C D
0 123 abc S1 1234
1 123 abc S1 1234
2 123 abc S1 1234
3 123 abc S1 1234
4 123 abc S1 1234
5 123 abc S1 1234
6 123 cde S2 2345
7 123 cde S2 2345
8 234 nop S3
9 567 pqr S5 5555
10 999 abc S9 9999
11 999 abc S9 9999
</code></pre>
<p>This preserves data for non-conflicting rows (999 appears twice, as expected) but duplicates the conflicting ones (123 appears 8 times instead of 4).</p>
<p><em>EDIT</em>:
Note that the indexes can be multiple so instead of just 'A', there can be ['A', 'A1', 'A2'...] and it would be dynamically set before combine even takes place. Also note that the data can be in millions which is truncated here for simplicity.</p>
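<p>For context, the behaviour I'm after could be sketched like this (keys passed as a list, since the identifier can be composite):</p>

```python
import pandas as pd

def combine_prefer_second(df1, df2, keys):
    # keep only df1 rows whose key combination never appears in df2,
    # then append all of df2's rows (duplicates preserved on both sides)
    mask = ~df1.set_index(keys).index.isin(df2.set_index(keys).index)
    combined = pd.concat([df2, df1[mask]], ignore_index=True)
    # stable sort keeps each key group's original row order
    return combined.sort_values(keys, kind="stable", ignore_index=True)
```

<p>I don't know how well this scales to millions of rows, though.</p>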
|
<python><pandas>
|
2024-02-02 06:36:48
| 2
| 513
|
SpaceyBot
|
77,925,004
| 1,176,504
|
SMS not sending even if the phone number is verified - 400 Error Code
|
<p>I signed up with Twilio yesterday for educational purposes and am just trying to send myself an SMS, but it keeps giving me a 400 - Bad Request error.</p>
<p>I am from a high-risk country, but Twilio has automatically updated my Geo Permissions. I am using Python and there are no errors in my code.</p>
<p>Python Error:</p>
<blockquote>
<p>twilio.base.exceptions.TwilioRestException: HTTP 400 error: Unable to create record: The number +947XXXXXXXX is unverified. Trial accounts cannot send messages to unverified numbers;</p>
</blockquote>
<p>This error occurs when I try to send the SMS from the Python application as well as from the Twilio Console as a test message. I do not want to spend on a paid account just yet. I tried to add a new Caller ID under Verified Caller IDs, but now it's not sending the OTP either, which I have tried in every way I can think of. Is the 'Extension' the country code? Please provide a solution. I appreciate your help.</p>
|
<python><twilio><http-status-code-400>
|
2024-02-02 06:17:17
| 0
| 1,627
|
guitarlass
|
77,924,487
| 1,982,032
|
How can I simplify the expression to get the cell value?
|
<p>My target dataframe is something as below:</p>
<pre><code>df
SimFinId Currency Fiscal Year ... Net Extraordinary Gains (Losses) Net Income Net Income (Common)
Ticker Report Date ...
A 2018-10-31 45846 USD 2018 ... NaN 316000000 316000000
2019-10-31 45846 USD 2019 ... NaN 1071000000 1071000000
2020-10-31 45846 USD 2020 ... NaN 719000000 719000000
2021-10-31 45846 USD 2021 ... NaN 1210000000 1210000000
2022-10-31 45846 USD 2022 ... NaN 1254000000 1254000000
... ... ... ... ... ... ... ...
ZYXI 2018-12-31 171401 USD 2018 ... NaN 9552000 9552000
2019-12-31 171401 USD 2019 ... NaN 9492000 9492000
2020-12-31 171401 USD 2020 ... NaN 9074000 9074000
2021-12-31 171401 USD 2021 ... NaN 17103000 17103000
2022-12-31 171401 USD 2022 ... NaN 17048000 17048000
</code></pre>
<p>I want to get the value of the <code>Net Income (Common)</code> cell whose Ticker is <code>A</code> and <code>Fiscal Year</code> is 2019.</p>
<pre><code>df[df['Fiscal Year'] == 2019].loc['A','Net Income (Common)'].values[0]
1071000000
</code></pre>
<p>How can I simplify this expression to get the cell value?</p>
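<p>For example, I was wondering about something along the lines of <code>query()</code>, which can reference index levels by name (sketched here on a small stand-in frame with the same shape):</p>

```python
import pandas as pd

# small stand-in frame with the same index/column shape as the real data
df = pd.DataFrame(
    {"Fiscal Year": [2018, 2019], "Net Income (Common)": [316000000, 1071000000]},
    index=pd.MultiIndex.from_tuples(
        [("A", "2018-10-31"), ("A", "2019-10-31")],
        names=["Ticker", "Report Date"],
    ),
)

# query() resolves index level names directly; backticks quote spaced columns,
# and .item() extracts the single matching value
value = df.query("Ticker == 'A' and `Fiscal Year` == 2019")["Net Income (Common)"].item()
```

<p>Is this considered cleaner, or is there a better idiom?</p>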
|
<python><pandas><dataframe>
|
2024-02-02 03:14:36
| 2
| 355
|
showkey
|
77,924,456
| 584,239
|
How to set margins for a custom page size using Python pycups
|
<p>Below is a script using pycups. I am able to print with a custom page size, but I also need to set the margins. How do I do that?</p>
<pre><code>import cups
def print_to_printer(printer_name, file_path, custom_width=10, custom_length=6):  # dimensions in inches
conn = cups.Connection()
# Get the printer information
printer_info = conn.getPrinterAttributes(printer_name)
# Convert inches to points (1 inch = 72 points)
custom_width_points = str(int(custom_width * 72))
custom_length_points = str(int(custom_length * 72))
# Create the page size string
page_size_str = f"{custom_width_points}x{custom_length_points}"
# Start the print job with custom page size
print_options = {'PageSize': page_size_str}
print_job_id = conn.printFile(printer_name, file_path, "Print Job", print_options)
print(f"Print job {print_job_id} sent to {printer_name} with custom page size ({custom_width}x{custom_length} inches)")
# Example usage
printer_name = "TVS_MSP-250CH-TVS-original" # Replace with your actual printer name
file_to_print = "/home/tk/Documents/bill.txt" # Replace with the path to your file
custom_width = 10 # Replace with your desired width in inches
custom_length = 6 # Replace with your desired length in inches
print_to_printer(printer_name, file_to_print, custom_width, custom_length)
</code></pre>
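<p>For what it's worth, I was wondering whether the standard CUPS <code>page-left</code>/<code>page-right</code>/<code>page-top</code>/<code>page-bottom</code> options (values in points) would work here, though I don't know if my driver honours them. A sketch of building the options dict:</p>

```python
def build_print_options(width_in, length_in, margin_in=0.5):
    # 1 inch = 72 points; PageSize is "WxL" in points, and the common CUPS
    # options page-left/right/top/bottom also take their values in points
    def pt(inches):
        return str(int(inches * 72))
    return {
        "PageSize": f"{pt(width_in)}x{pt(length_in)}",
        "page-left": pt(margin_in),
        "page-right": pt(margin_in),
        "page-top": pt(margin_in),
        "page-bottom": pt(margin_in),
    }
```

<p>The resulting dict would be passed as <code>print_options</code> to <code>conn.printFile()</code> in the script above.</p>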
|
<python><python-3.x><cups>
|
2024-02-02 03:02:37
| 2
| 9,691
|
kumar
|
77,924,248
| 1,396,516
|
How to write to a BigQuery table using ApacheBeam ran locally without running the Google Cloud CLI executable?
|
<p>I can read data from a BigQuery table using the BigQuery Python client, but I cannot authenticate when I run Apache Beam + Python locally.</p>
<p><a href="https://cloud.google.com/dataflow/docs/concepts/security-and-permissions" rel="nofollow noreferrer">Documentation</a> only gives the CLI option "When you run locally, your Apache Beam pipeline runs as the Google Cloud account that you configured with the Google Cloud CLI executable. Hence, locally run Apache Beam SDK operations and your Google Cloud account have access to the same files and resources."</p>
<p>I cannot save the contents of the Service Account as a file, it will be unsafe in my set-up.</p>
<p>I wanted to authenticate using an environment variable which value is the <em>value</em> of the SA API key, like I have done with <code>bigquery.Client</code>.</p>
<p>I have added <code>method="STREAMING_INSERTS"</code> to avoid using the bucket.</p>
<p>The code I have got so far:</p>
<pre><code>import os
import pandas
import pandas_gbq
from google.cloud import bigquery
from google.oauth2 import service_account
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.internal.clients import bigquery as beam_bigquery
assert 'google_key' in os.environ, "Please define a secret called google_key; its value must be the contents of access key in JSON format"
credentials = service_account.Credentials.from_service_account_info(eval("{" + os.environ['google_key'] +"}"))
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
# reading data - client.query() - works fine
query_job = client.query("SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 1")
df = query_job.to_dataframe()
# writing data using load_table_from_dataframe also works fine
# https://cloud.google.com/bigquery/docs/samples/bigquery-load-table-dataframe
quotes_list = [{'source': 'Mahatma Gandhi', 'quote': 'My life is my message.'},
{'source': 'Yoda', 'quote': "Do, or do not. There is no 'try'."},
]
df_quotes = pandas.DataFrame(quotes_list)
job_config = bigquery.LoadJobConfig(
schema=[bigquery.SchemaField(col, bigquery.enums.SqlTypeNames.STRING) for col in df_quotes.columns],
write_disposition="WRITE_TRUNCATE",
)
table_id = "dummy_dataset.quotes_table"
job = client.load_table_from_dataframe(df_quotes, table_id, job_config=job_config) # Make an API request.
job.result() # Wait for the job to complete.
table = client.get_table(table_id) # Make an API request.
print(f"Loaded {table.num_rows} rows and {len(table.schema)} columns")
# writing data using pandas_gbq works fine
# https://pandas-gbq.readthedocs.io/en/latest/writing.html#writing-to-an-existing-table
pandas_gbq.to_gbq(df, "dummy_dataset.dummy_table2", credentials.project_id, credentials=credentials, if_exists='replace')
# writing using WriteToBigQuery does not work
beam_options = PipelineOptions()
with beam.Pipeline(options=beam_options) as pipeline:
quotes = pipeline | beam.Create(quotes_list)
quotes | "WriteToBigQuery" >> beam.io.WriteToBigQuery(table=f'{credentials.project_id}:dummy_dataset.dummy_table',
ignore_unknown_columns=True,
schema='source:STRING, quote:STRING',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
method="STREAMING_INSERTS")
</code></pre>
<p>The errors and warnings that I get when I use <code>beam_options = PipelineOptions()</code> are as follows:</p>
<pre><code>WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth._default:No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: Remote end closed connection without response
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth.compute_engine._metadata:Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: [Errno -2] Name or service not known
WARNING:google.auth._default:No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
ERROR:apache_beam.runners.common:Project was not passed and could not be determined from the environment. [while running 'WriteToBigQuery/_StreamToBigQuery/StreamInsertRows/ParDo(BigQueryWriteFn)']
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1493, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 566, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "apache_beam/runners/common.py", line 572, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery.py", line 1494, in start_bundle
self.bigquery_wrapper = bigquery_tools.BigQueryWrapper(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery_tools.py", line 356, in __init__
self.gcp_bq_client = client or gcp_bigquery.Client(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/bigquery/client.py", line 238, in __init__
super(Client, self).__init__(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment.
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1493, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 566, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "apache_beam/runners/common.py", line 572, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery.py", line 1494, in start_bundle
self.bigquery_wrapper = bigquery_tools.BigQueryWrapper(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery_tools.py", line 356, in __init__
self.gcp_bq_client = client or gcp_bigquery.Client(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/bigquery/client.py", line 238, in __init__
super(Client, self).__init__(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/BigQueryWrite/main.py", line 27, in <module>
with beam.Pipeline(options=beam_options) as pipeline:
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/pipeline.py", line 612, in __exit__
self.result = self.run()
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/pipeline.py", line 586, in run
return self.runner.run_pipeline(self, self._options)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/direct/direct_runner.py", line 128, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 202, in run_pipeline
self._latest_run_result = self.run_via_runner_api(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 224, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 466, in run_stages
bundle_results = self._execute_bundle(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 794, in _execute_bundle
self._run_bundle(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1031, in _run_bundle
result, splits = bundle_manager.process_bundle(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py", line 1367, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py", line 384, in push
response = self.worker.do_instruction(request)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/worker/sdk_worker.py", line 639, in do_instruction
return getattr(self, request_type)(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/worker/sdk_worker.py", line 677, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1071, in process_bundle
op.start()
File "apache_beam/runners/worker/operations.py", line 929, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 931, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 934, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/common.py", line 1510, in apache_beam.runners.common.DoFnRunner.start
File "apache_beam/runners/common.py", line 1495, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 1547, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 1493, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 566, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "apache_beam/runners/common.py", line 572, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery.py", line 1494, in start_bundle
self.bigquery_wrapper = bigquery_tools.BigQueryWrapper(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/apache_beam/io/gcp/bigquery_tools.py", line 356, in __init__
self.gcp_bq_client = client or gcp_bigquery.Client(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/bigquery/client.py", line 238, in __init__
super(Client, self).__init__(
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/home/runner/BigQueryWrite/.pythonlibs/lib/python3.10/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment. [while running 'WriteToBigQuery/_StreamToBigQuery/StreamInsertRows/ParDo(BigQueryWriteFn)']
</code></pre>
|
<python><authentication><google-bigquery><apache-beam>
|
2024-02-02 01:33:38
| 1
| 3,567
|
Yulia V
|
77,924,142
| 7,662,164
|
JAX `vjp` does not recognize cotangent argument with `custom_vjp`
|
<p>I have a JAX function <code>cart_deriv()</code> which takes another function <code>f</code> and returns the Cartesian derivative of <code>f</code>, implemented as follows:</p>
<pre><code>@partial(custom_vjp, nondiff_argnums=0)
def cart_deriv(f: Callable[..., float],
l: int,
R: Array
) -> Array:
df = lambda R: f(l, jnp.dot(R, R))
for i in range(l):
df = jacrev(df)
return df(R)
def cart_deriv_fwd(f, l, primal):
primal_out = cart_deriv(f, l, primal)
residual = cart_deriv(f, l+1, primal) ## just a test
return primal_out, residual
def cart_deriv_bwd(f, residual, cotangent):
cotangent_out = jnp.ones(3) ## just a test
return (None, cotangent_out)
cart_deriv.defvjp(cart_deriv_fwd, cart_deriv_bwd)
if __name__ == "__main__":
def test_func(l, r2):
return l + r2
primal_out, f_vjp = vjp(cart_deriv,
jax.tree_util.Partial(test_func),
2,
jnp.array([1., 2., 3.])
)
cotangent = jnp.ones((3, 3))
cotangent_out = f_vjp(cotangent)
print(cotangent_out[1].shape)
</code></pre>
<p>However this code produces the error:</p>
<pre><code>TypeError: cart_deriv_bwd() missing 1 required positional argument: 'cotangent'
</code></pre>
<p>I have checked that the syntax agrees with that in the documentation. I'm wondering why the argument <code>cotangent</code> is not recognized by <code>vjp</code>, and how to fix this error?</p>
|
<python><function><jax><automatic-differentiation>
|
2024-02-02 00:51:44
| 1
| 335
|
Jingyang Wang
|
77,924,082
| 1,429,402
|
Apply the Savitzky-Golay filter with locked endpoints
|
<p>I have a stream of values I would like to smooth with <code>locked end-points</code>, meaning that the first and last data elements of the array are taken into account in the filtering algorithm but remain untouched.</p>
<p>For the smoothing algorithm I chose <code>scipy.signal.savgol_filter</code>, which gives me good results, except that I cannot figure out how to lock the end-points even after padding the values.</p>
<p>Example:</p>
<pre><code>import numpy as np
from scipy.signal import savgol_filter
def smooth_data(data, n, polyorder):
# n is the number samples on either side of the sliding window
window_length = n * 2 + 1 # window_length is always odd
# pad the end-points
padded_data = np.zeros(data.shape[0] + 2 * n)
padded_data[:n] = data[0]
padded_data[-n:] = data[-1]
padded_data[n:-n] = data
    filtered_data = savgol_filter(padded_data, window_length, polyorder)[n:-n]
    return filtered_data
source_data = np.sort(np.random.random(100))
smooth_data = smooth_data(source_data, n=31, polyorder=5)
print(np.allclose(source_data[0], smooth_data[0])) # False
print(np.allclose(source_data[-1], smooth_data[-1])) # False
</code></pre>
<p>If you run this code you will see that despite padding the data, the <code>ends-points</code> are still affected.</p>
<p>As a temp solution I decided to linearly interpolate the filtered data so the <code>end-points</code> do match.</p>
<pre><code>def smooth_data(data, n, polyorder):
# n is the number samples on either side of the sliding window
window_length = n * 2 + 1 # window_length is always odd
# pad the end-points
padded_data = np.zeros(data.shape[0] + 2 * n)
padded_data[:n] = data[0]
padded_data[-n:] = data[-1]
padded_data[n:-n] = data
filtered_data = savgol_filter(padded_data , window_length, polyorder)[n:-n]
# linearly adjust the values so the endpoints match
delta0 = data[0] - filtered_data[0]
delta1 = data[-1] - filtered_data[-1]
linspace = np.linspace(0, 1, data.shape[0])
filtered_data += delta0 + (delta1 - delta0) * linspace
return filtered_data
source_data = np.sort(np.random.random(100))
smooth_data = smooth_data(source_data , n=31, polyorder=5)
print(np.allclose(source_data[0], smooth_data[0])) # True
print(np.allclose(source_data[-1], smooth_data[-1])) # True
</code></pre>
<p>This does the trick, but feels very dirty to me. Also it is debatable if there is still a need to pad the values when interpolating. Results are a bit different with or without. So far i've opted to keep the padded values.</p>
<p><strong>QUESTION</strong></p>
<p>Can <code>savgol_filter</code> be made to lock the <code>end-points</code>? And if not, are there any other filtering schemes I could use to reduce noise in my data but lock the <code>end-points</code>?</p>
<p>Sample code would be appreciated.</p>
|
<python><numpy><scipy>
|
2024-02-02 00:29:56
| 1
| 5,983
|
Fnord
|
77,923,904
| 10,705,248
|
How to run multiple python based slurm jobs together in HPC
|
<p>I need to submit 100 Slurm jobs that all perform the same computation with one slight change (the only difference is the year; each file handles a different year). Is there a way to submit them together without writing a separate Slurm file for each and running them one by one?</p>
<p>For example, I have 100 python files with the name: <code>process1.py</code>, <code>process2.py</code>, <code>process3.py</code>, ... and so on. I am looking for a way I can assign the hpc resources for all of them together, something like the below-</p>
<pre><code>#!/bin/bash
#SBATCH -n 2
#SBATCH -p main
#SBATCH --qos main
#SBATCH -N 1
#SBATCH -J name
.
.
. #other SBATCH commands
.
.
python process1.py
python process2.py
python process3.py
python process4.py....
</code></pre>
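<p>I have seen mentions of Slurm job arrays, which might fit. A sketch, assuming the scripts really are named <code>process1.py</code> through <code>process100.py</code>; as I understand it, the resource requests apply per array task rather than to the whole set:</p>

```shell
#!/bin/bash
#SBATCH -n 2
#SBATCH -p main
#SBATCH --qos main
#SBATCH -N 1
#SBATCH -J name
#SBATCH --array=1-100
# one array task is launched per index; SLURM_ARRAY_TASK_ID holds the index
python "process${SLURM_ARRAY_TASK_ID}.py"
```

<p>Would a single <code>sbatch</code> of this script be the right way to go?</p>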
|
<python><parallel-processing><slurm><hpc><multiple-processes>
|
2024-02-01 23:19:36
| 1
| 854
|
lsr729
|
77,923,835
| 7,106,508
|
How to save a stream object in Azure text to speech without speaking the text using Python
|
<p>I want to convert a book to audio and save the file. Naturally, I don't want my computer to speak the book out loud while the conversion happens, but looking at the Azure documentation, I don't see a way to get a stream object without the text being spoken first. I already have code set up to save the file, but I can't save it unless that audio is played first. I want to convert text to a stream object without having to listen to my computer utter it. I realize a very inelegant workaround is to simply mute my computer, but suppose the conversion takes an hour and I need to take a phone call during it.</p>
<pre><code>import azure.cognitiveservices.speech as speechsdk
speech_config = speechsdk.SpeechConfig(subscription=subscription_key,
region=service_region)
audio_config = speechsdk.audio.AudioOutputConfig(use_default_speaker=True)
speech_config.speech_synthesis_voice_name = 'ar-EG-SalmaNeural'
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
</code></pre>
<p>In the following line, I don't want to do this step because this utters the audio:</p>
<pre><code>result = speech_synthesizer.speak_text_async("I'm excited to try text to speech").get()
</code></pre>
<p>But I have to do that step in order to get the following steps:</p>
<pre><code>stream = AudioDataStream(result)
stream.save_to_wav_file(path)
</code></pre>
<p>Relevant documentation: <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-speech-synthesis?tabs=browserjs%2Cterminal&amp;pivots=programming-language-python" rel="nofollow noreferrer">How to synthesize speech from text</a></p>
<p>I've tried looking at all the methods listed in the <code>speech_synthesizer</code> object but all of them involve speaking the text, they are listed here:</p>
<pre><code>class SpeechSynthesizer(builtins.object)
| SpeechSynthesizer(speech_config: azure.cognitiveservices.speech.SpeechConfig, audio_config: Optional[azure.cognitiveservices.speech.audio.AudioOutputConfig] = <azure.cognitiveservices.speech.audio.AudioOutputConfig object at 0x137ffc790>, auto_detect_source_language_config: azure.cognitiveservices.speech.languageconfig.AutoDetectSourceLanguageConfig = None)
|
| A speech synthesizer.
|
| :param speech_config: The config for the speech synthesizer
| :param audio_config: The config for the audio output.
| This parameter is optional.
| If it is not provided, the default speaker device will be used for audio output.
| If it is None, the output audio will be dropped.
| None can be used for scenarios like performance test.
| :param auto_detect_source_language_config: The auto detection source language config
|
| Methods defined here:
|
| __del__(self)
|
| __init__(self, speech_config: azure.cognitiveservices.speech.SpeechConfig, audio_config: Optional[azure.cognitiveservices.speech.audio.AudioOutputConfig] = <azure.cognitiveservices.speech.audio.AudioOutputConfig object at 0x137ffc790>, auto_detect_source_language_config: azure.cognitiveservices.speech.languageconfig.AutoDetectSourceLanguageConfig = None)
| Initialize self. See help(type(self)) for accurate signature.
|
| get_voices_async(self, locale: str = '') -> azure.cognitiveservices.speech.ResultFuture
| Get the available voices, asynchronously.
|
| :param locale: Specify the locale of voices, in BCP-47 format; or leave it empty to get all available voices.
| :returns: A task representing the asynchronous operation that gets the voices.
|
| speak_ssml(self, ssml: str) -> azure.cognitiveservices.speech.SpeechSynthesisResult
| Performs synthesis on ssml in a blocking (synchronous) mode.
|
| :returns: A SpeechSynthesisResult.
|
| speak_ssml_async(self, ssml: str) -> azure.cognitiveservices.speech.ResultFuture
| Performs synthesis on ssml in a non-blocking (asynchronous) mode.
|
| :returns: A future with SpeechSynthesisResult.
|
| speak_text(self, text: str) -> azure.cognitiveservices.speech.SpeechSynthesisResult
| Performs synthesis on plain text in a blocking (synchronous) mode.
|
| :returns: A SpeechSynthesisResult.
|
| speak_text_async(self, text: str) -> azure.cognitiveservices.speech.ResultFuture
| Performs synthesis on plain text in a non-blocking (asynchronous) mode.
|
| :returns: A future with SpeechSynthesisResult.
|
| start_speaking_ssml(self, ssml: str) -> azure.cognitiveservices.speech.SpeechSynthesisResult
| Starts synthesis on ssml in a blocking (synchronous) mode.
|
| :returns: A SpeechSynthesisResult.
|
| start_speaking_ssml_async(self, ssml: str) -> azure.cognitiveservices.speech.ResultFuture
| Starts synthesis on ssml in a non-blocking (asynchronous) mode.
|
| :returns: A future with SpeechSynthesisResult.
|
| start_speaking_text(self, text: str) -> azure.cognitiveservices.speech.SpeechSynthesisResult
| Starts synthesis on plain text in a blocking (synchronous) mode.
|
| :returns: A SpeechSynthesisResult.
|
| start_speaking_text_async(self, text: str) -> azure.cognitiveservices.speech.ResultFuture
| Starts synthesis on plain text in a non-blocking (asynchronous) mode.
|
| :returns: A future with SpeechSynthesisResult.
|
| stop_speaking(self) -> None
| Synchronously terminates ongoing synthesis operation.
| This method will stop playback and clear unread data in PullAudioOutputStream.
|
| stop_speaking_async(self) -> azure.cognitiveservices.speech.ResultFuture
| Asynchronously terminates ongoing synthesis operation.
| This method will stop playback and clear unread data in PullAudioOutputStream.
|
| :returns: A future that is fulfilled once synthesis has been stopped.
|
</code></pre>
<h5>UPDATE</h5>
<p>Someone recommended using the <code>synthesize_speech_to_stream_async</code> method, but his code resulted in errors and I haven't heard back from him; still, I think he might be on to something.</p>
<p>His code was</p>
<pre><code>speech_config = speechsdk.SpeechConfig(subscription=subscription_key, region=service_region)
speech_config.speech_synthesis_voice_name = 'ar-EG-SalmaNeural'
stream = speechsdk.AudioDataStream(format=speechsdk.AudioStreamFormat(pcm_data_format=speechsdk.PcmDataFormat.Pcm16Bit, sample_rate_hertz=16000, channel_count=1))
result = speechsdk.SpeechSynthesizer(speech_config=speech_config).synthesize_speech_to_stream_async("I'm excited to try text to speech", stream).get()
stream.save_to_wav_file(path)
</code></pre>
<p>This generated an error:</p>
<pre><code>stream = speechsdk.AudioDataStream(
format=speechsdk.AudioStreamFormat(
pcm_data_format=speechsdk.PcmDataFormat.Pcm16Bit,
sample_rate_hertz=16000, channel_count=1))
</code></pre>
<p>which recommended:</p>
<pre><code> stream = speechsdk.AudioDataStream(
format=speechsdk.AudioStreamWaveFormat(
pcm_data_format=speechsdk.PcmDataFormat.Pcm16Bit,
sample_rate_hertz=16000, channel_count=1))
</code></pre>
<p>But that generated:</p>
<pre><code>AttributeError: module 'azure.cognitiveservices.speech' has no attribute 'PcmDataFormat'
</code></pre>
|
<python><azure-cognitive-services>
|
2024-02-01 22:58:14
| 1
| 304
|
bobsmith76
|
77,923,733
| 1,375,292
|
Recursive version of python unittest assertDictContainsSubset
|
<p><a href="https://stackoverflow.com/questions/20050913/python-unittests-assertdictcontainssubset-recommended-alternative">Python unittest's assertDictContainsSubset recommended alternative</a> discusses how to check if a <code>dict</code> contains another <code>dict</code> using <code>d1 | d2</code>. However, that seems to only work at the top level.</p>
<p>If I have</p>
<pre><code>d1 = {
    'a': 1,
    'b': {
        'c': 2,
        'd': 3,
    },
}
d2 = {
    'b': {
        'c': 2,
    },
}
</code></pre>
<p>and I want to check that <code>d1</code> contains everything in <code>d2</code>, I cannot do <code>assertEquals(d1, d1 | d2)</code>. The union handles the extra <code>a</code> field at the top level of <code>d1</code> fine, but the comparison fails because the union replaces the nested <code>'b'</code> dict wholesale, so <code>d</code> is missing from the merged result.</p>
<p>I could write code that loops over <code>d1.items</code> and recurses if the value is a dict, but that starts to get awkward (especially since it would also have to handle tuples and lists in the general case).</p>
<p>Is there something that already exists that handles this?</p>
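<p>Nothing in the standard library appears to do this out of the box; a recursive helper is short to sketch, though. This version (names are invented) recurses only into dicts and compares every other value, including lists and tuples, by plain equality:</p>

```python
def is_subset(subset, superset):
    """Return True if every key/value in `subset` appears in `superset`,
    recursing into nested dicts. Non-dict values (including lists and
    tuples) are compared with plain equality."""
    if isinstance(subset, dict) and isinstance(superset, dict):
        return all(
            key in superset and is_subset(value, superset[key])
            for key, value in subset.items()
        )
    return subset == superset

d1 = {'a': 1, 'b': {'c': 2, 'd': 3}}
d2 = {'b': {'c': 2}}
print(is_subset(d2, d1))               # True
print(is_subset({'b': {'e': 4}}, d1))  # False
```

In a test this would be used as <code>self.assertTrue(is_subset(expected, actual))</code>.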
|
<python><unit-testing>
|
2024-02-01 22:30:00
| 1
| 3,658
|
Troy Daniels
|
77,923,632
| 127,251
|
How do I set the focus on entering an axes in matplotlib?
|
<p>I bind key_press events in Matplotlib. They are not triggered unless I click on the figure first. I want to force the keyboard focus to be on the axes when the mouse enters it.</p>
<p>I attempted to follow the guidance in this answer:</p>
<p><a href="https://stackoverflow.com/questions/31656345/matplotlib-key-press-event-does-not-respond">Matplotlib 'key_press_event' does not respond</a></p>
<p>I get an error:</p>
<pre><code>AttributeError: 'Canvas' object has no attribute 'setFocus'
</code></pre>
<p>This is my event handler:</p>
<pre class="lang-py prettyprint-override"><code> def on_enter(self, event):
# self.fig.canvas.setFocus()
event.canvas.setFocus()
</code></pre>
<p>I tried to reference the canvas in two ways: from the event or from my stored figure object. Both give the same error.</p>
<p>How do you set the focus in matplotlib?</p>
<p>System information:</p>
<ul>
<li>MAC OS Sonoma 14.3</li>
<li>using the %matplotlib widget in a Jupyter Notebook</li>
<li>Backend is ipympl.backend_nbagg</li>
<li>For GUI controls, I am using ipywidgets.</li>
</ul>
<p>I set the handler using this:</p>
<pre class="lang-py prettyprint-override"><code>
self.cid_enter = self.fig.canvas.mpl_connect('axes_enter_event', self.on_enter)
</code></pre>
<p>When I call dir on the Canvas object, it says these are the properties:</p>
<pre><code>Canvas Methods:
[
'__annotations__'
'__class__'
'__del__'
'__delattr__'
'__dict__'
'__dir__'
'__doc__'
'__eq__'
'__format__'
'__ge__'
'__getattribute__'
'__getstate__'
'__gt__'
'__hash__'
'__init__'
'__init_subclass__'
'__le__'
'__lt__'
'__module__'
'__ne__'
'__new__'
'__reduce__'
'__reduce_ex__'
'__repr__'
'__setattr__'
'__setstate__'
'__sizeof__'
'__str__'
'__subclasshook__'
'__weakref__'
'_active_widgets'
'_add_notifiers'
'_all_trait_default_generators'
'_button'
'_call_widget_constructed'
'_closed'
'_comm_changed'
'_compare'
'_control_comm'
'_cross_validation_lock'
'_current_image_mode'
'_cursor'
'_data_url'
'_default_keys'
'_descriptors'
'_device_pixel_ratio'
'_dom_classes'
'_figure_label'
'_fix_ipython_backend2gui'
'_force_full'
'_gen_repr_from_keys'
'_get_embed_state'
'_get_trait_default_generator'
'_handle_control_comm_msg'
'_handle_custom_msg'
'_handle_key'
'_handle_message'
'_handle_mouse'
'_handle_msg'
'_handle_set_device_pixel_ratio'
'_holding_sync'
'_idle_draw_cntx'
'_image_mode'
'_instance_inits'
'_ipython_display_'
'_is_idle_drawing'
'_is_numpy'
'_is_saving'
'_key'
'_lastKey'
'_last_buff'
'_last_mouse_xy'
'_lastx'
'_lasty'
'_lock_property'
'_log_default'
'_message'
'_model_id'
'_model_module'
'_model_module_version'
'_model_name'
'_msg_callbacks'
'_notify_observers'
'_notify_trait'
'_png_is_old'
'_print_pil'
'_property_lock'
'_register_validator'
'_remove_notifiers'
'_repr_keys'
'_repr_mimebundle_'
'_rubberband_height'
'_rubberband_width'
'_rubberband_x'
'_rubberband_y'
'_send'
'_set_device_pixel_ratio'
'_should_send_property'
'_size'
'_states_to_send'
'_static_immutable_initial_values'
'_switch_canvas_and_return_print_method'
'_timer_cls'
'_trait_default_generators'
'_trait_from_json'
'_trait_notifiers'
'_trait_to_json'
'_trait_validators'
'_trait_values'
'_traits'
'_view_count'
'_view_module'
'_view_module_version'
'_view_name'
'_widget_construction_callback'
'_widget_types'
'add_class'
'add_traits'
'blit'
'blur'
'buffer_rgba'
'button_pick_id'
'button_press_event'
'button_release_event'
'callbacks'
'capture_scroll'
'class_own_trait_events'
'class_own_traits'
'class_trait_names'
'class_traits'
'close'
'close_all'
'close_event'
'comm'
'copy_from_bbox'
'cross_validation_lock'
'current_dpi_ratio'
'device_pixel_ratio'
'draw'
'draw_event'
'draw_idle'
'enter_notify_event'
'events'
'figure'
'filetypes'
'fixed_dpi'
'flush_events'
'focus'
'footer_visible'
'get_default_filename'
'get_default_filetype'
'get_diff_image'
'get_manager_state'
'get_renderer'
'get_state'
'get_supported_filetypes'
'get_supported_filetypes_grouped'
'get_view_spec'
'get_width_height'
'grab_mouse'
'handle_ack'
'handle_button_press'
'handle_button_release'
'handle_comm_opened'
'handle_control_comm_opened'
'handle_dblclick'
'handle_draw'
'handle_event'
'handle_figure_enter'
'handle_figure_leave'
'handle_key_press'
'handle_key_release'
'handle_motion_notify'
'handle_refresh'
'handle_resize'
'handle_scroll'
'handle_send_image_mode'
'handle_set_device_pixel_ratio'
'handle_set_dpi_ratio'
'handle_toolbar_button'
'handle_unknown_event'
'has_trait'
'header_visible'
'hold_sync'
'hold_trait_notifications'
'inaxes'
'is_saving'
'key_press_event'
'key_release_event'
'keys'
'layout'
'leave_notify_event'
'log'
'manager'
'manager_class'
'model_id'
'motion_notify_event'
'mouse_grabber'
'mpl_connect'
'mpl_disconnect'
'new_manager'
'new_timer'
'notify_change'
'observe'
'on_msg'
'on_trait_change'
'on_widget_constructed'
'open'
'pan_zoom_throttle'
'pick'
'pick_event'
'print_figure'
'print_jpeg'
'print_jpg'
'print_png'
'print_raw'
'print_rgba'
'print_tif'
'print_tiff'
'print_to_buffer'
'print_webp'
'release_mouse'
'remove_class'
'renderer'
'required_interactive_framework'
'resizable'
'resize'
'resize_event'
'restore_region'
'scroll_event'
'scroll_pick_id'
'send'
'send_binary'
'send_event'
'send_json'
'send_state'
'set_cursor'
'set_image_mode'
'set_state'
'set_trait'
'setup_instance'
'show'
'start_event_loop'
'stop_event_loop'
'supports_blit'
'switch_backends'
'syncing_data_url'
'tabbable'
'toolbar'
'toolbar_position'
'toolbar_visible'
'tooltip'
'tostring_argb'
'tostring_rgb'
'trait_defaults'
'trait_events'
'trait_has_value'
'trait_metadata'
'trait_names'
'trait_values'
'traits'
'unobserve'
'unobserve_all'
'widget_types'
'widgetlock'
'widgets'
]
</code></pre>
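<p>Worth noting: the <code>dir()</code> listing above contains <code>focus</code> (and <code>blur</code>) but no <code>setFocus</code>, which is a Qt method name. A minimal sketch of a handler that tries both names defensively (stand-in objects are used here for demonstration, since the real ipympl canvas API isn't confirmed):</p>

```python
def on_enter(event):
    """axes_enter_event handler: give keyboard focus to the canvas.

    The ipympl canvas appears to expose focus() -- it shows up in the
    dir() listing above -- while Qt-based backends expose setFocus();
    try whichever exists instead of hard-coding one name.
    """
    canvas = event.canvas
    for name in ("focus", "setFocus"):
        method = getattr(canvas, name, None)
        if callable(method):
            method()
            return


# tiny stand-in objects to show the handler picks the available method
class FakeIpymplCanvas:
    def __init__(self):
        self.got_focus = False

    def focus(self):
        self.got_focus = True


class FakeEvent:
    def __init__(self, canvas):
        self.canvas = canvas


event = FakeEvent(FakeIpymplCanvas())
on_enter(event)
print(event.canvas.got_focus)  # True
```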
|
<python><matplotlib><setfocus>
|
2024-02-01 22:02:44
| 1
| 5,593
|
Paul Chernoch
|
77,923,604
| 920,599
|
How to examine and invoke the functions in Python's __main__ module from a different module
|
<p>In Python, is it possible for a function in a module to examine the list of functions in <code>__main__</code> then execute functions that match a pattern?</p>
<p>For example, can I write a module that contains the function <code>run_all_auto</code> that will examine the list of functions in <code>__main__</code> then invoke all the functions that begin with <code>auto_</code>?</p>
<p>Yes, I know this task would be much easier/better if I put my <code>auto_</code> functions in a class; but I'm just curious whether it's possible without "bundling" them somehow.</p>
<p>I see that I could add a parameter and do something like <code>my_module.run_all_auto(dir())</code>; but that only provides a list of function names, I don't see how to then invoke any of the functions.</p>
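<p>One possible approach (a sketch; <code>run_all_auto</code> and the demo module are invented names): the functions defined in the main script live in <code>sys.modules['__main__'].__dict__</code>, so a function in another module can scan that namespace and call matching entries directly, no parameter needed:</p>

```python
import sys
import types

def run_all_auto(module=None):
    """Invoke every callable in `module` (default: __main__) whose name
    starts with 'auto_', returning {name: result}."""
    if module is None:
        module = sys.modules["__main__"]
    results = {}
    for name, obj in vars(module).items():
        if name.startswith("auto_") and callable(obj):
            results[name] = obj()
    return results

# demonstration with a throwaway module standing in for __main__
demo = types.ModuleType("demo")
demo.auto_greet = lambda: "hello"
demo.auto_answer = lambda: 42
demo.helper = lambda: "skipped"
print(run_all_auto(demo))  # {'auto_greet': 'hello', 'auto_answer': 42}
```

In a real script you would simply call <code>my_module.run_all_auto()</code> and let the default resolve <code>__main__</code>.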
|
<python><reflection>
|
2024-02-01 21:55:14
| 1
| 6,694
|
Zack
|
77,923,461
| 263,061
|
How to type __eq__ in generic Python class?
|
<p>When I define a type with generic type parameters (e.g. <code>K</code>/<code>V</code> for key and value of a mapping), I seemingly cannot write a suitable <code>isinstance</code> check for implementing <code>__eq__</code>:</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict
from collections.abc import MutableMapping
from typing import TypeVar
K = TypeVar("K")
V = TypeVar("V")
class MyDictWrapper(MutableMapping[K, V]):
def __init__(self):
self.cache: OrderedDict[K, V] = OrderedDict()
def __eq__(self, other: object) -> bool:
if not isinstance(other, MyDictWrapper):
return NotImplemented
return self.cache == other.cache # Pyright error: Type of "cache" is partially unknown: Type of "cache" is "OrderedDict[Unknown, Unknown]
</code></pre>
<p>(This is on <code>pyright 1.1.309</code>, see <a href="https://pyright-play.net/?pyrightVersion=1.1.349&pythonVersion=3.12&pythonPlatform=Linux&strict=true&code=GYJw9gtgBALgngBwJYDsDmUkQWEMoAqiApgGoCGIAUKJFAMZgA2Tx9MSYKAzptrvgDyIACbEQxEQBEk7GuGiMWbDl24A6cgCN6fHHigBZAK4xtrQ%2BQTJ0VKgGkoAXkIkKIABQAie14CUVKTOrghklN6k-nb0TOTcvIZwMuwA6iBWoZ4mZloWGahoANr2ADRQpAC6fgBcdlBQYsBQAPrNqEgwrR7cxEzANVT19T196vTk9AAWxNVQwmIS0rIwxWWVwfPikskwHgGDDcRNrcQAjl0jwGVgMNMgs2BaAFYqflAAtAB8UFpgzLVDTBNFA3TDcVDcMwoejEDw3O5lRI7NIZcQDQH1CQwYwgFBQAByNwAkthWBBiCgYJIDpjiNjcVBLmMJtNnC54eJmVNiPUAMRQAAKcBASDQk3w4nA9xCPLATS8425XjBUAQlA45BYcCgxhQAGsQQB3FCzIihKByqAKlnEZVIXheTaLHaFACq%2BqNKDK7oNYGNFSAA" rel="nofollow noreferrer">Pyright Playground repro</a>.)</p>
<p>The alternative</p>
<pre class="lang-py prettyprint-override"><code> if not isinstance(other, MyDictWrapper[K, V]):
</code></pre>
<p>also does not work, errors with:</p>
<pre class="lang-none prettyprint-override"><code>Second argument to "isinstance" must be a class or tuple of classes
Generic type with type arguments not allowed for instance or class
</code></pre>
<p>Is there a correct way to type this?</p>
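<p>One workaround (a sketch, not necessarily the canonical answer) is to keep the bare <code>isinstance</code> check for runtime narrowing and <code>cast</code> to an explicitly parameterized alias purely for the type checker; <code>Generic</code> is used here instead of <code>MutableMapping</code> only to keep the example self-contained and instantiable:</p>

```python
from collections import OrderedDict
from typing import Any, Generic, TypeVar, cast

K = TypeVar("K")
V = TypeVar("V")

class MyDictWrapper(Generic[K, V]):
    def __init__(self) -> None:
        self.cache: OrderedDict[K, V] = OrderedDict()

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, MyDictWrapper):
            return NotImplemented
        # isinstance cannot carry type arguments, so cast the narrowed
        # value to an explicitly parameterized type for the checker;
        # this is a no-op at runtime.
        other_typed = cast("MyDictWrapper[Any, Any]", other)
        return self.cache == other_typed.cache

a: MyDictWrapper = MyDictWrapper()
b: MyDictWrapper = MyDictWrapper()
a.cache["x"] = 1
b.cache["x"] = 1
print(a == b)  # True
```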
|
<python><python-typing><pyright>
|
2024-02-01 21:24:11
| 2
| 25,947
|
nh2
|
77,923,449
| 438,223
|
Generating non-zero mean AR(2) samples in python
|
<p>I am trying to generate non-zero-mean AR(2) samples using the statsmodels package, but it seems that, by default, we can't generate non-zero-mean samples.</p>
<p>Is there any workaround in Python? I want to generate only positive samples.</p>
<p>My current code is</p>
<pre><code>import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
rng = np.random.default_rng(12345)
ar_1 = np.array([2, -0.25, -0.25])
ar_2 = np.array([2, -0.5, -0.25])
ma1 = np.array([1])
ar1_proc = ArmaProcess(ar_1, ma1)
ar1_dat = ar1_proc.generate_sample(nsample=2*60*60, distrvs=rng.lognormal)
ar2_proc = ArmaProcess(ar_2, ma1)
ar2_dat = ar2_proc.generate_sample(nsample=2*60*60, distrvs=rng.lognormal)
</code></pre>
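<p>One possible workaround (a sketch, not statsmodels-specific; the function name and parameters are invented) is to simulate the AR(2) recursion directly with an intercept <code>c</code>, since a stationary AR(2) process <code>x[t] = c + a1*x[t-1] + a2*x[t-2] + e[t]</code> has mean <code>c / (1 - a1 - a2)</code>:</p>

```python
import numpy as np

def simulate_ar2(c, a1, a2, n, sigma=1.0, seed=12345):
    """Simulate x[t] = c + a1*x[t-1] + a2*x[t-2] + e[t], Gaussian noise."""
    rng = np.random.default_rng(seed)
    mean = c / (1.0 - a1 - a2)   # stationary mean of the process
    x = np.full(n + 2, mean)     # warm-start at the stationary mean
    e = rng.normal(0.0, sigma, n + 2)
    for t in range(2, n + 2):
        x[t] = c + a1 * x[t - 1] + a2 * x[t - 2] + e[t]
    return x[2:]

samples = simulate_ar2(c=2.5, a1=0.5, a2=0.25, n=2 * 60 * 60)
print(round(samples.mean()))  # close to 10 (= 2.5 / (1 - 0.5 - 0.25))
```

Note that Gaussian innovations can still dip below zero; if strictly positive samples are required, one option is to exponentiate a zero-mean series (giving a log-normal marginal) rather than shifting the mean.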
|
<python><statsmodels><autoregressive-models>
|
2024-02-01 21:21:46
| 1
| 1,596
|
Shew
|
77,923,380
| 2,205,799
|
python input() not printing to screen when piping to another command
|
<p>I have a simple Python program that uses <code>input</code> to get a username and password, i.e.:</p>
<pre><code>
from getpass import getpass
username = input("Username: ")
password = getpass()
</code></pre>
<p>When I run the program and pipe it to <code>wc</code>, the username prompt does not print. It still accepts input, and then <code>getpass</code> does print the password prompt.</p>
<pre><code>./example.py | wc -l
sdfsdfsf # Not seeing the "Username: " prompt.
Password:
1
</code></pre>
<p>I've done some searches but cannot find out why this is not printing. I assume it has to do with redirection or buffering, but I am not sure. Basically, I'm trying to see if there is a way around this.</p>
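<p>The prompt disappears because <code>input()</code> writes it to stdout, which is the stream being piped to <code>wc</code>; <code>getpass</code> is unaffected because it writes its prompt to the terminal/stderr instead. A sketch of applying the same trick to the username prompt (the helper name is invented, and the stream parameters exist only to make it demonstrable):</p>

```python
import sys

def prompt(message, infile=None, errfile=None):
    """Like input(), but writes the prompt to stderr so it stays visible
    on the terminal even when stdout is piped to another command."""
    infile = infile or sys.stdin
    errfile = errfile or sys.stderr
    errfile.write(message)
    errfile.flush()
    return infile.readline().rstrip("\n")

# username = prompt("Username: ")
```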
|
<python><shell><io-redirection><getpass>
|
2024-02-01 21:07:37
| 0
| 309
|
feeble
|
77,923,330
| 437,260
|
Google Colab 'Credentials' object has no attribute 'universe_domain'
|
<p>When trying to use Google Colab with Google Speech-to-Text (following a Google sample notebook), I get:</p>
<pre><code>if credentials:
--> 505 credentials_universe = credentials.universe_domain
506 if client_universe != credentials_universe:
507 default_universe = SpeechClient._DEFAULT_UNIVERSE
AttributeError: 'Credentials' object has no attribute 'universe_domain'
</code></pre>
<p>Here is the python code :</p>
<p>First I login using colab "auth" :</p>
<pre class="lang-py prettyprint-override"><code>!pip install -q google-cloud-speech
from google.cloud import speech
!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
from google.colab import auth
gcp_project_id = 'my-gcp-project'
auth.authenticate_user(project_id=gcp_project_id)
</code></pre>
<p>Then I activate speech api from gcp :</p>
<pre class="lang-py prettyprint-override"><code>!gcloud services enable speech.googleapis.com
</code></pre>
<p>I configure the speech recognition :</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import speech
def speech_to_text(
config: speech.RecognitionConfig,
audio: speech.RecognitionAudio,
) -> speech.RecognizeResponse:
client = speech.SpeechClient()
# Synchronous speech recognition request
response = client.recognize(config=config, audio=audio)
return response
config = speech.RecognitionConfig(
language_code="fr",
enable_automatic_punctuation=True,
)
</code></pre>
<p>Finally I record some sound to transcript :</p>
<pre class="lang-py prettyprint-override"><code>!pip install ipywebrtc
import io
from ipywebrtc import AudioRecorder, CameraStream
# create a microphone stream
camera = CameraStream(constraints={'audio': True, 'video':False})
recorder = AudioRecorder(stream=camera)
recorded_audio = recorder.audio.value
audio = speech.RecognitionAudio(
content=recorded_audio,
)
</code></pre>
<p>And here is the problematic piece of code, when I actually try to use the google api :</p>
<pre class="lang-py prettyprint-override"><code>processing_results = speech_to_text(config, audio)
</code></pre>
|
<python><google-cloud-platform><google-colaboratory>
|
2024-02-01 20:57:16
| 1
| 950
|
Antonin
|
77,923,160
| 2,011,779
|
SwaggerHub introducing duplicate keys by lower casing when downloading SDK
|
<p>I'm trying to use the BinanceSpotAPI on <a href="https://app.swaggerhub.com/apis/binance_api/BinanceSpotAPI/1.0" rel="nofollow noreferrer">https://app.swaggerhub.com/apis/binance_api/BinanceSpotAPI/1.0</a>
When I look at the raw spec that is there, the <code>/api/v3/aggTrades</code> response has a bunch of properties,
two of which are</p>
<p><code>m</code>, "Was the buyer the maker?" and</p>
<p><code>M</code>, "Was the trade the best price match?"</p>
<p>They are cased correctly. But when I download the SDK (from export -> client sdk -> python), that file shows that all of the instances of <code>M</code> have been replaced with <code>m</code> in <em>almost</em> all places. This means that there are duplicate keys. Also, <code>T</code> is replaced with <code>t</code>. How is this possible?
Raw in SwaggerHub:</p>
<pre><code>aggTrade:
type: object
properties:
a:
type: integer
format: int64
description: Aggregate tradeId
example: 26129
p:
... [p ,q, f, l, removed for brevity] ...
T:
type: boolean
description: Timestamp
example: 1498793709153
m:
type: boolean
description: Was the buyer the maker?
M:
type: boolean
description: Was the trade the best price match?
required:
- a
- p
- q
- f
- l
- T
- m
- M
</code></pre>
<p>but then this code is generated with the duplicate m:</p>
<pre><code>swagger_types = {
'a': 'int',
'p': 'str',
'q': 'str',
'f': 'int',
'l': 'int',
't': 'bool',
'm': 'bool',
'm': 'bool'
}
</code></pre>
<p>and</p>
<pre><code>attribute_map = {
'a': 'a',
'p': 'p',
'q': 'q',
'f': 'f',
'l': 'l',
't': 'T',
'm': 'm',
'm': 'M'
}
</code></pre>
<p>I think the fact that there's a capital <code>T</code> and <code>M</code> in the values of the <code>attribute_map</code> is some kind of clue, but I have no idea why this would be the case.</p>
<p>To reproduce:</p>
<ol>
<li>go to the webpage</li>
<li>download the Python SDK</li>
<li>look at <code>/swagger_client/models/agg_trade.py</code>, line 38</li>
</ol>
<p>Looks like this happens in swaggerhub, but also when trying the same download in editor.swagger.io</p>
<p>Edit:
another hint. When I try to use openapi-generator, it fails because one of the names is too long. But before it fails it does generate this file, and here's how it shows up.</p>
<pre><code>class AggTrade(BaseModel):
"""
AggTrade
""" # noqa: E501
a: StrictInt = Field(description="Aggregate tradeId")
p: StrictStr = Field(description="Price")
q: StrictStr = Field(description="Quantity")
f: StrictInt = Field(description="First tradeId")
l: StrictInt = Field(description="Last tradeId")
t: StrictBool = Field(description="Timestamp", alias="T")
m: StrictBool = Field(description="Was the buyer the maker?")
m: StrictBool = Field(description="Was the trade the best price match?", alias="M")
__properties: ClassVar[List[str]] = ["a", "p", "q", "f", "l", "T", "m", "M"]
@classmethod
def from_dict(cls, obj: Dict) -> Self:
"""Create an instance of AggTrade from a dict"""
if obj is None:
return None
if not isinstance(obj, dict):
return cls.model_validate(obj)
_obj = cls.model_validate({
"a": obj.get("a"),
"p": obj.get("p"),
"q": obj.get("q"),
"f": obj.get("f"),
"l": obj.get("l"),
"T": obj.get("T"),
"m": obj.get("m"),
"M": obj.get("M")
})
return _obj
</code></pre>
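<p>For what it's worth, Python itself happily accepts duplicate keys in a dict literal: the later binding silently overwrites the earlier one, which is why the generated module imports cleanly but loses a field at runtime. A minimal demonstration (trimmed to the colliding keys):</p>

```python
swagger_types = {
    't': 'bool',
    'm': 'bool',
    'm': 'bool',   # duplicate literal key: silently overwrites the first 'm'
}
print(swagger_types)  # {'t': 'bool', 'm': 'bool'}

attribute_map = {
    't': 'T',
    'm': 'm',
    'm': 'M',      # the 'M' mapping wins; the lowercase one is lost
}
print(attribute_map['m'])  # M
```

Linters such as flake8 flag this (F601-style duplicate-key warnings), but the interpreter does not.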
|
<python><swagger><swagger-codegen>
|
2024-02-01 20:26:05
| 0
| 2,384
|
hedgedandlevered
|
77,922,861
| 1,142,970
|
Why does sympy.solve produce empty array for certain inequalities
|
<p>sympy's solve seems to generally work for inequalities. For example:</p>
<pre><code>from sympy import symbols, solve;
x = symbols('x');
print(solve('x^2 > 4', x)); # ((-oo < x) & (x < -2)) | ((2 < x) & (x < oo))
</code></pre>
<p>In some cases where all values of x or no values of x are valid solutions, <code>solve()</code> returns <code>True</code> or <code>False</code> to indicate that.</p>
<pre><code>print(solve('x^2 > 4 + x^2', x)); # False
print(solve('x^2 >= x^2', x)); # True
</code></pre>
<p>However, there are other cases where I would expect a result of True or False and instead I get <code>[]</code>:</p>
<pre><code>print(solve('x - x > 0', x)); # []
print(solve('x - x >= 0', x)); # []
</code></pre>
<p>Why does this happen/what am I doing wrong?</p>
|
<python><python-3.x><sympy><inequality>
|
2024-02-01 19:25:46
| 1
| 21,993
|
ChaseMedallion
|
77,922,825
| 4,030,166
|
How to mock argparse with pytest in bigger Python cli application
|
<p>I'm working on the following application: <a href="https://github.com/TobiWo/eth-duties" rel="nofollow noreferrer">https://github.com/TobiWo/eth-duties</a></p>
<p>I didn't include unit tests from the beginning, which is bad, I know, but I wanted to concentrate on the functionality. Anyway, I want to add unit tests now, but I'm running into big issues doing so.</p>
<p>I already adapted a lot of stuff which can be found on this <a href="https://github.com/TobiWo/eth-duties/tree/feature/add-unit-tests-intermediate-commit" rel="nofollow noreferrer">feature branch</a>. Since I'm working with <code>poetry</code> I invoke pytest using <code>poetry run pytest --cov=duties tests/</code>. I could run a basic unit test (<code>tests/protocol/test_connection.py</code>). However, the second test I tried (<code>tests/protocol/test_ethereum.py</code>) failed and revealed a much bigger issue with regard to argparse. I'll try to describe the issue in as much detail as I can:</p>
<ol>
<li>In my test file I want to test <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/duties/protocol/ethereum.py#L41" rel="nofollow noreferrer">this function</a></li>
<li>In the testfile you can see that I want to mock the function <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/tests/protocol/test_ethereum.py#L18" rel="nofollow noreferrer">send_beacon_api_request</a></li>
<li>However, this function is imported from module <code>duties.protocol.request</code> and within this module I also import the <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/duties/protocol/request.py#L15" rel="nofollow noreferrer">BeaconNode class</a> which is then used within <code>send_beacon_api_request</code> or in one of the called private functions respectively. The issue is that the <code>BeaconNode class</code> <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/duties/protocol/connection.py#L10" rel="nofollow noreferrer">imports my parsed arguments</a></li>
<li>Now when I run the test I get the following error from pytest:</li>
</ol>
<pre><code>tests/protocol/test_ethereum.py:5: in <module>
from duties.protocol.ethereum import get_current_slot
duties/protocol/ethereum.py:12: in <module>
from duties.protocol.request import CalldataType, send_beacon_api_request
duties/protocol/request.py:18: in <module>
beacon_node = BeaconNode()
duties/protocol/connection.py:19: in __init__
self.__arguments = get_arguments()
duties/cli/arguments.py:305: in get_arguments
arguments = __get_raw_arguments()
duties/cli/arguments.py:192: in __get_raw_arguments
return parser.parse_args()
/home/towo/miniconda3/envs/poetry-py312/lib/python3.12/argparse.py:1894: in parse_args
self.error(msg % ' '.join(argv))
/home/towo/miniconda3/envs/poetry-py312/lib/python3.12/argparse.py:2655: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
/home/towo/miniconda3/envs/poetry-py312/lib/python3.12/argparse.py:2642: in exit
_sys.exit(status)
E SystemExit: 2
-------------------------------------------------------------------------------------------------------- Captured stderr ---------------------------------------------------------------------------------------------------------
usage: eth-duties [...options]
eth-duties: error: unrecognized arguments: --cov=duties tests/
</code></pre>
<p>It looks like I also need to mock argparse, which I already <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/tests/protocol/test_ethereum.py#L11" rel="nofollow noreferrer">tried</a>. I know that I need to mock the <code>get_arguments</code> function based on the module I want to test. The same thing works <a href="https://github.com/TobiWo/eth-duties/blob/feature/add-unit-tests-intermediate-commit/tests/protocol/test_connection.py#L9" rel="nofollow noreferrer">here</a>. But this attempt doesn't work, and I get the above error. I might have a misunderstanding of pytest here, so maybe someone can help.</p>
<p>I know it is hard to grasp because the application is pretty big already.</p>
<p>Since these kinds of imports happen often, I'm pretty sure I will hit this issue multiple times.</p>
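<p>The traceback above shows the real trigger: <code>parser.parse_args()</code> runs at import time (via the module-level <code>beacon_node = BeaconNode()</code> in <code>request.py</code>), and with no explicit argument list argparse falls back to <code>sys.argv[1:]</code>, which under pytest contains <code>--cov=duties tests/</code>. One common mitigation is to thread an optional <code>argv</code> through the parsing entry point, sketched below with an invented <code>--beacon-node</code> flag (eth-duties' real options differ):</p>

```python
import argparse

def build_parser():
    # stand-in parser; the real eth-duties parser has many more options
    parser = argparse.ArgumentParser(prog="eth-duties")
    parser.add_argument("--beacon-node", default="http://localhost:5052")
    return parser

def get_arguments(argv=None):
    # parse_args(None) reads sys.argv[1:], which under pytest holds
    # pytest's own flags (e.g. --cov=duties) and triggers SystemExit: 2.
    # Passing an explicit list makes the function testable in isolation.
    return build_parser().parse_args(argv)

args = get_arguments(["--beacon-node", "http://node:5052"])
print(args.beacon_node)  # http://node:5052
```

Deferring the <code>BeaconNode()</code> construction out of module scope (e.g. into a function or lazy property) would also keep the import itself side-effect free.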
|
<python><pytest><argparse><python-poetry>
|
2024-02-01 19:19:54
| 0
| 574
|
Tobias
|
77,922,823
| 2,375,040
|
GDExtension cannot find a tool chain to build for the web, in spite of compatibility
|
<p>To clarify my title, "web" is an available export platform for GDExtension as of Godot 4.2, which, while no doubt limited, is great news for those of us building web applications. This is mentioned <a href="https://godotengine.org/article/dev-snapshot-godot-4-2-dev-6/" rel="nofollow noreferrer">here</a>. When I attempt to compile with:</p>
<pre><code>scons platform=web
</code></pre>
<p>I get a curious error message:</p>
<pre><code>michael@Marduk:~/Documents/Godot/gdextension_cpp_example$ scons platform=web
scons: Reading SConscript files ...
Auto-detected 16 CPU cores available for build parallelism. Using 15 cores by default. You can override it with the -j argument.
Building for architecture wasm32 on platform web
ValueError: Required toolchain not found for platform web:
File "/home/michael/Documents/Godot/gdextension_cpp_example/SConstruct", line 5:
env = SConscript("godot-cpp/SConstruct")
File "/usr/lib/python3/dist-packages/SCons/Script/SConscript.py", line 661:
return method(*args, **kw)
File "/usr/lib/python3/dist-packages/SCons/Script/SConscript.py", line 598:
return _SConscript(self.fs, *files, **subst_kw)
File "/usr/lib/python3/dist-packages/SCons/Script/SConscript.py", line 287:
exec(compile(scriptdata, scriptname, 'exec'), call_stack[-1].globals)
File "/home/michael/Documents/Godot/gdextension_cpp_example/godot-cpp/SConstruct", line 53:
cpp_tool.generate(env)
File "/home/michael/Documents/Godot/gdextension_cpp_example/godot-cpp/tools/godotcpp.py", line 265:
raise ValueError("Required toolchain not found for platform " + env["platform"])
</code></pre>
<p>All of this is in Python and should be easy enough to read, but the cause isn't obvious. My best reading is that no toolchain is registered for the "web" platform, which is interesting in itself: if the platform were, say, "windows", it would tell me exactly what file was missing. This implies that it doesn't even have an entry for "web" on record.</p>
<p>Is it maybe something to do with my C++ binding? Or perhaps I'm using the wrong platform extension? Is there a known way around this?</p>
<p>The extension builds fine for Linux and Windows.</p>
|
<python><c++><build><webassembly><godot>
|
2024-02-01 19:19:17
| 1
| 1,801
|
Michael Macha
|
77,922,817
| 13,086,128
|
ImportError: cannot import name 'Iterator' from 'typing_extensions'
|
<p>When I try to import from the <code>openai</code> package on Google Colab, I get this error:</p>
<pre><code>from openai import OpenAI
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-6-5ee9a4e68b89> in <cell line: 1>()
----> 1 from openai import OpenAI
5 frames
/usr/local/lib/python3.10/dist-packages/openai/_utils/_streams.py in <module>
1 from typing import Any
----> 2 from typing_extensions import Iterator, AsyncIterator
3
4
5 def consume_sync_iterator(iterator: Iterator[Any]) -> None:
ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
</code></pre>
<p>My details to reproduce:</p>
<p><code>Python 3.10.12</code></p>
<p><code>OS</code> is <code>Windows 11</code></p>
<p><code>typing_extensions</code> is <code>4.9.0</code></p>
<p>Edit:</p>
<p>my <code>typing_extensions</code> details:</p>
<p><a href="https://i.sstatic.net/58YJZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/58YJZ.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><pip><openai-api>
|
2024-02-01 19:18:15
| 1
| 30,560
|
Talha Tayyab
|
77,922,776
| 5,327,068
|
How do I mock a function called by an instance of a class
|
<p>I am writing a unit test, and in my <code>test_model.py</code> file, I have a class <code>TestPlayerAnomalyDetectionModel</code> that creates an instance of <code>PlayerAnomalyDetectionModel</code>, and then calls the <code>.fit</code> method using a fixture that returns sample data. The <code>.fit</code> method calls a function <code>get_player_account_status</code> which sends an API request, so I want to mock this function. I also want the <code>mock_get_player_account_status</code> to return a different value every time it is called, so I am using a <code>side_effect</code>, but I am not sure I am doing it correctly.</p>
<p>If I use the decorator <code>@mock.patch('get_player_labels.get_player_account_status')</code>, the function isn't mocked when it's called from inside <code>PlayerAnomalyDetectionModel</code>, and actual API requests are sent.</p>
<p>Based on <a href="https://stackoverflow.com/a/30502738/5327068">this answer</a>, I believe I want to use the decorator <code>@mock.patch('model.get_player_account_status')</code> because <code>get_player_account_status</code> is called from inside <code>model.py</code>, but when I run <code>pytest test_model.py</code>, I can see that <code>get_player_account_status</code> is getting called instead of <code>mock_get_player_account_status</code>.</p>
<p>I have replaced some of the actual code with comments for the sake of clarity, but if any more information is needed to help answer my question, please let me know!</p>
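<p>Independent of the patch-target question, the <code>side_effect</code> mechanics themselves can be checked in isolation with a bare <code>Mock</code> (a minimal sketch using only the standard library; the mock name mirrors the test above):</p>

```python
from unittest.mock import Mock

# A side_effect given as an iterable makes the mock return the next
# element on each successive call (StopIteration once exhausted).
mock_get_status = Mock(side_effect=["open", "tosViolation", "closed"])

print(mock_get_status("player1"))  # open
print(mock_get_status("player2"))  # tosViolation
print(mock_get_status("player3"))  # closed
print(mock_get_status.call_count)  # 3
```

Since <code>model.py</code> does <code>from get_player_labels import get_player_account_status</code>, the name is looked up in <code>model</code>'s namespace at call time, so <code>@mock.patch('model.get_player_account_status')</code> is indeed the correct target per the "where to patch" rule.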
<p>Here is the structure of the <code>test_model.py</code> file:</p>
<pre><code>import pandas as pd
import pytest
import unittest
import mock
from unittest.mock import Mock, MagicMock, patch
from model import PlayerAnomalyDetectionModel
@pytest.fixture(scope="class")
def get_sample_train_data():
## creates sample_train_data DataFrame
return sample_train_data
@pytest.mark.usefixtures("get_sample_train_data", "build_training_data")
class TestPlayerAnomalyDetectionModel(unittest.TestCase):
def setUp(self):
self.model = PlayerAnomalyDetectionModel()
@pytest.fixture(autouse=True)
def build_training_data(self, get_sample_train_data):
self.sample_train_data = get_sample_train_data
print("building training data...")
print(self.sample_train_data)
@mock.patch('model.get_player_account_status')
def test_fit(self, mock_get_player_account_status):
mock_get_player_account_status.side_effect = [
'open', 'open', 'tosViolation', 'tosViolation', 'tosViolation', 'closed',
'open', 'open', 'tosViolation', 'tosViolation', 'tosViolation', 'closed',
]
self.model.fit(self.sample_train_data, generate_plots=False)
def test_predict(self):
pass
def test_save_model(self):
pass
def test_load_model(self):
pass
</code></pre>
<p>The <code>model.py</code> file looks like the following:</p>
<pre><code>from get_player_labels import get_player_account_status
class PlayerAnomalyDetectionModel:
    """
    The PlayerAnomalyDetectionModel class returns a model with methods:
    .fit to tune the model's internal thresholds on training data
    .predict to make predictions on test data
    .load_model to load a predefined model from a pkl file
    .save_model to save the model to a file
    """
    def __init__(self):
        self.is_fitted = False
        self._thresholds = {
            (time_control, 'perf_delta_thresholds'): {
                f"{rating_bin}-{rating_bin+100}": 0.15
                for rating_bin in np.arange(0, 4000, 100)
            }
            for time_control in TimeControl.ALL.value
        }
        self._account_statuses = {}  # store account statuses for each model instance
        self._ACCOUNT_STATUS_SCORE_MAP = {
            "open": 0,
            "tosViolation": 1,
            "closed": 0.75,  # weight closed account as closer to a tosViolation
        }

    def load_model(self, model_file_name: str):
        """
        Loads a model from a file
        """
        pass

    def fit(self, train_data: pd.DataFrame, generate_plots=True):
        if self.is_fitted:
            pass
            # issue a warning that the user is retraining the model!
            # give the user the option to combine multiple training data sets
        else:
            self._set_thresholds(train_data, generate_plots)
            self.is_fitted = True

    def _set_thresholds(self, train_data, generate_plots):
        while True:
            for player in all_flagged_players:
                if self._account_statuses.get(player) is None:
                    get_player_account_status(player, self._account_statuses)
                else:
                    pass
            ## set thresholds

    def predict(self, test_data: pd.DataFrame):
        """Returns pd.DataFrame of size (m+2, k)
        where k = number of flagged games, and m = number of features
        """
        if not self.is_fitted:
            print("Warning: model is not fitted and will use default thresholds")
        # make predictions
        return predictions

    def save_model(
        self,
        saved_models_folder=SAVED_MODELS_FOLDER,
        model_name: str = "player_anomaly_detection_model"
    ):
        if not os.path.exists(SAVED_MODELS_FOLDER):
            os.mkdir(SAVED_MODELS_FOLDER)
        with open(f'{SAVED_MODELS_FOLDER}/{BASE_FILE_NAME}_{model_name}.pkl', 'wb') as f:
            pickle.dump(self._thresholds, f)
</code></pre>
<p>Below is the structure of the function <code>get_player_account_status</code> inside <code>get_player_labels.py</code>:</p>
<pre><code>def get_player_account_status(player, account_statuses):
    try:
        user = makeAPIcall()
        if user.get('tosViolation'):
            account_statuses[player] = "tosViolation"
        elif user.get('disabled'):
            account_statuses[player] = "closed"
        else:
            account_statuses[player] = "open"
    except ApiHttpError:
        account_statuses[player] = "not found"
</code></pre>
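For reference, `mock.patch` must target the name in the namespace where it is looked up at call time, which `model.get_player_account_status` does correctly here, since `model.py` imports the function into its own namespace. A self-contained sketch of that rule (the module names `lib_demo`/`model_demo` are invented for illustration):

```python
import sys
import types
from unittest import mock

# build a tiny "library" module holding the real function
lib = types.ModuleType("lib_demo")
lib.get_status = lambda player: "open"
sys.modules["lib_demo"] = lib

# build a "model" module that imports the name into its own namespace,
# mirroring `from get_player_labels import get_player_account_status`
model_demo = types.ModuleType("model_demo")
exec(
    "from lib_demo import get_status\n"
    "def check(player):\n"
    "    return get_status(player)\n",
    model_demo.__dict__,
)
sys.modules["model_demo"] = model_demo

# patching the name where it is *used* replaces what check() calls
with mock.patch("model_demo.get_status", return_value="tosViolation"):
    assert model_demo.check("alice") == "tosViolation"

# outside the patch, the original binding is restored
assert model_demo.check("alice") == "open"
```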
|
<python><unit-testing><mocking><pytest><decorator>
|
2024-02-01 19:10:31
| 1
| 20,200
|
Derek O
|
77,922,733
| 2,575,970
|
Bearer token expires after 3599, refresh data steps
|
<p>The bearer token expires after 3599 seconds. I still need to refresh it and keep using it. How is that possible? I am looking for a simple code example to do that. I am using Python.</p>
<pre><code> # Create an MSAL instance providing the client_id, authority and client_credential parameters
client = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=client_secret)
# First, try to lookup an access token in cache
token_result = client.acquire_token_silent(scope, account=None)
# If the token is available in cache, save it to a variable
if token_result:
    access_token = 'Bearer ' + token_result['access_token']
    print('Access token was loaded from cache')

# If the token is not available in cache, acquire a new one from Azure AD and save it to a variable
if not token_result:
    token_result = client.acquire_token_for_client(scopes=scope)
    access_token = 'Bearer ' + token_result['access_token']
    print('New access token was acquired from Azure AD')
</code></pre>
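Library-agnostic, the refresh pattern is: check expiry before every use and re-acquire when the cached token is near its 3599-second lifetime. A minimal sketch (the `acquire` callable and `skew` value are placeholders for the MSAL calls above, not real MSAL API):

```python
import time

class TokenProvider:
    """Caches a bearer token and re-acquires it shortly before expiry."""

    def __init__(self, acquire, skew=60):
        self._acquire = acquire   # callable returning (token, lifetime_in_seconds)
        self._skew = skew         # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self._skew:
            token, lifetime = self._acquire()
            self._token = token
            self._expires_at = now + lifetime
        return self._token
```

Calling `provider.get()` before each request then hands back the cached token while it is fresh and transparently re-acquires once it is close to expiry.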
|
<python><azure-active-directory><bearer-token>
|
2024-02-01 19:00:02
| 1
| 416
|
WhoamI
|
77,922,683
| 7,844,237
|
python resample incorrectly
|
<p>I have a <code>pandas</code> dataframe like this:</p>
<pre><code> ticker_name price
timestamp_column
2024-02-01 18:34:58 NIFTY_08-FEB-2024_CE_21000 85.0
2024-02-01 18:34:55 NIFTY_08-FEB-2024_CE_21000 84.0
2024-02-01 18:34:52 NIFTY_08-FEB-2024_CE_21000 83.0
2024-02-01 18:34:49 NIFTY_08-FEB-2024_CE_21000 82.0
2024-02-01 18:34:46 NIFTY_08-FEB-2024_CE_21000 81.0
... ... ...
2024-02-01 18:30:58 NIFTY_08-FEB-2024_CE_21000 5.0
2024-02-01 18:30:55 NIFTY_08-FEB-2024_CE_21000 4.0
2024-02-01 18:30:52 NIFTY_08-FEB-2024_CE_21000 3.0
2024-02-01 18:30:49 NIFTY_08-FEB-2024_CE_21000 2.0
2024-02-01 18:30:46 NIFTY_08-FEB-2024_CE_21000 1.0
</code></pre>
<p>I am trying to resample using the below code:</p>
<pre><code>df_resample = df['price'].resample('10s').ohlc()
</code></pre>
<p>And I get:</p>
<pre><code> open high low close
timestamp_column
2024-02-01 18:30:40 1.0 2.0 1.0 2.0
2024-02-01 18:30:50 3.0 5.0 3.0 5.0
2024-02-01 18:31:00 6.0 8.0 6.0 8.0
2024-02-01 18:31:10 9.0 12.0 9.0 12.0
....
2024-02-01 18:34:20 73.0 75.0 73.0 75.0
2024-02-01 18:34:30 76.0 78.0 76.0 78.0
2024-02-01 18:34:40 79.0 82.0 79.0 82.0
2024-02-01 18:34:50 83.0 85.0 83.0 85.0
</code></pre>
<p>If you look at the data, for example the first row:</p>
<pre><code>2024-02-01 18:30:40 1.0 2.0 1.0 2.0
</code></pre>
<p>This should be the open-high-low-close for the slot 18:30:<strong>30</strong> to 18:30:<strong>40</strong>. But instead it's 18:30:<strong>40</strong> to 18:30:<strong>50</strong>. You can see this in every row; the same applies to the last row as well.</p>
<p>Am I missing something? Or is this the intended behaviour? It looks wrong to me.</p>
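For what it's worth, this matches pandas' documented default: bins are closed on the left and labelled by their left edge, so the `18:30:40` row covers `[18:30:40, 18:30:50)`. A small sketch (with made-up ascending sample data) showing the effect of `closed='right', label='right'`; note the index in the question is also descending, which may warrant a `sort_index()` first:

```python
import pandas as pd

idx = pd.to_datetime(["2024-02-01 18:30:46", "2024-02-01 18:30:49",
                      "2024-02-01 18:30:52", "2024-02-01 18:30:55",
                      "2024-02-01 18:30:58"])
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0], index=idx)

# default: bins closed on the left, labelled by the left edge,
# so 18:30:40 covers [18:30:40, 18:30:50)
left = s.resample("10s").ohlc()

# closed='right', label='right': 18:30:50 now covers (18:30:40, 18:30:50]
right = s.resample("10s", closed="right", label="right").ohlc()
```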
|
<python><pandas><ohlc><resample>
|
2024-02-01 18:51:49
| 2
| 393
|
keerthan kumar
|
77,922,478
| 11,092,636
|
sets are unordered but python seems to "set a seed"
|
<p>I know that Python sets are unordered. For example, executing this code 10 times will yield around 5 times [DR1, DR12] and five times [DR12, DR1].</p>
<pre><code>list_of_list = [[] for _ in range(3)]
list_of_list[0].append("DR1")
list_of_list[0].append("DR1")
list_of_list[0].append("DR1")
list_of_list[0].append("DR12")
list_of_list[0].append("DR12")
list_of_list[0].append("DR12")
list_of_list[0] = list(set(list_of_list[0]))
print(list_of_list)
</code></pre>
<p>However, when executing this code:</p>
<pre><code>for _ in range(5):
list_of_list = [[] for _ in range(3)]
list_of_list[0].append("DR1")
list_of_list[0].append("DR1")
list_of_list[0].append("DR1")
list_of_list[0].append("DR12")
list_of_list[0].append("DR12")
list_of_list[0].append("DR12")
list_of_list[0] = list(set(list_of_list[0]))
print(list_of_list)
</code></pre>
<p>We always get the same order for each iteration of the <code>for</code> loop but if we restart the code we might get another order (but then this other order would be repeated 5 times).</p>
<p>It seems that Python sets use some sort of seed, a kind of fixed ordering within the otherwise unordered hash maps. What is the underlying explanation?</p>
<p>After some investigation I came up with this conclusion, but I would still want someone to confirm I'm right since I'm a beginner in programming: would that mean that Python adds a random seed to the hash computation of strings at the start of the program? I know hashes of integers don't change, since they are the integer itself modulo 2**61 - 1, but the hash of strings does? (<code>print(hash("d"))</code> yields different values if you restart the program several times.)</p>
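For context: CPython salts `str` and `bytes` hashes with a per-process random value (controllable via the `PYTHONHASHSEED` environment variable), which is why set order is stable within one run but can change between runs. A quick sketch demonstrating both halves of that:

```python
import os
import subprocess
import sys

# within a single process the salt is fixed, so hashes (and set order) are stable
assert hash("d") == hash("d")

# pinning PYTHONHASHSEED makes *separate* processes agree on string hashes too
env = dict(os.environ, PYTHONHASHSEED="0")
code = 'print(hash("d"))'

def run():
    return subprocess.run([sys.executable, "-c", code], env=env,
                          capture_output=True, text=True).stdout

assert run() == run()
```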
|
<python>
|
2024-02-01 18:14:33
| 0
| 720
|
FluidMechanics Potential Flows
|
77,922,451
| 11,770,390
|
How to ensure python is installed for C++ code generation with cmake & conan
|
<p>I have a library that uses a python code generator to read in some yaml files and spit out a *.hpp file. Now I want to ship it on conancenter and have it build effortlessly on the consumer machines.</p>
<p>There's one problem though: I use the pyyaml package for the code generator to read the yaml file, which is not guaranteed to be available on the consumer's system. Heck, Python itself might not even be installed! And of course all of that would need to run in a separate environment to ensure the right pyyaml/Python versions, which might collide with the environment the user is already using.</p>
<p>Is there a way to ensure python + pyyaml are installed on the fly using build requirements in conan?</p>
<p>I thought about it for a moment, but the problem seems to be that I cannot install Python from conancenter (it's simply not available) and hence cannot use it in the build requirements. And I can't use bash, since I can't read yaml files without also downloading some additional tools.</p>
<p>What can I do here?</p>
|
<python><cmake><code-generation><conan>
|
2024-02-01 18:09:11
| 1
| 5,344
|
glades
|
77,922,414
| 8,874,388
|
Python Lists: Why is the Pythonic list comprehension 8-20% slower?
|
<p><strong>(Update: The issue has been found. <a href="https://stackoverflow.com/a/77922551/8874388">See my answer</a>. This turned out to be a performance issue in Python 3.11, and only affects small lists. For Python 3.9, 3.10 and 3.12, list comprehension is ALWAYS faster, regardless of list size.)</strong></p>
<p>I've seen claims that "list comprehension is always faster because it avoids the lookup and calling of the <code>append()</code> method".</p>
<p>The difference is how appending is done:</p>
<ul>
<li>List comprehensions use just two bytecodes, <code>LOAD_FAST(k); LIST_APPEND</code>.</li>
<li>Regular <code>.append()</code> calls use five bytecodes, one of which is a method lookup (hash table), <code>LOAD_METHOD(append); LOAD_FAST(k); PRECALL; CALL; POP_TOP</code>.</li>
</ul>
<p>So obviously, list comprehension <em>should</em> be faster since it avoids costly function lookups/calls...</p>
<p>But no matter how I twist this, I see two things:</p>
<ul>
<li>List comprehension is 8-20% slower than <code>.append()</code>. A friend verified this on their computer too.</li>
<li>The disassembly for list comprehension is way longer, which may explain why it's slower.</li>
</ul>
<p>Does anyone know what's going on? Why is the Pythonic "list comprehension" so much slower?</p>
<p>I am running Python 3.11 on Linux. He is running Windows. So it's slower on both platforms.</p>
<p>My code is below.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any

def filter_unset_loop(
    settings: dict[str, Any],
    new_settings: dict[str, Any],
) -> dict[str, Any]:
    remove_keys: list[str] = []
    for k, v in new_settings.items():
        if (v is None or k == "self" or k == "config"):
            remove_keys.append(k)
    for k in remove_keys:
        del new_settings[k]
    return new_settings

def filter_unset_listcomp(
    settings: dict[str, Any],
    new_settings: dict[str, Any],
) -> dict[str, Any]:
    remove_keys: list[str] = [
        k for k, v in new_settings.items() if (v is None or k == "self" or k == "config")
    ]
    for k in remove_keys:
        del new_settings[k]
    return new_settings

def test_loop():
    filter_unset_loop(
        {},
        {
            "self": True,
            "what": None,
            "hello": "world",
            "great": "key",
            "1": 1, "2": 1, "3": 1, "4": 1, "5": 1, "6": 1, "7": 1, "8": 1,
            "9": 1, "10": 1, "11": 1, "12": 1, "13": 1, "14": 1, "15": 1, "16": 1,
        },
    )

def test_listcomp():
    filter_unset_listcomp(
        {},
        {
            "self": True,
            "what": None,
            "hello": "world",
            "great": "key",
            "1": 1, "2": 1, "3": 1, "4": 1, "5": 1, "6": 1, "7": 1, "8": 1,
            "9": 1, "10": 1, "11": 1, "12": 1, "13": 1, "14": 1, "15": 1, "16": 1,
        },
    )

import timeit

n = 20_000_000
t1 = timeit.Timer(test_loop)
print("loop (seconds):", t1.timeit(number=n))
t2 = timeit.Timer(test_listcomp)
print("listcomp (seconds):", t2.timeit(number=n))

import dis
print("\nLOOP DISASSEMBLY:\n")
dis.dis(filter_unset_loop)
print("\n" * 10)
print("\nLISTCOMP DISASSEMBLY:\n")
dis.dis(filter_unset_listcomp)
</code></pre>
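A stripped-down version of the same comparison; which variant wins depends on the interpreter version (per the update at the top), so this sketch only asserts that both produce identical results:

```python
import timeit

def with_append(items):
    # explicit loop: LOAD_METHOD/CALL per element
    out = []
    for x in items:
        out.append(x)
    return out

def with_comp(items):
    # comprehension: LIST_APPEND per element
    return [x for x in items]

data = list(range(16))          # small list, matching the 16-key dicts above
assert with_append(data) == with_comp(data)

t_append = timeit.timeit(lambda: with_append(data), number=100_000)
t_comp = timeit.timeit(lambda: with_comp(data), number=100_000)
print(f"append: {t_append:.3f}s  listcomp: {t_comp:.3f}s")
```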
|
<python><python-3.x><list><optimization>
|
2024-02-01 17:59:36
| 1
| 4,749
|
Mitch McMabers
|
77,922,373
| 11,515,528
|
Convert column of dates to count of days from start day
|
<p>Simple question but I am struggling to find the answer. I am after a simple count of days from the start date. Thanks in advance.</p>
<pre><code>df = pd.DataFrame({'year': [2015, 2015, 2015],
'month': [2, 2, 2],
'day': [4, 5, 10]})
pd.to_datetime(df)
output
0 2015-02-04
1 2015-02-05
2 2015-02-10
</code></pre>
<p>What I want</p>
<pre><code> Date Days_from_start
0 2015-02-04 0
1 2015-02-05 1
2 2015-02-10 6
</code></pre>
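One possible approach (a sketch built on the question's own frame): subtract the first date and take `.dt.days` of the resulting Timedeltas:

```python
import pandas as pd

df = pd.DataFrame({'year': [2015, 2015, 2015],
                   'month': [2, 2, 2],
                   'day': [4, 5, 10]})
dates = pd.to_datetime(df)

# subtracting the first date yields Timedeltas; .dt.days gives integer days
out = pd.DataFrame({'Date': dates,
                    'Days_from_start': (dates - dates.iloc[0]).dt.days})
```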
|
<python><pandas>
|
2024-02-01 17:53:13
| 3
| 1,865
|
Cam
|
77,922,238
| 5,728,714
|
Errors "overflow encountered in matmul" and "Optimization failed to converge" while trying to use statsmodels' Exponential smoothing
|
<p>I have the following series:</p>
<pre><code>month,demand
2022-02-01,6059
2022-03-01,11990
2022-04-01,6586
2022-05-01,7363
2022-06-01,10837
2022-07-01,2268
2022-08-01,2615
2022-09-01,1590
2022-10-01,3797
2022-11-01,2195
2022-12-01,4480
2023-01-01,4030
2023-02-01,3242
2023-03-01,3111
2023-04-01,6724
2023-05-01,8781
2023-06-01,3115
2023-07-01,2031
2023-08-01,2992
2023-09-01,1453
2023-10-01,1407
2023-11-01,1672
2023-12-01,3261
2024-01-01,3076
</code></pre>
<p>And while trying to run the following code:</p>
<pre class="lang-py prettyprint-override"><code>demands = pd.read_csv('demands.csv')
demands.set_index("month", inplace=True)
demands.index = pd.DatetimeIndex(demands.index.values, freq="MS")
model = sm.tsa.ExponentialSmoothing(demands['demand'], trend='mul', use_boxcox=False)
fittedModel = model.fit()
result = fittedModel.forecast(steps=3)
print(fittedModel.mle_retvals)
</code></pre>
<p>I get the following warnings:</p>
<pre><code>[...]/.venv/lib/python3.12/site-packages/statsmodels/tsa/holtwinters/model.py:83: RuntimeWarning: overflow encountered in matmul
return err.T @ err
[...]/.venv/lib/python3.12/site-packages/statsmodels/tsa/holtwinters/model.py:917: ConvergenceWarning: Optimization failed to converge. Check mle_retvals.
warnings.warn(
message: Inequality constraints incompatible
success: False
status: 4
fun: 287177206.01341003
x: [ 5.000e-03 1.000e-04 1.019e+04 9.679e-01]
nit: 2
jac: [-2.500e+09 -4.295e+05 7.745e+04 9.093e+09]
nfev: 20
njev: 2
</code></pre>
<p>However, I could not find anything that explains why these warnings are being emitted.</p>
<p>Can the warnings impact the forecasted values or are they just something that should be ignored?</p>
|
<python><statsmodels>
|
2024-02-01 17:30:00
| 0
| 1,877
|
Mathias Hillmann
|
77,922,041
| 1,506,763
|
Performance of numpy sum and bottleneck nansum
|
<p>I've got a piece of code that is quite heavily dependent on the use of <code>numpy</code> <code>sum</code>; about 60% of my time is spent on the calculation of just a few results, and a replication of the offending code is shown below. This section of code (not including the array generation, as this is passed in from external files) is then run thousands of times in each simulation as the data changes.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# generate some representative size example data
array_size = 100 # typical array_size range 100 - 5000
weighting = np.random.random(array_size)
var_a = np.random.random(array_size)
var_b = np.random.random((array_size, 3))
var_c = np.random.random((array_size, 3))
var_d = np.random.random(array_size)
var_e = np.random.random(array_size)
# calculate results
res_1 = var_a * weighting
res_2 = np.sum(res_1)
res_3 = np.sum(var_b * var_a[:, np.newaxis], axis=0)
res_4 = np.sum(var_c * var_a[:, np.newaxis], axis=0)
res_5 = np.sum(res_1 / var_e)
res_6 = np.sum(var_d / res_1) / res_2
</code></pre>
<p>Running a few quick <code>timeit</code> benchmarks on the data gives a baseline of the time spent per call to the block of code:</p>
<pre class="lang-py prettyprint-override"><code>36.3 µs ± 579 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # array_size = 100
103 µs ± 2.88 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # array_size = 1000
360 µs ± 3.23 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) # array_size = 5000
</code></pre>
<p>In order to reduce my runtime I decided to replace <code>numpy.sum()</code> with <code>bottleneck.nansum()</code>, as <code>bottleneck</code> provides optimised implementations of some numpy functions. Here's the ever-so-slightly modified bottleneck version:</p>
<pre class="lang-py prettyprint-override"><code>res_1 = var_a * weighting
res_2 = bn.nansum(res_1)
res_3 = bn.nansum(var_b * var_a[:, np.newaxis], axis=0)
res_4 = bn.nansum(var_c * var_a[:, np.newaxis], axis=0)
res_5 = bn.nansum(res_1 / var_e)
res_6 = bn.nansum(var_d / res_1) / res_2
</code></pre>
<p>And a quick <code>timeit</code> test shows that <code>bn.nansum</code> should be quicker than numpy:</p>
<pre class="lang-py prettyprint-override"><code>10.3 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) # array_size = 100
43 µs ± 1.58 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # array_size = 1000
177 µs ± 1.66 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # array_size = 5000
</code></pre>
<p>So this simple test suggests that my code should be significantly quicker if I replace <code>numpy.sum</code> with <code>bottleneck.nansum</code>. However, when I make that change in the code and then run a test, my code is about 5x slower overall with <code>bottleneck</code> being used, and <code>bottleneck</code> seems to take more than 10x longer to process these few changed lines, contradicting the <code>timeit</code> results.</p>
<p>I've just copied these lines from some profiler runs related to this:</p>
<pre><code>ncalls | tottime | percall | cumtime | percall | filename:lineno(function)
83349 525.4 0.006303 525.4 0.006303 ~:0(<built-in method bottleneck.reduce.nansum>)
83349 0.2606 3.127e-06 55.69 0.0006681 fromnumeric.py:2188(sum)
101880 55.43 0.0005441 55.43 0.0005441 ~:0(<method 'reduce' of 'numpy.ufunc' objects>)
</code></pre>
<p>Does anyone have any idea why simple tests with <code>timeit</code> show <code>bottleneck</code> to be much quicker, yet the full script runs significantly slower when bottleneck is used? Is <code>timeit</code> doing something special that doesn't happen under normal circumstances? Is there any way I can investigate this further to see what the problem is?</p>
<p>The only changes in the code are those few lines highlighted above - nothing else in my code changes. I'm quite surprised by this significant degrade in performance when using <code>bottleneck</code> instead of <code>numpy</code>.</p>
|
<python><arrays><numpy><performance><bottleneck-package>
|
2024-02-01 16:59:50
| 1
| 676
|
jpmorr
|
77,922,007
| 275,088
|
Is creating a constant dictionary inside a function efficient in Python?
|
<p>Say, I have a function that creates a dictionary just to look up a value, e.g.:</p>
<pre><code>def check_map(val):
    value_map = {"val1": "newval1", "val2": "newval2", "val3": "newval3"}
    val = value_map.get(val, val)
    # other stuff
    return val
</code></pre>
<p>Will Python optimize the dictionary creation in any way since it's basically a constant?</p>
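As far as I know, CPython rebuilds a dict literal on every call rather than caching it, so a common workaround, if the table is truly constant, is hoisting it to module level. A sketch comparing the two (the function names are mine):

```python
# built once, at import time
_VALUE_MAP = {"val1": "newval1", "val2": "newval2", "val3": "newval3"}

def check_map_hoisted(val):
    # only a global lookup per call
    return _VALUE_MAP.get(val, val)

def check_map_local(val):
    # this literal is rebuilt on every single call
    value_map = {"val1": "newval1", "val2": "newval2", "val3": "newval3"}
    return value_map.get(val, val)
```

Both behave identically; the hoisted version simply avoids re-creating the dict per call.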
|
<python><dictionary>
|
2024-02-01 16:55:40
| 1
| 16,548
|
planetp
|
77,921,942
| 7,844,237
|
SMA calculated in talib in reverse order
|
<p>I have a pandas Series:</p>
<pre><code>timestamp_column
2024-01-31 14:23:10 794.65
2024-01-31 14:23:00 794.65
2024-01-31 14:22:50 794.65
2024-01-31 14:22:40 794.65
2024-01-31 14:22:30 794.65
2024-01-31 14:22:20 794.65
Freq: -10S, Name: close, dtype: float64
</code></pre>
<p>I'm using the below code:</p>
<pre><code>ma_value = talib.SMA(close_df, ma_period)
</code></pre>
<p>The result I'm getting is in reverse order: it's from old to new, but I want new to old. I don't see any option to do this in talib. Please help.</p>
<pre><code>timestamp_column
2024-01-31 14:23:10 NaN
2024-01-31 14:23:00 NaN
2024-01-31 14:22:50 NaN
2024-01-31 14:22:40 NaN
2024-01-31 14:22:30 NaN
2024-01-31 14:22:20 794.65
Freq: -10S, dtype: float64
</code></pre>
<p>I want the result to be:</p>
<pre><code> 2024-01-31 14:23:10 764.65
2024-01-31 14:23:00 NaN
2024-01-31 14:22:50 NaN
2024-01-31 14:22:40 NaN
2024-01-31 14:22:30 NaN
2024-01-31 14:22:20 NaN
</code></pre>
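talib has no ordering option; the usual approach is to compute on oldest-to-newest data and reverse the result afterwards. A sketch using a rolling mean as a stand-in for `talib.SMA` (the arithmetic is the same for a simple moving average); if the series is newest-first as shown, `sort_index()` it before computing:

```python
import pandas as pd

# ascending sample data mirroring the question's constant prices
s = pd.Series([794.65] * 6,
              index=pd.date_range("2024-01-31 14:22:20", periods=6, freq="10s"))

# compute on chronologically ascending data...
sma = s.rolling(3).mean()       # stand-in for talib.SMA(s, 3)

# ...then flip so the newest timestamp comes first, NaNs at the bottom
newest_first = sma.iloc[::-1]
```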
|
<python><pandas><ta-lib>
|
2024-02-01 16:42:52
| 0
| 393
|
keerthan kumar
|
77,921,898
| 1,478,905
|
Not getting source documents from chain in langchain and openAI embeddings
|
<p>I'm loading pdfs using langchain.document_loaders:</p>
<p><code>loader = DirectoryLoader( './files/', glob='*.pdf', loader_cls=PyPDFLoader)</code></p>
<p>then split the docs, created the embeddings, and stored and loaded them:</p>
<pre><code>docsearch = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
...
docsearch = Chroma(persist_directory, embedding_function=embeddings )
retriever = docsearch.as_retriever( search_kwargs={"k": 5})
docs = retriever.get_relevant_documents( query )
len( docs)
</code></pre>
<p>I'm getting a correct response but I'm getting 0 source documents.</p>
|
<python><openai-api><langchain><openaiembeddings>
|
2024-02-01 16:35:23
| 3
| 997
|
Diego Quirós
|
77,921,888
| 2,675,981
|
Allow python script to be executed by Apache, but not read in a browser
|
<p>I have a website that uses a python script to get information from a separate entity. The python script is required due to the libraries in use and the structure of the other entity. The retrieved information will be used in various ways on the site.</p>
<p>The website itself is PHP running on Apache and there is a section that kicks off the python script. Obviously, being an Apache server, the python script is kicked off by Apache, so the script must be usable / readable by Apache.</p>
<p>The issue is that if a user goes to the location of the script (i.e. <code>www.mysite.com/scripts/myscript.py</code>), the code is viewable through the browser.</p>
<p>Is it possible to set up the permissions in such a way that Apache can continue to use the script, but when going to the script directly in a browser, the code cannot be viewed? I am also okay with changing the script to not be easily read (maybe obfuscate it?).</p>
<p>I have tried to change the <code>.htaccess</code> file to not allow the reading of files in that directory, but while it does direct to a 403 page when trying to view, it prevents the script from being used by Apache. I have also changed the permissions of the file to allow Apache to execute the file, but not read it... that seems to prevent Apache from reading the results from the script.</p>
<p>Any ideas?</p>
<hr />
<p>Edit 1:
I think it should be noted that the python script is being kicked off by PHP through the exec command: <code>exec("$cmd > /dev/null 2>&1 & echo $!;")</code>, where <code>$cmd</code> is the command to be executed. This may or may not be the best use case here. If there is another way to make PHP kick off a python script and get the results from the script, I am open to that alternative.</p>
|
<python><php><apache>
|
2024-02-01 16:34:40
| 1
| 885
|
Apolymoxic
|
77,921,360
| 151,939
|
How to specific augmentation data types in mediapipe_model_maker?
|
<p>I'm trying to use the image_classifier from mediapipe_model_maker to train a custom tflite image classification model. All I see for data augmentation is a boolean that can be passed in the options called "do_data_augmentation" that does random augmentation (cropping, flipping, etc.).</p>
<pre><code>spec = image_classifier.SupportedModels.MOBILENET_V2
hparams=image_classifier.HParams(epochs=100, export_dir="exported_model_2",do_data_augmentation=False)
options = image_classifier.ImageClassifierOptions(supported_model=spec, hparams=hparams)
model = image_classifier.ImageClassifier.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)
</code></pre>
<p>I want to be specific about the data augmentation and keep only flipping, exposure and blur. Is that possible? Thanks.</p>
|
<python><tensorflow><machine-learning><mediapipe><image-classification>
|
2024-02-01 15:20:08
| 1
| 8,618
|
Michael Eilers Smith
|
77,921,326
| 2,575,970
|
Azure bearer token lifetime
|
<p>I have Python code that calls the Graph API to browse a directory on SharePoint. The directory has 120GB of files and needs hours to scan. However, what I have observed is that the script just shows as running in Visual Studio Code and no further execution happens. I am printing the file names in the loop, and it stops printing them after an hour.</p>
<p>Can it be because of the token expiring after an hour? If yes, why do I not get an error stating that the token is invalid?</p>
<pre><code># Define imports
import requests
# Copy access_token and specify the MS Graph API endpoint you want to call, e.g. 'https://graph.microsoft.com/v1.0/groups' to get all groups in your organization
#access_token = '{ACCESS TOKEN YOU ACQUIRED PREVIOUSLY}'
url = "[URL TO THE SHAREPOINT]"
headers = {
'Authorization': access_token
}
consentfilecount=0
clientreportcount = 0
graphlinkcount = 0
while True:
    #print(graph_result.json()['@odata.nextLink'])
    graph_result = requests.get(url=url, headers=headers)
    if ('value' in graph_result.json()):
        for list in graph_result.json()['value']:
            if(("Client Consent Form").lower() in list["name"].lower()):
                consentfilecount += 1
                print(list["name"])
            if(("Final Client Report").lower() in list["name"].lower()):
                clientreportcount += 1
                print(list["name"])
    #print(graph_result.json())
    if('@odata.nextLink' in graph_result.json()):
        url = graph_result.json()['@odata.nextLink']
        graphlinkcount += 1
    else:
        break

print(consentfilecount)
</code></pre>
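One suspect is indeed the ~3599-second token lifetime: an expired token makes Graph return error bodies that this loop never inspects. A sketch of the pagination walk with an early token refresh (`get_page` and `acquire_token` are placeholder callables, not real Graph API calls):

```python
import time

def fetch_all(first_url, get_page, acquire_token, lifetime=3599, skew=300):
    """Walk @odata.nextLink pages, refreshing the token before it expires."""
    token, issued = acquire_token(), time.time()
    url, items = first_url, []
    while url:
        if time.time() - issued > lifetime - skew:
            token, issued = acquire_token(), time.time()   # refresh early
        page = get_page(url, token)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")                  # None ends the walk
    return items
```

With MSAL specifically, calling `acquire_token_silent` before each request achieves the same thing, since it serves from cache until the token nears expiry.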
|
<python><azure><microsoft-graph-api><onedrive><bearer-token>
|
2024-02-01 15:14:52
| 2
| 416
|
WhoamI
|
77,921,195
| 10,054,520
|
Running Python on Windows, how can I run multiple scripts that write to the same file?
|
<p>I'm running one instance of a Python 3.10 script on Windows 10. I would like to run another instance of the same script; however, at the end of the script it will write to a txt file so I don't want the scripts competing/overwriting what the other has done.</p>
<p>The flow goes like this</p>
<pre><code>for i in newlist:
runFunction()
completedlist.append(i)
with open(file.txt) as file:
for line in completedlist:
file.write()
</code></pre>
<p>I have a loop that has a list of items to run through. For each item it will perform a function, capture the item that was completed, and write the completed list to a txt file. This has been working just fine for me so i don't want to change the structure of how it runs.</p>
<p>My problem is that I have a long list of work items, and I can't run these in parallel. I would like to launch a second instance of this script and lock "file.txt" while it's being written to, so that neither script overwrites the other. I would also like the script to not fail if the txt file is locked, and instead wait for it to be unlocked. Something like this:</p>
<pre><code>for i in newlist:
runFunction()
completedlist.append(i)
#Check if file.txt is locked
#If locked wait X time or wait for file to be unlocked
#Once file is unlocked proceed with below
with open(file.txt) as file:
for line in completedlist:
file.write()
</code></pre>
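A portable sketch of the wait-for-lock step using an `O_CREAT | O_EXCL` lock file, which is atomic on both Windows and POSIX (the helper name `append_lines` is mine):

```python
import os
import time

def append_lines(path, lines, lock_path=None, timeout=30.0):
    """Append lines to `path`, serialised across processes via a lock file."""
    lock_path = lock_path or path + ".lock"
    deadline = time.time() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL is atomic: only one process can create the file
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            if time.time() >= deadline:
                raise TimeoutError(f"could not lock {path}")
            time.sleep(0.1)   # another instance holds the lock; wait and retry
    try:
        with open(path, "a") as f:     # append mode, so instances never clobber
            for line in lines:
                f.write(line + "\n")
    finally:
        os.close(fd)
        os.remove(lock_path)           # release the lock
```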
|
<python>
|
2024-02-01 14:58:23
| 2
| 337
|
MyNameHere
|
77,920,996
| 1,929,820
|
Remove all jobs from plone.app.async queue
|
<p>I have a lot of jobs in the queue and I just want to empty it, instead of waiting for them. So I managed to access pdb and this is what I tried:</p>
<pre><code>from zope.component import queryUtility
from plone.app.async.interfaces import IAsyncService
service = queryUtility(IAsyncService)
(Pdb) service
<plone.app.async.service.AsyncService object at 0x...>
queues = service.getQueues()
queues
<zc.async.queue.Queues object at 0x...>
vars(queues)
{'_len': <BTrees.Length.Length object at 0x...>, '_data': <BTrees.OOBTree.OOBTree object at 0x...>}
(Pdb) queues.values()
[<zc.async.queue.Queue object at 0x...>]
queue = queues.values()[0]
(Pdb) queue
<zc.async.queue.Queue object at 0x...>
dir(queue)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__getstate__', '__hash__', '__implemented__', '__init__', '__iter__', '__len__', '__module__', '__new__', '__nonzero__', '__parent__', '__providedBy__', '__provides__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__slotnames__', '__str__', '__subclasshook__', '__weakref__', '_held', '_iter', '_length', '_p_activate', '_p_changed', '_p_deactivate', '_p_delattr', '_p_estimated_size', '_p_getattr', '_p_invalidate', '_p_jar', '_p_mtime', '_p_oid', '_p_serial', '_p_setattr', '_p_state', '_putback_queue', '_queue', '_z_parent__', 'claim', 'dispatchers', 'name', 'parent', 'pull', 'put', 'putBack', 'quotas', 'remove']
(Pdb) vars(queue)
{'name': '', 'parent': <zc.async.queue.Queues object at 0x...>, '_held': <BTrees.OOBTree.OOBTree object at 0x...>, 'quotas': <zc.async.queue.Quotas object at 0x...>, 'dispatchers': <zc.async.queue.Dispatchers object at 0x...>, '_length': <BTrees.Length.Length object at 0x...>, '_queue': <zc.queue._queue.CompositeQueue object at 0x...>, '_putback_queue': <zc.queue._queue.CompositeQueue object at 0x...>}
qu = queue._queue
(Pdb) qu
<zc.queue._queue.CompositeQueue object at 0x...>
dir(qu)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__getstate__', '__hash__', '__implemented__', '__init__', '__iter__', '__len__', '__module__', '__new__', '__nonzero__', '__providedBy__', '__provides__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__slotnames__', '__str__', '__subclasshook__', '__weakref__', '_data', '_p_activate', '_p_changed', '_p_deactivate', '_p_delattr', '_p_estimated_size', '_p_getattr', '_p_invalidate', '_p_jar', '_p_mtime', '_p_oid', '_p_resolveConflict', '_p_serial', '_p_setattr', '_p_state', 'compositeSize', 'pull', 'put', 'subfactory']
qu.clear()
*** AttributeError: 'CompositeQueue' object has no attribute 'clear'
len(queue)
64252
queue.clear()
*** AttributeError: 'Queue' object has no attribute 'clear'
</code></pre>
<p>I feel that I am so close, but it's annoying I can't find how to get rid of these 64000+ items in the queue. Please help.</p>
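Since the `dir()` output above shows the queue supports `pull()` and `__len__`, one plausible route is draining it with repeated `pull()` calls (and committing the transaction afterwards). A sketch against a stand-in object with the same surface, since zc.async itself isn't importable here:

```python
# stand-in exposing the same __len__/pull surface as zc.async's Queue
class FakeQueue:
    def __init__(self, n):
        self._jobs = list(range(n))

    def __len__(self):
        return len(self._jobs)

    def pull(self, index=0):
        return self._jobs.pop(index)

queue = FakeQueue(64252)

# drain every pending job from the front of the queue
while len(queue):
    queue.pull()

# with the real zc.async queue, persist the change afterwards:
#   import transaction; transaction.commit()
```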
|
<python><queue><plone>
|
2024-02-01 14:29:45
| 1
| 3,437
|
GhitaB
|
77,920,917
| 1,987,599
|
Unable to validate XML with schema but works by reading the written file from it
|
<p>I am currently using <code>lxml</code> and want to validate some XML content.</p>
<p>I wrote it completely in Python from <code>tei = etree.Element("TEI", nsmap={None: 'http://www.tei-c.org/ns/1.0'})</code> with many subelements.</p>
<p>At a moment, I want to check if the structure is ok using a specific <code>.xsd</code> file using the following code:</p>
<pre><code>xmlschema_doc = etree.parse(xsd_file_path)
xmlschema = etree.XMLSchema(xmlschema_doc)
# run check
status = xmlschema.validate(xml_tree)
</code></pre>
<p>It returns False with error <code>Element 'TEI': No matching global declaration available for the validation root.</code></p>
<p>I observe a very weird thing: if I write the XML using</p>
<pre><code>ET = etree.ElementTree(xmlData)
ET.write('test.xml', pretty_print=True, xml_declaration=True, encoding='utf-8')
</code></pre>
<p>and if I reopen it with <code>b = etree.parse('test.xml')</code>, I finally get no error and the XML structure is reported valid by <code>xmlschema.validate(b)</code>.</p>
<p>Any idea what I need to add to my XML structure?</p>
<p><strong>EDIT:</strong>
First items in the not valid XML <a href="https://i.sstatic.net/DCOkU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DCOkU.png" alt="linesXML" /></a></p>
<p>First items in the valid XML file <a href="https://i.sstatic.net/nX2sr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nX2sr.png" alt="linefile" /></a></p>
<p><strong>EDIT:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<TEI xmlns="http://www.tei-c.org/ns/1.0">
<text>
<body>
<listBibl>
<biblFull>
<titleStmt>
<title xml:lang="en">article</title>
<title xml:lang="fr">article</title>
<title type="sub" xml:lang="en">A subtitle</title>
<author role="aut">
<persName>
<forename type="first">John</forename>
<surname>Doe</surname>
</persName>
<email>email</email>
<idno type="http://orcid.org/">orcid</idno>
<affiliation ref="#localStruct-affiliation"/>
<affiliation ref="#struct-affiliation"/>
</author>
<author role="aut">
<persName>
<forename type="first">Jane</forename>
<forename type="middle">Middle</forename>
<surname>Doe</surname>
</persName>
<email>email</email>
<idno type="http://orcid.org/">orcid</idno>
<affiliation ref="#localStruct-affiliationA"/>
<affiliation ref="#localStruct-affiliationB"/>
</author>
</titleStmt>
<editionStmt>
<edition>
<ref type="file" subtype="author" n="1" target="upload.pdf"/>
</edition>
</editionStmt>
<publicationStmt>
<availability>
<licence target="https://creativecommons.org/licenses//cc-by/"/>
</availability>
</publicationStmt>
<notesStmt>
<note type="audience" n="2"/>
<note type="invited" n="1"/>
<note type="popular" n="0"/>
<note type="peer" n="1"/>
<note type="proceedings" n="0"/>
<note type="commentary">small comment</note>
<note type="description">small description</note>
</notesStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">article</title>
<title xml:lang="fr">article</title>
<title type="sub" xml:lang="en">A subtitle</title>
<author role="aut">
<persName>
<forename type="first">John</forename>
<surname>Doe</surname>
</persName>
<email>email</email>
<idno type="http://orcid.org/">orcid</idno>
<affiliation ref="#localStruct-affiliation"/>
<affiliation ref="#struct-affiliation"/>
</author>
<author role="aut">
<persName>
<forename type="first">Jane</forename>
<forename type="middle">Middle</forename>
<surname>Doe</surname>
</persName>
<email>email</email>
<idno type="http://orcid.org/">orcid</idno>
<affiliation ref="#localStruct-affiliationA"/>
<affiliation ref="#localStruct-affiliationB"/>
</author>
</analytic>
<monogr>
<idno type="isbn">978-1725183483</idno>
<idno type="halJournalId">117751</idno>
<idno type="issn">xxx</idno>
<imprint>
<publisher>springer</publisher>
<biblScope unit="serie">a special collection</biblScope>
<biblScope unit="volume">20</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="pp">10-25</biblScope>
<date type="datePub">2024-01-01</date>
</imprint>
</monogr>
<series/>
<idno type="doi">reg</idno>
<idno type="arxiv">ger</idno>
<idno type="bibcode">erg</idno>
<idno type="ird">greger</idno>
<idno type="pubmed">greger</idno>
<idno type="ads">gaergezg</idno>
<idno type="pubmedcentral">gegzefdv</idno>
<idno type="irstea">vvxc</idno>
<idno type="sciencespo">gderg</idno>
<idno type="oatao">gev</idno>
<idno type="ensam">xcvcxv</idno>
<idno type="prodinra">vxcv</idno>
<ref type="publisher">https://publisher.com/ID</ref>
<ref type="seeAlso">https://link1.com/ID</ref>
<ref type="seeAlso">https://link2.com/ID</ref>
<ref type="seeAlso">https://link3.com/ID</ref>
</biblStruct>
</sourceDesc>
<profileDesc>
<textClass>
<keywords scheme="author">
<term xml:lang="en">keyword1</term>
<term xml:lang="en">keyword2</term>
<term xml:lang="fr">mot-clé1</term>
<term xml:lang="fr">mot-clé2</term>
</keywords>
<classCode scheme="halDomain" n="physics"/>
<classCode scheme="halDomain" n="halDomain2"/>
<classCode scheme="halTypology" n="ART"/>
</textClass>
</profileDesc>
</biblFull>
</listBibl>
</body>
<back>
<listOrg type="structures">
<org type="institution" xml:id="localStruct-affiliation">
<orgName>laboratory for MC, university of Yeah</orgName>
<orgName type="acronym">LMC</orgName>
<desc>
<address>
<addrLine>Blue street 155, 552501 Olso, Norway</addrLine>
<country key="LS">Lesotho</country>
</address>
<ref type="url" target="https://lmc.univ-yeah.com"/>
</desc>
</org>
<org type="institution" xml:id="localStruct-affiliationB">
<orgName>laboratory for MCL, university of Yeah</orgName>
<orgName type="acronym">LMCL</orgName>
<desc>
<address>
<addrLine>Blue street 155, 552501 Olso, Norway</addrLine>
<country key="NO">Norway</country>
</address>
<ref type="url" target="https://lmcl.univ-yeah.com"/>
</desc>
</org>
</listOrg>
</back>
</text>
</TEI></code></pre>
</div>
</div>
</p>
|
<python><xml><xsd><lxml>
|
2024-02-01 14:19:37
| 1
| 609
|
Guuk
|
77,920,700
| 13,518,907
|
Langserve Streaming with Llamacpp
|
<p>I have built a RAG app with Llamacpp and Langserve, and it generally works. However, I can't find a way to stream my responses, which would be very important for the application. Here is my code:</p>
<pre><code>from langchain_community.vectorstores.pgvector import PGVector
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.runnables import RunnableParallel
import os
from langchain_community.embeddings import HuggingFaceBgeEmbeddings, HuggingFaceEmbeddings
import box
import yaml
from langchain_community.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from operator import itemgetter
from typing import TypedDict
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from fastapi.middleware.cors import CORSMiddleware
from starlette.staticfiles import StaticFiles

with open('./config/config.yml', 'r', encoding='utf8') as ymlfile:
    cfg = box.Box(yaml.safe_load(ymlfile))


def build_llm(model_path, temperature=cfg.RAG_TEMPERATURE, max_tokens=cfg.MAX_TOKENS, callback=StreamingStdOutCallbackHandler()):
    callback_manager = CallbackManager([callback])
    n_gpu_layers = 1  # Metal set to 1 is enough; tried with several values
    n_batch = 512  # should be between 1 and n_ctx; consider the amount of RAM of your Apple Silicon chip
    llm = LlamaCpp(
        max_tokens=max_tokens,
        n_threads=8,  # for performance
        model_path=model_path,
        temperature=temperature,
        f16_kv=True,
        n_ctx=15000,  # 8k, but room must be left for instruction, history, etc.
        n_gpu_layers=n_gpu_layers,
        n_batch=n_batch,
        callback_manager=callback_manager,
        verbose=True,  # verbose is required to pass to the callback manager
        top_p=0.75,
        top_k=40,
        repeat_penalty=1.1,
        streaming=True,
        model_kwargs={
            # 'repetition_penalty': 1.1,
            # 'mirostat': 2,
        },
    )
    return llm


embeddings = HuggingFaceEmbeddings(model_name=cfg.EMBEDDING_MODEL_NAME,
                                   model_kwargs={'device': 'mps'})

PG_COLLECTION_NAME = "PGVECTOR_BKB"
model_path = "./modelle/sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf"
CONNECTION_STRING = "MY_CONNECTION_STRING"

vector_store = PGVector(
    collection_name=PG_COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings
)

prompt = """
<s> [INST] Du bist RagBot, ein hilfsbereiter Assistent. Antworte nur auf Deutsch. Verwende die folgenden Kontextinformationen, um die Frage am Ende knapp zu beantworten. Wenn du die Antwort nicht kennst, sag einfach, dass du es nicht weisst. Erfinde keine Antwort! Falls der Nutzer allgemeine Fragen stellt, führe Smalltalk mit Ihm.
### Hier der Kontext: ###
{context}
### Hier die Frage: ###
{question}
Antwort: [/INST]
"""


def model_response_prompt():
    return PromptTemplate(template=prompt, input_variables=['input', 'typescript_string'])


prompt_temp = model_response_prompt()
llm = build_llm(model_path, temperature=cfg.NO_RAG_TEMPERATURE, max_tokens=cfg.NO_RAG_MAX_TOKENS)


class RagInput(TypedDict):
    question: str


final_chain = (
    RunnableParallel(
        context=(itemgetter("question") | vector_store.as_retriever()),
        question=itemgetter("question")
    ) |
    RunnableParallel(
        answer=(prompt_temp | llm),
        docs=itemgetter("context")
    )
).with_types(input_type=RagInput)

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://localhost:3000"
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# app.mount("/rag/static", StaticFiles(directory="./source_docs"), name="static")


@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")


# Edit this to add the chain you want to add
add_routes(app, final_chain, path="/rag")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p>So if I try it out in the server playground (<a href="http://0.0.0.0:8000/rag/playground" rel="nofollow noreferrer">http://0.0.0.0:8000/rag/playground</a>), the response is not streamed but returned only once the completion is done. I only see the streaming in my terminal.</p>
<p>Does anyone have suggestions for what I need to change?</p>
|
<python><langchain><large-language-model><llama-cpp-python>
|
2024-02-01 13:49:59
| 0
| 565
|
Maxl Gemeinderat
|
77,920,479
| 1,467,533
|
How to specify bounds and variable length in python union types?
|
<p>I have a type that has subclasses. E.g., <code>Event</code>, with classes <code>Event1(Event)</code>, <code>Event2(Event)</code>, etc. I want to specify a type annotation for the argument to a function that accepts any union of subclasses of <code>Event</code>. Concretely, I want all of the following to type check:</p>
<pre class="lang-py prettyprint-override"><code>listen_for_events(Event1)
listen_for_events(Event1 | Event3)
listen_for_events(Event1 | Event2 | Event3)
</code></pre>
<p>while the following should <em>not</em> type check:</p>
<pre class="lang-py prettyprint-override"><code>listen_for_events(str | int)
</code></pre>
<p>How do I specify the type on the argument to <code>listen_for_events</code>? I can do this with <code>tuple</code> (<code>tuple[type[Event], ...]</code>), but I don't appear to be able to do something similar with <code>Union</code>. Is there a way to correctly type this?</p>
|
<python><python-typing>
|
2024-02-01 13:15:05
| 1
| 1,855
|
mattg
|
77,920,475
| 2,410,605
|
selenium python how do i convert a string into time.strftime() date?
|
<p>I'm reading a string from an HTML table that represents a date: Feb 1, 2024 87:05:23 AM. How do I convert this to a true date variable so that I can compare it to the current time? Below is the code I'm using; it doesn't produce any errors, but it also isn't working: when it gets to the comparison, it never enters the <code>if</code> statement.</p>
<pre><code># 13. Start the recovery by pressing the "Start Recovery" button
try:
    logger.info("Step 13 Started: Start the recovery by pressing the 'Start Recovery' button.")
    currentTime = time.strftime("%b %d, %Y %I:%M:%S %p")
    WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, '//span[contains(text(), "Start recovery")]'))).click()
    backup1_id = ''
    backup2_id = ''
    tableRows = WebDriverWait(browser, 10).until(EC.visibility_of_all_elements_located((By.XPATH, '//table[@id="cdk-drop-list-10"]//tbody//following::tr')))
    for row in tableRows:
        backupTime = time.strftime(row.find_element(By.CSS_SELECTOR, 'td.cdk-column-startTime').text)
        if backupTime <= currentTime:
            if backup1_id == '':
                backup1_id = row.find_element(By.CSS_SELECTOR, 'td.cdk-column-id').text
                continue
            if backup1_id != '' and backup2_id == '':
                backup2_id = row.find_element(By.CSS_SELECTOR, 'td.cdk-column-id').text
                break
</code></pre>
<p><a href="https://i.sstatic.net/60fDx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/60fDx.png" alt="enter image description here" /></a></p>
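<p>For reference, a sketch of the parsing step: <code>time.strftime</code> only <em>formats</em> an existing time, while <code>datetime.strptime</code> <em>parses</em> a string into a comparable datetime object (the cell text below is a made-up example in the same format as the table column):</p>

```python
from datetime import datetime

# Hypothetical cell text in the same "%b %d, %Y %I:%M:%S %p" layout
cell_text = "Feb 1, 2024 07:05:23 AM"

backup_time = datetime.strptime(cell_text, "%b %d, %Y %I:%M:%S %p")
current_time = datetime.now()

# Real datetime objects compare chronologically, unlike formatted strings
if backup_time <= current_time:
    print("backup is in the past")
```

<p>Comparing the <code>datetime</code> objects avoids the pitfall of comparing formatted strings lexicographically.</p>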
|
<python><selenium-webdriver>
|
2024-02-01 13:14:46
| 1
| 657
|
JimmyG
|
77,920,297
| 19,838,445
|
How to properly annotate pydantic fields if it's another class
|
<p>I have a model whose fields hold other Pydantic model classes (not instances of them), so I can use them later as builders.</p>
<pre class="lang-py prettyprint-override"><code>class ItemModel(BaseModel):
    id: int


class UserModel(BaseModel):
    name: str


class MatchModels(BaseModel):
    name: str
    user_model: Optional[type(BaseModel)] = None
    item_model: Optional[type(BaseModel)] = None


m = MatchModels(
    name="match only user",
    user_model=UserModel,
)
</code></pre>
<p>Is it correct to use <code>type(BaseModel)</code> here if I want to instantiate it later somewhere in the code? What would the proper type annotation be here?</p>
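<p>For what it's worth, the usual spelling is the annotation <code>type[BaseModel]</code> (a class object, i.e. <code>BaseModel</code> or a subclass) rather than the call <code>type(BaseModel)</code>, which evaluates to the metaclass. A dependency-free sketch of the same pattern, with plain classes standing in for the Pydantic models:</p>

```python
from typing import Optional


class BaseModel:  # stand-in for pydantic.BaseModel
    pass


class UserModel(BaseModel):
    def __init__(self, name: str):
        self.name = name


class MatchModels:
    # type[BaseModel] means "the class itself (or a subclass), not an instance"
    def __init__(self, name: str, user_model: Optional[type[BaseModel]] = None):
        self.name = name
        self.user_model = user_model


m = MatchModels(name="match only user", user_model=UserModel)
user = m.user_model(name="Alice")  # instantiated later, as a builder
```

<p>The same <code>Optional[type[BaseModel]]</code> annotation should carry over to the Pydantic model field in the question.</p>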
|
<python><python-3.x><pydantic><pydantic-v2>
|
2024-02-01 12:45:26
| 1
| 720
|
GopherM
|
77,920,228
| 6,042,172
|
jupyter notebook export with embedded CSV files
|
<p>I need to share a Jupyter notebook, a really simple interactive app using pandas, plotly and dash, with some colleagues who don't have Python installed.</p>
<p>If my notebook uses no csv files, it seems to work well.</p>
<p>But, is there any way to export in html format with somewhat embedded data in csv files?</p>
<p>Or any feasible workaround?</p>
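<p>One workaround I can think of (a sketch; the data below is made up) is to paste the CSV text straight into a notebook cell and parse it from a string, so the exported HTML has no external file dependency:</p>

```python
import csv
import io

# Hypothetical CSV content inlined into the notebook instead of read from disk
csv_text = """name,value
a,1
b,2
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows)  # [{'name': 'a', 'value': '1'}, {'name': 'b', 'value': '2'}]
```

<p>With pandas the same idea would be <code>pd.read_csv(io.StringIO(csv_text))</code>.</p>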
<p>Thank you SO much!</p>
|
<python><csv><jupyter-notebook>
|
2024-02-01 12:34:11
| 0
| 908
|
glezo
|
77,920,171
| 1,473,517
|
How can I return the subarrays in my max k-subarray sum code?
|
<p>Consider an array A of integers of length n. The k-max subarray sum asks us to find up to k (contiguous) non-overlapping subarrays of A with maximum sum. If A is all negative then this sum will be 0. If A = [-1, 2, -1, 2, -1, 2, 2] for example, then the two subarrays could be [2, -1, 2] and [2, 2] with total sum 7.</p>
<p>The following code is my extension of <a href="https://en.wikipedia.org/wiki/Maximum_subarray_problem" rel="nofollow noreferrer">Kadane's algorithm</a> and runs in O(nk) time, which might be optimal.</p>
<pre><code>import numpy as np


def solve_SO(test_seq, k=2):
    """
    Computes the k max subarray sum
    """
    num_intervals = k * 2 + 1
    best = np.zeros(num_intervals, dtype=int)
    for seq_idx, val in enumerate(test_seq):
        # Add the current value to all the "include" interval best scores
        for interval_idx in range(1, num_intervals, 2):
            best[interval_idx] += val
        # Go over all intervals from the first include one.
        # If we were better off without, update the best score to be the best
        # from the previous interval. This makes best monotonic.
        # The final value is the current best score overall
        for interval_idx in range(1, num_intervals):
            if best[interval_idx] < best[interval_idx - 1]:
                best[interval_idx] = best[interval_idx - 1]
    return best[num_intervals - 1]
</code></pre>
<p>We can test it with:</p>
<pre><code>In [22]: solve_SO([-1, 2, -1, 2, -1], 2)
Out[22]: 4

In [23]: solve_SO([-1, 2, -1, 2, -1], 1)
Out[23]: 3
</code></pre>
<p>I can't work out how to return the subarrays that are being used to form the final sum. In the first case I would like to output <code>[(1, 2), (3, 4)]</code> and in the second case <code>[(1, 4)]</code> to show the subarrays as interval pairs over the original array.</p>
<p>I realise that this should be doable by storing something after:</p>
<pre><code> if best[interval_idx] < best[interval_idx - 1]:
best[interval_idx] = best[interval_idx - 1]
</code></pre>
<p>but I can't work out how to do it correctly.</p>
<p>How can my code be modified to do that?</p>
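<p>One approach (a sketch I put together, using O(nk) extra memory): record at each step whether a state was overwritten by the previous state, then backtrack through those flags. Odd states mean "the current element is inside an interval", so a copy into an odd state closes the interval. It returns the sum plus half-open <code>(start, end)</code> pairs; note that ties can make it pick a different, equally optimal decomposition than the one you expect.</p>

```python
import numpy as np


def solve_with_intervals(test_seq, k=2):
    """k-max subarray sum plus half-open intervals achieving it."""
    n = len(test_seq)
    num_intervals = k * 2 + 1
    best = np.zeros(num_intervals, dtype=int)
    # copied[i][j] is True when state j was overwritten by state j-1 at step i
    copied = np.zeros((n, num_intervals), dtype=bool)
    for seq_idx, val in enumerate(test_seq):
        for interval_idx in range(1, num_intervals, 2):
            best[interval_idx] += val
        for interval_idx in range(1, num_intervals):
            if best[interval_idx] < best[interval_idx - 1]:
                best[interval_idx] = best[interval_idx - 1]
                copied[seq_idx][interval_idx] = True
    # Backtrack from the last state at the last index
    intervals = []
    i, j, end = n - 1, num_intervals - 1, None
    while i >= 0 and j > 0:
        if copied[i][j]:
            if j % 2 == 1 and end is not None:
                intervals.append((i + 1, end))  # interval starts just after i
                end = None
            j -= 1
        else:
            if j % 2 == 1 and end is None:
                end = i + 1  # exclusive right endpoint of a new interval
            i -= 1
    if end is not None:
        intervals.append((0, end))  # interval reaches the start of the array
    intervals.reverse()
    return int(best[num_intervals - 1]), intervals
```

<p>On the examples above this yields <code>(4, [(1, 2), (3, 4)])</code> for k=2 and <code>(3, [(1, 4)])</code> for k=1, and an empty list for all-negative input.</p>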
|
<python><algorithm>
|
2024-02-01 12:26:26
| 1
| 21,513
|
Simd
|
77,920,123
| 13,000,695
|
Using starlette TestClient causes an AttributeError : '_UnixSelectorEventLoop' object has no attribute '_compute_internal_coro'
|
<p>Using FastAPI <code>0.101.1</code>.</p>
<p>I run this <code>test_read_aynsc</code> test and it passes.</p>
<pre class="lang-py prettyprint-override"><code># app.py
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"Hello": "World"}


# conftest.py
import pytest
from typing import Generator
from fastapi.testclient import TestClient

from server import app


@pytest.fixture(scope="session")
def client() -> Generator:
    with TestClient(app) as c:
        yield c


# test_root.py
def test_read_aynsc(client):
    response = client.get("/item")
</code></pre>
<p>However, executing this test in debug mode (in PyCharm) causes an error. Here is the traceback:</p>
<pre class="lang-bash prettyprint-override"><code>test setup failed
cls = <class 'anyio._backends._asyncio.AsyncIOBackend'>
func = <function start_blocking_portal.<locals>.run_portal at 0x1555c51b0>
args = (), kwargs = {}, options = {}
@classmethod
def run(
cls,
func: Callable[[Unpack[PosArgsT]], Awaitable[T_Retval]],
args: tuple[Unpack[PosArgsT]],
kwargs: dict[str, Any],
options: dict[str, Any],
) -> T_Retval:
@wraps(func)
async def wrapper() -> T_Retval:
task = cast(asyncio.Task, current_task())
task.set_name(get_callable_name(func))
_task_states[task] = TaskState(None, None)
try:
return await func(*args)
finally:
del _task_states[task]
debug = options.get("debug", False)
loop_factory = options.get("loop_factory", None)
if loop_factory is None and options.get("use_uvloop", False):
import uvloop
loop_factory = uvloop.new_event_loop
with Runner(debug=debug, loop_factory=loop_factory) as runner:
> return runner.run(wrapper())
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:1991:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:193: in run
return self._loop.run_until_complete(task)
../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:202: in run_until_complete
self._run_once()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_UnixSelectorEventLoop running=False closed=True debug=False>
def _run_once(self):
"""
Simplified re-implementation of asyncio's _run_once that
runs handles as they become ready.
"""
ready = self._ready
scheduled = self._scheduled
while scheduled and scheduled[0]._cancelled:
heappop(scheduled)
timeout = (
0 if ready or self._stopping
else min(max(
scheduled[0]._when - self.time(), 0), 86400) if scheduled
else None)
event_list = self._selector.select(timeout)
self._process_events(event_list)
end_time = self.time() + self._clock_resolution
while scheduled and scheduled[0]._when < end_time:
handle = heappop(scheduled)
ready.append(handle)
> if self._compute_internal_coro:
E AttributeError: '_UnixSelectorEventLoop' object has no attribute '_compute_internal_coro'
../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:236: AttributeError
During handling of the above exception, another exception occurred:
@pytest.fixture(scope="session")
def client() -> Generator:
> with TestClient(app) as c:
tests/fixtures/common/http_client_app.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/starlette/testclient.py:730: in __enter__
self.portal = portal = stack.enter_context(
../../../.pyenv/versions/3.10.12/lib/python3.10/contextlib.py:492: in enter_context
result = _cm_type.__enter__(cm)
../../../.pyenv/versions/3.10.12/lib/python3.10/contextlib.py:135: in __enter__
return next(self.gen)
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/from_thread.py:454: in start_blocking_portal
run_future.result()
../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
../../../.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_core/_eventloop.py:73: in run
return async_backend.run(func, args, {}, backend_options)
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:1990: in run
with Runner(debug=debug, loop_factory=loop_factory) as runner:
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:133: in __exit__
self.close()
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:141: in close
_cancel_all_tasks(loop)
../../../Library/Caches/pypoetry/virtualenvs/kms-backend-F9vGicV3-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py:243: in _cancel_all_tasks
loop.run_until_complete(tasks.gather(*to_cancel, return_exceptions=True))
../../../Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/233.13763.11/PyCharm.app/Contents/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py:202: in run_until_complete
self._run_once()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_UnixSelectorEventLoop running=False closed=True debug=False>
def _run_once(self):
"""
Simplified re-implementation of asyncio's _run_once that
runs handles as they become ready.
"""
ready = self._ready
scheduled = self._scheduled
while scheduled and scheduled[0]._cancelled:
heappop(scheduled)
timeout = (
0 if ready or self._stopping
else min(max(
scheduled[0]._when - self.time(), 0), 86400) if scheduled
else None)
event_list = self._selector.select(timeout)
self._process_events(event_list)
end_time = self.time() + self._clock_resolution
while scheduled and scheduled[0]._when < end_time:
handle = heappop(scheduled)
ready.append(handle)
> if self._compute_internal_coro:
E AttributeError: '_UnixSelectorEventLoop' object has no attribute '_compute_internal_coro'
</code></pre>
<p>I am not sure I understand what causes the error. Since the traceback shows <code>_UnixSelectorEventLoop</code>, I should mention that my operating system is macOS (Apple M1).</p>
|
<python><pycharm><python-asyncio><fastapi><starlette>
|
2024-02-01 12:18:03
| 1
| 4,032
|
jossefaz
|
77,920,104
| 1,473,517
|
How to speed up the code for k-max subarray sum
|
<p>Consider an array A of integers of length n. The k-max subarray sum asks us to find up to k (contiguous) non-overlapping subarrays of A with maximum sum. If A is all negative then this sum will be 0. If A = [-1, 2, -1, 2, -1, 2, 2] for example, then the two subarrays could be [2, -1, 2] and [2, 2] with total sum 7.</p>
<p>The following code is my extension of <a href="https://en.wikipedia.org/wiki/Maximum_subarray_problem" rel="nofollow noreferrer">Kadane's algorithm </a> and runs in O(nk) time, which might be optimal.</p>
<pre><code>import numpy as np


def solve_SO(test_seq, k=2):
    """
    Computes the k max subarray sum
    """
    num_intervals = k * 2 + 1
    best = np.zeros(num_intervals, dtype=int)
    for seq_idx, val in enumerate(test_seq):
        # Add the current value to all the "include" interval best scores
        for interval_idx in range(1, num_intervals, 2):
            best[interval_idx] += val
        # Go over all intervals from the first include one.
        # If we were better off without, update the best score to be the best
        # from the previous interval. This makes best monotonic.
        # The final value is the current best score overall
        for interval_idx in range(1, num_intervals):
            if best[interval_idx] < best[interval_idx - 1]:
                best[interval_idx] = best[interval_idx - 1]
    return best[num_intervals - 1]
</code></pre>
<p>We can test it with:</p>
<pre><code>In [22]: solve_SO([-1, 2, -1, 2, -1], 2)
Out[22]: 4

In [23]: solve_SO([-1, 2, -1, 2, -1], 1)
Out[23]: 3
</code></pre>
<p>But my code seems inefficient. If k=3 for example, it makes an auxiliary array of length 7 and for every index in the input array it goes through the entire auxiliary array from index 1 to the end.</p>
<p>Can the code be made faster/more efficient when k is reasonably large?</p>
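<p>One observation (my own sketch, same O(nk) work but with the inner loops vectorized): adding <code>val</code> to the odd "include" states is a strided add, and the monotonic sweep is exactly a running maximum, so <code>np.maximum.accumulate</code> reproduces it in one call per element:</p>

```python
import numpy as np


def solve_vectorized(test_seq, k=2):
    """Same recurrence as solve_SO, with both inner loops vectorized."""
    best = np.zeros(2 * k + 1, dtype=np.int64)
    for val in test_seq:
        best[1::2] += val                       # add val to every "include" state
        best = np.maximum.accumulate(best)      # the monotonic sweep as a cumulative max
    return int(best[-1])
```

<p>The asymptotic complexity is unchanged, but for large k the per-element cost moves from Python-level loops into numpy; I have not benchmarked it against the original.</p>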
|
<python><algorithm><performance><optimization>
|
2024-02-01 12:15:08
| 0
| 21,513
|
Simd
|