QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
75,874,658
| 2,098,831
|
How to display inlay type hints in Pylance that use (python) aliases?
|
<p>I'm using pylance/pyright in VSCode and have activated the following option:</p>
<pre><code>"python.analysis.inlayHints.variableTypes": true
</code></pre>
<p>This causes the IDE to suggest type hints in the code. However, the suggested hints do not take into account that I have imported libraries under aliases. For example:</p>
<p><a href="https://i.sstatic.net/6VQPJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6VQPJ.png" alt="enter image description here" /></a></p>
<p>In the end, if I "double click to insert" the type hint, the inserted name is not defined:</p>
<p><a href="https://i.sstatic.net/0SbAy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0SbAy.png" alt="enter image description here" /></a></p>
<p>I would expect pylance to insert <code>pd.DataFrame</code> instead of just <code>DataFrame</code>.</p>
<p>How can I fix that?</p>
<p>Thanks</p>
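<p>Until Pylance inserts alias-qualified names, one workaround (a sketch assuming the pandas case from the screenshot) is to import the bare name alongside the alias, so the hint Pylance inserts resolves:</p>

```python
import pandas as pd
from pandas import DataFrame  # makes the bare "DataFrame" name Pylance inserts resolvable

df: DataFrame = pd.DataFrame({"a": [1, 2]})  # the inserted hint is no longer undefined
```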
|
<python><python-typing><pylance>
|
2023-03-29 08:08:09
| 1
| 5,135
|
Carmellose
|
75,874,626
| 11,913,986
|
merge two dfs based on lookup of a common column keeping duplicates
|
<p>I have df1 as:</p>
<pre><code>import pandas as pd
index = [0, 1, 2, 3]
columns = ['col0', 'col1']
data = [['A', 'D'],
        ['B', 'E'],
        ['C', 'F'],
        ['A', 'D']
        ]
df1 = pd.DataFrame(data, index, columns)
col0 col1
0 A D
1 B E
2 C F
3 A D
</code></pre>
<p>I have df2 as:</p>
<pre><code>index = [0, 1]
columns = ['col1', 'col2', 'col3']
data = [['D', 'XX', 'YY'],
        ['E', 'XXX', 'YYY']
        ]
df2 = pd.DataFrame(data, index, columns)
col1 col2 col3
0 D XX YY
1 E XXX YYY
</code></pre>
<p>df1 and df2 have different lengths, and df1 has many duplicate rows. I want to look up values based on col1 and fetch the results of the other columns from df2.</p>
<p>Result df3 should look like:</p>
<pre><code>index = [0, 1, 2, 3]
columns = ['col0', 'col1', 'col2', 'col3']
data = [['A', 'D', 'XX', 'YY'],
        ['B', 'E', 'XXX', 'YYY'],
        ['C', 'F', 'nan', 'nan'],
        ['A', 'D', 'XX', 'YY']
        ]
df3 = pd.DataFrame(data, index, columns)
col0 col1 col2 col3
0 A D XX YY
1 B E XXX YYY
2 C F nan nan
3 A D XX YY
</code></pre>
<p>If df1 and df2 had the same length, then this works for me:</p>
<pre><code>df3 = pd.merge(df1, df2, left_on=["col0", "col1"], right_index=True, how="left")
</code></pre>
<p>Whenever there is a match on col1, I want to populate the rest of the columns for all the duplicates, no matter how many there are; where there is no match, the result should be NaN.</p>
<p>I could always use <code>for id, row in df.iterrows()</code>, but that's not going to scale for my case: I have 141,000 rows in df1.</p>
<p>Open to Pyspark solutions as well.</p>
<p>Thanks in advance.</p>
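<p>For reference, a plain left merge keyed on <code>col1</code> (rather than the index) seems to produce the desired df3; a sketch on the sample frames above:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"col0": ["A", "B", "C", "A"], "col1": ["D", "E", "F", "D"]})
df2 = pd.DataFrame({"col1": ["D", "E"], "col2": ["XX", "XXX"], "col3": ["YY", "YYY"]})

# A left merge keys on col1 values, repeats the match for every duplicate
# row in df1, and leaves NaN where df2 has no matching col1.
df3 = df1.merge(df2, on="col1", how="left")
```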
|
<python><pandas><database><dataframe><pyspark>
|
2023-03-29 08:04:21
| 2
| 739
|
Strayhorn
|
75,874,549
| 19,106,705
|
Remove for loop in pytorch
|
<p>I want to remove the <code>for</code> loops from my PyTorch code.</p>
<p>I wrote the code below, but it is clumsy, and it does not use the GPU well either.</p>
<pre class="lang-py prettyprint-override"><code>for i in range(idx.shape[0]):
    for j in range(idx.shape[2]):
        for k in range(idx.shape[3]):
            for x in range(idx.shape[4]):
                for y in range(idx.shape[5]):
                    default_mask[i, idx[i,0,j,k,x,y], j, k, x, y] = 0
</code></pre>
<p>And my question is:</p>
<ol>
<li>How can I remove the for loops from this code?</li>
<li>Is there a general way to get rid of such loops?</li>
</ol>
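<p>The loop writes one zero along dimension 1 per index tuple, which is exactly a scatter. A NumPy sketch of the idea (the same pattern should transfer to PyTorch as <code>default_mask.scatter_(1, idx, 0)</code>, though that line is untested here):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
default_mask = np.ones((2, 4, 3, 3, 2, 2))
idx = rng.integers(0, 4, size=(2, 1, 3, 3, 2, 2))

# One vectorized write along axis 1 replaces all five nested loops:
np.put_along_axis(default_mask, idx, 0, axis=1)
```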
|
<python><pytorch>
|
2023-03-29 07:55:34
| 1
| 870
|
core_not_dumped
|
75,874,491
| 9,798,210
|
How to add the text to the side of the line using matplotlib?
|
<p>I have code which generates a line using matplotlib; below is the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

plt.figure(1100)
x = np.array([1, 1])
y = np.array([0, 3])
plt.plot(x, y, 'k', linewidth=5)
plt.text(x[-1], y[-1] + 0.5, '(t)', ha='center')
plt.show()
</code></pre>
<p>Below is the output of the above code
<a href="https://i.sstatic.net/GhZNv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GhZNv.png" alt="enter image description here" /></a></p>
<p>I need to add the text to the side of the line like below</p>
<p><a href="https://i.sstatic.net/2Bowx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Bowx.png" alt="enter image description here" /></a></p>
<p>How to achieve this using matplotlib?</p>
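<p>One option (a sketch; the exact offset is a guess to match the screenshot) is to place the text at the line's midpoint, nudged sideways, with <code>rotation=90</code> so it runs along the vertical line:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.array([1, 1])
y = np.array([0, 3])
fig, ax = plt.subplots()
ax.plot(x, y, "k", linewidth=5)
# Midpoint of the line, shifted slightly to the right, rotated to follow it:
label = ax.text(x[1] + 0.05, y.mean(), "(t)", rotation=90, va="center")
```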
|
<python><python-3.x><matplotlib>
|
2023-03-29 07:48:22
| 1
| 1,835
|
merkle
|
75,874,453
| 2,666,289
|
Typing unpacking of a class
|
<p>When using a tuple in Python and unpacking, types can be automatically deduced by type checkers (mypy/pyright):</p>
<pre class="lang-py prettyprint-override"><code>t = (33, "hello")
a, b = t # a, b correctly deduced by both mypy and pyright
</code></pre>
<p>How can I achieve the same with a custom class (with <code>__iter__</code> or something else)?</p>
<pre class="lang-py prettyprint-override"><code>class T:
    def __init__(self):
        self.a = 33
        self.b = "hello"

    def __iter__(self):
        return iter((self.a, self.b))

t = T()
a, b = t
</code></pre>
<p>In the above example, <code>mypy</code> deduces <code>Any</code> and pyright deduces <code>int | str</code> for both <code>a</code> and <code>b</code>.</p>
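<p>If restructuring the class is acceptable, a <code>typing.NamedTuple</code> gives both checkers exact per-position types on unpacking (a sketch of the same class):</p>

```python
from typing import NamedTuple

class T(NamedTuple):
    a: int = 33
    b: str = "hello"

t = T()
a, b = t  # mypy and pyright both deduce a: int, b: str
```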
|
<python><python-typing>
|
2023-03-29 07:43:03
| 1
| 38,048
|
Holt
|
75,874,357
| 2,971,574
|
Pyspark: Count number of True values per row
|
<p>Working in databricks, I've got a dataframe which looks like this:</p>
<pre><code>columns = ["a", "b", "c"]
data = [(True, True, True), (True, True, True), (True, False, True)]
df = spark.createDataFrame(data).toDF(*columns)
df.display()
</code></pre>
<p><a href="https://i.sstatic.net/ohY9h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ohY9h.png" alt="enter image description here" /></a></p>
<p>I'd like to create a new column "number_true_values" that contains the number of True values per row. Unfortunately, one does not seem to be able to just sum up True and False values in pyspark like in pandas. The code</p>
<pre><code>import pyspark.sql.functions as F
df.withColumn('number_true_values', sum([F.col(column) for column in df.columns]))
</code></pre>
<p>throws the exception <code>AnalysisException: [DATATYPE_MISMATCH.BINARY_OP_DIFF_TYPES] Cannot resolve "(a + 0)" due to data type mismatch: the left and right operands of the binary operator have incompatible types ("BOOLEAN" and "INT").;</code></p>
<p>If I had a dataframe that contains numbers instead like the following...</p>
<pre><code>columns = ["a", "b", "c"]
data = [(1, 0, 1), (1, 0, 0), (1, 1, 1)]
df = spark.createDataFrame(data).toDF(*columns)
df.display()
</code></pre>
<p><a href="https://i.sstatic.net/qFADk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qFADk.png" alt="enter image description here" /></a></p>
<p>... the syntax from above would work and return the desired result:
<a href="https://i.sstatic.net/3n8gV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3n8gV.png" alt="enter image description here" /></a></p>
<p>How do I count the number of True values per row in databricks?</p>
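<p>The pandas trick of summing booleans works in Spark too once each column is cast to an integer first. The idea is demonstrated below with pandas (runnable anywhere); the PySpark equivalent is sketched in comments, untested here since it needs a Spark session:</p>

```python
import pandas as pd

pdf = pd.DataFrame({"a": [True, True, True],
                    "b": [True, True, False],
                    "c": [True, True, True]})
# Casting booleans to ints makes the row-wise sum a count of True values:
pdf["number_true_values"] = pdf[["a", "b", "c"]].astype(int).sum(axis=1)

# PySpark sketch: cast each boolean column to int before Python's sum()
# builds the combined column expression.
# import pyspark.sql.functions as F
# df = df.withColumn(
#     "number_true_values",
#     sum(F.col(c).cast("int") for c in df.columns),
# )
```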
|
<python><pyspark><databricks><azure-databricks>
|
2023-03-29 07:31:29
| 2
| 555
|
the_economist
|
75,874,348
| 6,214,197
|
Python: Creating fast api request with valid custom headers
|
<p>I was struggling to find out how to create a FastAPI <code>Request</code> object with custom headers for testing.
I spent around 3 hours searching for it, so I'm adding it here to save you some time.</p>
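<p>Since the answer itself is not included above, here is a sketch of the usual approach: a FastAPI/Starlette <code>Request</code> wraps a raw ASGI scope, so for tests one can be built from a plain dict. The FastAPI lines are commented out and untested here; the decoding at the end shows what <code>request.headers</code> does with the byte pairs:</p>

```python
# Headers in an ASGI scope are (name, value) byte pairs, lower-cased, latin-1 encoded.
scope = {
    "type": "http",
    "method": "GET",
    "path": "/items",
    "headers": [(b"x-api-key", b"secret"), (b"accept", b"application/json")],
}

# Untested sketch: Starlette's Request accepts the scope directly.
# from fastapi import Request
# request = Request(scope)
# request.headers["x-api-key"]  -> "secret"

# What Request.headers does under the hood:
headers = {k.decode("latin-1"): v.decode("latin-1") for k, v in scope["headers"]}
```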
|
<python><http><testing><request><fastapi>
|
2023-03-29 07:30:31
| 1
| 957
|
Shubham Arora
|
75,874,050
| 2,971,574
|
Check whether boolean column contains only True values
|
<p>Working in Databricks, I've got a dataframe which looks like this:</p>
<pre><code>columns = ["a", "b", "c"]
data = [(True, True, True), (True, True, True), (True, False, True)]
df = spark.createDataFrame(data).toDF(*columns)
df.display()
</code></pre>
<p><a href="https://i.sstatic.net/OuHdw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OuHdw.png" alt="enter image description here" /></a></p>
<p>I'd like to select only those columns of the dataframe in which not all values are True.<br />
In pandas, I would use <code>df['a'].all()</code> to check whether all values of column "a" are True. Unfortunately, I can't find an equivalent in PySpark.
I have found a solution to the problem, but it seems much too complicated:</p>
<pre><code>df.select(*[column for column in df.columns
            if df.select(column).distinct().collect() !=
            spark.createDataFrame([True], 'boolean').toDF(column).collect()])
</code></pre>
<p>The solution returns what I want:</p>
<p><a href="https://i.sstatic.net/QRlPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QRlPN.png" alt="enter image description here" /></a></p>
<p>Is there an easier way of doing this in PySpark?</p>
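<p>For comparison, the pandas version is a one-liner, and a cheaper PySpark route (sketched in comments, untested here since it needs a Spark session) is a single aggregation of <code>min</code> per boolean column, because the minimum of a boolean column is False exactly when not all values are True:</p>

```python
import pandas as pd

pdf = pd.DataFrame({"a": [True, True, True],
                    "b": [True, True, False],
                    "c": [True, True, True]})
not_all_true = pdf.loc[:, ~pdf.all()]  # keeps only columns containing a False

# PySpark sketch: one job instead of one collect() per column.
# import pyspark.sql.functions as F
# flags = df.agg(*[F.min(c).alias(c) for c in df.columns]).first().asDict()
# df.select([c for c, all_true in flags.items() if not all_true])
```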
|
<python><apache-spark><pyspark><databricks><azure-databricks>
|
2023-03-29 06:57:59
| 3
| 555
|
the_economist
|
75,874,043
| 11,479,825
|
Convert specific list of strings cells into multiple rows and keep the other columns
|
<p>I have pandas dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>label</th>
<th>pred</th>
<th>gt</th>
</tr>
</thead>
<tbody>
<tr>
<td>label1</td>
<td>val1</td>
<td>val11</td>
</tr>
<tr>
<td>label2</td>
<td>['str1', 'str2']</td>
<td>['str1', 'str3', 'str4']</td>
</tr>
<tr>
<td>label3</td>
<td>foo</td>
<td>box</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to convert the label2 row, where I have lists of strings or a None value, into multiple rows (one row per list element):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>label</th>
<th>pred</th>
<th>gt</th>
</tr>
</thead>
<tbody>
<tr>
<td>label1</td>
<td>val1</td>
<td>val11</td>
</tr>
<tr>
<td>label2</td>
<td>'str1'</td>
<td>'str1'</td>
</tr>
<tr>
<td>label2</td>
<td>'str2'</td>
<td>'str3'</td>
</tr>
<tr>
<td>label2</td>
<td>None</td>
<td>'str4'</td>
</tr>
<tr>
<td>label3</td>
<td>foo</td>
<td>box</td>
</tr>
</tbody>
</table>
</div>
<p>I have used <code>explode()</code> for this purpose, but I get a new dataframe with all NaN values, and the 'exploded' rows are not matched to the right label. Here is my code:</p>
<pre><code>df_filtered = output_df[output_df['label'] == 'label2']

# explode the list column into multiple rows while keeping other columns
df_exploded = pd.concat([
    df_filtered.drop(['pred', 'gt'], axis=1),
    df_filtered['pred'].explode().reset_index(drop=True),
    df_filtered['gt'].explode().reset_index(drop=True)
], axis=1)

# add prefix to the existing column name (label) to differentiate each new row
df_exploded = df_exploded.add_prefix('new_')

# rename the columns to remove the prefix from the original column
df_exploded = df_exploded.rename(columns={'new_pred': 'pred', 'new_gt': 'gt'})

# combine the exploded dataframe with the original dataframe, dropping the original list column
df_combined = pd.concat([output_df.drop(['pred', 'gt'], axis=1), df_exploded], axis=1)
</code></pre>
<p>Any help would be appreciated.</p>
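<p>One approach (a sketch on the sample data; it needs pandas >= 1.3 for multi-column <code>explode</code>) is to pad the shorter list with None so <code>pred</code> and <code>gt</code> have equal lengths per row, then explode both columns together so the rows stay aligned with their label:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "label": ["label1", "label2", "label3"],
    "pred": ["val1", ["str1", "str2"], "foo"],
    "gt": ["val11", ["str1", "str3", "str4"], "box"],
})

def pad_lists(row):
    """Make pred/gt equal-length lists so they can be exploded together."""
    p, g = row["pred"], row["gt"]
    if isinstance(p, list) or isinstance(g, list):
        p = p if isinstance(p, list) else [p]
        g = g if isinstance(g, list) else [g]
        n = max(len(p), len(g))
        row["pred"] = p + [None] * (n - len(p))
        row["gt"] = g + [None] * (n - len(g))
    return row

# Equal-length lists explode together, so each label keeps its own rows:
out = df.apply(pad_lists, axis=1).explode(["pred", "gt"]).reset_index(drop=True)
```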
|
<python><pandas><pandas-explode>
|
2023-03-29 06:57:14
| 1
| 985
|
Yana
|
75,873,947
| 8,477,952
|
Python KeyError in format() using strings
|
<p>I tried to use the <code>format()</code> function in Python and it gives me a <code>KeyError</code>, and I don't get why. I looked at some websites using <code>format()</code> with strings and was convinced I was doing it right; obviously I am not.</p>
<pre><code>servoid = "test1"
parameter = "test2"
value = "test3"
#message = '{"servoid":"{}","parameter":"{}","value":"{}"}'.format(servoid,parameter,value) # <-- key error
message = '{"servoid":"'+servoid+'","parameter":"'+parameter+'","value":"'+value+'"}' # <-- works fine
print(message)
</code></pre>
<p>Output of the working line:</p>
<pre><code>{"servoid":"test1","parameter":"test2","value":"test3"}
</code></pre>
<p>Output of the failing line:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\sknippels\Documents\machine_software\mqtt_test\TestEAL_EFC.py", line 126, in <module>
message = '{"servoid":"{}","parameter":"{}","value":"{}"}'.format(servoid,parameter,value) # <-- key error
KeyError: '"servoid"'
</code></pre>
<p>What am I doing wrong?</p>
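<p>The literal braces in the JSON template are the problem: <code>format()</code> parses <code>{"servoid"}</code> as a replacement field named <code>"servoid"</code>, hence the <code>KeyError</code>. Escape literal braces by doubling them, or better, let <code>json.dumps</code> build the string:</p>

```python
import json

servoid, parameter, value = "test1", "test2", "test3"

# Doubled braces {{ }} are emitted literally; only {} remain replacement fields:
message = '{{"servoid":"{}","parameter":"{}","value":"{}"}}'.format(servoid, parameter, value)

# Safer: build the JSON from a dict, which also handles quoting and escaping:
message2 = json.dumps({"servoid": servoid, "parameter": parameter, "value": value},
                      separators=(",", ":"))
```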
|
<python><format>
|
2023-03-29 06:44:49
| 1
| 407
|
bask185
|
75,873,838
| 9,042,093
|
Change the error stream in logger in python
|
<p>I don't want any logger output in my terminal. I have a separate log file where I dump all the logger messages.</p>
<p>So now if I do</p>
<pre><code>logger.exception('error')
</code></pre>
<p>The message is shown in the terminal, which I don't want.</p>
<p>I assume I need to change something here in my code.</p>
<pre><code>stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.ERROR)
</code></pre>
<p>Edited:</p>
<p>I have seen the one answer here - <a href="https://stackoverflow.com/questions/2266646/how-to-disable-logging-on-the-standard-error-stream">How to disable logging on the standard error stream?</a></p>
<p>According to that I need to set</p>
<pre><code>stream_handler.setLevel(logging.CRITICAL)
</code></pre>
<p>If I do that, I still see some unnecessary output. So I want to keep</p>
<pre><code>stream_handler.setLevel(logging.ERROR)
</code></pre>
<p>while still having no output in the terminal. Can somebody help?</p>
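<p>A sketch of the usual setup: attach only a <code>FileHandler</code> to your logger and set <code>propagate = False</code>, so records never reach the root logger's stream handler (the terminal). A <code>StringIO</code> stands in for the file below so the sketch is self-contained:</p>

```python
import io
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.ERROR)
logger.propagate = False  # stop records from bubbling up to the root/terminal handler

log_buffer = io.StringIO()  # in real code: logging.FileHandler("app.log")
file_handler = logging.StreamHandler(log_buffer)
file_handler.setLevel(logging.ERROR)
logger.addHandler(file_handler)

logger.exception("error")  # goes to the "file", not to the terminal
```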
|
<python><python-3.x><python-logging>
|
2023-03-29 06:29:40
| 0
| 349
|
bad_coder9042093
|
75,873,522
| 9,501,624
|
More efficient cookie cutter for boolean array to simulate detector response?
|
<p>I am applying a predefined pattern to a chain of events to simulate a dead time of a detector.</p>
<p>I tried to think of a vectorized method to apply that pattern (e.g. <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html#scipy.signal.convolve" rel="nofollow noreferrer">convolution</a>), but failed to find a suitable algorithm. The iterative approach works, but is rather slow (the simulator has to deal with millions of events).</p>
<p>Can you think of a vectorized method to get the same functionality as my <code>for loop</code> approach?</p>
<p><a href="https://i.sstatic.net/VaJvO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VaJvO.png" alt="enter image description here" /></a></p>
<p>You can test the expected behavior with the attached pytests.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def detector_response(events: np.array, dead_time: int) -> np.array:
    """Iterate over events-array and apply dead_time to each event."""
    i = 0
    while i < len(events):
        if events[i]:
            events[i + 1 : i + dead_time + 1] = 0
            i += dead_time
        i += 1
    return events


def test_continuous_event_chain_get_spaced_by_deadtime():
    n_events = 1_000_000
    events = np.ones(n_events)
    dead_time = 9
    events_with_deadtime = detector_response(events, dead_time)
    print(events_with_deadtime)
    assert sum(events_with_deadtime) == n_events / (1 + dead_time)


def test_individual_events_remain_while_close_events_get_filtered():
    events = np.array([0, 1, 0, 1, 0, 0, 0, 1, 0])
    dead_time = 3
    assert sum(detector_response(events, dead_time)) == 2


def test_concrete_pattern():
    events = np.array([0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0])
    dead_time = 3
    expected_output = [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
    assert (np.array(detector_response(events, dead_time)) == np.array(
        expected_output)).all()

    dead_time = 4
    expected_output = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
    assert (np.array(detector_response(events, dead_time)) == np.array(
        expected_output)).all()
</code></pre>
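<p>The dead-time logic is inherently sequential (each kept event shifts the window), so a fully vectorized form is elusive. A big constant-factor win, though, is to loop only over the event positions via <code>np.flatnonzero</code> instead of every index. A sketch that reproduces the behavior of the tests above:</p>

```python
import numpy as np

def detector_response_sparse(events: np.ndarray, dead_time: int) -> np.ndarray:
    """Keep an event only if it falls more than dead_time after the last kept one.

    Visits only event positions, not every index, so it is much faster when
    events are sparse; for dense data, numba or cython would be the next step.
    """
    out = np.zeros_like(events)
    last_kept = -dead_time - 1
    for i in np.flatnonzero(events):
        if i - last_kept > dead_time:
            out[i] = events[i]
            last_kept = i
    return out
```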
|
<python><numpy><filter><vectorization>
|
2023-03-29 05:43:22
| 1
| 3,844
|
Christian Karcher
|
75,873,428
| 7,975,962
|
How to install diff version of a package (transformers) without internet in kaggle notebook w/o killing the kernel while keeping variables in memory?
|
<p>I have prepared an inference pipeline for a Kaggle competition and it has to be executed without internet connection.</p>
<p>I'm trying to use different versions of transformers but I had some issues regarding the installation part.</p>
<p>Kaggle's default transformers version is <code>4.26.1</code>. I start by installing a different branch of transformers (<code>4.18.0.dev0</code>) like this:</p>
<pre><code>!pip install ./packages/sacremoses-0.0.53
!pip install /directory/to/packages/transformers-4.18.0.dev0-py3-none-any.whl --find-links /directory/to/packages
</code></pre>
<p>It installs <code>transformers-4.18.0.dev0</code> without any problem. I use this version of the package and do the inference with some models. Then I want to use another package <code>open_clip_torch-2.16.0</code> which is compatible with <code>transformers-4.27.3</code>, so I install them by simply doing</p>
<pre><code>!pip install /directory/to/packages/transformers-4.27.3-py3-none-any.whl --no-index --find-links /directory/to/packages
!pip install /directory/to/packages/open_clip_torch-2.16.0-py3-none-any.whl --no-index --find-links /directory/to/packages/
</code></pre>
<p>I get a prompt of <code>Successfully installed transformers-4.27.3 and open_clip_torch-2.16.0.</code></p>
<p><code>!pip list | grep transformers</code> outputs <code>transformers 4.27.3</code> but when I do</p>
<pre><code>import transformers
transformers.__version__
</code></pre>
<p>the version is <code>'4.18.0.dev0'</code>. I can't use open_clip because of that reason. Some of the codes are breaking because it uses the old version of transformers even though I installed a newer version. How can I resolve this issue?</p>
|
<python><jupyter-notebook><pip><huggingface-transformers><python-importlib>
|
2023-03-29 05:26:20
| 2
| 974
|
gunesevitan
|
75,873,335
| 8,510,149
|
Linear Regression over Window in PySpark
|
<p>I want to perform a linear regression over a window in PySpark. I have a time series, and for each person (identified by an ID) I want the slope of that series looking 12 months back.</p>
<p>My idea was to do like this:</p>
<pre><code>sliding_window = Window.partitionBy('ID').orderBy('date').rowsBetween(-12, 0)
df=df.withColumn("date_integer", F.unix_timestamp(df['date']))
assembler = VectorAssembler(inputCols=['date_integer'], outputCol='features')
vector_df = assembler.transform(df)
lr = LinearRegression(featuresCol='features', labelCol='series')
df = df.withColumn('slope_window', lr.fit(vector_df).coefficients[0].over(sliding_window))
</code></pre>
<p>However, after 15 minutes of execution I get this error:</p>
<pre><code>AttributeError: 'numpy.float64' object has no attribute 'over'
</code></pre>
<p>Any advice?</p>
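<p>The error arises because <code>lr.fit(...).coefficients[0]</code> is a plain number computed once over the whole dataset, not a column expression, so <code>.over()</code> cannot apply. The windowed slope can instead be built from aggregate functions, since the OLS slope is cov(x, y)/var(x). The plain-Python version below shows the formula; the PySpark translation using <code>F.covar_pop</code>/<code>F.var_pop</code> is sketched in comments (untested here, as it needs a Spark session):</p>

```python
def rolling_slope(xs, ys, window):
    """OLS slope of y on x over a trailing window, via cov(x, y) / var(x)."""
    out = []
    for i in range(len(xs)):
        wx = xs[max(0, i - window + 1): i + 1]
        wy = ys[max(0, i - window + 1): i + 1]
        mx, my = sum(wx) / len(wx), sum(wy) / len(wy)
        var = sum((v - mx) ** 2 for v in wx)
        cov = sum((a - mx) * (b - my) for a, b in zip(wx, wy))
        out.append(cov / var if var else None)
    return out

# PySpark sketch:
# w = Window.partitionBy("ID").orderBy("date").rowsBetween(-12, 0)
# df = df.withColumn(
#     "slope_window",
#     F.covar_pop("date_integer", "series").over(w) / F.var_pop("date_integer").over(w),
# )
```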
|
<python><pyspark><linear-regression>
|
2023-03-29 05:07:53
| 1
| 1,255
|
Henri
|
75,873,239
| 1,718,152
|
Efficiently persist many slices referencing the same dataframe
|
<p>I have a list of DataFrames which are slices to another DataFrame. I'm looking for a way to efficiently persist the DataFrames.</p>
<pre><code>import pandas as pd
import numpy as np
timestamps = pd.date_range("2018-01-01", "2023-03-29", freq="H")
values = np.random.rand(len(timestamps))
score_df = pd.DataFrame({"timestamp": timestamps, "value": values}).set_index("timestamp")
referenced_frames = [
    score_df.iloc[score_df.index.slice_indexer(date, date + pd.Timedelta(days=1000))]
    for date in timestamps
]
</code></pre>
<p>When using pickle to persist the DataFrames many copies of the same data are created which cause a very large memory usage:</p>
<pre><code>import pickle
# WARNING: this results in a large memory usage and file on disk
with open("test.pkl", "wb") as f:
    pickle.dump(referenced_frames, f)
</code></pre>
<p>I know I could simply persist the original DataFrame and the slices into it:</p>
<pre><code>slices = [
    score_df.index.slice_indexer(date, date + pd.Timedelta(days=1000))
    for date in timestamps
]
</code></pre>
<p>However, that ends up being too slow for my use case:</p>
<pre><code>>>> %timeit referenced_frames[999]
21.9 ns ± 1.09 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
>>> %timeit score_df.iloc[slices[999]] # This is too slow
15.1 µs ± 420 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
</code></pre>
<p>Is there a way to efficiently persist my data? I'm okay with using another data structure than a list, as long as I can access the data very efficiently.</p>
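<p>A middle ground (a sketch on a smaller frame) is to persist the base frame once together with the plain integer bounds of each slice, then rebuild the list of views a single time after loading. Each later access is then an ordinary list lookup again, while the file stores the data only once:</p>

```python
import pickle

import numpy as np
import pandas as pd

timestamps = pd.date_range("2018-01-01", periods=200, freq="h")
score_df = pd.DataFrame({"timestamp": timestamps,
                         "value": np.random.rand(len(timestamps))}).set_index("timestamp")
slices = [score_df.index.slice_indexer(d, d + pd.Timedelta(days=1)) for d in timestamps]

# One copy of the data, plus cheap (start, stop) integer pairs:
bounds = [(s.start, s.stop) for s in slices]
payload = pickle.dumps({"df": score_df, "bounds": bounds})

# Rebuild the views once at load time; afterwards access cost is a list index:
restored = pickle.loads(payload)
referenced_frames = [restored["df"].iloc[a:b] for a, b in restored["bounds"]]
```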
|
<python><pandas><pickle>
|
2023-03-29 04:50:46
| 0
| 448
|
Semi
|
75,873,232
| 10,748,412
|
How to get individual columns values into a list using Python
|
<p>I have an image which is the extracted row of a table and it looks like this:
<a href="https://i.sstatic.net/Y69bI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y69bI.png" alt="enter image description here" /></a></p>
<p>I want to extract each column which is separated by vertical lines into a list.</p>
<p>The expected output will be like this:</p>
<pre><code>['Sl No','Description of Goods','HSN/SAC','Quantity','Rate','per','Amount']
</code></pre>
<p>The image is not from a database; it is just an image extracted by some ML model.</p>
<p>How can I achieve this?</p>
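<p>A simple starting point (a sketch on a synthetic binary image; a real photo would first need grayscale conversion and thresholding with OpenCV) is to find the separator columns as the x-positions where nearly every pixel is dark, then slice the row image between them and run OCR on each cell:</p>

```python
import numpy as np

# Synthetic stand-in for a binarized row image: 1 = background, 0 = ink
row_img = np.ones((20, 30))
row_img[:, [10, 20]] = 0  # two vertical separator lines

dark_fraction = (row_img == 0).mean(axis=0)    # share of dark pixels per column
separators = np.where(dark_fraction > 0.9)[0]  # columns that are almost all dark

bounds = [0, *separators, row_img.shape[1]]
cells = [row_img[:, a:b] for a, b in zip(bounds, bounds[1:])]
# each element of `cells` would then be passed to the OCR model
```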
|
<python><opencv><machine-learning><deep-learning><paddleocr>
|
2023-03-29 04:48:37
| 1
| 365
|
ReaL_HyDRA
|
75,873,117
| 3,891,431
|
python call a function before every "continue"
|
<p>I am writing many files each with many samples into a database.</p>
<p>My code is in this format:</p>
<pre><code>files = find_my_files()

for file in files:
    try:
        check_file(file)
    except:
        logging.warning(f'{file} did not pass the check, skipping this file')
        continue

    try:
        process_file(file)
    except:
        logging.warning(f'{file} did not pass the processing, skipping this file')
        continue

    samples = get_samples(file)
    for sample in samples:
        try:
            check_sample(sample)
        except:
            logging.warning(f'{sample} did not pass the checks, skipping this sample')
            continue

        try:
            write_to_db(sample)
        except:
            logging.warning(f'could not write {sample} to the database, skipping this sample')
            continue
</code></pre>
<p>I am looking for a way to save all the samples that I have skipped. This might be a single sample or a list of samples if a whole file is skipped.</p>
<p>My go-to solution is to create a function and call it every time before <code>continue</code>, but I wonder if there is a better way?</p>
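<p>A helper is a reasonable choice; giving it the list, the log call, and the reason keeps every skip site to one line. A runnable sketch (the <code>check_file</code> stub is hypothetical, standing in for the real pipeline functions):</p>

```python
import logging

skipped = []  # every skipped file or sample ends up here

def skip(item, reason):
    """Log the skip and remember the item, in one place."""
    logging.warning('%s %s, skipping', item, reason)
    skipped.append(item)

def check_file(file):  # hypothetical stand-in for the real check
    if file == "bad.csv":
        raise ValueError("corrupt header")

for file in ["good.csv", "bad.csv"]:
    try:
        check_file(file)
    except Exception:
        skip(file, "did not pass the check")
        continue
    # ... process_file(file) and the per-sample loop would follow,
    # each calling skip() in its except block before continue
```

An alternative with the same effect is a small context manager wrapping each step, but the helper keeps the control flow explicit.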
|
<python><python-3.x>
|
2023-03-29 04:23:06
| 1
| 4,334
|
Rash
|
75,873,099
| 13,957,283
|
Django rest framework slow response time
|
<p>I am working on a project. I am using <strong>Django rest framework</strong>.</p>
<p>I have a Category model with only 2 rows, and the API response time is 180 ms on average.</p>
<p>That is <strong>too slow</strong> for only 2 rows, and the API is <strong>very simple</strong>. Why is it so slow for such a simple API?</p>
<p><strong>here is my view</strong></p>
<pre><code>class CategoryListView(generics.ListAPIView):
    queryset = Category.objects.order_by('order').all()
    serializer_class = CategorySerializer

    def list(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())
        print(queryset.query)
        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data)
</code></pre>
<p><strong>here is my serializer</strong></p>
<pre><code>class CategorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Category
        fields = ("id", "name", "order")
</code></pre>
<p>And here is the query generated.</p>
<pre><code>SELECT "questions_category"."id", "questions_category"."created_at", "questions_category"."updated_at", "questions_category"."name", "questions_category"."name_tk", "questions_category"."name_ru", "questions_category"."icon", "questions_category"."order" FROM "questions_category" ORDER BY "questions_category"."order" ASC
</code></pre>
<p>I am analyzing it with <strong>django-silk</strong></p>
<p>The query time is <strong>6 ms</strong>, but the response time is <strong>180 ms</strong>.</p>
<p>Could somebody help me with this?</p>
|
<python><django><django-rest-framework>
|
2023-03-29 04:18:20
| 0
| 1,216
|
Noa
|
75,873,043
| 15,155,978
|
How to upload a table from a CSV file to GCP BigQuery using a Python script?
|
<p>I'm new to GCP BigQuery. I would like to upload <a href="https://drive.google.com/file/d/1zd4hcopyarm7PvRvBDfga2_1OHRdjTzW/view?usp=share_link" rel="nofollow noreferrer">this dataset</a> to BigQuery using a Python script as follows (I'm using this <a href="https://medium.com/pipeline-a-data-engineering-resource/automate-your-bigquery-schema-definitions-with-5-lines-of-python-7a1996749718" rel="nofollow noreferrer">post</a> &amp; these <a href="https://github.com/googleapis/python-bigquery/tree/35627d145a41d57768f19d4392ef235928e00f72/samples/snippets" rel="nofollow noreferrer">Github examples</a> as references):</p>
<pre><code>import os
import pandas as pd
import numpy as np
import gdown
from pandas import read_csv
from google.cloud import bigquery
from dotenv import load_dotenv
from google.oauth2 import service_account


def create_schema(field_list: list, types_list: list):
    schema_list = []
    for fields, types in zip(field_list, types_list):
        schema = bigquery.SchemaField(fields, types)
        schema_list.append(schema)
    return schema_list


def auth():
    os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.getenv(
        'PATH_CREDENTIAL_GCP')
    key_path = os.environ['GOOGLE_APPLICATION_CREDENTIALS']
    credentials = service_account.Credentials.from_service_account_file(
        key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    client = bigquery.Client(credentials=credentials,
                             project=credentials.project_id,)
    return client


def bq_load(df, dataset_id: str, table_id: str, schema):
    bq_client = auth()
    dataset_ref = bq_client.dataset(dataset_id)
    dataset_table_id = dataset_ref.table(table_id)
    job_config = bigquery.LoadJobConfig()
    job_config.write_disposition = 'WRITE_TRUNCATE'
    job_config.source_format = bigquery.SourceFormat.CSV
    job_config.autodetect = False
    job_config.schema = schema
    job_config.ignore_unknown_values = True
    job = bq_client.load_table_from_dataframe(
        df,
        table_id,
        location='US',
        job_config=job_config
    )
    return job.result()


def main():
    load_dotenv()
    working_dir = os.getcwd()
    print(working_dir)
    data_path = working_dir + '/news.csv'
    df = read_csv(data_path)
    headers = df.columns.to_numpy()
    types_df = df.dtypes.to_numpy()
    field_list = headers
    type_list = ["INTEGER", "BYTES", "BYTES", "BYTES", "INTEGER"]
    result = create_schema(field_list=field_list, types_list=type_list)
    dataset_id = "some_dataset_id"
    table_id = "some_table_id"
    bf_to_bq = bq_load(df, dataset_id, table_id, result)
    print(bf_to_bq)
    return 'Done!!'


if __name__ == "__main__":
    main()
</code></pre>
<p>By running the prior code, Iβm getting this error <code>google.api_core.exceptions.BadRequest: 400 Error while reading data, error message: CSV table references column position 4, but line starting at position:0 contains only 4 columns</code> :</p>
<pre><code>Traceback (most recent call last):
  File "/Users/user/bigquery/table_upload_bq.py", line 110, in <module>
    main()
  File "/Users/user/bigquery/table_upload_bq.py", line 103, in main
    bf_to_bq = bq_load(df, dataset_id, table_id, result)
  File "/Users/user/bigquery/table_upload_bq.py", line 65, in bq_load
    return job.result()
  File "/Users/user/.pyenv/versions/3.10.7/envs/py-3.10.7/lib/python3.10/site-packages/google/cloud/bigquery/job/base.py", line 911, in result
    return super(_AsyncJob, self).result(timeout=timeout, **kwargs)
  File "/Users/user/.pyenv/versions/3.10.7/envs/py-3.10.7/lib/python3.10/site-packages/google/api_core/future/polling.py", line 261, in result
    raise self._exception
google.api_core.exceptions.BadRequest: 400 Error while reading data, error message: CSV table references column position 4, but line starting at position:0 contains only 4 columns.
</code></pre>
<p>This is what the CSV looks like (it has 5 columns):
<a href="https://i.sstatic.net/GBzjK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GBzjK.png" alt="dataset" /></a></p>
<p>If someone could suggest how to solve this problem, it would be awesome. Thanks!</p>
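<p>The 400 says BigQuery expected 5 columns but found a line with only 4, so before submitting the job it is worth asserting that the DataFrame read from the CSV really has as many columns as the hand-built schema (common culprits are a stray index column or a header/delimiter mismatch in <code>read_csv</code>). A hedged sanity-check sketch, with made-up column names standing in for the real file:</p>

```python
import io
import pandas as pd

csv_text = "id,title,text,label,score\n1,a,b,c,0\n"  # stand-in for news.csv
df = pd.read_csv(io.StringIO(csv_text))

type_list = ["INTEGER", "BYTES", "BYTES", "BYTES", "INTEGER"]
# Fail fast locally instead of inside the BigQuery load job:
assert len(type_list) == df.shape[1], (
    f"schema has {len(type_list)} fields but the DataFrame has {df.shape[1]} columns"
)
```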
|
<python><google-cloud-platform><google-bigquery>
|
2023-03-29 04:05:47
| 0
| 922
|
0x55b1E06FF
|
75,872,456
| 1,925,518
|
Python least mean squares adaptive filter implementation
|
<p>I'm trying to write a least mean squares (LMS) adaptive filter in Python, similar to <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html" rel="nofollow noreferrer">least_squares</a> in scipy. I'm trying to follow the <a href="https://en.wikipedia.org/wiki/Least_mean_squares_filter#Simplifications" rel="nofollow noreferrer">wikipedia-defined algorithm</a> for the LMS adaptive filter, but I can't seem to update my independent variables properly.</p>
<p>What am I missing in my implementation?</p>
<h3>Code:</h3>
<pre><code>import numpy as np
from scipy.optimize import least_squares


def desiredFunc(x):
    """desired response"""
    aa0 = 1.39
    bb0 = 4.43
    return aa0*np.power(x,3) + bb0


def myFunc(x, a0, b0):
    return a0*np.power(x,3) + b0


def myMinFunc(a0, b0, x):
    """minimization function"""
    return desiredFunc(x) - myFunc(x, a0, b0)


def mse(errors):
    """mean square error"""
    return sum(np.square(errors))/len(errors)


if __name__ == '__main__':
    a0_bounds = [-3, 3]  # [lower bound, upper bound]
    b0_bounds = [-5, 5]  # [lower bound, upper bound]
    a0_scale = 10
    b0_scale = 10
    x0 = [0, 0]  # initial guess of independent variables

    # bounds for scipy implementation
    #bounds = ([a0_bounds[0], b0_bounds[0]], [a0_bounds[1], b0_bounds[1]])

    f = np.arange(-10, 11)

    ########################
    #### scipy least_squares
    ########################
    #ans = least_squares(lambda param: myMinFunc(param[0], param[1], f), x0, bounds=bounds, x_scale=[a0_scale, b0_scale], verbose = 2)
    #print(ans.x)

    ########################
    #### implementation
    ########################
    # bounds for own implementation
    bounds = [a0_bounds, b0_bounds]

    # initial error array
    errors = myMinFunc(x0[0], x0[1], f)
    print(f'MSE: {mse(errors):.2f}\t(Sum of err)/N: {sum(errors)/len(errors):.2f}\tind. variables: [{x0[0]:.2f},{x0[1]:.2f}]')

    while mse(errors) > 10:
        # adjust independent variables
        for i, param in enumerate(x0):
            x0[i] = x0[i] + 1/len(errors)/a0_scale*sum(errors)  # unbiased estimator from wikipedia

            # check if bounds are violated:
            if x0[i] > bounds[i][1]:
                x0[i] = bounds[i][1]
            if x0[i] < bounds[i][0]:
                x0[i] = bounds[i][0]

        # measure system
        errors = myMinFunc(x0[0], x0[1], f)
        print(f'MSE: {mse(errors):.2f}\t(Sum of err)/N: {sum(errors)/len(errors):.2f}\tind. variables: [{x0[0]:.2f},{x0[1]:.2f}]')
        input('Press Enter to continue...')
</code></pre>
<h3>Output:</h3>
<pre><code>$ python3 least-squares-implementation.py
MSE: 364064.99 (Sum of err)/N: 4.43 ind. variables: [0.00,0.00]
MSE: 168992.22 (Sum of err)/N: 3.99 ind. variables: [0.44,0.44]
Press Enter to continue...
MSE: 56657.98 (Sum of err)/N: 3.59 ind. variables: [0.84,0.84]
Press Enter to continue...
MSE: 6774.48 (Sum of err)/N: 3.23 ind. variables: [1.20,1.20]
Press Enter to continue...
MSE: 3365.35 (Sum of err)/N: 2.91 ind. variables: [1.52,1.52]
Press Enter to continue...
MSE: 33900.81 (Sum of err)/N: 2.62 ind. variables: [1.81,1.81]
Press Enter to continue...
</code></pre>
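<p>What the update is missing is the regressor: LMS adjusts each weight by the error times <em>its own</em> input (x³ for a0, 1 for b0), whereas the loop above applies the same bare error average to both parameters. A normalized-LMS sketch on the same cubic model (plain NumPy, no scipy, and the step size and pass count are illustrative choices):</p>

```python
import numpy as np

x = np.arange(-10, 11, dtype=float)
target = 1.39 * x**3 + 4.43                       # desired response

phi = np.stack([x**3, np.ones_like(x)], axis=1)   # one regressor per parameter
w = np.zeros(2)                                   # [a0, b0]
mu = 0.5                                          # NLMS step, stable for 0 < mu < 2

for _ in range(200):                              # passes over the data
    for xi, ti in zip(phi, target):
        e = ti - xi @ w
        w += mu * e * xi / (xi @ xi)              # error scaled by each sample's own regressor

# w drifts toward [1.39, 4.43]
```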
|
<python>
|
2023-03-29 01:54:50
| 1
| 735
|
EarthIsHome
|
75,872,372
| 11,141,816
|
Numerical instability of system of equations of large order polynomials
|
<p>I have a set of nonlinear polynomials</p>
<pre><code>V_n(Input[N])=0
</code></pre>
<p>where <code>n</code> runs from <code>1</code> to <code>N</code>, with the <code>N</code> entries of <code>Input[N]</code> as unknowns. Each <code>V_n()</code> contains a polynomial term of order <code>x**N</code>. I used Newton's method to find the roots but ran into numerical stability issues, partly caused by the computation of the matrix inverse: although <code>2000**2000=1.1481306952742545242E+6602</code> was easily computed, matrix elements whose scales are separated by factors like <code>1E200</code> or <code>1E200**2</code> were not handled accurately in the inverse. This leads to a large residual in the Newton step:</p>
<pre><code>max( Inverse(Jacobian(V_n(Input[N]))) * V_n(Input[N]) )=1E200
</code></pre>
<p>I've been trying to find a normalization procedure for Newton's method, but with limited progress. Without normalization, the root-finding algorithm with Newton's method could handle <code>N~10</code>; with normalization, this increased to <code>N~50</code>. However, I need to solve the system of equations at <code>N~2000</code>. The good news is that the maximum of the roots satisfies <code>max(Input[N])~N</code>, i.e. it is approximately <code>N</code> itself.</p>
<p>Is there a way to find the roots of this system of equations?</p>
|
<python><matrix><numerical-methods><newtons-method>
|
2023-03-29 01:38:38
| 0
| 593
|
ShoutOutAndCalculate
|
75,872,342
| 2,977,256
|
JAX performance problems
|
<p>I am obviously not following best practices, but maybe that's because I don't know what they are. Anyway, my goal is to generate a tubular neighborhood about a curve in three dimensions. A curve is given by an array of length three <code>f(t) = jnp.array([x(t), y(t), z(t)])</code>.</p>
<p>Now, first we compute the unit tangent:</p>
<pre><code>def get_uvec2(f):
tanvec = jacfwd(f)
return lambda x: tanvec(x)/jnp.linalg.norm(tanvec(x))
</code></pre>
<p>Next, we compute the derivative of the tangent:</p>
<pre><code>def get_cvec(f):
return get_uvec2(get_uvec2(f))
</code></pre>
<p>Third, we compute the orthogonal frame at a point:</p>
<pre><code>def get_frame(f):
tt = get_uvec2(f)
tt2 = get_cvec(f)
def first2(t):
x = tt(t)
y = tt2(t)
tt3 = (jnp.cross(x, y))
return jnp.array([x, y, tt3])
return first2
</code></pre>
<p>which we use to generate a point in the circle around a given point:</p>
<pre><code>def get_point(frame, s):
v1 = frame[1, :]
v2 = frame[2, :]
return jnp.cos(s) * v1 + jnp.sin(s) * v2
</code></pre>
<p>And now we generate the point on the tubular neighborhood corresponding to a pair of parameters:</p>
<pre><code>def get_grid(f, eps):
ffunc = get_frame(f)
def grid(t, s):
base = f(t)
frame = ffunc(t)
return base + eps * get_point(frame, s)
return grid
</code></pre>
<p>And finally, we put it all together:</p>
<pre><code>def get_reg_grid(f, num1, num2, eps):
plist = []
tarray = jnp.linspace(start = 0.0, stop = 1.0, num = num1)
sarray = jnp.linspace(start = 0.0, stop = 2 * jnp.pi, num = num2)
g = get_grid(f, eps)
for t in tarray:
for s in sarray:
plist.append(g(t, s))
return jnp.vstack(plist)
</code></pre>
<p>Finally, use it to compute the tubular neighborhood around a circle in the xy-plane:</p>
<pre><code>f1 = lambda x: jnp.array([jnp.cos(2 * jnp.pi * x), jnp.sin(2 * jnp.pi * x), 0.0])
fff = np.array(get_reg_grid(f1, 200, 200, 0.1))
</code></pre>
<p>The good news is that it all works. The <em>bad</em> news is that this computation takes well over an hour. Where did I go wrong?</p>
|
<python><jax>
|
2023-03-29 01:30:22
| 1
| 4,872
|
Igor Rivin
|
75,872,319
| 1,942,626
|
tensorflow predict is too slow
|
<p>I built a model with 500 captcha images using TensorFlow.
It works fine on my MacBook M1 at a good speed, but when I deployed it to a cloud Ubuntu server with 4 cores it became very slow, especially at startup.</p>
<p>I am using tb-nightly==2.13.0a20230328 version.</p>
<p>Can I run the process all the time so the process receives the input file and return without loading delay?</p>
<p>The result says it took 3.6 sec to solve only 1 captcha image, when it takes just 1 sec on my local MacBook. What a bummer!!</p>
<p>Is there any way to improve performance?
Please advise.</p>
<p>My code and result is as follows:</p>
<p>Code:</p>
<pre><code>start = time.time()
math.factorial(100000)
data_dir = './gray/'
res = []
# Iterate directory
for path in os.listdir(data_dir):
# check if current path is a file
if os.path.isfile(os.path.join(data_dir, path)):
res.append(path)
img_width = 150
img_height = 50
max_length = 6
characters = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd','e','f','g','h','i','j','k', 'l', 'm', 'n', 'o', 'p', 'q', 'r','s','t', 'u', 'v', 'w', 'x', 'y', 'z' }
weights_path = "./model/weights.h5"
AM = cc.ApplyModel(weights_path, img_width, img_height, max_length, characters)
for i in range(len(res)):
pred = AM.predict(data_dir + res[i])
print(res[i] + '=' + pred)
end = time.time()
print(f"{end - start:.5f} sec")
</code></pre>
<p>Result:</p>
<pre><code>(.venv) ubutu@server:~/captcha$ python predict.py
2023-03-29 10:07:24.919058: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-03-29 10:07:24.982799: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-03-29 10:07:24.983566: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-29 10:07:26.063683: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
['gray.png']
1/1 [==============================] - 2s 2s/step
gray.png=pkm7ga
3.64084 sec
</code></pre>
|
<python><tensorflow><captcha>
|
2023-03-29 01:24:59
| 0
| 845
|
user1942626
|
75,872,170
| 17,274,113
|
writing an array to a raster file with spatial attributes using array2raster. ` AttributeError: NoneType`
|
<p>I have two scripts. One uses hydrology tools and finds the depths of sinks in a DEM and outputs a binary sink/no-sink array. The other script labels pixel groups. After this I have a function <code>array2raster</code> from <a href="https://gist.github.com/jkatagi/a1207eee32463efd06fb57676dcf86c8?permalink_comment_id=3248112" rel="nofollow noreferrer">here</a> to generate a spatial image from the array. The function takes an array to convert to a raster (in this case a label image) and a raster with spatial attributes as inputs.</p>
<p>The function <code>array2raster</code> works when I keep the scripts separate and just transfer the data from script 1 to the segmentation script. However, when I incorporate the two scripts into one, and that resultant script calls <code>array2raster</code>, I get the error: <code>AttributeError: NoneType object has no attribute "SetGeoTransform"</code>.</p>
<p>I checked the inputs, <code>dataset</code> and <code> array</code> in the case of both the incorporated script and the separate script. In both cases they are a <code>gdal.Dataset</code> and a <code>numpy.array</code>. Also, I am using the same input raster in each case, so I am not too sure what the difference is between the scripts that is causing an error only in one.</p>
<p>EDIT</p>
<p><a href="https://github.com/maxduso/CD_identification/blob/daab633a71671b947840a3f1d5e1615c52a93f72/whitebox_X_scikit.py" rel="nofollow noreferrer">full script</a></p>
<hr />
<p>Output of the first part of the script</p>
<pre><code>sink_depth = wbt.depth_in_sink(
input_path,
output_path,
zero_background= True
)
</code></pre>
<p>Read in the output of part one to continue analysis</p>
<pre><code>spatial_image = gdal.Open(output_path)
</code></pre>
<hr />
<p>Conversion of input array to raster using spatial information from <code>spatial_image</code></p>
<pre><code>def array2raster(newRasterfn, dataset, array, dtype):
"""
save GTiff file from numpy.array
input:
newRasterfn: save file name
dataset : original tif file with spatial information
array : numpy.array
dtype: Byte or Float32.
"""
cols = array.shape[1]
rows = array.shape[0]
originX, pixelWidth, b, originY, d, pixelHeight = dataset.GetGeoTransform()
driver = gdal.GetDriverByName('GTiff')
# set data type to save.
GDT_dtype = gdal.GDT_Unknown
if dtype == "Byte":
GDT_dtype = gdal.GDT_Byte
elif dtype == "Float32":
GDT_dtype = gdal.GDT_Float32
else:
print("Not supported data type.")
# set number of band.
if array.ndim == 2:
band_num = 1
else:
band_num = array.shape[2]
outRaster = driver.Create(newRasterfn, cols, rows, band_num, GDT_dtype)
outRaster.SetGeoTransform((originX, pixelWidth, 0, originY, 0, pixelHeight))
# Loop over all bands.
for b in range(band_num):
outband = outRaster.GetRasterBand(b + 1)
# Read in the band's data into the third dimension of our array
if band_num == 1:
outband.WriteArray(array)
else:
outband.WriteArray(array[:,:,b])
    # setting the SRS from the input tif file.
prj=dataset.GetProjection()
outRasterSRS = osr.SpatialReference(wkt=prj)
outRaster.SetProjection(outRasterSRS.ExportToWkt())
outband.FlushCache()
print(prj)
array2raster("tif_folder/out_raster.tif", spatial_image, filtered_labels, "Byte")
</code></pre>
<p><code>AttributeError: 'NoneType' object has no attribute 'SetGeoTransform'</code></p>
<p>It seems as though the issue is in the creation of <code>outRaster</code>, which is <code>None</code>. It is in turn created by the line <code>driver.Create()</code>; all the inputs to <code>driver.Create</code> are in the correct format as far as I can tell, so I am wondering if it is an issue with the driver itself, which is defined above with <code>driver = gdal.GetDriverByName('GTiff')</code>. Again though, this method worked in a different script.</p>
<p>Any suggestions?</p>
<p>Thanks for reading.</p>
|
<python><arrays><geospatial><raster>
|
2023-03-29 00:47:27
| 1
| 429
|
Max Duso
|
75,872,125
| 5,342,009
|
Speechmatics submit a job without audio argument
|
<p>I have implemented a SpeechMatics speech to text application with their API as given in this document <a href="https://docs.speechmatics.com/introduction/batch-guide" rel="nofollow noreferrer">with the code</a> below :</p>
<pre><code>from speechmatics.models import ConnectionSettings
from speechmatics.batch_client import BatchClient
from httpx import HTTPStatusError
API_KEY = "YOUR_API_KEY"
PATH_TO_FILE = "example.wav"
LANGUAGE = "en"
settings = ConnectionSettings(
url="https://asr.api.speechmatics.com/v2",
auth_token=API_KEY,
)
# Define transcription parameters
conf = {
"type": "transcription",
"transcription_config": {
"language": LANGUAGE
}
}
# Open the client using a context manager
with BatchClient(settings) as client:
try:
job_id = client.submit_job(
audio=PATH_TO_FILE,
transcription_config=conf,
)
print(f'job {job_id} submitted successfully, waiting for transcript')
# Note that in production, you should set up notifications instead of polling.
# Notifications are described here: https://docs.speechmatics.com/features-other/notifications
transcript = client.wait_for_completion(job_id, transcription_format='txt')
# To see the full output, try setting transcription_format='json-v2'.
print(transcript)
except HTTPStatusError:
print('Invalid API key - Check your API_KEY at the top of the code!')
</code></pre>
<p>The code uses a file as an argument for the submit_job function. I want to submit a job with fetch_data, which uses a URL instead of a local file.</p>
<p>However, the submit_job function requires an audio argument.</p>
<p>I just want to use fetch_data option as given <a href="https://docs.speechmatics.com/features-other/fetch-url" rel="nofollow noreferrer">here</a> and no audio argument as given below :</p>
<pre><code>conf = {
"type": "transcription",
"transcription_config": {
"language": "en",
"diarization": "speaker"
},
"fetch_data": {
"url": "${URL}/{FILENAME}"
}
}
</code></pre>
<p>How can I use the fetch_data configuration given above and call the submit_job function without an audio file as an argument?</p>
|
<python><django>
|
2023-03-29 00:35:35
| 1
| 1,312
|
london_utku
|
75,872,072
| 9,872,200
|
Python assign covariance row wise calculation
|
<p>I am trying to assign the covariance value to a column based on the dataframe I have. The df is ~400k records x 30+ columns. The two data series that act as inputs for COV() are all aligned as a single record (with ~400k records). I would like to assign the column names as a list and then do operations as arrays. I can do this with the associated mean, but the covariance seems elusive.</p>
<p>Additionally, as a workaround, I can create the covariance in a clunkier manual way by writing out all of the steps, but it is not dynamic.
Example of the dataframe (first 5 records, 4 acct monthly return and benchmark return figures; in the actual df, there are 12 months of acct returns and 12 months of benchmark returns). I have tried various iterations of COV(); however, as both datasets (acct returns/benchmark returns) are on the same record, I have not found a good way of creating the function.</p>
<pre><code>df = pd.DataFrame({'ACCT_ID':['A_12345','A_23456','A_34567','A_45678','A_56789'],
'Acct_m1_RoR':[-0.025, -0.035, -0.055, 0.0127, -0.065],
'Acct_m2_RoR':[0.025, 0.035, 0.055, 0.0127, 0.065],
'Acct_m3_RoR':[0.065, -0.075, -0.015, 0.0527, 0.015],
'Acct_m4_RoR':[-0.009, 0.015, -0.065, 0.0827, -0.025],
'BCHMK_m1_RoR':[-0.025, -0.035, -0.055, 0.0127, -0.065],
'BCHMK_m2_RoR':[-0.025, -0.035, -0.055, 0.0127, -0.065],
'BCHMK_m3_RoR':[-0.025, -0.035, -0.055, 0.0127, -0.065],
'BCHMK_m4_RoR':[-0.025, -0.035, -0.055, 0.0127, -0.065]})
# List of column headers:
a1=['Acct_m1_RoR','Acct_m2_RoR','Acct_m3_RoR','Acct_m4_RoR','Acct_m5_RoR','Acct_m6_RoR','Acct_m7_RoR','Acct_m8_RoR','Acct_m9_RoR','Acct_m10_RoR','Acct_m11_RoR','Acct_m12_RoR']
b1=['BCHMK_m1_RoR','BCHMK_m2_RoR','BCHMK_m3_RoR','BCHMK_m4_RoR','BCHMK_m5_RoR','BCHMK_m6_RoR','BCHMK_m7_RoR','BCHMK_m8_RoR','BCHMK_m9_RoR','BCHMK_m10_RoR','BCHMK_m11_RoR','BCHMK_m12_RoR']
df['acct_mean'] = np.mean(df[a1],axis = 1)
df['bchmk_mean'] = np.mean(df[b1], axis = 1)
</code></pre>
<p>semi manual workaround:</p>
<pre><code>df['cov'] = (((df['Acct_m1_RoR'] - df['acct_mean']) * (df['BCHMK_m1_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m2_RoR'] - df['acct_mean']) * (df['BCHMK_m2_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m3_RoR'] - df['acct_mean']) * (df['BCHMK_m3_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m4_RoR'] - df['acct_mean']) * (df['BCHMK_m4_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m5_RoR'] - df['acct_mean']) * (df['BCHMK_m5_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m6_RoR'] - df['acct_mean']) * (df['BCHMK_m6_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m7_RoR'] - df['acct_mean']) * (df['BCHMK_m7_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m8_RoR'] - df['acct_mean']) * (df['BCHMK_m8_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m9_RoR'] - df['acct_mean']) * (df['BCHMK_m9_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m10_RoR'] - df['acct_mean']) * (df['BCHMK_m10_RoR'] - df['bchmk_mean']))
+ ((df['Acct_m11_RoR'] - df['acct_mean']) * (df['BCHMK_m11_RoR'] - df['bchmk_mean']))
            + ((df['Acct_m12_RoR'] - df['acct_mean']) * (df['BCHMK_m12_RoR'] - df['bchmk_mean']))) / 12
</code></pre>
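The twelve hand-written terms in the manual workaround can be collapsed into one vectorized expression over the two column lists. A sketch with a small toy frame (assuming the per-row divisor should be the number of months, matching the `/ 12` in the workaround, i.e. a population covariance):

```python
import numpy as np
import pandas as pd

# Toy frame; the real one has 12 Acct_* and 12 BCHMK_* columns.
df = pd.DataFrame({
    'Acct_m1_RoR':  [1.0, -0.025],
    'Acct_m2_RoR':  [2.0,  0.025],
    'Acct_m3_RoR':  [3.0,  0.065],
    'BCHMK_m1_RoR': [2.0, -0.025],
    'BCHMK_m2_RoR': [4.0, -0.025],
    'BCHMK_m3_RoR': [6.0, -0.025],
})
a1 = ['Acct_m1_RoR', 'Acct_m2_RoR', 'Acct_m3_RoR']
b1 = ['BCHMK_m1_RoR', 'BCHMK_m2_RoR', 'BCHMK_m3_RoR']

acct = df[a1].to_numpy()
bchmk = df[b1].to_numpy()
# Row-wise deviations from the row means, then the mean of their products:
# cov_row = (1/N) * sum_m (acct_m - acct_mean) * (bchmk_m - bchmk_mean)
acct_dev = acct - acct.mean(axis=1, keepdims=True)
bchmk_dev = bchmk - bchmk.mean(axis=1, keepdims=True)
df['cov'] = (acct_dev * bchmk_dev).mean(axis=1)
```

Because everything is array arithmetic, this stays fast at ~400k rows and adapts automatically if the column lists change length.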
|
<python><arrays><pandas><statistics><covariance>
|
2023-03-29 00:18:45
| 1
| 513
|
John
|
75,871,807
| 10,474,998
|
Rename subfolder in multiple folders by replacing a part of the string
|
<p>Assume I have multiple subfolders inside many different folders (**). All the folders are in a main folder called Project1. I want to rename the folders by replacing part of the name string using a <code>substring</code>-style operation.</p>
<pre><code>import glob
import os
import pandas as pd
paths = glob.glob("CC:/Users/xxx/College/Project1/**/", recursive=True)
</code></pre>
<p>Assuming the subfolders are in multiple folders and have a naming convention as follows:</p>
<pre><code>fwjekljfwelj-10-fwefw #(the path for this folder is "CC:/Users/xxx/College/Project1/**/wjekljfwelj-10-fwefw/")
kljkgpjrjrel-11-wwref
fwefjkecmuon-12-cfecd
dsfshncrpout-13-lplce
</code></pre>
<p>The alphanumeric sequence before the first <code>-</code> character is meaningless. I want to replace the string preceding the first dash with the number 20. The new subfolders would thus be:</p>
<pre><code>2010-fwefw
2011-wwref
2012-cfecd
2013-lplce
</code></pre>
<p>I can do it individually for each subfolder using <code>str.split('-', 1)[-1]</code> and then prepending '20' to the name, but I would like to automate the process.
I am renaming folder names, not the files themselves.</p>
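The per-folder `str.split('-', 1)[-1]` step can be automated with a bottom-up `os.walk` over the project root. A hedged sketch using a temporary tree standing in for Project1 (note: running it twice would prefix '20' again on any name that still contains a dash, so a real script should guard against already-renamed folders):

```python
import os
import tempfile

# Build a small stand-in for the Project1 tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'folderA', 'fwjekljfwelj-10-fwefw'))
os.makedirs(os.path.join(root, 'folderB', 'kljkgpjrjrel-11-wwref'))

# Walk bottom-up so each directory is renamed only after its children
# have been visited, keeping the paths yielded by os.walk valid.
for dirpath, dirnames, _ in os.walk(root, topdown=False):
    for name in dirnames:
        if '-' in name:
            new_name = '20' + name.split('-', 1)[-1]
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, new_name))
```

After the loop, `fwjekljfwelj-10-fwefw` has become `2010-fwefw` and `kljkgpjrjrel-11-wwref` has become `2011-wwref`; pointing `root` at the real Project1 path applies the same renaming there.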
|
<python><python-3.x><pandas>
|
2023-03-28 23:13:33
| 2
| 1,079
|
JodeCharger100
|
75,871,610
| 217,332
|
Spawning a new process with an asyncio loop from within the asyncio loop running in the current process
|
<p>I'm a little confused about the interaction between multiprocessing and asyncio. My goal is to be able to spawn async processes from other async processes. Here is a small example:</p>
<pre><code>import asyncio
from multiprocessing import Process
async def sleep_n(n):
await asyncio.sleep(n)
def async_sleep(n):
# This does not work
#
# loop = asyncio.get_event_loop()
# loop.run_until_complete(sleep_n(n))
# This works
asyncio.run(sleep_n(n))
async def spawn_another():
await asyncio.sleep(0.2)
p = Process(target=async_sleep, args=(5,))
p.start()
p.join()
def spawn():
# This does not work
# loop = asyncio.get_event_loop()
# loop.run_until_complete(spawn_another())
# This works
asyncio.run(spawn_another())
def doit():
p = Process(target=spawn)
p.start()
p.join()
if __name__ == '__main__':
doit()
</code></pre>
<p>If I replace <code>asyncio.run</code> with <code>get_event_loop().run_until_complete</code>, I get the following error: "The event loop is already running". This is raised from <code>loop.run_until_complete(sleep_n(n))</code>. What's the difference between these two?</p>
<p>(NB: the reason I care about this is, if it makes a difference in the proposed remedy, is because in my actual code the thing I'm running in async is a <code>grpc.aio</code> client which apparently requires me to use <code>run_until_complete</code> or otherwise I get an error about a Future that's attached to a different event loop. That said, this is just an aside and not really material to the question above.)</p>
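One relevant difference: `asyncio.run()` always creates a brand-new event loop, runs the coroutine, and closes the loop, while `get_event_loop()` can hand back a loop object the child process inherited at fork time that still believes it is running. A minimal sketch of the explicit equivalent of `asyncio.run()` that avoids reusing an inherited loop (a hedged illustration, not a full diagnosis of the grpc.aio case):

```python
import asyncio
from multiprocessing import Process

async def work():
    await asyncio.sleep(0.01)
    return 42

def child():
    # Explicit equivalent of asyncio.run(work()): build a fresh loop
    # instead of reusing whatever loop object this process inherited.
    loop = asyncio.new_event_loop()
    try:
        result = loop.run_until_complete(work())
    finally:
        loop.close()
    assert result == 42

def run_in_child_process():
    # Same pattern as spawn_another()/doit() in the question.
    p = Process(target=child)
    p.start()
    p.join()
    return p.exitcode
```

For the grpc.aio case, creating the client inside the coroutine passed to the fresh loop keeps its futures attached to that loop.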
|
<python><linux><python-asyncio><python-multiprocessing>
|
2023-03-28 22:33:10
| 2
| 83,780
|
danben
|
75,871,584
| 4,538,768
|
Serialize several models in Django
|
<p>I am looking to optimize the serialization of multiple models. Currently, I am able to serialize the models through the use of <code>SerializerMethodField</code> and providing the needed fields. I would like to use directly the serializer <code>SiteSchoolSerializer</code> to fetch in advance the results using <code>queryset.select_related</code> to reduce the time of the queries.</p>
<p>I have the models:</p>
<pre><code>site.py:
class SchoolDistance(models.Model):
school_id = models.ForeignKey(School,
on_delete=models.SET_NULL,
db_column="school_id",
null=True)
school_distance = models.DecimalField(max_digits=16, decimal_places=4, blank=False, null=False)
class Site(models.Model):
date_created = models.DateTimeField(auto_now=False, blank=True, null=True)
address = models.CharField(max_length=255, null=True, blank=True)
nearest_school = models.ForeignKey(SchoolDistance,
on_delete=models.SET_NULL,
db_column="nearest_school",
blank=True,
null=True)
------------------
location_asset.py:
class Location(models.Model):
name = models.CharField(null=True, blank=True, max_length=255)
location_type = models.CharField(null=True, blank=True, max_length=255)
class Meta:
unique_together = [['name','location_type']]
ordering = ("id", )
class School(Location):
description = models.CharField(null=True, blank=True, max_length=255)
keywords = models.CharField(null=True, blank=True, max_length=255)
class Meta:
ordering = ("id", )
</code></pre>
<p>With the following serializers:</p>
<pre><code>distance.py
class SiteSchoolSerializer(serializers.ModelSerializer):
class Meta:
#school = SchoolSerializer()
model = SchoolDistance
#fields = ('school_id', 'school_distance','school) why failing?
fields = ('school_id', 'school_distance')
------------------
location_types.py:
class SchoolSerializer(serializers.ModelSerializer):
class Meta:
model = School
fields = (
"id",
"name"
)
--------
get.py:
class SiteGetSerializer(serializers.ModelSerializer):
# nearest_school = serializers.SerializerMethodField()
nearest_school = SiteSchoolSerializer(required=False)
class Meta:
model = Site
fields = (
"id",
"address",
"nearest_school", #Should include name besides id and distance
)
# Want to avoid the below function and use directly SiteSchoolSerializer:
# def get_nearest_school(self, instance):
# nearest_school = None
# if instance.nearest_school:
# nearest_school = dict()
# nearest_school['school_id'] = instance.nearest_school.school_id_id
# nearest_school['school_distance'] = instance.nearest_school.school_distance
# nearest_school['name'] = School.objects.get(id=nearest_school['school_id']).name
# return nearest_school
</code></pre>
<p>Used by the following View:</p>
<pre><code>list.py:
class SiteList(generics.ListAPIView):
serializer_class = SiteGetSerializer
def get_queryset(self):
queryset = Site.objects.all()
return queryset
# Want to be able to use select_related
# Getting the error:
# django.core.exceptions.ImproperlyConfigured: Field name `school` is not valid for model `SchoolDistance`.
# when adding school = SchoolSerializer() to SiteSchoolSerializer
#return queryset.select_related('nearest_school').order_by('id')
</code></pre>
<p>Not sure what I am missing in order to use <code>SiteSchoolSerializer</code> serializer in <code>SiteGetSerializer</code> serializer.</p>
<p>Thanks a lot for your help</p>
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-03-28 22:27:32
| 2
| 1,787
|
JarochoEngineer
|
75,871,529
| 310,370
|
cv2 rembg remove function changes the color tone of the input image. How to fix it?
|
<p>In this demo the color is not changed: <a href="https://huggingface.co/spaces/KenjieDec/RemBG" rel="nofollow noreferrer">https://huggingface.co/spaces/KenjieDec/RemBG</a></p>
<p>So here is my method:</p>
<pre><code>import cv2
from PIL import Image
from rembg import remove

def remove_background(image_path):
    # Read the input image; note cv2.imread returns BGR/BGRA channel order
    input_image = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)
# Remove the background
output_image = remove(input_image, alpha_matting=True, alpha_matting_erode_size=10)
output_image_pil = Image.fromarray(output_image)
return output_image_pil
</code></pre>
<p>.
.
.</p>
<pre><code> image = remove_background(file_path)
image.save(save_file_path)
</code></pre>
<p>here input png. it has white background</p>
<p><a href="https://i.sstatic.net/LZOmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LZOmS.png" alt="enter image description here" /></a></p>
<p>here output png. background removed but the color tone is changed</p>
<p><a href="https://i.sstatic.net/2L7OE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2L7OE.png" alt="enter image description here" /></a></p>
|
<python><image-processing><rembg>
|
2023-03-28 22:18:08
| 1
| 23,982
|
Furkan GΓΆzΓΌkara
|
75,871,388
| 5,342,009
|
Speechmatics Python Code fails to use Google Cloud Storage signed url with fetch_data
|
<p>I am experimenting with Speechmatics Transcription API using a file in Google Cloud Storage with a signed url.</p>
<p>The <a href="https://docs.speechmatics.com/features-other/fetch-url" rel="nofollow noreferrer">SpeechMatics Document</a> says that I have to use fetch_data parameter in order to provide a URL to the config file.</p>
<p>But when I try to use the <a href="https://docs.speechmatics.com/introduction/batch-guide" rel="nofollow noreferrer">submit_job</a> function with the fetch_data parameter, my understanding is that I have to make the API call directly with the requests library.</p>
<p>So far I have had no luck with the following code:</p>
<pre><code>import requests
import logging
from django.conf import settings
logger = logging.getLogger('speechmatics')
class SpeechMatics():
def submit_file(audio_url, webhook_url, lang):
# Define request parameters
url = 'https://asr.api.speechmatics.com/v2/jobs'
headers = {
'Authorization': 'Bearer ' + settings.SPEECHMATICS_API_KEY,
'Content-Type': 'multipart/form-data'
}
logger.info("Submitting transcription job to Speechmatics API...")
logger.info("audio_url type: {}".format(type(audio_url)))
logger.info("API URL: {}".format(url))
logger.info("API headers: {}".format(headers))
payload = {
"type": "transcription",
"transcription_config": {
"language": lang,
"diarization": "speaker",
},
"fetch_data": {
"url": audio_url
}
}
logger.info("API payload: {}".format(payload))
# Send the request
try:
response = requests.post(url, headers=headers, data=payload)
# response.raise_for_status() # raise HTTPError for non-2xx response status codes
logger.info("Speechmatics API response: {}".format(response.json()))
logger.info("API response status code: {}".format(response.status_code))
logger.info("API response headers: {}".format(response.headers))
except requests.exceptions.RequestException as e:
logger.error("Speechmatics API request failed: {}".format(e))
raise # re-raise the exception to be handled at a higher level
</code></pre>
<p>Although my payload and config seem to be fine, I keep getting the following 400 error message:</p>
<pre><code>[speechmatics.py:18] API headers: {'Authorization': 'Bearer TOKEN', 'Content-Type': 'multipart/form-data'}
[speechmatics.py:29] API payload: {'type': 'transcription', 'transcription_config': {'language': 'en', 'diarization': 'speaker'}, 'fetch_data': {'url': 'https://storage.googleapis.com/staging-videoo-storage-7/8bdcf5aa-a998-440a-870d-7c31de591aca?...'}}
[speechmatics.py:36] Speechmatics API response: {'code': 400, 'message': 'no multipart boundary param in Content-Type'}
[speechmatics.py:37] API response status code: 400
[speechmatics.py:38] API response headers: {'Content-Length': '68', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains', 'Request-Id': '6126c67c56f12b5042b7e4f78b4632aa', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Allow-Methods': 'GET, PUT, POST, DELETE, PATCH, OPTIONS', 'Access-Control-Allow-Headers': 'DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization', 'Access-Control-Max-Age': '1728000', 'X-Cache': 'CONFIG_NOCACHE', 'X-Azure-Ref': '0AWAjZAAAAAB5KJjEElURTa7s+0gal3wqTUFOMzBFREdFMDMwNwBhN2JjOWQ4MC02YjBiLTQ1NWEtYjE3MS01NGJkZmNiYWE0YTk=', 'Date': 'Tue, 28 Mar 2023 21:45:36 GMT'}
</code></pre>
<p>What should I do to make a successful call to the Speechmatics API with a Google Cloud Storage signed URL?</p>
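The 400 message "no multipart boundary param in Content-Type" points at the hard-coded `'Content-Type': 'multipart/form-data'` header: a multipart Content-Type must carry a `boundary` parameter, which `requests` only adds when it assembles the body itself. A hedged sketch (the multipart field name `config` and the exact endpoint behaviour should be checked against the Speechmatics docs; the URL and token below are placeholders), built offline with a prepared request to show the header `requests` generates:

```python
import json
import requests

payload = {
    "type": "transcription",
    "transcription_config": {"language": "en", "diarization": "speaker"},
    "fetch_data": {"url": "https://example.com/signed-url"},  # placeholder URL
}

# Let requests build the multipart body (and its boundary) itself:
# do NOT set Content-Type manually.
req = requests.Request(
    "POST",
    "https://asr.api.speechmatics.com/v2/jobs",
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder token
    files={"config": (None, json.dumps(payload), "application/json")},
).prepare()

content_type = req.headers["Content-Type"]
print(content_type)  # multipart/form-data with a generated boundary parameter
```

Sending this prepared request through a `requests.Session` (instead of `requests.post(..., data=payload)` with the manual header) should remove the boundary error.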
|
<python><google-cloud-platform><google-cloud-storage>
|
2023-03-28 21:53:50
| 1
| 1,312
|
london_utku
|
75,871,250
| 9,481,731
|
using fstring or format causing 404 return in server response
|
<p>Team,
on Linux, using an f-string, format(), or concatenation in Python gets a 404 response from the server, while static values work. Stranger still, when I run this locally on a Mac with an f-string, it works.</p>
<p>Code I tried is commented</p>
<pre><code>GERRIT_CN = os.environ.get('GERRIT_CHANGE_NUMBER')
GERRIT_PS = os.environ.get('GERRIT_PATCHSET_NUMBER')
sq_url = 'https://sonar.company.com/'
projectKey="team-pba"
gerrit_cn_ps = f"{GERRIT_CN}-{GERRIT_PS}"
print(gerrit_cn_ps)
def sonar_api():
debug_requests_on()
sonar = SonarQubeClient(sonarqube_url=sq_url, username=myUsr, password=myPass)
# project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey="team-pba", pullRequest="124434-120") < WORKS
# project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey=f"{projectKey}", pullRequest=f"{GERRIT_CN}-{GERRIT_PS}") <FAILS
# project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey=f"{projectKey}", pullRequest=GERRIT_CN+"-"+GERRIT_PS) < FAILS
# project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey=projectKey, pullRequest=gerrit_cn_ps) < FAILS
</code></pre>
<p><a href="https://python-sonarqube-api.readthedocs.io/en/latest/examples/qualitygates.html" rel="nofollow noreferrer">sonar-python-api</a></p>
<p>using static values works</p>
<pre><code>project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey="team-pba", pullRequest="124434-139")
</code></pre>
<p>but the same code with an f-string gets a 404, and I am not sure what the API dislikes. Any hints on how the formatting might be changing the string?</p>
<p>On the Linux Jenkins container it fails ONLY when I use an f-string or format(). It works when I put in static values.</p>
<pre><code>+ python3.8 /home/jenkins/agent/workspace/sonar-api.py
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): sonar.company.com:443
DEBUG:urllib3.connectionpool:https://sonar.company.com:443 "GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-139 HTTP/1.1" 404 103
Traceback (most recent call last):
File "/home/jenkins/agent/workspace///sonar-api.py", line 61, in <module>
print(sonar_api())
File "/home/jenkins/agent/workspace///sonar-api.py", line 51, in sonar_api
project_pull_requests = sonar.qualitygates.get_project_qualitygates_status(projectKey=projectKey, pullRequest=gerrit_cn_ps)
File "/usr/local/lib/python3.8/dist-packages/sonarqube/utils/common.py", line 132, in inner_func
response = self._get(url_pattern, params=params)
File "/usr/local/lib/python3.8/dist-packages/sonarqube/utils/rest_client.py", line 141, in _get
return self.request("GET", path=path, params=params, data=data, headers=headers)
File "/usr/local/lib/python3.8/dist-packages/sonarqube/utils/rest_client.py", line 99, in request
raise NotFoundError(msg)
sonarqube.utils.exceptions.NotFoundError: Error in request. Possibly Not Found error [404]: Pull request '124434-139' in project 'team-pba' not found
124434-139
send: b'GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-139 HTTP/1.1\r\nHost: sonar.company.com\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Basic xxxxx\r\n\r\n'
reply: 'HTTP/1.1 404 \r\n'
header: Server: nginx/1.12.2
header: Date: Tue, 28 Mar 2023 21:16:09 GMT
header: Content-Type: application/json
header: Content-Length: 103
header: Connection: keep-alive
</code></pre>
<p>Successful log is when using static values.</p>
<pre><code>+ python3 /home/jenkins/agent/workspace/sonar-api.py
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): sonar.company.com:443
DEBUG:urllib3.connectionpool:https://sonar.company.com:443 "GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-139 HTTP/1.1" 200 881
[Pipeline] echo
124434-139
send: b'GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-139 HTTP/1.1\r\nHost
[Pipeline] }
</code></pre>
<p>fresh logs failure on linux using fstring.</p>
<pre><code>all values
124434
142
124434-142
send: b'GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-142 HTTP/1.1\r\nHost: sonar.company.com\r\nUser-Agent: python-requests/2.28.2\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Basic xxxxx\r\n\r\n'
reply: 'HTTP/1.1 404 \r\n'
header: Server: nginx/1.12.2
header: Date: Tue, 28 Mar 2023 21:45:21 GMT
header: Content-Type: application/json
header: Content-Length: 103
header: Connection: keep-alive
[Pipeline] }
</code></pre>
<p>fresh logs success on mac using fstring</p>
<pre><code>Command> python3 sonar-api.py
all values
124434
142
124434-142
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): sonar.company.com:443
send: b'GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-142 HTTP/1.1\r\nHost: sonar.company.com\r\nUser-Agent: python-requests/2.27.1\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Basic xxxx\r\n\r\n'
reply: 'HTTP/1.1 200 \r\n'
header: Server: nginx/1.12.2
header: Date: Tue, 28 Mar 2023 21:53:55 GMT
header: Content-Type: application/json
header: Content-Length: 739
header: Connection: keep-alive
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: X-Content-Type-Options: nosniff
header: Cache-Control: no-cache, no-store, must-revalidate
DEBUG:urllib3.connectionpool:https://sonar.company.com:443 "GET /api/qualitygates/project_status?projectKey=team-pba&pullRequest=124434-142 HTTP/1.1" 200 739
52.53:0.0
</code></pre>
|
<python><python-3.x><sonarqube>
|
2023-03-28 21:32:33
| 0
| 1,832
|
AhmFM
|
75,871,154
| 6,462,301
|
Plotly: Share x-axis for subset of subplots
|
<p>The following python code will share the x-axis for all three subplots:</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=3, cols=1,
shared_xaxes=True,
vertical_spacing=0.02)
fig.add_trace(go.Scatter(x=[0, 1, 2], y=[10, 11, 12]),
row=3, col=1)
fig.add_trace(go.Scatter(x=[2, 3, 4], y=[100, 110, 120]),
row=2, col=1)
fig.add_trace(go.Scatter(x=[3, 4, 5], y=[1000, 1100, 1200]),
row=1, col=1)
</code></pre>
<p>Is there a simple way to share the x-axis on the first two rows, but allow the x-axis of the third row to be set freely?</p>
<p>I'm aware that it's possible to individually specify the ranges via calls to <code>update_xaxes</code> for the individual subplots. I was wondering if there is another approach that would avoid this: something in the spirit of what @Ali Taghipour Heidari suggested.</p>
|
<python><plotly><subplot>
|
2023-03-28 21:18:24
| 3
| 1,162
|
rhz
|
75,871,139
| 967,621
|
Read a file line by line in Pyodide
|
<p>The code below reads the user-selected input file entirely. This requires a lot of memory for very large (> 10 GB) files. I need to read a file line by line.</p>
<p><strong>How can I read a file in Pyodide one line at a time?</strong></p>
<hr />
<pre><code><!doctype html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js"></script>
</head>
<body>
<button>Analyze input</button>
<script type="text/javascript">
async function main() {
// Get the file contents into JS
const [fileHandle] = await showOpenFilePicker();
const fileData = await fileHandle.getFile();
const contents = await fileData.text();
// Create the Python convert toy function
let pyodide = await loadPyodide();
let convert = pyodide.runPython(`
from pyodide.ffi import to_js
def convert(contents):
return to_js(contents.lower())
convert
`);
let result = convert(contents);
console.log(result);
const blob = new Blob([result], {type : 'application/text'});
let url = window.URL.createObjectURL(blob);
var downloadLink = document.createElement("a");
downloadLink.href = url;
downloadLink.text = "Download output";
downloadLink.download = "out.txt";
document.body.appendChild(downloadLink);
}
const button = document.querySelector('button');
button.addEventListener('click', main);
</script>
</body>
</html>
</code></pre>
<p>The code is from <a href="https://stackoverflow.com/a/75834743/967621">this answer to question "Select and read a file from user's filesystem"</a>.</p>
<hr />
<p>Based on <a href="https://stackoverflow.com/a/75871580/967621">the answer by <em>rth</em></a>, I used the code below. It still has 2 issues:</p>
<ul>
<li>The chunks break some lines into parts. The example input file has 100 chars per line, but the console log (below) shows that chunk boundaries do not fall on newlines, so some lines are split across chunks.</li>
<li>I cannot get the variable <code>result</code> to be written into the output file, which is available for download to the user (see below, where for the example purposes it is replaced by a dummy string <code>'result'</code>).</li>
</ul>
<pre><code><!doctype html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js"></script>
</head>
<body>
<button>Analyze input</button>
<script type="text/javascript">
async function main() {
// Create the Python convert toy function
let pyodide = await loadPyodide();
let convert = pyodide.runPython(`
from pyodide.ffi import to_js
def convert(contents):
for line in contents.split('\\n'):
print(len(line))
return to_js(contents.lower())
convert
`);
// Get the file contents into JS
const bytes_func = pyodide.globals.get('bytes');
const [fileHandle] = await showOpenFilePicker();
let fh = await fileHandle.getFile()
const stream = fh.stream();
const reader = stream.getReader();
// Do a loop until end of file
while( true ) {
const { done, value } = await reader.read();
if( done ) { break; }
handleChunk( value );
}
console.log( "all done" );
function handleChunk( buf ) {
console.log( "received a new buffer", buf.byteLength );
let result = convert(bytes_func(buf).decode('utf-8'));
}
const blob = new Blob(['result'], {type : 'application/text'});
let url = window.URL.createObjectURL(blob);
var downloadLink = document.createElement("a");
downloadLink.href = url;
downloadLink.text = "Download output";
downloadLink.download = "out.txt";
document.body.appendChild(downloadLink);
}
const button = document.querySelector('button');
button.addEventListener('click', main);
</script>
</body>
</html>
</code></pre>
<p>Given this input file with 100 characters per line:</p>
<pre class="lang-sh prettyprint-override"><code>perl -le 'for (1..1e5) { print "0" x 100 }' > test_100x1e5.txt
</code></pre>
<p>I am getting this console log output, indicating that lines are broken not at the newline:</p>
<pre><code>received a new buffer 65536
648pyodide.asm.js:10 100
pyodide.asm.js:10 88
read_write_bytes_func.html:41 received a new buffer 2031616
pyodide.asm.js:10 12
20114pyodide.asm.js:10 100
pyodide.asm.js:10 89
read_write_bytes_func.html:41 received a new buffer 2097152
pyodide.asm.js:10 11
20763pyodide.asm.js:10 100
pyodide.asm.js:10 77
read_write_bytes_func.html:41 received a new buffer 2097152
pyodide.asm.js:10 23
20763pyodide.asm.js:10 100
pyodide.asm.js:10 65
read_write_bytes_func.html:41 received a new buffer 2097152
pyodide.asm.js:10 35
20763pyodide.asm.js:10 100
pyodide.asm.js:10 53
read_write_bytes_func.html:41 received a new buffer 1711392
pyodide.asm.js:10 47
16944pyodide.asm.js:10 100
pyodide.asm.js:10 0
read_write_bytes_func.html:37 all done
</code></pre>
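<p>Independent of Pyodide, a common way to handle chunk boundaries is to buffer the trailing partial line of each chunk and prepend it to the next one. A minimal sketch (the chunk strings here are illustrative stand-ins for the decoded buffers):</p>

```python
def iter_lines(chunks):
    """Yield complete lines from an iterable of string chunks."""
    pending = ""
    for chunk in chunks:
        pending += chunk
        lines = pending.split("\n")
        pending = lines.pop()  # the last piece may be an incomplete line
        yield from lines
    if pending:  # flush a final line that had no trailing newline
        yield pending

print(list(iter_lines(["abc\nde", "f\ngh", "i\n"])))  # ['abc', 'def', 'ghi']
```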
<p>If I change from this:</p>
<pre><code>const blob = new Blob(['result'], {type : 'application/text'});
</code></pre>
<p>to that:</p>
<pre><code>const blob = new Blob([result], {type : 'application/text'});
</code></pre>
<p>then I get the error:</p>
<pre><code>Uncaught (in promise) ReferenceError: result is not defined
at HTMLButtonElement.main (read_write_bytes_func.html:45:34)
</code></pre>
|
<javascript><python><webassembly><pyodide>
|
2023-03-28 21:15:45
| 2
| 12,712
|
Timur Shtatland
|
75,871,053
| 4,977,957
|
Standard Approach to Adding Services to Python FastAPI App?
|
<p>I define a service as a class that provides DRY methods that can be called from anywhere in the code.</p>
<p>My FastAPI controller has an endpoint:</p>
<pre><code>@router.get("/health")
async def health_check():
return {"status": "pass"}
</code></pre>
<p>I would like to update it to <code>return MyHelper().get_health_status</code>, where <code>MyHelper</code> is defined in <code>./services/my_helper.py</code>. What is the standard approach here?</p>
<p>Ideally, I would like to use dependency injection like in .NET core, where the process is roughly:</p>
<ol>
<li>Define the service in a namespace (.NET doesn't care about file pathing as long as it's the same root).</li>
<li>Import the namespace into the <code>Startup.cs</code> file.</li>
<li>Register the service with the application in the aforementioned <code>Startup</code> file: <code>ConfigureServices(IServiceCollection services)</code> -> <code>services.AddSingleton<IMyHelper, MyHelper>();</code></li>
<li>Inject the helper interface into the constructor of any other files in the project.</li>
</ol>
<p>What's the best way to approach this in a FastAPI project? Is there DI built in or what third party library should be used? If no DI is built in, how do I register my service so I can at least instantiate it inside router ("controller") files?</p>
<p>P.S. I am reading <a href="https://stackoverflow.com/questions/31678827/what-is-a-pythonic-way-for-dependency-injection">What is a Pythonic way for Dependency Injection?</a>, but am still wondering if there is a preference when using FastAPI specifically. Again, given how REST APIs are basically built into .NET Core and there is a "standard" way (which is not just instantiating helper classes).</p>
|
<python><dependency-injection><service><fastapi>
|
2023-03-28 21:03:37
| 2
| 12,814
|
VSO
|
75,871,001
| 15,452,898
|
Advanced filtering in PySpark
|
<p>Currently I'm performing some calculations on a large database that contains various information on how loans are paid back by various borrowers.
From a technical point of view, I'm using PySpark and have just run into the issue of how to use advanced filtering operations.</p>
<p>For example my dataframe looks like this:</p>
<pre><code>Name ID ContractDate LoanSum Status
Boris ID3 2022-10-10 10 Closed
Boris ID3 2022-10-15 10 Active
Boris ID3 2022-11-22 15 Active
John ID1 2022-11-05 30 Active
Martin ID6 2022-12-10 40 Closed
Martin ID6 2022-12-12 40 Active
Martin ID6 2022-07-11 40 Active
</code></pre>
<p>I have to create a dataframe that contains all loans issued by an organization to specific borrowers (grouped by ID) where the gap between two loans assigned to one unique ID is at most 5 days and the LoanSum is the same.</p>
<p>In other words, I have to obtain the following table:</p>
<pre><code>Name ID ContractDate LoanSum Status
Boris ID3 2022-10-10 10 Closed
Boris ID3 2022-10-15 10 Active
Martin ID6 2022-12-10 40 Closed
Martin ID6 2022-12-12 40 Active
</code></pre>
<p>What should I do in order to run this filtering?</p>
<p>Thank you in advance</p>
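<p>Not PySpark, but the selection logic can be sketched in plain pandas (a PySpark version would use a <code>Window</code> partitioned by ID and ordered by ContractDate, with <code>lag</code>/<code>lead</code>). Note the expected output implies a gap of at most 5 days; this sketch uses that reading:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Boris", "Boris", "Boris", "John", "Martin", "Martin", "Martin"],
    "ID":   ["ID3", "ID3", "ID3", "ID1", "ID6", "ID6", "ID6"],
    "ContractDate": pd.to_datetime([
        "2022-10-10", "2022-10-15", "2022-11-22", "2022-11-05",
        "2022-12-10", "2022-12-12", "2022-07-11"]),
    "LoanSum": [10, 10, 15, 30, 40, 40, 40],
})

df = df.sort_values(["ID", "ContractDate"])
g = df.groupby("ID")

# Gap in days to the previous / next loan of the same borrower.
prev_gap = g["ContractDate"].diff().dt.days
next_gap = g["ContractDate"].diff(-1).dt.days.abs()
prev_same = df["LoanSum"].eq(g["LoanSum"].shift())
next_same = df["LoanSum"].eq(g["LoanSum"].shift(-1))

# Keep a row if its neighbour (before or after) has the same LoanSum
# within 5 days.
mask = (prev_same & prev_gap.le(5)) | (next_same & next_gap.le(5))
result = df[mask].sort_index()
print(result)
```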
|
<python><pyspark><filtering>
|
2023-03-28 20:56:44
| 2
| 333
|
lenpyspanacb
|
75,870,967
| 10,620,003
|
Split the dataframe with a minimal way in pandas
|
<p>I have 10 different dataframes with shape 1000*1000. I have to use a part of these dataframes for training and validation. Currently I separate them with the following lines of code:</p>
<pre><code> df1_train = df1.loc[:, Start:end]
df2_train = df2.loc[:,Start:end]
df3_train = df3.loc[:,Start:end]
...
</code></pre>
<p>This makes for a lot of lines of code. I am looking for a more minimal way to do it. Is there an option in pandas or numpy for this? Thank you.</p>
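<p>A minimal sketch of one common approach: keep the frames in a list (or dict) and slice them all in one comprehension. <code>start</code>/<code>end</code> here stand in for the column labels used in the question:</p>

```python
import numpy as np
import pandas as pd

# Three toy frames standing in for the ten 1000x1000 ones.
dfs = [pd.DataFrame(np.arange(12).reshape(3, 4), columns=list("abcd"))
       for _ in range(3)]

start, end = "a", "c"  # hypothetical column labels
trains = [df.loc[:, start:end] for df in dfs]  # one slice per frame
```

<p>With a dict (<code>{"df1": df1, ...}</code>) the comprehension keeps the names: <code>{name: df.loc[:, start:end] for name, df in dfs.items()}</code>.</p>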
|
<python><pandas>
|
2023-03-28 20:51:50
| 2
| 730
|
Sadcow
|
75,870,852
| 15,067,623
|
Placing brackets in logical formula where possible in Python
|
<p>How can I place brackets, where possible, in a first-order logical formula in Python? The formula consists only of logical not Β¬, and β§, or β¨, implies β and bi-implies β. For example, "Β¬(A β§ B) β¨ C" yields ((Β¬(A β§ B)) β¨ C). You can see my attempt below. I want a function that wraps each operator in brackets (see the tests) so that only one operator is present within each pair of brackets. Does anyone know why my code fails, or any solution to this problem?</p>
<pre><code>import re
def add_brackets(formula):
# Define operator priorities
priority = {'Β¬': 4, 'β§': 3, 'β¨': 2, 'β': 1, 'β': 0}
# Convert formula to list of tokens
tokens = re.findall(r'\(|\)|Β¬|β§|β¨|β|β|[A-Z]', formula)
# Initialize stack to hold operators
stack = []
# Initialize queue to hold output
output = []
# Loop through tokens
for token in tokens:
if token.isalpha():
output.append(token)
elif token == 'Β¬':
stack.append(token)
elif token in 'β§β¨ββ':
while stack and stack[-1] != '(' and priority[token] <= priority[stack[-1]]:
output.append(stack.pop())
stack.append(token)
elif token == '(':
stack.append(token)
elif token == ')':
while stack and stack[-1] != '(':
output.append(stack.pop())
stack.pop()
# Empty out remaining operators on the stack
while len(stack) != 0:
if stack[-1] == '(':
raise ValueError('Unmatched left parenthesis')
output.append(stack.pop())
# Loop through output
for i, token in enumerate(output):
if token in 'β§β¨ββ':
if len(output) < i + 3:
raise ValueError('Invalid formula')
result = '({} {} {})'.format(output[i+1], token, output[i+2])
output[i:i+3] = [result]
# Return final result
return output[0]
# Formula 1
formula1 = "A β§ B β¨ C"
assert add_brackets(formula1) == "(A β§ (B β¨ C))"
# Formula 2
formula2 = "Β¬(A β§ B) β¨ C"
assert add_brackets(formula2) == "((Β¬(A β§ B)) β¨ C)"
# Formula 3
formula3 = "A β§ B β§ C β§ D β§ E"
assert add_brackets(formula3) == "((((A β§ B) β§ C) β§ D) β§ E))"
# Formula 4
formula4 = "Β¬A β§ Β¬B β§ Β¬C"
assert add_brackets(formula4) == "(((Β¬A) β§ (Β¬B)) β§ (Β¬C))"
# Formula 5
formula5 = "A β§ Β¬(B β¨ C)"
assert add_brackets(formula5) == "(A β§ (Β¬(B β¨ C)))"
# Formula 6
formula6 = "A β¨ B β C β§ D"
assert add_brackets(formula6) == "((A β¨ B) β (C β§ D))"
# Formula 7
formula7 = "A β§ B β C β¨ D"
assert add_brackets(formula7) == "(((A β§ B) β (C β¨ D)))"
# Formula 8
formula8 = "Β¬(A β§ B) β C β¨ D"
assert add_brackets(formula8) == "((Β¬(A β§ B)) β (C β¨ D))"
# Formula 9
formula9 = "(A β B) β (C β D)"
assert add_brackets(formula9) == "((A β B) β (C β D))"
# Formula 10
formula10 = "(A β§ B) β¨ (C β§ D) β E"
assert add_brackets(formula10) == "(((A β§ B) β¨ (C β§ D)) β E)"
</code></pre>
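<p>For comparison, a hedged sketch of an alternative approach: a small recursive-descent parser with one function per precedence level, wrapping every operator node in its own brackets. Two caveats: under the standard precedence in your <code>priority</code> table, formula 1 comes out as <code>((A β§ B) β¨ C)</code> rather than <code>(A β§ (B β¨ C))</code>, so some of the posted test expectations look mutually inconsistent; and this sketch assumes all binary operators are left-associative:</p>

```python
import re

def parse(formula):
    tokens = re.findall(r'\(|\)|Β¬|β§|β¨|β|β|[A-Z]', formula)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():
        tok = eat()
        if tok == '(':          # parenthesized subformula
            node = biimp()
            eat()               # consume ')'
            return node
        if tok == 'Β¬':          # unary not binds tightest
            return '(Β¬{})'.format(atom())
        return tok              # a variable A-Z

    def binary(sub, op):        # left-associative level for one operator
        node = sub()
        while peek() == op:
            eat()
            node = '({} {} {})'.format(node, op, sub())
        return node

    def conj():  return binary(atom, 'β§')
    def disj():  return binary(conj, 'β¨')
    def imp():   return binary(disj, 'β')
    def biimp(): return binary(imp, 'β')

    return biimp()
```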
|
<python><string><parsing><logic>
|
2023-03-28 20:37:38
| 1
| 1,386
|
Jip Helsen
|
75,870,708
| 15,433,308
|
Python equivalent of JTS ConcaveHullOfPolygons
|
<p>The latest version of JTS <a href="https://locationtech.github.io/jts/javadoc/org/locationtech/jts/algorithm/hull/ConcaveHullOfPolygons.html" rel="nofollow noreferrer">implements an algorithm</a> that computes a concave hull of the input polygons, which guarantees the input polygons are contained in the result hull.</p>
<p>All of the pythonic concave hull implementations I've seen take points as input, so there is no guarantee the result will contain the input polygons (if, for example, I use the polygon vertices as input).</p>
<p>Is there a pythonic implementation that achieves the same result as JTS?</p>
|
<python><geometry><computational-geometry><jts><concave-hull>
|
2023-03-28 20:18:59
| 1
| 492
|
krezno
|
75,870,695
| 781,938
|
How do I get a pandas MultiIndex level as a series, keeping the original MultiIndex?
|
<p>Suppose I have a MultiIndex like:</p>
<pre class="lang-py prettyprint-override"><code>multiindex = pd.MultiIndex.from_product([['a', 'b'], range(3)], names=['first', 'second'])
</code></pre>
<pre><code>MultiIndex([('a', 0),
('a', 1),
('a', 2),
('b', 0),
('b', 1),
('b', 2)],
names=['first', 'second'])
</code></pre>
<p>How do I get a Series or DataFrame like this?</p>
<pre class="lang-py prettyprint-override"><code> second
first second
a 0 0
1 1
2 2
b 0 0
1 1
2 2
</code></pre>
<p>Asking because I want to use this index level to compute some other stuff.</p>
<p>Note: I do not want to add a new column to the original DataFrame.</p>
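<p>A sketch of one way to do this: <code>MultiIndex.to_frame(index=True)</code> materializes each level as a column while keeping the full MultiIndex, so selecting that column gives the level as a Series with the original index:</p>

```python
import pandas as pd

multiindex = pd.MultiIndex.from_product([['a', 'b'], range(3)],
                                        names=['first', 'second'])

# The 'second' level as a Series, still indexed by the full MultiIndex.
s = multiindex.to_frame(index=True)['second']

# Equivalent: pd.Series(multiindex.get_level_values('second'),
#                       index=multiindex, name='second')
```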
|
<python><pandas><dataframe>
|
2023-03-28 20:16:46
| 1
| 6,130
|
william_grisaitis
|
75,870,461
| 877,329
|
Workaround for "Unknown template argument type 604" when usinig clang python bindings
|
<p>I am playing around with python-clang, version 15. I fed it with a concept, and got error</p>
<pre><code>ValueError: Unknown template argument kind 604
</code></pre>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>import clang.cindex
def visit_nodes(node, src_file, depth):
if node.location.file == None or node.location.file.name == src_file:
for i in range(0, depth):
print(' ', end = '')
print('%s %s'%(node.kind, node.spelling))
for child in node.get_children():
visit_nodes(child, src_file, depth + 1)
def load_symbols(src_file, compiler_options = None):
index = clang.cindex.Index.create()
translation_unit = index.parse(src_file, compiler_options)
visit_nodes(translation_unit.cursor, src_file, 0)
if __name__ == '__main__':
import sys
if len(sys.argv) < 2:
exit(1)
load_symbols(sys.argv[-1], sys.argv[1:-1])
</code></pre>
<p>Run with</p>
<p><code>-std=c++20 <input file></code></p>
<p>I guess this is missing functionality in python-clang. Can I somehow extract the template argument of kind 604 manually, or should I wait for the next release to fix this?</p>
|
<python><clang><c++20>
|
2023-03-28 19:44:44
| 1
| 6,288
|
user877329
|
75,870,293
| 5,089,311
|
Tkinter: share single Frame accross multiple tabs (or dynamically reparent)
|
<p>I have a <code>tkinter.Notebook</code> that contains multiple tabs.
I also have a <code>tkinter.Frame</code> "run box" with a set of controls that I want to share across several tabs. Specifically, <code>Test info</code> and <code>Test list</code> should have it.</p>
<p><a href="https://i.sstatic.net/MUwyj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MUwyj.png" alt="enter image description here" /></a></p>
<p>Currently I duplicate entire "run box" into each tab that needs it. And it works somewhat fine, but feels excessive and not right, because code behind "run box" controls is almost identical.</p>
<p>Ideally I want this "run box" to be a single instance, dynamically shown on the active tab. Can anyone advise how to detach the "run box" from one tab and re-attach it to a different tab? I would do that on the <code>Notebook</code> tab-switch event.</p>
|
<python><tkinter>
|
2023-03-28 19:23:39
| 1
| 408
|
Noob
|
75,870,152
| 781,938
|
Pandas: raise an error if indexes don't match when concat'ing or merging?
|
<p>I want to concatenate (index-wise) or merge two pandas DataFrames. However, I want this to fail if the two DataFrames don't share the same index (potentially MultiIndex).</p>
<p>Does pandas offer a way to assert this when combining such datasets?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
rng = np.random.default_rng(42)
df1 = pd.DataFrame({'A': rng.integers(0, 10, size=3)}, index=['a', 'b', 'c'])
df2 = pd.DataFrame({'B': rng.integers(0, 10, size=3)}, index=['a', 'b', 'd'])
pd.merge(df1, df2, left_index=True, right_index=True)
</code></pre>
<pre><code> A B
a 0.0 0.0
b 7.0 6.0
c 6.0 NaN
d NaN 2.0
</code></pre>
<p>I'd like this to raise an error, since the indexes don't match.</p>
<p>Do any of pandas's merge functions (merge, concat, join, etc) offer any sort of index checking?</p>
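<p>As far as I know there is no merge flag that requires identical indexes (<code>merge</code>'s <code>validate=</code> checks cardinality like <code>"1:1"</code>, not index equality), but a small wrapper around <code>Index.equals</code> gives the desired behavior; a sketch:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df1 = pd.DataFrame({'A': rng.integers(0, 10, size=3)}, index=['a', 'b', 'c'])
df2 = pd.DataFrame({'B': rng.integers(0, 10, size=3)}, index=['a', 'b', 'd'])

def strict_concat(left, right):
    # Refuse to combine unless the (Multi)Indexes are exactly equal.
    if not left.index.equals(right.index):
        raise ValueError("indexes do not match")
    return pd.concat([left, right], axis=1)

try:
    strict_concat(df1, df2)
except ValueError as e:
    print(e)  # indexes do not match
```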
|
<python><pandas><dataframe>
|
2023-03-28 19:06:10
| 1
| 6,130
|
william_grisaitis
|
75,870,150
| 3,507,127
|
Convert Pandas DataFrame to nested dictionary where key value pairs are columns
|
<p>I have a pandas dataframe with 3 columns. Say it looks like this:</p>
<pre><code>test_df = pd.DataFrame({
'key1': [1, 1, 1, 1, 2, 2, 2],
'key2': ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
'value': ['a-mapped', 'b-mapped', 'c-mapped', 'd-mapped', 'e-mapped', 'f-mapped', 'g-mapped']
})
</code></pre>
<pre><code>> test_df
key1 key2 value
0 1 a a-mapped
1 1 b b-mapped
2 1 c c-mapped
3 1 d d-mapped
4 2 e e-mapped
5 2 f f-mapped
6 2 g g-mapped
</code></pre>
<p>I would like to return a nested dictionary where the first keys are <code>key1</code> and then then the second key value pair would be <code>{key2: value}</code>. For example, I would want the <code>desired_result</code> to be</p>
<pre><code>{
1: {'a': 'a-mapped', 'b': 'b-mapped', 'c': 'c-mapped', 'd': 'd-mapped'},
2: {'e': 'e-mapped', 'f': 'f-mapped', 'g': 'g-mapped'}
}
</code></pre>
<p>I can achieve this by using some loops:</p>
<pre><code>desired_result = {}
for key1 in test_df.key1:
desired_result[key1] = {}
for idx, row in test_df.iterrows():
desired_result[row.key1][row.key2] = row.value
</code></pre>
<p>Is there a more efficient way of doing this?</p>
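<p>A sketch of a more direct approach: group on <code>key1</code> and build each inner dict from the <code>key2</code>/<code>value</code> pairs, avoiding <code>iterrows</code> entirely:</p>

```python
import pandas as pd

test_df = pd.DataFrame({
    'key1': [1, 1, 1, 1, 2, 2, 2],
    'key2': ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
    'value': ['a-mapped', 'b-mapped', 'c-mapped', 'd-mapped',
              'e-mapped', 'f-mapped', 'g-mapped'],
})

# One inner dict per key1 group, built from paired columns.
result = {k: dict(zip(g['key2'], g['value']))
          for k, g in test_df.groupby('key1')}
```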
|
<python><pandas>
|
2023-03-28 19:05:42
| 4
| 9,006
|
Vincent
|
75,869,943
| 3,529,833
|
How to run a local AWS Lambda container as part of a docker compose?
|
<p>I'm having a huge headache figuring this out. I just want to emulate a local AWS Lambda service to easily test integration without writing custom clients and such.</p>
<p>I have this monorepo:</p>
<pre><code> | lambda-service-emulator/
---| app.py
---| Dockerfile
| lambda-function/
---| Dockerfile
| docker-compose.yml
</code></pre>
<p>lambda-service-emulator/app.py is a very simple FastAPI app, with a single endpoint to simulate the Lambda service API:</p>
<pre><code>from fastapi import FastAPI, Body, Header
import os
import requests
app = FastAPI()
@app.post("/2015-03-31/functions/{function_name}/invocations")
def invoke_lambda(
function_name: str,
payload: dict = Body({})
):
f_endpoint = os.environ['FUNCTION_ENDPOINT']
r = requests.post(f'{f_endpoint}/2015-03-31/functions/function/invocations', json=payload)
try:
data = r.json()
except Exception:
return {
'statusCode': 400
}
return data
</code></pre>
<p>lambda-function/Dockerfile is an image that is compliant with AWS Lambda:</p>
<pre><code>FROM public.ecr.aws/lambda/provided as runtime
# installing stuff
CMD ["handler.handler"]
</code></pre>
<p>And the docker-compose:</p>
<pre><code>version: '3.8'
services:
lambda-service:
build: ./lambda-service-emulator
ports:
- 8081:8080
environment:
FUNCTION_ENDPOINT: http://lambda-function:8080
lambda-function:
volumes:
- ./lambda-function:/var/task
build: ./lambda-function
ports:
- 8082:8080
</code></pre>
<p>Here's the problem, when I try to invoke the lambda through compose, the code goes through normally, but before it can return the response, it gives this error:</p>
<blockquote>
<p>[ERROR] (rapid) Failed to reserve: AlreadyReserved</p>
</blockquote>
<p>I think it's because this image is not meant to keep running; it should tear down after every invocation. But I'm not sure how to solve this while maintaining the docker compose structure.</p>
<p>It works if I run the container as a one off, as per this docs: <a href="https://docs.aws.amazon.com/lambda/latest/dg/images-test.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/images-test.html</a></p>
|
<python><amazon-web-services><docker><aws-lambda>
|
2023-03-28 18:42:28
| 0
| 3,221
|
Mojimi
|
75,869,900
| 967,526
|
pants with isort thinks my python modules are third party
|
<p>I have a pants project that I've modified from the <a href="https://github.com/pantsbuild/example-python" rel="nofollow noreferrer">example repo</a> so that the source code resides under <code>project1/src</code>:</p>
<pre><code>βββ BUILD
βββ LICENSE
βββ README.md
βββ mypy.ini
βββ pants.ci.toml
βββ pants.toml
βββ project1
βΒ Β βββ src
βΒ Β βββ BUILD
βΒ Β βββ allthing.py
βΒ Β βββ helloworld
βΒ Β βββ BUILD
βΒ Β βββ __init__.py
βΒ Β βββ greet
βΒ Β βΒ Β βββ BUILD
βΒ Β βΒ Β βββ __init__.py
βΒ Β βΒ Β βββ greeting.py
βΒ Β βΒ Β βββ greeting_test.py
βΒ Β βΒ Β βββ translations.json
βΒ Β βββ main.py
βΒ Β βββ translator
βΒ Β βββ BUILD
βΒ Β βββ __init__.py
βΒ Β βββ translator.py
βΒ Β βββ translator_test.py
βββ python-default.lock
βββ requirements.txt
</code></pre>
<p>The <code>BUILD</code> files under <code>project1</code> are all boilerplate:</p>
<pre><code>python_sources(
name="lib",
)
</code></pre>
<p>I added the <code>allthing</code> module and modified <code>helloword.main</code> to import it:</p>
<pre class="lang-py prettyprint-override"><code>from colors import green
from allthing import Whatevs
from helloworld.greet.greeting import Greeter
def say_hello() -> None:
greeting = Greeter().greet("Pantsbuild")
print(green(greeting))
</code></pre>
<p>When I run <code>pants fmt ::</code>, <code>isort</code> places the <code>allthing</code> import with the third-party modules:</p>
<pre class="lang-py prettyprint-override"><code>from allthing import Whatevs
from colors import green
from helloworld.greet.greeting import Greeter
</code></pre>
<p>I expect it to be organized with the other first party module, <code>helloworld</code>, as in the first snippet.</p>
<hr />
<p>I modified <code>pants.toml</code> to reflect the sources root:</p>
<pre class="lang-ini prettyprint-override"><code>[source]
root_patterns = ["src"]
</code></pre>
<p>I confirmed that pants knows the roots:</p>
<pre><code>$ pants roots
project1/src
</code></pre>
<p>This did not help.</p>
<p>The only way that I can get <code>pants fmt</code> to produce the correct result is to move the <code>src</code> directory out of <code>project1</code> into the root:</p>
<pre><code>βββ BUILD
βββ LICENSE
βββ README.md
βββ mypy.ini
βββ pants.ci.toml
βββ pants.toml
βββ project1
βββ python-default.lock
βββ requirements.txt
βββ src
βββ BUILD
βββ allthing.py
βββ helloworld
...
</code></pre>
<p>With the <code>src</code> folder at the project root, <code>pants fmt</code> always produces the correct result, regardless of how I've configured the sources roots.</p>
<p>For the curious, <code>.isort.cfg</code> is the same as in the example project:</p>
<pre class="lang-ini prettyprint-override"><code>[settings]
# This is to make isort compatible with Black. See
# https://black.readthedocs.io/en/stable/the_black_code_style.html#how-black-wraps-lines.
line_length=88
multi_line_output=3
include_trailing_comma=True
force_grid_wrap=0
use_parentheses=True
known_first_party=helloworld
default_section=THIRDPARTY
</code></pre>
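<p>Worth noting: the <code>.isort.cfg</code> above pins <code>known_first_party=helloworld</code>, so a new top-level module such as <code>allthing</code> would be classified as third party regardless of source roots. A sketch of the change that should make isort treat it as first party, assuming isort's documented <code>known_first_party</code> behavior:</p>

```ini
[settings]
known_first_party=helloworld,allthing
```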
|
<python><isort><pants>
|
2023-03-28 18:38:38
| 1
| 3,498
|
chadrik
|
75,869,700
| 9,461,736
|
VS code has the wrong ipython directory
|
<p>My ipython versions on the mac terminal and on the VS Code terminal are different, and I have different packages installed in each. This is confirmed when I run <code>which ipython</code>: in one terminal I get</p>
<pre><code>β― which ipython
/usr/local/bin/ipython
</code></pre>
<p>and in the other</p>
<pre><code>β― which ipython
/Users/myname/opt/anaconda3/bin/ipython
</code></pre>
<p>How can I fix this? Thank you.</p>
|
<python><visual-studio-code>
|
2023-03-28 18:13:57
| 1
| 519
|
stracc
|
75,869,605
| 2,445,273
|
Python requests result doesn't match the website because of JavaScript
|
<p>I'm trying to scrape links of products from a webpage (url below). The page uses JavaScript. I tried different libraries, but the links don't show up in the results (the links have the format <code>*/product/*</code>, as you can see by hovering over product links when you open the below url).</p>
<pre class="lang-py prettyprint-override"><code>url = 'https://www.bcliquorstores.com/product-catalogue?categoryclass=coolers%20%26%20ciders&special=new%20product&sort=name.raw:asc&page=1'
headers = {
'Host': 'www.bcliquorstores.com',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.7,fa;q=0.3',
}
</code></pre>
<p>Using <code>requests</code> Library:</p>
<pre class="lang-py prettyprint-override"><code>import requests
res = requests.get(url, headers=headers)
</code></pre>
<p>Using <code>urllib</code> library</p>
<pre class="lang-py prettyprint-override"><code>import urllib.request
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
response.read().decode()
</code></pre>
<p>Using <code>requests_html</code> library:</p>
<pre class="lang-py prettyprint-override"><code>from requests_html import HTMLSession, AsyncHTMLSession
asession = AsyncHTMLSession()
r = await asession.get(url, headers=headers)
await r.html.arender()
res = r.html.html
</code></pre>
<p>When I search for the string <code>/product/</code> in the results, it cannot be found, but it's visible from the inspect window.</p>
<p>I know about Selenium, but I want to use it only if there is no other way.</p>
|
<javascript><python><web-scraping><python-requests><python-requests-html>
|
2023-03-28 18:02:57
| 1
| 1,720
|
LoMaPh
|
75,869,542
| 11,001,493
|
Can't get file size because it apparently doesn't exist
|
<p>I'm creating a code to get the size of all files inside folders from a directory.</p>
<pre><code> import os
rootdir = "Z:"
# Identifying all files and sizes, and putting them inside lists
files = []
sizes = []
for dirpath, dirnames, filenames in os.walk(rootdir):
for f in filenames:
sizes.append(os.path.getsize(dirpath + "/" + f))
files.append(dirpath + "/" + f)
</code></pre>
<p>There are thousands of files and the code works well on most of them. However, there are some files for which I get the following error when I try to get their size:</p>
<pre><code>[WinError 3] The system cannot find the path specified
</code></pre>
<p>I already checked if the path really exists with <code>os.path.isdir(rootdir)</code> for Z and also for other folders where the error above occurs, and they all exist.</p>
<p>This error occurs with different types of files (xls, pdf, jpg, doc, etc.). For example, of the 2 files below, my code works only on the first one ("Thumbs.db"):</p>
<pre><code>.../Thumbs.db
.../CS-2014-00-02832-1.jpg
</code></pre>
<p>IMPORTANT: when I renamed this image file to "test.jpg", my code managed to read it and got its size.</p>
<p>Can anyone help me solve this problem so I can get the size of all files without an error being thrown?</p>
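<p>For reference, a hedged sketch of a more defensive walk: build paths with <code>os.path.join</code> and collect failures instead of crashing. (On Windows, paths longer than about 260 characters are a common cause of such errors; the <code>\\?\</code> extended-length path prefix can work around that.)</p>

```python
import os

rootdir = "."  # stand-in for "Z:"

files, sizes, errors = [], [], []
for dirpath, dirnames, filenames in os.walk(rootdir):
    for f in filenames:
        path = os.path.join(dirpath, f)  # portable path building
        try:
            sizes.append(os.path.getsize(path))
            files.append(path)
        except OSError as exc:  # record the failure instead of raising
            errors.append((path, exc))
```

<p>After the walk, <code>errors</code> lists exactly which paths could not be read, which helps tell a path-length problem from a permissions one.</p>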
|
<python><path><size>
|
2023-03-28 17:55:19
| 0
| 702
|
user026
|
75,869,481
| 350,143
|
remove stripes / vertical streaks in remote sensing images
|
<p>I have a remote sensing photo that has bright, non-continuous vertical streaks or stripes, as in the picture below. My question: is there a way to remove them using Python and OpenCV or any other image-processing library? <a href="https://i.sstatic.net/iamHP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iamHP.jpg" alt="enter image description here" /></a></p>
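<p>One simple destriping technique, sketched on synthetic data (a toy illustration, not a drop-in solution): vertical stripes act like a per-column brightness bias, so subtracting each column's median offset relative to the global median flattens them. Real imagery usually needs a gentler variant, e.g. a smoothed column profile or a frequency-domain (FFT notch) filter:</p>

```python
import numpy as np

# Synthetic image with one bright vertical streak in column 20.
img = np.random.default_rng(0).normal(100, 5, (64, 64))
img[:, 20] += 30

# Per-column bias relative to the global median, subtracted columnwise.
col_offset = np.median(img, axis=0) - np.median(img)
destriped = img - col_offset[None, :]
```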
|
<python><opencv><image-processing><image-enhancement>
|
2023-03-28 17:48:16
| 6
| 931
|
Atheer
|
75,869,474
| 211,983
|
How to determine if a datetime object overlaps with an ical string?
|
<p>Suppose I have an ical string:</p>
<pre><code>cal_str = """
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20211019T000000
DTEND;TZID=America/New_York:20211019T235959
RRULE:FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA
X-STATE:convenience
END:VEVENT
"""
</code></pre>
<p>And a <code>datetime</code> object <code>datetime(2022, 11, 7, 4, 53, tzinfo=timezone.utc)</code>.</p>
<p>How can I confirm that the <code>datetime</code> object falls within the recurrence described by the ical string? I've tried using <code>rules.between</code>, but this <code>datetime</code>, which should overlap with the ical string, is not matched.</p>
<p>example code:</p>
<pre><code>from dateutil import rrule
import pytz
from dateutil.tz import gettz
from icalendar.cal import Calendar
from datetime import datetime, timedelta, timezone
cal_str = """
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20211019T000000
DTEND;TZID=America/New_York:20211019T235959
RRULE:FREQ=WEEKLY;BYDAY=SU,MO,TU,WE,TH,FR,SA
X-STATE:convenience
END:VEVENT
"""
def test_ical(target_time):
ical = Calendar.from_ical(cal_str)
start_time_dt = ical.get("DTSTART").dt
tzinfo = gettz(str(start_time_dt.tzinfo))
start_time_dt = start_time_dt.replace(tzinfo=None).replace(tzinfo=tzinfo)
end_time_dt = ical["DTEND"].dt if ical.get("DTEND") else None
irrule = ical.get("RRULE")
recurring_rule = irrule.to_ical().decode('utf-8')
rules = rrule.rruleset()
first_rule = rrule.rrulestr(recurring_rule, dtstart=start_time_dt)
rules.rrule(first_rule)
event_delta = end_time_dt - start_time_dt if end_time_dt else timedelta(days=1)
res = rules.between(target_time - event_delta, target_time + timedelta(minutes=1))
return res
print(test_ical(datetime(2022, 11, 7, 4, 53, tzinfo=timezone.utc)))
print(test_ical(datetime(2022, 11, 9, 4, 53, tzinfo=timezone.utc)))
</code></pre>
|
<python><icalendar>
|
2023-03-28 17:47:31
| 2
| 9,624
|
tipu
|
75,869,291
| 1,258,059
|
Cache errors using Spotify API in Python program
|
<p>I am running into a problem with my Python script. The code executes, but after it displays a caching error:</p>
<pre class="lang-none prettyprint-override"><code>Couldn't read cache at: .cache
Couldn't write token to cache at: .cache
Couldn't read cache at: .cache
Couldn't write token to cache at: .cache
Couldn't read cache at: .cache
Couldn't write token to cache at: .cache
Couldn't read cache at: .cache
Couldn't write token to cache at: .cache
Couldn't read cache at: .cache
Couldn't write token to cache at: .cache
</code></pre>
<p>I am using Spotify to return songs in a playlist. Then I'm checking that playlist for bad words to filter out songs that can't be played for kids.</p>
<p>Here is the code I'm using:</p>
<pre><code>import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
spotify_id = "ABC123"
spotify_secret = "456789"
oldies = "738hgt"
kids = "201hgj"
country = "099sdt"
spotify_playlist = country
import lyricsgenius
genius_token = "123456789&ABCEFGHIJKLMNOPQRSTUVWXYZ"
genius = lyricsgenius.Genius(genius_token)
genius.verbose = False
genius.timeout = 100
word_listing = ["bad", "words", "go", "here"]
credentials = SpotifyClientCredentials(client_id=spotify_id, client_secret=spotify_secret)
validate = spotipy.Spotify(auth_manager=credentials)
songs = []
limit = 100
offset = 0
playlist = validate.playlist_tracks(spotify_playlist, limit=limit, offset=offset)
while True:
playlist = validate.playlist_tracks(spotify_playlist, limit=limit, offset=offset)
if not len(playlist['items']):
break
for items in playlist['items']:
info = {'artist': items['track']['artists'][0]['name'], 'title': items['track']['name']}
songs.append(info)
offset += limit
print("Checking playlist...")
for song in songs:
print(f" Checking \"{song['title']}\" by: {song['artist']}")
term = genius.search_song(title=song['title'], artist=song['artist'])
for words in word_listing:
try:
if len(term.lyrics) > 10000:
break
if words in term.lyrics.lower():
print(f" *Found \"{words}\" in \"{song['title']}\" by: {song['artist']}")
continue
except AttributeError:
print(f" *Unable to find lyrics for: \"{song['title']}\" by: {song['artist']}")
break
except:
print(f" *Unable to connect to service, moving on...")
</code></pre>
<p>I have replaced all my variable values for this example. I have been told this is an issue with Spotify's API; it's just a warning that can be ignored.</p>
<p>I have also been told it's a permissions issue, where Spotify wants to write to a cache directory and it doesn't have the right permissions to do so.</p>
<p>I'm really not sure what's causing this issue. The error appears at the beginning of the script, and then after it shows, the rest of the script runs successfully.</p>
<p>Is there something in one of my arguments or statements that's causing this?</p>
|
<python>
|
2023-03-28 17:25:20
| 2
| 595
|
joshmrodg
|
75,869,145
| 10,311,694
|
Unit testing - how to split cases into multiple files that all share the same setUp/tearDown?
|
<p>I have one test suite in a single file that is getting very long. It doesn't make sense to split the test cases into different classes because they all test different steps of the same user flow. This is also why they share the same <code>setUp</code>/<code>tearDown</code> methods and share class variables. Certain tests also depend on other tests - e.g. test 1 progresses to some part of the user flow and executes some assert statements, then test 2 continues from that stage of the user flow to execute more assert statements.</p>
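<p>For reference, one common way to split such a suite (a sketch using hypothetical file names) is to move the shared <code>setUp</code>/<code>tearDown</code> and class variables into a base class that each split file subclasses:</p>

```python
import unittest

# Hypothetical shared module, e.g. base_flow_test.py: the common
# setUp/tearDown and class variables live in one base class
class BaseFlowTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.flow_state = {"step": 0}  # class-level state shared by the flow

    def setUp(self):
        self.service = "connected"    # common per-test setup

# Each split file (e.g. test_step_one.py) then just subclasses the base
class TestStepOne(BaseFlowTest):
    def test_advances_flow(self):
        type(self).flow_state["step"] += 1
        self.assertEqual(self.service, "connected")
```

<p>Ordering between dependent tests still needs care: unittest runs methods alphabetically within a class, so tests that continue the same flow are often kept together in one class per file.</p>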
|
<python><unit-testing><python-unittest>
|
2023-03-28 17:09:28
| 0
| 451
|
Kevin2566
|
75,869,041
| 1,803,648
|
how to sleep in twisted LoopingCall
|
<p>I have a <code>work()</code> method that calls functions that may invoke <code>time.sleep</code>. This clearly goes against recommended twisted usage.</p>
<p>I suppose one could replace LoopingCall with <code>threads.deferToThread()</code> and loop manually in the worker, but are there alternative ways to accomplish this while still using LoopingCall?</p>
<p>Pseudocode:</p>
<pre class="lang-py prettyprint-override"><code>from twisted.internet.task import LoopingCall
looping_call = LoopingCall(work)
looping_call.start(10)
def work():
result = False
for i in elements:
if blocking_function(i): # note processing sequentially here is preferred
result = True
if result:
dosomething()
def blocking_function(i):
time.sleep(5)
return True # or False
</code></pre>
<p>related: <a href="https://stackoverflow.com/questions/34729079/a-non-blocking-way-to-sleep-wait-in-twisted">A non blocking way to sleep/wait in twisted?</a></p>
|
<python><twisted><twisted.internet>
|
2023-03-28 16:57:59
| 1
| 598
|
laur
|
75,868,981
| 16,739,739
|
Pythonic way to setup fixture for test
|
<p>I have a fixture with <code>scope="session"</code>, as it is a webserver that should be started and stopped only once for all tests. But I want to clear the cache before running each new test, so I made a fixture that takes the webserver and clears its cache:</p>
<pre><code>import pytest
class Webserver:
def __init__(self):
self.cache = []
print("Init server")
def start(self):
print("Start server")
def stop(self):
print("Stop server")
def query(self):
self.cache.append("some data")
def clear_cache(self):
self.cache.clear()
print("Clear cache")
@pytest.fixture(scope="session")
def mock_server():
server = Webserver()
server.start()
yield server
server.stop()
@pytest.fixture
def mock_server_with_clean_cache(mock_server):
mock_server.clear_cache()
return mock_server
def test_basic_1(mock_server_with_clean_cache):
mock_server_with_clean_cache.query()
assert len(mock_server_with_clean_cache.cache) == 1
def test_basic_2(mock_server_with_clean_cache):
mock_server_with_clean_cache.query()
assert len(mock_server_with_clean_cache.cache) == 1
</code></pre>
<p>I believe there must be a better and more convenient way to do this. So is there any?</p>
|
<python><pytest><fixtures>
|
2023-03-28 16:52:57
| 1
| 693
|
mouse_00
|
75,868,847
| 21,420,742
|
How to modify rows with conditions? In Python
|
<p>I have a dataset of employee history containing information on job, manager, etc. What I am trying to see is whether a manager has taken over for another in their absence. If that happens, the manager filling in should have a <strong>(Sub)</strong> added next to their name.</p>
<p>This is the output I have:</p>
<pre><code>Emp_ID Job_Title Manager_Pos Manager Name MGR_ID
1 Sales 627 John Doe 12
1 Sales 627 John Doe 12
1 Sales 627 David Stern 4
2 Tech 324 Mark Smith 7
2 Tech 324 Henry Ford 13
2 Tech 324 Henry Ford 13
</code></pre>
<p>This the output I want:</p>
<pre><code>Emp_ID Job_Title Manager_pos Manager Name Mgr_ID
1 Sales 627 John Doe 12
1 Sales 627 John Doe 12
1 Sales 627 David Stern(Sub) 4
2 Tech 324 Mark Smith 7
2 Tech 324 Henry Ford(Sub) 13
2 Tech 324 Henry Ford(Sub) 13
</code></pre>
<p>I have tried using:</p>
<pre><code>`np.where((df['Manager_pos].head(1) == df['Manager_pos') & (df['Manager Name'].head(1) != df['Manager Name'].tail(1)), df['Manager Name'] + 'Sub', df['Manager Name')
</code></pre>
<p>This code ends up throwing an error. Any suggestions?</p>
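<p>For what it's worth, a hedged sketch of one possible approach (it assumes the first manager listed per employee is the regular one, and marks any other name as a substitute):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Emp_ID": [1, 1, 1, 2, 2, 2],
    "Manager Name": ["John Doe", "John Doe", "David Stern",
                     "Mark Smith", "Henry Ford", "Henry Ford"],
})
# Assumption: the first manager listed per employee is the regular one
regular = df.groupby("Emp_ID")["Manager Name"].transform("first")
is_sub = df["Manager Name"].ne(regular)
df.loc[is_sub, "Manager Name"] = df.loc[is_sub, "Manager Name"] + "(Sub)"
```

<p>This reproduces the expected output above for both David Stern and Henry Ford, since neither is the first manager listed for their employee.</p>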
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-03-28 16:36:11
| 2
| 473
|
Coding_Nubie
|
75,868,823
| 17,101,330
|
Why pyqtgraph legend text cannot contain special characters like: '>', '<'
|
<p>I am plotting data with pyqtgraph inside a PySide6 Application.
Anyways, the problem is that the text gets cut off if a name contains a special character like '<' or '>'.</p>
<p>Here is a minimal example:</p>
<pre><code>from PySide6 import QtCore, QtWidgets
import pyqtgraph as pg
plt = pg.plot()
plt.addLegend()
c1 = plt.plot([1,2,3], pen='r', name="TEST_O<C_C*0.99>=H")
c2 = plt.plot([3,2,1], pen='g', name='green plot test blabla')
if __name__ == '__main__':
import sys
if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
QtWidgets.QApplication.instance().exec()
</code></pre>
<p>Which outputs:</p>
<p><a href="https://i.sstatic.net/0RBzZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0RBzZ.jpg" alt="enter image description here" /></a></p>
<p>Why does this happen?
Is there a work-around without replacing the characters?</p>
|
<python><pyqtgraph><pyside6>
|
2023-03-28 16:34:15
| 2
| 530
|
jamesB
|
75,868,813
| 2,074,794
|
SQLite generate series is missing when calling with python
|
<p>The following statement works fine when used directly in sqlite:</p>
<p><code>SELECT * FROM generate_series(1, 10);</code></p>
<p>But when I try to use it via python sqlite3, it fails:</p>
<pre><code>import sqlite3
c = sqlite3.connect(":memory:")
c.execute("SELECT * FROM generate_series(1, 10)")
</code></pre>
<p>Error: <code>sqlite3.OperationalError: no such table: generate_series</code></p>
<p>Is there a difference between how sqlite is called via Python vs. directly?</p>
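<p>For reference, <code>generate_series</code> is a table-valued extension that the SQLite command-line shell ships with, but the library bundled with Python does not. A recursive CTE gives the same result with the bundled module (a sketch):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Recursive-CTE equivalent of SELECT * FROM generate_series(1, 10)
rows = con.execute(
    "WITH RECURSIVE series(value) AS ("
    "  SELECT 1 UNION ALL SELECT value + 1 FROM series WHERE value < 10"
    ") SELECT value FROM series"
).fetchall()
print(rows)
```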
|
<python><sqlite>
|
2023-03-28 16:32:48
| 0
| 2,156
|
kaveh
|
75,868,744
| 1,128,648
|
Python script is reporting Google sheet api Quota exceed error
|
<p>I have the function below to update my Google sheet every 3 seconds.</p>
<pre><code>def update_gsheet():
global ce_trade_status, pe_trade_status
while ce_trade_status != 'completed' or pe_trade_status != 'completed':
try:
batch_update_values = [
{"range": "A2", "values": [[selectedCEStrike]]},
{"range": "B2", "values": [[ce_sell_time]]},
{"range": "C2", "values": [[ce_sell_price]]},
{"range": "D2", "values": [[ce_sl_price]]},
{"range": "E2", "values": [[ce_buy_time]]},
{"range": "F2", "values": [[ce_buy_price]]},
{"range": "G2", "values": [[ce_exit_reason]]},
{"range": "H2", "values": [[ce_limit_price]]},
{"range": "I2", "values": [[ltp[selectedCEStrike]]]},
{"range": "A3", "values": [[selectedPEStrike]]},
{"range": "B3", "values": [[pe_sell_time]]},
{"range": "C3", "values": [[pe_sell_price]]},
{"range": "D3", "values": [[pe_sl_price]]},
{"range": "E3", "values": [[pe_buy_time]]},
{"range": "F3", "values": [[pe_buy_price]]},
{"range": "G3", "values": [[pe_exit_reason]]},
{"range": "H3", "values": [[pe_limit_price]]},
{"range": "I3", "values": [[ltp[selectedPEStrike]]]},
]
sh.batch_update(batch_update_values, value_input_option="USER_ENTERED")
except Exception as e:
logger.error(e)
sys.exit(1)
time.sleep(3)
</code></pre>
<p>But I am getting the 429 error below for <code>'WriteRequestsPerMinutePerUser'</code>.</p>
<pre><code>{'code': 429, 'message': "Quota exceeded for quota metric 'Write requests' and limit 'Write requests per minute per user' of service 'sheets.googleapis.com' for consumer 'project_number:XXXX'.", 'status': 'RESOURCE_EXHAUSTED', 'details': [{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'RATE_LIMIT_EXCEEDED', 'domain': 'googleapis.com', 'metadata': {'service': 'sheets.googleapis.com', 'quota_location': 'global', 'quota_limit': 'WriteRequestsPerMinutePerUser', 'consumer': 'projects/XXXX', 'quota_metric': 'sheets.googleapis.com/write_requests', 'quota_limit_value': '120'}}, {'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Request a higher quota limit.', 'url': 'https://cloud.google.com/docs/quota#requesting_higher_quota'}]}]}
</code></pre>
<p>My current quota for <code>WriteRequestsPerMinutePerUser</code> is <code>120</code> per minute. Since I am using a batch update, it should count the entire set of cells as a single request instead of as individual ones.
I do not understand how it is exceeding the quota limit even though I am putting a delay of 3 seconds between calls. How can I get rid of this issue? (I have already opened a request with Google for a quota increase.)</p>
|
<python><google-cloud-platform><google-api><google-sheets-api>
|
2023-03-28 16:24:44
| 1
| 1,746
|
acr
|
75,868,739
| 453,673
|
What is the Python colon followed by a greater than sign meant for? :>
|
<p>I came across this code in PyTorch's example code:</p>
<pre><code>if batch % 100 == 0:
loss, current = loss.item(), (batch + 1) * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
</code></pre>
<p>and</p>
<pre><code>print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
</code></pre>
<p>Assuming it may have something to do with decimal places or the number of digits to display, I tried some sample code like:</p>
<pre><code>size=46323656
current=3
loss=4.6362635675
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
</code></pre>
<p>and the output is:</p>
<pre><code>loss: 4.636264 [ 3/46323656]
</code></pre>
<p>But the output makes no sense. Does anyone know what the "<strong>:></strong>" is meant for? I've heard of the <a href="https://docs.python.org/3/whatsnew/3.8.html#assignment-expressions" rel="nofollow noreferrer">Walrus operator</a>.</p>
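<p>For context, a quick stdlib check of what the alignment character does on its own (the <code>></code> here is part of the format spec mini-language, not an operator):</p>

```python
# In a format spec like {loss:>7f}, ">" means right-align within the given
# minimum width; "7f" / "5d" set the width and presentation type
loss = 4.6362635675
assert f"{3:>5d}" == "    3"   # right-aligned in a field of width 5
assert f"{3:<5d}" == "3    "   # "<" left-aligns, for comparison
print(f"loss: {loss:>10f}")    # a width of 10 makes the padding visible
```

<p>The width is a minimum, which is why <code>{loss:>7f}</code> shows no padding: the formatted value is already 8 characters wide.</p>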
|
<python><syntax>
|
2023-03-28 16:24:07
| 0
| 20,826
|
Nav
|
75,868,668
| 13,896,667
|
Pymssql - insert large string - DBPROCESS is dead or not enabled
|
<p>I'm trying to insert a single row into a table that has a <code>nvarchar(max)</code> column (say response) using Python's <code>pymssql</code> library.</p>
<p>The other columns are straightforward - one nvarchar(10) column, two nvarchar(30) columns, two date columns and one bigint column</p>
<p>I don't really have control over the length of the response string I get and as a result they can be of arbitrary length. Everywhere I've searched shows (e.g. <a href="https://stackoverflow.com/questions/11131958/what-is-the-maximum-characters-for-the-nvarcharmax">What is the maximum characters for the NVARCHAR(MAX)?</a>) that <code>nvarchar(max)</code> can support up to a billion characters. However, the code I'm using seems to break at around the 130 million character mark.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import pymssql
from pathlib import Path
conn = pymssql.connect('server', 'user', 'password', 'db')
content = Path('file.txt').read_text()
print(len(content)) # 547031539
with conn.cursor() as cursor:
cursor.execute("Insert into table_name values (%s, %s, %s, %d, %s, %s, %s)", ('2020-05-21', '2023-03-27T20:51:50.221718', '2023-03-27T19:34:02.103253', 127671, content[:133_949_006], 'New', '2020-05-22'))
conn.commit()
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
MSSQLDatabaseException Traceback (most recent call last)
File src\pymssql\_pymssql.pyx:461, in pymssql._pymssql.Cursor.execute()
File src\pymssql\_mssql.pyx:1087, in pymssql._mssql.MSSQLConnection.execute_query()
File src\pymssql\_mssql.pyx:1118, in pymssql._mssql.MSSQLConnection.execute_query()
File src\pymssql\_mssql.pyx:1251, in pymssql._mssql.MSSQLConnection.format_and_run_query()
File src\pymssql\_mssql.pyx:1789, in pymssql._mssql.check_cancel_and_raise()
File src\pymssql\_mssql.pyx:1835, in pymssql._mssql.raise_MSSQLDatabaseException()
MSSQLDatabaseException: (20047, b'DB-Lib error message 20047, severity 9:\nDBPROCESS is dead or not enabled\n')
During handling of the above exception, another exception occurred:
OperationalError Traceback (most recent call last)
Cell In[126], line 2
1 with conn.cursor() as cursor:
----> 2     cursor.execute("Insert into table_name values (%s, %s, %s, %d, %s, %s, %s)", ('2020-05-21', '2023-03-27T20:51:50.221718', '2023-03-27T19:34:02.103253', 127671, content[:133_949_006], 'New', '2020-05-22'))
3 conn.commit()
File src\pymssql\_pymssql.pyx:479, in pymssql._pymssql.Cursor.execute()
OperationalError: (20047, b'DB-Lib error message 20047, severity 9:\nDBPROCESS is dead or not enabled\n')
</code></pre>
|
<python><sql-server><limit><pymssql>
|
2023-03-28 16:17:43
| 0
| 342
|
wkgrcdsam
|
75,868,632
| 15,724,084
|
python selenium timeout exception does not pass compiling
|
<p>I have some Selenium WebDriver code that searches for the "Next" button and, if it exists, clicks it. If not, the script should catch <code>TimeoutException</code> and continue.</p>
<p>Code:</p>
<pre><code>from selenium.common.exceptions import TimeoutException
def clicking_next_page():
btn_next_to_click=WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='next']")))
try:
btn_next_to_click.click()
crawler()
except TimeoutException:
pass
</code></pre>
<p>Error:</p>
<pre><code>File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\wait.py", line 90, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
</code></pre>
|
<python><selenium-webdriver><timeoutexception>
|
2023-03-28 16:13:45
| 1
| 741
|
xlmaster
|
75,868,523
| 361,530
|
Unwanted tkinter trace event
|
<p>I just discovered a nasty behavior in tkinter trace events. When the user selects a character or characters to change, the first keystroke produces two events: the first is the deletion of the selected characters and the second is the insertion of the first character typed by the user. The problem arises on an Entry field that holds a tk.IntVar as its 'textvariable'. When the content is a single digit, that first event leaves the field momentarily empty; the trace event leads to a call to IntVar.get(), and that in turn throws a Double conversion exception. My workaround is to wrap the IntVar.get() call in a try-except, but having all these try-except blocks uglifies my code. Is there a cleaner way to handle this? If so, either a link or an example would be most welcome.</p>
|
<python><tkinter><trace>
|
2023-03-28 16:01:54
| 1
| 389
|
RadlyEel
|
75,868,509
| 12,711,133
|
Data format being changed when saving .xls file with Python
|
<p>TL;DR: I have dates that show up as '44992', a format I can't recognise - is there a way to turn that into another format in JS?</p>
<p>I have a Python script that pulls a spreadsheet from a website. In that original spreadsheet, there's a column with dates in this format: <code>07/03/2023 00:00:00</code></p>
<p>There's an issue with the spreadsheet that two columns have the same name, so I run a function which opens the spreadsheet, changes the name of one of those columns, and then re-saves it. After that, the date is in this format: <code>44992</code></p>
<p>I don't understand why it's doing that or what format the second number correlates to - but is there a way to turn that number back into mm/dd/yyyy? Or any other format really! I'm handling the data on the front-end with Javascript, and need to be able to show the date.</p>
<p>This is the function that fixes the name of the column:</p>
<pre><code>def fix():
rb = xlrd.open_workbook('static/files/sheet.xls')
wb = copy(rb)
# open the first sheet
w_sheet = wb.get_sheet(0)
    # row = 1, columns = 20 and 4
w_sheet.write(1, 20, 'Applicant postcode')
w_sheet.write(1, 4, 'Postcode')
# save the file
wb.save('static/files/fixed.xls')
return "File has been fixed"
</code></pre>
<p>I've tried doing <code>new Date()</code> on those numbers, but it doesn't work.</p>
<p>Any ideas?</p>
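<p>A note that may help: <code>44992</code> looks like an Excel serial date, i.e. days counted from Excel's epoch. The underlying cell value was likely always this serial number, with a display format (lost in the xlrd/xlwt round trip) making it look like a date. A hedged Python sketch of the conversion:</p>

```python
from datetime import date, timedelta

# Assumption: the number is an Excel serial date. Day 0 is treated as
# 1899-12-30, which absorbs Excel's historical 1900 leap-year quirk.
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial: int) -> date:
    return EXCEL_EPOCH + timedelta(days=serial)

print(excel_serial_to_date(44992))  # matches the 07/03/2023 in the source sheet
```

<p>In JavaScript the same arithmetic would be along the lines of <code>new Date(Date.UTC(1899, 11, 30) + serial * 86400000)</code>.</p>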
|
<javascript><python>
|
2023-03-28 16:00:09
| 1
| 412
|
Gordon Maloney
|
75,868,334
| 835,523
|
How to compare 2 files in python, ignoring whitespace
|
<p>How can I compare two files to check if they have the same contents, ignoring whitespace, in Python?</p>
<p>This is something I need to do programmatically. I have a pytest test and while it passes locally on my windows computer it fails in our linux build environment and I believe whitespace is to blame.</p>
<p>I tried using filecmp, however that doesn't give me the option to ignore whitespace.</p>
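<p>A minimal sketch of one approach, assuming "ignoring whitespace" means ignoring all whitespace differences (including CRLF vs. LF line endings):</p>

```python
def equal_ignoring_whitespace(path_a: str, path_b: str) -> bool:
    """Compare two text files after stripping every whitespace run."""
    with open(path_a) as a, open(path_b) as b:
        # str.split() with no argument splits on any whitespace, so joining
        # the pieces discards spaces, tabs, and newlines alike
        return "".join(a.read().split()) == "".join(b.read().split())
```

<p>If only line endings should be ignored, not spaces inside lines, comparing line by line with <code>rstrip()</code> would be the gentler variant.</p>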
|
<python>
|
2023-03-28 15:41:30
| 1
| 4,741
|
Steve
|
75,868,233
| 10,462,461
|
Tuple unpacking and np.max() giving unexpected values
|
<p>I have a dataset that I am trying to filter and apply an adjustment value using tuples.</p>
<pre><code>df = pd.DataFrame({
'loannum': ['1', '2', '3', '4'],
'or_dep': [250000, 650000, 1000000, 300000]
})
loan2adj = [('1', 50000), ('3', 250000), ('2', 100000)]
</code></pre>
<p>My expected output looks like this.</p>
<pre><code>loannum or_dep
1 200000
2 550000
3 750000
4 300000
</code></pre>
<p>This is the logic I'm using to unpack the tuples and apply the adjustment value.</p>
<pre><code>for loan, adj_amt in loan2adj:
df.loc[df['loannum'] == loan, 'or_dep'] = np.max(df['or_dep'] - adj_amt, 0)
</code></pre>
<p>This code produces some unusual values.</p>
<pre><code>loannum or_dep
1 950000
2 550000
3 750000
4 300000
</code></pre>
<p>Loans 3 and 4 are being returned correctly. Loan 4 should not have an adjustment and loan 3 is being adjusted correctly. How can I achieve the desired output?</p>
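<p>For reference, a hedged sketch of the element-wise variant. The surprising 950000 comes from <code>np.max(series, 0)</code> reducing the whole column to its single maximum (the second positional argument is the axis), rather than flooring each value; <code>Series.clip</code> keeps the operation element-wise:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "loannum": ["1", "2", "3", "4"],
    "or_dep": [250000, 650000, 1000000, 300000],
})
loan2adj = [("1", 50000), ("3", 250000), ("2", 100000)]

for loan, adj_amt in loan2adj:
    mask = df["loannum"] == loan
    # subtract on the matching rows only, then floor at zero element-wise
    df.loc[mask, "or_dep"] = (df.loc[mask, "or_dep"] - adj_amt).clip(lower=0)
```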
|
<python><pandas><numpy><tuples>
|
2023-03-28 15:30:16
| 1
| 340
|
gernworm
|
75,868,186
| 6,346,482
|
Pandas: Alter a column by condition using other columns
|
<p>I have this dataframe here (sorry for the bad example)</p>
<pre><code>import pandas as pd
import random
df = pd.DataFrame({
"Alpha": ["A", "A", "A", "A", "B", "B", "B", "B"],
"Beta": ["C", "D", "E", "F", "C", "D", "E", "F"],
"Value": [1, 2, 3, 4, 7, 2, 5, 1],
})
</code></pre>
<p>I want that, for every Alpha == "A", the "Value" of the rows with Beta == "D" should be the value of the rows with Beta == "F" multiplied by a random number and a scalar.</p>
<pre><code>mask = (df['Alpha'] == 'A')
df.loc[mask & (df['Beta'] == 'D'), 'Value'] = df.loc[mask & (df['Beta'] == 'F'), 'Value'] * 0.5 * random.uniform(0.95, 1.05)
</code></pre>
<p>Both loc functions return a series of equal length (also in other more advanced examples), but in the end, the Value becomes NaN everywhere. This is also the case if I remove the scalar.</p>
<p>Any ideas how to easily solve this?</p>
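<p>For reference, a hedged sketch of what seems to be going on: the left side selects rows where Beta == "D" and the right side rows where Beta == "F", and pandas aligns the assignment on the index, where those labels never match, hence NaN. Dropping the index from the right side sidesteps the alignment (this assumes the two selections have the same number of rows in the same order):</p>

```python
import pandas as pd
import random

df = pd.DataFrame({
    "Alpha": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "Beta": ["C", "D", "E", "F", "C", "D", "E", "F"],
    "Value": [1.0, 2, 3, 4, 7, 2, 5, 1],  # float to keep the dtype stable
})
mask = df["Alpha"] == "A"
# .to_numpy() strips the index, so the assignment is positional, not label-based
src = df.loc[mask & (df["Beta"] == "F"), "Value"].to_numpy()
df.loc[mask & (df["Beta"] == "D"), "Value"] = src * 0.5 * random.uniform(0.95, 1.05)
```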
|
<python><pandas>
|
2023-03-28 15:25:41
| 2
| 804
|
Hemmelig
|
75,868,166
| 10,266,106
|
Speed Up Numpy Process Across Multiple Arrays in Parallel
|
<p>I have constructed a script to find the 8 nearest indices at the current index in a loop; in essence a moving window algorithm. The parallel component entails doing this across multiple 2-D arrays, which may vary in total amount but will always have the same dimensions (say 2800i x 1200j). I've tested this script below with 12, all arrays have a datatype of Float32 with a maximum decimal precision of 8.</p>
<p>First, let's start with the nearest neighbor portion of the script, which is as follows:</p>
<pre><code>import numpy as np
import multiprocessing as mpr
def get_neighbors(arr, origin, num_neighbors = 8):
coords = np.array([[i,j] for (i,j),value in np.ndenumerate(arr)]).reshape(arr.shape + (2,))
distances = np.linalg.norm(coords - origin, axis = -1)
neighbor_limit = np.sort(distances.ravel())[num_neighbors]
window = np.where(distances <= neighbor_limit)
exclude_window = np.where(distances > neighbor_limit)
return window, exclude_window, distances
</code></pre>
<p>I've built a static array (named <code>gridranger</code>) that handles the moving window index loop and provides the window index coordinates for all other 2-D arrays, given that all are identical in size.
The goal of this script is to extract all values at the indices in the moving window across all arrays into a list, then perform some analysis. This portion looks as follows; note that the variable <code>grids</code> contains the names of all variables mapped to each corresponding 2-D array:</p>
<pre><code>def extractor(queue, gridin, windowin):
extract_values = []
for i in range(0, len(windowin[0])):
extract_values.append(gridin[windowin[0][i], windowin[1][i]])
queue.put(extract_values)
def parallel():
for index, val in np.ndenumerate(gridranger):
window, exclude, distances = get_neighbors(gridranger, [index[0], index[1]])
outarr = np.column_stack((window[0], window[1]))
outvalues, processes = [], []
q = mpr.Queue()
for grid in grids:
pro = mpr.Process(target=extractor, args=(q, grid, window))
processes.extend([pro])
pro.start()
for p in processes:
extract_values = q.get()
outvalues.append(extract_values)
for p in processes:
p.join()
# return outvalues
print(index, outvalues)
</code></pre>
<p>The issue I've encountered is the length of time to run this using Multiprocess, which averages at ~7.5-8.5 seconds. This clearly is wholly inefficient for large 2-D arrays like what I'm running this moving window through. What steps can I take to drastically reduce this runtime?</p>
|
<python><numpy><multiprocessing><numpy-ndarray>
|
2023-03-28 15:23:36
| 1
| 431
|
TornadoEric
|
75,868,129
| 12,760,550
|
Create a column ranking order of records by date
|
<p>Imagine I have the following dataframe with employees and their contract type (values could be Employee, Contractor and Agency). One person can have more than one contract, as you can see in the dataframe example below:</p>
<pre><code>ID Name Contract Date
10000 John Employee 2021-01-01
10000 John Employee 2021-01-01
10000 John Employee 2020-03-06
10000 John Contractor 2021-01-03
10000 John Agency 2021-01-01
10000 John Contractor 2021-02-01
10001 Carmen Employee 1988-06-03
10001 Carmen Employee 2021-02-03
10001 Carmen Contractor 2021-02-03
10002 Peter Contractor 2021-02-03
10003 Fred Employee 2020-01-05
10003 Fred Employee 1988-06-03
</code></pre>
<p>I need to find a way that, for each unique ID and each unique Contract type, creates a column named "Order" that ranks each of the contract types each ID has, starting with 1 for the oldest contract. If the date is the same, the rank order does not matter. This would result in the following dataframe:</p>
<pre><code>ID Name Contract Date Order
10000 John Employee 2021-01-01 1
10000 John Employee 2021-01-01 2
10000 John Employee 2020-03-06 3
10000 John Contractor 2021-01-03 2
10000 John Agency 2021-01-01 1
10000 John Contractor 2021-02-01 1
10001 Carmen Employee 1988-06-03 1
10001 Carmen Employee 2021-02-03 2
10001 Carmen Contractor 2021-02-03 1
10002 Peter Contractor 2021-02-03 1
10003 Fred Employee 2020-01-05 2
10003 Fred Employee 1988-06-03 1
</code></pre>
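<p>A hedged sketch of one way to build such a column with a sort plus <code>groupby.cumcount</code> (it numbers the oldest date as 1, per the description above; ties get an arbitrary but stable order):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [10000, 10000, 10000, 10003, 10003],
    "Contract": ["Employee", "Employee", "Contractor", "Employee", "Employee"],
    "Date": pd.to_datetime(["2021-01-01", "2020-03-06", "2021-01-03",
                            "2020-01-05", "1988-06-03"]),
})
# Sort oldest-first, number rows within each (ID, Contract) group from 1;
# the assignment realigns the result on the original index
df["Order"] = df.sort_values("Date").groupby(["ID", "Contract"]).cumcount() + 1
```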
|
<python><pandas><group-by><lines-of-code>
|
2023-03-28 15:20:14
| 1
| 619
|
Paulo Cortez
|
75,867,923
| 3,073,612
|
MongoDB $indexOfArray not working for a nested object inside an array
|
<p>I have a document with the following data</p>
<pre><code>{
"_id": "640311ab0469a9c4eaf3d2bd",
"id": 4051,
"email": "manoj123@gmail.com",
"password": "SomeNew SecurePassword",
"about": null,
"token": "7f471974-ae46-4ac0-a882-1980c300c4d6",
"country": "India",
"location": null,
"lng": 0,
"lat": 0,
"dob": null,
"gender": 0,
"userType": 1,
"userStatus": 1,
"profilePicture": "Images/9b291404-bc2e-4806-88c5-08d29e65a5ad.png",
"coverPicture": "Images/44af97d9-b8c9-4ec1-a099-010671db25b7.png",
"enablefollowme": false,
"sendmenotifications": false,
"sendTextmessages": false,
"enabletagging": false,
"createdAt": "2020-01-01T11:13:27.1107739",
"updatedAt": "2020-01-02T09:16:49.284864",
"livelng": 77.389849,
"livelat": 28.6282231,
"liveLocation": "Unnamed Road, Chhijarsi, Sector 63, Noida, Uttar Pradesh 201307, India",
"creditBalance": 130,
"myCash": 0,
"data": {
"name": "ThisIsAwesmoe",
"arr": [
1,
100,
3,
4,
5,
6,
7,
8,
9
],
"hobies": {
"composer": [
"anirudh",
{
"co_singer": [
"rakshitha",
"divagar"
]
},
"yuvan",
"rahman"
],
"music": "helo"
}
},
"scores": [
{
"subject": "math",
"score": 100
},
{
"subject": "physics",
"score": 85
},
{
"subject": "chemistry",
"score": 95
}
],
"fix": 1,
"hello": 1,
"recent_views": [
200
],
"exam": "",
"subject": "",
"arr": {
"name": "sibidharan",
"pass": "hello",
"score": {
"subject": {
"minor": "zoology",
"major": "biology",
"others": [
"evs",
{
"name": "shiro",
"inarr": [
200,
2,
3,
{
"sub": "testsub",
"newsu": "aksjdad",
"secret": "skdjfnsdkfjnsdfsdf"
},
4,
12
]
}
]
},
"score": 40,
"new": "not7",
"hello": {
"arr": [
5,
2
]
}
}
},
"name": "Manoj Kumar",
"d": [
1,
3,
4,
5
],
"score": {},
"hgf": 5
}
</code></pre>
<p>I am trying to find the <code>$indexOfArray</code> located in this path <code>data.hobies.composer.1.co_singer</code> and I am using the following query:</p>
<pre><code>[
{
"$match": {
"id": 4051
}
},
{
"$project": {
"index": {
"$indexOfArray": [
"$data.hobies.composer.1.co_singer",
"divagar"
]
}
}
}
]
</code></pre>
<p>and this returns -1 with this python code that uses pymongo</p>
<pre><code>result = list(self._collection.aggregate(pipeline))
return result[0]["index"]
</code></pre>
<p>whereas the <code>pipeline</code> is the query above.</p>
<p>It is working if there is no array in the nested path, but if there is an array, <code>$indexOfArray</code> returns -1.</p>
<p>What am I missing?</p>
|
<python><mongodb><pymongo>
|
2023-03-28 15:02:10
| 1
| 2,756
|
Sibidharan
|
75,867,922
| 13,158,157
|
pandas pivot table: dropna does not create new column
|
<p>I have a data frame with two columns: df._merge and df.A.
df.A is an object column with a few values and NA.</p>
<p>When trying to output pairs and their count with <code>pivot_table</code> I am missing the na values, and using <code>dropna=False</code> in <code>pivot_table</code> does not return them.</p>
<pre><code>df= pd.DataFrame({
'_merge':['left_only', 'both','right_only', 'right_only'],
'A':['1', '1','2',pd.NA]
})
df.pivot_table(index='_merge', columns='A', aggfunc='size', fill_value='-', dropna=False)
</code></pre>
<p>returns pivot without NA values:</p>
<p><a href="https://i.sstatic.net/sLKBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sLKBI.png" alt="enter image description here" /></a></p>
<p>but manually filling na's shows that they are there:</p>
<pre><code>df['fA'] = df.A.fillna('NA')
df.pivot_table(index='_merge', columns='fA', aggfunc='size', fill_value='-', dropna=False)
</code></pre>
<p><a href="https://i.sstatic.net/oZ3qD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oZ3qD.png" alt="enter image description here" /></a></p>
<p>How can I get the pivot with empty values in pivot columns without having to create a new column in the source df?</p>
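<p>One hedged workaround: <code>pd.crosstab</code> accepts plain Series, so the fill can happen inline on a temporary Series, without adding a column to the source frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "_merge": ["left_only", "both", "right_only", "right_only"],
    "A": ["1", "1", "2", pd.NA],
})
# fillna is applied to a temporary Series, so df itself stays untouched
ct = pd.crosstab(df["_merge"], df["A"].fillna("NA"))
```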
|
<python><pandas><pivot-table>
|
2023-03-28 15:02:02
| 1
| 525
|
euh
|
75,867,791
| 11,922,765
|
Python Download A list of URL linked ZIP files
|
<p>I have a list of URLs to download data from. I am doing this on Kaggle. I want to know how to download this data and save it to Kaggle or to my local machine. The goal is to download the data in Python, combine it into a single CSV file, and download that big file. Presently each URL corresponds to one year of data.</p>
<p>Ref: <a href="https://stackoverflow.com/questions/9419162/download-returned-zip-file-from-url">Download Returned Zip file from URL</a></p>
<p>My code:</p>
<pre><code> url_list = ['https://mapfiles.nrel.gov/data/solar/ae014839fbbe9de5c30bedf56a2f5521.zip', 'https://mapfiles.nrel.gov/data/solar/ea8f39523778ba0223a28116a3e9d85a.zip']
import requests, zipfile, io
data_list = []
for url in url_list:
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
data_list.append(pd.read_csv(z.open(z.namelist()[0])))
# Create a big dataframe
df = pd.concat(data_list)
df.to_csv('WeatherData.csv')
</code></pre>
<p>It is working as I intended, but is there a better way of doing it?</p>
|
<python><url><python-requests><io><zip>
|
2023-03-28 14:49:48
| 1
| 4,702
|
Mainland
|
75,867,783
| 15,915,737
|
How trigger dbt cloud job on free version
|
<p>I want to trigger a dbt Cloud job with an API request from a Python script; however, the API is not accessible to unpaid accounts, as the error suggests: <code>{"status": {"code": 401, "is_success": false, "user_message": "The API is not accessible to unpaid accounts.", "developer_message": null}, "data": null}</code>.</p>
<p>I was trying with this code :</p>
<pre><code>import requests
import os
ACCOUNT_ID = os.environ['ACCOUNT_ID']
JOB_ID = os.environ['JOB_ID']
API_KEY = os.environ['DBT_CLOUD_API_TOKEN']
url = "https://cloud.getdbt.com/api/v2/accounts/"+ACCOUNT_ID+"/jobs/"+JOB_ID+"/run/"
headers = {
'Authorization': 'Bearer '+API_KEY,
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers)
print(response.text)
</code></pre>
<p>Is there an alternative way to achieve this with the free-tier version?</p>
|
<python><triggers><jobs><dbt-cloud>
|
2023-03-28 14:48:51
| 1
| 418
|
user15915737
|
75,867,644
| 112,976
|
How to return Pydantic object with a specific http response code in FastAPI?
|
<p>I have an endpoint which returns a Pydantic object. However, I would like a response code other than 200 in some cases (for example if my service in not healthy). How can I achieve that with FastAPI?</p>
<pre><code>class ServiceHealth(BaseModel):
http_ok: bool = True
database_ok: bool = False
def is_everything_ok(self) -> bool:
return self.http_ok and self.database_ok
@router.get("/health")
def health() -> ServiceHealth:
return ServiceHealth()
</code></pre>
|
<python><http><fastapi><pydantic>
|
2023-03-28 14:34:59
| 2
| 22,768
|
poiuytrez
|
75,867,636
| 11,452,928
|
How to get value of jaxlib.xla_extension.ArrayImpl
|
<p>Using <code>type(z1[0])</code> I get <code>jaxlib.xla_extension.ArrayImpl</code>. Printing <code>z1[0]</code> I get <code>Array(0.71530414, dtype=float32)</code>. How can I get the actual number <code>0.71530414</code>?</p>
<p>I tried <code>z1[0][0]</code> because <code>z1[0]</code> is a kind of array with a single value, but it gives me an error: <code>IndexError: Too many indices for array: 1 non-None/Ellipsis indices for dim 0.</code>.</p>
<p>I also tried a different approach: I searched the web for a way to convert a jax numpy array to a Python list, but I didn't find an answer.</p>
<p>Can someone help me to get the value inside a <code>jaxlib.xla_extension.ArrayImpl</code> object?</p>
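<p>Since jax arrays follow the numpy API, I assumed the numpy-style <code>.item()</code> would apply here; a numpy stand-in for what I mean (numpy used only as a placeholder for <code>jax.numpy</code>):</p>

```python
import numpy as np

# Stand-in for jax's Array(0.71530414, dtype=float32): a 0-d array
z0 = np.array(0.71530414, dtype=np.float32)

# .item() unwraps a 0-d (or single-element) array into a plain Python scalar
value = z0.item()
print(value, type(value))
```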
|
<python><arrays><jax>
|
2023-03-28 14:34:18
| 2
| 753
|
fabianod
|
75,867,580
| 4,810,328
|
package data in setup.py
|
<p>I have some data and config files in the project root folder (same folder as setup.py) and I want to add them to the wheel build as package data. My setup.py looks like this:</p>
<pre><code>from setuptools import find_packages, setup
setup(
name="dbx_test",
packages=find_packages(exclude=["tests", "tests.*"]),
include_package_data=True,
package_data={
"": ["config.yaml",
"secrets.txt"
]
},
setup_requires=["wheel"],
entry_points={
"console_scripts": [
"main = src.main:main",
],
},
version="0.0.1",
description="",
author="",
)
</code></pre>
<p>However, the wheel build does not pick up the secrets and config files from the root. How can I make it include them?</p>
|
<python><python-packaging><python-wheel>
|
2023-03-28 14:30:22
| 0
| 711
|
Tarique
|
75,867,551
| 241,552
|
Bazel: add executable to PATH
|
<p>I am having trouble installing Python in my Bazel build. I tried several approaches, starting with the official bazel_rules documentation, but I get the error <code>/usr/bin/env: 'python3': No such file or directory</code>. I have also tried downloading the Python installation as part of the build and registering it as a toolchain, but I still get the same error. Looking at the <code>PATH</code> environment variable inside the build, I see that the directory containing my Python installation is not included, but some non-existent directory is. This led me to think that I could modify <code>PATH</code> myself to include my actual Python folder. However, I couldn't find a way to set environment variables inside a <code>repository_rule</code>. Is there a way to do that? Am I on the wrong path?</p>
|
<python><bazel>
|
2023-03-28 14:27:08
| 1
| 9,790
|
Ibolit
|
75,867,416
| 3,152,686
|
Extract substrings from a list of patterns in Python
|
<p>I have a list of URL patterns that contain the Page ID of my webpage:</p>
<pre><code>patterns = [
    "https://www.my-website.com/articles/$1/show",
    "https://www.my-website.com/groups/$1/articles/$2",
    "https://www.my-website.com/blogs/$1/show",
    "https://www.my-website.com/groups/$1/documents/$2"
]
</code></pre>
<p>where $1, $2, and so on mark the positions of the Page IDs within each pattern.</p>
<p>I have an input WebURL string and I want to extract the Page ID from this string</p>
<pre><code>input = "https://www.my-website.com/articles/asojd1277/show"
output = "asojd1277"
input = "https://www.my-website.com/groups/hsaid123/articles/hqwoj8239"
output = "hqwoj8239"
</code></pre>
<p>Is there any Python library that does this for me? I tried the <strong>re.match</strong> function (from the regex module) but could not make it fit my case.</p>
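<p>To show what I'm attempting, here is my rough sketch that turns each <code>$n</code> placeholder into a capture group and matches the URL against every pattern (the helper name is my own invention):</p>

```python
import re

patterns = [
    "https://www.my-website.com/articles/$1/show",
    "https://www.my-website.com/groups/$1/articles/$2",
    "https://www.my-website.com/blogs/$1/show",
    "https://www.my-website.com/groups/$1/documents/$2",
]

def extract_last_page_id(url):
    for pattern in patterns:
        # Escape the literal parts, then turn $1, $2, ... into capture groups
        regex = re.escape(pattern)
        regex = re.sub(r"\\\$\d+", r"([^/]+)", regex)
        m = re.fullmatch(regex, url)
        if m:
            return m.groups()[-1]  # last Page ID in the URL ($2 if present, else $1)
    return None

print(extract_last_page_id("https://www.my-website.com/articles/asojd1277/show"))
```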
|
<python><string><extract>
|
2023-03-28 14:13:41
| 2
| 564
|
Vishnukk
|
75,867,330
| 5,432,214
|
Mypy, ignore types for remaining elements
|
<p>I have a function like this</p>
<pre class="lang-py prettyprint-override"><code>def _parse(input_dict: dict) -> Tuple[int, str, ???]:
entity = 3
_id = "test"
return (
entity,
_id,
*super()._parse(input_dict),
)
</code></pre>
<p>I know the types of the first two return values, and the remaining values are given by the return values of the parent class of the class defining this function. Since the superclass has a similar <code>_parse</code> function, properly type hinting this will result in a ton of entries in the long run.</p>
<p>If we consider that this class has a subclass with the same function, its return value type hint would be <code>Tuple[subclass_param_1, subclass_param_2, int, str, ???]</code>, and will get longer and longer as we go deeper.</p>
<p>Is there a way of marking the rest of the parameters as unknown, assuming that we don't know how many parameters we will end up with?</p>
|
<python><type-hinting><mypy>
|
2023-03-28 14:04:40
| 0
| 1,310
|
HitLuca
|
75,867,274
| 10,437,110
|
Value of a column at time t is subracted by running mean of the past 1000 values
|
<p>I have a dataframe called <code>df</code>.</p>
<p>It has a column name <code>x</code>.</p>
<p>I want to update <code>df['x']</code> such that the value at index i is changed to the value at index i minus the mean of the previous 1000 values.</p>
<p>How to achieve that in Python?</p>
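<p>Concretely, this is the behaviour I'm after, sketched with a small window of 3 instead of 1000 (the data is made up):</p>

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0, 5.0]})

# Mean of the *previous* n values: shift(1) excludes the current row;
# min_periods=1 lets the first rows use however many values exist so far
n = 3
prev_mean = df["x"].shift(1).rolling(n, min_periods=1).mean()
df["x"] = df["x"] - prev_mean
print(df)
```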
|
<python><python-3.x><pandas><dataframe>
|
2023-03-28 13:59:25
| 1
| 397
|
Ash
|
75,867,199
| 15,756,325
|
Why is mypy not inferring Django CharField choices as type with django-stubs?
|
<p>I have a Django model with a field that is a <code>models.CharField</code> with choices:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class MyChoice(models.TextChoices):
FOO = "foo", _("foo")
BAR = "bar", _("bar")
BAZ = "baz", _("baz")
class MyModel(models.Model):
my_field = models.CharField(choices=MyChoice.choices, max_length=64)
</code></pre>
<p>Then, if I use this choice field in a function that has it as a typed parameter, mypy catches it as an error.</p>
<pre class="lang-py prettyprint-override"><code>def use_choice(choice: MyChoice) -> None:
pass
def call_use_choice(model: MyModel) -> None:
use_choice(model.my_field)
# error: Argument 1 to "use_choice" has incompatible type "str"; expected "MyChoice" [arg-type]
</code></pre>
<p>My configuration is as follows:</p>
<pre><code># pyproject.toml
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
Django = "^3.2.8"
[tool.poetry.dependencies]
mypy = "^1.1.1"
django-stubs = "^1.16.0"
</code></pre>
<pre><code># mypy.ini
[mypy]
python_version = 3.10
ignore_missing_imports = True
plugins =
mypy_django_plugin.main
[mypy.plugins.django-stubs]
django_settings_module = "config.settings"
</code></pre>
<p>Why is this happening?</p>
|
<python><django><django-models><mypy><django-stubs>
|
2023-03-28 13:53:16
| 0
| 409
|
tinom9
|
75,867,074
| 119,253
|
testing.postgres failing with "Command not found: initdb"
|
<p>I'm trying to run some unit tests on a python function that updates a database.
I'm attempting to use testing.postgres to create a local database upon which to carry out the tests.
I have the following code for my unit test:</p>
<pre><code>from unittest import TestCase
import testing.postgresql
import psycopg2
import postgres
class TestCOUDB(TestCase):
def test_COUDB(self):
self.postgresql = testing.postgresql.Postgresql(port=7654)
print(self.postgresql.url())
</code></pre>
<p>When I run this, I get:</p>
<pre><code>Command not found: initdb
</code></pre>
<p>I have Postgres installed in the Python environment I'm using, but from my limited understanding, testing.postgresql does not know where to find it.
Can anyone help?</p>
|
<python><postgresql><python-unittest>
|
2023-03-28 13:42:25
| 0
| 2,936
|
Columbo
|
75,867,049
| 14,230,633
|
LGBM's best_iteration_ is None when using early early_stopping callback even though early stopping occurs
|
<p>If I fit a model with</p>
<pre><code>gbm = lgb.LGBMRegressor(learning_rate=0.01, n_estimators=250)
gbm.fit(
X_train,
y_train,
eval_set=[(X_test, y_test)],
eval_metric='l2',
callbacks=[lgb.early_stopping(3)],
verbose=-1
)
</code></pre>
<p>the output is</p>
<pre><code>Early stopping, best iteration is:
[210] valid_0's l2: 0.00261499
</code></pre>
<p>But <code>gbm.best_iteration_</code> is None. I think it should be 210?</p>
<p>If I run the same model but use <code>early_stopping_rounds=3</code> instead of <code>callbacks=...</code>, I do get <code>gbm.best_iteration_</code> of 210. Any idea why?</p>
|
<python><machine-learning><lightgbm>
|
2023-03-28 13:40:10
| 1
| 567
|
dfried
|
75,866,753
| 2,171,348
|
how to get variable starting with double-underscore displayed in eclipse/pydev debug view
|
<p>In the eclipse/pydev debug view, if a dict object has a key starting with double-underscore, like { "__my_key": , ...}, that key won't show up in the variables view in debug mode; the other keys all show up as expected.
Is there a setting to change to get the double-underscore key displayed in debug mode?</p>
|
<python><eclipse><debugging><pydev>
|
2023-03-28 13:14:41
| 1
| 481
|
H.Sheng
|
75,866,706
| 3,241,257
|
Is there an easier way to allow all http methods in Sanic framework?
|
<p>I'm developing a proxy server by using Sanic. (The reason why I chose this framework is that it is so fast and it uses asynchronous event loop.)</p>
<p>So I want to catch all requests to my Sanic server and do some post-processings.</p>
<pre><code>@app.route("/<path:path>", methods = []) # Here is the question point
async def proxy(request: Request, path: str):
// do something
return redirect("...")
</code></pre>
<p>I think it is not good practice to write out all the HTTP methods and pass them through the <code>methods</code> parameter. So I want to know if there is a more effective way to allow all HTTP methods on my route.</p>
<p>Thanks in advance.</p>
<ul>
<li>Python version: 3.11</li>
<li>Sanic version: 23.3.0</li>
</ul>
|
<python><http><sanic>
|
2023-03-28 13:11:01
| 2
| 908
|
jeongmin.cha
|
75,866,487
| 626,664
|
How to find similarity between two vectors?
|
<p>I have two embedding vectors A1 and B1, of size 32 (it could be 64, and so on).</p>
<pre><code>A = torch.rand(32)
A1 = torch.unsqueeze(A, dim = 1)
A1
B = torch.rand(32)
B1 = torch.unsqueeze(B, dim = 1)
B1
</code></pre>
<p>Now, I want to find how similar they are, based on every 4, 8, or 16 elements, so that the output will be a vector of, say, 4 (or 8, and so on) elements.</p>
<pre><code>ExampleOutput = [.4, .5, .6, .7] # as the window is 8 (from 32 sized vector)
</code></pre>
<p>Meaning that the similarity will be computed over a window. I know one way could be using torch histogram.</p>
<ol>
<li>Any idea how can I compute this?</li>
</ol>
<p>Thanks in advance.</p>
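<p>To make "based on every 4, 8, 16 elements" concrete, here is a numpy sketch of what I have in mind (numpy instead of torch only to keep it short, and cosine similarity is just my assumed metric):</p>

```python
import numpy as np

def windowed_cosine(a, b, window):
    # Split both vectors into chunks of `window` elements and compute
    # cosine similarity chunk by chunk -> one score per window
    a = a.reshape(-1, window)
    b = b.reshape(-1, window)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

rng = np.random.default_rng(0)
A = rng.random(32)
B = rng.random(32)
print(windowed_cosine(A, B, 8))  # 4 similarity scores for a 32-sized vector
```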
|
<python><vector><pytorch><similarity>
|
2023-03-28 12:50:39
| 2
| 1,559
|
Droid-Bird
|
75,865,962
| 2,772,805
|
color meanings when shapely polygons are displayed
|
<p>Is there a meaning to the colors (red, green) when you display a shapely polygon object?</p>
<p><a href="https://i.sstatic.net/zUjB5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUjB5.png" alt="enter image description here" /></a></p>
|
<python><shapely>
|
2023-03-28 11:57:15
| 1
| 429
|
PBrockmann
|
75,865,819
| 9,758,352
|
How to install python cryptography 38.0.4 without pip on Ubuntu
|
<p>My pip became corrupted after I tried to install Apache Airflow. Every time I try to use it I get the following message:</p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/pip", line 11, in <module>
load_entry_point('pip==20.0.2', 'console_scripts', 'pip')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 490, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2854, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2445, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2451, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 24, in <module>
from pip._internal.exceptions import CommandError
File "/usr/lib/python3/dist-packages/pip/_internal/exceptions.py", line 10, in <module>
from pip._vendor.six import iteritems
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 65, in <module>
vendored("cachecontrol")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored
__import__(modulename, globals(), locals(), level=0)
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module>
File "/home/bill/.local/lib/python3.8/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/home/bill/.local/lib/python3.8/site-packages/OpenSSL/crypto.py", line 3268, in <module>
_lib.OpenSSL_add_all_algorithms()
AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'
</code></pre>
<p>Based on <a href="https://stackoverflow.com/a/75053968/9758352">this</a> answer, I need to downgrade cryptography. However, I cannot do that with pip since I get the same error. I tried downloading the <code>wheel</code> file from <a href="https://pypi.org/project/cryptography/38.0.4/#files" rel="nofollow noreferrer">here</a> to my Ubuntu machine; specifically <code>cryptography-38.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl</code>. However, when I extracted it, there was no <code>setup.py</code> file inside. The only way I can imagine installing it without pip is moving the extracted files to wherever Python libraries are stored, but I cannot find where the old cryptography library was installed.</p>
<p>How to install the specific version on my machine without using pip and considering there is no <code>setup.py</code> file?</p>
|
<python><python-3.x><ubuntu><pip><pypi>
|
2023-03-28 11:43:38
| 1
| 457
|
BillTheKid
|
75,865,732
| 10,291,291
|
Python Subprocess cannot find module
|
<p>Been stuck on this for a while and can't quite work it out.</p>
<h1>Setup</h1>
<h2>Folder Structure</h2>
<pre><code>- Home
- folder1
+ script1.py
+ script2.py
- folder2
+ script3.py
</code></pre>
<h2>Scripts</h2>
<h3>Script1</h3>
<pre><code>import logging
logger=........
import os
cwd = os.getcwd()
import sys
import subprocess
print(sys.executable)
subprocess.run([sys.executable, 'folder1/script2.py'], cwd=cwd)
</code></pre>
<h3>Script2</h3>
<pre><code>import logging
logger=........
try:
from folder2 import script3
except Exception as e:
logger.exception(f'Failed to open: {str(e)}')
</code></pre>
<h3>Script3</h3>
<pre><code>import logging
logger=........
<<VariousFunctions>>
</code></pre>
<p>I'm within a virtual environment and use spyder.</p>
<h1>Issue</h1>
<p><code>script2</code> and <code>script3</code> run fine when executed directly; the default working directory is the project home, Home.
However, I cannot get <code>script2</code> to find <code>script3</code> when it is launched via <code>subprocess</code>. I have ensured it uses the correct interpreter (<code>sys.executable</code>) and set the working directory to the same value via <code>os.getcwd()</code>. I log both, and as far as I can see everything is set correctly, but the module still cannot be imported.</p>
<pre><code>ModuleNotFoundError: No module named 'folder2'
</code></pre>
<p>What am I missing?</p>
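<p>My current workaround attempt, based on my assumption that the child process doesn't inherit the project root on <code>sys.path</code>, so I pass it explicitly through <code>PYTHONPATH</code> — a self-contained demonstration:</p>

```python
import os
import subprocess
import sys

cwd = os.getcwd()

# Make the project root visible to the child interpreter explicitly,
# since the parent's sys.path is not inherited by a subprocess
env = dict(os.environ)
env["PYTHONPATH"] = cwd + os.pathsep + env.get("PYTHONPATH", "")

check = "import sys; print('OK' if {!r} in sys.path else 'MISSING')".format(cwd)
result = subprocess.run([sys.executable, "-c", check],
                        cwd=cwd, env=env, capture_output=True, text=True)
print(result.stdout.strip())
```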
|
<python><subprocess><python-import><python-venv><python-packaging>
|
2023-03-28 11:34:17
| 1
| 2,924
|
Quixotic22
|
75,865,619
| 5,024,631
|
Convert any binary combination to boolean
|
<p>I have a function that returns numpy arrays of numbers, where each array contains exactly two distinct values, e.g.:</p>
<pre><code>np.array([0,3,3,0,0,0,3])
np.array([1,4,1,1,1,4,4])
np.array([0,3,3,0,0,0,3])
</code></pre>
<p>How could I convert these binary combinations of numbers to boolean, i.e. <code>np.array([True, False, True, True, False])</code>?</p>
<p>I've been playing around with a lot of stuff, but it seems to only work when the numbers are already 0 and 1.</p>
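<p>One direction I tried to formalise, assuming the smaller of the two values should map to <code>True</code> (the helper name is mine):</p>

```python
import numpy as np

def to_bool(arr):
    # Map the smaller of the two distinct values to True, the larger to False
    return arr == arr.min()

print(to_bool(np.array([0, 3, 3, 0, 0, 0, 3])))
print(to_bool(np.array([1, 4, 1, 1, 1, 4, 4])))
```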
|
<python><numpy><boolean>
|
2023-03-28 11:23:11
| 4
| 2,783
|
pd441
|
75,865,524
| 4,428,377
|
Error "Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary." when running TFLite model in Android
|
<p>I built a <strong>keras-ocr tflite</strong> model using <a href="https://github.com/tulasiram58827/ocr_tflite" rel="nofollow noreferrer">ocr_tflite</a>.</p>
<p>Following are the tensorflow library versions used on mac osx m1 -</p>
<pre><code>tensorflow-datasets 4.8.3
tensorflow-deps 2.10.0
tensorflow-estimator 2.9.0
tensorflow-macos 2.9.0
tensorflow-metadata 1.12.0
tensorflow-metal 0.5.0
</code></pre>
<p>The model has the following shapes:</p>
<pre><code>input - [ 1 31 200 1]
<class 'numpy.float32'>
output - [ 1 48]
<class 'numpy.int64'>
</code></pre>
<p>I hosted this model on firebase and tried using it in my android app using below code -</p>
<pre><code>@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final File[] modelFile = new File[1];
final Interpreter[] interpreter = new Interpreter[1];
Interpreter.Options options = new Interpreter.Options();
FirebaseCustomRemoteModel remoteModel = new FirebaseCustomRemoteModel.Builder("ocr-float16-3-8").build();
FirebaseModelDownloadConditions conditions = new FirebaseModelDownloadConditions.Builder().build();
FirebaseModelManager.getInstance()
.download(remoteModel, conditions)
.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void unused) {
Log.d("TAG", "downloading model");
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel).addOnCompleteListener(new OnCompleteListener<File>() {
@Override
public void onComplete(@NonNull Task<File> task) {
modelFile[0] = task.getResult();
interpreter[0] = new Interpreter(modelFile[0], options);
Log.d("TAG", "built local model");
predict(interpreter[0]);
}
});
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
Log.e("TAG", "failure " + e);
}
});
}
private void predict(Interpreter interpreter) {
Bitmap grayedBitmap;
Bitmap inputBitmap = null;
try {
inputBitmap = getBitmapFromAssets("word_4.png"); // not included in the question
} catch (IOException e) {
throw new RuntimeException(e);
}
inputBitmap = Bitmap.createScaledBitmap(inputBitmap, 200, 31, true);
grayedBitmap = toGrayScale(inputBitmap); // not included in the question
Log.d("TAG", "created grayed bitmap");
ByteBuffer input = getByteBuffer(grayedBitmap); // not included in the question
Log.d("TAG", "converted grayed bitmap");
int bufferSize = 48 * java.lang.Integer.SIZE / java.lang.Byte.SIZE;
ByteBuffer modelOutput = ByteBuffer.allocateDirect(bufferSize).order(ByteOrder.nativeOrder());
interpreter.run(input, modelOutput);
modelOutput.rewind();
IntBuffer probabilities = modelOutput.asIntBuffer();
StringBuilder predicted = new StringBuilder();
String []alphabetIndex = new String[]{"0","1","2","3","4","5","6","7","8","9","a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"};
for(int i=0; i<48; i++) {
int prob = probabilities.get(i);
if(prob != -1 && prob != 36) {
predicted.append(alphabetIndex[prob]);
}
}
Log.d("TAG", "predicted string is " + predicted.toString());
}
</code></pre>
<p>But I get error -</p>
<pre><code>E/AndroidRuntime: FATAL EXCEPTION: main
Process: games.cloudfeather.mlkitcustom2, PID: 16074
java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: NodeDef mentions attr 'blank_index' not in Op<name=CTCGreedyDecoder; signature=inputs:T, sequence_length:int32 -> decoded_indices:int64, decoded_values:int64, decoded_shape:int64, log_probability:T; attr=merge_repeated:bool,default=false; attr=T:type,default=DT_FLOAT,allowed=[DT_FLOAT, DT_DOUBLE]>; NodeDef: {{node CTCGreedyDecoder}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
(while executi
at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:163)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:360)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:319)
at games.cloudfeather.mlkitcustom2.MainActivity.predict(MainActivity.java:115)
at games.cloudfeather.mlkitcustom2.MainActivity.access$000(MainActivity.java:39)
at games.cloudfeather.mlkitcustom2.MainActivity$2$1.onComplete(MainActivity.java:63)
at com.google.android.gms.tasks.zzi.run(com.google.android.gms:play-services-tasks@@18.0.2:1)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)
at android.app.ActivityThread.main(ActivityThread.java:8751)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1135)
</code></pre>
<p>on line <code>interpreter.run(input, modelOutput);</code></p>
<p>I have following lines added to my <strong>build.gradle</strong> file (app level) -</p>
<pre><code>dependencies {
implementation 'androidx.appcompat:appcompat:1.4.1'
implementation 'com.google.android.material:material:1.5.0'
implementation 'androidx.constraintlayout:constraintlayout:2.1.3'
implementation platform('com.google.firebase:firebase-bom:31.3.0')
implementation 'com.google.firebase:firebase-ml-model-interpreter:22.0.4'
implementation 'com.google.firebase:firebase-ml-modeldownloader'
implementation 'org.tensorflow:tensorflow-lite:2.4.0'
implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-gpu:2.3.0'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.4.0'
testImplementation 'junit:junit:4.13.2'
androidTestImplementation 'androidx.test.ext:junit:1.1.3'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.4.0'
}
</code></pre>
<p>I have looked into other Stack Overflow questions with the same title, like <a href="https://stackoverflow.com/questions/48723686/check-whether-your-graphdef-interpreting-binary-is-up-to-date-with-your-graphde">here</a> and <a href="https://stackoverflow.com/questions/56210011/fix-for-check-weather-your-graph-def-interpreting-binary-is-up-to-date-with-you">here</a>, where people have solved it by downgrading TensorFlow, but I don't know which version I should downgrade to.</p>
<p>Also when I use prebuilt <strong>keras-ocr tflite</strong> models from the <a href="https://github.com/tulasiram58827/ocr_tflite/tree/main/models" rel="nofollow noreferrer">repo</a> itself, everything works fine.</p>
<p>Question: Which version can I use so that I can run my model in my app successfully?</p>
<p>Thanks</p>
|
<python><android><firebase><tensorflow><tflite>
|
2023-03-28 11:12:44
| 1
| 1,668
|
aquaman
|
75,865,420
| 11,546,773
|
Polars apply return only null after second row
|
<p>I'm trying to apply a custom function to each row of the dataframe. The function itself returns a dataframe with more columns and rows. To obtain one big dataframe I use <code>explode()</code> in the end.</p>
<p>The first <code>apply()</code> call completes successfully, but after that it only returns <code>null</code>. When debugging you can see that the actual dataframe is returned from the function, yet it still ends up as a row of nulls.</p>
<p>Any ideas on how to resolve this?</p>
<p><strong>Current result:</strong></p>
<pre><code>Polars
shape: (9, 2)
βββββββββββββ¬ββββββββββββ
β column_0 β column_1 β
β --- β --- β
β f64 β f64 β
βββββββββββββͺββββββββββββ‘
β 23.92626 β 22.921899 β
β 2.099872 β 17.662035 β
β 20.393656 β 5.777625 β
β 14.643948 β 4.797752 β
β ... β ... β
β null β null β
β null β null β
β null β null β
β null β null β
βββββββββββββ΄ββββββββββββ
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import polars as pl
import numpy as np
def myfunc(x):
data1 = np.random.uniform(low=0, high=25, size=(5,))
data2 = np.random.uniform(low=0, high=25, size=(5,))
# Just a example dataframe to show
df = pl.DataFrame([data1, data2])
return df
df = pl.DataFrame({
'val1': [1, 2, 3, 4, 5],
'val2': [1, 2, 3, 4, 5]
})
output = df.apply(myfunc)
output = output.explode(output.columns)
print('\nPolars\n',output)
</code></pre>
|
<python><dataframe><python-polars>
|
2023-03-28 11:02:36
| 2
| 388
|
Sam
|
75,864,929
| 7,123,933
|
Can't open lib 'ODBC Driver 18 for SQL Server in Docker image
|
<p>I created an image in Docker with the following Dockerfile:</p>
<pre><code># Write your dockerfile code here
# Use an official Python runtime as a base image
FROM python:3.8-slim-buster
USER root
# Upgrade version of pip to avoid warning message
RUN pip install --upgrade pip
RUN apt-get update && apt-get install unixodbc-dev -y
# clean the install.
# Set the working directory
WORKDIR /
COPY requirements/core.txt .
# Install the application's Python packages
ARG PIP_EXTRA_INDEX_URL_GAR
RUN pip install --extra-index-url "${PIP_EXTRA_INDEX_URL_GAR}" -r core.txt
# Set environment and timezone
# ENV GOOGLE_APPLICATION_CREDENTIALS="/app/config/adc.json"
ENV TZ=America/Toronto
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
COPY . .
# Entrypoint to run the app
ENTRYPOINT ["python"]
CMD ["Tableau_export.py"]
</code></pre>
<p>However, I am receiving an error like this:</p>
<pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 18 for SQL Server' : file not found (0) (SQLDriverConnect)")
(Background on this error at: https://sqlalche.me/e/14/dbapi)
</code></pre>
<p>I tried to resolve it by adding the lines for installing SQL Server ODBC Driver 18 for Ubuntu 18.04, but I am probably doing it incorrectly and am still receiving errors :(</p>
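<p>For reference, this is roughly what I was adding before the COPY step (taken from Microsoft's install instructions for Debian 10, which <code>python:3.8-slim-buster</code> is based on — I may be getting a detail wrong):</p>

```dockerfile
# Add Microsoft's package repository and install the ODBC driver
RUN apt-get update && apt-get install -y curl gnupg2 \
 && curl -fsSL https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
 && curl -fsSL https://packages.microsoft.com/config/debian/10/prod.list \
      > /etc/apt/sources.list.d/mssql-release.list \
 && apt-get update \
 && ACCEPT_EULA=Y apt-get install -y msodbcsql18 \
 && rm -rf /var/lib/apt/lists/*
```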
|
<python><docker>
|
2023-03-28 10:12:15
| 0
| 359
|
Lukasz
|
75,864,903
| 17,639,970
|
filtering out node_id based on their tags?
|
<p>I'd like to work with a portion of the nodes of an *.osm file. It looks like this:</p>
<pre><code> <node id="614006085" visible="true" version="5" changeset="87511557" timestamp="2020-07-03T16:17:13Z" user="clay_c" uid="119881" lat="40.7127313" lon="-73.9902088">
<tag k="description" v="all F,&lt;F&gt; trains"/>
<tag k="level" v="0"/>
<tag k="name" v="East Broadway (F)"/>
<tag k="railway" v="subway_entrance"/>
<tag k="wheelchair" v="no"/>
</node>
<node id="619520020" visible="true" version="3" changeset="86948275" timestamp="2020-06-21T22:49:28Z" user="clay_c" uid="119881" lat="40.7160774" lon="-73.9948416"/>
<node id="889261487" visible="true" version="3" changeset="22009634" timestamp="2014-04-28T19:52:58Z" user="Rub21_nycbuildings" uid="1781294" lat="40.7139881" lon="-74.0106222"/>
<node id="889261517" visible="true" version="3" changeset="22009634" timestamp="2014-04-28T19:52:58Z" user="Rub21_nycbuildings" uid="1781294" lat="40.7147476" lon="-74.0112224"/>
<node id="889261524" visible="true" version="4" changeset="22010391" timestamp="2014-04-28T20:30:28Z" user="Rub21_nycbuildings" uid="1781294" lat="40.7132375" lon="-74.0094599"/>
<node id="889261540" visible="true" version="3" changeset="65654404" timestamp="2018-12-20T22:41:08Z" user="rbrome" uid="9266455" lat="40.7129150" lon="-74.0088281"/>
</code></pre>
<p>Is it possible to automatically filter out node ids that have a specific tag via the OSMX.jl package or OSMnx?
I've tried XML parsing; however, it returns some sort of <code>not well-formed</code> error for some lines, even though those lines look perfectly valid. For instance, the following line</p>
<pre><code> <node id="2826472874" visible="true" version="1" changeset="22043570" timestamp="2014-04-30T13:38:20Z" user="Rub21_nycbuildings" uid="1781294" lat="40.7132669" lon="-73.9967686"/>
</code></pre>
<p>causes an error of <code>(invalid token): line 8228, column 2</code> when I try to parse the data via ElementTree. What is the issue?</p>
<p>Is it possible to directly identify nodes with specific tags (like "id="3774690783"" for "railway" in <code> <tag k="railway" v="subway_entrance"/></code>) and store them in a dict, list, etc.?</p>
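<p>For what it's worth, on a small well-formed snippet plain <code>xml.etree.ElementTree</code> does let me collect ids by tag, so I suspect the issue is elsewhere in the file (the helper name and snippet are mine):</p>

```python
import xml.etree.ElementTree as ET

osm_snippet = """<osm>
  <node id="614006085" lat="40.7127313" lon="-73.9902088">
    <tag k="railway" v="subway_entrance"/>
    <tag k="name" v="East Broadway (F)"/>
  </node>
  <node id="619520020" lat="40.7160774" lon="-73.9948416"/>
</osm>"""

def nodes_with_tag(xml_text, key, value):
    # Collect {node id: (lat, lon)} for nodes carrying the given tag
    root = ET.fromstring(xml_text)
    ids = {}
    for node in root.iter("node"):
        for tag in node.findall("tag"):
            if tag.get("k") == key and tag.get("v") == value:
                ids[node.get("id")] = (node.get("lat"), node.get("lon"))
    return ids

print(nodes_with_tag(osm_snippet, "railway", "subway_entrance"))
```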
|
<python><julia><openstreetmap>
|
2023-03-28 10:09:33
| 0
| 301
|
Rainbow
|
75,864,866
| 5,197,329
|
OmegaConf - how to delete a single parameter
|
<p>I have a code that looks something like this:</p>
<pre><code>def generate_constraints(c):
if c.name == 'multibodypendulum':
con_fnc = MultiBodyPendulum(**c)
</code></pre>
<p>where c is an OmegaConf object containing a bunch of parameters. Now I would like to pass all the parameters in c except <code>name</code> to the MultiBodyPendulum class (the name was only used to select which class to send the rest of the parameters to). However, I have no idea how to make a copy of c without one parameter.
Does anyone have a good solution for this?</p>
|
<python><fb-hydra><omegaconf>
|
2023-03-28 10:05:59
| 2
| 546
|
Tue
|
75,864,807
| 1,175,081
|
Mypy complaining about [no-any-return] rule being violated for obvious boolean expression
|
<p>The following code throws a mypy error:</p>
<pre><code>from typing import Dict, Any
def asd(x: Dict[str, Any]) -> bool:
return x['a'] == 1
asd({"x": 2})
</code></pre>
<p>IMO it doesn't matter what is passed as a dict. <code>x['a'] == 1</code> should always be a boolean. But mypy complains with:</p>
<pre><code>test.py:4: error: Returning Any from function declared to return "bool" [no-any-return]
</code></pre>
<p>Any idea how to fix this besides peppering our codebase with <code># type: ignore[no-any-return]</code>?</p>
<p>my setup.cfg for reference:</p>
<pre><code>namespace_packages = True
explicit_package_bases = True
check_untyped_defs = True
warn_return_any = True
warn_unused_ignores = True
show_error_codes = True
</code></pre>
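<p>For reference, the only workaround I've found so far is wrapping the comparison in <code>bool()</code>, which satisfies mypy but feels redundant given that <code>==</code> returns a bool at runtime anyway:</p>

```python
from typing import Any, Dict

def asd(x: Dict[str, Any]) -> bool:
    # bool(...) narrows the comparison result from Any to bool for mypy
    return bool(x['a'] == 1)

print(asd({"a": 1}), asd({"a": 2}))
```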
|
<python><mypy><python-typing>
|
2023-03-28 10:00:04
| 2
| 14,967
|
David Schumann
|
75,864,772
| 973,936
|
Failed to install Feast in Python 3.7
|
<p>I am trying to install the feature store library Feast with Python 3.7.</p>
<pre><code>~$ pip install feast==0.25.2
</code></pre>
<p>Running this command gives us the following error</p>
<pre><code>Collecting feast==0.25.2
Using cached feast-0.25.2.tar.gz (3.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python3.7 /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpnujfu5ex
cwd: /tmp/pip-install-9meyapk7/feast_a83bf8faa7b342529f74972f8ca885ad
Complete output (1 lines):
error in feast setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
</code></pre>
<p>Please note I have already run "pip install --upgrade setuptools"</p>
<p>Can you please suggest how to resolve this issue?</p>
|
<python><feast>
|
2023-03-28 09:56:36
| 1
| 1,152
|
habibalsaki
|
75,864,755
| 9,510,800
|
How to append all possible pairings as a column in pandas
|
<p>I have a dataframe like below:</p>
<pre><code>Class Value1 Value2
A 2 1
B 3 3
C 4 5
</code></pre>
<p>I want to generate all possible pairings; my desired output dataframe looks like below:</p>
<pre><code>Class Value1 Value2 A_Value1 A_Value2 B_Value1 B_Value2 C_Value1 C_Value2
A 2 1 2 1 3 3 4 5
B 3 3 2 1 3 3 4 5
C 4 5 2 1 3 3 4 5
</code></pre>
<p>Please assume there are nearly 1000 such classes. Is there any efficient way to do this? Ultimately, I want to find the difference between (Value1 and Value2) across each pairing.</p>
<p>EDIT:
A_B_Value is created based on the formula</p>
<pre><code>A_B_Value = absolute(ClassA_value1 - ClassB_value1) + absolute(ClassA_value2 - ClassB_value2)
Class Value1 Value2 A_B_Value A_C_Value B_C_Value
A 2 1 3 6 3
B 3 3 3 6 3
C 4 5 3 6 3
</code></pre>
<p>Thank you</p>
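If the pairwise absolute differences in the edit are the real goal, the intermediate column explosion can be skipped entirely with a broadcast over the value matrix (a sketch using the sample data above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Class": ["A", "B", "C"],
                   "Value1": [2, 3, 4],
                   "Value2": [1, 3, 5]})

vals = df[["Value1", "Value2"]].to_numpy()          # shape (n, 2)
# Pairwise L1 distance between every pair of rows via broadcasting:
# |v_i - v_j| summed over the two value columns, giving an (n, n) matrix.
dist = np.abs(vals[:, None, :] - vals[None, :, :]).sum(axis=2)
pairs = pd.DataFrame(dist, index=df["Class"], columns=df["Class"])
print(pairs.loc["A", "B"])  # 3, matching A_B_Value in the edit
```

This is O(n²) in memory for n classes, which is fine for ~1000 classes (a 1000×1000 matrix), and avoids creating thousands of paired columns first.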
|
<python><pandas><numpy>
|
2023-03-28 09:55:00
| 2
| 874
|
python_interest
|
75,864,690
| 10,134,422
|
Building airflow in docker compose with pycurl in requirements gets Permission denied: 'curl-config'
|
<p><strong>The problem</strong></p>
<p>When building a docker image for apache airflow, build fails at <em>RUN pip install -r requirements.txt</em> (when installing pycurl package) with:</p>
<pre><code>Permission denied: 'curl-config'
</code></pre>
<p><strong>What I think the problem is</strong></p>
<p>I think this problem relates to airflow insisting on packages in requirements.txt being installed by a non-root user (ie 'airflow'). When adding <em>USER root</em> to the dockerfile, it errors out with <em>You are running pip as root. Please use 'airflow' user to run pip!</em></p>
<p>Other non-airflow docker containers are installing their packages without seeing this issue</p>
<p><strong>What I've tried</strong></p>
<ul>
<li>creating a docker group, adding the user and owning the socket - every type of answer listed <a href="https://stackoverflow.com/questions/48957195/how-to-fix-docker-got-permission-denied-issue?page=1&tab=scoredesc#tab-top">here</a></li>
<li>removing the requirements.txt step and adding the packages instead to the docker-compose.yaml file as suggested <a href="https://stackoverflow.com/questions/67851351/cannot-install-additional-requirements-to-apache-airflow">here</a></li>
<li>changing the airflow user id to 50000</li>
</ul>
<p>None of this has resolved the problem - it is always the same error.</p>
<p><strong>Code I'm using</strong></p>
<p>The code I'm using can be got from here:</p>
<p><code>curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.2/docker-compose.yaml'</code></p>
<p>I am following <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin" rel="nofollow noreferrer">this install documentation</a></p>
<p>The requirements are:</p>
<pre><code>awscli==1.20.65
beeline==0.0.9a0
boto3==1.18.65
botocore==1.21.65
croniter==1.0.15
elasticsearch==7.13.4
setuptools==58.5.3
Jinja2==2.11.3
krbticket==1.0.6
pycurl==7.44.1
PyHive==0.6.4
pyhocon==0.3.58
requests-kerberos==0.13.0
requests==2.27.1
python-dotenv==0.20.0
impyla==0.17.0
pandas==1.5.3
</code></pre>
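One commonly suggested pattern for the official Airflow image (a sketch, not verified against this exact setup; package names assume the Debian-based base image) is to switch to root only for the system packages pycurl builds against, then drop back to the airflow user before running pip, since that image only allows pip as the airflow user:

```dockerfile
FROM apache/airflow:2.5.2
USER root
# pycurl compiles against libcurl, so install the dev headers and a
# compiler as root first (curl-config comes from libcurl4-openssl-dev).
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       build-essential libcurl4-openssl-dev libssl-dev \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
# ...then return to the airflow user, the only user allowed to run pip.
USER airflow
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
```

The docker-compose.yaml would then reference <code>build: .</code> instead of the stock <code>image:</code> line.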
|
<python><docker><docker-compose><airflow><pycurl>
|
2023-03-28 09:48:39
| 1
| 460
|
Sanchez333
|
75,864,676
| 1,367,705
|
How to print list of objects grouped by itertools' groupby?
|
<p>I have a class with some fields. I created a list of objects of this class and now I want to group the list by one of the fields inside the objects. It <em>almost</em> works, but unfortunately it shows me only one Dog, while the result for this data should include both.</p>
<p>The code:</p>
<pre><code>import json
from collections import OrderedDict, defaultdict
from itertools import chain
from itertools import groupby
class Animal:
def __init__(
self,
name,
age,
description,
width,
height
):
self.data = dict(
name=name,
age=age,
description=description,
width=width,
height=height
)
for k, v in self.data.items():
setattr(self, k, v)
def json(self):
return json.dumps(self.data)
animals = [Animal('Dog', '3', 'nice dog', 12, 13), Animal('Cat', '2', 'kitty', 12, 23), Animal('Dog', '5', 'woof', 12, 13)]
# print(vulns)
sorted_animals = sorted(animals, key=lambda Animal: Animal.name)
grouped_animals = [list(result) for key, result in groupby(
sorted_animals, key=lambda Animal: Animal.name)]
report = {"data": OrderedDict(), "meta": {"animals": {}}}
# print(grouped)
for group in grouped_animals:
for animal in group:
report["meta"]["animals"][animal.name] = animal.name, animal.age, animal.width
print(report)
</code></pre>
<p>The result:</p>
<p><code>{'data': OrderedDict(), 'meta': {'animals': {'Cat': ('Cat', '2', 12), 'Dog': ('Dog', '5', 12)}}}</code></p>
<p>What I want:</p>
<p><code>{'data': OrderedDict(), 'meta': {'animals': {'Cat': ('Cat', '2', 12), 'Dog': ('Dog', '5', 12), 'Dog': ('Dog', '3', 12)}}}</code></p>
<p>So I'm missing the first animal (<code>('Dog', '3', 'nice dog', 12, 13)</code>) in my list. How to solve this?</p>
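For reference, a dict can hold each key only once, so <code>{'Dog': ..., 'Dog': ...}</code> is not representable in Python; one possible sketch (using plain tuples instead of the Animal class for brevity) stores a list per group so no animal is overwritten:

```python
from itertools import groupby

animals = [("Dog", "3", 12), ("Cat", "2", 12), ("Dog", "5", 12)]

# groupby only groups consecutive equal keys, so sort by name first.
animals_sorted = sorted(animals, key=lambda a: a[0])

report = {"meta": {"animals": {}}}
for name, group in groupby(animals_sorted, key=lambda a: a[0]):
    # Collect every member of the group in a list; assigning the tuple
    # directly would overwrite earlier animals with the same name.
    report["meta"]["animals"][name] = list(group)
print(report["meta"]["animals"])
```

With the class-based version, the same change means appending each <code>animal</code> to <code>report["meta"]["animals"].setdefault(animal.name, [])</code> instead of assigning to the key.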
|
<python><group-by><itertools-groupby>
|
2023-03-28 09:47:10
| 1
| 2,620
|
mazix
|