QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,715,691 | 8,726,488 | Python regex expression, split the data by '=' character | <p>This is my data:</p>
<pre><code>text = 'name = steve age=23 city = usa address1 = - address2 = ca,2nd floor ,3rd aven. qualification = doctor'
</code></pre>
<p>and the pattern I wrote is <code>pattern=r'(\w+)\s*=\s*([-,\w]+)'</code>, which gives the output below.</p>
<p><code>match = re.findall(pattern, text)</code></p>
<pre><code>[('name', 'steve'),
('age', '23'),
('city', 'usa'),
('address1', '-'),
('address2', 'ca,2nd'),
('qualification', 'doctor')]
</code></pre>
<p>The above result is not what I expected, as the <code>address2</code> key has only a partial value. The expected value is 'ca,2nd floor ,3rd aven.'.</p>
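<p>For what it's worth, one way to capture values that may contain spaces is a lazy group plus a lookahead that stops right before the next <code>key =</code> or at the end of the string — a sketch, not necessarily the only approach:</p>

```python
import re

text = ('name = steve age=23 city = usa address1 = - '
        'address2 = ca,2nd floor ,3rd aven. qualification = doctor')

# value = lazy ".*?" that stops right before the next "word =" or the end of string
pattern = r'(\w+)\s*=\s*(.*?)(?=\s+\w+\s*=|\s*$)'
matches = re.findall(pattern, text)
```

<p>With the sample input this keeps 'ca,2nd floor ,3rd aven.' together, because neither 'floor' nor ',3rd' is followed by an '='.</p>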
| <python><regex> | 2023-07-18 18:36:09 | 1 | 3,058 | Learn Hadoop |
76,715,262 | 11,445,134 | How to compare two columns of values between two dataframes | <p>Let's say I have two dataframes <code>df1</code> and <code>df2</code> as below</p>
<pre><code>a = [1,1,1,2,3,4,4,5,6]
df1 = pd.DataFrame(a, columns=["id"])
id
0 1
1 1
2 1
3 2
4 3
5 4
6 4
7 5
8 6
</code></pre>
<p>and</p>
<pre><code>x = [1,2,3,4,5,6]
y = ["apple","orange","banana","lemon","kiwi","melon"]
df2 = pd.DataFrame(list(zip(x, y)), columns=["fruit_id", "fruit_name"])
fruit_id fruit_name
0 1 apple
1 2 orange
2 3 banana
3 4 lemon
4 5 kiwi
5 6 melon
</code></pre>
<p>I want to match the <code>fruit_id</code> from <code>df2</code> to the <code>id</code> in <code>df1</code> and generate a new column like below</p>
<pre><code> id result
0 1 apple
1 1 apple
2 1 apple
3 2 orange
4 3 banana
5 4 lemon
6 4 lemon
7 5 kiwi
8 6 melon
</code></pre>
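<p>A hedged sketch of one common way to do this — a lookup via <code>Series.map</code> on an index built from <code>df2</code> (a <code>merge</code> would work too):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 1, 1, 2, 3, 4, 4, 5, 6]})
df2 = pd.DataFrame({
    "fruit_id": [1, 2, 3, 4, 5, 6],
    "fruit_name": ["apple", "orange", "banana", "lemon", "kiwi", "melon"],
})

# look up each id in a fruit_id -> fruit_name mapping
df1["result"] = df1["id"].map(df2.set_index("fruit_id")["fruit_name"])
```

<p>Equivalently, <code>df1.merge(df2, left_on="id", right_on="fruit_id", how="left")</code> produces the same pairing as an extra column.</p>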
| <python><pandas><dataframe> | 2023-07-18 17:26:27 | 1 | 337 | Yippee |
76,715,162 | 10,620,003 | Stick the values for specific ids based on the date | <p>I have a df and I want to stick the values of each id into one row, based on its dates. For example, in the following df I want to see the values
of day "2019-01-01", "2019-01-02" and "2019-01-03" for id 1, and the same for id 2. I am using the following code; however, it puts the values in this order: "val1 val10 val11 val12 val2 val3 val4 val5"</p>
<pre><code>df = pd.DataFrame()
df['id'] = [1, 1, 2,2, 3, 3, 3]
df['date'] = ['2019-01-01', '2019-01-03', '2019-01-01','2019-01-02', '2019-01-01', '2019-01-02','2019-01-03']
df['val1'] = [10, 100, 20, 30, 40, 50, 60]
df['val2'] = [30, 30, -20, -30, -40,-50, -60 ]
df['val3'] = [50, 10, 120, 300, 140, 150, 160]
df['val4'] = [10, 100, 20, 30, 40, 50, 60]
df['val5'] = [30, 30, -20, -30, -40,-50, -60 ]
df['val6'] = [50, 10, 120, 300, 140, 150, 160]
df['val7'] = [10, 100, 20, 30, 40, 50, 60]
df['val8'] = [30, 30, -20, -30, -40,-50, -60 ]
df['val9'] = [50, 10, 120, 300, 140, 150, 160]
df['val10'] = [10, 100, 20, 30, 40, 50, 60]
df['val11'] = [30, 30, -20, -30, -40,-50, -60 ]
df['val12'] = [50, 10, 120, 300, 140, 150, 160]
</code></pre>
<p>Here is the code I use, and the output, which is wrong. I also want to fill a missing date with the value from the day before or after it.</p>
<pre><code>val_cols = df.filter(like='val').columns
df_s = (df.pivot('id', 'date', val_cols).groupby(level=0, axis=1).apply(lambda x:x.ffill(axis=1).bfill(axis=1)).sort_index(axis=1, level=1))
</code></pre>
<p><a href="https://i.sstatic.net/0zdgs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0zdgs.png" alt="enter image description here" /></a></p>
<p>Can you please help me with that?</p>
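<p>The "val1 val10 val11 val12 val2 ..." order comes from lexicographic string sorting of the column names. One hedged sketch (small dummy data; assumes a pandas version with keyword-style <code>pivot</code>) is to reorder the pivoted columns with a natural-sort key that extracts the number from each val column:</p>

```python
import re
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 2],
    "date": ["2019-01-01", "2019-01-03", "2019-01-01"],
    "val1": [10, 100, 20],
    "val2": [30, 30, -20],
    "val10": [50, 10, 120],
})

val_cols = [c for c in df.columns if c.startswith("val")]
wide = df.pivot(index="id", columns="date", values=val_cols)

# sort the (value, date) column pairs by (date, numeric val index),
# so val2 comes before val10 within each date
ordered = sorted(wide.columns,
                 key=lambda c: (c[1], int(re.search(r"\d+", c[0]).group())))
wide = wide[ordered]
```

<p>The same <code>ffill</code>/<code>bfill</code> step from the question can then be applied to the reordered frame.</p>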
| <python><pandas><dataframe> | 2023-07-18 17:14:45 | 2 | 730 | Sadcow |
76,715,140 | 11,681,306 | pip install requirements.txt allowing local path | <p>I understand this should be possible via:</p>
<p><code>pip install --proxy=http://$proxy_ip:8080 --find-links file:///usr/local/lib/myrepos -r requirements.txt</code></p>
<p>(proxy being a requirement of my environment here).</p>
<p>This progresses ok through the requirements.txt until it reaches the lines where I have libraries of my own that are not on PyPI.
Lines:</p>
<p><code>MyAwesomeLib1</code>
<code>MyAwesomeLib2</code>
<code>MyAwesomeLib3</code></p>
<p>those fail. (Could not find a version... version: none)</p>
<p>The content of <code>/usr/local/lib/myrepo</code>
is
MyRepoName1/MuAwesomeLib1
MyRepoName2/MuAwesomeLib2
MyRepoName3/MuAwesomeLib3</p>
<p>Things I tried:</p>
<ul>
<li>point to <code>--find-links file:///usr/local/lib/myrepos/MyRepoName1/</code></li>
<li>point to <code>--find-links file:///usr/local/lib/myrepos/MyRepoName1/MuAwesomeLib1</code></li>
<li>replace in <code>requirements.txt</code> , <code>MyAwesomeLib1</code> with <code>/usr/local/lib/myrepos/MyRepoName1/</code> (this works, but it is not what I need)</li>
</ul>
<p>Question 1, to which I couldn't find an answer, would be: what's the expected content of <code>/usr/local/lib/myrepo</code>? Is it expecting anything specific?</p>
<p>Other than that I am at loss as to why this is failing.</p>
<p>Thanks for any help!
(this is on python 3.10)</p>
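<p>For what it's worth, <code>--find-links</code> points at a flat directory (or HTML page) of <em>built distributions</em> — wheel or sdist files — rather than checked-out source trees, which would explain the "Could not find a version" errors. A hedged sketch of a layout it can consume (file names here are invented from the question's library names):</p>

```
/usr/local/lib/myrepos/
    MyAwesomeLib1-1.0.0-py3-none-any.whl
    MyAwesomeLib2-1.0.0-py3-none-any.whl
    MyAwesomeLib3-1.0.0.tar.gz
```

<p>The wheels could be produced beforehand with something like <code>pip wheel ./MyRepoName1 -w /usr/local/lib/myrepos</code> (assuming each repo directory is an installable project).</p>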
| <python><python-3.x><pip><requirements.txt> | 2023-07-18 17:11:59 | 1 | 309 | Fabri Ba |
76,714,994 | 3,623,723 | Recover config.log after failed `pip install` | <p>I'm trying to install (that is: compile) wxpython for Python 3.7.17 (which I know is out of support, but I have a module which works with nothing else and requires wxpython).
I'm trying to get everything set up with <code>pipenv</code>, on Kubuntu 20.04.</p>
<p>Ultimately, though, the problem is the same as with <code>pip install</code>:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip install wxpython --verbose
Using pip 23.1.2 from /home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/lib/python3.7/site-packages/pip (python 3.7)
Collecting wxpython
Using cached wxPython-4.2.1.tar.gz (73.7 MB)
Running command python setup.py egg_info
[...]
[... 5 minutes of busy compile messages ...]
Running command: build_py
Checking for /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/bin/waf-2.0.24...
"/home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python" /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/bin/waf-2.0.24 --wx_config=/tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/build/wxbld/gtk3/wx-config --gtk3 --python="/home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python" --out=build/waf/3.7/gtk3 configure build
Setting top to : /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b
Setting out to : /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/build/waf/3.7/gtk3
Checking for 'gcc' (C compiler) : /usr/bin/gcc
Checking for 'g++' (C++ compiler) : /usr/bin/g++
Checking for program 'python' : /home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python
Checking for python version >= 3.7.0 : 3.7.17
python-config : /home/<user>/.pyenv/shims/python3.7-config
Asking python-config for pyext '--cflags --libs --ldflags' flags : not found
The configuration failed
(complete log in /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/build/waf/3.7/gtk3/config.log)
Command '"/home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python" /tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/bin/waf-2.0.24 --wx_config=/tmp/pip-install-lpx0170p/wxpython_26a8ccd1be434df39a5b7b00b435a37b/build/wxbld/gtk3/wx-config --gtk3 --python="/home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python" --out=build/waf/3.7/gtk3 configure build ' failed with exit code 1.
Finished command: build_py (0m1.53s)
Finished command: build (4m48.646s)
Command '"/home/<user>/.local/share/virtualenvs/myvenv-_8TM2tw_/bin/python" -u build.py build' failed with exit code 1.
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>I'm struggling to figure out what went wrong, and have two possible leads:</p>
<p>1: what does the line <code>Asking python-config for pyext '--cflags --libs --ldflags' flags : not found</code> tell me? Who or what was not found? The only mention I find of this is <a href="https://stackoverflow.com/questions/76642835/how-to-load-a-local-version-of-python-using-pyenv">this question</a>, which has no answer, and the manpage for python-config. I've just checked that <code>--enable-shared</code> flag was in fact set when Python was compiled, (by recompiling with the flag explicitly specified).</p>
<p>2: Directly below that, it says <code>(complete log in /tmp/pip-install-lpx0170p.../config.log)</code>. However, that is a temporary directory which is deleted when pip terminates, so the log is gone before anyone gets a chance to look at it. Is there a way to prevent deletion of that log?</p>
| <python><wxpython><pipenv-install> | 2023-07-18 16:49:14 | 1 | 3,363 | Zak |
76,714,835 | 2,828,599 | PySpark loading from MySQL ends up loading the entire table? | <p>I am quite new to PySpark (or Spark in general). I am trying to connect Spark with a MySQL instance I have running on RDS. When I load the table like so, does Spark load the entire table in memory?</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.config("spark.jars", "/usr/share/java/mysql-connector-java-8.0.33.jar") \
.master("spark://spark-master:7077") \
.appName("app_name") \
.getOrCreate()
table_1_df = spark.read.format("jdbc").option("url", "jdbc:mysql://mysql:3306/some_db") \
.option("driver", "com.mysql.jdbc.Driver") \
.option("dbtable", "table1") \
.option("user", "user") \
.option("password", "pass") \
.load()
print(table_1_df.head())
</code></pre>
<p>If yes, is there a way to limit it, say by asking Spark to load contents based on a condition? I would like to see if it's possible to limit the fetch by (say) a primary key.</p>
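<p>For context, Spark's JDBC reader is lazy — <code>.load()</code> fetches only schema metadata, and rows are pulled when an action runs, with filters on the DataFrame pushed down as SQL <code>WHERE</code> clauses where possible. A hedged sketch of limiting the fetch up front is to pass a subquery as the <code>dbtable</code> option (standard JDBC-source behaviour; table and column names below are from the question):</p>

```python
# Hedged sketch: the subquery alias runs on the MySQL side, so only the
# matching rows ever cross the wire.
jdbc_options = {
    "url": "jdbc:mysql://mysql:3306/some_db",
    "driver": "com.mysql.cj.jdbc.Driver",
    "dbtable": "(SELECT * FROM table1 WHERE id <= 1000) AS t",
    "user": "user",
    "password": "pass",
}
# table_1_df = spark.read.format("jdbc").options(**jdbc_options).load()
```

<p>For parallel reads keyed on a primary key, the <code>partitionColumn</code>/<code>lowerBound</code>/<code>upperBound</code>/<code>numPartitions</code> options are the usual companion settings.</p>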
| <python><apache-spark><pyspark><apache-spark-sql><python-3.10> | 2023-07-18 16:27:11 | 1 | 321 | Bhargav Panth |
76,714,669 | 22,212,435 | Trying to create a default name system (e.g. no name given by a user) for labels. Not sure about the algorithms I have used | <p>Users will be able to create labels in a list row by row <strong>or</strong> delete <strong>existing</strong> ones by index. They will be able to give a name (in this question by name I mean the text in the label, not the widget name) to the labels.</p>
<p>The main task is the following: if no name has been given to the label (e.g. name=''), the system should create a unique name for that label. Default names will be in the form (default_name_ + 'unique_index'). The unique index, in simple words, is the smallest possible non-negative unused integer. Hence if you delete a few labels, some indexes will become free and will be used for the next unnamed labels.
Test example:</p>
<pre><code>create_label() # uniq_index 0
create_label() # uniq_index 1
create_label() # uniq_index 2
create_label() # uniq_index 3
create_label("new_name")
create_label() # uniq_index 4
delete_label(2) # free uniq_index 2
delete_label(2) # free uniq_index 3
create_label("new_pin")
delete_label(2) # delete new_name, so no free uniq_index
create_label() # uniq_index 2
create_label() # uniq_index 3
create_label() # uniq_index 5
</code></pre>
<p>expected output name as a list (program will not need to execute this):</p>
<pre><code>default_name_0
default_name_1
default_name_4
new_pin
default_name_2
default_name_3
default_name_5
</code></pre>
<p>(Window at the end should look like <a href="https://i.sstatic.net/tTgPx.png" rel="nofollow noreferrer">this</a>)</p>
<p>I have tried to create a program that seems to work, but I'm not sure it will work for all cases, or that it is the best approach (probably not). Here is the code:</p>
<pre><code>import tkinter as tk
from tkinter.font import Font
root = tk.Tk()
big_font = Font(family="Roboto", size=20)
dict_of_names = {} # elements will be as follows: {0: True, 1: False, ...} So 1 is an unique empty index
labels: list[tk.Label] = []
def create_label(name=''):
if name == '':
name = set_up_default_name()
new_label = tk.Label(text=name, font=big_font)
new_label.grid(row=len(labels), column=0)
labels.append(new_label)
def set_up_default_name():
name = "default_name_"
for (i, is_used) in dict_of_names.items():
if not is_used: # if some values has been deleted
dict_of_names[i] = True
return name + str(i)
name += str(len(dict_of_names))
dict_of_names[len(dict_of_names)] = True # new element add
return name
def delete_label(index=0):
del_item = labels.pop(index)
name: str = del_item.cget('text')
if "default_name_" in name:
index = int(''.join([i for i in name if i.isdigit()])) # need to get uniq_index
dict_of_names[index] = False
del_item.grid_forget()
# update grid
for i in range(len(labels)):
labels[i].grid(row=i, column=0)
</code></pre>
<p>Extra tasks if user types a name that is not '' (not such important for now):</p>
<ol>
<li><p>Names from user should be unique as well. Program will arise an error if name has already been used)</p>
</li>
<li><p>Lets imagine two cases, In both cases initially no labels exist yet.</p>
<ol>
<li>User give no name to label so default name is created
by a program, so it name becomes "default_name_0".</li>
<li>User decided to name it by himself and put name="default_name_0".</li>
</ol>
<p>Now he creates a new label with no name, so default name will be given to it by a program. In both cases name should be "default_name_1". In second case "default_name_0" will be created, if program doesn't have an algorithm to avoid this case.</p>
</li>
</ol>
<p>I would like to know a better way of solving this, or what changes could be made to my code. I would be glad if someone showed their own way of doing this.</p>
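<p>One hedged sketch of the unique-index bookkeeping alone (standard library only): keep the freed indices in a min-heap, so the smallest free index is always reused first. <code>NameAllocator</code> is an invented name; wiring it into the Tkinter code, and reserving user-typed <code>default_name_N</code> names (extra task 2), would be additional steps:</p>

```python
import heapq

class NameAllocator:
    """Hands out the smallest free non-negative index; recycles freed ones."""

    def __init__(self, prefix="default_name_"):
        self.prefix = prefix
        self._free = []   # min-heap of recycled indices
        self._next = 0    # smallest index never handed out

    def acquire(self):
        if self._free:
            i = heapq.heappop(self._free)   # reuse the smallest freed index
        else:
            i = self._next
            self._next += 1
        return f"{self.prefix}{i}"

    def release(self, name):
        # user-chosen names (no default prefix) are simply ignored
        if name.startswith(self.prefix):
            heapq.heappush(self._free, int(name[len(self.prefix):]))
```

<p>Replaying the question's test sequence with this allocator yields the expected 0, 1, 2, 3, 4, then 2, 3, 5 after the deletions.</p>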
| <python><python-3.x><tkinter><optimization> | 2023-07-18 16:03:12 | 0 | 610 | Danya K |
76,714,596 | 2,543,622 | Performing RFECV in Python and understanding the output | <p>I am doing <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html" rel="nofollow noreferrer">rfecv</a> in Python using pandas. My step size is 1. I start with 174 features. My function call is as below</p>
<pre><code>rfecv = RFECV(estimator=LogisticRegression(solver='lbfgs'), step=1, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=44),scoring='recall',\
min_features_to_select=30, verbose=0)
rfecv.fit(X_train, y['tag'])
</code></pre>
<p>Optimal number of features returned by rfecv is 89. I noticed that length of <code>cv_results_['mean_test_score']</code> is 145.</p>
<p>Shouldn't it be 174-89=85? If <code>RFECV</code> removes 1 feature at a time and ends up with 89 features out of 174 then I felt that there will be 85 steps (length of <code>'mean_test_score'</code>).</p>
<pre><code>#adding some dummy example-------------------------
</code></pre>
<p>In the below case, we start with 150 features. The minimum number of features to select is 3, and it selects 4 features. But then why is <code>print (len(selector.cv_results_['std_test_score']))</code> 148, if 1 feature is eliminated at a time?</p>
<pre><code>from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFECV
from sklearn.svm import SVR
X, y = make_friedman1(n_samples=50, n_features=150, random_state=0)
estimator = SVR(kernel="linear")
selector = RFECV(estimator, step=1, cv=5, min_features_to_select=3)
selector = selector.fit(X, y)
print (selector.support_)
print (selector.ranking_)
print (selector.n_features_)
print (len(selector.cv_results_['std_test_score']))
</code></pre>
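<p>A plausible explanation (worth verifying against the scikit-learn docs): RFECV cross-validates <em>every</em> candidate feature count from <code>n_features</code> down to <code>min_features_to_select</code>, regardless of which count wins. So the length of the <code>cv_results_</code> arrays is <code>(n_features - min_features_to_select) // step + 1</code> (with step=1), not <code>n_features</code> minus the selected count:</p>

```python
def n_candidate_counts(n_features, min_features_to_select, step=1):
    # RFECV scores the sizes n, n-step, ..., down to min_features_to_select
    return (n_features - min_features_to_select) // step + 1

print(n_candidate_counts(174, 30))  # → 145, matching len(cv_results_['mean_test_score'])
print(n_candidate_counts(150, 3))   # → 148, matching the dummy example
```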
| <python><pandas><feature-selection><rfe> | 2023-07-18 15:54:15 | 1 | 6,946 | user2543622 |
76,714,389 | 408,112 | In Python how can I combine the elements in two lists into one list? Not appending the list | <p>I have two lists</p>
<pre><code> list1 = ['1','2','3']
list2 = ['4','5','6']
</code></pre>
<p>desire list3 to be ['14','25','36']</p>
<p>Is there a python built in function that can do this?</p>
<p>I have searched for a method to do this but have not found anything. All the functions I found simply append one list to the other, which is not what I want to do.</p>
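<p>The built-in <code>zip</code> pairs up elements positionally; combined with a list comprehension it does exactly this:</p>

```python
list1 = ['1', '2', '3']
list2 = ['4', '5', '6']

# zip yields ('1','4'), ('2','5'), ('3','6'); '+' concatenates each pair
list3 = [a + b for a, b in zip(list1, list2)]
print(list3)  # → ['14', '25', '36']
```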
| <python> | 2023-07-18 15:28:58 | 3 | 1,153 | Dean-O |
76,714,346 | 6,626,531 | When should you use a try-except block when handling a nested exception? | <p>Suppose a function A calls function B, and function B raises an exception.</p>
<p>In function A, should I:</p>
<p>Method 1: catch and re-raise the exception because function B raises one, or
Method 2: let function B's exception propagate on its own?</p>
<p>Is it good practice / necessary to have a try-except block in function A if you aren't doing anything else with the failure?</p>
<pre class="lang-py prettyprint-override"><code>def validate(df):
if df.empty():
raise (Exception)
return df
</code></pre>
<p>Method 1</p>
<pre class="lang-py prettyprint-override"><code>def function_a():
df = something()
try:
df2 = validate(df)
except Exception as exception:
raise exception
df2.to_csv()
</code></pre>
<p>Method 2</p>
<pre class="lang-py prettyprint-override"><code>def function_a():
df = something()
df2 = validate(df)
df2.to_csv()
</code></pre>
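<p>A sketch of why Method 2 is usually preferred when function A adds nothing: an unhandled exception propagates to A's caller by itself, so an <code>except</code> block whose only body is a re-raise is redundant noise (and when you do need a handler, the bare <code>raise</code> statement is the idiom for re-raising). Names below are illustrative only:</p>

```python
def validate(value):
    if not value:
        raise ValueError("empty input")
    return value

def function_a(value):
    # Method 2: no try/except — validate's exception simply propagates
    return validate(value).upper()

try:
    function_a("")
except ValueError as exc:
    print(f"caller saw: {exc}")  # → caller saw: empty input
```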
| <python><python-3.x><try-except> | 2023-07-18 15:24:51 | 3 | 1,975 | Micah Pearce |
76,714,256 | 6,623,265 | How to fetch a value from a response body in Python | <p>I want to fetch two values from the response body (the response is HTML): <code>CSRF_NONCE</code> and <code>jsessionid</code>.</p>
<p>Response body (I just copied the part of the response that contains these two values): I have around 36 occurrences of <code>CSRF_NONCE</code> and <code>jsessionid</code> in total in my response, and all are the same. I want to select the first one.</p>
<pre><code><td class="row-left"><a href="/manager/html/list;jsessionid=A9B9660B7F3B1238FE0037843C225537?org.apache.catalina.filters.CSRF_NONCE=2AF805C8EF13A9B39258B056FF37136C">List Applications</a></td>
\n
<td class="row-center"><a href="/manager/../docs/html-manager-howto.html" rel="noopener noreferrer">HTML Manager Help</a></td>
\n
<td class="row-center"><a href="/manager/../docs/manager-howto.html" rel="noopener noreferrer">Manager Help</a></td>
\n
<td class="row-right"><a href="/manager/status;jsessionid=A9B9660B7F3B1238FE0037843C225537?org.apache.catalina.filters.CSRF_NONCE=2AF805C8EF13A9B39258B056FF37136C">Server Status</a></td>
</code></pre>
<p>Code Sample:</p>
<pre><code>from jproperties import Properties
import requests
from autorizationModule import *
configs = Properties()
with open('app-config', 'rb') as config_file:
configs.load(config_file)
#User Name & Password provide from the property file
TOKEN = authorization(configs.get("user_name").data, configs["password"].data)
print(f"Token Value", TOKEN)
def fetchCSRFNonce():
BASE_URL = configs.get("app_url").data
payload = {}
HEARDER_VALUE = {
"Authorization" : "Basic "+str(TOKEN)
}
response = requests.request("GET", BASE_URL, headers=HEARDER_VALUE, data=payload)
print(response.content)
fetchCSRFNonce()
</code></pre>
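<p>Since all the occurrences are identical and only the first is needed, <code>re.search</code> — which returns the first match — on the decoded body is one lightweight option. A sketch using the HTML fragment from the question (in the question's code, this would run on <code>response.text</code>):</p>

```python
import re

body = ('<td class="row-left"><a href="/manager/html/list;'
        'jsessionid=A9B9660B7F3B1238FE0037843C225537'
        '?org.apache.catalina.filters.CSRF_NONCE='
        '2AF805C8EF13A9B39258B056FF37136C">List Applications</a></td>')

# .search returns the first occurrence only; group(1) is the captured hex value
jsessionid = re.search(r'jsessionid=([0-9A-Fa-f]+)', body).group(1)
csrf_nonce = re.search(r'CSRF_NONCE=([0-9A-Fa-f]+)', body).group(1)
print(jsessionid)  # → A9B9660B7F3B1238FE0037843C225537
print(csrf_nonce)  # → 2AF805C8EF13A9B39258B056FF37136C
```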
| <python> | 2023-07-18 15:14:50 | 1 | 2,088 | Jyoti Prakash Mallick |
76,714,122 | 688,080 | root_validator called twice for one instance | <pre class="lang-py prettyprint-override"><code>from typing import Any, Dict, Generic, TypeVar, get_args
import pydantic
class Mixin(pydantic.BaseModel):
@pydantic.root_validator(pre=True)
def print(cls, values: Dict[str, Any]):
print(f"Validator for {cls.__name__}")
return values
class B1(Mixin): ...
class B2(Mixin): ...
BT = TypeVar("BT", B1, B2)
class BaseModel(Generic[BT], pydantic.BaseModel):
b: BT
@pydantic.root_validator(pre=True)
def fill_defaults(cls, values: Dict[str, Any]):
b_type = get_args(cls.__orig_bases__[0])[0]
print(f"Filling defaults for {b_type.__name__}")
if "b" not in values:
values["b"] = b_type()
return values
class Model1(BaseModel[B1]): ...
class Model2(BaseModel[B2]): ...
Model2()
</code></pre>
<p>Outputs:</p>
<pre class="lang-bash prettyprint-override"><code>Filling defaults for B2
Validator for B2
Validator for B1
</code></pre>
<p>Why is <code>Validator for B1</code> printed?</p>
| <python><pydantic> | 2023-07-18 14:59:00 | 1 | 4,600 | Ziyuan |
76,714,064 | 1,253,251 | poetry build with common library | <p>I have a repository with multiple Python projects that use Poetry for dependency management. The directory structure is:</p>
<ul>
<li><p>proj1</p>
<ul>
<li>src/</li>
<li>pyproject.toml</li>
</ul>
</li>
<li><p>proj2</p>
<ul>
<li>src/</li>
<li>pyproject.toml</li>
</ul>
</li>
<li><p>common_proj</p>
<ul>
<li>src/</li>
<li>pyproject.toml</li>
</ul>
</li>
</ul>
<p>The proj1 and proj2 projects both depend on common_proj. Currently, I'm building proj1 and proj2 like this:</p>
<pre><code>cd proj1
cp -r ../common_proj/src ./common_proj
poetry build
</code></pre>
<p>Then I use the generated wheel file in production.</p>
<p>The pyproject.toml files for proj1 and proj2 include common_proj like this:</p>
<pre><code>packages = [
{include = "proj2"},
{include = "common_proj" },
]
</code></pre>
<p>What is the correct way to manage the common_proj dependency with Poetry, so that its dependencies are installed properly?</p>
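<p>For development, Poetry supports path dependencies, which keep common_proj's own dependency list in play during resolution — a hedged sketch (exact behaviour depends on the Poetry version):</p>

```toml
# proj1/pyproject.toml — path is relative to proj1; names from the question
[tool.poetry.dependencies]
common_proj = { path = "../common_proj", develop = true }
```

<p>One caveat: <code>poetry build</code> does not bundle a path dependency into the wheel, so for production the usual route is to build common_proj into its own wheel and install both, or to reach for a plugin such as poetry-multiproject-plugin.</p>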
| <python><build><python-poetry><python-wheel> | 2023-07-18 14:52:09 | 2 | 806 | Ofer Helman |
76,713,937 | 4,356,169 | How to get elements above specific classes? | <p>Sample content:</p>
<pre><code><div id="content">
<h5>Title1</h5>
<div class="text">text 1</div>
<h5>Title2</h5>
<h6>SubTitle</h6>
<otherTag>bla bla</otherTag>
<div class="text">text 2</div>
<div class='pi'>post item</div>
<div class="text">text 3</div>
<div class="text">text 4</div>
</div>
</code></pre>
<p>Inside id <code>content</code>, I need to get each class <code>text</code> element, then get the <code>&lt;h5&gt;</code>, <code>&lt;h6&gt;</code>, <code>&lt;otherTag&gt;</code>, <code>&lt;div class='pi'&gt;</code> elements which belong to that <code>text</code> element.</p>
<p>So my approach is to get the class <code>text</code>, then collect the things above it with <code>find_all_previous</code> until meeting another class <code>text</code> element or reaching the top id <code>content</code>. The problem is that <code>find_all_previous</code> returns all the previous content. How can I make it stop searching at the previous class <code>text</code> or at id <code>content</code>? And I don't think it's a good idea to use this method anyway, since each search returns all previous content.</p>
<p>Using <code>find_previous</code> is not a good choice either: it has to detect elements one by one, the elements are in no fixed order, and some are even absent.</p>
<pre><code>html = BeautifulSoup(response.text,'lxml')
content = html.find('div',{'id': 'content'})
paras = content.find_all('div', {'class': 'text'})
for para in paras:
print(para.get_text())
all_prevs = para.find_all_previous()
</code></pre>
<p><strong>Edited</strong></p>
<p>The result should grouped by class <code>text</code>, for example:</p>
<blockquote>
<p>Title2, SubTitle2, bla bla, Text2</p>
</blockquote>
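<p>Rather than searching backwards from each <code>text</code> div, one sketch is a single forward pass over the direct children of <code>#content</code>, flushing an accumulated buffer every time a <code>text</code> div is reached (assumes BeautifulSoup is available; note that parsers may lowercase the custom <code>otherTag</code> name):</p>

```python
from bs4 import BeautifulSoup

html = """<div id="content">
<h5>Title1</h5>
<div class="text">text 1</div>
<h5>Title2</h5>
<h6>SubTitle</h6>
<otherTag>bla bla</otherTag>
<div class="text">text 2</div>
<div class='pi'>post item</div>
<div class="text">text 3</div>
<div class="text">text 4</div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
groups, buffer = [], []
for el in soup.find(id="content").find_all(recursive=False):
    if el.name == "div" and "text" in (el.get("class") or []):
        # a "text" div closes the current group
        groups.append(buffer + [el.get_text()])
        buffer = []
    else:
        buffer.append(el.get_text())
```

<p>Each element is visited once, and the second group comes out as ['Title2', 'SubTitle', 'bla bla', 'text 2'].</p>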
| <python><html><beautifulsoup> | 2023-07-18 14:38:29 | 2 | 1,088 | jdleung |
76,713,817 | 2,749,397 | Given "aaa-bbb-ccc-ddd-eee-fff" change every 2nd "-" to "+" → "aaa-bbb+ccc-ddd+eee-fff" | <p>In a Python script I'm working on, I have a string</p>
<pre><code>s = "aaa-bbb-ccc-ddd-eee-fff-ggg-hhh-iii-jjj-kkk-lll-mmm-nnn-ooo-ppp-qqq-rrr-sss"
</code></pre>
<p>where I want to change every second occurrence of "-" to "+" (or maybe every third, fourth, etc. occurrence or, in other words, I'd prefer a generic solution).</p>
<p>Is it possible to do what I want with a single regexp substitution, or do I have to parse the string "manually"?</p>
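<p>One generic sketch: a single <code>re.sub</code> whose replacement is a function that counts matches and swaps only every n-th one. The helper name is invented:</p>

```python
import re
from itertools import count

def replace_every_nth(s, old, new, n):
    # the counter ticks once per match; every n-th match is swapped for `new`
    counter = count(1)
    return re.sub(re.escape(old),
                  lambda m: new if next(counter) % n == 0 else m.group(0),
                  s)

print(replace_every_nth("aaa-bbb-ccc-ddd-eee-fff", "-", "+", 2))
# → aaa-bbb+ccc-ddd+eee-fff
```

<p>Changing <code>n</code> handles the every-third, every-fourth, etc. variants with the same call.</p>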
| <python><regex><python-re> | 2023-07-18 14:24:38 | 1 | 25,436 | gboffi |
76,713,777 | 7,301,792 | plt.axvline pierce through a cumulative histogram curve | <p>I draw a histogram using numpy and matplotlib.pyplot, and then plot vertical lines at certain x-values intersecting the cumulative curve of the histogram.</p>
<p><a href="https://i.sstatic.net/w13DH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w13DH.png" alt="enter image description here" /></a></p>
<p>However, the problem encountered is that the vertical lines, drawn using plt.axvline, are extending beyond the cumulative curve, rather than stopping at the intersection points.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
data = np.random.normal(loc=350, scale=100, size=10000)
hist, bins, patches = plt.hist(data, bins=len(data), cumulative=True, density=True)
# Find the intersections of the x-axis with the cumulative curve
x_values = [200, 300, 400, 500, 600]
y_values = np.interp(x_values, bins[:-1], hist)
# Plot vertical lines at the intersections
for x, y in zip(x_values, y_values):
plt.axvline(x=x, ymin=0, ymax=y, color='r', linestyle='-')
plt.scatter(x_values, y_values, color='red', s=50)
</code></pre>
<p>The axvlines pierce through the curve.
What confuses me most is that when I change <code>plt.axvline</code> to <code>plt.vlines</code>, it works perfectly:</p>
<p><a href="https://i.sstatic.net/ExIes.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ExIes.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
data = np.random.normal(loc=350, scale=100, size=1000)
hist, bins, patches = plt.hist(data, bins=len(data), cumulative=True, density=True)
# Find the intersections of the x-axis with the cumulative curve
x_values = [200, 300, 400, 500, 600]
y_values = np.interp(x_values, bins[:-1], hist)
# Plot vertical lines at the intersections
plt.vlines(x=x_values, ymin=0, ymax=y_values, color='r', linestyle='-',lw=0.8)
plt.scatter(x_values, y_values, color='red', s=50)
</code></pre>
<p>How can I get the vertical lines to stop exactly at the cumulative curve when using plt.axvline?</p>
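<p>The reason is that <code>axvline</code>'s <code>ymin</code>/<code>ymax</code> are in axes-fraction coordinates (0 at the bottom of the axes, 1 at the top), while <code>vlines</code> takes data coordinates. So the y-values need converting to fractions of the current y-limits first — a sketch (the helper name is invented, and the fractions go stale if the limits change afterwards):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_ylim(0.0, 2.0)

def to_axes_fraction(ax, y):
    # map a data-coordinate y onto the 0..1 axes-fraction scale axvline expects
    lo, hi = ax.get_ylim()
    return (y - lo) / (hi - lo)

ax.axvline(x=300, ymin=0, ymax=to_axes_fraction(ax, 0.5), color="r")
```

<p>With this conversion applied per intersection point, axvline stops at the curve just like vlines does.</p>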
| <python><numpy><matplotlib> | 2023-07-18 14:21:13 | 0 | 22,663 | Wizard |
76,713,771 | 10,305,444 | Python to Spring Boot using Kafka: No type information in headers and no default type provided | <p>I'm trying to create a pure micro-service setup, where we are not constrained to a specific programming language or framework.</p>
<p>Here I'm using Kafka to create the event pipeline.</p>
<p>Now I'm sending a message (event) from my python application:</p>
<pre class="lang-py prettyprint-override"><code>producer = KafkaProducer(bootstrap_servers=broker)
# req is a dict {'a':1, 'c':-2}
json_message = json.dumps(req).encode('utf-8')
print("--------------------------")
print(json_message)
print("--------------------------")
headers = [
('content-type', 'application/json'),
('type', 'com.mua.cloud.testm.models.events'),
]
producer.send(topic+"-m", json_message,headers=headers)
</code></pre>
<p>And I'm just trying to consume it, in my Spring Boot application:</p>
<pre class="lang-java prettyprint-override"><code>@Service
@Log4j2
public class TestHandler {
@KafkaListener(topics = Constants.TOPIC_PREFIX + "-" + "python-trigger-m")
@SneakyThrows
public void listener(@Header(KafkaHeaders.CORRELATION_ID) byte[] corrId, TestEvent event) {
System.out.println(corrId);
System.out.println(event);
System.out.println("----------------");
}
}
</code></pre>
<p>And I've modified my configuration for this:</p>
<pre class="lang-yaml prettyprint-override"><code>spring:
…
kafka:
…
consumer:
value-deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
</code></pre>
<p>Registered a Bean (ref: <a href="https://stackoverflow.com/a/68749773/10305444">https://stackoverflow.com/a/68749773/10305444</a>):</p>
<pre><code>@Configuration
public class JsonMessageConverterConfig {
@Bean
public JsonMessageConverter jsonMessageConverter() {
return new ByteArrayJsonMessageConverter();
}
}
</code></pre>
<p>But I keep getting this error on a loop:</p>
<pre><code>TestMicro ERROR 2023-07-18 19:08:08,922 [consumer-0-C-1] [o.s.k.l.KafkaMessageListenerContainer] - Message: Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'SerializationException's directly; please consider configuring an 'ErrorHandlingDeserializer' in the value and/or key deserializer
at org.springframework.kafka.listener.DefaultErrorHandler.handleOtherException(DefaultErrorHandler.java:151)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1815)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1303)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:577)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at java.base/java.lang.Thread.run(Thread.java:1589)
Caused by: org.apache.kafka.common.errors.RecordDeserializationException: Error deserializing key/value for partition cloud-python-trigger-m-0 at offset 0. If needed, please seek past the record to continue consumption.
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1448)
at org.apache.kafka.clients.consumer.internals.Fetcher.access$3400(Fetcher.java:135)
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1671)
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1900(Fetcher.java:1507)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:733)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:684)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1277)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1531)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1521)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1345)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1257)
... 3 common frames omitted
Caused by: java.lang.IllegalStateException: No type information in headers and no default type provided
at org.springframework.util.Assert.state(Assert.java:76)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:583)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1439)
... 15 common frames omitted
</code></pre>
<p>I also tried adding this configuration: <code>spring.kafka.producer.properties.spring.json.add.type.headers=false</code></p>
<p><strong>How can I resolve this issue?</strong></p>
<p>NB:</p>
<ul>
<li><em>I have tested the other way around: I can send messages from Spring Boot to Python over Kafka.</em></li>
<li>There are other already-implemented features, so it's hard to add new configuration (it may break the existing codebase).</li>
</ul>
<p>Approach 2:</p>
<p>I tried changing the config:</p>
<pre class="lang-yaml prettyprint-override"><code>
consumer:
  value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
  value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
</code></pre>
<p>And the Kafka configuration like this:</p>
<pre class="lang-java prettyprint-override"><code>@Bean
public Map<String, Object> consumerConfigs() {
    var config = new HashMap<String, Object>();
    …
    config.put(JsonDeserializer.TYPE_MAPPINGS, "com.mua.cloud.testm.models.events.TestEvent:com.mua.cloud.testm.models.events.TestEvent");
    return config;
}
</code></pre>
| <python><spring><spring-boot><apache-kafka><microservices> | 2023-07-18 14:20:53 | 1 | 4,689 | Maifee Ul Asad |
76,713,704 | 1,581,090 | How to use telnetlib in python the same way as telnetlib3 (windows)? | <p>I have a python code with <code>telnetlib3</code> as follows:</p>
<pre><code>import asyncio
import telnetlib3
async def foo():
reader, writer = await telnetlib3.open_connection("192.168.200.10", 9000)
data = await asyncio.wait_for(reader.read(4096), timeout=2)
print(data)
asyncio.run(foo())
</code></pre>
<p>that connects to a telnet service and returns the following expected output on both Linux and Windows systems:</p>
<pre><code>Terminal emulator detected (unknown), switching to interactive shell
Debug server connected. Waiting for commands (or 'exit')...
Supported client terminals: telnet, screen, PuTTY.
prompt >>
</code></pre>
<p>However, when I use <code>telnetlib</code> as in the following example:</p>
<pre><code>import time
import telnetlib
session = telnetlib.Telnet("192.168.200.10", 9000, 2)
received = b""
t0 = time.time()
while time.time() - t0 < 2:
data = session.read_very_eager()
if data:
received += data
t0 = time.time()
print(received)
</code></pre>
<p>the output is something like</p>
<pre><code>b'\x1b[0G\x1b[0G'
</code></pre>
<p>on both Linux and Windows.</p>
<p>How can I modify the second code example (the one using <strong><code>telnetlib</code></strong>) so that I receive the "interactive welcome message" as I do with <code>telnetlib3</code> on <strong>Windows</strong>?</p>
<ul>
<li>Windows: Windows 10, Python 3.10.11</li>
<li>Linux (VM on Windows): Ubuntu 20.04.6, Python 3.8.10</li>
</ul>
| <python><telnet> | 2023-07-18 14:13:13 | 1 | 45,023 | Alex |
76,713,698 | 10,271,487 | Removing arrays from np.split() array results | <p>I am trying to remove specific sections of an array using <code>np.split()</code>.</p>
<p>I was attempting to use a more complex loop to remove the sections because the number of split sections is not constant and changes often.</p>
<p>I have an array of consecutive numbers, an array of ordered cut locations, and knowledge of which cut locations need to be removed.</p>
<p>Ex.</p>
<pre class="lang-py prettyprint-override"><code>a = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
cut_array = [3,6,10,13]
</code></pre>
<p>Desired output.</p>
<pre class="lang-py prettyprint-override"><code>result = [[1,2,3],[7,8,9,10],[14,15]]
</code></pre>
<p>So the idea is to remove the part of the array between each set of 2 elements in the cut_array.</p>
<p>Edit: the location of the cut array may be at the beginning or end of the starting array</p>
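<p>For the concrete example above (with all cuts strictly inside the array), the desired transformation can be sketched by splitting at every cut index and keeping every other piece. This is only an illustration of the expected output for this specific input; it does not handle the boundary cases mentioned in the edit:</p>

```python
import numpy as np

a = np.arange(1, 16)        # [1, 2, ..., 15]
cut_array = [3, 6, 10, 13]  # ordered cut locations, taken in pairs

# np.split produces 5 consecutive pieces; the pieces that fall between
# each pair of cut locations (indices 1 and 3 here) are the ones to drop.
pieces = np.split(a, cut_array)
result = [p.tolist() for p in pieces[::2]]
# result == [[1, 2, 3], [7, 8, 9, 10], [14, 15]]
```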
| <python><numpy> | 2023-07-18 14:12:06 | 2 | 309 | evan |
76,713,694 | 16,383,578 | How to efficiently split overlapping ranges? | <p>I am looking for an efficient method to split overlapping ranges. I have read many similar questions, but none of them solves my problem.</p>
<p>The problem is simple: given a list of triplets, where the first two elements of each triplet are integers and the first is always less than or equal to the second, transform the list so that, for every pair of triplets in the output, the start of the later triplet is greater than the end of the earlier one, and so that whenever the start of one triplet equals the end of another plus one, the two carry different data:</p>
<p>If they have different data, subtract one from the other:</p>
<pre><code>(0, 100, 'A'), (0, 20, 'B') -> (0, 20, 'B'), (21, 100, 'A')
(0, 100, 'A'), (20, 50, 'B') -> (0, 19, 'A'), (20, 50, 'B'), (51, 100, 'A')
(0, 100, 'A'), (50, 100, 'B') -> (0, 49, 'A'), (50, 100, 'B')
(0, 100, 'A'), (200, 300, 'B') -> (0, 100, 'A'), (200, 300, 'B')
(0, 100, 'A'), (50, 300, 'B') -> (0, 49, 'A'), (50, 300, 'B')
</code></pre>
<p>Else if they overlap, merge them:</p>
<pre><code>(0, 100, 'A'), (0, 20, 'A') -> (0, 100, 'A')
(0, 100, 'A'), (20, 50, 'A') -> (0, 100, 'A')
(0, 100, 'A'), (50, 100, 'A') -> (0, 100, 'A')
(0, 100, 'A'), (200, 300, 'A') -> (0, 100, 'A'), (200, 300, 'A')
(0, 100, 'A'), (101, 300, 'A') -> (0, 300, 'A')
(0, 100, 'A'), (50, 300, 'A') -> (0, 300, 'A')
</code></pre>
<p>Here is a test case:</p>
<pre><code>[(0, 16, 'red'), (0, 4, 'green'), (2, 9, 'blue'), (2, 7, 'cyan'), (4, 9, 'purple'), (6, 8, 'magenta'), (9, 14, 'yellow'), (11, 13, 'orange'), (18, 21, 'green'), (22, 25, 'green')]
</code></pre>
<p>Expected output:</p>
<pre><code>[(0, 1, 'green'), (2, 3, 'cyan'), (4, 5, 'purple'), (6, 8, 'magenta'), (9, 10, 'yellow'), (11, 13, 'orange'), (14, 14, 'yellow'), (15, 16, 'red'), (18, 25, 'green')]
</code></pre>
<p>Graphic representation:</p>
<p><a href="https://i.sstatic.net/W3yer.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W3yer.png" alt="enter image description here" /></a></p>
<p>These functions encode the rules in Python:</p>
<pre class="lang-py prettyprint-override"><code>def subtract(A, B):
(As, Ae, _), (Bs, Be, Bd) = A, B
if As > Be or Bs > Ae:
return [[Bs, Be, Bd]]
result = []
if As > Bs:
result.append([Bs, As - 1, Bd])
if Ae < Be:
result.append([Ae + 1, Be, Bd])
return result
def join(A, B):
(As, Ae, Ad), (Bs, Be, Bd) = A, B
if Bs > As:
As, Ae, Bs, Be = Bs, Be, As, Ae
if Bs <= As and Ae <= Be:
return [[Bs, Be, Bd]]
return [[Bs, Ae, Bd]] if As <= Be + 1 else [[Bs, Be, Bd], [As, Ae, Ad]]
</code></pre>
<p>I have previously implemented a somewhat efficient <a href="https://stackoverflow.com/questions/76693414/how-to-optimize-splitting-overlapping-ranges/76696055?noredirect=1#comment135245004_76696055">function</a> that splits the ranges when <code>not (s1 < s2 < e1 < e2)</code> always holds. But it isn't efficient enough and fails to give the correct output if the aforementioned constraint isn't enforced.</p>
<p>The accepted answer does improve performance and can handle cases where ranges intersect, but it isn't as efficient as my data demands. I have literally millions of items to process.</p>
<p>(<code>make_generic_case</code> can be found in the linked question above)</p>
<pre><code>In [411]: sample = make_generic_case(4096, 65536, 16)
In [412]: %timeit solve(sample); sample.pop(-1)
2.14 ms ± 124 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [413]: len(solve(sample))
Out[413]: 1883
</code></pre>
<p>It takes about 2140/4096 = 0.5224609375 microseconds to process each item; 0.25 to 0.4 microseconds per item would be acceptable.</p>
<p>The following function is the only function I wrote to date that gives the correct output when ranges intersect, but it isn't efficient at all:</p>
<pre class="lang-py prettyprint-override"><code>def brute_force_discretize(ranges):
numbers = {}
ranges.sort(key=lambda x: (x[0], -x[1]))
for start, end, data in ranges:
numbers |= {n: data for n in range(start, end + 1)}
numbers = list(numbers.items())
l = len(numbers)
i = 0
output = []
while i < l:
di = 0
curn, curv = numbers[i]
while i != l and curn + di == numbers[i][0] and curv == numbers[i][1]:
i += 1
di += 1
output.append((curn, numbers[i - 1][0], curv))
return output
</code></pre>
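<p>As a quick sanity check, the brute-force reference above (condensed below so the snippet is self-contained) reproduces the expected output for the test case:</p>

```python
# Condensed equivalent of brute_force_discretize above: later ranges
# overwrite earlier ones number-by-number, then maximal runs of equal,
# consecutive values are re-assembled into triplets.
def brute_force_discretize(ranges):
    numbers = {}
    for start, end, data in sorted(ranges, key=lambda x: (x[0], -x[1])):
        numbers.update({n: data for n in range(start, end + 1)})
    output, run = [], None
    for n in sorted(numbers):
        if run and n == run[1] + 1 and numbers[n] == run[2]:
            run[1] = n                  # extend the current run
        else:
            if run:
                output.append(tuple(run))
            run = [n, n, numbers[n]]    # start a new run
    if run:
        output.append(tuple(run))
    return output

case = [(0, 16, 'red'), (0, 4, 'green'), (2, 9, 'blue'), (2, 7, 'cyan'),
        (4, 9, 'purple'), (6, 8, 'magenta'), (9, 14, 'yellow'),
        (11, 13, 'orange'), (18, 21, 'green'), (22, 25, 'green')]
expected = [(0, 1, 'green'), (2, 3, 'cyan'), (4, 5, 'purple'),
            (6, 8, 'magenta'), (9, 10, 'yellow'), (11, 13, 'orange'),
            (14, 14, 'yellow'), (15, 16, 'red'), (18, 25, 'green')]
assert brute_force_discretize(case) == expected
```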
<p>I have implemented several other algorithms found <a href="https://softwareengineering.stackexchange.com/questions/241373/algorithm-for-flattening-overlapping-ranges">here</a> and <a href="https://softwareengineering.stackexchange.com/questions/363091/split-overlapping-ranges-into-all-unique-ranges">here</a>, but all of them fail the test of taking a case generated by <code>make_generic_case(4096, 65536, 16)</code> and comparing the output against <code>brute_force_discretize</code>:</p>
<pre><code>def discretize_gen(ranges):
nodes = get_nodes(ranges)
stack = []
for (n1, e1, d1), (n2, e2, _) in zip(nodes, nodes[1:]):
if e1:
stack.remove(d1)
else:
stack.append(d1)
start = n1 + e1
end = n2 - (not e2)
if start <= end and stack:
yield start, end, stack[-1]
def merge(segments):
start, end, data = next(segments) # keep one segment buffered
for start2, end2, data2 in segments:
if end + 1 == start2 and data == data2: # adjacent & same data
end = end2 # merge
else:
yield start, end, data
start, end, data = start2, end2, data2
yield start, end, data # flush the buffer
def subtractRanges(A, B):
(As, Ae), (Bs, Be) = A, B
if As > Be or Bs > Ae:
return [[Bs, Be]]
result = []
if As > Bs:
result.append([Bs, As - 1])
if Ae < Be:
result.append([Ae + 1, Be])
return result
def merge_list(segments):
start, end, data = segments.pop(0)
for start2, end2, data2 in segments:
if end + 1 == start2 and data == data2:
end = end2
else:
yield start, end, data
start, end, data = start2, end2, data2
yield start, end, data
def discretize(ranges):
i = 0
ranges = [list(e) for e in ranges]
while i < len(ranges):
for superior in ranges[i + 1 :]:
if result := subtractRanges(superior[:2], ranges[i][:2]):
ranges[i][:2] = result[0]
if len(result) > 1:
ranges[i + 1 : 0] = ([*result[1], ranges[i][2]],)
else:
ranges.pop(i)
i -= 1
break
i += 1
return list(merge_list(sorted(ranges)))
def flatten(ranges):
stack = []
result = []
ranges = sorted(ranges, key=lambda x: (x[0], -x[1]))
for ns, ne, nd in ranges:
new = [ns, ne, nd]
append = True
for old in stack.copy():
stack.remove(old)
if nd != old[2]:
for item in subtract(new, old):
if item[1] < ns:
result.append(item)
else:
stack.append(item)
else:
joined = join(new, old)
if len(joined) == 1:
stack.extend(joined)
append = False
else:
result.append(old)
if append:
stack.append(new)
if stack:
result.extend(stack)
return sorted(result)
</code></pre>
<p>All of them fail the correctness check, and none of them is as efficient as my previous implementation to begin with.</p>
<p>What is an efficient method, that for any test case generated using <code>make_generic_case(4096, 65536, 16)</code>, gives the same output as <code>brute_force_discretize</code> and takes about 1 millisecond to process such test case on average? (I have tested thoroughly that the function <code>solve</code> found in the accepted answer gives the correct output, so you can use that to verify the output).</p>
<hr />
<p>I have to point out that I really do have millions of triplets to process, no exaggeration. And the numbers are very large, under 2^32-1 (IPv4 addresses) and 2^128-1 (IPv6), so brute-force methods are unacceptable. I did in fact write one myself, posted above.</p>
<p>And I know how slow it is: it would take forever to process my data, or exhaust my RAM.</p>
<p>This is the link to one of many datasets I have to process: <a href="https://git.ipfire.org/?p=location/location-database.git;a=tree" rel="nofollow noreferrer">https://git.ipfire.org/?p=location/location-database.git;a=tree</a>, it is the database.txt file.</p>
<p>The file is very large (hundreds of mebibytes) and contains millions of entries; you have to scroll down (really far down) to see the IP ranges. I already wrote code that processes it; it was very slow, though much faster than the one posted in the existing answer below.</p>
| <python><python-3.x><algorithm><performance> | 2023-07-18 14:11:49 | 1 | 3,930 | Ξένη Γήινος |
76,713,625 | 607,846 | Creating slices within for loop | <p>I'm new to numpy. My basic understanding is that you achieve efficiency by applying operations to whole arrays, as this moves the <code>for</code> loops into C code. I'm trying to make the <code>calc</code> function below more efficient, and therefore applied this principle by using slices and <code>x1-x2</code>. This removed the internal <code>for</code> loop I initially had and gave a large performance boost. Is there anything else I can do to make the function more efficient, or would I need to implement it in C instead?</p>
<pre><code>import math
import numpy as np
import time
def calc(x, i, K, N):
r = np.empty(K)
r[0] = 0
for k in range(1, K):
o = math.floor((k + N) / 2)
x1 = x[i-o:i-o+N]
x2 = x[i-o+k:i-o+N+k]
s = np.square(x1-x2)
r[k] = np.sum(s)/len(s)
return r
input = np.arange(8, 10, 0.002) * np.sin(np.arange(0, 100, 0.1) * np.pi)
start_time = time.time()
output1 = calc(input, 500, 64, 448)
print(time.time()-start_time)
</code></pre>
<p>Outputs:</p>
<p>0.00018095970153808594</p>
<p>This was my first attempt:</p>
<pre><code>def calc(x, i, K, N):
r = np.zeros(K)
s = np.zeros(N)
for k in range(1, K):
o = math.floor((k + N) / 2)
for n in range(N):
s[n] = x[n - o + i] - x[n - o + i + k]
s = np.square(s)
r[k] = np.sum(s) / len(s)
return r
</code></pre>
<p>Outputs:</p>
<p>0.0051839351654052734</p>
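<p>One further step you could try (a sketch, assuming NumPy ≥ 1.20 for <code>sliding_window_view</code>) is to remove the remaining Python loop over <code>k</code> entirely: compute all the window starts at once and gather the windows through a strided view.</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def calc_vectorized(x, i, K, N):
    k = np.arange(1, K)
    o = (k + N) // 2                     # integer floor((k + N) / 2) for every k
    windows = sliding_window_view(x, N)  # zero-copy view: windows[j] is x[j:j+N]
    diff = windows[i - o] - windows[i - o + k]
    r = np.empty(K)
    r[0] = 0
    r[1:] = np.mean(np.square(diff), axis=1)
    return r
```

<p>The fancy indexing materialises <code>K-1</code> windows of length <code>N</code>, so memory grows to roughly <code>K·N</code> floats; for the sizes in the question that is small.</p>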
| <python><numpy> | 2023-07-18 14:05:00 | 2 | 13,283 | Baz |
76,713,505 | 188,331 | TensorFlow: logits and labels must have the same first dimension while using loss='sparse_categorical_crossentropy' | <p>I am using TensorFlow to perform classification on my data.</p>
<p>Sample data are as follows:</p>
<pre><code>Sentence Score
-------------------------------------------------
I am a boy. 5
I am a monster. 1
</code></pre>
<p>The score has range from 1 to 5.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
df = pd.read_excel('data.xlsx')
print(df.info())
</code></pre>
<p>prints:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 13743 entries, 0 to 13742
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Sentence 13743 non-null object
1 Score 1316 non-null float64
dtypes: float64(1), object(1)
memory usage: 859.1+ KB
None
</code></pre>
<p>and here are the remaining codes:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
from keras.preprocessing.text import Tokenizer
from keras.utils import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from keras.callbacks import EarlyStopping
from keras.layers import Dropout
import re
from nltk.corpus import stopwords
from nltk import word_tokenize
STOPWORDS = set(stopwords.words('english'))
from bs4 import BeautifulSoup
import plotly.graph_objs as go
from chart_studio import plotly
from IPython.core.interactiveshell import InteractiveShell
import plotly.figure_factory as ff
InteractiveShell.ast_node_interactivity = 'all'
from plotly.offline import iplot
import cufflinks as cf
cf.go_offline()
cf.set_config_file(offline=False, world_readable=True)
df['score'].value_counts().sort_values(ascending=False).iplot(kind='bar', yTitle='Number of Sentences',
title='Number of Sentences in each score')
</code></pre>
<p>(the plot is here)</p>
<pre><code># LSTM Modeling
# The maximum number of words to be used. (most frequent)
MAX_NB_WORDS = 50000
# Max number of words in each complaint.
MAX_SEQUENCE_LENGTH = 250
# This is fixed.
EMBEDDING_DIM = 100
tokenizer = Tokenizer(num_words=MAX_NB_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True)
tokenizer.fit_on_texts(df['yue'].values)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
</code></pre>
<pre><code>X = tokenizer.texts_to_sequences(df['yue'].values)
X = pad_sequences(X, maxlen=MAX_SEQUENCE_LENGTH)
print('Shape of data tensor:', X.shape)
# Shape of data tensor: (13743, 250)
Y = pd.get_dummies(df['score']).values
print('Shape of label tensor:', Y.shape)
# Shape of label tensor: (13743, 5)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size = 0.10, random_state = 42)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
# (12368, 250) (12368, 5)
# (1375, 250) (1375, 5)
num_classes = 5
model = Sequential()
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.2))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 5
batch_size = 64
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1,callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
</code></pre>
<p>and it results in this error:</p>
<pre><code>Node: 'sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits'
logits and labels must have the same first dimension, got logits shape [64,5] and labels shape [320]
[[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_train_function_14466]
</code></pre>
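<p>For reference, the mismatch the traceback reports can be illustrated with plain NumPy: <code>sparse_categorical_crossentropy</code> expects integer class labels of shape <code>(batch,)</code>, while <code>pd.get_dummies</code> produces one-hot rows of shape <code>(batch, num_classes)</code> (the labels below are made up):</p>

```python
import numpy as np

one_hot = np.eye(5)[[0, 3, 4]]   # shape (3, 5): what Y looks like here
sparse = one_hot.argmax(axis=1)  # shape (3,):   what the sparse loss expects

# The one-hot form pairs with loss='categorical_crossentropy' instead.
assert one_hot.shape == (3, 5)
assert sparse.tolist() == [0, 3, 4]
```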
<p>How can I solve it? Thanks.</p>
| <python><pandas><tensorflow><keras> | 2023-07-18 13:52:20 | 1 | 54,395 | Raptor |
76,713,463 | 4,489,998 | Is it possible to type a list of tuples of identical generic types, with a static type checker? | <p>Consider the following:</p>
<pre><code>from typing import TypeVar, Generic
class A(): pass
class B(A): pass
class C(A): pass
T = TypeVar("T", bound=A)
class Container(Generic[T]):
    def __init__(self, x: T):
        pass
b: Container[B] = Container(B())
c: Container[C] = Container(C())
</code></pre>
<p>How do I type-hint a function <code>foo</code> that will accept any list of tuples of identically typed containers:</p>
<pre><code>foo([
(b, b),
(c, c)
])
</code></pre>
<p>but refuse this (heterogeneous tuple):</p>
<pre><code>foo([
(c, b), # tuples elements are not the same type
(c, c)
])
</code></pre>
<p>I tried this:</p>
<pre><code>ListOfSameContainerTuples = list[tuple[Container[T], Container[T]]]
def foo(containers: ListOfSameContainerTuples[T]):
pass
</code></pre>
<p>but it doesn't work:</p>
<pre><code>foo([
(b, b),
(c, c)
])
# error: Argument 1 to "foo" has incompatible type "list[tuple[object, object]]";
# expected "list[tuple[Container[<nothing>], Container[<nothing>]]]" [arg-type]
foo([
(c, b),
(c, c)
])
# error: List item 0 has incompatible type "tuple[Container[C], Container[B]]"; expected "tuple[Container[C], Container[C]]" [list-item]
</code></pre>
<p>The second error on mypy looks good, but I don't understand the first one:</p>
<ul>
<li>Why does mypy not know the type of <code>b</code> and <code>c</code> ?</li>
<li>Why is my type signature <code>Container[<nothing>]</code> ?</li>
</ul>
<p>I tried solutions from <a href="https://stackoverflow.com/questions/69254006/tuple-with-multiple-numbers-of-arbitrary-but-equal-type">Tuple with multiple numbers of arbitrary but equal type</a>, to no avail.</p>
| <python><python-typing><mypy> | 2023-07-18 13:48:22 | 1 | 2,185 | TrakJohnson |
76,713,315 | 351,771 | Passing nested tuple to VALUES in psycopg3 | <p>I'm trying to update some psycopg2 code to psycopg3. I'm trying to do a selection based on a set of values passed from Python (joining with an existing table). Without the join, a simplified example is:</p>
<pre><code>with connection.cursor() as cur:
sql = "WITH sources (a,b,c) AS (VALUES %s) SELECT a,b+c FROM sources;"
data = (('hi',2,0), ('ho',5,2))
cur.execute(sql, (data,) )
print(cur.fetchone());
</code></pre>
<p>I get an error</p>
<pre><code>ProgrammingError: syntax error at or near "'("(hi,2,0)","(ho,5,2)")'"
LINE 1: WITH sources (a,b,c) AS (VALUES '("(hi,2,0)","(ho,5,2)")') S...
</code></pre>
<p>The psycopg2 code used <code>extras.execute_values</code> instead, which is not available in psycopg3.</p>
<p>Is there a way to pass the values for an intermediate table using psycopg3?</p>
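<p>One workaround I can think of (a sketch; the cursor call is commented out because it needs a live connection) is to build one <code>%s</code> placeholder group per row and pass the flattened values, so the driver only ever sees ordinary scalar parameters:</p>

```python
data = (("hi", 2, 0), ("ho", 5, 2))

# One "(%s, %s, %s)" group per row, joined into the VALUES list.
groups = ", ".join("(" + ", ".join(["%s"] * len(row)) + ")" for row in data)
sql = f"WITH sources (a, b, c) AS (VALUES {groups}) SELECT a, b + c FROM sources;"
params = [value for row in data for value in row]

# cur.execute(sql, params)  # with the cursor from the snippet above
```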
| <python><postgresql><psycopg3> | 2023-07-18 13:32:14 | 1 | 2,717 | xioxox |
76,713,290 | 10,045,509 | Displaying CSV Data in Table Format Below a Video using OpenCV and Python | <p>I am working on a project where I need to display a video along with corresponding data from a CSV file in a table format below the video.</p>
<p>The code reads the video and CSV file, and displays the video in a window. Below the video, it should show a table with the data from the CSV file. As the video progresses, the table data should be updated or scrolled up to match the timing in the video.</p>
<p>However, I am facing some issues with the code. The video is not playing properly, and the table rows are too big to see. I would greatly appreciate any insights or suggestions to fix these issues and improve the code.</p>
<p>Specifically, I would like to:</p>
<ul>
<li>Ensure smooth video playback without freezing or buffering.</li>
<li>Adjust the table size so that it is smaller and fits within the window properly.</li>
<li>Implement scrolling for the table data to match the timing in the video.</li>
</ul>
<p>Any help or guidance to resolve these issues and improve the code would be highly appreciated.</p>
<p>I have developed the following code using OpenCV and Python:</p>
<pre><code>import cv2
import pandas as pd
import numpy as np
# Function to display video with CSV data in tabular format below the video
def display_video_with_csv(video_path, csv_path):
cap = cv2.VideoCapture(video_path)
df = pd.read_csv(csv_path)
# Set up the display window
cv2.namedWindow('Video with CSV', cv2.WINDOW_NORMAL)
# Read the first frame to get video dimensions
ret, frame = cap.read()
height, width, _ = frame.shape
# Calculate the height for the table display
table_height = int(height * 0.8)
# Initialize the scrolling position and scrolling step size
scroll_pos = 0
scroll_step = int(table_height / 10) # Adjust the step size as needed
while True:
# Read the next frame from the video
ret, frame = cap.read()
if not ret:
break
# Get the frame number and corresponding data from the CSV file
frame_number = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
data = df.iloc[frame_number - 1]
# Create a table to display the CSV data
table = pd.DataFrame(data).transpose()
# Create a blank image to display the table data
table_image = 255 * np.ones((table_height, width, 3), dtype=np.uint8)
# Add the table text to the table image
font = cv2.FONT_HERSHEY_DUPLEX
font_scale = 0.5
font_thickness = 1
text_color = (0, 0, 0) # Black color
y_offset = scroll_step
for i, (col_name, val) in enumerate(table.items()):
cv2.putText(
table_image,
f"{col_name}: {val.values[0]}",
(10, y_offset),
font,
font_scale,
text_color,
font_thickness,
cv2.LINE_AA
)
y_offset += scroll_step
# Display the video frame
cv2.imshow('Video with CSV', frame)
# Create a combined image with the video frame and table image
combined_image = np.vstack((frame, table_image))
# Display the combined image in the window
cv2.imshow('Video with CSV', combined_image)
# Scroll the table if needed
if frame_number % 30 == 0: # Adjust the scroll frequency as needed
scroll_pos += 1
# Check for user interrupt (press 'q' to exit)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release the video stream and close the display window
cap.release()
cv2.destroyAllWindows()
# Provide the paths to the video and CSV file
video_path = r"Main.mp4"
csv_path = r"Test_1.csv"
# Call the function to display the video with CSV data
display_video_with_csv(video_path, csv_path)
</code></pre>
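<p>One detail worth isolating from the code above: <code>np.vstack</code> only stacks the frame and the table strip cleanly when their widths match, and the strip's height adds directly to the window height. A NumPy-only sketch of that geometry (the frame size is made up):</p>

```python
import numpy as np

frame = np.zeros((360, 640, 3), dtype=np.uint8)       # hypothetical video frame
table = 255 * np.ones((120, 640, 3), dtype=np.uint8)  # shorter white table strip

combined = np.vstack((frame, table))
assert combined.shape == (480, 640, 3)  # widths must match; heights add up
```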
<p>Expected result:
<a href="https://i.sstatic.net/Ft2ht.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ft2ht.png" alt="enter image description here" /></a></p>
<p>Actual result:
<a href="https://i.sstatic.net/Ne25l.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ne25l.jpg" alt="enter image description here" /></a></p>
| <python><image><opencv> | 2023-07-18 13:28:31 | 0 | 313 | RSK Rao |
76,713,287 | 11,267,783 | Numpy set data depending on logical array | <p>I have a logical array like:</p>
<p><code>ind = np.array([1,0,1])</code></p>
<p>and a data matrix like:</p>
<pre><code>data = np.array([[1,2,3],
[4,5,6],
[7,8,9]])
</code></pre>
<p>And I would like to do something like this (this is the way to do it in MATLAB):</p>
<pre><code>data[ind,:] = 0
</code></pre>
<p>in order to set every row whose logical flag is 1 to a specific value:</p>
<pre><code>np.array([[0,0,0],
[4,5,6],
[0,0,0]])
</code></pre>
<p>How can I do it in Python?</p>
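<p>For reference, the MATLAB-style logical assignment translates to a boolean mask in NumPy; with an integer array, NumPy would treat the values as row numbers instead (a sketch on the example data):</p>

```python
import numpy as np

ind = np.array([1, 0, 1])
data = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

# astype(bool) turns the 0/1 indicator into a mask selecting rows 0 and 2.
data[ind.astype(bool), :] = 0
assert data.tolist() == [[0, 0, 0], [4, 5, 6], [0, 0, 0]]
```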
| <python><numpy> | 2023-07-18 13:28:17 | 2 | 322 | Mo0nKizz |
76,713,236 | 1,303,213 | Flask / Werkzeug URL UnicodeEncodeError (utf8) | <p>There are a lot of questions around this on SO and elsewhere, but nothing has helped me solve my problem so I thought I'd present my specifics.</p>
<p>I have a route in Flask that looks like this</p>
<pre class="lang-py prettyprint-override"><code>@bp.route("/experiment/<name>", methods=["GET"])
</code></pre>
<p>I want to be able to use unicode characters in the <code>name</code> part. In this example I am using <code>\u2019</code>, i.e. <code>’</code>. Let's just say <code>name="bob’s house"</code>. When I navigate to this URL I get the following error:</p>
<pre><code>UnicodeEncodeError: 'latin-1' codec can't encode character '\u2019' in position ...: ordinal not in range(256)
</code></pre>
<p>The final line of the traceback is this:</p>
<pre><code>File "/var/lang/lib/python3.10/site-packages/werkzeug/_internal.py", line 119, in _wsgi_decoding_dance
return s.encode("latin1").decode(charset, errors)
</code></pre>
<p><strong>So when I run my Flask server locally I don't get this error.</strong> However when I run it on AWS (lambda), I do. Locally, I just run the server using <code>poetry run python src/webapp/app.py</code>, however on AWS it runs from a Docker container. Because I'm on an M1 Mac, my Docker python version is <code>public.ecr.aws/lambda/python:3.10-x86_64</code> (Is this relevant?). There seem to be no relevant environment variable discrepancies.</p>
<p>Some solutions suggest parsing the URL using <code>urllib.parse.unquote(param)</code> or similar. But the error throws before I can run my own code. In the following, the print statement does not get called.</p>
<pre class="lang-py prettyprint-override"><code>@bp.route("/experiment/<name>", methods=["GET"])
def render(name):
print(f"name: {name}")
</code></pre>
<p>I have also tried adding information to the blueprint route, for example specifying the type of name and the string format (<code>u""</code>):</p>
<pre class="lang-py prettyprint-override"><code>@bp.route(u"/experiment/<string:name>", methods=["GET"])
</code></pre>
<p>Which did not solve the error. <code><path:name></code> also doesn’t fix it.</p>
<p>I have tried setting <code>PYTHONUTF8=1</code> in my AWS Lambda's environment variables, which did not fix the issue. I also tried setting this same flag through the Dockerfile:</p>
<pre><code>ENTRYPOINT ["python3", "-X utf8", "src/webapp/app.py"]
</code></pre>
<p>Or adding (<a href="https://stackoverflow.com/questions/43356982/docker-python-set-utf-8-locale">based on this</a>)</p>
<pre><code>ENV PYTHONIOENCODING=utf-8
</code></pre>
<p>does not fix this issue.</p>
<p>Finally, based on <a href="https://tedboy.github.io/flask/werk_doc.quickstart.html?highlight=utf8#header-parsing" rel="nofollow noreferrer">the documentation here</a> I added the following to my <code>config.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>environ.update(HTTP_ACCEPT_CHARSET="ISO-8859-1,utf-8;q=0.7,*;q=0.7")
</code></pre>
<p>and still no luck.</p>
<p>I understand ASCII is the standard HTTP character set, and the concept of percentage encoding. If <code>name = bob's house</code>, I have no problem. I do not understand why the space character succeeds while the <code>’</code> character fails.</p>
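<p>The percent-encoding round trip mentioned above can be checked in isolation; this is what a well-behaved client should send on the wire for that path segment:</p>

```python
from urllib.parse import quote, unquote

name = "bob’s house"
encoded = quote(name)  # percent-encoded UTF-8, ASCII-safe for a URL path
assert encoded == "bob%E2%80%99s%20house"
assert unquote(encoded) == name
```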
<p>I don't want to fall back to straight quotes (<code>'</code>); I want the freedom to use properly directed quotes (<code>’</code>).</p>
<p>I understand the issue <a href="https://docs.python.org/2.7/howto/unicode.html#the-unicode-type" rel="nofollow noreferrer">in Python</a>, but I do not understand how I can resolve it in my WSGI/Werkzeug/Flask/Lambda flow.</p>
<p>I hope you can see I have tried to look for the solution, and even though there are many questions around it I have still not been able to find a solution. Could someone explain how to resolve this issue?</p>
<p>Edit: Further attempt</p>
<pre class="lang-py prettyprint-override"><code>from lambdarado import start
from urllib.parse import quote
from flask import request
def get_app():
# This function must return a WSGI app, e.g. Flask
from webapp import create_app
app = create_app()
@app.before_request
def encode_path_info():
print("got call to encode_path_info")
print(request.path)
print(quote(request.path))
# Properly encode the path_info using UTF-8
path = quote(request.path)
request.path = path
return app
my_app = get_app
start(my_app)
</code></pre>
<p>I tried the above, but the error gets thrown before this function gets called. The function gets called fine on non-breaking URLs.</p>
| <python><amazon-web-services><docker><flask> | 2023-07-18 13:22:28 | 0 | 459 | louisdeb |
76,713,119 | 1,926,221 | Run Windows Task from Python file/script | <p>I've been searching but can't find an answer: is there any way to trigger a Windows task in Windows Task Scheduler from Python? It just needs to trigger a specific task from a Python file/script, nothing more.</p>
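<p>One approach I can think of (a sketch; the task name is a placeholder and the call only works on Windows) is to shell out to the built-in <code>schtasks</code> command, whose <code>/Run /TN</code> flags start a scheduled task on demand:</p>

```python
import subprocess

# "MyTask" is a placeholder for the scheduled task's name/path.
cmd = ["schtasks", "/Run", "/TN", "MyTask"]

# subprocess.run(cmd, check=True)  # uncomment on a Windows machine
```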
| <python><windows><scripting><windows-task-scheduler> | 2023-07-18 13:07:46 | 1 | 3,726 | IGRACH |
76,713,010 | 14,190,526 | Why does Python's create_autospec() trigger property getters on underscore-prefixed field access and why twice? | <p>I'm using Python's <code>unittest.mock.create_autospec()</code> to create a mock object of a class. I noticed an odd behavior: when I try to access the underscore-prefixed field directly (i.e., <code>mock._val</code>), it triggers the getter of the <code>val</code> property. I was under the impression that accessing the underscore-prefixed field should not trigger the getter at all.</p>
<p>Moreover, the getter method is called twice. Is this expected behavior? And if so, why is it designed this way?</p>
<p>Here's an example that reproduces the behavior:</p>
<pre><code>from unittest.mock import create_autospec
def test_strange_behavior():
class MyCls:
def __init__(self):
self._val = None
@property
def val(self):
print("Inside Get")
return self._val
@val.setter
def val(self, value):
print("Inside Set")
self._val = value
mock = create_autospec(spec=MyCls(), spec_set=True, )
print("Before")
print(mock._val) # This line triggers the getter method for 'val'
print("After")
</code></pre>
<pre><code>$ python --version
Python 3.10.6
$ pip list | grep pytest
pytest 7.1.3
</code></pre>
<p>Output:</p>
<pre><code>$ pytest my_test.py -s
============================================================================================================================== test session starts ===============================================================================================================================
platform linux -- Python 3.10.6, pytest-7.1.3, pluggy-1.0.0
rootdir: /home/david/PycharmProjects/test
plugins: Faker-13.3.4, postgresql-5.0.0
collected 1 item
my_test.py Inside Get # First call
Inside Get # Second call
Before
<NonCallableMagicMock name='mock._val' id='140286340235120'>
After
.
=============================================================================================================================== 1 passed in 0.02s ================================================================================================================================
</code></pre>
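The getter firing during spec creation can be observed without mock at all: as far as I can tell from CPython's `mock.py`, `create_autospec` builds child specs by walking `dir(spec)` and calling `getattr` on each name, and on an *instance* spec that `getattr` goes through the property. A minimal sketch (the `WithProperty` class is a stand-in):

```python
class WithProperty:
    getter_calls = []

    @property
    def val(self):
        WithProperty.getter_calls.append("get")
        return None

obj = WithProperty()

# create_autospec introspects its spec roughly like this loop does:
for name in dir(obj):
    try:
        getattr(obj, name)  # the property getter runs here
    except AttributeError:
        pass

print(WithProperty.getter_calls)  # ['get']
```

Since `mock._val` is not on the spec, accessing it on a `spec_set` mock would normally fail — that it instead returns a child mock named `mock._val` suggests the attribute was picked up during this introspection pass.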
| <python><unit-testing> | 2023-07-18 12:56:29 | 0 | 1,100 | salius |
76,712,972 | 1,585,507 | Load testing an API that uses pubsub | <p>I'd like to use Locust to load test an API that uses pubsub. When I say that it uses pubsub, I mean:</p>
<ul>
<li>the entry point is a topic. Clients publish payloads to this topic</li>
<li>The pubsub server then POSTs a request to my API</li>
<li>my API acknowledges the request and answers with 200 (this does NOT mean that the request was processed)</li>
<li>my API does some computations (they take ~20 seconds) and publishes a message on another topic</li>
<li>I can link the original request and the response through a unique id</li>
</ul>
<p>Basically, I need the client used by locust to send a request and then wait for a message to appear on the output topic.</p>
<p>Would you have an idea about how to do that with Locust?</p>
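I'm not aware of built-in Locust support for this pattern; one common approach is a custom client that publishes with a correlation id and blocks on an event until a background subscriber sees the matching message on the output topic. A transport-agnostic sketch — all names are hypothetical, and the actual pubsub publish/subscribe calls are left as comments:

```python
import threading
import uuid

pending = {}  # correlation id -> {"event": Event, "response": message}

def send_and_wait(publish, timeout=60.0):
    """Publish a payload, then block until the matching output message arrives."""
    request_id = str(uuid.uuid4())
    entry = {"event": threading.Event(), "response": None}
    pending[request_id] = entry
    publish(request_id)  # e.g. publisher.publish(input_topic, data, request_id=request_id)
    if not entry["event"].wait(timeout):
        del pending[request_id]
        raise TimeoutError(f"no response for {request_id}")
    return pending.pop(request_id)["response"]

def on_output_message(request_id, message):
    """Called by the subscriber thread for each message on the output topic."""
    entry = pending.get(request_id)
    if entry is not None:
        entry["response"] = message
        entry["event"].set()

# Simulated round trip: the "server" answers from another thread after 50 ms.
def fake_publish(request_id):
    threading.Timer(0.05, on_output_message, args=(request_id, "done")).start()

print(send_and_wait(fake_publish))  # done
```

In a Locust task you would wrap `send_and_wait`, measure the elapsed time around it, and fire Locust's request success/failure events yourself, since the default HTTP client never sees the response.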
| <python><performance><locust> | 2023-07-18 12:52:10 | 1 | 5,739 | JPFrancoia |
76,712,921 | 1,813,275 | Is it possible to activate Fast Fail for TOX jobs? | <p><strong>Is it possible for us to have a fast-fail option for tox jobs?</strong> For instance, if a test fails, none of the other envs should be run. This would be very useful for us in CI workflows.</p>
<p>I checked their documentation and couldn't find any options to do this.</p>
<p>I am aware that this is possible for GitHub-Actions. However, I am not using this.</p>
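I could not find a tox-level fail-fast flag either. A common workaround — assuming pytest is the test runner, which is an assumption about your setup — is to make the test command itself stop at the first failure with pytest's `-x`:

```ini
[testenv]
deps = pytest
commands = pytest -x {posargs}
```

This stops each environment at its first failing test; stopping the whole tox run after the first failing *environment* would still need scripting around tox's exit code.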
| <python><tox> | 2023-07-18 12:44:53 | 1 | 363 | Varun Vijaykumar |
76,712,736 | 839,733 | Why can't I name a module 'array'? | <p>Given the following directory structure:</p>
<pre><code>app/
└── array/
├── test/
│ ├── __init__.py
│ └── test_array.py
├── __init__.py
└── functions.py
</code></pre>
<p>When I run pytest from the <code>app</code> directory, I get the following error:</p>
<pre><code>ERROR collecting array/test/test_array.py _______________________________________
ImportError while importing test module '/path/to/app/array/test/test_array.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
E ModuleNotFoundError: No module named 'array.test'; 'array' is not a package
</code></pre>
<p>Simply renaming the module to <code>arrays</code> works, but I don't understand what's wrong with <code>array</code>.</p>
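The collision here is with the standard library: `array` is a stdlib *module*, not a package, so (one plausible reading of the traceback) the import machinery resolves `array` to the stdlib module and then cannot find the submodule `array.test` inside it. A quick check that the stdlib name exists and is a plain module:

```python
import array

# The stdlib ``array`` is a single module with no subpackages:
print(hasattr(array, "__path__"))          # False -> "'array' is not a package"
print(array.array("i", [1, 2, 3]).tolist())  # [1, 2, 3]
```

Renaming to `arrays` works precisely because that name does not clash with anything on the default import path.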
| <python><pytest><python-import><python-module> | 2023-07-18 12:22:25 | 3 | 25,239 | Abhijit Sarkar |
76,712,720 | 22,234,318 | TypeError: unsupported operand type(s) for |: 'type' and 'NoneType' | <pre><code>from dataclasses import dataclass
@dataclass
class InventoryItem:
"""Class for keeping track of an item in inventory."""
name: str | None = None
unit_price: float
quantity_on_hand: int = 0
</code></pre>
<p>TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'</p>
<p>Python 3.9</p>
<p>I think the problem is that this syntax requires a newer version of Python. How can I solve it?</p>
<p>I tried using <code>or</code> instead, but it didn't help.</p>
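Consistent with the "Python 3.9" note above: the `X | None` union syntax in annotations was added in Python 3.10 (PEP 604), and `@dataclass` evaluates annotations at class-creation time, so on 3.9 the `|` runs on real types and raises exactly this `TypeError`. A sketch of a 3.9-compatible spelling — note the field order is also adjusted, since a field without a default may not follow one with a default:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryItem:
    """Class for keeping track of an item in inventory."""
    unit_price: float            # non-default fields must come first
    name: Optional[str] = None   # 3.9-compatible spelling of ``str | None``
    quantity_on_hand: int = 0

item = InventoryItem(unit_price=3.5)
print(item.name, item.quantity_on_hand)  # None 0
```

Alternatively, `from __future__ import annotations` at the top of the module keeps annotations as lazy strings, so the `str | None` spelling also parses on 3.9 (the non-default-after-default field order still has to be fixed, though).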
| <python><python-typing><nonetype> | 2023-07-18 12:20:17 | 1 | 569 | jetgreen |
76,712,619 | 272,023 | How to submit Beam Python job onto Kubernetes with Flink runner? | <p>I'm wanting to run a continuous stream processing job using Beam on a Flink runner within Kubernetes. I've been following this tutorial here (<a href="https://python.plainenglish.io/apache-beam-flink-cluster-kubernetes-python-a1965f37b7cb" rel="nofollow noreferrer">https://python.plainenglish.io/apache-beam-flink-cluster-kubernetes-python-a1965f37b7cb</a>) but I'm not sure what the author is referring to when he talks about the "flink master container". I don't understand how I am supposed to submit my Python code into the cluster, when that code is defined within a container image itself.</p>
<p>The Kubernetes Flink cluster architecture looks like this:</p>
<ul>
<li><p>single JobManager, exposes the Flink web UI via a Service and Ingress</p>
</li>
<li><p><strong>multiple</strong> Task Managers, each running 2 containers:</p>
<ul>
<li>Flink task manager</li>
<li>Beam worker pool, which exposes port 50000</li>
</ul>
</li>
</ul>
<p>The Python code in the example tutorial has Beam configuration which looks like this:</p>
<pre><code>options = PipelineOptions([
"--runner=FlinkRunner",
"--flink_version=1.10",
"--flink_master=localhost:8081",
"--environment_type=EXTERNAL",
"--environment_config=localhost:50000"
])
</code></pre>
<p>It's clear that when you run this locally as per the tutorial, it speaks to the Beam worker pool to launch the application. However, if I have a Docker image containing my application code and I want to start this application within Kubernetes, where do I deploy this image in my Kubernetes cluster? Is it as a container within <strong>each</strong> Task Manager pod (and therefore using localhost:50000 to communicate to Beam)? Or do I create a <strong>single</strong> pod containing my application code and point that pod at port 50000 of my Task Managers - if so, is the fact that I have <strong>multiple</strong> Task Managers a problem?</p>
<p>Any pointers to documentation or examples would be really helpful. This <a href="https://stackoverflow.com/questions/57851158/how-do-i-run-beam-python-pipelines-using-flink-deployed-on-kubernetes">other SO question</a> has an incomplete answer.</p>
| <python><apache-flink><apache-beam> | 2023-07-18 12:08:28 | 3 | 12,131 | John |
76,712,409 | 6,378,557 | Generate all possible halves of a set (without repetition) | <p>Starting with a list/set with an even number of elements, how can I generate all possible splits in equal halves where the order of the halves isn't important? For instance, {1,2,3,4} should yield:</p>
<pre><code>{1, 2} {3, 4}
{1, 3} {2, 4}
{1, 4} {2, 3}
</code></pre>
<p>Using itertools.combinations() to generate the first half and then subtracting that from the input to get the second half generates duplicates:</p>
<pre><code>{1, 2} {3, 4}
{1, 3} {2, 4}
{1, 4} {2, 3}
{2, 3} {1, 4}
{2, 4} {1, 3}
{3, 4} {1, 2}
</code></pre>
<p>Using inputs of less trivial size would make it very impractical to keep a list of everything that has been generated so far...</p>
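One way to avoid the duplicates without bookkeeping, sketched below: fix one designated element (say, the smallest) into the first half, then choose only the remaining n/2 − 1 members of that half from the rest. Each unordered split is then generated exactly once:

```python
from itertools import combinations

def half_splits(items):
    """Yield each unordered split of ``items`` into two equal halves once."""
    items = sorted(items)
    first, rest = items[0], items[1:]
    k = len(items) // 2 - 1  # remaining members of the half containing ``first``
    for combo in combinations(rest, k):
        half = {first, *combo}
        yield half, set(items) - half

for a, b in half_splits({1, 2, 3, 4}):
    print(sorted(a), sorted(b))
```

For a set of size 2m this produces C(2m−1, m−1) splits — half of the C(2m, m) ordered pairs — and memory stays constant regardless of input size.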
| <python><combinations> | 2023-07-18 11:41:53 | 2 | 9,122 | xenoid |
76,712,368 | 4,451,521 | Why the thread is the same with multiple threads in PyCUDA | <p>I have the following program</p>
<pre><code>import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
#include <stdio.h>
__global__ void myfirst_kernel()
{
printf("I am in block no: %d thread no: %d \\n", blockIdx.x, threadIdx.x);
}
""")
function = mod.get_function("myfirst_kernel")
function(grid=(10,2),block=(1,1,1))
</code></pre>
<p>As you can see I am running 10 blocks and 2 threads per block.
However, the output is:</p>
<pre><code>python thread_execution.py
I am in block no: 1 thread no: 0
I am in block no: 7 thread no: 0
I am in block no: 1 thread no: 0
I am in block no: 7 thread no: 0
I am in block no: 3 thread no: 0
I am in block no: 0 thread no: 0
I am in block no: 3 thread no: 0
I am in block no: 6 thread no: 0
I am in block no: 9 thread no: 0
I am in block no: 0 thread no: 0
I am in block no: 9 thread no: 0
I am in block no: 6 thread no: 0
I am in block no: 5 thread no: 0
I am in block no: 2 thread no: 0
I am in block no: 5 thread no: 0
I am in block no: 8 thread no: 0
I am in block no: 4 thread no: 0
I am in block no: 2 thread no: 0
I am in block no: 8 thread no: 0
I am in block no: 4 thread no: 0
</code></pre>
<p>I was expecting threadIdx.x to also give me 1. Why is it always 0?</p>
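For what it's worth, the launch-configuration arithmetic can be checked without a GPU: `grid=(10,2)` is a 2-D grid of *blocks* (20 blocks total), while `block=(1,1,1)` gives exactly one thread per block, so `threadIdx.x` can only ever be 0 — the 20 printed lines are each `blockIdx.x` value appearing once per `blockIdx.y` in 0..1, which the kernel does not print. A plain-Python sketch of the same counting:

```python
grid = (10, 2)     # 2-D grid of blocks: blockIdx.x in 0..9, blockIdx.y in 0..1
block = (1, 1, 1)  # one thread per block, so threadIdx.x can only be 0

lines = [(bx, tx)
         for bx in range(grid[0])
         for _by in range(grid[1])
         for tx in range(block[0])]
print(len(lines))                 # 20 -- matches the 20 lines of kernel output
print({tx for _bx, tx in lines})  # {0} -- the only threadIdx.x value printed
```

To see `threadIdx.x == 1`, the block size itself has to grow, e.g. `block=(2, 1, 1)` with `grid=(10, 1)`.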
| <python><cuda><pycuda> | 2023-07-18 11:37:17 | 1 | 10,576 | KansaiRobot |
76,712,284 | 5,012,322 | Flask app placing elements on body that are placed on template's head | <p>I have the following simple flask app with a template:</p>
<p><code>app.py</code>:</p>
<pre><code>from flask import Flask
from flask import render_template
app = Flask(__name__)
@app.route("/")
def index():
return render_template('index.html',\
title = "My Title"
)
</code></pre>
<p><code>layout.html</code>:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>{% block title %}{% endblock %}</title>
<link rel="icon" href="data:,">
<link rel="stylesheet" href="static/css/bootstrap.min.css">
<link rel="stylesheet" href="static/css/style.css">
</head>
<body class="mx-3">
<div class="card">
<div class="card-body">
{% block content %}{% endblock %}
</div>
</div>
<script src="static/js/jquery-3.7.0.slim.min.js"></script>
<script src="static/js/bootstrap.bundle.min.js"></script>
{% block scripts %}{% endblock %}
</body>
</html>
</code></pre>
<p><code>index.html</code></p>
<pre><code>{% extends "layout.html" %}
{% block title %}
{{ title }}
{% endblock %}
{% block content %}
<p class="display-4">Hello from index</p>
{% endblock %}
{% block scripts %}
<script src="static/js/scripts.js"></script>
{% endblock %}
</code></pre>
<p>When I run this app with <code>flask --app app run</code> the following HTML is generated. Notice how the head elements are placed on top of the body element and the strange <code>&#xFEFF;</code> (Zero Width No-Break Space) text.</p>
<p>Generated HTML:</p>
<pre><code><html lang="en">
<head></head>
<body class="mx-3">
"&#xFEFF; "
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title> My Title </title>
<link rel="icon" href="data:,">
<link rel="stylesheet" href="static/css/bootstrap.min.css">
<link rel="stylesheet" href="static/css/style.css">
<div class="card">
<div class="card-body">
<p class="display-4">Hello from index</p>
</div>
</div>
<script src="static/js/jquery-3.7.0.slim.min.js"></script>
<script src="static/js/bootstrap.bundle.min.js"></script>
<script src="static/js/scripts.js"></script>
</body>
</html>
</code></pre>
<p>What may be causing this? This is really the minimal reproducible example that I could produce.</p>
<p>I only installed flask with <code>pip install flask</code> on a virtual environment.</p>
<p>Python version: 3.11.4</p>
<p>Flask version: 2.3.2</p>
<p>My folder structure looks like this:</p>
<pre><code>├───static
│ ├───css
│ │ bootstrap.min.css
│ │ style.css
│ │
│ └───js
│ bootstrap.bundle.min.js
│ jquery-3.7.0.slim.min.js
│ scripts.js
├───templates
│ index.html
│ layout.html
│
├───app.py
</code></pre>
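One likely culprit — an assumption, since the templates themselves look fine: a UTF-8 byte-order mark at the start of `layout.html` or `index.html`. A BOM rendered before `<!DOCTYPE>` makes browsers move the leading text (and everything the parser then treats as misplaced) into the body, which matches the stray <code>&#xFEFF;</code> in the generated HTML. A small stdlib script to check for and strip BOMs, demonstrated on a throwaway file rather than the real templates:

```python
import codecs
from pathlib import Path

def strip_bom(path):
    """Remove a leading UTF-8 BOM from the file, returning True if one was found."""
    data = Path(path).read_bytes()
    if data.startswith(codecs.BOM_UTF8):
        Path(path).write_bytes(data[len(codecs.BOM_UTF8):])
        return True
    return False

demo = Path("bom_demo.html")
demo.write_bytes(codecs.BOM_UTF8 + b"<!DOCTYPE html>")
print(strip_bom(demo))        # True: a BOM was found and removed
print(demo.read_bytes())      # b'<!DOCTYPE html>'
demo.unlink()
```

Running `strip_bom` over every file in `templates/` (and re-saving files as "UTF-8 without BOM" in your editor) should make the rendered head/body structure come out as written.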
| <python><flask><jinja2> | 2023-07-18 11:25:14 | 1 | 416 | DC_AC |
76,712,262 | 3,521,180 | How to print a right-angle triangle of integers without using Python's str() function? | <p>Expected output is as follows:</p>
<pre><code>1
121
12321
1234321
123454321
</code></pre>
<p>The following code that gives me desired output</p>
<pre><code>def generate_palindromic_triangle(n):
for i in range(1, n + 1):
line = ""
for j in range(1, i + 1):
line += str(j)
for j in range(i - 1, 0, -1):
line += str(j)
print(line)
# Example usage
generate_palindromic_triangle(5)
</code></pre>
<p>But my requirement has the following constraints:</p>
<ul>
<li>Need to use only integers</li>
<li>Must be having a linear time complexity or less</li>
<li>Mustn't use more than 1 <code>print</code> statement</li>
</ul>
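If it helps, one all-integer route that satisfies the constraints, sketched below: each row is a repunit squared, since 11…1 (i ones) squared equals 123…i…321, and the repunit itself is `(10**i - 1) // 9`. No `str()` is needed and there is exactly one `print` statement:

```python
def generate_palindromic_triangle(n):
    for i in range(1, n + 1):
        repunit = (10 ** i - 1) // 9  # 1, 11, 111, ...
        print(repunit * repunit)      # 1, 121, 12321, ... -- integers only

generate_palindromic_triangle(5)
```

Note the repunit identity only holds for up to 9 rows (once a "digit" would reach 10 the carries break the pattern), which covers the example here.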
| <python><python-3.x> | 2023-07-18 11:21:37 | 6 | 1,150 | user3521180 |
76,712,250 | 8,891,757 | Find index of first unsorted element in polars DataFrame | <p>I have a dataframe like below</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"a": [3,2,1], "b":[3,4,5]})
shape: (3, 2)
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 3 ┆ 3 │
│ 2 ┆ 4 │
│ 1 ┆ 5 │
└─────┴─────┘
</code></pre>
<p>I want to find the first index where the data is <strong>not</strong> sorted by (a, b). My dataframe is quite big so I want to do this in linear time (rather than calling sort). I tried creating a packed column and a shifted version of that like below</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.struct("a", "b").alias("a_b")).with_columns(pl.col("a_b").shift().alias("shift_a_b"))
shape: (3, 4)
┌─────┬─────┬───────────┬─────────────┐
│ a ┆ b ┆ a_b ┆ shift_a_b │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ struct[2] ┆ struct[2] │
╞═════╪═════╪═══════════╪═════════════╡
│ 3 ┆ 3 ┆ {3,3} ┆ {null,null} │
│ 2 ┆ 4 ┆ {2,4} ┆ {3,3} │
│ 1 ┆ 5 ┆ {1,5} ┆ {2,4} │
└─────┴─────┴───────────┴─────────────┘
</code></pre>
<p>Then I tried creating a mask to see if the elements were increasing</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.col("a_b") < pl.col("shift_a_b"))
</code></pre>
<p>but this fails with the following error</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.struct("a", "b").alias("a_b")).with_columns(pl.col("a_b").shift().alias("shift_a_b")).with_columns(pl.col("a_b") < pl.col("shift_a_b"))
# InvalidOperationError: cannot perform '<' comparison between series 'a_b' of dtype: struct[2] and series 'shift_a_b' of dtype: struct[2]
</code></pre>
<p>I know that calling <code>.sort</code> on "a_b" works, so the elements must have a comparison defined on them, but I can't figure out how to tell polars that "a_b" and "shift_a_b" are comparable.</p>
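In polars itself, one way around the `struct < struct` limitation is to compare the columns separately against their shifts (strictly-decreasing `a`, or equal `a` with decreasing `b`). Setting polars aside, the lexicographic single-pass logic can at least be validated in plain Python, since tuples compare lexicographically out of the box (function and variable names here are hypothetical):

```python
def first_unsorted_index(rows):
    """Index of the first row that breaks (a, b) lexicographic order, or None."""
    for i in range(1, len(rows)):
        if rows[i] < rows[i - 1]:  # tuples compare lexicographically
            return i
    return None

rows = list(zip([3, 2, 1], [3, 4, 5]))  # the (a, b) pairs from the example
print(first_unsorted_index(rows))       # 1 -- because (2, 4) < (3, 3)
```

This is O(n) and short-circuits at the first violation, which is the behavior the shifted-struct comparison was trying to express.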
| <python><dataframe><python-polars> | 2023-07-18 11:20:22 | 1 | 497 | Casey |
76,712,221 | 2,791,346 | Trigger my function when any file is saved in django | <h3>I would like to tap into the file-save event inside my Django application</h3>
<p>I have a dockerized Django application. When I press save in any file, the server is restarted and the Docker image rebuilt.</p>
<p>I would like similar functionality: whenever I press save in any file, my function should be called after the server is restarted.</p>
<p>How can I achieve this behaviour?</p>
<p>I try with overriding <code>FileSystemStorage</code> but it's not working.</p>
<pre class="lang-py prettyprint-override"><code>from django.core.files.storage import FileSystemStorage
class CustomFileSystemStorage(FileSystemStorage):
def _save(self, name, content):
saved_file_name = super()._save(name, content)
self.my_custom_function(saved_file_name, name, content)
return saved_file_name
def my_custom_function(self, file_path, name, content):
print('File saved at: ', file_path)
print('File name: ', name)
print('File content: ', content)
</code></pre>
<p><code>settings.py</code>:</p>
<pre><code>DEFAULT_FILE_STORAGE = 'myApp.storage.CustomFileSystemStorage'
</code></pre>
<p>When I save file nothing is logged</p>
<h3>Not a solution:</h3>
<ul>
<li>trigger when file is saved programmatically</li>
<li>trigger when model object is saved</li>
</ul>
<h3>Edit:</h3>
<p>By <code>press save on any file</code> I mean pressing <code>cmd+s</code> on a file that I was editing (for example: <code>test.py</code>).</p>
<p>Any change on disk results in rebuilding the Docker image. I would like to do something similar.</p>
<h3>EDIT 2:</h3>
<p>I thought that, because the server is restarted and the Docker image rebuilt on save, maybe there is a way to tap into those triggers.</p>
<p>Can I trigger my function when the server is restarted? But it would probably be difficult to determine which file was changed...</p>
<h3>EDIT 3:</h3>
<p>My <code>apps.py</code></p>
<pre><code>from django.apps import AppConfig
import logging
class MyAppConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'myApp'
def ready(self):
logger = logging.getLogger(__name__)
logger.info("READY 123123")
</code></pre>
<p>I also try</p>
<pre><code>from django.apps import AppConfig
import logging
class MyAppConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'myApp'
def ready(self):
print('TEST')
</code></pre>
<p>The <code>settings.py</code></p>
<pre><code>"""
Django settings for WisdomCore project.
Generated by 'django-admin startproject' using Django 4.2.
For more information on this file, see
https://docs.djangoproject.com/en/4.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/4.2/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'xxx'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'my_app'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'Projects.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'Project.wsgi.application'
# Database
# https://docs.djangoproject.com/en/4.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/4.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.2/howto/static-files/
STATIC_URL = 'static/'
# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django': {
'level': 'INFO',
'handlers': ['console'],
'propagate': True,
},
'django.db.backends': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': True,
}
},
}
</code></pre>
<p>The logs are:</p>
<p><a href="https://i.sstatic.net/ArNAr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ArNAr.png" alt="Logs" /></a></p>
<p>I don't have any other code...</p>
<p>My tree</p>
<pre><code>├── Dockerfile
├── Project
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── docker-compose.yml
├── my_app
│ ├── __init__.py
│ ├── admin.py
│ ├── apps.py
│ ├── migrations
│ │ └── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
├── install.sh
├── local_settings.py
├── makefile
├── manage.py
├── neo4j-conf
│ └── neo4j.conf
├── neo4j_data
└── requirements.txt
</code></pre>
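If the goal is just "tell me which source file changed" — which Django's `FileSystemStorage` cannot do, since it only sees files saved *through* Django — a stdlib-only sketch is to snapshot modification times and diff snapshots; the autoreloader and Docker rebuild watchers do something similar under the hood. All names here are hypothetical:

```python
import os
import tempfile

def snapshot(root):
    """Map of path -> mtime for every file under root."""
    mtimes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished between walk and stat
    return mtimes

def changed_files(before, after):
    return sorted(path for path in after if after[path] != before.get(path))

# Demo in a throwaway directory:
with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, "test.py")
    open(target, "w").close()
    before = snapshot(root)
    os.utime(target, (12345, 12345))  # simulate an edit by moving the mtime
    print(changed_files(before, snapshot(root)) == [target])  # True
```

A small management command could take a snapshot at startup, poll in a loop, and call your function with the changed paths; a third-party watcher such as watchdog would replace the polling with OS events.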
| <python><django><file><file-system-storage> | 2023-07-18 11:16:31 | 2 | 8,760 | Marko Zadravec |
76,712,073 | 2,722,968 | What is the specification on how value comparisons have to be implemented? | <p>I can't seem to find the actual language specification on how <code>value comparisons</code> have to be done by a complying implementation (e.g. CPython). Consider the simple case</p>
<pre class="lang-py prettyprint-override"><code>a == b
</code></pre>
<p>The reference <a href="https://docs.python.org/3/reference/datamodel.html#object.__lt__" rel="nofollow noreferrer">says</a> that this is done using the “rich comparison” methods, in this case <code>__eq__()</code>. Neither the <a href="https://docs.python.org/3/reference/datamodel.html#data-model" rel="nofollow noreferrer">Data Model</a> nor the reference on <a href="https://docs.python.org/3/reference/expressions.html#value-comparisons" rel="nofollow noreferrer">Value Comparison Expressions</a> seem to specify in detail, what <em>else</em> a Python implementation is required/allowed to do, especially if the <code>type(a)</code> signals that it does not implement <code>__eq__</code> by explicitly returning <code>NotImplemented</code>.</p>
<p><a href="https://stackoverflow.com/questions/3588776/how-is-eq-handled-in-python-and-in-what-order">This answer</a> plausibly suggests that for any <code>a == b</code>, CPython</p>
<ol>
<li>calls <code>b.__eq__(a)</code>(!) if <code>b</code> is a subclass of <code>a</code>. This makes intuitive sense, as it allows the subclass to overload all comparisons between values of itself and its parent class (therefore also implying symmetry for both the parent and the subclass).</li>
<li>if the above fails, calls <code>a.__eq__(b)</code></li>
<li>if the above fails, calls <code>b.__eq__(a)</code> if that hasn't happened yet;</li>
<li>falls back to <code>identity comparison</code></li>
</ol>
<p>AFAICS this order makes sense, also given Python's loose sense of symmetry, reflexivity, variance and the like. My question is: Does the language <em>guarantee</em> that for any <code>a == b</code> (or other value comparisons), a complying implementation will execute the comparison in the way described above?</p>
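Whatever the spec guarantees, the priority order in step 1 is at least observable empirically on CPython — this demonstrates the behavior, not a language-level guarantee, which is precisely what the question is asking about:

```python
calls = []

class Base:
    def __eq__(self, other):
        calls.append("Base.__eq__")
        return NotImplemented

class Sub(Base):
    def __eq__(self, other):
        calls.append("Sub.__eq__")
        return NotImplemented

a, b = Base(), Sub()
result = (a == b)   # reflected Sub.__eq__ is tried first, then Base.__eq__, once each
print(calls)        # ['Sub.__eq__', 'Base.__eq__']
print(result)       # False -- identity fallback, since a is not b
```

Note that the reflected method is *not* tried a third time: once both sides return `NotImplemented`, `==` falls back to identity, matching steps 1, 2 and 4 of the list above.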
| <python><language-lawyer><cpython> | 2023-07-18 10:56:50 | 1 | 17,346 | user2722968 |
76,712,039 | 1,907,902 | QtWebEngine::initialize() with PyQt | <p>I have the code below:</p>
<pre><code>import sys
from PyQt5.QtGui import QGuiApplication
from PyQt5.QtQml import QQmlApplicationEngine
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
engine.quit.connect(app.quit)
engine.load('main.qml')
sys.exit(app.exec())
</code></pre>
<p>and qml file:</p>
<pre><code>import QtQuick 2.15
import QtQuick.Window 2.15
import QtWebEngine 1.2
Window {
visible: true
title: "HelloApp"
WebEngineView{
anchors.fill: parent
url: "http://google.com"
}
}
</code></pre>
<p>When I try to run the app, I see an error:
<code>WebEngineContext used before QtWebEngine::initialize() or OpenGL context creation failed.</code></p>
<p>Question: how can I call <code>QtWebEngine::initialize()</code> in PyQt?</p>
| <python><qt><pyqt><pyqt5><qml> | 2023-07-18 10:53:18 | 1 | 13,078 | kharandziuk |
76,712,001 | 2,464,424 | In Python Asyncio/Aiohttp, is there an equivalent to NGINX's "client_header_timeout"? | <p>In Nginx there's the very handy <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout" rel="nofollow noreferrer">client_header_timeout</a> directive that forces the client to finish sending the headers of an HTTP request within a certain timespan. This is useful to mitigate slowloris attacks in combination with other directives. Is there an equivalent for Python Asyncio/Aiohttp? All I can find in the docs is the <a href="https://docs.aiohttp.org/en/stable/client_reference.html#aiohttp.ClientTimeout" rel="nofollow noreferrer">ClientTimeout structure</a> that however is only useful for socket-level timeouts like connection establishment or socket reads.</p>
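I have not found a direct aiohttp server-side equivalent either. At the raw asyncio level, the idea — bound the time a client may take to finish sending its headers — can be sketched with `asyncio.wait_for` around the header read. This is demonstrated on an in-memory `StreamReader`; in a real server this logic would wrap the per-connection read, and the function names are hypothetical:

```python
import asyncio

async def read_headers(reader, timeout=5.0):
    # Abort if the client does not finish its headers within `timeout` seconds;
    # a slowloris-style client then raises asyncio.TimeoutError here.
    return await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"), timeout)

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
    raw = await read_headers(reader, timeout=1.0)
    return raw.decode().splitlines()[0]

print(asyncio.run(demo()))  # GET / HTTP/1.1
```

In practice the standard mitigation is to keep Nginx (with `client_header_timeout`) as a reverse proxy in front of the aiohttp application, so slow-header clients never reach Python at all.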
| <python><nginx><python-asyncio><aiohttp> | 2023-07-18 10:49:09 | 0 | 1,626 | user2464424 |
76,711,961 | 66,941 | Az Cli (2.48.1), Powershell , Python and configparser.DuplicateSectionError systemprofile Duplicate core | <p>We are using azure cli, 2.48.1.</p>
<p>When our web site executes a shell command to run a PowerShell script that contains an "az ..." command, we get a Python error that includes the following (the "Python-Systemprofile-Problem"):</p>
<pre><code>Traceback (most recent call last):
File "runpy.py", line 196, in _run_module_as_main
File "runpy.py", line 86, in _run_code
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/__main__.py", line 39, in <module>
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/__init__.py", line 895, in get_default_cli
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/azlogging.py", line 30, in <module>
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 25, in <module>
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/extension/__init__.py", line 18, in <module>
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 69, in __init__
File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/config.py", line 195, in __init__
File "configparser.py", line 699, in read
File "configparser.py", line 1072, in _read
configparser.DuplicateSectionError: While reading from 'C:\\Windows\\System32\\config\\systemprofile\\.azure\\config' [line 9]: section 'core' already exists
</code></pre>
<p>This is repeatable from an administrator console. Running the command "az --version" from inside the directory "C:\Windows\System32\config\systemprofile" yields the Python-Systemprofile-Problem output.</p>
<p>In other directories, running the command "az --version" does <strong>not</strong> give this python problem.</p>
<p><em>On another machine</em> that runs our web site code, we don't get the Python-Systemprofile-Problem. An administrator console does not yield it, and the web site code runs PowerShell files that contain az commands without the Python-Systemprofile-Problem output.</p>
<p>To try to fix this we have uninstalled azure-cli-2.48.1, and reinstalled with azure-cli-2.48.1.msi, but the problem still occurs.</p>
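The Python side of the failure is reproducible in isolation: `configparser` raises `DuplicateSectionError` when reading in strict mode (the default) and a section name appears twice — which, per the traceback, is presumably the state of `C:\Windows\System32\config\systemprofile\.azure\config` (two `[core]` sections). A sketch with made-up option names; the fix on the real machine would be editing that file to merge the duplicate sections:

```python
import configparser

BROKEN = """\
[core]
first_run = yes

[core]
collect_telemetry = no
"""

parser = configparser.ConfigParser()
try:
    parser.read_string(BROKEN)
except configparser.DuplicateSectionError as exc:
    print(type(exc).__name__, "- section", exc.section)

# Reading leniently instead, to inspect what is actually in the file:
lenient = configparser.ConfigParser(strict=False)
lenient.read_string(BROKEN)
print(dict(lenient["core"]))  # duplicate sections merged into one
```

Since the error only appears when the working directory is `systemprofile`, it also fits that az picks up the `.azure\config` relative to that profile only in that case.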
| <python><azure><powershell> | 2023-07-18 10:44:25 | 1 | 325 | judek |
76,711,907 | 5,816,253 | Fix JSON dict without double-quotes | <p>I have a list of JSON-like dicts that are not in valid JSON format.
They are all listed in a text file:</p>
<pre><code>"{'name': 'Alice', 'age': 30, 'tasks' [1, 2, 3], 'description': 'writer'}"
"{'name': 'Bob', 'age': 33, 'tasks' [4, 5, 6], 'description': 'runner'}"
"{'name': 'Kelly', 'age': 23, 'tasks' [7, 8, 9], 'description': 'singer'}"
</code></pre>
<p>what I would like to have is</p>
<pre><code> {"name": "Alice", "age": 30, "tasks" [1, 2, 3], "description": "writer"}
{"name": "Bob", "age": 33, "tasks" [4, 5, 6], "description": "runner"}
{"name": "Kelly", "age": 23, "tasks" [7, 8, 9], "description": "singer"}
</code></pre>
<p>to have valid JSON.</p>
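Since each line is a Python-literal dict wrapped in double quotes, the safe stdlib route is `ast.literal_eval` (twice: once to strip the outer quotes, once to parse the dict) followed by `json.dumps` — no regex and no `eval`. A sketch, assuming the missing colon after `'tasks'` in the sample is a transcription typo:

```python
import ast
import json

lines = [
    "\"{'name': 'Alice', 'age': 30, 'tasks': [1, 2, 3], 'description': 'writer'}\"",
    "\"{'name': 'Bob', 'age': 33, 'tasks': [4, 5, 6], 'description': 'runner'}\"",
]

records = []
for line in lines:
    inner = ast.literal_eval(line)           # strips the surrounding double quotes -> str
    records.append(ast.literal_eval(inner))  # safely parses the dict literal

print(json.dumps(records[0]))
# {"name": "Alice", "age": 30, "tasks": [1, 2, 3], "description": "writer"}
```

Unlike a quote-replacing regex, `literal_eval` also copes with apostrophes inside values (e.g. `ca,2nd floor`-style addresses with embedded punctuation).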
| <python><json><double-quotes> | 2023-07-18 10:38:56 | 3 | 375 | sylar_80 |
76,711,858 | 6,119,375 | Setting the Python interpreter for Visual Studio Code on Mac | <p>I am trying to set up Python on my Mac (as a first-time user). The problem I encounter is that my interpreter is <code>Python 3.11.4</code>, but in my terminal the Python version is only Python <code>3.9.13</code>. How can I change this?</p>
<p>Because of this issue I am unable to install any library. I get this in the VS Code terminal:</p>
<pre><code>(base) dana@Danas-Air Python % pip install numpy
Requirement already satisfied: numpy in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (1.25.1)
</code></pre>
<p>But when I try to load the library, I get an error:</p>
<pre><code>ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>This might have an obvious solution, but I haven't been able to find it anywhere. I also looked into the settings and made sure that the box saying “Activate Python Environment in Terminal created using the Extension” is unchecked.</p>
<p><a href="https://i.sstatic.net/BLjiy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BLjiy.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ppItx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ppItx.png" alt="enter image description here" /></a></p>
| <python><python-3.x><visual-studio-code> | 2023-07-18 10:31:50 | 2 | 1,890 | Nneka |
76,711,533 | 1,717,535 | How to use the Python openai client with both Azure and OpenAI at the same time? | <p>OpenAI offers a Python client, currently in version 0.27.8, which supports both Azure and OpenAI.</p>
<p>Here are examples of how to use it to call the ChatCompletion for each provider:</p>
<pre class="lang-py prettyprint-override"><code># openai_chatcompletion.py
"""Test OpenAI's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.environ.get('OPENAI_API_KEY')
# Hello, world.
api_response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": "Hello!"}
],
max_tokens=16,
temperature=0,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
)
print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)
</code></pre>
<p>And:</p>
<pre class="lang-py prettyprint-override"><code># azure_openai_35turbo.py
"""Test Microsoft Azure's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
# Hello, world.
# In addition to the `api_*` properties above, mind the difference in arguments
# as well between OpenAI and Azure:
# - OpenAI from OpenAI uses `model="gpt-3.5-turbo"`!
# - OpenAI from Azure uses `engine="‹deployment name›"`! ⚠️
# > You need to set the engine variable to the deployment name you chose when
# > you deployed the GPT-35-Turbo or GPT-4 models.
# This is the name of the deployment I created in the Azure portal on the resource.
api_response = openai.ChatCompletion.create(
engine="gpt-35-turbo", # engine = "deployment_name".
messages=[
{"role": "user", "content": "Hello!"}
],
max_tokens=16,
temperature=0,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
)
print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)
</code></pre>
<p>i.e. <code>api_type</code> and other settings are globals of the Python library.</p>
<p>Here is a third example to transcribe audio (it uses Whisper, which is available on OpenAI but not on Azure):</p>
<pre class="lang-py prettyprint-override"><code># openai_transcribe.py
"""
Test the transcription endpoint
https://platform.openai.com/docs/api-reference/audio
"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
model="whisper-1",
file=audio_file,
prompt="Part of a Bosnian language class.",
response_format="verbose_json",
)
print(transcript)
</code></pre>
<p>These are minimal examples but I use similar code as part of my webapp (a Flask app).</p>
<p>Now my challenge is that I'd like to:</p>
<ol>
<li><strong>Use the ChatCompletion endpoint from Azure;</strong> but:</li>
<li><strong>Use the Transcribe endpoint from OpenAI (since it's not available on Azure)</strong></li>
</ol>
<p>Is there any way to do so?</p>
<p>I have a few options in mind:</p>
<ul>
<li>Changing the globals before every call. But I'm worried that this might cause side-effects I did not expect.</li>
<li>Duplicating/Forking the library to have two versions run concurrently, one for each provider, but this also feels very messy.</li>
<li>Use an alternative client for OpenAI's Whisper, if any.</li>
</ul>
<p>I'm not too comfortable with these and feel I may have missed a more obvious solution.</p>
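<p>If it helps judge the first option: it could at least be made less error-prone with a small save-and-restore context manager. The sketch below demonstrates the pattern on a dummy module; with the real library I would pass the <code>openai</code> module and its <code>api_*</code> attributes, though this still wouldn't be safe with concurrent requests:</p>

```python
import contextlib
import types

@contextlib.contextmanager
def override_attrs(module, **attrs):
    """Temporarily swap module-level globals, restoring them on exit."""
    saved = {name: getattr(module, name) for name in attrs}
    try:
        for name, value in attrs.items():
            setattr(module, name, value)
        yield module
    finally:
        for name, value in saved.items():
            setattr(module, name, value)

# Demo on a stand-in module (with the real library this would be `openai`
# and its `api_type` / `api_base` / `api_key` globals):
fake_openai = types.ModuleType("fake_openai")
fake_openai.api_type = "open_ai"
fake_openai.api_key = "sk-openai"

with override_attrs(fake_openai, api_type="azure", api_key="azure-key"):
    print(fake_openai.api_type)  # azure
print(fake_openai.api_type)      # open_ai (restored)
```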
<p>Or of course… Alternatively, I could just use Whisper with a different provider (e.g. Replicate) or an alternative to Whisper altogether.</p>
<h1>See also</h1>
<ul>
<li>Someone reported the issue (but without a solution) on GitHub (openai/openai-python): <a href="https://github.com/openai/openai-python/issues/411" rel="nofollow noreferrer">Using Azure and OpenAI at the same time #411</a></li>
</ul>
| <python><azure><python-module><openai-api><openai-whisper> | 2023-07-18 09:52:38 | 2 | 5,679 | Fabien Snauwaert |
76,711,461 | 5,118,420 | Django prefetch related with filter to keep only specific row | <p>Let's suppose that I have these simple models:</p>
<pre><code>from django.db import models
class Author(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=128)
class Book(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=128)
date = models.DateTimeField()
author = models.ForeignKey(Author, related_name='books', on_delete=models.CASCADE)
</code></pre>
<p>I want to execute a queryset that fetches all authors and annotates each author with its latest book name and date. This queryset should be as efficient as possible to avoid multiple subqueries for each needed column; this problem is a simplification of a real-world application with many more fields.</p>
<p>Answering the case of a generic filtering would be great, I would however gladly settle for the specific case of filtering against latest date.</p>
<h2>What have been tried</h2>
<h3>Subquery</h3>
<p>I could use <a href="https://docs.djangoproject.com/fr/4.2/ref/models/expressions/#subquery-expressions" rel="nofollow noreferrer">Subquery</a>, however, a subquery only returns one column, I would need to create one Subquery per columns (here it's just name and date but my real application case has many more fields).</p>
<pre><code>books = Book.objects.filter(author=OuterRef('pk')).order_by('-date')
qs = Author.objects.all().annotate(
latest_book_name=Subquery(books.values('name')[:1]),
latest_book_date=Subquery(books.values('date')[:1]),
)
</code></pre>
<p>This works, at least, but it feels really inefficient.</p>
<h2>Prefetch</h2>
<p>I could use prefetch_related to retrieve all books associated with an author. I could then use Max to retrieve the max date, but it does not contain the latest book name.</p>
<pre><code>qs = Author.objects.all().prefetch_related(Prefetch("books"))
qs = qs.annotate(latest_book_date=Max("books__date"))
</code></pre>
<p>As far as I know, there is no argmax operator.</p>
<h2>first, latest</h2>
<p>Using first, or latest are not options since they do not provide queryset but single objects. I would not be able to use these values as annotations.</p>
<h2>matching id with max id</h2>
<p><a href="https://stackoverflow.com/a/50220075/5118420">Other users have solved similar problems</a> by annotating with Max("id") and then matching book ids with the max ids to keep only the latest books; however, my books haven't been added in chronological order (my real-world application does not even use incremental ids but instead uuids).</p>
| <python><django><database><django-queryset><database-performance> | 2023-07-18 09:43:43 | 1 | 385 | Jean Bouvattier |
76,711,435 | 5,268,594 | Pandas drop_duplicates not working as intended | <p>I am executing the following command, and though <code>inplace</code> is set to True, <code>drop_duplicates</code> is not working:</p>
<pre><code>df.drop_duplicates(inplace=True)
df
</code></pre>
<p>The output is as follows:</p>
<pre><code>open high low close ha_open ha_high ha_low ha_close ha_type
2008-11-01 8.900000 11.175000 7.450000 8.300000 8.90 11.175000 7.450000 8.30 Doji
2008-12-01 7.525000 9.400000 7.375000 7.825000 8.60 9.400000 7.375000 8.03 Doji
2009-01-01 7.850000 9.850000 6.600000 7.425000 8.32 9.850000 6.600000 7.93 Doji
2009-02-01 7.750000 10.000000 7.250000 8.075000 8.12 10.000000 7.250000 8.27 Doji
2009-03-01 8.250000 9.300000 6.750000 7.550000 8.20 9.300000 6.750000 7.96 Doji
... ... ... ... ... ... ... ... ... ...
2023-03-01 67.150002 79.000000 63.049999 74.300003 81.98 81.980000 63.049999 70.88 Bear
2023-04-01 75.000000 86.000000 73.150002 83.550003 76.43 86.000000 73.150002 79.43 Doji
2023-05-01 83.550003 101.449997 81.250000 92.250000 77.93 101.449997 77.930000 89.62 Bull
2023-05-01 83.550003 101.449997 81.250000 92.250000 77.93 101.449997 77.930000 89.62 Bull
2023-06-01 92.300003 104.099998 89.500000 94.300003 83.78 104.099998 83.780000 95.05 Bull
177 rows × 9 columns
</code></pre>
<p>As can be observed, the exactly duplicated rows of 2023-05-01 are not removed.</p>
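<p>For comparison, on a small hand-built frame the same call does remove an identical row, which makes me wonder whether the 'duplicate' rows above actually differ in decimal places hidden by the display rounding:</p>

```python
import pandas as pd

# Two rows with byte-identical values, plus one distinct row.
df = pd.DataFrame(
    {"open": [83.550003, 83.550003, 92.300003],
     "close": [92.25, 92.25, 94.300003]},
    index=["2023-05-01", "2023-05-01", "2023-06-01"],
)
df.drop_duplicates(inplace=True)
print(len(df))  # 2: the identical 2023-05-01 row is gone
```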
| <python><pandas> | 2023-07-18 09:40:20 | 0 | 1,267 | Aditya Borde |
76,711,373 | 6,076,137 | python separating leading newlines and remainder of string | <h3>summary</h3>
<p>Given a string input, I am trying to capture leading newlines into a variable, while capturing the remainder of the string in a second variable.</p>
<h3>input</h3>
<pre><code>msg0 = '' # control test
msg1 = 'a regular string'
msg2 = 'trailing newline\n'
msg3 = '\nleading newline'
msg4 = 'middle\nNewline'
msg5 = '\nMixed\nNewlines\n'
</code></pre>
<h3>goal</h3>
<p>I would like to create 2 variables:</p>
<ol>
<li>contains any leading newlines (or else be empty)</li>
<li>contains the entire string <em>except</em> leading newlines.</li>
</ol>
<p>Applied to the above strings, the desired output would be:</p>
<h3>desired output</h3>
<pre><code>#msg0 control test
leading_newlines = ''
no_leading_newlines = ''
#msg1
leading_newlines = ''
no_leading_newlines = 'a regular string'
#msg2
leading_newlines = ''
no_leading_newlines = 'trailing newline\n'
#msg3
leading_newlines = '\n'
no_leading_newlines = 'leading newline'
#msg4
leading_newlines = ''
no_leading_newlines = 'middle\nNewline'
#msg5
leading_newlines = '\n'
no_leading_newlines = 'Mixed\nNewlines\n'
</code></pre>
<p>The control test <code>msg0</code> is just to show I don't have any exceptions with whatever solution I try.</p>
<h3>attempt</h3>
<p>I built the following regex:</p>
<pre><code>import re
regex = re.compile(r'^(\n*)([^\n]*\n*)$')
leading_newlines = regex.sub('\\1', msg<n>)
no_leading_newlines = regex.sub('\\2', msg<n>)
</code></pre>
<p>I did of course try other patterns, but this one was the closest I could get. I'm aware the regex doesn't make sense for <code>msg4</code> or <code>msg5</code>, but my initial attempted pattern: <code>r'^(\n*)(.*)$'</code> fails for all messages except the control test.</p>
<h3>problem</h3>
<p>The above attempt works for <code>msg1</code>, <code>msg2</code>, and <code>msg3</code>. However, it fails completely for <code>msg4</code> and <code>msg5</code>, both of which return the entire string from beginning to end for both variables.</p>
<h3>full repl code for reproducing attempt</h3>
<pre><code>import re
regex = re.compile(r'^(\n*)([^\n]*\n*)$')
msg0 = ''
msg1 = 'a regular string'
msg2 = 'trailing newline\n'
msg3 = '\nleading newline'
msg4 = 'middle\nNewline'
msg5 = '\nMixed\nNewlines\n'
regex.sub('\\1', msg0)
regex.sub('\\2', msg0)
regex.sub('\\1', msg1)
regex.sub('\\2', msg1)
regex.sub('\\1', msg2)
regex.sub('\\2', msg2)
regex.sub('\\1', msg3)
regex.sub('\\2', msg3)
regex.sub('\\1', msg4)
regex.sub('\\2', msg4)
regex.sub('\\1', msg5)
regex.sub('\\2', msg5)
</code></pre>
<p>Output:</p>
<pre><code>#msg0 -- correct
''
''
#msg1 -- correct
''
'a regular string'
#msg2 -- correct
''
'trailing newline\n'
#msg3 -- correct
'\n'
'leading newline'
#msg4 -- WRONG
'middle\nNewline'
'middle\nNewline'
#msg5 -- WRONG
'\nMixed\nNewlines\n'
'\nMixed\nNewlines\n'
</code></pre>
<p>Notice that <code>msg4</code> and <code>msg5</code> both return the entire string in each of their regex substitutions, while <code>msg1</code> through <code>msg3</code> properly yield the desired output shown above.</p>
<h3>Question</h3>
<p>Can anyone correct my attempt or suggest another solution? I'm not married to regex if there's another approach here. It simply needs to return string types that fulfill the desired output and can be loaded into variables.</p>
<p><strong>Note</strong> The above input is simplified. There may be any number of leading newlines, and the remainder of the string could contain any characters.</p>
<p><em>Python 3.6.8</em></p>
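<p>To make the target behaviour concrete, here is a non-regex version that produces the desired output above for all six messages, so any regex (or other) answer should match it:</p>

```python
def split_leading_newlines(msg):
    """Split msg into (leading newlines, everything after them)."""
    rest = msg.lstrip('\n')
    return msg[:len(msg) - len(rest)], rest

for msg in ['', 'a regular string', 'trailing newline\n',
            '\nleading newline', 'middle\nNewline', '\nMixed\nNewlines\n']:
    print(split_leading_newlines(msg))
```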
| <python><python-3.x> | 2023-07-18 09:33:24 | 2 | 659 | Blaisem |
76,711,275 | 463,463 | Are integration tests in the JS backend ecosystem frowned upon? | <p>I worked with Python APIs before (mostly Django and FastAPI). Our test setup, using standard practice in Django, involved creating a test database and checking db state in our integration and e2e tests. It made sense to me, as there were no doubts that an endpoint did what it's supposed to do when the tests passed.</p>
<p>Now I'm working at a place that uses NestJS in TS and TypeORM. As I have found out, there are no testing features in Jest that create a test database and reset it between tests. When I ask my colleagues and search around, it seems like the general advice in NestJS, and the JS ecosystem in general, is that you are supposed to write mostly unit tests and mock all the DB calls. Our current test setup doesn't fill me with confidence. There were several instances where the mocked calls were too simple to catch mistakes in calling the DB.</p>
<p>So my question is, why is there such a difference between these two frameworks/languages? How do people write reliable integration tests without the DB layer in JS backends?</p>
| <javascript><python><django><nestjs> | 2023-07-18 09:20:54 | 1 | 1,475 | yam |
76,710,868 | 21,346,793 | Why do I get an exception with my FastAPI endpoint? | <p>I have got Pydantic models like:</p>
<pre class="lang-py prettyprint-override"><code>class NewsBase(BaseModel):
title: str
topic: str
class NewsCreate(NewsBase):
datetime: datetime
class News(NewsBase):
id: int
datetime: datetime
class Config:
orm_mode = True
</code></pre>
<p>When I try to make this request, it returns a 500:</p>
<pre><code>@app.get("/news/find_by_topic/{topic}", response_model=schemas.News)
def find_news_by_topic(topic: str, db: Session = Depends(get_db)):
db_news = crud.get_news_by_topic(db, topic=topic)
if db_news is None:
raise HTTPException(status_code=404, detail="This title is not found")
return db_news
</code></pre>
<p>Crud.py:</p>
<pre class="lang-py prettyprint-override"><code>def get_news_by_topic(db: Session, topic: str):
return db.query(models.News).filter(models.News.topic == topic).all()
</code></pre>
<p>This is the error:</p>
<pre class="lang-py prettyprint-override"><code> File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\routing.py", line 291, in app
content = await serialize_response(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\routing.py", line 154, in serialize_response
raise ResponseValidationError(
fastapi.exceptions.ResponseValidationError
</code></pre>
<p>How can I fix it?</p>
| <python><fastapi> | 2023-07-18 08:30:37 | 1 | 400 | Ubuty_programmist_7 |
76,710,772 | 4,287,229 | YOLOv8 with FASTAPI error could not find a writer for the specified extension in function 'imwrite_' | <p>I have code to run YOLOv8 in FastAPI:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, UploadFile, File
from ultralytics import YOLO
from PIL import Image
import io
app = FastAPI()
model = YOLO('yolov8n.yaml')
model = YOLO('runs/detect/train/weights/best.pt')
@app.post("/detect")
async def detect(file: UploadFile = File(...)):
contents = await file.read()
img = Image.open(io.BytesIO(contents))
results = model.predict(source=img, save=True)
return {
"result": str(results)
}
</code></pre>
<p>It shows this error:</p>
<pre><code>[WARNING] Application callable raised an exception
[ERROR] Exception in callback <built-in method _loop_step of builtins.CallbackTaskHTTP object at 0x163b3bd30>
handle: <Handle CallbackTaskHTTP._loop_step>
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 61, in uvloop.loop.Handle._run
File "/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/lib/python3.10/site-packages/fastapi/routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "/lib/python3.10/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
File "detections/main.py", line 20, in detect
results = model.predict(source=img, save=True, project="foto_ayam")
File "/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/lib/python3.10/site-packages/ultralytics/engine/model.py", line 254, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 195, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 276, in stream_inference
self.save_preds(vid_cap, i, str(self.save_dir / p.name))
File "/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 330, in save_preds
cv2.imwrite(save_path, im0)
cv2.error: OpenCV(4.6.0) /Users/xperience/actions-runner/_work/opencv-python/opencv-python/opencv/modules/imgcodecs/src/loadsave.cpp:730: error: (-2:Unspecified error) could not find a writer for the specified extension in function 'imwrite_'
</code></pre>
<p>But if the parameter is <code>save=False</code>, everything works well:</p>
<pre><code> results = model.predict(source=img, save=False)
</code></pre>
<p>I have also tried running YOLOv8 detection without FastAPI and everything works well. How can I solve this bug? Thanks</p>
| <python><fastapi><torch><yolo><yolov8> | 2023-07-18 08:16:33 | 2 | 1,300 | Pamungkas Jayuda |
76,710,726 | 12,194,774 | Cannot run Jupyter Notebook on Ubuntu 22.04 | <p>I have Ubuntu 22.04 with Python 3.10. When I try to open Jupyter Notebook from the terminal, this error occurs:</p>
<pre><code>Traceback (most recent call last):
File "/home/anaconda3/lib/python3.10/site-packages/notebook/services/sessions/sessionmanager.py", line 9, in <module>
import sqlite3
File "/home/anaconda3/lib/python3.10/sqlite3/__init__.py", line 57, in <module>
from sqlite3.dbapi2 import *
File "/home/anaconda3/lib/python3.10/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: /home/anaconda3/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so: undefined symbol: sqlite3_trace_v2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/anaconda3/bin/jupyter-notebook", line 5, in <module>
from notebook.notebookapp import main
File "/home/anaconda3/lib/python3.10/site-packages/notebook/notebookapp.py", line 83, in <module>
from .services.sessions.sessionmanager import SessionManager
File "/home/anaconda3/lib/python3.10/site-packages/notebook/services/sessions/sessionmanager.py", line 12, in <module>
from pysqlite2 import dbapi2 as sqlite3
ModuleNotFoundError: No module named 'pysqlite2'
</code></pre>
<p>When I checked sqlite3, it is installed in home/anaconda3/lib/python3.10/sqlite3/ and it contains dbapi2.py. Should I somehow reorganize the folders?
PS: When I tried <code>pip install pysqlite2</code>, another error occurred: <code>ERROR: Could not find a version that satisfies the requirement pysqlite2 (from versions: none) ERROR: No matching distribution found for pysqlite2</code></p>
| <python><python-3.x><ubuntu><jupyter-notebook> | 2023-07-18 08:11:06 | 1 | 337 | HungryMolecule |
76,710,614 | 8,176,763 | dag import error : AttributeError: '_TaskDecorator' object has no attribute 'update_relative' | <p>I'm facing an issue where my dag cannot be imported, but I cannot figure out why:</p>
<pre><code>from airflow.sensors.sql import SqlSensor
import pendulum
from airflow.decorators import task,dag
@dag(
dag_id = "database_monitor",
schedule_interval = '*/10 * * * *',
start_date=pendulum.datetime(2023, 7, 16, 21,0,tz="UTC"),
catchup=False,)
def Pipeline():
check_db_alive = SqlSensor(
task_id="check_db_alive",
conn_id="evergreen",
sql="SELECT pg_is_in_recovery()",
success= lambda x: x == False,
poke_interval= 60,
#timeout = 60 * 2,
mode = "reschedule",
)
@task()
def alert_of_db_inrecovery():
import requests
# result = f"Former primary instance is in recovery, task_instance_key_str: {kwargs['task_instance_key_str']}"
data = {"@key":"kkll",
"@version" : "alertapi-0.1",
"@type":"ALERT",
"object" : "Testobject",
"severity" : "MINOR",
"text" : str("Former primary instance is in recovery")
}
requests.post('https://httpevents.systems/api/sendAlert',verify=False,data=data)
check_db_alive >> alert_of_db_inrecovery
dag = Pipeline()
</code></pre>
<p>I get this error:</p>
<blockquote>
<p>AttributeError: '_TaskDecorator' object has no attribute 'update_relative'</p>
</blockquote>
| <python><airflow> | 2023-07-18 07:56:04 | 1 | 2,459 | moth |
76,710,596 | 8,599,834 | How to forward imports from submodules in Python | <p>Let's assume this directory structure is mandatory for the program and cannot be changed:</p>
<pre><code>src/
├ package_a/
│ ├ package_a/
│ │ ├ __init__.py
│ │ └ some_file_a.py
│ └ __init__.py
│
├ package_b/
│ ├ package_b/
│ │ ├ __init__.py
│ │ └ some_file_b.py
│ └ __init__.py
│
└ package_c/
├ package_c/
│ ├ __init__.py
│ └ some_file_c.py
└ __init__.py
</code></pre>
<p>The first level of each package will never contain anything I want to import except for a same-named subpackage.</p>
<p>Currently, if I'm writing code in, for example, <code>some_file_a.py</code>, I have to import <code>some_file_b.py</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>from package_b.package_b.some_file_b import stuff
</code></pre>
<p><strong>What I would like</strong> to be able to do is instead:</p>
<pre class="lang-py prettyprint-override"><code>from package_b.some_file_b import stuff # package_b is not repeated.
</code></pre>
<p><strong>How do I obtain this?</strong></p>
<p>What I've tried so far is editing <code>package_b/__init__.py</code> to <code>from .package_b import *</code> but that doesn't seem to work. I would like a solution that doesn't require me to manually write the names of the contents of subpackages.</p>
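<p>One trick I'm experimenting with (no idea yet whether it has downsides) is to have the outer <code>__init__.py</code> replace itself in <code>sys.modules</code> with the inner package. A self-contained reproduction that builds the layout in a temp directory:</p>

```python
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
outer = root / "package_b"
inner = outer / "package_b"
inner.mkdir(parents=True)

# Outer __init__.py: alias the outer name to the inner package,
# so submodule lookups use the inner package's __path__.
(outer / "__init__.py").write_text(
    "import sys\n"
    "from . import package_b as _inner\n"
    "sys.modules[__name__] = _inner\n"
)
(inner / "__init__.py").write_text("")
(inner / "some_file_b.py").write_text("stuff = 42\n")

sys.path.insert(0, str(root))
from package_b.some_file_b import stuff  # no repeated package name

print(stuff)  # 42
```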
| <python><python-3.x><import><python-import><python-packaging> | 2023-07-18 07:53:55 | 1 | 2,742 | theberzi |
76,710,535 | 2,583,670 | Remove parentheses of tuples inside a list in python | <p>Given a list containing several tuples like this:</p>
<pre><code>L = [(1, 2, 3), (3, 2, 1), (2, 1, 6), (7, 3, 2), (1, 0, 2)]
</code></pre>
<p>I simply want to remove the parentheses of the tuples and join the numbers to create a flat list.</p>
<p>The output should be like this:</p>
<pre><code>L = [1, 2, 3, 3, 2, 1, 2, 1, 6, 7, 3, 2, 1, 0, 2]
</code></pre>
<p>I tried the code below; it removed the parentheses but added quotation marks, joining each tuple into a single string.</p>
<pre><code>L2 = []
for x in L:
    mm = ', '.join(str(j) for j in x)
    L2.append(mm)
print(L2)
</code></pre>
<p>Like this:</p>
<pre><code>L = ['1, 2, 3', '3, 2, 1', '2, 1, 6', '7, 3, 2', '1, 0, 2']
</code></pre>
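<p>For reference, a plain loop with <code>extend</code> gives the flat list of ints I'm after; I mainly want to understand why the join approach produces strings instead (I suspect it's because <code>str.join</code> always returns a single string):</p>

```python
L = [(1, 2, 3), (3, 2, 1), (2, 1, 6), (7, 3, 2), (1, 0, 2)]

L2 = []
for t in L:
    L2.extend(t)  # splice the tuple's elements into the list
print(L2)  # [1, 2, 3, 3, 2, 1, 2, 1, 6, 7, 3, 2, 1, 0, 2]
```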
| <python><list><tuples> | 2023-07-18 07:44:51 | 1 | 711 | Mohsen Ali |
76,710,355 | 5,457,202 | show collections on Mongo shell returns nothing but it works on Pymongo | <p>I have a Mongo DB and I'm trying to access it through the mongo shell.</p>
<pre><code>mongosh --host 192.168.1.25 --port 27017 -u user -p pass
</code></pre>
<p>I can log in with no problems and I can see the databases on the server but when I try seeing the collections I receive empty arrays.</p>
<pre><code>test > show databases
<Databases listed correctly>
test > use rightdatabase
rightdatabase > show collections
rightdatabase > db.getCollectionNames()
[]
</code></pre>
<p>The output of db.stats also suggests the databases are empty (every parameter of the response is 0). However, if I log in through PyMongo like this, I get a proper result:</p>
<pre><code>from pymongo import MongoClient
try:
conn = MongoClient('192.168.1.25', 27017, username='user', password='pass')
print('Conected successfully!')
except:
print('Could not conect to MongoDB')
db = conn['rightdatabase']
db.command("dbstats")
</code></pre>
<p>That cell returns the following data:</p>
<pre><code>#Output
{'db': '######',
'collections': 16,
'views': 0,
'objects': 52283952,
'avgObjSize': 1349.6649228237375,
'dataSize': 70565816041.0,
'storageSize': 23785156608.0,
...}
</code></pre>
<p>I've read comments saying that it could be a matter of versions. I'm using Mongo shell alone in Windows (not the one included in a Mongo Server installation), version 1.10.1, but I've done this before on the same computer with no issues.</p>
| <python><mongodb><shell><pymongo> | 2023-07-18 07:21:57 | 0 | 436 | J. Maria |
76,710,341 | 2,447,844 | Async [TCP] writer close to avoid resource leaks | <p>I'm implementing a TCP client with asyncio <a href="https://docs.python.org/3/library/asyncio-stream.html#asyncio-streams" rel="nofollow noreferrer">Streams</a>. Typical example code is:</p>
<pre><code>reader, writer = await asyncio.open_connection(
'127.0.0.1', 8888)
...
writer.close()
await writer.wait_closed()
</code></pre>
<p>The <code>...</code> is a non-trivial piece of async/await code in my case. To prevent resource leaks (e.g. fds), I believe I need to call that <code>close()</code> function, so I really should put it inside a try/finally. Is this correct, or does Python somehow magically handle resource cleanup, like it <em>finalizes asynchronous generators</em>, when the async loop ends?</p>
<p>If not and manual cleanup is required, is there a more canonical/Pythonic way to implement this than defining a new function with <a href="https://docs.python.org/3/library/contextlib.html#contextlib.asynccontextmanager" rel="nofollow noreferrer">@contextlib.asynccontextmanager</a>?</p>
<p>I tried <code>contextlib.closing()</code> and <code>contextlib.aclosing()</code>, but that doesn't work, since <code>asyncio.open_connection()</code> returns a tuple and the writer doesn't have <code>aclose()</code>, just <code>close()</code>.</p>
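<p>For reference, the <code>@contextlib.asynccontextmanager</code> wrapper I have in mind would look roughly like this (self-contained sketch that also spins up a throwaway server to connect to):</p>

```python
import asyncio
import contextlib

@contextlib.asynccontextmanager
async def tcp_connection(host, port):
    """Open a (reader, writer) pair and always close the writer on exit."""
    reader, writer = await asyncio.open_connection(host, port)
    try:
        yield reader, writer
    finally:
        writer.close()
        await writer.wait_closed()

async def main():
    # Throwaway server on an ephemeral port so the sketch is runnable.
    server = await asyncio.start_server(lambda r, w: w.close(), "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    async with server:
        async with tcp_connection("127.0.0.1", port) as (reader, writer):
            writer.write(b"ping")
            await writer.drain()
    return "closed cleanly"

print(asyncio.run(main()))  # closed cleanly
```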
| <python><python-asyncio> | 2023-07-18 07:18:54 | 2 | 398 | blazee |
76,710,217 | 15,958,062 | Duplicate a chart object using xlsxwriter | <p>I have one complicated scatter chart and I want to insert it in two different sheets of the same workbook. But when I tried to insert the same chart object into the same or a different sheet, it did not work. Why is that so?<br/>
Is it possible to, maybe, duplicate the original chart object so that I can insert it in multiple sheets?<br/></p>
<p>As a workaround, I understand that I need to create a new chart every time and insert it. But I want to understand why inserting the same chart into different sheets doesn't work.</p>
<p>Below is code to reproduce the issue:</p>
<pre><code>import xlsxwriter
a = [[222.724, 23.2381],
[219.798, 27.2969], ]
wb = xlsxwriter.Workbook('trial.xlsx')
sheetName = 'sheet1'
ws = wb.add_worksheet(sheetName)
ws.write_column(1, 0, ['A1', 'A2'])
ws.write_row(1, 1, a[0])
ws.write_row(2, 1, a[1])
chart = wb.add_chart({'type': 'scatter'})
chart.add_series({'name': 'Goodman', 'categories': [sheetName, 1, 1, 2, 1],
'values': [sheetName, 1, 2, 2, 2], })
ws.insert_chart(3, 4, chart)
ws.insert_chart(3, 14, chart)
ws2 = wb.add_worksheet('sheet2')
ws2.insert_chart(3, 4, chart)
wb.close()
</code></pre>
| <python><python-3.x><charts><scatter-plot><xlsxwriter> | 2023-07-18 06:59:54 | 2 | 924 | Satish Thorat |
76,709,551 | 4,858,908 | Python function Converting an integer to bins as per given interval | <p>I am trying to convert an integer (<code>num</code>) into bins as per given intervals.
The bin intervals are <code>[1, 200), [200, 400), [400, 800), [800, 1200), [1200, num]</code>.</p>
<p>I'm doing it in a crude way...</p>
<pre><code>def create_bins(num):
"""Create Bins as per given intervals."""
s1, s2, s3, s4 = 199, 200, 400, 400
if num > 1200:
res = [s1, s2, s3, s4, num - (s1 + s2 + s3 + s4)]
elif num < 1200 and num >= 800:
res = [s1, s2, s3, num - (s1 + s2 + s3)]
elif num < 800 and num >= 400:
res = [s1, s2, num - (s1 + s2)]
elif num < 400 and num >= 200:
res = [s1, num - s1]
else:
res = [num]
return res
</code></pre>
<p>This function returns <code>[199, 200, 400, 242]</code> for <code>create_bins(1041)</code>, which is correct. However, I'm sure there are better ways to get it done...</p>
<p>I will appreciate it if you can lead me towards better solutions for this kind of problem.</p>
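<p>For comparison, here is a table-driven rewrite I came up with, driven by the boundary list <code>(1, 200, 400, 800, 1200)</code>. I believe it matches the outputs above, though I haven't decided how exactly <code>num == 1200</code> should behave, which the strict <code>&gt;</code> in the crude version treats differently:</p>

```python
def create_bins2(num, bounds=(1, 200, 400, 800, 1200)):
    """Create bins as per the given interval boundaries."""
    res = []
    for lo, hi in zip(bounds, bounds[1:]):
        if num < hi:
            break
        res.append(hi - lo)      # full width of this bin
    res.append(num - sum(res))   # remainder goes into the last bin
    return res

print(create_bins2(1041))  # [199, 200, 400, 242]
```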
| <python><binning> | 2023-07-18 04:31:06 | 2 | 1,652 | Arun Kumar Khattri |
76,709,373 | 4,212,875 | Question about Keras implementation of ADAM | <p>I have a question regarding <a href="https://github.com/keras-team/keras/blob/b3ffea6602dbbb481e82312baa24fe657de83e11/keras/optimizers/adam.py#L174" rel="nofollow noreferrer">this line</a> in the Keras implementation of Adam:</p>
<blockquote>
<p>alpha = lr * tf.sqrt(1 - beta_2_power) / (1 - beta_1_power)</p>
</blockquote>
<p>From the algorithm <a href="https://stats.stackexchange.com/a/234686/243601">here</a>, is this step doing the bias correction? If so, it seems like they are also implicitly scaling <code>epsilon</code> by a <code>(1 - beta_2_power)^0.5</code> factor, which is not in the original algorithm? I understand practically this doesn't matter since epsilon is just there to avoid a divide by zero, but just wanted to make sure I understood this correctly.</p>
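<p>To check my reading numerically: if I've done the algebra right, the folded <code>alpha</code> form equals the textbook bias-corrected update with <code>epsilon</code> replaced by <code>epsilon / sqrt(1 - beta_2_power)</code> (the moment values below are arbitrary):</p>

```python
import math

lr, b1, b2, eps, t = 0.001, 0.9, 0.999, 1e-7, 10
m, v = 0.3, 0.04  # first/second moment estimates (arbitrary)

b1t, b2t = b1 ** t, b2 ** t
m_hat, v_hat = m / (1 - b1t), v / (1 - b2t)

# Textbook Adam step: lr * m_hat / (sqrt(v_hat) + eps)
textbook = lr * m_hat / (math.sqrt(v_hat) + eps)

# Keras-style step: alpha * m / (sqrt(v) + eps)
alpha = lr * math.sqrt(1 - b2t) / (1 - b1t)
keras_style = alpha * m / (math.sqrt(v) + eps)

# The Keras form equals the textbook form with a rescaled epsilon:
rescaled = lr * m_hat / (math.sqrt(v_hat) + eps / math.sqrt(1 - b2t))

print(math.isclose(keras_style, rescaled, rel_tol=1e-12))  # True
print(math.isclose(keras_style, textbook, rel_tol=1e-12))  # False
```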
| <python><tensorflow><keras><mathematical-optimization> | 2023-07-18 03:32:28 | 0 | 411 | Yandle |
76,709,356 | 10,620,003 | remove a column as index in dataframe | <p>I have a df and I already used <code>.pivot</code> to create the df. And now my df is the same as this:</p>
<p><em>(screenshot: the df that I have)</em></p>
<p>I want to remove 'time_hour' as the index and create another df. I am using <code>reset_index</code> and this is the result. However, I don't want a column named time_hour. Could you please help me with that? Thanks</p>
<p><em>(screenshot: the df I want, without the column time_hour)</em></p>
| <python><pandas><dataframe> | 2023-07-18 03:27:49 | 1 | 730 | Sadcow |
76,709,338 | 1,783,046 | how to set transparent pixel to white in java using opencv? | <p>What's the equivalent code in Java using OpenCV?</p>
<pre><code>#make mask of where the transparent bits are
trans_mask = image[:,:,3] == 0
#replace areas of transparency with white and not transparent
image[trans_mask] = [255, 255, 255, 255]
</code></pre>
<p>updated:</p>
<p>I don't get why people downvote this. The "easy" solution is obvious: assign each pixel in a loop, but that would not be as efficient as this way. I searched for it and got no clues. These people don't understand this question in the first place and just try to discuss it. If you guys don't know it, just leave it.</p>
<p>If you try to process images in Java efficiently, this question is useful to you.</p>
| <python><java><opencv> | 2023-07-18 03:19:30 | 1 | 757 | zephor |
76,708,934 | 4,451,521 | Cannot import name 'GraphQL' from 'strawberry.fastapi' | <p>I am trying to run a Strawberry example from <a href="https://www.tutorialspoint.com/fastapi/fastapi_using_graphql.htm" rel="nofollow noreferrer">this tutorial</a> but it does not work.</p>
<p>I have the code:</p>
<pre><code>from fastapi import FastAPI
import strawberry
from strawberry.fastapi import GraphQL
@strawberry.type
class Book:
title: str
author: str
price: int
@strawberry.type
class Query:
@strawberry.field
def book(self) -> Book:
return Book(title="Computer Fundamentals", author="Sinha", price=300)
schema = strawberry.Schema(query=Query)
graphql_app = GraphQL(schema)
app = FastAPI()
app.add_route("/book", graphql_app)
app.add_websocket_route("/book", graphql_app)
</code></pre>
<p>but when I try to run it with <code>uvicorn Mystrawberry:app --reload</code>, it says:</p>
<pre><code>ImportError: cannot import name 'GraphQL' from 'strawberry.fastapi'
</code></pre>
<p>Now, in the Strawberry documentation examples they use GraphQLRouter rather than GraphQL, and I don't know if that is an old version or a typo.</p>
| <python><fastapi><strawberry-graphql> | 2023-07-18 01:02:48 | 1 | 10,576 | KansaiRobot |
76,708,882 | 9,588,300 | Spark number of tasks not equal to number of partitions | <p>I have read that the number of partitions is related to the number of tasks. When I read a query plan for any job that is not a file-reading job (for instance, the merge job of a join), I do see that it gets as many tasks as the number of partitions of each table that comes into the job. It also follows the <code>spark.conf.set('spark.sql.shuffle.partitions',X)</code> setting.</p>
<p>But in file-reading jobs, like a Parquet scan, it does not match. For example, I need a full read on a table that is made of 258 Parquet files, but the reading job decided to use 8 tasks, which is not aligned with <code>spark.conf.set('spark.sql.shuffle.partitions',X)</code> (assuming X is not set to 8).</p>
<p>So it seems that in file-reading jobs Spark selects the number of tasks independently of the number of partitions it needs to read. For instance, I have a table where, when I run</p>
<pre><code>df=spark.sql('select * from transaction')
df.rdd.getNumPartitions()
</code></pre>
<p>it says 57, and the table is made of 258 parquet files (so I guess one parquet file is not equal to one partition).</p>
<p>But then, when scanning this table for a join that needs all rows (because it groups by afterwards), it just uses 8 tasks.</p>
<p>So why, if the number of files is 258 and the number of partitions is 57, does Spark decide to go with 8 tasks regardless of what <code>spark.sql.shuffle.partitions</code> says?</p>
| <python><apache-spark><pyspark><databricks><spark-ui> | 2023-07-18 00:45:05 | 1 | 462 | Eugenio.Gastelum96 |
76,708,763 | 2,611,836 | How to mock a method of a class instance? | <p>I have code like this:</p>
<p>a.py</p>
<pre><code>from app.cache import Cache
my_cache = Cache(cache_prefix='xxxxx')
</code></pre>
<p>b.py</p>
<pre><code>from a import my_cache
class MyApp:
def run_app():
my_cache.get(1,2)
</code></pre>
<p>test_b.py</p>
<pre><code>from mock import Mock, patch
import b
mock_my_cache_get= Mock(b.my_cache.get)
class MyTests():
@patch('b.my_cache.get', mock_my_cache_get)
def test_cache(self):
b.MyApp().run_app()
mock_my_cache_get.assert_called_with(1,2)
</code></pre>
<p>As you can see above, I am trying to write a unit test where I mock the <code>get</code> method of the <code>Cache</code> class instance. However, when I try to assert that this mocked method was called with the specified arguments, I get an error saying that the call was not found, even though the code clearly makes it.</p>
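<p>For reference, here is a self-contained sketch of the instance-patching pattern I am aiming for, with <code>Cache</code> re-implemented inline as a stand-in (an assumption, so the example runs without my app code) and <code>patch.object</code> used on the shared instance itself:</p>

```python
from unittest.mock import patch

# Stand-in for app.cache.Cache (assumption: the real class has a get method)
class Cache:
    def __init__(self, cache_prefix):
        self.cache_prefix = cache_prefix

    def get(self, a, b):
        return "real value"

# Module-level singleton, as in a.py
my_cache = Cache(cache_prefix="xxxxx")

class MyApp:
    def run_app(self):
        return my_cache.get(1, 2)

# patch.object swaps the attribute on the one shared instance, so every
# module holding a reference to that instance sees the same mock
with patch.object(my_cache, "get") as mock_get:
    MyApp().run_app()
    mock_get.assert_called_once_with(1, 2)
```

<p>The decorator form would be <code>@patch.object(b.my_cache, "get")</code>, which injects the mock as a test-method argument instead of relying on a module-level <code>Mock</code> created at import time.</p>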
| <python><mocking><python-unittest><python-mock> | 2023-07-17 23:57:09 | 1 | 416 | user2611836 |
76,708,551 | 1,509,695 | Numerically stable approach to computing a signed angle between vectors | <p>My first Stack Exchange question regarding numeric stability in geometry. I wrote this function (below) for computing a signed angle from one vector to the other, inspired by <a href="https://math.stackexchange.com/a/1139228/66486">the idea presented here</a>, to which I am indebted.</p>
<p>In the function, <a href="https://en.wikipedia.org/wiki/Triple_product" rel="nofollow noreferrer">the triple product</a> is used for getting from the raw acute angle that you get on the first part, to the directed angle opening up from the first vector to the second.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy import ndarray, sign
from math import isclose, pi  # isclose is used by the test helpers below
Vec3D = ndarray
def directed_angle(
from_vec: Vec3D,
to_vec: Vec3D,
normal: Vec3D,
debug=False):
""" returns the [0, 2𝜋) directed angle opening up from the first to the second vector,
given a 3rd vector linearly independent of the first two vectors, which is used to provide
the directionality of the acute (regular, i.e. undirected) angle firstly computed """
assert np.any(from_vec) and np.any(to_vec), \
f'from_vec, to_vec, and normal must not be zero vectors, but from_vec: {from_vec}, to_vec: {to_vec}'
magnitudes = np.linalg.norm(from_vec) * np.linalg.norm(to_vec)
dot_product = np.matmul(from_vec, to_vec)
angle = np.arccos(np.clip(dot_product / magnitudes, -1, 1))
# the triple product reflects the sign of the angle opening up from the first vector to the second,
# based on the input normal value providing the directionality, which is otherwise totally absent
triple_product = np.linalg.det(np.array([from_vec, to_vec, normal]))
if triple_product < 0:
result_angle = angle
else:
result_angle = 2*pi - angle
# flip the output range from (0, 2𝜋] to [0, 2𝜋), which is the preference for our semantics
if result_angle == 2*pi:
result_angle = 0
if debug:
print(f'\nfrom_vec: {from_vec}, '
f'\nto_vec: {to_vec}'
f'\nnormal: {normal}, '
f'\nundirected angle: {angle},'
f'\nundirected angle/pi: {angle/pi}, '
f'\ntriple product: {triple_product}'
f'\ntriple product sign: {sign(triple_product)}'
f'\nresult_angle/pi: {result_angle/pi}')
return result_angle
def test():
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
vecs = dict(
same_direction = np.array([1, 0, 0]),
opposite_direction = np.array([-1, 0, 0]),
close_to_same_direction_from_side_1 = np.array([1, 0.00000001, 0]),
close_to_same_direction_from_side_2 = np.array([1, -0.00000001, 0]))
for desc, v in vecs.items():
print(f'\n{desc}:')
directed_angle(v0, v, normal, debug=True)
</code></pre>
<p>When, as in the example test code above, the second vector's angle is extremely close to the first vector's angle from either side of it, instead of getting resulting angles respectively very close to 0 and very close to 2𝜋, the function returns exactly zero in both cases, so these two semantically different vector directions yield the same output angle.</p>
<p>How would you approach making this function numerically stable in that sense?</p>
<p>Assume the directed angle grows from 0 up to numbers approaching 2𝜋, how would you avoid the output angle of the function jumping to 0 as it infinitesimally approaches the full circle directed angle? as long as its output is infinitesimally imprecise we are fine, but if it goes from close to 2𝜋 to 0 as it infinitesimally approaches 2𝜋 ― this kind of discontinuous value jump will be devastating for the interpretation of its output in my application as well as I guess in many other cases.</p>
<p>Obviously we can empirically find upper-bound boundaries for where the function output for angles approaching 2𝜋 collapses to 0 ― a function for that is actually attached below ― but this doesn't help at all when an input is the vector which yields that angle (we cannot tell from the vector that it is going to yield a flip to 0, so when we have 0 as the resulting angle of the function ― we cannot tell if the real angle was 0 or rather a value approaching 2𝜋).</p>
<p>I guess you can replace the term <em>directed angle</em> with any of the terms <em>clockwise (or counter-clockwise) angle</em>.</p>
<p>diagnostic functions follow.</p>
<pre class="lang-py prettyprint-override"><code>def test_monotonicity(debug=False, interpolation_interval=1e-4):
""" test that the function is monotonic (up to the used interpolation interval),
and correct, across the interval of its domain ([0, 2𝜋)) """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
    # range from 0 to 2𝜋 at interpolation_interval steps
angles = np.arange(0, 2*pi, step=interpolation_interval)
prev_angle = None
for angle in angles:
# make a vector representing the current angle away from v0, such that it goes clockwise
# away from v0 as `angle` increases (clockwise assuming you look at the two vectors from
# the perspective of the normal vector, otherwise directionality is void)
vec = np.array([np.cos(angle), -np.sin(angle), 0])
result_angle = directed_angle(v0, vec, normal)
if debug: print(f'angle/pi: {angle/pi}, computed angle/pi: {result_angle/pi}')
if prev_angle is None:
assert angle == 0
else:
assert angle > prev_angle
drift = result_angle - angle
if angle == result_angle:
pass
else:
print(f'angle/pi: {angle/pi}, result_angle/pi: {result_angle/pi}, drift/pi: {drift/pi}')
assert isclose(result_angle, angle, rel_tol=1e-6)
prev_angle = angle
def test_demonstrating_the_concern():
""" demonstration of sufficiently small angles from both sides of the reference vector (v0)
collapsing to zero, which is a problem only when the angle should be close to 2 pi """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
vecs = dict(
same_direction = np.array([1, 0, 0]),
opposite_direction = np.array([-1, 0, 0]),
close_to_same_direction_from_side_1 = np.array([1, +0.00000001, 0]),
close_to_same_direction_from_side_2 = np.array([1, -0.00000001, 0]))
expected_results = [0, 1*pi, None, None]
for (desc, v), expected_result in zip(vecs.items(), expected_results):
print(f'\n{desc}:')
v = v / np.linalg.norm(v)
result = directed_angle(v0, v, normal, debug=True)
if expected_result is not None:
assert(result == expected_result)
def test_angle_approaching_2pi(epsilon=1e-7, interpolation_interval=1e-10, verbose=True):
""" stress test the angle values approaching 2pi where the result flips to zero """
v0 = np.array([1, 0, 0])
normal = np.array([0, 0, 1])
# range from 2𝜋-epsilon to close to 2𝜋
angles = np.arange(2*pi-epsilon, 2*pi, step=interpolation_interval)
prev_angle = None
for angle in angles:
# vector representing the current angle away from v0, clockwise
vec = np.array([np.cos(angle), -np.sin(angle), 0])
result_angle = directed_angle(v0, vec, normal)
if prev_angle is None:
pass
else:
assert angle > prev_angle
drift = result_angle - angle
if angle == result_angle:
pass
else:
if verbose: print(f'angle/pi: {angle / pi}, result_angle/pi: {result_angle / pi}, drift/pi: {drift / pi}')
if result_angle == 0:
print(f'function angle output hit 0 at angle: {angle};'
f'\nangle/pi: {angle / pi}'
f'\nangle distance from 2𝜋: {2*pi - angle}')
break
else:
assert isclose(result_angle, angle, rel_tol=1e-6)
prev_angle = angle
</code></pre>
<p>And all that said, I'm happy to also have insights about other possible numerical stability issues which may arise in the calculation steps this function is making. I have seen comments about preferring arctan to arccos and variations, but wouldn't know whether they fully apply to numpy without introducing other stability drawbacks.</p>
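<p>For concreteness, the arctan-based variant I have seen suggested would look roughly like the sketch below (my own stdlib reduction, not validated against all my edge cases): compute the signed angle as <code>atan2(n̂·(a×b), a·b)</code> and fold it into [0, 2𝜋). Because atan2 takes the sine-like and cosine-like terms separately, there is no arccos of a value clipped near ±1, and in my quick tests angles just below 2𝜋 stay just below 2𝜋 instead of collapsing to 0:</p>

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def directed_angle_atan2(from_vec, to_vec, normal):
    """[0, 2*pi) clockwise angle as seen from `normal`, matching the
    convention of the question's test vectors vec = [cos(t), -sin(t), 0]."""
    n_mag = math.sqrt(_dot(normal, normal))
    sine = _dot(_cross(from_vec, to_vec), normal) / n_mag
    cosine = _dot(from_vec, to_vec)
    # sign flipped on the sine term to match the clockwise convention above
    return math.atan2(-sine, cosine) % (2 * math.pi)
```

<p>Both arguments of <code>atan2</code> scale with |a||b|, so the two input vectors need no normalisation (only <code>normal</code> must be normalised, since it scales the sine term alone), and <code>x % (2*pi)</code> maps the (−𝜋, 𝜋] output into [0, 2𝜋) without a jump at the full circle.</p>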
<p>I'm also thankful for comments on any general methodology for thinking about numerical stability, or guidelines specific to numpy for safely writing numerically stable computations in the realm of 3D geometry over arbitrary inputs; although I have come to realize that different applications will worry about different ranges where inaccuracies become detrimental to their specific logic.</p>
<p>As my inputs to this function come in from machine learning predictions, I can't avoid extreme vector values even though their probability is low at any single invocation of the function.</p>
<p>Thanks!</p>
| <python><computational-geometry><numerical-stability> | 2023-07-17 22:55:27 | 1 | 13,863 | matanox |
76,708,540 | 7,938,796 | Why is my Mixed-Integer Linear Programming problem running slow on python compared to R? | <p>I have a very large mixed-integer linear programming problem that I need to run thousands of times, so speed is a priority.</p>
<p>I need it to run in both <code>R</code> and <code>python</code>. My <code>R</code> code runs very fast: the solver takes about 0.002 seconds to solve the problem. My <code>python</code> code is a different story: it takes anywhere from 0.05 to 0.80 seconds, and oftentimes much more.</p>
<p>I highly doubt that <code>R</code> is 25x more efficient than <code>python</code>, especially considering that (to my knowledge) both are using <code>C</code> code to solve the problem. I'm looking for help to determine how to make my <code>python</code> code faster. I assume it starts with using a different package from the one I have now? I really need the <code>python</code> code to consistently be below 0.01 seconds.</p>
<p>Here is sample <code>R</code> code and <code>python</code> code that solve the same problem. In this example of my real problem, I'm building "fantasy basketball" lineups with certain constraints. The player pool is 500; I must choose exactly 8 players, with at least 1 and no more than 3 from each position, and the total salary must be under 50000. I randomly sample positions/salaries since the actual solution isn't as important as the speed.</p>
<p><strong>R Code</strong></p>
<pre><code>library(lpSolve)
library(dplyr)
# Make dummy dataframe with player positions, salaries, and projected points
Positions <- c('PG', 'SG', 'SF', 'PF', 'C', 'PG/SG', 'SG/SF', 'PF/C')
Position <- sample(Positions, 500, replace = TRUE)
Salary <- sample(seq(3000, 12000, by=100), 500, replace=TRUE)
playerTable <- data.frame(ID = 1:500, Position, Salary) %>%
mutate(ProjPts = round(Salary/1000*5))
# Make positional constraint vectors
ConVec_Position <- playerTable %>%
select(Position) %>%
mutate(AnyPG = grepl('PG', Position)*1,
AnySG = grepl('SG', Position)*1,
AnySF = grepl('SF', Position)*1,
AnyPF = grepl('PF', Position)*1,
AnyC = grepl('C', Position)*1,
orPGSGSFPFC = (AnyPG + AnySG + AnySF + AnyPF + AnyC > 0)*1
) %>%
select(-Position)
# Make salary constraint vectors
ConVec_Salary <- playerTable$Salary
# Get Objective (projected points)
ProjectedPoints <- playerTable$ProjPts
# Make constraint values and directions
ConValGT <- c(1,1,1,1,1, 8) # Must have at least one of each position, and 8 total
ConValLT <- c(3,3,3,3,2, 8, 50000) # Can't have more than 3 of each position, 8 total, and 50000 salary
ConDirGT <- c(rep(">=", ncol(ConVec_Position))) # Make >= directions for each position constraint
ConDirLT <- c(rep("<=", ncol(ConVec_Position) + 1)) # Make <= directions for each position constraint, plus 1 for Salary
# Compile constraint vectors, values, and directions
ConVec_Final <- cbind(ConVec_Position, ConVec_Position, ConVec_Salary)
ConDir_Final <- c(ConDirGT, ConDirLT)
ConVal_Final <- c(ConValGT, ConValLT)
start_time <- proc.time()
sol <- lp("max",
objective.in = ProjectedPoints,
const.mat = t(ConVec_Final),
const.dir = ConDir_Final,
const.rhs = ConVal_Final,
binary.vec = 1:length(ProjectedPoints) # All decisions are binary
)
cat("\n--- It took ", round((proc.time() - start_time)[3], 4), " seconds to optimize lineups ---\n", sep = "")
</code></pre>
<p><strong>python Code</strong></p>
<pre><code>import pandas as pd
import numpy as np
from lp_solve import *
import time
# Make dummy dataframe with player positions, salaries, and projected points
ID = range(500)
Positions = ['PG', 'SG', 'SF', 'PF', 'C', 'PG/SG', 'SG/SF', 'PF/C']
Position = np.random.choice(Positions,500)
Salary = np.random.choice(range(3000,12000,100),500)
playerTable = pd.DataFrame({'ID': ID, 'Position': Position, 'Salary': Salary})
playerTable['ProjPts'] = playerTable['Salary']/1000*5
# Make positional constraint vectors
AnyPG = playerTable.Position.str.contains('PG')*1
AnySG = playerTable.Position.str.contains('SG')*1
AnySF = playerTable.Position.str.contains('SF')*1
AnyPF = playerTable.Position.str.contains('PF')*1
AnyC = playerTable.Position.str.contains('C')*1
AnyPos = [1]*500
ConVec_Position = pd.DataFrame({'AnyPG': AnyPG, 'AnySG': AnySG, 'AnySF': AnySF, 'AnyPF': AnyPF, 'AnyC': AnyC, 'AnyPos': AnyPos})
# Make salary constraint vectors
ConVec_Salary = playerTable[['Salary']]
# Get Objective (projected points)
ProjectedPoints = playerTable['ProjPts'].tolist()
# Make constraint values and directions
ConValGT = [1,1,1,1,1, 8] # Must have at least one of each position, and 8 total
ConValLT = [3,3,3,3,2, 8, 50000] # Can't have more than 3 of each position, 8 total, and 50000 salary
ConDirGT = [1] * (len(ConVec_Position.columns)) # Make >= directions for each position constraint
ConDirLT = [-1] * (len(ConVec_Position.columns) + 1) # Make <= directions for each position constraint, plus 1 for Salary
# Compile constraint vectors, values, and directions
ConVec_Final = pd.concat([ConVec_Position, ConVec_Position, ConVec_Salary], axis=1)
ConVal_Final = ConValGT + ConValLT
ConDir_Final = ConDirGT + ConDirLT
# Force all decisions to be binary
vLB = [0] * len(ProjectedPoints) # lower bound 0
vUB = [1] * len(ProjectedPoints) # upper bound 1
xint = [i+1 for i in range(len(ProjectedPoints))] # all decisions are integers (aka all are binary)
solveTimefix = time.time()
[obj, sol, duals] = lp_solve(ProjectedPoints,
ConVec_Final.T.values.tolist(),
ConVal_Final,
ConDir_Final,
vLB,
vUB,
xint)
solveTimeCurrRun = (time.time() - solveTimefix)
print("\n--- It took %s seconds to get all LUs ---\n" % round(solveTimeCurrRun, 2))
</code></pre>
<p>Any ideas on how to make my <code>python</code> code run the same speed as <code>R</code>?</p>
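<p>In case it helps frame answers: I am open to a different Python binding (an assumption on my part that this is acceptable; it is not the lp_solve binding above). The same kind of binary program can be posed through <code>scipy.optimize.milp</code>, which wraps the HiGHS solver; a tiny sketch with a made-up 3-variable knapsack:</p>

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# maximise 5*x0 + 4*x1 + 3*x2  subject to  x0 + x1 + x2 <= 2,  x binary
c = np.array([5.0, 4.0, 3.0])
pick_at_most_two = LinearConstraint(np.ones((1, 3)), ub=2)

res = milp(
    c=-c,                       # milp minimises, so negate for a maximum
    constraints=pick_at_most_two,
    integrality=np.ones(3),     # 1 => integer-constrained variable
    bounds=Bounds(0, 1),        # binary = integer in [0, 1]
)
print(res.success, -res.fun)    # optimal objective is 9.0 (x0 and x1 chosen)
```

<p>The 500-variable lineup problem maps onto the same shape: one row per positional constraint plus the salary row stacked into the <code>LinearConstraint</code> matrix.</p>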
| <python><r><linear-programming><solver><mixed-integer-programming> | 2023-07-17 22:49:52 | 1 | 767 | CoolGuyHasChillDay |
76,708,297 | 573,082 | How were Python SWIG wrappers generated without any source files in my case? | <p>I was researching using C++ in Python with SWIG. I came across <a href="https://github.com/tesseract-robotics/tesseract_python/tree/master" rel="nofollow noreferrer">this</a> repo. I installed it as recommended with <code>python -m pip install tesseract-robotics</code>. I expected it to download C++ source code, generate <code>.cpp</code> files from <code>.i</code> files and only then build <code>.dll</code>s. But it seems it only installed <code>_packageX.pyd</code>, <code>packageX.py</code> and the corresponding <code>.dll</code> file for each package.</p>
<p>How is this even possible? Doesn't SWIG need the <code>.i</code> and C++ source files to generate the code to build for a specific platform? Or was all this built somewhere in the cloud (maybe even on <a href="https://pypi.org/" rel="nofollow noreferrer">https://pypi.org/</a>) and <code>pip install</code> did not actually build anything, but just downloaded pre-built platform-specific <code>.dll</code> and <code>.py</code> files?</p>
<p><a href="https://i.sstatic.net/wvxyh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wvxyh.png" alt="enter image description here" /></a></p>
| <python><c++><windows><swig> | 2023-07-17 21:51:10 | 1 | 14,501 | theateist |
76,708,199 | 4,537,160 | Python - Troubles sharing objects between processes using Manager.list | <p>I am trying to share some images between two Python processes using a Manager().list() object.<br />
I have a main file, where I create and launch the processes:</p>
<p>main.py</p>
<pre><code>import sys
import multiprocessing as mp
from multiprocessing import Manager, set_start_method

# assumed module layout for the two worker entry points
from cam_reader import cam_reader_init
from frame_analyzer import frame_analyzer_init

if __name__ == "__main__":
set_start_method("spawn")
# creating the list
imgs_list = Manager().list()
killing_switch = mp.Event()
# creating the processes
cam_reader = mp.Process(target=cam_reader_init, args=(killing_switch, imgs_list), daemon=True)
cam_reader.start()
frame_analyzer = mp.Process(target=frame_analyzer_init, args=(killing_switch, imgs_list), daemon=True)
frame_analyzer.start()
killing_switch.wait()
sys.exit(0)
</code></pre>
<p>In cam_reader, I'm reading images from a video stream, and appending them to imgs_list:</p>
<pre><code># cam_reader
import cv2

def cam_reader_init(killing_switch, imgs_list):
cam_read = CamReader(killing_switch, imgs_list)
cam_read.run()
class CamReader:
def __init__(self, killing_switch, imgs_list):
self.source = 0
self.killing_switch = killing_switch
self.imgs_list = imgs_list
def run(self):
self.cap = cv2.VideoCapture(self.source, cv2.CAP_FFMPEG)
while True:
# read frames and send them to imgs_list
frame = None
ret, frame = self.cap.read()
if frame is not None:
self.imgs_list.append(frame)
</code></pre>
<p>The problem is that I'm trying to access the frames that were sent to imgs_list in the other process:</p>
<pre><code># frame_analyzer
import cv2
def frame_analyzer_init(killing_switch, imgs_list):
frame_analyzer = FrameAnalyzer(killing_switch, imgs_list)
frame_analyzer.run()
class FrameAnalyzer:
def __init__(self, killing_switch, imgs_list):
self.killing_switch = killing_switch
self.imgs_list = imgs_list
def run(self):
while True:
if len(self.imgs_list) > 0:
frame = self.imgs_list[-1]
key = cv2.waitKey(1)
if key == ord('q'):
print("done here")
self.killing_switch.set()
break
cv2.imshow('frames', frame)
</code></pre>
<p>But, while in FrameAnalyzer, it appears that self.imgs_list is always empty (the len>0 condition is never met).</p>
<p>What am I missing?</p>
<p>EDIT: Using the debugger, I noticed the list in cam_reader is getting updated (images are appended to it), while the one in frame_analyzer always stays empty. Also, I saw that the two instances of the list in the two processes have different values of the <code>_id</code> field. Could this be the source of the issue? It does seem like the processes are somehow using two different lists.</p>
| <python><python-multiprocessing><multiprocessing-manager> | 2023-07-17 21:30:58 | 1 | 1,630 | Carlo |
76,707,558 | 272,023 | Does Apache Beam on Flink support start/stop with snapshots? | <p>Stream processing applications that run on Flink clusters using raw Flink support taking snapshots and then restarting the job from those snapshots, by means of the Flink REST API, e.g. <a href="https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/ops/rest_api/#jobs-jobid-stop" rel="nofollow noreferrer">stop with savepoint</a>.</p>
<p>Does Apache Beam using a Flink runner support start/stop with snapshots? If so, what does starting a Beam job look like? How does this change if the job is written in Python?</p>
| <python><apache-flink><apache-beam> | 2023-07-17 19:28:21 | 1 | 12,131 | John |
76,707,519 | 9,588,300 | Spark UI reported time of execution plan doesn't match real time by a factor of 3x | <p>I ran a query in Databricks, and the notebook says it took 12 seconds. Meanwhile, when I navigated through the Spark UI to see the SQL execution plan, it reports 12104 ms, which matches. But the phases the execution plan is composed of report durations as long as 29.6 seconds for a single phase.</p>
<p>Here is the execution plan in tabular format, and below that in graphical format. If you look at the tabular format, at the very top left corner it says <code>Completed in 12104 ms</code>, but if you read the breakdown you can see that one of the <code>*PhotonShuffleMapStage</code> steps (which is made of operations <code>(7)</code> and <code>(8)</code>) reports it took <code>29.6s</code>.</p>
<p>So how come the entire query says it took 12 seconds when a single one of its phases says 29.6 seconds?</p>
<pre><code>Completed in 12104 ms
Node | # Tasks | Duration total (min, med, max) | # Rows | Est # Rows | Peak Mem total (min, med, max)
----------------------------------------------------------+---------+-----------------------------------------------------+-------------+------------+----------------------------------------------------------------
*PhotonShuffleMapStage | 4 | 1.6 s (0 ms, 0 ms, 448 ms (stage 56.0: task 186)) | 0 | - | 824.0 MiB (0.0 B, 0.0 B, 206.0 MiB (stage 56.0: task 187))
-*(1): PhotonScan parquet hive_metastore.default.user | - | 241 ms (0 ms, 0 ms, 77 ms (stage 56.0: task 188)) | 10,000,000 | - | 64.0 MiB (0.0 B, 0.0 B, 16.0 MiB (stage 56.0: task 187))
-*(2): PhotonShuffleExchangeSink | - | 1.6 s (0 ms, 0 ms, 448 ms (stage 56.0: task 186)) | 10,000,000 | - | 248.0 MiB (0.0 B, 0.0 B, 62.0 MiB (stage 56.0: task 187))
*PhotonShuffleMapStage | 8 | 10.0 s (0 ms, 1.4 s, 1.5 s (stage 60.0: task 201)) | 0 | - | 1626.0 MiB (0.0 B, 210.0 MiB, 212.0 MiB (stage 60.0: task 203))
-*(5): AQEShuffleRead | - | - | - | - | -
-*(6): PhotonShuffleExchangeSource | - | 136 ms (0 ms, 17 ms, 24 ms (stage 60.0: task 203)) | 10,000,000 | - | -
-*(11): AQEShuffleRead | - | - | - | - | -
-*(12): PhotonShuffleExchangeSource | - | 2.3 s (0 ms, 301 ms, 370 ms (stage 60.0: task 201)) | 320,000,000 | - | -
-*(13): PhotonShuffledHashJoin | - | 9.9 s (0 ms, 1.4 s, 1.5 s (stage 60.0: task 201)) | 329,951,357 | - | 602.0 MiB (0.0 B, 82.0 MiB, 84.0 MiB (stage 60.0: task 203))
-*(14): PhotonProject | - | 10.0 s (0 ms, 1.4 s, 1.5 s (stage 60.0: task 201)) | 329,951,357 | - | -
-*(15): PhotonAgg | - | 10.0 s (0 ms, 1.4 s, 1.5 s (stage 60.0: task 201)) | 8 | - | -
-*(16): PhotonShuffleExchangeSink | - | 10.0 s (0 ms, 1.4 s, 1.5 s (stage 60.0: task 201)) | 8 | - | 32.0 MiB (0.0 B, 4.0 MiB, 4.0 MiB (stage 60.0: task 200))
*PhotonShuffleMapStage | 8 | 29.6 s (0 ms, 0 ms, 4.8 s (stage 57.0: task 195)) | 0 | - | 7.9 GiB (0.0 B, 0.0 B, 1158.0 MiB (stage 57.0: task 194))
-*(7): PhotonScan parquet hive_metastore.default.revision | - | 2.9 s (0 ms, 0 ms, 474 ms (stage 57.0: task 193)) | 320,000,000 | - | 128.0 MiB (0.0 B, 0.0 B, 16.0 MiB (stage 57.0: task 190))
-*(8): PhotonShuffleExchangeSink | - | 29.6 s (0 ms, 0 ms, 4.8 s (stage 57.0: task 195)) | 320,000,000 | - | 6.8 GiB (0.0 B, 0.0 B, 1014.0 MiB (stage 57.0: task 194))
*PhotonResultStage | 1 | 0 ms | 0 | - | 128.0 MiB
-*(19): PhotonShuffleExchangeSource | - | 0 ms | 8 | 1 | -
-*(20): PhotonAgg | - | 0 ms | 1 | 1 | -
*WholeStageCodegen (1) | 1 | 0 ms | - | - | -
-*(22): ColumnarToRow | - | - | 1 | - | -
(40): AdaptiveSparkPlan | - | - | - | - | -
</code></pre>
<p><a href="https://i.sstatic.net/ab8Ja.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ab8Ja.png" alt="Spark UI query execution plan" /></a></p>
| <python><apache-spark><databricks><cluster-computing><sql-execution-plan> | 2023-07-17 19:21:12 | 1 | 462 | Eugenio.Gastelum96 |
76,707,499 | 235,671 | How do I modify the stack list of sys.exc_info()? | <p>I'd like to remove a couple of frames from the stack list that represent a decorator before logging it, but when I try to get to it, the debugger says that there is no such attribute as <code>stack</code> even though PyCharm shows it:</p>
<pre class="lang-py prettyprint-override"><code>sys.exc_info()[2].tb_frame.stack # <-- AttributeError, but why?
</code></pre>
<p><a href="https://i.sstatic.net/tgRpu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tgRpu.png" alt="enter image description here" /></a></p>
<p>This is what I would like to do:</p>
<pre class="lang-py prettyprint-override"><code>sys.exc_info()[2].tb_frame.stack = sys.exc_info()[2].tb_frame.stack[3:]
</code></pre>
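<p>What I can do without that attribute is trim the traceback object itself: <code>tb_next</code> is readable, and writable since Python 3.7, so frames can be dropped before formatting. A stdlib sketch (with a dummy nested call standing in for my decorator frames):</p>

```python
import sys
import traceback

def inner():
    raise ValueError("boom")

def decorator_frame():   # pretend this frame comes from the decorator
    inner()

try:
    decorator_frame()
except ValueError:
    etype, value, tb = sys.exc_info()
    # drop the two outermost frames (this try block and decorator_frame)
    trimmed = tb.tb_next.tb_next
    text = "".join(traceback.format_exception(etype, value, trimmed))

print("decorator_frame" in text, "inner" in text)  # → False True
```

<p>For logging, <code>traceback.format_exception</code> accepts the trimmed traceback directly, so no mutation is needed; alternatively <code>tb.tb_next = tb.tb_next.tb_next</code> splices frames out in place (Python 3.7+).</p>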
| <python><list><exception><stack> | 2023-07-17 19:17:58 | 1 | 19,283 | t3chb0t |
76,707,487 | 7,729,531 | 401 Unauthorized error when posting tweets using Tweepy | <p>I am using tweepy to post tweets including media, for <a href="https://twitter.com/s2coastalbot" rel="nofollow noreferrer">a bot project</a>. Since mid-June the bot has stopped working, and I realized it’s due to an update of the Twitter APIs.</p>
<p>I upgraded tweepy in my conda environment to version 4.14.0. I updated my script to use <a href="https://docs.tweepy.org/en/stable/api.html#tweepy.API.media_upload" rel="nofollow noreferrer">API v1.1</a> to upload media, and <a href="https://docs.tweepy.org/en/stable/client.html?#tweepy.Client.create_tweet" rel="nofollow noreferrer">API v2</a> to post the tweet. I regenerated the consumer key, consumer secret, access token and access token secret.</p>
<p>I am getting a <code>401 Unauthorized</code> error when trying to post the tweet with API v2. The credentials are correct, since API v1.1 succeeds in uploading the media prior to posting the tweet.</p>
<p>My app is registered under a project and has read and write permissions.</p>
<p>I searched the Twitter developer forum and found some similar errors, but no solution. I posted <a href="https://twittercommunity.com/t/401-unauthorized-error-with-api-v2-using-tweepy/199439?u=tvoirand" rel="nofollow noreferrer">a message there</a> but didn't get any answer.</p>
<p>Here is my python code:</p>
<pre class="lang-py prettyprint-override"><code>"""Script to test tweet posting of s2coastalbot.
"""
# third party imports
import tweepy
# authenticate twitter account
consumer_key = "XXX"
consumer_secret = "XXX"
access_token = "XXX"
access_token_secret = "XXX"
auth = tweepy.OAuth1UserHandler(consumer_key, consumer_secret, access_token, access_token_secret)
apiv1 = tweepy.API(auth)
apiv2 = tweepy.Client(consumer_key, consumer_secret, access_token, access_token_secret)
# post tweet
file_path = "media.png"
media = apiv1.media_upload(filename=file_path)
apiv2.create_tweet(text="test", media_ids=[media.media_id], user_auth=True) # returns: "tweepy.errors.Unauthorized: 401 Unauthorized"
</code></pre>
| <python><twitter><tweepy> | 2023-07-17 19:16:04 | 1 | 440 | tvoirand |
76,707,320 | 3,137,789 | VScode does not count tests that throw exceptions (0/0 tests passed) | <p>My dev environment is:</p>
<pre><code>vscode 1.80.1
python 3.10
using the python extension for vscode v2023.12.0
</code></pre>
<p>My test file:</p>
<pre><code>import unittest
class MyTestCase(unittest.TestCase):
def test_say_hello(self) -> None:
raise Exception("test")
</code></pre>
<p>My vscode settings:</p>
<pre><code>{
"python.testing.unittestArgs": [
"-v",
"-s",
"./runner",
"-p",
"test_*.py"
],
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
}
</code></pre>
<p>I would expect that when I run the test to show an error because of the raised exception, but it does not. Instead, the "testing" tab in vscode shows (0/0 tests passed):</p>
<p><a href="https://i.sstatic.net/A9EFr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A9EFr.png" alt="enter image description here" /></a></p>
<p>Worse, if the test was previously passing, it still shows up with a green mark! The "Test Results" tab simply shows "Finished running tests!" with no other output. If I run the test in debug mode, I correctly get the error output, but the test is still not marked as failing:</p>
<p><a href="https://i.sstatic.net/mXQjm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mXQjm.png" alt="enter image description here" /></a></p>
<p>Running the unit tests manually outside VS Code correctly displays the error (just like the output in debug mode).</p>
<p>So how do I get vscode to display my failing tests?</p>
| <python><visual-studio-code><python-unittest> | 2023-07-17 18:47:37 | 1 | 1,327 | MyUsername112358 |
76,707,202 | 7,458,826 | whisper error: TypeError: only integer scalar arrays can be converted to a scalar index | <p>I am trying to use the whisper package for Python from OpenAI to extract the text from an audio file .mp4. I am using the simplest example in their page:</p>
<pre><code>import whisper
model = whisper.load_model("base")
# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("./portuguese.mp4")
audio = whisper.pad_or_trim(audio)
# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)
# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
# decode the audio
options = whisper.DecodingOptions(fp16 = False)
result = whisper.decode(model, mel, options)
# print the recognized text
print(result.text)
</code></pre>
<p>This code returns the following error:</p>
<pre><code> File ~/anaconda3/lib/python3.10/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
exec(code, globals, locals)
File ~/Downloads/audio_to_text.py:18
result = whisper.decode(model, mel, options)
File ~/anaconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in decorate_context
return func(*args, **kwargs)
File ~/anaconda3/lib/python3.10/site-packages/whisper/decoding.py:811 in decode
File ~/anaconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in decorate_context
return func(*args, **kwargs)
File ~/anaconda3/lib/python3.10/site-packages/whisper/decoding.py:744 in run
tokens: List[List[int]] = [t[i].tolist() for i, t in zip(selected, tokens)]
File ~/anaconda3/lib/python3.10/site-packages/whisper/decoding.py:744 in <listcomp>
tokens: List[List[int]] = [t[i].tolist() for i, t in zip(selected, tokens)]
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>This is particularly odd since when I run the command-line tool in a terminal,</p>
<pre><code>whisper portuguese.mp4 --language Portuguese
</code></pre>
<p>it works just fine. How can I solve this error?</p>
| <python><openai-whisper> | 2023-07-17 18:29:52 | 0 | 636 | donut |
76,707,122 | 166,229 | How to deploy an application in Vespa to a remote target using pyvespa? | <p>I have Vespa running in a container (using Docker compose) and want to deploy my application package from another container in the same network using <a href="https://pyvespa.readthedocs.io/en/latest/" rel="nofollow noreferrer">pyvespa</a>. The <code>vespa deploy</code> command of <a href="https://docs.vespa.ai/en/vespa-cli.html" rel="nofollow noreferrer">Vespa CLI</a> has a target option where I can provide a host, but I don't see such an option when using pyinvoke. Especially the <code>Vespa</code> class has no deploy method. Is this scenario not supported by pyvespa and do I have to use the Vespa CLI?</p>
| <python><vespa> | 2023-07-17 18:15:50 | 1 | 16,667 | medihack |
76,706,881 | 18,313,588 | Exception Handling with Traceback Repo not Found Error | <p>I would like Python to handle the <code>OSError</code> exception as well as the repo-not-found error shown in the traceback below; how can I accomplish this correctly?</p>
<p>Here is my code.</p>
<pre><code>try:
os.mkdir(res)
print("cloning")
repo = git.Repo.clone_from(item['href'], to_path = res)
except OSError, Exception:
continue
</code></pre>
<p>Here is the exception that I would like Python to take care of:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 34, in <module>
repo = git.Repo.clone_from(item['href'], to_path = res)
File "/home/abc/.local/lib/python2.7/site-packages/git/repo/base.py", line 1020, in clone_from
return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs)
File "/home/abc/.local/lib/python2.7/site-packages/git/repo/base.py", line 966, in _clone
finalize_process(proc, stderr=stderr)
File "/home/abc/.local/lib/python2.7/site-packages/git/util.py", line 333, in finalize_process
proc.wait(**kwargs)
File "/home/abc/.local/lib/python2.7/site-packages/git/cmd.py", line 412, in wait
raise GitCommandError(self.args, status, errstr)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git clone -v https://anonscm.debian.org/git/pkg-fedora-ds/389-console.git 389-console
stderr: 'Cloning into '389-console'...
fatal: repository 'https://anonscm.debian.org/git/pkg-fedora-ds/389-console.git/' not found
'
</code></pre>
<p>Any guidance is very much appreciated.</p>
| <python><python-3.x><exception><traceback> | 2023-07-17 17:37:42 | 0 | 493 | nerd |
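As context for any answer: `except OSError, Exception:` is Python 2 syntax, and even there it means "catch `OSError` and bind it to the name `Exception`"; the snippet also uses `continue`, which is only legal inside a loop. In Python 3, multiple exception types go in a tuple. A stdlib-only sketch of the pattern, with `ValueError` standing in for `git.exc.GitCommandError`:

```python
def clone(url):
    # Stand-in for git.Repo.clone_from; raises like a failed clone would
    if "not-found" in url:
        raise ValueError(f"repository {url!r} not found")
    return "cloned"

results = []
for url in ["repo-a", "not-found-repo", "repo-b"]:
    try:
        results.append(clone(url))
    except (OSError, ValueError) as err:  # tuple of types, one bound name
        results.append(f"skipped: {err}")
        continue                          # legal here because we are in a loop

print(results)
```

With GitPython, the tuple would typically be `(OSError, git.exc.GitCommandError)`; catching bare `Exception` also works but hides unrelated bugs.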
76,706,862 | 942,696 | Apache Flink - Getting `NoResourceAvailableException` with local execution while using `slot_sharing_group` | <p>I'm trying to run a Flink 1.17.1 job with local execution through PyCharm.</p>
<p>My code is using the DataStream API and I'm reading data from a Kafka topic and printing it to the console with <code>.execute().print()</code>.
However, I'm encountering errors when using <code>slot_sharing_group</code>; the errors go away if I comment out the lines that use <code>slot_sharing_group</code>.</p>
<p>I've checked that I have enough <code>taskmanager.numberOfTaskSlots</code>. (I would expect to see a different error if I didn't have enough task slots, but I validated that number nonetheless.)</p>
<p>Here's the error I'm getting:</p>
<pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling o545.print.
: java.lang.RuntimeException: Failed to fetch next result
at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
at org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222)
at org.apache.flink.table.utils.print.TableauStyle.print(TableauStyle.java:120)
at org.apache.flink.table.api.internal.TableResultImpl.print(TableResultImpl.java:153)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.IOException: Failed to fetch job execution result
at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184)
at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
... 15 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:182)
... 17 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:267)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1300)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
at akka.dispatch.OnComplete.internal(Future.scala:300)
at akka.dispatch.OnComplete.internal(Future.scala:297)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:622)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:139)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:83)
at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:258)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:249)
at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:242)
at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:748)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:725)
at org.apache.flink.runtime.scheduler.UpdateSchedulerNgOnInternalFailuresListener.notifyTaskFailure(UpdateSchedulerNgOnInternalFailuresListener.java:51)
at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.notifySchedulerNgAboutInternalTaskFailure(DefaultExecutionGraph.java:1664)
at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1140)
at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1080)
at org.apache.flink.runtime.executiongraph.Execution.markFailed(Execution.java:919)
at org.apache.flink.runtime.scheduler.DefaultExecutionOperations.markFailed(DefaultExecutionOperations.java:43)
at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.handleTaskDeploymentFailure(DefaultExecutionDeployer.java:327)
at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignAllResourcesAndRegisterProducedPartitions$2(DefaultExecutionDeployer.java:170)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at org.apache.flink.runtime.jobmaster.slotpool.PendingRequest.failRequest(PendingRequest.java:88)
at org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge.cancelPendingRequests(DeclarativeSlotPoolBridge.java:185)
at org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge.failPendingRequests(DeclarativeSlotPoolBridge.java:408)
at org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge.notifyNotEnoughResourcesAvailable(DeclarativeSlotPoolBridge.java:396)
at org.apache.flink.runtime.jobmaster.JobMaster.notifyNotEnoughResourcesAvailable(JobMaster.java:887)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$0(AkkaRpcActor.java:301)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:300)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579)
at akka.actor.ActorCell.invoke(ActorCell.scala:547)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 5 more
Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources.
at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227)
... 40 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources.
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
... 38 more
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources.
Process finished with exit code 1
</code></pre>
<p>Any clue what could be the source of the exception?</p>
| <python><apache-flink><streaming><pyflink> | 2023-07-17 17:34:40 | 1 | 860 | ElCapitaine |
76,706,711 | 16,115,413 | I am unable to count the occurrences of words in a pandas series row-wise | <p>I have a pandas DataFrame with a Series <code>spam['v2']</code>, where each row contains a sentence. I would like to create a new Series that calculates the word count for each row, where the output is a dictionary with words as keys and their corresponding counts as values.</p>
<p>For example, if my original series looks like this:</p>
<p><a href="https://i.sstatic.net/zgqM5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zgqM5.png" alt="enter image description here" /></a></p>
<p>I would like to create a new series where the rows have the following dictionary:</p>
<p><a href="https://i.sstatic.net/SK0uh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SK0uh.png" alt="enter image description here" /></a></p>
<p>I tried this and it achieves the task, but it is done using regular Python:</p>
<p><em>For those who want to see the full working file (OneDrive link): <a href="https://1drv.ms/f/s!AsQPI-pwVwq5v03-11e7R3Rme-2l?e=9LMtgd" rel="nofollow noreferrer">https://1drv.ms/f/s!AsQPI-pwVwq5v03-11e7R3Rme-2l?e=9LMtgd</a></em></p>
<pre><code>import pandas as pd
spam = pd.read_csv('spam.csv')
def freq(text):
words = []
words = text.split()
wfreq=[words.count(w) for w in words]
return dict(zip(words,wfreq))
count = spam['v2'].apply(freq)
count = pd.Series(count)
</code></pre>
<p>I'm not sure how to approach this problem efficiently with pandas/Series methods, without resorting to plain Python loops.
Could someone please guide me on how to achieve this using pandas?</p>
<p>Thank you!</p>
| <python><pandas><dataframe><dictionary><word-frequency> | 2023-07-17 17:12:19 | 1 | 549 | Mubashir Ahmed Siddiqui |
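As one sketch, `collections.Counter` replaces the quadratic `words.count(w)` scan with a single pass per row, while `Series.map` keeps the operation row-wise (the column name `v2` is taken from the question; the sample sentences are made up):

```python
from collections import Counter
import pandas as pd

spam = pd.DataFrame({"v2": ["go home go", "hello hello world"]})

# One split per row, one Counter pass per row; map(dict) yields plain dictionaries
counts = spam["v2"].str.split().map(Counter).map(dict)
print(counts[0])  # {'go': 2, 'home': 1}
```

A fully vectorized alternative is `explode()` plus a grouped `value_counts()`, but when the desired output is one dictionary per row, the `Counter` route is usually the simplest.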
76,706,643 | 13,421,357 | Getting x,y pixel coordinates of lat/lon from Mapbox static image | <p>I have a satellite image using <a href="https://docs.mapbox.com/api/maps/static-images/" rel="nofollow noreferrer">Mapbox static image API</a>. I am using a bounding box, so I know the coordinates of the four corners of the image.</p>
<p>On this image/map, I have a path defined by a list of [lon, lat] values that is of interest. Mapbox lets me visualize the path when I add a path overlay, which is useful. But I also want to consume the image (without Mapbox's path overlay) in Python to read pixel values of the image along this path. To do this, I need to be able to somehow project the list of [lon, lat] values onto this image's pixel coordinates.</p>
<p>Is there a way to do this without coding the formulas (i.e., using existing Python packages)? Or is there a Mapbox-native way?</p>
| <python><mapbox><geopandas> | 2023-07-17 17:01:23 | 1 | 847 | hainabaraka |
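If coding the math turns out to be acceptable after all: Mapbox static images use Web Mercator, so for a bounding-box request the projection reduces to linear interpolation between the projected corners. A stdlib sketch, assuming the bbox order `west,south,east,north` used by the Static Images API:

```python
import math

def _merc(lon, lat):
    # Spherical Web Mercator, unscaled
    return math.radians(lon), math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))

def lonlat_to_pixel(lon, lat, bbox, width, height):
    west, south, east, north = bbox
    x0, y0 = _merc(west, south)
    x1, y1 = _merc(east, north)
    x, y = _merc(lon, lat)
    px = (x - x0) / (x1 - x0) * width
    py = (y1 - y) / (y1 - y0) * height  # pixel rows grow downward from the north edge
    return px, py

# The image's top-left (north-west) corner maps to pixel (0, 0):
print(lonlat_to_pixel(-10.0, 45.0, (-10.0, 40.0, 0.0, 45.0), 600, 400))  # (0.0, 0.0)
```

With existing packages, the same transform can be built from `pyproj` (EPSG:4326 to EPSG:3857, then rescale to the image extent) or from `rasterio`'s affine-transform utilities.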
76,706,642 | 11,956,484 | Cannot copy date from Bokeh DataTable | <p>I created a data table in bokeh that enables the user to filter the data (>40 columns and ~200 rows) by columns and rows. Here are the relevant portions of the code:</p>
<pre><code>import pandas as pd
import os
from datetime import date
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource, HoverTool,CustomJS, Div, Legend, LegendItem,DatetimeTickFormatter, CDSView, GroupFilter,TabPanel, Tabs, MultiChoice,DateRangeSlider, CustomJSFilter, Button
from bokeh.models.widgets import Select, DataTable, TableColumn, HTMLTemplateFormatter, FileInput,DateFormatter
from bokeh.layouts import row, column, layout
from bokeh.transform import factor_cmap, factor_mark
import tkinter as tk
from tkinter import filedialog, messagebox
import traceback
filtered_table_data=df
filtered_table_data["Date"]=filtered_table_data["Date"].dt.date
filtered_table_source= ColumnDataSource(data=filtered_table_data)
filtered_table_cols=[]
filtered_table_cols.append(TableColumn(field="Date", title="Date", width=150, visible=True, formatter=DateFormatter()))
for col in filtered_table_data.columns[2:]:
filtered_table_cols.append(TableColumn(field=col, title=col, width=150, visible=False))
slider=DateRangeSlider(title="Select Date Range", value=(date(2022,12,19),date.today()), start=date(2022,12,19),end=date.today())
custom_filter = CustomJSFilter(args=dict(filtered_table_source=filtered_table_source, slider=slider), code='''
var indices = [];
var min=slider.value[0]
var max=slider.value[1]
for (var i = 0; i < filtered_table_source.get_length(); i++){
if (filtered_table_source.data['Date'][i] >= min && filtered_table_source.data['Date'][i] <= max)
{
indices.push(true);
}
else
{
indices.push(false);
}
}
return indices;
''')
filtered_table_view=CDSView(source=filtered_table_source, filters=[custom_filter])
filtered_table=DataTable(source=filtered_table_source, view=filtered_table_view, columns=filtered_table_cols, width=1850,height=500,fit_columns=False, selectable="checkbox")
multi_choice = MultiChoice(value=[], options=df.columns[2:-1].tolist(), title='Select elements:')
callback2 = CustomJS(args=dict(multi_choice=multi_choice, filtered_table=filtered_table), code="""
for (var i=0; i<filtered_table.columns.length; i++)
{
filtered_table.columns[i].visible=false
if (filtered_table.columns[i].field=="Date")
{
filtered_table.columns[i].visible=true
}
for (var j=0; j<multi_choice.value.length;j++)
{
if (filtered_table.columns[i].field==multi_choice.value[j])
{
filtered_table.columns[i].visible=true
}
}
}
""")
callback3 = CustomJS(args=dict(filtered_table_source=filtered_table_source), code="""
filtered_table_source.change.emit()
""")
multi_choice.js_on_change("value",callback2)
slider.js_on_change('value', callback3)
l=layout([multi_choice, filtered_table],
[slider])
show(l)
</code></pre>
<p>Now I would like the user to be able to copy and paste their selection into Excel. The issue is that when rows are copied into Excel, the Date column is completely blank.</p>
<p>Selected rows: <a href="https://i.sstatic.net/xLDMa.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xLDMa.jpg" alt="enter image description here" /></a></p>
<p>Copied data in Excel:
<a href="https://i.sstatic.net/7n8tm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7n8tm.png" alt="enter image description here" /></a></p>
| <javascript><python><bokeh> | 2023-07-17 17:01:19 | 0 | 716 | Gingerhaze |
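One workaround worth trying, offered as an assumption rather than a confirmed Bokeh fix: hand the `ColumnDataSource` pre-formatted date strings instead of `datetime.date` objects, so the copied cells already hold plain text and no `DateFormatter` is needed. A pandas sketch of the conversion:

```python
import pandas as pd

df = pd.DataFrame({"Date": pd.to_datetime(["2022-12-19", "2023-07-17"])})

# Format once in pandas; the DataTable column then contains copy-friendly strings
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
print(df["Date"].tolist())  # ['2022-12-19', '2023-07-17']
```

Note that the `CustomJSFilter` range comparison would then compare strings, which still sorts correctly for the ISO `YYYY-MM-DD` format but not for day-first formats.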
76,706,535 | 10,115,137 | Why does Python HTTPServer.shutdown not actually shut down the server? | <p>Consider a toy server serving a notification that it will quit in 5s at <code>http://localhost:48555/q</code> and 'Hello World' elsewhere.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from http.server import HTTPServer, BaseHTTPRequestHandler
from threading import Thread
from time import sleep
from urllib.parse import urlparse
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
path =urlparse(self.path).path
response=f"{datetime.now()}: {'stopping server in 5s' if path=='/q' else 'Hello World!'}"
self.wfile.write(bytes(response, 'ascii'))
self.wfile.flush()
if path=='/q':
Thread(target=self.quit()).start()
def quit(self): # don't call on thread running the server
sleep(5)
self.server.shutdown()
address = ('localhost', 48555)
server = HTTPServer(address, Handler)
print(f'started a server on http://{address[0]}:{address[1]}')
server.serve_forever()
</code></pre>
<p>Despite calling <code>self.server.shutdown()</code> from a separate thread, the server doesn't shut down correctly: the tab in the browser (Firefox 115.0.2) stays in the loading state while the Python process keeps running. Also, the response is never written to the client in spite of the explicit <code>flush</code>.</p>
<p><strong>Why is this happening?</strong></p>
<p>I have tried making the thread <code>daemon=True</code>, removing the <code>sleep</code> altogether, and replacing
<code>server.shutdown</code> with <code>exit()</code>. Only the last one actually shuts everything down, though the response is still served <em>afterwards</em>, not before the wait.</p>
| <python><server><shutdown> | 2023-07-17 16:43:37 | 0 | 920 | lineage |
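Independent of the shutdown semantics themselves, note that `Thread(target=self.quit()).start()` calls `quit` immediately on the handler thread: the parentheses invoke it, and `Thread` receives its `None` return value. So `shutdown()` ends up being called from the serving side after all. The difference between passing a callable and passing its result is easy to demonstrate with the stdlib:

```python
import threading
import time

calls = []

def slow():
    time.sleep(0.05)
    calls.append(threading.current_thread().name)

# Wrong: target=slow() runs slow() right here, on the main thread, then passes None.
threading.Thread(target=slow()).start()

# Right: pass the callable itself so the new thread runs it.
t = threading.Thread(target=slow, name="worker")
t.start()
t.join()

print(calls)  # ['MainThread', 'worker']
```

In the toy server, `Thread(target=self.quit).start()` (no parentheses) would hand the deferred call to the new thread instead.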
76,706,471 | 11,922,765 | Missing one positional required argument `self` | <p>I am trying to develop a function using OOPs. But I am getting following error. The function basically for the Raspberry Pi. It has been working fine when I use it without any OOPs structure.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import Adafruit_ADS1x15 as ada_adc
class read_raspberrypi_analog_input:
## Default level
# The following is executed when self is called
def __init__():
# call an instance of ADS1015 if it is used for readin analog input signal
self.adc = ada_adc
def through_ads1015(self, adc_gain=1, r1=1, r2=0):
adc = self.adc.ADS1015()
GAIN = adc_gain
# read the value at the board input analog pin A0
a0_level = adc.read_adc(0, gain=GAIN)
print(a0_level)
a0_analog = a0_level * (4.096 / 2047)
print(a0_analog)
# actual sensor input voltage
r1 = r1
r2 = r2
a0_sensor = a0_analog * (r1 + r2) / r1
print(a0_sensor)
</code></pre>
<p>Call the class and method:</p>
<pre class="lang-py prettyprint-override"><code>read_raspberrypi_analog_input.through_ads1015(adc_gain = 1, r1 = 5.1, r2 = 3)
</code></pre>
<p>Present output:</p>
<blockquote>
<p>read_raspberrypi_analog_input.through_ads1015(adc_gain = 1, r1 = 5.1, r2 = 3)<br />
TypeError: through_ads1015() missing 1 required positional argument: 'self'</p>
</blockquote>
| <python> | 2023-07-17 16:33:23 | 1 | 4,702 | Mainland |
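The reported `TypeError` is what Python raises whenever a plain instance method is called on the class itself, because no instance exists to fill the `self` slot. (The `__init__` defined without a `self` parameter is a second, separate bug that would surface on instantiation.) A minimal sketch of the distinction, with made-up names mirroring the question:

```python
class Reader:
    def __init__(self, adc_gain=1):      # __init__ must also take self
        self.adc_gain = adc_gain

    def through_ads1015(self, raw):
        return raw * self.adc_gain

reader = Reader(adc_gain=2)              # create an instance first...
print(reader.through_ads1015(10))        # ...then call through the instance -> 20

try:
    Reader.through_ads1015(raw=10)       # called on the class: nothing binds to self
except TypeError as e:
    print(e)                             # ... missing 1 required positional argument: 'self'
```

So the call site would become `read_raspberrypi_analog_input().through_ads1015(...)`, after fixing `__init__` to accept `self`.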
76,706,461 | 5,057,022 | Unstacking Pandas DF with Repeated Index | <p>I have a df that looks like this:</p>
<pre class="lang-none prettyprint-override"><code> asset valid_from valid_to history_table value latitude
0 Asset_1 01/09/2020 31/12/2049 market bm 2__MSTAT001 53,80
1 Asset_1 01/09/2020 31/12/2049 energy_capacity 10 53,80
2 Asset_2 01/05/2022 01/01/2023 market bm V__JZENO001 51,30
3 Asset_2 02/01/2023 31/12/2049 market bm V__JZEN002 51,30
4 Asset_3 01/04/2018 31/12/2049 owner ESB 52,97
</code></pre>
<p>It is a mix of long and wide formats. The long-format id and value columns are 'history_table' and 'value' respectively.</p>
<p>I want to unstack these columns so that, for each asset and validity period, every 'history_table' entry becomes its own column.</p>
<p>The problem I have when using the code</p>
<pre><code>test = df.pivot(index=['asset', 'valid_from', 'valid_to'], columns='history_table', values='value')
</code></pre>
<p>is that the index contains multiple duplicate entries. I've tried using <code>pivot_table</code>, but that means aggregating the results, and my 'value' column contains both text and numeric values.</p>
<p>Example DF</p>
<pre><code>{'asset': ['Asset_1', 'Asset_1', 'Asset_2', 'Asset_2', 'Asset_3'],
'valid_from': ['01/09/2020', '01/09/2020', '01/05/2022', '02/01/2023', '01/04/2018'],
'valid_to': ['31/12/2049', '31/12/2049', '01/01/2023', '31/12/2049', '31/12/2049'],
'history_table': ['market bm', 'energy_capacity', 'market bm', 'market bm', 'owner'],
'value': ['2__MSTAT001', '10', 'V__JZENO001', 'V__JZEN002', 'ESB'],
'latitude': ['53,80', '53,80', '51,30', '51,30', '52,97']}
</code></pre>
| <python><pandas> | 2023-07-17 16:32:16 | 0 | 383 | jolene |
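One sketch that keeps `pivot_table` but avoids numeric aggregation is `aggfunc='first'`: assuming each (asset, valid_from, valid_to, history_table) combination appears once, as in the sample, it simply picks the single value, and it works on mixed text/numeric columns:

```python
import pandas as pd

df = pd.DataFrame({
    "asset": ["Asset_1", "Asset_1", "Asset_2", "Asset_2", "Asset_3"],
    "valid_from": ["01/09/2020", "01/09/2020", "01/05/2022", "02/01/2023", "01/04/2018"],
    "valid_to": ["31/12/2049", "31/12/2049", "01/01/2023", "31/12/2049", "31/12/2049"],
    "history_table": ["market bm", "energy_capacity", "market bm", "market bm", "owner"],
    "value": ["2__MSTAT001", "10", "V__JZENO001", "V__JZEN002", "ESB"],
})

# aggfunc="first" sidesteps the duplicate-index error without numeric aggregation
wide = df.pivot_table(index=["asset", "valid_from", "valid_to"],
                      columns="history_table", values="value",
                      aggfunc="first").reset_index()
print(wide.loc[0, "market bm"])  # 2__MSTAT001
```

If a key combination really can repeat with different values, `aggfunc='first'` silently drops the later ones; adding a `groupby(...).cumcount()` column to the index is the usual way to keep them all.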
76,706,305 | 162,622 | Parquet / pyarrow: Malformed levels | <p>I use Azure Stream Analytics, which converts some JSON documents into Parquet files.</p>
<p>I can read most of them afterwards, but for some of them I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.11/site-packages/pyarrow/parquet/core.py", line 677, in scan_contents
return self.reader.scan_contents(column_indices,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/_parquet.pyx", line 1389, in pyarrow._parquet.ParquetReader.scan_contents
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: Malformed levels. min: 0 max: 4 out of range. Max Level: 3
</code></pre>
<p>The parquet file schema is generated by pyarrow itself. When I print it, I get this:</p>
<pre><code><pyarrow._parquet.ParquetSchema object at 0x12cc86d00>
required group field_id=-1 root {
optional group field_id=-1 hierarchy {
optional binary field_id=-1 content (String);
optional group field_id=-1 reference {
optional binary field_id=-1 type (String);
optional binary field_id=-1 value (String);
}
optional int64 field_id=-1 quantity;
optional group field_id=-1 children (List) {
repeated group field_id=-1 list {
optional group field_id=-1 {
optional binary field_id=-1 content (String);
optional group field_id=-1 reference {
optional binary field_id=-1 type (String);
optional binary field_id=-1 value (String);
}
optional int64 field_id=-1 quantity;
optional group field_id=-1 children (List) {
repeated group field_id=-1 list {
optional group field_id=-1 {
optional binary field_id=-1 content (String);
optional group field_id=-1 reference {
optional binary field_id=-1 type (String);
optional binary field_id=-1 value (String);
}
optional int64 field_id=-1 quantity;
repeated binary field_id=-1 children (String);
optional group field_id=-1 assets (List) {
repeated group field_id=-1 list {
optional group field_id=-1 {
optional binary field_id=-1 content (String);
optional binary field_id=-1 type (String);
}
}
}
repeated binary field_id=-1 sharing_units (String);
optional group field_id=-1 specific_data (List) {
repeated group field_id=-1 list {
optional group field_id=-1 {
optional group field_id=-1 target_organization {
optional binary field_id=-1 id (String);
}
optional binary field_id=-1 content (String);
}
}
}
repeated binary field_id=-1 validation_rules (String);
repeated binary field_id=-1 metadata (String);
}
}
}
repeated binary field_id=-1 assets (String);
repeated binary field_id=-1 sharing_units (String);
repeated binary field_id=-1 specific_data (String);
repeated binary field_id=-1 validation_rules (String);
repeated binary field_id=-1 metadata (String);
}
}
}
repeated binary field_id=-1 assets (String);
repeated binary field_id=-1 sharing_units (String);
repeated binary field_id=-1 specific_data (String);
repeated binary field_id=-1 validation_rules (String);
optional group field_id=-1 metadata (List) {
repeated group field_id=-1 list {
optional group field_id=-1 {
optional binary field_id=-1 id (String);
optional int64 field_id=-1 role;
optional binary field_id=-1 type (String);
}
}
}
}
}
</code></pre>
<p>I don't understand why pyarrow can't read a file generated by another tool, and I can't find any details about this error.</p>
<p>Do you have any idea?</p>
| <python><parquet><pyarrow> | 2023-07-17 16:07:29 | 1 | 9,373 | Kiva |
76,706,171 | 4,046,411 | Langchain agent does not always use the tool correctly | <p>Sometimes the LangChain agent enters the custom tool with the input "Invalid or incomplete response", even though the LLM seems to use the tool correctly.</p>
<p>Any idea why this happens and how to fix it?</p>
<p>I would point out that sometimes everything works as expected.</p>
<p>My tool:</p>
<pre><code>@tool
def saveEvent(event: str) -> str:
"""Use it to save an event in my calendar. \
The input should be a VCALENDAR string and be formated as follow:
'''
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
SUMMARY: ```the name of the event in french if you don't know the name, create one limited to 10 words```
DESCRIPTION: ```Description of the event in french, if you don't know the description, summarize the message```
DTSTART: ```Start date of the event, use the paris timezone and format as follow: AAAAMMJJDhhmmssZ```
DTEND: ```End date of the event use the paris timezone and format as follow: AAAAMMJJDhhmmssZ```
LOCATION: ```the location of the event```
CATEGORIES: ```the categories of the event```
END:VEVENT
END:VCALENDAR
'''
This function will return 'OK' if the event is correctlly saved or 'ERROR' if there is an error."""
resp = calendar.save_event(event)
if resp:
return 'OK'
else:
return 'ERROR'
</code></pre>
<p>The agent:</p>
<pre><code>tools = load_tools(['python_repl'], llm=llm)
agent= initialize_agent(
tools + [saveEvent],
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
max_iterations=3,
verbose = True)
messages = agent_prompt_template.format_prompt(text=message)
response = agent(messages.to_string())
</code></pre>
<p>The console output:</p>
<pre><code>[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:GPTLLM] [10.44s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "Thought: I need to format the event information provided in the JSON into a VCALENDAR string according to the given template. Once I have the formatted event, I can use the 'saveEvent' tool to add it to the calendar.\n\nAction:\n```json\n{\n \"action\": \"saveEvent\",\n \"action_input\": \"BEGIN:VCALENDAR\\nVERSION:2.0\\nBEGIN:VEVENT\\nSUMMARY:Festival ForroDiois\\nDESCRIPTION:Marilia Cervi est une des trois prof invités au Festival ForroDiois. Elle partagera sa danse avec nous le samedi 17 juin et dimanches 18 juin (3 cours de 1h30 chacun).\\nDTSTART:20230617T000000Z\\nDTEND:20230617T020000Z\\nLOCATION:Non spécifié\\nCATEGORIES:danse,festival\\nEND:VEVENT\\nEND:VCALENDAR\"\n}\n```\n\n",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [10.44s] Exiting Chain run with output:
{
"text": "Thought: I need to format the event information provided in the JSON into a VCALENDAR string according to the given template. Once I have the formatted event, I can use the 'saveEvent' tool to add it to the calendar.\n\nAction:\n```json\n{\n \"action\": \"saveEvent\",\n \"action_input\": \"BEGIN:VCALENDAR\\nVERSION:2.0\\nBEGIN:VEVENT\\nSUMMARY:Festival ForroDiois\\nDESCRIPTION:Marilia Cervi est une des trois prof invités au Festival ForroDiois. Elle partagera sa danse avec nous le samedi 17 juin et dimanches 18 juin (3 cours de 1h30 chacun).\\nDTSTART:20230617T000000Z\\nDTEND:20230617T020000Z\\nLOCATION:Non spécifié\\nCATEGORIES:danse,festival\\nEND:VEVENT\\nEND:VCALENDAR\"\n}\n```\n\n"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:_Exception] Entering Tool run with input:
"Invalid or incomplete response"
[tool/end] [1:chain:AgentExecutor > 4:tool:_Exception] [0.111ms] Exiting Tool run with output:
"Invalid or incomplete response"
</code></pre>
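A debugging note (hedged — not a definitive diagnosis): the `_Exception` tool run with "Invalid or incomplete response" is what `handle_parsing_errors=True` feeds back when the agent's output parser cannot extract an action or final answer from the model text. One way to investigate is to run the raw reply through a similar extraction yourself and see whether it matches the expected shape. The sketch below is a stdlib approximation of that kind of parsing, not LangChain's actual parser:

```python
import json
import re

FENCE = "`" * 3  # avoid literal triple backticks inside this example

def extract_action(llm_text):
    """Rough stand-in for the structured-chat parser: find the json action block."""
    pattern = FENCE + r"json\s*(\{.*?\})\s*" + FENCE
    match = re.search(pattern, llm_text, re.DOTALL)
    if match is None:
        return None  # the case that surfaces as "Invalid or incomplete response"
    return json.loads(match.group(1))

reply = (
    "Thought: save it.\n\nAction:\n"
    + FENCE + 'json\n{"action": "saveEvent", "action_input": "BEGIN:VCALENDAR..."}\n' + FENCE
)
print(extract_action(reply)["action"])  # saveEvent
```

If `extract_action` on the logged text returns the action cleanly but the agent still fails, the problem is more likely in what the parser expects around the block (e.g. a missing "Final Answer" section) than in the JSON itself.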
| <python><artificial-intelligence><langchain><py-langchain><large-language-model> | 2023-07-17 15:50:52 | 0 | 355 | Loann Delgado |
76,706,155 | 7,713,770 | AttributeError: module 'environ' has no attribute 'Env' | <p>I have a Django app, and I am trying to use the environ module.</p>
<p>Of course I googled this error a lot, but I can't figure out what is wrong with the setup.</p>
<p>So this is part of my settings.py file:</p>
<pre><code>from pathlib import Path
import os
import environ
BASE_DIR = Path(__file__).resolve().parent.parent
env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))
SECRET_KEY = env('SECRET_KEY')
DEBUG = env('DEBUG')
# Database
# https://docs.djangoproject.com/en/4.1/ref/settings/#databases
DATABASES = {
"default": env.db(),
}
</code></pre>
<p>and .env file looks like:</p>
<pre><code>DEBUG=on
</code></pre>
<p>But, for example, when I try to run the app with:</p>
<pre><code>python manage.py runserver 192.168.1.135:8000
</code></pre>
<p>I get this error:</p>
<pre><code>File "C:\repos\DWL_backend\zijn\settings.py", line 19, in <module>
env = environ.Env()
AttributeError: module 'environ' has no attribute 'Env'
</code></pre>
<p>and I installed already the module</p>
<pre><code>django-environ = "*"
</code></pre>
<p>Question: how to resolve this error?</p>
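A common cause of this exact AttributeError (hedged — it may not apply here) is a name clash: either a different `environ` distribution from PyPI is installed instead of `django-environ`, or a local file named `environ.py` in the project shadows the real package. A quick stdlib diagnostic shows which file Python would actually import:

```python
import importlib.util

def locate_module(name):
    """Report where Python would load `name` from, to spot shadowing."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "not installed"
    return spec.origin or "namespace package"

# locate_module("environ") should point into site-packages/environ/,
# not at a stray environ.py inside the project directory.
print(locate_module("json"))  # demo on a stdlib module
```

If the reported origin is a file inside the project, rename that file; if the module is missing entirely, install `django-environ` into the same interpreter that runs manage.py.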
| <python><django><django-rest-framework> | 2023-07-17 15:47:55 | 1 | 3,991 | mightycode Newton |
76,706,129 | 6,240,756 | Python - Test requests with retry and timeout | <p>I have the following piece of Python code and I would like to write tests for this function:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def http_request(method, url):
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=0.5, status_forcelist=list(range(500, 600)))
    for prefix in ('http://', 'https://'):
        session.mount(prefix, HTTPAdapter(max_retries=retries))
    response = session.request(method, url, timeout=(31, 123))
    response.raise_for_status()
    return response
</code></pre>
<p>Now I would like to write a unit test for this function, but I'm a bit lost.</p>
<p>How can I test that :</p>
<ol>
<li>There are indeed 3 retries (so 4 calls in total) if the status is between 500-600?</li>
<li>The timeouts are respected.</li>
</ol>
<p>Thank you</p>
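One way to verify the retry count without mocking requests' internals is to point the function at a throwaway local HTTP server that always fails and simply count the hits. The sketch below is stdlib-only, with a manual retry loop standing in for the urllib3 `Retry` machinery so the counting pattern is runnable anywhere; in the real test you would keep the counting handler, call `http_request` against the local port, and expect it to raise (exhausted status-based retries typically surface as `requests.exceptions.RetryError`). Timeouts can be checked the same way with a handler that sleeps longer than the configured limits.

```python
import http.server
import threading
import urllib.error
import urllib.request

class CountingHandler(http.server.BaseHTTPRequestHandler):
    """Throwaway endpoint that always fails, so every attempt is countable."""
    hits = 0

    def do_GET(self):
        type(self).hits += 1
        self.send_response(503)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def call_with_retries(url, attempts):
    """Manual stand-in for the Retry machinery: up to `attempts` total requests."""
    for _ in range(attempts):
        try:
            return urllib.request.urlopen(url, timeout=5)
        except urllib.error.HTTPError:
            continue
    return None

server = http.server.HTTPServer(("127.0.0.1", 0), CountingHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

call_with_retries(f"http://127.0.0.1:{port}/", attempts=4)  # 1 call + 3 retries
server.shutdown()
print(CountingHandler.hits)  # 4
```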
| <python><unit-testing><testing><pytest><django-testing> | 2023-07-17 15:44:18 | 0 | 2,005 | iAmoric |
76,705,866 | 11,488,421 | Sklearn Kernel SVM is Different from CVXOPT | <p>I am confused. The <a href="https://scikit-learn.org/stable/modules/svm.html" rel="nofollow noreferrer">sklearn implementation</a> of a kernel SVM does not arrive at the same optimum as manually solving the problem does. In fact, the two solutions are completely different. What is going on?</p>
<p>Let me provide an example. Below, we generate data distributed along two concentric circles. There is some small Gaussian noise added to it. Then, we label one circle as +1 and the other one as -1. The goal is to classify correctly.</p>
<pre><code>### GENERATE DATA
import numpy as np
import matplotlib.pyplot as plt
r1 = 0.1
r2 = 1.0
n_points = 150
t = np.linspace(0, 2*np.pi, n_points)
x1 = r1 * np.cos(t)
y1 = r1 * np.sin(t)
t = np.linspace(0, 2*np.pi, n_points)
x2 = r2 * np.cos(t)
y2 = r2 * np.sin(t)
X1 = np.vstack((x1,y1)).T
X2 = np.vstack((x2,y2)).T
X = np.vstack((X1,X2))
y = np.concatenate((np.ones(n_points),-np.ones(n_points)))
mean = np.array([0, 0])
covariance = np.diag([0.01, 0.01])
noise = np.random.multivariate_normal(mean, covariance, 2*n_points)
X = X + noise
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='viridis')
</code></pre>
<p>Now let's try sklearn. We manually build the kernel matrix because we will need it later anyway.</p>
<pre><code>from sklearn.metrics.pairwise import pairwise_kernels
from sklearn.svm import SVC
def rbf(x, y, gamma):
    z = np.exp(-gamma * np.sum((x - y) ** 2))
    return z
n_features = X.shape[1]
Kmat = pairwise_kernels(X, metric=rbf, gamma=1/n_features)
svm = SVC(kernel='precomputed', C=1)
fit = svm.fit(Kmat, y)
v = np.zeros(2*n_points)
v[fit.support_] = fit.dual_coef_
### Remember that sklearn returns y_i alpha_i and not alpha_i
alpha = v/y
</code></pre>
<p>Now, we compare with CVXOPT. The results are vastly different.</p>
<pre><code>import cvxopt as opt
from cvxopt import matrix as cvxopt_matrix
from cvxopt import solvers as cvxopt_solvers
def cvx_svm(Kmat, y, C):
    n_samples = len(y)
    P = cvxopt_matrix(np.outer(y, y) * Kmat)
    q = cvxopt_matrix(np.ones(n_samples) * -1)
    A = cvxopt_matrix(y, (1, n_samples))
    b = cvxopt_matrix(0.0)
    tmp1 = np.diag(np.ones(n_samples) * -1)
    tmp2 = np.identity(n_samples)
    G = cvxopt_matrix(np.vstack((tmp1, tmp2)))
    tmp1 = np.zeros(n_samples)
    tmp2 = np.ones(n_samples) * C
    h = cvxopt_matrix(np.hstack((tmp1, tmp2)))
    sol = cvxopt_solvers.qp(P, q, G, h, A, b)
    opt = np.array(sol['x'])
    return opt
alpha_manual = cvx_svm(Kmat, 1.0*y, C=1)
np.linalg.norm(alpha_manual - alpha)
</code></pre>
<p>The final output is far from zero!!!</p>
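A note that may help narrow this down (hedged — not a definitive diagnosis): dual coefficients are sensitive to solver tolerances and to sklearn's conventions (`dual_coef_` holds y_i·alpha_i for the support vectors only, and the model also carries an `intercept_`), so comparing alpha vectors element-wise can look "completely different" even when both solvers found essentially the same classifier. A more robust check is whether the two solutions produce the same decision values on the training kernel:

```python
import numpy as np

def decision_values(alpha, y, K, b=0.0):
    """f(x_j) = sum_i alpha_i * y_i * K(i, j) + b, for every column j of K."""
    return (alpha * y) @ K + b

# tiny sanity check: K = I, alpha = [1, 2], y = [1, -1]  ->  [1, -2]
print(decision_values(np.array([1.0, 2.0]), np.array([1.0, -1.0]), np.eye(2)))
```

If `decision_values(alpha, y, Kmat) + fit.intercept_` and the CVXOPT version (with its own intercept) agree up to a modest tolerance, the two optimizers agree and only the dual representation differs; if they disagree, the formulations really are solving different problems.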
| <python><scikit-learn><svm><cvxopt> | 2023-07-17 15:13:53 | 0 | 569 | Winger 14 |
76,705,834 | 8,176,763 | can callbacks be implemented with the new @task decorator in airflow | <p>Airflow provides examples of task callbacks for the success and failure of a task. It gives an example with an EmptyOperator, as follows:</p>
<pre><code>import datetime
import pendulum
from airflow import DAG
from airflow.operators.empty import EmptyOperator
def task_failure_alert(context):
    print(f"Task has failed, task_instance_key_str: {context['task_instance_key_str']}")

def dag_success_alert(context):
    print(f"DAG has succeeded, run_id: {context['run_id']}")

with DAG(
    dag_id="example_callback",
    schedule=None,
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    dagrun_timeout=datetime.timedelta(minutes=60),
    catchup=False,
    on_success_callback=None,
    on_failure_callback=task_failure_alert,
    tags=["example"],
):
    task1 = EmptyOperator(task_id="task1")
    task2 = EmptyOperator(task_id="task2")
    task3 = EmptyOperator(task_id="task3", on_success_callback=[dag_success_alert])
    task1 >> task2 >> task3
</code></pre>
<p>My question: can I use these callbacks with the TaskFlow API in the new Airflow 2.0?</p>
<p>Something like:</p>
<pre><code>@task(task_id="some_task", on_success_callback="this_func")
def this_func(context):
    print('the function succeeded')
</code></pre>
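In Airflow 2 the `@task` decorator forwards extra keyword arguments to the underlying operator, so `on_success_callback` / `on_failure_callback` should be accepted there as well — passed as real callables (not strings) that receive the context dict, e.g. `@task(task_id="some_task", on_success_callback=dag_success_alert)`. The stdlib mimic below only illustrates that kwarg-forwarding pattern so the wiring is visible without an Airflow install; it is not Airflow's implementation, and the names are illustrative:

```python
def task(task_id=None, on_success_callback=None, on_failure_callback=None):
    """Toy stand-in for airflow.decorators.task, showing callback forwarding."""
    def decorator(fn):
        def run(*args, **kwargs):
            context = {"task_id": task_id or fn.__name__}
            try:
                result = fn(*args, **kwargs)
            except Exception:
                if on_failure_callback:
                    on_failure_callback(context)
                raise
            if on_success_callback:
                on_success_callback(context)
            return result
        return run
    return decorator

events = []

@task(task_id="some_task", on_success_callback=lambda ctx: events.append(ctx["task_id"]))
def this_func():
    return "done"

this_func()
print(events)  # ['some_task']
```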
| <python><airflow> | 2023-07-17 15:10:24 | 1 | 2,459 | moth |
76,705,742 | 7,383,799 | Finding a fast optimization algorithm to solve a non-linear equation with unique positive solution | <p><strong>Goal:</strong></p>
<p>Find a fast algorithm in Python that solves the function f(x) below for its positive solution.</p>
<pre><code>def f(x):
    return (l / ((np.tile(r, (n, 1)).transpose() / D / (np.tile(x, (n, 1)) / D).sum(axis = 0)).sum(axis = 1))) - x
</code></pre>
<p>l, r, and x are vectors of dimension n and D is a matrix of dimension n x n.</p>
<p>The function is known to have a positive solution that is unique up to a scaling factor. I would like to solve the function for different data and different vector length n. The largest n is approximately 4000.</p>
<p><strong>What I have tried so far:</strong></p>
<p>I tried various <code>scipy.optimize</code> functions. First, I tried <code>fsolve</code> which does not seem appropriate because it sometimes gives a solution vector x with negative entries. Following the answers to a <a href="https://stackoverflow.com/questions/76670303/fsolve-with-multiple-vector-equations?noredirect=1#comment135174924_76670303">related question</a>, I tried <code>minimize</code> and constraining the solution to positive numbers to avoid negative entries in the solution. Minimize finds the global minimum that solves the function only when provided with the correct solution as a starting value. When the starting value differs (slightly), the resulting vector does not solve the equation (and I need the exact solution). I assume that the algorithm finds local minima but not the global one. To find the global minimum I tried <code>differential evolution</code>. Here, the problem is that it is very slow for any useful n. I did all testing with n = 5 for which it finds the correct solution.</p>
<p><strong>Question:</strong></p>
<p>Which algorithms are good candidates to solve this equation? (How) Can I use what I know about the equation to speed up the calculation? (i.e. a positive solution exists, it is unique up to scaling)</p>
<p><strong>Minimal working example:</strong></p>
<pre><code>import numpy as np
from scipy.optimize import minimize, fsolve, differential_evolution
np.random.seed(1)
# Vector dimension
n = 250 # differential evolution is slow, better use n = 5 to test
# Data r and D
r = np.random.rand(n)
D = 1 + np.random.rand(n, n)
# True solution x
x_true = np.random.rand(n)
# Normalize x to have geometric mean 1
x_true = x_true / np.prod(x_true) ** (1/n)
# Solve for l implied by true x
l = ((np.tile(r, (n, 1)).transpose() / D) / (np.tile(x_true, (n, 1)) / D).sum(axis = 0)).sum(axis = 1) * x_true
### Fsolve
initial_guess_deviation_factor = 2
x_0 = x_true * np.random.uniform(low = 0.9, high = 1.1, size = n) ** initial_guess_deviation_factor
def f(x):
    return (l / ((np.tile(r, (n, 1)).transpose() / D / (np.tile(x, (n, 1)) / D).sum(axis = 0)).sum(axis = 1))) - x
# The solution is negative
x = fsolve(f, x_0)
### Minimize
def opt(x):
    return (((l / ((np.tile(r, (n, 1)).transpose() / D / (np.tile(x, (n, 1)) / D).sum(axis = 0)).sum(axis = 1))) - x) ** 2).sum()

def pos_constraint(x):
    return x
result = minimize(opt, x0=x_0, constraints={'type': 'ineq', 'fun':pos_constraint}, tol = 1e-18)
# The solution is different from the true solution
print(abs(result.x - x_true).mean())
print(result.fun)
### Differential evolution
def opt(x):
    return (((l / ((np.tile(r, (n, 1)).transpose() / D / (np.tile(x, (n, 1)) / D).sum(axis = 0)).sum(axis = 1))) - x) ** 2).sum()
# Since the solution is unique up to renormalization, I use bounds between 0 and 1 and renormalize after finding the solution
bounds = [(0, 1)] * n
result = differential_evolution(opt, bounds, seed=1)
result.x, result.fun
# Normalize solution
x_de = result.x / np.prod(result.x) ** (1/n)
print(abs(x_de - x_true).mean())
print(result.fun)
</code></pre>
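Since the equation has the shape x = g(x) with a positive solution unique up to scaling, one cheap candidate worth trying before general-purpose optimizers is plain fixed-point iteration with a renormalization after every step (a nonlinear analogue of power iteration). This is a hedged sketch — convergence is not guaranteed for arbitrary l, r, D — but each iteration is only O(n²), so it costs little to test; for the question's f, g(x) would be the bracketed term l / (...). The demo below uses a linear map, where the scheme reduces to ordinary power iteration:

```python
import numpy as np

def solve_scaled_fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- g(x), renormalizing to geometric mean 1 after each step."""
    x = x0 / np.prod(x0) ** (1 / len(x0))
    for _ in range(max_iter):
        x_new = g(x)
        x_new = x_new / np.prod(x_new) ** (1 / len(x_new))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Demo: for a positive linear map this converges to the Perron eigenvector,
# here [1, 1] (already at geometric mean 1).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
x = solve_scaled_fixed_point(lambda v: A @ v, np.array([1.0, 2.0]))
print(x)
```

If the iteration converges for your data, the result can be verified directly by plugging it into f; if it oscillates, damping (averaging x and g(x)) is a standard fallback.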
| <python><algorithm><mathematical-optimization><scipy-optimize><scipy-optimize-minimize> | 2023-07-17 14:58:58 | 1 | 375 | eigenvector |
76,705,708 | 7,713,770 | How to dockerize django app with postgres database and existing data? | <p>I have a local postgres database with data in it, and I have two docker containers running:
the django app and the postgres database.</p>
<p>I have run the migrations; the tables are generated, but they are empty.</p>
<p>I can see this, for example, in the terminal.</p>
<p>So this is the dockerfile:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# set work directory
WORKDIR /usr/src/app
EXPOSE 8000
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add linux-headers postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./requirements.dev.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
</code></pre>
<p>docker-compose file:</p>
<pre><code>version: '3.9'

services:
  app:
    build:
      context: .
      args:
        - DEV=true
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    command: >
      sh -c "python ./manage.py migrate &&
             python ./manage.py runserver 0:8000"
    env_file:
      - ./.env
    depends_on:
      - db

  db:
    image: postgres:13-alpine
    container_name: postgres
    volumes:
      - dev-db-data:/var/lib/postgresql/data:/usr/local/var/postgresql/
    env_file:
      - ./.env
    ports:
      - '5432:5432'

volumes:
  dev-db-data:
  dev-static-data:
</code></pre>
<p>and entrypoint.sh:</p>
<pre><code>#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py makemigrations --merge
python manage.py migrate --noinput
exec "$@"
</code></pre>
<p>So when I run the containers with <code>docker-compose up</code></p>
<p>and I go to: <code>http://127.0.0.1:8000/admin/login/?next=/admin/</code></p>
<p>I see the django admin panel, but I can't log in, because the accounts_account table has no data in it.</p>
<p>Question: how do I get the data from the existing postgres database into the tables in the docker container?</p>
<p>Okay, I copied the sql file to the pgd_snapshots folder of the docker container.</p>
<p>Then I log in to postgres: <code>psql -h localhost -U postgres</code></p>
<p>and switch to the user: <code>psql -h localhost -U animalzijn</code>.
Then I see this:</p>
<pre><code>animalzijn=#
</code></pre>
<p>What do I have to do next?</p>
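Two things worth checking (hedged, based only on the files shown): first, entrypoint.sh runs `python manage.py flush --no-input` on every start, and flush deletes all data from the database — so even successfully imported data would be wiped again on the next start. Second, a named volume starts empty; the local data has to be loaded into it once, for example by piping a pg_dump file into psql inside the db container. A small sketch of that load step (the container name `postgres` comes from the compose file; the user and database names here are assumptions to replace with your own):

```python
import subprocess

def restore_command(container="postgres", user="postgres", db="postgres"):
    """Build: docker exec -i <container> psql -U <user> -d <db>  (dump fed on stdin)."""
    return ["docker", "exec", "-i", container, "psql", "-U", user, "-d", db]

def restore(dump_path, **kwargs):
    """Pipe a plain-SQL pg_dump file into psql running inside the container."""
    with open(dump_path, "rb") as dump:
        subprocess.run(restore_command(**kwargs), stdin=dump, check=True)

# usage sketch:
# restore("backup.sql", user="animalzijn", db="animalzijn")
```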
| <python><django><docker><docker-compose><dockerfile> | 2023-07-17 14:55:42 | 1 | 3,991 | mightycode Newton |
76,705,698 | 5,712,053 | How can I rename my project on PyPI to fix its capitalization? | <p>My PyPI project has an error in its name. Concretely, it has the wrong capitalization. So, e.g., instead of <code>project</code> I used the name <code>Project</code>.</p>
<p>How can I change the name on PyPI to fix this mistake?</p>
| <python><pypi><distutils><python-poetry> | 2023-07-17 14:54:26 | 1 | 3,457 | vauhochzett |
76,705,656 | 2,110,463 | python refer to parent class from static class | <p>Is it possible to access the parent class name within a static class? For example, how do I print the parent class name in the <code>bar</code> method below?</p>
<pre><code>class Static:
    def bar(self):
        parent_name = ???
        print(parent_name)

class A:
    object = Static()

    def foo(self):
        A.object.bar()

class B:
    object = Static()

    def foo(self):
        B.object.bar()

A().foo()
B().foo()
</code></pre>
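One stdlib way to do this (Python 3.6+) is the `__set_name__` descriptor hook: Python calls it on any object assigned in a class body, handing over the owning class at class-creation time. A sketch of that pattern (the attribute is renamed from `object` to `obj` here to avoid shadowing the builtin):

```python
class Static:
    def __set_name__(self, owner, name):
        # Called automatically when the class body assigning this instance runs.
        self.owner = owner

    def bar(self):
        print(self.owner.__name__)

class A:
    obj = Static()

class B:
    obj = Static()

A.obj.bar()  # prints: A
B.obj.bar()  # prints: B
```

Each class gets its own `Static()` instance, so the two owners stay distinct without any lookup at call time.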
| <python><static> | 2023-07-17 14:49:27 | 1 | 2,225 | PinkFloyd |
76,705,584 | 5,816,253 | append multiple JSON files to single CSV | <p>In my Python code, I have a for loop that generates flattened JSON files.
I would like to append these JSON files, one per iteration, to a single file (for example a .csv file) using the same for loop.</p>
<p>My final .csv should look like this:</p>
<pre><code>{"name": "John", "age": 30, "married": true, "divorced": false, "children": ["Ann", "Billy"], "pets": null, "cars": [{"model": "BMW 230", "mpg": 27.5}, {"model": "Ford Edge", "mpg": 24.1}]}
{"name": "Doe", "age": 33, "married": true, "divorced": false, "children": ["Peter", "Billy"], "pets": null, "cars": [{"model": "Tesla", "mpg": 27.5}, {"model": "Ford Edge", "mpg": 27.1}]}
{"name": "Kurt", "age": 13, "married": true, "divorced": false, "children": ["Bruce", "Nikola"], "pets": null, "cars": [{"model": "Mercedes", "mpg": 27.5}, {"model": "123", "mpg": 24.1}]}
</code></pre>
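The desired output is one JSON object per line — the JSON Lines (NDJSON) format — rather than CSV, so opening the file in append mode inside the loop is enough. A minimal sketch (the record dicts here are stand-ins for the flattened JSON the loop produces):

```python
import json
import os
import tempfile

def append_record(path, record):
    """Append one flattened dict as a single JSON line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

out_path = os.path.join(tempfile.gettempdir(), "people.jsonl")
open(out_path, "w").close()  # start fresh for the demo

records = [  # stand-ins for the per-iteration flattened JSON
    {"name": "John", "age": 30, "married": True},
    {"name": "Doe", "age": 33, "married": True},
]
for record in records:
    append_record(out_path, record)

with open(out_path, encoding="utf-8") as f:
    print(f.read().count("\n"))  # 2
```

Using append mode means each iteration only writes its own line, so the loop never has to hold all records in memory.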
| <python><json><csv><append> | 2023-07-17 14:42:13 | 3 | 375 | sylar_80 |
76,705,369 | 10,971,593 | The table is a database table. It cannot be opened as a free table | <p>I am trying to connect to an Advantage Database which is running on a local server on a Windows machine. Using the <code>pyodbc</code> Python library, I am able to establish a connection to the database. Below is the code:</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc

conn = pyodbc.connect('DSN=DemoDB;UID=User;PWD=PSWD')
if conn:
    print('Connected!')

sql = "select * from test;"
cur = conn.cursor()
data = cur.execute(sql)
print(data)
conn.close()
</code></pre>
<p>But the program fails when it tries to execute a query against the connected database, with the error below:</p>
<blockquote>
<p>pyodbc.Error: ('HY000', '[HY000] [iAnywhere Solutions][Advantage
SQL][ASA] Error 7200: AQE Error: State = HY000; NativeError =
5159; [SAP][Advantage SQL Engine][ASA] Error 5159: Error encountered
when trying to open a database table. The table is a database table.
It cannot be opened as a free table. Table name: alb (7200)
(SQLExecDirectW)')</p>
</blockquote>
<p>I am not able to find much information about the error on the web or in the SAP community forum.</p>
<p><strong>Database Details:</strong></p>
<ul>
<li>ODBC: Advantage StreamlineSQL ODBC</li>
<li>Version: Advantage Data Architect 11.10</li>
</ul>
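A possible lead (hedged — verify against the Advantage ODBC documentation for your driver version): error 5159 typically means the connection is operating in free-table mode (pointing at a plain directory) while the table belongs to a data dictionary, so the DSN may need to reference the `.add` dictionary file instead of the directory. A DSN-less connection string can make that explicit; the key names below are taken from Advantage's ODBC conventions and the path is a placeholder:

```python
def build_conn_str(dictionary_path, user, password):
    """Assemble a DSN-less Advantage ODBC connection string.

    Pointing DataDirectory at the .add data dictionary (instead of a plain
    directory) tells the driver to open database tables, not free tables.
    """
    parts = {
        "Driver": "{Advantage StreamlineSQL ODBC}",
        "DataDirectory": dictionary_path,  # e.g. r"C:\data\demo.add" (placeholder)
        "ServerTypes": "1",                # 1 = local server; adjust for remote
        "Uid": user,
        "Pwd": password,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

# conn = pyodbc.connect(build_conn_str(r"C:\data\demo.add", "User", "PSWD"))
```

Alternatively, the same DataDirectory setting can be changed on the existing DSN in the Windows ODBC administrator.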
| <python><pyodbc><advantage-database-server> | 2023-07-17 14:17:03 | 1 | 417 | Scarface |
76,704,949 | 809,423 | can I start docker-compose within a pip package? | <p>I am working on a project which consists of a docker-compose.yml file (with published public docker images) together with environment files.</p>
<p>Typically, a user would</p>
<ol>
<li><code>git clone</code> the repo</li>
<li>make the necessary changes to the environment files</li>
<li>run <code>docker-compose up</code></li>
<li>Figure out what's wrong with the config changes. Go to step 2.</li>
<li>Eventually be happy</li>
</ol>
<p>As working with docker and changing variables for non-devs can be a bit tedious, I was wondering if its possible to wrap the entire repo in a python wheel/pip installable package to improve the UX by allowing the user to;</p>
<ol>
<li><code>pip install mypackage</code> (which would also install docker-compose etc.)</li>
<li><code>mypackage start --variant=two --path=/home/</code> (does the config changes automatically)</li>
<li>Be happy</li>
</ol>
<p>Is it possible to do something like this with a Python wheel, or does it require some black magic? And is this considered a security risk or otherwise not recommended?</p>
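Broadly this is feasible: a wheel can bundle the compose file and environment templates as package data and expose a console-script entry point that rewrites the config and shells out to docker. One caveat worth noting: pip cannot install Docker Engine itself (it is not a Python package), so Docker stays a documented prerequisite. A hedged sketch of the entry-point side — `VARIANT` and `DATA_PATH` are hypothetical variable names the bundled compose file would be written to interpolate:

```python
import os
import subprocess

def compose_command(action="up", detach=True):
    """Assemble the docker compose CLI invocation."""
    cmd = ["docker", "compose", action]
    if detach and action == "up":
        cmd.append("-d")
    return cmd

def start(variant, path):
    """Run compose with user config injected via environment variables."""
    env = {**os.environ, "VARIANT": variant, "DATA_PATH": path}
    subprocess.run(compose_command(), env=env, check=True)

# wired up as a console_scripts entry point, this would back:
#   mypackage start --variant=two --path=/home/
```

Keeping the command assembly separate from the `subprocess.run` call makes the wrapper testable without Docker installed.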
| <python><docker><docker-compose><pip><python-wheel> | 2023-07-17 13:26:42 | 1 | 5,063 | japrescott |
76,704,822 | 6,668,031 | Power BI REST API - Unable to get visuals and fields within report page | <p>I am looping across Power BI reports > pages > visuals and then fields within them. The code to retrieve pages works fine; however, there is an issue with the visuals URL. The visuals response says: "No HTTP request was found that matches the url".</p>
<p>I get an issue in the code below:</p>
<pre><code>visuals_url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/reports/{report_id}/pages/{page_name}/visuals"
visuals_response = requests.get(visuals_url, headers=headers)
</code></pre>
<p>Below is the full block of code. Is there an issue with visuals_url in the code below?</p>
<pre><code>import requests

# Define the necessary variables
workspace_id = "your_workspace_id"
report_id = "your_report_id"
access_token = "your_access_token"

# Define the headers with the access token
headers = {
    "Authorization": f"Bearer {access_token}"
}

# Define the URL for pages
pages_url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/reports/{report_id}/pages"

# Send a GET request to retrieve pages
response = requests.get(pages_url, headers=headers)
pages = response.json()["value"]

# Extract page information
for page in pages:
    page_name = page["name"]
    page_display_name = page["displayName"]
    print(f"Page Name: {page_name}, Display Name: {page_display_name}")

    # Define the URL for visuals within the current page
    visuals_url = f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/reports/{report_id}/pages/{page_name}/visuals"

    # Send a GET request to retrieve visuals
    visuals_response = requests.get(visuals_url, headers=headers)
    visuals = visuals_response.json()["value"]

    # Extract visual information
    for visual in visuals:
        visual_name = visual["name"]
        visual_display_name = visual["displayName"]
        print(f"Visual Name: {visual_name}, Display Name: {visual_display_name}")
</code></pre>
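Independent of whether a visuals sub-resource exists in the REST API (the "No HTTP request was found that matches the url" message suggests it may simply not — the error is the service's 404 body), `visuals_response.json()["value"]` will raise a confusing KeyError whenever the service returns an error document instead of a result. A small guard that surfaces the real HTTP error first, sketched so it is testable on plain dicts:

```python
def extract_value(payload, status_code):
    """Return the API's `value` list, or raise with the service's own message."""
    if status_code != 200 or "value" not in payload:
        error = payload.get("error", {})
        raise RuntimeError(f"HTTP {status_code}: {error.get('message', payload)}")
    return payload["value"]

# usage sketch:
# visuals = extract_value(visuals_response.json(), visuals_response.status_code)
```

With this in place the loop fails with the service's actual status and message, which makes it clear whether the endpoint exists at all for the given page.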
| <python><powerbi><powerbi-embedded><powerbi-rest-api> | 2023-07-17 13:10:57 | 1 | 580 | Joseph |