| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,959,903
| 13,132,728
|
How to efficiently round data in a streamlit dataframe
|
<p>Pandas' <code>.style.set_precision(2)</code> is completely stalling my streamlit app. Is there a more efficient way to round data that is displayed in a streamlit app using <code>st.dataframe</code>, <code>st.write</code>, or <code>st.table</code>?</p>
<p>I am simply trying to round my dataframe to two decimals. My data is ~60k rows and 43 columns. It displays instantly without using any rounding techniques, but when I add <code>.style.set_precision(2)</code>, my app completely stalls. I have to add <code>.head(1000)</code> and even then it still takes a lot longer to display than if I were to not set a precision at all. I’m shocked there isn’t a native rounding option yet. Is there a way to efficiently round my data? I would like to display it in full.</p>
<p>This works instantly, but displays multiple unnecessary trailing 0s for each <code>float</code> column:</p>
<pre><code>st.dataframe(data)
</code></pre>
<p>This crashes my app:</p>
<pre><code>st.dataframe(data.style.set_precision(2))
</code></pre>
<p>This works but takes way longer than I feel it should (and doesn't even display all of my data):</p>
<pre><code>st.dataframe(data.head(1000).style.set_precision(2))
</code></pre>
<p>I am running streamlit 1.17.0. Also, streamlit apps have 1 GB of resources, and my dataframe comes nowhere near that, so I'm curious why this bottlenecking is occurring.</p>
<p>I've noticed setting the precision stalls in a jupyter notebook environment too, so is this a pandas thing?</p>
<h1>MRE:</h1>
<p>Mock data:</p>
<pre><code> a b c d e
0 28.00 26.02 11.60 1.62 2.60
1 2.00 11.15 0.80 0.09 0.31
2 30.50 29.77 15.40 2.46 7.21
</code></pre>
<pre><code>import numpy as np
import streamlit as st

data = data[cols]  # data: the mock DataFrame above; cols: columns to display
roundf = data.copy()
numeric_cols = data.select_dtypes(include=np.number).columns
roundf[numeric_cols] = roundf[numeric_cols].round(decimals=2)
st.dataframe(roundf)
</code></pre>
<p>Displays as:</p>
<pre><code>0 28.0000 26.0200 11.6000 15 1.6200 4 2.6000
1 2.0000 11.1500 0.8000 5 0.0900 0 0.3100
2 30.5000 29.7700 15.4000 15 2.4600 4 7.2100
</code></pre>
<p>streamlit 1.17.0, numpy 1.24.2, pandas 1.5.3</p>
<p>Looks like the same issue is occurring <a href="https://discuss.streamlit.io/t/cant-adjust-dataframes-decimal-places/1949" rel="nofollow noreferrer">here</a></p>
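For reference, one workaround sketch that bypasses the Styler entirely: format the float columns as strings before handing the frame to streamlit. The column names and values here are placeholders, not the real data, and the streamlit call is commented out so the sketch stands alone.

```python
import numpy as np
import pandas as pd

# sketch: pre-format float columns as fixed-precision strings, so no Styler
# has to render 60k rows; placeholder data stands in for the real frame
df = pd.DataFrame({"a": [28.0, 2.0], "b": [26.02, 11.15]})
display_df = df.copy()
num_cols = display_df.select_dtypes(include=np.number).columns
display_df[num_cols] = display_df[num_cols].apply(lambda c: c.map("{:.2f}".format))
# st.dataframe(display_df)  # streamlit call, commented for the sketch
```

The trade-off is that the displayed columns become strings, so streamlit's column sorting treats them lexically rather than numerically.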
|
<python><pandas><streamlit>
|
2023-04-07 15:38:06
| 2
| 1,645
|
bismo
|
75,959,757
| 3,503,228
|
Module installed using pip, but not found in python
|
<p>Installed calplot:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip install calplot
Requirement already satisfied: calplot in /home/lamy/.local/lib/python3.10/site-packages (0.1.7.5)
Requirement already satisfied: matplotlib in /home/lamy/.local/lib/python3.10/site-packages (from calplot) (3.7.1)
Requirement already satisfied: numpy in /home/lamy/.local/lib/python3.10/site-packages (from calplot) (1.24.2)
Requirement already satisfied: pandas>=1 in /home/lamy/.local/lib/python3.10/site-packages (from calplot) (2.0.0)
Requirement already satisfied: pytz>=2020.1 in /home/lamy/.local/lib/python3.10/site-packages (from pandas>=1->calplot) (2023.3)
Requirement already satisfied: python-dateutil>=2.8.2 in /home/lamy/.local/lib/python3.10/site-packages (from pandas>=1->calplot) (2.8.2)
Requirement already satisfied: tzdata>=2022.1 in /home/lamy/.local/lib/python3.10/site-packages (from pandas>=1->calplot) (2023.3)
Requirement already satisfied: pyparsing>=2.3.1 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (3.0.9)
Requirement already satisfied: cycler>=0.10 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (0.11.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (1.4.4)
Requirement already satisfied: fonttools>=4.22.0 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (4.39.3)
Requirement already satisfied: pillow>=6.2.0 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (9.5.0)
Requirement already satisfied: packaging>=20.0 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (23.0)
Requirement already satisfied: contourpy>=1.0.1 in /home/lamy/.local/lib/python3.10/site-packages (from matplotlib->calplot) (1.0.7)
Requirement already satisfied: six>=1.5 in /home/linuxbrew/.linuxbrew/lib/python3.10/site-packages (from python-dateutil>=2.8.2->pandas>=1->calplot) (1.16.0)
</code></pre>
<ul>
<li>CalPlot.py</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import calplot
import numpy as np; np.random.seed(sum(map(ord, 'calplot')))
import pandas as pd
all_days = pd.date_range('1/1/2019', periods=730, freq='D')
days = np.random.choice(all_days, 500)
events = pd.Series(np.random.randn(len(days)), index=days)
calplot.calplot(events)
</code></pre>
<ul>
<li>Executing python program:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>$ python CalPlot.py
Traceback (most recent call last):
File "/home/lamy/Documents/Git/Cs/Architecture/ArchitectureSrc/Hardware/Memory/Harddisk/Backup/Borg/Src/CalPlot.py", line 10, in <module>
import calplot
ModuleNotFoundError: No module named 'calplot'
</code></pre>
<h2>Any suggestions?</h2>
<hr />
<h2>Edit</h2>
<pre class="lang-bash prettyprint-override"><code>type pip
pip is /home/linuxbrew/.linuxbrew/bin/pip
pip is /usr/bin/pip
pip is /bin/pip
$ type python
python is aliased to `python3'
python is /usr/bin/python
python is /bin/python
$ type python3
python3 is /home/linuxbrew/.linuxbrew/bin/python3
python3 is /usr/bin/python3
python3 is /bin/python3
$ python --version
Python 3.11.2
$ pip --version
pip 23.0.1 from /home/linuxbrew/.linuxbrew/opt/python@3.11/lib/python3.11/site-packages/pip (python 3.11)
$ python -m pip --version
pip 23.0.1 from /home/linuxbrew/.linuxbrew/Cellar/python@3.11/3.11.2_1/lib/python3.11/site-packages/pip (python 3.11)
</code></pre>
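The `type` output above already hints at the problem: `pip` resolves to the linuxbrew install while `python` aliases to `/usr/bin/python`. A small diagnostic sketch to confirm which interpreter actually runs your script and where it looks for packages:

```python
# diagnostic sketch: if these paths differ from the site-packages path that
# `pip install` reported, pip and python are two different installations
import sys
import site

print(sys.executable)          # the interpreter that actually runs the script
print(site.getsitepackages())  # where it searches for installed modules
# install into exactly this interpreter instead of whatever `pip` resolves to:
#   python3 -m pip install calplot
```

Running pip as a module of a specific interpreter (`python3 -m pip ...`) sidesteps the PATH ambiguity entirely.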
|
<python><pandas><module>
|
2023-04-07 15:21:22
| 1
| 6,635
|
Porcupine
|
75,959,755
| 1,780,761
|
flask - send_file with lazy loading
|
<p>I have a huge table (80k rows) with an image in each row. I want to lazy-load the images while scrolling. The images are stored in a directory outside Flask, so I have to use <code>send_file</code>. This is my Python code:</p>
<pre><code>from flask import Flask, request, send_file

app = Flask(__name__)

@app.route('/getLogImage/')
def getLogImage():
fileName = request.args.get('fn', None)
return send_file("D:/images/" + fileName, mimetype='image/jpg')
</code></pre>
<p>and this is the code of my table:</p>
<pre><code><div class="scrollable-table" style="height:72vh; width:90vw; overflow:scroll;">
<table id="logTable" class="table">
<thead class="sticky-top" style="background-color: #f8f9fa;">
<tr>
<th>Date and time</th>
<th>Image</th>
</tr>
</thead>
<tbody>
{% for id in imgids %}
<tr>
<td>{{ dateTime[loop.index - 1] }}</td>
<td>
<a href="#" data-toggle="modal" data-target="#imageModal{{ id }}">
<img src="{{ url_for('getLogImage', fn=id.zfill(12) ~ '.jpg') }}" class="lazy rounded" width="100px" height="100px">
</a>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</code></pre>
<p>This is the script for lazy loading:</p>
<pre><code><script src="{{ url_for('static',filename='js/lazyload.js') }}">
</script>
<script>
$(document).ready(function() {
// Initialize lazy load
$("img.lazy").lazyload();
// Load more images on scroll
$('.scrollable-table').scroll(function() {
$("img.lazy").lazyload();
});
});
</script>
</code></pre>
<p>The problem I am facing is that instead of lazy-loading the images, it loads them all right away, which causes long loading times and keeps the CPU quite busy. How can I lazy-load the images in Flask while scrolling through the table?</p>
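For what it's worth, jquery-style lazyload plugins typically read the image URL from a <code>data-src</code> attribute; a plain <code>src</code> makes the browser fetch every image immediately, regardless of the script. A sketch of the adjusted tag (reusing the same <code>url_for</code> call from the question, with the browser-native <code>loading="lazy"</code> attribute as a belt-and-braces addition):

```html
<img data-src="{{ url_for('getLogImage', fn=id.zfill(12) ~ '.jpg') }}"
     class="lazy rounded" width="100" height="100" loading="lazy">
```

Whether `data-src` is the exact attribute depends on the lazyload library version in `static/js/lazyload.js`, so check its documentation; the `loading="lazy"` attribute alone is often enough in modern browsers.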
|
<python><flask><lazy-loading>
|
2023-04-07 15:21:06
| 1
| 4,211
|
sharkyenergy
|
75,959,702
| 2,628,868
|
Docker build stuck at transferring context forever
|
<p>When I use this command to build a Docker image (<code>Docker version 20.10.14, build a224086</code>) on macOS 13.2 with an M1 chip, the build process blocks at transferring context forever. This is the build log output:</p>
<pre><code>➜ docker build -f ./Dockerfile -t=reddwarf-pro/cha-server:v1.0.0 .
[+] Building 318.6s (4/10)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 671B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.10-alpine3.16 5.7s
=> [1/6] FROM docker.io/library/python:3.10-alpine3.16@sha256:b2d1f7c5ef2aad57f135104240e91c5327d9c414a58517f1080d55ed6a7a591d 312.8s
=> => resolve docker.io/library/python:3.10-alpine3.16@sha256:b2d1f7c5ef2aad57f135104240e91c5327d9c414a58517f1080d55ed6a7a591d 0.0s
=> => sha256:df0ead61b76e5f4e46838cd42d2a8c682d9ee2f368619d1b5f2dccc683f4ceaa 3.15MB / 12.29MB 312.8s
=> => sha256:b2d1f7c5ef2aad57f135104240e91c5327d9c414a58517f1080d55ed6a7a591d 1.65kB / 1.65kB 0.0s
=> => sha256:33733c1204f5f1619290fe3a4d4bbcb28574c7df61ccf0048f46d04c29e0ad21 1.37kB / 1.37kB 0.0s
=> => sha256:fb50eb7b018f767e9df8538552d47a2ddd135622357afeb6f78a2ce0515f429c 6.33kB / 6.33kB 0.0s
=> => sha256:547446be3368f442c50ff95e2a2a9c85110b6b41bbb3c75b7e5ebb115f478b57 2.71MB / 2.71MB 227.6s
=> => sha256:fc7cc9366210bf87382129b842210a5f3b868c4400e56cfbae81b0cbaad66492 662.28kB / 662.28kB 51.9s
=> => sha256:1205151c85319d9ca7046006cce9419f2516d28daebd1839d5ee1eda18f21f13 240B / 240B 53.3s
=> => sha256:d8f3c223bf9d2d03b40f67b94f7baa7b292e319f8c520f566cb9cbe28056a15e 3.08MB / 3.08MB 290.0s
=> => extracting sha256:547446be3368f442c50ff95e2a2a9c85110b6b41bbb3c75b7e5ebb115f478b57 0.1s
=> => extracting sha256:fc7cc9366210bf87382129b842210a5f3b868c4400e56cfbae81b0cbaad66492 0.1s
=> [internal] load build context 21.6s
=> => transferring context: 564.82MB 21.6s
</code></pre>
<p>What should I do to fix this issue? This is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.10-alpine3.16
LABEL jiangxiaoqiang (xiaoqiang@mail.com)
ENV LANG=en_US.UTF-8 \
LC_ALL=en_US.UTF-8 \
TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
&& echo $TZ > /etc/timezone \
&& mkdir -p /root/chat-server
# upgrade pip
RUN pip install --upgrade pip && apk add curl
ADD . /root/chat-server/
EXPOSE 8001
RUN pip3 install -r /root/chat-server/requirements.txt
WORKDIR /root/chat-server/
ENTRYPOINT exec python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
</code></pre>
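Two things stand out in the log: the base-image pull crawling (over 300 s for a few MB, which points at a slow registry/network path rather than the build itself) and a 564.82 MB build context sent by <code>ADD . /root/chat-server/</code>. A sketch of a <code>.dockerignore</code> to shrink that context — the entries are guesses for a typical Python project, not taken from this repository:

```
.git
__pycache__/
*.pyc
venv/
.venv/
*.log
```

Anything matched here is excluded from the context transfer, which usually cuts the "transferring context" phase from hundreds of MB to a few MB.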
|
<python><docker>
|
2023-04-07 15:14:37
| 0
| 40,701
|
Dolphin
|
75,959,463
| 472,599
|
How to insert multiple records in a table with psycopg2.extras.execute_values() when one of the columns is a sequence?
|
<p>I am trying to insert multiple records in a single database call with the <code>psycopg2.extras.execute_values(cursor, statement, argument_list)</code> function.
It works when I have a list with only strings or integers as field values.
But I want the <code>id</code> field to be assigned from a Postgres sequence.
I tried using <code>nextval('name_of_sequence')</code> as a value, but it is treated as a string, and thus not valid for my <code>id</code> column, which is a <code>bigint</code>.</p>
|
<python><postgresql><psycopg2>
|
2023-04-07 14:44:03
| 1
| 1,359
|
Bjinse
|
75,959,341
| 10,313,194
|
Selenium Instagram login: why does it have to log in again when opening a new URL?
|
<p>I log in to Instagram and open an Instagram page with this code.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

url = ['https://www.instagram.com']
user_names = []
user_comments = []
driver = webdriver.Chrome('C:/Users/User/chromedriver_win32/chromedriver')
driver.get(url[0])
time.sleep(3)
username = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='username']")))
password = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='password']")))
username.clear()
username.send_keys('myusername')
password.clear()
password.send_keys('mypassword')
Login_button = WebDriverWait(driver, 2).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))).click()
driver.get('https://www.instagram.com/p/Co_3ALaDC3i/')
</code></pre>
<p>When <code>driver.get()</code> opens the new URL, it shows the login page again. How can I fix it?</p>
|
<python><selenium-webdriver>
|
2023-04-07 14:29:49
| 1
| 639
|
user58519
|
75,959,314
| 3,810,748
|
Huggingface model training loop has same performance on CPU & GPU? Confused as to why?
|
<h2>Question</h2>
<p>I created two Python notebooks to fine-tune BERT on a Yelp review dataset for sentiment analysis. The only difference between the two notebooks is that <a href="https://nbviewer.org/gist/AlanCPSC/f263426dce9b7b9580cd48dbc45910b0" rel="nofollow noreferrer">one runs on a CPU</a> <code>.to("cpu")</code> while the <a href="https://nbviewer.org/gist/AlanCPSC/7f0efaeaf9b5247a80e67c9a8e0d9c8b" rel="nofollow noreferrer">other uses a GPU</a> <code>.to("cuda")</code>.</p>
<p>Despite this difference in hardware, the training times for both notebooks are nearly the same. I am new to using Hugging Face, so I'm wondering if there's anything I might be overlooking. Both notebooks are running on a machine with a single GPU.</p>
<h2>Metrics for CPU</h2>
<pre><code>TrainOutput(global_step=100, training_loss=1.5707407319545745, metrics={'train_runtime': 116.5447, 'train_samples_per_second': 3.432, 'train_steps_per_second': 0.858, 'total_flos': 105247256985600.0, 'train_loss': 1.5707407319545745, 'epoch': 0.4})
{'eval_loss': 1.4039757251739502,
'eval_accuracy': 0.4,
'eval_runtime': 3.6833,
'eval_samples_per_second': 27.15,
'eval_steps_per_second': 3.529,
'epoch': 0.4}
# specifically concerned with 'train_samples_per_second': 3.432
</code></pre>
<h2>Metrics for GPU</h2>
<pre><code>TrainOutput(global_step=100, training_loss=1.6277318179607392, metrics={'train_runtime': 115.46, 'train_samples_per_second': 3.464, 'train_steps_per_second': 0.866, 'total_flos': 105247256985600.0, 'train_loss': 1.6277318179607392, 'epoch': 0.4})
{'eval_loss': 1.525576114654541,
'eval_accuracy': 0.35,
'eval_runtime': 3.6518,
'eval_samples_per_second': 27.384,
'eval_steps_per_second': 3.56,
'epoch': 0.4}
# specifically concerned with 'train_samples_per_second': 3.464
</code></pre>
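A sanity-check sketch before digging further: identical throughput on both notebooks usually means the GPU path silently fell back to CPU, so it's worth confirming that CUDA is visible and that the model's parameters actually live on the GPU. The `model` name below is an assumption standing in for the loaded BERT model:

```python
# sketch: confirm the GPU is visible and the model actually lives on it
try:
    import torch
    print("cuda available:", torch.cuda.is_available())
    # with the loaded model (name assumed from the notebooks):
    # print(next(model.parameters()).device)  # should report cuda:0, not cpu
except ImportError:
    print("torch is not installed in this environment")
```

If `cuda available` prints `False`, `.to("cuda")` raises or no-ops depending on setup, and both runs are CPU runs; if it prints `True` but the device check says `cpu`, something reloaded or re-instantiated the model after the `.to()` call.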
|
<python><pytorch><huggingface-transformers><huggingface>
|
2023-04-07 14:26:17
| 1
| 6,155
|
AlanSTACK
|
75,959,247
| 2,100,039
|
Assign Week Number Column to Dataframe with Defined Dict in Python
|
<p>I have been trying to get this to work and cannot find a solution. I have data that looks like this in dataframe (df):</p>
<pre><code>index plant_name business_name power_kwh mos_time day month year
0 PROVIDENCE HEIGHTS UNITED STATES 7805.7 2023-02-25 08:00:00 56 2 2023
1 PROVIDENCE HEIGHTS UNITED STATES 9943.7 2023-02-25 07:00:00 56 2 2023
2 PROVIDENCE HEIGHTS UNITED STATES 9509.8 2023-02-25 06:00:00 56 2 2023
3 PROVIDENCE HEIGHTS UNITED STATES 8333 2023-02-25 05:00:00 56 2 2023
2993 PROVIDENCE HEIGHTS UNITED STATES 14560 2022-10-25 17:00:00 298 10 2022
2994 PROVIDENCE HEIGHTS UNITED STATES 9260.4 2022-10-25 16:00:00 298 10 2022
2995 PROVIDENCE HEIGHTS UNITED STATES 7327.7 2022-10-25 15:00:00 298 10 2022
2996 PROVIDENCE HEIGHTS UNITED STATES 5579.1 2022-10-25 14:00:00 298 10 2022
2997 PROVIDENCE HEIGHTS UNITED STATES 4507 2022-10-25 13:00:00 298 10 2022
13993 PROVIDENCE HEIGHTS UNITED STATES 1655.3 2021-07-19 14:00:00 200 7 2021
13994 PROVIDENCE HEIGHTS UNITED STATES 1686.1 2021-07-19 13:00:00 200 7 2021
13995 PROVIDENCE HEIGHTS UNITED STATES 2243.7 2021-07-19 12:00:00 200 7 2021
13996 PROVIDENCE HEIGHTS UNITED STATES 3577.9 2021-07-19 11:00:00 200 7 2021
33995 PROVIDENCE HEIGHTS UNITED STATES 2220.2 2019-04-05 20:00:00 95 4 2019
33996 PROVIDENCE HEIGHTS UNITED STATES 2266.7 2019-04-05 19:00:00 95 4 2019
33997 PROVIDENCE HEIGHTS UNITED STATES 2292.4 2019-04-05 18:00:00 95 4 2019
33998 PROVIDENCE HEIGHTS UNITED STATES 2197 2019-04-05 17:00:00 95 4 2019
</code></pre>
<p>The dict that I need to use to assign week numbers looks like this:</p>
<pre><code>weeks = {
1: [1, 2, 3, 4, 5],
2: [6, 7, 8, 9],
3: [10, 11, 12, 13],
4: [14, 15, 16, 17],
5: [18, 19, 20, 21, 22],
6: [23, 24, 25, 26],
7: [27, 28, 29, 30, 31],
8: [32, 33, 34, 35],
9: [36, 37, 38, 39],
10: [40, 41, 42, 43, 44],
11: [45, 46, 47, 48],
12: [49, 50, 51, 52]
}
</code></pre>
<p>And my desired answer looks like this, with the added "week" column on the far right. Thank you for any help here.</p>
<pre><code>index plant_name business_name power_kwh mos_time day month year week
0 PROVIDENCE HEIGHTS UNITED STATES 7805.7 2023-02-25 08:00:00 56 2 2023 8
1 PROVIDENCE HEIGHTS UNITED STATES 9943.7 2023-02-25 07:00:00 56 2 2023 8
2 PROVIDENCE HEIGHTS UNITED STATES 9509.8 2023-02-25 06:00:00 56 2 2023 8
3 PROVIDENCE HEIGHTS UNITED STATES 8333 2023-02-25 05:00:00 56 2 2023 8
2993 PROVIDENCE HEIGHTS UNITED STATES 14560 2022-10-25 17:00:00 298 10 2022 43
2994 PROVIDENCE HEIGHTS UNITED STATES 9260.4 2022-10-25 16:00:00 298 10 2022 43
2995 PROVIDENCE HEIGHTS UNITED STATES 7327.7 2022-10-25 15:00:00 298 10 2022 43
2996 PROVIDENCE HEIGHTS UNITED STATES 5579.1 2022-10-25 14:00:00 298 10 2022 43
2997 PROVIDENCE HEIGHTS UNITED STATES 4507 2022-10-25 13:00:00 298 10 2022 43
13993 PROVIDENCE HEIGHTS UNITED STATES 1655.3 2021-07-19 14:00:00 200 7 2021 30
13994 PROVIDENCE HEIGHTS UNITED STATES 1686.1 2021-07-19 13:00:00 200 7 2021 30
13995 PROVIDENCE HEIGHTS UNITED STATES 2243.7 2021-07-19 12:00:00 200 7 2021 30
13996 PROVIDENCE HEIGHTS UNITED STATES 3577.9 2021-07-19 11:00:00 200 7 2021 30
33995 PROVIDENCE HEIGHTS UNITED STATES 2220.2 2019-04-05 20:00:00 95 4 2019 14
33996 PROVIDENCE HEIGHTS UNITED STATES 2266.7 2019-04-05 19:00:00 95 4 2019 14
33997 PROVIDENCE HEIGHTS UNITED STATES 2292.4 2019-04-05 18:00:00 95 4 2019 14
33998 PROVIDENCE HEIGHTS UNITED STATES 2197 2019-04-05 17:00:00 95 4 2019 14
</code></pre>
<p>I have tried things like the function below, but I get <code>ValueError: Week number not found in dictionary</code>:</p>
<pre><code>weeks = {
1: [1, 2, 3, 4, 5],
2: [6, 7, 8, 9],
3: [10, 11, 12, 13],
4: [14, 15, 16, 17],
5: [18, 19, 20, 21, 22],
6: [23, 24, 25, 26],
7: [27, 28, 29, 30, 31],
8: [32, 33, 34, 35],
9: [36, 37, 38, 39],
10: [40, 41, 42, 43, 44],
11: [45, 46, 47, 48],
12: [49, 50, 51, 52]
}
def get_week_number(day, month):
for num, days in weeks.items():
if day in days and num <= 12:
if month == num:
return num
raise ValueError("Week number not found in dictionary.")
# add a column for week number
df['week'] = df.apply(lambda row: get_week_number(row['day'], row['month']), axis=1)
</code></pre>
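One alternative sketch sidesteps the dict entirely: the expected week values line up with the ISO week of <code>mos_time</code> for the 2023, 2022, and 2019 rows shown (8, 43, 14), so the week can be derived straight from the timestamp. Note the 2021-07-19 rows would come out as ISO week 29 rather than the 30 in the expected output, so check which week convention you actually need:

```python
import pandas as pd

# sketch: derive the week directly from the timestamp via the ISO calendar;
# a three-row stand-in for the real dataframe
df = pd.DataFrame({"mos_time": pd.to_datetime(
    ["2023-02-25 08:00:00", "2022-10-25 17:00:00", "2019-04-05 20:00:00"])})
df["week"] = df["mos_time"].dt.isocalendar().week
```

This is vectorized, so it scales to large frames without the row-wise `apply`.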
|
<python><dictionary><week-number>
|
2023-04-07 14:18:07
| 1
| 1,366
|
user2100039
|
75,959,228
| 128,618
|
Add a total column to a pivot table summing across the warehouse columns
|
<pre><code>from pprint import pprint
import pandas as pd
records = [
    {"item": "i1", "balance": 11, "warehouse": "W1"},
    {"item": "i1", "balance": 12, "warehouse": "W4"},
    {"item": "i1", "balance": 13, "warehouse": "W3"},
    {"item": "i2", "balance": 11, "warehouse": "W2"},
    {"item": "i2", "balance": 10, "warehouse": "W1"},
    {"item": "i3", "balance": 10, "warehouse": "W3"},
]
df = pd.DataFrame(records)
df_pivot = df.pivot_table(
index=["item"], columns="warehouse", values="balance", fill_value=0
)
print(df_pivot)
output = df_pivot.reset_index().to_dict(orient="records")
pprint(output)
warehouse W1 W2 W3 W4
item
i1 11 0 13 12
i2 10 11 0 0
i3 0 0 10 0
[{'W1': 11, 'W2': 0, 'W3': 13, 'W4': 12, 'item': 'i1'},
{'W1': 10, 'W2': 11, 'W3': 0, 'W4': 0, 'item': 'i2'},
{'W1': 0, 'W2': 0, 'W3': 10, 'W4': 0, 'item': 'i3'}]
</code></pre>
<p>I want to add a <code>total</code> column that is the sum of the warehouse columns (W1, W2, ...) in each row:</p>
<pre><code>[
 {'W1': 11, 'W2': 0, 'W3': 13, 'W4': 12, 'item': 'i1', 'total': 36},
 {'W1': 10, 'W2': 11, 'W3': 0, 'W4': 0, 'item': 'i2', 'total': 21},
 {'W1': 0, 'W2': 0, 'W3': 10, 'W4': 0, 'item': 'i3', 'total': 10}
]
</code></pre>
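A row-wise sum over the pivoted frame, added before `reset_index`, gives exactly that — a sketch reusing the same mock data:

```python
import pandas as pd

records = [
    {"item": "i1", "balance": 11, "warehouse": "W1"},
    {"item": "i1", "balance": 12, "warehouse": "W4"},
    {"item": "i1", "balance": 13, "warehouse": "W3"},
    {"item": "i2", "balance": 11, "warehouse": "W2"},
    {"item": "i2", "balance": 10, "warehouse": "W1"},
    {"item": "i3", "balance": 10, "warehouse": "W3"},
]
df_pivot = pd.DataFrame(records).pivot_table(
    index=["item"], columns="warehouse", values="balance", fill_value=0
)
# sum across the warehouse columns; axis=1 sums within each row
df_pivot["total"] = df_pivot.sum(axis=1)
output = df_pivot.reset_index().to_dict(orient="records")
```

Since the sum runs before `reset_index`, `item` is still the index and only the warehouse columns contribute to the total.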
|
<python><pandas>
|
2023-04-07 14:16:02
| 2
| 21,977
|
tree em
|
75,959,070
| 18,157,326
|
TypeError: 'SnapResponse' object is not callable
|
<p>I have defined a custom exception handler in a Python 3.10 FastAPI (<code>fastapi==0.95.0</code>) app like this:</p>
<pre><code>async def too_much_face_exception_handler(request, exc):
resp = SnapResponse(result="",resultCode="TOO_MUCH_FACE")
return resp
</code></pre>
<p>When the code runs into this exception handler, this error shows up:</p>
<pre><code>ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/opt/homebrew/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/opt/homebrew/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/opt/homebrew/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "/opt/homebrew/lib/python3.10/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "/Users/xiaoqiangjiang/source/reddwarf/backend/chat-server/src/routers/snap/photo_process.py", line 23, in photo_upload
snap_file = process_snap.process_image(save_dirs + file_uniq_name, file)
File "/Users/xiaoqiangjiang/source/reddwarf/backend/chat-server/src/service/snap/photo/process_snap.py", line 56, in process_image
reg_face(img_full_path)
File "/Users/xiaoqiangjiang/source/reddwarf/backend/chat-server/src/tool/graphics/recognize.py", line 29, in reg_face
raise TooMuchFaceException();
src.common.exception.snap.too_much_face_exception.TooMuchFaceException: too much face
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/opt/homebrew/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/opt/homebrew/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/opt/homebrew/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 91, in __call__
await response(scope, receive, sender)
TypeError: 'SnapResponse' object is not callable
</code></pre>
<p>This is how my <code>SnapResponse</code> class is defined:</p>
<pre><code>from src.models.chat.rest_response import RestResponse
class SnapResponse(RestResponse):
def __init__(self, result=None, resultCode=None, uploaded=None):
super().__init__(result=result, resultCode=resultCode)
self.uploaded = uploaded
</code></pre>
<p>Am I missing something? What should I do to fix this issue? This is the <code>RestResponse</code> definition:</p>
<pre><code>class RestResponse:
def __init__(self, msg="", result=None, resultCode="200", statusCode="200"):
self.msg = msg
self.result = result if result else {}
self.resultCode = resultCode
self.statusCode = statusCode
def to_dict(self):
return {
"msg": self.msg,
"result": self.result,
"resultCode": self.resultCode,
"statusCode": self.statusCode
}
</code></pre>
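The traceback line `await response(scope, receive, sender)` shows what Starlette does with the handler's return value: it calls it as an ASGI application, so the handler must return an actual Starlette/FastAPI `Response` object, and a plain `SnapResponse` is not callable. A minimal sketch of the fix, wrapping the existing `to_dict()` in a `JSONResponse` (the FastAPI import is assumed available and kept commented so the sketch is self-contained):

```python
# the response classes from the question, reproduced for a self-contained sketch
class RestResponse:
    def __init__(self, msg="", result=None, resultCode="200", statusCode="200"):
        self.msg = msg
        self.result = result if result else {}
        self.resultCode = resultCode
        self.statusCode = statusCode

    def to_dict(self):
        return {"msg": self.msg, "result": self.result,
                "resultCode": self.resultCode, "statusCode": self.statusCode}

# handler sketch -- the key change is returning a real Response object:
# from fastapi.responses import JSONResponse
# async def too_much_face_exception_handler(request, exc):
#     payload = RestResponse(result="", resultCode="TOO_MUCH_FACE").to_dict()
#     return JSONResponse(content=payload, status_code=400)
payload = RestResponse(result="", resultCode="TOO_MUCH_FACE").to_dict()
```

Note that `result=""` is falsy, so `to_dict()` emits `result: {}` under the class's own `result if result else {}` logic; the 400 status code is an assumption, pick whatever fits the API contract.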
|
<python><fastapi>
|
2023-04-07 13:56:36
| 1
| 1,173
|
spark
|
75,959,005
| 610,569
|
Deduplicating a list while keeping its order, and OrderedSet?
|
<p>Given a list as follows, deduplicating the list without concern for order, we can do this:</p>
<pre><code>from collections import OrderedDict
x = [1,2,3,9,8,3,2,5,3,2,7,3,6,7]
list(set(x)) # Outputs: [1, 2, 3, 5, 6, 7, 8, 9]
</code></pre>
<p>If we want to keep the order, we can do something like:</p>
<pre><code>list(OrderedDict.fromkeys(x)) # Outputs: [1, 2, 3, 9, 8, 5, 7, 6]
</code></pre>
<p>Using <code>OrderedDict.fromkeys</code> and then casting it into a list again is kind of cumbersome. It also seems counter-intuitive, but I suppose there's some reason for the Python devs to decide on handling ordered deduplication through a dictionary instead.</p>
<h2>Is there such an object as <code>OrderedSet</code> that can achieve the same deduplication while keeping order?</h2>
<p>If there isn't, what is the motivation to support <code>OrderedDict</code> but not <code>OrderedSet</code>? Is it an anti-pattern to create a custom object like <code>OrderedSet</code>?</p>
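For reference, since Python 3.7 plain `dict` preserves insertion order as a language guarantee, so the `OrderedDict` import can be dropped entirely — a sketch on the same list:

```python
x = [1, 2, 3, 9, 8, 3, 2, 5, 3, 2, 7, 3, 6, 7]
# plain dicts are insertion-ordered on Python 3.7+, so this deduplicates
# while keeping first-seen order
deduped = list(dict.fromkeys(x))
print(deduped)  # [1, 2, 3, 9, 8, 5, 7, 6]
```

There is no stdlib `OrderedSet`; third-party packages provide one, but `dict.fromkeys` covers the common deduplicate-keep-order case without any dependency.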
|
<python><list><duplicates><ordereddict>
|
2023-04-07 13:47:45
| 0
| 123,325
|
alvas
|
75,958,983
| 273,657
|
SQLAlchemy behaves differently on Windows and Linux when connecting to Oracle
|
<p>I have a simple Python script that connects to an Oracle database. The connection is set up as follows (summarized):</p>
<pre><code>engine = sqlalchemy.create_engine("oracle+cx_oracle://username:password@localhost:1521/?service_name=mydb", arraysize=1000)
conn = engine.connect()
df = pd.read_sql("select count(*) from dual",conn)
</code></pre>
<p>The above works fine when I run it on my Windows 11 laptop. However, if I move the script to a Linux server, I get the following error:</p>
<pre><code>ObjectNotExecutableError: Not an executable object: 'select count(*) from dual'
</code></pre>
<p>After a bit of googling, I found out that SQLAlchemy does not accept plain strings, and I had to change the <code>read_sql</code> call to the following for it to work on Linux:</p>
<pre><code>df = pd.read_sql(text("select count(*) from dual") ,conn)
</code></pre>
<p>What I don't understand is why, when I run the script on Windows, I don't have to convert the query to <code>text</code>, whereas I have to on Linux. I suspect the Oracle drivers are behaving differently on each OS?</p>
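One likely explanation sketch: `ObjectNotExecutableError` for raw strings is the behaviour of SQLAlchemy 2.x, while 1.x connections still execute plain strings, so the two machines may simply have different SQLAlchemy major versions rather than different Oracle drivers. Worth checking on both:

```python
# sketch: compare the SQLAlchemy version on each machine --
# 2.x rejects raw string statements, 1.x still accepts them
try:
    import sqlalchemy
    print(sqlalchemy.__version__)
except ImportError:
    print("sqlalchemy not installed in this environment")
# portable form that works on both 1.4 and 2.0:
# from sqlalchemy import text
# df = pd.read_sql(text("select count(*) from dual"), conn)
```

Wrapping the query in `text()` everywhere is harmless on old versions and required on new ones, so it is the safe default regardless of which version each machine runs.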
|
<python><pandas><sqlalchemy>
|
2023-04-07 13:45:15
| 0
| 15,926
|
ziggy
|
75,958,940
| 7,012,917
|
Computing a rolling z-score over a Pandas Series with SciPy gives an error
|
<p>I have generic DataFrame with float numbers and no NaN's or Inf's. I want to compute the rolling Z-Score over the column <code>Values</code> and took help of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zscore.html" rel="nofollow noreferrer">Scipy's z-score</a>.</p>
<p>This works, but it's computing Z-Score over the whole column i.e. not rolling:</p>
<pre><code>from scipy.stats import zscore
df['Z-Score'] = zscore(df['Values'])
</code></pre>
<p>This is what I want to do but it's giving me an error:</p>
<pre><code>from scipy.stats import zscore
window_size = 5
df['Z-Score'] = df['Values'].rolling(window_size).apply(lambda s: zscore(s))
</code></pre>
<p>I get <code>TypeError: cannot convert the series to <class 'float'></code>.</p>
<p>I've searched over and over but can't find what the issue is. What am I doing wrong?</p>
<hr />
<p>I know I can implement the <code>zscore</code> function myself which is more performant but I'd rather use a library.</p>
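The likely cause, as a sketch of a fix: `rolling().apply` expects the function to return a single float, while `zscore` returns a whole array for the window, hence the conversion error. One option is to keep only the window's last z-score, e.g. `rolling(w).apply(lambda s: zscore(s)[-1], raw=True)`; the equivalent pure-pandas form below avoids the per-window Python call entirely (`ddof=0` matches `scipy.stats.zscore`'s default):

```python
import pandas as pd

# placeholder series standing in for df['Values']
values = pd.Series([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
window_size = 3
r = values.rolling(window_size)
# z-score of each point relative to its own trailing window
z = (values - r.mean()) / r.std(ddof=0)
```

The first `window_size - 1` entries are NaN, as with any rolling computation, and this vectorized form is typically much faster than `apply` on large frames.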
|
<python><pandas><scipy>
|
2023-04-07 13:39:21
| 1
| 1,080
|
Nermin
|
75,958,912
| 8,971,383
|
Couldn't install pymongo==4.3.3
|
<p>My Ubuntu server had Python 2.7.17 as the default. When I try to install <strong>pymongo==4.3.3</strong>, it prompts an error telling me to upgrade to Python 3. So I installed Python 3 and set it as the default version. All commands were executed as root. When I check the Python version with <code>python --version</code> as root, I can see Python 3.10 is the default, but as a normal user the old version is still the default. I also still get the same error for the pymongo installation:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pymongo==4.3.3 (from versions: 0.1rc0, 0.1.1rc0, 0.1.2rc0, 0.2rc0, 0.3rc0, 0.3.1rc0, 0.4rc0, 0.5rc0, 0.5.1rc0, 0.5.2rc0, 0.5.3rc0, 0.6, 0.7, 0.7.1, 0.7.2, 0.8, 0.8.1, 0.9, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.10, 0.10.1, 0.10.2, 0.10.3, 0.11, 0.11.1, 0.11.2, 0.11.3, 0.12, 0.13, 0.14, 0.14.1, 0.14.2, 0.15, 0.15.1, 0.15.2, 0.16, 1.0, 1.1, 1.1.1, 1.1.2, 1.2, 1.2.1, 1.3, 1.4, 1.5, 1.5.1, 1.5.2, 1.6, 1.7, 1.8, 1.8.1, 1.9, 1.10, 1.10.1, 1.11, 2.0, 2.0.1, 2.1, 2.1.1, 2.2, 2.2.1, 2.3, 2.4, 2.4.1, 2.4.2, 2.5, 2.5.1, 2.5.2, 2.6, 2.6.1, 2.6.2, 2.6.3, 2.7, 2.7.1, 2.7.2, 2.8, 2.8.1, 2.9, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 2.9.5, 3.0, 3.0.1, 3.0.2, 3.0.3, 3.1, 3.1.1, 3.2, 3.2.1, 3.2.2, 3.3.0, 3.3.1, 3.4.0, 3.5.0, 3.5.1, 3.6.0, 3.6.1, 3.7.0, 3.7.1, 3.7.2, 3.8.0, 3.9.0, 3.10.0, 3.10.1, 3.11.0, 3.11.1, 3.11.2, 3.11.3, 3.11.4, 3.12.0, 3.12.1, 3.12.2, 3.12.3, 3.13.0)
</code></pre>
<p>How can I fix this issue?</p>
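A reading of the error, sketched: the version list stops at 3.13.0, which is exactly what PyPI serves to a Python 2 client — pymongo 4.x requires Python 3, so a pip bound to the 2.7 interpreter can never see it. Confirm which interpreter your pip serves, then install through the Python 3 interpreter explicitly:

```python
# run this with the same `python` your normal-user shell resolves;
# if it reports major version 2, that's the pip doing the failing install
import sys
print(sys.version_info[:3])
print(sys.executable)
# then install via the Python 3 interpreter directly, bypassing the alias:
#   python3 -m pip install pymongo==4.3.3
```

Note that shell aliases (like `alias python=python3`) only apply to interactive shells of the user who set them, which would explain why root and the normal user see different defaults.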
|
<python><python-3.x><python-2.7>
|
2023-04-07 13:34:34
| 1
| 627
|
kavindu
|
75,958,907
| 610,569
|
Deduplicating nested values that are also key in outer dictionary - Python
|
<p>Given an input dictionary,</p>
<pre><code>x = {'A fantastic gift for art lovers': ['Designed for adults, this stunning piece '
'of 3D art can be proudly displayed on a '
'wall following a rewarding build '
'experience.'],
'Build and relax': ['Art lovers can enjoy a relaxing and immersive building '
'experience as they create this unique artwork from 1,810 '
'pieces.'],
'Create your own artwork': ['Build Hokusai’s The Great Wave with layers of '
'LEGO® bricks.'],
'Finishing touch': ['Add a decorative tile with Hokusai’s signature.'],
'Hokusai – The Great Wave': ['Create your own artwork',
'Build Hokusai’s The Great Wave with layers of '
'LEGO® bricks.',
'The Great Wave comes to life!',
'The picture’s multiple layers create a stunning '
'3D effect.',
'Finishing touch',
'Add a decorative tile with Hokusai’s '
'signature.'],
'Immerse yourself in the world of art': ['Listen to the set’s Soundtrack, '
'tailor-made with content to enhance '
'the time you spend building this '
'Japanese wall art.'],
'Recreate an iconic piece of Japanese art': ['Celebrate your passion for '
'Japanese art when you build '
'this incredible LEGO® version '
'of Hokusai’s The Great Wave.'],
'The Great Wave comes to life!': ['The picture’s multiple layers create a '
'stunning 3D effect.']}
</code></pre>
<p>There are some duplicates in the inner list values of the keys, e.g.</p>
<pre><code>>>> y = x['The Great Wave comes to life!']
>>> y
['The picture’s multiple layers create a stunning 3D effect.']
>>> z = x['Hokusai – The Great Wave'][3]
>>> z
'The Great Wave comes to life!'
</code></pre>
<p>The goal is to remove the entry <code>x[z]</code> (whose value is <code>y</code>) because its key <code>z</code> also appears as a value inside another key's list.</p>
<p>The expected output of the deduplicated <code>x</code> should look like this:</p>
<pre><code>{'A fantastic gift for art lovers': ['Designed for adults, this stunning piece '
'of 3D art can be proudly displayed on a '
'wall following a rewarding build '
'experience.'],
'Build and relax': ['Art lovers can enjoy a relaxing and immersive building '
'experience as they create this unique artwork from 1,810 '
'pieces.'],
'Hokusai – The Great Wave': ['Create your own artwork',
'Build Hokusai’s The Great Wave with layers of '
'LEGO® bricks.',
'The Great Wave comes to life!',
'The picture’s multiple layers create a stunning '
'3D effect.',
'Finishing touch',
'Add a decorative tile with Hokusai’s '
'signature.'],
'Immerse yourself in the world of art': ['Listen to the set’s Soundtrack, '
'tailor-made with content to enhance '
'the time you spend building this '
'Japanese wall art.'],
'Recreate an iconic piece of Japanese art': ['Celebrate your passion for '
'Japanese art when you build '
'this incredible LEGO® version '
'of Hokusai’s The Great Wave.']}
</code></pre>
<p>I've tried the following:</p>
<pre class="lang-py prettyprint-override"><code>to_delete = []
for k, v in x.items():
    for vv in v:
        if vv in x:
            to_delete.append(vv)
for d in to_delete:
    if d in x:
        del x[d]
</code></pre>
<p>And a simpler form:</p>
<pre class="lang-py prettyprint-override"><code>for k in list(x.keys()):
    if k in x:
        for vv in x[k]:
            if vv in x:
                del x[vv]
</code></pre>
<p>Both code snippets produce the expected output, but they have some quirks and I wonder if there's a better way. They work well on a small dictionary, but at a millions/billions scale it's just too inefficient to iterate through the keys twice.</p>
<ul>
<li><p>The first snippet loops through the key-value pairs and keeps a list of keys to delete. The main issue is that the dictionary has to be looped through twice instead of once.</p>
</li>
<li><p>The second snippet has the odd <code>if k in x</code> check because we loop over a copy of <code>list(x.keys())</code>; as keys get popped dynamically, we have to verify that a key still exists before checking whether it's a duplicate.</p>
</li>
</ul>
<h2>Is there a better way to iterate through a dictionary and then delete duplicates?</h2>
<p>E.g. maybe something like flattening the values into a set and intersecting it with the keys? Would that be faster if <code>x</code> is a huge dictionary?</p>
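The set-intersection idea at the end can indeed avoid scanning the keys twice; a minimal sketch (with a toy dictionary, not the LEGO data):

```python
from itertools import chain

def dedup(x):
    # Flatten all inner-list values into one set, intersect with the keys,
    # then delete only the (usually small) overlap in a single extra pass.
    duplicated = set(chain.from_iterable(x.values())) & x.keys()
    for k in duplicated:
        del x[k]
    return x

x = {"a": ["b", "other"], "b": ["unique"], "c": ["nothing"]}
dedup(x)
```

Like both snippets in the question, this deletes any key that appears as a value anywhere, so the results should match.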
|
<python><dictionary><duplicates><key-value>
|
2023-04-07 13:33:52
| 1
| 123,325
|
alvas
|
75,958,899
| 8,231,100
|
Keras & Pytorch Conv2D give different results with same weights
|
<p>My code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Conv2D
import torch, torchvision
import torch.nn as nn
import numpy as np
# Define the PyTorch layer
pt_layer = torch.nn.Conv2d(3, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
# Get the weight tensor from the PyTorch layer
pt_weights = pt_layer.weight.detach().numpy()
# Create the equivalent Keras layer
keras_layer = Conv2D(12, kernel_size=(3, 3), strides=(2, 2), padding='same', use_bias=False, input_shape=(None, None, 3))
# Build the Keras layer to initialize its weights
keras_layer.build((None, None, None, 3))
# Transpose the PyTorch weights to match the expected shape of the Keras layer
keras_weights = pt_weights.transpose(2, 3, 1, 0)
# Set the weights of the Keras layer to the PyTorch weights
keras_layer.set_weights([keras_weights])
#Test both models
arr = np.random.normal(0,1,(1, 3, 224, 224))
print(pt_layer(torch.from_numpy(arr).float())[0,0])
print(keras_layer(arr.transpose(0,2,3,1))[0,:,:,0])
</code></pre>
<p>I would expect both prints to be quite similar, but they are really different. I ran it on Colab to make sure it wasn't due to old PyTorch/Keras versions. I'm sure I missed something trivial, but I can't find it.
Any help would be welcome, please.</p>
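A plausible cause worth checking against the outputs: with stride 2, Keras's <code>'same'</code> pads only as much as needed and puts the extra pixel on the bottom/right, while PyTorch's <code>padding=(1, 1)</code> pads symmetrically on all sides, so the two layers convolve differently padded inputs even with identical weights. The padding arithmetic, as a sketch:

```python
import math

in_size, k, s = 224, 3, 2  # input size, kernel, stride from the question

# PyTorch Conv2d(padding=1): symmetric, one padded pixel on each side
pt_pad = (1, 1)

# Keras 'same' with stride s: pad just enough to get ceil(in/s) outputs,
# with the extra pixel placed at the end (bottom/right)
out = math.ceil(in_size / s)
total = max((out - 1) * s + k - in_size, 0)
keras_pad = (total // 2, total - total // 2)
```

Here `keras_pad` works out to `(0, 1)` versus PyTorch's `(1, 1)`, which would explain diverging outputs despite shared weights.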
|
<python><tensorflow><keras><pytorch><conv-neural-network>
|
2023-04-07 13:33:25
| 2
| 365
|
Adrien Nivaggioli
|
75,958,770
| 2,131,200
|
Python: how to print argparse in alphabetical order
|
<p>I am looking for a canonical way to print the returned arguments of <code>argparse</code> in alphabetical order:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
parser = argparse.ArgumentParser('Some program', add_help=False)
parser.add_argument('--bb', type=str)
parser.add_argument('--aa', type=str)
args = parser.parse_args()
print(args) # Output: Namespace(bb=..., aa=...), but I want aa to appear before bb
</code></pre>
<p>Thank you in advance!</p>
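One possible approach (a sketch, not necessarily the canonical one): expose the Namespace as a dict via `vars()` and print it with its keys sorted.

```python
import argparse

parser = argparse.ArgumentParser("Some program", add_help=False)
parser.add_argument("--bb", type=str)
parser.add_argument("--aa", type=str)
args = parser.parse_args(["--bb", "2", "--aa", "1"])  # sample argv for illustration

# vars(args) exposes the Namespace's attributes as a dict;
# rebuild it with sorted keys before printing
ordered = {k: getattr(args, k) for k in sorted(vars(args))}
print(ordered)
```

This keeps `args` itself untouched and only changes how it is displayed.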
|
<python><argparse>
|
2023-04-07 13:15:45
| 1
| 1,606
|
f10w
|
75,958,759
| 11,608,962
|
Is there a way to keep request body structure unchanged in FastAPI (pydantic)?
|
<p>During a <code>POST</code> request, I would like to keep the request body unchanged as per the definition of the <code>BaseModel</code> irrespective of what the user has passed.</p>
<p>Consider the below <code>BaseModel</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Address(BaseModel):
city: Optional[str]
state: Optional[str]
pinCode: Optional[str]
class User_ReqModel(BaseModel):
name: str
mobile: Optional[str]
address: Optional[Address]
</code></pre>
<p>Now if I try to access <code>User_ReqModel.address.city</code> for the request body <code>{"name": "Amit Pathak"}</code> then it throws an error since the <code>address</code> object is not present in the body.</p>
<p>The two ways I can avoid this are by -</p>
<ol>
<li><p>Performing checks for null values is not feasible, as I have a comparatively large request body and cannot validate all attributes.</p>
</li>
<li><p>I used the dict object of the <code>BaseModel</code> by accessing <code>User_ReqModel.dict()</code> and then using <code>.get('key', {})</code>, but this does not look clean and I want to avoid it.</p>
</li>
</ol>
<p>Is there a way I can get the request body structure as defined by my <code>BaseModel</code>? That is, I want the below request format for a body <code>{"name": "Amit Pathak"}</code>.</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "Amit Pathak",
"mobile": null,
"address": {
"city": null,
"state": null,
"pinCode": null
}
}
</code></pre>
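One possible sketch (the model is renamed `UserReqModel` here, and the explicit `= None` defaults are an assumption about intent): give the nested model a `default_factory`, so the full structure is materialized with null fields even when the client omits it.

```python
from typing import Optional
from pydantic import BaseModel, Field

class Address(BaseModel):
    city: Optional[str] = None
    state: Optional[str] = None
    pinCode: Optional[str] = None

class UserReqModel(BaseModel):
    name: str
    mobile: Optional[str] = None
    # default_factory builds an empty Address when the field is omitted,
    # so model.address.city is always accessible (and None by default)
    address: Address = Field(default_factory=Address)

user = UserReqModel(name="Amit Pathak")
```

With this, `user.address.city` is `None` instead of raising, and serializing the model yields the nested-null shape shown above.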
|
<python><fastapi><pydantic>
|
2023-04-07 13:14:17
| 2
| 1,427
|
Amit Pathak
|
75,958,707
| 18,157,326
|
is it possible to get the project root dir in python fastapi app
|
<p>My Python 3 FastAPI project root dir is <code>'/Users/john/source/reddwarf/backend/chat-server/'</code>. I tried to get the project root dir like this:</p>
<pre><code>root_path = os.path.abspath(os.path.dirname(__file__))
</code></pre>
<p>but the dir is <code>'/Users/john/source/reddwarf/backend/chat-server/src/tool/graphics'</code>. I also tried this code:</p>
<pre><code>root1 = os.path.dirname(sys.modules['__main__'].__file__)
</code></pre>
<p>but the dir is <code>'/opt/homebrew/lib/python3.10/site-packages/uvicorn'</code>. Is it possible to get the project dir path? What should I do?</p>
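Since `__file__` always points at the current module, one common workaround (a sketch; the marker filenames are assumptions about the project layout) is to walk upward from the module until a directory containing a project marker such as `pyproject.toml` or `.git` is found:

```python
from pathlib import Path

def find_project_root(start: Path, markers=("pyproject.toml", ".git")) -> Path:
    # Walk upward from `start` until a directory containing a marker is found.
    for candidate in (start, *start.parents):
        if any((candidate / m).exists() for m in markers):
            return candidate
    raise FileNotFoundError(f"no project root found above {start}")
```

Any module can then call `find_project_root(Path(__file__).resolve())`, regardless of how deep it sits in the source tree or how uvicorn was launched.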
|
<python>
|
2023-04-07 13:06:29
| 1
| 1,173
|
spark
|
75,958,405
| 4,754,082
|
How to create a subscriptable custom type for Python type annotations?
|
<p>I'm trying to write a converter for <em>attrib</em> that converts an <em>IterableA</em> of <em>Type1</em> to an <em>IterableB</em> of <em>Type2</em>.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>converter = make_iter_converter(tuple, int)
converter(["1", "2"])
>>> (1, 2)
</code></pre>
<p>I want to add type annotations to the <code>make_iter_converter</code> function properly, but I’m not sure how to do it correctly. Here is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Callable, Iterator, TypeVar

_ITEMS_T = TypeVar("_ITEMS_T")
_ITEM_T = TypeVar("_ITEM_T")

def make_iter_converter(
    items_converter: Callable[[Iterator], _ITEMS_T],
    item_converter: Callable[[Any], _ITEM_T],
) -> Callable:
    def wrapped(items: Iterator) -> _ITEMS_T[_ITEM_T]:
        return items_converter(item_converter(item) for item in items)
    return wrapped
</code></pre>
<p>However, this raises an error when I try to use <code>_ITEMS_T[_ITEM_T]</code> as the return type of the wrapped function:</p>
<pre><code>TypeError: 'TypeVar' object is not subscriptable
</code></pre>
<p>I learned that <em>TypeVar</em> objects cannot be used with square brackets like some other types. Is there a way to fix this error and annotate the function properly? I know that <em>Generic</em> classes can accept type arguments, but that would require defining a separate class, and I don’t think that’s a good solution for this case.</p>
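One workaround sketch: instead of subscripting the TypeVar, parameterize the *iterator argument* of `items_converter`, so `Callable[[Iterator[_ITEM_T]], _ITEMS_T]` already ties the item and container types together and the return annotation can be plain `_ITEMS_T`:

```python
from typing import Any, Callable, Iterable, Iterator, TypeVar

_ITEMS_T = TypeVar("_ITEMS_T")
_ITEM_T = TypeVar("_ITEM_T")

def make_iter_converter(
    items_converter: Callable[[Iterator[_ITEM_T]], _ITEMS_T],
    item_converter: Callable[[Any], _ITEM_T],
) -> Callable[[Iterable[Any]], _ITEMS_T]:
    # _ITEMS_T is whatever items_converter returns; no subscripting needed
    def wrapped(items: Iterable[Any]) -> _ITEMS_T:
        return items_converter(item_converter(item) for item in items)
    return wrapped
```

How precisely a checker infers the container's element type from this depends on the checker, but it avoids the `'TypeVar' object is not subscriptable` runtime error entirely.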
|
<python><python-typing>
|
2023-04-07 12:25:30
| 2
| 870
|
Kamoo
|
75,958,365
| 20,920,790
|
Why I got KeyError in loop "for" with regular expressions?
|
<p>I got pandas.Series with few values.</p>
<pre><code>money = pd.Series(df_t['Сокращенное наименование '].unique())
0 USDCNY_TOM
1 USDRCNY_TOD
dtype: object
</code></pre>
<p>My regular expression:</p>
<pre><code>m_trim = re.compile(r'_TOM$|_TOD$')
</code></pre>
<p>When I run this code, it works.</p>
<pre><code>m_trim.sub('', money[0])
</code></pre>
<p>Result is: <code>'USDCNY'</code>.</p>
<p>But when I try to make a loop:</p>
<pre><code>for m in money:
money[m] = m_trim.sub('', money[m])
</code></pre>
<p>I get KeyError: <code>'USDCNY_TOM'</code>.</p>
<p>What am I doing wrong?</p>
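For reference, `for m in money` iterates over the Series *values*, so `money[m]` indexes by label (`'USDCNY_TOM'`), which is not in the index, hence the KeyError. A sketch of a vectorized alternative that needs no loop at all:

```python
import pandas as pd

money = pd.Series(["USDCNY_TOM", "USDRCNY_TOD"])

# str.replace applies the pattern to every element at once
trimmed = money.str.replace(r"_TOM$|_TOD$", "", regex=True)
```

If the loop form is really wanted, iterating over positions (`for i in range(len(money))`) or `money.index` would also avoid the label mix-up.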
|
<python><pandas><regex>
|
2023-04-07 12:20:20
| 1
| 402
|
John Doe
|
75,958,310
| 5,351,558
|
Is pulp.lpDot(a,n) not equivalent to pulp.lpSum([a[i] * n[i] for i in range(N)])?
|
<p>I have a decision variable vector <code>n</code>, and <code>a</code> is a constant vector. What confuses me is why <code>lpDot(a,n)</code> is not equivalent to <code>lpSum([a[i] * n[i] for i in range(N)])</code> in a constraint?</p>
<p>Here is the code:</p>
<pre><code>import pulp
import numpy as np
# params
N = 2
a = [3.5, 2]
w = [0.6, 0.4]
# lpSum
# Define the problem as a minimization problem
problem = pulp.LpProblem("lpSum", pulp.LpMinimize)
# Define decision variables
t = pulp.LpVariable.dicts('t', range(N), cat='Continuous')
n = pulp.LpVariable.dicts('n', range(N), lowBound=0, cat='Integer')
# Define the objective function
problem += pulp.lpSum(t)
# Define the constraints
problem += t[0] >= a[0] * n[0] - 6
problem += t[0] >= 6 - a[0] * n[0]
problem += t[1] >= a[1] * n[1] - 4
problem += t[1] >= 4 - a[1] * n[1]
problem += pulp.lpSum([a[i] * n[i] for i in range(N)]) <= 10
# Solve the problem
status = problem.solve()
# Convert result to numpy array
n = np.array([n[i].varValue for i in range(N)])
# Print the optimal solution
print("Optimal Solution with lpSum:")
print(n)
# lpDot
# Define the problem as a minimization problem
problem = pulp.LpProblem("lpDot", pulp.LpMinimize)
# Define decision variables
t = pulp.LpVariable.dicts('t', range(N), cat='Continuous')
n = pulp.LpVariable.dicts('n', range(N), lowBound=0, cat='Integer')
# Define the objective function
problem += pulp.lpSum(t)
# Define the constraints
problem += t[0] >= a[0] * n[0] - 6
problem += t[0] >= 6 - a[0] * n[0]
problem += t[1] >= a[1] * n[1] - 4
problem += t[1] >= 4 - a[1] * n[1]
problem += pulp.lpDot(a, n) <= 10
# Solve the problem
status = problem.solve()
# Convert result to numpy array
n = np.array([n[i].varValue for i in range(N)])
# Print the optimal solution
print("Optimal Solution with lpDot:")
print(n)
</code></pre>
<p>Both report "Optimal solution found".
With <code>lpSum</code> it correctly yields <code>n = [1, 2]</code>, while with <code>lpDot</code> it yields <code>n = [2, 2]</code> which violates the last constraint IMO.</p>
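A likely explanation (an assumption, since it depends on how `lpDot` consumes its arguments): `n` here is a *dict* from `LpVariable.dicts`, and iterating a dict yields its keys, so the coefficients can end up paired with `0, 1, …` instead of with the variables, making the dot-product constraint a trivially satisfied constant. A plain-Python sketch of the pitfall, with strings standing in for the variables:

```python
a = [3.5, 2]
n = {0: "var_n0", 1: "var_n1"}  # stand-in for pulp.LpVariable.dicts(...)

# zipping with the dict pairs coefficients with the KEYS, not the variables
pairs_with_dict = list(zip(a, n))
# explicitly indexing restores the intended pairing
pairs_with_list = list(zip(a, [n[i] for i in range(2)]))
```

Under that assumption, writing the constraint as `pulp.lpDot(a, [n[i] for i in range(N)]) <= 10` should behave like the `lpSum` version.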
|
<python><optimization><pulp><integer-programming>
|
2023-04-07 12:13:52
| 1
| 418
|
Yfiua
|
75,958,215
| 13,294,769
|
Can't create table with boolean column using SQLAlchemy on Redshift
|
<p>After a successful connection to Redshift, I'm trying to ensure my tables exist in the cluster. This is the code I'm running:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.engine import URL, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Boolean, SMALLINT, Column
Base = declarative_base()
class Table(Base):
__tablename__ = "a_table"
id = Column(SMALLINT, primary_key=True)
a_boolean_column = Column(Boolean)
# example values
engine = create_engine(
URL.create(
drivername="postgresql+psycopg2",
username="username",
password="password",
host="127.0.0.1",
port=5439,
database="dev",
)
)
Base.metadata.create_all(engine)
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "a_boolean_column"
LINE 12: a_boolean_column BOOLEAN,
^
[SQL:
CREATE TABLE a_table (
id INTEGER NOT NULL,
a_boolean_column BOOLEAN,
PRIMARY KEY (id),
)
]
</code></pre>
<p>I've tried using the <code>BOOLEAN</code> data type from <code>sqlalchemy</code>, and tried setting a default value, but both failed.</p>
<p>Versions:</p>
<pre><code>SQLAlchemy==1.4.47
sqlalchemy-redshift==0.8.13
</code></pre>
|
<python><python-3.x><sqlalchemy><amazon-redshift>
|
2023-04-07 12:03:06
| 1
| 1,063
|
doublethink13
|
75,958,201
| 4,315,597
|
Where does virtualenvwrapper's activate script live on Ubuntu?
|
<p>I installed python2 and virtualenvwrapper on my ubuntu machine in order to run a very old set of scripts I've saved for an obscure task.</p>
<p>I run the following:</p>
<pre><code>sudo apt-get install python2
cd ~/path/to/my/project
pip install virtualenvwrapper
source ~/.virtualenvs/ccs/bin/activate
</code></pre>
<p>Unfortunately, this gives me something like <code>bash: ~/.virtualenvs/ccs/bin/activate: No such file or directory</code></p>
<p>I also try</p>
<pre><code>source env/bin/activate
</code></pre>
<p>and again get a <code>No such file or directory</code> message.</p>
<p>Is there a different place where this script is installed on Ubuntu?</p>
<p>(I also tried <code>apt-get install virtualenvwrapper</code> with similar results.)</p>
|
<python><ubuntu><pip><virtualenv><virtualenvwrapper>
|
2023-04-07 12:01:49
| 1
| 1,213
|
Mayor of the Plattenbaus
|
75,958,181
| 2,628,868
|
how to specify the custom python path when using visual studio code to debug
|
<p>I am using Visual Studio Code (1.77.1) to debug a Python application. How do I specify the Python path in <code>launch.json</code>? I have tried this:</p>
<pre><code>"python.pythonPath": "/path",
</code></pre>
<p>but did not work. I also read the docs <a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/environments</a> and created the .env in project folder. add the python path like this:</p>
<pre><code>PYTHONPATH = /opt/python
</code></pre>
<p>still could not work. What should I do to specify the custom python path? I also tried this:</p>
<pre><code>"pythonPath": "/path/to/custom/python",
</code></pre>
<p>this is the <code>launch.json</code> file right now:</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: FastAPI",
"type": "python",
"request": "launch",
"module": "uvicorn",
"python.pythonPath": "/path",
"args": [
"main:app",
"--port=9002"
],
"jinja": true,
"justMyCode": true
}
]
}
</code></pre>
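For reference, the debug configuration reads the interpreter from a top-level `python` attribute (note: `python.pythonPath` is a settings key, not a launch-configuration key, which is why it has no effect there). A sketch, with the interpreter path as a placeholder:

```json
{
    "name": "Python: FastAPI",
    "type": "python",
    "request": "launch",
    "module": "uvicorn",
    "python": "/path/to/custom/python",
    "args": ["main:app", "--port=9002"],
    "jinja": true,
    "justMyCode": true
}
```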
|
<python><visual-studio-code>
|
2023-04-07 11:56:48
| 1
| 40,701
|
Dolphin
|
75,958,121
| 12,201,164
|
AWS EC2 user data: run python script at startup as non-root user
|
<p>When my EC2 VM (Ubuntu) starts, I'd like it to execute a python script <code>myscript.py</code> as <code>ec2-user</code> instead of the default <code>root</code> user. This includes using the <code>ec2-user</code>'s python executable, installed packages and binaries (e.g. chromedriver)</p>
<p>How can I do this?</p>
<p>What I did/ tried so far:</p>
<ul>
<li>logged in as <code>ec2-user</code>, I verified that <code>python myscript.py</code>, <code>python --version</code> and <code>which python</code> work as intended</li>
<li>logged in as <code>root</code>, running <code>python</code> results in <code>Command 'python' not found</code></li>
<li>logged in as <code>root</code>, running <code>su ec2-user -c python</code> results in <code>Command 'python' not found</code>. Same result with <code>sudo -u ec2-user bash -c 'python --version'</code></li>
<li>logged in as <code>root</code>, running <code>/path/to/ec2-user/python_executable /path/to/myscript.py</code> does start the script, but crashes due not not finding binaries (e.g. chromedriver)</li>
<li>adding <code>su ec2-user -c 'echo "Switched to User ${USER} with $(python --version) in $(which python)" >> /path/to/logfile.log'</code> adds the line <code>Switched to User ec2-user with in </code> to the logfile, hence does not assume the <code>ec2-user</code> user when calling <code>python</code></li>
</ul>
<p>So, how can I make my user <code>ec2-user</code> execute <code>myscript.py</code> within the execution of the user data at instance startup?</p>
<p>references:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/31157432/login-using-ec2-user-instead-of-root-using-user-data-in-aws">login using "ec2-user" instead of root using user data in aws</a></li>
<li><a href="https://stackoverflow.com/questions/57443700/running-command-in-userdata-as-a-not-root-user">Running command in UserData as a not-root user</a></li>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html</a></li>
</ul>
|
<python><amazon-web-services><amazon-ec2><su><ec2-userdata>
|
2023-04-07 11:47:10
| 1
| 398
|
Dr-Nuke
|
75,958,003
| 4,120,479
|
Unity Barracuda cannot import .onnx file correctly
|
<p>I'm trying to implement inferencing YoloV5 models on Unity C#.</p>
<p>I trained my own dataset, confirmed it works on python envs.</p>
<p>Also, using given export.py, I converted .pt file to .onnx file and checked it works on python envs for inference.</p>
<p>However, .onnx files cannot imported on Unity.</p>
<p>Oddly enough, when I tried this a month ago, it worked, as shown below.</p>
<p><a href="https://i.sstatic.net/ojtRi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ojtRi.png" alt="enter image description here" /></a></p>
<p>.onnx file has its own logo, shows specifications of network.</p>
<p>However, now, despite trying every solution I could, it is not loaded correctly.</p>
<p><a href="https://i.sstatic.net/p5x6I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p5x6I.png" alt="enter image description here" /></a></p>
<p>The new .onnx file that I exported has no icon and does not show the specifications.</p>
<p>In particular, it shows an error message like</p>
<pre><code>"Assets/Resources/MLModels/best_export2.onnx">OnnxImportException:Unexpected error while parsing layer /model.24/Split_output_0 of type Split
</code></pre>
<p>What's the difference between them?</p>
<p>Currently, I train YOLOv5 on Windows (tried both conda and a Docker image), with an export command like</p>
<pre><code>python export.py --weights best_export.py --batch 1 --img-size 640 480 --include onnx
</code></pre>
|
<python><unity-game-engine><onnx>
|
2023-04-07 11:29:35
| 1
| 521
|
Wooni
|
75,957,945
| 378,214
|
What is Spark's default cluster manager
|
<p>When using PySaprk and getting the Spark Session using following statement:</p>
<pre><code> spark = SparkSession.builder
.appName("sample-app")
.getOrCreate()
</code></pre>
<p>The app works fine, but I am unsure which cluster manager is being used with this Spark session. Is it local or standalone? I read through the docs but couldn't find this documented anywhere: they describe what the standalone and local cluster managers are, but don't mention which one is the default.</p>
|
<python><apache-spark><pyspark>
|
2023-04-07 11:20:24
| 1
| 2,263
|
Bagira
|
75,957,922
| 17,724,172
|
Python, Matplotlib horizontal bar chart
|
<p>How would I start the horizontal bars at different places on the x axis? Just a blank space of x days at the beginning of the bars would be okay. I may need to revise this using <code>ax.broken_barh</code>, but that seems like a lot of work and roughly twice as much data just to have the projects start on different days. Could I add a specified-width white section at the beginning, or something like that?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#Types of work:
category_names = ['Excavation & Foundation', 'Exterior Framing',
'Roofing', 'Interior Framing', 'Drywall, Finish & Painting']
#crew days needed on project:
days = {
'':[0,0,0,0,0], #this works for creating space at top
'Proj 1 budget': [10, 15, 7, 32, 26],
'Proj 1 actual': [10, 14, 7, 30, 28],
'Proj 2 budget': [15, 20, 8, 30, 20],
'Proj 2 actual': [15, 19, 8, 28, 20],
'Proj 3 budget': [7, 15, 5, 20, 20],
'Proj 3 actual': [7, 14, 5, 19, 20],
}
def crew(days, category_names):
labels = list(days.keys())
data = np.array(list(days.values()))
data_cum = data.cumsum(axis=1)
category_colors = plt.colormaps['RdYlGn'](
np.linspace(0.15, 0.95, data.shape[1]))
    fig, ax = plt.subplots(figsize=(10.4, 4.5))  # set graph length & width
ax.invert_yaxis()
ax.xaxis.set_visible(True)
title = ax.set_title('Tasks and Crew-Days')
title.set_position([0.5, 1.0]) #set title at center
ax.set_xlim(0, np.sum(data, axis=1).max())
plt.xlabel('Total Days')
for i, (colname, color) in enumerate(zip(category_names, category_colors)):
widths = data[:, i]
starts = data_cum[:, i] - widths
        rects = ax.barh(labels, widths, left=starts, height=0.75,  # bar height
label=colname, color=color)
r, g, b, _ = color
text_color = 'white' if r * g * b < 0.1 else 'black' #use one or the other
#text_color = 'black' #of these lines
ax.bar_label(rects, label_type='center', color=text_color)
ax.legend(ncols=len(category_names), bbox_to_anchor=(1.0, 1.00),
loc='upper right', fontsize='small')
return fig, ax
crew(days, category_names)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Pr0ZZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pr0ZZ.png" alt="enter image description here" /></a></p>
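One way (a sketch with made-up numbers) is to add a per-row offset to the cumulative starts before calling <code>ax.barh(..., left=starts)</code>, and widen <code>ax.set_xlim</code> by the largest offset. The arithmetic, reduced to a tiny two-project example:

```python
import numpy as np

# tiny example: 2 project rows, 2 task categories
data = np.array([[10, 15],
                 [7, 3]])
data_cum = data.cumsum(axis=1)
row_offsets = np.array([0, 5])  # hypothetical start day per project row

# inside the plotting loop this becomes:
#   starts = data_cum[:, i] - widths + row_offsets
starts_cat0 = data_cum[:, 0] - data[:, 0] + row_offsets
starts_cat1 = data_cum[:, 1] - data[:, 1] + row_offsets
```

Since `left=` is already used for stacking, shifting it this way needs no extra dummy segments and no `broken_barh`.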
|
<python><matplotlib><bar-chart>
|
2023-04-07 11:17:29
| 2
| 418
|
gerald
|
75,957,763
| 14,594,208
|
How to change the value of a Series element if preceded by N or more consecutive values to the value preceding it?
|
<p>Consider the following Series <code>s</code>:</p>
<pre class="lang-py prettyprint-override"><code>0 0
1 0
2 0
3 1
4 1
5 1
6 0
</code></pre>
<p>For <code>N = 3</code>, we can see that the items at indices <code>3</code> and <code>6</code> are both preceded by <code>>= N</code> consecutive occurrences of the same value. Hence, their value should change to the value of the item preceding them!</p>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>0 0
1 0
2 0
3 0
4 1
5 1
6 1
</code></pre>
<p>So, far I have come up with this:</p>
<pre class="lang-py prettyprint-override"><code>(s != s.shift()).cumsum()
</code></pre>
<p>It somehow assigns a group id to consecutive occurrences of a value, but I am not sure in which way I should proceed.</p>
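Building on that group id, one sketch (assuming replacements never need to cascade through runs that merge as a result): compute each run's length, then replace a run's first element with the preceding value whenever the preceding run is at least N long.

```python
import pandas as pd

s = pd.Series([0, 0, 0, 1, 1, 1, 0])
N = 3

grp = (s != s.shift()).cumsum()             # run ids, as in the question
run_len = s.groupby(grp).transform("size")  # length of the run each item belongs to
is_run_start = s != s.shift()
prev_run_len = run_len.shift()              # length of the run just before each item

mask = is_run_start & (prev_run_len >= N)
out = s.mask(mask, s.shift())               # take the preceding value where masked
```

For the example Series this yields `[0, 0, 0, 0, 1, 1, 1]`, matching the expected output.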
|
<python><pandas>
|
2023-04-07 10:49:22
| 1
| 1,066
|
theodosis
|
75,957,488
| 10,571,370
|
Python Pandas - drop duplicates from dataframe and merge the columns value
|
<p>I am trying to remove duplicates from my Dataframe and save their data into the columns where they are NA/Empty.</p>
<p>Example:
I have the following DataFrame, and I would like to remove all the duplicates in column A but merge the values from the rest of the columns.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>X</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>X</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td></td>
<td></td>
<td>X</td>
</tr>
</tbody>
</table>
</div>
<p>The expected output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>X</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>3</td>
<td></td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>How can I perform the above dynamically?</p>
<p>Thanks in advance for the answers</p>
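Since the empty cells are empty strings, one sketch is a plain groupby-max: `max('X', '')` is `'X'`, so each group collapses to the union of its marks.

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2, 2, 2, 3, 3, 2],
    "B": ["X", "X", "", "", "", "", ""],
    "C": ["", "", "X", "", "X", "", ""],
    "D": ["", "", "", "X", "", "X", ""],
    "E": ["", "", "", "", "", "", "X"],
})

# one row per A value; every other column keeps "X" over ""
merged = df.groupby("A", as_index=False).max()
```

If the missing cells were `NaN` rather than `""`, `groupby(...).max()` would skip them the same way, so the approach carries over.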
|
<python><pandas><dataframe><merge><duplicates>
|
2023-04-07 10:13:15
| 2
| 303
|
Darmon
|
75,957,447
| 12,130,817
|
YOLOv5: does best.pt control for overfitting?
|
<p>After each YOLOv5 training, two model files are saved: <code>last.pt</code> and <code>best.pt</code>. I'm aware that:</p>
<ul>
<li><code>last.pt</code> is the latest saved checkpoint of the model. This will be updated after each epoch.</li>
<li><code>best.pt</code> is the checkpoint that has the best validation loss so far. It is updated whenever the model fitness improves.</li>
</ul>
<p>Fitness is defined as a weighted combination of <code>mAP@0.5</code> and <code>mAP@0.5:0.95</code> metrics:</p>
<pre class="lang-py prettyprint-override"><code> def fitness(x):
# Returns fitness (for use with results.txt or evolve.txt)
w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
return (x[:, :4] * w).sum(1)
</code></pre>
<p>My question is, if the training continued for too many epochs (and <code>last.pt</code> is thus overfitted), is <code>best.pt</code> then a checkpoint from when the training was not yet overfit? In other words, does <code>best.pt</code> control for overfitting?</p>
|
<python><machine-learning><artificial-intelligence><yolov5><fitness>
|
2023-04-07 10:07:41
| 1
| 373
|
Peter
|
75,957,365
| 1,608,276
|
How to type annotate a decorator to make it Override-friendly?
|
<p>I find that adding a type annotation for a decorator causes a typing error in an overriding case in <strong>VSCode Pylance strict mode</strong>. The following is a minimal complete example (in Python 3.8 <em>or later?</em>):</p>
<pre><code>from datetime import datetime
from functools import wraps
from typing import Any
_persistent_file = open("./persistent.log", "a")
def to_json(obj: Any) -> str:
...
def persistent(class_method: ...):
@wraps(class_method)
async def _class_method(ref: Any, *args: Any):
rst = await class_method(ref, *args)
print(f'{ datetime.now().strftime("%Y-%m-%d %H:%M:%S") } { to_json(args) } { to_json(rst) }', file=_persistent_file)
return rst
return _class_method
class A:
async def f(self, x: int):
return x
class B (A):
@persistent
async def f(self, x: int):
return x + 1
</code></pre>
<p>Here, <code>B.f</code> will be marked as type error in pylance:</p>
<pre><code>"f" overrides method of same name in class "A" with incompatible type "_Wrapped[(...), Any, (ref: Any, *args: Any), Coroutine[Any, Any, ...]]"
Pylance (reportIncompatibleMethodOverride)
</code></pre>
<p>I find that removing <code>@wraps</code> works around this issue, but I need it to keep the metadata of the function (e.g. <code>__name__</code>).</p>
<p>Hoping for a perfect solution.</p>
|
<python><overriding><python-decorators><python-typing><pyright>
|
2023-04-07 09:55:52
| 1
| 3,895
|
luochen1990
|
75,957,315
| 5,561,649
|
How to detect that an operation on a folder won't be able to complete before starting said operation, from Python on Windows?
|
<p>If we want to move or delete a folder, but some of its contained files are currently being used, the operation will likely fail after it has already started, and the folder could be left in a half-way, "corrupt" state.</p>
<p>Is there a fast way to detect in advance that this will happen to avoid starting the operation?</p>
<p>Caveat: I realize that if we perform the check beforehand, it might pass, but a file might start being used during the operation. Maybe in that case the solution would be to restore the folder's contents as they were before the operation started.</p>
|
<python><windows>
|
2023-04-07 09:47:34
| 1
| 550
|
LoneCodeRanger
|
75,957,048
| 1,581,090
|
How to use "fcntl.lockf" in Python?
|
<p>I found <a href="https://stackoverflow.com/questions/1320812/detect-and-delete-locked-file-in-python">this question</a> but I do not know how to use the suggestion. I have tried</p>
<pre><code>with open(fullname) as filein:
fcntl.lockf(filein, fcntl.LOCK_EX | fcntl.LOCK_NB)
</code></pre>
<p>and</p>
<pre><code>with open(fullname) as filein:
fcntl.lockf(filein.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
</code></pre>
<p>but in both cases I get an error</p>
<blockquote>
<p>OSError: [Errno 9] Bad file descriptor</p>
</blockquote>
<p>I want to use this method to check if a file is "locked", and ideally to unlock it.</p>
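For reference, the <code>EBADF</code> is typically because the file was opened read-only while an exclusive lock requires a descriptor open for writing. A sketch (Unix only, with a temporary file standing in for <code>fullname</code>):

```python
import fcntl
import tempfile

# stand-in for `fullname`; opened for *writing* ("a"), so LOCK_EX is allowed
with tempfile.NamedTemporaryFile(mode="a") as filein:
    fcntl.lockf(filein, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises OSError if already locked
    locked = True
    # ... work with the file while holding the lock ...
    fcntl.lockf(filein, fcntl.LOCK_UN)                  # release the lock
```

The `OSError` (errno `EAGAIN`/`EACCES`) from a failed `LOCK_NB` attempt is what signals "locked by someone else"; only the lock holder can release it, so there is no general way to force-unlock another process's file.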
|
<python><fcntl>
|
2023-04-07 09:08:13
| 1
| 45,023
|
Alex
|
75,956,862
| 18,661,363
|
Why are gitlab jobs not sharing docker image?
|
<p>We have a Python project which requires a Postgres Docker image. The application runs fine on a local system after starting the Docker container.
We added two GitLab jobs: one to start the Docker container and another to run the Python script, but the second job, which depends on the first, does not execute successfully. Is there a reason the Docker container is not accessible in the second job?</p>
<p>gitlab-ci.yml</p>
<pre><code>stages:
- build
- test
build:
image: docker/compose:latest
services:
- docker:dind
script:
- docker-compose down --v
- docker-compose up -d
test:
image: python:3.8
needs: [build]
variables:
POETRY_VERSION: "1.1.15"
POETRY_CORE_VERSION: "1.0.8"
script:
- python --version
- POETRY_VIRTUALENVS_IN_PROJECT=true
- pip install poetry==${POETRY_VERSION} poetry-core==${POETRY_CORE_VERSION}
- poetry install --no-interaction --no-ansi
- poetry run pytest
</code></pre>
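For reference, each job runs in a fresh environment, so anything `docker-compose up` starts in `build` is gone by the time `test` runs on its (possibly different) runner. The usual pattern (a sketch; the image tag and credentials are placeholders) is to attach Postgres as a `services` entry of the job that needs it:

```yaml
test:
  image: python:3.8
  services:
    - name: postgres:14
      alias: db
  variables:
    POSTGRES_DB: app
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
  script:
    - pip install poetry
    - poetry install --no-interaction --no-ansi
    - poetry run pytest   # connect to the DB at host "db", port 5432
```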
|
<python><docker><gitlab-ci>
|
2023-04-07 08:46:00
| 2
| 329
|
Prashant kamble
|
75,956,779
| 19,574,336
|
Windows bazel says python is not an executable when building tflite
|
<p>I'm trying to build C++ TFLite for Windows using Bazel by following the <a href="https://www.tensorflow.org/install/source_windows" rel="nofollow noreferrer">official documentation</a>. So far I've installed everything it wants me to and added it to <code>PATH</code>. Then I cloned the GitHub repo and checked out the <code>r2.12</code> branch.</p>
<p>Then I ran <code>python ./configure.py</code> and selected default for everything (said yes to override eigen strong inline). When doing so it declared that my python is located on <code>C:\Users\Asus\AppData\Local\Programs\Python\Python311\python.exe</code>.</p>
<p>After that, running <code>bazel build -c opt //tensorflow/lite:tensorflowlite</code> in cmd, from the directory where I cloned TensorFlow, causes this error:</p>
<pre><code>Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=117
INFO: Reading rc options for 'build' from c:\users\asus\desktop\tensorflow\.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Options provided by the client:
'build' options: --python_path=C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe
INFO: Reading rc options for 'build' from c:\users\asus\desktop\tensorflow\.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from c:\users\asus\desktop\tensorflow\.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe --action_env PYTHON_LIB_PATH=C:/Users/Asus/AppData/Local/Programs/Python/Python311/Lib/site-packages --python_path=C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --define=override_eigen_strong_inline=true
INFO: Reading rc options for 'build' from c:\users\asus\desktop\tensorflow\.bazelrc:
'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file c:\users\asus\desktop\tensorflow\.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file c:\users\asus\desktop\tensorflow\.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:windows in file c:\users\asus\desktop\tensorflow\.bazelrc: --copt=/W0 --host_copt=/W0 --copt=/Zc:__cplusplus --host_copt=/Zc:__cplusplus --copt=/D_USE_MATH_DEFINES --host_copt=/D_USE_MATH_DEFINES --features=compiler_param_file --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --cxxopt=/std:c++17 --host_cxxopt=/std:c++17 --config=monolithic --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --copt=/Zc:preprocessor --host_copt=/Zc:preprocessor --linkopt=/DEBUG --host_linkopt=/DEBUG --linkopt=/OPT:REF --host_linkopt=/OPT:REF --linkopt=/OPT:ICF --host_linkopt=/OPT:ICF --verbose_failures --features=compiler_param_file --distinct_host_configuration=false
INFO: Found applicable config definition build:monolithic in file c:\users\asus\desktop\tensorflow\.bazelrc: --define framework_shared_object=false --define tsl_protobuf_header_only=false --experimental_link_static_libraries_once=false
INFO: Repository local_config_python instantiated at:
C:/users/asus/desktop/tensorflow/WORKSPACE:15:14: in <toplevel>
C:/users/asus/desktop/tensorflow/tensorflow/workspace2.bzl:957:19: in workspace
C:/users/asus/desktop/tensorflow/tensorflow/workspace2.bzl:104:21: in _tf_toolchains
Repository rule python_configure defined at:
C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl:298:35: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_python':
Traceback (most recent call last):
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 271, column 40, in _python_autoconf_impl
_create_local_python_repository(repository_ctx)
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 212, column 22, in _create_local_python_repository
_check_python_bin(repository_ctx, python_bin)
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 145, column 25, in _check_python_bin
auto_config_fail("--define %s='%s' is not executable. Is it the python binary?" % (
File "C:/users/asus/desktop/tensorflow/third_party/remote_config/common.bzl", line 12, column 9, in auto_config_fail
fail("%sConfiguration Error:%s %s\n" % (red, no_color, msg))
Error in fail: Configuration Error: --define PYTHON_BIN_PATH='C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe' is not executable. Is it the python binary?
ERROR: C:/users/asus/desktop/tensorflow/WORKSPACE:15:14: fetching python_configure rule //external:local_config_python: Traceback (most recent call last):
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 271, column 40, in _python_autoconf_impl
_create_local_python_repository(repository_ctx)
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 212, column 22, in _create_local_python_repository
_check_python_bin(repository_ctx, python_bin)
File "C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl", line 145, column 25, in _check_python_bin
auto_config_fail("--define %s='%s' is not executable. Is it the python binary?" % (
File "C:/users/asus/desktop/tensorflow/third_party/remote_config/common.bzl", line 12, column 9, in auto_config_fail
fail("%sConfiguration Error:%s %s\n" % (red, no_color, msg))
Error in fail: Configuration Error: --define PYTHON_BIN_PATH='C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe' is not executable. Is it the python binary?
INFO: Repository local_execution_config_python instantiated at:
C:/users/asus/desktop/tensorflow/WORKSPACE:15:14: in <toplevel>
C:/users/asus/desktop/tensorflow/tensorflow/workspace2.bzl:957:19: in workspace
C:/users/asus/desktop/tensorflow/tensorflow/workspace2.bzl:94:27: in _tf_toolchains
C:/users/asus/desktop/tensorflow/tensorflow/tools/toolchains/remote_config/configs.bzl:6:28: in initialize_rbe_configs
C:/users/asus/desktop/tensorflow/tensorflow/tools/toolchains/remote_config/rbe_config.bzl:158:27: in _tensorflow_local_config
Repository rule local_python_configure defined at:
C:/users/asus/desktop/tensorflow/third_party/py/python_configure.bzl:279:41: in <toplevel>
INFO: Repository go_sdk instantiated at:
C:/users/asus/desktop/tensorflow/WORKSPACE:23:14: in <toplevel>
C:/users/asus/desktop/tensorflow/tensorflow/workspace0.bzl:134:20: in workspace
C:/users/asus/_bazel_asus/ddsftcyc/external/com_github_grpc_grpc/bazel/grpc_extra_deps.bzl:36:27: in grpc_extra_deps
C:/users/asus/_bazel_asus/ddsftcyc/external/io_bazel_rules_go/go/private/sdk.bzl:431:28: in go_register_toolchains
C:/users/asus/_bazel_asus/ddsftcyc/external/io_bazel_rules_go/go/private/sdk.bzl:130:21: in go_download_sdk
Repository rule _go_download_sdk defined at:
C:/users/asus/_bazel_asus/ddsftcyc/external/io_bazel_rules_go/go/private/sdk.bzl:117:35: in <toplevel>
ERROR: Analysis of target '//tensorflow/lite:tensorflowlite' failed; build aborted: Configuration Error: --define PYTHON_BIN_PATH='C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe' is not executable. Is it the python binary?
INFO: Elapsed time: 224.163s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (32 packages loaded, 15 targets configured)
</code></pre>
<p>Looking at the main error part of this output: <code>ERROR: Analysis of target '//tensorflow/lite:tensorflowlite' failed; build aborted: Configuration Error: --define PYTHON_BIN_PATH='C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe' is not executable. Is it the python binary?</code></p>
<p>I checked whether Python is there, and it indeed is. Simply running <code>C:/Users/Asus/AppData/Local/Programs/Python/Python311/python.exe</code> in cmd opens Python as expected.</p>
<p>So I searched the internet for solutions: I removed the "python installers" from my system, added Python to PATH, and tried the same steps with the TensorFlow source zip instead of the cloned repository, but nothing worked.</p>
<p>Some people suggested changing some files inside the <code>py</code> directory of TensorFlow, but that didn't work either.</p>
<p>Why is this happening? What causes bazel to not see python even though it's there? How can I fix this and get a build with windows?</p>
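<p>For what it's worth, the failing check in <code>python_configure.bzl</code> is essentially an executability probe on the configured path. A minimal sketch of that probe (using <code>sys.executable</code> as a stand-in for the configured <code>PYTHON_BIN_PATH</code>, which is an assumption here):</p>

```python
import os
import sys

# Stand-in for the configured PYTHON_BIN_PATH (an assumption for this sketch).
python_bin = sys.executable

# Roughly what third_party/py/python_configure.bzl verifies before it
# proceeds: the configured interpreter must exist and be executable.
if not os.access(python_bin, os.X_OK):
    raise SystemExit(f"--define PYTHON_BIN_PATH='{python_bin}' is not executable.")
print("OK:", python_bin)
```

<p>If the real path passes this probe outside bazel, the problem is more likely quoting or path translation in the generated rc files (e.g. an MSYS bash mangling the Windows path) than the interpreter itself.</p>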
|
<python><c++><windows><tensorflow><bazel>
|
2023-04-07 08:32:37
| 1
| 859
|
Turgut
|
75,956,534
| 4,367,019
|
Select camera programmatically
|
<p>My program should select three cameras and take pictures with each of them.</p>
<p>I have the following code at the moment:</p>
<pre><code>def getCamera(camera):
graph = FilterGraph()
print("Camera List: ")
print(graph.get_input_devices())
#tbd get right Camera
try:
device = graph.get_input_devices().index("HD Pro Webcam C920")
except ValueError as e:
device = graph.get_input_devices().index("Integrated Webcam")
return device
</code></pre>
<p>The code above worked fine. But I have three similar cameras with the same name.</p>
<p>Output of this:</p>
<pre><code>graph = FilterGraph()
print("Camera List: ")
print(graph.get_input_devices())
</code></pre>
<p>The output is a list of three cameras, all with the same name. I thought they were in an array and that I could select them with this:</p>
<pre><code>device = graph.get_input_devices().index(0)
</code></pre>
<p>Like any other array.</p>
<p>But I can only access them by name, as in the first code example.</p>
<p>How can I access the cameras with index?</p>
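<p><code>.index()</code> only ever returns the first match, which is why identically named cameras can't be told apart by name. A minimal sketch of selecting by position instead (a plain list stands in for <code>graph.get_input_devices()</code> here, so no camera hardware is assumed):</p>

```python
def indices_for(devices, name):
    """Return every device index whose name matches, not just the first."""
    return [i for i, dev in enumerate(devices) if dev == name]

# Stand-in for graph.get_input_devices(): three identically named cameras.
devices = ["HD Pro Webcam C920", "HD Pro Webcam C920", "HD Pro Webcam C920"]
print(indices_for(devices, "HD Pro Webcam C920"))  # [0, 1, 2]
```

<p>The position in that list is the device index, so each entry can be opened directly by number (e.g. with <code>cv2.VideoCapture(i)</code>), bypassing the name lookup entirely.</p>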
|
<python><python-3.x><camera>
|
2023-04-07 07:57:13
| 2
| 5,638
|
Felix
|
75,956,472
| 8,071,608
|
Using a python 2.6 package in python 3.10
|
<h1>Background</h1>
<p>I want to learn python through creating an isometric RPG computer game. To make it a little more fun at the start, I thought I would instead of starting from scratch, alter another very basic isometric RPG and bit by bit make it my own*. I thought <a href="https://www.pygame.org/project/294" rel="nofollow noreferrer">this game</a> would be a suitable candidate. The problem is that it was made with python 2.6. I have already converted all the code to python 3, but when starting the debug process, I encountered the issue that the package <code>ocempgui</code> does not exist anymore. I cannot seem to install it with either <code>pip</code> or <code>conda</code>, because it is not in the repositories.</p>
<h1>Question</h1>
<p>Is there any way that I can use the <code>ocempgui</code> package in python 3? <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwj664rLopf-AhXI-KQKHVg_DVIQFnoECA8QAQ&url=https%3A%2F%2Fwww.pygame.org%2Fproject-OcempGUI-125-.html&usg=AOvVaw2bmBMBlYc6-1xmrAZnKD-6" rel="nofollow noreferrer">(link)</a></p>
<h1>PS</h1>
<p>If someone knows another basic isometric RPG written in python 3, please let me know.</p>
<p>*Some of you might say that it is better to start from scratch, but I think it is more important to stay motivated to work on it.</p>
|
<python><python-2.6>
|
2023-04-07 07:44:15
| 2
| 2,341
|
Tom
|
75,956,424
| 3,964,056
|
Retry func if it takes longer than 5 seconds or if it fails
|
<p>I'm in a situation where I have a func that calls a 3rd-party API using that company's native lib. Usually the response time is excellent, like 1-2 secs, but sometimes the func takes 30-50 seconds to get the response. I want to make the func work in a way that if it takes longer than 5 seconds, the call is cancelled and the func is retried.
I have tried several combos of tenacity and concurrent.futures, but nothing has really shown the outcome that I need.</p>
<p>Note: The 3rd party doesn't show as hung when it takes a long time to respond (that's another issue), so we need something that simply waits 5 seconds, kills the call, and retries.</p>
<p>Here is what I tried:</p>
<pre><code>@retry(wait=wait_random_exponential(min=3, max=5), stop=stop_after_attempt(3), retry_error_callback=lambda x: logger.info("Retrying getting vectors..."))
def get_vectors(namespace):
def query():
####SOME CODE for 3rd party API###
return some_output
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(query)
try:
return future.result(timeout=5)
except concurrent.futures.TimeoutError:
logger.info('5 secs timeout happened...')
future.cancel()
raise Exception("Function took too long to complete. Retrying...")
</code></pre>
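<p>One caveat with the snippet above: <code>future.cancel()</code> cannot stop a thread that has already started running, so the slow call keeps occupying a worker even after the timeout. A minimal, self-contained sketch of the wait-then-retry loop (with <code>time.sleep</code> standing in for the third-party call, which is an assumption here):</p>

```python
import concurrent.futures
import time

def call_with_timeout_retry(fn, timeout, attempts):
    """Run fn in a worker thread; on timeout, abandon the attempt and retry.

    Note: future.cancel() cannot stop a thread that has already started,
    so a truly stuck call keeps occupying a worker until it returns.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=attempts) as executor:
        for attempt in range(1, attempts + 1):
            future = executor.submit(fn)
            try:
                return future.result(timeout=timeout)
            except concurrent.futures.TimeoutError:
                future.cancel()  # no-op once running, but harmless
                print(f"attempt {attempt} timed out, retrying...")
    raise TimeoutError(f"no result after {attempts} attempts")

# time.sleep stands in for the third-party call.
result = call_with_timeout_retry(lambda: (time.sleep(0.1), "ok")[1],
                                 timeout=2.0, attempts=3)
print(result)
```

<p>If actually killing the stuck call matters (rather than just retrying sooner), running it in a separate process (e.g. <code>multiprocessing</code> with <code>terminate()</code>) is the usual way, at the cost of having to pickle the work.</p>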
|
<python><python-3.x><concurrent.futures><python-tenacity>
|
2023-04-07 07:37:15
| 0
| 984
|
PanDe
|
75,956,209
| 13,595,011
|
Error "'DataFrame' object has no attribute 'append'"
|
<p>I am trying to append a dictionary to a DataFrame object, but I get the following error:</p>
<blockquote>
<p>AttributeError: 'DataFrame' object has no attribute 'append'</p>
</blockquote>
<p>As far as I know, DataFrame does have the method "append".</p>
<p>Code snippet:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(df).append(new_row, ignore_index=True)
</code></pre>
<p>I was expecting the dictionary <code>new_row</code> to be added as a new row.</p>
<p>How can I fix it?</p>
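<p><code>DataFrame.append</code> was deprecated in pandas 1.4 and removed in pandas 2.0, which is exactly the AttributeError above. The replacement is <code>pd.concat</code>; a minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2]})
new_row = {"a": 3, "b": 4}

# pandas >= 2.0: wrap the dict in a one-row DataFrame and concatenate.
df = pd.concat([df, pd.DataFrame([new_row])], ignore_index=True)
print(df.shape)  # (2, 2)
```

<p><code>ignore_index=True</code> renumbers the rows so the appended row gets the next integer label, matching the old <code>append</code> behaviour.</p>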
|
<python><pandas><dataframe><concatenation><attributeerror>
|
2023-04-07 07:05:59
| 4
| 3,471
|
Maksimjeet Chowdhary
|
75,956,116
| 3,783,002
|
Subprocess with visual studio debugger attached to process causes a problem in python project
|
<p>I'm facing a very annoying issue with Visual Studio 2022. Here's how to replicate it.</p>
<p>Folder contents:</p>
<pre><code>test-debugpy-issue
| test-debugpy-issue.sln
|
\---test-debugpy-issue
cli.py
test-debugpy-issue.pyproj
test_debugpy_issue_simplified.py
</code></pre>
<p>Contents of <code>cli.py</code>:</p>
<pre><code>print("hello world")
</code></pre>
<p>Contents of <code>test_debugpy_issue_simplified</code>:</p>
<pre><code>import subprocess
import os
import json
print(os.getpid())
input()
configArgs = ["python", "cli.py"]
ret_code=0
while ret_code==0:
ret_code = subprocess.call(configArgs, shell=False, universal_newlines=True)
print(ret_code)
</code></pre>
<p>In order to replicate the issue, carry out the following steps:</p>
<ul>
<li>Open up a Powershell or CMD terminal</li>
<li>Run <code>python .\test_debugpy_issue_simplified.py</code></li>
<li>Copy the provided PID</li>
<li>In Visual Studio 2022 under <em>Debug > Attach to Process</em>, paste the PID, select the process as shown below:</li>
</ul>
<p><a href="https://i.sstatic.net/iki6g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iki6g.png" alt="enter image description here" /></a></p>
<ul>
<li>Click <em>Attach</em>, then wait in the <em>Output</em> window until the process has successfully attached. This apparently requires multiple attempts:</li>
</ul>
<p><a href="https://i.sstatic.net/aOVsH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aOVsH.png" alt="enter image description here" /></a></p>
<ul>
<li>Go back into your Powershell and CMD terminal and press <em>Enter</em></li>
<li>The following error should appear:</li>
</ul>
<blockquote>
<p>0.47s - Error importing debugpy._vendored.force_pydevd (with sys.path entry: 'c:\program files\microsoft visual studio\2022\professional\common7\ide\extensions\microsoft\python\core')
Traceback (most recent call last):
File "c:\program files\microsoft visual studio\2022\professional\common7\ide\extensions\microsoft\python\core\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_defaults.py", line 60, in on_pydb_init
__import__(module_name)
ModuleNotFoundError: No module named 'debugpy'</p>
</blockquote>
<p>I've found <a href="https://github.com/microsoft/debugpy/issues/1148" rel="nofollow noreferrer">this</a> issue on Github which describes a similar problem, but it's from the Vs Code repo. I'm not using VS Code.</p>
<p>Why is this happening and how can I fix it ?</p>
|
<python><visual-studio-2022><visual-studio-debugging><debugpy>
|
2023-04-07 06:50:30
| 1
| 6,067
|
user32882
|
75,956,066
| 6,333,825
|
How do I use an operand contained in a PySpark dataframe within a calculation?
|
<p>I have a PySpark dataframe that contains operands (specifically greater than, less than, etc). I want to use those operands to calculate a result using other values in the dataframe and create a new column with this new data. For instance:</p>
<pre><code>from pyspark.sql import Row
from pyspark.sql.functions import expr, when
df = spark.createDataFrame([
Row(id=1, value=3.0, operand='>', threshold=2. ),
Row(id=2, value=2.3, operand='>=', threshold=3. ),
Row(id=3, value=0.0, operand='==', threshold=0.0 )
])
df = df.withColumn('result', when(expr("value " + df.operand + " threshold"), True).otherwise(False))
df.show()
</code></pre>
<p>I would expect the following result:</p>
<pre><code>|id|value|operand|threshold|result|
|--|-----|-------|---------|------|
|1 | 3.0| >| 2.0| true|
|2 | 2.3| >=| 3.0| false|
|3 | 0.0| ==| 0.0| true|
</code></pre>
<p>But instead get the error <code>TypeError: Column is not iterable</code>. I have tried different mechanisms of extracting the operand value (i.e. <code>col("operand")</code>) but without success.</p>
<p>NB - I appreciate using <code>==</code> to determine if doubles are equal is not always reliable, but the use case allows it.</p>
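<p>The TypeError comes from <code>"value " + df.operand</code>: a Column cannot be concatenated into the Python string that <code>expr()</code> needs. Outside Spark, the dispatch-on-operand idea can be sketched with the <code>operator</code> module (plain Python, no Spark assumed):</p>

```python
import operator

# Map operand strings to comparison functions.
OPS = {">": operator.gt, ">=": operator.ge, "==": operator.eq,
       "<": operator.lt, "<=": operator.le}

rows = [(3.0, ">", 2.0), (2.3, ">=", 3.0), (0.0, "==", 0.0)]
results = [OPS[op](value, threshold) for value, op, threshold in rows]
print(results)  # [True, False, True]
```

<p>In PySpark itself the same dispatch is usually expressed as a chain of <code>when(col("operand") == ">", col("value") > col("threshold"))</code> branches, one per supported operand, so no per-row string ever needs to reach <code>expr()</code>.</p>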
|
<python><dataframe><pyspark>
|
2023-04-07 06:40:56
| 1
| 586
|
Concrete_Buddha
|
75,955,739
| 2,272,824
|
How to select the column from a Polars Dataframe that has the largest sum?
|
<p>I have a Polars dataframe with a bunch of columns, and I need to find the column with, for example, the largest sum.</p>
<p>The below snippet sums all of the columns:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"a": [0, 1, 3, 4],
"b": [0, 0, 0, 0],
"c": [1, 0, 1, 0],
}
)
max_col = df.select(pl.col(df.columns).sum())
</code></pre>
<pre><code>shape: (1, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 8 ┆ 0 ┆ 2 │
└─────┴─────┴─────┘
</code></pre>
<p>But I'm missing the last step of selecting the column with the largest value?</p>
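<p>Once the sums exist, the missing step is just an argmax over one row. A plain-Python sketch of that step (a dict stands in for the one-row frame, so polars itself isn't assumed):</p>

```python
# Column sums as produced by the select above, shown as a plain dict.
sums = {"a": 8, "b": 0, "c": 2}

# The column whose sum is largest.
max_col = max(sums, key=sums.get)
print(max_col)  # a
```

<p>With polars, the same values can be pulled into Python via something like <code>dict(zip(df.columns, df.sum().row(0)))</code> and fed to the same <code>max</code> call.</p>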
|
<python><python-polars>
|
2023-04-07 05:45:37
| 3
| 391
|
scotsman60
|
75,955,658
| 1,220,955
|
Getting slow response form google OR tools
|
<p>I am using the OR-Tools Python library to solve the cutting stock problem, but for the last few days it has been taking too long to respond: more than 1 minute.</p>
<p>I am using Python code as an API and passing the below data.</p>
<p>API Data :</p>
<pre><code>{
"child_rolls": [
{
"roll": 3,
"size": 1490
},
{
"roll": 8,
"size": 1490
},
{
"roll": 23,
"size": 1500
},
{
"roll": 9,
"size": 1500
},
{
"roll": 8,
"size": 1480
},
{
"roll": 30,
"size": 1480
}
],
"parent_rolls": [
{
"roll": null,
"size": 4470
}
]
}
</code></pre>
<p>When I pass the above data to the API, it takes more than 1 minute to return the output. In other words, when I pass large data, my GCP Python script responds slowly. So, I need your help to improve performance for all types of data. Could you please guide me and help resolve this issue?</p>
<p><strong>Python code:</strong></p>
<pre><code>import pdb
import math
from ortools.linear_solver import pywraplp
from math import ceil
from random import randint
import json
# from read_lengths import get_data
# import typer
from typing import Optional
from flask import Flask
from flask_cors import CORS
from flask import request
def newSolver(name, integer=False):
return pywraplp.Solver(name,
pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING
if integer else
pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)
'''
return a printable value
'''
def SolVal(x):
if type(x) is not list:
return 0 if x is None \
else x if isinstance(x, (int, float)) \
else x.SolutionValue() if x.Integer() is False \
else int(x.SolutionValue())
elif type(x) is list:
return [SolVal(e) for e in x]
def ObjVal(x):
return x.Objective().Value()
def gen_data(num_orders):
R = [] # small rolls
# S=0 # seed?
for i in range(num_orders):
R.append([randint(1, 12), randint(5, 40)])
return R
def solve_model(demands, parent_width=100,parent_main_width=0):
'''
demands = [
[1, 3], # [quantity, width]
[3, 5],
...
]
parent_width = integer
'''
num_orders = len(demands)
solver = newSolver('Cutting Stock', True)
k, b = bounds(demands, parent_width)
# array of boolean declared as int, if y[i] is 1,
# then y[i] Big roll is used, else it was not used
y = [solver.IntVar(0, 1, f'y_{i}') for i in range(k[1])]
# x[i][j] = 3 means that small-roll width specified by i-th order
# must be cut from the j-th big roll, 3 times
x = [[solver.IntVar(0, b[i], f'x_{i}_{j}') for j in range(k[1])]
for i in range(num_orders)]
# unused_widths = [solver.NumVar(0, parent_width, f'w_{j}')
# for j in range(k[1])]
unused_widths = []
for j in range(k[1]):
print("J",j)
var_name = f'w_{j}'
var = solver.NumVar(0, parent_width, var_name)
unused_widths.append(var)
print(f"Created variable {var_name} with domain [0, {parent_width}] and value {var}.")
#print("SolVal",SolVal(unused_widths))
# will contain the number of big rolls used
nb = solver.IntVar(k[0], k[1], 'nb')
print('1st of nb when intilize',nb)
# constraint: demand fulfillment
for i in range(num_orders):
# small rolls from i-th order must be at least as many in quantity
# as specified by the i-th order
solver.Add(sum(x[i][j] for j in range(k[1])) >= demands[i][0])
# constraint: max size limit
for j in range(k[1]):
# total width of small rolls cut from j-th big roll,
# must not exceed big rolls width
solver.Add(
sum(demands[i][1]*x[i][j] for i in range(num_orders))
<= parent_width*y[j]
)
# width of j-th big roll - total width of all orders cut from j-th roll
# must be equal to unused_widths[j]
# So, we are saying that assign unused_widths[j] the remaining width of j'th big roll
solver.Add(parent_main_width*y[j] - sum(demands[i][1]*x[i][j]
for i in range(num_orders)) == unused_widths[j])
'''
Book Author's note from page 201:
[the following constraint] breaks the symmetry of multiple solutions that are equivalent
for our purposes: any permutation of the rolls. These permutations, and there are K! of
them, cause most solvers to spend an exorbitant time solving. With this constraint, we
tell the solver to prefer those permutations with more cuts in roll j than in roll j + 1.
The reader is encouraged to solve a medium-sized problem with and without this
symmetry-breaking constraint. I have seen problems take 48 hours to solve without the
constraint and 48 minutes with. Of course, for problems that are solved in seconds, the
constraint will not help; it may even hinder. But who cares if a cutting stock instance
solves in two or in three seconds? We care much more about the difference between two
minutes and three hours, which is what this constraint is meant to address
'''
if j < k[1]-1: # k1 = total big rolls
# total small rolls of i-th order cut from j-th big roll must be >=
# total small rolls of i-th order cut from j+1-th big roll
solver.Add(sum(x[i][j] for i in range(num_orders))
>= sum(x[i][j+1] for i in range(num_orders)))
# find & assign to nb, the number of big rolls used
solver.Add(nb == solver.Sum(y[j] for j in range(k[1])))
print('2nd nb while executing',nb)
'''
minimize total big rolls used
let's say we have y = [1, 0, 1]
here, total big rolls used are 2. 0-th and 2nd. 1st one is not used. So we want our model to use the
earlier rolls first. i.e. y = [1, 1, 0].
The trick to do this is to define the cost of using each next roll to be higher. So the model would be
forced to used the initial rolls, when available, instead of the next rolls.
So instead of Minimize ( Sum of y ) or Minimize( Sum([1,1,0]) )
we Minimize( Sum([1*1, 1*2, 1*3]) )
'''
'''
Book Author's note from page 201:
There are alternative objective functions. For example, we could have minimized the sum of the waste. This makes sense, especially if the demand constraint is formulated as an inequality. Then minimizing the sum of waste Chapter 7 advanCed teChniques
will spend more CPU cycles trying to find more efficient patterns that over-satisfy demand. This is especially good if the demand widths recur regularly and storing cut rolls in inventory to satisfy future demand is possible. Note that the running time will grow quickly with such an objective function
'''
Cost = solver.Sum((j+1)*y[j] for j in range(k[1]))
# print("Cost == ",Cost)
solver.Minimize(Cost)
# solver.Add(Cost)
#print("COST DATA = ",solver.Minimize(Cost))
status = solver.Solve()
numRollsUsed = SolVal(nb)
#nb, x, w, demands
return status, \
numRollsUsed, \
rolls(numRollsUsed, SolVal(x), SolVal(unused_widths), demands), \
SolVal(unused_widths), \
solver.WallTime()
def bounds(demands, parent_width=100):
'''
b = [sum of widths of individual small rolls of each order]
T = local var. stores sum of widths of adjacent small-rolls. When the width reaches 100%, T is set to 0 again.
k = [k0, k1], k0 = minimum big-rolls required, k1: number of big rolls that can be consumed / cut from
TT = local var. stores sum of widths of all small-rolls. At the end, will be used to estimate lower bound of big-rolls
'''
num_orders = len(demands)
b = []
T = 0
k = [0, 1]
TT = 0
for i in range(num_orders):
# q = quantity, w = width; of i-th order
quantity, width = demands[i][0], demands[i][1]
# TODO Verify: why min of quantity, parent_width/width?
# assumes widths to be entered as percentage
# int(round(parent_width/demands[i][1])) will always be >= 1, because widths of small rolls can't exceed parent_width (which is width of big roll)
b.append( min(demands[i][0], int(round(parent_width / demands[i][1]))) )
#b.append(min(quantity, int(round(parent_width / width))))
# if total width of this i-th order + previous order's leftover (T) is less than parent_width
# it's fine. Cut it.
if T + quantity*width <= parent_width:
T, TT = T + quantity*width, TT + quantity*width
# else, the width exceeds, so we have to cut only as much as we can cut from parent_width width of the big roll
else:
while quantity:
if T + width <= parent_width:
T, TT, quantity = T + width, TT + width, quantity-1
else:
k[1], T = k[1]+1, 0 # use next roll (k[1] += 1)
k[0] = int(round(TT/parent_width+0.5))
print('k', k)
print('b', b)
return k, b
'''
nb: array of number of rolls to cut, of each order
w:
demands: [
[quantity, width],
[quantity, width],
[quantity, width],
]
'''
def rolls(nb, x, w, demands):
consumed_big_rolls = []
num_orders = len(x)
# go over first row (1st order)
# this row contains the list of all the big rolls available, and if this 1st (0-th) order
# is cut from any big roll, that big roll's index would contain a number > 0
for j in range(len(x[0])):
# w[j]: width of j-th big roll
# int(x[i][j]) * [demands[i][1]] width of all i-th order's small rolls that are to be cut from j-th big roll
h=0
print ("int(x[i][j]",int(x[h][j]))
h=h+1
RR = [abs(w[j])] + [int(x[i][j])*[demands[i][1]] for i in range(num_orders)
if x[i][j] > 0] # if i-th order has some cuts from j-th order, x[i][j] would be > 0
print("RR ",RR)
consumed_big_rolls.append(RR)
return consumed_big_rolls
'''
this model starts with some patterns and then optimizes those patterns
'''
def solve_large_model(demands, parent_width=100):
num_orders = len(demands)
iter = 0
patterns = get_initial_patterns(demands)
# print('method#solve_large_model, patterns', patterns)
# list quantities of orders
quantities = [demands[i][0] for i in range(num_orders)]
print('quantities', quantities)
while iter < 20:
status, y, l = solve_master(
patterns, quantities, parent_width=parent_width)
iter += 1
# list widths of orders
widths = [demands[i][1] for i in range(num_orders)]
new_pattern, objectiveValue = get_new_pattern(
l, widths, parent_width=parent_width)
# print('method#solve_large_model, new_pattern', new_pattern)
# print('method#solve_large_model, objectiveValue', objectiveValue)
for i in range(num_orders):
# add i-th cut of new pattern to i-thp pattern
patterns[i].append(new_pattern[i])
status, y, l = solve_master(
patterns, quantities, parent_width=parent_width, integer=True)
return status, \
patterns, \
y, \
rolls_patterns(patterns, y, demands, parent_width=parent_width)
'''
Dantzig-Wolfe decomposition splits the problem into a Master Problem MP and a sub-problem SP.
The Master Problem: provided a set of patterns, find the best combination satisfying the demand
C: patterns
b: demand
'''
def solve_master(patterns, quantities, parent_width=100, integer=False):
title = 'Cutting stock master problem'
num_patterns = len(patterns)
n = len(patterns[0])
# print('**num_patterns x n: ', num_patterns, 'x', n)
# print('**patterns recived:')
# for p in patterns:
# print(p)
constraints = []
solver = newSolver(title, integer)
# y is not boolean, it's an integer now (as compared to y in approach used by solve_model)
y = [solver.IntVar(0, 1000, '') for j in range(n)] # right bound?
# minimize total big rolls (y) used
Cost = sum(y[j] for j in range(n))
solver.Minimize(Cost)
# for every pattern
for i in range(num_patterns):
# add constraint that this pattern (demand) must be met
# there are m such constraints, for each pattern
constraints.append(solver.Add(
sum(patterns[i][j]*y[j] for j in range(n)) >= quantities[i]))
status = solver.Solve()
y = [int(ceil(e.SolutionValue())) for e in y]
l = [0 if integer else constraints[i].DualValue()
for i in range(num_patterns)]
# sl = [0 if integer else constraints[i].name() for i in range(num_patterns)]
# print('sl: ', sl)
# l = [0 if integer else u[i].Ub() for i in range(m)]
toreturn = status, y, l
# l_to_print = [round(dd, 2) for dd in toreturn[2]]
# print('l: ', len(l_to_print), '->', l_to_print)
# print('l: ', toreturn[2])
return toreturn
def get_new_pattern(l, w, parent_width=100):
solver = newSolver('Cutting stock sub-problem', True)
n = len(l)
new_pattern = [solver.IntVar(0, parent_width, '') for i in range(n)]
# maximizes the sum of the values times the number of occurrence of that roll in a pattern
Cost = sum(l[i] * new_pattern[i] for i in range(n))
solver.Maximize(Cost)
# ensuring that the pattern stays within the total width of the large roll
solver.Add(sum(w[i] * new_pattern[i] for i in range(n)) <= parent_width)
status = solver.Solve()
return SolVal(new_pattern), ObjVal(solver)
'''
the initial patterns must be such that they will allow a feasible solution,
one that satisfies all demands.
Considering the already complex model, let’s keep it simple.
Our initial patterns have exactly one roll per pattern, as obviously feasible as inefficient.
'''
def get_initial_patterns(demands):
num_orders = len(demands)
return [[0 if j != i else 1 for j in range(num_orders)]
for i in range(num_orders)]
def rolls_patterns(patterns, y, demands, parent_width=100):
R, m, n = [], len(patterns), len(y)
for j in range(n):
for _ in range(y[j]):
RR = []
for i in range(m):
if patterns[i][j] > 0:
RR.extend([demands[i][1]] * int(patterns[i][j]))
used_width = sum(RR)
R.append([parent_width - used_width, RR])
return R
'''
checks if all small roll widths (demands) smaller than parent roll's width
'''
def checkWidths(demands, parent_width):
for quantity, width in demands:
if width > parent_width:
print(
f'Small roll width {width} is greater than parent rolls width {parent_width}. Exiting')
return False
return True
'''
params
child_rolls:
list of lists, each containing quantity & width of rod / roll to be cut
e.g.: [ [quantity, width], [quantity, width], ...]
parent_rolls:
list of lists, each containing quantity & width of rod / roll to cut from
e.g.: [ [quantity, width], [quantity, width], ...]
'''
def StockCutter1D(child_rolls, parent_rolls, output_json=True, large_model=True):
# at the moment, only parent one width of parent rolls is supported
# quantity of parent rolls is calculated by algorithm, so user supplied quantity doesn't matter?
# TODO: or we can check and tell the user when parent roll quantity is insufficient
# pdb.set_trace()
parent_width_main = parent_rolls[0][1]
qua_arr = []
for item in range(len(child_rolls)):
qua_arr.append(child_rolls[item][1])
# # print(item)
# # print(qua_arr)
# #
if min(qua_arr) <=20:
parent_width_second = min(qua_arr)*7
else:
parent_width_second = parent_width_main
if parent_width_second > parent_width_main:
parent_width = parent_width_main
else:
parent_width = parent_width_second
print("parent_width_main==",parent_width_main)
print("parent_width_second==",parent_width_second)
print("parent_width==",parent_width)
if not checkWidths(demands=child_rolls, parent_width=parent_width):
return []
print('child_rolls', child_rolls)
print('parent_rolls', parent_rolls)
if not large_model:
print('Running Small Model...')
status, numRollsUsed, consumed_big_rolls, unused_roll_widths, wall_time = \
solve_model(demands=child_rolls, parent_width=parent_width,parent_main_width=parent_width_main)
# convert the format of output of solve_model to be exactly same as solve_large_model
print('consumed_big_rolls before adjustment: ', consumed_big_rolls)
new_consumed_big_rolls = []
for big_roll in consumed_big_rolls:
if len(big_roll) < 2:
# sometimes solve_model returns a solution that contains an extra [0.0] entry for a big roll
consumed_big_rolls.remove(big_roll)
continue
unused_width = big_roll[0]
subrolls = []
for subitem in big_roll[1:]:
if isinstance(subitem, list):
# if it's a list, concatenate with the other lists, to make a single list for this big_roll
subrolls = subrolls + subitem
else:
# if it's an integer, add it to the list
subrolls.append(subitem)
new_consumed_big_rolls.append([unused_width, subrolls])
print('consumed_big_rolls after adjustment: ', new_consumed_big_rolls)
consumed_big_rolls = new_consumed_big_rolls
else:
print('Running Large Model...')
status, A, y, consumed_big_rolls = solve_large_model(
demands=child_rolls, parent_width=parent_width)
numRollsUsed = len(consumed_big_rolls)
# print('A:', A, '\n')
# print('y:', y, '\n')
STATUS_NAME = ['OPTIMAL',
'FEASIBLE',
'INFEASIBLE',
'UNBOUNDED',
'ABNORMAL',
'NOT_SOLVED'
]
output = {
"statusName": STATUS_NAME[status],
"numSolutions": '1',
"numUniqueSolutions": '1',
"numRollsUsed": numRollsUsed,
"solutions": consumed_big_rolls # unique solutions
}
# print('Wall Time:', wall_time)
print('numRollsUsed', numRollsUsed)
print('Status:', output['statusName'])
print('Solutions found :', output['numSolutions'])
print('Unique solutions: ', output['numUniqueSolutions'])
if output_json:
return output
else:
return consumed_big_rolls
'''
Draws the big rolls on the graph. Each horizontal colored line represents one big roll.
In each big roll (multi-colored horizontal line), each color represents small roll to be cut from it.
If the big roll ends with a black color, that part of the big roll is unused width.
TODO: Assign each child roll a unique color
'''
if __name__ == '__main__':
app = Flask(__name__)
# Enable the CORS
CORS(app, resources={r"/*": {"origins": "*"}})
@app.route("/")
def index():
return "This is index page, please go ahead...! "
@app.route("/track", methods=['POST'])
def track():
try:
print('request.data==', request.get_json())
req_data = request.get_json()
child_rolls = []
parent_rolls = []
child_rolls_obj = req_data.get('child_rolls', [])
for item in child_rolls_obj:
data = list(item.values())
# data.reverse()
child_rolls.append(data)
parent_rolls_obj = req_data.get('parent_rolls', [])
parent_rolls.append(list(parent_rolls_obj[0].values()))
# pdb.set_trace()
# number_of_cut = req_data.get('number_of_cut', [])
if(not(len(child_rolls) and len(parent_rolls))):
return {"success": False, "message": "Request data is not enough!"}
consumed_big_rolls = StockCutter1D(
child_rolls, parent_rolls, output_json=True, large_model=False)
# return {"success": True, "data": {"solutions": consumed_big_rolls}}
return consumed_big_rolls
except Exception as error:
return {"success": False, "message": "Something went wrong!", "error": error}
# raise error
app.config["FLASK_APP"] = 'main.py'
app.config["FLASK_ENV"] = 'development'
app.run(host="0.0.0.0", debug=True, port=5000, use_reloader=False)
</code></pre>
<p>Expected output be like this:</p>
<pre><code>{
"numRollsUsed": 29,
"numSolutions": "1",
"numUniqueSolutions": "1",
"solutions": [
[
0.0,
[
1490,
1500,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
29.999999999999957,
[
1480,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
29.999999999999957,
[
1480,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
0.0,
[
1490,
1500,
1480
]
],
[
10.000000000000151,
[
1500,
1480,
1480
]
],
[
0.0,
[
1490,
1490,
1490
]
],
[
1469.9999999999998,
[
1500,
1500
]
],
[
1469.9999999999998,
[
1500,
1500
]
],
[
1469.9999999999998,
[
1500,
1500
]
],
[
1469.9999999999998,
[
1500,
1500
]
],
[
1469.9999999999998,
[
1500,
1500
]
],
[
1469.9999999999998,
[
1500,
1500
]
]
],
"statusName": "OPTIMAL"
}
</code></pre>
|
<python><python-3.x><response><or-tools>
|
2023-04-07 05:23:54
| 1
| 4,547
|
Hkachhia
|
75,955,596
| 9,371,736
|
Azure Cognitive Search: got an unexpected keyword argument 'query_language' in python vscode
|
<p>I'm trying to use semantic-search-enabled Azure Cognitive Search in my Flask app (in a Python virtual env).
When I run <code>pip install azure-search-documents</code>, version 11.3.0 gets installed,
and I get the following error:</p>
<pre><code>TypeError: Session.request() got an unexpected keyword argument 'query_language'
</code></pre>
<p>The same happens for <code>query_speller</code>, <code>semantic_configuration_name</code>, etc.</p>
<p>here is the code I'm using:</p>
<pre><code>results = list(
    self.search_client.search(search_text="xxx",
                              query_type="semantic",
                              query_language="en-us",
                              query_speller="lexicon",
                              semantic_configuration_name="xxx",
                              top=3,
                              captions=None))
</code></pre>
<p>below code works fine</p>
<pre><code>results = list(self.search_client.search(search_text="xxx"))
</code></pre>
|
<python><azure><visual-studio-code><azure-cognitive-search><semantic-search>
|
2023-04-07 05:07:10
| 2
| 307
|
Swasti
|
75,955,541
| 5,302,323
|
How to use Macbook terminal caffeinate to stop macbook from going to sleep while running Jupyter code on Chrome?
|
<p>I'd like to keep my battery-saving options but ensure my MacBook stays awake and keeps running my code instead of going to sleep.</p>
<p>From what I understand, there is a command called caffeinate that does just this.</p>
<p>My Chrome has several windows / tabs. How do I tell the Terminal to focus specifically on the tab which is running my Jupyter code?</p>
|
<python><google-chrome><jupyter>
|
2023-04-07 04:52:32
| 1
| 365
|
Cla Rosie
|
75,955,538
| 2,368,205
|
Is PIL (pillow) corrupting my image files?
|
<p>Here's the basic code:</p>
<pre><code>file = r"C:\DLS\sampleImage.jpg"  # raw string: avoids invalid escape sequences like \D
image = Image.open(file)
image.close()
print(str(image.width) + "x" + str(image.height))
</code></pre>
<p>Given a regular .jpg image file, I can open it fine. After I run this script, the file size goes to 1KB (or likely less) and is completely corrupted. These are all images I downloaded from Discogs. They were fine to start. In fact, the thumbnails all stayed valid because I didn't run this script on them.</p>
<p>Pillow: c:\python310\lib\site-packages (9.5.0)</p>
<p>Python: Python 3.10.4</p>
<p>I initially had this code running right after the download, but I began assuming that it was because Windows hadn't finished "writing" the file and I was messing with it too soon. I tried time.sleep(5) as a test. No good. I set up a new run against the files <em>well</em> after they were downloaded. It still corrupts them. I'm not even trying to modify the image here at all.</p>
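As a sanity check it may help to isolate Pillow from the filesystem entirely. The sketch below (self-contained, using an in-memory buffer instead of your actual files) shows the context-manager pattern Pillow recommends for opening images; on its own, `Image.open` only reads, so if the files still shrink, it would be worth checking whether some other part of the script (or the downloader) reopens the same path in write mode:

```python
import io
from PIL import Image

# Create a tiny image in memory so the example is self-contained
buf = io.BytesIO()
Image.new("RGB", (4, 3)).save(buf, format="PNG")
buf.seek(0)

# The context manager closes the underlying file handle cleanly after reading
with Image.open(buf) as im:
    width, height = im.size

print(f"{width}x{height}")
```

Running the same pattern against a real path (`with Image.open(file) as im:`) should leave the file byte-for-byte unchanged.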
|
<python><image><python-imaging-library>
|
2023-04-07 04:51:27
| 0
| 658
|
Will Belden
|
75,955,289
| 4,883,787
|
Cannot use pip to download scipy for wheel
|
<p>I need to install some Python packages in another computer that cannot connect to the Internet. I tried to use the <code>pip</code> command to access the packages I listed in <code>packages.txt</code> and put the wheel files in <code>./packs</code>. The full command is as follows,</p>
<pre><code>pip3 download -d ./packs -r packages.txt -i https://mirrors.aliyun.com/pypi/simple/
</code></pre>
<p>However, <code>scipy</code> cannot be turned into the wheel format successfully. As a result, some packages (e.g., <code>scikit-learn</code>, <code>statsmodels</code>) that depend on <code>scipy</code> cannot be downloaded successfully either. The full error message is shown below.</p>
<pre><code>Downloading https://mirrors.aliyun.com/pypi/packages/84/a9/2bf119f3f9cff1f376f924e39cfae18dec92a1514784046d185731301281/scipy-1.10.1.tar.gz (42.4 MB)
|████████████████████████████████| 42.4 MB 726 kB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\liding\appdata\local\programs\python\python38-32\python.exe' 'c:\users\liding\appdata\local\programs\python\python38-32\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\liding\AppData\Local\Temp\tmpcp63m4lc'
cwd: C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7
Complete output (21 lines):
+ meson setup --prefix=c:\users\liding\appdata\local\programs\python\python38-32 C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7 C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7\.mesonpy-tq02oyez\build --native-file=C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7\.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
The Meson build system
Version: 1.0.1
Source dir: C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7
Build dir: C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7\.mesonpy-tq02oyez\build
Build type: native build
Project name: SciPy
Project version: 1.10.1
WARNING: Failed to activate VS environment: Could not parse vswhere.exe output
..\..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] The system cannot find the file specified"
Running `cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `cc --version` gave "[WinError 2] The system cannot find the file specified"
Running `gcc --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified"
A full log can be found at C:\Users\liding\AppData\Local\Temp\pip-download-8bds6eo1\scipy_8794ca23ac8349ecb38e96bac61f61b7\.mesonpy-tq02oyez\build\meson-logs\meson-log.txt
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\liding\appdata\local\programs\python\python38-32\python.exe' 'c:\users\liding\appdata\local\programs\python\python38-32\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\liding\AppData\Local\Temp\tmpcp63m4lc' Check the logs for full command output.
</code></pre>
<p>Any ideas on how to resolve this issue? Thanks!</p>
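The log shows pip falling back to building scipy from a source distribution (the Meson step fails for lack of a C compiler), which usually means no prebuilt wheel matched the 32-bit Python 3.8 interpreter. One way to fail fast instead of compiling is to restrict pip to wheels only. The flags below are real `pip download` options, but the platform/version values are assumptions you would adjust to match the offline machine:

```shell
# Refuse source distributions; fetch wheels for the target interpreter only.
# Adjust --platform / --python-version / --implementation to the offline machine.
python -m pip download -d ./packs -r packages.txt \
    --only-binary=:all: \
    --platform win32 \
    --python-version 38 \
    --implementation cp \
    -i https://mirrors.aliyun.com/pypi/simple/
```

If this then reports "no matching distribution" for scipy, no wheel exists for that platform combination at all, and pinning an older scipy release that still shipped wheels for your platform (or moving to 64-bit Python) may be the practical fix.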
|
<python><pip><scipy><python-wheel>
|
2023-04-07 03:38:18
| 1
| 723
|
Ding Li
|
75,955,196
| 10,263,660
|
How to alias a type used in a class before it is declared?
|
<p>I have code similar to this example</p>
<pre class="lang-py prettyprint-override"><code>import typing

class C():
    def __init__(self, callback: typing.Callable[[C], int]):
        self._callback = callback

    def getCallback(self) -> typing.Callable[[C], int]:
        return self._callback

class C2():
    def __init__(self, cInstance: C):
        self._cInstance = cInstance

    def f(self) -> None:
        self._cInstance.getCallback()(self._cInstance)
</code></pre>
<p>and I want to add a type alias for <code>typing.Callable[[C], int]</code></p>
<p>However when I try to do this</p>
<pre class="lang-py prettyprint-override"><code>import typing

CallbackType = typing.Callable[[C], int]  # ERROR: C not defined

class C():
    def __init__(self, callback: CallbackType):
        self._callback = callback

    def getCallback(self) -> CallbackType:
        return self._callback

class C2():
    def __init__(self, cInstance: C):
        self._cInstance = cInstance

    def f(self) -> None:
        self._cInstance.getCallback()(self._cInstance)
</code></pre>
<p>I get the error that <code>C</code> was not defined at the time. If I define the CallbackType after <code>class C</code>, then CallbackType is not defined in <code>C</code>'s <code>__init__</code>. In the example the typing is short enough, but in my actual code it's quite complex, which is why I want to add the alias.</p>
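One standard way around this is a string forward reference: writing the not-yet-defined class name as `"C"` defers its resolution until the annotation is actually inspected. A minimal sketch of the pattern (not specific to your codebase):

```python
import typing

# "C" as a string is a forward reference: it is only resolved when the
# annotation is inspected (e.g. by typing.get_type_hints), so the class
# need not exist yet when the alias is created.
CallbackType = typing.Callable[["C"], int]

class C:
    def __init__(self, callback: CallbackType):
        self._callback = callback

    def getCallback(self) -> CallbackType:
        return self._callback

c = C(lambda inst: 42)
print(c.getCallback()(c))  # 42
```

On Python 3.7+, `from __future__ import annotations` achieves much the same effect by making every annotation lazy, in which case you could define the alias after `class C` and still use it inside the class body.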
|
<python><python-typing>
|
2023-04-07 03:06:45
| 1
| 2,487
|
FalcoGer
|
75,955,154
| 8,075,540
|
exit not raising SystemExit in Jupyter Notebook
|
<p>I was writing a Python tutorial in Jupyter Notebook and was trying to demonstrate why one should never use a bare <code>except</code>. My example was</p>
<pre class="lang-py prettyprint-override"><code>try:
    exit(0)
except:
    print('Did not exit')
</code></pre>
<p>The idea was that <code>exit</code> would raise a <code>SystemExit</code> which inherits from <code>BaseException</code>. However, the kernel still died when I ran the cell.</p>
<p>Running this code in IPython displays</p>
<pre><code>Did not exit
</code></pre>
<p>What does <code>exit</code> do in Jupyter Notebook?</p>
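In a plain script, `sys.exit` really does raise `SystemExit`, which a bare `except` catches because `SystemExit` subclasses `BaseException` (but not `Exception`) — a quick sketch of the behaviour the tutorial relies on. In IPython/Jupyter, `exit` is rebound to a shell-specific quitter object rather than the `site` module's, which is presumably why the kernel dies instead:

```python
import sys

caught = None
try:
    sys.exit(0)          # raises SystemExit, a BaseException subclass
except SystemExit as e:
    caught = e.code

print("Did not exit, code =", caught)  # Did not exit, code = 0
```

This is also why the usual advice is `except Exception:` rather than a bare `except:` — the former lets `SystemExit` and `KeyboardInterrupt` propagate as intended.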
|
<python><exception><jupyter-notebook>
|
2023-04-07 02:56:07
| 0
| 6,906
|
Daniel Walker
|
75,955,105
| 2,657,491
|
Python how to determine duplicates or no duplicate for all columns
|
<p>I want a single output that shows, for each column, whether that column contains duplicate values. For example:</p>
<pre><code> Student Date flag
0 Joe December 2017 4
1 James January 2018 5
2 Bob April 2018 6
3 Joe December 2017 8
4 Jack February 2018 9
5 Jack March 2018 1
</code></pre>
<p>The output should be</p>
<pre><code>Column Dup
Student true
Date true
flag false
</code></pre>
<p>Thanks</p>
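One way to get a per-column duplicate flag in a single pass — a sketch against the sample data above — is to apply `Series.duplicated` column-wise:

```python
import pandas as pd

df = pd.DataFrame({
    "Student": ["Joe", "James", "Bob", "Joe", "Jack", "Jack"],
    "Date": ["December 2017", "January 2018", "April 2018",
             "December 2017", "February 2018", "March 2018"],
    "flag": [4, 5, 6, 8, 9, 1],
})

# True for any column that contains at least one repeated value
dup = df.apply(lambda col: col.duplicated().any()).rename("Dup")
print(dup)
# Student     True
# Date        True
# flag       False
```

`dup.rename_axis("Column").reset_index()` would turn this into the two-column table shown in the expected output.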
|
<python><duplicates>
|
2023-04-07 02:39:39
| 1
| 841
|
lydias
|
75,955,059
| 4,420,797
|
torch.cuda.is_available() return True inside PyCharm project, return False outside project in Terminal
|
<p>I have installed the latest pytorch with cuda support using <code>conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia</code> command.</p>
<p>When I run my project in the Terminal and activate the conda environment, <code>torch.cuda.is_available()</code> returns False and causes an error.</p>
<p><strong>Through Terminal</strong></p>
<pre><code>(pytesseract) cvpr@cvprlab:/media/cvpr/CM_1/doctr-main/references/recognition$ python3
Python 3.9.16 (main, Jan 11 2023, 16:05:54)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.zeros(1).cuda()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/cvpr/anaconda3/envs/pytesseract/lib/python3.9/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
</code></pre>
<p>But, when I run the project in PyCharm (in the same environment) it returns True. I add the screenshot below.</p>
<p><strong>PyCharm Project</strong></p>
<p><a href="https://i.sstatic.net/uMdLi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uMdLi.png" alt="enter image description here" /></a></p>
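A frequent cause of this PyCharm-vs-terminal split is that the two sessions see different environment variables (for example `CUDA_VISIBLE_DEVICES` exported as empty in a shell profile) or even resolve to different interpreters. A small diagnostic to run in both places — nothing here is specific to your setup:

```python
import os
import sys

# Run this from both the PyCharm run configuration and the terminal,
# then compare the two outputs line by line.
print("interpreter:", sys.executable)
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))
```

If `CUDA_VISIBLE_DEVICES` is set to an empty string (or to an id that does not exist) in the terminal session, PyTorch will report no CUDA GPUs there while PyCharm, which builds its own environment, still sees them.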
|
<python><pytorch><conda><torch>
|
2023-04-07 02:24:48
| 1
| 2,984
|
Khawar Islam
|
75,955,026
| 13,776,631
|
why accuracy remains about 0.5 in simple binary classification?
|
<p>I made a binary classifier.
The classifier consists of 3 linear layers, 2 ReLU layers and a final sigmoid layer.
Then, I trained my classifier.</p>
<p>However, my binary classifier's train accuracy doesn't improve :(
It should classify (x1,x2)->1 if x1<x2 else (x1,x2)->0.</p>
<p>My code is:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
from torch import optim

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.layer1 = nn.Linear(2, 2)
        self.relu1 = nn.ReLU()
        self.layer2 = nn.Linear(2, 2)
        self.relu2 = nn.ReLU()
        self.layer3 = nn.Linear(2, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.layer1(x)
        x = self.relu1(x)
        x = self.layer2(x)
        x = self.relu2(x)
        x = self.layer3(x)
        x = self.sigmoid(x)
        return x

model_wo = Classifier()
criterion_wo = nn.BCELoss()
optimizer_wo = optim.Adam(model_wo.parameters(), lr=0.0001)

loss_history_wo = []
val_loss_history_wo = []
for step in range(10000):
    input_wo = torch.randn((1000, 2))
    label_wo = ((input_wo[:, 0]) > input_wo[:, 1]).float().reshape(-1, 1)
    pred_wo = model_wo(input_wo)
    loss_wo = criterion_wo(pred_wo, label_wo)
    optimizer_wo.zero_grad()
    loss_wo.backward()
    optimizer_wo.step()

    pred_wo = model_wo(input_wo).round()
    corr_pred_wo = (pred_wo == label_wo.reshape(-1))
    acc_w = corr_pred_wo.float().mean()

    input_val = torch.rand((1000, 2))
    label_val = ((input_val[:, 0]) > input_val[:, 1]).float().reshape(-1, 1)
    pred_val_wo = model_wo(input_val)
    loss_val_wo = criterion_wo(pred_val_wo, label_val)

    print(f"step : {step}, train loss : {loss_wo}, train accuracy: {acc_w}, validation loss: {loss_val_wo}")
    loss_history_wo.append(loss_wo.item())
    val_loss_history_wo.append(loss_val_wo.item())
</code></pre>
<p>The result is:</p>
<pre><code>step : 0, train loss : 0.7370945811271667, train accuracy: 0.5090000033378601, validation loss: 0.7618637084960938
step : 1, train loss : 0.7580508589744568, train accuracy: 0.47600001096725464, validation loss: 0.7370656132698059
step : 2, train loss : 0.7357947826385498, train accuracy: 0.5109999775886536, validation loss: 0.7306984066963196
step : 9998, train loss : 0.6932045817375183, train accuracy: 0.4869999885559082, validation loss: 0.6931705474853516
step : 9999, train loss : 0.6931028366088867, train accuracy: 0.5109999775886536, validation loss: 0.6930352449417114
</code></pre>
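Independent of the model itself, the accuracy line looks suspect: `pred_wo` has shape `(1000, 1)` while `label_wo.reshape(-1)` has shape `(1000,)`, so `==` broadcasts to a `(1000, 1000)` matrix whose mean hovers near 0.5 regardless of the predictions. The same broadcasting rules apply in NumPy, so here is a torch-free sketch of the pitfall:

```python
import numpy as np

pred = np.array([[1.0], [0.0], [1.0]])   # shape (3, 1), like pred_wo
label = np.array([1.0, 0.0, 1.0])        # shape (3,), like label_wo.reshape(-1)

broadcast_acc = (pred == label).mean()            # compares every pred to every label
correct_acc = (pred.reshape(-1) == label).mean()  # true elementwise comparison

print((pred == label).shape)  # (3, 3), not (3,)
print(broadcast_acc)          # 5/9 ~= 0.556, misleading
print(correct_acc)            # 1.0
```

In the training loop, comparing tensors of matching shape — e.g. `pred_wo == label_wo`, both `(1000, 1)` — would give the real accuracy and likely reveal whether the network is actually learning.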
|
<python><deep-learning><pytorch><classification>
|
2023-04-07 02:14:20
| 1
| 301
|
beginner
|
75,954,989
| 994,658
|
Best approach to generate a bunch of graphs from BigQuery?
|
<p>I have a ~10GB table in BigQuery which holds about ~2000 series that I want to plot individually using Matplotlib. The table has three columns, <code>series_id</code>, <code>date</code>, and <code>value</code>. I want to generate a time-series chart for each series, about 2000 in total.</p>
<p>I could do 2000 queries, one for each series, and generate a chart for each one, but this seems extremely inefficient. Is there a better way that also doesn't cause memory problems?</p>
<p>This seems interesting, but I'm not sure if it's the right direction: <a href="https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.table.RowIterator#google_cloud_bigquery_table_RowIterator_to_dataframe_iterable" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.table.RowIterator#google_cloud_bigquery_table_RowIterator_to_dataframe_iterable</a></p>
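One common pattern — sketched here with made-up values, assuming the result of a single query fits through pandas — is to pull the data once and let `groupby` do the 2000-way split client-side, closing each figure after saving so memory stays flat:

```python
import pandas as pd

# Stand-in for the result of one `SELECT series_id, date, value` query
df = pd.DataFrame({
    "series_id": ["a", "a", "b", "b", "b"],
    "date": pd.to_datetime(["2023-01-01", "2023-01-02",
                            "2023-01-01", "2023-01-02", "2023-01-03"]),
    "value": [1.0, 2.0, 3.0, 2.5, 4.0],
})

# One group per series; each grp is a small DataFrame ready for plotting
for series_id, grp in df.groupby("series_id"):
    grp = grp.sort_values("date")
    # fig, ax = plt.subplots(); ax.plot(grp["date"], grp["value"])
    # fig.savefig(f"{series_id}.png"); plt.close(fig)  # close to bound memory
    print(series_id, len(grp))
```

If the full table is too big to hold at once, the `to_dataframe_iterable` method you linked streams the result in chunks; combined with `ORDER BY series_id` in the query, each series arrives contiguously and the same per-group plotting loop can run chunk by chunk.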
|
<python><google-bigquery>
|
2023-04-07 02:04:51
| 0
| 3,674
|
Justin
|
75,954,765
| 5,942,100
|
Difficult field name conversion to specific values while performing row by row de-aggregation (Pandas) updated
|
<p>I have a dataset where I would like to convert specific field names to values while performing a de aggregation the values into their own unique rows as well as perform a long pivot.</p>
<p><strong>Data</strong></p>
<pre><code># create DataFrame
data = {
"Start": ['8/1/2013', '8/1/2013'],
"Date": ['9/1/2013', '9/1/2013'],
"End": ['10/1/2013', '10/1/2013'],
"Area": ['NY', 'CA'],
"Final": ['3/1/2023', '3/1/2023'],
"Type": ['CC', 'AA'],
"Middle Stat": [226, 130],
"Low Stat": [20, 50],
"High Stat": [10, 0],
"Middle Stat1": [0, 0],
"Low Stat1": [0, 0],
"High Stat1": [0, 0],
"Re": [0,0],
"Set": [0,0],
"Set2": [0,0],
"Set3": [0,0],
}
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>data = {'Start': ['8/1/2013', '8/1/2013', '8/1/2013', '8/1/2013', '8/1/2013', '8/1/2013'],
'Date': ['9/1/2013', '9/1/2013', '9/1/2013', '9/1/2013', '9/1/2013', '9/1/2013'],
'End': ['10/1/2013', '10/1/2013', '10/1/2013', '10/1/2013', '10/1/2013', '10/1/2013'],
'Area': ['NY', 'CA', 'NY', 'CA', 'NY', 'CA'],
'Final': ['3/1/2023', '3/1/2023', '3/1/2023', '3/1/2023', '3/1/2023', '3/1/2023'],
'Type': ['CC', 'AA', 'CC', 'AA', 'CC', 'AA'],
'Stat': [20, 50, 226, 130, 10, 0],
'Range': ['Low', 'Low', 'Middle', 'Middle', 'High', 'High'],
'Stat1': [0, 0, 0, 0, 0, 0],
'Re': [0, 0, 0, 0, 0, 0],
'Set': [0, 0, 0, 0, 0, 0],
'Set2': [0, 0, 0, 0, 0, 0],
'Set3': [0, 0, 0, 0, 0, 0]
}
</code></pre>
<p><strong>Doing</strong></p>
<p>I am using this great script provided by SO member, but troubleshooting on how to adjust this to create desired output. I am needing to include all columns as shown in desired output.</p>
<pre><code>import janitor
(df
.pivot_longer(
index = slice('Start', 'Type'),
names_to = ("Range", ".value"),
names_sep = " ")
)
</code></pre>
<p>Any suggestion is appreciated.</p>
|
<python><pandas><numpy>
|
2023-04-07 00:59:42
| 1
| 4,428
|
Lynn
|
75,954,676
| 6,440,589
|
How to rerun docker-compose after modifying containerized python script
|
<p>Docker newbie question here.</p>
<p>I am working on containerizing my own Python script, following <a href="https://www.docker.com/blog/how-to-dockerize-your-python-applications/" rel="nofollow noreferrer">this tutorial</a> and <a href="https://dev.to/davidsabine/docker-compose-hello-world-1p14" rel="nofollow noreferrer">that tutorial</a>.</p>
<p>I noticed that when I fix an error in my Python script (<strong>myscript.py</strong>), this change is not seen upon rerunning <code>sudo docker-compose up</code>. I need to change the app name in <strong>docker-compose.yml</strong> (<em>e.g.</em> from <strong>my-awesome-app2</strong> to <strong>my-awesome-app3</strong>) to get it to run.</p>
<p>I tried listing the available containers using <code>sudo docker container ls -a</code> and then removing the unused containers with <code>sudo docker rm <CONTAINER ID></code> but this did not help.</p>
<p>How can I get docker-compose to see the changes made to the Python script, without having to keep changing the Dockerfile?</p>
<hr />
<p>Here is my <strong>docker-compose.yml</strong>:</p>
<pre><code>version: "3"

services:
  my-awesome-app3:
    build:
      context: .
</code></pre>
<p>Here is my <strong>Dockerfile</strong>:</p>
<pre><code>FROM python:3.8.3
# Or any preferred Python version.
ADD myscript.py .
RUN pip3 install pywavelets scipy
CMD ["python", "./myscript.py"]
# Or enter the name of your unique directory and parameter set.
</code></pre>
<p>And here is <strong>myscript.py</strong>:</p>
<pre><code>from scipy.misc import electrocardiogram
import pywt
import numpy as np
ecg = electrocardiogram()
FPData = ecg.reshape(10800,-1)
DWTcoeffs = pywt.wavedec(FPData[:,1], 'db4')
filtered_data_dwt=pywt.waverec(DWTcoeffs,'db4',mode='symmetric',axis=-1)
print(filtered_data_dwt)
</code></pre>
<p>Let's say that I make a typo (<code>print(filtered_data)</code> instead of <code>print(filtered_data_dwt)</code>)... Rerunning <code>sudo docker-compose up</code> will still trigger:</p>
<blockquote>
<p>my-awesome-app2_1 | NameError: name 'filtered_data' is not defined</p>
</blockquote>
|
<python><docker><docker-compose>
|
2023-04-07 00:31:33
| 2
| 4,770
|
Sheldon
|
75,954,655
| 14,278,409
|
Sequential chaining of itertools operators
|
<p>I'm looking for a nice way to sequentially combine two itertools operators. As an example, suppose we want to select numbers from a generator sequence greater than a threshold, after having gotten past that threshold. For a threshold of 12000, these would correspond to <code>it.takewhile(lambda x: x<12000)</code> and <code>it.takewhile(lambda x: x>=12000)</code>:</p>
<pre><code># Set up an example generator:
def lcg(a=75, c=74, m=2**16+1, x0=1):
    xn = x0
    yield xn
    while True:
        xn = (a*xn + c) % m
        yield xn
</code></pre>
<pre><code># First 20 elements:
list(it.islice(lcg(), 20))
[1, # <- start sequence, start it.takewhile(lambda x: x<12000)
149,
11249, # <- last element of it.takewhile(lambda x: x<12000)
57305, # <- start it.takewhile(lambda x: x>=12000) here
38044,
35283,
24819,
26463,
18689,
25472, # <- last element of it.takewhile(lambda x: x>=12000); end of sequence
9901,
21742,
57836,
12332,
7456,
34978,
1944,
14800,
61482,
23634]
</code></pre>
<p>Is there a way to select the sequence of greater than 12000, including the initial values less than 12000, i.e. the desired output is:</p>
<pre><code>[1, 149, 11249, 57305, 38044, 35283, 24819, 26463, 18689, 25472]
</code></pre>
<p>This is trivial to do with two for-loops, but I'm looking for an itertools-style way (maybe a one-liner?) of combining the two operators without resetting the <code>lcg</code> generator.</p>
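For what it's worth, chaining the two `takewhile`s over the same generator almost works, but the first `takewhile` consumes and silently discards the first element `>= 12000` (57305 here). One itertools-friendly alternative is a single `takewhile` driven by a small stateful predicate — a sketch reusing the `lcg` generator above:

```python
import itertools as it

def lcg(a=75, c=74, m=2**16 + 1, x0=1):
    xn = x0
    yield xn
    while True:
        xn = (a * xn + c) % m
        yield xn

def below_then_above(threshold):
    """True while values stay below threshold, then while they stay at/above it."""
    seen_above = False
    def pred(x):
        nonlocal seen_above
        if x >= threshold:
            seen_above = True
            return True
        return not seen_above   # a low value after a high one ends the run
    return pred

print(list(it.takewhile(below_then_above(12000), lcg())))
# [1, 149, 11249, 57305, 38044, 35283, 24819, 26463, 18689, 25472]
```

Because the predicate flips its state the moment it sees the first value at or above the threshold, the boundary element is kept rather than lost, matching the desired output exactly.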
|
<python><generator><python-itertools>
|
2023-04-07 00:26:43
| 2
| 730
|
njp
|
75,954,637
| 2,817,520
|
Is it possible not to use Flask decorators?
|
<p>I don't know why, but I don't like decorators. Can I use</p>
<pre><code>def load_user():
    if "user_id" in session:
        g.user = db.session.get(session["user_id"])

app.before_request(load_user)
</code></pre>
<p>instead of</p>
<pre><code>@app.before_request
def load_user():
    if "user_id" in session:
        g.user = db.session.get(session["user_id"])
</code></pre>
<p>?</p>
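Yes — `@decorator` is pure syntactic sugar for calling the decorator with the function and rebinding the name to its return value, so the explicit `app.before_request(load_user)` form is equivalent (Flask's `before_request` returns the function unchanged). A Flask-free sketch of the equivalence:

```python
registry = []

def before_request(fn):
    """Mimics a registration decorator: record fn, hand it back unchanged."""
    registry.append(fn)
    return fn

@before_request
def decorated():
    return "a"

def plain():
    return "b"

before_request(plain)          # same effect as decorating

print([f.__name__ for f in registry])  # ['decorated', 'plain']
```

Both functions end up registered and both remain callable afterwards; the decorator syntax saves nothing but the explicit call.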
|
<python><flask>
|
2023-04-07 00:22:07
| 1
| 860
|
Dante
|
75,954,612
| 3,232,771
|
How can I tell which ax is selected on an interactive shiny for python plot?
|
<p>I'd like to generate a figure with <strong>multiple subplots</strong> that has basic interactivity; clicking, hovering etc...</p>
<p>Here is an example of a figure with a single subplot from the official docs: <a href="https://shinylive.io/py/examples/#basic-plot-interaction" rel="nofollow noreferrer">https://shinylive.io/py/examples/#basic-plot-interaction</a>.</p>
<p>How can I identify which ax is being hovered over in a figure with multiple subplots?</p>
<p>Here's an example of the JSON sent by hovering:</p>
<pre><code>{
"x": 3.5142333228259792,
"y": 15.062603892154872,
"coords_css": {
"x": 440,
"y": 289
},
"coords_img": {
"x": 445.2547771977413,
"y": 289
},
"img_css_ratio": {
"x": 1.011942675449412,
"y": 1
},
"mapping": {
"x": null,
"y": null
},
"domain": {
"left": 1.3174499999999998,
"right": 5.61955,
"bottom": 9.225,
"top": 35.074999999999996
},
"range": {
"left": 40.73333333333335,
"right": 832.9333129882813,
"bottom": 363.26666666666665,
"top": 34.400000000000034
},
"log": {
"x": null,
"y": null
}
}
</code></pre>
<p>However, for a figure with multiple subplots, this JSON lacks information about which subplot the hovering is happening over. (It is identical in structure to the JSON above).</p>
<p><code>"range"</code> contains the range, in pixels, of the subplot being hovered over. So, obviously, that would be useful information. If you knew the full range in both x and y, and the proportions of the axes, you could probably work backwards.</p>
<p>But I can't see any function/method in the shiny for python API that returns the range of the entire figure. (I think the range of the entire figure would have to be checked on every hover event, given the window, and therefore figure, may have been resized).</p>
<p>This seems like useful functionality, and perhaps there's an easier way?</p>
|
<python><py-shiny>
|
2023-04-07 00:13:50
| 1
| 396
|
davipatti
|
75,954,596
| 9,682,236
|
Why Does Copied Object Avoid Modification in Python?
|
<p>I have the following data and following function that works to sort the products to the front of the list that match the specified category.</p>
<pre><code>products = [{'name': 't1', 'category': 2}, {'name': 't5', 'category': 3}, {'name': 't6', 'category': 3}, {'name': 't2', 'category': 2}, {'name': 't3', 'category': 2}, {'name': 't4', 'category': 1}, {'name': 't7', 'category': 1}]
category = 2
</code></pre>
<p>I found I have to run this function twice in order to actually get this accomplished. Why? How do I need to modify the function so that I only have to run it once? I am trying to modify <code>products</code> in place, sorted by the specified category.</p>
<pre><code>def sort_by_cat(products, category):
    for p in products:
        print(p)
        if p['category'] == category:
            continue
        else:
            products.append(products.pop(products.index(p)))
    return

sort_by_cat(products, category)
sort_by_cat(products, category)
</code></pre>
<p>Desired output:</p>
<pre><code>[{'name': 't1', 'category': 2},
{'name': 't2', 'category': 2},
{'name': 't3', 'category': 2},
{'name': 't4', 'category': 1},
{'name': 't5', 'category': 3},
{'name': 't6', 'category': 3},
{'name': 't7', 'category': 1}]
</code></pre>
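The reason two passes are needed: `products.append(products.pop(...))` mutates the list while the `for` loop is iterating over it, so the iterator skips whichever element slides into the vacated index. A mutation-free alternative is a stable sort on a boolean key — a sketch that moves matching items to the front in one pass while preserving relative order (note the tail order differs slightly from the listing above, which interleaves categories):

```python
products = [{'name': 't1', 'category': 2}, {'name': 't5', 'category': 3},
            {'name': 't6', 'category': 3}, {'name': 't2', 'category': 2},
            {'name': 't3', 'category': 2}, {'name': 't4', 'category': 1},
            {'name': 't7', 'category': 1}]
category = 2

# False sorts before True, and list.sort is stable, so category-2 items
# move to the front in one pass with their original order intact.
products.sort(key=lambda p: p['category'] != category)

print([p['name'] for p in products])  # ['t1', 't2', 't3', 't5', 't6', 't4', 't7']
```

This also modifies `products` in place, as the question asks, without the iterate-while-mutating pitfall.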
|
<python>
|
2023-04-07 00:10:25
| 0
| 1,125
|
Mark McGown
|
75,954,474
| 2,707,864
|
Sympy solve looking for complex solutions, even if real=True?
|
<p>I mean to solve a symbolic equation via <code>sympy.solve</code>.
It was taking surprisingly long, so I tried a few variations until I found a possible cause.
It looks for complex solutions, while I am only interested in real solutions.</p>
<p>Code below...</p>
<pre><code>import sympy as sp
import numpy as np
x, y, z = sp.symbols('x, y, z', real=True, positive=True)
# Option X1: works
expr3 = 0.6*x**(9/8)*y**(7/5)*z**2.2
expr4 = 0.9*x**(1/2)*y**(4/5)*z**1.2
s3 = sp.solve(expr3 - expr4, x)
print('Exponents of x: 9/8, 1/2. Solution =', s3)
# Option X3: works
expr3 = 0.6*x**(11/9)*y**(7/5)
expr4 = 0.9*x**(1/2)*y**(4/5)
s3 = sp.solve(expr3 - expr4, x, check=True, minimal=True, rational=True, force=True)
print('Exponents of x: 11/9, 1/2. Solution =', s3)
# Option X2: takes "forever"
expr3 = 0.6*x**(11/9)*y**(7/5)
expr4 = 0.9*x**(1/2)*y**(4/5)
s3 = sp.solve(expr3 - expr4, x)
print('Exponents of x: 11/9, 1/2. Solution =', s3)
</code></pre>
<p>... produces this output</p>
<pre><code>Exponents of x: 9/8, 1/2. Solution = [1.91313675093869/(y**(24/25)*z**(8/5)), 0.351080874270527*(-y**(-0.12)*z**(-0.2) - 0.726542528005361*I*y**(-0.12)*z**(-0.2))**8, 0.351080874270527*(-y**(-0.12)*z**(-0.2) + 0.726542528005361*I*y**(-0.12)*z**(-0.2))**8, 1.28055023108529*(0.324919696232906*y**(-0.12)*z**(-0.2) - I*y**(-0.12)*z**(-0.2))**8, 1.28055023108529*(0.324919696232906*y**(-0.12)*z**(-0.2) + I*y**(-0.12)*z**(-0.2))**8]
Exponents of x: 11/9, 1/2. Solution = [3*2**(8/13)*3**(5/13)/(4*y**(54/65)), (-2**(12/13)*3**(1/13)*cos(pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(pi/13)/(2*y**(3/65)))**18, (-2**(12/13)*3**(1/13)*cos(pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(2*pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(2*pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(2*pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(2*pi/13)/(2*y**(3/65)))**18, (-2**(12/13)*3**(1/13)*cos(3*pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(3*pi/13)/(2*y**(3/65)))**18, (-2**(12/13)*3**(1/13)*cos(3*pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(3*pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(4*pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(4*pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(4*pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(4*pi/13)/(2*y**(3/65)))**18, (-2**(12/13)*3**(1/13)*cos(5*pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(5*pi/13)/(2*y**(3/65)))**18, (-2**(12/13)*3**(1/13)*cos(5*pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(5*pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(6*pi/13)/(2*y**(3/65)) - 2**(12/13)*3**(1/13)*I*sin(6*pi/13)/(2*y**(3/65)))**18, (2**(12/13)*3**(1/13)*cos(6*pi/13)/(2*y**(3/65)) + 2**(12/13)*3**(1/13)*I*sin(6*pi/13)/(2*y**(3/65)))**18]
</code></pre>
<p>from options X1, and X3, and then it is stuck at option X2. I have to manually interrupt calculation. <br>
X3 is one of many attempts at tinkering with options of <code>solve</code> to get a solution for X2.
From its result, I would understand why X2 takes so long.</p>
<p>The first element in the solution list is what I am looking for, possibly with later simplification.
My questions are as follows:</p>
<ol>
<li>Q1: Why is <code>solve</code> looking for and returning complex solutions, if I set <code>real=True, positive=True</code>?</li>
<li>Q2: Even if not understanding Q1, is there any way to tell <code>solve</code> not to look for complex solutions? (Note: I also want <code>positive=True</code> for <code>x</code>, not <code>x=0</code>)</li>
<li>Q3: I would be ok with using options for <code>solve</code> that lead to calculations in reasonable time, even if I get complex solutions. <br>
But how can I then automatically pick the real solution? <br>
I would pick element 0, but I am not sure it will always be the correct one.</li>
</ol>
<p><strong>Related</strong>:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/15210704/ignore-imaginary-roots-in-sympy">Ignore imaginary roots in sympy</a></li>
<li><a href="https://stackoverflow.com/questions/70405920/sympy-very-slow-at-solving-equations-using-solve">Sympy very slow at solving equations using solve</a></li>
</ol>
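On Q3, one heuristic for filtering out the complex candidates after the fact is to drop any solution whose expression still contains `I`; with symbolic parameters, `sol.is_real` often returns `None` (undecidable), so checking for `I` explicitly tends to be more robust. A small sketch on a toy cubic, not your actual equations:

```python
import sympy as sp

x = sp.symbols('x')
sols = sp.solve(x**3 - 8, x)                    # one real root, two complex roots
real_sols = [s for s in sols if not s.has(sp.I)]

print(real_sols)  # [2]
```

For your expressions, the same filter (`[s for s in s3 if not s.has(sp.I)]`) should isolate the first element you identified as the wanted solution, without relying on its position in the list.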
|
<python><sympy><solver><complex-numbers>
|
2023-04-06 23:36:41
| 3
| 15,820
|
sancho.s ReinstateMonicaCellio
|
75,954,444
| 8,667,016
|
Ranking using multiple columns within groups allowing for tied ranks in Pandas
|
<h3>Intro and problem</h3>
<p>How can I rank observations within groups where the ranking is based on more than just one column and where the ranking allows for tied ranks?</p>
<p>I know how to calculate aggregated group-level statistics using the <code>groupby()</code> method and I also know how to rank using multiple columns without groups (see <a href="https://stackoverflow.com/questions/41974374/pandas-rank-by-multiple-columns">here</a>, <a href="https://stackoverflow.com/questions/38854731/rank-multiple-columns-in-pandas">here</a> and <a href="https://stackoverflow.com/questions/41974374/pandas-rank-by-multiple-columns">here</a>). The main problem seems to be getting both ideas (grouping & ranking) to play nicely together.</p>
<p><a href="https://stackoverflow.com/questions/66489613/pandas-group-by-and-rank-within-group-based-on-multiple-columns">This other thread</a> has some ideas on how to solve the problem, but its results don't show you which rows are tied - it just returns an array of ever-increasing ranks even when the values are identical. The problem is described in more detail in the example I created below.</p>
<h3>Minimal reproducible example</h3>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'row_id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                   'Group': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'Var1': [100, 100, 100, 200, 200, 300, 300, 400, 400, 400],
                   'Var2': [5, 5, 6, 7, 8, 1, 1, 2, 2, 3]})
print(df)
# row_id Group Var1 Var2
# 0 1 1 100 5
# 1 2 1 100 5
# 2 3 1 100 6
# 3 4 1 200 7
# 4 5 1 200 8
# 5 6 2 300 1
# 6 7 2 300 1
# 7 8 2 400 2
# 8 9 2 400 2
# 9 10 2 400 3
</code></pre>
<p>In the case above, I would like to group using the <code>Group</code> variable and rank using the <code>Var1</code> and <code>Var2</code> variables. Therefore, I expect the output to look like this:</p>
<pre class="lang-py prettyprint-override"><code># row_id Group Var1 Var2 Rank
# 0 1 1 100 5 1
# 1 2 1 100 5 1
# 2 3 1 100 6 3
# 3 4 1 200 7 4
# 4 5 1 200 8 5
# 5 6 2 300 1 1
# 6 7 2 300 1 1
# 7 8 2 400 2 3
# 8 9 2 400 2 3
# 9 10 2 400 3 5
</code></pre>
<h3>What I've tried</h3>
<p>Using the data in the example above, if I would like to group using the <code>Group</code> variable and only rank based on the <code>Var1</code> column, that would be pretty easy:</p>
<pre class="lang-py prettyprint-override"><code>df['Rank_Only_Var1'] = df.groupby('Group')['Var1'].rank(method='min', ascending=True)
print(df)
# row_id Group Var1 Var2 Rank_Only_Var1
# 0 1 1 100 5 1.0
# 1 2 1 100 5 1.0
# 2 3 1 100 6 1.0
# 3 4 1 200 7 4.0
# 4 5 1 200 8 4.0
# 5 6 2 300 1 1.0
# 6 7 2 300 1 1.0
# 7 8 2 400 2 3.0
# 8 9 2 400 2 3.0
# 9 10 2 400 3 3.0
</code></pre>
<p>However, if I want to group using the <code>Group</code> variable and rank using the <code>Var1</code> and <code>Var2</code> variables, things get hairy. Using the approach suggested <a href="https://stackoverflow.com/questions/66489613/pandas-group-by-and-rank-within-group-based-on-multiple-columns">by this other post</a>, we arrive at the following results:</p>
<pre class="lang-py prettyprint-override"><code>df = df.sort_values(['Var1', 'Var2'], ascending=[True, True])
df['overall_rank'] = 1
df['overall_rank'] = df.groupby(['Group'])['overall_rank'].cumsum()
print(df)
# row_id Group Var1 Var2 overall_rank
# 0 1 1 100 5 1
# 1 2 1 100 5 2
# 2 3 1 100 6 3
# 3 4 1 200 7 4
# 4 5 1 200 8 5
# 5 6 2 300 1 1
# 6 7 2 300 1 2
# 7 8 2 400 2 3
# 8 9 2 400 2 4
# 9 10 2 400 3 5
</code></pre>
<p>Note how the first and second rows have identical values for <code>Var1</code> and <code>Var2</code>, but the first row is ranked 1 and the second row is ranked 2. Those two rows <strong>shouldn't</strong> have different ranks. Their ranks should be identical and tied, because the values the rank is based on are identical and tied. This problem also happens with rows 6 & 7 as well as with rows 8 & 9.</p>
<p>I even tried adapting the solution from <a href="https://stackoverflow.com/a/56045215/8667016">this answer</a>, but it doesn't work when we have a <code>groupby</code> statement.</p>
<h3>Back to the heart of the question</h3>
<p>How can I rank observations within groups where the ranking is based on more than just one column and where the ranking allows for tied ranks?</p>
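<p>One possible approach (a sketch, not guaranteed against every edge case): number each unique <code>(Group, Var1, Var2)</code> combination with <code>ngroup()</code> — which assigns numbers in sorted key order by default — and then rank that number within each group with <code>method='min'</code>, so identical combinations receive tied ranks just like the single-column case above:</p>

```python
import pandas as pd

df = pd.DataFrame({'row_id': range(1, 11),
                   'Group': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'Var1': [100, 100, 100, 200, 200, 300, 300, 400, 400, 400],
                   'Var2': [5, 5, 6, 7, 8, 1, 1, 2, 2, 3]})

# ngroup() numbers the sorted (Group, Var1, Var2) combinations; ranking that
# number within each Group with method='min' yields tied minimum ranks.
key = df.groupby(['Group', 'Var1', 'Var2']).ngroup()
df['Rank'] = key.groupby(df['Group']).rank(method='min').astype(int)
print(df['Rank'].tolist())  # [1, 1, 3, 4, 5, 1, 1, 3, 3, 5]
```

<p>Because <code>Group</code> is the first grouping key, the <code>ngroup()</code> values increase with <code>(Var1, Var2)</code> inside each group, which is what makes the within-group rank valid.</p>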
|
<python><pandas><group-by><grouping><ranking>
|
2023-04-06 23:28:07
| 1
| 1,291
|
Felipe D.
|
75,954,402
| 18,125,194
|
Seasonal decompose 'You must specify a period or x must be a pandas object with a PeriodIndex or a DatetimeIndex with a freq not set to None'
|
<p>I am trying to use statsmodels.tsa's seasonal decompose, but I keep getting this error</p>
<p>'ValueError: You must specify a period or x must be a pandas object with a PeriodIndex or a DatetimeIndex with a freq not set to None'</p>
<p>I have tried to use a DatetimeIndex with a monthly period like this <code>time.index =time.index.to_timestamp(freq='M')</code>, but the error still persists.</p>
<p>My code is below</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose
from pandas.plotting import register_matplotlib_converters

time = pd.DataFrame({'Date': ['2022-03-01', '2022-03-02', '2022-03-03', '2022-03-04', '2022-03-05',
                              '2022-03-06', '2022-03-07', '2022-03-08', '2022-03-09', '2022-03-10',
                              '2022-03-11', '2022-03-12', '2022-03-13', '2022-03-14', '2022-03-15'],
                     'Employment_Rate': [52, 12, 18, 35, 75, 95, 85, 45, 75, 85, 95, 65, 85, 75, 78]
                     })

# Convert 'Date' to datetime and set as index
time['Date'] = pd.to_datetime(time['Date'], format='%Y-%m-%d')
time['Date'] = time['Date'].dt.to_period('M')
time.set_index('Date', inplace=True)

# Sort index and drop NA values
time.sort_index(inplace=True)
time.dropna(inplace=True)

# Convert PeriodDtype index to DatetimeIndex
time.index = time.index.to_timestamp(freq='M')

# Graphing
register_matplotlib_converters()
plt.rc("figure", figsize=(16, 12))
plt.rc("font", size=13)
decomposition = seasonal_decompose(time['Employment_Rate'])
</code></pre>
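<p>For reference, <code>seasonal_decompose</code> needs either an index carrying a <code>freq</code> or an explicit <code>period</code>. A sketch of the data kept on a daily <code>DatetimeIndex</code> (assuming weekly seasonality is what's wanted, i.e. <code>period=7</code> — that assumption is mine):</p>

```python
import pandas as pd

rates = [52, 12, 18, 35, 75, 95, 85, 45, 75, 85, 95, 65, 85, 75, 78]
# date_range attaches freq='D' to the index, which seasonal_decompose can use
s = pd.Series(rates, index=pd.date_range('2022-03-01', periods=15, freq='D'))
print(s.index.freq is not None)  # True
# With a freq-carrying index, seasonal_decompose(s) can infer the period;
# alternatively pass it explicitly: seasonal_decompose(s, period=7)
```

<p>Converting the dates to monthly periods first collapses all 15 daily points onto the same month, which is why the freq information is lost.</p>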
|
<python><statsmodels>
|
2023-04-06 23:12:30
| 1
| 395
|
Rebecca James
|
75,954,342
| 15,178,267
|
Django: 'QuerySet' object has no attribute 'product_obj' when running for loop
|
<p>I am trying to loop through a queryset, get the product object and remove the qty of items ordered from the product's stock amount.</p>
<p>This is how my model Looks</p>
<pre><code>class CartOrderItem(models.Model):
    order = models.ForeignKey(CartOrder, on_delete=models.CASCADE)
    product_obj = models.ForeignKey(Product, on_delete=models.CASCADE)
    qty = models.IntegerField(default=0)
    ...
    date = models.DateTimeField(auto_now_add=True, null=True, blank=True)
</code></pre>
<p>And this is how it looks in the admin section
<a href="https://i.sstatic.net/WUm1l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WUm1l.png" alt="enter image description here" /></a></p>
<p>This is how I ran the for loop</p>
<pre><code>
@login_required
def PaymentSuccessView(request):
    ...
    order_items = CartOrderItem.objects.filter(order=order, order__payment_status="paid")
    for o in order_items.product_obj:
        print(o.title)  # print all the products' titles
        print(o.stock_qty)  # print the stock qty from the Product model
        # I want to subtract the CartOrderItem's qty from the Product's stock qty
</code></pre>
<p>It then shows this error that says</p>
<pre><code>'QuerySet' object has no attribute 'product_obj'
</code></pre>
<p>This error is saying that the queryset for the <code>CartOrderItem</code> model does not have an attribute <code>"product_obj"</code>, but I clearly have a <code>'product_obj'</code> field in my <code>CartOrderItem</code> model.</p>
<hr />
<p>This is how I wanted to subtract the item's qty from the product's stock qty. I am not sure this would work, and I haven't tried it because the error above stops me earlier:</p>
<pre><code>for o in order_items.product_obj:
    o.product_obj.stock_qty -= o.qty
    o.save()
</code></pre>
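<p>For what it's worth, the shape of the fix is to iterate the queryset itself and reach through each item's <code>product_obj</code>. Sketched here with plain dataclasses standing in for the Django models (<code>Product</code>/<code>OrderItem</code> below are stand-ins, not the real models; in Django you would also call <code>.save()</code> on each product):</p>

```python
from dataclasses import dataclass

@dataclass
class Product:
    title: str
    stock_qty: int

@dataclass
class OrderItem:
    product_obj: Product
    qty: int

order_items = [OrderItem(Product("shoe", 10), 3), OrderItem(Product("hat", 5), 1)]
# Iterate the collection itself, then access product_obj on each item:
for item in order_items:
    item.product_obj.stock_qty -= item.qty

print([i.product_obj.stock_qty for i in order_items])  # [7, 4]
```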
|
<python><django><django-models><django-views>
|
2023-04-06 22:56:07
| 2
| 851
|
Destiny Franks
|
75,954,280
| 2,778,224
|
How to change the position of a single column in Python Polars library?
|
<p>I am working with the Python Polars library for data manipulation on a DataFrame, and I am trying to change the position of a single column. I would like to move a specific column to a different index while keeping the other columns in their respective positions.</p>
<p>One way of doing that is using <code>select</code>, but that requires giving a complete order for all the columns which I don't want to do.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

# Create a simple DataFrame
data = {
    'A': [1, 2, 3],
    'B': [4, 5, 6],
    'C': [7, 8, 9],
    'D': [10, 11, 12]
}
df = pl.DataFrame(data)
</code></pre>
<p>I want to move column 'C' to index 1, so the desired output should be:</p>
<pre class="lang-py prettyprint-override"><code>shape: (3, 4)
┌─────┬─────┬─────┬──────┐
│ A │ C │ B │ D │
│ --- │ --- │ --- │ ---- │
│ i64 │ i64 │ i64 │ i64 │
╞═════╪═════╪═════╪══════╡
│ 1 │ 7 │ 4 │ 10 │
│ 2 │ 8 │ 5 │ 11 │
│ 3 │ 9 │ 6 │ 12 │
└─────┴─────┴─────┴──────┘
</code></pre>
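<p>One sketch that still uses <code>select</code> but builds the full order programmatically, so only the moved column and its target index have to be named (<code>move_column</code> is a made-up helper, not a Polars API):</p>

```python
# Build the new column order as a plain list, then hand it to select().
def move_column(columns, name, index):
    cols = [c for c in columns if c != name]
    cols.insert(index, name)
    return cols

print(move_column(['A', 'B', 'C', 'D'], 'C', 1))  # ['A', 'C', 'B', 'D']
# With Polars (untested here): df = df.select(move_column(df.columns, 'C', 1))
```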
|
<python><dataframe><python-polars>
|
2023-04-06 22:39:29
| 4
| 479
|
Maturin
|
75,954,211
| 5,617,608
|
ValueError: source code string cannot contain null bytes
|
<p>I'm originally an Ubuntu user, but I have to use a Windows Virtual Machine for some reason.</p>
<p>I was trying to pip-install a package using the CMD, however, I'm getting the following error:</p>
<pre><code>from pip._vendor.packaging.utils import canonicalize_name
ValueError: source code string cannot contain null bytes
</code></pre>
<p>I used <code>pip install numpy</code> and <code>pip3 install numpy</code> along with other commands I found while trying to fix the problem.</p>
<p>I checked that pip is available and reinstalled Python to make sure the path is added. I've also made sure that I'm running everything as an administrator. Everything seems to be installed properly, but I keep getting that error.</p>
<p>I've also checked almost all other StackOverflow questions related to this error message.</p>
<p>How can I solve this?</p>
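<p>This error usually means a <code>.py</code> file on disk (here one of pip's own vendored files) contains literal null bytes — often the result of an interrupted install or a corrupted copy — and reinstalling pip (for example with <code>python -m ensurepip --upgrade</code>) or repairing the file typically fixes it. A small stdlib sketch for checking a suspect file (<code>has_null_bytes</code> is a made-up diagnostic helper):</p>

```python
from pathlib import Path
import os
import tempfile

def has_null_bytes(path):
    # A valid Python source file should never contain b"\x00"
    return b"\x00" in Path(path).read_bytes()

# demo on a throwaway file
fd, p = tempfile.mkstemp()
os.close(fd)
Path(p).write_bytes(b"clean source")
print(has_null_bytes(p))  # False
Path(p).write_bytes(b"corrupt\x00source")
print(has_null_bytes(p))  # True
os.remove(p)
```

<p>Running it against the <code>pip/_vendor/packaging/utils.py</code> path from the traceback would confirm whether that specific file is the corrupted one.</p>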
|
<python><python-3.x><windows><virtual-machine>
|
2023-04-06 22:24:00
| 2
| 1,759
|
Esraa Abdelmaksoud
|
75,954,193
| 7,752,049
|
Filtering data does not work properly on docker
|
<p>I have dockerized my Flask project. It gets <code>added_at</code> as a datetime and filters my products:</p>
<pre><code>def post(self):
    args = request.get_json()
    products = Product.query
    args = {k: v for k, v in args.items() if v}
    added_at = args['added_at'] if 'added_at' in args else datetime.now() - timedelta(days=7)
    products = products.filter(Product.created_at >= added_at).all()
    return len(products), 200
</code></pre>
<p>Outside Docker, when I add a record to the products table and search, it returns the new products correctly.</p>
<p>But on docker :</p>
<ol>
<li>I run <code>docker-compose up -d</code></li>
<li>Add a row manually to database (on mysql)</li>
<li>Search products => it does not return new rows, even if the created_at field is for yesterday</li>
<li>I restart my app service and then search => it works correctly!!</li>
</ol>
<p>It seems that the app is not synced with the database; after a few hours it does return the data. My DB is MySQL.</p>
<p>Here is my docker-compose of app service:</p>
<pre><code>version: '3.7'
services:
app:
build: .
container_name: app
image: app
tty: true
volumes:
- .:/app
depends_on:
- db
ports:
- "8000:8000"
networks:
- app-network
</code></pre>
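<p>A likely cause (an assumption on my part, not verified against this app): MySQL's default REPEATABLE READ isolation combined with a long-lived SQLAlchemy session means each worker keeps reading the snapshot taken when its transaction began, so rows inserted by another connection stay invisible until the session commits/rolls back or the container restarts. Committing (or rolling back) the session after each request, or relaxing the isolation level through Flask-SQLAlchemy's engine options, is one sketch of a fix:</p>

```python
# Config fragment (assumption: Flask-SQLAlchemy passes these kwargs to
# SQLAlchemy's create_engine): make each query see freshly committed rows.
SQLALCHEMY_ENGINE_OPTIONS = {"isolation_level": "READ COMMITTED"}
print(SQLALCHEMY_ENGINE_OPTIONS["isolation_level"])  # READ COMMITTED
```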
|
<python><docker><docker-compose>
|
2023-04-06 22:20:03
| 0
| 2,479
|
parastoo
|
75,954,148
| 5,942,100
|
Tricky conversion of field names to values while performing row by row de-aggregation (using Pandas)
|
<p>I have a dataset where I would like to convert specific field names to values while de-aggregating the values into their own unique rows, as well as perform a long pivot.</p>
<p><strong>Data</strong></p>
<pre><code>Start Date End Area Final Type Middle Stat Low Stat High Stat Middle Stat1 Low Stat1 High Stat1
8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 226 20 10 0 0 0
8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 130 50 0 0 0 0
data = {
    "Start": ['8/1/2013', '8/1/2013'],
    "Date": ['9/1/2013', '9/1/2013'],
    "End": ['10/1/2013', '10/1/2013'],
    "Area": ['NY', 'CA'],
    "Final": ['3/1/2023', '3/1/2023'],
    "Type": ['CC', 'AA'],
    "Middle Stat": [226, 130],
    "Low Stat": [20, 50],
    "High Stat": [10, 0],
    "Middle Stat1": [0, 0],
    "Low Stat1": [0, 0],
    "High Stat1": [0, 0]
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>Start Date End Area Final Type Stat Range Stat1
8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 20 Low 0
8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 50 Low 0
8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 226 Middle 0
8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 130 Middle 0
8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 10 High 0
8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 0 High 0
</code></pre>
<p><strong>Doing</strong></p>
<p>I believe I need some sort of wide-to-long method (an SO member assisted with the attempt below), but I am unsure how to apply it when the targeted column names (the columns of interest) share the same suffix.</p>
<pre><code>pd.wide_to_long(df,
                stubnames=['Low', 'Middle', 'High'],
                i=['Start', 'Date', 'End', 'Area', 'Final'],
                j='',
                sep=' ',
                suffix='(stat)'
                ).unstack(level=-1, fill_value=0).stack(level=0).reset_index()
</code></pre>
<p>Any suggestion is appreciated.</p>
<p><strong>#Original Dataset</strong></p>
<pre><code>import pandas as pd

# create DataFrame
data = {'Start': ['9/1/2013', '10/1/2013', '11/1/2013', '12/1/2013'],
        'Date': ['10/1/2016', '11/1/2016', '12/1/2016', '1/1/2017'],
        'End': ['11/1/2016', '12/1/2016', '1/1/2017', '2/1/2017'],
        'Area': ['NY', 'NY', 'NY', 'NY'],
        'Final': ['3/1/2023', '3/1/2023', '3/1/2023', '3/1/2023'],
        'Type': ['CC', 'CC', 'CC', 'CC'],
        'Low Stat': ['', '', '', ''],
        'Low Stat1': ['', '', '', ''],
        'Middle Stat': ['0', '0', '0', '0'],
        'Middle Stat1': ['0', '0', '0', '0'],
        'Re': ['', '', '', ''],
        'Set': ['0', '0', '0', '0'],
        'Set2': ['0', '0', '0', '0'],
        'Set3': ['0', '0', '0', '0'],
        'High Stat': ['', '', '', ''],
        'High Stat1': ['', '', '', '']}
df = pd.DataFrame(data)
</code></pre>
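<p>One sketch that sidesteps <code>wide_to_long</code>'s shared-suffix problem: melt all six stat columns, split each melted name like <code>"Middle Stat1"</code> into a <code>Range</code> part and a which-column part, then pivot the latter back out (the intermediate names <code>long</code>/<code>var</code>/<code>which</code> are mine, and the column order differs slightly from the desired output):</p>

```python
import pandas as pd

data = {
    "Start": ['8/1/2013', '8/1/2013'], "Date": ['9/1/2013', '9/1/2013'],
    "End": ['10/1/2013', '10/1/2013'], "Area": ['NY', 'CA'],
    "Final": ['3/1/2023', '3/1/2023'], "Type": ['CC', 'AA'],
    "Middle Stat": [226, 130], "Low Stat": [20, 50], "High Stat": [10, 0],
    "Middle Stat1": [0, 0], "Low Stat1": [0, 0], "High Stat1": [0, 0],
}
df = pd.DataFrame(data)

id_cols = ['Start', 'Date', 'End', 'Area', 'Final', 'Type']
# melt, then split "Middle Stat1" into Range="Middle" / which="Stat1"
long = df.melt(id_vars=id_cols, var_name='var', value_name='value')
long[['Range', 'which']] = long['var'].str.split(' ', n=1, expand=True)
# pivot the which-part back into separate Stat / Stat1 columns
out = (long.pivot(index=id_cols + ['Range'], columns='which', values='value')
           .reset_index()
           .rename_axis(columns=None))
print(out[['Area', 'Range', 'Stat', 'Stat1']])
```

<p>This assumes each combination of the id columns uniquely identifies a row; if not, a <code>reset_index()</code> row id would need to be added to the pivot index first.</p>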
|
<python><pandas><numpy>
|
2023-04-06 22:10:56
| 3
| 4,428
|
Lynn
|
75,953,893
| 7,587,176
|
Connecting to snowflake in Jupyter Notebook
|
<p>I have a very basic script that connects with the Snowflake Python connector, but once I drop it into a Jupyter notebook I get the error below and really have no idea why.</p>
<pre><code>import snowflake.connector

conn = snowflake.connector.connect(account='account',
                                   user='user',
                                   password='password',
                                   database='db')
</code></pre>
<p>ERROR</p>
<pre><code>OperationalError Traceback (most recent call last)
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:1125, in SnowflakeConnection._authenticate(self, auth_instance)
1124 try:
-> 1125 auth.authenticate(
1126 auth_instance=auth_instance,
1127 account=self.account,
1128 user=self.user,
1129 database=self.database,
1130 schema=self.schema,
1131 warehouse=self.warehouse,
1132 role=self.role,
1133 passcode=self._passcode,
1134 passcode_in_password=self._passcode_in_password,
1135 mfa_callback=self._mfa_callback,
1136 password_callback=self._password_callback,
1137 session_parameters=self._session_parameters,
1138 )
1139 except OperationalError as e:
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/auth/_auth.py:250, in Auth.authenticate(self, auth_instance, account, user, database, schema, warehouse, role, passcode, passcode_in_password, mfa_callback, password_callback, session_parameters, timeout)
249 try:
--> 250 ret = self._rest._post_request(
251 url,
252 headers,
253 json.dumps(body),
254 timeout=auth_timeout,
255 socket_timeout=auth_timeout,
256 )
257 except ForbiddenError as err:
258 # HTTP 403
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/network.py:716, in SnowflakeRestful._post_request(self, url, headers, body, token, timeout, _no_results, no_retry, socket_timeout, _include_retry_params)
714 pprint(ret)
--> 716 ret = self.fetch(
717 "post",
718 full_url,
719 headers,
720 data=body,
721 timeout=timeout,
722 token=token,
723 no_retry=no_retry,
724 socket_timeout=socket_timeout,
725 _include_retry_params=_include_retry_params,
726 )
727 logger.debug(
728 "ret[code] = {code}, after post request".format(
729 code=(ret.get("code", "N/A"))
730 )
731 )
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/network.py:814, in SnowflakeRestful.fetch(self, method, full_url, headers, data, timeout, **kwargs)
813 while True:
--> 814 ret = self._request_exec_wrapper(
815 session, method, full_url, headers, data, retry_ctx, **kwargs
816 )
817 if ret is not None:
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/network.py:937, in SnowflakeRestful._request_exec_wrapper(self, session, method, full_url, headers, data, retry_ctx, no_retry, token, **kwargs)
936 if not no_retry:
--> 937 raise e
938 logger.debug("Ignored error", exc_info=True)
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/network.py:868, in SnowflakeRestful._request_exec_wrapper(self, session, method, full_url, headers, data, retry_ctx, no_retry, token, **kwargs)
867 return return_object
--> 868 self._handle_unknown_error(method, full_url, headers, data, conn)
869 TelemetryService.get_instance().log_http_request_error(
870 "HttpRequestUnknownError",
871 full_url,
(...)
876 retry_count=retry_ctx.cnt,
877 )
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/network.py:992, in SnowflakeRestful._handle_unknown_error(self, method, full_url, headers, data, conn)
987 logger.error(
988 f"Failed to get the response. Hanging? "
989 f"method: {method}, url: {full_url}, headers:{headers}, "
990 f"data: {data}"
991 )
--> 992 Error.errorhandler_wrapper(
993 conn,
994 None,
995 OperationalError,
996 {
997 "msg": f"Failed to get the response. Hanging? method: {method}, url: {full_url}",
998 "errno": ER_FAILED_TO_REQUEST,
999 },
1000 )
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/errors.py:290, in Error.errorhandler_wrapper(connection, cursor, error_class, error_value)
274 """Error handler wrapper that calls the errorhandler method.
275
276 Args:
(...)
287 exception to the first handler in that order.
288 """
--> 290 handed_over = Error.hand_to_other_handler(
291 connection,
292 cursor,
293 error_class,
294 error_value,
295 )
296 if not handed_over:
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/errors.py:348, in Error.hand_to_other_handler(connection, cursor, error_class, error_value)
347 elif connection is not None:
--> 348 connection.errorhandler(connection, cursor, error_class, error_value)
349 return True
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/errors.py:221, in Error.default_errorhandler(connection, cursor, error_class, error_value)
220 done_format_msg = error_value.get("done_format_msg")
--> 221 raise error_class(
222 msg=error_value.get("msg"),
223 errno=None if errno is None else int(errno),
224 sqlstate=error_value.get("sqlstate"),
225 sfqid=error_value.get("sfqid"),
226 query=error_value.get("query"),
227 done_format_msg=(
228 None if done_format_msg is None else bool(done_format_msg)
229 ),
230 connection=connection,
231 cursor=cursor,
232 )
OperationalError: 250003: Failed to get the response. Hanging? method: post, url: https://xw81982.us-east4.gcp.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?request_id=cedc0862-8df4-4cb1-a617-5216d2a16254&databaseName=Product&request_guid=435b4fa9-7df9-494c-b0b1-5238c26c032e
The above exception was the direct cause of the following exception:
OperationalError Traceback (most recent call last)
Cell In[6], line 2
1 import snowflake.connector
----> 2 conn = snowflake.connector.connect(account='xw81982.us-east4.gcp.snowflakecomputing.com',
3 user='03pes03',
4 password='Copper333,,',
5 database='Product')
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/__init__.py:51, in Connect(**kwargs)
50 def Connect(**kwargs) -> SnowflakeConnection:
---> 51 return SnowflakeConnection(**kwargs)
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:304, in SnowflakeConnection.__init__(self, **kwargs)
302 self.converter = None
303 self.__set_error_attributes()
--> 304 self.connect(**kwargs)
305 self._telemetry = TelemetryClient(self._rest)
307 # get the imported modules from sys.modules
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:571, in SnowflakeConnection.connect(self, **kwargs)
569 connection_diag.generate_report()
570 else:
--> 571 self.__open_connection()
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:839, in SnowflakeConnection.__open_connection(self)
835 else:
836 # okta URL, e.g., https://<account>.okta.com/
837 self.auth_class = AuthByOkta(application=self.application)
--> 839 self.authenticate_with_retry(self.auth_class)
841 self._password = None # ensure password won't persist
842 self.auth_class.reset_secrets()
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:1099, in SnowflakeConnection.authenticate_with_retry(self, auth_instance)
1096 def authenticate_with_retry(self, auth_instance) -> None:
1097 # make some changes if needed before real __authenticate
1098 try:
-> 1099 self._authenticate(auth_instance)
1100 except ReauthenticationRequest as ex:
1101 # cached id_token expiration error, we have cleaned id_token and try to authenticate again
1102 logger.debug("ID token expired. Reauthenticating...: %s", ex)
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:1170, in SnowflakeConnection._authenticate(self, auth_instance)
1168 except OperationalError as auth_op:
1169 if auth_op.errno == ER_FAILED_TO_CONNECT_TO_DB:
-> 1170 raise auth_op from e
1171 logger.debug("Continuing authenticator specific timeout handling")
1172 continue
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/connection.py:1147, in SnowflakeConnection._authenticate(self, auth_instance)
1145 while True:
1146 try:
-> 1147 auth_instance.handle_timeout(
1148 authenticator=self._authenticator,
1149 service_name=self.service_name,
1150 account=self.account,
1151 user=self.user,
1152 password=self._password,
1153 )
1154 auth.authenticate(
1155 auth_instance=auth_instance,
1156 account=self.account,
(...)
1166 session_parameters=self._session_parameters,
1167 )
1168 except OperationalError as auth_op:
File ~/Library/Python/3.8/lib/python/site-packages/snowflake/connector/auth/by_plugin.py:214, in AuthByPlugin.handle_timeout(***failed resolving arguments***)
212 if not self._retry_ctx.should_retry():
213 self._retry_ctx.reset()
--> 214 raise OperationalError(
215 msg=f"Could not connect to Snowflake backend after {self._retry_ctx.get_current_retry_count()} attempt(s)."
216 "Aborting",
217 errno=ER_FAILED_TO_CONNECT_TO_DB,
218 )
219 else:
220 logger.debug(
221 f"Hit connection timeout, attempt number {self._retry_ctx.get_current_retry_count()}."
222 " Will retry in a bit..."
223 )
OperationalError: 250001: Could not connect to Snowflake backend after 0 attempt(s).Aborting
</code></pre>
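<p>Note the doubled domain in the failing URL (<code>...snowflakecomputing.com.snowflakecomputing.com</code>): the connector appends <code>.snowflakecomputing.com</code> itself, so <code>account</code> should be the bare account identifier (here <code>xw81982.us-east4.gcp</code>), not the full hostname. A stdlib sketch of that normalization (<code>normalize_account</code> is my own helper, not part of the connector):</p>

```python
SUFFIX = ".snowflakecomputing.com"

def normalize_account(account: str) -> str:
    # Strip the domain if it was pasted in; the connector re-appends it.
    return account[: -len(SUFFIX)] if account.endswith(SUFFIX) else account

print(normalize_account("xw81982.us-east4.gcp.snowflakecomputing.com"))
# xw81982.us-east4.gcp
print(normalize_account("xw81982.us-east4.gcp"))  # unchanged
```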
|
<python><snowflake-cloud-data-platform>
|
2023-04-06 21:22:58
| 1
| 1,260
|
0004
|
75,953,757
| 6,392,779
|
Selenium Select Div By Class, no such element
|
<p>I am trying to use Selenium to open a webpage and click an element (a maximize-screen button that's not really a button) to maximize the video. I've tried a few variations of finding the element by XPath, but none have worked. Here is my code below; the webpage is accessible. I really just need the XPath, or to know how this XPath should be formatted. Appreciate any help!</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('excludeSwitches', ['enable-automation'])
options.add_experimental_option("useAutomationExtension", False)
driver = webdriver.Chrome(options=options, executable_path='./driver/chromedriver')
DejeroControlVideo1 = "https://control.dejero.com/return_video/9I_HsFn7NYHiKvtJbHa3WQ"
driver.get(DejeroControlVideo1)
#driver.maximize_window()
time.sleep(2)
PreviewSwitch = driver.find_element(By.XPATH, "//span[text()='ON']")
PreviewSwitch.click()
print("preview")
FullscreenButton = driver.find_element(By.XPATH, '//span[contains(concat(" ", normalize-space(@class), " "), " lh-solid ")]')[0]
FullscreenButton.click()
print("full screen")
time.sleep(5)
</code></pre>
|
<python><selenium-webdriver>
|
2023-04-06 21:02:31
| 2
| 901
|
nick
|
75,953,653
| 6,878,762
|
Pyspark: optimize getting proportion of df at each level of each categorical column
|
<p>TLDR: I'm new to pyspark and I think I'm not being "sparky" while trying to do a bunch of aggregations.</p>
<p>I have a set of data for which I need to know the proportion of data at each level of each categorical column. For example if I start with the following:</p>
<pre><code>|box|potato|country|
|1 |red |usa |
|1 |red |mexico |
|1 |yellow|canada |
|1 |red |canada |
|1 |red |mexico |
</code></pre>
<p>I want to end up with:</p>
<pre><code>|box|potato.red|potato.yellow|country.usa|country.mexico|country.canada|
|1 |0.80 |0.20 |0.20 |0.40 |0.40 |
</code></pre>
<p>So, my output is a dataframe with one row per dataset about my boxes of potatoes and a number of columns equal to the sum of each unique level of each categorical column (in this case 2 + 3 = 5) plus one for the dataset identifier.</p>
<p>I'd like to optimize my approach, as the actual, non-potato data is much larger and has ~100 categorical columns with anywhere from 2 to 100 levels.</p>
<p>I think my primary problem is that while I'm implementing this in pyspark, I'm not actually being 'sparky' in how I do it. Any tips on optimizing via a more pyspark-appropriate approach are appreciated.</p>
<p>Current approach:</p>
<pre><code>import pyspark.sql.functions as f
from pyspark.sql import Row

box_of_potatos = <readindata>
sz = box_of_potatos.count()
proportion = (f.count(f.lit(1)) / sz).alias('proportion')

def categorical_lvl(df, cat_attribute, dataset_id):
    smry = df.groupBy(cat_attribute).agg(proportion)
    smry = (smry.withColumn('column_modified', f.concat(f.lit(cat_attribute), f.lit("."), f.col(cat_attribute)))
                .withColumn('id', f.lit(dataset_id)))
    smry = smry.groupBy('id').pivot('column_modified').avg('proportion').fillna(0)
    return smry

def process_potatos(df, dataset_id):
    c_str = [c for c, t in df.dtypes if t.startswith('string')]
    foo = spark.createDataFrame([Row(id=dataset_id)])
    for i in c_str:
        bar = categorical_lvl(df, i, dataset_id)
        foo = foo.join(bar, ['id'])
    return foo

output_for_one_box = process_potatos(box_of_potatos, 1)
</code></pre>
|
<python><dictionary><optimization><pyspark><aggregate>
|
2023-04-06 20:48:34
| 1
| 347
|
HJT
|
75,953,556
| 1,574,054
|
VS Code cannot find python libpython3.10.so
|
<p>I am trying to select a virtual environment as the default interpreter for Python in VS Code, but this fails. In the terminal I can, without issues, run</p>
<pre><code>$ <venv_path>/python
Python 3.10.8 (main, Mar 14 2023, 20:30:44) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
</code></pre>
<p>or</p>
<pre><code>which python
<venv_path>/python
</code></pre>
<p>after the environment has been activated. Furthermore, in the terminal, I can see that</p>
<pre><code>echo $LD_LIBRARY_PATH
...:/opt/<...>/lib/:...
</code></pre>
<p>which means that the libraries can be found from the terminal, so Python can run. Now the problem: when I try to select <code><venv_path>/python</code> as the interpreter for the opened project, this fails with an error message. In the output window I see the following:</p>
<blockquote>
<p>[Error: Command failed: <venv_path>/python -I /<...>/.vscode-server/extensions/ms-python.python-2023.6.0/pythonFiles/get_output_via_markers.py /<...>/.vscode-server/extensions/ms-python.python-2023.6.0/pythonFiles/interpreterInfo.py <br>
<venv_path>/python: error while loading shared libraries: libpython3.10.so.1.0: cannot open shared object file: No such file or directory</p>
</blockquote>
<p>Which means that VS Code was not able to find the library. Why is that, and how can I fix it?</p>
|
<python><visual-studio-code><environment-variables><shared-libraries>
|
2023-04-06 20:35:17
| 1
| 4,589
|
HerpDerpington
|
75,953,432
| 4,154,548
|
Reticulate sees some (but not all) python packages using Rstudio on Ubuntu
|
<p>Setting a conda environment (in this case named <code>r420v1</code>, where python is "/home/tjm/anaconda3/envs/r420v1/bin/python3") in reticulate "works" in that it sets the appropriate version of python, libpython, and python home. However, it identifies only numpy (correct version 1.24.2, also seen in <code>r420v1</code>) as the sole package available, even though there are dozens more.</p>
<pre><code>> use_python("/home/tjm/anaconda3/envs/r420v1/bin/python3")
> # use_condaenv("r420v1")
> library(reticulate)
> py_config()
python: /home/tjm/anaconda3/envs/r420v1/bin/python3
libpython: /home/tjm/anaconda3/envs/r420v1/lib/libpython3.11.so
pythonhome: /home/tjm/anaconda3/envs/r420v1:/home/tjm/anaconda3/envs/r420v1
version: 3.11.2 | packaged by conda-forge | (main, Mar 31 2023, 17:51:05) [GCC 11.3.0]
numpy: /home/tjm/anaconda3/envs/r420v1/lib/python3.11/site-packages/numpy
numpy_version: 1.24.2
NOTE: Python version was forced by use_python function
</code></pre>
<p>Moreover, the other python packages I need, <code>gseapy</code>, <code>matplotlib</code>, and <code>pandas</code>, are all installed in the conda environment <code>r420v1</code> but cannot be imported. Yet they show up when I run <code>py_list_packages()</code> (see the bottom row for <code>gseapy</code>):
<a href="https://i.sstatic.net/0V8KF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0V8KF.png" alt="enter image description here" /></a></p>
<p>My script fails at the python block where I import <code>gseapy</code>:</p>
<pre><code>> library(reticulate)
> use_condaenv("r405v1", required=TRUE)
Warning: Previous request to `use_python("/home/tjm/anaconda3/envs/r420v1/bin/python3", required = TRUE)` will be ignored. It is superseded by request to `use_python("/home/tjm/anaconda3/envs/r405v1/bin/python")
> reticulate::repl_python()
>>> import numpy as np
>>> np.__version__
'1.24.2'
>>> import gseapy as gp
ModuleNotFoundError: No module named 'gseapy'
</code></pre>
<p>Previously, when I ran a simplified version of my script in the RStudio console, focusing only on reticulate, I got a completely different error:</p>
<pre><code>> library(reticulate)
> use_condaenv("r405v1", required=TRUE)
Warning: Previous request to `use_python("/home/tjm/anaconda3/envs/r420v1/bin/python3", required = TRUE)` will be ignored. It is superseded by request to `use_python("/home/tjm/anaconda3/envs/r405v1/bin/python")
> reticulate::repl_python()
>>> import gseapy as gp
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/tjm/anaconda3/envs/r420v1/lib/python3.11/site-packages/pandas/_libs/window/aggregations.cpython-311-x86_64-linux-gnu.so)
</code></pre>
<p>I'm using:</p>
<ul>
<li>R 4.2.2</li>
<li>reticulate 1.28</li>
<li>RStudio "Elsbeth Geranium" Release (7d165dcf, 2022-12-03) for Ubuntu Bionic,</li>
<li>Ubuntu 20.04.2 "Focal Fossa"</li>
</ul>
|
<python><r><rstudio><conda><reticulate>
|
2023-04-06 20:15:50
| 0
| 2,906
|
Thomas Matthew
|
75,953,418
| 5,091,720
|
Selenium Other ways to find_element for onclick
|
<p>I am looking at a table that has rows like <code><tr onclick="setValue('115751')" role="row" class="odd"></code>, as shown below. I am trying to use Selenium's <code>find_element</code> to click distinct rows based on the <code>setValue</code>. I normally use XPath, so I tried that, but a positional XPath does not work because sometimes the row order changes.</p>
<pre><code>my_item = '//*[@id="resulttable"]/tbody/tr[1]'
mychoice = driver.find_element(By.XPATH, my_item)
mychoice.click()
</code></pre>
<p>How do I get Selenium to click <code>my_item</code> based on the setValue or on the "Recordation No" shown in the table?
I have a little demo of the content from the website below.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>table, th, td {
border: 1px solid;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><table width="100%" class="display dataTable no-footer" id="resulttable" cellspacing="0" role="grid" aria-describedby="resulttable_info" style="width: 100%;">
<thead>
<tr role="row">
<th>Statement</th>
<th>Recordation No</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr onclick="setValue('115751')" role="row" class="odd">
<td>2022-3301051</td>
<td class="hidden-xs hidden-sm sorting_1">
3301051</td>
<td>Submitted</td>
</tr>
<tr onclick="setValue('115529')" role="row" class="even">
<td>2022-3301053</td>
<td class="hidden-xs hidden-sm sorting_1">
3301053</td>
<td>Submitted</td>
</tr>
<tr onclick="setValue('115201')" role="row" class="odd">
<td>2022-3301309</td>
<td class="hidden-xs hidden-sm sorting_1">
3301309</td>
<td>Not Submitted</td>
</tr>
<tr onclick="setValue('115893')" role="row" class="even">
<td>2022-3301310</td>
<td class="hidden-xs hidden-sm sorting_1">
3301310</td>
<td>Not Submitted</td>
</tr>
<tr onclick="setValue('115497')" role="row" class="odd">
<td>2022-3301313</td>
<td class="hidden-xs hidden-sm sorting_1">
3301313</td>
<td>Not Submitted</td>
</tr>
</tbody>
</table></code></pre>
</div>
</div>
</p>
|
<python><selenium-webdriver>
|
2023-04-06 20:13:25
| 1
| 2,363
|
Shane S
|
75,953,279
| 4,212,158
|
ModuleNotFoundError: No module named 'pandas.core.indexes.numeric' using Metaflow
|
<p>I used Metaflow to load a Dataframe. It was successfully unpickled from the artifact store, but when I try to view its index using <code>df.index</code>, I get an error that says <code>ModuleNotFoundError: No module named 'pandas.core.indexes.numeric'</code>. Why?</p>
<p>I've looked at other answers with similar error messages <a href="https://stackoverflow.com/questions/51285798">here</a> and <a href="https://stackoverflow.com/questions/37371451">here</a>, which say that this is caused by trying to unpickle a dataframe with older versions of Pandas. However, my error is slightly different, and it is not fixed by upgrading Pandas (<code>pip install pandas -U</code>).</p>
|
<python><pandas><pickle><netflix-metaflow>
|
2023-04-06 19:52:50
| 5
| 20,332
|
Ricardo Decal
|
75,953,229
| 1,578,210
|
Referencing a class variable
|
<p>I have a class with several variables inside</p>
<pre class="lang-py prettyprint-override"><code>class myClass():
def __init__(self):
self.first = ""
self.second = ""
self.third = ""
</code></pre>
<p>Ok, so if I then initialize a variable to that class, I understand that I can easily reference or set any of those class variables like this</p>
<pre><code>myClassVar = myClass()
myClassVar.first = "foo"
myClassVar.second = "bar"
</code></pre>
<p>What I want to know is: what if the attribute I want to set is itself determined by another variable? For instance, if I looped through a list ["first", "second", "third"], is there any way to reference those attributes like this:</p>
<pre><code>varList = ["first", "second", "third"]
for item in varList:
myClass.item = "whatever I want to set it to"
</code></pre>
<p>Something like what I just did will result in "myClass has no 'item' member"</p>
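For context, the dynamic-attribute pattern being asked about would look something like this sketch using `setattr`/`getattr`, which resolve an attribute name from a string at runtime:

```python
class myClass():
    def __init__(self):
        self.first = ""
        self.second = ""
        self.third = ""

varList = ["first", "second", "third"]
myClassVar = myClass()
for item in varList:
    # setattr(obj, "name", value) is equivalent to obj.name = value
    setattr(myClassVar, item, "whatever I want to set it to")

print(getattr(myClassVar, "second"))
```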
|
<python><python-3.x>
|
2023-04-06 19:43:55
| 1
| 437
|
milnuts
|
75,953,212
| 89,480
|
How can I render a text to multiple filled polygons?
|
<p>I need to extrude text into a 3D triangle mesh using trimesh, open3d, or something similar.</p>
<p>Since these libraries don't contain any font-related functions, I want to obtain the glyph outlines as polygons and extrude those.</p>
<p>I've tried using fontTools and ended at using its <code>RecordingPen</code> to capture the outlines of glyphs. My problem now is that those use curves and I need just straight line segment polylines. I looked through the pens provided in fontTools and didn't find one that could approximate the curves with line segments. I also don't really feel like implementing this conversion myself.</p>
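If no ready-made pen exists, the conversion itself is small — a hedged sketch of flattening one cubic Bézier segment by uniform sampling (a real pen would apply this to each curveTo captured by the `RecordingPen`; the step count trades accuracy for polygon size):

```python
def flatten_cubic(p0, p1, p2, p3, steps=16):
    """Approximate a cubic Bezier with (steps + 1) points on a polyline."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        mt = 1 - t
        # Standard cubic Bernstein evaluation
        x = mt**3 * p0[0] + 3 * mt**2 * t * p1[0] + 3 * mt * t**2 * p2[0] + t**3 * p3[0]
        y = mt**3 * p0[1] + 3 * mt**2 * t * p1[1] + 3 * mt * t**2 * p2[1] + t**3 * p3[1]
        pts.append((x, y))
    return pts
```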
|
<python><fonts><geometry><polygon>
|
2023-04-06 19:42:17
| 1
| 3,958
|
cube
|
75,952,937
| 869,809
|
how to load data into mysql database when a flask app is running and "protecting" the data?
|
<p>I have several flask (python=3.10) applications running on my server (Ubuntu 20.04.6 LTS). They are, as of now, working great and being served via Apache and mod_wsgi. My requirements.txt is below.</p>
<p>For example, this is from the conf file in the sites-enabled directory:</p>
<pre><code> WSGIDaemonProcess opinions threads=2 user=ray python-home=/home/ray/opinions/.venv home=/home/ray/opinions/
WSGIScriptAlias /opinions /home/ray/opinions/opinions.py
<Directory /home/ray/opinions/>
WSGIProcessGroup opinions
Require all granted
</Directory>
</code></pre>
<p>This app is using sqlalchemy to connect to a mysql database.</p>
<p>The problem is that if I want to dump into the database (with, for example, "mysqldump <db1> | mysql meetings"), I cannot do it without shutting off the flask application. AFAIK the way to do this is edit the conf file, comment out this app and restart apache. Then, after the data write, edit the conf file again, uncomment the app and restart apache again. It is a PITA to do this every time.</p>
<p>There are obvious reasons for flask to want to protect the data, but I want to be able to turn off this safety feature. I cannot just take the lock statements out of the SQL that is writing the data. That does not work.</p>
<p>Do I need to change something in my flask setup, my setup of the sqlalchemy connection, the mod_wsgi setup, or what? Any suggestions appreciated.</p>
<p>I see <a href="https://stackoverflow.com/questions/12384323/when-a-flask-application-with-flask-sqlalchemy-is-running-how-do-i-use-mysql-cl">When a Flask application with Flask-SQLAlchemy is running, how do I use MySQL client on the same database at the same time?</a> but wow that looks hack-ish. And old. Is this still the way?</p>
<pre><code>$ cat requirements.txt
click==8.1.3
Flask==2.2.2
greenlet==2.0.0
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.1
PyMySQL==1.0.2
python-dotenv==0.21.0
SQLAlchemy==1.4.42
Werkzeug==2.2.2
</code></pre>
|
<python><mysql><flask><flask-sqlalchemy>
|
2023-04-06 19:05:47
| 0
| 3,616
|
Ray Kiddy
|
75,952,667
| 6,794,223
|
How to use return value (str) of one activity as input a second activity in temporal python sdk?
|
<p>I'm using the python sdk for <a href="https://temporal.io/" rel="nofollow noreferrer">https://temporal.io/</a>.
I have a workflow in which I'd like to execute two sequential activities.</p>
<ol>
<li>Do X and return a filepath.</li>
<li>Do Y with data at that filepath.</li>
</ol>
<pre><code>@workflow.defn
class ScraperWorkflow:
@workflow.run
async def run(self, scraper_input: ScraperInput):
scraper_result = await workflow.execute_activity(
ercot_scraper, # activity that takes scraper_input and returns a path
scraper_input,
)
extractor_result = await workflow.execute_activity(
extract_activity,
path_from_previous_activity,
)
return
</code></pre>
<p>How do I get <code>path_from_previous_activity</code> from the first activity?</p>
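Outside the SDK, the sequencing in question is just ordinary awaiting — a plain-asyncio analogue of the two-step workflow (function names are hypothetical stand-ins for the activities):

```python
import asyncio

async def scraper(scraper_input):
    # stand-in for the first activity: does work and returns a filepath
    return f"/tmp/{scraper_input}.csv"

async def extractor(path):
    # stand-in for the second activity: consumes that filepath
    return f"extracted:{path}"

async def run(scraper_input):
    path = await scraper(scraper_input)  # awaited result of activity 1...
    return await extractor(path)         # ...passed directly into activity 2

result = asyncio.run(run("ercot"))
```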
|
<python><temporal-workflow>
|
2023-04-06 18:28:20
| 2
| 312
|
user6794223
|
75,952,446
| 4,918,765
|
Get table description from Python sqlalchemy connection object and table name as a string
|
<p>Starting from a <code>sqlalchemy</code> connection object in Python and a table name as a string how do I get table properties, eg column names and datatypes.</p>
<p>For example, connect to a database in <code>sqlalchemy</code></p>
<pre><code>from sqlalchemy import create_engine
conn = create_engine('mssql+pyodbc://...driver=ODBC+Driver+17+for+SQL+Server').connect()
</code></pre>
<p>Then <code>conn</code> is a <code>sqlalchemy</code> connection object</p>
<pre><code>In [1]: conn
Out[1]: <sqlalchemy.engine.base.Connection at 0x7feb6efef070>
</code></pre>
<p><strong>How do I get table properties based on a table name as a string, eg <code>table = '...'</code></strong>?</p>
<p>This <em>should</em> work but instead creates an empty DataFrame</p>
<pre><code>from sqlalchemy import text
import pandas as pd
query = f"""SELECT * FROM information_schema.columns WHERE table_name='{table}'"""
df = pd.read_sql_query(text(query), conn)
</code></pre>
<pre><code>In [2]: df
Out[2]:
Empty DataFrame
Columns: [TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, COLUMN_DEFAULT, IS_NULLABLE, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, CHARACTER_OCTET_LENGTH, NUMERIC_PRECISION, NUMERIC_PRECISION_RADIX, NUMERIC_SCALE, DATETIME_PRECISION, CHARACTER_SET_CATALOG, CHARACTER_SET_SCHEMA, CHARACTER_SET_NAME, COLLATION_CATALOG, COLLATION_SCHEMA, COLLATION_NAME, DOMAIN_CATALOG, DOMAIN_SCHEMA, DOMAIN_NAME]
Index: []
</code></pre>
<pre><code>versions:
sqlalchemy - 2.0.4
pandas - 1.5.3
</code></pre>
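For comparison, here is a hedged sketch using SQLAlchemy's `inspect()` helper on an in-memory SQLite database (a stand-in for the mssql engine, which would be inspected the same way):

```python
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")  # stand-in for mssql+pyodbc
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE mytable (id INTEGER, name VARCHAR(20))"))
    # The inspector reads column metadata without querying information_schema
    columns = inspect(conn).get_columns("mytable")

for col in columns:
    print(col["name"], col["type"])
```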
|
<python><pandas><sqlalchemy>
|
2023-04-06 17:58:34
| 1
| 2,723
|
Russell Burdt
|
75,952,237
| 4,329,348
|
Is there a convenient tool like Makefile for Python that will append a path to PYTHONPATH?
|
<p>I have an issue where I want to add a path to PYTHONPATH. Of course, I don't want to modify my <code>.bashrc</code> to get just this project running.</p>
<p>PyCharm for example will add to PYTHONPATH the root contents when you create a configuration and as a result when you run the program from within PyCharm it will work.</p>
<p>I was wondering if there is a configuration program like Makefile for Python where you can define configurations like "run this script but also append the current path to PYTHONPATH". An equivalent of what PyCharm does, but for the command line.</p>
|
<python>
|
2023-04-06 17:26:58
| 1
| 1,219
|
Phrixus
|
75,952,055
| 4,329,348
|
Can't understand why Python won't import the file
|
<p>I have the following project structure:</p>
<pre><code>├── my_app
├── __init__.py
├── my_pkg
│ ├── __init__.py
│ ├── utils.py
│ ├── app.py
├── other_pkg...
│ ├── __init__.py
│ ├── ...
├── ...
</code></pre>
<p>In <code>my_app/my_pkg/app.py</code> I want to import <code>utils</code>. I want to be able to call that Python file from anywhere. For example, I want to be in <code>my_app</code> and run <code>python my_pkg/app.py</code>.</p>
<p>Adding the following line in <code>app.py</code> throws the exception <code>ModuleNotFoundError: No module named my_app</code>:</p>
<pre><code>from my_app.my_pkg import utils
</code></pre>
<p>I know I can add the path to sys.path but I don't want to do that. I don't understand why Python can't consider the above project structure as a package and just import the files. I find it really ugly to add paths to sys to get Python to find the files!</p>
<p>Any suggestions?</p>
|
<python>
|
2023-04-06 17:06:19
| 1
| 1,219
|
Phrixus
|
75,952,042
| 13,812,982
|
ExcelWriter using openpyxl engine ignoring date_format parameter
|
<p>I have read quite a few answers on this, but when I run my code I don't get the same result.</p>
<p>I am using pandas 2.0.0 and openpyxl 3.1.2 on Python 3.9</p>
<p>This is a reduced example of my issue, which is that <em>I can't get the ExcelWriter to respect my choice of date format</em>. I am trying to append a new sheet to an existing Excel .xlsx file.</p>
<pre><code>import pandas as pd
import datetime
filePath = 'c:\\temp\\myfile.xlsx'
writer = pd.ExcelWriter(filePath,mode='a',engine='openpyxl',if_sheet_exists='replace',date_format='DD/MM/YYYY')
df = pd.DataFrame([datetime.date(2023,4,7)],columns=['Date'])
df.to_excel(writer,sheet_name='Data')
writer.close()
</code></pre>
<p>The result in Excel is this:</p>
<p><a href="https://i.sstatic.net/73EEC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/73EEC.jpg" alt="enter image description here" /></a></p>
<p>I have explicitly set the type of the value in the dataframe to be <code>datetime.date</code>. I have tried using <code>datetime_format</code> or indeed both together but to no avail.</p>
<p>I have also tried <code>xlsxwriter</code> but it seems this engine does not allow appending to an existing workbook.</p>
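One workaround worth sketching (hedged — it bypasses `date_format` entirely) is to set the openpyxl `number_format` on the written cells directly, since the worksheet object is reachable through `writer.sheets`:

```python
import datetime
import pandas as pd

df = pd.DataFrame([datetime.date(2023, 4, 7)], columns=["Date"])
with pd.ExcelWriter("myfile.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, sheet_name="Data")
    ws = writer.sheets["Data"]
    # Column B holds the Date values (column A is the index); skip the header row
    for row in ws.iter_rows(min_row=2, min_col=2, max_col=2):
        for cell in row:
            cell.number_format = "DD/MM/YYYY"
```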
|
<python><excel><pandas>
|
2023-04-06 17:04:35
| 1
| 4,331
|
DS_London
|
75,951,990
| 4,730,598
|
Problem with if statement: not-equal rule doesn't work?
|
<p>I have 1000 dataframes that have same structure, but some of them may contain strings as values. With all those frames I need to do the same calculations, just exclude rows in those dataframes where subsrings occur. The example of the structure of the dataset with string is as follows:</p>
<pre><code>time x y z
0.00 run_failed run_failed run_failed
0.02 run_failed run_failed run_failed
0.03 test_failed test_failed test_failed
0.04 44 321 644
0.04 44 321 644
0.04 44 321 644
0.03 test_failed test_failed test_failed
0.04 44 321 644
0.04 44 321 644
</code></pre>
<p>If you look at <code>df.dtypes</code>, the dfs that contain the substring will always be of object type, while "normal" dfs are float64.</p>
<p>So in order to deal with it I made the following script:</p>
<pre><code>for df in dfs:
add = 0
z = pd.read_csv(df)
if type(z["x"]) != np.float64:
bmb = z[z['x'].str.contains('failed')]
z = z.drop(bmb.index)
add = len(bmb)
print(add)
....
    # ...then the code for the calculations, assuming that any rows with strings were dropped in the if statement
</code></pre>
<p>But when I run the code it raises the error "Can only use .str accessor with string values!" inside the if statement block, even though the dataset is fully of float64 type, and it is not at all clear to me why it tried to process the <code>bmb = z[z['x'].str.contains('failed')]</code> line.</p>
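To illustrate what may be going wrong, a minimal reproduction of the type check with synthetic data — a column accessed as `z["x"]` is always a Series, never `np.float64`, so the condition is always True:

```python
import numpy as np
import pandas as pd

z = pd.DataFrame({"x": [1.0, 2.0]})

# type(z["x"]) compares the container's class, not the element dtype,
# so `type(z["x"]) != np.float64` holds even for pure-float columns.
print(type(z["x"]))   # a pandas Series
print(z["x"].dtype)   # float64 -- the element dtype lives here
```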
|
<python><pandas><numpy>
|
2023-04-06 16:57:27
| 2
| 403
|
Ison
|
75,951,948
| 15,545,814
|
Pip installs packages in the wrong directory
|
<p>So I want to install the opencv-python package. I typed in pip install opencv-python and got this:</p>
<pre><code>Collecting opencv-python
Downloading opencv_python-4.7.0.72-cp37-abi3-win_amd64.whl (38.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.2/38.2 MB 3.1 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.17.0 in c:\users\leo westerburg burr\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from opencv-python) (1.24.2)
Installing collected packages: opencv-python
Successfully installed opencv-python-4.7.0.72
</code></pre>
<p>When I try and import the package (on IDLE) I get</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
import cv2
ModuleNotFoundError: No module named 'cv2'
</code></pre>
<p>This is the same for all the packages I have installed recently, like numpy. The thing is when I type <code>sys.path</code> into the IDLE I get</p>
<pre><code>C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib\idlelib
C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\python311.zip
C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\DLLs
C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib
C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311
C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib\site-packages
</code></pre>
<p>Which are all in the <code>AppData/Local/Programs</code> directory, however the packages are stored in <code>appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\...</code> as you can see when I installed opencv-python - which I find weird; why are they installed there and not in <code>programs\python</code> ?</p>
<p>I have tried reinstalling pip, and also downloading a newer version of python. What is weird is that I have Python311 and Python38 in my Python folder, but this weird folder that has the packages is python39?</p>
<p>So my question is: how do I get pip to install packages in <code>Programs\Python\Python311\...</code>, rather than <code>Packages\...</code> ?</p>
<p>Do I have to add something to my PATH?</p>
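A hedged diagnostic: printing `sys.executable` from inside IDLE shows exactly which interpreter it runs, and invoking pip *through* that interpreter guarantees packages land in that interpreter's own site-packages:

```python
import sys

print(sys.executable)  # the interpreter IDLE is actually running

# Installing against exactly this interpreter avoids the wrong-directory
# problem; this is the command to run in a shell:
install_cmd = [sys.executable, "-m", "pip", "install", "opencv-python"]
print(" ".join(install_cmd))
```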
|
<python><pip><python-module><python-packaging><pythonpath>
|
2023-04-06 16:51:59
| 2
| 512
|
LWB
|
75,951,840
| 6,458,245
|
Fast way to create dict of dicts and insert element in python?
|
<p>Is there a way to condense this code?</p>
<pre><code> key2 = 1
self.mydict = {1:{}}
if key1 in self.mydict.keys():
self.mydict[key1][key2] = None
else:
self.mydict[key1] = {}
self.mydict[key1][key2] = None
</code></pre>
<p>As you can see if the key doesn't exist I need to first initialize the dict inside the dict. Can I create a nested dict and insert an element into the nested dict automatically regardless of whether the inner dict exists or not?</p>
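The kind of condensation being asked about — a sketch using `dict.setdefault` or `collections.defaultdict`, either of which creates the inner dict on demand (sample keys are placeholders):

```python
from collections import defaultdict

key1, key2 = 1, 1

# Option 1: setdefault returns the existing inner dict, or inserts a new one
mydict = {}
mydict.setdefault(key1, {})[key2] = None

# Option 2: defaultdict builds the inner dict automatically on first access
auto = defaultdict(dict)
auto[key1][key2] = None
```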
|
<python><dictionary>
|
2023-04-06 16:39:14
| 1
| 2,356
|
JobHunter69
|
75,951,825
| 7,059,087
|
How to make PyTest use the `conftest.py` of a parent directory
|
<p>I am working on a suite of tests for a project that were not written by me. The way they were set up is as follows</p>
<pre><code>root/
conftest.py
testType1/
a.py
testType2/
etc.
</code></pre>
<p>The file <code>a.py</code> is as follows</p>
<pre><code>class TestClass:
value_thats_important = None
def test_to_test(self):
do work that uses self.value_thats_important
</code></pre>
<p>Here's the problem I am encountering: the way it was originally developed, <code>self.value_thats_important</code> is set during the <code>conftest.py</code> test generation. So if I run</p>
<pre><code>pytest root/ arg=ARG_THAT_I_NEED
</code></pre>
<p>or</p>
<pre><code>pytest testType1/ arg=ARG_THAT_I_NEED
</code></pre>
<p>It works totally fine. However, I also need to be able to run the test individually without calling every other test, and when I run</p>
<pre><code>pytest testType1/a.py::TestClass arg=ARG_THAT_I_NEED
</code></pre>
<p>I get the error that <code>Nonetype not subscriptable</code>, which tells me that the conftest logic is not being applied. I'm pretty new to pytest, so is there a way I can salvage this without having to rewrite the entire test?</p>
|
<python><python-3.x><testing><pytest>
|
2023-04-06 16:37:16
| 1
| 532
|
wjmccann
|
75,951,773
| 11,922,765
|
Plot temperature barplot with sorted axis categories
|
<p>I want to show the number of samples in specific ranges of the temperature. The problem is that the x-axis of the plot is not arranged in the proper order: lowest range first (negative temperatures), then larger ranges. In the plot below, -5750 is the lowest temperature, but it appears in the middle; it should be the first one on the x-axis.</p>
<pre><code> fig,ax = plt.subplots(figsize=(20,10))
ax = sns.histplot(df,x='Temperature range (C)',stat='percent',ax=ax)
ax.set_yscale('log')
plt.ylabel('log(percentage samples)')
plt.gcf().autofmt_xdate()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/nBGvD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nBGvD.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><seaborn><histogram>
|
2023-04-06 16:31:33
| 1
| 4,702
|
Mainland
|
75,951,765
| 2,103,050
|
tf_agents changing underlying suite_gym reward function
|
<p>I'm trying to modify the MountainCarContinuous-v0 environment from <code>suite_gym()</code> because training is getting stuck in a local minima. The default reward function penalizes large actions which are preferred for optimal solving. So I would like to try other reward functions to see if I can get it to train properly.</p>
<p>In base gym, I can use a wrapper as follows:</p>
<pre><code>import gymnasium as gym
env = gym.make("MountainCarContinuous-v0")
wrapped_env = gym.wrappers.TransformReward(env, lambda r: 0 if r <= 0 else 1)
state, info = wrapped_env.reset()
state, reward, terminated, truncated, info = wrapped_env.step([action])
# reward will now always be 0 or 1 depending on whether it reached the goal or not.
</code></pre>
<p>How can you do the same thing inside tf_agents?</p>
<pre><code>env = suite_gym.load("MountainCarContinuous-v0")
env = None # how do I modify the underlying environment?
</code></pre>
<p>I have tried modifying <code>env._env</code>, but I get a <code>reset() got an unexpected keyword argument 'seed'</code> error.</p>
<p>Another approach seems to be trying the following, but I don't know how to pass the gym wrapper here, especially because it requires a function inside the Class init.</p>
<pre><code> test = suite_gym.load("MountainCarContinuous-v0", gym_env_wrappers=[gym.wrappers.TransformReward])
</code></pre>
<p>What is the proper way to get ahold of the underlying gym env and modify it?</p>
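Since `gym_env_wrappers` expects callables that take only the env, one idea is binding the reward function with `functools.partial` — sketched here with a stand-in function in place of `gym.wrappers.TransformReward` (whose real signature, taking `(env, f)`, is an assumption here):

```python
from functools import partial

def transform_reward(env, f):
    # stand-in for gym.wrappers.TransformReward(env, f)
    return {"env": env, "reward_fn": f}

# Binding f turns the two-argument wrapper into a one-argument env -> env callable
reward_wrapper = partial(transform_reward, f=lambda r: 0 if r <= 0 else 1)

wrapped = reward_wrapper("dummy-env")
# then, hypothetically:
#   suite_gym.load("MountainCarContinuous-v0", gym_env_wrappers=[reward_wrapper])
```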
|
<python><reinforcement-learning><openai-gym><tensorflow-agents>
|
2023-04-06 16:30:44
| 0
| 377
|
brian_ds
|
75,951,753
| 1,827,854
|
Why does the __mul__ magic method behave differently than the human-readable version?
|
<p>I've been explaining Python magic functions to someone and I naturally showed these examples:</p>
<pre><code>In [1]: str("v").__mul__(5)
Out[1]: 'vvvvv'
In [2]: "v" * 5
Out[2]: 'vvvvv'
</code></pre>
<p>But then I moved on to how transitivity is implemented and tried also this example that failed:</p>
<pre><code>In [3]: 5 * "v"
Out[3]: 'vvvvv'
In [4]: int(5).__mul__("v")
Out[4]: NotImplemented
</code></pre>
<p>I was not able to explain why this last one did not work. Could you help me?</p>
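The missing piece may be the reflected operator: when the left operand's `__mul__` returns `NotImplemented`, Python falls back to the right operand's `__rmul__`, which can be shown directly:

```python
# int knows nothing about repeating strings, so it signals NotImplemented...
left = (5).__mul__("v")

# ...and Python then tries the right operand's reflected method instead
right = "v".__rmul__(5)

print(left, right)
```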
|
<python>
|
2023-04-06 16:29:21
| 2
| 625
|
mapto
|
75,951,678
| 1,028,133
|
A short expression to pick from two values based on bool?
|
<p>If one has two values in a list:</p>
<pre><code>choices = [1,2]
</code></pre>
<p>and one wants to assign these values to two variables based on a <code>bool</code> parameter (let's call it <code>ascending</code>) like this:</p>
<pre><code>if ascending:
a,b = choices[1], choices[0]
else:
a,b = choices[0], choices[1]
</code></pre>
<p>one can express this in one line like this:</p>
<pre><code>a,b = choices[ascending], choices[not ascending]
</code></pre>
<p>or</p>
<pre><code>a,b = choices[::(-1)**ascending]
</code></pre>
<p>I like the last one better, but it's probably not extremely transparent. What are other concise but more readable ways to do the same?</p>
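For comparison, the most explicit one-line alternative is probably a conditional expression, which keeps both orderings visible; `reversed` is another readable option:

```python
choices = [1, 2]
ascending = True

# conditional expression: both orderings are spelled out
a, b = (choices[1], choices[0]) if ascending else (choices[0], choices[1])

# or, reading `ascending` as "reverse the list order":
c, d = reversed(choices) if ascending else choices
```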
|
<python>
|
2023-04-06 16:21:31
| 2
| 744
|
the.real.gruycho
|
75,951,672
| 10,859,585
|
Pandas DataFrame with row list and column list
|
<p>I must be missing something, but I cannot find any guides on how to construct a pandas dataframe using both a list of rows and a list of columns. The purpose is that the <code>row_list</code> and <code>col_list</code> are updated in a loop and can hold more or fewer strings, but will always be equal to each other in length.</p>
<p>Using <a href="https://stackoverflow.com/a/42202892/10859585">transpose</a> works on the rows, but I'm not sure how to pass the columns in.</p>
<pre><code>row_list = ['a', 'b', 'c', 'd']
col_list = ['A', 'B', 'C', 'D']
df = pd.DataFrame(row_list, columns=col_list)
>>>
raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (4, 1), indices imply (4, 4)
df = pd.DataFrame(row_list).T
>>>
0 1 2 3
a b c d
</code></pre>
<p>Expected Output:</p>
<pre><code>A B C D
a b c d
</code></pre>
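For reference, the shape error above disappears if the row list is wrapped in an outer list, making it one row of four values rather than four rows of one value each:

```python
import pandas as pd

row_list = ['a', 'b', 'c', 'd']
col_list = ['A', 'B', 'C', 'D']

# [row_list] is a list of rows containing a single row of len(col_list) values
df = pd.DataFrame([row_list], columns=col_list)
print(df)
```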
|
<python><pandas><dataframe><list>
|
2023-04-06 16:20:29
| 1
| 414
|
Binx
|
75,951,653
| 6,039,697
|
Make a function return the same type of a overriden base class method
|
<p>I have an abstract base class <code>MyClass</code> which contains an abstract method <code>get_type</code> which returns a class type.</p>
<p>I also have a function <code>fn</code> which expects an instance of <code>MyClass</code>, and I want to type the return of <code>fn</code> as an instance of the same type returned by the overridden <code>get_type</code> of its <code>arg</code> argument.</p>
<p>For example, in the code below, I want intellisense to show the type of <code>my_fraction</code> as <code>Fraction</code> (not <code>MyFractionSubclass</code> or <code>Type[Fraction]</code>) and <code>my_decimal</code> as <code>Decimal</code> (not <code>MyDecimalSubclass</code> or <code>Type[Decimal]</code>), but both are being evaluated as <code>Any</code>.</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from decimal import Decimal
from fractions import Fraction
from typing import Type, TypeVar
T = TypeVar("T")
class MyClass(ABC):
@classmethod
@abstractmethod
def get_type(cls) -> Type[T]:
...
class MyDecimalSubclass(MyClass):
@classmethod
def get_type(cls):
return Decimal
class MyFractionSubclass(MyClass):
@classmethod
def get_type(cls):
return Fraction
def fn(arg: MyClass):
value = arg.get_type()("100")
return value
my_fraction = fn(MyFractionSubclass()) # should be of type `Fraction`
my_decimal = fn(MyDecimalSubclass()) # should be of type `Decimal`
print(my_fraction, my_decimal)
</code></pre>
<p>Note: there will be another subclasses of <code>MyClass</code></p>
<p>I tried a naive (maybe stupid) solution like this one, but it didn't work ("of course not"):</p>
<pre class="lang-py prettyprint-override"><code>def fn(arg: MyClass) -> "arg.get_type()()":
value = arg.get_type()("100")
return value
</code></pre>
<p>I also tried a hundred times with ChatGPT but it couldn't provide a solution</p>
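One direction worth sketching (treat it as a hypothesis — it runs, but whether a given type checker infers the narrow types is not verified here) is making `MyClass` generic in `T`, so `fn` can be annotated to return `T`:

```python
from abc import ABC, abstractmethod
from decimal import Decimal
from typing import Generic, Type, TypeVar

T = TypeVar("T")

class MyClass(ABC, Generic[T]):
    @classmethod
    @abstractmethod
    def get_type(cls) -> Type[T]:
        ...

class MyDecimalSubclass(MyClass[Decimal]):
    @classmethod
    def get_type(cls) -> Type[Decimal]:
        return Decimal

def fn(arg: "MyClass[T]") -> T:
    # the T bound in the argument's class propagates to the return type
    return arg.get_type()("100")

my_decimal = fn(MyDecimalSubclass())  # a checker should infer Decimal here
```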
|
<python><python-typing>
|
2023-04-06 16:17:54
| 0
| 1,184
|
Michael Pacheco
|
75,951,558
| 92,441
|
Beautiful Soup - ignore `<span>` while providing `string` to `find()` method
|
<p>I am parsing some text in Python, using BeautifulSoup4.</p>
<p>The address block starts with a cell like this:</p>
<pre><code><td><strong>Address</strong></td>
</code></pre>
<p>I find the above cell using <code>soup.find("td", "Address")</code></p>
<p>But, now some addresses have a highlight character too, like this:</p>
<pre><code><td><strong><span>*</span>Address</strong></td>
</code></pre>
<p>This has broken my matching. Is there still a way to find this TR?</p>
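A hedged sketch of one possibility: matching on the cell's *combined* text via a callable predicate instead of an exact `string`, which tolerates the extra `<span>` (`get_text()` concatenates text across child tags):

```python
from bs4 import BeautifulSoup

html = "<td><strong><span>*</span>Address</strong></td>"
soup = BeautifulSoup(html, "html.parser")

# Strip the highlight character and compare the flattened text
cell = soup.find(
    lambda tag: tag.name == "td"
    and tag.get_text(strip=True).lstrip("*") == "Address"
)
print(cell is not None)
```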
|
<python><html><beautifulsoup>
|
2023-04-06 16:04:00
| 2
| 2,442
|
ianmayo
|
75,951,472
| 11,512,576
|
how to set the name of returned index from df.columns in pandas
|
<p>I have a dataframe <code>df</code> in pandas. When I run <code>df.columns</code>, <code>Index(['Col1', 'Col2', 'Col3'], dtype='object', name='NAME')</code> is returned. What is the <code>name</code> here, and how can I update it? It doesn't appear in other dataframes; how can I add it? Thanks.</p>
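A minimal sketch of reading and setting that attribute — the `name` lives on the columns `Index` itself and can be assigned directly or via `rename_axis`:

```python
import pandas as pd

df = pd.DataFrame({"Col1": [1], "Col2": [2]})

df.columns.name = "NAME"                    # set or update in place
renamed = df.rename_axis(columns="OTHER")   # or produce a renamed copy
print(df.columns)
```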
|
<python><pandas><dataframe>
|
2023-04-06 15:54:18
| 2
| 491
|
Harry
|
75,951,469
| 15,178,267
|
How to check if a date is still less than 5 days in django query?
|
<p>I am trying to check if the date an order was created is still less than 5 days, then i want to display a <code>New Order</code> Text.</p>
<p>This is how i have tried doing this</p>
<pre><code>def vendor_orders(request):
five_days = datetime.now() + timedelta(days=5)
new_orders = CartOrder.objects.filter(payment_status="paid", date__lte=five_days).order_by("-id")
</code></pre>
<p>This returns all the orders in the database whose date is less than 5 days from now, which means all orders, both old and new, are being returned, and this is not what I am expecting.
If I manually create an order and set its date to some date in the future, e.g. July or August, it does not return that order.</p>
<p>How can I go about this logic? All I want is to display an order as new if the date on which the order was created is not yet older than 5 days.</p>
<h1>Template Update</h1>
<p>I want to check whether the order is a new order, and if so display a <strong>new</strong> badge.</p>
<p><strong>Views.py</strong></p>
<pre><code>
@login_required
def vendor_orders(request):
five_days = datetime.now() - timedelta(days=5) # this is 5 days ago
new_order = CartOrder.objects.filter(payment_status="paid", vendor=request.user.vendor, date__gte=five_days).order_by("-id")
order = CartOrder.objects.filter(payment_status="paid", vendor=request.user.vendor).order_by("-id")
context = {
"order":order,
"five_days":five_days,
"new_order":new_order,
}
return render(request, "vendor/vendor-order.html", context)
</code></pre>
<pre><code>{% for o in order %}
<tr>
<th scope="row">#{{o.oid}} <span class="badge">{% if o.date > five_days %}New{% endif %}</span></th>
<th scope="row"><a href="{% url 'vendor:order-detail' o.oid %}" class="btn btn-primary">View Order</a></th>
<td>{{o.date}}</td>
</tr>
{% endfor %}
</code></pre>
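The comparison in question, isolated from the ORM — a hedged sketch with plain datetimes showing that "new" means created *after* the five-days-ago cutoff, i.e. `date >= cutoff` (which is what `date__gte=five_days` expresses in the updated view):

```python
from datetime import datetime, timedelta

five_days_ago = datetime.now() - timedelta(days=5)

recent_order_date = datetime.now() - timedelta(days=2)
old_order_date = datetime.now() - timedelta(days=9)

print(recent_order_date >= five_days_ago)  # created within the last 5 days
print(old_order_date >= five_days_ago)     # older than the cutoff
```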
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-04-06 15:53:43
| 1
| 851
|
Destiny Franks
|
75,951,304
| 3,231,250
|
faster way to search column pairs in another dataframe
|
<p>I have a big dataframe called <code>df</code> with around 45 million rows, like below. <a href="https://drive.google.com/drive/folders/1ydrVDE5mU9KHT8k9XybTjGInqdFg07R4?usp=sharing" rel="nofollow noreferrer">download</a></p>
<pre><code> gene1 gene2 score
0 PIGA ATF7IP1 -0.047236
1 PIGB ATF7IP2 -0.047236
2 PIGC ATF7IP3 -0.047236
3 PIGD ATF7IP4 -0.047236
4 PIGE ATF7IP5 -0.047236
</code></pre>
<p>and I have a small dataframe called <code>terms</code>, size is around 3k row.</p>
<pre><code>id gene_set
1 {HDAC4, BCL6}
2 {HDAC5, BCL6}
3 {HDAC7, BCL6}
4 {NCOA3, KAT2B, EP300, CREBBP}
5 {NCAPD2, NCAPH, NCAPG, SMC4, SMC2}
...
2912 {FOXO1, ESR1}
2913 {APP, FOXO3}
2914 {APP, FOXO1}
2915 {APP, FOXO4}
2916 {MAP3K20, MAPK14, AKAP13, MAP2K3, PKN1}
</code></pre>
<p>For each row, I check the presence of the <code>gene1, gene2</code> pair in the <code>terms</code> dataset.</p>
<p>My code works fine, but I would like to ask: is there any <strong>faster</strong> way to do this?<br />
I have tried a couple of approaches, but the run time is approximately the same.</p>
<pre><code>def search(g1,g2):
# search gene pair in the go terms
return sum(terms.gene_set.map(set([g1,g2]).issubset))
</code></pre>
<p>code example 1</p>
<pre><code>np.sum(np.vectorize(search)(df.gene1,df.gene2))
</code></pre>
<p>code example 2</p>
<pre><code>[search(g1, g2) for g1, g2 in zip(df.gene1,df.gene2)]
</code></pre>
<p>code example 3</p>
<pre><code>df[['gene1','gene2']].apply(lambda x: search(x.gene1,x.gene2), axis=1 )
</code></pre>
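One speed-up worth sketching (hedged, with tiny synthetic data): precompute every 2-element subset of each term once, after which each of the 45M rows becomes a single dict lookup instead of a scan over all ~3k terms:

```python
from collections import Counter
from itertools import combinations

terms_sets = [{"HDAC4", "BCL6"}, {"NCOA3", "KAT2B", "EP300"}]
pairs = [("HDAC4", "BCL6"), ("KAT2B", "EP300"), ("PIGA", "ATF7IP1")]

# One pass over the small terms table: count every unordered gene pair it contains
pair_counts = Counter()
for gene_set in terms_sets:
    for combo in combinations(sorted(gene_set), 2):
        pair_counts[frozenset(combo)] += 1

# Each row of the big frame is now an O(1) lookup (frozenset ignores order)
hits = [pair_counts[frozenset(p)] for p in pairs]
```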
|
<python><pandas><dataframe><vectorization>
|
2023-04-06 15:35:58
| 2
| 1,120
|
Yasir
|
75,951,299
| 4,298,200
|
Wrong encoding when redirecting printed unicode characters on Windows PowerShell
|
<p>Using python 3, running the following code</p>
<pre><code>print("some box drawing:")
print("┌─┬┼┴┐")
</code></pre>
<p>via</p>
<pre class="lang-console prettyprint-override"><code>py my_app.py
</code></pre>
<p>prints</p>
<pre class="lang-console prettyprint-override"><code>some box drawing:
┌─┬┼┴┐
</code></pre>
<p>As you would expect.</p>
<p>However, if you redirect this (either Windows or Linux) with</p>
<pre class="lang-console prettyprint-override"><code>py my_app.py > redirected.txt
</code></pre>
<p>you get the following exception:</p>
<pre class="lang-console prettyprint-override"><code>UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-5: character maps to <undefined>
</code></pre>
<p>As has been suggested in many other posts, this exception can be "fixed" by calling <code>sys.stdout.reconfigure(encoding='utf-8')</code> prior to printing. On linux and in the windows cmd, thats it, problem solved. Using <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/powershell" rel="nofollow noreferrer">PowerShell</a> on Windows however, the output looks like this:</p>
<pre class="lang-console prettyprint-override"><code>some box drawing:
ΓöîΓöÇΓö¼Γö╝Γö┤ΓöÉ
</code></pre>
<p>Which is especially odd, since it works fine using the cmd.exe console.</p>
<p>The code base is delivered to a customer as an executable and I would like to not ask them to execute something in the console in order for my program to work reliably. Is there a programmatic way to have <a href="https://en.wikipedia.org/wiki/Box-drawing_character" rel="nofollow noreferrer">box drawing characters</a> written correctly when redirecting output to a file using the windows PowerShell?</p>
|
<python><windows><unicode><encoding><redirectstandardoutput>
|
2023-04-06 15:35:35
| 2
| 6,008
|
Georg Plaz
|
75,951,190
| 2,543,622
|
sentence transformer use of evaluator
|
<p>I came across <a href="https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark_continue_training.py" rel="nofollow noreferrer">this script</a>, which is the second link on <a href="https://www.sbert.net/examples/training/sts/README.html" rel="nofollow noreferrer">this page</a>, and <a href="https://www.sbert.net/docs/package_reference/SentenceTransformer.html" rel="nofollow noreferrer">this explanation</a>.
I am using <code>all-mpnet-base-v2</code> (<a href="https://huggingface.co/sentence-transformers/all-mpnet-base-v2" rel="nofollow noreferrer">link</a>) with my own custom data.</p>
<p>I am having a hard time understanding the use of</p>
<pre><code>evaluator = EmbeddingSimilarityEvaluator.from_input_examples(
dev_samples, name='sts-dev')
</code></pre>
<p>The documentation says:</p>
<blockquote>
<p>evaluator – An evaluator (sentence_transformers.evaluation) evaluates the model performance during training on held-out dev data. It is used to determine the best model that is saved to disc.</p>
</blockquote>
<p>But in this case, as we are fine-tuning on our own examples, <code>train_dataloader</code> has <code>train_samples</code>, which holds our own sentences and scores.</p>
<p><strong>Q1. How is <code>train_samples</code> different than <code>dev_samples</code>?</strong></p>
<p><strong>Q2a: If the model is going to print performance against <code>dev_samples</code> then how is it going to help "<em>to determine the best model that is saved to disc</em>"?</strong></p>
<p><strong>Q2b: Are we required to run <code>dev_samples</code> against the model saved on the disc and then compare scores?</strong></p>
<p><strong>Q3. If my goal is to take a single model and then fine tune it, is it okay to skip parameters <code>evaluator</code> and <code>evaluation_steps</code>?</strong></p>
<p><strong>Q4. How to determine total steps in the model? Do I need to set <code>evaluation_steps</code>?</strong></p>
<hr />
<h3>Updated</h3>
<p>I followed the answer provided by Kyle and have the follow-up questions below.</p>
<p>In the <code>fit</code> method I used the <code>evaluator</code> and below data was written to a file
<a href="https://i.sstatic.net/9QhYz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QhYz.jpg" alt="enter image description here" /></a></p>
<p><strong>Q5. Which metric is used to select the best epoch? Is it <code>cosine_pearson</code>?</strong></p>
<p><strong>Q6: Why are the steps <code>-1</code> in the above output?</strong></p>
<p><strong>Q7a: How do I determine the number of steps from the size of my data, the batch size, etc.?</strong></p>
<p>Currently I have set <code>evaluation_steps</code> to 1000, but I am not sure if that is too high. I am running for 10 epochs, I have 2509 examples in the training data, and the batch size is 64.</p>
<p><strong>Q7b: Are my steps per epoch going to be 2509/64?</strong> If yes, then 1000 seems far too high.</p>
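<p>As a side note on Q7b, here is a quick sketch of the arithmetic using the figures given in the question (2509 training examples, batch size 64, 10 epochs). This is only the standard one-optimizer-step-per-batch calculation, not library-verified behavior of <code>sentence-transformers</code>:</p>
<pre><code>import math

# Figures taken from the question above.
num_examples = 2509
batch_size = 64
epochs = 10

# One optimizer step per batch, so steps per epoch = ceil(examples / batch).
steps_per_epoch = math.ceil(num_examples / batch_size)  # 40
total_steps = steps_per_epoch * epochs                  # 400

print(steps_per_epoch, total_steps)  # prints: 40 400
</code></pre>
<p>On these numbers an epoch is only ~40 steps, so an <code>evaluation_steps</code> of 1000 would never trigger mid-epoch.</p>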
|
<python><nlp><sentence-transformers>
|
2023-04-06 15:22:42
| 1
| 6,946
|
user2543622
|