| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,052,865
| 4,740,661
|
How to use a variable as value of replace function in python pandas
|
<p>I have the following dataframe, where the column <code>body</code> contains an image path that I want to replace with the corresponding baseUrl + <code>id</code> + .webp:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({ 'id': ["982951473","000590051"],
'body': ["<script type=""application/ld+json"">{""image"":""https:\/\/www.f.it\/media\/catalog\/product\/2\/4\/240297_1.jpg?optimize=medium&fit=bounds&height=265&width=265&canvas=265:265"",""sku"":""982951473""}</script>","<script type=""application/ld+json"">{""image"":""https:\/\/www.f.it\/media\/catalog\/product\/0\/0\/000590051_rinazina_spray.jpg?optimize=medium&fit=bounds&height=265&width=265&canvas=265:265"",""sku"":""000590051""}</script>"]})
var = "f.com\/" + df['id'] + ".webp"
df_r = df.replace(to_replace=r'www[.]f[.]it[\\].*?["]', value=var, regex=True)
print(df_r.to_string())
</code></pre>
<p>In the end, this example URL:</p>
<pre><code>""https:\/\/www.f.it\/media\/catalog\/product\/2\/4\/240297_1.jpg?optimize=medium&fit=bounds&height=265&width=265&canvas=265:265""
</code></pre>
<p>Should become this URL, which contains the id variable:</p>
<pre><code>""https:\/\/f.com\/982951473.webp""
</code></pre>
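One hedged sketch of the idea (URLs simplified here, without the JSON-escaped slashes of the real data): <code>df.replace</code> broadcasts one value over every match, so a replacement that varies per row is easier to express as a row-wise regex substitution.

```python
import re

import pandas as pd

# Simplified stand-in for the question's data
df = pd.DataFrame({
    "id": ["982951473", "000590051"],
    "body": [
        'see "https://www.f.it/media/a.jpg?w=265" here',
        'see "https://www.f.it/media/b.jpg?w=265" here',
    ],
})

# Substitute row by row so each row can use its own id
df["body"] = df.apply(
    lambda row: re.sub(r'www\.f\.it/[^"]*', f'f.com/{row["id"]}.webp', row["body"]),
    axis=1,
)
print(df.loc[0, "body"])  # see "https://f.com/982951473.webp" here
```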
|
<python><pandas>
|
2023-01-09 03:13:19
| 1
| 1,027
|
Ax_
|
75,052,604
| 18,059,131
|
RefreshError ('invalid_grant: Token has been expired or revoked.') Google API
|
<p>About a week ago I set up an application on Google. Now when I try to run:</p>
<pre><code>SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']
creds = None
if os.path.exists('token.pickle'):
    with open(self.CREDENTIALS_PATH+self.conjoiner+'token.pickle', 'rb') as token:
        creds = pickle.load(token)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())  # error here
</code></pre>
<p>I get the following error:</p>
<pre><code>Exception has occurred: RefreshError
('invalid_grant: Token has been expired or revoked.', {'error': 'invalid_grant', 'error_description': 'Token has been expired or revoked.'})
</code></pre>
<p>What could be the problem?</p>
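A hedged, stdlib-only sketch of one common remedy: <code>invalid_grant</code> usually means the cached refresh token is no longer valid (for example it was revoked, or the OAuth consent screen is in "Testing" mode, where refresh tokens expire after about a week). Deleting the cached token file forces a fresh authorization flow on the next run; the path below is the one used in the question.

```python
import os

def reset_cached_token(path: str = "token.pickle") -> bool:
    """Remove a stale token cache; return True if a file was deleted."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```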
|
<python><google-api><google-oauth><gmail-api><google-api-python-client>
|
2023-01-09 02:07:34
| 2
| 318
|
prodohsamuel
|
75,052,440
| 9,357,484
|
WEKA's performance on nominal dataset
|
<p>I used <em>WEKA</em> for classification. I used the <em>breast cancer</em> dataset which is available in the <em>WEKA</em> data folder. The dataset is nominal. The .arff file can be found at <a href="https://github.com/tertiarycourses/Weka/blob/master/Weka%20datasets/breast-cancer.arff" rel="nofollow noreferrer">this link</a>.</p>
<p>I did classification using Naive Bayes classifier. I received a classification report such as accuracy, precision, recall, ROC, and other metrics after classification.</p>
<p>I am familiar with <strong>SkLearn</strong> - the python package. I know that when the input features are nominal we need to convert those features into numerical values using the <em>label encoder</em> or other encoding techniques. Only after that, we can perform classification.</p>
<p>All those machine learning methods are doing some kind of mathematics in background to give the prediction result.</p>
<p>Therefore, I am confused about how any classifier in WEKA can give us prediction results on a nominal dataset.</p>
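The encoding step mentioned above can be illustrated with a tiny dependency-free sketch (the category values are made up): a label encoder simply maps each distinct category to an integer. WEKA's NaiveBayes, by contrast, can work on nominal attributes directly by estimating per-category frequencies, so no such conversion step is needed there.

```python
# What a label encoder does to one nominal feature
values = ["no-recurrence", "recurrence", "no-recurrence", "recurrence"]
mapping = {v: i for i, v in enumerate(sorted(set(values)))}
encoded = [mapping[v] for v in values]
# encoded -> [0, 1, 0, 1]
```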
|
<python><machine-learning><scikit-learn><weka><naivebayes>
|
2023-01-09 01:24:14
| 1
| 3,446
|
Encipher
|
75,052,425
| 6,346,514
|
Python, Loop to reading in Excel file sheets, change header row number
|
<p>I have a loop that counts the rows in each sheet of an xls. When I open the xls itself, the counts do not align with what Python returns.</p>
<p>It is due to the first sheet header being in row 3. How can I alter my code to read the first sheet ONLY in at row 3 and ignore the first two lines? The rest of my sheets ALWAYS start at the top row and contain no header. I would like to count the len of my first sheet without header included.</p>
<p>However, when I open the Excel file and count each sheet, I get:</p>
<pre><code>65522 , header starts in row 3, expecting a count of 65520
65520
65520
65520
65520
65520
65520
65520
65520
65520
65520
25427
</code></pre>
<p>my full code:</p>
<pre><code>from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import os
import pandas as pd
from os import walk


def process_files(files: list) -> pd.DataFrame:
    file_mapping = {}
    for file in files:
        #data_mapping = pd.read_excel(BytesIO(ZipFile(file).read(Path(file).stem)), sheet_name=None)
        archive = ZipFile(file)
        # find file names in the archive which end in `.xls`, `.xlsx`, `.xlsb`, ...
        files_in_archive = archive.namelist()
        excel_files_in_archive = [
            f for f in files_in_archive if Path(f).suffix[:4] == ".xls"
        ]
        # ensure we only have one file (otherwise, loop or choose one somehow)
        assert len(excel_files_in_archive) == 1
        # read in data
        data_mapping = pd.read_excel(
            BytesIO(archive.read(excel_files_in_archive[0])),
            sheet_name=None, header=None,
        )
        row_counts = []
        for sheet in list(data_mapping.keys()):
            if sheet == 'Sheet1':
                df = data_mapping.get(sheet)[3:]
            else:
                df = data_mapping.get(sheet)
            row_counts.append(len(df))
            print(len(data_mapping.get(sheet)))
        file_mapping.update({file: sum(row_counts)})
    frame = pd.DataFrame([file_mapping]).transpose().reset_index()
    frame.columns = ["file_name", "row_counts"]
    return frame


dir_path = r'D:\test\2022 - 10'
zip_files = []
for root, dirs, files in os.walk(dir_path):
    for file in files:
        if file.endswith('.zip'):
            zip_files.append(os.path.join(root, file))
df = process_files(zip_files)  # function
</code></pre>
<p>Does anyone have an idea of what I'm doing wrong?</p>
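A hedged sketch of the per-sheet header handling, with in-memory DataFrames standing in for the dict that <code>read_excel(sheet_name=None)</code> returns (sheet names and row counts here are hypothetical). The first sheet is detected by position rather than by the hard-coded name 'Sheet1':

```python
import pandas as pd

# Stand-in for the dict returned by read_excel(sheet_name=None);
# dict order matches the workbook's sheet order.
data_mapping = {
    "SheetA": pd.DataFrame(range(10)),  # first sheet: rows 0-2 are header junk
    "SheetB": pd.DataFrame(range(7)),
}

first_sheet = next(iter(data_mapping))
row_counts = [
    len(df.iloc[3:]) if name == first_sheet else len(df)
    for name, df in data_mapping.items()
]
# row_counts -> [7, 7]
```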
|
<python>
|
2023-01-09 01:18:39
| 1
| 577
|
Jonnyboi
|
75,052,366
| 851,699
|
How do I save a stateful TFLite model where the shape of the state depends on the input?
|
<p>I'm trying to do something fairly simple in tensorflow-lite, but I'm not sure it's possible.</p>
<p>I want to define a stateful graph where the shape of the state variable is defined when the model is LOADED, not when it is saved.</p>
<p>As a simple example, let's say I just want to compute a temporal difference, i.e. a graph that returns the difference between the inputs of two consecutive calls. The following should pass:</p>
<pre><code>func = load_tflite_model_func(tflite_model_file_path)
runtime_shape = 60, 80
rng = np.random.RandomState(1234)
ims = [rng.randn(*runtime_shape).astype(np.float32) for _ in range(3)]
assert np.allclose(func(ims[0]), ims[0])
assert np.allclose(func(ims[1]), ims[1]-ims[0])
assert np.allclose(func(ims[2]), ims[2]-ims[1])
</code></pre>
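The lazy-state behaviour this test expects can be sketched outside TFLite in plain NumPy: the state buffer is allocated from the first input's shape on first use. This is exactly the part that is hard to express in a saved TFLite graph, where variable shapes must be concrete at save time.

```python
import numpy as np

class TimeDeltaSketch:
    """Stateful temporal difference; state shape is fixed by the first call."""

    def __init__(self):
        self._last = None

    def __call__(self, arr):
        if self._last is None:
            # allocate the state lazily from the runtime input shape
            self._last = np.zeros_like(arr)
        delta = arr - self._last
        self._last = arr.copy()
        return delta
```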
<p>Now, to create and save the model, I do:</p>
<pre><code>@dataclass
class TimeDelta(tf.Module):
    _last_val: Optional[tf.Tensor] = None

    def compute_delta(self, arr: tf.Tensor):
        if self._last_val is None:
            self._last_val = tf.Variable(tf.zeros(tf.shape(arr)))
        delta = arr - self._last_val
        self._last_val.assign(arr)
        return delta


compile_time_shape = 30, 40
# compile_time_shape = None, None  # Causes UnliftableError
tflite_model_file_path = tempfile.mktemp()
delta = TimeDelta()
save_signatures_to_tflite_model(
    {'delta': tf.function(delta.compute_delta, input_signature=[tf.TensorSpec(shape=compile_time_shape)])},
    path=tflite_model_file_path,
    parent_object=delta
)
</code></pre>
<p>The problem of course is that if my compile-time shape differs from my run-time shape, it crashes. Attempting to make the graph dynamically-shaped with <code>compile_time_shape = None, None</code> also fails, causing an <code>UnliftableError</code> when I try to save the graph (because it needs concrete dimensions for the variable).</p>
<p><a href="https://colab.research.google.com/drive/19CwkF1MSGlfMXKxlrndCNqTv7V4fx55Q" rel="nofollow noreferrer">A full Colab-Notebook demonstrating the problem is here</a>.</p>
<p>So, to summarise - the question is:</p>
<p><strong>How can I save a stateful graph in tflite, where the shape of the state of the graph depends on the shape of the input?</strong></p>
<p>The corresponding TensorFlow issue is at <a href="https://github.com/tensorflow/tensorflow/issues/59217" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/59217</a>.</p>
|
<python><tensorflow><tensorflow-lite><tflite>
|
2023-01-09 01:00:19
| 1
| 13,753
|
Peter
|
75,052,226
| 7,317,408
|
ValueError: time data '2018-02-22' does not match format '%m-%d-%Y' when using https://stocknewsapi.com/
|
<p>I have a bunch of daily data in OHLC format:</p>
<pre><code> index date symbol open high low close volume vwap previous_close change_1_day
0 7896 2018-02-22 CHK 2.870 3.330 2.86 3.20 130538189 3.109534 2.6300 21.673004
1 8108 2018-12-26 CHK 1.950 2.190 1.85 2.19 80610326 2.015611 1.7300 26.589595
2 27084 2020-06-12 CHNR 4.805 6.040 1.37 1.47 47780632 3.797993 0.9540 54.088050
3 27235 2021-01-19 CHNR 2.260 3.090 2.18 2.33 23674990 2.588788 1.7200 35.465116
4 57521 2020-06-04 CIDM 4.380 6.000 3.15 3.63 223314246 3.994496 1.3100 177.099237
.. ... ... ... ... ... ... ... ... ... ... ...
909 9471547 2018-09-24 AMRN 10.440 12.470 9.51 12.40 163106698 10.848351 2.9900 314.715719
910 9481473 2020-10-12 WIMI 5.330 8.199 5.30 7.06 66982871 7.203527 5.2800 33.712121
911 9545909 2022-09-29 SNTI 1.640 2.410 1.63 2.11 74311098 2.028829 1.4000 50.714286
912 9664240 2019-10-31 AGRX 1.500 1.540 1.03 1.20 60075942 1.352889 0.3706 223.799244
913 9664242 2019-11-04 AGRX 1.500 1.890 1.45 1.78 29016013 1.728473 1.3500 31.851852
[914 rows x 11 columns]
</code></pre>
<p>I am trying to loop over my data to add a column when there is news for that day:</p>
<pre><code>for row in df.iterrows():
    # get the date
    date = row[1][1]
    # get the symbol
    symbol = row[1][2]
    # end date
    end_date = datetime.strptime(date, '%m-%d-%Y').date()
    # start date is minus one business day
    start_date = (end_date - BDay(1)).date()
    print(start_date, 'start_date')
    print(end_date, 'end date')
    print(type(start_date))
    print(type(end_date))
    # turn into unix
    unixend = mktime(start_date.timetuple())
    unixstart = mktime(end_date.timetuple())
    print(unixend, 'unixend')
    print(unixstart, 'unixstart')
    url = f'https://stocknewsapi.com/api/v1?tickers={symbol}&items=3&page=1&date={unixstart-unixend}&token=4q4cwm1jys26wovaedhmy01x7ysxfnkpzjpmyysp'
    response = requests.get(url)
    print(response)
</code></pre>
<p>Which gives the error:</p>
<pre><code> raise ValueError("time data %r does not match format %r" %
ValueError: time data '2018-02-22' does not match format '%m-%d-%Y'
</code></pre>
<p>The <a href="https://stocknewsapi.com/documentation" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>Use the date parameters to obtain historical news (up to March 2019).
Please use the following format: MMDDYYYY. You can also use: last5min,
last10min, last15min, last30min, last45min, last60min, today,
yesterday, last7days, last30days, last60days, last90days, yeartodate.</p>
</blockquote>
<p>It also gives this example of the date parameter:</p>
<pre><code>&date=03152019-03252019
</code></pre>
<p>The output from my print statements:</p>
<pre><code><class 'datetime.date'>
<class 'datetime.date'>
1572588000.0 unixend
1572847200.0 unixstart
</code></pre>
<p><strong>Desired result</strong>: I want to get the news for each day in my data, starting from one business day before.</p>
<p>What am I getting wrong here?</p>
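A hedged sketch of the parsing fix: the stored dates are ISO-formatted ('2018-02-22', year first), so the parse format must be '%Y-%m-%d'; '%m-%d-%Y' is what raises the ValueError. The API's documented MMDDYYYY parameter is then a simple re-format of the parsed date, not a unix timestamp.

```python
from datetime import datetime

# Parse the ISO-style date the dataframe actually contains
end_date = datetime.strptime("2018-02-22", "%Y-%m-%d").date()

# Re-format it as MMDDYYYY, the shape the API documentation describes
date_param = end_date.strftime("%m%d%Y")
# date_param -> '02222018'
```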
|
<python><pandas><numpy>
|
2023-01-09 00:19:08
| 1
| 3,436
|
a7dc
|
75,052,206
| 1,330,719
|
Specifying Huggingface model as project dependency
|
<p>Is it possible to install huggingface models as a project dependency?</p>
<p>Currently it is downloaded automatically by the <code>SentenceTransformer</code> library, but this means that in a Docker container it is downloaded every time the container starts.</p>
<p>This is the model I am trying to use: <a href="https://huggingface.co/sentence-transformers/all-mpnet-base-v2" rel="nofollow noreferrer">https://huggingface.co/sentence-transformers/all-mpnet-base-v2</a></p>
<p>I have tried specifying the url as a dependency in my <code>pyproject.toml</code>:</p>
<pre><code>all-mpnet-base-v2 = {git = "https://huggingface.co/sentence-transformers/all-mpnet-base-v2.git", branch = "main"}
</code></pre>
<p>The first error I got was that the name was incorrect and it should be called <code>train-script</code>, which I renamed the dependency to, but I'm not sure if this is correct. Now I have:</p>
<pre><code>train-script = {git = "https://huggingface.co/sentence-transformers/all-mpnet-base-v2.git", branch = "main"}
</code></pre>
<p>However, now I get the following error:</p>
<pre><code> Package operations: 1 install, 0 updates, 0 removals
• Installing train-script (0.0.0 bd44305)
EnvCommandError
Command ['/srv/.venv/bin/pip', 'install', '--no-deps', '-U', '/srv/.venv/src/train-script'] errored with the following return code 1, and output:
ERROR: Directory '/srv/.venv/src/train-script' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: pip install --upgrade pip
at /usr/local/lib/python3.10/site-packages/poetry/utils/env.py:1183 in _run
1179│ output = subprocess.check_output(
1180│ cmd, stderr=subprocess.STDOUT, **kwargs
1181│ )
1182│ except CalledProcessError as e:
→ 1183│ raise EnvCommandError(e, input=input_)
1184│
1185│ return decode(output)
1186│
1187│ def execute(self, bin, *args, **kwargs):
</code></pre>
<p>Is this possible? If not, is there a recommended way to bake the model download into a docker container so it doesn't need to be downloaded each time?</p>
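If the pyproject route stays a dead end, one commonly used pattern is to bake the download into the image at build time, so the model cache ships inside the container. A hedged Dockerfile sketch (the pinned versions and the assumption that sentence-transformers caches to the default Hugging Face cache directory are mine, not from the question):

```dockerfile
FROM python:3.10

RUN pip install sentence-transformers

# Download the model during the build; it lands in the default HF cache
# inside the image, so SentenceTransformer(...) at runtime finds it
# without any network call.
RUN python -c "from sentence_transformers import SentenceTransformer; \
    SentenceTransformer('sentence-transformers/all-mpnet-base-v2')"
```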
|
<python><docker><python-poetry><huggingface>
|
2023-01-09 00:13:17
| 1
| 1,269
|
rbhalla
|
75,052,167
| 10,098,394
|
VLC-python doesn't play my mp4 video online
|
<p>I'm just looking for a way to play video from the internet with vlc-python. I have a very simple script, but it does not work with my video, and I don't understand why it works for some links and not others.</p>
<pre><code># importing vlc module
import vlc
# creating vlc media player object
media_player = vlc.MediaPlayer()
# media resource locator
# First link: it works
#mrl = "https://www.w3schools.com/html/mov_bbb.mp4"
# Second link: it does not work
mrl = "https://file-examples.com/wp-content/uploads/2017/04/file_example_MP4_480_1_5MG.mp4"
# setting mrl to the media player
media_player.set_mrl(mrl)
# start playing video
media_player.play()
</code></pre>
<p>I've been looking for a solution all day but can't find one, so I hope someone can help me. As always, I'd like to thank everyone who helps others here! (Apologies for my English; I'm French.)</p>
|
<python><stream><streaming><vlc>
|
2023-01-09 00:02:23
| 0
| 421
|
Ronaldonizuka
|
75,051,936
| 1,777,331
|
Is there an advantage to Lambda Powertools’s Parser over straight Pydantic?
|
<p>I’ve got models which inherit Pydantic’s BaseModel and I use this to define my model attributes and do some validation.</p>
<p>But I see that <a href="https://awslabs.github.io/aws-lambda-powertools-python/2.5.0/utilities/parser/#install" rel="nofollow noreferrer">Lambda Powertools comes with a Parser</a> module which uses Pydantic.</p>
<p>Now that I want to use these models within an AWS lambda execution, is there a benefit to using:</p>
<p><code>from aws_lambda_powertools.utilities.parser import BaseModel</code></p>
<p>Instead of sticking with my existing</p>
<p><code>from pydantic import BaseModel</code></p>
<p>I can see that the Powertools Parser comes with a useful BaseEnvelope - but is BaseModel in Powertools any different?</p>
<p>And as a followup, if there is a benefit, could I monkey patch within the lambda runtime so I can:</p>
<ol>
<li>Keep my models independent of anything Lambda like.</li>
<li>Spare myself from changing all the imports.</li>
</ol>
|
<python><amazon-web-services><pydantic>
|
2023-01-08 23:08:44
| 1
| 4,420
|
Tom Harvey
|
75,051,894
| 1,330,719
|
Pytest slow in docker container taking 75s+ for about 15 tests
|
<p>I have a simple dockerfile that does the following:</p>
<pre><code>FROM python:3.10.3
RUN pip install -U pip setuptools
RUN pip install poetry==1.1.11
WORKDIR /srv
</code></pre>
<p>I am using docker compose to mount the relevant directory, but will ignore that here. I am then running this command to run tests in 1 folder:</p>
<pre><code>docker-compose run -it -e PYTHONPATH=./src api /bin/bash -c "poetry run pytest --pspec ./src/e2e/test_*"
</code></pre>
<p>That folder has 2 files that match the glob <code>test_*</code> and they contain 15 tests.</p>
<p>When I run the command above using the <code>--collectonly</code> flag, it takes about 75s.</p>
<p>Example output below:</p>
<pre><code>===================================================== test session starts =====================================================
platform linux -- Python 3.10.3, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /srv
plugins: pspec-0.0.4, mock-3.10.0, anyio-3.6.2
collected 15 items
<Module src/e2e/test_m.py>
<Function test_b_idempotency>
<Function test_e_creation>
<Function test_e_update>
<Function test_v_creation>
<Function test_v_update>
<Function test_s_cleared>
<Module src/e2e/test_q.py>
<Function test_basic_q>
<Function test_basic_r>
<Function test_basic_p>
<Function test_basic_r2>
<Function test_time_weight>
<Function test_s_content>
<Function test_simple_research_case>
<Function test_context_e_search>
<Function test_recent_m_pagination>
=========================================== 15 tests collected in 77.41s (0:01:17) ============================================
</code></pre>
<p>This used to be an incredibly quick command (&lt;1s), but I am not entirely sure what has slowed it down, apart from possibly a Docker upgrade.</p>
<p>The tests also take unusually long to run, however given collection is taking incredibly long, this seems like something outside of what's in the tests themselves.</p>
<p>I am on macOS 12.6 and Docker 4.15.0</p>
<p>Does anyone know what may be causing the issue, or how I may go about troubleshooting?</p>
<p>I have read the answers in this <a href="https://stackoverflow.com/questions/52937050/is-pytest-supposed-to-collect-tests-from-dependency-modules-in-a-virtual-environ/52937850#52937850">question</a> but the answers revolve around reducing the amount of files being scanned, which I think I'm already doing.</p>
<p>Any help is much appreciated.</p>
<p>EDIT: This does not seem to be a docker issue since collection is also slow when I run locally.</p>
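Since collection is slow outside Docker too, one hedged way to narrow it down is to check whether the cost is in <em>importing</em> the test modules, since collection imports each file it collects (<code>python -X importtime</code> gives a more detailed breakdown). The module name below is a stdlib stand-in, not one of the question's test files:

```python
import importlib
import time

def time_import(name: str) -> float:
    """Return how long importing `name` takes, in seconds."""
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

elapsed = time_import("json")  # stand-in for e.g. "src.e2e.test_m"
```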
|
<python><docker><pytest>
|
2023-01-08 22:59:07
| 0
| 1,269
|
rbhalla
|
75,051,852
| 13,509,355
|
How do I enable Python command history editing in bash under virtual environments
|
<p>I want to use history recall and command line editing in the python shell.</p>
<p>However, for virtual environments this does not work by default. For example using</p>
<pre><code>python3 -m venv env
source env/bin/activate
</code></pre>
<p>and then invoking the python interpreter</p>
<pre><code>python
</code></pre>
<p>does not allow up/down arrow etc command line editing.</p>
<p>How do I get command line editing to work in the interpreter?</p>
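Arrow-key history and editing in the interactive interpreter come from the <code>readline</code> module, so a quick hedged check, run inside the activated venv, shows whether the interpreter the venv uses was built with readline support at all:

```python
# If this import fails, the Python behind the venv has no readline support,
# and up/down-arrow editing will not work in its REPL.
try:
    import readline  # noqa: F401
    has_readline = True
except ImportError:
    has_readline = False

print("readline available:", has_readline)
```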
|
<python><virtual-environment>
|
2023-01-08 22:50:34
| 1
| 743
|
Neil Bartlett
|
75,051,776
| 726,730
|
pywinauto save (permant) control identifiers to variable
|
<p>I am trying to save the control identifiers to a variable. The reason is that I use <code>main_dlg.print_control_identifiers(filename="control_ids.text")</code>, but I am getting this error:</p>
<pre><code>Traceback (most recent call last):
  File "run_tests.py", line 7, in <module>
    main_dlg.print_control_identifiers(filename="control_ids.text")
  File "C:\python\lib\site-packages\pywinauto\application.py", line 696, in print_control_identifiers
    print_identifiers([this_ctrl, ], log_func=log_func)
  File "C:\python\lib\site-packages\pywinauto\application.py", line 685, in print_identifiers
    print_identifiers(ctrl.children(), current_depth + 1, log_func)
  File "C:\python\lib\site-packages\pywinauto\application.py", line 681, in print_identifiers
    log_func(output)
  File "C:\python\lib\site-packages\pywinauto\application.py", line 694, in log_func
    log_file.write(str(msg) + os.linesep)
  File "C:\python\lib\codecs.py", line 721, in write
    return self.writer.write(data)
  File "C:\python\lib\codecs.py", line 377, in write
    data, consumed = self.encode(object, self.errors)
  File "C:\python\lib\encodings\cp1252.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 71-76: character maps to <undefined>
</code></pre>
<p>This happens because the program I am trying to control contains Greek characters.</p>
<p>So I decided to save the control identifiers to a variable and then write them to a txt file with the right encoding.</p>
<p>A simple solution I thought of is:</p>
<pre><code>from pywinauto.application import Application
import os
app = Application(backend="uia").start("C:/python/Lib/site-packages/QtDesigner/designer.exe")
main_dlg = app.QtDesigner
main_dlg.wait('visible')
main_dlg.print_control_identifiers()
</code></pre>
<p>and then:</p>
<pre><code>python script_name.py > output.txt
</code></pre>
<p>After that, the output.txt file contains the control identifiers of the program (in this example, QtDesigner).</p>
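The stdout-redirect idea above can also be done in-process: capture whatever the identifier dump prints into a variable, then write it out as UTF-8 so Greek text never hits the cp1252 charmap error. A hedged stdlib-only sketch (the Greek string stands in for the real control tree printed by pywinauto):

```python
import io
import tempfile
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    # stand-in for: main_dlg.print_control_identifiers()
    print("Static - 'Κουμπί ελέγχου'")
captured = buf.getvalue()   # the identifiers, now in a variable

# Write the capture with an explicit encoding that can hold Greek text
out_path = tempfile.mktemp(suffix=".txt")
with open(out_path, "w", encoding="utf-8") as f:
    f.write(captured)
```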
|
<python><automation><pywinauto>
|
2023-01-08 22:38:19
| 0
| 2,427
|
Chris P
|
75,051,724
| 3,976,494
|
Pandas: return first row where column value satisfies condition against a list of values
|
<p>I have a dataframe <code>df</code> and a list of floats <code>T</code>. <code>df.B</code> is a time series of values sorted in chronological order, where the 0th index is the most recent timestamp and the last index is the oldest timestamp.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'A': [1.1, 2.2, 3.3, 4.4], 'B': [5.5, 6.6, 7.7, 8.8]})
T = [t1, t2, ..., tn] # floats
</code></pre>
<p><strong>What I am looking to do</strong></p>
<p>I would like to compare the values of column <code>B</code> against the list of values <code>T</code>, one <code>t</code> at a time, and return the first row of <code>df</code> that satisfies the condition against <code>t</code>. By first row of <code>df</code> I mean walk through the timeseries (essentially) and find the first instance in time where the values in <code>df.B</code> become larger than the value <code>t</code> for any <code>t</code> in <code>T</code>.</p>
<p><strong>What I've attempted:</strong></p>
<pre class="lang-py prettyprint-override"><code>df.loc[df.apply(lambda x: x.B >= T, axis=1)]
# => TypeError: unhashable type: 'numpy.ndarray'
df2 = df.query('B >= @T')
# => 'Lengths must match to compare'
[ df[df['B'] >= t] for t in T ]
# => Technically this works and then I can iterate again to retrieve the first row, but I get the warning -- pydevd warning: Computing repr of a (list) was slow
</code></pre>
<p><strong>EDIT, an example:</strong></p>
<pre class="lang-py prettyprint-override"><code>T = [3.5, 4.5, 8.0, 8.5, 10.0, 11.0]
df.B = [5.5, 8.8, 6.6, 7.7]
# I'm hoping that the expected output would have the rows corresponding to the following values in `df.B`:
[7.7, 7.7, 8.8, 8.8, None, None]
</code></pre>
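A hedged sketch matching the example in the EDIT: reverse the series into chronological (oldest-first) order, then for each threshold take the first value that reaches it, with None when nothing does.

```python
import pandas as pd

B = pd.Series([5.5, 8.8, 6.6, 7.7])     # index 0 = most recent
T = [3.5, 4.5, 8.0, 8.5, 10.0, 11.0]

chrono = B.iloc[::-1]  # walk the timeseries oldest -> newest

def first_reaching(t):
    """First value in time that is >= t, else None."""
    hits = chrono[chrono >= t]
    return hits.iloc[0] if not hits.empty else None

result = [first_reaching(t) for t in T]
# result -> [7.7, 7.7, 8.8, 8.8, None, None]
```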
|
<python><pandas><dataframe>
|
2023-01-08 22:31:24
| 1
| 702
|
Hunter
|
75,051,590
| 11,505,813
|
Remove first 5 from numbers in a list?
|
<p>I have a list with numbers like this :-</p>
<pre><code>s = [5542, 5654, 7545]
</code></pre>
<p>The goal is to remove the first occurrence of the digit 5 from each number, so that the resulting list is:</p>
<pre><code>s = [542, 654, 745]
</code></pre>
<p>What's the best way to achieve this without using any external libraries?</p>
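Judging by the expected output (7545 becomes 745), each number should lose the first occurrence of the digit 5, which <code>str.replace</code> with <code>count=1</code> does directly:

```python
s = [5542, 5654, 7545]
# Drop only the first '5' in each number's decimal representation
result = [int(str(n).replace("5", "", 1)) for n in s]
# result -> [542, 654, 745]
```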
|
<python>
|
2023-01-08 22:02:31
| 2
| 1,388
|
Somethingwhatever
|
75,051,422
| 20,443,528
|
How to 'check' a value in a radio button by default in Django?
|
<p>I wrote a model form. I used widgets to make radio buttons and I want a particular radio button to be checked by default while rendering the form in my html file.</p>
<p>Model:</p>
<pre><code>class Room(models.Model):
    number = models.PositiveSmallIntegerField()
    CATEGORIES = (
        ('Regular', 'Regular'),
        ('Executive', 'Executive'),
        ('Deluxe', 'Deluxe'),
    )
    category = models.CharField(max_length=9, choices=CATEGORIES, default='Regular')
    CAPACITY = (
        (1, '1'),
        (2, '2'),
        (3, '3'),
        (4, '4'),
    )
    capacity = models.PositiveSmallIntegerField(
        choices=CAPACITY, default=2
    )
    advance = models.PositiveSmallIntegerField(default=10)
    manager = models.CharField(max_length=30)
</code></pre>
<p>The following is my model form based on the above model.</p>
<p>Form:</p>
<pre><code>class AddRoomForm(forms.ModelForm):
    ROOM_CATEGORIES = (
        ('Regular', 'Regular'),
        ('Executive', 'Executive'),
        ('Deluxe', 'Deluxe'),
    )
    category = forms.CharField(
        max_length=9,
        widget=forms.RadioSelect(choices=ROOM_CATEGORIES),
    )
    ROOM_CAPACITY = (
        (1, '1'),
        (2, '2'),
        (3, '3'),
        (4, '4'),
    )
    capacity = forms.CharField(
        max_length=9,
        widget=forms.RadioSelect(choices=ROOM_CAPACITY),
    )

    class Meta:
        model = Room
        fields = ['number', 'category', 'capacity', 'advance']
</code></pre>
<p>Here is the views:</p>
<pre><code>def add_room(request):
    if request.method == 'POST':
        form = AddRoomForm(request.POST)
        if form.is_valid():
            room = Room(number=request.POST['number'],
                        category=request.POST['category'],
                        capacity=request.POST['capacity'],
                        advance=request.POST['advance'],
                        manager=request.user.username)
            room.save()
            # Implemented Post/Redirect/Get.
            return redirect('../rooms/')
        else:
            context = {
                'form': form,
                'username': request.user.username
            }
            return render(request, 'add_room.html', context)
    context = {
        'form': AddRoomForm(),
        'username': request.user.username
    }
    return render(request, 'add_room.html', context)
</code></pre>
</code></pre>
<p>I rendered the form like this in my html file.</p>
<pre><code><form method="POST">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" class= "submit submit-right" value="Add" />
</form>
</code></pre>
<p>I read somewhere that I can use the <code>initial</code> keyword in my views.py file, but I don't understand how to use it. Can someone please help me with it?</p>
|
<python><django><django-models><django-forms><django-widget>
|
2023-01-08 21:24:55
| 1
| 331
|
Anshul Gupta
|
75,051,370
| 6,346,514
|
Python, Len not counting rows correctly
|
<p>I have a loop that counts the rows in each sheet of an xls. When I open the xls itself, the counts do not exactly align with what Python returns.</p>
<pre><code>row_counts = []
for sheet in list(data_mapping.keys()):
    row_counts.append(len(data_mapping.get(sheet)))
    print(len(data_mapping.get(sheet)))
</code></pre>
<p>my counts printed are:</p>
<pre><code>65521
65519
65519
65519
65519
65519
65519
65519
65519
65519
65519
25426
</code></pre>
<p>However, when I open the Excel file and count each sheet, I get:</p>
<pre><code> 65520 , this has a header and data starts on row 3
65520 , no header data starts on row 1
65520 , no header data starts on row 1
65520 , no header data starts on row 1
65520 , no header data starts on row 1
65520, no header data starts on row 1
65520, no header data starts on row 1
65520, no header data starts on row 1
65520, no header data starts on row 1
65520, no header data starts on row 1
65520, no header data starts on row 1
25427, no header data starts on row 1
</code></pre>
<p>my full code:</p>
<pre><code>from io import BytesIO
from pathlib import Path
from zipfile import ZipFile
import os
import pandas as pd
from os import walk


def process_files(files: list) -> pd.DataFrame:
    file_mapping = {}
    for file in files:
        #data_mapping = pd.read_excel(BytesIO(ZipFile(file).read(Path(file).stem)), sheet_name=None)
        archive = ZipFile(file)
        # find file names in the archive which end in `.xls`, `.xlsx`, `.xlsb`, ...
        files_in_archive = archive.namelist()
        excel_files_in_archive = [
            f for f in files_in_archive if Path(f).suffix[:4] == ".xls"
        ]
        # ensure we only have one file (otherwise, loop or choose one somehow)
        assert len(excel_files_in_archive) == 1
        # read in data
        data_mapping = pd.read_excel(
            BytesIO(archive.read(excel_files_in_archive[0])),
            sheet_name=None,
        )
        row_counts = []
        for sheet in list(data_mapping.keys()):
            row_counts.append(len(data_mapping.get(sheet)))
            print(len(data_mapping.get(sheet)))
        file_mapping.update({file: sum(row_counts)})
    frame = pd.DataFrame([file_mapping]).transpose().reset_index()
    frame.columns = ["file_name", "row_counts"]
    return frame


zip_files = []
for root, dirs, files in os.walk(dir_path):
    for file in files:
        if file.endswith('.zip'):
            zip_files.append(os.path.join(root, file))
df = process_files(zip_files)
</code></pre>
<p>Does anyone have an idea of what I'm doing wrong?</p>
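The consistent off-by-one in the counts (65519 vs 65520) is what happens when pandas consumes the first row of every sheet as a header. A hedged sketch with <code>read_csv</code>, which has the same header semantics as <code>read_excel</code>:

```python
import io

import pandas as pd

csv_text = "a,b\n1,2\n3,4\n"  # 3 physical rows

with_header = pd.read_csv(io.StringIO(csv_text))             # first row -> column names
no_header = pd.read_csv(io.StringIO(csv_text), header=None)  # every row is data
# len(with_header) -> 2, len(no_header) -> 3
```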
|
<python>
|
2023-01-08 21:17:20
| 1
| 577
|
Jonnyboi
|
75,051,327
| 1,128,695
|
Change month labels in matplotlib without changing the locale
|
<p>With this plot as an example:
<a href="https://i.sstatic.net/2bawL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2bawL.png" alt="enter image description here" /></a></p>
<p>With the code:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.cbook as cbook
data = cbook.get_sample_data('goog.npz', np_load=True)['price_data'][0:200]
fig, ax = plt.subplots(figsize=(10, 7), constrained_layout=True)
ax.plot('date', 'adj_close', data=data)
# ax.xaxis.set_major_locator(mdates.MonthLocator())
# ax.grid(True)
ax.set_ylabel(r'Price [\$]')
ax.xaxis.set_major_formatter(
mdates.ConciseDateFormatter(ax.xaxis.get_major_locator()))
plt.savefig("chart.png")
</code></pre>
<p>I need to change the months from English to Spanish, i.e. Dec to Dic, Apr to Abr, Jan to Ene, and so on.</p>
<p>I can do it by changing the locale like so:</p>
<pre><code>locale.setlocale(locale.LC_TIME, 'es_ES')
</code></pre>
<p>But I can't use that in the script because it runs on a serverless VM where you can't change any of the OS configuration.</p>
<p>So I thought of changing the labels "manually" from an English month to a Spanish one. I've seen examples using <code>DateFormatter</code>, but that doesn't work because it again relies on the system locale for month names via <code>strftime</code>, and every other formatter I've seen works with numbers, not dates. So, is there any solution for localizing the month names?</p>
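A hedged, locale-free sketch: a <code>FuncFormatter</code> that maps the month number to a hard-coded Spanish abbreviation, so nothing depends on the OS locale (the Agg backend is used here only to keep the sketch headless):

```python
import datetime

import matplotlib
matplotlib.use("Agg")  # headless; drop this line in an interactive session
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

MESES = ["Ene", "Feb", "Mar", "Abr", "May", "Jun",
         "Jul", "Ago", "Sep", "Oct", "Nov", "Dic"]

def mes_formatter(x, pos=None):
    """Format a matplotlib date number as a Spanish month abbreviation."""
    d = mdates.num2date(x)
    return MESES[d.month - 1]

fig, ax = plt.subplots()
ax.plot([datetime.date(2023, 1, 1), datetime.date(2023, 4, 1)], [1, 2])
ax.xaxis.set_major_formatter(FuncFormatter(mes_formatter))
```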
<p>Update: I added a solution <a href="https://stackoverflow.com/a/75068683/1128695">below</a>.</p>
|
<python><matplotlib>
|
2023-01-08 21:11:39
| 2
| 1,197
|
PerseP
|
75,051,207
| 4,348,534
|
pyautogui.write() not automatically using shift to type characters on non-english keyboard
|
<p>I'm trying to automate the uploading of a file to a website. Using <code>selenium</code> to navigate the website, I got to the point where I get the prompt box to input the filename:</p>
<p><a href="https://i.sstatic.net/p6ELl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p6ELl.jpg" alt="Upload prompt box" /></a></p>
<p>To enter the filename, I thought of using <code>pyautogui</code> (but if there is a better way, I'll take it, because at the moment, I'm testing with a headful navigator, but I'm not sure using <code>pyautogui</code> will work when I try to use a headless navigator.), like such:</p>
<pre><code>import pyautogui
path = r"C:\path\to_my_file.pdf"
pyautogui.write(path)
pyautogui.hotkey('enter')
</code></pre>
<p>My problem is that instead of typing the content of <code>path</code>, pyautogui types the keyboard keys that correspond to each character. Therefore, I get <code>C/\path\to8my8file.pdf</code>, because I have a french keyboard (where numbers are uppercase and special characters like <code>_</code> and <code>:</code> are lowercase).</p>
<p>Of course, I could make sure that when pyautogui types <code>:</code> and <code>_</code>, it does what's needed to do <code>shift</code> + <code>/</code> and <code>shift</code> + <code>8</code>, but that would be painstaking, and it would have to be done to each specific usecase. I'm sure there is a better, generic way, to type precisely the content of <code>path</code>.</p>
<p>Sidenote:</p>
<p>An old SO question had the answers <code>pyautogui.write('C:\path\to_my_file.pdf', interval=0.1)</code> or <code>pyautogui.typewrite("C:\path\to_my_file.pdf", interval=0.1)</code>. They do not work.</p>
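A layout-aware workaround is to translate each character into the physical key (and Shift state) it occupies on the active keyboard before handing it to <code>pyautogui.press</code>/<code>pyautogui.hotkey</code>. The sketch below is generic; the AZERTY entries in it are illustrative assumptions, not a full layout table. (For a browser upload prompt specifically, an alternative that avoids <code>pyautogui</code> entirely is sending the path to the <code>&lt;input type="file"&gt;</code> element with Selenium's <code>send_keys</code>.)

```python
def to_keypresses(text, layout):
    """Translate `text` into per-character key presses for a given layout.

    `layout` maps an intended character to (physical_key, needs_shift);
    characters missing from the map are assumed to be typed as-is.
    """
    presses = []
    for ch in text:
        key, shifted = layout.get(ch, (ch, False))
        presses.append('shift+' + key if shifted else key)
    return presses

# Hypothetical AZERTY subset: ':' and '_' are unshifted keys, while
# digits require Shift (so '8' is typed as Shift+<the '8' key>).
AZERTY = {':': (':', False), '_': ('_', False), '8': ('8', True)}

print(to_keypresses(r"C:\to_8", AZERTY))
# ['C', ':', '\\', 't', 'o', '_', 'shift+8']
```

Each resulting entry would then be sent with a single press (or a Shift-modified hotkey), so only the layout dictionary has to be maintained per keyboard.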
|
<python><pyautogui>
|
2023-01-08 20:53:24
| 1
| 4,297
|
François M.
|
75,050,886
| 1,698,678
|
Signing a string with an RSA key in Python - how can I translate this JavaScript code that uses SubtleCrypto to Python?
|
<p>I am trying to sign a string with an RSA key in Python. I have working JavaScript code that does it, but now I need to replicate it in Python using Python-RSA.</p>
<p>In particular, these are the two JavaScript calls that I need to deal with:</p>
<pre><code>const key = await crypto.subtle.importKey(
'raw',
bytesOfSecretKey,
{ name: 'HMAC', hash: 'SHA-256' },
false,
['sign']);
</code></pre>
<p>and</p>
<pre><code>const mac = await crypto.subtle.sign('HMAC', key, bytesOfStringToSign));
</code></pre>
<p>where <code>bytesOfSecretKey</code> is just a key string represented as bytes, and <code>bytesOfStringToSign</code> is the string I am signing. Any pointers would be appreciated!</p>
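Note that the SubtleCrypto calls shown import an HMAC key, not an RSA key, so what the JavaScript computes is an HMAC-SHA256 tag rather than an RSA signature. Assuming that is the intended operation, the Python standard library replicates it without any third-party package (Python-RSA is not needed here):

```python
import hashlib
import hmac

def sign(secret_key: bytes, message: bytes) -> bytes:
    # Equivalent of crypto.subtle.sign('HMAC', key, bytesOfStringToSign)
    # with the key imported for {name: 'HMAC', hash: 'SHA-256'}.
    return hmac.new(secret_key, message, hashlib.sha256).digest()

mac = sign(b"my secret key", b"string to sign")
print(mac.hex())
```

When verifying a received MAC against a computed one, prefer <code>hmac.compare_digest</code> over <code>==</code> to avoid timing leaks.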
|
<javascript><python><rsa><subtlecrypto>
|
2023-01-08 20:03:18
| 1
| 1,504
|
PlinyTheElder
|
75,050,590
| 18,758,062
|
Error using Pytorch nn.Sequential: RuntimeError: mat1 and mat2 shapes cannot be multiplied
|
<p>After trying to clean up my working code to use <code>nn.Sequential</code>, I get the following error on a forward pass:</p>
<blockquote>
<p>RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x49 and 3136x512)</p>
</blockquote>
<p>This is likely because during my original forward pass, I did this in the middle of the pass to get the correct dimensions:</p>
<pre class="lang-py prettyprint-override"><code>x = x.view(x.size()[0], -1)
</code></pre>
<p>How can this be done when we're using <code>nn.Sequential</code> to define the layers?</p>
<p><strong>Old Working:</strong></p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn
import torch.nn.functional as F
import torch
class Net(nn.Module):
def __init__(self, out_dims):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(4, 32, 8, 4)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 4, 2)
self.relu2 = nn.ReLU(inplace=True)
self.conv3 = nn.Conv2d(64, 64, 3, 1)
self.relu3 = nn.ReLU(inplace=True)
self.fc4 = nn.Linear(3136, 512)
self.relu4 = nn.ReLU(inplace=True)
self.fc5 = nn.Linear(512, out_dims)
def forward(self, x):
x = self.conv1(x)
x = self.relu1(x)
x = self.conv2(x)
x = self.relu2(x)
x = self.conv3(x)
x = self.relu3(x) # torch.Size([1, 64, 7, 7])
x = x.view(x.size()[0], -1) # torch.Size([1, 3136])
x = self.fc4(x)
x = self.relu4(x)
x = self.fc5(x)
return x
net = Net(2)
x = torch.rand(4, 84, 84).unsqueeze(0)
net(x)
</code></pre>
<p><strong>New non-working using <code>nn.Sequential</code></strong>:</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn as nn
import torch.nn.functional as F
import torch
class Net(nn.Module):
def __init__(self, out_dims):
super(Net, self).__init__()
self.layers = nn.Sequential(
nn.Conv2d(4, 32, 8, 4),
nn.ReLU(inplace=True),
nn.Conv2d(32, 64, 4, 2),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, 3, 1),
nn.ReLU(inplace=True),
nn.Linear(3136, 512), # RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x49 and 3136x512)
nn.ReLU(inplace=True),
nn.Linear(512, out_dims),
)
def forward(self, x):
return self.layers(x)
net = Net(2)
x = torch.rand(4, 84, 84).unsqueeze(0)
net(x)
</code></pre>
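One way to keep the flattening step inside <code>nn.Sequential</code> is <code>nn.Flatten</code>, which performs the same reshape as <code>x.view(x.size(0), -1)</code> at exactly that point in the stack. A minimal sketch of the fixed layer list:

```python
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(4, 32, 8, 4), nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, 4, 2), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, 1), nn.ReLU(inplace=True),
    nn.Flatten(),            # replaces x = x.view(x.size()[0], -1)
    nn.Linear(3136, 512), nn.ReLU(inplace=True),
    nn.Linear(512, 2),
)

x = torch.rand(4, 84, 84).unsqueeze(0)
print(layers(x).shape)  # torch.Size([1, 2])
```

The three convolutions turn the 84×84 input into 64 feature maps of 7×7, and 64·7·7 = 3136, which is why the first <code>Linear</code> expects 3136 inputs.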
|
<python><pytorch><neural-network>
|
2023-01-08 19:13:05
| 1
| 1,623
|
gameveloster
|
75,050,436
| 5,736,438
|
Why is Scipy.linalg.eigh much slower when using np.complex64?
|
<p>I have a large Hermitian matrix of which I need to calculate the eigenvalues and eigenvectors. For this I use <code>scipy.linalg.eigh</code>. Above a certain size of the matrix, scipy is much faster if <code>np.complex128</code> is used as data type instead of <code>np.complex64</code>.</p>
<p>If I use <code>np.complex64</code>, the CPU load drops after some time and only one core is used. I checked the SciPy source and SciPy seems to use the <code>cheevr</code> or <code>zheevr</code> methods from lapack. I use SciPy with openblas-0.3.20. Is this a bug in openblas?</p>
<p>If I run the following code with 32 cores, it takes about 13 minutes with <code>np.complex128</code> but more than 2 hours with <code>np.complex64</code>.</p>
<pre><code>import numpy as np
import scipy.linalg
import time
n = 27
shape = (n**3, n**3)
dtype = np.complex64
H = np.random.uniform(-1, 1, shape) + 1.j * np.random.uniform(-1, 1, shape)
H = np.asarray(H, dtype=dtype)
start = time.time()
values, vectors = scipy.linalg.eigh(H)
print(f"It took {time.time() - start} seconds.")
</code></pre>
|
<python><numpy><scipy><blas><eigenvector>
|
2023-01-08 18:48:28
| 0
| 1,062
|
Peter234
|
75,050,435
| 11,693,768
|
Python pandas drop columns if their partial name is in a list or column in pandas
|
<p>I have the following dataframe called <code>dropthese</code>.</p>
<pre><code> | partname | x1 | x2 | x3....
0 text1_mid1
1 another1_mid2
2 yet_another
</code></pre>
<p>And another dataframe called <code>df</code> that looks like this.</p>
<pre><code> text1_mid1_suffix1 | text1_mid1_suffix2 | ... | something_else | another1_mid2_suffix1 | ....
0 .....
1 .....
2 .....
3 .....
</code></pre>
<p>I want to drop all the columns from <code>df</code>, if a part of the name is in <code>dropthese['partname']</code>.</p>
<p>So for example, since <code>text1_mid1</code> is in <code>partname</code>, all columns that contain that partial string should be dropped like <code>text1_mid1_suffix1</code> and <code>text1_mid1_suffix2</code>.</p>
<p>I have tried,</p>
<pre><code>thisFilter = df.filter(dropthese.partname, regex=True)
df.drop(thisFilter, axis=1)
</code></pre>
<p>But I get this error, <code>TypeError: Keyword arguments `items`, `like`, or `regex` are mutually exclusive</code>. What is the proper way to do this filter?</p>
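<code>df.filter</code> accepts only a single regex string, not a Series, which is what triggers the "mutually exclusive" error. One way around it is to OR the part names into one pattern (escaping them in case they contain regex metacharacters) and drop the matching columns. A sketch with the sample data from the question:

```python
import re

import pandas as pd

dropthese = pd.DataFrame({"partname": ["text1_mid1", "another1_mid2", "yet_another"]})
df = pd.DataFrame(columns=["text1_mid1_suffix1", "text1_mid1_suffix2",
                           "something_else", "another1_mid2_suffix1"])

# Build one alternation pattern from all part names.
pattern = "|".join(re.escape(p) for p in dropthese["partname"])
df = df.drop(columns=df.filter(regex=pattern).columns)
print(list(df.columns))  # ['something_else']
```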
|
<python><pandas><filter>
|
2023-01-08 18:48:18
| 1
| 5,234
|
anarchy
|
75,050,419
| 41,174
|
python - mock - class method mocked but not reported as being called
|
<p>I'm learning Python mocks and need some help understanding how <em>patch</em> works when mocking a class.</p>
<p>In the code below, I mocked a class. The function under test receives the mock and calls a method on it. In my assertions, the class is successfully reported as called, but the method is not.</p>
<p>I added a debug print to view the state inside the function under test, and there the method is reported as called.</p>
<p>My expectation is that the assertion <em>assert facadeMock.install.called</em> should be true. Why is it not reported as called, and how do I achieve this?</p>
<p>Thank you.</p>
<p>install/__init__.py</p>
<pre><code>from .facade import Facade
def main():
f = Facade()
f.install()
print('jf-debug-> "f.install.called": {value}'.format(
value=f.install.called))
</code></pre>
<p>test/install_tests.py</p>
<pre><code>import os
import sys
# allow import of package
sys.path.insert(0,
os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from unittest.mock import patch
import install
@patch('install.Facade') # using autospec=True did not change the result
def test_main_with_links_should_call_facade_install_with_link_true(facadeMock):
install.main()
assert facadeMock.called
assert facadeMock.install is install.Facade.install
assert facadeMock.install.called # <-------------------- Fails here!
</code></pre>
<p>output:</p>
<pre><code>============================= test session starts ==============================
platform linux -- Python 3.10.6, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/jfl/ubuntu-vim, configfile: pytest.ini
collected 1 item
test/install_tests.py jf-debug-> "f.install.called": True
F
=================================== FAILURES ===================================
________ test_main_with_links_should_call_facade_install_with_link_true ________
facadeMock = <MagicMock name='Facade' id='140679041900864'>
@patch('install.Facade')
def test_main_with_links_should_call_facade_install_with_link_true(facadeMock):
install.main()
assert facadeMock.called
assert facadeMock.install is install.Facade.install
> assert facadeMock.install.called
E AssertionError: assert False
E + where False = <MagicMock name='Facade.install' id='140679042325216'>.called
E + where <MagicMock name='Facade.install' id='140679042325216'> = <MagicMock name='Facade' id='140679041900864'>.install
test/install_tests.py:21: AssertionError
=========================== short test summary info ============================
FAILED test/install_tests.py::test_main_with_links_should_call_facade_install_with_link_true - AssertionError: assert False
============================== 1 failed in 0.09s ===============================
</code></pre>
<p><strong>[edit]</strong></p>
<p>Thank you to @chepner and @Daniil Fajnberg for their comments. I found the cause of the problem.</p>
<p>The problem can be reduced to this:
<em><code>install/__init__.py</code></em> receives an instance of Facade when calling Facade() in main().
This instance is not the same as the one received as a parameter of the test; they are different objects.</p>
<p>to retrieve the instance received in main(), do:</p>
<pre><code> actualInstance = facadeMock.return_value
assert actualInstance.install.called
</code></pre>
<p>And it works!</p>
<p>Thank you. That really helps me understand the working of mocks in python.</p>
<p><strong>[/edit]</strong></p>
|
<python><unit-testing><mocking><python-unittest><python-unittest.mock>
|
2023-01-08 18:45:56
| 1
| 1,362
|
Jean-Francois
|
75,050,371
| 9,784,909
|
Difficulty Importing Fonts in Python FPDF package
|
<p>I'm having a tough time trying to get something working which I think should be straightforward.</p>
<p>I'm new to Python and learning via projects, in this case a simple PDF generator. I want to add custom fonts (Poppins) to my program. I can get it to work if I place the font files in the same location as main.py, but ideally I want them stored in a separate folder.</p>
<p>I've tried numerous suggestions but nothing has worked for me yet. Such as:</p>
<ul>
<li>appending/inserting the font filepath to sys.path (<a href="https://stackoverflow.com/questions/4383571/importing-files-from-different-folder">link</a>)</li>
<li>creating an <code>__init__.py</code> file in the main directory with the following</li>
</ul>
<pre><code> import sys
sys.path.insert(1, '.')
</code></pre>
<ul>
<li>Also tried a blank <code>__init__.py</code> file in the font directory</li>
</ul>
<p>I've attached a screenshot of my code below along with the error message. Thanks in advance for any support.
<a href="https://i.sstatic.net/ykjoN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykjoN.png" alt="My python code" /></a></p>
|
<python><fpdf>
|
2023-01-08 18:35:16
| 1
| 403
|
Tom E
|
75,050,364
| 6,372,226
|
More pythonic way to filter objects from a list of dicts based on a dict value that must not contain a string from a list of strings
|
<p>As I'm coming from Java, I wonder if there is a more pythonic approach to achieve the following logic:</p>
<pre><code>movies = [{'original_title': 'Star Wars'}, {'original_title': 'Avengers'}, {'original_title': 'After Love'}]
blacklist = ['Star', 'Love']
filtered_movies = []
for movie in movies:
blacklisted = False
for item in blacklist:
if item in movie['original_title']:
blacklisted = True
break
if not blacklisted:
filtered_movies.append(movie)
return filtered_movies
</code></pre>
<p><code>filtered_movies</code> then only contains the item <code>{'original_title': 'Avengers'}</code>.</p>
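A common Python idiom for this is a list comprehension combined with <code>any()</code>, which reproduces the inner loop and early <code>break</code> in one expression. A sketch with the data from the question:

```python
movies = [{'original_title': 'Star Wars'},
          {'original_title': 'Avengers'},
          {'original_title': 'After Love'}]
blacklist = ['Star', 'Love']

# Keep a movie only if no blacklist term appears in its title;
# any() short-circuits just like the explicit `break` did.
filtered_movies = [m for m in movies
                   if not any(term in m['original_title'] for term in blacklist)]
print(filtered_movies)  # [{'original_title': 'Avengers'}]
```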
|
<python><list><dictionary><filter>
|
2023-01-08 18:34:19
| 3
| 384
|
XDAF
|
75,050,310
| 558,619
|
Python: Dynamically add properties to class instance, properties return function value with inputs
|
<p>I've been going through all the Stackoverflow answers on dynamic property setting, but for whatever reason I can't seem to get this to work.</p>
<p>I have a class, <code>Evolution_Base</code>, that in its <code>init</code> creates an instance of <code>Value_Differences</code>. <code>Value_Differences</code> should be dynamically creating <code>properties</code>, based on the list I pass, that returns the function value from <code>_get_df_change</code>:</p>
<pre><code>from pandas import DataFrame
from dataclasses import dataclass
import pandas as pd
class Evolution_Base():
def __init__(self, res_date_0 : DataFrame , res_date_1 : DataFrame):
@dataclass
class Results_Data():
res_date_0_df : DataFrame
res_date_1_df : DataFrame
self.res = Results_Data(res_date_0_df= res_date_0,
res_date_1_df= res_date_1)
property_list = ['abc', 'xyz']
self.difference = Value_Differences(parent = self, property_list=property_list)
# Shared Functions
def _get_df_change(self, df_name, operator = '-'):
df_0 = getattr(self.res.res_date_0_df, df_name.lower())
df_1 = getattr(self.res.res_date_1_df, df_name.lower())
return self._df_change(df_1, df_0, operator=operator)
def _df_change(self, df_1 : pd.DataFrame, df_0 : pd.DataFrame, operator = '-') -> pd.DataFrame:
"""
Returns df_1 <operator | default = -> df_0
"""
# is_numeric mask
m_1 = df_1.select_dtypes('number')
m_0 = df_0.select_dtypes('number')
def label_me(x):
x.columns = ['t_1', 't_0']
return x
if operator == '-':
return label_me(df_1[m_1] - df_0[m_0])
elif operator == '+':
return label_me(df_1[m_1] + df_0[m_0])
class Value_Differences():
def __init__(self, parent : Evolution_Base, property_list = []):
self._parent = parent
for name in property_list:
def func(self, prop_name):
return self._parent._get_df_change(name)
# I've tried the following...
setattr(self, name, property(fget = lambda cls_self: func(cls_self, name)))
setattr(self, name, property(func(self, name)))
setattr(self, name, property(func))
</code></pre>
<p>Its driving me nuts... Any help appreciated!</p>
<p>My desired outcome is for:</p>
<pre><code>evolution = Evolution_Base(df_1, df_2)
evolution.difference.abc == evolution._df_change('abc')
evolution.difference.xyz == evolution._df_change('xyz')
</code></pre>
<p>EDIT: The simple question is really, how do I setattr for a property <em>function</em>?</p>
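A property object stored with <code>setattr(self, ...)</code> is never triggered, because the descriptor protocol only consults the class, not the instance dict. One way to get the <code>evolution.difference.abc</code> behaviour without touching the class is <code>__getattr__</code>, which is called only when normal attribute lookup fails. A sketch with a stand-in parent (<code>FakeParent</code> is hypothetical, substituting for <code>Evolution_Base</code>):

```python
class ValueDifferences:
    def __init__(self, parent, property_list):
        self._parent = parent
        self._names = set(property_list)

    def __getattr__(self, name):
        # Only reached when `name` is not a real attribute of the instance,
        # so _parent and _names lookups above never recurse into here.
        if name in self._names:
            return self._parent._get_df_change(name)
        raise AttributeError(name)


class FakeParent:
    def _get_df_change(self, df_name):
        return f"diff({df_name})"


vd = ValueDifferences(FakeParent(), ["abc", "xyz"])
print(vd.abc)  # diff(abc)
```

The alternative, if real <code>property</code> objects are wanted, is to set them on a dynamically created class (e.g. via <code>type(...)</code>) rather than on the instance.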
|
<python>
|
2023-01-08 18:24:12
| 10
| 3,541
|
keynesiancross
|
75,049,933
| 3,132,844
|
How to interpolate values in non-rectangular coordinates using Python?
|
<p>I need to apply compensation values in my optical system using Python. I've measured my compensation params at the corners of my table and I want to interpolate values linearly between them, but the map is not a rectangle.
Example:</p>
<pre><code># Corners coordinates:
a_real = (45, 45)
a_coeff = (333, 223)
b_real = (-45, -45)
b_coeff = (325, 243)
c_real = (-45, 45)
c_coeff = (339, 244)
d_real = (45, -45)
d_coeff = (319, 228)
</code></pre>
<p>Let's say, I want to know compensation coefficients in points (40, 40), or (0, 0).
<a href="https://i.sstatic.net/LigD1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LigD1.png" alt="enter image description here" /></a></p>
<ol>
<li>How this can be done? I'm looking at scipy.interpolate.interp2d but I'm not sure that it is my case</li>
<li>What if I want to add more points, defining my grid?</li>
</ol>
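<code>scipy.interpolate.griddata</code> handles exactly this case: scattered (non-rectangular) sample points, linear interpolation inside their convex hull, and it keeps working unchanged if more points are appended later. A sketch with the four corners from the question:

```python
import numpy as np
from scipy.interpolate import griddata

# Real-world corner coordinates and the coefficient pair measured at each.
points = np.array([(45, 45), (-45, -45), (-45, 45), (45, -45)], dtype=float)
values = np.array([(333, 223), (325, 243), (339, 244), (319, 228)], dtype=float)

# Linear interpolation at arbitrary query points inside the hull;
# adding more measurements just means appending rows to both arrays.
query = np.array([(40, 40), (0, 0)], dtype=float)
coeffs = griddata(points, values, query, method="linear")
print(coeffs)  # one interpolated (c1, c2) pair per query point
```

With a denser set of measurements, switching to <code>method="cubic"</code> gives a smoother surface with no other code changes.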
|
<python><numpy><scipy><coordinate-transformation>
|
2023-01-08 17:28:03
| 1
| 1,494
|
artsin
|
75,049,862
| 338,479
|
Minimal implementation of a "file-like" object?
|
<p>I want to create a file-like object to be passed to another module. I don't know if that other module will be calling read() or readline() or both. If I were to subclass say <code>io.IOBase</code> and just implement <code>read()</code> would that be sufficient, even if the client is calling <code>readline()</code>?</p>
<p>Bonus question: if my class will be opened in text mode, what's the minimum I can implement to let the superclass handle the unicode decoding?</p>
<p>(This will be an input-only class.)</p>
<p>Meta: I know I <em>could</em> just write all the methods and see what actually gets called, but I'd like to be a bit "future proof" and write to the actual specs, instead of trying things to see if they work, and then having them break in the future someday.</p>
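For a read-only source, subclassing <code>io.RawIOBase</code> and implementing just <code>readable()</code> and <code>readinto()</code> is enough: <code>read()</code> and <code>readall()</code> are derived for you, <code>IOBase</code> supplies a (byte-at-a-time) <code>readline()</code>, and wrapping the object in <code>io.BufferedReader</code> plus <code>io.TextIOWrapper</code> gives clients efficient <code>readline()</code> and unicode decoding with no extra code. A minimal sketch:

```python
import io

class StaticReader(io.RawIOBase):
    """Read-only file-like object over a fixed bytes payload."""

    def __init__(self, data: bytes):
        self._buf = io.BytesIO(data)

    def readable(self):
        return True

    def readinto(self, b):
        # Fill the caller's buffer; returning 0 signals EOF.
        chunk = self._buf.read(len(b))
        b[:len(chunk)] = chunk
        return len(chunk)


raw = StaticReader(b"first line\nsecond line\n")
text = io.TextIOWrapper(io.BufferedReader(raw), encoding="utf-8")
line = text.readline()
print(repr(line))  # 'first line\n'
```

The same raw object serves binary clients directly and text clients through the wrapper, which answers the bonus question: the superclasses do the decoding.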
|
<python><file>
|
2023-01-08 17:18:41
| 1
| 10,195
|
Edward Falk
|
75,049,828
| 997,832
|
Tensorflow - Text Classification - Shapes (None,) and (None, 250, 100) are incompatible error
|
<p>I want to classify text with multiple labels. I use TextVectorization layer and CategoricalCrossEntropy function. Here is my model code:</p>
<p>Text Vectorizer:</p>
<pre><code>def custom_standardization(input_data):
print(input_data[:5])
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation),
'')
max_features = 10000
sequence_length = 250
vectorize_layer = layers.TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode='int',
output_sequence_length=sequence_length)
</code></pre>
<p>Model generation:</p>
<pre><code>MAX_TOKENS_NUM = 5000 # Maximum vocab size.
MAX_SEQUENCE_LEN = 40 # Sequence length to pad the outputs to.
EMBEDDING_DIMS = 100
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(1,), dtype=tf.string))
model.add(vectorize_layer)
model.add(tf.keras.layers.Embedding(MAX_TOKENS_NUM + 1, EMBEDDING_DIMS))
model.summary()
model.compile(loss=losses.CategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.CategoricalAccuracy())
</code></pre>
<p>FIT :</p>
<pre><code>epochs = 10
history = model.fit(
x_train,
y=y_train,
epochs=epochs)
</code></pre>
<p><code>x_train</code> is a list of texts like <code>['This is a text about science.', 'This is a text about art',...]</code></p>
<p><code>y_train</code> also is a list of texts like <code>['Science','Art',...]</code></p>
<p>When I try to run fitting code it gives the following error:</p>
<pre><code>ValueError: Shapes (None,) and (None, 250, 100) are incompatible
</code></pre>
<p>What am i doing wrong? And also I'd like to learn if it's a good approach/model for classifying test with multiple labels?</p>
<p><strong>EDIT</strong>:</p>
<p>I edited my code according to Frightera's answer. Here is my model:</p>
<pre><code>MAX_TOKENS_NUM = 5000 # Maximum vocab size.
MAX_SEQUENCE_LEN = 40 # Sequence length to pad the outputs to.
EMBEDDING_DIMS = 100
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(1,), dtype=tf.string))
model.add(vectorize_layer)
model.add(tf.keras.layers.Embedding(MAX_TOKENS_NUM + 1, EMBEDDING_DIMS))
model.add(layers.Dropout(0.2))
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(len(labels)))
model.summary()
model.compile(loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.SparseCategoricalAccuracy())
</code></pre>
<p>And I pass <code>y_train_int</code> instead of <code>y_train</code> by converting categories to indexes with <code>y_train_int = [get_label_index(label) for label in y_train]</code></p>
<pre><code>epochs = 10
history = model.fit(
x_train,
y=y_train_int,
epochs=epochs)
</code></pre>
<p>Now the model fits, but when I check loss function with <code>plt.plot(history.history['loss'])</code> it's an all zero line like below:</p>
<p><a href="https://i.sstatic.net/wxj5M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wxj5M.png" alt="Model loss function" /></a></p>
<p>Is this model good for classification. Do I need those layers between input layer and final Dense Layer(Embedding etc.)? What am I doing wrong?</p>
<p><strong>EDIT 2:</strong>
I have the above model now. I am using SparseCategoricalCrossentropy and passing to the last Dense layer the number of labels, which is 78, and now the model fits.</p>
<p>Now when I use <code>model.predict(x_test)</code>, it gives following results:</p>
<pre><code>array([[ 1.3232083 , 3.4263668 , 0.3206688 , ..., -1.9279423 ,
-0.83103067, -5.3442082 ],
[ 0.11507592, -2.0753977 , -0.07149621, ..., -0.27729607,
-1.132122 , -2.4074485 ],
[ 0.87828857, -0.5063573 , 1.5770453 , ..., 0.72519284,
0.50958884, 3.7006462 ],
...,
[ 0.35316354, -3.1919005 , -0.25520897, ..., -1.648859 ,
-2.2707412 , -4.321298 ],
[ 0.89357865, 1.3001428 , 0.17324057, ..., -0.8185719 ,
-1.4108973 , -3.674326 ],
[ 1.6258209 , -0.59622926, 0.7382731 , ..., -0.8473997 ,
-0.90670204, -4.043623 ]], dtype=float32)
</code></pre>
<p>How can I convert these to labels?</p>
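Each row of <code>model.predict(x_test)</code> holds one logit per class, so the predicted label index is the row-wise argmax. The sketch below uses a small stand-in array and a hypothetical label list (in the real model it would be index-aligned with the final Dense layer's 78 units):

```python
import numpy as np

labels = ["Science", "Art", "History"]      # hypothetical, index-aligned
logits = np.array([[1.3, 3.4, 0.3],         # stand-in for model.predict(x_test)
                   [0.1, -2.0, 4.2]])

# Row-wise argmax picks the highest-scoring class for each sample.
predicted = [labels[i] for i in np.argmax(logits, axis=-1)]
print(predicted)  # ['Art', 'History']
```

If per-class confidences are needed rather than a single label, applying a softmax to the same rows turns the logits into probabilities.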
|
<python><tensorflow><deep-learning>
|
2023-01-08 17:14:23
| 1
| 1,395
|
cuneyttyler
|
75,049,808
| 3,507,584
|
Django postgres "No migrations to apply" troubleshoot
|
<p>I had to modify a table of my app, so I dropped its data from the postgres database (using <code>objectname.objects.all().delete()</code> in the Django Python shell, and via PGAdmin).</p>
<p>I deleted the appname <code>migrations</code> folder. When I run <code>python manage.py makemigrations appname</code>, the migrations folder gets created with a <code>0001_initial.py</code> creating the tables.
When I run <code>python manage.py migrate appname</code>, nothing happens and I cannot see the tables in postgres PGAdmin.</p>
<pre><code>(website) C:\Users\Folder>python manage.py makemigrations appname
Migrations for 'appname':
food\migrations\0001_initial.py
- Create model Table1
- Create model Table2
- Create index Table2_search__7dd784_gin on field(s) search_vector of model Table2
(website) C:\Users\Folder>python manage.py migrate
Operations to perform:
Apply all migrations: accounts, admin, auth, contenttypes, appname, sessions
Running migrations:
No migrations to apply.
</code></pre>
<p>When I deleted the folder migrations I can see the migrations are gone with <code>python manage.py showmigrations</code>.</p>
<p>I also tried <code>python manage.py migrate --run-syncdb</code> but still no result.</p>
<pre><code>(website) C:\Users\Folder>python manage.py migrate --run-syncdb
Operations to perform:
Synchronize unmigrated apps: messages, postgres, staticfiles
Apply all migrations: accounts, admin, auth, contenttypes, appname, sessions
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
No migrations to apply.
</code></pre>
<p>Any other idea on what may be happening? Maybe postgres full_text_search index is messing it up?
Any idea on how to get the tables in postgres again?</p>
|
<python><django><postgresql><django-migrations>
|
2023-01-08 17:11:31
| 2
| 3,689
|
User981636
|
75,049,684
| 1,172,439
|
boxplot scatterplot from r to python
|
<p>I am trying to port some code from R to python, but I am encountering some difficulties with plotting.</p>
<p>The simplest example I can give is the following:</p>
<pre><code>library(ggplot2)
df1=data.frame(a_class=letters[1:10], vals=sample(10, replace=TRUE))
ggplot(df1, aes(x=0, y=vals))+
geom_boxplot()+
theme(axis.title.x=element_blank(),
axis.text.x=element_blank(),
axis.ticks.x=element_blank())+
geom_jitter(aes(shape=a_class, color=a_class), height = 0)+
scale_shape_manual(values = 1:nrow(df1))
</code></pre>
<p>This code produces this image (it may be slightly different each time since sample is random).</p>
<p><a href="https://i.sstatic.net/vU3va.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vU3va.png" alt="enter image description here" /></a></p>
<p>How can I achieve this in seaborn or matplotlib?
I tried:</p>
<pre><code>df1 = pd.DataFrame({"a_class": list("abcdefghij"), "vals": [1,2,3,4,5,6,7,8,9,10]})
sns.boxplot(x=0, y="vals", data=df1)
sns.stripplot(x=0, y="vals", data=df1, jitter=True, hue="a_class", palette="Set1")
</code></pre>
<p>But I get the following error:</p>
<pre><code>AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5696\451374326.py in <module>
1 df1 = pd.DataFrame({"a_class": list("abcdefghij"), "vals": [1,2,3,4,5,6,7,8,9,10]})
2
----> 3 sns.boxplot(x=0, y="vals", data=df1)
4 sns.stripplot(x=0, y="vals", data=df1, jitter=True, hue="a_class", palette="Set1")
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_decorators.py in inner_f(*args, **kwargs)
44 )
45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 46 return f(**kwargs)
47 return inner_f
48
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py in boxplot(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth, whis, ax, **kwargs)
2241 ):
2242
-> 2243 plotter = _BoxPlotter(x, y, hue, data, order, hue_order,
2244 orient, color, palette, saturation,
2245 width, dodge, fliersize, linewidth)
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py in __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth)
404 width, dodge, fliersize, linewidth):
405
--> 406 self.establish_variables(x, y, hue, data, orient, order, hue_order)
407 self.establish_colors(color, palette, saturation)
408
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\categorical.py in establish_variables(self, x, y, hue, data, orient, order, hue_order, units)
154
155 # Figure out the plotting orientation
--> 156 orient = infer_orient(
157 x, y, orient, require_numeric=self.require_numeric
158 )
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_core.py in infer_orient(x, y, orient, require_numeric)
1309 """
1310
-> 1311 x_type = None if x is None else variable_type(x)
1312 y_type = None if y is None else variable_type(y)
1313
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_core.py in variable_type(vector, boolean_type)
1227
1228 # Special-case all-na data, which is always "numeric"
-> 1229 if pd.isna(vector).all():
1230 return "numeric"
1231
AttributeError: 'bool' object has no attribute 'all'
</code></pre>
|
<python><r><matplotlib><ggplot2><seaborn>
|
2023-01-08 16:56:18
| 0
| 947
|
Fabrizio
|
75,049,548
| 552,563
|
Why can't you intern bytes in Python?
|
<p>As mentioned in Python documentation, <a href="https://docs.python.org/3/library/sys.html?highlight=intern#sys.intern" rel="nofollow noreferrer"><code>sys.intern()</code></a> only accepts string objects. I understand why mutable types are not supported by <code>sys.intern</code>. But there's at least one more immutable type for which interning would make sense: <code>bytes</code>.</p>
<p>So here's my question: is there any particular reason why Python interning doesn't support <code>bytes</code>?</p>
|
<python>
|
2023-01-08 16:38:29
| 1
| 3,011
|
Alex Bochkarev
|
75,049,502
| 15,825,321
|
XGBoost prediction input dimensionality error
|
<p>I have the following model:</p>
<pre><code>X_train_xgb, X_test_xgb, y_train_xgb, y_test_xgb = X_train.values, X_test.values, y_train.values, y_test.values
xgb_model = xgboost.XGBRegressor()
xgb_model.fit(X_train_xgb, y_train_xgb)
</code></pre>
<p>X_train and X_test have the same count of columns. However, running</p>
<pre><code>test_point = X_test_xgb[42]
original_model_prediction = xgb_model.predict(test_point)
</code></pre>
<p>returns</p>
<blockquote>
<p>C:/buildkite-agent/builds/buildkite-windows-cpu-autoscaling-group-i-08de971ced8a8cdc6-1/xgboost/xgboost-ci-windows/src/predictor/cpu_predictor.cc:377: Check failed: m->NumColumns() == model.learner_model_param->num_feature (1 vs. 170) : Number of columns in data must equal to trained model.</p>
</blockquote>
<p><code>test_point.shape</code> returns <code>(170,)</code>. I have 170 columns, so this should be ok.</p>
<p>This has to be some kind of dimensionality issue, as <code>test_point = X_test_xgb[42:43]</code> returns a result when used in <code>xgb_model.predict(test_point)</code> (albeit just exactly <em>one</em> result). Does .predict() expect a header row? even though I trained using an np.array?</p>
<p>Also, neither <code>X_test_xgb.T[42]</code> nor <code>test_point.T</code> solve the problem (as expected). Where is my mistake?</p>
<p>Thanks already!</p>
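Indexing with <code>X_test_xgb[42]</code> drops a dimension: the result is a 1-D vector of shape <code>(170,)</code>, which XGBoost interprets as 170 rows of 1 feature each, hence the "1 vs. 170" complaint. <code>predict</code> expects a 2-D <code>(n_samples, n_features)</code> array, which is why the slice <code>[42:43]</code> works. The usual fix is <code>reshape(1, -1)</code>; a numpy-only sketch:

```python
import numpy as np

X_test_xgb = np.random.rand(100, 170)   # stand-in for the real test matrix
test_point = X_test_xgb[42]
print(test_point.shape)                 # (170,) -- 1-D, not a single-row matrix

test_point_2d = test_point.reshape(1, -1)
print(test_point_2d.shape)              # (1, 170) -- what predict() expects
```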
|
<python><numpy><multidimensional-array><xgboost>
|
2023-01-08 16:33:16
| 0
| 303
|
Paul1911
|
75,049,265
| 9,855,588
|
unable to mock global variable assigned to function call python pytest
|
<p>When I run my pytest and mock.patch a global variable (one assigned from a function call) in a Python file, I'm unable to mock it: the function still gets called, and I don't want it to actually execute during tests. How can I prevent it from being called?</p>
<pre><code>file 1: /app/file1.py
def some_func():
return "the sky is like super blue"
file 2: /app/file2.py
from app.file1 import some_func
VAR1 = some_func()
file 3: /tests/app/test_file2.py
import mock
import pytest
from app.file2 import VAR1
@mock.patch('app.file2.VAR1', return_value=None)
def test_file_2_func(baba_fake_val):
    print('made it to my test :)')
    print(VAR1)
</code></pre>
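The patch on <code>app.file2.VAR1</code> comes too late: <code>VAR1 = some_func()</code> runs once, at the moment <code>app.file2</code> is first imported, which is the <code>from app.file2 import VAR1</code> line at the top of the test file, so the real function has already executed by the time the decorator applies (and <code>return_value=</code> on a plain variable has no effect anyway). Patching <code>some_func</code> <em>before</em> the module body runs does prevent the call. The sketch below builds throwaway stand-ins for <code>app.file1</code>/<code>app.file2</code> with <code>types.ModuleType</code> so the timing is visible without the real package:

```python
import sys
import types
from unittest import mock

# Stand-in for app/file1.py
file1 = types.ModuleType("file1")
file1.some_func = lambda: "the sky is like super blue"
sys.modules["file1"] = file1

with mock.patch.object(file1, "some_func", return_value=None):
    # Stand-in for importing app/file2.py *while the patch is active*:
    file2 = types.ModuleType("file2")
    exec("import file1\nVAR1 = file1.some_func()", file2.__dict__)

print(file2.VAR1)  # None -- the real some_func never ran
```

With the real package, the same effect comes from applying <code>mock.patch('app.file1.some_func')</code> before <code>app.file2</code> is imported, or from calling <code>importlib.reload(app.file2)</code> inside the patch context.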
|
<python><python-3.x><pytest><pytest-mock>
|
2023-01-08 16:03:00
| 1
| 3,221
|
dataviews
|
75,049,127
| 3,451,339
|
Resize any image to 512 x 512
|
<p>I am trying to train a stable diffusion model, which receives <code>512 x 512</code> images as inputs.</p>
<p>I am downloading the bulk of images from the web, and they have multiple sizes and shapes, and so I need to preprocess them and convert to <code>512 x 512</code>.</p>
<p>If I take this <code>2000 × 1434</code> image:</p>
<p><a href="https://i.sstatic.net/DTE2p.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DTE2p.jpg" alt="enter image description here" /></a></p>
<p>And I try to resize it, with:</p>
<pre><code>from PIL import Image
# Open an image file
with Image.open("path/to/Image.jpg") as im:
# Create a thumbnail of the image
im.thumbnail((512, 512))
# Save the thumbnail
im.save("path/to/Image_resized.jpg")
</code></pre>
<p>I get this <code>512 × 367</code> image:</p>
<p><a href="https://i.sstatic.net/mjGXm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mjGXm.jpg" alt="enter image description here" /></a></p>
<p>But I need 512 for both width and height, <em><strong>without distorting the image</strong></em>, like you can achieve on this website:</p>
<p><a href="https://www.birme.net/" rel="nofollow noreferrer">Birme</a></p>
<p>Any ideas on how I can achieve this conversion using python?</p>
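Pillow's <code>ImageOps.fit</code> scales and then center-crops to an exact target size while preserving aspect ratio, which matches Birme's default behaviour; <code>ImageOps.pad</code> is the letterboxing alternative if cropping is undesirable. A sketch with an in-memory stand-in for the downloaded file:

```python
from PIL import Image, ImageOps

def to_square(im, size=512):
    # Scale, then center-crop to exactly size x size (no distortion).
    return ImageOps.fit(im, (size, size), Image.LANCZOS)

im = Image.new("RGB", (2000, 1434))  # stand-in for Image.open("path/to/Image.jpg")
out = to_square(im)
print(out.size)  # (512, 512)
```

In the real pipeline, <code>to_square(Image.open(path)).save(out_path)</code> replaces the <code>thumbnail</code> call, since <code>thumbnail</code> only guarantees the image fits <em>within</em> the box.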
|
<python><python-imaging-library><stable-diffusion>
|
2023-01-08 15:44:29
| 1
| 10,121
|
8-Bit Borges
|
75,049,077
| 1,436,800
|
How to create a single API which takes data of all Employees
|
<p>I am new to django. I have a model like this:</p>
<pre><code>class Standup(models.Model):
team = models.ForeignKey("Team", on_delete=models.CASCADE)
standup_time = models.DateTimeField(auto_now_add=True)
class StandupUpdate(models.Model):
standup = models.ForeignKey("Standup", on_delete=models.CASCADE)
employee = models.ForeignKey("Employee", on_delete=models.CASCADE)
update_time = models.DateTimeField(auto_now_add=True)
status = models.CharField(max_length=50)
work_done_yesterday = models.TextField()
work_to_do = models.TextField()
blockers = models.TextField()
</code></pre>
<p>If I write a view for this model, every employee will have to hit the API for his/her own standup update. But I am supposed to create a single API which takes the updates of all the employees and saves them into the database. In the frontend, it will be something like this:</p>
<ul>
<li>An employee will select a team, as one employee can be part of
multiple teams.</li>
<li>Then the employee will give his/her standup updates.</li>
<li>Then another employee will do the same thing, and so on.</li>
<li>At the end, by clicking on a submit button, the whole data will be saved together.
Any guidance on how to do it?</li>
</ul>
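<p>To make the idea concrete, this is roughly the single request payload I have in mind and how I imagine the view would fan it out into per-employee rows before bulk-saving (the field names and the plain-Python flattening are just a sketch, not actual Django code):</p>

```python
# Sketch: one bulk payload for the whole team, flattened into one row per
# employee update (each row would become a StandupUpdate instance).
def flatten_standup(payload):
    rows = []
    for update in payload["updates"]:
        rows.append({
            "team": payload["team"],
            "employee": update["employee"],
            "status": update["status"],
            "work_done_yesterday": update["work_done_yesterday"],
            "work_to_do": update["work_to_do"],
            "blockers": update["blockers"],
        })
    return rows

payload = {
    "team": 1,  # hypothetical team id
    "updates": [
        {"employee": 7, "status": "ok", "work_done_yesterday": "x",
         "work_to_do": "y", "blockers": ""},
        {"employee": 8, "status": "ok", "work_done_yesterday": "a",
         "work_to_do": "b", "blockers": ""},
    ],
}
rows = flatten_standup(payload)
print(len(rows))  # 2
```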
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-01-08 15:37:27
| 1
| 315
|
Waleed Farrukh
|
75,048,983
| 4,845,935
|
Can I mock sqlite3 CURRENT_TIMESTAMP in Python tests?
|
<p>I want to return custom value for SQLite3 CURRENT_TIMESTAMP in my Python tests by mocking the return value (without interfering system clock).</p>
<p>I discovered <a href="https://stackoverflow.com/questions/27499411/sqlite-can-i-mock-the-current-time-now-for-testing">this answer</a> but it doesn't work for CURRENT_TIMESTAMP (apparently because it is a keyword and not a function). Any ideas how to get this working?</p>
<p><strong>UPD.</strong> Tried to mock the DATETIME() function according to suggestion by <a href="https://stackoverflow.com/users/10498828/forpas">@forpas</a>, but looks like it is not working for CURRENT_TIMESTAMP (unlike calling DATETIME() directly):</p>
<pre><code>def mock_date(*_):
    return '1975-02-14'

def mock_datetime(*_):
    return '1975-02-14 12:34:56'

connection = sqlite3.connect(':memory:')

print('Before DATE() mock, DATE(\'now\'): ' + connection.execute('SELECT DATE(\'now\')').fetchone()[0])
connection.create_function('DATE', -1, mock_date)
print('After DATE() mock, DATE(\'now\'): ' + connection.execute('SELECT DATE(\'now\')').fetchone()[0])

print('Before DATETIME() mock, CURRENT_TIMESTAMP: ' + connection.execute('SELECT CURRENT_TIMESTAMP').fetchone()[0])
print('Before DATETIME() mock, DATETIME(\'now\'): ' + connection.execute('SELECT DATETIME(\'now\')').fetchone()[0])
connection.create_function('DATETIME', -1, mock_datetime)
print('After DATETIME() mock, CURRENT_TIMESTAMP: ' + connection.execute('SELECT CURRENT_TIMESTAMP').fetchone()[0])
print('After DATETIME() mock, DATETIME(\'now\'): ' + connection.execute('SELECT DATETIME(\'now\')').fetchone()[0])

connection.create_function('CURRENT_TIMESTAMP', -1, mock_datetime)
print('After CURRENT_TIMESTAMP mock, CURRENT_TIMESTAMP: ' + connection.execute('SELECT CURRENT_TIMESTAMP').fetchone()[0])
</code></pre>
<p>Here are the test results:</p>
<pre><code>Before DATE() mock, DATE('now'): 2023-01-11
After DATE() mock, DATE('now'): 1975-02-14
Before DATETIME() mock, CURRENT_TIMESTAMP: 2023-01-11 21:03:40
Before DATETIME() mock, DATETIME('now'): 2023-01-11 21:03:40
After DATETIME() mock, CURRENT_TIMESTAMP: 2023-01-11 21:03:40
After DATETIME() mock, DATETIME('now'): 1975-02-14 12:34:56
After CURRENT_TIMESTAMP mock, CURRENT_TIMESTAMP: 2023-01-11 21:03:40
</code></pre>
<p>So after <code>DATETIME()</code> is mocked, <code>DATETIME('now')</code> result has changed but <code>CURRENT_TIMESTAMP</code> has not.</p>
<p><strong>UPD2.</strong> Added test case with mocking CURRENT_TIMESTAMP itself.</p>
<p>The python version is 3.9.13 and sqlite3 version is 3.37.2. Test is performed in Windows environment.</p>
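<p>As a minimal reproduction of the part that does work: once <code>DATETIME</code> is overridden via <code>create_function</code>, <code>DATETIME('now')</code> returns the mocked value, while the <code>CURRENT_TIMESTAMP</code> keyword does not, so one workaround I'm considering is rewriting queries to use <code>DATETIME('now')</code> instead of the keyword:</p>

```python
import sqlite3

def mock_datetime(*_):
    return '1975-02-14 12:34:56'

connection = sqlite3.connect(':memory:')
connection.create_function('DATETIME', -1, mock_datetime)

# The CURRENT_TIMESTAMP keyword is evaluated internally, ignoring the override:
print(connection.execute('SELECT CURRENT_TIMESTAMP').fetchone()[0])
# ...but a DATETIME('now') call goes through the connection's function table:
print(connection.execute("SELECT DATETIME('now')").fetchone()[0])  # 1975-02-14 12:34:56
```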
|
<python><sqlite><unit-testing>
|
2023-01-08 15:26:32
| 2
| 874
|
dimnnv
|
75,048,909
| 5,359,846
|
How to make sure text inside a column is fully visible using Playwright?
|
<p>I have a table in my web page. Let us assume that one of the columns has a very long text, and the column is at its default width.</p>
<pre><code>expect(self.page.locator('text=ABCDEFGHIJKLMNOPQRSTUVWXYZ')).\
    to_be_visible(timeout=20 * 1000)
</code></pre>
<p>The code passes, as <code>Playwright</code> can find the text in the HTML.</p>
<p>How can I make sure that a human can see all the letters and that nothing is hidden?</p>
|
<python><playwright><playwright-python>
|
2023-01-08 15:15:49
| 0
| 1,838
|
Tal Angel
|
75,048,860
| 7,155,895
|
Resize images while preserving aspect ratio
|
<p>I have a small problem that could have a simple solution, but unfortunately I'm not very good at math.</p>
<p>I have three images that need to be stacked on top of each other and their heights add up to more than the screen height.</p>
<p>So to fix, I did a simple proportion and changed the height of the three images, like this (it's hypothetical, not the actual code):</p>
<p><code>new_img1.height = img1.height * screen.height // (img1.height + img2.height + img3.height)</code></p>
<p>The problem I'm having is doing the same thing, but with the width, considering all three images have the same width.</p>
<p>What I want is that the three images always have the same width as originally, but resized with the new height (so that the three images are proportionally smaller in both dimensions)</p>
<p>I've made several attempts, but my mathematical limits don't help me much XD</p>
<p>How should I fix this? Ah, I'm using Python 3.9 with Pygame (although I don't think the latter matters here)</p>
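<p>In case it helps to see what I mean, here is a sketch with made-up sizes: the single scale factor that comes from the height constraint should be applied to both dimensions so the aspect ratio is preserved:</p>

```python
def fit_stack(sizes, screen_height):
    # One common scale factor, derived from the total height,
    # applied to both width and height of every image
    total_height = sum(h for _, h in sizes)
    scale = screen_height / total_height
    return [(round(w * scale), round(h * scale)) for w, h in sizes]

sizes = [(400, 300), (400, 500), (400, 400)]  # hypothetical image sizes
resized = fit_stack(sizes, 600)
print(resized)  # [(200, 150), (200, 250), (200, 200)]
```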
|
<python><python-3.x><math><pygame><python-3.9>
|
2023-01-08 15:09:25
| 1
| 579
|
Rebagliati Loris
|
75,048,782
| 18,753,528
|
How to sequence numbers in text while looping
|
<p>I'm trying to reach a way to print numbers in a specific way but I can't, my code here</p>
<pre><code>for i in range(7): # 7 could be a string length, for example
    print("letter"+str(i)+",letter"+str(i+1)+"|letter"+str(i+1)+",letter"+str(i+2)+";")
</code></pre>
<p>will print:</p>
<pre><code>letter0,letter1|letter1,letter2;
letter1,letter2|letter2,letter3;
letter2,letter3|letter3,letter4;
letter3,letter4|letter4,letter5;
letter4,letter5|letter5,letter6;
letter5,letter6|letter6,letter7;
letter6,letter7|letter7,letter8;
</code></pre>
<p>What output I need is like this:</p>
<pre><code>letter0,letter1|letter1,letter2;
letter2,letter3|letter3,letter4;
letter4,letter5|letter5,letter6;
letter6,letter7|letter7,letter8;
letter8,letter9|letter9,letter10;
letter10,letter11|letter11,letter12;
letter12,letter13|letter13,letter0;
</code></pre>
<p>and <code>letter0;</code> should be the last one always</p>
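<p>The closest I've come is stepping the loop by 2 and wrapping the last index back to 0 with a modulo (here I assume an even length, 14 in this example):</p>

```python
n = 14  # hypothetical even length
lines = []
for i in range(0, n, 2):
    a, b, c = i, i + 1, (i + 2) % n  # modulo wraps the final pair back to letter0
    lines.append(f"letter{a},letter{b}|letter{b},letter{c};")
print("\n".join(lines))
```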
|
<python>
|
2023-01-08 14:59:07
| 4
| 875
|
IVs
|
75,048,688
| 17,267,064
|
PicklingError: Could not serialize object: IndexError: tuple index out of range
|
<p>I initiated pyspark in cmd and ran the commands below to sharpen my skills.</p>
<pre><code>C:\Users\Administrator>SUCCESS: The process with PID 5328 (child process of PID 4476) has been terminated.
SUCCESS: The process with PID 4476 (child process of PID 1092) has been terminated.
SUCCESS: The process with PID 1092 (child process of PID 3952) has been terminated.
pyspark
Python 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/01/08 20:07:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 3.3.1
/_/
Using Python version 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022 19:58:39)
Spark context Web UI available at http://Mohit:4040
Spark context available as 'sc' (master = local[*], app id = local-1673188677388).
SparkSession available as 'spark'.
>>> 23/01/08 20:08:10 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
a = sc.parallelize([1,2,3,4,5,6,7,8,9,10])
</code></pre>
<p>When I execute a.take(1), I get a "_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range" error and I am unable to find out why. When the same code is run on Google Colab, it doesn't throw any error. Below is what I get in the console.</p>
<pre><code>>>> a.take(1)
Traceback (most recent call last):
File "C:\Spark\python\pyspark\serializers.py", line 458, in dumps
return cloudpickle.dumps(obj, pickle_protocol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 692, in reducer_override
return self._function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 565, in _function_reduce
return self._dynamic_function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 546, in _dynamic_function_reduce
state = _function_getstate(func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 157, in _function_getstate
f_globals_ref = _extract_code_globals(func.__code__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in _extract_code_globals
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in <dictcomp>
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
~~~~~^^^^^^^
IndexError: tuple index out of range
Traceback (most recent call last):
File "C:\Spark\python\pyspark\serializers.py", line 458, in dumps
return cloudpickle.dumps(obj, pickle_protocol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 692, in reducer_override
return self._function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 565, in _function_reduce
return self._dynamic_function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 546, in _dynamic_function_reduce
state = _function_getstate(func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 157, in _function_getstate
f_globals_ref = _extract_code_globals(func.__code__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in _extract_code_globals
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in <dictcomp>
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
~~~~~^^^^^^^
IndexError: tuple index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Spark\python\pyspark\rdd.py", line 1883, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\context.py", line 1486, in runJob
sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3505, in _jrdd
wrapped_func = _wrap_function(
^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3362, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3345, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\serializers.py", line 468, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
</code></pre>
<p>It should provide [1] as an answer but instead throws this error. Is it because of incorrect installation?</p>
<p>Package used - spark-3.3.1-bin-hadoop3.tgz, Java(TM) SE Runtime Environment (build 1.8.0_351-b10), Python 3.11.1</p>
<p>Can anyone help in troubleshooting this? Many thanks in advance.</p>
|
<python><apache-spark><pyspark><rdd>
|
2023-01-08 14:47:42
| 3
| 346
|
Mohit Aswani
|
75,048,619
| 18,749,472
|
Django - stop logout with javascript pop-up confirm box
|
<p>In my Django site I have a logout button that redirects to the view <code>logout</code>. When the button is clicked it instantly logs the user out, but I would like a JS pop-up confirm box to appear when the logout button is clicked.</p>
<p>Currently, when the user clicks 'Ok' OR 'Cancel', it logs the user out. How can I prevent the <code>logout</code> view being called when the user clicks 'Cancel'?</p>
<p><em>views.py</em></p>
<pre><code>def logout(request):
    if "user_info" in request.session:
        del request.session["user_info"]
    # redirect to login so the user can log back in
    return redirect("login")
</code></pre>
<p><em>script.js</em></p>
<pre><code>function logout_popup() {
    if (confirm("Are you sure?")) {
        window.location.reload()
    }
}
</code></pre>
<p><em>base.html</em></p>
<pre><code><li onclick="logout_popup()" id="logout-tab"><a href="{% url 'logout' %}">Logout</a></li>
</code></pre>
|
<javascript><python><django><django-views><popup>
|
2023-01-08 14:38:15
| 1
| 639
|
logan_9997
|
75,048,611
| 10,480,181
|
How to prevent subprocess.run() output?
|
<p>I am using the subprocess module to create some directories. However, in some cases the same command might be creating directories in restricted locations. In such cases I get this output on the console: <code>mkdir: cannot create directory 'location/to/directory': Permission denied</code></p>
<p>How do I avoid this output on the console?</p>
<p>I have tried the following commands:</p>
<pre><code>subprocess.run(["mkdir", "-p", f"{outdir}/archive/backup_{curr_date}/"],check=True,stdout=subprocess.DEVNULL)
subprocess.run(["mkdir", "-p", f"{outdir}/archive/backup_{curr_date}/"],check=True,stdout=subprocess.PIPE)
subprocess.run(["mkdir", "-p", f"{outdir}/archive/backup_{curr_date}/"],check=True,capture_output=True)
</code></pre>
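<p>I suspect the message is written to stderr rather than stdout, so redirecting only stdout would not silence it. A self-contained reproduction of that idea (using a Python child process in place of <code>mkdir</code> so it runs anywhere):</p>

```python
import subprocess
import sys

# Simulate a child process that complains on stderr, like mkdir does.
# Redirecting both streams keeps the console quiet.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('Permission denied\\n')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(result.returncode)  # 0, and nothing from the child reaches the console
```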
|
<python><python-3.x><subprocess><stdout>
|
2023-01-08 14:37:03
| 1
| 883
|
Vandit Goel
|
75,048,481
| 3,247,006
|
How to use local, non-local and global variables in the same inner function without errors in Python?
|
<p>When trying to use <strong>the local and non-local variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        x = 10 # Local variable
        x += 1
        print(x)
        nonlocal x # Non-local variable
        x += 1
        print(x)
    inner()

outer()
</code></pre>
<p>Or, when trying to use <strong>the global and non-local variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        global x # Global variable
        x += 1
        print(x)
        nonlocal x # Non-local variable
        x += 1
        print(x)
    inner()

outer()
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: name 'x' is used prior to nonlocal declaration</p>
</blockquote>
<p>And, when trying to use <strong>the local and global variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        x = 10 # Local variable
        x += 1
        print(x)
        global x # Global variable
        x += 1
        print(x)
    inner()

outer()
</code></pre>
<p>Or, when trying to use <strong>the non-local and global variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        nonlocal x # Non-local variable
        x += 1
        print(x)
        global x # Global variable
        x += 1
        print(x)
    inner()

outer()
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: name 'x' is used prior to global declaration</p>
</blockquote>
<p>In addition, when trying to define <strong>the non-local and global variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        nonlocal x # Non-local variable
        global x # Global variable
    inner()

outer()
</code></pre>
<p>Or, when trying to define <strong>the global and non-local variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        global x # Global variable
        nonlocal x # Non-local variable
    inner()

outer()
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: name 'x' is nonlocal and global</p>
</blockquote>
<p>And, when trying to define <strong>the local and non-local variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        x = 10 # Local variable
        nonlocal x # Non-local variable
    inner()

outer()
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: name 'x' is assigned to before nonlocal declaration</p>
</blockquote>
<p>And, when trying to define <strong>the local and global variables <code>x</code></strong> in <code>inner()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def outer():
    x = 5
    def inner():
        x = 10 # Local variable
        global x # Global variable
    inner()

outer()
</code></pre>
<p>I got the error below:</p>
<blockquote>
<p>SyntaxError: name 'x' is assigned to before global declaration</p>
</blockquote>
<p>So, how can I use <strong>local, non-local and global variables</strong> in the same inner function without the errors above?</p>
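<p>The closest I've got is a sketch where each scope's value takes a different route, since one name can apparently only belong to one scope per function: the local lives under a different name, and the module-level value is reached through <code>globals()</code> instead of a <code>global</code> statement (I'm not sure this is the idiomatic way):</p>

```python
x = 0

def outer():
    x = 5
    def inner():
        nonlocal x             # rebinds outer()'s x
        x += 1
        globals()['x'] += 1    # updates the module-level x without a global statement
        local_x = 100          # an ordinary local, under a different name
        return local_x, x, globals()['x']
    return inner()

result = outer()
print(result)  # (100, 6, 1)
```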
|
<python><global-variables><local-variables><python-nonlocal><inner-function>
|
2023-01-08 14:22:11
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,048,422
| 11,133,602
|
cibuildwheel - CIBW_BEFORE_ALL not working as expected and cannot find installed module
|
<p>Here's the GitHub Actions job that is used to build wheels for a Python module with C++ code (bound using the <code>pybind11</code> module):</p>
<pre class="lang-yaml prettyprint-override"><code>
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
# if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: actions/checkout@v2
- name: Install build dependencies
run: |
python -m pip install pybind11 cibuildwheel
- name: Build wheels
# uses: pypa/cibuildwheel@v2.8.1
run: |
cibuildwheel
</code></pre>
<p>Related configuration in <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.cibuildwheel]
# before-build = "pip install pybind11"
before-all = "pip install pybind11" <------
test-requires = "pytest"
test-command = "pytest"
</code></pre>
<p>And it failed with the error <code>ModuleNotFoundError: No module named 'pybind11'</code>, even though pybind11 is set to be installed via the <code>CIBW_BEFORE_ALL</code> option. Can you help me figure out why?</p>
<p>I read the documentation on <a href="https://cibuildwheel.readthedocs.io/en/latest/options/#before-all" rel="nofollow noreferrer"><code>CIBW_BEFORE_ALL</code></a>, it says the option <code>Execute a shell command on the build system before any wheels are built</code>, so I supposed that it should do the job.</p>
<p>I have included links to the job run's output, the full workflow file, and my setup.py file for reference. I am also including commands I use to build and run locally.</p>
<p>Any help would be greatly appreciated.</p>
<p><a href="https://github.com/easy-graph/Easy-Graph/actions/runs/3867354850/jobs/6592072102" rel="nofollow noreferrer">Link to the job run's output</a><br />
<a href="https://github.com/easy-graph/Easy-Graph/blob/2d702844a4e86538cce5786288c7197e998e698a/.github/workflows/release-cibuildwheel.yaml#L51" rel="nofollow noreferrer">Link to the full workflow file</a><br />
<a href="https://github.com/easy-graph/Easy-Graph/blob/2d702844a4e86538cce5786288c7197e998e698a/setup.py#L8" rel="nofollow noreferrer">Link to <code>setup.py</code></a></p>
<p>Commands to build and run locally:</p>
<pre class="lang-bash prettyprint-override"><code> git clone https://github.com/easy-graph/Easy-Graph && cd Easy-Graph && git checkout pybind11
pip install pybind11
python3 setup.py install
</code></pre>
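<p>For completeness, I am also experimenting with declaring <code>pybind11</code> as a PEP 518 build requirement in <code>pyproject.toml</code>, so that it gets installed into each isolated build environment automatically, though I'm not sure yet whether that is the proper fix:</p>

```toml
[build-system]
requires = ["setuptools", "wheel", "pybind11"]
build-backend = "setuptools.build_meta"
```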
|
<python><github-actions><python-module>
|
2023-01-08 14:11:48
| 1
| 1,096
|
Teddy C
|
75,048,344
| 7,959,614
|
Get ranking within Pandas.DataFrame with ties as possibility
|
<p>I have the following example</p>
<pre><code>import pandas as pd
names = ['a', 'b', 'c', 'd', 'e']
points = [10, 15, 15, 12, 20]
scores = pd.DataFrame({'name': names,
'points': points})
</code></pre>
<p>I want to create a new column called <code>position</code> that specifies the relative position of a player. The player with the most points is #1.</p>
<p>I sort the <code>df</code> using</p>
<pre><code>scores = scores.sort_values(by='points', ascending=False)
</code></pre>
<p>If there is a tie (same number of points), I want <code>position</code> to be <code>T</code> followed by the corresponding position.
In my example the position of <code>b</code> and <code>c</code> is <code>T2</code>.</p>
<p>Desired output:</p>
<pre><code> name points position
e 20 1
b 15 T2
c 15 T2
d 12 3
a 10 4
</code></pre>
<p>Thank you</p>
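<p>To make the desired behaviour concrete, here is a sketch I've been playing with: a dense rank plus a <code>T</code> prefix wherever the point total is duplicated (not sure it's the cleanest way):</p>

```python
import pandas as pd

names = ['a', 'b', 'c', 'd', 'e']
points = [10, 15, 15, 12, 20]
scores = pd.DataFrame({'name': names, 'points': points})
scores = scores.sort_values(by='points', ascending=False)

# Dense rank so tied players share one number; 'T' prefix where points repeat
dense = scores['points'].rank(method='dense', ascending=False).astype(int)
tie_prefix = scores['points'].duplicated(keep=False).map({True: 'T', False: ''})
scores['position'] = tie_prefix + dense.astype(str)
print(scores)
```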
|
<python><pandas>
|
2023-01-08 14:01:01
| 1
| 406
|
HJA24
|
75,048,233
| 7,665,821
|
MSVC/VS is not detecting any Windows SDK related things - includes, libraries etc
|
<p>I installed Visual Studio Build Tools 2022 from the official website, checking the "C++ desktop development" box. It installed successfully. However, when I try to use it to compile some Python library, various errors thrown by the compiler show that MSVC is not aware of any Windows SDK related files on my system, although they are all there and I used the default location to install them. To troubleshoot, I uninstalled the Windows SDK from the VS installer and installed it separately using <a href="https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/" rel="nofollow noreferrer">https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/</a>. The same issue still persists. Most online solutions suggest a reinstall of the Windows SDK, but I have done it over 5 times, switching around different versions (Windows 10 and Windows 11, two each), and nothing changed. I have also checked and confirmed that the following registry values indeed point to my installation folder:</p>
<pre><code>HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Microsoft SDKs\Windows\v10.0
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows Kits\Installed Roots\
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Kits\Installed Roots\
HKLM\SOFTWARE\Microsoft\Windows Kits\Installed Roots
</code></pre>
<p>Some of the errors I met were header files cannot be found errors where the header files exist in the Windows SDK directory, e.g. <code>C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\ucrt</code> and <code>C:\Program Files (x86)\Windows Kits\10\Include\10.0.22621.0\shared</code>. Manually adding these folders in using setup.py works as a workaround, but definitely not a solution.</p>
<p>However, there is another issue where it cannot find certain <code>.lib</code> files too, and those files also belong in Windows SDK installation, for example, <code>kernel32.lib</code> which exists on my system in <code>C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22621.0\um\x64</code>. Sure, I can also edit this in setup.py, but if I have to do this for every single setup.py I get, it would be very tiring.</p>
<p>When I try to run <code>set include</code> in VS x64 native developer command prompt, it only shows my VS code related paths: <code>INCLUDE=E:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include;E:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include</code>. However from online sources, it should also include windows SDK related paths.</p>
<p>I do not have a "cpp properties" file to set like many online solutions suggest because I am not using VS/VS code. I am just doing a <code>python setup.py build</code>.</p>
<p>Windows SDK can be seen with the correct version in control panel.</p>
<p>The Python library I am trying to build is <a href="https://github.com/facebookresearch/xformers" rel="nofollow noreferrer">https://github.com/facebookresearch/xformers</a>, and it seems like I am the only one having this issue building, and it should not be any issue from the python library side.</p>
|
<python><windows><visual-c++>
|
2023-01-08 13:45:13
| 0
| 381
|
Billy Cao
|
75,047,803
| 12,871,587
|
Polars read_excel set option to include filtered rows
|
<p>I've noticed that Polars read_excel/Xlsx2csv doesn't include rows that are filtered in Excel file(s). However, there is an option in Polars read_excel to set xlsx2csv options (xlsx2csv_options: dict[str, Any]).</p>
<p>I've tried to check from the Xlsx2csv documentation whether there would be such an option, but couldn't find one. ChatGPT instructed me to try "--skip-filtered-rows", but I am not sure how I can pass that to the Polars read_excel xlsx2csv_options as a dictionary.</p>
<p>The workaround I've used is to read the Excel file with pandas first and then convert it to Polars, but I would prefer to do this with Polars directly.</p>
<pre><code>pl.from_pandas(pd.read_excel(file))
</code></pre>
<p>xlsx2csv version 0.8.0</p>
<p>polars version 0.15.13</p>
<p>EDIT: Found the same issue in github as well: <a href="https://github.com/dilshod/xlsx2csv/issues/246" rel="nofollow noreferrer">https://github.com/dilshod/xlsx2csv/issues/246</a></p>
<p>Looks like there was added a new setting to xlsx2csv "skip_hidden_rows" in the version 0.8.0. However, when I tried to use the below, I am still getting only the filtered rows from the excel files.</p>
<pre><code>pl.read_excel(file, xlsx2csv_options={"skip_hidden_rows": False})
</code></pre>
|
<python><python-polars>
|
2023-01-08 12:37:22
| 0
| 713
|
miroslaavi
|
75,047,681
| 5,362,515
|
Socket programming: How does client get access to its IP adress and port?
|
<p>Just started learning about socket programming using Python. To create a server, we do something like this:</p>
<pre><code>sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 10000)
sock.bind(server_address)
sock.listen(1)

while True:
    connection, client_address = sock.accept()
    ....
    ....
</code></pre>
<p>For client, we do something like this:</p>
<pre><code>#client.py
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 10000)
sock.connect(server_address)
....
....
</code></pre>
<p>I am not sure how to phrase my question, but I was wondering about the mechanism behind the client being assigned an IP address and port to connect to the server. At the server side, we explicitly <code>bind()</code>ed the socket to the <code>localhost</code> address and port <code>10000</code>. We don't do this for the client, and yet the client has its own IP address and port. How and at what level (OS?) does this happen? In fact, when we run <code>telnet localhost 10000</code>, we are able to talk to that server and the server knows about the IP address and port of the incoming connection. What is the mechanism behind it? How can I explore more about this stuff on my (Windows) machine?</p>
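<p>A self-contained experiment I ran to observe this: the OS implicitly binds the client to an ephemeral port during <code>connect()</code>, which can be seen with <code>getsockname()</code> on the client side and matches what the server's <code>accept()</code> reports:</p>

```python
import socket

# Throwaway server; port 0 asks the OS to pick any free port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # the OS implicitly binds the client here
conn, client_address = server.accept()

client_side = client.getsockname()     # the client's OS-assigned (IP, ephemeral port)
print(client_side, client_address)     # the server reports the same address

conn.close()
client.close()
server.close()
```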
|
<python><sockets><network-programming>
|
2023-01-08 12:18:42
| 0
| 327
|
mayankkaizen
|
75,047,527
| 9,328,846
|
How to add a new column to dataframe based on conditions on another column
|
<p>I have the following example dataframe:</p>
<pre><code>d = {'col1': [4, 2, 8, 4, 3, 7, 6, 9, 3, 5]}
df = pd.DataFrame(data=d)
df
col1
0 4
1 2
2 8
3 4
4 3
5 7
6 6
7 9
8 3
9 5
</code></pre>
<p>I need to add <code>col2</code> to this dataframe, and values of this new column will be set by comparing <code>col1</code> values (from different rows) as described below. Each row of <code>col2</code> will be set as following:</p>
<p><code>df.loc[0, "col2"]</code> will say how many of <code>df.loc[1, "col1"]</code>, <code>df.loc[2, "col1"]</code> and <code>df.loc[3, "col1"]</code> are bigger than <code>df.loc[0, "col1"]</code>.</p>
<p><code>df.loc[1, "col2"]</code> will say how many of <code>df.loc[2, "col1"]</code>, <code>df.loc[3, "col1"]</code> and <code>df.loc[4, "col1"]</code> are bigger than <code>df.loc[1, "col1"]</code>.</p>
<p><code>df.loc[2, "col2"]</code> will say how many of <code>df.loc[3, "col1"]</code>, <code>df.loc[4, "col1"]</code> and <code>df.loc[5, "col1"]</code> are bigger than <code>df.loc[2, "col1"]</code>.</p>
<p>And so on...</p>
<p>If there are not <code>3 rows</code> left after the <code>index N</code>, <code>col2</code> value will be set to <code>-1</code>.</p>
<p>The end result will look like the following:</p>
<pre><code> col1 col2
0 4 1
1 2 3
2 8 0
3 4 2
4 3 3
5 7 1
6 6 1
7 9 -1
8 3 -1
9 5 -1
</code></pre>
<p>I need a function that will take a dataframe as input and will return the dataframe by adding the new column as described above.</p>
<p>In the example above, next 3 rows are considered. But this needs to be configurable and should be an input to the function that will do the work.</p>
<p>Speed is important here so it is not desired to use for loops.</p>
<p>How can this be done in the most efficient way in Python?</p>
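<p>For reference, a shift-based sketch I tried: it compares each row against the next <code>n</code> rows via <code>n</code> shifted, vectorised comparisons instead of a Python loop over rows (NaN comparisons at the tail are False, then the last <code>n</code> rows are overwritten with -1). I'm not sure it scales well for very large <code>n</code>:</p>

```python
import pandas as pd

def count_next_greater(df, n=3):
    # For each row, count how many of the next n values of col1 are bigger
    s = df['col1']
    df = df.copy()
    df['col2'] = sum((s.shift(-k) > s).astype(int) for k in range(1, n + 1))
    df.loc[df.index[-n:], 'col2'] = -1   # fewer than n rows remain
    return df

df = pd.DataFrame({'col1': [4, 2, 8, 4, 3, 7, 6, 9, 3, 5]})
out = count_next_greater(df)
print(out['col2'].tolist())  # [1, 3, 0, 2, 3, 1, 1, -1, -1, -1]
```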
|
<python><python-3.x><pandas><dataframe><rolling-computation>
|
2023-01-08 11:54:06
| 1
| 2,201
|
edn
|
75,047,412
| 7,093,241
|
Is there a way to use filter() to return a modified value with a lambda?
|
<p>I was trying to check for palindromes and wanted to eliminate non-alphanumeric characters. I can use filter for this like so:</p>
<pre><code>filteredChars = filter(lambda ch: ch.isalnum(), s)
</code></pre>
<p>However, I also need to compare in the same case, so I would really like to get <code>ch.lower()</code> out of it. So I tried this:</p>
<pre><code>filteredChars = filter(lambda ch.lower() : ch.isalnum(), s)
</code></pre>
<p>but I got an error.</p>
<p>Is it possible to write a lambda to do this without a list comprehension or a user defined function?</p>
<p>I can already get my answer with:</p>
<pre><code>filteredChars = [ch.lower() for ch in s if ch.isalnum()]
</code></pre>
<p>However, this <a href="https://www.digitalocean.com/community/tutorials/how-to-use-the-python-filter-function" rel="nofollow noreferrer">DigitalOcean filter() tutorial</a> says that list comprehensions use up more space</p>
<blockquote>
<p>... a list comprehension will make a new list, which will increase the run time for that processing. This means that after our list comprehension has completed its expression, we’ll have two lists in memory. However, filter() will make a simple object that holds a reference to the original list, the provided function, and an index of where to go in the original list, which will take up less memory</p>
</blockquote>
<p><strong>Does filter only hold references to the filtered values in the original sequence?</strong> When I think of this though, I conclude (maybe not correctly) that if I have to lower the cases, then I would actually need a new list with the modified characters hence, filter can't be used for this task at all.</p>
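<p>For what it's worth, the closest I've come without a list comprehension is chaining <code>map</code> over the <code>filter</code>, since the lambda's parameter list can't transform its argument:</p>

```python
s = "A man, a plan, a canal: Panama"

# filter keeps alphanumerics lazily; map lowercases them lazily on top,
# so no intermediate list exists until the iterator is consumed
chars = list(map(str.lower, filter(str.isalnum, s)))
print(chars == chars[::-1])  # True
```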
|
<python>
|
2023-01-08 11:34:18
| 3
| 1,794
|
heretoinfinity
|
75,047,366
| 3,179,765
|
How to deal with non-English characters and spaces in filename in python?
|
<p>I want to download some files from Google Drive with a Python script. The files contain spaces and non-English characters, and I want to save them with exactly the same names. When I try to create a similar file with spaces and special characters, Python always gives me
<code>OSError: [Errno 22] Invalid argument:</code>:</p>
<pre><code>from urllib.request import urlopen
from urllib import request
from urllib.parse import unquote
import os

base_dir = os.getcwd()
print(base_dir)

url = 'https://drive.google.com/u/0/uc?id=1KD3dIMaJU6SY8NX38jOW3DWQYjPGN-9A&export=download'
resource = request.urlopen(request.Request(url))
content = resource.getheader('Content-Disposition')
filename = content.split('*=')[1]
filename = unquote(filename.split("''")[1])
filepath = os.path.join(base_dir, filename)

with urlopen(url) as file:
    file_content = file.read()

# Save to file
with open(filepath, "w") as f:
    f.write(file_content)
</code></pre>
<p>This is the full message:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\user\Downloads\Telegram Desktop\ChatExport_2023-01-07\gdrive_links.py", line 26, in <module>
with open(filepath, "w") as f:
OSError: [Errno 22] Invalid argument:
'C:\\Users\\user\\Downloads\\Telegram Desktop\\ChatExport_2023-01-07\\DilKoçu1 (BEN KİMİM? SİSTEMİMİZ NASIL İŞLİYOR?).mp4'
</code></pre>
<p>I am using python 3.10.4 on Windows 11</p>
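<p>Note that the filename in the traceback contains <code>?</code>, which Windows forbids in file names. A minimal sketch (my own, not tested against Google Drive itself) that replaces only the forbidden characters while keeping spaces and non-English letters, which are perfectly legal:</p>

```python
import re

# Windows forbids these characters in file names: \ / : * ? " < > |
INVALID_CHARS = re.compile(r'[\\/:*?"<>|]')

def safe_filename(name):
    # Replace forbidden characters with "_" and trim trailing
    # spaces/dots, which Windows also rejects at the end of a name.
    return INVALID_CHARS.sub("_", name).rstrip(" .")

name = "DilKoçu1 (BEN KİMİM? SİSTEMİMİZ NASIL İŞLİYOR?).mp4"
cleaned = safe_filename(name)
```

(Separately, since <code>urlopen().read()</code> returns bytes, the file should be opened with <code>"wb"</code> rather than <code>"w"</code>.)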
|
<python>
|
2023-01-08 11:26:26
| 0
| 705
|
Mubin Icyer
|
75,047,291
| 3,247,006
|
How can the most inner function access the non-local variable in the most outer function in Python?
|
<p><code>inner()</code> can access <strong>the non-local variable <code>x</code></strong> in <code>middle()</code> with <code>nonlocal x</code>:</p>
<pre class="lang-py prettyprint-override"><code>def outer():
x = 0
def middle():
x = 5 # <- Here
def inner():
nonlocal x # Here
x += 1
print(x) # 6
inner()
middle()
outer()
</code></pre>
<p>Now, how can <code>inner()</code> access <strong>the non-local variable <code>x</code></strong> in <code>outer()</code>?</p>
<pre class="lang-py prettyprint-override"><code>def outer():
x = 0 # <- Here
def middle():
x = 5
def inner():
x += 1
print(x) # How to print 1?
inner()
middle()
outer()
</code></pre>
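<p>One way (a sketch; note that it changes <code>middle()</code>) is to have <code>middle()</code> also declare <code>nonlocal x</code>, so the lookup chain from <code>inner()</code> passes through to <code>outer()</code>'s <code>x</code>. The trade-off is that <code>middle()</code> can then no longer keep its own separate <code>x = 5</code>; keeping both values would require renaming one of them or using a mutable container:</p>

```python
def outer():
    x = 0
    def middle():
        nonlocal x      # middle now refers to outer's x
        def inner():
            nonlocal x  # resolves through middle to outer's x
            x += 1
        inner()
    middle()
    return x

result = outer()
```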
|
<python><variables><nested-function><python-nonlocal><inner-function>
|
2023-01-08 11:11:08
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,047,238
| 18,972,785
|
Why backslash ( \ ) causes ValuesError when using CDLIB?
|
<p>I am working on community detection algorithms on graphs. I have an edge list whose nodes are strings such as names, addresses, IDs, and so on.
Something interesting happens with the edge list: when there is a backslash ( \ ) in the edge list, such as <code>\template1\ </code>, the program raises this error: <code>ValueError: invalid literal for int() with base 10:</code>, but when I remove the backslashes and <code>\template1\ </code> becomes <code>template1 </code>, everything works well. Is there any way to handle and solve this problem? I don't want to change the actual words in the edge list and I don't want to remove the backslashes.</p>
<p>I use this code to build a graph and detect communities:</p>
<pre><code>from cdlib import algorithms
import networkx as nx
Txtfile = open("ProgramFolders/result/edgelist.txt", encoding='utf-8')
G = nx.read_weighted_edgelist(Txtfile)
Txtfile.close()
coms = algorithms.louvain(G)
</code></pre>
<p>and this is a small part of my edge list:</p>
<pre><code>\template1\ \192.168.100.30
\template1\ 200.02483421
\template1\ \192.168.100.70
\template1\ //192.167.100.30:70/service
\template1\ \192.168
\template1\ //192.168.44
\template1\ 240.54625187
\template1\ //192.167.100/service
...
</code></pre>
<p><strong>Update</strong></p>
<p>this is the traceback log:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\vaio\PycharmProjects\community.py", line 315, in <module>
coms = algorithms.louvain(G, weight='weight')
File "C:\Users\vaio\AppData\Local\Programs\Python\Python39\lib\site-packages\cdlib-0.2.4-py3.9.egg\cdlib\algorithms\crisp_partition.py", line 518, in louvain
File "C:\Users\vaio\AppData\Local\Programs\Python\Python39\lib\site-packages\cdlib-0.2.4-py3.9.egg\cdlib\classes\node_clustering.py", line 31, in __init__
File "C:\Users\vaio\AppData\Local\Programs\Python\Python39\lib\site-packages\cdlib-0.2.4-py3.9.egg\cdlib\classes\clustering.py", line 42, in __init__
File "C:\Users\vaio\AppData\Local\Programs\Python\Python39\lib\site-packages\cdlib-0.2.4-py3.9.egg\cdlib\classes\clustering.py", line 21, in __convert_back_to_original_nodes_names_if_needed
File "C:\Users\vaio\AppData\Local\Programs\Python\Python39\lib\site-packages\cdlib-0.2.4-py3.9.egg\cdlib\classes\clustering.py", line 21, in <listcomp>
ValueError: invalid literal for int() with base 10: 'template1\\'
</code></pre>
<p>I would really appreciate an answer that solves this problem</p>
|
<python><graph>
|
2023-01-08 11:02:16
| 0
| 505
|
Orca
|
75,046,988
| 3,131,604
|
Building tensorflow-text=2.11.0 from source fails with "rev: command not found" on Windows
|
<p>As you may know, after version 2.10, tensorflow-text is no longer provided as a pip package on Windows.</p>
<p>I tried to follow the Build from source procedure at <a href="https://github.com/tensorflow/text#build-from-source-steps" rel="nofollow noreferrer">https://github.com/tensorflow/text#build-from-source-steps</a>, but it fails at step 3 :</p>
<pre><code>sh ./oss_scripts/run_build.sh
</code></pre>
<p>I got:</p>
<pre><code>...
+++ python -c 'import tensorflow as tf; print('\'' '\''.join(tf.sysconfig.get_link_flags()))'
+++ awk '{print $2}'
+++ python -c 'import tensorflow as tf; print(tf.sysconfig.CXX11_ABI_FLAG)'
++ TF_ABIFLAG=0
++ HEADER_DIR='C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\include'
++ SHARED_LIBRARY_DIR='C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow'
+++ echo -l:libtensorflow_framework.so.2
+++ rev oss_scripts/configure.sh: line 88: rev: command not found
+++ cut -d: -f1
+++ rev oss_scripts/configure.sh: line 88: rev: command not found
++ SHARED_LIBRARY_NAME=
</code></pre>
<p>My config:</p>
<ul>
<li>Windows 10</li>
<li>Python 3.10</li>
<li>Tensorflow 2.11.0 (pip install)</li>
</ul>
|
<python><tensorflow2.0>
|
2023-01-08 10:21:12
| 1
| 7,433
|
u2gilles
|
75,046,899
| 19,838,445
|
Does function know about the class before binding
|
<p>Is there a way to access a class (in which a function is defined as a method) before there is an instance of that class?</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def method(self):
print("Calling me")
m1 = MyClass.method
instance = MyClass()
m2 = instance.method
print(m2.__self__.__class__) # <class 'MyClass'>
# how to access `MyClass` from `m1`?
</code></pre>
<p>For example I have <code>m1</code> variable somewhere in my code and want to have a reference to <code>MyClass</code> the same way I can access it from bound method <code>m2.__self__.__class__</code>.</p>
<pre class="lang-py prettyprint-override"><code>print(m1.__qualname__) # 'MyClass.method'
</code></pre>
<p>The only option I was able to find is <code>__qualname__</code> which is a string containing name of the class.</p>
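<p>A best-effort sketch building on that <code>__qualname__</code> string (it breaks for classes defined inside functions, a known limitation of qualified-name lookups): resolve the class by walking the dotted name inside the namespace the function was defined in:</p>

```python
class MyClass:
    def method(self):
        return "Calling me"

m1 = MyClass.method  # in Python 3 this is a plain function, no __self__

def owning_class(func):
    # Walk the dotted __qualname__ ("MyClass.method" -> MyClass),
    # starting from the globals the function was defined in.
    parts = func.__qualname__.split(".")[:-1]
    obj = func.__globals__[parts[0]]
    for part in parts[1:]:
        obj = getattr(obj, part)
    return obj

cls = owning_class(m1)
```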
|
<python><methods><metaprogramming><python-descriptors>
|
2023-01-08 10:02:15
| 2
| 720
|
GopherM
|
75,046,837
| 3,482,266
|
How can I preserve search efficiency, and reduce RAM usage of a dictionary container?
|
<p>I'm keeping a record of what I'm calling <code>profile</code>s.</p>
<p>Each <code>profile</code> is a tuple of dictionaries: <code>(Dict,Dict)</code>, and we can associate to it a unique <code>id</code>(may include characters like <code>_</code> or <code>-</code>).</p>
<p>I need to keep them in RAM for a certain time, because I'll need to search and update some of them; only at the end, when I no longer need the whole set of <code>profile</code>s, will I flush them to persistent storage.</p>
<p>Currently, I'm using a dictionary/hash table to keep all of them (number of elements around 100K, but could be more), since I'll do many searches, using the <code>id:profile</code> as the <code>key:value</code> pair.</p>
<p>The data looks similar to this:</p>
<pre><code>{
"Aw23_1adGF":({
"data":
{"contacts":[11,22],"address":"usa"},
"meta_data":
{"created_by":"John"}
},{"key_3":"yes"}),
"AK23_1adGF":({
"data":
{"contacts":[33,44],"address":"mexico"},
"meta_data":
{"created_by":"Juan"}
},{"key_3":"no"}),
# ...
}
</code></pre>
<p>Once this data structure is built, I don't need to add/delete any more elements. I only build it once, than search it many times. On certain occasions I'll need to update some element in the dictionaries that compose a profile. However, building this data object contributes to the peak RAM usage that I'm trying to diminish.</p>
<p>The problem is that the dictionary seems to use too much RAM.</p>
<p>What were my other options regarding data structures that could keep some of the search efficiency and with a small RAM footprint?</p>
<p>I thought of an ordered list, since the <code>id</code> seems to me to be orderable (maybe except for characters like <code>_</code> or <code>-</code>).</p>
<p>What data structures are there that could help me in this predicament?</p>
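<p>One classic alternative (a sketch, assuming the set really is built once and then only searched) is exactly the sorted-list idea from the question: keep the ids in a sorted list and binary-search them with <code>bisect</code>, trading the dict's hash-table overhead for O(log n) lookups:</p>

```python
import bisect

# Two parallel lists, sorted by id: typically a smaller footprint than
# a dict (actual savings depend on the Python build and data shape).
raw = {
    "Aw23_1adGF": ({"data": {"address": "usa"}}, {"key_3": "yes"}),
    "AK23_1adGF": ({"data": {"address": "mexico"}}, {"key_3": "no"}),
}
ids = sorted(raw)
values = [raw[i] for i in ids]

def lookup(key):
    # O(log n) binary search in the sorted id list.
    pos = bisect.bisect_left(ids, key)
    if pos < len(ids) and ids[pos] == key:
        return values[pos]
    return None

hit = lookup("Aw23_1adGF")
miss = lookup("missing")
```

Characters like <code>_</code> or <code>-</code> are no obstacle: the ids only need a consistent ordering, and plain string comparison provides one.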
|
<python>
|
2023-01-08 09:51:04
| 0
| 1,608
|
An old man in the sea.
|
75,046,764
| 12,167,384
|
max() with key argument python
|
<p>I know a bit about how <code>key</code> argument is used in python <code>max()</code>. <code>max(("pyth", "lua", "ruby"), key=len)</code> will return <code>pyth</code> and <code>ruby</code>. However, <code>max((31, 13, 11), key=lambda x: sum(int(i) for i in str(x)))</code> will only give me <code>31</code> (<code>13</code> should be returned as well), or <code>max((13, 31, 11), key=lambda x: sum(int(i) for i in str(x)))</code> will only give me <code>13</code> (<code>31</code> should be returned as well). Can someone explain this? Many thanks.</p>
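<p>For the record, <code>max()</code> always returns only the <em>first</em> maximal element, never several. A small sketch that collects every tied winner instead, by computing the key's maximum once and then filtering:</p>

```python
def all_max(iterable, key=lambda v: v):
    # Materialize once so we can iterate twice, then keep every
    # element whose key equals the maximum key.
    items = list(iterable)
    best = max(map(key, items))
    return [item for item in items if key(item) == best]

digit_sum = lambda x: sum(int(d) for d in str(x))

words = all_max(("pyth", "lua", "ruby"), key=len)
numbers = all_max((31, 13, 11), key=digit_sum)
```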
|
<python><max>
|
2023-01-08 09:40:35
| 3
| 336
|
uoay
|
75,046,746
| 1,181,482
|
python: how to find name of currently evaluating argument during function call
|
<p>Given the code example below: how can I get the name of the argument corresponding to a particular evaluation of the function get_arg_val?</p>
<pre><code>def do_sum(a, b, c):
return a + b + c
def get_arg_val():
print("call f") # what argument this call corresponds to: a, b or c ???
return 1
x = do_sum(get_arg_val(), get_arg_val(), get_arg_val())
print(x)
</code></pre>
|
<python><python-3.x><parameter-passing><introspection><function-call>
|
2023-01-08 09:37:29
| 0
| 2,612
|
lowtech
|
75,046,723
| 5,869,076
|
Execute statement only once in Python
|
<p>I'm running a periodic task on Celery that executes the same code once every 3 minutes. If a condition is True, an action is performed (a message is sent), but I need that message to be sent only once.</p>
<p>Ideally, once that message is sent, it could not be sent again for the next 24 hours (even though the function will keep being executed every 3 minutes); after those 24 hours, the condition will be checked again and the message sent again if it is still True. How can I accomplish that with Python? I put some code here:</p>
<pre><code>if object.shussui:
client.conversation_start({
'channelId': 'x',
'to': 'user',
'type': 'text',
'content': {
'text': 'body message')
}
})
</code></pre>
<p>Here the condition is being checked, and if <code>shussui</code> is <code>True</code>, <code>client.conversation_start</code> sends a message.</p>
|
<python><python-3.x><celery>
|
2023-01-08 09:33:31
| 2
| 611
|
Jim
|
75,046,635
| 8,844,732
|
Subplot multiple combination of categories in python
|
<p>I have the following example dataset, where 'Type1' has 2 categories, 'A' and 'B', and 'Type2' has 3 categories, 'Red', 'Blue', and 'Green'. I would like to plot 3 subplots, where subplot 1 has 'Type1', subplot 2 has 'Type2', and subplot 3 has the combination of 'Type1' and 'Type2'. The x axis is 'Time' and the y axis is 'Amount'. The combination subplot should be below the first 2 subplots, and each subplot should be a line plot.</p>
<pre><code>Time, Type1, Type2, Amount
0, A, Red, 1000
1, A, Red, 1002
2, A, Red, 1003
0, B, Blue, 50
1, B, Blue, 54
2, B, Blue, 60
0, A, Green, 89
1, A, Green, 90
2, A, Green, 100
</code></pre>
<p>I did not find anything useful in this regard. Any help is appreciated.
Note: A resulting subplot should look like the answer in the <a href="https://stackoverflow.com/questions/37360568/python-organisation-of-3-subplots-with-matplotlib">subplot 3 graphs matplotlib</a></p>
|
<python><pandas><matplotlib>
|
2023-01-08 09:16:52
| 1
| 423
|
non_linear
|
75,046,568
| 20,800,676
|
Why does pylint raise this error: Parsing failed: 'cannot assign to expression here. Maybe you meant '==' instead of '='?
|
<p>While solving some exercises from the "Impractical Python Projects" book I've encountered this error:</p>
<pre><code>myconfig.pylintrc:6:1: E0001: Parsing failed: 'cannot assign to expression here. Maybe you meant '==' instead of '='? (<unknown>, line 6)' (syntax-error)
</code></pre>
<p>The error occurs while analyzing my code with pylint but doesn't occur while running it in the console.
No errors occur at runtime.</p>
<p>This is my solution:</p>
<pre><code>"""Translate words from english to Pig Latin."""
english_vowels = ['a', 'e', 'i', 'o', 'u', 'y']
def main():
"""Take the words as input and return their Pig Latin equivalents."""
print('Welcome to the Pig Latin translator.\n')
while True:
user_word = input('Please enter your own word to get the equivalent in Pig Latin:\n')
if user_word[0].lower() in english_vowels:
print(user_word + 'way')
else:
print((user_word[1:] + user_word[0].lower()) + 'ay')
try_again = input('Try again? Press enter or else press n to quit:\n')
if try_again.lower() == 'n':
break
if __name__ == '__main__':
main()
</code></pre>
<p>myconfig.pylintrc is the same as standard.</p>
<p>I've tried moving the variable assignments in and out of the main function or the while loop and I still get this error.</p>
|
<python><python-3.x><pylint><pylintrc>
|
2023-01-08 09:03:20
| 1
| 541
|
Sigmatest
|
75,046,502
| 5,130,783
|
Python floating-point equations
|
<p>I want to know how to evaluate floating-point equations correctly in Python 3.</p>
<p>I am trying to solve a linear equation problem and have variables and statements below.</p>
<pre class="lang-py prettyprint-override"><code>slope = float(70 / 23)
c = 9
(slope * -161) + c == -481 # False
print((slope * -161) + c) # -481.00000000000006
</code></pre>
<p>If you manually evaluate <code>(slope * -161) + c</code>, you will get <code>-481</code>. However, python evaluates it as <code>-481.00000000000006</code> when I use <code>float</code>. How do I resolve this issue?</p>
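<p>For context, <code>70 / 23</code> has no exact binary floating-point representation, so a tiny rounding error survives the arithmetic. The usual fixes are to compare within a tolerance (<code>math.isclose</code>) or to do the arithmetic exactly with <code>fractions.Fraction</code>. A short sketch:</p>

```python
import math
from fractions import Fraction

slope = 70 / 23           # inexact in binary floating point
c = 9
value = slope * -161 + c  # -481.00000000000006, not exactly -481

# Compare within a tolerance instead of with ==:
close = math.isclose(value, -481)

# Or keep the arithmetic exact with rationals:
exact = Fraction(70, 23) * -161 + c
```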
|
<python><floating-point>
|
2023-01-08 08:49:53
| 0
| 1,679
|
Changnam Hong
|
75,046,444
| 4,053,840
|
Run python tests from different files in parallel?
|
<p>I am working on integration tests, written in python and run with pytest.
They are defined/written in different python files, but all these tests are executed sequentially.
Part of the tests include testing of starting and stopping of machines, and this takes a while.
That is why I want to run the start/stop tests and the other tests in parallel.
I found libraries like "pytest-parallel" and "pytest-xdist", but they provide means to run several tests simultaneously, which does not work for me, because in this case the tests for starting and stopping a machine run simultaneously and fail.
So is there a way to make tests from different files run simultaneously, but tests within a single file run sequentially?</p>
|
<python><parallel-processing><pytest>
|
2023-01-08 08:36:59
| 1
| 483
|
Ivajlo Iliev
|
75,046,434
| 11,995,121
|
wxpython RuntimeError: super-class __init__() of type MainMenuBar was never called when instantiating class
|
<p>I am trying to import the MainMenuBar class from filemenu.py into my main class and instantiate it in my main class's <code>__init__</code> method in wxPython, but I am getting this error:</p>
<pre><code>RuntimeError: super-class __init__() of type MainMenuBar was never called
</code></pre>
<p>Here is my main class</p>
<pre><code>import os
import wx
from wx.lib.agw.scrolledthumbnail import ScrolledThumbnail, Thumb, PILImageHandler
import filemenu
class MainFrameImageThumbnail(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1, "Mockup", size=(1200,800))
self.scroll = ScrolledThumbnail(self, 10, size=(200,1200))
self.panel = wx.Panel(self)
self.scroll.SetBackgroundColour((5,5,5))
font = wx.Font( wx.FontInfo(10).Bold())
self.scroll.SetCaptionFont(font)
self.scroll.SetThumbOutline(4)
self.scroll.SetDropShadow(False)
self.scroll.SetSelectionColour(wx.Colour(120,80,255))
sizer = wx.BoxSizer(wx.VERTICAL)
self.menuCreation()
def menuCreation(self):
mainMenu = filemenu.MainMenuBar()
app = wx.App(False)
frame = MainFrameImageThumbnail()
frame.Show(True)
app.MainLoop()
</code></pre>
<p>Here is my filemenu.py</p>
<pre><code>import os
import wx
APP_EXIT = 1
class MainMenuBar(wx.Panel):
def __init__(self):
self.FileTab()
def OnQuit(self, e):
self.Close()
def FileTab(self):
menubar = wx.MenuBar()
fileMenu = wx.Menu()
qmi = wx.MenuItem(fileMenu, APP_EXIT, '&Quit\tCtrl+Q')
qmi.SetBitmap(wx.Bitmap('power-on.png'))
fileMenu.Append(qmi)
self.Bind(wx.EVT_MENU, self.OnQuit, id=APP_EXIT)
menubar.Append(fileMenu, '&File')
self.SetMenuBar(menubar)
</code></pre>
<p>Am I calling it wrong? Is there a specific way to do this in WxPython?</p>
<p>EDIT:</p>
<p>Following an example, I attempted this, which looks more correct, but I am getting another error:</p>
<p>My main class:</p>
<pre><code>import os
import wx
from wx.lib.agw.scrolledthumbnail import ScrolledThumbnail, Thumb, PILImageHandler
import filemenu
class MainFrameImageThumbnail(wx.Frame):
def __init__(self):
super().__init__(parent=None,title="Mockup", size=(1200,800))
self.scroll = ScrolledThumbnail(self, 10, size=(200,1200))
self.panel = wx.Panel(self)
self.scroll.SetBackgroundColour((5,5,5))
font = wx.Font( wx.FontInfo(10).Bold())
self.scroll.SetCaptionFont(font)
self.scroll.SetThumbOutline(4)
self.scroll.SetDropShadow(False)
self.scroll.SetSelectionColour(wx.Colour(120,80,255))
sizer = wx.BoxSizer(wx.VERTICAL)
self.menu = filemenu.MainMenuBar(self)
self.menu.FileTab()
app = wx.App(False)
frame = MainFrameImageThumbnail()
frame.Show(True)
app.MainLoop()
</code></pre>
<p>my filemenu.py</p>
<pre><code>import os
import wx
APP_EXIT = 1
class MainMenuBar(wx.Panel):
def __init__(self,parent):
super().__init__(parent)
def OnQuit(self, e):
self.Close()
def FileTab(self):
menubar = wx.MenuBar()
fileMenu = wx.Menu()
qmi = wx.MenuItem(fileMenu, APP_EXIT, '&Quit\tCtrl+Q')
qmi.SetBitmap(wx.Bitmap('power-on.png'))
fileMenu.Append(qmi)
self.Bind(wx.EVT_MENU, self.OnQuit, id=APP_EXIT)
menubar.Append(fileMenu, '&File')
self.SetMenuBar(menubar)
</code></pre>
<p>Now I get this error:</p>
<pre><code>AttributeError: 'MainMenuBar' object has no attribute 'SetMenuBar'
</code></pre>
|
<python><class><wxpython><instantiation><init>
|
2023-01-08 08:35:05
| 2
| 454
|
HotWheels
|
75,046,293
| 13,789,467
|
Cannot run selenium test
|
<p>I am new to automation testing. I recently downloaded Selenium 4 and am trying to automate login to a sample site. Please help me with this; I've been struggling all day and I can't even understand the error messages in the console.<a href="https://i.sstatic.net/lBD2S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lBD2S.png" alt="enter image description here" /></a></p>
<p>Here is the code that I am trying to work with:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome("/Users/prashantmishra/Documents/chromedriver")
driver.get("https://opensource-demo.orangehrmlive.com/")
driver.find_element(By.NAME, "username").send_keys("Admin")
driver.find_element(By.NAME, "password").send_keys("admin123")
driver.find_element(By.CLASS_NAME, "oxd-button oxd-button--medium oxd-button--main orangehrm-login-button").click()
captured_page_title = driver.title
expected_age_title = "OrangeHRM"
if captured_page_title == expected_age_title:
print("Test PASSED!")
else:
print("Test FAILED!")
driver.close()
</code></pre>
|
<python><selenium><selenium-webdriver><selenium-chromedriver><webdriverwait>
|
2023-01-08 08:04:21
| 2
| 319
|
Prashant Mishra
|
75,046,236
| 194,888
|
Why is this map_elements() custom function slower in Polars than apply() in Pandas
|
<p>I ran the following in a Jupyter Notebook and was disappointed that similar Pandas code is faster. Hoping someone can show a smarter approach in Polars.</p>
<p>POLARS VERSION</p>
<pre class="lang-py prettyprint-override"><code>def cleanse_text(sentence):
RIGHT_QUOTE = r"(\u2019)"
sentence = re.sub(RIGHT_QUOTE, "'", sentence)
sentence = re.sub(r" +", " ", sentence)
return sentence.strip()
df = df.with_columns(pl.col("text").map_elements(lambda x: cleanse_text(x)).name.keep())
</code></pre>
<p>PANDAS VERSION</p>
<pre class="lang-py prettyprint-override"><code>def cleanse_text(sentence):
RIGHT_QUOTE = r"(\u2019)"
sentence = re.sub(RIGHT_QUOTE, "'", sentence)
sentence = re.sub(r" +", " ", sentence)
return sentence.strip()
df["text"] = df["text"].apply(lambda x: cleanse_text(x))
</code></pre>
<p>The above Pandas version was 10% faster than the Polars version when I ran this on a dataframe with 750,000 rows of text.</p>
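<p>Part of the gap is that <code>map_elements</code> and <code>apply</code> both fall back to calling a Python function per row, so Polars loses its usual advantage. The big win in Polars would be staying in native expressions (e.g. <code>pl.col("text").str.replace_all(...)</code>, depending on your Polars version), which runs in Rust and which I have not benchmarked here. But even within the per-row approach, a sketch of two cheap wins: precompile the patterns once, and pass the function directly instead of wrapping it in a lambda:</p>

```python
import re

# Compiled once at import time instead of re-parsed on every call.
RIGHT_QUOTE = re.compile("\u2019")
MULTI_SPACE = re.compile(r" +")

def cleanse_text(sentence):
    sentence = RIGHT_QUOTE.sub("'", sentence)
    sentence = MULTI_SPACE.sub(" ", sentence)
    return sentence.strip()

# e.g. df["text"].apply(cleanse_text) -- no lambda wrapper needed
cleaned = [cleanse_text(s) for s in ["it\u2019s   fine ", "  no  change"]]
```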
|
<python><dataframe><python-polars>
|
2023-01-08 07:51:07
| 1
| 616
|
Biosopher
|
75,046,225
| 6,357,916
|
Why does python code with list comprehension take less time than the one without?
|
<p>I wrote the following code:</p>
<pre><code>def meth3(history_list, fromDate, toDate):
active = {}
chart_data = [0] * (daysDiff + 1)
activityData = [(_activityData.user_id, timezone.localtime(_activityData.last_save_time).date())
for item in history_list for _activityData in item.user.activeData
]
for iActivityData in activityData:
if ((iActivityData[0], iActivityData[1]) not in active
and fromDate < iActivityData[1] and iActivityData[1] < toDate):
chart_data[(iActivityData[1]-fromDate).days] += 1
active[(iActivityData[0], iActivityData[1])] = None
return chart_data
</code></pre>
<p>This function runs in max 1.8s consistently. Note that it iterates through the elements twice:</p>
<ol>
<li>in list comprehension to create <code>activityData</code> array</li>
<li>in <code>for</code> loop</li>
</ol>
<p>So to make it faster, I did away with explicitly creating the <code>activityData</code> array, thus requiring only a single iteration:</p>
<pre><code>def meth4(history_list, fromDate, toDate):
active = {}
chart_data = [0] * (daysDiff + 1)
for history_i in history_list:
for activityData_i in history_i.user.activeData:
userId = activityData_i.user_id
recordLastsaveTime = timezone.localtime(activityData_i.last_save_time).date()
if ((userId, recordLastsaveTime) not in active
and fromDate < recordLastsaveTime and recordLastsaveTime < toDate):
chart_data[(recordLastsaveTime-fromDate).days] += 1
active[(userId, recordLastsaveTime)] = None
return chart_data
</code></pre>
<p>But this function takes 2.4+ seconds consistently. Why does this version take more time than the earlier one, despite requiring only a single iteration?</p>
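<p>A plausible reason (an assumption; your numbers depend on the data and machine) is that comprehension bodies run specialized bytecode and avoid repeated name and attribute lookups, which explicit loops pay on every iteration. An isolated micro-benchmark sketch, with a hypothetical workload rather than the question's Django objects:</p>

```python
import timeit

data = list(range(10_000))

def with_loop():
    out = []
    append = out.append  # hoisting the method lookup out of the loop helps
    for x in data:
        append(x * 2)
    return out

def with_comprehension():
    # Comprehensions use specialized bytecode (LIST_APPEND) and
    # never look up .append at all.
    return [x * 2 for x in data]

same = with_loop() == with_comprehension()
t_loop = timeit.timeit(with_loop, number=50)
t_comp = timeit.timeit(with_comprehension, number=50)
```

The same hoisting idea applies to the second version of the function: looking up <code>timezone.localtime</code> and the attribute chains once, outside the loop, should close part of the gap.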
|
<python><python-3.x><list-comprehension>
|
2023-01-08 07:48:58
| 0
| 3,029
|
MsA
|
75,046,073
| 17,275,588
|
Python + Open AI/GPT3 question: Why is part of my prompt spilling into the responses I receive?
|
<p>This happens to probably 10% of the responses I get. For whatever reason, the last bits of my prompt somehow spill into the start of the response: a period, a question mark, or sometimes the last few letters of the prompt get removed from the prompt and somehow find their way into BOTH the response that gets printed inside the Visual Studio Code terminal AND the version that gets written to a corresponding Excel spreadsheet.</p>
<p>Any reason why this might happen?</p>
<p>Some example responses:</p>
<blockquote>
<p>.</p>
<p>Most apples are colored red.</p>
</blockquote>
<p>Also</p>
<blockquote>
<p>?</p>
<p>Most rocks are colored gray.</p>
</blockquote>
<p>Another example:</p>
<blockquote>
<p>for it.</p>
<p>Most oceans are colored blue.</p>
</blockquote>
<p>The period, the question mark, " for it" somehow get transposed FROM the end of the prompt, and tacked onto the response. And they even get removed from the prompt that was originally in the Excel spreadsheet to begin with.</p>
<p>Could this be a bug with xlsxwriter? open ai? Some combo of both?</p>
<p>Code here:</p>
<pre><code>import xlsxwriter
import openpyxl
import os
import openai
filename = f'testing-openai-gpt3-requests-v1.xlsx'
wb = openpyxl.load_workbook(filename, read_only=False)
sheet = wb.active
# print("starting number of ideas is:")
# print(sheet.max_row)
for x in range(sheet.max_row):
c = sheet.cell(row = x+1, column = 1)
# print(c.value)
myCurrentText = c.value
myCurrentPrompt = "What is the color of most of the following objects: " + myCurrentText
openai.api_key = [none of your business]
response = openai.Completion.create(
model = "text-davinci-003",
prompt = myCurrentPrompt,
max_tokens = 1000,
)
TheOutputtedSummary = response['choices'][0]['text']
print(TheOutputtedSummary)
sheet.cell(row = x+1, column = 6).value = TheOutputtedSummary
wb.save(str(filename))
print('All finished!')
</code></pre>
|
<python><machine-learning><artificial-intelligence><gpt-3>
|
2023-01-08 07:16:08
| 1
| 389
|
king_anton
|
75,045,732
| 14,358,677
|
html to pdf on Azure using pdfkit with wkhtmltopdf
|
<p>I'm attempting to write an Azure function which converts an html input to pdf and either writes this to a blob and/or returns the pdf to the client. I'm using the <a href="https://pypi.org/project/pdfkit/" rel="nofollow noreferrer">pdfkit</a> python library. This requires the wkhtmltopdf executable to be available.</p>
<p>To test this locally on my windows machine, I installed the windows version of wkhtmltopdf and this works completely fine.</p>
<p>When I deployed this function on a Linux app service on Azure, I could still execute the function successfully only after I executed the sudo command in Kudu tools to install wkhtmltopdf on the app service.</p>
<pre><code>sudo apt-get install wkhtmltopdf
</code></pre>
<p>I'm also aware that I can write this startup script on the app service itself.</p>
<p>My question is : Is there something I can do on my local windows machine so I can just deploy the the azure function along with the linux version of wkhtmltopdf directly from my vscode without having to execute another script on the app service itself?</p>
|
<python><azure><azure-functions><wkhtmltopdf><html-to-pdf>
|
2023-01-08 05:39:09
| 1
| 2,769
|
Anupam Chand
|
75,045,588
| 19,238,204
|
Python3 Matplotlib to Show Solid of Revolution from Method of Disks, Method of Shells, and Method of Washers around x-axis and y-axis
|
<p>I have created this topic before: <a href="https://stackoverflow.com/questions/74970235/how-to-add-another-subplot-to-show-solid-of-revolution-toward-x-axis/74972856#74972856">How to Add another subplot to show Solid of Revolution toward x-axis?</a></p>
<p>I want to plot the solid from a curve 3 + 2x - x^2 that is revolved about:</p>
<p>a. the x-axis</p>
<p>b. the y-axis</p>
<p>c. the line y = -1</p>
<p>d. the line x = 4</p>
<p><a href="https://i.sstatic.net/FlhMW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FlhMW.png" alt="1" /></a></p>
<p><a href="https://i.sstatic.net/1bVG3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1bVG3.png" alt="2" /></a></p>
<p><a href="https://i.sstatic.net/dyWGQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dyWGQ.png" alt="3" /></a></p>
<p>This is my MWE (the problem is that finding the inverse of 3 + 2x - x^2 is not easy, thus I have no idea how to make this code work):</p>
<pre><code># Compare the plot at xy axis with the solid of revolution toward x and y axis
# For function x=(y)^(3/2)
import matplotlib.pyplot as plt
import numpy as np
n = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, projection='3d')
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224, projection='3d')
y = np.linspace(0, 9, n)
x = (y) ** (3 / 2)
t = np.linspace(0, np.pi * 2, n)
xn = np.outer(x, np.cos(t))
yn = np.outer(x, np.sin(t))
zn = np.zeros_like(xn)
for i in range(len(x)):
zn[i:i + 1, :] = np.full_like(zn[0, :], y[i])
ax1.plot(x, y)
ax1.set_title("$f(x)$")
ax2.plot_surface(xn, yn, zn)
ax2.set_title("$f(x)$: Revolution around $y$")
# find the inverse of the function
x_inverse = y
y_inverse = np.power(x_inverse, 3 / 2)
xn_inverse = np.outer(x_inverse, np.cos(t))
yn_inverse = np.outer(x_inverse, np.sin(t))
zn_inverse = np.zeros_like(xn_inverse)
for i in range(len(x_inverse)):
zn_inverse[i:i + 1, :] = np.full_like(zn_inverse[0, :], y_inverse[i])
ax3.plot(x_inverse, y_inverse)
ax3.set_title("Inverse of $f(x)$")
ax4.plot_surface(xn_inverse, yn_inverse, zn_inverse)
ax4.set_title("$f(x)$: Revolution around $x$")
plt.tight_layout()
plt.show()
</code></pre>
|
<python><python-3.x><numpy><matplotlib>
|
2023-01-08 04:58:11
| 0
| 435
|
Freya the Goddess
|
75,045,285
| 17,696,880
|
Put "0" in front of numeric quantities within a string that are missing a numeric figure following the context of a regex pattern
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = '2000_-_9_-_01 8:1 am' #example 1
input_text = '(2000_-_1_-_01) 18:1 pm' #example 2
input_text = '(20000_-_12_-_1) (1:1 am)' #example 3
identificate_hours = r"(?:a\s*las|a\s*la|)\s*(\d{1,2}):(\d{1,2})\s*(?:(am)|(pm)|)"
date_format_00 = r"(\d*)_-_(\d{1,2})_-_(\d{1,2})"
identification_re_0 = r"(?:\(|)\s*" + date_format_00 + r"\s*(?:\)|)\s*(?:a\s*las|a\s*la|)\s*(?:\(|)\s*" + identificate_hours + r"\s*(?:\)|)"
input_text = re.sub(identification_re_0,
#lambda m: print(m[2]),
lambda m: (f"({m[1]}_-_{m[2]}_-_{m[3]}({m[4] or '00'}:{m[5] or '00'} {m[6] or m[7] or 'am'}))"),
input_text, re.IGNORECASE)
print(repr(input_text)) # --> output
</code></pre>
<p>Considering that there are <strong>5 numerical values</strong> (year, month, day, hour, minutes) that might each need a <code>"0"</code> added, and that there are <strong>2 possibilities</strong> for each (add a zero <code>"0"</code> or not), the combinatorics formula tells me there are a total of <strong>32 possible combinations</strong>, which is too many to cover by writing 32 different regexes that do or don't add a <code>"0"</code> in front of every value that needs it. For this reason I feel that repeating the regex, changing only the <code>"(\d{1,2})"</code> parts one by one, would not be a good solution to this problem.</p>
<p>I was trying to standardize date-time data that is entered by users in natural language so that it can then be processed.</p>
<p>So, once the dates are obtained in this format, I need the numerical values of <strong>months</strong>, <strong>days</strong>, <strong>hours</strong> and/or <strong>minutes</strong> that remain as a single digit to be standardized to 2 digits, placing a <code>"0"</code> before them to compensate for the missing digit.</p>
<p>So that in the <strong>output</strong> the input date-time are expressed in this way:</p>
<p><code>YYYY_-_MM_-_DD(hh:mm am or pm)</code></p>
<pre><code>'(2000_-_09_-_01(08:01 am))' #for example 1
'(2000_-_01_-_01(18:01 pm))' #for example 2
'(20000_-_12_-_01(01:01 am))' #for example 3
</code></pre>
<p>I have used the <code>re.sub()</code> function because it contemplates the possibility that within the same <code>input_text</code> there is more than one occasion where a replacement of this type must be carried out. For example, in an input where <code>'2000_-_9_-_01 8:1 am 2000_-_9_-_01 8:1 am'</code>, you should perform this procedure 2 times since there are 2 dates present (that is, there are 2 times where this pattern appears), and obtain this <code>'(2000_-_09_-_01(08:01 am)) (2000_-_09_-_01(08:01 am))'</code></p>
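<p>Rather than 32 pattern variants, the padding can happen inside a single replacement callback with <code>str.zfill</code>. Below is a sketch using a simplified pattern that covers the three examples (the fuller optional pieces from the question, like <code>a las</code>, would slot into the same pattern):</p>

```python
import re

PATTERN = re.compile(
    r"\(?(\d+)_-_(\d{1,2})_-_(\d{1,2})\)?\s*\(?(\d{1,2}):(\d{1,2})\s*(am|pm)?\)?"
)

def normalize(m):
    year, month, day, hour, minute = m.group(1, 2, 3, 4, 5)
    meridiem = m.group(6) or "am"
    # zfill pads on the left with zeros: "1" -> "01", "12" -> "12".
    return (f"({year}_-_{month.zfill(2)}_-_{day.zfill(2)}"
            f"({hour.zfill(2)}:{minute.zfill(2)} {meridiem}))")

out1 = PATTERN.sub(normalize, "2000_-_9_-_01 8:1 am")
out2 = PATTERN.sub(normalize, "(2000_-_1_-_01) 18:1 pm")
# re.sub handles every occurrence, so two dates in one string both get fixed:
out3 = PATTERN.sub(normalize, "2000_-_9_-_01 8:1 am 2000_-_9_-_01 8:1 am")
```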
|
<python><regex><datetime>
|
2023-01-08 03:18:02
| 1
| 875
|
Matt095
|
75,045,199
| 344,669
|
pytest patch generate exception for Class method call
|
<p>I am testing generating a mock exception using <code>side_effect</code>. Based on this page <a href="https://changhsinlee.com/pytest-mock/" rel="nofollow noreferrer">https://changhsinlee.com/pytest-mock/</a> I am trying to generate an exception when calling the <code>load_data</code> method, but it's not working for me.</p>
<pre><code>test_load_data.py::test_slow_load_02 FAILED [100%]
tests/pytest_samples/test_load_data.py:31 (test_slow_load_02)
ds_mock = <MagicMock name='DataSet' id='4473855648'>
@patch("python_tools.pytest_samples.load_data.DataSet")
def test_slow_load_02(ds_mock):
with patch.object(ds_mock, 'load_data', side_effect=Exception('URLError')):
> with pytest.raises(Exception) as excinfo:
E Failed: DID NOT RAISE <class 'Exception'>
test_load_data.py:35: Failed
</code></pre>
<p>Here is the code.</p>
<p>slow.py</p>
<pre><code>class DataSet:
def load_data(self):
return 'Loading Data'
</code></pre>
<p>load_data.py</p>
<pre><code>from python_tools.pytest_samples.slow import DataSet
def slow_load():
dataset = DataSet()
return dataset.load_data()
</code></pre>
<p>test_data.py</p>
<pre><code>@patch("python_tools.pytest_samples.load_data.DataSet")
def test_slow_load_02(ds_mock):
with patch.object(ds_mock, 'load_data', side_effect=Exception('URLError')):
with pytest.raises(Exception) as excinfo:
actual = slow_load()
assert str(excinfo.value) == 'URLError'
</code></pre>
<p>This test code worked for me:</p>
<pre><code>def test_slow_load_01(mocker):
with mocker.patch("python_tools.pytest_samples.load_data.DataSet.load_data", side_effect=Exception('URLError')):
with pytest.raises(Exception) as excinfo:
actual = slow_load()
assert str(excinfo.value) == 'URLError'
</code></pre>
<p>I'd like to understand why <code>patch</code> and <code>patch.object</code> are not working.</p>
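<p>One way to see the difference without pytest at all: the decorator replaces the <em>class</em>, but <code>slow_load()</code> first calls <code>DataSet()</code> and then invokes the method on the resulting instance, which is <code>ds_mock.return_value</code>, a different mock from the <code>ds_mock.load_data</code> that <code>patch.object</code> configured. A minimal sketch using only <code>unittest.mock</code>:</p>

```python
from unittest.mock import MagicMock

ds_mock = MagicMock(name="DataSet")

# what patch.object(ds_mock, 'load_data', side_effect=...) effectively does:
ds_mock.load_data.side_effect = Exception("URLError")

# but slow_load() instantiates the class first, so it talks to a *different* mock:
instance = ds_mock()                              # this is ds_mock.return_value
print(instance.load_data is ds_mock.load_data)    # False: the side_effect never fires

# configuring the instance's method is what would make slow_load() raise:
ds_mock.return_value.load_data.side_effect = Exception("URLError")
```

<p>So in the failing test, setting <code>ds_mock.return_value.load_data.side_effect</code> (rather than patching an attribute of <code>ds_mock</code> itself) should let <code>pytest.raises</code> see the exception.</p>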
<p>Thanks</p>
|
<python><python-3.x><pytest><pytest-mock>
|
2023-01-08 02:47:33
| 2
| 19,251
|
sfgroups
|
75,045,161
| 1,014,385
|
How can we make pandas default handling of missing values warn of their presence rather than silently ignore them?
|
<p>As discussed <a href="https://stackoverflow.com/questions/73615856/does-pandas-current-treatment-of-missing-values-have-potential-to-cause-serious">here</a>, pandas silently replaces <code>NaN</code> values with 0 when calculating sums, in contrast to explicit calculations as shown here:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
np.NaN + np.NaN # Result: nan
pd.DataFrame([np.NaN,np.NaN]).sum().item() # Result: 0.0
</code></pre>
<p><a href="https://pandas.pydata.org/docs/user_guide/basics.html?highlight=skipna#descriptive-statistics" rel="nofollow noreferrer">pandas' Descriptive Statistics</a> methods have a <code>skipna</code> argument. However, <code>skipna</code> is by default <code>True</code>, thereby masking the presence of missing values to casual users and novice programmers.</p>
<p>This creates a risk that analyses will be <a href="https://stackoverflow.com/questions/73615856/does-pandas-current-treatment-of-missing-values-have-potential-to-cause-serious#comment130033644_73615856">"...quietly, accidentally wrong since their Pandas operators haven't used the correct <code>skipna</code>"</a> .</p>
<p>In Python, is there a way for users to set <code>skipna=False</code> as the default option?</p>
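<p>As far as I know there is no pandas option that flips the <code>skipna</code> default globally, so one workaround is a thin wrapper that warns before propagating NaN. This is a sketch, not a pandas API; the function name is made up:</p>

```python
import warnings

import numpy as np
import pandas as pd

def checked_sum(frame, **kwargs):
    """Sum that refuses to silently skip NaNs: warn, then propagate them."""
    if frame.isna().values.any():
        warnings.warn("missing values present; result will be NaN", stacklevel=2)
    return frame.sum(skipna=False, **kwargs)

df = pd.DataFrame([np.nan, np.nan])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = checked_sum(df)

print(result.item())  # nan, and a UserWarning was emitted
```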
|
<python><pandas><missing-data>
|
2023-01-08 02:34:04
| 1
| 912
|
David Lovell
|
75,045,158
| 5,212,614
|
Trying to scrape a specific table but getting no results
|
<p>I tried three different techniques to scrape a table identified by <code>table-light</code> (its CSS class), but nothing is actually working for me. The code below shows my attempts to extract the data.</p>
<pre><code>import pandas as pd
tables = pd.read_html('https://finviz.com/groups.ashx?g=industry&v=120&o=marketcap')
tables
############################################################################
import requests
import pandas as pd
url = 'https://finviz.com/groups.ashx?g=industry&v=120&o=marketcap'
html = requests.get(url).content
df_list = pd.read_html(html)
df = df_list[10]
print(df)
############################################################################
import requests
from bs4 import BeautifulSoup
url = "https://finviz.com/groups.ashx?g=industry&v=120&o=marketcap"
r = requests.get(url)
html = r.text
soup = BeautifulSoup(html, "html.parser")
table = soup.find_all('table-light')
print(table)
</code></pre>
<p>The table that I am trying to extract data from is named 'table-light'. I want to get all the columns and all 144 rows. How can I do that?</p>
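<p>Note that <code>soup.find_all('table-light')</code> searches for a <em>tag named</em> <code>table-light</code>; a CSS class goes in the <code>class_</code> argument instead. The sketch below runs on an inline snippet so it works offline; for the live site, a browser-like <code>User-Agent</code> header on the <code>requests.get</code> call is probably also needed (an assumption, since finviz tends to block default clients):</p>

```python
from bs4 import BeautifulSoup

html = """
<table class="table-light">
  <tr><th>Name</th><th>Market Cap</th></tr>
  <tr><td>Industry A</td><td>1.2B</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", class_="table-light")  # not find_all('table-light')
rows = [[cell.get_text() for cell in tr.find_all(["th", "td"])]
        for tr in table.find_all("tr")]
print(rows)  # [['Name', 'Market Cap'], ['Industry A', '1.2B']]
```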
|
<python><python-3.x><beautifulsoup>
|
2023-01-08 02:32:52
| 1
| 20,492
|
ASH
|
75,045,016
| 11,462,274
|
If the value of a specific column from a previous date is greater than zero then True, less than or equal to zero is False
|
<pre class="lang-none prettyprint-override"><code>,clock_now,competition,market_name,lay,per_day
0,2022-12-30,A,B,-1.0,-1.0
1,2022-12-31,A,B,1.28,0.28
2,2023-01-01,A,B,-1.0,-0.72
3,2023-01-02,A,B,1.0,0.28
4,2023-01-03,A,B,1.0,1.28
5,2023-01-04,A,B,-1.0,-1.72
6,2023-01-04,A,B,-1.0,-1.72
7,2023-01-04,A,B,-1.0,-1.72
</code></pre>
<p>The idea is to get the value of the <code>per_day</code> column of the previous date closest to the row being analyzed.</p>
<p>For example:</p>
<p>In the lines with the date <code>2023-01-04</code>, check in any of the lines that have the date <code>2023-01-03</code> which is the value of the column <code>per_day</code>, if it is greater than zero, True, if it is less or equal to zero, False.</p>
<p>The list would look like this:</p>
<pre class="lang-none prettyprint-override"><code>False
False
True
False
True
True
True
True
</code></pre>
<p>My attempt:</p>
<pre class="lang-python prettyprint-override"><code>df.clock_now = pd.to_datetime(df.clock_now)
df['invest'] = np.where(df.loc[df.clock_now == df['clock_now'] - timedelta(days=1),'per_day'].values[0] > 0,True,False)
</code></pre>
<p>But they all return <code>False</code> and there is another problem, it is not sure that the date will always be <code>1</code> day ago, it could be <code>2</code> or more, so it would still be a failed option.</p>
<p>How should I proceed in this case?</p>
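<p>One sketch, assuming rows that share a date always agree on <code>per_day</code> (the sample suggests they do): take the last <code>per_day</code> per date, shift by one row so each date sees the nearest <em>earlier existing</em> date (one day back or several), then map that onto the frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "clock_now": pd.to_datetime(
        ["2022-12-30", "2022-12-31", "2023-01-01", "2023-01-02",
         "2023-01-03", "2023-01-04", "2023-01-04", "2023-01-04"]),
    "per_day": [-1.0, 0.28, -0.72, 0.28, 1.28, -1.72, -1.72, -1.72],
})

# one per_day value per date
per_date = df.groupby("clock_now")["per_day"].last()

# value of the previous existing date, however many days back it is
prev_value = per_date.shift(1)

# NaN (no previous date) compares as False, matching the first row
df["invest"] = df["clock_now"].map(prev_value).gt(0)
print(df["invest"].tolist())
```

<p>Because the shift happens over the sorted unique dates rather than a fixed <code>timedelta</code>, gaps of 2 or more days are handled the same way.</p>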
|
<python><pandas>
|
2023-01-08 01:46:20
| 1
| 2,222
|
Digital Farmer
|
75,044,779
| 6,626,632
|
Scraping data from KendoUI interface using selenium and python
|
<p>I am trying to scrape data from <a href="https://laerm-monitoring.de/mittelung?mp=14" rel="nofollow noreferrer">this website</a> using selenium and python (note the website is in German but can be translated using Chrome's translate function). Specifically, I would like to automate the process of (1) selecting "24 hours" from the "averaging" dropdown, (2) selecting "maximum" from the "Period" dropdown, and finally (3) clicking the "export" button and downloading the associated Excel file.</p>
<p>I have limited scraping experience, but when I have done it in the past, I have found and clicked on items using their xpath (i.e., using <code>driver.find_element('xpath', ...).click()</code>). However, although I'm able to find what seem to be the correct xpaths here, when I try to interact with them, Selenium returns an <code>ElementNotInteractableException</code>. I would really appreciate any guidance on how to scrape this site. I am open to solutions that do not use Selenium.</p>
|
<python><selenium><web-scraping>
|
2023-01-08 00:42:05
| 1
| 462
|
sdg
|
75,044,685
| 10,380,766
|
Pyenv unable to install older versions of Python 3 due to `configure: error: C compiler cannot create executables`
|
<p>I've been trying to use <code>pyenv</code> to install older versions of python3. I've tried <code>3.0.1</code>, <code>3.1.0</code>, <code>3.1.1</code>, and <code>3.1.2</code>. I kept getting the error</p>
<pre><code>ImportError: No module named _ssl
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
</code></pre>
<p>Following some documentation from pyenv I tried updating my command to explicitly pass the location of the headers and libraries like so:</p>
<pre><code>➜ CPPFLAGS="-I/usr/bin/openssl/include" \
LDFLAGS="-L/usr/bin/openssl/lib" \
pyenv install -v 3.1.2
</code></pre>
<p>That seems to have gotten past that error but now I'm getting the error <code>configure: error: C compiler cannot create executables</code></p>
<p>The only suggestion I can find for Ubuntu is to install/update the <code>build-essential</code> package which I've done and was already installed and fully updated.</p>
<p>I can see that the <code>gcc</code> command is working. So what else am I missing?</p>
<h2>Full Error</h2>
<pre class="lang-bash prettyprint-override"><code>➜ CPPFLAGS="-I/usr/bin/openssl/include" \
LDFLAGS="-L/usr/bin/openssl/lib" \
pyenv install -v 3.1.2
/tmp/python-build.20230107155624.97473 ~/GitProjects/VWAP_backtest
Downloading Python-3.1.2.tar.gz...
-> https://www.python.org/ftp/python/3.1.2/Python-3.1.2.tgz
/tmp/python-build.20230107155624.97473/Python-3.1.2 /tmp/python-build.20230107155624.97473 ~/GitProjects/VWAP_backtest
Installing Python-3.1.2...
patching file ./setup.py
patching file ./Lib/ssl.py
patching file ./Modules/_ssl.c
checking for --enable-universalsdk... no
checking for --with-universal-archs... 32-bit
checking MACHDEP... linux5
checking machine type as reported by uname -m... x86_64
checking for --without-gcc... no
checking for gcc... gcc
checking for C compiler default output file name...
configure: error: C compiler cannot create executables
See `config.log' for more details.
BUILD FAILED (Ubuntu 22.04 using python-build 2.3.9-18-g25c974d5)
</code></pre>
|
<python><python-3.x><ubuntu><openssl><pyenv>
|
2023-01-08 00:15:41
| 2
| 1,020
|
Hofbr
|
75,044,683
| 11,154,361
|
NetCDF variable extraction at specific coordinates using arrays does not work - python
|
<p>I am trying to extract two variables values of a netCDF file at specific coordinates. From what I have seen from various posts on StackOverflow, the best way to achieve that is to work with <code>arrays</code>.</p>
<p>I have tried to use <a href="https://stackoverflow.com/questions/69330668/efficient-way-to-extract-data-from-netcdf-files">this method</a> but it does not work and I am struggling to find out why.</p>
<p>The code I used is the following:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
file = "input.nc"
lon=24.2
lat=17.8
da = xr.open_dataset(file)
ts = da.sel(lon=lon, lat=lat, method="nearest") #I also tried da["variable1"].sel(lon=lon, lat=lat, method="nearest") but same outcome.
</code></pre>
<p>But it raised this error:</p>
<pre><code>KeyError: 'lon is not a valid dimension or coordinate'
</code></pre>
<p>I then tried to set lon and lat as coordinates (<code>da.set_coords(("lat", "lon"))</code>) but it still raised an error:</p>
<pre><code>KeyError: 'no index found for coordinate lon'
</code></pre>
<p>I have very rarely worked with arrays so I don't understand what is wrong here. Here are more details of my netCDF:</p>
<pre><code>xarray.Dataset
Dimensions: south_north: 63, west_east: 69, vcoord_dim: 8, Time: 25, bottom_top: 8
Coordinates: (0) # For some reasons lon and lat are Data variables and not Coordinates in this netCDF.
Data variables: (216)
</code></pre>
<p>I also tried a more mannual extraction method (validated answer <a href="https://stackoverflow.com/questions/45582344/extracting-data-for-a-specific-location-from-netcdf-by-python/74599597#74599597">here</a>) but it gave an index value way out of bound for the <code>j</code> that I can also not explain.</p>
<hr />
<p>So, to summarize, what I want is extract the values of some <code>Data variables</code> at specific lon/lat values for all the timesteps of the netCDF.</p>
<p>I looked everywhere for answers but nothing helped me understand what was wrong. Any tips and advice would be greatly appreciated.</p>
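<p>Since <code>lat</code> and <code>lon</code> here are 2D <em>data variables</em> on a curvilinear <code>(south_north, west_east)</code> grid rather than indexed coordinates, <code>.sel</code> cannot use them even after <code>set_coords</code>. A common fallback is to find the nearest <code>(j, i)</code> index by minimizing distance, then select with <code>.isel</code>. A numpy-only sketch with a hypothetical grid:</p>

```python
import numpy as np

# hypothetical curvilinear grid shaped (south_north, west_east), like the file's
lat2d, lon2d = np.meshgrid(np.linspace(15, 20, 63),
                           np.linspace(20, 28, 69), indexing="ij")

target_lat, target_lon = 17.8, 24.2

# squared distance in degrees is enough to pick the nearest cell
dist2 = (lat2d - target_lat) ** 2 + (lon2d - target_lon) ** 2
j, i = np.unravel_index(dist2.argmin(), dist2.shape)

print(j, i, lat2d[j, i], lon2d[j, i])
# back in xarray this would be: ts = da.isel(south_north=j, west_east=i)
```

<p>The resulting <code>.isel</code> selection keeps all timesteps, which matches the goal of extracting every variable's time series at one point.</p>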
|
<python><arrays><netcdf><python-xarray>
|
2023-01-08 00:14:40
| 0
| 344
|
I.M.
|
75,044,240
| 1,473,517
|
Correlation coefficient confidence intervals
|
<p>Using scipy.stats we can compute the correlation coefficient along with a p-value. For example using:</p>
<pre><code>x = [1.76405235, 0.40015721, 0.97873798,
2.2408932, 1.86755799, -0.97727788]
y = [2.71414076, 0.2488, 0.87551913,
2.6514917, 2.01160156, 0.47699563]
def statistic(x): # permute only `x`
return stats.spearmanr(x, y).statistic
res_exact = stats.permutation_test((x,), statistic, permutation_type='pairings')
res_exact.pvalue
(0.10277777777777777)
</code></pre>
<p>But I would like a confidence interval for the correlation coefficient. In R this is given by <a href="https://search.r-project.org/CRAN/refmans/DescTools/html/SpearmanRho.html" rel="nofollow noreferrer">default</a>. What can you do in Python?</p>
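<p>One option in Python is a bootstrap interval via <code>scipy.stats.bootstrap</code> (SciPy ≥ 1.7), which supports paired samples. With only 6 points the interval will be wide; the guard against degenerate (constant) resamples below is my own assumption, not SciPy behaviour:</p>

```python
import numpy as np
from scipy import stats

x = np.array([1.76405235, 0.40015721, 0.97873798,
              2.2408932, 1.86755799, -0.97727788])
y = np.array([2.71414076, 0.2488, 0.87551913,
              2.6514917, 2.01160156, 0.47699563])

def spearman(xs, ys):
    # a constant resample would yield nan; map it to 0 instead
    if np.std(xs) == 0 or np.std(ys) == 0:
        return 0.0
    return stats.spearmanr(xs, ys)[0]  # [0] works across SciPy versions

res = stats.bootstrap((x, y), spearman, paired=True, vectorized=False,
                      n_resamples=2000, random_state=0)
low, high = res.confidence_interval
print(f"95% CI: [{low:.3f}, {high:.3f}]")
```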
|
<python><scipy><statistics>
|
2023-01-07 22:32:18
| 1
| 21,513
|
Simd
|
75,044,224
| 1,580,659
|
How do I "flatten" a nested serializer in DRF?
|
<p>We have a nested serializer that we would like to "flatten". But I'm not having much luck finding how to achieve this in the docs.</p>
<p>Here is the current output.</p>
<pre><code>{
"user_inventory": "UOHvaxFa11R5Z0bPYuihP0RKocn2",
"quantity": 1,
"player": {
"card_id": "c69c0808328fdc3e3f3ee8b9b7d4a7f8",
"game": "MLB The Show 22",
"name": "Jesus Tinoco",
"all_positions": [
"CP"
]
}
}
</code></pre>
<p>Here is what I'd like:</p>
<pre><code>{
"user_inventory": "UOHvaxFa11R5Z0bPYuihP0RKocn2",
"quantity": 1,
"card_id": "c69c0808328fdc3e3f3ee8b9b7d4a7f8",
"game": "MLB The Show 22",
"name": "Jesus Tinoco",
"all_positions": [
"CP"
]
}
</code></pre>
<p>Here is how the serializers are setup:</p>
<pre><code>class PlayerProfileSerializer(serializers.ModelSerializer):
class Meta:
model = PlayerProfile
fields = (
'card_id',
'game',
'name',
'all_positions',
)
class UserInventoryItemSerializer(serializers.ModelSerializer):
player = PlayerProfileSerializer()
class Meta:
model = UserInventoryItem
fields = (
'user_inventory',
'quantity',
'player',
)
</code></pre>
<p>Here is the view:</p>
<pre><code>class OwnedInventoryView(viewsets.ModelViewSet):
serializer_class = UserInventoryItemSerializer
filterset_class = UserInventoryItemFilter
def get_queryset(self):
order_by = self.request.query_params.get('order_by', '')
if order_by:
order_by_name = order_by.split(' ')[1]
order_by_sign = order_by.split(' ')[0]
order_by_sign = '' if order_by_sign == 'asc' else '-'
return UserInventoryItem.objects.filter(user_inventory=self.kwargs['user_inventory_pk']).order_by(order_by_sign + order_by_name)
return UserInventoryItem.objects.filter(user_inventory=self.kwargs['user_inventory_pk'])
</code></pre>
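<p>A common route (other approaches exist, e.g. flattening with <code>source</code> fields) is overriding <code>to_representation</code> on <code>UserInventoryItemSerializer</code> and merging the nested dict upward. The merge itself is plain dict work, so it is sketched here without Django so it runs standalone; the helper name is made up:</p>

```python
def flatten_nested(rep, key):
    """Merge the dict stored under `key` into the top level of `rep`."""
    rep = dict(rep)          # don't mutate the serializer's output in place
    rep.update(rep.pop(key))
    return rep

# inside UserInventoryItemSerializer this would be (sketch):
# def to_representation(self, instance):
#     return flatten_nested(super().to_representation(instance), 'player')

nested = {"user_inventory": "UOHva", "quantity": 1,
          "player": {"card_id": "c69c", "name": "Jesus Tinoco"}}
print(flatten_nested(nested, "player"))
```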
|
<python><django><django-rest-framework>
|
2023-01-07 22:30:44
| 1
| 1,428
|
JeremyE
|
75,043,540
| 14,368,435
|
how can i tell the python interpreter running inside the flatpak sandbox to use libraries from the main system
|
<p>I'm creating a Gtk4 application with Python bindings on Linux, packaged with Flatpak.
I'm building an extensions/plugins system where the user can define the main module to call in a specific file, and later on I'll load it. Everything works, but when the user's code imports external libraries like NumPy or pandas, Python starts looking inside the flatpak sandbox, which is expected. I'm wondering how I can tell the Python interpreter to use the system's modules for the imported plugins instead of looking in the app's modules.</p>
<p>User's code and requirements should be independent of the app's requirements.</p>
<p>This is how I'm loading the modules:</p>
<pre><code>extension_module = SourceFileLoader(file_name, module_path).load_module()
extension_class = getattr(extension_module, class_name)
obj = extension_class()
</code></pre>
<p>This is an example of the loaded class</p>
<p>the absolute path of this module is <code>/home/user/.extensions/ext1/module.py</code></p>
<pre><code>import numpy as np
class Module1:
def __init__(self):
self.data = np.array([1, 2, 3, 4, 5, 6])
def get_data(self):
return self.data
</code></pre>
<p>I tried using</p>
<pre><code>sys.path.append('/usr/lib64/python-10.8/site-packages')
</code></pre>
<p>It's added, but in the sandbox environment.</p>
<p>I thought about resolving user imports manually: when a user imports pandas, I would look for the installed Python package on the system and use <code>importlib</code> or <code>SourceFileLoader</code> to load it, but I don't think that is a good way to do it.</p>
|
<python><gtk4><flatpak>
|
2023-01-07 20:24:12
| 1
| 304
|
Ali BEN AMOR
|
75,043,535
| 2,495,203
|
Python: more efficient data structure than a nested dictionary of dictionaries of arrays?
|
<p>I'm writing a python-3.10 program that predicts time series of various properties for a large number of objects. My current choice of data structure for collecting results internally in the code and then for writing to files is a nested dictionary of dictionaries of arrays. For example, for two objects with time series of 3 properties:</p>
<pre><code>properties = {'obj1':{'time':np.arange(10),'x':np.random.randn(10),'vx':np.random.randn(10)},
'obj2': {'time':np.arange(15),'x':np.random.randn(15),'vx':np.random.randn(15)}}
</code></pre>
<p>The reason I like this nested dictionary format is because it is intuitive to access -- the outer key is the object name, and the inner keys are the property names. The elements corresponding to each of the inner keys are numpy arrays giving the value of some property as a function of time. My actual code generates a dict of ~100,000s of objects (outer keys) each having ~100 properties (inner keys) recorded at ~1000 times (numpy float arrays).</p>
<p>I have noticed that when I do <code>np.savez('filename.npz',**properties)</code> on my own huge properties dictionary (or subsets of it), it takes a while and the output file sizes are a few GB (probably because np.savez is calling pickle under the hood since my nested dict is not an array).</p>
<p>Is there a more efficient data structure widely applicable for my use case? Is it worth switching from my nested dict to pandas dataframes, numpy ndarrays or record arrays, or a list of some kind of Table-like objects? It would be nice to be able to save/load the file in a binary output format that preserves the mapping from object names to their dict/array/table/dataframe of properties, and of course the names of each of the property time series arrays.</p>
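<p>One reason for the pickle fallback is that the values handed to <code>np.savez</code> are dicts, not arrays. Flattening the two key levels into <code>"name/prop"</code> strings keeps every stored value a plain ndarray, which avoids pickle entirely while preserving the name-to-property mapping. A sketch with made-up object names, written to an in-memory buffer:</p>

```python
import io
import numpy as np

properties = {
    'obj1': {'time': np.arange(10), 'x': np.random.randn(10)},
    'obj2': {'time': np.arange(15), 'x': np.random.randn(15)},
}

# flatten "obj -> prop -> array" into "obj/prop -> array"
flat = {f"{obj}/{prop}": arr
        for obj, inner in properties.items()
        for prop, arr in inner.items()}

buf = io.BytesIO()
np.savez_compressed(buf, **flat)   # every value is an ndarray: no pickle needed
buf.seek(0)

loaded = np.load(buf)              # lazy NpzFile; arrays read on access
print(sorted(loaded.files))
obj1_time = loaded['obj1/time']
```

<p>Rebuilding the nested view on load is just splitting each key on <code>"/"</code>; for ~100k objects, HDF5 (e.g. <code>h5py</code>) with the same hierarchical keys is the other natural candidate.</p>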
|
<python><numpy><dictionary><nested><pickle>
|
2023-01-07 20:22:39
| 1
| 721
|
quantumflash
|
75,043,497
| 1,486,486
|
Nested locks of the same Thread causing a dead lock
|
<p>I have a small class (<code>Counters</code>) which wraps a dictionary of objects (<code>Counter</code>).</p>
<p>Here is a simplified example (with my debugging prints...)</p>
<pre><code>import threading
import time
import logging
import random
logging.basicConfig(level=logging.DEBUG, format='(%(threadName)-9s) %(message)s',)
class Counter(object):
def __init__(self, start : int = 0):
self.lock = threading.Lock()
self.value = start
def increment(self):
logging.debug('Waiting inc - %s', threading.current_thread().name)
self.lock.acquire()
try:
logging.debug('Acquired inc - %s', threading.current_thread().name)
self.value = self.value + 1
finally:
logging.debug('Released inc - %s', threading.current_thread().name)
self.lock.release()
def lastValue(self) -> str:
logging.debug('Waiting lastValue - %s', threading.current_thread().name)
self.lock.acquire()
try:
# return the last seen time in mysql format:
logging.debug('Acquired lastValue - %s', threading.current_thread().name)
return f" value -> {self.value}"
except Exception as e:
logging.error(e)
finally:
logging.debug('Released lastValue - %s', threading.current_thread().name)
self.lock.release()
def getAsDict(self, with_log=False) -> dict:
logging.debug('Waiting getAsDict - %s', threading.current_thread().name)
self.lock.acquire()
try:
logging.debug('Acquired getAsDict - %s', threading.current_thread().name)
return {
"counted" : self.lastValue(),
}
except Exception as e:
logging.error(e)
finally:
logging.debug('Released getAsDict - %s', threading.current_thread().name)
self.lock.release()
class Counters:
def __init__(self,):
self.lock = threading.Lock()
self.store = {}
def add(self, name : str) -> None:
# add a counter object to the list:
logging.debug('Waiting add - %s', threading.current_thread().name)
self.lock.acquire()
try:
logging.debug('Acquired add - %s', threading.current_thread().name)
self.store[name] = Counter(0)
finally:
logging.debug('Released add - %s', threading.current_thread().name)
self.lock.release()
def remove(self, name : str) -> bool:
# remove a counter from the dictionary:
with self.lock:
if name in self.store:
del self.store[name]
return True
return False
def get(self, name) -> Counter or None:
with self.lock:
return self.store.get(name, None)
def getAll(self) -> dict:
logging.debug('Waiting getAll - %s', threading.current_thread().name)
self.lock.acquire()
try:
logging.debug('Acquired getAll - %s', threading.current_thread().name)
ret = {}
for name, counter in self.store.items():
print(counter.getAsDict())
ret[name] = counter.getAsDict()
return ret
except Exception as e:
print(e)
finally:
logging.debug('Released getAll - %s', threading.current_thread().name)
self.lock.release()
</code></pre>
<p>When I call the <code>getAll()</code> method I get stuck in a deadlock in <code>lastValue</code>.
To the best of my knowledge Python allows nested lock acquiring, and in this case this is the problematic call path that causes the deadlock:</p>
<ul>
<li>getAll (1 lock on Counters)</li>
<li>getAsDict (2 lock on Counter)</li>
<li>lastValue (3 lock on Counter) - Dead here</li>
</ul>
<p>It can also be observed when running one thread:</p>
<pre><code>def worker(c):
for i in range(1):
r = random.random()
n = random.randint(1, 500)
#random name:
name = f"counter_{n}"
logging.debug('Counter [%s] Sleeping %0.02f', name, r)
time.sleep(r)
c.add(name)
c.get(name).increment()
logging.debug('Done')
result = c.getAll()
logging.debug('Result: %r', result)
if __name__ == '__main__':
counters = Counters()
for i in range(1):
t = threading.Thread(target=worker, args=(counters,))
t.start()
logging.debug('Waiting for worker threads')
main_thread = threading.current_thread()
for t in threading.enumerate():
if t is not main_thread:
t.join()
</code></pre>
<p>The output is:</p>
<pre><code>(MainThread) Waiting for worker threads
(Thread-7 (worker)) Counter [counter_129] Sleeping 0.55
(Thread-7 (worker)) Waiting add - Thread-7 (worker)
(Thread-7 (worker)) Acquired add - Thread-7 (worker)
(Thread-7 (worker)) Released add - Thread-7 (worker)
(Thread-7 (worker)) Waiting inc - Thread-7 (worker)
(Thread-7 (worker)) Acquired inc - Thread-7 (worker)
(Thread-7 (worker)) Released inc - Thread-7 (worker)
(Thread-7 (worker)) Done
(Thread-7 (worker)) Waiting getAll - Thread-7 (worker)
(Thread-7 (worker)) Acquired getAll - Thread-7 (worker)
(Thread-7 (worker)) Waiting getAsDict - Thread-7 (worker)
(Thread-7 (worker)) Acquired getAsDict - Thread-7 (worker)
(Thread-7 (worker)) Waiting lastValue - Thread-7 (worker) <-- DEADLOCK
</code></pre>
<p>What am I missing?</p>
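<p>A minimal demonstration of the underlying assumption: a plain <code>threading.Lock</code> blocks even the thread that already holds it; only <code>threading.RLock</code> permits the nested <code>getAll</code> / <code>getAsDict</code> / <code>lastValue</code> pattern:</p>

```python
import threading

plain = threading.Lock()
plain.acquire()
nested_ok = plain.acquire(blocking=False)   # False: same thread is locked out
plain.release()

reentrant = threading.RLock()
reentrant.acquire()
nested_rlock = reentrant.acquire(blocking=False)  # True: owner may re-acquire
reentrant.release()
reentrant.release()

print(nested_ok, nested_rlock)  # False True
```

<p>So one option is swapping <code>threading.Lock()</code> for <code>threading.RLock()</code> in <code>Counter</code>; another is having the locked methods call unlocked private helpers so each lock is taken exactly once per call chain.</p>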
|
<python><python-3.x><multithreading><deadlock><thread-lock>
|
2023-01-07 20:16:45
| 1
| 6,606
|
Shlomi Hassid
|
75,043,415
| 7,437,221
|
How can I add a new column to a dataframe (df1) that is the sum of multiple lookup values from df1 in another dataframe (df2)
|
<p>Say I have 2 dataframes:</p>
<p>df1</p>
<pre><code> id guid name item1 item2 item3 item4 item5 item6 item7 item8 item9
0 3031958124 85558-261955282 Alonso 85558-57439 85558-54608 85558-91361 85558-40647 85558-41305 85558-79979 85558-33076 85558-89956 85558-12554
1 3031958127 85558-261955282 Jeff 85558-57439 85558-39280 85558-91361 85558-55987 85558-83083 85558-79979 85558-33076 85558-41872 85558-12554
2 3031958129 85558-261955282 Mike 85558-57439 85558-39280 85558-91361 85558-55987 85558-40647 85558-79979 85558-33076 85558-88297 85558-12534
...
</code></pre>
<p>df2 where <code>item_lookup</code> is the index</p>
<pre><code> item_type cost value target
item_lookup
85558-57439 item1 9500 25.1 1.9
85558-54608 item2 8000 18.7 0.0
85558-91361 item3 7000 16.5 0.9
...
</code></pre>
<p>I want to add the sum of <code>cost</code>, <code>value</code>, and <code>target</code> for each item1 through item9 using item_lookup (<code>df2</code>) and store that as a column on df1.</p>
<p>So the result should look like:
df1</p>
<pre><code> id guid name item1 item2 item3 item4 item5 item6 item7 item8 item9 cost value target
0 3031958124 85558-261955282 Alonso 85558-57439 85558-54608 85558-91361 85558-40647 85558-41305 85558-79979 85558-33076 85558-89956 85558-12554 58000 192.5 38.3
1 3031958127 85558-261955282 Jeff 85558-57439 85558-39280 85558-91361 85558-55987 85558-83083 85558-79979 85558-33076 85558-41872 85558-12554 59400 183.2 87.7
2 3031958129 85558-261955282 Mike 85558-57439 85558-39280 85558-91361 85558-55987 85558-40647 85558-79979 85558-33076 85558-88297 85558-12534 58000 101.5 18.1
...
</code></pre>
<p>I've tried following similar solutions online that use <code>.map</code>, however these examples are only for single columns whereas I am trying to sum values for 9 columns.</p>
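<p>One sketch, using small made-up frames of the same shape: stack the item columns into a single keyed column, map each lookup through <code>df2</code>, and sum per original row. Keys missing from <code>df2</code> would map to NaN and be skipped by the sum, which may or may not be the desired behaviour:</p>

```python
import pandas as pd

# small stand-in frames mirroring the question's layout (hypothetical values)
df2 = pd.DataFrame({'cost': [9500, 8000, 7000],
                    'value': [25.1, 18.7, 16.5],
                    'target': [1.9, 0.0, 0.9]},
                   index=pd.Index(['85558-57439', '85558-54608', '85558-91361'],
                                  name='item_lookup'))

df1 = pd.DataFrame({'name': ['Alonso', 'Jeff'],
                    'item1': ['85558-57439', '85558-57439'],
                    'item2': ['85558-54608', '85558-91361']})

items = df1.filter(like='item')      # the item1..item9 columns
stacked = items.stack()              # one lookup key per row, level 0 = df1 row
sums = {col: stacked.map(df2[col]).groupby(level=0).sum()
        for col in ['cost', 'value', 'target']}
df1 = df1.join(pd.DataFrame(sums))
print(df1[['cost', 'value', 'target']])
```

<p>This scales to all 9 item columns without writing 27 separate <code>.map</code> calls.</p>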
|
<python><pandas><numpy><data-science><data-analysis>
|
2023-01-07 20:03:24
| 3
| 353
|
Sean Sailer
|
75,043,231
| 10,664,489
|
Upgrade to opencv-python 4.7 causes import error
|
<p>While working on a project that uses the opencv-python module, I ended up updating it to version 4.7.0.68 from 4.6.0.66. The next time I ran my program it failed with an import error:
<code>ImportError: dlopen(path/to/my/virtualenv/lib/python3.9/site-packages/cv2/cv2.abi3.so, 2): Symbol not found: _VTRegisterSupplementalVideoDecoderIfAvailable</code></p>
<p>After downgrading opencv-python back to 4.6.0.66 the import error went away and things went back to running without error.</p>
<p>Given that OpenCV 4.7 was released fairly recently and from what I can tell <code>_VTRegisterSupplementalVideoDecoderIfAvailable</code> is a function of macOS I'm guessing that this is some sort of compatibility bug in the new version of opencv-python. On the machine where the error occurred I'm running macOS Catalina (10.15.7).</p>
<p>To isolate that the issue is strictly related to opencv-python, I created a clean environment, installed opencv-python 4.7 and attempted to run a script with the single line <code>import cv2</code>. This failed with the same error.</p>
<p><a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/5461?sort=new?sort=new#discussioncomment-4623416" rel="noreferrer">This issue</a> in an unrelated project hints that it's possible upgrading my OS may be needed.</p>
<p>I can live with downgrading to 4.6 but curious to know if there's an alternative solution to fix this import error.</p>
|
<python><macos><opencv>
|
2023-01-07 19:34:51
| 1
| 574
|
Adam Richard
|
75,043,137
| 814,354
|
Set arrow size based on figure units instead of axis data units?
|
<p>In matplotlib, is there a way to specify arrow head sizes in figure units rather than in data units?</p>
<p>The use case is: I am making a multi-panel figure in which each panel has a different axis size (e.g., one goes from 0 to 1 on the X-axis, and the next goes from 0 to 10). I'd like the arrows to appear the same in each panel. I'd also like the arrows to appear the same independent of direction.</p>
<p>For axes with an aspect ratio not equal to 1, the width of the tail (and therefore the size of the head) varies with direction.</p>
<p>The closest I've come is, after drawing on the canvas:</p>
<pre class="lang-py prettyprint-override"><code>dx = ax.get_xlim()[1] - ax.get_xlim()[0]
for arrow in ax.patches:
arrow.set_data(width=dx/50)
</code></pre>
<p>but this does not work; it results in images like this:</p>
<p><a href="https://i.sstatic.net/VQPNL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VQPNL.png" alt="Graphic" /></a></p>
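<p>One route, an assumption about the goal rather than a fix for <code>Axes.arrow</code> itself, is <code>FancyArrowPatch</code>: its <code>mutation_scale</code> sizes the head in points (display units), so heads match across panels and directions regardless of each panel's data range:</p>

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set(xlim=(0, 1), ylim=(0, 1))     # panels with different data ranges
ax2.set(xlim=(0, 10), ylim=(0, 10))

for ax, (start, end) in [(ax1, ((0.2, 0.2), (0.8, 0.8))),
                         (ax2, ((2.0, 2.0), (8.0, 8.0)))]:
    arrow = FancyArrowPatch(start, end, arrowstyle="-|>",
                            mutation_scale=20,  # head size in points, not data units
                            linewidth=2, color="black")
    ax.add_patch(arrow)

buf = io.BytesIO()
fig.savefig(buf, format="png")  # both heads render the same on-screen size
```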
|
<python><matplotlib><plot>
|
2023-01-07 19:18:11
| 2
| 19,445
|
keflavich
|
75,043,134
| 2,326,896
|
How would I implement my own IntEnum in Python if one wasn't provided oob?
|
<p>I always struggle with Enum, IntEnum, etc and have to revisit the documentation several times each time I use this Python feature. I think it would be useful to have a more clear understanding of the internals.</p>
<p>For instance, why can't I use named arguments in this example?</p>
<pre><code>class MD_Fields(IntEnum):
ACCOUNT = (0, **identifier=True**)
M_DESCRIPT = (4, False)
def __new__(cls, value: int, identifier: bool):
obj = int.__new__(cls, value)
obj.identifier = identifier
return obj
</code></pre>
<p>And, of course, the main question: how do I pretend an Enum is an int? How do I tell Python that <code>SOME.ENUM</code> should be handled as if it were a 5?</p>
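<p>For reference, the right-hand side of a member definition is just its <em>value</em> (here a tuple) that enum later passes positionally to <code>__new__</code>, so keyword syntax is not available there. Once <code>_value_</code> is set, an <code>IntEnum</code> member already behaves as its int. A sketch (dropping the description field for brevity):</p>

```python
from enum import IntEnum

class MD_Fields(IntEnum):
    def __new__(cls, value, identifier):
        obj = int.__new__(cls, value)
        obj._value_ = value          # tell Enum which part is the real value
        obj.identifier = identifier
        return obj

    ACCOUNT = (0, True)              # positional only: (value, identifier)
    M_DESCRIPT = (4, False)

print(MD_Fields.M_DESCRIPT + 1)      # 5: the member behaves as its int value
print(MD_Fields.ACCOUNT.identifier)  # True
```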
|
<python><enums>
|
2023-01-07 19:17:44
| 2
| 891
|
Fernando César
|
75,043,118
| 2,453,606
|
Sagemaker Batch Transform Job "upstream prematurely closed connection" when surpassing 30 minutes
|
<p>I am serving a sagemaker model through a custom docker container using <a href="https://sagemaker-examples.readthedocs.io/en/latest/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.html#When-should-I-build-my-own-algorithm-container%3F" rel="nofollow noreferrer">the guide that AWS provides</a>. This is a docker container that runs a simple nginx->gunicorn/wsgi->flask server</p>
<p>I am facing an issue where my transform requests time out at around 30 minutes in all instances, despite being configured to allow up to 60 minutes. I need requests to be able to run for SageMaker's maximum of 60 minutes due to the data-intensive nature of the requests.</p>
<hr />
<p>Through experience working with this setup for some months, I know that there are 3 factors that should affect the time my server has to respond to requests:</p>
<ol>
<li>Sagemaker itself will cap invocation requests according to the
<code>InvocationsTimeoutInSeconds</code> parameter set when <a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_ModelClientConfig.html#sagemaker-Type-ModelClientConfig-InvocationsTimeoutInSeconds" rel="nofollow noreferrer">creating the batch
transform
job</a>.</li>
<li>The <code>nginx.conf</code> file must be configured such that <code>keepalive_timeout</code>, <code>proxy_read_timeout</code>, <code>proxy_send_timeout</code>, and <code>proxy_connect_timeout</code> are all equal to or greater than the maximum timeout</li>
<li>The gunicorn server must have its timeout configured to be equal to or greater than the maximum timeout</li>
</ol>
<hr />
<p>I have verified that when I create my batch transform job <code>InvocationsTimeoutInSeconds</code> is set to 3600 (1 hour)</p>
<p>My nginx.conf looks like this:</p>
<pre><code>worker_processes 1;
daemon off; # Prevent forking
pid /tmp/nginx.pid;
error_log /var/log/nginx/error.log;
events {
# defaults
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log combined;
sendfile on;
client_max_body_size 30M;
keepalive_timeout 3920s;
upstream gunicorn {
server unix:/tmp/gunicorn.sock;
}
server {
listen 8080 deferred;
client_max_body_size 80m;
keepalive_timeout 3920s;
proxy_read_timeout 3920s;
proxy_send_timeout 3920s;
proxy_connect_timeout 3920s;
send_timeout 3920s;
location ~ ^/(ping|invocations) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://gunicorn;
}
location / {
return 404 "{}";
}
}
}
</code></pre>
<p>I start the gunicorn server like this:</p>
<pre><code>def start_server():
print('Starting the inference server with {} workers.'.format(model_server_workers))
print('Model server timeout {}.'.format(model_server_timeout))
# link the log streams to stdout/err so they will be logged to the container logs
subprocess.check_call(['ln', '-sf', '/dev/stdout', '/var/log/nginx/access.log'])
subprocess.check_call(['ln', '-sf', '/dev/stderr', '/var/log/nginx/error.log'])
nginx = subprocess.Popen(['nginx', '-c', '/opt/program/nginx.conf'])
gunicorn = subprocess.Popen(['gunicorn',
'--timeout', str(3600),
'-k', 'sync',
'-b', 'unix:/tmp/gunicorn.sock',
'--log-level', 'debug',
'-w', str(1),
'wsgi:app'])
signal.signal(signal.SIGTERM, lambda a, b: sigterm_handler(nginx.pid, gunicorn.pid))
# If either subprocess exits, so do we.
pids = set([nginx.pid, gunicorn.pid])
while True:
pid, _ = os.wait()
if pid in pids:
break
sigterm_handler(nginx.pid, gunicorn.pid)
print('Inference server exiting')
</code></pre>
<p>Despite all this, whenever a transform job takes longer than approx 30 minutes I will see this message in my logs and the transform job status becomes failed:</p>
<pre><code>2023/01/07 08:23:14 [error] 11#11: *4 upstream prematurely closed connection while reading response header from upstream, client: 169.254.255.130, server: , request: "POST /invocations HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/invocations", host: "169.254.255.131:8080"
</code></pre>
<p>I am close to thinking there is a bug in AWS batch transform, but perhaps I am missing some other variable (perhaps in the nginx.conf) that could lead to premature upstream termination of my request.</p>
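<p>To separate an AWS-side limit from an nginx/gunicorn one, one approach (a sketch; the <code>app</code> below is a hypothetical stand-in for the real <code>wsgi:app</code>) is to run the same nginx + gunicorn stack locally against a deliberately slow handler and watch which process closes the connection first:</p>

```python
import time


def app(environ, start_response):
    """Tiny stand-in for the real wsgi:app, for exercising the nginx/gunicorn
    timeout chain locally; SLOW_SECONDS is a test knob, not production code."""
    SLOW_SECONDS = 2  # raise toward 1800+ to mimic the failing transforms
    path = environ.get("PATH_INFO", "")
    if path == "/ping":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b""]
    if path == "/invocations":
        time.sleep(SLOW_SECONDS)
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"done"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"{}"]
```

<p>If the local stack survives past the 30-minute mark, the cutoff likely comes from outside the container (for example a job-level invocation timeout) rather than from this <code>nginx.conf</code>.</p>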
|
<python><amazon-web-services><nginx><gunicorn><amazon-sagemaker>
|
2023-01-07 19:14:55
| 1
| 1,885
|
GrantD71
|
75,043,093
| 2,326,896
|
Python compiler says I'm adding an extra argument to int in an Enum
|
<p>I'm trying to create a custom enum whose members can replace an int, but carry additional fields.</p>
<pre><code>from enum import IntEnum


class MD_Fields(IntEnum):
    ACCOUNT = (0, "Account", True)
    M_DESCRIPT = (4, "Description", False)

    def __new__(cls, value: int, description: str, identifier: bool):
        obj = int.__new__(cls, value)
        obj.description = description
        obj.identifier = identifier
        return obj


if __name__ == '__main__':
    print(MD_Fields.M_DESCRIPT)
</code></pre>
<p>However, this code raises the following problem:</p>
<pre><code>Traceback (most recent call last):
  File ".../JetBrains/PyCharmCE2022.3/scratches/scratch.py", line 3, in <module>
    class MD_Fields(IntEnum):
  File "/usr/lib/python3.7/enum.py", line 223, in __new__
    enum_member._value_ = member_type(*args)
TypeError: int() takes at most 2 arguments (3 given)
</code></pre>
<p>I don't understand what's happening.
(I didn't find a meaningful definition of <code>int.__new__</code>)</p>
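<p>A sketch of one fix: the enum machinery falls back to <code>member_type(*args)</code>, here <code>int(0, "Account", True)</code>, whenever the custom <code>__new__</code> does not set <code>_value_</code> on the new member. Setting it explicitly avoids that call:</p>

```python
from enum import IntEnum


class MD_Fields(IntEnum):
    ACCOUNT = (0, "Account", True)
    M_DESCRIPT = (4, "Description", False)

    def __new__(cls, value: int, description: str, identifier: bool):
        obj = int.__new__(cls, value)
        obj._value_ = value           # the line that was missing
        obj.description = description
        obj.identifier = identifier
        return obj
```

<p>With <code>_value_</code> set, the members behave as ints (<code>MD_Fields.ACCOUNT == 0</code>) while still exposing <code>description</code> and <code>identifier</code>.</p>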
|
<python><enums>
|
2023-01-07 19:11:56
| 1
| 891
|
Fernando César
|
75,043,070
| 1,845,408
|
groupby produces a different mean when done at two levels
|
<p>I have a dataset (<code>df</code>) about the <code>Time</code> spent by different users (<code>UserId</code>). Users can belong to different <code>Group</code>s (1, 2, 3).</p>
<pre><code> UserId Time Group
0 181915 52 3
1 128004 52 2
2 178974 40 1
3 182887 30 1
4 238434 50 2
</code></pre>
<p>I am trying to compute the average <code>Time</code> spent by each user <code>Group</code>. I know that this is an easy task.</p>
<pre><code>df.groupby('Group')['Time'].mean()
</code></pre>
<p>The code above works just fine. However, I have the following alternative where I group the data by both 'Group' and 'UserId' and then compute the mean per user within each group, and then take the mean of the computed means for the whole group. Below is an example for Group 1:</p>
<pre><code>df.groupby(['Group', 'UserId'])['Time'].mean().loc[1].mean()
</code></pre>
<p>However, this produces a different result. I do not understand why it works this way; at the end, I take the average of everything, so I do not know which result I should trust. I appreciate any help.</p>
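<p>The difference is real, not a bug: the second computation is an unweighted mean of per-user means, so a user with many rows counts the same as a user with one row, while the first pools all rows together. A small sketch with hypothetical numbers:</p>

```python
# Group 1 has two users: user A with times [10, 10, 10], user B with [40]
times_a = [10, 10, 10]
times_b = [40]

# Pooled mean over all rows (what groupby('Group') computes)
pooled_mean = sum(times_a + times_b) / len(times_a + times_b)      # 70 / 4 = 17.5

# Unweighted mean of per-user means (what the two-level groupby computes)
per_user_means = [sum(times_a) / len(times_a), sum(times_b) / len(times_b)]
mean_of_means = sum(per_user_means) / len(per_user_means)          # (10 + 40) / 2 = 25.0
```

<p>Which one to trust depends on the question: the pooled mean answers "how long is a typical session in this group", the mean of means answers "how long does a typical user in this group spend on average".</p>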
|
<python><pandas><dataframe><group-by>
|
2023-01-07 19:06:40
| 0
| 8,321
|
renakre
|
75,043,032
| 19,989,634
|
How to redirect to same place on page when redirecting to previous page
|
<p>I was wondering if there is a way in Django, when you click back to the previous page via a link (in my case an anchored link wrapped around an img), to also land at the same place on that page, i.e. at the image that was originally clicked.</p>
<p><strong>Page I click image to redirect:</strong></p>
<pre><code> </head>
<body>
<header>{% include 'navbardesktop.html' %}</header>
<div class="image-container">
<div class="image-post">
<a href="{% url 'gallery' %}"
><img class="photo-img" src="{{photo.image.url}}"
/></a>
<h2 class="photo-title">{{photo.image_title}}</h2>
<p class="contact">
Interested in purchasing this as a print? Contact me for more
information regarding price and sizes.
</p>
<a href="{% url 'contact' %}" class="btn btn-secondary" type="button"
>Contact</a
>
<a href="{% url 'gallery' %}" class="btn btn-secondary" type="button"
>Gallery</a
>
</div>
</div>
</body>
</html>
</code></pre>
<p><strong>Page I want to redirect to:</strong></p>
<pre><code> <body>
<header>{% include 'navbardesktop.html' %}</header>
<div class="container-gallery">
<div class="col-md-12" id="gallerygrid1">
<div class="row">
{% for photo in images %}
<div class="col-md-4" id="gallerygrid2">
<a href="{% url 'viewimage' photo.slug %}"
><img
class="gallery-thumbnail"
src="{{photo.image.url}}"
style=""
/></a>
</div>
{% empty %}
<h3>No Projects...</h3>
{% endfor %}
</div>
</div>
</div>
</body>
</html>
</code></pre>
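<p>One common approach is browser-side rather than Django-side: give each gallery thumbnail an <code>id</code> and link back with a URL fragment, so the browser scrolls to it on arrival. A sketch assuming <code>photo.slug</code> is unique (it already feeds the <code>viewimage</code> URL):</p>

```html
<!-- gallery page: give each thumbnail an addressable id -->
<div class="col-md-4" id="{{ photo.slug }}">
  <a href="{% url 'viewimage' photo.slug %}"
    ><img class="gallery-thumbnail" src="{{ photo.image.url }}"
  /></a>
</div>

<!-- detail page: link back to that anchor instead of the bare gallery URL -->
<a href="{% url 'gallery' %}#{{ photo.slug }}"
  ><img class="photo-img" src="{{ photo.image.url }}"
/></a>
```

<p>URL fragments never reach the Django view, so no view or URLconf changes are needed for this to work.</p>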
|
<python><django>
|
2023-01-07 19:01:18
| 1
| 407
|
David Henson
|
75,043,015
| 13,609,047
|
Jupyter Kernel Dies when Running TensorFlow Model
|
<p>While training a TensorFlow Model in Jupyter, the kernel dies before the first epoch.</p>
<p>The model I am using is a DeepLab with input size 256 on a ResNet50 encoder. I cannot show the model summary because it is too long to fit in the question.
This issue only happens with this specific model and does not occur with others that I have used.</p>
<p>Here is the output of the cell when I try to train the model:</p>
<pre><code>Epoch 1/100
2023-01-07 12:22:01.752760: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2023-01-07 12:22:05.727903: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
Canceled future for execute_request message before replies were done
</code></pre>
<p>This issue occurs in both VSCode Jupyter and Jupyter Notebook/Lab.</p>
<p>I have tried restarting the kernel, reinstalling tensorflow, creating a new environment, and using the <code>nomkl</code> library. I am on an M1 MacBook Pro running Tensorflow 2.11.0 (macos). The python version is 3.10.</p>
|
<python><tensorflow><jupyter-notebook>
|
2023-01-07 18:58:41
| 1
| 352
|
neuops
|
75,042,645
| 1,826,066
|
Specify return type of a wrapper function that calls an abstract method in Python
|
<p>For this example, consider the simplified scenario where a <code>Solver</code> will return a <code>Solution</code>.</p>
<p>We have <code>Solution</code>s:</p>
<pre class="lang-py prettyprint-override"><code>class Solution(ABC):
    pass


class AnalyticalSolution(Solution):
    pass


class NumericalSolution(Solution):
    def get_mesh_size(self) -> float:
        return 0.12345
</code></pre>
<p>And <code>Solver</code>s:</p>
<pre class="lang-py prettyprint-override"><code>class Solver(ABC):
    def solve(self, task: int) -> Solution:
        # Do some pre-processing with task
        # ...
        return self._solve(task)

    @abstractmethod
    def _solve(self, task: int) -> Solution:
        pass


class NumericalSolver(Solver):
    def _solve(self, task: int) -> NumericalSolution:
        return NumericalSolution()


class AnalyticalSolver(Solver):
    def _solve(self, task: int) -> AnalyticalSolution:
        return AnalyticalSolution()
</code></pre>
<p>The problem I encounter results from the implementation of the wrapper method <code>solve</code>, which then calls the abstract method <code>_solve</code>.
I often encounter situations like this, where I want to do some preprocessing in <code>solve</code> that is the same for all solvers, while the actual implementation of <code>_solve</code> differs.</p>
<p>If I now create the numerical solver and call the <code>get_mesh_size()</code> method, Pylance (correctly) tells me that a <code>Solution</code> object has no <code>get_mesh_size</code> member.</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
    solver = NumericalSolver()
    solution = solver.solve(1)
    print(solution.get_mesh_size())
</code></pre>
<p>I understand that Pylance only sees the interface of <code>solve</code> which indicates that the return type is a <code>Solution</code> object that does not need to have a <code>get_mesh_size</code> method.
I am also aware that this example works at runtime.</p>
<p>I tried to use <code>TypeVar</code> like this (actually, because ChatGPT suggested it):</p>
<pre class="lang-py prettyprint-override"><code>class Solution(ABC):
    pass


T = TypeVar("T", bound=Solution)
</code></pre>
<p>and then rewrite the <code>Solver</code> class:</p>
<pre class="lang-py prettyprint-override"><code>class Solver(ABC):
    def solve(self, task: int) -> T:
        # Do some pre-processing with task
        # ...
        return self._solve(task)

    @abstractmethod
    def _solve(self, task: int) -> T:
        pass
</code></pre>
<p>But Pylance now tells me <code>TypeVar "T" appears only once in generic function signature</code>. So this can't be the solution.</p>
<p><strong>How do I get typing to work with this example?</strong></p>
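<p>A sketch of one way to make the <code>TypeVar</code> approach work: bind <code>T</code> at the class level by making <code>Solver</code> generic, so each subclass pins <code>T</code> to its own solution type and Pylance can infer the narrower return type of <code>solve</code>:</p>

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar


class Solution(ABC):
    pass


class NumericalSolution(Solution):
    def get_mesh_size(self) -> float:
        return 0.12345


T = TypeVar("T", bound=Solution)


class Solver(ABC, Generic[T]):
    def solve(self, task: int) -> T:
        # shared pre-processing would go here
        return self._solve(task)

    @abstractmethod
    def _solve(self, task: int) -> T:
        ...


class NumericalSolver(Solver[NumericalSolution]):
    def _solve(self, task: int) -> NumericalSolution:
        return NumericalSolution()


solution = NumericalSolver().solve(1)
print(solution.get_mesh_size())  # Pylance now infers solution: NumericalSolution
```

<p>Because <code>T</code> now appears in the class's generic signature rather than only in one function, the "appears only once" warning goes away, and the runtime behavior is unchanged.</p>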
|
<python><object><abstract-class><type-hinting>
|
2023-01-07 18:05:10
| 1
| 1,351
|
Thomas
|
75,042,556
| 5,994,555
|
TypeError when autoincrementing column in sqlalchemy
|
<p>I have the following table definition</p>
<pre><code>class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True, autoincrement=True)
    email = Column(String(256), unique=True)
    is_admin = Column(Boolean, nullable=False)

    def __init__(self, id, email, is_admin):
        self.id = id
        self.email = email
        self.is_admin = is_admin
</code></pre>
<p>When I add a user I only call it with two arguments because I would like the id to be autoincremented and hence not passed by my call:</p>
<pre><code>u = User(email=email, is_admin=admin)
</code></pre>
<p>but I get the following error:</p>
<pre><code>TypeError: __init__() missing 1 required positional argument
</code></pre>
<p>How do I define a primary_key column without the need to pass it as an argument?</p>
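<p>The error comes from the hand-written <code>__init__</code>, which makes <code>id</code> a required positional argument; <code>autoincrement</code> only affects the database side. Two common fixes: delete <code>__init__</code> entirely (the declarative base already provides a keyword constructor), or give <code>id</code> a default of <code>None</code> so the database assigns it on flush. A minimal sketch of the latter (plain Python, with the <code>Base</code>/<code>Column</code> details omitted):</p>

```python
# Plain-Python sketch of the constructor fix; SQLAlchemy column definitions omitted.
class User:
    def __init__(self, email, is_admin, id=None):
        self.id = id              # stays None until the database assigns a value
        self.email = email
        self.is_admin = is_admin


u = User(email="user@example.com", is_admin=False)
print(u.id)   # None
```

<p>With this signature, <code>User(email=email, is_admin=admin)</code> works as in the question, and an explicit id can still be passed when needed.</p>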
|
<python><sqlalchemy>
|
2023-01-07 17:51:37
| 1
| 497
|
Sabina Orazem
|
75,042,333
| 13,345,744
|
How to convert Pandas Series to Timestamp when not every value is convertible?
|
<p><strong>Context</strong></p>
<p>I have a <code>Pandas</code> <code>Series</code> containing <code>Dates</code> in a <code>String</code> format <em>(e.g. 2017-12-19 09:35:00)</em>. My goal is to convert this <code>Series</code> into <code>Timestamps</code> <em>(Time in Seconds since 1970)</em>.</p>
<p>The difficulty is, that some <code>Values</code> in this <code>Series</code> are corrupt and cannot be converted to a <code>Timestamp</code>. In that case, they should be converted to <code>None</code>.</p>
<hr />
<p><strong>Code</strong></p>
<pre class="lang-py prettyprint-override"><code>import datetime
series = series.apply(lambda x: datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S").timestamp())
</code></pre>
<hr />
<p><strong>Question</strong></p>
<blockquote>
<p>The code above would work if all <code>Values</code> were in the correct format; however, there is corrupt data.</p>
</blockquote>
<ul>
<li>How can I achieve my goal while converting all not-convertible data to <code>None</code>?</li>
</ul>
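<p>One pandas-native route is <code>pd.to_datetime(series, errors='coerce')</code>, which maps unparseable values to <code>NaT</code>. Staying closer to the <code>strptime</code> approach in the question, a try/except wrapper (a sketch, stdlib only) returns <code>None</code> for corrupt values and can be passed to <code>series.apply</code>:</p>

```python
import datetime


def to_timestamp(value):
    """Seconds since 1970, or None when the value cannot be parsed."""
    try:
        return datetime.datetime.strptime(value, "%Y-%m-%d %H:%M:%S").timestamp()
    except (TypeError, ValueError):
        return None


# series = series.apply(to_timestamp)   # with a pandas Series
```

<p><code>ValueError</code> covers malformed strings and <code>TypeError</code> covers non-string entries such as <code>NaN</code> floats or <code>None</code>.</p>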
|
<python><pandas><datetime><machine-learning><timestamp>
|
2023-01-07 17:22:13
| 2
| 1,721
|
christophriepe
|
75,042,171
| 14,094,460
|
Block setting of class attributes (__setattr__)
|
<ol>
<li>is there a simple way to prevent setting new class attrs?</li>
<li>while trying with the following snippet, shouldn't <code>setattr(Derived, "test1", 1)</code> call the <code>__setattr__</code> from <code>Base</code>?</li>
</ol>
<pre><code>class Base:
    def __setattr__(self, key, value):
        raise PermissionError('in base')

    def __init_subclass__(cls, *args, **kwargs):
        def _setattr_(inst, key, val):
            raise PermissionError('in derived')
        cls.__setattr__ = _setattr_


class Derived(Base):
    pass


setattr(Derived, "test1", 1)        # succeeds, no error raised

Derived.__setattr__(Derived, "test2", 2)
# Traceback (most recent call last):
#   ...
# PermissionError: in derived

Base.__setattr__(Derived, "test3", 3)
# Traceback (most recent call last):
#   ...
# PermissionError: in base
</code></pre>
<p><strong>EDIT</strong>:
this is a duplicate. See below.</p>
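<p>On question 2: <code>setattr(Derived, "test1", 1)</code> assigns an attribute on the class object itself, so Python looks up <code>__setattr__</code> on the class's <em>type</em> (its metaclass), never on <code>Base</code>; that is why the call succeeds silently. A minimal sketch of blocking class-level assignment with a metaclass:</p>

```python
class FrozenMeta(type):
    def __setattr__(cls, key, value):
        # called for attribute assignment on classes whose type is FrozenMeta
        raise PermissionError(f"cannot set {key!r} on {cls.__name__}")


class Base(metaclass=FrozenMeta):
    pass


class Derived(Base):
    pass


try:
    setattr(Derived, "test1", 1)
except PermissionError as exc:
    print(exc)   # cannot set 'test1' on Derived
```

<p>Instance-level assignment is still governed by the class's own <code>__setattr__</code>, so both can be combined to freeze a class entirely.</p>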
|
<python><setattr>
|
2023-01-07 16:58:58
| 2
| 1,442
|
deponovo
|
75,042,021
| 20,247,248
|
Import "pybit" could not be resolved
|
<p>When I run <code>pip3 install pybit</code> in the terminal I get:</p>
<pre><code>Requirement already satisfied: pybit in /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages (2.4.1)
</code></pre>
<p>But in my project:</p>
<pre><code>#API imports
from pybit import HTTP
</code></pre>
<p>I get <code>Import "pybit" could not be resolved</code>.</p>
<p>How do I fix my environment?</p>
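<p>This usually means the editor or script is running a different interpreter than the one <code>pip3</code> installed into (here, the python.org 3.11 framework build). A quick stdlib check of which interpreter is active and whether it can see the package:</p>

```python
import importlib.util
import sys

print(sys.executable)                       # the interpreter actually running this code
print(importlib.util.find_spec("pybit"))    # None means this interpreter can't see it
```

<p>If <code>find_spec</code> returns <code>None</code>, installing with <code>python3 -m pip install pybit</code> from that same interpreter, and selecting the matching interpreter in the editor, typically resolves the mismatch.</p>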
|
<python>
|
2023-01-07 16:39:03
| 0
| 402
|
tdimoff
|
75,042,013
| 7,104,332
|
Python sort a string in CSV
|
<p>I want to sort every row of the CSV string with the following code:</p>
<pre><code>import csv


def sort_csv_columns(csv_string: str) -> str:
    # Split the CSV string into lines
    lines = csv_string.strip().split("\n")

    # Split the first line (column names) and sort it case-insensitively
    header = lines[0].split(",")
    header.sort(key=str.lower)

    # Split the remaining lines (data rows) and sort them by the sorted header
    data = [line.split(",") for line in lines[1:]]
    data.sort(key=lambda row: [row[header.index(col)] for col in header])

    # Join the sorted data and header into a single CSV string
    sorted_csv = "\n".join([",".join(header)] + [",".join(row) for row in data])
    return sorted_csv


# Test the function
csv_string = "Beth,Charles,Danielle,Adam,Eric\n17945,10091,10088,3907,10132\n2,12,13,48,11"
sorted_csv = sort_csv_columns(csv_string)
print(sorted_csv)
</code></pre>
<p>Output</p>
<pre><code>Adam,Beth,Charles,Danielle,Eric
17945,10091,10088,3907,10132
2,12,13,48,11
</code></pre>
<p>Expected Output</p>
<pre><code>Adam,Beth,Charles,Danielle,Eric\n
3907,17945,10091,10088,10132\n
48,2,12,13,11
</code></pre>
<p>What am I doing wrong?</p>
<p>I am not able to reorder the data rows to match the sorted header.</p>
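<p>The bug: sorting <code>header</code> in place discards the original column positions, so <code>row[header.index(col)]</code> no longer maps back into the unsorted rows, and the data rows' columns are never permuted at all. A sketch that computes the permutation first and then applies it to every row, header included:</p>

```python
def sort_csv_columns(csv_string: str) -> str:
    lines = csv_string.strip().split("\n")
    header = lines[0].split(",")

    # Column order that sorts the header case-insensitively,
    # remembered as indices into the ORIGINAL column positions.
    order = sorted(range(len(header)), key=lambda i: header[i].lower())

    rows = [line.split(",") for line in lines]
    return "\n".join(",".join(row[i] for i in order) for row in rows)


csv_string = "Beth,Charles,Danielle,Adam,Eric\n17945,10091,10088,3907,10132\n2,12,13,48,11"
print(sort_csv_columns(csv_string))
# Adam,Beth,Charles,Danielle,Eric
# 3907,17945,10091,10088,10132
# 48,2,12,13,11
```

<p>Because each value stays attached to its original column index, every data row follows the header into the new order.</p>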
|
<python><csv>
|
2023-01-07 16:37:28
| 1
| 474
|
Rohit Sthapit
|
75,041,692
| 4,451,521
|
Ordering a dataframe with a key function and multiple columns
|
<p>I have the following</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({
    'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
    'col2': [2, -1, 9, -8, 7, 4],
    'col3': [0, 1, 9, 4, 2, 3],
    'col4': ['a', 'B', 'c', 'D', 'e', 'F'],
    'col5': [2, 1, 9, 8, 7, 4],
    'col6': [1.00005, 1.00001, -2.12132, -2.12137, 1.00003, -2.12135]
})

print(df)
print(df.sort_values(by=['col5']))
print(df.sort_values(by=['col2']))
print(df.sort_values(by='col2', key=lambda col: col.abs()))
</code></pre>
<p>So far so good.</p>
<p>However, I would like to order the dataframe by two columns: first <code>col6</code> and then <code>col5</code>,
with the following conditions:</p>
<ul>
<li><code>col6</code> only has to consider 4 decimals (meaning that <code>1.00005</code> and <code>1.00001</code> should be considered equal)</li>
<li><code>col6</code> should be compared by absolute value (meaning <code>1.00005</code> is less than <code>-2.12132</code>)</li>
</ul>
<p>So the desired output would be</p>
<pre><code> col1 col2 col3 col4 col5 col6
1 A -1 1 B 1 1.00001
0 A 2 0 a 2 1.00005
4 D 7 2 e 7 1.00003
5 C 4 3 F 4 -2.12135
3 NaN -8 4 D 8 -2.12137
2 B 9 9 c 9 -2.12132
</code></pre>
<p>How can I combine the usage of keys with multiple columns?</p>
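<p>A sketch of one way to do this (only the two sort columns are reproduced here for brevity): <code>sort_values</code> calls the <code>key</code> function once per column listed in <code>by</code>, so the key can dispatch on <code>col.name</code>, truncating <code>|col6|</code> to 4 decimals so that near-ties compare equal while <code>col5</code> passes through unchanged:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'col5': [2, 1, 9, 8, 7, 4],
    'col6': [1.00005, 1.00001, -2.12132, -2.12137, 1.00003, -2.12135],
})


def sort_key(col):
    if col.name == 'col6':
        # truncate the absolute value to 4 decimals so 1.00005 and 1.00001 tie
        return np.trunc(col.abs() * 1e4)
    return col


out = df.sort_values(by=['col6', 'col5'], key=sort_key)
print(list(out.index))   # [1, 0, 4, 5, 3, 2]
```

<p>Ties on the truncated <code>col6</code> key then fall through to <code>col5</code>, which gives the row order in the desired output above.</p>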
|
<python><pandas>
|
2023-01-07 15:53:42
| 2
| 10,576
|
KansaiRobot
|