QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars)
|---|---|---|---|---|---|---|---|---|
75,237,862
| 17,630,139
|
Snowflake-connector-python fails to install. Returns "ModuleNotFoundError: No module named 'cmake'"
|
<p>I'm trying to install <code>snowflake-connector-python</code> using pip, but it's giving me this error stack trace:</p>
<pre class="lang-py prettyprint-override"><code> copying pyarrow/tests/parquet/test_metadata.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
copying pyarrow/tests/parquet/test_pandas.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
copying pyarrow/tests/parquet/test_parquet_file.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
copying pyarrow/tests/parquet/test_parquet_writer.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
running build_ext
creating /private/var/folders/4c/xj1m5wts0xx46bbh5qhhhg4m0000gq/T/pip-install-v4ysgr2_/pyarrow_ae70c3da10594e6eb24b27149ad7d95d/build/temp.macosx-10.9-universal2-cpython-311
-- Running cmake for pyarrow
cmake -DPYTHON_EXECUTABLE=/Users/gree030/Workspace/projectName/venv/bin/python -DPython3_EXECUTABLE=/Users/gree030/Workspace/projectName/venv/bin/python "" -DPYARROW_BUILD_CUDA=off -DPYARROW_BUILD_FLIGHT=off -DPYARROW_BUILD_GANDIVA=off -DPYARROW_BUILD_DATASET=off -DPYARROW_BUILD_ORC=off -DPYARROW_BUILD_PARQUET=off -DPYARROW_BUILD_PARQUET_ENCRYPTION=off -DPYARROW_BUILD_PLASMA=off -DPYARROW_BUILD_S3=off -DPYARROW_BUILD_HDFS=off -DPYARROW_USE_TENSORFLOW=off -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_BOOST=off -DPYARROW_GENERATE_COVERAGE=off -DPYARROW_BOOST_USE_SHARED=on -DPYARROW_PARQUET_USE_SHARED=on -DCMAKE_BUILD_TYPE=release /private/var/folders/4c/xj1m5wts0xx46bbh5qhhhg4m0000gq/T/pip-install-v4ysgr2_/pyarrow_ae70c3da10594e6eb24b27149ad7d95d
error: command 'cmake' failed: No such file or directory
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Here are my environments:</p>
<ul>
<li>Python version: 3.11.1</li>
<li>pip version: 22.3.1</li>
</ul>
<p>I tried installing and updating <code>cmake</code> but it still gave me this error:</p>
<pre class="lang-py prettyprint-override"><code> copying pyarrow/tests/parquet/test_pandas.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
copying pyarrow/tests/parquet/test_parquet_file.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
copying pyarrow/tests/parquet/test_parquet_writer.py -> build/lib.macosx-10.9-universal2-cpython-311/pyarrow/tests/parquet
running build_ext
creating /private/var/folders/4c/xj1m5wts0xx46bbh5qhhhg4m0000gq/T/pip-install-ejkkok_0/pyarrow_e560da15c45d4feeb95b2060af382048/build/temp.macosx-10.9-universal2-cpython-311
-- Running cmake for pyarrow
cmake -DPYTHON_EXECUTABLE=/Users/gree030/Workspace/projectName/venv/bin/python -DPython3_EXECUTABLE=/Users/gree030/Workspace/projectName/venv/bin/python "" -DPYARROW_BUILD_CUDA=off -DPYARROW_BUILD_FLIGHT=off -DPYARROW_BUILD_GANDIVA=off -DPYARROW_BUILD_DATASET=off -DPYARROW_BUILD_ORC=off -DPYARROW_BUILD_PARQUET=off -DPYARROW_BUILD_PARQUET_ENCRYPTION=off -DPYARROW_BUILD_PLASMA=off -DPYARROW_BUILD_S3=off -DPYARROW_BUILD_HDFS=off -DPYARROW_USE_TENSORFLOW=off -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_BOOST=off -DPYARROW_GENERATE_COVERAGE=off -DPYARROW_BOOST_USE_SHARED=on -DPYARROW_PARQUET_USE_SHARED=on -DCMAKE_BUILD_TYPE=release /private/var/folders/4c/xj1m5wts0xx46bbh5qhhhg4m0000gq/T/pip-install-ejkkok_0/pyarrow_e560da15c45d4feeb95b2060af382048
Traceback (most recent call last):
File "/Users/gree030/Workspace/projectName/venv/bin/cmake", line 5, in <module>
from cmake import cmake
ModuleNotFoundError: No module named 'cmake'
error: command '/Users/gree030/Workspace/projectName/venv/bin/cmake' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I also followed the documentation <a href="https://docs.snowflake.com/en/user-guide/python-connector-install.html" rel="nofollow noreferrer">here</a> and the <a href="https://docs.snowflake.com/en/user-guide/python-connector-install.html#label-python-connector-prerequisites-python-packages" rel="nofollow noreferrer">dependency installation guide here</a> on how to install the connector, and downloaded the dependent libraries using:</p>
<p><code>pip install -r https://raw.githubusercontent.com/snowflakedb/snowflake-connector-python/main/tested_requirements/requirements_311.reqs</code></p>
<p>given that my python version is 3.11.1.</p>
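For context, the two tracebacks show two distinct failures: first a missing `cmake` binary, then a broken pip-installed `cmake` shim inside the venv (`from cmake import cmake` fails). A hedged sketch of commands that often get past this on macOS (assumes Homebrew; whether a prebuilt pyarrow wheel exists depends on the exact Python build):

```shell
# Remove the pip-installed cmake shim that shadows the real binary in the venv
pip uninstall -y cmake

# Provide a real cmake on PATH (macOS / Homebrew)
brew install cmake

# A newer pip is more likely to find a prebuilt pyarrow wheel,
# avoiding the source build entirely
pip install --upgrade pip
pip install snowflake-connector-python
```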
|
<python><snowflake-cloud-data-platform>
|
2023-01-25 17:47:00
| 1
| 331
|
Khalil
|
75,237,818
| 8,479,344
|
DRF: Dynamic literal type hint for models.TextChoices
|
<p>Given this model</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class Olympian(models.Model):
MedalType = models.TextChoices('MedalType', 'GOLD SILVER BRONZE')
medal = models.CharField(max_length=6, choices=MedalType.choices, default=MedalType.GOLD)
</code></pre>
<p>and this function which takes in the <code>CharField</code> as a param</p>
<pre class="lang-py prettyprint-override"><code>fn_with_type_hint(olympian.medal)
</code></pre>
<p>How can I type hint the param more strictly without hard-coding like this?</p>
<pre class="lang-py prettyprint-override"><code>def fn_with_type_hint(medal: Literal['Gold', 'Silver', 'Bronze']):
pass
</code></pre>
<hr />
<h3>What I tried</h3>
<p>I tried <code>Olympian.medal</code> but it's just a string</p>
<pre class="lang-py prettyprint-override"><code>medal: Olympian.medal
</code></pre>
<p>I also tried variations of this to no avail</p>
<pre class="lang-py prettyprint-override"><code>medal: Literal[*Olympian.MedalType.values]
</code></pre>
<p>I also can't use this solution because I don't start with a list of strings</p>
<p><a href="https://stackoverflow.com/a/64522240/8479344">https://stackoverflow.com/a/64522240/8479344</a></p>
|
<python><django><django-rest-framework><enums>
|
2023-01-25 17:41:56
| 0
| 711
|
Fullchee Zhang
|
75,237,748
| 8,247,997
|
In a scatterplot, how do I plot a line that averages the y-coordinates of all data points that share the same x-coordinate?
|
<p>I want something like the plots shown in figure below, where the blue line is the average line that is generated by plotting the mean of all y-coordinate values of data-points that have the same x-coordinate values.</p>
<p><a href="https://i.sstatic.net/9JsDe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9JsDe.png" alt="Fig-1" /></a></p>
<p>I tried the code below</p>
<pre><code>window_size = 10
df_avg = pd.DataFrame(columns=df.columns)
for col in df.columns:
df_avg[col] = df[col].rolling(window=window_size).mean()
plt.figure(figsize=(20,20))
for idx, col in enumerate(df.columns, 1):
plt.subplot(df.shape[1]-4, 4, idx)
sns.scatterplot(data=df, x=col, y='charges')
plt.plot(df_avg[col],df['charges'])
plt.xlabel(col)
</code></pre>
<p>And got the plots shown below, which obviously are not what I wanted.
<a href="https://i.sstatic.net/czXn8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/czXn8.png" alt="Fig-2" /></a></p>
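For reference, the aggregation described (mean of all y values sharing an x value) is a groupby rather than a rolling window; a minimal sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical data: several y values per repeated x value
df = pd.DataFrame({"x": [1, 1, 2, 2, 3], "charges": [2.0, 4.0, 1.0, 3.0, 5.0]})

# Mean of all y values that share the same x value
mean_line = df.groupby("x")["charges"].mean().reset_index()

# Overlay on the scatter, e.g.:
# sns.scatterplot(data=df, x="x", y="charges")
# plt.plot(mean_line["x"], mean_line["charges"])
```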
|
<python><matplotlib><seaborn><data-science>
|
2023-01-25 17:34:26
| 1
| 346
|
Somanna
|
75,237,664
| 17,945,841
|
Deleting observations from a data frame according to a Bernoulli random variable that is 0 or 1
|
<p>I have a data frame with 1000 rows. I want to delete 500 observations from a specific column <code>Y</code>, such that the bigger the value of <code>Y</code>, the higher the probability it will be deleted.
One way to do that is to sort this column in ascending order. For <code>i = 1,...,1000</code>, toss a Bernoulli random variable with a success probability <code>p_i</code> that depends on <code>i</code>, then delete all observations whose Bernoulli random variable is 1.</p>
<p>So first I sort this column:</p>
<p><code>df_sorted = df.sort_values("mycolumn")</code></p>
<p>Next, I tried something like this:</p>
<pre><code>p_i = np.linspace(0,1,num=sample_Encoded_sorted.shape[0])
bernoulli = np.random.binomial(1, p_i)
delete_index = bernoulli == 1
</code></pre>
<p>I get <code>delete_index</code> as a boolean vector of <code>True</code> or <code>False</code>, where the probability of getting <code>True</code> is higher at higher indices. However, I get more than 500 <code>True</code> values in it.</p>
<p>How do I get exactly 500 <code>True</code> values in this vector? And how do I use it to delete the corresponding rows of the data frame?</p>
<p>For example, if <code>i = 1</code> in <code>delete_index</code> is <code>False</code>, the first row of the data frame won't be deleted; if it's <code>True</code>, it will be deleted.</p>
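If exactly 500 deletions are required, independent Bernoulli draws cannot guarantee the count. One hedged alternative is weighted sampling without replacement, sketched here on a small hypothetical frame (20 rows, 10 deletions, rank-based weights):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"Y": rng.normal(size=20)})
n_delete = 10

# Sort so that row position reflects the size of Y
df_sorted = df.sort_values("Y").reset_index(drop=True)

# Rank-based weights: later (larger-Y) rows are more likely to be drawn;
# replace=False guarantees exactly n_delete distinct rows are chosen
weights = np.arange(1, len(df_sorted) + 1, dtype=float)
weights /= weights.sum()
delete_pos = rng.choice(len(df_sorted), size=n_delete, replace=False, p=weights)

df_kept = df_sorted.drop(index=delete_pos)
```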
|
<python><numpy><statistics>
|
2023-01-25 17:24:24
| 1
| 1,352
|
Programming Noob
|
75,237,628
| 1,816,135
|
tokenizer.save_pretrained TypeError: Object of type property is not JSON serializable
|
<p>I am trying to save the GPT2 tokenizer as follows:</p>
<pre><code>from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = GPT2Tokenizer.eos_token
dataset_file = "x.csv"
df = pd.read_csv(dataset_file, sep=",")
input_ids = tokenizer.batch_encode_plus(list(df["x"]), max_length=1024,padding='max_length',truncation=True)["input_ids"]
# saving the tokenizer
tokenizer.save_pretrained("tokenfile")
</code></pre>
<p>I am getting the following error:
TypeError: Object of type property is not JSON serializable</p>
<p>More details:</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[x], line 3
1 # Save the fine-tuned model
----> 3 tokenizer.save_pretrained("tokenfile")
File /3tb/share/anaconda3/envs/ak_env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2130, in PreTrainedTokenizerBase.save_pretrained(self, save_directory, legacy_format, filename_prefix, push_to_hub, **kwargs)
2128 write_dict = convert_added_tokens(self.special_tokens_map_extended, add_type_field=False)
2129 with open(special_tokens_map_file, "w", encoding="utf-8") as f:
-> 2130 out_str = json.dumps(write_dict, indent=2, sort_keys=True, ensure_ascii=False) + "\n"
2131 f.write(out_str)
2132 logger.info(f"Special tokens file saved in {special_tokens_map_file}")
File /3tb/share/anaconda3/envs/ak_env/lib/python3.10/json/__init__.py:238, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
232 if cls is None:
233 cls = JSONEncoder
234 return cls(
235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
File /3tb/share/anaconda3/envs/ak_env/lib/python3.10/json/encoder.py:201, in JSONEncoder.encode(self, o)
199 chunks = self.iterencode(o, _one_shot=True)
...
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
TypeError: Object of type property is not JSON serializable
</code></pre>
<p>How can I solve this issue?</p>
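One thing worth checking in the snippet above: <code>GPT2Tokenizer.eos_token</code> accesses the attribute on the *class*, which yields the property object itself; on the instance it would be <code>tokenizer.eos_token</code>. A minimal sketch, independent of transformers, of why exactly that TypeError appears:

```python
import json

class Tok:
    @property
    def eos_token(self):
        return "<|endoftext|>"

t = Tok()

# Class access returns the descriptor, not the string
assert isinstance(Tok.eos_token, property)

# json cannot serialize a property object...
try:
    json.dumps({"pad_token": Tok.eos_token})
except TypeError as e:
    print(e)  # Object of type property is not JSON serializable

# ...but instance access yields the plain string, which serializes fine
print(json.dumps({"pad_token": t.eos_token}))
```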
|
<python><huggingface-transformers><gpt-2>
|
2023-01-25 17:21:03
| 1
| 1,002
|
AKMalkadi
|
75,237,532
| 386,861
|
I don't know why I get a parsing error with a list within a list in Python
|
<p>I created a list that looks like this: two lists within a bigger list.</p>
<pre><code>topics = [gender_subset = [3, 4],
age_subset = [5, 6, 7, 8, 9, 10, 11]]
for t in topics:
print(t)
</code></pre>
<p>But get this error:</p>
<pre><code>Cell In[49], line 1
topics = [gender_subset = [3, 4],
^
SyntaxError: invalid syntax
</code></pre>
<p>Why?</p>
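The `name = value` form is only legal in assignments and keyword arguments, not inside a list literal, hence the SyntaxError. A dict expresses the intended naming (a sketch):

```python
# A dict keeps the subset names; plain nested lists would also parse fine
topics = {
    "gender_subset": [3, 4],
    "age_subset": [5, 6, 7, 8, 9, 10, 11],
}

for name, t in topics.items():
    print(name, t)
```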
|
<python>
|
2023-01-25 17:13:03
| 3
| 7,882
|
elksie5000
|
75,237,528
| 9,640,238
|
Group by first occurrence of each value in a pandas dataframe
|
<p>I have a pandas dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>user</th>
<th>action</th>
<th>timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Jim</td>
<td>start</td>
<td>12/10/2022</td>
</tr>
<tr>
<td>2</td>
<td>Jim</td>
<td>start</td>
<td>12/10/2022</td>
</tr>
<tr>
<td>3</td>
<td>Jim</td>
<td>end</td>
<td>2/2/2022</td>
</tr>
<tr>
<td>4</td>
<td>Linette</td>
<td>start</td>
<td>8/18/2022</td>
</tr>
<tr>
<td>5</td>
<td>Linette</td>
<td>start</td>
<td>3/24/2022</td>
</tr>
<tr>
<td>6</td>
<td>Linette</td>
<td>end</td>
<td>8/27/2022</td>
</tr>
<tr>
<td>7</td>
<td>Rachel</td>
<td>start</td>
<td>2/7/2022</td>
</tr>
<tr>
<td>8</td>
<td>Rachel</td>
<td>end</td>
<td>1/4/2023</td>
</tr>
<tr>
<td>9</td>
<td>James</td>
<td>start</td>
<td>6/12/2022</td>
</tr>
<tr>
<td>10</td>
<td>James</td>
<td>end</td>
<td>5/14/2022</td>
</tr>
<tr>
<td>11</td>
<td>James</td>
<td>start</td>
<td>11/28/2022</td>
</tr>
<tr>
<td>12</td>
<td>James</td>
<td>start</td>
<td>8/9/2022</td>
</tr>
<tr>
<td>13</td>
<td>James</td>
<td>end</td>
<td>2/15/2022</td>
</tr>
</tbody>
</table>
</div>
<p>For each user, there can be more than one start event, but only one end. Imagine that they sometimes need to start a book over again, but only finish it once.</p>
<p>What I want is to calculate the time difference between the <em>first</em> start and the end, so keep, for each user, the <em>first</em> occurrence of "start" and "end" in each group.</p>
<p>Any hint?</p>
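A hedged sketch of one approach: keep the first occurrence of each action per user (in row order, since the timestamps themselves are not monotonic), pivot, and subtract. The miniature frame below is illustrative, not the full table:

```python
import pandas as pd

df = pd.DataFrame({
    "user": ["Jim", "Jim", "Jim", "Rachel", "Rachel"],
    "action": ["start", "start", "end", "start", "end"],
    "timestamp": pd.to_datetime(
        ["12/10/2022", "12/11/2022", "2/2/2022", "2/7/2022", "1/4/2023"]),
})

# first() keeps the first row per (user, action) in original row order
firsts = df.groupby(["user", "action"], sort=False)["timestamp"].first().unstack()
firsts["delta"] = firsts["end"] - firsts["start"]
```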
|
<python><pandas><dataframe>
|
2023-01-25 17:12:37
| 1
| 2,690
|
mrgou
|
75,237,390
| 12,934,163
|
Decision Tree Based Survival Analysis with time-varying covariates in Python
|
<p>I'd like to predict the remaining survival time with time-varying covariates using Python. I already used <a href="https://lifelines.readthedocs.io/en/latest/Time%20varying%20survival%20regression.html" rel="nofollow noreferrer">lifelines' CoxTimeVaryingFitter</a> and would like to compare it to a decision tree based approach, such as <a href="https://scikit-survival.readthedocs.io/en/stable/api/generated/sksurv.ensemble.RandomSurvivalForest.html#sksurv.ensemble.RandomSurvivalForest.fit" rel="nofollow noreferrer">Random Survival Forest</a>. From <a href="https://arxiv.org/abs/2006.00567" rel="nofollow noreferrer">this paper</a> I understand, that the "normal" Random Survival Forest is not able to cope with time-varying covariates, but there are extensions to solve that. I could not find any solutions implemented in Python. Have I missed something? I'd also appreciate advice for other modules that can cope with time-varying covariates.</p>
|
<python><random-forest><survival-analysis><lifelines><scikit-survival>
|
2023-01-25 17:01:19
| 0
| 885
|
TiTo
|
75,237,274
| 5,199,660
|
Pandas String Series, return string if length equals number, otherwise return empty string
|
<p>I have a Pandas string series like the following:</p>
<pre><code>s = pd.Series(["12345678.0","45678912.0", "0", "2983129416.0", "62441626.0"])
</code></pre>
<p>I first of all must cut the decimal part, and then...</p>
<pre><code>result = s.str.split(".", 1, expand=True)[0]
</code></pre>
<p>I want to find a way to return the string if its length is 8, and otherwise return an empty string: ""</p>
<pre><code>s[s.str.len() == 8]
</code></pre>
<p>Of course, this would only keep the strings whose length is 8, but I need empty strings in the positions where the value is not 8 characters long. I couldn't figure out by myself how this should be done properly, so thanks in advance for all the ideas!</p>
<p>Expected result:</p>
<pre><code>s = pd.Series(["12345678","45678912", "", "", "62441626"])
</code></pre>
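A hedged sketch of one way to combine the two steps, using `Series.where` to blank out non-matching lengths:

```python
import pandas as pd

s = pd.Series(["12345678.0", "45678912.0", "0", "2983129416.0", "62441626.0"])

# Drop the decimal part, then keep only 8-character values; where() replaces
# every row failing the condition with the fill value ""
trimmed = s.str.split(".").str[0]
result = trimmed.where(trimmed.str.len() == 8, "")
print(result.tolist())  # ['12345678', '45678912', '', '', '62441626']
```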
|
<python><pandas><string><conditional-statements><series>
|
2023-01-25 16:49:57
| 3
| 656
|
Eve
|
75,237,246
| 5,789,997
|
Filter list-valued columns
|
<p>I have this kind of dataset:</p>
<pre><code>id value cond1 cond2
a 1 ['a','b'] [1,2]
b 1 ['a'] [1]
a 2 ['b'] [2]
a 3 ['a','b'] [1,2]
b 3 ['a','b'] [1,2]
</code></pre>
<p>I would like to extract all the rows using the conditions, something like</p>
<pre><code>df.loc[(df['cond1']==['a','b']) & (df['cond2']==[1,2])
</code></pre>
<p>this syntax produces however</p>
<pre><code>ValueError: ('Lengths must match to compare', (100,), (1,))
</code></pre>
<p>or this if I use <code>isin</code>:</p>
<pre><code>SystemError: <built-in method view of numpy.ndarray object at 0x7f1e4da064e0> returned a result with an error set
</code></pre>
<p>How to do it right?</p>
<p>Thanks!</p>
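Because each cell holds a Python list, the equality test needs to run per row rather than as a vectorized comparison; a sketch on a miniature version of the frame:

```python
import pandas as pd

df = pd.DataFrame({
    "id":    ["a", "b", "a"],
    "cond1": [["a", "b"], ["a"], ["a", "b"]],
    "cond2": [[1, 2], [1], [1, 2]],
})

# apply() compares one cell at a time, sidestepping the
# "Lengths must match to compare" broadcast error
mask = df["cond1"].apply(lambda v: v == ["a", "b"]) & \
       df["cond2"].apply(lambda v: v == [1, 2])
subset = df[mask]
```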
|
<python><pandas>
|
2023-01-25 16:47:57
| 1
| 1,063
|
Ilja
|
75,237,101
| 2,635,863
|
empirical distribution from data - python
|
<p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html" rel="nofollow noreferrer">wasserstein_distance function</a> requires that the input data are "<strong>Values observed in the (empirical) distribution</strong>".</p>
<p>My data arrays range between -4 and 8:</p>
<pre><code>x = np.array([0.12,-1.29,-3.23,-3.21,-0.13, 1.52, 4.45, 6.45, 5.17, 0.11, 3.48, 5.98, 7.55])
y = np.array([3.54, 2.42,-4.43,-3.76, 0.43, 0.45, 2.56, 7.61, 4.47, 1.36, 2.34, 7.78, 7.13])
</code></pre>
<p>how can I create an empirical distribution of <code>x</code> and <code>y</code>?</p>
<p>I tried</p>
<pre><code>from statsmodels.distributions.empirical_distribution import ECDF
ecdf_x = ECDF(x)
x_ecdf = ecdf_x.y
ecdf_y = ECDF(y)
y_ecdf = ecdf_y.y
wasserstein_distance(x_ecdf, y_ecdf)
</code></pre>
<p>Would <code>x_ecdf</code> and <code>y_ecdf</code> be valid inputs to the function?</p>
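Worth noting: `wasserstein_distance` treats its inputs as raw observed samples and builds the empirical distributions internally, so the arrays can be passed directly without any explicit ECDF step; a sketch:

```python
import numpy as np
from scipy.stats import wasserstein_distance

x = np.array([0.12, -1.29, -3.23, -3.21, -0.13])
y = np.array([3.54, 2.42, -4.43, -3.76, 0.43])

# The raw samples are the "values observed in the (empirical) distribution";
# no explicit ECDF construction is required
d = wasserstein_distance(x, y)
```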
|
<python><scipy>
|
2023-01-25 16:37:28
| 0
| 10,765
|
HappyPy
|
75,236,978
| 2,276,188
|
How do you add the degree to every vertex in a list in Gremlin using python?
|
<p>I'm trying to add the degree (number of vertexes connected to the given vertexes) to each one of the vertexes in a list.</p>
<p>Generating the degree for each vertex works-</p>
<pre class="lang-py prettyprint-override"><code>c.g.V(ids).as_('vertex'). \
both(). \
groupCount(). \
by(select('vertex')).toList()
</code></pre>
<p>Saving a constant degree to all of them works</p>
<pre class="lang-py prettyprint-override"><code>c.g.V(ids).as_('vertex'). \
both().groupCount().by(select('vertex')).unfold(). \
sideEffect(
__.select(Column.keys).property(Cardinality.single, "degree", 1)
).toList()
</code></pre>
<p>Though when I try to save the degree itself I get errors.
Note that the query groups the vertexes, and we have a dictionary from vertex to its degree. In the <code>sideEffect</code> function, I select the key - the vertex, and try to save the value into it.</p>
<p>Queries I have tried-</p>
<pre class="lang-py prettyprint-override"><code>c.g.V(ids).as_('vertex'). \
both().groupCount().by(select('vertex')).unfold(). \
sideEffect(
        __.store('x').select(Column.keys).property(Cardinality.single, "degree", cap('x')).select(Column.values)
).toList()
</code></pre>
<pre class="lang-py prettyprint-override"><code>c.g.V(ids).as_('vertex'). \
both().groupCount().by(select('vertex')).unfold(). \
sideEffect(
__.store('x').select(Column.keys).property(Cardinality.single, "degree", __.select(Column.values))
).toList()
</code></pre>
<p>Does anyone know what is wrong with my queries? I basically want to extract <code>Column.values</code> from the group and insert it into the property.</p>
<p><strong>Edit:</strong>
The current implementation is as suggested in the first solution -</p>
<pre><code>c.g.V(ids).property(Cardinality.single,
"degree",
__.both()
.count()).iterate()
</code></pre>
<p>The reason I'm working on this is that the current implementation was really slow (the actual query has many <code>has</code> and <code>hasLabel</code> steps, which make it slower).</p>
<p>I've noticed that the first query I've attached is much much faster than the one currently used, and that's why I'm trying to use it.</p>
|
<python><graph><gremlin><amazon-neptune>
|
2023-01-25 16:27:56
| 1
| 365
|
Guy
|
75,236,869
| 8,056,248
|
Get partitioned indices of sorted 2D list
|
<p>I have a "2D" list and I want to partition/group the list indices based on the first value of each nested list, and then return the sorted indices of the partitions/groups based on the second value in each nested list. For example:</p>
<pre><code>test = [[1, 2], [1, 1], [1, 5], [2, 3], [2, 1], [1, 10]]
sorted_partitions(test)
>>> [[1, 0, 2, 5], [4, 3]]
# because the groupings are [(1, [1, 1]), (0, [1, 2]), (2, [1, 5]), (5, [1, 10]), (4, [2, 1]), (3, [2, 3])]
</code></pre>
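A hedged sketch of one way to implement `sorted_partitions` with `sorted` plus `itertools.groupby`:

```python
from itertools import groupby

def sorted_partitions(rows):
    # Sort indices by (first value, second value), then group consecutive
    # indices that share the same first value
    order = sorted(range(len(rows)), key=lambda i: (rows[i][0], rows[i][1]))
    return [list(g) for _, g in groupby(order, key=lambda i: rows[i][0])]

test = [[1, 2], [1, 1], [1, 5], [2, 3], [2, 1], [1, 10]]
print(sorted_partitions(test))  # [[1, 0, 2, 5], [4, 3]]
```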
|
<python>
|
2023-01-25 16:20:31
| 2
| 1,283
|
Andrew Holmgren
|
75,236,857
| 7,500,995
|
Django REST framework - parse uploaded csv file
|
<p>I have set up a Django REST framework endpoint that allows me to upload a csv file.</p>
<p>The serializers.py looks like this:</p>
<pre><code>from rest_framework import serializers
class UploadSerializer(serializers.Serializer):
file_uploaded = serializers.FileField()
class Meta:
fields = ['file_uploaded']
</code></pre>
<p>In my views.py file, I'm trying to read data from the uploaded csv like this:</p>
<pre><code>class UploadViewSet(viewsets.ViewSet):
serializer_class = UploadSerializer
def create(self, request):
file_uploaded = request.FILES.get('file_uploaded')
with open(file_uploaded, mode ='r')as file:
csvFile = csv.reader(file)
for lines in csvFile:
print(lines)
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>... line 37, in create
with open(file_uploaded, mode ='r') as file:
TypeError: expected str, bytes or os.PathLike object, not InMemoryUploadedFile
</code></pre>
<p>I have checked <code>type()</code> of file_uploaded and it is <code><class 'django.core.files.uploadedfile.InMemoryUploadedFile'></code></p>
<p>How can I read this file into dictionary or dataframe so I can extract the data I need from it?</p>
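Since the upload is already an open binary file-like object, it can be wrapped in a text decoder instead of being passed to `open()`. A minimal sketch with `BytesIO` standing in for the uploaded file:

```python
import csv
import io

# BytesIO mimics InMemoryUploadedFile: an open binary file-like object
file_uploaded = io.BytesIO(b"name,age\nAda,36\n")

# TextIOWrapper decodes the byte stream so csv.reader can consume it
reader = csv.reader(io.TextIOWrapper(file_uploaded, encoding="utf-8"))
rows = list(reader)
print(rows)  # [['name', 'age'], ['Ada', '36']]
```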
|
<python><django><django-rest-framework>
|
2023-01-25 16:19:53
| 2
| 771
|
Marin Leontenko
|
75,236,850
| 2,618,377
|
Version mismatch between scipy and poetry
|
<p>I'm using the poetry dependency manager for some of my development (RTL-SDR application). However, when I try to add scipy to the environment (calling <code>poetry add scipy</code> inside Windows 11 Powershell), I get the following output:</p>
<pre><code> Using version ^1.10.0 for scipy
Updating dependencies
Resolving dependencies...
The current project's Python requirement (>=3.11,<4.0) is not compatible with some of the required packages Python requirement:
- scipy requires Python <3.12,>=3.8, so it will not be satisfied for Python >=3.12,<4.0
Because no versions of scipy match >1.10.0,<2.0.0
and scipy (1.10.0) requires Python <3.12,>=3.8, scipy is forbidden.
So, because sdr1 depends on scipy (^1.10.0), version solving failed.
β’ Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
For scipy, a possible solution would be to set the `python` property to ">=3.11,<3.12"
https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
https://python-poetry.org/docs/dependency-specification/#using-environment-markers
</code></pre>
<p>However, using <code>py -V</code>, I verify that my python version is 3.11.0. So, everything should work, right?</p>
<p>Suggestions on resolving this would be most appreciated.</p>
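As the error message itself suggests, one hedged fix is to narrow the project's Python range in `pyproject.toml` so it no longer includes versions that scipy cannot satisfy (a sketch):

```toml
[tool.poetry.dependencies]
# Upper bound <3.12 matches scipy 1.10's own requirement (<3.12,>=3.8);
# poetry solves for the whole declared range, not the interpreter in use
python = ">=3.11,<3.12"
scipy = "^1.10.0"
```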
|
<python><scipy><python-poetry>
|
2023-01-25 16:19:00
| 2
| 421
|
Pat B.
|
75,236,722
| 5,091,720
|
Flask How to make a tree in jinja2 html?
|
<p>I have a dictionary in Python that represents a tree-style parent-child relationship. I want to display the dictionary on the webpage. FYI: the dictionary will end up being all names and will vary based on the person entering info.</p>
<p>Example dictionary from Python:</p>
<pre><code>dict_ = {'A':['B', 'C'], 'B':['D','E'], 'C':['F', 'G', 'H'], 'E':['I', 'J']}
root = 'A'
</code></pre>
<p>The desired HTML output display would be.</p>
<pre><code>A
├── B
│   ├── D
│   └── E
│       ├── I
│       └── J
└── C
    ├── F
    ├── G
    └── H
</code></pre>
<p>I'm not sure how to get this type of display using Flask, Jinja, or other options like JavaScript. Some guidance, or partial or full answers, would be great. (I did learn how to use treelib to display it in the terminal, but not in HTML.)</p>
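One hedged approach is to render the connector strings in Python first and hand the finished text to the template inside a `<pre>` block; a sketch of the recursion (the function name is illustrative):

```python
def render_tree(tree, root):
    lines = [root]

    def walk(node, prefix):
        children = tree.get(node, [])
        for i, child in enumerate(children):
            last = i == len(children) - 1
            # Last child gets the corner connector; siblings get a tee
            lines.append(prefix + ("└── " if last else "├── ") + child)
            walk(child, prefix + ("    " if last else "│   "))

    walk(root, "")
    return "\n".join(lines)

dict_ = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G', 'H'], 'E': ['I', 'J']}
print(render_tree(dict_, 'A'))
```

In the template, `{{ tree_text }}` inside a `<pre>` tag would then preserve the monospace layout; Jinja's recursive `{% for %}` loops are an alternative if the markup must be built in the template itself.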
|
<python><jinja2>
|
2023-01-25 16:08:52
| 1
| 2,363
|
Shane S
|
75,236,716
| 6,837,658
|
Any way to get rid of `math.floor` for positive odd integers with `sympy.simplify`?
|
<p>I'm trying to simplify some expressions involving positive odd integers with sympy. But sympy refuses to expand <code>floor</code>, which makes the simplification hard to proceed with.</p>
<p>To be specific, <code>x</code> is a positive odd integer (actually in my particular use case, the constraint is even stricter. But sympy can only do odd and positive, which is fine). <code>x // 2</code> should be always equal to <code>(x - 1) / 2</code>. Example code here:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import Symbol, simplify
x = Symbol('x', odd=True, positive=True)
expr = x // 2 - (x - 1) / 2
print(simplify(expr))
</code></pre>
<p>prints <code>-x/2 + floor(x/2) + 1/2</code>. Ideally it should print <code>0</code>.</p>
<p>What I've tried so far:</p>
<ol>
<li>Simplify <code>(x - 1) // 2 - (x - 1) / 2</code>. Turns out to be 0.</li>
<li>Multiply the whole thing by 2: <code>2 * (x // 2 - (x - 1) / 2)</code>. Gives me: <code>-x + 2*floor(x/2) + 1</code>.</li>
<li>Try to put more weights on the <code>FLOOR</code> op by <a href="https://docs.sympy.org/latest/modules/simplify/simplify.html" rel="nofollow noreferrer">customizing</a> the <code>measure</code>. No luck.</li>
<li>Use <code>sympy.core.evaluate(False)</code> context when creating the expression. Nuh.</li>
<li>Tune other parameters like <code>ratio</code>, <code>rational</code>, and play with other function like <code>expand</code>, <code>factor</code>, <code>collect</code>. Doesn't work either.</li>
</ol>
<p><em>EDIT:</em> Wolfram alpha can <a href="https://www.wolframalpha.com/input?i=simplify%28floor%28x%2F2%29-%28x-1%29%2F2%29%2C+x%3E0+and+x%252%3D1" rel="nofollow noreferrer">do this</a>.</p>
<p>I tried to look at the assumptions of <code>x</code> along with some expressions. It surprises me that <code>((x - 1) / 2).is_integer</code> returns None, which means unknown.</p>
<p>I'm running out of clues. I'm even looking for alternatives to sympy. Any ideas?</p>
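One workaround that sidesteps the stuck `floor` entirely is to encode oddness structurally, substituting `x = 2*n + 1` for a nonnegative integer `n`; sympy then pulls the integer term out of the floor on its own (a sketch):

```python
from sympy import Symbol, simplify, floor

n = Symbol('n', integer=True, nonnegative=True)
x = 2*n + 1  # every positive odd integer has this form

# floor(n + 1/2) reduces to n for integer n, so the expression collapses
expr = floor(x / 2) - (x - 1) / 2
print(simplify(expr))  # 0
```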
|
<python><sympy><symbolic-math>
|
2023-01-25 16:08:38
| 2
| 621
|
Scott Chang
|
75,236,681
| 19,283,541
|
Python AST - finding particular named function calls
|
<p>I'm trying to analyse some Python code to identify where specific functions are being called and which arguments are being passed.</p>
<p>For instance, suppose I have an ML script that contains a <code>model.fit(X_train,y_train)</code>. I want to find this line in the script, identify what object is being fit (i.e., <code>model</code>), and to identify <code>X_train</code> and <code>y_train</code> as the arguments (as well as any others).</p>
<p>I'm new to AST, so I don't know how to do this in an efficient way.</p>
<p>So far, I've been able to locate the line in question by iterating through a list of child nodes (using <code>ast.iter_child_nodes</code>) until I arrive at the <code>ast.Call</code> object, and then calling its <code>func.attr</code>, which returns <code>"fit"</code>. I can also get <code>"X_train"</code> and <code>"y_train"</code> with <code>args</code>.</p>
<p>The problem is that I have to know where it is in advance in order to do it this way, so it's not particularly useful. The idea would be for it to obtain the information I'm looking for automatically.</p>
<p>Additionally, I have not been able to find a way to determine that <code>model</code> is what is calling <code>fit</code>.</p>
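For reference, `ast.walk` visits every node in the tree, so the call can be found without knowing its location in advance, and the receiver is available as `func.value`; a sketch:

```python
import ast

src = "model.fit(X_train, y_train)"

found = []
for node in ast.walk(ast.parse(src)):
    # A method call like model.fit(...) is a Call whose func is an Attribute
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "fit"):
        # func.value is the receiver; a Name node means a simple variable
        receiver = node.func.value.id if isinstance(node.func.value, ast.Name) else None
        args = [a.id for a in node.args if isinstance(a, ast.Name)]
        found.append((receiver, args))

print(found)  # [('model', ['X_train', 'y_train'])]
```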
|
<python><abstract-syntax-tree>
|
2023-01-25 16:07:05
| 1
| 309
|
radishapollo
|
75,236,604
| 15,376,262
|
Check if column name of a pandas df starts with "name" and split that column based on existing white space
|
<p>Let's say I have a pandas dataframe that looks like this:</p>
<pre><code>df = pd.read_json('{"id":{"0":"21 Delta","1":"38 Bravo","2":"Charlie 37","3":"Alpha 56"},"name_1":{"0":"Tom","1":"Nick","2":"Chris","3":"David 56"},"name_2":{"0":"Peter 17","1":"Emma 53","2":"Jeff 11","3":"Oscar"},"name_3":{"0":"Jeffrey","1":"Olivier 12","2":null,"3":null},"name_4":{"0":"Henry 23","1":null,"2":null,"3":null}}')
df
id name_1 name_2 name_3 name_4
0 21 Delta Tom Peter 17 Jeffrey Henry 23
1 38 Bravo Nick Emma 53 Olivier 12 None
2 Charlie 37 Chris Jeff 11 None None
3 Alpha 56 David 56 Oscar None None
</code></pre>
<p>What I would like to do is to iterate over the columns in this df and check if the column name starts with <code>name</code>. If so, I would like to take the number after the white space in each row of that particular column and put it in an extra column called <code>age_</code> followed by the same incrementing number, like so:</p>
<pre><code> id name_1 name_2 name_3 name_4 age_1 age_2 age_3 age_4
0 21 Delta Tom Peter 17 Jeffrey Henry 23 None 17 None 23
1 38 Bravo Nick Emma 53 Olivier 12 None None 53 12 None
2 Charlie 37 Chris Jeff 11 None None None 11 None None
3 Alpha 56 David 56 Oscar None None 56 None None None
</code></pre>
<p>So far I came up with this, but I struggle with how to get to the end result:</p>
<pre><code>for column in df.columns:
if column.startswith("name"):
age = df[column].str.split(" ").str.get(1)
</code></pre>
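Building on that loop, the matching `age_` column name can be derived from the name column's own suffix; a sketch on a cut-down frame:

```python
import pandas as pd

df = pd.DataFrame({"id": ["21 Delta"], "name_1": ["Tom"], "name_2": ["Peter 17"]})

for column in df.columns:
    if column.startswith("name"):
        # Reuse the numeric suffix of name_<n> for the new age_<n> column;
        # rows without a trailing number yield NaN, i.e. no age
        df["age_" + column.split("_")[1]] = df[column].str.split(" ").str.get(1)
```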
|
<python><pandas>
|
2023-01-25 15:59:47
| 2
| 479
|
sampeterson
|
75,236,567
| 7,839,535
|
What is the reliable way to select the current directory's Python with pyenv + pipenv or pip
|
<p>I have two problems:</p>
<p>pipenv ignores pyenv settings;</p>
<p>my pip shortcuts are ignoring pyenv settings too:</p>
<pre class="lang-bash prettyprint-override"><code>iuri@tartaruga:~$ pyenv version
3.9.16 (set by /home/iuri/.pyenv/version)
iuri@tartaruga:~$ pyenv local 3.10
iuri@tartaruga:~$ pyenv version
3.10.9 (set by /.python-version)
iuri@tartaruga:~$ pyenv exec python -m pip --version
pip 22.3.1 from /home/iuri/.local/lib/python3.10/site-packages/pip (python 3.10)
iuri@tartaruga:~$ pip --version
pip 22.3.1 from /home/iuri/.local/lib/python3.9/site-packages/pip (python 3.9)
iuri@tartaruga:~$ pyenv versions
system
3.9.14
3.9.16
* 3.10.9 (set by /.python-version)
3.11.1
iuri@tartaruga:~$ pyenv exec python -m pipenv install torch
Creating a virtualenv for this project...
Pipfile: Pipfile
Using /home/iuri/.pyenv/versions/3.11.1/bin/python3 (3.11.1) to create virtualenv...
^C
iuri@tartaruga:~$ python --version
Python 3.10.9
iuri@tartaruga:~$ pyenv exec python --version
Python 3.10.9
iuri@tartaruga:~$ pyenv exec python -m pip --version
pip 22.3.1 from /home/iuri/.local/lib/python3.10/site-packages/pip (python 3.10)
iuri@tartaruga:~$ python -m pip --version
pip 22.3.1 from /home/iuri/.local/lib/python3.10/site-packages/pip (python 3.10)
iuri@tartaruga:~$ pip --version
pip 22.3.1 from /home/iuri/.local/lib/python3.9/site-packages/pip (python 3.9)
iuri@tartaruga:~$ pip3
pip3 pip3.10 pip3.11 pip3.9
iuri@tartaruga:~$ pip3.10 --version
pip 22.3.1 from /home/iuri/.local/lib/python3.9/site-packages/pip (python 3.9)
iuri@tartaruga:~$ pyenv global
3.9.16
</code></pre>
<p>I know that pipenv can be forced into a python version easily, but why are my linked pip binaries not respecting pyenv?</p>
|
<python><pip><virtualenv><pipenv><pyenv>
|
2023-01-25 15:57:30
| 0
| 471
|
Iuri Guilherme
|
75,236,563
| 5,152,497
|
Pandas DataFrame plot, colors are not unique
|
<p>According to the <strong>Pandas</strong> <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">manual</a>, the parameter <em><strong>Colormap</strong></em> can be used to select colors from a matplotlib colormap object. However, in the case of a bar diagram, the color of each bar otherwise needs to be selected manually. This is not practical: if you have a lot of bars, the manual effort is annoying. My expectation is that if no color is selected, each object/class should get a unique color representation. Unfortunately, this is not the case. The colors are repetitive; only 10 unique colors are provided.</p>
<p>Code for reproduction:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(100, 25)), columns=list('ABCDEFGHIJKLMNOPQRSTUVWXY'))
df.set_index('A', inplace=True)
df.plot(kind='bar', stacked=True, figsize=(20, 10))
plt.title("some_name")
plt.savefig("some_name" + '.png')
</code></pre>
<p>Does somebody have any idea how to get a unique color for each class in the diagram?
Thanks in advance</p>
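<p>Matplotlib's default property cycle contains only 10 colors, which is why they repeat. One hedged workaround (a sketch, not the only option — <code>gist_rainbow</code> is just one continuous colormap choice) is to sample a colormap at as many points as there are columns and pass the result via the <code>color</code> argument:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for saving figures
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 25)),
                  columns=list('ABCDEFGHIJKLMNOPQRSTUVWXY'))
df.set_index('A', inplace=True)

# one distinct color per stacked class, sampled from a continuous colormap
n = len(df.columns)
colors = plt.cm.gist_rainbow(np.linspace(0, 1, n))  # (n, 4) RGBA array

df.plot(kind='bar', stacked=True, figsize=(20, 10),
        color=[tuple(c) for c in colors])
plt.title("some_name")
plt.savefig("some_name.png")
```

Sampling evenly over [0, 1] guarantees the colors are distinct for any reasonable column count, unlike the repeating 10-color default cycle.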
|
<python><pandas><plot><unique>
|
2023-01-25 15:57:21
| 1
| 3,487
|
JΓΌrgen K.
|
75,236,516
| 7,531,433
|
How can I type hint arbitrary generic type aliases in Python?
|
<p>I've implemented a Python function, which takes a generic type alias, from which it extracts the origin and arguments for further processing.</p>
<p>Now I want to use a static type checker (MyPy) and want to provide a type hint for the <code>alias</code> argument.
One idea would be to use the <a href="https://docs.python.org/3/library/types.html#types.GenericAlias" rel="nofollow noreferrer"><code>types.GenericAlias</code></a> class:</p>
<pre class="lang-py prettyprint-override"><code>import typing
from types import GenericAlias
def foo(alias: GenericAlias):
origin = typing.get_origin(alias)
args = typing.get_args(alias)
...
</code></pre>
<p>Now I want to use a <a href="https://docs.python.org/3/library/typing.html#typing.Callable" rel="nofollow noreferrer"><code>typing.Callable</code></a> type alias as an argument for the function:</p>
<pre class="lang-py prettyprint-override"><code>x = typing.Callable[[int, int], float]
foo(x)
</code></pre>
<p>Unfortunately, MyPy now reports an error:</p>
<pre class="lang-none prettyprint-override"><code>Argument 1 to "foo" has incompatible type "object"; expected "GenericAlias"
</code></pre>
<p>The same happens when I use <code>type</code> instead of <code>GenericAlias</code> as the type hint for <code>alias</code>.</p>
<p>Investigating further, I get the following:</p>
<pre><code>>>> isinstance(x, type)
False
>>> isinstance(x, GenericAlias)
False
>>> type(x)
<class 'typing._CallableGenericAlias'>
>>> isinstance(x, typing._CallableGenericAlias)
True
</code></pre>
<p>So, apparently, neither <code>type</code> nor <code>GenericAlias</code> cover the generic type alias when using <code>Callable</code> and I can't use them to type hint generic type aliases in general.
Is there anything else that I can use, or am I stuck with using <a href="https://docs.python.org/3/library/typing.html#typing.Any" rel="nofollow noreferrer"><code>Any</code></a>?</p>
<p>I would also be interested in an explanation for why <code>typing._CallableGenericAlias</code> is not a subtype of <code>types.GenericAlias</code>.</p>
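<p>Until a precise static type for arbitrary aliases exists, one pragmatic option (a sketch, not an authoritative answer) is to annotate with <code>object</code> (or <code>typing.Any</code>) and let the runtime helpers do the real work — they accept every alias flavor, <code>Callable</code> included:</p>

```python
import typing
from collections.abc import Callable as AbcCallable

def foo(alias: object):
    """Annotated with `object`; typing.get_origin/get_args work regardless
    of which internal alias class the argument actually is."""
    return typing.get_origin(alias), typing.get_args(alias)

origin, args = foo(typing.Callable[[int, int], float])
# origin is collections.abc.Callable; args == ([int, int], float)
```

This sidesteps the fact that `typing._CallableGenericAlias` is a private `typing` class unrelated to `types.GenericAlias`, which only covers builtin-subscription aliases like `list[int]`.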
|
<python><generics><python-typing><mypy>
|
2023-01-25 15:54:09
| 0
| 709
|
tierriminator
|
75,236,509
| 6,367,971
|
Extract MMDDYYYY date from dataframe rows
|
<p>I have a dataframe where some rows of data contain a long string with a date in <code>MMDDYYYY</code> format in the middle.</p>
<pre><code>ID
-
blah
unc.abc.155gdgeh0t4ngs8_XYZ_01252023_US_C_Home_en-us_RS_Nat'l-vs-UNC
blah
unc.abc.52gst4363463463_RST_01272023_US_C_Away_en-us_RS_Nat'l-vs-UNC
unc.abc.534gs23ujgf9d8f_UVX_02052023_US_C_Away_en-us_RS_TEST-vs-TEST
unc.abc.5830ugjshg5345s_AAA_11012023_CA_C_Home_en-us_RS_Reg-vs-HBS
unc.abc.fs44848fvs8gs82_MBB_12252023_US_C_Home_en-us_RS_Nat'l-vs-UNC
unc.abc.fe0wjv-578244fs_FFS_04222023_CA_C_Away_en-us_RS_Nat'l-vs-UNC
</code></pre>
<p>I want to use the first date that appears in that column (<code>01252023</code>) as part of the filename, so how would I extract it and set it to a variable?</p>
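<p>Since the 8-digit date always sits between underscores, a regex extraction is one possible approach (a sketch against a trimmed-down copy of the column):</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": [
    "blah",
    "unc.abc.155gdgeh0t4ngs8_XYZ_01252023_US_C_Home_en-us_RS_Nat'l-vs-UNC",
    "unc.abc.52gst4363463463_RST_01272023_US_C_Away_en-us_RS_Nat'l-vs-UNC",
]})

# pull the 8-digit token flanked by underscores; non-matching rows become NaN
dates = df["ID"].str.extract(r"_(\d{8})_")[0]
first_date = dates.dropna().iloc[0]   # "01252023"
```

`first_date` can then be interpolated into the filename, e.g. `f"report_{first_date}.csv"`.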
|
<python><pandas><extract>
|
2023-01-25 15:53:34
| 2
| 978
|
user53526356
|
75,236,478
| 2,587,904
|
how to use vectorized H3 functions from h3-py?
|
<pre><code>import numpy as np
lats = np.random.uniform(0, 90, 1_000_000)
lons = np.random.uniform(0, 90, 1_000_000)
import h3
import h3.api.numpy_int
</code></pre>
<p>Passing numpy arrays straight away:</p>
<pre><code>h3.api.numpy_int.geo_to_h3(lats, lons, 6)
# fails with: TypeError: only size-1 arrays can be converted to Python scalars
</code></pre>
<p>Following an example from their github issues:</p>
<pre><code>np.asarray(h3.geo_to_h3_vect(lats, lons, 10))
# fails with: AttributeError: module 'h3' has no attribute 'geo_to_h3_vect'
</code></pre>
<p>The version of h3 is: 3.7.6</p>
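<p>A fallback that works with any scalar-only API is <code>numpy.vectorize</code>: it still loops in Python (no real speedup over a list comprehension) but accepts arrays directly. Since h3 may not be importable here, the pattern is sketched with a stand-in scalar function — in practice you would write <code>geo_to_h3_vec = np.vectorize(h3.geo_to_h3)</code>:</p>

```python
import numpy as np

def geo_to_cell(lat, lon, res):
    # stand-in for the scalar h3.geo_to_h3(lat, lon, res)
    return f"{round(lat, 1)}:{round(lon, 1)}:{res}"

geo_to_cell_vec = np.vectorize(geo_to_cell)

lats = np.array([10.04, 20.06])
lons = np.array([30.01, 40.09])
cells = geo_to_cell_vec(lats, lons, 6)   # element-wise over both arrays
```

The scalar resolution argument is broadcast automatically across both coordinate arrays.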
|
<python><h3>
|
2023-01-25 15:51:32
| 1
| 17,894
|
Georg Heiler
|
75,236,229
| 1,977,609
|
python version interpretation indiscrepencies
|
<p>I am following along with the online book "data science from the command line" by ___. I have no experience with Python, so I am unfamiliar with its syntax and interpretation intricacies.</p>
<p>While running this code, copied verbatim from the book:</p>
<pre><code>import sys
CYCLE_OF_15 = ["fizzbuzz", None, None, "fizz", None,
"buzz", "fizz", None, None, "fizz",
"buzz", None, "fizz", None, None]
def fizz_buzz(n: int) -> str:
return CYCLE_OF_15[n % 15] or str(n)
if __name__ == "__main__":
try:
while (n: sys.stdin.readline()):
print(fizz_buzz(int(n)))
except:
        pass
</code></pre>
<p>I encounter this error in Python 2:</p>
<pre><code>File "fizzbuzz.py", line 8
def fizz_buzz(n: int) -> str:
^
SyntaxError: invalid syntax
</code></pre>
<p>and this error in Python 3:</p>
<pre><code>Β΄ File "/home/wan/saved-websites/datascience/ch04/fizzbuzz.py", line 13
while (n: sys.stdin.readline()):
^
SyntaxError: invalid syntaxΒ΄
</code></pre>
<p>Why am I getting different error references for different versions of Python? I assume that it's because of the order in which the interpreter reads the file, but a definitive answer would be better.</p>
<p>Bonus points: explain why the code itself doesn't run. Using these two error messages I've managed to triangulate that the error has to do with how n is assigned or referenced with the colon : operator. Is my hypothesis correct?</p>
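<p>A hedged note on both errors: Python 2 never supported function annotations, so it stops at the earlier <code>def fizz_buzz(n: int) -> str:</code> line, while Python 3 parses annotations fine and only fails later at <code>while (n: ...)</code>, which is invalid everywhere. The book almost certainly uses the assignment expression <code>:=</code> (the "walrus" operator, Python 3.8+), and the <code>=</code> was lost in transcription. With that one change the loop parses and runs (sketched here with a <code>StringIO</code> stand-in for <code>sys.stdin</code>):</p>

```python
import io

CYCLE_OF_15 = ["fizzbuzz", None, None, "fizz", None,
               "buzz", "fizz", None, None, "fizz",
               "buzz", None, "fizz", None, None]

def fizz_buzz(n: int) -> str:
    return CYCLE_OF_15[n % 15] or str(n)

out = []
stdin = io.StringIO("1\n3\n5\n15\n")   # stand-in for sys.stdin
while (n := stdin.readline()):          # := assigns AND tests in one expression
    out.append(fizz_buzz(int(n)))
```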
|
<python>
|
2023-01-25 15:32:43
| 1
| 747
|
Andrew
|
75,236,134
| 8,573,902
|
Training with tensorflow is much slower using GPU rather than CPU on M1 Max
|
<p>I tried to run the example below (a simplified version of a part of <a href="https://github.com/ageron/handson-ml3/blob/main/15_processing_sequences_using_rnns_and_cnns.ipynb" rel="nofollow noreferrer">this tutorial</a>).
I was extremely surprised to see that for this model at least, the training was more than 50x faster on the CPU.</p>
<pre><code>from pathlib import Path

import pandas as pd  # needed below for pd.read_csv
import tensorflow as tf
tf.keras.utils.get_file(
"ridership.tgz",
"https://github.com/ageron/data/raw/main/ridership.tgz",
cache_dir=".",
extract=True
)
path = Path("datasets/ridership/CTA_-_Ridership_-_Daily_Boarding_Totals.csv")
df = pd.read_csv(path, parse_dates=["service_date"])
df = df.sort_values("service_date").set_index("service_date")
df = df.drop_duplicates()
rail_train = df["rail_boardings"]["2016-01":"2018-12"] / 1e6
rail_valid = df["rail_boardings"]["2019-01":"2019-05"] / 1e6
seq_length = 56
train_ds = tf.keras.utils.timeseries_dataset_from_array(
rail_train.to_numpy(),
targets=rail_train[seq_length:],
sequence_length=seq_length,
batch_size=32,
shuffle=True,
seed=42
)
valid_ds = tf.keras.utils.timeseries_dataset_from_array(
rail_valid.to_numpy(),
targets=rail_valid[seq_length:],
sequence_length=seq_length,
batch_size=32
)
tf.random.set_seed(42) # extra code β ensures reproducibility
deep_model = tf.keras.Sequential([
tf.keras.layers.SimpleRNN(32, return_sequences=True, input_shape=[None, 1]),
tf.keras.layers.SimpleRNN(32, return_sequences=True),
tf.keras.layers.SimpleRNN(32),
tf.keras.layers.Dense(1)
])
</code></pre>
<p>Running using the GPU is extremely slow:</p>
<pre><code>with tf.device('/gpu:0'):
opt = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, momentum=0.9)
deep_model.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=["mae"])
deep_model.fit(train_ds, validation_data=valid_ds, epochs=10)
</code></pre>
<p>Gives:</p>
<pre><code>Epoch 1/10
2023-01-25 15:08:13.733000: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
33/33 [==============================] - ETA: 0s - loss: 0.0306 - mae: 0.1614
2023-01-25 15:12:10.354167: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
33/33 [==============================] - 241s 7s/step - loss: 0.0306 - mae: 0.1614 - val_loss: 0.0045 - val_mae: 0.0783
Epoch 2/10
33/33 [==============================] - 265s 8s/step - loss: 0.0082 - mae: 0.0985 - val_loss: 0.0118 - val_mae: 0.1358
Epoch 3/10
33/33 [==============================] - 243s 7s/step - loss: 0.0066 - mae: 0.0838 - val_loss: 0.0030 - val_mae: 0.0567
Epoch 4/10
33/33 [==============================] - 236s 7s/step - loss: 0.0046 - mae: 0.0631 - val_loss: 0.0022 - val_mae: 0.0455
Epoch 5/10
17/33 [==============>...............] - ETA: 1:52 - loss: 0.0045 - mae: 0.0609
</code></pre>
<p>While using CPU:</p>
<pre><code>with tf.device('/cpu:0'):
opt = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, momentum=0.9)
deep_model.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=["mae"])
deep_model.fit(train_ds, validation_data=valid_ds, epochs=10)
</code></pre>
<p>We get:</p>
<pre><code>Epoch 1/10
2023-01-25 15:35:43.883427: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
33/33 [==============================] - ETA: 0s - loss: 0.0046 - mae: 0.0654
2023-01-25 15:35:48.833485: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
33/33 [==============================] - 7s 160ms/step - loss: 0.0046 - mae: 0.0654 - val_loss: 0.0020 - val_mae: 0.0417
Epoch 2/10
33/33 [==============================] - 4s 128ms/step - loss: 0.0049 - mae: 0.0705 - val_loss: 0.0019 - val_mae: 0.0394
Epoch 3/10
33/33 [==============================] - 4s 135ms/step - loss: 0.0043 - mae: 0.0633 - val_loss: 0.0020 - val_mae: 0.0386
Epoch 4/10
33/33 [==============================] - 4s 131ms/step - loss: 0.0036 - mae: 0.0521 - val_loss: 0.0019 - val_mae: 0.0353
Epoch 5/10
33/33 [==============================] - 4s 127ms/step - loss: 0.0038 - mae: 0.0550 - val_loss: 0.0021 - val_mae: 0.0407
Epoch 6/10
33/33 [==============================] - 4s 123ms/step - loss: 0.0037 - mae: 0.0550 - val_loss: 0.0018 - val_mae: 0.0328
Epoch 7/10
33/33 [==============================] - 4s 123ms/step - loss: 0.0041 - mae: 0.0618 - val_loss: 0.0026 - val_mae: 0.0532
Epoch 8/10
33/33 [==============================] - 5s 141ms/step - loss: 0.0037 - mae: 0.0559 - val_loss: 0.0018 - val_mae: 0.0328
Epoch 9/10
33/33 [==============================] - 4s 129ms/step - loss: 0.0037 - mae: 0.0549 - val_loss: 0.0031 - val_mae: 0.0570
Epoch 10/10
33/33 [==============================] - 4s 129ms/step - loss: 0.0036 - mae: 0.0553 - val_loss: 0.0023 - val_mae: 0.0432
</code></pre>
<p>I understand that sometimes the CPU can be faster for simple models, so I also tried stacking 15 such <code>tf.keras.layers.SimpleRNN(32, return_sequences=True)</code> layers, and got the same kind of ratio, if not worse.</p>
<p>I am using conda with the following versions of the relevant libraries:</p>
<pre><code>tensorflow-datasets 4.7.0 pypi_0 pypi
tensorflow-deps 2.9.0 0 apple
tensorflow-estimator 2.11.0 pypi_0 pypi
tensorflow-macos 2.11.0 pypi_0 pypi
tensorflow-metadata 1.11.0 pypi_0 pypi
tensorflow-metal 0.7.0 pypi_0 pypi
</code></pre>
|
<python><tensorflow><keras><deep-learning><gpu>
|
2023-01-25 15:25:20
| 1
| 513
|
Amiel
|
75,235,933
| 13,566,716
|
flask_jwt_extended.exceptions.NoAuthorizationError: Missing Authorization Header
|
<p><strong>Server-side flask</strong></p>
<pre><code>@project_ns.route('/projects')
class ProjectsResource(Resource):
@project_ns.marshal_list_with(project_model)
@jwt_required()
def get(self):
"""Get all projects """
user_id = User.query.filter_by(username=get_jwt_identity()).first() # Filter DB by token (username)
projects=Project.query.filter_by(user_id=user_id)
#projects = Project.query.all()
return projects
</code></pre>
<p><strong>client-side reactjs</strong></p>
<pre><code>const getAllProjects=()=>{
const token = localStorage.getItem('REACT_TOKEN_AUTH_KEY');
console.log(token)
const requestOptions = {
method: 'GET',
headers: {
'content-type': 'application/json',
'Authorization': `Bearer ${JSON.parse(token)}`
},
body: "Get projects listed"
}
fetch('/project/projects', requestOptions)
.then(res => res.json())
.then(data => {
setProjects(data)
})
.catch(err => console.log(err))
}
</code></pre>
<p>I specified header on the client side and the error still occurs:</p>
<pre><code>flask_jwt_extended.exceptions.NoAuthorizationError: Missing Authorization Header
</code></pre>
<p><strong>my flask versions are as follows:</strong></p>
<pre><code>Flask==2.0.1
Flask-Cors==3.0.10
Flask-JWT-Extended==4.2.3
Flask-Migrate==3.1.0
Flask-RESTful==0.3.9
flask-restx==0.5.0
Flask-SQLAlchemy==2.5.1
</code></pre>
<p>I have tried many options and the issue is still persisting. Would love to come to a resolution. Thanks in advance!</p>
|
<python><reactjs><python-3.x><react-native><flask>
|
2023-01-25 15:08:35
| 1
| 369
|
3awny
|
75,235,923
| 7,575,552
|
File not found error when copying images from one folder to another
|
<p>I have a text file containing the names of images to be copied from a source folder to a destination folder. The source folder contains several sub-folders as shown below. The images may come from any of these sub-folders.</p>
<pre><code>animals (source folder)
|-cats_1
|-cats_2
|-tigers_1
|-lions_1
|-lions_2
</code></pre>
<p>Shown below is the Python code:</p>
<pre><code>import os
import shutil
src = r'X:\animals' #source with multiple sub-folders
dest = r'X:\images\cat_family' #destination folder
with open('cat_fam.txt') as file: #text file containing the image names
for path, subdirs, files in os.walk(src):
for name in file:
file_name = name.strip()
filename = os.path.join(path, file_name)
shutil.copy2(filename, dest)
</code></pre>
<p>I encounter a file not found error as shown below:</p>
<pre><code> File "C:\Users\AppData\Local\Temp\2/ipykernel_30556/2100413787.py", line 6, in <module>
shutil.copy2(filename, dest)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\tf2.7\lib\shutil.py", line 266, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\tf2.7\lib\shutil.py", line 120, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'X:\\animals\\lion_2345.jpg'
</code></pre>
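<p>Two things are likely going wrong in the snippet above: the names-file iterator is consumed during the first directory of the walk (so later directories see an exhausted file), and <code>os.path.join(path, file_name)</code> happily builds paths in directories that don't actually contain the file. A hedged sketch (function and variable names are illustrative) that indexes the tree once, then copies from the index:</p>

```python
import os
import shutil

def copy_listed(src_root, dest, names_file):
    """Copy every file named in names_file from anywhere under src_root
    into dest; returns the names that were not found anywhere."""
    with open(names_file) as fh:
        wanted = {line.strip() for line in fh if line.strip()}

    # single walk: map each wanted name to its first full path
    index = {}
    for path, _dirs, files in os.walk(src_root):
        for fname in files:
            if fname in wanted and fname not in index:
                index[fname] = os.path.join(path, fname)

    for full_path in index.values():
        shutil.copy2(full_path, dest)
    return sorted(wanted - set(index))
```

The returned list makes missing images visible instead of raising midway through the copy.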
|
<python><path><copy>
|
2023-01-25 15:08:08
| 1
| 1,189
|
shiva
|
75,235,809
| 1,014,217
|
Postcommands not working properly in Github Codespaces
|
<p>I want to create a codespace for Python development with some post commands like:</p>
<ul>
<li>creating a conda environment</li>
<li>activating it</li>
<li>installing ipykernel and creating a kernel</li>
<li>installing requirements.txt</li>
</ul>
<p>However, when I rebuild the container I don't get any error, and when I open the codespace terminal and type <code>conda env list</code>, the only thing I see is the base environment.</p>
<p>I tried both ways:</p>
<ol>
<li>Put many commands on the same postCreateCommand</li>
</ol>
<pre><code> // For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
"name": "Python 3",
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
"image": "mcr.microsoft.com/devcontainers/python:0-3.11",
// Features to add to the dev container. More info: https://containers.dev/features.
"features": {
"ghcr.io/devcontainers/features/anaconda:1": {}
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "conda create --name ForecastingSarimax && conda activate ForecastingSarimax",
// Configure tool-specific properties.
"customizations": {
// Configure properties specific to VS Code.
"vscode": {
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"streetsidesoftware.code-spell-checker"
]
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}
</code></pre>
<ol start="2">
<li>Or create a .sh script with the commands and execute it</li>
</ol>
<pre><code> #!/usr/bin/env bash
conda create --name ForecastingSarimax
conda activate ForecastingSarimax
conda install pip
conda install ipykernel
python -m ipykernel install --user --name ForecastingSarimaxKernel311 --display-name "ForecastingSarimaxKernel311"
pip3 install --user -r requirements.txt
</code></pre>
<p>What am I missing here to have my requirements met: a custom environment with my pip packages and a custom kernel?</p>
|
<python><github><conda><codespaces><github-codespaces>
|
2023-01-25 14:58:23
| 0
| 34,314
|
Luis Valencia
|
75,235,686
| 14,667,788
|
How to find a center of an object in image in python
|
<p>I have a following image:</p>
<p><a href="https://i.sstatic.net/9yTpv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9yTpv.png" alt="enter image description here" /></a></p>
<p>I would like to find the center of the main object in the image - the book in this case.</p>
<p>I follow this answer: <a href="https://stackoverflow.com/questions/49582008/center-of-mass-in-contour-python-opencv">Center of mass in contour (Python, OpenCV)</a></p>
<p>and try:</p>
<pre class="lang-py prettyprint-override"><code>
import cv2
import numpy as np
image = cv2.imread("29289.jpg")
imgray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
cnts = cv2.drawContours(image, contours[0], -1, (0, 255, 0), 1)
kpCnt = len(contours[0])
x = 0
y = 0
for kp in contours[0]:
x = x+kp[0][0]
y = y+kp[0][1]
cv2.circle(image, (np.uint8(np.ceil(x/kpCnt)), np.uint8(np.ceil(y/kpCnt))), 1, (0, 0, 255), 30)
cv2.namedWindow("Result", cv2.WINDOW_NORMAL)
cv2.imshow("Result", cnts)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>But the result is nonsense (see the red point, which should be the center):</p>
<p><a href="https://i.sstatic.net/OyxqR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OyxqR.png" alt="enter image description here" /></a></p>
<p>Do you have any idea how to solve this problem? Thanks a lot</p>
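<p>One concrete bug in the snippet: <code>np.uint8(np.ceil(x/kpCnt))</code> wraps any coordinate above 255 back around (modulo 256), so on any image wider than 256 px the drawn point lands in the wrong place; also, <code>contours[0]</code> is just one arbitrary contour, not necessarily the book. A numpy-only sketch of the centroid computation over a binary mask (a stand-in rectangle replaces the real threshold output here):</p>

```python
import numpy as np

# centroid via image moments m10/m00 and m01/m00 -- the same quantities
# cv2.moments(mask) would report
mask = np.zeros((400, 400), dtype=np.uint8)
mask[120:260, 80:330] = 1            # a filled rectangle as the "object"

ys, xs = np.nonzero(mask)
cx, cy = xs.mean(), ys.mean()        # centroid in (x, y) image coordinates
center = (int(round(cx)), int(round(cy)))   # plain ints, no uint8 wrap-around
```

Passing `center` (plain Python ints) to `cv2.circle` avoids the overflow entirely.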
|
<python><opencv>
|
2023-01-25 14:50:12
| 0
| 1,265
|
vojtam
|
75,235,598
| 17,487,457
|
MinMaxScaler: Normalise a 4D array of input
|
<p>I have a 4D array of input that I would like to normalise using <code>MinMaxScaler</code>. For simplicity, I give an example with the following array:</p>
<pre class="lang-py prettyprint-override"><code>A = np.array([
[[[0, 1, 2, 3],
[3, 0, 1, 2],
[2, 3, 0, 1],
[1, 3, 2, 1],
[1, 2, 3, 0]]],
[[[9, 8, 7, 6],
[5, 4, 3, 2],
[0, 9, 8, 3],
[1, 9, 2, 3],
[1, 0, -1, 2]]],
[[[0, 7, 1, 2],
[1, 2, 1, 0],
[0, 2, 0, 7],
[-1, 3, 0, 1],
[1, 0, 1, 0]]]
])
A.shape
(3,1,5,4)
</code></pre>
<p>In the given example, the array contains 3 input samples, where each sample has the shape <code>(1,5,4)</code>. Each column of the input represents 1 variable (feature), so each sample has <code>4 features</code>.</p>
<p>I would like to normalise the input data, but <code>MinMaxScaler</code> expects a 2D array <code>(n_samples, n_features)</code>, like a dataframe.</p>
<p>How then do I use it to normalise this input data?</p>
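<p>Since the features sit on the last axis, one common approach is to collapse all the other axes into rows, scale, and reshape back. The sketch below applies the per-feature min-max formula directly with numpy (with scikit-learn you would call <code>MinMaxScaler().fit_transform(flat)</code> on the same 2D view instead):</p>

```python
import numpy as np

A = np.array([
    [[[0, 1, 2, 3], [3, 0, 1, 2], [2, 3, 0, 1], [1, 3, 2, 1], [1, 2, 3, 0]]],
    [[[9, 8, 7, 6], [5, 4, 3, 2], [0, 9, 8, 3], [1, 9, 2, 3], [1, 0, -1, 2]]],
    [[[0, 7, 1, 2], [1, 2, 1, 0], [0, 2, 0, 7], [-1, 3, 0, 1], [1, 0, 1, 0]]],
], dtype=float)

n_features = A.shape[-1]
flat = A.reshape(-1, n_features)          # (3*1*5, 4): rows x features
mn, mx = flat.min(axis=0), flat.max(axis=0)
scaled = (flat - mn) / (mx - mn)          # the MinMaxScaler formula, per column
B = scaled.reshape(A.shape)               # back to (3, 1, 5, 4)
```

Each of the 4 feature columns now spans exactly [0, 1] while the 4D layout is preserved.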
|
<python><numpy><multidimensional-array><scikit-learn><normalize>
|
2023-01-25 14:42:37
| 3
| 305
|
Amina Umar
|
75,235,561
| 4,659,729
|
Speed up python split process
|
<p>I have a very big (4+ GB) text file and a script which splits the file into small files based on the characters before the first comma. E.g.: a 16,... line goes to 16.csv, a 61,... line goes to 61.csv. Unfortunately this script runs for ages; I guess because of the write-out method. Is there any way to speed up the script?</p>
<pre><code>import pandas as pd
import csv
with open (r"updates//merged_lst.csv",encoding="utf8", errors='ignore') as f:
r = f.readlines()
for i in range(len(r)):
row = r[i]
letter = r[i].split(',')[0]
filename = r"import//"+letter.upper()+".csv"
with open(filename,'a',encoding="utf8", errors='ignore') as f:
f.write(row)
</code></pre>
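<p>The two main costs in the loop above are reading the whole 4 GB into memory with <code>readlines()</code> and reopening the output file once per input line. A sketch (paths and function name are illustrative) that streams the input and keeps each output handle open:</p>

```python
import os

def split_by_prefix(src_path, out_dir):
    """Append each line of src_path to <out_dir>/<PREFIX>.csv, where PREFIX
    is the (uppercased) text before the first comma. Each output file is
    opened exactly once and the input is streamed line by line."""
    handles = {}
    try:
        with open(src_path, encoding="utf8", errors="ignore") as f:
            for row in f:
                letter = row.split(",", 1)[0].upper()
                out = handles.get(letter)
                if out is None:
                    path = os.path.join(out_dir, letter + ".csv")
                    out = handles[letter] = open(path, "a", encoding="utf8")
                out.write(row)
    finally:
        for out in handles.values():
            out.close()
```

`split(",", 1)` stops at the first comma instead of splitting the whole line, which also saves work on long rows.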
|
<python><csv><split>
|
2023-01-25 14:39:53
| 4
| 352
|
Tamas Kosa
|
75,235,548
| 871,947
|
How to continuously wait on any of multiple concurrent tasks to complete?
|
<p>Let's say there are multiple sources of events I want to monitor and respond to in an orderly fashion - for instance multiple connected sockets.</p>
<p>What's the best way to continuously await until any of them has data available to be read?</p>
<p><code>asyncio.wait</code> seems promising, but I am unsure about how to make sure tasks for sockets, that were just read from, get re-added into the list of tasks to await on.</p>
<p>I tried to re-schedule all of the reads every time the loop ran, but that (obviously) didn't work.</p>
<p>As a hack, I came up with cancelling the pending tasks on each iteration of the loop. The code I currently have looks like this, but I'm not sure it's actually correct in all cases.</p>
<pre><code>while True:
done, pending = await asyncio.wait([socket1.read(), socket2.read()], return_when=FIRST_COMPLETED)
for received in done:
...
for to_cancel in pending:
to_cancel.cancel()
</code></pre>
<p>What would be the most elegant (and correct!) way of doing this?</p>
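<p>Instead of cancelling the pending reads (which can lose data on a real socket), a common pattern is to keep one long-lived read task per source and re-arm only the tasks that completed. A sketch with a stand-in <code>FakeSocket</code> (the class, names, and the None-means-closed convention are all assumptions for the demo):</p>

```python
import asyncio

class FakeSocket:
    """Stand-in for a real socket: read() yields queued items, then None."""
    def __init__(self, items):
        self._items = list(items)

    async def read(self):
        await asyncio.sleep(0)
        return self._items.pop(0) if self._items else None

async def watch(sources):
    # one pending read task per source; only finished tasks are replaced,
    # so nothing is ever cancelled and no read result is ever dropped
    tasks = {asyncio.ensure_future(s.read()): s for s in sources}
    results = []
    while tasks:
        done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            source = tasks.pop(task)
            data = task.result()
            if data is None:          # source exhausted: do not re-arm
                continue
            results.append(data)
            tasks[asyncio.ensure_future(source.read())] = source
    return results

received = asyncio.run(watch([FakeSocket(["a", "b"]), FakeSocket(["x"])]))
```

Mapping task to source in a dict is what lets each completed read be re-armed on the right socket.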
|
<python><asynchronous><python-asyncio>
|
2023-01-25 14:39:06
| 1
| 1,306
|
JanLikar
|
75,235,531
| 18,948,596
|
Problem when installing Python from source, SSL package missing even though openssl installed
|
<h1>The Problem</h1>
<p>Trying to install Python-3.11.1 from source on Zorin OS (Ubuntu16 based) I get the following errors when I try to pip install any package into a newly created venv:</p>
<pre><code>python3.11 -m venv venv
source venv/bin/activate
pip install numpy
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/
Could not fetch URL https://pypi.org/simple/numpy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement numpy (from versions: none)
ERROR: No matching distribution found for numpy
</code></pre>
<p>Obviously, the SSL package seems to be missing; however, I made sure to have both <code>openssl</code> and <code>libssl-dev</code> installed before installing Python. More specifically, I made sure to have all the packages <a href="https://stackoverflow.com/a/49696062/18948596">outlined here</a> installed.</p>
<h1>The Exact Steps I Took To Install</h1>
<ol>
<li>Make sure all packages that are required are installed (the ones above)</li>
<li><code>cd .../python-installs</code></li>
<li>Download Python from <a href="https://www.python.org/" rel="noreferrer">python.org</a></li>
<li><code>tar -xvzf Python-3.11.1.tgz</code></li>
<li><code>cd Python-3.11.1</code> and then</li>
</ol>
<pre><code>./configure \
--prefix=/opt/python/3.11.1 \
--enable-shared \
--enable-optimizations \
--enable-ipv6 \
--with-openssl=/usr/lib/ssl \
--with-openssl-rpath=auto \
LDFLAGS=-Wl,-rpath=/opt/python/3.11.1/lib,--disable-new-dtags
</code></pre>
<ol start="6">
<li><code>make</code> <- Note that I get a lot of error messages from gcc here, very similar to <a href="https://bugs.python.org/issue37384" rel="noreferrer">this</a>, however it seems it's successful at the end</li>
<li><code>make altinstall</code></li>
</ol>
<p>Parts of this installation process are from <a href="https://docs.posit.co/resources/install-python-source/" rel="noreferrer">[1]</a>, <a href="https://stackoverflow.com/questions/45954528/pip-is-configured-with-locations-that-require-tls-ssl-however-the-ssl-module-in/49696062#comment132607406_49696062">[2]</a></p>
<p>Running <code>python3.11</code> seems to work fine, however I cannot pip install anything into a venv created by Python3.11.1.</p>
<h1>Other Possible Error Sources</h1>
<p>Before trying to reinstall Python3.11.1, I always made sure to delete all files in the following places that were associated with Python3.11.1:</p>
<pre><code>/usr/local/bin/...
/usr/local/lib/...
/usr/local/man/man1/...
/usr/local/share/man/man1/...
/usr/local/lib/pkgconfig/...
/opt/python/...
</code></pre>
<p>I also tried adding Python-3.11.1 to PATH by adding</p>
<pre><code>PATH=/opt/python/3.11.1/bin:$PATH
</code></pre>
<p>to <code>/etc/profile.d/python.sh</code>, but it didn't seem to do much in my case.</p>
<p>When configuring the python folder I am using <code>--with-openssl=/usr/lib/ssl</code>, though perhaps I need to use something else? I tried <code>--with-openssl=/usr/bin/openssl</code> but that doesn't work because <code>openssl</code> is a file and not a folder and it gives me an error message and doesn't even configure anything.</p>
<h1>Conclusion</h1>
<p>From my research I found that most times this error relates to the <code>openssl</code> library not being installed (given that python versions >= 3.10 will need it to be installed), and that installing it and reinstalling python seemed to fix the issue. However in my case it doesn't, and I don't know why that is.</p>
<p>The most likely cause is that something is wrong with my <code>openssl</code> configuration, but I wouldn't know what.</p>
<p>Any help would be greatly appreciated.</p>
|
<python><ssl><installation><pip><ubuntu-16.04>
|
2023-01-25 14:37:41
| 2
| 413
|
Racid
|
75,235,467
| 2,587,904
|
How to parallelize a pandas UDF in polars (h3 polyfill) for string typed UDF outputs?
|
<p>I want to execute the following lines of python code in Polars as a UDF:</p>
<pre><code>import shapely.geometry
from shapely import wkt
import h3

# a polygon (optionally including holes)
w = wkt.loads('POLYGON((-160.043334960938 70.6363054807905, -160.037841796875 70.6363054807905, -160.037841796875 70.6344840663086, -160.043334960938 70.6344840663086, -160.043334960938 70.6363054807905))')
j = shapely.geometry.mapping(w)
h3.polyfill(j, res=10, geo_json_conformant=True)
</code></pre>
<p>In pandas/geopandas:</p>
<pre><code>import pandas as pd
import geopandas as gpd
import polars as pl
from shapely import wkt
pandas_df = pd.DataFrame({'quadkey': {0: '0022133222330023',
1: '0022133222330031',
2: '0022133222330100'},
'tile': {0: 'POLYGON((-160.043334960938 70.6363054807905, -160.037841796875 70.6363054807905, -160.037841796875 70.6344840663086, -160.043334960938 70.6344840663086, -160.043334960938 70.6363054807905))',
1: 'POLYGON((-160.032348632812 70.6381267305321, -160.02685546875 70.6381267305321, -160.02685546875 70.6363054807905, -160.032348632812 70.6363054807905, -160.032348632812 70.6381267305321))',
2: 'POLYGON((-160.02685546875 70.6417687358462, -160.021362304688 70.6417687358462, -160.021362304688 70.6399478155463, -160.02685546875 70.6399478155463, -160.02685546875 70.6417687358462))'},
'avg_d_kbps': {0: 15600, 1: 6790, 2: 9619},
'avg_u_kbps': {0: 14609, 1: 22363, 2: 15757},
'avg_lat_ms': {0: 168, 1: 68, 2: 92},
'tests': {0: 2, 1: 1, 2: 6},
'devices': {0: 1, 1: 1, 2: 1}}
)
# display(pandas_df)
gdf = pandas_df.copy()
gdf['geometry'] = gpd.GeoSeries.from_wkt(pandas_df['tile'])
import h3pandas
display(gdf.h3.polyfill_resample(10))
</code></pre>
<p>This works super quickly and easily.
However, the polyfill function called from pandas apply as a UDF is too slow for the size of my dataset.</p>
<p>Instead, I would love to use polars but I run into several issues:</p>
<h2>geo type is not understood</h2>
<p>Trying to move to polars for better performance:</p>
<pre><code>pl.from_pandas(gdf)
</code></pre>
<p>fails with: ArrowTypeError: Did not pass numpy.dtype object</p>
<p>it looks like geoarrow / geoparquet is not supported by polars</p>
<h2>numpy vectorized polars interface fails with missing geometry types</h2>
<pre><code>polars_df = pl.from_pandas(pandas_df)
out = polars_df.select(
[
gpd.GeoSeries.from_wkt(pl.col('tile')),
]
)
</code></pre>
<p>fails with:</p>
<pre><code>TypeError: 'data' should be array of geometry objects. Use from_shapely, from_wkb, from_wkt functions to construct a GeometryArray.
</code></pre>
<h2>all by hand</h2>
<pre><code>polars_df.with_column(pl.col('tile').map(lambda x: h3.polyfill(shapely.geometry.mapping(wkt.loads(x)), res=10, geo_json_conformant=True)).alias('geometry'))
</code></pre>
<p>fails with:</p>
<pre><code>Conversion of polars data type Utf8 to C-type not implemented.
</code></pre>
<p>this last option seems to be the most promising one (no special geospatial-type errors). But this generic error message of strings/Utf8 type for C not being implemented sounds very strange to me.</p>
<p>Furthermore:</p>
<pre><code>polars_df.select(pl.col('tile').apply(lambda x: h3.polyfill(shapely.geometry.mapping(wkt.loads(x)), res=10, geo_json_conformant=True)))
</code></pre>
<p>works - but is lacking the other columns - i.e. syntax to manually select these is inconvenient. Though this is also failing when appending a:</p>
<pre><code>.explode('tile').collect()
# InvalidOperationError: cannot explode dtype: Object("object")
</code></pre>
|
<python><pandas><geopandas><python-polars><h3>
|
2023-01-25 14:31:02
| 2
| 17,894
|
Georg Heiler
|
75,235,457
| 11,251,373
|
Modify method call if chained
|
<p>Better to provide an example, I guess (a little bit pseudo-codish...)</p>
<pre class="lang-py prettyprint-override"><code>from django.db import transaction
from somewhere import some_job
from functools import partial
class Foo:
def do_something(self, key, value):
return some_job(key, value)
@property
def modifier(self):
pass
f = Foo()
f.do_something(key='a', value=1) -> result
f.modifier.do_something(key='a', value=1) -> transaction.on_commit(partial(do_something, key='a', value=1))
</code></pre>
<p>Normally, when <strong>do_something</strong> is called it does its regular thing and returns a result,
but when it is chained via <strong>modifier</strong> it should return <code>transaction.on_commit(partial(do_something, key='a', value=1))</code> instead of the regular result. The modifier might be a property or something else inside the class. The problem is that this instance is a singleton and should not be changed permanently, as it will be used later by other code.</p>
<p>Cannot wrap my head around how to do this.
Any ideas?</p>
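<p>One way to get this behaviour (a minimal sketch: the recording lambda below is a hypothetical stand-in for Django's <code>transaction.on_commit</code>) is to have <code>modifier</code> return a small proxy whose <code>__getattr__</code> wraps every method call in <code>functools.partial</code> instead of executing it:</p>

```python
from functools import partial

class _OnCommitProxy:
    """Defers method calls on the wrapped object instead of running them."""
    def __init__(self, obj, wrapper):
        self._obj = obj
        self._wrapper = wrapper

    def __getattr__(self, name):
        method = getattr(self._obj, name)  # resolve on the real object
        def deferred(*args, **kwargs):
            return self._wrapper(partial(method, *args, **kwargs))
        return deferred

class Foo:
    def do_something(self, key, value):
        return (key, value)

    @property
    def modifier(self):
        # In Django this wrapper would be transaction.on_commit;
        # here it just tags and returns the deferred callable.
        return _OnCommitProxy(self, wrapper=lambda fn: ('deferred', fn))

f = Foo()
direct = f.do_something(key='a', value=1)            # runs immediately
tag, fn = f.modifier.do_something(key='a', value=1)  # deferred instead
```

<p>The singleton itself is never mutated; each access to <code>modifier</code> builds a fresh proxy, so other code using the instance directly is unaffected.</p>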
|
<python>
|
2023-01-25 14:30:05
| 3
| 2,235
|
Aleksei Khatkevich
|
75,235,435
| 8,800,836
|
Simulation of Markov chain slower than in Matlab
|
<p>I run the same test code in Python+Numpy and in Matlab and see that the Matlab code is faster by an order of magnitude. I want to know what the bottleneck of the Python code is and how to speed it up.</p>
<p>I run the following test code using Python+Numpy (the last part is the performance sensitive one):</p>
<pre class="lang-py prettyprint-override"><code># Packages
import numpy as np
import time
# Number of possible outcomes
num_outcomes = 20
# Dimension of the system
dim = 50
# Number of iterations
num_iterations = int(1e7)
# Possible outcomes
outcomes = np.arange(num_outcomes)
# Possible transition matrices
matrices = [np.random.rand(dim, dim) for k in outcomes]
matrices = [mat/np.sum(mat, axis=0) for mat in matrices]
# Initial state
state = np.random.rand(dim)
state = state/np.sum(state)
# List of samples
samples = np.random.choice(outcomes, size=(num_iterations,))
samples = samples.tolist()
# === PERFORMANCE-SENSITIVE PART OF THE CODE ===
# Update the state over all iterations
start_time = time.time()
for k in range(num_iterations):
sample = samples[k]
matrix = matrices[sample]
state = np.matmul(matrix, state)
end_time = time.time()
# Print the execution time
print(end_time - start_time)
</code></pre>
<p>I then run an equivalent code using Matlab (the last part is the performance sensitive one):</p>
<pre class="lang-matlab prettyprint-override"><code>% Number of possible outcomes
num_outcomes = 20;
% Number of dimensions
dim = 50;
% Number of iterations
num_iterations = 1e7;
% Possible outcomes
outcomes = 1:num_outcomes;
% Possible transition matrices
matrices = rand(num_outcomes, dim, dim);
matrices = matrices./sum(matrices,2);
matrices = num2cell(matrices,[2,3]);
matrices = cellfun(@shiftdim, matrices, 'UniformOutput', false);
% Initial state
state = rand(dim,1);
state = state./sum(state);
% List of samples
samples = datasample(outcomes, num_iterations);
% === PERFORMANCE-SENSITIVE PART OF THE CODE ===
% Update the state over all iterations
tic;
for k = 1:num_iterations
sample = samples(k);
matrix = matrices{sample};
state = matrix * state;
end
toc;
</code></pre>
<p>The Python code is consistently slower than the Matlab code by an order of magnitude, and I am not sure why.</p>
<p>Any idea where to start?</p>
<p>I run the Python code with the Python 3.10 interpreter and Numpy 1.22.4. I run the Matlab code with Matlab R2022a. Both codes are run on Windows 11 Pro 64 bits on a Lenovo T14 ThinkPad with the following processors:</p>
<p>11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, 2803 Mhz, 4 Core(s), 8 Logical Processor(s)</p>
<p>EDIT 1: I made some additional tests and it looks like the culprit is some type of Python-specific constant overhead at low matrix sizes:</p>
<p><a href="https://i.sstatic.net/TYnnK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TYnnK.png" alt="enter image description here" /></a></p>
<p>As hpaulj and MSS suggest, this might mean that a JIT compiler could solve some of these issues. I will do my best to try this in the near future.</p>
<p>EDIT 2: I ran the code under Pypy 3.9-v7.3.11-win64 and although it does change the scaling and even beats Cpython at small matrix sizes, it generally incurs a big overhead for this particular code:</p>
<p><a href="https://i.sstatic.net/IUwE7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IUwE7.png" alt="enter image description here" /></a></p>
<p>So a JIT compiler could help if there are ways to mitigate this overhead. Otherwise a Cython implementation is probably the remaining way to go...</p>
|
<python><numpy><performance><matlab>
|
2023-01-25 14:28:41
| 1
| 539
|
Ben
|
75,235,422
| 11,269,090
|
XGBoost XGBRegressor predict with different dimensions than fit
|
<p>I am using <a href="https://xgboost.XGBRegressor" rel="nofollow noreferrer">the xgboost XGBRegressor</a> to train on data with 20 input dimensions:</p>
<pre><code> model = xgb.XGBRegressor(objective='reg:squarederror', n_estimators=20)
model.fit(trainX, trainy, verbose=False)
</code></pre>
<p><code>trainX</code> is 2000 x 19, and <code>trainy</code> is 2000 x 1.</p>
<p>In other words, I am using the 19 dimensions of <code>trainX</code> to predict the 20th dimension (the one dimension of <code>trainy</code>) during training.</p>
<p>When I am making a prediction:</p>
<pre><code>yhat = model.predict(x_input)
</code></pre>
<p><code>x_input</code> has to be 19-dimensional.
I am wondering if there is a way to keep using the 19 dimensions for training, but during prediction use an <code>x_input</code> with only 4 of those dimensions to predict the 20th dimension. It is kind of transfer learning to a different input dimension.</p>
<p>Does xgboost support such a feature? I tried just filling <code>x_input</code>'s other dimensions with <code>None</code>, but that yields terrible prediction results.</p>
|
<python><machine-learning><time-series><regression><xgboost>
|
2023-01-25 14:27:31
| 2
| 1,010
|
Chen
|
75,235,367
| 5,159,404
|
How can I plot a line with a confidence interval in python using plotly?
|
<p>I am trying to use plotly to plot a graph similar to the one here below:</p>
<p><a href="https://i.sstatic.net/JWRib.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWRib.png" alt="enter image description here" /></a></p>
<p>Unfortunately I am only able to plot something like this
<a href="https://i.sstatic.net/p5vGw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p5vGw.png" alt="enter image description here" /></a></p>
<p>What I would like is to have normal boundaries (upper and lower bounds defined by two dataframe columns) and only one entry in the legend.</p>
<pre class="lang-py prettyprint-override"><code>
import plotly.graph_objs as go
# Create a trace for the lower bound
trace1 = go.Scatter(x=df.index,
y=df['lower'],
name='Lower Bound',
fill='tonexty',
fillcolor='rgba(255,0,0,0.2)',
line=dict(color='blue'))
# Create a trace for the median
trace2 = go.Scatter(x=df.index,
y=df['median'],
name='median',
line=dict(color='blue', width=2))
# Create a trace for the upper bound
trace3 = go.Scatter(x=df.index,
y=df['upper'],
name='Upper Bound',
fill='tonexty',
fillcolor='rgba(255,0,0,0.2)',
line=dict(color='blue'))
# Create the layout
layout = go.Layout(xaxis=dict(title='Date'),
yaxis=dict(title='title'))
# Create the figure with the three traces and the layout
fig = go.Figure(data=[trace1, trace2, trace3], layout=layout)
context['pltyplot'] = pltyplot(fig, output_type="div")
</code></pre>
<p>I want to use plotly because I am integrating the resulting figure into a Django web page, and plotly makes it possible, with the last line, to embed the whole object in a clean, simple and interactive way into the page.</p>
<p>Any ideas?</p>
|
<python><plotly>
|
2023-01-25 14:21:46
| 1
| 1,002
|
Wing
|
75,235,331
| 13,579,159
|
Referencing objects depends on relative and absolute import of a package
|
<p>I have just come across behaviour I don't yet understand and don't know how to name. I'll reproduce it here.</p>
<p>Let's say we have a project with the structure outlined below.</p>
<pre class="lang-bash prettyprint-override"><code>src
│   __init__.py
│   main.py
│
└───utils
        __init__.py
        data.py
        funcs.py
</code></pre>
<p>First, inside <code>utils.data</code> we define two classes.</p>
<pre class="lang-py prettyprint-override"><code># utils.data module
__all__ = [
'Menu',
'Case'
]
from enum import Enum
class Menu(Enum):
QUIT = 'exit'
class Case:
def __init__(self):
self.a = 7
</code></pre>
<p>Then inside <code>utils.funcs</code> we define two functions to return instances of those classes.</p>
<pre class="lang-py prettyprint-override"><code># utils.funcs module
__all__ = [
'get_Menu_instance',
'get_Case_instance',
]
from src.utils import data
def get_Menu_instance():
return data.Menu.QUIT
def get_Case_instance():
return data.Case()
</code></pre>
<p>Finally, in <code>main</code> we call both functions. And check if <code>result</code> and <code>expected</code> inherit from the respective classes.</p>
<pre class="lang-py prettyprint-override"><code># main module
import utils
def main():
result = utils.get_Menu_instance()
expected = utils.Menu.QUIT
print(f'{isinstance(result, utils.Menu) = }\n'
f'{isinstance(expected, utils.Menu) = }\n')
result = utils.get_Case_instance()
expected = utils.Case()
print(f'{isinstance(result, utils.Case) = }\n'
f'{isinstance(expected, utils.Case) = }\n')
if __name__ == '__main__':
main()
</code></pre>
<p>Here the Magic comes. An output of <code>main()</code> depends on relative or absolute import in <code>utils.__init__</code> .</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>relative</th>
<th>absolute</th>
</tr>
</thead>
<tbody>
<tr>
<td><pre># utils.<strong>init</strong> module<br><br>from .data import *<br>from .funcs import *</pre></td>
<td><pre># utils.<strong>init</strong> module<br><br>from src.utils.data import *<br>from src.utils.funcs import *</pre></td>
</tr>
<tr>
<td><pre>> py main.py<br><br>isinstance(result, utils.Menu) = False<br>isinstance(expected, utils.Menu) = True<br><br>isinstance(result, utils.Case) = False<br>isinstance(expected, utils.Case) = True</pre></td>
<td><pre>> py main.py<br><br>isinstance(result, utils.Menu) = True<br>isinstance(expected, utils.Menu) = True<br><br>isinstance(result, utils.Case) = True<br>isinstance(expected, utils.Case) = True</pre></td>
</tr>
</tbody>
</table>
</div>
<p>As far as I understand it myself, there are two different class objects for each class: one inside and one outside the package. But what I can't get is how that could depend on relative vs. absolute import.</p>
<p>I would very much appreciate it if someone could explain this to me.</p>
<p>Thanks!</p>
|
<python><package><python-import><relative-import><python-3.11>
|
2023-01-25 14:19:07
| 0
| 341
|
Gennadiy
|
75,235,221
| 911,576
|
Unable to Sign Solana Transaction using solana-py throws not enough signers
|
<p>Using the solana library from pip:</p>
<pre><code>pip install solana
</code></pre>
<p>and then trying to perform <code>withdraw_from_vote_account</code></p>
<pre><code>txn = txlib.Transaction(fee_payer=wallet_keypair.pubkey())
# txn.recent_blockhash = blockhash
txn.add(
vp.withdraw_from_vote_account(
vp.WithdrawFromVoteAccountParams(
vote_account_from_pubkey=vote_account_keypair.pubkey(),
to_pubkey=validator_keypair.pubkey(),
withdrawer=wallet_keypair.pubkey(),
lamports=2_000_000_000,
)
)
)
txn.sign(wallet_keypair)
txn.serialize_message()
solana_client.send_transaction(txn).value
</code></pre>
<p>This throw me an error</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 119, in <module>
solana_client.send_transaction(txn).value
File "venv/lib/python3.8/site-packages/solana/rpc/api.py", line 1057, in send_transaction
txn.sign(*signers)
File "venv/lib/python3.8/site-packages/solana/transaction.py", line 239, in sign
self._solders.sign(signers, self._solders.message.recent_blockhash)
solders.SignerError: not enough signers
</code></pre>
<p>I tried to workaround with adding more keypair to sign</p>
<pre><code>txn.sign(wallet_keypair,validator_keypair)
</code></pre>
<p>Doing this it throws me an error on the <strong>sign</strong> method</p>
<pre><code>self._solders.sign(signers, self._solders.message.recent_blockhash)
solders.SignerError: keypair-pubkey mismatch
</code></pre>
<p>Not sure how to resolve this; any help appreciated.</p>
|
<python><solana><anchor-solana>
|
2023-01-25 14:10:09
| 2
| 7,498
|
anish
|
75,235,095
| 3,399,638
|
Pandas conditional join and calculation
|
<p>I have two Pandas dataframes, df_stock_prices and df_sentiment_mean.</p>
<p>I would like to do the following:</p>
<ol>
<li><p>Left join/merge these two dataframes into one dataframe, joined by Date and by ticker. In df_stock_prices, ticker is the column name, for example AAPL.OQ and in df_sentiment_mean ticker is found within the rows of the column named ticker.</p>
</li>
<li><p>If there is a Date and ticker from df_stock_prices that doesn't match df_sentiment_mean, keep the non-matching row of df_stock_prices as-is (hence the left join).</p>
</li>
<li><p>When there is a match for both Date and ticker, multiply the fields together; for example in the dataframes listed below, if df_stock_prices Date is 2021-11-29 and column AAPL.OQ is a match for the df_sentiment_mean Date of 2021-11-29 and ticker AAPL.OQ, then multiply the values for the match, in this example: 160.24 * 0.163266.</p>
</li>
</ol>
<p>If a Date and ticker from df_stock_prices doesn't match a Date and ticker value from df_sentiment_mean, keep the values from df_stock_prices.</p>
<p><strong>Current dataframes:</strong></p>
<p><strong>df_stock_prices:</strong></p>
<pre><code>
AAPL.OQ ABBV.N ABT.N ACN.N ADBE.OQ AIG.N AMD.OQ AMGN.OQ
Date
2021-11-29 160.24 116.89 128.03 365.82 687.49 54.95 161.91 203.47
2021-11-30 165.30 115.28 125.77 357.40 669.85 52.60 158.37 198.88
2021-12-01 164.77 115.91 126.74 360.14 657.41 51.72 149.11 200.80
2021-12-02 163.76 116.87 128.38 365.30 671.88 53.96 150.68 201.17
2021-12-03 161.84 118.85 130.27 361.42 616.53 53.32 144.01 202.44
...
</code></pre>
<p><strong>df_sentiment_mean:</strong></p>
<pre><code> ticker diff
Date
2021-11-29 AAPL.OQ 0.163266
2021-11-29 ABBV.N -0.165520
2021-11-29 ABT.N 0.149920
2021-11-29 ADBE.OQ -0.014639
2021-11-29 AIG.N -0.448595
... ... ...
2023-01-12 LOW.N 0.008863
2023-01-12 MDT.N 0.498884
2023-01-12 MO.N -0.013428
2023-01-12 NEE.N 0.255223
2023-01-12 NKE.N 0.072752
</code></pre>
<p><strong>Desired dataframe, partial first row example:</strong></p>
<p><strong>df_new:</strong></p>
<pre><code> AAPL.OQ ABBV.N ABT.N ACN.N ADBE.OQ AIG.N β¦
Date
2021-11-29 26.16174384 -19.3476328 19.1942576 365.82 -10.06416611 -24.65029525 β¦
...
</code></pre>
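<p>A sketch of one possible approach (using a trimmed-down version of the frames above): pivot the sentiment frame into the same wide Date-by-ticker shape as the price frame, align it, then multiply element-wise, falling back to a factor of 1 where no match exists:</p>

```python
import pandas as pd

prices = pd.DataFrame(
    {"AAPL.OQ": [160.24, 165.30], "ABBV.N": [116.89, 115.28],
     "ACN.N": [365.82, 357.40]},
    index=pd.to_datetime(["2021-11-29", "2021-11-30"]))
prices.index.name = "Date"

sentiment = pd.DataFrame(
    {"ticker": ["AAPL.OQ", "ABBV.N"], "diff": [0.163266, -0.165520]},
    index=pd.to_datetime(["2021-11-29", "2021-11-29"]))
sentiment.index.name = "Date"

# Wide Date-by-ticker frame of factors, aligned to the price frame.
factors = (sentiment.reset_index()
           .pivot(index="Date", columns="ticker", values="diff")
           .reindex(index=prices.index, columns=prices.columns))

# Multiply where a (Date, ticker) match exists; keep the price otherwise.
result = prices * factors.fillna(1)
```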
|
<python><pandas><dataframe><data-munging>
|
2023-01-25 14:00:13
| 1
| 323
|
billv1179
|
75,234,820
| 2,404,988
|
Dynamically add extra versions constraints to pip managed dependencies
|
<p>Is there a way to provide additional constraints on transitive dependencies to pip ?</p>
<p>For example, if you were to pip install scikit-optimize v0.8.1, <a href="https://github.com/scikit-optimize/scikit-optimize/blob/v0.8.1/setup.py#L47" rel="nofollow noreferrer">its setup.py</a> would say it depends on scikit-learn>=0.20, but the actual truth is the lib is incompatible with scikit-learn>=1.0 and will crash at runtime.</p>
<p>Same thing if you were to install scikit-optimize v0.7: it's secretly (to pip) incompatible with scikit-learn>=0.23 but doesn't declare it.</p>
<p>Can I somehow enrich my pip requirements.txt file so that pip knows about those additional constraints? I considered using a constraints.txt file, but I can't make that one dynamic based on the scikit-optimize version. And other qualifiers directly in requirements.txt only seem to cover Python version and OS, not versions of other libs.</p>
|
<python><pip>
|
2023-01-25 13:38:34
| 0
| 8,056
|
C4stor
|
75,234,789
| 6,734,243
|
pip install -e is not resolved by python3
|
<p>Everything was fine until December 2022; since then, packages installed in editable mode are not resolved anymore.</p>
<p><strong>reproducible example:</strong></p>
<p>from my terminal I run:</p>
<pre><code>git clone git@github.com:pydata/numexpr.git
pip install --user -e numexpr
</code></pre>
<p>In my local folder, I find the following:</p>
<pre><code>.local/
└── lib/
    └── python3.8/
        └── site-packages/
            ├── numexpr-2.8.5.dev1.dist-info/
            └── __editable___numexpr_2_8_5_dev1_finder.py
</code></pre>
<p>Now from a python interface notebook, I execute the following code:</p>
<pre class="lang-py prettyprint-override"><code>import numexpr
numexpr.__version__
</code></pre>
<p>which gives me 2.7.1, which is not the version I installed in editable mode.</p>
<p>But from the terminal, it returns the correct version.</p>
<pre><code>$ pip show numexpr
Version: 2.8.5.dev1
</code></pre>
<p>Can someone explain why Python is not able to discover the lib? Is it related to the fact that there is no <code>numexpr/</code> in my <code>site-packages</code>?</p>
<p><strong>env:</strong><br />
python: 3.8<br />
pip: 22.3.1</p>
|
<python><pip>
|
2023-01-25 13:36:00
| 1
| 2,670
|
Pierrick Rambaud
|
75,234,649
| 12,734,492
|
Python pandas: left join by key and value in list of values:
|
<p>Code:</p>
<pre><code>df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],
'value': [1, 2, 3, 4]})
df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'F'],
'list_values': [[2, 4, 6], [4, 8], [1, 3, 5], [7, 9]]})
</code></pre>
<p>I need to make a left join by :</p>
<ol>
<li><code>df1['key'] = df2['key']</code></li>
<li><code>df1['value'] in df2['list_values']</code></li>
</ol>
<p>The output needs to be:</p>
<pre><code> key value list_values
0 A 1 Nan
1 B 2 [2, 4, 6]
2 C 3 Nan
3 D 4 [4, 8]
</code></pre>
<p>I can merge by key, but how can I add the second condition?</p>
<pre><code>merged_df = df1.merge(df2, left_on='key', right_on='key', how='left')
............ ??
</code></pre>
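<p>One possible route (a sketch, not the only way): explode <code>df2</code> so each list element becomes its own row, merge on both columns, then bring the original lists back for the rows that matched:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],
                    'value': [1, 2, 3, 4]})
df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'F'],
                    'list_values': [[2, 4, 6], [4, 8], [1, 3, 5], [7, 9]]})

# One row per (key, element) pair; 'index' remembers the df2 row it came from.
e = df2.explode('list_values').reset_index()
e['list_values'] = e['list_values'].astype('int64')  # match df1['value'] dtype

m = df1.merge(e, left_on=['key', 'value'],
              right_on=['key', 'list_values'], how='left')

# Replace the matched element with the original (unexploded) list.
m['list_values'] = [df2['list_values'].iloc[int(i)] if pd.notna(i) else None
                    for i in m['index']]
out = m[['key', 'value', 'list_values']]
```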
|
<python><pandas><dataframe>
|
2023-01-25 13:24:14
| 2
| 487
|
Galat
|
75,234,556
| 1,192,393
|
When using importlib to load a module, can I put it in a package without an __init__.py file?
|
<p>My Python application loads plugins from a user-specified path (which is not part of <code>sys.path</code>), according to <a href="https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly" rel="nofollow noreferrer">the importlib documentation</a>:</p>
<pre><code>import importlib.util
import sys
def load_module(module_name, file_path):
spec = importlib.util.spec_from_file_location(module_name, file_path)
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
return module
plugin_module = load_module("some_plugin", "/path/to/plugins/some_plugin.py")
</code></pre>
<p>To make it possible for the user to factor out common functionality between multiple plugins, I want to allow relative imports in the plugins:</p>
<pre><code>from . import plugin_common
def plugin_function(x):
return plugin_common.something(x)
</code></pre>
<p>When implemented like this, I get an ImportError in the plugin:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>To my understanding, this is because the <code>some_plugin</code> module <a href="https://peps.python.org/pep-0328/#relative-imports-and-name" rel="nofollow noreferrer">is not considered part of a package</a>, and relative imports can therefore not be used (inside <code>some_plugin.py</code>, <code>__name__</code> is <code>'some_plugin'</code> and <code>__package__</code> is empty).</p>
<p>I can solve this by first loading the surrounding package and then putting the imported module into that package:</p>
<pre><code>load_module("plugins_package", "/path/to/plugins/__init__.py")
plugin_module = load_module("plugins_package.some_plugin", "/path/to/plugins/some_plugin.py")
</code></pre>
<p>Now, <code>__name__</code> is <code>'plugins_package.some_plugin'</code>, <code>__package__</code> is <code>'plugins_package'</code>, and I can use relative imports.</p>
<p>However, this requires the user to put an (empty) <code>__init__.py</code> file in the plugins directory, which I would like to avoid. Since normal packages don't require an <code>__init__.py</code> file (they will be treated as a <a href="https://peps.python.org/pep-0420/" rel="nofollow noreferrer">namespace packages</a>), it seems like this should be possible.</p>
<p>It seems like it should be possible to create a namespace package dynamically (using <code>importlib</code>) for <code>plugins_package</code> and using that as package for the imported <code>plugin_module</code>. But I haven't found a way to do this.</p>
<p>So:</p>
<ul>
<li>Can I create a namespace package (where I can put <code>plugin_module</code> in) dynamically?</li>
<li>Can I dynamically create a normal package without the need for an <code>__init__.py</code> file?</li>
<li>Am I on the wrong track and there is a better way to achieve what I want (relative imports in a module loaded dynamically from outside <code>sys.path</code>)?</li>
</ul>
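<p>To the first bullet: a package object can be created entirely in memory. A sketch (the file contents and directory here are hypothetical stand-ins for the plugin layout): build a bare module with a <code>__path__</code> attribute, register it in <code>sys.modules</code>, and load plugins underneath it; relative imports then resolve through the synthetic package's <code>__path__</code>, with no <code>__init__.py</code> on disk:</p>

```python
import importlib.util
import os
import sys
import tempfile
import types

def ensure_package(name, path):
    """Register a synthetic package; no __init__.py needed on disk."""
    if name not in sys.modules:
        pkg = types.ModuleType(name)
        pkg.__path__ = [path]  # a __path__ attribute is what makes it a package
        sys.modules[name] = pkg
    return sys.modules[name]

def load_module(module_name, file_path):
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module

# Demo with a throwaway plugin directory.
plugins = tempfile.mkdtemp()
with open(os.path.join(plugins, "plugin_common.py"), "w") as f:
    f.write("def something(x):\n    return x * 2\n")
with open(os.path.join(plugins, "some_plugin.py"), "w") as f:
    f.write("from . import plugin_common\n"
            "def plugin_function(x):\n"
            "    return plugin_common.something(x)\n")

ensure_package("plugins_package", plugins)
mod = load_module("plugins_package.some_plugin",
                  os.path.join(plugins, "some_plugin.py"))
result = mod.plugin_function(21)  # the relative import resolved
```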
|
<python><python-3.x><dynamic><python-import>
|
2023-01-25 13:18:03
| 0
| 411
|
Martin
|
75,234,294
| 4,796,629
|
Is it possible for a class to provide convenient direct access to the methods of specific objects initialised within it?
|
<p>This may be highly abnormal, but I feel like it may be a useful thing. I have a range of classes that do different jobs in my package. I want to keep them separate to keep the logic modular, and allow advanced users to use the classes directly, but I also want users to have a main convenience class that gives them quick access to the methods defined in these other classes. So to provide an example, currently this works...</p>
<pre><code>class Tail:
def wag_tail(self):
print('Wag Wag')
class Dog:
def __init__(self):
self.tail = Tail()
my_dog = Dog()
my_dog.tail.wag_tail()
>> Wag Wag
</code></pre>
<p>But... is it possible to adjust my <code>Dog</code> class so that this also works?</p>
<pre><code>my_dog.wag_tail()
>> Wag Wag
</code></pre>
<p>Editing for clarity.</p>
<p>I want to achieve the above automatically, without necessarily having to define a new method in <code>Dog</code>. E.g. you could manually ensure access via <code>def wag_tail(self): self.tail.wag_tail()</code>, but what if I wanted to avoid writing a convenience access method every time I add a method to my <code>Tail</code> class? Is there a way to set things up such that <code>Tail</code> methods are always accessible from <code>Dog</code>?</p>
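<p>One lightweight way (a sketch; note that <code>__getattr__</code> is only consulted when normal attribute lookup fails, so <code>Dog</code>'s own attributes and methods still take priority) is to fall back to the composed object:</p>

```python
class Tail:
    def wag_tail(self):
        return 'Wag Wag'

class Dog:
    def __init__(self):
        self.tail = Tail()

    def __getattr__(self, name):
        # Reached only when 'name' is not found on Dog itself;
        # delegate the lookup to the tail component.
        return getattr(self.tail, name)

my_dog = Dog()
my_dog.wag_tail()       # delegated to Tail.wag_tail
my_dog.tail.wag_tail()  # direct access still works
```

<p>With several components you could loop over them inside <code>__getattr__</code> and raise <code>AttributeError</code> if none of them provides the name.</p>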
|
<python><class>
|
2023-01-25 12:55:36
| 2
| 581
|
James Allen-Robertson
|
75,234,291
| 4,562,115
|
how to get modulus difference of values of two json objects in python
|
<p>I have two JSON arrays, and I need to get the modulus difference of the JSON object values. My array list can have thousands of elements. How can I calculate it efficiently? Is there a way to do it in parallel, without using a loop?</p>
<p>For example</p>
<pre><code>js1 = [{'myVal':100},{'myVal':200}]
js2 = [{'myVal':500},{'myVal':800}]
</code></pre>
<p>Result should be :</p>
<pre><code>[{'myVal':400},{'myVal':600}]
</code></pre>
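<p>Assuming the two arrays are aligned by position and "modulus difference" means the absolute difference of the values, a plain comprehension already avoids an explicit index loop (NumPy could vectorize the same idea for very large arrays):</p>

```python
js1 = [{'myVal': 100}, {'myVal': 200}]
js2 = [{'myVal': 500}, {'myVal': 800}]

# Pair the objects up positionally and take the absolute difference.
result = [{'myVal': abs(a['myVal'] - b['myVal'])} for a, b in zip(js1, js2)]
# result == [{'myVal': 400}, {'myVal': 600}]
```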
|
<python><arrays><json><key><modulo>
|
2023-01-25 12:55:17
| 1
| 802
|
Varuni N R
|
75,234,258
| 4,499,574
|
Unable to call OpenAI API from within a docker container
|
<p>I am trying to call the OpenAI API from within my docker container but the request is timing out. The same curl works on my machine but not from within my container</p>
<p>I tried running this curl. It worked on the host machine but not in the container.</p>
<pre><code> curl https://api.openai.com/v1/models \
-H 'Authorization: Bearer sk-auth-token' \
-H 'OpenAI-Organization: org-orgid'
</code></pre>
<p>Here is my Dockerfile. Please let me know what I am missing here. Also, I'd like to add that I am able to call it sometimes, but other times it times out.</p>
<pre><code>FROM python:3.12.0a3-alpine3.17 as base
ENV PYTHONDONTWRITEBYTECODE 1
COPY requirements.txt .
RUN apk add --update --no-cache --virtual .build-deps \
build-base \
postgresql-dev \
libffi-dev \
python3-dev \
libffi-dev \
jpeg-dev \
zlib-dev \
musl-dev \
libpq \
&& pip install --no-cache-dir -r requirements.txt \
&& find /usr/local \
\( -type d -a -name test -o -name tests \) \
-o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
-exec rm -rf '{}' +
# Now multistage builds
FROM python:3.12.0a3-alpine3.17
RUN apk add --update --no-cache libpq libjpeg-turbo
COPY --from=base /usr/local/lib/python3.12/site-packages/ /usr/local/lib/python3.12/site-packages/
COPY --from=base /usr/local/bin/ /usr/local/bin/
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONPATH /app:$PYTHONPATH
COPY . /app/
</code></pre>
|
<python><docker><docker-compose><openai-api>
|
2023-01-25 12:52:53
| 0
| 1,105
|
Shahrukh Mohammad
|
75,234,243
| 1,107,474
|
Python convert string to datetime but formatting is not very predictable
|
<p>I'm extracting the execution time of a Linux process using subprocess and <code>ps</code>. I'd like to put it in a datetime object to perform datetime arithmetic. However, I'm a little concerned about the output <code>ps</code> returns for the execution time:</p>
<pre><code>1-01:12:23 // 1 day, 1 hour, 12 minutes, 23 seconds
05:39:03 // 5 hours, 39 minutes, 3 seconds
15:06 // 15 minutes, 6 seconds
</code></pre>
<p>Notice there is no zero padding before the day. And it doesn't include months/years, whereas technically something could run for that long.</p>
<p>Consequently, I'm unsure what format string to use to convert it to a <code>timedelta</code>, because I don't want it to break if one process has been running for months while another has only been running for hours.</p>
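<p>Since <code>ps</code> prints elapsed time as <code>[[dd-]hh:]mm:ss</code> and simply keeps counting days rather than rolling over into months, a small hand-rolled parser (a sketch, stdlib only) can cover all three shapes:</p>

```python
import re
from datetime import timedelta

def parse_etime(s):
    """Parse ps elapsed time, which is printed as [[dd-]hh:]mm:ss."""
    m = re.fullmatch(r'(?:(?:(\d+)-)?(\d+):)?(\d+):(\d+)', s.strip())
    if m is None:
        raise ValueError(f'unrecognised etime: {s!r}')
    days, hours, minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

parse_etime('1-01:12:23')   # 1 day, 1:12:23
parse_etime('15:06')        # 0:15:06
```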
<p>UPDATE</p>
<p>Mozway has given a very smart answer. However, I'm taking a step back and wondering if I can get the execution time another way. I'm currently using <code>ps</code> to get the time, but it means I also have the pid. Is there something else I can do with the pid, to get the execution time in a simpler format?</p>
<p>(Can only use official Python libraries)</p>
<p>UPDATE2</p>
<p>It's actually colons between the hours, mins and seconds.</p>
|
<python><python-3.x><datetime><timedelta>
|
2023-01-25 12:51:15
| 1
| 17,534
|
intrigued_66
|
75,234,217
| 13,596,837
|
Gibberish / malformed negative y-axis values in plotly charts in python
|
<p>I'm trying to plot a bar plot in plotly that represents net gains (it will have positive and negative bar values). But somehow the negative values on the y-axis are being rendered as gibberish. I tried several things, including using the <code>update_layout</code> function, but nothing seems to work. I'm using the <code>make_subplots</code> function because I want to plot multiple visualizations on one figure.</p>
<p>I'm using Databricks for this code.
Attaching my code and viz output:</p>
<pre><code>net_gains = pd.DataFrame()
net_gains["general_net_gain"] = [-2,2,-1,2]
fig = plotly.subplots.make_subplots(rows=1, cols=1)
fig.add_bar(x=net_gains.index, y=net_gains["general_net_gain"], row=1, col=1)
fig.update_layout(height=400,width=500,showlegend=True)
</code></pre>
<p><a href="https://i.sstatic.net/FavSv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FavSv.png" alt="Viz output" /></a></p>
|
<python><python-3.x><pandas><matplotlib><plotly>
|
2023-01-25 12:48:42
| 3
| 399
|
marksman123
|
75,234,161
| 5,986,907
|
Python Docker SDK "Error while fetching server API version"
|
<p>In the Python Docker SDK, When I do</p>
<pre><code>import docker
docker.from_env()
</code></pre>
<p>I see</p>
<pre class="lang-none prettyprint-override"><code>docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
</code></pre>
<p>I have docker desktop running and this works in the terminal</p>
<pre><code>$ docker run -it ubuntu
</code></pre>
<p>If I add a version number</p>
<pre><code>docker.from_env(version="6.0.1")
</code></pre>
<p>it stops erroring, but it doesn't seem to matter what number I use. I also then see an error on</p>
<pre><code>client.containers.run("ubuntu")
</code></pre>
<p>of</p>
<pre><code>requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
</code></pre>
<p>I'm on Ubuntu 22.04 and I'm seeing the problem with both Poetry and plain pip + venv. I've looked through the dozen or so questions about that error message and tried everything that looked relevant.</p>
|
<python><docker><python-poetry>
|
2023-01-25 12:43:25
| 2
| 8,082
|
joel
|
75,234,152
| 13,184,183
|
How to load model from mlflow with custom predict without local file?
|
<p>I want to log model with custom predict. Example of signature</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import RandomForestRegressor
class CustomForest(RandomForestRegressor):
def predict(self, X, second_arg=False):
pred = super().predict(X)
value = 1 if second_arg else 0
return pred, value
</code></pre>
<p>This model is saved in file <code>model.py</code>. From <a href="https://medium.com/@pennyqxr/how-save-and-load-fasttext-model-in-mlflow-format-37e4d6017bf0" rel="nofollow noreferrer">here</a> I get the idea to create wrapper to get access to the:</p>
<pre class="lang-py prettyprint-override"><code>class WrapperPythonModel(mlflow.pyfunc.PythonModel):
"""
Class to train and use custom model
"""
def load_context(self, context):
"""This method is called when loading an MLflow model with pyfunc.load_model(), as soon as the Python Model is constructed.
Args:
context: MLflow context where the model artifact is stored.
"""
import joblib
self.model = joblib.load(context.artifacts["model_path"])
def predict(self, context, model_input):
"""This is an abstract function. We customized it into a method to fetch the model.
Args:
context ([type]): MLflow context where the model artifact is stored.
model_input ([type]): the input data to fit into the model.
Returns:
[type]: the loaded model artifact.
"""
return self.model
</code></pre>
<p>And here is how I save and log it:</p>
<pre><code>model = CustomForest()
model.fit(X, y)
model_path = 'model.pkl'
joblib.dump(model, 'model.pkl')
artifacts = {"model_path": model_path}
with mlflow.start_run() as run:
mlflow.pyfunc.log_model(
artifact_path=model_path,
registered_model_name='model',
python_model=WrapperPythonModel(),
code_path=["models.py"],
artifacts=artifacts,
)
</code></pre>
<p>But when I load it and deploy it on another machine, I get the error <code>module models.py not found</code>. How can I fix that? I thought specifying the <code>code_path</code> parameter fixed such issues with absent local files.</p>
|
<python><mlflow>
|
2023-01-25 12:42:43
| 1
| 956
|
Nourless
|
75,234,099
| 11,622,712
|
Iterate over rows and calculate values
|
<p>I have the following pandas dataframe:</p>
<pre><code>temp stage issue_datetime
20 1 2022/11/30 19:20
21 1 2022/11/30 19:21
20 1 None
25 1 2022/11/30 20:10
30 2 None
22 2 2022/12/01 10:00
22 2 2022/12/01 10:01
31 3 2022/12/02 11:00
32 3 2022/12/02 11:01
19 1 None
20 1 None
</code></pre>
<p>I want to get the following result:</p>
<pre><code>temp stage num_issues
20 1 3
21 1 3
20 1 3
25 1 3
30 2 2
22 2 2
22 2 2
31 3 2
32 3 2
19 1 0
20 1 0
</code></pre>
<p>Basically, I need to calculate the number of non-<code>None</code> values per consecutive run of <code>stage</code> and create a new column called <code>num_issues</code>.</p>
<p>How can I do it?</p>
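<p>A sketch of the usual consecutive-run trick (shown on a trimmed frame with just the two relevant columns): comparing <code>stage</code> with its shifted self and taking a cumulative sum labels each run, and <code>transform('count')</code>, which ignores nulls, broadcasts the per-run count back to every row:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'stage': [1, 1, 1, 1, 2, 2, 2, 3, 3, 1, 1],
    'issue_datetime': ['2022/11/30 19:20', '2022/11/30 19:21', None,
                       '2022/11/30 20:10', None, '2022/12/01 10:00',
                       '2022/12/01 10:01', '2022/12/02 11:00',
                       '2022/12/02 11:01', None, None],
})

# Each change of 'stage' starts a new run; cumsum gives the run label.
run_id = df['stage'].ne(df['stage'].shift()).cumsum()
# count() skips nulls; transform broadcasts the per-run result to each row.
df['num_issues'] = df.groupby(run_id)['issue_datetime'].transform('count')
```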
|
<python><pandas>
|
2023-01-25 12:38:44
| 1
| 2,998
|
Fluxy
|
75,233,951
| 9,182,743
|
Plotly: bar plot with color red<0, green>0, divided by groups
|
<p>Given a dataframe with 2 groups (group1, group2) that have values greater than and less than 0, plot:</p>
<ul>
<li>Bar plot</li>
<li>x = x</li>
<li>y = values, divided by group1, group2</li>
<li>color = red if value<0, green if value>0</li>
<li>legend shows group1, grou2 with different colors.</li>
</ul>
<p>My current code, however, is not coloring the bars as I would expect, and the legend is shown with the same color:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame( {
"x" : [1,2,3],
"group1" : [np.nan, 1, -0.5],
"group2" : [np.nan, -0.2, 1],
}).set_index("x")
df_ = df.reset_index().melt(id_vars = 'x')
fig = px.bar(df_, x='x', y='value', color='variable', barmode='group')
fig.update_traces(marker_color=['red' if val < 0 else 'green' for val in df_['value']], marker_line_color='black', marker_line_width=1.5)
fig.show()
</code></pre>
<p>OUT with indications of what I want to achieve:
<a href="https://i.sstatic.net/j1ZRV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j1ZRV.png" alt="enter image description here" /></a></p>
|
<python><pandas><plotly>
|
2023-01-25 12:23:48
| 3
| 1,168
|
Leo
|
75,233,893
| 357,313
|
Can itertools.groupby use pd.NA?
|
<p>I <a href="https://stackoverflow.com/a/59543818/357313">tried using</a> <code>itertools.groupby</code> with a pandas Series. But I got:</p>
<blockquote>
<p>TypeError: boolean value of NA is ambiguous</p>
</blockquote>
<p>Indeed some of my values are <code>NA</code>.</p>
<p>This is a minimal reproducible example:</p>
<pre><code>import pandas as pd
import itertools
g = itertools.groupby([pd.NA,0])
next(g)
next(g)
</code></pre>
<p>Comparing a <code>NA</code> always <a href="https://pandas.pydata.org/pandas-docs/version/1.0.0/user_guide/missing_data.html#propagation-in-arithmetic-and-comparison-operations" rel="nofollow noreferrer">results in</a> <code>NA</code>, so <code>g.__next__</code> does <code>while NA</code> and <a href="https://pandas.pydata.org/pandas-docs/version/1.0.0/user_guide/missing_data.html#na-in-a-boolean-context" rel="nofollow noreferrer">fails</a>.</p>
<p>Is there a way to solve this, so <code>itertools.groupby</code> works with <code>NA</code> values? Or should I just accept it and use a different route to my (whatever) goal?</p>
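<p>One workaround (a sketch) is to pass a <code>key</code> function that maps <code>pd.NA</code> to a sentinel, so the equality tests inside <code>groupby</code> never evaluate an ambiguous <code>NA</code> boolean:</p>

```python
import itertools
import pandas as pd

def na_safe(value):
    # Map pd.NA to a distinct sentinel so the key comparisons inside
    # groupby never produce an ambiguous NA boolean.
    return None if value is pd.NA else value

g = itertools.groupby([pd.NA, 0], key=na_safe)
keys = [k for k, _ in g]
print(keys)  # [None, 0]
```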
|
<python><pandas><itertools-groupby>
|
2023-01-25 12:19:19
| 1
| 8,135
|
Michel de Ruiter
|
75,233,891
| 1,275,973
|
Create a new dictionary with the key-value pair from values in a list of dictionaries based on matches from a separate list
|
<p>I am trying to build a new dictionary with key-value pairs taken from a list of dictionaries, based on matches from a separate list.</p>
<p>The casing is different.</p>
<p>My data looks like this:</p>
<pre><code>list_of_dicts = [
{'fieldname': 'Id', 'source': 'microsoft', 'nullable': True, 'type': 'int'},
{'fieldname': 'FirstName', 'source': 'microsoft', 'nullable': True, 'type': 'string'},
{'fieldname': 'LastName', 'source': 'microsoft', 'nullable': False, 'type': 'string'},
{'fieldname': 'Address1', 'source': 'microsoft', 'nullable': False, 'type': 'string'}
]
</code></pre>
<pre><code>fieldname_list = ['FIRSTNAME', 'LASTNAME']
</code></pre>
<p>From this I would like to create a new dictionary as follows:</p>
<pre><code>new_dict = {'FirstName': 'string', 'LastName': 'string'}
</code></pre>
<p>I think this should be possible with a dictionary comprehension, but I can't work it out - can anyone help?</p>
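<p>A dictionary comprehension along these lines should work (normalizing the casing once up front):</p>

```python
list_of_dicts = [
    {'fieldname': 'Id', 'source': 'microsoft', 'nullable': True, 'type': 'int'},
    {'fieldname': 'FirstName', 'source': 'microsoft', 'nullable': True, 'type': 'string'},
    {'fieldname': 'LastName', 'source': 'microsoft', 'nullable': False, 'type': 'string'},
    {'fieldname': 'Address1', 'source': 'microsoft', 'nullable': False, 'type': 'string'},
]
fieldname_list = ['FIRSTNAME', 'LASTNAME']

# Uppercase both sides once, so the case-insensitive match is cheap.
wanted = {name.upper() for name in fieldname_list}
new_dict = {d['fieldname']: d['type'] for d in list_of_dicts
            if d['fieldname'].upper() in wanted}
print(new_dict)  # {'FirstName': 'string', 'LastName': 'string'}
```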
|
<python><dictionary><dictionary-comprehension>
|
2023-01-25 12:19:11
| 1
| 326
|
alexei7
|
75,233,794
| 2,823,719
|
How is the multiprocessing.Queue instance serialized when passed as an argument to a multiprocessing.Process?
|
<p>A related question came up at <a href="https://stackoverflow.com/questions/75193175/why-i-cant-use-multiprocessing-queue-with-processpoolexecutor">Why I can't use multiprocessing.Queue with ProcessPoolExecutor?</a>. I provided a partial answer along with a workaround but admitted that the question raises another question, namely why a <code>multiprocessing.Queue</code> instance <em>can</em> be passed as the argument to a <code>multiprocessing.Process</code> worker function.</p>
<p>For example, the following code fails under platforms that use either the <em>spawn</em> or <em>fork</em> method of creating new processes:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool, Queue
def worker(q):
print(q.get())
with Pool(1) as pool:
q = Queue()
q.put(7)
pool.apply(worker, args=(q,))
</code></pre>
<p>The above raises:</p>
<p><code>RuntimeError: Queue objects should only be shared between processes through inheritance</code></p>
<p>Yet the following program runs without a problem:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Queue
def worker(q):
print(q.get())
q = Queue()
q.put(7)
p = Process(target=worker, args=(q,))
p.start()
p.join()
</code></pre>
<p>It appears that arguments to a multiprocessing pool worker function ultimately get put on the pool's input queue, which is implemented as a <code>multiprocessing.SimpleQueue</code>, and you cannot put a <code>multiprocessing.Queue</code> instance to a <code>multiprocessing.SimpleQueue</code> instance, which uses a <code>ForkingPickler</code> for serialization.</p>
<p>So how is the <code>multiprocessing.Queue</code> serialized when passed as an argument to a <code>multiprocessing.Process</code> that allows it to be used in this way?</p>
|
<python><multiprocessing><queue>
|
2023-01-25 12:11:15
| 2
| 45,536
|
Booboo
|
75,233,726
| 18,291,356
|
Locust AttributeError: object has no attribute
|
<p>I am using python 3.10 and here is my locust file.</p>
<pre class="lang-py prettyprint-override"><code>from locust import HttpUser, task, between
import string
import random
import time
import datetime
WAIT_TIME_MIN = 1
WAIT_TIME_MAX = 5
h = {
"Content-Type": "application/json"
}
random.seed()
class LoadTest(HttpUser):
wait_time = between(WAIT_TIME_MIN, WAIT_TIME_MAX)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.path = None
def generate_random_string(self, min_name_size=2, max_name_size=20) -> str:
letters = string.ascii_lowercase
string_size = random.randint(min_name_size, max_name_size)
generated_string = ''.join(random.choice(letters) for i in range(string_size))
return generated_string
def generate_random_dob(self) -> str:
d = random.randint(1, int(time.time()))
return datetime.date.fromtimestamp(d).strftime('%Y-%m-%d')
@task(2)
def get_all(self):
self.client.get(url=self.path)
@task(8)
def post_request(self):
self.client.post(url=self.path, json=self._generate_post_data(), headers=h)
class TeacherProcess(LoadTest):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.path = "/api/v1/teacher"
def _generate_post_data(self):
request_data = {
"teacherName": str,
"teacherEmail": str,
"teacherDOB": str
}
request_data["teacherName"] = self.generate_random_string()
request_data["teacherEmail"] = f"{self.generate_random_string()}@{self.generate_random_string()}.{self.generate_random_string()}"
request_data["teacherDOB"] = self.generate_random_dob()
return request_data
class StudentProcess(LoadTest):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.path = "/api/v1/student"
def _generate_post_data(self):
request_data = {
"studentName": str,
"studentEmail": str,
"studentDOB": str
}
request_data["studentName"] = self.generate_random_string()
request_data["studentEmail"] = f"{self.generate_random_string()}@{self.generate_random_string()}.{self.generate_random_string()}"
request_data["studentDOB"] = self.generate_random_dob()
return request_data
</code></pre>
<p>I don't know how, but somehow I am able to use <code>_generate_post_data</code> inside the <code>LoadTest</code> class. I say it's working because the locust output is as below:</p>
<pre><code>Type Name # reqs # fails | Avg Min Max Med | req/s failures/s
--------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
GET /api/v1/student 126 0(0.00%) | 36 2 117 7 | 12.78 0.00
POST /api/v1/student 499 0(0.00%) | 66 1 276 6 | 50.61 0.00
GET /api/v1/teacher 135 0(0.00%) | 53 2 233 8 | 13.69 0.00
POST /api/v1/teacher 502 0(0.00%) | 60 2 238 6 | 50.92 0.00
--------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
Aggregated 1262 0(0.00%) | 59 1 276 7 | 128.00 0.00
Response time percentiles (approximated)
Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs
--------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------
GET /api/v1/student 7 10 110 110 110 110 120 120 120 120 120 126
POST /api/v1/student 6 10 200 220 250 260 260 270 280 280 280 499
GET /api/v1/teacher 8 10 110 140 230 230 230 230 230 230 230 135
POST /api/v1/teacher 6 9 180 200 220 230 230 230 240 240 240 502
--------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------
Aggregated 7 10 110 190 230 240 260 260 270 280 280 1262
</code></pre>
<p>As you can see there are no failures. My first question is: how am I able to access <code>_generate_post_data</code> while inside the <code>LoadTest</code> class? The second question is related to the error below:</p>
<pre><code>[2023-01-25 12:31:38,147] pop-os/ERROR/locust.user.task: 'LoadTest' object has no attribute '_generate_post_data'
Traceback (most recent call last):
File "/home/ak/.local/lib/python3.10/site-packages/locust/user/task.py", line 347, in run
self.execute_next_task()
File "/home/ak/.local/lib/python3.10/site-packages/locust/user/task.py", line 372, in execute_next_task
self.execute_task(self._task_queue.pop(0))
File "/home/ak/.local/lib/python3.10/site-packages/locust/user/task.py", line 493, in execute_task
task(self.user)
File "/home/ak/Desktop/my-projects/spring-boot-app/performans-testing/locust.py", line 38, in post_request
self.client.post(url=self.path, json=self._generate_post_data(), headers=h)
AttributeError: 'LoadTest' object has no attribute '_generate_post_data'
</code></pre>
<p>I am quite confused: is this a Locust bug, or am I doing something wrong? Locust doesn't show any failures, yet I am getting this error. If anyone can explain this, I would appreciate it.</p>
|
<python><locust>
|
2023-01-25 12:04:56
| 1
| 432
|
Serdar
|
75,233,627
| 9,274,940
|
Pythonic way of counting max elements by index in a dictionary with list values
|
<p>I want to compare the lists inside a dictionary (as values) by each index, and save in another dictionary how many times each "key" had the highest value.</p>
<p>Let's put an example, I have this dictionary:</p>
<pre><code>my_dict = {'a': [1, 2, 5], 'b': [2, 1, 4], 'c': [1, 0, 3]}
</code></pre>
<p>I want to end up with a dictionary like this:</p>
<pre><code>count_dict = {'a': 2, 'b': 1, 'c': 0}
</code></pre>
<p>Because:</p>
<ul>
<li><p>at index 0 of the lists we have:
<code>1</code> from 'a' ; <code>2</code> from 'b' ; and <code>1</code> from 'c'.</p>
<p>So 'b' has the highest value for this index and adds one to the count.</p>
</li>
<li><p>at index 1 of the lists we have:
<code>2</code> from 'a' ; <code>1</code> from 'b' ; and <code>0</code> from 'c'.</p>
<p>So 'a' has the highest value for this index and adds one to the count.</p>
</li>
<li><p>at index 2 of the lists we have:
<code>5</code> from 'a' ; <code>4</code> from 'b' ; and <code>3</code> from 'c'.</p>
<p>So 'a' has the highest value for this index and adds one to the count.</p>
</li>
</ul>
<hr />
<p>I've tried with <code>Counter</code> and <code>max(my_dict, key=my_dict.get)</code>. But what would be the most pythonic way instead of doing this:</p>
<pre><code>for i in range(len(my_dict['a'])):
max_value = max(my_dict[key][i] for key in my_dict)
for key in my_dict:
if my_dict[key][i] == max_value:
max_count[key] += 1
print(max_count)
</code></pre>
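<p>One more compact variant (a sketch) transposes the lists with <code>zip(*...)</code> and uses <code>Counter</code>:</p>

```python
from collections import Counter

my_dict = {'a': [1, 2, 5], 'b': [2, 1, 4], 'c': [1, 0, 3]}

# zip(*values) yields one tuple per index; credit every key whose
# value attains that index's maximum (ties all get counted).
counts = Counter(
    key
    for column in zip(*my_dict.values())
    for key, val in zip(my_dict, column)
    if val == max(column)
)
result = {key: counts.get(key, 0) for key in my_dict}
print(result)  # {'a': 2, 'b': 1, 'c': 0}
```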
|
<python>
|
2023-01-25 11:56:29
| 3
| 551
|
Tonino Fernandez
|
75,233,582
| 15,376,262
|
Split pandas dataframe column of type string into multiple columns based on number of ',' characters
|
<p>Let's say I have a pandas dataframe that looks like this:</p>
<pre><code>import pandas as pd
data = {'name': ['Tom, Jeffrey, Henry', 'Nick, James', 'Chris', 'David, Oscar']}
df = pd.DataFrame(data)
df
name
0 Tom, Jeffrey, Henry
1 Nick, James
2 Chris
3 David, Oscar
</code></pre>
<p>I know I can split the names into separate columns using the comma as separator, like so:</p>
<pre><code>df[["name1", "name2", "name3"]] = df["name"].str.split(", ", expand=True)
df
name name1 name2 name3
0 Tom, Jeffrey, Henry Tom Jeffrey Henry
1 Nick, James Nick James None
2 Chris Chris None None
3 David, Oscar David Oscar None
</code></pre>
<p>However, if the <code>name</code> column would have a row that contains 4 names, like below, the code above will yield a <code>ValueError: Columns must be same length as key</code></p>
<pre><code>data = {'name': ['Tom, Jeffrey, Henry', 'Nick, James', 'Chris', 'David, Oscar', 'Jim, Jones, William, Oliver']}
# Create DataFrame
df = pd.DataFrame(data)
df
name
0 Tom, Jeffrey, Henry
1 Nick, James
2 Chris
3 David, Oscar
4 Jim, Jones, William, Oliver
</code></pre>
<p>How can I automatically split the <code>name</code> column into an arbitrary number of separate columns based on the ',' separator? The desired output would be this:</p>
<pre><code> name name1 name2 name3 name4
0 Tom, Jeffrey, Henry Tom Jeffrey Henry None
1 Nick, James Nick James None None
2 Chris Chris None None None
3 David, Oscar David Oscar None None
4 Jim, Jones, William, Oliver Jim Jones William Oliver
</code></pre>
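<p>One way (a sketch) is to let <code>split()</code> decide the number of columns and name them afterwards:</p>

```python
import pandas as pd

data = {'name': ['Tom, Jeffrey, Henry', 'Nick, James', 'Chris',
                 'David, Oscar', 'Jim, Jones, William, Oliver']}
df = pd.DataFrame(data)

# expand=True already produces as many columns as the longest split;
# rename them positionally instead of hard-coding the count.
parts = df['name'].str.split(', ', expand=True)
parts.columns = [f'name{i + 1}' for i in range(parts.shape[1])]
df = df.join(parts)
print(df.columns.tolist())  # ['name', 'name1', 'name2', 'name3', 'name4']
```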
|
<python><pandas><split>
|
2023-01-25 11:52:23
| 1
| 479
|
sampeterson
|
75,233,393
| 3,905,832
|
Is it possible with `click` library to limit choices of a command argument, based on the value of a previous argument?
|
<p><strong>The context</strong></p>
<p>I have a CLI tool named <code>mytool</code> written in python 3.9 and using the <code>click</code> library for handling CLI commands, arguments and options. Autocompletion is working.</p>
<p><strong>What I want to achieve</strong></p>
<p>In a directory <code>D</code> I have the following files:</p>
<pre><code>name1_version1.zip
name1_version2.zip
name1_version3.zip
name2_version1.zip
name2_version2.zip
name3_version1.zip
</code></pre>
<p>When running:</p>
<pre><code>mytool install <TAB>
</code></pre>
<p>I want to obtain a list of autocompletion possibilities: <code>name1 name2 name3</code>. Then, when choosing <code>name2</code> and running:</p>
<pre><code>mytool install name2 <TAB>
</code></pre>
<p>I want to obtain a list of autocompletion possibilities given the <code>name2</code> constraint, i.e.: <code>version1 version2</code> which correspond to the <code>name2_version1.zip</code> and <code>name2_version2.zip</code> files.</p>
<p>The content of <code>D</code> is changing regularly. New files are added and removed.</p>
<p>Now let's say that new files named <code>name2_version3.zip</code> and <code>name4_version1.zip</code> are added to the <code>D</code> directory. Then, when running:</p>
<pre><code>mytool install <TAB>
</code></pre>
<p>Without having to modify the code of the <code>install</code> command, I want the completion to provide an updated list of autocompletion possibilities, i.e.: <code>name1 name2 name3 name4</code>. Then when running:</p>
<pre><code>mytool install name2 <TAB>
</code></pre>
<p>I want the list of autocompletion possibilities to be updated as well i.e.: <code>version1 version2 version3</code>.</p>
<p><strong>How I was thinking to do it</strong></p>
<p>The <code>install</code> <code>@click.command</code> will take two <code>@click.argument</code> named <code>name</code> and <code>version</code>.</p>
<p>As existing files in <code>D</code> are changing, I want these two <code>@click.argument</code>s to be "dynamic choices".</p>
<p>The argument <code>name</code> would have a <code>type=click.Choice(ToolsFinder.names())</code> where <code>ToolsFinder.names()</code> returns a list of the <code>&lt;name&gt;</code> parts of the files found in <code>D</code>. When using autocompletion, possible values for <code>name</code> will be extracted dynamically and returned.</p>
<p>When a valid (i.e. existing) <code>name</code> is provided, I want to use this value to limit "acceptable" versions to the <code><version></code> part of the files found in <code>D</code> where the files name matches <code><name></code>.</p>
<p>In other words I would like the argument <code>version</code> to have something like <code>type=click.Choice(ToolsFinder.versions(provided_name_argument_value))</code>, where <code>provided_name_argument_value</code> is the value of the <code>name</code> argument.</p>
<p><strong>The problem</strong></p>
<p>I don't know how to filter argument values based on previous argument values.</p>
<p><strong>The question</strong></p>
<p>Is it possible with <code>click</code> library to limit choices of a command argument, based on the value of a previous argument ?</p>
<p>If yes could you please provide a code snippet showing how to declare arguments to be able to do it (or any other kind of example) ?</p>
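<p>One possibility (a sketch, assuming click 8's <code>shell_complete</code> API; <code>ToolsFinder</code> is stubbed here): arguments already parsed during completion are exposed on <code>ctx.params</code>, so the <code>version</code> completions can be filtered by the chosen <code>name</code>:</p>

```python
import click

class ToolsFinder:
    # Hypothetical stand-in for the asker's directory scanner over D.
    _files = {"name1": ["version1", "version2", "version3"],
              "name2": ["version1", "version2"],
              "name3": ["version1"]}

    @classmethod
    def names(cls):
        return sorted(cls._files)

    @classmethod
    def versions(cls, name):
        return cls._files.get(name, [])

def complete_name(ctx, param, incomplete):
    return [n for n in ToolsFinder.names() if n.startswith(incomplete)]

def complete_version(ctx, param, incomplete):
    # Click exposes arguments parsed so far on ctx.params, which lets
    # the version completions depend on the chosen name.
    name = ctx.params.get("name")
    return [v for v in ToolsFinder.versions(name) if v.startswith(incomplete)]

@click.command()
@click.argument("name", shell_complete=complete_name)
@click.argument("version", shell_complete=complete_version)
def install(name, version):
    click.echo(f"installing {name} {version}")
```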
|
<python><python-click>
|
2023-01-25 11:34:53
| 0
| 2,885
|
Kraal
|
75,233,364
| 12,709,265
|
Extract the gesture from GestureRecognizerResult
|
<p>In the <code>mediapipe</code> library, there is a task called <code>GestureRecognizer</code> which can recognize certain hand gestures. There is also a task called <code>GestureRecognizerResult</code> which consists of the results from the <code>GestureRecognizer</code>. <code>GestureRecognizerResult</code> has an attribute called <code>gesture</code>, which when printed shows the following output</p>
<pre><code>> print(getattr(GestureRecognizerResult, 'gestures'))
#[[Category(index=-1, score=0.8142859935760498, display_name='', category_name='Open_Palm')]]
</code></pre>
<p>I actually want just the <code>category_name</code> to be printed, how can I do that?</p>
<p>Thanks in advance.</p>
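<p>Since <code>gestures</code> is a list (one entry per detected hand) of ranked <code>Category</code> lists, the top name sits at <code>gestures[0][0].category_name</code>. A sketch with a stub mirroring the printed structure:</p>

```python
from collections import namedtuple

# Stand-in for mediapipe's Category, shaped like the printed output.
Category = namedtuple("Category", "index score display_name category_name")
gestures = [[Category(-1, 0.8142859935760498, "", "Open_Palm")]]

# Guard against frames with no detected hand before indexing.
top_gesture = gestures[0][0].category_name if gestures and gestures[0] else None
print(top_gesture)  # Open_Palm
```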
|
<python><python-3.x><mediapipe>
|
2023-01-25 11:32:19
| 1
| 1,428
|
Shawn Brar
|
75,233,331
| 10,954,152
|
Convert dataframe column into set of text files
|
<p>I have a dataframe which contains <code>topic</code> and <code>keywords</code> column as shown below:</p>
<pre><code>topic keyword
0 ['player', 'team', 'word_finder_unscrambler', ...
1 ['weather', 'forecast', 'sale', 'philadelphia'...
2 ['name', 'state', 'park', 'health', 'dog', 'ce...
3 ['game', 'flight', 'play', 'game_live', 'play_...
4 ['dictionary', 'clue', 'san_diego', 'professor...
</code></pre>
<p>I need to create a separate text file for each topic, namely topic1.txt, topic2.txt, ..., topic20.txt, and each topic text file should contain the strings from the <code>keyword</code> column on new lines, something like this:</p>
<pre><code>topic1.txt file should contain:
player
team
word_finder_unscrambler
etc
</code></pre>
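<p>Assuming the <code>keyword</code> column holds real Python lists (if they are string representations, parse them with <code>ast.literal_eval</code> first), a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "topic": [0, 1],
    "keyword": [["player", "team", "word_finder_unscrambler"],
                ["weather", "forecast", "sale"]],
})

# One file per topic, one keyword per line. File names are 1-based
# (topic 0 becomes topic1.txt) to match the asker's naming.
for _, row in df.iterrows():
    with open(f"topic{row['topic'] + 1}.txt", "w") as fh:
        fh.write("\n".join(row["keyword"]))
```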
|
<python><pandas><dataframe>
|
2023-01-25 11:29:28
| 1
| 967
|
think-maths
|
75,233,280
| 14,860,526
|
Get the main type out of a composite type in Python
|
<p>let's assume I have types defined as:</p>
<pre><code>data_type1 = list[str]
data_type2 = set[int]
</code></pre>
<p>and so on. How can I get just the main type (like <code>list</code> or <code>set</code>) by analyzing the two data types?</p>
<p>I tried:</p>
<pre><code>issubclass(data_type1, list)
issubclass(data_type2, set)
</code></pre>
<p>but it returns False</p>
<p>Any idea?</p>
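<p><code>typing.get_origin</code> is designed for exactly this: it unwraps a parameterized generic to its base class (Python 3.8+; the <code>list[str]</code> syntax itself needs 3.9+):</p>

```python
from typing import get_origin

data_type1 = list[str]
data_type2 = set[int]

# get_origin returns the unsubscripted class behind a generic alias.
print(get_origin(data_type1))  # <class 'list'>
print(get_origin(data_type2) is set)  # True
```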
|
<python><python-typing>
|
2023-01-25 11:25:01
| 2
| 642
|
Alberto B
|
75,233,196
| 10,037,034
|
How can i see my files on remote Jupyter server? (Vscode)
|
<p>I want to use VS Code on my computer using a remote Jupyter connection.
I applied the steps for the connection, and I can create a new notebook and run it on this remote server.
But I want to see my folders on the remote server. I want to edit them or create new files on this server.</p>
<p><a href="https://i.sstatic.net/nI8NV.png" rel="noreferrer"><img src="https://i.sstatic.net/nI8NV.png" alt="vscode" /></a></p>
<p>How can I see these files in my VS Code, like in JupyterLab?</p>
<p><a href="https://i.sstatic.net/5JEIy.png" rel="noreferrer"><img src="https://i.sstatic.net/5JEIy.png" alt="editor" /></a></p>
|
<python><visual-studio-code><jupyter-notebook><vscode-remote>
|
2023-01-25 11:18:38
| 1
| 1,311
|
Sevval Kahraman
|
75,233,177
| 1,551,817
|
Why does this Python function not require an argument when it itself is being used as an argument?
|
<p>I'm looking at a function that acts as a class factory and takes a function as an argument:</p>
<pre><code>def Example(func):
class Example(object):
def __init__(self, name):
self._name = name
return Example
</code></pre>
<p>There is also another separate function:</p>
<pre><code>def other_function(flags):
flagvals = np.unique(flags)
return {val: flags == val for val in flagvals}
</code></pre>
<p>I then see the first function being used with the second function as an argument:</p>
<pre><code>my_example = Example(other_function)
</code></pre>
<p>Can anyone explain why <code>other_function</code> doesn't seem to require an argument itself here when it seemed to require one when it was defined?</p>
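<p>The underlying point is that a bare function name is just an object reference; the parentheses are what perform the call. A minimal illustration:</p>

```python
def greet(name):
    return f"hello {name}"

reference = greet        # no parentheses: the function object is passed around
result = greet("world")  # parentheses: the function is actually called

print(callable(reference))  # True
print(result)               # hello world
```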
|
<python><function><class>
|
2023-01-25 11:17:06
| 3
| 7,561
|
user1551817
|
75,233,043
| 1,092,632
|
When can "s != s" occur in a method?
|
<p>I found a code snippet, which is a custom metric for tensorboard (pytorch training)</p>
<pre><code>def specificity(output, target, t=0.5):
tp, tn, fp, fn = tp_tn_fp_fn(output, target, t)
if fp == 0:
return 1
s = tn / (tn + fp)
if s != s:
s = 1
return s
def tp_tn_fp_fn(output, target, t):
with torch.no_grad():
preds = output > t # torch.argmax(output, dim=1)
preds = preds.long()
num_true_neg = torch.sum((preds == target) & (target == 0), dtype=torch.float).item()
num_true_pos = torch.sum((preds == target) & (target == 1), dtype=torch.float).item()
num_false_pos = torch.sum((preds != target) & (target == 1), dtype=torch.float).item()
num_false_neg = torch.sum((preds != target) & (target == 0), dtype=torch.float).item()
return num_true_pos, num_true_neg, num_false_pos, num_false_neg
</code></pre>
<p>In terms of the calculation itself it is easy enough to understand.</p>
<p>What I don't understand is <code>s != s</code>. What does that check do, how can the two <code>s</code> even be different?</p>
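<p>The comparison <code>s != s</code> is only true when <code>s</code> is IEEE-754 NaN, the one value that is not equal to itself, so the snippet is effectively a NaN check:</p>

```python
import math

s = float("nan")

# NaN compares unequal to everything, including itself.
print(s != s)         # True: the idiom used in the snippet
print(math.isnan(s))  # True: the more explicit spelling
```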
|
<python><pytorch>
|
2023-01-25 11:03:37
| 1
| 2,713
|
PrimuS
|
75,233,016
| 7,800,760
|
Managing python project CI/CD with Conda, Poetry and pip
|
<p>I have read through a couple of dozens of writeups on how to equip a modern python project to automate linting, testing, coverage, type checking etc (and eventually deploy it on cloud servers. Not interested in the latter part yet.)</p>
<p>I am thinking about using <strong>conda</strong> as my environment manager. This would give me the advantage of being able to install non python packages if needed and also by creating the project environment with a specified python version I believe it would replace <strong>pyenv</strong>.</p>
<p>Within the newly conda created environment I would use <strong>poetry</strong> to manage dependencies and to initialize its TOML and add other needed python packages which would be installed by <strong>pip</strong> from PyPI.</p>
<p>Did I get the above right?</p>
<p>To complete the "building" phase I am also looking into using <strong>pytest</strong> and <strong>pytest-cov</strong> for code coverage, <strong>mypy/pydantic</strong> for type checking and <strong>black</strong> for formatting.</p>
<p>All of this should work both on my local development machine but then when I push on <strong>GitHub</strong> trigger an <strong>Action</strong> and perform the same checks, so that any contributor will go through them. Currently managed to do it for pylint, pytest and coverage for very simple projects without requirements.</p>
<p>Does all of this make sense? Am I missing some important component/step? For example, I'm trying to understand whether tox would help in this workflow by automating testing on different Python versions, but I haven't come to grips with integrating this flood of (new to me) concepts. Thanks</p>
|
<python><continuous-integration><github-actions><conda><python-poetry>
|
2023-01-25 11:01:35
| 0
| 1,231
|
Robert Alexander
|
75,232,897
| 1,654,930
|
Is it possible to get the iteration number of a lambda map?
|
<p>I have this simple map:</p>
<pre><code>list( map(lambda p: mappingFunction(p,index?), data )
</code></pre>
<p>I'd like, in my mapping function, to be able to access the index, so I have 0, 1, 2, 3, ... and can track the iteration number.
Is that possible?</p>
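<p>The usual way (a sketch) is to feed <code>enumerate</code> into <code>map</code>, so each element arrives paired with its index (<code>mapping_function</code> below stands in for the asker's <code>mappingFunction</code>):</p>

```python
data = ["a", "b", "c"]

def mapping_function(value, index):
    # Hypothetical stand-in for the asker's mappingFunction.
    return f"{index}:{value}"

# enumerate yields (index, value) pairs; unpack them in the lambda.
result = list(map(lambda pair: mapping_function(pair[1], pair[0]),
                  enumerate(data)))
print(result)  # ['0:a', '1:b', '2:c']
```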
|
<python>
|
2023-01-25 10:51:46
| 2
| 6,752
|
Phate
|
75,232,776
| 5,928,682
|
Test case where assert should check the content of the response body
|
<p>I am new to writing test cases.</p>
<p>I am trying to check if the response body has a certain value.</p>
<p>The response is a PowerShell script, and I want to check whether that script contains the modified values.</p>
<pre><code>def test_get_script_remotes_auth():
payload = json.dumps("")
headers = {"Authorization": token, "Accept": "*/*"}
response_target_api = requests.request("GET", url, headers=headers)
assert response_target_api.status_code == 200
</code></pre>
<p>It is straightforward to check the status code, but what I wanted to know is: how can I check the content of the body?</p>
<p>Please bear with the trivial question.</p>
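<p>For checking the body, <code>requests</code> exposes it as <code>response.text</code> (a string) or <code>response.json()</code>. A minimal illustration with a stand-in response object (the real test would assert on the actual response; the value checked here is made up):</p>

```python
class FakeResponse:
    # Minimal stand-in for requests.Response, just for illustration.
    status_code = 200
    text = 'Write-Host "modified-value"'

response = FakeResponse()
assert response.status_code == 200
# For a text body such as a PowerShell script, assert on response.text:
assert "modified-value" in response.text
```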
|
<python><amazon-web-services><pytest>
|
2023-01-25 10:42:02
| 1
| 677
|
Sumanth Shetty
|
75,232,761
| 3,344,139
|
How do I run poetry install in a subprocess without reference to the current virtualenv?
|
<p>I am writing some auto setup code in a command line tool that we have.
This command line tool is installed with poetry, so when someone is invoking the command line tool, it is from within a virtual environment setup by poetry.</p>
<p>The command I am trying to invoke is <code>poetry install</code> for <em>another directory</em>.
I want this command to create a new virtualenv for the other directory.</p>
<p>What I am currently doing is</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
other_directory = "/some/path"
subprocess.run([f"cd {other_directory} && poetry install && poetry env info"], cwd=other_directory, shell=True, check=True)
</code></pre>
<p>But that always installs the dependencies from the other repo into the virtualenv that I am using to run this script. I have also tried to use the path to the system poetry instead of the one in the virtualenv, but that doesn't work either.
The <code>poetry env info</code> command always outputs the info for the current virtualenv:</p>
<pre><code>Virtualenv
Python: 3.10.9
Implementation: CPython
Path: /home/veith/.cache/pypoetry/virtualenvs/my-cli-tool-Qe2jDmlM-py3.10
Executable: /home/veith/.cache/pypoetry/virtualenvs/my-cli-tool-Qe2jDmlM-py3.10/bin/python
Valid: True
</code></pre>
<p>And if I manually <code>cd</code> to the other directory and run <code>poetry env info</code>, it always shows that no virtualenv is set up:</p>
<pre><code>Virtualenv
Python: 3.10.9
Implementation: CPython
Path: NA
Executable: NA
</code></pre>
<p>The output I am looking for here would be:</p>
<pre><code>Virtualenv
Python: 3.10.9
Implementation: CPython
Path: /home/veith/.cache/pypoetry/virtualenvs/my-other-directory-HASH-py3.10
Executable: /home/veith/.cache/pypoetry/virtualenvs/my-other-directory-HASH-py3.10/bin/python
Valid: True
</code></pre>
<p>Is there any way to achieve this? I don't even understand where poetry knows the current virtualenv from if I set the <code>cwd</code> keyword in the subprocess call.</p>
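<p>One thing worth trying (a sketch, not verified against every Poetry version): Poetry detects an already-active virtualenv through inherited environment variables such as <code>VIRTUAL_ENV</code>, so stripping those from the child process's environment may let it create and use the other project's own env:</p>

```python
import os
import subprocess

def clean_env():
    # Drop the variables that tell Poetry "a virtualenv is already
    # active"; without them it resolves the target project's env.
    return {k: v for k, v in os.environ.items()
            if k not in ("VIRTUAL_ENV", "POETRY_ACTIVE")}

def poetry_install(project_dir):
    # Run `poetry install` for another directory with the cleaned env.
    subprocess.run(["poetry", "install"], cwd=project_dir,
                   env=clean_env(), check=True)
```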
|
<python><subprocess><virtualenv><python-poetry>
|
2023-01-25 10:40:42
| 1
| 4,885
|
RunOrVeith
|
75,232,570
| 2,046,185
|
How to run a python file as a module with the "Run Python File" button in VS Code?
|
<p>I have Visual Studio Code 1.74.3 with the Microsoft Python extension v2022.20.2.</p>
<p>I am talking about the button to run a python file and specifically not about run/debug configurations or tasks.</p>
<p><a href="https://i.sstatic.net/Vaahz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vaahz.png" alt="enter image description here" /></a></p>
<p>Per default this button seems to use the configured python interpreter and runs: <code>python.exe filename.py</code>.</p>
<p>However, I want to run files as a module: <code>python.exe -m filename</code></p>
<p>Is that possible somehow?</p>
<p>I found the setting "python.terminal.launchArgs" to add the "-m", but then there is still the problem that just the filename is required without the ".py" extension.</p>
|
<python><visual-studio-code>
|
2023-01-25 10:26:30
| 2
| 617
|
Fabian
|
75,232,413
| 13,840,270
|
Plotly Scatterplot3d display x-y grid that does not "climb" z-axis
|
<p><a href="https://i.sstatic.net/tWBGZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tWBGZ.png" alt="Desired plot (Pang 2021)" /></a></p>
<p><a href="https://i.sstatic.net/aKVpv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aKVpv.png" alt="My plot with gridlines on z-axis" /></a></p>
<p>I am currently trying to rebuild the plot from the first figure (from Pang et al. 2021) in plotly. However, I cannot find a setting that prevents the x-y grid from also climbing up the z-axis (second figure). My code is the following.</p>
<pre><code>f.update_layout(
scene = dict(
xaxis = dict(
gridcolor = "black",
showbackground = False
),
yaxis = dict(
showbackground = False,
gridcolor = "black"
),
zaxis = dict(
showbackground = False
)
))
</code></pre>
|
<python><plotly><scatter3d>
|
2023-01-25 10:11:15
| 0
| 3,215
|
DuesserBaest
|
75,232,276
| 1,497,720
|
`index` in OpenGPT API output
|
<p>The following is the code</p>
<pre><code>import os
import openai
openai.api_key = "..."
response = openai.Completion.create(
model="text-davinci-003",
prompt="I am happy!",
temperature=0, #creativity
max_tokens=10,
top_p=1,
frequency_penalty=0.0,
presence_penalty=0.0,
suffix='I am even more happy!'
)
print(response)
</code></pre>
<p>Following is the output</p>
<pre><code>{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"text": "\n\nI am happy because I am surrounded by"
}
],
"created": 1674640360,
"id": "cmpl-6cWkK124234ho8C2134afasdasdnwDKLUMP",
"model": "text-davinci-003",
"object": "text_completion",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 10,
"total_tokens": 20
}
}
</code></pre>
<p>What does the <code>index</code> in above following output represent?</p>
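<p>For what it's worth, <code>index</code> appears to be the position of the choice within the returned <code>choices</code> list, which matters when several completions are requested with <code>n &gt; 1</code>. A sketch with a hypothetical response shape:</p>

```python
# Hypothetical response with n=2: each choice carries its position.
response = {
    "choices": [
        {"index": 0, "text": "first completion"},
        {"index": 1, "text": "second completion"},
    ]
}

# `index` lets you recover the original ordering of the choices.
ordered = sorted(response["choices"], key=lambda c: c["index"])
print([c["text"] for c in ordered])  # ['first completion', 'second completion']
```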
|
<python><openai-api>
|
2023-01-25 09:58:30
| 1
| 18,765
|
william007
|
75,232,117
| 15,476,955
|
get the list of partition keys in a dynamodb with boto3
|
<p>Actually, I'm using <code>scan</code> and taking the partition key from every item, but that's really not efficient; my DynamoDB table is too big and it takes too much time.</p>
<p>Is there a way to <code>query</code> only the partition key so we have an optimized unexpensive way to get all the partition key of a dynamoDB with boto3 in python ?</p>
<p>My precise goal is to get information from the 50 latest elements of my DynamoDB table.
My table has a lot of information in the <code>form_data</code> column, so first I want to get the IDs of the 50 latest <code>creation_file_date</code> values in an inexpensive call, so that afterwards I can make optimized calls for every element.</p>
<p><a href="https://i.sstatic.net/4qzHz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4qzHz.png" alt="dynamoDB example" /></a></p>
|
<python><amazon-dynamodb><boto3>
|
2023-01-25 09:46:13
| 1
| 1,168
|
Utopion
|
75,232,007
| 12,083,557
|
IOError when opening/writing file with "a" mode
|
<p>I have the following Python code:</p>
<pre><code>try:
with open(self.file_name, "a") as log_file:
        log_file.write(...)
except IOError:
print("I couldn't open the file")
</code></pre>
<p>The file is opened with <strong>"a"</strong> mode, so it appends the text at the end of the file AND, if the file doesn't exist, it creates one.</p>
<p>Now, <code>IOError</code> occurs when the file that we passed in as argument does not exist or it has a different name (or the file location path is incorrect).</p>
<p>So in what case Python will throw such an error when we use the "a" mode? When we run out of disk space? Are there any other cases?</p>
|
<python><io>
|
2023-01-25 09:36:08
| 0
| 337
|
Life after Guest
|
75,231,984
| 6,266,810
|
read_sql in chunks with polars
|
<p>I am trying to read a large database table with polars. Unfortunately, the data is too large to fit into memory and the code below eventually fails.</p>
<p>Is there a way in polars to define a chunk size and write these chunks to parquet, or to use the lazy dataframe interface to keep the memory footprint low?</p>
<pre><code>import polars as pl
df = pl.read_sql("SELECT * from TABLENAME", connection_string)
df.write_parquet("output.parquet")
</code></pre>
|
<python><dataframe><python-polars>
|
2023-01-25 09:34:21
| 3
| 996
|
WilliamEllisWebb
|
75,231,942
| 9,374,372
|
How do I make sure a super method is called on child classes method on Python?
|
<p>How can I ensure, in a parent class, that <code>super()</code> is called in child-class methods which override the parent method? I found this question on SO for other languages, but not for Python.</p>
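<p>One pattern is to have the parent use <code>__init_subclass__</code> to wrap any override so the parent implementation always runs. A sketch; the method name <code>process</code> and the call-tracking <code>calls</code> list are illustrative stand-ins:</p>

```python
class Base:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # If the subclass overrides `process`, wrap the override so the
        # parent implementation always runs first.
        if 'process' in cls.__dict__:
            override = cls.__dict__['process']

            def wrapped(self, *args, **kw):
                Base.process(self, *args, **kw)   # forced parent call
                return override(self, *args, **kw)

            cls.process = wrapped

    def process(self):
        self.calls = getattr(self, 'calls', []) + ['base']

class Child(Base):
    def process(self):          # never calls super() itself
        self.calls = self.calls + ['child']

c = Child()
c.process()
```

<p>Here <code>c.calls</code> ends up as <code>['base', 'child']</code> even though <code>Child.process</code> never calls <code>super()</code>.</p>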
|
<python><inheritance><super>
|
2023-01-25 09:29:52
| 3
| 505
|
Fernando Jesus Garcia Hipola
|
75,231,870
| 16,306,516
|
sort a list of string numerically in python
|
<p>I know there are many questions regarding this type of sorting; I tried many times by referring to those questions and also by going through the <code>re</code> topic in Python too</p>
<p>My question is:</p>
<pre><code>class Example(models.Model):
_inherit = 'sorting.example'
def unable_to_sort(self):
data_list = ['Abigail Peterson Jan 25','Paul Williams Feb 1','Anita Oliver Jan 24','Ernest Reed Jan 28']
self.update({'list_of_birthday_week': ','.join(r for r in data_list)})
</code></pre>
<p>I need it to be <code>sorted</code> according to the <code>month</code> & <code>date</code>, like:</p>
<pre><code>data_list = ['Anita Oliver Jan 24','Abigail Peterson Jan 25','Ernest Reed Jan 28','Paul Williams Feb 1']
</code></pre>
<p>Is there any way to achieve this?</p>
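<p>One approach is to parse the trailing "Mon day" part of each entry into a <code>datetime</code> and use that as the sort key. A sketch, assuming every entry ends with a month abbreviation and a day:</p>

```python
from datetime import datetime

data_list = ['Abigail Peterson Jan 25', 'Paul Williams Feb 1',
             'Anita Oliver Jan 24', 'Ernest Reed Jan 28']

def birthday_key(entry):
    # the last two whitespace-separated tokens are the month and the day
    month, day = entry.rsplit(' ', 2)[-2:]
    return datetime.strptime(f"{month} {day}", "%b %d")

data_list = sorted(data_list, key=birthday_key)
```

<p>Note this treats the dates as belonging to one year; if the list can wrap around a year boundary (e.g. Dec followed by Jan), the key would need extra handling.</p>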
|
<python>
|
2023-01-25 09:23:32
| 4
| 726
|
Sidharth Panda
|
75,231,843
| 14,125,436
|
Removing duplicate legend entries from the animation of a 3D plot in Python
|
<p>I am exporting an animation in Python, but the legend is repeated. I have only one plot and want a single legend item in every frame of the animation. This is my script:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
x = np.linspace(0., 10., 100)
y = np.linspace(0., 10., 100)
z = np.random.rand(100)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot (111, projection="3d")
def init():
# Plot the surface.
ax.scatter3D(x, y, z, label='random', s=10)
ax.set_zlabel('Z [m]')
ax.set_ylabel('Y [m]')
ax.set_xlabel('X [m]')
plt.legend()
ax.grid(None)
return fig,
def animate(i):
ax.view_init(elev=20, azim=i)
return fig,
# Animate
ani = animation.FuncAnimation(fig, animate, init_func=init,
frames=360, interval=200, blit=True)
# Export
ani.save('random data.gif', writer='pillow', fps=30, dpi=50)
</code></pre>
<p>And this is the animation in which legend is repeated three times:</p>
<p><a href="https://i.sstatic.net/Qb6Kh.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qb6Kh.gif" alt="enter image description here" /></a></p>
<p>I very much appreciate any help.</p>
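<p>A likely cause is that <code>FuncAnimation</code> may invoke <code>init_func</code> more than once (e.g. for the initial draw and again when saving), so every call adds another scatter and another legend entry. A sketch that draws the data once, outside <code>init</code> (the <code>Agg</code> backend is used here only so the sketch runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

x = np.linspace(0., 10., 100)
y = np.linspace(0., 10., 100)
z = np.random.rand(100)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection="3d")

# Draw the data once, up front: if init_func draws it, each extra call to
# init_func adds another scatter plus another legend entry.
ax.scatter3D(x, y, z, label='random', s=10)
ax.set_xlabel('X [m]')
ax.set_ylabel('Y [m]')
ax.set_zlabel('Z [m]')
ax.legend()

def init():
    return fig,

def animate(i):
    ax.view_init(elev=20, azim=i)
    return fig,

ani = animation.FuncAnimation(fig, animate, init_func=init,
                              frames=4, interval=200, blit=False)
```

<p>With the plotting moved out of <code>init</code>, the legend keeps exactly one entry no matter how many times the init function is called.</p>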
|
<python><matplotlib><matplotlib-animation>
|
2023-01-25 09:20:40
| 3
| 1,081
|
Link_tester
|
75,231,661
| 6,695,762
|
GCP Dataflow ReadFromKafka creating a lot of connections
|
<p>We are creating a Dataflow job using Python to read data from Kafka (Amazon MSK, 6 brokers, 5-partition topic). The Dataflow job is deployed in a VPC that has a Cloud NAT (single public IP), and this IP is fully allowed for outbound traffic on the AWS side.</p>
<p>I turned on <code>commit_offset_in_finalize=True</code> and set <code>group.id</code>. Also disabled <code>enable.auto.commit</code>.</p>
<p>In worker logs I can see that there are following warnings spawning all the time:</p>
<p><code>[Consumer clientId=consumer-Reader-2_offset_consumer_452577593_my-group-id-695, groupId=Reader-2_offset_consumer_452577593_my-group-id] Connection to node -3 (b-3-public.some-cluster-name.amazonaws.com/XXX.XXX.XXX.XXX:YYYY) could not be established. Broker may not be available.</code></p>
<p><code>[Consumer clientId=consumer-Reader-2_offset_consumer_1356187250_my-group-id-640, groupId=Reader-2_offset_consumer_1356187250_my-group-id] Bootstrap broker b-3-public.some-cluster-name.amazonaws.com:YYYY(id: -3 rack: null) disconnected</code></p>
<p><code>org.apache.kafka.common.errors.TimeoutException: Timeout of 300000ms expired before the position for partition my-topic-4 could be determined</code></p>
<p>And also errors:</p>
<p><code>org.apache.kafka.common.errors.TimeoutException: Timeout of 300000ms expired before successfully committing offsets {my-topic-1=OffsetAndMetadata{offset=13610611, leaderEpoch=null, metadata=''}}</code></p>
<p>There are not many events, around 5/sec, so there is no load at all.</p>
<p>I logged into the VM that hosts my job and ran <code>toolbox</code> to see what is happening.</p>
<p>I noticed that there are a lot of connections being created all the time to reach Kafka. With all the parameters as below, it is 100-200 established connections. Earlier it was above 300-400, and SYN_SENT connections were piling up to 2000 connections in total, making the worker machine unable to connect to Kafka at all.</p>
<p>Any ideas what is causing this many connections?</p>
<p>The pipeline looks as follows:</p>
<pre><code>with Pipeline(options=pipeline_options) as pipeline:
(
pipeline
| 'Read record from Kafka' >> ReadFromKafka(
consumer_config={
'bootstrap.servers': bootstrap_servers,
'group.id': 'my-group-id',
'default.api.timeout.ms' : '300000',
'enable.auto.commit' : 'false',
'security.protocol': 'SSL',
'ssl.truststore.location': truststore_location,
'ssl.truststore.password': truststore_password,
'ssl.keystore.location': keystore_location,
'ssl.keystore.password': keystore_password,
'ssl.key.password': key_password
},
topics=['my-topic'],
with_metadata=True,
commit_offset_in_finalize=True
)
| 'Format message element to name tuple' >> ParDo(
FormatMessageElement(logger, corrupted_events_table, bq_table_name)
)
| 'Get events row' >> ParDo(
BigQueryEventRow(logger)
)
| 'Write events to BigQuery' >> io.WriteToBigQuery(
table=bq_table_name,
dataset=bq_dataset,
project=project,
schema=event_table_schema,
write_disposition=io.BigQueryDisposition.WRITE_APPEND,
create_disposition=io.BigQueryDisposition.CREATE_IF_NEEDED,
additional_bq_parameters=additional_bq_parameters,
insert_retry_strategy=RetryStrategy.RETRY_ALWAYS
)
)
</code></pre>
<p>And here are start parameter (removed standard parameter):</p>
<pre><code>python3 streaming_job.py \
(...)
--runner DataflowRunner \
--experiments=use_runner_v2 \
--number_of_worker_harness_threads=1 \
--experiments=no_use_multiple_sdk_containers \
--sdk_container_image=${DOCKER_IMAGE} \
--sdk_harness_container_image_overrides=".*java.*,${DOCKER_IMAGE_JAVA}"
gcloud dataflow jobs run streaming_job \
(...)
--worker-machine-type=n2-standard-4 \
--num-workers=1 \
--max-workers=10
</code></pre>
<p>I tried modifying:</p>
<p><code>number_of_worker_harness_threads</code> - fewer threads, fewer connections</p>
<p><code>no_use_multiple_sdk_containers</code> - one SDK container per worker, fewer connections from the worker</p>
<p>more resources - more SDK containers, more connections</p>
<p><code>default.api.timeout.ms</code> - increasing it reduced the number of timeouts</p>
<p>And I ended up with the parameters above. There are still 100-200 connections, and ReadFromKafka is working like crazy while other stages have nothing to do</p>
<p><a href="https://i.sstatic.net/oVsee.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oVsee.png" alt="enter image description here" /></a></p>
|
<python><google-cloud-platform><apache-kafka><google-cloud-dataflow><apache-beam>
|
2023-01-25 09:02:02
| 1
| 629
|
fl0r3k
|
75,231,549
| 1,436,800
|
How to compare list of models with queryset in django?
|
<p>I have a serializer:</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
class Meta:
model = models.MyClass
fields = "__all__"
def validate(self, data):
user = self.context.get("request").user
users = data.get("users")
users_list = User.objects.filter(organization=user.organization)
return data
</code></pre>
<p><code>users</code> will print a list of model instances like this:
<code>[&lt;User: User 1&gt;, &lt;User: User 2&gt;]</code></p>
<p><code>users_list</code> will display a queryset:
<code>&lt;QuerySet [&lt;User: User 1&gt;, &lt;User: User 2&gt;, &lt;User: User 3&gt;]&gt;</code></p>
<p>I want to write a query which checks whether the list of models (e.g. <code>users</code>) is contained in the queryset <code>users_list</code>. How do I do that?</p>
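<p>In Django the usual approach is to compare primary keys rather than instances. A framework-free sketch of the containment check (the <code>FakeUser</code> class stands in for the model; with a real queryset you could build the pk set cheaply via <code>set(queryset.values_list("pk", flat=True))</code>):</p>

```python
class FakeUser:
    """Stand-in for a Django model instance; only the pk matters here."""
    def __init__(self, pk):
        self.pk = pk

users = [FakeUser(1), FakeUser(2)]                    # list from data.get("users")
users_list = [FakeUser(1), FakeUser(2), FakeUser(3)]  # stands in for the queryset

# Every submitted user must appear in the allowed set of primary keys.
allowed = {u.pk for u in users_list}
all_present = all(u.pk in allowed for u in users)
```

<p>In the serializer's <code>validate</code>, you would raise a <code>ValidationError</code> when <code>all_present</code> is false.</p>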
|
<python><django><django-rest-framework><django-queryset><django-serializer>
|
2023-01-25 08:51:36
| 3
| 315
|
Waleed Farrukh
|
75,231,386
| 4,720,018
|
How to serialize a Mock object?
|
<p>In my unit-tests there are some (nested) <a href="https://docs.python.org/3/library/unittest.mock.html#the-mock-class" rel="nofollow noreferrer"><code>unittest.mock.Mock</code></a> objects.</p>
<p>At some point, these <code>Mock</code> objects need to be serialized using <a href="https://docs.python.org/3/library/json.html#json.dumps" rel="nofollow noreferrer"><code>json.dumps</code></a>.</p>
<p>As expected, this raises a</p>
<pre class="lang-none prettyprint-override"><code>TypeError: Object of type Mock is not JSON serializable
</code></pre>
<p>There are many questions and answers on SO about making classes serializable. For example, there's <a href="https://stackoverflow.com/q/3768895">1</a>, but this does not provide an answer specific to <code>Mock</code> objects. The titles for <a href="https://stackoverflow.com/q/73989150">2</a> and <a href="https://stackoverflow.com/q/70764110">3</a> look promising, but these do not provide an answer either.</p>
<p>The obvious thing to do is use the <code>default</code> argument, as in</p>
<pre class="lang-py prettyprint-override"><code>json.dumps(my_mock_object, default=mock_to_dict)
</code></pre>
<p>The question is, what is the easiest way to implement <code>mock_to_dict()</code>?</p>
<p>The serialized result should only include custom attributes, so <code>Mock</code> internals should be disregarded.</p>
<p>The following implementation appears to do the job for simple cases</p>
<pre class="lang-py prettyprint-override"><code>def mock_to_dict(obj):
if isinstance(obj, Mock):
# recursive case
return {
key: mock_to_dict(value)
for key, value in vars(obj).items()
if not key.startswith('_') and key != 'method_calls'
}
# base case
return obj
</code></pre>
<p>Is there a simpler way to do this?</p>
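<p>For what it's worth, the helper above does behave as intended for simple nested cases. A usage sketch (the attribute names <code>kind</code>, <code>child</code>, and <code>count</code> are made up; note that <code>name</code> is special on <code>Mock</code>, so a different attribute is used):</p>

```python
import json
from unittest.mock import Mock

def mock_to_dict(obj):
    if isinstance(obj, Mock):
        # recursive case: keep only custom (non-internal) attributes
        return {
            key: mock_to_dict(value)
            for key, value in vars(obj).items()
            if not key.startswith('_') and key != 'method_calls'
        }
    # base case
    return obj

m = Mock()
m.kind = "widget"      # custom attributes land in vars(m)
m.child = Mock()
m.child.count = 3

result = json.loads(json.dumps(m, default=mock_to_dict))
```

<p>Only explicitly assigned attributes show up in <code>vars()</code>; auto-created child mocks that were merely accessed may not, which is a limitation to keep in mind.</p>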
|
<python><json><serialization><python-unittest.mock>
|
2023-01-25 08:34:28
| 0
| 14,749
|
djvg
|
75,231,185
| 11,067,209
|
How to get an isomorphic graph from another in networkx?
|
<p>Good morning, everyone.</p>
<p>I am currently writing a unit test for a function that processes graphs; it should give similar results when given isomorphic graphs. So, I would like to generate an isomorphic graph from a networkx graph, but I can't find whether that functionality exists. <a href="https://networkx.org/documentation/stable/reference/algorithms/isomorphism.html" rel="nofollow noreferrer">Here</a> it talks about the checks needed to see if two graphs are isomorphic, but not about how to get one.</p>
<p>Is there such functionality in the networkx library or does anyone know how to get one?</p>
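<p>Any bijective relabelling of the nodes yields an isomorphic graph, so shuffling the node labels with <code>nx.relabel_nodes</code> is enough. A sketch (the random graph and seeds are arbitrary):</p>

```python
import random
import networkx as nx

G = nx.erdos_renyi_graph(8, 0.4, seed=1)

# Build a random bijection old-label -> new-label and relabel a copy.
nodes = list(G.nodes())
shuffled = nodes[:]
random.Random(0).shuffle(shuffled)
H = nx.relabel_nodes(G, dict(zip(nodes, shuffled)))
```

<p><code>H</code> is a new graph that is isomorphic to <code>G</code> by construction, which is exactly what a unit test for isomorphism-invariance needs.</p>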
|
<python><graph><networkx><isomorphism>
|
2023-01-25 08:10:55
| 1
| 665
|
Angelo
|
75,231,162
| 7,359,831
|
Polars: settings not to display ellipsis
|
<p>Polars chops some text instead of showing the full text, like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Link</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>https://...</td>
<td>name1</td>
</tr>
<tr>
<td>https://...</td>
<td>name2</td>
</tr>
</tbody>
</table>
</div>
<p>I want Polars to show the full text of the <code>Link</code> column</p>
<p>How can I do that?</p>
|
<python><python-3.x><python-polars>
|
2023-01-25 08:08:15
| 1
| 1,786
|
Pytan
|
75,231,091
| 7,122,272
|
deepface: Don't print logs from MTCNN backend
|
<p>I have a very simple code that I use for face detection from an image, for example:</p>
<pre><code>from deepface.commons import functions
import numpy as np
random_image = np.random.randint(
0, 255, size=(360, 360, 3)
)
detected_face = functions.detect_face(
img=random_image,
detector_backend="mtcnn",
enforce_detection=False,
)[0]
</code></pre>
<p>This code prints out the following logs (made by MTCNN backend):</p>
<pre><code>1/1 [==============================] - 0s 24ms/step
1/1 [==============================] - 0s 15ms/step
1/1 [==============================] - 0s 13ms/step
1/1 [==============================] - 0s 11ms/step
1/1 [==============================] - 0s 11ms/step
1/1 [==============================] - 0s 11ms/step
1/1 [==============================] - 0s 10ms/step
1/1 [==============================] - 0s 9ms/step
1/1 [==============================] - 0s 10ms/step
5/5 [==============================] - 0s 4ms/step
1/1 [==============================] - 0s 16ms/step
</code></pre>
<p>Is there a way to suppress deepface from printing these logs, please?</p>
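<p>Those <code>1/1 [====]</code> lines come from Keras progress output on stdout, so a blunt, framework-agnostic option is to redirect stdout around the call with <code>contextlib</code>. A sketch with a stand-in function in place of <code>functions.detect_face(...)</code>:</p>

```python
import io
import contextlib

def noisy_detect():
    # stand-in for functions.detect_face(...): Keras writes its
    # "1/1 [====] - 0s" progress lines to stdout during predict()
    print("1/1 [==============================] - 0s 24ms/step")
    return "detected_face"

# Capture everything the call prints; the return value is unaffected.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    result = noisy_detect()
```

<p>This hides the progress lines from the console while keeping them inspectable in <code>buf</code> if needed; TensorFlow-specific knobs (e.g. disabling interactive logging) may be cleaner but depend on the TF version.</p>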
|
<python><logging><deepface>
|
2023-01-25 08:00:34
| 2
| 7,715
|
Jaroslav BezdΔk
|
75,231,045
| 3,318,528
|
Python Flet Async
|
<p>I'm trying to use the Flet library with an async function to log in to Telegram.
The functionality at the moment is really basic: it just detects whether the user is already logged in, and if not, opens a login page with a phone number field and a button:</p>
<pre><code>import flet as ft
from flet import AppBar, ElevatedButton, Page, Text, View, colors
from telethon import TelegramClient
import sys
import re
from asyncio import new_event_loop, run
# you can get telegram development credentials in telegram API Development Tools
api_id = '***'
api_hash = '***'
client = TelegramClient('session_name', api_id, api_hash)
def main(page: Page):
page.title = "Tel"
def startup_async():
new_event_loop().run_until_complete(startup())
def get_verif_async(phone_num):
print('ciao')
new_event_loop().run_until_complete(get_verification_code(phone_num))
async def get_verification_code(phone_number):
if phone_number and re.match(r"^\+\d+$", phone_number):
await client.send_code_request(phone_number)
else:
page.add(ft.Text(value='errore'))
#view.update()
async def startup():
print('startup')
await client.connect()
if not await client.is_user_authorized():
page.route = "/login_screen"
else:
page.route = "/homepage"
def route_change(e):
page.views.clear()
if page.route == "/login_screen":
phone_num_field = ft.TextField(hint_text="Your phone number", expand=True)
page.views.append(
View(
"/login_screen",
[
AppBar(title=Text("Login"), bgcolor=colors.SURFACE_VARIANT),
phone_num_field,
ElevatedButton(text='Get code', on_click= get_verif_async(phone_num_field.value)),
],
)
)
if page.route == "/homepage":
page.views.append(
View(
"/homepage",
[
AppBar(title=Text("homepage"), bgcolor=colors.SURFACE_VARIANT),
],
)
)
page.update()
def view_pop(e):
page.views.pop()
top_view = page.views[-1]
page.go(top_view.route)
# async script startup
startup_async()
page.on_route_change = route_change
page.on_view_pop = view_pop
page.go(page.route)
ft.app(target=main)
</code></pre>
<p>I don't know what I'm doing wrong, but the function <code>get_verification_code</code> is executed right at startup, even though I don't click the button the function is linked to. Why?</p>
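<p>The culprit is likely <code>on_click=get_verif_async(phone_num_field.value)</code>: writing <code>f(x)</code> calls the function while the view is being built and registers its return value (<code>None</code>), instead of registering a callable. A framework-free sketch reproducing the bug and the lambda fix (<code>register</code>, <code>get_code</code>, and the <code>calls</code> list are stand-ins for the button wiring):</p>

```python
handlers = []

def register(on_click):
    """Stand-in for ElevatedButton(..., on_click=...)."""
    handlers.append(on_click)

calls = []

def get_code(num):
    calls.append(num)

# Bug reproduced: get_code("123") runs IMMEDIATELY at registration time,
# and what gets registered is its return value, None.
register(get_code("123"))
calls_at_build_time = list(calls)

calls.clear()
handlers.clear()

# Fix: register a callable that closes over the value; it only runs on click.
register(lambda e=None: get_code("123"))
calls_before_click = list(calls)
handlers[0](None)                 # simulate the click event
```

<p>In the Flet code, that means <code>on_click=lambda e: get_verif_async(phone_num_field.value)</code>, so the field is also read at click time rather than at build time.</p>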
|
<python><flet>
|
2023-01-25 07:54:55
| 1
| 335
|
Val
|
75,231,014
| 16,891,669
|
Understanding input in iter
|
<p>I was looking for solutions to take multiline input in python. I found <a href="https://stackoverflow.com/a/11664675/16891669">this</a> answer which uses the following code.</p>
<pre class="lang-py prettyprint-override"><code>sentinel = '' # ends when this string is seen
for line in iter(input, sentinel):
pass # do things here
</code></pre>
<p>I read from the Python docs that if <code>iter</code> receives the second argument then it will call the <code>__next__()</code> of the first argument. But I don't think <code>input</code> has <code>__next__()</code> implemented (I am not able to verify this either through the docs or by surfing through the source code). Can someone explain how it's working?</p>
<p>Also, I observed this weird behaviour with the following code.</p>
<pre><code>sentinel = ''
itr = iter(input, sentinel)
print("Hello")
print(set(itr))
</code></pre>
<p>Here is the output</p>
<pre><code>[dvsingla Documents]$ python3 temp.py
Hello
lksfjal
falkja
aldfj
{' aldfj', 'falkja', 'lksfjal'}
[dvsingla Documents]$
</code></pre>
<p>The prompt starts taking input only after printing <em>Hello</em>, which does not follow a line-by-line interpretation.</p>
<p>Thanks for any help</p>
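<p>The key point is that two-argument <code>iter(callable, sentinel)</code> does not need <code>__next__</code> on the callable: it builds a <code>callable_iterator</code> that invokes the callable each time a value is needed and stops when the sentinel is returned. That's also why nothing is read until the iterator is consumed. A sketch with a stand-in for <code>input()</code>:</p>

```python
lines = iter(["lksfjal", "falkja", ""])

def fake_input():
    # stand-in for input(): returns the next "typed" line each call
    return next(lines)

# iter(callable, sentinel): each __next__ call invokes the callable;
# iteration stops (lazily) when it returns the sentinel "".
it = iter(fake_input, "")
collected = list(it)   # nothing is "read" until this consumption step
```

<p>This mirrors the observed behaviour: <code>iter(input, sentinel)</code> itself is instant, and the prompts only appear when <code>set(itr)</code> starts consuming the iterator.</p>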
|
<python>
|
2023-01-25 07:51:23
| 1
| 597
|
Dhruv
|
75,230,803
| 11,332,693
|
Row wise concatenation and replacing nan with common column values
|
<p>Below is the input data, df1:</p>
<pre><code>A B C D E F G
Messi Forward Argentina 1 Nan 5 6
Ronaldo Defender Portugal Nan 4 Nan 3
Messi Midfield Argentina Nan 5 Nan 6
Ronaldo Forward Portugal 3 Nan 2 3
Mbappe Forward France 1 3 2 5
</code></pre>
<p>Below is the intended output</p>
<p>df</p>
<pre><code>A B C D E F G
Messi Forward,Midfield Argentina 1 5 5 6
Ronaldo Forward,Defender Portugal 3 4 2 3
Mbappe Forward France 1 3 2 5
</code></pre>
<p>My try:</p>
<pre><code>df.groupby(['A','C'])['B'].agg(','.join).reset_index()
df.fillna(method='ffill')
</code></pre>
<p>Do we have a better way to do this ?</p>
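<p>The two steps can be combined into a single <code>groupby</code> by joining <code>B</code> and taking the first non-null value of the numeric columns (<code>"first"</code> skips NaN within each group). A sketch reconstructing the sample data; note the joined order follows row order, so Ronaldo comes out as "Defender,Forward":</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": ["Messi", "Ronaldo", "Messi", "Ronaldo", "Mbappe"],
    "B": ["Forward", "Defender", "Midfield", "Forward", "Forward"],
    "C": ["Argentina", "Portugal", "Argentina", "Portugal", "France"],
    "D": [1, np.nan, np.nan, 3, 1],
    "E": [np.nan, 4, 5, np.nan, 3],
    "F": [5, np.nan, np.nan, 2, 2],
    "G": [6, 3, 6, 3, 5],
})

# One pass: concatenate B per group, take first non-null for the rest.
out = (df.groupby(["A", "C"], sort=False, as_index=False)
         .agg({"B": ",".join, "D": "first", "E": "first",
               "F": "first", "G": "first"}))
```

<p>This avoids the separate <code>fillna(method='ffill')</code> step, which can leak values across different players.</p>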
|
<python><pandas><dataframe><string-concatenation><method-missing>
|
2023-01-25 07:27:49
| 2
| 417
|
AB14
|
75,230,775
| 5,015,382
|
Why is this tkinter function not displaying any pictures
|
<p>I have code to display 2x5 images and change them when I click on them. However, the code I wrote does not display any images in the tkinter window. Why?</p>
<p>Some details:</p>
<ul>
<li>the URLs are working fine</li>
</ul>
<pre><code>import tkinter as tk
from PIL import Image, ImageTk
# Create the main window
root = tk.Tk()
# Create a list of images to display
images = ['https://lh3.googleusercontent.com/SsEIJWka3_cYRXXSE8VD3XNOgtOxoZhqW1uB6UFj78eg8gq3G4jAqL4Z_5KwA12aD7Leqp27F653aBkYkRBkEQyeKxfaZPyDx0O8CzWg=s0',
'https://lh3.googleusercontent.com/Bawo7r1nPZV6sJ4OHZJHdKV_4Ky59vquAR7KoUXcNZgx9fqTaOW-QaOM9qoyYhOTAopzjt9OIfW06RMwa-9eJW9KjQw=s0',
'https://lh3.googleusercontent.com/tm1DbZrAP0uBM-OJhLwvKir1Le5LglRF_bvbaNi6m-F_pIyttsWQz040soRY9pWA9PgNEYFA_fBkg_keYixRXCAjz9Q=s0',
'https://lh3.googleusercontent.com/AyiKhdEWJ7XmtPXQbRg_kWqKn6mCV07bsuUB01hJHjVVP-ZQFmzjTWt7JIWiQFZbb9l5tKFhVOspmco4lMwqwWImfgg=s0',
'https://lh3.googleusercontent.com/FNNTrTASiUR0f49UVUY5bisIM-3RlAbf_AmktgnU_4ou1ZG0juh3pMT1-xpQmtN1R8At4Gq9B4ioSSi4TVrgbCZsmtY=s0',
'https://lh3.googleusercontent.com/mAyAjvYjIeAIlByhJx1Huctgeb58y7519XYP38oL1FXarhVlcXW7kxuwayOCFdnwtOp6B6F0HJmmws-Ceo5b_pNSSQs=s0',
'https://lh3.googleusercontent.com/gShVRyvLLbwVB8jeIPghCXgr96wxTHaM4zqfmxIWRsUpMhMn38PwuUU13o1mXQzLMt5HFqX761u8Tgo4L_JG1XLATvw=s0',
'https://lh3.googleusercontent.com/KA2hIo0BlMDmyQDEC3ixvp9WHgcyJnlAvWtVcZmExh9ocPoZdQGRJh7bZjE2Mx2OGC0Zi3QGHGP0LlmuFgRlIYs36Sgn5G2OD-0MaTo=s0',
'https://lh3.googleusercontent.com/N2m90mImdcoLacUybb_rxcktTwtr0LFhtuzxbSE9elIhElF6jpWngx96_uZ0L1TGNof5pNt4n_Ygb4KYlPTpA9o6788=s0',
'https://lh3.googleusercontent.com/1pTfYJlLwVTifKj4PlsWPyAg4PcIVBAiVvB8sameSnmm7HRd056abNUIRq33rgry7u9t-ju-eHOnbfqQpK4q_8IwzIXZ4WgrqZW9l7U=s0',
'https://lh3.googleusercontent.com/0bgOiMrBM2GuhW_pNeQW711GuB3kD7Gq7AILGHaJGeWKa1Fu1hUJGpOjvSpiP_XpgRlC4jVmH0Z1233PEPMJTfNRR7Q=s0',
'https://lh3.googleusercontent.com/x9NFmu-RbZ_9M5BK_hOzQRdVj4pu7p--y_IYwDK46lDPzQtTIO9AlBV_ObgQiY7GeWE0ZfNjMSyrCWgnwL4MCasQZQ=s0']
# Create a variable to keep track of the current image
current_image = [0,0,0,0,0,0,0,0,0,0]
# Create a grid of labels to display the images
image_grid = [[tk.Label(root) for _ in range(5)] for _ in range(2)]
# Function to change the image
def change_image(x,y):
global current_image
current_image[x*5+y] += 1
if current_image[x*5+y] >= len(images):
current_image[x*5+y] = 0
image = Image.open(BytesIO(requests.get(images[current_image[x*5+y]]).content))
image = image.resize((256,256))
print(image)
photo = ImageTk.PhotoImage(image)
image_grid[x][y].config(image=photo)
image_grid[x][y].image = photo
# Bind labels to the function
for i in range(2):
for j in range(5):
image_grid[i][j].bind("<Button-1>", lambda event, x=i, y=j: change_image(x,y))
image_grid[i][j].grid(row=i, column=j)
# Start the main loop
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-01-25 07:24:16
| 3
| 452
|
Jan Janiszewski
|
75,230,541
| 8,026,274
|
Plotting complex graph in pandas
|
<p>I have the following dataset</p>
<pre><code>ids count
1 2000210
2 -23123
3 100
4 500
5 102300120
...
1 million 123213
</code></pre>
<p>I want a graph with the group of <code>ids</code> (all unique ids) on the x axis and <code>count</code> on the y axis, with a distribution chart that looks like the following:</p>
<p><a href="https://i.sstatic.net/DNEzk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DNEzk.png" alt="enter image description here" /></a></p>
<p>How can I achieve this with a pandas dataframe in Python?</p>
<p>I tried different ways but I only get a basic plot, not one as complex as the drawing.</p>
<p>What I tried</p>
<pre><code>df = pd.DataFrame(np.random.randn(1000000, 2), columns=["count", "ids"]).cumsum()
df["range"] = pd.Series(list(range(len(df))))
df.plot(x="range", y="count");
</code></pre>
<p>But the plots don't make any sense. I am also new to plotting in pandas. I searched for a long time for charts like this on the internet and could really use some help with such graphs</p>
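<p>The drawing looks like a rank plot: sort the rows by <code>count</code> and plot count against each id's position in the sorted order (the x axis is the rank, not the id value). A sketch with made-up data; the <code>Agg</code> backend is only there so it runs headless:</p>

```python
import matplotlib
matplotlib.use("Agg")          # headless backend, just for this sketch
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"ids": np.arange(1, 1001),
                   "count": rng.integers(-25_000, 110_000_000, size=1000)})

# Sort by count, then plot count against rank: this yields the rising
# curve in the drawing rather than a noisy per-id line.
ordered = df.sort_values("count").reset_index(drop=True)
ax = ordered["count"].plot()
ax.set_xlabel("ids (sorted by count)")
ax.set_ylabel("count")
```

<p>With heavily skewed counts, a log scale on the y axis (<code>ax.set_yscale("log")</code>, positive counts only) often matches such drawings better.</p>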
|
<python><pandas><dataframe>
|
2023-01-25 06:50:26
| 2
| 339
|
Mikasa
|
75,230,353
| 3,904,031
|
How to get control of the size and shape of images using .insert_image() with XlsxWriter in macOS?
|
<p>I'm trying to write a Python script that builds a workbook of thousands of thumbnails of images from a few hundred folders to a few hundred sheets (one sheet per top-level folder).</p>
<p>The formatting of my sheets depends upon getting the thumbnails to have a controlled size, but I'm getting inconsistent results.</p>
<p><a href="https://stackoverflow.com/questions/65325499/improper-scaling-of-large-images-in-xlsxwriter-insert-image-method#comment115493091_65325499">This comment on <em>Improper Scaling of Large Images in XlsxWriter insert_image Method</em></a> says:</p>
<blockquote>
<p>Also, <strong>watch out for scaling in Excel for macOS</strong> which can be different than versions of Excel for Windows.</p>
</blockquote>
<p>Related but different:</p>
<ul>
<li><a href="https://stackoverflow.com/q/65325499/3904031">Improper Scaling of Large Images in XlsxWriter insert_image Method</a></li>
<li><a href="https://stackoverflow.com/q/66776526/3904031">Scale images in a readable manner in xlsxwriter</a></li>
</ul>
<p><strong>Question:</strong> How can I get control of the size and shape of images using .insert_image() with XlsxWriter in macOS? Right now it seems to be ignoring x_scale, y_scale and choosing the final size and shape based on the column dimensions:</p>
<p>The image in column A is scaled in x and y as 150%, 165%, in column B as 150%, 150%.</p>
<p>I am using MSExcel for Mac 16.66.1 (circa 2019) and XlsxWriter 1.3.7</p>
<p><a href="https://i.sstatic.net/ISml2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ISml2.jpg" alt="example of thumbnail" /></a> Example of thumbnail, 200x200 pixels, 72 dpi</p>
<p><a href="https://i.sstatic.net/pIDNX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pIDNX.png" alt="example of .xlsx file created by XlsxWriter" /></a></p>
<pre><code>import xlsxwriter
# https://xlsxwriter.readthedocs.io/worksheet.html#insert_image
# https://xlsxwriter.readthedocs.io/example_images.html
def create_workbook(filename=None):
if not isinstance(filename, str) or len(filename) == 0:
filename = 'testwriter.xlsx'
if isinstance(filename, str) and filename[-5:] != '.xlsx':
filename += '.xlsx'
workbook = xlsxwriter.Workbook(filename)
workbook.window_width = 25000
workbook.window_height = 16000
fs = 14 # font size
formats = dict()
formats['header'] = workbook.add_format({'font_size': fs, 'align': 'center'}) # or fmt.set_font_size(14)
formats['info'] = workbook.add_format({'font_size': fs, 'align': 'left',
'font': 'Courier'})
return workbook, formats
def create_sheet(workbook, sheetname, formats, n_columns):
sheetname_local = str(sheetname)
if not isinstance(sheetname_local, str) or len(sheetname_local) == 0:
sheetname_local = 'noname'
sheet = workbook.add_worksheet(sheetname_local)
widths = [20] + n_columns*[50]
headings = ['name'] + ['imgs ' + str(i+1) for i in range(n_columns)]
for i, (w, h) in enumerate(zip(widths, headings)):
sheet.set_column(i, i, w) # https://stackoverflow.com/q/17326973/3904031
sheet.write(0, i, h, formats['header'])
return sheet
workbook, formats = create_workbook(filename='testAB.xlsx')
sheet = create_sheet(workbook, '2021', formats, n_columns=3)
img_fname = 'ima.jpg'
sheet.insert_image('A2', img_fname, {'x_scale': 1, 'y_scale': 1,
'object_position': 2})
sheet.insert_image('B13', img_fname, {'x_scale': 1, 'y_scale': 1,
'object_position': 2})
workbook.close()
</code></pre>
|
<python><python-3.x><macos><image><xlsxwriter>
|
2023-01-25 06:25:51
| 1
| 3,835
|
uhoh
|
75,230,198
| 169,992
|
What are the TensorFlow equivalents of these PyTorch functions?
|
<p>I am looking to port something from PyTorch to TensorFlow, and could use some help in making sure I map the functions correctly from one framework to the other. I have already started; for example, both frameworks have the same <code>torch.where</code> and <code>tf.where</code> function, and <code>torchTensor.clamp</code> is <code>tf.clip_by_value</code>. But some of the others are harder to find and I'm not sure of the exact mapping.</p>
<h4>Questionable</h4>
<ul>
<li><p><code>torchTensor.isclose -> ?</code></p>
</li>
<li><p><code>torch.all -> ?</code></p>
</li>
<li><p><code>torchTensor.clamp_max -> ?</code></p>
</li>
<li><p><code>torchTensor.gt -> ?</code> <a href="https://pytorch.org/docs/stable/generated/torch.gt.html#torch.gt" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.gt.html#torch.gt</a></p>
</li>
<li><p><code>torchTensor.lt -> ?</code></p>
</li>
<li><p><code>torchTensor.any -> ?</code> <a href="https://stackoverflow.com/questions/53401053/is-there-all-or-any-equivalent-in-python-tensorflow">Is there .all() or .any() equivalent in python Tensorflow</a></p>
</li>
<li><p><code>torch.unsqueeze -> ?</code> <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html#torch.unsqueeze" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.unsqueeze.html#torch.unsqueeze</a></p>
</li>
<li><p><code>torch.broadcastTensors -> ?</code></p>
</li>
<li><p><code>torchTensor.dim -> ?</code> <a href="https://stackoverflow.com/questions/36966316/how-to-get-the-dimensions-of-a-tensor-in-tensorflow-at-graph-construction-time">How to get the dimensions of a tensor (in TensorFlow) at graph construction time?</a></p>
</li>
<li><p><code>torch.narrow -> ?</code> <a href="https://pytorch.org/docs/stable/generated/torch.narrow.html#torch.narrow" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.narrow.html#torch.narrow</a></p>
</li>
<li><p><code>torch.masked_fill -> ?</code> (<a href="https://stackoverflow.com/questions/47447272/does-tensorflow-have-the-function-similar-to-pytorchs-masked-fill">this</a>)</p>
<pre><code>def mask_fill_inf(matrix, mask):
negmask = 1 - mask
num = 3.4 * math.pow(10, 38)
return (matrix * mask) + (-((negmask * num + num) - num))
</code></pre>
</li>
</ul>
<h4>Figured Out</h4>
<ul>
<li><code>torch.numel -> tf.shape.num_elements</code> <a href="https://www.tensorflow.org/api_docs/python/tf/TensorShape#num_elements" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/TensorShape#num_elements</a></li>
<li><code>torchTensor.norm -> tf.norm</code></li>
<li><code>torch.cat -> tf.concat</code></li>
<li><code>torch.prod -> tf.math.reduce_prod</code></li>
<li><code>torch.squeeze -> tf.squeeze</code></li>
<li><code>torch.zeros -> tf.zeros</code></li>
<li><code>torchTensor.reciprocal -> tf.math.reciprocal</code></li>
<li><code>torchTensor.size -> tf.size</code></li>
</ul>
<p>Basically I am porting <a href="https://github.com/geoopt/geoopt/blob/27a1e6fd825e2de923b8493bb8baf514f3f089ea/geoopt/manifolds/stereographic/math.py" rel="nofollow noreferrer">this file</a> form PyTorch to TensorFlow, so these are the functions it uses.</p>
<p>Interestingly, ChatGPT gave me this for <code>narrow</code> after a few tries:</p>
<pre><code>def narrow(tensor, dim, start, size):
if dim < 0:
dim = tensor.shape.rank + dim
begin = [0] * dim + [start] + [0] * (tensor.shape.rank - dim - 1)
size = [-1] * dim + [size] + [-1] * (tensor.shape.rank - dim - 1)
return tf.slice(tensor, begin, size)
def _sproj(x, k, dim=-1):
inv_r = tf.sqrt(tf.abs(k))
last_element = narrow(x, dim, -1, 1)
proj = narrow(x, dim, 0, x.shape[dim] - 1)
factor = 1.0 / (1.0 + inv_r * last_element)
return factor * proj
</code></pre>
<p>Is that correct?</p>
<p>GPT also says:</p>
<blockquote>
<p>The comparison operator gt in PyTorch returns a tensor with the same shape as the input tensor, containing boolean values indicating whether the corresponding element in the input tensor is greater than the given value, whereas in TensorFlow the greater function returns a boolean tensor with the same shape as the input tensors, containing the element-wise comparison result. To translate this specific line of code to TensorFlow, you would use tf.math.greater(k, 0) which returns a boolean tensor, then use tf.reduce_any() to check if any of the values in the boolean tensor are True.</p>
</blockquote>
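<p>One thing worth double-checking in the generated <code>narrow</code>: <code>tf.slice</code> requires non-negative <code>begin</code> values, yet the ported <code>_sproj</code> calls <code>narrow(x, dim, -1, 1)</code>, so the negative start needs to be normalized first — a step the generated version appears to be missing. A NumPy stand-in makes the begin/size arithmetic easy to sanity-check:</p>

```python
import numpy as np

def narrow_np(a, dim, start, size):
    # mirrors torch.narrow semantics on a NumPy array
    if dim < 0:
        dim = a.ndim + dim
    if start < 0:                      # tf.slice rejects negative begin,
        start = a.shape[dim] + start   # so normalize like torch.narrow does
    idx = [slice(None)] * a.ndim
    idx[dim] = slice(start, start + size)
    return a[tuple(idx)]

a = np.arange(12).reshape(3, 4)
last_col = narrow_np(a, -1, -1, 1)     # like torch.narrow(a, -1, -1, 1)
middle_rows = narrow_np(a, 0, 1, 2)    # like torch.narrow(a, 0, 1, 2)
```

<p>Porting the same normalization into the TF version (adjusting <code>begin</code> before calling <code>tf.slice</code>) should make <code>_sproj</code>'s <code>narrow(x, dim, -1, 1)</code> call safe.</p>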
|
<python><tensorflow><pytorch>
|
2023-01-25 06:02:44
| 1
| 80,366
|
Lance Pollard
|
75,230,112
| 10,748,412
|
Detectron2 - undefined symbol: _ZNK2at6Tensor7reshapeEN3c108ArrayRefIlEE
|
<pre><code>File /home/xyzUser/MyProject/.env/lib/python3.8/site-packages/detectron2/layers/deform_conv.py", line 11, in <module>
from detectron2 import _C
ImportError: /home/xyzUser/MyProject/.env/lib/python3.8/site-packages/detectron2/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor7reshapeEN3c108ArrayRefIlEE
</code></pre>
<p>I am getting an import error while trying to work with detectron2. It was pre-installed in my project. I can't uninstall detectron2 because other tasks are using this package. Does anyone know why this undefined symbol error is happening?</p>
|
<python><pytorch><huggingface-transformers><detectron>
|
2023-01-25 05:48:27
| 0
| 365
|
ReaL_HyDRA
|
75,230,017
| 5,089,311
|
How to convert my dictionary query to properly formatted string without getting garbage?
|
<p>Assume this is my dictionary:</p>
<pre><code>dc = { 0 : { "name" : "A", "value" : 4}, 1 : {"name" : "B", "value" : 5}, 2: {"name" : "C", "value" : 7}}
</code></pre>
<p>I need to transform all values from keys <code>value</code> into a string formatted like this:</p>
<pre><code>(4, 5, 7)
</code></pre>
<p>Format is mandatory (this is some robotic automation) - series of integers, separated by <code>, </code> and surrounded by <code>()</code>. No other garbage.</p>
<p>Good examples:</p>
<pre><code>()
(1)
(1, 5, 15, 37, 123, 56874)
</code></pre>
<p>Bad examples:</p>
<pre><code>[4, 5, 7]
{4, 5, 7}
('4', '5', '7')
(1,)
</code></pre>
<p>My naive approach was "let's iterate over all dictionary items, collect all the "values" into a <code>tuple</code> and then <code>str</code> it", but tuples cannot be modified. So I said "OK, collect into a list, convert to a tuple and then <code>str</code> it":</p>
<pre><code>res = list()
for item in dc.values():
res.append(item['value'])
print(str(tuple(res)))
</code></pre>
<p>I'm new to Python and I bet there is a more elegant way of doing this, but it worked fine for multi-item and empty results. However, if my query returns only a single item, then Python adds a trailing comma and that breaks the robotic client.</p>
<pre><code>>>> str(tuple([4]))
'(4,)'
</code></pre>
<p>Is there a way to avoid this trailing comma without explicitly checking <code>if len(res) == 1</code>?</p>
<p>Actually, is there a more robust and elegant way of extracting all the <code>value</code>s into the strictly formatted string I need?</p>
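For reference, the tuple-repr detour can be skipped entirely: since the required format is just integers joined by <code>, </code> inside parentheses, building the string directly sidesteps the one-element trailing comma. A minimal sketch using the question's own data:

```python
dc = {0: {"name": "A", "value": 4}, 1: {"name": "B", "value": 5}, 2: {"name": "C", "value": 7}}

# Join the values ourselves instead of relying on tuple repr, which adds a
# trailing comma for one-element tuples.
def fmt_values(d):
    return "({})".format(", ".join(str(item["value"]) for item in d.values()))

print(fmt_values(dc))                 # (4, 5, 7)
print(fmt_values({0: {"value": 4}}))  # (4)
print(fmt_values({}))                 # ()
```

This produces exactly the "good examples" from the question, including the single-item and empty cases.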
|
<python>
|
2023-01-25 05:34:23
| 1
| 408
|
Noob
|
75,229,994
| 1,245,659
|
STContains returns points OUTSIDE polygon. Should return points INSIDE polygon
|
<p>I have this very long jupyter notebook</p>
<pre><code>#!/usr/bin/env python
# coding: utf-8
# In[1]:
#Setting Environment
from ipyleaflet import Map, basemaps, Marker, Polygon
import pyodbc
import json
#Getting ODBC Connection
cnxn = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER=<<SERVER>>;DATABASE=<<DATABASE>>;UID=<<USER>>;PWD=<<PWD>>;TrustServerCertificate=yes;')
cursor = cnxn.cursor()
#center function
def centroid(vertexes):
_x_list = [vertex [0] for vertex in vertexes]
_y_list = [vertex [1] for vertex in vertexes]
_len = len(vertexes)
_x = sum(_x_list) / _len
_y = sum(_y_list) / _len
return(_x, _y)
# In[2]:
#set TBL1
TBL1=9543
# In[17]:
#getting Polygon WKT
cursor.execute(f"SELECT geography.ReorientObject().ToString() FROM DB1..TBL1 WHERE id = {TBL1}")
top = 0
bottom = 90
left = 0
right = -90
geography = cursor.fetchval()
geography = geography.replace('POLYGON ((','').replace('))', '').split(', ')
nodes = []
for node in geography:
node = node.split(' ')
top = float(node[1]) if float(node[1]) > top else top
bottom = float(node[1]) if float(node[1]) < bottom else bottom
left = float(node[0]) if float(node[0]) < left else left
right = float(node[0]) if float(node[0]) > right else right
nodes.append((float(node[1]),float(node[0])))
nodes
# In[18]:
print(top, bottom)
print(left, right)
# In[24]:
#getting Points
cursor.execute(f"""
select TBL2.[TBL2_ID]
,TBL2.LAT
,TBL2.LONG
,TBL1.[User]
,TBL1.[ID]
from DB1..TBL2, DB1..[TBL1]
where 1 = TBL1.[geography].ReorientObject().STContains(geography::Point(TBL2.LAT, TBL2.LONG, 4326))
and TBL1.id = {TBL1}
and TBL2.LAT between {bottom} and {top}
and TBL2.LONG between {left} and {right};""")
points = cursor.fetchall()
print(len(points))
print(points[0:10])
# In[25]:
center = centroid(nodes)
zoom = 12
m = Map(basemap=basemaps.OpenStreetMap.Mapnik, zoom=zoom, center=center)
poly = Polygon(
locations=nodes,
color="green",
fill_color="green"
)
m.add_layer(poly)
for point in points:
marker = Marker(location=(point[1], point[2]), title=str(point[0]), draggable=False)
m.add_layer(marker)
m
# In[ ]:
</code></pre>
<p>Cell 17 returns:</p>
<pre><code>[(40.71601696448174, -74.00765417724092),
(40.71839144945969, -74.01160238891084),
(40.72613233284465, -74.00872706084688),
(40.72525421067396, -74.00568007140596),
(40.723888219823266, -74.00619505553682),
(40.71900945220738, -73.99632452636202),
(40.71507365242344, -73.99902819304903),
(40.718554082317965, -74.00559424071749),
(40.71601696448174, -74.00765417724092)]
</code></pre>
<p>Cell 24 returns over 4,000 points; however, the map rendered in the last cell shows all the points OUTSIDE the polygon for no apparent reason. We added the <code>ReorientObject()</code> call, but we are still seeing the bad results.</p>
<p>See image below:</p>
<p><a href="https://i.sstatic.net/GJttJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GJttJ.png" alt="enter image description here" /></a></p>
<p>What am I missing? The intent is only to get the points inside.</p>
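One way to narrow down whether the problem is on the SQL Server side or in the plotting is to re-test a few returned points against the ring from cell 17 in plain Python. A ray-casting sketch (this is planar point-in-polygon logic, not geography/geodesic semantics, so treat it only as a rough cross-check):

```python
def point_in_ring(lat, lon, ring):
    """Planar ray-casting test; ring is a list of (lat, lon) vertices."""
    inside = False
    n = len(ring)
    for i in range(n):
        y1, x1 = ring[i]
        y2, x2 = ring[(i + 1) % n]
        # Count crossings of a horizontal ray extending east from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Sanity check on a unit square before feeding in the real ring.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(point_in_ring(0.5, 0.5, square))  # True
print(point_in_ring(2.0, 2.0, square))  # False
```

If points the query returns fail this planar test against the same ring, the ring orientation (or the lat/long argument order passed to <code>geography::Point</code>) is the first thing to double-check.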
|
<python><sql-server><geospatial><spatial>
|
2023-01-25 05:30:21
| 0
| 305
|
arcee123
|
75,229,988
| 6,751,456
|
python django serializer wrong date time format for DateTimeField
|
<p>I'm using Django <code>3.0.2</code>.</p>
<p>I have a serializer defined:</p>
<pre><code>class ValueNestedSerializer(request_serializer.Serializer):
lower = request_serializer.DateTimeField(required=True, allow_null=False, format=None, input_formats=['%Y-%m-%dT%H:%M:%SZ',])
upper = request_serializer.DateTimeField(required=True, allow_null=False, format=None, input_formats=['%Y-%m-%dT%H:%M:%SZ',])
class DateRangeSerializer(request_serializer.Serializer):
attribute = request_serializer.CharField(default="UPLOAD_TIME")
operator = request_serializer.CharField(default="between_dates")
value = ValueNestedSerializer(required=True)
timezone = request_serializer.CharField(default="UTC")
timezoneOffset = request_serializer.IntegerField(default=0)
class BaseQueryPayload(request_serializer.Serializer):
appid = request_serializer.CharField(required=True, validators=[is_valid_appid])
filters = request_serializer.ListField(
required=True, validators=[is_validate_filters],
min_length=1
)
date_range = DateRangeSerializer(required=True)
</code></pre>
<p>And the payload :</p>
<pre><code>{
"appid": "6017cef554df4124274ef36d",
"filters": [
{
"table": "session",
"label": "1month"
}
],
"date_range": {
"value": {
"lower": "2023-01-01T01:00:98Z",
"upper": "2023-01-20T01:00:98Z"
}
},
"page": 1
}
</code></pre>
<p>But I get this validation error:</p>
<pre><code>{
"error": {
"date_range": {
"value": {
"lower": [
"Datetime has wrong format. Use one of these formats instead: YYYY-MM-DDThh:mm:ssZ."
],
"upper": [
"Datetime has wrong format. Use one of these formats instead: YYYY-MM-DDThh:mm:ssZ."
]
}
}
}
}
</code></pre>
<p>The suggested format <code>YYYY-MM-DDThh:mm:ssZ</code> looks identical to what I am passing.</p>
<p>Am I missing anything here?</p>
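A stdlib sketch of why a value can be rejected even though the format string matches (DRF parses explicit <code>input_formats</code> with <code>strptime</code>, though its error message differs): the payload's <code>01:00:98</code> has a seconds field outside the valid range, so parsing fails regardless of the pattern.

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"

# A structurally identical value with valid seconds parses fine...
ok = datetime.strptime("2023-01-01T01:00:30Z", fmt)
print(ok)  # 2023-01-01 01:00:30

# ...but 98 seconds is out of range, so the same format string rejects it.
try:
    datetime.strptime("2023-01-01T01:00:98Z", fmt)
except ValueError as exc:
    print("rejected:", exc)
```

So the fix is in the payload data (<code>"lower"</code>/<code>"upper"</code> values), not in the serializer definition.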
|
<python><django><django-serializer><django-validation>
|
2023-01-25 05:29:24
| 1
| 4,161
|
Azima
|
75,229,981
| 6,077,239
|
How to use polars cut method returning result to original df
|
<p><strong>Update:</strong> <code>pl.cut</code> was removed from Polars. Expression equivalents were added instead:</p>
<p><a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.cut.html#polars.Expr.cut" rel="nofollow noreferrer"><code>.cut()</code></a> <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.qcut.html#polars.Expr.qcut" rel="nofollow noreferrer"><code>.qcut()</code></a></p>
<hr />
<p>How can I use <code>pl.cut</code> in a select context, such as <code>df.with_columns</code>?</p>
<p>To be more specific: if I have a Polars dataframe with many columns, one of which is called <code>x</code>, how can I apply <code>pl.cut</code> to <code>x</code> and append the binning result to the original dataframe?</p>
<p>Below is what I tried but it does not work:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"a": [1, 2, 3, 4, 5], "b": [2, 3, 4, 5, 6], "x": [1, 3, 5, 7, 9]})
df.with_columns(pl.cut(pl.col("x"), bins=[2, 4, 6]))
</code></pre>
<p>Thanks so much for your help.</p>
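With the expression API linked in the update, this reads roughly as <code>df.with_columns(pl.col("x").cut([2, 4, 6]))</code>. The underlying binning can also be sketched in pure Python with <code>bisect</code>, which makes the semantics easy to check (the labels here are plain bin indices, not the interval strings Polars would produce):

```python
import bisect

breaks = [2, 4, 6]
x = [1, 3, 5, 7, 9]

# bisect_left gives, for each value, the index of the first break it is below,
# i.e. which bin the value falls into.
bins = [bisect.bisect_left(breaks, v) for v in x]
print(bins)  # [0, 1, 2, 3, 3]
```

The same per-row mapping is what a column-wise <code>cut</code> appends alongside <code>x</code> in the original dataframe.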
|
<python><python-polars>
|
2023-01-25 05:28:47
| 3
| 1,153
|
lebesgue
|
75,229,967
| 4,531,757
|
Pandas - Build sequnce in the group while resetting by fixed value - Issue in summary table
|
<p>I am resetting my flow at <code>'000'</code>: whenever that value appears in a patient's pattern, a new run starts. But my summary table mixes all the patterns together and gives only one row per patient, as shown in the <code>out</code> data frame. Instead, I would like the same patient to be counted in each pattern individually, as shown in the <code>desired</code> data frame. Please help.</p>
<pre><code>df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'one','one', 'one','one','one','one','one','one','one'],
'pattern': ['A', 'B', '000', 'B', 'B', '000','D','A','C','000','A','B'],
'date': ['11/1/2022', '11/2/2022', '11/3/2022', '11/4/2022', '11/5/2022', '11/6/2022','11/7/2022', '11/8/2022', '11/9/2022','11/10/2022', '11/11/2022','11/12/2022']})
m = df2['pattern'] == '000'
display(df2)
out = (
df2[~m].sort_values(['patient','date'],ascending=True)
.groupby(["patient"])
.agg(pattern= ("pattern", ",".join),
patients=("patient", "nunique"))
.reset_index(drop=True)
.groupby(["pattern"]).agg({'patients':'sum'}).reset_index())
display(out)
</code></pre>
<p>I would like to tweak my output to match the desired data frame below:</p>
<pre><code>desired = pd.DataFrame({'pattern': ['A,B', 'B,B', 'D,A,C'],
'patients': [2, 1, 1]})
desired.head()
</code></pre>
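The key step the <code>out</code> pipeline is missing is splitting each patient's sequence into separate runs at every <code>'000'</code> before aggregating. The split itself can be sketched in pure Python with <code>itertools.groupby</code> (the analogous pandas trick is grouping on <code>m.cumsum()</code> after filtering with <code>~m</code>):

```python
from itertools import groupby
from collections import Counter

patterns = ['A', 'B', '000', 'B', 'B', '000', 'D', 'A', 'C', '000', 'A', 'B']

# Break the sequence into runs, starting a new run after every '000' sentinel,
# and drop the sentinel groups themselves.
runs = [
    ",".join(run)
    for is_sentinel, run in groupby(patterns, key=lambda p: p == '000')
    if not is_sentinel
]
print(runs)          # ['A,B', 'B,B', 'D,A,C', 'A,B']
print(Counter(runs)) # Counter({'A,B': 2, 'B,B': 1, 'D,A,C': 1})
```

Counting the runs reproduces the <code>desired</code> frame's patient counts per pattern (<code>A,B</code> appearing twice, and so on).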
|
<python><pandas><numpy>
|
2023-01-25 05:24:50
| 1
| 601
|
Murali
|