Columns: QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable)
75,996,280
| 4,865,723
|
Problems with lookahead in RegEx
|
<p>I still have problems with regex lookahead. In this example I'm not sure if I misunderstand the intention of lookahead or if I just use the wrong syntax.</p>
<h1>Initial situation</h1>
<p>The input lines can look like this, where <code>one</code> and <code>two</code> should be the result.</p>
<ol>
<li><code>[[one][two]]</code></li>
<li><code>[[one]]</code></li>
</ol>
<p>The <code>][</code> in the middle is optional.</p>
<p>My initial regex pattern only fits the first example.</p>
<pre><code>^\[\[(.*)\]\[(.*)\]\]$
</code></pre>
<p>Resulting in</p>
<p><a href="https://i.sstatic.net/0nfNl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0nfNl.png" alt="enter image description here" /></a></p>
<h1>First modification</h1>
<p>Because of the optional <code>][</code>, I extend my pattern with a "non-capturing group".</p>
<pre><code>^\[\[(.*)(?:\]\[)?(.*)\]\]$
</code></pre>
<p>Resulting in</p>
<p><a href="https://i.sstatic.net/TPcSu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TPcSu.png" alt="enter image description here" /></a></p>
<h1>Second modification with lookahead</h1>
<p>Now, IMHO, the lookahead comes into play. Am I right? The first group should "catch" (extract the string) only until <code>][</code> appears. The <code>][</code> is like a stop sign.</p>
<p>Now I'm groping in the fog with attempts like this, which don't work:</p>
<pre><code>^\[\[((?!\]\[).*)(\]\[)?(.*)\]\]$
</code></pre>
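<p>A sketch (not part of the original question): one way to make the middle <code>][</code> optional without any lookahead is to forbid <code>]</code> inside each captured part, so the separator can never be swallowed by the first group.</p>

```python
import re

# Instead of a lookahead "stop sign", disallow ']' inside each group:
# the first group then stops at the first ']' on its own.
pattern = re.compile(r'^\[\[([^\]]*)\](?:\[([^\]]*)\])?\]$')

m1 = pattern.match('[[one][two]]')
m2 = pattern.match('[[one]]')
print(m1.groups())  # ('one', 'two')
print(m2.groups())  # ('one', None)
```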
|
<python><regex>
|
2023-04-12 13:51:29
| 1
| 12,450
|
buhtz
|
75,996,117
| 9,758,922
|
Numpy mean giving slightly different results based on row order
|
<p>In a test case we are using <code>np.testing.assert_allclose</code> to determine whether two data sources agree with each other on the mean. But despite containing the same data in a different order, the computed means are slightly different. Here is the shortest working example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.array(
[[0.5224021, 0.8526993], [0.6045113, 0.7965965], [0.5053657, 0.86290526], [0.70609194, 0.7081201]],
dtype=np.float32,
)
y = np.array(
[[0.5224021, 0.8526993], [0.70609194, 0.7081201], [0.6045113, 0.7965965], [0.5053657, 0.86290526]],
dtype=np.float32,
)
print("X mean", x.mean(0))
print("Y mean", y.mean(0))
z = x[[0, 3, 1, 2]]
print("Z", z)
print("Z mean", z.mean(0))
np.testing.assert_allclose(z.mean(0), y.mean(0))
np.testing.assert_allclose(x.mean(0), y.mean(0))
</code></pre>
<p>With Python 3.10.6 and NumPy 1.24.2, this gives the following output:</p>
<pre><code>X mean [0.58459276 0.8050803 ]
Y mean [0.5845928 0.8050803]
Z [[0.5224021 0.8526993 ]
[0.70609194 0.7081201 ]
[0.6045113 0.7965965 ]
[0.5053657 0.86290526]]
Z mean [0.5845928 0.8050803]
Traceback (most recent call last):
File "/home/nuric/semafind-db/scribble.py", line 19, in <module>
np.testing.assert_allclose(x.mean(0), y.mean(0))
File "/home/nuric/semafind-db/.venv/lib/python3.10/site-packages/numpy/testing/_private/utils.py", line 1592, in assert_allclose
assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/nuric/semafind-db/.venv/lib/python3.10/site-packages/numpy/testing/_private/utils.py", line 862, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatched elements: 1 / 2 (50%)
Max absolute difference: 5.9604645e-08
Max relative difference: 1.0195925e-07
x: array([0.584593, 0.80508 ], dtype=float32)
y: array([0.584593, 0.80508 ], dtype=float32)
</code></pre>
<p>A workaround is to loosen the tolerance of the assertion, but does anyone have an idea why this might be happening?</p>
|
<python><arrays><numpy><mean>
|
2023-04-12 13:36:00
| 1
| 11,275
|
nuric
|
75,996,083
| 10,232,932
|
AttributeError: module 'pytorch_lightning.utilities.distributed' has no attribute 'log'
|
<p>I am working with Visual Studio Code in a personal/local environment; my Visual Studio and Python paths are:</p>
<pre><code>c:\Users\Mister\Documents\Visual Studio 2017\Forecasts\src\Test_Run.py
c:\Users\Mister\AppData\Local\Programs\Python\Python311\Lib\site-packages\...
</code></pre>
<p>somehow when I try to run:</p>
<pre><code>from neuralforecast.models import NBEATS
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>AttributeError: module 'pytorch_lightning.utilities.distributed' has
no attribute 'log'</p>
</blockquote>
<p>The error is in the <code>utils.py</code> file of the neuralforecast package, in these lines of code:</p>
<pre><code>import logging
import pytorch_lightning as pl
pl.utilities.distributed.log.setLevel(logging.ERROR)
</code></pre>
<p>I installed:</p>
<pre><code>pytorch-lightning 1.6.5
neuralforecast 0.1.0
</code></pre>
<p>on <code>python 3.11.3</code></p>
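<p>An editor's sketch of a defensive workaround (an assumption based on the traceback, not confirmed by the question): newer pytorch-lightning releases removed <code>pytorch_lightning.utilities.distributed.log</code>, so the attribute access can be guarded instead of crashing. The <code>fake_pl</code> object below only simulates the old module layout for illustration.</p>

```python
import logging
from types import SimpleNamespace

def quiet_distributed_logger(pl_module) -> bool:
    """Set pytorch_lightning's distributed logger to ERROR, if it still exists.

    Newer releases removed `utilities.distributed.log`, which is what triggers
    the AttributeError in the question; this guard simply skips the call then.
    """
    distributed = getattr(getattr(pl_module, "utilities", None), "distributed", None)
    log = getattr(distributed, "log", None)
    if isinstance(log, logging.Logger):
        log.setLevel(logging.ERROR)
        return True
    return False

# Hypothetical stand-in mimicking the old module layout:
fake_pl = SimpleNamespace(utilities=SimpleNamespace(
    distributed=SimpleNamespace(log=logging.getLogger("fake.distributed"))))
print(quiet_distributed_logger(fake_pl))            # True: logger silenced
print(quiet_distributed_logger(SimpleNamespace()))  # False: attribute gone
```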
|
<python><pytorch-lightning>
|
2023-04-12 13:31:48
| 1
| 6,338
|
PV8
|
75,996,003
| 9,905,185
|
Regexp: find substring in string
|
<p>I have strings like:</p>
<blockquote>
<p>q.0.0.0.1-1111, q.0.0.0.1.tt_0-1111, tes-00000000-1111, q.00.00.000.0.xx_0-1111</p>
</blockquote>
<p>I have next regexp:</p>
<blockquote>
<p>(?:(?<=[^-\s]{4}.\d{3})|(?<=[^-\s]{7}))[^-]+(?=-)|(?<=-)[^-\s]+-</p>
</blockquote>
<p>It works well in all cases except <code>q.00.00.000.0.xx_0-1111</code>.</p>
<p>This part of the regexp, (?:(?<=[^-\s]{4}\d\d), applied to the string q.00.00.000.0.xx_0-1111, finds a substring like:</p>
<blockquote>
<p>.000.0.xx_0</p>
</blockquote>
<p>But I expect this regexp to find:</p>
<blockquote>
<p>.0.xx_0</p>
</blockquote>
<p>What's wrong with my regexp, and how can I fix it to get the result I expect?</p>
<p>I will be grateful for any help.</p>
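<p>For what it's worth (an editor's reproduction, not part of the question), running the pattern through Python's <code>re</code> shows why: the <code>(?&lt;=[^-\s]{7})</code> branch is already satisfied right after <code>q.00.00</code>, so the match starts there, before the <code>(?&lt;=[^-\s]{4}.\d{3})</code> branch ever gets a chance at the later position.</p>

```python
import re

# The question's pattern, applied to the problematic input. The
# seven-character lookbehind succeeds early, so the match begins
# right after "q.00.00" and swallows ".000.0.xx_0".
pattern = r"(?:(?<=[^-\s]{4}.\d{3})|(?<=[^-\s]{7}))[^-]+(?=-)|(?<=-)[^-\s]+-"
matches = re.findall(pattern, "q.00.00.000.0.xx_0-1111")
print(matches)  # ['.000.0.xx_0']
```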
|
<python><regex>
|
2023-04-12 13:22:33
| 1
| 766
|
MrOldSir
|
75,995,974
| 1,338,877
|
Run CDK inside lambda execution
|
<p>I have to update CloudWatch Dashboards often.</p>
<p>I'm trying to run CDK from a lambda execution to get the Cloudformation template.</p>
<p>When it executes the constructor <code>app = cdk.App(outdir="./tmp")</code>, it throws an exception:</p>
<pre><code>[ERROR] FileNotFoundError: [Errno 2] No such file or directory: 'node'
Traceback (most recent call last):
File "/var/lang/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/var/task/app.py", line 2, in <module>
import aws_cdk as cdk
File "/var/task/aws_cdk/__init__.py", line 1440, in <module>
from ._jsii import *
File "/var/task/aws_cdk/_jsii/__init__.py", line 13, in <module>
import aws_cdk.asset_awscli_v1._jsii
File "/var/task/aws_cdk/asset_awscli_v1/_jsii/__init__.py", line 13, in <module>
__jsii_assembly__ = jsii.JSIIAssembly.load(
File "/var/task/jsii/_runtime.py", line 55, in load
_kernel.load(assembly.name, assembly.version, os.fspath(assembly_path))
File "/var/task/jsii/_kernel/__init__.py", line 299, in load
self.provider.load(LoadRequest(name=name, version=version, tarball=tarball))
File "/var/task/jsii/_kernel/providers/process.py", line 352, in load
return self._process.send(request, LoadResponse)
File "/var/task/jsii/_utils.py", line 23, in wrapped
stored.append(fgetter(self))
File "/var/task/jsii/_kernel/providers/process.py", line 347, in _process
process.start()
File "/var/task/jsii/_kernel/providers/process.py", line 260, in start
self._process = subprocess.Popen(
File "/var/lang/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/var/lang/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
Exception ignored in: <function _NodeProcess.__del__ at 0x7f80ae481e50>
Traceback (most recent call last):
File "/var/task/jsii/_kernel/providers/process.py", line 228, in __del__
self.stop()
File "/var/task/jsii/_kernel/providers/process.py", line 291, in stop
assert self._process.stdin is not None
AttributeError: '_NodeProcess' object has no attribute '_process'
</code></pre>
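<p>An editor's reading of the traceback (an assumption, not a confirmed diagnosis): the <code>aws_cdk</code> Python package is a jsii binding that spawns a Node.js subprocess, and the <code>FileNotFoundError</code> names <code>'node'</code>, so the Lambda Python runtime simply has no <code>node</code> binary on its <code>PATH</code>. A quick check:</p>

```python
import shutil

# aws_cdk's Python bindings run the CDK engine in a Node.js subprocess via
# jsii; without a `node` binary on PATH, the import fails exactly as above.
node = shutil.which("node")
print("node on PATH:", node)  # expected to be None in a plain Python Lambda runtime
```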
<p>Is there a way to run CDK from a lambda execution?</p>
|
<python><aws-lambda><aws-cdk>
|
2023-04-12 13:20:10
| 1
| 650
|
Renato Ramos Nascimento
|
75,995,689
| 18,108,367
|
Pipeline, watch() and multi() in redis. How do they really work?
|
<p>I'm trying to understand the correct use of the <code>multi</code> and <code>watch</code> commands for accessing a Redis database.</p>
<h3>The context</h3>
<p>I'm using:</p>
<ul>
<li>the Python Client for Redis <a href="https://pypi.org/project/redis/3.5.3/" rel="nofollow noreferrer">redis-py version 3.5.3</a>.</li>
<li>the version of the Redis server is <strong>Redis server v=5.0.5</strong>.</li>
</ul>
<h3>Other links</h3>
<p>I have done a lot of research on the Internet and found some useful links about the main topic of my question:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/33695837/redis-in-python-difference-of-with-and-without-multi-function">this post</a> is useful but not exhaustive</li>
<li><a href="https://stackoverflow.com/questions/73428663/using-multi-with-multiple-redis-cli">this post</a> explains well how the <code>multi</code> command works</li>
</ul>
<h3>Example code</h3>
<p>I have written and executed the following code, which uses the <code>watch</code> instruction on an example Redis key called <code>keyWatch</code>:</p>
<pre class="lang-py prettyprint-override"><code>r = redis.Redis()
def key_incr():
print('keyWatch before incr = ' + r.get('keyWatch').decode("utf-8"))
pipe = r.pipeline()
pipe.watch('keyWatch')
pipe.multi()
pipe.incr('keyWatch')
pipe.execute()
print('keyWatch after incr = ' + r.get('keyWatch').decode("utf-8"))
key_incr()
</code></pre>
<p>The previous code executes correctly, and if the initial value of <code>keyWatch</code> is <code>9</code>, the output of the execution is:</p>
<pre><code>keyWatch before incr = 9
keyWatch after incr = 10
</code></pre>
<p>If I remove the <code>multi()</code> instruction, the code becomes:</p>
<pre class="lang-py prettyprint-override"><code>r = redis.Redis()
def key_incr():
print('keyWatch before incr = ' + r.get('keyWatch').decode("utf-8"))
pipe = r.pipeline()
pipe.watch('keyWatch')
# NOTE: here the multi() instruction is commented
#pipe.multi()
pipe.incr('keyWatch')
pipe.execute()
print('keyWatch after incr = ' + r.get('keyWatch').decode("utf-8"))
key_incr()
</code></pre>
<p>Its execution raises the following exception:</p>
<pre><code>raise WatchError("Watched variable changed.")
redis.exceptions.WatchError: Watched variable changed.
</code></pre>
<p>My need is to prevent other clients from modifying the key <code>keyWatch</code> while the instructions contained in the transaction are executed.</p>
<h3>The question</h3>
<p>Why, in my example code, is the <code>WatchError</code> exception raised only when the <code>multi()</code> instruction is not present?</p>
<p>Thanks</p>
<hr />
<h4>Edit</h4>
<h3>Use of MONITOR</h3>
<p>I have edited my question to add information obtained with the Redis command <code>monitor</code>.</p>
<p>Using <code>redis-cli monitor</code> (MONITOR in the rest of the post), I can see all the requests sent to the server during the execution of the previous two snippets of code.</p>
<h3>Monitor info with <code>multi</code> instruction present</h3>
<p>For the case where the <code>multi()</code> instruction is present, the requests are the following:</p>
<pre><code>> redis-cli monitor
OK
1681733993.273545 [0 127.0.0.1:46342] "GET" "keyWatch"
1681733993.273790 [0 127.0.0.1:46342] "WATCH" "keyWatch"
1681733993.273934 [0 127.0.0.1:46342] "MULTI"
1681733993.273945 [0 127.0.0.1:46342] "INCRBY" "keyWatch" "1"
1681733993.273950 [0 127.0.0.1:46342] "EXEC"
1681733993.274279 [0 127.0.0.1:46342] "GET" "keyWatch" <--- NOTE THE PRESENCE OF THIS GET
</code></pre>
<h3>Monitor info without <code>multi</code> instruction (WatchError)</h3>
<p>For the case without the <code>multi()</code> instruction, MONITOR shows the following requests:</p>
<pre><code>> redis-cli monitor
OK
1681737498.462228 [0 127.0.0.1:46368] "GET" "keyWatch"
1681737498.462500 [0 127.0.0.1:46368] "WATCH" "keyWatch"
1681737498.462663 [0 127.0.0.1:46368] "INCRBY" "keyWatch" "1"
1681737498.463072 [0 127.0.0.1:46368] "MULTI"
1681737498.463081 [0 127.0.0.1:46368] "EXEC"
</code></pre>
<p>The <code>MULTI</code> instruction is present in this second case too, but there are no requests between it and the <code>EXEC</code>.<br />
The <code>WatchError</code> exception is raised by the <code>EXEC</code> instruction; in fact, MONITOR does not show the final <code>"GET" "keyWatch"</code> request (in the first MONITOR log, that final <code>"GET" "keyWatch"</code> request does appear).</p>
<blockquote>
<p>All this suggests to me that the exception is caused by the execution of<br />
<code>"INCRBY" "keyWatch" "1"</code> outside of the <code>MULTI/EXEC</code> block.</p>
</blockquote>
<p>If someone can confirm this and explain the behavior in more detail, it would be appreciated.</p>
<p>Thanks</p>
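<p>An editor's illustration of redis-py's documented pipeline semantics (the stub below only mimics the client, it does not talk to Redis): after <code>watch()</code> a pipeline is in "immediate" mode and each command is sent to the server right away; only <code>multi()</code> switches it back to buffering. So without <code>multi()</code>, the <code>incr()</code> itself modifies the watched key before <code>EXEC</code> runs, which matches the MONITOR logs above.</p>

```python
class PipelineStub:
    """Mimics redis-py pipeline buffering semantics (illustration only)."""
    def __init__(self):
        self.buffering = True   # a fresh pipeline buffers commands
        self.sent = []          # commands sent to the server immediately
        self.queued = []        # commands waiting for EXEC
    def watch(self, *keys):
        self.buffering = False  # watch() puts the pipeline in immediate mode
    def multi(self):
        self.buffering = True   # multi() resumes buffering until execute()
    def incr(self, key):
        target = self.queued if self.buffering else self.sent
        target.append(("INCRBY", key, 1))

with_multi = PipelineStub()
with_multi.watch("keyWatch"); with_multi.multi(); with_multi.incr("keyWatch")

without_multi = PipelineStub()
without_multi.watch("keyWatch"); without_multi.incr("keyWatch")

print(with_multi.queued)   # INCRBY waits inside MULTI/EXEC
print(without_multi.sent)  # INCRBY already hit the server: the watched key changed
```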
|
<python><redis><transactions><watch><redis-py>
|
2023-04-12 12:53:26
| 2
| 2,658
|
User051209
|
75,995,450
| 9,488,023
|
Pandas apply a function to specific rows in a column based on values in a separate column
|
<p>So what I have is a Pandas dataframe with two columns, one with strings and one with booleans. What I want to do is apply a function to the cells in the first column, but only on the rows where the value in the second column is False, in order to create a new column. I am unsure how to do this and my attempts have not worked so far; my code is:</p>
<pre><code>df["new_name"] = df["name"].apply(lambda x: difflib.get_close_matches(x, correction)[0] if not df["spelling"])
</code></pre>
<p>Here, <code>new_name</code> is the new column, <code>name</code> is the one with strings, and <code>spelling</code> is the boolean. The list <code>correction</code> is what will be used for the new column, and the code is meant to check if the spelling in <code>name</code> is correct according to <code>spelling</code>, and if it is not, replace it with the corresponding value in <code>correction</code>.</p>
<p>The code works fine without the</p>
<pre><code>if not df["spelling"]
</code></pre>
<p>but since it is a very long dataframe, it would go through every single entry even when the spelling is correct, which I do not want. Any suggestions are appreciated!</p>
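<p>A common pattern for this kind of conditional assignment, sketched with toy data (the <code>fix_name</code> function below is a hypothetical stand-in for the question's <code>difflib.get_close_matches(x, correction)[0]</code> lookup):</p>

```python
import pandas as pd

# Toy frame mirroring the question's columns:
df = pd.DataFrame({"name": ["alice", "alcie", "bob"],
                   "spelling": [True, False, True]})

def fix_name(x):
    # hypothetical correction lookup, standing in for the difflib call
    return {"alcie": "alice"}.get(x, x)

df["new_name"] = df["name"]                  # default: keep the original name
mask = ~df["spelling"]                       # only rows flagged as misspelled
df.loc[mask, "new_name"] = df.loc[mask, "name"].apply(fix_name)
print(df["new_name"].tolist())  # ['alice', 'alice', 'bob']
```

<p>With <code>df.loc[mask, ...]</code>, the (potentially expensive) function only runs on the rows where <code>spelling</code> is False.</p>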
|
<python><pandas><dataframe><conditional-statements><boolean>
|
2023-04-12 12:28:00
| 2
| 423
|
Marcus K.
|
75,995,342
| 6,357,916
|
One rest endpoint works just fine, other gives CORs error
|
<p>I have a React client app and a Django server app. The React app runs on port <code>9997</code> and the server API is available on port <code>9763</code>. The frontend is able to access some APIs, while others fail with this error:</p>
<p><a href="https://i.sstatic.net/jPjSJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jPjSJ.png" alt="enter image description here" /></a></p>
<p>As you can see first URL works, but second does not:</p>
<p><a href="https://i.sstatic.net/F3NQi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F3NQi.png" alt="enter image description here" /></a></p>
<p><strong>API that works</strong></p>
<p>React code:</p>
<pre><code>import axios from "axios";
// ...
async getRestEndpoint1() {
let url = '/app/api/rest_endpoint1/'
const axiosInstance = axios.create();
try {
const response = await axiosInstance.get(url,
{
params: {
'format': 'json',
'propId': this.props.routeProps.match.params.propId
}
}
)
return response.data;
} catch (err) {
console.error(err);
}
}
</code></pre>
<p>Django REST code:</p>
<pre><code>def getHttpJsonResponse(obj):
resp = json.dumps(obj, default=str)
return HttpResponse(resp, content_type="application/json", status=status.HTTP_200_OK)
@api_view(http_method_names=['GET'])
def getRestEndpoint1(request):
entityId = request.GET['entityId']
headers = EntityObject.objects.filter(entity_id=entityId).all()
resp = []
for header in headers:
resp.append({ 'id': entity.id, 'subEntityName': header.subEntity_name})
return getHttpJsonResponse(resp)
</code></pre>
<p>Response Headers:</p>
<p><a href="https://i.sstatic.net/ZmoZu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZmoZu.png" alt="enter image description here" /></a></p>
<p><strong>API that does not work</strong></p>
<p>React code:</p>
<pre><code>import axios from "axios";
// ...
async getRestEndpoint2() {
let url = '/app/api/rest_endpoint2/'
const axiosInstance = axios.create();
try {
const response = await axiosInstance.get(url,
{
params: {
'format': 'json'
}
}
)
return response.data;
} catch (err) {
console.error(err);
}
}
</code></pre>
<p>Django code:</p>
<pre><code>@api_view(http_method_names=['GET'])
def getRestEndpoint2(request):
# business logic
return getHttpJsonResponse(respStatsJson)
</code></pre>
<p>Both APIs are in the same <code>views.py</code> file and have similar paths added to <code>urls.py</code>:</p>
<pre><code>path('api/rest_endpoint1/', getRestEndpoint1 , name='rest_endpoint1'),
path('api/rest_endpoint2/', getRestEndpoint2 , name='rest_endpoint2')
</code></pre>
<p>Response headers:</p>
<p><a href="https://i.sstatic.net/oAUQm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oAUQm.png" alt="enter image description here" /></a></p>
<p>My <code>settings.py</code> has the following lines:</p>
<pre><code>CORS_ORIGIN_WHITELIST = (
'http://10.129.131.6:9997',
)
if DEBUG:
CORS_ALLOW_ALL_ORIGINS = True
</code></pre>
<p>So, everything just works on my local machine in debug mode. But when I check out that branch on the remote server, build the Docker image, and start the container, the above behavior occurs. What am I missing here?</p>
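<p>One thing to check (an editor's guess from the snippet above, not a confirmed diagnosis): in the deployed container <code>DEBUG</code> is presumably False, so <code>CORS_ALLOW_ALL_ORIGINS</code> is never enabled and only the single whitelisted origin passes. Reduced to plain Python:</p>

```python
# The settings.py logic reduced to plain Python: with DEBUG off, everything
# depends on the whitelist matching the exact scheme://host:port the browser
# sends in the Origin header.
DEBUG = False                   # typical production value
CORS_ALLOW_ALL_ORIGINS = False  # django-cors-headers default
CORS_ORIGIN_WHITELIST = ("http://10.129.131.6:9997",)
if DEBUG:
    CORS_ALLOW_ALL_ORIGINS = True

origin = "http://10.129.131.6:9997"
allowed = CORS_ALLOW_ALL_ORIGINS or origin in CORS_ORIGIN_WHITELIST
print(allowed)  # True only for the exact whitelisted origin
```

<p>Any other origin (different host, port, or scheme) would then be rejected in production, which could explain why behavior differs per request path if they are reached from different origins.</p>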
|
<javascript><python><reactjs><django><cors>
|
2023-04-12 12:15:51
| 1
| 3,029
|
MsA
|
75,995,306
| 241,515
|
Pandas groupby: add suffix to elements which are identical across groups
|
<p>I have a dataframe like this:</p>
<pre><code> peakID cytoband start end length 10.388_116 10.193_156 10.401_184 10.214_385
0 Amp_2q37.3_chr2:237990001-242193529 2q37.3 237990001 242193529 4203528 1 0 0 0
1 Del_2q37.3_chr2:226990001-242193529 2q37.3 226990001 242193529 15203528 -1 0 0 0
</code></pre>
<p>Notice how <code>peakID</code> is different, but <code>cytoband</code> is not. I need to unpivot this table (using a function from <code>pyjanitor</code>) without keeping <code>peakID</code>. Currently I do:</p>
<pre><code>import pandas as pd
import pyjanitor
from natsort import natsort_keygen
table = (
table
.drop(columns="peakID")
.pivot_longer(index=["cytoband", "start", "end", "length"],
names_to="sample", values_to="state")
.sort_values(["cytoband", "sample"], key=natsort_keygen())
.remove_columns(["length", "start", "end"])
.set_index("cytoband")
)
</code></pre>
<p>And the end result looks like this:</p>
<pre><code>table.loc["2q37.3", :]
Out[36]:
sample state
cytoband
2q37.3 10.193_156 0
2q37.3 10.193_156 0
2q37.3 10.214_385 0
2q37.3 10.214_385 0
2q37.3 10.388_116 1
2q37.3 10.388_116 -1
2q37.3 10.401_184 0
2q37.3 10.401_184 0
</code></pre>
<p>The problem lies in the fact that if <code>cytoband</code> is duplicated in different <code>peakID</code>s, the resulting table will have the two records (<code>state</code>) for each sample mixed up (as they don't have the relevant unique ID anymore).</p>
<p>The idea would be to suffix the duplicate records across distinct peakIDs (e.g. "2q37.3_A", "2q37.3_B"), but I'm not sure how to do that with <code>groupby</code> or pandas in general, as I need information from more than one group.</p>
<p>What's the cleanest solution to do this? <a href="https://stackoverflow.com/questions/47966558/pandas-groupby-neighboring-identical-elements">Existing solutions</a> (or <a href="https://stackoverflow.com/questions/23435270/add-a-sequential-counter-column-on-groups-to-a-pandas-dataframe">this one</a>) don't really fit.</p>
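<p>One sketch of the suffixing idea on a toy frame (assumes rows sharing a cytoband come from distinct peakIDs and appear in a stable order): <code>groupby().cumcount()</code> numbers the duplicates, and <code>transform("size")</code> lets singletons stay untouched.</p>

```python
import pandas as pd

df = pd.DataFrame({"cytoband": ["2q37.3", "2q37.3", "3p21.1"]})

dup_idx = df.groupby("cytoband").cumcount()                  # 0, 1, 0
suffix = dup_idx.map(lambda i: chr(ord("A") + i))            # A, B, A
sizes = df.groupby("cytoband")["cytoband"].transform("size")
df["cytoband_uniq"] = df["cytoband"].where(
    sizes == 1, df["cytoband"] + "_" + suffix)               # suffix only duplicates
print(df["cytoband_uniq"].tolist())  # ['2q37.3_A', '2q37.3_B', '3p21.1']
```

<p>Applied before the <code>pivot_longer</code> call, a column like this could serve as the unique index that <code>peakID</code> used to provide.</p>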
|
<python><pandas><group-by>
|
2023-04-12 12:12:35
| 1
| 4,973
|
Einar
|
75,995,195
| 12,361,700
|
Is Tensorflow positional encoding wrong?
|
<p>I was checking this guide <a href="https://www.tensorflow.org/text/tutorials/transformer#the_embedding_and_positional_encoding_layer" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/transformer#the_embedding_and_positional_encoding_layer</a> and I saw this positional encoding function:</p>
<pre><code>def positional_encoding(length, depth):
depth = depth/2
positions = np.arange(length)[:, np.newaxis] # (seq, 1)
depths = np.arange(depth)[np.newaxis, :]/depth # (1, depth)
angle_rates = 1 / (10000**depths) # (1, depth)
angle_rads = positions * angle_rates # (pos, depth)
pos_encoding = np.concatenate(
[np.sin(angle_rads), np.cos(angle_rads)],
axis=-1)
return tf.cast(pos_encoding, dtype=tf.float32)
</code></pre>
<p>However, that concatenation appends the cosines after the sines, instead of alternating them as specified in the paper:
<a href="https://i.sstatic.net/YoUwv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YoUwv.png" alt="enter image description here" /></a></p>
<p>In fact, plotting it shows:
<a href="https://i.sstatic.net/CYBil.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CYBil.jpg" alt="enter image description here" /></a></p>
<p>Instead, something like:</p>
<pre><code>def positional_encoding(len, token_size):
depth = tf.range(token_size/2, dtype=tf.float32) * 2 / token_size
divisor = 10000 ** depth
position = tf.range(len, dtype=tf.float32)
argument = (position[:,None] / divisor[None, :])[...,None]
pre_alternation = tf.concat((tf.math.sin(argument), tf.math.cos(argument)), axis=-1)
return tf.reshape(pre_alternation, (len,token_size))
</code></pre>
<p>shows something like this:
<a href="https://i.sstatic.net/HqXHH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HqXHH.png" alt="enter image description here" /></a></p>
<p>What am I missing?</p>
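<p>One way to convince yourself numerically (an editor's sketch, not from the tutorial): the concatenated layout is just a fixed permutation of the interleaved one along the feature axis, and a learned projection that follows the encoding can absorb any fixed permutation of its input features.</p>

```python
import numpy as np

def pe_concat(length, depth):
    # tutorial layout: all sines first, then all cosines
    d = depth / 2
    positions = np.arange(length)[:, None]
    rates = 1 / (10000 ** (np.arange(d)[None, :] / d))
    ang = positions * rates
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

def pe_interleaved(length, depth):
    # paper layout: sin/cos alternate along the feature axis
    half = depth // 2
    pe = pe_concat(length, depth)
    out = np.empty_like(pe)
    out[:, 0::2] = pe[:, :half]   # sines -> even indices
    out[:, 1::2] = pe[:, half:]   # cosines -> odd indices
    return out

a, b = pe_concat(6, 8), pe_interleaved(6, 8)
perm = np.r_[np.arange(0, 8, 2), np.arange(1, 8, 2)]  # even columns, then odd
print(np.allclose(a, b[:, perm]))  # True: same values, permuted columns
```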
|
<python><numpy><tensorflow>
|
2023-04-12 11:58:24
| 1
| 13,109
|
Alberto
|
75,995,160
| 3,482,266
|
How to optimize this python pandas code using .to_dict?
|
<p>When I run the following code locally on my laptop (using Python 3.10 and pandas 1.3.5), it takes approximately 0.031s (ballparking it):</p>
<pre><code>profile_data = (
profiles_df[data_cols]
.loc[profile_ids]
.rename(columns=new_cols)
.to_dict("records")
)
</code></pre>
<p>where <code>data_cols</code> and <code>new_cols</code> are two lists of strings, and <code>profiles_df</code> is a dataframe with mostly string data.
However, when I run it in a pod, using the same Python and pandas versions, it runs in approximately 0.1s. The pod still has ample memory available (a few GBs) and never reaches its memory limit, nor does it reach its CPU limit (1 out of 1.5).</p>
<ol>
<li>Is there a way to optimize the above code?</li>
<li>What could be causing this difference in performance?</li>
</ol>
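<p>A harness for measuring the step in isolation on both machines (an editor's sketch; the frame is synthetic and the shapes are made up, and note that <code>rename(columns=...)</code> expects a mapping, so <code>new_cols</code> is written as a dict here):</p>

```python
import timeit
import pandas as pd

# Synthetic stand-ins for the question's objects:
profiles_df = pd.DataFrame({"a": ["x"] * 10_000, "b": ["y"] * 10_000})
data_cols = ["a", "b"]
new_cols = {"a": "A", "b": "B"}
profile_ids = list(range(1_000))

def step():
    return (profiles_df[data_cols]
            .loc[profile_ids]
            .rename(columns=new_cols)
            .to_dict("records"))

print(f"{timeit.timeit(step, number=20) / 20:.5f}s per call")
records = step()
print(len(records), records[0])
```

<p>Running the same harness locally and in the pod separates pandas overhead from environment effects (e.g. CPU throttling under the 1.5-core limit), which is usually the more likely culprit for a ~3x slowdown than the code itself.</p>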
|
<python><pandas><kubernetes>
|
2023-04-12 11:53:28
| 1
| 1,608
|
An old man in the sea.
|
75,994,951
| 12,234,535
|
Python+xarray: Displaying datasets
|
<p>I have a dataset which I need to interpolate.</p>
<p><a href="https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25_1hr.pl?dir=%2Fgfs.20230411%2F00%2Fatmos&file=gfs.t00z.pgrb2.0p25.f000&var_TMP=on&lev_2_m_above_ground=on&subregion=&toplat=51&leftlon=1&rightlon=4&bottomlat=47" rel="nofollow noreferrer">Original dataset</a>: a field with a graticule (latitude: 17, longitude: 13, step: 0.25x0.25 degrees) and 221 values within this graticule.</p>
<pre><code>ds= xr.open_dataset('gfs.t00z.pgrb2.0p25.f000', engine='cfgrib')
print(ds['t2m'])
'''
Output:
<xarray.DataArray 't2m' (latitude: 17, longitude: 13)>
[221 values with dtype=float32]
Coordinates:
time datetime64[ns] ...
step timedelta64[ns] ...
heightAboveGround float64 ...
* latitude (latitude) float64 47.0 47.25 47.5 ... 50.5 50.75 51.0
* longitude (longitude) float64 1.0 1.25 1.5 1.75 ... 3.5 3.75 4.0
'''
</code></pre>
<p>I have to transform the field into a field with a <em>graticule of a different latitude/longitude step</em> (1.9047x1.875 degrees):</p>
<pre><code>ds_i = ds.interp(latitude=[48.5705, 50.4752],
longitude=[1.875, 3.75],
method="linear")
print(ds_i['t2m'])
'''
Output:
<xarray.DataArray 't2m' (latitude: 2, longitude: 2)>
array([[281.84174231, 284.01994458],
[281.00258201, 280.88313926]])
Coordinates:
time datetime64[ns] 2023-04-11
step timedelta64[ns] 00:00:00
heightAboveGround float64 2.0
valid_time datetime64[ns] 2023-04-11
* latitude (latitude) float64 48.57 50.48
* longitude (longitude) float64 1.875 3.75
'''
</code></pre>
<p>How do I <strong>display the original and interpolated datasets</strong> to compare them side by side and make sure I did everything right and achieved my goal?</p>
<p>Also, note that the interpolated coordinates are truncated (48.5705, 50.4752 compared to 48.57 50.48 in the output). Is there a way to keep the accuracy?</p>
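<p>For the side-by-side display, one common approach is two subplots (an editor's sketch with synthetic arrays standing in for <code>ds['t2m']</code> and <code>ds_i['t2m']</code>; with xarray installed, <code>ds['t2m'].plot(ax=ax1)</code> works the same way):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
orig = rng.normal(285.0, 2.0, size=(17, 13))  # stand-in for ds['t2m']
interp = orig[:2, :2]                         # stand-in for ds_i['t2m']

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
im1 = ax1.imshow(orig, origin="lower")
ax1.set_title("original 0.25 deg grid")
im2 = ax2.imshow(interp, origin="lower")
ax2.set_title("interpolated grid")
fig.colorbar(im1, ax=ax1)
fig.colorbar(im2, ax=ax2)
fig.savefig("t2m_compare.png")
```

<p>On the truncation point: xarray only rounds coordinates in the printed repr; the underlying values keep full float64 precision (e.g. <code>ds_i.latitude.values</code> still contains 48.5705 exactly as passed).</p>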
<p><strong>Update thanks to the answer-solution:</strong>
<a href="https://i.sstatic.net/Qj9xX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qj9xX.png" alt="enter image description here" /></a></p>
|
<python><dataset><coordinates><interpolation><python-xarray>
|
2023-04-12 11:29:17
| 1
| 379
|
Outlaw
|
75,994,909
| 6,528,055
|
Can I use numpy.ndarray and pandas.core.series as two inputs to sklearn accuracy_score?
|
<p>My <code>sklearn</code> <code>accuracy_score</code> function takes two following inputs:</p>
<pre><code>accuracy_score(y_test, y_pred_class)
</code></pre>
<p><code>y_test</code> is of <code>pandas.core.series</code> and <code>y_pred_class</code> is of <code>numpy.ndarray</code>. <strong>So do two different input types produce a wrong accuracy?</strong> It actually gives no error and produces a score. <strong>If my procedure is not correct, what should I do to compute the accuracy correctly?</strong></p>
<p><strong>Edit</strong></p>
<p>It's a binary classification problem and the labels are not one-hot-encoded. So <code>model.predict</code> produces one probability value for each sample, which is converted to a label using <code>np.round</code>.</p>
<p>Outputs of <code>model.predict</code> looks like this---></p>
<pre><code>[[0.50104564]
[0.50104564]
[0.20969158]
...
[0.5010457 ]
[0.5010457 ]
[0.5010457 ]]
</code></pre>
<p>My <code>y_pred_class</code> after rounding off looks like this---></p>
<pre><code>[[1.]
[1.]
[0.]
...
[1.]
[1.]
[1.]]
</code></pre>
<p>And <code>y_test</code> which is pandas.series looks like this (as expected)---></p>
<pre><code>34793 1
60761 0
58442 0
56299 1
89501 0
..
91507 1
25467 1
79635 0
22230 1
22919 1
</code></pre>
<p>Are <code>y_pred_class</code> and <code>y_test</code> compatible with each other for <code>accuracy_score()</code>?</p>
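<p>A sketch mirroring the shapes above (an editor's illustration with made-up probabilities): scikit-learn converts both inputs to 1-D label arrays before comparing, so mixing a Series and an ndarray is fine; flattening the <code>(n, 1)</code> prediction column explicitly with <code>.ravel()</code> avoids any shape ambiguity.</p>

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

y_test = pd.Series([1, 0, 0, 1, 0], index=[34793, 60761, 58442, 56299, 89501])
proba = np.array([[0.50104564], [0.20969158], [0.5010457], [0.5010457], [0.1]])

y_pred_class = np.round(proba).ravel()  # (n, 1) column -> flat (n,) labels
acc = accuracy_score(y_test, y_pred_class)
print(acc)  # 0.8: predictions [1, 0, 1, 1, 0] vs labels [1, 0, 0, 1, 0]
```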
|
<python><pandas><numpy><scikit-learn>
|
2023-04-12 11:24:22
| 1
| 969
|
Debbie
|
75,994,898
| 9,827,719
|
Python and PostgreSQL running on Google Cloud Functions 2nd generation gives "FileNotFoundError: [Errno 2] No such file or directory /engine/base.py"
|
<p>I am using Google Cloud Functions 2nd generation in order to deploy and run my applications.</p>
<p><strong>I deploy my application using the following command:</strong></p>
<pre><code>gcloud functions deploy postgresql-python-examples
--gen2
--runtime=python311
--region=europe-west1
--source=.
--entry-point=main
--trigger-http
--timeout=540
--verbosity=info
</code></pre>
<p>Now when I run the code I get errors referencing the file <code>/layers/google.python.pip/pip/lib/python3.11/site-packages/sqlalchemy/engine/base.py</code>. To me it seems like this is some type of requirements issue?</p>
<p><strong>requirements.txt</strong></p>
<pre><code>Flask
pg8000
SQLAlchemy
cloud-sql-python-connector
google-cloud-secret-manager
gunicorn
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>import os
import sqlalchemy
def main(request=None):
""" Initializes a Unix socket connection pool for a Cloud SQL instance of Postgres. """
# Note: Saving credentials in environment variables is convenient, but not
# secure - consider a more secure solution such as
# Cloud Secret Manager (https://cloud.google.com/secret-manager) to help
# keep secrets safe.
db_user = os.environ["DB_USER"] # e.g. 'my-database-user'
db_pass = os.environ["DB_PASS"] # e.g. 'my-database-password'
db_name = os.environ["DB_NAME"] # e.g. 'my-database'
unix_socket_path = os.environ["INSTANCE_UNIX_SOCKET"] # e.g. '/cloudsql/project:region:instance'
pool = sqlalchemy.create_engine(
# Equivalent URL:
# postgresql+pg8000://<db_user>:<db_pass>@/<db_name>
# ?unix_sock=<INSTANCE_UNIX_SOCKET>/.s.PGSQL.5432
# Note: Some drivers require the `unix_sock` query parameter to use a different key.
# For example, 'psycopg2' uses the path set to `host` in order to connect successfully.
sqlalchemy.engine.url.URL.create(
drivername="postgresql+pg8000",
username=db_user,
password=db_pass,
database=db_name,
query={"unix_sock": "{}/.s.PGSQL.5432".format(unix_socket_path)},
),
# [START_EXCLUDE]
# Pool size is the maximum number of permanent connections to keep.
pool_size=5,
# Temporarily exceeds the set pool_size if no connections are available.
max_overflow=2,
# The total number of concurrent connections for your application will be
# a total of pool_size and max_overflow.
# 'pool_timeout' is the maximum number of seconds to wait when retrieving a
# new connection from the pool. After the specified amount of time, an
# exception will be thrown.
pool_timeout=30, # 30 seconds
# 'pool_recycle' is the maximum number of seconds a connection can persist.
# Connections that live longer than the specified amount of time will be
# re-established
pool_recycle=1800, # 30 minutes
# [END_EXCLUDE]
)
print("Pool OK with unix_socket")
# Create a table
with pool.connect() as conn:
conn.execute(sqlalchemy.text(
"CREATE TABLE IF NOT EXISTS votes "
"( vote_id SERIAL NOT NULL, time_cast timestamp NOT NULL, "
"candidate VARCHAR(6) NOT NULL, PRIMARY KEY (vote_id) );"
))
conn.commit()
return "OK"
if __name__ == '__main__':
main()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Pool OK with unix_socket
[2023-04-12 10:53:43,301] ERROR in app: Exception on / [POST]
Traceback (most recent call last): File "/layers/google.python.pip/pip/lib/python3.11/site-packages/pg8000/core.py", line 231, in __init__ self._usock.connect(unix_sock) FileNotFoundError: [Errno 2] No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last): File "/layers/google.python.pip/pip/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 145, in __init__ self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3275, in raw_connection
sqlalchemy.exc.InterfaceError: (pg8000.exceptions.InterfaceError) communication error
</code></pre>
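<p>An editor's note on reading this traceback (an interpretation, not a confirmed diagnosis): <code>base.py</code> exists fine; the <code>FileNotFoundError</code> is raised by pg8000 when connecting to the Unix socket, i.e. the <code>/cloudsql/...</code> socket directory is not mounted into the function, which typically means the Cloud SQL connection was not attached at deploy time. A quick guard:</p>

```python
import os

def cloudsql_socket_present(unix_socket_path: str) -> bool:
    """True if the Cloud SQL Unix socket is mounted where we expect it.

    pg8000 raises FileNotFoundError when this path is missing, and the
    SQLAlchemy traceback above then re-reports it from engine/base.py.
    """
    return os.path.exists(os.path.join(unix_socket_path, ".s.PGSQL.5432"))

print(cloudsql_socket_present("/cloudsql/project:region:instance"))
```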
|
<python><google-cloud-functions>
|
2023-04-12 11:23:38
| 0
| 1,400
|
Europa
|
75,994,788
| 18,215,498
|
Is every async/await expression asynchronous in Python?
|
<p>I have a basic understanding of asyncio concepts in Python, but yesterday I played around with it a little bit and now I am confused:</p>
<p>First snippet is obviously asynchronous:</p>
<pre class="lang-py prettyprint-override"><code>#snippet 1
import asyncio
async def one():
asyncio.create_task(two())
await asyncio.sleep(3)
print('one done')
async def two():
await asyncio.sleep(0.1)
print('two done')
asyncio.run(one())
</code></pre>
<p>output:</p>
<pre><code>two done
one done
</code></pre>
<p>But with snippet 2 I am not sure (it has the same output as snippet 3):</p>
<pre class="lang-py prettyprint-override"><code>#snippet2
import asyncio

async def one():
    await asyncio.sleep(3)
    print('one done')

async def two():
    await asyncio.sleep(0.1)
    print('two done')

async def main():
    await one()
    await two()

asyncio.run(main())
</code></pre>
<pre class="lang-py prettyprint-override"><code>#snippet3
import time

def one():
    time.sleep(3)
    print('one done')

def two():
    time.sleep(0.1)
    print('two done')

def main():
    one()
    two()

main()
</code></pre>
<p>output:</p>
<pre><code>one done
two done
</code></pre>
<p>I know that coroutines act in a "non-blocking way", but what does that really mean if snippets 2 and 3 take the same execution time and produce the same output order? Are there advantages to using asyncio without methods like create_task, run_in_executor, etc.?</p>
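For context: in snippet 2, `await one()` simply suspends `main` until `one` finishes, so the two coroutines run sequentially, just like snippet 3. Concurrency only appears when coroutines are scheduled together, e.g. with `asyncio.gather` or `create_task`. A minimal sketch (timings shortened for illustration):

```python
import asyncio
import time

async def one():
    await asyncio.sleep(0.5)
    return "one done"

async def two():
    await asyncio.sleep(0.2)
    return "two done"

async def main():
    start = time.perf_counter()
    # gather schedules both coroutines on the loop at once, so the sleeps overlap
    results = await asyncio.gather(one(), two())
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

With sequential `await one(); await two()` this would take roughly the sum of the sleeps; with `gather` it takes roughly the longer one.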
|
<python><asynchronous><python-asyncio>
|
2023-04-12 11:11:07
| 1
| 533
|
mcdominik
|
75,994,533
| 1,516,331
|
How to extract the data from a Highcharts trend plot on a webpage?
|
<p>I'm not familiar with front-end techniques, so I'm not sure if this can actually work. Here is a web page about the residential vacancy rate of the suburb with postcode 2000: <a href="https://sqmresearch.com.au/graph_vacancy.php?postcode=2000" rel="nofollow noreferrer">https://sqmresearch.com.au/graph_vacancy.php?postcode=2000</a>.</p>
<p>Now that I have the HTML that renders the trend plot, can we get the data behind it, for example, the vacancy rate in the most recent month? In this plot, the last month (March 2023)'s vacancy rate is 3.1% and the vacancy number is 292. <strong>These numbers are shown only when hovering the cursor on that month in the plot.</strong></p>
<p>It shows "Highcharts" in bottom right. Can we get the data using Python or JavaScript?</p>
<p><a href="https://i.sstatic.net/ma83T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ma83T.png" alt="enter image description here" /></a></p>
|
<javascript><python><html><highcharts><web-crawler>
|
2023-04-12 10:43:27
| 2
| 3,190
|
CyberPlayerOne
|
75,994,336
| 6,389,268
|
Creating successful combination of sets of from combination table
|
<p>Good day,</p>
<p>I wish to select two control subjects for each of my selected subjects. My starting data looks like this:</p>
<pre><code>Cohort Controls
#a #1
#a #2
#a #3
#a #4
#b #1
#b #2
#c #5
#c #6
#c #1
#c #2
</code></pre>
<p>I want a result table with unique Controls for each Cohort subject, like this:</p>
<pre><code>#a #3
#a #4
#b #1
#b #2
#c #5
#c #6
</code></pre>
<p>Here is the catch: there needs to be logic to select controls so that we don't end up in a 'dead end': if we select <strong>.groupby('Cohort').head(2)</strong>, we select combos A-1 and A-2, leaving B without matches.</p>
<p>I solved this via brute force: order by random, loop over each Cohort, take 2, and drop the taken controls from the lists; if fewer than 2 remain for a Cohort, repeat. This is slow, clumsy and unreliable.</p>
<p>How would one go about doing set-magic to 'take two from each set so that each child-set remains unique (and no overlaps)'?</p>
|
<python><pandas><set>
|
2023-04-12 10:18:44
| 1
| 1,394
|
pinegulf
|
75,994,227
| 8,746,466
|
How to remove text and make the corresponding element an empty tag using `lxml`?
|
<p>I wanted to make my XML document more data-centric.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">original input</td>
<td style="text-align: left;"><code><elem>1</elem></code></td>
</tr>
<tr>
<td style="text-align: right;">desired output</td>
<td style="text-align: left;"><code><elem value="1"/></code></td>
</tr>
</tbody>
</table>
</div>
<p>Idea:</p>
<pre class="lang-py prettyprint-override"><code>for elem in doc.xpath("//elem"):
    elem.attrib["value"] = elem.text
    elem.text = ''
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">the above code gives</td>
<td style="text-align: left;"><code><elem value="1"></elem></code></td>
</tr>
</tbody>
</table>
</div>
<p>What to do to reach the desired output, i.e. an empty tag?</p>
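Setting `elem.text = ''` keeps an explicit (empty) text node, which forces the open/close pair; setting it to `None` removes the text node, so the serializer emits an empty tag. The same `text = None` trick applies in lxml; the sketch below uses the standard-library `xml.etree.ElementTree` so it runs anywhere (lxml renders the empty tag as `<elem value="1"/>`, ElementTree with a space before the slash):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<root><elem>1</elem><elem>2</elem></root>")
for elem in doc.iter("elem"):
    elem.attrib["value"] = elem.text
    elem.text = None   # None removes the text node; '' keeps an empty one

xml = ET.tostring(doc, encoding="unicode")
print(xml)
```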
|
<python><lxml>
|
2023-04-12 10:07:50
| 1
| 581
|
Bálint Sass
|
75,993,940
| 638,048
|
Python: chmod is NOT preventing file from being deleted
|
<p>I'm setting the mode on a file to try to prevent it being deletable, but nothing seems to work.
Example:</p>
<pre><code>import os
from stat import S_IRUSR, S_IRGRP, S_IROTH
with tempfile.TemporaryDirectory() as local_dir:
local_file = os.path.join(local_dir, 'a.txt')
with open(local_file, 'wt') as f:
f.writelines('some stuff')
os.chmod(local_file, S_IRUSR|S_IRGRP|S_IROTH)
print(oct(os.stat(local_file).st_mode)[-3:]) # prints '444' as expected
os.remove(local_file) # no exception
print(os.path.isfile(local_file)) # prints False, the file has been deleted
</code></pre>
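This behavior is expected on POSIX systems: deleting a file modifies the containing *directory*, so it is the directory's write permission that controls deletion, not the file's mode. A sketch (POSIX-only; note that running as root bypasses permission checks entirely):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "a.txt")
    with open(path, "w") as f:
        f.write("some stuff")
    # unlinking modifies the directory, so make the *directory* read-only
    os.chmod(d, stat.S_IRUSR | stat.S_IXUSR)
    try:
        os.remove(path)
        deleted = True
    except PermissionError:
        deleted = False
    finally:
        os.chmod(d, stat.S_IRWXU)  # restore so the tempdir can be cleaned up

print("deleted:", deleted)
```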
|
<python><chmod>
|
2023-04-12 09:33:43
| 1
| 936
|
Richard Whitehead
|
75,993,918
| 12,148,704
|
How to properly annotate dataclasses with attributes that are not initialized?
|
<p>Given the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self
from dataclasses import dataclass, field

@dataclass
class MyClass:
    var: float = field(init=False)

    def __post_init__(self: Self) -> None:
        self.var = True
</code></pre>
<p>My expectation is that the line <code>self.var = True</code> should produce an error because the wrong type is assigned to <code>self.var</code>. However, type checkers like mypy or pyright cannot find any issues. Is it possible to annotate the dataclass in a way that the wrong assignment is detected?</p>
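Part of the surprise here is the PEP 484 numeric tower: checkers accept `int` wherever `float` is expected, and `bool` is a subclass of `int`, so `self.var = True` is considered a valid assignment to a `float` attribute (an incompatible type such as `str` would be flagged). Separately, dataclasses never validate annotations at runtime, as this sketch shows:

```python
from dataclasses import dataclass, field

@dataclass
class MyClass:
    var: float = field(init=False)

    def __post_init__(self) -> None:
        # bool is a subclass of int, and PEP 484's numeric tower lets int
        # stand in for float, so static checkers consider this assignment valid
        self.var = True

obj = MyClass()
print(type(obj.var).__name__)  # annotations are not enforced at runtime
```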
|
<python><python-typing><mypy>
|
2023-04-12 09:30:35
| 1
| 550
|
tschmelz
|
75,993,823
| 2,550,406
|
What is the difference between typing.Callable and callable?
|
<p>In vscode, when I have not performed <code>from typing import Callable</code>, it suggests using <code>callable</code>.</p>
<p><code>typing.Callable</code> is <a href="https://docs.python.org/3/library/typing.html#typing.Callable" rel="nofollow noreferrer">documented here</a> as an annotation used for type-hinting in function declarations.<br />
So what is <code>callable</code>?</p>
<p>The vscode hover popup shows the following for it:</p>
<pre><code>(function) def callable(
__obj: object,
/
) -> TypeGuard[(...) -> object]
</code></pre>
<p>whereas for <code>Callable</code> it shows</p>
<pre><code>(class) Callable
</code></pre>
<p>My best guess would be that it has to do with the <a href="https://peps.python.org/pep-0585/" rel="nofollow noreferrer">PEP 585</a> linked in the <code>typing.Callable</code> documentation. Because that PEP e.g. also mentions that <code>dict</code> is the same as <code>typing.Dict</code>. So, is <code>callable</code> exactly the same thing as <code>typing.Callable</code>?</p>
<p>If they are not the same, how do they differ and when would I want to use which?</p>
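They are different things: `callable()` is a built-in runtime predicate that answers "can I call this object?", while `typing.Callable` is an annotation describing a call signature for static checkers (and PEP 585's `dict`-style aliasing applies to `collections.abc.Callable`, not the built-in function). A sketch of both uses:

```python
from typing import Callable

def apply(func: Callable[[int], int], value: int) -> int:
    # Callable[[int], int] is purely an annotation for static checkers
    return func(value)

# callable() is a runtime check on any object
checks = (callable(apply), callable(42), callable(int))
result = apply(lambda x: x + 1, 41)
print(checks, result)
```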
|
<python>
|
2023-04-12 09:19:35
| 1
| 6,524
|
lucidbrot
|
75,993,765
| 17,596,179
|
duckdb (0.7.0) not supporting PEP 517 builds
|
<p>I'm trying to dockerize my backend so I can upload it to AWS Lambda, but I'm constantly encountering this error:
<code>Note: This error originates from the build backend, and is likely not a problem with poetry but with duckdb (0.7.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "duckdb (==0.7.0)"'.</code> When I execute the command <code>pip wheel --use-pep517 "duckdb (==0.7.0)"</code> it generates a .whl file, so I guess the duckdb version is correct, but I don't know what my problem could be. At first I used duckdb version 0.7.1, which had the exact same error, but I don't know if I just have to keep downgrading to a version that works, because that can also 'ruin' my project.
The error occurs at <code>RUN poetry config virtualenvs.create false && poetry install --no-dev --no-interaction --no-ansi</code>.
This is my Dockerfile:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.9-arm64
ENV POETRY_VERSION=1.4.2
RUN pip install "poetry==$POETRY_VERSION"
WORKDIR ${LAMBDA_TASK_ROOT}
COPY poetry.lock pyproject.toml ${LAMBDA_TASK_ROOT}/
# Install the function's dependencies using file requirements.txt
# from your project folder.
RUN poetry config virtualenvs.create false && poetry install --no-dev --no-interaction --no-ansi
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
COPY scraper_backend ${LAMBDA_TASK_ROOT}/scraper_backend
# Set the CMD to your handler (could also be done as a parameter override
# outside of the Dockerfile)
CMD [ "app.handler" ]
</code></pre>
<p>Thanks in advance for any help.</p>
|
<python><docker><dockerfile><python-poetry><duckdb>
|
2023-04-12 09:13:42
| 0
| 437
|
david backx
|
75,993,763
| 5,250,620
|
How to change bar chart color when using Holoview with Plotly backend?
|
<p>I am using example in this <a href="https://holoviews.org/reference/elements/plotly/Bars.html" rel="nofollow noreferrer">documentation</a></p>
<p>This is code to plot Bar chart but change default blue color to red, using Bokeh backend.</p>
<pre><code>import holoviews as hv
hv.extension('bokeh')
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, hv.Dimension('Car occupants'), 'Count')
bars.opts(color="red")
bars
</code></pre>
<p><a href="https://i.sstatic.net/D2J1f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D2J1f.png" alt="bokeh red plot" /></a></p>
<p>I tried similar code, only change bokeh to plotly. But the color="red" has no effect.</p>
<pre><code>import holoviews as hv
hv.extension('plotly')
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, hv.Dimension('Car occupants'), 'Count')
bars.opts(color="red")
bars
</code></pre>
<p><a href="https://i.sstatic.net/m5RtC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m5RtC.png" alt="plotly" /></a></p>
<p>I also tried</p>
<pre><code>import holoviews as hv
hv.extension('plotly')
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, hv.Dimension('Car occupants'), 'Count', color="red")
bars
</code></pre>
<p>And receive same bar chart above, with error</p>
<pre><code>WARNING:param.Bars01705: Setting non-parameter attribute color=red using a mechanism intended only for parameters
</code></pre>
<p>I would like to know how to plot bar chart in red, in Plotly (similar to Bokeh example above).</p>
|
<python><plotly><bar-chart><holoviews>
|
2023-04-12 09:13:31
| 0
| 5,576
|
Haha TTpro
|
75,993,650
| 9,374,372
|
Good Patterns to pass down data in deep OOP composition?
|
<p>This is a general open question about best practices and scalability in Python using OOP.</p>
<p>Over time, I have been using class inheritance and composition in several projects. This pattern has helped me abstract and encapsulate a lot of the code. However, as the codebase grows, I keep falling into a recurrent pattern where I have a "master" class which "orchestrates" other classes by composition, which in turn also have their own composed classes to orchestrate, and so on. This creates a "deep" tree structure of dependent classes.</p>
<p>The problem is that if I want to pass down arguments to many of the class components from the master class, I have to pass them down every time I init a class, again and again, writing redundant code. For example (a simple example just to illustrate the kind of composition, but imagine a much deeper structure than this):</p>
<pre><code> class UploaderExecutor:
def __init__(self, mediatype: Literal['video', 'image'], channel: Literal['online', 'offline'], ...):
self.uploader = Uploader(mediatype, channel, ...)
self.validator = Validator(mediatype, channel, ...)
self.reporter = Reporter(mediatype, ...)
...
class Uploader():
def __init__(self, mediatype, channel, ...):
self.configurator = Configurator(mediatype, channel...)
self.file_parser = FileParser(mediatype, channel...)
self.Notificator = Notificator(mediaype, channel...)
...
class Validator():
def __init__(self, mediatype, channel, ...):
self.file_validator = FileValidator(mediatype, channel...)
self.name_validator = NameValidator(channel...)
...
class Reporter():
def __init__(self, mediatype, channel, ...):
self.slack_reporter = SlackReporter(mediatype, channel...)
self.telegram_reporter = TelegramReporter(mediatype, channel...)
...
#etc etc
</code></pre>
<p>I guess this pattern of passing down arguments to all the composed classes is not scalable as more and more cases are added. Also, I do not want to rely on solutions like global variables, because I want to decide which data to pass down to which composed classes (maybe not all of them need, for example, <code>mediatype</code>). So my question is: which kinds of abstractions or design patterns are recommended in these cases?</p>
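One common remedy is a parameter object: bundle the shared settings into a single (ideally immutable) context object and pass that one reference down, so constructor signatures stop repeating every field. A minimal sketch of that direction (class and field names are illustrative, not from the original code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UploadContext:
    mediatype: str
    channel: str

class Uploader:
    def __init__(self, ctx: UploadContext):
        self.ctx = ctx  # each component reads only the fields it needs

class Reporter:
    def __init__(self, ctx: UploadContext):
        self.ctx = ctx

class UploaderExecutor:
    def __init__(self, ctx: UploadContext):
        # one argument to thread through, however deep the composition goes
        self.uploader = Uploader(ctx)
        self.reporter = Reporter(ctx)

ctx = UploadContext(mediatype="video", channel="online")
ex = UploaderExecutor(ctx)
print(ex.uploader.ctx.mediatype, ex.reporter.ctx.channel)
```

Freezing the dataclass keeps the shared state read-only; for finer-grained control, a dependency-injection container or factory functions per subtree are the usual next steps.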
|
<python><oop><design-patterns>
|
2023-04-12 09:00:40
| 1
| 505
|
Fernando Jesus Garcia Hipola
|
75,993,532
| 188,331
|
AttributeError: 'Seq2SeqTrainer' object has no attribute 'push_in_progress'
|
<p>I'm using HuggingFace's <code>Seq2SeqTrainer</code> and I successfully trained a model. When I try to execute (where trainer is an instance of <code>Seq2SeqTrainer</code>):</p>
<pre><code>trainer.push_to_hub()
</code></pre>
<p>It returns error:</p>
<blockquote>
<p>AttributeError: 'Seq2SeqTrainer' object has no attribute 'push_in_progress'</p>
</blockquote>
<p>Trainer Code:</p>
<pre><code>trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized["train"],
eval_dataset=tokenized["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
trainer.push_to_hub()
</code></pre>
<p>How can I resolve this problem?</p>
<p>Other codes can be found in my other <a href="https://stackoverflow.com/questions/75945735/xlnet-or-bert-chinese-for-huggingface-automodelforseq2seqlm-training">question</a>.</p>
|
<python><huggingface>
|
2023-04-12 08:45:54
| 1
| 54,395
|
Raptor
|
75,993,493
| 5,091,467
|
How to find permutation importance using sparse matrix X?
|
<p>I have a sparse matrix X (<code>csr_matrix</code>), since a dense version does not fit into RAM.</p>
<p>I want to find permutation importance for my estimator using the sparse matrix X.</p>
<p>When I run the following code, I receive an error <code>TypeError: A sparse matrix was passed, but dense data is required.</code></p>
<p>This code replicates the error:</p>
<pre><code>import pandas as pd
data = {
    'y': [1, 0, 0, 1, 1, 1],
    'categ': ['dog', 'cat', 'dog', 'ant', 'fox', 'seal'],
    'size': ['big', 'small', 'big', 'tiny', 'medium', 'big']
}
df = pd.DataFrame(data)
X = df[['categ', 'size']]
y = df['y']
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse_output=True)
X = encoder.fit_transform(X)
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression()
logit.fit(X, y)
# This step throws an error
from sklearn.inspection import permutation_importance
perm = permutation_importance(logit, X, y)
</code></pre>
<p><strong>Is there another way to find permutation importance without making X dense?</strong></p>
<p>I am using python 3.9.16 and sk-learn 1.2.2.</p>
<p>Thanks for help!</p>
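If densifying is truly not an option, the algorithm itself is simple enough to implement by hand: permute one column at a time and measure the score drop. The sketch below is NumPy-only for clarity (function names are mine); for a scipy sparse matrix the same loop works if you permute a column's values on a copied CSC matrix instead of using `Xp[:, j]` on a dense array.

```python
import numpy as np

def manual_permutation_importance(predict, score, X, y, n_repeats=5, seed=0):
    """Permutation importance by hand: shuffle one column, measure score drop."""
    rng = np.random.default_rng(seed)
    baseline = score(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's relation to y
            drops.append(baseline - score(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# toy demo: column 0 fully determines y, column 1 is noise
rng = np.random.default_rng(42)
X = rng.random((300, 2))
y = (X[:, 0] > 0.5).astype(int)
predict = lambda M: (M[:, 0] > 0.5).astype(int)
accuracy = lambda a, b: float((a == b).mean())
imp = manual_permutation_importance(predict, accuracy, X, y)
print(imp)
```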
|
<python><scikit-learn><sparse-matrix>
|
2023-04-12 08:41:13
| 1
| 714
|
Dudelstein
|
75,993,380
| 6,400,443
|
Memory usage skyrocketting while reading Parquet file from S3 with Polars
|
<p>I try to read multiple Parquet files from S3. I read using Polars and Pyarrow with the following command :</p>
<pre><code>pl.scan_pyarrow_dataset(ds.dataset(f"my_bucket/myfiles/",filesystem=s3)).collect()
</code></pre>
<p>There are 4 files in the folder, with the following sizes: 120 MB, 102 MB, 85 MB, 75 MB.</p>
<p>I then run my code inside a Docker container (inside an Airflow task, to be precise, but I don't think that's important here). When reading, the memory consumption shown in Docker Desktop can go as high as 10 GB, and that's for only 4 relatively small files.</p>
<p>Is this expected behaviour with Parquet files? The file is 6M rows long, with some texts, but really short ones.</p>
<p>I will soon have to read bigger files, like 600 or 700 MB; will it be possible in the same configuration?</p>
|
<python><docker><amazon-s3><parquet><python-polars>
|
2023-04-12 08:25:39
| 1
| 737
|
FairPluto
|
75,993,306
| 9,097,114
|
Unable to select value from dropdown python selenium
|
<p>I am trying to select a city name/station on the Wunderground website, but I am unable to select the city name from the dropdown list. Based on the picture below, how do I select "New York City, NY"? Here is my code:</p>
<p><a href="https://i.sstatic.net/Pjx1N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pjx1N.png" alt="enter image description here" /></a></p>
<p>CODE:</p>
<pre><code>import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select, WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://www.wunderground.com/")
time.sleep(10)
driver.maximize_window()
Search_city = driver.find_element("xpath",'//*[@id="wuSearch"]')
Search_city.send_keys('new york')
Search_city = driver.find_element("xpath",'//*[@id="wuForm"]/search-autocomplete/ul/li[2]').click()
</code></pre>
<p>Thanks in advance.</p>
|
<python><selenium-webdriver><xpath><selenium-chromedriver><webdriverwait>
|
2023-04-12 08:15:34
| 1
| 523
|
san1
|
75,993,187
| 2,924,334
|
pygal: How do I specify a different stroke width for different lines?
|
<p>I would like to have different line thickness for the different lines. I tried specifying <code>width</code> using the <code>stroke_style</code> but it does not seem to help.</p>
<pre><code>import pygal
chart = pygal.XY()
chart.add(**{'title': 'Line A', 'values': [(1, 1), (10, 10)]})
chart.add(**{'title': 'Line B', 'values': [(1, 2), (10, 20)], 'stroke_style': {'dasharray': '3, 6'}})
chart.add(**{'title': 'Line C', 'values': [(1, 3), (10, 30)], 'stroke_style': {'width': 5}})
chart.render_to_png('chart.png')
</code></pre>
<p>I have attached a screenshot of the resulting png. I was expecting the "Line C" to be thicker, but it is not. The <code>dasharray</code> ("Line B") works fine.</p>
<p>So, how do I specify different line thickness for different lines? Thank you!</p>
<p><a href="https://i.sstatic.net/ytSlX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ytSlX.png" alt="enter image description here" /></a></p>
|
<python><pygal>
|
2023-04-12 07:58:40
| 0
| 587
|
tikka
|
75,992,698
| 20,051,041
|
How do I click on clickable element with Selenium in shadow-root (closed)?
|
<p>An "Agree with the terms" button appears on <a href="https://www.sreality.cz/hledani/prodej/domy" rel="nofollow noreferrer">https://www.sreality.cz/hledani/prodej/domy</a> I am trying to go through that with a .click() using Selenium and Python.
The button element is:</p>
<pre><code><button data-testid="button-agree" type="button" class="scmp-btn scmp-btn--default w-button--footer sm:scmp-ml-sm md:scmp-ml-md lg:scmp-ml-dialog">Souhlasím</button>
</code></pre>
<p>My approach is:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.implicitly_wait(20)
driver.get("https://www.sreality.cz/hledani/prodej/domy")

# note: find_element_by_css_selector was removed in Selenium 4
button = driver.find_element(By.CSS_SELECTOR, "button[data-testid='button-agree']")
button.click()
</code></pre>
<p>Any idea what to change to make it work? Thanks! :)</p>
|
<python><selenium-webdriver><web-scraping>
|
2023-04-12 06:55:25
| 3
| 580
|
Mr.Slow
|
75,992,580
| 10,395,139
|
Python request function won't download entire file
|
<p>I'm creating a Python function to download a .zip file from Kaggle (I don't want to use the Kaggle API) with the <code>requests</code> library. However, the .zip file doesn't have a <code>content-length</code> header, so I can't check the size of the Kaggle .zip file before downloading it.</p>
<p>So now my function looks like this, but it only downloads 5465 bytes (about 6 KB) of the entire 700 MB file.</p>
<pre><code>import os
import requests

url = "https://www.kaggle.com/c/dog-breed-identification/download-all"
filename = "stanford-dogs.zip"
save_path = os.path.join(base_dir, filename)

def download_zip(url, save_path, chunk_size=128):
    r = requests.get(url, stream=True)
    with open(save_path, 'wb') as fd:
        for chunk in r.iter_content(chunk_size=chunk_size):
            fd.write(chunk)

download_zip(url, save_path)
</code></pre>
|
<python><python-requests>
|
2023-04-12 06:39:39
| 0
| 579
|
Krullmizter
|
75,992,526
| 1,506,850
|
Typical reasons why model training on GPU not faster as expected
|
<p>I built a model on CPU.
The model is getting large.
When I try to move to GPU for training, I don't see the expected 10x-30x speed increase.
My net is a multi-layer convolution with FC at the end.</p>
<p>What are common reasons why GPU training can be as slow as CPU training?</p>
|
<python><deep-learning><pytorch><gpu><cpu>
|
2023-04-12 06:30:12
| 0
| 5,397
|
00__00__00
|
75,992,409
| 13,396,497
|
Panda combine multiple rows into one row with different column names not in one column
|
<p>I have a csv file -</p>
<p>CSV A-</p>
<pre><code> Date/Time Num
2023/04/10 14:13:18 6122
2023/04/10 14:14:24 6005
2023/04/10 14:14:59 6004
</code></pre>
<p>There will be at most 3 rows, possibly fewer; also, Num=6122 will always be there. The other two numbers' (6005 & 6004) rows may or may not be there.</p>
<p>Output I am looking for-</p>
<pre><code> Date/Time Num Date/Time_1 Num_1 Date/Time_2 Num_2
2023/04/10 14:13:18 6122 2023/04/10 14:14:24 6005 2023/04/10 14:14:59 6004
</code></pre>
<p>If Num=6005, then the date/time and number should be in the 3rd and 4th columns; if Num=6004, then the date/time and number should be in the 5th and 6th columns; otherwise, leave them empty, like below:</p>
<pre><code> Date/Time Num Date/Time_1 Num_1 Date/Time_2 Num_2
2023/04/10 14:13:18 6122 2023/04/10 14:14:59 6004
</code></pre>
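Since the Num→column-slot mapping is fixed (6122 → base columns, 6005 → `_1`, 6004 → `_2`), one straightforward sketch is to build a one-row dict keyed by the target column names and `reindex` to the full column list so missing slots stay empty:

```python
import pandas as pd

df = pd.DataFrame({
    "Date/Time": ["2023/04/10 14:13:18", "2023/04/10 14:14:24", "2023/04/10 14:14:59"],
    "Num": [6122, 6005, 6004],
})

slot = {6122: "", 6005: "_1", 6004: "_2"}  # fixed Num -> column-slot mapping
wide = {}
for _, row in df.iterrows():
    suffix = slot[row["Num"]]
    wide[f"Date/Time{suffix}"] = row["Date/Time"]
    wide[f"Num{suffix}"] = row["Num"]

cols = ["Date/Time", "Num", "Date/Time_1", "Num_1", "Date/Time_2", "Num_2"]
out = pd.DataFrame([wide]).reindex(columns=cols)  # absent slots become NaN
print(out)
```

With only the 6122 and 6004 rows present, `Date/Time_1`/`Num_1` would simply come out as NaN, matching the second desired layout.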
|
<python><pandas><dataframe>
|
2023-04-12 06:13:14
| 2
| 347
|
RKIDEV
|
75,992,360
| 15,632,586
|
What should I do to build wheel for Tokenizers (with 2023 version of Rust)?
|
<p>I am trying to install the required Python packages for a Python project in Python 3.11 (for Windows), using <code>pip install -r requirements.txt</code>. My libraries that I need to download are:</p>
<pre><code> numpy
transformers==v3.1.0
tqdm
torch
scikit-learn
spacy
torchtext
pandas
nltk
sentence_transformers
</code></pre>
<p><code>tokenizers</code> is needed for one of my packages to run; however, my Anaconda setup failed to build a wheel for this package. At first this was caused by my lack of a Rust compiler, so I installed one as in this question: <a href="https://stackoverflow.com/questions/69595700/could-not-build-wheels-for-tokenizers-which-is-required-to-install-pyproject-to">Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects.</a> However, when I reinstalled tokenizers I got this problem:</p>
<pre><code> running build_ext
running build_rust
cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module -- --crate-type cdylib
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Updating crates.io index
Updating git repository `https://github.com/n1t0/rayon-cond`
Downloading crates ...
error: failed to download `once_cell v1.17.1`
Caused by:
unable to get packages from source
Caused by:
failed to parse manifest at `C:\Users\hoang\.cargo\registry\src\github.com-1ecc6299db9ec823\once_cell-1.17.1\Cargo.toml`
Caused by:
failed to parse the `edition` key
Caused by:
this version of Cargo is older than the `2021` edition, and only supports `2015` and `2018` editions.
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module -- --crate-type cdylib` failed with code 101
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects
</code></pre>
<p>When I check my Cargo and Rust versions, both are 1.68.2 (from 2023), so I am not sure what has gone wrong with my installation. What methods could I use to fix this problem?</p>
<p><strong>Update:</strong> Here is what I see when I look into the Cargo.toml file above:</p>
<pre><code>[package]
edition = "2021"
rust-version = "1.56"
name = "once_cell"
version = "1.17.1"
authors = ["Aleksey Kladov <aleksey.kladov@gmail.com>"]
exclude = [
"*.png",
"*.svg",
"/Cargo.lock.msrv",
"rustfmt.toml",
]
</code></pre>
<p>Another detail that I would like to add is that my package needs <code>tokenizers</code> to run, so it requires <code>tokenizers-0.8.1</code>; and when I installed <code>tokenizers</code> directly from <code>pip</code>, <code>pip</code> would only download the wheel file rather than building any wheels.</p>
|
<python><rust><pip><python-wheel><huggingface-tokenizers>
|
2023-04-12 06:04:16
| 0
| 451
|
Hoang Cuong Nguyen
|
75,992,129
| 1,440,565
|
Generate media url from file path in Django
|
<p>Let's say I have a function that writes a file to <code>MEDIA_ROOT</code>:</p>
<pre><code>def write_file():
    with open(settings.MEDIA_ROOT / 'myfile.txt', 'w') as file:
        file.write('foobar')
</code></pre>
<p>Now I want the absolute URL for this file as it is served from <code>MEDIA_URL</code>. Does Django have any utility function to do this for me? Or do I need to build it myself?</p>
<p>Note this file is NOT associated with a model's <code>FileField</code>.</p>
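If the default `FileSystemStorage` is in use, `django.core.files.storage.default_storage.url('myfile.txt')` already maps a MEDIA_ROOT-relative name to its MEDIA_URL URL. For an absolute filesystem path, the mapping is just a relative path joined onto MEDIA_URL; a framework-free sketch (helper name is mine; pass `settings.MEDIA_ROOT` and `settings.MEDIA_URL` for the last two arguments):

```python
import os
from urllib.parse import urljoin

def media_url_for(file_path, media_root, media_url):
    """Map an absolute filesystem path under MEDIA_ROOT to its public URL."""
    rel = os.path.relpath(file_path, media_root)
    if rel.startswith(".."):
        raise ValueError("path is outside MEDIA_ROOT")
    return urljoin(media_url, rel.replace(os.sep, "/"))

print(media_url_for("/srv/media/reports/myfile.txt", "/srv/media", "/media/"))
```

For the absolute URL including scheme and host, prefix the result with `request.build_absolute_uri("/")` or a configured site base URL.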
|
<python><django>
|
2023-04-12 05:18:51
| 1
| 83,954
|
Code-Apprentice
|
75,992,036
| 10,437,110
|
Python code for this algorithm to identify outliers in k-means clustering
|
<p>I have an <code>input_df</code> which has a string index, not integers.
The index could be anything like '1234a', 'abcd', and so on.</p>
<p>I have performed k-means on an input df with <code>k = 100</code> and have received <code>centroid</code> and <code>labels</code> as output.</p>
<p>If I am not wrong,</p>
<ul>
<li><p>the <code>centroid</code> has 100 values corresponding to the mean of all points within a cluster for 100 such clusters.</p>
</li>
<li><p><code>labels</code> has the same size as that of the <code>input_df</code> which shows which cluster does that point/row belongs to.</p>
</li>
</ul>
<p>I have to now perform a process to identify the outliers in k-means clustering as per the following pseudo-code.</p>
<pre><code>c_x : corresponding centroid of sample point x where x ∈ X
1. Compute the l2 distance of every point to its corresponding centroid.
2. t = the 0.05 or 95% percentile of the l2 distances.
3. for each sample point x in X do
4. if || x - c_x ||2 > t then
5. mark x as outlier
</code></pre>
<p>Note: the <code>2</code> in line 4 is a subscript.</p>
<p>Now, I do not completely understand the condition mentioned in line 4.</p>
<p>Can someone give an equivalent Python code for the above-mentioned algorithm?</p>
<p>Here is the structure of the code.</p>
<pre><code>from sklearn.cluster import KMeans

def remove_outliers(input_df, centroids, labels):
    pass

kmeans = KMeans(n_clusters=100)
kmeans.fit(input_df)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_

filtered_centroids, filtered_labels = remove_outliers(input_df, centroids, labels)
</code></pre>
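Line 4 simply says: flag x when its Euclidean (l2) distance to its own centroid exceeds the threshold t; the subscript 2 denotes the l2 norm. With NumPy, `centroids[labels]` lines each row up with its assigned centroid, so the whole check vectorizes; the boolean mask can then filter the dataframe rows (e.g. `input_df[~mask]`) and `labels[~mask]`. A sketch (function and variable names are mine):

```python
import numpy as np

def outlier_mask(X, centroids, labels, q=95):
    # l2 distance from every sample to the centroid of its own cluster
    dists = np.linalg.norm(X - centroids[labels], axis=1)
    t = np.percentile(dists, q)  # the 95th-percentile threshold
    return dists > t             # True where the sample is an outlier

# tiny demo: two tight clusters plus one far-away point
X = np.array([[0.0, 0.0]] * 10 + [[10.0, 10.0]] * 10 + [[0.0, 50.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0] * 10 + [1] * 10 + [0])

mask = outlier_mask(X, centroids, labels)
print(mask.sum(), mask[-1])
```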
|
<python><k-means>
|
2023-04-12 05:00:29
| 1
| 397
|
Ash
|
75,991,973
| 2,073,937
|
Is it OK to derive the same class twice in python?
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    ...

class B(A):
    ...

class C(B, A):
    ...
</code></pre>
<p>As you can see, class <strong>C</strong> derives <strong>A</strong> twice, once directly and once indirectly through class <strong>B</strong>.</p>
<p>Is it OK? What possible wrong can happen with this?</p>
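Listing A again is redundant but legal, because Python can still build a consistent method resolution order (C, B, A, object). The base order matters, though: reversing it makes the C3 linearization impossible and raises a TypeError at class-creation time. A quick demonstration:

```python
class A: ...
class B(A): ...
class C(B, A): ...   # redundant but consistent: MRO is C, B, A, object

print([cls.__name__ for cls in C.__mro__])

# the reversed order cannot be linearized: A would have to come both
# before B (base order) and after B (B subclasses A)
try:
    class D(A, B): ...
except TypeError as exc:
    mro_error = str(exc)
print(mro_error)
```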
|
<python><class>
|
2023-04-12 04:43:41
| 3
| 616
|
Leonid Ganeline
|
75,991,895
| 11,918,314
|
Python - How to get number of teams under a manager using manager hierarchy columns
|
<p>I have a dataframe with the employee emails, manager emails and the manager hierarchy columns. I am trying to get the number of teams that a manager has.</p>
<p>My current dataframe</p>
<pre><code>emp_email mgr_email mgr_hier_01 mgr_hier_02 mgr_hier_03
jack@abc.com cook@abc.com CEO@abc.com sandy@abc.com cook@abc.com
katy@abc.com cook@abc.com CEO@abc.com sandy@abc.com cook@abc.com
panko@abc.com jacob@abc.com CEO@abc.com jacob@abc.com
lynne@abc.com jacob@abc.com CEO@abc.com jacob@abc.com
tom@abc.com brian@abc.com CEO@abc.com livp@abc.com brian@abc.com
grace@abc.com noah@abc.com CEO@abc.com noah@abc.com
will@abc.com hugh@abc.com CEO@abc.com hugh@abc.com
samson@abc.com sheila@abc.com CEO@abc.com noah@abc.com sheila@abc.com
johnson@abc.com nick@abc.com CEO@abc.com nick@abc.com
pete@abc.com jody@abc.com CEO@abc.com livp@abc.com jody@abc.com
torres@abc.com golio@abc.com CEO@abc.com sandy@abc.com golio@abc.com
jody@abc.com livp@abc.com CEO@abc.com livp@abc.com
sandy@abc.com CEO@abc.com CEO@abc.com
jacob@abc.com CEO@abc.com CEO@abc.com
cook@abc.com sandy@abc.com CEO@abc.com sandy@abc.com
livp@abc.com CEO@abc.com CEO@abc.com
noah@abc.com CEO@abc.com CEO@abc.com
hugh@abc.com CEO@abc.com CEO@abc.com
brian@abc.com livp@abc.com CEO@abc.com livp@abc.com
nick@abc.com CEO@abc.com CEO@abc.com
sheila@abc.com noah@abc.com CEO@abc.com noah@abc.com
golio@abc.com sandy@abc.com CEO@abc.com sandy@abc.com
CEO@abc.com NAN NAN
</code></pre>
<p>What I hope to achieve is a column which gives the count of teams a manager has, if the employee is a manager. For example, sandy@abc.com has 2 managers reporting to her (cook@abc.com and golio@abc.com), so the count of teams under her should be 2. Meanwhile, jacob@abc.com has no managers reporting to him, but he is a manager managing 2 individual contributors (panko@abc.com and lynne@abc.com), so the count of teams under jacob@abc.com should be 1.</p>
<pre><code>emp_email mgr_email mgr_hier_01 mgr_hier_02 mgr_hier_03 num_teams_if_mgr
jack@abc.com cook@abc.com CEO@abc.com sandy@abc.com cook@abc.com 0
katy@abc.com cook@abc.com CEO@abc.com sandy@abc.com cook@abc.com 0
panko@abc.com jacob@abc.com CEO@abc.com jacob@abc.com 0
lynne@abc.com jacob@abc.com CEO@abc.com jacob@abc.com 0
tom@abc.com brian@abc.com CEO@abc.com livp@abc.com brian@abc.com 0
grace@abc.com noah@abc.com CEO@abc.com noah@abc.com 0
will@abc.com hugh@abc.com CEO@abc.com hugh@abc.com 0
samson@abc.com sheila@abc.com CEO@abc.com noah@abc.com sheila@abc.com 0
johnson@abc.com nick@abc.com CEO@abc.com nick@abc.com 0
pete@abc.com jody@abc.com CEO@abc.com livp@abc.com jody@abc.com 0
torres@abc.com golio@abc.com CEO@abc.com sandy@abc.com golio@abc.com 0
jody@abc.com livp@abc.com CEO@abc.com livp@abc.com 1
sandy@abc.com CEO@abc.com CEO@abc.com 2
jacob@abc.com CEO@abc.com CEO@abc.com 1
cook@abc.com sandy@abc.com CEO@abc.com sandy@abc.com 1
livp@abc.com CEO@abc.com CEO@abc.com 2
noah@abc.com CEO@abc.com CEO@abc.com 1
hugh@abc.com CEO@abc.com CEO@abc.com 1
brian@abc.com livp@abc.com CEO@abc.com livp@abc.com 1
nick@abc.com CEO@abc.com CEO@abc.com 1
sheila@abc.com noah@abc.com CEO@abc.com noah@abc.com 1
golio@abc.com sandy@abc.com CEO@abc.com sandy@abc.com 1
CEO@abc.com NAN NAN 6
</code></pre>
<p>So far, I am only able to create the hierarchy columns for the dataframe with the code below. Appreciate any form of assistance.</p>
<pre><code>import networkx as nx
# create graph
G = nx.from_pandas_edgelist(df_hc, source='mgr_email', target='emp_email', create_using=nx.DiGraph)
# find roots (= top managers)
roots = [n for n,d in G.in_degree() if d==0]
# for each employee, find the hierarchy
df_hierarchy = (pd.DataFrame([next((p for root in roots for p in nx.all_simple_paths(G, root, node)), [])[:-1] for node in df_hc['emp_email']], index= df_hc.index).rename(columns=lambda x: f'mgr_hier_{x+1:02d}'))
# join to original DataFrame
df_hc2 = df_hc.join(df_hierarchy)
</code></pre>
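<p>A possible sketch for the counting part, inferring the rule from the expected column as "number of direct reports that are themselves managers, or 1 if a manager only has individual contributors" (that rule is an assumption read off the sample output); it needs only the emp/mgr columns, shown on a toy subset of the data:</p>

```python
import pandas as pd

# toy subset of the hierarchy above
df = pd.DataFrame({
    'emp_email': ['jack@abc.com', 'katy@abc.com', 'panko@abc.com', 'lynne@abc.com',
                  'torres@abc.com', 'cook@abc.com', 'golio@abc.com',
                  'sandy@abc.com', 'jacob@abc.com', 'CEO@abc.com'],
    'mgr_email': ['cook@abc.com', 'cook@abc.com', 'jacob@abc.com', 'jacob@abc.com',
                  'golio@abc.com', 'sandy@abc.com', 'sandy@abc.com',
                  'CEO@abc.com', 'CEO@abc.com', None],
})

# anyone who appears as somebody's manager is a manager
managers = set(df['mgr_email'].dropna())

def num_teams(emp):
    reports = df.loc[df['mgr_email'] == emp, 'emp_email']
    if reports.empty:
        return 0                       # individual contributor
    mgr_reports = int(reports.isin(managers).sum())
    return max(mgr_reports, 1)         # an ICs-only manager counts as one team

df['num_teams_if_mgr'] = df['emp_email'].map(num_teams)
```

On this subset, sandy@abc.com gets 2 (two manager reports) and jacob@abc.com gets 1 (ICs only), matching the expected column.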
|
<python><pandas><dataframe><group-by><hierarchy>
|
2023-04-12 04:25:23
| 1
| 445
|
wjie08
|
75,991,822
| 6,087,589
|
from transformers import AutoTokenizer, AutoModel
|
<p>I have these updated package versions:
tqdm-4.65.0, transformers-4.27.4</p>
<p>I am running this code:
from transformers import AutoTokenizer, AutoModel</p>
<p>I am getting this error:
ImportError: cannot import name 'ObjectWrapper' from 'tqdm.utils' (/Users/anitasancho/opt/anaconda3/lib/python3.7/site-packages/tqdm/utils.py)</p>
|
<python><import><filenames><huggingface-transformers><tqdm>
|
2023-04-12 04:08:56
| 1
| 419
|
anitasp
|
75,991,817
| 11,938,023
|
How do I update a pandas/numpy row with a xor of a next row into that same row
|
<p>OK, the question is whether there is a fast way with pandas or numpy to xor an array and update the next row with the results.</p>
<p>Basically I have a pandas data frame named 'ss' like so:</p>
<pre><code> rst no1 no2 no3 no4 no5 no6 no7
0 1 6 2 15 14 9 5 1
1 11 0 0 0 0 0 0 0
2 9 0 0 0 0 0 0 0
3 11 0 0 0 0 0 0 0
4 3 0 0 0 0 0 0 0
5 15 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0
</code></pre>
<p>(Use <code>ss = pd.read_clipboard()</code> to copy-paste the dataframe above into a variable.)</p>
<p>What I want to do is update each 'no' column with a xor from the next 'rst' value, roughly ss.loc[1, ['no1', 'no2', ...]] = ss.loc[1, 'rst'] ^ ss.loc[0, ['no1', 'no2', ...]], so the first step would create a dataframe like this:</p>
<pre><code> rst no1 no2 no3 no4 no5 no6 no7
0 1 6 2 15 14 9 5 1
1 11 13 9 4 5 2 14 10
2 9 0 0 0 0 0 0 0
3 11 0 0 0 0 0 0 0
4 3 0 0 0 0 0 0 0
5 15 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0
</code></pre>
<p>which is basically ss.loc[1, 'rst'], which is 11, so 11 ^ np.array([6, 2, 15, 14, 9, 5, 1]) gives np.array([13, 9, 4, 5, 2, 14, 10]), which I then set into each no column in sequence as you can see above. The next step is to take ss.loc[2, 'rst'], which is 9, and do the next sequence:</p>
<pre><code> rst no1 no2 no3 no4 no5 no6 no7
0 1 6 2 15 14 9 5 1
1 11 13 9 4 5 2 14 10
2 9 4 0 13 12 11 7 3
3 11 0 0 0 0 0 0 0
4 3 0 0 0 0 0 0 0
5 15 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0
</code></pre>
<p>so 9 ^ np.array([13, 9, 4, 5, 2, 14, 10]) which the result is
np.array([4, 0, 13, 12, 11 , 7, 3]) which then I set in each no column in sequence as you can see above.</p>
<p>My question is: how do I do this with numpy or pandas in a fast way, ideally without any loops? I'm working with a data set of one million rows and looping is slow, so I'm hoping there is a shortcut or better method of setting each 'no*' column to the xor of the next 'rst' value with the corresponding 'no' values in the previous row.</p>
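<p>Since xor is associative, row <em>i</em>'s no-values are just row 0's values xor'ed with the running xor of rst[1..i], which removes the loop entirely. A sketch on the sample data (assuming that cumulative-xor reading of the worked example above):</p>

```python
import numpy as np
import pandas as pd

ss = pd.DataFrame({'rst': [1, 11, 9, 11, 3, 15, 0]})
no_cols = [f'no{i}' for i in range(1, 8)]
first_row = np.array([6, 2, 15, 14, 9, 5, 1])

# running xor of rst down the frame; xor row 0's rst back out so that
# row 0 keeps its original values (x ^ a ^ a == x)
cum = np.bitwise_xor.accumulate(ss['rst'].to_numpy())
cum ^= ss['rst'].iloc[0]

# broadcast: every row's no-values = first row's no-values ^ its running rst
vals = first_row[None, :] ^ cum[:, None]
ss = ss.join(pd.DataFrame(vals, columns=no_cols, index=ss.index))
```

This reproduces the two worked steps above (row 1 becomes 13, 9, 4, 5, 2, 14, 10 and row 2 becomes 4, 0, 13, 12, 11, 7, 3) in one vectorized pass.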
|
<python><pandas><numpy><xor>
|
2023-04-12 04:06:33
| 1
| 7,224
|
oppressionslayer
|
75,991,643
| 400,119
|
FastAPI coerces a boolean to string when the type is defined str | bool
|
<p>When running my code with <code>main.py</code> and <code>config.py</code> I get <code>config.testing</code> back as a <code>str</code> and not a <code>bool</code>.</p>
<p>Repo/branch here: <a href="https://github.com/dycw/tutorial-test-driven-development-with-fastapi-and-docker/blob/getting-started/" rel="nofollow noreferrer">https://github.com/dycw/tutorial-test-driven-development-with-fastapi-and-docker/blob/getting-started/</a></p>
<p>or with source</p>
<pre class="lang-py prettyprint-override"><code># src/app/main.py
from fastapi import Depends, FastAPI
from app.config import Settings, get_settings
app = FastAPI()
@app.get("/ping")
async def pong(*, settings: Settings = Depends(get_settings)) -> dict[str, str | bool]:
return {
"ping": "pong!",
"environment": settings.environment,
"testing": settings.testing,
}
</code></pre>
<pre class="lang-py prettyprint-override"><code># src/app/config.py
from functools import lru_cache
from logging import getLogger
from typing import cast
from pydantic import BaseSettings
_LOGGER = getLogger("uvicorn")
class Settings(BaseSettings):
environment: str = "dev"
testing: bool = cast(bool, 0)
@lru_cache
def get_settings() -> Settings:
_LOGGER.info("Loading config settings from the environment...")
return Settings()
</code></pre>
<p>My JSON returns:</p>
<p><a href="https://i.sstatic.net/Dq2RT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dq2RT.png" alt="enter image description here" /></a></p>
|
<python><fastapi><pydantic>
|
2023-04-12 03:21:10
| 2
| 657
|
Derek
|
75,991,578
| 11,634,498
|
TypeError: `generator` yielded an element of shape (32, 224, 224, 3) where an element of shape (224, 224, 3) was expected
|
<p>My generator code takes a dataframe (csv file) and images as input and generates images with labels.
My generator code is:</p>
<pre><code>class ImageSequence:
def __init__(self, df, mode,img_size=(224, 224), num_channels=3):
self.df = df
self.indices = np.arange(len(df))
self.batch_size = 32
self.img_dir = 'dataset'
self.img_size = tuple(img_size)
self.num_channels = num_channels
self.mode = mode
def __getitem__(self, idx):
sample_indices = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
imgs = []
genders = []
ages = []
for _, row in self.df.iloc[sample_indices].iterrows():
img = cv2.imread(str(os.path.join(self.img_dir, row["img_paths"])))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, self.img_size)
img = img.astype(np.float32) / 255.0
imgs.append(img)
genders.append(row["genders"])
ages.append(row["ages"])
return imgs, genders
def __len__(self):
return len(self.df)
def __call__(self):
for i in range(self.__len__()):
yield self.__getitem__(i)
if i == self.__len__()-1:
self.on_epoch_end()
def on_epoch_end(self):
np.random.shuffle(self.indices)
</code></pre>
<p>And using the below code to call the generator to train the model</p>
<pre><code>epochs = 20
batch_size = 32
csv_path = 'asian_dataset.csv'
df = pd.read_csv(str(csv_path))
train, val = train_test_split(df, random_state=42, test_size=0.1)
train_gen = ImageSequence(train, "train")
val_gen = ImageSequence(val, "val")
print(train_gen)
ot = (tf.float32, tf.int32)
os = ((224, 224, 3), ())
train_data = tf.data.Dataset.from_generator(train_gen,output_types=ot,output_shapes=os)
val_data = tf.data.Dataset.from_generator(val_gen,output_types=ot,output_shapes=os)
print(train_data)
train_data = train_data.batch(batch_size)
val_data = val_data.batch(batch_size)
print(train_data)
</code></pre>
<p>The above code, when executed, produces this error:</p>
<blockquote>
<p>TypeError: <code>generator</code> yielded an element of shape (32, 224, 224, 3)
where an element of shape (224, 224, 3) was expected .</p>
</blockquote>
<p>I haven't used <code>tf.data.Dataset.from_generator</code> before, but due to the system's limited memory I have to use it.</p>
|
<python><tensorflow><generator><tensorflow-datasets>
|
2023-04-12 03:05:39
| 1
| 644
|
Krupali Mistry
|
75,991,516
| 1,232,087
|
From a single row dataframe how to create a new dataframe containing list of column names and their values
|
<p><strong>Given df1</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>c1</th>
<th>c2</th>
<th>c3</th>
<th>c4</th>
<th>c5</th>
<th>c6</th>
<th>c7</th>
<th>c8</th>
</tr>
</thead>
<tbody>
<tr>
<td>45</td>
<td>15</td>
<td>100</td>
<td>68</td>
<td>96</td>
<td>86</td>
<td>35</td>
<td>48</td>
</tr>
</tbody>
</table>
</div>
<p><strong>How to create df2</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>columnName</th>
<th>ColumnValues</th>
</tr>
</thead>
<tbody>
<tr>
<td>c1</td>
<td>45</td>
</tr>
<tr>
<td>c2</td>
<td>15</td>
</tr>
<tr>
<td>c3</td>
<td>100</td>
</tr>
<tr>
<td>c4</td>
<td>68</td>
</tr>
<tr>
<td>c5</td>
<td>96</td>
</tr>
<tr>
<td>c6</td>
<td>86</td>
</tr>
<tr>
<td>c7</td>
<td>35</td>
</tr>
<tr>
<td>c8</td>
<td>48</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Question</strong>: Using <code>pyspark</code>, if we are given dataframe <code>df1</code> (shown above), how can we create a dataframe <code>df2</code> that contains the column names of <code>df1</code> in the first column and the values of <code>df1</code> in the second column?</p>
<p><strong>REMARKS</strong>: Please note that <code>df1</code> will be dynamic, it will change based on the data loaded to it. As shown below, I already know how to do it if <code>df1</code> is static:</p>
<pre><code>data = [['c1', 45], ['c2', 15], ['c3', 100]]
mycolumns = ["myCol1","myCol2"]
df = spark.createDataFrame(data, mycolumns)
df.show()
</code></pre>
<p>For a static df1, the above code will show df2 as:</p>
<pre><code>|myCol1|myCol2|
|---|---|
|c1|45|
|c2|15|
|c3|100|
</code></pre>
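<p>The reshape itself is a melt/unpivot, which adapts to whatever columns the dynamic <code>df1</code> has because it reads them at runtime. For illustration the logic is shown with pandas (the commented PySpark call is an assumption based on the <code>DataFrame.melt</code> API added in Spark 3.4):</p>

```python
import pandas as pd

df1 = pd.DataFrame([[45, 15, 100, 68, 96, 86, 35, 48]],
                   columns=[f'c{i}' for i in range(1, 9)])

# works for any column set: melt reads df1's columns at runtime
df2 = df1.melt(var_name='columnName', value_name='ColumnValues')

# Hypothetical PySpark 3.4+ equivalent (verify against your Spark version):
# df2 = df1.melt(ids=[], values=df1.columns,
#                variableColumnName='columnName',
#                valueColumnName='ColumnValues')
```

On older Spark versions the same effect is usually built by exploding a map of column-name/value pairs constructed from <code>df1.columns</code>.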
|
<python><apache-spark><pyspark>
|
2023-04-12 02:51:46
| 1
| 24,239
|
nam
|
75,991,498
| 14,109,040
|
Flattening a list of sublists and individual elements
|
<p>I have a list of sublists and elements. I want to flatten the list such that only the sublists within the list are split and added to the list.</p>
<pre><code>[['item1','item2'],['item3','item4','item5'],'item6']
</code></pre>
<p>I have tried the following</p>
<pre><code>[item for sublist in [['item1','item2'],['item3','item4','item5'],'item6'] for item in sublist]
</code></pre>
<p>results in:</p>
<pre><code>['item1', 'item2', 'item3', 'item4', 'item5', 'i', 't', 'e', 'm', '6']
</code></pre>
<p>However, I want it to result in:</p>
<pre><code>['item1', 'item2', 'item3', 'item4', 'item5', 'item6']
</code></pre>
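<p>The string gets exploded into characters because a <code>str</code> is itself iterable, so the nested comprehension iterates over it letter by letter. Guarding on the element type avoids that; a sketch:</p>

```python
def flatten(items):
    out = []
    for item in items:
        if isinstance(item, list):   # only explode real sublists
            out.extend(item)
        else:                        # strings and other scalars pass through
            out.append(item)
    return out

flatten([['item1', 'item2'], ['item3', 'item4', 'item5'], 'item6'])
# → ['item1', 'item2', 'item3', 'item4', 'item5', 'item6']
```

The one-liner equivalent wraps lone elements in a list: <code>[x for item in data for x in (item if isinstance(item, list) else [item])]</code>.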
|
<python>
|
2023-04-12 02:45:48
| 2
| 712
|
z star
|
75,991,137
| 6,495,199
|
Make extendable fastAPI query class
|
<p>I'm trying to have a base query class to reuse across different use cases:</p>
<pre><code>from fastapi.params import Query
class BaseParams:
def __init__(
self,
name: str = Query(alias="name", description="name", example="Bob"),
age: int = Query(alias="age", description="age", example=18),
# ... more base fields
):
self.name = name
self.age = age
# ...
</code></pre>
<p>then have something like</p>
<pre><code>class TeacherParams(BaseParams):
def __init__(
self,
salary: int = Query(default=None, description="Salary", example=100000),
*args,
**kwargs,
):
super().__init__()
self.salary = salary
class StudentParams(BaseParams):
def __init__(
self,
classes: list[str] = Query(default=["math"], description="Classes", example=["math"]),
*args,
**kwargs,
):
super().__init__()
self.classes = classes
@app.get("/students", tags=["students"])
async def students(request: Request, params: StudentParams = Depends()):
# do stuffs
@app.get("/teachers", tags=["teachers"])
async def teachers(request: Request, params: TeacherParams = Depends()):
# do stuffs
# ...
</code></pre>
<p>This approach is not working as I expected.</p>
<p><code>StudentParams</code> and <code>TeacherParams</code> are not getting the fields from <code>BaseParams</code>, only the ones specific to each.</p>
|
<python><rest><fastapi><pydantic>
|
2023-04-12 00:59:59
| 1
| 354
|
Carlos Rojas
|
75,990,991
| 11,737,958
|
How to get the file name given in tkinter save dialog
|
<p>I am new to Python. I use tkinter to create a text file editor and am trying to save the file contents to a text file. If I save the file name as "abc.txt" or "abc", how do I get, in code, the file name that was given in the save dialog before saving the file? Thanks in advance!</p>
<p><a href="https://i.sstatic.net/FJfxk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FJfxk.png" alt="enter image description here" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>def save_file():
files = [('Text Document','*.txt'), ('Python Files','*.py')]
wfile = asksaveasfilename(filetypes = files, defaultextension=".txt")
filename = wfile.get() <==== this line is not correct and throws an error
wrfile = open(filename,"w")
wrfile.write(str(txt.get(1.0, END)))
wrfile.close()
</code></pre>
|
<python>
|
2023-04-12 00:20:17
| 1
| 362
|
Kishan
|
75,990,892
| 6,075,349
|
Pandas stacked bar plot with multi week data
|
<p>I have a <code>df</code> as follows:</p>
<pre><code>Week Instrument Trader Count
1 Stock 100
1 Bond 50
1 MBS 20
2 Stock 150
2 Bond 500
2 MBS 200
</code></pre>
<p>I want to create a stacked bar plot such that <code>week</code> is the <code>x-axis</code>, <code>Trader Count</code> is on the <code>y-axis</code> as a stacked bar, and the <code>Instrument</code> is color coded.</p>
<p>I tried <code>df.plot.bar(stacked=True, figsize=(20,10), x='Week')</code> but it resulted in unstacked bars. Also, my full dataset has 52 weeks of data, so it's quite large to plot, and I want to set the week labels ticked a few weeks apart.
Thanks!</p>
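<p><code>plot.bar(stacked=True)</code> stacks the <em>columns</em> of the frame, so the data first needs reshaping so each Instrument becomes its own column. A sketch on the sample data (the every-4th-week tick spacing is a hypothetical choice):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line in a notebook
import pandas as pd

df = pd.DataFrame({
    'Week': [1, 1, 1, 2, 2, 2],
    'Instrument': ['Stock', 'Bond', 'MBS'] * 2,
    'Trader Count': [100, 50, 20, 150, 500, 200],
})

# one column per Instrument, one row per Week
pivoted = df.pivot_table(index='Week', columns='Instrument',
                         values='Trader Count', aggfunc='sum').fillna(0)
ax = pivoted.plot.bar(stacked=True, figsize=(20, 10))

# with 52 weeks, show only every 4th week label
step = 4
for i, label in enumerate(ax.get_xticklabels()):
    label.set_visible(i % step == 0)
```

Each Instrument gets its own color automatically because it is now a separate column of the pivoted frame.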
|
<python><pandas>
|
2023-04-11 23:52:57
| 1
| 1,153
|
FlyingPickle
|
75,990,859
| 9,883,236
|
Transforming the type of an argument of a function using decorator
|
<p>I am attempting to transform a certain instance that belongs to type A to type B, but I am stuck on the type annotations (using pyright to type check). I currently have the following:</p>
<pre class="lang-py prettyprint-override"><code>A = TypeVar("A")
B = TypeVar("B")
T = TypeVar("T")
P = ParamSpec("P")
P2 = ParamSpec("P2")
def ensure(a: Type[A], convert: Callable[[A], B]
) -> Callable[[Callable[P2, Coroutine[Any, Any, T]]], Callable[P, Coroutine[Any, Any, T]]]:
def command(func: Callable[P2, Coroutine[Any, Any, T]]) -> Callable[P, Coroutine[Any, Any, T]]:
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
for i, argument in enumerate(args):
if isinstance(argument, a):
new_instance = convert(argument)
arguments = (*args[:i], new_instance, *args[i + 1:])
return await func(*arguments, **kwargs)
raise TypeError("Type not found in arguments")
return wrapper
return command
</code></pre>
<p>but it gives me
<code>error: ParamSpec "P" is scoped to a callable within the return type and cannot be referenced in the function body (reportGeneralTypeIssues)</code>, which is valid, as well as <code>Arguments for ParamSpec "P2@command" are missing (reportGeneralTypeIssues)</code>.</p>
<p>I also tried using Concatenate as below</p>
<pre class="lang-py prettyprint-override"><code>
def ensure(a: Type[A], convert: Callable[[A], B]
) -> Callable[[Callable[Concatenate[B, P], Coroutine[Any, Any, T]]], Callable[Concatenate[A, P], Coroutine[Any, Any, T]]]:
def command(func: Callable[Concatenate[B, P], Coroutine[Any, Any, T]]) -> Callable[Concatenate[A, P], Coroutine[Any, Any, T]]:
async def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
for i, argument in enumerate(args):
if isinstance(argument, a):
new_instance = convert(argument)
arguments = (*args[:i], new_instance, *args[i + 1:])
return await func(*arguments, **kwargs)
raise TypeError("Type not found in arguments")
return wrapper
return command
</code></pre>
<p>but, I don't know the position of the argument beforehand.</p>
<p>I can't seem to find a solution that demonstrates what I'm trying to do via type hinting, and I would appreciate some guidance in how to go about this?</p>
<p>Edit: Example use case</p>
<pre class="lang-py prettyprint-override"><code>
class UserContext(api.Context):
def __init__(self, ctx: api.Context):
...
@api.command()
@ensure(api.Context, UserContext)
async def write(ctx: UserContext, ...):
...
</code></pre>
<p>The API package will then use the write command and passes it an instance of api.Context and the decorator will transform api.Context into UserContext and pass it to the write command.</p>
|
<python><python-typing><pyright>
|
2023-04-11 23:43:35
| 0
| 345
|
YousefZ
|
75,990,832
| 10,387,506
|
Random combination of letters and numbers in each cell of one column
|
<p>I haven't been able to find the solution to the loop part of the following. I have a data frame with over 500K rows. I want to write a random combination of letters and numbers in a column we'll call "ProductID". I found solutions here that let me write simple numbers, which work, even if they're painfully slow. For example:</p>
<pre><code>for index, row in df3.iterrows():
df3['ProductID'] = np.arange(1,551586)
</code></pre>
<p>I have also found the code on this site to produce a random sequence, and each time I run it, it dutifully produces a new string:</p>
<pre><code>import string
import random
def id_generator(size=12, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for _ in range(size))
# df3['ProductID'] = id_generator()
i = 0
while i < 6:
print(id_generator())
i = i + 1
</code></pre>
<p>Output:</p>
<pre><code>7JKD7LWUZPHC
1ETULSX4WRJI
B42TSN4SFC20
RYIDD7N2RPI2
8GEMULEC7TX1
0FGZZQLBF0XE
</code></pre>
<p>What I can't seem to do is write that string to each cell in a new column as described above.</p>
<p>My apologies, I cannot find where I found it exactly. However, when I try to enclose it in a loop, like so, it takes the first string generated and simply duplicates it:</p>
<pre><code>for index, row in df3.iterrows():
df3['ProductID'] = id_generator()
</code></pre>
<p>The same thing happens if I use a simple <code>while</code> loop.</p>
<p>Current output:</p>
<pre><code>+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
| name | main_category | sub_category | ratings | no_of_ratings | discount_price_USD | actual_price_USD | ProductID |
+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
| Lloyd 1.5 Ton 3 Star Inverter Split Ac (5 In 1... | appliances | Air Conditioners | 4.2 | 2255 | 402.5878 | 719.678 | HP2ISWKAI7CA |
| LG 1.5 Ton 5 Star AI DUAL Inverter Split AC (C... | appliances | Air Conditioners | 4.2 | 2948 | 567.178 | 927.078 | HP2ISWKAI7CA |
| LG 1 Ton 4 Star Ai Dual Inverter Split Ac (Cop... | appliances | Air Conditioners | 4.2 | 1206 | 420.778 | 756.278 | HP2ISWKAI7CA |
| LG 1.5 Ton 3 Star AI DUAL Inverter Split AC (C... | appliances | Air Conditioners | 4 | 69 | 463.478 | 841.678 | HP2ISWKAI7CA |
| Carrier 1.5 Ton 3 Star Inverter Split AC (Copp... | appliances | Air Conditioners | 4.1 | 630 | 420.778 | 827.038 | HP2ISWKAI7CA |
+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
</code></pre>
<p>Expected output:</p>
<pre><code>+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
| name | main_category | sub_category | ratings | no_of_ratings | discount_price_USD | actual_price_USD | ProductID |
+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
| Lloyd 1.5 Ton 3 Star Inverter Split Ac (5 In 1... | appliances | Air Conditioners | 4.2 | 2255 | 402.5878 | 719.678 | HP2ISWKAI7CA |
| LG 1.5 Ton 5 Star AI DUAL Inverter Split AC (C... | appliances | Air Conditioners | 4.2 | 2948 | 567.178 | 927.078 | 7JKD7LWUZPHC |
| LG 1 Ton 4 Star Ai Dual Inverter Split Ac (Cop... | appliances | Air Conditioners | 4.2 | 1206 | 420.778 | 756.278 | 1ETULSX4WRJI |
| LG 1.5 Ton 3 Star AI DUAL Inverter Split AC (C... | appliances | Air Conditioners | 4 | 69 | 463.478 | 841.678 | B42TSN4SFC20 |
| Carrier 1.5 Ton 3 Star Inverter Split AC (Copp... | appliances | Air Conditioners | 4.1 | 630 | 420.778 | 827.038 | RYIDD7N2RPI2 |
+---------------------------------------------------+---------------+------------------+---------+---------------+--------------------+------------------+--------------+
</code></pre>
<p>I'm clearly doing something wrong, but I can't figure out what.</p>
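<p>The likely cause: <code>id_generator()</code> is evaluated once per loop iteration, but <code>df3['ProductID'] = id_generator()</code> assigns that single scalar to the <em>entire</em> column each time, so the last assignment wins and every row shows the same string. Building one ID per row (no <code>iterrows</code> needed) fixes it; a sketch with a hypothetical stand-in frame:</p>

```python
import string
import random

import pandas as pd

def id_generator(size=12, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

df3 = pd.DataFrame({'name': ['a', 'b', 'c']})  # stand-in for the real frame

# one fresh ID per row, assigned in a single vectorized column write
df3['ProductID'] = [id_generator() for _ in range(len(df3))]
```

Random IDs are not guaranteed unique; with 500K rows a collision is unlikely for 12-character IDs, but <code>df3['ProductID'].is_unique</code> is a cheap check, and <code>uuid.uuid4()</code> is an alternative if uniqueness must be guaranteed.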
|
<python><loops><random>
|
2023-04-11 23:35:54
| 1
| 333
|
Dolunaykiz
|
75,990,828
| 7,175,049
|
How to get the pandas query statement to work for to_datetime and to_timedelta?
|
<p>I would like to filter my dataframe so that I get only the rows with the latest 2 days of data (relative to the latest available date from the dataframe). So this code would work:</p>
<p><code>df[pd.to_datetime(df['date']) <= pd.to_datetime(df['date'].max()) - pd.to_timedelta('2 days')]</code></p>
<p>But now I would like to achieve this same effect but using the <code>query</code> method. But if I do this:</p>
<p><code>df.query("@pd.to_datetime(quote_date) <= @pd.to_datetime(quote_date.max()) - @pd.to_timedelta('2 days')")</code></p>
<p>then I get <code>TypeError: Cannot convert input [2 days 00:00:00] of type <class 'pandas._libs.tslibs.timedeltas.Timedelta'> to Timestamp</code> error.</p>
<p>I can't get this thing to work, and would love some feedback on what I'm doing wrong.</p>
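<p>Calling <code>@pd.</code> functions inside the expression is fragile because <code>query</code> parses it with a restricted engine. One workaround sketch: convert the column once up front and precompute the cutoff as a local variable, then reference it with <code>@</code> (the sample dates here are hypothetical):</p>

```python
import pandas as pd

df = pd.DataFrame({'quote_date': ['2023-01-01', '2023-01-05', '2023-01-10']})
df['quote_date'] = pd.to_datetime(df['quote_date'])  # convert once, outside query

# precompute the threshold; query only sees a plain Timestamp variable
cutoff = df['quote_date'].max() - pd.Timedelta('2 days')
out = df.query('quote_date <= @cutoff')
```

This keeps the arithmetic in regular pandas code, where Timestamp minus Timedelta is well defined, and leaves only a simple comparison for <code>query</code> to parse.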
|
<python><pandas>
|
2023-04-11 23:34:26
| 0
| 400
|
StatsNoob
|
75,990,752
| 5,032,387
|
Pipenv: Staircase package downgraded after updating to Pandas 2.0
|
<p>I'm running pipenv 2023.3.20, pip 21.1.3. I updated pandas to version 2.0 from version 1.5.3. I did have to specify this version explicitly because updating as is wasn't actually updating pipfile.lock from the old version. Maybe this is a clue to the problem.</p>
<p>Now, my staircase package version has been downgraded from 2.5.0 to 2.0.0 and is causing errors in execution. When I try to update staircase
<code>pipenv update staircase==2.5.0</code></p>
<p>I get the following message:</p>
<pre><code>Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>When I run <code>pipenv graph</code>, here is what I see under the staircase dependencies:</p>
<pre><code>staircase==2.0.0
- matplotlib [required: >=2, installed: 3.7.1]
- contourpy [required: >=1.0.1, installed: 1.0.7]
- numpy [required: >=1.16, installed: 1.23.5]
- cycler [required: >=0.10, installed: 0.11.0]
- fonttools [required: >=4.22.0, installed: 4.39.3]
- importlib-resources [required: >=3.2.0, installed: 5.12.0]
- zipp [required: >=3.1.0, installed: 3.15.0]
- kiwisolver [required: >=1.0.1, installed: 1.4.4]
- numpy [required: >=1.20, installed: 1.23.5]
- packaging [required: >=20.0, installed: 23.0]
- pillow [required: >=6.2.0, installed: 9.5.0]
- pyparsing [required: >=2.3.1, installed: 3.0.9]
- python-dateutil [required: >=2.7, installed: 2.8.2]
- six [required: >=1.5, installed: 1.16.0]
</code></pre>
|
<python><pipenv><package-management>
|
2023-04-11 23:15:32
| 1
| 3,080
|
matsuo_basho
|
75,990,745
| 6,051,652
|
Remove timezone from timestamp but keep the local time
|
<p>I have a dataframe with epoch time. I convert the epoch time to a timestamp with my local timezone. I would like to remove the timezone information but keep my local timezone in the timestamp (subtract the timezone offset from the timestamp and then remove the timezone).
This is the code I have:</p>
<pre><code>epochs = np.arange(1644516000, 1644516000 + 1800*10, 1800)
df = pd.DataFrame({'time': epochs})
df['time'] = pd.to_datetime(df['time'], unit='s').dt.tz_localize("US/Pacific")
</code></pre>
<p>I cannot use:</p>
<pre><code>dt.tz_localize(None)
</code></pre>
<p>Since it converts it back to UTC.</p>
<p>My desired output is a timestamp with no timezone information but in my local timezone:</p>
<pre><code>pd.date_range('2022-02-10 10:00', freq='30min', periods=10)
</code></pre>
<p>How do I do that?</p>
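<p>One sketch: <code>tz_localize</code> only attaches a label to existing wall times, so to shift the clock you localize the epochs as UTC (which is what Unix epochs are), convert to US/Pacific, and only then strip the zone:</p>

```python
import numpy as np
import pandas as pd

epochs = np.arange(1644516000, 1644516000 + 1800 * 10, 1800)
df = pd.DataFrame({'time': epochs})
df['time'] = (pd.to_datetime(df['time'], unit='s')
              .dt.tz_localize('UTC')        # epochs are UTC wall times
              .dt.tz_convert('US/Pacific')  # shift to the local clock
              .dt.tz_localize(None))        # drop tz info, keep local time
```

The final <code>tz_localize(None)</code> now removes the zone <em>after</em> the conversion, so the local wall time (10:00 onward) is preserved instead of reverting to UTC.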
|
<python><pandas>
|
2023-04-11 23:13:48
| 1
| 1,159
|
Eyal S.
|
75,990,694
| 840,821
|
Python Atlassian JIRA iterate for next page
|
<p>I followed the example described here:
<a href="https://community.atlassian.com/t5/Jira-articles/Atlassian-Python-API-s/ba-p/2091355" rel="nofollow noreferrer">https://community.atlassian.com/t5/Jira-articles/Atlassian-Python-API-s/ba-p/2091355</a>
to write a toy program to get through the list of JIRAs.</p>
<pre class="lang-py prettyprint-override"><code>from atlassian import Jira
jira = Jira(
url='https://your-site.atlassian.net',
username='email',
password='token',
cloud=True)
jql_request ='project = WSP AND issuetype = Story'
issues = jira.jql(jql_request)
print(issues)
</code></pre>
<p>But this doesn't list all the JIRAs (only the first 50). I want to iterate through the next batch. The <a href="https://stackoverflow.com/questions/69877366/iterating-through-jira-api-url">Iterating through Jira API Url</a> post describes setting the startAt parameter when using the REST method directly.</p>
<p>I want to know how I can specify using the Python SDK method. More specifically, it is not mentioned in the API documentation: <a href="https://atlassian-python-api.readthedocs.io/jira.html#get-issues-from-jql-search-result-with-all-related-fields" rel="nofollow noreferrer">https://atlassian-python-api.readthedocs.io/jira.html#get-issues-from-jql-search-result-with-all-related-fields</a></p>
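<p>A sketch of a generic pagination loop, with the Jira call stubbed behind a <code>fetch_page</code> callable so the loop itself can be exercised offline. The wiring comment relies on the assumption that <code>jira.jql</code> accepts <code>start</code>/<code>limit</code> keyword arguments mirroring the REST <code>startAt</code>/<code>maxResults</code>; verify against your installed atlassian-python-api version:</p>

```python
def fetch_all(fetch_page, page_size=50):
    """fetch_page(start, limit) -> dict with 'issues' and 'total' keys."""
    start, issues = 0, []
    while True:
        page = fetch_page(start, page_size)
        issues.extend(page['issues'])
        start += page_size
        # stop once we've walked past the reported total or got an empty page
        if start >= page['total'] or not page['issues']:
            return issues

# Hypothetical wiring with atlassian-python-api:
# all_issues = fetch_all(lambda s, l: jira.jql(jql_request, start=s, limit=l))
```

Each JQL response includes a <code>total</code> field, which the loop uses to know when to stop.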
|
<python><jira>
|
2023-04-11 23:04:02
| 1
| 1,949
|
Coder
|
75,990,606
| 3,620,725
|
Automatically add project to sys.path in VS Code like PyCharm/Spyder do?
|
<h2>Problem</h2>
<p>In a Python project with subpackages, absolute imports don't work inside any files that aren't in the project root directory.</p>
<pre><code>- my_project
- my_package
- __init__.py
- my_module.py
- my_scripts
- some_script.py
</code></pre>
<p><code>some_script.py</code></p>
<pre><code>import sys
print('\n'.join(sys.path))
from my_package.my_module import hello_world
hello_world()
</code></pre>
<p><code>Output (PyCharm)</code></p>
<pre><code>D:\_MyFiles\Programming\Projects\python-import-demo\my_scripts
D:\_MyFiles\Programming\Projects\python-import-demo
***list of unrelated paths***
Hello, World!
</code></pre>
<p><code>Output (VS Code)</code></p>
<pre><code>d:\_MyFiles\Programming\Projects\python-import-demo\my_scripts
***list of unrelated paths***
Traceback (most recent call last):
File "d:\_MyFiles\Programming\Projects\python-import-demo\my_scripts\some_script.py", line 4, in <module>
from my_package.my_module import hello_world
ModuleNotFoundError: No module named 'my_package'
</code></pre>
<h2>Workarounds</h2>
<ul>
<li>Use relative imports (this breaks <code>__main__</code> blocks)</li>
<li>Edit project-level <code>launch.json</code> config (the problem still happens when running <code>.py</code> files through the top bar or CLI)</li>
<li>Run <code>pip install -e MY_PROJECT</code> (I don't want to repeat this for every project I open)</li>
<li>Explicitly find the project root directory and append it to <code>sys.path</code> inside my own code (this is disgusting in my opinion, but good if I want to send the project to someone and have it simply work with no additional configuration by them)</li>
</ul>
<p>I found all these workarounds <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">here</a> and <a href="https://fadil-nohur.medium.com/resolving-intra-project-imports-in-python-a-simple-guide-visual-studio-code-98472b0a8f59" rel="nofollow noreferrer">here</a></p>
<h2>Question</h2>
<p>Is there any way to solve this problem for VS Code globally? When I open a project in PyCharm and run any <code>.py</code> file, absolute imports just work automatically without any manual configuration because the project root gets added to <code>sys.path</code> and that's what I want for VS Code. I don't want to have to use any of the above workarounds on every new project.</p>
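<p>One commonly suggested global workaround (treat it as an assumption to verify, not an official PyCharm-style feature): since "Run Python File" executes in the integrated terminal, a <em>user-level</em> <code>settings.json</code> entry can put the workspace root on <code>PYTHONPATH</code> for every project you open:</p>

```json
{
  "terminal.integrated.env.windows": { "PYTHONPATH": "${workspaceFolder}" },
  "terminal.integrated.env.linux":   { "PYTHONPATH": "${workspaceFolder}" },
  "terminal.integrated.env.osx":     { "PYTHONPATH": "${workspaceFolder}" }
}
```

Note this overrides any pre-existing <code>PYTHONPATH</code> in those terminals; append <code>${env:PYTHONPATH}</code> with the platform's path separator if you need to preserve it. The per-project <code>.env</code> file (read via the <code>python.envFile</code> setting) remains the documented mechanism for the debugger.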
|
<python><visual-studio-code>
|
2023-04-11 22:43:31
| 2
| 5,507
|
pyjamas
|
75,990,603
| 2,687,317
|
pandas groupby and agg with multiple levels
|
<p>I have several very large datasets like this (simplified). Notice, however, that not all SBs (in this case 1-4) are represented at every LFrame...</p>
<pre><code>LFrame, Pwr, SB, Channels_Active, Channels_Assigned
1, 10, 1, 2, 2
1, 2, 2, 2, 1
1, 4, 3, 3, 2
1, 6, 3, 2, 2
10, 8, 1, 2, 2
10, 2, 2, 3, 2
10, 4, 3, 2, 1
10, 2, 3, 2, 1
10, 5, 4, 2, 2
</code></pre>
<p>I need to combine it in a couple ways. I believe there's a version of groupby I can use... but I can't figure it out. Maybe something like:</p>
<pre><code>call_data_map = {
'LFrame' :('LFrame','first'),
'Total_Pwr' :("Pwr", 'sum'),
'Channels_Act' :("Channels_Active", 'sum'),
'Channels_Assigned' :("Channels_Assigned", 'sum'),
# ??? some kind of generator? (sb :('SB', 'sum') for sb in range(1,5)) ???
}
df.groupby('LFrame').agg(**call_data_map)
</code></pre>
<p>I want the output to look like this (where I sum over the SBs to get total power in each SB and the total power overall). The cols (1-4) come from the SB col and I know the range will always be 1-4:</p>
<pre><code>LFrame, Total_Pwr, Channels_Act, Channels_Assigned, 1, 2, 3, 4
1, 22, 9, 7, 10, 2, 10, 0
10, 21, 11, 8, 8, 2, 6, 5
</code></pre>
<p>Is there an efficient way to do this? Using list comprehension and building up the rows (on each file) is slow.</p>
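<p>One sketch combining a named-aggregation <code>groupby</code> for the totals with a <code>pivot_table</code> for the per-SB power columns; <code>reindex</code> guarantees columns 1-4 appear even when an SB is absent at some LFrame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'LFrame':            [1, 1, 1, 1, 10, 10, 10, 10, 10],
    'Pwr':               [10, 2, 4, 6, 8, 2, 4, 2, 5],
    'SB':                [1, 2, 3, 3, 1, 2, 3, 3, 4],
    'Channels_Active':   [2, 2, 3, 2, 2, 3, 2, 2, 2],
    'Channels_Assigned': [2, 1, 2, 2, 2, 2, 1, 1, 2],
})

totals = df.groupby('LFrame').agg(
    Total_Pwr=('Pwr', 'sum'),
    Channels_Act=('Channels_Active', 'sum'),
    Channels_Assigned=('Channels_Assigned', 'sum'),
)

# per-SB power as columns 1..4; missing SBs become 0
per_sb = (df.pivot_table(index='LFrame', columns='SB',
                         values='Pwr', aggfunc='sum')
            .reindex(columns=range(1, 5))
            .fillna(0).astype(int))

out = totals.join(per_sb).reset_index()
```

Both pieces are single vectorized passes, so this should scale far better than building rows with list comprehensions per file.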
|
<python><pandas><dataframe>
|
2023-04-11 22:43:04
| 4
| 533
|
earnric
|
75,990,599
| 3,750,282
|
Errors with Selenium and Python and Using Chrome Driver after upgrade to a version >89
|
<p>I have the following configuration, which works perfectly in its current form.</p>
<p>I can use any chrome/chromedriver under v89 without any issues. Once I pass the v89 mark, it does not work anymore, giving the error below.</p>
<p>Any help would be appreciated, since I am going crazy with this.</p>
<p>I tried different approaches, like using the automatic webdriver-manager installer, but failed in each attempt.</p>
<pre><code>Traceback (most recent call last):\n File \"/opt/program/src/steps/start_browser_step.py\", line 30, in handle\n browser = self.browser(options, service)\n File \"/opt/program/src/steps/start_browser_step.py\", line 105, in browser\n browser = ChromeBrowser(options=options, service=service)\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py\", line 81, in __init__\n super().__init__(\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/chromium/webdriver.py\", line 106, in __init__\n super().__init__(\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py\", line 288, in __init__\n self.start_session(capabilities, browser_profile)\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py\", line 381, in start_session\n response = self.execute(Command.NEW_SESSION, parameters)\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py\", line 444, in execute\n self.error_handler.check_response(response)\n File \"/opt/bitnami/python/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py\", line 249, in check_response\n raise exception_class(message, screen, stacktrace)\nselenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: crashed.\n (chrome not reachable)\n (The process started from chrome location /opt/program/bin/chrome-linux/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)\nStacktrace:\n#0 0x00400084a262 <unknown>\n#1 0x00400083b133 <unknown>\n#2 0x004000545ce1 <unknown>\n#3 0x00400056cf1c <unknown>\n#4 0x004000568c4e <unknown>\n#5 0x0040005ae38d <unknown>\n#6 0x0040005a4d83 <unknown>\n#7 0x004000573552 <unknown>\n#8 0x00400057478c <unknown>\n#9 0x0040007f33f6 <unknown>\n#10 0x004000809858 <unknown>\n#11 0x0040008090ff <unknown>\n#12 0x00400080a015 <unknown>\n#13 0x004000810cd3 
<unknown>\n#14 0x00400080a39b <unknown>\n#15 0x0040007e51e7 <unknown>\n#16 0x004000827848 <unknown>\n#17 0x00400082798f <unknown>\n#18 0x004000835256 <unknown>\n#19 0x004002768fa3 start_thread\n#20 0x004002d244cf clone
</code></pre>
<p>This is the version I am trying to use, which does not work. I also tried with 110, 100, 90, and so on; all of them failed.</p>
<pre><code>ARG CHROME_URL="https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F1109227%2Fchrome-linux.zip?alt=media"
ARG CHROMEDRIVER_URL="https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F1109227%2Fchromedriver_linux64.zip?alt=media"
</code></pre>
<p>The below code works perfectly. The issue happens only when I use the above drivers.</p>
<pre><code>FROM public.ecr.aws/bitnami/python:3.8.12
ARG CHROME_URL="https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F843831%2Fchrome-linux.zip?alt=media"
ARG CHROMEDRIVER_URL="https://www.googleapis.com/download/storage/v1/b/chromium-browser-snapshots/o/Linux_x64%2F843831%2Fchromedriver_linux64.zip?alt=media"
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN apt-get install python3-dev -y
RUN apt-get install gcc g++ -y
RUN pip install --upgrade pip
RUN apt-get update -y
RUN apt-get install -y wget
RUN apt-get install -y unzip
RUN apt-get install -y nginx
RUN apt-get install -y git
RUN apt-get install -y xvfb
RUN apt-get install -y packagekit-gtk3-module
RUN apt-get install -y libx11-xcb1
RUN apt-get install -y libdbus-glib-1-2
RUN apt-get install -y libxt6
RUN apt-get install -y libnss3-dev
RUN apt-get install -y libgbm-dev
RUN apt-get install -y libasound2
RUN apt-get install -y ca-certificates
RUN apt-get install -y vim
RUN apt-get install -y software-properties-common apt-transport-https
RUN apt-get install -y fonts-indic
RUN apt-get install -y fonts-noto
RUN apt-get install -y fonts-noto-cjk
RUN mkdir -p /opt/program
RUN mkdir -p /opt/program/data
RUN mkdir -p /opt/program/bin/tmp
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
RUN wget $CHROME_URL -O /opt/program/bin/tmp/chrome.zip
RUN unzip /opt/program/bin/tmp/chrome.zip -d /opt/program/bin
RUN wget $CHROMEDRIVER_URL -O /opt/program/bin/tmp/chromedriver.zip
RUN unzip /opt/program/bin/tmp/chromedriver.zip -d /opt/program/bin
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
COPY . /opt/program
WORKDIR /opt/program
RUN pip install -r requirements.txt
CMD ["python", "-v"]
</code></pre>
<pre><code>autopep8==1.5.5
beautifulsoup4==4.10.0
black==20.8b1
boto3==1.17.2
Faker==11.3.0
flake8==3.8.4
html5lib==1.1
ImageHash==4.2.1
importlib-metadata==4.11.3
mergedeep~=1.3.4
mock==4.0.3
moto==1.3.14
numpy==1.22.4
Pillow==9.3.0
pre-commit==2.10.1
pytest==6.2.4
pytest-cov==2.12.0
pytest-env==0.6.2
pytest-pythonpath==0.7.0
python-dotenv==0.15.0
pyYAML==5.4.1
requests==2.25.1
requests-mock==1.9.3
scipy==1.8.1
selenium==4.6.0
webdriver-manager==3.8.5
</code></pre>
<pre><code>from selenium.webdriver import Chrome as ChromeBrowser
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.chrome.service import Service as ChromeService
options = ChromeOptions()
options.binary_location = '/opt/program/bin/chrome-linux/chrome'
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/605.1.15 (KHTML, like Gecko) ' \
'CriOS/85 Version/11.1.1 Safari/605.1.15'
options.add_argument(f'user-agent={user_agent}')
options.add_argument('--single-process')
options.add_argument('--allow-running-insecure-content')
options.add_argument('--ignore-certificate-errors')
options.add_argument('--disable-gpu')
options.add_argument('--hide-scrollbars')
options.add_argument('--window-size=1400,1080')
options.add_argument('--disable-cache')
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--start-maximized')
options.add_argument('--kiosk')
service = ChromeService(executable_path='/opt/program/bin/chromedriver_linux64/chromedriver')
browser = ChromeBrowser(options=options, service=service)
</code></pre>
|
<python><google-chrome><selenium-webdriver><selenium-chromedriver><webdriver>
|
2023-04-11 22:42:13
| 1
| 448
|
Scobee
|
75,990,526
| 10,509,939
|
how to import my custom module into a python IDE like Spyder and run my module?
|
<p>I'm pretty new to packaging scripts and I have an issue understanding how to run my code after converting it to a package!
I've developed a script with proper files and functions and used setuptools to convert it to a package. Suppose a simple package named "Simulation_Package" with <code>__main__.py</code> as:</p>
<pre><code>def main():
print("running the simulations")
    # ... do stuff ...
if __name__ == "__main__":
main()
</code></pre>
<p>I installed the package in an Anaconda environment and when I use the Anaconda powershell prompt, I can run my script with no issues using:</p>
<pre><code>python Simulation_Package
</code></pre>
<p>and it automatically runs my module and gets me the outputs which in this case prints "running the simulations"</p>
<p>When I use Spyder as an IDE, I can import the package and Spyder recognizes it, but I cannot run the script; I have no idea how to run it, actually. Calling <code>main</code> also raises an error:</p>
<pre><code>import Simulation_Package
Simulation_Package.main()
AttributeError: module 'Simulation_Package' has no attribute 'main'
</code></pre>
<p>Any idea how to run the code in Spyder?</p>
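<p>A minimal sketch of one way to run such a package from an IDE console: the standard-library <code>runpy</code> module is the in-process analogue of <code>python -m</code>. (The <code>AttributeError</code> is presumably because <code>main</code> lives in <code>__main__.py</code>, which a plain <code>import</code> does not execute.) The snippet rebuilds a throwaway copy of the package so it is self-contained:</p>

```python
import os
import runpy
import sys
import tempfile

# Recreate the package layout from the question in a temp directory
# so the sketch is runnable anywhere.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "Simulation_Package")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "__main__.py"), "w") as fh:
    fh.write(
        "def main():\n"
        "    print('running the simulations')\n"
        "\n"
        "if __name__ == '__main__':\n"
        "    main()\n"
    )

sys.path.insert(0, root)
# Equivalent of `python -m Simulation_Package`, but callable from a
# running interpreter such as Spyder's console:
runpy.run_module("Simulation_Package", run_name="__main__")
```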
|
<python><package><anaconda><spyder>
|
2023-04-11 22:22:44
| 1
| 391
|
Seji
|
75,990,460
| 581,002
|
Why can't Python decode this valid JSON with escaped quotes?
|
<p>I have this almost JSON which has something that's only similar to JSON inside:</p>
<pre class="lang-py prettyprint-override"><code>TEST_LINE = """Oct 21 22:39:28 GMT [TRACE] (Carlos-288) org.some.awesome.LoggerFramework RID=8e9076-4dd9-ec96-8f35-bde193498f: {
"service": "MyService",
"operation": "queryShowSize",
"requestID": "8e9076-4dd9-ec96-8f35-bde193498f",
"timestamp": 1634815968000,
"parameters": [
{
"__type": "org.some.awsome.code.service#queryShowSizeRequest",
"externalID": {
"__type": "org.some.awsome.code.common#CustomerID",
"value": "48317"
},
"CountryID": {
"__type": "org.some.awsome.code.common#CountryID",
"value": "125"
},
"operationOriginalDate": 1.63462085667E9,
"operationType": "MeasureWithToes",
"measureInstrumentIdentifier": "595909-48d2-6115-85e8-b3aa7b"
}
],
"output": {
"__type": "org.some.awsome.code.common#queryShowSizeReply",
"shoeSize": {
"value": "$ion_1_0 'org.some.awsome.model.processing.ShoeMeasurementBI@1.0'::'org.some.awsome.model.processing.FeetScience@1.0'::{customer_id:\"983017317\",measureInstrumentIdentifierTilda:\"595909-48d2-6115-85e8-b3aa7b\",foot_owner:\"Oedipus\",toe_code:\"LR2X10\",account_number_token:\"1234-2838316-1298470\",token_status:VALID,country_code:GRC,measure_store_format:METRIC}"
}
}
}
"""
</code></pre>
<p>The regex gives me the start of the JSON and I try decoding from there. According to <a href="https://jsonlint.com/" rel="nofollow noreferrer">https://jsonlint.com/</a>, it is valid JSON after that point.</p>
<p>So why doesn't Python's JSON module decode it? I get this error:</p>
<pre><code>Exception has occurred: JSONDecodeError
Expecting ',' delimiter: line 25 column 156 (char 992)
File "/Users/decoder/Downloads/json-problem.py", line 44, in read_json
d = json.loads(line)
^^^^^^^^^^^^^^^^
File "/Users/decoder/Downloads/json-problem.py", line 48, in <module>
print(read_json(TEST_LINE))
^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 25 column 156 (char 992)
</code></pre>
<p>Line 25 and character 156 points to the first <code>\"</code> in <code>output.shoeSize.value</code>.</p>
<p>But why? That embedded value is only <em>roughly</em> JSON but it should <em>not</em> try to decode it anyway as it is given as a plain string. And the quotes are nicely escaped to not end the string early.</p>
<pre class="lang-py prettyprint-override"><code>FIND_JSON = re.compile(
r"\w{3} \d{2} (\d{2}[: ]){3}GMT \[[^]]+\] \([^)]+\) "
r"org.some.awesome.LoggerFramework RID=[^:]+: "
)
def read_json(line: str) -> str | None:
if not (m := FIND_JSON.match(line)):
return None
line = line[m.end(0) :]
d = json.loads(line)
return d
print(read_json(TEST_LINE))
</code></pre>
<p>I've also tried the <a href="https://docs.python.org/3/library/json.html#json.JSONDecoder.raw_decode" rel="nofollow noreferrer"><code>raw_decode()</code></a> but that fails similarly. I don't understand.</p>
<p><strong>Update 1:</strong> To the commenter pointing to a non-escaped double quote: I don't see it. For me, the colon is followed by a backslash and then a double quote, and the linter tells me it's valid. Is there some copy & paste transformation happening on SO?</p>
<p><strong>Update 2:</strong> Added the (still missing) code that makes the problem apparent.</p>
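<p>A minimal sketch of the suspected cause, assuming the log line really is pasted into a regular (non-raw) triple-quoted literal as in <code>TEST_LINE</code>: Python resolves <code>\"</code> to a bare <code>"</code> before <code>json.loads</code> ever sees the text, so the inner quotes arrive unescaped. A raw string keeps the backslashes:</p>

```python
import json

good = r'{"v": "say \"hi\""}'  # raw: the backslashes survive for JSON
bad = '{"v": "say \"hi\""}'    # non-raw: Python eats the backslashes

assert json.loads(good) == {"v": 'say "hi"'}

try:
    json.loads(bad)
except json.JSONDecodeError as exc:
    # Same failure mode as in the question: the unescaped quote ends the
    # string early, so the parser expects a ',' delimiter next.
    print(exc)
```

<p>If that is what is happening here, prefixing the literal with <code>r</code> should make the text parse.</p>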
|
<python><json><escaping><decode>
|
2023-04-11 22:07:54
| 2
| 3,039
|
primfaktor
|
75,990,432
| 2,778,224
|
List of int and floats in Python Polars casting to object
|
<p>Why does the column <code>floats</code> in the following example cast to <code>object</code> instead of <code>list[f64]</code>? How can I change it to <code>list[f64]</code>?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
dfLists = pl.DataFrame({
'ints':[ [0,1], [4,3,2]],
'floats':[ [0.0,1], [2,3]],
'strings':[ ["0","1"],["2","3"]]
})
dfLists
</code></pre>
<pre><code>shape: (2, 3)
┌───────────┬──────────┬────────────┐
│ ints ┆ floats ┆ strings │
│ --- ┆ --- ┆ --- │
│ list[i64] ┆ object ┆ list[str] │
╞═══════════╪══════════╪════════════╡
│ [0, 1] ┆ [0.0, 1] ┆ ["0", "1"] │
│ [4, 3, 2] ┆ [2, 3] ┆ ["2", "3"] │
└───────────┴──────────┴────────────┘
</code></pre>
<p>Note: It seems this behavior may be new, based on this blog post (<a href="https://www.rhosignal.com/posts/polars-nested-dtypes/" rel="nofollow noreferrer">.html</a>) where the same example casts to <code>list[f64]</code>.</p>
<p><strong>Edit</strong>: After updating polars to version <code>0.17.2</code> the column casts to <code>list[f64]</code> as expected.</p>
<p>I would still like to know if it is possible to change the dtypes of the elements of a list. For example, adapting the previous example:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
dfLists = pl.DataFrame({
'ints':[ [0,1], [4,3,2]],
'object':[ [0,"1"], [2,3]],
'strings':[ ["0","1"],["2","3"]]
})
dfLists.select(pl.col("strings").list.eval(pl.element().cast(pl.Int16)))
# works
dfLists.select(pl.col("object").list.eval(pl.element().cast(pl.Int16))) # error
# error:
# PanicException: not implemented for dtype Object("object")
</code></pre>
|
<python><dataframe><python-polars>
|
2023-04-11 22:00:32
| 0
| 479
|
Maturin
|
75,990,424
| 6,242,883
|
PyTorch - Error when trying to minimize a function of a symmetric matrix
|
<p>I want to minimize a loss function of a symmetric matrix where some values are fixed. To do this, I defined the tensor <code>A_nan</code> and I placed objects of type <code>torch.nn.Parameter</code> in the values to estimate.</p>
<p>However, when I try to run the code I get the following exception:</p>
<pre><code>RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
</code></pre>
<p>I found <a href="https://stackoverflow.com/questions/48274929/pytorch-runtimeerror-trying-to-backward-through-the-graph-a-second-time-but">this question</a> that seemed to have the same problem, but the solution proposed there does not apply to my case (as far as I understand). Or at least I would not know how to apply it.</p>
<p>Here is a self-contained example of what I am trying to do:</p>
<pre><code>import torch
A_nan = torch.tensor([[1.0, 2.0, torch.nan], [2.0, torch.nan, 5.0], [torch.nan, 5.0, 6.0]])
nan_idxs = torch.where(torch.isnan(torch.triu(A_nan)))
A_est = torch.clone(A_nan)
weights = torch.nn.ParameterList([])
for i, j in zip(*nan_idxs):
w = torch.nn.Parameter(torch.distributions.Normal(3, 0.5).sample())
A_est[i, j] = w
A_est[j, i] = w
weights.append(w)
optimizer = torch.optim.Adam(weights, lr=0.01)
for _ in range(10):
optimizer.zero_grad()
loss = torch.sum(A_est ** 2)
loss.backward()
optimizer.step()
</code></pre>
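<p>A hedged sketch of one possible fix: because <code>A_est</code> is assembled from the parameters once, the first <code>backward()</code> frees the graph that every later iteration still references. Rebuilding <code>A_est</code> inside the loop gives each step a fresh graph:</p>

```python
import torch

A_nan = torch.tensor([[1.0, 2.0, float("nan")],
                      [2.0, float("nan"), 5.0],
                      [float("nan"), 5.0, 6.0]])
nan_idxs = torch.where(torch.isnan(torch.triu(A_nan)))
weights = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.distributions.Normal(3, 0.5).sample())
     for _ in range(len(nan_idxs[0]))]
)
optimizer = torch.optim.Adam(weights, lr=0.01)

for _ in range(10):
    optimizer.zero_grad()
    # Reassemble the symmetric estimate every step so each backward()
    # works on a fresh graph instead of a freed one.
    A_est = A_nan.clone()
    for (i, j), w in zip(zip(*nan_idxs), weights):
        A_est[i, j] = w
        A_est[j, i] = w
    loss = torch.sum(A_est ** 2)
    loss.backward()
    optimizer.step()
```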
|
<python><optimization><pytorch>
|
2023-04-11 21:57:57
| 1
| 1,176
|
Tendero
|
75,990,405
| 19,130,803
|
getting error while installing black and safety
|
<p>I am using Poetry for a Python application. While adding development dependencies as below</p>
<pre><code>poetry add black flake8 flake8-import-order flake8-docstrings flake8-black flake8-bugbear safety mypy pytest-cov pytest --group dev
</code></pre>
<p>Getting error for <code>black</code> and <code>safety</code> as below</p>
<pre><code>Because no versions of safety match >2.3.5,<3.0.0
and safety (2.3.5) depends on packaging (>=21.0,<22.0), safety (>=2.3.5,<3.0.0) requires packaging (>=21.0,<22.0).
And because black (23.3.0) depends on packaging (>=22.0)
and no versions of black match >23.3.0,<24.0.0, safety (>=2.3.5,<3.0.0) is incompatible with black (>=23.3.0,<24.0.0).
So, because project_titanic depends on both black (^23.3.0) and safety (^2.3.5), version solving failed.
</code></pre>
<p>Please help.</p>
|
<python>
|
2023-04-11 21:54:23
| 2
| 962
|
winter
|
75,990,307
| 3,821,009
|
Pandas concatenate strings and numpy array
|
<p>I have this:</p>
<pre><code>import numpy
import pandas

n = 5
df = pandas.DataFrame(dict(j=numpy.repeat(0, n)))
df['j'] = 'prefix-'
df['j'] += numpy.arange(n).astype('U')
df['j'] += '-suffix'
print(df)
</code></pre>
<p>which produces:</p>
<pre><code> j
0 prefix-0-suffix
1 prefix-1-suffix
2 prefix-2-suffix
3 prefix-3-suffix
4 prefix-4-suffix
</code></pre>
<p>Is there a way to do this (reasonably) in one line?</p>
<p>I tried:</p>
<pre><code>df['j'] = 'prefix-' + numpy.arange(n).astype('U') + '-suffix'
</code></pre>
<p>but that resulted in this error:</p>
<pre><code>numpy.core._exceptions._UFuncNoLoopError: ufunc 'add' did not contain a loop with signature matching types (dtype('<U7'), dtype('<U21')) -> None
</code></pre>
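<p>One way to get a one-liner, assuming a pandas string <code>Series</code> (rather than a NumPy unicode array) is acceptable, since <code>str + Series[str]</code> broadcasts element-wise without the ufunc dtype clash:</p>

```python
import numpy
import pandas

n = 5
# Convert the integer range to a string Series first; '+' then
# concatenates the prefix and suffix element-wise.
df = pandas.DataFrame(
    {'j': 'prefix-' + pandas.Series(numpy.arange(n)).astype(str) + '-suffix'}
)
print(df)
```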
|
<python><pandas><numpy>
|
2023-04-11 21:35:10
| 2
| 4,641
|
levant pied
|
75,990,114
| 1,287,788
|
Error when writing to a compressed CSV file in Python 3.x
|
<p>I am trying to write data (contained in a dict) to a compressed (gzip) CSV file. As far as I understand, a <a href="https://docs.python.org/3/library/gzip.html#gzip.GzipFile" rel="nofollow noreferrer">gzip.GzipFile</a> object should accept writes like a normal file object, such as:</p>
<pre><code> import gzip
import csv
with gzip.GzipFile(filename="test.csv.gz", mode="a") as gziphdlr:
writer = csv.DictWriter(gziphdlr, fieldnames=['time','value'])
writer.writeheader()
writer.writerow({'time': 0.1, "value": 100})
writer.writerow({'time': 0.2, "value": 200})
</code></pre>
<p>However, I get the error:</p>
<pre><code> ...
File "/usr/lib/python3.10/csv.py", line 154, in writerow
return self.writer.writerow(self._dict_to_list(rowdict))
File "/usr/lib/python3.10/gzip.py", line 285, in write
data = memoryview(data)
TypeError: memoryview: a bytes-like object is required, not 'str'
</code></pre>
<p>Any suggestions where I may be wrong?</p>
<p>Many thanks!</p>
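<p>A minimal sketch of what I believe is the mismatch: <code>GzipFile</code> is a <em>binary</em> file object, while <code>csv</code> writes <code>str</code>. Opening the archive in text mode with <code>gzip.open</code> (and <code>newline=''</code>, as the csv docs recommend) sidesteps it:</p>

```python
import csv
import gzip

# 'at' = append, text mode; gzip.open wraps the compressed stream in a
# TextIOWrapper so csv can write str directly.
with gzip.open("test.csv.gz", mode="at", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["time", "value"])
    writer.writeheader()
    writer.writerow({"time": 0.1, "value": 100})
    writer.writerow({"time": 0.2, "value": 200})

# Read it back to confirm the rows round-trip.
with gzip.open("test.csv.gz", mode="rt", newline="") as fh:
    print(list(csv.DictReader(fh)))
```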
|
<python><python-3.x><csv><gzip>
|
2023-04-11 21:01:31
| 1
| 318
|
mauscope
|
75,990,075
| 10,266,106
|
Efficient Sharing of Numpy Arrays in Multiprocess
|
<p>I have two multi-dimensional Numpy arrays loaded/assembled in a script, named <code>stacked</code> and <code>window</code>. The size of each array is as follows:</p>
<p><code>stacked</code>: (1228, 2606, 26)</p>
<p><code>window</code>: (1228, 2606, 8, 2)</p>
<p>The goal is to perform statistical analysis at each i,j point in the multi-dimensional array, where:</p>
<ol>
<li>i,j of <code>window</code> is a subset collection of eight i,j points</li>
<li>These eight i, j points are used to extract values across the entirety of the <code>stacked</code> array and assemble them into a list</li>
<li>Statistical analysis is then performed by scipy on this list, with the resultant then prepared for use</li>
</ol>
<p>The portion of this script is as follows:</p>
<pre><code>import math
import multiprocessing as mpr

import numpy as np
import scipy as sci
import scipy.stats  # make sci.stats resolvable

# `stacked` and `window` are the module-level arrays described above.
def statagg(queue, startrng, endrng):
for stepone in range(startrng,endrng,1):
for steptwo in range(0,2606,1):
selection = window[stepone][steptwo]
piece = stacked[selection[:,0], selection[:,1]].tolist()
piece = [j for i in piece for j in i]
piece = [(i * 0.0393701) for i in piece]
piece = [0 if math.isnan(i) else i for i in piece]
param = sci.stats.gamma.fit(piece)
x = np.linspace(0, max(piece), 500)
cdf = sci.stats.gamma.cdf(x, *param)
def parallelstats():
processes = []
q = mpr.Queue()
for step in range(600,701,1):
pro = mpr.Process(target=statagg, args=(q, step, step + 1))
processes.extend([pro])
pro.start()
for p in processes:
p.join()
</code></pre>
<p>I've elected to use a 100-row chunk to test this process, initiating parallel runs of <code>statagg</code> with Multiprocess. Upon execution, I've noticed that these parallel processes in total consume ~8.5-9.0 GB of RAM, which seems unusually high given that both <code>stacked</code> and <code>window</code> are declared once, outside of the function executing tasks in parallel. The entries in each list only total 200-210, which I also expected to consume minimal amounts of memory.</p>
<p>What steps can I take to make this process more efficient in memory, such that I can conduct the statistical analysis with Scipy at a much faster pace?</p>
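<p>One avenue to explore, assuming the per-process memory is dominated by copies of <code>stacked</code> and <code>window</code>: place the arrays in a <code>multiprocessing.shared_memory</code> block once and have each worker attach by name, so all processes read the same buffer. A minimal sketch with a stand-in array:</p>

```python
import numpy as np
from multiprocessing import shared_memory

src = np.arange(12, dtype=np.float64).reshape(3, 4)  # stand-in for `stacked`

# Parent: allocate a shared block and copy the data in once.
shm = shared_memory.SharedMemory(create=True, size=src.nbytes)
shared = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
shared[:] = src

# A worker would attach without copying, given only shm.name plus the
# shape and dtype (passed through the Process args):
#   existing = shared_memory.SharedMemory(name=shm.name)
#   view = np.ndarray((3, 4), dtype=np.float64, buffer=existing.buf)

view = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
assert np.array_equal(view, src)

shm.close()
shm.unlink()
```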
|
<python><numpy><parallel-processing><multiprocessing><numpy-ndarray>
|
2023-04-11 20:55:29
| 0
| 431
|
TornadoEric
|
75,990,005
| 12,817,213
|
How to set the text color of a textobject using reportlab pdfgen
|
<p>I was recently looking for the function needed to set a <code>textobject</code>'s text color. The <code>textobject</code> is a specific method of working with <code>reportlab</code>'s <code>pdfgen</code>. The documentation has some information, but it's pretty unclear.</p>
<p>Starting code:</p>
<pre><code>textobject = canvas.beginText()
textobject.textOut("Example Text")
canvas.drawText(textobject)
</code></pre>
|
<python><pdf-generation><reportlab>
|
2023-04-11 20:45:04
| 1
| 921
|
Austin Poulson
|
75,989,972
| 850,781
|
Numpy cannot `vectorize` a function
|
<p>I want to apply this simple function to numpy arrays <em><strong>fast</strong></em>:</p>
<pre><code>def f (x):
return max(0,1-abs(x))
</code></pre>
<p>Just for clarity's sake, here is the plot:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(start=-4,stop=4, num=100)
plt.plot(x, list(map(f,x)))
</code></pre>
<p><a href="https://i.sstatic.net/FMFM1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FMFM1.png" alt="function plot" /></a></p>
<p>However, <code>np.vectorize(f)(x)</code> returns a vector of zeros.</p>
<p>Yes, I know that</p>
<blockquote>
<p>The <a href="https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html" rel="nofollow noreferrer"><code>vectorize</code></a> function is provided primarily for convenience, not for performance. The implementation is essentially a <code>for</code> loop.</p>
</blockquote>
<p>but it should at least return a correct value, no?</p>
<p>PS. I managed to vectorize <code>f</code> by hand:</p>
<pre><code>def f (x):
z = np.zeros(x.shape)
return np.fmax(z,1-np.abs(x), out=z)
</code></pre>
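<p>The zeros come, I believe, from dtype inference: <code>np.vectorize</code> calls <code>f</code> on the first element to pick an output dtype, and <code>f(-4.0)</code> returns the <em>int</em> <code>0</code> (from <code>max(0, ...)</code>), so every result is cast to int and truncated. Declaring <code>otypes</code> avoids it:</p>

```python
import numpy as np

def f(x):
    return max(0, 1 - abs(x))

x = np.linspace(start=-4, stop=4, num=9)

g = np.vectorize(f, otypes=[float])  # force float output
assert g(x).max() == 1.0             # the peak at x=0 survives

# Matches the hand-vectorized form from the question:
assert np.allclose(g(x), np.fmax(0, 1 - np.abs(x)))
```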
|
<python><numpy><vectorization>
|
2023-04-11 20:39:55
| 2
| 60,468
|
sds
|
75,989,956
| 11,922,765
|
Python Crontab run a function: OSError: "crontab: user `user' unknown\n"
|
<p>I want to use a task scheduler inside my Python script. Here is an example:</p>
<pre><code>import datetime
def myjob():
print("%s: This is my present Job"%(datetime.datetime.now()))
from crontab import CronTab
cron = CronTab(user='user')
job = cron.new(myjob)
job.minute.every(1)
cron.write()
</code></pre>
<p>Present output:</p>
<pre><code>OSError: Read crontab user: b"crontab: user `user' unknown\n"
</code></pre>
<p><a href="https://i.sstatic.net/g7Sym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g7Sym.png" alt="enter image description here" /></a></p>
<p><strong>Update:</strong> I have now used the user <code>raspberrypi</code> but am still getting the same error.
<a href="https://i.sstatic.net/wUaw7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wUaw7.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><cron><cron-task>
|
2023-04-11 20:36:54
| 0
| 4,702
|
Mainland
|
75,989,908
| 15,178,267
|
Django: How to get tax amount for diffrent countries and multiply it by product price in django?
|
<p>I am creating an ecommerce system where I need to calculate taxes for different countries during checkout. I have added a flat rate for the tax, but that is not how tax works: it needs to be percentage based, which I have done. But how do I know which country a user is ordering a product from, get the tax rate for that country, and do the basic calculation?</p>
<p>If you have done this before, I would really appreciate any help fixing this issue.</p>
<p>Right now, this is how I am performing the tax calculation, with a fixed percentage for all users from all countries:</p>
<pre><code> new_tax_amount = 0
tax_amount_by_percentage = 25 / 100 # 25% / 100 = 0.25
new_tax_amount += int(item['qty']) * float(tax_amount_by_percentage )
</code></pre>
<p>This works well for me, but I need to know how to get the tax percentage based on the country the user is ordering the product from.</p>
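<p>A minimal sketch of the lookup half, assuming the country comes from the order's shipping address (captured at checkout) and that rates live in a simple mapping; in a real Django app the mapping would be a model table, and all names and rates here are hypothetical:</p>

```python
# Hypothetical percentage rates keyed by ISO country code.
TAX_RATES = {"US": 0.0725, "DE": 0.19, "NG": 0.075}
DEFAULT_RATE = 0.0

def tax_for(unit_price, qty, country_code):
    """Tax owed for a line item, given the buyer's country."""
    rate = TAX_RATES.get(country_code, DEFAULT_RATE)
    return unit_price * qty * rate

print(tax_for(100.0, 2, "DE"))
```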
|
<python><django><django-models><django-views>
|
2023-04-11 20:28:10
| 1
| 851
|
Destiny Franks
|
75,989,899
| 13,045,595
|
Can't open video saved on jetson xavier using OpenCV video writer
|
<p>I possess a Jetson Xavier, and I'm able to access the camera through dev/video0 or GStreamer. However, when I try to save the video in AVI or MP4 formats, I'm unable to open it. Based on the video size, which is around 10-12 MB, I believe it should contain the frames. How can I properly save and reopen the video later?</p>
<pre><code>import numpy as np
import cv2
source = "/dev/video0"
gstr = "v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1 "
# cap = cv2.VideoCapture(gstr, cv2.CAP_GSTREAMER)
cap = cv2.VideoCapture(source)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('./output.avi',fourcc, 30.0, (640,480))
count = 0
while(cap.isOpened()):
ret, frame = cap.read()
if ret==True:
out.write(frame)
print(count)
try:
cv2.imshow('frame',frame)
# cv2.imwrite("./log/"+str(count)+".jpg",frame)
except:
pass
if cv2.waitKey(1) & 0xFF == ord('q'):
break
count += 1
if count >100:
break
else:
break
# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
</code></pre>
<p>Build Info</p>
<pre><code>General configuration for OpenCV 4.5.0 =====================================
Version control: 4.5.0
Extra modules:
Location (extra): /opt/opencv_contrib/modules
Version control (extra): 4.5.0
Platform:
Timestamp: 2021-10-23T16:52:07Z
Host: Linux 5.10.59-tegra aarch64
CMake: 3.16.3
CMake generator: Unix Makefiles
CMake build tool: /usr/bin/make
Configuration: RELEASE
CPU/HW features:
Baseline: NEON FP16
required: NEON
disabled: VFPV3
C/C++:
Built as dynamic libs?: YES
C++ standard: 11
C++ Compiler: /usr/bin/c++ (ver 9.3.0)
C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /usr/bin/cc
C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -Wl,--gc-sections -Wl,--as-needed
Linker flags (Debug): -Wl,--gc-sections -Wl,--as-needed
ccache: NO
Precompiled headers: NO
Extra dependencies: m pthread cudart_static dl rt nppc nppial nppicc nppidei nppif nppig nppim nppist nppisu nppitc npps cublas cudnn cufft -L/usr/local/cuda/lib64 -L/usr/lib/aarch64-linux-gnu
3rdparty dependencies:
OpenCV modules:
To be built: alphamat aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot python3 quality rapid reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
Disabled: world
Disabled by dependency: -
Unavailable: cnn_3dobj cvv hdf java js julia matlab ovis python2 sfm ts viz
Applications: apps
Documentation: NO
Non-free algorithms: YES
GUI:
GTK+: YES (ver 3.24.20)
GThread : YES (ver 2.64.6)
GtkGlExt: NO
OpenGL support: NO
VTK support: NO
Media I/O:
ZLib: /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
JPEG: /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
WEBP: build (ver encoder: 0x020f)
PNG: /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.37)
TIFF: build (ver 42 - 4.0.10)
JPEG 2000: build (ver 2.3.1)
OpenEXR: build (ver 2.3.0)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES
Video I/O:
DC1394: YES (2.2.5)
FFMPEG: YES
avcodec: YES (58.54.100)
avformat: YES (58.29.100)
avutil: YES (56.31.100)
swscale: YES (5.5.100)
avresample: YES (4.0.0)
GStreamer: YES (1.16.2)
v4l/v4l2: YES (linux/videodev2.h)
Parallel framework: TBB (ver 2020.1 interface 11101)
Trace: YES (with Intel ITT)
Other third-party libraries:
Lapack: YES (/usr/lib/aarch64-linux-gnu/liblapack.so /usr/lib/aarch64-linux-gnu/libcblas.so /usr/lib/aarch64-linux-gnu/libatlas.so)
Eigen: YES (ver 3.3.7)
Custom HAL: YES (carotene (ver 0.0.1))
Protobuf: build (3.5.1)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
NVIDIA GPU arch: 72 87
NVIDIA PTX archs:
cuDNN: YES (ver 8.2.6)
Python 3:
Interpreter: /usr/bin/python3 (ver 3.8.10)
Libraries: /usr/lib/aarch64-linux-gnu/libpython3.8.so (ver 3.8.10)
numpy: /usr/lib/python3/dist-packages/numpy/core/include (ver 1.17.4)
install path: lib/python3.8/dist-packages/cv2/python-3.8
Python (for build): /usr/bin/python2.7
Java:
ant: NO
JNI: NO
Java wrappers: NO
Java tests: NO
Install to: /usr/local
-----------------------------------------------------------------
</code></pre>
|
<python><opencv><jetson-xavier>
|
2023-04-11 20:27:22
| 1
| 335
|
M.Akyuzlu
|
75,989,831
| 8,285,736
|
os.path.join(*list) not working as expected when trying to convert a list to a path
|
<p>I'm trying to convert this list:
<code>a_list = ['c:', 'project_files', 'ProjA', 'B_Files']</code></p>
<p>into this path:
<code>'c:\\project_files\\ProjA\\B_Files'</code></p>
<p>I'm using this:</p>
<pre><code>a_list = ['c:', 'project_files', 'ProjA', 'B_Files']
my_path = os.path.join(*a_list)
</code></pre>
<p>However this is what I get:
<code>'c:project_files\\ProjA\\B_Files'</code></p>
<p>Why isn't there a <code>\\</code> after <code>c:</code> ?
I was reading some similar questions and apparently it has something to do with this not being an absolute path but a relative one, but I'm still unsure how to get what I want</p>
<p>I'd appreciate any advice</p>
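<p>A short demonstration of the cause, using <code>ntpath</code> (the Windows flavor of <code>os.path</code>, importable on any OS): on Windows, <code>'c:'</code> alone means "the current directory on drive C", so <code>join</code> deliberately does not insert a separator after it. Anchoring the drive with a separator yields the absolute path:</p>

```python
import ntpath  # Windows path semantics, usable for illustration anywhere

a_list = ['c:', 'project_files', 'ProjA', 'B_Files']

# 'c:' + sep = 'c:\\' pins the path to the drive root.
my_path = ntpath.join(a_list[0] + ntpath.sep, *a_list[1:])
print(my_path)  # c:\project_files\ProjA\B_Files
```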
|
<python><list><path><os.path>
|
2023-04-11 20:18:10
| 1
| 643
|
ATP
|
75,989,822
| 2,727,655
|
What is Stanford CoreNLP's recipe for tokenization?
|
<p>Whether you're using Stanza or Corenlp (now deprecated) python wrappers, or the original Java implementation, the tokenization rules that StanfordCoreNLP follows is super hard for me to figure out from the code in the original codebases.</p>
<p>The implementation is very verbose and the tokenization approach is not really documented. Do they consider this proprietary? On their website, they say that "CoreNLP splits texts into tokens with an elaborate collection of rules, designed to follow UD 2.0 specifications."</p>
<p>I'm looking for where to find those rules, and ideally, to replace CoreNLP (a massive codebase!) with just a regex or something much simpler to mimic their tokenization strategy. Please assume in your responses that Stanford's tokenization approach is the goal. I am not looking for alternative tokenization solutions, but I also very much do not want to include and ship a code base that requires a massive java library as a dependency.</p>
<p>The answer should address the following behavior:</p>
<ul>
<li>Word hyphenation should be disabled (someone with a hyphenated last name should not be split, e.g., Marie Illonig-Alberts should tokenize as ["Marie", "Illonig-Alberts"]. Similarly, compound words like "well-intentioned" should not be split.</li>
<li>Plural apostrophes should be tokenized (e.g., all boys' shoes are red to ["all", "boys", "'", "shoes", "are", red"])</li>
<li>Apostrophes for single ownership (e.g., my aunt's favorite to ["my", "aunt", "'s", "favorite"]</li>
<li>Mr./Mrs. should not be ["Mr", "."] / ["Mrs", "."]</li>
<li>Normal punctuation should be their own tokens (end of sentence periods, commas, quotes for direct quotes or to denote sarcasm, question marks, semicolon and colons, and dashes). Double dashes should not be separated (e.g., -- is ["--"] NOT ["-", "-"]</li>
<li>Wouldn't should tokenize to ["would", "n't"]</li>
<li>"and/or" should not tokenize</li>
<li>Contractions should tokenize (e.g., I'm to ["I", "'m"]</li>
<li>I also see weird tokens that correspond to POS tags sometimes like "-LRB-" and ":-RRB-", which I do not understand.</li>
</ul>
|
<python><nlp><stanford-nlp><tokenize>
|
2023-04-11 20:16:51
| 2
| 554
|
lrthistlethwaite
|
75,989,817
| 8,372,455
|
AttributeError: 'NoneType' object has no attribute 'outputs'
|
<p>How do you save a TensorFlow Keras model to disk in h5 format when the model is trained in the scikit-learn pipeline fashion? I am trying to follow <a href="https://gist.github.com/MaxHalford/9bfaa8daf8b4bc17a7fb7ba58c880675" rel="nofollow noreferrer">this example</a> but not having any luck.</p>
<p>This works to train the models:</p>
<pre><code>import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from scikeras.wrappers import KerasRegressor
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import os
import joblib
ELECTRIC_POINT = 'total_main_kw'
# clean dataset
def clean_dataset(df):
assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame"
df.dropna(inplace=True)
indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(1)
cleaner = (f'dataset has been cleaned')
print(cleaner)
return df[indices_to_keep].astype(np.float64)
def my_model(input_shape):
# create model
model = Sequential()
model.add(Dense(22, input_shape=input_shape, kernel_initializer='normal', activation='relu'))
model.add(Dense(14, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam')
return model
# load data
df = pd.read_csv('./all_data.csv', index_col=[0], parse_dates=True)
df = clean_dataset(df)
# shuffle the DataFrame rows
df = df.sample(frac=1)
X = np.array(df.drop([ELECTRIC_POINT], 1))
Y = np.array(df[ELECTRIC_POINT])
# set the input shape
input_shape = (X.shape[1],)
print(f'Feature shape: {input_shape}')
# define the Keras model
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=my_model, input_shape=input_shape, epochs=100, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
# define number of models to train
num_models = int(np.sqrt(input_shape))
#num_models = 2
print(f'Number of models: {num_models}')
# define k-fold cross-validation
kfold = KFold(n_splits=num_models)
# define early stopping and model checkpoint callbacks
callbacks = [EarlyStopping(monitor='val_loss', patience=10),
ModelCheckpoint(filepath=os.path.join(os.path.curdir, 'model.h5'),
monitor='val_loss', save_best_only=True)]
# evaluate the model using cross-validation with callbacks
results = []
for train_idx, test_idx in kfold.split(X, Y):
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = Y[train_idx], Y[test_idx]
pipeline.fit(X_train, y_train, mlp__validation_data=(X_test, y_test), mlp__callbacks=callbacks)
mse = pipeline.score(X_test, y_test)
print(f'MSE this round: {mse}')
results.append(mse)
# report performance
print("MSE: %.2f (%.2f)" % (np.mean(results), np.std(results)))
# compare report performance to electricity summary stats
print(df[ELECTRIC_POINT].agg([np.min,np.max,np.mean,np.median]))
</code></pre>
<p>If I print the model step, <code>model_step = pipeline.steps.pop(-1)[1]</code>, this will return:</p>
<pre><code>KerasRegressor(
model=None
build_fn=<function my_model at 0x0000029156028CA0>
warm_start=False
random_state=None
optimizer=rmsprop
loss=None
metrics=None
batch_size=5
validation_batch_size=None
verbose=0
callbacks=None
validation_split=0.0
shuffle=True
run_eagerly=False
epochs=100
input_shape=(27,)
)
</code></pre>
<p>And then I can save the pipeline to a pickle file, which works; I get <code>pipeline.pkl</code> in my current dir:</p>
<pre><code># save best trained model to file
joblib.dump(pipeline, os.path.join(os.path.curdir, 'pipeline.pkl'))
</code></pre>
<p>But trying to run keras <code>save_model</code>:</p>
<pre><code>models.save_model(model_step.model, os.path.join(os.path.curdir, 'model.h5'))
</code></pre>
<p>I get an error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_27056\3991387177.py in <module>
----> 1 models.save_model(model_step.model, os.path.join(os.path.curdir, 'model.h5'))
~\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
~\Anaconda3\lib\site-packages\keras\saving\legacy\saving_utils.py in try_build_compiled_arguments(model)
349 if (
350 not version_utils.is_v1_layer_or_model(model)
--> 351 and model.outputs is not None
352 ):
353 try:
AttributeError: 'NoneType' object has no attribute 'outputs'
</code></pre>
|
<python><tensorflow><machine-learning><keras><scikit-learn>
|
2023-04-11 20:16:15
| 1
| 3,564
|
bbartling
|
75,989,750
| 20,266,647
|
PySpark, parquet "AnalysisException: Unable to infer schema for Parquet"
|
<p>I got this issue when I read data from parquet via PySpark in MLRun (it seems to be an invalid parquet). See the exception:</p>
<pre><code>-------------------------------------------------
AnalysisException Traceback (most recent call last)
<ipython-input-19-c76c691629bf> in <module>
----> 1 new_DF=spark.read.parquet('v3io:///projects/risk/FeatureStore/pbr/parquet/')
2 new_DF.show()
/spark/python/pyspark/sql/readwriter.py in parquet (self, *paths, **options)
299 int96RebaseMode=int96RebaseMode)
300
--> 301 return self._df (self._jreader.parquet (_to_seq(self._spark._sc, paths)))
302
303 def text(self, paths, wholetext=False, lineSep=None, pathGlobFilter=None,
/spark/python/lib/py4j-0.10.9.3-src.zip/py4j/java_gateway.py in _call_(self, *args)
1320 answer = self.gateway_client.send_command(command)
1321 return_value = get_return_value(
-> 1322 answer, self.gateway_client, self.target_id, self.name)
1323
1324 for temp_arg in temp_args:
/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
115 # Hide where the exception came from that shows a non-Pythonic
116 # JVM exception message.
--> 117 raise converted from None
118 else:
119 raise
AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
</code></pre>
<p>See the key part of the source code (which generated the exception):</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Test') \
.config("spark.executor.memory", "9g") \
.config("spark.executor.cores", "3") \
.config('spark.cores.max', 12) \
.getOrCreate()
new_DF=spark.read.parquet("v3io:///projects/risk/FeatureStore/pbr/parquet/")
new_DF.show()
</code></pre>
<p>Did you get a similar issue?</p>
|
<python><pyspark><parquet><mlrun>
|
2023-04-11 20:05:50
| 1
| 1,390
|
JIST
|
75,989,484
| 3,579,198
|
BeautifulSoup extract URL from HTML
|
<p>I want to extract <code>title</code> ("Airmeet Invite Email") & <code>srcset</code> URLs from the following HTML using <code>bs4</code>.</p>
<p><a href="https://i.sstatic.net/0CInY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0CInY.png" alt="enter image description here" /></a></p>
<p>I tried following code</p>
<pre><code>import requests
from bs4 import BeautifulSoup
URL = "https://www.mailmodo.com/email-templates/"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
results = soup.find(id="__next")
grid = results.find_all("img", class_="alt")
print(grid)
</code></pre>
<p>But I am unable to get the title & url.</p>
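<p>To check my understanding of the markup, I mocked it up with the standard library's <code>html.parser</code> (the tag names and attribute values here are guesses based on the screenshot, since the live page's HTML may differ), and the attribute extraction itself works there:</p>
<pre><code>from html.parser import HTMLParser

class ImgAttrs(HTMLParser):
    """Collect (title, srcset) pairs from <img> tags."""
    def __init__(self):
        super().__init__()
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            if "title" in d or "srcset" in d:
                self.results.append((d.get("title"), d.get("srcset")))

# Markup guessed from the screenshot; the live page may differ
sample = '<div id="__next"><img title="Airmeet Invite Email" srcset="https://example.com/a.webp 640w"></div>'
p = ImgAttrs()
p.feed(sample)
print(p.results)
</code></pre>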
<p>Any help on this ?</p>
|
<python><beautifulsoup>
|
2023-04-11 19:25:47
| 1
| 7,098
|
rp346
|
75,989,399
| 13,634,560
|
Polars, multiply columns
|
<p>This should be a straightforward Python question, but it's not working for me. I have a list of strings (the columns) and an integer, and I expect each column name repeated that many times in the output.</p>
<pre><code>multi_cols = lpd.columns
len(multi_cols)
103
multi_cols = [[k]*6 for k in lpd.columns]
len(multi_cols)
103
</code></pre>
<p>Why does this output 103?</p>
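<p>While debugging I reduced it to a standalone comparison (toy column names, repeat count of 2 instead of 6): the nested comprehension keeps one sublist per column, so its outer length never changes, while flattening needs a second loop in the comprehension:</p>
<pre><code>cols = ["a", "b", "c"]

# One sublist per column name: the outer length never changes
nested = [[k] * 2 for k in cols]             # [['a', 'a'], ['b', 'b'], ['c', 'c']] -> len 3

# Flattening needs a second loop in the comprehension
flat = [k for k in cols for _ in range(2)]   # ['a', 'a', 'b', 'b', 'c', 'c'] -> len 6
print(len(nested), len(flat))
</code></pre>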
|
<python><python-polars>
|
2023-04-11 19:15:19
| 2
| 341
|
plotmaster473
|
75,989,324
| 4,366,541
|
Reporting Endpoints in LinkedIn Marketing API
|
<p>I've been trying to hit the new "versioned" LinkedIn Marketing API in an attempt to pull any kind of analytics on my campaigns, using Python and the requests library. Documentation found <a href="https://learn.microsoft.com/en-us/linkedin/marketing/integrations/ads-reporting/ads-reporting?view=li-lms-2023-03&tabs=http%2Chttp-member-country-above0323%2Chttp-member-country-below0323%2Chttp-member-country-unversioned%2Chttp-memberregion-above0323%2Chttp-memberregion-below0323%2Chttp-memberregion-unversioned%2Chttp-above0323-membercounty%2Chttp-versioned-membercounty%2Chttp-unversion-membercounty" rel="nofollow noreferrer">here</a>.</p>
<pre><code>import requests
headers = {
'X-Restli-Protocol-Version': '2.0.0',
'Authorization': 'Bearer {TOKEN}',
'Linkedin-Version': '202303',
}
url = 'https://api.linkedin.com/rest/adAnalytics?q=statistics&pivot=CAMPAIGN&dateRange.start.day=1&dateRange.start.month=3&dateRange.start.year=2023&timeGranularity=DAILY&fields=pivotValue&accounts=urn%3Ali%3AsponsoredAccount%3A507920541'
response = requests.get(url, headers=headers)
print(response)
</code></pre>
<p>After a bunch of fighting, I've settled that my closest effort yields this result:</p>
<blockquote>
<p>{'errorDetailType': 'com.linkedin.common.error.BadRequest', 'message':
'Multiple errors occurred during query param validation. Please see
errorDetails for more information.', 'errorDetails': {'inputErrors':
[{'input': {'inputPath': {'fieldPath': 'dateRange.start.day'}},
'code': 'QUERY_PARAM_NOT_ALLOWED'}, {'input': {'inputPath':
{'fieldPath': 'dateRange.start.month'}}, 'code':
'QUERY_PARAM_NOT_ALLOWED'}, {'input': {'inputPath': {'fieldPath':
'dateRange.start.year'}}, 'code': 'QUERY_PARAM_NOT_ALLOWED'}]},
'status': 400}</p>
</blockquote>
<p>I do not know what it is looking for in the date range. If I change the Linkedin-Version to 202302, I get this error:</p>
<blockquote>
<p>{'status': 403, 'serviceErrorCode': 100, 'code': 'ACCESS_DENIED',
'message': 'Unpermitted fields present in PARAMETER: Data Processing
Exception while processing fields [/dateRange.start.day,
/dateRange.start.month, /dateRange.start.year]'}</p>
</blockquote>
<p>Suggestions are very much appreciated.</p>
<p>EDIT</p>
<p>Utilizing postman as was suggested in the comments yielded this:</p>
<blockquote>
<p>{
"status": 400,
"code": "ILLEGAL_ARGUMENT",
"message": "Invalid query parameters passed to request"
}</p>
</blockquote>
<p>This was the url, which I think is set up as the old version tbh.</p>
<p><a href="https://api.linkedin.com/rest/adAnalytics?q=statistics&dateRange.start.year=2023&dateRange.start.month=3&dateRange.start.day=1&dateRange.end.year=2023&dateRange.end.month=03&dateRange.end.day=20&timeGranularity=ALL&accounts=urn:li:sponsoredAccount:507920541&pivots%5B0%5D=CAMPAIGN&pivots%5B1%5D=CONVERSION&fields=externalWebsiteConversions,externalWebsitePostClickConversions,externalWebsitePostViewConversions,costInLocalCurrency,externalWebsiteConversions,costInLocalCurrency,dateRange,pivotValues" rel="nofollow noreferrer">https://api.linkedin.com/rest/adAnalytics?q=statistics&dateRange.start.year=2023&dateRange.start.month=3&dateRange.start.day=1&dateRange.end.year=2023&dateRange.end.month=03&dateRange.end.day=20&timeGranularity=ALL&accounts=urn:li:sponsoredAccount:507920541&pivots[0]=CAMPAIGN&pivots[1]=CONVERSION&fields=externalWebsiteConversions,externalWebsitePostClickConversions,externalWebsitePostViewConversions,costInLocalCurrency,externalWebsiteConversions,costInLocalCurrency,dateRange,pivotValues</a></p>
|
<python><linkedin-api>
|
2023-04-11 19:04:43
| 1
| 825
|
Joe Fedorowicz
|
75,989,189
| 10,466,809
|
Python dataclass type hints and input type conversions
|
<p>How should we type hint for dataclasses which can accept multiple input types but cast them all to a particular type during, e.g. <code>__post_init__</code>. Example:</p>
<pre><code>from dataclasses import dataclass
from typing import Collection, List
@dataclass
class Foo1:
input_list: Collection
def __post_init__(self):
self.input_list = list(self.input_list)
@dataclass
class Foo2:
input_list: List
def __post_init__(self):
self.input_list = list(self.input_list)
</code></pre>
<p>For class <code>Foo1</code> we will get type warnings if we try something like <code>foo1.input_list.append(0)</code> because the type checker doesn't know that <code>foo1.input_list</code> is a <code>List</code> (it only knows it is a <code>Collection</code>). On the other hand, class <code>Foo2</code> will give type warning for <code>foo2 = Foo2((1, 2))</code> because it expects a <code>List</code> input, not a <code>Tuple</code>.</p>
<p>What is the appropriate way to write a dataclass (or any class) that does mild type conversion on its attributes during <code>__post_init__</code>?</p>
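<p>One pattern I've considered (I'm not sure it's idiomatic, hence the question) is to keep the permissive input out of the field list with <code>InitVar</code>, so the stored attribute is declared with the narrow type the checker should see:</p>
<pre><code>from dataclasses import dataclass, field, InitVar
from typing import Iterable, List

@dataclass
class Foo3:
    raw: InitVar[Iterable[int]]                # accepts list, tuple, generator, ...
    input_list: List[int] = field(init=False)  # checkers see a List from here on

    def __post_init__(self, raw):
        self.input_list = list(raw)

foo3 = Foo3((1, 2))        # tuple input: no warning
foo3.input_list.append(0)  # List attribute: no warning
print(foo3.input_list)     # [1, 2, 0]
</code></pre>
<p>The downside is that the attribute name no longer matches the constructor argument name, which is why I'm asking whether there is a better-established approach.</p>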
|
<python><python-3.x><types><casting><python-dataclasses>
|
2023-04-11 18:46:25
| 3
| 1,125
|
Jagerber48
|
75,989,172
| 2,103,050
|
tf_agents reset environment using actor
|
<p>I'm trying to understand how to use <code>Actor</code> class in tf_agents. I am using DDPG (actor-critic, although this doesn't really matter per say). I also am learning off of <code>gym</code> package, although again this isn't fully important to the question.</p>
<p>I went into the class definition for <code>train.Actor</code> and under the hood the run method calls py_driver.PyDriver. It is my understanding that after it reaches a terminal state, the gym environment needs to be reset. However following the Actor and PyDriver classes, I don't see anywhere (outside the init method) where env.reset() is called. And then looking at the tutorial for <code>sac_agent.SacAgent</code>, I don't see them calling env.reset() either.</p>
<p>Can someone help me understand what is missing? Do I not need to call env.reset()? Or is there some code that is being called that I am missing?</p>
<p>Here is the method for PyDriver.run():</p>
<pre><code> def run(
self,
time_step: ts.TimeStep,
policy_state: types.NestedArray = ()
) -> Tuple[ts.TimeStep, types.NestedArray]:
num_steps = 0
num_episodes = 0
while num_steps < self._max_steps and num_episodes < self._max_episodes:
# For now we reset the policy_state for non batched envs.
if not self.env.batched and time_step.is_first() and num_episodes > 0:
policy_state = self._policy.get_initial_state(self.env.batch_size or 1)
action_step = self.policy.action(time_step, policy_state)
next_time_step = self.env.step(action_step.action)
# When using observer (for the purpose of training), only the previous
# policy_state is useful. Therefore substitube it in the PolicyStep and
# consume it w/ the observer.
action_step_with_previous_state = action_step._replace(state=policy_state)
traj = trajectory.from_transition(time_step, action_step_with_previous_state, next_time_step)
for observer in self._transition_observers:
observer((time_step, action_step_with_previous_state, next_time_step))
for observer in self.observers:
observer(traj)
for observer in self.info_observers:
observer(self.env.get_info())
if self._end_episode_on_boundary:
num_episodes += np.sum(traj.is_boundary())
else:
num_episodes += np.sum(traj.is_last())
num_steps += np.sum(~traj.is_boundary())
time_step = next_time_step
policy_state = action_step.state
return time_step, policy_state
</code></pre>
<p>As you can see, it increases the number of episodes when a trajectory hits a boundary or terminal state, and the number of steps otherwise. But then there is no call to <code>env.reset()</code>.</p>
|
<python><tensorflow><openai-gym><tf-agent>
|
2023-04-11 18:44:44
| 0
| 377
|
brian_ds
|
75,989,151
| 886,357
|
Putting a value filter on pivot table in pandas
|
<p>I am trying to duplicate some Excel functionality in pandas, and we have a huge pivot table upon which we do numerous operations that are very slow.</p>
<p>Here is what I am trying to do</p>
<pre><code>import pandas as pd
Data = [["NonLin1", "NestleBig-100", "daily", "solved", "NestleBig", "v10_10", 435, 1.4],
["NonLin1", "NestleBig-100", "daily", "solved", "NestleBig", "v10_11", 743, 1.3],
["NonLin1", "NestleBig-101", "daily", "solved", "NestleBig", "v10_10", 542, 1.5],
["NonLin1", "NestleBig-101", "daily", "solved", "NestleBig", "v10_11", 324, 1.2],
["NonLin1", "NestleBig-102", "daily", "solved", "NestleBig", "v10_10", 243, 1.8],
["NonLin1", "NestleBig-102", "daily", "solved", "NestleBig", "v10_11", 444, 1.2],
["NonLin2", "NestleSmall-100", "daily", "solved", "NestleBig", "v10_10", 655, 1.0],
["NonLin2", "NestleSmall-100", "daily", "solved", "NestleBig", "v10_11", 252, 1.3],
["NonLin2", "NestleSmall-101", "daily", "solved", "NestleBig", "v10_10", 435, 1.1],
["NonLin2", "NestleSmall-101", "daily", "solved", "NestleBig", "v10_11", 542, 1.3],
["NonLin2", "NestleSmall-102", "daily", "solved", "NestleBig", "v10_10", 645, 1.5],
["NonLin2", "NestleSmall-102", "daily", "solved", "NestleBig", "v10_11", 435, 1.1],
["NonLin3", "NestleBig-100", "daily", "solved", "NestleBig", "v10_10", 653, 1.2],
["NonLin3", "NestleBig-100", "daily", "solved", "NestleBig", "v10_11", 435, 1.4],
["NonLin3", "NestleBig-101", "daily", "unsolved", "NestleBig", "v10_10", 875, 1.4],
["NonLin3", "NestleBig-101", "daily", "solved", "NestleBig", "v10_11", 214, 1.5],
["NonLin3", "NestleBig-102", "daily", "solved", "NestleBig", "v10_10", 890, 1.2],
["NonLin3", "NestleBig-102", "daily", "unsolved", "NestleBig", "v10_11", 432, 1.5]]
df = pd.DataFrame(Data, columns = ["ProblemClass", "inputID", "profile", "Status", "TestID","runID", "NofIters", "time"])
pivottab = pd.pivot_table(data=df, index=["ProblemClass", "inputID", "Status", "TestID","runID"], values=['time'], aggfunc={'count'})
pivottab.columns = list("#") # Can we rename this column while creating the pivot table itself?
</code></pre>
<p>The pivot table itself is fine and looks like this</p>
<p><a href="https://i.sstatic.net/s7Khj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s7Khj.png" alt="pivot table from pandas dataframe" /></a></p>
<p>But I want to highlight the rows where v10_11 and v10_10 produce different results for Status, i.e., filter the pivot table further to only those rows where the statuses differ. In the Excel pivot table, we can put a value filter on Status and choose a value filter of (#) != 2, but I am wondering how to filter this pivot table further. I would like to do operations on this pivot table as it will be easier, and I would like to exclude these rows from the pivot table for further analysis. But any other options are welcome.</p>
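<p>To pin down exactly the filter I mean, here it is in plain Python over toy rows. I'm hoping there is a pandas equivalent (perhaps a groupby with a check that Status takes more than one value per inputID, though I haven't verified the exact call):</p>
<pre><code>from collections import defaultdict

# Toy rows standing in for the flattened pivot input
rows = [
    {"inputID": "NestleBig-100", "runID": "v10_10", "Status": "solved"},
    {"inputID": "NestleBig-100", "runID": "v10_11", "Status": "solved"},
    {"inputID": "NestleBig-101", "runID": "v10_10", "Status": "unsolved"},
    {"inputID": "NestleBig-101", "runID": "v10_11", "Status": "solved"},
]

statuses = defaultdict(set)
for r in rows:
    statuses[r["inputID"]].add(r["Status"])

# Keep only the inputIDs whose runs disagree on Status
mismatched = sorted(k for k, v in statuses.items() if len(v) > 1)
print(mismatched)
</code></pre>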
|
<python><pandas>
|
2023-04-11 18:43:14
| 1
| 3,593
|
Morpheus
|
75,989,123
| 5,342,700
|
Unable to convert scraped list of dictionaries to a Pandas DataFrame
|
<p>I am trying to scrape tables from the following website:</p>
<p><a href="https://www.rotowire.com/betting/mlb/player-props.php" rel="nofollow noreferrer">https://www.rotowire.com/betting/mlb/player-props.php</a></p>
<p>Data for each table is within a script on the site starting with <code>data: [{ ... }]</code>. This can be pulled using a combination of BeautifulSoup and regex. I cannot seem to convert this data into a Pandas DataFrame; it only reads it in as a single row. The scraped data looks like a list of dictionaries:</p>
<pre><code>[{"gameID":"2513620","playerID":"13902","firstName":"Mark"},
{"gameID":"2512064","playerID":"12450","firstName":"Mike"},
{"gameID":"2513053","playerID":"14261","firstName":"Will"}]
</code></pre>
<p>This should work with <code>pd.DataFrame(df)</code>, but it does not seem to read correctly when scraped from the site.</p>
<p>I have tried the following:</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
import requests
import re
import json
url = 'https://www.rotowire.com/betting/mlb/player-props.php'
page = requests.get(url, verify=False)
soup = BeautifulSoup(page.text)
# Read first table
script = str(soup.findAll('script')[4])
data = re.findall(r'data: \[(.*?)\]', script)
df = pd.DataFrame(data)
</code></pre>
<pre><code> 0
0 {"gameID":"2513620","playerID":"13902","firstN...
</code></pre>
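<p>For comparison, I ran the parse on a static snippet (shaped like what I think the inline script contains) and it works when I keep the brackets inside the capture and run <code>json.loads</code> on the match, so <code>re.findall</code> returning a raw string seems to be the problem:</p>
<pre><code>import json
import re

# Static snippet shaped like the page's inline script (my guess at its structure)
script = 'chartData = { data: [{"gameID":"1","firstName":"Mark"},{"gameID":"2","firstName":"Mike"}] };'

# Keep the brackets inside the capture so the match is valid JSON
match = re.search(r'data: (\[.*?\])', script)
records = json.loads(match.group(1))
print(records)
</code></pre>
<p>Passing <code>records</code> (a real list of dicts) to <code>pd.DataFrame</code> then gives one row per dict, which is what I expected from the live page.</p>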
|
<python><pandas><dictionary><web-scraping><beautifulsoup>
|
2023-04-11 18:38:52
| 1
| 13,059
|
Stu Sztukowski
|
75,989,104
| 542,270
|
How to install local package with transitive dependencies?
|
<p>I have two Python projects under the same root; the output of <code>tree</code> looks as follows:</p>
<pre><code>libs
└── lol-pandas
├── pyproject.toml
├── requirements-dev.txt
├── requirements.txt
├── src
│ └── lol
│ ├── __init__.py
│ └── pandas
│ ├── __init__.py
└── tests
└── __init__.py
</code></pre>
<p>and:</p>
<pre><code>batch
└── lol_integration
├── pyproject.toml
├── requirements-dev.txt
├── requirements.txt
└── src
└── lol_integration
└── __init__.py
</code></pre>
<p>In <code>lol_integration/requirements.txt</code> I have:</p>
<pre><code>../../libs/lol-pandas/
</code></pre>
<p>When I run <code>pip install -r requirements.txt</code> in <code>lol_integration</code>, I get a dependency on <code>lol_pandas</code> and it works, but can I also have the dependencies that <code>lol-pandas</code> has, namely the ones from <code>lol-pandas/requirements.txt</code>, installed too? If so, how?</p>
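<p>I suspect the answer involves declaring the dependencies in lol-pandas's own <code>pyproject.toml</code>, since pip only resolves transitive dependencies that the installed project declares in its metadata. I'm guessing at the contents, something roughly like:</p>
<pre><code># libs/lol-pandas/pyproject.toml (illustrative; names and pins are placeholders)
[project]
name = "lol-pandas"
version = "0.1.0"
dependencies = [
    "pandas>=1.5",
]
</code></pre>
<p>Is that the intended mechanism, or is there a way to make pip read the nested <code>requirements.txt</code> directly?</p>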
|
<python><pip>
|
2023-04-11 18:36:25
| 0
| 85,464
|
Opal
|
75,988,992
| 14,414,944
|
How can I inspect the internal state of an asyncio event loop?
|
<p>I have a Python program that uses <code>asyncio</code>. When concurrency is high, a particular part of the program hangs. I would like to inspect the state of the <code>asyncio</code> event loop myself, to try and understand the problem, which I have also detailed below. What can I do?</p>
<p>The part of the program that hangs looks something like the below...</p>
<pre class="lang-py prettyprint-override"><code>async def poll_buffer(self):
while True:
try:
val = self.buffer.get(block=False)
# hangs here
await asyncio.gather(*[
cb(val) for cb in self.callbacks
])
except Empty:
await asyncio.sleep(self.latency)
</code></pre>
<p>The odd part is that it's not as if these callbacks are hanging on a particularly expensive bit of compute; rather, it's as if they hang before starting at all. That is...</p>
<pre class="lang-py prettyprint-override"><code>async def my_callback(val):
print("HELLO") # we don't even get here
await do_expensive_thing() # let alone here
</code></pre>
<p>As I said, this has been showing up when concurrency is high. I've checked for deadlocks, going so far as to remove all <code>asyncio.Lock</code>. Nothing has changed. It almost feels to me like a scheduling problem, so I would like to take a closer look at what asyncio is actually doing under the hood.</p>
<p>Other async parts of the program will generally continue normally, though in my fiddling, I've found ways to make them hang too.</p>
<p><strong>UPDATE</strong>: Checking the event loop policy, <code>asyncio.all_tasks()</code>, and the state of the event loop tasks was enough to confirm that the problem did not have to do with <code>asyncio's</code> event loop. Instead, for a reason that's still not entirely clear to me, an error in calling the function was being suppressed.</p>
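<p>For reference, here is the minimal self-contained snippet I used to take such a snapshot. The worker here is a stand-in for a callback that never seems to start; the task names and sleep durations are illustrative:</p>
<pre><code>import asyncio


async def stuck_worker():
    # Stand-in for a callback that appears to hang
    await asyncio.sleep(3600)


async def main():
    task = asyncio.create_task(stuck_worker(), name="stuck_worker")
    await asyncio.sleep(0)  # give the new task one scheduling turn

    # Snapshot every live task on the running loop
    report = []
    for t in asyncio.all_tasks():
        frames = t.get_stack()
        where = frames[-1].f_code.co_name if frames else "<no frame>"
        report.append(f"{t.get_name()} at {where}")

    task.cancel()
    return sorted(report)


report = asyncio.run(main())
for line in report:
    print(line)
</code></pre>
<p>Seeing which coroutine each task is suspended in was enough to rule out the event loop itself.</p>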
|
<python><python-asyncio>
|
2023-04-11 18:23:51
| 1
| 1,011
|
lmonninger
|
75,988,588
| 10,994,166
|
Intersect a list with column pyspark
|
<p>I have a pyspark df like this:</p>
<pre><code>+--------------+--------------------+
| id| recs|
+--------------+--------------------+
| 420281136|[531698003, 81262...|
| 801646419|[685057033, 11542...|
| 920475166|[344077868, 99389...|
| 577242054|[242471215, 99876...|
| 858082577|[910361558, 75957...|
+--------------+--------------------+
[('id', 'string'), ('recs', 'array<string>')]
</code></pre>
<p>Now I have a list with 4k elements:</p>
<pre><code>a:
['100075010',
'100755706',
'1008039072',
'1010520008',
'101081875',
'101418337',
'101496347',
'10153658',
'1017744620',
'1021412485'...]
</code></pre>
<p>Now I want to create another column with the intersection of list <code>a</code> and the <code>recs</code> column.</p>
<p>Here's what I tried:</p>
<pre><code>def column_array_intersect(col_name):
return f.udf(lambda arr: f.array_intersect(col_name, arr), ArrayType(StringType()))
df = df.withColumn('intersect', column_array_intersect("recs")(f.array(a)))
</code></pre>
<p>Here's the error I'm getting:</p>
<pre><code>Py4JJavaError: An error occurred while calling o212.withColumn.
: org.apache.spark.sql.AnalysisException: cannot resolve '`100075010`' given input columns: [anchor_item_id, recs];;
'Project [anchor_item_id#0, recs#156, <lambda>(array('100075010, '100755706, '1008039072, '1010520008, '101081875, '101418337, '101496347, '10153658, '1017744620, '1021412485, '1021845009, '102191240, '10239093, '102617377, '10265400, '10293250, '10295721, '102989529, '10309597, '10311990, '10312907, '10314212, '10314212, '10321251, ... 508253 more fields)) AS intersect#174]
+- Project [anchor_item_id#0, recs#156]
+- Project [anchor_item_id#0, model#1, rec_item_info_list#2, reco_item_category_set#3, category#4, item_name#5, brand#6, primary_shelf_value#7, price#8, num_appr_reviews#9, avg_overall_rating#10, prod_type#11, rec_item_info_list#2.item_id AS recs#156]
+- Relation[id#0,model#1,rec_item_info_list#2,reco_item_category_set#3,category#4,item_name#5,brand#6,primary_shelf_value#7,price#8,num_appr_reviews#9,avg_overall_rating#10,prod_type#11] parquet
</code></pre>
<p>Here I'm seeing the column which I have already removed from the df with a select statement.</p>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-04-11 17:24:48
| 1
| 923
|
Chris_007
|
75,988,574
| 2,930,793
|
download nltk data in aws lambda from python code
|
<p>I am trying to download NLTK data in AWS Lambda from Python code, but it says:</p>
<pre><code>{
"errorMessage": "[Errno 30] Read-only file system: 'layers'",
"errorType": "OSError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/opt/python/lib/python3.9/site-packages/datadog_lambda/handler.py\", line 30, in <module>\n handler_module = import_module(modified_mod_name)\n",
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 972, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/var/task/entrypoints/__init__.py\", line 14, in <module>\n from utils.container import state_machine\n",
" File \"/var/task/utils/container.py\", line 10, in <module>\n from service_layer.models.nltk_downloader import is_nltk_data_download\n",
" File \"/var/task/service_layer/models/nltk_downloader.py\", line 12, in <module>\n nltk.download(\"punkt\", download_dir=environ[\"LAMBDA_NLTK_DATA\"])\n",
" File \"/var/task/nltk/downloader.py\", line 777, in download\n for msg in self.incr_download(info_or_id, download_dir, force):\n",
" File \"/var/task/nltk/downloader.py\", line 642, in incr_download\n yield from self._download_package(info, download_dir, force)\n",
" File \"/var/task/nltk/downloader.py\", line 699, in _download_package\n os.makedirs(download_dir)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 215, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 225, in makedirs\n mkdir(name, mode)\n"
]
}
</code></pre>
<p>Here is my code</p>
<pre><code> nltk.download("punkt", download_dir=download_dir)
nltk.download("wordnet", download_dir=download_dir)
nltk.download("omw-1.4", download_dir=download_dir)
</code></pre>
<p>I tried different <code>download_dir</code> options, but nothing worked. For example:</p>
<pre><code>download_dir = /layers/nltk_data
download_dir = nltk_data
etc
</code></pre>
<p>Any idea?</p>
|
<python><amazon-web-services><aws-lambda><nltk>
|
2023-04-11 17:22:55
| 2
| 903
|
Sazzad
|
75,988,511
| 9,390,633
|
create a date range if a column value matches one
|
<p>I am using an answer found at <a href="https://stackoverflow.com/questions/74376972/iterate-over-select-columns-and-check-if-a-specfic-value-is-in-these-select-colu/74384246#74384246">iterate over select columns and check if a specfic value is in these select columns and use that column name that has that value to create a new table</a></p>
<p>We can use PySpark native functions to create an array of the column names that have the value <code>1</code>. The array can then be used to get the <code>min</code> and <code>max</code> of years, but I want to create a new row whenever a 1 comes after a 0.</p>
<p>here's an example input table</p>
<pre><code>
# +---+-----+---+-----+-----+-----+-----+-----+-----+
# | a| b| id|m2000|m2001|m2002|m2003|m2004|m2005|
# +---+-----+---+-----+-----+-----+-----+-----+-----+
# | a|world| 1| 0| 1| 1| 0| 0| 1|
# | b|world| 2| 0| 1| 1| 1| 1| 1|
# | c|world| 3| 1| 1| 0| 0| 1| 1|
# +---+-----+---+-----+-----+-----+-----+-----+-----+
</code></pre>
<p>I want the final table to be like:</p>
<pre><code># +---+-----+---+--------+--------+
# | a| b| id|startdate|enddate|
# +---+-----+---+--------+---------
# | a|world| 1| 2001| 2002|
# | a|world| 1| 2005| 2005|
# | b|world| 2| 2001| 2005|
# | c|world| 3| 2000| 2001|
# | c|world| 3| 2004| 2005|
# +---+-----+---+--------+--------+
</code></pre>
<p>Here is my attempt so far:</p>
<pre><code>data_ls = [
("a", "world", "1", 0, 0, 1,0,0,1),
("b", "world", "2", 0, 1, 0,1,0,1),
("c", "world", "3", 0, 0, 0,0,0,0)
]
data_sdf = spark.sparkContext.parallelize(data_ls). \
toDF(['a', 'b', 'id', 'm2000', 'm2001', 'm2002', 'm2003', 'm2004', 'm2005'])
yearcols = [k for k in data_sdf.columns if k.startswith('m20')]
data_sdf. \
withColumn('yearcol_structs',
func.array(*[func.struct(func.lit(int(c[-4:])).alias('year'), func.col(c).alias('value'))
for c in yearcols]
)
). \
withColumn('yearcol_1s',
func.expr('transform(filter(yearcol_structs, x -> x.value = 1), f -> f.year)')
). \
filter(func.size('yearcol_1s') >= 1). \
withColumn('year_start', func.concat(func.lit('10/10/'), func.array_min('yearcol_1s'))). \
withColumn('year_end', func.concat(func.lit('10/10/'), func.array_max('yearcol_1s'))). \
show(truncate=False)
</code></pre>
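<p>To be explicit about the rule I want, here it is in plain Python: every maximal run of consecutive 1s becomes its own start/end row. I can state it with <code>itertools.groupby</code>, but I don't know how to express it as a Spark aggregate:</p>
<pre><code>from itertools import groupby

def flag_runs(years, flags):
    """Return (start, end) for every maximal run of consecutive 1s."""
    runs = []
    for is_one, grp in groupby(zip(years, flags), key=lambda pair: pair[1] == 1):
        grp = list(grp)
        if is_one:
            runs.append((grp[0][0], grp[-1][0]))
    return runs

# id 1 in the example input: m2000..m2005 = 0,1,1,0,0,1
print(flag_runs(range(2000, 2006), [0, 1, 1, 0, 0, 1]))  # [(2001, 2002), (2005, 2005)]
</code></pre>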
|
<python><pandas><dataframe><apache-spark><pyspark>
|
2023-04-11 17:12:32
| 1
| 363
|
lunbox
|
75,988,414
| 5,404,647
|
Extremely slow scraping with scrapy
|
<p>I have written a Python script to scrape data from IMDb using the Scrapy library. The script is working fine but it is very slow and seems to be getting stuck. I have added a DOWNLOAD_DELAY of 1 second between requests but it doesn't seem to help. Here is the script:</p>
<pre><code>import scrapy
import json
class MoviesSpider(scrapy.Spider):
name = 'movies'
allowed_domains = ['www.imdb.com']
start_urls = ['https://www.imdb.com/title/tt0096463/fullcredits/']
custom_settings = {
        'DOWNLOAD_DELAY': 1 # add a delay of 1 second between requests
}
def parse(self, response):
# Get movie information
movie_year = response.css("img.poster::attr(alt)").extract_first().split("(")[1].split(")")[0]
movie_name = response.css('meta[property="og:title"]::attr(content)').extract_first().split("(")[0]
movie_id = response.url.split("/")[4]
# Iterate over actors
for i, actor in enumerate(response.css("table.cast_list tr")):
actor_id = actor.css("a::attr(href)").extract_first()
if actor_id:
actor_id = actor_id.split("/")[-2]
actor_name = actor.css("img::attr(title)").extract_first()
# Get role name
role_selector = f"table.cast_list tr:nth-child({i+1}) td.character a"
actor_role = response.css(role_selector + "::text").extract_first()
actor_role = actor_role.strip() if actor_role else None
# Build movie data
movie_data = {
"movie_id": movie_id,
"movie_name": movie_name,
"movie_year": movie_year,
"actor_id": actor_id,
"actor_name": actor_name,
"role_name": actor_role
}
# Follow actor page
next_page = f"https://www.imdb.com/name/{actor_id}"
yield response.follow(next_page, callback=self.parse_actor_bio,
meta={'movie_data': movie_data})
def parse_actor_bio(self, response):
response.css(".ipc-metadata-list-item__list-content ::text").extract()
movie_data = response.meta['movie_data']
date_place_info = response.css('ul li:contains("Born") ::text').extract()[1:]
born_date = "".join(date_place_info[0:3])
born_place = "".join(date_place_info[3:])
# Build result object
result = {
"movie_id": movie_data['movie_id'],
"movie_name": movie_data['movie_name'],
"movie_year": movie_data['movie_year'],
"actor_id": movie_data['actor_id'],
"actor_name": movie_data['actor_name'],
"role_name": movie_data['role_name'],
"born_date": born_date,
"born_place": born_place
}
yield json.loads(json.dumps(result))
movie_links = [x.split("/")[2] for x in response.css('a[href^="/title/"]::attr(href)').extract()]
movie_links = list(set(movie_links))
for movie_link in movie_links:
yield response.follow(f"https://www.imdb.com/title/{movie_link}/fullcredits/", callback=self.parse)
</code></pre>
<p>However, it is extremely slow.
Some logs:</p>
<pre><code>2023-04-11 18:54:23 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.imdb.com/name/nm14444245/>
{'movie_id': 'tt5640060', 'movie_name': 'Chicago Justice ', 'movie_year': 'TV Series 2017', 'actor_id': 'nm14444245', 'actor_name': 'Matt Abbott', 'role_name': 'Juror', 'born_date': '', 'born_place': ''}
2023-04-11 18:54:25 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.imdb.com/name/nm0811523/>
{'movie_id': 'tt0102975', 'movie_name': 'Star Trek VI: The Undiscovered Country ', 'movie_year': '1991', 'actor_id': 'nm0811523', 'actor_name': 'Michael Snyder', 'role_name': 'Crewman Dax', 'born_date': '', 'born_place': ''}
2023-04-11 18:54:39 [scrapy.extensions.logstats] INFO: Crawled 41 pages (at 10 pages/min), scraped 22 items (at 6 items/min)
2023-04-11 18:55:39 [scrapy.extensions.logstats] INFO: Crawled 41 pages (at 0 pages/min), scraped 22 items (at 0 items/min)
2023-04-11 18:56:39 [scrapy.extensions.logstats] INFO: Crawled 41 pages (at 0 pages/min), scraped 22 items (at 0 items/min)
</code></pre>
<p>Is there any way to improve it so it does not get stuck?</p>
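<p>Not part of the original question — a hedged sketch of one common tuning direction: the stall at 0 pages/min together with the unbounded recursive follow of every movie link suggests letting AutoThrottle adapt the delay instead of a fixed 1 s, and bounding the crawl depth. All keys below are standard Scrapy setting names; the specific values are guesses:</p>

```python
# Hypothetical replacement for custom_settings in the spider above.
# AutoThrottle adapts the delay to server responsiveness, and
# DEPTH_LIMIT bounds the recursive movie -> actor -> movie walk,
# which otherwise grows without limit.
custom_settings = {
    "AUTOTHROTTLE_ENABLED": True,
    "AUTOTHROTTLE_START_DELAY": 1,
    "AUTOTHROTTLE_MAX_DELAY": 10,
    "CONCURRENT_REQUESTS": 16,
    "DEPTH_LIMIT": 3,
    "RETRY_TIMES": 2,
}
```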
|
<python><scrapy>
|
2023-04-11 16:58:53
| 0
| 622
|
Norhther
|
75,988,408
| 12,300,981
|
How to get errors from solved basin-hopping results (using Powell method for local)
|
<p>I traditionally like to use BFGS or L-BFGS because they report the inverse Hessian, which I use for error estimates. However, I've noticed that other solvers, such as Nelder-Mead and Powell, do not report an inverse Hessian. The same is true of basin hopping.</p>
<p>In a scenario where you are using Basin Hopping with Powell as the local minimizer, how would one go about trying to find the error of the solution?</p>
<p>Say you had an input such as this</p>
<pre><code>basinhopping(fun,minimizer_kwargs={"args":(input1,input2),"method" : 'Powell',"bounds":(0,np.inf)}, x0=1.0)
</code></pre>
<p>With an output such as this</p>
<pre><code> message: ['requested number of basinhopping iterations completed successfully']
success: True
fun: 2.9445950465323385
x: [ 9.349e+03]
nit: 100
minimization_failures: 0
nfev: 10141
lowest_optimization_result: message: Optimization terminated successfully.
success: True
status: 0
fun: 2.9445950465323385
x: [ 9.349e+03]
nit: 5
direc: [[ 1.805e+03]
[ 2.962e+03]]
nfev: 175
</code></pre>
<p>So the solution is 9.349e3 with a chi2 of 2.944, but what is the error of 9.349e3? i.e. 9.349+/-?</p>
<p><strong>Edit:</strong></p>
<p>So attempting to use your solution, I think I have it setup properly?</p>
<pre><code>from scipy.optimize import basinhopping
from statsmodels.tools.numdiff import approx_hess
from numpy.linalg import inv
import numpy as np

solution = basinhopping(fun, minimizer_kwargs={"args": (input1, input2), "method": 'Powell', "bounds": (0, np.inf)}, x0=1.0)
hessian = approx_hess(solution.x, fun, args=(input1, input2))
print(np.diag(inv(hessian)))  # variances of the fitted parameters
</code></pre>
<p>So just wanted to confirm this is the proper method/setup for error determination. I.E. You calculate the hessian with finite differences, then take inverse diagonal for the errors for your values. Again there are no examples, so while this works, I don't know if I have the code written properly/setup correctly.</p>
<p>The only other question I have however is, is this not for determining the differential? I.E. For error propagation you also need a covariance matrix?</p>
<pre><code>[df/dx df/dy][covariance matrix][df/dx df/dy]^T
</code></pre>
<p>So how does one calculate the variances for this? Or is the inverse hessian the variance?</p>
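<p>Not from the original post — a minimal numeric sketch of the relationship being asked about: at the minimum of a chi-square (or negative log-likelihood) objective, the inverse Hessian approximates the covariance matrix, so its diagonal gives the parameter variances directly. The objective and numbers below are hypothetical:</p>

```python
import numpy as np

def chi2(x):
    # hypothetical 1-parameter objective with known curvature 1
    return (x - 3.0) ** 2 / 2.0

def hessian_1d(f, x, h=1e-5):
    # central finite difference for d2f/dx2
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

x_hat = 3.0                          # the minimizer
var = 1.0 / hessian_1d(chi2, x_hat)  # inverse Hessian ~ variance
err = np.sqrt(var)                   # 1-sigma error on x_hat
```

<p>For a derived quantity g of the fitted parameters you would then propagate with J·Cov·Jᵀ, where Cov is exactly this inverse Hessian — so the inverse Hessian plays the role of the covariance matrix in that formula.</p>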
|
<python><scipy><scipy-optimize>
|
2023-04-11 16:57:46
| 0
| 623
|
samman
|
75,988,341
| 1,916,588
|
gc.get_referrers returns a list of dictionaries
|
<p>As far as I understand, <code>gc.get_referrers(my_obj)</code> should return a list of the objects that refer to <code>my_obj</code>. However, I'm currently seeing this behaviour:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import gc
my_obj = []
ref_1 = my_obj
ref_2 = my_obj
sys.getrefcount(my_obj) # Returns 4, as expected
gc.get_referrers(my_obj)
</code></pre>
<p>This last command returns the following:</p>
<pre class="lang-py prettyprint-override"><code>[
{
'__name__': '__main__',
'__doc__': None,
'__package__': None,
'__loader__': <class '_frozen_importlib.BuiltinImporter'>,
'__spec__': None,
'__annotations__': {},
'__builtins__': <module 'builtins' (built-in)>,
'sys': <module 'sys' (built-in)>,
'gc': <module 'gc' (built-in)>,
'my_obj': [],
'ref_1': [],
'ref_2': []
}
]
</code></pre>
<p>I was expecting to receive a list of 4 objects, but <code>gc.get_referrers(my_obj)</code> is returning a list that contains only one dictionary instead.</p>
<p>What does this dictionary represent? Where is it documented? And why is <code>gc.get_referrers(my_obj)</code> returning it instead of the 4 objects I was expecting?</p>
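<p>Not from the original post — a small stdlib sketch that makes the behaviour concrete: <code>my_obj</code>, <code>ref_1</code> and <code>ref_2</code> are three entries in one namespace dictionary, so that single dict is the only referring <em>object</em>; only containers show up as separate referrers:</p>

```python
import gc

my_obj = []
container_a = [my_obj]          # a list object referring to my_obj
container_b = {"key": my_obj}   # a dict object referring to my_obj
ref_plain = my_obj              # just another key in this namespace dict

referrers = gc.get_referrers(my_obj)
# container_a and container_b appear as distinct referrers; ref_plain
# does not -- it only adds an entry to the (single) namespace dict.
has_a = any(r is container_a for r in referrers)
has_b = any(r is container_b for r in referrers)
```

<p>That is also why <code>sys.getrefcount</code> and <code>gc.get_referrers</code> disagree: the former counts name bindings (plus its own argument), the latter lists referring objects.</p>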
|
<python><garbage-collection>
|
2023-04-11 16:48:41
| 2
| 12,676
|
Kurt Bourbaki
|
75,988,280
| 13,634,560
|
Select polars columns by index
|
<p>I have a polars dataframe of species with 89 date columns and 23 unique species. The goal is to aggregate a range of columns within a groupby. iloc would be the way to do this in pandas, but the select option doesn't seem to work the way I want it to.</p>
<p>In pandas:</p>
<pre><code>gb = df.groupby(["Common_name"]).agg(dict(zip(df.iloc[:, 32:103].columns, ["mean"] * len(df.iloc[:, 32:103]))))
</code></pre>
<p>is there a way to select column indices in polars?</p>
|
<python><python-polars>
|
2023-04-11 16:40:28
| 1
| 341
|
plotmaster473
|
75,988,277
| 2,665,896
|
Is there an async equivalent of unittest's setUpClass in Python 3.10?
|
<p>I have been using <code>unittest.IsolatedAsyncioTestCase</code> to test my async methods. I have been making use of <code>setUpClass</code> and <code>asyncSetUp</code> to create a fixture, and <code>asyncTearDown</code> to cleanup. This is all working merrily so far :-)</p>
<p>But now I have a new requirement which is to asynchronously create some fixtures once per test class and use it by the test methods throughout.</p>
<p>I am aware <code>setUpClass</code> is run once per test class and <code>setUp</code> is run once per test method. <code>asyncSetUp</code> is the async equivalent of <code>setUp</code>. But I can't seem to find an async equivalent of <code>setUpClass</code>.</p>
<p>So, what is the best way to asynchronously create and cleanup fixtures once per test?</p>
<p>I tried checking the official unittest doc at <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.setUpClass" rel="nofollow noreferrer">https://docs.python.org/3/library/unittest.html#unittest.TestCase.setUpClass</a>, but it only documents about the <code>asyncSetUp</code>.</p>
<p>I am on Python 3.10 and using pytest.</p>
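<p>Not from the original post — as of 3.10 the stdlib has no <code>asyncSetUpClass</code>, so one workaround is to drive the coroutine yourself from <code>setUpClass</code> with <code>asyncio.run()</code>. A sketch with hypothetical names; note the fixture is created on a throwaway event loop, which is fine for plain data but not for loop-bound resources such as open connections:</p>

```python
import asyncio
import unittest

class MyAsyncTests(unittest.IsolatedAsyncioTestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once per class; drive the async fixture factory manually.
        cls.fixture = asyncio.run(cls._create_fixture())

    @staticmethod
    async def _create_fixture():
        await asyncio.sleep(0)       # stand-in for real async setup
        return {"ready": True}

    async def test_fixture_available(self):
        self.assertTrue(self.fixture["ready"])
```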
|
<python><python-3.x><pytest><python-unittest><pytest-asyncio>
|
2023-04-11 16:40:09
| 2
| 345
|
vxxxi
|
75,988,174
| 143,960
|
Python: OSError with Augmentor
|
<pre class="lang-none prettyprint-override"><code>2023-04-11 16:12:33,568 ERROR [Errno 5] Input/output error
2023-04-11 16:12:33,570 ERROR Traceback (most recent call last):
File "/home/me/bot/modules/verification.py", line 342, in onMemberJoin
p.process()
File "/home/me/.local/lib/python3.8/site-packages/Augmentor/Pipeline.py", line 391, in process
self.sample(0, multi_threaded=True)
File "/home/me/.local/lib/python3.8/site-packages/Augmentor/Pipeline.py", line 362, in sample
with tqdm(total=len(augmentor_images), desc="Executing Pipeline", unit=" Samples") as progress_bar:
File "/home/me/.local/lib/python3.8/site-packages/tqdm/std.py", line 1093, in __init__
self.sp = self.status_printer(self.fp)
File "/home/me/.local/lib/python3.8/site-packages/tqdm/std.py", line 336, in status_printer
getattr(sys.stderr, 'flush', lambda: None)()
OSError: [Errno 5] Input/output error
2023-04-11 16:12:33,570 ERROR <class 'OSError'>
</code></pre>
<p>I am presented with the above statement when doing the following:</p>
<pre><code>p = Augmentor.Pipeline(folderPath)
p.random_distortion(probability=1, grid_width=4, grid_height=4, magnitude=14)
p.process()
</code></pre>
<p>(Error on p.process line). folderPath is a valid, writeable folder (the image existing in there was generated successfully). The folder has valid permissions.</p>
<p>It is running on a cloud server that I'll ssh into on occasion as a discord verification bot. It was previously running on another cloud server with no issues for years but I moved everything over to a new server and now am presented with this.</p>
<ul>
<li>Old OS: CentOS v8</li>
<li>New OS: Rocky v9</li>
<li>Python v3.8.2 (same on both)</li>
<li>Augmentor v0.2.12 (same on both)</li>
<li>discord.py v1.5.0 (same on both)</li>
<li>psutil v5.6.7 (same on both)</li>
</ul>
|
<python><augmentor>
|
2023-04-11 16:30:37
| 0
| 1,271
|
dangerisgo
|
75,988,086
| 19,003,861
|
How to print actual number when using Aggregate(Max()) in a Django query
|
<p>I have the following query set:</p>
<pre><code> max_latitude = Model.objects.aggregate(Max('latitude'))
</code></pre>
<p>When I print it, it returns <code>{'latitude__max': 51.6639002}</code> and not <code>51.6639002</code>.</p>
<p>This is causing a problem when I want to add the latitudes together to calculate an average latitude.</p>
<p>If I do <code>print(max_latitude.latitude)</code> (field of Model object) I get the following: <code>error:'dict' object has no attribute 'latitude'</code></p>
<p>How can I extract the actual number given from the queryset?</p>
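<p>Not from the original post — <code>aggregate()</code> returns a plain dict keyed by the generated <code>&lt;field&gt;__&lt;agg&gt;</code> name (or by an explicit alias, e.g. <code>aggregate(max_lat=Max('latitude'))</code>), so the number comes out with ordinary dict indexing. A framework-free sketch of the access pattern:</p>

```python
# What Model.objects.aggregate(Max('latitude')) hands back is just a dict:
result = {'latitude__max': 51.6639002}

max_latitude = result['latitude__max']   # the bare number, usable in math
```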
|
<python><django><django-views>
|
2023-04-11 16:19:46
| 2
| 415
|
PhilM
|
75,988,055
| 4,254,538
|
Install Pytorch CPU Only with pip that works on Apple machine as well as Linux
|
<p>I'm getting install errors when I deploy a Flask app to Azure services. I've tracked the issue down to a <code>pip install torch</code> and likely being due to a CPU version.</p>
<p>Is there a way to install a CPU-only version of PyTorch that will install on both macOS and Linux?</p>
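<p>Not from the original post, and hedged — verify against the current PyTorch "get started" selector: PyTorch publishes CPU-only wheels on a dedicated index for Linux, while the default macOS wheels are already CPU-only, so the platforms need different install lines:</p>

```shell
# Linux: install from the CPU-only wheel index
pip install torch --index-url https://download.pytorch.org/whl/cpu

# macOS: the default wheels are already CPU-only
pip install torch
```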
|
<python><azure><pytorch>
|
2023-04-11 16:15:53
| 1
| 780
|
Aus_10
|
75,988,013
| 1,629,904
|
How to use django channel to create a game lobby and matchmaking
|
<p>I am using django channels to create a 2-player game like Tic Tac Toe or checkers.</p>
<p>My main issue is how I can preserve the state of the players that joined the lobby waiting to be paired. Would this be something I need to do in the connect method in the channel consumer? If not, what is the approach here?</p>
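<p>Not from the original post — a framework-agnostic sketch of the pairing logic. In channels, each <code>connect()</code> runs in its own consumer instance, so the waiting queue has to live outside the instance: a module-level object like this works within a single worker process, while a shared store (e.g. Redis) is needed across workers:</p>

```python
from collections import deque

class Lobby:
    def __init__(self):
        self.waiting = deque()   # players waiting to be paired

    def join(self, player):
        """Pair with the earliest waiting player, else enqueue."""
        if self.waiting:
            return (self.waiting.popleft(), player)  # new game pair
        self.waiting.append(player)
        return None              # caller keeps waiting

lobby = Lobby()  # module-level: shared by all consumers in this worker
```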
|
<python><django><game-development><django-channels><matchmaking>
|
2023-04-11 16:10:21
| 1
| 332
|
dianesis
|
75,987,836
| 21,420,742
|
How to merge multiple columns from different dataframes with different column names in Python
|
<p>I have two datasets: one shows employment by ID and one shows manager approvals for work. Neither has identical column names, but some columns contain similar values. I need to merge several columns from the first dataset into the second so that each terminated employee's name is matched to their ID.</p>
<p>DF1: All Employment</p>
<pre><code>ID Emp_Name Job Status Team
101 Josh A. Sales Advisor Active Sales
102 Sarah B. Sales Advisor Termed Sales
103 Michael C. Tech Support Active Tech
104 Fred D. Tech Support Termed Tech
.
.
.
823 Frank O. Financial Advisor Termed Finance
</code></pre>
<p>DF2: Focuses on the terminated IDs</p>
<pre><code>Termed_Name Manager_Name Manager_ID
Sarah B. Mary S. 156
Fred D. John D. 164
Paul M.
Frank O. Gary H. 532
</code></pre>
<p>Desired DF:</p>
<pre><code>Manager_Name Manager_ID Termed_Name ID
Mary S. 156 Sarah B. 102
John D. 164 Fred D. 104
NA NA Paul M. NA
.
.
.
Gary H. 532 Frank O. 823
</code></pre>
<p>I have tried using <code>pd.concat</code> and <code>pd.merge</code> but have no luck so far with getting the result I want.</p>
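<p>Not from the original post — a sketch with made-up frames: a left merge keyed on the two differently named name columns brings the ID across, leaving NaN where no match exists (e.g. Paul M.):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [101, 102, 104, 823],
                    "Emp_Name": ["Josh A.", "Sarah B.", "Fred D.", "Frank O."]})
df2 = pd.DataFrame({"Termed_Name": ["Sarah B.", "Fred D.", "Paul M.", "Frank O."],
                    "Manager_Name": ["Mary S.", "John D.", None, "Gary H."],
                    "Manager_ID": [156, 164, None, 532]})

# Left merge keeps every terminated employee, matched or not; the
# redundant key column from df1 is dropped afterwards.
out = (df2.merge(df1[["Emp_Name", "ID"]],
                 left_on="Termed_Name", right_on="Emp_Name",
                 how="left")
          .drop(columns="Emp_Name"))
```

<p>Note that <code>ID</code> becomes float once NaN is present; that is ordinary pandas behaviour, not a merge problem.</p>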
|
<python><python-3.x><pandas><numpy>
|
2023-04-11 15:51:15
| 1
| 473
|
Coding_Nubie
|
75,987,725
| 2,601,293
|
Using WebDAV to list files on NextCloud server results in method not supported
|
<p>I'm trying to list files using WebDAV but I'm having issues. I can create directories and put files just fine, but not list a directory or pull a file. I'm seeing the error, "Method not supported".</p>
<pre><code>from webdav3.client import Client
options = {
'webdav_hostname': "https://___________.com/remote.php/dav/files/",
'webdav_login': "user_name",
'webdav_password': "password"
}
client = Client(options)
print(client.list('/'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 67, in _wrapper
res = fn(self, *args, **kw)
File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 264, in list
response = self.execute_request(action='list', path=directory_urn.quote())
File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 228, in execute_request
raise MethodNotSupported(name=action, server=self.webdav.hostname)
webdav3.exceptions.MethodNotSupported: Method 'list' not supported for https://________.com/remote.php/dav/files
</code></pre>
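<p>Not from the original post, and hedged: a common cause of <code>MethodNotSupported</code> against Nextcloud is that the files endpoint needs the username as a path segment (<code>remote.php/dav/files/USERNAME/</code>). A sketch of the options with that appended (hostname and credentials hypothetical):</p>

```python
username = "user_name"  # hypothetical credentials
options = {
    # the Nextcloud files endpoint includes the username segment
    'webdav_hostname': f"https://example.com/remote.php/dav/files/{username}/",
    'webdav_login': username,
    'webdav_password': "password",
}
```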
|
<python><webdav><nextcloud>
|
2023-04-11 15:37:30
| 4
| 3,876
|
J'e
|
75,987,646
| 7,000,874
|
Combine rows if consecutive indexes exist
|
<p>I am trying to combine the strings in the column text_info into one row, but only where the index values are consecutive. The data I have looks very similar to the table below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>text_info</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0</td>
<td>word 1</td>
</tr>
<tr>
<td>NAN</td>
<td>NAN</td>
</tr>
<tr>
<td>3.0</td>
<td>word2</td>
</tr>
<tr>
<td>0.0</td>
<td>word3</td>
</tr>
<tr>
<td>1.0</td>
<td>word4</td>
</tr>
<tr>
<td>2.0</td>
<td>word5</td>
</tr>
<tr>
<td>4.0</td>
<td>word6</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to combine the text in rows 0,1 and 2 in one row to look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>text_info</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0</td>
<td>word 1</td>
</tr>
<tr>
<td>NAN</td>
<td>NAN</td>
</tr>
<tr>
<td>3.0</td>
<td>word2</td>
</tr>
<tr>
<td>0.0</td>
<td>word3, word4, word5</td>
</tr>
<tr>
<td>4.0</td>
<td>word6</td>
</tr>
</tbody>
</table>
</div>
<p>The data contains similar consecutive indexes.</p>
<p>I tried multiple solutions including the answer in <a href="https://stackoverflow.com/questions/68200424/combine-consecutive-rows-for-given-index-values-in-pandas-dataframe">this question</a> but it did not work. I also tried <code>multi = df.groupby('index',dropna=False)["text_info"].sum()</code> but it combined all the indexes in consecutive order.</p>
<p>Is there a way to do this?</p>
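<p>Not from the original post — a sketch of one approach: label each run of consecutive index values with a cumulative sum over "not exactly previous + 1" (NaN comparisons are False, so NaN rows break runs too), then group on those run labels instead of on the index values themselves:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "index": [0.0, np.nan, 3.0, 0.0, 1.0, 2.0, 4.0],
    "text_info": ["word 1", np.nan, "word2", "word3", "word4", "word5", "word6"],
})

# A new run starts wherever the value is not exactly previous + 1.
runs = (df["index"].diff() != 1).cumsum()

out = (df.groupby(runs)
         .agg({"index": "first",
               "text_info": lambda s: ", ".join(s.dropna()) or np.nan})
         .reset_index(drop=True))
# out reproduces the desired table: rows 0, 1, 2 of the run starting at
# index 0.0 collapse into "word3, word4, word5".
```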
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-04-11 15:28:41
| 1
| 393
|
J.Doe
|
75,987,622
| 375,432
|
Change case of all column names with Ibis
|
<p>I have an <a href="http://ibis-project.org/" rel="nofollow noreferrer">Ibis</a> table named <code>t</code>. Its column names are all lowercase. I want to change them all to uppercase. How can I do that?</p>
|
<python><ibis>
|
2023-04-11 15:26:43
| 1
| 763
|
ianmcook
|
75,987,560
| 1,997,735
|
NumPy - process masked image to get min & max distance from specified color
|
<p>I've got an image (ndarray with shape (480, 640, 3)) and an associated mask (ndarray with shape (480,640)). What I want to do is this:
For each pixel in the image whose corresponding mask value is 255:</p>
<ul>
<li>Calculate the "distance" of that pixel from a reference color = sqrt((R-r)^2+(B-b)^2+(G-g)^2))</li>
<li>Return the minimum and maximum values of that distance</li>
</ul>
<p>I could of course loop over each pixel individually, but it feels like there's a more efficient way using np.array() or something. <a href="https://stackoverflow.com/questions/72437599/how-to-turn-3d-array-into-2d-array-with-labels-based-on-the-3rd-dimension">This question</a> seems to be going in the right direction for me, but it doesn't include the mask.</p>
<p>Is there a clever/efficient way to do this?</p>
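<p>Not from the original post — a vectorized sketch with synthetic data: broadcasting subtracts the reference color from every pixel at once, and boolean indexing applies the mask, so no per-pixel loop is needed:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640, 3)).astype(float)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 100:200] = 255            # hypothetical masked region

ref = np.array([10.0, 20.0, 30.0])      # hypothetical reference color

# Broadcasting: (480, 640, 3) - (3,) gives per-pixel channel differences;
# summing over the last axis yields one distance per pixel.
dist = np.sqrt(((img - ref) ** 2).sum(axis=-1))

selected = dist[mask == 255]            # boolean indexing applies the mask
dmin, dmax = selected.min(), selected.max()
```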
|
<python><numpy><image-processing>
|
2023-04-11 15:20:52
| 1
| 3,473
|
Betty Crokker
|
75,987,430
| 10,499,953
|
All the ways to construct DataFrame() from data
|
<p>The parameters section of the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">documentation</a> for <code>DataFrame</code> (as of <code>pandas</code> 2.0.0) begins:</p>
<blockquote>
<p><strong>data : <em>ndarray (structured or homogeneous), Iterable, dict, or DataFrame</em></strong></p>
<p>Dict can contain Series, arrays, constants, dataclass or list-like objects. If data is a dict, column order follows insertion-order. If a dict contains Series which have an index defined, it is aligned by its index. This alignment also occurs if data is a Series or a DataFrame itself. Alignment is done on Series/DataFrame inputs.</p>
<p>If data is a list of dicts, column order follows insertion-order.</p>
</blockquote>
<p>The description points to valid input types (i.e., <em>ndarray, Iterable, dict, or DataFrame</em>) <strong>but does not completely describe <em>how</em> the constructor will turn the <code>data</code> into a <code>DataFrame</code></strong>. It seems like somewhat of a black box. Should I be able to predict, based on the documentation, that, say, passing a <code>list</code> containing a single <code>Series</code> and no other arguments will give a result that looks like <code>Series.to_frame().T</code> (although the dtypes may differ; see <a href="https://stackoverflow.com/a/56713499">this answer</a> and <a href="https://stackoverflow.com/a/74385065">this one</a>)?</p>
<p>The purpose of this question is to solicit answers that <strong>classify the different ways of passing data to a <code>DataFrame()</code> via <code>data</code>, according to how the constructor puts or massages the data into the <code>DataFrame</code></strong>. It is necessarily a broad question, but there should be a finite number of cases given that the constructor is, you know, implemented in code. I'm interested in this question and would be willing to dig through the source code a little to discover the answer; however, I think others with more experience may have insights to share here before I do that.</p>
<p>This is a single question about rules broadly, and I believe its answers belong together in one place. However, since it is broad, I will provide some specific sub-questions to get us started:</p>
<ul>
<li><p>For <code>iterable</code>s, what container and element combinations are valid? Without needing to try it, should I be able to predict what will happen if I pass a <code>list</code> of <code>DataFrames</code> or a <code>Series</code> of <code>Series</code>? Which axis is used when a <code>Series</code> input is "aligned by its index"? Does the treatment depend at all on what its elements are?</p>
</li>
<li><p>How do the container and element types passed via <code>data</code> affect how the <code>DataFrame</code> will be put together? Should I be able to predict how the data will be aligned along the axes of the resulting <code>DataFrame</code> based on knowledge of <code>data</code> alone? I don't know if the answer is obvious, but in either case I do not see it documented.</p>
</li>
<li><p>If I think of a <code>DataFrame</code> as "a dict-like container for <code>Series</code> objects" (as docs suggest), what are the intuitive rules governing how <code>data</code> gets interpreted (loosely) into keys and values?</p>
</li>
</ul>
<p>I'm open to suggestions for improving the question, but I do think it's a question that needs to be asked and I did not find a similar question on this site.</p>
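<p>Not an answer, but the specific sub-question about a list containing a single Series is easy to check empirically: the values, index and columns match <code>s.to_frame().T</code>, with the Series index becoming the columns and the Series name the row label, though dtypes are only guaranteed to agree in homogeneous cases like this one:</p>

```python
import pandas as pd

s = pd.Series({"a": 1, "b": 2}, name="row0")

from_list = pd.DataFrame([s])       # list containing a single Series
via_transpose = s.to_frame().T      # the comparison from the question
```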
|
<python><pandas>
|
2023-04-11 15:07:02
| 2
| 417
|
Attila the Fun
|