| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,344,199
| 1,812,732
|
How to read FastAPI UploadFile as text one line at a time
|
<p>I am making a REST API using FastAPI and Python. For example, here is an endpoint that takes an uploaded file and returns an array with the length of each line.</p>
<pre><code>router = APIRouter()

@router.post('/api/upload1')
async def post_a_file(file: UploadFile):
    result = []
    f = io.TextIOWrapper(file.file, encoding='utf-8')
    while True:
        s = f.readline()
        if not s: break
        result.append(len(s))
    return result
</code></pre>
<p>However, this fails with the following error:</p>
<pre><code>f = io.TextIOWrapper(file.file, encoding='utf-8')
AttributeError: 'SpooledTemporaryFile' object has no attribute 'readable'
</code></pre>
<p>If I change to</p>
<pre><code>f = file.file
while True:
    s = f.readline().decode('utf-8')
</code></pre>
<p>then it works, but that is <del>stupid</del>, because reading a "line" of bytes doesn't make sense.</p>
<p>What is the <strong>right</strong> way to do this?</p>
<p><strong>EDIT:</strong> As I learned (see comments) it is <em>not wrong</em> to read a "line" of bytes, because the line break characters (either 0x0A or 0x0D0A) are the same in all character sets.</p>
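<p>A minimal, framework-free illustration of why byte-oriented <code>readline</code> is safe (hedged sketch: the <code>io.BytesIO</code> object stands in for <code>file.file</code>):</p>

```python
import io

# Stand-in for the uploaded file's binary stream (file.file in FastAPI).
raw = io.BytesIO("héllo\nwörld\n".encode("utf-8"))

# Splitting on b'\n' (0x0A) is safe in UTF-8: continuation bytes always
# have the high bit set, so a newline byte can never occur mid-character.
lengths = [len(line.decode("utf-8")) for line in raw]
print(lengths)  # [6, 6] -- each length includes the trailing newline
```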
|
<python><post><fastapi>
|
2023-05-26 21:11:37
| 1
| 11,643
|
John Henckel
|
76,344,145
| 2,749,397
|
Explain the error produced using plt.legend in a 3D stacked bar plot
|
<p><a href="https://i.sstatic.net/m5OIV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m5OIV.png" alt="enter image description here" /></a></p>
<p>The figure above was produced by this code</p>
<pre><code>%matplotlib
import numpy as np
import matplotlib.pyplot as plt
x = y = np.array([1, 2])
fig = plt.figure(figsize=(5, 3))
ax1 = fig.add_subplot(111, projection='3d')
ax1.bar3d(x, y, [0,0], 0.5, 0.5, [1,1], shade=True, label='a')
ax1.bar3d(x, y, [1,1], 0.5, 0.5, [1,1], shade=True, label='b')
ax1.legend()
</code></pre>
<p>which also asks for a legend. As you can see, there is no legend; instead I got this traceback:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 12
10 ax1.bar3d(x, y, [0,0], 0.5, 0.5, [1,1], shade=True, label='a')
11 ax1.bar3d(x, y, [1,1], 0.5, 0.5, [1,1], shade=True, label='b')
---> 12 plt.legend()
File /usr/lib64/python3.11/site-packages/matplotlib/pyplot.py:2646, in legend(*args, **kwargs)
2644 @_copy_docstring_and_deprecators(Axes.legend)
2645 def legend(*args, **kwargs):
-> 2646 return gca().legend(*args, **kwargs)
File /usr/lib64/python3.11/site-packages/matplotlib/axes/_axes.py:313, in Axes.legend(self, *args, **kwargs)
311 if len(extra_args):
312 raise TypeError('legend only accepts two non-keyword arguments')
--> 313 self.legend_ = mlegend.Legend(self, handles, labels, **kwargs)
314 self.legend_._remove_method = self._remove_legend
315 return self.legend_
File /usr/lib64/python3.11/site-packages/matplotlib/_api/deprecation.py:454, in make_keyword_only.<locals>.wrapper(*args, **kwargs)
448 if len(args) > name_idx:
449 warn_deprecated(
450 since, message="Passing the %(name)s %(obj_type)s "
451 "positionally is deprecated since Matplotlib %(since)s; the "
452 "parameter will become keyword-only %(removal)s.",
453 name=name, obj_type=f"parameter of {func.__name__}()")
--> 454 return func(*args, **kwargs)
File /usr/lib64/python3.11/site-packages/matplotlib/legend.py:517, in Legend.__init__(self, parent, handles, labels, loc, numpoints, markerscale, markerfirst, scatterpoints, scatteryoffsets, prop, fontsize, labelcolor, borderpad, labelspacing, handlelength, handleheight, handletextpad, borderaxespad, columnspacing, ncols, mode, fancybox, shadow, title, title_fontsize, framealpha, edgecolor, facecolor, bbox_to_anchor, bbox_transform, frameon, handler_map, title_fontproperties, alignment, ncol)
514 self._alignment = alignment
516 # init with null renderer
--> 517 self._init_legend_box(handles, labels, markerfirst)
519 tmp = self._loc_used_default
520 self._set_loc(loc)
File /usr/lib64/python3.11/site-packages/matplotlib/legend.py:782, in Legend._init_legend_box(self, handles, labels, markerfirst)
779 text_list.append(textbox._text)
780 # Create the artist for the legend which represents the
781 # original artist/handle.
--> 782 handle_list.append(handler.legend_artist(self, orig_handle,
783 fontsize, handlebox))
784 handles_and_labels.append((handlebox, textbox))
786 columnbox = []
File /usr/lib64/python3.11/site-packages/matplotlib/legend_handler.py:119, in HandlerBase.legend_artist(self, legend, orig_handle, fontsize, handlebox)
95 """
96 Return the artist that this HandlerBase generates for the given
97 original artist/handle.
(...)
112
113 """
114 xdescent, ydescent, width, height = self.adjust_drawing_area(
115 legend, orig_handle,
116 handlebox.xdescent, handlebox.ydescent,
117 handlebox.width, handlebox.height,
118 fontsize)
--> 119 artists = self.create_artists(legend, orig_handle,
120 xdescent, ydescent, width, height,
121 fontsize, handlebox.get_transform())
123 if isinstance(artists, _Line2DHandleList):
124 artists = [artists[0]]
File /usr/lib64/python3.11/site-packages/matplotlib/legend_handler.py:806, in HandlerPolyCollection.create_artists(self, legend, orig_handle, xdescent, ydescent, width, height, fontsize, trans)
802 def create_artists(self, legend, orig_handle,
803 xdescent, ydescent, width, height, fontsize, trans):
804 p = Rectangle(xy=(-xdescent, -ydescent),
805 width=width, height=height)
--> 806 self.update_prop(p, orig_handle, legend)
807 p.set_transform(trans)
808 return [p]
File /usr/lib64/python3.11/site-packages/matplotlib/legend_handler.py:78, in HandlerBase.update_prop(self, legend_handle, orig_handle, legend)
76 def update_prop(self, legend_handle, orig_handle, legend):
---> 78 self._update_prop(legend_handle, orig_handle)
80 legend._set_artist_props(legend_handle)
81 legend_handle.set_clip_box(None)
File /usr/lib64/python3.11/site-packages/matplotlib/legend_handler.py:787, in HandlerPolyCollection._update_prop(self, legend_handle, orig_handle)
783 return None
785 # orig_handle is a PolyCollection and legend_handle is a Patch.
786 # Directly set Patch color attributes (must be RGBA tuples).
--> 787 legend_handle._facecolor = first_color(orig_handle.get_facecolor())
788 legend_handle._edgecolor = first_color(orig_handle.get_edgecolor())
789 legend_handle._original_facecolor = orig_handle._original_facecolor
File /usr/lib64/python3.11/site-packages/matplotlib/legend_handler.py:775, in HandlerPolyCollection._update_prop.<locals>.first_color(colors)
774 def first_color(colors):
--> 775 if colors.size == 0:
776 return (0, 0, 0, 0)
777 return tuple(colors[0])
AttributeError: 'tuple' object has no attribute 'size'
</code></pre>
<p>Can you help me understand what happened?</p>
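<p>One workaround, while the handler chokes on the <code>Poly3DCollection</code> returned by <code>bar3d</code>, is to pass proxy <code>Patch</code> artists to <code>legend()</code> so the handler never touches the 3D collections. A hedged sketch (the colors <code>C0</code>/<code>C1</code> are arbitrary choices, not from the original):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

x = y = np.array([1, 2])
fig = plt.figure(figsize=(5, 3))
ax1 = fig.add_subplot(111, projection='3d')
ax1.bar3d(x, y, [0, 0], 0.5, 0.5, [1, 1], shade=True)
ax1.bar3d(x, y, [1, 1], 0.5, 0.5, [1, 1], shade=True)

# Proxy patches carry plain RGBA facecolors, which the default legend
# handler can process without reaching into bar3d internals.
proxies = [Patch(facecolor='C0', label='a'), Patch(facecolor='C1', label='b')]
legend = ax1.legend(handles=proxies)
print(len(legend.get_texts()))
```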
|
<python><matplotlib><legend><matplotlib-3d><bar3d>
|
2023-05-26 20:58:19
| 1
| 25,436
|
gboffi
|
76,344,134
| 343,159
|
In Python, how to yield the results of the call to another class's function?
|
<p>I have very little experience with Python and am struggling with the best pattern here.</p>
<p>I have a generator function, and I would like this function to <code>yield</code> the return value of another function until exhausted. This is using the streaming feature of <a href="https://docs.haystack.deepset.ai/docs/prompt_node#streaming" rel="nofollow noreferrer">Haystack's PromptNode</a>, although that's really neither here nor there, just the pattern I'm after.</p>
<p>It is a little chatbot whose results I am trying to stream using gRPC. That bit is working, it's just getting it to <code>yield</code> correctly.</p>
<p>My generator looks like this:</p>
<pre><code>class ChatBot(chat_pb2_grpc.ChatBotServicer):
    def AskQuestion(self, request, context):
        query = request.query
        custom_handler = GRPCTokenStreamingHandler()
        prompt_node = PromptNode(
            "gpt-4",
            default_prompt_template=lfqa_prompt,
            api_key=api_key,
            max_length=4096,
            model_kwargs={"stream": True, "stream_handler": custom_handler}
        )
        pipe = Pipeline()
        pipe.add_node(component=retriever, name="retriever", inputs=["Query"])
        pipe.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"])
        output = pipe.run(query=query)
        # This should yield the "return token_received" from custom_handler
        # yield chat_pb2.Response(token=token, final=False)
        yield chat_pb2.Response(token="", final=True)
</code></pre>
<p>And my <code>GRPCTokenStreamingHandler</code> looks like this:</p>
<pre><code>class GRPCTokenStreamingHandler(TokenStreamingHandler):
    def __call__(self, token_received, **kwargs) -> str:
        return token_received
</code></pre>
<p>The way the <code>PromptNode</code> functions is that the <code>output</code>, where ordinarily it would print token after token to the console, is instead directed to this <code>GRPCTokenStreamingHandler</code> class.</p>
<p>So each of those <code>token_received</code> should be <code>yield</code>ed by the <code>AskQuestion</code> generator.</p>
<p>How can I do this?</p>
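<p>One way to bridge a callback-style handler to a generator is a queue plus a worker thread: the handler pushes tokens, the generator pops and yields them until a sentinel arrives. A hedged, framework-free sketch (<code>run_pipeline</code> is a hypothetical stand-in for the blocking <code>pipe.run</code> call, not a Haystack API):</p>

```python
import queue
import threading

_SENTINEL = object()

def stream_tokens(run_pipeline):
    """Yield tokens pushed by a callback while run_pipeline executes."""
    q = queue.Queue()

    def handler(token):
        # Plays the role of GRPCTokenStreamingHandler.__call__: besides
        # returning the token, push it onto the queue for the generator.
        q.put(token)
        return token

    def worker():
        run_pipeline(handler)  # blocking call that invokes handler per token
        q.put(_SENTINEL)       # signal completion

    threading.Thread(target=worker, daemon=True).start()
    while True:
        token = q.get()
        if token is _SENTINEL:
            return
        yield token

# Usage with a fake pipeline that pushes three tokens through the handler:
tokens = list(stream_tokens(lambda h: [h(t) for t in ("a", "b", "c")]))
print(tokens)  # ['a', 'b', 'c']
```

In the gRPC servicer, <code>AskQuestion</code> would then <code>yield chat_pb2.Response(token=t, final=False)</code> per token and a final empty response afterwards.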
|
<python><grpc-python><haystack>
|
2023-05-26 20:56:11
| 1
| 12,750
|
serlingpa
|
76,344,056
| 4,419,423
|
Pandas: sequence data over date based on column change
|
<p>I have time-series data spanning multiple days that I need to sequence (i.e., create a column that increments each time a value changes). I have the sequencing working without <code>groupby</code>, but I'm a little lost on how to apply the same or similar code to the grouped data.</p>
<p>My data looks like:</p>
<pre><code>index timestamp value
0 1684713605000 1
1 1684713610000 1
2 1684713611000 1
3 1684713614000 0
4 1684713615000 0
5 1684713616000 0
6 1684713619000 1
7 1684713620000 1
8 1684713621000 1
9 1684832896000 1
10 1684832897000 1
11 1684832898000 1
12 1684832901000 0
13 1684832902000 0
14 1684832903000 0
15 1684832906000 1
16 1684832907000 1
17 1684832908000 1
</code></pre>
<p>My <code>timestamp</code> column is not guaranteed to be sequential, but there is generally one record per second of the day. I need my desired <code>sequence</code> column to increment up until the end of the day, then begin counting again at 0 the next day.</p>
<p>The code I'm using to sequence is:</p>
<pre><code>subset = df[["value"]].copy()
subset["change"] = (subset["value"].shift() != subset["value"]) * 1
subset["seq"] = subset["change"].cumsum(axis = 0) - 1
df["seq"] = subset["seq"]
</code></pre>
<p>I've been able to create groups with:</p>
<pre><code>subset = df[["timestamp", "value"]].copy()
subset["date"] = pd.to_datetime(subset["timestamp"], unit="ms", origin="unix").dt.date
g = subset.groupby("date")
</code></pre>
<p>but I'm not sure how to proceed. My desired result is a sequence column that increments every time <code>value</code> changes but resets at the start of each new day:</p>
<pre><code>index timestamp value seq
0 1684713605000 1 0
1 1684713610000 1 0
2 1684713611000 1 0
3 1684713614000 0 1
4 1684713615000 0 1
5 1684713616000 0 1
6 1684713619000 1 2
7 1684713620000 1 2
8 1684713621000 1 2
9 1684832896000 1 0 <-- first record of a new day
10 1684832897000 1 0
11 1684832898000 1 0
12 1684832901000 0 1
13 1684832902000 0 1
14 1684832903000 0 1
15 1684832906000 1 2
16 1684832907000 1 2
17 1684832908000 1 2
</code></pre>
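<p>A hedged sketch of one way to finish this: move the change/cumsum logic into a per-day <code>groupby().transform()</code> (the six-row frame below is a trimmed stand-in for the real data, using timestamps from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": [1684713605000, 1684713610000, 1684713614000,
                  1684832896000, 1684832901000, 1684832906000],
    "value":     [1, 1, 0, 1, 0, 1],
})

# Group key: the calendar date of each millisecond timestamp.
date = pd.to_datetime(df["timestamp"], unit="ms").dt.date

# Within each day, count value changes; the NaN from shift() makes the
# first row of each day a "change", so subtracting 1 restarts at 0.
df["seq"] = df.groupby(date)["value"].transform(
    lambda s: (s != s.shift()).cumsum() - 1
)
print(df["seq"].tolist())  # [0, 0, 1, 0, 1, 2]
```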
|
<python><pandas><group-by><sequence>
|
2023-05-26 20:37:45
| 1
| 3,984
|
niko
|
76,343,899
| 1,018,733
|
Python doctest dictionary equality test with a strange failure (python bug?)
|
<p>This test isn't failing appropriately. What is wrong?</p>
<p>This incorrectly passes!?</p>
<pre><code>def drop_keys_conditionally(some_dict):
    """Weird bug where the dictionary equality shouldn't pass but does

    >>> d = {'check': '...lala...', 'a': 'a', 'b': 'b', 'key': 'val', 'c': 'c', 'd':'d'}
    >>> drop_keys_conditionally(d)
    {'check': '...lala...', 'key': 'val'}
    """
    invocation_to_correct = 'lala'
    keys_to_drop = [
        "a",
        # "b",
        "c",
        "d"
    ]
    if invocation_to_correct in some_dict['check']:
        for k in keys_to_drop:
            some_dict.pop(k)
    return some_dict
</code></pre>
<p>This correctly fails (see the added comma to force it to fail somehow)</p>
<pre><code>def drop_keys_conditionally(some_dict):
    """Weird bug where the dictionary equality shouldn't pass but does

    >>> d = {'check': '...lala...', 'a': 'a', 'b': 'b', 'key': 'val', 'c': 'c', 'd':'d'}
    >>> drop_keys_conditionally(d)
    {'check': '...lala...', 'key': 'val',}
    """
    invocation_to_correct = 'lala'
    keys_to_drop = [
        "a",
        # "b",
        "c",
        "d"
    ]
    if invocation_to_correct in some_dict['check']:
        for k in keys_to_drop:
            some_dict.pop(k)
    return some_dict
</code></pre>
<p>Error message:</p>
<pre><code>002 Weird bug where the dictionary equality shouldn't pass but does
003
004 >>> d = {'check': '...lala...', 'a': 'a', 'b': 'b', 'key': 'val', 'c': 'c', 'd':'d'}
005 >>> drop_keys_conditionally(d)
Expected:
{'check': '...lala...', 'key': 'val',}
Got:
{'check': '...lala...', 'b': 'b', 'key': 'val'}
</code></pre>
<p>Clearly the <code>'b': 'b'</code> is still there and should have failed the first version. Why does the first version pass? Is this a Python doctest bug?</p>
<p><code>pytest python_bug.py --doctest-modules -v</code></p>
<p>Version of python:
Python 3.11.1</p>
<h2>Further exploration</h2>
<p>Perhaps even scarier, calling sorted(the_dict.items()) and doing a doctest against that doesn't work either and also silently passes what should be a simple comparison error. Does only a certain amount of the substring have to be equal for the doctest to pass? That would be crazy. Certainly that's not what's going on, but that seems to be what's going on. I can add in a print statement and clearly see the comparison should not be equal.</p>
<p>Also, if I change the doctest to use an assert, rather than relying on string comparison, it works correctly. Wild, right? The whole point of doctests is that they are already supposed to assert on the string representation of the output. Buggy? I thought this was exactly what doctests were for, but maybe it's Python tribal knowledge that doctests are buggy and don't work correctly?</p>
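<p>A hedged sketch of one possible explanation: if doctest's <code>ELLIPSIS</code> option is active (for example, enabled through pytest configuration), the literal <code>...</code> inside the expected dict acts as a wildcard, so the "wrong" output still matches; the trailing-comma variant fails because the text after the last <code>...</code> no longer matches the end of the output:</p>

```python
import doctest

# Expected and actual output from the failing example above.
expected = "{'check': '...lala...', 'key': 'val'}\n"
got = "{'check': '...lala...', 'b': 'b', 'key': 'val'}\n"

checker = doctest.OutputChecker()
plain = checker.check_output(expected, got, 0)                 # exact compare
wild = checker.check_output(expected, got, doctest.ELLIPSIS)   # '...' wildcard
print(plain, wild)  # False True
```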
|
<python><pytest><doctest>
|
2023-05-26 19:57:31
| 1
| 4,510
|
SwimBikeRun
|
76,343,898
| 3,507,584
|
AWS EB CLI - 'eb' is not recognized as an internal or external command
|
<p>I installed the AWS CLI with Python 3.11 on my PC. Later I uninstalled Python 3.11 to use Python 3.10, but when I run <code>aws --version</code> it still shows Python 3.11.</p>
<p><a href="https://i.sstatic.net/JK9VS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JK9VS.png" alt="enter image description here" /></a></p>
<p>When I run other commands like <code>eb init</code>, it shows an error:</p>
<p><code>'eb' is not recognized as an internal or external command, operable program or batch file.</code></p>
<p>And even when I <code>pip install awsebcli</code>, it gets installed under Python 3.10. How do I fix this?</p>
|
<python><amazon-web-services><amazon-elastic-beanstalk><aws-cli>
|
2023-05-26 19:57:26
| 1
| 3,689
|
User981636
|
76,343,773
| 13,793,478
|
typeError: fromisoformat: argument must be str,
|
<p>I tried many solutions from the internet and nothing worked. Thanks in advance.</p>
<p>Error</p>
<pre><code>typeError: fromisoformat: argument must be str
</code></pre>
<p>view.py</p>
<pre><code>def notes(request):
    now = datetime.datetime.now()
    month = now.month
    year = now.year
    cal = HTMLCalendar()
    events = Note.objects.filter(date_month=month, date_year=year)
    notes = Note.objects.all()
    return render(request, 'notes/home.html', {
        'cal': cal,
        'events': events,
    })
</code></pre>
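<p>For reference, a minimal reproduction of that exact message (a hedged sketch; in a Django view it typically means a date field received <code>None</code> or an already-parsed object instead of a string):</p>

```python
import datetime

# fromisoformat only accepts str; anything else raises this TypeError.
try:
    datetime.date.fromisoformat(None)
except TypeError as exc:
    message = str(exc)
print(message)
```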
|
<python><django>
|
2023-05-26 19:32:53
| 1
| 514
|
Mt Khalifa
|
76,343,741
| 14,057,599
|
How to get a batch of binary masks from segmentation map without using `for` loop
|
<p>suppose I have a segmentation map <code>a</code> with dimension of <code>(1, 1, 6, 6)</code></p>
<pre><code>print(a)
array([[[[ 0., 0., 0., 0., 0., 0.],
[ 0., 15., 15., 16., 16., 0.],
[ 0., 15., 15., 16., 16., 0.],
[ 0., 13., 13., 9., 9., 0.],
[ 0., 13., 13., 9., 9., 0.],
[ 0., 0., 0., 0., 0., 0.]]]], dtype=float32)
</code></pre>
<p>How can I get the binary masks for each class without using a <code>for</code> loop? The binary masks should have a dimension of <code>(4, 1, 6, 6)</code>. Currently I'm doing something like the following; the reason I want it without a <code>for</code> loop is that the dimensions of <code>a</code> might change and there might be more or fewer classes. Thanks.</p>
<pre><code>a1 = np.where(a == 15, 1, 0)
a2 = np.where(a == 16, 1, 0)
a3 = np.where(a == 13, 1, 0)
a4 = np.where(a == 9, 1, 0)
b = np.concatenate((a1, a2, a3, a4), axis=0)
print(b)
array([[[[0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0],
[0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]],
[[[0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]],
[[[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0],
[0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0]]],
[[[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0]]]])
</code></pre>
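<p>A hedged sketch of a loop-free alternative: broadcast-compare the map against its class values, so both the number of classes and the spatial shape can vary:</p>

```python
import numpy as np

# Rebuild the example segmentation map from the question.
a = np.zeros((1, 1, 6, 6), dtype=np.float32)
a[0, 0, 1:3, 1:3] = 15
a[0, 0, 1:3, 3:5] = 16
a[0, 0, 3:5, 1:3] = 13
a[0, 0, 3:5, 3:5] = 9

# One comparison per class via broadcasting: (4,1,1,1) against (1,6,6)
# broadcasts to (4,1,6,6) -- no explicit loop, class count can vary.
classes = np.array([15, 16, 13, 9])   # or np.unique(a)[1:] to drop background 0
masks = (a[0] == classes[:, None, None, None]).astype(int)
print(masks.shape)  # (4, 1, 6, 6)
```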
|
<python><numpy>
|
2023-05-26 19:27:42
| 1
| 317
|
Qimin Chen
|
76,343,683
| 12,404,524
|
Running multiple processes in each iteration of loop in Python
|
<p>I have two functions <code>func_1</code> and <code>func_2</code>. They both take an array of integers as input.</p>
<p>I have a loop that creates arrays of length <code>i</code> in the <code>i</code>th iteration.</p>
<p>I have two predefined lists <code>list_1</code> and <code>list_2</code> intended to store the outputs from the functions at each iteration.</p>
<p>Thus, for each iteration I want to run the two functions in parallel, with the array created in that iteration as the input. CPython doesn't have true multithreading because of the GIL, so I'm using <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer"><code>multiprocessing</code></a> instead.</p>
<p>I want to create two separate processes for each of the functions at each iteration of the loop, and <code>close</code> the processes by the end of the iteration.</p>
<p>I have so far tried using <code>Pool</code> and <code>Process</code>. I can't get them to populate <code>list_1</code> and <code>list_2</code>, even by sending the two lists as arguments to the respective functions and appending the result to them, in the functions.</p>
<p>How do I achieve this? Explanations will be appreciated.</p>
<p>Here is what I've done that's not working:</p>
<pre class="lang-py prettyprint-override"><code>import random
import multiprocessing as mp

list_1 = []
list_2 = []

for input in range(n):
    arr_1 = [random.randint(0, 100) for _ in range(input)]
    arr_2 = list(arr_1)
    proc_one = mp.Process(target=func_1, args=(arr_1, list_1))
    proc_two = mp.Process(target=func_2, args=(arr_2, list_2))
    proc_one.start()
    proc_two.start()
    proc_one.join()
    proc_two.join()

print(list_1)
print(list_2)
</code></pre>
<p>And here are the two worker functions:</p>
<pre class="lang-py prettyprint-override"><code>def func_1(arr, outlist):
    # do something with arr
    # store the value in result
    outlist.append(result)

def func_2(arr, outlist):
    # do something different with arr
    # store the value in result
    outlist.append(result)
</code></pre>
<p>P.S. This is a simplified version of my <a href="https://github.com/amkhrjee/algorithms-notebook/blob/main/Sorting.ipynb" rel="nofollow noreferrer">actual code</a>. I must take the entire array as input to the worker functions at each iteration.</p>
|
<python><multiprocessing><cpython>
|
2023-05-26 19:16:13
| 2
| 1,006
|
amkhrjee
|
76,343,624
| 3,948,658
|
AWS CDK Failing with: "Error: There are no 'Public' subnet groups in this VPC. Available types:"
|
<p>I am using the AWS CDK in Python to spin up infrastructure. However whenever I add the CDK code to create an EC2 instance resource I get the following error when running <strong>cdk deploy</strong>:</p>
<blockquote>
<p>Error: There are no 'Public' subnet groups in this VPC. Available types:</p>
</blockquote>
<p>And the stack trace points to the code that creates the EC2 instance resource. I've definitely created public subnets in the VPC. Here is my code: the first file creates the EC2 resource, and the second one creates the new VPC and subnet resources that it belongs to. How do I resolve this error?</p>
<p><strong>Stack Code to create the EC2 resource:</strong>
animal_cdk/ec2.py</p>
<pre><code>from constructs import Construct
from aws_cdk import (
    Stack,
    aws_ec2 as ec2,
    Tags,
    CfnTag
)
import aws_cdk.aws_elasticloadbalancingv2 as elbv2

class Ec2Stack(Stack):
    def __init__(self, scope: Construct, construct_id: str, vpc_stack, stage, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        shark_ec2 = ec2.Instance(self, "SharkEc2Instance",
            vpc=vpc_stack.vpc,
            instance_type=ec2.InstanceType.of(ec2.InstanceClass.C5, ec2.InstanceSize.XLARGE9),
            machine_image=ec2.MachineImage.latest_amazon_linux(
                generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
            ),
        )
</code></pre>
<p><strong>Stack Code to create VPC and subnets, that gets imported by EC2 above:</strong>
animal_cdk/vpc.py</p>
<pre><code># Code to create the VPC and subnets
from constructs import Construct
from aws_cdk import (
    Stack,
    aws_ec2 as ec2,
    Tags,
    CfnTag
)

class VpcStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, stage, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        self.vpc = ec2.Vpc(self, "AnimalVpc",
            ip_addresses=ec2.IpAddresses.cidr("10.0.0.0/16"),
            vpc_name="animal-vpc",
            subnet_configuration=[]
        )

        self.shark_public_subnet = ec2.PublicSubnet(self, "SharkPublicSubnet",
            availability_zone="us-west-2c",
            cidr_block="10.0.0.0/28",
            vpc_id=self.vpc.vpc_id,
            map_public_ip_on_launch=True,
        )
        Tags.of(self.shark_public_subnet).add("Name", "shark-public-subnet")
</code></pre>
<p><strong>How VPC gets passed to EC2 Stack:</strong>
animal_cdk/application_infrastucture.py</p>
<pre><code>from constructs import Construct
from aws_cdk import (
    Stack,
)
from animal_cdk.vpc import VpcStack
from animal_cdk.ec2 import Ec2Stack

class ApplicationInfrastructure(Stack):
    def __init__(self, scope: Construct, **kwargs) -> None:
        super().__init__(scope, **kwargs)
        vpcStack = VpcStack(self, "Animal-VPC-Stack", stage="beta")
        ec2Stack = Ec2Stack(self, "Animal-EC2-Stack", vpc_stack=vpcStack, stage="beta")
</code></pre>
<p>Anyone know how I can resolve this error or why I'm getting it? I've looked through the docs and tried a bunch of things but no luck so far.</p>
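<p>A hedged note on the likely cause: subnets created standalone with <code>ec2.PublicSubnet(...)</code> are not registered in the <code>Vpc</code> construct's subnet groups, and <code>subnet_configuration=[]</code> explicitly asks the construct to create none, so instance placement finds no 'Public' group. Letting the <code>Vpc</code> construct create the subnets avoids this (an illustrative fragment, not drop-in code):</p>

```python
# Fragment of VpcStack.__init__ (illustrative; not standalone-runnable):
self.vpc = ec2.Vpc(self, "AnimalVpc",
    ip_addresses=ec2.IpAddresses.cidr("10.0.0.0/16"),
    vpc_name="animal-vpc",
    subnet_configuration=[
        ec2.SubnetConfiguration(
            name="public",
            subnet_type=ec2.SubnetType.PUBLIC,  # registers a 'Public' subnet group
            cidr_mask=28,
        )
    ],
)
```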
|
<python><amazon-web-services><amazon-ec2><aws-cdk><amazon-vpc>
|
2023-05-26 19:05:48
| 2
| 1,699
|
dredbound
|
76,343,623
| 13,982,165
|
match.group for array of named groups / strings python
|
<p>I'm parsing strings with following format:</p>
<pre><code>ADD id='5' titulo='The Diary of a Young Girl' autor='Anne Frank'
ADD id='5' titulo='The Diary of a Young Girl' autor='Anne Frank'
SEARCH id='5'
REMOVE id='10'
REMOVE id='5'
SEARCH id='5'
ADD id='8' titulo='The Fault in Our Stars' autor='John Green'
ADD id='14' titulo='Looking for Alaska' autor='John Green'
SEARCH autor='John Green'
REMOVE autor='Anne Frank'
REMOVE autor='John Green'
SEARCH autor='John Green'
</code></pre>
<p>Into 4 variables, <code>operation</code>, <code>id</code>, <code>title</code> and <code>author</code> using the following regex:</p>
<pre class="lang-py prettyprint-override"><code>(?P<operation>\w*)?( id='(?P<id>[^']*)')?( titulo='(?P<title>[^']*)')?( autor='(?P<author>[^']*)')?
</code></pre>
<p>I've tested it on regex101 and it works fine:</p>
<p><a href="https://i.sstatic.net/eZLGU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eZLGU.png" alt="enter image description here" /></a></p>
<p>However, matching each named group individually seems verbose to me:</p>
<pre class="lang-py prettyprint-override"><code>def parse_input(input_):
    regex = r"(?P<operation>\w*)?( id='(?P<id>[^']*)')?( titulo='(?P<title>[^']*)')?( autor='(?P<author>[^']*)')?"
    match = re.search(regex, input_)
    operation = match.group('operation')
    id_ = match.group('id')
    title = match.group('title')
    author = match.group('author')
    return operation, (id_, title, author)
</code></pre>
<p>Is there some kind of <code>group_all()</code> method for passing an array of strings, each one containing the name of my matching groups, that returns the array of matches? Something like:</p>
<pre class="lang-py prettyprint-override"><code>GROUP_LABELS = ['operation', 'id', 'title', 'author']
regex = r"(?P<operation>\w*)?( id='(?P<id>[^']*)')?( titulo='(?P<title>[^']*)')?( autor='(?P<author>[^']*)')?"
match = re.search(regex, input_)
matched_groups = match.group_all(GROUP_LABELS)
# or operation, id_, title, author = match.group_all(GROUP_LABELS)
</code></pre>
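<p>There is no <code>group_all()</code>, but <code>match.group()</code> already accepts multiple group names and returns a tuple, and <code>groupdict()</code> returns every named group at once. A short sketch using a line from the sample input:</p>

```python
import re

regex = r"(?P<operation>\w*)?( id='(?P<id>[^']*)')?( titulo='(?P<title>[^']*)')?( autor='(?P<author>[^']*)')?"
line = "ADD id='5' titulo='The Diary of a Young Girl' autor='Anne Frank'"
m = re.search(regex, line)

# group() accepts several names at once and returns them as a tuple...
operation, id_, title, author = m.group('operation', 'id', 'title', 'author')
print(operation, id_, author)  # ADD 5 Anne Frank

# ...and groupdict() gives all named groups as a dict.
print(m.groupdict()['title'])  # The Diary of a Young Girl
```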
|
<python><regex>
|
2023-05-26 19:04:50
| 0
| 375
|
nluizsoliveira
|
76,343,508
| 9,780,918
|
Reading csv with pandas and it gets NA values
|
<p>How can I handle it in pandas when I read a CSV file and it has a NaN somewhere?</p>
<p>it returns:</p>
<blockquote>
<p>ValueError: Integer column has NA values in column 22</p>
</blockquote>
<pre><code>def read_csv_file(filename):
    dtypes = {
        'transaction': int,
        'kev': int,
        'companyName': str,
        'companyOwner': str,
        'companyCountry': str,
    }
    # Read the CSV file, specifying the data types
    data = pd.read_csv(filename, dtype=dtypes, na_values={'tradedate': -1})
    return data.to_dict('records')
</code></pre>
<p>The problem is that I want to standardize this, and I don't know which of the data types or columns will have that problem.</p>
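<p>A hedged sketch of one standard fix: pandas' nullable <code>'Int64'</code> dtype (capital I) accepts missing values where plain <code>int</code> cannot, so integer columns with holes still load:</p>

```python
import io
import pandas as pd

# Second row has a missing 'transaction' value, which plain int rejects.
csv = io.StringIO("transaction,kev\n1,2\n,4\n")

# The nullable extension dtype stores the gap as pd.NA instead of failing.
df = pd.read_csv(csv, dtype={'transaction': 'Int64', 'kev': 'Int64'})
print(df['transaction'].isna().tolist())  # [False, True]
```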
|
<python><pandas>
|
2023-05-26 18:44:07
| 1
| 760
|
Julián Oviedo
|
76,343,486
| 19,055,149
|
setuptools: include converted data files into the wheel, keeping originals in the source distribution
|
<p>In my case, parsing the original data is slower than deserialising pickles. I'm trying to transform data during <code>setuptools</code> build:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path

from setuptools import setup
from setuptools.command.build_py import build_py

ROOT = Path(__file__).resolve().parent
DATA_DIR = ROOT / 'data'
OUT_DIR = ROOT / 'src' / 'package'

class BuildPy(build_py):
    def run(self):
        # Turn files from DATA_DIR into pickles in the OUT_DIR.
        build_py.run(self)

setup(
    ...,
    cmdclass={'build_py': BuildPy},
    package_data={"package": ["*.pickle"]}
)
</code></pre>
<p>Unfortunately, <code>setup.py</code> is executed in a temporary directory, and thus it doesn't find any <code>data/*</code>:</p>
<pre class="lang-text prettyprint-override"><code>* Building wheel...
running bdist_wheel
running build
running build_py
error: [Errno 2] No such file or directory: '/tmp/build-via-sdist-dkhq2z2i/package-0.0.1/data/file.txt'
</code></pre>
<p>Equally bad, the <code>sdist</code> ends up having neither the <code>data/*</code> sources nor the pickles. Adding <code>data/*</code> to <code>package_data</code> results in both the original and the converted files being present in the resulting wheel.</p>
<p>How to get original <code>data/*</code> files in the source distribution and transformed in the binary wheel?</p>
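<p>A hedged sketch of one approach: write the pickles into the <em>build</em> tree (<code>self.build_lib</code>) instead of the source tree, and list <code>data/*</code> in <code>MANIFEST.in</code> so the sdist carries the originals while the wheel gets only the pickles. The file names and the parse step are illustrative:</p>

```python
# Fragment of setup.py (illustrative; runs inside a setuptools build):
import pickle
from pathlib import Path
from setuptools.command.build_py import build_py

class BuildPy(build_py):
    def run(self):
        super().run()  # copy the package sources into self.build_lib first
        out_dir = Path(self.build_lib) / "package"
        out_dir.mkdir(parents=True, exist_ok=True)
        for src in Path("data").glob("*.txt"):
            parsed = src.read_text()  # stand-in for the slow parsing step
            (out_dir / (src.stem + ".pickle")).write_bytes(pickle.dumps(parsed))
```

With this, <code>package_data</code> can stay <code>{"package": ["*.pickle"]}</code>, and <code>data/*</code> never needs to be package data at all.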
|
<python><setuptools><setup.py><python-packaging>
|
2023-05-26 18:40:08
| 0
| 419
|
Nil Admirari
|
76,343,407
| 6,457,407
|
Python equivalent to `PyArrayContiguousFromAny`?
|
<p>Is there a Python equivalent to the C API's function:</p>
<pre><code>PyArray_ContiguousFromAny(PyObject* op, int typenum, int min_depth, int max_depth)
</code></pre>
<p><code>numpy.ascontiguousarray(a, dtype)</code> seems close, but it doesn't take a <code>min_depth</code> or <code>max_depth</code> argument giving the minimum and maximum for the dimension of the array.</p>
<p>If worst comes to worst, I can just call this function and then check the dimensions of the result, but that seems a bit wasteful.</p>
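<p>A hedged sketch of a pure-Python equivalent: <code>np.ascontiguousarray</code> covers the dtype/contiguity part, and the depth bounds become an explicit check on <code>ndim</code>, which is cheap (no extra array work). Whether the C API errors or reshapes for out-of-range depths is a detail worth verifying; this version simply raises:</p>

```python
import numpy as np

def contiguous_from_any(op, dtype, min_depth, max_depth):
    # Contiguity/dtype conversion, then a cheap metadata-only depth check.
    a = np.ascontiguousarray(op, dtype=dtype)
    if not (min_depth <= a.ndim <= max_depth):
        raise ValueError(f"array depth {a.ndim} not in [{min_depth}, {max_depth}]")
    return a

a = contiguous_from_any([[1, 2], [3, 4]], np.float64, 1, 2)
print(a.flags['C_CONTIGUOUS'], a.ndim)  # True 2
```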
|
<python><numpy><swig>
|
2023-05-26 18:25:08
| 0
| 11,605
|
Frank Yellin
|
76,343,217
| 1,305,420
|
Setting maxBytes on RotatingFileHandler causes Formatter.format() to be called twice per record
|
<p>Can someone explain why setting <code>maxBytes</code> on <code>RotatingFileHandler</code> causes <code>DerivedFormatter.format()</code> to be called twice per log record? Ultimately, only 1 entry makes it to the log file, but I want to have some logic performed in <code>DerivedFormatter.format()</code> that shouldn't be done multiple times.</p>
<p>Example:</p>
<pre><code># Python 3.10.6
import logging
from logging.handlers import RotatingFileHandler

class DerivedFormatter(logging.Formatter):
    def __init__(self, fmt=None, datefmt=None, style='%'):
        super().__init__(fmt, datefmt, style)

    def format(self, record):
        print("format_override called.")
        return super().format(record)

format = '%(asctime)s | %(levelname)-8s | %(filename)s | %(message)s'
log_file = "fubar.log"

Log = logging.getLogger()
Log.handlers.clear()
Log.setLevel(logging.DEBUG)

fhandler = RotatingFileHandler(log_file)
fhandler.maxBytes = 102400  # comment this line
fhandler.backupCount = 1
fhandler.setFormatter(DerivedFormatter(format))
Log.addHandler(fhandler)

Log.info("something happened")
</code></pre>
<p>When the line setting <code>fhandler.maxBytes</code> is commented out, <code>DerivedFormatter.format()</code> only gets called once.</p>
<p>Thanks in advance.</p>
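<p>A plausible explanation (hedged, based on CPython's implementation): when <code>maxBytes &gt; 0</code>, <code>RotatingFileHandler.shouldRollover()</code> formats the record to measure how many bytes the write would add, and then <code>emit()</code> formats it again. A small sketch that counts the calls:</p>

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

calls = []

class CountingFormatter(logging.Formatter):
    def format(self, record):
        calls.append(record.msg)  # record every format() invocation
        return super().format(record)

path = os.path.join(tempfile.mkdtemp(), "demo.log")
handler = RotatingFileHandler(path, maxBytes=1024, backupCount=1)
handler.setFormatter(CountingFormatter("%(message)s"))
log = logging.getLogger("rotating-format-demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(handler)

log.info("hello")   # shouldRollover() formats once, emit() formats again
handler.close()
print(len(calls))   # 2
```

If the work in <code>format()</code> must happen exactly once per record, one option is to cache the result on the record itself (e.g. a custom attribute) and reuse it on the second call.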
|
<python><logging>
|
2023-05-26 17:48:06
| 1
| 368
|
kernelk
|
76,343,191
| 554,481
|
Understanding large difference in cache hits
|
<p>I'm working on a Leetcode hard-tagged dynamic programming <a href="https://leetcode.com/problems/profitable-schemes/description/" rel="nofollow noreferrer">problem</a>. My solution is 16x slower than a solution I found in the discussion section, and I'd like to better understand why.</p>
<p>Historically the recurrence relation I've used in similar problems is to advance the item index by <code>1</code> and either keep or skip the item. That's what the fast solution does. I thought that looping through items would be similar (I'm pretty sure I've seen that approach used with success elsewhere), and since I have less practice with that technique, I thought I would apply it to this problem. That's what I did in my much slower solution, and I think that's the source of the slowness. Are these two approaches not practically equivalent?</p>
<p>Here is the fast code that I didn't write. It takes about half a second.</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from functools import cache

class Solution:
    def profitableSchemes(self, n: int, minProfit: int, group: List[int], profit: List[int]) -> int:
        @cache
        def dfs(i, members, cur_profit):
            if i >= len(profit):
                if cur_profit >= minProfit and members <= n:
                    return 1
                else:
                    return 0
            ans = 0
            ans += dfs(i + 1, members, cur_profit)
            if members + group[i] <= n:
                ans += dfs(i + 1, members + group[i], min(cur_profit + profit[i], minProfit))
            return ans
        answer = dfs(0, 0, 0) % (10 ** 9 + 7)
        print(dfs.cache_info())
        return answer
solution = Solution()
answer = solution.profitableSchemes(
100, 100,
[2,5,36,2,5,5,14,1,12,1,14,15,1,1,27,13,6,59,6,1,7,1,2,7,6,1,6,1,3,1,2,11,3,39,21,20,1,27,26,22,11,17,3,2,4,5,6,18,4,14,1,1,1,3,12,9,7,3,16,5,1,19,4,8,6,3,2,7,3,5,12,6,15,2,11,12,12,21,5,1,13,2,29,38,10,17,1,14,1,62,7,1,14,6,4,16,6,4,32,48],
[21,4,9,12,5,8,8,5,14,18,43,24,3,0,20,9,0,24,4,0,0,7,3,13,6,5,19,6,3,14,9,5,5,6,4,7,20,2,13,0,1,19,4,0,11,9,6,15,15,7,1,25,17,4,4,3,43,46,82,15,12,4,1,8,24,3,15,3,6,3,0,8,10,8,10,1,21,13,10,28,11,27,17,1,13,10,11,4,36,26,4,2,2,2,10,0,11,5,22,6]
)
</code></pre>
<p>Here is my slow code. It takes about 8.5 seconds.</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from functools import cache

class Solution:
    def profitableSchemes(self, n: int, minProfit: int, group: List[int], profit: List[int]) -> int:
        self.modulus = 10 ** 9 + 7
        self.groups = group
        self.profit = profit
        self.min_profit = minProfit
        self.n = n
        answer = self.solve(-1, n, 0) % self.modulus
        if minProfit == 0:
            answer += 1
        return answer

    @cache
    def solve(self, previous_crime_index, remaining_members, accumulated_profit):
        if remaining_members <= 0 or previous_crime_index == self.n - 1:
            return 0
        answer = 0
        for crime_index in range(previous_crime_index + 1, len(self.groups)):
            capped_profit = min(accumulated_profit + self.profit[crime_index], self.min_profit)
            answer += self.solve(
                crime_index, remaining_members - self.groups[crime_index],
                capped_profit
            )
            answer = answer % self.modulus
            if self.profit[crime_index] + accumulated_profit >= self.min_profit and remaining_members - self.groups[crime_index] >= 0:
                answer += 1
        return answer % self.modulus
solution = Solution()
answer = solution.profitableSchemes(
100, 100,
[2,5,36,2,5,5,14,1,12,1,14,15,1,1,27,13,6,59,6,1,7,1,2,7,6,1,6,1,3,1,2,11,3,39,21,20,1,27,26,22,11,17,3,2,4,5,6,18,4,14,1,1,1,3,12,9,7,3,16,5,1,19,4,8,6,3,2,7,3,5,12,6,15,2,11,12,12,21,5,1,13,2,29,38,10,17,1,14,1,62,7,1,14,6,4,16,6,4,32,48],
[21,4,9,12,5,8,8,5,14,18,43,24,3,0,20,9,0,24,4,0,0,7,3,13,6,5,19,6,3,14,9,5,5,6,4,7,20,2,13,0,1,19,4,0,11,9,6,15,15,7,1,25,17,4,4,3,43,46,82,15,12,4,1,8,24,3,15,3,6,3,0,8,10,8,10,1,21,13,10,28,11,27,17,1,13,10,11,4,36,26,4,2,2,2,10,0,11,5,22,6]
)
print(solution.solve.cache_info())
</code></pre>
<p>The fast method has <code>692860</code> cache hits, while mine has <code>24868771</code>. That's a 35x difference! I would have expected similar cache hits. I ran <code>python -m cProfile</code> on both functions, and got similar stats. The only reason for the difference I can think of is that the loop approach and the <code>+1</code> item index approach are not equally efficient.</p>
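<p>To make the gap concrete, here is a minimal side-by-side sketch of the two recurrences on a tiny made-up instance (the inputs below are illustrative, not from the real test case). Both count the same schemes, but the loop recurrence issues up to <em>n</em> recursive calls (cache lookups) per memoized state, while the index recurrence issues at most two, so the work per state differs by roughly a factor of <em>n</em>:</p>

```python
from functools import cache

# Tiny illustrative instance (assumption: not the real input)
group, profit = [2, 3, 1], [3, 4, 2]
n, min_profit = 4, 5

@cache
def skip_or_take(i, members, cur):
    # advance the item index by 1; O(1) work per state
    if i == len(profit):
        return 1 if cur >= min_profit else 0
    ans = skip_or_take(i + 1, members, cur)
    if members + group[i] <= n:
        ans += skip_or_take(i + 1, members + group[i],
                            min(cur + profit[i], min_profit))
    return ans

@cache
def loop_over_items(prev, remaining, cur):
    # loop over every later item; O(n) work per state
    ans = 1 if cur >= min_profit else 0
    for j in range(prev + 1, len(group)):
        if remaining - group[j] >= 0:
            ans += loop_over_items(j, remaining - group[j],
                                   min(cur + profit[j], min_profit))
    return ans

print(skip_or_take(0, 0, 0), loop_over_items(-1, n, 0))  # 2 2
```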
|
<python><dynamic-programming>
|
2023-05-26 17:44:59
| 0
| 2,075
|
user554481
|
76,343,100
| 15,170,662
|
How to convert WMF/EMF to PNG/JPEG in Python?
|
<p>I need to convert a <code>.wmf</code> file to a <code>.png</code> format.</p>
<p>I can't use libraries under the *GPL license.</p>
<p>Solutions that use external dependencies (such as Inkscape or LibreOffice in a subprocess) are also not suitable for my project.
Using external system libraries (installed by <code>apt</code> or another package manager, not by <code>pip</code>) is also undesirable.</p>
<p>I tried using <code>Pillow</code>, but it gives out an <code>OSError</code> (because the project runs on Linux, and <code>Pillow</code> supports <code>.wmf</code> only on Windows):</p>
<pre><code>from PIL import Image
def main():
img = Image.open('test.wmf')
img.save('test.png')
if __name__ == '__main__':
main()
</code></pre>
<p><code>OSError: cannot find loader for this WMF file</code>.</p>
|
<python><image><python-imaging-library><vector-graphics><wmf>
|
2023-05-26 17:28:18
| 0
| 415
|
Meetinger
|
76,342,866
| 897,041
|
How to decide if a command execution is complete while interacting with a CLI tool through python?
|
<p>I'm running beeline.sh through a python script and reading its output after submitting numerous sequential SQL queries to be executed through it. I can't send all queries in a single run, I have to run each of these queries and inspect their effect individually.</p>
<p>Unfortunately I can't reliably decide if a query was fully processed or not in order to inspect the output and then send another query to proceed with the purpose of my script.</p>
<p>I thought I can wait for the prompt to be printed because that's what I would see after manually running a query through beeline.sh, but the prompt wouldn't be captured while I'm reading the process's stdout line by line!</p>
<pre><code>process = subprocess.Popen(['bash', self.beeline_path], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
line = self.process.stdout.readline().decode("utf-8").strip()
while line is not None and len(line) > 0:
if line.startswith("0: jdbc:hive2://"):
logging.info("Found prompt: " + line) # Never reached here!
break
else:
line = self.process.stdout.readline().decode("utf-8").strip()
</code></pre>
<p>So is there a way to make sure the prompt is captured or at least a reliable way to decide when a query was completely processed?</p>
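<p>For illustration, here is a sketch of an approach I am considering: reading byte-by-byte instead of line-by-line, since the prompt is not newline-terminated and <code>readline()</code> blocks on it. The dummy child process below just stands in for beeline (and note it may also be that beeline only prints its prompt when attached to a TTY, which would explain it never appearing in the pipe at all):</p>

```python
import subprocess
import sys

PROMPT = b"prompt> "  # stand-in for the real "0: jdbc:hive2://...> " prompt

def read_until_prompt(proc, prompt=PROMPT):
    """Accumulate stdout byte-by-byte until `prompt` appears.

    readline() never returns the prompt because it is not followed by a
    newline; matching the raw buffer avoids that problem.
    """
    buf = bytearray()
    while not buf.endswith(prompt):
        ch = proc.stdout.read(1)
        if not ch:          # EOF: the child exited
            break
        buf += ch
    return bytes(buf)

# Dummy child that prints a result line and then a prompt with no newline
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('query result\\n'); "
     "sys.stdout.write('prompt> '); sys.stdout.flush()"],
    stdout=subprocess.PIPE,
)
output = read_until_prompt(proc)
print(output)  # b'query result\nprompt> '
```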
|
<python><python-3.x><stream><command-line-interface><beeline>
|
2023-05-26 16:48:50
| 0
| 4,028
|
Muhammad Gelbana
|
76,342,657
| 16,383,578
|
How to optimize NumPy Sieve of Eratosthenes?
|
<p>I have made my own Sieve of Eratosthenes implementation in NumPy. I am sure you all know it is for finding all primes below a number, so I won't explain anything further.</p>
<p>Code:</p>
<pre><code>import numpy as np
def primes_sieve(n):
primes = np.ones(n+1, dtype=bool)
primes[:2] = False
primes[4::2] = False
for i in range(3, int(n**0.5)+1, 2):
if primes[i]:
primes[i*i::i] = False
return np.where(primes)[0]
</code></pre>
<p>As you can see, I have already made some optimizations. First, all primes except 2 are odd, so I set all multiples of 2 to <code>False</code> and only brute-force odd numbers.</p>
<p>Second, I only loop through numbers up to the floor of the square root, because every composite number above the square root is eliminated as a multiple of some prime below the square root.</p>
<p>But it isn't optimal, because it loops through all odd numbers below the limit, and not all odd numbers are prime. And as the number grows larger, primes become more sparse, so there are lots of redundant iterations.</p>
<p>So if the list of candidates is changed dynamically, in such a way that composite numbers already identified wouldn't even ever be iterated upon, so that only prime numbers are looped through, there won't be any wasteful iterations, thus the algorithm would be optimal.</p>
<p>I have written a crude implementation of the optimized version:</p>
<pre><code>def primes_sieve_opt(n):
primes = np.ones(n+1, dtype=bool)
primes[:2] = False
primes[4::2] = False
limit = int(n**0.5)+1
i = 2
while i < limit:
primes[i*i::i] = False
i += 1 + primes[i+1:].argmax()
return np.where(primes)[0]
</code></pre>
<p>But it is much slower than the unoptimized version:</p>
<pre><code>In [92]: %timeit primes_sieve(65536)
271 µs ± 22 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [102]: %timeit primes_sieve_opt(65536)
309 µs ± 3.86 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>My idea is simple, by getting the next index of <code>True</code>, I can ensure all primes are covered and only primes are processed.</p>
<p>However <code>np.argmax</code> is slow in this regard. I Google searched "how to find the index of the next True value in NumPy array" (without quotes), and I found several StackOverflow questions that are slightly relevant but ultimately don't answer my question.</p>
<p>For example, <a href="https://stackoverflow.com/questions/16094563/numpy-get-index-where-value-is-true">numpy get index where value is true</a> and <a href="https://stackoverflow.com/questions/16243955/numpy-first-occurrence-of-value-greater-than-existing-value">Numpy first occurrence of value greater than existing value</a>.</p>
<p>I am not trying to find all indexes that are <code>True</code>; that would be wasteful here. I need to find the next <code>True</code> value, get its index and immediately stop looping; the array contains only <code>bool</code>s.</p>
<p>How can I optimize this?</p>
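<p>One variant I am considering (a sketch, not benchmarked) is a two-stage sieve: first sieve the candidates up to the square root, then strike multiples of only those primes. This avoids calling <code>argmax</code> inside the main loop entirely:</p>

```python
import numpy as np

def primes_sieve_two_stage(n):
    # Stage 1: find all primes up to sqrt(n) with a tiny classic sieve
    limit = int(n ** 0.5) + 1
    small = np.ones(limit, dtype=bool)
    small[:2] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if small[i]:
            small[i * i::i] = False
    # Stage 2: strike multiples of exactly those primes, nothing else
    primes = np.ones(n + 1, dtype=bool)
    primes[:2] = False
    for p in np.flatnonzero(small):
        primes[p * p::p] = False
    return np.flatnonzero(primes)

print(primes_sieve_two_stage(30).tolist())  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```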
<hr />
<h2>Edit</h2>
<p>If anyone is interested, I have optimized my algorithm further:</p>
<pre><code>import numba
import numpy as np
@numba.jit(nopython=True, parallel=True, fastmath=True, forceobj=False)
def prime_sieve(n: int) -> np.ndarray:
primes = np.full(n + 1, True)
primes[:2] = False
primes[4::2] = False
primes[9::6] = False
limit = int(n**0.5) + 1
for i in range(5, limit, 6):
if primes[i]:
primes[i * i :: 2 * i] = False
for i in range(7, limit, 6):
if primes[i]:
primes[i * i :: 2 * i] = False
return np.flatnonzero(primes)
</code></pre>
<p>I used <code>numba</code> to speed things up. And since all primes except 2 and 3 are either 6k+1 or 6k-1, this makes things even faster.</p>
|
<python><python-3.x><numpy><primes><sieve-of-eratosthenes>
|
2023-05-26 16:18:29
| 1
| 3,930
|
Ξένη Γήινος
|
76,342,437
| 10,108,332
|
How can I scrape an embedded image url from xlsx spreadsheet
|
<p>I have Excel spreadsheets that have an image per row; scraping the image using <a href="https://stackoverflow.com/questions/62039535/extract-images-from-excel-file-with-python">this example</a> works. However, instead of scraping the image off the spreadsheet, I want to extract the URL associated with that image. If I open up the Excel file I can click on the image and navigate to the given URL. Is it possible to extract this URL via Python?</p>
<p>I have looked through the documentation on openpyxl to see if there are any examples of extracting URLs embedded in images, and I couldn't find anything.</p>
<p>Any help would be much appreciated. Thanks</p>
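<p>For reference, my current understanding (an untested assumption about the OOXML layout) is that an image's hyperlink is stored in the drawing part's relationships, e.g. <code>xl/drawings/_rels/drawing1.xml.rels</code>, so something like the following sketch might pull the URLs out of the raw archive:</p>

```python
import io
import re
import zipfile

def image_hyperlinks(xlsx_file):
    """Collect hyperlink targets from the drawing relationship parts.

    Assumes the standard OOXML layout, where picture hyperlinks are stored
    as Relationship entries whose Type ends in "/hyperlink".
    """
    urls = []
    with zipfile.ZipFile(xlsx_file) as z:
        for name in z.namelist():
            if name.startswith("xl/drawings/_rels/"):
                xml = z.read(name).decode("utf-8")
                for rel in re.findall(r"<Relationship\b[^>]*>", xml):
                    if "/hyperlink" in rel:
                        m = re.search(r'Target="([^"]+)"', rel)
                        if m:
                            urls.append(m.group(1))
    return urls

# Minimal fake workbook (hypothetical content) to demonstrate the lookup
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "xl/drawings/_rels/drawing1.xml.rels",
        '<Relationships><Relationship Id="rId1" Type="http://schemas.'
        'openxmlformats.org/officeDocument/2006/relationships/hyperlink" '
        'Target="https://example.com/item1" TargetMode="External"/>'
        "</Relationships>",
    )
print(image_hyperlinks(buf))  # ['https://example.com/item1']
```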
|
<python><pandas><excel><xlsx><imageurl>
|
2023-05-26 15:56:50
| 1
| 1,070
|
Sharp Dev
|
76,342,420
| 1,761,521
|
Polars get unique values from List[Any]
|
<p>I want to get a list of unique values from a column of list[str],</p>
<pre><code>import polars as pl
data = [
{"name": "foo", "tags": ["A", "B"]},
{"name": "bar", "tags": ["A", "B", "C"]},
{"name": "baz", "tags": ["A"]},
{"name": "bing", "tags": ["B"]},
]
df = pl.from_dicts(data)
</code></pre>
<p>Expected output: <code>["A", "B", "C"]</code></p>
|
<python><python-polars>
|
2023-05-26 15:53:47
| 2
| 3,145
|
spitfiredd
|
76,342,379
| 8,087,322
|
`__main__.py` structure with argparse, pytest and sphinxdoc
|
<p>For a small module, I want to have a <code>__main__.py</code> file that is executed via Pythons <code>-m</code> argument. As per basic docs, this file looks like</p>
<pre><code>import argparse
from . import MyClass
def getparser():
parser = argparse.ArgumentParser()
# […]
return parser
if __name__ == "__main__":
parser = getparser()
args = parser.parse_args()
c = MyClass()
# […]
</code></pre>
<p>I want to test this with <strong>runpy</strong>:</p>
<pre><code>import runpy
def test_main():
runpy.run_module("mymodule")
# […]
</code></pre>
<p>which does not work because the <code>__name__</code> in this case is not <code>__main__</code>. In principle one could omit the <code>if __name__ == "__main__"</code> condition to get this working.</p>
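<p>For what it's worth, <code>runpy.run_module</code> accepts a <code>run_name</code> argument that overrides the executed module's <code>__name__</code>. The self-contained sketch below builds a throwaway stand-in package just so the snippet runs on its own; in the real project the package already exists:</p>

```python
import os
import runpy
import sys
import tempfile

# Build a throwaway package ("mymodule" is a stand-in name)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mymodule")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write("ran_as_main = (__name__ == '__main__')\n")
sys.path.insert(0, root)

# Without run_name, __name__ would be 'mymodule.__main__' and the guard
# would be skipped; run_name='__main__' makes it fire.
globs = runpy.run_module("mymodule", run_name="__main__")
print(globs["ran_as_main"])  # True
```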
<p>But I also want to document the routine with <a href="https://sphinx-argparse.readthedocs.io" rel="nofollow noreferrer"><strong>sphinxarg.ext</strong></a>. This requires having the <code>getparser()</code> function available from Sphinx. Removing the <code>if __name__ == "__main__"</code> condition then also runs the module within sphinxdoc, which is not what is wanted.</p>
<pre><code>.. argparse::
:module: mymodule.__main__
:func: getparser
:prog: myprog
</code></pre>
<p>How can I structure this so that all use cases work well <em><strong>and</strong></em> the code is well-readable (i.e. the <code>getparser()</code> function and the main code should not be distributed over different files)?</p>
|
<python><pytest><python-sphinx><runpy>
|
2023-05-26 15:48:43
| 1
| 593
|
olebole
|
76,342,215
| 13,874,745
|
Can't instantiate abstract class when I use `Dataset` in torch_geometric
|
<p>I am trying to create a class that inherits from <code>Dataset</code> of <code>torch_geometric</code>. The code is:</p>
<pre class="lang-py prettyprint-override"><code>from torch_geometric.data import Data, Dataset
import numpy as np
class GraphTimeSeriesDataset(Dataset):
def __init__(self):
# Initialize your dataset here
pass
def __len__(self):
# Return the total number of samples in the dataset
return len(self.data_list)
def __getitem__(self, idx):
# Return the data sample at the given index
return self.data_list[idx]
dataset = GraphTimeSeriesDataset()
</code></pre>
<p>And the error message:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[4], line 17
13 def __getitem__(self, idx):
14 # Return the data sample at the given index
15 return self.data_list[idx]
---> 17 dataset = GraphTimeSeriesDataset()
TypeError: Can't instantiate abstract class GraphTimeSeriesDataset with abstract methods get, len
</code></pre>
<p>I'm not sure what the root cause of this error is, because the same code runs fine in another environment.</p>
<ul>
<li>Successful environment description: <a href="https://gist.github.com/theabc50111/d11d47971a7e9017509dd5e30cf0341a" rel="nofollow noreferrer">here</a></li>
<li>Failed environment description: <a href="https://gist.github.com/theabc50111/988e8bde9fcf40f63868d81aefff9cff" rel="nofollow noreferrer">here</a></li>
</ul>
<p>My questions:</p>
<ol>
<li>Is this error caused by environment or codes itself?</li>
<li>How should I adjust my codes to avoid this kind of error?</li>
<li>What's possible root cause?</li>
</ol>
|
<python><pytorch><pytorch-dataloader><pytorch-geometric>
|
2023-05-26 15:26:48
| 1
| 451
|
theabc50111
|
76,342,140
| 11,649,050
|
Error when applying Pytorch pre-trained weights' transform on Huggingface dataset
|
<p>I'm trying to train/fine-tune the MobileNetV3 model on the CIFAR100 dataset. I'm using Pytorch and Huggingface to simplify things like the training loop, which I'm not used to writing manually, coming from Tensorflow/Keras.</p>
<p>However, I get an error when trying to apply the pre-trained weights' transform (preprocessing) to the dataset with the <code>.with_transform()</code> method. I wanted to visualize the preprocessing partly to see that it actually works.</p>
<p>If I apply the preprocessing manually on an image it works, but if I use the <code>with_transform</code> method, I get an error.</p>
<p>Minimum reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torchvision.models import MobileNet_V3_Small_Weights
from datasets import load_dataset
from matplotlib import pyplot as plt
weights = MobileNet_V3_Small_Weights.DEFAULT
preprocess = weights.transforms()
raw_data = load_dataset("cifar100")
data = raw_data.with_transform(preprocess)
raw_img = raw_data["train"][0]["img"]
fig, axes = plt.subplots(1, 3)
axes[0].imshow(raw_img)
axes[0].set_title("Raw image")
img = preprocess(raw_img).permute(1, 2, 0) # <----- Applying preprocessing "manually" on image works
axes[1].imshow(img)
axes[1].set_title("Preprocessed image (manual)")
img = data["train"][0]["img"] # <----- Getting image from preprocessed dataset doesn't work (preprocessing is lazy)
axes[2].imshow(img.permute(1, 2, 0))
axes[2].set_title("Preprocessed image (dataset)")
plt.show()
</code></pre>
<p>The error I get is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\thiba\OneDrive - McGill University\Internship\ECSE301\pytorch_test.py", line 23, in <module>
img = data["train"][0]["img"]
~~~~~~~~~~~~~^^^
File "C:\Python311\Lib\site-packages\datasets\arrow_dataset.py", line 2778, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\datasets\arrow_dataset.py", line 2763, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\datasets\formatting\formatting.py", line 624, in format_table
return formatter(pa_table, query_type=query_type)
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\datasets\formatting\formatting.py", line 480, in format_row
formatted_batch = self.format_batch(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\datasets\formatting\formatting.py", line 510, in format_batch
return self.transform(batch)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\torchvision\transforms\_presets.py", line 58, in forward
img = F.resize(img, self.resize_size, interpolation=self.interpolation, antialias=self.antialias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\torchvision\transforms\functional.py", line 476, in resize
_, image_height, image_width = get_dimensions(img)
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\torchvision\transforms\functional.py", line 78, in get_dimensions
return F_pil.get_dimensions(img)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\torchvision\transforms\_functional_pil.py", line 31, in get_dimensions
raise TypeError(f"Unexpected type {type(img)}")
TypeError: Unexpected type <class 'dict'>
</code></pre>
|
<python><pytorch><huggingface-datasets>
|
2023-05-26 15:17:50
| 1
| 331
|
Thibaut B.
|
76,341,832
| 2,110,476
|
How to pass a variable from init to parse_item in Scrapy CrawlSpider?
|
<p>Maybe it's the wrong approach altogether. I'd be happy about an alternative approach as well, if you have an idea.</p>
<ol>
<li>Connect to my DB, get the next batch of seed URLs (as a dict, with some other info)</li>
<li>Add their URL/domains to <code>start_urls</code> and <code>allowed_domains</code></li>
<li>With each request, I'd like to write some other info from the DB with the seed URLs to the yielded item</li>
</ol>
<p>In a <strong>Spider</strong>, I'd simply use <code>meta</code>. But that won't work for a CrawlSpider.</p>
<pre><code>class TestCrawlerSpider(CrawlSpider):
name = 'testcrawler'
allowed_domains = []
start_urls = []
current_batch = None
rules = (
Rule(LinkExtractor(), process_request=my_request_processor, callback='parse_item', follow=True, process_links="filter_links"),
Rule(LinkExtractor(), process_request=my_request_processor, follow=True, process_links="filter_links"),
)
def __init__(self, *args, **kwargs):
self.current_batch = run_manager.get_next_batch()
for doc in self.current_batch:
self.start_urls.append(doc.get('website'))
self.allowed_domains.append(doc.get('domain'))
super(TestCrawlerSpider, self).__init__(*args, **kwargs)
def parse_item(self, response):
# how to access the current doc (from __init__) here?
doc_id = doc.get('id')
</code></pre>
<p>I've tried <a href="https://stackoverflow.com/questions/56995150/how-to-use-meta-in-scrapy-rule/57076321#57076321">how to use meta in scrapy rule</a> but I still don't know how to pass the data from <code>__init__</code> to the Rule.</p>
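<p>One idea I am toying with (sketched here outside Scrapy, with made-up field names) is to index the batch by domain in <code>__init__</code> and recover the seed document from <code>response.url</code> in <code>parse_item</code>, since every followed link stays inside its seed's allowed domain:</p>

```python
from urllib.parse import urlparse

class DomainLookup:
    """Map any crawled URL back to the DB document of its seed domain."""

    def __init__(self, batch):
        # batch entries stand in for the dicts read from the DB
        self.doc_by_domain = {doc["domain"]: doc for doc in batch}

    def doc_for_url(self, url):
        host = urlparse(url).netloc
        # match the registered domain as the host itself or a suffix of it
        for domain, doc in self.doc_by_domain.items():
            if host == domain or host.endswith("." + domain):
                return doc
        return None

batch = [{"id": 1, "domain": "example.com", "website": "https://example.com"}]
lookup = DomainLookup(batch)
print(lookup.doc_for_url("https://blog.example.com/post")["id"])  # 1
```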
|
<python><scrapy>
|
2023-05-26 14:36:15
| 0
| 1,357
|
Chris
|
76,341,802
| 13,154,227
|
pyodbc: works in 3.7.2 but breaks in 3.11.3
|
<pre><code>import pyodbc
my_db_path = r'C:\testing\mydb.accdb'
my_db_pw = r'djw85nawj12'
conn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s;PWD=%s' % (my_db_path, my_db_pw))
#in python 3.7.2 (pyodbc 4.0.32) -> works, returns conn object
#in python 3.11.3 (pyodbc 4.0.39) -> error: data source name not found and no default driver specified
</code></pre>
<p>I need to get this working in 3.11.3 and I don't understand why it fails. I tried multiple variations with and without usage of raw strings. What's happening in 3.11.3?</p>
|
<python><python-3.x><pyodbc><python-3.11>
|
2023-05-26 14:33:09
| 0
| 609
|
R-obert
|
76,341,655
| 1,823,476
|
Store and evaluate a condition in an instance variable
|
<p>I start my question by describing the use case:</p>
<p>A context-menu should be populated with actions. Depending on the item for which the menu is requested, some actions should be hidden because they are not allowed for that specific item.</p>
<p>So my idea is to create actions like this:</p>
<pre class="lang-py prettyprint-override"><code>edit_action = Action("Edit Item")
edit_action.set_condition(lambda item: item.editable)
</code></pre>
<p>Then, when the context-menu is about to be opened, I evaluate every possible action whether it is allowed for the specific item or not:</p>
<pre class="lang-py prettyprint-override"><code>allowed_actions = list(filter(
lambda action: action.allowed_for_item(item),
list_of_all_actions
))
</code></pre>
<p>For that plan to work, I have to store the condition in the specific <code>Action</code> instance so that it can be evaluated later. Obviously, the condition will be different for every instance.</p>
<p>The basic idea is that the one who defines the actions also defines the conditions under which they are allowed or not. I want to use the same way to enable/disable toolbar buttons depending on the item selected.</p>
<p>So that is how I tried to implement <code>Action</code> (leaving out unrelated parts):</p>
<pre class="lang-py prettyprint-override"><code>class Action:
_condition = lambda i: True
def set_condition(self, cond):
self._condition = cond
def allowed_for_item(self, item):
return self._condition(item)
</code></pre>
<p>My problem is now:</p>
<pre><code>TypeError('<lambda>() takes 1 positional argument but 2 were given')
</code></pre>
<p>Python treats <code>self._condition(item)</code> as call of an instance method and passes <code>self</code> as the first argument.</p>
<p>Any ideas how I can make that call work? Or is the whole construct too complicated and there is a simpler way that I just don't see? Thanks in advance!</p>
<hr />
<p><strong>Update:</strong> I included the initializer for <code>_condition</code>, which I found (thanks @slothrop) to be the problem. This was meant as default value, so <code>allowed_for_item()</code> also works when <code>set_condition()</code> has not been called before.</p>
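<p>For reference, here is a minimal sketch of the behaviour I ran into, plus a workaround I am considering: wrapping the class-level default in <code>staticmethod</code> prevents it from being bound to the instance, while conditions assigned per instance were never bound in the first place:</p>

```python
class Action:
    # staticmethod stops Python from binding the default to the instance
    _condition = staticmethod(lambda item: True)

    def set_condition(self, cond):
        # instance attributes are plain functions, never bound to self
        self._condition = cond

    def allowed_for_item(self, item):
        return self._condition(item)

default = Action()
print(default.allowed_for_item("anything"))  # True

editable_only = Action()
editable_only.set_condition(lambda item: item == "editable")
print(editable_only.allowed_for_item("editable"))   # True
print(editable_only.allowed_for_item("read-only"))  # False
```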
|
<python><lambda><pyqt5><qaction>
|
2023-05-26 14:12:11
| 1
| 711
|
chrset
|
76,341,623
| 158,668
|
Error: "Subprocess exited with error 9009" when running cdk
|
<p>I've installed the <a href="https://aws.amazon.com/cdk/" rel="nofollow noreferrer">AWS CDK (Cloud Development Kit)</a> on Windows 10 and have tried to deploy an existing CDK stack using python on my AWS account.</p>
<p>When running different <code>cdk</code> commands, I get the following error message:</p>
<pre><code>$ cdk bootstrap
Subprocess exited with error 9009
$ cdk diff
Subprocess exited with error 9009
$ cdk synth
Subprocess exited with error 9009
</code></pre>
<p>However, running <code>cdk --version</code> works:</p>
<pre><code>$ cdk --version
2.81.0 (build bd920f2)
</code></pre>
|
<python><windows><amazon-web-services><aws-cdk>
|
2023-05-26 14:09:04
| 1
| 51,850
|
Dennis Traub
|
76,341,381
| 6,195,489
|
Get dict from list of dicts with the maximum of two items
|
<p>I would like to get the dict in a list of dicts which has the maximum of two values:</p>
<pre><code>manager1={"id":"1","start_date":"2019-08-01","perc":20}
manager2={"id":"2","start_date":"2021-08-01","perc":20}
manager3={"id":"3","start_date":"2019-08-01","perc":80}
manager4={"id":"4","start_date":"2021-08-01","perc":80}
managers=[manager1,manager2,manager3,manager4]
</code></pre>
<p>I want to select the managers that have the latest start date, then get the manager with the max value of <code>perc</code>.</p>
<p>I can do:</p>
<pre><code>max(managers, key=lambda x:x['perc'])
</code></pre>
<p>to get the maximum perc, how to i do get it to return more than one dict. In this case it gives manager3. But I want manager4 returned.</p>
|
<python><list><dictionary>
|
2023-05-26 13:41:10
| 1
| 849
|
abinitio
|
76,341,366
| 8,869,570
|
How to generalize an inherited method call and a composed method call
|
<p>At work, I'm running into an issue where a piece of code takes in a class object, <code>obj</code>, where the object can be one of multiple classes. The code makes the following call:</p>
<pre><code>obj.compute()
</code></pre>
<p>For one of the classes, <code>MyClass</code>, I made a refactorization from using inheritance to composition. Previously, that class would have inherited <code>compute()</code> from one of its parent classes, but now it's accessed using <code>obj.sub.compute()</code>, so this breaks the above call because the rest of the classes still have a <code>compute()</code> method.</p>
<p>I can think of a couple solutions to this problem:</p>
<ol>
<li>In <code>MyClass</code>, after initialization the composition in its constructor, we can do:</li>
</ol>
<pre><code>self.compute = self.sub.compute
</code></pre>
<ol start="2">
<li>In the call sites for <code>obj.compute()</code>, we can do a type check. When <code>obj</code> is <code>MyClass</code>, we can call <code>obj.sub.compute()</code></li>
</ol>
<p>I don't know how appropriate these solutions are? Are there are alternatives?</p>
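<p>A third option I can think of (with hypothetical names below) is to keep a thin forwarding method on <code>MyClass</code>, so every call site keeps using <code>obj.compute()</code> unchanged:</p>

```python
class Sub:
    """Stand-in for the composed helper that now owns compute()."""
    def compute(self):
        return 42

class MyClass:
    def __init__(self):
        self.sub = Sub()

    def compute(self):
        # explicit delegation keeps the old public interface intact
        return self.sub.compute()

print(MyClass().compute())  # 42
```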
|
<python><inheritance><composition>
|
2023-05-26 13:39:51
| 1
| 2,328
|
24n8
|
76,341,333
| 512,480
|
Coloring the background of a ttk button
|
<p>I got some helpful code from GeeksForGeeks and modified it slightly. It colors the text, but I wanted something different: to make the background blue. I tried what I hoped would work, but apparently <code>background</code> means "what the button looks like when the window is in the background", and even that doesn't work: the text just becomes black when the window is in the background. I have looked through all the documentation I could find, and there is no clear reference. Is this doable?</p>
<pre><code>from tkinter import *
from tkinter.ttk import *
# Create Object
root = Tk()
root.geometry('1000x600')
style = Style()
style.configure('W.TButton', font =
('calibri', 10, 'bold', 'underline'),
foreground = 'red', background = 'green')
''' Button 1'''
btn1 = Button(root, text = 'Quit !',
style = 'W.TButton',
command = root.destroy)
btn1.grid(row = 0, column = 3, padx = 100)
''' Button 2'''
btn2 = Button(root, text = 'Click me !', command = None)
btn2.grid(row = 1, column = 3, pady = 10, padx = 100)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/HlA3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlA3I.png" alt="enter image description here" /></a></p>
|
<python><tkinter><button><ttk>
|
2023-05-26 13:36:15
| 1
| 1,624
|
Joymaker
|
76,341,318
| 2,304,575
|
Calling MPI subprocess within python script run from SLURM job
|
<p>I am having trouble launching a SLURM job calling a <code>mpirun</code> subprocess from a python script. Inside the python script (let's call it <code>script.py</code>) I have this <code>subprocess.run</code>:</p>
<pre><code> import subprocess
def run_mpi(config_name, np, working_dir):
data_path = working_dir + "/" + config_name
subprocess.run(
[
"mpirun -np "
+ np
+ " "
+ working_dir
+ "/spk_mpi -echo log < "
+ data_path
+ "/in.potts"
],
# mpirun -np 32 spk_mpi -echo log < /$PATH/in.potts
check=True,
stderr=subprocess.PIPE,
universal_newlines=True,
stdout=subprocess.PIPE,
shell=True,
)
</code></pre>
<p>I then execute the script by submitting a SLURM job to a cluster node by something like:</p>
<pre><code>#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks=32
#SBATCH --time=2-00:00:00 # Time limit hrs:min:sec
#SBATCH --partition=thin
python script.py --working_dir=$PATH --np=$SLURM_NTASKS
</code></pre>
<p>but somehow the subprocess is never executed. I also tried changing the call to <code>shell=False</code>, but got <code>returned non-zero exit status 1</code> (I might be doing something wrong when building the argument list).</p>
<p>Note that if I don't submit the script as a job I am able to execute the subprocess; this only happens with the <strong>batch job</strong> - if I first allocate resources with <code>salloc</code> and then run the job interactively I don't run into this issue either.</p>
<p>I'm not 100% sure, but it might be that when spawning a subprocess, that process doesn't have the SLURM configuration variables passed properly, so it doesn't know over which nodes to parallelize.</p>
<p>Any hint how to fix that?</p>
<p>UPDATE: I could fix it by calling <code>mpirun</code> directly from the batch file. As its input changes according to a path indicated in a <code>config_file</code>, I solved it by reading the file from the command line:</p>
<pre><code>#SBATCH --ntasks=32
while IFS= read -r line; do
path="$(echo -e "${line}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
echo "Processing: $path/in.potts_am_IN100_3d"
mpirun -np 32 ${SPPARKS}/spk_mpi -echo log < ${SPPARKS}/${path}/in.potts
done < "$config_file"
</code></pre>
|
<python><subprocess><mpi><slurm><hpc>
|
2023-05-26 13:34:49
| 0
| 692
|
Betelgeuse
|
76,341,244
| 3,870,664
|
Ruff Ignore Inline or Function Rule Check
|
<p>I am using <code>ruff==0.0.265</code> I have a single function I expect to have complexity and do not wish to change that.</p>
<pre><code>src/exceptions.py:109:5: C901 `convert_py4j_exception` is too complex (11 > 10)
src/exceptions.py:109:5: PLR0911 Too many return statements (10 > 6)
</code></pre>
<p>How do I ignore a specific rule with ruff on a single function, <code>convert_py4j_exception()</code>? I do not want to turn it off completely.</p>
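<p>To illustrate the scope I want, something like a suppression attached to the function itself. The stub below is hypothetical; a <code># noqa</code> comment on the <code>def</code> line is the per-line suppression mechanism I have seen, and these rules are reported at the definition line:</p>

```python
# Hypothetical stub; the real function has the complex branching.
def convert_py4j_exception(exc):  # noqa: C901, PLR0911
    if exc is None:
        return None
    return str(exc)

print(convert_py4j_exception("boom"))  # boom
```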
|
<python><python-3.x><ruff>
|
2023-05-26 13:25:44
| 1
| 1,508
|
vfrank66
|
76,341,205
| 5,889,169
|
Python - Pass a function as argument/parameter to create a dynamic script
|
<p>I have a python script that reads messages from a Kafka topic, run some custom filtering, and produce a new message to another Kafka topic.</p>
<p>Currently the script accepts 2 arguments: <code>--source_topic</code> and <code>--target_topic</code>. script pseudo code:</p>
<pre><code>for each message in source_topic:
is_fit = check_if_message_fits_target_topic(message)
if is_fit:
produce(target_topic, message)
</code></pre>
<p>and I run my script like:
<code>python3 my_script.py --source_topic someSourceTopic --target_topic someTargetTopic </code></p>
<hr />
<p>My wish is to make the function <code>check_if_message_fits_target_topic</code> dynamic, so I can run the same script on demand with different custom parameters.</p>
<p>I'm using <code>argparse</code> to manage the topic-name arguments. What is the best way to pass a whole function as an argument?</p>
<p>Just for the sake of the example, I have a running app that applies:</p>
<pre><code>def check_if_message_fits_target_topic(message):
values = message.value
if values['event_name'] == 'some_event_name':
return True
return False
</code></pre>
<p>I want to build it in a generic way so I will be able to push some other custom logic, for example:</p>
<pre><code>def check_if_message_fits_target_topic(message):
values = message.value
yesterday = datetime.date.today() - datetime.timedelta(days=1)
if values['created_at'] > yesterday:
return True
return False
</code></pre>
<p><code>check_if_message_fits_target_topic</code> should be able to do anything I pass it, as long as it returns True or False.</p>
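<p>The shape I have in mind (illustrative names; <code>operator:truth</code> just stands in for a real filter module) is passing a <code>"module:function"</code> string on the command line and resolving it at startup:</p>

```python
import argparse
import importlib

def resolve_filter(spec):
    """Turn a 'module:function' string into the callable itself."""
    module_name, func_name = spec.split(":")
    return getattr(importlib.import_module(module_name), func_name)

parser = argparse.ArgumentParser()
parser.add_argument("--filter", default="operator:truth")
args = parser.parse_args(["--filter", "operator:truth"])

check_if_message_fits_target_topic = resolve_filter(args.filter)
print(check_if_message_fits_target_topic(1))  # True
print(check_if_message_fits_target_topic(0))  # False
```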
|
<python><apache-kafka>
|
2023-05-26 13:20:50
| 1
| 781
|
shayms8
|
76,341,124
| 2,463,948
|
scipy.signal not defined, but works after importing skimage
|
<p>I would like to use <code>scipy.signal.convolve2d()</code> function from <a href="https://scipy.org/" rel="nofollow noreferrer">SciPy</a>, but <code>signal</code> is undefined:</p>
<pre><code>>>> import scipy
...
>>> conv = scipy.signal.convolve2d(data, kernel, mode="same")
Error: Traceback (most recent call last):
File "test.py", line n, in <module>
conv = scipy.signal.convolve2d(data, kernel, mode="same")
AttributeError: module 'scipy' has no attribute 'signal'.
</code></pre>
<p>But when I add skimage import:</p>
<pre><code>from skimage.morphology import square
</code></pre>
<p>or</p>
<pre><code>from skimage.morphology import disk
</code></pre>
<p>It suddenly starts to be defined, and works fine. Any ideas why, and how to fix it properly so it wouldn't need the unused import? Skimage is a totally different thing, not related (at least in theory).</p>
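<p>For reference, the explicit subpackage import that I expected not to need (and which, I assume, skimage happens to trigger internally) looks like this:</p>

```python
import numpy as np
import scipy.signal  # importing the subpackage makes scipy.signal available

data = np.ones((4, 4))
kernel = np.ones((2, 2))
conv = scipy.signal.convolve2d(data, kernel, mode="same")
print(conv.shape)  # (4, 4)
```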
<p>Lib versions:</p>
<pre><code>scikit-image 0.19.2 py37hf11a4ad_0 anaconda
scikit-learn 1.0.2 py37hf11a4ad_1
scipy 1.7.3 py37h0a974cb_0
</code></pre>
<p>Python version:</p>
<pre><code>Python 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)]
</code></pre>
|
<python><python-3.x><scipy>
|
2023-05-26 13:10:48
| 1
| 12,087
|
Flash Thunder
|
76,340,986
| 14,860,526
|
SSLError in python requests due to self-signed certificate
|
<p>I'm working for a company that has a an internal network. When I connect to the website <em>https://engine.my_company.com</em> through the browser everything is ok, even if I look at the certificates in the browser they seem legit. However if I run a request to the website with python:</p>
<pre><code>url = "https://engine.my_company.com"
response = requests.get(url, verify=True)
</code></pre>
<p>I get:</p>
<blockquote>
<p>(Caused by SSLError(SSLCertVerificationError(1, '[SSL:
CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get
issuer certificate (_ssl.c:997)')))</p>
</blockquote>
<p>I know I can set verify=False to make it work, but I would rather solve the issue.
I tried saving the 3 certificates that I see in the browser:</p>
<p><a href="https://i.sstatic.net/6mITo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6mITo.png" alt="enter image description here" /></a></p>
<p>both as single certificate and as chain certificate:</p>
<p><a href="https://i.sstatic.net/ixF3I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ixF3I.png" alt="enter image description here" /></a></p>
<p>and modify the code in this way:</p>
<pre><code>response = requests.get(url, verify="/path_to_cert/certificate_1.cer")
response = requests.get(url, verify="/path_to_cert/certificate_2.cer")
response = requests.get(url, verify="/path_to_cert/certificate_3.cer")
</code></pre>
<p>but I keep getting the same error.</p>
<p>I even tried to copy/paste the content of the three certificates into cacert.pem (a solution I don't like) located at:</p>
<pre><code>import certifi
certifi.where()
</code></pre>
<p>but that also failed.</p>
<p>I also tried setting the environment variable REQUESTS_CA_BUNDLE pointing to the chain of my certificates but still got the same error.</p>
<p>Any idea on how I could solve this?</p>
<p>Is this the right way to get the certificates (through the browser I mean)?</p>
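<p>A sketch of building a single CA bundle file by concatenating the exported PEM certificates, which is what requests expects for <code>verify=</code> (file names and contents here are placeholders standing in for the real exported certificates):</p>

```python
from pathlib import Path
import tempfile

# Stand-in PEM files; in practice these would be the three certificates
# exported from the browser (leaf, intermediate, root).
tmp = Path(tempfile.mkdtemp())
names = ["leaf.pem", "intermediate.pem", "root.pem"]
for i, name in enumerate(names):
    (tmp / name).write_text(
        "-----BEGIN CERTIFICATE-----\nDUMMY{}\n-----END CERTIFICATE-----\n".format(i)
    )

# One file containing all certificates back-to-back is a valid bundle:
bundle = tmp / "chain.pem"
bundle.write_text("".join((tmp / n).read_text() for n in names))
print(bundle.read_text().count("BEGIN CERTIFICATE"))  # 3

# requests.get(url, verify=str(bundle))  # then point verify= at the bundle
```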
|
<python><python-requests><ssl-certificate>
|
2023-05-26 12:49:24
| 1
| 642
|
Alberto B
|
76,340,978
| 3,182,044
|
Relative import of a module with a dependency
|
<p>I have a python script that imports another class called <code>myClass</code> from a relative path. Next to <code>myClass.py</code> lies a csv-file from which a dataframe is generated when calling <code>myClass()</code>. This dependency on the file forces me to change the directory to the relative path when creating an instance of myClass:</p>
<pre><code>import os, sys

currentPath = os.getcwd()
libPath = "../../../ClassFolder/"  # relative folder containing myScript.py
sys.path.append(libPath)
os.chdir(libPath)
from myScript import myClass
myObj = myClass()
os.chdir(currentPath)
</code></pre>
<p>Is there a way to do this without having to manually jump directories?</p>
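<p>A self-contained sketch of one possible fix: if <code>myClass</code> resolves the CSV relative to its own file via <code>__file__</code> instead of the current working directory, the <code>os.chdir</code> dance becomes unnecessary. The module and file names below are stand-ins built on the fly so the snippet runs on its own:</p>

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway stand-in for the ClassFolder package; in the real
# project only the Path(__file__) line inside the class matters.
pkg = Path(tempfile.mkdtemp())
(pkg / "data.csv").write_text("a,b\n1,2\n")
(pkg / "my_script_demo.py").write_text(
    "from pathlib import Path\n"
    "class myClass:\n"
    "    def __init__(self):\n"
    "        here = Path(__file__).resolve().parent\n"
    "        self.text = (here / 'data.csv').read_text()\n"
)
sys.path.append(str(pkg))

from my_script_demo import myClass

obj = myClass()  # works regardless of the current working directory
print(obj.text.splitlines()[0])  # a,b
```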
|
<python><python-import>
|
2023-05-26 12:48:06
| 0
| 345
|
dba
|
76,340,960
| 21,346,793
|
CUDA to docker-container
|
<p>I need to build a Docker image of my server, but it only works with CUDA. How can I add CUDA support in my Dockerfile?</p>
<pre><code>FROM python:3.10
ENV FLASK_RUN_PORT=5000
RUN sudo nvidia-ctk runtime configure # Here
COPY . /app
WORKDIR /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["python", "server.py"]
</code></pre>
<p>I tried to do it with the line marked above, but it doesn't work. Please help.</p>
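<p>Not an authoritative fix, just a sketch: <code>nvidia-ctk runtime configure</code> is a host-side command (it configures the Docker daemon on the machine, not the image), so it does not belong in a Dockerfile. A common approach is to base the image on an NVIDIA CUDA image and grant the GPU at run time; the CUDA version and install steps below are assumptions to adapt:</p>

```dockerfile
# Sketch: CUDA userspace libraries come from the base image; the GPU itself
# is granted at run time with `docker run --gpus all`.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
ENV FLASK_RUN_PORT=5000
WORKDIR /app
COPY . /app
RUN pip3 install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["python3", "server.py"]
```

<p>The container would then be started with <code>docker run --gpus all ...</code>, which requires the NVIDIA Container Toolkit to be installed on the host (that is where <code>nvidia-ctk runtime configure</code> is run, once).</p>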
|
<python><docker><deployment><dockerfile>
|
2023-05-26 12:44:24
| 1
| 400
|
Ubuty_programmist_7
|
76,340,789
| 7,678,074
|
How to avoid/stop webpage flickering with python selenium firefox
|
<p>I am trying to scrape a webpage to download some images. For this I need to:</p>
<ul>
<li>get image elements</li>
<li>loop through them and download each one, where download means:
<ul>
<li>right-click</li>
<li>press download button</li>
</ul>
</li>
<li>scroll down the page to get new elements</li>
</ul>
<p>However, after some hundreds of iterations I occasionally see a weird behaviour when the next element is at the very beginning of the page. In fact, when I click on the element with <code>image.click()</code>, the driver starts flickering. This of course breaks the subsequent processing with actions to right-click and download.</p>
<p>Apologies in advance for the sparse details. Unfortunately I am not able to share reproducible code/examples, since the repository is accessible only through my credentials.
I have attached the code I am using and a gif to visualize the problem.
As you can see, the webpage is static at first (bottom); then, when I run <code>image.click()</code> (top), the webpage starts flickering (bottom).</p>
<pre><code>driver.get(website_url)

SLEEP_START = 3
SLEEP_LOAD = 0.3

time.sleep(SLEEP_START)
folder_element = driver.find_element(
    By.XPATH, '//div[@class="material-list-line-1" and text()="INFN_New"]'
)
action_chains = ActionChains(driver)
action_chains.double_click(folder_element).perform()
time.sleep(SLEEP_START)

action = ActionChains(driver)

def download_image(image, chain_sleep=0):
    # right click and download
    try:
        action.context_click(image).pause(chain_sleep).send_keys(Keys.TAB).pause(
            chain_sleep
        ).send_keys(Keys.ARROW_UP).pause(chain_sleep).send_keys(Keys.RETURN).perform()
    except MoveTargetOutOfBoundsException as e:
        print("Caught OOB exception at:", image.text)
        print("\n\n", e)

def download_current_images(images, chain_sleep=0):
    downloaded = []
    a = 0  # debug flag, set after recovering from an opened preview
    for i, image in enumerate(tqdm(images), start=1):
        if image.text == "RT366S1C3R1_r1.JPG":
            print("Here we go ...")  # breakpoint here, since it starts flickering
            image.click()
            if a:
                print("Yes, I am starting from the right place")
                raise NotImplementedError()
        # scrolling every 10 files
        if (i % 10) == 0:
            # print('Scrolling at', image.text)
            driver.execute_script("arguments[0].scrollIntoView(true)", image)
            time.sleep(SLEEP_LOAD)
        try:
            image.click()
        except ElementClickInterceptedException as e:
            print("Caught click exception at:", image.text)
            print("\n\n", e)
            try:
                # CASE 1: opened preview -> roll back to list view, then run delayed
                close_preview_button = driver.find_element(
                    By.XPATH,
                    "/html/body/div[1]/div/div[3]/div/div[2]/div[1]/div/div[1]/div[1]/button/div/span",
                )
                time.sleep(5)
                close_preview_button.click()
                image = images[i - 2]
                image.click()
                download_image(image, chain_sleep=0.5)
                print("correctly downloaded", image.text)
                print("see if I continue from right place to avoid double download")
                a = 1
                continue
            except NoSuchElementException as e:
                # CASE 2: element out of bounds -> scroll first then click
                # print('It is not a problem of preview...')
                # print('Try scrolling instead at:', image.text)
                driver.execute_script("arguments[0].scrollIntoView(true)", image)
                time.sleep(SLEEP_LOAD)
                image.click()
        # download_button = driver.find_element(By.XPATH, "/html/body/div[1]/div/span/div/div/div/div[1]/span/div/div/div")
        download_image(image)
        downloaded.append(image.text)
    return downloaded

def _get_new_image_elements(already_downloaded: list):
    icons = driver.find_elements(
        By.XPATH, '//div[contains(@class, "material-list-line")]'
    )
    # only images
    image_elements = [icon for icon in icons if (icon.text.endswith("JPG"))]
    # only new elements
    image_elements = [
        image for image in image_elements if (image.text not in already_downloaded)
    ]
    return image_elements

if __name__ == "__main__":
    # get image elements of the first page
    page_elements = _get_new_image_elements(already_downloaded=[])
    # download
    print("downloading", len(page_elements), "images")
    downloaded_image_names = download_current_images(page_elements)
    # scroll
    driver.execute_script("arguments[0].scrollIntoView();", page_elements[-1])
    time.sleep(SLEEP_LOAD)
    # get new image elements and their names
    new_page_elements = _get_new_image_elements(downloaded_image_names)
    check_names = [*map(lambda x: x.text, new_page_elements)]
    # only run until new images are available
    while set(check_names).difference(downloaded_image_names) != set():
        # download new images
        print("downloading", len(new_page_elements), "images")
        new_downloads = download_current_images(new_page_elements)
        downloaded_image_names = downloaded_image_names + new_downloads
        time.sleep(SLEEP_LOAD)
        # scroll
        new_page_elements[-1].click()
        # time.sleep(SLEEP_LOAD)
        driver.execute_script("arguments[0].scrollIntoView();", new_page_elements[-1])
        time.sleep(SLEEP_LOAD)
        # update image elements
        new_page_elements = _get_new_image_elements(downloaded_image_names)
        check_names = [*map(lambda x: x.text, new_page_elements)]
</code></pre>
<p>Any idea how to solve this? I'd also be happy with hints about why this happens or what steps I could try in order to debug it :) <a href="https://i.sstatic.net/zdWjp.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zdWjp.gif" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping><firefox>
|
2023-05-26 12:24:11
| 0
| 936
|
Luca Clissa
|
76,340,729
| 3,813,064
|
Persisting TemporaryFile in Python without rewriting it | Call linkstat in python on fd
|
<p>I have a library that provides me with a <code>TemporaryFile</code> object which I would like to persist on the file system where it was created, <strong>without rewriting it</strong>: the file could be quite large and performance is critical. I am working in a Linux/POSIX environment.</p>
<p>The <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryFile" rel="nofollow noreferrer">tempfile documentation</a> explains that the temporary file is created using the <code>O_TMPFILE</code> attribute. Doing this the file has no visible file name in the directory listing.</p>
<p>According to the <a href="https://manpages.debian.org/testing/manpages-dev/open.2.en.html#O_TMPFILE" rel="nofollow noreferrer">Linux manpage about O_TMPFILE</a> it is possible to use the C call <code>linkat(fd, "", AT_FDCWD, "/path/for/file", AT_EMPTY_PATH);</code> to persist the file.</p>
<p>How can I achieve this in Python?</p>
<p><em>(I am aware that Python probably unlinks the file once the file object is closed. But unlinking, as the name suggests, doesn't delete the file itself, just its record in the directory listing, so the second directory entry should remain.)</em></p>
<h3>What I have tried</h3>
<p>Unfortunately <code>os.link</code>, even though it uses <code>linkat</code> in the background, does not accept a file object or file descriptor as its source.</p>
<pre class="lang-py prettyprint-override"><code>import tempfile
import os
# Ensure we save tempfile to the correct block device (the same we want to create the hardlink at)
tempfile.tempdir = "/home/user/tmp"
file = tempfile.TemporaryFile()
# Add some content to the file, so it isn't empty
file.write(b"Das ist ein TEst")
os.link(file, "/home/user/tmp/test.txt")
# -> TypeError: link: src should be string, bytes or os.PathLike, not BufferedRandom

os.link(file.raw, "/home/user/tmp/test.txt")
# -> TypeError: link: src should be string, bytes or os.PathLike, not FileIO

os.link(file.raw.fileno(), "/home/user/tmp/test.txt")
# -> TypeError: link: src should be string, bytes or os.PathLike, not int
</code></pre>
<p>It seems I need to call <code>linkat</code> directly, but I don't know how.
Is this possible without compiling a Python C extension? Maybe using <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow noreferrer">ctypes</a>?</p>
<h3>Background</h3>
<p>Before people ask why I do not simply use <a href="https://stackoverflow.com/a/36968297/3813064"><code>NamedTemporaryFile(delete=False)</code></a>: I don't have control over the creation of the temporary file.
It is created in Starlette during <a href="https://www.starlette.io/requests/#request-files" rel="nofollow noreferrer">FileUploads</a> and I want to directly use it <a href="https://stackoverflow.com/a/73443824/3813064">without the hassle of creating my own stream file handler</a>.
Starlette already handles the streaming and rendering very well, it just <a href="https://github.com/encode/starlette/issues/388" rel="nofollow noreferrer">gives no possibility</a> <a href="https://github.com/encode/starlette/issues/849" rel="nofollow noreferrer">to change the details of the rendering behaviour</a>.</p>
<p>The <a href="https://manpages.debian.org/testing/manpages-dev/open.2.en.html#O_TMPFILE" rel="nofollow noreferrer">O_TMPFILE documentation</a> states this option is used for two main reason, one of them:</p>
<blockquote>
<p>Creating a file that is initially invisible, which is
then populated with data and adjusted to have
appropriate filesystem attributes (fchown(2),
fchmod(2), fsetxattr(2), etc.) before being atomically
linked into the filesystem in a fully formed state
(using linkat(2) as described above).</p>
</blockquote>
<p>Which is exactly what I want to achieve in my use case.</p>
|
<python><python-3.x><posix><ctypes><cpython>
|
2023-05-26 12:15:41
| 0
| 2,711
|
Kound
|
76,340,691
| 13,158,157
|
Power Automate Flow HTTP request: know when flow is done
|
<p>I am triggering my Power Automate flow with an HTTP request sent through Python's requests module. The response I get is 202, which means the request was accepted for processing.</p>
<p>How can I make Python wait until Power Automate is finished? Alternatively, is there a way to make Power Automate send an additional response when the flow is finished?</p>
<p>EDIT the code for triggering Flow</p>
<pre><code>>>> flow0_url = 'url_to_trigger_pa_flow'
>>> requests.post(flow0_url)
Out[10]: <Response [202]>
</code></pre>
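<p>For what it's worth, a generic polling sketch: if the flow exposes some way to check completion (for example a status endpoint, a file drop, or a "Response" action; the exact mechanism is an assumption here), Python can poll it until done. The demo below uses a plain callable standing in for the real status request so it runs on its own:</p>

```python
import time

def wait_for_completion(check, timeout=30, interval=0.01):
    # Polls `check` (any zero-argument callable returning True when the
    # flow is done) until it succeeds or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("flow did not finish in time")

# Demo with a counter standing in for a real status request:
state = {"calls": 0}

def fake_status():
    state["calls"] += 1
    return state["calls"] >= 3

result = wait_for_completion(fake_status)
print(result, state["calls"])  # True 3
```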
|
<python><http><python-requests><power-automate>
|
2023-05-26 12:11:19
| 0
| 525
|
euh
|
76,340,611
| 6,195,489
|
Get information from xml with element tree
|
<p>I am trying to use elementTree to get at information in an xml response.</p>
<p>The response <code>xmlresponse.xml</code> looks like:</p>
<pre><code><result xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://somewhere.co.uk/">
    <count>1</count>
    <pageInformation>
        <offset>0</offset>
        <size>10</size>
    </pageInformation>
    <items>
        <person uuid="1">
            <name>
                <firstName>John</firstName>
                <lastName>Doe</lastName>
            </name>
            <ManagedByRelations>
                <managedByRelation Id="1234">
                    <manager uuid="2">
                        <name formatted="false">
                            <text>Jane Doe</text>
                        </name>
                    </manager>
                    <managementPercentage>30</managementPercentage>
                    <period>
                        <startDate>2019-09-26</startDate>
                    </period>
                </managedByRelation>
                <managedByRelation Id="1234">
                    <manager uuid="3">
                        <name formatted="false">
                            <text>Joe Bloggs</text>
                        </name>
                    </manager>
                    <managementPercentage>70</managementPercentage>
                    <period>
                        <startDate>2019-09-26</startDate>
                    </period>
                </managedByRelation>
            </ManagedByRelations>
            <fte>0.0</fte>
        </person>
    </items>
</result>
</code></pre>
<p>How do I get the information contained using elementTree, for example how can I retrieve the list of managers names, ids and start dates?</p>
<p>If I do:</p>
<pre><code>from xml.etree.ElementTree import Element, ParseError, fromstring, tostring, parse

tree = parse('xmlresponse.xml')
root = tree.getroot()

for manager in root.findall('managedByRelation'):
    print(manager)
</code></pre>
<p>The findall() doesn't return anything. I know I could do a <code>list(root.iter())</code> to iterate through everything in the tree, but I want to know why <code>root.findall()</code> isn't working as I expect.</p>
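<p>A likely explanation, illustrated below: <code>findall()</code> only matches direct children of the element it is called on, and <code>managedByRelation</code> is nested several levels deep; a path starting with <code>.//</code> searches all descendants. A self-contained check with a trimmed version of the XML:</p>

```python
import xml.etree.ElementTree as ET

xml = """<result>
<items>
  <person uuid="1">
    <ManagedByRelations>
      <managedByRelation Id="1234">
        <manager uuid="2"><name><text>Jane Doe</text></name></manager>
        <period><startDate>2019-09-26</startDate></period>
      </managedByRelation>
      <managedByRelation Id="1234">
        <manager uuid="3"><name><text>Joe Bloggs</text></name></manager>
        <period><startDate>2019-09-26</startDate></period>
      </managedByRelation>
    </ManagedByRelations>
  </person>
</items>
</result>"""

root = ET.fromstring(xml)
print(len(root.findall("managedByRelation")))  # 0 -- only direct children are searched
managers = [
    (rel.get("Id"), rel.find(".//text").text, rel.find(".//startDate").text)
    for rel in root.findall(".//managedByRelation")
]
print(managers)  # [('1234', 'Jane Doe', '2019-09-26'), ('1234', 'Joe Bloggs', '2019-09-26')]
```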
|
<python><xml><elementtree>
|
2023-05-26 12:00:49
| 1
| 849
|
abinitio
|
76,340,526
| 1,341,855
|
How to merge 2 datasets with Python / Pandas?
|
<p>I have two pandas datasets:</p>
<pre><code>      Name  A  B
1  Michael  1  2
2    Peter  3  4
<p>and</p>
<pre><code>    Name  C  D
1  Peter  8  9
2   John  5  6
<p>How can I merge these datasets to get the following result:</p>
<pre><code>      Name  A  B  C  D
1  Michael  1  2  -  -
2    Peter  3  4  8  9
3     John  -  -  5  6
<p>The value for "-" should either be 0 or NaN.</p>
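<p>For what it's worth, a sketch of an outer merge on "Name", which keeps rows from both sides and fills the missing cells with NaN (<code>.fillna(0)</code> would give zeros instead):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Name": ["Michael", "Peter"], "A": [1, 3], "B": [2, 4]})
df2 = pd.DataFrame({"Name": ["Peter", "John"], "C": [8, 5], "D": [9, 6]})

# how="outer" keeps names that appear in either frame; unmatched columns
# become NaN on the side where the name is missing.
merged = df1.merge(df2, on="Name", how="outer")
print(merged)
```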
|
<python><pandas><numpy>
|
2023-05-26 11:49:24
| 1
| 538
|
Michael
|
76,340,511
| 6,876,149
|
gurobi basic example how to print the total minimal cost?
|
<p>I am using gurobipy to solve a transportation problem in python. Transportation Problem, is a linear programming (LP) problem that identifies an optimal solution for transporting one type of product from sources (i) to destinations (j) at the minimum cost (z). You can read more about it in <a href="https://www.linkedin.com/pulse/transportation-problem-capacity-allocation-python-using-gaurav-dubey/" rel="nofollow noreferrer"> this link </a></p>
<pre><code>import gurobipy as gp
from gurobipy import GRB
import csv
import numpy

m = 2
n = 4

# Define data
supply = [100000, 100000]  # Supply nodes
demand = [40500, 22300, 85200, 47500]  # Demand nodes
cost = [
    [52, 32, 11, 69],
    [45, 84, 76, 15],
]  # Cost of transportation from supply i to demand j

theModel = gp.Model()

# Define decision variables
flow = theModel.addVars(m, n, lb=0, vtype=GRB.INTEGER, name="flow")

# Define supply constraints
for i in range(m):
    theModel.addConstr(gp.quicksum(flow[i, j] for j in range(n)) <= supply[i], name=f"supply_{i}")

# Define demand constraints
for j in range(n):
    theModel.addConstr(gp.quicksum(flow[i, j] for i in range(m)) >= demand[j], name=f"demand_{j}")

# Define objective function
theModel.setObjective(gp.quicksum(flow[i, j] * cost[i][j] for i in range(m) for j in range(n)), sense=GRB.MINIMIZE)

theModel.optimize()

# Print results
if theModel.status == GRB.OPTIMAL:
    print(f"Optimal solution found with objective value {theModel.objVal:.2f}")
    for i in range(m):
        for j in range(n):
            if flow[i, j].x > 0:
                print(f"Flow from supply node {i+1} to demand node {j+1}: {flow[i, j].x:.0f}")
else:
    print("No solution found.")
</code></pre>
<p>Like this I can print the optimal shipment sizes from sources to destinations, but how do I print the TOTAL minimal cost of the operation? I know that <code>{flow[i, j].x:.0f}</code> is the supply transferred, so how can I print the total cost? Should I add up the cost of each individual shipment?</p>
<p>Output from client:</p>
<pre><code>Optimal solution found (tolerance 1.00e-04)
Best objective 4.575800000000e+06, best bound 4.575800000000e+06, gap 0.0000%
Optimal solution found with objective value 4575800.00
Flow from supply node 1 to demand node 2: 14800
Flow from supply node 1 to demand node 3: 85200
Flow from supply node 2 to demand node 1: 40500
Flow from supply node 2 to demand node 2: 7500
Flow from supply node 2 to demand node 4: 47500
</code></pre>
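<p>Note that <code>theModel.objVal</code> in the output above already is the total minimal cost, since the objective is exactly the sum of unit cost times flow. As a sanity check, it can be recomputed by hand from the printed flows and the cost matrix (pure Python, no gurobipy needed):</p>

```python
cost = [
    [52, 32, 11, 69],
    [45, 84, 76, 15],
]
# (supply node, demand node, units) taken from the solver output, 0-based:
flows = [(0, 1, 14800), (0, 2, 85200), (1, 0, 40500), (1, 1, 7500), (1, 3, 47500)]

# Sum cost-per-unit * units over every shipped lane:
total = sum(cost[i][j] * units for i, j, units in flows)
print(total)  # 4575800, matching the reported objective value
```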
|
<python><gurobi>
|
2023-05-26 11:47:11
| 1
| 2,826
|
C.Unbay
|
76,340,245
| 11,720,193
|
GET file from URL and store in S3
|
<p>I am required to download a <code>.gz</code> file (using GET) from a URL, <code>uncompress</code> it and then store it in <code>S3</code>.</p>
<p>I have written the following code to download file to a directory but I am struggling to uncompress it and store it in S3.</p>
<pre><code>url = "https://api.botify.com/v1/jobs/"

def download_file(url, folder_name):
    local_filename = url.split('/')[-1]
    path = os.path.join(folder_name, local_filename)
    with requests.get(url, stream=True) as r:
        with open(path, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
</code></pre>
<p>How can I download the file from the given URL, unzip it and store it in S3?</p>
<p>Can someone please help.</p>
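<p>A sketch of the decompress step with <code>gzip</code>, done fully in memory; the boto3 upload at the end is shown but commented out so the snippet runs offline, and its bucket/key names are assumptions:</p>

```python
import gzip
import io

# Build a small in-memory stand-in for the downloaded .gz response body:
compressed = gzip.compress(b"hello from the gz file")

# Decompress without touching the local filesystem:
with gzip.open(io.BytesIO(compressed), "rb") as gz:
    payload = gz.read()
print(payload)  # b'hello from the gz file'

# Hypothetical upload step (bucket and key names are placeholders):
# import boto3
# boto3.client("s3").put_object(Bucket="my-bucket", Key="file.txt", Body=payload)
```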
|
<python><amazon-web-services><amazon-s3><python-requests>
|
2023-05-26 11:12:28
| 0
| 895
|
marie20
|
76,340,102
| 6,389,268
|
Reverse pandas string / reverse extract pandas DF
|
<p>Good day,</p>
<p>I got files with path in column thus:</p>
<pre><code>pd.DataFrame({'path':[
'C:/some_1_path/file_1.zip',
'C:/some_1_path/file_2.zip']
</code></pre>
<p>I wish to extract the pattern _\d from this like this:</p>
<pre><code>'C:/some_1_path/file_1.zip'| '_1'
'C:/some_1_path/file_2.zip'| '_2'
</code></pre>
<p>Since "_1" also happens to appear in the directory name, <code>.str.extract()</code> picks that occurrence. <code>extractall()</code> would work, but requires extra steps.</p>
<p>How would one do <code>.str.extract(pattern)</code> starting from the end of the string? I could reverse the string, but that adds extra steps as well.</p>
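<p>A sketch of one way to do it without reversing: anchor the pattern to the end of the path so the "_1" inside the directory name can no longer match first (the exact regex is an assumption about the filename shape):</p>

```python
import pandas as pd

df = pd.DataFrame({"path": [
    "C:/some_1_path/file_1.zip",
    "C:/some_1_path/file_2.zip",
]})

# "(_\d+)\.[^.]*$" requires the match to sit right before the extension
# at the end of the string, so only the filename suffix can match.
df["tag"] = df["path"].str.extract(r"(_\d+)\.[^.]*$", expand=False)
print(df["tag"].tolist())  # ['_1', '_2']
```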
|
<python><pandas><dataframe><extract><reverse>
|
2023-05-26 10:54:47
| 2
| 1,394
|
pinegulf
|
76,340,028
| 166,229
|
Overwrite type hint of 3rd party library (for monkey patching)
|
<p>I am monkey-patching a class from a 3rd party library (playwright in my case) and also want to adjust the type hinting to include my adjustment. Is there a way somehow to overwrite or augment types from 3rd party libraries?</p>
<pre class="lang-py prettyprint-override"><code>##
# Monkey-patching
##
from playwright.sync_api import Locator

def poll(self):  # my custom function
    pass

# type error: poll is unknown
Locator.poll = poll

##
# Usage
##
def test_succeeds(page: Page):
    ...
    # type error: poll is unknown
    expect(page.locator('dl:has-text("Success")').poll()).to_be_visible()
</code></pre>
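<p>One workaround sketch (not the only option; a local <code>.pyi</code> stub would be another): describe the patched attribute with a <code>Protocol</code> and <code>cast</code> at the call sites, leaving playwright's own stubs untouched. <code>FakeLocator</code> stands in for the real <code>Locator</code> here so the snippet runs on its own:</p>

```python
from typing import Protocol, cast

class HasPoll(Protocol):
    def poll(self) -> "HasPoll": ...

class FakeLocator:  # stands in for playwright's Locator in this sketch
    pass

def poll(self):
    return self

FakeLocator.poll = poll  # the monkey-patch itself is unchanged

# cast() is a no-op at runtime but tells the type checker about .poll:
loc = cast(HasPoll, FakeLocator())
print(loc.poll() is loc)  # True
```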
|
<python><python-typing><pyright>
|
2023-05-26 10:43:44
| 0
| 16,667
|
medihack
|
76,339,979
| 18,018,869
|
Separate axis in two parts, give each a label and put it in a box
|
<p>My current approach does not automatically center the 'HIGH' and 'LOW' descriptions. I would like them automatically centered at 25% and 75% of corresponding axis. Another phrasing: I want the 'HIGH' label exactly centered inside its surrounding box.</p>
<p>Maybe I chose a too complicated approach... Maybe a <code>Bbox</code> would be better?</p>
<p><a href="https://i.sstatic.net/s0MS3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s0MS3.png" alt="This is the plot" /></a> <br>It should give a basic understanding of what I am trying to achieve.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>import base64
from io import BytesIO

import matplotlib.patches as mpatches
from matplotlib.figure import Figure
from matplotlib.lines import Line2D

def create_my_plot():
    # initialize figure
    fig = Figure(figsize=(7, 6))
    axs = fig.subplots()

    minx, maxx, midx = 0, 1, 0.5
    miny, maxy, midy = 0, 1, 0.5
    offsetx, offsety = 0.15, 0.15

    axs.add_patch(mpatches.Rectangle((minx, maxy), maxx, offsety, fill=False, edgecolor="black", clip_on=False, lw=0.5))
    axs.add_patch(mpatches.Rectangle((minx, miny), -offsetx, maxy, fill=False, edgecolor="black", clip_on=False, lw=0.5))
    axs.add_patch(mpatches.Rectangle((minx, midy), midx, midy, alpha=0.1, facecolor="green"))
    axs.add_patch(mpatches.Rectangle((midx, midy), midx, midy, alpha=0.1, facecolor="yellow"))
    axs.add_patch(mpatches.Rectangle((minx, miny), midx, midy, alpha=0.1, facecolor="gray"))
    axs.add_patch(mpatches.Rectangle((midx, miny), midx, midy, alpha=0.1, facecolor="red"))
    axs.add_line(Line2D(xdata=(minx - offsetx, maxx), ydata=(midy, midy), clip_on=False, color="black", lw=0.5))
    axs.add_line(Line2D(xdata=(midx, midx), ydata=(miny, maxy + offsety), clip_on=False, color="black", lw=0.5))

    # y-axis HIGH, LOW labeling
    axs.text(minx - 0.5 * offsetx, 0.25 * maxy, "LOW", fontdict={}, rotation="vertical")
    axs.text(minx - 0.5 * offsetx, 0.75 * maxy, "HIGH", fontdict={}, rotation="vertical")
    # x-axis HIGH, LOW labeling
    axs.text(0.25 * maxx, maxy + 0.5 * offsety, "LOW", fontdict={})
    axs.text(0.75 * maxx, maxy + 0.5 * offsety, "HIGH", fontdict={})

    buf = BytesIO()
    fig.savefig(buf, format="png")
    data = base64.b64encode(buf.getbuffer()).decode("ascii")
    return f"<img src='data:image/png;base64,{data}'/>"
</code></pre>
<p>Note: I am writing a little visualization for a web application. That's why I am not using the <code>pyplot</code> approach.</p>
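<p>A likely cause, judging from the code shown (an assumption, not a verified diagnosis): <code>axs.text()</code> anchors text at its bottom-left corner by default, so placing it at 25%/75% of the axis does not center it. Passing <code>ha="center", va="center"</code> makes the given coordinates the text's midpoint:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as in a web app
from matplotlib.figure import Figure

fig = Figure(figsize=(7, 6))
ax = fig.subplots()
# ha/va anchor the text at its own midpoint, so 0.25 on the axis really
# is the center of the label and of its surrounding box:
label = ax.text(0.25, 1.075, "LOW", ha="center", va="center")
print(label.get_horizontalalignment(), label.get_verticalalignment())
```

<p>The same two keyword arguments applied to the four <code>axs.text()</code> calls above should center the HIGH/LOW labels inside their boxes.</p>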
|
<python><matplotlib>
|
2023-05-26 10:35:56
| 1
| 1,976
|
Tarquinius
|
76,339,879
| 5,807,524
|
How to prevent/warn on reassignment to Python function arguments?
|
<p>I would like to prevent function arguments being reassigned in Python, like Java's <code>final</code> or C's <code>const</code> modifer.</p>
<p>For example, the following code should not be considered valid:</p>
<pre class="lang-python prettyprint-override"><code>def update_bars_in_foos(foo: Foo, bars: List[str]):
    foos = foo.get_all_foos()
    for foo in foos:  # <- this should be a linting/type error - reassignment of foo
        foo.bar = ', '.join(bars)
</code></pre>
<p>I understand that it's not possible to prevent this during runtime. Type annotations offer <a href="https://peps.python.org/pep-0591/#semantics-and-examples" rel="nofollow noreferrer"><code>typing.Final</code></a>, however it is specifically disallowed for function arguments:</p>
<blockquote>
<p>Final may only be used as the outermost type in assignments or variable annotations. Using it in any other position is an error. In particular, Final can’t be used in annotations for function arguments</p>
</blockquote>
<p><a href="https://peps.python.org/pep-0591/#semantics-and-examples" rel="nofollow noreferrer">https://peps.python.org/pep-0591/#semantics-and-examples</a></p>
<p>Since it's impossible to do on type annotations level, is there a Python linter with a rule that would prevent it? I'd be happy with an equivalent to <a href="https://eslint.org/docs/latest/rules/no-param-reassign" rel="nofollow noreferrer">ESLint's <code>no-param-reassign</code> rule</a>.</p>
|
<python>
|
2023-05-26 10:20:38
| 1
| 475
|
Patryk Koryzna
|
76,339,790
| 8,055,025
|
Python support for NSTREAM?
|
<p>Does NSTREAM (formerly known as SWIM) have Python support?
I can see the documentation and development in Java, but not in Python.</p>
|
<python>
|
2023-05-26 10:07:31
| 2
| 2,063
|
Ankit Sahay
|
76,339,607
| 1,736,407
|
Invoking a dataproc workflow from yaml file using cloud function
|
<p>I am trying to write a Google cloud function that invokes a Dataproc workflow from a YAML template stored in a storage bucket. The template must accept parameters. I have pieced together what I have so far from various sources, and I feel like I am running in circles trying to get this right.</p>
<p>The relevant bits from the function are here:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import dataproc_v1 as dataproc, storage
from google.cloud.dataproc_v1.types.workflow_templates import WorkflowTemplate, ParameterValidation
def submit_workflow(parameters, date_fmt):
    '''Initialises a DataProc workflow from a yaml file'''
    # workflow vars
    workflow_file = '{0}-app/xxx-workflow.yaml'.format(project_id)

    try:
        # create client
        client = dataproc.WorkflowTemplateServiceClient()

        # build workflow parameter map
        parameter_map = []
        for k, v in parameters.items():
            parameter_map.append(ParameterValidation(
                name=k,
                value=ParameterValidation.Value(values=[v])
            ))

        # create template
        template_name = 'projects/{0}/regions/{1}/workflowTemplates/{2}'.format(project_id, region, workflow_file)
        workflow_template = WorkflowTemplate(
            parameters=parameter_map,
            template=WorkflowTemplate.Template(id=template_name)
        )

        # create request
        workflow_request = dataproc.InstantiateWorkflowTemplateRequest(
            parent=parameters['regionUri'],
            template=workflow_template
        )

        # run workflow
        operation = client.instantiate_workflow_template(request=workflow_request)
    except Exception as e:
        message = ':x: An error has occurred invoking the workflow. Please check cloud function log.\n{}'.format(e)
        post_to_slack(url, message)
    else:
        # wait for workflow to complete
        result = operation.result()
        print(result)
        # post completion to slack
        message = 'run is complete for {}'.format(date_fmt)
        post_to_slack(url, message)
</code></pre>
<p>The current error I am getting is <code>type object 'ParameterValidation' has no attribute 'Value'</code> and I feel like I am going around in circles trying to find the best way to implement this. Any advice would be fantastic.</p>
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-dataproc><google-workflows>
|
2023-05-26 09:47:31
| 0
| 2,220
|
Cam
|
76,339,555
| 4,438,445
|
How to monkeypatch '__name__' with pytest
|
<p>I have a file 'mymodule.py' that I want to test with 100% coverage using pytest. This is the content of the file:</p>
<pre><code>def important_function():
    print('I can test this function easily.')

if __name__ == '__main__':
    print('I want to test this part.')
    assert False
</code></pre>
<p>I want to test the code under <code>if __name__ == '__main__':</code>
Since I added <code>assert False</code> this test is expected to fail.</p>
<p>I am using the <code>pytest</code> framework and I do not want to use <code>unittest</code> or anything from its family, like <code>unittest.mock</code>.</p>
<p>In my last unsuccessful attempt I was trying to monkeypatch <code>__name__</code> from <code>builtins</code>. This is the content of test file:</p>
<pre><code>from _pytest.monkeypatch import MonkeyPatch
monkeypatch = MonkeyPatch()
import builtins
monkeypatch.setattr(builtins, '__name__', '__main__')
import mymodule
</code></pre>
<p>It did not work, because it did not fail.</p>
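<p>For reference, a common workaround is to execute the module with <code>runpy</code> under the name <code>"__main__"</code> instead of monkey-patching builtins (which only changes the attribute on the builtins module, not the per-module <code>__name__</code> set at import). The sketch below builds a stand-in module on the fly so it is self-contained; the module name is illustrative:</p>

```python
import pathlib
import runpy
import sys
import tempfile

# Stand-in for mymodule.py so the snippet runs on its own:
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "mymodule_demo.py").write_text(
    "def important_function():\n"
    "    print('easy to test')\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    raise SystemExit(3)\n"
)
sys.path.insert(0, str(tmp))

code = None
try:
    # run_name='__main__' makes the guarded block execute:
    runpy.run_module("mymodule_demo", run_name="__main__")
except SystemExit as e:
    code = e.code
print(code)  # 3
```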
|
<python><pytest>
|
2023-05-26 09:40:50
| 0
| 745
|
Marcos
|
76,339,453
| 10,715,700
|
IPython web server or other alternatives?
|
<p>Does IPython have a web server mode to run code submitted through REST APIs? I couldn't find any info on this. Are there any alternatives to this?</p>
<p>My use case is similar to an online interpreter, but I want the code to run on the machine where the server is running (to be able to import packages and modules, and use files that are available on the server's path).</p>
<p>If not, how do I get started on building one? The places where I need help are:</p>
<ul>
<li>How to execute the python code I get through the API in the server? Do I just use <code>exec</code> or is there a better way to do this?</li>
<li>How do I isolate different scripts? If two scripts are running on the server, they should not interfere with each other (for example, if both scripts have a variable with the same name, the variables should be isolated and not treated as one, which would not be the case with a naive use of <code>exec</code>).</li>
</ul>
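<p>Regarding the isolation bullet above, a minimal sketch: give each script its own globals dict for <code>exec()</code>. Note this is namespace isolation only, not a security sandbox; running untrusted code safely needs OS-level isolation (containers, seccomp, etc.):</p>

```python
def run_script(source: str, namespaces: dict, script_id: str) -> dict:
    # One dict per script: exec() uses it as globals, so equally named
    # variables in different scripts never touch each other.
    ns = namespaces.setdefault(script_id, {})
    exec(source, ns)  # no sandboxing -- only suitable for trusted code
    return ns

spaces: dict = {}
run_script("x = 1", spaces, "script_a")
run_script("x = 99", spaces, "script_b")
print(spaces["script_a"]["x"], spaces["script_b"]["x"])  # 1 99
```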
|
<python>
|
2023-05-26 09:27:00
| 1
| 430
|
BBloggsbott
|
76,339,434
| 3,751,931
|
build ignores specified version value
|
<p>I am building a python project using this <code>pyproject.toml</code> file</p>
<pre><code>[tool.poetry]
name = "my_package"
version = "1.0.0"
description = "My precious"
readme = "README.md"
[tool.poetry.dependencies]
pandas = "1.4.3"
numpy = "1.22.4"
scikit-learn = "1.1.2"
xgboost = "1.6.1"
holidays = "0.14.2"
matplotlib = "3.4.3"
matplotlib-inline = "0.1.6"
numba = "0.56.4"
shap = "0.41.0"
pytest = "7.3.1"
[tool.pytest.ini_options]
testpaths = [
"tests"
]
[tool.poetry.build]
script = "build.py"
generate-setup-file=true
</code></pre>
<p>When running <code>py -m build --wheel --outdir ..\wheels\</code>
the wheel is correctly created but the filename is</p>
<blockquote>
<p>my_package-0.0.0-py3-none-any.whl</p>
</blockquote>
<p>So the version number seems to be ignored.</p>
<p>How do I correct this behavior?</p>
<hr />
<p>EDIT:
I have noticed that by default <code>build</code> uses setuptools, not poetry. However, changing the .toml file by removing ".poetry" (e.g. <code>[tool.poetry]</code> to <code>[tool]</code>) did not change the behavior.</p>
<hr />
<p>WORKING SOLUTION:</p>
<pre><code>[project]
name = "my_package"
version = "1.0.0"
description = "My precious"
readme = "README.md"
dependencies = [
"pandas==1.4.3",
"numpy==1.22.4",
"scikit-learn==1.1.2",
"xgboost==1.7.4",
"holidays==0.14.2",
"matplotlib==3.4.3",
"matplotlib-inline==0.1.6",
"numba==0.56.4",
"shap==0.41.0",
"pytest==7.3.1",
]
[tool.pytest.ini_options]
testpaths = [
"tests"
]
</code></pre>
<p>Note: another solution would have been to go with poetry but as for now xgboost does not get along with it (but from release 1.7.5 it will)</p>
|
<python><build>
|
2023-05-26 09:24:10
| 1
| 2,391
|
shamalaia
|
76,339,387
| 274,460
|
Is there a good way of referring to a collection of exceptions?
|
<p>I have a database access layer that I maintain. There are some operations that can raise a variety of exceptions, some of them local to the library and some of them builtins or from other libraries. Is there a good way to declare a collection of them so that users of the library can refer to them easily?</p>
<p>I've considered doing this:</p>
<pre><code>DatabaseAccessError = (
    requests.exceptions.ConnectionError,
    http.client.IncompleteRead,
    ConnectionRefusedError,
    DatabaseDoesNotExistError,
)
</code></pre>
<p>This then means that someone can:</p>
<pre><code>try:
    ...
except DatabaseAccessError:
    ...
</code></pre>
<p>But it causes a mess if they want to handle anything else at the same time:</p>
<pre><code>try:
    ...
except DatabaseAccessError + (KeyboardInterrupt,):
    ...
</code></pre>
<p>The other thing I've considered is that I could go around the library and carefully catch all these exceptions anywhere they are raised and then <code>raise DatabaseAccessError from e</code>. But I'm lazy and this looks error-prone. Isn't there a better way?</p>
<p>The motivation here is that I've noticed that users of my database access layer generally manage to catch <em>some</em> of these exceptions but are far from reliable in catching all of them. It's never clear whether this is intentional (they want some of the exceptions to bubble up - I think this is unlikely but maybe) or if they just haven't thought through their exception handling well enough (and, to be fair, the exception list is not very well documented).</p>
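<p>One hedged alternative to the tuple, sketched below: translate the whole collection into a single library exception at the boundary with a decorator, so the <code>raise DatabaseAccessError from e</code> wrapping is written once instead of at every raise site. The wrapped exception types here are stand-ins for the real list:</p>

```python
import functools

class DatabaseAccessError(Exception):
    """Single public exception for the library boundary."""

_WRAPPED = (ConnectionRefusedError, TimeoutError)  # stand-ins for the real list

def translates_errors(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except _WRAPPED as e:
            # Preserve the original exception as __cause__ for debugging:
            raise DatabaseAccessError(str(e)) from e
    return wrapper

@translates_errors
def query():
    raise TimeoutError("db timed out")

cause = None
try:
    query()
except DatabaseAccessError as e:
    cause = type(e.__cause__).__name__
print(cause)  # TimeoutError
```

<p>Users then catch one normal exception class (which composes cleanly with other except clauses), and the decorator keeps the translation from being forgotten at individual call sites.</p>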
|
<python><python-3.x><exception>
|
2023-05-26 09:19:26
| 1
| 8,161
|
Tom
|
76,339,232
| 14,076,103
|
Pyspark pivot with Dynamic columns
|
<p>I have Pyspark Dataframe as follows,</p>
<p><a href="https://i.sstatic.net/aWjCk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aWjCk.png" alt="enter image description here" /></a></p>
<p>I am pivoting the data based on the Month and T columns and need to produce the following output.</p>
<p><a href="https://i.sstatic.net/OiRf6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OiRf6.png" alt="enter image description here" /></a></p>
<p>Some quarters, such as q2, q3 and q4, are not present in the T column, but I need to fill them with null values. This should be dynamic, and the quarters should be in latest-first order, e.g. q4 2023 first, then q3 2023, q2 2023, q1 2023, and so on.</p>
<p>I am using the following PySpark code:</p>
<pre><code>FinalDF = df.groupBy("id","month").pivot("T").agg(
F.first("Oil"),
F.first("Gas"),
)
</code></pre>
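<p>One way to get the missing quarters as null columns is to pass an explicit, ordered value list to <code>pivot</code>, since <code>pivot</code> creates a column for every listed value even when it never occurs in T. The <code>qN YYYY</code> label format is an assumption based on the screenshots; the helper below is plain Python and the pivot call itself is only sketched in a comment:</p>

```python
# Build a latest-first list of quarter labels, e.g. "q4 2023", "q3 2023", ...
# Passing this list as pivot("T", labels) would make Spark emit one column
# per label, filled with nulls where the quarter is absent from the data.
def quarter_labels(start_year, end_year):
    labels = []
    for year in range(end_year, start_year - 1, -1):  # newest year first
        for q in range(4, 0, -1):                     # q4 down to q1
            labels.append(f"q{q} {year}")
    return labels

labels = quarter_labels(2022, 2023)
# FinalDF = df.groupBy("id", "month").pivot("T", labels).agg(...)
```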
|
<python><apache-spark><pyspark><apache-spark-sql><pivot>
|
2023-05-26 08:58:16
| 1
| 415
|
code_bug
|
76,339,162
| 1,422,096
|
pythonnet clr: how to get the signature of a function/method?
|
<p>When importing a .NET library in Python with:</p>
<pre><code>import clr
clr.AddReference(r"C:\foo.dll")
from foo import *
my_foo = Foo()
print(my_foo.my_method)
# <bound method 'my_method'>
</code></pre>
<p>I'd like to know the signature of the function (its parameters). This doesn't work:</p>
<pre><code>from inspect import signature
print(signature(my_foo.my_method))
</code></pre>
<p>It fails with:</p>
<blockquote>
<p>ValueError: callable <bound method 'GroupStatusGet'> is not supported by signature</p>
</blockquote>
<p><strong>Question: how to get the signature of a function on Python + .NET ?</strong></p>
|
<python><clr><signature><python.net><method-signature>
|
2023-05-26 08:48:15
| 1
| 47,388
|
Basj
|
76,339,153
| 8,869,003
|
makemigrations-command doesn't do anything
|
<p>My production Django server uses Postgres 12.12, Django 3.2 and Python 3.7. My test server uses exactly the same software, but Postgres 12.15.</p>
<p>After adding a new table 'etusivu_heroimg' I copied the contents of the production Postgres into my test Postgres database and ran <code>python manage.py makemigrations</code> and <code>python manage.py migrate --fake</code>.</p>
<p>The table was created and related features worked.</p>
<p>Then I checked that the production side had exactly the same code as the test server before the migration commands; even the app-specific migration directories were the same. But when I ran the migration commands on the production side, the new table wasn't created.</p>
<pre><code>(pika-env) [django@tkinfra01t pika]python manage.py makemigrations
Migrations for 'etusivu':
etusivu/migrations/0003_heroimg.py
- Create model HeroImg
(pika-env) [django@tkinfra01t pika]$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, conf, contenttypes, core, django_comments, etusivu, generic, kasitesivu, page_types, pages, redirects, sessions, sites
Running migrations:
No migrations to apply.
Your models in app(s): 'admin', 'auth', 'conf', 'contenttypes', 'core', 'django_comments', 'generic', 'page_types', 'pages', 'redirects', 'sessions', 'sites' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
(pika-env) [django@tkinfra01t pika]python manage.py makemigrations
No changes detected
</code></pre>
<p>However, the django_migrations table was changed on the production side too, and even some related permissions were seen in the database dump:</p>
<pre><code>INSERT INTO "public"."django_migrations" ("id", "app", "name", "applied") VALUES (55, 'etusivu', '0003_heroimg', '2023-05-25 14:26:29.780323+03');
INSERT INTO "public"."auth_permission" ("id", "name", "content_type_id", "codename") VALUES (267, 'Can add Herokuva', 71, 'add_heroimg');
</code></pre>
<p>So I loaded the production-side database dump into the test DB, noticed that the fault repeated, and added the missing table manually using the commands found in the database dump of the working DB. Everything worked.</p>
<p>QUESTION: What should I do to fix production? Add the missing table manually, or should I try to find a more Django-specific way to fix the problem?</p>
<p>BTW: When I started the server [with the new code and the new table before migration-commands] using <code>python manage.py runserver</code> I didn't get any warnings about migrations; just noticed that the system crashed. That happened on both sides, test and production.</p>
|
<python><django><postgresql>
|
2023-05-26 08:46:02
| 0
| 310
|
Jaana
|
76,339,133
| 3,789,481
|
Is it possible to deploy Flask to Azure with external directories?
|
<p>Let's say I have following structure</p>
<pre><code>root/
- app/
- flask_app.py
- requirements.txt
- shared_libs/
- lib.py
- others/
</code></pre>
<p>The shared folder is used both by flask_app.py and by other processing that can reference it. As it is stored outside of the Flask app, the code can only run locally after adding the parent working directory (root) to <code>sys.path</code>:</p>
<pre><code>import sys
sys.path.append('..')
import os
from shared_libs.lib import ExampleLib
</code></pre>
<p>Typically, deploying only <code>app/</code> to Azure is easy to achieve, but in this case I would like to pick both <code>app/</code> and <code>shared_libs/</code>, compress them into a zip file, and then point the default runner to <code>app/flask_app.py</code>.</p>
<p>I am not sure which technique to use for this; thanks if you could give me some ideas.</p>
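<p>One common approach (a sketch, not Azure-specific) is to have flask_app.py resolve the repo root from its own location instead of relying on the working directory, so the zipped root/{app, shared_libs} layout keeps working no matter where the runner starts:</p>

```python
# Resolve root/ from this file's location rather than the current working
# directory, so `shared_libs` is importable wherever the app is started.
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parent.parent  # root/, one level above app/
if str(ROOT) not in sys.path:
    sys.path.insert(0, str(ROOT))

# from shared_libs.lib import ExampleLib  # now resolvable from root/
```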
|
<python><flask><azure-web-app-service>
|
2023-05-26 08:44:12
| 1
| 2,086
|
Alfred Luu
|
76,339,067
| 13,154,227
|
pyodbc with msaccess: too few parameters. Trying to set date to yesterday in a column
|
<pre><code>import pyodbc
conn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\testing\mydb.accdb')
crs = conn.cursor()
crs.execute('UPDATE [testtable] SET "dateoflastchange" = DATEADD("d", -1, DATE())')
# Too few parameters. Expected 1
</code></pre>
<p>I tried replacing DATEADD with DateAdd and replacing commas with semicolons, and also "day" instead of "d" as seen in other examples. I cannot figure it out. In the past it was usually some Microsoft syntax that I had wrong. Where does it expect 1 parameter but get 0?</p>
|
<python><ms-access><pyodbc>
|
2023-05-26 08:34:59
| 2
| 609
|
R-obert
|
76,338,888
| 1,021,819
|
How can I use a third-party context manager to run a custom-class's method?
|
<p>In python, I'd like to use a <code>with</code>-statement context manager when using a particular third-party package because their context manager will handle clean-up for me in a best-practice way.</p>
<p>Right now I have (say):</p>
<pre class="lang-py prettyprint-override"><code>class MyClass(object):
def __init__(self):
self.open()
def open(self):
# Instantiate something *e.g.* open() a file
def run(self):
# Do some work *e.g.* read() from a filehandle
def cleanup(self):
# Clean everything up *e.g.* close() filehandle
</code></pre>
<p>This would then be called as</p>
<pre class="lang-py prettyprint-override"><code>my_obj = MyClass()
my_obj.open()
my_obj.run()
my_obj.cleanup()
</code></pre>
<p>I'd ideally like to do all this using a context manager referenced <em>within</em> the class (if possible). It's the <code>run()</code> method that needs the context here. I am not sure how to do this - here are some theoretical attempts:</p>
<p>Option 1:</p>
<pre class="lang-py prettyprint-override"><code>my_obj2=MyClass2()
with my_obj2.third_party_context_manager_method() as mcmm:
my_obj2.run()
</code></pre>
<p>Option 2:</p>
<pre class="lang-py prettyprint-override"><code>my_obj3=MyClass3()
my_obj3.run_modified_containing_third_party_context_manager()
</code></pre>
<p><strong>Bonus:</strong> Now, in fact I need a double context manager in this case - how on earth..?</p>
<p><strong>Please note:</strong> The file open example is just an example! My case is somewhat more complicated but the idea is that if I can make it work for that I can achieve anything :)</p>
<p><strong>Another note:</strong> The use case is a dask cluster/client cleanup scenario - REF: <a href="https://stackoverflow.com/questions/76312844/how-can-i-exit-dask-cleanly">How can I exit Dask cleanly?</a></p>
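<p>A hedged sketch of Option 2 using <code>contextlib.ExitStack</code>, which also covers the "double context manager" bonus: toy <code>FakeResource</code> objects stand in for the third-party managers (e.g. a Dask cluster and client), and clean-up runs in reverse order even if an exception is raised:</p>

```python
import contextlib

class FakeResource:
    """Toy stand-in for a third-party context manager."""
    def __init__(self, log, name):
        self.log, self.name = log, name
    def __enter__(self):
        self.log.append(f"open {self.name}")
        return self
    def __exit__(self, *exc):
        self.log.append(f"close {self.name}")
        return False

class MyClass3:
    def __init__(self):
        self.log = []
    def run(self):
        self.log.append("run")
    def run_modified(self):
        # ExitStack nests any number of context managers (the "double" case)
        # and guarantees clean-up in reverse order, even on exceptions.
        with contextlib.ExitStack() as stack:
            stack.enter_context(FakeResource(self.log, "cluster"))
            stack.enter_context(FakeResource(self.log, "client"))
            self.run()

obj = MyClass3()
obj.run_modified()
```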
|
<python><object><with-statement><contextmanager>
|
2023-05-26 08:11:09
| 1
| 8,527
|
jtlz2
|
76,338,880
| 8,547,163
|
How to save large set of rows and columns from .xlsx output in a file
|
<p>I have a <code>.xlsx</code> file with a large set of rows and columns, and I would like to save the desired output in a .csv or .txt file.</p>
<pre><code>import pandas as pd
def foo():
data = pd.read_excel('file/path/filename.xlsx')
print(data)
f = open('Output.txt', 'w')
print(data.head(50),file = f)
f.close()
</code></pre>
<p>I get the <code>Output.txt</code> as</p>
<pre><code>Col1 Col2 ... Col99
1 1 ... 2
2 4 ... 3
. . ... .
. . ... .
4 2 ... 3
</code></pre>
<p>instead of all the rows and columns. I realize that I'm just saving what's been printed on screen. Can anyone suggest how to save such columns and rows completely in a .csv or .txt?</p>
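<p>The truncation comes from printing the DataFrame's repr; <code>DataFrame.to_csv</code> (or <code>to_string</code>) writes every row and column. A sketch with a small in-memory frame standing in for the Excel file:</p>

```python
import pandas as pd

# Stand-in for: data = pd.read_excel('file/path/filename.xlsx')
data = pd.DataFrame({"Col1": range(100), "Col2": range(100, 200)})

data.to_csv("Output.csv", index=False)  # every row/column, no truncation

with open("Output.txt", "w") as f:      # full plain-text dump
    f.write(data.to_string())
```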
|
<python><pandas>
|
2023-05-26 08:10:17
| 2
| 559
|
newstudent
|
76,338,851
| 716,682
|
Using Pybind11 and access C++ objects through a base pointer
|
<p>Suppose I have the following C++ classes:</p>
<pre class="lang-cpp prettyprint-override"><code>class Animal {
public:
virtual void sound() = 0;
};
class Dog : public Animal {
public:
void sound() override {
std::cout << "Woof\n";
}
};
class Cat : public Animal {
public:
void sound() override {
std::cout << "Miao\n";
}
};
std::unique_ptr<Animal> animalFactory(std::string_view type) {
if (type == "Dog") {
return std::make_unique<Dog>();
} else {
return std::make_unique<Cat>();
}
}
</code></pre>
<p>Is it possible, and if so, how, to write a binding using Pybind11 so that I in Python code can write:</p>
<pre class="lang-py prettyprint-override"><code>dog = animalFactory('dog')
cat = animalFactory('cat')
dog.sound()
cat.sound()
</code></pre>
<p>and have the correct functions in the derived classes called?</p>
|
<python><c++><pybind11>
|
2023-05-26 08:06:59
| 1
| 2,057
|
Pibben
|
76,338,850
| 8,770,170
|
Python output highlighting in Quarto
|
<p>Continuing on <a href="https://stackoverflow.com/q/74981820/8770170">this question</a> about code output highlighting in Quarto: Is it also possible to make this work with Python output?</p>
<p>A minimal working example:</p>
<pre><code>---
title: "Output line highlighting"
format: revealjs
editor: visual
filters:
- output-line-highlight.lua
---
## Regression Table
In R:
```{r}
#| class-output: highlight
#| output-line-numbers: "9,10,11,12"
LM <- lm(Sepal.Length ~ Sepal.Width, data = iris)
summary(LM)
```
## Regression Table
In Python:
```{python}
#| class-output: highlight
#| output-line-numbers: "12,13,14,15,16,17"
import pandas as pd
import statsmodels.api as sm
iris = sm.datasets.get_rdataset('iris').data
LM = sm.OLS(iris['Sepal.Length'], sm.add_constant(iris['Sepal.Width'])).fit()
LM.summary2()
```
</code></pre>
<p>Rendering this file shows the Lua filter works for the R output, but not for Python.</p>
|
<python><r><lua><quarto><reveal.js>
|
2023-05-26 08:06:50
| 1
| 551
|
Frans Rodenburg
|
76,338,694
| 1,342,516
|
Split array into intervals with given limits
|
<p>From an array, how to take only the parts within given intervals? E.g.
from</p>
<pre><code>x = np.linspace(0,10,51)
intervals = [[2, 2.5], [8.1, 9]]
</code></pre>
<p>get</p>
<pre><code>[[2, 2.2, 2.4], [8.2, 8.4, 8.6, 8.8]]
</code></pre>
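<p>A boolean mask per interval does this directly. The sketch below assumes half-open intervals [lo, hi), which is what the example output implies (9.0 is excluded); use <code>&lt;=</code> for an inclusive upper bound:</p>

```python
import numpy as np

x = np.linspace(0, 10, 51)
intervals = [[2, 2.5], [8.1, 9]]

# One boolean mask per interval, selecting the samples in [lo, hi).
parts = [x[(x >= lo) & (x < hi)] for lo, hi in intervals]
```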
|
<python><numpy>
|
2023-05-26 07:45:51
| 2
| 539
|
user1342516
|
76,338,652
| 2,828,006
|
jira python API get active Sprint
|
<p>I am using atlassian python API connector for writing script which can get the active sprint for the project.</p>
<p>API doc : <a href="https://atlassian-python-api.readthedocs.io/" rel="nofollow noreferrer">https://atlassian-python-api.readthedocs.io/</a></p>
<p>In this API doc I am not able to find any method that indicates what the active sprint currently is.</p>
<p>It only has methods for creating sprints:</p>
<pre><code> # Create sprint
jira.jira.create_sprint(sprint_name, origin_board_id, start_datetime, end_datetime, goal)
# Rename sprint
jira.rename_sprint(sprint_id, name, start_date, end_date)
# Add/Move Issues to sprint
jira.add_issues_to_sprint(sprint_id, issues_list)
</code></pre>
<p>There is no such method to get the current active sprint.</p>
<p>Can anyone please tell me what the API signature is to get the current active sprint?</p>
|
<python><jira>
|
2023-05-26 07:41:09
| 0
| 1,474
|
Scientist
|
76,338,557
| 1,145,666
|
How can I set the character set encoding for serving static files?
|
<p>I am using a simple setup for serving static files in CherryPy:</p>
<pre><code>class StaticServer(object):
pass
config = {
'/': {
'tools.encode.on': True,
'tools.encode.encoding': 'utf-8',
'tools.staticdir.on': True,
'tools.staticdir.dir': fullpath,
'tools.staticdir.index': 'index.html',
'error_page.default': error_page
}
}
cherrypy.tree.mount(StaticServer(), "/", config = config)
</code></pre>
<p>However, when loading the <code>index.html</code> page (which is UTF-8 encoded), it is not served correctly, resulting in jumbled characters. When checking the response header, I can see the <code>content-type: text/html</code>, but no character set encoding.</p>
<p>How can I set the character set encoding to UTF-8 in CherryPy for statically served files?</p>
|
<python><encoding><cherrypy>
|
2023-05-26 07:29:04
| 0
| 33,757
|
Bart Friederichs
|
76,338,475
| 17,973,259
|
Argument of type "str" cannot be assigned to parameter "__key" of type "slice" in function "__setitem__" "str" is incompatible with "slice"
|
<p>My code runs with no errors, but VS Code shows me a warning after I added the <code>_update_scale</code> method in this class:</p>
<pre><code>class AlienAnimation:
"""This class manages the animation of an alien,
based on its level prefix. The alien frames are chosen
based on the current level in the game.
"""
frames = {}
def __init__(self, game, alien, scale=1.0):
self.alien = alien
self.game = game
self.scale = scale
self.frame_update_rate = 6
self.frame_counter = 0
self.current_frame = 0
level_prefix = LEVEL_PREFIX.get(game.stats.level // 4 + 1, "Alien7")
if level_prefix not in AlienAnimation.frames:
AlienAnimation.frames[level_prefix] = load_alien_images(level_prefix) # this returns a list
self.frames = AlienAnimation.frames[level_prefix]
self.image = self.frames[self.current_frame]
def _update_scale(self):
scaled_width = int(self.image.get_width() * self.scale)
scaled_height = int(self.image.get_height() * self.scale)
self.image = pygame.transform.scale(self.image, (scaled_width, scaled_height))
self.frames = [pygame.transform.scale(frame, (scaled_width, scaled_height)) for frame in self.frames] # If I remove this line, the warnings are gone.
</code></pre>
<p>Warning messages:</p>
<pre><code>Argument of type "str" cannot be assigned to parameter "__key" of type "slice" in function "__setitem__"
"str" is incompatible with "slice"
Argument of type "str" cannot be assigned to parameter "__s" of type "slice" in function "__getitem__"
"str" is incompatible with "slice"
</code></pre>
<p>The warnings are pointing me to:
<code>AlienAnimation.frames[level_prefix] = load_alien_images(level_prefix)</code>
and
<code>self.frames = AlienAnimation.frames[level_prefix]</code></p>
<p>What does this warning mean? Is there a way to not receive it?</p>
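<p>The warning likely comes from <code>frames</code> meaning two things at once: the class attribute <code>frames = {}</code> is a dict keyed by str, while <code>_update_scale</code> assigns a list to <code>self.frames</code>, so the type checker infers a dict/list union and rejects str indexing (lists only accept int or slice keys). A hedged sketch of one fix, giving the class-level cache its own annotated name (the names here are illustrative):</p>

```python
from typing import ClassVar

class AlienAnimation:
    # Class-level cache keyed by level prefix, annotated so type checkers
    # never confuse it with the per-instance frame list below.
    frame_cache: ClassVar[dict[str, list]] = {}

    def __init__(self, level_prefix: str):
        if level_prefix not in AlienAnimation.frame_cache:
            # Stand-in for load_alien_images(level_prefix)
            AlienAnimation.frame_cache[level_prefix] = [f"{level_prefix}_0"]
        # Distinct name for the instance attribute that holds the list.
        self.frames: list = AlienAnimation.frame_cache[level_prefix]

anim = AlienAnimation("Alien1")
```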
|
<python><python-3.x>
|
2023-05-26 07:17:13
| 1
| 878
|
Alex
|
76,338,218
| 17,530,552
|
How to skip a row of zeros with np.loadtxt?
|
<p>I have a .txt file that contains many rows of numeric values. Each row looks as follows
<code>3.896784 0.465921 1.183185 5.468042 ...</code>, where the values differ depending on the row.</p>
<p>Furthermore, every row contains 900 values.
Some rows do not contain real data, but only 900 zeros as follows
<code>0 0 0 0 0 0 0 0 ...</code>.</p>
<p>I know how to skip rows that only contain one value, i.e., one zero, such as via the following code:</p>
<pre><code>import numpy as np
data = np.loadtxt("pathtodata")
data = data[~(data==0)]
</code></pre>
<p>But this code does not work for 2 zero values or more per row. Is there a way to not load rows that only contain 900 or any arbitrary number of zeros (or a specific integer)?</p>
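<p>Row-wise filtering needs <code>axis=1</code> so that all 900 values in a row are tested together — a small sketch with a 3-column stand-in for the real file:</p>

```python
import numpy as np

# Stand-in for np.loadtxt("pathtodata"): two real rows around a zero row.
data = np.array([[3.9, 0.4, 1.1],
                 [0.0, 0.0, 0.0],
                 [5.4, 2.2, 0.7]])

# Keep only the rows where not every entry is zero (axis=1 tests per row).
data = data[~np.all(data == 0, axis=1)]
```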
|
<python><numpy><txt>
|
2023-05-26 06:38:56
| 2
| 415
|
Philipp
|
76,338,089
| 11,357,695
|
Failed building wheel for backports-zoneinfo
|
<p><em><strong>EDIT</strong></em></p>
<p>I tried <a href="https://stackoverflow.com/a/61762308/11357695">clearing my pip cache</a> due to the error line <code>Using cached backports.zoneinfo-0.2.1.tar.gz (74 kB)</code> (maybe the cached version is broken/old and incompatible with other packages) - but this made no difference other than removing that line</p>
<p><em><strong>CONTEXT</strong></em></p>
<p>I've been having dependency issues after installing <code>Python 3.9</code> (discussed <a href="https://stackoverflow.com/questions/76320554/import-errors-after-updating-spyder-and-python?noredirect=1#comment134584293_76320554">here</a> and <a href="https://stackoverflow.com/questions/76314158/spyder-kernels-anaconda-and-python-3-9/76320534?noredirect=1#comment134585043_76320534">here</a>). The issues in the linked posts have been fixed, but I am uninstalling and re-installing my <code>pip</code>-installed packages to make sure they are compatible with my new python version. One of the packages I am trying to do this with is <code>backports-zoneinfo</code>. I am aware I <a href="https://stackoverflow.com/a/72796492/11357695">don't really need this</a>, but I was going to keep it anyway in case I write something needing compatibility with older python versions.</p>
<p><em><strong>ISSUE:</strong></em></p>
<p>I have uninstalled <code>backports-zoneinfo</code>, and then tried to reinstall it, but get the error message at the bottom of this post. I then installed <a href="https://visualstudio.microsoft.com/visual-cpp-build-tools/?fbclid=IwAR2CjL1xcjVN8PCHAszkPfVFix-1Iosqn_wbvp9Ij8whDsQyTKDNe-66nfs" rel="nofollow noreferrer">build tools 2022</a> as per the error message and tried to re-install <code>backports-zoneinfo</code> again, but got the same error. Please can someone help me to diagnose and fix the issue?</p>
<p><em><strong>Uninstall message:</strong></em></p>
<pre><code>C:\Users\u03132tk>pip uninstall backports-zoneinfo
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
Found existing installation: backports.zoneinfo 0.2.1
Uninstalling backports.zoneinfo-0.2.1:
Would remove:
c:\anaconda3\lib\site-packages\backports.zoneinfo-0.2.1.dist-info\*
c:\anaconda3\lib\site-packages\backports\*
Would not remove (might be manually added):
c:\anaconda3\lib\site-packages\backports\functools_lru_cache.py
c:\anaconda3\lib\site-packages\backports\tempfile.py
c:\anaconda3\lib\site-packages\backports\weakref.py
Proceed (Y/n)? Y
Successfully uninstalled backports.zoneinfo-0.2.1
</code></pre>
<p><em><strong>Reinstall error</strong></em></p>
<pre><code>C:\Users\u03132tk>pip install backports-zoneinfo
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
Collecting backports-zoneinfo
Using cached backports.zoneinfo-0.2.1.tar.gz (74 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: backports-zoneinfo
Building wheel for backports-zoneinfo (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for backports-zoneinfo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [46 lines of output]
C:\Users\u03132tk\AppData\Local\Temp\pip-build-env-0o6uoggk\overlay\Lib\site-packages\setuptools\config\setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
********************************************************************************
The license_file parameter is deprecated, use license_files instead.
By 2023-Oct-30, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
parsed = self.parsers.get(option_name, lambda x: x)(value)
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
creating build\lib.win-amd64-cpython-39\backports
copying src\backports\__init__.py -> build\lib.win-amd64-cpython-39\backports
creating build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\_common.py -> build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\_tzpath.py -> build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\_version.py -> build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\_zoneinfo.py -> build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\__init__.py -> build\lib.win-amd64-cpython-39\backports\zoneinfo
running egg_info
writing src\backports.zoneinfo.egg-info\PKG-INFO
writing dependency_links to src\backports.zoneinfo.egg-info\dependency_links.txt
writing requirements to src\backports.zoneinfo.egg-info\requires.txt
writing top-level names to src\backports.zoneinfo.egg-info\top_level.txt
reading manifest file 'src\backports.zoneinfo.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'docs'
warning: no files found matching '*.svg' under directory 'docs'
no previously-included directories found matching 'docs\_build'
no previously-included directories found matching 'docs\_output'
adding license file 'LICENSE'
adding license file 'licenses/LICENSE_APACHE'
writing manifest file 'src\backports.zoneinfo.egg-info\SOURCES.txt'
copying src\backports\zoneinfo\__init__.pyi -> build\lib.win-amd64-cpython-39\backports\zoneinfo
copying src\backports\zoneinfo\py.typed -> build\lib.win-amd64-cpython-39\backports\zoneinfo
running build_ext
building 'backports.zoneinfo._czoneinfo' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for backports-zoneinfo
Failed to build backports-zoneinfo
ERROR: Could not build wheels for backports-zoneinfo, which is required to install pyproject.toml-based projects
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umpy (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -umexpr (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -iopython (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -illow (c:\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -cipy (c:\anaconda3\lib\site-packages)
</code></pre>
|
<python><visual-c++><pip><conda><python-wheel>
|
2023-05-26 06:17:34
| 1
| 756
|
Tim Kirkwood
|
76,337,429
| 10,984,994
|
No attribute "append" error while writing data using pandas DataFrame
|
<p>I am getting the error below:
<code>AttributeError: 'DataFrame' object has no attribute 'append'. Did you mean: '_append'?</code></p>
<p>I am trying to fill the <code>result_df</code> variable with the values corresponding to each device name, one row per device, using a for loop. But I am getting the no-attribute error.</p>
<p>What could I possibly be missing here?</p>
<p>Reproducible code:</p>
<pre><code>import pandas as pd
import os
import json
currDir = os.getcwd()
def parse_json_response():
filename = "my_json_file.json"
device_name = ["Trona", "Sheldon"]
"creating dataframe to store result"
column_names = ["DEVICE", "STATUS", "LAST UPDATED"]
result_df = pd.DataFrame(columns=column_names)
my_json_file = currDir + '/' + filename
for i in range(len(device_name)):
my_device_name = device_name[i]
with open(my_json_file) as f:
data = json.load(f)
for devices in data:
device_types = devices['device_types']
if my_device_name in device_types['name']:
if device_types['name'] == my_device_name:
device = devices['device_types']['name']
last_updated = devices['devices']['last_status_update']
device_status = devices['devices']['status']
result_df = result_df.append(
{'DEVICE': device, 'STATUS': device_status,
'LAST UPDATED': last_updated}, ignore_index=True)
print(result_df)
parse_json_response()
</code></pre>
<p>Here is my JSON file contents: (save in your current path named as "my_json_file.json")</p>
<pre><code>[{"devices": {"id": 34815, "last_status_update": "2023-05-25 07:56:49", "status": "idle" }, "device_types": {"name": "Trona"}}, {"devices": {"id": 34815, "last_status_update": "2023-05-25 07:56:49", "status": "idle" }, "device_types": {"name": "Sheldon"}}]
</code></pre>
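<p><code>DataFrame.append</code> was deprecated in pandas 1.4 and removed in 2.0; collecting plain dicts in a list and building the frame once is both compatible and faster than growing a DataFrame row by row. A sketch with the device data from the JSON hard-coded:</p>

```python
import pandas as pd

# pandas 2.0 removed DataFrame.append; collect rows in a plain list and
# build the frame once at the end (much faster than repeated appends, too).
rows = []
for device, status, updated in [("Trona", "idle", "2023-05-25 07:56:49"),
                                ("Sheldon", "idle", "2023-05-25 07:56:49")]:
    rows.append({"DEVICE": device, "STATUS": status, "LAST UPDATED": updated})

result_df = pd.DataFrame(rows, columns=["DEVICE", "STATUS", "LAST UPDATED"])
```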
|
<python><python-3.x><pandas><dataframe>
|
2023-05-26 03:14:51
| 2
| 1,009
|
ilexcel
|
76,337,361
| 3,247,006
|
Is there the parameter to pass a billing address to "Payments" on Stripe Dashboard after a payment on Stripe Checkout?
|
<p>I'm trying to pass a billing address to <strong>Payments</strong> on the <strong>Stripe Dashboard</strong>, but I couldn't find a parameter to do it in <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data" rel="nofollow noreferrer">payment_intent_data</a>.</p>
<p>So instead, I used <a href="https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data-metadata" rel="nofollow noreferrer">payment_intent_data.metadata</a> as shown below. *I use Django:</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.shortcuts import redirect
import stripe
checkout_session = stripe.checkout.Session.create(
line_items=[
{
"price_data": {
"currency": "USD",
"unit_amount_decimal": 1000,
"product_data": {
"name": "T-shirt"
},
},
"quantity": 2
}
],
payment_intent_data={
"shipping":{
"name": "John Smith",
"phone": "14153758094",
"address":{
"country": "USA",
"state": "California",
"city": "San Francisco",
"line1": "58 Middle Point Rd",
"line2": "",
"postal_code": "94124"
}
},
"metadata":{ # Here
"name": "Anna Brown",
"phone": "19058365484",
"country": "Canada",
"state": "Ontario",
"city": "Newmarket",
"line1": "3130 Leslie Street",
"line2": "",
"postal_code": "L3Y 2A3"
}
},
mode='payment',
success_url='http://localhost:8000',
cancel_url='http://localhost:8000'
)
return redirect(checkout_session.url, code=303)
</code></pre>
<p>Then, I could pass a billing address to <strong>Payments</strong> on <strong>Stripe Dashboard</strong> as shown below, but if there is the parameter to pass a billing address to <strong>Payments</strong> on <strong>Stripe Dashboard</strong>, it is really useful:</p>
<p><a href="https://i.sstatic.net/SZsJD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SZsJD.png" alt="enter image description here" /></a></p>
<p>I know that there are <a href="https://stripe.com/docs/api/customers/create#create_customer-address" rel="nofollow noreferrer">address</a> parameter when <a href="https://stripe.com/docs/api/customers/create" rel="nofollow noreferrer">creating a customer</a> and <a href="https://stripe.com/docs/api/customers/update#update_customer-address" rel="nofollow noreferrer">address</a> parameter when <a href="https://stripe.com/docs/api/customers/update" rel="nofollow noreferrer">updating a customer</a> but both <code>address</code> doesn't have <code>name</code> and <code>phone</code> parameters.</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.shortcuts import redirect
import stripe
def test(request):
customer = stripe.Customer.search(query="email:'test@gmail.com'", limit=1)
shipping={
"name": "John Smith",
"phone": "14153758094",
"address":{
"country": "USA",
"state": "CA",
"city": "San Francisco",
"line1": "58 Middle Point Rd",
"line2": "",
"postal_code": "94124"
}
}
address={ # Here
"country": "Canada",
"state": "Ontario",
"city": "Newmarket",
"line1": "3130 Leslie Street",
"line2": "",
"postal_code": "L3Y 2A3"
}
if customer['data']:
customer = stripe.Customer.modify(
customer['data'][0]['id'],
name="John Smith",
shipping=shipping,
address=address # Here
)
else:
customer = stripe.Customer.create(
name="John Smith",
shipping=shipping,
address=address # Here
)
checkout_session = stripe.checkout.Session.create(
customer=customer["id"],
line_items=[
{
"price_data": {
"currency": "USD",
"unit_amount_decimal": 1000,
"product_data": {
"name": "T-shirt"
},
},
"quantity": 2
}
],
mode='payment',
success_url='http://localhost:8000',
cancel_url='http://localhost:8000'
)
</code></pre>
<p>And, a billing address is passed to <strong>Customers</strong> as <strong>Billing details</strong> but not to <strong>Payments</strong> on <strong>Stripe Dashboard</strong> as shown below which means each payment in <strong>Payments</strong> cannot have a specific billing address:</p>
<p><a href="https://i.sstatic.net/Mw866.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mw866.png" alt="enter image description here" /></a></p>
<p>Now, is there a parameter to pass a billing address to <strong>Payments</strong> on the <strong>Stripe Dashboard</strong> after a payment on <strong>Stripe Checkout</strong>?</p>
<p>And, if not, will <strong>Stripe</strong> add a <code>billing_address</code> parameter to <code>payment_intent_data</code>?</p>
|
<python><django><stripe-payments><checkout>
|
2023-05-26 02:49:23
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,337,141
| 13,444,774
|
How to forecast with new dataset by "MarkovAutoregression" model in Statsmodels?
|
<p>I'd like to fit a MarkovAutoregression model with a training time-series dataset (train_data) and have it forecast with a validation time-series dataset (val_data). The training part is shown below and runs without errors.</p>
<pre><code>import numpy as np
from statsmodels.tsa.regime_switching.markov_autoregression import MarkovAutoregression
from sklearn.model_selection import train_test_split
# Generate some random data
np.random.seed(0)
n_samples = 100
data = np.random.randn(n_samples)
# Split data into training and validation datasets
train_data, val_data = train_test_split(data, test_size=0.2, random_state=0)
# Fit the Markov autoregression model
lag_order = 2 # Order of the autoregressive process
model = MarkovAutoregression(train_data, k_regimes=2, order=lag_order)
result = model.fit()
</code></pre>
<p>Then, the prediction part should look like the following, according to the <a href="https://www.statsmodels.org/stable/generated/statsmodels.tsa.regime_switching.markov_autoregression.MarkovAutoregression.predict.html" rel="nofollow noreferrer">official documentation</a> for the predict() method.</p>
<pre><code>MarkovAutoregression.predict(
params,
start=None,
end=None,
probabilities=None,
conditional=False
)
</code></pre>
<p>As you can see, there are <strong>start</strong> and <strong>end</strong> arguments to designate the target time window by index. Are these indexes relative to the <strong>train_data</strong> that was already used in fit()? How can I pass my <strong>val_data</strong> to predict()?</p>
|
<python><machine-learning><statsmodels><forecasting><markov>
|
2023-05-26 01:34:53
| 1
| 353
|
Ihmon
|
76,337,133
| 10,284,437
|
How to enhance speed to compare a list of ID to a known list of ID from MongoDB?
|
<p>My code runs a time-consuming web-scraping process, and for each scraped item I need to know whether its ID is already known:</p>
<pre><code>import re
from pymongo import MongoClient
from selenium.webdriver.common.by import By
client = MongoClient('mongodb://localhost:27017/')
mydb = client['db']
mycol = mydb['collection']
if __name__ == '__main__':
[...]
    for item in driver.find_elements(By.XPATH, '//a'):
flag = True
url = item.get_attribute('href')
myid = re.sub(r'.*item/([0-9a-f-]+)\?.*', r'\1', url)
# I iterate over all known MongoDB ID's to find a match or not
# It takes too much time to compare
for oupid in mycol.find({ }, { "Id": 1, "_id": 0}):
if myid == oupid['Id']:
flag = False
if not flag:
continue
</code></pre>
<p>Any recommendations?
It takes 6 minutes to parse the site, and most of that time is wasted comparing IDs.</p>
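<p>For reference, this is the change I'm considering (an untested sketch — it uses a stand-in collection object in place of pymongo's so the idea is runnable without a server): fetch all known IDs once into a Python set before the scraping loop, so each membership check is O(1) instead of a full collection scan per item.</p>

```python
def load_known_ids(collection):
    # Fetch every stored Id once, up front, into a set for O(1) lookups.
    return {doc['Id'] for doc in collection.find({}, {'Id': 1, '_id': 0})}

class FakeCollection:
    """Stand-in for pymongo's Collection, for demonstration only."""
    def __init__(self, docs):
        self._docs = docs

    def find(self, filter_, projection):
        return iter(self._docs)

col = FakeCollection([{'Id': 'a1'}, {'Id': 'b2'}])
known_ids = load_known_ids(col)

# Inside the scraping loop this replaces the inner find() loop:
assert 'a1' in known_ids
assert 'c3' not in known_ids
```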
|
<python><mongodb><selenium-webdriver><web-scraping>
|
2023-05-26 01:32:23
| 1
| 731
|
Mévatlavé Kraspek
|
76,337,109
| 4,390,160
|
Unexpected type warning in `__new__` method for `int` subclass
|
<p>The following code:</p>
<pre><code>class Foo(int):
def __new__(cls, x, *args, **kwargs):
x = x if isinstance(x, int) else 42
return super(Foo, cls).__new__(cls, x, *args, **kwargs)
</code></pre>
<p>Results in a warning (in PyCharm): "<code>Expected type 'str | bytes | bytearray', got 'int' instead</code>" on <code>x</code> in the last line.</p>
<p>Why is this?</p>
<p>If I evaluate <code>super(Foo, cls).__new__ == int.__new__</code>, the result is <code>True</code>. Wouldn't that expect an <code>int</code> as well?</p>
<p>Is there a better way to create a subclass of <code>int</code>, if I want to add behaviour when a value is first assigned or some value is cast to this type?</p>
<p>A concrete example of such a class would be a <code>class Size(int)</code> that could be instantiated as <code>Size(1024)</code> or <code>Size('1 KiB')</code>, with the same result.</p>
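<p>For context, a minimal sketch of the kind of class I have in mind (the string parsing is deliberately simplified and only handles a few units):</p>

```python
class Size(int):
    _units = {'B': 1, 'KiB': 1024, 'MiB': 1024 ** 2}

    def __new__(cls, value):
        # Accept either an int or a string like '1 KiB'.
        if isinstance(value, str):
            number, _, unit = value.partition(' ')
            value = int(number) * cls._units.get(unit, 1)
        return super().__new__(cls, value)

assert Size(1024) == 1024
assert Size('1 KiB') == 1024
```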
|
<python><subclassing>
|
2023-05-26 01:22:38
| 1
| 32,399
|
Grismar
|
76,337,096
| 825,227
|
Calculate a trailing average using day of year closest to anchor date in Python
|
<p>I have a dataframe of weekly values for different series identifiers as follows. The week entries are made for each series every week but don't coincide year after year (e.g., 1/1 one year, 1/3 the next, 1/2, etc.):</p>
<pre><code>period series value
2017-05-12 R33 720
2017-05-12 R33 1057
2017-05-12 R33 337
2017-05-12 R34 161
2017-05-12 R35 244
2017-05-12 R48 2369
2017-05-19 R31 390
2017-05-19 R32 562
2017-05-19 R33 1076
2017-05-19 R33 738
2017-05-19 R33 338
2017-05-19 R34 166
2017-05-19 R35 250
2017-05-19 R48 2444
2017-05-26 R31 419
2017-05-26 R32 585
2017-05-26 R33 342
2017-05-26 R33 755
2017-05-26 R33 1097
2017-05-26 R34 166
2017-05-26 R35 258
2017-05-26 R48 2525
2017-06-02 R31 457
2017-06-02 R32 614
2017-06-02 R33 774
2017-06-02 R33 345
2017-06-02 R33 1119
2017-06-02 R34 172
2017-06-02 R35 269
2017-06-02 R48 2631
2017-06-09 R31 491
2017-06-09 R32 634
2017-06-09 R33 1133
2017-06-09 R33 784
2017-06-09 R33 348
2017-06-09 R34 177
2017-06-09 R35 274
2017-06-09 R48 2709
2017-06-16 R31 513
2017-06-16 R32 656
2017-06-16 R33 1138
2017-06-16 R33 343
2017-06-16 R33 794
2017-06-16 R34 182
2017-06-16 R35 281
2017-06-16 R48 2770
2017-06-23 R31 536
2017-06-23 R32 676
2017-06-23 R33 341
2017-06-23 R33 799
2017-06-23 R33 1140
2017-06-23 R34 184
2017-06-23 R35 280
2017-06-23 R48 2816
2017-06-30 R31 564
2017-06-30 R32 699
2017-06-30 R33 1141
2017-06-30 R33 810
2017-06-30 R33 332
2017-06-30 R34 187
2017-06-30 R35 287
2017-06-30 R48 2878
2017-07-07 R31 588
2017-07-07 R32 719
2017-07-07 R33 1144
2017-07-07 R33 817
2017-07-07 R33 327
2017-07-07 R34 193
2017-07-07 R35 292
2017-07-07 R48 2936
2017-07-14 R31 609
2017-07-14 R32 733
2017-07-14 R33 319
2017-07-14 R33 816
2017-07-14 R33 1135
2017-07-14 R34 194
2017-07-14 R35 292
2017-07-14 R48 2963
2017-07-21 R31 626
2017-07-21 R32 743
2017-07-21 R33 308
2017-07-21 R33 812
2017-07-21 R33 1120
2017-07-21 R34 197
2017-07-21 R35 294
2017-07-21 R48 2980
2017-07-28 R31 651
2017-07-28 R32 754
2017-07-28 R33 805
2017-07-28 R33 296
2017-07-28 R33 1101
2017-07-28 R34 200
2017-07-28 R35 293
2017-07-28 R48 2999
2017-08-04 R31 673
2017-08-04 R32 771
2017-08-04 R33 1094
2017-08-04 R33 804
2017-08-04 R33 290
2017-08-04 R34 202
2017-08-04 R35 289
2017-08-04 R48 3029
2017-08-11 R31 701
2017-08-11 R32 798
2017-08-11 R33 804
2017-08-11 R33 1087
2017-08-11 R33 283
2017-08-11 R34 204
2017-08-11 R35 292
2017-08-11 R48 3082
2017-08-18 R31 728
2017-08-18 R32 821
2017-08-18 R33 274
2017-08-18 R33 800
2017-08-18 R33 1074
2017-08-18 R34 206
2017-08-18 R35 296
2017-08-18 R48 3125
2017-08-25 R31 749
2017-08-25 R32 840
2017-08-25 R33 1060
2017-08-25 R33 262
2017-08-25 R33 797
2017-08-25 R34 205
2017-08-25 R35 301
2017-08-25 R48 3155
2017-09-01 R31 781
2017-09-01 R32 872
2017-09-01 R33 266
2017-09-01 R33 800
2017-09-01 R33 1066
2017-09-01 R34 205
2017-09-01 R35 296
2017-09-01 R48 3220
2017-09-08 R31 809
2017-09-08 R32 906
2017-09-08 R33 285
2017-09-08 R33 1092
2017-09-08 R33 807
2017-09-08 R34 208
2017-09-08 R35 296
2017-09-08 R48 3311
2017-09-15 R31 833
2017-09-15 R32 938
2017-09-15 R33 304
2017-09-15 R33 821
2017-09-15 R33 1125
2017-09-15 R34 212
2017-09-15 R35 300
2017-09-15 R48 3408
2017-09-22 R31 848
2017-09-22 R32 964
2017-09-22 R33 305
2017-09-22 R33 1130
2017-09-22 R33 825
2017-09-22 R34 217
2017-09-22 R35 307
2017-09-22 R48 3466
2017-09-29 R31 861
2017-09-29 R32 989
2017-09-29 R33 1127
2017-09-29 R33 300
2017-09-29 R33 827
2017-09-29 R34 220
2017-09-29 R35 311
2017-09-29 R48 3508
2017-10-06 R31 884
2017-10-06 R32 1024
2017-10-06 R33 838
2017-10-06 R33 1149
2017-10-06 R33 310
2017-10-06 R34 223
2017-10-06 R35 315
2017-10-06 R48 3595
2017-10-13 R31 901
2017-10-13 R32 1054
2017-10-13 R33 302
2017-10-13 R33 1150
2017-10-13 R33 848
2017-10-13 R34 224
2017-10-13 R35 316
2017-10-13 R48 3645
2017-10-20 R31 915
2017-10-20 R32 1082
2017-10-20 R33 1174
2017-10-20 R33 861
2017-10-20 R33 313
2017-10-20 R34 224
2017-10-20 R35 315
2017-10-20 R48 3710
2017-10-27 R31 926
2017-10-27 R32 1107
2017-10-27 R33 324
2017-10-27 R33 1199
2017-10-27 R33 874
2017-10-27 R34 226
2017-10-27 R35 317
2017-10-27 R48 3775
2017-11-03 R31 925
2017-11-03 R32 1112
2017-11-03 R33 876
2017-11-03 R33 336
2017-11-03 R33 1212
2017-11-03 R34 224
2017-11-03 R35 317
2017-11-03 R48 3790
2017-11-10 R31 915
2017-11-10 R32 1108
2017-11-10 R33 1214
2017-11-10 R33 341
2017-11-10 R33 873
2017-11-10 R34 220
2017-11-10 R35 315
2017-11-10 R48 3772
2017-11-17 R31 891
2017-11-17 R32 1090
2017-11-17 R33 1211
2017-11-17 R33 871
2017-11-17 R33 340
2017-11-17 R34 220
2017-11-17 R35 314
2017-11-17 R48 3726
2017-11-24 R31 876
2017-11-24 R32 1068
2017-11-24 R33 866
2017-11-24 R33 348
2017-11-24 R33 1214
2017-11-24 R34 221
2017-11-24 R35 314
2017-11-24 R48 3693
2017-12-01 R31 868
2017-12-01 R32 1058
2017-12-01 R33 875
2017-12-01 R33 360
2017-12-01 R33 1235
2017-12-01 R34 221
2017-12-01 R35 313
2017-12-01 R48 3695
2017-12-08 R31 855
2017-12-08 R32 1033
2017-12-08 R33 1220
2017-12-08 R33 360
2017-12-08 R33 860
2017-12-08 R34 213
2017-12-08 R35 305
2017-12-08 R48 3626
2017-12-15 R31 811
2017-12-15 R32 980
2017-12-15 R33 825
2017-12-15 R33 330
2017-12-15 R33 1155
2017-12-15 R34 204
2017-12-15 R35 294
2017-12-15 R48 3444
2017-12-22 R31 782
2017-12-22 R32 941
2017-12-22 R33 809
2017-12-22 R33 1133
2017-12-22 R33 324
2017-12-22 R34 195
2017-12-22 R35 281
2017-12-22 R48 3332
2017-12-29 R31 740
2017-12-29 R32 875
2017-12-29 R33 1060
2017-12-29 R33 759
2017-12-29 R33 302
2017-12-29 R34 183
2017-12-29 R35 268
2017-12-29 R48 3126
2018-01-05 R31 664
2018-01-05 R32 778
2018-01-05 R33 683
2018-01-05 R33 224
2018-01-05 R33 907
2018-01-05 R34 167
2018-01-05 R35 251
2018-01-05 R48 2767
2018-01-12 R31 614
2018-01-12 R32 717
2018-01-12 R33 206
2018-01-12 R33 851
2018-01-12 R33 646
2018-01-12 R34 158
2018-01-12 R35 244
2018-01-12 R48 2584
2018-01-19 R31 555
2018-01-19 R32 634
2018-01-19 R33 578
2018-01-19 R33 729
2018-01-19 R33 151
2018-01-19 R34 145
2018-01-19 R35 233
2018-01-19 R48 2296
2018-01-26 R31 525
2018-01-26 R32 596
2018-01-26 R33 169
2018-01-26 R33 550
2018-01-26 R33 719
2018-01-26 R34 137
2018-01-26 R35 220
2018-01-26 R48 2197
2018-02-02 R31 488
2018-02-02 R32 543
2018-02-02 R33 518
2018-02-02 R33 703
2018-02-02 R33 184
2018-02-02 R34 131
2018-02-02 R35 213
2018-02-02 R48 2078
2018-02-09 R31 432
2018-02-09 R32 468
2018-02-09 R33 472
2018-02-09 R33 178
2018-02-09 R33 649
2018-02-09 R34 122
2018-02-09 R35 213
2018-02-09 R48 1884
2018-02-16 R31 403
2018-02-16 R32 428
2018-02-16 R33 614
2018-02-16 R33 440
2018-02-16 R33 175
2018-02-16 R34 111
2018-02-16 R35 204
2018-02-16 R48 1760
2018-02-23 R31 382
2018-02-23 R32 398
2018-02-23 R33 429
2018-02-23 R33 611
2018-02-23 R33 183
2018-02-23 R34 102
2018-02-23 R35 189
2018-02-23 R48 1682
2018-03-02 R31 359
2018-03-02 R32 380
2018-03-02 R33 612
2018-03-02 R33 189
2018-03-02 R33 423
2018-03-02 R34 97
2018-03-02 R35 177
2018-03-02 R48 1625
2018-03-09 R31 314
2018-03-09 R32 350
2018-03-09 R33 186
2018-03-09 R33 420
2018-03-09 R33 606
2018-03-09 R34 93
2018-03-09 R35 169
2018-03-09 R48 1532
2018-03-16 R31 270
2018-03-16 R32 315
2018-03-16 R33 182
2018-03-16 R33 602
2018-03-16 R33 419
2018-03-16 R34 90
2018-03-16 R35 169
2018-03-16 R48 1446
2018-03-23 R31 242
2018-03-23 R32 284
2018-03-23 R33 181
2018-03-23 R33 422
2018-03-23 R33 603
2018-03-23 R34 88
2018-03-23 R35 166
2018-03-23 R48 1383
2018-03-30 R31 229
2018-03-30 R32 266
2018-03-30 R33 188
2018-03-30 R33 606
2018-03-30 R33 419
2018-03-30 R34 87
2018-03-30 R35 166
2018-03-30 R48 1354
2018-04-06 R31 217
2018-04-06 R32 246
2018-04-06 R33 618
2018-04-06 R33 196
2018-04-06 R33 422
2018-04-06 R34 83
2018-04-06 R35 171
2018-04-06 R48 1335
2018-04-13 R31 207
2018-04-13 R32 228
2018-04-13 R33 606
2018-04-13 R33 185
2018-04-13 R33 421
2018-04-13 R34 83
2018-04-13 R35 175
2018-04-13 R48 1299
2018-04-20 R31 205
2018-04-20 R32 211
2018-04-20 R33 182
2018-04-20 R33 421
2018-04-20 R33 604
2018-04-20 R34 84
2018-04-20 R35 177
2018-04-20 R48 1281
2018-04-27 R31 223
2018-04-27 R32 221
2018-04-27 R33 626
2018-04-27 R33 436
2018-04-27 R33 190
2018-04-27 R34 86
2018-04-27 R35 187
2018-04-27 R48 1343
2018-05-04 R31 243
2018-05-04 R32 240
2018-05-04 R33 458
2018-05-04 R33 204
2018-05-04 R33 662
2018-05-04 R34 92
2018-05-04 R35 195
2018-05-04 R48 1432
2018-05-11 R31 275
2018-05-11 R32 267
2018-05-11 R33 477
2018-05-11 R33 694
2018-05-11 R33 216
2018-05-11 R34 98
2018-05-11 R35 204
2018-05-11 R48 1538
2018-05-18 R31 299
2018-05-18 R32 288
2018-05-18 R33 226
2018-05-18 R33 496
2018-05-18 R33 722
2018-05-18 R34 107
2018-05-18 R35 213
2018-05-18 R48 1629
2018-05-25 R31 328
2018-05-25 R32 315
2018-05-25 R33 235
2018-05-25 R33 748
2018-05-25 R33 514
2018-05-25 R34 113
2018-05-25 R35 221
2018-05-25 R48 1725
2018-06-01 R31 351
2018-06-01 R32 341
2018-06-01 R33 245
2018-06-01 R33 773
2018-06-01 R33 528
2018-06-01 R34 121
2018-06-01 R35 231
2018-06-01 R48 1817
2018-06-08 R31 377
2018-06-08 R32 372
2018-06-08 R33 252
2018-06-08 R33 547
2018-06-08 R33 800
2018-06-08 R34 125
2018-06-08 R35 239
2018-06-08 R48 1913
2018-06-15 R31 406
2018-06-15 R32 401
2018-06-15 R33 570
2018-06-15 R33 828
2018-06-15 R33 258
2018-06-15 R34 127
2018-06-15 R35 246
2018-06-15 R48 2008
2018-06-22 R31 430
2018-06-22 R32 425
2018-06-22 R33 252
2018-06-22 R33 835
2018-06-22 R33 583
2018-06-22 R34 133
2018-06-22 R35 251
2018-06-22 R48 2074
2018-06-29 R31 460
2018-06-29 R32 455
2018-06-29 R33 841
2018-06-29 R33 245
2018-06-29 R33 596
2018-06-29 R34 139
2018-06-29 R35 257
2018-06-29 R48 2152
2018-07-06 R31 480
2018-07-06 R32 477
2018-07-06 R33 843
2018-07-06 R33 238
2018-07-06 R33 605
2018-07-06 R34 143
2018-07-06 R35 260
2018-07-06 R48 2203
2018-07-13 R31 507
2018-07-13 R32 501
2018-07-13 R33 231
2018-07-13 R33 838
2018-07-13 R33 607
2018-07-13 R34 144
2018-07-13 R35 259
2018-07-13 R48 2248
2018-07-20 R31 527
2018-07-20 R32 525
2018-07-20 R33 820
2018-07-20 R33 215
2018-07-20 R33 605
2018-07-20 R34 145
2018-07-20 R35 256
2018-07-20 R48 2272
2018-07-27 R31 552
2018-07-27 R32 552
2018-07-27 R33 807
2018-07-27 R33 206
2018-07-27 R33 601
2018-07-27 R34 146
2018-07-27 R35 249
2018-07-27 R48 2305
2018-08-03 R31 574
2018-08-03 R32 580
2018-08-03 R33 808
2018-08-03 R33 204
2018-08-03 R33 604
2018-08-03 R34 148
2018-08-03 R35 244
2018-08-03 R48 2353
2018-08-10 R31 590
2018-08-10 R32 604
2018-08-10 R33 606
2018-08-10 R33 195
2018-08-10 R33 802
2018-08-10 R34 151
2018-08-10 R35 239
2018-08-10 R48 2386
2018-08-17 R31 611
2018-08-17 R32 633
2018-08-17 R33 609
2018-08-17 R33 190
2018-08-17 R33 799
2018-08-17 R34 153
2018-08-17 R35 238
2018-08-17 R48 2435
2018-08-24 R31 637
2018-08-24 R32 669
2018-08-24 R33 801
2018-08-24 R33 188
2018-08-24 R33 613
2018-08-24 R34 157
2018-08-24 R35 240
2018-08-24 R48 2504
2018-08-31 R31 659
2018-08-31 R32 702
2018-08-31 R33 183
2018-08-31 R33 799
2018-08-31 R33 615
2018-08-31 R34 162
2018-08-31 R35 246
2018-08-31 R48 2567
2018-09-07 R31 679
2018-09-07 R32 734
2018-09-07 R33 624
2018-09-07 R33 182
2018-09-07 R33 806
2018-09-07 R34 166
2018-09-07 R35 250
2018-09-07 R48 2636
2018-09-14 R31 709
2018-09-14 R32 770
2018-09-14 R33 184
2018-09-14 R33 635
2018-09-14 R33 818
2018-09-14 R34 170
2018-09-14 R35 255
2018-09-14 R48 2722
2018-09-21 R31 729
2018-09-21 R32 800
2018-09-21 R33 634
2018-09-21 R33 173
2018-09-21 R33 807
2018-09-21 R34 173
2018-09-21 R35 259
2018-09-21 R48 2768
2018-09-28 R31 763
2018-09-28 R32 836
2018-09-28 R33 829
2018-09-28 R33 648
2018-09-28 R33 181
2018-09-28 R34 177
2018-09-28 R35 262
2018-09-28 R48 2866
2018-10-05 R31 790
2018-10-05 R32 871
2018-10-05 R33 191
2018-10-05 R33 854
2018-10-05 R33 663
2018-10-05 R34 180
2018-10-05 R35 262
2018-10-05 R48 2956
2018-10-12 R31 812
2018-10-12 R32 908
2018-10-12 R33 877
2018-10-12 R33 203
2018-10-12 R33 673
2018-10-12 R34 177
2018-10-12 R35 264
2018-10-12 R48 3037
2018-10-19 R31 825
2018-10-19 R32 934
2018-10-19 R33 218
2018-10-19 R33 896
2018-10-19 R33 678
2018-10-19 R34 177
2018-10-19 R35 262
2018-10-19 R48 3095
2018-10-26 R31 826
2018-10-26 R32 956
2018-10-26 R33 919
2018-10-26 R33 234
2018-10-26 R33 686
2018-10-26 R34 180
2018-10-26 R35 262
2018-10-26 R48 3143
2018-11-02 R31 831
2018-11-02 R32 980
2018-11-02 R33 253
2018-11-02 R33 949
2018-11-02 R33 696
2018-11-02 R34 182
2018-11-02 R35 265
2018-11-02 R48 3208
2018-11-09 R31 835
2018-11-09 R32 991
2018-11-09 R33 272
2018-11-09 R33 974
2018-11-09 R33 702
2018-11-09 R34 181
2018-11-09 R35 266
2018-11-09 R48 3247
2018-11-16 R31 803
2018-11-16 R32 959
2018-11-16 R33 251
2018-11-16 R33 668
2018-11-16 R33 919
2018-11-16 R34 174
2018-11-16 R35 258
2018-11-16 R48 3113
2018-11-23 R31 778
2018-11-23 R32 938
2018-11-23 R33 654
2018-11-23 R33 914
2018-11-23 R33 259
2018-11-23 R34 171
2018-11-23 R35 254
2018-11-23 R48 3054
2018-11-30 R31 752
2018-11-30 R32 914
2018-11-30 R33 642
2018-11-30 R33 905
2018-11-30 R33 263
2018-11-30 R34 168
2018-11-30 R35 253
2018-11-30 R48 2991
2018-12-07 R31 732
2018-12-07 R32 885
2018-12-07 R33 627
2018-12-07 R33 271
2018-12-07 R33 898
2018-12-07 R34 160
2018-12-07 R35 238
2018-12-07 R48 2914
2018-12-14 R31 692
2018-12-14 R32 841
2018-12-14 R33 860
2018-12-14 R33 598
2018-12-14 R33 262
2018-12-14 R34 153
2018-12-14 R35 227
2018-12-14 R48 2773
2018-12-21 R31 676
2018-12-21 R32 818
2018-12-21 R33 584
2018-12-21 R33 858
2018-12-21 R33 274
2018-12-21 R34 150
2018-12-21 R35 223
2018-12-21 R48 2725
2018-12-28 R31 661
2018-12-28 R32 798
2018-12-28 R33 878
2018-12-28 R33 582
2018-12-28 R33 296
2018-12-28 R34 147
2018-12-28 R35 220
2018-12-28 R48 2705
2019-01-04 R31 651
2019-01-04 R32 763
2019-01-04 R33 865
2019-01-04 R33 302
2019-01-04 R33 563
2019-01-04 R34 132
2019-01-04 R35 204
2019-01-04 R48 2614
2019-01-11 R31 620
2019-01-11 R32 729
2019-01-11 R33 303
2019-01-11 R33 557
2019-01-11 R33 861
2019-01-11 R34 127
2019-01-11 R35 196
2019-01-11 R48 2533
2019-01-18 R31 566
2019-01-18 R32 673
2019-01-18 R33 823
2019-01-18 R33 295
2019-01-18 R33 528
2019-01-18 R34 121
2019-01-18 R35 185
2019-01-18 R48 2370
2019-01-25 R31 527
2019-01-25 R32 606
2019-01-25 R33 771
2019-01-25 R33 493
2019-01-25 R33 278
2019-01-25 R34 114
2019-01-25 R35 178
2019-01-25 R48 2197
2019-02-01 R31 468
2019-02-01 R32 522
2019-02-01 R33 451
2019-02-01 R33 241
2019-02-01 R33 692
2019-02-01 R34 105
2019-02-01 R35 172
2019-02-01 R48 1960
2019-02-08 R31 444
2019-02-08 R32 492
2019-02-08 R33 696
2019-02-08 R33 447
2019-02-08 R33 248
2019-02-08 R34 95
2019-02-08 R35 155
2019-02-08 R48 1882
2019-02-15 R31 395
2019-02-15 R32 436
2019-02-15 R33 224
2019-02-15 R33 425
2019-02-15 R33 649
2019-02-15 R34 87
2019-02-15 R35 138
2019-02-15 R48 1705
2019-02-22 R31 354
2019-02-22 R32 385
2019-02-22 R33 199
2019-02-22 R33 399
2019-02-22 R33 598
2019-02-22 R34 79
2019-02-22 R35 122
2019-02-22 R48 1539
2019-03-01 R31 311
2019-03-01 R32 338
2019-03-01 R33 377
2019-03-01 R33 180
2019-03-01 R33 557
2019-03-01 R34 73
2019-03-01 R35 112
2019-03-01 R48 1390
2019-03-08 R31 262
2019-03-08 R32 287
2019-03-08 R33 344
2019-03-08 R33 129
2019-03-08 R33 473
2019-03-08 R34 66
2019-03-08 R35 102
2019-03-08 R48 1190
2019-03-15 R31 245
2019-03-15 R32 268
2019-03-15 R33 336
2019-03-15 R33 135
2019-03-15 R33 471
2019-03-15 R34 62
2019-03-15 R35 96
2019-03-15 R48 1143
2019-03-22 R31 225
2019-03-22 R32 248
2019-03-22 R33 329
2019-03-22 R33 137
2019-03-22 R33 467
2019-03-22 R34 62
2019-03-22 R35 104
2019-03-22 R48 1107
2019-03-29 R31 210
2019-03-29 R32 241
2019-03-29 R33 347
2019-03-29 R33 156
2019-03-29 R33 502
2019-03-29 R34 64
2019-03-29 R35 113
2019-03-29 R48 1130
2019-04-05 R31 209
2019-04-05 R32 240
2019-04-05 R33 166
2019-04-05 R33 357
2019-04-05 R33 523
2019-04-05 R34 64
2019-04-05 R35 119
2019-04-05 R48 1155
2019-04-12 R31 228
2019-04-12 R32 254
2019-04-12 R33 187
2019-04-12 R33 384
2019-04-12 R33 571
2019-04-12 R34 66
2019-04-12 R35 128
2019-04-12 R48 1247
2019-04-19 R31 251
2019-04-19 R32 264
2019-04-19 R33 413
2019-04-19 R33 616
2019-04-19 R33 204
2019-04-19 R34 70
2019-04-19 R35 138
2019-04-19 R48 1339
2019-04-26 R31 279
2019-04-26 R32 290
2019-04-26 R33 442
2019-04-26 R33 666
2019-04-26 R33 224
2019-04-26 R34 75
2019-04-26 R35 152
2019-04-26 R48 1462
2019-05-03 R31 299
2019-05-03 R32 309
2019-05-03 R33 699
2019-05-03 R33 234
2019-05-03 R33 466
2019-05-03 R34 78
2019-05-03 R35 162
2019-05-03 R48 1547
2019-05-10 R31 330
2019-05-10 R32 336
2019-05-10 R33 731
2019-05-10 R33 240
2019-05-10 R33 491
2019-05-10 R34 82
2019-05-10 R35 174
2019-05-10 R48 1653
2019-05-17 R31 353
2019-05-17 R32 364
2019-05-17 R33 762
2019-05-17 R33 249
2019-05-17 R33 513
2019-05-17 R34 89
2019-05-17 R35 186
2019-05-17 R48 1753
2019-05-24 R31 383
2019-05-24 R32 399
2019-05-24 R33 793
2019-05-24 R33 253
2019-05-24 R33 540
2019-05-24 R34 93
2019-05-24 R35 198
2019-05-24 R48 1867
2019-05-31 R31 414
2019-05-31 R32 436
2019-05-31 R33 821
2019-05-31 R33 256
2019-05-31 R33 565
2019-05-31 R34 101
2019-05-31 R35 213
2019-05-31 R48 1986
2019-06-07 R31 440
2019-06-07 R32 469
2019-06-07 R33 586
2019-06-07 R33 256
2019-06-07 R33 842
2019-06-07 R34 111
2019-06-07 R35 227
2019-06-07 R48 2088
2019-06-14 R31 472
2019-06-14 R32 503
2019-06-14 R33 264
2019-06-14 R33 875
2019-06-14 R33 612
2019-06-14 R34 118
2019-06-14 R35 234
2019-06-14 R48 2203
2019-06-21 R31 499
2019-06-21 R32 538
2019-06-21 R33 630
2019-06-21 R33 263
2019-06-21 R33 893
2019-06-21 R34 127
2019-06-21 R35 245
2019-06-21 R48 2301
2019-06-28 R31 526
2019-06-28 R32 568
2019-06-28 R33 259
2019-06-28 R33 907
2019-06-28 R33 648
2019-06-28 R34 134
2019-06-28 R35 255
2019-06-28 R48 2390
2019-07-05 R31 544
2019-07-05 R32 597
2019-07-05 R33 257
2019-07-05 R33 669
2019-07-05 R33 927
2019-07-05 R34 140
2019-07-05 R35 263
2019-07-05 R48 2471
2019-07-12 R31 561
2019-07-12 R32 627
2019-07-12 R33 246
2019-07-12 R33 683
2019-07-12 R33 929
2019-07-12 R34 147
2019-07-12 R35 268
2019-07-12 R48 2533
2019-07-19 R31 575
2019-07-19 R32 650
2019-07-19 R33 921
2019-07-19 R33 229
2019-07-19 R33 692
2019-07-19 R34 151
2019-07-19 R35 271
2019-07-19 R48 2569
2019-07-26 R31 597
2019-07-26 R32 677
2019-07-26 R33 934
2019-07-26 R33 708
2019-07-26 R33 226
2019-07-26 R34 156
2019-07-26 R35 270
2019-07-26 R48 2634
2019-08-02 R31 613
2019-08-02 R32 701
2019-08-02 R33 941
2019-08-02 R33 719
2019-08-02 R33 221
2019-08-02 R34 161
2019-08-02 R35 272
2019-08-02 R48 2689
2019-08-09 R31 634
2019-08-09 R32 729
2019-08-09 R33 939
2019-08-09 R33 725
2022-12-30 R32 839
2022-12-30 R33 770
2022-12-30 R33 270
2022-12-30 R33 1040
2022-12-30 R34 157
2022-12-30 R35 165
2022-12-30 R48 2891
2023-01-06 R31 700
2023-01-06 R32 823
2023-01-06 R33 1067
2023-01-06 R33 772
2023-01-06 R33 295
2023-01-06 R34 153
2023-01-06 R35 160
2023-01-06 R48 2902
2023-01-13 R31 662
2023-01-13 R32 785
2023-01-13 R33 762
2023-01-13 R33 1069
2023-01-13 R33 307
2023-01-13 R34 147
2023-01-13 R35 157
2023-01-13 R48 2820
2023-01-20 R31 622
2023-01-20 R32 754
2023-01-20 R33 310
2023-03-24 R31 343
2023-03-24 R32 437
2023-05-05 R31 422
2023-05-05 R32 497
2023-05-05 R33 1002
2023-05-05 R33 287
2023-05-05 R33 715
2023-05-05 R34 104
2023-05-05 R35 114
2023-05-05 R48 2141
2023-05-12 R31 458
2023-05-12 R32 520
2023-05-12 R33 1023
2023-05-12 R33 290
2023-05-12 R33 734
2023-05-12 R34 112
2023-05-12 R35 127
2023-05-12 R48 2240
</code></pre>
<p>I'd like to calculate a trailing 5-year average of values grouped on 'series' for each period date (i.e., an average is calculated for each series/period record).</p>
<p>For a given period/series record, the values used in the average should be, for each of the previous five years, the value whose day of year is closest to that of the anchor record.</p>
<p>As a simple case, for series R48 on 2023-05-12, the trailing 5-year average should be 1889, using the sample data below:</p>
<pre><code>period series value
2018-05-04 R48 1432
2018-05-11 R48 1538
2018-05-18 R48 1629
2018-05-25 R48 1725
2019-05-03 R48 1547
2019-05-10 R48 1653
2019-05-17 R48 1753
2019-05-24 R48 1867
2019-05-31 R48 1986
2020-05-01 R48 2319
2020-05-08 R48 2422
2020-05-15 R48 2503
2020-05-22 R48 2612
2020-05-29 R48 2714
2021-05-07 R48 2029
2021-05-14 R48 2100
2021-05-21 R48 2215
2021-05-28 R48 2313
2022-05-06 R48 1643
2022-05-13 R48 1732
2022-05-20 R48 1819
2022-05-27 R48 1901
2023-05-05 R48 2141
2023-05-12 R48 2240
</code></pre>
<p>For the time being, this code works (ignoring complications for dates with fewer than 5 years of data, etc.). I would prefer a more direct approach, though. I appreciate all input.</p>
<pre><code>for s in df.series.unique():
a = df[(df.series==s)]
for i in a[df.period.dt.year >= 2013].period:
d = a[(a.period > (i - pd.DateOffset(months=61))) & (a.period.dt.year < i.year)]
d['d'] = abs(a.period.dt.dayofyear - int(i.strftime('%j')))
df.loc[(df.period==i) & (df.series==s),'5yavg'] = d.loc[d.groupby(d.period.dt.year).d.idxmin()].value.mean()
</code></pre>
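<p>As a sanity check, here is a more self-contained helper I've sketched (hypothetical names; it still loops per anchor, but reads more directly). Against a reduced R48 sample it reproduces the 1889 figure from above:</p>

```python
import pandas as pd

def trailing_5y_avg(group: pd.DataFrame, anchor: pd.Timestamp) -> float:
    """Average of the nearest-day-of-year value from each of the 5 prior years."""
    prior = group[(group.period < anchor)
                  & (group.period.dt.year >= anchor.year - 5)
                  & (group.period.dt.year < anchor.year)].copy()
    prior['dist'] = (prior.period.dt.dayofyear - anchor.dayofyear).abs()
    # For each prior year, keep only the row closest to the anchor's day of year.
    nearest = prior.loc[prior.groupby(prior.period.dt.year)['dist'].idxmin()]
    return nearest.value.mean()

# Reduced R48 sample: one row per year, each nearest to 2023-05-12's day of year.
sample = pd.DataFrame({
    'period': pd.to_datetime(['2018-05-11', '2019-05-10', '2020-05-08',
                              '2021-05-14', '2022-05-13', '2023-05-12']),
    'series': 'R48',
    'value': [1538, 1653, 2422, 2100, 1732, 2240],
})
avg = trailing_5y_avg(sample, pd.Timestamp('2023-05-12'))  # 1889.0
```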
|
<python><python-3.x><pandas><algorithm>
|
2023-05-26 01:19:50
| 1
| 1,702
|
Chris
|
76,337,068
| 9,186,481
|
URLError: <urlopen error [Errno 97] Address family not supported by protocol>
|
<p>I am trying to read a <code>secure string</code> variable stored in the AWS <code>parameter store</code> from an AWS <code>lambda</code> function. I followed this <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/ps-integration-lambda-extensions.html" rel="nofollow noreferrer">document</a>, and have already deployed this code as a <code>lambda layer</code>:</p>
<pre><code># parameter_store_extension.py
import urllib.request
import json
import os
from urllib.parse import urlencode
aws_session_token = os.environ.get('AWS_SESSION_TOKEN')
port = '2773'
def get_param(name: str):
"""
    Get a parameter from Systems Manager / Parameter Store
"""
params = dict(name=name)
req = urllib.request.Request(
f"http://localhost:{port}/systemsmanager/parameters/get/?{urlencode(params)}&withDecryption=true")
req.add_header('X-Aws-Parameters-Secrets-Token', aws_session_token)
config = urllib.request.urlopen(req).read()
return json.loads(config)
</code></pre>
<p>However, when I try to use it in my <code>lambda_function.lambda_handle</code></p>
<pre><code>from parameter_store_extension import get_param
SALT = get_param('SALT')
</code></pre>
<p>I get this error in TEST</p>
<pre><code>{
"errorMessage": "<urlopen error [Errno 97] Address family not supported by protocol>",
"errorType": "URLError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.10/importlib/__init__.py\", line 126, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\n",
" File \"/var/task/lambda_function.py\", line 9, in <module>\n SALT = get_param('SALT')\n",
" File \"/opt/python/lib/python3.10/site-packages/parameter_store_extension.py\", line 19, in get_param\n config = urllib.request.urlopen(req).read()\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 216, in urlopen\n return opener.open(url, data, timeout)\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 519, in open\n response = self._open(req, data)\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 536, in _open\n result = self._call_chain(self.handle_open, protocol, protocol +\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 496, in _call_chain\n result = func(*args)\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 1377, in http_open\n return self.do_open(http.client.HTTPConnection, req)\n",
" File \"/var/lang/lib/python3.10/urllib/request.py\", line 1351, in do_open\n raise URLError(err)\n"
]
}
Function Logs
[ERROR] URLError: <urlopen error [Errno 97] Address family not supported by protocol>
Traceback (most recent call last):
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/lambda_function.py", line 9, in <module>
SALT = get_param('SALT')
File "/opt/python/lib/python3.10/site-packages/parameter_store_extension.py", line 19, in get_param
config = urllib.request.urlopen(req).read()
File "/var/lang/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/var/lang/lib/python3.10/urllib/request.py", line 519, in open
response = self._open(req, data)
File "/var/lang/lib/python3.10/urllib/request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/var/lang/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/var/lang/lib/python3.10/urllib/request.py", line 1377, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/var/lang/lib/python3.10/urllib/request.py", line 1351, in do_open
raise URLError(err)[ERROR] URLError: <urlopen error [Errno 97] Address family not supported by protocol>
Traceback (most recent call last):
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/lambda_function.py", line 9, in <module>
SALT = get_param('SALT')
File "/opt/python/lib/python3.10/site-packages/parameter_store_extension.py", line 19, in get_param
config = urllib.request.urlopen(req).read()
File "/var/lang/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/var/lang/lib/python3.10/urllib/request.py", line 519, in open
response = self._open(req, data)
File "/var/lang/lib/python3.10/urllib/request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/var/lang/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/var/lang/lib/python3.10/urllib/request.py", line 1377, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/var/lang/lib/python3.10/urllib/request.py", line 1351, in do_open
raise URLError(err)START RequestId: 41f27d03-65ab-4130-a5e9-2db8ed47e8e2 Version: $LATEST
Unknown application error occurred
Runtime.Unknown
END RequestId: 41f27d03-65ab-4130-a5e9-2db8ed47e8e2
REPORT RequestId: 41f27d03-65ab-4130-a5e9-2db8ed47e8e2 Duration: 5031.52 ms Billed Duration: 5032 ms Memory Size: 128 MB Max Memory Used: 27 MB
</code></pre>
<p>How do I solve this problem?</p>
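<p>One thing I plan to try, based on the errno (97 suggests <code>localhost</code> may be resolving to an IPv6 address that the Lambda sandbox doesn't support): targeting the IPv4 loopback address explicitly. A sketch of the adjusted helper (names are mine):</p>

```python
import json
import os
import urllib.request
from urllib.parse import urlencode

PORT = '2773'
# 'localhost' can resolve to the IPv6 ::1 first, which the Lambda sandbox
# may not support (errno 97); the IPv4 loopback avoids that lookup entirely.
BASE_URL = f'http://127.0.0.1:{PORT}'

def build_request(name: str) -> urllib.request.Request:
    # Build the extension URL with the parameter name and decryption flag.
    qs = urlencode({'name': name, 'withDecryption': 'true'})
    req = urllib.request.Request(f'{BASE_URL}/systemsmanager/parameters/get/?{qs}')
    req.add_header('X-Aws-Parameters-Secrets-Token',
                   os.environ.get('AWS_SESSION_TOKEN', ''))
    return req

def get_param(name: str):
    with urllib.request.urlopen(build_request(name)) as resp:
        return json.loads(resp.read())
```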
|
<python><amazon-web-services><aws-lambda><aws-parameter-store>
|
2023-05-26 01:09:49
| 0
| 361
|
UMR
|
76,337,058
| 9,218,849
|
How to generate sentiment scores using predefined aspects with deberta-v3-base-absa-v1.1 Huggingface model?
|
<p>I have a dataframe where there is text in the first column and a predefined aspect list in another column; however, there are no aspects defined for some texts, for example row 2.</p>
<pre><code>data = {
'text': [
"The camera quality of this phone is amazing.",
"The belt is poor quality",
"The battery life could be improved.",
"The display is sharp and vibrant.",
"The customer service was disappointing."
],
'aspects': [
["camera", "phone"],
[],
["battery", "life"],
["display"],
["customer service"]
]
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to generate two things</p>
<ol>
<li>using the predefined aspects for each text, generate sentiment scores</li>
<li>from the text alone, generate aspects and their sentiment scores using the package</li>
</ol>
<p>Note: This package yangheng/deberta-v3-base-absa-v1.1</p>
<p>1) generate sentiment scores based on the predefined aspects</p>
<p>2) generate both aspects and their respective sentiments</p>
<p><strong>Note: Row 2 does not have predefined aspects</strong></p>
<p><strong>I tried the following and am getting an error</strong></p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import pandas as pd
# Load the ABSA model and tokenizer
model_name = "yangheng/deberta-v3-base-absa-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Generate aspects and sentiments
aspects = []
sentiments = []
for index, row in df.iterrows():
text = row['text']
row_aspects = row['aspects']
aspect_sentiments = []
for aspect in row_aspects:
inputs = tokenizer(text, aspect, return_tensors="pt")
with torch.inference_mode():
outputs = model(**inputs)
predicted_sentiment = torch.argmax(outputs.logits).item()
sentiment_label = model.config.id2label[predicted_sentiment]
aspect_sentiments.append(f"{aspect}: {sentiment_label}")
aspects.append(row_aspects)
sentiments.append(aspect_sentiments)
# Add the generated aspects and sentiments to the DataFrame
df['generated_aspects'] = aspects
df['generated_sentiments'] = sentiments
# Print the updated DataFrame
print(df)
</code></pre>
<p><strong>generic example to use the package</strong></p>
<pre><code>import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "yangheng/deberta-v3-base-absa-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
aspects = ["food", "service"]
text = "The food was great but the service was terrible."
sentiment_aspect = {}
for aspect in aspects:
inputs = tokenizer(text, aspect, return_tensors="pt")
with torch.inference_mode():
outputs = model(**inputs)
scores = F.softmax(outputs.logits[0], dim=-1)
label_id = torch.argmax(scores).item()
sentiment_aspect[aspect] = (model.config.id2label[label_id], scores[label_id].item())
print(sentiment_aspect)
</code></pre>
<p><strong>Desired Output</strong></p>
<p><a href="https://i.sstatic.net/Fe9Lq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fe9Lq.png" alt="enter image description here" /></a></p>
|
<python><nlp><huggingface-transformers><sentiment-analysis><large-language-model>
|
2023-05-26 01:06:19
| 1
| 572
|
Dexter1611
|
76,337,052
| 14,459,861
|
How to get specific phrases from data frame in python?
|
<p>I filtered a column in my data frame that contains the word "no". I want to print the phrases that have "no" in them.</p>
<p>For instance, if this is my dataset:</p>
<pre><code>index | Column 1
------------------------------------------------------------------------
0 | no school for the rest of the year. no homework and no classes
1 | no more worries. no stress and no more anxiety
2 | no teachers telling us what to do
</code></pre>
<p>I want to get words/phrases that come after the word "no". As you can see, the word "no" occurs more than 1 time in some strings. I'd want my output to be</p>
<pre><code>no school
no homework
no classes
no more worries
no stress
no more anxiety
no teachers
</code></pre>
<p>This is my code so far :</p>
<pre><code>#make a copy of the column I'd like to filter
copy = df4['phrases'].copy()
#find rows that contain the word 'no'
nomore = copy.str.contains(r'\bno\b',na=False)
#split words in each string
copy.loc[nomore] = copy[nomore].str.split()
</code></pre>
<p>I'm not sure how to join the phrases. I've tried:</p>
<pre><code>for i in copy.loc[nomore]:
for x in i:
if x == 'no':
print(x,x+1)
</code></pre>
<p>But this does not work. It does not recognize if <code>x == 'no'</code> and it gives an error with <code>x+1</code>.</p>
<p>How can I fix this?</p>
<p>Thank you for taking the time to read my post and assist in any way that you can. I really appreciate it.</p>
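<p><strong>EDIT:</strong> for anyone with the same problem, here is a minimal sketch of one regex-based approach I am experimenting with (plain <code>re</code> on a list of strings; the same pattern should also work per row via <code>Series.str.findall</code>). The optional <code>more</code> group is an assumption based on my sample output, where "more" stays part of the phrase.</p>

```python
import re

texts = [
    "no school for the rest of the year. no homework and no classes",
    "no more worries. no stress and no more anxiety",
    "no teachers telling us what to do",
]

# "no", optionally followed by "more", then the next word
pattern = re.compile(r"\bno\b(?:\s+more)?\s+\w+")
phrases = [p for t in texts for p in pattern.findall(t)]
print(phrases)
```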
|
<python><pandas><string><dataframe>
|
2023-05-26 01:04:39
| 2
| 325
|
shorttriptomars
|
76,337,049
| 1,631,414
|
How to pass keyword arguments to another method or class in python?
|
<p>I'm running into something strange as I'm reading someone's code and trying to make use of it.</p>
<p>I need to instantiate a class they wrote. I was looking at their example which is currently being run in production so me mimicking it should be working correctly. Anyway, in their code, I see them passing a kwarg dict to the class. I try to do the same thing in my code, however, when I step thru the code with pdb, my kwargs is empty. Perhaps I'm missing a very simple concept but I can't understand why it works for them and not for me.</p>
<p>Please excuse any errors from sanitizing the code.</p>
<p>This is their class I am trying to instantiate from my code</p>
<pre><code>class MainConsumer(theading.Thread):
def __init__(self, varA, varB, varC=None, varD=None, **kwargs):
self.varA = varA
self.varB = varB
self.varC = varC
self.varD = varD
self.kwargs = kwargs # I am checking here in pdb and apparently it's empty
</code></pre>
<p>This is my code calling their MainConsumer code, which doesn't appear to be passing extra_args to their class/code. My code is very similar to their code when they instantiate their MainConsumer.</p>
<pre><code>class MyTestConsumer(object):
def __init__(self, varA, varB, varC, varD=None, varE=None):
self.varA = varA
self.varB = varB
extra_args = dict(varC=varC, varD=varD, varE=varE)
self.some_consumer = MainConsumer(varA, varB, **extra_args) # checked here in pdb and extra_args has a dict of the variables from above
</code></pre>
<p>Apparently it's working this way for them. They are able to pass extra_args to MainConsumer and the kwargs == extra_args.
Why is kwargs empty when I check on it after it was instantiated in MyTestConsumer? Am I passing extra_args correctly when instantiating MainConsumer? I made sure my vars were not None but maybe I'm missing something?</p>
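<p><strong>EDIT:</strong> one thing I am now checking: keyword arguments that match named parameters never reach <code>**kwargs</code>. A minimal sketch with stand-in names (not the real classes):</p>

```python
def consumer(varA, varB, varC=None, varD=None, **kwargs):
    """Stand-in for MainConsumer.__init__, showing where keywords land."""
    return {"varC": varC, "varD": varD, "kwargs": kwargs}

extra_args = dict(varC=1, varD=2, varE=3)
result = consumer("a", "b", **extra_args)

# varC and varD bind to the named parameters; only varE is left for **kwargs
print(result)
```

<p>So if the real <code>MainConsumer</code> happens to name every key I put in <code>extra_args</code>, its <code>kwargs</code> would legitimately be empty — the values would be in <code>self.varC</code> etc. instead.</p>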
|
<python><arguments><keyword-argument>
|
2023-05-26 01:04:05
| 1
| 6,100
|
Classified
|
76,337,035
| 1,946,418
|
python - get classname and assign to class variable
|
<p>I can use <code>self.__class__.__name__</code> inside an instance method, but I need to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Utilities:
className = Utilities.__name__
</code></pre>
<p>I am hoping to assign the class name (to do something with it later) to the class variable <code>className</code>, but I am unable to figure it out.</p>
<p>Any ideas anyone? TIA</p>
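<p><strong>EDIT:</strong> a sketch of two workarounds I found: the class name is not bound until the class body finishes executing, but <code>__qualname__</code> is available inside the body, and plain attribute assignment works after it.</p>

```python
class Utilities:
    # __qualname__ is already defined while the class body executes
    className = __qualname__


class Helpers:
    pass


# Alternatively, assign once the class object exists
Helpers.className = Helpers.__name__

print(Utilities.className, Helpers.className)
```

<p>(Note that for nested classes <code>__qualname__</code> includes the dotted path, so it only matches <code>__name__</code> for top-level classes.)</p>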
|
<python>
|
2023-05-26 00:54:23
| 0
| 1,120
|
scorpion35
|
76,336,780
| 7,846,884
|
Unexpected keyword filePath in rule definition
|
<p>My Snakemake workflow fails to launch with this error:</p>
<pre><code>$snakemake -s bsmooth_Snakefile.smk -np --forcerun
SyntaxError in file /scripts/bsmooth_snakemake/bsmooth_Snakefile.smk, line 9:
Unexpected keyword filePath in rule definition (bsmooth_Snakefile.smk, line 9)
</code></pre>
<p>here's how actual snakefile looks like</p>
<pre><code>$ cat bsmooth_Snakefile.smk
configfile: "config/config.yaml"
configfile: "config/samples.yaml"
rule all:
input:
expand("results/bsmooth_fit/{sample}/{sample}.fitted.rda", sample=config["samples"])
rule bsmooth_fit:
input:
filePath=lambda wildcards: config["samples"][wildcards.samples]
output:
bsfit="results/{rule}/{sample}/{sample}.fitted.rda"
params:
rscript=config["BSmooth_fit"]
log:
"logs/{rule}/{sample}.log"
shell:
"Rscript {params.rscript} --sample {wildcards.samples} --file {input.filePath} --outfile {output.bsfit} 2> {log}"
</code></pre>
<p>Please see the attached <code>sample.yaml</code>:</p>
<pre><code>$cat sample.yaml
samples:
Sample1Tumor: methylation_coverage/Sample1Tumor.bismark.cov.gz
Sample1Norm: methylation_coverage/Sample1Norm.bismark.cov.gz
</code></pre>
<p>I'm also attaching the config.yaml that contains the script</p>
<pre><code>$cat config.yaml
BSmooth_fit: scripts/bsmooth_snakemake.r
</code></pre>
<p>Any help would be appreciated</p>
|
<python><snakemake>
|
2023-05-25 23:21:21
| 1
| 473
|
sahuno
|
76,336,655
| 2,988,730
|
Operate on columns with the same name in the second level in a dataframe
|
<p>I have a dataframe with a multi-index on the columns:</p>
<pre><code>df = pd.DataFrame({('a', 'status'): [0.1, 0.2, 0.3],
('a', 'value'): [1.1, 1.2, 1.3],
('b', 'status'): [0.1, 0.2, 0.3],
('b', 'value'): [2.1, 2.2, 2.3],
('c', 'status'): [0.1, 0.2, 0.3]})
</code></pre>
<p>My goal is to multiply all the <code>value</code> columns by a scalar, or add a scalar to them. I have been struggling to find the appropriate expression to use with direct indexing or <code>iloc</code>, but can't seem to find the right one. Here are some failed attempts:</p>
<pre><code>>>> df[(None, 'value')] += 2
...
KeyError: 2
>>> df.iloc[:, (None, 'value')] += 2
...
IndexingError: Too many indexers
</code></pre>
<p>I imagine it's possible, though not very elegant, to make a mask or index of the columns, so I tried:</p>
<pre><code>>>> df.columns.levels[1] == 'value'
array([False, True])
</code></pre>
<p>This does not help with the five actual columns that I have.</p>
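<p><strong>EDIT:</strong> a sketch of what I am trying now with <code>pd.IndexSlice</code>, which selects every column whose second level is <code>value</code>, regardless of how many top-level groups exist:</p>

```python
import pandas as pd

df = pd.DataFrame({("a", "status"): [0.1, 0.2],
                   ("a", "value"): [1.1, 1.2],
                   ("b", "value"): [2.1, 2.2]})

# Add a scalar to all second-level 'value' columns in place
df.loc[:, pd.IndexSlice[:, "value"]] += 2
print(df)
```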
|
<python><pandas><multi-index>
|
2023-05-25 22:43:10
| 2
| 115,659
|
Mad Physicist
|
76,336,566
| 3,380,902
|
Mosaic `st_buffer` doesn't return geometry of type Point or Polygon
|
<p>I am expecting to create a buffer around <code>point</code> geometry that would be of <code>Polygon</code> type. For example, I run the following code to obtain the geometries.</p>
<pre><code>import pyspark.sql.functions as F
df = spark.createDataFrame([
(37.775, -122.418),
(40.714, -74.006),
(41.007, 28.613),
], ["lat", "long"])
df1 = df.withColumn("point_geom", st_point("lat", "lng"))
df1.select('point_geom').show(5)
--------------------+
| point_geom|
+--------------------+
|{1, 0, [[[32.4374...|
|{1, 0, [[[32.4374...|
|{1, 0, [[[32.4374...|
|{1, 0, [[[32.4374...|
|{1, 0, [[[32.4374...|
df2 = df1.withColumn("geom_buffer", st_buffer("point_geom", lit(.2)))
df2.select('geom_buffer').show(5)
--------------------+
| geom_buffer|
+--------------------+
|{5, 4326, [[[32.6...|
|{5, 4326, [[[32.6...|
|{5, 4326, [[[32.6...|
|{5, 4326, [[[32.6...|
|{5, 4326, [[[32.6...|
+--------------------+
</code></pre>
<ol>
<li>Why are these not <code>Point</code> or of <code>Polygon</code> type? How do we convert them to geometry type?</li>
<li>What is the unit of <code>radius</code> in <code>st_buffer</code>? It is not clear from the docs.</li>
<li>What do the numbers in <code>{1, 0} {5, 4326}</code> represent?</li>
</ol>
<p><a href="https://databrickslabs.github.io/mosaic/api/spatial-functions.html" rel="nofollow noreferrer">https://databrickslabs.github.io/mosaic/api/spatial-functions.html</a></p>
|
<python><apache-spark><pyspark><databricks><geospatial>
|
2023-05-25 22:17:28
| 1
| 2,022
|
kms
|
76,336,539
| 504,717
|
How To Fix: TypeError: No positional arguments allowed' in python List object with gRPC
|
<p>I have already looked at this <a href="https://stackoverflow.com/questions/58752816/how-to-fix-typeerror-no-positional-arguments-allowed-in-python-with-grpc">post</a>, which doesn't answer my problem</p>
<p>Here is how my proto file looks like</p>
<pre><code>message GetWarehousesRequest {
CreateAttributes base_wh = 1;
repeated CreateAttributes partnered_wh = 2;
}
</code></pre>
<p>(i am not posting grpc method signatures because they are trivial)</p>
<p><strong>Note</strong> how <code>partnered_wh</code> is an array.</p>
<p>In python i have this method</p>
<pre class="lang-py prettyprint-override"><code> def get_warehouses(
self,
base_wh: create_attributes_pb2,
partnered_whs: List[create_attributes_pb2],
) -> int:
start_time = utcnow().replace(tzinfo=None)
request = get_warehouses_request_pb2(
base_wh=base_wh,
)
for partnered_wh in partnered_whs:
request.partnered_wh.add(partnered_wh)
</code></pre>
<p>In the for loop I am getting the error <em>No positional arguments allowed</em>. I need to convert a Python List to a gRPC repeated field. What is the best way to do so? Can I just assign the list to the object, or is there a better way?</p>
|
<python><arrays><protocol-buffers><grpc><python-3.7>
|
2023-05-25 22:10:21
| 1
| 8,834
|
Em Ae
|
76,336,493
| 926,918
|
Dask DataFrame of strings works too slow on row-wise apply
|
<p>I have a Dask dataframe with no missing values. I am trying to apply a function to all but the first two columns to do the following:</p>
<ul>
<li>col_1 is called R</li>
<li>col_2 is called A</li>
<li>col_i (i>2) contains binary strings</li>
<li>All col_i (i>2) should be translated such that '0' and '1' are replaced by corresponding col_1 and col_2 elements in the row.</li>
<li>The resulting strings will be evaluated</li>
<li>col_1 and col_2 are eventually dropped.</li>
</ul>
<p>Sample input:</p>
<pre><code> R A T U V
0 R A 00 10 11
1 R A 00 10 11
2 R A 00 10 11
3 R A 00 10 11
4 R A 00 10 11
.. .. .. .. .. ..
95 R A 11 00 00
96 R A 11 00 00
97 R A 11 00 00
98 R A 11 00 00
99 R A 11 00 00
</code></pre>
<p>The possible number of strings is very small in col_i (<50), and a simplified version is used below.</p>
<p>Output:</p>
<pre><code> T U V
0 rr ar aa
1 rr ar aa
2 rr ar aa
3 rr ar aa
4 rr ar aa
.. .. .. ..
95 aa rr rr
96 aa rr rr
97 aa rr rr
98 aa rr rr
99 aa rr rr
</code></pre>
<p>Current code:</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client, progress
client = Client(n_workers=20, threads_per_worker=1)
client
import pandas as pd
import dask.dataframe as dd
def score(x):
return str(x[0] + x[1]).lower()
def func(row, i, fs):
r = row[0]
a = row[1]
row[i] = fs(row[i].replace('0',r).replace('1',a))
return row
s1 = ['00']*25 + ['01']*25 + ['10']*25 + ['11']*25
s2 = ['10']*25 + ['11']*25 + ['01']*25 + ['00']*25
df = pd.DataFrame({'R':['R']*100, 'A':['A']*100, 'T':s1, 'U':s2, 'V': reversed(s1)})
ddf = dd.from_pandas(df, npartitions=10)
meta = dict()
for cn in ddf.columns:
meta[cn] = 'object'
ddf.compute()
for i in range(2,len(ddf.columns)):
ddf = ddf.apply(func, args=(i,score,), axis=1, meta=meta)
ddf = ddf.drop(['R','A'], axis=1)
ddf.compute()
</code></pre>
<p>I tried using <code>numba</code>, etc., but did not have success. I would be very grateful for any improvements to the above code.</p>
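<p><strong>EDIT:</strong> a vectorized sketch I am testing on plain pandas, under the assumption that <code>score</code> only concatenates the two substituted characters and lowercases them; the idea would then be wrapped in <code>map_partitions</code> for Dask instead of a row-wise <code>apply</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"R": ["R", "R"], "A": ["A", "A"],
                   "T": ["00", "11"], "U": ["10", "00"]})

for col in ["T", "U"]:
    # Pick R where the digit is '0' and A where it is '1', per character
    first = df["R"].where(df[col].str[0] == "0", df["A"])
    second = df["R"].where(df[col].str[1] == "0", df["A"])
    df[col] = (first + second).str.lower()

df = df.drop(columns=["R", "A"])
print(df)
```

<p>With Dask this would become something like <code>ddf.map_partitions(transform)</code>, where <code>transform</code> runs the loop above on each pandas partition, avoiding per-row Python overhead.</p>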
|
<python><parallel-processing><dask><dask-distributed><dask-dataframe>
|
2023-05-25 21:57:07
| 1
| 1,196
|
Quiescent
|
76,336,434
| 1,812,732
|
How to change TabNine to use single quotes in the Python suggested code
|
<p>I just started using TabNine for python in VS Code. It makes some great suggestions. However, it always uses <strong>double quotes</strong> for the string constants. For me this is irksome. All the other code in my project is using single quotes. Is there a way to tell the AI to please use single quotes for strings?</p>
|
<python><visual-studio-code><tabnine>
|
2023-05-25 21:45:01
| 0
| 11,643
|
John Henckel
|
76,336,153
| 1,952,857
|
External merge-sort, but for dictionaries
|
<p>I have the following case:</p>
<p>Very large file of lines of of integers of random length each, i.e.,</p>
<p><strong>input.dat</strong></p>
<pre><code>1 3 5 6
3 5 7
2 3 7 8
</code></pre>
<p>I need to bring this file to the following form represented by a dictionary:</p>
<p><strong>final dict</strong></p>
<pre><code>{
1: [1],
2: [3],
3: [1, 2, 3],
5: [1, 2],
6: [1],
7: [2, 3],
8: [3]
}
</code></pre>
<p>So every new number that I encounter is a key on my dictionary, and as value it will have a list of all the lines it was found in the input file. But since the input file is too large to fit in memory, while reading it line by line I update the dictionary until it reaches let's say <em>1MB</em>, and once it does I save it to a temporary file, and start again.</p>
<p>So in the fictional case where I could only fit one line of the input file in memory I would have:</p>
<p><strong>partial dict 1</strong></p>
<pre><code>{
1: [1],
3: [1],
5: [1],
6: [1]
}
</code></pre>
<p><strong>partial dict 2</strong></p>
<pre><code>{
3: [2],
5: [2],
7: [2]
}
</code></pre>
<p><strong>partial dict 3</strong></p>
<pre><code>{
2: [3],
3: [3],
7: [3],
8: [3]
}
</code></pre>
<p>By the end of it I'm left with a number of temporary files, each holding a partial dictionary, that I need to merge in order to create a large file containing "<strong>final dict</strong>" again. I am assuming here that one key, value pair of <strong>final dict</strong> will always fit in memory. For each such key, value pair in each file, I scan all the other files, and if I find the key I sort and merge their values. Once I have checked all the files, I store the pair in the final output file.</p>
<p>Working iteratively like this for every file is a bit slow. Is there a better algorithm to merge these files faster?</p>
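<p><strong>EDIT:</strong> a sketch of the direction I am considering: if every partial dictionary is written to its temporary file with the keys sorted, all files can be merged in a single pass with a k-way merge (<code>heapq.merge</code>) instead of rescanning every file per key. In-memory lists stand in for the sorted files here:</p>

```python
import heapq
from itertools import groupby
from operator import itemgetter

# Stand-ins for the temporary files, each already sorted by key
partials = [
    [(1, [1]), (3, [1]), (5, [1]), (6, [1])],
    [(3, [2]), (5, [2]), (7, [2])],
    [(2, [3]), (3, [3]), (7, [3]), (8, [3])],
]

merged = {}
# heapq.merge streams all inputs in key order; groupby batches equal keys,
# so each final key/value pair is built (and could be written out) exactly once
for key, group in groupby(heapq.merge(*partials, key=itemgetter(0)),
                          key=itemgetter(0)):
    merged[key] = sorted(line for _, lines in group for line in lines)

print(merged)
```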
|
<python>
|
2023-05-25 20:44:40
| 0
| 1,636
|
ealione
|
76,336,133
| 12,096,670
|
Multivariate linear hypothesis testing using statsmodels in Python
|
<p>I am trying to run a multivariate regression model using statsmodels, but there appears to be no implementation of that yet, so in the meantime, what I did was to run a manova model on the data like below:</p>
<pre><code># the modules
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.multivariate.multivariate_ols import _MultivariateOLS
from statsmodels.multivariate.multivariate_ols import MultivariateTestResults
# manova
mod = MANOVA.from_formula('se + em ~ pv + ed + fs + hp', df)
mod_mvtest = mod.mv_test()
mod_mvtest.summary_frame
# the output
pd.DataFrame({'Value': {('Intercept', "Wilks' lambda"): 0.35959042974944566,
('Intercept', "Pillai's trace"): 0.6404095702505543,
('Intercept', 'Hotelling-Lawley trace'): 1.780941641569207,
('Intercept', "Roy's greatest root"): 1.780941641569207,
('pv', "Wilks' lambda"): 0.9995874157798398,
('pv', "Pillai's trace"): 0.00041258422016027143,
('pv', 'Hotelling-Lawley trace'): 0.0004127545161604391,
('pv', "Roy's greatest root"): 0.0004127545161604391,
('ed', "Wilks' lambda"): 0.9947762433747658,
('ed', "Pillai's trace"): 0.005223756625234229,
('ed', 'Hotelling-Lawley trace'): 0.005251187550994082,
('ed', "Roy's greatest root"): 0.005251187550994082,
('fs', "Wilks' lambda"): 0.9906903180436285,
('fs', "Pillai's trace"): 0.009309681956371496,
('fs', 'Hotelling-Lawley trace'): 0.009397166588602426,
('fs', "Roy's greatest root"): 0.009397166588602426,
('hp', "Wilks' lambda"): 0.9643721311041931,
('hp', "Pillai's trace"): 0.035627868895806984,
('hp', 'Hotelling-Lawley trace'): 0.036944108759150426,
('hp', "Roy's greatest root"): 0.036944108759150426},
'Num DF': {('Intercept', "Wilks' lambda"): 2,
('Intercept', "Pillai's trace"): 2.0,
('Intercept', 'Hotelling-Lawley trace'): 2,
('Intercept', "Roy's greatest root"): 2,
('pv', "Wilks' lambda"): 2,
('pv', "Pillai's trace"): 2.0,
('pv', 'Hotelling-Lawley trace'): 2,
('pv', "Roy's greatest root"): 2,
('ed', "Wilks' lambda"): 2,
('ed', "Pillai's trace"): 2.0,
('ed', 'Hotelling-Lawley trace'): 2,
('ed', "Roy's greatest root"): 2,
('fs', "Wilks' lambda"): 2,
('fs', "Pillai's trace"): 2.0,
('fs', 'Hotelling-Lawley trace'): 2,
('fs', "Roy's greatest root"): 2,
('hp', "Wilks' lambda"): 2,
('hp', "Pillai's trace"): 2.0,
('hp', 'Hotelling-Lawley trace'): 2,
('hp', "Roy's greatest root"): 2},
'Den DF': {('Intercept', "Wilks' lambda"): 11608.0,
('Intercept', "Pillai's trace"): 11608.0,
('Intercept', 'Hotelling-Lawley trace'): 11608.000000002476,
('Intercept', "Roy's greatest root"): 11608,
('pv', "Wilks' lambda"): 11608.0,
('pv', "Pillai's trace"): 11608.0,
('pv', 'Hotelling-Lawley trace'): 11608.000000002476,
('pv', "Roy's greatest root"): 11608,
('ed', "Wilks' lambda"): 11608.0,
('ed', "Pillai's trace"): 11608.0,
('ed', 'Hotelling-Lawley trace'): 11608.000000002476,
('ed', "Roy's greatest root"): 11608,
('fs', "Wilks' lambda"): 11608.0,
('fs', "Pillai's trace"): 11608.0,
('fs', 'Hotelling-Lawley trace'): 11608.000000002476,
('fs', "Roy's greatest root"): 11608,
('hp', "Wilks' lambda"): 11608.0,
('hp', "Pillai's trace"): 11608.0,
('hp', 'Hotelling-Lawley trace'): 11608.000000002476,
('hp', "Roy's greatest root"): 11608},
'F Value': {('Intercept', "Wilks' lambda"): 10336.585287667678,
('Intercept', "Pillai's trace"): 10336.585287667676,
('Intercept', 'Hotelling-Lawley trace'): 10336.585287667674,
('Intercept', "Roy's greatest root"): 10336.585287667678,
('pv', "Wilks' lambda"): 2.3956272117948725,
('pv', "Pillai's trace"): 2.3956272117951882,
('pv', 'Hotelling-Lawley trace'): 2.3956272117951882,
('pv', "Roy's greatest root"): 2.3956272117951882,
('ed', "Wilks' lambda"): 30.477892545969606,
('ed', "Pillai's trace"): 30.477892545969652,
('ed', 'Hotelling-Lawley trace'): 30.477892545969645,
('ed', "Roy's greatest root"): 30.477892545969652,
('fs', "Wilks' lambda"): 54.54115488024848,
('fs', "Pillai's trace"): 54.54115488024848,
('fs', 'Hotelling-Lawley trace'): 54.54115488024847,
('fs', "Roy's greatest root"): 54.54115488024848,
('hp', "Wilks' lambda"): 214.4236072381088,
('hp', "Pillai's trace"): 214.4236072381091,
('hp', 'Hotelling-Lawley trace'): 214.42360723810904,
('hp', "Roy's greatest root"): 214.42360723810907},
'Pr > F': {('Intercept', "Wilks' lambda"): 0.0,
('Intercept', "Pillai's trace"): 0.0,
('Intercept', 'Hotelling-Lawley trace'): 0.0,
('Intercept', "Roy's greatest root"): 0.0,
('pv', "Wilks' lambda"): 0.09116055879307752,
('pv', "Pillai's trace"): 0.09116055879307752,
('pv', 'Hotelling-Lawley trace'): 0.09116055879303571,
('pv', "Roy's greatest root"): 0.09116055879307752,
('ed', "Wilks' lambda"): 6.284223443412654e-14,
('ed', "Pillai's trace"): 6.284223443412654e-14,
('ed', 'Hotelling-Lawley trace'): 6.284223443411805e-14,
('ed', "Roy's greatest root"): 6.284223443412654e-14,
('fs', "Wilks' lambda"): 2.652650399331105e-24,
('fs', "Pillai's trace"): 2.652650399331105e-24,
('fs', 'Hotelling-Lawley trace'): 2.6526503993306902e-24,
('fs', "Roy's greatest root"): 2.652650399331105e-24,
('hp', "Wilks' lambda"): 3.5971364993426025e-92,
('hp', "Pillai's trace"): 3.5971364993426025e-92,
('hp', 'Hotelling-Lawley trace'): 3.5971364993389216e-92,
('hp', "Roy's greatest root"): 3.5971364993426025e-92}})
</code></pre>
<p>I know that the multivariate omnibus test gives me some ideas about the associations between the predictors and the responses. What I want to do next is some linear hypothesis testing, but can't seem to figure that out in statsmodels. I can't seem to find any documentation with pointers.</p>
<p>For example, what I would like to do is to compare whether the effects of the predictors are the same across both responses. In R, using the car package <code>linearhypothesis()</code> function, I could do something like this:</p>
<pre><code>rhs = matrix(c(1, -1), ncol=2)
linearHypothesis(model=mod, hypothesis.matrix="pv", rhs=rhs,
title = "equality of cofficients for pv predictor",
verbose=TRUE)
#where the rhs = matrix [1, -1] describes the coefficient difference I am testing for, while the
#hypothesis.matrix='pv' restricts the coefficient difference I am testing to the pv predictor only.
</code></pre>
|
<python><linear-regression><statsmodels><multivariate-testing>
|
2023-05-25 20:42:14
| 0
| 845
|
GSA
|
76,335,993
| 20,999,526
|
pytube takes too long to stream a video in Android
|
<p>I am using <em>pytube</em> to stream a video in Android, with the help of <strong>chaquopy</strong>.</p>
<p><em><strong>videofile.py</strong></em></p>
<pre><code>from pytube import YouTube
def video(link):
yt = YouTube(f'https://www.youtube.com/watch?v=' + link)
stream_url = yt.streams.get_highest_resolution().url
return stream_url
</code></pre>
<p><em><strong>VideoActivityPy.java</strong></em></p>
<pre><code>progressBar = findViewById(R.id.pro);
videoView = findViewById(R.id.videoview);
new Thread(() -> {
try {
if (!Python.isStarted()) {
Python.start(new AndroidPlatform(VideoActivityPy.this));
}
python = Python.getInstance();
pyScript = python.getModule("videofile");
videoUri = pyScript.callAttr("video", MyData.videoLink);
runOnUiThread(() -> {
videoView.setSystemUiVisibility(
View.SYSTEM_UI_FLAG_LAYOUT_STABLE
| View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
| View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
| View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
| View.SYSTEM_UI_FLAG_FULLSCREEN
| View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);
Uri uri = Uri.parse(videoUri.toString());
videoView.setVideoURI(uri);
MediaController mediaController = new MediaController(VideoActivityPy.this);
mediaController.setAnchorView(videoView);
mediaController.setMediaPlayer(videoView);
videoView.setMediaController(mediaController);
videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
@Override
public void onPrepared(MediaPlayer mediaPlayer) {
progressBar.setVisibility(View.INVISIBLE);
videoView.start();
}
});
});
}
catch (com.chaquo.python.PyException pyException) {
progressBar.setVisibility(View.INVISIBLE);
Toast.makeText(VideoActivityPy.this, "Check your internet connection", Toast.LENGTH_LONG).show();
}
catch (Exception e) {
progressBar.setVisibility(View.INVISIBLE);
Toast.makeText(VideoActivityPy.this, e.toString(), Toast.LENGTH_LONG).show();
}
}).start();
</code></pre>
<p>At first, I had written the code without using Thread, but the app was not responding. So, I used Thread. Now the app works, the video loads, but it takes about 40-50 seconds to start the video (the video is 1.5 hours long though). Is there any way to reduce the loading time?</p>
<p>Note: I have downloaded the <strong>.tar.gz</strong> file from <em>PyPI</em>, changed the built-in code of pytube, and then written the Gradle config as below:</p>
<pre><code>python {
buildPython "C:/Python38/python.exe"
pip {
install "pytube-15.0.0.tar.gz"
}
}
</code></pre>
<p>I changed the <strong>var_regex</strong> in <em><strong>cipher.py</strong></em></p>
|
<python><android><pytube><chaquopy><android-thread>
|
2023-05-25 20:18:16
| 1
| 337
|
George
|
76,335,866
| 9,218,849
|
Extract key as string from list of dictionary
|
<p>For example, I have the below list of dictionaries (the real one contains thousands of dictionaries) and I need the key string of each dictionary as a row.</p>
<p>The example is just for understanding purposes.</p>
<pre><code>Corp =[{'boy':1},
{},
{'kids': 3, 'parents' :4}]
</code></pre>
<p>I need the below output in pandas, with one column containing the key strings</p>
<p>O/P</p>
<pre><code>|boy |
| |
|kids,parents|
</code></pre>
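<p><strong>EDIT:</strong> a minimal sketch of what I ended up trying — iterating a dict yields its keys, so a comprehension builds the strings, and the list can then be wrapped as <code>pd.DataFrame({'keys': key_strings})</code>:</p>

```python
corp = [{"boy": 1}, {}, {"kids": 3, "parents": 4}]

# Join each dict's keys; an empty dict yields an empty string
key_strings = [",".join(d) for d in corp]
print(key_strings)
```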
|
<python><string><list><dictionary>
|
2023-05-25 19:57:14
| 1
| 572
|
Dexter1611
|
76,335,727
| 9,138,148
|
Botocore Not Recognising Region in Docker Container when AWS credentials are mapped
|
<p>Not sure if more details are required, but I'm running a docker container to execute boto3 commands and I have already been using them outside of the container. So upon mapping them to the container as such:</p>
<pre><code>docker run --name store -v $HOME/.aws:/root/.aws my-store python ./mystore/manage.py test store/ --pattern="tests_*.py"
</code></pre>
<p>You would think it would just work, but it threw:</p>
<pre><code>botocore.exceptions.NoRegionError: You must specify a region.
</code></pre>
<p>Why is it not using <code>~/.aws/config</code>?</p>
|
<python><docker><boto3>
|
2023-05-25 19:39:32
| 1
| 463
|
momo668
|
76,335,403
| 4,866,010
|
Solve inequality in Sympy
|
<p>I'm trying to solve a simple linear inequality in Sympy but can't seem to get it to work.
Basically, I want to be able to calculate the optimal values of <code>a</code> when the inequality happens to be false.</p>
<pre><code>import sympy
num_attractors = 4
attractor_size = 64
network_size = 256
a, s, n = sympy.symbols('a s n')
expr = a * (s + 16) <= n
if not expr.subs([(a, num_attractors), (s, attractor_size), (n, network_size)]):
# compute optimal value for a i.e., the closest integer a s.t. a <= n / (s + 16)
</code></pre>
<p>I have tried using <code>solve(expr, a , sympy.Integer)</code> and <code>sympy.solvers.inequalities.reduce_inequalities(expr,a)</code> but can't seem to make sense of their output.</p>
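<p><strong>EDIT:</strong> since the solvers return a symbolic interval (an upper bound on <code>a</code> after substitution) rather than a number, here is a sketch of computing the closest integer directly with the standard library:</p>

```python
import math

num_attractors = 4
attractor_size = 64
network_size = 256

# The inequality a * (s + 16) <= n rearranges to a <= n / (s + 16)
bound = network_size / (attractor_size + 16)

if num_attractors > bound:
    # Largest integer a still satisfying the inequality
    a_opt = math.floor(bound)
    print(a_opt)
```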
|
<python><sympy><inequality>
|
2023-05-25 18:49:32
| 1
| 449
|
Thomas Tiotto
|
76,335,314
| 1,918,965
|
Why global variable is empty in AppFilter (logging.basicConfig)?
|
<p>I use Python and Logging facility for Python.</p>
<p>I have config.py:</p>
<pre><code>MY_VARIABLE = ''
</code></pre>
<p>I have main.py:</p>
<pre><code>from log import *
from config import MY_VARIABLE
for i in range(3):
global MY_VARIABLE
print(f"main.py Old: '{MY_VARIABLE}'")
MY_VARIABLE = i
print(f"main.py New: '{MY_VARIABLE}'")
logging.info("The process is running...")
</code></pre>
<p>I have log.py:</p>
<pre><code>import logging
from config import MY_VARIABLE
class AppFilter(logging.Filter):
def filter(self, record):
global MY_VARIABLE
print(f"log.py MY_VARIABLE:{MY_VARIABLE}")
record.my_variable = MY_VARIABLE
return True
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s %(levelname)s (%(my_variable)s) > %(message)s',
datefmt='%H:%M:%S'
)
logging.getLogger('').addFilter(AppFilter())
</code></pre>
<p>I get:</p>
<pre><code>main.py Old: ''
main.py New: '0'
log.py MY_VARIABLE:''
main.py Old: '0'
main.py New: '1'
log.py MY_VARIABLE:''
main.py Old: '1'
main.py New: '2'
log.py MY_VARIABLE:''
14:23:28 INFO file № > The process is running...
14:23:28 INFO file № > The process is running...
14:23:28 INFO file № > The process is running...
</code></pre>
<p>Why is the variable <strong>MY_VARIABLE</strong> empty in log.py? <strong>How do I fix it?</strong></p>
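<p><strong>EDIT:</strong> I suspect the from-import: <code>from config import MY_VARIABLE</code> copies the binding at import time, and reassigning the name in main.py rebinds only main's copy, never the one log.py imported. A minimal sketch with a throwaway module object:</p>

```python
import types

# Stand-in for config.py
config = types.ModuleType("config")
config.MY_VARIABLE = ""

# What `from config import MY_VARIABLE` does: copy the current binding
MY_VARIABLE = config.MY_VARIABLE
MY_VARIABLE = 1                      # rebinds this name only

assert config.MY_VARIABLE == ""      # the module attribute is untouched

# The fix: keep the module reference and assign/read the attribute instead
config.MY_VARIABLE = 2
print(config.MY_VARIABLE)
```

<p>So in main.py I would use <code>import config</code> and set <code>config.MY_VARIABLE = i</code>, and the filter would read <code>config.MY_VARIABLE</code>.</p>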
|
<python><logging><global-variables>
|
2023-05-25 18:34:50
| 1
| 1,455
|
Olga
|
76,335,241
| 11,986,167
|
how to identify sequence order and cumsum the transactions?
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'id':[1,1,1,2,2,3,3,4,5,6,6,6,6,6,8,8,9,11,12,12],'letter':['A','A','Q','Q','Q','F','F','G','D','G','I','I','K','Q','E','S','S','I','I','F']})
</code></pre>
<p>My objective is to add another column tx that shows the following: if it finds a Q and thereafter an I, mark it as the 1st transaction. Both Q and I must exist, and the pairing goes last_Q --> first_I.</p>
<p>so the end result should look like this:</p>
<p><a href="https://i.sstatic.net/BV6nk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BV6nk.png" alt="enter image description here" /></a></p>
|
<python><pandas><cumsum>
|
2023-05-25 18:24:24
| 3
| 741
|
ProcolHarum
|
76,335,157
| 7,648
|
Unexpected collapse of dimension when calling np.tile()
|
<p>I am creating a multi-dimension <em>numpy</em> matrix like this:</p>
<pre><code>a = np.array([255, 0])
mins_and_maxes = np.tile(a, [9, 2, 43])
</code></pre>
<p>I'm expecting <code>mins_and_maxes</code> to be a 4-D array with a shape of <em>(9, 2, 43, 2)</em>. However, <code>mins_and_maxes</code> has a shape of <em>(9, 2, 86)</em>. The <em>[255, 0]</em> arrays are sort of being 'dissolved'. (I can't think of a better word. "Exploded"?)</p>
<p>How do I get a matrix of size <em>(9, 2, 43)</em> where every element is a copy of the array of length <em>2</em>, <em>[255, 0]</em>?</p>
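<p><strong>EDIT:</strong> a sketch of the fix I found: with a 3-element <code>reps</code>, the 1-D array is promoted to shape <em>(1, 1, 2)</em> and its last axis is multiplied by 43, giving 86. Appending a trailing 1 to <code>reps</code> tiles the array as a whole unit instead:</p>

```python
import numpy as np

a = np.array([255, 0])

# reps (9, 2, 43, 1): repeat along three new leading axes,
# keep the length-2 axis intact
mins_and_maxes = np.tile(a, (9, 2, 43, 1))
print(mins_and_maxes.shape)
```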
|
<python><python-3.x><numpy><matrix><tile>
|
2023-05-25 18:12:34
| 1
| 7,944
|
Paul Reiners
|
76,335,115
| 19,989,634
|
Stop paypal access token from refreshing so frequently
|
<p>Straightforward and simple question:</p>
<p>What is the best way to stop PayPal access token from refreshing so frequently?</p>
<p>I have tried a curl command with an extended 'expires_in' value. It doesn't work; the token still refreshes after 24 hours.</p>
|
<python><django><paypal><paypal-rest-sdk>
|
2023-05-25 18:05:29
| 1
| 407
|
David Henson
|
76,335,101
| 10,553,098
|
Reinitialization of dataclass object does not reinitialize nested dataclass objects
|
<p>I'm having trouble reinitializing a nested dataclass object.</p>
<p>I have 2 nested dataclasses set up like so:</p>
<pre><code>from dataclasses import dataclass, field
from typing import Dict
@dataclass
class MyNestedClass():
field1: str = ""
field2: int = 0
field3: Dict[str, float] = field(default_factory=dict)
@dataclass
class MyClass():
field1: str = ""
field2: int = 0
field3: Dict[str, float] = field(default_factory=dict)
field4: MyNestedClass = MyNestedClass()
</code></pre>
<p>As part of a unit testing procedure, I would like to initialize a new <code>MyClass</code> object and then run a test. However, I've noticed something odd. When I reinitialize my <code>MyClass</code> object, it somehow remembers the contents of field4. For example:</p>
<pre><code>my_obj = MyClass()
print(my_obj)
my_obj.field2 = 1
my_obj.field3 = {'something': 2.5}
my_obj.field4.field1 = "B"
my_obj.field4.field2 = 2
my_obj.field4.field3 = {'something else': 3.14}
print(my_obj)
my_obj = MyClass()
print(my_obj)
</code></pre>
<p>yields</p>
<pre><code>MyClass(field1='', field2=0, field3={}, field4=MyNestedClass(field1='', field2=0, field3={}))
MyClass(field1='', field2=1, field3={'something': 2.5}, field4=MyNestedClass(field1='B', field2=2, field3={'something else': 3.14}))
MyClass(field1='', field2=0, field3={}, field4=MyNestedClass(field1='B', field2=2, field3={'something else': 3.14}))
</code></pre>
<p>Does the autogenerated dataclass constructor for <code>MyClass</code> not call the constructor for <code>MyNestedClass</code> by default? If not, what is it doing instead, and what is the correct way to do this?</p>
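For context, a minimal sketch of the usual fix (my addition, not from the post): a class-level default like `MyNestedClass()` is evaluated exactly once, when the class body runs, and that single object is shared by every instance. Using `field(default_factory=MyNestedClass)` instead calls the constructor once per new instance. (Newer Python versions, 3.11 and later, reject the original pattern outright with a `ValueError` for this reason.)

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MyNestedClass:
    field1: str = ""
    field2: int = 0
    field3: Dict[str, float] = field(default_factory=dict)

@dataclass
class MyClass:
    field1: str = ""
    field2: int = 0
    field3: Dict[str, float] = field(default_factory=dict)
    # default_factory runs MyNestedClass() once per MyClass instance,
    # instead of evaluating one shared default at class-definition time
    field4: MyNestedClass = field(default_factory=MyNestedClass)

a = MyClass()
a.field4.field1 = "B"
b = MyClass()
print(repr(b.field4.field1))  # '' -- b got its own fresh MyNestedClass
```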
|
<python><nested><initialization><reinitialization>
|
2023-05-25 18:03:09
| 1
| 2,177
|
John
|
76,334,937
| 2,398,040
|
How do I fill in missing factors in a polars dataframe?
|
<p>I have this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
'date':['date1','date1','date1','date2','date3','date3'],
'factor':['A','B','C','B','B','C'], 'val':[1,2,3,3,1,5]
})
</code></pre>
<pre><code>shape: (6, 3)
┌───────┬────────┬─────┐
│ date  ┆ factor ┆ val │
│ ---   ┆ ---    ┆ --- │
│ str   ┆ str    ┆ i64 │
╞═══════╪════════╪═════╡
│ date1 ┆ A      ┆ 1   │
│ date1 ┆ B      ┆ 2   │
│ date1 ┆ C      ┆ 3   │
│ date2 ┆ B      ┆ 3   │
│ date3 ┆ B      ┆ 1   │
│ date3 ┆ C      ┆ 5   │
└───────┴────────┴─────┘
</code></pre>
<p>Some of the factors are missing. I'd like to fill in the gaps with the value 0.</p>
<pre><code>shape: (9, 3)
┌───────┬────────┬─────┐
│ date  ┆ factor ┆ val │
│ ---   ┆ ---    ┆ --- │
│ str   ┆ str    ┆ i64 │
╞═══════╪════════╪═════╡
│ date1 ┆ A      ┆ 1   │
│ date1 ┆ B      ┆ 2   │
│ date1 ┆ C      ┆ 3   │
│ date2 ┆ A      ┆ 0   │
│ date2 ┆ B      ┆ 3   │
│ date2 ┆ C      ┆ 0   │
│ date3 ┆ A      ┆ 0   │
│ date3 ┆ B      ┆ 1   │
│ date3 ┆ C      ┆ 5   │
└───────┴────────┴─────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2023-05-25 17:35:03
| 2
| 1,057
|
ste_kwr
|
76,334,911
| 11,485,896
|
Formula and encoding issues when saving df to Excel
|
<p>I'm developing a script which gathers some YouTube data. The script of course creates a <code>pandas</code> dataframe which is later exported to Excel. I'm experiencing two major issues which somehow seem to be related to each other.</p>
<p>So, Excel 365 allows users to insert an image into a cell using the <code>IMAGE()</code> formula (<a href="https://support.microsoft.com/en-au/office/image-function-7e112975-5e52-4f2a-b9da-1d913d51f5d5" rel="nofollow noreferrer">https://support.microsoft.com/en-au/office/image-function-7e112975-5e52-4f2a-b9da-1d913d51f5d5</a>). The script extracts the YouTube thumbnail link of a video and saves it to a <code>defaultdict(list)</code> dictionary. Next, and in parallel, the <code>IMAGE()</code> formula string is created. After saving the <code>df</code> to <code>.xlsx</code> with a dedicated <code>ExcelWriter</code> (as recommended here: <a href="https://stackoverflow.com/a/58062606/11485896">https://stackoverflow.com/a/58062606/11485896</a>), my formulas <strong>always</strong> end up prefixed with <code>@</code> (as <code>=@IMAGE(...)</code>), <strong>no matter which name and settings (English or local) I use</strong>. It's strange because <code>xlsxwriter</code> requires English names: <a href="https://xlsxwriter.readthedocs.io/working_with_formulas.html" rel="nofollow noreferrer">https://xlsxwriter.readthedocs.io/working_with_formulas.html</a>.</p>
<p>Code (some parts are deleted for better readability):</p>
<pre class="lang-py prettyprint-override"><code>if export_by_xlsxwriter:
# English formula name - recommended by xlsxwriter guide
channel_videos_data_dict["thumbnailHyperlink_en"].append(
fr'=IMAGE("{thumbnail_url}",,1)')
# local formula name
# note: in my local language, formula arguments are separated by ";", not ","
# interestingly, using ";" makes the workbook corrupted
channel_videos_data_dict["thumbnailHyperlink_locale"].append(
fr'=OBRAZ("{thumbnail_url}",,1)')
writer: pd.ExcelWriter = pd.ExcelWriter("data.xlsx", engine = "xlsxwriter")
df.to_excel(writer)
writer.close()  # writer.save() was deprecated and later removed; close() writes the file
</code></pre>
<p>I managed to save this <code>df</code> to <code>.csv</code>. The formulas now work fine (<strong>written in the local language!</strong>), but I lose all the implicit formatting (Excel automatically converts URLs to hyperlinks etc.), the encoding is broken, and some video IDs that start with <code>-</code> are mistakenly treated as formulas (ironically). Code:</p>
<pre class="lang-py prettyprint-override"><code>df.to_csv("data.csv", encoding = "utf-8", sep = ";")
</code></pre>
<p>I thought I can at least deal with encoding issues:</p>
<pre class="lang-py prettyprint-override"><code>df.to_csv("data.csv", encoding = "windows-1250", sep = ";")
</code></pre>
<p>...but I get this error:</p>
<pre class="lang-py prettyprint-override"><code># ironically again, this is "loudly crying face" emoji 😭
UnicodeEncodeError:
'charmap' codec can't encode character '\U0001f62d' in position 305: character maps to <undefined>
</code></pre>
<p>Thus, my questions are:</p>
<ol>
<li><strong>How to save the <code>df</code> using <code>xlsxwriter</code> with formulas preserved and working? (get rid of <code>@</code> in short)</strong></li>
<li><strong>Alternatively, how to save the <code>df</code> to <code>.csv</code> with proper encoding and videos IDs starting with <code>-</code> treated as text and text only?</strong></li>
</ol>
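An aside that is not from the post, addressing only the encoding half of question 2: the `UnicodeEncodeError` occurs because windows-1250 has no mapping for emoji. A commonly used compromise is the `utf-8-sig` codec, which writes a UTF-8 byte-order mark at the start of the file so Excel detects the encoding correctly. A sketch (the column names here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "title": ["my video \U0001f62d"],  # emoji that windows-1250 cannot encode
    "video_id": ["-abc123"],           # ID starting with "-"
})

# utf-8-sig prepends a BOM (EF BB BF) so Excel opens the CSV as UTF-8
df.to_csv("data.csv", encoding="utf-8-sig", sep=";", index=False)
```

For the IDs starting with `-`, one workaround sometimes used is writing them as `="-abc123"`, though that bakes Excel-specific syntax into the CSV.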
|
<python><pandas><excel><excel-formula><xlsxwriter>
|
2023-05-25 17:30:40
| 1
| 382
|
Soren V. Raben
|