Dataset schema (column, dtype, min, max):

  QuestionId          int64           74.8M                 79.8M
  UserId              int64           56                    29.4M
  QuestionTitle       string lengths  15                    150
  QuestionBody        string lengths  40                    40.3k
  Tags                string lengths  8                     101
  CreationDate        string date     2022-12-10 09:42:47   2025-11-01 19:08:18
  AnswerCount         int64           0                     44
  UserExpertiseLevel  int64           301                   888k
  UserDisplayName     string lengths  3                     30
76,509,136
8,372,455
Best practices for designing a reward function in RL
<p>I created a really simple custom Pygame Mario knock-off where I was hoping to train an agent with Stable Baselines 3 algorithms in a custom OpenAI Gym environment.</p> <p>It looks like this, where Mario has to make 3 hops and the question box is the finish line, or best reward. This is the <a href="https://github.com/bbartling/Building-Automation-Game-Bridge/blob/develop/RL/pygameStuff/game.py" rel="nofollow noreferrer">Pygame script</a> I made and then converted into <a href="https://github.com/bbartling/Building-Automation-Game-Bridge/blob/develop/RL/pygameStuff/rl_game.py" rel="nofollow noreferrer">an OpenAI Gym environment</a>. The basic idea is that Mario has to jump his way up to the top without falling off a platform, and if he hits the question box that is the finish, or biggest reward.</p> <p><a href="https://i.sstatic.net/Cbv3H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cbv3H.png" alt="Pygame window" /></a> There are only 3 moves Mario can make: jump, move right, or move left. Would anyone have any advice to try? The Stable Baselines PPO algorithm seems to work, in that the agent does explore the environment when I watch it attempt the game, but no matter what I try, when the learning stops the agent just learns to either jump off the edge immediately where Mario spawns, or jump up and down without any attempt at getting to the top.
It's like my reward function is the biggest hurdle (I think). These are the basics of the OpenAI Gym step function below; would anyone have any tips for a better reward?</p> <pre><code>max_moves = 150
reward = 0

step()  # on start method

# Reward for moving left
reward += -10
# Reward for moving right
reward += -10
# Check if Mario has not exceeded maximum jumps
reward += 25
# Huge penalty for exceeding jump limit
reward -= 1500
# Penalty for standing still
reward -= 1000

# Handle collisions with platforms
for platform in platforms:
    # Check if Mario has jumped up to the next platform
    reward += 2500  # Reward for jumping up to the next platform
    reward -= 150   # Penalty for not jumping enough
    reward -= 5000  # Penalty for falling off
    reward += 5000  # Reward for reaching the question box
    done
</code></pre>
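Not from the question itself, but a hedged sketch of one commonly recommended alternative to large sparse bonuses: a small per-step cost plus potential-based shaping on height climbed (the Ng, Harada and Russell shaping result, which leaves the optimal policy unchanged). The names `prev_y`, `mario_y`, `reached_box`, and `fell_off` are hypothetical observation fields; `mario_y` is assumed to measure height climbed (in Pygame, screen y grows downward, so it would need inverting).

```python
def shaped_reward(prev_y, mario_y, reached_box, fell_off, gamma=0.99):
    """Hedged sketch: dense but small reward, not the asker's actual scheme."""
    reward = -1.0                       # small living cost replaces the -1000s
    reward += gamma * mario_y - prev_y  # potential-based shaping, phi = height
    if reached_box:
        reward += 100.0                 # terminal bonus on a modest scale
    if fell_off:
        reward -= 100.0                 # terminal penalty on the same scale
    return reward
```

Keeping all terms on a similar scale avoids the pattern where one huge penalty (here, -5000 for falling) dominates the return and the agent learns a degenerate policy around it.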
<python><artificial-intelligence><reinforcement-learning><openai-gym><stable-baselines>
2023-06-19 18:15:13
0
3,564
bbartling
76,509,108
49,560
Fetching my own tweets using tweepy gives 453/403
<p>I am trying to write a Python script to fetch my tweets using <strong>tweepy</strong>. I tried 2 versions; they result in 453/403 respectively. Detailed errors below. I have &quot;Free&quot; access to the Twitter APIs and I have a project with an app (see the screenshot below). Do I need paid access to achieve this?</p> <p>First version with error:</p> <pre><code>import tweepy

CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_SECRET = ''

authenticator = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
authenticator.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(authenticator, wait_on_rate_limit=True)

query = 'from:my_screen_name -is:retweet'
tweet_cursor = tweepy.Cursor(api.search_tweets, q=query, lang=&quot;en&quot;, tweet_mode=&quot;extended&quot;).items(100)
tweets = [tweet.full_text for tweet in tweet_cursor]
print(tweets)
</code></pre> <p>Error:</p> <pre><code>453 - You currently have access to a subset of Twitter API v2 endpoints and limited v1.1 endpoints (e.g. media post, oauth) only. If you need access to this endpoint, you may need a different access level.
</code></pre> <p>Second version with error:</p> <pre><code>client = tweepy.Client(
    consumer_key=CONSUMER_KEY,
    consumer_secret=CONSUMER_SECRET,
    access_token=ACCESS_TOKEN,
    access_token_secret=ACCESS_SECRET,
    bearer_token=BEARER_TOKEN
)

query = 'from:my_screen_name -is:retweet'
tweets = client.search_recent_tweets(query=query, tweet_fields=['context_annotations', 'created_at'], max_results=100)

for tweet in tweets.data:
    print(tweet.text)
</code></pre> <p>Error:</p> <pre><code>When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.
</code></pre> <p><a href="https://i.sstatic.net/BZRgw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BZRgw.png" alt="enter image description here" /></a></p>
<python><tweepy><twitter-api-v2>
2023-06-19 18:09:32
0
13,025
simplfuzz
76,509,006
2,868,899
TypeError: Cannot cast DatetimeArray to dtype datetime64[D]
<p>I recently updated my Python install from 3.11.2 to 3.11.3; the pandas version is 2.0.2.</p> <p>I am now getting this error:</p> <pre><code>TypeError: Cannot cast DatetimeArray to dtype datetime64[D]
</code></pre> <p>When I try to perform this:</p> <pre><code>df = df[df['CancelDate'].astype('datetime64[D]') &gt;= (datetime.now() - relativedelta(years=2))]
</code></pre> <p>On this dataframe:</p> <pre><code>mydataset = {
    'CancelDate': [&quot;2021-09-07&quot;, &quot;2021-07-26&quot;, &quot;2021-11-01&quot;, &quot;2015-06-15&quot;]
}

df = pandas.DataFrame(mydataset)
</code></pre> <p>Prior to the update, I was not getting the given error. Could anyone help me realize the error in my ways?</p>
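Not part of the question, but a hedged sketch of the usual workaround: pandas 2.x refuses `astype` casts from the nanosecond `DatetimeArray` to the day-unit dtype, and for a `>=` comparison the day truncation was never needed anyway. Parsing with `pd.to_datetime` and comparing directly avoids the cast; if day truncation genuinely matters, `.dt.normalize()` provides it without changing the dtype.

```python
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta

df = pd.DataFrame({"CancelDate": ["2021-09-07", "2021-07-26", "2021-11-01", "2015-06-15"]})

def filter_recent(frame, cutoff):
    # Parse the strings to datetime64[ns] and compare directly;
    # no cast to a day-resolution dtype is needed for the comparison.
    return frame[pd.to_datetime(frame["CancelDate"]) >= cutoff]

recent = filter_recent(df, datetime.now() - relativedelta(years=2))
```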
<python><pandas>
2023-06-19 17:48:59
3
2,790
OldManSeph
76,508,986
9,363,181
AWS python docker base image via gitlab pipeline raises error
<p>I have followed the <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html" rel="nofollow noreferrer">official documentation</a> for using the recommended <code>AWS base python</code> image, but I am getting an unexpected error which is related to non-AWS base Python Docker images. I did a POC where the image works fine, but when I deploy the code as the Docker image and create a Lambda function out of it, it throws an error. I am using the <code>gitlab</code> pipeline for CI/CD. Below is my Dockerfile:</p> <pre><code>FROM public.ecr.aws/lambda/python:3.8

# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target ${LAMBDA_TASK_ROOT}; \
    bash -c 'mkdir -p ${LAMBDA_TASK_ROOT}/{services,api} &amp;&amp; mkdir -p ${LAMBDA_TASK_ROOT}/services/resources'; \
    yum update -y ; \
    yum install git -y

# Copy function code
COPY src/api/lambda_handler.py ${LAMBDA_TASK_ROOT}
COPY src/api/constants.py src/api/controller.py src/api/handler.py ${LAMBDA_TASK_ROOT}/api/
COPY src/services/*.py ${LAMBDA_TASK_ROOT}/services/
COPY src/services/resources/dag.py ${LAMBDA_TASK_ROOT}/services/resources/

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ &quot;lambda_handler.handler&quot; ]
</code></pre> <p>Now, this raises an unidentifiable error when I hit the Lambda function via <code>API gateway</code>:</p> <pre><code>Endpoint response body before transformations: {&quot;errorType&quot;:&quot;Runtime.ExitError&quot;,&quot;errorMessage&quot;:&quot;RequestId: a8ebb6f3-sa Error: Runtime exited with error: exit status 1&quot;}
</code></pre> <p>and when I check the logs it shows the below:</p> <p><a href="https://i.sstatic.net/Ne1Dk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ne1Dk.png" alt="enter image description here" /></a></p> <p>I also tried adding the below line in the <code>RUN</code> command of the Dockerfile, but no luck.</p> <pre><code>python3.8 -m pip --no-cache-dir install awslambdaric;
</code></pre> <p>I am using the <code>gitlab</code> pipeline to build and push the image, and the Lambda function is created via <code>Terraform</code>. Below is the gitlab stage code:</p> <pre><code>docker_stage:
  image: docker:23.0.6
  stage: docker_stage
  services:
    - docker:23.0.6-dind
  script:
    - cd provisioner
    - apk add python3
    - python3 -m ensurepip --upgrade
    - pip3 install --upgrade pip
    - pip3 install awscli
    - docker build -t dbtsample .
    - docker tag dbtsample:latest &lt;account-id&gt;.dkr.ecr.eu-west-1.amazonaws.com/dbtsample:latest
    - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin &lt;account-id&gt;.dkr.ecr.eu-west-1.amazonaws.com
    - docker push &lt;account-id&gt;.dkr.ecr.eu-west-1.amazonaws.com/dbtsample:latest
</code></pre> <p>Also, below is the Terraform code for creating the Lambda function:</p> <pre><code>resource &quot;aws_lambda_function&quot; &quot;api_lambda&quot; {
  function_name = local.lambda_name
  timeout       = 300
  image_uri     = &quot;${local.account_id}.dkr.ecr.eu-west-1.amazonaws.com/dbtsample:latest&quot;
  package_type  = &quot;Image&quot;
  architectures = [&quot;x86_64&quot;]
  memory_size   = 1024
  role          = aws_iam_role.api_lambda_role.arn

  vpc_config {
    security_group_ids = [aws_security_group.security_group_for_lambda.id]
    subnet_ids         = var.subnet_ids
  }

  environment {
    variables = {
      gitlab_username     = var.gitlab_username
      gitlab_access_token = var.gitlab_access_token
    }
  }
}
</code></pre> <p>Also, I would like to highlight that the Docker image used in the gitlab stage is <code>alpine linux</code> and the Python installed through the commands is <code>3.10</code> (that's what comes with that specific Alpine version), while my project has been built on <code>Python 3.8</code>; but I don't think that would make any difference, because I am just installing Python to install the <code>awscli</code> module. Just an FYI.</p> <p>Also, as suggested by Pierre in the comments, when I checked the list of packages installed I see the below list:</p> <p><a href="https://i.sstatic.net/ZD0Ln.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZD0Ln.png" alt="enter image description here" /></a></p> <p>What else am I missing here?</p>
<python><amazon-web-services><docker>
2023-06-19 17:45:08
1
645
RushHour
76,508,962
5,227,892
How to save a mesh PLY file
<p>I am new to Python. I was given the following code that generates a mesh from source data and plots tree meshes. I would like to save the (<code>x,y,z</code>) of each tree mesh as a separate file using <a href="https://pypi.org/project/plyfile/" rel="nofollow noreferrer">plyfile</a>.</p> <pre><code>for tid, tree in enumerate(forest):
    tree_arr = np.array(tree).reshape((-1, 6))
    vertices = tree_arr[:, 0:3].astype(float)
    radius = tree_arr[:, 3].astype(float)
    parent_id = tree_arr[:, 4].astype(int)
    section_id = tree_arr[:, 5].astype(int)
    if np.max(vertices[:, 2]) &gt; 30:
        fig = plt.figure(figsize=[16, 16])
        ax = fig.add_subplot(111, projection='3d')
        ax.set_box_aspect([np.ptp(vertices[:, 0]), np.ptp(vertices[:, 1]), np.ptp(vertices[:, 2])])
        for i in range(1, parent_id.shape[0], 1):
            j = parent_id[i]
            p0 = vertices[j]
            p1 = vertices[i]
            v = p1 - p0
            mag = norm(v)
            v = v / mag
            not_v = np.array([1, 0, 0])
            if (v == not_v).all():
                not_v = np.array([0, 1, 0])
            n1 = np.cross(v, not_v)
            n1 /= norm(n1)
            n2 = np.cross(v, n1)
            r0 = radius[j]
            r1 = radius[i]
            rv = np.array([r0, r1])[np.newaxis]
            t = np.linspace(0, mag, 2)
            theta = np.linspace(0, 2 * np.pi, 15)
            t, theta = np.meshgrid(t, theta)
            x, y, z = [p0[k] + v[k] * t + rv * np.sin(theta) * n1[k] + rv * np.cos(theta) * n2[k] for k in range(3)]

            # I am trying to save PLY mesh files with the following code.
            # However, it only saves a few points for each tree instead of a mesh.
            x = x.flatten()
            y = y.flatten()
            z = z.flatten()
            vertex = zip(*[x, y, z])
            vertex = list(vertex)
            dtype = [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]
            M = np.array(vertex, dtype)
            el = PlyElement.describe(M, 'vertex')
            name = f'modeled_tree{tree[5]}.ply'
            PlyData([el]).write(name)
</code></pre>
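Not from the question: a hedged guess at why only points appear. The snippet writes only a 'vertex' element, so PLY viewers render bare points; a surface also needs a 'face' element listing triangles over those vertices. The cylinder grid above is 15 theta-rows by 2 t-columns, and this hypothetical helper triangulates such a grid (row-major flattening, matching `x.flatten()`); the plyfile calls in the trailing comments use the real `PlyElement.describe`/`PlyData` API, but the array layout is the part to double-check.

```python
def grid_faces(n_rows, n_cols):
    """Triangle index triples for a vertex grid flattened row-major."""
    faces = []
    for i in range(n_rows - 1):
        for j in range(n_cols - 1):
            a = i * n_cols + j       # top-left corner of the grid cell
            b = a + 1                # top-right
            c = a + n_cols           # bottom-left
            d = c + 1                # bottom-right
            faces.append((a, b, d))  # two triangles per cell
            faces.append((a, d, c))
    return faces

# Writing both elements with plyfile would then look roughly like:
# verts = np.array(list(zip(x, y, z)), dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])
# faces = np.array([(list(f),) for f in grid_faces(15, 2)],
#                  dtype=[('vertex_indices', 'i4', (3,))])
# PlyData([PlyElement.describe(verts, 'vertex'),
#          PlyElement.describe(faces, 'face')]).write(name)
```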
<python><ply-file-format>
2023-06-19 17:41:43
0
435
Sher
76,508,923
1,897,063
How to avoid coordinates shifting in upscaled images?
<p>I have the following image:</p> <p><a href="https://i.sstatic.net/VErnK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VErnK.png" alt="enter image description here" /></a></p> <p>There are 4 violet keypoints in the corners of the yellow area.</p> <ol> <li>top left</li> </ol> <p><a href="https://i.sstatic.net/qidDZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qidDZ.png" alt="enter image description here" /></a></p> <ol start="2"> <li>top right</li> </ol> <p><a href="https://i.sstatic.net/sWpQ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sWpQ5.png" alt="enter image description here" /></a></p> <ol start="3"> <li>bottom right</li> </ol> <p><a href="https://i.sstatic.net/8vp0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8vp0w.png" alt="enter image description here" /></a></p> <ol start="4"> <li>bottom left</li> </ol> <p><a href="https://i.sstatic.net/8LJ1G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8LJ1G.png" alt="enter image description here" /></a></p> <p>I upscale my image by factor 4 and then apply the same scaling to the coordinates of the keypoints. It's a simple scaling, not an affine / perspective correction.</p> <p>Here's my script:</p> <pre><code>import cv2
import numpy as np

src_image = cv2.imread(&quot;source_image.png&quot;)
src_height, src_width = src_image.shape[0:2]

target_width = 968
target_height = 712

target_image = cv2.resize(src_image, (target_width, target_height))

points = [[20, 11], [11, 147], [223, 168], [232, 32]]

for p in points:
    p[0] = round(p[0] * target_width / src_width)
    p[1] = round(p[1] * target_height / src_height)

points = np.array(points)
points = points.reshape((-1, 1, 2))

cv2.polylines(target_image, [points], True, (255, 0, 255), 1)
cv2.imwrite(&quot;resized_image.png&quot;, target_image)
</code></pre> <p><strong>What I expect to see</strong> as a result of this script is the same image resized to a higher resolution (968 x 712) with all 4 keypoints still sticking to the corners of the yellow area.</p> <p><strong>What I get</strong> as a result of this script is the same image resized to a higher resolution (968 x 712) with all 4 keypoints shifted top-left for some reason.</p> <p>For instance, the bottom right point is recessed inside of the yellow area:</p> <p><a href="https://i.sstatic.net/RBnPj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RBnPj.png" alt="enter image description here" /></a></p> <p>I've connected all 4 points for better understanding of the shift:</p> <p><a href="https://i.sstatic.net/QUe2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QUe2n.png" alt="enter image description here" /></a></p> <p>The first question is: what's the origin of the shift? Both sides of the source image are even numbers and the scaling factor is a perfect integer, so it's not an approximation / rounding issue.</p> <p>The main question is how to overcome the problem and keep the keypoints in the very corners.</p>
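A hedged guess at the cause, not confirmed by the question: `cv2.resize` maps pixel centers, so a source pixel at integer coordinate x lands at `(x + 0.5) * scale - 0.5 = x * scale + (scale - 1) / 2` in the output. Plain `x * scale` therefore undershoots by `(scale - 1) / 2`, which at factor 4 is a uniform 1.5 px shift toward the top-left, consistent with what the screenshots show.

```python
def scale_point(x, y, sx, sy):
    """Map a source pixel coordinate through a resize, center-aligned.

    Sketch of the pixel-center convention; apply round() afterward if
    integer coordinates are needed for drawing.
    """
    return ((x + 0.5) * sx - 0.5, (y + 0.5) * sy - 0.5)
```

In the script above this would replace `p[0] * target_width / src_width` with `(p[0] + 0.5) * target_width / src_width - 0.5`, and likewise for `p[1]`.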
<python><opencv><image-processing>
2023-06-19 17:34:44
0
448
SagRU
76,508,499
15,547,208
Vectorisation of a numpy matrix
<p>I'm working on numerical option pricing, and am now trying to implement vectorised inputs for performance reasons. The function I want to vectorise for <code>u</code>-inputs in the <code>GBM</code> class is as follows.</p> <pre><code>def spitzer_recurrence(self, m: int, u, x, t, T, K, num=False):
    p = np.zeros((m, m), dtype=complex)
    for I in range(m):
        p[0, I] = self.ana_a(I, u, t, (T-t)/m) / (I+1)
    out = p[0, m-1]
    for I in range(1, m):
        for J in range(m - I):
            for k in range(J+1):
                p[I, J] += p[I-1, k] * p[0, J-k]
        out += p[I, m-1-I] / math.factorial(I+1)
    return out
</code></pre> <p>where <code>ana_a()</code> is defined as</p> <pre><code>def ana_a(self, k, u, t, stepsize):
    tau_k = k * stepsize
    alpha = self.mu - self.sigma**2/2
    t1 = norm.cdf(-alpha*math.sqrt(tau_k)/(self.sigma))
    f1 = 0.5*np.exp(-0.5*u**2*self.sigma**2*tau_k + 1j*tau_k*u*alpha)
    f2 = erfc(-math.sqrt(tau_k/2) * (u*self.sigma*1j + alpha/self.sigma))
    return t1 + f1 * f2
</code></pre> <p>Trying to pass a NumPy array into this function for <code>u</code> gives an error in the line <code>p[0,I] = self.ana_a(I, u, t, (T-t)/m) / (I+1)</code>, for trying to insert a NumPy array into a matrix of scalars.</p> <p>Is there a way to initialise <code>p</code> as a matrix of zero-valued vectors?</p> <p><em><strong>TEST PROGRAM</strong></em></p> <pre><code>xs = np.linspace(-10, 10, 1000)
gbm = GBM(0.1, 0.2)

a1 = gbm.spitzer_recurrence(20, xs, 100, 0, 1, 100)
a2 = np.array([gbm.spitzer_recurrence(20, x, 100, 0, 1, 100) for x in xs])

print(a1 == a2)
</code></pre> <p>Running this script should result in an array of True, but does not.</p>
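Not from the question: a hedged sketch of the usual answer. Give `p` a trailing axis shaped like `u`, so each `(I, J)` cell holds a whole vector; a scalar `u` still works because `np.shape(scalar) == ()`. The toy `np.exp` line below is a hypothetical stand-in for `ana_a` (which looks like it already broadcasts, since it applies only elementwise operations to `u`). Note also that `out = p[0, m-1]` must become a `.copy()` in the vectorised version, or `out += ...` would write into `p`.

```python
import math
import numpy as np

def toy_recurrence(m, u):
    u = np.asarray(u)
    # trailing axis = shape of u; () for a scalar, (n,) for a vector
    p = np.zeros((m, m) + u.shape, dtype=complex)
    for I in range(m):
        p[0, I] = np.exp(1j * u * I) / (I + 1)  # hypothetical stand-in for ana_a
    out = p[0, m - 1].copy()  # copy: without it, out would alias p[0, m-1]
    for I in range(1, m):
        for J in range(m - I):
            for k in range(J + 1):
                p[I, J] += p[I - 1, k] * p[0, J - k]
        out += p[I, m - 1 - I] / math.factorial(I + 1)
    return out
```

In `spitzer_recurrence` itself the only changes would be `p = np.zeros((m, m) + np.shape(u), dtype=complex)` and `out = p[0, m-1].copy()`.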
<python><numpy><matrix><vectorization>
2023-06-19 16:30:22
1
519
Jord van Eldik
76,508,452
7,936,119
Installing a package installs only the package with version number attached
<p>I migrated my setup.py to pyproject.toml for my package, but now when I try to install my package via pip, it only installs the folder with the version number attached. E.g.: the package is <code>aanalytics2</code> and the folder created in my Lib &gt; site-packages folder is: <code>aanalytics2-0.3.4.dist-info</code></p> <p>Therefore, when I try to do <code>import aanalytics2</code>, it cannot find the package.</p> <p>Here is my config file for the pyproject.toml.</p> <pre><code>[build-system]
requires = [&quot;setuptools&quot;, &quot;setuptools-scm&quot;]
build-backend = &quot;setuptools.build_meta&quot;

[project]
name = &quot;aanalytics2&quot;
authors = [
    {name = &quot;Julien Piccini&quot;, email = &quot;piccini.julien@gmail.com&quot;},
]
description = &quot;Adobe Analytics API 2.0 and 1.4 python wrapper&quot;
readme = &quot;README.md&quot;
requires-python = &quot;&gt;=3.7&quot;
keywords = [&quot;adobe&quot;, &quot;analytics&quot;, &quot;API&quot;, &quot;python&quot;]
license = {text = &quot;Apache License 2.0&quot;}
classifiers = [
    &quot;Intended Audience :: Developers&quot;,
    &quot;License :: OSI Approved :: GNU General Public License v3 (GPLv3)&quot;,
    &quot;Operating System :: OS Independent&quot;,
    &quot;Programming Language :: Python&quot;,
    &quot;Topic :: Scientific/Engineering :: Information Analysis&quot;,
    &quot;Programming Language :: Python :: 3.6&quot;,
    &quot;Programming Language :: Python :: 3.7&quot;,
    &quot;Programming Language :: Python :: 3.8&quot;,
    &quot;Programming Language :: Python :: 3.9&quot;,
    &quot;Development Status :: 4 - Beta&quot;
]
dependencies = [
    'pandas&gt;=0.25.3',
    'pathlib2',
    'pathlib',
    'requests',
    'PyJWT[crypto]',
    'PyJWT',
    &quot;dicttoxml&quot;,
    &quot;pytest&quot;,
    &quot;openpyxl&gt;2.6.0&quot;
]
dynamic = [&quot;version&quot;]

[project.urls]
homepage = &quot;https://github.com/pitchmuc/adobe-analytics-api-2.0&quot;
changelog = &quot;https://github.com/pitchmuc/adobe-analytics-api-2.0/blob/master/docs/releases.md&quot;

[tool.setuptools]
include-package-data = true

[tool.setuptools.packages.find]
where = [&quot;aanalytics2&quot;]

[tool.setuptools.package-data]
mypkg = [&quot;*.pickle&quot;]

[project.optional-dependencies]
dynamic = [&quot;version&quot;]
</code></pre> <p>Is there anything I am missing? I read the documentation, and the &quot;name&quot; keyword is the one to be used, no?</p> <p>My project structure is:</p> <pre><code>./
./aanalytics2/.git/
./aanalytics2/aanalytics2/
./aanalytics2/aanalytics2/__init__.py
./aanalytics2/aanalytics2/__version__.py
./aanalytics2/aanalytics2/aanalytics2.py
./aanalytics2/aanalytics2/aanalytics14.py
./aanalytics2/aanalytics2/config.py
./aanalytics2/aanalytics2/configs.py
./aanalytics2/aanalytics2/supported_tags.pickle
./aanalytics2/aanalytics2/otherPickleOrPyfiles
./aanalytics2/dist/
./aanalytics2/docs/... (md file)
./aanalytics2/test
./aanalytics2/.gitignore
./aanalytics2/LICENSE
./aanalytics2/MANIFEST.in
./aanalytics2/pyproject.toml
./aanalytics2/LICENSE
./aanalytics2/README.md
./aanalytics2/requirements.txt
./aanalytics2/setup.cfg
./aanalytics2/setup.py
</code></pre> <p>Documentation I read: <a href="https://python-poetry.org/docs/pyproject/" rel="nofollow noreferrer">https://python-poetry.org/docs/pyproject/</a> <a href="https://peps.python.org/pep-0621/" rel="nofollow noreferrer">https://peps.python.org/pep-0621/</a></p>
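A hedged reading of the setuptools docs, not confirmed by the question: `[tool.setuptools.packages.find] where` lists the directories to search *in*, not the package to include. With `where = ["aanalytics2"]`, setuptools searches inside the package folder itself, finds no packages there, and builds a wheel containing only metadata, hence the lone `dist-info`. A sketch of the corrected sections, assuming the layout shown above:

```toml
[tool.setuptools.packages.find]
where = ["."]               # search the project root (where pyproject.toml lives)
include = ["aanalytics2*"]  # keep only the package and its subpackages

[tool.setuptools.package-data]
aanalytics2 = ["*.pickle"]  # was "mypkg", a docs placeholder that matches nothing here
```

The trailing `[project.optional-dependencies] dynamic = ["version"]` block also looks like a leftover; `dynamic` belongs under `[project]`, where it is already declared.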
<python><setuptools><python-packaging><pyproject.toml>
2023-06-19 16:21:11
1
370
Pitchkrak
76,508,431
1,601,703
How to correctly call function with standard arguments, *args and **kwargs
<p>This is my code:</p> <pre><code>def ar4(val1, val2, *args, **kwargs):
    print(val1)
    print(val2)
    print(args)
    print(kwargs)

ar4(val1=100, val2=200, &quot;a&quot;, &quot;b&quot;, &quot;c&quot;, k1=1, k2=2, k3=&quot;Three&quot;, k4=True)
</code></pre> <p>In the function call, <code>val1=100, val2=200</code> are for the standard arguments, <code>&quot;a&quot;, &quot;b&quot;, &quot;c&quot;</code> are for args, and <code>k1=1, k2=2, k3=&quot;Three&quot;, k4=True</code> for kwargs.</p> <p>It produces the error:</p> <pre><code>  Cell In[28], line 7
    ar4(val1=100, val2=200, &quot;a&quot;, &quot;b&quot;, &quot;c&quot;, k1=1, k2=2, k3=&quot;Three&quot;, k4=True)
                                                                            ^
SyntaxError: positional argument follows keyword argument
</code></pre> <p>This works well:</p> <pre><code>def ar4(val1, val2, *args, **kwargs):
    print(val1)
    print(val2)
    print(args)
    print(kwargs)

ar4(100, 200, &quot;a&quot;, &quot;b&quot;, &quot;c&quot;, k1=1, k2=2, k3=&quot;Three&quot;, k4=True)
</code></pre> <p>but I want to pass <code>val1</code> and <code>val2</code> in the form <code>val1=100, val2=200</code>, and not just <code>100, 200</code>.</p> <br> <p><strong>Question:</strong> How do I correct my code above so I can call the function with standard arguments with argument names provided in the function call, plus args and kwargs, at the same time?</p>
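Not from the question: a hedged sketch of the standard fix. Python's grammar forbids positional arguments after keyword arguments in a call, so `ar4(val1=100, val2=200, "a", ...)` can never work with the original signature. Moving `val1`/`val2` *after* `*args` makes them keyword-only, which is exactly the calling style wanted; the `return` line is an addition for easy checking and was not in the original.

```python
def ar4(*args, val1, val2, **kwargs):
    """val1 and val2 are keyword-only because they follow *args."""
    print(val1)
    print(val2)
    print(args)
    print(kwargs)
    return val1, val2, args, kwargs  # added for easy checking

ar4("a", "b", "c", val1=100, val2=200, k1=1, k2=2, k3="Three", k4=True)
```

The positionals go first in the call, but `val1` and `val2` can now only be passed by name, so `val1=100, val2=200` anywhere after them is legal.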
<python><python-3.x>
2023-06-19 16:17:49
2
7,080
vasili111
76,508,368
3,247,006
"empty_value_display" doesn't work for an empty string on "change list" page in Django Admin
<p>I defined <code>name</code> field which accepts an empty string in <code>Person</code> model as shown below. *I use <strong>Django 4.2.1</strong>:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot;

class Person(models.Model):
    name = models.CharField(max_length=20, blank=True)

    def __str__(self):
        return self.name
</code></pre> <p>Then, I set <code>&quot;-empty-&quot;</code> to <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/admin/#django.contrib.admin.AdminSite.empty_value_display" rel="nofollow noreferrer">AdminSite.empty_value_display</a>, <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin.empty_value_display" rel="nofollow noreferrer">ModelAdmin.empty_value_display</a>, <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/admin/#the-display-decorator" rel="nofollow noreferrer">@admin.display()'s empty_value</a> or <code>view_name.empty_value_display</code> as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;admin.py&quot;

admin.site.empty_value_display = &quot;-empty-&quot; # Or

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = (&quot;view_name&quot;,)
    empty_value_display = &quot;-empty-&quot; # Or

    @admin.display(empty_value=&quot;-empty-&quot;) # Or
    def view_name(self, obj):
        return obj.name
    view_name.empty_value_display = '-empty-' # Or
</code></pre> <p>But, <code>-empty-</code> is not shown even though I saved an empty string, as shown below:</p> <p><a href="https://i.sstatic.net/hytkk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hytkk.png" alt="enter image description here" /></a></p> <p>So, how can I show <code>-empty-</code> when I save an empty string?</p>
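Not from the question: a hedged explanation. As far as I can tell, the changelist substitutes `empty_value_display` only when the displayed value is `None`; an empty string `""` is a real value and renders as blank. Coercing `""` to `None` in the display callable makes the substitution kick in. The pure-Python core of that idea, with the Django wiring sketched in comments (not executed here):

```python
def displayed_name(name):
    """Return None instead of "" so empty_value_display would be used."""
    return name or None

# In admin.py the same idea would look roughly like:
#
# @admin.register(Person)
# class PersonAdmin(admin.ModelAdmin):
#     list_display = ("view_name",)
#
#     @admin.display(empty_value="-empty-")
#     def view_name(self, obj):
#         return obj.name or None
```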
<python><django><string><django-models><django-admin>
2023-06-19 16:10:08
2
42,516
Super Kai - Kazuya Ito
76,508,159
7,130,689
Python script NameError in Power BI
<p>I am getting the below NameError in the Power BI Desktop application. I have already explored some Stack Overflow threads, but no help. Thanks in advance. I also put the same in the <a href="https://community.fabric.microsoft.com/t5/Issues/python-script-integration-issues/idi-p/3291341#M96651" rel="nofollow noreferrer">Power BI community</a>.</p> <p>Please refer to the screenshot below.</p> <p><a href="https://i.sstatic.net/3PdeF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3PdeF.png" alt="enter image description here" /></a></p> <p><strong>Python code:</strong></p> <pre><code>import matplotlib.pyplot as plt

dataset = pandas.DataFrame(Agent, Resolved)
dataset = dataset.drop_duplicates()

dataset.plot(kind='scatter', x='Agent', y='Resolved', color='green')
plt.show()
</code></pre> <p><strong>Options tried:</strong></p> <ol> <li>Uninstall and reinstall the Power BI Desktop application.</li> <li>Try with a different version of Python.</li> <li>Even tried with a Python virtual environment, but no success.</li> </ol> <p>Looks like a Python and Power BI version compatibility issue.</p> <p><strong>Error stack-trace:</strong></p> <pre><code>Feedback Type: Frown (Error)

Error Message: Python script error. &lt;pi&gt;NameError: name 'Agent' is not defined &lt;/pi&gt;

Stack Trace: JavaScript: Error Microsoft.PowerBI.ExploreServiceCommon.ScriptHandlerException: Python script error. NameError: name 'Agent' is not defined ---&gt; Microsoft.PowerBI.Scripting.Python.Exceptions.PythonScriptRuntimeException: Python script error.
NameError: name 'Agent' is not defined at Microsoft.PowerBI.Scripting.Python.PythonScriptWrapper.RunScript(String originalScript, Int32 timeoutMs) at Microsoft.PowerBI.Client.Windows.Python.PythonScriptHandler.GenerateVisual(ScriptHandlerOptions options) --- End of inner exception stack trace --- at Microsoft.PowerBI.Client.Windows.Python.PythonScriptHandler.GenerateVisual(ScriptHandlerOptions options) at Microsoft.PowerBI.ExploreHost.SemanticQuery.ScriptVisualCommandFlow.RunInternal(Stream dataShapeResultStream, QueryBindingDescriptor&amp; bindingDescriptor) at Microsoft.PowerBI.ExploreHost.SemanticQuery.ScriptVisualCommandFlow.Run(Stream dataShapeResultStream, QueryBindingDescriptor&amp; bindingDescriptor) at Microsoft.PowerBI.ExploreHost.SemanticQuery.ExecuteSemanticQueryFlow.TransformDataShapeResult(QueryCommand transformCommand, SemanticQueryDataShapeCommand command, Stream dataShapeResultStream, QueryBindingDescriptor&amp; bindingDescriptor) at Microsoft.PowerBI.ExploreHost.SemanticQuery.ExecuteSemanticQueryFlow.ExecuteDataQuery(IQueryResultDataWriter queryResultDataWriter, EngineDataModel engineDataModel, DataQuery query, Int32 queryId, ServiceErrorStatusCode&amp; serviceErrorStatusCode, CancellationToken cancelToken) at Microsoft.PowerBI.ExploreHost.SemanticQuery.ExecuteSemanticQueryFlow.ProcessAndWriteSemanticQueryCommands(IQueryResultsWriter queryResultsWriter, IList`1 queries, HashSet`1 pendingQueriesToCancel, EngineDataModel engineDataModel) Stack Trace Message: Python script error. 
&lt;pi&gt;NameError: name 'Agent' is not defined &lt;/pi&gt; Invocation Stack Trace: at Microsoft.Mashup.Host.Document.ExceptionExtensions.GetCurrentInvocationStackTrace() at Microsoft.Mashup.Client.UI.Shared.StackTraceInfo..ctor(String exceptionStackTrace, String invocationStackTrace, String exceptionMessage) at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.GetStackTraceInfo(Exception e) at Microsoft.PowerBI.Client.Windows.Telemetry.PowerBIUserFeedbackServices.ReportException(IWindowHandle activeWindow, IUIHost uiHost, FeedbackPackageInfo feedbackPackageInfo, Exception e, Boolean useGDICapture) at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.&lt;&gt;c__DisplayClass14_0.&lt;HandleException&gt;b__0() at Microsoft.Mashup.Client.UI.Shared.UnexpectedExceptionHandler.HandleException(Exception e) at Microsoft.PowerBI.Client.PowerBIUnexpectedExceptionHandler.HandleException(Exception e) at Microsoft.PowerBI.Client.Windows.Utilities.PowerBIFormUnexpectedExceptionHandler.HandleException(Exception e) at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor) at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments) at System.Delegate.DynamicInvokeImpl(Object[] args) at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj) at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbacks() at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous) at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args) at System.Windows.Forms.WindowsFormsSynchronizationContext.Send(SendOrPostCallback d, Object state) at Microsoft.PowerBI.Client.Windows.Services.UIBlockingService.AllowModalDialogs(Action 
action) at Microsoft.PowerBI.Client.Windows.HostServiceDispatcher.&lt;&gt;c__DisplayClass14_0.&lt;ExecuteOnUIThreadAndHandlePromise&gt;b__0() at Microsoft.PowerBI.Client.Windows.HostServiceDispatcher.ExecuteOnUIThreadAndHandlePromise[T](Func`1 func, IPromiseStore promiseStore, Int64 promiseHandle) at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor) at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at Microsoft.PowerBI.Client.Windows.WebView2.WebView2Interop.InvokeCs(InteropCall call) at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler exceptionHandler, Action action) at System.EventHandler`1.Invoke(Object sender, TEventArgs e) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG&amp; msg) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG&amp; msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Form.ShowDialog(IWin32Window owner) at Microsoft.Mashup.Client.UI.Shared.WindowManager.ShowModal[T](T dialog, Func`1 showModalFunction) at Microsoft.PowerBI.Client.Program.&lt;&gt;c__DisplayClass6_0.&lt;RunApplication&gt;b__0() at Microsoft.PowerBI.Client.Windows.IExceptionHandlerExtensions.&lt;&gt;c__DisplayClass3_0.&lt;HandleExceptionsWithNestedTasks&gt;b__0() at Microsoft.Mashup.Host.Document.ExceptionHandlerExtensions.HandleExceptions(IExceptionHandler 
exceptionHandler, Action action) at Microsoft.PowerBI.Client.Program.RunApplication(String[] args) at Microsoft.PowerBI.Client.Program.Main(String[] args) PowerBINonFatalError: {&quot;AppName&quot;:&quot;PBIDesktop&quot;,&quot;AppVersion&quot;:&quot;2.118.621.0&quot;,&quot;ModuleName&quot;:&quot;&quot;,&quot;Component&quot;:&quot;&quot;,&quot;Error&quot;:&quot;Error&quot;,&quot;MethodDef&quot;:&quot;&quot;,&quot;ErrorOffset&quot;:&quot;-1:-1&quot;,&quot;ErrorCode&quot;:&quot;&quot;} Snapshot Trace Logs: C:\Users\AppData\Local\Microsoft\Power BI Desktop\FrownSnapShotc7961737-aeb3-43dc-8a36-8ba93ecf010a.zip Model Default Mode: Import Model Version: PowerBI_V3 Performance Trace Logs: C:\Users\AppData\Local\Microsoft\Power BI Desktop\PerformanceTraces.zip Enabled Preview Features: PBI_enableWebView2 PQ_WebView2Connector PBI_sparklines PBI_scorecardVisual PBI_NlToDax PBI_fieldParametersSuperSwitch PBI_horizontalFusion PBI_setLabelOnExportPdf PBI_newCard Disabled Preview Features: PBI_shapeMapVisualEnabled PBI_SpanishLinguisticsEnabled PBI_qnaLiveConnect PBI_b2bExternalDatasetSharing PBI_enhancedTooltips PBI_angularRls PBI_onObject PBI_dynamicFormatString PBI_oneDriveSave PBI_oneDriveShare PBI_gitIntegration Disabled DirectQuery Options: TreatHanaAsRelationalSource Cloud: GlobalCloud PowerBIUserFeedbackServices_IsReported: True </code></pre>
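Not from the question: a hedged sketch of the likely fix. In a Power BI Python visual, Power BI itself injects a pandas DataFrame named `dataset` whose *columns* are the fields dragged into the visual's Values bucket; `Agent` and `Resolved` are column names, not Python variables, so `pandas.DataFrame(Agent, Resolved)` raises the NameError. The hand-made frame below is a stand-in so the sketch runs outside Power BI; inside Power BI only the last three lines would be kept.

```python
import pandas
import matplotlib
matplotlib.use("Agg")  # headless backend, only needed when running outside Power BI
import matplotlib.pyplot as plt

# Stand-in for the frame Power BI injects; do NOT rebuild `dataset` in the visual.
dataset = pandas.DataFrame({"Agent": [1, 2, 1], "Resolved": [3, 5, 3]})

dataset = dataset.drop_duplicates()
dataset.plot(kind="scatter", x="Agent", y="Resolved", color="green")
plt.show()
```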
<python><python-3.x><powerbi><windows-10><powerbi-desktop>
2023-06-19 15:40:26
1
643
devesh
76,508,091
8,325,015
How to prevent Python Google Cloud Functions logs from splitting into multiple lines
<p>When I moved my Google Cloud Functions to Python 3.11, logs started to be split across multiple lines. For example, if a function results in an error, the traceback is split into many lines, which makes it really hard to debug. Example in the screenshot below.</p> <p>Previously I deployed functions in Python 3.7 and traceback errors were all in one log entry.</p> <p>How can I prevent that?</p> <p><a href="https://i.sstatic.net/3BgSa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3BgSa.png" alt="enter image description here" /></a></p>
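A commonly suggested workaround (a sketch of the idea, not code from the question): Cloud Logging treats each stdout line as a separate entry, so a multi-line traceback becomes many entries. Emitting one JSON object per record ("structured logging") keeps the whole traceback inside a single entry, because the newlines live inside a JSON string. The formatter below is an illustrative stand-in, not the `google-cloud-logging` API; the `severity` field name is the one Cloud Logging conventionally recognizes.

```python
import json
import logging
import traceback

class OneLineJsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line.

    Cloud Logging treats one stdout line as one entry, so a traceback
    embedded inside the JSON string stays in a single log entry.
    """
    def format(self, record):
        entry = {"severity": record.levelname, "message": record.getMessage()}
        if record.exc_info:
            entry["message"] += "\n" + "".join(
                traceback.format_exception(*record.exc_info))
        return json.dumps(entry)

def make_logger(stream):
    # A fresh logger writing JSON lines to `stream` (sys.stdout in a function).
    logger = logging.getLogger("structured-fn")
    logger.handlers.clear()
    handler = logging.StreamHandler(stream)
    handler.setFormatter(OneLineJsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.propagate = False
    return logger
```

The officially documented route, if you prefer not to hand-roll this, is the `google-cloud-logging` client's `Client().setup_logging()`, which wires the standard `logging` module to structured entries.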
<python><function><google-cloud-platform><logging>
2023-06-19 15:30:59
1
1,371
M.wol
76,508,047
20,176,161
Looping through folders to read shapefiles
<p>I would like to adapt a function that reads many shapefiles in one specific folder so that it reads many shapefiles across many folders.</p> <p>Here is the function that reads multiple shapefiles in ONE folder:</p> <pre><code>def import_shapes_list(path_to_data:str,shapes_folder:str,crs:str,current_crs:str) -&gt;gpd.GeoDataFrame: &quot;&quot;&quot; &quot;&quot;&quot; files = glob.iglob(path_+'*.shp') gdfs = [] for file in files: print(file) gdf = read_gdf(file,crs,current_crs=current_crs) gdf.columns = map(str.lower, gdf.columns) gdfs.append(gdf) geomap = gpd.GeoDataFrame( pd.concat( gdfs, ignore_index=True) ) return geomap geomap_nord=import_shapes_list(path_to_data=path_to_data,shapes_folder=shapes_folder_nord, crs='EPSG:4326',current_crs='EPSG:26191') </code></pre> <p>The output is this:</p> <pre><code>./Source data/...../shapefile1.shp ./Source data/...../shapefile2.shp ./Source data/...../shapefile3.shp </code></pre> <p>I have tried to adapt it so that it loops through multiple folders. Here is what I have tried:</p> <pre><code>import os path_to_data = './Source data/' rootdir = path_to_data + '...2021/' files = glob.iglob(rootdir+'*.shp') gdfs = [] for subdir, dirs, files in os.walk(rootdir): for file in files: print(os.path.join(subdir, file)) </code></pre> <p>the output is:</p> <pre><code>folder1/xxxx.cpg folder1/xxxx.dbf folder1/xxxx.prj folder1/xxxx.qmd folder1/xxxx.shp folder1/xxxx.shx folder2/yyyy.cpg folder2/yyyy.dbf folder2/yyyy.prj folder2/yyyy.shp </code></pre> <p>My issue is that it reads everything inside each folder when it should be reading only shapefiles (<code>.shp</code>).</p> <p>How can I adapt the function above to make it read only the shapefiles inside each folder?</p>
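One way to keep only the `.shp` files (a sketch; `find_shapefiles` is a name I made up, not from the question): keep the `os.walk` loop but filter on the file extension, or let `glob` recurse with `**`.

```python
import os

def find_shapefiles(rootdir):
    """Walk rootdir and collect only the .shp files, at any depth."""
    shp_paths = []
    for subdir, dirs, files in os.walk(rootdir):
        for file in files:
            if file.lower().endswith(".shp"):  # skip .dbf, .prj, .cpg, ...
                shp_paths.append(os.path.join(subdir, file))
    return sorted(shp_paths)

# Equivalent one-liner with recursive globbing:
# import glob; glob.iglob(os.path.join(rootdir, "**", "*.shp"), recursive=True)
```

Each returned path can then be fed to the existing `read_gdf`-based loop unchanged.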
<python><geopandas>
2023-06-19 15:25:35
2
419
bravopapa
76,507,874
2,415,237
Scikit-learn LinReg triggers Scipy 'MatReadWarning'
<p>The code I have is short and concise, but I cannot due to proprietary reasons share the .mat files associated. I have tried to create .mat files using matlab in a similar fashion, but the newly created mat files do not cause the warning.</p> <p>Warning:</p> <blockquote> <p>packages\scipy\io\matlab_mio.py:227: MatReadWarning: Duplicate variable name &quot;None&quot; in stream - replacing previous with new Consider mio5.varmats_from_mat to split file into single variable files<br /> matfile_dict = MR.get_variables(variable_names)</p> </blockquote> <p>There is no 'None' variable name. I've read several different SO answers, and wonder if <a href="https://stackoverflow.com/questions/56456945/scipy-loadmat-memory-leak," title="scipy-loadmat-memory-leak">this</a> post about memory leaks is related. Anyway, curious if others have seen this before.</p> <p>I wonder if it's how the data is allocated in memory by the underlying libraries.</p> <pre><code>import scipy.io as sio from sklearn.linear_model import LinearRegression as LR import numpy as np import time def linreg(x,y): model = LR().fit(x,y) r = model.score(x,y) fitx = [min(x),max(x)] fity = model.predict(fitx) return r, fitx, fity if __name__ == '__main__': fname = 'file{}.mat' for i in range(5): print(&quot;Loop {}&quot;.format(i)) mat = sio.matlab.loadmat(fname.format(i)) print(&quot;Mat Loaded&quot;) x = mat['x'].tolist()[0] y = mat['y'].tolist()[0] lx = np.array(x).reshape(-1,1) ly = np.array(y).reshape(-1,1) r, fx, fy = linreg(lx,ly) print(&quot;sklearn Called&quot;) #time.sleep(1) </code></pre> <p>Here are some results that are interesting.</p> <p>If I comment out <code>r, fx, fy = linreg(lx,ly)</code>, the scipy MatReadWarning does not appear. 
It only appears if a call to sklearn is done.</p> <p>if <code>time.sleep(1)</code> is called, I get error message like this for all loops</p> <ul> <li>Loop n</li> <li>Mat Loaded</li> <li>Sklearn Called</li> <li>Warning Msg</li> </ul> <p>If I comment out the sleep call I get more randomness</p> <ul> <li>Loop n</li> <li>Mat Loaded</li> <li>SKLearn Called</li> <li>Loop n+1</li> <li>Mat Loaded</li> <li>SKLearn Called</li> <li>Loop n+2</li> <li>Mat Loaded</li> <li>SKLearn Called</li> <li>Mat Warning 1</li> <li>Mat Warning 2</li> <li>Mat Warning 3</li> </ul> <p>Why would the call into sklearn impact scipy mat read, why doesn't the error trigger on the load, and why is this error message actually telling me to use a <a href="https://github.com/scipy/scipy/blob/main/scipy/io/matlab/mio5.py" rel="nofollow noreferrer">deprecated</a> method?</p> <p>scikit-learn 1.2.2 py310hd77b12b_1<br /> scipy 1.10.1 py310hb9afe5d_0</p> <p>I will see if I can create a spoof mat to make the example complete. Until then, I welcome any insights.</p>
<python><scikit-learn><scipy>
2023-06-19 15:02:00
0
2,056
Chemistpp
76,507,866
2,125,671
How to set a breakpoint in Visual Studio Code when an object changes?
<p>I'm working on a Python/FastAPI application where I have this endpoint:</p> <pre><code>from fastapi import Request @router_phase.get(&quot;/hello&quot;, response_class=HTMLResponse) def hello( request: Request, ): ... </code></pre> <p><code>request</code> has an attribute <code>user</code> which tracks user information; <code>user</code> in turn has an attribute, <code>ip_address</code> (for example), which is not <code>None</code>.</p> <p>Somewhere the application hits an Exception, and Visual Studio Code arrives at the following exception handler:</p> <pre><code>@app.exception_handler(500) def custom_500_handler( request: Request, error: Exception, ): ... </code></pre> <p>Now <code>request</code> still has an attribute <code>user</code>, which is the same object as in <code>hello</code> (I checked with <code>id(request.user)</code>), but <code>request.user.ip_address</code> is <code>None</code> now.</p> <p>The question is: is it possible to set a breakpoint so that the debugger stops where <code>ip_address</code> changes?</p>
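A hedged aside, not from the question: the Python debugger used by VS Code has no native data watchpoint ("break when this value changes"). A common workaround is to temporarily wrap the attribute in a property and set an ordinary breakpoint inside the setter. `WatchedUser` below is an invented stand-in for the real user object, just to show the trick.

```python
class WatchedUser:
    """Stand-in showing the property-setter trick for trapping writes."""
    def __init__(self, ip_address):
        self._ip_address = ip_address
        self.writes = []  # history of (old, new) values; while debugging
                          # you would put a breakpoint in the setter instead

    @property
    def ip_address(self):
        return self._ip_address

    @ip_address.setter
    def ip_address(self, value):
        self.writes.append((self._ip_address, value))  # breakpoint here
        self._ip_address = value
```

With the breakpoint in the setter, the debugger stops at exactly the call site that overwrote the value, and the stack trace shows who did it.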
<python><visual-studio-code><fastapi><breakpoints>
2023-06-19 15:00:32
0
27,618
Philippe
76,507,853
17,487,457
Add a column of strings using list indices from another column
<p>Having this list of names:</p> <pre class="lang-py prettyprint-override"><code>name_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K'] </code></pre> <p>As well as the following <code>df</code>:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( { 'idx': ['(3,)','(3, 5)','(1, 3, 5)', '(1, 3, 5, 10)','(1, 3, 5, 8, 10)'], 'score': [0.773,0.841,0.862,0.874,0.883] } ) df.head(2) idx score 0 (3,) 0.773 1 (3, 5) 0.841 </code></pre> <p>The <code>idx</code> column represents indices of the elements of <code>name_list</code>. I want to add a new column <code>name</code> to the <code>df</code> with the corresponding names from the list.</p> <p><strong>Expected results:</strong></p> <pre class="lang-py prettyprint-override"><code> idx score name 0 (3,) 0.773 (D,) 1 (3, 5) 0.841 (D, F) 2 (1, 3, 5) 0.862 (B, D, F) 3 (1, 3, 5, 10) 0.874 (B, D, F, K) 4 (1, 3, 5, 8, 10) 0.883 (B, D, F, I, K) </code></pre>
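One possible approach (a sketch; `idx_to_names` is my own helper name, not an established API): parse each tuple string with `ast.literal_eval`, then index into `name_list`.

```python
import ast

name_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']

def idx_to_names(idx_str, names):
    """'(1, 3, 5)' -> '(B, D, F)', preserving the (X,) form for singletons."""
    idx = ast.literal_eval(idx_str)        # e.g. (3,) or (1, 3, 5)
    picked = [names[i] for i in idx]
    inner = ", ".join(picked)
    if len(picked) == 1:
        inner += ","                       # keep the one-element tuple style
    return f"({inner})"
```

Applied to the frame it would be `df['name'] = df['idx'].map(lambda s: idx_to_names(s, name_list))`.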
<python><pandas><dataframe>
2023-06-19 14:59:09
2
305
Amina Umar
76,507,566
1,473,517
How to run two optimizations in parallel
<p>I am trying to optimize a function which takes a long time. There are many different global optimizers so I would like to run them in parallel, seeing the progress they make in different windows. Here is my non-parallel code:</p> <pre><code>##### optimization 1 #### from scipy.optimize import basinhopping def build_show_bh(MIN=None): if MIN is None: MIN = [0] def fn(xx, f, accept): if f &lt; MIN[-1]: print([round(x, 2) for x in xx], f) MIN.append(f) return fn x0 = (0.5, 0.5, 0.5) bounds = [(0,1)]*3 minimizer_kwargs = dict(method=&quot;L-BFGS-B&quot;, bounds=bounds) progress_f = [0] c = build_show_bh(progress_f) print(&quot;Optimizing using basinhopping&quot;) res = basinhopping( opt, x0, minimizer_kwargs=minimizer_kwargs, niter=10, callback=c, disp=True ) print(f&quot;external way of keeping track of MINF: {progress_f}&quot;) ##### optimization 2 #### import dlib lower_bounds = [0]*3 upper_bounds = [1]*3 x, y = dlib.find_max_global(opt_dlib, lower_bounds, upper_bounds, 10) print(f&quot;The optimal inputs are {x} with value {y}&quot;) </code></pre> <p>How can I do this?</p> <p>I am on Python 3.10.</p>
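A hedged sketch of one way to run the two optimizers side by side (the worker functions below are placeholders standing in for the `basinhopping` and `dlib.find_max_global` calls, not real implementations): wrap each optimizer call in a function and submit both to a `concurrent.futures` pool. Threads are often enough here because SciPy and dlib spend most of their time in native code; swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` gives true parallelism for pure-Python objectives.

```python
from concurrent.futures import ThreadPoolExecutor

def run_basinhopping(objective):
    # placeholder standing in for scipy.optimize.basinhopping(...)
    return ("basinhopping", objective(0.5))

def run_dlib(objective):
    # placeholder standing in for dlib.find_max_global(...)
    return ("dlib", objective(0.25))

def run_in_parallel(objective):
    """Launch both optimizers concurrently and gather their results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run_basinhopping, objective),
                   pool.submit(run_dlib, objective)]
        return dict(f.result() for f in futures)
```

Each worker can print its own progress (as the `callback` already does), so both optimizers report interleaved while they run.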
<python><multithreading><scipy>
2023-06-19 14:26:02
1
21,513
Simd
76,507,527
1,473,517
Best way to enforce a distance between coefficients
<p>I am using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow noreferrer">basinhopping</a> to perform a global optimization. My simple code is:</p> <pre><code>from scipy.optimize import basinhopping, rosen def build_show_bh(MIN=None): if MIN is None: MIN = [0] def fn(xx, f, accept): if f &lt; MIN[-1]: print([round(x, 2) for x in xx], f) MIN.append(f) return fn x0 = (0.4, 0.6, 0.8) bounds = [(0,1)]*3 minimizer_kwargs = dict(method=&quot;L-BFGS-B&quot;, bounds=bounds) progress_f = [0] c = build_show_bh(progress_f) print(&quot;Optimizing using basinhopping&quot;) res = basinhopping( rosen, x0, minimizer_kwargs=minimizer_kwargs, niter=10, callback=c, disp=True ) print(f&quot;external way of keeping track of MINF: {progress_f}&quot;) </code></pre> <p>I would like to add a constraint that each of the coefficients must be at least 0.1 away from each of the other coefficients. I am happy for them to be in sorted order if that helps. What is the best way of doing that?</p>
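One standard trick (a sketch under my own naming, not from the SciPy docs): reparameterize. Instead of optimizing the coefficients directly, optimize the first coefficient plus non-negative gaps, and reconstruct sorted coefficients that are at least 0.1 apart by construction, so no explicit constraint is needed.

```python
import numpy as np

MIN_GAP = 0.1

def to_sorted_coeffs(z):
    """Map parameters z (with z[1:] kept >= 0 via bounds) to coefficients
    x with x[0] = z[0] and x[i] = x[i-1] + MIN_GAP + z[i]."""
    z = np.asarray(z, dtype=float)
    steps = np.concatenate(([z[0]], MIN_GAP + z[1:]))
    return np.cumsum(steps)
```

The objective then becomes `lambda z: rosen(to_sorted_coeffs(z))` with bounds like `[(0, 1)] * 3` on `z`; every candidate basinhopping visits maps to a valid, sorted, well-separated coefficient vector.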
<python><scipy><scipy-optimize>
2023-06-19 14:21:44
1
21,513
Simd
76,507,474
926,918
Improving a polars statement that adds a column by applying a lambda function to each row
<p>I am trying to add a column using <code>map_rows</code> in <a href="https://www.pola.rs/" rel="nofollow noreferrer">polars</a>. The <code>pandas</code> equivalent is as follows:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({&quot;ref&quot;: [-1, 2, 8], &quot;v1&quot;: [-1, 5, 0], &quot;v2&quot;: [-1, 5, 8]}) df['count'] = df.apply(lambda r: len([i for i in r if i == r[0]]) - 1, axis=1) df = df.drop('ref', axis=1) df </code></pre> <pre><code> v1 v2 count 0 -1 -1 2 1 5 5 0 2 0 8 1 </code></pre> <p>The following is the sample code that I have with polars. Though it works as desired, it looks ugly and probably can be improved as well.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({&quot;ref&quot;: [-1, 2, 8], &quot;v1&quot;: [-1, 5, 0], &quot;v2&quot;: [-1, 5, 8]}) x = df.map_rows(lambda r: len([i for i in r if i == r[0]]) - 1).rename({'map': 'count'}) df = df.hstack([x.to_series()]).drop('ref') df </code></pre> <pre><code>shape: (3, 3) ┌─────┬─────┬───────┐ │ v1 ┆ v2 ┆ count │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═══════╡ │ -1 ┆ -1 ┆ 2 │ │ 5 ┆ 5 ┆ 0 │ │ 0 ┆ 8 ┆ 1 │ └─────┴─────┴───────┘ </code></pre> <p>What bothers me is the <code>rename</code> part and <code>hstack</code> that I cobbled together to make it work. I would be grateful for any improvements in the above code.</p> <p>TIA</p>
<python><python-polars>
2023-06-19 14:14:31
1
1,196
Quiescent
76,507,454
14,256,643
Django app sends JWT and refresh tokens to the browser cookie, so why can't my frontend app verify them?
<p>I am using JWT authentication in my Django app. When a user logs in to my website, my server sends a JWT token and a refresh token to the browser cookie, but I am getting a <code>&quot;User is not authenticated.&quot;</code> error and no profile data from my <code>user_profile/</code> API endpoint.</p> <p>I can see that the JWT token and refresh token are available in the browser cookie after the user logs in, and I also pass <code>{withCredentials:true}</code> in my axios request.</p> <p><a href="https://i.sstatic.net/2fI52.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fI52.png" alt="enter image description here" /></a></p> <p>here is my login code:</p> <pre><code>@api_view(['POST']) def user_login(request): if request.method == 'POST': ...others code refresh = RefreshToken.for_user(user) response = Response({'message': 'Login successful.'}, status=status.HTTP_200_OK) response.set_cookie('jwt_token', str(refresh.access_token)) response.set_cookie('refresh_token', str(refresh)) return response else: return Response({'error': 'Invalid credentials.'}, status=status.HTTP_401_UNAUTHORIZED) </code></pre> <p>here is my API endpoint to get the user profile:</p> <pre><code>@api_view(['GET']) def get_user_profile(request): if request.user.is_anonymous: return Response({'error': 'User is not authenticated.'}, status=status.HTTP_401_UNAUTHORIZED) user = request.user profile = Profile.objects.get(user=user) data = { 'username': user.username, } return Response(data, status=status.HTTP_200_OK) </code></pre> <p>my settings.py</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema', 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', ) } SIMPLE_JWT = { &quot;ACCESS_TOKEN_LIFETIME&quot;: timedelta(minutes=5), &quot;REFRESH_TOKEN_LIFETIME&quot;: timedelta(days=1), } </code></pre> <p>my frontend code:</p> <pre><code> axios.get(`${CustomDomain}/user_profile/`,{withCredentials:true}) .then((res) =&gt; { console.log(res); }) .catch((error) =&gt; { console.error(error); }); }) </code></pre> <p>you can see my API is working:</p> <p><a href="https://i.sstatic.net/9aiUW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9aiUW.png" alt="enter image description here" /></a></p>
<javascript><python><reactjs><django><next.js13>
2023-06-19 14:12:19
1
1,647
boyenec
76,507,449
7,447,976
How to get a unique week number for start and end dates across multiple years - Pandas
<p>I have a dataframe where two of the columns represent the start and end date of the data record. There are multiple years. My goal is to assign a new column that represents the time step of the data record in each row. Since I have a location column as well, some of these weeks will be repeating.</p> <pre><code>import numpy as np import pandas as pd dates = pd.date_range(start='2021-11-11', periods=20, freq='W') df = pd.DataFrame({ 'start_date': np.repeat(dates, 5), 'end_date': np.repeat(dates + pd.DateOffset(days=6), 5), 'country': ['USA', 'Canada', 'UK', 'Australia', 'Russia'] * 20 }) df = df.sort_values(&quot;start_date&quot;) </code></pre> <pre><code> start_date end_date country 0 2021-11-14 2021-11-20 USA 1 2021-11-14 2021-11-20 Canada 2 2021-11-14 2021-11-20 UK 3 2021-11-14 2021-11-20 Australia 4 2021-11-14 2021-11-20 Russia </code></pre> <p>I can get the week number using <code>isocalendar().week</code>, but it gives the week number within the corresponding year. For instance, if <code>2021-11-14</code> to <code>2021-11-20</code> is the first week in the data frame, it should get <code>1</code>. The data may skip the next week and have another record starting from <code>2021-11-27</code>; that time step should then be the second week in the data frame.</p>
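One way to get a year-independent week index (a sketch; the treatment of skipped calendar weeks is my reading of the question): dense-rank the distinct start dates, so the earliest week in the frame gets 1 and each later distinct week gets the next integer even when calendar weeks are skipped.

```python
import pandas as pd

def add_week_number(df):
    """Number distinct start_date values 1, 2, 3, ... across all years."""
    out = df.copy()
    out["week"] = out["start_date"].rank(method="dense").astype(int)
    return out
```

Because the rank is dense, all rows sharing a `start_date` (the repeated countries) get the same week number.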
<python><pandas>
2023-06-19 14:12:03
2
662
sergey_208
76,507,337
17,487,457
pandas: convert string column to array of float
<p>I have the following df:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( {'feature_idx': ['(4,)','(4, 15)','(1, 4, 15)', '(1, 4, 15, 176)','(1, 4, 15, 89, 176)'], 'cv_scores': ['[0.71936929 0.75262699 0.77660679 0.85333625 0.76398875]', '[0.79227296 0.82675175 0.83723801 0.92502134 0.82625185]', '[0.82134069 0.84987581 0.86420576 0.93398567 0.84150328]', '[0.83244816 0.86689598 0.87095624 0.9445071 0.85839512]', '[0.84192526 0.87788764 0.87939774 0.95181742 0.86563099]']} ) df.head(2) feature_idx cv_scores 0 (4,) [0.71936929 0.75262699 0.77660679 0.85333625 0... 1 (4, 15) [0.79227296 0.82675175 0.83723801 0.92502134 0... </code></pre> <p>The column <code>cv_scores</code> contains strings for the 5-fold scores; for example:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df.iloc[0]['cv_scores'] '[0.71936929 0.75262699 0.77660679 0.85333625 0.76398875]' &gt;&gt;&gt; type(df.iloc[0]['cv_scores']) str </code></pre> <p>I would like to add a column <code>avg_score</code> for the average score of each feature (the sum of <code>cv_scores</code> divided by 5).</p> <p>Since <code>cv_scores</code> is a string, I need a way to convert this column to an array of floats, to derive the intended column.</p> <p><strong>Expected results</strong></p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df_result feature_idx cv_scores avg_score 0 (4,) [0.71936929 0.75262699 0.77660679 0.85333625 0... 0.773186 1 (4, 15) [0.79227296 0.82675175 0.83723801 0.92502134 0... 0.841507 2 (1, 4, 15) [0.82134069 0.84987581 0.86420576 0.93398567 0... 0.862182 3 (1, 4, 15, 176) [0.83244816 0.86689598 0.87095624 0.9445071 0... 0.874641 4 (1, 4, 15, 89, 176) [0.84192526 0.87788764 0.87939774 0.95181742 0... 0.883332 </code></pre>
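One sketch (the helper name is mine): strip the brackets, split on whitespace, and build a float array; its mean then gives `avg_score`.

```python
import numpy as np

def parse_scores(s):
    """'[0.71 0.75 0.77]' -> np.array([0.71, 0.75, 0.77])."""
    return np.array(s.strip("[]").split(), dtype=float)
```

Applied to the frame it would be `df["avg_score"] = df["cv_scores"].map(lambda s: parse_scores(s).mean())`.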
<python><pandas><dataframe>
2023-06-19 13:56:33
1
305
Amina Umar
76,506,952
1,264,097
Information content of a (1D) curve (i.e. spectroscopy)
<p>I am looking for a measure to quantify the information content of a 1D curve. To explain, here is an example in python:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x, dx = np.linspace(0, 10, 1001, retstep=True) y1 = 2 * np.pi * np.exp(-.5 * (x - 5)**2) y2 = np.pi * np.exp(-.5 * (x - 5)**2) \ + 2 * np.exp(-5 * (x - 2)**2) \ + 3 * np.exp(-5 * (x - 8)**2) plt.plot(x, y1, label='curve 1') plt.plot(x, y2, label='curve 2') plt.legend() </code></pre> <p><a href="https://i.sstatic.net/UheCX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UheCX.png" alt="enter image description here" /></a></p> <p>Curve 1 is a single Gaussian, which has an &quot;information content&quot; of 3: x-position, amplitude, and width. Curve 2 contains 3 such peaks and hence carries 9 numbers as information content. Here we have the first issue: We can only say this because we have a model of the curve (Gaussian peaks). Is there any function or approach that can calculate the information content/entropy of such a curve?</p> <p>I looked into the Shannon entropy, which can give some numbers if the probability density function is calculated first, here by <code>np.histogram</code>:</p> <pre><code>&gt;&gt;&gt; pdf1, x1 = np.histogram(y1, 31, density=True) &gt;&gt;&gt; pdf2, x2 = np.histogram(y2, 31, density=True) &gt;&gt;&gt; -np.sum(pdf1 * np.log2(pdf1)) # shannon entropy 5.684224974417829 &gt;&gt;&gt; -np.sum(pdf2 * np.log2(pdf2)) 11.151687052639227 </code></pre> <p>The problem is that this approach does not consider correlations between data points, which are clearly there.</p> <p>This is what my actual data looks like:</p> <p><a href="https://i.sstatic.net/yEwB3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yEwB3.png" alt="Actual data" /></a></p> <p>Btw: any approximation would be fine, it does not need mathematical rigor here.</p> <p>Any ideas?</p>
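Not a complete answer, but one rough, model-free proxy that does account for correlations between neighbouring points (my own suggestion, not an established measure for these curves): the Shannon entropy of the normalized power spectrum. A smooth, highly correlated curve concentrates its power in a few low frequencies (low entropy), while richer structure spreads the power across more frequencies (higher entropy).

```python
import numpy as np

def spectral_entropy(y):
    """Shannon entropy (bits) of the normalized power spectrum of y."""
    power = np.abs(np.fft.rfft(y - np.mean(y))) ** 2
    p = power / power.sum()
    p = p[p > 0]                       # drop empty bins before taking log2
    return -np.sum(p * np.log2(p))
```

Unlike the histogram-based entropy, this is invariant to reordering only up to phase, so shuffling the samples of a smooth curve raises the value sharply, which is exactly the correlation sensitivity asked about.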
<python><curve><entropy><information-theory>
2023-06-19 13:07:56
0
697
R. C.
76,506,894
6,447,123
Create a Python module
<p>I would like to create a Python module. This is the structure of my project (only the minimal part is shown):</p> <pre><code>. ├── code/ │ └── main.py └── pyproject.toml </code></pre> <p>and this is my <code>pyproject.toml</code> file:</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;flit_core &gt;=3.2,&lt;4&quot;] build-backend = &quot;flit_core.buildapi&quot; [tool.flit.module] name = &quot;code&quot; [project] name = &quot;my_module_name&quot; version = &quot;0.0.1&quot; description = &quot;Foobar&quot; readme = &quot;README.md&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;Operating System :: OS Independent&quot;, ] requires-python = &quot;&gt;=3.0&quot; dependencies = [ &quot;foo &gt;=1.2.3&quot;, ] [project.urls] &quot;Homepage&quot; = &quot;https://github.com/foo/bar&quot; &quot;Bug Tracker&quot; = &quot;https://github.com/foo/bar/issues&quot; </code></pre> <p>When I run <code>pip install git+https://github.com/foo/bar.git</code> it successfully installs the module. But when I write <code>from my_module_name import code</code>, I get a <code>ModuleNotFoundError: No module named 'my_module_name'</code> error.</p> <p>How can I fix it?</p>
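A hedged guess at the cause: with `[tool.flit.module] name = "code"`, flit installs the importable package as `code` (which also shadows the standard-library `code` module), while `my_module_name` is only the distribution name used by pip/PyPI, never an import name, so `from my_module_name import code` can never resolve. One fix is to rename the `code/` folder to match the project and drop the override, roughly:

```toml
# pyproject.toml (sketch) - after renaming the code/ folder to my_module_name/
[build-system]
requires = ["flit_core >=3.2,<4"]
build-backend = "flit_core.buildapi"

[project]
name = "my_module_name"
version = "0.0.1"
```

After reinstalling, `import my_module_name` should work; alternatively keep the folder and use `import code`, at the cost of shadowing the stdlib module of the same name.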
<python><pip><python-module><pyproject.toml>
2023-06-19 12:59:56
0
4,309
A.A
76,506,784
8,720,308
Dropped columns reappear in columns.levels
<p>I have a DataFrame with MultiIndex.</p> <p>When I drop a column (e.g., containing a NaN) this column name still appears when I call <code>df.columns.levels[1]</code>.</p> <p>Minimal working example:</p> <pre><code># Create DataFrame import numpy as np import pandas as pd midx = pd.MultiIndex.from_tuples([('A','aa'),('A','bb'),('B','cc'),('B','dd')]) mydf = pd.DataFrame(np.random.randn(5,4), columns=midx) mydf.loc[1,('B','cc')] = np.nan print(mydf) &gt;&gt; A B aa bb cc dd 0 -0.565250 -1.267290 -1.811422 -0.242648 1 0.138827 0.182022 NaN -0.286807 2 0.037163 -1.867622 1.259539 -0.485333 3 1.283082 1.030154 0.678748 -0.200731 4 -0.405116 -0.963670 -0.405438 -1.695403 # Drop column with NaN mydf.dropna(how='any', axis=1, inplace=True) print(mydf) &gt;&gt; A B aa bb dd 0 -0.565250 -1.267290 -0.242648 1 0.138827 0.182022 -0.286807 2 0.037163 -1.867622 -0.485333 3 1.283082 1.030154 -0.200731 4 -0.405116 -0.963670 -1.695403 mydf.columns.levels[1] &gt;&gt; Index(['aa', 'bb', 'cc', 'dd'], dtype='object') </code></pre> <p>Alternatives I've tried, all ending with the same results:</p> <pre><code>new_df = mydf.dropna(how='any', axis=1) new_df = mydf.dropna(how='any', axis=1).copy() </code></pre> <p>I need to access the list of present column names on level 1. I have found a doable work-around, but I need to understand why the code above is not working as intended.</p>
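This behaviour is documented: a `MultiIndex` keeps its original `levels` (the full set of categories) even after rows or columns are dropped. A minimal sketch of the two usual fixes:

```python
import numpy as np
import pandas as pd

midx = pd.MultiIndex.from_tuples(
    [('A', 'aa'), ('A', 'bb'), ('B', 'cc'), ('B', 'dd')])
mydf = pd.DataFrame(np.random.randn(5, 4), columns=midx)
mydf.loc[1, ('B', 'cc')] = np.nan
mydf = mydf.dropna(how='any', axis=1)

# Option 1: prune the stale categories explicitly
mydf.columns = mydf.columns.remove_unused_levels()

# Option 2: ask for the values actually present instead of the levels
present = mydf.columns.get_level_values(1).unique()
```

Option 2 is usually what you want for "the list of present column names on level 1", since it never consults the cached level categories at all.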
<python><pandas><dataframe>
2023-06-19 12:47:17
2
307
ABot
76,506,774
843,075
Error when trying to locate Email textbox
<p>I'm trying to locate an email textbox on a payment popup on a webpage, but I encounter a &quot;waiting for locator&quot; error when I attempt to do so. It's the same issue with the other elements on the popup. I have successfully located elements on the other pages. Is there something I have to do specifically when locating elements on popups?</p> <p>Here is the payment popup with the email text box I wish to locate along with the html: <a href="https://i.sstatic.net/GwUtf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GwUtf.png" alt="enter image description here" /></a></p> <p>This is the locator that I am using:</p> <pre><code>self.email_txt = page.locator('#email') </code></pre> <p>Here is the error message: <a href="https://i.sstatic.net/V0TEk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0TEk.png" alt="enter image description here" /></a></p>
<python><playwright>
2023-06-19 12:46:45
1
304
fdama
76,506,652
12,131,616
Can you select a variable by binning another variable in a vectorized way?
<p><strong>Problem</strong></p> <p>I have several variables <code>x</code> that I want to sort in a variable <code>binned_list</code> using some bins.</p> <p>As an example, <code>x</code> is a random vector in two components, going from 0 to 10/sqrt(2), that I want to sort in the list <code>binned_list</code> by the modulus of <code>x</code>. I have three bins for the modulus: [0, 3.33), [3.33, 6.66) and [6.66, 10) and I want to save different iterations of <code>x</code> into <code>binned_list</code>, which is a list of 3 lists, each one corresponding to values of the modulus of <code>x</code> on that bin.</p> <p>I can do it in the following way:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np N_bins = 3 Bins = np.linspace(0, 10, N_bins+1) binned_list = [[] for b in range(N_bins)] N_elements = 5 np.random.seed(1) for k in range(3): x = np.random.random((N_elements,2))/np.sqrt(2)*10 mod_x = np.sqrt(x[:,0]**2 + x[:,1]**2) dig_x = np.digitize( mod_x, bins = Bins ) - 1 for y in range(len(x)): binned_list[dig_x[y]].append(x[y]) </code></pre> <p>Output:</p> <pre><code>[[array([8.08752089e-04, 2.13781412e+00]), array([1.03772086, 0.65293247]), array([1.31705859, 2.44348333]), array([0.99268556, 1.40078906]), array([0.60135339, 0.27615902])], [array([2.94879087, 5.09346334]), array([2.80556972, 3.81000966]), array([2.96415284, 4.84523355]), array([1.44569572, 6.20922794]), array([0.19365953, 4.74092123]), array([2.95079056, 3.95053366]), array([2.21624362, 4.89546016]), array([1.20088241, 6.20940519])], [array([5.66211915, 6.84664326]), array([6.19700713, 6.32582438])]] </code></pre> <p><strong>Question</strong></p> <p>Once I have digitized the elements of <code>x</code>, can I avoid looping through them in order to save them in variable <code>binned_list</code>? 
I would like to do this in a vectorized way in order to make the code more efficient.</p> <p>I thought of something like:</p> <pre><code>binned_list[dig_x].append(x) </code></pre> <p>But I can't slice a list with an array. Also if I define <code>binned_list</code> as an array I can't append either.</p>
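One loop-free sketch (the function name is mine): sort the rows by bin index once with a stable `argsort`, then split the sorted array at the bin boundaries found by `searchsorted`.

```python
import numpy as np

def bin_rows(x, dig_x, n_bins):
    """Group the rows of x by bin index without looping over elements."""
    order = np.argsort(dig_x, kind="stable")      # rows grouped by bin
    sorted_bins = dig_x[order]
    # positions where the bin index increases in the sorted order
    cuts = np.searchsorted(sorted_bins, np.arange(1, n_bins))
    return np.split(x[order], cuts)               # list of n_bins arrays
```

Accumulating across the outer `k` iterations then only needs a per-bin `extend` (or a final `np.concatenate` per bin), so the per-row Python loop disappears even though a small per-bin loop remains.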
<python><numpy><vectorization><binning>
2023-06-19 12:30:58
2
663
Puco4
76,506,643
3,556,110
How to create a patch.object mock that I can start() and stop()?
<p>So I have a large suite of unit tests in python, and I want to apply a set of 4 patches across all the tests (to disable expensive stuff).</p> <p>I have about 300 tests over 20 test classes, and I want to provide a mixin to my TestCase that applies these patches in setUp, rather than patching each class individually.</p> <p>I can use <code>patch.object</code> to patch each individual test or test class, which is a syntax nightmare for 300 tests but works as intended:</p> <pre class="lang-py prettyprint-override"><code>from django.test import TestCase from unittest.mock import patch class MyTestCase(TestCase): def test_stuff(self): with patch.object('MyClass1', 'mymethod1') as mock_method_1: with patch.object('MyClass2', 'mymethod2') as mock_method_2: with patch.object('MyClass3', 'mymethod3') as mock_method_3: with patch.object('MyClass4', 'mymethod4') as mock_method_4: # do stuff self.assertEqual(mock_method_4.call_count, 0) </code></pre> <p>Then I made a mixin where I create the same patch but add it to the test class as an attribute, and use the <code>start()</code> method. This is completely DRY and I can patch everything at once.</p> <pre class="lang-py prettyprint-override"><code> class PatchMixin(): def setUp(self): self.mock_method_1 = patch.object( MyClass1, &quot;mymethod1&quot; ) self.mock_method_2 = patch.object( MyClass2, &quot;mymethod2&quot; ) self.mock_method_3 = patch.object( MyClass3, &quot;mymethod3&quot; ) self.mock_method_4 = patch.object( MyClass4, &quot;mymethod4&quot; ) self.mock_method_1.start() self.mock_method_2.start() self.mock_method_3.start() self.mock_method_4.start() super().setUp() def tearDown(self): self.mock_method_1.stop() self.mock_method_2.stop() self.mock_method_3.stop() self.mock_method_4.stop() super().tearDown(self) class MyTestCase(TestCase): def test_stuff(self): # do stuff self.assertEqual(self.mock_method_4.call_count, 0) </code></pre> <p><strong>The question</strong></p> <p>When running the mixin, it doesn't work. I get an error:</p> <pre><code>Failed: [undefined]AttributeError: '_patch' object has no attribute 'call_count' </code></pre> <p>Why does this error occur, and how can I use <code>patch.object</code> (or some equivalent) to store the patch on my class in this way?</p>
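The error message points at the likely cause: the attributes still hold the patcher objects. `patch.object(...)` returns a `_patch` instance, and it is `start()` that returns the `MagicMock`. A sketch of the usual pattern (with an invented `MyClass` standing in for the real targets), keeping the mock and registering `stop` via `addCleanup` instead of `tearDown`:

```python
import unittest
from unittest.mock import patch

class MyClass:
    def mymethod(self):
        return "real"

class PatchMixin:
    def setUp(self):
        patcher = patch.object(MyClass, "mymethod")
        self.mock_method = patcher.start()   # start() returns the MagicMock
        self.addCleanup(patcher.stop)        # undo even if setUp/test fails
        super().setUp()

class MyTestCase(PatchMixin, unittest.TestCase):
    def test_stuff(self):
        MyClass().mymethod()
        self.assertEqual(self.mock_method.call_count, 1)
```

Note the mixin must come before `TestCase` in the bases so its `setUp` runs, and `addCleanup` also fixes the original `tearDown` bug (`super().tearDown(self)` passes `self` twice).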
<python><unit-testing><pytest><mixins>
2023-06-19 12:29:49
1
5,582
thclark
76,506,616
1,777,170
Matrix-synapse doesn't retrieve CAS attributes
<p>I have a problem on Matrix-Synapse with the SSO using CAS.</p> <p>Synapse doesn't retrieve CAS attributes <code>synapse.handlers.sso - 1262 - INFO - GET-50 - SSO attribute missing</code>.</p> <p>But CAS sends the attributes and I can retrieve them with a PHP script on the same server.</p> <p>I can't figure out where it's coming from, CAS Python library problem or CAS protocol configuration problem or anything else ?</p> <p>I've modified <code>/opt/venvs/matrix-synapse/lib/python3.9/site-packages/synapse/handlers/cas.py</code> to log the CAS response, here's what I get back:</p> <pre><code>&lt;cas:serviceResponse xmlns:cas='http://www.yale.edu/tp/cas'&gt; &lt;cas:authenticationSuccess&gt; &lt;cas:user&gt;MYUSER&lt;/cas:user&gt; &lt;/cas:authenticationSuccess&gt; &lt;/cas:serviceResponse&gt; </code></pre> <p>Informations :</p> <ul> <li>Platform : KVM / Debian 11</li> <li>Synapse Version : 1.85.0</li> <li>Installation Method : Debian packages from packages.matrix.org</li> <li>Database : PostgreSQL</li> <li>Workers : Single process</li> </ul>
<python><single-sign-on><cas><matrix-synapse>
2023-06-19 12:26:04
1
962
Aurélien Grimpard
76,506,452
5,522,007
Rename columns in an xlsx file without reading/writing the whole file
<p>I have some relatively large Excel files of type .xlsx containing a single sheet in which one column needs to be renamed in every file.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">column_a</th> <th style="text-align: left;">column_b</th> <th style="text-align: left;">wrong_name</th> <th style="text-align: left;">column_d</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> </tbody> </table> </div> <p>needs to be</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">column_a</th> <th style="text-align: left;">column_b</th> <th style="text-align: left;">column_c</th> <th style="text-align: left;">column_d</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2</td> <td style="text-align: left;">3</td> <td style="text-align: left;">4</td> </tr> </tbody> </table> </div> <p>I understand this can be done by reading the file into memory, replacing the column name, and then overwriting the original file e.g. 
with Pandas:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_excel('my_file.xlsx') df.rename(columns={'wrong_name': 'column_c'}, inplace=True) df.to_excel('my_file.xlsx') </code></pre> <p>And I can read just the columns and replace the problematic column name so I have a list with the correct names:</p> <pre class="lang-py prettyprint-override"><code>cols = pd.read_excel('my_file.xlsx', nrows=0).columns.to_list() new_cols = [x if x != 'wrong_name' else 'column_c' for x in cols] </code></pre> <p>But I can't figure out a way to write just the list of column names back to the source .xlsx file as the top row without overwriting the rest of the data.</p> <p>Is there any way in Python to modify just the column names in the .xlsx file without having to read and write the whole file each time?</p>
<python><pandas><excel><openpyxl>
2023-06-19 12:04:32
0
371
Violet
76,506,330
11,644,523
PySpark / Snowpark calculate running sum between two given dates
<p>Using this sample table:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">id</th> <th style="text-align: center;">sales</th> <th style="text-align: right;">sales_date</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">10</td> <td style="text-align: right;">2020-04-30</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">6</td> <td style="text-align: right;">2020-10-31</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">9</td> <td style="text-align: right;">2020-09-30</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">2</td> <td style="text-align: right;">2021-04-30</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: center;">8</td> <td style="text-align: right;">2020-08-31</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: center;">7</td> <td style="text-align: right;">2020-07-31</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: center;">3</td> <td style="text-align: right;">2021-06-30</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: center;">2</td> <td style="text-align: right;">2021-05-31</td> </tr> </tbody> </table> </div> <p>I would like to calculate the total sum of sales within a range between two dates.</p> <p>I assume it would be a window function, partitioned by id and ordered by sales_date. But I don't know how to get the sum of <code>sales</code> between two given dates.</p> <pre class="lang-py prettyprint-override"><code>win = Window.partitionBy('id').orderBy('sales_date') df.withColumn('running_sum', sum('sales').over(win).rangeBetween(start_date, end_date)) ??
# rangeBetween of start_date and start_date + 1 year </code></pre> <p>For example:</p> <p>ID <code>1</code> has a start date of <code>2020-04-30</code>, and I want to get the sum from <code>2020-04-30</code> to <code>2021-04-30</code>.</p> <p>ID <code>2</code> has a start date of <code>2020-08-31</code>, and the end date would be <code>2021-08-31</code>.</p> <p>The following question seems quite close to what I want, but my problem is that each ID can have a different start date and end date for the window sum:</p> <p><a href="https://stackoverflow.com/questions/45806194/pyspark-rolling-average-using-timeseries-data">pyspark: rolling average using timeseries data</a></p>
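The per-id window logic can be checked outside Spark first. The pandas sketch below assumes each id's window runs from its earliest sale to one year later, inclusive; the question's example names 2020-08-31 as the start for ID 2, so adjust the `start` definition if the window start comes from somewhere other than the minimum:

```python
import pandas as pd

df = pd.DataFrame({
    'id':    [1, 1, 1, 1, 2, 2, 2, 2],
    'sales': [10, 6, 9, 2, 8, 7, 3, 2],
    'sales_date': pd.to_datetime([
        '2020-04-30', '2020-10-31', '2020-09-30', '2021-04-30',
        '2020-08-31', '2020-07-31', '2021-06-30', '2021-05-31']),
})

# Each id's window: [earliest sale, earliest sale + 1 year], inclusive
start = df.groupby('id')['sales_date'].transform('min')
end = start + pd.DateOffset(years=1)
in_window = df[(df['sales_date'] >= start) & (df['sales_date'] <= end)]
totals = in_window.groupby('id')['sales'].sum()
# totals.loc[1] == 27, totals.loc[2] == 20
```

In PySpark, the same idea can be expressed without `rangeBetween` at all: join each row against a per-id `F.min('sales_date')` aggregate, filter rows to the one-year range, then group and sum.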
<python><apache-spark><pyspark><snowflake-cloud-data-platform>
2023-06-19 11:49:22
1
735
Dametime
76,506,289
5,114,342
Bokeh 3.1.1 DeserializationError when changing layout content
<p>I have a project that worked fine under Bokeh 2.4.3 but after me upgrading to 3.1.1 produces <code>bokeh.core.serialization.DeserializationError</code>s.</p> <p>The culprit is me having frames which first display a &quot;loading screen&quot; and afterwards an actual plot.<br /> I have a class that manages a frame and looks somewhat like this, simplified:</p> <pre><code>class MyFrame: def __init__(self): self.layout = column() self.loading = loading_pic() self.layout.children.append(self.loading) def display_loading(self) -&gt; None: self.layout.children[0] = self.loading def plot(self, data) -&gt; None: self.my_plot = MyPlot(data) self.layout.children[0] = self.my_plot </code></pre> <p>In this, the function <code>loading_pic()</code> simply generates a (Bokeh) plot that contains the label &quot;Loading&quot;, while the <code>MyPlot</code> class does some more sophisticated rendering.</p> <p>This produces the error</p> <pre><code>2023-06-19 13:29:44,116 error handling message message: Message 'PATCH-DOC' content: {'events': [{'kind': 'ModelChanged', 'model': {'id': 'p1004'}, 'attr': 'inner_width', 'new': 865}, {'kind': 'ModelChanged', 'model': {'id': 'p1004'}, 'attr': 'inner_height', 'new': 540}, {'kind': 'ModelChanged', 'model': {'id': 'p1004'}, 'attr': 'outer_width', 'new': 900}, {'kind': 'ModelChanged', 'model': {'id': 'p1004'}, 'attr': 'outer_height', 'new': 550}]} error: DeserializationError(&quot;can't resolve reference 'p1004'&quot;) </code></pre> <p>Reference p1004 is the loading screen, and if I turn the code of <code>display_loading</code> into a comment, the error goes away (but of course so does the loading screen functionality).<br /> Interestingly by my code as is, the loading screen plot object exists during the error, and also the loading screen is still actually used in the <code>__init__</code> method. 
And I actually see the loading screen during the initial loadup.</p> <p>Now the error does not crash the program and it actually works despite it, but leaving it as is, suppressing it, or not using loading screens all feel like code smell to me.</p> <p>Does anybody have an idea what causes this? Is Bokeh trying to resize an item that is not visible?</p> <hr /> <p>Edit: Also posted this issue in the Bokeh forums: <a href="https://discourse.bokeh.org/t/bug-with-deserializationerror-when-changing-content-layout-in-bokeh-3-1-1/10566" rel="nofollow noreferrer">https://discourse.bokeh.org/t/bug-with-deserializationerror-when-changing-content-layout-in-bokeh-3-1-1/10566</a></p> <p>Edit: I created a whole reproducible example.<br /> To reproduce the bug, execute the following code as Bokeh server and then change the value of the range slider <em>twice</em>.</p> <pre><code>from bokeh.plotting import figure, curdoc from bokeh.models import ColumnDataSource, Label, RangeSlider from bokeh.layouts import column from random import * def loading_pic(width: int = 500, height: int = 500) -&gt; figure: plot = figure(x_range=(0,width), y_range=(0,height), width=width, height=height, tools='') pos_x = width/2 pos_y = height/2 label_size = min(height/4, width/4) label_size_str = str(label_size)+'px' loading_label = Label(x=pos_x, y=pos_y, text='Loading', text_font_size = label_size_str, text_align='center', text_baseline='middle') plot.add_layout(loading_label) plot.xaxis.visible = False plot.yaxis.visible = False plot.xgrid.grid_line_color = None plot.ygrid.grid_line_color = None return plot def random_coords(maximum: float = 500, amount: int = 1000) -&gt; ColumnDataSource: data = ColumnDataSource() data.data['x'] = [random()*maximum for i in range(amount)] data.data['y'] = [random()*maximum for i in range(amount)] return data class MyPlot: def __init__(self, data: ColumnDataSource): self.plot = figure(x_range=(0,500), y_range=(0,500), width=500, height=500) 
self.plot.circle(source=data, x='x', y='y') class MyFrame: def __init__(self): self.layout = column() self.loading = loading_pic() self.layout.children.append(self.loading) def display_loading(self) -&gt; None: self.layout.children[0] = self.loading def plot(self, data) -&gt; None: self.my_plot = MyPlot(data) self.layout.children[0] = self.my_plot.plot def plot_delayed(self, attr, old, new) -&gt; None: self.display_loading() data = random_coords() curdoc().add_next_tick_callback(lambda: self.plot(data)) my_frame = MyFrame() range_slider = RangeSlider(start = 0, end = 10, value = (0,10), step = 1, title = 'Range') range_slider.on_change('value_throttled', my_frame.plot_delayed) layout = column(my_frame.layout, range_slider) curdoc().add_root(layout) </code></pre>
<python><bokeh>
2023-06-19 11:44:07
1
3,912
Aziuth
76,506,149
489,010
How to encode a string column into integers in Polars?
<p>I would like to &quot;encode&quot; in a simple manner the values of a given column, a string for instance, into an arbitrary integer identifier?</p> <pre><code>df = ( pl.DataFrame({&quot;animal&quot;: ['elephant', 'dog', 'cat', 'mouse'], &quot;country&quot;: ['Mexico', 'Denmark', 'Mexico', 'France'], &quot;cost&quot;: [1000.0, 20.0, 10.0, 120.0]}) ) print(df) shape: (4, 3) ┌──────────┬─────────┬────────┐ │ animal ┆ country ┆ cost │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞══════════╪═════════╪════════╡ │ elephant ┆ Mexico ┆ 1000.0 │ │ dog ┆ Denmark ┆ 20.0 │ │ cat ┆ Mexico ┆ 10.0 │ │ mouse ┆ France ┆ 120.0 │ └──────────┴─────────┴────────┘ </code></pre> <p>I would like to encode the <code>animal</code> and the <code>country</code> columns to get something like</p> <pre><code>shape: (4, 5) ┌──────────┬─────────┬────────┬────────────────┬─────────────────┐ │ animal ┆ country ┆ cost ┆ animal_encoded ┆ country_encoded │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 ┆ i64 ┆ i64 │ ╞══════════╪═════════╪════════╪════════════════╪═════════════════╡ │ elephant ┆ Mexico ┆ 1000.0 ┆ 0 ┆ 0 │ │ dog ┆ Denmark ┆ 20.0 ┆ 1 ┆ 1 │ │ cat ┆ Mexico ┆ 10.0 ┆ 2 ┆ 0 │ │ mouse ┆ France ┆ 120.0 ┆ 3 ┆ 2 │ └──────────┴─────────┴────────┴────────────────┴─────────────────┘ </code></pre> <p>I thought that doing some sort of row indexing from a <code>unique</code>d context and then <code>over</code> to expand to the same number of original rows could work out but I can't manage to implement it.</p>
<python><dataframe><python-polars>
2023-06-19 11:24:15
1
5,026
pedrosaurio
76,506,093
11,992,033
Error when building dlib with "pip install dlib==19.18.0"
<p>When I run command</p> <pre><code>pip install dlib==19.18.0 </code></pre> <p>the error below appears:</p> <pre><code>amd64-cpython-310', '-DPYTHON_EXECUTABLE=C:\\Users\\ASUS\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\ASUS\\AppData\\Local\\Temp\\pip-install-e2j7rtug\\dlib_5512cdae6fa04c2886bfa3506800a523\\build\\lib.win-amd64-cpython-310', '-A', 'x64']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for dlib Running setup.py clean for dlib Failed to build dlib Installing collected packages: dlib Running setup.py install for dlib ... error error: subprocess-exited-with-error × Running setup.py install for dlib did not run successfully. │ exit code: 1 ╰─&gt; [74 lines of output] running install C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py running build_ext C:\Users\ASUS\AppData\Local\Temp\pip-install-e2j7rtug\dlib_5512cdae6fa04c2886bfa3506800a523\setup.py:129: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 
if LooseVersion(cmake_version) &lt; '3.1.0': Building extension for Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] Invoking CMake setup: 'cmake C:\Users\ASUS\AppData\Local\Temp\pip-install-e2j7rtug\dlib_5512cdae6fa04c2886bfa3506800a523\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\Users\ASUS\AppData\Local\Temp\pip-install-e2j7rtug\dlib_5512cdae6fa04c2886bfa3506800a523\build\lib.win-amd64-cpython-310 -DPYTHON_EXECUTABLE=C:\Users\ASUS\AppData\Local\Programs\Python\Python310\python.exe -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\Users\ASUS\AppData\Local\Temp\pip-install-e2j7rtug\dlib_5512cdae6fa04c2886bfa3506800a523\build\lib.win-amd64-cpython-310 -A x64' -- Building for: NMake Makefiles CMake Error at CMakeLists.txt:5 (message): !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! You must use Visual Studio to build a python extension on windows. If you are getting this error it means you have not installed Visual C++. Note that there are many flavors of Visual Studio, like Visual Studio for C# development. You need to install Visual Studio for C++. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -- Configuring incomplete, errors occurred! 
Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;C:\Users\ASUS\AppData\Local\Temp\pip-install-e2j7rtug\dlib_5512cdae6fa04c2886bfa3506800a523\setup.py&quot;, line 222, in &lt;module&gt; setup( File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py&quot;, line 87, in setup return distutils.core.setup(**attrs) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\core.py&quot;, line 177, in setup return run_commands(dist) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\core.py&quot;, line 193, in run_commands dist.run_commands() File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py&quot;, line 968, in run_commands self.run_command(cmd) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py&quot;, line 1217, in run_command super().run_command(command) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py&quot;, line 987, in run_command cmd_obj.run() File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\install.py&quot;, line 68, in run return orig.install.run(self) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\command\install.py&quot;, line 695, in run self.run_command('build') File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\cmd.py&quot;, line 317, in run_command self.distribution.run_command(command) File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py&quot;, line 1217, in run_command super().run_command(command) 
File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py&quot;, line 987, in run_command cmd_obj.run() File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\command\build.py&quot;, line 24, in run super().run() File &quot;C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\command\build.py&quot;, line 131, in run </code></pre> <pre><code>PS D:\SkripsiGigaJavapocalypse&gt; C:/Users/ASUS/anaconda3/Scripts/activate PS D:\SkripsiGigaJavapocalypse&gt; conda activate base conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path wa s included, verify that the path is correct and try again. At line:1 char:1 + conda activate base + ~~~~~ + CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException </code></pre> <p>In my system I have installed:</p> <ul> <li>CMake 3.26.3,</li> <li>Python 3.10.6.</li> </ul>
<python><cmake><dlib>
2023-06-19 11:15:15
0
1,317
GigaTera
76,506,090
6,218,849
How to merge arbitrarily chained overlapping time ranges
<p>I have a set of events with a start and end timestamp. There are few constraints</p> <ul> <li>event 1 may overlap with events 2, 3, 5, 99, ...</li> <li>event 1 may overlap with 2 which may overlap with 3, ...</li> <li>an event may be completely contained by another event</li> <li>an event may not overlap with any other event</li> </ul> <p>Here is a quick visualization of one group over overlapping events and the desired output (<code>s</code> indicates a start, and <code>e</code> indicates an end). The data may contain multiple such &quot;chained&quot; overlapping time ranges, as well as time ranges with no overlaps.</p> <pre><code> _______ __________________ | | ______ | ___________|____|___ | | | | | __ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |______|___|__|____|____|___|___| |______| s1 s2 s3 e3 e1 s4 e2 e4 s5 e5 _______________________________ ______ | | | | | | | | | | | | | | | | | | | | |_______________________________| |______| s1 e1 s2 e2 </code></pre> <h3>Helper function</h3> <p>Here's a quick function for making an interactive plot with <code>plotly</code> in order to visualize the events. 
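Independent of the plotting, the merge itself is a standard sort-and-sweep. This dependency-free sketch (function name assumed, not from the question) collapses any chain of overlapping or contained ranges, and works unchanged on pandas Timestamps since it only uses comparison and `max`:

```python
def merge_ranges(ranges):
    """Collapse chained/overlapping/contained (start, end) pairs into disjoint ones."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:    # overlaps the previous merged range
            merged[-1][1] = max(merged[-1][1], end)   # extend it (handles containment too)
        else:
            merged.append([start, end])          # gap: start a new range
    return [tuple(r) for r in merged]

# The s1..e5 picture above, with the timestamps replaced by small numbers:
print(merge_ranges([(1, 8), (2, 9), (3, 4), (6, 10), (12, 13)]))
# [(1, 10), (12, 13)]
```

Back to the plotting helper: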
It assumes that the input is a <code>pd.DataFrame</code> with a <code>Start</code> column and an <code>End</code> column (as obtained by a simple <code>pd.read_csv</code> based on the input below).</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import plotly.graph_objects as go def plot_events(events): fig = go.Figure() # Add dummy to generate the datetime axis fig.add_trace(go.Scatter( x=pd.date_range(start=events.Start.min(), end=events.End.max()), y=np.zeros(events.shape[0]) )) # Add time capsules for i, event in events.iterrows(): fig.add_vrect( x0=event .Start, x1=event .End ) return fig </code></pre> <h3>Raw data (CSV)</h3> <pre><code>,Start,End,Duration 0,2020-08-03 21:05:00+00:00,2020-08-03 21:58:00+00:00,0 days 00:53:00 1,2020-08-03 22:53:00+00:00,2020-08-03 23:42:00+00:00,0 days 00:49:00 2,2020-08-03 23:29:00+00:00,2020-08-04 00:18:00+00:00,0 days 00:49:00 3,2020-08-04 17:01:00+00:00,2020-08-04 17:50:00+00:00,0 days 00:49:00 4,2020-08-12 02:09:00+00:00,2020-08-12 03:06:00+00:00,0 days 00:57:00 5,2020-08-12 02:13:00+00:00,2020-08-12 03:10:00+00:00,0 days 00:57:00 6,2020-08-12 02:33:00+00:00,2020-08-12 03:38:00+00:00,0 days 01:05:00 7,2020-08-12 04:45:00+00:00,2020-08-12 05:38:00+00:00,0 days 00:53:00 8,2020-08-12 05:09:00+00:00,2020-08-12 06:14:00+00:00,0 days 01:05:00 9,2020-08-12 05:37:00+00:00,2020-08-12 06:26:00+00:00,0 days 00:49:00 10,2020-08-12 05:57:00+00:00,2020-08-12 06:46:00+00:00,0 days 00:49:00 11,2020-08-12 06:09:00+00:00,2020-08-12 07:02:00+00:00,0 days 00:53:00 12,2020-08-12 08:21:00+00:00,2020-08-12 09:26:00+00:00,0 days 01:05:00 13,2020-08-12 08:53:00+00:00,2020-08-12 10:14:00+00:00,0 days 01:21:00 14,2020-08-12 09:49:00+00:00,2020-08-12 10:42:00+00:00,0 days 00:53:00 15,2020-08-12 10:09:00+00:00,2020-08-12 10:58:00+00:00,0 days 00:49:00 16,2020-08-12 10:37:00+00:00,2020-08-12 11:46:00+00:00,0 days 01:09:00 17,2020-08-12 13:17:00+00:00,2020-08-12 14:06:00+00:00,0 days 00:49:00 18,2020-08-12 
13:41:00+00:00,2020-08-12 14:30:00+00:00,0 days 00:49:00 19,2020-08-13 07:21:00+00:00,2020-08-13 08:18:00+00:00,0 days 00:57:00 20,2020-08-13 07:45:00+00:00,2020-08-13 08:46:00+00:00,0 days 01:01:00 21,2020-08-14 12:33:00+00:00,2020-08-14 13:22:00+00:00,0 days 00:49:00 22,2020-08-14 12:33:00+00:00,2020-08-14 13:22:00+00:00,0 days 00:49:00 23,2020-08-21 06:49:00+00:00,2020-08-21 07:46:00+00:00,0 days 00:57:00 24,2020-08-21 06:49:00+00:00,2020-08-21 07:42:00+00:00,0 days 00:53:00 25,2020-08-21 07:05:00+00:00,2020-08-21 07:54:00+00:00,0 days 00:49:00 26,2020-08-21 08:53:00+00:00,2020-08-21 09:46:00+00:00,0 days 00:53:00 27,2020-08-21 08:53:00+00:00,2020-08-21 10:14:00+00:00,0 days 01:21:00 28,2020-08-21 10:05:00+00:00,2020-08-21 10:54:00+00:00,0 days 00:49:00 29,2020-08-27 16:53:00+00:00,2020-08-27 18:02:00+00:00,0 days 01:09:00 30,2020-08-27 17:05:00+00:00,2020-08-27 18:02:00+00:00,0 days 00:57:00 31,2020-08-27 19:57:00+00:00,2020-08-27 20:46:00+00:00,0 days 00:49:00 32,2020-08-27 19:57:00+00:00,2020-08-27 20:46:00+00:00,0 days 00:49:00 33,2020-08-27 21:37:00+00:00,2020-08-27 22:30:00+00:00,0 days 00:53:00 34,2020-08-28 13:45:00+00:00,2020-08-28 14:30:00+00:00,0 days 00:45:00 35,2020-08-28 13:53:00+00:00,2020-08-28 14:38:00+00:00,0 days 00:45:00 36,2020-08-28 21:33:00+00:00,2020-08-29 04:26:00+00:00,0 days 06:53:00 37,2020-09-01 23:53:00+00:00,2020-09-02 00:46:00+00:00,0 days 00:53:00 38,2020-09-02 00:17:00+00:00,2020-09-02 01:50:00+00:00,0 days 01:33:00 39,2020-09-02 00:17:00+00:00,2020-09-02 01:06:00+00:00,0 days 00:49:00 40,2020-09-02 02:33:00+00:00,2020-09-02 04:22:00+00:00,0 days 01:49:00 41,2020-09-02 02:33:00+00:00,2020-09-02 03:34:00+00:00,0 days 01:01:00 42,2020-09-02 10:09:00+00:00,2020-09-02 11:02:00+00:00,0 days 00:53:00 43,2020-09-02 10:57:00+00:00,2020-09-02 11:46:00+00:00,0 days 00:49:00 44,2020-09-14 06:13:00+00:00,2020-09-14 07:02:00+00:00,0 days 00:49:00 45,2020-10-12 07:29:00+00:00,2020-10-12 08:38:00+00:00,0 days 01:09:00 46,2020-10-12 
07:33:00+00:00,2020-10-12 08:26:00+00:00,0 days 00:53:00 47,2020-10-12 09:41:00+00:00,2020-10-12 10:30:00+00:00,0 days 00:49:00 48,2020-10-12 09:41:00+00:00,2020-10-12 10:58:00+00:00,0 days 01:17:00 49,2020-10-16 08:05:00+00:00,2020-10-16 08:50:00+00:00,0 days 00:45:00 50,2020-10-20 13:05:00+00:00,2020-10-20 13:50:00+00:00,0 days 00:45:00 51,2020-10-25 12:45:00+00:00,2020-10-25 13:38:00+00:00,0 days 00:53:00 52,2020-10-25 13:21:00+00:00,2020-10-25 14:18:00+00:00,0 days 00:57:00 53,2020-11-14 13:13:00+00:00,2020-11-14 14:26:00+00:00,0 days 01:13:00 54,2020-11-14 13:17:00+00:00,2020-11-14 14:26:00+00:00,0 days 01:09:00 55,2020-11-14 13:49:00+00:00,2020-11-14 14:42:00+00:00,0 days 00:53:00 56,2020-11-14 14:21:00+00:00,2020-11-14 16:06:00+00:00,0 days 01:45:00 57,2020-12-12 09:41:00+00:00,2020-12-12 10:46:00+00:00,0 days 01:05:00 58,2020-12-12 10:09:00+00:00,2020-12-12 10:58:00+00:00,0 days 00:49:00 59,2020-12-12 10:37:00+00:00,2020-12-12 11:26:00+00:00,0 days 00:49:00 60,2020-12-12 12:33:00+00:00,2020-12-12 13:30:00+00:00,0 days 00:57:00 61,2020-12-12 13:25:00+00:00,2020-12-12 14:14:00+00:00,0 days 00:49:00 62,2020-12-18 08:13:00+00:00,2020-12-18 09:02:00+00:00,0 days 00:49:00 63,2020-12-19 11:17:00+00:00,2020-12-19 12:06:00+00:00,0 days 00:49:00 64,2020-12-19 13:01:00+00:00,2020-12-19 13:54:00+00:00,0 days 00:53:00 65,2020-12-23 13:53:00+00:00,2020-12-23 14:42:00+00:00,0 days 00:49:00 66,2020-12-24 05:33:00+00:00,2020-12-24 06:22:00+00:00,0 days 00:49:00 67,2020-12-25 05:49:00+00:00,2020-12-25 06:38:00+00:00,0 days 00:49:00 68,2020-12-30 19:57:00+00:00,2020-12-30 20:50:00+00:00,0 days 00:53:00 69,2020-12-31 13:13:00+00:00,2020-12-31 14:10:00+00:00,0 days 00:57:00 70,2021-01-02 04:05:00+00:00,2021-01-02 04:50:00+00:00,0 days 00:45:00 71,2021-01-18 04:57:00+00:00,2021-01-18 05:42:00+00:00,0 days 00:45:00 72,2021-01-27 14:01:00+00:00,2021-01-27 15:14:00+00:00,0 days 01:13:00 73,2021-01-27 14:53:00+00:00,2021-01-27 15:42:00+00:00,0 days 00:49:00 74,2021-03-05 
06:53:00+00:00,2021-03-05 07:50:00+00:00,0 days 00:57:00 75,2021-03-05 06:53:00+00:00,2021-03-05 08:06:00+00:00,0 days 01:13:00 76,2021-03-05 07:29:00+00:00,2021-03-05 08:18:00+00:00,0 days 00:49:00 77,2021-03-05 11:09:00+00:00,2021-03-05 11:58:00+00:00,0 days 00:49:00 78,2021-03-05 11:09:00+00:00,2021-03-05 12:34:00+00:00,0 days 01:25:00 79,2021-03-05 13:01:00+00:00,2021-03-05 13:50:00+00:00,0 days 00:49:00 80,2021-03-05 13:01:00+00:00,2021-03-05 13:50:00+00:00,0 days 00:49:00 81,2021-03-08 07:37:00+00:00,2021-03-08 08:26:00+00:00,0 days 00:49:00 82,2021-03-08 07:37:00+00:00,2021-03-08 08:50:00+00:00,0 days 01:13:00 83,2021-03-08 08:13:00+00:00,2021-03-08 09:02:00+00:00,0 days 00:49:00 84,2021-03-08 08:29:00+00:00,2021-03-08 09:18:00+00:00,0 days 00:49:00 85,2021-03-08 08:37:00+00:00,2021-03-08 09:42:00+00:00,0 days 01:05:00 86,2021-03-08 09:05:00+00:00,2021-03-08 09:58:00+00:00,0 days 00:53:00 87,2021-03-08 09:21:00+00:00,2021-03-08 10:10:00+00:00,0 days 00:49:00 88,2021-03-08 09:41:00+00:00,2021-03-08 10:30:00+00:00,0 days 00:49:00 89,2021-03-08 10:53:00+00:00,2021-03-08 11:42:00+00:00,0 days 00:49:00 90,2021-03-08 12:05:00+00:00,2021-03-08 12:54:00+00:00,0 days 00:49:00 91,2021-03-08 12:05:00+00:00,2021-03-08 12:54:00+00:00,0 days 00:49:00 92,2021-03-08 12:53:00+00:00,2021-03-08 13:42:00+00:00,0 days 00:49:00 93,2021-03-08 13:33:00+00:00,2021-03-08 14:22:00+00:00,0 days 00:49:00 94,2021-03-08 14:09:00+00:00,2021-03-08 14:58:00+00:00,0 days 00:49:00 95,2021-03-08 14:21:00+00:00,2021-03-08 15:10:00+00:00,0 days 00:49:00 96,2021-03-08 14:41:00+00:00,2021-03-08 15:34:00+00:00,0 days 00:53:00 97,2021-03-08 20:13:00+00:00,2021-03-08 21:02:00+00:00,0 days 00:49:00 98,2021-03-08 20:13:00+00:00,2021-03-08 21:10:00+00:00,0 days 00:57:00 99,2021-03-08 20:29:00+00:00,2021-03-08 21:54:00+00:00,0 days 01:25:00 100,2021-03-08 21:57:00+00:00,2021-03-08 23:34:00+00:00,0 days 01:37:00 101,2021-03-08 22:09:00+00:00,2021-03-08 22:58:00+00:00,0 days 00:49:00 102,2021-03-08 
23:01:00+00:00,2021-03-08 23:50:00+00:00,0 days 00:49:00 103,2021-03-08 23:13:00+00:00,2021-03-08 23:58:00+00:00,0 days 00:45:00 104,2021-03-08 23:21:00+00:00,2021-03-09 00:06:00+00:00,0 days 00:45:00 105,2021-03-08 23:41:00+00:00,2021-03-09 00:30:00+00:00,0 days 00:49:00 106,2021-03-09 00:17:00+00:00,2021-03-09 01:06:00+00:00,0 days 00:49:00 107,2021-03-09 00:49:00+00:00,2021-03-09 01:38:00+00:00,0 days 00:49:00 108,2021-03-09 01:01:00+00:00,2021-03-09 01:58:00+00:00,0 days 00:57:00 109,2021-03-09 01:21:00+00:00,2021-03-09 02:10:00+00:00,0 days 00:49:00 110,2021-03-09 01:41:00+00:00,2021-03-09 02:42:00+00:00,0 days 01:01:00 111,2021-03-09 02:09:00+00:00,2021-03-09 03:06:00+00:00,0 days 00:57:00 112,2021-03-09 03:01:00+00:00,2021-03-09 03:50:00+00:00,0 days 00:49:00 113,2021-03-09 03:49:00+00:00,2021-03-09 04:38:00+00:00,0 days 00:49:00 114,2021-03-09 04:13:00+00:00,2021-03-09 05:02:00+00:00,0 days 00:49:00 115,2021-03-09 04:25:00+00:00,2021-03-09 05:14:00+00:00,0 days 00:49:00 116,2021-03-09 04:25:00+00:00,2021-03-09 05:14:00+00:00,0 days 00:49:00 117,2021-03-09 05:01:00+00:00,2021-03-09 05:50:00+00:00,0 days 00:49:00 118,2021-03-09 10:05:00+00:00,2021-03-09 10:58:00+00:00,0 days 00:53:00 119,2021-03-09 10:09:00+00:00,2021-03-09 11:26:00+00:00,0 days 01:17:00 120,2021-03-09 10:37:00+00:00,2021-03-09 11:42:00+00:00,0 days 01:05:00 121,2021-03-09 11:41:00+00:00,2021-03-09 12:30:00+00:00,0 days 00:49:00 122,2021-03-09 11:53:00+00:00,2021-03-09 12:42:00+00:00,0 days 00:49:00 123,2021-03-09 12:25:00+00:00,2021-03-09 13:42:00+00:00,0 days 01:17:00 124,2021-03-09 13:05:00+00:00,2021-03-09 13:58:00+00:00,0 days 00:53:00 125,2021-03-09 13:21:00+00:00,2021-03-09 14:10:00+00:00,0 days 00:49:00 126,2021-03-09 13:41:00+00:00,2021-03-09 14:38:00+00:00,0 days 00:57:00 127,2021-03-09 15:13:00+00:00,2021-03-09 16:30:00+00:00,0 days 01:17:00 128,2021-03-09 16:01:00+00:00,2021-03-09 17:30:00+00:00,0 days 01:29:00 129,2021-03-09 16:45:00+00:00,2021-03-09 18:30:00+00:00,0 days 
01:45:00 130,2021-03-09 16:53:00+00:00,2021-03-09 17:46:00+00:00,0 days 00:53:00 131,2021-03-09 19:21:00+00:00,2021-03-09 20:38:00+00:00,0 days 01:17:00 132,2021-03-09 19:21:00+00:00,2021-03-09 20:18:00+00:00,0 days 00:57:00 133,2021-03-09 20:01:00+00:00,2021-03-09 20:54:00+00:00,0 days 00:53:00 134,2021-03-09 22:09:00+00:00,2021-03-09 23:18:00+00:00,0 days 01:09:00 135,2021-03-10 20:37:00+00:00,2021-03-10 21:30:00+00:00,0 days 00:53:00 136,2021-03-10 20:53:00+00:00,2021-03-10 21:50:00+00:00,0 days 00:57:00 137,2021-03-11 00:01:00+00:00,2021-03-11 00:54:00+00:00,0 days 00:53:00 138,2021-03-11 00:25:00+00:00,2021-03-11 01:22:00+00:00,0 days 00:57:00 139,2021-03-11 00:49:00+00:00,2021-03-11 01:38:00+00:00,0 days 00:49:00 140,2021-03-11 03:25:00+00:00,2021-03-11 04:30:00+00:00,0 days 01:05:00 141,2021-03-11 04:05:00+00:00,2021-03-11 04:58:00+00:00,0 days 00:53:00 142,2021-03-11 04:21:00+00:00,2021-03-11 05:06:00+00:00,0 days 00:45:00 143,2021-03-11 04:29:00+00:00,2021-03-11 05:14:00+00:00,0 days 00:45:00 144,2021-03-11 07:13:00+00:00,2021-03-11 08:10:00+00:00,0 days 00:57:00 145,2021-03-11 07:13:00+00:00,2021-03-11 08:14:00+00:00,0 days 01:01:00 146,2021-03-11 08:09:00+00:00,2021-03-11 08:58:00+00:00,0 days 00:49:00 147,2021-03-11 08:13:00+00:00,2021-03-11 09:30:00+00:00,0 days 01:17:00 148,2021-03-11 09:25:00+00:00,2021-03-11 10:14:00+00:00,0 days 00:49:00 149,2021-03-11 10:29:00+00:00,2021-03-11 11:34:00+00:00,0 days 01:05:00 150,2021-03-11 12:49:00+00:00,2021-03-11 13:38:00+00:00,0 days 00:49:00 151,2021-03-11 13:37:00+00:00,2021-03-11 14:42:00+00:00,0 days 01:05:00 152,2021-03-11 14:01:00+00:00,2021-03-11 14:50:00+00:00,0 days 00:49:00 153,2021-03-11 14:13:00+00:00,2021-03-11 15:06:00+00:00,0 days 00:53:00 154,2021-03-11 14:13:00+00:00,2021-03-11 15:02:00+00:00,0 days 00:49:00 155,2021-03-11 15:33:00+00:00,2021-03-11 16:30:00+00:00,0 days 00:57:00 156,2021-03-11 16:01:00+00:00,2021-03-11 16:54:00+00:00,0 days 00:53:00 157,2021-03-11 16:05:00+00:00,2021-03-11 
17:06:00+00:00,0 days 01:01:00 158,2021-03-12 12:21:00+00:00,2021-03-12 13:14:00+00:00,0 days 00:53:00 159,2021-03-12 13:13:00+00:00,2021-03-12 14:02:00+00:00,0 days 00:49:00 160,2021-03-12 13:13:00+00:00,2021-03-12 14:06:00+00:00,0 days 00:53:00 161,2021-03-12 14:13:00+00:00,2021-03-12 15:02:00+00:00,0 days 00:49:00 162,2021-03-12 14:33:00+00:00,2021-03-12 15:22:00+00:00,0 days 00:49:00 163,2021-03-12 14:49:00+00:00,2021-03-12 15:38:00+00:00,0 days 00:49:00 164,2021-03-12 14:57:00+00:00,2021-03-12 15:46:00+00:00,0 days 00:49:00 165,2021-03-12 15:09:00+00:00,2021-03-12 15:58:00+00:00,0 days 00:49:00 166,2021-03-12 15:09:00+00:00,2021-03-12 15:58:00+00:00,0 days 00:49:00 167,2021-03-12 15:25:00+00:00,2021-03-12 16:14:00+00:00,0 days 00:49:00 168,2021-03-12 15:57:00+00:00,2021-03-12 16:46:00+00:00,0 days 00:49:00 169,2021-03-12 15:57:00+00:00,2021-03-12 16:46:00+00:00,0 days 00:49:00 170,2021-03-12 17:33:00+00:00,2021-03-12 18:22:00+00:00,0 days 00:49:00 171,2021-03-16 11:57:00+00:00,2021-03-16 12:46:00+00:00,0 days 00:49:00 172,2021-03-16 12:01:00+00:00,2021-03-16 12:58:00+00:00,0 days 00:57:00 173,2021-03-16 12:37:00+00:00,2021-03-16 13:26:00+00:00,0 days 00:49:00 174,2021-03-16 12:57:00+00:00,2021-03-16 13:46:00+00:00,0 days 00:49:00 175,2021-03-16 13:13:00+00:00,2021-03-16 14:02:00+00:00,0 days 00:49:00 176,2021-03-16 13:45:00+00:00,2021-03-16 14:34:00+00:00,0 days 00:49:00 177,2021-03-16 14:01:00+00:00,2021-03-16 14:54:00+00:00,0 days 00:53:00 178,2021-03-16 14:05:00+00:00,2021-03-16 14:58:00+00:00,0 days 00:53:00 179,2021-03-16 15:09:00+00:00,2021-03-16 15:58:00+00:00,0 days 00:49:00 180,2021-03-16 17:41:00+00:00,2021-03-16 18:38:00+00:00,0 days 00:57:00 181,2021-03-16 17:49:00+00:00,2021-03-16 18:54:00+00:00,0 days 01:05:00 182,2021-03-16 18:17:00+00:00,2021-03-16 19:14:00+00:00,0 days 00:57:00 183,2021-03-16 18:41:00+00:00,2021-03-16 19:30:00+00:00,0 days 00:49:00 184,2021-03-16 18:53:00+00:00,2021-03-16 19:58:00+00:00,0 days 01:05:00 185,2021-03-16 
20:01:00+00:00,2021-03-16 20:58:00+00:00,0 days 00:57:00 186,2021-03-16 22:05:00+00:00,2021-03-16 22:54:00+00:00,0 days 00:49:00 187,2021-03-16 22:17:00+00:00,2021-03-16 23:14:00+00:00,0 days 00:57:00 188,2021-03-16 22:41:00+00:00,2021-03-16 23:38:00+00:00,0 days 00:57:00 189,2021-03-16 23:41:00+00:00,2021-03-17 00:30:00+00:00,0 days 00:49:00 190,2021-03-17 02:57:00+00:00,2021-03-17 03:54:00+00:00,0 days 00:57:00 191,2021-03-17 09:37:00+00:00,2021-03-17 10:26:00+00:00,0 days 00:49:00 192,2021-06-28 23:05:00+00:00,2021-06-29 00:10:00+00:00,0 days 01:05:00 193,2021-06-29 00:05:00+00:00,2021-06-29 00:50:00+00:00,0 days 00:45:00 194,2021-06-29 00:13:00+00:00,2021-06-29 00:58:00+00:00,0 days 00:45:00 195,2021-09-19 23:21:00+00:00,2021-09-20 00:10:00+00:00,0 days 00:49:00 196,2021-09-20 03:33:00+00:00,2021-09-20 04:22:00+00:00,0 days 00:49:00 197,2021-09-30 09:29:00+00:00,2021-09-30 10:18:00+00:00,0 days 00:49:00 198,2021-09-30 09:41:00+00:00,2021-09-30 10:30:00+00:00,0 days 00:49:00 199,2021-09-30 09:53:00+00:00,2021-09-30 10:46:00+00:00,0 days 00:53:00 200,2021-11-01 11:33:00+00:00,2021-11-01 12:22:00+00:00,0 days 00:49:00 201,2021-11-01 12:37:00+00:00,2021-11-01 13:50:00+00:00,0 days 01:13:00 202,2021-11-01 12:45:00+00:00,2021-11-01 13:50:00+00:00,0 days 01:05:00 203,2021-11-01 13:21:00+00:00,2021-11-01 14:38:00+00:00,0 days 01:17:00 204,2021-11-01 13:21:00+00:00,2021-11-01 14:14:00+00:00,0 days 00:53:00 205,2021-11-01 17:49:00+00:00,2021-11-01 18:42:00+00:00,0 days 00:53:00 206,2021-11-01 17:49:00+00:00,2021-11-01 18:42:00+00:00,0 days 00:53:00 207,2021-11-01 18:33:00+00:00,2021-11-01 19:46:00+00:00,0 days 01:13:00 208,2021-11-01 18:33:00+00:00,2021-11-01 19:22:00+00:00,0 days 00:49:00 209,2021-11-01 20:25:00+00:00,2021-11-01 21:26:00+00:00,0 days 01:01:00 210,2021-11-01 20:29:00+00:00,2021-11-01 21:22:00+00:00,0 days 00:53:00 211,2021-11-01 20:49:00+00:00,2021-11-01 21:42:00+00:00,0 days 00:53:00 212,2021-11-01 21:41:00+00:00,2021-11-01 22:34:00+00:00,0 days 
00:53:00 213,2021-11-01 21:57:00+00:00,2021-11-01 22:50:00+00:00,0 days 00:53:00 214,2021-11-01 22:17:00+00:00,2021-11-01 23:02:00+00:00,0 days 00:45:00 215,2021-11-01 22:25:00+00:00,2021-11-01 23:10:00+00:00,0 days 00:45:00 216,2021-11-02 10:29:00+00:00,2021-11-02 11:26:00+00:00,0 days 00:57:00 217,2021-11-02 18:09:00+00:00,2021-11-02 19:06:00+00:00,0 days 00:57:00 218,2021-11-03 17:01:00+00:00,2021-11-03 17:46:00+00:00,0 days 00:45:00 219,2021-11-05 09:21:00+00:00,2021-11-05 10:26:00+00:00,0 days 01:05:00 220,2021-11-06 13:21:00+00:00,2021-11-06 14:06:00+00:00,0 days 00:45:00 221,2021-11-06 13:29:00+00:00,2021-11-06 14:14:00+00:00,0 days 00:45:00 222,2021-11-13 17:21:00+00:00,2021-11-13 18:10:00+00:00,0 days 00:49:00 223,2021-11-13 17:33:00+00:00,2021-11-13 18:50:00+00:00,0 days 01:17:00 224,2021-11-13 18:29:00+00:00,2021-11-13 19:26:00+00:00,0 days 00:57:00 225,2021-11-13 19:45:00+00:00,2021-11-13 20:54:00+00:00,0 days 01:09:00 226,2021-11-13 21:09:00+00:00,2021-11-13 22:10:00+00:00,0 days 01:01:00 227,2021-11-13 21:57:00+00:00,2021-11-13 22:58:00+00:00,0 days 01:01:00 228,2021-11-13 23:25:00+00:00,2021-11-14 00:22:00+00:00,0 days 00:57:00 229,2021-11-14 00:01:00+00:00,2021-11-14 00:46:00+00:00,0 days 00:45:00 230,2021-11-14 00:09:00+00:00,2021-11-14 00:54:00+00:00,0 days 00:45:00 231,2021-11-14 00:45:00+00:00,2021-11-14 01:42:00+00:00,0 days 00:57:00 232,2021-11-14 14:45:00+00:00,2021-11-14 15:38:00+00:00,0 days 00:53:00 233,2021-11-14 15:17:00+00:00,2021-11-14 16:06:00+00:00,0 days 00:49:00 234,2021-11-14 17:05:00+00:00,2021-11-14 17:50:00+00:00,0 days 00:45:00 235,2021-11-14 17:13:00+00:00,2021-11-14 17:58:00+00:00,0 days 00:45:00 236,2021-11-14 18:09:00+00:00,2021-11-14 19:06:00+00:00,0 days 00:57:00 237,2021-11-14 20:21:00+00:00,2021-11-14 21:06:00+00:00,0 days 00:45:00 238,2021-11-14 20:29:00+00:00,2021-11-14 21:14:00+00:00,0 days 00:45:00 239,2021-11-14 20:53:00+00:00,2021-11-14 21:42:00+00:00,0 days 00:49:00 240,2021-11-14 21:05:00+00:00,2021-11-14 
21:54:00+00:00,0 days 00:49:00 241,2021-11-14 21:09:00+00:00,2021-11-14 22:02:00+00:00,0 days 00:53:00 242,2021-11-14 21:17:00+00:00,2021-11-14 22:14:00+00:00,0 days 00:57:00 243,2021-11-14 21:33:00+00:00,2021-11-14 22:30:00+00:00,0 days 00:57:00 244,2021-11-14 21:53:00+00:00,2021-11-14 22:42:00+00:00,0 days 00:49:00 245,2021-11-14 22:01:00+00:00,2021-11-14 22:58:00+00:00,0 days 00:57:00 246,2021-11-14 22:05:00+00:00,2021-11-14 22:54:00+00:00,0 days 00:49:00 247,2021-11-14 22:17:00+00:00,2021-11-14 23:30:00+00:00,0 days 01:13:00 248,2021-11-14 22:29:00+00:00,2021-11-14 23:26:00+00:00,0 days 00:57:00 249,2021-11-14 23:21:00+00:00,2021-11-15 00:14:00+00:00,0 days 00:53:00 250,2021-11-14 23:37:00+00:00,2021-11-15 00:30:00+00:00,0 days 00:53:00 251,2021-11-14 23:49:00+00:00,2021-11-15 00:46:00+00:00,0 days 00:57:00 252,2021-11-15 00:21:00+00:00,2021-11-15 01:14:00+00:00,0 days 00:53:00 253,2021-11-15 00:33:00+00:00,2021-11-15 01:34:00+00:00,0 days 01:01:00 254,2021-11-15 01:01:00+00:00,2021-11-15 01:54:00+00:00,0 days 00:53:00 255,2021-11-15 01:17:00+00:00,2021-11-15 02:06:00+00:00,0 days 00:49:00 256,2021-11-15 01:33:00+00:00,2021-11-15 02:22:00+00:00,0 days 00:49:00 257,2021-11-15 02:01:00+00:00,2021-11-15 02:54:00+00:00,0 days 00:53:00 258,2021-11-15 04:33:00+00:00,2021-11-15 05:22:00+00:00,0 days 00:49:00 259,2021-11-15 04:49:00+00:00,2021-11-15 05:38:00+00:00,0 days 00:49:00 260,2021-11-15 11:17:00+00:00,2021-11-15 12:10:00+00:00,0 days 00:53:00 261,2021-11-15 11:37:00+00:00,2021-11-15 12:30:00+00:00,0 days 00:53:00 262,2021-11-15 15:57:00+00:00,2021-11-15 16:50:00+00:00,0 days 00:53:00 263,2021-11-15 17:57:00+00:00,2021-11-15 18:50:00+00:00,0 days 00:53:00 264,2021-11-15 18:21:00+00:00,2021-11-15 19:10:00+00:00,0 days 00:49:00 265,2021-11-15 23:29:00+00:00,2021-11-16 00:18:00+00:00,0 days 00:49:00 266,2021-11-15 23:53:00+00:00,2021-11-16 00:42:00+00:00,0 days 00:49:00 267,2021-11-16 00:13:00+00:00,2021-11-16 01:14:00+00:00,0 days 01:01:00 268,2021-11-16 
00:37:00+00:00,2021-11-16 01:34:00+00:00,0 days 00:57:00 269,2021-11-16 00:53:00+00:00,2021-11-16 02:22:00+00:00,0 days 01:29:00 270,2021-11-16 01:49:00+00:00,2021-11-16 02:50:00+00:00,0 days 01:01:00 271,2021-11-16 02:13:00+00:00,2021-11-16 03:02:00+00:00,0 days 00:49:00 272,2021-11-16 02:25:00+00:00,2021-11-16 03:14:00+00:00,0 days 00:49:00 273,2021-11-16 02:37:00+00:00,2021-11-16 03:30:00+00:00,0 days 00:53:00 274,2021-11-16 05:17:00+00:00,2021-11-16 06:14:00+00:00,0 days 00:57:00 275,2021-11-16 19:17:00+00:00,2021-11-16 20:22:00+00:00,0 days 01:05:00 276,2021-11-16 19:53:00+00:00,2021-11-16 20:46:00+00:00,0 days 00:53:00 277,2021-11-16 20:13:00+00:00,2021-11-16 21:02:00+00:00,0 days 00:49:00 278,2021-11-16 20:25:00+00:00,2021-11-16 21:14:00+00:00,0 days 00:49:00 279,2021-11-16 20:53:00+00:00,2021-11-16 21:42:00+00:00,0 days 00:49:00 280,2021-11-16 21:05:00+00:00,2021-11-16 21:54:00+00:00,0 days 00:49:00 281,2021-11-16 21:29:00+00:00,2021-11-16 22:18:00+00:00,0 days 00:49:00 282,2021-11-16 21:41:00+00:00,2021-11-16 22:34:00+00:00,0 days 00:53:00 283,2021-11-16 21:45:00+00:00,2021-11-16 22:38:00+00:00,0 days 00:53:00 284,2021-11-16 22:01:00+00:00,2021-11-16 23:26:00+00:00,0 days 01:25:00 285,2021-11-16 22:21:00+00:00,2021-11-16 23:14:00+00:00,0 days 00:53:00 286,2021-11-16 22:49:00+00:00,2021-11-16 23:46:00+00:00,0 days 00:57:00 287,2021-11-16 23:29:00+00:00,2021-11-17 00:18:00+00:00,0 days 00:49:00 288,2021-11-17 00:13:00+00:00,2021-11-17 01:06:00+00:00,0 days 00:53:00 289,2021-11-17 00:49:00+00:00,2021-11-17 02:06:00+00:00,0 days 01:17:00 290,2021-11-17 00:49:00+00:00,2021-11-17 01:58:00+00:00,0 days 01:09:00 291,2021-11-17 03:29:00+00:00,2021-11-17 04:22:00+00:00,0 days 00:53:00 292,2021-11-17 03:29:00+00:00,2021-11-17 04:30:00+00:00,0 days 01:01:00 293,2021-11-17 04:01:00+00:00,2021-11-17 04:50:00+00:00,0 days 00:49:00 294,2021-11-17 04:05:00+00:00,2021-11-17 04:54:00+00:00,0 days 00:49:00 295,2021-11-17 07:25:00+00:00,2021-11-17 08:14:00+00:00,0 days 
00:49:00 296,2021-11-17 07:25:00+00:00,2021-11-17 08:14:00+00:00,0 days 00:49:00 297,2021-11-17 07:37:00+00:00,2021-11-17 08:34:00+00:00,0 days 00:57:00 298,2021-11-17 07:37:00+00:00,2021-11-17 08:26:00+00:00,0 days 00:49:00 299,2021-11-17 07:57:00+00:00,2021-11-17 08:58:00+00:00,0 days 01:01:00 300,2021-11-17 10:49:00+00:00,2021-11-17 11:50:00+00:00,0 days 01:01:00 301,2021-11-17 10:57:00+00:00,2021-11-17 11:46:00+00:00,0 days 00:49:00 302,2021-11-17 11:13:00+00:00,2021-11-17 12:18:00+00:00,0 days 01:05:00 303,2021-11-17 11:41:00+00:00,2021-11-17 12:30:00+00:00,0 days 00:49:00 304,2021-11-17 14:01:00+00:00,2021-11-17 14:50:00+00:00,0 days 00:49:00 305,2021-11-17 14:13:00+00:00,2021-11-17 14:58:00+00:00,0 days 00:45:00 306,2021-11-17 14:25:00+00:00,2021-11-17 15:18:00+00:00,0 days 00:53:00 307,2021-11-17 17:17:00+00:00,2021-11-17 18:38:00+00:00,0 days 01:21:00 308,2021-11-17 18:53:00+00:00,2021-11-17 19:42:00+00:00,0 days 00:49:00 309,2021-11-17 18:57:00+00:00,2021-11-17 19:54:00+00:00,0 days 00:57:00 310,2021-11-17 19:17:00+00:00,2021-11-17 20:10:00+00:00,0 days 00:53:00 311,2021-11-17 20:05:00+00:00,2021-11-17 20:54:00+00:00,0 days 00:49:00 312,2021-11-17 20:05:00+00:00,2021-11-17 21:10:00+00:00,0 days 01:05:00 313,2021-11-17 20:21:00+00:00,2021-11-17 21:14:00+00:00,0 days 00:53:00 314,2021-11-17 21:01:00+00:00,2021-11-17 21:54:00+00:00,0 days 00:53:00 315,2021-11-17 21:21:00+00:00,2021-11-17 22:10:00+00:00,0 days 00:49:00 316,2021-11-17 23:29:00+00:00,2021-11-18 01:06:00+00:00,0 days 01:37:00 317,2021-11-17 23:33:00+00:00,2021-11-18 00:22:00+00:00,0 days 00:49:00 318,2021-11-18 00:29:00+00:00,2021-11-18 01:18:00+00:00,0 days 00:49:00 319,2021-11-18 00:45:00+00:00,2021-11-18 01:34:00+00:00,0 days 00:49:00 320,2021-11-18 01:57:00+00:00,2021-11-18 03:54:00+00:00,0 days 01:57:00 321,2021-11-18 01:57:00+00:00,2021-11-18 02:50:00+00:00,0 days 00:53:00 322,2021-11-18 04:53:00+00:00,2021-11-18 05:50:00+00:00,0 days 00:57:00 323,2021-11-18 05:57:00+00:00,2021-11-18 
06:46:00+00:00,0 days 00:49:00 324,2021-11-18 09:45:00+00:00,2021-11-18 10:38:00+00:00,0 days 00:53:00 325,2021-11-18 13:33:00+00:00,2021-11-18 14:22:00+00:00,0 days 00:49:00 326,2021-11-18 14:25:00+00:00,2021-11-18 15:14:00+00:00,0 days 00:49:00 327,2021-11-18 15:33:00+00:00,2021-11-18 16:26:00+00:00,0 days 00:53:00 328,2021-11-18 16:13:00+00:00,2021-11-18 17:02:00+00:00,0 days 00:49:00 329,2021-11-18 18:17:00+00:00,2021-11-18 19:06:00+00:00,0 days 00:49:00 330,2021-11-18 19:01:00+00:00,2021-11-18 19:50:00+00:00,0 days 00:49:00 331,2021-11-18 19:57:00+00:00,2021-11-18 20:42:00+00:00,0 days 00:45:00 332,2021-11-18 20:05:00+00:00,2021-11-18 20:50:00+00:00,0 days 00:45:00 333,2021-11-20 14:33:00+00:00,2021-11-20 15:26:00+00:00,0 days 00:53:00 334,2021-11-20 14:53:00+00:00,2021-11-20 15:46:00+00:00,0 days 00:53:00 335,2021-11-20 15:29:00+00:00,2021-11-20 16:26:00+00:00,0 days 00:57:00 336,2021-11-20 15:57:00+00:00,2021-11-20 16:58:00+00:00,0 days 01:01:00 337,2021-11-20 16:05:00+00:00,2021-11-20 16:54:00+00:00,0 days 00:49:00 338,2021-11-20 16:17:00+00:00,2021-11-20 17:06:00+00:00,0 days 00:49:00 339,2021-11-20 16:29:00+00:00,2021-11-20 17:18:00+00:00,0 days 00:49:00 340,2021-11-20 16:53:00+00:00,2021-11-20 17:42:00+00:00,0 days 00:49:00 341,2021-11-20 17:21:00+00:00,2021-11-20 18:14:00+00:00,0 days 00:53:00 342,2021-11-20 18:45:00+00:00,2021-11-20 20:30:00+00:00,0 days 01:45:00 343,2021-11-20 19:01:00+00:00,2021-11-20 19:50:00+00:00,0 days 00:49:00 344,2021-11-20 19:53:00+00:00,2021-11-20 20:42:00+00:00,0 days 00:49:00 345,2021-11-20 20:09:00+00:00,2021-11-20 20:58:00+00:00,0 days 00:49:00 346,2021-11-20 20:25:00+00:00,2021-11-20 21:14:00+00:00,0 days 00:49:00 347,2021-11-20 20:49:00+00:00,2021-11-20 21:38:00+00:00,0 days 00:49:00 348,2021-11-20 21:01:00+00:00,2021-11-20 21:50:00+00:00,0 days 00:49:00 349,2021-11-20 22:13:00+00:00,2021-11-20 23:02:00+00:00,0 days 00:49:00 350,2021-11-20 23:01:00+00:00,2021-11-20 23:50:00+00:00,0 days 00:49:00 351,2021-11-21 
03:09:00+00:00,2021-11-21 03:58:00+00:00,0 days 00:49:00 352,2021-11-21 03:21:00+00:00,2021-11-21 04:14:00+00:00,0 days 00:53:00 353,2021-11-21 17:09:00+00:00,2021-11-21 17:58:00+00:00,0 days 00:49:00 354,2021-11-21 20:33:00+00:00,2021-11-21 21:22:00+00:00,0 days 00:49:00 355,2021-11-22 08:33:00+00:00,2021-11-22 09:22:00+00:00,0 days 00:49:00 356,2021-11-22 09:17:00+00:00,2021-11-22 10:06:00+00:00,0 days 00:49:00 357,2021-11-22 09:57:00+00:00,2021-11-22 10:46:00+00:00,0 days 00:49:00 </code></pre>
<python><pandas><data-cleaning>
2023-06-19 11:14:53
2
710
Yoda
76,506,082
3,994,092
HTTP request with Python requests library does not complete unless a timeout is specified
<p>A simple HTTP request in Python will succeed from some PCs and not others. The PCs are on different networks.</p> <p>Python version = 3.10.10, requests==2.28.2</p> <p>The request can be as simple as below, but replace the API URL with the one that you are having trouble with.</p> <pre><code>import requests print(requests.get(&quot;https://api.github.com&quot;),) </code></pre> <p>No error is produced. There is no sign of activity on the HTTP server.</p> <p>If I add a timeout such as</p> <pre><code>import requests print(requests.get(&quot;https://api.github.com&quot;, timeout=1)) </code></pre> <p>the request will complete successfully. However, it is clear from testing with various timeout values that the request is only being made after the timeout ends.</p> <p>This creates a problem for performance testing.</p>
<python><http><python-requests><timeout>
2023-06-19 11:13:43
1
501
John Curry
76,505,989
9,488,023
Combining two unique values in a Pandas column if they have the same value in another column
<p>Let's say I have a very large Pandas dataframe in Python that looks something like this:</p> <pre><code>df_test = pd.DataFrame(data = None, columns = ['file','source']) df_test.file = ['file_1', 'file_1', 'file_2', 'file_2', 'file_3', 'file_3'] df_test.source = ['usa', 'uk', 'jp', 'sk', 'au', 'nz'] </code></pre> <p>What I want to get out of this is for the 'source' column to combine the unique sources into a single string, separating the unique sources with '; ', for each value in the 'file' column that is the same. The end result for the 'source' column should therefore be:</p> <pre><code>['usa; uk', 'usa; uk', 'jp; sk', 'jp; sk', 'au; nz', 'au; nz'] </code></pre> <p>This is because 'file_1' in the 'file' column has the two sources 'usa' and 'uk', etc. The actual dataframe is very large, so it must be done automatically and not manually. Any help on how to do this would be really appreciated, thanks!</p>
<python><pandas><dataframe><merge>
2023-06-19 11:00:31
1
423
Marcus K.
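The combination described in the question above can be sketched with pandas' `groupby` plus `transform`; the following is a sketch reproducing the question's example frame (not the asker's code):

```python
import pandas as pd

# Rebuild the question's example frame
df_test = pd.DataFrame({
    'file': ['file_1', 'file_1', 'file_2', 'file_2', 'file_3', 'file_3'],
    'source': ['usa', 'uk', 'jp', 'sk', 'au', 'nz'],
})

# For each 'file' group, join the unique sources with '; ' and
# broadcast that string back to every row of the group.
df_test['source'] = df_test.groupby('file')['source'].transform(
    lambda s: '; '.join(s.unique())
)

print(df_test['source'].tolist())
# -> ['usa; uk', 'usa; uk', 'jp; sk', 'jp; sk', 'au; nz', 'au; nz']
```

Because `transform` returns a result aligned with the original index, no merge back into the frame is needed, and the same one-liner scales to arbitrarily many distinct files.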
76,505,869
2,725,810
Catching messages passed to unit tests
<p>I am developing a Django backend for an online course platform. When a student submits code for grading, the backend runs unit tests (using the <code>unittest</code> library) and returns to the frontend the message returned by the failed test.</p> <p>After much help from ChatGPT, I can catch the message passed to the assertions by using the following wrapper:</p> <pre class="lang-py prettyprint-override"><code>def save_custom_message(func): def wrapper(self, *args, **kwargs): MyTests.msg = kwargs.get('msg', '') return func(self, *args, **kwargs) return wrapper class CustomTestCase(unittest.TestCase): def __getattribute__(self, name): attr = super().__getattribute__(name) if name.startswith(&quot;assert&quot;) and callable(attr): return save_custom_message(attr) return attr </code></pre> <p>Here is the usage:</p> <pre class="lang-py prettyprint-override"><code>class MyTests(CustomTestCase): def test_division(self): self.assertEqual(10 / 2, 5, msg=&quot;First division assert&quot;) self.assertEqual(10 / 2, 3, msg=&quot;Second division assert&quot;) self.assertEqual(10 / 5, 2, msg=&quot;Third division assert&quot;) def run_unit_tests(test_class): loader = unittest.TestLoader() suite = loader.loadTestsFromTestCase(test_class) for test in suite: result = unittest.TextTestRunner().run(test) if not result.wasSuccessful(): for failure in result.failures: return MyTests.msg return True result = run_unit_tests(MyTests) if result is True: print(&quot;All tests passed!&quot;) else: print(f&quot;Test failed: {result}&quot;) </code></pre> <p>This works just fine. 
But now I would like to be able to pass a message to the test function:</p> <pre class="lang-py prettyprint-override"><code>class MyTests(CustomTestCase): def test_division(self, msg=&quot;Something is wrong with division&quot;): self.assertEqual(10 / 2, 5) self.assertEqual(10 / 2, 3) self.assertEqual(10 / 5, 2) </code></pre> <p>My understanding was that <code>test_division</code> was a method of MyTests just like <code>assertEqual</code>. Therefore, I tried simply testing for the prefix <code>test</code> as well:</p> <pre class="lang-py prettyprint-override"><code>class CustomTestCase(unittest.TestCase): default_message = &quot;The test case failed&quot; def __getattribute__(self, name): attr = super().__getattribute__(name) if (name.startswith(&quot;assert&quot;) or name.startswith(&quot;test&quot;)) and \ callable(attr): return save_custom_message(attr) return attr </code></pre> <p>However, I got the error:</p> <p><code>TypeError: save_custom_message.&lt;locals&gt;.wrapper() missing 1 required positional argument: 'self'</code></p> <p>This suggests that my understanding is wrong. I would very much appreciate an in-depth explanation of the issue and a suggestion on how to fix it.</p> <p>P.S. An additional confusion I would be happy to resolve: when I tried printing <code>args</code> for <code>self.assertEqual(10 / 2, 3, msg=&quot;Second division assert&quot;)</code>, only 3 would be printed, but 5 would not. This suggests that the first argument does not seem to be passed. I would be happy to understand what's going on.</p>
<python><wrapper><python-unittest><python-decorators>
2023-06-19 10:43:37
1
8,211
AlwaysLearning
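The TypeError in the question above follows from method binding: `super().__getattribute__(name)` returns a method that is *already bound* to the instance, so a wrapper that declares its own `self` parameter silently consumes the first real positional argument (which is also why only `3` showed up in the P.S.) and fails with "missing 1 required positional argument: 'self'" when a test method is invoked with no arguments. A minimal sketch of the fix; the `captured` dict and the `test_ok` method are illustrative, not the asker's code:

```python
import unittest

captured = {}

def save_custom_message(func):
    # `func` arrives here already bound to the instance, so the wrapper
    # must NOT declare its own `self`; that extra parameter would swallow
    # the first real argument passed to the assertion.
    def wrapper(*args, **kwargs):
        captured['msg'] = kwargs.get('msg', '')
        return func(*args, **kwargs)
    return wrapper

class CustomTestCase(unittest.TestCase):
    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if name.startswith('assert') and callable(attr):
            return save_custom_message(attr)
        return attr

class MyTests(CustomTestCase):
    def test_ok(self):
        self.assertEqual(10 / 2, 5, msg='division check')

MyTests('test_ok').test_ok()
print(captured['msg'])  # -> division check
```

The same binding rule explains why wrapping `test*` names broke: unittest calls the bound test method with no arguments, so a wrapper expecting `self` has nothing to fill it with.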
76,505,784
9,182,743
Independently sized buckets for multi-faceted histogram plot
<p>In Plotly Express:</p> <p>I want to plot the histograms of multiple columns that are on <strong>different scales</strong>.</p> <p>I need the buckets of each subplot to be independent of the others.</p> <p>Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import plotly.express as px df = px.data.tips() df['total_bill_times_100'] = df['total_bill']*100 df_plot = df.melt(value_vars = ['total_bill','total_bill_times_100' ]) fig = px.histogram(df_plot, x=&quot;value&quot;, facet_col = 'variable', template='simple_white') fig.update_yaxes(matches=None) fig.update_xaxes(matches=None) fig.show() </code></pre> <p>Current output: <a href="https://i.sstatic.net/o0FfY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o0FfY.png" alt="enter image description here" /></a></p>
<python><pandas><plotly>
2023-06-19 10:32:34
1
1,168
Leo
76,505,566
11,831,942
WebRTC aiortc - Sending video from server.py to client.py giving RTCIceTransport error
<p>I want to stream a video from server.py to client.py using aiortc and TCPSocketHandling. I seem do be doing everything as per the aiortc examples and documentation, but still getting the following error:-</p> <blockquote> <p>exception=InvalidStateError('RTCIceTransport is closed')&gt;</p> </blockquote> <p>Attaching the client.py and server.py code</p> <p><strong>Server.py</strong></p> <pre><code>import asyncio import cv2 from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription, RTCIceCandidate from aiortc.contrib.media import MediaBlackhole, MediaPlayer, MediaRelay from aiortc.contrib.signaling import TcpSocketSignaling from av import VideoFrame class VideoStreamTrack(MediaStreamTrack): kind = &quot;video&quot; def __init__(self, video_path): super().__init__() # Initialize the base class # self.track = track self.video_path = video_path self.cap = cv2.VideoCapture(video_path) async def recv(self): # Read frames from the video file and convert them to RTCVideoFrames ret, img = self.cap.read() if ret: # pts, time_base = await self.next_timestamp() frame = VideoFrame.from_ndarray(img, format=&quot;bgr24&quot;) # frame.pts = pts # frame.time_base = time_base # await asyncio.sleep(1/30) return frame else: # Video ended, close the connection self.cap.release() raise ConnectionError(&quot;Video stream ended&quot;) async def serve_video(pc, signaling): # Create a MediaRelay to relay media tracks between peers relay = MediaRelay() # track = MediaStreamTrack() # Add the video track to the peer connection video_path = &quot;test.mp4&quot; video_track = VideoStreamTrack(video_path) pc.addTrack(relay.subscribe(video_track)) # Create an offer and set it as the local description offer = await pc.createOffer() await pc.setLocalDescription(offer) # Send the offer to the client await signaling.send(pc.localDescription) # Wait for the answer from the client answer = await signaling.receive() if isinstance(answer, RTCSessionDescription): await 
pc.setRemoteDescription(answer) elif isinstance(answer, RTCIceCandidate): await pc.addIceCandidate(answer) # await pc.setRemoteDescription(RTCSessionDescription(answer)) # Start relaying the media between peers # while True: # try: # frame = await video_track.recv() # await pc.sendVideoFrame(frame) # except ConnectionError: # break # # Cleanup when done await pc.close() async def main(): pc = RTCPeerConnection() # Create a TCP socket signaling instance signaling = TcpSocketSignaling(&quot;127.0.0.1&quot;, 8080) # Connect to the signaling server await signaling.connect() # Serve the video await serve_video(pc, signaling) # Close the signaling connection await signaling.close() if __name__ == &quot;__main__&quot;: loop = asyncio.get_event_loop() loop.run_until_complete(main()) </code></pre> <p><strong>Client.py:-</strong></p> <pre><code>import asyncio import cv2 from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription, RTCIceCandidate from aiortc.contrib.media import MediaBlackhole, MediaPlayer, MediaRelay from aiortc.contrib.signaling import TcpSocketSignaling class VideoStreamTrack(MediaStreamTrack): kind = &quot;video&quot; def __init__(self, track): super().__init__() # Initialize the base class self.track = track async def recv(self): # Receive RTCVideoFrames and render them frame = await self.track.recv() if frame: img = frame.to_ndarray(format=&quot;bgr24&quot;) cv2.imshow(&quot;b&quot;, frame) print(&quot;a&quot;) cv2.waitKey(1) else: # Video ended, close the connection raise ConnectionError(&quot;Video stream ended&quot;) return frame async def receive_video(pc, signaling): # Create a MediaRelay to relay media tracks between peers # relay = MediaRelay() offer = await signaling.receive() @pc.on(&quot;iceconnectionstatechange&quot;) async def on_iceconnectionstatechange(): print(&quot;ICE connection state is %s&quot; % pc.iceConnectionState) if pc.iceConnectionState == &quot;failed&quot;: print(&quot;Error!!!&quot;) await pc.close() # 
pcs.discard(pc) @pc.on(&quot;track&quot;) def on_track(track): video_track = VideoStreamTrack(track) if isinstance(offer, RTCSessionDescription): await pc.setRemoteDescription(offer) elif isinstance(offer, RTCIceCandidate): await pc.addIceCandidate(offer) # await pc.setRemoteDescription(RTCSessionDescription(answer)) # Create a renderer to display the received video # Add the video track to the peer connection # pc.addTrack(video_track) # Create an answer and set it as the local description await pc.setLocalDescription(await pc.createAnswer()) # Send the answer back to the server await signaling.send(pc.localDescription) # Start relaying the media between peers # while True: # try: # await video_track.recv() # except ConnectionError: # break # Cleanup when done await pc.close() async def main(): pc = RTCPeerConnection() # Create a TCP socket signaling instance signaling = TcpSocketSignaling(&quot;127.0.0.1&quot;, 8080) # Connect to the signaling server await signaling.connect() # Receive the video await receive_video(pc, signaling) # Close the signaling connection await signaling.close() if __name__ == &quot;__main__&quot;: loop = asyncio.get_event_loop() loop.run_until_complete(main()) </code></pre>
<python><sockets><video><webrtc><aiortc>
2023-06-19 10:03:04
0
328
10may
76,505,530
15,229,310
python pathlib joinpath() drops part of main path if other path starts with slash
<p>pathlib's Path.joinpath() seems to drop the subfolder of the main path when joining with another path that starts with a forward slash:</p> <pre><code>from pathlib import Path a = Path(r'c:\main_folder') element_a = r'subfolderX/subfolderXY/file.txt' a.joinpath(element_a) Out[]: WindowsPath('c:/main_folder/subfolderX/subfolderXY/file.txt') # expected element_b = r'/subfolderX/subfolderXY/file.txt' a.joinpath(element_b) Out[]: WindowsPath('c:/subfolderX/subfolderXY/file.txt') # ?! missing 'main_folder' </code></pre> <p>Joining the path with element_b, the result is missing 'main_folder' (!)</p> <p>I must be missing something (fundamental), since I find this very confusing and a source of hard-to-find bugs.</p> <p>Firstly, I would expect the purpose of pathlib and its methods to be to gracefully handle filepath intricacies and shield developers from dealing with this ugliness. Save for ambiguous cases, but then -&gt;</p> <p>Secondly, in an ambiguous situation (Windows vs POSIX differences or whatnot) I'd expect the library or method to crash or refuse to run, rather than silently give me an unexpected result.</p> <p>And thirdly, mainly, I'd expect the joinpath() method to <strong>join</strong> the two paths above anything else.</p> <p>I see in the joinpath() docs that it returns a <em>new path representing either a subpath (if all arguments are relative paths) or a totally different path (if one of the arguments is anchored)</em>.</p> <p>What is 'anchored', and how do I work around it? If I am constructing a path out of various elements (some configured, some provided by users, some stripped from other paths), am I back to awkwardly testing for slashes by hand and constructing various string concatenations based on that?</p>
<python>
2023-06-19 09:58:02
2
349
stam
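The behaviour the question above describes is the documented "anchored" rule: a segment that begins with a slash carries a root, and joining with a rooted segment resets the path back to the drive. A sketch using `PureWindowsPath` (so it behaves the same on any OS); stripping the leading separators is one way to force a plain relative join:

```python
from pathlib import PureWindowsPath

base = PureWindowsPath(r'c:\main_folder')

# A segment with a leading slash is "anchored" (it has a root),
# so joining it resets everything after the drive:
anchored = base.joinpath('/subfolderX/subfolderXY/file.txt')
print(anchored)  # c:\subfolderX\subfolderXY\file.txt  ('main_folder' is gone)

# Stripping leading separators makes the segment relative again:
element = '/subfolderX/subfolderXY/file.txt'.lstrip('/\\')
joined = base.joinpath(element)
print(joined)    # c:\main_folder\subfolderX\subfolderXY\file.txt
```

Whether the `lstrip` is the right policy depends on whether user-supplied absolute paths should be honoured or coerced; pathlib deliberately honours them, which is exactly the surprise reported here.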
76,505,431
15,452,601
Declare JSON encoder on the class itself for Pydantic
<p>I have the following class:</p> <pre class="lang-py prettyprint-override"><code>class Thing: def __init__(self, x: str): self.x = x def __str__(self): return self.x @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v: str) -&gt; &quot;Thing&quot;: return cls(v) </code></pre> <p>Due to the validator method I can use this class as a custom field type in a Pydantic model:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel from thing import Thing class Model(BaseModel): thing: Thing </code></pre> <p>But if I want to serialize to JSON I need to set the <a href="https://docs.pydantic.dev/latest/usage/exporting_models/#json_encoders" rel="nofollow noreferrer"><code>json_encoders</code></a> option on the Pydantic model:</p> <pre class="lang-py prettyprint-override"><code>class Model(BaseModel): class Config: json_encoders = { Thing: str } thing: Thing </code></pre> <p>Now Pydantic can serialize <code>Thing</code>s to JSON and back. But the config is in two places: partly on the <code>Model</code> and partly on the class <code>Thing</code>. I'd like to set it all on <code>Thing</code>.</p> <p>Is there any way to set the <code>json_encoders</code> option on <code>Thing</code> so Pydantic knows how to handle it transparently?</p>
<python><json><pydantic>
2023-06-19 09:44:55
1
6,024
2e0byo
76,505,383
3,614,197
Inserting text into tables of a PowerPoint slide using the PPTX package adds a carriage return before the desired text
<p>I am using the Python PPTX package to generate tables into a PowerPoint presentation. I have a template which is loaded, then I generate two tables on the first slide.</p> <p>Tables are 5,2 each with the first row merged; in the merged row of each table I am attempting to enter text. I can get the text to insert; however, my code seems to add a carriage return before the text, then the top row ends up a different height to the rest of the rows.</p> <p>My code is below:</p> <pre><code> from pptx.dml.color import RGBColor from pptx.util import Cm from pptx.util import Pt from pptx.enum.text import PP_ALIGN from pptx.dml.color import RGBColor prs = Presentation('pathto/Template.pptx') # Select the second slide (index 1 since slide indexes start from 0) slide = prs.slides[1] # Define the position and size of the first table x1, y1, cx1, cy1 = Cm(4), Cm(4), Cm(8), Cm(6) # Add the first table shape to the slide shape1 = slide.shapes.add_table(5, 2, x1, y1, cx1, cy1) # Access the first table object table1 = shape1.table # Merge cells in the top row of the first table table1.cell(0, 0).merge(table1.cell(0, 1)) # Get the merged cell in the first table merged_cell1 = table1.cell(0, 0) # Set text and formatting for the merged cell in the first table text_frame1 = merged_cell1.text_frame text_frame1.clear() # Clear existing text p1 = text_frame1.add_paragraph() p1.text = &quot;Background Window&quot; p1.alignment = PP_ALIGN.CENTER p1.font.name = &quot;Calibri Light&quot; p1.font.color.rgb = RGBColor(0, 0, 0) # Black font color p1.font.size = Pt(14) # Change the font size to the desired value # Define the position and size of the second table x2, y2, cx2, cy2 = Cm(4), Cm(11), Cm(8), Cm(6) # Add the second table shape to the slide shape2 = slide.shapes.add_table(5, 2, x2, y2, cx2, cy2) # Access the second table object table2 = shape2.table # Merge cells in the top row of the second table table2.cell(0, 0).merge(table2.cell(0, 1)) # Get the merged cell in the second table
merged_cell2 = table2.cell(0, 0) # Set text and formatting for the merged cell in the second table text_frame2 = merged_cell2.text_frame text_frame2.clear() # Clear existing text p2 = text_frame2.add_paragraph() p2.text = &quot;Activity Tracker&quot; p2.alignment = PP_ALIGN.CENTER p2.font.name = &quot;Calibri Light&quot; p2.font.color.rgb = RGBColor(0, 0, 0) # Black font color p2.font.size = Pt(14) # Change the font size to the desired value </code></pre> <p>How do I stop the carriage return from being inserted? I am going to start populating the rest of the tables with additional information so would like to prevent it from occurring there for the remainder of the table as well.</p>
<python><python-pptx>
2023-06-19 09:37:31
1
636
Spooked
76,505,257
9,488,023
Replace values in a Pandas column to be the same for all unique values in another column
<p>What I have is a large Pandas dataframe in Python that looks something like this:</p> <pre><code>df_test = pd.DataFrame(data = None, columns = ['file','source']) df_test.file = ['file_1', 'file_1', 'file_2', 'file_2', 'file_3', 'file_3'] df_test.source = ['nasa', 'unknown', 'esa', 'unknown', 'jaxa', 'unknown'] </code></pre> <p>What I want is that all rows in the 'file' column with the same name should have the same value in the 'source' column, and not be 'unknown'. It should then look like this:</p> <pre><code>['nasa', 'nasa', 'esa', 'esa', 'jaxa', 'jaxa'] </code></pre> <p>I can easily find which entries should be replaced with:</p> <pre><code>df_test.loc[df_test.source == 'unknown'] </code></pre> <p>But I'm not sure how to replace them from this, any help is appreciated!</p>
<python><pandas><dataframe><replace>
2023-06-19 09:19:42
2
423
Marcus K.
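One sketch of the replacement asked about above, assuming each 'file' group carries exactly one known source: hide the 'unknown' placeholders, then broadcast each group's first remaining value with `groupby(...).transform('first')` (not the asker's code):

```python
import pandas as pd

# Rebuild the question's example frame
df_test = pd.DataFrame({
    'file': ['file_1', 'file_1', 'file_2', 'file_2', 'file_3', 'file_3'],
    'source': ['nasa', 'unknown', 'esa', 'unknown', 'jaxa', 'unknown'],
})

# Turn the placeholders into NaN, then fill every row of a group
# with that group's first non-missing source.
known = df_test['source'].mask(df_test['source'] == 'unknown')
df_test['source'] = known.groupby(df_test['file']).transform('first')

print(df_test['source'].tolist())
# -> ['nasa', 'nasa', 'esa', 'esa', 'jaxa', 'jaxa']
```

`mask` replaces matching entries with NaN, and the `'first'` aggregation skips missing values, so the one known source per file wins regardless of row order.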
76,505,016
3,667,693
running streamlit app on ec2 nginx subdomain
<p>I am facing difficulties running 2 different streamlit apps on 2 different subdomains on a single EC2 server using nginx and tmux.</p> <p><strong>Stage 1</strong>: I first tried running only 1 app successfully and my nginx config is as follows:</p> <pre><code>server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html; server_name _; location / { proxy_pass http://localhost:8501; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_read_timeout 86400; } } </code></pre> <p><strong>Stage 2</strong>: However, when I tried to run 2 different streamlit apps, it did not work. My nginx config file is as follows:</p> <pre><code>server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html; server_name _; location /app { proxy_pass http://localhost:8501; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_read_timeout 86400; } } location /upload { proxy_pass http://localhost:8502; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_read_timeout 86400; } } </code></pre> <p>When I dig into the browser console, the following 2 files are reported as not available:</p> <ol> <li>GET <a href="http://xx.xxx.xx.xxx/static/js/main.4e910df2.js" rel="nofollow noreferrer">http://xx.xxx.xx.xxx/static/js/main.4e910df2.js</a> net::ERR_ABORTED 404 (Not Found)</li> <li>GET <a href="http://xx.xxx.xx.xxx/static/css/main.f4a8738f.css" rel="nofollow noreferrer">http://xx.xxx.xx.xxx/static/css/main.f4a8738f.css</a> net::ERR_ABORTED 404 (Not Found)</li> </ol> <p>These 2 files are actually the site-packages installed for streamlit.</p> <p><strong>Stage 3</strong>: I tried to fix the above error by passing the root directory into the respective location, as well as adding a slash after the location URL. Updated config file as follows:</p> <pre><code>server { listen 80 default_server; listen [::]:80 default_server; root /var/www/html; index index.html index.htm index.nginx-debian.html; server_name _; location /app/ { root /home/ubuntu/.local/lib/python3.8/site-packages/streamlit; proxy_pass http://localhost:8501; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_read_timeout 86400; } location /upload/ { root /home/ubuntu/.local/lib/python3.8/site-packages/streamlit; proxy_pass http://localhost:8502; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection &quot;upgrade&quot;; proxy_read_timeout 86400; } } </code></pre> <p>Unfortunately, I still face an issue, though a slightly different one, as follows:</p> <ol> <li>GET <a href="http://xx.xxx.xx.xxx/app/_stcore/health" rel="nofollow noreferrer">http://xx.xxx.xx.xxx/app/_stcore/health</a> 404 (Not Found)</li> <li>GET <a href="http://xx.xxx.xx.xxx/app/_stcore/allowed-message-origins" rel="nofollow noreferrer">http://xx.xxx.xx.xxx/app/_stcore/allowed-message-origins</a> 404 (Not Found)</li> </ol>
<python><nginx><amazon-ec2><streamlit>
2023-06-19 08:44:47
1
405
John Jam
76,504,828
4,865,723
How to remove old (commented-out) msgid entries from po files?
<p>I'm quite new to the GNU gettext tools, so please feel free to correct me if I use the wrong terms.</p> <p>In my Python code I modified <code>_(&quot;Every 30 minutes&quot;)</code> to <code>_(&quot;Every {n} minutes&quot;).format(...)</code>. Updating the <code>pot</code> (po-template) via <code>xgettext</code> worked well. The old msgid totally disappeared from that file and the new msgid appeared.</p> <p>But then updating my <code>po</code> files based on the <code>pot</code> (using <code>msgmerge --update</code>) confuses me. The old msgid does not disappear but is just commented out.</p> <pre><code>#~ msgid &quot;Every 30 minutes&quot; </code></pre> <p>Can I influence that behaviour somehow and prevent <code>msgmerge</code> from creating them?</p>
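As a possible cleanup step (hedged: the tool and flag below are from GNU gettext's `msgattrib` utility, which exists for exactly this kind of attribute filtering), the obsolete `#~` entries that `msgmerge` keeps for reference can be stripped after merging:

```sh
# Remove the obsolete "#~" entries that msgmerge left behind
msgattrib --no-obsolete -o de.po de.po
```

The file name `de.po` is just a placeholder for whichever `po` file was merged.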
<python><gettext>
2023-06-19 08:18:36
1
12,450
buhtz
76,504,780
13,038,144
Managing square brackets in dictionary keys with dpath library
<p>Using the <code>dpath</code> library, I'm having trouble working with a dictionary with keys containing square brackets. From the docs I see that square brackets are considered elements of regular expressions. However, this is causing the issue in my case, because I just want them to be &quot;normal&quot; square brackets.</p> <p>I tried to escape the square brackets with a backslash <code>\[</code>, but the result is the same.</p> <h4>Code example</h4> <pre class="lang-py prettyprint-override"><code>import dpath d = {'Position': {'Position x [mm]': 3}} dpath.search(d, 'Position/Position x [mm]/*') </code></pre> <p>This outputs: <code>{}</code> instead of the expected <code>{'Position': {'Position x [mm]': 3}}</code></p> <p>Maybe there is already a solution for the problem, but I did not find it in the docs.</p>
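One hedged lead: `dpath` path segments are matched with fnmatch-style globs rather than full regular expressions, and in fnmatch a literal `[` can be expressed as the one-character class `[[]`. Assuming dpath's per-segment matching follows fnmatch rules, a pattern like `'Position/Position x [[]mm]'` would reach the bracketed key. The fnmatch part of that claim is easy to check in isolation:

```python
import fnmatch

# In fnmatch globs, "[...]" is a character class; the class "[[]" contains
# only the "[" character, so it matches a literal opening bracket.
pattern = "Position x [[]mm]"
print(fnmatch.fnmatchcase("Position x [mm]", pattern))  # True
```

If that holds for dpath too, the search would become `dpath.search(d, 'Position/Position x [[]mm]')` (untested here, since it depends on dpath's internals).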
<python><dictionary><dpath>
2023-06-19 08:12:16
1
458
gioarma
76,504,773
2,715,318
Microsoft Bot Framework - Python - Sometimes I get an "Unauthorized" error when using Microsoft Teams
<p>Written a Python app using aiohttp based on the MS examples, working overall, nothing special. The Python bot service is running in production on my own server behind a firewall on a K8S cluster... no issues so far.</p> <p>We also configured the &quot;Azure bot&quot; as described by MS without issues, MS Teams channel activated, app uploaded and published. Yeah, working! Cool!</p> <p>But, unpredictably, after a minute or an hour or a day I get the following error messages when sending requests through the Teams app. Important notice: using the WebChat through the Azure Web portal keeps working! Has anybody else hit such an issue?</p> <pre><code>│ Traceback (most recent call last): │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/bot_adapter.py&quot;, line 128, in run_pipeline │ │ return await self._middleware.receive_activity_with_status( │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/middleware_set.py&quot;, line 69, in receive_activity_with_status │ │ return await self.receive_activity_internal(context, callback) │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/middleware_set.py&quot;, line 79, in receive_activity_internal │ │ return await callback(context) │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/activity_handler.py&quot;, line 70, in on_turn │ │ await self.on_message_activity(turn_context) │ │ File &quot;/app/bots/teams_conversation_bot.py&quot;, line 99, in on_message_activity │ │ await self._send_unknown_request_activity(turn_context) │ │ File &quot;/app/bots/teams_conversation_bot.py&quot;, line 103, in _send_unknown_request_activity │ │ await turn_context.send_activity( │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/turn_context.py&quot;, line 174, in send_activity │ │ result = await self.send_activities([activity_or_text]) │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/turn_context.py&quot;, line 226, in
send_activities │ │ return await self._emit(self._on_send_activities, output, logic()) │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/turn_context.py&quot;, line 304, in _emit │ │ return await logic │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/turn_context.py&quot;, line 221, in logic │ │ responses = await self.adapter.send_activities(self, output) │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/bot_framework_adapter.py&quot;, line 735, in send_activities │ │ raise error │ │ File &quot;/usr/local/lib/python3.10/site-packages/botbuilder/core/bot_framework_adapter.py&quot;, line 720, in send_activities │ │ response = await client.conversations.reply_to_activity( │ │ File &quot;/usr/local/lib/python3.10/site-packages/botframework/connector/aio/operations_async/_conversations_operations_async.py&quot;, line 529, in reply_to_act │ │ raise models.ErrorResponseException(self._deserialize, response) │ │ botbuilder.schema._models_py3.ErrorResponseException: Operation returned an invalid status code 'Unauthorized' </code></pre> <p>I have to restart the container (to the Python bot) and to delete the MS teams chat to fix this, uncool :(</p> <p>Already tried to get some more information from logs and scraped the known locations for answers.</p>
<python><frameworks><bots><microsoft-teams><azure-bot-service>
2023-06-19 08:11:20
0
971
skroll
76,504,762
11,893,427
How to update a csv from another csv considering a combination of columns as the primary key?
<p>I am stuck on an assignment to build a Python project for the following task, and I would appreciate help completing it. I have 2 CSVs with the same headers, which are:</p> <p><code>[ID,Date,time,type,status,member,account,property,credit quantity, debit quantity, Net quantity, credit value, debit value, Net value, currency]</code></p> <p>One CSV is input.csv, which contains the input values for a run, and the other is system.csv, which acts like a database containing all the summed-up values after each run. system.csv must be updated based on input.csv.</p> <p>When updating <code>system.csv</code>, the following combination of fields is considered the primary key: <code>'Date', 'member', 'account', 'property'</code></p> <p>If the primary-key values are found in system.csv, the values under credit quantity, debit quantity, Net quantity, credit value, debit value, Net value must be added to the existing values as follows:</p> <p><strong>input.csv</strong></p> <pre><code>ID,Date,time,type,status,member,account,property,credit quantity, debit quantity, Net quantity, credit value, debit value, Net value, currency id01,2023.03.16,21:00:00,visa,active,xyz,acc001,cc,100, 0, 1000, 100, 0, 1000, usd id02,2023.03.16,22:00:00,visa,active,abc,acc002,cc,0,200, 2000, 0, 200, 2000, usd </code></pre> <p><strong>system.csv</strong></p> <pre><code>id101,2023.03.16,08:00:00,visa,active,xyz,acc001,cc,500, 0, 5000, 400, 0, 4000, usd id102,2023.03.16,09:00:00,visa,active,abc,acc002,cc,0,600, 6000, 0, 200, 2000, usd </code></pre> <p><strong>system.csv after run</strong></p> <pre><code>id101,2023.03.16,21:00:00,visa,active,xyz,acc001,cc,600, 0, 6000, 500, 0, 5000, usd id102,2023.03.16,22:00:00,visa,active,abc,acc002,cc,0,800, 8000, 0, 400, 4000, usd </code></pre> <p>Currently I've read the two CSVs into dataframes and am trying to do the processing, but given my lack of knowledge, please help me complete this. Thanks in advance!</p>
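The update described above can be sketched as a left join on the composite key followed by column-wise addition. This is a hedged sketch, not the asker's code: column names are taken from the question (assuming the CSV headers are read without the stray spaces), and it assumes at most one input row per key per run.

```python
import pandas as pd

KEY = ["Date", "member", "account", "property"]
NUMERIC = ["credit quantity", "debit quantity", "Net quantity",
           "credit value", "debit value", "Net value"]

def apply_run(system: pd.DataFrame, batch: pd.DataFrame) -> pd.DataFrame:
    # Left-join the run's numbers onto the system rows by the composite key,
    # then add them to the existing totals (rows with no match add 0).
    merged = system.merge(batch[KEY + NUMERIC], on=KEY,
                          how="left", suffixes=("", "_in"))
    for col in NUMERIC:
        merged[col] = merged[col] + merged[col + "_in"].fillna(0)
    return merged[system.columns]
```

Writing the result back is then just `apply_run(system_df, input_df).to_csv("system.csv", index=False)`.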
<python><pandas><dataframe><csv>
2023-06-19 08:10:16
1
429
Indi
76,504,653
1,923,174
How to make a PDF from an online ebook that is displayed page by page?
<p>I would like to save books like this one to PDF: <a href="https://kcenter.korean.go.kr/repository/ebook/culture/SB_step3/index.html" rel="nofollow noreferrer">https://kcenter.korean.go.kr/repository/ebook/culture/SB_step3/index.html</a>, which shows a book page by page.</p> <p>How can I do it?</p> <p>The only thing I have managed so far is to print each page into a PDF and then combine the separate PDF pages.</p> <p>Is there a way to do it automatically in Python or other scripts?</p>
<python><web-scraping><pdf-generation>
2023-06-19 07:55:30
1
407
Vladislav Gladkikh
76,504,567
2,881,414
execvpe on Windows not returning ExitValue
<p>I'm using <a href="https://docs.python.org/3/library/os.html?highlight=execv#os.execvpe" rel="nofollow noreferrer"><code>execvpe</code></a> to execute a program replacing the current process.</p> <p>This works well on all platforms (Linux, Windows, Mac), the only thing that seems different is that the <code>ExitValue</code> of the called program is not returned correctly on Windows.</p> <p>For context: The program is called <a href="https://github.com/venthur/dotenv-cli" rel="nofollow noreferrer"><code>dotenv</code></a> and it reads <code>.env</code> files adds them to the environment variables and executes the program using the new environment. Naturally the program's <code>ExitValue</code> must be returned by <code>dotenv</code> as well.</p> <p>I have a simple test that works for Linux and Mac but is failing on Windows:</p> <pre class="lang-py prettyprint-override"><code>def test_returncode(dotenvfile: Path) -&gt; None: proc = subprocess.run(['dotenv', 'false']) assert proc.returncode == 1 proc = subprocess.run(['dotenv', 'true']) assert proc.returncode == 0 </code></pre> <p>Is there a way to make <code>execvpe</code> return the exit value or do I have to check for <code>os.name == 'posix'</code> and use <a href="https://docs.python.org/3/library/subprocess.html?highlight=subprocess%20popen#subprocess.Popen" rel="nofollow noreferrer"><code>subprocess.Popen</code></a> for Windows?</p>
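One hedged sketch of the platform split hinted at in the question: on Windows the `exec*` family does not truly replace the process image — the original process exits on its own, so the replacement's exit status is not what callers of `dotenv` observe. Spawning a child and forwarding its return code keeps the observable behaviour the same everywhere:

```python
import os
import subprocess
import sys

def exec_or_spawn(argv, env):
    """Run argv with env; on POSIX replace this process, on Windows
    spawn a child and exit with its return code."""
    if os.name == "posix":
        os.execvpe(argv[0], argv, env)  # never returns on POSIX
    # Windows: exec* would detach rather than replace, losing the exit
    # status -- forward the child's return code explicitly instead.
    sys.exit(subprocess.call(argv, env=env))
```

So yes, checking `os.name == 'posix'` (or equivalently falling back to `subprocess` on Windows) appears to be the pragmatic route; the helper name here is illustrative, not part of `dotenv-cli`.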
<python><windows><exec>
2023-06-19 07:43:09
0
17,530
Bastian Venthur
76,504,443
12,404,524
How to spread two widgets in the same row to the extremes of the window in Tkinter?
<p>Here is my window:</p> <p><a href="https://i.sstatic.net/6fuxum.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6fuxum.png" alt="enter image description here" /></a></p> <p>I want to place the values from the labels as far as possible in the window. I want to place them in the left and right extremes of the row.</p> <p>Here is what I am doing that is not working:</p> <pre class="lang-py prettyprint-override"><code># output frame self.out_frame = tk.Frame(self) self.out_frame.grid(row=1, column=0, columnspan=2, sticky=&quot;ew&quot;, pady=24) # ... # take the hexadecimal value for example -&gt; ttk.Label(self.out_frame, text=&quot;Hexadecimal&quot;, font=sub_title_font).grid(row=0, column=0, sticky=&quot;w&quot;) self.hex_out.set(str(hexnum)) ttk.Label(self.out_frame, textvariable=self.hex_out, font=sub_title_font).grid(row=0, column=1, sticky=&quot;e&quot;) ttk.Separator(self.out_frame, orient=tk.HORIZONTAL).grid(row=1, column=0, columnspan=2, sticky=&quot;ew&quot;, pady=5) </code></pre> <p>To sum up, I am just sticking the value label and the actual value to West and East respectively for each row. However, I cannot get the parent frame (i.e., <code>out_frame</code>) to span the entire column space of the window.</p>
<python><tkinter>
2023-06-19 07:24:05
1
1,006
amkhrjee
76,504,339
19,130,803
Pandas: Update multiple rows using list
<p>I am trying to update a pandas dataframe using lists.</p> <pre><code>Dataframe with columns A, B, C A B C ------ 1 a F 2 b F 3 c F 4 d F 5 e F </code></pre> <p>I have 2 lists: one contains the elements of column <code>B</code> whose rows need updating, and the second contains the actual values to set in column <code>C</code>.</p> <p>Elements to match in column <code>B</code>: <code>names=['a', 'd', 'e']</code>. Values to set in column <code>C</code>: <code>values=['T', 'T', 'G']</code></p> <pre><code>Output after update A B C ------ 1 a T 2 b F 3 c F 4 d T 5 e G </code></pre> <p>How do I update the dataframe?</p>
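A minimal sketch of one way to do this (not the asker's code): zip the two lists into a mapping, then write mapped values into `C` for just the matching rows of `B`.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 4, 5],
                   "B": list("abcde"),
                   "C": ["F"] * 5})

names = ["a", "d", "e"]
values = ["T", "T", "G"]

# Build a name -> value mapping, select the matching rows of B, and write
# the mapped values into C for just those rows.
mapping = dict(zip(names, values))
mask = df["B"].isin(names)
df.loc[mask, "C"] = df.loc[mask, "B"].map(mapping)
print(df["C"].tolist())  # ['T', 'F', 'F', 'T', 'G']
```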
<python><pandas>
2023-06-19 07:07:50
1
962
winter
76,504,267
6,531,060
How to read all SVG file content as plain text?
<p>currently I'm trying to read all SVG files' content in order to ultimately merge it into one svg file after some modification. Some icon sample looks like this:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;svg width="26" height="26" viewBox="0 0 26 26" fill="none" xmlns="http://www.w3.org/2000/svg"&gt; &lt;mask id="path-1-outside-1_420_1264" maskUnits="userSpaceOnUse" x="4" y="5" width="16" height="15" fill="black"&gt; &lt;rect fill="white" x="4" y="5" width="16" height="15"/&gt; &lt;path fill-rule="evenodd" clip-rule="evenodd" d="M17.1331 11.529C18.2573 11.1965 19.0666 10.1065 18.9957 8.83631C18.92 7.47453 17.8082 6.38164 16.4694 6.3508C15.801 6.33538 15.1892 6.57729 14.7212 6.98688H14.5624C14.0509 6.38261 13.2936 6 12.4484 6C11.6031 6 10.8458 6.38261 10.3343 6.98688H10.2785C9.75379 6.52717 9.04659 6.27852 8.2836 6.36815C7.12163 6.505 6.179 7.44466 6.02394 8.62718C5.84809 9.96775 6.66024 11.1483 7.8241 11.5165L8.91989 17.8955C9.02862 18.5345 9.57321 19 10.2086 19H14.7515C15.3869 19 15.9305 18.5335 16.0402 17.8955L17.1331 11.529ZM13.0642 14.7386C13.0642 14.4624 12.8403 14.2386 12.5642 14.2386C12.288 14.2386 12.0642 14.4624 12.0642 14.7386V17.7153C12.0642 17.9915 12.288 18.2153 12.5642 18.2153C12.8403 18.2153 13.0642 17.9915 13.0642 17.7153V14.7386ZM14.9979 14.296C15.2661 14.3614 15.4306 14.632 15.3651 14.9002L14.6595 17.7917C14.594 18.06 14.3235 18.2244 14.0552 18.1589C13.7869 18.0934 13.6225 17.8229 13.688 17.5546L14.3936 14.6632C14.4591 14.3949 14.7296 14.2305 14.9979 14.296ZM10.819 14.663C10.7534 14.3948 10.4828 14.2304 10.2146 14.296C9.94631 14.3615 9.78199 14.6321 9.84754 14.9004L10.5541 17.7918C10.6197 18.0601 10.8903 18.2244 11.1585 18.1589C11.4268 18.0933 11.5911 17.8227 11.5255 17.5545L10.819 14.663Z"/&gt; &lt;/mask&gt; &lt;path d="M18.9957 8.83631L17.9972 8.89177L17.9972 8.89205L18.9957 
8.83631ZM17.1331 11.529L16.8495 10.5701L16.2528 10.7466L16.1475 11.3598L17.1331 11.529ZM16.4694 6.3508L16.4463 7.35054L16.4464 7.35054L16.4694 6.3508ZM14.7212 6.98688V7.98688H15.097L15.3798 7.73938L14.7212 6.98688ZM14.5624 6.98688L13.7991 7.63296L14.0987 7.98688H14.5624V6.98688ZM10.3343 6.98688V7.98688H10.798L11.0976 7.63296L10.3343 6.98688ZM10.2785 6.98688L9.61956 7.73905L9.90244 7.98688H10.2785V6.98688ZM8.2836 6.36815L8.16693 5.37498L8.16663 5.37502L8.2836 6.36815ZM6.02394 8.62718L7.01545 8.75724L7.01546 8.75719L6.02394 8.62718ZM7.8241 11.5165L8.80967 11.3472L8.70653 10.7468L8.12569 10.5631L7.8241 11.5165ZM8.91989 17.8955L9.90572 17.7278L9.90546 17.7262L8.91989 17.8955ZM16.0402 17.8955L17.0257 18.065L17.0257 18.0647L16.0402 17.8955ZM15.3651 14.9002L14.3936 14.6632L14.3936 14.6632L15.3651 14.9002ZM14.9979 14.296L14.7608 15.2675L14.7608 15.2675L14.9979 14.296ZM14.6595 17.7917L15.631 18.0288L15.631 18.0288L14.6595 17.7917ZM14.0552 18.1589L14.2923 17.1874L14.2923 17.1874L14.0552 18.1589ZM13.688 17.5546L12.7165 17.3175V17.3175L13.688 17.5546ZM14.3936 14.6632L15.3651 14.9002V14.9002L14.3936 14.6632ZM10.2146 14.296L10.4519 15.2674L10.4519 15.2674L10.2146 14.296ZM10.819 14.663L11.7904 14.4256L10.819 14.663ZM9.84754 14.9004L8.87613 15.1378L9.84754 14.9004ZM10.5541 17.7918L11.5255 17.5545H11.5255L10.5541 17.7918ZM11.1585 18.1589L10.9211 17.1874L10.9211 17.1874L11.1585 18.1589ZM11.5255 17.5545L10.5541 17.7918L11.5255 17.5545ZM17.9972 8.89205C18.0422 9.69711 17.5309 10.3685 16.8495 10.5701L17.4167 12.488C18.9836 12.0245 20.091 10.516 19.9941 8.78057L17.9972 8.89205ZM16.4464 7.35054C17.2491 7.36903 17.9497 8.03658 17.9972 8.89177L19.9941 8.78085C19.8904 6.91249 18.3673 5.39426 16.4924 5.35107L16.4464 7.35054ZM15.3798 7.73938C15.6685 7.48673 16.0397 7.34116 16.4463 7.35054L16.4925 5.35107C15.5622 5.32961 14.71 5.66784 14.0627 6.23437L15.3798 7.73938ZM14.5624 7.98688H14.7212V5.98688H14.5624V7.98688ZM12.4484 7C12.9837 7 13.4672 7.24084 13.7991 7.63296L15.3257 6.3408C14.6346 
5.52438 13.6035 5 12.4484 5V7ZM11.0976 7.63296C11.4295 7.24084 11.913 7 12.4484 7V5C11.2933 5 10.2621 5.52438 9.57104 6.3408L11.0976 7.63296ZM10.2785 7.98688H10.3343V5.98688H10.2785V7.98688ZM8.40027 7.36132C8.86885 7.30628 9.29726 7.4567 9.61956 7.73905L10.9375 6.23471C10.2103 5.59764 9.22432 5.25077 8.16693 5.37498L8.40027 7.36132ZM7.01546 8.75719C7.11231 8.01856 7.70338 7.4434 8.40057 7.36129L8.16663 5.37502C6.53987 5.56661 5.24569 6.87076 5.03243 8.49717L7.01546 8.75719ZM8.12569 10.5631C7.41799 10.3392 6.90347 9.61092 7.01545 8.75724L5.03244 8.49711C4.79271 10.3246 5.90249 11.9575 7.52251 12.4699L8.12569 10.5631ZM9.90546 17.7262L8.80967 11.3472L6.83854 11.6858L7.93433 18.0648L9.90546 17.7262ZM10.2086 18C10.073 18 9.93474 17.8983 9.90572 17.7278L7.93406 18.0633C8.12251 19.1707 9.07339 20 10.2086 20V18ZM14.7515 18H10.2086V20H14.7515V18ZM15.0546 17.7261C15.0249 17.8989 14.8853 18 14.7515 18V20C15.8884 20 16.8361 19.1682 17.0257 18.065L15.0546 17.7261ZM16.1475 11.3598L15.0546 17.7263L17.0257 18.0647L18.1187 11.6982L16.1475 11.3598ZM12.5642 15.2386C12.288 15.2386 12.0642 15.0147 12.0642 14.7386H14.0642C14.0642 13.9101 13.3926 13.2386 12.5642 13.2386V15.2386ZM13.0642 14.7386C13.0642 15.0147 12.8403 15.2386 12.5642 15.2386V13.2386C11.7357 13.2386 11.0642 13.9101 11.0642 14.7386H13.0642ZM13.0642 17.7153V14.7386H11.0642V17.7153H13.0642ZM12.5642 17.2153C12.8403 17.2153 13.0642 17.4392 13.0642 17.7153H11.0642C11.0642 18.5437 11.7357 19.2153 12.5642 19.2153V17.2153ZM12.0642 17.7153C12.0642 17.4392 12.288 17.2153 12.5642 17.2153V19.2153C13.3926 19.2153 14.0642 18.5437 14.0642 17.7153H12.0642ZM12.0642 14.7386V17.7153H14.0642V14.7386H12.0642ZM16.3366 15.1373C16.533 14.3325 16.0398 13.5209 15.2349 13.3245L14.7608 15.2675C14.4925 15.202 14.3281 14.9314 14.3936 14.6632L16.3366 15.1373ZM15.631 18.0288L16.3366 15.1373L14.3936 14.6632L13.688 17.5546L15.631 18.0288ZM13.8181 19.1304C14.6229 19.3268 15.4346 18.8336 15.631 18.0288L13.688 17.5546C13.7535 17.2863 14.024 17.1219 14.2923 
17.1874L13.8181 19.1304ZM12.7165 17.3175C12.5201 18.1223 13.0133 18.934 13.8181 19.1304L14.2923 17.1874C14.5605 17.2529 14.7249 17.5234 14.6595 17.7917L12.7165 17.3175ZM13.4221 14.4261L12.7165 17.3175L14.6595 17.7917L15.3651 14.9002L13.4221 14.4261ZM15.235 13.3245C14.4301 13.1281 13.6185 13.6213 13.4221 14.4261L15.3651 14.9002C15.2996 15.1685 15.0291 15.3329 14.7608 15.2675L15.235 13.3245ZM10.4519 15.2674C10.1837 15.333 9.91309 15.1686 9.84754 14.9004L11.7904 14.4256C11.5937 13.6209 10.7819 13.1279 9.97718 13.3246L10.4519 15.2674ZM10.819 14.663C10.8845 14.9313 10.7202 15.2019 10.4519 15.2674L9.97719 13.3246C9.17244 13.5212 8.67948 14.333 8.87613 15.1378L10.819 14.663ZM11.5255 17.5545L10.819 14.663L8.87613 15.1378L9.58269 18.0292L11.5255 17.5545ZM10.9211 17.1874C11.1894 17.1219 11.46 17.2862 11.5255 17.5545L9.58269 18.0292C9.77934 18.834 10.5911 19.3269 11.3959 19.1303L10.9211 17.1874ZM10.5541 17.7918C10.4886 17.5236 10.6529 17.253 10.9211 17.1874L11.3959 19.1303C12.2006 18.9336 12.6936 18.1218 12.4969 17.3171L10.5541 17.7918ZM9.84754 14.9004L10.5541 17.7918L12.4969 17.3171L11.7904 14.4256L9.84754 14.9004Z" fill="#1B1A1A" mask="url(#path-1-outside-1_420_1264)"/&gt; &lt;/svg&gt;</code></pre> </div> </div> </p> <p>I am using the following code with the use of minidom to parse the SVG file content and print them into plain text. However the current function doesn't work as intended. 
Is there a problem with how I defined the <code>get_text_content()</code>?</p> <pre><code>import csv import os, stat path = &quot;./data/Icons/&quot; def get_text_content(node): text_content = &quot;&quot; if node.nodeType == node.TEXT_NODE: text_content += node.data elif node.nodeType == node.ELEMENT_NODE: for child_node in node.childNodes: text_content += get_text_content(child_node) return text_content for file in os.listdir(path): print(file) svg_file = os.path.join(path + file) os.chmod(svg_file, stat.S_IRWXU|stat.S_IRWXG|stat.S_IRWXO) doc = minidom.parse(svg_file) plain_text = get_text_content(doc.documentElement) print(plain_text) doc.unlink() </code></pre>
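A hedged observation on the approach above: `get_text_content()` collects only text nodes, and icons like the sample contain no text nodes at all — just elements and attributes — so it returns an empty string by design, not by bug. If the goal is the raw markup of each SVG (to modify and merge later), an SVG file is plain XML text, so reading the file directly is enough:

```python
from pathlib import Path

# Read each icon's full markup as plain text; no DOM walking is needed
# because an SVG file is just XML text on disk.
for svg_file in sorted(Path("./data/Icons/").glob("*.svg")):
    print(svg_file.name)
    print(svg_file.read_text(encoding="utf-8"))
```

If element-level modification is still wanted before merging, `minidom.parse(...)` followed by `doc.documentElement.toxml()` would serialize the (possibly edited) tree back to a string.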
<python><svg><jupyter-notebook><minidom>
2023-06-19 06:55:23
0
419
Việt Tùng Nguyễn
76,504,099
5,699,679
Paper.js not drawing to canvas
<p>I have designed a canvas that is created with functionality from the <code>pywebview</code> package. On this canvas, I attempt to use <code>paper.js</code> to draw a <code>Path</code>, then output the resulting rendered HTML to an image popup. The <code>Path</code> does not render on the page, although the same code seems to work on an <a href="http://sketch.paperjs.org/#V/0.12.17/S/dZAxawMxDIX/ivESHxxHktHQKVvpEMiQIcmgnkXPvcQ2OvWOEvLfI4VSmrT1ILC+56cnn22CE1pvNz1y29natjnofQQyAwOxeTIJJ1OgIDXrHBO7xXxeGynVnvZJhZjCn7LFvawAd/c6abizYiNnwLcTJh682d0m1+p7qL8xU+5xlY+ZvJkRhtkjgqIgf6Rf6FkC/ce2MXDnJai2L7ewGrSBENzjSkvdXEr1JVNSKL9jyw20HEd8gU/pydtVF4/BqdMP7RhxagLB5Cr56ldC6IsaD9bvDpcr" rel="nofollow noreferrer">interactive snippet tester</a> on the Paper.js website. This is my code so far:</p> <pre><code>import io from PIL import Image import base64 import webview def extract_image(width, height, path=[(100, 100)]): color = 'red' def callback(window): window.html = f&quot;&quot;&quot; &lt;script type=&quot;text/javascript&quot; src=&quot;js/paper.js&quot;&gt;&lt;/script&gt; &lt;canvas id=&quot;myCanvas&quot; width=&quot;{width}&quot; height=&quot;{height}&quot;&gt;&lt;/canvas&gt; &quot;&quot;&quot; js_code = f&quot;&quot;&quot; var canvas = document.getElementById(&quot;myCanvas&quot;) paper.setup(canvas) var start = new paper.Point({path[0][0]}, {path[0][1]}) var end = new paper.Point({path[0][0] + 1}, {path[0][1]}) var path = new paper.Path({{ segments: [start, end], strokeColor: '{color}', strokeCap: 'round', strokeJoin: 'round', strokeWidth: 10 }}) path.add(new paper.Point(200, 200)) paper.project.activeLayer.addChild(path) paper.view.draw() &quot;&quot;&quot; print(js_code) print(window.evaluate_js(js_code)) data_url = window.evaluate_js(&quot;canvas.toDataURL()&quot;) print(f&quot;data url is: {data_url}&quot;) # Extract the base64 encoded image data from the data URL base64_data = data_url.split(&quot;,&quot;)[1] # Decode the base64 data and create a PIL image from it 
image_data = io.BytesIO(base64.b64decode(base64_data)) image = Image.open(image_data) # Display the image image.show() window.destroy() return callback def extract_canvas_to_image(): return webview.create_window('Hello world', frameless=True, hidden=True) if __name__ == '__main__': window = extract_canvas_to_image() webview.start(extract_image(400, 400), window) </code></pre> <p>How can I get my Path to render properly?</p>
<javascript><python><paperjs>
2023-06-19 06:27:04
1
2,651
Avi
76,504,055
13,994,829
python permission access denied
<p>Issue with running Python scripts on company laptop</p> <p>My device:</p> <ul> <li>Python3.8</li> <li>Windows10 (21H2)</li> </ul> <p>I have been facing an issue with running Python scripts on my company laptop recently.</p> <p>Previously, I was able to use Python without any problems, but today I encountered an error where I am unable to execute <code>.py</code> scripts.</p> <p>To troubleshoot the issue, I have checked the python.exe file and verified that the environment variables path are set correctly.</p> <p><a href="https://i.sstatic.net/mPHA5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mPHA5.png" alt="enter image description here" /></a></p> <p>However, when I tried running the command &quot;<code>python -v</code>&quot; in the terminal, I received an error message stating &quot;permission denied.&quot;</p> <p><a href="https://i.sstatic.net/xb5j6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xb5j6.png" alt="enter image description here" /></a></p> <p>I have consulted with my colleagues in the company, and they confirmed that they can still use Python as usual without any issues.</p> <p>I would appreciate any assistance in resolving this problem. 
Please let me know how I can troubleshoot and fix this issue.</p> <hr /> <h2>Upgrade</h2> <p>In addition, I just found an &quot;<strong>EventID: 4 - Error: php-8.1.10</strong>&quot; event in the Windows Event Viewer under System Management Events.</p> <p>The event occurred during a period when I was not using the computer.</p> <p>As a result, I went to the terminal to check the PHP version and discovered an error:</p> <p><code>PHP Warning: 'C:\\WINDOWS\\SYSTEM32\\VCRUNTIME140.dll' 14.15 is not compatible with this PHP build linked with 14.29 in Unknown on line 0</code></p> <p>Suddenly, I am unable to use both Python and PHP, and I'm unsure if there is a common reason behind this issue.</p> <p>Interestingly, just before the occurrence of the PHP Warning (with a time difference of a few seconds), there was another error in the Event Viewer stating &quot;Event: 12 - Keeper - Invalid access code.&quot;</p>
<python><windows>
2023-06-19 06:21:34
1
545
Xiang
76,503,726
839,733
Assignment expression in a while loop
<pre><code>m = re.match(pattern, text) while m := p.search(text, m.end()): # do with match </code></pre> <p>Does the <code>m.end()</code> refer to the previous match <code>m</code> so that the search starts from the end of the previous match? I read <a href="https://peps.python.org/pep-0572/#capturing-condition-values" rel="nofollow noreferrer">PEP-572</a>, and there's a section that talks specifically about regex matches, but not with a <code>while</code> loop. I understand that <code>m</code> is available in the body of the loop, that's not the question here.</p>
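A quick experiment supports the reading that `m.end()` refers to the previous match: the arguments to `p.search(...)` are evaluated before the `:=` assignment rebinds `m`, so each search resumes at the end of the last match.

```python
import re

text = "ab ab ab"
p = re.compile("ab")

m = p.match(text)               # first match, at position 0
positions = [m.start()]
while m := p.search(text, m.end()):
    # The call's argument m.end() is evaluated *before* the := assignment
    # takes effect, so it is the end of the previous match.
    positions.append(m.start())

print(positions)  # [0, 3, 6]
```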
<python><regex>
2023-06-19 05:02:36
2
25,239
Abhijit Sarkar
76,503,658
14,226,645
warnings.filterwarnings() doesn't work to suppress ConvergenceWarning of SGDClassifier
<p>I was testing the Scikit-learn package's <code>SGDClassifier</code>'s accuracy according to the change of the <code>max_iter</code> property. I also knew that the testing <code>max_iter</code> values are small so there would be a bunch of <code>ConvergenceWarning</code>, so I added a code to ignore those warnings. (Testing on Google colab interface, using a local runtime(Jupyter notebook, WSL2 on Windows 11))</p> <pre class="lang-py prettyprint-override"><code>import warnings warnings.filterwarnings(action='ignore') # &lt;---- from sklearn.model_selection import cross_validate from sklearn.linear_model import SGDClassifier for _seq in range(5, 20 + 1, 5): sc = SGDClassifier(loss = &quot;log_loss&quot;, max_iter = _seq, random_state = 42) scores = cross_validate(sc, train_scaled, train_target, n_jobs = -1) print(f&quot;&quot;&quot;max_iter: {_seq}, scores = {np.mean(scores[&quot;test_score&quot;])}&quot;&quot;&quot;) </code></pre> <p>Unfortunately, the code didn't work and the unnecessary warnings filled all over the console, and bothered me looking at the change in the model performances.</p> <pre><code>/home/knightchaser/.local/lib/python3.10/site-packages/sklearn/linear_model/_stochastic_gradient.py:702: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit. warnings.warn( /home/knightchaser/.local/lib/python3.10/site-packages/sklearn/linear_model/_stochastic_gradient.py:702: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit. warnings.warn( /home/knightchaser/.local/lib/python3.10/site-packages/sklearn/linear_model/_stochastic_gradient.py:702: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit. 
warnings.warn( /home/knightchaser/.local/lib/python3.10/site-packages/sklearn/linear_model/_stochastic_gradient.py:702: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit. warnings.warn( /home/knightchaser/.local/lib/python3.10/site-packages/sklearn/linear_model/_stochastic_gradient.py:702: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit. warnings.warn( max_iter: 5, scores = 0.8196000000000001 ...(abbreviated)... </code></pre> <p>Is there a way to suppress those annoying and unnecessary warning messages? I really appreciate any help you can provide.</p>
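A commonly reported cause for this (hedged — plausible here, not verified against this exact setup): with `n_jobs=-1`, `cross_validate` runs the fits in worker processes, and `warnings.filterwarnings()` in the parent does not carry over to them. Freshly started Python processes do honour the `PYTHONWARNINGS` environment variable, so setting it before the parallel call is one way to silence the workers too:

```python
import os

# Workers spawned for n_jobs=-1 start fresh interpreters that read
# PYTHONWARNINGS at startup, so set it before the parallel call.
os.environ["PYTHONWARNINGS"] = "ignore"  # or a narrower filter string
```

Setting `n_jobs=None` (single process) together with the existing `warnings.filterwarnings(action='ignore')` would be the other, slower workaround.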
<python><scikit-learn><google-colaboratory><sgdclassifier>
2023-06-19 04:42:49
1
356
KnightChaser
76,503,643
9,477,338
How to change Traceback back to normal?
<p>My most recent env prints traceback like this</p> <pre><code>╭───────────────────── Traceback (most recent call last) ──────────────────────╮ </code></pre> <p>which is beyond useless.</p> <p>I've already looked at this <a href="https://stackoverflow.com/questions/76375307/how-to-make-typer-traceback-look-normal">How to make typer traceback look normal</a> but it doesn't help.</p> <p>My hunch is it may be about Huggingface but maybe something else like <code>datasets</code> or <code>evaluate</code> but I can't find anything useful so far.</p> <p>How to make stacktrace print everything it need to again?</p>
<python><huggingface-transformers>
2023-06-19 04:40:02
1
2,439
Natthaphon Hongcharoen
76,503,598
6,017,833
Poetry import local package to script in subfolder
<p>I am unable to import my local package into a Python script (within a subfolder) in my Poetry project.</p> <p><code>run.py</code></p> <pre><code>from abc import module1 </code></pre> <p>When running <code>python run.py</code> from within the root folder using this structure, everything works fine:</p> <pre><code>abc/ pyproject.toml abc/ module1.py __init__.py run.py </code></pre> <p>However, I get <code>No module named 'module1'</code> when running <code>python scripts/run.py</code> from within the root folder using this structure:</p> <pre><code>abc/ pyproject.toml abc/ module1.py __init__.py scripts/ run.py </code></pre>
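The difference between the two layouts is that `python scripts/run.py` puts `scripts/` (not the project root) at the front of `sys.path`, so the local `abc` package is no longer findable. The cleaner fix is installing the project into the environment (`poetry install`, then `poetry run python scripts/run.py`); a path-based workaround can be sketched like this (the helper name is illustrative):

```python
import sys
from pathlib import Path

def add_project_root(script_path: str) -> None:
    """Prepend the project root (the parent of scripts/) to sys.path."""
    root = Path(script_path).resolve().parent.parent  # scripts/ -> project root
    if str(root) not in sys.path:
        sys.path.insert(0, str(root))

# in scripts/run.py, before the import:
# add_project_root(__file__)
# from abc import module1
```

Separately, the package name `abc` shadows the standard library's `abc` module, which is worth renaming regardless of the import mechanics.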
<python><package><python-poetry>
2023-06-19 04:25:37
0
1,945
Harry Stuart
76,503,502
2,313,307
Pandas: convert string of dicts in column to actual dictionary to expand contents into columns
<p>I have a data frame that looks like this</p> <pre><code> account_id result 0 588930 {&quot;symbol&quot;: &quot;MSFT&quot;, &quot;balance&quot;: 0.00... </code></pre> <p>and when I print a single cell value of the result column I get the following, which seems to be a string of dicts:</p> <pre><code>'{&quot;symbol&quot;: &quot;MSFT&quot;, &quot;balance&quot;: 0.00, &quot;transactionId&quot;: 10496491},{&quot;symbol&quot;: &quot;AAPL&quot;, &quot;balance&quot;: 300.12, &quot;transactionId&quot;: 10509620},{&quot;symbol&quot;: &quot;TSLA&quot;, &quot;balance&quot;: 40.4, &quot;transactionId&quot;: 10632589}' </code></pre> <p>Other users may have different symbol assets.</p> <p>I want to access the content in <code>result</code> as dictionaries in order to expand the content into multiple columns where the column names are the <code>symbols</code> (ex: MSFT, TSLA...) and the values are the <code>balance</code> numbers.</p> <p>I haven't been able to transform the string into dictionaries to be able to access the contents.</p> <p>Thanks!</p> <p><strong>UPDATE</strong></p> <p>I tried the following</p> <pre><code>def string_to_dict(dict_string): # Convert to proper json format dict_string = dict_string.replace(&quot;'&quot;, '&quot;').replace('u&quot;', '&quot;') return json.loads(dict_string) df.result = f.result.apply(string_to_dict) </code></pre> <p>But I get the following error</p> <pre><code>JSONDecodeError: Extra data: line 1 column 93 (char 92) </code></pre> <p>which I believe means that json.loads cannot decode multiple dictionaries at the same time <a href="https://stackoverflow.com/questions/21058935/python-json-loads-shows-valueerror-extra-data">As described here</a></p>
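The `Extra data` error is consistent with the diagnosis in the update: the cell is several JSON objects separated by commas, which is not valid JSON on its own. Wrapping the string in `[` and `]` turns it into a JSON array that parses in one call — a minimal sketch on a shortened version of the quoted cell:

```python
import json

# Several comma-separated JSON objects are not a valid JSON document;
# bracketing them makes a parseable JSON array.
s = ('{"symbol": "MSFT", "balance": 0.00, "transactionId": 10496491},'
     '{"symbol": "AAPL", "balance": 300.12, "transactionId": 10509620}')

records = json.loads("[" + s + "]")
balances = {d["symbol"]: d["balance"] for d in records}
print(balances)  # {'MSFT': 0.0, 'AAPL': 300.12}
```

From there, building the per-symbol columns is a matter of applying this per row and expanding the resulting dicts (e.g. with `pd.DataFrame(...)` over the per-row `balances` dicts).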
<python><json><pandas><dictionary>
2023-06-19 03:44:10
1
1,419
finstats
76,503,388
15,542,245
Differentiate and replace single for multiple string list elements
<p>I require each list element to be a single string. The second list element contains multiple strings:</p> <pre><code>suburbs = ['Wamuomata'],['Wan', 'omata'],['Eastboume'] </code></pre> <p>Required to be:</p> <pre><code>suburbs = ['Wamuomata'],['Wan omata'],['Eastboume'] </code></pre> <p>Code:</p> <pre><code>suburbs = ['Wamuomata'],['Wan', 'omata'],['Eastboume'] for x in range(len(suburbs)): elementNumber = len(suburbs[x]) if elementNumber &gt; 1: for y in range(elementNumber): print(suburbs[x][y]) print(suburbs) </code></pre> <p>Output:</p> <pre><code>Wan omata (['Wamuomata'], ['Wan', 'omata'], ['Eastboume']) </code></pre> <p>The output from <code>print(suburbs[x][y])</code> references the strings of the second element correctly. However, I don't know how to perform the replacement.</p>
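One detail the output hints at: the comma-separated literal builds a *tuple* of lists, so the outer container cannot be reassigned element by element. Joining each inner list sidesteps the index bookkeeping entirely — a minimal sketch:

```python
suburbs = ['Wamuomata'], ['Wan', 'omata'], ['Eastboume']

# " ".join collapses each inner list to one string, whether it holds one
# element or several, so no per-index replacement is needed.
merged = tuple([" ".join(parts)] for parts in suburbs)
print(merged)  # (['Wamuomata'], ['Wan omata'], ['Eastboume'])
```

If `suburbs` were a list of lists rather than a tuple, the same join could be written back in place with `suburbs[x] = [" ".join(suburbs[x])]`.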
<python><list>
2023-06-19 03:08:04
2
903
Dave
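A minimal sketch for the suburbs question above: note that `suburbs = [...], [...], [...]` is a tuple of lists, and tuples are immutable at the top level, so the simplest fix is to build a new tuple, joining each inner list's strings with `' '.join`.

```python
suburbs = ['Wamuomata'], ['Wan', 'omata'], ['Eastboume']  # a tuple of three lists

# ' '.join merges the strings of each inner list into one string;
# the outer tuple is immutable, so rebuild it rather than assign in place.
suburbs = tuple([' '.join(inner)] for inner in suburbs)
print(suburbs)  # (['Wamuomata'], ['Wan omata'], ['Eastboume'])
```

Single-element lists pass through unchanged because joining a one-item list returns that item.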
76,503,315
20,240,835
Optimizing a slow-running Python dict compare function
<p>I have a Python function that is running very slowly, and I am looking for suggestions on how to optimize it. The function compares two Genotype objects and returns a dictionary of sample genotypes. The function's runtime may be affected by the size of the Genotype objects and the length of the sample_list. Here is the current implementation of the function:</p> <pre><code>def compare_gt(gt1: Genotype, gt2: Genotype, sample_list: list = None): # if gt1 is None if not gt1 is None and not gt1.isValid(): gt1=None if not gt2 is None and not gt2.isValid(): gt2=None if gt1 is None and gt2 is None: return None if gt1 is None: if sample_list is None: sample_list = gt2.get_sample_list() return {sample: &quot;6&quot; + gt2.get_sample_genotypes(sample_list)[sample] for sample in sample_list} if gt2 is None: if sample_list is None: sample_list = gt1.get_sample_list() return {sample: gt1.get_sample_genotypes(sample_list)[sample] + &quot;6&quot; for sample in sample_list} # compare two genotypes if sample_list is None: sample_list = gt1.get_sample_list() return {sample: gt1.get_sample_genotypes(sample_list)[sample] + gt2.get_sample_genotypes(sample_list)[sample] for sample in sample_list} </code></pre> <p>I have already tried a few optimization techniques, such as avoiding duplicate calculations of sample_list and storing the results of gt1 and gt2 in variables to avoid repeated method calls. However, the function is still running very slowly.</p> <p>I would appreciate any suggestions on how to optimize this function further. Thank you in advance for your help.</p> <p>=======update========</p> <p>As @J_H suggested, I would like to add some pseudocode to illustrate how the program works:</p> <pre><code>class gt_reader(): def __init__(self, ...): # do something def update(self): # read next genotype line def get_sample_genotypes(self, sample_list) -&gt; dict: # do something return sample_gt # other gt1 = gt_reader(...) gt2 = gt_reader(...) 
while gt1.update(): # some code while gt2.update(): # some code compare_gt(gt1, gt2) # some code if x: break </code></pre> <p>In addition to the suggestions provided by the participants, I also implemented some additional optimizations to the program. For example, I ensured that the output of <code>get_sample_genotypes()</code> is always sorted by its query, so I don't have to keep using indexing to retrieve the return values when the samples of <code>gt1</code> and <code>gt2</code> are the same, which improves performance across a large number of comparison operations.</p>
<python><performance><dictionary>
2023-06-19 02:41:19
0
689
zhang
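A sketch of the main hoisting fix for the question above: each dict comprehension in the original re-invokes `get_sample_genotypes(sample_list)` once per sample, so the work is quadratic in the sample count. Calling it once per Genotype and indexing the cached dict makes it linear. The `Genotype` stub below is hypothetical, only there to make the sketch runnable.

```python
class Genotype:
    """Hypothetical stand-in for the question's Genotype class."""
    def __init__(self, genotypes):
        self._g = genotypes
    def isValid(self):
        return True
    def get_sample_list(self):
        return list(self._g)
    def get_sample_genotypes(self, sample_list):
        return {s: self._g[s] for s in sample_list}

def compare_gt(gt1, gt2, sample_list=None):
    if gt1 is not None and not gt1.isValid():
        gt1 = None
    if gt2 is not None and not gt2.isValid():
        gt2 = None
    if gt1 is None and gt2 is None:
        return None
    if sample_list is None:
        sample_list = (gt1 or gt2).get_sample_list()
    if gt1 is None:
        g2 = gt2.get_sample_genotypes(sample_list)  # one call, not one per sample
        return {s: "6" + g2[s] for s in sample_list}
    if gt2 is None:
        g1 = gt1.get_sample_genotypes(sample_list)
        return {s: g1[s] + "6" for s in sample_list}
    g1 = gt1.get_sample_genotypes(sample_list)
    g2 = gt2.get_sample_genotypes(sample_list)
    return {s: g1[s] + g2[s] for s in sample_list}

print(compare_gt(Genotype({"s1": "0"}), Genotype({"s1": "1"})))  # {'s1': '01'}
```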
76,503,200
1,203,797
row_number resets based on two columns in Python
<p>My goal is to generate the following row_number, called <code>transaction_in_row</code></p> <p><a href="https://i.sstatic.net/5yNWH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5yNWH.png" alt="enter image description here" /></a></p> <p>The row_number should reset based on (<code>partition_by</code>) the customer and has_transaction columns. My issue is in the yellow column, where the row_number function in SQL will return 3 instead of 1.</p> <p>This is my current SQL code:</p> <pre><code>row_number() over(partition by customer, has_transaction order by month asc) as transaction_in_row </code></pre> <p>Because I'm stuck in SQL, I'm trying to find a way to do this in a Python DataFrame instead. My thinking is to loop manually per customer and per month, but this will be painfully slow as I'm handling ~30 million rows.</p> <p>Can anyone suggest a more efficient way to do this?</p>
<python><pandas><dataframe>
2023-06-19 01:56:11
1
10,958
Blaze Tama
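A vectorized sketch for the row-number question above, using the gaps-and-islands idea: a new streak starts whenever `customer` or `has_transaction` changes from the previous row, and a cumulative sum over those change points labels each streak; `cumcount` then numbers the rows within a streak. The tiny frame below is hypothetical, standing in for the 30-million-row table.

```python
import pandas as pd

df = pd.DataFrame({
    "customer": ["A"] * 5,
    "month": [1, 2, 3, 4, 5],
    "has_transaction": [1, 1, 0, 1, 1],
})

df = df.sort_values(["customer", "month"])
# A streak boundary is any row where customer or has_transaction differs
# from the row above; cumsum turns boundaries into streak labels.
streak = (
    (df["customer"] != df["customer"].shift())
    | (df["has_transaction"] != df["has_transaction"].shift())
).cumsum()
df["transaction_in_row"] = df.groupby(streak).cumcount() + 1
print(df)
```

After the gap at month 3, the counter restarts at 1 rather than continuing to 3, which is exactly where plain `row_number() over (partition by ...)` differs.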
76,503,151
9,744,061
Create numpy array start from 0 to 1 with increment 0.1
<p>I'm studying Python. I want to create a numpy array in Python, going from 0 to 1 with an increment of 0.1. This is the code:</p> <pre><code>import numpy as np t=np.arange(0, 1, 0.1) print(t) </code></pre> <p>The result is</p> <pre><code>[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9] </code></pre> <p>But &quot;1&quot; is not included in the array. How do I include 1 in the array? Do we manually add 1 to the array, or is there another way?</p>
<python><arrays><numpy>
2023-06-19 01:32:58
2
305
Ongky Denny Wijaya
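For the arange question above: `np.arange` excludes its stop value by design, so the usual fix is `np.linspace` with an explicit point count (11 points from 0 to 1 inclusive gives steps of 0.1), or an `arange` stop nudged just past 1.

```python
import numpy as np

# linspace includes both endpoints: 11 evenly spaced points from 0 to 1
t = np.linspace(0, 1, 11)
print(t)  # [0. , 0.1, ..., 0.9, 1. ]

# arange alternative: stop slightly past 1 so 1.0 makes it into the array
t2 = np.arange(0, 1.05, 0.1)
```

`linspace` is generally preferred here because `arange` with a float step is subject to floating-point rounding at the endpoint.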
76,503,055
1,281,485
Using nested helper functions while building a class
<p>I want to build a class which is merely a wrapper for some data. This is the basic idea:</p> <pre><code>class Fruits: fruits = [ 'apple', 'orange', 'cherry' ] </code></pre> <p>My fruits aren’t strings, though, but classes defined earlier:</p> <pre><code>class Apple: pass class Orange: pass class Cherry: pass class Fruits: fruits = [ Apple(), Orange(), Cherry() ] </code></pre> <p>My fruit classes are a bit more complex, though:</p> <pre><code>from dataclasses import dataclass @dataclass class Apple: color: str size: float price: float </code></pre> <p>Because my fruit-classes are quite complex, I need some helper functions to avoid massive code-doubling while calling the constructors:</p> <pre><code>class Fruits: @staticmethod def green_apple(size, price_factor=2.5): return Apple(color='green', size=size, price=size * price_factor) fruits = [ green_apple(3), green_apple(3, price_factor=1.2) ] </code></pre> <p>This far this works with static methods.</p> <p>Now I would like to have one more layer of abstraction:</p> <pre><code>class Fruits: @staticmethod def green_apple(size, price_factor=2.5): return Apple(color='green', size=size, price=size * price_factor) @staticmethod def cheap_green_apple(size): return green_apple(size, price_factor=1.2) # alternative spelling: # return Fruits.green_apple(size, price_factor=1.2) fruits = [ green_apple(3), cheap_green_apple(3) ] </code></pre> <p>I did not find any way to achieve this yet. I understand why this is a problem: The second static method cannot call the first one before the class exists, and the class cannot exist before the <code>fruits</code> field hasn’t been built. There is no logical reason why one static method cannot call another one before the class exists, though. 
The methods are just scoped inside the class; they don’t use anything of the class, hence the building of the field <code>fruits</code> still works in the example above.</p> <p>The methods shall be wrapped in the wrapper class <code>Fruits</code>, as users of this class are supposed to also use them. Putting them before the class (and repeating them within) would be an ugly solution because it would clutter the outer scope, therefore I would like to avoid that.</p> <p>So my question is not why this happens. I am asking for clever solutions to achieve what I want: several fruit-classes (<code>Apple</code>, etc.) and a single wrapper class (<code>Fruits</code>) for providing all fruits in my scenario without cluttering the outer scope, while using a nested set of helper functions within the wrapper class.</p> <p>Is there an option to achieve this? I tried several variants with static and class methods, but nothing worked yet.</p>
<python><static-methods><class-method><nested-function>
2023-06-19 00:50:15
1
60,124
Alfe
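One workaround among several for the question above: keep both helpers as static methods, but populate `fruits` immediately after the class statement. Once `Fruits` exists, the helpers can refer to each other through the class name, the attribute still lives on `Fruits`, and the only thing in the outer scope is a single assignment. A sketch, reusing the question's `Apple` dataclass:

```python
from dataclasses import dataclass

@dataclass
class Apple:
    color: str
    size: float
    price: float

class Fruits:
    @staticmethod
    def green_apple(size, price_factor=2.5):
        return Apple(color='green', size=size, price=size * price_factor)

    @staticmethod
    def cheap_green_apple(size):
        # Safe here: this body only runs after Fruits exists
        return Fruits.green_apple(size, price_factor=1.2)

# Populate after class creation, so the cross-referencing helpers resolve
Fruits.fruits = [Fruits.green_apple(3), Fruits.cheap_green_apple(3)]
```

The reason the in-body version fails is that a class body is not a closure: a function defined inside it cannot see other class-level names at call time, so any call chain between helpers has to wait until the class object exists.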
76,502,806
191,577
Python Google Speech to Text timeout "operation was cancelled"
<p>I am trying to use Google Speech-to-Text to transcribe audio from a microphone in real time. I also want to set a timeout for how long Google will wait before it gives up. I have tried to use the following code in Python:</p> <pre><code>client = speech.SpeechClient() activityTimeout = speech.StreamingRecognitionConfig.VoiceActivityTimeout() activityTimeout.speech_start_timeout = Duration(seconds=60) config = speech.RecognitionConfig( encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16, sample_rate_hertz=SAMPLE_RATE, language_code=&quot;en-US&quot;, max_alternatives=1, model=&quot;command_and_search&quot;, use_enhanced=True, ) streaming_config = speech.StreamingRecognitionConfig( config=config, interim_results=False, enable_voice_activity_events=True, voice_activity_timeout=activityTimeout ) </code></pre> <p>However, every time I make the call to Google with the audio content, all I ever get back is:</p> <pre><code>google.api_core.exceptions.Cancelled: 499 The operation was cancelled. </code></pre> <p>And the stack trace goes back to this file:</p> <pre><code>google\api_core\grpc_helpers.py&quot;, line 115, in __next__ raise exceptions.from_grpc_error(exc) from exc </code></pre> <p>Is there something I am doing wrong here to make a call with a timeout?</p> <p>If I do not use a timeout, everything works perfectly except for the fact that the program sits there waiting for something to send to Google for transcription. I want the program to time out after a set time period such as 60 seconds.</p>
<python><google-cloud-speech>
2023-06-18 22:50:40
1
2,275
Avanst
76,502,793
2,049,273
How to always import a package when running Python script (non-interactive)?
<p>Basically I want to replicate the behavior of <code>PYTHONSTARTUP</code> but when running a script i.e. <code>python app.py</code>. The answers I've found for this involve creating a <code>usercustomize.py</code> file and putting it in the path specified by <code>python -m site</code>.</p> <p>This works to execute the script before my other script but doesn't persist any imports in that script. My goal is to always override the builtin print function with Rich when possible. So my current <code>PYTHONSTARTUP</code> file looks like this:</p> <pre><code>try: from rich import print, pretty pretty.install() except ImportError: pass </code></pre> <p>If I put the above in <code>usercustomize.py</code>, it runs but the print function isn't overridden like I want it to be. It seems that is by design, but is there any way of achieving what I want some other way?</p>
<python>
2023-06-18 22:45:50
1
1,271
swigganicks
76,502,697
10,962,766
Using pdftotext in Google Colab
<p>The laptop provided by my German research institute broke down and I am now using a new laptop provided by my Dutch institute, but I have not set up Python and Jupyter Notebook yet. This is why I wanted to run code in <strong>Google Colab</strong> but realise that the <a href="https://pypi.org/project/pdftotext/" rel="nofollow noreferrer"><code>pdftotext</code></a> Python package cannot be installed.</p> <p>Using <code> !pip install pdftotext</code> or <code>!apt-get install</code> both result in this error notification:</p> <p><code>E: Unable to locate package pdftotext</code></p> <p>I assume that I am missing dependencies. Is there any way can make this work in Google Colab, or will I need to run my code elsewhere?</p>
<python><google-colaboratory>
2023-06-18 22:03:49
1
498
OnceUponATime
76,502,542
1,115,833
scikit learn f1 score
<p>I have been using scikit-learn to produce f1 scores (well, I use Hugging Face's f1, which in turn uses scikit-learn's f1 function): <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html</a></p> <p>I would have thought I could input a threshold for this, since it depends on precision and recall, but I can't see any place to input this.</p> <p>What is the default used?</p>
<python><scikit-learn>
2023-06-18 21:13:06
1
7,096
JohnJ
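For the f1 question above: `f1_score(y_true, y_pred)` compares already-binarized predictions to labels, so it has no threshold parameter and no hidden default; any thresholding of probabilities happens before the call, in your own code. A hand-rolled f1 is used below so the sketch runs without scikit-learn installed; for binary labels it matches what `sklearn.metrics.f1_score` computes. The probabilities and labels are made up.

```python
probs = [0.2, 0.7, 0.55, 0.4, 0.9]
y_true = [0, 1, 1, 0, 1]

threshold = 0.5                              # your choice, applied before scoring
y_pred = [int(p >= threshold) for p in probs]

# f1 from first principles: harmonic mean of precision and recall
tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
precision = tp / sum(y_pred)
recall = tp / sum(y_true)
f1 = 2 * precision * recall / (precision + recall)
print(y_pred, f1)
```

If the upstream model emits probabilities, sweeping `threshold` and recomputing f1 is how a threshold-vs-f1 curve is usually produced.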
76,502,533
5,568,409
How to show axes of a confidence ellipse
<p>I know how to plot a confidence ellipse for a two-dimensional dataset, using:</p> <pre><code>from matplotlib.patches import Ellipse </code></pre> <p>The global syntax being:</p> <pre><code>matplotlib.patches.Ellipse(xy, width, height, angle=0, **kwargs) </code></pre> <p>But I didn't find in the documentation how to <strong>also</strong> draw the two major and minor axes of the ellipse... I guess that some instruction should be used within the <strong>kwargs</strong> argument, but I can't find which ones.</p>
<python><matplotlib><ellipse>
2023-06-18 21:08:53
1
1,216
Andrew
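For the ellipse question above: `Ellipse` has no kwarg that draws its axes, but the endpoints follow directly from the patch parameters. Each semi-axis is a vector of length `width/2` (or `height/2`) rotated by the patch angle, so you can compute the four endpoints and plot them as ordinary lines. A sketch, assuming the same `(xy, width, height, angle)` you would pass to `Ellipse`:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

cx, cy, width, height, angle = 2.0, 1.0, 4.0, 2.0, 30.0
theta = np.radians(angle)
ux, uy = np.cos(theta), np.sin(theta)  # unit vector along the major axis

# Major axis endpoints lie along (ux, uy); minor axis along a perpendicular
major = np.array([[cx - width / 2 * ux, cy - width / 2 * uy],
                  [cx + width / 2 * ux, cy + width / 2 * uy]])
minor = np.array([[cx + height / 2 * uy, cy - height / 2 * ux],
                  [cx - height / 2 * uy, cy + height / 2 * ux]])

fig, ax = plt.subplots()
ax.add_patch(Ellipse((cx, cy), width, height, angle=angle, fill=False))
ax.plot(major[:, 0], major[:, 1], "r-")   # major axis
ax.plot(minor[:, 0], minor[:, 1], "b-")   # minor axis
ax.set_aspect("equal")                    # otherwise the axes look skewed
fig.savefig("ellipse_axes.png")
```

Setting an equal aspect ratio matters: without it the plotted axes will not look perpendicular even though they are.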
76,502,406
3,788,493
Python gRPC makefile: mixed implicit and static pattern rules
<p>I'm trying to write a Makefile that compiles Python gRPC protos, but only when the source proto file has changed. I ran through it multiple times and can't figure out why I'm getting that error (when running <code>make build</code>). Supposedly it's because a file is matching multiple targets, but I don't see how:</p> <pre><code># All proto source files PROTOS_SRC = $(wildcard api/*.proto) # The three types of generated files per proto source PROTOS_PY = $(patsubst api/%.proto, api/%_pb2.py, $(PROTOS_SRC)) PROTOS_GRPC_PY = $(patsubst api/%.proto, api/%_pb2_grpc.py, $(PROTOS_SRC)) PROTOS_PYI = $(patsubst api/%.proto, api/%_pb2.pyi, $(PROTOS_SRC)) # All the generated files we have to depend on PROTOS_OUT = $(PROTOS_PY) $(PROTOS_GRPC_PY) $(PROTOS_PYI) # How to obtain generated sources from a proto file api/%_pb2.py api/%_pb2_grpc.py api/%_pb2.pyi: api/%.proto: python -m grpc_tools.protoc -I./api --python_out=./api --grpc_python_out=./api --mypy_out=./api $&lt; build: $(PROTOS_OUT) clean: rm -rf api/*_pb2.py api/*_pb2_grpc.py .PHONY: clean </code></pre> <p>The proto source code is in the <code>api/</code> subdirectory, and that's where I'm outputting the generated code too.</p> <p>I wrote the conversion rule based on this, so I think it's correct: <a href="https://stackoverflow.com/questions/2973445/gnu-makefile-rule-generating-a-few-targets-from-a-single-source-file">GNU Makefile rule generating a few targets from a single source file</a></p>
<python><makefile><protocol-buffers><grpc><protoc>
2023-06-18 20:22:08
0
1,738
jeanluc
76,502,318
9,008,300
XGBoost: How to use a DMatrix with scikit-learn interface .fit
<p>I am currently using the scikit-learn interface for XGBoost in my project. However, I have an extremely large dataset, and each time I call <code>.fit</code>, the data is converted to a DMatrix, which is very time-consuming, especially when using a GPU that trains relatively fast. I've benchmarked using the native interface by using a single DMatrix for each fit, and the results show a significant difference (14s per fit vs 0.9s per fit). The problem is, I need a scikit-learn model so that it works with the rest of my program.</p> <p>Is there a way to use a DMatrix with the scikit-learn interface in XGBoost, or any workarounds to avoid the repeated conversion to DMatrix while still maintaining compatibility with scikit-learn?</p> <p>See the below code to get a reproducible way to cause this issue.</p> <pre class="lang-py prettyprint-override"><code>import time from sklearn.datasets import make_classification from xgboost import XGBClassifier import xgboost as xgb # Large synthetic dataset X, y = make_classification(n_samples=500_0000, n_features=20, n_informative=10, n_redundant=10, random_state=42) # scikit-learn t = time.time() model = XGBClassifier(tree_method=&quot;gpu_hist&quot;, gpu_id=0, predictor=&quot;gpu_predictor&quot;, max_bin=256) model.fit(X, y) print(&quot;scikit-learn interface: &quot;, time.time() - t) # scikit-learn again t = time.time() model.fit(X, y) print(&quot;scikit-learn (2nd) interface: &quot;, time.time() - t) print() # DMatrix dtrain = xgb.DMatrix(data=X, label=y) t = time.time() model = xgb.train({&quot;tree_method&quot;: &quot;gpu_hist&quot;, &quot;gpu_id&quot;: 0, &quot;predictor&quot;: &quot;gpu_predictor&quot;}, dtrain) print(&quot;native interface: &quot;, time.time() - t) # DMatrix again t = time.time() model = xgb.train({&quot;tree_method&quot;: &quot;gpu_hist&quot;, &quot;gpu_id&quot;: 0, &quot;predictor&quot;: &quot;gpu_predictor&quot;}, dtrain) print(&quot;native (2nd) interface: &quot;, time.time() - t) </code></pre> <p>Output:</p> 
<pre><code>scikit-learn interface: 14.393212795257568 scikit-learn (2nd) interface: 14.048950433731079 native interface: 3.9494242668151855 native (2nd) interface:: 0.9888997077941895 </code></pre> <p>As you can see, there is a big time discrepancy between scikit-learn and native.</p>
<python><machine-learning><artificial-intelligence><xgboost>
2023-06-18 19:56:25
4
422
Chris
76,502,227
2,403,819
What version of pyinstaller should I be using with Python 3.11.3?
<p>I am writing a software package on an Arch Linux platform using Python 3.11.3. In addition, I am also using poetry to manage my dependencies, and have run into a problem. When I try to install pyinstaller with the command <code>poetry add pyinstaller</code> I get the following error.</p> <pre><code>The current project's Python requirement (&gt;=3.11,&lt;4.0) is not compatible with some of the required packages Python requirement: - pyinstaller requires Python &lt;3.12,&gt;=3.7, so it will not be satisfied for Python &gt;=3.12,&lt;4.0 Because no versions of pyinstaller match &gt;5.12.0,&lt;6.0.0 and pyinstaller (5.12.0) requires Python &lt;3.12,&gt;=3.7, pyinstaller is forbidden. So, because todo-six depends on pyinstaller (^5.12.0), version solving failed. • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties For pyinstaller, a possible solution would be to set the `python` property to &quot;&gt;=3.11,&lt;3.12&quot; </code></pre> <p>This implies that the most up to date version of pyinstaller does not work with Python 3.11.3. 
The dependencies using in my <code>pyproject.toml</code> file are shown below.</p> <pre><code>[tool.poetry.dependencies] python = &quot;^3.11&quot; pyqt6 = &quot;^6.5.0&quot; pandas = &quot;^2.0.2&quot; pyinstall = &quot;^0.1.4&quot; [tool.poetry.group.dev.dependencies] pytest = &quot;^7.3.1&quot; flake8 = &quot;^6.0.0&quot; mypy = &quot;^1.3.0&quot; black = &quot;^23.3.0&quot; isort = &quot;^5.12.0&quot; flake8-bandit = &quot;^4.1.1&quot; flake8-bugbear = &quot;^23.5.9&quot; flake8-builtins = &quot;^2.1.0&quot; flake8-comprehensions = &quot;^3.12.0&quot; flake8-implicit-str-concat = &quot;^0.4.0&quot; flake8-print = &quot;^5.0.0&quot; tox = &quot;^4.5.2&quot; pytest-cov = &quot;^4.1.0&quot; pyupgrade = &quot;^3.4.0&quot; pre-commit = &quot;^3.3.2&quot; </code></pre> <p>Based on the fact that for some reason I can not install the most up to date version with poetry, I walked my way through every version of pyinstaller until one of them installed, which was version 4.5.1, with the command <code>poetry add pyinstaller==4.5.1</code>. 
However, when I run the command <code>pyinstaller -F -w -i icon.ico --add-data &quot;todo_six:todo_six&quot; todo.py</code> I get the following error, which indicates that pyinstaller version 4.5.1 is not compatible, or at least that is what I think it means.</p> <pre><code>Traceback (most recent call last): File &quot;/home/jonwebb/Desktop/todo_six/.venv/bin/pyinstaller&quot;, line 8, in &lt;module&gt; sys.exit(run()) ^^^^^ File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/__main__.py&quot;, line 107, in run parser = generate_parser() ^^^^^^^^^^^^^^^^^ File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/__main__.py&quot;, line 78, in generate_parser import PyInstaller.building.build_main File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/building/build_main.py&quot;, line 35, in &lt;module&gt; from PyInstaller.depend import bindepend File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/bindepend.py&quot;, line 26, in &lt;module&gt; from PyInstaller.depend import dylib, utils File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/utils.py&quot;, line 33, in &lt;module&gt; from PyInstaller.depend import bytecode File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/bytecode.py&quot;, line 95, in &lt;module&gt; _call_function_bytecode = bytecode_regex(rb&quot;&quot;&quot; ^^^^^^^^^^^^^^^^^^^^ File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/bytecode.py&quot;, line 60, in bytecode_regex pattern = re.sub( ^^^^^^^ File &quot;/usr/lib/python3.11/re/__init__.py&quot;, line 185, in sub return _compile(pattern, flags).sub(repl, string, count) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/bytecode.py&quot;, line 62, in &lt;lambda&gt; lambda m: _instruction_to_regex(m[1].decode()), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/jonwebb/Desktop/todo_six/.venv/lib/python3.11/site-packages/PyInstaller/depend/bytecode.py&quot;, line 40, in _instruction_to_regex return re.escape(bytes([dis.opmap[x]])) ~~~~~~~~~^^^ KeyError: 'CALL_FUNCTION' </code></pre> <p>If someone could help me understand what these errors mean, and how to get pyinstaller working with my current dependencies, I would appreciate it.</p>
<python><python-3.x><pyinstaller>
2023-06-18 19:30:31
1
1,829
Jon
76,502,222
1,308,250
What is the fastest way to iterate through a pandas DataFrame to create custom objects?
<p>My use case is this: I have a pandas DataFrame that was loaded from an SQL database. I want to construct an object from every row. You'd maybe think you'd want to use df.apply, but this is actually extremely slow as demonstrated later. I don't know what the fastest way to accomplish this would be.</p> <p>To figure out what was fastest, I constructed a repo that tests various functions that does this, but I don't know if I might be missing something.</p> <p>My test setup is as follows:</p> <p>Create a list of functions that takes as input a DataFrame and outputs a list of <code>Node</code> objects.</p> <p>Set a random seed so every function gets the same input DataFrame.</p> <p>Construct a DataFrame of size N (from 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768).</p> <p>This DataFrame is then passed into our test functions that calls a function with the following signature: <code>create_node(b, a, f, e, d, c, g)</code> or <code>create_node_ignored_args(b, a, f, e, d, c, g, *args, **kwargs)</code>.</p> <p>These functions create a <code>Node</code> object (class <code>__init__</code> signature: <code>def __init__(self, node_id, b, a, f, c, d, e, g)</code>).</p> <p>Note that the DataFrame contains these columns <code>b, a, f, e, d, c, g</code>, but not in that order.</p> <p>It's important that we be able to pass arguments in the correct order or the <code>Node</code> will be incorrectly constructed.</p> <p>For example if you have a DataFrame: <code>{&quot;a&quot;:1, &quot;b&quot;:2, &quot;c&quot;:3, &quot;d&quot;:4, &quot;e&quot;:5, &quot;f&quot;:6: &quot;g&quot;:{&quot;subset&quot;: [1,2,3,4]}}</code> and you just naively pass that into <code>create_node</code> as *args you'll end up with an incorrect <code>Node</code> where the <code>b</code> field in the object is set to <code>a</code> and so forth.</p> <p>This can be avoided if you pass in the arguments as <code>**kwargs</code>, which will match the columns in the DataFrame with the 
exact arguments in the function.</p> <p>Note that if you have too many columns in the DataFrame then calling <code>create_node</code> with all of them will fail because of too many arguments, which is why we also have <code>create_node_ignored_args</code> which can ignore args/kwargs if there's too many args or non-matching kwargs.</p> <p>Here are a few example functions:</p> <pre class="lang-py prettyprint-override"><code>def index_df_apply(df): &quot;&quot;&quot;Use apply, get the fields indexing using LOOKUP. Use as args in create_node (using *). &quot;&quot;&quot; nodes = df.apply(lambda row: create_node(*row[LOOKUP]), axis=1) return [node for node in nodes] def itertuples(df): &quot;&quot;&quot;Loop over itertuples, convert namedtuple to dict, get fields using tuple_unwrap fn according to LOOKUP. Use as kwargs in create_node (using **). &quot;&quot;&quot; nodes = [] for row in df.itertuples(index=False): nodes.append(create_node(**tuple_unwrap(row, LOOKUP))) return nodes def zip_comprehension_lookup(df): &quot;&quot;&quot;List comprehension over zipped df columns, get fields using * and LOOKUP. Use as args in create_node (using *). &quot;&quot;&quot; nodes = [create_node(*args) for args in zip(*(df[c] for c in LOOKUP))] return nodes </code></pre> <p>We have two cases we want to test: Performance for large DataFrames over less iterations, and performance for small DataFrames over many iterations. 
We'll visualize the one for large DataFrames over less iterations and provide a text report as well, but only provide text report for the small DataFrames over many iterations.</p> <p>For the large DataFrames 1 iteration, we can create two straight-forward perfplot images:</p> <p><a href="https://i.sstatic.net/IvYrh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IvYrh.png" alt="Perfplot with relative timings to the fastest function" /></a> <a href="https://i.sstatic.net/7JHAX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7JHAX.png" alt="Perfplot with absolute timings" /></a></p> <p>Here's some roughly equivalent timing data for these images: <a href="https://github.com/Atheuz/pandas-to-object-perf-test/blob/master/time_one_iteration_count.txt" rel="nofollow noreferrer">https://github.com/Atheuz/pandas-to-object-perf-test/blob/master/time_one_iteration_count.txt</a></p> <p>From this timing test, we see that the fastest functions are: zip_comprehension_np_values_lookup, zip_comprehension_lookup, zip_comprehension_direct_access, to_numpy_direct_access, itertuples_direct_access_comprehension.</p> <p>These are those functions:</p> <pre class="lang-py prettyprint-override"><code>def zip_comprehension_np_values_lookup(df): &quot;&quot;&quot;List comprehension over zipped df columns, get fields using *, LOOKUP, use .values. Use as args in create_node (using *). &quot;&quot;&quot; nodes = [create_node(*args) for args in zip(*(df[c].values for c in LOOKUP))] return nodes def zip_comprehension_direct_access(df): &quot;&quot;&quot;List comprehension over zip object, get fields using direct indexing. Use as args in create_node (using *). 
&quot;&quot;&quot; nodes = [create_node(*args) for args in zip(df[&quot;b&quot;], df[&quot;a&quot;], df[&quot;f&quot;], df[&quot;e&quot;], df[&quot;d&quot;], df[&quot;c&quot;], df[&quot;g&quot;])] return nodes def zip_comprehension_lookup(df): &quot;&quot;&quot;List comprehension over zipped df columns, get fields using * and LOOKUP. Use as args in create_node (using *). &quot;&quot;&quot; nodes = [create_node(*args) for args in zip(*(df[c] for c in LOOKUP))] return nodes def to_numpy_direct_access(df): &quot;&quot;&quot;Get the names of columns in our dataframe, create indices lookup from name-&gt;idx, convert df to numpy and access fields using indices lookup. Use as kwargs in create_node (using direct assignment). &quot;&quot;&quot; cols = list(df.columns) indices = {k: cols.index(k) for k in cols} nodes = [ create_node( b=row[indices[&quot;b&quot;]], a=row[indices[&quot;a&quot;]], f=row[indices[&quot;f&quot;]], e=row[indices[&quot;e&quot;]], d=row[indices[&quot;d&quot;]], c=row[indices[&quot;c&quot;]], g=row[indices[&quot;g&quot;]], ) for row in df.to_numpy() ] return nodes def itertuples_direct_access_comprehension(df): &quot;&quot;&quot;List comprehension over itertuples, get fields accessing them directly. Use as kwargs in create_node (using direct assignment). 
&quot;&quot;&quot; nodes = [create_node(b=row.b, a=row.a, f=row.f, e=row.e, d=row.d, c=row.c, g=row.g) for row in df.itertuples(index=False)] return nodes </code></pre> <p>Similarly, a text report for the other case where we want to test small DataFrames with a high iteration count: <a href="https://github.com/Atheuz/pandas-to-object-perf-test/blob/master/time_high_iteration_count.txt" rel="nofollow noreferrer">https://github.com/Atheuz/pandas-to-object-perf-test/blob/master/time_high_iteration_count.txt</a></p> <p>From this timing test, it appears that the fastest functions are these:</p> <p>zip_comprehension_np_values_lookup, zip_comprehension_direct_access, zip_comprehension_lookup, to_numpy_direct_access, to_numpy_take</p> <p>The only new function for this case is: to_numpy_take</p> <pre class="lang-py prettyprint-override"><code>def to_numpy_take(df): &quot;&quot;&quot;Get the names of columns in our dataframe, create indices lookup from name-&gt;idx using LOOKUP, convert df to numpy and access fields using the indices lookup using np.take. Use as args in create_node (using *). &quot;&quot;&quot; cols = list(df.columns) indices = [cols.index(k) for k in LOOKUP] nodes = [create_node(*np.take(row, indices)) for row in df.to_numpy()] return nodes </code></pre> <p>For more information you can visit my GitHub repo here: <a href="https://github.com/Atheuz/pandas-to-object-perf-test" rel="nofollow noreferrer">https://github.com/Atheuz/pandas-to-object-perf-test</a></p> <p>Anyway, my question still stands: Is there a generally recommended fast way to create objects from pandas DataFrames, or did I accidentally find it (access the DataFrame columns you want and zip them together)? I'm also very confused about why <code>df.apply(lambda row: create_node(*row[LOOKUP]), axis=1)</code> is actually one of the slowest ways to accomplish this.</p>
<python><pandas>
2023-06-18 19:29:07
1
331
Atheuz
76,502,083
4,115,031
PyCharm requirements.txt warning says packages are not installed, but they *are* installed
<p>I'm getting this error for all of my packages in my <code>requirements.txt</code>:</p> <pre><code>'&lt;PackageName&gt; X.Y.Z' is not installed (required: X.Y.Z, installed: &lt;nothing&gt;, latest: X.Y.Z) </code></pre> <p>But when I open up the virtual environment settings, I can see that all of those packages are, in fact, installed into the virtual environment.</p> <p>How do I resolve this?</p> <p><a href="https://i.sstatic.net/f4fz4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f4fz4.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/at28z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/at28z.png" alt="enter image description here" /></a></p>
<python><pycharm><jetbrains-ide>
2023-06-18 18:48:48
0
12,570
Nathan Wailes
76,502,047
784,586
In python how to read a url content that uses the current windows default authentication?
<p>I have the URL of a Power BI report that I wish to read from using Python. The report opens normally in any browser, since I'm logged into Windows 10 using my username, which has access to that URL (default authentication).</p> <p>I tried this code but it doesn't seem to understand that I'm logged in. How can I read the URL using my Windows default authentication?</p> <pre><code>import urllib.request import pandas as pd f=open(&quot;link.txt&quot;,&quot;r&quot;) url=f.readline() #create a password manager with default credentials. password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm() password_mgr.add_password(None,url,None,None) #create an authentication handler with the password manager auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr) #create an opener with the authentication handler opener = urllib.request.build_opener(auth_handler) #open the url with the opener response = opener.open(url) #read the contents content = response.read() print(content.decode(&quot;utf-8&quot;)) </code></pre>
<python><pandas><powerbi>
2023-06-18 18:39:59
1
1,930
medo ampir
76,502,021
2,844,655
Is SQLAlchemy efficient when using it with raw SQL?
<p>I have a Flask web application with a Postgres database (&lt;10 million rows). I used SQLAlchemy to connect to Postgres, but queries that I have written in native SQLAlchemy are compiled to SQL that is too slow. The <a href="https://stackoverflow.com/questions/14754994/why-is-sqlalchemy-count-much-slower-than-the-raw-query">.count() method</a> is one of the main offenders here. I am planning to rewrite my queries. Pseudocode examples:</p> <p>From <code>db.session.query(Table).filter(Table.column==condition).count()</code></p> <p>To <code>db.session.execute(sqlalchemy.text(&quot;SELECT count(Table.id) from Table WHERE Table.column=condition&quot;))</code></p> <p>My question is: can I do better than the execute-text construct? Will this still get wrapped in slow SQLAlchemy logic? Or is this as close to running raw SQL as it gets? How much faster can I tune my Flask-Postgres interaction? (I'm not interested in answers that involve additional third-party services.)</p>
<python><postgresql><flask><sqlalchemy><flask-sqlalchemy>
2023-06-18 18:29:31
1
2,940
Newb
76,501,993
10,771,559
looping through dictionary keys to plot graph
<p>I have a dictionary that looks like this:</p> <pre><code>habitat_dictionary={'trees':6, 'ponds':10, 'scrub':23} </code></pre> <p>I also have a dataframe with many columns, among which are no._trees, no._ponds, no._scrub - for example:</p> <pre><code>d = {'no._trees': [1, 2], 'no._rabbit_holes':[7,8],'no._ponds':[3,5], 'no._scrub':[3,2],'robin': [0.5, 0.6], 'dove':[0.6,0.2]} df = pd.DataFrame(data=d) </code></pre> <p>For each key in the habitat dictionary, I want to create a graph with the key as the x-axis variable, a y-axis variable between 0 and 1, and there would be two lines: one for 'robin' and one for 'dove'.</p> <p>I have been trying this:</p> <pre><code>for habitat in habitat_dictionary.keys(): out=df.set_index('no._'+str(habitat))[['robin', 'dove']] out.plot(ylim=(0, 1), xticks=out.index, legend=True) </code></pre> <p>but I am getting the error: 'TypeError: loop of ufunc does not support argument 0 of type float which has no callable rint method'</p> <p>I would like to have all the graphs as subplots with a shared title.</p>
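A sketch of the loop shape described in the question, with one subplot per habitat key and a shared figure-level title via `fig.suptitle` (whether this also clears the reported `TypeError` depends on the dtypes in the real dataframe — the data below is the toy frame from the question):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

habitat_dictionary = {'trees': 6, 'ponds': 10, 'scrub': 23}
d = {'no._trees': [1, 2], 'no._rabbit_holes': [7, 8], 'no._ponds': [3, 5],
     'no._scrub': [3, 2], 'robin': [0.5, 0.6], 'dove': [0.6, 0.2]}
df = pd.DataFrame(data=d)

# one subplot per habitat key, all sharing one figure-level title
fig, axes = plt.subplots(1, len(habitat_dictionary), figsize=(12, 4))
fig.suptitle('Bird counts by habitat feature')
for ax, habitat in zip(axes, habitat_dictionary):
    out = df.set_index('no._' + habitat)[['robin', 'dove']]
    out.plot(ax=ax, ylim=(0, 1), xticks=out.index, legend=True)
```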
<python><matplotlib>
2023-06-18 18:22:28
0
578
Niam45
76,501,826
11,092,636
does python detect code snippets that are useless? (dead code elimination)
<p>If I code something like this:</p> <pre class="lang-py prettyprint-override"><code>for _ in range(100_000_000): a = 10 b = 20 c = a + b d = c * 2 e = d / 2 del a, b, c, d, e print(&quot;Hello World&quot;) </code></pre> <p>Will the Python compiler realize it is useless and that there is no need to do anything? I've heard gcc can do that, but not Python.</p> <p>My tests confirm that, but I'm looking for confirmation since I've stumbled across posts like this (<a href="https://bugs.python.org/issue1346214" rel="nofollow noreferrer">https://bugs.python.org/issue1346214</a>) that make me wonder if it's actually implemented. That issue concerns a very old version of Python and I'm using 3.11.1, but if they were already discussing it back then, surely it would be implemented by now if they had decided to implement it?</p> <p>On my laptop, the Python code takes 0.07 seconds to run against 0 seconds for just the hello world, which is why I think there is no dead code elimination.</p>
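One way to check this empirically rather than by timing alone is to disassemble the loop body with the stdlib `dis` module: if the "useless" stores still appear in the bytecode, the compiler did not eliminate them. A small sketch:

```python
import dis

def dead():
    # same dead assignments as in the question's loop body
    a = 10
    b = 20
    c = a + b
    d = c * 2
    e = d / 2
    del a, b, c, d, e

# collect the opcode names CPython actually emitted for the function
ops = [ins.opname for ins in dis.get_instructions(dead)]
stores = [op for op in ops if op.startswith("STORE_FAST")]
print(stores)  # the dead stores are still present in the compiled bytecode
```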
<python>
2023-06-18 17:41:25
1
720
FluidMechanics Potential Flows
76,501,774
3,990,451
pandas how to override attribute error from timezone localization
<p>I have an <strong>idaydf.index</strong> whose timezone I am trying to localize:</p> <pre><code>DatetimeIndex(['2022-10-24 16:00:00', '2022-10-24 16:15:00', ... '2023-06-16 21:58:00', '2023-06-16 22:00:00'], dtype='datetime64[ns]', name='DateTime', length=9012, freq=None) </code></pre> <p>with</p> <pre><code>idaydf.index = idaydf.index.tz_localize(LOCAL_TZ) </code></pre> <p>where LOCAL_TZ is</p> <pre><code>_PytzShimTimezone(zoneinfo.ZoneInfo(key='Europe/London'), 'Europe/London') </code></pre> <p>I get this error:</p> <pre><code>*** AttributeError: 'NoneType' object has no attribute 'total_seconds' </code></pre> <p>I have these versions:</p> <pre><code>python3-3.11.3 pandas-1.5.3 pytz-2023.3-1 tzlocal 4.2 </code></pre> <p>How can I fix this?</p>
<python><pandas><datetime><pytz>
2023-06-18 17:29:31
1
982
MMM
76,501,733
10,327,984
parallelize a function that fills missing values from duplicates in a pandas dataframe
<p>I have a product data frame of 1,838,379 rows with the columns description, image_url, ean, and product_name. The dataset has duplicates in product_name, and I am trying to fill the NaN values in description, image_url, and ean from the duplicated rows that share the same product_name, so I implemented this function:</p> <pre><code>def fill_descriptions_images_ean_from_duplicates(row,train): import pandas as pd duplicated_rows = train.loc[train['product_name'] == row[&quot;product_name&quot;]] if not duplicated_rows.empty: descriptions=duplicated_rows[&quot;description&quot;].dropna() if not descriptions.empty: description=list(descriptions)[0] train.loc[train['product_name'] == row[&quot;product_name&quot;], 'description',] = train.loc[train['product_name'] == row[&quot;product_name&quot;], 'description'].fillna(description) images=duplicated_rows[&quot;image_url&quot;].dropna() if not images.empty: image=list(images)[0] train.loc[train['product_name'] == row[&quot;product_name&quot;], 'image_url',] = train.loc[train['product_name'] == row[&quot;product_name&quot;], 'image_url'].fillna(image) eans=duplicated_rows[&quot;ean&quot;].dropna() if not eans.empty: ean=list(eans)[0] train.loc[train['product_name'] == row[&quot;product_name&quot;], 'ean',] = train.loc[train['product_name'] == row[&quot;product_name&quot;], 'ean'].fillna(ean) </code></pre> <p>When I use apply it takes forever to execute, so I tried Pandarallel, but Pandarallel doesn't support the lambda function and tells me that fill_descriptions_images_ean_from_duplicates is not defined:</p> <pre><code>from pandarallel import pandarallel import psutil psutil.cpu_count(logical=False) pandarallel.initialize() train.parallel_apply(lambda row: fill_descriptions_images_ean_from_duplicates(row, train), axis=1) </code></pre> <p>So I tried Dask, but nothing happened either; the progress bar is stuck:</p> <pre><code>def process_partition(df_partition,train): df_partition.apply(lambda row: fill_descriptions_images_ean_from_duplicates(row, train), axis=1) return df_partition </code></pre> <pre><code>import dask.dataframe as dd from dask.diagnostics import ProgressBar dask_train = dd.from_pandas(train, npartitions=7) dask_df_applied = dask_train.map_partitions(lambda row: process_partition(row, train),meta=train.dtypes) with ProgressBar(): train=dask_df_applied.compute() </code></pre> <p>Sample data:</p> <pre><code>import pandas as pd import numpy as np # Set the random seed for reproducibility np.random.seed(42) # Generate random data data = { 'product_name': ['Product A', 'Product B', 'Product B', 'Product C', 'Product D'] * 20, 'description': np.random.choice([np.nan, 'Description'], size=100), 'image_url': np.random.choice([np.nan, 'image_url'], size=100), 'ean': np.random.choice([np.nan, 'EAN123456'], size=100) } # Create the DataFrame train= pd.DataFrame(data) </code></pre>
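As an aside, this kind of per-group fill usually doesn't need parallelism at all: a vectorized `groupby(...).transform('first')` (which broadcasts the first non-null value of each group back onto every row of that group) is typically far faster than a row-wise apply. A sketch on a small hand-made frame — explicit `None` values are used here because `np.random.choice([np.nan, 'Description'])` in the sample above appears to coerce NaN to the string `'nan'`:

```python
import pandas as pd

train = pd.DataFrame({
    'product_name': ['A', 'B', 'B', 'A', 'C'],
    'description':  [None, 'desc B', None, 'desc A', None],
    'ean':          ['111', None, '222', None, None],
})

cols = ['description', 'ean']
# 'first' skips nulls, so transform('first') yields each group's first
# non-null value on every row; fillna keeps values that already exist.
train[cols] = train[cols].fillna(
    train.groupby('product_name')[cols].transform('first'))
```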
<python><pandas><multiprocessing>
2023-06-18 17:17:08
2
622
Mohamed Amine
76,501,708
1,123,336
Using axis units to set the aspect ratio
<p>I am looking for a way to define the aspect ratio of a 2D plot in Matplotlib, using axis units with pre-defined conversions. In the past, I have done this manually by calculating the desired aspect ratio, but this was before I knew about the axis <code>units</code> capabilities in Matplotlib. It would seem that Matplotlib should be able to use <code>units</code> to accomplish the same thing automatically, but it looks as if <code>units</code> are not used in computing aspect ratios. I wanted to check here if that is the case, or if I'm missing something.</p> <p>Here's an example to explain what I mean. It's not what I want this for, but it is a fully-worked example.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from basic_units import cm, inch def gauss(sigma=0.1): x = y = np.linspace(-0.5, 0.4, 10) X, Y = np.meshgrid(y, x) return np.exp(-(X**2 + Y**2) / (2 * sigma**2)) z = np.tile(gauss(), (5, 5)) plt.figure() plt.imshow(z, extent=[-2.5,2.5,-2.5,2.5]) ax = plt.gca() ax.xaxis.set_units(cm) ax.yaxis.set_units(inch) ax.set_xlim(-2.5*cm, 2.5*cm) ax.set_ylim(-2.5*inch, 2.5*inch) ax.set_xlabel(cm) ax.set_ylabel(inch) ax.set_aspect('equal') plt.show() </code></pre> <p>To run this, you also need <a href="https://matplotlib.org/stable/_downloads/0f0fd288c7d4a6a16f4835b96343f597/basic_units.py" rel="nofollow noreferrer">basic_units.py</a></p> <p><a href="https://i.sstatic.net/zYRXh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zYRXh.png" alt="enter image description here" /></a></p> <p>I was hoping the physical aspect ratio would be 2.54, after setting it to 'equal', but it's not. My question therefore is if there is a way to configure Matplotlib so that an aspect ratio of 'equal' is adjusted by the conversion factor between units, without having to compute the ratio by hand.</p>
<python><matplotlib>
2023-06-18 17:10:29
0
582
Ray Osborn
76,501,681
673,600
Plotly trend lines for each data set not working
<p>I can plot a trend line that, for example, works on the min/max of all the data, but I want a trend line for each group (color) of points. However, it doesn't work when I change <code>trendline_scope=&quot;overall&quot;</code> to <code>trendline_scope=&quot;trace&quot;</code>. I get a mess instead of what I am expecting.</p> <pre><code>fig = px.scatter(df, &quot;Date&quot;, y=&quot;Number of X&quot;, color=&quot;Technology&quot;, labels={ &quot;Date&quot;: &quot;Reported Date&quot;, &quot;Number of Qubits&quot;: &quot;Number of X&quot;, &quot;markersize&quot;: 50, &quot;s&quot;: 50, &quot;title&quot;: &quot;X&quot; }, trendline=&quot;expanding&quot;, trendline_options=dict(function=&quot;max&quot;), trendline_scope=&quot;trace&quot;) </code></pre>
<python><plotly>
2023-06-18 17:03:52
0
6,026
disruptive
76,501,677
3,412,660
SIMPLE Pythonic way to Pretty Print requests.response Headers
<p>I am trying to pretty print the http response coming back from Python Requests library in a fully Pythonic way without using any packages not built into the Python Standard Library. I am getting the JSON run around.</p> <p><strong>What I tried:</strong></p> <p>I tried loading the response.headers into a JSON string using json.loads(), and then indenting the output using json.dumps(), as follows,</p> <pre><code>import json response_json = json.loads(response.headers) pretty_response = json.dumps(response_json, indent=4) print(pretty_response) </code></pre> <p>but I get the following error:</p> <pre><code>TypeError Traceback (most recent call last) Cell In[21], line 2 1 import json ----&gt; 2 response_json = json.loads(response.headers) 4 pretty_response = json.dumps(response_json, indent=4) 5 print(pretty_response) File c:\ProgramData\Anaconda3\envs\webscrapers\lib\json\__init__.py:341, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 339 else: 340 if not isinstance(s, (bytes, bytearray)): --&gt; 341 raise TypeError(f'the JSON object must be str, bytes or bytearray, ' 342 f'not {s.__class__.__name__}') 343 s = s.decode(detect_encoding(s), 'surrogatepass') 345 if &quot;encoding&quot; in kw: TypeError: the JSON object must be str, bytes or bytearray, not CaseInsensitiveDict </code></pre> <p><strong>I also tried:</strong></p> <p>First installing and importing the <code>rich</code> package, and then using a module within the <code>rich</code> package for Rich Text Formatting output to terminals. The print_json() function is supposed to pretty print json:</p> <pre><code>import json import rich rich.print_json(dict(response.headers)) </code></pre> <p>but I get an error:</p> <pre><code>TypeError: json must be str. 
Did you mean print_json(data={'Date': 'Sun, 18 Jun 2023 16:22:08 GMT', ...} </code></pre> <p>I finally got it to work by installing and importing the <code>rich</code> package and using some hints from <a href="https://stackoverflow.com/questions/3229419/how-to-pretty-print-nested-dictionaries">How to pretty print nested dictionaries?</a> and <a href="https://forum.freecodecamp.org/t/requests-pretty-print-http-response-headers/567548" rel="nofollow noreferrer">Requests: Pretty print http response headers</a>.</p> <pre><code>import rich rich.pretty.pprint(dict(response.headers)) </code></pre> <p>However, it took some time to figure out the correct syntax, because the rich.pretty.pprint <code>help()</code> documentation in Jupyter did not have detailed examples for this use case. While <code>rich</code> is a very nice package, it has a significant learning curve. More importantly, it is not a native Python built-in solution.</p> <p>How can this be done simply using Python with no 3rd party package installations?</p>
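For reference, the original `TypeError` happens because `response.headers` is already a dict-like `CaseInsensitiveDict`, not a JSON string, so there is nothing for `json.loads` to parse. Converting it to a plain dict and passing it to `json.dumps` (or `pprint`) keeps everything in the standard library. A sketch with a stand-in headers dict (in real code, `dict(response.headers)` works the same way):

```python
import json
from pprint import pprint

# stand-in for requests' response.headers (a CaseInsensitiveDict)
headers = {
    'Date': 'Sun, 18 Jun 2023 16:22:08 GMT',
    'Content-Type': 'application/json; charset=utf-8',
    'Content-Length': '1234',
}

# json.dumps serializes a dict directly -- no json.loads needed
pretty = json.dumps(dict(headers), indent=4)
print(pretty)

# stdlib alternative; sorts keys by default
pprint(dict(headers))
```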
<python><json><python-requests>
2023-06-18 17:03:16
1
3,186
Rich Lysakowski PhD
76,501,666
6,440,589
Rounding to the closest bin in pandas/numpy
<p>I have dataframe containing distances as float64 values.</p> <pre><code>import numpy as np import pandas as pd binsize = 0.05 df = pd.DataFrame() df['distance'] = [0.01555, 0.6, 0.99, 1.24] </code></pre> <p>This returns:</p> <pre><code> distance 0 0.01555 1 0.60000 2 0.99000 3 1.24000 </code></pre> <p>I would like to round the min and max values rounded to the closest multiple of <code>binsize</code>.</p> <p>This is how I am currently doing this:</p> <pre><code>np.round(np.round(df['distance'].min() / binsize) * binsize, 2) np.round(np.round(df['distance'].max() / binsize) * binsize, 2) </code></pre> <p>Thus returning <code>0.0</code> and <code>1.25</code> for the above example.</p> <p>Is there an easier way to achieve this?</p>
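A slightly more compact spelling (a sketch that behaves the same on the example) is to bin the whole column once — divide, round, multiply, then round again to trim float-representation noise — and take the min and max of the result:

```python
import pandas as pd

binsize = 0.05
df = pd.DataFrame({'distance': [0.01555, 0.6, 0.99, 1.24]})

# nearest multiple of binsize for every value, then trim float noise
binned = ((df['distance'] / binsize).round() * binsize).round(2)
lo, hi = binned.min(), binned.max()
print(lo, hi)  # 0.0 1.25
```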
<python><rounding><binning>
2023-06-18 17:00:41
1
4,770
Sheldon
76,501,632
19,504,610
Creating a metaclass in Cython
<p>I aim to create a metaclass (let's call it <code>SlotsMeta</code>) in Cython that performs the following:</p> <ol> <li>It takes all the class variables defined by the class that uses the <code>SlotsMeta</code> as metaclass and converts them into <code>readonly</code> attributes in Cython.</li> </ol> <p>I am trying to convert all class variables (as keys in the parameter <code>**kwargs</code>) passed into the <code>__cinit__</code> method of <code>SlotsMeta</code> to a <code>Tuple[str]</code> and assign it to <code>SlotsMeta.__slots__</code>.</p> <p>I have looked at the following references:</p> <ol> <li><a href="https://stackoverflow.com/questions/66026576/python-metaclass-defining-slots-makes-slots-readonly">Python Metaclass defining __slots__ makes __slots__ readonly</a></li> <li><a href="https://github.com/sagemath/sagelib/blob/master/sage/misc/classcall_metaclass.pxd" rel="nofollow noreferrer">https://github.com/sagemath/sagelib/blob/master/sage/misc/classcall_metaclass.pxd</a></li> <li><a href="https://github.com/sagemath/sagelib/blob/master/sage/misc/classcall_metaclass.pyx" rel="nofollow noreferrer">https://github.com/sagemath/sagelib/blob/master/sage/misc/classcall_metaclass.pyx</a></li> </ol> <p>My implementation is as follows:</p> <pre><code>#cython: language_level=3 # copied from https://github.com/sagemath/sagelib/blob/master/sage/misc/classcall_metaclass.pyx from cpython cimport PyObject, Py_XDECREF cdef extern from &quot;Python.h&quot;: ctypedef PyObject *(*callfunc)(type, object, object) except NULL ctypedef struct PyTypeObject_call &quot;PyTypeObject&quot;: callfunc tp_call # needed to call type.__call__ at very high speed. 
cdef PyTypeObject_call PyType_Type # Python's type cdef class SlotsMeta(type): cdef readonly tuple __slots__ def __init__(mcls, str name, tuple bases, dict attrs): attrs_keys = tuple(str(k) for k in attrs.keys()) mcls.__slots__ = attrs_keys def __call__(cls, *args, **kwargs): ptr = PyType_Type.tp_call(cls, args, kwargs) inst = &lt;object&gt;ptr inst.__init__(*args, **kwargs) Py_XDECREF(ptr) # During the cast to &lt;object&gt; Cython did INCREF(res) return inst </code></pre> <p>The test function that fails is:</p> <pre><code>class A(metaclass=SlotsMeta): B: str = &quot;H&quot; def test_slotsmeta(): a = A() # passes assert a.B == &quot;H&quot; # passes with pytest.raises(AttributeError): # passes a.C # passes with pytest.raises(AttributeError): a.C = 500 # failed assert a.__slots__ == (&quot;B&quot;, ) # failed </code></pre> <p>What failed:</p> <ol> <li><code>a.C</code> = 500 should not be possible but it is.</li> <li><code>a.__slots__</code> should return <code>(&quot;B&quot;, )</code> but it doesn't.</li> <li><code>a.__dict__</code> exists when it shouldn't.</li> </ol>
<python><cython><extension-modules>
2023-06-18 16:51:55
0
831
Jim
76,501,463
8,176,763
airflow cannot find file under dags directory
<p>I have a PostgresOperator that gives the file path to a sql file.</p> <pre><code> stage_evergreen = PostgresOperator( task_id=&quot;RefreshStageEvergreen&quot;, postgres_conn_id=&quot;evergreen&quot;, autocommit=True, sql=&quot;sql/evergreen_stage.sql&quot;) </code></pre> <p>This is a relative path under the dags directory; according to the official tutorial, this should work: <a href="https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html</a></p> <p>But it does not work.</p> <p>The error is:</p> <p>jinja2.exceptions.TemplateNotFound: sql/evergreen_stage.sql</p> <p>I also tried adding <code>template_searchpath</code> to my dag decorator:</p> <pre><code>@dag( dag_id = &quot;Evergreen&quot;, schedule_interval = '0 10 * * *', start_date=pendulum.datetime(2023, 3, 9, tz=&quot;UTC&quot;), catchup=False, dagrun_timeout=timedelta(minutes=20), template_searchpath = ['/opt/airflow/dags/sql/'] ) </code></pre> <p>but it still does not work.</p> <p>Only placing the SQL file directly under the dags folder worked for me.</p>
<python><airflow>
2023-06-18 16:13:08
1
2,459
moth
76,501,385
7,535,168
Kivy MDLabel in RecycleView losing text after updating data
<p>I'm having two screens, loginScreen and mainScreen. In the mainScreen I have a RecycleView of MDLabels. Initially when entering on mainScreen everything works fine, but whenever I'm refreshing the data of my RecycleView, the text of some labels keeps disappearing and appearing on scrolling. When I use regular kivy labels instead of MDLabels, I'm not getting this strange behavior. Am I doing something wrong in the code or is this expected when using MDLabels in RecycleView?</p> <p>main.py</p> <pre><code>from kivymd.app import MDApp from kivy.lang import Builder from kivy.uix.screenmanager import ScreenManager from kivy.uix.screenmanager import Screen from kivymd.color_definitions import colors from kivymd.uix.boxlayout import MDBoxLayout import random class DailyService(MDBoxLayout): pass class MainScreen(Screen): def __init__(self, **kwargs): super(MainScreen, self).__init__(**kwargs) def switchButton(self): self.manager.switchToLoginScreen() class LoginScreen(Screen): def __init__(self, **kwargs): super(LoginScreen, self).__init__(**kwargs) def switchButton(self): self.manager.switchToMainScreen() class MyScreenManager(ScreenManager): def __init__(self, **kwargs): super(MyScreenManager, self).__init__(**kwargs) #self.current = 'loginScreen' def switchToMainScreen(self): data = [] for i in range(20): k = random.randint(0, 9) if k%2 == 0: color = colors['BlueGray']['700'] else: color = colors['Green']['700'] data.append({'day': 'DAY', 'service': 'SERVICE', 'bg_color': color}) self.mainScreen.rvid.data = data self.current = 'mainScreen' def switchToLoginScreen(self): self.current = 'loginScreen' class MyApp(MDApp): def build(self): self.theme_cls.theme_style = 'Dark' self.theme_cls.primary_palette = 'Blue' self.theme_cls.accent_palette = 'Amber' return Builder.load_file('main.kv') if __name__ == '__main__': MyApp().run() </code></pre> <p>main.kv</p> <pre><code> &lt;LoginScreen&gt;: name: 'loginScreen' Button: text: 'MAIN' on_release: root.switchButton() 
&lt;DailyService&gt;: bg_color: app.theme_cls.primary_dark day: '' service: '' MDGridLayout: rows: 2 MDLabel: halign: 'center' text: root.day MDLabel: halign: 'center' md_bg_color: root.bg_color text: root.service &lt;MainScreen&gt;: name: 'mainScreen' rvid: myRv MDRecycleView: viewclass: 'DailyService' id: myRv RecycleBoxLayout: default_size: None, dp(200) default_size_hint: 1, None size_hint_y: None height: self.minimum_height orientation: 'vertical' Button: pos_hint:{&quot;x&quot;:0.5,'bottom': 1} size_hint: 0.4, 0.1 text: 'LOGIN' on_release: root.switchButton() MyScreenManager: loginScreen: loginScreenId mainScreen: mainScreenId LoginScreen: id: loginScreenId MainScreen: id: mainScreenId </code></pre> <p>Screenshot of the first enter on mainScreen: <a href="https://i.sstatic.net/yxPdM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yxPdM.png" alt="enter image description here" /></a></p> <p>Screenshot of the second enter on mainScreen after data update: <a href="https://i.sstatic.net/40zPP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/40zPP.png" alt="enter image description here" /></a></p>
<python><android><kivy><kivy-language><kivymd>
2023-06-18 15:56:32
1
601
domdrag
76,501,267
6,643,799
Randomly generate all unique pair-wise combinations of elements between two lists in set time
<p>I have two lists:</p> <pre><code>a = [1, 2, 3, 5] b = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;] </code></pre> <p>And I would like to generate all possible combinations with a Python generator. I know I could do:</p> <pre><code>combinations = list(itertools.product(a,b)) random.shuffle(combinations) </code></pre> <p>But that has an extreme memory cost, as I would have to hold all possible combinations in memory, even if I only wanted two random unique combinations.</p> <p>My target is a Python generator whose memory cost grows with the number of iterations requested from it, reaching the same O memory cost as itertools only at the maximum number of iterations.</p> <p>This is what I have for now:</p> <pre><code>def _unique_combinations(a: List, b: List): &quot;&quot;&quot; Creates a generator that yields unique combinations of elements from a and b in the form of (a_element, b_element) tuples in a random order. &quot;&quot;&quot; len_a, len_b = len(a), len(b) generated = set() for i in range(len_a): for j in range(len_b): while True: # choose random elements from a and b element_a = random.choice(a) element_b = random.choice(b) if (element_a, element_b) not in generated: generated.add((element_a, element_b)) yield (element_a, element_b) break </code></pre> <p>But it's flawed, as it can theoretically run forever if the random.choice lines are unlucky.</p> <p>I'm looking to modify the existing generator so it generates the indexes randomly within a fixed amount of time; it is okay to keep track of them, as that is a linear increase in memory cost rather than an exponential one.</p> <p>How could I modify that random index generator to be bounded in time?</p>
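One standard way to bound each draw in constant time is a lazy Fisher–Yates shuffle over the flattened index space 0..len(a)*len(b): a dict records only the positions that have been displaced so far, so memory grows linearly with the number of pairs actually consumed. A sketch (the helper name is made up for illustration):

```python
import random

def lazy_unique_products(a, b):
    """Yield all (a[i], b[j]) pairs in uniform random order.

    Each yield does O(1) work; `swapped` holds at most one entry per pair
    already produced, so memory grows with consumption, not with n upfront.
    """
    n = len(a) * len(b)
    swapped = {}  # sparse view of a partially shuffled range(n)
    for k in range(n):
        # pick a random position in the not-yet-yielded suffix [k, n)
        r = random.randrange(k, n)
        # value currently sitting at position r (defaults to r itself)
        idx = swapped.get(r, r)
        # move the value at position k into slot r so it can still be drawn
        swapped[r] = swapped.get(k, k)
        yield a[idx // len(b)], b[idx % len(b)]

a = [1, 2, 3, 5]
b = ["a", "b", "c", "d"]
pairs = list(lazy_unique_products(a, b))
```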
<python><random>
2023-06-18 15:29:02
7
856
eljiwo
76,501,256
22,009,322
Assign different color to each plt.step line
<p>I have a code which draws lines for the teams according to their tournament position in each game week. Pretty much I managed to make it work, except 2 things:</p> <ol> <li>For some reason a 4th (violet) line is drawn (teams are only 3) which goes from the top to the bottom throughout each game week. As I found out this line is drawn for every iteration (for each team) when plotting the lines. But why?</li> <li>Lines are drawn from x = 0 starting point, thus not aligning with the points (which are drawn correctly). Lines should be drawn from x = 1 starting point as well (according to their Game_week value). Output:</li> </ol> <p><a href="https://i.sstatic.net/1rErg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1rErg.png" alt="enter image description here" /></a></p> <p>What have I missed?</p> <p>Example of the code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.DataFrame([['Team1', 1, 1], ['Team1', 2, 2], ['Team1', 1, 3], ['Team1', 5, 4], ['Team1', 1, 5], ['Team2', 2, 1], ['Team2', 3, 2], ['Team2', 4, 3], ['Team2', 4, 4], ['Team2', 3, 5], ['Team3', 3, 1], ['Team3', 4, 2], ['Team3', 3, 3], ['Team3', 2, 4], ['Team3', 2, 5] ], columns=['Team', 'Position', 'Game_week']) positions = df['Position'] weeks = df['Game_week'] teams = df['Team'].unique() print(teams) # Coordinates: y = positions x = weeks print(y) print(x) fig, ax = plt.subplots() # Labels: plt.xlabel('Game weeks') plt.ylabel('Positions') plt.xlim(-0.2, 5.2) plt.ylim(0.8, 5.2) # Inverting the y-axis: plt.gca().invert_yaxis() # x, y ticks: xi = list(np.unique(x)) yi = list(np.unique(y)) plt.xticks(xi) plt.yticks(yi) # Colors for teams: colors = {'Team1': 'tab:red', 'Team2': 'tab:blue', 'Team3': 'blue'} # Points: plt.scatter(x, y, s=45, c=df['Team'].map(colors), zorder=2) # Lines between points: for i, (team, l) in enumerate(df.groupby('Team', sort=False)): plt.step(list(zip(l['Game_week'], l['Position'])), '-', color=colors[team], linewidth=8, 
alpha=0.2, zorder=1) print('step:', i, '; team:', [team]) print(l) plt.show() plt.close() </code></pre> <p>Thank you!</p>
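A possible explanation (a guess from the call signature, not verified against the exact figure): `plt.step(list(zip(...)), '-')` passes a single N×2 array as the first argument, so Matplotlib plots each *column* as a y-series against the default x of 0..N-1 — the Game_week column (1..5) becomes an extra line that, with the inverted y-axis and three translucent overlapping copies, can read as violet, and the x=0 start comes from that default index. Passing x and y separately, as `plt.scatter` already does, avoids both issues. A sketch on a cut-down version of the frame above:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

df = pd.DataFrame([['Team1', 1, 1], ['Team1', 2, 2], ['Team1', 1, 3],
                   ['Team2', 2, 1], ['Team2', 3, 2], ['Team2', 4, 3],
                   ['Team3', 3, 1], ['Team3', 4, 2], ['Team3', 3, 3]],
                  columns=['Team', 'Position', 'Game_week'])
colors = {'Team1': 'tab:red', 'Team2': 'tab:blue', 'Team3': 'blue'}

fig, ax = plt.subplots()
ax.invert_yaxis()
for team, l in df.groupby('Team', sort=False):
    # x and y as separate sequences: one line per team, anchored at week 1
    ax.step(l['Game_week'], l['Position'], '-', color=colors[team],
            linewidth=8, alpha=0.2, zorder=1)
```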
<python><pandas><matplotlib>
2023-06-18 15:27:12
1
333
muted_buddy
76,501,230
15,520,615
How to display date in PySpark in descending/ascending order in Databricks
<p>I am trying to display the results from the following query in ascending/descending order. However, I'm not sure where to place the DES or ASC clause.</p> <pre><code>df = sql(&quot;&quot;&quot;select * from myview where date = last_day(add_months(date_trunc(&quot;month&quot;, current_date()), -2))&quot;&quot;&quot;) </code></pre> <p>So, I would like to know where to place the des or asc in the above code to get date output shown in descending or ascending order?</p> <p>I tried the following</p> <pre><code>df = sql(&quot;&quot;&quot;select * from myview where date = last_day(add_months(date_trunc(&quot;month&quot;, current_date() DES), -2))&quot;&quot;&quot;) </code></pre> <p>But I got a syntax error</p>
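As a side note on where the clause goes (a sketch; the `sql` helper is assumed from the post): SQL ordering is expressed with an `ORDER BY` clause at the end of the statement — the keyword is `DESC`/`ASC`, not `DES` — and it does not belong inside `current_date()`:

```python
# ORDER BY goes after the WHERE clause; DESC or ASC follows the column name
query = """
SELECT *
FROM myview
WHERE date = last_day(add_months(date_trunc('month', current_date()), -2))
ORDER BY date DESC
"""
# df = sql(query)  # assumed Databricks `sql` helper from the post
```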
<python><pyspark><azure-databricks>
2023-06-18 15:19:59
1
3,011
Patterson
76,501,219
6,687,699
Handle NoSuchKey error with Custom Storage class AWS S3 and Django
<p>I am using Django and I want to handle the NoSuchKey error, which happens when I am redirected to the file link from my Django project.</p> <p>How best can I use the custom storage class to handle this?</p> <pre><code>&quot;&quot;&quot;Custom storage settings.&quot;&quot;&quot; from storages.backends.s3boto3 import S3Boto3Storage from botocore.exceptions import NoSuchKey from django.shortcuts import render class FileStorage(S3Boto3Storage): &quot;&quot;&quot;File storage class.&quot;&quot;&quot; print(&quot; Hi there am AWS&quot;) def open(self, name, mode='rb'): try: return super().open(name, mode) except NoSuchKey: return {'errror': 'Error Happened'} </code></pre> <p>I already defined the class in settings.py as below:</p> <pre><code>DEFAULT_FILE_STORAGE = 'proj.storage.FileStorage' </code></pre> <p>Remember that when you define the following env variables, you can auto-upload files to the specific Image or File fields in the models. Now I want a centralized way of handling the <code>NoSuchKey</code> error so that I can show a custom page or template:</p> <pre><code>AWS_LOCATION = ENV.str('AWS_LOCATION', default='') AWS_ACCESS_KEY_ID = ENV.str('AWS_ACCESS_KEY_ID', default='') AWS_SECRET_ACCESS_KEY = ENV.str('AWS_SECRET_ACCESS_KEY', default='') AWS_S3_REGION_NAME = ENV.str('AWS_S3_REGION_NAME', default='') AWS_S3_SIGNATURE_VERSION = ENV.str('AWS_S3_SIGNATURE_VERSION', default='') AWS_S3_FILE_OVERWRITE = ENV.bool('AWS_S3_FILE_OVERWRITE', default=False) </code></pre> <p>How can I do this?</p> <p>Here's the error I get:</p> <pre><code>&lt;Error&gt; &lt;Code&gt;NoSuchKey&lt;/Code&gt; &lt;Message&gt;The specified key does not exist.&lt;/Message&gt; &lt;Key&gt; files/file-information.html &lt;/Key&gt; &lt;RequestId&gt;XXXXXXXXXXXXXXXXX&lt;/RequestId&gt; &lt;HostId&gt; yOjP+w8917JsB08ZV2Gf+WMUDfIETFZcVvn/fxxxxxxp0PWhi0nIss7qfLaM4gizdWfX1k4vhalE0XMOg= &lt;/HostId&gt; &lt;/Error&gt; </code></pre>
<python><django><amazon-s3><boto3>
2023-06-18 15:17:17
0
4,030
Lutaaya Huzaifah Idris
76,500,981
11,101,156
How to add conversational memory to pandas toolkit agent?
<p>I want to add a <code>ConversationBufferMemory</code> to <code>pandas_dataframe_agent</code>, but so far I have been unsuccessful.</p> <ul> <li>I have tried adding the memory via the constructor: <code>create_pandas_dataframe_agent(llm, df, verbose=True, memory=memory)</code>, which didn't break the code but didn't result in the agent remembering my previous questions.</li> <li>Also, I have tried to add memory into the agent via this piece of code: <code>pd_agent.agent.llm_chain.memory = memory</code>, which resulted in <code>ValueError: One input key expected got ['input', 'agent_scratchpad'] </code></li> </ul> <p>This is my code so far (which doesn't work):</p> <pre><code>llm = ChatOpenAI(temperature=0, model_name=&quot;gpt-4-0613&quot;) memory = ConversationBufferMemory() pd_agent = create_pandas_dataframe_agent(llm, df, verbose=True, memory=memory) #pd_agent.agent.llm_chain.memory = memory #Or if I use this approach the code breaks when calling the .run() methods pd_agent.run(&quot;Look into the data in step 12. Are there any weird patterns? What can we say about this part of the dataset.&quot;) pd_agent.run(&quot;What was my previous question?&quot;) #Agent doesn't remember </code></pre>
<python><openai-api><langchain>
2023-06-18 14:13:00
2
2,152
Jakub Szlaur
76,500,913
9,142,914
Opencv VideoWriter: how to get an output video with the same "rate" as a live inference of the model?
<p>Here is a code to see &quot;live&quot; the inference of a YOLO model on a mp4 video:</p> <pre><code>import cv2 from ultralytics import YOLO model = YOLO('yolov8n.pt') video_path = &quot;path/to/your/video/file.mp4&quot; cap = cv2.VideoCapture(video_path) while cap.isOpened(): success, frame = cap.read() if success: results = model(frame) annotated_frame = results[0].plot() cv2.imshow(&quot;YOLOv8 Inference&quot;, annotated_frame) if cv2.waitKey(1) &amp; 0xFF == ord(&quot;q&quot;): break else: break cap.release() cv2.destroyAllWindows() </code></pre> <p>When I run this above code, it opens VLC and displays the video with boxes etc. and, since the model takes time to make the predictions, it's pretty laggy.</p> <p>I would like to have this result (the laggy video) but <strong>recorded</strong>.</p> <p>Problem is that if I do this:</p> <pre><code>import cv2 from ultralytics import YOLO model = YOLO('yolov8n.pt') video_path = &quot;path/to/your/video/file.mp4&quot; cap = cv2.VideoCapture(video_path) frame_num = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) fourcc = cv2.VideoWriter_fourcc(*'mp4v') video_writer = cv2.VideoWriter('video_opencv.mp4',fourcc,30,(frame_width,frame_height)) while cap.isOpened(): success, frame = cap.read() if success: results = model(frame) annotated_frame = results[0].plot() #cv2.imshow(&quot;YOLOv8 Inference&quot;, annotated_frame) video_writer.write(annotated_frame) if cv2.waitKey(1) &amp; 0xFF == ord(&quot;q&quot;): break else: break cap.release() cv2.destroyAllWindows() </code></pre> <p>which is basically the same but where I record the video instead of simply showing it, then when I read the video, it goes as fast as the original video (without predictions)</p> <p>I hope I am clear.</p> <p>Any solution for this ?</p>
<python><opencv><video><yolo>
2023-06-18 13:58:03
0
688
ailauli69
76,500,689
3,808,573
Why does this 'itertools.product' not run as expected?
<p>I have two lists, a list of numbers <code>list_n</code> and a list of powers <code>list_p</code>. Then I combine these two lists in a list of tuples, <code>list_t</code>.</p> <pre><code>list_n = [2, 3, 5, 7] list_p = [0, 1, 2, 3] list_t = [[(n, p) for p in list_p] for n in list_n] </code></pre> <p><code>list_t</code> is now:</p> <pre><code>list_t = [ [(2, 0), (2, 1), (2, 2), (2, 3)], [(3, 0), (3, 1), (3, 2), (3, 3)], [(5, 0), (5, 1), (5, 2), (5, 3)], [(7, 0), (7, 1), (7, 2), (7, 3)] ] </code></pre> <p>So far so good...</p> <p>In the next step, I try to create a combination list <code>list_c</code>, that looks like:</p> <pre><code>list_c = [ [(2, 0), (3, 0), (5, 0), (7, 0)], [(2, 1), (3, 0), (5, 0), (7, 0)], [(2, 2), (3, 0), (5, 0), (7, 0)], [(2, 3), (3, 0), (5, 0), (7, 0)], [(2, 0), (3, 1), (5, 0), (7, 0)], [(2, 1), (3, 1), (5, 0), (7, 0)], [(2, 2), (3, 1), (5, 0), (7, 0)], [(2, 3), (3, 1), (5, 0), (7, 0)], [(2, 0), (3, 2), (5, 0), (7, 0)], [(2, 1), (3, 2), (5, 0), (7, 0)], [(2, 2), (3, 2), (5, 0), (7, 0)], [(2, 3), (3, 2), (5, 0), (7, 0)], ... ... 
] </code></pre> <p>But I cannot get the expected list when I try the line below:</p> <pre><code>list_c = list(itertools.product(t for t in list_t)) # list_c is now # list_c = [ # ([(2, 0), (2, 1), (2, 2), (2, 3)],) # ([(3, 0), (3, 1), (3, 2), (3, 3)],) # ([(5, 0), (5, 1), (5, 2), (5, 3)],) # ([(7, 0), (7, 1), (7, 2), (7, 3)],) # ] </code></pre> <p>Btw, I can get the expected list if I try with 4 distinct lists:</p> <pre><code>list_2 = [(2, 0), (2, 1), (2, 2), (2, 3)] list_3 = [(3, 0), (3, 1), (3, 2), (3, 3)] list_5 = [(5, 0), (5, 1), (5, 2), (5, 3)] list_7 = [(7, 0), (7, 1), (7, 2), (7, 3)] list_c = list(itertools.product(list_2, list_3, list_5, list_7)) </code></pre> <p>After the last line, list_c is now:</p> <pre><code>list_c = [ ((2, 0), (3, 0), (5, 0), (7, 0)), ((2, 0), (3, 0), (5, 0), (7, 1)), ((2, 0), (3, 0), (5, 0), (7, 2)), ((2, 0), (3, 0), (5, 0), (7, 3)), ((2, 0), (3, 0), (5, 1), (7, 0)), ((2, 0), (3, 0), (5, 1), (7, 1)), ((2, 0), (3, 0), (5, 1), (7, 2)), ((2, 0), (3, 0), (5, 1), (7, 3)), ((2, 0), (3, 0), (5, 2), (7, 0)), ((2, 0), (3, 0), (5, 2), (7, 1)), ... ... ] </code></pre> <p>Can somebody explain how to arrange the line below to get the expected result?</p> <pre><code>list_c = list(itertools.product(t for t in list_t)) </code></pre>
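The generator expression above hands `itertools.product` a single iterable whose items are the four inner lists, so the product has exactly one axis. `product` wants each inner list as a separate positional argument, which `*` unpacking provides — a minimal sketch using the question's data:

```python
import itertools

list_n = [2, 3, 5, 7]
list_p = [0, 1, 2, 3]
list_t = [[(n, p) for p in list_p] for n in list_n]

# product(*list_t) is product(list_t[0], list_t[1], list_t[2], list_t[3])
list_c = list(itertools.product(*list_t))

print(len(list_c))  # 256 combinations (4**4)
print(list_c[0])    # ((2, 0), (3, 0), (5, 0), (7, 0))
```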
<python>
2023-06-18 13:04:34
0
2,432
ssd
76,500,590
20,740,043
Map two dataframes, based on their group/id, with closest values
<p>I have two dataframes as such:</p> <pre><code>#Load the required libraries import pandas as pd import matplotlib.pyplot as plt #Create dataset_1 data_set_1 = {'id': [1, 2, 3, 4, 5, ], 'Available_Salary': [10, 20, 30, 40, 50, ], } #Convert to dataframe_1 df_1 = pd.DataFrame(data_set_1) print(&quot;\n df_1 = \n&quot;,df_1) #Create dataset_2 data_set_2 = {'id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ], 'Expected_Salary': [9, 49, 18, 19, 29, 41, 4, 57, 42, 3, ], } #Convert to dataframe_2 df_2 = pd.DataFrame(data_set_2) print(&quot;\n df_2 = \n&quot;,df_2) </code></pre> <p>Here, visually I can say 'Expected_Salary' 9 (with id=1), 'Expected_Salary' 4 (with id=7) and 'Expected_Salary' 3 (with id=10) are closest to 'Available_Salary' 10 (with id=1).</p> <p>Likewise, 'Expected_Salary' 49 (with id=2) and 'Expected_Salary' 57 (with id=8) are closest to 'Available_Salary' 50 (with id=5), and so on.</p> <p>This can be shown in the image below for better representation:</p> <p><a href="https://i.sstatic.net/IxpsM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxpsM.png" alt="enter image description here" /></a></p> <p>Now, I need to generate new columns 'Salary_from_df_1' and 'id_from_df_1' in df_2 that map to the ids of df_1 with the closest salary.</p> <p>For example, since 'Expected_Salary' 9 (with id=1), 'Expected_Salary' 4 (with id=7) and 'Expected_Salary' 3 (with id=10) are closest to 'Available_Salary' 10 (with id=1), they will have 'Salary_from_df_1' as 10 and 'id_from_df_1' as 1. This looks as follows:</p> <p><a href="https://i.sstatic.net/0XW2e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0XW2e.png" alt="enter image description here" /></a></p> <p>The same logic applies for mapping the other ids of df_2 to df_1.</p> <p>Can somebody please let me know how to achieve this task in Python?</p>
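One idiomatic candidate for this nearest-value mapping — a sketch, with column names chosen to mirror the desired output — is `pandas.merge_asof` with `direction='nearest'`, which requires both frames to be sorted on their join keys:

```python
import pandas as pd

df_1 = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                     'Available_Salary': [10, 20, 30, 40, 50]})
df_2 = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                     'Expected_Salary': [9, 49, 18, 19, 29, 41, 4, 57, 42, 3]})

# Rename df_1's columns to the target names, sort both sides on the join
# keys (merge_asof requires this), then restore df_2's original order.
merged = pd.merge_asof(
    df_2.sort_values('Expected_Salary'),
    df_1.rename(columns={'id': 'id_from_df_1',
                         'Available_Salary': 'Salary_from_df_1'})
        .sort_values('Salary_from_df_1'),
    left_on='Expected_Salary', right_on='Salary_from_df_1',
    direction='nearest',
).sort_values('id').reset_index(drop=True)

print(merged)  # e.g. Expected_Salary 9, 4 and 3 all map to Salary_from_df_1 10, id 1
```

A value sitting exactly halfway between two available salaries would be resolved by `merge_asof`'s own tie-breaking convention; if a specific rule matters, it would need to be enforced explicitly.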
<python><pandas><dataframe><group-by><mapping>
2023-06-18 12:39:35
2
439
NN_Developer
76,500,550
9,859,642
Alternatives to nested loops for dataframes with MultiIndex
<p>I have a large dataframe with multiindex, and I have to do some simple mathematical operations on it, creating a new column. The problem is that it takes a lot of time. For now I use a nested loop for this, but I can't think of any more pythonic solution for this. The code looks like this:</p> <pre><code>for element1 in short_list: for element2 in long_list: df.loc[(element1, element2), ('name1', 'name2')] = abs(df.loc[(element1, element2), ('name3', 'name4')] - df.loc[(element1, element2), ('name5', 'name6')] * another_list[element1]) </code></pre> <p>I tried to search for solutions like using .groupby or other iterators, but either I don't understand how they function, or they don't fit my needs. I'd appreciate any help with this.</p>
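Assuming `another_list` acts as a mapping from the level-0 labels to scalars (a stand-in assumption, since the question does not show its data), the whole nested loop can typically be replaced by one vectorized expression: broadcast the per-group factor across the rows via the first index level, then do the arithmetic column-wise. A sketch on invented data with the same two-level shape:

```python
import pandas as pd

# Hypothetical stand-in data: two level-0 groups, two level-1 entries each.
idx = pd.MultiIndex.from_product([['a', 'b'], [0, 1]])
df = pd.DataFrame({('name3', 'name4'): [10.0, 20.0, 30.0, 40.0],
                   ('name5', 'name6'): [1.0, 2.0, 3.0, 4.0]}, index=idx)
factor = {'a': 2.0, 'b': 10.0}  # plays the role of another_list[element1]

# One row-aligned factor array instead of a lookup inside a nested loop.
scale = df.index.get_level_values(0).map(factor).to_numpy()
df[('name1', 'name2')] = (df[('name3', 'name4')]
                          - df[('name5', 'name6')] * scale).abs()

print(df[('name1', 'name2')].tolist())  # [8.0, 16.0, 0.0, 0.0]
```

Because every row is computed in one pass in C-level code, this usually replaces the per-cell `.loc` assignments, which are the expensive part of the original loop.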
<python><dataframe><loops><iteration>
2023-06-18 12:30:23
1
632
Anavae
76,500,503
14,070,318
How can I unpack typing.TypeVarTuple with a pattern?
<p>How do I get the type <code>tuple[A[int], A[int]]</code> for <code>b.b</code>?</p> <pre class="lang-py prettyprint-override"><code>import typing T = typing.TypeVar('T') Ts = typing.TypeVarTuple('Ts') class A(typing.Generic[T]): a: T class B(typing.Generic[*Ts]): b: tuple[*A[Ts]] b: B[int, int] = B() typing.reveal_type(b.b) # tuple[A[*tuple[int, int]]] is what I got # but I need tuple[A[int], A[int]] </code></pre> <p>I tried something like <code>tuple[*(A[T1] for T1 in Ts)]</code>, but it doesn't work either.</p> <p>UPD: I want it to behave like in C++:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;iostream&gt; #include &lt;tuple&gt; #include &lt;typeinfo&gt; template&lt;typename T&gt; struct A { T a; }; template&lt;typename... Ts&gt; struct B { std::tuple&lt;A&lt;Ts&gt;...&gt; b; }; int main() { B&lt;int, int&gt; b; std::cout &lt;&lt; typeid(b.b).name() &lt;&lt; std::endl; // std::__1::tuple&lt;A&lt;int&gt;, A&lt;int&gt;&gt; return 0; } </code></pre>
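For context: PEP 646 provides no operation that maps a type constructor over a `TypeVarTuple`, so `tuple[*A[Ts]]` cannot expand element-wise the way a C++ parameter pack does — this appears to be an expressiveness gap in the type system rather than a syntax problem. A hedged workaround sketch (the class name `B2` is invented here) is to fix the arity and use plain `TypeVar`s, writing each `A[...]` out by hand:

```python
import typing

T1 = typing.TypeVar('T1')
T2 = typing.TypeVar('T2')

class A(typing.Generic[T1]):
    a: T1

# Fixed-arity fallback: each element type is spelled out explicitly, so a
# checker sees tuple[A[T1], A[T2]] directly — i.e. tuple[A[int], A[int]]
# for B2[int, int]. The cost is one class definition per arity.
class B2(typing.Generic[T1, T2]):
    b: tuple[A[T1], A[T2]]

print(typing.get_type_hints(B2)['b'])
```

This trades the variadic generality of `B` for the per-element structure the question asks for; overloads or code generation could cover several arities, but nothing in current typing expresses the general `A<Ts>...` expansion.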
<python><python-typing>
2023-06-18 12:19:50
1
486
Be3y4uu_K0T
76,500,476
7,658,051
plt.plot(...) doesn't show a window. What am I missing?
<p>I am practicing with OpenCV in VS Code.</p> <p>I am used to displaying images using the following structure:</p> <pre><code>import cv2 import numpy as np import matplotlib.pyplot as plt puppy = cv2.imread('./DATA/00-puppy.jpg') horse = cv2.imread('./DATA/horse.jpg') rainbow = cv2.imread('./DATA/rainbow.jpg') # &quot;inplace&quot; operations on these images such as # puppy = cv2.cvtColor(puppy, cv2.COLOR_BGR2RGB) # etc... img_to_show_1 = puppy img_to_show_2 = horse img_to_show_3 = rainbow # ---&gt; beginning of the &quot;while True ... - if 0xFF == 27: break&quot; structure cv2.namedWindow(winname='window_1') cv2.namedWindow(winname='window_2') cv2.namedWindow(winname='window_3') while True: cv2.imshow('window_1', img_to_show_1) cv2.imshow('window_2', img_to_show_2) cv2.imshow('window_3', img_to_show_3) if cv2.waitKey(1) &amp; 0xFF == 27: # stop loop when 'esc' key is pressed break cv2.destroyAllWindows() # ---&gt; end of the &quot;while True ... - if 0xFF == 27: break&quot; structure </code></pre> <p>Now I have calculated the histogram of the values of an image stored in the <code>puppy</code> variable:</p> <pre><code>hist_values = cv2.calcHist([puppy], channels=[0], mask=None, histSize=[256], ranges=[0,256]) </code></pre> <p>In Jupyter notebooks, this can be shown by simply doing:</p> <pre><code>plt.plot(hist_values) </code></pre> <p>But in VS Code, this does not show anything.</p> <p>Is there a way to show this image in VS Code without installing any other graphical extension?</p> <p>Is there a way to exploit the same &quot;while True ... - if 0xFF == 27: break&quot; structure, so that this image also disappears with the others when I press the 'esc' key?</p>
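For reference, the usual explanation (standard Matplotlib behavior, not specific to this setup): outside Jupyter's inline backend, `plt.plot` only builds the figure in memory; a window appears only when `plt.show()` is called, and it then blocks until the window is closed. A minimal sketch — the `Agg` line exists solely so the snippet also runs headless, and should be dropped when running in VS Code:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch; omit in VS Code
import matplotlib.pyplot as plt
import numpy as np

hist_values = np.arange(256)  # stand-in for the cv2.calcHist output
fig, ax = plt.subplots()
ax.plot(hist_values)
plt.show()  # in a script this is what opens the window (a no-op under Agg)
```

For the second part of the question, one possible direction (an assumption about the intended workflow, not tested here) is to render the figure to an image array via `fig.canvas.draw()` and display that array with `cv2.imshow` next to the other windows, so the same `waitKey`/'esc' loop closes it with the rest.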
<python><matplotlib><visual-studio-code><jupyter-notebook>
2023-06-18 12:12:41
1
4,389
Tms91