| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,214,347
| 8,849,071
|
Can I run an ORM module import in Django without setting up a database?
|
<p>In our application (a big monolith), we have a lot of "unit tests" that inherit from Django's <code>TestCase</code>. They look something like this:</p>
<pre><code>from django.test import TestCase

class SomeTestCase(TestCase):
    def test_something(self):
        self.assertTrue(1)
</code></pre>
<p>Running this simple test in our codebase takes around 27 seconds. As you can see, this is a really high number given how simple the code being run is. However, if we instead inherit from the <code>TestCase</code> in the <code>unittest</code> module, like this:</p>
<pre><code>from unittest import TestCase

class SomeTestCase(TestCase):
    def test_something(self):
        self.assertTrue(1)
</code></pre>
<p>The time needed goes down to 4 seconds. That is still a little high, but keep in mind that we are using PyCharm connected to a Docker Compose environment, so there is some overhead on that side.</p>
<p>So what we want to achieve is to be able to run tests inheriting just from <code>unittest</code> where we can, and inherit from Django's <code>TestCase</code> where we need access to the ORM.</p>
<p>The main problem we have faced is imports of models. Every time we try to run a test file inheriting just from <code>unittest</code>, there is an import of a Django model somewhere, which triggers the following error:</p>
<pre><code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>You may be thinking: if the test imports a model, then the code being tested must need a model, so you need the database anyway. Well, this is not entirely true. For instance, when we mock a service in the application layer, and that service contains a repository importing a model, the repository is mocked; but we cannot mock the repository without importing it, which also imports the ORM dependency.</p>
<p>So, is there any way to change this behaviour? Have you faced similar problems? None of the posts I have read about speeding up Django tests cover this problem, so maybe we are doing something weird that the rest of the people are not doing.</p>
<p>Ideally, we would only import from django TestCase when we are testing repositories or data sources, where we are coding an integration test between our code and the ORM.</p>
<h3>Update</h3>
<p>Some people suggested carefully designing our system so that our logic does not have to import ORM code, but to be honest, I have thought about this and have not found any way of doing it. Consider this example:</p>
<pre><code>class SomeService:
    def __init__(self):
        self.repository = Repository()
</code></pre>
<p>and then the repository would be something like this:</p>
<pre><code>from models import SomeModel

class Repository:
    pass
</code></pre>
<p>I cannot think of any "good" way of inverting this import dependency. And we will have the same problem in our controllers when we import the services.</p>
<p>Nonetheless, I have a couple of options:</p>
<h4>Option 1: local imports</h4>
<pre><code>class Repository:
    def __init__(self):
        from models import SomeModel  # local import, deferred until instantiation
        self.model = SomeModel
</code></pre>
<p>I don't like this option because it results in pretty weird code.</p>
<h4>Option 2: Use Dependency Injection</h4>
<pre><code>class SomeService(SomeServiceInterface):
    def __init__(self, repository: RepositoryInterface):
        self.repository = repository
</code></pre>
<p>And the repository:</p>
<pre><code>from models import SomeModel

class Repository(RepositoryInterface):
    pass
</code></pre>
<p>In a static configuration file, we could define the mapping between interfaces, like this:</p>
<pre><code>di_register = {
    "import path to interface": "import path to implementation"
}
</code></pre>
<p>Take into account that the use of strings is quite important to prevent Python from actually importing the modules (which would trigger the ORM problem). Now we just need a class that injects dependencies automatically by inspecting the constructor signature (which we have implemented successfully, but I will omit the code because it is not needed for this question).</p>
<p>If we already have this solution, why are we looking for another, you may ask? Because some team members do not like the idea of creating an interface for every class, then registering it in the configuration file and declaring all dependencies in the constructor signature. Honestly, I like this approach, but given the opinion of other team members, I am looking for alternative solutions to this problem.</p>
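The string-based registry described above can be sketched without any Django dependency. This is a hedged, illustrative sketch (the names <code>di_register</code>, <code>resolve</code>, and <code>build</code> are hypothetical, not from the question's actual implementation): implementations are registered as import-path strings, so nothing is imported until resolution time, and the container fills constructor parameters by inspecting annotations. The test wiring maps a stdlib interface to a stdlib class purely to keep the sketch self-contained.

```python
import importlib
import inspect
from collections.abc import MutableMapping

# Interface path -> implementation path, as strings: no module is imported yet.
di_register = {
    # In the real app this would be e.g. "app.interfaces.RepositoryInterface":
    # "app.repositories.Repository". Stdlib stand-ins keep this sketch runnable.
    "collections.abc.MutableMapping": "collections.OrderedDict",
}

def resolve(interface_path: str):
    """Import (only now) and return the implementation registered for an interface."""
    impl_path = di_register[interface_path]
    module_path, _, class_name = impl_path.rpartition(".")
    module = importlib.import_module(module_path)  # deferred import happens here
    return getattr(module, class_name)

def build(cls):
    """Instantiate cls, resolving annotated constructor parameters from the registry."""
    kwargs = {}
    for name, param in inspect.signature(cls.__init__).parameters.items():
        ann = param.annotation
        if ann is inspect.Parameter.empty:
            continue  # e.g. `self` has no annotation
        key = f"{ann.__module__}.{ann.__qualname__}"
        if key in di_register:
            kwargs[name] = resolve(key)()
    return cls(**kwargs)

class SomeService:
    def __init__(self, repository: MutableMapping):
        self.repository = repository

svc = build(SomeService)
```

With real Django models, the deferred import means the ORM module is only loaded when a test actually resolves the repository, so pure-<code>unittest</code> tests that mock the interface never touch it.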
|
<python><django><unit-testing><python-unittest>
|
2023-05-10 01:26:39
| 1
| 2,163
|
Antonio Gamiz Delgado
|
76,214,242
| 8,281,276
|
How does Flask flash message work and why does it require a cookie?
|
<p>Flask, a Python web framework, has functionality to flash messages. It requires access to a signed cookie, but I'm not sure why. The documentation states the quote below. When does it store the message, and how? Does it need a cookie to identify which user is accessing the site?</p>
<blockquote>
<p>The flashing system basically makes it possible to record a message at the end of a request and access it on the next (and only the next) request.</p>
</blockquote>
<ul>
<li>Flask document for message flashing<br />
<a href="https://flask.palletsprojects.com/en/2.3.x/quickstart/#message-flashing" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.3.x/quickstart/#message-flashing</a></li>
</ul>
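The mechanism the quoted documentation describes can be sketched without Flask at all. This is a hedged, framework-free stand-in (a plain dict plays the role of Flask's cookie-backed <code>session</code>; the key name <code>"_flashes"</code> mirrors what Flask uses internally, but treat it as an assumption): a flashed message is appended to the session, and reading pops it, so it survives exactly one extra request.

```python
# Plain dict standing in for Flask's session, which Flask serializes into a
# signed cookie sent to (and returned by) the client's browser.
session = {}

def flash(message):
    # Request 1: record the message in the session.
    session.setdefault("_flashes", []).append(message)

def get_flashed_messages():
    # Request 2: read AND remove the messages, so a later request sees nothing.
    return session.pop("_flashes", [])
```

Because the default session lives in a cookie on the client, the cookie is what carries the pending messages from one request to the next and ties them to the right user; it is signed so the client cannot tamper with them.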
|
<python><flask>
|
2023-05-10 00:55:24
| 0
| 1,676
|
Wt.N
|
76,214,137
| 240,443
|
Homebrew Gstreamer, Python and video
|
<p>I am trying to go through the Gstreamer tutorial, but I got stuck on the first two already (<a href="https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=python" rel="nofollow noreferrer">1st</a>, <a href="https://gstreamer.freedesktop.org/documentation/tutorials/basic/concepts.html?gi-language=python" rel="nofollow noreferrer">2nd</a>). Here is the copy of the 2nd lesson's code, for reference:</p>
<pre><code>#!/usr/bin/env python3
import sys
import gi
import logging

gi.require_version("GLib", "2.0")
gi.require_version("GObject", "2.0")
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib, GObject

logging.basicConfig(level=logging.DEBUG, format="[%(name)s] [%(levelname)8s] - %(message)s")
logger = logging.getLogger(__name__)

# Initialize GStreamer
Gst.init(sys.argv[1:])

# Create the elements
source = Gst.ElementFactory.make("videotestsrc", "source")
sink = Gst.ElementFactory.make("autovideosink", "sink")

# Create the empty pipeline
pipeline = Gst.Pipeline.new("test-pipeline")

if not pipeline or not source or not sink:
    logger.error("Not all elements could be created.")
    sys.exit(1)

# Build the pipeline
pipeline.add(source, sink)
if not source.link(sink):
    logger.error("Elements could not be linked.")
    sys.exit(1)

# Modify the source's properties
source.props.pattern = 0
# Can alternatively be done using `source.set_property("pattern", 0)`
# or using `Gst.util_set_object_arg(source, "pattern", 0)`

# Start playing
ret = pipeline.set_state(Gst.State.PLAYING)
if ret == Gst.StateChangeReturn.FAILURE:
    logger.error("Unable to set the pipeline to the playing state.")
    sys.exit(1)

# Wait for EOS or error
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)

# Parse message
if msg:
    if msg.type == Gst.MessageType.ERROR:
        err, debug_info = msg.parse_error()
        logger.error(f"Error received from element {msg.src.get_name()}: {err.message}")
        logger.error(f"Debugging information: {debug_info if debug_info else 'none'}")
    elif msg.type == Gst.MessageType.EOS:
        logger.info("End-Of-Stream reached.")
    else:
        # This should not happen as we only asked for ERRORs and EOS
        logger.error("Unexpected message received.")

pipeline.set_state(Gst.State.NULL)
</code></pre>
<p>If I run this code, the Python's rocket icon starts jumping on the Dock, but no video is shown. If I launch it with <code>GST_DEBUG=5 python3 02.py</code>, I can see the things happening and time ticking, just no video output.</p>
<p>When I run the 1st lesson's code, which creates a <code>playbin</code>, there is no Python rocket on the Dock; I hear the audio playing, but again, no video.</p>
<p>If I change the 2nd lesson's code (copied above) and make the <code>videotestsrc</code> and <code>autovideosink</code> into <code>audiotestsrc</code> and <code>autoaudiosink</code> (and comment out the <code>pattern</code> parameter), again, it works — I can hear the beep.</p>
<p>The Gstreamer command line tools work correctly, showing a video window for the equivalent pipelines:</p>
<pre><code>gst-launch-1.0 playbin uri=https://gstreamer.freedesktop.org/data/media/sintel_trailer-480p.webm
gst-launch-1.0 videotestsrc pattern=0 ! autovideosink
</code></pre>
<p>Any ideas why the Python version does not work correctly?</p>
<hr />
<p>I have installed GStreamer using <code>brew install gstreamer</code> (version: <code>stable 1.22.2 (bottled), HEAD</code>), with Python 3.11.3, on macOS Ventura 13.2.1 (22D68).</p>
|
<python><macos><gstreamer><python-gstreamer>
|
2023-05-10 00:19:03
| 1
| 199,494
|
Amadan
|
76,214,066
| 3,388,962
|
How to plot time series data within a fixed period?
|
<p>Is it possible to plot time series data within a fixed time period?</p>
<p>Assume I have the following sample dataset:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
date_rng = pd.date_range(start="1.1.2023", end="10.1.2023", freq="15T")
df = pd.DataFrame(date_rng, columns=["datetime"])
df["value"] = pd.Series(range(len(date_rng)))
</code></pre>
<p>How to plot this data for a fixed period (e.g. from 6:00 to 6:00 the next day, or for every 8 hours) using seaborn? This should result in a plot with one line (for the mean curve) and an error band, similar to the example shown <a href="https://seaborn.pydata.org/examples/errorband_lineplots.html" rel="nofollow noreferrer">here</a>.</p>
|
<python><plot><time-series><seaborn>
|
2023-05-09 23:58:39
| 1
| 9,959
|
normanius
|
76,213,952
| 2,217,612
|
In Python, how can I create an in-memory Kubernetes pod object by reading YAML?
|
<p>I have a Kubernetes pod YAML file. I want to read this file and create a Kubernetes pod object in memory, run some additional checks on this object and then submit this object to the K8s cluster.</p>
<p>I see some options such as <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/utils/create_from_yaml.py" rel="nofollow noreferrer">create_from_yaml</a>. But, this directly creates the object in K8s. It doesn't give me a pod object to work with BEFORE actually running it.</p>
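One hedged approach (an assumption about workflow, not a confirmed client-library feature): parse the manifest into a plain dict with PyYAML, run the checks on the dict, and only then submit it, e.g. via <code>kubernetes.utils.create_from_dict</code> or <code>CoreV1Api.create_namespaced_pod</code>. The manifest below is a made-up example.

```python
import yaml

pod_manifest = """
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
"""

# In-memory object: a nested dict/list structure mirroring the YAML.
pod = yaml.safe_load(pod_manifest)

# Example pre-submission check: every container must pin an image tag.
for container in pod["spec"]["containers"]:
    assert ":" in container["image"], f"unpinned image: {container['image']}"
```

Once the checks pass, the same dict can be handed to the cluster, so nothing is created in Kubernetes until you decide to submit.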
|
<python><kubernetes>
|
2023-05-09 23:23:30
| 0
| 327
|
Shri Javadekar
|
76,213,942
| 396,014
|
numpy: how to smooth x,y,z values in columns of an array?
|
<p>I have accelerometer data (to keep it simple, just acceleration) loaded into a numpy array like so (toy data; columns are x, y, z):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.interpolate import make_interp_spline, BSpline
idx = [1,2,3,4]
idx = np.array(idx,dtype=int)
accel = [[-0.7437,0.1118,-0.5367],
[-0.5471,0.0062,-0.6338],
[-0.7437,0.1216,-0.5255],
[-0.4437,0.3216,-0.3255],
]
accel = np.array(accel,dtype=float)
</code></pre>
<p>I want to smooth these values. I have found an example <a href="https://www.statology.org/matplotlib-smooth-curve/" rel="nofollow noreferrer">here</a> of how to do it with a B-spline. Trouble is, the example uses a 1D array. I'm not yet good enough at Python to figure out how to reproduce this for a 2D array. I can extract a column slice for x and smooth it like so:</p>
<pre><code>x = accel[:,0]
idxnew = np.linspace(idx.min(), idx.max(), 200)
spl = make_interp_spline(idx, x, k=3)
x_smooth = spl(idxnew)
</code></pre>
<p>But I'm not sure how to write that back into the first column of a new, longer array. I tried</p>
<pre><code>accel_sm = []
accel_sm = np.array(idx,dtype=float)
np.hstack(accel_sm,x_smooth)
</code></pre>
<p>But that throws "TypeError: expected bytes-like object, not str". I'm a little stuck. Can anyone point me in the right direction?</p>
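A hedged sketch of the column-wise idea: interpolate each column separately and reassemble them with <code>np.column_stack</code>. To keep the sketch dependency-light it uses linear <code>np.interp</code> in place of the question's cubic <code>make_interp_spline</code>; with scipy available, the list comprehension would call <code>make_interp_spline(idx, accel[:, c], k=3)(idxnew)</code> per column instead.

```python
import numpy as np

idx = np.array([1, 2, 3, 4], dtype=float)
accel = np.array([[-0.7437, 0.1118, -0.5367],
                  [-0.5471, 0.0062, -0.6338],
                  [-0.7437, 0.1216, -0.5255],
                  [-0.4437, 0.3216, -0.3255]])

idxnew = np.linspace(idx.min(), idx.max(), 200)

# Interpolate each of the 3 columns over the finer index, then stack them
# side by side into a new (200, 3) array.
accel_sm = np.column_stack(
    [np.interp(idxnew, idx, accel[:, c]) for c in range(accel.shape[1])]
)
```

The key point is that <code>np.hstack</code>/<code>np.vstack</code> take a single tuple/list of arrays, and here the smoothed columns all share the length of <code>idxnew</code>, so stacking them column-wise is well defined.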
|
<python><arrays><numpy><spline>
|
2023-05-09 23:21:17
| 2
| 1,001
|
Steve
|
76,213,873
| 6,664,872
|
How to finetune a zero-shot model for text classification
|
<p>I need a model that can classify text for an unknown number of classes (i.e. the number might grow over time). The <a href="https://arxiv.org/abs/1909.00161" rel="nofollow noreferrer">entailment approach</a> for zero-shot text classification seems to be the solution to my problem, but the model I tried, <a href="https://huggingface.co/facebook/bart-large-mnli" rel="nofollow noreferrer">facebook/bart-large-mnli</a>, doesn't perform well on my annotated data. Is there a way to fine-tune it without losing the robustness of the model?</p>
<p>My dataset looks like this:</p>
<pre><code># http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
World, "Afghan Army Dispatched to Calm Violence KABUL, Afghanistan - Government troops intervened in Afghanistan's latest outbreak of deadly fighting between warlords, flying from the capital to the far west on U.S. and NATO airplanes to retake an air base contested in the violence, officials said Sunday..."
Sports, "Johnson Helps D-Backs End Nine-Game Slide (AP) AP - Randy Johnson took a four-hitter into the ninth inning to help the Arizona Diamondbacks end a nine-game losing streak Sunday, beating Steve Trachsel and the New York Mets 2-0."
Business, "Retailers Vie for Back-To-School Buyers (Reuters) Reuters - Apparel retailers are hoping their\back-to-school fashions will make the grade among\style-conscious teens and young adults this fall, but it could\be a tough sell, with students and parents keeping a tighter\hold on their wallets."
</code></pre>
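For the entailment approach, the dataset above first has to be reformulated as NLI pairs. This is a hedged sketch of that data-preparation step only (the hypothesis template is an assumption; the actual fine-tuning loop with transformers is omitted): each labeled example becomes one (premise, hypothesis) pair per candidate label, tagged "entailment" for the true label and "contradiction" otherwise.

```python
LABELS = ["World", "Sports", "Business"]

def to_entailment_pairs(text, true_label, labels=LABELS,
                        template="This example is {}."):
    """Turn one labeled example into one NLI pair per candidate label."""
    pairs = []
    for label in labels:
        pairs.append({
            "premise": text,
            "hypothesis": template.format(label),
            "label": "entailment" if label == true_label else "contradiction",
        })
    return pairs

pairs = to_entailment_pairs(
    "Johnson Helps D-Backs End Nine-Game Slide ...", "Sports"
)
```

Because new classes only add new hypothesis strings, this formulation keeps the open-ended-label property while still letting you fine-tune on the annotated pairs.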
<p>P.S.: This is an artificial question that was created because this topic came up in the comment section of this <a href="https://stackoverflow.com/q/76170604/6664872">post</a> which is related to this <a href="https://stackoverflow.com/questions/76099140/hugging-face-transformers-bart-cuda-error-cublas-status-not-initialize">post</a>.</p>
|
<python><nlp><huggingface-transformers>
|
2023-05-09 23:01:11
| 1
| 19,975
|
cronoik
|
76,213,819
| 15,355,758
|
AttributeError: module 'ray.rllib.algorithms.ppo' has no attribute 'DEFAULT_CONFIG'
|
<p>When running software on Python 3.8.16, I had to change "ray.rllib.agents.ppo" to "ray.rllib.algorithms.ppo". Now it throws:
AttributeError: module 'ray.rllib.algorithms.ppo' has no attribute 'DEFAULT_CONFIG'.</p>
<p>How can this issue be approached?
Thank you.</p>
|
<python><attributeerror><ray>
|
2023-05-09 22:44:46
| 1
| 351
|
Fusen
|
76,213,817
| 2,302,262
|
Split a python class over multiple files
|
<p>There is a situation I constantly run into when refactoring python classes, and I am starting to wonder if I'm using some antipattern or if I'm missing some fundamental trick. The problem arises when I try to split large classes over multiple files, causing dependency pains.</p>
<p>Here is a small contrived example. Imagine an <code>Adult</code> class with very many methods, which have been grouped and extracted into separate classes (<code>AdultFinances</code> and <code>AdultFamily</code>), which are added back in through composition. There is also a function <code>age_1_year</code> that returns an <code>Adult</code> instance, which is not added as a class method itself, but used in one (<code>age_somewhat</code>).</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from dataclasses import dataclass

class AdultFinances:
    def earn_money(self: Adult, amount: int) -> Adult:
        return Adult(self.name, self.age, self.money_in_the_bank + amount)

class AdultFamily:
    def have_kids(self: Adult, number: int) -> list[Kid]:
        return [Kid(f"child {i+1} of {self.name}") for i in range(number)]

class KidBiology:
    def grow_up(self: Kid) -> Adult:
        return Adult(self.name, 18, 0.0)

@dataclass(frozen=True)
class Adult(AdultFinances, AdultFamily):  # composition
    name: str
    age: int
    money_in_the_bank: int

    # Some functionality is added as a method directly at the class.
    def age_somewhat(self, years: int) -> Adult:
        adult = self
        for _ in range(years):
            adult = age_1_year(adult)
        return adult

@dataclass(frozen=True)
class Kid(KidBiology):  # composition
    name: str

# Seldom-used functions are not attached as methods.
def age_1_year(adult: Adult) -> Adult:
    return Adult(adult.name, adult.age + 1, adult.money_in_the_bank)

me = Adult("Rudi", 42, -400).earn_money(100).age_somewhat(1)
my_kids = me.have_kids(2)
# me = Adult(name='Rudi', age=43, money_in_the_bank=-300)
# my_kids = [Kid(name='child 1 of Rudi'), Kid(name='child 2 of Rudi')]
</code></pre>
<p>This works.</p>
<p>I now want to put <code>AdultFinances</code>, <code>AdultFamily</code> and <code>KidBiology</code> into their own file, as well as <code>age_1_year</code>, like so:</p>
<pre class="lang-py prettyprint-override"><code># mixins.py
from main import Adult, Kid

class AdultFinances:
    # ...

class AdultFamily:
    # ...

class KidBiology:
    # ...


# tools.py
from main import Adult

def age_1_year(adult: Adult) -> Adult:
    # ...


# main.py
from dataclasses import dataclass
from mixins import AdultFamily, AdultFinances, KidBiology
from tools import age_1_year

@dataclass(frozen=True)
class Adult(AdultFinances, AdultFamily):
    # ...

@dataclass(frozen=True)
class Kid(KidBiology):
    # ...
</code></pre>
<p>This is where things go off the rails.</p>
<p>This setup causes an <code>ImportError</code> due to circular imports, and it's clear to see, why: <code>main.py</code> imports from <code>mixins.py</code> and <code>tools.py</code>, and these both import from <code>main.py</code>.</p>
<p><strong>What is the best-practice way of dealing with this problem?</strong></p>
<p>Remarks:</p>
<ul>
<li>This is a contrived example. I'm aware there are other solutions in this case, e.g. not use <code>age_1_year</code> in <code>age_somewhat</code>. But that's not the point. I'd like to know how to refactor exactly when there are these dependencies.</li>
</ul>
<hr />
<h1>Possible solutions</h1>
<p>In this particular case, I <em>can</em> think of a few solutions, which I'll detail below. <strong>However, they are cumbersome and/or do not scale. I'd like to find a solution that also works in more general cases and does not break my brain.</strong></p>
<h2>Solution 1: move (some) imports to bottom</h2>
<p>The Big Idea: when we have a <code>import a</code> line in <code>b.py</code> and a <code>import b</code> line in <code>a.py</code>, we can move one of the imports to the final line of its file to avoid circular import errors.</p>
<p><strong>main vs mixins</strong></p>
<p>The <code>mixins</code> classes are used at the top-level of <code>main</code>. It must therefore be imported into <code>main</code> before the rest of the code in that file is run, so in <code>main</code>, we must keep the <code>from mixins import ...</code> statement at the top.</p>
<p>Therefore, in <code>mixins</code>, we must move the <code>from main import ...</code> statement down.</p>
<p><strong>main vs tools</strong></p>
<p>The <code>tools</code> function is not used at the top-level of <code>main</code>, but only when calling the method <code>age_somewhat</code>. Thus, in <code>main</code>, we <em>can</em> move the <code>from tools import ...</code> statement down.</p>
<p>Consequently, in <code>tools</code>, we can leave the <code>from main import ...</code> statement at the top, unchanged.</p>
<p>The resulting files:</p>
<pre class="lang-py prettyprint-override"><code># mixins.py
# (type-checking-part omitted)

class AdultFinances:
    # ...

class AdultFamily:
    # ...

class KidBiology:
    # ...

from main import Adult, Kid  # <- this is now at the bottom


# tools.py
from main import Adult  # <- still unchanged at the top

def age_1_year(adult: Adult) -> Adult:
    # ...


# main.py
from __future__ import annotations
from dataclasses import dataclass
from mixins import AdultFamily, AdultFinances, KidBiology  # <- still unchanged at the top

@dataclass(frozen=True)
class Adult(AdultFinances, AdultFamily):
    # ...

@dataclass(frozen=True)
class Kid(KidBiology):
    # ...

from tools import age_1_year  # <- this is now at the bottom
</code></pre>
<p>(Confusingly, a few lines needed to be added also to the top of the file - this is for type annotations, and not the subject of this question, so please ignore. For the exact lines, see further below. But again: not part of this question, so ignore, please.)</p>
<p>So: <em>where in <code>file_a</code> we must place the <code>import file_b</code> line depends on where in <code>file_b</code> we have the <code>import file_a</code> line.</em> I hope it is clear that this is a nightmare to understand, maintain, and refactor. When there are more mixins or tools, each in their own file, it only becomes more complicated, especially if they depend on each other.</p>
<h2>Solution 2: add references to other classes to make importing superfluous</h2>
<p>The Big Idea: if we can get a reference to <code>Kid</code> and <code>Adult</code> from within the <code>mixins</code> and <code>tools</code> files via some other means, we can remove the imports in those files.</p>
<ul>
<li><p>To get a reference to the class of a given instance is easy: with <code>self.__class__</code> or <code>type(self)</code>.</p>
</li>
<li><p>To get a reference to <em>another</em> class is not so easy. The only option we have (I think) is to sneakily add a reference to <code>Kid</code> from within <code>Adult</code>, and vice versa, like so:</p>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code># main.py
# ...

@dataclass(frozen=True)
class Adult(AdultFinances, AdultFamily):
    _kid_constructor = staticmethod(lambda: Kid)  # <- new. Must be a function if
                                                  # Kid is defined further below.
    # ...

@dataclass(frozen=True)
class Kid(KidBiology):
    _adult_constructor = staticmethod(lambda: Adult)  # <- new. Also made a function,
                                                      # for consistency.
    # ...
</code></pre>
<p>We can now use <code>adult._kid_constructor()(...)</code> to create a <code>Kid</code> object from within an <code>Adult</code> instance. And vice versa.</p>
<p>Here is the full solution, in which I've gone all-in on this idea and made <em>all</em> constructors available to <em>all</em> classes:</p>
<pre class="lang-py prettyprint-override"><code># mixins.py
# (type-checking-part omitted)

class AdultFinances:
    def earn_money(self: Adult, amount: int) -> Adult:
        Adult = self.__class__  # <-- new. Alternatively:
                                # self._constructors('Adult')
        return Adult(self.name, self.age, self.money_in_the_bank + amount)

class AdultFamily:
    def have_kids(self: Adult, number: int) -> list[Kid]:
        Kid = self._constructors('Kid')  # <-- new
        return [Kid(f"child {i+1} of {self.name}") for i in range(number)]

class KidBiology:
    def grow_up(self: Kid) -> Adult:
        Adult = self._constructors('Adult')  # <-- new
        return Adult(self.name, 18, 0.0)


# tools.py
# (type-checking-part omitted)

def age_1_year(adult: Adult) -> Adult:
    Adult = adult.__class__  # <-- new
    return Adult(adult.name, adult.age + 1, adult.money_in_the_bank)


# main.py
from dataclasses import dataclass
from mixins import AdultFamily, AdultFinances, KidBiology
from tools import age_1_year

def constructors(classname):
    return {"Kid": Kid, "Adult": Adult}[classname]

@dataclass(frozen=True)
class Adult(AdultFinances, AdultFamily):
    _constructors = staticmethod(constructors)  # <-- new
    # ...

@dataclass(frozen=True)
class Kid(KidBiology):
    _constructors = staticmethod(constructors)  # <-- new
    # ...
</code></pre>
<p>I think this is slightly better than solution 1, but it's still very convoluted. Passing references back up to the parent classes by attaching them as a class method that are then accessed through the instances is really quite a mess, I think.</p>
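The <code>self.__class__</code> half of this idea can be demonstrated in a minimal, runnable form. This single-file sketch (standing in for the split <code>mixins.py</code>/<code>main.py</code> modules) shows why the mixin never needs to import <code>Adult</code> at all: the class is recovered from the instance at call time, so moving the mixin into another file cannot create an import cycle.

```python
from dataclasses import dataclass

class AdultFinances:
    # Lives in "mixins.py" in the split layout; note: no import of Adult.
    def earn_money(self, amount):
        Adult = self.__class__  # resolved at call time from the instance
        return Adult(self.name, self.age, self.money_in_the_bank + amount)

@dataclass(frozen=True)
class Adult(AdultFinances):
    # Lives in "main.py" in the split layout.
    name: str
    age: int
    money_in_the_bank: int

me = Adult("Rudi", 42, -400).earn_money(100)
```

The trick only covers the "same class as self" case; for cross-class construction (<code>Adult</code> creating <code>Kid</code>) some registry like <code>_constructors</code> above is still needed.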
<hr />
<hr />
<p>(Irrelevant and confusing to understanding the question, but needed if you want to play along at home: the line <code># (type-checking-part omitted)</code> is short for</p>
<pre><code>from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from main import Adult, Kid
</code></pre>
<p>.)</p>
|
<python><class>
|
2023-05-09 22:43:16
| 2
| 2,294
|
ElRudi
|
76,213,791
| 1,096,900
|
OpenTelemetry Python Metric Counter not picked up by Prometheus
|
<p>I have a simple Python app that I'm trying to use to learn about instrumentation. This is a short-lived app that I run with <code>python3 app.py</code>; my goal is to keep track, in a metric counter, of how many times I run this app (at least, that is the goal for this simple app). But it seems like the counter always gets reset to 0 whenever the app stops and I rerun it. So, 2 questions:</p>
<ul>
<li>Is instrumentation mainly used for continuous, long-running apps like web apps? Will this work for a CLI application that stops once it has finished its task?</li>
<li>Since this app ends really quickly, it does not seem like Prometheus picks up the data. I need to add <code>time.sleep(5)</code> before it shows up, and even then it's hit and miss. Without the sleep, it won't pick it up at all. I tried changing the <code>prometheus.yml</code> scrape interval:</li>
</ul>
<pre><code>global:
  scrape_interval: 1s # or 0s1ms; neither seems to work properly
</code></pre>
<p>This is the code python code <code>app.py</code></p>
<pre><code>from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from prometheus_client import start_http_server
import time

# Service name is required for most backends
resource = Resource(attributes={
    SERVICE_NAME: "my-python-app-service"
})

# Start Prometheus client
start_http_server(port=8000, addr="localhost")

# Initialize PrometheusMetricReader which pulls metrics from the SDK
# on-demand to respond to scrape requests
reader = PrometheusMetricReader()
provider = MeterProvider(resource=resource, metric_readers=[reader])
metrics.set_meter_provider(provider)
meter = provider.get_meter("your-meter-name")

def counter():
    counter = meter.create_counter(name="some-prefix-counter", description="TODO")
    # while True:
    counter.add(1)
    time.sleep(5)  # if I remove this, Prometheus won't pick up any data, I think because the scrape has not run yet

counter()
</code></pre>
<p>This is <code>prometheus.yml</code></p>
<pre><code># my global config
global:
  scrape_interval: 1s # Set the scrape interval to every 1 second. Default is every 1 minute.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus-demo"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:8000"]
</code></pre>
|
<python><prometheus><metrics><instrumentation><open-telemetry>
|
2023-05-09 22:35:43
| 0
| 4,113
|
Harts
|
76,213,724
| 2,897,989
|
Displaying dynamic text on a Rect in PyGame
|
<p>I want to display a dynamically changing <code>energy</code> score text on a moving Rect in Pygame. I'm doing:</p>
<pre><code>class Creature(pygame.sprite.Sprite):
    def __init__(self, x, y, color=RED, agent=None, size=(30, 20)):
        super().__init__()
        self.image = pygame.Surface([size[0], size[1]])
        self.image.fill(color)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        ...
        self.energy = 200
        self.font = pygame.font.SysFont("Arial", 36)
        self.text = self.font.render(self.energy, False, (0, 0, 0))
        screen.blit(self.text, (self.rect.x, self.rect.y))

    def update(self):
        self.energy -= 1
        self.font = pygame.font.SysFont("Arial", 36)
        self.text = self.font.render(self.energy, False, (0, 0, 0))
        screen.blit(self.text, (self.rect.x, self.rect.y))
</code></pre>
<p>However, no text appears. Everything else works fine, I'm showing and updating the rectangle on screen. What am I missing?</p>
|
<python><pygame>
|
2023-05-09 22:20:13
| 1
| 7,601
|
lte__
|
76,213,664
| 8,849,755
|
Plotly express contour
|
<p>Consider the following example of a heatmap plot taken from <a href="https://plotly.com/python/heatmaps/" rel="nofollow noreferrer">the documentation</a>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.express as px

# Create the index for the data frame
x = np.linspace(-1, 1, 6)
y = np.linspace(-1, 1, 6)
n_channel = [1, 2, 3, 4]
xx, yy = np.meshgrid(x, y)

all_arrays = np.empty((0, 6, 6))
for n in n_channel:
    z = np.random.randn(len(y), len(x))
    all_arrays = np.vstack((all_arrays, [z]))

fig = px.imshow(
    all_arrays,
    facet_col=0,
)
for k in range(4):
    fig.layout.annotations[k].update(text='n_channel:{}'.format(k))
fig.show()
</code></pre>
<p>which produces the following plot</p>
<p><a href="https://i.sstatic.net/SvNXL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SvNXL.png" alt="enter image description here" /></a></p>
<p>I would like to have a <a href="https://plotly.com/python/contour-plots/" rel="nofollow noreferrer">contour plot</a> instead which shares the axes and the color bar. Given the similarity between the two types of plots I was expecting that this could be achieved by simply replacing <code>imshow</code> by <code>contour</code>. It is not the case however.</p>
<p>How can this be done?</p>
|
<python><plotly><contour>
|
2023-05-09 22:06:26
| 1
| 3,245
|
user171780
|
76,213,501
| 21,865,432
|
Python packages imported in editable mode can't be resolved by pylance in VSCode
|
<p>I have several Python packages developed locally, which I often use in VSCode projects. For these projects, I create a new virtualenv and install these packages with <code>pip install -e ../path/to/package</code>. This succeeds, and I can use these packages in the project. However, VSCode underlines the package's import line in yellow, with this error:</p>
<blockquote>
<p>Import "mypackage" could not be resolved Pylance(reportMissingImports)</p>
</blockquote>
<p>Again, <code>mypackage</code> works fine in the project, but VSCode reports that error, and I lose all autocomplete and type hint features when calling <code>mypackage</code> in the project.</p>
<p>I ensured that the right Python interpreter is selected (the one from the project's virtualenv), but the error persists. The error and Pylance docs do not offer any other possible solutions.</p>
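One workaround worth trying (an assumption about this setup, not something from the error message): point Pylance directly at the package's source via <code>python.analysis.extraPaths</code> in the workspace's <code>.vscode/settings.json</code>, since Pylance does not always follow the <code>.pth</code>-based indirection that editable installs create. The path below mirrors the hypothetical <code>../path/to/package</code> from the install command.

```json
{
    "python.analysis.extraPaths": [
        "../path/to/package"
    ]
}
```

After saving the setting, reloading the window (or restarting the Pylance language server) is usually needed for the import to resolve.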
<p>VSCode version: 1.78.0</p>
|
<python><visual-studio-code>
|
2023-05-09 21:31:04
| 2
| 353
|
Austin de Coup-Crank
|
76,213,486
| 11,922,765
|
Python Parse weather station data to dataframe and save in Postgresql
|
<p>How do I parse the following data, obtained as the result of an API query, into columns named with their units, with the values as cell values?</p>
<pre><code>wea_data = [{'observation_time': '2023-05-09T15:55:00.000000+00:00',
'station': 'KCOF',
'weather_results': {'@id': 'https://api.weather.gov/stations/KCOF/observations/2023-05-09T15:55:00+00:00',
'@type': 'wx:ObservationStation',
'barometricPressure': {'qualityControl': 'V',
'unitCode': 'wmoUnit:Pa',
'value': 101800},
'cloudLayers': [{'amount': 'CLR',
'base': {'unitCode': 'wmoUnit:m',
'value': None}}],
'dewpoint': {'qualityControl': 'V',
'unitCode': 'wmoUnit:degC',
'value': 17.9},
'elevation': {'unitCode': 'wmoUnit:m', 'value': 3},
'heatIndex': {'qualityControl': 'V',
'unitCode': 'wmoUnit:degC',
'value': 28.657944011208333},
'icon': 'https://api.weather.gov/icons/land/day/skc?size=medium',
'maxTemperatureLast24Hours': {'unitCode': 'wmoUnit:degC',
'value': None},
'minTemperatureLast24Hours': {'unitCode': 'wmoUnit:degC',
'value': None},
'precipitationLast3Hours': {'qualityControl': 'Z',
'unitCode': 'wmoUnit:mm',
'value': None},
'precipitationLast6Hours': {'qualityControl': 'Z',
'unitCode': 'wmoUnit:mm',
'value': None},
'precipitationLastHour': {'qualityControl': 'Z',
'unitCode': 'wmoUnit:mm',
'value': None},
'presentWeather': [],
'rawMessage': 'KCOF 091555Z AUTO 33005KT 10SM CLR 28/18 '
'A3006 RMK AO2 SLP184 T02780179 $',
'relativeHumidity': {'qualityControl': 'V',
'unitCode': 'wmoUnit:percent',
'value': 54.878330638989},
'seaLevelPressure': {'qualityControl': 'V',
'unitCode': 'wmoUnit:Pa',
'value': 101840},
'station': 'https://api.weather.gov/stations/KCOF',
'temperature': {'qualityControl': 'V',
'unitCode': 'wmoUnit:degC',
'value': 27.8},
'textDescription': 'Clear',
'timestamp': '2023-05-09T15:55:00+00:00',
'visibility': {'qualityControl': 'C',
'unitCode': 'wmoUnit:m',
'value': 16090},
'windChill': {'qualityControl': 'V',
'unitCode': 'wmoUnit:degC',
'value': None},
'windDirection': {'qualityControl': 'V',
'unitCode': 'wmoUnit:degree_(angle)',
'value': 330},
'windGust': {'qualityControl': 'Z',
'unitCode': 'wmoUnit:km_h-1',
'value': None},
'windSpeed': {'qualityControl': 'V',
'unitCode': 'wmoUnit:km_h-1',
'value': 9.36}}}]
</code></pre>
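<p>For reference, a minimal sketch of the kind of flattening being asked about, using plain iteration over the nested dict (the helper name and the "field (unit)" column format are assumptions, not part of the question):</p>

```python
def flatten_observation(obs: dict) -> dict:
    """Turn {'field': {'unitCode': ..., 'value': ...}} entries into
    one flat row keyed as 'field (unit)'."""
    row = {}
    for key, val in obs.items():
        if isinstance(val, dict) and 'value' in val:
            unit = val.get('unitCode', '').split(':')[-1]  # drop the 'wmoUnit:' prefix
            row[f"{key} ({unit})"] = val['value']
        elif not isinstance(val, (dict, list)):
            row[key] = val  # plain scalars such as 'station' or 'timestamp'
    return row

# e.g. row = flatten_observation(wea_data[0]['weather_results'])
#      df = pd.DataFrame([row])   # one observation per DataFrame row
```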
|
<python><pandas><dataframe><numpy><parsing>
|
2023-05-09 21:28:36
| 1
| 4,702
|
Mainland
|
76,213,454
| 11,748,924
|
Hide Ultralytics' Yolov8 model.predict() output from terminal
|
<p>I have this output that was generated by <code>model.predict()</code></p>
<pre><code>0: 480x640 1 Hole, 234.1ms
Speed: 3.0ms preprocess, 234.1ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 640)
0: 480x640 1 Hole, 193.6ms
Speed: 3.0ms preprocess, 193.6ms inference, 3.5ms postprocess per image at shape (1, 3, 640, 640)
...
</code></pre>
<p>How do I hide the output from the terminal?</p>
<p>I can't find out the information in this official link:
<a href="https://docs.ultralytics.com/modes/predict/#arguments" rel="noreferrer">https://docs.ultralytics.com/modes/predict/#arguments</a></p>
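<p>For what it's worth, recent Ultralytics releases accept a <code>verbose</code> flag (e.g. <code>model.predict(source, verbose=False)</code>), which is worth trying first. As a generic fallback, output written to stdout can be swallowed with the standard library; a sketch (this won't silence messages emitted through <code>logging</code> handlers that write to stderr):</p>

```python
import contextlib
import io

def run_silently(fn, *args, **kwargs):
    """Call fn while discarding anything it prints to stdout."""
    with contextlib.redirect_stdout(io.StringIO()):
        return fn(*args, **kwargs)

# e.g. results = run_silently(model.predict, frame)
```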
|
<python><output><stdout><yolo><yolov8>
|
2023-05-09 21:23:28
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
76,213,385
| 2,540,204
|
Geopandas gets creative when it can't find the street
|
<p>I have noticed an interesting behavior when working with Geopandas. When Geopandas is fed a street that it presumably cannot find, it will somehow substitute an existing street that seems to be near the city of the address. See the input/output below:</p>
<pre><code>> getGeoLoc(["2502 asdfasdf St, Albany NY"])
0 POINT (-73.78173 42.66155)
</code></pre>
<p>The coordinate returned for the above gibberish is 500 Hamilton St Apartment 1, Albany, NY.</p>
<p>What's even more bizarre is that varying the street number results in additional locations around the returned street. This apparently doesn't happen if you botch the number, city or state, which returns a null value.</p>
<p>This makes things a little tricky when I'm bulk converting addresses because I can't really tell if it is actually finding the street or if I've fed it a bad piece of data.</p>
<p>Can anybody explain this or tell me how to get an error for a bad street name?</p>
|
<python><geopandas><geopy>
|
2023-05-09 21:09:21
| 1
| 2,703
|
neanderslob
|
76,213,367
| 13,630,879
|
How to use gnupg python library within Docker container for AWS Lambda Function
|
<p>I am trying to run a python lambda function that is being deployed as a docker container. The purpose of the function is to connect to an ftp server, retrieve encrypted files, and then decrypt them. The part that is causing issues is the decryption. I am using <a href="https://gnupg.org/" rel="nofollow noreferrer">gpg</a> for the decryption with the use of a private key.</p>
<p>I was able to setup gpg and my python code with my aws lambda docker container as following:</p>
<pre><code>FROM amazon/aws-lambda-python:3.9
RUN yum update
RUN yum install -y gnupg
RUN yum -y install gcc g++ sudo
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY . ${LAMBDA_TASK_ROOT}
CMD [ "app.handler" ]
</code></pre>
<p>But after getting gnupg installed inside the container, I was not able to successfully import keys and decrypt files. I kept receiving errors, such as:</p>
<pre><code>gpg: Fatal: can't create directory /home/sbx_user1051/.gnupg: No such file or directory
</code></pre>
<p>and this error:</p>
<pre><code>gpg returned a non-zero error code: 2
</code></pre>
<p>I am using python3.9 with the library <a href="https://gnupg.readthedocs.io/en/latest/" rel="nofollow noreferrer">python-gnupg</a>, which is built off the gnupg binary. The errors mentioned above took place when I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import gnupg
my_gpg = gnupg.GPG(gpgbinary="/usr/bin/gpg")
my_gpg.import_keys_file("/var/task/keys/priv_key.asc", passphrase=os.getenv("PASSPHRASE"))
</code></pre>
<p>How can I avoid these errors and successfully import the priv_key to my gpg keyring?</p>
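<p>Not an answer per se, but the first error suggests gpg has no writable home directory: in Lambda only <code>/tmp</code> is writable, and the execution user's <code>$HOME</code> doesn't exist. A hedged sketch (the path and helper name are my own; <code>gnupghome</code> is python-gnupg's parameter for pointing at a custom keyring directory):</p>

```python
import os

def ensure_gpg_home(path="/tmp/gnupg"):
    # Lambda only allows writes under /tmp; gpg refuses to run without a
    # writable home directory, hence the "can't create directory" error.
    os.makedirs(path, mode=0o700, exist_ok=True)
    return path

# hypothetical usage with python-gnupg:
# my_gpg = gnupg.GPG(gpgbinary="/usr/bin/gpg", gnupghome=ensure_gpg_home())
```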
|
<python><amazon-web-services><docker><aws-lambda><gnupg>
|
2023-05-09 21:05:59
| 1
| 403
|
Jeff Gruenbaum
|
76,213,293
| 18,203,813
|
Lock row for very large time interval on SqlAlchemy
|
<p>I have to lock a row for a long period of time, for example an hour. My code looks like this:</p>
<pre><code>with get_orm().session() as session:
    instance = session.query(Model).filter(Model.id == 1).with_for_update().first()
very_slow_process() # duration: one hour
instance.retry = instance.retry +1
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>Lost connection to MySQL server during query</p>
</blockquote>
<p>I expected the row to stay locked, to be updated after one hour, and then the transaction to be closed.</p>
|
<python><mysql><sqlalchemy>
|
2023-05-09 20:49:57
| 0
| 399
|
Pablo Estevez
|
76,213,177
| 12,415,855
|
VSCode / Pylance / Disable TypeChecking?
|
<p>I would like to disable the underlining of errors like .text in this example
<a href="https://i.sstatic.net/i2IKv.png" rel="noreferrer"><img src="https://i.sstatic.net/i2IKv.png" alt="enter image description here" /></a></p>
<p>But it would be OK for me when, e.g., <code>driver</code> is misspelled like this:</p>
<p><a href="https://i.sstatic.net/cGWSD.png" rel="noreferrer"><img src="https://i.sstatic.net/cGWSD.png" alt="enter image description here" /></a></p>
<p>In my defaultSettings.json the parameter for this option is set:</p>
<pre><code>"python.analysis.typeCheckingMode": "off",
</code></pre>
<p>And this parameter is not present in my settings.json at all.</p>
<p>So why is Pylance still underlining the <code>.text</code> attribute?
I only want it to underline errors like a misspelled <code>driver</code>.</p>
|
<python><visual-studio-code><beautifulsoup><pylance>
|
2023-05-09 20:28:38
| 3
| 1,515
|
Rapid1898
|
76,213,164
| 19,506,623
|
How to order a big list in tabulate form to avoid memory error?
|
<p>I have the input list <code>data</code> below, which I'd like to tabulate based on the order of <code>letters</code>, locating each value in the correct column of each row. The code below works, but when I have a big input list (a <code>data</code> list with more than 10K sublists, and a <code>letters</code> list of about 12 elements) I get a <code>MemoryError</code>.</p>
<pre><code>import pandas as pd
letters = ['A','B','C']
data = [['C', 3], ['B', 5], ['A', 1], ['B', 4], ['C', 2], ['A', 3], ['C', 8], ['A',9],['B',5]]
out = []
while data:
out.append([
data.pop(0)[1] if data and data[0][0] == h else None
for h in letters
])
df = pd.DataFrame(out, columns=letters)
</code></pre>
<p><strong>OUTPUT:</strong></p>
<p><a href="https://i.sstatic.net/0tdxx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0tdxx.png" alt="enter image description here" /></a></p>
<p>Each row is built from element <code>[1]</code> of each sublist of <code>data</code>. If the sequence <code>['A','x'], ['B','y'], ['C','z']</code> is found, then the row is <code>['x','y','z']</code>. If only <code>['B','y']</code> or <code>['C','z']</code> is present, then the other values in the row should be <code>None</code>. It is as if the input <code>data</code> list were like this one:</p>
<pre><code>list2 = [
['A', ''], ['B', ''], ['C', 3 ],
['A', ''], ['B', 5 ], ['C', ''],
['A', 1 ], ['B', 4 ], ['C', 2 ],
['A', 3 ], ['B', ''], ['C', 8 ],
['A', 9 ], ['B', 5 ], ['C', '']
]
</code></pre>
<p>My attempt to avoid the <code>MemoryError</code> is to create 3 loops like this, but initially I don't know how many rows the output list will have, so I can't split the work into 2 or 3 loops to create separate output lists. I hope that makes sense. Thanks for any help.</p>
<pre><code>a = []
b = []
c = []
n = len(data)//3
if data:
for k in range(n):
a.append([
data.pop(0)[1] if data[0][0] == h else None
for h in letters
])
</code></pre>
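<p>Not sure this is exactly what you're after, but a generator-based rewrite of the same sweep avoids both the quadratic <code>pop(0)</code> and holding a second full copy of the data in <code>out</code>; only a sketch (the helper name is mine), which produces the same rows as the original loop:</p>

```python
def iter_rows(data, letters):
    """Yield one output row at a time, consuming data left to right."""
    it = iter(data)
    item = next(it, None)
    while item is not None:
        row = []
        for h in letters:
            if item is not None and item[0] == h:
                row.append(item[1])       # letter matches: place the value
                item = next(it, None)     # and advance to the next sublist
            else:
                row.append(None)          # letter missing in this sweep
        yield row

# df = pd.DataFrame(iter_rows(data, letters), columns=letters)
```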
|
<python><pandas><dataframe><list>
|
2023-05-09 20:25:55
| 1
| 737
|
Rasec Malkic
|
76,213,077
| 981,981
|
Extract joined urls but not if redirect exists
|
<p>I'm looking for a regex for extracting urls when they are not separated by a space (or anything else), while keeping the "redirect" ones as a complete url.</p>
<p>Let me show you an example:</p>
<pre><code>http://foo.barhttps://foo.bazhttp://foo.bar?url=http://foo.baz
</code></pre>
<p>should result in the following array:</p>
<pre><code>['http://foo.bar', 'https://foo.baz', 'http://foo.bar?url=http://foo.baz']
</code></pre>
<p>I am able to separate urls joined thanks to this regex :</p>
<pre><code>'~(?:https?:)?//.*?(?=$|(?:https?:)?//)~'
</code></pre>
<p>from this answer: <a href="https://stackoverflow.com/questions/43495572/extract-urls-from-string-without-spaces-between">Extract urls from string without spaces between</a></p>
<p>But I struggle to also extract the redirect ones while keeping the <code>=http</code> part attached.</p>
<p>Thanks,</p>
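<p>For comparison, a Python sketch of one possible pattern: split before every new <code>http(s)://</code> unless it is immediately preceded by <code>=</code>, which keeps redirect parameters attached. This only handles the literal <code>=</code> case shown, not every redirect style:</p>

```python
import re

# Lazy-match each URL until the next scheme that does not follow '='.
URL_SPLIT = re.compile(r'https?://.*?(?=(?<!=)https?://|$)')

def split_urls(blob: str) -> list:
    return URL_SPLIT.findall(blob)
```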
|
<python><regex>
|
2023-05-09 20:12:45
| 2
| 2,975
|
Guillaume Cisco
|
76,213,011
| 11,064,604
|
Pandas Centric Way to Find Previous Value in a Pandas Dataframe
|
<p>I have an ordered pandas dataframe called <strong>keys</strong> containing columns <code>group_1</code>, <code>group_2</code>, ..., <code>group_N</code>, and <code>number</code>.</p>
<p>I have a second, ordered, pandas dataframe called <strong>fill_in</strong> having identical columns but different values. For each row in <strong>fill_in</strong> I want to find the index of the corresponding row in <strong>keys</strong> that has 1) identical <code>group</code> values and 2) the largest value in <code>number</code> that is less than <strong>fill_in</strong>'s current <code>number</code>. If a <strong>fill_in</strong> row's groups are not found in <strong>keys</strong>, it should output <code>np.nan</code>. If the value in <code>number</code> is lower than any found within the group of <strong>keys</strong>, it should also output <code>np.nan</code>.</p>
<p>For a toy example, consider the following <strong>keys</strong>, <strong>fill_in</strong> and expected output:</p>
<pre><code>keys = pd.DataFrame({'group1':[1, 1, 1, 1, 2, 2],
'group2':[5, 5, 5, 7, 9, 9],
'number': [19,35,61,5, 105,300]})
fill_in = pd.DataFrame({'group1':[1, 1, 2, 5],
'group2':[5, 5, 9, 9],
'number': [0,43.2,900.3,14]})
expected_output = [np.nan, 1, 5, np.nan]
</code></pre>
<p>I have solved this problem by pinching my nose and writing a <code>for</code> loop on pandas dataframes. Not surprisingly, my solution is very slow. Is there a way to solve this using pandas operations?</p>
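<p>For concreteness, a sketch of the pandas-centric direction I'd expect an answer to take: <code>pd.merge_asof</code> with <code>by=</code> for the group columns, <code>direction='backward'</code> for "largest value below", and <code>allow_exact_matches=False</code> for the strict inequality. Assumes two group columns as in the toy example:</p>

```python
import numpy as np
import pandas as pd

keys = pd.DataFrame({'group1': [1, 1, 1, 1, 2, 2],
                     'group2': [5, 5, 5, 7, 9, 9],
                     'number': [19, 35, 61, 5, 105, 300]})
fill_in = pd.DataFrame({'group1': [1, 1, 2, 5],
                        'group2': [5, 5, 9, 9],
                        'number': [0, 43.2, 900.3, 14]})

# merge_asof needs both frames sorted on the key, with matching key dtypes;
# reset_index() keeps each frame's original row index as an 'index' column
k = keys.reset_index().astype({'number': float}).sort_values('number')
f = fill_in.reset_index().sort_values('number')

merged = pd.merge_asof(f, k, on='number', by=['group1', 'group2'],
                       direction='backward', allow_exact_matches=False)

# restore fill_in's original order; 'index_y' is the matched keys row index
output = merged.sort_values('index_x')['index_y'].to_numpy()
```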
|
<python><python-3.x><pandas><group-by>
|
2023-05-09 20:04:30
| 2
| 353
|
Ottpocket
|
76,212,928
| 612,580
|
Numpy: Packing N bits per byte
|
<p>I'm using numpy in GNURadio, where there are variable numbers of bits that should be packed into a given byte based on the type of modulation. Not really relevant for this question, but that's the background.</p>
<p>With numpy, how can I take an array of bits, and repack it into <code>N</code> bits per byte?</p>
<p>For example, if I have a bit array <code>[0, 1, 1, 1, 0, 0, 1, 0]</code> and I want to pack 2 bits per byte, my output array should look like <code>[1, 3, 0, 2]</code>. If I want to pack 4 bits per byte, the output should look like <code>[7, 2]</code>.</p>
<p>The <a href="https://numpy.org/doc/stable/reference/generated/numpy.packbits.html" rel="nofollow noreferrer"><code>np.packbits</code></a> function doesn't have a parameter for bits-per-byte. Any suggestions for a high-performance solution?</p>
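<p>For what it's worth, one high-performance sketch is a reshape plus a dot product with powers of two (assumes the bit count divides the array length evenly, most significant bit first):</p>

```python
import numpy as np

def pack_bits(bits, n):
    """Pack n bits per output value, most significant bit first."""
    bits = np.asarray(bits, dtype=np.uint8).reshape(-1, n)
    weights = 1 << np.arange(n - 1, -1, -1)   # e.g. n=4 -> [8, 4, 2, 1]
    return bits @ weights
```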
|
<python><numpy>
|
2023-05-09 19:51:32
| 1
| 4,640
|
Jordan
|
76,212,720
| 9,403,794
|
How to remove similar items from a list of lists
|
<p>EDITED DESCRIPTION:</p>
<blockquote>
<p>Let's say I have a set of linear functions, and I am tasked with comparing
features. Let's say our "length" is going to be the slope of the
linear function, and "multi" is going to be a factor of b. Now:</p>
<ol>
<li>I don't want to compare a linear function with itself.</li>
<li>I don't want to compare two linear functions twice, because putting the
coefficients together in such an iterator generates the same line.</li>
</ol>
</blockquote>
<p>I have two lists. I need the product of the two lists, but each list will be used twice - something like this: <code>itertools.product(l1, l2, l1, l2)</code>.</p>
<p>The problem is that I have similar (not duplicated) items in output.</p>
<p>I wrote this code but it is ultra slow for longer lists:</p>
<pre><code>import itertools
length = [1, 2]
multi = [3, 4]
# materialize the product as a list of 4-element lists
l = [list(t) for t in itertools.product(length, multi, length, multi)]
for inter_list in l:
print(inter_list)
print()
"BELOW IS SOLUTION BUT SLOW."
for l_master in l:
if l_master[0] == l_master[2] and l_master[1] == l_master[3] :
print("Step1: remove > ", l_master)
l.remove(l_master)
print()
for l_master in l:
print(l_master)
l_slave = l_master[:]
l_slave.append(l_slave.pop(0))
l_slave.append(l_slave.pop(0))
print("Step2 remove >>, ", l_slave)
l.remove(l_slave)
print( )
for k in l:
print(k)
</code></pre>
<p>This gives me:</p>
<pre class="lang-none prettyprint-override"><code>[1, 3, 1, 3]
[1, 3, 1, 4]
[1, 3, 2, 3]
[1, 3, 2, 4]
[1, 4, 1, 3]
[1, 4, 1, 4]
[1, 4, 2, 3]
[1, 4, 2, 4]
[2, 3, 1, 3]
[2, 3, 1, 4]
[2, 3, 2, 3]
[2, 3, 2, 4]
[2, 4, 1, 3]
[2, 4, 1, 4]
[2, 4, 2, 3]
[2, 4, 2, 4]
Step1: remove > [1, 3, 1, 3]
Step1: remove > [1, 4, 1, 4]
Step1: remove > [2, 3, 2, 3]
Step1: remove > [2, 4, 2, 4]
[1, 3, 1, 4]
Step2 remove >>, [1, 4, 1, 3]
[1, 3, 2, 3]
Step2 remove >>, [2, 3, 1, 3]
[1, 3, 2, 4]
Step2 remove >>, [2, 4, 1, 3]
[1, 4, 2, 3]
Step2 remove >>, [2, 3, 1, 4]
[1, 4, 2, 4]
Step2 remove >>, [2, 4, 1, 4]
[2, 3, 2, 4]
Step2 remove >>, [2, 4, 2, 3]
[1, 3, 1, 4]
[1, 3, 2, 3]
[1, 3, 2, 4]
[1, 4, 2, 3]
[1, 4, 2, 4]
[2, 3, 2, 4]
</code></pre>
<p>That means that I need only one item from pair: <code>[1, 3, 1, 4]</code> and <code>[1, 4, 1, 3]</code>. It does not matter which one.</p>
<p>I also do not need rows like:</p>
<pre class="lang-none prettyprint-override"><code>[1, 3, 1, 3]
[1, 4, 1, 4]
[2, 3, 2, 3]
[2, 4, 2, 4]
</code></pre>
<p>How do I build the output I need?
I think I will need something other than <code>itertools.product</code>, or a completely new solution.</p>
<p>EDITED:
Desired output:</p>
<pre><code>[1, 3, 1, 4]
[1, 3, 2, 3]
[1, 3, 2, 4]
[1, 4, 2, 3]
[1, 4, 2, 4]
[2, 3, 2, 4]
</code></pre>
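<p>A sketch of what I believe the desired output reduces to: treat each <code>(length, multi)</code> pair as one line, then take unordered pairs of distinct lines with <code>itertools.combinations</code>, which removes both the self-pairs and the order-swapped duplicates in one pass:</p>

```python
from itertools import combinations, product

def unique_pairs(length, multi):
    lines = product(length, multi)   # each (slope, factor) line exactly once
    # combinations yields each unordered pair of distinct lines once
    return [list(a + b) for a, b in combinations(lines, 2)]
```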
|
<python><list><product>
|
2023-05-09 19:12:41
| 1
| 309
|
luki
|
76,212,641
| 5,942,100
|
Tricky conditional rounding conversions to dataframe using Pandas
|
<p>I wish to round each value under columns Q1 24 - Q4 24 to the nearest even number, with one exception: if the value is between 0.75 and 3, round it to 2. I also want to create a new column that captures
the row sum.</p>
<p><strong>Data</strong></p>
<pre><code>state range type Q1 24 Q2 24 Q3 24 Q4 24 rounded_sum
NY up AA 2.39 3.24 3.57 7.07 16
NY up DD 3.53 4.79 5.28 10.45 24
NY up DD 0.51 1.02 2.05 2.05 6
NY up SS 0.59 0.8 0.88 1.75 4
CA low SS 0 0.79 0 0 2
CA low SS 0 0 0 0 0
import pandas as pd
data = {
'state': ['NY', 'NY', 'NY', 'NY', 'CA', 'CA'],
'range': ['up', 'up', 'up', 'up', 'low', 'low'],
'type': ['AA', 'DD', 'DD', 'SS', 'SS', 'SS'],
'Q1 24': [2.39, 3.53, 0.51, 0.59, 0, 0],
'Q2 24': [3.24, 4.79, 1.02, 0.8, 0.79, 0],
'Q3 24': [3.57, 5.28, 2.05, 0.88, 0, 0],
'Q4 24': [7.07, 10.45, 2.05, 1.75, 0, 0],
'rounded_sum': [16, 24, 6, 4, 2, 0]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>state range type Q1 24 Q2 24 Q3 24 Q4 24 rounded_sum rounded_sum2
NY up AA 2 4 4 8 16 18
NY up DD 4 4 6 10 24 24
NY up DD 0 2 2 2 6 6
NY up SS 0 2 2 2 4 6
CA low SS 0 2 0 0 2 2
CA low SS 0 0 0 0 0 0
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>import numpy as np

def round_to_even(x):
if x <= 0.69:
return 0
else:
return int(np.ceil(x / 2) * 2)
# Apply the rounding function to the relevant columns (Q1 24, Q2 24, Q3 24, Q4 24)
df2.iloc[:, 3:7] = df2.iloc[:, 3:7].applymap(round_to_even)
# Recalculate the rounded_sum
df2['rounded_sum_2'] = df2.iloc[:, 3:7].sum(axis=1)
</code></pre>
<p>However, this is not properly rounding each value under the Q1-Q4 24 headers to the nearest even value (including the exception)</p>
<p>Any suggestion is appreciated</p>
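<p>For reference, a sketch of a scalar rule that reproduces the desired table above (the helper name is mine; note that <code>round()</code> uses banker's rounding on exact halves, which none of the sample values hit):</p>

```python
def round_even_with_exception(x):
    # Exception first: anything in [0.75, 3] maps straight to 2.
    if 0.75 <= x <= 3:
        return 2
    # Otherwise nearest even number: round x/2 to the nearest integer, double it.
    return int(round(x / 2)) * 2
```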
|
<python><pandas><numpy>
|
2023-05-09 19:02:02
| 1
| 4,428
|
Lynn
|
76,212,538
| 15,481,917
|
Python: how to schedule a task to happen after 1 hour?
|
<p>I am creating a user registration that requires an activation code. I want the activation code to expire after 1 hour, so I did this:</p>
<pre><code>def register(request):
form = CreateUserForm()
if request.method == "POST":
form = CreateUserForm(request.data)
if form.is_valid():
email = request.POST.get('email')
password = request.POST.get('password')
user = get_user_model()
user = user_manager.create_user(email=email, password=password)
# Generate an activation code and store it in the database
code = generate_activation_code()
# activation = Activations.objects.create(user=12, code=code)
# Schedule the activation code for deletion after 1 hour
schedule_activation_code_deletion(user, code)
# schedule_activation_code_deletion(user, code)
# Send the activation code to the user's email address
subject = "Your verification code"
message = f"Your verification code is {code}"
send_email(subject, message, settings.EMAIL_HOST_USER, email)
return Response('User created.')
return Response(status=400)
def schedule_activation_code_deletion(user_id, code):
queue = django_rq.get_queue('default')
queue.enqueue_in(datetime.timedelta(minutes=1), test)
</code></pre>
<p>With this code I get <code>Error while reading from localhost:5432 : (10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)</code>
Is there any other library that works?</p>
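<p>As a side note, the deletion doesn't have to be scheduled at all: storing a creation timestamp and checking the age when the code is used avoids the queue entirely. A sketch of that check, independent of Django (<code>MAX_AGE</code> is an assumption matching the 1-hour requirement):</p>

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=1)

def code_is_valid(created_at, now=None):
    """True while the activation code is younger than MAX_AGE."""
    now = now or datetime.now(timezone.utc)
    return now - created_at < MAX_AGE
```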
|
<python><django>
|
2023-05-09 18:46:30
| 0
| 584
|
Orl13
|
76,212,338
| 5,942,100
|
Conditionally sum and round to nearest even value row by row using Pandas
|
<p>I wish to round the sum of a row of values to the nearest even number, except that if the sum is between 0.75 and 3, it should round to 2.</p>
<p><strong>Data</strong></p>
<pre><code>Location range type Q1 24 Q2 24 Q3 24 Q4 24
NY low SS 0 0.79 0 0
NY low SS 0 0 0 0
CA low AA 0.24 0.34 0.93 1.07
CA low AA 1.71 0.57 0 0
CA low BB 0.08 0.11 0.65 0.47
CA low BB 0 0 2 0
CA low DD 0.35 0.5 1.37 1.57
</code></pre>
<p><strong>Desired</strong></p>
<pre><code> Location range type Q1 24 Q2 24 Q3 24 Q4 24 rounded_sum
NY low SS 0 0.79 0 0 2
NY low SS 0 0 0 0 0
CA low AA 0.24 0.34 0.93 1.07 2
CA low AA 1.71 0.57 0 0 2
CA low BB 0.08 0.11 0.65 0.47 2
CA low BB 0 0 2 0 2
CA low DD 0.35 0.5 1.37 1.57 4
</code></pre>
<p><strong>Doing</strong></p>
<p>An SO member helped, but I need to tweak it so that if the sum is
greater than or equal to 0.75 it rounds to the value 2.</p>
<pre><code># using np.where, check if integer part is even or odd
# if odd, add 1 else, keep the integer as is
df2['rounded_sum']=np.where(df2['rounded_sum'].astype(int)%2==1,
df2['rounded_sum'].astype(int) +1,
df2['rounded_sum'].astype(int))
</code></pre>
<p>Any suggestion is appreciated</p>
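<p>In case it helps to see the whole pipeline, a sketch that sums the quarter columns first and then applies the conditional rounding to each row sum (the helper name is mine; the data matches the sample above):</p>

```python
import pandas as pd

def round_even_2(total):
    if 0.75 <= total <= 3:            # the stated exception band
        return 2
    return int(round(total / 2)) * 2  # nearest even number otherwise

df = pd.DataFrame({
    'Q1 24': [0, 0, 0.24, 1.71, 0.08, 0, 0.35],
    'Q2 24': [0.79, 0, 0.34, 0.57, 0.11, 0, 0.5],
    'Q3 24': [0, 0, 0.93, 0, 0.65, 2, 1.37],
    'Q4 24': [0, 0, 1.07, 0, 0.47, 0, 1.57],
})
df['rounded_sum'] = df.sum(axis=1).apply(round_even_2)
```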
|
<python><pandas><numpy>
|
2023-05-09 18:17:03
| 4
| 4,428
|
Lynn
|
76,212,158
| 19,328,707
|
Python - API request works in __init__ but not in function
|
<p>I'm implementing API requests in my Tkinter app. In the <code>Gui</code> class's <code>__init__</code> there is one API request that works without problems. But when I call a function inside the class that runs the same API request, I get back an error.</p>
<p>I'm not quite sure whether it is the API or maybe my class structure.</p>
<p>My folder structure looks like this</p>
<pre><code>Project Folder/
api/
__init__.py
api.py
gui/
__init__.py
gui.py
main.py
</code></pre>
<p>The <code>main.py</code> holds the <code>Gui Class</code> with the <code>mainloop</code>.</p>
<pre><code>from gui.gui import Gui
if __name__ == '__main__':
app = Gui()
app.mainloop()
</code></pre>
<p>The <code>GUI class</code> contains the tkinter app and is inherited from <code>customtkinter.CTk</code>. Simplified it looks like that</p>
<pre><code>import logging
import PIL.Image
from tkinter import *
import tkinter.messagebox
import customtkinter
from api.api import WikiApi
class Gui(customtkinter.CTk):
def __init__(self):
super().__init__()
# API
self.api = WikiApi()
self.book_list = self.api.get_all_books_from_shelv()['books'] # Fetching all book names from api
# API Output looks like this
'''
{
'books': [{
'id': 37,
'name': 'book-1',
'slug': 'book-1',
'description': '',
'created_at': '2023-04-11T09:16:19.000000Z',
'updated_at': '2023-04-14T08:40:49.000000Z',
'created_by': 16,
'updated_by': 4,
'owned_by': 16
}, {
'id': 38,
'name': 'book-2',
'slug': 'book-2',
'description': '',
'created_at': '2023-04-13T07:07:23.000000Z',
'updated_at': '2023-04-13T07:07:23.000000Z',
'created_by': 16,
'updated_by': 16,
'owned_by': 16
}
]
}
'''
# logger
# screen constants
# root window attributes & widgets
# refresh button that calls self.refresh_book_list
def refresh_book_list(self):
self.book_list.clear()
self.book_list = [book['name'] for book in self.api.get_all_books_from_shelv()['books']]
# API response from function
'''
{
'error': {
'code': 403,
'message': 'The owner of the used API token does not have permission to make API calls'
}
}
'''
</code></pre>
<p>The <code>api.py</code> contains the class <code>WikiApi</code> and simplified looks like this</p>
<pre><code>import requests
import logging
import json
class WikiApi:
def __init__(self):
self.TOKEN_ID = 'abcdef123'
self.TOKEN_SECRET = 'password'
self.HEADER = {
"Authorization": f"Token {self.TOKEN_ID}:{self.TOKEN_SECRET}",
"Accept": "application/vnd.api+json"
}
self.session = requests.session()
self.session.headers = self.HEADER
def _get(self, url):
response = self.session.get(url)
return response.json()
def get_all_books_from_shelv(self):
url = f'url to api endpoint'
response = self._get(url)
return response
</code></pre>
|
<python><tkinter>
|
2023-05-09 17:50:44
| 1
| 326
|
LiiVion
|
76,212,145
| 15,559,492
|
Exclude duplicate elements with lower priority (Django)
|
<p>I have the following model:</p>
<pre><code>class Action(BaseModel):
action_id = models.IntegerField(unique=True, db_index=True)
name = models.CharField(max_length=25)
object = models.CharField(max_length=25)
priority = models.IntegerField(default=0)
</code></pre>
<p>Suppose there are two 4 objects:</p>
<pre><code>{"action_id":1, "name":"read", "object":"post", "priority":1}
{"action_id":2, "name":"write", "object":"post", "priority":2}
{"action_id":3, "name":"read", "object":"user", "priority":1}
{"action_id":4, "name":"update", "object":"user", "priority":2}
</code></pre>
<p>How can I filter out objects with the same <code>object</code> value, and leave only those with a <code>priority</code> higher in the duplicate set?</p>
<pre><code>[{"action_id":2, "name":"write", "object":"post", "priority":2}, {"action_id":4, "name":"update", "object":"user", "priority":2}]
</code></pre>
<p>I tried filtering methods with - <code>annotate</code>, <code>Max</code>, <code>Count</code> and <code>filter</code> but duplicates are returned if i filter by <em>priority</em></p>
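<p>The logic being asked for, stripped of the ORM, is "keep the max-priority row per <code>object</code>". A plain-Python sketch of that reduction (in the ORM this usually becomes a <code>Subquery</code>/<code>OuterRef</code> filter, or <code>distinct('object')</code> with a matching <code>order_by</code> on PostgreSQL):</p>

```python
def top_priority_per_object(actions):
    """Keep only the highest-priority dict for each distinct 'object' value."""
    best = {}
    for a in actions:
        current = best.get(a["object"])
        if current is None or a["priority"] > current["priority"]:
            best[a["object"]] = a
    return list(best.values())
```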
|
<python><django><django-orm>
|
2023-05-09 17:49:22
| 1
| 426
|
darl1ne
|
76,212,128
| 7,700,802
|
Create a dictionary from key value, where the keys and values are pair types
|
<p>Suppose I have this dictionary object</p>
<pre><code>{('ABE2', '170xi iii plus'): (1, '1178.55', 1178.55),
('ABE2', '170xi4+'): (1, '2844.12', 2844.12),
('ABE2', '1900gsr-1'): (30, '115.0', 3450.0)}
</code></pre>
<p>and I have a empty dataframe that looks like this</p>
<pre><code>df_test = pd.DataFrame(columns=['location_sitecode', 'model_name', 'total_quantity', 'cost_per_device', 'total_cost'])
</code></pre>
<p>I want to append each entry to the rows of df_test. I tried this:</p>
<pre><code>df_test = pd.DataFrame(data = result_dict, columns=['location_sitecode', 'model_name', 'total_quantity', 'cost_per_device', 'total_cost'])
</code></pre>
<p>But I just get the columns and not the data I want inserted into the rows.</p>
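<p>One sketch of the shape conversion being attempted: unpack each (key tuple, value tuple) pair into a single flat row before handing it to <code>pd.DataFrame</code> (the column names follow the question; <code>result_dict</code> is the dictionary shown above):</p>

```python
import pandas as pd

result_dict = {('ABE2', '170xi iii plus'): (1, '1178.55', 1178.55),
               ('ABE2', '170xi4+'): (1, '2844.12', 2844.12),
               ('ABE2', '1900gsr-1'): (30, '115.0', 3450.0)}

# (site, model) + (qty, cost, total) -> one 5-tuple per row
rows = [(*key, *value) for key, value in result_dict.items()]
df_test = pd.DataFrame(rows, columns=['location_sitecode', 'model_name',
                                      'total_quantity', 'cost_per_device',
                                      'total_cost'])
```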
|
<python><pandas>
|
2023-05-09 17:47:02
| 3
| 480
|
Wolfy
|
76,212,084
| 5,790,653
|
How to run a simple python http server as service
|
<p>This is my <code>script.py</code>:</p>
<pre><code>from http.server import HTTPServer, BaseHTTPRequestHandler
class Serv(BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/':
self.path = '/index.html'
try:
file_to_open = open(self.path[1:]).read()
self.send_response(200)
except FileNotFoundError:
file_to_open = "Not Found!"
self.send_response(404)
self.end_headers()
self.wfile.write(bytes(file_to_open, 'utf-8'))
httpd = HTTPServer(('0.0.0.0', 8000), Serv)
httpd.serve_forever()
</code></pre>
<p>And this is my <code>/lib/systemd/system/python.service</code>:</p>
<pre><code>[Unit]
Description=test
Documentation=test doc
[Service]
Type=notify
WorkingDirectory=/home/python
ExecStart=/usr/bin/python /home/python/server.py
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
</code></pre>
<p>And when I run these commands, the second one gets stuck and doesn't give me back the terminal:</p>
<pre><code>systemctl daemon-reload
systemctl start python
</code></pre>
<p>But I can go to <code>http://ip:8000</code> and I can see my <code>index.html</code> content.</p>
<p>This is a simple web server that I want to run as a service (I don't want to run <code>python -m http.server 8000</code> for various reasons).</p>
<p>How can I run this script as a service?</p>
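<p>A note on why <code>systemctl start</code> hangs: with <code>Type=notify</code>, systemd waits for the process to send a readiness notification, which this plain Python script never does. A hedged sketch of a simpler unit (paths as in the question; <code>Type=simple</code> treats the process as started immediately):</p>

```ini
[Unit]
Description=test

[Service]
Type=simple
WorkingDirectory=/home/python
ExecStart=/usr/bin/python /home/python/server.py
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```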
|
<python><linux>
|
2023-05-09 17:42:07
| 1
| 4,175
|
Saeed
|
76,212,074
| 16,978,454
|
"Connection refused" Boost.Beast library for the C++ WebSocket client
|
<p>I have this Python code:</p>
<pre><code>Server.py
import asyncio
import websockets
import random
async def random_number(websocket, path):
while True:
number = random.randint(1, 100)
await websocket.send(str(number))
await asyncio.sleep(1)
start_server = websockets.serve(random_number, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
</code></pre>
<p>which sends random numbers via localhost:8765. I tested that and it works with a different Python script:</p>
<pre><code>Client.py
import asyncio
import websockets
async def websocket_client():
uri = "ws://localhost:8765"
async with websockets.connect(uri) as websocket:
while True:
message = await websocket.recv()
print(f"Received: {message}")
asyncio.get_event_loop().run_until_complete(websocket_client())
</code></pre>
<p>Now, I want to go further and involve C++. Basically I want to be able to do in C++ exactly what Client.py is doing. Hence, I wrote:</p>
<pre><code>#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <cstdlib>
#include <iostream>
#include <string>
namespace beast = boost::beast;
namespace http = beast::http;
namespace websocket = beast::websocket;
namespace net = boost::asio;
using tcp = boost::asio::ip::tcp;
int main(int argc, char* argv[]) {
try {
const std::string host = "localhost";
const std::string port = "8765";
const std::string target = "/";
net::io_context ioc;
tcp::resolver resolver{ioc};
websocket::stream<tcp::socket> ws{ioc};
auto const results = resolver.resolve(host, port);
net::connect(ws.next_layer(), results.begin(), results.end());
ws.handshake(host + ':' + port, target);
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
std::string message = beast::buffers_to_string(buffer.data());
std::cout << "Received: " << message << std::endl;
}
}
catch (std::exception const& e) {
std::cerr << "Error: " << e.what() << std::endl;
return EXIT_FAILURE;
}
}
</code></pre>
<p>And I get: <strong>Error: connect: Connection refused</strong>
What is going on?</p>
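<p>One classic cause of <strong>Connection refused</strong> on loopback is an address-family mismatch: the Python <code>websockets</code> server may bind "localhost" on one family only, while the C++ resolver tries the other (<code>::1</code> vs <code>127.0.0.1</code>); another is simply that the server isn't running when the client starts. A quick Python check of what "localhost" resolves to on the machine (if two families show up, binding and connecting to <code>127.0.0.1</code> explicitly is a cheap experiment):</p>

```python
import socket

def loopback_endpoints(port=8765):
    """List every (family, address) pair that 'localhost' resolves to."""
    infos = socket.getaddrinfo("localhost", port, proto=socket.IPPROTO_TCP)
    return [(family.name, sockaddr[0]) for family, _, _, _, sockaddr in infos]

# e.g. [('AF_INET6', '::1'), ('AF_INET', '127.0.0.1')] on a dual-stack host
```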
|
<python><c++><websocket><boost>
|
2023-05-09 17:40:20
| 1
| 303
|
IceCode
|
76,211,854
| 4,377,095
|
In Pandas how can I increment values correctly for each row and column
|
<p>assuming I have this table:</p>
<pre><code>data = {'column1': ['A1', 'A1', 'A1', 'A1'],
'column2': ['A2', 'A1', 'A2', 'A3'],
'column3': ['A1', 'A3', 'A3', 'A3'],
'column4': ['A4', 'A4', 'A1', 'A4'],
'column5': ['A5', 'A5', 'A5', 'A5']}
df = pd.DataFrame(data)
</code></pre>
<p>which looks like this:</p>
<pre><code>+---------+---------+---------+---------+---------+
| column1 | column2 | column3 | column4 | column5 |
+---------+---------+---------+---------+---------+
| A1 | A2 | A1 | A4 | A5 |
| A1 | A1 | A3 | A4 | A5 |
| A1 | A2 | A3 | A1 | A5 |
| A1 | A6 | A3 | A4 | A5 |
+---------+---------+---------+---------+---------+
</code></pre>
<p>Is there a way to correctly increment the number of each value in the columns, row by row, moving from left to right? If the value is not "A1", it should be renumbered in increasing order.</p>
<p>The output should be like this:</p>
<pre><code>+---------+---------+---------+---------+---------+
| column1 | column2 | column3 | column4 | column5 |
+---------+---------+---------+---------+---------+
| A1 | A2 | A1 | A3 | A4 |
| A1 | A1 | A2 | A3 | A4 |
| A1 | A2 | A3 | A1 | A4 |
| A1 | A2 | A3 | A4 | A5 |
+---------+---------+---------+---------+---------+
</code></pre>
<p>I could loop over each row, but the problem is that I might be dealing with a large number of rows.</p>
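<p>If I've read the rule correctly (every non-"A1" cell is renumbered sequentially starting at A2, left to right, while "A1" cells stay put), a sketch using one pass per row via <code>apply</code> (the helper name is mine; data as in the question's dict, which differs from the ASCII table in row 4):</p>

```python
import pandas as pd

def renumber_row(row):
    out, counter = [], 1
    for v in row:
        if v == 'A1':
            out.append('A1')            # anchors stay as-is
        else:
            counter += 1
            out.append(f'A{counter}')   # next number in left-to-right order
    return pd.Series(out, index=row.index)

data = {'column1': ['A1', 'A1', 'A1', 'A1'],
        'column2': ['A2', 'A1', 'A2', 'A3'],
        'column3': ['A1', 'A3', 'A3', 'A3'],
        'column4': ['A4', 'A4', 'A1', 'A4'],
        'column5': ['A5', 'A5', 'A5', 'A5']}
df = pd.DataFrame(data).apply(renumber_row, axis=1)
```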
|
<python><pandas><dataframe>
|
2023-05-09 17:08:35
| 1
| 537
|
Led
|
76,211,832
| 8,378,817
|
Extract doi from multiple pages with scrapy
|
<p>I have this webpage (<a href="https://academic.oup.com/plphys/search-results?q=photosynthesis&allJournals=1&fl_SiteID=6323&page=1" rel="nofollow noreferrer">https://academic.oup.com/plphys/search-results?q=photosynthesis&allJournals=1&fl_SiteID=6323&page=1</a>) from which I want to extract information such as title, name, doi, etc.
For the first page I am able to do this easily, but since there are more pages, I am not able to crawl through them. The code I have is:</p>
<pre><code>import scrapy


class PhotosynSpiderSpider(scrapy.Spider):
    name = 'photosyn_spider'
    allowed_domains = ['https://academic.oup.com/plphys']
    start_urls = ['https://academic.oup.com/plphys/search-results?q=photosynthesis&allJournals=1&fl_SiteID=6323']

    def parse(self, response):
        # Step 1: Locate the first page in div class 'pageNumbers al-pageNumbers'
        page_numbers = response.css('div.pageNumbers.al-pageNumbers')
        current_page = page_numbers.css('span.current-page::text').get()
        total_pages = page_numbers.css('span.total-pages::text').get()

        # Step 2: Locate link in a class 'al-citation-list', and extract all the href for doi in the element 'a'
        citation_list = response.css('a.al-citation-list')
        dois = citation_list.css('a::attr(href)').getall()
        for doi in dois:
            yield {'doi': doi}

        # Step 3: Open url for the next page in the element 'a' and class 'sr-nav-next al-nav-next' and repeat step 2
        if current_page != total_pages:
            next_page_url = response.css('a.sr-nav-next.al-nav-next::attr(href)').get()
            yield scrapy.Request(next_page_url, callback=self.parse)
</code></pre>
<p>I am trying to dump the result into a JSON file. However, the result is empty.
Can anyone help me with this?
Thanks</p>
<p>screenshot of page:
<a href="https://i.sstatic.net/q8PLz.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q8PLz.jpg" alt="enter image description here" /></a></p>
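<p>One detail worth double-checking (an assumption on my part, not something verified against this site): Scrapy's <code>allowed_domains</code> must contain bare domain names, not URLs. With a full URL in the list, the offsite middleware may silently drop the follow-up page requests, which would leave the JSON output empty. A sketch of the corrected attributes:</p>

```python
# Hypothetical correction: bare domain in allowed_domains, full URL in start_urls.
allowed_domains = ['academic.oup.com']
start_urls = [
    'https://academic.oup.com/plphys/search-results'
    '?q=photosynthesis&allJournals=1&fl_SiteID=6323&page=1'
]
print(allowed_domains[0])
```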
|
<python><web-scraping><beautifulsoup><scrapy><web-crawler>
|
2023-05-09 17:05:47
| 1
| 365
|
stackword_0
|
76,211,651
| 1,872,565
|
Robotframework - AttributeError: 'str' object has no attribute '_parent'
|
<p>I am working with Robot Framework. I want to highlight an element on the page. Below is the code, and
I am getting the error</p>
<p>"AttributeError: 'str' object has no attribute '_parent' "</p>
<p>Below is the code,</p>
<pre><code>import time

def highlight(element):
    """Highlights (blinks) a Selenium Webdriver element"""
    driver = element._parent

    def apply_style(s):
        driver.execute_script("arguments[0].setAttribute('style', arguments[1]);",
                              element, s)

    original_style = element.get_attribute('style')
    apply_style("background: yellow; border: 2px solid red;")
    time.sleep(.3)
    apply_style(original_style)
</code></pre>
<p><strong>Common.robot</strong></p>
<pre><code>*** Settings ***
Library           SeleniumLibrary    15.0    5.0
Library           OperatingSystem
Library           highlight.py
Resource          ../LoginPage/LoginKW.robot

*** Variables ***
${URL}            https://www.involve.me/

*** Keywords ***
Start
    ${options}=    Evaluate    sys.modules['selenium.webdriver'].ChromeOptions()    sys, selenium.webdriver
    Log To Console    ${options}
    Create WebDriver    Chrome    chrome_options=${options}
    Go To    ${URL}
    ${handles}=    Get Window Handles
    Maximize Browser Window
    Verify Current URL
    ${SCREEN_TEXTS}    Read Json    ../../TestData/Involve_Test.json
    Set Global Variable    ${SCREEN_TEXTS}

Read Json
    [Documentation]    The keyword reads the given JSON file.
    [Arguments]    ${file_path}
    ${JSONContent}    Get File    ${file_path}
    ${page}=    Evaluate    json.loads("""${JSONContent}""")    json
    [Return]    ${page}

Close My Browser
    Sleep    2s
    Close All Browsers

Click For Element
    [Arguments]    ${element}
    Highlight    ${element}
    Wait Until Element Is Visible    ${element}
    Click Element    ${element}

Input For Text
    [Arguments]    ${element}    ${input}
    Wait Until Element Is Visible    ${element}
    Highlight    ${element}
    Input Text    ${element}    ${input}
</code></pre>
<p><strong>LoginTC.robot</strong></p>
<pre><code>*** Settings ***
Documentation     Test Case Login
Library           SeleniumLibrary
Resource          ../../PageObjects/LoginPage/LoginKW.robot
Resource          ../../PageObjects/Common/common.robot
Test Setup        Start    # Suite Setup will run only once
Test Teardown     common.Close My Browser

*** Test Cases ***
Test Login
    Sleep    3s
    Click Cookie
    Click On Link Login
    Enter Email    xxxxxyyyyzzz@gmail.com
    Enter Password    abcdefghijk3Only
    Click On Sign in button
    Is Text Displayed    ${SCREEN_TEXTS["ws_page_title"]}
</code></pre>
<p>When I run the script, I get the error:</p>
<pre><code>(venv) C:\Users\User\PycharmProjects\InvolvemeRobotFramework\TestCases\Login> robot LoginTC1.robot
==============================================================================
LoginTC1 :: Test Case Login
==============================================================================
Test Login    <selenium.webdriver.chrome.options.Options object at 0x041902B0>

DevTools listening on ws://127.0.0.1:61649/devtools/browser/4914f52f-47f6-4632-a776-02045dda890c
https://www.involve.me/
..F.....[28328:8120:0508/220557.776:ERROR:device_event_log_impl.cc(222)] [22:05:57.777] USB: usb_device_handle_win.cc:1046 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
Test Login                                                            | FAIL |
AttributeError: 'str' object has no attribute '_parent'
</code></pre>
<p>Can someone please help me?</p>
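<p>For context, the traceback shows that a plain <em>string</em> reached <code>highlight()</code>: Robot Framework passes locators (e.g. <code>id=foo</code>) to keywords as strings, not as Selenium <code>WebElement</code> objects, so <code>element._parent</code> fails. Converting the locator first, for instance with SeleniumLibrary's <code>Get WebElement</code> keyword, is one way out. A minimal reproduction of the failure mode:</p>

```python
# Sketch: what happens when the locator string is passed straight through.
def get_parent_driver(element):
    """Mimics the first line of highlight(); works only for WebElement-like objects."""
    return element._parent

try:
    get_parent_driver("id=login-button")  # what the keyword actually receives
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute '_parent'
```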
|
<python><selenium-webdriver><robotframework>
|
2023-05-09 16:41:40
| 1
| 947
|
Learner
|
76,211,473
| 396,014
|
pandas accessing elements of the first row of a df
|
<p>I am still learning pandas and am trying to access the first element of the first row of a named array like so:</p>
<pre><code>import numpy as np
import pandas as pd
accel = [[-0.7437,0.1118,-0.6367],
[-0.7471,0.1162,-0.6338],
[-0.7437,0.1216,-0.6255]]
accel = np.array(accel,dtype=float)
angle = [[169.4366,49.4714,56.9421],
[169.3762,49.5374,56.8433],
[169.2828,49.6582,56.7059]]
angle = np.array(angle,dtype=float)
avelo = [[-0.5493,-0.9766,1.4038],
[0,-1.4038,0.7935],
[0.061,-1.0986,0.2441]]
avelo = np.array(avelo,dtype=float)
dfs = {
'accel': pd.DataFrame(accel, columns=list('xyz')),
'angle': pd.DataFrame(angle, columns=list('xyz')),
'avelo': pd.DataFrame(avelo, columns=list('xyz'))
}
df = pd.concat(dfs, axis=1)
param = df['accel']
val = param.head()
print(val['x'][0])
</code></pre>
<p>This (last three lines) does what I want, but it seems like a clunky way to do it. Is there a more elegant way?</p>
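<p>One shorter possibility (a sketch, nothing authoritative): since <code>pd.concat(dfs, axis=1)</code> produces MultiIndex columns, a single <code>.loc</code> lookup with a <code>(group, column)</code> tuple reads the first element directly, without <code>head()</code> or chained indexing:</p>

```python
import numpy as np
import pandas as pd

accel = np.array([[-0.7437, 0.1118, -0.6367],
                  [-0.7471, 0.1162, -0.6338]])
df = pd.concat({'accel': pd.DataFrame(accel, columns=list('xyz'))}, axis=1)

# Row label 0, MultiIndex column ('accel', 'x'), in one step.
first_x = df.loc[0, ('accel', 'x')]
print(first_x)  # -0.7437
```

<p>For purely positional access, <code>df[('accel', 'x')].iloc[0]</code> works as well.</p>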
|
<python><pandas>
|
2023-05-09 16:17:51
| 2
| 1,001
|
Steve
|
76,211,456
| 373,121
|
Intermittent PermissionError opening file in append mode in Python program
|
<p>I have a Python program that opens a text file in append mode (think of it being something like a log file) and writes some new output to it before closing it. Occasionally, a <code>PermissionError</code> exception is raised at the line <code>with open(log_file_path, "a") as f</code>.</p>
<p><strong>NB</strong> This question might look like a duplicate but all of the answers I have seen to similar questions talk about issues such as setting permissions on directories and suchlike. That does not apply here as this is something that happens <em>intermittently</em> (not every run and not at the same point when it does happen). Almost always, it happens at a point in a run after it has already been working for a while.</p>
<p>Another program (not Python BTW and not under my control) periodically reads the file. As this issue happens intermittently, my assumption is that there is some sort of file locking issue occurring caused by the other process opening the file for reading. If I place my file open statement inside a loop that retries a certain number of times, with a short sleep between each, the issue goes away. Problem solved - sort of - but it hardly seems satisfactory.</p>
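<p>For illustration, the retry workaround described above can be sketched like this (the names and retry counts are arbitrary, not a recommendation):</p>

```python
import time

def append_line(path, text, retries=5, delay=0.1):
    """Append a line, retrying briefly if another process transiently locks the file."""
    for attempt in range(retries):
        try:
            with open(path, "a") as f:
                f.write(text + "\n")
            return
        except PermissionError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```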
<p>Two questions:</p>
<ol>
<li><p>Does it seem plausible that this exception can arise in a Python program that opens a file for appending at the same time that another process is either opening it, or already has it open, for reading?</p>
</li>
<li><p>Is there a "correct" way to deal with this? I have found it surprisingly difficult to find any guidance. Surely it is quite a common scenario to want to read from files that are continuously being updated by another process and where the two programs are implemented independently (so no special protocol is possible).</p>
</li>
</ol>
<p><strong>Edit</strong>: clarifying a few points raised in comments.</p>
<ol>
<li>This issue has only been seen on Windows.</li>
<li>Both applications are running under the same user account.</li>
<li>Only one process writes to the file. The other process only ever reads from it.</li>
<li>When I said "not under my control", the truth of the situation is that the other application is developed by a another team in a different part of the company working in a separate repository and on a separate development schedule. Practically, any sort of coordinated fix is not possible at the moment. However, I do know enough about the consuming code to know what it does. As stated above, it only reads the file.</li>
<li>The consuming application is a kind of integration hub that runs and collects output from numerous other applications like mine. Reading of log files from the different applications is typical of what it does, and so far there is no special infrastructure or protocol for coordinating this and as far as I am aware none has been needed.</li>
</ol>
|
<python><file>
|
2023-05-09 16:15:15
| 1
| 475
|
Bob
|
76,211,421
| 4,035,257
|
Explicit formula for EWMA(n) in Python
|
<p>I have the following <code>pandas.core.series.Series</code>:
<code>x=pd.Series([17.39, 8.70, 2.90, 1.45, 1.45])</code> for <code>t=0,1,2,3,4</code>.
When I try <code>x.ewm(span=2).mean()</code> I get the following results:</p>
<p><code>17.39, 10.87, 5.35, 2.71, 1.86</code></p>
<p>My understanding is that <code>.ewm.mean()</code> uses the following explicit formula:</p>
<p><code>y[t] = (1 - alpha) * y[t-1] + alpha * x[t], where alpha = 2/(span+1)</code></p>
<p><code>y[0] = x[0]</code></p>
<p>Using the above formula:</p>
<p><code>EWMA[0] = x[0] = 17.39</code></p>
<p><code>EWMA[1] = (1-(2/(2+1)))*17.39 + (2/(2+1))*8.7 = 11.59</code> which is different from <code>10.87</code>.</p>
<p><code>EWMA[2] = (1-(2/(2+1)))*10.87 + (2/(2+1))*2.9 = 5.55</code> which is different from <code>5.35</code>.</p>
<p><code>EWMA[3] = (1-(2/(2+1)))*5.35 + (2/(2+1))*1.45 = 2.75</code> which is different from <code>2.71</code>, etc.</p>
<p>Could you please help me understand where these differences coming from? What I am missing or doing wrong here? Thank you.</p>
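<p>For what it's worth, the gap matches what pandas documents as the default <code>adjust=True</code> behaviour: instead of the plain recursion above, each output is a weighted sum of all observations so far divided by the sum of the weights. A small sketch reproducing the quoted numbers (pure Python, no pandas):</p>

```python
def ewma_adjusted(xs, span):
    """EWMA with adjust=True semantics: weighted sum divided by the weight total."""
    alpha = 2 / (span + 1)
    num = den = 0.0
    out = []
    for x in xs:
        num = (1 - alpha) * num + x      # running weighted sum of observations
        den = (1 - alpha) * den + 1.0    # running sum of weights
        out.append(num / den)
    return out

vals = ewma_adjusted([17.39, 8.70, 2.90, 1.45, 1.45], span=2)
print([round(v, 2) for v in vals])  # starts 17.39, 10.87, 5.35, ...
```

<p>Passing <code>adjust=False</code> to <code>ewm()</code> should instead reproduce the recursive formula used in the manual calculation.</p>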
|
<python><pandas><smoothing><moving-average><exponential>
|
2023-05-09 16:11:05
| 1
| 362
|
Telis
|
76,211,404
| 21,346,793
|
How to restrict user access to prevent visiting other user profiles
|
<p>In my app, any authenticated user can switch to another user's profile; they only need to know the pers_id.
How can I forbid this?
My views.py:</p>
<pre><code>@login_required
def home(request):
    pers_id = request.GET.get('pers_id', None)
    if pers_id is None:
        return redirect('/')
    user_profile = Profile.objects.get(profile_id=pers_id)
    try:
        user_memories = Memory.objects.get(profile_id=pers_id)
    except Memory.DoesNotExist:
        user_memories = False
    context = {'memories': user_memories, 'user_profile': user_profile}
    return render(request, 'home.html', context)


def vk_callback(request):
    try:
        vk_user = UserSocialAuth.objects.get(provider='vk-oauth2', user=request.user)
        vk_profile = vk_user.extra_data
        user_info = UserInfoGetter(vk_profile['access_token'])
        pers_id, first_name, last_name, photo_url = user_info.get_vk_info()
        try:
            user_profile = Profile.objects.get(profile_id=pers_id)
        except Profile.DoesNotExist:
            user_profile = Profile.objects.create(
                profile_id=pers_id,
                first_name=first_name,
                last_name=last_name,
                photo_url=photo_url
            )
        user_profile.first_name = first_name
        user_profile.last_name = last_name
        user_profile.photo_url = photo_url
        user_profile.save()
        return redirect(f'/home?pers_id={pers_id}')
    except UserSocialAuth.DoesNotExist:
        pass
    return redirect(home)
</code></pre>
<p>When the user authenticates, he is redirected to /home?pers_id={pers_id}. If an authorized user knows another pers_id, he can view other users' profiles. How can I fix it?</p>
|
<python><django><django-views><django-authentication>
|
2023-05-09 16:09:19
| 0
| 400
|
Ubuty_programmist_7
|
76,211,229
| 21,420,742
|
Getting Count grouped by ID in Pandas
|
<p>I know there are an extensive number of answers and questions related to count; I just can't find the one I am looking for. I have a dataset about employment history, and I need to see how many times the number 1 appears by ID, grouped by manager.</p>
<p>DF</p>
<pre><code> ID Date Job Manager Full-Time
101 05/2022 Sales 103 1
101 06/2022 Sales 103 1
102 08/2022 Tech 105 0
102 09/2022 Tech 105 1
103 11/2021 Sales 110 0
104 04/2022 Sales 103 0
104 05/2022 Sales 103 1
104 06/2022 Sales 103 1
104 07/2022 Sales 103 1
104 08/2022 Sales 103 0
......
201 10/2022 HR 198 1
</code></pre>
<p>What I want is not a count of how many 1s an ID has, but of how many IDs have a 1 anywhere in their history, grouped by the manager they belong to.</p>
<p>Desired Output</p>
<pre><code>Manager    Full-Time Count
103        2
105        1
110        3
....
198        2
</code></pre>
<p>I tried using <code>df['FT Count'] = df.groupby('Manager')['Full-Time'].nunique()</code></p>
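<p>For reference, one way to express "how many distinct IDs under each manager ever have Full-Time == 1" is to filter first and then count unique IDs; a sketch with a trimmed-down version of the data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID':        [101, 101, 102, 102, 103, 104, 104],
    'Manager':   [103, 103, 105, 105, 110, 103, 103],
    'Full-Time': [1,   1,   0,   1,   0,   0,   1],
})

out = (df[df['Full-Time'] == 1]        # keep only the rows where Full-Time is 1
         .groupby('Manager')['ID']
         .nunique()                    # distinct IDs per manager, not row counts
         .reset_index(name='Full-Time Count'))
print(out)
```

<p>Managers whose reports never have a 1 simply drop out of the result; a <code>reindex</code> over all managers would restore them with a count of 0 if needed.</p>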
|
<python><python-3.x><pandas><dataframe><count>
|
2023-05-09 15:50:14
| 1
| 473
|
Coding_Nubie
|
76,211,013
| 10,481,744
|
Standardize the text in the pandas column with some common string
|
<p>I have following DataFrame df</p>
<pre><code>id1 id2 text_column
key1 220 ABC corp
key1 220 ABC Pvt Ltd
key2 300 PQR Ltd
key2 300 PQR
key2 300 PQR something else
key2 400 XYZ company
</code></pre>
<p>I don't know what kind of text will be in the text_column, but for the same id1 and id2 I want to identify similar strings in the "text_column" and replace them with a standard text. I want the text in the text_column to be standardized to something like the below:</p>
<pre><code>id1 id2 text_column
key1 220 ABC corp
key1 220 ABC corp
key2 300 PQR
key2 300 PQR
key2 300 PQR
key2 400 XYZ company
</code></pre>
<p>To compute the similarity score I am using the following code:</p>
<pre><code>import pandas as pd
import numpy as np
from fuzzywuzzy import process,fuzz
df=pd.DataFrame(data={"id1":["key1","key1","key2","key2","key2","key2"],"id2":[220,220,300,300,300,400],"text_column":["ABC corp","ABC Pvt Ltd","PQR Ltd","PQR","PQR something else","XYZ company"]})
filters=['id1','id2']
df_text_column=df.groupby(filters)['text_column'].apply(list).reset_index().rename(columns={"text_column":"lst_text_column"})
lst_temp=df_text_column.columns.difference(df.columns).union(filters).to_list()
df = pd.merge(df, df_text_column[lst_temp], on = filters, how = 'left')
df['text_column smilarity score']=df.apply(lambda x:process.extract(x['text_column'],x['lst_text_column']),axis=1)
df['text_column smilarity score']=df['text_column smilarity score'].apply(lambda x:[i[1] for i in x])
df['text_column smilarity score min']=df['text_column smilarity score'].apply(lambda x:np.min(x))
</code></pre>
<p>I would like the text in the text_column to be standardized for similar strings in the same way as mentioned above. This way I would be able to use the text_column in groupby to perform further computations.</p>
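<p>As a starting point, here is a deliberately naive sketch: within each <code>(id1, id2)</code> group, replace every string with the group's shortest member. It happens to reproduce the example output above, but on real data you would gate the replacement with a similarity threshold, such as the fuzzywuzzy scores already computed:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id1": ["key1", "key1", "key2", "key2", "key2", "key2"],
    "id2": [220, 220, 300, 300, 300, 400],
    "text_column": ["ABC corp", "ABC Pvt Ltd", "PQR Ltd", "PQR",
                    "PQR something else", "XYZ company"],
})

# Naive heuristic: take the shortest string per (id1, id2) group as the standard text.
df["text_column"] = (df.groupby(["id1", "id2"])["text_column"]
                       .transform(lambda s: min(s, key=len)))
print(df["text_column"].tolist())
# → ['ABC corp', 'ABC corp', 'PQR', 'PQR', 'PQR', 'XYZ company']
```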
|
<python><pandas><nlp><text-processing>
|
2023-05-09 15:29:52
| 1
| 340
|
TLanni
|
76,210,988
| 3,821,009
|
Shorter way to simulate dropWhile in polars
|
<p>Say I have this:</p>
<pre class="lang-py prettyprint-override"><code>>>> import polars as pl
>>> df = pl.DataFrame(dict(j=[1,1,1,2,1,2,1,2,2], k=[1,2,3,4,5,6,7,8,9]))
>>> df
shape: (9, 2)
┌─────┬─────┐
│ j ┆ k │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 1 ┆ 1 │
│ 1 ┆ 2 │
│ 1 ┆ 3 │
│ 2 ┆ 4 │
│ 1 ┆ 5 │
│ 2 ┆ 6 │
│ 1 ┆ 7 │
│ 2 ┆ 8 │
│ 2 ┆ 9 │
└─────┴─────┘
</code></pre>
<p>Is there a shorter way to simulate <code>dropWhile</code> (i.e. remove all the rows at the top of the DataFrame that satisfy a certain condition) than this (which removes all initial rows that have <code>j == 1</code>):</p>
<pre class="lang-py prettyprint-override"><code>>>> df.slice(df.with_row_index().filter(pl.col('j') != 1).select('index').head(1).item())
shape: (6, 2)
┌─────┬─────┐
│ j ┆ k │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 2 ┆ 4 │
│ 1 ┆ 5 │
│ 2 ┆ 6 │
│ 1 ┆ 7 │
│ 2 ┆ 8 │
│ 2 ┆ 9 │
└─────┴─────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2023-05-09 15:26:58
| 2
| 4,641
|
levant pied
|
76,210,895
| 15,067,358
|
Issues installing packages from requirements.txt for Django project
|
<p>I'm working with Python 3.11.3 and I'm having difficulty installing from my requirements.txt file; this is my output:</p>
<pre><code>2023-05-09 10:04:59.539 [info] Running Env creation script: [
'c:\\Users\\alex\\AppData\\Local\\Programs\\Python\\Python311\\python.exe',
'c:\\Users\\alex\\.vscode\\extensions\\ms-python.python-2023.8.0\\pythonFiles\\create_venv.py',
'--git-ignore',
'--requirements',
'c:\\Coding\\api\\requirements.txt'
]
2023-05-09 10:04:59.830 [info] EXISTING_VENV:c:\Coding\api\.venv\Scripts\python.exe
CREATE_VENV.UPGRADING_PIP
Running: c:\Coding\api\.venv\Scripts\python.exe -m pip install --upgrade pip
2023-05-09 10:05:01.188 [info] Requirement already satisfied: pip in c:\coding\api\.venv\lib\site-packages (23.1.2)
2023-05-09 10:05:01.701 [info] CREATE_VENV.UPGRADED_PIP
VENV_INSTALLING_REQUIREMENTS: ['c:\\Coding\\api\\requirements.txt']
VENV_INSTALLING_REQUIREMENTS: c:\Coding\api\requirements.txt
Running: c:\Coding\api\.venv\Scripts\python.exe -m pip install -r c:\Coding\api\requirements.txt
2023-05-09 10:05:03.181 [info] Collecting asgiref==3.6.0 (from -r c:\Coding\api\requirements.txt (line 1))
2023-05-09 10:05:03.195 [info] Using cached asgiref-3.6.0-py3-none-any.whl (23 kB)
2023-05-09 10:05:03.281 [info] Collecting autopep8==2.0.2 (from -r c:\Coding\api\requirements.txt (line 2))
2023-05-09 10:05:03.296 [info] Using cached autopep8-2.0.2-py2.py3-none-any.whl (45 kB)
2023-05-09 10:05:03.361 [info] Collecting distlib==0.3.6 (from -r c:\Coding\api\requirements.txt (line 3))
2023-05-09 10:05:03.379 [info] Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
2023-05-09 10:05:03.579 [info] Collecting Django==4.2.1 (from -r c:\Coding\api\requirements.txt (line 4))
2023-05-09 10:05:03.807 [info] Downloading Django-4.2.1-py3-none-any.whl (8.0 MB)
2023-05-09 10:05:19.573 [info]
2023-05-09 10:05:19.574 [info] --- 0.6/8.0 MB 819.2 kB/s eta 0:00:09
2023-05-09 10:05:19.574 [info]
2023-05-09 10:05:19.596 [info] ERROR: Exception:
Traceback (most recent call last):
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher
yield
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read
data = self._fp_read(amt) if not fp_closed else b""
^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 90, in read
data = self.__fp.read(amt)
^^^^^^^^^^^^^^^^^^^
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 466, in read
s = self.fp.read(amt)
^^^^^^^^^^^^^^^^^
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1278, in recv_into
return self.read(nbytes, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1134, in read
return self._sslobj.read(len, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\cli\base_command.py", line 169, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __bool__
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in __bool__
return any(self)
^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in __init__
super().__init__(
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\operations\prepare.py", line 516, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\operations\prepare.py", line 587, in _prepare_linked_requirement
local_file = unpack_url(
^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\operations\prepare.py", line 166, in unpack_url
file = get_http_url(
^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\operations\prepare.py", line 107, in get_http_url
from_path, content_type = download(link, temp_dir.path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\network\download.py", line 147, in __call__
for chunk in chunks:
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\cli\progress_bars.py", line 53, in _rich_progress_bar
for chunk in iterable:
File "c:\Coding\api\.venv\Lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks
for chunk in response.raw.stream(
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream
data = self.read(amt=amt, decode_content=decode_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 560, in read
with self._error_catcher():
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "c:\Coding\api\.venv\Lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
2023-05-09 10:05:19.798 [info] Traceback (most recent call last):
File "c:\Users\alex\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\create_venv.py", line 77, in run_process
2023-05-09 10:05:19.798 [info] subprocess.run(args, cwd=os.getcwd(), check=True)
File "c:\Users\alex\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 571, in run
2023-05-09 10:05:19.799 [info] raise CalledProcessError(retcode, process.args,
2023-05-09 10:05:19.800 [info] subprocess.CalledProcessError: Command '['c:\\Coding\\api\\.venv\\Scripts\\python.exe', '-m', 'pip', 'install', '-r', 'c:\\Coding\\api\\requirements.txt']' returned non-zero exit status 2.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\alex\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\create_venv.py", line 232, in <module>
2023-05-09 10:05:19.801 [info]
2023-05-09 10:05:19.801 [info] main(sys.argv[1:])
File "c:\Users\alex\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\create_venv.py", line 228, in main
2023-05-09 10:05:19.802 [info] install_requirements(venv_path, args.requirements)
File "c:\Users\alex\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\create_venv.py", line 97, in install_requirements
2023-05-09 10:05:19.802 [info] run_process(
File "c:\Users\alex\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\create_venv.py", line 79, in run_process
2023-05-09 10:05:19.803 [info] raise VenvError(error_message)
VenvError: CREATE_VENV.PIP_FAILED_INSTALL_REQUIREMENTS
2023-05-09 10:05:19.838 [error] Error while running venv creation script: CREATE_VENV.PIP_FAILED_INSTALL_REQUIREMENTS
2023-05-09 10:05:19.839 [error] CREATE_VENV.PIP_FAILED_INSTALL_REQUIREMENTS
</code></pre>
<p>This is what my requirements.txt is:</p>
<pre><code>asgiref==3.6.0
autopep8==2.0.2
distlib==0.3.6
Django==4.2.1
django-dotenv==1.4.2
environ==1.0
filelock==3.12.0
platformdirs==3.5.0
psycopg2-binary==2.9.6
pycodestyle==2.10.0
python-dotenv==1.0.0
sqlparse==0.4.4
tzdata==2023.3
virtualenv==20.23.0
</code></pre>
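<p>For what it's worth, the traceback bottoms out in a <code>ReadTimeoutError</code> from <code>files.pythonhosted.org</code> while downloading the Django wheel, i.e. a network timeout rather than a problem with the requirements themselves. One workaround (a sketch; tune the numbers to taste) is to rerun the install with a longer timeout and more retries:</p>

```shell
# Give pip a 60-second socket timeout and up to 10 download retries.
pip install --timeout 60 --retries 10 -r c:\Coding\api\requirements.txt
```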
|
<python><django><pip>
|
2023-05-09 15:17:09
| 1
| 364
|
code writer 3000
|
76,210,821
| 424,957
|
How to plot a data on half of Y axis in log scale?
|
<p>At the end, my son gave this answer, <code>y = sqrt(y / 2)</code>; I think it is the right answer, do you think so?
<a href="https://i.sstatic.net/PHhKb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PHhKb.jpg" alt="enter image description here" /></a>
At last, I found <code>y = sqrt(h) + log(0.5, 10)</code> is the nearest to what I need; do you have a better idea?
<a href="https://i.sstatic.net/AYmiE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AYmiE.jpg" alt="enter image description here" /></a>
I found the formula is still wrong when y = 2; how can I get a better formula for all cases?</p>
<p><code>sqrt(y)</code> is the answer, but with a correction for minY = 0.5; the final version is <code>y = max(sqrt(0.5), sqrt(y) - sqrt(0.5))</code>.
<a href="https://i.sstatic.net/1O6Sy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1O6Sy.jpg" alt="enter image description here" /></a>
<code>sqrt(y)</code> will be as below
<a href="https://i.sstatic.net/EgVYk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EgVYk.jpg" alt="enter image description here" /></a></p>
<p>I modified it as below because minY = 0.5, otherwise an exception happens! log(y)/2 seems wrong.</p>
<pre><code>y = log(y) / 2
if y < 0.5:
y = 0.75
</code></pre>
<p><a href="https://i.sstatic.net/kpvxR.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kpvxR.jpg" alt="enter image description here" /></a>
I want to show the white data in the middle of the green bar; how can I calculate the y location on the Y axis? On a linear scale I can use y / 2, but I am not sure how to do it on a log scale. I tried 10 ** (log(y, 10) - log(2, 10)), but it seems incorrect.
<a href="https://i.sstatic.net/zqnMQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zqnMQ.jpg" alt="enter image description here" /></a></p>
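<p>A note that may tie the attempts above together: on a log-scaled axis, the visual midpoint of a bar spanning from <code>y0</code> (the axis bottom, here 0.5) up to <code>y</code> is the <em>geometric</em> mean of the endpoints, <code>sqrt(y0 * y)</code>. With <code>y0 = 0.5</code> this is exactly <code>sqrt(y / 2)</code>, which agrees with the answer quoted at the top:</p>

```python
import math

def log_midpoint(y, y0=0.5):
    """Visual midpoint between y0 and y on a log axis: the geometric mean."""
    return math.sqrt(y0 * y)

print(log_midpoint(2.0))  # sqrt(0.5 * 2.0) = 1.0
print(log_midpoint(8.0))  # sqrt(0.5 * 8.0) = 2.0
```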
|
<python><matplotlib>
|
2023-05-09 15:10:16
| 1
| 2,509
|
mikezang
|
76,210,742
| 1,717,931
|
PySpark concat a column of lists over a weekly or monthly timeperiod
|
<p>I am new to PySpark/Databricks. I have a question related to concatenating a column of lists based on weekly or monthly time-periods. Here is the code I have with expected results.</p>
<pre><code>dates = ['2023-04-01', '2023-04-02', '2023-04-03', '2023-04-04', '2023-04-05', '2023-04-06', '2023-04-07', '2023-04-08', '2023-04-09', '2023-04-10', '2023-04-11', '2023-04-12', '2023-04-13', '2023-04-14']
brands = [['bmw', 'vw'], ['chevy', 'buick'], ['nissan', 'lexus', 'buick'], ['bmw', 'nissan', 'lexus'], ['bmw', 'vw', 'nissan', 'lexus'], ['bmw', 'vw', 'chevy'], ['chevy', 'bmw', 'buick'], ['bmw', 'vw'], ['chevy', 'nissan'], ['nissan', 'lexus', 'vw'], ['bmw', 'nissan', 'vw'], ['bmw', 'vw', 'nissan', 'lexus'], ['bmw', 'lexus', 'chevy'], ['chevy', 'bmw', 'buick']]
weights = [[0.99, 0.98], [0.97, 0.96], [0.95, 0.94, 0.93], [0.98, 0.96, 0.95], [0.97, 0.96, 0.95, 0.94], [0.975, 0.964, 0.952], [0.98, 0.976, 0.967],
[0.978, 0.975], [0.928, 0.935], [0.982, 0.961, 0.952], [0.97, 0.96, 0.952], [0.975, 0.964, 0.952, 0.943], [0.982, 0.976, 0.967], [0.992, 0.987, 0.978]]
df = spark.createDataFrame(zip(dates, brands, weights), ['date', 'brands', 'weight'])
df.show()
+----------+--------------------+--------------------+
| date| brands| weight|
+----------+--------------------+--------------------+
|2023-04-01| [bmw, vw]| [0.99, 0.98]|
|2023-04-02| [chevy, buick]| [0.97, 0.96]|
|2023-04-03|[nissan, lexus, e...| [0.95, 0.94, 0.93]|
|2023-04-04|[bmw, nissan, lexus]| [0.98, 0.96, 0.95]|
|2023-04-05|[bmw, vw, nissan,...|[0.97, 0.96, 0.95...|
|2023-04-06| [bmw, vw, chevy]|[0.975, 0.964, 0....|
|2023-04-07| [chevy, bmw, buick]|[0.98, 0.976, 0.967]|
|2023-04-08| [bmw, vw]|[0.99, 0.987, 0.978]|
|2023-04-09| [chevy, nissan]| [0.978, 0.975]|
|2023-04-10| [nissan, lexus, vw]| [0.972, 0.963]|
|2023-04-11| [bmw, security, vw]|[0.955, 0.942, 0....|
|2023-04-12|[bmw, vw, nissan,...|[0.982, 0.961, 0....|
|2023-04-13| [bmw, lexus, chevy]|[0.97, 0.96, 0.95...|
|2023-04-14| [chevy, bmw, buick]|[0.975, 0.964, 0....|
+----------+--------------------+--------------------+
df1 = df.withColumn("DateFormatted", to_timestamp(col("date"), "yyyy-MM-dd"))
df1.show()
+----------+--------------------+--------------------+-------------------+
| date| brands| weight| DateFormatted|
+----------+--------------------+--------------------+-------------------+
|2023-04-01| [bmw, vw]| [0.99, 0.98]|2023-04-01 00:00:00|
|2023-04-02| [chevy, buick]| [0.97, 0.96]|2023-04-02 00:00:00|
|2023-04-03|[nissan, lexus, e...| [0.95, 0.94, 0.93]|2023-04-03 00:00:00|
|2023-04-04|[bmw, nissan, lexus]| [0.98, 0.96, 0.95]|2023-04-04 00:00:00|
|2023-04-05|[bmw, vw, nissan,...|[0.97, 0.96, 0.95...|2023-04-05 00:00:00|
|2023-04-06| [bmw, vw, chevy]|[0.975, 0.964, 0....|2023-04-06 00:00:00|
|2023-04-07| [chevy, bmw, buick]|[0.98, 0.976, 0.967]|2023-04-07 00:00:00|
|2023-04-08| [bmw, vw]|[0.99, 0.987, 0.978]|2023-04-08 00:00:00|
|2023-04-09| [chevy, nissan]| [0.978, 0.975]|2023-04-09 00:00:00|
|2023-04-10| [nissan, lexus, vw]| [0.972, 0.963]|2023-04-10 00:00:00|
|2023-04-11| [bmw, security, vw]|[0.955, 0.942, 0....|2023-04-11 00:00:00|
|2023-04-12|[bmw, vw, nissan,...|[0.982, 0.961, 0....|2023-04-12 00:00:00|
|2023-04-13| [bmw, lexus, chevy]|[0.97, 0.96, 0.95...|2023-04-13 00:00:00|
|2023-04-14| [chevy, bmw, buick]|[0.975, 0.964, 0....|2023-04-14 00:00:00|
+----------+--------------------+--------------------+-------------------+
</code></pre>
<p>I converted the date column into a timestamp column so I can do timestamp-related aggregation. Now, here is what I want:</p>
<ol>
<li>On a weekly basis (say 1st Apr - 7th Apr, 8th Apr - 14th Apr...), I want the 'brands' column and 'weight' column to be concatenated into just one row (of a new dataframe).</li>
<li>I want another dataframe, monthly_aggregate_df, that will do similar stuff for every calendar month.</li>
</ol>
<p>I tried these but hit several issues:</p>
<p>window(column, window duration, sliding duration, starting offset)</p>
<pre><code> df2 = df1.groupBy(window(col("DateFormatted"), "1 week", "1 week", "64 hours")).agg(concat("brands") as "brands_concat").select("window.start", "window.end", "DateFormatted", "brands_concat").show()
SyntaxError: invalid syntax
File "<command-4423978228267630>", line 2
df2 = df1.groupBy(window(col("DateFormatted"), "1 week", "1 week", "64 hours")).agg(concat("brands") as "brands_concat").select("window.start", "window.end", "DateFormatted", "brands_concat").show()
^
SyntaxError: invalid syntax
</code></pre>
<p>Another try:</p>
<pre><code>import pyspark.sql.functions as f
df.groupBy("date").agg(f.concat_ws(",", f.collect_list("brands")).alias("brands")).show()
AnalysisException: cannot resolve 'concat_ws(',', collect_list(`brands`))' due to data type mismatch: argument 2 requires (array<string> or string) type, however, 'collect_list(`brands`)' is of array<array<string>> type.;
</code></pre>
<p>It looks like concat_ws concatenates only strings, not lists. Maybe I need to use some sort of UDF for this. So I tried array_join, but it doesn't work with grouped data:</p>
<pre><code>from pyspark.sql.functions import array_join
df2.withColumn("week_strt_day",date_sub(next_day(col("DateFormatted"),"sunday"),7)).groupBy("week_strt_day").apply(array_join("brands", ",").alias("brands")).orderBy("week_strt_day").show()
ValueError: Invalid udf: the udf argument must be a pandas_udf of type GROUPED_MAP.
</code></pre>
|
<python><pyspark><concatenation><aggregate>
|
2023-05-09 15:01:09
| 2
| 2,501
|
user1717931
|
76,210,727
| 15,452,601
|
How to find out which object a name is resolved to at runtime
|
<p>To patch an object one needs to use the right path from the call site. This is sufficiently fiddly to get its own section in the <a href="https://docs.python.org/3/library/unittest.mock.html#id6" rel="nofollow noreferrer">unittest docs</a>.</p>
<p>Sometimes when this is a pain to figure out I'd rather just set a breakpoint right before calling the object to patch and then inspect it to see what is actually looked up. Then I could exit the debugger and patch the right object. This feels like an easy problem, akin to <code>which</code> on the shell, but I can't think of how to do it.</p>
<p>Condensing the example from the docs:</p>
<p>a.py</p>
<pre><code>class SomeClass: pass
</code></pre>
<p>b.py</p>
<pre><code>from a import SomeClass
@breakpoint # resolve SomeClass into b.someclass here
def fn():
s = SomeClass()
</code></pre>
<blockquote>
<p>Now we want to test some_function but we want to mock out SomeClass using patch() [...] If we use patch() to mock out a.SomeClass then it will have no effect on our test[...].</p>
</blockquote>
<blockquote>
<p>The key is to patch out SomeClass where it is used (or where it is looked up). In this case some_function will actually look up SomeClass in module b, where we have imported it. The patching should look like:</p>
</blockquote>
<pre><code>@patch('b.SomeClass')
</code></pre>
<p>How do I get <code>b.SomeClass</code> from the debugger?</p>
<p>[Note that the imports are easy in the example given. But sometimes things are sufficiently deep or convoluted it can take a bit of puzzling to get it right. A separate problem is expressing <code>b</code> correctly from the test function, but assuming everything is in packages the canonical form should work.]</p>
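<p>For what it's worth, a sketch of answering this without the debugger: a function's <code>__globals__</code> is the namespace its call-time name lookup uses, and its <code>__module__</code> gives the prefix of the patch path. The <code>SomeClass</code> below is a stand-in for the <code>from a import SomeClass</code> in module <code>b</code>:</p>

```python
# Sketch: ask the function itself which object a global name resolves to.
def resolve(func, name):
    """Return the object `func` will find for global `name`, plus a patch target."""
    obj = func.__globals__[name]             # the same lookup the call performs
    return obj, f"{func.__module__}.{name}"  # e.g. 'b.SomeClass' -> patch('b.SomeClass')

class SomeClass:  # stand-in for `from a import SomeClass` in module b
    pass

def fn():
    s = SomeClass()
    return s

obj, target = resolve(fn, "SomeClass")
```

<p>From a live pdb breakpoint inside <code>fn</code>, <code>p fn.__globals__['SomeClass']</code> (or simply <code>p SomeClass.__module__</code>) shows the same information; <code>target</code> is the string to hand to <code>patch()</code>.</p>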
|
<python><unit-testing><pytest><introspection><monkeypatching>
|
2023-05-09 14:59:50
| 1
| 6,024
|
2e0byo
|
76,210,610
| 7,253,901
|
Replacing entire column value in Pandas series based on regex condition
|
<p><strong>What I need</strong></p>
<p>I need to replace an entire column value in a Pandas series, when the condition is met. For example, consider the following Series:</p>
<pre><code>import pandas as pd

d = ["foo", "bar", "bbl dklha", "bbl hoi", "bbl lala ho", "bbl ljhkh"]
ser = pd.Series(data=d)
</code></pre>
<p>Which looks as follows:</p>
<pre><code>0 foo
1 bar
2 bbl dklha
3 bbl hoi
4 bbl lala ho
5 bbl ljhkh
dtype: object
</code></pre>
<p>What I need now, is every string that starts with bbl should be replaced with "bbl leerling", so like this:</p>
<pre><code>0 foo
1 bar
2 bbl leerling
3 bbl leerling
4 bbl leerling
5 bbl leerling
dtype: object
</code></pre>
<p>I'm using regex for this (I need it to be regex, this example is simplified but in reality the regexes are more complex).</p>
<p><strong>What I've tried</strong></p>
<pre><code>ser = ser.str.replace(pat=r'^bbl', repl="bbl leerling", regex=True)
ser = ser.replace(to_replace=r'^bbl', value="bbl leerling", regex=True)
</code></pre>
<p>But both only replace the occurrence of the substring with the desired string, like so:</p>
<pre><code>0 foo
1 bar
2 bbl leerling dklha
3 bbl leerling hoi
4 bbl leerling lala ho
5 bbl leerling ljhkh
dtype: object
</code></pre>
<p>How do I make it so that the <em>entire</em> value is replaced? I was looking for some kind of argument in <code>Series.replace</code> or <code>Series.str.replace</code> that does this, but there doesn't seem to be one. I don't want to loop over this series, use a list comprehension or .apply, because this code is going to be run on a spark production cluster where those constructs are unavailable/infeasible.</p>
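<p>One vectorized approach consistent with the attempts above: make the pattern itself consume the rest of the string, so the replacement covers the entire value. A sketch:</p>

```python
import pandas as pd

d = ["foo", "bar", "bbl dklha", "bbl hoi", "bbl lala ho", "bbl ljhkh"]
ser = pd.Series(data=d)

# '^bbl.*' matches from the anchored prefix to the end of the value,
# so the replacement *is* the whole new value:
out = ser.str.replace(r'^bbl.*', 'bbl leerling', regex=True)
```

<p>An alternative that keeps the original pattern is <code>ser.mask(ser.str.match(r'^bbl'), 'bbl leerling')</code>, which swaps matching values wholesale; both are vectorized, with no loops, list comprehensions, or <code>.apply</code>.</p>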
|
<python><pandas>
|
2023-05-09 14:48:18
| 3
| 2,825
|
Psychotechnopath
|
76,210,593
| 1,497,720
|
Extract text from chat windows
|
<p>I tried to parse the attached image,
<a href="https://i.sstatic.net/gX4iX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gX4iX.png" alt="chat screenshot" /></a></p>
<p>with the desired outcome of</p>
<pre><code>A: "好的呀:)"
B: "Mengjia, 今天的ARN meeting你会从哪里出发呢?"
A: "我今天早的话会从家里出去"
B: "好呀 到时候一起"
B: "顺便跟你取经 怎么转正"
</code></pre>
<p>I use the following code</p>
<pre><code>import pytesseract
import cv2
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load the image
img = cv2.imread(r'D:\C_Drive\Dropbox\Important\Python_Code\self_util\ocr\input\chinese_text.png')  # raw string avoids backslash escapes
for i in range(3,14):
print(i)
# Set the language for OCR to Chinese (Simplified)
config = (f"-l chi_sim --psm {i}")
# Perform OCR using Tesseract
text = pytesseract.image_to_string(img, config=config)
# Print the extracted text
print(text)
</code></pre>
<p>I got an inaccurate outcome</p>
<pre><code>一 我今天早的话会从家里过去。
</code></pre>
<p>How should I modify it to get the desired outcome? The method is flexible; any approach that produces the correct output will do.</p>
|
<python><ocr><tesseract><python-tesseract>
|
2023-05-09 14:45:58
| 1
| 18,765
|
william007
|
76,210,582
| 17,561,414
|
string indices must be integers ERROR python
|
<p>I have a pyspark dataframe created this way:</p>
<pre><code>dfquery=sqlContext.sql("""
SELECT CONCAT('CAST(', target_column_name, ' AS ',
CASE
WHEN target_column_data_type IN ('NVARCHAR', 'VARCHAR', 'CHAR', 'NCHAR') THEN 'STRING'
WHEN target_column_data_type = 'DECIMAL' THEN concat("DECIMAL", "(",target_column_number_length, ",", target_column_number_precision, ")")
ELSE target_column_data_type
END,
' )',
CASE
WHEN KEYFLAG IS NOT NULL THEN concat(' AS BK_',target_column_name)
ELSE concat(" AS ", target_column_name)
END) AS concatenated_string
FROM mdp_dev.cfg.schema_
where source_object_name = '${personal.table}'""")
</code></pre>
<p>This gives me a one-column dataframe, which I turn into a list with this code:</p>
<pre><code>mylist = list(
dfquery.select('concatenated_string').toPandas()['concatenated_string']
)
</code></pre>
<p>My goal is to loop through this list and build the query string, which I attempted to do in the code below:</p>
<pre><code>sql_query_test = f"""CREATE OR REPLACE VIEW {catalog}.{schema}.`vw_{source}_{table}`
AS SELECT
"""
for c in mylist:
sql_query_test +="`"+ c["COLUMN_NAME"] + "`,\n"
sql_query = sql_query_test[:-2]
sql_query += f"""\n FROM {catalog}.{schema}.`{source}_{table}`"""
</code></pre>
<p>but I get an error pointing to this line</p>
<pre><code>sql_query_test +="`"+ c["COLUMN_NAME"] + "`,\n"
</code></pre>
<p>saying that</p>
<blockquote>
<blockquote>
<p>TypeError: string indices must be integers</p>
</blockquote>
</blockquote>
<p>Any advice on how to fix this?</p>
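<p>For context, <code>toPandas()</code> here yields plain strings, so <code>c["COLUMN_NAME"]</code> indexes a <em>string</em> with a <code>str</code> key, which raises exactly this TypeError. Each element already is a full SELECT expression, so the strings can be joined directly. A minimal sketch with hypothetical row values and names:</p>

```python
# Hypothetical stand-ins for the real values:
catalog, schema, source, table = "mdp_dev", "cfg", "src", "tbl"
mylist = [
    "CAST(id AS STRING ) AS BK_id",
    "CAST(amount AS DECIMAL(10,2) ) AS amount",
]

# Each element is already the complete column expression; join them as-is.
sql_query = (
    f"CREATE OR REPLACE VIEW {catalog}.{schema}.`vw_{source}_{table}`\nAS SELECT\n"
    + ",\n".join(mylist)
    + f"\nFROM {catalog}.{schema}.`{source}_{table}`"
)
```

<p><code>",\n".join(...)</code> also avoids the manual trailing-comma trimming (<code>sql_query_test[:-2]</code>) in the original loop.</p>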
|
<python><pyspark>
|
2023-05-09 14:45:06
| 1
| 735
|
Greencolor
|
76,210,546
| 5,573,294
|
Resolving conflicts in python library dependency versions in apache/airflow docker image (due to dbt-bigquery library)
|
<pre><code>#15 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
#15 google-cloud-aiplatform 1.16.1 requires google-cloud-bigquery<3.0.0dev,>=1.15.0, but you have google-cloud-bigquery 3.10.0 which is incompatible.
#15 google-ads 18.0.0 requires protobuf!=3.18.*,!=3.19.*,<=3.20.0,>=3.12.0, but you have protobuf 3.20.3 which is incompatible.
</code></pre>
<p>We are receiving these errors in the logs of <code>docker-compose build</code> when building our apache airflow image. According to an LLM:</p>
<ul>
<li>The first conflict is between google-cloud-aiplatform and google-cloud-bigquery. The google-cloud-aiplatform library requires a version of google-cloud-bigquery that is less than 3.0.0dev and greater than or equal to 1.15.0, but you have google-cloud-bigquery version 3.10.0 installed which is incompatible.</li>
<li>The second conflict is between google-ads and protobuf. The google-ads library requires a version of protobuf that is less than or equal to 3.20.0 and greater than or equal to 3.12.0, excluding versions 3.18.* and 3.19.*, but you have protobuf version 3.20.3 installed which is incompatible.</li>
</ul>
<p>It's worth noting that <code>dbt-bigquery==1.5.0</code> is a new release from only a few weeks ago.</p>
<p>Here is our <strong>Dockerfile</strong>:</p>
<pre><code>FROM --platform=linux/amd64 apache/airflow:2.5.3
# install mongodb-org-tools
USER root
RUN apt-get update && apt-get install -y gnupg software-properties-common && \
curl -fsSL https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add - && \
add-apt-repository 'deb https://repo.mongodb.org/apt/debian buster/mongodb-org/4.2 main' && \
apt-get update && apt-get install -y mongodb-org-tools
USER airflow
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
</code></pre>
<p>and our <strong>requirements.txt</strong></p>
<pre><code>gcsfs==0.6.1 # Google Cloud Storage file system interface
ndjson==0.3.1 # Newline delimited JSON parsing and serialization
pymongo==3.12.1 # MongoDB driver for Python
dbt-bigquery==1.5.0 # dbt adapter for Google BigQuery
numpy==1.21.1 # Numerical computing in Python
pandas==1.3.1 # Data manipulation and analysis library
billiard # Multiprocessing replacement, to avoid "daemonic processes are not allowed to have children" error using Pool
</code></pre>
<p>How can we resolve these dependency conflicts? How can we even tell which library dependencies belong to which libraries in our requirements.txt? My assumption is that <code>google-cloud-aiplatform</code> and <code>google-cloud-bigquery</code> are both dependencies of <code>dbt-bigquery</code>; however, if they were dependencies of the same library, I wouldn't expect a dependency conflict.</p>
<p>Edit: some useful logs from the build:</p>
<pre><code>Requirement already satisfied: protobuf>=3.18.3 in /home/airflow/.local/lib/python3.7/site-packages (from dbt-core~=1.5.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (3.20.0)
Collecting google-cloud-bigquery~=3.0
Downloading google_cloud_bigquery-3.10.0-py2.py3-none-any.whl (218 kB)
Requirement already satisfied: proto-plus<2.0.0dev,>=1.15.0 in /home/airflow/.local/lib/python3.7/site-packages (from google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (1.19.6)
Requirement already satisfied: grpcio<2.0dev,>=1.47.0 in /home/airflow/.local/lib/python3.7/site-packages (from google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (1.53.0)
Requirement already satisfied: google-resumable-media<3.0dev,>=0.6.0 in /home/airflow/.local/lib/python3.7/site-packages (from google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (2.4.1)
Requirement already satisfied: google-cloud-core<3.0.0dev,>=1.6.0 in /home/airflow/.local/lib/python3.7/site-packages (from google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (2.3.2)
Requirement already satisfied: google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5 in /home/airflow/.local/lib/python3.7/site-packages (from google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (2.8.2)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.56.2 in /home/airflow/.local/lib/python3.7/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (1.56.4)
Requirement already satisfied: grpcio-status<2.0dev,>=1.33.2 in /home/airflow/.local/lib/python3.7/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0,<3.0.0dev,>=1.31.5->google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5)) (1.48.2)
Requirement already satisfied: google-crc32c<2.0dev,>=1.0 in /home/airflow/.local/lib/python3.7/site-packages (from google-resumable-media<3.0dev,>=0.6.0->google-cloud-bigquery~=3.0->dbt-bigquery==1.5.0->-r /requirements.txt (line 5))
</code></pre>
<p><code>google-cloud-aiplatform</code> and <code>google-ads</code> do not appear a single time in the build logs other than in the error message.</p>
|
<python><dockerfile><conflicting-libraries><dbt-bigquery>
|
2023-05-09 14:42:16
| 1
| 10,679
|
Canovice
|
76,210,399
| 3,399,638
|
Inserting Python List to Pandas dataframe rows and setting row values to NaN
|
<p>I'm looking for the Pandas way of inserting a Python list into a specific dataframe column (here, the Date index) as new rows. After inserting, the remaining values of those rows should be filled with NaN.</p>
<p>For example, the list looks similar to the below:</p>
<pre><code>list_dates = ['2021-12-01', '2021-12-02', '2021-12-03', '2021-12-06', '2021-12-08']
</code></pre>
<p>The Pandas Dataframe appears as:</p>
<pre><code>df_sentiment.iloc[10:20, : 10]
ticker AAPL.OQ ABBV.N ABT.N ACN.N ADBE.OQ AIG.N AMD.OQ AMGN.OQ AMT.N AMZN.OQ
Date
2021-11-24 NaN NaN NaN NaN NaN NaN NaN 0.632792 NaN NaN
2021-11-25 NaN 0.211714 NaN 0.846193 0.210173 NaN NaN 0.043700 NaN NaN
2021-11-26 NaN 0.115301 -0.629839 NaN 0.081402 -0.287198 NaN NaN -0.448907 NaN
2021-11-27 NaN NaN 0.384544 NaN 0.490425 -0.003641 0.253752 NaN NaN NaN
2021-11-28 NaN NaN 0.036393 NaN 0.003484 NaN NaN 0.056091 NaN NaN
2021-11-29 0.163266 -0.165520 0.149920 NaN -0.014639 -0.448595 0.097651 0.381039 0.058590 -0.016986
2021-11-30 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.557565
2021-12-07 NaN NaN NaN 0.097732 NaN NaN NaN NaN NaN NaN
2021-12-09 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>The desired result, after appending the list to the "Date" column and setting the remaining values of those inserted rows to NaN, would look like this:</p>
<pre><code>df_sentiment.iloc[10:20, : 10]
ticker AAPL.OQ ABBV.N ABT.N ACN.N ADBE.OQ AIG.N AMD.OQ AMGN.OQ AMT.N AMZN.OQ
Date
2021-11-24 NaN NaN NaN NaN NaN NaN NaN 0.632792 NaN NaN
2021-11-25 NaN 0.211714 NaN 0.846193 0.210173 NaN NaN 0.043700 NaN NaN
2021-11-26 NaN 0.115301 -0.629839 NaN 0.081402 -0.287198 NaN NaN -0.448907 NaN
2021-11-27 NaN NaN 0.384544 NaN 0.490425 -0.003641 0.253752 NaN NaN NaN
2021-11-28 NaN NaN 0.036393 NaN 0.003484 NaN NaN 0.056091 NaN NaN
2021-11-29 0.163266 -0.165520 0.149920 NaN -0.014639 -0.448595 0.097651 0.381039 0.058590 -0.016986
2021-11-30 NaN NaN NaN NaN NaN NaN NaN NaN NaN 0.557565
2021-12-01 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-02 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-03 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-06 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-07 NaN NaN NaN 0.097732 NaN NaN NaN NaN NaN NaN
2021-12-08 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-09 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2021-12-10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>I've tried append and concat, but haven't been able to get it right or have been getting errors.</p>
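<p>One way that avoids append/concat entirely: take the union of the existing index and the new dates, then <code>reindex</code>; new rows come out as all-NaN and existing rows are untouched. A sketch with a hypothetical two-column miniature of <code>df_sentiment</code>:</p>

```python
import pandas as pd

# Hypothetical miniature of df_sentiment: a DatetimeIndex named "Date".
df_sentiment = pd.DataFrame(
    {"AAPL.OQ": [0.163266, None], "ACN.N": [None, 0.097732]},
    index=pd.to_datetime(["2021-11-29", "2021-12-07"]),
)
df_sentiment.index.name = "Date"

list_dates = ['2021-12-01', '2021-12-02', '2021-12-03', '2021-12-06', '2021-12-08']

# union() merges and sorts the two date sets; reindex() inserts the new
# rows with NaN in every column while leaving existing rows untouched.
new_index = df_sentiment.index.union(pd.to_datetime(list_dates))
df_sentiment = df_sentiment.reindex(new_index)
df_sentiment.index.name = "Date"  # union() drops the name when names differ
```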
|
<python><pandas>
|
2023-05-09 14:26:11
| 1
| 323
|
billv1179
|
76,210,344
| 534,238
|
Is it possible in Python to pass keyword arguments to a function and use them *only* if the function has them
|
<p>I am trying to make a functional map-like function which fully traverses a list or dict of arbitrary depth / shape, then map the provided function over it.</p>
<p>I have the map function working, and I have the function to feed to it working, but I do not like how "un-generalized" it is, in case I want to use it for the future.</p>
<p>The main problem is that it isn't clear to me how I can pass an arbitrary number of arguments to the function-to-be-mapped, and pass on only the arguments found in the function-to-be-mapped's signature, while all others are silently dropped.</p>
<h1>Background</h1>
<p>The mapping function and the function to be mapped are being used to create fake data:</p>
<ol>
<li>I traverse the arbitrarily-shaped object</li>
<li>When I find a "proper" data type, then I swap what it has -- dummy example data -- for data generated from the <em>faker</em> package.</li>
</ol>
<h1>Mapping Function</h1>
<p>The full function is not that big, so I will just add the whole thing:</p>
<pre class="lang-py prettyprint-override"><code>def object_map(
action: Callable,
obj: Union[dict, list],
coerce: bool,
data_types: Tuple[type, ...] = avail_data_types,
error_on_unknown: bool = False,
) -> Union[dict, list]:
"""Map an action across all values of interest in a dict or list of arbitrary depth.
For instance, if you have a list of dicts, and some of the dicts have lists, but
eventually each embedded list / dict ends with a type found in `data_types`, then
the `action` will only be performed at this final leaf where a type in `data_types`
is found.
Parameters
----------
action : Callable (function)
The function which you wish to map onto *only `data_types`* leaves in the
`obj` object.
obj : dict or list
The object of arbitrary depth that you wish to traverse.
data_types : tuple, optional
A tuple of data types that you want to allow the `action` to be performed upon.
Note: the *order* of the tuple matters if you are coercing. If `str` is before
`int`, for instance, then no strings will be coerced into integers. The type
which you want to coerce into must be before the type you expect to coerce from.
Defaults to a tuple of the keys found in `fake_generators` dict.
coerce : bool
Whether or not to coerce the elements into one of the `data_types`, if
possible. Eg, should the string `'7.1'` be coerced into the float `7.1`.
error_on_unknown : bool, optional
Whether or not to error when neither a list, dict, nor any of the types found
in `data_types` are found. If `True`, then a `NotImplementedError` will be
generated. Otherwise, the element is kept in the object without performing any
mapping. Defaults to `False`.
Returns
-------
dict or list
The original `obj`, traversed such that elements of type found in `data_types`
have the function `action` mapped onto them.
Examples
--------
>>> orig = {'elem1': [7, 5, 'apple', 13.1], 'elem2': {'extra1': '12'}}
"""
if isinstance(obj, dict):
result = {}
for key, val in obj.items():
if isinstance(val, (dict, list)):
result[key] = object_map(
action, val, coerce, data_types, error_on_unknown
)
elif isinstance(val, data_types):
if not coerce:
result[key] = action(val)
else:
for data_type in data_types:
try:
result[key] = action(data_type(val))
except (TypeError, ValueError):
pass
else:
# Once successful, don't keep generating further types.
break
else:
if error_on_unknown:
raise NotImplementedError(
"Only dicts and lists are allowed as iterables, and only "
f"{', '.join([val.__name__ for val in data_types])} allowed as "
f"single values. You provided `{val}`, which is none of these"
)
else:
result[key] = val
elif isinstance(obj, list):
result = []
for val in obj:
if isinstance(val, (dict, list)):
result.append(
object_map(action, val, coerce, data_types, error_on_unknown)
)
elif isinstance(val, data_types):
if not coerce:
result.append(action(val))
else:
for data_type in data_types:
try:
result.append(action(data_type(val)))
except (TypeError, ValueError):
pass
else:
# Once successful, don't keep generating further types.
break
else:
if error_on_unknown:
raise NotImplementedError(
"Only dicts and lists are allowed as iterables, and only "
f"{', '.join([val.__name__ for val in data_types])} allowed as "
f"single values. You provided `{val}`, which is none of these"
)
else:
result.append(val)
else:
raise NotImplementedError(
f"Only dicts and lists are allowed as iterables. You provided {obj}, "
"which is none of these"
)
return result
</code></pre>
<h1>Function to be Mapped</h1>
<p>This function is quite small, but I'll explain some of the unclear parameters after:</p>
<pre class="lang-py prettyprint-override"><code>from random import choice
from typing import Dict, List, TypeVar
# Faker, avail_data_types, fake_generators and fake are defined elsewhere

D = TypeVar("D", *avail_data_types)
def gen_fake_data(
entry: D,
recover_type: bool = True,
fake_generator_types: Dict[type, List] = fake_generators,
fake_obj: Faker = fake,
):
data_type = type(entry)
try:
generator = choice(fake_generator_types[data_type])
result = getattr(fake_obj, generator)()
    except (KeyError, TypeError, ValueError):  # KeyError: unsupported data type
data_types = tuple(fake_generator_types)
raise NotImplementedError(
f"Only {', '.join([val.__name__ for val in data_types])} supported for "
f"fake data generation. You provided {entry}, which is none of these"
)
if recover_type:
return data_type(result)
else:
return result
</code></pre>
<h1>Weakness with my Current Generalization</h1>
<p>In the mapping function, I write things like:</p>
<pre class="lang-py prettyprint-override"><code> result[key] = action(val)
</code></pre>
<p>This means that I can <em>only</em> pass a <code>Callable</code> (a function) to <code>action</code> that only has 1 required parameter. As you can see, <code>data_types</code> and <code>coerce</code> are basically the same as what <code>gen_fake_data</code> needs (optional parameters <code>data_types</code> and <code>recover_type</code>). I would also pass those to the mapping function, <code>object_map</code>, except that then I have completely coupled these two functions together. That's not <em>so bad</em>, except that <code>object_map</code> could be quite useful for other work, too.</p>
<p>I'd like to be able to do something like</p>
<pre class="lang-py prettyprint-override"><code> result[key] = action(val, data_types=data_types, recover_type=coerce)
</code></pre>
<p>and have those extra keywords (<code>data_types</code> and <code>recover_type</code>) silently ignored if the function <code>action</code> does not accept them, so that the call effectively becomes</p>
<pre class="lang-py prettyprint-override"><code> result[key] = action(val)
</code></pre>
<p>when the keywords are not found.</p>
<p><em><strong>Is there any way to do this in Python?</strong></em></p>
<p><strong>Or instead, is there a better way that I can construct these two functions so that the mapping function is still generalized in how it can manage the function-to-be-mapped?</strong></p>
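<p>One standard-library way to get exactly this behavior is to inspect the callable's signature and drop unsupported keywords before the call. A minimal sketch, with a simplified stand-in for <code>gen_fake_data</code>:</p>

```python
import inspect

def call_with_supported(func, *args, **kwargs):
    """Call func, silently dropping keyword arguments it does not accept."""
    params = inspect.signature(func).parameters
    # If func itself takes **kwargs, every keyword is supported as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return func(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **supported)

def gen_fake_data(entry, recover_type=True):  # simplified stand-in
    return type(entry), recover_type

# 'data_types' is not a parameter of gen_fake_data, so it is dropped:
result = call_with_supported(gen_fake_data, 7, recover_type=False, data_types=(int,))
```

<p>Inside <code>object_map</code> the call site would become <code>call_with_supported(action, val, data_types=data_types, recover_type=coerce)</code>, keeping the two functions decoupled. One caveat: <code>inspect.signature</code> can fail for some C-implemented builtins, so a plain <code>func(*args)</code> fallback in that case is reasonable.</p>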
|
<python><functional-programming>
|
2023-05-09 14:20:34
| 1
| 3,558
|
Mike Williamson
|
76,210,258
| 42,508
|
Fail to authorize my request when using TickTick API
|
<p>I'm using the API described <a href="https://developer.ticktick.com/docs/index.html#/openapi?id=getting-started" rel="nofollow noreferrer">here</a> , using the default Python version in Google Collab notebook.</p>
<p>I have a client_id created as they recommend, but I can't figure out how to get the token back without having any redirect URL.</p>
<p>I aim to connect to my account in TickTick and do some bulk edits and deletes that the UI doesn't allow.</p>
<p>My code so far</p>
<pre><code>import requests
payload = {'client_id': 'Ysf0D223UGhdlY5Y7W', 'scope': 'tasks:write','response_type':'code'}
r = requests.get('https://ticktick.com/oauth/authorize', params=payload)
r.text
</code></pre>
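<p>For reference, <code>requests.get</code> on the authorize endpoint only fetches the login HTML; in the authorization-code flow the approval has to happen in a browser, and the code comes back on a <code>redirect_uri</code> registered with the provider (a localhost URL works for a personal script). A sketch of building the authorize URL, with placeholder values and parameter names following the docs linked above:</p>

```python
from urllib.parse import urlencode

# All values below are placeholders you must register/choose yourself.
params = {
    "client_id": "YOUR_CLIENT_ID",
    "scope": "tasks:write tasks:read",
    "response_type": "code",
    "redirect_uri": "http://localhost:8080/callback",
    "state": "random-state-string",  # echoed back so you can verify the redirect
}
auth_url = "https://ticktick.com/oauth/authorize?" + urlencode(params)
# Open auth_url in a browser, approve, then read ?code=... from the redirect
# and exchange it at the token endpoint for an access token.
```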
|
<python><python-requests><openapi>
|
2023-05-09 14:11:16
| 1
| 1,139
|
akapulko2020
|
76,209,929
| 2,458,922
|
Fast AI Api doc for leaner.fine_tune
|
<p>I was looking for FastAI API Docs.</p>
<p>I came across code like this:</p>
<pre><code>dls = ImageDataLoaders.from_name_func('.',
get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat,
item_tfms=Resize(192)) # Data Loader
learner = vision_learner(dls, resnet18, metrics=error_rate)
learner.fine_tune(3)
</code></pre>
<p>I found the <a href="https://docs.fast.ai/learner.html#learner" rel="nofollow noreferrer">API docs</a> and <a href="https://github.com/fastai/fastai/blob/master/fastai/learner.py#L98" rel="nofollow noreferrer">source</a>, but I am not able to find the <code>learner.fine_tune()</code> API and what arguments it takes.</p>
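<p>A note that may help: <code>fine_tune</code> is attached to <code>Learner</code> at runtime (via fastai's <code>@patch</code> decorator), which is why it is easy to miss in the rendered docs. Python's own introspection always works. The stand-in below mirrors the signature in the linked source at time of writing, which may differ across fastai versions:</p>

```python
import inspect

# Stand-in mirroring the fine_tune signature from the linked fastai source;
# defaults may differ in your installed version.
def fine_tune(epochs, base_lr=2e-3, freeze_epochs=1, lr_mult=100,
              pct_start=0.3, div=5.0, **kwargs):
    ...

sig = str(inspect.signature(fine_tune))
# With fastai installed, the same one-liners work on the real object:
#   inspect.signature(learner.fine_tune)
#   help(learner.fine_tune)
```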
|
<python><pytorch><fast-ai>
|
2023-05-09 13:32:28
| 1
| 1,731
|
user2458922
|
76,209,871
| 9,008,261
|
How to create a DataFrame if the executor nodes do not have python
|
<p>Due to a limitation in the hadoop cluster I am using with YARN, the executor nodes do not have Python.</p>
<pre class="lang-py prettyprint-override"><code>df = spark.createDataFrame([[0]], ['foo'])
print(df)
df.collect()
</code></pre>
<pre><code>DataFrame[foo: bigint]
... java.io.IOException: Cannot run program "python3": error=2, No such file or directory
</code></pre>
<p>It is not an issue of properly setting <code>PYSPARK_PYTHON</code> and <code>PYSPARK_DRIVER_PYTHON</code>, as the executor nodes do not have a Python interpreter at all. I am wondering if there is a Java-side extension or some other way of converting data into a dataframe to pass to the cluster.</p>
|
<python><scala><apache-spark><pyspark>
|
2023-05-09 13:26:14
| 0
| 305
|
Todd Sierens
|
76,209,744
| 4,665,335
|
Unable to test a celery chain when tasks are executed by different celery apps
|
<p>I have two celery apps in my environment:</p>
<pre class="lang-py prettyprint-override"><code>app1 = Celery('app1', broker=BROKER_URL1, backend=BROKER_URL1)
app2 = Celery('app2', broker=BROKER_URL2, backend=BROKER_URL2)
</code></pre>
<p>From a Django app within a <code>web</code> container I need to call a chain of tasks:</p>
<ul>
<li><code>task1</code> is executed by <code>app1</code></li>
<li><code>task2</code> is executed by <code>app2</code></li>
</ul>
<p>The two apps have different virtual environments / Python versions, so there is no way to execute both tasks with the same <code>app</code>.</p>
<pre class="lang-py prettyprint-override"><code>@app1.task(bind=False)
def task1():
return {"key": "value"}
@app2.task(bind=False)
def task2(result):
return result["key"]
task_chain = (
app1.signature('app1.task1') |
app2.signature('app2.task2')
)
</code></pre>
<p>I am then trying to test that the chain works correctly: task1 appears to execute correctly, but task2 errors:</p>
<pre class="lang-py prettyprint-override"><code>@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
def test_chain(self):
task_chain.apply_async().get() # Error: celery.exceptions.NotRegistered: 'app2.task2'
</code></pre>
<p>How can I test this chain?</p>
|
<python><django><testing><celery><django-celery>
|
2023-05-09 13:10:59
| 1
| 400
|
michal111
|
76,209,695
| 3,104,621
|
Pipe two subprocesses in Python with asyncio and interrupt them
|
<p>I have two subprocesses launched by my <code>asyncio</code>-based Python code with <code>asyncio.create_subprocess_exec</code>. I want one to output data to stdout and the other one to read it through a pipe as soon as it's available (in my case, the first process outputs video frames and the second process encodes them - it's <code>ffmpeg</code>).
The only way to create a pipe usable by <code>asyncio</code> that I found is in <a href="https://stackoverflow.com/a/36666420/3104621">this post</a>. But following that method the second process (<code>ffmpeg</code>) crashes with the error "Bad file descriptor". Given that the setup works fine when using a shell pipe, I decided to use <code>asyncio.create_subprocess_shell</code> to handle the piping for me. But both processes are designed to run forever until interruption and so I need a way to terminate them. Calling <code>.terminate()</code> on the process returned by <code>create_subprocess_shell</code>, or killing it with <code>SIGTERM</code> or <code>SIGINT</code> do nothing. How can I kill these processes?</p>
|
<python><subprocess><python-asyncio>
|
2023-05-09 13:06:23
| 1
| 1,134
|
Joald
|
76,209,600
| 9,470,099
|
SSH Remote Port Forwarding using Python
|
<p>I am trying to expose a local port on my laptop so it can be accessed by everyone through Python; this Python script will be run by someone without ssh installed (Windows). I tried the <code>paramiko</code> module, but it only allows me to do local port forwarding.</p>
<pre class="lang-py prettyprint-override"><code>import paramiko, sys
from forward import forward_tunnel
remote_host = "localhost"
remote_port = 3001
local_port = 3000
ssh_host = SERVER_IP
ssh_port = 22
user = "root"
password = "pAss@123A"
transport = paramiko.Transport((ssh_host, ssh_port))
transport.connect(hostkey = None,
username = user,
password = password,
pkey = None)
try:
forward_tunnel(local_port, remote_host, remote_port, transport)
except KeyboardInterrupt:
print('Port forwarding stopped.')
sys.exit(0)
</code></pre>
<p>But I want to do remote port forwarding, like I want to replicate the following ssh command.</p>
<pre class="lang-bash prettyprint-override"><code>ssh -R 0.0.0.0:3001:127.0.0.1:3000 -N root@SERVER_IP
</code></pre>
|
<python><ssh><paramiko>
|
2023-05-09 12:55:26
| 0
| 4,965
|
Jeeva
|
76,209,426
| 2,399,158
|
Running a keyword in Robot Framework throws an error: ValueError: Timeout value connect was <object object at 0x106cc5b50>
|
<p>When I try to launch an app from robotframework using appium</p>
<p>I got the following error</p>
<pre><code>ValueError: Timeout value connect was <object object at 0x106cc5b50>, but it must be an int, float or None.
</code></pre>
<p>This has never happened before and I do not know why; all I did was try a simple Open Application keyword from the Robot Framework AppiumLibrary.</p>
<p>pip list</p>
<pre><code>Package Version
------------------------------ --------
Appium-Python-Client 1.3.0
astor 0.8.1
async-generator 1.10
attrs 23.1.0
beautifulsoup4 4.9.1
certifi 2023.5.7
chardet 3.0.4
charset-normalizer 2.0.12
decorator 5.1.1
docutils 0.18.1
exceptiongroup 1.1.1
h11 0.14.0
idna 2.10
kitchen 1.2.6
lxml 4.5.2
outcome 1.2.0
pip 22.0.4
PySocks 1.7.1
PyYAML 6.0
requests 2.30.0
robotframework 5.0
robotframework-appiumlibrary 1.6.3
robotframework-pabot 2.5.4
robotframework-pythonlibcore 4.1.2
robotframework-requests 0.9.2
robotframework-seleniumlibrary 6.1.0
robotframework-stacktrace 0.4.1
selenium 3.141.0
setuptools 60.10.0
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
soupsieve 2.0.1
strutil 0.2.1
tk 0.1.0
trio 0.22.0
trio-websocket 0.10.2
urllib3 2.0.2
wheel 0.37.1
wsproto 1.2.0
</code></pre>
<p>Pip3 list</p>
<pre><code>Package Version
------------------------------ --------
Appium-Python-Client 1.3.0
astor 0.8.1
async-generator 1.10
attrs 23.1.0
beautifulsoup4 4.9.1
certifi 2023.5.7
chardet 3.0.4
charset-normalizer 2.0.12
decorator 5.1.1
docutils 0.18.1
exceptiongroup 1.1.1
h11 0.14.0
idna 2.10
kitchen 1.2.6
lxml 4.5.2
outcome 1.2.0
pip 22.0.4
PySocks 1.7.1
PyYAML 6.0
requests 2.30.0
robotframework 5.0
robotframework-appiumlibrary 1.6.3
robotframework-pabot 2.5.4
robotframework-pythonlibcore 4.1.2
robotframework-requests 0.9.2
robotframework-seleniumlibrary 6.1.0
robotframework-stacktrace 0.4.1
selenium 3.141.0
setuptools 60.10.0
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
soupsieve 2.0.1
strutil 0.2.1
tk 0.1.0
trio 0.22.0
trio-websocket 0.10.2
urllib3 2.0.2
wheel 0.37.1
wsproto 1.2.0
</code></pre>
<p>My python version</p>
<pre><code>Python 3.9.12
</code></pre>
<p><strong>Appium is not running when I check the log; could that be the issue?</strong></p>
<p>My sample app</p>
<pre><code>*** Settings ***
# Default Library
Library AppiumLibrary
Library BuiltIn
*** Variables ***
${path} /Users/jj/Downloads/sample.apk
*** Test Cases ***
Install the app
Open Application http://localhost:4723/wd/hub alias=Myapp1 platformName=iOS platformVersion=7.0 deviceName='iPhone Simulator' app=${path}
</code></pre>
|
<python><robotframework>
|
2023-05-09 12:34:27
| 3
| 591
|
user2399158
|
76,209,235
| 1,186,739
|
Pandas styling of rows
|
<p>Hi I am trying to style a pandas data frame based on some condition and write the output to excel. I followed the solution here and got it partially done: <a href="https://stackoverflow.com/questions/57574409/applying-style-to-a-pandas-dataframe-row-wise">applying-style-to-a-pandas-dataframe-row-wise</a>.</p>
<p>But my output looks like the one in the picture. What can I do to get the original data colored instead of <strong>False</strong> and <strong>Background-color:yellow</strong>?</p>
<p><a href="https://i.sstatic.net/0xtqE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0xtqE.png" alt="enter image description here" /></a></p>
<p>This is my whole code. Can someone help me what can I do to get it right?</p>
<pre><code>def highlight():
df1,df2 = read_df();
filterd_df = filter(df1,df2);
styled_df = highlight_cells(df1,filterd_df,)
with pd.ExcelWriter(r"C:\Users\test.xlsx", engine="openpyxl", mode="a", if_sheet_exists="replace") as writer:
styled_df.to_excel(writer, sheet_name='result',index=False)
def highlight_cells(rows,filtered):
c1 = 'background-color: yellow'
mask = rows.isin(filtered).any(axis=1)
style_df = pd.DataFrame(False, index=rows.index, columns=rows.columns)
style_df.loc[mask, :] = c1
return style_df.style.applymap(lambda x: c1 if x else '')
</code></pre>
|
<python><pandas><dataframe><pandas-styles>
|
2023-05-09 12:12:33
| 2
| 391
|
Soumya
|
76,209,122
| 509,868
|
How to detect a Windows network path?
|
<p>I am writing a script which runs a Windows executable file in a directory supplied by a user. If this directory is a network path, making it the current directory will look like it worked, but running any commands in this directory will fail:</p>
<pre><code>>>> import os
>>> os.chdir(r'\\server\share\folder') # my actual name is different but similar
>>> os.getcwd()
'\\\\server\\share\\folder'
>>> os.system('echo %CD%')
'\\server\share\folder'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
C:\Windows
0
</code></pre>
<p>I want to predict this situation in my script — if the user supplies a network path instead of a local path like <code>c:\work\project42</code>, give a meaningful error immediately instead of letting it fail later.</p>
<p>How can I determine if a path is a Windows network path?</p>
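<p>One stdlib-based sketch: <code>ntpath</code> implements Windows path semantics on any platform, and <code>ntpath.splitdrive</code> returns the <code>\\server\share</code> prefix as the "drive" for UNC paths, so checking for a leading double backslash is enough. Note that a mapped network drive (e.g. <code>Z:\...</code>) would not be detected this way.</p>

```python
import ntpath

def is_unc_path(path: str) -> bool:
    # For UNC paths ntpath.splitdrive() returns '\\server\share' as the
    # drive component; local paths yield 'c:' (or '' for relative paths).
    drive, _ = ntpath.splitdrive(path)
    return drive.startswith("\\\\")

print(is_unc_path(r"\\server\share\folder"))  # True
print(is_unc_path(r"c:\work\project42"))      # False
```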
|
<python><windows><path>
|
2023-05-09 12:02:29
| 3
| 28,630
|
anatolyg
|
76,209,050
| 9,457,900
|
unable to consume asynchronous msg from producer & consumer
|
<p>Kafka and ZooKeeper are running successfully.</p>
<p>This is my producer.py</p>
<pre><code>async def publish():
producer = AIOKafkaProducer(bootstrap_servers='localhost:9092',
enable_idempotence=True)
await producer.start()
consumer = AIOKafkaConsumer(
topicAKG,
bootstrap_servers='localhost:9092',group_id='test',
max_poll_interval_ms=60000,
max_poll_records=50)
await consumer.start()
try:
for i in range(1, 6):
await producer.send_and_wait(topic, value='from producer'.encode())
print(f"Iteration: {i}")
async for message in consumer:
print("Received ========== ", message.value.decode())
await consumer.commit()
finally:
await producer.stop()
await consumer.stop()
</code></pre>
<p>This is my consumer.py</p>
<pre><code>import asyncio
from aiokafka import AIOKafkaConsumer, AIOKafkaProducer

topic = 'app'
topicAKG = 'back'
async def consume():
consumer = AIOKafkaConsumer(topic, bootstrap_servers='localhost:9092',
group_id="test",
max_poll_interval_ms=60000,
max_poll_records=50)
await consumer.start()
producer = AIOKafkaProducer(bootstrap_servers='localhost:9092',
enable_idempotence=True)
await producer.start()
try:
async for message in consumer:
print("Received",message.value.decode())
await asyncio.sleep(2) # Delay for 3 seconds
await consumer.commit() # Commit the offset to avoid re-consuming the same message
await producer.send_and_wait(topicAKG, value='from consumer'.encode())
finally:
await consumer.stop()
await producer.stop()
loop = asyncio.get_event_loop()
loop.run_until_complete(consume())
</code></pre>
<blockquote>
<p>output from producer</p>
</blockquote>
<blockquote>
<p>Iteration: 1</p>
</blockquote>
<blockquote>
<p>Received ========== from consumer</p>
</blockquote>
<blockquote>
<p>output from consumer</p>
</blockquote>
<blockquote>
<p>Received from producer</p>
</blockquote>
<p>so it gets stuck on iteration 1 and hangs</p>
|
<python><apache-kafka><aiokafka>
|
2023-05-09 11:55:06
| 1
| 2,644
|
Tanjin Alam
|
76,208,898
| 487,993
|
Docstring for toml file in Python
|
<p>Since 3.11, Python's <a href="https://docs.python.org/3/library/tomllib.html" rel="nofollow noreferrer">tomllib</a> provides support for TOML files.
I would like to know if there is a way to add docstrings to the fields in the TOML file.</p>
<p>In particular, to be rendered with Sphinx.
I am looking for something like <a href="https://github.com/Jakski/sphinxcontrib-autoyaml" rel="nofollow noreferrer">autoyaml</a>.</p>
|
<python><python-sphinx><documentation-generation><toml>
|
2023-05-09 11:37:21
| 0
| 801
|
JuanPi
|
76,208,784
| 5,015,382
|
Docker on GCloud Vertex AI: We receive no predictions in our answer
|
<p>We try to deploy a Docker container on Google Cloud Vertex AI. However, every time we send a request to the Vertex AI, we only get a response with the model specifications but not the predictions by the model (see picture)</p>
<p><a href="https://i.sstatic.net/gPuVz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gPuVz.png" alt="enter image description here" /></a></p>
<p>Herewith also the code of the app.py file we use:</p>
<pre><code>class PredictionHandler(tornado.web.RequestHandler):
def __init__(
self,
application: "Application",
request: tornado.httputil.HTTPServerRequest,
**kwargs: Any
) -> None:
super().__init__(application, request, **kwargs)
def post(self):
response_body = None
try:
            response_body = json.dumps({'predictions': 'Prediction ran successfully'})
except Exception as e:
response_body = json.dumps({'error:':str(e)})
self.set_header("Content-Type", "application/json")
self.set_header("Content-Length", len(response_body))
self.write(response_body)
self.finish()
def make_app():
tornado_app = tornado.web.Application([('/health_check', HealthCheckHandler),
('/predict', PredictionHandler)],
debug=False)
tornado_app.listen(8080)
print("Running App...")
tornado.ioloop.IOLoop.current().start()
</code></pre>
<p>and the code of the Dockerfile:</p>
<pre><code>FROM python:3.8
RUN mkdir -p /app/model
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
COPY app /app/
EXPOSE 8080
ENTRYPOINT ["python3", "app.py"]
</code></pre>
|
<python><docker><google-cloud-platform><google-cloud-vertex-ai>
|
2023-05-09 11:23:34
| 1
| 452
|
Jan Janiszewski
|
76,208,676
| 5,834,316
|
How to count throttled requests on a per endpoint in Locust?
|
<p>I am currently creating a custom TUI dashboard to extend the data I can see in the terminal with <a href="https://locust.io/" rel="nofollow noreferrer">locust</a>.</p>
<p>Currently, I am using the following logic to capture all data:</p>
<pre class="lang-py prettyprint-override"><code>@events.init.add_listener
def on_locust_init(environment, **_kwargs):
if isinstance(environment.runner, (MasterRunner, LocalRunner)):
stream_handler = prepare_handler()
gevent.spawn(Dashboard(stream_handler).run, environment)
</code></pre>
<p>I can then use the <code>environment.runner.stats.entries</code> data to get all <code>RequestStats</code> data and place them into a custom table.</p>
<p>One issue I have is getting throttled requests into this table.</p>
<p>I am using the following in the <code>HttpUser</code> <code>task</code> to ensure all responses with status 429 count towards failures, and to add a new attribute that could be used downstream:</p>
<pre class="lang-py prettyprint-override"><code> @task
def endpoint(self, method, url, *args, **kwargs):
with self.client.get("/some/url/", catch_response=True, *args, **kwargs) as response:
if response.status_code == 429:
response.failure("Throttled!")
if entry := self.environment.runner.stats.entries.get((url, method)):
                if not getattr(entry, "throttled", None):
entry.throttled = 1
else:
entry.throttled += 1
</code></pre>
<p>However, the <code>environment.runner.stats.entries</code> values do not contain the new <code>throttled</code> attribute downstream that is parsed via the <code>on_locust_init</code> listener.</p>
<p>I am wondering if there is a way to add custom stats as requests are processed so I can consume them elsewhere when creating custom tables?</p>
|
<python><load-testing><metrics><locust>
|
2023-05-09 11:09:50
| 1
| 1,177
|
David Ross
|
76,208,396
| 15,518,834
|
Pyrebase4 error cannot import name 'gaecontrib'
|
<p>I have been trying to install <code>pyrebase4</code> using <code>pip install pyrebase4</code>, but when run it throws the error below:</p>
<p><code>"C:\Users\ChodDungaTujheMadhadchod\anaconda3\envs\sam_upgraded\lib\site-packages\requests_toolbelt\adapters\appengine.py", line 42, in <module> from .._compat import gaecontrib ImportError: cannot import name 'gaecontrib' from 'requests_toolbelt._compat'</code></p>
<p>As I see it, the error points directly to <code>requests_toolbelt</code>, but I cannot figure out how to fix it. I tried upgrading to the latest version, which is <code>requests-toolbelt==1.0.0</code>. Is there any way to fix this?</p>
|
<python><pyrebase><python-requests-toolbelt>
|
2023-05-09 10:34:37
| 2
| 564
|
Hack Try
|
76,208,229
| 405,818
|
Aggregate and group dataframe rows in python
|
<pre><code># import the module
import pandas as pd
# creating a DataFrame
df = pd.DataFrame({'name' :['C1', 'C2', 'C3', 'C4', 'C5'],
'Size' :[200, 70, 60, 140, 40],
"CPU":[25.7, 5.1, 6.2, 15.1, 10]})
df
</code></pre>
<p>I need to find groups of rows where the sum of Size is <= 100 and the sum of CPU is <= 100.</p>
<p>I also need to find how many such groups of rows can be created with the above filter criteria.</p>
<p>Also can this be looked as an optimization problem i.e linear optimization?</p>
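<p>As a starting point, here is a sketch of a greedy first-fit grouping. It is not guaranteed to maximise the number of groups; finding a provably optimal partition is indeed an integer/linear optimization problem (e.g. for <code>pulp</code> or <code>scipy.optimize.milp</code>):</p>

```python
# Greedy first-fit: place each row into the first group whose running
# Size and CPU totals stay within the limits; rows that can never fit
# on their own (e.g. Size > 100) are left ungrouped.
rows = [('C1', 200, 25.7), ('C2', 70, 5.1), ('C3', 60, 6.2),
        ('C4', 140, 15.1), ('C5', 40, 10.0)]

groups = []
for name, size, cpu in rows:
    if size > 100 or cpu > 100:
        continue  # cannot satisfy the limits even alone
    for g in groups:
        if g['size'] + size <= 100 and g['cpu'] + cpu <= 100:
            g['names'].append(name)
            g['size'] += size
            g['cpu'] += cpu
            break
    else:
        groups.append({'names': [name], 'size': size, 'cpu': cpu})

print(groups)
```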
|
<python><dataframe><optimization><linear-programming>
|
2023-05-09 10:14:54
| 1
| 7,065
|
Amit
|
76,208,111
| 13,154,650
|
python serial communication stops until ctrl c
|
<p>I want to print what is coming on my serial port on Windows, I have tried this on linux before and haven't had problems. This is my code</p>
<pre class="lang-py prettyprint-override"><code>import serial
ser = serial.Serial(
port="COM4", baudrate=115200, bytesize=8, timeout=0.1, stopbits=serial.STOPBITS_ONE
)
while True:
try:
if ser.in_waiting > 0:
print("found sth")
data = ser.readline()
print(data)
except KeyboardInterrupt:
break
</code></pre>
<p>However, it won't print anything (even the 'found sth' is not printed) until I hit ctrl + c, why is it happening? once I hit ctrl + c, I get</p>
<pre><code>found sth
b'Serial test'
</code></pre>
<p>How could I make my python program print every message I receive right after it arrives?</p>
|
<python><windows><pyserial>
|
2023-05-09 10:02:01
| 0
| 450
|
afvmil
|
76,207,954
| 17,596,179
|
Importing from few directories down python
|
<p>This is my file structure.</p>
<pre><code>- scraper_backend
- jobs
- bronze
- extract_bronze.py
- tests
- jobs
- bronze
- extract_bronze_test.py
</code></pre>
<p>Now when I try to import this <code>extract_bronze.py</code> file I get this error.</p>
<pre><code>ImportError while importing test module 'D:\School\Academiejaar 3\Semester 2\internship2023-solarstreaming\tests\unit_tests\jobs\bronze\test_extract_bronze.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
C:\Users\david\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests\unit_tests\jobs\bronze\test_extract_bronze.py:4: in <module>
from .....scraper_backend.jobs.bronze import extract_bronze
E ImportError: attempted relative import with no known parent package
===================================================================================== short test summary info =====================================================================================
ERROR tests/unit_tests/jobs/bronze/test_extract_bronze.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================================================== 1 error in 0.13s =========================================================================================
</code></pre>
<p>I do the import like this.</p>
<pre><code>from datetime import datetime
from .....scraper_backend.jobs.bronze import extract_bronze # <------ this import
# also tried
# from scraper_backend.jobs.bronze import extract_bronze
def test_extract_energy_happy():
energy_type = "solar"
params = {
"datetime": datetime.now(),
"province": "Antwerp"
}
date_time = datetime.now()
result = extract_bronze.extract_energy(energy_type, params, date_time)
assert type(result) != str
assert type(result) == float
</code></pre>
<p>Also, when moving the file directly into the <code>tests</code> folder and not into the sub folders, it runs fine. So the problem is maybe that it is nested too deep.
I have tried appending to the path with sys, but I'm looking for a nicer solution.
All help is greatly appreciated!</p>
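<p>One common alternative to <code>sys.path</code> hacks is to put the repository root on pytest's import path via its config. A sketch, assuming pytest ≥ 7 and that tests are started from the repository root (adjust the path otherwise):</p>

```toml
# pyproject.toml (hypothetical fragment)
[tool.pytest.ini_options]
pythonpath = ["."]
```

<p>With that in place, the test file can use an absolute import with no leading dots: <code>from scraper_backend.jobs.bronze import extract_bronze</code>.</p>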
|
<python><python-3.x><modulenotfounderror>
|
2023-05-09 09:46:13
| 0
| 437
|
david backx
|
76,207,832
| 1,293,127
|
How to use S3 in China?
|
<p>I have a Python app that uses boto3 to list & read objects from AWS S3 buckets:</p>
<pre class="lang-py prettyprint-override"><code>session = boto3.session.Session(aws_access_key_id=XYZ, aws_secret_access_key=ABC)
client = session.resource('s3').meta.client
listing = client.list_objects(…)
</code></pre>
<p>The app works great except for buckets in the China region, where it blows up with <code>The AWS Access Key Id you provided does not exist in our records</code> (the access key & secret are correct).</p>
<p>I googled up something about providing special ARNs / URLs for AWS in China, <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/organizations.html#client" rel="nofollow noreferrer">here</a> and <a href="https://docs.amazonaws.cn/en_us/aws/latest/userguide/endpoints-arns.html#cnnorth_region" rel="nofollow noreferrer">here</a>. But what do I do in practice, concretely?</p>
<p><strong>If the bucket is in China, how do I initialize the boto3 client so that the rest of my code works as normal?</strong></p>
|
<python><python-3.x><amazon-web-services><amazon-s3><boto3>
|
2023-05-09 09:32:09
| 1
| 8,721
|
user124114
|
76,207,766
| 16,389,095
|
Kivy/KivyMD : How to reset a drop down item to its original state before the user selection
|
<p>I developed a simple app in Kivy/KivyMD/Python. The interface contains only a DropDownItem ddi and a button. The ddi displays a text, that acts as a header, and a menu with three options. Once the user selects one of the three options, the button should reset the ddi to its original state: in other words, when the button is pressed the ddi should come back as it was before the user selection, showing the text/header ('SELECT DEVICE') and removing the selection.
Here is the code</p>
<pre><code>from kivy.lang import Builder
from kivy.metrics import dp
from kivymd.app import MDApp
from kivymd.uix.floatlayout import MDFloatLayout
from kivymd.uix.screen import MDScreen
from kivymd.uix.menu import MDDropdownMenu
# %% LOAD STRING
Builder.load_string(
"""
<View>:
MDRelativeLayout:
padding: dp(20), dp(20), dp(20), dp(20)
MDDropDownItem:
id: ddi_device_info
size_hint_x: None
width: dp(100)
pos_hint: {"x": 0.3, "center_y": 0.5}
on_release: root.menu_device_info.open()
text: 'SELECT DEVICE'
MDRaisedButton:
pos_hint: {"x": 0.7, "center_y": 0.5}
on_release: root.Button_Reset_DDI()
text: 'RESET DDI'
"""
)
class View(MDScreen):
def __init__(self, **kwargs):
super(View, self).__init__(**kwargs)
self.menu_device_info, device_info_items = self.Create_DropDown_Widget(self.ids.ddi_device_info, ['A', 'B', 'C'], width=5)
def Button_Reset_DDI(self):
pass
def Create_DropDown_Widget(self, drop_down_item, item_list, width):
items_collection = [
{
"viewclass": "OneLineListItem",
"text": item_list[i],
"height": dp(56),
"on_release": lambda x = item_list[i]: self.Set_DropDown_Item(drop_down_item, menu, x),
} for i in range(len(item_list))
]
menu = MDDropdownMenu(caller=drop_down_item, items=items_collection, width_mult=width)
menu.bind()
return menu, items_collection
def Set_DropDown_Item(self, drop_down_item, menu, text_item):
drop_down_item.set_item(text_item)
menu.dismiss()
class MainApp(MDApp):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.view = View()
def build(self):
self.title = 'RESET DDI'
return self.view
if __name__ == '__main__':
MainApp().run()
</code></pre>
<p>What the method <strong>Button_Reset_DDI</strong> should contain in order to reset the ddi?</p>
|
<python><kivy><kivy-language><kivymd>
|
2023-05-09 09:25:14
| 2
| 421
|
eljamba
|
76,207,660
| 2,966,197
|
'PandasExcelReader' object has no attribute '_concat_rows' in llama-index excel reader
|
<p>I am trying to read an excel file with multiple sheets using llama-index. Here is my code:</p>
<pre><code>from pathlib import Path
from llama_index import download_loader
PandasExcelReader = download_loader("PandasExcelReader")
loader = PandasExcelReader()
documents = loader.load_data(file=Path('dir1/excel.xlsx'), sheet_name=None)
</code></pre>
<p>When I run it, I get <code>File "PycharmProjects/venv/lib/python3.11/site-packages/llama_index/readers/llamahub_modules/file/pandas_excel/base.py", line 64, in load_data if self._concat_rows: ^^^^^^^^^^^^^^^^^ AttributeError: 'PandasExcelReader' object has no attribute '_concat_rows'</code></p>
<p>My llama-index version is <code>0.6.0</code></p>
|
<python><excel><llama-index>
|
2023-05-09 09:15:15
| 1
| 3,003
|
user2966197
|
76,207,581
| 12,990,185
|
Python - list of lists - check if unique combination of values has last different value in the list
|
<p>we have a list of lists</p>
<pre><code>links_list = [['project1','repo1', 'folder1', 'target1', 'link1', 'branch1'],
['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1'],
['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch2'],
['project2', 'repo2', 'folder2', 'target2', 'link2', 'branch2']]
for link_list in links_list:
print(link_list)
</code></pre>
<p>Our unique combination of columns are: <strong>project, repo, folder</strong> and <strong>target</strong></p>
<p>That means, it is OK if we have lists which are repeated such as the first two lists:</p>
<pre><code>['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1'],
['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1']
</code></pre>
<p>That is not an issue.</p>
<p>it is NOT OK if <strong>BRANCH</strong> column (the last column) is different for this combination of the first 4 columns: <strong>project, repo, folder</strong> and <strong>target</strong>.</p>
<p>So since we have</p>
<p><code>['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1']</code> and</p>
<p><code>['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch2']</code>
we do not know which <strong>BRANCH</strong> value is valid so script should exit and throw exception for this use case because for combination:
<strong>'project1', 'repo1', 'folder1', 'target1'</strong> we have two branches: <strong>branch1</strong> and <strong>branch2</strong> which makes inconsistency.</p>
<p><strong>Link</strong> column (column5 - index4) is irrelevant for the logic.</p>
<p>I am really not sure how to start here.</p>
<p>Thanks a lot</p>
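<p>A minimal sketch of one way to start: index each row by its first four columns in a dict and raise as soon as the same key reappears with a different branch:</p>

```python
links_list = [
    ['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1'],
    ['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch1'],
    ['project1', 'repo1', 'folder1', 'target1', 'link1', 'branch2'],
    ['project2', 'repo2', 'folder2', 'target2', 'link2', 'branch2'],
]

def check_branches(rows):
    seen = {}  # (project, repo, folder, target) -> branch
    for project, repo, folder, target, _link, branch in rows:
        key = (project, repo, folder, target)
        if key in seen and seen[key] != branch:
            raise ValueError(
                f"Inconsistent branches for {key}: {seen[key]} vs {branch}")
        seen[key] = branch

try:
    check_branches(links_list)
except ValueError as exc:
    print(exc)  # fires for project1, which has branch1 and branch2
```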
|
<python><python-3.8>
|
2023-05-09 09:05:47
| 2
| 1,260
|
vel
|
76,207,539
| 14,427,714
|
How to scroll popup dialog in instagram and scrape followers?
|
<p>I am trying to scrape followers from an account. I did everything but I am stuck at scrolling the popup dialog. When I click the followers button, a popup appears. Then I have to scroll down to the end and scrape all the follower names.</p>
<p>Here is my code:</p>
<pre><code>driver = webdriver.Chrome()
driver.get("https://www.instagram.com/")
wait = WebDriverWait(driver, 30)
username_field = wait.until(EC.element_to_be_clickable((
By.XPATH, "//input[@name='username']"
)))
time.sleep(2)
password_field = wait.until(EC.element_to_be_clickable((
By.XPATH, "//input[@name='password']"
)))
time.sleep(2)
username_field.clear()
password_field.clear()
username_field.send_keys("#######")
password_field.send_keys("#########")
time.sleep(2)
login = wait.until(EC.element_to_be_clickable((
By.XPATH,
"/html[1]/body[1]/div[2]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/section[1]/main[1]/article[1]/div["
"2]/div[1]/div[2]/form[1]/div[1]/div[3]/button[1] "
))).click()
search = wait.until(EC.element_to_be_clickable((
By.CSS_SELECTOR, "body > div:nth-child(2) > div:nth-child(2) > div:nth-child(1) > div:nth-child(2) > "
"div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > "
"div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > "
"div:nth-child(2) > div:nth-child(2) > div:nth-child(1) > a:nth-child(1) > div:nth-child(1) "
)))
search.click()
search_field = wait.until(EC.element_to_be_clickable((
By.XPATH, "//input[@placeholder='Search']"
)))
search_field.send_keys("dunn")
dunn = wait.until(EC.element_to_be_clickable((
By.XPATH,
"//a[@href='/dunn/']//div[@class='x9f619 xjbqb8w x78zum5 x168nmei x13lgxp2 x5pf9jr xo71vjh xxbr6pl xbbxn1n xwib8y2 x1y1aw1k x1uhb9sk x1plvlek xryxfnj x1c4vz4f x2lah0s xdt5ytf xqjyukv x1qjc9v5 x1oa3qoh x1nhvcw1']//div[@class='x9f619 x1n2onr6 x1ja2u2z x1qjc9v5 x78zum5 xdt5ytf x1iyjqo2 xl56j7k xeuugli']//div[@class='x9f619 x1n2onr6 x1ja2u2z x78zum5 x2lah0s x1qughib x6s0dn4 xozqiw3 x1q0g3np']//div[@class='x9f619 x1n2onr6 x1ja2u2z x78zum5 x1iyjqo2 xs83m0k xeuugli x1qughib x6s0dn4 x1a02dak x1q0g3np xdl72j9']//div[@class='x9f619 x1n2onr6 x1ja2u2z x78zum5 xdt5ytf x2lah0s x193iq5w xeuugli x1iyjqo2']//div[@class='x9f619 xjbqb8w x78zum5 x168nmei x13lgxp2 x5pf9jr xo71vjh x1uhb9sk x1plvlek xryxfnj x1iyjqo2 x2lwn1j xeuugli xdt5ytf xqjyukv x1cy8zhl x1oa3qoh x1nhvcw1']//span[@class='x1lliihq x1plvlek xryxfnj x1n2onr6 x193iq5w xeuugli x1fj9vlw x13faqbe x1vvkbs x1s928wv xhkezso x1gmr53x x1cpjm7i x1fgarty x1943h6x x1i0vuye xvs91rp x1s688f x5n08af x10wh9bi x1wdrske x8viiok x18hxmgj']//span[@class='x1lliihq x193iq5w x6ikm8r x10wlt62 xlyipyv xuxw1ft']//div[@class='x9f619 xjbqb8w x78zum5 x168nmei x13lgxp2 x5pf9jr xo71vjh x1uhb9sk x1plvlek xryxfnj x1c4vz4f x2lah0s x1q0g3np xqjyukv x6s0dn4 x1oa3qoh x1nhvcw1']"
)))
dunn.click()
follower_button = wait.until(EC.element_to_be_clickable((
By.CSS_SELECTOR, "a[href='/dunn/followers/']"
)))
follower_button.click()
print("Followers Found. Scraping followers")
followers = driver.find_elements(By.XPATH, "//a[contains(@href, '/')]")
# //div[contains(text(),'amirex1183')]
# Getting url from href attribute
users = set()
for i in followers:
if i.get_attribute('href'):
users.add(i.get_attribute('href').split("/")[3])
else:
continue
print("Info Saving........")
print("Done scraping data..")
with open('followers.txt', 'a') as file:
file.write('\n'.join(users) + "\n")
time.sleep(5)
driver.quit()
</code></pre>
<p>In this code everything is working but I can't scroll down the popup dialog and can't scrape the followers.</p>
<p>Is there any better solution for this? I am using Python Selenium.</p>
<p>and my output is coming like this:</p>
<pre><code>directory
direct
?u=https%3A%2F%2Fwww.penguinrandomhouse.com%2Fbooks%2F653309%2Fburn-rate-by-andy-dunn%2F&e=AT0q19otIXi8YQbFLmJagY77-HIzx8jRmBBJeB_70HFbVroT2vvlt_3bCGViJSlXax6_guJxblN04igYdsKpGv032Z34YcJ1wOTGjM8
legal
docs
bonobos
web
reels
dunn
explore
technologies
blog
?u=https%3A%2F%2Fwww.facebook.com%2Fhelp%2Finstagram%2F261704639352628&e=AT1mG5QKzJ58e8F_R2ZuWtWjNamsWjJzDoJlDbP2lHyZT6rCFeNsd6ggE1869Ev5eJUsdd1dN3GSbJukxV6K11Sqxs4nTCgszP4gmqLNRyFdkkjK5IpZiUPEoHSFzQccZSxwYqsGLJEJIVBXfeuq4g
about
dangahmed38
p
</code></pre>
|
<python><selenium-webdriver><web-scraping>
|
2023-05-09 09:01:35
| 0
| 549
|
Sakib ovi
|
76,207,341
| 5,547,553
|
How to get first n chars from a str column in python polars?
|
<p>What's the alternative of pandas :</p>
<pre><code>data['ColumnA'].str[:2]
</code></pre>
<p>in python polars?</p>
<pre><code>pl.col('ColumnA').str[:3]
</code></pre>
<p>throws <code>TypeError: 'ExprStringNameSpace' object is not subscriptable
</code>
error.</p>
|
<python><dataframe><python-polars>
|
2023-05-09 08:36:47
| 2
| 1,174
|
lmocsi
|
76,207,323
| 14,256,643
|
While storing pdf text in csv how to avoid spreading text to multiple row
|
<p>I am storing PDF text (extracted with pypdf) in a CSV file. The problem is that a few PDFs are very long, and their text spreads across multiple rows instead of staying in a single row. How can I keep each PDF's text in a single row? Here is what my output looks like:</p>
<pre><code>column1 column2
long pdf hello my
name is jhone
short pdf hello my name is jhone. I haven't any problem for short pdf file
</code></pre>
<p>my code:</p>
<pre><code>pdf_url ='https://www.snb.ch/en/mmr/speeches/id/ref_20230330_amrtmo/source/ref_20230330_amrtmo.en.pdf'
print("pdf_url: ",pdf_url)
# Download the PDF file from the URL
response = requests.get(pdf_url)
# Create an in-memory buffer from the PDF content
pdf_buffer = io.BytesIO(response.content)
# Read the PDF file from the in-memory buffer
pdf = PdfReader(pdf_buffer)
pdf_content = []
# Access the contents of the PDF file
for page_num in range(len(pdf.pages)):
page = pdf.pages[page_num]
page = str(page.extract_text())
pdf_content.append(page)
with open(filename, "a", newline="", encoding='utf8') as f:
writer = csv.writer(f)
writer.writerow([first_author, new_date_str, speech_title,pdf_url,pdf_content])
pdf_content.clear()
</code></pre>
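<p>Since <code>csv</code> quotes embedded newlines, spreadsheet apps render them as multiple visual lines inside one record; flattening the page texts into a single newline-free string before writing keeps the cell on one line. A sketch using stand-in page text:</p>

```python
import csv
import io

# Stand-in for the per-page text pypdf would extract
pdf_content = ["hello my\nname is jhone", "second page\nmore text"]

# Join the pages and strip line breaks so the whole PDF stays in one cell
text = " ".join(page.replace("\r", " ").replace("\n", " ")
                for page in pdf_content)

buf = io.StringIO()
csv.writer(buf).writerow(["long pdf", text])
print(buf.getvalue())
```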
|
<python><python-3.x><pdf><pypdf>
|
2023-05-09 08:32:43
| 1
| 1,647
|
boyenec
|
76,207,071
| 10,232,932
|
Sktime TimeSeriesSplit with pandas dataframe
|
<p>I try to use <code>cross-validation</code> with a time series for a pandas dataframe with the <code>sktime</code> <code>TimeSeriesSplit</code>. The dataframe <code>df</code> has a monthly format:</p>
<pre><code> timepoint balance
0 2017-03-01 1.0
1 2017-04-01 0.0
2 2017-05-01 2.0
3 2017-06-01 3.0
4 2017-07-01 0.0
...
</code></pre>
<p>I try to use <code>prophet</code> and run the following code:</p>
<pre><code>#Packages
from math import sqrt
from sktime.forecasting.fbprophet import Prophet
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error
import numpy as np
#preparation
tscv = TimeSeriesSplit()
rmse = []
model_ph = Prophet()
#function
for train_index, test_index in tscv.split((df)):
cv_train, cv_test = df.iloc[train_index], df.iloc[test_index]
ph= model_ph.fit(cv_train)
predictions = model_ph.predict(cv_test.index.values[0], cv_test.index.values[-1])
true_values = cv_test.values
rmse.append(sqrt(mean_squared_error(true_values, predictions)))
#print
print("RMSE: {}".format(np.mean(rmse)))
</code></pre>
<p>which leads to the following error:</p>
<pre><code>TypeError: X must be either None, or in an sktime compatible format, of scitype
Series, Panel or Hierarchical, for instance a pandas.DataFrame with sktime
compatible time indices...
</code></pre>
<p>I would have expected outputs for the <code>mean_squared_error</code></p>
|
<python><pandas><dataframe><scikit-learn><time-series>
|
2023-05-09 07:58:15
| 2
| 6,338
|
PV8
|
76,206,987
| 2,932,907
|
Group together key value pairs in dictionary with a common value
|
<p>Given the following dictionary:</p>
<pre><code> my_dict = {
'Admin.Email': 'admin@domein.nl',
'Admin.Name': 'Admin Account',
'Admin.Phone': '+31 6 12345678',
'Admin.RipeID': 'nan',
'Loc.Address': 'Testraat 100',
'Loc.Country': 'Nederland',
'Loc.Place': 'Plaatsnaam',
'Loc.Postalcode': '1234AB',
'Org.Address': 'Testraat 100',
'Org.Country': 'Nederland',
'Org.Email': 'email@domein.nl',
'Org.Name': 'Bedrijfsnaam',
'Org.Place': 'Plaatsnaam',
'Org.Postalcode': '1234AB',
'Org.RipeID': 'nan',
'Tech.Email': 'tech@domein.nl',
'Tech.Name': 'Tech Account',
'Tech.Phone': '+31 6 87654321',
'Tech.RipeID': 'nan',
'Technische contactpersoon': 'nan'
}
</code></pre>
<p>I want to group together all key value pairs that have a common name (like admin or tech) so that I can easily find the value corresponding to its key based on the groupname. Desired result would be something like this:</p>
<pre><code> new_dict = {
    'attributes': {
        'admin': [
            {'name': 'email', 'value': 'admin@domein.nl'},
            {'name': 'name', 'value': 'Admin Account'},
            {'name': 'phone', 'value': '+31 6 12345678'},
            {'name': 'ripeid', 'value': 'nan'}
        ],
        'tech': [
            {'name': 'email', 'value': 'tech@domein.nl'},
            {'name': 'name', 'value': 'Tech Account'},
            {'name': 'phone', 'value': '+31 6 87654321'},
            {'name': 'ripeid', 'value': 'nan'}
        ]
    }
 }
</code></pre>
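<p>A sketch of one way to build that structure: split each key on the first dot, collect per-prefix lists with <code>defaultdict</code>, and skip keys without a prefix:</p>

```python
from collections import defaultdict

# Reduced version of the dictionary from the question
my_dict = {
    'Admin.Email': 'admin@domein.nl',
    'Admin.Name': 'Admin Account',
    'Tech.Email': 'tech@domein.nl',
    'Technische contactpersoon': 'nan',  # no '.', so it is skipped
}

grouped = defaultdict(list)
for key, value in my_dict.items():
    if '.' not in key:
        continue
    group, name = key.split('.', 1)
    grouped[group.lower()].append({'name': name.lower(), 'value': value})

new_dict = {'attributes': dict(grouped)}
print(new_dict)
```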
|
<python><dictionary>
|
2023-05-09 07:48:31
| 1
| 503
|
Beeelze
|
76,206,956
| 3,016,709
|
Map two mixed columns with another dataframe
|
<p>I have a dataframe where two columns, <code>User</code> and <code>Code</code>, have their values mixed or not correctly filled.</p>
<pre><code>df = pd.DataFrame({"User":['Dan', 'Mary','003','010','Luke','Peter'],"Code":['001','035','003','Martin','Luke','AAA'],"Job":['astronaut','biologist','director','footballer','waiter','unknown']})
df
User Code Job
Dan 001 astronaut
Mary 035 biologist
003 003 director
010 Martin footballer
Luke Luke waiter
Peter AAA unknown
</code></pre>
<p>Given that I have a second dataframe - dictionary with the mapping of users and codes:</p>
<pre><code>df_dict = pd.DataFrame({"name":['Dan','Paul','Julia','Mary','Martin','George','Luke','Marina'],"code":['001','045','012','035','010','003','200','501']})
df_dict
name code
Dan 001
Paul 045
Julia 012
Mary 035
Martin 010
George 003
Luke 200
Marina 501
</code></pre>
<p>how can I do the mapping so that the first dataframe <code>df</code> looks like:</p>
<pre><code>User Code Job
Dan 001 astronaut
Mary 035 biologist
George 003 director
Martin 010 footballer
Luke 200 waiter
Peter AAA unknown
</code></pre>
<p>Notice that if a value is not in the <code>df_dict</code> (like 'Peter' in this example) the record should remain as in the original <code>df</code></p>
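<p>One possible approach (a sketch with small self-consistent stand-in data, since the lookup needs exactly one code per name): build both lookup directions from <code>df_dict</code> and replace only the values that actually appear in the lookup, leaving everything else (like 'Peter'/'AAA') untouched:</p>

```python
import pandas as pd

# Stand-ins for df and df_dict from the question
df = pd.DataFrame({
    "User": ["Dan", "003", "010", "Peter"],
    "Code": ["001", "003", "Martin", "AAA"],
})
df_dict = pd.DataFrame({
    "name": ["Dan", "George", "Martin"],
    "code": ["001", "003", "010"],
})

name_by_code = dict(zip(df_dict["code"], df_dict["name"]))
code_by_name = dict(zip(df_dict["name"], df_dict["code"]))

# Where User actually holds a known code, swap in the matching name
mask = df["User"].isin(name_by_code.keys())
df.loc[mask, "User"] = df.loc[mask, "User"].map(name_by_code)
# Where Code actually holds a known name, swap in the matching code
mask = df["Code"].isin(code_by_name.keys())
df.loc[mask, "Code"] = df.loc[mask, "Code"].map(code_by_name)

print(df)
```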
|
<python><pandas><dataframe><mapping>
|
2023-05-09 07:44:17
| 1
| 371
|
LRD
|
76,206,927
| 2,966,197
|
Wrong output in writing list of pandas dataframe to separate sheets in same excel
|
<p>I have a code where I am using <code>tabula-py</code> to read tables from pdf and then write the resulting list of <code>dataframes</code> to a single excel with each <code>dataframe</code> in a separate sheet.</p>
<p>Here is my current code:</p>
<pre><code>def read_pdf(pdf_file):
output_filepath = "output.xlsx"
dfs = tabula.read_pdf(pdf_file, pages='all')
for i in range(len(dfs)):
print(dfs[i].to_string())
with ExcelWriter(output_filepath) as writer:
dfs[i].to_excel(writer, sheet_name='sheet%s' % i)
</code></pre>
<p>With the print function I can see <code>dataframes</code> with values, but the resulting excel is empty with just one sheet and no output in it.</p>
|
<python><pandas><excel><tabula-py>
|
2023-05-09 07:40:43
| 2
| 3,003
|
user2966197
|
76,206,905
| 9,202,251
|
Sympy does not seem to integrate exponentials of the variable (MATLAB does it)
|
<p>I am trying to integrate my function S(w)</p>
<pre><code>import sympy as sym
w = sym.Symbol('w')
Spm = 0.95**4/w**5*sym.exp(-5/4*(0.95/w)**4)
S = sym.Piecewise(
(Spm**sym.exp(-1/2*((w-0.95)/(0.07*0.95))**2), w<=0.95),
(Spm**sym.exp(-1/2*((w-0.95)/(0.09*0.95))**2), w>0.95))
S_int = sym.integrate(S, (w, 0.8, 1.0))
</code></pre>
<p>It seems that <code>**sym.exp()</code> is the troublesome part. If I simplify my function to the below, then the integral is computed successfully.</p>
<pre><code>S = sym.Piecewise(
    (Spm**((0.07*0.95))**2, w<=0.95),
    (Spm**((0.09*0.95))**2, w>0.95))
</code></pre>
<p>MATLAB computes the integral easily with 'ArrayValue'.</p>
<pre><code>Spm = @(w) 0.95^4./w.^5.*exp(-5/4*(0.95./w).^4);
S = @(w) ...
Spm(w).^exp(-1/2*((w-0.95)/(0.07*0.95)).^2).*(w <= 0.95) + ...
Spm(w).^exp(-1/2*((w-0.95)/(0.09*0.95)).^2).*(w > 0.95);
S_int = integral(S, 0.8, 1.0, 'ArrayValued', true);
</code></pre>
<p>Is there an equivalent feature in Sympy or Numpy?</p>
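If a symbolic antiderivative is not actually needed, the definite integral can be evaluated numerically without SymPy at all. A sketch using a plain midpoint rule from the standard library, with the integrand transcribed from the piecewise definition above:

```python
import math


def S(w):
    # Transcription of the piecewise spectrum defined above.
    spm = 0.95**4 / w**5 * math.exp(-5 / 4 * (0.95 / w) ** 4)
    sigma = 0.07 if w <= 0.95 else 0.09
    return spm ** math.exp(-0.5 * ((w - 0.95) / (sigma * 0.95)) ** 2)


def midpoint_integral(f, a, b, n=10_000):
    # Composite midpoint rule: sum f at the center of n equal subintervals.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))


result = midpoint_integral(S, 0.8, 1.0)
```

SymPy's own `lambdify` plus `scipy.integrate.quad` would be the more common route, but the stdlib version above already mirrors what MATLAB's `'ArrayValued'` integral is doing numerically.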
|
<python><python-3.x><numpy><matlab><sympy>
|
2023-05-09 07:37:58
| 1
| 490
|
J. Serra
|
76,206,798
| 3,734,059
|
How to throw a validation error with Pydantic StrictFloat on float("nan")?
|
<p>I am using Pydantic's <a href="https://docs.pydantic.dev/latest/usage/types/#strict-types" rel="nofollow noreferrer">strict types</a> and want to throw an error when a <code>StrictFloat</code> field is assigned a <code>float("nan")</code> value.</p>
<p>Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Union

from pydantic import BaseModel, StrictFloat, StrictInt


class StrictNumbers(BaseModel):
    values: List[Union[StrictFloat, StrictInt]]


# throws no validation error
my_model = StrictNumbers(values=[1, 2.0, float("nan")])

# throws a validation error
# my_model = StrictNumbers(values=[1, 2.0, None])

print(my_model)
</code></pre>
<p>What's the easiest way to throw an error on <code>float("nan")</code> here?</p>
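Since `float("nan")` is a perfectly valid float, strict typing alone will not reject it; an explicit check is needed. Below is a standard-library sketch of the check such a validator would run (wiring it into a Pydantic custom validator is the assumed integration point, not shown here):

```python
import math


def reject_nan(values):
    # NaN is the only float not equal to itself, but math.isnan
    # is the idiomatic test. Raise so a validator can surface the error.
    for v in values:
        if isinstance(v, float) and math.isnan(v):
            raise ValueError("NaN is not allowed")
    return values
```

The same logic dropped into a Pydantic validator for the `values` field would turn the silent acceptance above into a `ValidationError`.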
|
<python><pydantic>
|
2023-05-09 07:22:54
| 2
| 6,977
|
Cord Kaldemeyer
|
76,206,666
| 11,748,924
|
Unable to display mp4 after process video opencv
|
<p>I have this code</p>
<pre><code>import cv2

from .inference import inference


def create_video_from_frames(frames, output_path, fps):
    # Get the dimensions of the first frame
    height, width, _ = frames[0].shape

    # Resize all frames to have the same dimensions
    resized_frames = [cv2.resize(frame, (width, height)) for frame in frames]

    # Create a VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    video_writer = cv2.VideoWriter(filename=output_path,
                                   fourcc=fourcc,
                                   fps=fps,
                                   frameSize=(width, height))

    # Write each frame to the video file
    for frame in resized_frames:
        video_writer.write(frame)

    # Release the video writer
    video_writer.release()


def process_video(original_video_path, processed_video_path):
    # Open the video file
    capture = cv2.VideoCapture(original_video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # # Create a VideoWriter object to save the processed video
    # fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    # writer = cv2.VideoWriter(processed_video_path, fourcc, fps, (width, height))

    # Process each frame of the video
    frames = []
    counter = 0
    try:
        while True:
            # Read a frame from the video
            ret, frame = capture.read()
            if not ret:
                break

            # Process Layers
            frame = inference(frame)

            # Write the processed frame to the output video
            # writer.write(frame)
            frames.append(frame)
            print(counter)
            counter += 1
    except:
        pass

    # Release resources
    capture.release()
    # writer.release()
    create_video_from_frames(frames, processed_video_path, fps)
</code></pre>
<p>It successfully processes the video, and I'm able to open the processed video with <code>Windows Media Player</code>, but I'm not able to open it in a <code>web browser</code> such as Edge or Firefox.</p>
<p>I also check the validity of mp4 using this code</p>
<pre><code>import cv2

def is_valid_mp4(file_path):
    try:
        cap = cv2.VideoCapture(file_path)
        is_valid = cap.isOpened()
        cap.release()
        return is_valid
    except:
        return False

print(is_valid_mp4('static/tmp/original_video.mp4'))
print(is_valid_mp4('static/tmp/processed_video.mp4'))
</code></pre>
<p>It returns <code>True</code> for both files.</p>
<p>Note that the original_video opens in a web browser while the processed_video does not.</p>
<p>I also checked the frame sizes of the video, and they are all the same:</p>
<pre><code>import cv2

# Open the video file
video_path = "static/tmp/processed_video.mp4"
cap = cv2.VideoCapture(video_path)

# Check if the video file was opened successfully
if not cap.isOpened():
    print("Error opening video file")
    exit()

# Iterate through the frames
frame_number = 0
while True:
    # Read the next frame
    ret, frame = cap.read()

    # Check if frame was successfully read
    if not ret:
        break

    # Increment the frame number
    frame_number += 1

    # Try to decode the frame
    try:
        # Convert the frame to grayscale for simplicity
        gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    except cv2.error as e:
        print(f"Corrupt frame at frame number {frame_number}")
        print("Error:", e)

# Release the video capture object and close any open windows
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>So what's wrong with this?</p>
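One common cause (an assumption here, since the codec is the usual suspect in this symptom): `'mp4v'` is MPEG-4 Part 2, which Windows Media Player handles but most browsers refuse to play; browsers generally need H.264 (FourCC `'avc1'`) in the MP4 container. The FourCC itself is just four ASCII bytes packed into a 32-bit integer, which can be sketched without OpenCV:

```python
def fourcc(code: str) -> int:
    # Pack four ASCII characters into a little-endian 32-bit code,
    # the same integer cv2.VideoWriter_fourcc(*code) produces.
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))


def fourcc_to_str(value: int) -> str:
    # Unpack the 32-bit code back into its four characters.
    return "".join(chr((value >> (8 * i)) & 0xFF) for i in range(4))
```

If the local OpenCV build lacks an H.264 encoder, a common workaround is to re-encode the finished file with ffmpeg (`-c:v libx264 -pix_fmt yuv420p`) before serving it to browsers.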
|
<python><opencv><video><mp4>
|
2023-05-09 07:05:20
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
76,206,605
| 3,943,600
|
python setup.py with root folder directories
|
<p>I have a python project folder with a setup.py in below structure.</p>
<pre><code>root
|_app
  |_ __main__.py
  |_ __init__.py
  |_ other .py files
|_inputs
  |_ .py files
|_logger
  |_ .py files
|_setup.py
</code></pre>
<p>I need to package all of these folders into one directory when building, in the following format:</p>
<pre><code>app
|_main, init and other py files
|_logger
|_inputs
</code></pre>
<p>My setup file currently looks like this:</p>
<pre><code>from setuptools import setup
import os

setup(name='app',
      version=os.environ['BUILD_NUMBER'],
      description='sol',
      author='codebot',
      packages=['app',
                'app.loaders',
                'app.data',
                'app.config',
                'inputs',
                'logger'
                ],
      entry_points={
          'group_1': 'run=app.__main__:main'
      })
</code></pre>
<p>But with this I can't see <code>inputs</code> and <code>logger</code> (which contain only .py files) in the package folder once installed. How can I fix this with setup.py? I was thinking about whether there is a way to copy folders using setup.py, but no luck yet.</p>
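One sketch of a fix, assuming every folder that should ship contains an `__init__.py`: let setuptools discover the packages instead of listing them by hand. `find_packages` only picks up directories that contain an `__init__.py`, which is the usual reason a folder of loose `.py` files silently disappears from the built package:

```python
from setuptools import find_packages


# find_packages walks a root directory and returns every importable
# package, i.e. every directory containing an __init__.py.
# In setup.py this would be:  setup(..., packages=find_packages())
def discover(root):
    return sorted(find_packages(where=root))
```

So if `inputs` and `logger` are missing after install, checking that each has an `__init__.py` (or switching to `find_namespace_packages`) is the first thing to try.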
|
<python><setup.py>
|
2023-05-09 06:55:52
| 0
| 2,676
|
codebot
|
76,206,579
| 5,800,969
|
How to use aws_lambda_powertools logger to filter/disable specific api endpoint logs like health check /ping api
|
<p>I tried to follow the general logging-class <code>filter()</code> approach, but the method doesn't exist in aws_lambda_powertools logging and it throws an error. I am doing this to discard <code>INFO: 127.0.0.1:51927 - "GET /ping HTTP/1.1" 200 OK</code> rows in the AWS CloudWatch log, as the endpoint gets hit every 20 seconds by a Terraform health check which we can't disable.</p>
<p>Code I tried:</p>
<pre><code>import os

from aws_lambda_powertools import Logger, logging


class EndpointFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        return record.getMessage().find("/ping") == -1


def get_aws_powertool_logger():
    date_format = "%Y-%m-%dT%H:%M:%S.%f%z"
    service = os.path.basename(os.getcwd())
    print("---- logger---- ", service)
    # logger: Logger = Logger(service=service, datefmt=date_format, level="DEBUG")
    logger = logging.getLogger("uvicorn.access").addFilter(EndpointFilter())
    return logger
</code></pre>
<p>Error I am getting:</p>
<pre><code> class EndpointFilter(logging.Filter):
AttributeError: module 'aws_lambda_powertools.logging' has no attribute 'Filter'
</code></pre>
<p>Refrence: <a href="https://github.com/encode/starlette/issues/864" rel="nofollow noreferrer">https://github.com/encode/starlette/issues/864</a></p>
<p>Any help is highly appreciated. Thanks in advance.</p>
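The error itself is accurate: `aws_lambda_powertools.logging` is a package, not the standard `logging` module, so it has no `Filter` class. Subclassing the standard library's `logging.Filter` works, and the filter can then be attached to whichever logger emits the access lines (`"uvicorn.access"` per the question). A runnable stdlib-only sketch:

```python
import logging


class EndpointFilter(logging.Filter):
    # Drop any record whose rendered message mentions /ping.
    def filter(self, record: logging.LogRecord) -> bool:
        return record.getMessage().find("/ping") == -1


logger = logging.getLogger("uvicorn.access")
logger.addFilter(EndpointFilter())
```

One extra bug to note: `addFilter` returns `None`, so the original `logger = logging.getLogger(...).addFilter(...)` assigns `None` to `logger`; call `addFilter` on its own line and return the logger separately.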
|
<python><logging><amazon-cloudwatchlogs><health-check><aws-lambda-powertools>
|
2023-05-09 06:52:42
| 2
| 2,071
|
iamabhaykmr
|
76,206,555
| 9,990,621
|
How to know my Python script is being run by another script or directly in a shell?
|
<p>I have this Python script for formatting and coloring output:</p>
<pre><code>def error(message):
    print(f"\033[91m{message}\033[00m")

def warning(message):
    print(f"\033[93m{message}\033[00m")

def info(message):
    print(f"\033[96m{message}\033[00m")

def success(message):
    print(f"\033[92m{message}\033[00m")
</code></pre>
<p>And it works great when I write messages directly to the terminal.</p>
<p>However, some of my scripts are used indirectly in other shell scripts, and in that case, my output won't work properly.</p>
<p>For example, I have a Python script to find out the name of the OS. It's called <code>GetOs</code> and when executed, it would print <code>Ubuntu</code> in a blue color.</p>
<p>Now when I want to use this Python script in my Shell script as follow:</p>
<pre><code>if [[ $(GetOs) == Ubuntu ]]; then
echo "This feature is not supported on Ubuntu"
fi
</code></pre>
<p>it fails. The equality operator won't work because the <code>$(GetOs)</code> would return a blue result to the stdout.</p>
<p>Is it possible for me to conditionally color messages in my <code>Message.py</code> file? To know if it's being called from another script or directly from the terminal?</p>
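A common convention, and presumably what is wanted here, is to emit colour codes only when stdout is an interactive terminal. In command substitution like `$(GetOs)`, stdout is a pipe, so `sys.stdout.isatty()` returns `False` and plain text comes through. A sketch:

```python
import sys


def colorize(message, code):
    # Wrap in ANSI codes only when writing to an interactive terminal;
    # command substitution and pipes get plain, comparable text.
    if sys.stdout.isatty():
        return f"\033[{code}m{message}\033[00m"
    return message


def info(message):
    print(colorize(message, 96))
```

With this, `GetOs` prints blue text at a terminal but a bare `Ubuntu` when captured by the shell script's `$(...)`.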
|
<python>
|
2023-05-09 06:48:48
| 1
| 339
|
Fatemeh motlagh
|
76,206,290
| 2,156,784
|
Unable to query BigQuery data in Apache Superset - queries executed successfully at BigQuery but results not returned at Superset
|
<p>Unable to query BigQuery data. I could see the executed queries at the BigQuery project history. Even simple SQL statements are not returning results at Superset.</p>
<p><strong>How to reproduce the error:</strong></p>
<ul>
<li>Installed the latest version of Apache Superset (2.1) & also tried with an older version of Superset and Python (3.8)</li>
<li>Installed SQLAlchemy BigQuery connector</li>
<li>Created database connection successfully and able to see the datasets and tables</li>
<li>Creating Dataset / creating charts / running simple SQL statements at SQL lab produce Gateway
TimeOut error.</li>
</ul>
<p><strong>Expected results</strong></p>
<p>Result set at sql lab / data returning at the superset for creating charts.</p>
<p><strong>Actual results</strong></p>
<p>Gateway Timeout Error - queries running forever and then timeout after the timeout threshold.</p>
<p><strong>Screenshots</strong></p>
<p><a href="https://i.sstatic.net/IB8cJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IB8cJ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/40uYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/40uYT.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/n1bkq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n1bkq.png" alt="enter image description here" /></a></p>
<p><strong>Environment</strong></p>
<pre><code>Browser: Tested with Google Chrome, Edge, Safari
superset version: Superset 2.1.0 / Tested with Superset 2.0
python version: Python 3.10 / Tested with Python 3.8
bigquery connector: sqlalchemy-bigquery 1.6.1 / Tested with Pybigquery
OS version: Ubuntu 22.04.2 LTS
Hosted on: GCP VM
</code></pre>
<p><strong>Additional context</strong></p>
<p>The VM hosted on GCP Compute Engine and verified the access to BigQuery - APIs access is already enabled and able to query data using the same connector (sqlalchemy-bigquery).</p>
<pre><code>from sqlalchemy.engine import create_engine
from pprint import pprint

bigquery_uri = f'bigquery://{project}/{table}'

engine = create_engine(
    bigquery_uri,
)

query = f'SELECT * FROM table WHERE DATE(partition) = "2023-05-05" LIMIT 10;'
rows = engine.execute(query).fetchall()
rows = [dict(row) for row in rows]
pprint(rows)
</code></pre>
<p>Does not look like connectivity issue as I could see the datasets and tables listed for the database and once queries are executed, I could see the same successfully executed queries at the BigQuery.</p>
<p><a href="https://i.sstatic.net/PiuuQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PiuuQ.png" alt="enter image description here" /></a></p>
<p>Please let me know if you require any additional information. Your assistance would be greatly appreciated. Thank you.</p>
|
<python><flask><google-bigquery><sqlalchemy><apache-superset>
|
2023-05-09 06:05:15
| 0
| 1,412
|
Rathish Kumar B
|
76,206,099
| 12,017,769
|
imblearn library BorderlineSMOTE module does not generate any synthetic data
|
<p>I tried to generate synthetic data with BorderlineSMOTE from the imblearn library, but no synthetic data was generated. I am working with a multiclass dataset; for the purpose of generating data, I split my dataframe into the minority class and the majority class, as in binary classification. Then I put the features into X and the target class, consisting of 1's and 0's, into y. This method worked with SVMSMOTE and SMOTENC in the imblearn library, but it doesn't work with BorderlineSMOTE.</p>
<pre><code>X=df.drop(['target'], axis=1)
y=df['target']
border_line = BorderlineSMOTE(random_state=42)
X_res, y_res = border_line.fit_resample(X, y)
</code></pre>
<p>The code does not raise an error, but <code>X_res</code> contains the same records as <code>X</code>, with no synthetic data added.</p>
<p>Is the BorderlineSMOTE module deprecated in the imblearn library?
<br><a href="https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.BorderlineSMOTE.html" rel="nofollow noreferrer">https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.BorderlineSMOTE.html</a></p>
|
<python><imblearn><smote><oversampling>
|
2023-05-09 05:27:59
| 1
| 1,243
|
Dulangi_Kanchana
|
76,206,069
| 4,314,265
|
Python Interpreter can't import files until exited and restarted
|
<p>I have my python files all saved in one directory ('F:\code').</p>
<p>When I enter the Python interpreter from the Windows command prompt :</p>
<pre><code>F:\python
>>>
</code></pre>
<p>I can import any .py files that already existed when I entered the interpreter, but I can't import any files created after the interpreter has started. I have to exit the interpreter and then open it again to be able to import any new .py modules.</p>
<p>I have tried:</p>
<pre><code>>>> import sys
>>> sys.path.append('F:/code')
>>> sys.path.append('F://code')
>>> sys.path.append('F:\')
>>> sys.path.append('F:\\')
</code></pre>
<p>but get the following:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'module-whatever'
</code></pre>
<p>I have tried artificially creating a .pyc file in the <code>__pycache__</code> subfolder of the directory, but that doesn't work either.</p>
<p>I'm not sure why I have to exit and re-enter the interpreter because I would like to be able to write .py scripts on the fly eventually and load them in the interpreter to debug.</p>
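The interpreter caches directory listings for the import system, so a file created after startup may not be seen until those caches are cleared. Calling `importlib.invalidate_caches()` before importing usually fixes this without restarting. A sketch that creates a module on the fly and imports it in the same session:

```python
import importlib
import os
import sys
import tempfile


def import_fresh_module(directory, name, source):
    # Write a brand-new .py file, then clear the import system's
    # cached directory listings so the new file can actually be found.
    with open(os.path.join(directory, name + ".py"), "w") as f:
        f.write(source)
    if directory not in sys.path:
        sys.path.append(directory)
    importlib.invalidate_caches()
    return importlib.import_module(name)
```

For the edit-and-retest workflow described, `importlib.reload(module)` is the companion call once a module has been imported and its file changes on disk.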
|
<python><import><module><interpreter><restart>
|
2023-05-09 05:20:32
| 0
| 434
|
Jeeves
|
76,205,988
| 16,780,162
|
Why my views.py doesn't save submitted csv file in data folder in my app directory in Django project?
|
<p>Hello, I am working on a Django project.</p>
<p>I am trying to save a file after a user submits it from a webpage.</p>
<p>I created the views and HTML.</p>
<p>Parts of the views.py, HTML and urls.py are provided below.</p>
<p>Can I get some help with why my code is not saving the file in the data folder in my app directory?</p>
<p>views.py</p>
<pre><code>class CsvPlot(View):

    def save_uploaded_file(self, request, file):
        if request.method == 'POST' and request.FILES[file]:
            uploaded_file = request.FILES['document']
            # attribute = request.Post.get("attribute")
            # print(attribute)

            # check if the file ends with csv
            if uploaded_file.name.endswith('.csv'):
                save_file = FileSystemStorage()
                name = save_file.save(uploaded_file.name, uploaded_file)
                print(name)

                # Let's save the file in the data folder
                current_dir = settings.BASE_DIR / 'ml_in_geotech'
                file_directory = os.path.join(current_dir, 'data')
                if not os.path.exists(file_directory):
                    os.makedirs(file_directory)

                self.file_path = os.path.join(file_directory, name)
                self.readfile(request, self.file_path)
                request.session['file_path'] = self.file_path
            else:
                return HttpResponse('Invalid file type. Only CSV files are allowed.')
</code></pre>
<p>My HTML</p>
<pre><code><div class="upload_file">
    <h2>Train Your Data</h2>
    <form method="POST" enctype="multipart/form-data" action="/">
        {% csrf_token %}
        <input type="file" name="document" id="document" required="required">
        <button type="submit">Submit</button>
    </form>
    {% block messages %}
    {% if messages %}
    <div class="container" style="color: firebrick; margin-top: 20px" >
        {% for message in messages %}
        {{ message }}
        {% endfor %}
    </div>
    {% endif %}
    {% endblock %}
</div>
</code></pre>
<p>My urls.py</p>
<pre><code>[
path('csv-plot/', views.CsvPlot.as_view(), name='csv_plot'),
]
</code></pre>
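One thing to check (an assumption, since the settings aren't shown): `FileSystemStorage()` with no arguments saves under `MEDIA_ROOT`, so the `self.file_path` built afterwards points at a location in the data folder where nothing was ever written. A framework-free sketch of writing an uploaded file's chunks directly into an ensured target directory (`chunks` here stands in for Django's `UploadedFile.chunks()`):

```python
import os


def save_uploaded_csv(name, chunks, data_dir):
    # Reject non-CSV names, make sure the target directory exists,
    # then stream the chunks straight into it.
    if not name.endswith(".csv"):
        raise ValueError("Only CSV files are allowed.")
    os.makedirs(data_dir, exist_ok=True)
    path = os.path.join(data_dir, name)
    with open(path, "wb") as destination:
        for chunk in chunks:
            destination.write(chunk)
    return path
```

In Django terms, the equivalent is either writing `uploaded_file.chunks()` yourself as above, or constructing `FileSystemStorage(location=file_directory)` so `save()` targets the data folder in the first place.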
|
<python><django><server><django-views><http-post>
|
2023-05-09 05:02:44
| 1
| 332
|
Codeholic
|
76,205,844
| 1,187,968
|
Python: pip install --platform manylinux_2_28_x86_64 --only-binary=:all:
|
<p>I'm using this pip command to install a binary version of cryptography.</p>
<p><code>python3.9 -m pip install --platform manylinux_2_28_x86_64 --only-binary=:all: cryptography -t build</code></p>
<p>However, I'm getting the following error. How should I fix it without version locking?</p>
<pre><code>Collecting cryptography
Using cached cryptography-40.0.2-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
Using cached cryptography-40.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
Using cached cryptography-40.0.0-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
Using cached cryptography-39.0.2-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-39.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-39.0.0-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-38.0.4-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-38.0.3-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-38.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
Using cached cryptography-38.0.0-cp36-abi3-manylinux_2_28_x86_64.whl (4.2 MB)
ERROR: Cannot install cryptography==38.0.0, cryptography==38.0.1, cryptography==38.0.3, cryptography==38.0.4, cryptography==39.0.0, cryptography==39.0.1, cryptography==39.0.2, cryptography==40.0.0, cryptography==40.0.1 and cryptography==40.0.2 because these package versions have conflicting dependencies.
The conflict is caused by:
cryptography 40.0.2 depends on cffi>=1.12
cryptography 40.0.1 depends on cffi>=1.12
cryptography 40.0.0 depends on cffi>=1.12
cryptography 39.0.2 depends on cffi>=1.12
cryptography 39.0.1 depends on cffi>=1.12
cryptography 39.0.0 depends on cffi>=1.12
cryptography 38.0.4 depends on cffi>=1.12
cryptography 38.0.3 depends on cffi>=1.12
cryptography 38.0.1 depends on cffi>=1.12
cryptography 38.0.0 depends on cffi>=1.12
</code></pre>
|
<python><pip>
|
2023-05-09 04:25:44
| 1
| 8,146
|
user1187968
|
76,205,611
| 1,734,097
|
Python Dataframe Tuple to insert data
|
<p>I have the following <code>tuple</code> derived from a Python <code>dataframe</code>:</p>
<pre><code> (array(['2023-04-01', '2023-06-01',
'item1', 10, 2,
'Promo1', 'NULL'], dtype=object),
array(['2023-04-01', '2023-06-01',
'item2', 10, 2,
'Promo1', 'NULL'], dtype=object),
array(['2023-04-01', '2023-06-01',
'item3', 10, 4,
'Promo1', 'NULL'], dtype=object),
array(['2023-04-01', '2023-06-01',
'item4', 10, 4,
'Promo1', 'NULL'], dtype=object),
array(['2023-04-01', '2023-06-01',
'item5', 10, 2,
'Promo1', 'NULL'], dtype=object))
</code></pre>
<p>i want to transform them to the following output:</p>
<pre><code>(
('2023-04-01', '2023-06-01',
'item1', 10, 2,
'Promo1', 'NULL', ),
('2023-04-01', '2023-06-01',
'item2', 10, 2,
'Promo1', 'NULL', ),
('2023-04-01', '2023-06-01',
'item3', 10, 4,
'Promo1', 'NULL', ),
('2023-04-01', '2023-06-01',
'item4', 10, 4,
'Promo1', 'NULL', ),
('2023-04-01', '2023-06-01',
'item5', 10, 2,
'Promo1', 'NULL', )
)
</code></pre>
<p>Basically I want to insert all the values from the dataframe using batch processing (5 rows at a time, for example).</p>
<p>I am able to use <code>iloc</code> to loop over the dataframe.
Now I'm stuck on transforming the tuple into the expected output.</p>
<p>How do I do that?</p>
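Converting the arrays is a one-liner, since `tuple()` accepts any iterable (including NumPy arrays); batching is then a matter of slicing. A sketch on plain sequences — the same calls work unchanged on the arrays shown above:

```python
def to_tuples(rows):
    # Each row (array, list, ...) becomes a plain tuple.
    return tuple(tuple(row) for row in rows)


def batched(rows, size):
    # Yield consecutive chunks of `size` rows for batch inserts.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]
```

Each yielded batch can then go straight into something like `cursor.executemany(insert_sql, batch)`.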
|
<python><dataframe><tuples>
|
2023-05-09 03:15:58
| 1
| 1,099
|
Cignitor
|
76,205,494
| 8,321,207
|
shifting columns in dataframe and adding zeros at the start
|
<p>I have a data frame with more than 700 columns. I have an array whose length is equal to the rows of data frame so each array value corresponds to one row of data frame. the value in the array is basically a random column from the data frame. This is what I am looking for:</p>
<p>I want to shift all the columns mentioned in the array to the 100th column in my data frame. To do so, I can add zero columns to the start depending on how much shift I need to make. All the values in the array are less than 100, so I know I will have to add zero columns for all rows.</p>
<p>for instance if this is the data frame</p>
<pre><code> DF1
col1 col2 col3 col4 col5 col6... col100....col700
21 321 52 74 74 55 ..... 20 .... 447
</code></pre>
<p>array[0] = 6, so I will have to shift 96 columns, meaning I will add 96 zeros to the start, and the new dataframe will be:</p>
<pre><code>modified_DF1
col1 col2 col3 col4 col5 col6... col 98 col99 col100....col796
0 0 0 0 0 0 ..... 52 75 55 .... 447
</code></pre>
<p>I have tried the following code, but it does not seem to be working:</p>
<pre><code>def shift_r_peak(df, shift_array):
    for i in range(df.shape[0]):
        shift_val = 100 - shift_array[i]
        new_row = np.zeros(df.shape[1])  # create a new row with zeros
        new_row[shift_val:shift_val+shift_array[i]+1] = df.iloc[i, :shift_array[i]+1].values  # insert the values from the original row at the appropriate position
        df.iloc[i] = new_row  # assign the new row to the original DataFrame
    return df
</code></pre>
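The per-row operation described amounts to "prepend `100 - k` zeros and keep every original value", which grows the row rather than overwriting it in place (the in-place assignment above is why values get lost — the frame's width never changes). A minimal sketch on plain lists, with the target position as a parameter so a small case is easy to check:

```python
def shift_row(row, k, target=100):
    # Move the value originally at 1-based column k to column `target`
    # by prepending zeros; the row gets longer, nothing is dropped.
    pad = target - k
    return [0] * pad + list(row)
```

Because rows end up with different lengths, the usual pandas route is to build the shifted rows first and right-pad all of them with zeros to the longest length before reassembling the DataFrame.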
|
<python><pandas><dataframe>
|
2023-05-09 02:36:19
| 2
| 375
|
Kathan Vyas
|
76,205,477
| 3,532,564
|
Local Jax variable not updating in `jit`ted function, but updating in standard?
|
<p>So, I've got some code and I could really use help deciphering the behavior and how to get it to do what I want.</p>
<p>See my code as follows:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, List

import chex
import jax.numpy as jnp
import jax

Weights = List[jnp.ndarray]


@chex.dataclass(frozen=True)
class Model:
    mult: Callable[
        [jnp.ndarray],
        jnp.ndarray
    ]
    jitted_mult: Callable[
        [jnp.ndarray],
        jnp.ndarray
    ]
    weight_updater: Callable[
        [jnp.ndarray], None
    ]


def create_weight():
    return jnp.ones((2, 5))


def wrapper():
    weights = create_weight()

    def mult(input_var):
        return weights.dot(input_var)

    @jax.jit
    def jitted_mult(input_var):
        return weights.dot(input_var)

    def update_locally_created(new_weights):
        nonlocal weights
        weights = new_weights
        return weights

    return Model(
        mult=mult,
        jitted_mult=jitted_mult,
        weight_updater=update_locally_created
    )


if __name__ == '__main__':
    tester = wrapper()
    to_mult = jnp.ones((5, 2))
    for i in range(5):
        print(jnp.sum(tester.mult(to_mult)))
        print(jnp.sum(tester.jitted_mult(to_mult)))
        if i % 2 == 0:
            tester.weight_updater(jnp.zeros((2, 5)))
        else:
            tester.weight_updater(jnp.ones((2, 5)))
        print("*" * 10)
</code></pre>
<p>TL;DR I'm defining some "weights" within a function closure, and I'm trying to modify the weights via a <code>nonlocal</code>. The problem seems to be that the <code>jit</code>-ted version (<code>jitted_mult</code>) of the function doesn't recognize the "updated" weights, whereas the non-jit function (<code>mult</code>) does.</p>
<p>What can I do to make it recognize the update? I <strong>think</strong> that I might be able to do what <a href="https://dm-haiku.readthedocs.io/en/latest/notebooks/build_your_own_haiku.html" rel="nofollow noreferrer">Build your own Haiku</a> does, but that seems like a lot of work for an experiment</p>
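What's happening: `jax.jit` traces the function and bakes the closed-over `weights` into the compiled computation, so later `nonlocal` rebinding is invisible to it — much like any cache keyed only on the arguments. The usual fix is to pass the weights as an explicit argument so they participate in tracing. A stdlib analogy of the stale-closure effect (using `functools.lru_cache` as a stand-in for jit's caching, not as an exact model of tracing):

```python
import functools


def wrapper():
    state = 1

    @functools.lru_cache(maxsize=None)
    def cached_mult(x):
        # The cache is keyed on x alone, so the closed-over `state`
        # is frozen into the first cached result -- like a jitted trace.
        return state * x

    def update(new_state):
        nonlocal state
        state = new_state

    def mult_with_arg(state_arg, x):
        # Fix: make the state an explicit argument so it is an input.
        return state_arg * x

    return cached_mult, update, mult_with_arg
```

In JAX terms the fix is the same shape: `jitted_mult(weights, input_var)` with the current weights threaded through every call, which is exactly the pattern Haiku's `transform` automates.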
|
<python><jax>
|
2023-05-09 02:30:32
| 1
| 2,258
|
IanQ
|
76,205,305
| 3,821,009
|
Updating non-trivial structures in polars cells
|
<p>Say I have this:</p>
<pre><code>>>> polars.DataFrame([[(1,2),(3,4)],[(5,6),(7,8)]], list('ab'))
shape: (2, 2)
┌────────┬────────┐
│ a ┆ b │
│ --- ┆ --- │
│ object ┆ object │
╞════════╪════════╡
│ (1, 2) ┆ (5, 6) │
│ (3, 4) ┆ (7, 8) │
└────────┴────────┘
</code></pre>
<p>How would I go with updating the 2nd element of the tuple only? E.g. what do I need to do to get this:</p>
<pre><code>┌────────┬────────┐
│ a ┆ b │
│ --- ┆ --- │
│ object ┆ object │
╞════════╪════════╡
│ (1, 9) ┆ (5, 9) │
│ (3, 9) ┆ (7, 9) │
└────────┴────────┘
</code></pre>
<p>where I've replaced the 2nd element of all tuples to 9.</p>
<p>Note that I'm not necessarily married to using tuples, but I'd like some kind of structure (dict, list, ...) as I want elements to have two pieces of information per cell due to a downstream requirement.</p>
|
<python><python-polars>
|
2023-05-09 01:32:14
| 2
| 4,641
|
levant pied
|
76,205,294
| 5,942,100
|
Tricky conversion within dataframe using Pandas (compress and de-aggregate)
|
<p>I wish to compress the columns within my dataframe and de-aggregate the values:</p>
<p><strong>Data</strong></p>
<pre><code>state range type Q1 24 Q2 24 stat
NY up AA 2 2 grow
NY up AA 1 0 re
NY up BB 1 1 grow
NY up BB 0 0 re
NY up DD 2 3 grow
NY up DD 0 1 re
CA low AA 0 2 grow
CA low AA 1 0 re
CA low BB 0 1 grow
CA low BB 0 0 re
CA low DD 0 3 grow
CA low DD 1 0 re
dataframe:
import pandas as pd
data = {
'state': ['NY', 'NY', 'NY', 'NY', 'NY', 'NY', 'CA', 'CA', 'CA', 'CA', 'CA', 'CA'],
'range': ['up', 'up', 'up', 'up', 'up', 'up', 'low', 'low', 'low', 'low', 'low', 'low'],
'type': ['AA', 'AA', 'BB', 'BB', 'DD', 'DD', 'AA', 'AA', 'BB', 'BB', 'DD', 'DD'],
'Q1 24': [2, 1, 1, 0, 2, 0, 0, 1, 0, 0, 0, 1],
'Q2 24': [2, 0, 1, 0, 3, 1, 2, 0, 1, 0, 3, 0],
'stat': ['grow', 're', 'grow', 're', 'grow', 're', 'grow', 're', 'grow', 're', 'grow', 're']
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>state qtr type range stat
NY Q1 24 AA01 up grow
NY Q1 24 AA02 up grow
NY Q1 24 AA03 up re
NY Q1 24 BB01 up grow
NY Q1 24 DD01 up grow
NY Q1 24 DD02 up grow
CA Q1 24 AA01 low re
CA Q1 24 DD01 low re
NY Q2 24 AA01 up grow
NY Q2 24 AA02 up grow
NY Q2 24 BB01 up grow
NY Q2 24 DD01 up grow
NY Q2 24 DD02 up grow
NY Q2 24 DD03 up grow
NY Q2 24 DD04 up re
CA Q2 24 AA01 low grow
CA Q2 24 AA02 low grow
CA Q2 24 BB01 low grow
CA Q2 24 DD01 low grow
CA Q2 24 DD02 low grow
CA Q2 24 DD03 low grow
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>(df
.pivot_longer(
index = slice('state', 'type'),
names_to = ("qtr", ".value"),
names_sep = " ")
)
</code></pre>
<p>update count</p>
<pre><code>newdf=newdf.assign(count=newdf['type']+(newdf.groupby(['state','type'])['type'].cumcount()+1).astype(str))
</code></pre>
<p>However, this is only one piece, and I am still trying to figure out how to de-aggregate the count. I am researching this. Any suggestion is appreciated.</p>
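The de-aggregation step amounts to repeating each row by its count and numbering the copies, with the suffix continuing across rows of the same state/type (so AA with grow count 2 and re count 1 yields AA01, AA02, AA03). A framework-free sketch of that expansion on plain records; in pandas the repetition itself is typically `df.loc[df.index.repeat(df['count'])]` followed by a `groupby(...).cumcount()` for the suffix (an assumption about the intended numbering):

```python
from collections import defaultdict


def expand(records):
    # records: (state, type_, count, stat); each row is repeated `count`
    # times, and the suffix keeps counting across rows of the same
    # (state, type_) pair.
    counters = defaultdict(int)
    out = []
    for state, type_, count, stat in records:
        for _ in range(count):
            counters[(state, type_)] += 1
            out.append((state, f"{type_}{counters[(state, type_)]:02d}", stat))
    return out
```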
|
<python><pandas><numpy>
|
2023-05-09 01:27:18
| 1
| 4,428
|
Lynn
|
76,205,288
| 3,821,009
|
Row-wise updates in polars
|
<p>Say I have this dataframe:</p>
<pre><code>>>> pl.DataFrame([[1,2,3],[4,5,6],[7,8,9]],list('abc'))
shape: (3, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 1 ┆ 4 ┆ 7 │
│ 2 ┆ 5 ┆ 8 │
│ 3 ┆ 6 ┆ 9 │
└─────┴─────┴─────┘
</code></pre>
<p>Note that below I'm specifically asking about the case where columns are "unimportant" - i.e. I'm looking for a solution that doesn't necessarily depend on columns being named <code>a</code>, <code>b</code> and <code>c</code>, or on there being a certain number of columns.</p>
<p>I can use the following to update a certain column at a certain row:</p>
<pre><code>>>> pl.DataFrame([[1,2,3],[4,5,6],[7,8,9]],list('abc')).with_row_index().with_columns(a=pl.when(pl.col('index') == 1).then(42).otherwise(pl.col('a'))).drop('index')
shape: (3, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 1 ┆ 4 ┆ 7 │
│ 42 ┆ 5 ┆ 8 │
│ 3 ┆ 6 ┆ 9 │
└─────┴─────┴─────┘
</code></pre>
<p>but that's a bit hard if I don't know that I should be updating <code>a</code>.</p>
<p>How can I do the following:</p>
<ol>
<li>Replace the whole row - e.g. I want to replace the 2nd row with <code>43,42,41</code>:</li>
</ol>
<pre><code>┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 1 ┆ 4 ┆ 7 │
│ 43 ┆ 42 ┆ 41 │
│ 3 ┆ 6 ┆ 9 │
└─────┴─────┴─────┘
</code></pre>
<ol start="2">
<li>Replace any column in a certain row by condition - e.g. I want to negate any values <code>> 4</code> in 2nd row:</li>
</ol>
<pre><code>┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 1 ┆ 4 ┆ 7 │
│ 2 ┆ -5 ┆ -8 │
│ 3 ┆ 6 ┆ 9 │
└─────┴─────┴─────┘
</code></pre>
|
<python><python-polars>
|
2023-05-09 01:24:11
| 3
| 4,641
|
levant pied
|
76,205,099
| 447,457
|
How do I update a value of a key in a list of dictionaries in python?
|
<p>I have the following list of dictionaries:</p>
<pre class="lang-py prettyprint-override"><code> result =
[{'Img Entropy': 0.4759365334486925,
'Avg Row Entropy': 0.4756050513785311,
'Comp Size, B': 9675063,
'COMP RATIO, out/in': 0.10262087228128054,
'Stack Pos': 3},
{'Img Entropy': 0.4759365334486925,
'Avg Row Entropy': 0.4756050513785311,
'Comp Size, B': 9675063,
'COMP RATIO, out/in': 0.10262087228128054,
'Stack Pos': 3},
{'Img Entropy': 0.4759365334486925,
'Avg Row Entropy': 0.4756050513785311,
'Comp Size, B': 9675063,
'COMP RATIO, out/in': 0.10262087228128054,
'Stack Pos': 3}]
</code></pre>
<p>I would like to update the value for the second-to-last 'Stack Pos'. When I run the following command, all the 'Stack Pos' keys get updated with the value 10.</p>
<pre><code>result[-2]['Stack Pos'] = 10
</code></pre>
<p>How can I update/add to only the specific key in the list?</p>
<p>The following function creates the list of dictionaries -</p>
<pre class="lang-py prettyprint-override"><code>def get_compression_stats(img):
    result = []
    meta = {}
    r = range(0, len(img))
    i_e, r_avg_e, r_median_e = get_entropy(img, r)

    # iterate over all combinations for each file
    comp_parameters = {'typesize': 4}
    filter_names = ['NONE', 'SHUFFLE', 'BITSHUFFLE', 'BYTEDELTA', 'SHUFFLE+BYTEDELTA', 'BITSHUFFLE+BYTEDELTA', 'SHUFFLE+BITSHUFFLE+BYTEDELTA']
    rows_for_blocksize = [0, 1, 64]

    for c in blosc2.compressor_list():
        print("Codec: " + str(c))
        comp_parameters['codec'] = c
        for f in filter_names:
            print("Filter: " + f)
            comp_parameters['filters'] = get_filter_array(f)
            for r in rows_for_blocksize:
                comp_parameters['blocksize'] = r * img.shape[1] * img[0][0].nbytes
                print("Blocksize: " + str(comp_parameters['blocksize']))
                i_u8 = get_ubyte_img(img)
                c_img = blosc2.compress2(i_u8, **comp_parameters)
                orig_len, comp_len, blk_size = blosc2.get_cbuffer_sizes(c_img)
                c_ratio = comp_len / orig_len
                meta['Img Entropy'] = i_e
                meta['Avg Row Entropy'] = r_avg_e
                meta['Comp Size, B'] = comp_len
                meta['COMP RATIO, out/in'] = c_ratio
                print("Comp Ratio, out/in: " + str(c_ratio))
                result.append(meta)
    return result
</code></pre>
<p>Thank you.</p>
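The reported symptom — one assignment appearing to update every 'Stack Pos' — is the classic signature of appending the same dictionary object repeatedly. In `get_compression_stats`, `meta = {}` is created once before the loops, so `result` holds many references to a single dict, and `result[-2]['Stack Pos'] = 10` mutates that shared object. Creating a fresh dict inside the loop fixes it. A minimal reproduction and fix:

```python
def build_shared(n):
    # Bug: one dict, appended n times -- every list slot aliases it.
    meta = {}
    result = []
    for _ in range(n):
        meta["Stack Pos"] = 3
        result.append(meta)
    return result


def build_fresh(n):
    # Fix: a new dict per iteration, so the slots are independent.
    result = []
    for _ in range(n):
        meta = {"Stack Pos": 3}
        result.append(meta)
    return result
```

In the original function, moving `meta = {}` inside the innermost `for r in rows_for_blocksize:` loop (or appending `dict(meta)`) gives each result entry its own dictionary.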
|
<python><dictionary><key-value>
|
2023-05-09 00:07:54
| 2
| 830
|
shparekh
|
76,204,981
| 1,068,223
|
Managing a long-running API listener using Python / Flask
|
<p>My python script (e.g. <strong>downloader.py</strong>) listens to a streaming REST API endpoint which continuously sends it data. After making an initial <code>GET</code> request, it uses a function such as <code>requests</code> <code>iter_content()</code> to read the stream and then writes the downloaded data into a Mongo DB. It handles authentication and it has some logic for re-authentication if there are auth issues and pause + retry if there are certain types of errors.</p>
<p>The script (<strong>downloader.py</strong>) works well, but I would like to be able to "manage" it through a web interface. Specifically, I want to be able to start the script, see if the script is running, stop the script and re-start it. Making the front end for the web interface is trivial using some front-end JS or anything that makes API calls, but how do I encapsulate the <strong>downloader.py</strong> script into something that exposes a REST API which allows me to control it? e.g. a <strong>manage.py</strong></p>
<p><strong>manage.py</strong> could be a Flask app to handle REST API calls from the web interface and then there are two options:</p>
<ol>
<li>Merge them...</li>
</ol>
<p>I could import code from <strong>downloader.py</strong> into the <strong>manage.py</strong> Flask App and use Redis/Celery to manage asynchronous long-running tasks</p>
<p>or</p>
<ol start="2">
<li>Keep them separate...</li>
</ol>
<p>I could have <strong>downloader.py</strong> periodically poll the Flask API <strong>manage.py</strong> with status updates or to check for requests for termination. manage.py could launch downloader.py as a process if requested to using the API.</p>
<p>Some guidance as to which pattern might work better would be great. Thanks for your help.</p>
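For option 2 (a sketch of the approach, with hypothetical names): the Flask app can own the downloader as a child process via the standard library's `subprocess`, which gives start/status/stop almost for free — `Popen.poll()` returns `None` while the child is still running. Option 1 with Celery/Redis buys retries, result tracking and distribution at the cost of more moving parts:

```python
import subprocess
import sys


class ScriptManager:
    # Minimal process manager: start, is_running, stop.
    def __init__(self, args):
        self.args = args
        self.proc = None

    def start(self):
        if not self.is_running():
            self.proc = subprocess.Popen(self.args)

    def is_running(self):
        # poll() is None while the child is alive.
        return self.proc is not None and self.proc.poll() is None

    def stop(self):
        if self.is_running():
            self.proc.terminate()
            self.proc.wait(timeout=10)
```

A Flask route would then just call `manager.start()`, report `manager.is_running()`, and so on, with the downloader launched as `[sys.executable, "downloader.py"]` (the script name here is taken from the question).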
|
<python><rest><flask><redis><celery>
|
2023-05-08 23:22:09
| 1
| 1,478
|
BYZZav
|
76,204,928
| 8,241,568
|
Loading multiple json files and add path name as column
|
<p>I am currently loading into a spark DataFrame all the json files present in my_folder using the following code:</p>
<pre><code>data = spark.read.option("multiline", "true").json("/path/my_folder")
</code></pre>
<p>Example with two json files:</p>
<ul>
<li>filename1: aaa.json</li>
<li>filename2: bbb.json</li>
</ul>
<p>I get the following DataFrame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>first_name</th>
<th>last_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jack</td>
<td>Paris</td>
</tr>
<tr>
<td>John</td>
<td>Prior</td>
</tr>
<tr>
<td>Nick</td>
<td>Pair</td>
</tr>
<tr>
<td>Louis</td>
<td>Pit</td>
</tr>
<tr>
<td>Georges</td>
<td>Pais</td>
</tr>
</tbody>
</table>
</div>
<p>I now want to save the name of each file into an additional column:</p>
<ul>
<li>filename1: aaa.json</li>
<li>filename2: bbb.json</li>
</ul>
<p>To get the following DataFrame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>first_name</th>
<th>last_name</th>
<th>filename</th>
</tr>
</thead>
<tbody>
<tr>
<td>Jack</td>
<td>Paris</td>
<td>aaa.json</td>
</tr>
<tr>
<td>John</td>
<td>Prior</td>
<td>aaa.json</td>
</tr>
<tr>
<td>Nick</td>
<td>Pair</td>
<td>aaa.json</td>
</tr>
<tr>
<td>Louis</td>
<td>Pit</td>
<td>bbb.json</td>
</tr>
<tr>
<td>Georges</td>
<td>Pais</td>
<td>bbb.json</td>
</tr>
</tbody>
</table>
</div>
<p>I have too many files to loop over them one after the other. Is there another way?</p>
|
<python><json><pyspark>
|
2023-05-08 23:05:42
| 1
| 1,257
|
Liky
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.