Dataset schema (column, dtype, observed min to max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars
77,075,621
11,561,158
Django Query: Annotating queryset list with Sum of fields Values
<p>I have 3 simple models:</p> <pre><code>from django_better_admin_arrayfield.models.fields import ArrayField class Skill(BaseModel): value = models.IntegerField() item = models.ForeignKey(Item, on_delete=models.CASCADE, related_name=&quot;skills&quot;) class Item(BaseModel): name = models.CharField(max_length=100) class BuildItem(models.Model): base = models.ForeignKey(Item, on_delete=models.CASCADE) mandatory_skills = ArrayField(models.IntegerField(null=True, blank=True), default=list) </code></pre> <p>I would like to have a list of <strong>BuildItem</strong> annotated with the sum of the value field of the <strong>Skill</strong> table, but only for <strong>Skill</strong> ids contained in the <strong>mandatory_skills</strong> field.</p> <p>To annotate the list of <strong>BuildItem</strong> I have the following query, which works:</p> <pre><code>q = BuildItem.objects.annotate(sum_skill=Coalesce(Sum(&quot;base__skills__value&quot;), 0)) </code></pre> <p>I would also like to sum only the <strong>Skills</strong> whose id is present in <strong>mandatory_skills</strong>.</p> <p>I have tried many things without success; does anyone have a solution?</p>
<python><django>
2023-09-10 09:59:50
0
863
Hippolyte BRINGER
77,075,446
4,246,716
subset pandas dataframe to get specific number of rows based on values in another dataframe
<p>I have a pandas dataframe as follows:</p> <pre><code>df1 site_id date hour reach maid 0 16002 2023-09-02 21 NaN 33f9fad6-20c5-426c-962f-bc2fbb82aecb 1 16002 2023-09-04 17 NaN 33f9fad6-20c5-426c-962f-bc2fbb82aecb 2 16002 2023-09-04 19 NaN 4a676aeb-6f6f-4622-934b-59b8f149aad7 3 16002 2023-09-04 17 NaN 35363191-c6aa-49fb-beb1-04a98898bed2 4 16002 2023-09-03 22 NaN a44beb20-a90a-4135-be18-6dda71eeb7c2 </code></pre> <p>I have created another dataframe based on the above dataframe that provides the count of records for each <code>[site_id,date,hour]</code> combination.</p> <pre><code>df2 site_id date hour count 1666 37226 2023-09-02 8 4586 1676 37226 2023-09-03 16 3586 639 36972 2023-09-03 21 235 640 36972 2023-09-03 22 5431 641 36972 2023-09-03 23 343 </code></pre> <p>I want to filter the first data frame and get the exact number of records given in the <code>count</code> column of the second data frame. For example, I want to get the <code>4586</code> records from the first data frame matching the <code>site_id 37226, date 2023-09-02 and hour 8</code>.</p> <p>I tried using a for loop on the second dataframe as follows:</p> <pre><code>for index,rows in k3.iterrows(): sid=rows['site_id'] dt=rows['date'] hr=rows['hour'] cnt=rows['count'] kdf1=dff[(dff['site_id'] == sid) &amp; (dff['date']==dt) &amp; (dff['hour']==hr)] kdf2=kdf1[:cnt] </code></pre> <p>This works, but it is extremely slow. Is there a faster method to get the subset? I am also attaching the links to both sample dataframes:</p> <p><a href="https://drive.google.com/drive/folders/1_r4WfSzeBrzv6bR8hr6PWmvKTG_496sE?usp=sharing" rel="nofollow noreferrer">Link to df1 and df2 </a></p>
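One vectorized approach (a sketch with small stand-in frames; the real frames from the question would take the place of `df1` and `df2`): number the rows within each `[site_id, date, hour]` group with `groupby(...).cumcount()`, merge in the per-group `count`, and keep only rows whose within-group rank is below it.

```python
import pandas as pd

# Stand-in data: two (site_id, date, hour) groups of two rows each.
df1 = pd.DataFrame({
    "site_id": [16002, 16002, 16002, 16002],
    "date": ["2023-09-02", "2023-09-02", "2023-09-04", "2023-09-04"],
    "hour": [21, 21, 17, 17],
    "maid": ["a", "b", "c", "d"],
})
# Desired number of rows per group.
df2 = pd.DataFrame({
    "site_id": [16002, 16002],
    "date": ["2023-09-02", "2023-09-04"],
    "hour": [21, 17],
    "count": [1, 2],
})

keys = ["site_id", "date", "hour"]
# cumcount() gives each row its 0-based position within its group.
ranked = df1.assign(_rank=df1.groupby(keys).cumcount())
# Inner merge drops groups not listed in df2, like the loop's filter does.
merged = ranked.merge(df2, on=keys, how="inner")
subset = merged[merged["_rank"] < merged["count"]].drop(columns=["_rank", "count"])
```

This replaces the O(groups × rows) repeated boolean filtering with one grouped pass and one merge.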
<python><pandas><dataframe>
2023-09-10 09:09:36
2
3,045
Apricot
77,075,348
11,210,476
How to type annotate reduce function?
<p>I'm writing a very thin wrapper on top of <code>list</code> and I want to define a method called <code>reduce</code>, but I'm struggling to annotate it properly such that <code>pylance</code>, <code>mypy</code> &amp; <code>pylint</code> cut their complaints whenever I use the method, or even define it.</p> <p>I was perturbed to realize that almost None of Python's builtin libraries are type annotated.</p> <p>This is my implementation attempt:</p> <pre><code> def reduce(self, func: Callable[[list[T], list[T]], list[T]] = lambda x, y: x + y, default: Optional[T] = None) -&gt; 'List[T]': # type: ignore from functools import reduce if default is None: return List(reduce(func, self.list)) # type: ignore return List(reduce(func, self.list, default)) # type: ignore </code></pre> <p>This fails when my <code>List</code> is actually a list of strings</p> <pre><code>a: List[str] = List(['a', 'b']) b = a.reduce(lambda x, y: x + y) </code></pre> <p>Obviously here, the type checkers and linters say they expect <code>list[T]</code> while I passed <code>str</code>.</p>
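One way the signature can be made consistent (a sketch, not the asker's full class): the reducer combines two *elements*, so its type should be `Callable[[T, T], T]`, and the result of reducing a `List[T]` is a single `T`, not another `List[T]`. The default `lambda x, y: x + y` also has to go, because an arbitrary `T` does not support `+`, which is part of what the checkers were complaining about.

```python
from functools import reduce as _reduce
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class List(Generic[T]):
    """Thin illustrative wrapper around a built-in list."""

    def __init__(self, items: list[T]) -> None:
        self.list = items

    def reduce(self, func: Callable[[T, T], T], default: Optional[T] = None) -> T:
        # func folds two elements of type T into one T, so the overall
        # result is a single T -- no wrapping back into List needed.
        if default is None:
            return _reduce(func, self.list)
        return _reduce(func, self.list, default)


a: "List[str]" = List(["a", "b"])
b = a.reduce(lambda x, y: x + y)  # inferred as str, no type: ignore needed
```

With this shape, `mypy` and `pylance` can solve `T = str` from the lambda without complaint.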
<python><python-typing>
2023-09-10 08:35:40
1
636
Alex
77,075,322
4,865,874
Why does adding bound to a TypeVar make mypy not recognize this Protocol?
<p>The assignment within <code>func()</code> below doesn't typecheck:</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, Protocol, TypeVar T = TypeVar(&quot;T&quot;) V = TypeVar(&quot;V&quot;) K = TypeVar(&quot;K&quot;, bound=int) class Gen(Generic[T, K]): ... class Foo(Protocol[T]): def g(self, t: Gen[T, K]) -&gt; T: ... class Bar(Generic[T]): def g(self, t: Gen[T, K]) -&gt; T: raise RuntimeError def func(v: V) -&gt; None: x: Foo[V] = Bar[V]() </code></pre> <p><em>However</em>, if we remove the <code>bound=int</code> from the TypeVar, it does. What's going on here?</p>
<python><mypy><python-typing>
2023-09-10 08:28:08
0
385
leontrolski
77,075,310
1,942,868
How can I get the data used as key for group by
<pre><code>id num tuser 1 2 user1 2 2 user2 3 3 user2 4 1 user4 5 1 user4 </code></pre> <p>For example, I have a table like this.</p> <p>Now I want to get the unique values that appear in <code>tuser</code>, so the result should be <code>user1,user2,user4</code>.</p> <p>I guess it is related to <code>group by</code>, so I wrote this query:</p> <pre><code>p = m.MyTables.objects.all().values('tuser') </code></pre> <p>However, how can I get <code>user1,user2,user4</code>?</p>
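No `GROUP BY` is needed for this: in plain SQL it is `SELECT DISTINCT tuser`, and the Django ORM equivalent would be `m.MyTables.objects.values_list('tuser', flat=True).distinct()`. A stdlib `sqlite3` sketch of the SQL side, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, num INTEGER, tuser TEXT)")
conn.executemany(
    "INSERT INTO mytable VALUES (?, ?, ?)",
    [(1, 2, "user1"), (2, 2, "user2"), (3, 3, "user2"),
     (4, 1, "user4"), (5, 1, "user4")],
)
# DISTINCT collapses duplicate tuser values; ORDER BY makes output stable.
users = [row[0] for row in
         conn.execute("SELECT DISTINCT tuser FROM mytable ORDER BY tuser")]
```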
<python><sql><django>
2023-09-10 08:24:28
2
12,599
whitebear
77,075,244
1,692,384
pip install doesn't work ('pip' is not recognized as an internal or external command) but py -m pip install -U works
<p>Using python on windows. When I try to install new libraries using</p> <pre><code>pip install 'library name' </code></pre> <p>it throws an error saying 'pip' is not recognized as an internal or external command, operable program or batch file.</p> <p>But instead when I use the code</p> <pre><code>py -m pip install -U 'library name' </code></pre> <p>it gives me a working result.</p> <p>Why is this so? I'm quite a beginner at programming, so please be a little elaborate with your answers. Thank You!</p>
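The usual cause: `pip` is just `pip.exe` inside Python's scripts directory, and on Windows that directory is often not on `PATH`, while the `py` launcher is installed somewhere that is. `py -m pip` asks the interpreter itself to run the `pip` module, so no `PATH` lookup of `pip.exe` ever happens. A small sketch showing where the missing directory lives:

```python
import sys
import sysconfig

# This is the directory containing pip.exe; the "not recognized" error
# appears when this directory is absent from PATH.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)

# The PATH-independent invocation, equivalent to "py -m pip":
print(sys.executable, "-m pip install <package>")
```

Adding that scripts directory to `PATH` (or re-running the installer with "Add Python to PATH" checked) makes the bare `pip` command work.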
<python><pip>
2023-09-10 08:02:13
1
399
Black Dagger
77,075,194
893,254
How to plot on the same object returned by a pandas histogram plot
<p>I have created a Pandas DataFrame and plotted it using <code>.hist()</code>.</p> <p>I want to be able to draw lines/curves on top of the same figure.</p> <p>How can I do that?</p> <p>I was able to plot my data as a histogram using <code>df.hist(column='Example', bins=15)</code>. This assigns the returned object to <code>axis</code>.</p> <p>I thought I might be able to plot a line using <code>ax=axis</code> as an argument. But this isn't valid.</p> <p>It appears that <code>plt.plot</code> takes different <code>kwargs</code> to <code>DataFrame.hist</code>. Multiple sets of data from a DataFrame can be plot on the same figure, as histograms, using <code>.hist()</code> in combination with an argument of <code>ax=axis</code>.</p> <p>Here is some example code, taken from a Jupyter Notebook, plus some data to play with.</p> <pre><code>data = [211995, 139950, 202995, 223000, 184995, 82000, 127000, 240000, 116000, 74500, 151000, 149000, 290000, 146000, 174500, 418000, 150000, 150000, 260000, 100000, 282500, 510000, 142000, 382000, 220000, 259000, 330000, 177500, 290000, 280000, 118000, 97000, 124000, 385000, 199950, 90000, 135000, 395000, 182000, 105000, 80000, 230000, 227950, 176995, 110000, 142000, 132500, 100000, 95000, 257500, 186000, 230000, 169995, 167995, 119950, 119950, 361000, 125000, 242000, 240000, 205000, 187500, 180000, 146000, 257995, 380000, 144995, 139995, 159995, 265000, 288000, 288000, 162500, 290000, 182737, 235000, 250000, 175000, 153000, 125000, 170000, 165000, 187995, 250000, 220000, 108750, 125000, 245000, 100000, 130000, 115000, 218000, 190000, 435000, 300000, 465000, 179950, 259500, 187000, 200000] </code></pre> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy df = pd.DataFrame(example_data) df.columns = ['Example'] axis = df.hist(column='Example', bins = 15) x = numpy.linspace(1e5, 5e5, 20) def f(x): return x * numpy.exp(-x) y = f(x) plt.plot(x, y, axis) </code></pre>
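`DataFrame.hist` returns a NumPy array of `Axes` objects, not a single `Axes`, which is why passing it to `plt.plot` fails. Indexing into that array and calling `.plot` on the resulting `Axes` draws on the same figure. A sketch (with an abbreviated stand-in for the question's `data` list and an arbitrary overlay curve):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import numpy as np
import pandas as pd

data = [211995, 139950, 202995, 223000, 184995, 82000,
        127000, 240000, 116000, 74500]  # abbreviated sample
df = pd.DataFrame({"Example": data})

axes = df.hist(column="Example", bins=15)  # ndarray of Axes
ax = axes.flatten()[0]                     # the Axes holding the histogram

x = np.linspace(1e5, 5e5, 20)
y = x * np.exp(-x / 2e5)                   # any curve to overlay
ax.plot(x, y, color="red")                 # drawn onto the same Axes
```

The same `ax` can also be passed to further plotting calls via `ax=ax` (e.g. `df.plot(..., ax=ax)`).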
<python><pandas><matplotlib><histogram>
2023-09-10 07:41:25
2
18,579
user2138149
77,074,974
10,305,444
rendering SVG from segments using networkx is not similar to actual SVG image
<p>I'm trying to render a SVG, it's content is pretty simple:</p> <pre><code> &lt;svg width=&quot;3000&quot; height=&quot;3000&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt; &lt;path d=&quot;M217 279H473Q441 364 409.5 443.0Q378 522 345 607ZM-2 4V66H63L325 747H415L677 66H750V4H474V66H553L495 217H194L137 66H215V4Z&quot; fill=&quot;red&quot; /&gt; &lt;/svg&gt; </code></pre> <p>It's an alphabet (A), flipped vertically, that's all. Now I'm read this file using <code>xml.etree.ElementTree</code>. Then parsing path with <code>svg.path.parse_path</code>.</p> <p>I've only one path here: <code>M217 279H473Q441 364 409.5 443.0Q378 522 345 607ZM-2 4V66H63L325 747H415L677 66H750V4H474V66H553L495 217H194L137 66H215V4Z</code>. If I translate it into 2D coordinate system, it looks something like this:</p> <pre><code>Path(Move(to=(217+279j)), Line(start=(217+279j), end=(473+279j)), QuadraticBezier(start=(473+279j), control=(441+364j), end=(409.5+443j), smooth=False), QuadraticBezier(start=(409.5+443j), control=(378+522j), end=(345+607j), smooth=False), Close(start=(345+607j), end=(217+279j)), Move(to=(-2+4j)), Line(start=(-2+4j), end=(-2+66j)), Line(start=(-2+66j), end=(63+66j)), Line(start=(63+66j), end=(325+747j)), Line(start=(325+747j), end=(415+747j)), Line(start=(415+747j), end=(677+66j)), Line(start=(677+66j), end=(750+66j)), Line(start=(750+66j), end=(750+4j)), Line(start=(750+4j), end=(474+4j)), Line(start=(474+4j), end=(474+66j)), Line(start=(474+66j), end=(553+66j)), Line(start=(553+66j), end=(495+217j)), Line(start=(495+217j), end=(194+217j)), Line(start=(194+217j), end=(137+66j)), Line(start=(137+66j), end=(215+66j)), Line(start=(215+66j), end=(215+4j)), Close(start=(215+4j), end=(-2+4j))) </code></pre> <p>Then I try to parse all nodes and edges from this data:</p> <pre><code>node: (-2.0, 4.0) node: (-2.0, 66.0) node: (63.0, 66.0) node: (137.0, 66.0) node: (194.0, 217.0) node: (215.0, 4.0) node: (215.0, 66.0) node: (217.0, 279.0) node: (325.0, 747.0) node: 
(345.0, 607.0) node: (409.5, 443.0) node: (415.0, 747.0) node: (473.0, 279.0) node: (474.0, 4.0) node: (474.0, 66.0) node: (495.0, 217.0) node: (553.0, 66.0) node: (677.0, 66.0) node: (750.0, 4.0) node: (750.0, 66.0) edge: (0, 0) edge: (0, 1) edge: (1, 2) edge: (2, 3) edge: (3, 0) edge: (4, 4) edge: (4, 5) edge: (5, 6) edge: (6, 7) edge: (7, 8) edge: (8, 9) edge: (9, 10) edge: (10, 11) edge: (11, 12) edge: (12, 13) edge: (13, 14) edge: (14, 15) edge: (15, 16) edge: (16, 17) edge: (17, 18) edge: (18, 19) edge: (19, 4) </code></pre> <p>Then I'm using <code>networkx</code> and <code>matplotlib</code> together to visualize it:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx import matplotlib.pyplot as plt # Create a new graph G = nx.Graph() # Add nodes to the graph for node in nodes: G.add_node(node) print(&quot;node:&quot;,node) # Add edges to the graph for edge in edges: e1, e2 = edge G.add_edge(*(nodes[e1],nodes[e2])) print(&quot;edge:&quot;,edge) # Create a plot pos = {node: (node[0], node[1]) for node in nodes} nx.draw(G, pos, node_color='lightblue', edge_color='gray', node_size=1000) nx.draw(G) #, pos, with_labels=True, node_color='lightblue', edge_color='gray', node_size=1000) # plt.title(cha) plt.show() </code></pre> <p>But what is being represented to me is quite different than expected:</p> <p><a href="https://i.sstatic.net/3B4fG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3B4fG.png" alt="rendering svg with networkx" /></a></p> <p>Actual image:</p> <p><a href="https://i.sstatic.net/cMnur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cMnur.png" alt="actual image of inverted A" /></a></p> <p>I've always used some graphics library to render these stuff. This is my first time processing SVG files directly for rendering. Am I missing any steps? How can I resolve this issue?</p> <p>Although it looks like an 'A', but when I try with other alphabets, it's nowhere close to that alphabet.</p>
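Two likely contributors to the distorted output, judging from the code shown: the second bare `nx.draw(G)` call draws the graph *again* with a default spring layout on top of the coordinate-based drawing, and each `QuadraticBezier` segment is reduced to a single straight edge between its endpoints. Curves need to be sampled at several `t` values (svg.path segments expose `.point(t)` for this); the quadratic Bezier formula itself is a short stdlib sketch, using complex numbers for 2D points just as svg.path does:

```python
def quad_bezier(p0: complex, c: complex, p1: complex, steps: int = 8) -> list[complex]:
    """Sample a quadratic Bezier: B(t) = (1-t)^2*p0 + 2(1-t)t*c + t^2*p1."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        pts.append((1 - t) ** 2 * p0 + 2 * (1 - t) * t * c + t ** 2 * p1)
    return pts

# One QuadraticBezier segment from the question's path: start, control, end.
samples = quad_bezier(473 + 279j, 441 + 364j, 409.5 + 443j)
# Chain consecutive samples as graph edges instead of a single straight line.
```

With curves polylined this way, one `nx.draw(G, pos, ...)` call (and no second draw) should reproduce the glyph, modulo the vertical flip inherent in SVG's y-down coordinates.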
<python><matplotlib><svg><networkx>
2023-09-10 06:01:50
0
4,689
Maifee Ul Asad
77,074,682
412,655
Is it possible for `**kwargs` to have one type for a specific name, and another type for all other names?
<p>I have a function like this:</p> <pre class="lang-py prettyprint-override"><code>def foo(*, x: bool, **kwargs: int) -&gt; None: ... </code></pre> <p>If there is an argument <code>x</code>, it must be a <code>bool</code>. Any other named argument must be an <code>int</code>.</p> <p>Other people will write functions that call <code>foo</code>, and I would like them to be able to pass along <code>x</code> and the <code>kwargs</code>. Currently, that means that every function which calls <code>foo()</code> will have to explicitly pass <code>x</code>, in addition to the <code>kwargs</code>, like this:</p> <pre class="lang-py prettyprint-override"><code>def f(a: str, b: str, x: bool, **kwargs: int) -&gt; None: print(a + b) foo(x=x, **kwargs) </code></pre> <p>What I would like to do is define a type for the <code>**kwargs</code> which makes it unnecessary to explicitly specify <code>x</code> in each of the functions.</p> <p>Using <code>Unpack</code> gets part of the way there, but <code>Unpack</code> requires all names to be known and listed in the <code>TypedDict</code>. The code below doesn't work, but I wish I could do something like this, where <code>_other_</code> magically means any name that's not listed above in the <code>TypedDict</code>:</p> <pre class="lang-py prettyprint-override"><code>class FooKwargs(TypedDict): x: bool _other_: int def foo(**kwargs: Unpack[FooKwargs]) -&gt; None: ... def f(a: str, b: str, **kwargs: Unpack[FooKwargs]) -&gt; None: print(a + b) foo(**kwargs) </code></pre> <p>Another possibility that I wish would work is something like this, but it doesn't work because <code>Unpack</code> only accept <code>TypedDict</code>s -- it won't accept a <code>Union</code>, or a <code>dict[str, int]</code>:</p> <pre class="lang-py prettyprint-override"><code>class FooKnownKwargs(TypedDict): x: bool FooKwargs = Union[FooKnownKwargs, dict[str, int]] def foo(**kwargs: Unpack[FooKwargs]) -&gt; None: ... 
def f(a: str, b: str, **kwargs: Unpack[FooKwargs]) -&gt; None: print(a + b) foo(**kwargs) </code></pre> <p>Again, what I'm looking for is:</p> <ul> <li>A type for <code>**kwargs</code> so that I don't need to explicitly specify an <code>x</code> parameter in the signatures of <code>foo</code> and <code>f</code>.</li> <li>In <code>f()</code>, I don't want to explicitly pass <code>x</code> to <code>foo()</code>. I would like <code>x</code> to go along with the <code>**kwargs</code> automatically, but also keep the type.</li> </ul>
<python><typing>
2023-09-10 03:01:58
0
4,147
wch
77,074,476
3,118,330
FastAPI Integrity Error - Null Value or relation not-null constraint
<p>Once I did all changes to convert the example SQL (Relational) Databases (<code>https://fastapi.tiangolo.com/tutorial/sql-databases/</code>) in <code>Async model</code> I am able to create user however for item I get the error:</p> <pre><code>raise translated_error from error sqlalchemy.exc.IntegrityError: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) &lt;class 'asyncpg.exceptions.NotNullViolationError'&gt;: null value in column &quot;owner_id&quot; of relation &quot;items&quot; violates not-null constraint DETAIL: Failing row contains (9, string, string, null). [SQL: INSERT INTO items (title, description, owner_id) VALUES ($1::VARCHAR, $2::VARCHAR, $3::INTEGER) RETURNING items.id] [parameters: ('string', 'string', None)] (Background on this error at: https://sqlalche.me/e/20/gkpj) </code></pre> <p>After investigate and do some test such manually inser <code>owner_id</code> I realized that model is raise that error.</p> <p>here are my files:</p> <p><code>database.py</code></p> <pre><code>import os from dotenv import load_dotenv from sqlalchemy import MetaData from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker from sqlalchemy.orm import DeclarativeBase class Base(DeclarativeBase): metadata = MetaData(naming_convention={ &quot;ix&quot;: &quot;ix_%(column_0_label)s&quot;, &quot;uq&quot;: &quot;uq_%(table_name)s_%(column_0_name)s&quot;, &quot;ck&quot;: &quot;ck_%(table_name)s_%(constraint_name)s&quot;, &quot;fk&quot;: &quot;fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s&quot;, &quot;pk&quot;: &quot;pk_%(table_name)s&quot;, }) load_dotenv() engine = create_async_engine(os.environ['DATABASE_URL']) SessionLocal = async_sessionmaker(engine, expire_on_commit=False) </code></pre> <p><code>schemas</code></p> <pre><code>from pydantic import BaseModel class ItemBase(BaseModel): title: str description: str | None = None class ItemCreate(ItemBase): pass class Item(ItemBase): id: int owner_id: int class Config: from_attributes = 
True class UserBase(BaseModel): username: str class UserCreate(UserBase): password: str email: str class User(UserBase): id: int is_active: bool items: list[Item] = [] class Config: from_attributes = True </code></pre> <p><code>models</code></p> <pre><code>import sqlalchemy as sa import sqlalchemy.orm as so from database import Base class Item(Base): __tablename__ = &quot;items&quot; id: so.Mapped[int] = so.mapped_column(sa.Integer, primary_key=True, index=True) title: so.Mapped[str] = so.mapped_column(sa.String, index=True) description: so.Mapped[str] = so.mapped_column(sa.Text, index=True) owner_id: so.Mapped[int] = so.mapped_column( sa.ForeignKey('users.id'), index=True) user: so.Mapped['User'] = so.relationship( lazy='joined', innerjoin=True, back_populates='items' ) def __repr__(self): return f'Item({self.id} &quot;{self.title}&quot;)' class User(Base): __tablename__ = &quot;users&quot; id: so.Mapped[int] = so.mapped_column(sa.Integer, primary_key=True, index=True) username: so.Mapped[str] = so.mapped_column(sa.String, unique=True, index=True) email: so.Mapped[str] = so.mapped_column(sa.String, unique=True, index=True) hashed_password: so.Mapped[str] is_active: so.Mapped[bool] = so.mapped_column(sa.Boolean, default=True) items: so.Mapped[list['Item']] = so.relationship( lazy='selectin', cascade='all, delete-orphan', back_populates='user' ) def __repr__(self): return f'User({self.id})' </code></pre> <p><code>main.py</code></p> <pre><code>from fastapi import Depends, FastAPI, HTTPException from sqlalchemy.orm import Session from sqlalchemy import select, func from sqlalchemy.ext.asyncio import AsyncSession import crud, models, schemas from database import SessionLocal app = FastAPI() async def get_db(): db = SessionLocal() try: yield db finally: await db.close() @app.post(&quot;/users/&quot;, response_model=schemas.User) async def create_user(user: schemas.UserCreate, db: AsyncSession = Depends(get_db)): fake_hashed_password = user.password + 
&quot;notreallyhashed&quot; db_user = models.User( username = user.username, email = user.email, hashed_password = fake_hashed_password ) db.add(db_user) await db.commit() # await db.refresh(db_user) return db_user @app.post(&quot;/users/{user_id}/items/&quot;, response_model=schemas.Item) async def create_item_for_user( user_id: int, item: schemas.ItemCreate, db: AsyncSession = Depends(get_db) ): # db_item = models.Item(**item.model_dump(), owner_id=1) db_item = models.Item( title = item.title, description = item.description, owner_id = user_id) db.add(db_item) await db.commit() await db.refresh(db_item) return db_item </code></pre> <p>I am not able to read user created, I guess it is because this relationship with an issue, please your expertise is appreciate. thanks</p>
<python><sqlalchemy><fastapi>
2023-09-10 00:51:59
1
472
dannisis
77,074,445
11,890,443
Effects of POSIX_FADV_DONTNEED when calculating SHA256 sum with python (TL;DR don't use it)
<p>Update: This is now more for documentation after doing more tests.</p> <p>TL;DR using POSIX_FADV_DONTNEED isn't worth it. Not using it gives best speed. On AMD64 it even seems not to be respected.</p> <p>Environment Rasperry PI4 USB 3.0 5TB spinning disk with ext4 file system</p> <p>Using 1 thread with this config gives the best speed, probably because of the spinning disk.</p> <p>When calculating the SHA256 sum of all files in a directory tree (checking multiple restic repositories without having to enter the encryption password for every repository), the read spead of the disk is displayed in nmon and with node_exporter almost twice the speed when using <code>POSIX_FADV_DONTNEED</code>. This argument tells the kernel not to keep the data in the cache. This makes sense because these files are read only once and would otherwise pollute the cache of the system and thus slow it down because other data would miss in the cache.</p> <p>Without <code>POSIX_FADV_DONTNEED</code> read speed is between 60 and 90 MB/s. With <code>POSIX_FADV_DONTNEED</code> read speed is between 155 MB/s and 175 MB/s, so about twice the speed. This value is shown in nmon and with prometheus node_exporter in combination with VictoriaMetrics. However using the time command gives completly different results. Between each run there was a <code>' sync; echo 3 &gt; /proc/sys/vm/drop_caches</code></p> <p>With <code>posix_fadvise(fd, 0, bytesRead)</code> time was 37s and slow disk speed was displayed. When using <code>posix_fadvise(fd, 0, 0)</code> about twice the disk speed was displayed, but in fact time was 1m8 seconds. When using</p> <pre><code>def posix_fadvise(fd, offset, length): return </code></pre> <p>only 29s where needed, so the fastest results were reached not using <code>POSIX_FADV_DONTNEED</code> at all.</p> <p>So there is a wrong disk speed shown on Raspberry-Pi, where as more accurate speed is shown on AMD64. 
On Raspberry PI you can see in VictoriaMectrics and the cache size that it isn't growing when using <code>POSIX_FADV_DONTNEED</code>, so the flag is respected.</p> <p>EDIT: On a hosted VM with SSD and much more performance even when using 4 threads, using <code>POSIX_FADV_DONTNEED</code> makes it reproducible about factor 5 slower. Between every run I did <code># echo 3 &gt; /proc/sys/vm/drop_caches</code> Very strange.</p> <p>EDIT2: On a physical host using a spinning disk connected via USB3.0 and 1 thread when using POSIX_FADV_DONTNEED it takes 1m25s to read all the files. After clearing cache with <code># echo 3 &gt; /proc/sys/vm/drop_caches</code> and not using POSIX_FADV_DONTNEED it only takes 12seconds to calculate the checksum. So a factor 7 (!) difference.</p> <p>Update 20.09.2023: With VictoriaMectrics I can see that <code>POSIX_FADV_DONTNEED</code> seems not to be respected regarding cache on AMD64 (on RPi it is, see above), you can see it growing, despite setting the flag. There is no noticeable difference in speed (using <code>time</code>command) between using <code>posix_fadvise(fd, 0, bytesRead)</code> (real 1m53,630s user 0m48,819s sys 0m5,627s) and immediately returning in <code>def posix_fadvise(fd, offset, length):</code> (real 1m52,675s user 0m51,346s sys 0m6,928s). 
Using <code>posix_fadvise(fd, 0, 0)</code> takes real 2m31,398s user 1m2,004s sys 0m16,178s</p> <pre class="lang-py prettyprint-override"><code>import os import subprocess import hashlib import concurrent.futures import sys import ctypes # Constants for posix_fadvise POSIX_FADV_DONTNEED = 4 base_directory = '/home/pi/5TB' num_threads = 1 # Adjust the number of threads as needed # Define posix_fadvise function def posix_fadvise(fd, offset, length): #return #uncomment and speed will be much slower libc = ctypes.CDLL(&quot;libc.so.6&quot;) ret = libc.posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED) if ret != 0: raise OSError(f&quot;posix_fadvise failed with error code {ret}&quot;) def calculate_sha256(file_path): try: # Calculate the SHA256 checksum of the file sha256_hash = hashlib.sha256() bytesRead = 0 # Initialize the counter for bytes read with open(file_path, 'rb') as f: fd = f.fileno() # Get file descriptor # Advise the kernel that we don't need the file data anymore #posix_fadvise(fd, 0, 0) while True: data = f.read(65536) # Read in 64KB chunks if not data: break bytesRead += len(data) sha256_hash.update(data) posix_fadvise(fd, 0, bytesRead) checksum = sha256_hash.hexdigest() # Check if the checksum matches the filename filename = os.path.basename(file_path) if checksum != filename: sys.stderr.write(f&quot;Error: Checksum mismatch for file '{file_path}'\n&quot;) return file_path except Exception as e: sys.stderr.write(f&quot;Error processing file '{file_path}': {str(e)}\n&quot;) return None def process_files_in_directory(directory): files = [os.path.join(directory, filename) for filename in os.listdir(directory) if os.path.isfile(os.path.join(directory, filename))] results = [] with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor: for file in executor.map(calculate_sha256, files): if file is not None: results.append(file) return results if __name__ == &quot;__main__&quot;: checked_count = 0 for root, _, _ in 
os.walk(base_directory): checked_files = process_files_in_directory(root) checked_count += len(checked_files) if checked_count % 100 == 0: sys.stdout.write(f&quot;Checked {checked_count} files...\n&quot;) sys.stdout.flush() # Flush the stdout buffer to write immediately sys.stdout.write(f&quot;Checked {checked_count} files in total.\n&quot;) </code></pre>
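A side note on the implementation above: the `ctypes` wrapper is unnecessary, because Python's standard library already exposes `os.posix_fadvise` and `os.POSIX_FADV_DONTNEED` (Unix only, since Python 3.3). A sketch of the hashing loop using the stdlib call, guarded so it degrades gracefully on platforms where the call is unavailable:

```python
import hashlib
import os


def sha256_dropping_cache(file_path: str, chunk_size: int = 65536) -> str:
    """SHA256 a file, then advise the kernel to drop its cached pages."""
    sha256_hash = hashlib.sha256()
    bytes_read = 0
    with open(file_path, "rb") as f:
        fd = f.fileno()
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            bytes_read += len(data)
            sha256_hash.update(data)
        # Stdlib wrapper around posix_fadvise(2); no ctypes needed.
        # Unix-only, hence the guard.
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(fd, 0, bytes_read, os.POSIX_FADV_DONTNEED)
    return sha256_hash.hexdigest()
```

This doesn't change the performance conclusions above, but it removes the `libc.so.6` dependency and the manual error-code handling.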
<python><linux><performance>
2023-09-10 00:31:10
0
326
Hannes
77,074,339
4,001,300
Wagtail: custom URLs for blog page categories?
<p>My blog pages have a type and a subtype.</p> <p>I want blog page URLs to look like <code>mysite.com/pages/&lt;blog_type&gt;/&lt;blog_subtype&gt;/&lt;blog_title&gt;</code></p> <p>By default, they are just <code>mysite.com/pages/&lt;blog_title&gt;</code></p> <p>I can't think of a way to do this (I'm new to Wagtail). Can anyone help?</p> <p>Thanks in advance</p>
<python><django><wagtail>
2023-09-09 23:25:08
1
796
Mike Johnson Jr
77,074,094
1,150,683
Dataloader/sampler/collator to create batches based on the sample contents (sequence length)
<p>I am converting someone else's code into a neater torch-y pipeline, using datasets and dataloaders, collate functions and samplers. While I have done such work before, I am not sure how to tackle the following problem.</p> <p>The dataset contains sentences as samples. Every samples therefore has a number of words (or <code>tokens</code>), which we can get by naively splitting the sample on white space (<code>sample.split()</code>). Such a dummy dataset can look like this:</p> <pre class="lang-py prettyprint-override"><code>from random import randint from torch.utils.data import Dataset class DummyDataset(Dataset): def __init__(self): data = [] for _ in range(128): data.append(&quot;hello &quot; * randint(64, 176)) self.data = data def __len__(self): return len(self.data) def __getitem__(self, idx: int): return self.data[idx] </code></pre> <p>Now I want to be able to load data so that the max. number of <em>tokens</em> in a batch is not more than 250. That implies that the batch size can differ between iterations. One batch may contain two samples that have no more than 250 tokens in total (for instance 127 + 77) and another can have three (66+66+66). Now, the core functionality for this is rather straightforward. Full example below; not optimized by sorting on length or something but that's okay for this example.</p> <p>The question is, how can I integrate this in the PyTorch eco-system? Batch sizes are so often used to indicate the number of <code>samples</code> (like in the dataloader). 
So where should I plug this in, or what should I subclass, to make this work like a regular dataloader?</p> <pre class="lang-py prettyprint-override"><code>from random import randint from torch.utils.data import Dataset class DummyDataset(Dataset): def __init__(self): data = [] for _ in range(128): data.append(&quot;hello &quot; * randint(64, 176)) self.data = data def __len__(self): return len(self.data) def __getitem__(self, idx: int): return self.data[idx] if __name__ == '__main__': dataset = DummyDataset() def get_batch(max_tokens: int = 250): data_idxs = list(range(len(dataset))) batch = [] total_batch_len = 0 while data_idxs: sample = dataset[data_idxs[0]] sample_len = len(sample.split()) if total_batch_len + sample_len &lt;= max_tokens: batch.append(sample) total_batch_len += sample_len data_idxs.pop(0) elif batch: yield batch batch = [] total_batch_len = 0 yield batch # Sanity check that we indeed get all items from the dataset num_samples = 0 num_batches = 0 for b in get_batch(): num_samples += len(b) num_batches += 1 print(f&quot;Created {num_batches} batches&quot;) assert num_samples == len(dataset) </code></pre> <p>Maybe torchtext's <a href="https://torchtext.readthedocs.io/en/latest/data.html#iterator" rel="nofollow noreferrer">Iterator</a> and its <code>batch_size_fn</code> can help but I have no experience with it (where should I add it; is it a dataloader itself or should I still wrap a dataloader around it, etc.).</p>
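The natural plug-in point is `DataLoader`'s `batch_sampler` argument: any iterable that yields *lists of dataset indices* works, with `batch_size`, `shuffle`, `sampler`, and `drop_last` left unset. So the token-budget logic lives in a small sampler class (a stdlib-only sketch; `lengths` is assumed to be the precomputed token count per sample, e.g. `len(dataset[i].split())`):

```python
from typing import Iterator, List


class MaxTokensBatchSampler:
    """Yields index lists whose total token count stays within max_tokens.

    Usage: DataLoader(dataset, batch_sampler=MaxTokensBatchSampler(lengths),
                      collate_fn=my_collate)
    """

    def __init__(self, lengths: List[int], max_tokens: int = 250) -> None:
        self.lengths = lengths
        self.max_tokens = max_tokens

    def __iter__(self) -> Iterator[List[int]]:
        batch: List[int] = []
        total = 0
        for idx, n in enumerate(self.lengths):
            # Flush the batch when adding this sample would exceed the budget.
            if batch and total + n > self.max_tokens:
                yield batch
                batch, total = [], 0
            batch.append(idx)
            total += n
        if batch:
            yield batch


# lengths[i] = token count of sample i
sampler = MaxTokensBatchSampler([127, 77, 66, 66, 66, 200], max_tokens=250)
batches = list(sampler)
```

Note that a single sample longer than `max_tokens` still gets its own batch here; sorting indices by length first (bucketing) would reduce padding, at the cost of less randomness per epoch.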
<python><pytorch><nlp><batch-processing><dataloader>
2023-09-09 21:28:17
1
28,776
Bram Vanroy
77,074,085
5,424,359
AWS Amplify deploy fails because of typing-extensions versions
<p>My deploy to AWS amplify suddenly started failing with this error:</p> <pre><code>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. aws-sam-cli 1.55.0 requires typing-extensions==3.10.0.0, but you have typing-extensions 4.7.1 which is incompatible. </code></pre> <ul> <li>I've tried downgrading pipenv to version 2023.8.26, which is the last version where the deploy succeeded</li> <li>I've tried adding <code>python3 -m pip install typing-extensions==3.10.0.0</code> in amplify.yml, which didn't appear to do anything.</li> <li>I've tried downgrading aws-sam-cli, boto3, botocore, and awscli to the last versions that worked, but I hit a dependency error with awscli's dependency on botocore. No matter what I did, I couldn't downgrade botocore.</li> </ul> <p>This is the backend part of my amplify.yml when this error started happening:</p> <pre><code>backend: phases: preBuild: commands: - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3 - ln -fs /usr/local/bin/python3.8 /usr/bin/python3 - python3 -m pip install --upgrade pip - python3 -m pip install --upgrade setuptools wheel - python3 -m pip install --upgrade pipenv build: commands: - amplifyPush --simple </code></pre> <p>Any help would be appreciated.</p>
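One common workaround for this class of conflict is a pip constraints file, which every subsequent `pip install` in the build respects without having to chase each package individually. A sketch of the `preBuild` phase (the file path, the pinned version, and the assumption that Amplify's build commands share one shell are all things to verify against your environment):

```yaml
backend:
  phases:
    preBuild:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - python3 -m pip install --upgrade pip
        # Pin the conflicting package before anything pulls in a newer one;
        # PIP_CONSTRAINT applies the file to every pip invocation that follows.
        - echo "typing-extensions==3.10.0.0" > /tmp/constraints.txt
        - export PIP_CONSTRAINT=/tmp/constraints.txt
        - python3 -m pip install --upgrade setuptools wheel pipenv
    build:
      commands:
        - amplifyPush --simple
```

If the build image runs each command in a separate shell, pass `-c /tmp/constraints.txt` explicitly to each `pip install` instead of exporting the variable.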
<python><amazon-web-services><pip><aws-amplify>
2023-09-09 21:23:26
0
710
Flobbinhood
77,073,979
6,110,557
How to build a python package with setup.py?
<p>I'm trying to build package so that i can install it simply with pip3 command. It is an open-source project. Can someone please suggest me how to make it work.</p> <p>This is content of my setup.py</p> <pre><code>from setuptools import setup, find_packages setup( name='hawkeye', version='0.1.0', description='A powerful scanner to scan your Filesystem, S3, MySQL, Redis, Google Cloud Storage and Firebase storage for PII and sensitive data.', url='e', author='Rohit Kumar', author_email='', packages=find_packages('src'), package_dir={'': 'src'}, entry_points={ 'console_scripts': [ 'hawk_eye=hawk_eye:main', ], }, license='Apache License 2.0', install_requires=['pyyaml', 'rich', 'mysql-connector-python', 'redis', 'boto3'], classifiers=[ 'Development Status :: 3 - Alpha', 'Intended Audience :: Developers', 'Topic :: Software Development :: Build Tools', 'License :: OSI Approved :: Apache License 2.0', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', ], keywords='pii secrets sensitive-data cybersecurity scanner', ) </code></pre> <p>And this is my project structure, and i made this open-source, entire source code can be found here - <a href="https://github.com/rohitcoder/hawk-eye" rel="nofollow noreferrer">https://github.com/rohitcoder/hawk-eye</a>, you can raise a PR if you want :) but let me know how to make it work.</p> <pre><code>├── LICENSE ├── __init__.py ├── assets │ ├── banner.png │ ├── hawk-eye.jpg │ ├── preview.png │ └── preview2.png ├── connection.yml ├── connection.yml.sample ├── fingerprint.yml ├── hawk_eye.py ├── pyvenv.cfg ├── readme.md ├── requirements.txt ├── setup.py └── src ├── __init__.py └── hawk_eye ├── commands │ ├── __init__.py │ ├── firebase.py │ ├── fs.py │ ├── gcs.py │ ├── mysql.py │ ├── redis.py │ └── s3.py ├── internals │ └── system.py └── main.py </code></pre> <p>And hawk_eye.py which is in root folder is having this content</p> <pre><code>import sys import os # Add the root directory to sys.path root_dir = 
os.path.dirname(os.path.realpath(__file__)) sys.path.insert(0, root_dir) from src import main if __name__ == '__main__': main.main() </code></pre> <p>I'm getting this error</p> <pre><code> Attempting uninstall: hawkeye Found existing installation: hawkeye 0.1.0 Uninstalling hawkeye-0.1.0: Successfully uninstalled hawkeye-0.1.0 DEPRECATION: hawkeye is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for hawkeye ... done Successfully installed hawkeye-0.1.0 [notice] A new release of pip is available: 23.0.1 -&gt; 23.2.1 [notice] To update, run: pip install --upgrade pip (venv) rohitcoder@Rohits-MacBook-Pro hawk-eye % hawk_eye Traceback (most recent call last): File &quot;/Users/rohitcoder/Desktop/Projects/hawk-eye/venv/bin/hawk_eye&quot;, line 33, in &lt;module&gt; sys.exit(load_entry_point('hawkeye==0.1.0', 'console_scripts', 'hawk_eye')()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/rohitcoder/Desktop/Projects/hawk-eye/venv/bin/hawk_eye&quot;, line 25, in importlib_load_entry_point return next(matches).load() ^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/metadata/__init__.py&quot;, line 202, in load module = import_module(match.group('module')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/homebrew/Cellar/python@3.11/3.11.4/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File 
&quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1126, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1140, in _find_and_load_unlocked ModuleNotFoundError: No module named 'hawk_eye' (venv) rohitcoder@Rohits-MacBook-Pro hawk-eye % </code></pre> <p>I tried restructuring the folder but everytime i am getting error like hawk_eye isn't module, it's happening for that file only, which is in root folder</p>
<python><python-3.x><installation><pip><setup.py>
2023-09-09 20:37:43
1
400
rohitcoder
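One likely cause of the `ModuleNotFoundError` is the console-script target rather than the packaging itself: with `package_dir={'': 'src'}`, the `main()` function lives in the installed module `hawk_eye.main` (from `src/hawk_eye/main.py` in the tree above), but the entry point references `hawk_eye:main`, i.e. a `main` attribute of `hawk_eye/__init__.py`. A hedged sketch of just the relevant `setup()` arguments — this assumes `src/hawk_eye/main.py` defines `main()`, and the kwargs are kept in a dict so the fix is easy to see:

```python
# Sketch: the console script must point at the module setuptools installs.
# With package_dir={'': 'src'}, src/hawk_eye/main.py becomes importable as
# hawk_eye.main, so the entry point is 'hawk_eye.main:main', not 'hawk_eye:main'.
from setuptools import find_packages

SETUP_KWARGS = dict(
    name='hawkeye',
    version='0.1.0',
    packages=find_packages('src'),   # finds 'hawk_eye' and its subpackages
    package_dir={'': 'src'},         # root packages live under src/
    entry_points={
        'console_scripts': [
            'hawk_eye=hawk_eye.main:main',  # module path, then :function
        ],
    },
)

# In a real setup.py you would end with:
#   from setuptools import setup
#   setup(**SETUP_KWARGS)
```

Note also that `src/hawk_eye/internals/` has no `__init__.py` in the tree shown, so `find_packages` would skip it; adding one makes `hawk_eye.internals.system` importable too.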
77,073,915
2,620,838
How to run alembic migrations inside of an AWS Lambda?
<p>I created a CircleCI pipeline that builds AWS infrastructure using Terraform.</p> <p>The problem now is how to run the Alembic migrations.</p> <p>So, how do I run Alembic migrations inside an AWS Lambda?</p>
<python><amazon-web-services><aws-lambda><alembic>
2023-09-09 20:18:09
1
1,003
Claudio Shigueo Watanabe
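One common pattern is to bundle Alembic and the migration scripts into the Lambda deployment package and invoke Alembic's Python API (`alembic.command`) from the handler, rather than shelling out to the `alembic` CLI, which isn't on `PATH` in a Lambda runtime. A rough sketch — the `/var/task/...` paths and the `DATABASE_URL` environment variable are assumptions about how the function is packaged and configured:

```python
# Sketch: a Lambda handler that applies all pending Alembic migrations.
# Assumes alembic.ini and the alembic/ script directory are shipped with
# the deployment package (Lambda unpacks it under /var/task).
import os

def handler(event, context):
    from alembic import command          # alembic must be in the package
    from alembic.config import Config

    cfg = Config("/var/task/alembic.ini")
    cfg.set_main_option("script_location", "/var/task/alembic")
    cfg.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])
    command.upgrade(cfg, "head")         # same effect as `alembic upgrade head`
    return {"status": "migrated"}
```

The pipeline can then invoke this function once per deploy (e.g. with `aws lambda invoke` from CircleCI) so migrations run from inside the VPC where the database lives.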
77,073,786
887,074
Python Yaml parse inf as float
<p>In PyYaml or ruamel.yaml I'm wondering if there is a way to handle parsing of specific strings. Specifically, I'd like to be able to parse <code>&quot;[inf, nan]&quot;</code> as <code>[float('inf'), float('nan')]</code>. I'll also note that I would like <code>&quot;['inf', 'nan']&quot;</code> to continue to parse as <code>['inf', 'nan']</code>, so it's just the unquoted variant whose current behavior I'd like to intercept and change.</p> <p>I'm aware that currently I could use <code>&quot;[.inf, .nan]&quot;</code> or <code>&quot;[!!float inf, !!float nan]&quot;</code>, but I'm curious if I could extend the Loader to allow for the syntax that I expected would have worked (but doesn't).</p> <p>Perhaps I'm just making a footgun by allowing &quot;nan&quot; and &quot;inf&quot; to be parsed as floats rather than strings - and I'm interested in hearing compelling reasons that I should <em>not</em> allow for this type of parsing. But I'm not too worried about the case where other parsers would parse my configs incorrectly (but maybe I'm underestimating the pain that will cause in the future). I plan to use this as a one-way convenience in parsing arguments on the command line, and I don't expect actual config files to be written like this.</p> <p>In any case I'd still be interested in how it could be done, even if the conclusion is that it shouldn't be done.</p>
<python><pyyaml><ruamel.yaml>
2023-09-09 19:38:23
1
5,318
Erotemic
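With PyYAML this can be done by registering an extra implicit resolver on a `SafeLoader` subclass: plain (unquoted) scalars matching bare `inf`/`nan` get the float tag, while quoted scalars are untouched, because implicit resolvers only apply to plain-style scalars. PyYAML's float constructor then falls back to `float("inf")`/`float("nan")`, which Python accepts. A sketch:

```python
import re
import yaml

class BareInfNanLoader(yaml.SafeLoader):
    """SafeLoader that also resolves bare inf/nan (no leading dot) as floats."""

# Plain scalars matching this pattern are tagged as floats; the final
# argument lists the possible first characters for resolver dispatch.
BareInfNanLoader.add_implicit_resolver(
    "tag:yaml.org,2002:float",
    re.compile(r"^[-+]?(?:inf|nan)$", re.IGNORECASE),
    list("-+iInN"),
)

data = yaml.load("[inf, nan, 'inf', -inf]", Loader=BareInfNanLoader)
```

The quoted `'inf'` stays a string, so both requirements from the question hold. ruamel.yaml's resolver API is similar in spirit, though the registration details differ.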
77,073,147
1,852,526
Parse XML with namespace attribute changing in Python
<p>I am making a request to a URL and in the xml response I get, the xmlns attribute namespace changes from time to time. Hence finding an element returns None when I hardcode the namespace.</p> <p>For instance I get the following XML:</p> <pre><code>&lt;package xmlns=&quot;http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd&quot;&gt; &lt;metadata&gt; &lt;id&gt;SharpZipLib&lt;/id&gt; &lt;version&gt;1.1.0&lt;/version&gt; &lt;authors&gt;ICSharpCode&lt;/authors&gt; &lt;owners&gt;ICSharpCode&lt;/owners&gt; &lt;requireLicenseAcceptance&gt;false&lt;/requireLicenseAcceptance&gt; &lt;licenseUrl&gt;https://github.com/icsharpcode/SharpZipLib/blob/master/LICENSE.txt&lt;/licenseUrl&gt; &lt;projectUrl&gt;https://github.com/icsharpcode/SharpZipLib&lt;/projectUrl&gt; &lt;description&gt;SharpZipLib (#ziplib, formerly NZipLib) is a compression library for Zip, GZip, BZip2, and Tar written entirely in C# for .NET. It is implemented as an assembly (installable in the GAC), and thus can easily be incorporated into other projects (in any .NET language)&lt;/description&gt; &lt;releaseNotes&gt;Please see https://github.com/icsharpcode/SharpZipLib/wiki/Release-1.1 for more information.&lt;/releaseNotes&gt; &lt;copyright&gt;Copyright © 2000-2018 SharpZipLib Contributors&lt;/copyright&gt; &lt;tags&gt;Compression Library Zip GZip BZip2 LZW Tar&lt;/tags&gt; &lt;repository type=&quot;git&quot; url=&quot;https://github.com/icsharpcode/SharpZipLib&quot; commit=&quot;45347c34a0752f188ae742e9e295a22de6b2c2ed&quot;/&gt; &lt;dependencies&gt; &lt;group targetFramework=&quot;.NETFramework4.5&quot;/&gt; &lt;group targetFramework=&quot;.NETStandard2.0&quot;/&gt; &lt;/dependencies&gt; &lt;/metadata&gt; &lt;/package&gt; </code></pre> <p>Now see the xmlns attribute. The entire attribute is same but sometimes the '2012/06' part keeps changing from time to time for certain responses. I have the following python script. 
See the line <code>ns = {'nuspec': 'http://schemas.microsoft.com/packaging/2013/05/nuspec.xsd'}</code>. I can't hardcode the namespace like that. Are there any alternatives like using regular expressions etc to map the namespace? Only the date part changes i.e. 2013/05 in some responses its 2012/04 etc.</p> <pre><code>def fetch_nuget_spec(self, versioned_package): name = versioned_package.package.name.lower() version = versioned_package.version.lower() url = f'https://api.nuget.org/v3-flatcontainer/{name}/{version}/{name}.nuspec' response = requests.get(url) metadata = ET.fromstring(response.content) ns = {'nuspec': 'http://schemas.microsoft.com/packaging/2013/05/nuspec.xsd'} license = metadata.find('./nuspec:metadata/nuspec:license', ns) if license is None: license_url=metadata.find('./nuspec:metadata/nuspec:licenseUrl', ns) if license_url is None: return { 'license': 'Not Found' } return {'license':license_url.text} else: if len(license.text)==0: print('SHIT') return { 'license': license.text } </code></pre>
<python><xml><elementtree>
2023-09-09 16:43:10
4
1,774
nikhil
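Since the namespace URI always appears in Clark notation on the root tag (`{uri}package`), one option is to read it from the parsed document itself instead of hardcoding it; on Python 3.8+ the `{*}` wildcard in find paths also sidesteps the problem entirely. A sketch with an inline sample document (the URL is a placeholder):

```python
import xml.etree.ElementTree as ET

xml_doc = """<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <licenseUrl>https://example.invalid/LICENSE</licenseUrl>
  </metadata>
</package>"""

root = ET.fromstring(xml_doc)

# Option 1: derive the namespace from the root tag, which is '{uri}package'.
ns_uri = root.tag[1:root.tag.index("}")]
ns = {"nuspec": ns_uri}
license_url = root.find("./nuspec:metadata/nuspec:licenseUrl", ns)

# Option 2 (Python 3.8+): match any namespace with the {*} wildcard.
license_url2 = root.find("./{*}metadata/{*}licenseUrl")
```

Either way, the `2012/06` vs `2013/05` variation stops mattering, because nothing is hardcoded.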
77,072,972
7,077,532
Python: Pandas Dataframe -- Convert String Time Column in mm:ss Format to Total Minutes in Float Format
<p>Let's say I have a pandas DataFrame with a time-related column called &quot;Time&quot;. Inside this column there are strings that represent minutes and seconds. For example, the first row value 125:19 represents 125 minutes and 19 seconds. Its datatype is a string.</p> <p>I want to convert this value to total minutes in a new column &quot;Time_minutes&quot;. So 125:19 should become 125.316666666667, which should be a float datatype.</p> <p>In a similar vein, if the value is 0:00 then the corresponding &quot;Time_minutes&quot; column should show 0 (float datatype).</p> <p>I've done this in SQL using lambdas and index functions. But is there an easier/more straightforward way to do this in Python?</p>
<python><dataframe><time><string-conversion><minute>
2023-09-09 15:53:31
2
5,244
PineNuts0
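A vectorised pandas approach: split each string on the colon, cast both parts to float, and combine them as minutes plus seconds over sixty (illustrated on a small made-up frame):

```python
import pandas as pd

df = pd.DataFrame({"Time": ["125:19", "0:00", "3:30"]})

# expand=True gives one column per split part; both cast cleanly to float.
parts = df["Time"].str.split(":", expand=True).astype(float)
df["Time_minutes"] = parts[0] + parts[1] / 60
```

`"125:19"` becomes `125.31666...` and `"0:00"` becomes `0.0`, with no row-wise `apply` needed.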
77,072,770
10,418,143
Simple Linear Regression Model parameters not getting updated
<p>I have been following one of the PyTorch tutorials on creating a Simple Linear Regression model.</p> <p>I have followed along accordingly; here's the code for reference.</p> <pre><code>class LinearRegression(nn.Module): def __init__(self): super().__init__() self.bias = nn.Parameter(torch.randn(1, requires_grad = True, dtype = torch.float)) self.weights = nn.Parameter(torch.randn(1, requires_grad = True, dtype = torch.float)) def forward(self, x: torch.Tensor) -&gt; torch.Tensor: return self.weights * x + bias </code></pre> <p>I'm using SGD as the optimizer with a learning rate of 0.01, and MAE as the loss function.</p> <p>Whatever parameters I tune, the bias parameter is not changing at all. The weight is changing fine.</p> <p>I have seen a few Stack Overflow threads which talk of cloning the parameters, but it is not working in my case.</p> <p>I have initialized the model with:</p> <pre><code>torch.manual_seed(42) model = LinearRegression() list(model.parameters()) </code></pre> <p>When I print the parameters, they look fine. Where am I going wrong?</p>
<python><machine-learning><pytorch>
2023-09-09 14:55:35
1
352
user10418143
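The likely culprit is in `forward`: it returns `self.weights * x + bias`, and the bare name `bias` resolves to whatever module-level or notebook variable happens to exist, not to the registered `nn.Parameter` — so the parameter never enters the computation graph and never receives a gradient. The scoping issue can be shown without torch (the module-level `bias` below stands in for a stray notebook variable):

```python
# Minimal illustration: inside a method, a bare name is looked up in
# local/global scope, not on the instance.
bias = 100.0  # some global that happens to exist (e.g. in a notebook)

class Model:
    def __init__(self):
        self.weights = 2.0
        self.bias = 1.0

    def buggy_forward(self, x):
        return self.weights * x + bias       # uses the GLOBAL, not the attribute

    def forward(self, x):
        return self.weights * x + self.bias  # fix: reference self.bias

m = Model()
```

With the fix (`self.bias`), the parameter participates in the forward pass, gets a `.grad` after `loss.backward()`, and the optimizer updates it.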
77,072,672
13,793,478
python data formatting in Django
<p>I get this flow of data from the database, but I couldn't figure out how to format it. This is how I want it:</p> <pre><code>This is how it is. [{'date': datetime.date(2023, 5, 27), 'pnl': '69'}, {'date': datetime.date(2023, 7, 23), 'pnl': '81'}, {'date': datetime.date(2023, 9, 12), 'pnl': '67'} ] </code></pre> <pre><code>This is how I want it [[69, 27, 5, 2023],[81, 23, 7, 2023],[67, 12, 9, 2023]] [0] = pnl, [1] = day, [2] = month, [3] = year </code></pre>
<python><django><django-models><django-views>
2023-09-09 14:27:39
1
514
Mt Khalifa
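A list comprehension over the rows does this directly — `datetime.date` exposes `.day`, `.month`, and `.year`, and the string `pnl` can be cast to `int`:

```python
import datetime

rows = [
    {"date": datetime.date(2023, 5, 27), "pnl": "69"},
    {"date": datetime.date(2023, 7, 23), "pnl": "81"},
    {"date": datetime.date(2023, 9, 12), "pnl": "67"},
]

# [pnl, day, month, year] per row, matching the desired layout.
formatted = [
    [int(r["pnl"]), r["date"].day, r["date"].month, r["date"].year]
    for r in rows
]
```

In a Django view, `rows` would typically be `SomeModel.objects.values("date", "pnl")`, which yields dicts of exactly this shape.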
77,072,610
19,238,204
How to Implement Brachistochrone with Manim SciPy/SymPy/Numpy and Python
<p>I am just learning <code>manim</code> from the tutorial. I modified it to add another line, just <code>-x+50</code>; then I was wondering whether the way the balls move is actually correct — can we determine the speed for each curve?</p> <p>This is my code:</p> <pre><code># python3 -m manim manim-argminexample.py ArgMinExample -p -ql from manim import * class ArgMinExample(Scene): def construct(self): ax = Axes( x_range=[0, 10], y_range=[0, 100, 10], axis_config={&quot;include_tip&quot;: False} ) labels = ax.get_axis_labels(x_label=&quot;x&quot;, y_label=&quot;f(x)&quot;) t = ValueTracker(0) def func(x): return 2 * (x - 5) ** 2 graph = ax.plot(func, color=MAROON) def func2(x): return -x+50 graph2 = ax.plot(func2, color=GREEN) initial_point = [ax.coords_to_point(t.get_value(), func(t.get_value()))] dot = Dot(point=initial_point) dot2 = Dot(point=initial_point) dot.add_updater(lambda x: x.move_to(ax.c2p(t.get_value(), func(t.get_value())))) dot2.add_updater(lambda x: x.move_to(ax.c2p(t.get_value(), func2(t.get_value())))) x_space = np.linspace(*ax.x_range[:2],200) minimum_index = func(x_space).argmin() self.add(ax, labels, graph, dot) self.add(ax, labels, graph2, dot2) self.play(t.animate.set_value(x_space[minimum_index])) self.wait() </code></pre> <p>How am I supposed to create an animation of the Brachistochrone with manim, like this:</p>
<python><numpy><sympy><manim>
2023-09-09 14:12:37
0
435
Freya the Goddess
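On the maths side: the brachistochrone is a cycloid arc, x = a(θ − sin θ), y = a(1 − cos θ) (with y measured downward from the release point), and by energy conservation the bead's speed at depth y is v = √(2gy). Sampling the curve with NumPy gives points that could then be handed to manim (e.g. via `ParametricFunction` or an axes plot — the manim wiring is left to the reader). A sketch of just the sampling and the physically correct speed:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cycloid_points(a, theta_max=np.pi, n=200):
    """Sample the brachistochrone cycloid x = a(t - sin t), y = a(1 - cos t).

    y is the depth below the release point; theta_max = pi reaches the
    lowest point of the arc.
    """
    t = np.linspace(0.0, theta_max, n)
    x = a * (t - np.sin(t))
    y = a * (1.0 - np.cos(t))
    return x, y

def speed_at_depth(y):
    """Speed of a frictionless bead after falling a depth y."""
    return np.sqrt(2.0 * G * y)

x, y = cycloid_points(a=1.0)
```

For the animation, the correct comparison between the curves is to advance each dot at `speed_at_depth(...)` along its own arc length, rather than sweeping a shared `ValueTracker` over x, which is what makes the tutorial's dots move in lockstep.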
77,072,374
3,387,716
Classify input numbers into fixed ranges, several million of times
<p>I have a few ranges (that can overlap) as parameter; for example:</p> <pre class="lang-py prettyprint-override"><code># tuple[0] &lt;= n &lt; tuple[1] ranges = [(70, 80), (80, 120), (120, 130), (120, 2000), (1990, 2000), (2000, 2040), (2040, 2050)] </code></pre> <p>And I have a list of tuples as input, where the second element of each tuple is the number that determines the range(s) the tuple belongs.</p> <pre class="lang-py prettyprint-override"><code>tuples = [('a', 71), ('b', 79), ('c', 82), ('d', 121), ('e', 1991), ('f', 2010), ('g', 2045), ('h', 3000)] </code></pre> <p>I need to determine the members of each range and store the results in the following manner:</p> <pre class="lang-py prettyprint-override"><code># ranges[i] =&gt; members[i] members = [{71: 'a', 79: 'b'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e'}, {1991: 'e'}, {2010: 'f'}, {2045: 'g'}] </code></pre> <p>My approach is to create a <code>dict</code> that associates a number to the range(s) it belongs:</p> <pre class="lang-py prettyprint-override"><code>members = [{} for _ in ranges] number_to_members = defaultdict(list) for i,(x0,x1) in enumerate(ranges): for x in range(x0,x1): number_to_members[x].append(members[i]) for c,n in tuples: if n in number_to_members: for m in number_to_members[n]: m[n] = c </code></pre> <p>Now the problem is that I need to classify tens of millions of different lists of tuples using the same ranges; the current implementation would require to generate <code>number_to_members</code> for each list of tuples, which is a lot of overhead. 
How do you work around that problem?</p> <hr /> <h4>UPDATE</h4> <p>Adding an example of the actual problem:</p> <pre class="lang-py prettyprint-override"><code>from collections import defaultdict ranges = [(70, 80), (80, 120), (120, 130), (120, 2000), (1990, 2000), (2000, 2040), (2040, 2050)] inputs = [ [('a', 71), ('b', 79), ('c', 82), ('d', 121), ('e', 1991), ('f', 2010), ('g', 2045), ('h', 3000)], [('x', 75), ('y', 78), ('z', 1995)] ] members = [{} for _ in ranges] number_to_members = defaultdict(list) for i,(x0,x1) in enumerate(ranges): for x in range(x0,x1): number_to_members[x].append(members[i]) for tuples in inputs: for c,n in tuples: if n in number_to_members: for m in number_to_members[n]: m[n] = c print(members) </code></pre> <p>output:</p> <pre class="lang-py prettyprint-override"><code>[{71: 'a', 79: 'b'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e'}, {1991: 'e'}, {2010: 'f'}, {2045: 'g'}] [{71: 'a', 79: 'b', 75: 'x', 78: 'y'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e', 1995: 'z'}, {1991: 'e', 1995: 'z'}, {2010: 'f'}, {2045: 'g'}] </code></pre>
<python>
2023-09-09 13:01:33
1
17,608
Fravadona
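Since the ranges are fixed, the lookup table only needs to be built once — the trick is to store range *indices* in it rather than references to one particular `members` list, then create a fresh `members` per input:

```python
from collections import defaultdict

ranges = [(70, 80), (80, 120), (120, 130), (120, 2000),
          (1990, 2000), (2000, 2040), (2040, 2050)]

# Built ONCE, up front: number -> list of range indices.
number_to_range_ids = defaultdict(list)
for i, (x0, x1) in enumerate(ranges):
    for x in range(x0, x1):
        number_to_range_ids[x].append(i)

def classify(tuples):
    """Fresh result per input list; reuses the precomputed index."""
    members = [{} for _ in ranges]
    for c, n in tuples:
        for i in number_to_range_ids.get(n, ()):
            members[i][n] = c
    return members
```

Each of the tens of millions of input lists then costs only the dict lookups, with no per-list setup. (If the numbers can be large or sparse, an interval tree or `bisect` over sorted boundaries avoids materialising every integer in every range.)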
77,072,040
10,305,444
font.getGlyphID gives KeyError on unicode
<p>I'm trying to render SVG out of an unicode using Python's fonttools:</p> <pre><code>from fontTools.ttLib import TTFont from fontTools.pens.svgPathPen import SVGPathPen def unicode_to_svg(unicode_char, font_path, output_svg_path): # Load the TTF font using fonttools font = TTFont(font_path) # Get the glyph index for the Unicode character glyph_index = font.getGlyphID(unicode_char) # Convert the glyph to an SVG path glyph_set = font.getGlyphSet() glyph_name = font.getGlyphName(glyph_index) glyph = glyph_set[glyph_name] pen = SVGPathPen(glyph_set) glyph.draw(pen) svg_path_data = pen.getCommands() # Create the SVG content svg_content = f''' &lt;svg width=&quot;3000&quot; height=&quot;3000&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt; &lt;path d=&quot;{svg_path_data}&quot; fill=&quot;red&quot; /&gt; &lt;/svg&gt; ''' # Save the SVG content to a file with open(output_svg_path, 'w') as f: f.write(svg_content) # Example usage font_path = './Nikosh.ttf' # unicode_char = 'A' unicode_char = 'ম' output_svg_path = 'output.svg' unicode_to_svg(unicode_char, font_path, output_svg_path) </code></pre> <p>But the issue is, it works fine for English alphabets. But when I try to render unicode (such as Bengali) it gives me:</p> <pre><code>KeyError: 'ম' </code></pre> <p>On this line: <code>glyph_index = font.getGlyphID(unicode_char)</code></p> <p>I simply wanted to take a deeper dive into SVG rendering of different languages, fonts, texts, etc.</p> <p>How can I resolve this issue? And make it SVG render-able for any unicode character?</p>
<python><svg><unicode><graphics><truetype>
2023-09-09 11:42:31
2
4,689
Maifee Ul Asad
77,072,034
19,318,120
Django query by parent id without join
<p>Let's say I have two models, Parent and Child, related by a foreign key.</p> <p>I want to query all children without making a join with the parent table.</p> <p>Will</p> <pre><code>Child.objects.filter( parent_id=parent_id ) </code></pre> <p>do the trick? Notice I'm using a single underscore instead of a double one, so in theory it should only look through the parent_id column in the child table instead of making a join, right?</p>
<python><django>
2023-09-09 11:40:32
1
484
mohamed naser
77,071,698
2,602,550
A postgresql db does not exist after its successful creation
<p>The following code fails, and I have run out of ideas. How come I get an exception stating that the DB does not exist, when it gets successfully created?</p> <pre class="lang-py prettyprint-override"><code> db_url = f&quot;postgresql://{priv_user}:{priv_password}@{db_host}:{db_port}/grocerylists&quot; engine = create_engine(db_url, echo=True, isolation_level='AUTOCOMMIT') with engine.connect() as con: con.execute(text(f&quot;CREATE DATABASE {db_name};&quot;)) con.execute(text(f&quot;GRANT ALL PRIVILEGES ON DATABASE {db_name} TO {db_user};&quot;)) engine.dispose() db_url = f&quot;postgresql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}&quot; engine = create_engine(db_url, echo=True) try: with engine.connect() as con: t = con.execute(text(&quot;SELECT * FROM information_schema.tables;&quot;)) except Exception as _exc: print(_exc) </code></pre>
<python><postgresql><sqlalchemy>
2023-09-09 09:51:08
1
355
chronos
77,071,658
5,303,845
Error in getting specific node shape, edge color and edge weight
<p>I want to plot a directed graph for a group of genes. Let's say some genes are oncogenes, and some are driver genes. Further, the gene-gene interactions are weighted and represented using specific shapes and colours. I am using the following code to draw the directed graph:</p> <pre><code> import networkx as nx import matplotlib.pyplot as plt import numpy as np # Your weighted adjacency matrix (replace this with your actual matrix) adjacency_matrix = np.array([ [0, 0.2, 0, 0, 0.4], [0.0, 0, 0, 0, 0.1], [0.1, 0, 0, 0.1, 0], [0, 0, 0.3, 0, 0], [0.0, 0.0, 0, 0, 0] ]) # Your Katz centrality scores (replace this with your actual scores) katz_centrality_scores = [0.95, 0.03, 0.65, 0.12, 0.06] # Relative scaling to range [1,10] katz_centrality_scores = [9*(i-min(katz_centrality_scores))/(max(katz_centrality_scores) - min(katz_centrality_scores)) + 1 for i in katz_centrality_scores] # Gene labels (replace with your gene labels) gene_labels = [&quot;Gene1&quot;, &quot;Gene2&quot;, &quot;Gene3&quot;, &quot;Gene4&quot;, &quot;Gene5&quot;] # Gene types (1 for oncogene, 2 for driver gene) gene_types = [1, 2, 1, 2, 2] # Create a graph G = nx.DiGraph() # Add nodes with attributes for i in range(len(gene_labels)): node_color = 'red' if gene_types[i] == 1 else 'green' node_shape = 'v' if gene_types[i] == 1 else 's' node_size = katz_centrality_scores[i]*80 # Adjust the scaling factor as needed G.add_node(gene_labels[i], color=node_color, shape=node_shape, size=node_size) node_colors = [v['color'] for v in dict(G.nodes(data=True)).values()] # Add edges from the adjacency matrix for i in range(len(gene_labels)): for j in range(len(gene_labels)): if adjacency_matrix[i][j] &gt; 0: G.add_edge(gene_labels[i], gene_labels[j], weight=katz_centrality_scores[i], color=node_colors[i]) # Extract node attributes node_colors = [G.nodes[n]['color'] for n in G.nodes()] node_shapes = [G.nodes[n]['shape'] for n in G.nodes()] node_sizes = [G.nodes[n]['size'] for n in G.nodes()] edge_colors = 
[G[u][v]['color'] for u, v in G.edges()] # Extract edge weights edge_weights = [G[u][v]['weight'] for u, v in G.edges()] # Draw the graph pos = nx.spring_layout(G, seed=42) # You can use other layout algorithms curved_edges = [edge for edge in G.edges() if reversed(edge) in G.edges()] straight_edges = [edge for edge in G.edges() if not reversed(edge) in G.edges()] nx.draw(G, pos, node_color=node_colors, node_size=node_sizes, edge_color=edge_colors, # node_shape=node_shapes, width=edge_weights, with_labels=True, edgelist=straight_edges, arrowsize=25, arrowstyle='-&gt;') nx.draw(G, pos, node_color=node_colors, node_size=node_sizes, edge_color=edge_colors, # node_shape=node_shapes, width=edge_weights, with_labels=True, edgelist=curved_edges, connectionstyle='arc3, rad = 0.25', arrowsize=25, arrowstyle='-&gt;') # Create a legend red_patch = plt.Line2D([0], [0], marker='v', color='red', label='Oncogene', markersize=10, linestyle='None') green_patch = plt.Line2D([0], [0], marker='s', color='green', label='Driver Gene', markersize=10, linestyle='None') plt.legend(handles=[red_patch, green_patch], loc='upper right') # Show the plot plt.title('Gene Network') plt.axis('off') # Turn off axis labels and ticks plt.show() </code></pre> <p>After running the above code, the graph edge attribute is as follows:</p> <pre><code> list(G.edges(data=True)) [('Gene1', 'Gene2', {'weight': 10.0, 'color': 'red'}), ('Gene1', 'Gene5', {'weight': 10.0, 'color': 'red'}), ('Gene2', 'Gene5', {'weight': 1.0, 'color': 'green'}), ('Gene3', 'Gene1', {'weight': 7.065217391304349, 'color': 'red'}), ('Gene3', 'Gene4', {'weight': 7.065217391304349, 'color': 'red'}), ('Gene4', 'Gene3', {'weight': 1.8804347826086958, 'color': 'green'})] </code></pre> <p>`</p> <p>The above code is generating the following graph (which is not correct):</p> <p><a href="https://i.sstatic.net/vT1O1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vT1O1.png" alt="Output directed graph" /></a></p> <p>Please note that 
the image does not conform to the node and edge criteria mentioned in the code. For example,</p> <ol> <li><p>The edge colour and weight for the node pairs (Gene3, Gene4) and (Gene4, Gene3) are (red, 7.06) and (green, 1.88). But in the generated graph, the edge between (Gene4, Gene3) is shown as red (not green), and the edge thickness is the same as (Gene3, Gene4) (which is wrong).</p> </li> <li><p>When I uncomment the <code>node_shape</code> argument in <code>nx.draw</code>, I get the following error: <code>ValueError: Unrecognized marker style ['v', 's', 'v', 's', 's']</code>. I am not able to figure out how to give the node shape for each gene category (e.g., triangle for oncogene and square for driver gene).</p> </li> </ol> <p>Can anyone suggest what I am missing in the above code?</p> <p>Thanks.</p>
<python><graph><networkx><directed-graph>
2023-09-09 09:37:26
1
622
Lot_to_learn
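Two things appear to be going on in the code above: (1) `edge_colors` and `edge_weights` are built over all of `G.edges()`, but each `nx.draw` call is given only a subset `edgelist` (curved or straight), so the colour/width lists are applied to the wrong edges — each draw call needs lists built from its own edgelist; (2) `node_shape` accepts a single matplotlib marker string, not a list, so nodes must be drawn in one pass per shape with `nx.draw_networkx_nodes`. A reduced sketch on a toy graph (headless backend, attribute values invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
G.add_node("Gene1", color="red", shape="v", size=300)
G.add_node("Gene2", color="green", shape="s", size=150)
G.add_node("Gene3", color="red", shape="v", size=250)
G.add_edge("Gene1", "Gene2", color="red", weight=2.0)
G.add_edge("Gene3", "Gene1", color="green", weight=1.0)

pos = nx.spring_layout(G, seed=42)

# One draw call per marker shape: node_shape takes a single marker string.
for shape in {d["shape"] for _, d in G.nodes(data=True)}:
    nodelist = [n for n, d in G.nodes(data=True) if d["shape"] == shape]
    nx.draw_networkx_nodes(
        G, pos, nodelist=nodelist, node_shape=shape,
        node_color=[G.nodes[n]["color"] for n in nodelist],
        node_size=[G.nodes[n]["size"] for n in nodelist],
    )

# Build colour/width lists FROM the edgelist being drawn, so they align.
edgelist = list(G.edges())
nx.draw_networkx_edges(
    G, pos, edgelist=edgelist,
    edge_color=[G[u][v]["color"] for u, v in edgelist],
    width=[G[u][v]["weight"] for u, v in edgelist],
)
nx.draw_networkx_labels(G, pos)
```

In the original code, the same per-edgelist list construction would be repeated for `curved_edges` and `straight_edges` separately, which fixes issue (1).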
77,071,549
2,469,032
What is the difference between relativedelta() and DateOffset()
<p>I am having trouble understanding the differences between the relativedelta() function from the dateutil module and the DateOffset() function from Pandas. To me their syntax and outcomes are the same. For example:</p> <pre><code>import pandas as pd from dateutil.relativedelta import relativedelta current_date = pd.to_datetime('2023-09-10') print(current_date + relativedelta(months=1)) print(current_date + pd.DateOffset(months=1)) </code></pre> <p>Results:</p> <p>2023-10-10 00:00:00</p> <p>2023-10-10 00:00:00</p>
<python><pandas><datetime><timedelta><python-dateutil>
2023-09-09 09:09:50
2
1,037
PingPong
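For scalar month/year arithmetic the two are effectively interchangeable, which is why the outputs match. The practical differences lie elsewhere: `pd.DateOffset` is pandas-native, so it broadcasts over a whole `DatetimeIndex`/`Series` in one vectorized operation and plugs into pandas' frequency/offset machinery, while `relativedelta` is a plain `dateutil` object with its own extras (e.g. absolute `year=`/`month=` replacement arguments alongside the relative `years=`/`months=` ones). A small illustration of the vectorized side:

```python
import pandas as pd

idx = pd.to_datetime(["2023-01-31", "2023-02-28"])

# DateOffset applies to every element of the DatetimeIndex at once;
# month-end overflow is clamped (Jan 31 + 1 month -> Feb 28).
shifted = idx + pd.DateOffset(months=1)
```

So within pandas code `DateOffset` is usually the more natural choice; `relativedelta` makes sense in pure-`datetime` code or when its replacement semantics are needed.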
77,071,544
1,668,622
Is it possible to let a function's generic return type be based on the last of variadic positional arguments?
<p>I want to implement a function which can be called with one or more callables as arguments and let the return type be the same as the the return type of the last provided callable.</p> <p>So (without the type hinting) I want to be able to call e.g. <code>foo()</code> the following ways:</p> <pre class="lang-py prettyprint-override"><code>foo(lambda: &quot;hello&quot;) # -&gt; str foo(lambda: &quot;hello&quot;, str.upper) # -&gt; str foo(lambda: &quot;hello&quot;, len) # -&gt; int </code></pre> <p>which could be implemented like this:</p> <pre class="lang-py prettyprint-override"><code>def foo(first_fn, *more_fns): result = first_fn() for fn in more_fns: result = fn(result) return result </code></pre> <p>...if you were not interested in type hinting.</p> <p>But trying to make <code>foo()</code> type aware in a basic way I'd like to write something like:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Callable from typing import TypeVar, Any T = TypeVar(&quot;T&quot;) def foo(*fns: Callable[..., Any], fnN: Callable[..., T]) -&gt; T: ... 
</code></pre> <p>which unfortunately can't be called the same way anymore because <code>*fns</code> makes <code>fnN</code> (coming after <code>*</code>) a keyword-only argument.</p> <p>I could go <a href="https://stackoverflow.com/a/7113162/1668622">this</a> approach and provide a bunch of explicit signatures like this one:</p> <pre class="lang-py prettyprint-override"><code>def foo( fn1: Callable[..., Any], fn2: Callable[..., Any], fnN: Callable[..., T], ) -&gt; T: return fnN(fn2(fn1())) print(f&quot;{foo(lambda: 1, lambda x: 2*x, lambda y: str(y))!r}&quot;) </code></pre> <p>...but that doesn't feel good (as it would limit the number of possible arguments).</p> <p>Another approach would be to reverse the call logic and have a <code>foo(fn_N, ..., fn_1)</code> semantic, but this would be sacrificing an intuitive signature for type hinting.</p> <p>Maybe I'm only missing a syntactic way to write a function with <em>variadic positional only</em> arguments with the last argument named explicitly (to provide explicit type hinting) while keeping it positional.</p> <p>Any ideas?</p>
<python><mypy><python-typing>
2023-09-09 09:08:32
1
9,958
frans
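One workable compromise is the approach the question already hints at — a small stack of `@overload` signatures for the common arities, backed by the single variadic runtime implementation, plus a loose catch-all overload so longer chains still type-check (as `Any`). This is a sketch, not a fully general solution: current `TypeVarTuple` semantics can't express the dependent "output of each callable feeds the next" chaining:

```python
from collections.abc import Callable
from typing import Any, TypeVar, overload

T = TypeVar("T")

@overload
def foo(fn1: Callable[[], T], /) -> T: ...
@overload
def foo(fn1: Callable[[], Any], fn2: Callable[..., T], /) -> T: ...
@overload
def foo(fn1: Callable[[], Any], fn2: Callable[..., Any],
        fn3: Callable[..., T], /) -> T: ...
@overload
def foo(*fns: Callable[..., Any]) -> Any: ...  # catch-all for longer chains

def foo(first_fn: Callable[..., Any], /, *more_fns: Callable[..., Any]) -> Any:
    result = first_fn()
    for fn in more_fns:
        result = fn(result)
    return result
```

The positional-only `/` in the overloads keeps all call forms positional, as in the original API; only calls with more callables than the explicit overloads lose precision.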
77,071,473
11,501,976
Where can I import "DataclassInstance" for mypy check?
<p>I have been using a custom-defined <a href="https://stackoverflow.com/a/55240861/11501976">DataclassProtocol</a> to annotate the argument of a function which takes a dataclass type. It was something like this:</p> <pre class="lang-py prettyprint-override"><code>import dataclasses from typing import Dict, Protocol, Type class DataclassProtocol(Protocol): &quot;&quot;&quot;Type annotation for dataclass type object.&quot;&quot;&quot; # https://stackoverflow.com/a/55240861/11501976 __dataclass_fields__: Dict def f(dcls: Type[DataclassProtocol]): return dataclasses.fields(dcls) </code></pre> <p>But a recent mypy check fails with the message: <code>error: Argument 1 to &quot;fields&quot; has incompatible type &quot;type[DataclassProtocol]&quot;; expected &quot;DataclassInstance | type[DataclassInstance]&quot; [arg-type]</code></p> <p>It seems I should now annotate with this <code>DataclassInstance</code>, but I can't find out where to import it from. Where can I find it?</p>
<python>
2023-09-09 08:43:51
1
378
JS S
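`DataclassInstance` lives in typeshed's `_typeshed` stub module, which exists only for type checkers — there is no runtime module to import. The standard pattern is to import it under `TYPE_CHECKING` and keep the annotation unevaluated at runtime (here via a string annotation):

```python
import dataclasses
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # _typeshed is stub-only: type checkers see it, the interpreter never will.
    from _typeshed import DataclassInstance

def f(dcls: "type[DataclassInstance]"):
    return dataclasses.fields(dcls)

@dataclasses.dataclass
class Point:
    x: int
    y: int
```

`from __future__ import annotations` at the top of the module achieves the same effect without quoting each annotation.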
77,071,445
19,130,803
Convert plotly express graph into json
<p>I am using <code>plotly express</code> to create different graphs. I am trying to convert <code>graphs</code> into <code>json</code> format to save in <code>json</code> file. While doing so I am getting error using different ways as below:</p> <pre><code>Way-1 code gives error as below Error-2 Object of type ndarray is not JSON serializable Way-2 code gives error as below Error-2 Object of type Figure is not JSON serializable </code></pre> <p>Below is MWE:</p> <pre><code>import json import dash_bootstrap_components as dbc from dash import dcc from dash_bootstrap_templates import load_figure_template import plotly.express as px import plotly.io as pio import pandas as pd def generate_pie_charts(df, template) -&gt; list[dict[str, Any]]: pie_charts = list() for field in df.columns.tolist(): value_count_df = df[field].value_counts().reset_index() cols = value_count_df.columns.tolist() name: str = cols[0] value: str = cols[1] try: # Way-1 # figure = px.pie( # data_frame=value_count_df, # values=value, # names=name, # title=f&quot;Pie chart of {field}&quot;, # template=template, # ).to_plotly_json() # pie_chart = dcc.Graph(figure=figure).to_plotly_json() # Way-2 figure = px.pie( data_frame=value_count_df, values=value, names=name, title=f&quot;Pie chart of {field}&quot;, template=template, ) figure = pio.to_json(figure) # figure = pio.to_json(figure).encode() pie_chart = dcc.Graph(figure=pio.from_json(figure)).to_plotly_json() # pie_chart = dcc.Graph(figure=pio.from_json(figure.decode())).to_plotly_json() pie_charts.append(pie_chart) except Exception as e: print(f&quot;Error-1 {e}&quot;) print(f&quot;Length {len(pie_charts)}&quot;) return pie_charts def perform_exploratory_data_analysis(): rows = list() template = &quot;darkly&quot; load_figure_template(template) info = { &quot;A&quot;: [ &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;a&quot;, 
&quot;b&quot;, &quot;b&quot;, &quot;c&quot;, ], &quot;B&quot;: [ &quot;c&quot;, &quot;c&quot;, &quot;c&quot;, &quot;c&quot;, &quot;c&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;c&quot;, ], } df = pd.DataFrame(info) try: row = dbc.Badge( &quot;For Pie Charts&quot;, color=&quot;info&quot;, className=&quot;ms-1&quot; ).to_plotly_json() rows.append(row) row = generate_pie_charts(df, template) rows.append(row) data = {&quot;contents&quot;: rows} status = False msg = &quot;Error creating EDA graphs.&quot; file = &quot;eda.json&quot; with open(file, &quot;w&quot;) as json_file: json.dump(data, json_file) msg = &quot;EDA graphs created.&quot; status = True except Exception as e: print(f&quot;Error-2 {e}&quot;) result = (status, msg) return result perform_exploratory_data_analysis() </code></pre> <p>What I am missing?</p>
<python><pandas><plotly-dash><plotly-express>
2023-09-09 08:35:24
1
962
winter
77,071,419
4,393,334
LLM Agent Executor gives InvalidRequestError "Unrecognized request argument supplied: functions"
<p>I get an error &quot;InvalidRequestError: Unrecognized request argument supplied: functions&quot; for the following code I tried to copy-paste from <a href="https://python.langchain.com/docs/modules/agents/" rel="nofollow noreferrer">https://python.langchain.com/docs/modules/agents/</a></p> <p>The error happens after &quot;agent_executor.run()&quot; line of code.</p> <p>I used &quot;gpt-35-turbo&quot; model.</p> <pre><code>from langchain.agents import tool from langchain.agents import OpenAIFunctionsAgent from langchain.agents import AgentExecutor from langchain.chat_models import ChatOpenAI from langchain.schema import SystemMessage # Create LLM llm = ChatOpenAI( model_kwargs={&quot;engine&quot;: deployment_name}, temperature=0.2) @tool def get_word_length(word: str) -&gt; int: &quot;&quot;&quot;Returns the length of a word.&quot;&quot;&quot; return len(word) tools = [get_word_length] # Prompt system_message = SystemMessage(content=&quot;You are very powerful assistant, but bad at calculating lengths of words.&quot;) prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message) # Create Agent agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt) # Create Agent Executor agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) # Run it agent_executor.run(&quot;how many letters in the word educa?&quot;) ============ &gt; Entering new AgentExecutor chain... ------ InvalidRequestError: Unrecognized request argument supplied: functions Traceback (most recent call last) Cell In[29], line 51 48 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) 50 # Run it ---&gt; 51 agent_executor.run(&quot;how many letters in the word educa?&quot;) </code></pre> <p>Please, help me understand how to fix it</p> <p>Thanks</p>
<python><langchain><large-language-model>
2023-09-09 08:26:30
1
3,113
Andrii
77,071,384
893,254
Python3 and pip3 venv: Is there a global environment? Where is it?
<h1>Is there one or more System-wide or User-Account-wide Global Python3 Virtual Environment?</h1> <ul> <li>Is it globally available across the whole Operating System?</li> <li>Is it visible only to each individual User Account? (This may depend on <em>where</em> on the host OS it lives)</li> </ul> <p>I am trying to understand some basic concepts behind <code>python3</code> / <code>pip3</code> <code>venv</code>s.</p> <p>Is there a concept of a &quot;system-wide&quot; or &quot;user-account-wide&quot; &quot;global&quot; environment? (venv)</p> <p>If so, where is that stored?</p> <p>This question will depend on the type of Operating System. It might be useful to have information about the 3 most common ones (Windows, Mac OS X, Linux) here; personally, I am using Debian 12 Linux.</p> <hr /> <h4>Additional Information (Not part of Primary Question)</h4> <p>This question was prompted by the fact that Visual Studio Code shows multiple environments when connected to my remote development host.</p> <p>As can be seen in the screenshot below, with the <em>Python Environment Manager Extension</em> installed, some environments are shown. This includes 2x Workspace environments and another environment called &quot;default_venv&quot;.</p> <ul> <li>I created &quot;default_venv&quot; in a folder called &quot;python3_venv_environments&quot; which lives inside my home directory <code>~</code></li> <li>I created a venv with the folder name <code>.venv</code> which lives inside the folder which is currently open with VS Code</li> <li>VS Code also shows two Global Environments. I do not know what these are, where they are, or why there are two. &quot;Global&quot; and &quot;Venv&quot;</li> </ul> <p><a href="https://i.sstatic.net/ynN5T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ynN5T.png" alt="Python Environment Manager Extension VS Code" /></a></p>
<python><visual-studio-code><python-venv>
2023-09-09 08:13:51
1
18,579
user2138149
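A minimal stdlib sketch of what the question above is probing: a venv is just a directory tree created at whatever path you choose — there is no system-wide registry of "global" venvs — and a running interpreter can tell whether it is inside one. The target path below is an arbitrary temp location chosen for the demo.

```python
import os
import sys
import tempfile
import venv

# A venv is only a directory tree created wherever you point it --
# there is no system-wide registry of "global" venvs.
target = os.path.join(tempfile.mkdtemp(), "demo_venv")
venv.create(target, with_pip=False)

# The marker file is pyvenv.cfg; its "home" key points back at the
# base interpreter the venv was created from.
cfg = os.path.join(target, "pyvenv.cfg")
with open(cfg) as f:
    config = f.read()

# Inside a running interpreter: sys.prefix differs from
# sys.base_prefix exactly when the process runs inside a venv.
in_venv = sys.prefix != sys.base_prefix
print(config.splitlines()[0])
```

The "Global" entries VS Code lists are usually just interpreters it found on PATH (e.g. `/usr/bin/python3`), not venvs; venvs are discovered by scanning well-known folders such as a workspace-local `.venv` or configured venv directories.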
77,071,361
5,924,264
Is there a functional difference between using parameterized query vs string interpolation?
<p>I'm working with a py2 codebase that performs the following query:</p> <pre><code>import datetime # conn is a connector pointing to the sql database cursor = conn.cursor() cursor.execute('select * from table_name where end &gt;= :start and end &lt;= :end', { 'start' : datetime.datetime.combine(date, datetime.time(0)), # date is a datetime.date object 'end' : datetime.datetime.combine(date, datetime.time(23,59,59)) }) </code></pre> <p>Note the <code>end</code> column has integer format <code>YYYYMMDDHHMMSS</code>. This snippet uses parameterized querying and returns the correct rows. I was surprised that this works because I don't know how sql compares <code>YYYYMMDDHHMMSS</code> to a python datetime object. I assume implicit conversions occur on the sql side.</p> <p>I tried to change the query to use string interpolation:</p> <pre><code>import datetime # conn is a connector pointing to the sql database cursor = conn.cursor() cursor.execute(&quot;select * from table_name where end &gt;= '%s' and end &lt;= '%s'&quot; % ( datetime.datetime.combine(date, datetime.time(0)), datetime.datetime.combine(date, datetime.time(23,59,59)) )) </code></pre> <p>but this produces no results. Aren't these two queries identical?</p>
<python><sql><datetime>
2023-09-09 08:06:23
1
2,502
roulette01
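The difference can be made concrete with the stdlib `sqlite3` module (only an illustration — the py2 codebase's backend, which accepts `:name` parameters, may be a different engine that performs implicit conversions): a bound parameter is shipped to the driver as a typed value, while interpolation bakes whatever `str()` produced into the SQL as a text literal.

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (end INTEGER)")       # YYYYMMDDHHMMSS as an integer
conn.execute("INSERT INTO t VALUES (20230908120000)")

start = datetime.datetime(2023, 9, 8, 0, 0, 0)

# Binding an integer in the column's own format matches as expected.
start_int = int(start.strftime("%Y%m%d%H%M%S"))    # 20230908000000
rows_ok = conn.execute("SELECT * FROM t WHERE end >= ?", (start_int,)).fetchall()

# Interpolating str(start) splices the TEXT literal '2023-09-08 00:00:00'
# into the SQL; SQLite orders every INTEGER below every TEXT value, so the
# comparison is never true and no rows come back.
query = "SELECT * FROM t WHERE end >= '%s'" % start
rows_bad = conn.execute(query).fetchall()
print(len(rows_ok), len(rows_bad))  # 1 0
```

So the two forms are not identical: with binding, the driver/engine can adapt the Python value to the column's type; with interpolation, the engine sees only the rendered string.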
77,071,244
16,420,204
Polars: Calculate rolling mode over multiple columns
<p>I have a <code>polars.DataFrame</code> like:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.from_repr(&quot;&quot;&quot; ┌──────┬──────┬──────┬──────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 ┆ col5 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 ┆ f32 │ ╞══════╪══════╪══════╪══════╪══════╡ │ 3 ┆ 3 ┆ 3 ┆ null ┆ null │ │ 2 ┆ 4 ┆ 1 ┆ 5 ┆ null │ │ 4 ┆ null ┆ null ┆ null ┆ null │ │ 7 ┆ 1 ┆ null ┆ null ┆ null │ │ 1 ┆ null ┆ null ┆ null ┆ null │ │ 10 ┆ 1 ┆ null ┆ null ┆ null │ │ 7 ┆ 9 ┆ 4 ┆ null ┆ null │ └──────┴──────┴──────┴──────┴──────┘ &quot;&quot;&quot;) </code></pre> <p>I want to create a new column that contains the rolling <code>mode</code> - but not based on one column and the respective row values within the window <strong>but on row values of all columns</strong> within the window. The <code>nulls</code> should be dropped and shouldn't appear in the resulting columns as a mode value.</p> <p>Under the assumption of something like <code>polars.rolling_apply(&lt;function&gt;, window_size=2, min_periods=1, center=False)</code> I would expect the following result:</p> <pre><code>┌──────┐ │ res │ │ --- │ │ i64 │ ╞══════╡ │ 3 │ │ 3 │ │ 4 │ │ None │ # &lt;- all values different │ 1 │ │ 1 │ │ None │ # &lt;- all values different └──────┘ </code></pre> <p>In case there is no mode <code>None</code> as a result would be fine. Only the missing value in the original <code>polars.DataFrame</code> should be ignored.</p>
<python><dataframe><python-polars><rolling-computation>
2023-09-09 07:22:32
1
1,029
OliverHennhoefer
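The windowing rule asked about above can be prototyped in plain Python before translating it into a polars expression (this is not polars API — just the logic: flatten all columns of the rows in the window, drop nulls, and keep a value only if it is the unique most frequent one):

```python
from collections import Counter

rows = [
    [3, 3, 3, None, None],
    [2, 4, 1, 5, None],
    [4, None, None, None, None],
    [7, 1, None, None, None],
    [1, None, None, None, None],
    [10, 1, None, None, None],
    [7, 9, 4, None, None],
]

def rolling_mode(rows, window_size=2):
    out = []
    for i in range(len(rows)):
        # trailing window (center=False), truncated at the top (min_periods=1)
        window = rows[max(0, i - window_size + 1) : i + 1]
        values = [v for row in window for v in row if v is not None]
        if not values:
            out.append(None)
            continue
        counts = Counter(values)
        top = max(counts.values())
        modes = [v for v, c in counts.items() if c == top]
        # a unique most-frequent value is the mode; a tie means "no mode"
        out.append(modes[0] if len(modes) == 1 else None)
    return out

print(rolling_mode(rows))  # [3, 3, 4, None, 1, 1, None]
```

This reproduces the expected column from the question, including the `None` rows where all window values occur equally often.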
77,070,883
16,948,533
Performance Comparison - Mojo vs Python
<p>Mojo, a programming language, claims to be 65000x faster than Python. I am eager to understand whether there is any concrete benchmark data that supports this claim. Also, how does it differ in real-world problems?</p> <p>I primarily encountered this claim on their website and have watched several videos discussing Mojo's speed. However, I am seeking concrete benchmark data that substantiates this assertion.</p>
<python><performance><benchmarking><mojolang>
2023-09-09 04:49:48
1
1,700
Muneeb
77,070,782
355,715
How to pipe input to urwid?
<p>I'd like to make a program in Python that works a bit like fzf - you pipe input, manipulate it, and it's piped to something else.</p> <p>I have tried making a program like this in Python using urwid. If I set up a fake input within the program, it works the way I want it to, but if I pipe input (<code>cat foo.txt | python myprog.py</code>) I get an error. How can I make this work?</p> <p>In particular, I would like to be able to start the UI before closing stdin, the same as fzf.</p> <hr /> <p>To give a simple example of the issue I ran into, given this program:</p> <pre><code>import urwid txt = urwid.Text(&quot;blah&quot;) filler = urwid.Filler(txt) def quitter(key): if key == &quot;q&quot;: raise urwid.ExitMainLoop() loop = urwid.MainLoop(filler, unhandled_input=quitter) loop.run() </code></pre> <p>This command line:</p> <pre><code>ls | python script.py </code></pre> <p>Gives this error:</p> <pre><code>TypeError: ord() expected a character, but string of length 0 found </code></pre> <p>It wasn't obvious to me why this would happen, but looking at <a href="https://github.com/urwid/urwid/issues/423" rel="nofollow noreferrer">this issue</a> for example, it does seem to be related to piping input.</p>
<python><pty><tui><urwid>
2023-09-09 03:53:51
1
15,817
polm23
77,070,699
4,225,430
Django - confused about naming class and parameters (when to use uppercase / plural form)?
<p>Such confusion reflects an unclear understanding of classes and the framework.</p> <p>I'm learning about databases through Django models. I get confused about naming classes: when to use uppercase / lowercase, and when to use singular vs plural? I'm following the lecture from <a href="https://www.youtube.com/watch?v=jBzwzrDvZ18&amp;t=19742s" rel="nofollow noreferrer">freecodecamp</a> with this example:</p> <p>In <code>models.py</code>, I define a class called <code>Feature(models.Model)</code>.</p> <p>Then, in <code>views.py</code>, I assign features for templates:</p> <pre><code>def index(request): features = Feature.objects.all() return render(request, &quot;index.html&quot;, {&quot;features&quot;: features}) </code></pre> <p>In <code>index.html</code>, there exists a for loop, therefore I run the syntax for feature in features: with the variables <code>{{feature.name}}</code> and <code>{{feature.details}}</code></p> <p>At this moment I just memorize without understanding when deciding when to use Feature vs feature, or features vs feature. I find it rather difficult to memorize the code in <code>views.py</code>. I need a real understanding of the naming.</p> <p>Below is the flow of some of the code. Thank you so much for your help.</p> <pre><code>models.py class Feature(models.Model): name = models.CharField(max_length = 100) details = models.CharField(max_length = 500) ------ settings.py python manage.py makemigrations python manage.py migrate python manage.py createsuperuser ---- admin.py from .models import Feature admin.site.register(Feature) ---- views.py def index(req): features = Feature.objects.all() return render(req, &quot;index.html&quot;, {'features': features}) ---- index.html for feature in features: ....{feature.name} ....{feature.details} </code></pre>
<python><django><class><naming-conventions>
2023-09-09 03:11:45
2
393
ronzenith
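The convention the question above asks about can be seen without Django at all — a sketch with hypothetical names: PEP 8 class names use CapWords and are usually singular (the class describes one thing), a collection of instances gets a plural lowercase name, and the loop variable is the singular of the collection name.

```python
# CapWords, singular: the class describes ONE feature
class Feature:
    def __init__(self, name, details):
        self.name = name
        self.details = details

# lowercase, plural: a collection of Feature instances,
# like the queryset Feature.objects.all() in the view
features = [
    Feature("Fast", "Pages load quickly"),
    Feature("Secure", "CSRF protection built in"),
]

# lowercase, singular: one element of `features` per iteration,
# exactly like `for feature in features` in the Django template
lines = ["%s: %s" % (feature.name, feature.details) for feature in features]
print(lines)
```

So `Feature` names the model (one row), `features` names the queryset handed to the template, and `feature` names the current row inside the loop.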
77,070,586
11,218,687
How to handle the sqlite3.OperationalError: database is locked
<p>I used Python code that followed the pattern:</p> <pre class="lang-py prettyprint-override"><code>database = sqlite3.connect(&quot;path to database&quot;) for job in range(1000): for row in database.execute(&quot;SELECT something&quot;): database.execute(&quot;UPDATE something for the row&quot;) database.commit() </code></pre> <p>The application is a long running process, it queries some jobs from the database, executes each job (makes an HTTP request) one by one. The <code>&quot;SELECT something&quot;</code> returns a single row by design (however it may return a batch of rows), then for each row the application updates the database writing into the row the result of the job.</p> <p>This works if I run this application in a single process, but I need to parallelize execution, so I'm running several processes with the same code. As the result I started to get <code>sqlite3.OperationalError</code> exceptions with the description <em>&quot;database is locked&quot;</em>. I didn't handle this exception, so the processes were crashing, and I was restarting them manually.</p> <p>Now I wish to avoid crashes, and in case of the database lock I'm catching the exception. My goal is to discard the result of the current job and continue with the next one, so I changed the code:</p> <pre class="lang-py prettyprint-override"><code>database = sqlite3.connect(&quot;path to database&quot;) for job in range(1000): for row in database.execute(&quot;SELECT something&quot;): try: # process row database.execute(&quot;UPDATE something for the row&quot;) database.commit() except sqlite3.OperationalError as e: # wait for some time to allow other processes to complete the transaction </code></pre> <p>Now I'm getting a deadlock as both processes cannot proceed with their queries, both discard the results, make queries again and always get the exception. If I kill one of two processes, the other can continue processing the jobs successfully. 
My goal is to drop the connection state in case of catching the <code>sqlite3.OperationalError</code>. Closing/reopening the connection doesn't help as I fail to open the locked database after closing it. Is there an API that allows to unlock the database without closing?</p>
<python><sqlite><database-locking><sqlite3-python>
2023-09-09 01:55:17
0
6,630
Dmitry Kuzminov
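A stdlib-only sketch of the recovery the question above asks for (two connections in one process stand in for the two worker processes): pass a busy timeout when connecting, and call `rollback()` in the except handler so the connection's half-open transaction — and the locks it holds — are dropped before moving on. That releases the lock without closing the connection.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "jobs.sqlite")

writer = sqlite3.connect(path, timeout=0)   # timeout=0 so the demo fails fast;
                                            # a real worker would use e.g. timeout=30
writer.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, done INTEGER)")
writer.execute("INSERT INTO jobs VALUES (1, 0)")
writer.commit()

# a second connection stands in for the other worker process
holder = sqlite3.connect(path, timeout=0, isolation_level=None)
holder.execute("BEGIN IMMEDIATE")           # grabs the write lock and keeps it

got_locked = False
try:
    writer.execute("UPDATE jobs SET done = 1 WHERE id = 1")
except sqlite3.OperationalError:
    got_locked = True
    # the crucial step: abandon this connection's half-open transaction
    # so its locks are released and the other process can finish
    writer.rollback()

holder.execute("ROLLBACK")                  # the other "process" completes

writer.execute("UPDATE jobs SET done = 1 WHERE id = 1")  # retry now succeeds
writer.commit()
row = writer.execute("SELECT done FROM jobs WHERE id = 1").fetchone()
print(got_locked, row)
```

With a nonzero `timeout`, SQLite itself retries for that many seconds before raising, which is often enough to avoid the handler entirely; the `rollback()` is what prevents the mutual-starvation loop described in the question.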
77,070,438
2,681,662
Create a Tempfile with lifespan as the object
<p>I have a project where I need to create temp files.</p> <p>I have a class that accepts a file path. and as needed it reads the data from the file. However, sometimes I have the data and I want to create the same object so I use <code>tempfile</code>. But I want the file to be deleted as the object garbage-collected.</p> <p>This is my implementation which is wrong.</p> <pre><code>from pickle import load import tempfile class A: def __init__(self, file_path): self.file_path = file_path @property def data(self): with open(self.file_path, &quot;rb&quot;) as f2r: return load(f2r) @classmethod def from_data(cls, data): with tempfile.NamedTemporaryFile() as tmp: tmp.write(data) return cls(tmp.name) </code></pre> <p>The <code>from_data</code> method creates the temp file and returns an object but the file will be deleted as soon as <code>from_data</code> returns the object. But I want the file to live until the object itself is garbage-collected.</p>
<python><temporary-files>
2023-09-09 00:36:32
1
2,629
niaei
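One stdlib approach to the lifetime problem above (a sketch — it also swaps the question's raw `tmp.write(data)` for `pickle.dump`, so the `data` property's `pickle.load` round-trips): create the file with `delete=False` and tie its removal to the object's lifetime with `weakref.finalize`, which runs when the object is garbage-collected or, at the latest, at interpreter exit.

```python
import os
import pickle
import tempfile
import weakref

class A:
    def __init__(self, file_path):
        self.file_path = file_path

    @property
    def data(self):
        with open(self.file_path, "rb") as f2r:
            return pickle.load(f2r)

    @classmethod
    def from_data(cls, data):
        # delete=False: the file survives closing the handle
        tmp = tempfile.NamedTemporaryFile(delete=False)
        with tmp:
            pickle.dump(data, tmp)
        obj = cls(tmp.name)
        # unlink the file once obj is collected (or at interpreter exit)
        weakref.finalize(obj, os.unlink, tmp.name)
        return obj

a = A.from_data({"x": 1})
loaded = a.data
path = a.file_path
del a  # on CPython the refcount hits zero here and the finalizer unlinks
still_there = os.path.exists(path)
```

`weakref.finalize` is generally preferred over `__del__` because it is exempt from the resurrection pitfalls and is guaranteed to run at interpreter shutdown even if the object is still alive then.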
77,070,355
8,396,690
AzCosmosDBSqlRoleAssignment issue for a group
<p>We have an AzureGroup, say <code>CMR</code>, &amp; it has <code>&gt; 400 members</code>. I want to give a <code>cosmosDB permission</code> for each member of the group. <code>Azure AD identity (which could be a user, security group, service principal, or managed identity)</code> for <code>CMR</code> can be used for giving access to each member of the group.</p> <p>For that I have used <code>AzCosmosDBSqlRoleAssignment</code> and ran the following command for the <code>principalID</code> for the <code>CMR</code> whose <code>principlaid</code> is <code>9f58c295-efa6-4513-84c6-9c84d4033396</code></p> <pre><code>$resourceGroupName = &quot;test&quot; $accountName = &quot;test7782&quot; $contributorRoleDefinitionId = &quot;00000000-0000-0000-0000-000000000001&quot; $principalId = &quot;9f58c295-efa6-4513-84c6-9c84d4033396&quot; New-AzCosmosDBSqlRoleAssignment -AccountName $accountName ` -ResourceGroupName $resourceGroupName ` -RoleDefinitionId $contributorRoleDefinitionId ` -Scope &quot;/&quot; ` -PrincipalId $principalId </code></pre> <p>Please note, I'm using <code>contributorRoleDefinitionId</code> as <code>00000000-0000-0000-0000-000000000001</code> which has <code>readMetadata</code> permission. <a href="https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac</a> :</p> <p><a href="https://i.sstatic.net/N3xd6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N3xd6.png" alt="enter image description here" /></a></p> <p>The command ran fine. But then I used Python API to establish connection with the database and getting permission error.</p> <pre><code># Azure Identity library provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK. 
# https://learn.microsoft.com/en-us/dotnet/api/azure.identity.azureclicredential?view=azure-dotnet from azure.identity.aio import AzureCliCredential azure_credential = AzureCliCredential() </code></pre> <p>I'm getting following error:</p> <pre><code>Request is blocked because principal [9f58c295-efa6-4513-84c6-9c84d4033396] does not have required RBAC permissions to perform action [Microsoft.DocumentDB/databaseAccounts/readMetadata] on resource [dbs/]. </code></pre> <p>It works perfectly when I switch to giving permissions for <code>each member of the group</code> of <code>CMR</code> with their <code>principalid</code>. But that is cumbersome and way more difficult for me to manage. Any help/suggestions on how to make the it work giving group permission?</p>
<python><azure><azure-cosmosdb><azure-identity>
2023-09-08 23:53:15
1
379
Dibyendu Dey
77,070,267
2,532,408
How do I create an overload annotation for a function that returns a different type when the argument is 0 vs any other integer?
<p>Is it possible to create an overload annotation for a function that returns a different type when argument is <code>0</code> vs any other integer?</p> <pre class="lang-py prettyprint-override"><code>def foo(val: int) -&gt; MyObjectA | MyObjectB: if val == 0: return MyObjectA() return MyObjectB() </code></pre> <p>Is there a way to overload that function to avoid the following?</p> <p><code>Incompatible types in assignment (expression has type &quot;MyObjectA | MyObjectB&quot;, variable has type &quot;MyObjectA&quot;) [assignment]</code></p> <pre class="lang-py prettyprint-override"><code>baz: MyObjectA = foo(0) bar: MyObjectB = foo(1) </code></pre> <p>The problem I'm running into is that I can create the case for <code>Literal[0]</code> but I don't know how (or if it is even possible) to create the case for all other integers besides <code>0</code>.</p> <pre class="lang-py prettyprint-override"><code>@overload def foo(val: Literal[0]) -&gt; MyObjectA: ... # Is there some way to express all positive integers? or everything BUT 0? @overload def foo(val: Literal[1,2,3,4,5,6, ...]) -&gt; MyObjectB: ... </code></pre> <p>I thought maaaaybe there would be some way to express &quot;not 0&quot;, but I didn't find anything like that in <code>typing</code>.</p> <hr /> <p>edit:</p> <p>I tried the following hoping the order would give precedence to the <code>Literal[0]</code> but that unfortunately does not work.</p> <pre class="lang-py prettyprint-override"><code>@overload def foo(val: Literal[0]) -&gt; MyObjectA: ... @overload def foo(val: int) -&gt; MyObjectB: ... </code></pre> <p>Mypy will return this:</p> <p><code>Overloaded function signatures 1 and 2 overlap with incompatible return types [misc]</code></p>
<python><mypy><python-typing>
2023-09-08 23:13:51
3
4,628
Marcel Wilson
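`typing` has no way to spell "any int except 0", but overloads are matched in order, so a common workaround is exactly the second attempt shown above — `Literal[0]` first, plain `int` second — with the overlap diagnostic silenced. A sketch (the ignore code is `overload-overlap` in recent mypy and `misc` in older releases; adapt as needed):

```python
from typing import Literal, overload

class MyObjectA:
    pass

class MyObjectB:
    pass

@overload
def foo(val: Literal[0]) -> MyObjectA: ...
@overload
def foo(val: int) -> MyObjectB: ...  # type: ignore[overload-overlap]
def foo(val: int) -> "MyObjectA | MyObjectB":
    return MyObjectA() if val == 0 else MyObjectB()

baz = foo(0)  # the checker picks the first matching overload: MyObjectA
bar = foo(1)  # falls through to the plain-int overload: MyObjectB
print(type(baz).__name__, type(bar).__name__)  # MyObjectA MyObjectB
```

Note the trade-off: a variable merely typed `int` that happens to hold 0 will be inferred as `MyObjectB`, since only the literal `0` (or a `Literal[0]`-typed value) selects the first overload.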
77,070,177
10,308,255
How to iteratively add tuple elements to a dataframe as new columns?
<p>I am using the <code>statsmodels.stats.multitest.multipletests</code> <a href="https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.html" rel="nofollow noreferrer">function</a></p> <p>to correct <code>p-values</code> I have stored in a dataframe:</p> <pre><code>p_value_df = pd.DataFrame({&quot;id&quot;: [123456, 456789], &quot;p-value&quot;: [0.098, 0.05]}) for _, row in p_value_df.iterrows(): p_value = row[&quot;p-value&quot;] print(p_value) results = multi.multipletests( p_value, alpha=0.05, method=&quot;bonferroni&quot;, maxiter=1, is_sorted=False, returnsorted=False, ) print(results) </code></pre> <p>which looks like: <a href="https://i.sstatic.net/lVU96.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lVU96.png" alt="enter image description here" /></a></p> <p>I would <em>really</em> like to add each of the elements of the <code>tuple</code> output as a new column in the <code>p_value_df</code> and am a bit stuck.</p> <p>I've attempted to convert the results to a list and use <code>zip(*tuples_converted_to_list)</code> but as some of the values are <code>floats</code> this throws an error.</p> <p>Additionally, I'd like to pull the <code>array</code> elements so that <code>array([False])</code> is just <code>False</code>.</p> <p>Can anyone make any recommendations on a strategy to do this?</p>
<python><pandas><tuples><statsmodels><python-zip>
2023-09-08 22:39:03
1
781
user
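The reshaping asked about above can be sketched without pandas or statsmodels (the Bonferroni arithmetic is done by hand below; with the real function it is usually better to call `multipletests` once on the whole p-value column and assign each element of the returned tuple — e.g. the reject array and the corrected p-values — to its own column rather than looping row by row):

```python
# Bonferroni by hand: corrected p = min(p * n_tests, 1), reject if < alpha
pvals = [0.098, 0.05]
alpha = 0.05
n = len(pvals)

corrected = [min(p * n, 1.0) for p in pvals]
reject = [p < alpha for p in corrected]

# one "row" per id, with each tuple element landing in its own column;
# the bare booleans play the role of the unwrapped array([False]) values
table = [
    {"id": i, "p-value": p, "p_corrected": c, "reject": r}
    for i, (p, c, r) in enumerate(zip(pvals, corrected, reject))
]
print(table)
```

With a DataFrame the same idea is two assignments on whole arrays rather than an `iterrows` loop, which also gives the correct multiple-testing correction (per-row calls each see only one test).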
77,070,154
2,000,548
AWS Glue: Cannot find catalog plugin class for catalog 'spark_catalog': org.apache.spark.sql.delta.catalog.DeltaCatalog
<p>I have a Glue job in Python, which write individual parquet files to Delta Lake in S3.</p> <pre class="lang-py prettyprint-override"><code>import sys from awsglue.context import GlueContext from awsglue.job import Job from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext raw_parquet_path = &quot;s3://my-bucket/data/raw-parquet/motor/&quot; delta_table_path = ( &quot;s3://my-bucket/data/delta-tables/my_table/&quot; ) partition_list = [&quot;_event_id&quot;] args = getResolvedOptions(sys.argv, [&quot;JOB_NAME&quot;]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args[&quot;JOB_NAME&quot;], args) # Script generated for node S3 bucket S3bucket_node1 = glueContext.create_dynamic_frame.from_options( format_options={}, connection_type=&quot;s3&quot;, format=&quot;parquet&quot;, connection_options={ &quot;paths&quot;: [raw_parquet_path], &quot;recurse&quot;: True, }, transformation_ctx=&quot;S3bucket_node1&quot;, ) # Script generated for node sink_to_delta_lake additional_options = { &quot;path&quot;: delta_table_path, &quot;write.parquet.compression-codec&quot;: &quot;snappy&quot;, &quot;mergeSchema&quot;: &quot;true&quot;, } sink_to_delta_lake_node3_df = S3bucket_node1.toDF() sink_to_delta_lake_node3_df.write.format(&quot;delta&quot;).options( **additional_options ).partitionBy(*partition_list).mode(&quot;append&quot;).save() job.commit() </code></pre> <p>I got error</p> <pre><code>An error occurred while calling o111.save. Failed to find data source: delta. 
Please find packages at </code></pre> <p>I added Job details -&gt; Advanced properties -&gt; Job parameters</p> <ul> <li><code>--conf</code>: <code>spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog</code></li> <li><code>--packages</code>: <code>io.delta:delta-core_2.12:2.1.0</code></li> </ul> <p>(Note <code>--conf</code> is in one line based on <a href="https://stackoverflow.com/a/59261013/2000548">this</a>)</p> <p><a href="https://i.sstatic.net/eSouP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eSouP.png" alt="enter image description here" /></a></p> <p>However, then the error becomes</p> <blockquote> <p>An error occurred while calling o100.getDynamicFrame. Cannot find catalog plugin class for catalog 'spark_catalog': org.apache.spark.sql.delta.catalog.DeltaCatalog</p> </blockquote> <p>Any ideas? Thanks!</p>
<python><apache-spark><pyspark><aws-glue><delta-lake>
2023-09-08 22:30:20
1
50,638
Hongbo Miao
77,069,937
706,389
sqlalchemy core: slow performance for a simple SELECT query comparing to sqlite
<p>I've been trying to optimize some of my code that reads from an <code>sqlite</code> database via <code>sqlalchemy</code>, it seemed that reading the db was taking quite a lot of time.</p> <p>I've reduced it to the following test: read 5 million integers from an <code>sqlite</code> database.</p> <p>Raw <code>sqlite3</code> implementation takes 1.2 seconds, whereas when I do it via sqlalchemy core, it takes 3.2 seconds.</p> <p>After some profiling with py-spy I've noticed that most of the time is spent outside sqlite and rather in sqlalchemy itself -- and the time spent in sqlite is about 1 second, which matches the raw sqlite3 performance.</p> <p>The most offending line performance-wise was <a href="https://github.com/sqlalchemy/sqlalchemy/blob/8503dc2e948908199cd8ba4e6b1d1ddcf92f4020/lib/sqlalchemy/engine/result.py#L557" rel="nofollow noreferrer">this</a>. I imagine converting a bare sqlite row to sqlachemy Row takes some time, so that's fair.</p> <p>However I can't find a way to control that behaviour, e.g. ideally I need some sort of cursor that just returns bare tuples from sqlite without any additional processing by sqlalchemy.</p> <p>If I simply set <code>make_row = None</code> in sqlalchemy itself it actually kinda achieves what I want, and runs in 2.2 seconds (although still a bit of overhead). 
I tried to read the sqlalchemy code to understand what impacts <code>_row_getter</code> and what in the library could force it to be <code>None</code>, but didn't manage to find anything.</p> <p>From sqlalchemy logs, it's the same query I'm running against `sqlite3</p> <pre><code>SELECT data.x FROM data </code></pre> <p>I'm using latest sqlalchemy <code>2.0.20</code>, libsqlite version is <code>3.37.2</code>, python <code>3.10.4</code>.</p> <pre class="lang-py prettyprint-override"><code>N = 5_000_000 # database with 5000000 ints db_path = '/tmp/ints.sqlite' # takes about 1.2s def test_sqlite(): import sqlite3 total = 0 last = None with sqlite3.connect(db_path) as conn: for (x,) in conn.execute('SELECT * FROM data'): total += 1 last = x assert total == N # just in case print(last) # takes about 3.2s def test_sqlalchemy(): import sqlalchemy from sqlalchemy import Table, MetaData, Column, text engine = sqlalchemy.create_engine(f'sqlite:///{db_path}', echo=True) meta = MetaData() table_cache = Table('data', meta, Column('x', sqlalchemy.Integer)) query = table_cache.select() total = 0 last = None with engine.connect() as conn: rows = conn.execute(query) for (x,) in rows: total += 1 last = x assert total == N # just in case print(last) </code></pre>
<python><sqlite><sqlalchemy>
2023-09-08 21:22:38
1
2,549
karlicoss
77,069,773
893,254
How can I install IPython in Debian 12 or Ubuntu 23.04 where pip3 prevents installation due to "externally-managed-environment"?
<p><code>python3</code> is a system wide program, just as <code>pip3</code> is.</p> <p>I want to install <a href="https://en.wikipedia.org/wiki/IPython" rel="nofollow noreferrer">IPython</a> on <a href="https://en.wikipedia.org/wiki/Debian_version_history#Debian_12_(Bookworm)" rel="nofollow noreferrer">Debian 12</a> (Bookworm). (This information is also relevant to newer <a href="https://en.wikipedia.org/wiki/Ubuntu_%28operating_system%29" rel="nofollow noreferrer">Ubuntu</a> versions, since these are derived directly from <a href="https://en.wikipedia.org/wiki/Debian" rel="nofollow noreferrer">Debian</a> and contain the same policy change.)</p> <p>I would probably expect this to also be a system-wide available program, just like <code>python3</code> and <code>pip3</code>. Please correct me if that no longer makes sense, given the recent changes which prevent (by default) users from installing <code>pip3</code> packages system wide, instead encouraging the use of <code>venv</code>s.</p> <p>Previously I would have run <code>pip3 install ipython</code>. What should I now do instead?</p> <p>Error message when attempting to run <code>pip3 install ipython</code>.</p> <pre class="lang-none prettyprint-override"><code>error: externally-managed-environment × This environment is externally managed ╰─&gt; To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. 
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. </code></pre>
<python><python-3.x><ipython><python-venv><debian-bookworm>
2023-09-08 20:43:52
4
18,579
user2138149
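The two routes the error message above itself suggests, sketched as shell commands (the venv path is an arbitrary choice for this sketch; the actual `pip install` lines are left as comments since they need network access):

```shell
# Option 1: pipx manages a per-application venv for you
# (Debian: sudo apt install pipx, then: pipx install ipython)

# Option 2: a venv you manage yourself
VENV="$(mktemp -d)/ipython-venv"   # in practice e.g. ~/.venvs/ipython
python3 -m venv "$VENV"

# The venv's own pip is NOT "externally managed", so this succeeds:
#   "$VENV/bin/pip" install ipython
# and a symlink makes it feel system-wide for your user account:
#   ln -s "$VENV/bin/ipython" ~/.local/bin/ipython

ls "$VENV/bin"   # the venv carries its own python (and pip) wrappers
```

For end-user applications like IPython, `pipx` is the least-friction route: it does the venv bookkeeping and the PATH symlinking for you.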
77,069,623
893,254
pandas could not be resolved from source jupyter notebook
<p>I am trying to get Python3, Pandas, Jupyter, virtual envs working on one of my remote machines, via Visual Studio Code.</p> <p>I don't fully understand how all the moving parts interact, yet.</p> <p>When one of my test Jupyter Notebooks is open, although I can run the Python 3 code without issue, Visual Studio Code shows a warning <code>Import &quot;pandas&quot; could not be resolved from sourcePylance</code>.</p> <p>Further to this, there appears to be something missing from the Status Bar. Elsewhere I have seen descriptions of a widget on the status bar which relates to Python development. I think it is for switching Python environments, or similar. I have installed the Python Extension Pack. I don't know why this doesn't show up. This suggests to me that I haven't got something setup quite right.</p> <p>I searched for a solution to the &quot;import Pandas could not be resolved&quot; issue and saw suggestions to run <code>pip3 install pandas</code>. If I try and do this, I see the following message:</p> <pre><code>error: externally-managed-environment × This environment is externally managed ╰─&gt; To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. 
</code></pre> <p>My system is Debian 12. A recent change prevents pip installing any packages without first creating a virtual environment. (At least by default.) This is to prevent conflicts with system packages.</p> <p>This is why I opted to create a venv in the folder I am currently working in. However, taking this route appears to prevent the Python syntax analysis extensions from working when working inside a Jupyter Notebook.</p> <p>How should I setup my system to fix this?</p> <p>Note: Although VS Code as shown in the screenshot is running on a Windows machine, it is connected to a development server running Debian 12.</p> <p>The code in the Notebook does run, suggesting the issue is not with the Jupyter configuration, but something else.</p> <p><a href="https://i.sstatic.net/NX0tA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NX0tA.png" alt="VS Code" /></a></p>
<python><pandas><jupyter-notebook>
2023-09-08 20:07:02
0
18,579
user2138149
77,069,577
9,731,380
Problems using zdir in contour
<p>I'm attempting to plot 2D slices from a 3D scatter plot using matplotlib, and am having problems with the <code>zdir</code> argument. Here is my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d from scipy.stats import multivariate_normal mean = np.array([0, 0, 0]) covariance_matrix = np.array([[1, 0.5, 0.2], [0.5, 1, 0.3], [0.2, 0.3, 1]]) x1=np.linspace(-3, 3, 50) x, y, z = np.meshgrid(x1,x1,x1) pos = np.column_stack((x.ravel(), y.ravel(), z.ravel())) pdf_values = multivariate_normal.pdf(pos, mean=mean, cov=covariance_matrix) pdf_values = pdf_values.reshape(50, 50, 50) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x_flat, y_flat, z_flat = x.flatten(), y.flatten(), z.flatten() pdf_flat = pdf_values.flatten() threshold=0.03 inside_sphere = pdf_flat &gt; threshold from matplotlib import cm ax.scatter(x_flat[inside_sphere], y_flat[inside_sphere], z_flat[inside_sphere], c=pdf_flat[inside_sphere], marker='.',s=1) ax.contour(X=x[:,:,25], Y=y[:,:,25], Z=pdf_values[:,:,25], zdir='z',offset=-1.5 , cmap=cm.coolwarm) #Problems begin when changing zdir to a non-z argument ax.contour(X=x[25,:,:], Y=z[25,:,:], Z=pdf_values[25,:,:], zdir='y', cmap=cm.coolwarm) plt.show() </code></pre> <p>This generates the following image <a href="https://i.sstatic.net/WRp1F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WRp1F.png" alt="enter image description here" /></a></p> <p>But where is the Y projection? If you comment out the other two lines you get this:</p> <p><a href="https://i.sstatic.net/f8t99.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f8t99.png" alt="enter image description here" /></a></p> <p>Changing the <code>zdir</code> argument to &quot;z&quot; turns into a more recognisable contour plot. Why is that?</p>
<python><matplotlib><contour><matplotlib-3d>
2023-09-08 19:59:42
1
415
HLL
77,069,436
5,451,356
Why is the total execution time so much greater than the profiled cumulated time in Python?
<p>I'm running a Python script to find flight routes using a dataset with around 20k flights containing ~4000 flights each day. When profiling the code with <code>cProfile</code>, I print that the cumulative time recorded is much less than the query execution time. The discrepancy I'm seeing is:</p> <pre><code>Total function calls in cProfile: 0.468 seconds Query executed time: 4.25759482383728 seconds </code></pre> <p>Here's the profilers output:</p> <pre><code>16499769 function calls in 0.468 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 4167579 0.155 0.000 0.155 0.000 {method 'get' of 'dict' objects} 4059336 0.101 0.000 0.101 0.000 {method 'append' of 'collections.deque' objects} 4059365 0.099 0.000 0.099 0.000 {method 'popleft' of 'collections.deque' objects} 3972499 0.090 0.000 0.090 0.000 {built-in method builtins.len} 135618 0.019 0.000 0.019 0.000 /Users/&lt;user&gt;/PycharmProjects/route-parser/pure-python-9-8.py:109(&lt;setcomp&gt;) 95182 0.003 0.000 0.003 0.000 {method 'append' of 'list' objects} 1589 0.000 0.000 0.001 0.000 {method 'sort' of 'list' objects} 8316 0.000 0.000 0.000 0.000 /Users/&lt;user&gt;/PycharmProjects/route-parser/pure-python-9-8.py:91(&lt;lambda&gt;) 125 0.000 0.000 0.000 0.000 /Users/&lt;user&gt;/PycharmProjects/route-parser/pure-python-9-8.py:85(&lt;lambda&gt;) 127 0.000 0.000 0.000 0.000 {method 'values' of 'dict' objects} 30 0.000 0.000 0.000 0.000 /Users/&lt;user&gt;/PycharmProjects/route-parser/pure-python-9-8.py:95(&lt;genexpr&gt;) 2 0.000 0.000 0.000 0.000 {built-in method time.time} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} </code></pre> <p>I understand that cProfile is not measuring wall time, but how can I bring these numbers closer together?</p> <p>This is the program, eventually I'll not be reading from CSV and instead be reading from a DB.</p> <pre class="lang-py prettyprint-override"><code>import cProfile import csv import pstats 
import time from collections import defaultdict from collections import deque from datetime import timedelta from io import StringIO from dateutil.parser import parse flights = [] with open('lambda.csv', 'r') as f: reader = csv.reader(f) next(reader) # Skip the header for row in reader: flight_id, origin, destination, start_datetime, end_datetime, capacity, equipment_type, tail_number = row start_datetime = parse(start_datetime) end_datetime = parse(end_datetime) flights.append({ 'flight_id': flight_id, 'origin': origin, 'destination': destination, 'start_datetime': start_datetime, 'end_datetime': end_datetime, 'capacity': int(capacity), 'equipment_type': equipment_type, 'tail_number': tail_number }) # Initialize a connectivity map to store direct connections between airports connectivity_map = defaultdict(set) # Build the connectivity map from the flights data for flight in flights: connectivity_map[flight['origin']].add(flight['destination']) def find_routes(origin, destination, start_datetime=None, end_datetime=None): &quot;&quot;&quot; Find possible flight routes from origin to destination within a specified time window. Parameters: - origin (str): The starting airport code. - destination (str): The destination airport code. - start_datetime (str): The start date and time in ISO format. If provided, searches 48 hours ahead from this time. - end_datetime (str): The end date and time in ISO format. If provided, searches 48 hours before this time. Returns: None (prints the valid paths, number of paths, and query execution time). 
&quot;&quot;&quot; if start_datetime is not None: start_timestamp = int(parse(start_datetime).timestamp()) end_timestamp = int(start_timestamp + timedelta(hours=48).total_seconds()) elif end_datetime is not None: end_timestamp = int(parse(end_datetime).timestamp()) start_timestamp = int(end_timestamp - timedelta(hours=48).total_seconds()) else: raise ValueError(&quot;Either start_datetime or end_datetime must be provided&quot;) for flight in flights: flight['start_timestamp'] = flight['start_datetime'].timestamp() flight['end_timestamp'] = flight['end_datetime'].timestamp() # Filter out the flights which don't fit within the specified time window flights_filtered = [ flight for flight in flights if start_timestamp &lt;= flight['start_timestamp'] &lt;= end_timestamp and end_timestamp &gt;= flight['end_timestamp'] ] pr = cProfile.Profile() s = StringIO() pr.enable() start_time = time.time() # Build a mapping from origin to destination with lists of flights that follow that route next_flights_map = defaultdict(lambda: defaultdict(list)) for flight in flights_filtered: next_flights_map[flight['origin']][flight['destination']].append(flight) # Sort the flights for each route based on their start timestamp for origin_data in next_flights_map.values(): for destination_data in origin_data.values(): destination_data.sort(key=lambda x: x['start_timestamp']) valid_paths = [] # Initialize the exploration queue with the first set of flights from the origin to_explore = deque([flight] for dest_flights in next_flights_map[origin].values() for flight in dest_flights) while to_explore: path = to_explore.popleft() last_flight = path[-1] if last_flight['destination'] == destination: valid_paths.append(path) continue if len(path) &gt;= 4: continue # Keep track of airports already visited in the current path to avoid loops visited_airports = {flight['origin'] for flight in path} # Find potential next destinations that haven't been visited yet next_possible_destinations = 
connectivity_map[last_flight['destination']] - visited_airports # For each potential next destination, explore the flights going there for dest in next_possible_destinations: potential_flights = next_flights_map[last_flight['destination']].get(dest, []) for next_flight in potential_flights: # Ensure the next flight starts after the last one ends if next_flight['start_timestamp'] &gt; last_flight['end_timestamp']: new_path = path + [next_flight] to_explore.append(new_path) end_time = time.time() pr.disable() sortby = 'cumulative' ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) print(len(valid_paths)) print(f&quot;Query executed in: {end_time - start_time} seconds&quot;) # Test the function with a sample input find_routes('MIA', 'ATL', start_datetime='2023-09-15T10:00:00Z') </code></pre> <p>My question is: Why is there such a significant discrepancy between the cProfile cumulative time and the execution time of the query? Am I missing something or not accounting for some overheads?</p> <p>Also any improvements to the algorithm will be appreciated :D</p>
<python><algorithm><profiling><cprofile>
2023-09-08 19:29:41
0
964
terrabl
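One explanation for the gap in the question above, offered as a hypothesis since it depends on where `pr.enable()` sits: `cProfile` attributes time only to functions *called* after it is enabled, so the bytecode of the frame that is already running when `pr.enable()` is called — the `while to_explore:` loop itself, the comparisons, the `path + [next_flight]` concatenations — never shows up in the report; only the cheap calls made inside it (`dict.get`, `deque.append`, `len`) do. Profiling also adds per-call overhead that inflates the wall-clock side. A minimal demonstration:

```python
import cProfile
import pstats
import time

def busy_loop(n):
    # stand-in for the BFS body: pure bytecode work, no function calls
    total = 0
    for i in range(n):
        total += i * i
    return total

def profile_inline(n):
    # enable() while this frame is already running: the loop below executes
    # in the *current* frame, so cProfile attributes almost nothing to it
    pr = cProfile.Profile()
    pr.enable()
    t0 = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    wall = time.perf_counter() - t0
    pr.disable()
    return wall, pstats.Stats(pr).total_tt  # total_tt = time cProfile accounted for

def profile_call(n):
    # profiling a function *call* captures the loop's time
    pr = cProfile.Profile()
    pr.enable()
    t0 = time.perf_counter()
    busy_loop(n)
    wall = time.perf_counter() - t0
    pr.disable()
    return wall, pstats.Stats(pr).total_tt

w1, seen1 = profile_inline(500_000)
w2, seen2 = profile_call(500_000)
print(f"inline: wall={w1:.3f}s accounted={seen1:.3f}s")
print(f"call:   wall={w2:.3f}s accounted={seen2:.3f}s")
```

If the numbers converge once the search loop is moved into its own function and that function is called after `pr.enable()`, this explanation fits; the simplest fix is to wrap the whole search in a function and profile the call to it.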
77,069,152
963,319
Poetry pytest, ModuleNotFoundError: No module named 'fastapi'
<p>I'm running <code>FastAPI</code> successfully with Poetry but I cannot run unit tests:</p> <pre><code>poetry run pytest platform linux -- Python 3.11.5, pytest-7.4.1, pluggy-1.3.0 rootdir: /home/jenia/git/example00 collected 0 items / 1 error app/example.py:2: in &lt;module&gt; from fastapi import FastAPI E ModuleNotFoundError: No module named 'fastapi' </code></pre> <p>I executed the poetry shell:</p> <pre><code>[jenia@archlinux example00]$ poetry shell Spawning shell within /home/jenia/.cache/pypoetry/virtualenvs/example00-IayGs80J-py3.11 [jenia@archlinux example00]$ . /home/jenia/.cache/pypoetry/virtualenvs/example00-IayGs80J-py3.11/bin/activate (example00-py3.11) [jenia@archlinux example00]$ </code></pre> <p>Still the error persists.</p> <p>Here is my file that I need to test <code>app/example.py</code>:</p> <pre><code>import socket from fastapi import FastAPI #&lt;======== I know it's unused, it's for demonstration/learning purposes. def resolvePostgres(): x = socket.getaddrinfo('postgres', 5432) </code></pre> <p>And here is the test <code>tests/test_example.py</code>:</p> <pre><code>import unittest from unittest import mock from app.example import resolvePostgres class TestStringMethods(unittest.TestCase): def test_example(self): resolvePostgres() </code></pre> <p>However I can run the program no problem with <code>uvicorn main:app</code>:</p> <pre><code>which uvicorn /home/jenia/.cache/pypoetry/virtualenvs/example00-IayGs80J-py3.11/bin/uvicorn </code></pre> <p>Here is the file structure:</p> <pre><code>|--example00/ |--app/ |--__init__.py |--example.py |--tests/ |--__init__.py |--test_example.py |--main.py |--pyproject.toml </code></pre> <p>And finally, here is my <code>pyproject.toml</code>:</p> <pre><code>[tool.poetry] name = &quot;example00&quot; version = &quot;0.1.0&quot; authors = [&quot;Evgeniy&quot;] readme = &quot;README.md&quot; packages = [{include = &quot;app&quot;}] [tool.poetry.dependencies] python = &quot;^3.11&quot; fastapi = 
&quot;^0.100.1&quot; uvicorn = {extras = [&quot;standard&quot;], version = &quot;^0.23.2&quot;} app = &quot;^0.0.1&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>Can someone please give me a hint, how do I run the tests in such a way that the import get resolved properly?</p>
<python><pytest><fastapi><python-poetry>
2023-09-08 18:31:38
0
2,751
Jenia Be Nice Please
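A quick diagnostic for the question above (illustrative snippet, not from the question): check whether `pytest` and `fastapi` resolve from the same interpreter. A common cause of this exact error is that `pytest` is not declared in the project, so `poetry run pytest` falls through to a globally installed pytest running outside the virtualenv; `poetry add --group dev pytest` usually fixes that. Note also that the `app = "^0.0.1"` dependency may pull an unrelated PyPI package named `app`, which can shadow the local `app` package.

```python
# Print which interpreter is running and where each module resolves from.
# If pytest comes from outside the project's virtualenv, fastapi will be
# missing from that interpreter even though the app itself runs fine.
import importlib.util
import sys

def module_origin(name):
    # returns the file the module would be loaded from, or None if not found
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print("interpreter:", sys.executable)
print("pytest:     ", module_origin("pytest"))
print("fastapi:    ", module_origin("fastapi"))
```

Run this both directly and via `poetry run python ...`; if the two interpreter paths differ, the environment mismatch is confirmed.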
77,069,092
2,486,083
Python argparse with two positional arguments
<p>I am trying to use python3 argparse to get a CLI like below:</p> <pre><code>python main.py virtual_machine deploy --name &quot;chetan&quot; python main.py virtual_machine power-on python main.py image --list </code></pre> <p>I tried something like this, which is not working:</p> <pre><code>parser = argparse.ArgumentParser(description=&quot;VM Services&quot;) subparsers = parser.add_subparsers(help=&quot;Subcommands&quot;) parser.add_argument('virtual_machine', nargs=2, action=TwoPositionalAction, help=&quot;Virtual machine operations&quot;) args = parser.parse_args() if args.virtual_machine[1] == &quot;deploy&quot;: vm_parser = subparsers.add_parser(&quot;deploy&quot;, help=&quot;Virtual machine deploy&quot;) vm_parser .add_argument(&quot;--name&quot;, required=False, help=&quot;name of the vm&quot;) vm_parser .set_defaults(func=two_parser_command) </code></pre> <p>Output:</p> <pre><code>python3 main.py virtual_machine deploy usage: main.py [-h] {} ... virtual_machine virtual_machine main.py: error: invalid choice: 'virtual_machine' (choose from ) </code></pre> <p>Need help on setting 2 or 1 positional parameter(s) with same parser which will call functions to perform respective operation</p>
<python><argparse>
2023-09-08 18:17:16
3
1,271
Chetan
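A sketch for the question above: instead of a two-value positional with a custom action, the CLI shape `main.py virtual_machine deploy --name ...` maps naturally onto *nested* subparsers — one level for the resource, one for the action. The `resource`/`action` destination names below are illustrative:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="VM Services")
    resources = parser.add_subparsers(dest="resource", required=True)

    # first positional: the resource
    vm = resources.add_parser("virtual_machine", help="Virtual machine operations")
    # second positional: the action on that resource
    vm_actions = vm.add_subparsers(dest="action", required=True)
    deploy = vm_actions.add_parser("deploy", help="Virtual machine deploy")
    deploy.add_argument("--name", required=False, help="name of the vm")
    vm_actions.add_parser("power-on", help="Power on the virtual machine")

    # a resource with only one positional and a flag
    image = resources.add_parser("image", help="Image operations")
    image.add_argument("--list", action="store_true", help="list images")
    return parser

args = build_parser().parse_args(["virtual_machine", "deploy", "--name", "chetan"])
print(args.resource, args.action, args.name)  # virtual_machine deploy chetan
```

Dispatch then becomes a check on `args.resource` (and `args.action` where present), or each `add_parser(...)` can carry its own `set_defaults(func=...)`.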
77,069,045
3,700,524
How to call grandparent's `__str__` from child class in python?
<p>Suppose we have 3 classes <code>grandparent</code>,<code>parent</code> and <code>child</code> where <code>parent</code> inherits from <code>grandparent</code> and <code>child</code> inherits from <code>parent</code>. I want to call the <code>__str__</code> of <code>grandparent</code> in class <code>child</code>. How can I do it?</p> <pre><code>class grandparent: def __init__(self): pass def __str__(self): return f'grandparent' class parent(grandparent): def __init__(self): pass def __str__(self): return f'{super().__str__()},parent' class child(parent): def __init__(self): pass def __str__(self): return f'{super().__str__()},child' </code></pre> <p>In this case if I make an object from <code>child</code> class, after printing it I see :</p> <pre><code>c = child() print(c) # prints 'grandparent,parent,child' </code></pre> <p>I want to see <code>'grandparent,child'</code> by calling the grandparent's <code>__str__</code> in child class.</p>
<python><inheritance>
2023-09-08 18:06:04
4
3,421
Mohsen_Fatemi
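A sketch for the question above: zero-argument `super()` always continues the MRO from the *current* class, which is why the chain visits `parent`. Passing explicit arguments, `super(parent, self)`, starts the lookup *after* `parent` and therefore skips its `__str__` (class names are CapWords in this sketch):

```python
class Grandparent:
    def __str__(self):
        return "grandparent"

class Parent(Grandparent):
    def __str__(self):
        return f"{super().__str__()},parent"

class Child(Parent):
    def __str__(self):
        # start the MRO lookup *after* Parent, skipping Parent.__str__;
        # an equivalent, more explicit form is Grandparent.__str__(self)
        return f"{super(Parent, self).__str__()},child"

print(Child())   # grandparent,child
print(Parent())  # grandparent,parent
```

The explicit-class form `Grandparent.__str__(self)` is usually clearer when a specific ancestor is intended; `super(Parent, self)` is preferable when cooperative multiple inheritance matters.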
77,068,908
22,515,567
How to install PyTorch with CUDA support on Windows 11 (CUDA 12)? - No Matching Distribution Found
<p>I'm trying to install PyTorch with CUDA support on my Windows 11 machine, which has CUDA 12 installed and python 3.10. When I run nvcc --version, I get the following output:</p> <pre><code>nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023 Cuda compilation tools, release 12.2, V12.2.140 Build cuda_12.2.r12.2/compiler.33191640_0 </code></pre> <p>I'd like to install PyTorch version 2.0.0 with CUDA support, so I attempted to run the following command:</p> <pre><code>python -m pip install torch==2.0.0+cu117 </code></pre> <p>However, I encountered the following error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement torch==2.0.0+cu117 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1) ERROR: No matching distribution found for torch==2.0.0+cu117 </code></pre> <p>Does anyone have any suggestions?</p>
<python><pip><pytorch><dependencies>
2023-09-08 17:37:40
1
573
Marco Parola
77,068,901
14,141,072
Discord.py v2.0: Slash Command with Cogs integrated with OpenAI API
<p>I have been making a Discord bot using discord.py v2.0 and OpenAI API to make a AI Bot for my personal server. So firstly I will share my code.</p> <p>This the main.py file:</p> <pre><code>import os import asyncio import discord from discord.ext import commands intents = discord.Intents.all() ceriumAI = commands.Bot( command_prefix=&quot;&lt;&gt;&quot;, intents=intents ) @ceriumAI.event async def on_ready(): print(&quot;CeriumAI is ready and online!&quot;) @ceriumAI.command() async def sync(ctx): synced = await ceriumAI.tree.sync() print(f&quot;Synced {len(synced)} command(s).&quot;) async def loadCogs(): for filename in os.listdir(&quot;./Cogs&quot;): if filename.endswith(&quot;.py&quot;): await ceriumAI.load_extension(f&quot;Cogs.{filename[:-3]}&quot;) print(f&quot;Loaded the cog: {filename[:-3]}&quot;) async def main(): await loadCogs() await ceriumAI.start(os.getenv(&quot;TOKEN&quot;)) asyncio.run(main()) </code></pre> <p>This is the other file/Cog, askai.py:</p> <pre><code>import openai import discord from discord import app_commands from discord.ext import commands class Ask_AI(commands.Cog): def __init__(self, ceriumAI): self.ceriumAI = ceriumAI self.openai.api_key = &quot;MY-API-KEY&quot; self.messages = [{&quot;role&quot;: &quot;system&quot;,&quot;content&quot;:&quot;You are a intelligent assistant.&quot;}] @app_commands.command(name=&quot;askai&quot;) async def ask_ai(self, interaction: discord.Interaction, query: str): while True: message = query if message: self.messages.append( {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: message} ) chat = openai.ChatCompletion.create( model=&quot;gpt-3.5-turbo&quot;, messages=self.messages ) reply = chat.choices[0].message.content await interaction.response.send_message(f&quot;{reply}&quot;) self.messages.append({&quot;role&quot;: &quot;assistant&quot;, &quot;content&quot;: reply}) async def setup(bot): await bot.add_cog(Ask_AI(bot)) </code></pre> <p>So the problem: Basically, the slash command works. 
The weird part is that when I use the command with short queries like &quot;hi&quot;, &quot;hey&quot;, etc., it responds quickly. But when I send a query like &quot;What is your name?&quot;, or any query that ChatGPT takes longer to answer, the bot shows &quot;the application did not respond&quot;. My guess is that a slash command has a response time limit which, when it expires, produces the &quot;application did not respond&quot; message. I would like to know if such a limit exists and, if yes, how to extend it so the AI can take its time and return the longer reply. If not, then a way around this problem. Thanks :)</p>
<python><python-3.x><discord><discord.py><openai-api>
2023-09-08 17:36:21
1
831
Bhavyadeep Yadav
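For the question above: such a limit does exist — an interaction must be acknowledged within about 3 seconds, or Discord shows "The application did not respond". In discord.py 2.x the usual fix is to call `await interaction.response.defer()` first (which buys up to roughly 15 minutes) and then `await interaction.followup.send(...)` once the OpenAI call returns; the `while True:` loop would also try to respond to the same interaction repeatedly, which fails after the first send. Since the real objects need a live gateway connection, the sketch below illustrates the pattern with a stand-in interaction object:

```python
import asyncio

# Stand-in for discord.Interaction, to show the defer/follow-up pattern
# without a live gateway; in discord.py 2.x the real calls are
# interaction.response.defer() and interaction.followup.send(...).
class FakeInteraction:
    def __init__(self):
        self.events = []

    async def defer(self):
        self.events.append("deferred")          # must happen within ~3 s

    async def followup_send(self, content):
        self.events.append(f"sent:{content}")   # allowed well after the 3 s window

async def slow_model_call(query):
    await asyncio.sleep(0.05)                   # stands in for the slow OpenAI request
    return f"answer to {query!r}"

async def ask_ai(interaction, query):
    await interaction.defer()                   # discord: await interaction.response.defer()
    reply = await slow_model_call(query)
    await interaction.followup_send(reply)      # discord: await interaction.followup.send(reply)

interaction = FakeInteraction()
asyncio.run(ask_ai(interaction, "What is your name?"))
print(interaction.events)
```

One more caveat: `openai.ChatCompletion.create` is a blocking call, so inside the real cog it should be pushed off the event loop, e.g. with `await asyncio.to_thread(...)`, or the whole bot stalls while waiting for the model.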
77,068,887
2,302,485
What is the error rate of DuckDB approx_count_distinct for small cardinalities?
<p>I've been trying to understand how to measure the error rate for <code>approx_count_distinct</code> in DuckDB. The database doesn't provide a way to output it. From what I read in the paper, the error increases when the cardinality is low, and it also depends on the number of buckets in the HyperLogLog implementation, but there is no formula to calculate it. Is there a way to show the possible error for this function?</p>
<python><duckdb><hyperloglog>
2023-09-08 17:33:59
1
402
egor10_4
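For the question above, a hedged sketch: DuckDB doesn't expose the error, but the HyperLogLog paper (Flajolet et al., 2007) gives the asymptotic relative standard error as `1.04 / sqrt(m)` for `m` registers. For *small* cardinalities implementations typically switch to a linear-counting estimator whose error behaves differently, so the constant below covers only the large-cardinality regime; the register count used for DuckDB here is an assumption, not taken from its source:

```python
import math

def hll_std_error(num_registers):
    # asymptotic relative standard error of HyperLogLog (Flajolet et al. 2007);
    # does not apply in the small-cardinality (linear counting) regime
    return 1.04 / math.sqrt(num_registers)

# assuming 2**12 = 4096 registers, a common default (check DuckDB's source):
print(f"{hll_std_error(2 ** 12):.4%}")  # about 1.6%
```

One practical way to measure the error empirically is to compare `approx_count_distinct` against an exact `count(DISTINCT ...)` on samples of the cardinalities of interest.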
77,068,779
9,191,460
How do I output the version of my FastAPI app?
<p>I have an application in FastAPI.</p> <pre><code>app = FastAPI( version=&quot;v0.1.0.0&quot; ) </code></pre> <p>I want to output the version of my application in a separate endpoint so another application knows what version this application is.</p> <p>I want to build something like this:</p> <pre><code>@app.get(&quot;/v0/version&quot;): async def api_version(): return {&quot;version&quot;: app_version} </code></pre> <p>where the above would return <code>{&quot;version&quot;: &quot;v0.1.0.0&quot;}</code>.</p> <p>However, I am not sure what to specify in the <code>app_version</code> field.</p>
<python><version><fastapi>
2023-09-08 17:13:25
2
3,035
Rui Nian
77,068,679
1,252,334
Twilio Media Streams - Twilio is not connecting to my websocket server
<p>I'm currently developing an application using Twilio Media Streams to process audio data in real-time. However, I'm encountering an issue where the call disconnects immediately after making a POST request to the /voice endpoint.</p> <p>Here are the details of my setup:</p> <ol> <li>I am using Flask to host my application and Flask-Sockets to handle WebSocket connections.</li> <li>My /voice endpoint returns a TwiML response with the verb to start a media stream. The WebSocket URL in the TwiML response is wss://my-ngrok-subdomain.ngrok.io/stream.</li> <li>My WebSocket server is running and accessible from the internet. I have confirmed this by testing the WebSocket connection independently of Twilio using a WebSocket client.</li> <li>I have enabled detailed logging in my application and WebSocket server, but I have not found any errors or issues that could explain why the call is disconnecting / why Twilio is not connecting to my websocket.</li> </ol> <p>Despite these measures, the call still disconnects immediately after making the POST request to the /voice endpoint. I have checked the Twilio logs, and they end on making the request to /voice. 
There are no logs indicating that Twilio is attempting to connect to my WebSocket.</p> <p>Here is the relevant part of my WebSocket server code:</p> <pre><code>def handle_audio(ws): logging.info('Handling audio') # Added logging while not ws.closed: message = ws.receive() if message is None: logging.info(&quot;No message received...&quot;) continue # Messages are a JSON encoded string data = json.loads(message) # Using the event type you can determine what type of message you are receiving if data['event'] == &quot;connected&quot;: logging.info(&quot;Connected Message received: {}&quot;.format(message)) elif data['event'] == &quot;start&quot;: logging.info(&quot;Start Message received: {}&quot;.format(message)) elif data['event'] == &quot;media&quot;: logging.info(&quot;Media message: {}&quot;.format(message)) payload = data['media']['payload'] logging.info(&quot;Payload is: {}&quot;.format(payload)) audio_data = base64.b64decode(payload) logging.info(&quot;That's {} bytes&quot;.format(len(audio_data))) # Split the audio data on silence audio_chunks = whisper_handler.split_on_silence(audio_data) logging.info('Split audio data into %d chunks', len(audio_chunks)) # Added logging # Transcribe each audio chunk transcriptions = [whisper_handler.transcribe_audio(chunk) for chunk in audio_chunks] logging.info('Transcribed audio chunks: %s', transcriptions) # Added logging elif data['event'] == &quot;closed&quot;: logging.info(&quot;Closed Message received: {}&quot;.format(message)) break </code></pre> <p>I would appreciate any assistance you could provide in resolving this issue. Please let me know if you need any additional information.</p>
<python><websocket><twilio>
2023-09-08 16:55:36
1
3,104
Defozo
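One hedged hypothesis for the question above: `<Start><Stream>` is a *non-blocking* TwiML verb, so if the `<Response>` contains nothing after it, Twilio finishes executing the TwiML immediately and ends the call — which looks exactly like "disconnects right after the POST to /voice", before any WebSocket connection is attempted. Adding a verb that keeps the call alive (`<Pause>`, `<Say>`, or `<Dial>`) is worth trying first; the other usual suspect is Flask-Sockets, which is unmaintained and breaks with newer Werkzeug versions. The TwiML is built by hand below so no helper library is needed:

```python
# Hand-built TwiML illustrating the keep-alive fix; the WebSocket URL is the
# one from the question and is only a placeholder here.
def voice_twiml(ws_url, keep_alive_seconds=60):
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        f'<Start><Stream url="{ws_url}"/></Start>'
        # <Start> returns immediately; without a following verb the call
        # has no remaining TwiML to execute and hangs up at once
        f'<Pause length="{keep_alive_seconds}"/>'
        "</Response>"
    )

print(voice_twiml("wss://my-ngrok-subdomain.ngrok.io/stream"))
```

If the stream should drive the whole call (bidirectional audio), `<Connect><Stream>` is the blocking alternative that keeps the call open for as long as the WebSocket stays connected.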
77,068,567
6,024,187
How to make annotation arrows that are NOT xkcd stylized
<p>I am using matplotlib version 3.7.1 and it appears that the default annotation arrows are stylized following <a href="https://www.xkcd.com" rel="nofollow noreferrer">xkcd</a>. Here is a simple plot demonstrating this:</p> <pre><code>fig, ax = plt.subplots(1, 1, figsize=(6, 6)) p1 = 0.3 p2 = 0.7 ax.axvline(p1, color='k', linestyle='solid') ax.axvline(p2, color='k', linestyle='solid') ax.annotate('blah', xy=(p1, 0.75), xytext=(p2, 0.75), arrowprops=dict(arrowstyle='&lt;-&gt;') ) ax.set_ylim(0, 1.1) ax.set_xlim(0, 1) plt.show() </code></pre> <p><a href="https://i.sstatic.net/lFrJe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lFrJe.png" alt="wiggly arrow lines" /></a></p> <p>Notice that the line is neither straight nor level and the arrow heads do not span the entire distance. Notice as well that I did not enable this feature, it appears to be the default. The <a href="https://matplotlib.org/stable/gallery/text_labels_and_annotations/fancyarrow_demo.html#sphx-glr-gallery-text-labels-and-annotations-fancyarrow-demo-py" rel="nofollow noreferrer">matplotlib annotation gallery</a> makes it seem like this is the expected outcome. 
I would like to turn all this off and get straight lines.</p> <p>Edit: In response to a comment that says the line is straight, here is the code to plot level, straight arrows to/from the same points (in red):</p> <pre><code>fig, ax = plt.subplots(1, 1, figsize=(6, 6)) p1 = 0.3 p2 = 0.7 ax.axvline(p1, color='k', linestyle='solid') ax.axvline(p2, color='k', linestyle='solid') ax.arrow(p1, 0.75, p2 - p1, 0, length_includes_head=True, head_width=0.1, head_length=0.01, facecolor='none', overhang=1, edgecolor='r', alpha=0.5) ax.arrow(p2, 0.75, p1 - p2, 0, length_includes_head=True, head_width=0.1, head_length=0.01, facecolor='none', overhang=1, edgecolor='r', alpha=0.5) ax.annotate('blah', xy=(p1, 0.75), xytext=(p2, 0.75), arrowprops=dict(arrowstyle='&lt;-&gt;') ) ax.set_ylim(0, 1.1) ax.set_xlim(0, 1) plt.show() </code></pre> <p><a href="https://i.sstatic.net/wNQMo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wNQMo.png" alt="xkcd line with level line" /></a></p> <p>The annotation line is clearly neither straight (it wiggles) nor level. At any rate, that maybe this is functioning as intended is beside the point, the question to be answered is how do I get straight, level lines using annotate.</p>
<python><matplotlib>
2023-09-08 16:35:14
1
897
Finncent Price
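For the question above: `annotate` does not draw wiggly arrows by default — the symptom suggests xkcd mode was switched on earlier in the session (e.g. a stray `plt.xkcd()` in a notebook), which sets the global `path.sketch` rcParam that randomizes every line. A sketch of checking and resetting it, assuming that diagnosis:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

plt.xkcd()                            # what presumably happened earlier in the session
print(plt.rcParams["path.sketch"])    # e.g. (1, 100, 2): every path gets wiggled

plt.rcdefaults()                      # back to defaults: straight, level lines
print(plt.rcParams["path.sketch"])    # None
```

To use xkcd styling deliberately and locally, prefer `with plt.xkcd(): ...` so the rcParams are restored when the block exits.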
77,068,509
11,686,518
Create Pandas Dataframe in Mojo
<p>I am trying to declare a Pandas DataFrame in mojo using a list of data. I have followed the examples seen for importing and using Numpy but declaring a DataFrame is only giving errors. <strong>How do I fix this so it creates a DataFrame?</strong> Here is what I have</p> <pre><code>from python import Python let pd = Python.import_module(&quot;pandas&quot;) var data = [[1, 2, 3],[2, 3, 4],[4, 5, 6]] var df = pd.DataFrame(data, columns=['cola', 'colb', 'colc']) </code></pre> <p>And here is the error I am receiving.</p> <pre><code>error: Expression [13]:19:41: keyword arguments are not supported yet var df = pd.DataFrame(data, columns=['cola', 'colb', 'colc']) ^ expression failed to parse (no further compiler diagnostics) </code></pre> <p>Observation:</p> <p>Declaring without the column names gives even more errors, as follows</p> <pre><code>error: Expression [14]:7:1: no viable expansions found fn __lldb_expr__14(inout __mojo_repl_arg: __mojo_repl_context__): ^ Expression [14]:9:28: call expansion failed - no concrete specializations __mojo_repl_expr_impl__(__mojo_repl_arg, __get_address_as_lvalue(__mojo_repl_arg.`___lldb_expr_failed`.load().address), __get_address_as_lvalue(__mojo_repl_arg.`pd`.load().address)) ^ Expression [14]:13:1: no viable expansions found def __mojo_repl_expr_impl__(inout __mojo_repl_arg: __mojo_repl_context__, inout `___lldb_expr_failed`: __mlir_type.`!kgen.declref&lt;@&quot;$builtin&quot;::@&quot;$bool&quot;::@Bool&gt;`, inout `pd`: __mlir_type.`!kgen.declref&lt;@&quot;$python&quot;::@&quot;$object&quot;::@PythonObject&gt;`) -&gt; None: ^ Expression [14]:22:26: call expansion failed - no concrete specializations __mojo_repl_expr_body__() ^ Expression [14]:15:3: no viable expansions found def __mojo_repl_expr_body__() -&gt; None: ^ Expression [14]:19:27: call expansion failed - no concrete specializations var df = pd.DataFrame(data) ^ expression failed to parse (no further compiler diagnostics) </code></pre>
<python><python-3.x><mojolang>
2023-09-08 16:26:05
1
1,633
Jesse Sealand
77,068,391
737,971
Optimizer keeps almost the initial values
<p>I am trying to get used to torch, and after optimizing a simple example (x^2+y^2), I tried something different.</p> <p>This should optimize a path in a <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">hyperbolic</a> metric. I give as input a set of points, and it should move them upwards to create a curved shape. The code is as follows.</p> <p>First I define the correct distance:</p> <pre><code>import numpy import torch import torch.optim as optim import pandas # poincare distance between two points def poincare_distance(x1, y1, x2, y2, R=1): r = torch.sqrt(x1**2 + y1**2) matg = torch.diag(torch.tensor([2.0, 2.0])) * (4 * R**4/(R**2-r**2)**2) delta = torch.tensor( [[x2 - x1], [y2 - y1]] ) f = torch.sqrt( delta.T @ matg @ delta ) return f </code></pre> <p>The loss function uses two Lagrange multipliers. I'd like the points to keep an even distance between them, and the first and last points should remain the same:</p> <pre><code># loss function: arc length + equidistance between all points + equality # between the fist and last point and the original (should remain the same) def poincare_loss(coords, k, kequality, eqfirst, eqlast): res = torch.zeros( (coords.shape[0]-1) ) for i in range(0, coords.shape[0]-1): res[i] = poincare_distance(coords[i ][0], coords[i ][1], coords[i+1][0], coords[i+1][1]) diffs = torch.zeros( (res.shape[0]-1) ) for i in range(0, res.shape[0]-1): diffs[i] = res[i] - res[i+1] eq = kequality * (torch.sum((coords[0] - eqfirst)**2) + torch.sum((coords[-1] - eqlast)**2)) d = torch.sum(res) + k * torch.sum(diffs**2) + eq return d </code></pre> <p>Next I read the input file, a set of initial points with the same y-coordinate:</p> <pre><code># handy usedevice = 'cpu' usedtype = torch.float32 # load the file in one float per line: x x x x ... 
y y y y y y arc_csv = pandas.read_csv(&quot;../mesh/poincare/arc.txt&quot;, header=None, delimiter=' ') arc = torch.tensor(arc_csv.values.astype(numpy.float64), dtype=usedtype, device=usedevice) # coordinates as tensor [ [x,y], [x,y], ... ] coords = arc.reshape([2,40]).T </code></pre> <p>After, I can start the loop. I create the initial coordinates to be optimized <code>inicoords</code>, and the Lagrange multipliers <code>k</code> and <code>keq</code>:</p> <pre><code># initial guess inicoords = coords.clone().detach().requires_grad_(True) # lagrange multipliers for the equidistance, and equality of first and last point k = torch.tensor([100.0], dtype=usedtype, device=usedevice, requires_grad=True) keq = torch.tensor([1000.0], dtype=usedtype, device=usedevice, requires_grad=True) # optimizer learning_rate = 1e-3 optimizer = optim.Adam([inicoords, k, keq], lr=learning_rate) num_steps = 1000 for step in range(num_steps): optimizer.zero_grad() loss = poincare_loss(inicoords, k, keq, coords[0], coords[-1]) # print(&quot;step&quot;, step, &quot;loss&quot;, loss) loss.backward() optimizer.step() optimal_inicoords = inicoords.detach().numpy() optimal_k = k.detach().numpy() optimal_keq = keq.detach().numpy() optimal_value = poincare_loss(inicoords, k, keq, coords[0], coords[-1]).item() print(&quot;Optimal coords:\n&quot;, optimal_inicoords) print(&quot;Optimal k:\n&quot;, optimal_k) print(&quot;Optimal keq:\n&quot;, optimal_keq) print(&quot;Optimal Value:\n&quot;, optimal_value) print(&quot;FIRST:\n&quot;, optimal_inicoords[0]) print(&quot;LAST:\n&quot;, optimal_inicoords[-1]) </code></pre> <p>The problem is that, even playing with <code>k</code> and <code>keq</code>, and with the learning rate, the points almost remain the initial ones. 
They vary a little, but they are almost the same.</p> <p>The y coordinate is almost identical, and the multipliers <code>k</code> and <code>keq</code> are identical.</p> <p>Being a newcomer to torch, I am really starting to think I might have misunderstood how a torch optimizer works.</p> <p>Any hints are more than welcome...</p>
<python><pytorch>
2023-09-08 16:04:11
0
2,499
senseiwa
77,068,304
10,216,028
Getting returned values in multiprocessing does not work
<p>I want to control the execution of a function by checking its runtime and memory consumption. I have the following code:</p> <pre><code>import os import time import psutil from multiprocessing import Process, SimpleQueue def my_fun(a, b, c, q): l = [] i = 2 for _ in range(a * b * c): l.append(i ** 50) i *= 1024 q.put((a, b, c)) def check(pid, available_mem, start_time, allowed_time, q): while True: try: if not psutil.pid_exists(pid): print('Function executed successfully') return True if psutil.virtual_memory()[1] &lt; available_mem * (10 ** 9): os.kill(pid, 9) q.put(False) print('Memory usage violation') break if time.time() - start_time &gt; allowed_time: os.kill(pid, 9) q.put(False) print('Time out') break time.sleep(1) except Exception as e: q.put(False) print(e) break if __name__ == &quot;__main__&quot;: minimum_available_memory = 5 # in GigaByte time_allowed = 5 # in seconds my_fun_q = SimpleQueue() check_q = SimpleQueue() arg1, arg2, arg3 = 1, 1, 1 my_fun_process = Process(target=my_fun, args=(arg1, arg2, arg3, my_fun_q)) start_time = time.time() my_fun_process.start() pid = my_fun_process.pid check_process = Process(target=check, args=(pid, minimum_available_memory, start_time, time_allowed, check_q)) check_process.start() my_fun_process.join() check_process.join() my_fun_result = my_fun_q.get() check_result = check_q.get() print(check_result) if check_result: print(my_fun_result) </code></pre> <p>In my code, <code>my_fun()</code> is the function that may have high runtime or memory consumption based on the given arguments.</p> <p>If the <code>my_fun</code> function is executed successfully, <code>check</code> returns <code>True</code> and <code>my_fun</code> returns a tuple of three values. But if either the runtime is higher than 5 seconds or the available memory is less than 2 GB, the corresponding process of <code>my_fun</code> is killed by the process of <code>check</code> and only the process of <code>check</code> will return <code>False</code>. 
This is what I expect my code to do. However,</p> <ul> <li>If <code>arg1, arg2, arg3 = 1, 1, 1</code> are passed to <code>my_fun</code>, the program prints &quot;Function executed successfully&quot; but gets stuck in returning the values without termination.</li> <li>If <code>arg1, arg2, arg3 = 100000, 100000, 100000</code> are passed to <code>my_fun</code>, the program prints &quot;Time out&quot; but again it gets stuck without returning anything or getting terminated.</li> </ul> <p>This shows that the <code>check</code> function is doing its job correctly but there is something wrong with returning the values. How do I fix my code?</p>
<python><python-3.x><multithreading><multiprocessing>
2023-09-08 15:51:16
1
455
Coder
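The two hangs in the question above both come from queue discipline, stated here as a diagnosis: in the success case, `check` returns `True` without ever putting anything on `check_q`, so `check_q.get()` blocks forever; in the kill case, `my_fun` was terminated before reaching `q.put(...)`, so `my_fun_q.get()` blocks forever. The rule of thumb is that every exit path of a producer must `put`, and the consumer must never `get()` from a queue whose producer was killed. A compact alternative that avoids the watcher process entirely (the memory check is omitted in this sketch) uses `join(timeout)`:

```python
import multiprocessing as mp
import time

def worker(q):
    time.sleep(0.2)            # stand-in for the heavy computation
    q.put(("done",))

def run_with_timeout(allowed_time):
    q = mp.SimpleQueue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    p.join(timeout=allowed_time)   # wait at most allowed_time seconds
    if p.is_alive():               # still running -> timed out
        p.terminate()
        p.join()
        return None                # do NOT q.get() here: nothing was ever put
    return q.get()

if __name__ == "__main__":
    print(run_with_timeout(5.0))    # worker finishes: ('done',)
    print(run_with_timeout(0.01))   # worker is killed: None
```

The memory condition can be folded back in by polling `psutil.virtual_memory()` in a short loop in the parent instead of a single long `join`, terminating the worker the same way when the threshold is crossed.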
77,068,253
12,300,981
How to change the size of a subplot using Axes object (without gs)
<p>Been trying to figure out how to change the size of a subplot in Matplotlib when using the Axes object. Say you setup some plot</p> <pre><code>fig=plt.figure() ax=np.empty((2,2,),dtype=object) ax[0][0]=fig.add_subplot(2,2,1) ax[0][1]=fig.add_subplot(2,2,2) ax[1][0]=fig.add_subplot(2,2,3) ax[1][1]=fig.add_subplot(2,2,4) </code></pre> <p>Now say you'd like to change the size of one of these subplots. I've seen instances where you use gs to do so, but curious if there was another method without gs. It looks like you can use get_position with set_position?</p> <pre><code>old_position=ax[0][0].get_position() old_position[-1]=old_position[-1]/2 ax[0][0].set_position(old_position) </code></pre> <p>The idea here was just to squish the top left plot by half its original size. However the position object returned is a BBox object that is not subscriptable, and I cannot see a way of &quot;subscripting it&quot; so the old position can be changed and a new one set.</p> <p>Any suggestions on how to change the size of one of the subplots using the get/set position?</p>
<python><matplotlib>
2023-09-08 15:44:05
1
623
samman
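A sketch for the question above: `get_position()` returns a `Bbox`, which is not subscriptable but exposes `x0`, `y0`, `width`, and `height`; `set_position()` then accepts a plain `[left, bottom, width, height]` list in figure coordinates. Squashing the top-left axes to half its height while keeping the top edge fixed looks like:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(2, 2, 1)

pos = ax.get_position()  # Bbox in figure coordinates
# halve the height; raising the bottom by height/2 keeps the top edge in place
ax.set_position([pos.x0, pos.y0 + pos.height / 2, pos.width, pos.height / 2])

new = ax.get_position()
print(round(new.height / pos.height, 2))  # 0.5
```

Note that `set_position` pins the axes to a fixed location, so layout machinery such as `tight_layout` will no longer move it.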
77,067,914
11,246,868
Django admin filter for inline model rows
<p>I have a parent model (Author) &amp; inline models (Book)</p> <pre class="lang-py prettyprint-override"><code> class Author(models.Model): age = models.IntegerField() class Book(models.Model): name = models.CharField(max_length=250) price = models.CharField(max_length=250) author = models.ForeignKey(Author, on_delete=models.CASCADE) </code></pre> <p>Book model has 3 rows</p> <p><em>to convey my issue I'm giving UI results in readable dict format</em></p> <pre><code>{id:1, name:'book1', price: 50, author_id: 1} {id:2, name:'book2', price: 75, author_id: 1} {id:3, name:'book1', price: 65, author_id: 2} </code></pre> <p>in admin.py, i have <strong>AuthorAdmin</strong> that has a list filter,</p> <pre class="lang-py prettyprint-override"><code>list_filter = ['age', 'book__name', 'book__price'] </code></pre> <p>in django admin list view page, if i filter <strong>/admin/?book__name=book1&amp;book__price=75</strong></p> <p>it gives Author(id=1), Author(id=2) as result</p> <p>but it should only return the id:1 row alone.</p> <p>kindly help how to use list_filter in m2m relations (inlines) in django admin.</p> <p>i have used __ to filter the inlines, but the results are not accurate.</p> <p>my understanding is that django is returning the parent if atleast 1 of the inline item matches the query. i want the filters to be chained.</p>
<python><django><django-admin><m2m>
2023-09-08 14:52:50
1
406
RG_RG
77,067,865
1,132,544
How to split a PySpark dataframe on column values of a separate dataframe?
<p>I have a PySpark dataframe which I want to split into two dataframes based on whether the value of a column exists in another dataframe.</p> <p>For example my input dataframe looks like:</p> <pre><code>| Product | Category | | ----------- | ----------------| | Product A | Food | | Product B | Food | | Product C | leisure goods. | | Product D | drinks | </code></pre> <p>The second dataframe is:</p> <pre><code>| Product Categories | | -------------------| | Food | | leisure goods | </code></pre> <p>As a result I want to have two dataframes, split by whether the category appears in the second dataframe:</p> <pre><code>df1.show() | Product | Category | | ----------- | ----------------| | Product A | Food | | Product B | Food | | Product C | leisure goods. | df2.show() | Product | Category | | ----------- | ----------------| | Product D | drinks | </code></pre> <p>Of course I could do two filter operations on the same dataframe, but I would expect a longer runtime.</p>
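In PySpark this split is usually expressed as a pair of `"left_semi"` / `"left_anti"` joins against the lookup dataframe (or, for a small lookup, collecting it and using `isin`). The semantics are sketched below in plain Python so it runs without a Spark session; the PySpark equivalents are shown as comments. Note also that the trailing period in `leisure goods.` in the sample data would prevent a match against `leisure goods`.

```python
# PySpark sketch of the same idea:
#   matched = products.join(cats, products.Category == cats["Product Categories"], "left_semi")
#   rest    = products.join(cats, products.Category == cats["Product Categories"], "left_anti")
products = [("Product A", "Food"), ("Product B", "Food"),
            ("Product C", "leisure goods"), ("Product D", "drinks")]
categories = {"Food", "leisure goods"}

semi = [p for p in products if p[1] in categories]      # rows with a matching category
anti = [p for p in products if p[1] not in categories]  # rows without one
```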
<python><dataframe><pyspark><azure-synapse>
2023-09-08 14:45:59
1
2,707
Gerrit
77,067,838
3,520,621
Keras Transformers - Dimensions must be equal
<p>I wanted to do NER with keras model using transformers. The example was working correctly but I wanted to add some context to each words in order to help the model being more accurate. What I mean by context is &quot;coordinate X&quot;, &quot;coordinate Y&quot;, &quot;width of the word&quot;, &quot;height of the word&quot;, &quot;page index&quot;, ... For example some informations are usually on the top right corner of a document so having the coordinate of the word might help (I'm new to ML so feel free to tell me I'm wrong if it's the case).</p> <p>In order to have this &quot;context&quot; I've transformed the <code>x_train</code> and <code>x_val</code> in this format:</p> <pre><code>[ [ [pageIndex, wordVocabId, x, y, width, height, ocrScore], [pageIndex, wordVocabId, x, y, width, height, ocrScore], ... ], [ [pageIndex, wordVocabId, x, y, width, height, ocrScore], [pageIndex, wordVocabId, x, y, width, height, ocrScore], ... ], ... ] </code></pre> <p>Where each array of 2nd level represent a document and each array of 3nd level represent a word with its context. 
The 3nd level array is a numpy array of numbers.</p> <p>Even if I tried to edit the model to make it working I don't think I went in the right direction so I'll post here the model from the example of keras that I try to use and that I would like to adapt to my usecase:</p> <pre class="lang-py prettyprint-override"><code> class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super().__init__() self.att = keras.layers.MultiHeadAttention( num_heads=num_heads, key_dim=embed_dim ) self.ffn = keras.Sequential( [ keras.layers.Dense(ff_dim, activation=&quot;relu&quot;), keras.layers.Dense(embed_dim), ] ) self.layernorm1 = keras.layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = keras.layers.LayerNormalization(epsilon=1e-6) self.dropout1 = keras.layers.Dropout(rate) self.dropout2 = keras.layers.Dropout(rate) def call(self, inputs, training=False): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) class TokenAndPositionEmbedding(layers.Layer): def __init__(self, maxlen, vocab_size, embed_dim): super().__init__() self.token_emb = keras.layers.Embedding( input_dim=vocab_size, output_dim=embed_dim ) self.pos_emb = keras.layers.Embedding(input_dim=maxlen, output_dim=embed_dim) def call(self, inputs): maxlen = tf.shape(inputs)[-1] positions = tf.range(start=0, limit=maxlen, delta=1) position_embeddings = self.pos_emb(positions) token_embeddings = self.token_emb(inputs) return token_embeddings + position_embeddings class NERModel(keras.Model): def __init__( self, num_tags, vocab_size, maxlen=128, embed_dim=32, num_heads=2, ff_dim=32 ): super().__init__() self.embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim) self.transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim) 
self.dropout1 = layers.Dropout(0.1) self.ff = layers.Dense(ff_dim, activation=&quot;relu&quot;) self.dropout2 = layers.Dropout(0.1) self.ff_final = layers.Dense(num_tags, activation=&quot;softmax&quot;) def call(self, inputs, training=False): x = self.embedding_layer(inputs) x = self.transformer_block(x) x = self.dropout1(x, training=training) x = self.ff(x) x = self.dropout2(x, training=training) x = self.ff_final(x) return x </code></pre> <p>Source: <a href="https://keras.io/examples/nlp/ner_transformers/" rel="nofollow noreferrer">https://keras.io/examples/nlp/ner_transformers/</a></p> <p>I try to compile and fit this way:</p> <pre class="lang-py prettyprint-override"><code> print(len(tag_mapping), vocab_size, len(x_train), len(y_train)) model = NERModel(len(tag_mapping), vocab_size, embed_dim=32, num_heads=4, ff_dim=64) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train), validation_data=(x_val, y_val), epochs=10) model.save(&quot;model.keras&quot;) </code></pre> <p>The result of the <code>print</code> is (I have only 3 tags for now because I first try to make the model working):</p> <pre><code>3 20000 1000 1000 </code></pre> <p>The format of my <code>y_train</code> is the follow:</p> <pre><code>[ [tagId_document1_word1, tagId_document1_word2, ...], [tagId_document2_Word1, tagId_document2_word1, ...] ] </code></pre> <p>When I run <code>model.fit</code> I have this error:</p> <pre><code> ValueError: Dimensions must be equal, but are 516 and 7 for '{{node Equal}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_1, Cast_2)' with input shapes: [?,516], [?,516,7]. </code></pre> <p>I hope with all these informations someone can pin me in the right direction because I'm a bit lost here.</p> <p>Thank you.</p>
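The traceback is consistent with the 3-D input of shape `(batch, 516, 7)` being treated as a batch of token ids: `TokenAndPositionEmbedding` reads `maxlen = tf.shape(inputs)[-1]`, so the feature axis (7) lands where the sequence length belongs, and the `(?, 516)` labels no longer line up with the model output. A common pattern is to split the input into the word-id column (for the embedding) and the numeric context columns (projected by a dense layer), then combine the two before the transformer block. A NumPy sketch of the shapes involved — the Keras layers are stand-ins shown as comments, and all names are illustrative:

```python
import numpy as np

batch, seq, embed_dim, vocab = 2, 516, 32, 20000
x = np.zeros((batch, seq, 7))
x[..., 1] = np.random.randint(0, vocab, size=(batch, seq))  # wordVocabId column

word_ids = x[..., 1].astype(int)              # (batch, seq) -> Embedding input
context = np.delete(x, 1, axis=-1)            # (batch, seq, 6) numeric features

emb_table = np.random.rand(vocab, embed_dim)  # ~ keras.layers.Embedding(vocab, embed_dim)
token_emb = emb_table[word_ids]               # (batch, seq, embed_dim)

W = np.random.rand(6, embed_dim)              # ~ keras.layers.Dense(embed_dim) on context
context_proj = context @ W                    # (batch, seq, embed_dim)

combined = token_emb + context_proj           # feed this to the transformer block
```

With this shape bookkeeping, predictions come out as `(batch, seq, num_tags)`, matching `(batch, seq)` sparse labels under `sparse_categorical_crossentropy`; if the accuracy metric still complains, switching `metrics=["accuracy"]` to `"sparse_categorical_accuracy"` is worth trying.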
<python><keras><transformer-model>
2023-09-08 14:42:33
2
5,236
MHogge
77,067,833
5,924,264
Separate lru for each argument value?
<p>I need to write a sql database column getter, that takes in a column name and time, and returns the entire column of values for that column corresponding to the input time. This may be a frequent function call with the same arguments, so I would like to use an lru cache. However, I'm not sure if the frequency of the column names is uniformly distributed, so ideally, I would have a separate lru cache for each column name.</p> <p>I previously had it like below, but I would like to separate the lru for each <code>col_name</code>.</p> <pre><code>@lru_cache(...) def get_col(self, col_name, time) # do stuff to get the column and return it </code></pre> <p>How can I achieve this? Also, unfortunately, I have to support py2.</p>
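Since `functools.lru_cache` keys on the whole argument tuple (and is not in the Python 2 stdlib anyway), one option is a small decorator that keeps a separate `OrderedDict`-based LRU per column name. `OrderedDict` behaves the same on 2.7 and 3.x if you pop-and-reinsert instead of calling `move_to_end` (which 2.7 lacks). A sketch:

```python
from collections import OrderedDict

def lru_per_key(maxsize=128):
    """One independent LRU cache per value of the first argument."""
    caches = {}

    def decorator(fn):
        def wrapper(col_name, *args):
            cache = caches.setdefault(col_name, OrderedDict())
            if args in cache:
                cache[args] = cache.pop(args)  # re-insert: mark most recently used
                return cache[args]
            value = fn(col_name, *args)
            cache[args] = value
            if len(cache) > maxsize:
                cache.popitem(last=False)      # evict least recently used
            return value

        wrapper.caches = caches  # exposed for inspection / manual clearing
        return wrapper

    return decorator
```

For the method in the question, `self` arrives first, so either key on the second positional argument or wrap a module-level helper instead.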
<python><python-2.x><lru>
2023-09-08 14:41:52
2
2,502
roulette01
77,067,781
7,695,845
AnnotationBbox not showing together with plot
<p>I have this code snippet which I use to render a text box next to the plot:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.offsetbox import AnnotationBbox, TextArea fig, ax = plt.subplots(layout=&quot;constrained&quot;) f = np.array([2.15, 2.22, 2.3, 2.43, 2.54, 2.7, 2.79, 2.91, 3.04, 3.2, 3.29]) f_err = 0.01 d = np.array( [ 0.3393, 0.3458, 0.3622666666666667, 0.38306666666666667, 0.4004, 0.429, 0.43939999999999996, 0.45239999999999997, 0.481, 0.5044, 0.5174, ] ) d_err = 0.0052 ax.errorbar(f, d, xerr=f_err, yerr=d_err, fmt=&quot;.&quot;, label=&quot;Measurements&quot;) text_area = TextArea(&quot;Hello there!!!!!!&quot;) text_box = AnnotationBbox( text_area, (1, 1), box_alignment=(0, 1), boxcoords=&quot;axes fraction&quot; ) ax.add_artist(text_box) ax.legend() plt.show() </code></pre> <p>In the past, I used the same code and it worked just fine as you can see:</p> <p><a href="https://i.sstatic.net/jgYGR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jgYGR.png" alt="" /></a></p> <p>However, running the snippet above makes the text box invisible:</p> <p><a href="https://i.sstatic.net/buBOQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/buBOQ.png" alt="" /></a></p> <p>Notice that it did create the space needed for the text (Changing the length of the text changes this space). However, the text itself doesn't appear. After debugging this, I found that commenting out the <code>errorbar</code> makes it work:</p> <p><a href="https://i.sstatic.net/vvaag.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vvaag.png" alt="" /></a></p> <p>I can't understand why this happens. As I said before, I used the same code in the past and it worked fine. Can somebody explain why this happens and how I can fix this?</p>
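One thing worth trying: `AnnotationBbox` shares `Annotation`'s clipping behaviour, and with the box anchored at `(1, 1)` in axes fraction it sits exactly on the clip boundary, so it may be clipped away once other artists (like the errorbar) establish the data limits. Passing `annotation_clip=False` and disabling clipping on the artist tells matplotlib to draw it even though it lies outside the data area. A sketch (using `constrained_layout=True`, which is equivalent to the `layout="constrained"` shorthand):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnnotationBbox, TextArea

fig, ax = plt.subplots(constrained_layout=True)
ax.errorbar([1, 2, 3], [1, 2, 3], xerr=0.1, yerr=0.1, fmt=".", label="Measurements")

text_box = AnnotationBbox(
    TextArea("Hello there!!!!!!"), (1, 1),
    box_alignment=(0, 1), boxcoords="axes fraction",
    annotation_clip=False,   # draw even at/outside the axes boundary
)
text_box.set_clip_on(False)
ax.add_artist(text_box)
fig.canvas.draw()
```

If that is not enough, adding the box to the figure instead (`fig.add_artist(text_box)`) bypasses the axes' clip path entirely.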
<python><matplotlib>
2023-09-08 14:34:15
0
1,420
Shai Avr
77,067,687
258,516
fiftyone states "Not Found" instead of showing GUI
<p>I hope someone can help.</p> <p>Trying to simply show the fiftyone GUI in Windows 11. It always just says &quot;Not Found&quot;.</p> <p>Tried different Anaconda environments (Python 3.10 and 3.9), the Jupyter notebook version, and Python 3.10 without Anaconda.</p> <p>Here is an example</p> <p><a href="https://i.sstatic.net/IqpWq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IqpWq.png" alt="running session = fo.launch_app(dataset) just shows not found" /></a></p> <p>I'm new to fiftyone, but following the tutorials, this should be simple. Tried: <a href="https://docs.voxel51.com/getting_started/install.html" rel="nofollow noreferrer">https://docs.voxel51.com/getting_started/install.html</a> Example and tried: <a href="https://docs.voxel51.com/tutorials/evaluate_detections.html" rel="nofollow noreferrer">https://docs.voxel51.com/tutorials/evaluate_detections.html</a></p> <p>All give a &quot;Not Found&quot; message, and no web interface. (When using a browser, it's a 404, but with &quot;Not Found&quot; text returned — very odd.)</p> <p>My environment:</p> <pre><code> System Information: - OS: Windows - OS Version: 10.0.22621 - Architecture: ('64bit', 'WindowsPE') - Python Version: 3.10.12 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 19:01:18) [MSC v.1916 64 bit (AMD64)] </code></pre>
<python><fiftyone>
2023-09-08 14:21:02
1
944
adudley
77,067,659
244,130
Numpy: Smart way to clip array values and carry leftover to next
<p>Because of an underlying data structure there is a maximum for values in my array.</p> <p>Is there a smart way in numpy to clip the array and carry the leftover to the next elements?</p> <p>(I'm simplifying this to a 1D array here)</p> <pre class="lang-py prettyprint-override"><code>In [1]: import numpy as np In [2]: array = np.arange(10, 1, -1) In [3]: rest = 0 In [4]: array Out[4]: array([10, 9, 8, 7, 6, 5, 4, 3, 2]) In [5]: for i, x in enumerate(array): ...: tmp = array[i] + rest ...: if tmp &gt; 8: ...: array[i] = 8 ...: rest = tmp - 8 ...: elif tmp &lt;= 8: ...: array[i] = tmp ...: rest = 0 ...: In [6]: array Out[6]: array([8, 8, 8, 8, 8, 5, 4, 3, 2]) </code></pre> <p>This works, but is there a smarter way to do this in numpy that also supports more dimensions? E.g. an array of vectors. When combining them, the path will not be the same, but they will sum up to the same totals.</p>
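The carry can be vectorized for the 1-D, non-negative case: after k elements, the clipped output can sum to at most `cap * k`, so capping the cumulative sum at that bound and then differencing redistributes the overflow forward. A sketch (assumes non-negative values; any leftover past the last element is dropped, just like in the loop):

```python
import numpy as np

def clip_with_carry(arr, cap):
    # cap the running total at cap*k after k elements, then difference:
    # the overflow of each element is pushed into the later slots
    capped = np.minimum(np.cumsum(arr), cap * np.arange(1, arr.size + 1))
    return np.diff(capped, prepend=0)

print(clip_with_carry(np.arange(10, 1, -1), 8))
# [8 8 8 8 8 5 4 3 2]
```

For higher dimensions, applying it along the carry axis (e.g. `np.apply_along_axis`) is one straightforward extension.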
<python><numpy>
2023-09-08 14:16:19
1
487
kristus
77,067,527
3,854,191
Odoo-13 template Error : CacheMiss: ('my_custom_model(6,).name', None)
<p>In my server Log-file on Odoo.sh (odoo-v13), i often see the following error without understanding exactly what is the reason of that: <code>odoo.exceptions.CacheMiss: ('x_eventcoursetype(6,).x_name', None)</code></p> <p>I can see that this error is related to my custom model x_eventcoursetype, that is a sub-category of official model &quot;event.event&quot; (bind to it via M2O).</p> <p><a href="https://i.sstatic.net/DJcqD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DJcqD.jpg" alt="enter image description here" /></a></p> <p>This model is then used on my website inside the top-bar menu (xml template inheriting from the standard view: <code>website_event.index_topbar</code>) to filter the event by this sub-category:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot;?&gt; &lt;data inherit_id=&quot;website_event.index_topbar&quot; active=&quot;True&quot; customize_show=&quot;True&quot; name=&quot;Filter by xEvTyp&quot;&gt; &lt;xpath expr=&quot;//ul[hasclass('o_wevent_index_topbar_filters')]&quot; position=&quot;inside&quot;&gt; &lt;li class=&quot;nav-item dropdown mr-2 my-1 my-lg-0&quot;&gt; &lt;a href=&quot;#&quot; role=&quot;button&quot; class=&quot;btn dropdown-toggle&quot; data-toggle=&quot;dropdown&quot;&gt; &lt;i class=&quot;fa fa-folder-open&quot;/&gt; &lt;t t-if=&quot;current_xevtyp&quot; t-esc=&quot;current_xevtyp.x_name&quot;/&gt; &lt;t t-else=&quot;&quot;&gt;All Event Types&lt;/t&gt; &lt;/a&gt; &lt;div class=&quot;dropdown-menu&quot;&gt; &lt;t t-foreach=&quot;xevtyps&quot; t-as=&quot;xevtyp&quot;&gt; &lt;t t-if=&quot;xevtyp['evcoursetype_id']&quot;&gt; &lt;a t-att-href=&quot;keep('/event', xevtyp=xevtyp['evcoursetype_id'][0])&quot; t-attf-class=&quot;dropdown-item d-flex align-items-center justify-content-between #{searches.get('xevtyp') == str(xevtyp['evcoursetype_id'] and xevtyp['evcoursetype_id'][0]) and 'active'}&quot;&gt; &lt;t t-if=&quot;xevtyp['evcoursetype_id'][1] == 'all' &quot;&gt; All types 
&lt;/t&gt; &lt;t t-else=&quot;&quot; t-esc=&quot;xevtyp['evcoursetype_id'][1]&quot;/&gt; &lt;span t-esc=&quot;xevtyp['evcoursetype_id_count']&quot; class=&quot;badge badge-pill badge-primary ml-3&quot;/&gt; &lt;/a&gt; &lt;/t&gt; &lt;/t&gt; &lt;/div&gt; &lt;/li&gt; &lt;/xpath&gt; &lt;/data&gt; </code></pre> <p>The names of model-records are perfectly displayed in the website top-bar (as filter) and the choice does filter the event accordingly to their x_eventcoursetype value, without displaying any Error on the Front Office.</p> <p><strong>But the Error is reported in the log file: TRACEBACK:</strong></p> <pre><code>2023-09-08 13:41:52,801 4 ERROR myodoo-oerp-master-1178741 odoo.addons.http_routing.models.ir_http: 500 Internal Server Error: Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/api.py&quot;, line 748, in get value = self._data[field][record._ids[0]] KeyError: 6 During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/fields.py&quot;, line 1059, in __get__ value = env.cache.get(record, self) File &quot;/home/odoo/src/odoo/odoo/api.py&quot;, line 754, in get raise CacheMiss(record, field) odoo.exceptions.CacheMiss: ('x_eventcoursetype(6,).x_name', None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 334, in _compiled_fn return compiled(self, append, new, options, log) File &quot;&lt;template&gt;&quot;, line 1, in template_website_event_index_topbar_12493 File &quot;/home/odoo/src/odoo/odoo/fields.py&quot;, line 1070, in __get__ raise MissingError(&quot;\n&quot;.join([ odoo.exceptions.MissingError: ('Enregistrement inexistant ou détruit.\n(Enregistrement: x_eventcoursetype(6,), Utilisateur: 4)', None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
&quot;/home/odoo/src/odoo/odoo/addons/base/models/ir_http.py&quot;, line 234, in _dispatch result = request.dispatch() File &quot;/home/odoo/src/odoo/odoo/http.py&quot;, line 805, in dispatch r = self._call_function(**self.params) File &quot;/home/odoo/src/odoo/odoo/http.py&quot;, line 350, in _call_function return checked_call(self.db, *args, **kwargs) File &quot;/home/odoo/src/odoo/odoo/service/model.py&quot;, line 94, in wrapper return f(dbname, *args, **kwargs) File &quot;/home/odoo/src/odoo/odoo/http.py&quot;, line 342, in checked_call result.flatten() File &quot;/home/odoo/src/odoo/odoo/http.py&quot;, line 1214, in flatten self.response.append(self.render()) File &quot;/home/odoo/src/odoo/odoo/http.py&quot;, line 1207, in render return env[&quot;ir.ui.view&quot;].render_template(self.template, self.qcontext) File &quot;/home/odoo/src/odoo/odoo/addons/base/models/ir_ui_view.py&quot;, line 1201, in render_template return self.browse(self.get_view_id(template)).render(values, engine) File &quot;/home/odoo/src/odoo/addons/website/models/ir_ui_view.py&quot;, line 347, in render return super(View, self).render(values, engine=engine, minimal_qcontext=minimal_qcontext) File &quot;/home/odoo/src/odoo/addons/web_editor/models/ir_ui_view.py&quot;, line 27, in render return super(IrUiView, self).render(values=values, engine=engine, minimal_qcontext=minimal_qcontext) File &quot;/home/odoo/src/odoo/odoo/addons/base/models/ir_ui_view.py&quot;, line 1209, in render return self.env[engine].render(self.id, qcontext) File &quot;/home/odoo/src/enterprise/web_studio/models/ir_qweb.py&quot;, line 43, in render return super(IrQWeb, self).render(template, values=values, **options) File &quot;/home/odoo/src/odoo/odoo/addons/base/models/ir_qweb.py&quot;, line 58, in render result = super(IrQWeb, self).render(id_or_xml_id, values=values, **context) File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 261, in render self.compile(template, options)(self, 
body.append, values or {}) File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 336, in _compiled_fn raise e File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 334, in _compiled_fn return compiled(self, append, new, options, log) File &quot;&lt;template&gt;&quot;, line 1, in template_website_event_index_12472 File &quot;&lt;template&gt;&quot;, line 2, in body_call_content_12470 File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 341, in _compiled_fn raise QWebException(&quot;Error to render compiling AST&quot;, e, path, node and etree.tostring(node[0], encoding='unicode'), name) odoo.addons.base.models.qweb.QWebException: ('Enregistrement inexistant ou détruit.\n(Enregistrement: x_eventcoursetype(6,), Utilisateur: 4)', None) Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/api.py&quot;, line 748, in get value = self._data[field][record._ids[0]] KeyError: 6 During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/fields.py&quot;, line 1059, in __get__ value = env.cache.get(record, self) File &quot;/home/odoo/src/odoo/odoo/api.py&quot;, line 754, in get raise CacheMiss(record, field) odoo.exceptions.CacheMiss: ('x_eventcoursetype(6,).x_name', None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/odoo/src/odoo/odoo/addons/base/models/qweb.py&quot;, line 334, in _compiled_fn return compiled(self, append, new, options, log) File &quot;&lt;template&gt;&quot;, line 1, in template_website_event_index_topbar_12493 File &quot;/home/odoo/src/odoo/odoo/fields.py&quot;, line 1070, in __get__ raise MissingError(&quot;\n&quot;.join([ odoo.exceptions.MissingError: ('Enregistrement inexistant ou détruit.\n(Enregistrement: x_eventcoursetype(6,), Utilisateur: 4)', None) Error to render compiling AST MissingError: ('Enregistrement inexistant ou 
détruit.\n(Enregistrement: x_eventcoursetype(6,), Utilisateur: 4)', None) Template: website_event.index_topbar Path: /t/t/nav/div/ul/li[3]/a/t[1] Node: &lt;t t-if=&quot;current_xevtyp&quot; t-esc=&quot;current_xevtyp.x_name&quot;/&gt; </code></pre> <p>Any clue is welcome !</p>
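The key line in the traceback is the MissingError "Enregistrement inexistant ou détruit" ("record does not exist or has been destroyed") for `x_eventcoursetype(6,)`: record 6 has been deleted, but a stale reference to it (most likely the `xevtyp` id kept in the URL or in the stored search parameters) still reaches the template, which then dereferences `current_xevtyp.x_name`. A guard in the inherited template (a sketch; the controller should also validate the id with `.exists()` before putting it into `current_xevtyp`) avoids the dereference:

```xml
<t t-if="current_xevtyp and current_xevtyp.exists()" t-esc="current_xevtyp.x_name"/>
<t t-else="">All Event Types</t>
```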
<python><xml><caching><odoo><odoo-13>
2023-09-08 13:54:47
1
1,677
S Bonnet
77,066,978
534,238
Renovate with Google Cloud, Github, Python project, and pyproject.toml
<h1>High Level</h1> <p>I am trying to allow <a href="https://github.com/renovatebot/renovate" rel="nofollow noreferrer">Renovate</a> to manage Python dependency updates, including updates dependent upon our own internal registry, which lives in <a href="https://cloud.google.com/artifact-registry" rel="nofollow noreferrer">Google's Artifact Registry</a>. We use GitHub to manage our repositories, and Github's provision of Renovate.</p> <h1>Details</h1> <p>To do this, I need to update the <code>renovate.json5</code> file to allow access to the repo. There is scant information in the documentation, and I have looked in the following places:</p> <ul> <li><a href="https://docs.renovatebot.com/getting-started/private-packages/" rel="nofollow noreferrer">Private package support</a></li> <li><a href="https://docs.renovatebot.com/python/" rel="nofollow noreferrer">Python package manager support</a></li> <li><a href="https://github.com/renovatebot/renovate/issues?q=is%3Aissue+pyproject.toml" rel="nofollow noreferrer">Renovate Git Issues filtered for <code>pyproject.toml</code></a></li> <li><a href="https://docs.renovatebot.com/presets-packages/#packagesgoogleapis" rel="nofollow noreferrer">Scant documentation on GCP</a></li> </ul> <p>and as far as I can tell, I am <em>supposed to add something like this</em> to get access to my private repo:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;hostRules&quot;: [ { &quot;matchHost&quot;: &quot;https://registry.company.com/pypi-simple/&quot;, &quot;username&quot;: &quot;engineering&quot;, &quot;password&quot;: &quot;abc123&quot; } ] } </code></pre> <p>It <em>seems that</em> Renovate can <em>only</em> use username + password for Python packages. However, I am using a service account, for which Google uses tokens (and other passwordless authentication means). 
For example, to access the repo, I add the following to <code>pip.conf</code> (or to <code>[[tool.pdm.source]]</code> in my <code>pyproject.toml</code> since I am using PDM to build):</p> <pre><code>[global] extra-index-url = https://_json_key_base64:&lt;long_string_for_private_info&gt;@us-west1-python.pkg.dev/my-project/my-repo/simple/ </code></pre> <p>In the case of Google / GCP, that <code>&lt;long_string_for_private_info&gt;</code> is actually a base 64 encoded JSON blob with a bunch of information:</p> <pre class="lang-bash prettyprint-override"><code>&gt; echo &lt;long_string_for_private_info&gt; | base64 --decode { &quot;type&quot;: &quot;service_account&quot;, &quot;project_id&quot;: &quot;my-project&quot;, &quot;private_key_id&quot;: &quot;&lt;filtered out for security&gt;&quot;, &quot;private_key&quot;: &quot;-----BEGIN PRIVATE KEY-----\n&lt;Filtered out for security&gt;\n-----END PRIVATE KEY-----\n&quot;, &quot;client_email&quot;: &quot;my-service-account@my-project.iam.gserviceaccount.com&quot;, &quot;client_id&quot;: &quot;&lt;filtered out for security&gt;&quot;, &quot;auth_uri&quot;: &quot;https://accounts.google.com/o/oauth2/auth&quot;, &quot;token_uri&quot;: &quot;https://oauth2.googleapis.com/token&quot;, &quot;auth_provider_x509_cert_url&quot;: &quot;https://www.googleapis.com/oauth2/v1/certs&quot;, &quot;client_x509_cert_url&quot;: &quot;https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-project.iam.gserviceaccount.com&quot;, &quot;universe_domain&quot;: &quot;googleapis.com&quot; } </code></pre> <p>The filtered out private key is also base64 encoded (so that there is a base64 encoded object that after decoding, it has another base64 encoded object in it).</p> <h1>Attempts so far</h1> <p>I have tried the following:</p> <h2>Using a token:</h2> <pre><code>{ &quot;matchHost&quot;: &quot;us-west1-python.pkg.dev&quot;, &quot;hostType&quot;: &quot;pypi&quot;, &quot;encrypted&quot;: { &quot;token&quot;: &lt;hidden for security&gt; 
} } </code></pre> <h2>Using a username and password:</h2> <pre><code>{ &quot;matchHost&quot;: &quot;us-west1-python.pkg.dev&quot;, &quot;hostType&quot;: &quot;pypi&quot;, &quot;username&quot;: &quot;_json_key_base64&quot; &quot;encrypted&quot;: { &quot;password&quot;: &lt;hidden for security&gt; } } </code></pre> <p>In each of the cases above (token or username/password), I have:</p> <ul> <li>Always encoded <a href="https://app.renovatebot.com/encrypt" rel="nofollow noreferrer">using Renovate's encryption page</a></li> <li>Tried the <em>base64 encoded</em> version of just the <code>&quot;private_key&quot;</code> mentioned above</li> <li>Tried the <em>decoded</em> version of just the <code>&quot;private_key&quot;</code> mentioned above</li> <li>Tried the <em>base64 encoded</em> <em>entire</em> <code>&quot;long_string_for_private_info&quot;</code> object mentioned in the <code>extra-index-url</code> in <code>pip.conf</code>. <ul> <li>Remember that I am able to download (and upload) Python packages to the registry, so I know that <code>pip.conf</code> file is correct.</li> </ul> </li> <li>Tried the <em>decoded</em> version of the entire <code>&quot;long_string_for_private_info&quot;</code>.</li> </ul> <p>At this point, I have run out of ideas, and the documentation does not really explain how to use tokens with Python packages at all, let alone with Google's cloud infrastructure.</p> <h1>Error Reported</h1> <p>No matter how the secret is seen, the error is always the same, a variant of:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;err&quot;: { &quot;validationError&quot;: &quot;Encrypted secret is scoped to a different repository: \&quot;US-WEST1-PYTHON.PKG.DEV/MY-PROJ/MY-REPO\&quot;.&quot;, &quot;message&quot;: &quot;config-validation&quot;, &quot;stack&quot;: &quot;Error: config-validation\n at tryDecrypt (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/config/decrypt.ts:124:31)\n at processTicksAndRejections 
(node:internal/process/task_queues:95:5)\n at decryptConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/config/decrypt.ts:186:30)\n at decryptConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/config/decrypt.ts:238:13)\n at mergeRenovateConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/merge.ts:252:27)\n at getRepoConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/config.ts:12:12)\n at initRepo (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/index.ts:44:12)\n at Object.renovateRepository (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/index.ts:55:14)\n at attributes.repository (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/global/index.ts:184:11)\n at start (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/global/index.ts:169:7)\n at /opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/renovate.ts:18:22&quot; . . . { &quot;error&quot;: { &quot;validationError&quot;: &quot;Failed to decrypt field token. 
Please re-encrypt and try again.&quot;, &quot;message&quot;: &quot;config-validation&quot;, &quot;stack&quot;: &quot;Error: config-validation\n at decryptConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/config/decrypt.ts:192:27)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at decryptConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/config/decrypt.ts:238:13)\n at mergeRenovateConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/merge.ts:252:27)\n at getRepoConfig (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/config.ts:12:12)\n at initRepo (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/init/index.ts:44:12)\n at Object.renovateRepository (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/repository/index.ts:55:14)\n at attributes.repository (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/global/index.ts:184:11)\n at start (/opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/workers/global/index.ts:169:7)\n at /opt/containerbase/tools/renovate/36.83.0/node_modules/renovate/lib/renovate.ts:18:22&quot; } } </code></pre> <p>I have even inspected those two functions on Renovate's github repo, but they are very &quot;introspective&quot; and javascript is not my best language, so I could not get too far.</p>
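The first validation error ("Encrypted secret is scoped to a different repository: \"US-WEST1-PYTHON.PKG.DEV/MY-PROJ/MY-REPO\"") suggests the registry host was typed into the organization/repository fields of the encryption page. Renovate scopes each encrypted blob to a GitHub organization (and optionally a repository), not to the registry host, so the blob must be re-encrypted with organization = your GitHub org and repository = the repo holding this config (or blank for org-wide use). A hedged `renovate.json5` sketch — all values hypothetical:

```json5
{
  hostRules: [
    {
      matchHost: "us-west1-python.pkg.dev",
      hostType: "pypi",
      username: "_json_key_base64",
      encrypted: {
        // re-encrypted on the Renovate encrypt page with
        // organization = <your GitHub org>, repository = <this repo>
        password: "wcFMA...base64-encoded-service-account-key...",
      },
    },
  ],
}
```

The password value would then be the encryption of the base64-encoded service-account JSON, mirroring the `_json_key_base64` scheme that already works in `pip.conf`.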
<python><github><google-cloud-platform><renovate>
2023-09-08 12:42:16
1
3,558
Mike Williamson
77,066,939
9,671,120
Python's datetime.min with time zone (Python 3.7)
<p>If I do:</p> <pre><code>&gt;&gt;&gt; datetime.max.astimezone(pytz.UTC) datetime.datetime(9999, 12, 31, 23, 59, 59, 999999, tzinfo=&lt;UTC&gt;) &gt;&gt;&gt; datetime.min.astimezone(pytz.UTC) ValueError: year 0 is out of range </code></pre> <p>However:</p> <pre><code>datetime.min.replace(tzinfo=pytz.UTC) datetime.datetime(1, 1, 1, 0, 0, tzinfo=&lt;UTC&gt;) </code></pre> <p>From <a href="https://stackoverflow.com/questions/10286224/javascript-timestamp-to-python-datetime-conversion">this</a> and <a href="https://stackoverflow.com/questions/31548132/python-datetime-fromtimestamp-yielding-valueerror-year-out-of-range">this</a> answer, it seems to be a conversion issue.</p> <p>Is this a bug or intended behaviour, and why?</p>
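This looks like intended behaviour rather than a bug: `astimezone()` on a naive datetime first interprets it in the system's local timezone and then converts, and for `datetime.min` that conversion can step before year 1 (`MINYEAR`), which overflows — the exact exception type varies by version/library, hence the `ValueError` shown. `replace()` merely attaches the tzinfo with no arithmetic, so it is always safe. A sketch using only the stdlib:

```python
from datetime import datetime, timezone, timedelta

# attach UTC with no arithmetic: always safe
aware_min = datetime.min.replace(tzinfo=timezone.utc)

# any conversion that must subtract time from year 1 overflows,
# which is what happens to datetime.min under a positive local offset
try:
    aware_min.astimezone(timezone(timedelta(hours=-1)))
except (OverflowError, ValueError) as exc:
    print("conversion failed:", exc)
```

So on a machine whose local offset is non-positive the original `datetime.min.astimezone(pytz.UTC)` may even succeed — the failure depends on the direction of the conversion, not on `astimezone` itself.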
<python><datetime><timezone>
2023-09-08 12:37:03
1
386
C. Claudio
77,066,885
8,964,393
Ignore invalid value in pandas dataframe column
<p>I have created the following pandas dataframe:</p> <pre><code>import pandas as pd import numpy as np ds = {'col1' : ['1489900119000', '้้คค,1'] } df = pd.DataFrame(data=ds) </code></pre> <p>which looks like this:</p> <pre><code> col1 0 1489900119000 1 ้้คค,1 </code></pre> <p>I am trying to build a new column (called <code>col2</code>) which contains the values in <code>col1</code> arranged as list. Hence the code:</p> <pre><code>df['col2'] = [list(map(int, str(x))) for x in df.col1] </code></pre> <p>Since there is that unexpected / invalid value at row 1 (<code>คค,1</code>), the code fails.</p> <p>Is there any way to by-pass that row and fill it in with a default value (e.g. [9,9])? For example, the resulting dataframe would look like this:</p> <p><a href="https://i.sstatic.net/BU2tZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BU2tZ.png" alt="enter image description here" /></a></p>
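One way is to validate each value before converting, with a fallback for anything that is not purely decimal digits. `str.isdecimal()` is a convenient guard here, since it is true exactly for strings every character of which `int()` accepts as a decimal digit (unlike `isdigit()`, which also accepts superscripts that `int()` rejects). A sketch:

```python
import pandas as pd

df = pd.DataFrame({"col1": ["1489900119000", "้้คค,1"]})

df["col2"] = [
    [int(ch) for ch in s] if s.isdecimal() else [9, 9]
    for s in df["col1"].astype(str)
]
print(df["col2"].tolist())
# [[1, 4, 8, 9, 9, 0, 0, 1, 1, 9, 0, 0, 0], [9, 9]]
```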
<python><pandas><list><integer>
2023-09-08 12:27:52
1
1,762
Giampaolo Levorato
77,066,807
16,611,809
PyQt5 is waiting for input, when installing via pip
<p>I am trying to install PyQt5 into a conda env (I need PyQt installed via pip; the same error happens with a &quot;normal&quot; Python <code>venv</code>). First I got an immediate error message, which was solved by installing Qt 5 via brew (<code>brew install qt5</code>). When I now try to create the environment with pyqt5 directly using a yaml file, it takes ages and then pip is killed (<code>killed: 9</code>). If I try to install it manually using <code>pip install pyqt5==5.15.9</code> I get the following output:</p> <pre><code>Collecting pyqt5==5.15.9 Using cached PyQt5-5.15.9.tar.gz (3.2 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) </code></pre> <p>At this point nothing happens anymore (I guess it would be killed if I waited longer). If I try to install using <code>pip install pyqt5==5.15.9 -v</code> it shows:</p> <pre><code>... Querying qmake about your Qt installation... This is the GPL version of PyQt 5.15.9 (licensed under the GNU General Public License) for Python 3.10.12 on darwin. Type 'L' to view the license. Type 'yes' to accept the terms of the license. Type 'no' to decline the terms of the license. </code></pre> <p>And nothing happens from here on. Of course, I tried to type <code>yes</code>, but nothing happened. Any idea what I am doing wrong?</p>
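The verbose log shows PyQt5's sip-based source build waiting at its GPL license prompt, and pip does not forward stdin to build backends, so typing <code>yes</code> never reaches it. The build accepts a <code>--confirm-license</code> option that can be passed through a sufficiently recent pip; a sketch of the commonly cited invocation (untested here):

```shell
pip install pyqt5==5.15.9 --config-settings --confirm-license= --verbose
```

The deeper issue may be that no prebuilt wheel exists for this platform/Python combination, forcing the slow source build in the first place; using a PyQt5/Python pairing for which a wheel is published avoids the prompt entirely.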
<python><pyqt><pyqt5>
2023-09-08 12:15:54
0
627
gernophil
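A note on the hang described above: the PyQt5 sdist build asks for GPL license confirmation, but pip's build step runs without a terminal attached, so the prompt can never be answered. One commonly suggested workaround — assuming a pip recent enough (roughly >= 23.1) to forward `--config-settings` to the build backend — is to confirm the license non-interactively:

```shell
# Tell the PyQt5 sdist build to accept the GPL license without a prompt.
# The trailing '=' after --confirm-license is intentional (empty value).
pip install pyqt5==5.15.9 --config-settings --confirm-license= --verbose
```

Whether this applies to a given environment depends on the pip version; installing a prebuilt wheel (where one exists for the platform) avoids the license prompt entirely.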
77,066,528
13,314,888
input n as number then return '0' & '1' as number of n rows and columns in python
<p>Given a number n as input, print an n-by-n grid of '0' and '1' characters in Python (without using any built-in helper function or library).</p> <p>If input n = 3</p> <pre><code>The output: 100 010 001 </code></pre> <p>If input n = 4</p> <pre><code>The output: 1000 0100 0010 0001 </code></pre> <p>I tried the code below myself, but failed to get that output.</p> <pre><code>n = 3 for i in range(1, n+1): for j in range(i, n+1): if i == j: print(1) else: print(0) break </code></pre> <p>Any help would be much appreciated.</p>
<python><python-3.x><python-2.7>
2023-09-08 11:33:04
4
694
satyajit
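For what it's worth, the usual fix for the attempt above is to build each row as a string (or use `print(..., end='')`) so the digits stay on one line, printing 1 exactly where the row and column indices match — a minimal sketch:

```python
def identity_grid(n):
    # Build n lines of '0'/'1' characters with a 1 on the diagonal.
    lines = []
    for i in range(n):
        row = ""
        for j in range(n):
            row += "1" if i == j else "0"
        lines.append(row)
    return "\n".join(lines)

print(identity_grid(3))
print(identity_grid(4))
```

The inner loop must also run over the full `range(n)` (the original started it at `i` and used `break`, which cut each row short).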
77,066,489
2,690,578
method polygon_to_cells seems to not exist in H3 library
<p>From the Uber h3 api documentation page (<a href="https://uber.github.io/h3-py/api_reference.html#" rel="nofollow noreferrer">https://uber.github.io/h3-py/api_reference.html#</a>), they have a method called <strong>polygon_to_cells</strong>, which is supposed to transform a given polygon into H3 indexes at a given H3 resolution. However, I call this method both in Python (3.6) and from a Snowflake UDF (Python 3.9), and in both cases I get the error message back: <strong>module 'h3' has no attribute 'polygon_to_cells'</strong></p> <p>Has anyone faced the same issue?</p>
<python><h3>
2023-09-08 11:27:13
2
609
Gabriel
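In case it helps a future reader: h3-py renamed several functions between major versions — v3 exposed this functionality as `polyfill`, while `polygon_to_cells` appears in v4 — so an older installed wheel simply lacks the v4 name. A small version-agnostic wrapper (written to take the module as a parameter, so it can be exercised with a stub) might look like:

```python
def polygon_to_cells_compat(h3_module, polygon, resolution):
    """Call the h3 v4 API if present, otherwise fall back to the v3 name."""
    if hasattr(h3_module, "polygon_to_cells"):
        return h3_module.polygon_to_cells(polygon, resolution)
    # h3-py v3.x shipped the same functionality as `polyfill`
    return h3_module.polyfill(polygon, resolution)
```

Upgrading (`pip install --upgrade h3`) is the simpler fix where the environment allows it, though h3 v4 may require a newer interpreter than the Python 3.6 mentioned in the question.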
77,066,445
6,086,115
How to show the tooltip for the top layer only in multi-layer plotly graphs?
<p>I have a multi-layer plotly express timeline showing grey blocks with blue activities on them. Both blocks and activities have a tooltip. When hovering over them, I want to show the tooltip for the top layer only: i.e. when hovering over a blue activity (on the top layer), the tooltip for the activity must be displayed; when hovering over a grey block (on the bottom layer), the tooltip for the block must be displayed.</p> <p>Now, instead, on a part of the region of some of the blue activities, the grey tooltip is displayed.</p> <p>This can be reproduced by the following example. The problem shows up especially when hovering over the second activity in both blocks: on the left part of those activities, a grey tooltip is shown.</p> <p>First, I create two separate plots.</p> <pre><code>import plotly.express as px import pandas as pd import plotly.graph_objects as go df_blocks = pd.DataFrame({ &quot;start&quot;: [&quot;2023-09-01 10:00:00&quot;, &quot;2023-09-01 16:00:00&quot;], &quot;end&quot;: [&quot;2023-09-01 20:00:00&quot;, &quot;2023-09-01 22:00:00&quot;], &quot;y&quot;: [0, 1], &quot;tooltip&quot;: [&quot;First block&quot;, &quot;Second block&quot;], &quot;type&quot;: [&quot;block&quot;, &quot;block&quot;] }) df_activities = pd.DataFrame({ &quot;start&quot;: [&quot;2023-09-01 10:00:00&quot;, &quot;2023-09-01 14:00:00&quot;, &quot;2023-09-01 16:00:00&quot;, &quot;2023-09-01 17:00:00&quot;], &quot;end&quot;: [&quot;2023-09-01 14:00:00&quot;, &quot;2023-09-01 20:00:00&quot;, &quot;2023-09-01 17:00:00&quot;, &quot;2023-09-01 22:00:00&quot;], &quot;y&quot;: [0, 0, 1, 1], &quot;tooltip&quot;: [&quot;First activity&quot;, &quot;Second activity&quot;, &quot;Third activity&quot;, &quot;Fourth activity&quot;], &quot;type&quot;: [&quot;activity&quot;, &quot;activity&quot;, &quot;activity&quot;, &quot;activity&quot;] }) fig_blocks = px.timeline(df_blocks, x_start='start', x_end='end', y='y', hover_data=['tooltip'], color=&quot;type&quot;, 
color_discrete_map={&quot;block&quot;: '#8C8C8F'}, ) fig_activities = px.timeline(df_activities, x_start='start', x_end='end', y='y', hover_data=['tooltip'], color=&quot;type&quot;, color_discrete_map={&quot;activity&quot;: '#85AFFF'}, ) fig_activities.update_traces(width=0.5) </code></pre> <p>Then, I combine the two plots into one.</p> <pre><code>fig_combined = go.Figure(fig_blocks) fig_combined.add_traces(fig_activities.data) fig_combined.show() </code></pre> <p>This results in the unwanted grey tooltip when hovering over the left part of the blue activities.</p> <p>See the image below: I hover over a blue area and the grey tooltip appears. However, the blue tooltip should be shown here (obviously).</p> <p><a href="https://i.sstatic.net/BfZCO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BfZCO.png" alt="enter image description here" /></a></p> <p>Does anyone know how to fix this?</p>
<python><plotly><tooltip><timeline>
2023-09-08 11:20:37
0
351
Jordi
77,066,094
3,873,799
Extract annotations by layer from a PDF in Python
<p>I have a PDF with annotations (markups) stored in different layers. Each layer has a specific name. I need to extract the annotations with their layer name. In particular, I'm interested only in the <em>location</em> of the annotation (as in, the bounding box of it) and the name of their layer, i.e. an output like:</p> <pre><code>{ &quot;layerName&quot;: &quot;myLayer01&quot;, &quot;location&quot; : [ 10, 5, 4, 2 ] } </code></pre> <p>Using a library like <code>pyPDF2</code> (I'm the latest v3.0.1), I can extract the annotations' location using this:</p> <pre class="lang-py prettyprint-override"><code>from PyPDF2 import PdfReader reader = PdfReader(&quot;myFile.pdf&quot;) for page in reader.pages: if &quot;/Annots&quot; in page: for annot in page[&quot;/Annots&quot;]: obj = annot.get_object() annotation = { &quot;layerName&quot;: ???, &quot;location&quot;: obj[&quot;/Rect&quot;] } # how do I get the layer Name? </code></pre> <p>While it's easy to get the location, I am struggling to figure out how to get the layerName of the annotation.</p> <p>If I look into the properties of the extracted <code>obj</code> (for example serializing it entirely with <code>jsonPickle</code> and CTRL+F in the entire result) I cannot find any mention of the layer the annotations are located on.</p> <p>I know it's possible to get a list of all existing layers with something like:</p> <pre class="lang-py prettyprint-override"><code># Get the first page of the PDF and its layers page = pdf_reader.getPage(0) layers = page['/OCProperties']['/OCGs'] </code></pre> <p>but this doesn't help grouping the annotations per layer.</p> <p>Any suggestion is appreciated. I'd prefer a concise solution, using also libraries different than pyPDF if helpful.</p>
<python><pdf><annotations><extract><pypdf>
2023-09-08 10:26:29
2
3,237
alelom
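For reference, in the PDF model an annotation is tied to a layer through its optional-content entry: the annotation dictionary's <code>/OC</code> key points at an OCG whose <code>/Name</code> is the layer name, and annotations without <code>/OC</code> belong to no layer. A hedged sketch — the helper below assumes only dict-like objects, so it can be checked without a real PDF, and whether a given producer actually writes <code>/OC</code> on its markups must be verified on the file itself:

```python
def annotation_layer_name(annot_obj):
    """Return the layer (OCG) name for an annotation dict, or None."""
    oc = annot_obj.get("/OC")
    if oc is None:
        return None  # annotation is not assigned to any optional-content group
    # With pypdf/PyPDF2 this may be an IndirectObject; resolve it first.
    if hasattr(oc, "get_object"):
        oc = oc.get_object()
    return oc.get("/Name")
```

In the loop from the question this would be called as `annotation_layer_name(annot.get_object())`, filling the `"layerName"` field alongside `obj["/Rect"]`.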
77,066,079
16,222,937
Python: Expected expression error on end if
<p>I'm quite new to the Python programming language and am just working on a basic project to sort of try to grasp the basics. This may have been asked before but I can't really get a clear answer (most of the answers I see don't include end if). I'm not sure if it means that I've used wrong spacing somewhere or if I'm missing a semi colon (does Python have those?) somewhere.</p> <p>My code</p> <pre><code>print(&quot;----Places to visit app---&quot;) print(&quot;1. Insert a place&quot;) print(&quot;2. Print places&quot;) print(&quot;3. Delete a place&quot;) print(&quot;4. Exit&quot;) print(&quot;5. Please enter your choice&quot;) option = input() if option == '1': print(&quot;Please enter the name of the place: &quot;) place_name = input() print(&quot;please enter the address of the place: &quot;) place_address = input() print(&quot;please enter the number of days you plan to stay at this place: &quot;) num_days = int(input()) print(&quot;Please enter the amount you plan to spend per day&quot;) daily_cost=float(input()) total_cost=num_days*daily_cost print(&quot;-----Place Added-----&quot;) print(&quot;Place name: &quot;, place_name) print(&quot;Address: &quot;, place_address) print(&quot;Number of Days: &quot;, num_days) print(&quot;Total cost: $&quot;, total_cost) elif option == '2': print(&quot;Print places feature will be develped in future...&quot;) elif option == '3': print(&quot;Delete a place feature will be develped in future...&quot;) elif option == '4': print(&quot;Goodbye!!!&quot;) else: print(&quot;Invalid option selected. Please choose a valid option.&quot;) end if </code></pre> <p>Error on line 31 (end if): Expected expression Pylance</p>
<python><pylance>
2023-09-08 10:23:33
1
443
CloakedArrow
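Regarding the error above: Python has no `end if` (and needs no semicolons) — a block ends when the indentation returns to the enclosing level, so deleting that last line fixes the syntax error. A compressed sketch of the same branching logic:

```python
def handle_option(option):
    # Indentation alone delimits each branch; nothing "closes" an if.
    if option == "1":
        return "add place"
    elif option == "4":
        return "Goodbye!!!"
    else:
        return "Invalid option selected. Please choose a valid option."

print(handle_option("4"))
```

The `if/elif/else` chain simply stops being part of the statement once a following line is indented at the same level as the `if` itself.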
77,066,014
7,339,624
Difference between `plt.imshow()` and `plt.matshow()` for Heatmaps
<p>I'm working with the following matrix, and I want to creat a heatmap using <code>Matplotlib</code>:</p> <pre class="lang-py prettyprint-override"><code>a = [[1, 1, 1, 1], [1, 2, 3, 4], [4, 3, 2, 1], [4, 4, 4, 4]] </code></pre> <p>I've read the documentation and understand that <code>plt.imshow()</code> is a more general function for image display. However, when I use both <code>plt.imshow(a)</code> and <code>plt.matshow(a)</code> to create a heatmap, the outputs appear identical. In the context of visualizing a matrix as a heatmap, are there any differences between the two functions?</p> <p>P.S.: code for heatmaps</p> <pre><code>from matplotlib import pyplot as plt plt.matshow(a) # Or `plt.imshow(a)` plt.show() </code></pre>
<python><matplotlib><visualization><heatmap>
2023-09-08 10:14:37
1
4,337
Peyman
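For a heatmap the two calls draw the same image: `matshow` is essentially a thin convenience wrapper that creates a figure, calls `imshow` with matrix-friendly defaults (nearest interpolation, equal aspect, origin at the top) and moves the ticks to the top edge. One way to confirm they carry identical data, assuming a headless backend:

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend so no window is needed
import matplotlib.pyplot as plt

a = [[1, 1, 1, 1], [1, 2, 3, 4], [4, 3, 2, 1], [4, 4, 4, 4]]

im1 = plt.imshow(a)   # draws into the current axes with imshow defaults
im2 = plt.matshow(a)  # creates a fresh figure with matrix-style defaults
print((im1.get_array() == im2.get_array()).all())
```

So the visible difference is only in axes decoration (tick placement), not in how the matrix values are mapped to colors.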
77,065,700
292,291
How do I update value of a ContextVar in an async function?
<p>It seems that when I reference a <code>contextvars.ContextVar</code> in an async function, it's a copy of the actual context. Thus, when I update its value, it won't be persisted; is there a way to pass the context by reference instead?</p> <pre class="lang-py prettyprint-override"><code>import asyncio from contextvars import ContextVar import uuid from fastapi import FastAPI app = FastAPI() req_context = ContextVar(&quot;req_context&quot;) def do_something(): print(&quot;uuid in do something: &quot;, req_context.get()) async def do_something_async(): req_context.set({ **req_context.get(), &quot;async1&quot;: &quot;1&quot;, }) print(&quot;uuid in do something async: &quot;, req_context.get()) async def do_something_async2(): req_context.set({ **req_context.get(), &quot;async2&quot;: &quot;done&quot;, }) print(&quot;uuid in do something async2: &quot;, req_context.get()) @app.get(&quot;/&quot;) async def root(): print(&quot;=====\n\n0uuid: &quot;, req_context.get({})) req_context.set({&quot;REQ_ID&quot;: uuid.uuid4()}) print(&quot;1uuid: &quot;, req_context.get()) do_something() await asyncio.gather( do_something_async(), do_something_async2(), ) print(&quot;2uuid: &quot;, req_context.get()) return {&quot;message&quot;: &quot;Hello World&quot;} if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(app, host=&quot;0.0.0.0&quot;, port=8000) </code></pre> <p>When I run</p> <pre><code>0uuid: {} 1uuid: {'REQ_ID': UUID('e5efe0a0-5975-43b9-aff3-9e618ffb8be0')} uuid in do something: {'REQ_ID': UUID('e5efe0a0-5975-43b9-aff3-9e618ffb8be0')} uuid in do something async: {'REQ_ID': UUID('e5efe0a0-5975-43b9-aff3-9e618ffb8be0'), 'async1': '1'} uuid in do something async2: {'REQ_ID': UUID('e5efe0a0-5975-43b9-aff3-9e618ffb8be0'), 'async2': 'done'} # Notice the mutations do not get saved to the original context 2uuid: {'REQ_ID': UUID('e5efe0a0-5975-43b9-aff3-9e618ffb8be0')} </code></pre> <p>Or perhaps I am using the wrong thing?</p>
<python><python-asyncio>
2023-09-08 09:27:54
0
89,109
Jiew Meng
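This is documented `contextvars` behavior rather than a bug: each task that `asyncio.gather()` spawns runs in a *copy* of the caller's context, so `ContextVar.set()` inside the task only rebinds that copy. The dict object itself, however, is shared between the copies — so mutating it (instead of `set()`-ing a new dict) is visible to the caller. A minimal sketch of that approach:

```python
import asyncio
from contextvars import ContextVar

req_context = ContextVar("req_context")

async def child(key, value):
    # set() here would only rebind this task's copied context and be lost;
    # mutating the shared dict object is visible to the caller.
    req_context.get()[key] = value

async def main():
    req_context.set({"REQ_ID": "abc"})
    await asyncio.gather(child("async1", "1"), child("async2", "done"))
    return req_context.get()

print(asyncio.run(main()))
```

The alternative, which keeps the immutability style of the original code, is to have each coroutine return its updates and merge them in the caller after `gather()` completes.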
77,065,631
15,682,259
How to Handle Authentication Errors in Python Flask?
<p>I'm working on a Python Flask web application that requires user authentication. I've implemented a basic authentication system using Flask-Login, but I'm encountering some issues with handling authentication errors. Specifically, when a user enters incorrect credentials, I want to display a custom error message and redirect them to the login page.</p> <p>Here's a summary of what I've done so far:</p> <p>I've created a User model and integrated Flask-Login for session management. I've set up a login route that handles user login and redirects to the dashboard on success. However, when a user enters incorrect credentials, Flask-Login's default behavior displays a generic &quot;Invalid username or password&quot; message. I want to replace this message with a custom error message and redirect the user back to the login page.</p> <p>I've searched for solutions and looked at related questions on Stack Overflow, but I couldn't find a clear answer that addresses my specific issue.</p> <p>Could someone please provide guidance on how to handle authentication errors in Flask and customize the error message? Additionally, if you can provide code examples or point me in the right direction, it would be greatly appreciated.</p>
<python><authentication><flask><error-handling><flask-login>
2023-09-08 09:17:08
0
467
Bernardo Almeida
77,065,569
5,562,092
setuptools and files not installed
<p>So the issue is quite confusing. Conventionally I have a basic structure in a python package</p> <pre><code>setup.py requirements.txt src/ package_name/ file1.py file2.py __init__.py </code></pre> <p>The above installs fine, but the one below doesn't.</p> <pre><code>setup.py requirements.txt src/ file1.py file2.py __init__.py </code></pre> <p>Things that don't work:</p> <pre><code>find_packages() find_packages(where=&quot;src&quot;) find_packages(where=&quot;./src&quot;) </code></pre> <p>When I get into Python I can import <code>src</code>, but that's not how packages are supposed to work:</p> <pre><code>from src import file1 </code></pre> <p>Also, I wish I could change the folder structure, but I am constrained by requirements.</p> <p>Thanks in advance and happy coding.</p>
<python>
2023-09-08 09:08:09
0
875
A H Bensiali
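The likely explanation for the question above: in the flat layout, `src` itself is the package (it holds `__init__.py` directly), so `find_packages(where="src")` finds no package directories *inside* it. When the on-disk layout can't change, `package_dir` can map an importable name onto the `src` directory — a sketch of the `setup.py` (the name `package_name` is a placeholder; substitute the real import name):

```python
from setuptools import setup

setup(
    name="mypackage",  # distribution name; placeholder
    # Map the importable package name onto the src/ directory that
    # holds file1.py, file2.py and __init__.py directly.
    packages=["package_name"],
    package_dir={"package_name": "src"},
)
```

After installing, `from package_name import file1` should work, without ever exposing `src` as the import name.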
77,065,296
5,547,553
How to make polars in python not quote anything in write_csv?
<p>I'm generating a javascript data file in polars and when writing it out, I'd like polars NOT to quote anything.<br> How can I do that? This is still quoting (and quote parameter cannot be None or empty):</p> <pre><code>import polars as pl data = ['// 2023.09.01.', 'var mydata = [', '[11.407538,22.003241,51,,&quot;L&quot;,&quot;T&quot;,,67', '[12.547899,21.033232,112,,&quot;L&quot;,&quot;T&quot;,,139', '];'] df = pl.DataFrame(data, schema=['X']) df.write_csv('myfile.js', has_header = False, quote_style = None) </code></pre> <p>Thanks.</p>
<python>
2023-09-08 08:24:41
0
1,174
lmocsi
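Two hedged observations on the question above. First, recent polars releases accept a string for this parameter — e.g. `df.write_csv('myfile.js', quote_style="never")`, alongside `"necessary"` and `"non_numeric"` — rather than `None`; whether the installed version supports it is worth checking. Second, since the frame is a single string column, skipping the CSV layer entirely also guarantees nothing gets quoted; a stdlib sketch of that fallback (in the polars context the list would come from `df["X"].to_list()`):

```python
import os
import tempfile

data = ['// 2023.09.01.', 'var mydata = [',
        '[11.407538,22.003241,51,,"L","T",,67',
        '];']

# Write the lines verbatim -- no CSV writer involved, so no quoting rules.
path = os.path.join(tempfile.mkdtemp(), "myfile.js")
with open(path, "w", encoding="utf-8") as f:
    f.write("\n".join(data) + "\n")
```

For generated JavaScript this is arguably the more honest approach, since the file was never really CSV to begin with.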
77,065,281
14,282,714
Reset Zoom of Interactive Altair
<p>I would like to have a way (button for example) to reset the zoom in or zoom out to some start of an interactive plot from <code>altair</code>. Here is some reproducible code:</p> <pre><code>import altair as alt from vega_datasets import data cars = data.cars() alt.Chart(cars).mark_point().encode( x='Horsepower', y='Miles_per_Gallon', color='Origin', ).interactive() </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/r5MUE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r5MUE.png" alt="enter image description here" /></a></p> <p>Sometimes you accidentally zoom in or zoom out to some way like this:</p> <p><a href="https://i.sstatic.net/MYlnk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MYlnk.png" alt="enter image description here" /></a></p> <p>Is there a way to quickly reset the zoom in <code>altair</code>?</p>
<python><zooming><altair>
2023-09-08 08:22:57
1
42,724
Quinten
77,065,235
634,780
How to check for a TypeVar actual type and keep mypy happy
<p>I have a type bound defined as</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;, bool, int, str) </code></pre> <p>and I want to pass <strong>type</strong> (not a value) T to a method, defined as</p> <pre class="lang-py prettyprint-override"><code>def env(value_type: Type[T]) -&gt; Optional[T]: </code></pre> <p>and in the method, based on the type of T, return values with matching type. This works (see example attached), but mypy says that</p> <pre><code>test_generic.py:20: error: Incompatible return value type (got &quot;bool&quot;, expected &quot;str | None&quot;) [return-value] test_generic.py:22: error: Incompatible return value type (got &quot;int&quot;, expected &quot;bool | None&quot;) [return-value] test_generic.py:22: error: Incompatible return value type (got &quot;int&quot;, expected &quot;str | None&quot;) [return-value] test_generic.py:24: error: Incompatible return value type (got &quot;str&quot;, expected &quot;bool | None&quot;) [return-value] test_generic.py:24: error: Incompatible return value type (got &quot;str&quot;, expected &quot;int | None&quot;) [return-value] </code></pre> <p>What's the reason? 
How can I fix it?</p> <p>test_generic.py:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Type, Optional T = TypeVar(&quot;T&quot;, bool, int, str) def get_str() -&gt; str: return &quot;str&quot; def get_int() -&gt; int: return 1 def get_bool() -&gt; bool: return True def env(value_type: Type[T]) -&gt; Optional[T]: if value_type is bool: return get_bool() elif value_type is int: return get_int() elif value_type is str: return get_str() else: return None def test_int() -&gt; None: assert env(int) == get_int() def test_bool() -&gt; None: assert env(bool) == get_bool() def test_str() -&gt; None: assert env(str) == get_str() </code></pre> <p>Versions used:</p> <pre class="lang-bash prettyprint-override"><code>$ python --version Python 3.11.1 $ mypy --version mypy 1.5.1 (compiled: yes) </code></pre>
<python><mypy>
2023-09-08 08:15:53
0
1,635
icepopo
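The errors above follow from how mypy treats value-constrained TypeVars: it type-checks the body of `env` once per constraint (once with T=bool, once with T=int, once with T=str), and the `is` comparisons do not narrow T — so in the T=str pass, `return get_bool()` genuinely is a mismatch. One common workaround is `cast`, which asserts the correspondence the runtime checks already guarantee:

```python
from typing import Optional, Type, TypeVar, cast

T = TypeVar("T", bool, int, str)

def env(value_type: Type[T]) -> Optional[T]:
    # cast() tells mypy each branch matches the requested type;
    # it is a no-op at runtime, so behavior is unchanged.
    if value_type is bool:
        return cast(T, True)
    if value_type is int:
        return cast(T, 1)
    if value_type is str:
        return cast(T, "str")
    return None
```

A stricter alternative is a set of `typing.overload` signatures (one per supported type), which keeps mypy checking each return value instead of trusting the cast.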
77,065,234
2,069,099
How to add a hierarchical checkbox
<p>I try to modify the code below, to have a &quot;curves&quot; checkbox that would deactivate / activate all the children (actually, the code I need will handle a lot of those, this is just an example):</p> <p>The current code gives:</p> <p><a href="https://i.sstatic.net/fzKtm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzKtm.png" alt="enter image description here" /></a></p> <p>And I need to achieve this:</p> <p><a href="https://i.sstatic.net/oJFmu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJFmu.png" alt="enter image description here" /></a></p> <pre><code>import numpy as np from bokeh.layouts import column, row from bokeh.models import CustomJS, Slider, CheckboxGroup from bokeh.plotting import ColumnDataSource, figure, show # output_notebook() # initial input data x = np.linspace(0, 10, 500) y = np.sin(x) z = np.cos(x) name_lst = ['sin', 'cos'] # dataframe source = ColumnDataSource(data=dict(x=x, y=y, z=z)) # initialize figure fig = figure(width=200, height=200) line_renderer = [ fig.line('x', 'y', source=source, name=name_lst[0]), fig.line('x', 'z', source=source, name=name_lst[1]) ] line_renderer[0].visible = False # a couple of check boxes checkbox = CheckboxGroup( labels=name_lst, active=[1, 1], width=100 ) callback = CustomJS(args=dict(lines=line_renderer, checkbox=checkbox), code=&quot;&quot;&quot; lines[0].visible = checkbox.active.includes(0); lines[1].visible = checkbox.active.includes(1); &quot;&quot;&quot;) # changes upon clicking and sliding checkbox.js_on_change('active', callback) layout = row(fig, column( checkbox)) show(layout) </code></pre>
<javascript><python><bokeh>
2023-09-08 08:15:46
1
3,517
Nic
77,064,993
3,225,420
How to only show portion of x-axis spine on scatterplot
<p>I don't want the x-axis spine between (0,0) and (0,4) to display on the scatterplot below. I still want the distance on the chart, I just don't want the spine to display between those coordinates.</p> <pre><code>import matplotlib.pyplot as plt x = range(4,10) y=x fig, ax = plt.subplots() ax.scatter(x=x, y=y) ax.set_xlim(left=0, right=10) ax.set_xticks(x) plt.show() </code></pre> <p><a href="https://i.sstatic.net/C9UVb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C9UVb.png" alt="enter image description here" /></a></p> <p>Ideally something like this:</p> <pre><code>ax.spines['bottom'].set_visible([4:]) </code></pre> <p>But it yields <code>SyntaxError: invalid syntax</code>.</p> <p>This hides the entire line:</p> <pre><code>ax.spines['bottom'].set_visible(False) </code></pre> <p>I tried <a href="https://matplotlib.org/stable/api/spines_api.html#matplotlib.spines.Spine.set_position" rel="nofollow noreferrer">set_position()</a>:</p> <pre><code>ax.spines['bottom'].set_position('data',4) </code></pre> <p>But I could only get error messages. If I supplied one argument it said it wasn't enough, but when I supply two it complained about too many:</p> <pre><code>TypeError: Spine.set_position() takes 2 positional arguments but 3 were given </code></pre> <p>What should I try next?</p>
<python><matplotlib><scatter-plot><x-axis>
2023-09-08 07:38:19
1
1,689
Python_Learner
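Matplotlib spines have exactly this switch: `Spine.set_bounds(low, high)` limits the drawn portion of the spine without touching the axis limits, which matches the "keep the distance, hide the line" requirement. A sketch under a headless backend:

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend
import matplotlib.pyplot as plt

x = range(4, 10)
fig, ax = plt.subplots()
ax.scatter(x=x, y=x)
ax.set_xlim(left=0, right=10)
ax.set_xticks(list(x))
# Draw the bottom spine only from x=4 to x=10; the axis still spans 0..10.
ax.spines['bottom'].set_bounds(4, 10)
```

The earlier `set_position` attempt fails because that method takes a single `(position_type, amount)` tuple, not two separate arguments — and it moves the spine rather than truncating it.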
77,064,942
10,200,497
Understanding the cache size of arbitrary callback data in python telegram bot
<p>I have recently read the <code>python-telegram-bot</code> <a href="https://github.com/python-telegram-bot/python-telegram-bot/wiki/Arbitrary-callback_data" rel="nofollow noreferrer">docs</a> that explain the advantages of <code>arbitrary_callback_data</code>. I did not understand this sentence in the docs:</p> <blockquote> <p>PTB stores the callback data objects in memory. Additionally, to that, it stores a mapping of CallbackQuery.id to the corresponding UUID. By default, both storages contain a maximum number of 1024 items.</p> </blockquote> <p>What does 1024 represent? In my code I have some inline buttons that, for example, remove a document from MongoDB, like the example below:</p> <p><code>InlineKeyboardButton(text='title',callback_data=f'delete_doc_{the_id_of_doc}')</code></p> <p>Does that mean, for instance, that if I have millions of ids, it exceeds the limit that is set to 1024 by default? Because the <code>callback_data</code> of each button in my code is unique.</p>
<python><python-telegram-bot>
2023-09-08 07:29:48
1
2,679
AmirX
77,064,674
350,980
Type hints for mypy for a decorator over staticmethod/classmethod
<p>I'm writing a helper for logger library that has decorators with specific logging for trace (debugging).</p> <p>The code itself is correct (it is partially based on existing library), but i struggle to find how to make mypy accept types for it.</p> <p>Question marks is where i have problems with types. Or maybe the problem is more general</p> <p>For staticmethod:</p> <pre><code>def trace_static_method(_staticmethod: staticmethod) -&gt; staticmethod: @wraps(_staticmethod.__func__) # this generate mypy error for incorrect type def wrapper(*args: ???, **kwargs: ???) -&gt; ???: return _log_trace(_staticmethod.__func__, *args, **kwargs) return staticmethod(wrapper) # this generate mypy error for incorrect type </code></pre> <p>For classmethod:</p> <pre><code>def trace_class_method(_classmethod: classmethod) -&gt; classmethod: @wraps(_classmethod.__func__) # this generate mypy error for incorrect type def wrapper(_cls: ???, *args: ???, **kwargs: ???) -&gt; ???: method = _classmethod.__get__(None, _cls) # this generate mypy error for incorrect type return _log_trace(method, *args, **kwargs) return classmethod(wrapper) # this generate mypy error for incorrect type </code></pre> <p>Log trace:</p> <pre><code>def _log_trace(func: Callable[P, T], *args: P.args, **kwargs: P.kwargs) -&gt; T: name = func.__qualname__ module = func.__module__ logger_ = logger.opt(depth=1) logger_.log(&quot;TRACE&quot;, &quot;{}.{} CALL args={}, kwargs={}&quot;, module, name, args, kwargs) result = func(*args, **kwargs) logger_.log(&quot;TRACE&quot;, &quot;{}.{} RETURN {}&quot;, module, name, result) return result </code></pre> <p>Working types for a simple function decorator:</p> <pre><code>def trace(func: Callable[P, T]) -&gt; Callable[P, T]: @wraps(func) def wrapper(*args: P.args, **kwargs: P.kwargs) -&gt; T: return _log_trace(func, *args, **kwargs) return wrapper </code></pre> <p><strong>EDIT</strong>: adding correct types for static method was actually pretty straightforward:</p> 
<pre><code>def trace_static_method(_staticmethod: staticmethod[P, T]) -&gt; staticmethod[P, T]: @wraps(_staticmethod.__func__) def wrapper(*args: P.args, **kwargs: P.kwargs) -&gt; T: return _log_trace(_staticmethod.__func__, *args, **kwargs) return staticmethod(wrapper) </code></pre>
<python><python-typing><mypy>
2023-09-08 06:39:32
1
1,422
UnstableFractal
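A pattern that sidesteps typing `staticmethod`/`classmethod` objects altogether is to apply the plain `trace` decorator *under* the builtin decorator, so `trace` only ever sees an ordinary callable. Note this changes the decoration order at the call sites rather than fixing the wrapper types; the `calls` list below is a stand-in for the real logger:

```python
from functools import wraps

calls = []  # captured trace log for this sketch; a real version would log

def trace(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        calls.append((func.__qualname__, args, kwargs, result))
        return result
    return wrapper

class Service:
    @staticmethod
    @trace          # trace sees a plain function, not a staticmethod object
    def add(a, b):
        return a + b

    @classmethod
    @trace          # cls arrives as the first positional argument
    def name(cls):
        return cls.__name__

print(Service.add(1, 2), Service.name())
```

With this ordering, the question's existing `trace(func: Callable[P, T]) -> Callable[P, T]` annotations type-check unchanged, because `staticmethod`/`classmethod` wrap the already-decorated function.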
77,064,582
13,396,497
Remove duplicates from panda dataframe but keep the row if it repeats after another set
<p>I am trying to remove the duplicates and keep the first, but I also want to keep a row if the set repeats again later -</p> <pre><code>from io import StringIO import pandas as pd dfa = pd.read_csv(StringIO(&quot;&quot;&quot; Date/Time C_1 C_2 &quot;16/08/2023 3:00:15&quot; online offline &quot;16/08/2023 3:00:17&quot; online offline &quot;16/08/2023 3:00:18&quot; offline online &quot;16/08/2023 3:00:21&quot; offline online &quot;16/08/2023 3:00:24&quot; offline online &quot;16/08/2023 3:00:23&quot; offline online &quot;16/08/2023 3:00:26&quot; offline online &quot;16/08/2023 3:00:28&quot; online offline &quot;16/08/2023 3:00:31&quot; online offline &quot;16/08/2023 3:00:37&quot; online offline &quot;16/08/2023 3:00:39&quot; online offline &quot;16/08/2023 3:00:42&quot; online offline&quot;&quot;&quot;), sep=&quot;\s+&quot;) dfa = dfa.drop_duplicates(['C_1','C_2']).reset_index(drop=True) print(dfa) </code></pre> <p>Output I am getting is -</p> <pre><code> Date/Time C_1 C_2 16/08/2023 3:00:15 online offline 16/08/2023 3:00:18 offline online </code></pre> <p>But I am expecting -</p> <pre><code> Date/Time C_1 C_2 16/08/2023 3:00:15 online offline 16/08/2023 3:00:18 offline online 16/08/2023 3:00:28 online offline </code></pre>
<python><pandas>
2023-09-08 06:22:11
4
347
RKIDEV
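One way to read the requirement above is "drop only *consecutive* duplicates": keep a row whenever its status columns differ from the previous row's. Comparing the frame against a shifted copy expresses that directly — a sketch on simplified data:

```python
import pandas as pd

dfa = pd.DataFrame({
    "Date/Time": ["3:00:15", "3:00:17", "3:00:18", "3:00:21", "3:00:28", "3:00:31"],
    "C_1": ["online", "online", "offline", "offline", "online", "online"],
    "C_2": ["offline", "offline", "online", "online", "offline", "offline"],
})

cols = ["C_1", "C_2"]
# shift() aligns each row with its predecessor; a row survives when any
# status column changed (the first row always differs from NaN).
changed = (dfa[cols] != dfa[cols].shift()).any(axis=1)
out = dfa[changed].reset_index(drop=True)
print(out)
```

Unlike `drop_duplicates`, this keeps the later `online/offline` run because only the immediately preceding row is consulted.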
77,064,536
9,028,779
How to update text within HTML parent tags that contain nested tags, using BeautifulSoup?
<p>I am facing challenges in using BeautifulSoup to update the text within HTML tags when those tags act as 'parent' tags containing other nested tags like <code>&lt;i&gt;&lt;/i&gt;</code> or <code>&lt;b&gt;&lt;/b&gt;</code>. The issue is that BeautifulSoup only identifies the text within the deepest nested tag and doesn't allow me to extract and modify the text of the parent tag. How can I effectively process and replace text within parent tags, even when they contain nested tags?</p> <p>This is a sample code:</p> <pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup # Sample HTML content html_content = &quot;&quot;&quot; &lt;html&gt; &lt;body&gt; &lt;p&gt;First paragraph&lt;/p&gt; &lt;p&gt;Second paragraph &lt;i&gt;italic text&lt;/i&gt; paragraph continues &lt;i&gt; italic text&lt;/i&gt;&lt;b&gt;bold text&lt;/b&gt; paragraph&lt;/p&gt; &lt;div&gt;Here is a Div&lt;/div&gt; &lt;/body&gt; &lt;/html&gt; &quot;&quot;&quot; # Processing function def process_text(text): # Replace 'text' processing logic with your own return f&quot;Processed {text}&quot; # Parse the HTML content soup = BeautifulSoup(html_content, 'html.parser') # Iterate through tags and process the text within for tag in soup.find_all(): if tag.string: tag.string = process_text(tag.string) # Print the resulting HTML print(soup) </code></pre> <p>The result is:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;body&gt; &lt;p&gt;Processed First paragraph&lt;/p&gt; &lt;p&gt;Second paragraph &lt;i&gt;Processed italic text&lt;/i&gt; paragraph continues &lt;i&gt;Processed italic text&lt;/i&gt;&lt;b&gt;Processed bold text&lt;/b&gt; paragraph&lt;/p&gt; &lt;div&gt;Processed Here is a Div&lt;/div&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p> <p>But the expected result should be:</p> <p><div class="snippet" 
data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;body&gt; &lt;p&gt;Processed First paragraph&lt;/p&gt; &lt;p&gt;Processed Second paragraph &lt;i&gt;Processed italic text&lt;/i&gt; Processed paragraph continues &lt;i&gt;Processed italic text&lt;/i&gt;&lt;b&gt;Processed bold text&lt;/b&gt; Processed paragraph&lt;/p&gt; &lt;div&gt;Processed Here is a Div&lt;/div&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p> <p>In summary, I am unable to manipulate the text of a tag if it is a 'parent', containing other tags such as <code>&lt;i&gt;&lt;/i&gt;</code> or <code>&lt;b&gt;&lt;/b&gt;</code>.</p> <p>Is there a way to modify my BeautifulSoup code to handle this situation and update the text correctly within all tags, including those nested within other tags? Any help is greatly appreciated. Thank you!</p>
<python><html><beautifulsoup>
2023-09-08 06:14:15
1
489
JPM
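The usual route for the question above is to iterate over the *text nodes* instead of the tags: `soup.find_all(string=True)` yields every `NavigableString`, including the fragments of a parent tag that sit between nested children, and `replace_with` swaps each one in place. A sketch on a reduced input (the whitespace check is an assumption, added to avoid "processing" pure-formatting nodes):

```python
from bs4 import BeautifulSoup

html_content = "<p>Second paragraph <i>italic text</i> paragraph continues</p>"

def process_text(text):
    # Stand-in for the real processing logic
    return f"Processed {text}"

soup = BeautifulSoup(html_content, "html.parser")
# Walk text nodes, not tags: each chunk of a parent's text is its own node.
for node in soup.find_all(string=True):
    if node.strip():  # skip whitespace-only nodes between tags
        node.replace_with(process_text(node))

print(soup)
```

Because every text fragment is its own node, the parent's text before and after each `<i>`/`<b>` child gets processed separately, which matches the expected output in the question.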
77,064,503
17,423,976
Only 3 microphone streams are open on Raspberry Pi 4
<p>I typed a code that listens to 8 USB microphone inputs simultaneously on the <strong>Raspberry Pi 4 B 4GB</strong>.<br /> It ran fine on the PC, but only partially ran on the Raspberry Pi.<br /> I confirmed that multiprocess and multithreading worked well, but only 3 microphones were streamed in each thread.<br /> And when I turned off the working microphone, the other microphone started running, maintaining three streams.</p> <p><strong>8 USB microphones</strong> were connected to the Raspberry Pi using 2 <strong>4-port hubs</strong>.<br /> Coding was done in <strong>Python 3.9.11</strong> and the <strong>sounddevice</strong> was used.<br /> Since the sound input to the microphone had to be received almost simultaneously, 3 multiprocesses were run and 3 multithreads were run in each process.<br /> Each thread opened a stream with a sounddevice to the microphone's input and read the data.<br /> Please tell me how I can increase the number of streams to 8. I desperately need help.</p> <pre><code>import sounddevice as sd from threading import Thread from multiprocessing import Process, Queue from collections import deque import time import copy SAMPLE_RATE = 48000 CHANNELS = 1 SPLIT_TIME = 1 MAX_MIC = 2 def initialize(): # find mic to use mic_all = sd.query_devices() mic_indices = [] for mic in mic_all: if 'AB13X USB Audio' in mic['name'] and mic['hostapi'] == 0 and mic['max_input_channels'] &gt; 0: mic_indices.append(mic['index']) # mic index return [mic_indices[i:i+MAX_MIC] for i in range(0, len(mic_indices), MAX_MIC)], mic_indices def run(que, mic_list): # Deque declaration according to number of microphones dques = [] for mic_index in mic_list: dques.append(deque()) # Thread declaration according to number of microphones thrds = [] for mic_index, dqu in zip(mic_list, dques): thrd = Thread(target=open_stream, args=(mic_index, dqu), daemon=True) thrds.append(thrd) # Run threads as simultaneously as possible for thrd in thrds: thrd.start() while True: 
is_all_existed = True # Check deque for dqu in dques: if not dqu: is_all_existed = False if is_all_existed: for dqu in dques: data = dqu.popleft() que.put(data) time.sleep(0.1) def open_stream(device_index, dqu): stream = sd.InputStream(device=device_index, samplerate= SAMPLE_RATE, channels=CHANNELS, dtype='int16', latency=True) stream.start() while True: audio_data, overflowed = stream.read(int(SAMPLE_RATE * SPLIT_TIME)) audio_data = audio_data.reshape(-1,) now = time.time() dqu.append([device_index, audio_data, now]) async def main(): mic_slice, mic_indices = initialize() # Queue declaration for multiprocessing que = Queue() procs = [] for mic_list in mic_slice: proc = Process(target=run, args=(que, mic_list), daemon=True) procs.append(proc) # Run processes as simultaneously as possible for proc in procs: proc.start() # Main Loop try: stop_cnt = 0 # TODO Counter for stopping, deleting after development while True: stop_cnt += 1 if stop_cnt &gt; 600: break # Get data from queue if que.qsize() &gt;= len(mic_indices): # TODO Code to be improved in the future mic_data = {} time_data = {} for i in range(len(mic_indices)): data = que.get() mic_data[data[0]] = data[1] time_data[data[0]] = data[2] if len(mic_data.keys()) == len(time_data.keys()) == len(mic_indices): for mic_index in mic_indices: mic_data_list = mic_data[mic_index].tolist() result = {mic_index: copy.deepcopy(mic_data_list)} # Data post-processing, omitted below else: time.sleep(0.1) except KeyboardInterrupt: print('System off') </code></pre>
<python><python-3.x><python-sounddevice>
2023-09-08 06:07:17
0
337
Desty
77,064,470
10,207,083
Python Redis stream xread with "$"
<p>Does Python redis not support <code>&quot;$&quot;</code>? The following will not return a record when the stream is updated.</p> <p><code>redis.xread(streams={&quot;stream_name&quot;: &quot;$&quot;}, count=None, block=0)</code></p>
<python><redis>
2023-09-08 06:01:16
1
533
TruBlu
77,064,455
6,729,591
PyTorch Dataloader - list indices must be integers or slices, not list
<p>I have implemented a COCO dataset as follows:</p> <pre><code>from torch.utils.data import Dataset
from detr.datasets.coco import CocoDetection


class MyCoco(CocoDetection):
    def __init__(self, img_folder, ann_file, transform=None) -&gt; None:
        super().__init__(img_folder, ann_file, transform, return_masks=True)

    def __getitem__(self, idx):
        img, target = super(MyCoco, self).__getitem__(idx)
        return img, target
</code></pre> <p>Then I defined a batch sampler and dataloader as follows:</p> <pre><code>my_coco = MyCoco(
    settings.datasets.img_folder,
    settings.datasets.ann_file
)
sampler_train = torch.utils.data.RandomSampler(my_coco)
batch_sampler_train = torch.utils.data.BatchSampler(sampler_train, batch_size=32, drop_last=True)
data_loader_train = DataLoader(my_coco, sampler=batch_sampler_train, collate_fn=collate_fn, num_workers=1)
</code></pre> <p>When I try to iterate the loader there is an error:</p> <pre class="lang-py prettyprint-override"><code>for a in data_loader_train:
    print(a)
    break
</code></pre> <p><code>TypeError: list indices must be integers or slices, not list</code></p> <p>Looking into the functions themselves, for some reason the indices are wrapped in another list, and I don't understand why, and more importantly, how to fix it:</p> <p><a href="https://i.sstatic.net/qEReT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qEReT.png" alt="enter image description here" /></a></p>
<python><machine-learning><pytorch><pytorch-lightning><pytorch-dataloader>
2023-09-08 05:58:11
1
1,404
Dr. Prof. Patrick
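The error in the question above typically comes from passing a `BatchSampler` through the `sampler=` argument: a `BatchSampler` yields *lists* of indices, which the plain-sampler code path then passes directly to `dataset[...]`, while `DataLoader` has a separate `batch_sampler=` argument for samplers that yield whole batches. Below is a torch-free sketch of the failure mode under that assumption; the list-backed "dataset" is a hypothetical stand-in, not the questioner's COCO dataset.

```python
# Torch-free sketch: why a sampler that yields index *lists* breaks plain
# indexing, and why batches must be gathered one index at a time (which is
# roughly what DataLoader's `batch_sampler=` path does).

data = ["img0", "img1", "img2", "img3"]   # hypothetical stand-in for a dataset
batch_indices = [[0, 1], [2, 3]]          # what a BatchSampler yields

# Indexing a list with a list raises the exact TypeError from the question.
try:
    data[batch_indices[0]]
    failure = None
except TypeError as err:
    failure = str(err)

# Gathering one index at a time works — the batch_sampler= code path.
batches = [[data[i] for i in batch] for batch in batch_indices]

print(failure)
print(batches)
```

Under this assumption, the fix would be to pass `batch_sampler=batch_sampler_train` (and drop `sampler=`) when constructing the `DataLoader`.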
77,064,158
5,179,643
How to create several Pandas dataframes and assign their names as elements from a list
<p>Let's say I have the following Pandas dataframe:</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'entity': ['foo', ' bar', 'baz', 'buzz', 'bar', 'buzz', 'foo', 'foo'],
    'value': [4, 3, 2, 1, 9, 8, 7, 4]
})

df
  entity  value
0    foo      4
1    bar      3
2    baz      2
3   buzz      1
4    bar      9
5   buzz      8
6    foo      7
7    foo      4
</code></pre> <p>For each unique value of <code>entity</code>, I'd like to create a separate dataframe and assign its name as the unique value of <code>entity</code>, prefaced with <code>df_</code>:</p> <p>For example:</p> <pre><code>df_foo
df_bar
df_baz
df_buzz
</code></pre> <p>How would I do this?</p> <p>Thanks!</p>
<python><pandas>
2023-09-08 04:34:56
1
2,533
equanimity
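Dynamically minting variable names like `df_foo` is usually discouraged in Python; the common pattern for the question above is a dict of sub-frames keyed by entity, built with `groupby`. A minimal sketch using the question's own data (note one entity is <code>' bar'</code> with a leading space, so stripping first is assumed to be desired):

```python
import pandas as pd

df = pd.DataFrame({
    'entity': ['foo', ' bar', 'baz', 'buzz', 'bar', 'buzz', 'foo', 'foo'],
    'value': [4, 3, 2, 1, 9, 8, 7, 4]
})

# Normalize stray whitespace, then build one sub-frame per unique entity.
clean = df.assign(entity=df['entity'].str.strip())
frames = {f"df_{name}": group for name, group in clean.groupby('entity')}

print(sorted(frames))        # → ['df_bar', 'df_baz', 'df_buzz', 'df_foo']
print(len(frames['df_foo'])) # → 3
```

If actual top-level variables are truly required, `globals().update(frames)` would create them, but keeping the dict is the safer design: the sub-frames stay enumerable and the namespace stays predictable.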
77,064,104
13,078,279
Does using render_template() with Flask-SocketIO cause non-auto-updating?
<p>I recently ported a Socket.IO app from Node.js to Python. I noticed that unlike the Node.js original, the Python version didn't autoupdate on multiple connections. While that app is far too complex to post on a simple SO question, here is a minimal reproduction of my issue.</p> <p>Create a file called <code>templates/index.html</code>:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot;&gt;
&lt;head&gt;
    &lt;meta charset=&quot;UTF-8&quot;&gt;
    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt;
    &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;ie=edge&quot;&gt;
    &lt;script src=&quot;https://cdn.socket.io/4.5.4/socket.io.min.js&quot;&gt;&lt;/script&gt;
    &lt;title&gt;Test chat app - multiple connections&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;div id=&quot;messages&quot;&gt;&lt;/div&gt;
    &lt;input id=&quot;message&quot; type=&quot;text&quot; placeholder=&quot;Message&quot; /&gt;
    &lt;button id=&quot;send&quot;&gt;Send&lt;/button&gt;
    &lt;script&gt;
        let socket = io();
        document.getElementById(&quot;send&quot;).addEventListener(&quot;click&quot;, () =&gt; {
            let msg = document.getElementById(&quot;message&quot;).value;
            socket.emit(&quot;chat&quot;, { message: msg });
        })
        socket.on(&quot;update&quot;, (messages) =&gt; {
            document.getElementById(&quot;messages&quot;).innerHTML = messages.map((message) =&gt; `&lt;p&gt;${message}&lt;/p&gt;`).join(&quot;&quot;)
        });
    &lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>Create a file called <code>test.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template
from flask_socketio import SocketIO, send, emit

app = Flask(__name__)
socketio = SocketIO(app)

messages = []

@app.route(&quot;/&quot;)
def main():
    return render_template(&quot;index.html&quot;)

@socketio.on(&quot;connect&quot;)
def handle_connect():
    emit(&quot;update&quot;, messages)

@socketio.on(&quot;chat&quot;)
def handle_chat(data):
    msg = data[&quot;message&quot;]
    messages.append(msg)
    emit(&quot;update&quot;, messages)

if __name__ == &quot;__main__&quot;:
    socketio.run(app, port=5500)
</code></pre> <p>Now run <code>python test.py</code>, and open 2 tabs at <a href="http://localhost:5500" rel="nofollow noreferrer">http://localhost:5500</a>. If you try sending messages on the first tab, they don't show up on the second tab.</p> <p>I have a suspicion that this is due to how Flask serves static assets. Is that the issue?</p>
<python><flask><socket.io>
2023-09-08 04:18:13
1
416
JS4137
77,063,677
474,197
How to pass Mojo function to Python in Python interop?
<p>The question is how to pass a Mojo function to Python in Python interop.</p> <p>For example,</p> <pre class="lang-py prettyprint-override"><code># This is main.mojo
from python.python import Python

def callback():
    return 5

def main():
    Python.add_to_path(&quot;.&quot;)
    let test_module = Python.import_module(&quot;lib&quot;)
    print(test_module.test_interop(callback))
</code></pre> <pre class="lang-py prettyprint-override"><code># This is lib.py
def test_interop(func):
    return func()
</code></pre> <p>If I run this, it will show the following message:</p> <pre><code>$ main.mojo
main.mojo:9:33: error: invalid call to '__call__': argument #1 cannot be converted from 'fn() raises -&gt; object' to 'PythonObject'
    print(test_module.test_interop(callback))
          ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~
main.mojo:1:1: note: function declared here
from python.python import Python
^
mojo: error: failed to parse the provided Mojo
</code></pre>
<python><mojolang>
2023-09-08 01:40:45
1
3,320
HKTonyLee
77,063,612
5,924,264
Does setattr make a deepcopy?
<p><a href="https://docs.python.org/3/library/functions.html#setattr" rel="nofollow noreferrer">https://docs.python.org/3/library/functions.html#setattr</a> From this documentation, I cannot tell if a deep copy is made or not.</p> <p>I'm working with a codebase currently that uses <code>setattr</code> in a rather disgusting way.</p> <p>Here is the relevant snippet from the constructor of a class:</p> <pre><code>for var in vars:
    # ExtMang is another class
    setattr(self, var, ExtMang(self))

# create an alias
for var in vars:
    if var.endswith(&quot;_modified&quot;):
        setattr(self, var[:-9], getattr(self, var))
</code></pre> <p>I'm mostly a C++ guy, so I don't know how common/acceptable this is to do in python, but this snippet looks awful to me.</p> <p>Essentially, I'm trying to figure out if any unmodified var is a deep copy of or a reference to the modified var. e.g., say <code>vars = [&quot;price_modified&quot;]</code>. In the first loop, <code>price_modified</code> would be set as an attribute of self. In the second loop, it would set <code>price</code> as an attribute of <code>self</code>. I'm trying to figure out if <code>self.price</code> is a deep copy or a reference of <code>self.price_modified</code></p>
<python><deep-copy><shallow-copy><setattr>
2023-09-08 01:11:13
0
2,502
roulette01
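To the deep-copy question above: `setattr(obj, name, value)` makes no copy at all — like ordinary assignment, it just binds the attribute name to the given object. So in the snippet, `self.price` and `self.price_modified` end up referencing the same instance. A stdlib-only sketch demonstrating this; the `ExtMang`/`Holder` classes here are hypothetical stand-ins for the questioner's codebase:

```python
class ExtMang:                 # hypothetical stand-in for the real class
    def __init__(self):
        self.value = 0

class Holder:
    def __init__(self, names):
        for var in names:
            setattr(self, var, ExtMang())
        # create an alias, as in the question's constructor
        for var in names:
            if var.endswith("_modified"):
                setattr(self, var[:-9], getattr(self, var))

h = Holder(["price_modified"])

# Both names are bound to the identical object — no copy, deep or shallow.
print(h.price is h.price_modified)   # → True

h.price_modified.value = 42
print(h.price.value)                 # → 42: mutation is visible via the alias
```

This mirrors Python's general assignment semantics: names refer to objects, and copies only happen when explicitly requested (e.g. via the `copy` module).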
77,063,590
13,258,121
styling Tab title text in ipywidgets
<p>Is it possible to style the title of a tab in ipywidgets? Using the <a href="https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#tabs" rel="nofollow noreferrer">docs example</a>, is it possible to set a style for the tabs 0-4 either individually or as a collective?</p> <pre><code>tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
tab.titles = [str(i) for i in range(len(children))]
tab
</code></pre> <p>There is no <code>style</code> attribute for the <code>Tab()</code>, and options for <code>tab.titles</code> are <code>['count', 'index']</code> The <a href="https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Styling.html" rel="nofollow noreferrer">styling of widgets</a> is related to widgets themselves rather than the containers they are in.</p>
<python><ipywidgets>
2023-09-08 01:06:02
1
370
Lachlan