| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
74,796,726
| 9,869,260
|
Pandas DataFrame: create new column with groupby count, with a condition on the count
|
<p>I have this Dataframe</p>
<pre><code> df = pd.DataFrame({"A": [1, 1, 1, 1, 1, 2, 2, 2, 3], "B": [1, 4, 5, 6, 10, 7, 8, 9, 3], "C": ["Hello", "World", "How", "are", "you", "today", "miss", "?", "!"]})
A B C
0 a1 a1 Hello
1 a1 a4 World
2 a1 a5 How
3 a1 a6 are
4 a1 a10 you
5 a2 a7 today
6 a2 a8 miss
7 a2 a9 ?
8 a3 a3 !
</code></pre>
<p>And I want something like this</p>
<pre><code> A B C n
0 a1 a1 Hello 4
1 a1 a4 World 4
2 a1 a5 How 4
3 a1 a6 are 4
4 a1 a10 you 4
5 a2 a7 today 3
6 a2 a8 miss 3
7 a2 a9 ? 3
8 a3 a3 ! 0
</code></pre>
<p>I tried this operation</p>
<pre><code>df["n"] = df.loc[df.A != df.B].groupby("A")["B"].transform(len)
</code></pre>
<p>But I have this result</p>
<pre><code> A B C n
0 a1 a1 Hello NaN
1 a1 a4 World 4
2 a1 a5 How 4
3 a1 a6 are 4
4 a1 a10 you 4
5 a2 a7 today 3
6 a2 a8 miss 3
7 a2 a9 ? 3
8 a3 a3 ! NaN
</code></pre>
<p>Do you know how I could apply my condition <code>df.A != df.B</code> inside the <code>transform</code> instead of on the original dataframe?
Thanks</p>
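A sketch of one way to get the expected output, using the frame from the question: filter first, count per group, then map the counts back so that excluded rows still receive their group's count (0 when the whole group was filtered out):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 1, 1, 1, 2, 2, 2, 3],
                   "B": [1, 4, 5, 6, 10, 7, 8, 9, 3],
                   "C": ["Hello", "World", "How", "are", "you",
                         "today", "miss", "?", "!"]})

# Count, per group of A, only the rows where A != B ...
counts = df.loc[df.A != df.B].groupby("A").size()
# ... then map the per-group count back onto every row; groups that were
# filtered out entirely (here A == 3) get 0.
df["n"] = df["A"].map(counts).fillna(0).astype(int)
```

This avoids the NaN holes because the mapping is done on the full, unfiltered column.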
|
<python><pandas>
|
2022-12-14 10:21:22
| 1
| 437
|
Chjul
|
74,796,555
| 5,346,843
|
Error locating Pandas index using Timestamp
|
<p>I have a <code>Pandas</code> <code>dataframe</code> that looks something like this:</p>
<pre><code> a b c ... x y z
date ...
2043-10-01 10230.413086 846.184082 0.267180 ... 2771.997314 20.699804 4000.0
2043-11-01 10229.154297 841.288513 0.267003 ... 2770.365723 20.749172 4000.0
2043-12-01 10231.440430 836.821472 0.266981 ... 2769.230469 20.797396 4000.0
2044-01-01 10237.501953 832.406677 0.267381 ... 2768.310547 20.849573 4000.0
2044-02-01 10233.545898 827.571655 0.266966 ... 2766.528564 20.897126 4000.0
2044-03-01 10235.044922 823.357910 0.266938 ... 2765.628906 20.942534 4000.0
2044-04-01 10243.462891 819.170654 0.267569 ... 2765.451172 20.993223 4000.0
2044-05-01 10236.799805 814.516602 0.266984 ... 2763.450684 21.038358 4000.0
2044-06-01 10240.304688 810.241150 0.266869 ... 2762.673828 21.087164 4000.0
2044-07-01 10259.951172 806.501587 0.267803 ... 2764.588135 21.142576 4000.0
</code></pre>
<p>I want to extract the values at dates defined using a <code>Pandas</code> <code>date_range</code> eg:</p>
<pre><code>import pandas as pd
for xdat in pd.date_range(start="2040/01/01", end="2044/07/01", freq="MS"):
x = df[xdat]['x']
</code></pre>
<p>However, I get this error <code>KeyError: Timestamp('2040-01-01 00:00:00')</code>. I have tried converting the <code>Timestamp</code> variable <code>xdat</code> using <code>pd.to_datetime</code> (and variations of this) but so far without success. I'm sure the answer is trivial but I can't see it so would appreciate any suggestions. Thanks in advance!</p>
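The `KeyError` comes from `df[xdat]`, which looks up a *column* label; row lookups on the index go through `df.loc`. The range also starts in 2040 while the index starts in 2043, so missing dates need a guard. A minimal sketch with a made-up two-row frame standing in for the one in the question:

```python
import pandas as pd

# Stand-in for the frame in the question; the index must be a DatetimeIndex
# (use pd.to_datetime(df.index) if it is currently strings).
df = pd.DataFrame({"x": [2771.997314, 2770.365723]},
                  index=pd.to_datetime(["2043-10-01", "2043-11-01"]))

values = []
for xdat in pd.date_range(start="2040/01/01", end="2043/11/01", freq="MS"):
    if xdat in df.index:                   # 2040-2043 dates simply are not there
        values.append(df.loc[xdat, "x"])   # row lookup, not df[xdat] (column lookup)
```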
|
<python><pandas>
|
2022-12-14 10:08:06
| 1
| 545
|
PetGriffin
|
74,796,509
| 1,150,683
|
number() not supported in LXML XPath parsing?
|
<p>I am running into some unexpected issues with XPath. I have a query that runs fine when using a database system like BaseX, but in Python with <code>lxml</code> it throws an error.</p>
<p>Here is an example:</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree as ET
xml = """<tree>
<node begin="0" cat="top" end="27" id="0" rel="top">
<node begin="0" end="1" frame="punct(aanhaal_both)" id="1" lcat="--" lemma="&apos;" pos="punct" postag="LET()" pt="let" rel="--" root="&quot;" sense="&quot;" special="aanhaal_both" word="&quot;"/>
<node begin="12" end="13" frame="punct(komma)" id="2" lcat="punct" lemma="," pos="punct" postag="LET()" pt="let" rel="--" root="," sense="," special="komma" word=","/>
<node begin="1" cat="du" end="26" id="3" rel="--">
<node begin="1" cat="du" end="20" id="4" rel="dp">
<node begin="1" cat="cp" end="12" id="5" rel="sat">
<node begin="1" end="2" frame="complementizer(al)" id="6" lcat="cp" lemma="al" pos="comp" postag="BW()" pt="bw" rel="cmp" root="al" sc="al" sense="al" word="Al"/>
<node begin="2" cat="sv1" end="12" id="7" rel="body">
<node begin="2" end="3" frame="verb(hebben,sg1,transitive)" id="8" infl="sg1" lcat="sv1" lemma="geven" pos="verb" postag="WW(pv,tgw,ev)" pt="ww" pvagr="ev" pvtijd="tgw" rel="hd" root="geef" sc="transitive" sense="geef" tense="present" word="geef" wvorm="pv"/>
<node begin="3" case="both" def="def" end="4" frame="pronoun(nwh,je,sg,de,both,def,wkpro)" gen="de" getal="ev" id="9" lcat="np" lemma="je" naamval="nomin" num="sg" pdtype="pron" per="je" persoon="2v" pos="pron" postag="VNW(pers,pron,nomin,red,2v,ev)" pt="vnw" rel="su" root="je" sense="je" special="wkpro" status="red" vwtype="pers" wh="nwh" word="je"/>
<node begin="4" cat="np" end="12" id="10" rel="obj1">
<node begin="4" end="5" frame="noun(de,count,pl)" gen="de" getal="mv" graad="basis" id="11" lcat="np" lemma="programmeur" ntype="soort" num="pl" pos="noun" postag="N(soort,mv,basis)" pt="n" rel="hd" root="programmeur" sense="programmeur" word="programmeurs"/>
<node begin="5" cat="pp" end="12" id="12" rel="mod">
<node begin="5" end="6" frame="preposition(van,[af,uit,vandaan,[af,aan]])" id="13" lcat="pp" lemma="van" pos="prep" postag="VZ(init)" pt="vz" rel="hd" root="van" sense="van" vztype="init" word="van"/>
<node begin="6" cat="np" end="12" id="14" rel="obj1">
<node begin="6" end="7" frame="noun(de,count,pl)" gen="de" getal="mv" graad="basis" id="15" lcat="np" lemma="vertaalcomputer" ntype="soort" num="pl" pos="noun" postag="N(soort,mv,basis)" pt="n" rel="hd" root="vertaal_computer" sense="vertaal_computer" word="vertaalcomputers"/>
<node begin="7" cat="pp" end="12" id="16" rel="mod">
<node begin="7" end="8" frame="er_vp_adverb" getal="getal" id="17" lcat="advp" lemma="er" naamval="stan" pdtype="adv-pron" persoon="3" pos="adv" postag="VNW(aanw,adv-pron,stan,red,3,getal)" pt="vnw" rel="obj1" root="er" sense="er" special="er" status="red" vwtype="aanw" word="er"/>
<node begin="8" cat="np" end="11" id="18" rel="mod">
<node begin="8" end="9" frame="modal_adverb" id="19" lcat="advp" lemma="nog" pos="adv" postag="BW()" pt="bw" rel="mod" root="nog" sc="modal" sense="nog" word="nog"/>
<node begin="9" end="10" frame="number(hoofd(pl_num))" id="20" infl="pl_num" lcat="detp" lemma="vijftig" naamval="stan" numtype="hoofd" pos="num" positie="prenom" postag="TW(hoofd,prenom,stan)" pt="tw" rel="det" root="vijftig" sense="vijftig" special="hoofd" word="vijftig"/>
<node begin="10" end="11" frame="tmp_noun(het,count,meas)" gen="het" genus="onz" getal="ev" graad="basis" id="21" lcat="np" lemma="jaar" naamval="stan" ntype="soort" num="meas" pos="noun" postag="N(soort,ev,basis,onz,stan)" pt="n" rel="hd" root="jaar" sense="jaar" special="tmp" word="jaar"/>
</node>
<node begin="11" end="12" frame="preposition(bij,[vandaan])" id="22" lcat="pp" lemma="bij" pos="prep" postag="VZ(fin)" pt="vz" rel="hd" root="bij" sense="bij" vztype="fin" word="bij"/>
</node>
</node>
</node>
</node>
</node>
</node>
</node>
</node>
<node begin="26" end="27" frame="punct(punt)" id="42" lcat="--" lemma="." pos="punct" postag="LET()" pt="let" rel="--" root="." sense="." special="punt" word="."/>
</node>
</tree>
"""
xpath = """//node[@cat="cp" and node[@rel="cmp" and @pt="vg" and number(@begin) < ../node[@rel="body" and @cat="ssub"]/node[@rel="vc" and @cat="ppart"]/node[@rel="hd" and @pt="ww"]/number(@begin)] and node[@rel="body" and @cat="ssub" and node[@rel="vc" and @cat="ppart" and node[@rel="hd" and @pt="ww" and number(@begin) < ../../node[@rel="hd" and @pt="ww"]/number(@begin)]] and node[@rel="hd" and @pt="ww"]]]"""
root = ET.fromstring(xml)
results = root.xpath(xpath)
print(results)
</code></pre>
<p>Here is a beautified version of the XPath:</p>
<pre class="lang-xml prettyprint-override"><code>//node[
@cat="cp" and
node[
@rel="cmp" and
@pt="vg" and
number(@begin) < ../node[
@rel="body" and
@cat="ssub"
]/node[
@rel="vc" and
@cat="ppart"
]/node[
@rel="hd" and
@pt="ww"
]/number(@begin)
] and
node[
@rel="body" and
@cat="ssub" and
node[
@rel="vc" and
@cat="ppart" and
node[
@rel="hd" and
@pt="ww" and
number(@begin) < ../../node[
@rel="hd" and
@pt="ww"
]/number(@begin)
]
] and
node[
@rel="hd" and
@pt="ww"
]
]
]
</code></pre>
<p>When I remove the number comparisons, and simplify the query to the following, I do not get any results (expected) but at least I do not get any errors.</p>
<pre class="lang-xml prettyprint-override"><code>//node[
@cat="cp" and
node[
@rel="cmp" and
@pt="vg"
] and
node[
@rel="body" and
@cat="ssub" and
node[
@rel="vc" and
@cat="ppart" and
node[
@rel="hd" and
@pt="ww"
]
] and
node[
@rel="hd" and
@pt="ww"
]
]
]
</code></pre>
<p>So how can I use <code>number()</code> in my XPath in Python (preferably with lxml, but I am open to other libraries too)? And why does this work in BaseX but not in Python?</p>
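A likely explanation: lxml implements XPath 1.0, where a function call such as `number()` cannot appear as a location-path *step* — the `.../node[...]/number(@begin)` syntax is XPath 2.0+, which BaseX supports. In XPath 1.0 the function wraps a complete path expression inside a predicate instead. A minimal sketch over a simplified (assumed) document:

```python
import lxml.etree as ET

xml = '<tree><node rel="cmp" begin="2"/><node rel="hd" begin="10"/></tree>'
root = ET.fromstring(xml)

# XPath 1.0 style: number(...) wraps the whole sub-path inside the predicate;
# it is never used as a path step like ../node[...]/number(@begin).
result = root.xpath('//node[@rel="cmp" and '
                    'number(@begin) < number(../node[@rel="hd"]/@begin)]')
```

Note that the numeric conversion matters: as strings, `"2" < "10"` would be false, but `number()` compares 2 < 10. Rewriting each `/number(@begin)` tail in the original query into a `number(path/@begin)` call should make it run under lxml.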
|
<python><xml><lxml><basex><xpath-1.0>
|
2022-12-14 10:04:28
| 1
| 28,776
|
Bram Vanroy
|
74,796,337
| 1,939,730
|
How to count how many times an event occurred in the last 24 hours before a given date?
|
<p>Given a dataframe representing orders with columns: <code>ID, order_date, expedition_date, ...</code> I want to assign a new column to the dataframe that contains, for each row, how many orders were placed (i.e., order_date) in the last 24h with respect to the row's expedition_date.</p>
<p>For example: for a row with expedition_date <strong>2022-12-14 10:46:00</strong>, the new calculated field would be the number of rows with order_date between <strong>2022-12-13 10:46:00</strong> and <strong>2022-12-14 10:46:00</strong>.</p>
<p>I can't just use rolling on order_date, as that would return, for each row, how many orders were placed in the last 24h with respect to its own <code>order_date</code>, not with respect to its <code>expedition_date</code>.</p>
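One approach that sidesteps `rolling` entirely: sort the order dates once, then binary-search (`searchsorted`) the window boundaries for every expedition date. Whether the window endpoints are inclusive is a judgment call; this sketch (with invented sample data) counts orders in the half-open window (expedition − 24h, expedition]:

```python
import pandas as pd

df = pd.DataFrame({
    "order_date": pd.to_datetime(
        ["2022-12-13 00:00", "2022-12-13 12:00", "2022-12-14 09:00"]),
    "expedition_date": pd.to_datetime(
        ["2022-12-13 06:00", "2022-12-14 10:00", "2022-12-14 12:00"]),
})

# Sorted order timestamps allow O(log n) counting per row.
od = df["order_date"].sort_values().to_numpy()
hi = od.searchsorted(df["expedition_date"].to_numpy(), side="right")
lo = od.searchsorted((df["expedition_date"] - pd.Timedelta("24h")).to_numpy(),
                     side="right")
df["orders_last_24h"] = hi - lo   # expedition-24h < order_date <= expedition
```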
|
<python><pandas><window><rolling-computation>
|
2022-12-14 09:51:42
| 1
| 567
|
Carlos Navarro Astiasarán
|
74,796,191
| 1,184,899
|
Get element text behind shadow DOM element using Playwright
|
<p>I am trying to use Playwright to get contents of the open shadow root element which looks like this.</p>
<pre><code><some-element>
#shadow-root
ABC
</some-element>
</code></pre>
<p>Here <code>#shadow-root</code> contains text <code>ABC</code> without any additional tags.</p>
<p>I am able to locate <code>some-element</code>, but I cannot find a way to get the contents of <code>#shadow-root</code>.</p>
<p>Example Python code I am using is below:</p>
<pre><code>from playwright.sync_api import sync_playwright
with sync_playwright() as p:
browser = p.firefox.launch(args=["--disable-gpu"], headless=False)
page = browser.new_page()
page.goto("https://www.sample.com")
some_element = page.locator('some-element')
...
# ???
</code></pre>
<p>Playwright <a href="https://playwright.dev/python/docs/selectors" rel="nofollow noreferrer">docs</a> state that their selectors can choose elements in shadow DOM, but examples contain only options where <code>shadow-root</code> contains other tags.</p>
<p>How do I get the contents of <code>#shadow-root</code> if it only contains the text, without any tags?</p>
|
<python><playwright><playwright-python>
|
2022-12-14 09:38:56
| 2
| 704
|
Termos
|
74,796,189
| 11,167,163
|
How to remove white space in the middle of a pie chart?
|
<p>Below is the code I wrote to have a reproducible example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
size = 0.3
vals = np.array([[60., 32.], [37., 40.], [29., 10.]])
cmap = plt.get_cmap("tab20c")
outer_colors = cmap(np.arange(3)*4)
inner_colors = cmap([1, 2, 5, 6, 9, 10])
ax.pie(vals.sum(axis=1), radius=3-0.9, colors=outer_colors,
wedgeprops=dict(width=0.9, edgecolor='w'),labels=["Europe","North America","Scandinavia"],
pctdistance=1.1, labeldistance=0.65)
ax.pie(vals.flatten(), radius=3, colors=inner_colors,
wedgeprops=dict(width=0.9, edgecolor='w'),labels=["Germany","France","USA","Mexico","Finland","Sweden"],
pctdistance=1.1, labeldistance=0.85)
ax.set(aspect="equal", title='')
plt.show()
</code></pre>
<p>which gives the following chart :</p>
<p><a href="https://i.sstatic.net/7NASy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7NASy.png" alt="enter image description here" /></a></p>
<p>How can we play with the radius to reduce or even remove the white space in the middle?</p>
<p>the goal is to have the following :</p>
<p><a href="https://i.sstatic.net/o4ctL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o4ctL.png" alt="enter image description here" /></a></p>
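Each `ax.pie` ring spans from `radius - width` to `radius`, so with the inner call at `radius=3-0.9` and `width=0.9` the wedges cover 1.2 to 2.1, leaving a white disc of radius 1.2. Setting the inner ring's `width` equal to its `radius` fills it all the way to the center. A sketch (labels omitted, headless backend for reproducibility):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
vals = np.array([[60., 32.], [37., 40.], [29., 10.]])
cmap = plt.get_cmap("tab20c")

# Inner ring: width == radius, so the wedges reach the center (no hole).
ax.pie(vals.sum(axis=1), radius=2.1, colors=cmap(np.arange(3) * 4),
       wedgeprops=dict(width=2.1, edgecolor='w'))
# Outer ring: unchanged annulus from 2.1 to 3.
ax.pie(vals.flatten(), radius=3, colors=cmap([1, 2, 5, 6, 9, 10]),
       wedgeprops=dict(width=0.9, edgecolor='w'))
ax.set(aspect="equal")
```

To merely shrink the hole instead of removing it, pick any `width` between 0.9 and 2.1 for the inner call.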
|
<python><matplotlib>
|
2022-12-14 09:38:36
| 1
| 4,464
|
TourEiffel
|
74,796,120
| 2,528,453
|
Saving matplotlib figure from tkinter application without using pyplot
|
<p>I am using multiple <code>matplotlib</code> figures to visualize data in a <code>tkinter</code> application and so far that has been going great by using the <a href="https://matplotlib.org/stable/users/explain/api_interfaces.html" rel="nofollow noreferrer">explicit interface</a>, i.e. by working on the axis objects.</p>
<p>Now I would like to save some figures to files. For this I can only find the pyplot function <code>plt.savefig</code> which is using the <code>pyplot</code> interface. Hence, since I have multiple figures I need to run</p>
<pre><code>plt.figure(somefig)
</code></pre>
<p>to choose the figure I'd like to save. Unfortunately, this breaks in my usecase since I'm using <code>FigureCanvasTkAgg</code> to get the canvas for <code>tkinter</code>. As discribed in <a href="https://github.com/matplotlib/matplotlib/issues/19380" rel="nofollow noreferrer">this bugreport</a>, that call ruins the figure mangager so the call <code>plt.figure(somefig)</code> results in the error</p>
<pre><code>ValueError: The passed figure is not managed by pyplot
</code></pre>
<p>So I guess my question is: Is there a way to save a figure without using the implicit <code>pyplot</code> module?</p>
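`savefig` also exists as a method on the `Figure` object itself, so no pyplot figure manager is needed: `somefig.savefig(path)` works for any figure, including ones embedded via `FigureCanvasTkAgg`. A sketch using the explicit interface only (Agg canvas here; the Tk canvas works the same way):

```python
import os
import tempfile

from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

# A figure built with the explicit interface; pyplot is never imported.
fig = Figure()
FigureCanvasAgg(fig)           # attach a canvas (FigureCanvasTkAgg also works)
ax = fig.add_subplot()
ax.plot([1, 2, 3], [4, 5, 6])

out = os.path.join(tempfile.mkdtemp(), "somefig.png")
fig.savefig(out)               # Figure.savefig: no figure manager involved
```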
|
<python><matplotlib><tkinter>
|
2022-12-14 09:32:43
| 0
| 1,061
|
obachtos
|
74,795,811
| 11,729,954
|
Get version of Python poetry project from inside the project
|
<p>I have a Python library packaged using Poetry. I have the following requirements.</p>
<p>Inside the poetry project</p>
<pre class="lang-py prettyprint-override"><code># example_library.py
def get_version() -> str:
# return the project version dynamically. Only used within the library
def get_some_dict() -> dict:
# meant to be exported and used by others
return {
"version": get_version(),
"data": #... some data
}
</code></pre>
<p>In the host project, I want the following test case to pass no matter which version of <code>example_library</code> I'm using</p>
<pre class="lang-py prettyprint-override"><code>from example_library import get_some_dict
import importlib.metadata
version = importlib.metadata.version('example_library')
assert get_some_dict()["version"] == version
</code></pre>
<p>I have researched ideas about reading the <code>pyproject.toml</code> file but I'm not sure how to make the function read the toml file regardless of the library's location.</p>
<p>The <a href="https://github.dev/python-poetry/poetry" rel="noreferrer">Poetry API</a> doesn't really help either, because I just want to read the top level TOML file from within the library and get the version number, not create one from scratch.</p>
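One way that avoids locating the TOML file at all: `importlib.metadata.version()` reads the installed distribution's metadata, which Poetry fills in from `pyproject.toml` at build/install time, so it works from inside the library itself. A sketch — the distribution name `"example_library"` and the fallback string are assumptions:

```python
from importlib.metadata import PackageNotFoundError, version

def get_version() -> str:
    try:
        # Distribution name as it appears in pyproject.toml (assumed here).
        return version("example_library")
    except PackageNotFoundError:
        # Running from a source checkout that was never installed.
        return "0.0.0+dev"
```

Because the host project's test also uses `importlib.metadata.version('example_library')`, both sides read the same metadata and the assertion holds for any installed version.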
|
<python><python-packaging><python-poetry>
|
2022-12-14 09:05:38
| 2
| 457
|
Beast
|
74,795,728
| 7,766,024
|
Getting a ModuleNotFoundError when trying to import from a particular module
|
<p>The directory I have looks like this:</p>
<pre><code>repository
/src
/main.py
/a.py
/b.py
/c.py
</code></pre>
<p>I run my program via <code>python ./main.py</code> and within <code>main.py</code> there's an import statement <code>from a import some_func</code>. I'm getting a <code>ModuleNotFoundError: No module named 'a'</code> every time I run the program.</p>
<p>I've tried running the Python shell and running the commands <code>import b</code> or <code>import c</code> and those work without any errors. There's nothing particularly special about <code>a</code> either, it just contains a few functions.</p>
<p>What's the problem and how can I fix this issue?</p>
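A quick way to diagnose this kind of mismatch is `importlib.util.find_spec`, which reports whether (and from which file) a module name would resolve; comparing its output for `a`, `b` and `c` from inside `main.py` usually reveals a `sys.path` difference or a shadowing installed package named `a`. A generic sketch of the technique (stdlib names used so it runs anywhere):

```python
import importlib.util
import sys

# sys.path[0] is the directory of the script being run; sibling modules
# like a.py are importable only if that directory is src/.
print(sys.path[0])

def where(name):
    """Return the file a module name would be loaded from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print(where("json"))                 # a stdlib module: always resolvable
print(where("no_such_module_xyz"))   # unresolvable name: None
```

Running `where("a")` from the failing process shows immediately whether the name is invisible (None) or resolving to an unexpected location.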
|
<python><python-import>
|
2022-12-14 08:58:31
| 2
| 3,460
|
Sean
|
74,795,650
| 1,231,714
|
Plotting a variable number of columns in pandas
|
<p>I have a CSV file that has a date column and measurement columns. Sometimes there are 4 or more measurement columns. Regardless of the number of columns, I would like to plot all columns on the same plot (same axes, not as subplots). Right now I have the following syntax for plotting all 4 columns; how do I generalize plotting for any number of columns?</p>
<pre><code>ax1 = df.plot(x="Date",y=["M1","M2","M3","M4"])
</code></pre>
<p>Thanks.</p>
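One generalization is to build the `y` list from the frame itself: every column except the x-axis column. A sketch with invented column names:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import pandas as pd

df = pd.DataFrame({"Date": pd.date_range("2022-01-01", periods=3),
                   "M1": [1, 2, 3], "M2": [4, 5, 6], "M3": [7, 8, 9]})

# Every column except the x-axis column, however many there are.
measurement_cols = [c for c in df.columns if c != "Date"]
ax1 = df.plot(x="Date", y=measurement_cols)
```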
|
<python><pandas><matplotlib>
|
2022-12-14 08:52:30
| 0
| 1,390
|
SEU
|
74,795,598
| 10,035,190
|
How to do a pandas groupby?
|
<p>I have an Excel sheet:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Animal</th>
<th>max Speed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Falcon</td>
<td>34</td>
</tr>
<tr>
<td>Falcon</td>
<td>42</td>
</tr>
<tr>
<td>Parrot</td>
<td>18</td>
</tr>
<tr>
<td>Parrot</td>
<td>29</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to group it like this:</p>
<p><a href="https://i.sstatic.net/NoxJw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NoxJw.png" alt="enter image description here" /></a></p>
<p>Here is my code. I took it from the official pandas docs, but it's not working:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Animal': ['Falcon', 'Falcon','Parrot', 'Parrot'],
'Max Speed': [380., 370., 24., 26.]})
l = df.groupby("Animal", group_keys=True).apply(lambda x: x)
l.to_excel("kk.xlsx", index=False)
</code></pre>
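The doc snippet `apply(lambda x: x)` returns every row unchanged, which is why nothing appears grouped. If the goal (as the screenshot suggests, an assumption) is one row per animal with its maximum speed, a plain aggregation does it:

```python
import pandas as pd

df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
                   'Max Speed': [380., 370., 24., 26.]})

# One row per animal; as_index=False keeps Animal as a normal column,
# which is convenient for to_excel(..., index=False).
out = df.groupby("Animal", as_index=False)["Max Speed"].max()
# out.to_excel("kk.xlsx", index=False)
```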
|
<python><pandas><group-by>
|
2022-12-14 08:46:49
| 2
| 930
|
zircon
|
74,795,541
| 4,128,957
|
How to find all leaf and non-leaf records and update them on each non-leaf record in Odoo 13
|
<p>In the employee profile, I added a <code>Many2one</code> field which relates to the employee table itself (the reporting officer of the current employee), and another field which is a <code>Many2one</code> with a foreign key to the reporting officer field.</p>
<p>I also have a <code>Many2many</code> field related to the employee table itself, called "All reporters". I need to populate all the reporters into the <code>Many2many</code> field, including my sub-level reporters.</p>
<p>Here is the structure:</p>
<pre><code> A B C D
| | | |
/ | \ / | \ / | \ / \
E F G H I J K L M N O
/|\ /|\ | /\
P Q R S T U V W X
</code></pre>
<p>This is what I want,</p>
<ul>
<li>The many2many of A, need to populate with: E,F,G,P,Q,R,S,T,U</li>
<li>The many2many of B, need to populate with: H,I,J</li>
<li>The many2many of C, need to populate with: K,L,M,V</li>
<li>The many2many of E, need to populate with: P,Q,R</li>
<li>The many2many of N, need to populate with: W,X</li>
</ul>
<p>Here is my code:</p>
<pre><code>reportee_ids = fields.One2many('hr.employee','reporting_officer_id',string="Reportees")
employee_child_ids = fields.Many2many('hr.employee')
def compute_reportees(self,emp,list1=None):
rm_id = emp.reporting_officer_id
if list1 is None:
list1 = []
if emp.id != rm_id.id:
list1 += emp.reportee_ids.ids + rm_id.reportee_ids.ids
rm_id.user_id.write({'employee_child_ids':list1})
self.compute_reportees(rm_id,list1)
else:
return list1
</code></pre>
<p>How can I do this?</p>
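Independent of the Odoo ORM, the core of the computation is collecting all transitive reports of each employee. A plain-Python sketch over (a subset of) the structure from the question, with managers mapped to their direct reports; in Odoo, each non-leaf record would then write the collected ids into `employee_child_ids`:

```python
def collect_descendants(reports, node):
    """All direct and indirect reports of `node`.

    `reports` maps each employee to the list of people reporting to them.
    """
    out = []
    for child in reports.get(node, []):
        out.append(child)
        out.extend(collect_descendants(reports, child))
    return out

# Subset of the structure from the question (assumed reading of the diagram).
reports = {
    "A": ["E", "F", "G"], "B": ["H", "I", "J"],
    "E": ["P", "Q", "R"], "F": ["S", "T", "U"],
    "N": ["W", "X"],
}
```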
|
<python><python-3.x><odoo><odoo-13>
|
2022-12-14 08:41:19
| 1
| 4,224
|
KbiR
|
74,795,465
| 20,732,098
|
Check if a row in a column is unique in a Python DataFrame
|
<p>I have the following Dataframe:</p>
<pre class="lang-py prettyprint-override"><code>
| id1 | result |
| -------- | -------------- |
| 2 | 0.5 |
| 3 | 1.4 |
| 4 | 1.4 |
| 7 | 3.4 |
| 2 | 1.4 |
</code></pre>
<p>I want to check, for every row in the column <code>['id1']</code>, if the value is unique.</p>
<p>The output should be:</p>
<pre class="lang-py prettyprint-override"><code>False
True
True
True
False
</code></pre>
<p>The first and the last are False because id 2 exists twice.</p>
<p>I used this method:</p>
<pre class="lang-py prettyprint-override"><code>bool = df["id1"].is_unique
</code></pre>
<p>but that checks if the whole column is unique. I want to check it for each row.</p>
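`Series.duplicated(keep=False)` marks every member of a duplicate group, so its negation is exactly the per-row uniqueness flag. A sketch with the frame from the question:

```python
import pandas as pd

df = pd.DataFrame({"id1": [2, 3, 4, 7, 2],
                   "result": [0.5, 1.4, 1.4, 3.4, 1.4]})

# True where the id1 value occurs exactly once in the column.
is_unique_row = ~df["id1"].duplicated(keep=False)
```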
|
<python><pandas><dataframe>
|
2022-12-14 08:33:25
| 3
| 336
|
ranqnova
|
74,795,315
| 5,856,119
|
Processing large number of JSONs (~12TB) with Databricks
|
<p>I am looking for guidance/best practice to approach a task. I want to use Azure-Databricks and PySpark.</p>
<p><strong>Task:</strong> Load and prepare data so that it can be efficiently/quickly analyzed in the future. The analysis will involve summary statistics, exploratory data analysis and maybe simple ML (regression). Analysis part is not clearly defined yet, so my solution needs flexibility in this area.</p>
<p><strong>Data:</strong> session level data (12TB) stored in 100 000 single line JSON files. JSON schema is nested, includes arrays. JSON schema is not uniform but new fields are added over time - data is a time-series.</p>
<p>Overall, the task is to build an infrastructure so the data can be processed efficiently in the future. There will be no new data coming in.</p>
<p>My initial plan was to:</p>
<ol>
<li><p>Load data into blob storage</p>
</li>
<li><p>Process data using PySpark</p>
<ul>
<li>flatten by reading into data frame</li>
<li>save as parquet (alternatives?)</li>
</ul>
</li>
<li><p>Store in a DB so the data can be quickly queried and analyzed</p>
<ul>
<li>I am not sure which Azure solution (DB) would work here</li>
<li>Can I skip this step when data is stored in efficient format (e.g. parquet)?</li>
</ul>
</li>
<li><p>Analyze the data using PySpark by querying it from DB (or from blob storage when in parquet)</p>
</li>
</ol>
<p>Does this sound reasonable? Does anyone have materials/tutorials that follow a similar process, so I could use them as blueprints for my pipeline?</p>
|
<python><azure><pyspark><databricks><azure-databricks>
|
2022-12-14 08:19:23
| 1
| 1,311
|
An economist
|
74,795,034
| 4,845,935
|
Single-line conditional mocking return values in pytest
|
<p>I have some method in a Python module db_access called read_all_rows that takes 2 strings as parameters and returns a list:</p>
<pre><code>def read_all_rows(table_name='', mode=''):
return [] # just an example
</code></pre>
<p>I can mock this method non-conditionally using pytest like:</p>
<pre><code>mocker.patch('db_access.read_all_rows', return_value=['testRow1', 'testRow2'])
</code></pre>
<p>But I want to mock its return value in pytest depending on the <strong>table_name</strong> and <strong>mode</strong> parameters, so that it would return different values for different parameters and combinations of them, and to make this as simple as possible.</p>
<p>The pseudocode of what I want:</p>
<pre><code>when(db_access.read_all_rows).called_with('table_name1', any_string()).then_return(['testRow1'])
when(db_access.read_all_rows).called_with('table_name2' 'mode1').then_return(['testRow2', 'tableRow3'])
when(db_access.read_all_rows).called_with('table_name2' 'mode2').then_return(['testRow2', 'tableRow3'])
</code></pre>
<p>You can see that the 1st call is mocked with an "any_string" placeholder.</p>
<p>I know that it can be achieved with <strong>side_effect</strong> like</p>
<pre><code>def mock_read_all_rows:
...
mocker.patch('db_access.read_all_rows', side_effect=mock_read_all_rows)
</code></pre>
<p>but it is not very convenient, because you need to add an extra function, which makes the code cumbersome. Even with a lambda it is not so convenient, because you would need to handle all conditions manually.</p>
<p>How this could be solved in a more short and readable way (ideally in a single line of code for each mock condition)?</p>
<p>P.S. In Java's Mockito it can easily be achieved with a single line of code for each condition, like</p>
<pre><code>when(dbAccess.readAllRows(eq("tableName1"), any())).thenReturn(List.of(value1, value2));
...
</code></pre>
<p>but can I do this with Python's pytest mocker.patch?</p>
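`mocker.patch` has no built-in Mockito-style stubbing table, but the `side_effect` boilerplate can be compressed into a dict-backed lambda, keeping one line per condition. A self-contained sketch using a bare `Mock` (with pytest-mock, the same `side_effect` would be handed to `mocker.patch('db_access.read_all_rows', ...)`):

```python
from unittest.mock import Mock

# One entry per exact-argument condition, Mockito-style; the any_string()
# case for table_name1 is handled by the explicit branch before the lookup.
rules = {
    ("table_name2", "mode1"): ["testRow2", "tableRow3"],
    ("table_name2", "mode2"): ["testRow2", "tableRow3"],
}
read_all_rows = Mock(side_effect=lambda table_name="", mode="":
                     ["testRow1"] if table_name == "table_name1"
                     else rules.get((table_name, mode), []))
```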
|
<python><mocking><pytest>
|
2022-12-14 07:50:15
| 1
| 874
|
dimnnv
|
74,794,792
| 4,451,521
|
Selecting rows from a list or other iterable but in order
|
<p>I have a dataframe that has a column named "ID".
I also have another dataframe with a list of ID values that I want to use. I can select a sub-dataframe with the rows corresponding to the IDs in the list.</p>
<p>For example</p>
<pre><code>IDlist_df=pd.DataFrame({"v":[3,4,6,9]})
df=pd.DataFrame({"ID":[1,1,2,3,3,4,4,4,5,6,6,7,8,9],"name":['menelaus','helen','ulyses','paris','hector', 'priamus','hecuba','andromache','achiles','ascanius','eneas','ajax','nestor','helenus']})
selected_lines=df[df['ID'].isin(IDlist_df['v'])]
print(selected_lines)
</code></pre>
<p>With this I get</p>
<pre><code> ID name
3 3 paris
4 3 hector
5 4 priamus
6 4 hecuba
7 4 andromache
9 6 ascanius
10 6 eneas
13 9 helenus
</code></pre>
<p>I got a sub dataframe with the rows with ID 3,4,6,9</p>
<p>So far so good.</p>
<p>However, if I want to maintain the order and I have</p>
<pre><code>IDlist_df=pd.DataFrame({"v":[3,9,6,4]})
</code></pre>
<p>I get the same result as above.</p>
<p>How can I get something like</p>
<pre><code> ID name
3 3 paris
4 3 hector
13 9 helenus
9 6 ascanius
10 6 eneas
5 4 priamus
6 4 hecuba
7 4 andromache
</code></pre>
<p>(You can see that the order 3,9,6,4 is being maintained)</p>
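`isin` only filters and keeps the original row order. Merging with the ID list as the *left* frame preserves the list's order instead, since an inner merge keeps the order of the left keys. A sketch with the data from the question (note that `merge` resets the original index; call `df.reset_index()` first if those index labels must be kept):

```python
import pandas as pd

IDlist_df = pd.DataFrame({"v": [3, 9, 6, 4]})
df = pd.DataFrame({"ID": [1, 1, 2, 3, 3, 4, 4, 4, 5, 6, 6, 7, 8, 9],
                   "name": ['menelaus', 'helen', 'ulyses', 'paris', 'hector',
                            'priamus', 'hecuba', 'andromache', 'achiles',
                            'ascanius', 'eneas', 'ajax', 'nestor', 'helenus']})

# The left frame drives the key order: 3, 9, 6, 4.
selected_lines = IDlist_df.merge(df, left_on="v", right_on="ID")[["ID", "name"]]
```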
|
<python><pandas>
|
2022-12-14 07:21:48
| 2
| 10,576
|
KansaiRobot
|
74,794,744
| 11,419,494
|
Keras' model.summary() not reflecting the size of the input layer?
|
<p>In the example from 3b1b's video about neural networks (<a href="https://youtu.be/aircAruvnKk?t=746" rel="nofollow noreferrer">the video</a>), the model has 784 "neurons" in the input layer, followed by two 16-neuron dense layers, and a 10-neuron dense layer. (Please refer to the screenshot of the video provided below.) This makes sense, because for example the first neuron in the input layer will have 16 'weights' (as in x*w), so the number of weights is 784*16, followed by 16*16 and 16*10. There are also biases, which equal the number of neurons in the dense layers.
<a href="https://i.sstatic.net/D3bzS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D3bzS.png" alt="3b1b says 13002!" /></a></p>
<p>Then I made the same model in TensorFlow, and <code>model.summary()</code> shows the following:</p>
<pre><code>Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 784, 1)] 0
dense_8 (Dense) (None, 784, 16) 32
dense_9 (Dense) (None, 784, 16) 272
dense_10 (Dense) (None, 784, 10) 170
=================================================================
Total params: 474
Trainable params: 474
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>Code used to produce the above:</p>
<pre><code>#I'm using Keras through Julia so the code may look different?
input_shape = (784,1)
inputs = layers.Input(input_shape)
outputs = layers.Dense(16)(inputs)
outputs = layers.Dense(16)(outputs)
outputs = layers.Dense(10)(outputs)
model = keras.Model(inputs, outputs)
model.summary()
</code></pre>
<p>This does not seem to reflect the input shape at all. So I made another model with <code>input_shape=(1,1)</code>, and I get the same <code>Total params</code>:</p>
<pre><code>Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) [(None, 1, 1)] 0
dense_72 (Dense) (None, 1, 16) 32
dense_73 (Dense) (None, 1, 16) 272
dense_74 (Dense) (None, 1, 10) 170
=================================================================
Total params: 474
Trainable params: 474
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>I don't think it's a bug, but I probably just don't understand what these mean / how Params are calculated.</p>
<p>Any help will be much appreciated.</p>
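The key is that a `Dense` layer only contracts the *last* axis. With input shape `(784, 1)` the first `Dense(16)` sees a last dimension of 1 and contributes 1·16 + 16 = 32 parameters (which is why `(1, 1)` gives the same total); with the intended shape `(784,)` it would contribute 784·16 + 16 = 12 560, reproducing 3b1b's total of 13 002. The arithmetic, spelled out:

```python
def dense_params(last_dim, units):
    # weights (last_dim * units) plus one bias per unit
    return last_dim * units + units

# As built: input (784, 1) -> the last axis has size 1.
as_built = dense_params(1, 16) + dense_params(16, 16) + dense_params(16, 10)
# As intended: input (784,) -> the last axis has size 784.
intended = dense_params(784, 16) + dense_params(16, 16) + dense_params(16, 10)
```

So the fix in Keras would be `input_shape = (784,)` rather than `(784, 1)`.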
|
<python><tensorflow><keras>
|
2022-12-14 07:16:21
| 1
| 316
|
jshji
|
74,794,708
| 13,954,738
|
How to sort a list of dictionaries alphabetically in Jinja2
|
<p>My sample list of dictionaries is shown below:</p>
<pre class="lang-py prettyprint-override"><code>mydata = [{'data': [27, 3, 30, None, None], 'name': 'S1'}, {'data': [57.33, 6.37, 63.7, None, None], 'name': 'A2'}, {'data': [2349.62, 261.09, 2610.71, 0, 0], 'name': 'Total'}]
</code></pre>
<p>I want to sort the whole list of dictionaries alphabetically on the basis of the key <code>name</code>, but have to exclude the last element of the list, whose name is <code>Total</code>.<br />
The dictionary where the name is <code>Total</code> should stay the last element of the list.</p>
<p>How should I proceed?</p>
<p>I tried something like below:</p>
<pre><code><div class="table-data">
<table class="table-theme">
<tr>
<th>Name</th>
{% for data in mydata |sort(attribute='0.name') %}
<th>{{ data["name"] }}</th>
{% endfor %}
</tr>
</table>
</div>
</code></pre>
<p>It's not sorting the data as expected.<br />
Also, how do I keep the dictionary named <code>Total</code> at the end?</p>
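The attribute path `'0.name'` is why nothing sorts: inside the loop each item is already a dict, so the filter should be `sort(attribute='name')`. Keeping `Total` last can be done by filtering it out with `rejectattr` and appending it after the loop. A sketch rendered outside any HTML for brevity:

```python
from jinja2 import Template

mydata = [{'data': [27, 3, 30, None, None], 'name': 'S1'},
          {'data': [57.33, 6.37, 63.7, None, None], 'name': 'A2'},
          {'data': [2349.62, 261.09, 2610.71, 0, 0], 'name': 'Total'}]

tmpl = Template(
    "{% for d in mydata | rejectattr('name', 'equalto', 'Total')"
    " | sort(attribute='name') %}{{ d['name'] }},{% endfor %}Total")
rendered = tmpl.render(mydata=mydata)
```

In the original template, the same two filters go on the `{% for %}` line, with a literal `<th>Total</th>` after `{% endfor %}`.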
|
<python><html><python-3.x><flask><jinja2>
|
2022-12-14 07:12:38
| 1
| 336
|
ninjacode
|
74,794,636
| 17,782,348
|
Convert array of dicts to timestamp-indexed DataFrame
|
<p><code>ts</code> should be the index (Unix time in milliseconds); the column name is an arbitrary string.</p>
<pre><code>[
{'ts': 1669246574000, 'value': '6.06'},
{'ts': 1669242973000, 'value': '6.5'}
]
</code></pre>
<p>I would like to have the following output</p>
<pre><code>DT index SomeParam
1669242973000 6.5
1669246574000 6.06
</code></pre>
<p>Instead of</p>
<pre><code>DT index ts SomeParam
0 1669242973000 6.5
1 1669246574000 6.06
</code></pre>
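A sketch of one way to get there (the column name `SomeParam` is the question's placeholder; the `to_datetime` step is optional if the raw millisecond integers should stay as the index):

```python
import pandas as pd

records = [{'ts': 1669246574000, 'value': '6.06'},
           {'ts': 1669242973000, 'value': '6.5'}]

df = (pd.DataFrame(records)
        .astype({'value': float})                 # values arrive as strings
        .set_index('ts')                          # ts becomes the index, not a column
        .sort_index()
        .rename(columns={'value': 'SomeParam'}))  # arbitrary column name

# Optional: convert the millisecond epoch index to real timestamps:
# df.index = pd.to_datetime(df.index, unit='ms')
```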
|
<python><pandas><dataframe><time-series>
|
2022-12-14 07:03:46
| 0
| 559
|
devaskim
|
74,794,574
| 8,208,804
|
Pytest parametrized django_db mark required
|
<p>I am writing unit tests for a function. In <code>parametrize</code>, I am generating test cases by making some DB calls when required.</p>
<pre class="lang-py prettyprint-override"><code>def my_function(tokens):
pass
def generate_tokens_helper(**filters):
tokens = list(MyTable.objects.values(**filters))
return tokens
@pytest.mark.django_db
class TestMyClass:
@pytest.mark.parametrize(
"tokens, expected_result",
[
(
generate_tokens_helper(),
True
)
],
)
def test_my_function(self, tokens, expected_result):
assert expected_result == my_function(tokens)
</code></pre>
<p>This is generating the following error:</p>
<pre><code>test_file.py:47: in <module>
class TestMyClass:
test_file.py:17: in TestMyClass
generate_token_helper(),
test_file.py:5: in generate_token_helper
items = list(MyTable.objects.values())
spm/accessors/variable_accessor.py:182: in get_all_operators
operator_list = list(qs)
venv/lib/python3.9/site-packages/django/db/models/query.py:269: in __len__
self._fetch_all()
venv/lib/python3.9/site-packages/django/db/models/query.py:1308: in _fetch_all
self._result_cache = list(self._iterable_class(self))
venv/lib/python3.9/site-packages/django/db/models/query.py:53: in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1154: in execute_sql
cursor = self.connection.cursor()
venv/lib/python3.9/site-packages/django/utils/asyncio.py:26: in inner
return func(*args, **kwargs)
venv/lib/python3.9/site-packages/django/db/backends/base/base.py:259: in cursor
return self._cursor()
venv/lib/python3.9/site-packages/django/db/backends/base/base.py:235: in _cursor
self.ensure_connection()
E RuntimeError: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
</code></pre>
<p>I have already marked <code>TestMyClass</code> with <code>django_db</code> but I am still getting this error.</p>
|
<python><django><pytest><pytest-django>
|
2022-12-14 06:57:23
| 1
| 1,462
|
Sreekar Mouli
|
74,794,392
| 7,806,720
|
FastAPI: Every new request to the API is taking twice the time, if a prior request hasn't completed yet
|
<p>I am trying to execute a longer task (<code>the_longer_function</code>) in the background, and return the response immediately without waiting for the execution of primary function to complete.</p>
<p>Here's the basic code line that I have created:</p>
<pre><code>@app.post("/something/something_test", response_class=JSONResponse)
@validate_token
async def home(request: Request, background_tasks: BackgroundTasks):
try:
request_params = await request.json()
background_tasks.add_task(the_longer_function, request_params)
return JSONResponse({
"Result": "Execution Started!"
}, status_code=200)
except Exception as ex:
return {
"Result": f"Error in starting execution. Error {ex}"
}
</code></pre>
<p>Here's the definition of <code>the_longer_function</code>:</p>
<pre><code>def the_longer_function(request_params):
variable = None
try:
variable = request_params.get('variable', None)
executionId = str(uuid.uuid4())
bot_message['ExecutionId'] = executionId
"""
EXPENSIVE BUSINESS LOGIC HERE
"""
publish_bot_scan_data(request_params, variable)
except (JSONDecodeError, Exception) as ex:
log.error(f"Error {ex}")
</code></pre>
<p>I want the API to respond immediately as soon as it adds the task for the new request in the background.</p>
<p>But I have observed that if a prior request is still running, the API holds the new call back, waits for the previous one to complete, and only then returns to the caller.</p>
<p>I have tried <code>async</code>, <code>trio</code>, parallelism and concurrency, but I don't think these are the solution to what I am looking for.</p>
<p>Will appreciate the help and input.</p>
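<p>For what it's worth, one pattern that keeps the event loop free (a sketch of the general technique, not necessarily a diagnosis of the behaviour above) is to push the blocking function onto a thread pool with <code>run_in_executor</code>, so each call returns immediately while the work continues in the background:</p>

```python
import asyncio
import time

def blocking_work(n):
    # stands in for the expensive business logic
    time.sleep(0.2)
    return n * 2

async def handler(n):
    loop = asyncio.get_running_loop()
    # schedule the blocking work on the default thread pool;
    # this returns a future immediately without blocking the loop
    fut = loop.run_in_executor(None, blocking_work, n)
    return "Execution Started!", fut

async def main():
    start = time.perf_counter()
    msg1, f1 = await handler(1)
    msg2, f2 = await handler(2)
    responded = time.perf_counter() - start   # both "responses" are instant
    results = await asyncio.gather(f1, f2)    # work finishes concurrently
    return msg1, responded, results

msg, responded, results = asyncio.run(main())
assert msg == "Execution Started!"
assert responded < 0.15     # responded before either task finished
assert results == [2, 4]
```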
<p>Some sources that I studied for the above:</p>
<p><a href="https://medium.com/cuddle-ai/concurrency-with-fastapi-1bd809916130" rel="nofollow noreferrer">Concurrency with FastAPI</a></p>
<p><a href="https://fastapi.tiangolo.com/async/#asynchronous-code" rel="nofollow noreferrer">Concurrency and async/await</a></p>
<p><a href="https://anyio.readthedocs.io/en/stable/" rel="nofollow noreferrer">AnyIO</a></p>
|
<python><python-3.x><python-asyncio><fastapi><background-process>
|
2022-12-14 06:33:50
| 0
| 901
|
SalGorithm
|
74,794,157
| 5,329,243
|
Kivy / Python: How to convert float color list to web rgb?
|
<p>How can I convert Kivy's <code>[0.5019607843137255, 0.796078431372549, 0.7686274509803922, 1.0]</code> float color list to web rgb (i.e. <code>#AABBCCDD</code>) in Python?</p>
<p>The list consists of RGBA colors in float format, where <code>0..255</code> is represented as <code>0..1</code>, and each color is a list element, not a byte string.</p>
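<p>For reference, a small sketch of one way to do the conversion (assuming Kivy's channel order is RGBA and the target format is <code>#RRGGBBAA</code>):</p>

```python
def float_rgba_to_hex(color):
    # clamp each 0..1 channel, scale to 0..255, format as #RRGGBBAA
    return "#" + "".join(
        f"{round(max(0.0, min(1.0, c)) * 255):02X}" for c in color
    )

# the example color from the question maps to teal-ish #80CBC4 fully opaque
assert float_rgba_to_hex(
    [0.5019607843137255, 0.796078431372549, 0.7686274509803922, 1.0]
) == "#80CBC4FF"
assert float_rgba_to_hex([0.0, 0.0, 0.0, 1.0]) == "#000000FF"
```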
|
<python><web><colors><kivy><converters>
|
2022-12-14 06:04:12
| 1
| 648
|
CeDeROM
|
74,794,064
| 2,975,438
|
Pandas: how to add column with Booleans (True/False) based on duplicates in one column and group index in another column
|
<p>I have the following dataframe:</p>
<pre><code>d_test = {
'name' : ['bob', 'rob', 'dan', 'steeve', 'carl', 'steeve', 'dan', 'carl', 'bob'],
'group': [1, 4, 3, 3, 2, 3, 2, 1, 5]
}
df_test = pd.DataFrame(d_test)
</code></pre>
<p>I am looking for a way to add column <code>duplicate</code> with <code>True</code>/<code>False</code> for each entry. I want <code>True</code> only when the same <code>name</code> appears in more than one distinct <code>group</code> number. Here is the expected output:</p>
<pre><code> name group duplicate
0 bob 1 True
1 rob 4 False
2 dan 3 True
3 steeve 3 False
4 carl 2 True
5 steeve 3 False
6 dan 2 True
7 carl 1 True
8 bob 5 True
</code></pre>
<p>For example above, row <code>0</code> has <code>True</code> in <code>duplicate</code> because <code>name</code> is the same as in row <code>8</code> and <code>group</code> number is different (<code>1</code> and <code>5</code>). Row <code>3</code> has <code>False</code> in <code>duplicate</code> because no duplicates exist outside of the same group <code>3</code>.</p>
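<p>One way to express "same name, more than one distinct group" is a grouped <code>nunique</code> broadcast back with <code>transform</code>; a sketch:</p>

```python
import pandas as pd

df_test = pd.DataFrame({
    'name': ['bob', 'rob', 'dan', 'steeve', 'carl', 'steeve', 'dan', 'carl', 'bob'],
    'group': [1, 4, 3, 3, 2, 3, 2, 1, 5],
})
# a name is a "duplicate" when it occurs in more than one distinct group
df_test['duplicate'] = df_test.groupby('name')['group'].transform('nunique') > 1

assert df_test['duplicate'].tolist() == [
    True, False, True, False, True, False, True, True, True
]
```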
|
<python><pandas>
|
2022-12-14 05:48:49
| 1
| 1,298
|
illuminato
|
74,793,930
| 4,793,216
|
Why is path parameter shown as query parameter in FastAPI docs?
|
<p>I am developing APIs using FastAPI.</p>
<p>I have set the router as the following:</p>
<pre><code>router = APIRouter(
prefix="/customer-profiles",
tags=["customer-profiles"],
responses={
404: {
"description": "Not found"
}
},
)
@router.get(
"/{customer_id}",
response_model=schemas.CustomerProfileDetail
)
async def get_profile(
profile: schemas.CustomerProfileDetail = Depends(models.CustomerProfile.get_profile_or_404)
) -> schemas.CustomerProfileDetail:
return profile
</code></pre>
<p>The issue here is, in the FastAPI docs, the path parameter is shown as <strong>query</strong> parameter.</p>
<p><a href="https://i.sstatic.net/tP6hC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tP6hC.png" alt="enter image description here" /></a></p>
<p>Due to this, I believe that I am not able to parse multiple Path parameters as I am only presented <strong>id</strong> field in the following API.</p>
<p><a href="https://i.sstatic.net/esF4q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/esF4q.png" alt="enter image description here" /></a></p>
<p>What could be the issue behind this? And how can this be resolved?</p>
|
<python><fastapi><pydantic>
|
2022-12-14 05:27:59
| 1
| 331
|
Aadarsha
|
74,793,690
| 5,653,423
|
Convert images to numpy array with RGB values
|
<p>I have written following code to read set of images in a directory and convert it into NumPy array.</p>
<pre><code> import PIL
import torch
from torch.utils.data import DataLoader
import numpy as np
import os
import PIL.Image
# Directory containing the images
image_dir = "dir1/"
# Read and preprocess images
images = []
for filename in os.listdir(image_dir):
# Check if the file is an image
if not (filename.endswith(".png") or filename.endswith(".jpg")):
continue
# Read and resize the image
filepath = os.path.join(image_dir, filename)
image = PIL.Image.open(filepath)
image = image.resize((32, 32)) # resize images to (32, 32)
#print(f"Image shape: {image.shape}")
# Convert images to NumPy arrays
image = np.array(image)
images.append(image)
# Convert images to PyTorch tensors
images1 = torch.tensor(np.array(images))
np.save('trial1.npy', np.array(images),allow_pickle=True)
</code></pre>
<p>The above code produces an array of shape <code>(24312, 32, 32)</code>. How can I convert it into shape <code>(24312, 32, 32, 3)</code> so that it also stores the RGB values as 3 channels?</p>
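<p>A hedged sketch of the idea (the usual fix is calling <code>image.convert("RGB")</code> before <code>np.array</code>, so grayscale or palette files also come out as 3 channels; shown below with a NumPy-only stand-in rather than real files):</p>

```python
import numpy as np

# stand-in for a grayscale image that np.array() turned into (H, W)
gray = np.zeros((32, 32), dtype=np.uint8)

# replicating the single channel three times mimics a grayscale->RGB convert
rgb = np.repeat(gray[..., np.newaxis], 3, axis=-1)
assert rgb.shape == (32, 32, 3)

# stacking a batch of such images gives the desired 4-D shape
batch = np.stack([rgb] * 5)
assert batch.shape == (5, 32, 32, 3)
```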
|
<python><machine-learning><image-processing><pytorch><python-imaging-library>
|
2022-12-14 04:47:41
| 1
| 1,402
|
shome
|
74,793,663
| 11,229,812
|
How to show sensor reading in tkinter?
|
<p>I am trying to capture readings from a sensor into tkinter and am stuck on a small problem.
To simulate the sensor reading, I created a small function that increments a number by 1 every second. In this example, I am trying to present that counter within a tkinter label.</p>
<p>Here is the code:</p>
<pre><code>import tkinter
import customtkinter
# Setting up theme of GUI
customtkinter.set_appearance_mode("Dark") # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue") # Themes: "blue" (standard), "green", "dark-blue"
class App(customtkinter.CTk):
def __init__(self):
super().__init__()
# configure window
self.is_on = True
self.title("Cool Blue")
self.geometry(f"{220}x{160}")
self.temperature = tkinter.IntVar()
# configure grid layout (4x4)
self.grid_columnconfigure(1, weight=1)
# create frame for environmental variable
self.temperature_frame = customtkinter.CTkFrame(self)
self.temperature_frame.grid(row=0, column=1, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="n")
self.temperature_frame.grid_rowconfigure(2, weight=1)
self.label_temperature = customtkinter.CTkLabel(master=self.temperature_frame, text="Temperature")
self.label_temperature.grid(row=0, column=1, columnspan=2, padx=10, pady=10, sticky="")
self.label_temperature_value = customtkinter.CTkLabel(master=self.temperature_frame,
textvariable=self.temperature,
font=customtkinter.CTkFont(size=50, weight="bold"))
self.label_temperature_value.grid(row=1, column=1, columnspan=1, padx=10, pady=10, sticky="e")
self.label_temperature_value = customtkinter.CTkLabel(master=self.temperature_frame,
text = f'\N{DEGREE CELSIUS}',
font=customtkinter.CTkFont(size=30, weight="bold"))
self.label_temperature_value.grid(row=1, column=3, columnspan=1, padx=(10, 10), pady=10, sticky="sw")
def temp(self):
import time
i = 0
while True:
start = round(time.time(), 0)
time.sleep(1)
stop = round(time.time(), 0)
j = stop - start
i = i + j
print(i)
return i
self.temperature = temp(self)
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>If I set <code>self.temperature.set(5)</code>, I see 5 Celsius displayed within tkinter.
However, when I try dynamically feeding this variable using the <code>temp()</code> function, I am not getting any numbers.</p>
<p>What I expect is to see 1, then 2, then 3 etc.</p>
<p>What am I doing wrong here?</p>
<p>thank you in advance.</p>
<p>PS. here is the example of my code for reading the data from the sensor:</p>
<pre><code>import time
import board
from busio import I2C
import adafruit_bme680
import datetime
import adafruit_veml7700
i2c = board.I2C() # uses board.SCL and board.SDA
veml7700 = adafruit_veml7700.VEML7700(i2c)
# Create library object using our Bus I2C port
i2c = I2C(board.SCL, board.SDA)
bme680 = adafruit_bme680.Adafruit_BME680_I2C(i2c, debug=False)
while True:
TEMPERATURE = round(bme680.temperature, 2)
    time.sleep(1)
</code></pre>
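<p>Not a full answer, but the usual tkinter pattern for this is to let the event loop re-invoke a callback with <code>after()</code> instead of running <code>while True</code> (which blocks <code>mainloop</code>). A display-free sketch of the pattern, where <code>FakeWidget</code> is a hypothetical stand-in for a Tk widget plus an <code>IntVar</code>:</p>

```python
class FakeWidget:
    """Minimal stand-in for a Tk widget's after() and an IntVar's set(),
    so the pattern can be shown without opening a display."""
    def __init__(self):
        self.queue = []   # callbacks the "event loop" will fire later
        self.value = 0

    def after(self, ms, callback):
        self.queue.append(callback)

    def set(self, v):
        self.value = v

def start_polling(widget, read_sensor, interval_ms=1000):
    def poll():
        widget.set(read_sensor())          # update the label's variable
        widget.after(interval_ms, poll)    # re-schedule; never block mainloop
    poll()

readings = iter([1, 2, 3])
w = FakeWidget()
start_polling(w, lambda: next(readings))
assert w.value == 1
w.queue.pop(0)()   # simulate the event loop firing the scheduled callback
assert w.value == 2
```

<p>In the real app, <code>widget.after</code> would be <code>self.after</code> on the <code>App</code> and <code>read_sensor</code> would call <code>round(bme680.temperature, 2)</code>.</p>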
|
<python><python-3.x><tkinter>
|
2022-12-14 04:43:09
| 2
| 767
|
Slavisha84
|
74,793,652
| 10,534,633
|
What is the usage of defining many elements types in list type as for tuple type?
|
<p>Since <em>Python 3.9</em>, there are type hints built-in and for a <code>list</code> type, there's a possibility to define also type of its elements.</p>
<p>When all the list's elements have the same type, it's simple: <code>list[str]</code>.<br />
When the elements' types are different, we can define a type hint like:</p>
<ul>
<li><code>list[Union[str, int]]</code> (using <code>typing</code> library to import <code>Union</code>),</li>
<li><code>list[str | int]</code> (since <em>Python 3.11</em> and <em>PEP 604</em>).</li>
</ul>
<p>However, with the built-in type hints, there's a <strong>possibility</strong> to write a <code>list</code> type like this:</p>
<pre class="lang-py prettyprint-override"><code>list[str, int]
</code></pre>
<p>But it seems not to be intended to work like in <code>tuple</code> type case: <code>tuple[str, int]</code> - the first tuple element is a string, and the second is an integer.<br />
Why? In addition to being able to use a correct notation like in the examples listed above, I found two more arguments against the <code>list[str, int]</code>:</p>
<ul>
<li><code>typing.List[str, int]</code> isn't provided and throws an error <code>TypeError: Too many arguments for typing.List; actual 2, expected 1</code>,</li>
<li>a type expected for every element of such a list is the first defined type (here: <code>str</code>) as IDEs (in my case: <em>PyCharm</em>) inform about:
<pre class="lang-py prettyprint-override"><code>def get_second_element(str_int_list: list[str, int]) -> int:
return str_int_list[1] # Expected type 'int', got 'str' instead
print(get_second_element(["string", 1]))
</code></pre>
</li>
</ul>
<p>To sum up, the factors described above lead me to assume the notation with a comma (<code>,</code>) isn't correct.<br />
But still, there's a possibility to use it. Why? What's the usage?</p>
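<p>(For context: at runtime the subscript is not validated at all; <code>list[str, int]</code> simply builds a <code>types.GenericAlias</code> carrying both arguments, which is why the interpreter accepts the form even though type checkers treat only the first argument as the element type. A runnable check:)</p>

```python
import types

# list[str, int] is accepted at runtime: it just records the arguments
alias = list[str, int]
assert isinstance(alias, types.GenericAlias)
assert alias.__origin__ is list
assert alias.__args__ == (str, int)
```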
|
<python><list><type-hinting>
|
2022-12-14 04:41:30
| 1
| 1,230
|
maciejwww
|
74,793,562
| 2,368,545
|
Regular express to find all lower case string with dot with Python
|
<p>I am trying, with Python, to find all strings inside double quotes that have a domain-name-like format, such as <code>"abc.def.ghi"</code>.</p>
<p>I am currently using <code>re.findall('\"([a-z\\.]+[a-z]*)\"', input_string)</code>,</p>
<p><code>[a-z\\.]+</code> is for <code>abc.</code>, <code>def.</code> and <code>[a-z]*</code> is for <code>ghi</code>.</p>
<p>So far it has no issue to match all string like <code>"abc.def.ghi"</code>, but it also matches string that contains no <code>.</code>, such as <code>"opq"</code>, <code>"rst"</code>.</p>
<p>The question is: how do I exclude those strings that contain no dot (<code>.</code>) using regex?</p>
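<p>A sketch of one way to require at least one dot: match one lowercase run, then one-or-more groups of a literal dot followed by another run:</p>

```python
import re

s = '"abc.def.ghi" and "opq" and "x.y"'
# (?:\.[a-z]+)+ forces at least one ".word" segment, so dotless
# strings like "opq" no longer match
matches = re.findall(r'"([a-z]+(?:\.[a-z]+)+)"', s)
assert matches == ["abc.def.ghi", "x.y"]
```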
|
<python><regex>
|
2022-12-14 04:24:32
| 2
| 696
|
Frank
|
74,793,392
| 15,542,245
|
Numerical reference for backreference not working out in Python
|
<p>I was trying to deal with difflib matches that return double word place names when only one of the words has been used to make the match. That is: when I do the difflib regex substitution I get a double up of the second word.</p>
<p>Approach:</p>
<ul>
<li>capture substrings so repeated word is last/first of substrings</li>
<li>if words the same then remove first word & substitute for this 'everything after' substring</li>
</ul>
<p>I don't understand the output I am getting using Python backreferences.</p>
<pre><code># removeDupeWords.py --- test to remove double words eg "The sun shines in,_Days_Bay Bay some of the time"
import re
testString = "The sun shines in,_Days_Bay Bay some of the time"
# regex to capture comma to space of testString e.g ',_Days_Bay'
refRegex = '(,\S+)'
# regex to capture everything after e.g 'Bay some of the time'
afterRegex = '(,\S+)(.*)'
refString = re.search(refRegex, testString).group(0)
# print(refString)
afterString = re.sub(afterRegex, r'\2', testString)
print(afterString)
</code></pre>
<p>The output for <code>r'\0'</code>, <code>r'\1'</code> & <code>r'\2'</code> is as follows:</p>
<pre><code>The sun shines in
The sun shines in,_Days_Bay
The sun shines in Bay some of the time
</code></pre>
<p>I just want <code>' Bay some of the time'</code>
The docs <a href="https://docs.python.org/3/howto/regex.html#non-capturing-and-named-groups" rel="nofollow noreferrer">Regular Expression HOWTO</a> don't go into backreferences in much detail. I couldn't get enough info to offer any explanation why I would even get any output for <code>r'\0'</code></p>
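<p>For anyone checking the group numbering here, a small runnable sketch (group 0 is the whole match and numbered groups start at 1; in a replacement string the whole match is written <code>\g&lt;0&gt;</code>):</p>

```python
import re

s = "The sun shines in,_Days_Bay Bay some of the time"
m = re.search(r'(,\S+)(.*)', s)
assert m.group(1) == ",_Days_Bay"            # first captured group
assert m.group(2) == " Bay some of the time" # second captured group

# keeping only group 2 in the replacement drops the ",_Days_Bay" part
out = re.sub(r'(,\S+)(.*)', r'\2', s)
assert out == "The sun shines in Bay some of the time"
```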
|
<python><regex><backreference><capture-group>
|
2022-12-14 03:52:42
| 1
| 903
|
Dave
|
74,793,352
| 8,321,207
|
matplotlib pyplot display ticks and values which are in scientific form
|
<p>I have a plot which looks like this:</p>
<p><a href="https://i.sstatic.net/MS7Qw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MS7Qw.png" alt="enter image description here" /></a>
The values displayed on the blue line are the cost ratio.</p>
<p>The corresponding code looks like this:</p>
<pre><code> fpr=[some values]
tpr=[some values]
CR =
plt.plot(tpr,fpr)
plt.plot(x,y)
plt.plot(*intersection.xy, 'ro')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(["Cost Ratio"])
for x,y,z in zip(tpr,fpr,CR):
label = "{:.2f}".format(z)
plt.annotate(label, # this is the text
(x,y),annotation_clip=True, # these are the coordinates to position the label
textcoords="offset points", # how to position the text
xytext=(0,10), # distance from text to points (x,y)
ha='center') # horizontal alignment can be left, right or center
plt.show()
</code></pre>
<p>The cost ratio values are :</p>
<pre><code>[3.436400227231698e-11
6.872800454463395e-11
1.374560090892679e-10
2.749120181785358e-10
5.498240363570716e-10
1.0996480727141433e-09
2.1992961454282866e-09
4.398592290856573e-09
8.797184581713146e-09
1.7594369163426292e-08
37.7836200753334]
</code></pre>
<p>I want to display the following values:</p>
<pre><code>[3.4e-11
6.8e-11
1.3e-10
2.7e-10
5.4e-10
1.0e-09
2.1e-09
4.3e-09
8.7e-09
1.7e-08
37.7]
</code></pre>
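<p>A hedged sketch of a formatter producing labels close to these (note that <code>:.1e</code>/<code>:.1f</code> round rather than truncate, so e.g. <code>6.87e-11</code> becomes <code>6.9e-11</code> and <code>37.78</code> becomes <code>37.8</code>):</p>

```python
values = [3.436400227231698e-11, 1.0996480727141433e-09, 37.7836200753334]

def fmt(v):
    # scientific notation with one decimal for tiny values,
    # plain one-decimal notation otherwise
    return "{:.1e}".format(v) if abs(v) < 1 else "{:.1f}".format(v)

labels = [fmt(v) for v in values]
assert labels == ["3.4e-11", "1.1e-09", "37.8"]
```

<p>In the plotting loop this would replace <code>label = "{:.2f}".format(z)</code> with <code>label = fmt(z)</code>.</p>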
|
<python><matplotlib><plot><linegraph>
|
2022-12-14 03:44:42
| 1
| 375
|
Kathan Vyas
|
74,793,240
| 11,918,314
|
Python Loop Function to Create New Columns based on Groupby of multiple columns
|
<p>I have a dataframe with the following columns</p>
<pre><code>rater_id being_rated_id combine_id avg_jump avg_run avg_swim category
100 200 100200 3 3 2 heats
100 200 100200 4 4 1 heats
101 200 101200 1 1 2 finals
101 200 101200 2 3 2 finals
102 201 102201 3 2 3 heats
103 202 103202 4 4 4 finals
</code></pre>
<p>I'd like to use a function to loop through the columns with prefix ("avg") and groupby "combine_id" and "category" to create new columns with suffix ("_2") that give the average of the rows that have multiple entries of the "combine_id"</p>
<p>What I am to achieve</p>
<pre><code>rater_id being_rated_id combine_id avg_jump avg_run category avg_jump_2 avg_run_2
100 200 100200 3 2 heats 3.5 2.5
100 200 100200 4 3 heats 3.5 2.5
101 200 101200 1 1 finals 1.5 2
101 200 101200 2 3 finals 1.5 2
102 201 102201 3 2 heats 3 2
103 202 103202 4 4 finals 4 4
</code></pre>
<p>I've tried the following code but it doesn't seem to work:</p>
<pre><code>
collist = ['avg_']
for col in collist:
avgcols = df.filter(like=col).columns
if len(avgcols) > 0:
df[f'{col}_2'] = df.groupby(['combine_id','category'])[avgcols].transform(np.mean)
</code></pre>
<p>Appreciate any advice and help, thank you.</p>
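<p>One sketch that matches the expected output: collect the <code>avg_</code> columns by prefix, then loop and use <code>transform('mean')</code>, which broadcasts each group's mean back to every row of the group:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "combine_id": [100200, 100200, 101200, 101200, 102201, 103202],
    "category":   ["heats", "heats", "finals", "finals", "heats", "finals"],
    "avg_jump":   [3, 4, 1, 2, 3, 4],
    "avg_run":    [2, 3, 1, 3, 2, 4],
})

avg_cols = [c for c in df.columns if c.startswith("avg_")]
for c in avg_cols:
    # group mean broadcast to every row; single-entry groups keep their value
    df[f"{c}_2"] = df.groupby(["combine_id", "category"])[c].transform("mean")

assert df["avg_jump_2"].tolist() == [3.5, 3.5, 1.5, 1.5, 3.0, 4.0]
assert df["avg_run_2"].tolist() == [2.5, 2.5, 2.0, 2.0, 2.0, 4.0]
```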
|
<python><pandas><dataframe><for-loop>
|
2022-12-14 03:22:25
| 1
| 445
|
wjie08
|
74,793,228
| 7,585,973
|
How to avoid unique key appear twice in PySpark left join
|
<p><code>df_1</code> column -> <code>|id|pym_cat|sub_status|year|month|day|</code></p>
<p><code>df_2</code> column -> <code>|id|loc_provinsi|loc_kabupaten|loc_kecamatan|</code></p>
<p>Here's my code</p>
<p><code>df_join = df_1.join(df_2, df_1.id == df_2.id, "left")</code></p>
<p>the error message</p>
<p><code>AnalysisException: "Reference 'id is ambiguous, could be: b.id, id.;"</code></p>
|
<python><join><pyspark>
|
2022-12-14 03:20:49
| 2
| 7,445
|
Nabih Bawazir
|
74,793,150
| 17,620,776
|
AttributeError: 'float' object has no attribute 'ids' when running kivy app
|
<p>I am trying to make a app that captures 30 images a second from the webcam in kivy.</p>
<p>But when I run it, it give me this error:</p>
<pre><code>AttributeError: 'float' object has no attribute 'ids'
</code></pre>
<p>Here is the code to reproduce the problem:</p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
Builder.load_string('''
<CameraClick>:
orientation: 'vertical'
Camera:
id: camera
resolution: (640, 480)
play: True
''')
class CameraClick(BoxLayout):
def capture(self):
'''
Function to capture the images from the camera
'''
camera = self.ids['camera']
camera.export_to_png("IMG.png")
print("Captured")
event = Clock.schedule_interval(capture, 1 / 30.)
class TestCamera(App):
def build(self):
return CameraClick()
TestCamera().run()
</code></pre>
<p>This code brings up the error but deleting <code>event = Clock.schedule_interval(capture, 1 / 30.)</code> fixes that error but I need that line of code.</p>
<p><strong>Question:</strong></p>
<p>So, how can I fix the error so that I can capture images from the webcam and store them?</p>
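<p>A minimal sketch of what seems to be going on (my reading: <code>Clock.schedule_interval</code> calls the callback with the elapsed time <code>dt</code> as its first positional argument, so inside <code>capture</code> the name <code>self</code> is bound to a float, not the widget):</p>

```python
def capture(self):
    # expects a widget, but Clock passes the elapsed time instead
    return self.ids["camera"]

message = ""
try:
    capture(1 / 30.0)   # roughly what Clock.schedule_interval does
except AttributeError as exc:
    message = str(exc)

assert "'float' object has no attribute 'ids'" in message
```

<p>The usual fix is to schedule a bound method from <code>__init__</code>, e.g. <code>Clock.schedule_interval(self.capture, 1 / 30.)</code>, with <code>capture(self, dt)</code> accepting the extra argument.</p>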
|
<python><kivy>
|
2022-12-14 03:05:44
| 2
| 357
|
JoraSN
|
74,792,952
| 19,425,874
|
Adding Current Date column next to current Data Frame using Python
|
<p>I'm not fully understanding data frames & am in the process of taking a course on them. This one feels like it should be so easy, but I could really use an explanation.</p>
<p>All I want to do is ADD a column next to my current output that has the CURRENT date in the cells.</p>
<p>I'm getting a timestamp using</p>
<pre><code>time = pd.Timestamp.today()
print (time)
</code></pre>
<p>But obviously this is just to print, not connecting it to my other code.</p>
<p>I was able to accomplish this in Google Sheets (once the output lands), but it would be so much cleaner (and informative) if I could do it right from the script.</p>
<p>This is what it currently looks like:</p>
<pre><code>import requests
import pandas as pd
import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50')
waveData = sh.get_worksheet(1)
id_list = [
"/Belmar-Surf-Report/3683/",
"/Manasquan-Surf-Report/386/",
"/Ocean-Grove-Surf-Report/7945/",
"/Asbury-Park-Surf-Report/857/",
"/Avon-Surf-Report/4050/",
"/Bay-Head-Surf-Report/4951/",
"/Belmar-Surf-Report/3683/",
"/Boardwalk-Surf-Report/9183/",
]
res = []
for x in id_list:
df = pd.read_html(requests.get("http://magicseaweed.com" +
x).text)[0]
values = [[x], df.columns.values.tolist(), *df.values.tolist()] ## does it go within here?
res.extend(values)
res.append([])
waveData.append_rows(res, value_input_option="USER_ENTERED")
</code></pre>
<p>I thought it would go within <code>values</code>, since this is where (I believe) my columns are built.
Would love to understand this better if someone is willing to take the time.</p>
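<p>In case it helps, a hedged sketch of the pandas side: stamp every row of the frame before it is flattened into <code>values</code> (the column name <code>retrieved</code> is my own placeholder; a plain date string is used so the spreadsheet receives text, not a Timestamp object):</p>

```python
import pandas as pd

df = pd.DataFrame({"height": [3, 4], "period": [9, 8]})  # stand-in for the scraped table
# one stamp per run, formatted as YYYY-MM-DD
df["retrieved"] = pd.Timestamp.today().strftime("%Y-%m-%d")

assert df.shape == (2, 3)               # one extra column, same rows
assert df["retrieved"].nunique() == 1   # every row carries the same date
assert len(df["retrieved"].iloc[0]) == 10
```

<p>After this, <code>df.columns.values.tolist()</code> and <code>df.values.tolist()</code> already include the extra column, so the existing append code picks it up unchanged.</p>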
|
<python><pandas><dataframe><python-requests><timestamp>
|
2022-12-14 02:24:22
| 1
| 393
|
Anthony Madle
|
74,792,951
| 4,373,372
|
Convert RGBA image to array in specific range in python
|
<p>I have an array of values in range of 1500 to 4500.
I managed to convert the data using matplotlib function. The code as follows:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
norm = plt.Normalize(vmin=1500, vmax=4500)
jet = plt.cm.jet
# generate 100x100 with value in range 1500-4500
original = np.random.randint(1500, 4500, (100,100))
# array in shape (100,100)
# convert the array to rgba image
converted = jet(norm(original))
# image in shape (100,100,4)
</code></pre>
<p>How to get the original array from converted images?</p>
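<p>Colormapping is not exactly invertible (values are quantized to the 256-entry lookup table), but a nearest-neighbour search against the same LUT recovers the values up to that quantization. A NumPy-only sketch, with a toy two-channel "colormap" standing in for <code>jet</code>:</p>

```python
import numpy as np

# toy colormap: 256 levels, each mapped to 2 channels (jet would use 4)
lut = np.linspace(0.0, 1.0, 256)
colors = np.stack([lut, 1.0 - lut], axis=1)

def invert(pixels, colors, vmin=1500, vmax=4500):
    # pixels: (..., C); find the LUT entry closest to each pixel,
    # then map its index back to the original data range
    d = np.linalg.norm(pixels[..., None, :] - colors, axis=-1)
    idx = d.argmin(axis=-1)
    return vmin + idx / (len(colors) - 1) * (vmax - vmin)

orig = np.array([1500.0, 3000.0, 4500.0])
norm = (orig - 1500) / 3000
px = np.stack([norm, 1 - norm], axis=1)    # "converted" pixels
rec = invert(px, colors)
assert np.allclose(rec, orig, atol=12)     # exact up to 256-level quantization
```

<p>For the real case, <code>colors</code> would be <code>plt.cm.jet(np.linspace(0, 1, 256))</code> and <code>pixels</code> the <code>(100, 100, 4)</code> RGBA array.</p>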
|
<python><numpy><matplotlib>
|
2022-12-14 02:24:21
| 1
| 518
|
Alif Jamaluddin
|
74,792,731
| 4,451,521
|
How do I add a column to a dataframe based on values from other columns?
|
<p>I have a dataframe and I would like to add a column based on the values of the other columns</p>
<p>If the problem were only that, I think a good solution would be <a href="https://stackoverflow.com/a/46684122/4451521">this answer</a>.
However, my problem is a bit more complicated.</p>
<p>Say I have</p>
<pre><code>import pandas as pd
a= pd.DataFrame([[5,6],[1,2],[3,6],[4,1]],columns=['a','b'])
print(a)
</code></pre>
<p>I have</p>
<pre><code> a b
0 5 6
1 1 2
2 3 6
3 4 1
</code></pre>
<p>Now I want to add a column called 'result' where each of the values would be the result of applying this function</p>
<pre><code>def process(a,b,c,d):
return {"notthisone":2*a,
"thisone":(a*b+c*d),
}
</code></pre>
<p>to each of the rows and the next rows of the dataframe</p>
<p>This function is part of a library, it outputs two values but we are only interested in the values of the key <code>thisone</code>
Also, if possible we can not decompose the operations of the function but we have to apply it to the values</p>
<p>For example in the first row
<code>a=5,b=6,c=1,d=2</code> (c and d being the a and b of the next rows) and we want to add the value "thisone" so <code>5*6+1*2=32</code></p>
<p>In the end I will have</p>
<pre><code> a b result
0 5 6 32
1 1 2 20
2 3 6 22
3 4 1 22 --> This is an special case since there is no next row so just a repeat of the previous would be fine
</code></pre>
<p>How can I do this?</p>
<p>I am thinking of traversing the dataframe with a loop but there must be a better and faster way...</p>
<p>EDIT:</p>
<p>I have done this so far</p>
<pre><code>def p4(a,b):
return {"notthisone":2*a,
"thisone":(a*b),
}
print(a.apply(lambda row: p4(row.a,row.b)["thisone"], axis=1))
</code></pre>
<p>and the result is</p>
<pre><code>0 30
1 2
2 18
3 4
dtype: int64
</code></pre>
<p>So now I have to think of a way to incorporate next row values too</p>
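<p>One way to finish this without an explicit row loop over the frame: pair each row with the next one by slicing, keep only the <code>"thisone"</code> output, and repeat the last result for the final row, as described above. A sketch (using a stand-in <code>process</code> with the same shape as the library function):</p>

```python
import pandas as pd

def process(a, b, c, d):
    # stand-in for the library function; only "thisone" is of interest
    return {"notthisone": 2 * a, "thisone": a * b + c * d}

df = pd.DataFrame([[5, 6], [1, 2], [3, 6], [4, 1]], columns=["a", "b"])

rows = df[["a", "b"]].values
results = [process(a, b, c, d)["thisone"]
           for (a, b), (c, d) in zip(rows[:-1], rows[1:])]
results.append(results[-1])   # special case: no next row, repeat previous
df["result"] = results

assert df["result"].tolist() == [32, 20, 22, 22]
```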
|
<python><pandas>
|
2022-12-14 01:38:24
| 2
| 10,576
|
KansaiRobot
|
74,792,714
| 7,658,985
|
Find all JSON files within S3 Bucket
|
<p>Is it possible to find all <code>.json</code> files within an S3 <code>bucket</code>, where the bucket itself can have multiple sub-directories?</p>
<p>Actually my bucket includes multiple sub-directories, and I would like to collect all JSON files inside them in order to iterate over them and parse specific key/values.</p>
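<p>It is possible: S3 keys are flat strings (the "sub-directories" are just prefixes), so listing every object, e.g. with boto3's <code>list_objects_v2</code> paginator, and filtering on the key suffix already covers all nesting levels. The filtering step itself, as a runnable sketch over example keys:</p>

```python
def json_keys(keys):
    # keys as returned in each page's Contents[n]["Key"]; the suffix
    # test works regardless of how deeply the key is "nested"
    return [k for k in keys if k.endswith(".json")]

keys = ["logs/2022/one.json", "logs/readme.txt", "data/two.json"]
assert json_keys(keys) == ["logs/2022/one.json", "data/two.json"]
```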
|
<python><json><amazon-web-services><amazon-s3><boto3>
|
2022-12-14 01:35:36
| 1
| 11,557
|
αԋɱҽԃ αмєяιcαη
|
74,792,690
| 2,160,616
|
Calculating difference between two rows, based on another column match condition, in Python / Pandas
|
<p>I have seen questions like <a href="https://stackoverflow.com/questions/13114512/">calculate the difference between rows in DataFrame</a> & i understand Pandas provides <code>df.diff()</code> API but my question context is slightly different. DataFrame will consist thousands of rows with volume data till that time of the day. <code>Name</code> column indicates name of instrument. Need to calculate diff between <code>Volume</code> column only for matching <code>Name</code> column.</p>
<p>Input DataFrame :</p>
<pre><code> Date Name Volume
1 2011-01-03 A 10
2 2011-01-03 B 20
3 2011-01-03 C 30
4 2011-01-03 A 40
5 2011-01-03 B 30
6 2011-01-03 C 100
7 2011-01-03 A 140
8 2011-01-03 B 50
9 2011-01-03 C 120
</code></pre>
<p>Output DataFrame :</p>
<pre><code> Date Name Volume Volume Diff
1 2011-01-03 A 10 10
2 2011-01-03 B 20 20
3 2011-01-03 C 30 30
4 2011-01-03 A 40 30
5 2011-01-03 B 30 10
6 2011-01-03 C 100 70
7 2011-01-03 A 140 100
8 2011-01-03 B 50 20
9 2011-01-03 C 120 20
</code></pre>
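<p>A per-name diff can be expressed with <code>groupby(...).diff()</code>; the first row of each name has no previous value, so (matching the expected output) it is filled with the volume itself. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name":   ["A", "B", "C", "A", "B", "C", "A", "B", "C"],
    "Volume": [10, 20, 30, 40, 30, 100, 140, 50, 120],
})
# diff within each Name; NaN (first occurrence) falls back to the volume
df["Volume Diff"] = (df.groupby("Name")["Volume"].diff()
                       .fillna(df["Volume"]))

assert df["Volume Diff"].tolist() == [10, 20, 30, 30, 10, 70, 100, 20, 20]
```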
|
<python><pandas><dataframe>
|
2022-12-14 01:31:00
| 1
| 3,343
|
BeingSuman
|
74,792,607
| 11,122,879
|
Poor Result in Detecting Meter Reading Using Pytesseract
|
<p>I am trying to develop a meter reading detection system. This is the picture
<a href="https://i.sstatic.net/qVPcW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qVPcW.jpg" alt="enter image description here" /></a></p>
<p>I need to get the meter reading 27599 as the output.
I used this code:</p>
<pre><code>import pytesseract
import cv2
image = cv2.imread('read2.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
(H, W) = gray.shape
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 7))
gray = cv2.GaussianBlur(gray, (1, 3), 0)
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, rectKernel)
res = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
pytesseract.image_to_string(res, config='--psm 12 --oem 3 digits')
</code></pre>
<p>I get this output:</p>
<pre><code>'.\n\n-\n\n3\n\n7\n\n7\n\n3\n\n-2105 566.261586\n\n161200\n\n310010\n\n--\n\n.-\n\n.\n\n5\n\x0c'
</code></pre>
<p>This is my first OCR project. Any help will be appreciated.</p>
|
<python><opencv><ocr><python-tesseract><meter>
|
2022-12-14 01:13:01
| 1
| 501
|
Rashida
|
74,792,454
| 7,211,014
|
python change a files creation and modification time without having to use a subcommand?
|
<p>I want to change the modification and creation time of any file with python. This would be the command I use in Linux:</p>
<pre><code>touch -a -m -t 201512180130.09 ./file.jpg
</code></pre>
<p>Then I check the following I can see the date is changed correctly:</p>
<pre><code>$ls -l --time-style=full-iso
-rw-rw-r-- 1 me me 44570 2015-12-18 01:30:09.000000000 -0500 file.jpg
$exiftool ./file.jpg | grep -i date
File Modification Date/Time : 2015:12:18 01:30:09-05:00
File Access Date/Time : 2015:12:18 01:30:09-05:00
File Inode Change Date/Time : 2022:12:13 19:39:43-05:00
</code></pre>
<p>I tried using the <code>Pathlib('./file.jpg').touch()</code> and <code>os.utime('./file.jpg', None)</code>. neither worked.
How can I do this in Python without having to use os or subprocess commands?</p>
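<p>For reference, <code>os.utime(path, None)</code> sets both times to <em>now</em>, which may be why it appeared not to work; passing an explicit <code>(atime, mtime)</code> tuple does change them. (Creation/inode-change time cannot be set portably without OS-specific calls, and avoiding <code>os</code> entirely is not really possible here, since <code>pathlib</code> offers no equivalent.) A sketch:</p>

```python
import os
import tempfile
import time

# target: 2015-12-18 01:30:09 local time, as seconds since the epoch
target = time.mktime((2015, 12, 18, 1, 30, 9, 0, 0, -1))

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.utime(path, (target, target))   # (atime, mtime) tuple, not None
st = os.stat(path)
assert int(st.st_mtime) == int(target)
assert int(st.st_atime) == int(target)
os.remove(path)
```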
|
<python><date><time><touch><concurrentmodification>
|
2022-12-14 00:43:48
| 0
| 1,338
|
Dave
|
74,792,378
| 7,148,668
|
zipfile.BadZipFile: Bad offset for central directory
|
<p>I have designed a webpage that allows the user to upload a zip file. What I want to do is store this zip file directly into my sqlite database as a large binary object, then be able to read this binary object as a zipfile using the <code>zipfile</code> package. Unfortunately this doesn't work because attempting to pass the file as a binary string in <code>io.BytesIO</code> into <code>zipfile.ZipFile</code> gives the error detailed in the title.</p>
<p>For my MWE, I exclude the database to better demonstrate my issue.</p>
<pre class="lang-py prettyprint-override"><code>views = Blueprint('views', __name__)
@views.route("/upload", methods=["GET", "SET"])
def upload():
# Assume that file in request is a zip file (checked already)
f = request.files['file']
zip_content = f.read()
# Store in database
# ...
# at some point retrieve the file from database
archive = zipfile.ZipFile(io.BytesIO(zip_content))
return ""
</code></pre>
<p>I have searched for days on-end how to fix this issue without success. I have even printed out <code>zip_content</code> and the contents of <code>io.BytesIO(zip_content)</code> after applying <code>.read()</code> and they are exactly the same string.</p>
<p>What am I doing wrong?</p>
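<p>As a sanity check (my reading: the <code>BytesIO</code> round trip itself is fine, so "Bad offset for central directory" usually means the bytes were truncated or altered, e.g. decoded and re-encoded as text, somewhere between <code>f.read()</code> and the database read):</p>

```python
import io
import zipfile

# build an in-memory zip: the same kind of bytes request.files yields
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "hi")
zip_content = buf.getvalue()

# byte-for-byte storage and retrieval opens without error
archive = zipfile.ZipFile(io.BytesIO(zip_content))
assert archive.namelist() == ["hello.txt"]
assert archive.read("hello.txt") == b"hi"
```

<p>If this passes but the database path fails, it would point at the column type or the insert/select step mangling the blob rather than at <code>zipfile</code>.</p>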
|
<python><flask><zip>
|
2022-12-14 00:30:05
| 1
| 345
|
Kookie
|
74,792,243
| 315,168
|
Truncating by time in (name, timestamp) Pandas MultiIndex
|
<p>I have the following <code>MultiIndex</code> created with <code>df.groupby(...).resample()</code>. It is stock market-like OHLC data grouped by an asset and then having OHLC candle time-series for this asset.</p>
<pre><code> high low close ... avg_trade buys sells
pair timestamp ...
AAVE-ETH 2020-01-01 80.0 80.0 80.0 ... 1280.0 1 0
2020-01-02 96.0 96.0 96.0 ... 1120.0 1 0
ETH-USDC 2020-01-02 1600.0 1600.0 1600.0 ... 5000.0 1 0
2020-01-05 1620.0 1400.0 1400.0 ... 1125.0 1 1
</code></pre>
<p>The <code>df.index</code> content is:</p>
<pre><code>MultiIndex([('AAVE-ETH', '2020-01-01'),
('AAVE-ETH', '2020-01-02'),
('ETH-USDC', '2020-01-02'),
('ETH-USDC', '2020-01-05')],
names=['pair', 'timestamp'])
</code></pre>
<p>I would like to do a <code>DataFrame.truncate()</code> like operation by the second index (timestamp) so that I discard all entries beyond a certain timestamp.</p>
<p>However the naive <code>df.truncate(timestamp)</code> will give an error:</p>
<pre><code>TypeError: Level type mismatch: 2020-01-04 00:00:00
</code></pre>
<p>Can a grouped data frame be truncated by its second index (time series) somehow?</p>
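<p>One workaround is to filter on the timestamp level with <code>get_level_values</code> instead of <code>truncate()</code>; a sketch with the index from above:</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("AAVE-ETH", pd.Timestamp("2020-01-01")),
     ("AAVE-ETH", pd.Timestamp("2020-01-02")),
     ("ETH-USDC", pd.Timestamp("2020-01-02")),
     ("ETH-USDC", pd.Timestamp("2020-01-05"))],
    names=["pair", "timestamp"],
)
df = pd.DataFrame({"close": [80.0, 96.0, 1600.0, 1400.0]}, index=idx)

# keep only rows at or before the cutoff, across all pairs
cutoff = pd.Timestamp("2020-01-04")
out = df[df.index.get_level_values("timestamp") <= cutoff]

assert len(out) == 3
assert ("ETH-USDC", pd.Timestamp("2020-01-05")) not in out.index
```

<p>If the index is lexsorted, <code>df.loc[pd.IndexSlice[:, :cutoff], :]</code> should give the same result.</p>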
|
<python><pandas>
|
2022-12-14 00:04:26
| 1
| 84,872
|
Mikko Ohtamaa
|
74,792,053
| 4,561,887
|
Python: obtain the path to the home directory of the user in whose directory the script being run is located
|
<p>I don't consider this question (<a href="https://stackoverflow.com/q/50499/4561887">How do I get the path and name of the file that is currently executing?</a>) nor <a href="https://stackoverflow.com/q/3718657/4561887">this one</a> to be a duplicate of mine, because the answer to them is simply <code>__file__</code>. My question and the situation is more complicated and deserves a place to be independently googlable. I'm seeking the home dir of the path in which the script lies, it turns out (and this isn't even the only solution, perhaps, as that's a solution detail not a problem description), not just the path in which the script lies. Also, the context surrounding my question is very unique (running the script as root but wanting the home dir of another user's file system in which the script lies), and deserves special attention as others will encounter it too I'm sure.</p>
<hr />
<p>I have <a href="https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles/blob/master/useful_scripts/cpu_logger.py" rel="nofollow noreferrer">a Python script</a> located in <code>/home/gabriel/dev/cpu_logger.py</code>. Inside of it I am logging to <code>/home/gabriel/cpu_log.log</code>. I obtain the <code>/home/gabriel</code> part inside the script using <code>pathlib.Path.home()</code> as follows. I use that part as the directory of my <code>log_file_path</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pathlib
home_dir = str(pathlib.Path.home())
log_file_path = os.path.join(home_dir, 'cpu_log.log')
</code></pre>
<p><strong>However, I now need to run the script as root</strong> to allow it to set some restricted file permissions, so I configured it to run as root at boot using crontab <a href="https://askubuntu.com/a/290102/327339">following these instructions here</a>. <strong>Now, since it is running as root, <code>home_dir</code> above becomes <code>/root</code> and so <code>log_file_path</code> is <code>/root/cpu_log.log</code>.</strong> That's not what I want! I want it to log to <code>/home/gabriel/cpu_log.log</code>.</p>
<p>How can I do that?</p>
<p>I don't want to explicitly set that path, however, as I intend this script to be used by others, so it must not be hard-coded.</p>
<p>I thought about passing the username of the main user as the first argument to the program, and obtaining the <code>home_dir</code> of that user with <a href="https://docs.python.org/3/library/os.path.html#os.path.expanduser" rel="nofollow noreferrer"><code>os.path.expanduser("~" + username)</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
username = sys.argv[1]
home_dir = os.path.expanduser("~" + username)
</code></pre>
<p>...but I don't want to pass an extra argument like this if I don't have to. How can I get the home dir as <code>/home/gabriel</code> even when this script is running under the root user?</p>
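One possibility that avoids any extra argument (a POSIX-only sketch; `owner_home` is an illustrative helper, not an existing API): take the home directory of the user who *owns* the script file rather than the user running it. Since the script lives in the target user's file system, its owner is the user whose home directory is wanted, even when cron runs it as root.

```python
import os
import pwd
import tempfile

def owner_home(script_path: str) -> str:
    """Home directory of the user who owns `script_path` (POSIX only)."""
    uid = os.stat(script_path).st_uid
    return pwd.getpwuid(uid).pw_dir

# In cpu_logger.py itself this would be, e.g.:
#   home_dir = owner_home(__file__)
#   log_file_path = os.path.join(home_dir, 'cpu_log.log')

# Quick demonstration on a temp file owned by the current user:
fd, path = tempfile.mkstemp()
os.close(fd)
demo_home = owner_home(path)
os.remove(path)
```

If the script is instead launched via `sudo`, another option is the `SUDO_USER` environment variable, but that is not set for cron jobs.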
|
<python><linux><path>
|
2022-12-13 23:35:46
| 1
| 55,879
|
Gabriel Staples
|
74,792,048
| 7,984,318
|
How to check pandas dataframe column value float nan
|
<p>I have a pandas dataframe column which value is nan and is a float:</p>
<pre><code>df['column']
</code></pre>
<p>I want to add some logic there: if <code>df['column']</code> equals float nan, then do something. The problem is I have no idea how to check whether it is a float nan. Is there any way to do something like:</p>
<pre><code>if df['column'] == 'nan':
print('hi')
</code></pre>
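A note on why the string comparison above cannot work: NaN is a float value, not the string `'nan'`, and NaN never compares equal to anything (including itself). `pd.isna()` is the usual test, both per value and vectorised over the column:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'column': [1.0, np.nan, 3.0]})

# pd.isna() handles float('nan'), np.nan and None; `x == np.nan` is always False
for value in df['column']:
    if pd.isna(value):
        print('hi')

# Vectorised form for the whole column:
mask = df['column'].isna()
```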
|
<python><pandas><dataframe>
|
2022-12-13 23:34:53
| 1
| 4,094
|
William
|
74,791,962
| 7,903,749
|
How to implement `left outer join` with additional matching condition, with `annotate()` or something else?
|
<p>The <code>transaction</code> (entity) records have customized attributes in EAV format, so we are implementing a design pattern that assembles EAV data with the entities by a series of <code>left outer join</code> operations in SQL query, briefly as the following:</p>
<ul>
<li>First, we have retrieved the metadata, e.g. <code>postalcode</code> corresponds to <code>attribute_id</code> of <code>22</code>, <code>phone</code> corresponds to <code>23</code>, etc.</li>
<li>Then, we follow the metadata and construct a dynamic QuerySet by adding <code>annotate()</code> method calls.</li>
<li>Like the below SQL query, the system behaviour is to repeat <code>left outer join</code> on the same <code>eav_value</code> table; however, besides the foreign key, the matching condition also requires a specific <code>attribute_id</code>. So, each join assembles one attribute.</li>
</ul>
<p><strong>Our Question:</strong></p>
<p>We tried to assemble the first attribute appending an <code>annotate()</code> with <code>filter</code> to the existing QuerySet like:</p>
<pre class="lang-py prettyprint-override"><code>transaction.annotate(
postalcode=F('eav_values__value_text'),
filter=Q(eav_values__attribute_id__exact=22)
)
</code></pre>
<p>The test got an error saying <code>AttributeError: 'WhereNode' object has no attribute 'select_format'</code>. And we believe the culprit is in the <code>filter</code> part because if we remove the argument, the error disappears.</p>
<p>So, how can we fix the issue and make the prototype run? And we are also OK to use something other than <code>annotate()</code> within the Django framework, not raw query.</p>
<p>We are new to this area of Django ORM, so we appreciate any hints and suggestions.</p>
<p><strong>Technical Details:</strong></p>
<p><strong>1. SQL query for assembling EAV data with entities:</strong></p>
<pre class="lang-sql prettyprint-override"><code>select t.`id`, t.`create_ts`
, `eav_postalcode`.`value_text` as `postalcode`
, `eav_phone`.`value_text` as `phone`
-- , ... to assemble more attributes
from
(
-- entity: transaction, in a toy example of page size 2
select * from `ebackendapp_transaction` where (`product_id` = __PRODUCT_ID__)
order by `id` desc
limit 2 offset 27000
) as t
left outer join `eav_value` as `eav_postalcode`
on t.`id` = `eav_postalcode`.`entity_id` and `eav_postalcode`.`attribute_id` = 22
left outer join `eav_value` as `eav_phone`
on t.`id` = `eav_phone`.`entity_id` and `eav_phone`.`attribute_id` = 23
-- ... to assemble more attributes
;
</code></pre>
<p><strong>2. Test steps:</strong></p>
<pre class="lang-py prettyprint-override"><code>transaction = Transaction.active_objects.filter(product_id=__PRODUCT_ID__).order_by('-id').all()[27000:27002].values('id', 'create_ts')
print(transaction)
# OK
transaction_eav = transaction.annotate(postalcode=F('eav_values__value_text'), filter=Q(eav_values__attribute_id__exact=22))
# transaction_eav is OK, however:
print(transaction_eav)
# got an error saying "AttributeError: 'WhereNode' object has no attribute 'select_format'"
</code></pre>
<p><strong>3. Model definitions:</strong></p>
<pre class="lang-py prettyprint-override"><code>class Transaction(models.Model):
transaction_id = models.CharField(max_length=100)
product = models.ForeignKey(Product)
...
# From the open-source Django EAV library
# imported as `eav_models`
#
class Value(models.Model):
'''
Putting the **V** in *EAV*.
...
'''
...
entity_id = models.IntegerField()
entity = GenericForeignKey(ct_field='entity_ct',
fk_field='entity_id')
value_text = models.TextField(blank=True, null=True)
value_float = models.FloatField(blank=True, null=True)
value_int = models.IntegerField(blank=True, null=True)
value_date = models.DateTimeField(blank=True, null=True)
value_basicdate = models.DateField(blank=True, null=True)
value_bool = models.NullBooleanField(blank=True, null=True)
...
attribute = models.ForeignKey(Attribute, db_index=True,
verbose_name=_(u"attribute"),
on_delete=models.DO_NOTHING)
...
</code></pre>
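A sketch of one way this is commonly done (not runnable outside a configured Django project; `transaction` is the queryset from the question and the attribute ids 22/23 are the example values above): since Django 2.0, `FilteredRelation` puts an extra condition into the `ON` clause of the `LEFT OUTER JOIN`, which matches the target SQL more closely than a `filter=` argument, which `annotate()` only accepts inside aggregate functions such as `Count`.

```python
from django.db.models import F, FilteredRelation, Q

transaction_eav = transaction.annotate(
    # One filtered alias of eav_value per attribute, each its own LEFT OUTER JOIN
    eav_postalcode=FilteredRelation(
        'eav_values', condition=Q(eav_values__attribute_id=22)),
    eav_phone=FilteredRelation(
        'eav_values', condition=Q(eav_values__attribute_id=23)),
).annotate(
    postalcode=F('eav_postalcode__value_text'),
    phone=F('eav_phone__value_text'),
)
```

For a dynamic attribute list, the two annotation dicts can be built in a loop from the metadata and passed as `**kwargs`.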
|
<python><django><entity-attribute-value>
|
2022-12-13 23:22:08
| 1
| 2,243
|
James
|
74,791,868
| 8,816,642
|
How to create a frequency / value count table from multiple dataframes
|
<p>I have two dataframes,</p>
<pre><code>df1 df2
country country
US AR
US AD
CA AO
CN AU
AR US
</code></pre>
<p>How do I group them by combining the country lists into one set of country codes, then compare the counts between the two dataframes?</p>
<p>My expected output will be like,</p>
<pre><code>country code df1_country_count df2_country_count
AR 1 1
AD 0 1
AO 0 1
AU 0 1
US 2 1
CA 1 0
CN 1 0
</code></pre>
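One possible way (a sketch on the sample data above): take `value_counts()` of each column, concatenate them side by side so the union of country codes forms the index, and fill the gaps with 0.

```python
import pandas as pd

df1 = pd.DataFrame({'country': ['US', 'US', 'CA', 'CN', 'AR']})
df2 = pd.DataFrame({'country': ['AR', 'AD', 'AO', 'AU', 'US']})

out = (pd.concat([df1['country'].value_counts().rename('df1_country_count'),
                  df2['country'].value_counts().rename('df2_country_count')],
                 axis=1)          # outer-joins on the union of country codes
         .fillna(0)               # codes missing from one frame count as 0
         .astype(int)
         .rename_axis('country code')
         .reset_index())
```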
|
<python><pandas><dataframe>
|
2022-12-13 23:09:22
| 3
| 719
|
Jiayu Zhang
|
74,791,859
| 8,925,864
|
How to generate all configurations of a given length in Python
|
<p>I am trying to generate a list of all configurations where in each configuration is as follows. Each configuration is a list of length <code>n</code> whose entry can take on values <code>0-q</code> where <code>q</code> is a positive integer. For example if <code>q=1</code> then I am trying to generate a list of all possible binary lists of length <code>n</code>. So if <code>n=2,q=1</code> then the desired output is <code>[[0,0],[0,1],[1,0],[1,1]]</code>. For an arbitrary <code>q</code>, the desired output list is of size <code>(q+1)^n</code> because there <code>q+1</code> choices for each element of the list of length <code>n</code>. For example, for <code>n=3,q=2</code>, the desired output is <code>[[0,0,0],[0,0,1],[0,0,2],[0,1,0],..]</code> and the output list is of size <code>3^3=27</code>.</p>
<p>I have tried to do this using recursion for <code>q=1</code> but am not sure how to write efficient code for arbitrary <code>q</code>? Here is the code and output for <code>q=1</code>.</p>
<pre><code>def generateAllSpinConfigs(n,arr,l,i):
if i == n:
l.append(arr[:])
return
arr[i] = 0
generateAllSpinConfigs(n,arr,l,i+1)
arr[i] = 1
generateAllSpinConfigs(n,arr,l,i+1)
return l
n=2
l=[]
arr=[None]*n
print(generateAllSpinConfigs(n,arr,l,0))
>>[[0,0],[0,1],[1,0],[1,1]]
</code></pre>
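For arbitrary `q`, one option that avoids recursion entirely is `itertools.product`, which enumerates the Cartesian product of `range(q + 1)` with itself `n` times, i.e. exactly the `(q + 1)**n` configurations:

```python
from itertools import product

def generate_all_configs(n, q):
    # Every length-n list over the alphabet 0..q, in lexicographic order
    return [list(c) for c in product(range(q + 1), repeat=n)]

print(generate_all_configs(2, 1))   # [[0, 0], [0, 1], [1, 0], [1, 1]]
```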
|
<python><recursion>
|
2022-12-13 23:08:21
| 1
| 305
|
q2w3e4
|
74,791,850
| 11,462,274
|
How to do a cumulative sum in a DataFrame analyzing all possible combination columns instead of analyzing only all columns together?
|
<p>If we want to know if the cumulative sums are profitable in the <code>['Col 1','Col 2','Col 3']</code> columns for the long term, we do it this way:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
import io
ex_csv = """
Col 1,Col 2,Col 3,return
a,b,c,1
d,e,f,1
a,e,c,-1
a,e,c,-1
d,b,c,-1
a,b,c,1
d,e,f,1
"""
df = pd.read_csv(io.StringIO(ex_csv), sep=",")
df['invest'] = df.groupby(['Col 1','Col 2','Col 3'])['return'].cumsum().gt(df['return'])
true_backs = df[(df['invest'] == True)]['return']
print(true_backs.sum())
</code></pre>
<p>But what if I want it to be <code>TRUE</code> for the cumulative as well when not only the combination of the 3 columns is positive, but if one or two are positive too?</p>
<p>Example:</p>
<p>Perhaps the cumulative sum for the value <code>a</code> of <code>Col 1</code> on its own will be positive, but together with the values of <code>Col 2</code> and <code>Col 3</code> it will no longer be profitable, so in my current code it would appear as FALSE.</p>
<p>And I want it to be TRUE.</p>
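One possible interpretation (a sketch, assuming "any combination" means any non-empty subset of the three columns): run the same groupby-cumsum test for every subset produced by `itertools.combinations` and OR the results together, so a row is `True` if at least one grouping is profitable.

```python
import io
from itertools import combinations
import pandas as pd

ex_csv = """
Col 1,Col 2,Col 3,return
a,b,c,1
d,e,f,1
a,e,c,-1
a,e,c,-1
d,b,c,-1
a,b,c,1
d,e,f,1
"""
df = pd.read_csv(io.StringIO(ex_csv), sep=",")

cols = ['Col 1', 'Col 2', 'Col 3']
invest = pd.Series(False, index=df.index)
# OR together the profitability test for every non-empty column combination
for r in range(1, len(cols) + 1):
    for combo in combinations(cols, r):
        invest |= df.groupby(list(combo))['return'].cumsum().gt(df['return'])
df['invest'] = invest
true_backs = df[df['invest']]['return']
```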
|
<python><pandas><dataframe><cumsum>
|
2022-12-13 23:07:12
| 1
| 2,222
|
Digital Farmer
|
74,791,822
| 12,821,675
|
SQLAlchemy - Default PKs and Bulk Save
|
<p>I have a model with a uuid as the pk with a default value like so:</p>
<pre class="lang-py prettyprint-override"><code>class Car(...):
...
uuid = Column(
String,
primary_key=True,
default=lambda x: str(uuid4()),
)
</code></pre>
<p>At somepoint in my application I bulk add a bunch of <code>Cars</code> like so:</p>
<pre class="lang-py prettyprint-override"><code># create a list of `Car` instances from raw data:
cars = [Car(**car) for car in raw_cars_data]
# bulk insert:
session.bulk_save_objects(cars)
session.commit()
# try to access pks:
for car in cars:
print(car.uuid) # <-- this is `None`
</code></pre>
<p>I am trying to get the pk for the cars that were bulk created, the pks were set using the <code>default</code> method. However, the pk on each <code>Car</code> in the <code>cars</code> array is <code>None</code>. How can I retrieve the pks of the cars that were created?</p>
<p>Note, if I do a single create operation the pk is successfully retrieved.</p>
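Since the default here is a client-side callable (not a database-generated value), one workaround (a self-contained sketch, not the only option) is to evaluate it yourself before the bulk save, so every object already carries its pk; `bulk_save_objects` deliberately skips the per-object post-insert bookkeeping that would otherwise populate it.

```python
import uuid

from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Car(Base):
    __tablename__ = 'car'
    uuid = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

cars = [Car() for _ in range(3)]
# Evaluate the client-side default up front so the pk is known after the save
for car in cars:
    if car.uuid is None:
        car.uuid = str(uuid.uuid4())

with Session(engine) as session:
    session.bulk_save_objects(cars)
    session.commit()

print([car.uuid for car in cars])  # no longer None
```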
|
<python><python-3.x><sqlalchemy>
|
2022-12-13 23:03:05
| 1
| 3,537
|
Daniel
|
74,791,817
| 9,749,124
|
Changing labels for Pandas rows that have the same value
|
<p>I want to change labels in the Pandas dataframe for the row that have the same value but different label:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"text": ["bannana", "tomato", "potato", "potato", "lemon", "cucamber"],
"label": ["fruit", "veg", "fruit", "veg", "fruit", "veg"],
})
print(df)
text label
0 bannana fruit
1 tomato veg
2 potato fruit
3 potato veg
4 lemon fruit
5 cucamber veg
</code></pre>
<p>As you see, there are 2 elements in text that have diferent label</p>
<pre><code>2 potato fruit
3 potato veg
</code></pre>
<p>So I guess that first, I need to identify if there are rows like this, and then to update the values in the label column.
Note, I always want to change from fruit to veg.</p>
<p>Desired output:</p>
<pre><code> text label
0 bannana fruit
1 tomato veg
2 potato veg
3 potato veg
4 lemon fruit
5 cucamber veg
</code></pre>
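One way to express both steps at once (a sketch on the sample data): flag the rows whose <code>text</code> appears with more than one distinct label via a groupby `nunique` transform, then overwrite those labels with <code>'veg'</code>.

```python
import pandas as pd

df = pd.DataFrame({"text": ["bannana", "tomato", "potato", "potato", "lemon", "cucamber"],
                   "label": ["fruit", "veg", "fruit", "veg", "fruit", "veg"]})

# Rows whose `text` appears with more than one distinct label
conflict = df.groupby('text')['label'].transform('nunique').gt(1)
# The question always resolves such conflicts in favour of 'veg'
df.loc[conflict, 'label'] = 'veg'
```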
|
<python><pandas>
|
2022-12-13 23:02:40
| 2
| 3,923
|
taga
|
74,791,708
| 2,142,728
|
How to call function with dict, while ignoring unexpected keyword arguments?
|
<p>Something of the following sort. Imagine this case:</p>
<pre class="lang-py prettyprint-override"><code>def some_function(a, b):
return a + b
some_magical_workaround({"a": 1, "b": 2, "c": 3}) # returns 3
</code></pre>
<p>I can't modify <code>some_function</code> to add a <code>**kwargs</code> parameter. How could I create a wrapper function <code>some_magical_workaround</code> which calls <code>some_function</code> as shown?</p>
<p>Also, <code>some_magical_workaround</code> may be used with other functions, and I don't know beforehand what args are defined in the functions being used.</p>
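One possible sketch (generalised to take the target function as an argument, since the wrapper otherwise has no way to know which function to call): inspect the target's signature and drop any keyword the function does not accept, passing everything through unchanged when the function already takes `**kwargs`.

```python
import inspect

def some_magical_workaround(func, kwargs):
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return func(**kwargs)          # func already absorbs extras via **kwargs
    # Keep only the names the target function actually accepts
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return func(**accepted)

def some_function(a, b):
    return a + b

print(some_magical_workaround(some_function, {"a": 1, "b": 2, "c": 3}))  # 3
```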
|
<python><keyword-argument>
|
2022-12-13 22:49:42
| 4
| 3,774
|
caeus
|
74,791,672
| 13,530,377
|
Datetime string needs to be converted to datetime object before matching on pandas column
|
<p>Looking for clarification. I've seen several comments in SO posts saying emphatically that you can do a greater-than/less-than comparison between a datetime column and a string formatted like a datetime object. I am finding this to be false, so I was wondering if anyone could confirm that this is indeed not possible.</p>
<p>Here I have an example:</p>
<pre><code># between_dates_sales_seed_ae_sales_plan_ramped_date__current_date_interval_1_year____1_1_2000_
dat = '12/1/2000'
# between_dates_sales_seed_capacity_plan_by_rep_ramped_date__current_date_interval_1_year____1_1_2000_
ae_long['ramped_date'] = pd.to_datetime(ae_long['ramped_date'], errors='coerce').dt.strftime('%-m/%-d/%Y')
try:
assert len(ae_long.loc[(ae_long['ramped_date'] > dat)]) == 0
except:
print(ae_long.loc[(ae_long['ramped_date'] < dat)])
</code></pre>
<p>This returns many dates in <code>ramped_date</code> that are clearly greater than <code>dat</code></p>
<pre><code> date salesforce_user_id original_start_date ramped_date \
3 1/31/2022 0051a000002Gxxxx 5/15/2018 10/31/2018
14 1/31/2022 0051xxxxxxxxxxxxx 5/11/2019 1/31/2020
15 1/31/2022 xxxxxxxxxxxxxxxxxxxxx 7/8/2019 1/31/2020
16 1/31/2022 xxxxxxxxxxxxxxxx 9/16/2019 1/31/2020
</code></pre>
<p>Is the only solution to convert <code>dat</code> to a datetime object? Thanks</p>
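A small demonstration of what is likely happening above: comparing a genuine datetime64 column to a date-like string does work (pandas parses the string to a Timestamp), but the `.dt.strftime()` call in the snippet converts the column back to plain strings, and string comparison is lexicographic, character by character.

```python
import pandas as pd

s = pd.Series(pd.to_datetime(['10/31/2018', '1/31/2020']))

# Real datetime column vs string: pandas parses '12/1/2000' to a Timestamp
print((s > '12/1/2000').all())        # both dates are after Dec 2000

# After strftime the values are text, compared character by character,
# so '01/...' and '10/...' both sort before '12/...'
s_str = s.dt.strftime('%m/%d/%Y')
print((s_str > '12/1/2000').any())
```

So rather than converting `dat`, the simpler fix may be to drop the `strftime` step and keep `ramped_date` as datetimes.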
|
<python><pandas><datetime>
|
2022-12-13 22:43:48
| 1
| 483
|
Justin Benfit
|
74,791,436
| 19,675,781
|
How to sort dataframe columns based on 2 indexes?
|
<p>I have a data frame like this</p>
<pre><code>df:
Index C-1 C-2 C-3 C-4 ........
Ind-1 3 9 5 4
Ind-2 5 2 8 3
Ind-3 0 1 1 0
.
.
</code></pre>
<p>The data frame has more than a hundred columns and rows, with whole numbers (0-60) as values.<br />
The first two rows (indexes) have values in the range 2-12.<br />
I want to sort the columns based on the values in the first and second rows (indexes) in ascending order. I need not care about sorting in the remaining rows.</p>
<p>Can anyone help me with this?</p>
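One way (a sketch on the sample data): `sort_values` accepts `axis=1` to reorder columns, with the listed index labels acting as the primary and secondary sort keys.

```python
import pandas as pd

df = pd.DataFrame({'C-1': [3, 5, 0], 'C-2': [9, 2, 1], 'C-3': [5, 8, 1], 'C-4': [4, 3, 0]},
                  index=['Ind-1', 'Ind-2', 'Ind-3'])

# Columns ordered by the values in row Ind-1, ties broken by row Ind-2
out = df.sort_values(by=['Ind-1', 'Ind-2'], axis=1)
print(list(out.columns))  # ['C-1', 'C-4', 'C-3', 'C-2']
```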
|
<python><python-3.x><pandas><dataframe>
|
2022-12-13 22:12:17
| 1
| 357
|
Yash
|
74,791,215
| 1,134,241
|
How to train an LSTM with multiple simultaneous time-series. Radon values over a year in 100 houses with 4 rooms
|
<p>I have trained an LSTM neural network on 1 year's worth of radon measurement time-series data for one room in one house. I have 100 houses with 4 rooms each. How could I create a for loop to train on 70 houses (4 rooms each) to keep training the network with data rather than having 70 different LSTMs?</p>
<pre><code>#Synthetic Radon data for 5 houses x 4 rooms each
df = pd.DataFrame()
df['date_time'] = pd.date_range(start='2020-01-01', end='2020-12-31', freq='H')
#make 1000 rows for each house and roomm columns
df['house'] = np.random.choice(['01TE85', '02TE85', '03TE85', '04TE85', '05TE85'], size=len(df))
df['room'] = np.random.choice(['Living room', 'Bedroom', 'Kitchen', 'Bathroom'], size=len(df))
df['radon_short_term_avg'] = np.random.normal(loc=0.5, scale=0.1, size=len(df))
#Filter house == 01TE85 and room == "Bedroom"
df = df[(df['house'] == '01TE85') & (df['room'] == 'Bedroom')]
# Convert date_time to datetime
df['date_time'] = pd.to_datetime(df['date_time'], format='%Y-%m-%d %H:%M:%S')
# Set date_time as index
df = df.set_index('date_time')
radon = df["radon_short_term_avg"]
# for every 5 hours, let's predict the next hour
# X=[[[1],[2],[3],[4],[5]]] y = [6]
# X=[[[2],[3],[4],[5],[6]]] y = [7]
# create function to create X and Y
def df_to_X_y(df, window_size):
df_as_np = df.to_numpy()
X = []
y = []
for i in range(len(df_as_np)-window_size):
row = [[a] for a in df_as_np[i:i+window_size]]
X.append(row)
label = df_as_np[i+window_size]
y.append(label)
return np.array(X), np.array(y)
WINDOW_SIZE = 5
X, y = df_to_X_y(radon, WINDOW_SIZE)
print(X.shape, y.shape)
# Split data into train and test
X_train, y_train = X[:60_000], y[:60_000]
X_val, y_val = X[60_000:65_000], y[60_000:65_000]
X_test, y_test = X[65_000:], y[65_000:]
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.metrics import RootMeanSquaredError
from tensorflow.keras.optimizers import Adam
model2 = Sequential()
model2.add(InputLayer((WINDOW_SIZE, 1)))
model2.add(Conv1D(64,kernel_size=1))
model2.add(Flatten())
model2.add(Dense(8, 'relu'))
model2.add(Dense(1, 'linear'))
model2.summary()
# checkpoint
cp2 = ModelCheckpoint('model2/', save_best_only=True)
#compile model
model2.compile(loss=MeanSquaredError(), optimizer=Adam(learning_rate=0.0001), metrics=[RootMeanSquaredError()])
# fit model
model2.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10, callbacks=[cp2])
</code></pre>
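Rather than 70 separate models, one common approach is to keep a single network and pool the windowed samples from every (house, room) series into one training set, windowing each series separately so no window spans two different rooms. A sketch of just the data-pooling part (synthetic data, Keras calls omitted so it stays self-contained):

```python
import numpy as np
import pandas as pd

def df_to_X_y(series, window_size):
    arr = series.to_numpy()
    X = [arr[i:i + window_size, None] for i in range(len(arr) - window_size)]
    y = arr[window_size:]
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'house': np.repeat(['01TE85', '02TE85'], 40),
    'room': np.tile(np.repeat(['Bedroom', 'Kitchen'], 20), 2),
    'radon_short_term_avg': rng.normal(0.5, 0.1, 80),
})

# Window each (house, room) series on its own, then stack all windows
X_parts, y_parts = [], []
for _, grp in df.groupby(['house', 'room']):
    X, y = df_to_X_y(grp['radon_short_term_avg'], 5)
    X_parts.append(X)
    y_parts.append(y)
X_train = np.concatenate(X_parts)
y_train = np.concatenate(y_parts)
# model2.fit(X_train, y_train, ...)  # one network trained on all houses/rooms
```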
|
<python><tensorflow><keras>
|
2022-12-13 21:45:38
| 1
| 2,263
|
HCAI
|
74,791,202
| 3,246,693
|
Match nearest timestamp without set_index
|
<p>I have 2 dataframes that contain event activities from various applications.</p>
<p>I want to merge them together and match up the events from the 2nd dataframe where the date field is less than (before) the date in the 1st dataframe wherever the user and system are the same. Also, if there is no event in Data2 or the only events in Data2 occur on or after the dates in Data1, those should just return as null, NaN, or blank.</p>
<p>I've found numerous ways to do this by using set_index and nearest, however if I set the index to the 3 columns in question I get errors since my index would contain duplicate values in the index. The error about the index needing to be unique makes perfect sense, however now I am wondering how I can do this without relying on indexes.</p>
<p>Sample data:</p>
<pre><code>data1 = {
'user':["bob", "bob", "bob", "bob", "bob", "bob", "jeb", "sue"],
'system':["A", "A", "A", "A", "B", "B", "A", "B"],
'date':[
'2022-12-11 10:00:00',
'2022-12-11 10:00:00',
'2022-12-11 10:00:01',
'2022-12-09 10:00:01',
'2022-12-10 11:00:01',
'2022-12-15 11:00:01',
'2022-12-10 10:00:01',
'2022-12-10 10:00:01'],
'other_data': ["Blah", "Blah","Blah", "Blah", "Blah", "Blah", "Blah", "Blah"]
}
data2 = {
'user':["bob", "bob", "bob", "bob", "bob", "bob", "jeb", "sue", "sue", "ted"],
'system':["A", "A", "A", "B", "B", "B", "B", "B", "B", "A"],
'date':[
'2022-12-11 11:00:00',
'2022-12-11 10:00:00',
'2022-12-11 09:59:00',
'2022-12-11 11:00:00',
'2022-12-11 10:00:00',
'2022-12-11 09:59:00',
'2022-12-10 08:00:01',
'2022-12-01 10:00:01',
'2022-12-13 10:00:01',
'2022-12-01 10:00:01'],
'other_data': ["Blah", "Blah","Blah", "Blah", "Blah", "Blah", "Blah", "Blah", "Blah", "Blah"]
}
dfData1 = pd.DataFrame(data=data1)
dfData1['date'] = pd.to_datetime(dfData1['date'])
dfData2 = pd.DataFrame(data=data2)
dfData2['date'] = pd.to_datetime(dfData2['date'])
</code></pre>
<p>Looking to get a dataframe returned similar to this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>user_1</th>
<th>system_1</th>
<th>date_1</th>
<th>other_data_1</th>
<th>user_2</th>
<th>system_2</th>
<th>date_2</th>
<th>other_data_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>bob</td>
<td>A</td>
<td>2022-12-11 10:00:00</td>
<td>Blah</td>
<td>bob</td>
<td>A</td>
<td>2022-12-11 09:59:00</td>
<td>Blah</td>
</tr>
<tr>
<td>bob</td>
<td>A</td>
<td>2022-12-11 10:00:00</td>
<td>Blah</td>
<td>bob</td>
<td>A</td>
<td>2022-12-11 09:59:00</td>
<td>Blah</td>
</tr>
<tr>
<td>bob</td>
<td>A</td>
<td>2022-12-11 10:00:01</td>
<td>Blah</td>
<td>bob</td>
<td>A</td>
<td>2022-12-11 10:00:00</td>
<td>Blah</td>
</tr>
<tr>
<td>bob</td>
<td>A</td>
<td>2022-12-09 10:00:01</td>
<td>Blah</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>bob</td>
<td>B</td>
<td>2022-12-10 11:00:01</td>
<td>Blah</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>bob</td>
<td>B</td>
<td>2022-12-15 11:00:01</td>
<td>Blah</td>
<td>bob</td>
<td>B</td>
<td>2022-12-11 11:00:00</td>
<td>Blah</td>
</tr>
<tr>
<td>jeb</td>
<td>A</td>
<td>2022-12-10 10:00:01</td>
<td>Blah</td>
<td>jeb</td>
<td>B</td>
<td>2022-12-10 08:00:01</td>
<td>Blah</td>
</tr>
<tr>
<td>sue</td>
<td>B</td>
<td>2022-12-10 10:00:01</td>
<td>Blah</td>
<td>sue</td>
<td>B</td>
<td>2022-12-01 10:00:01</td>
<td>Blah</td>
</tr>
</tbody>
</table>
</div>
<p>How does one merge data like this on multiple columns without setting those columns as the index to avoid the issue of duplicate index values?</p>
<p>Edit:</p>
<p>This gets me closer by adding temp date fields to merge and using pd.merge_asof</p>
<pre><code>dfData1['mdate'] = dfData1['date']
dfData2['mdate'] = dfData2['date']
dfData1.sort_values(['mdate'], inplace=True)
dfData2.sort_values(['mdate'], inplace=True)
pd.merge_asof(
dfData1[(dfData1['user']=="bob") & (dfData1['system']=="A")],
dfData2[(dfData2['user']=="bob") & (dfData2['system']=="A")],
on="mdate",
by=["user", "system"],
suffixes=["_1","_2"]
)
</code></pre>
<p>This also does a less than or equal to on the compare and there doesn't appear to be a just less than merge option, but I can also just add 1 second to all the all values in dfData2["mdate"] to compensate for that.</p>
<p>However, the merge only works if I am subsetting to unique user/system rows for the merge_asof. I could loop through each subset of user/system data in the dataframes and reassemble later, but I'm still wondering if there's a more efficient way to do this?</p>
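On the strict less-than point: `merge_asof` does have that option, `allow_exact_matches=False`, so the add-one-second workaround is unnecessary. A small sketch on two of the rows above (the right frame keeps a copy of its timestamp so the matched value stays visible after the merge):

```python
import pandas as pd

left = pd.DataFrame({
    'user': ['bob', 'bob'], 'system': ['A', 'A'],
    'date': pd.to_datetime(['2022-12-11 10:00:00', '2022-12-11 10:00:01']),
}).sort_values('date')

right = pd.DataFrame({
    'user': ['bob', 'bob'], 'system': ['A', 'A'],
    'date': pd.to_datetime(['2022-12-11 09:59:00', '2022-12-11 10:00:00']),
}).sort_values('date')
right['date_2'] = right['date']   # keep the matched timestamp visible

# Nearest earlier event, strictly before (no exact matches), per user/system
out = pd.merge_asof(left, right, on='date', by=['user', 'system'],
                    allow_exact_matches=False)
```

With `allow_exact_matches=False`, the 10:00:00 row matches the 09:59:00 event rather than the simultaneous one, with no need to subset per user/system first since `by=` handles the grouping.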
|
<python><python-3.x><pandas><dataframe>
|
2022-12-13 21:44:08
| 0
| 803
|
user3246693
|
74,791,141
| 12,574,341
|
Iterating through one dict of size n vs two dicts of size n/2
|
<p>Implementing a bijective map. Internally representing it using dictionary(s).</p>
<p>Is there a performance difference between iterating through a dictionary of size n and iterating through two dictionaries of size n/2?</p>
<p>Option 1:</p>
<pre class="lang-py prettyprint-override"><code>d = {'A': 1, 'B': 2, 1: 'A', 2: 'B'}
if some_key in d:
...
</code></pre>
<p>Option 2:</p>
<pre class="lang-py prettyprint-override"><code>d1 = {'A': 1, 'B': 2}
d2 = {1: 'A', 2: 'B'}
if some_key in d1 or some_key in d2:
...
</code></pre>
<p>While both options would involve the same number of iterations, my concern is that there may be some auxiliary operations happening before each <code>__contains__()</code> is invoked by the <code>in</code> keyword, resulting in the second option being slightly worse as it has two of them.</p>
<p>Are they equivalent?</p>
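One clarifying note, with a quick (machine-dependent) measurement sketch: `in` on a dict is an average O(1) hash lookup, not an iteration over the keys, so neither option's cost grows with n; option 2 just pays for up to two lookups and two `__contains__` dispatches in the miss-then-hit case.

```python
import timeit

n = 100_000
d = {i: i for i in range(n)}
d1 = {i: i for i in range(0, n, 2)}
d2 = {i: i for i in range(1, n, 2)}

# 999 is odd: found in d (1 lookup) vs missed in d1 then found in d2 (2 lookups)
elapsed_one = timeit.timeit(lambda: 999 in d, number=50_000)
elapsed_two = timeit.timeit(lambda: 999 in d1 or 999 in d2, number=50_000)
print(f'one dict: {elapsed_one:.4f}s, two dicts: {elapsed_two:.4f}s')
```

Absolute numbers vary by machine; the point is that both are constant time, with a small constant-factor overhead for the two-dict layout.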
|
<python><performance><hashmap><big-o>
|
2022-12-13 21:37:20
| 1
| 1,459
|
Michael Moreno
|
74,791,092
| 2,601,293
|
pydantic dataclass allowing None parameter
|
<p>When using <code>pydantic.dataclass</code> I'm specifying that a type must be an <code>int</code>. When the constructor is called with a <code>None</code> parameter, the validator <strong>doesn't</strong> raise a <code>ValidationError</code>. How can I make the <code>pydantic.dataclass</code> raise when None is passed?</p>
<pre><code>from pydantic import Field
from pydantic.dataclasses import dataclass
@dataclass
class MyClass:
age: int = Field(None, title="the user age", ge=18, le=120)
def __init__(self, age: int):
self.age = age
>>> print(MyClass(None))
MyClass(age=None) # Expecting an Error here
</code></pre>
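Two things in the snippet likely combine to cause this: a hand-written `__init__` replaces the validating `__init__` that `pydantic.dataclasses.dataclass` generates, and `Field(None, ...)` gives the field a `None` default. A sketch of the intended behaviour, dropping the custom `__init__` and making the field required with `...` (tested against a recent pydantic; exact error wording varies by version):

```python
from pydantic import Field, ValidationError
from pydantic.dataclasses import dataclass

@dataclass
class MyClass:
    # `...` marks the field as required; with no hand-written __init__,
    # the generated, validating __init__ runs
    age: int = Field(..., title="the user age", ge=18, le=120)

try:
    MyClass(age=None)
except ValidationError as exc:
    print("rejected None:", type(exc).__name__)
```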
|
<python><pydantic>
|
2022-12-13 21:31:02
| 1
| 3,876
|
J'e
|
74,791,087
| 4,613,465
|
Python opencv error (-215:Assertion failed) 0 <= scaleIdx && scaleIdx < (int)scaleData->size() in CascadeClassifier.detectMultiScale
|
<p>I'm using the CascadeClassifier of the opencv-python package to perform face detection with the haarcascade_frontalface_default.xml with this code:</p>
<pre><code>self.face_cascade = cv2.CascadeClassifier(haar_cascade_path)
...
gray_frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
faces = self.face_cascade.detectMultiScale(gray_frame, 1.3, 5)
</code></pre>
<p>It's working fine for most of the frames, but sometimes I'm getting this exception:</p>
<pre><code>cv2.error: OpenCV(4.6.0) d:\a\opencv-python\opencv-python\opencv\modules\objdetect\src\cascadedetect.hpp:46: error: (-215:Assertion failed) 0 <= scaleIdx && scaleIdx < (int)scaleData->size() in function 'cv::FeatureEvaluator::getScaleData'
</code></pre>
<p>The error occurs during the <code>detectMultiScale</code> call. I already checked the gray_frame and it looks good (shape is 1366x1060 and it's not None or anything like that). Do you have any idea how to fix this?</p>
|
<python><opencv><face-detection><assertion>
|
2022-12-13 21:30:48
| 1
| 772
|
Fatorice
|
74,791,018
| 7,168,098
|
python ziping lists of single element or tuples into list of tuples
|
<p>Assuming I have a collection of lists of the same size (same number of elements), where the elements are single elements (strings, numbers) or tuples:</p>
<pre><code>a = ['AA', 'BB', 'CC']
b = [7,8,9]
d = [('TT', 'ZZ'),('UU', 'VV'),('JJ','KK')]
e = [('mm', 'nn'), ('bb', 'vv'), ('uu', 'oo')]
</code></pre>
<p>I would like a way of combining whatever two lists (ideally whatever number of lists) in a way that the result is a list of tuples</p>
<pre><code># none of this works:
print(list(zip(a,b))) # this ONLY works for the case in which a & b have single elements, not tuples
print(list(zip(b,*d)))
print(list(zip(*d,*e)))
</code></pre>
<p>Desired result:</p>
<p>pseudo code:</p>
<pre><code>combination of a,b = [('AA', 7), ('BB', 8), ('CC', 9)]
combination of a,d = [('AA','TT', 'ZZ'),('BB', 'UU', 'VV'),('CC','JJ','KK')]
combination of d,e = [('TT', 'ZZ','mm', 'nn'),('UU', 'VV','bb', 'vv'),('JJ','KK','uu', 'oo')]
</code></pre>
<p>Basically, the method would take the elements of the input lists, treat them as tuples (even if they contain only one element), and concatenate all the values of the tuples at each corresponding position.</p>
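One possible implementation of exactly that description (`combine` is an illustrative name): wrap every scalar in a one-element tuple, then concatenate the tuples position by position; `sum(..., ())` concatenates tuples because `+` is tuple concatenation.

```python
def combine(*lists):
    # Treat every element as a tuple (wrapping scalars), then concatenate
    # the tuples at each corresponding position
    rows = zip(*lists)
    return [sum(((e if isinstance(e, tuple) else (e,)) for e in row), ())
            for row in rows]

a = ['AA', 'BB', 'CC']
b = [7, 8, 9]
d = [('TT', 'ZZ'), ('UU', 'VV'), ('JJ', 'KK')]
e = [('mm', 'nn'), ('bb', 'vv'), ('uu', 'oo')]

print(combine(a, b))   # [('AA', 7), ('BB', 8), ('CC', 9)]
print(combine(a, d))   # [('AA', 'TT', 'ZZ'), ('BB', 'UU', 'VV'), ('CC', 'JJ', 'KK')]
print(combine(d, e))   # [('TT', 'ZZ', 'mm', 'nn'), ...]
```

It works for any number of input lists, e.g. `combine(a, b, d)`.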
|
<python><list><dictionary><tuples><zip>
|
2022-12-13 21:23:31
| 4
| 3,553
|
JFerro
|
74,790,952
| 11,229,812
|
How to create a function that can be used with a switch button in a Tkinter GUI?
|
<p>I am trying to build a GUI with Tkinter and use it to control my Raspberry Pi robot.
With some help, I managed to get the buttons working when pressed and held down, but I am struggling with the function that
will let me do things with the switch button.
<a href="https://i.sstatic.net/HSWcT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSWcT.png" alt="enter image description here" /></a></p>
<p>here is a quick snippet of the Tkinter code:</p>
<pre><code># LED Lights
self.led_switch = customtkinter.CTkSwitch(master=self.lights_control, text="LED",
command=lambda: self.led_switch())
self.led_switch.grid(row=0, column=1, pady=10, padx=20, sticky="n")
</code></pre>
<p>and here is the function I'm trying to implement:</p>
<pre><code>is_on = True
def led_switch(self, event):
global is_on
if is_on:
print("It is on")
#is_on = False
else:
print("It is off")
#is_on = True
</code></pre>
<p>This doesn't seem to work since I'm getting the error message:</p>
<pre><code> self.led_switch = customtkinter.CTkSwitch(master=self.lights_control, text="LED", command=lambda: self.led_switch())
</code></pre>
<p><code>TypeError: 'CTkSwitch' object is not callable</code>.</p>
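The likely root cause is name shadowing: `self.led_switch = customtkinter.CTkSwitch(...)` rebinds the name `led_switch` to the widget, so the lambda's `self.led_switch()` tries to call the widget itself, hence "'CTkSwitch' object is not callable". Giving the method and the widget different names, and keeping the on/off state on the instance instead of a global, is one fix; a GUI-free sketch of the pattern (`RobotPanel`, `toggle_led`, `led_switch_widget` are illustrative names):

```python
class RobotPanel:
    def __init__(self):
        self.led_is_on = False
        # In the real GUI the widget must NOT reuse the method's name, e.g.:
        #   self.led_switch_widget = customtkinter.CTkSwitch(
        #       master=..., text="LED", command=self.toggle_led)

    def toggle_led(self):
        # CTkSwitch invokes the command with no arguments, so no `event` param
        self.led_is_on = not self.led_is_on
        print("LED is on" if self.led_is_on else "LED is off")

panel = RobotPanel()
panel.toggle_led()   # LED is on
panel.toggle_led()   # LED is off
```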
<p>here is the full code if that helps:</p>
<pre><code> import tkinter
import tkinter.messagebox
import customtkinter
# Setting up theme
customtkinter.set_appearance_mode("Dark") # Modes: "System" (standard), "Dark", "Light"
customtkinter.set_default_color_theme("blue") # Themes: "blue" (standard), "green", "dark-blue"
class App(customtkinter.CTk):
def __init__(self):
super().__init__()
# configure window
self.title("Cool Blue")
self.geometry(f"{1200}x{560}")
# configure grid layout (4x4)
self.grid_columnconfigure(1, weight=1)
self.grid_columnconfigure((2, 3), weight=0)
self.grid_rowconfigure((0, 1, 2, 3, 4), weight=1)
self.bind("<KeyPress>", self.key_pressed)
self.bind("<KeyRelease>", self.key_released)
###################################################################
# create sidebar frame for controls
###################################################################
self.sidebar_frame = customtkinter.CTkFrame(self, width=100)
self.sidebar_frame.grid(row=0, column=0, rowspan=1, padx=(5, 5), pady=(10, 10), sticky="nsew")
self.sidebar_frame.grid_rowconfigure(4, weight=1)
# Setting up grid label
self.logo_label = customtkinter.CTkLabel(self.sidebar_frame, text="Motion", font=customtkinter.CTkFont(size=15, weight="bold"))
self.logo_label.grid(row=0, column=1, padx=20, pady=(10, 10))
# Setting up W - Forward button
self.button_up = customtkinter.CTkButton(self.sidebar_frame, text="W", height=10, width=10)
self.button_up.grid(row=1, column=1, padx=20, pady=10, ipadx=10, ipady=10)
self.button_up.bind('<ButtonPress-1>', lambda x: self.motion_event_start(x, 'W'))
self.button_up.bind('<ButtonRelease-1>', lambda x: self.motion_event_stop(x, 'W'))
# Setting up S - Backwards buttons
self.button_down = customtkinter.CTkButton(self.sidebar_frame, text="S", height=10, width=10)
self.button_down.grid(row=3, column=1, padx=20, pady=10, ipadx=10, ipady=10)
self.button_down.bind('<ButtonPress-1>', lambda x: self.motion_event_start(x, 'S'))
self.button_down.bind('<ButtonRelease-1>', lambda x: self.motion_event_stop(x, 'S'))
# Setting up A - Left button
self.button_left = customtkinter.CTkButton(self.sidebar_frame, text="A", height=10, width=10)
self.button_left.grid(row=2, column=0, padx=10, pady=10, ipadx=10, ipady=10)
self.button_left.bind('<ButtonPress-1>', lambda x: self.motion_event_start(x, 'A'))
self.button_left.bind('<ButtonRelease-1>', lambda x: self.motion_event_stop(x, 'A'))
# Setting up D - Right button
self.button_right = customtkinter.CTkButton(self.sidebar_frame, text="D", height=10, width=10)
self.button_right.grid(row=2, column=2, padx=10, pady=10, ipadx=10, ipady=10)
self.button_right.bind('<ButtonPress-1>', lambda x: self.motion_event_start(x, 'D'))
self.button_right.bind('<ButtonRelease-1>', lambda x: self.motion_event_stop(x,'D'))
###################################################################
# Create Sidebar for arm control
###################################################################
self.arm_control = customtkinter.CTkFrame(self)
self.arm_control.grid(row=1, column=0, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="nsew")
self.arm_control.grid_rowconfigure(2, weight=1)
# Setting up grid label
self.arm_label = customtkinter.CTkLabel(self.arm_control, text="Arm", font=customtkinter.CTkFont(size=15, weight="bold"))
self.arm_label.grid(row=0, column=0, padx=20, pady=(10, 10))
self.grip_label = customtkinter.CTkLabel(self.arm_control, text="Grip", font=customtkinter.CTkFont(size=15, weight="bold"))
self.grip_label.grid(row=0, column=1, padx=20, pady=(10, 10))
# Arm Up
self.button_arm_up = customtkinter.CTkButton(self.arm_control, text=" Up ", height=10, width=10)
self.button_arm_up.grid(row=1, column=0, padx=10, pady=10, ipadx=30, ipady=10)
# Arm Down
self.button_arm_down = customtkinter.CTkButton(self.arm_control, text=" Down", height=10, width=10)
self.button_arm_down.grid(row=2, column=0, padx=10, pady=10, ipadx=30, ipady=10)
self.button_grip_open = customtkinter.CTkButton(self.arm_control, text="Open", height=10, width=10)
self.button_grip_open.grid(row=1, column=1, padx=10, pady=10, ipadx=30, ipady=10, sticky="w")
self.button_grip_close = customtkinter.CTkButton(self.arm_control, text="Close", height=10, width=10)
self.button_grip_close.grid(row=2, column=1, padx=10, pady=10, ipadx=30, ipady=10, sticky="w")
###################################################################
        # Create Sidebar for lights
###################################################################
self.lights_control = customtkinter.CTkFrame(self)
self.lights_control.grid(row=3, column=0, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="nsew")
self.lights_control.grid_rowconfigure(1, weight=1)
# LED Lights
self.led_switch = customtkinter.CTkSwitch(master=self.lights_control, text="LED", command=lambda: self.led_switch)
self.led_switch.grid(row=0, column=1, pady=10, padx=20, sticky="n")
# Regular Lights
self.regular_ligths_switch = customtkinter.CTkSwitch(master=self.lights_control, text="Lights", command=lambda: print("switch 1 toggle"))
self.regular_ligths_switch.grid(row=1, column=1, pady=10, padx=20)
# Camera
self.led_switch = customtkinter.CTkSwitch(master=self.lights_control, text="Camera", command=lambda: print("switch 1 toggle"))
self.led_switch.grid(row=2, column=1, pady=10, padx=20, )
# create Video Canvas
self.picam = customtkinter.CTkCanvas(self, width=800, background="gray")
self.picam.grid(row=0, column=1, rowspan=4, padx=(5, 5), pady=(20, 20), sticky="nsew")
self.picam.grid_rowconfigure(4, weight=1)
self.picam_label = customtkinter.CTkLabel(master=self.picam, text="Live Stream", font=customtkinter.CTkFont(size=20, weight="bold"))
self.picam_label.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
# create radiobutton frame
self.temperature = customtkinter.CTkFrame(self)
self.temperature.grid(row=0, column=3, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="n")
self.temperature.grid_rowconfigure(2, weight=1)
self.label_temperature = customtkinter.CTkLabel(master=self.temperature, text="Temperature")
self.label_temperature.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
# create checkbox and switch frame
self.pressure = customtkinter.CTkFrame(self)
self.pressure.grid(row=1, column=3, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="n")
self.pressure.grid_rowconfigure(1, weight=1)
self.label_pressure = customtkinter.CTkLabel(master=self.pressure, text="Pressure")
self.label_pressure.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
# create checkbox and switch frame
self.humidity = customtkinter.CTkFrame(self)
self.humidity.grid(row=3, column=3, rowspan = 1, padx=(5, 5), pady=(10, 10), sticky="")
self.humidity.grid_rowconfigure(1, weight=1)
self.label_humidity = customtkinter.CTkLabel(master=self.humidity, text="Humidity")
self.label_humidity.grid(row=0, column=2, columnspan=1, padx=10, pady=10, sticky="")
def key_pressed(self, event):
if event.char in 'wads':
self.motion_event_start(event, event.char.upper())
def key_released(self, event):
if event.char in 'wads':
self.motion_event_stop(event, event.char.upper())
def motion_event_start(self, event, button):
# if button == "W":
# kit1.motor1.throttle = 1
# kit2.motor1.throttle = 1
# elif button == "S":
# kit1.motor1.throttle = -1
# kit2.motor1.throttle = -1
print(f"{button} Pressed")
def motion_event_stop(self, event, button):
print(f"{button} Released")
# kit1.motor1.throttle = 0
# kit2.motor1.throttle = 0
# Switch
is_on = True
def led_switch(self, event):
global is_on
if is_on:
print("It is on")
#is_on = False
else:
print("It is off")
#is_on = True
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>Any ideas on what I'm doing wrong here?</p>
|
<python><python-3.x><tkinter>
|
2022-12-13 21:17:39
| 1
| 767
|
Slavisha84
|
74,790,924
| 2,194,805
|
Python unittests one by one works well but not together under Flask
|
<p>I've a test file containing some unittests. All of them looks like:</p>
<pre><code>def test_something_222(self):
with app.test_client() as c:
response = c.get(URL)
assert response.status_code == some_value
</code></pre>
<p>. When I run them one by one, everything works well, however using <strong>python -m unittest</strong> results error.</p>
<p>What can be the reason for it, and how can I prevent?</p>
<p>Thanks.</p>
|
<python><unit-testing><flask>
|
2022-12-13 21:14:13
| 0
| 1,389
|
user2194805
|
74,790,814
| 5,103,969
|
Is there any way to cimport cv2 in cython?
|
<p>I am trying to cimport cv2 in Cython Code.</p>
<p><code>cimport cv2</code></p>
<p>I have installed the Python OpenCV module and the Cython wrapper for OpenCV, but I'm unsure how to cimport the cv2 module in my Cython code.</p>
<p>If <code>cimport</code> is not possible, how should the cv2 C++ code be imported and run directly from Cython to get the best performance?</p>
<p>This is the error that I am getting</p>
<pre><code>Error compiling Cython file:
'opencv/cv.pxd' not found
</code></pre>
|
<python><opencv><cython>
|
2022-12-13 21:00:46
| 1
| 975
|
Abhik Sarkar
|
74,790,809
| 12,199,326
|
Remove data of type category from plot
|
<p>Say we have a df with a column defined as a category:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Color': ['Yellow', 'Blue', 'Red', 'Red']}, dtype='category') # data type is category
</code></pre>
<p>Now say we want to plot these data while removing one of the categorical levels:</p>
<pre><code># Exclude Yellow, save in new df
df2 = df.loc[df.Color != 'Yellow']
# Plot
df2.value_counts().plot(kind='bar')
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/ezRCS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ezRCS.png" alt="enter image description here" /></a></p>
<p>Although the bar for Yellow is not displayed, the Yellow tick label is still visible.</p>
<p>My question: How do we completely remove Yellow from the plot?</p>
<p>I suspect this issue is due to the fact that the data type is category. But I don't want to convert the data type. The type category is sometimes useful, e.g., to reorder levels or other operations.</p>
<p>Ideal solution for me would also work with seaborn, where I found a similar issue:</p>
<pre><code># Remake a df based on the above and plot with seaborn
df2=pd.DataFrame(df2.value_counts()).reset_index()
import seaborn as sns
from matplotlib import pyplot as plt
sns.catplot(data=df2, x=0, y='Color', kind='bar')
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/lgBQo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lgBQo.png" alt="enter image description here" /></a></p>
<p>Dani Mesejo's answer works, but only with histograms, I believe. And I need bar plots per se.</p>
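<p>One approach I would expect to work (a sketch, not tested against the seaborn call above) is to drop the now-empty level with <code>cat.remove_unused_categories()</code> after filtering, so the plot no longer sees Yellow:</p>

```python
import pandas as pd

df = pd.DataFrame({'Color': ['Yellow', 'Blue', 'Red', 'Red']}, dtype='category')

# Filter, then drop the now-empty category level so plots ignore it
df2 = df.loc[df.Color != 'Yellow'].copy()
df2['Color'] = df2['Color'].cat.remove_unused_categories()

print(list(df2['Color'].cat.categories))  # ['Blue', 'Red']
```

<p>The same <code>df2</code> should then keep the data type <code>category</code> while plotting only the remaining levels.</p>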
|
<python><pandas><plot><dtype>
|
2022-12-13 21:00:13
| 2
| 892
|
johnjohn
|
74,790,773
| 2,526,128
|
Is an attribute protected if initialized as public but setters & getters treat it as protected?
|
<p>While learning the basics of object oriented programming in Python, I came across something I cannot search effectively to find matching keywords:</p>
<p>I wrote validation checks for a basic class <code>Course</code>, which has a protected attribute <code>_level</code>. The getters and setters treat this as a protected attribute. The validation makes sure <code>self._level</code> is between 0 and 100.</p>
<ul>
<li>But the validation checks in the setter are evaded:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>class Course:
def __init__(self, level=None):
self._level = level if level else 0
@property
def level(self):
return self._level
@level.setter
def level(self, value):
if not isinstance(value, int):
raise TypeError('The value of level must be of type int.')
if value < 0:
self._level = 0
elif value > 100:
self._level = 100
else:
self._level = value
###
courses = [Course(), Course(10), Course(-10), Course(150)]
for c in courses:
print(c.level)
>>> 0
>>> 10
>>> -10
>>> 150
</code></pre>
<ul>
<li>I then change the initialization from a protected <code>self._level</code> to a public attribute <code>self.level</code>. My setters and getters still refer to a protected attribute <code>self._level</code>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>class Course:
def __init__(self, level=None):
self.level = level if level else 0
@property
def level(self):
return self._level
@level.setter
def level(self, value):
if not isinstance(value, int):
raise TypeError('The value of level must be of type int.')
if value < 0:
self._level = 0
elif value > 100:
self._level = 100
else:
self._level = value
courses = [Course(), Course(10), Course(-10), Course(150)]
for c in courses:
print(c.level)
>>> 0
>>> 10
>>> 0
>>> 100
</code></pre>
<p>I was surprised that not only does the class realize it is the same attribute, but now my validations are implemented!</p>
<p>How does Python reconcile <code>self.level</code> with <code>self._level</code>, and why must I initialize it as a public attribute in order for my validations to kick in?</p>
|
<python><oop>
|
2022-12-13 20:55:21
| 1
| 718
|
batlike
|
74,790,739
| 7,882,846
|
Pandas Mixed Date Format Values in One Column
|
<pre><code>df = pd.Series('''18-04-2022
2016-10-05'''.split('\n') , name='date'
).to_frame()
df['post_date'] = pd.to_datetime(df['date'])
print (df)
date post_date
0 18-04-2022 2022-04-18
1 2016-10-05 2016-10-05
</code></pre>
<p>When trying to align the date column into one consistent format, the dates are parsed inconsistently, as shown above.<br>
The problem is that the values mix the date formats dd-mm-yyyy (18-04-2022) and yyyy-dd-mm (2016-10-05). <br></p>
<p>What I want to have is below (yyyy-mm-dd) for both of the above inconsistent formats:</p>
<pre><code> date post_date
0 18-04-2022 2022-04-18
1 2016-10-05 2016-05-10
</code></pre>
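<p>One hedged approach I am considering, since both formats are known in advance: parse each format explicitly with <code>errors='coerce'</code> and combine the results (the variable names here are illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ['18-04-2022', '2016-10-05']})

# Parse each known format separately; rows that don't match become NaT
d1 = pd.to_datetime(df['date'], format='%d-%m-%Y', errors='coerce')  # dd-mm-yyyy
d2 = pd.to_datetime(df['date'], format='%Y-%d-%m', errors='coerce')  # yyyy-dd-mm
df['post_date'] = d1.fillna(d2)

print(df['post_date'].dt.strftime('%Y-%m-%d').tolist())  # ['2022-04-18', '2016-05-10']
```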
<p>Appreciate it in advance.</p>
|
<python><pandas><datetime>
|
2022-12-13 20:51:18
| 1
| 439
|
Todd
|
74,790,649
| 311,130
|
How to repeatedly attach object A for every object B in python?
|
<pre><code>raw_data_row = { "account_level_data" : {"account_name": "A", "currency" :"USD"}, "store_level_data" : {"store_name": "MySotre", "address" :"MyStreet"}, "item_values" : [{"name": "a", "price" : 30}, {"name": "b", "price" : 40}]}
</code></pre>
<p>How can I create an object like that:</p>
<pre><code>new data =
[{ "account_name": "A", "currency" :"USD", "store_name":"MySotre", "address":"MyStreet", "name": "a", "price" : 30}
{ "account_name": "A", "currency" :"USD", "store_name":"MySotre", "address":"MyStreet", "name": "b", "price" : 40}]
</code></pre>
<p>Right now I specify every attribute name and values manually, any way to do it more generic?</p>
<pre><code>new_rows = []
for item in raw_data_row.item_values:
new_rows.append({"account_name" :raw_data_row.account_level_data.account_name, "currency" :raw_data_row.account_level_data.currency, "store_name" :raw_data_row.store_level_data.store_name, "address" :raw_data_row.store_level_data.address, "name" : item.name, "price":item.price})
</code></pre>
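<p>Assuming <code>raw_data_row</code> is a plain dict as in the literal above (my attempt uses attribute access instead), dict unpacking can merge the shared sections into each item without hard-coding any field name:</p>

```python
raw_data_row = {
    "account_level_data": {"account_name": "A", "currency": "USD"},
    "store_level_data": {"store_name": "MyStore", "address": "MyStreet"},
    "item_values": [{"name": "a", "price": 30}, {"name": "b", "price": 40}],
}

# Flatten every non-item section into one dict of shared fields,
# then merge those shared fields into each item row
shared_sections = (v for k, v in raw_data_row.items() if k != "item_values")
common = {k: v for section in shared_sections for k, v in section.items()}
new_rows = [{**common, **item} for item in raw_data_row["item_values"]]

print(new_rows[0]["store_name"], new_rows[1]["price"])  # MyStore 40
```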
|
<python>
|
2022-12-13 20:42:53
| 1
| 36,892
|
Elad Benda
|
74,790,598
| 3,937,811
|
How to process a response from the Twilio REST API
|
<p>I am developing a program to help treat depression. I do not have a deep understanding of Twilio. I would like to collect the responses to this message:</p>
<pre><code>Sent from your Twilio trial account - What are the positive results or outcomes you have achieved lately?
What are the strengths and resources you have available to you to get even more results and were likely the reason you got the results in the first question.
What are your current priorities? What do you and your team need to be focused on right now?
What are the benefits to all involved-you
your team and all other stakeholders who will be impacted by achieving your priority focus.
How can we (you and/or your team) move close? What action steps are needed?
What am I going to do today?
What am I doing tomorrow ?
What did I do yesterday?
</code></pre>
<p>and process them 1-9. The responses will be enumerated by 1-9.</p>
<p>I've contacted Twilio support and I read these docs <a href="https://www.twilio.com/docs/sms/tutorials/how-to-receive-and-reply-python" rel="nofollow noreferrer">https://www.twilio.com/docs/sms/tutorials/how-to-receive-and-reply-python</a>.</p>
<p>Here is what I tried:</p>
<pre><code># Download the helper library from https://www.twilio.com/docs/python/install
import os
from twilio.rest import Client
import logging
import csv
import psycopg2
from flask import Flask, request, redirect
from twilio.twiml.messaging_response import MessagingResponse
app = Flask(__name__)
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
# Set environment variables
DATABASE = os.environ["DATABASE"]
PASSWORD = os.environ["PASSWORD"]
PORT = os.environ["PORT"]
USER = os.environ["USER"]
HOST = os.environ["HOST"]
# initialization TODO: move into env vars
MY_PHONE_NUMBER = os.environ["MY_PHONE_NUMBER"]
TWILIO_PHONE_NUMBER = os.environ["TWILIO_PHONE_NUMBER"]
TWILIO_ACCOUNT_SID = os.environ["TWILIO_ACCOUNT_SID"]
TWILIO_AUTH_TOKEN = os.environ["TWILIO_AUTH_TOKEN"]
# Configure Twillio
# Set environment variables for your credentials
# Read more at http://twil.io/secure
client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
logging.debug(f"Connected to Twilio using MY_PHONE_NUMBER:{MY_PHONE_NUMBER},TWILIO_PHONE_NUMBER{TWILIO_PHONE_NUMBER}")
# Establish db connection
# use psycopg to connect to the db and create a table
conn = psycopg2.connect(
database=DATABASE, user=USER, password=PASSWORD, host=HOST, port=PORT)
conn.autocommit = True
cursor = conn.cursor()
# Step 1: Set up frequency, i.e. times to send messages
# Step 2: Load questions
questionsFile = open('questions.csv')
questions = csv.reader(questionsFile)
logging.debug(f"message:{questions}")
message = "\n".join([question for row in questions for question in row])
logging.debug(f"message: {message}")
# Step 3: Send questions
# message = client.messages.create(
# body=message,
# from_=TWILIO_PHONE_NUMBER,
# to=MY_PHONE_NUMBER
# )
# Step 4: Collect response
@app.route("/sms", methods=['GET', 'POST'])
def incoming_sms():
"""Send a dynamic reply to an incoming text message"""
# Get the message the user sent our Twilio number
body = request.values.get('Body', None)
# Start our TwiML response
resp = MessagingResponse()
# Determine the right reply for this message
if body == 'hello':
resp.message("Hi!")
elif body == 'bye':
resp.message("Goodbye")
return str(resp)
if __name__ == "__main__":
app.run(debug=True)
# Step 5: Create a database table as the sheet name and Save responses in db
logging.debug(f'Step 2 creating table response')
# TODO: create 10 columns for saving responses (each response contains 10 answers)
sql = f'CREATE TABLE IF NOT EXISTS public.responses'
logging.debug(f'CREATE TABLE IF NOT EXISTS public.responses')
# cursor.execute(sql)
# conn.commit()
# Next steps:
# 1. Process positive and negative sentiment from responses
# 2. Calculuate total positive sentiment
# 3. Calculate total negative sentiment
# 4. Plot positive sentiment vs. negative sentiment
</code></pre>
<p>The documentation doesn't provide a clear path for completing step 4.</p>
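<p>For step 4, once the webhook receives the reply body, the numbered answers could be split out with a small parser. This is a pure-Python sketch that assumes the reply uses <code>1.</code>-style prefixes (the function name is mine, not from Twilio):</p>

```python
import re

def parse_numbered_answers(body):
    """Split a reply like '1. good\n2. rest' into {1: 'good', 2: 'rest'}."""
    answers = {}
    for match in re.finditer(r'(\d+)\s*[.)]\s*(.*)', body):
        answers[int(match.group(1))] = match.group(2).strip()
    return answers

reply = "1. Shipped the demo\n2. Supportive team\n3. Finish testing"
print(parse_numbered_answers(reply))
```

<p>Inside <code>incoming_sms()</code>, <code>body</code> from <code>request.values.get('Body', None)</code> would be passed to this parser before saving the pieces to the database.</p>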
<p><a href="https://i.sstatic.net/QPhfy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QPhfy.png" alt="enter image description here" /></a>
[shows response from text message.]</p>
<p><a href="https://i.sstatic.net/iN3dJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iN3dJ.png" alt="enter image description here" /></a>
[messages url]</p>
<p>Expected
processed responses from the questions.</p>
<p>Actual:</p>
<pre><code>Sent from your Twilio trial account - What are the positive results or outcomes you have achieved lately?
What are the strengths and resources you have available to you to get even more results and were likely the reason you got the results in the first question.
What are your current priorities? What do you and your team need to be focused on right now?
What are the benefits to all involved-you
your team and all other stakeholders who will be impacted by achieving your priority focus.
How can we (you and/or your team) move close? What action steps are needed?
What am I going to do today?
What am I doing tomorrow ?
What did I do yesterday?
</code></pre>
<p>I added a Twilio flow to test the connection to Twilio. This could go in two directions just python or using a Twilio flow. The flow does not work even after adding the correct number.</p>
<p><a href="https://i.sstatic.net/e4PNk.png" rel="nofollow noreferrer">messages screenshot</a></p>
<ul>
<li>this comes from the demo video which does not match the current UI: <a href="https://www.youtube.com/watch?v=VRxirse1UfQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=VRxirse1UfQ</a>.</li>
</ul>
|
<python><twilio>
|
2022-12-13 20:37:31
| 2
| 2,066
|
Evan Gertis
|
74,790,568
| 16,962,446
|
difference between autobegin and Session.begin
|
<p>In the following code snippet a commit is executed once the with block is exited.</p>
<pre><code>from sqlalchemy import create_engine, Column, String
from sqlalchemy.orm import declarative_base, Session
Base = declarative_base()
class Foo(Base):
__tablename__ = 'Foo'
id = Column(String, primary_key=True)
engine = create_engine('sqlite:///temp.db', echo=True)
Base.metadata.create_all(engine)
with Session(engine) as session, session.begin():
session.add(Foo(id=1))
</code></pre>
<p>As per the docs <a href="https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.Session.begin" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.Session.begin</a></p>
<blockquote>
<p>The Session object features autobegin behavior, so that normally it is not necessary to call the Session.begin() method explicitly</p>
</blockquote>
<p>Running without <code>begin</code>, as below, does not commit, even though the docs say that calling <code>Session.begin()</code> explicitly is normally not necessary.</p>
<pre><code>with Session(engine) as session:
session.add(Foo(id=1))
</code></pre>
<p>The docs for <code>Session.begin</code> do not mention anything about commits; what am I not understanding here?</p>
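<p>From experimenting (sketch below with an in-memory SQLite database), my reading is that autobegin only <em>opens</em> the transaction; exiting the session context without an explicit commit rolls it back, whereas the <code>session.begin()</code> context adds commit-on-exit:</p>

```python
from sqlalchemy import create_engine, Column, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'Foo'
    id = Column(String, primary_key=True)

engine = create_engine('sqlite://')  # in-memory database
Base.metadata.create_all(engine)

# autobegin opens a transaction on first use, but exiting the context
# only closes the session -- without a commit the transaction rolls back
with Session(engine) as session:
    session.add(Foo(id='1'))
    session.commit()  # needed here; session.begin() would do this on exit

with Session(engine) as session:
    print(session.query(Foo).count())  # 1
```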
|
<python><sql><sqlalchemy><transactions>
|
2022-12-13 20:32:59
| 1
| 557
|
Jake
|
74,790,288
| 3,937,811
|
How to process words from a csv list
|
<p>I am running into an issue based on the following program.</p>
<h3>Code</h3>
<pre><code># Download the helper library from https://www.twilio.com/docs/python/install
import os
from twilio.rest import Client
import logging
import csv
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
# initialization
MY_PHONE_NUMBER = os.environ["MY_PHONE_NUMBER"]
TWILIO_PHONE_NUMBER = os.environ["TWILIO_PHONE_NUMBER"]
TWILIO_ACCOUNT_SID = os.environ["TWILIO_ACCOUNT_SID"]
TWILIO_AUTH_TOKEN = os.environ["TWILIO_AUTH_TOKEN"]
# Configure Twillio
# Set environment variables for your credentials
# Read more at http://twil.io/secure
client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
logging.debug(f"Connected to Twilio using MY_PHONE_NUMBER:{MY_PHONE_NUMBER},TWILIO_PHONE_NUMBER{TWILIO_PHONE_NUMBER}")
# Establish db connection
# Step 1: Set up frequency, i.e. times to send messages
# Step 2: Load questions
questionsFile = open('questions.csv')
questions = csv.reader(questionsFile)
logging.debug(f"message:{questions}")
message = ""
for q in questions:
message= q+"\n"
logging.debug(f"message:{message}")
# Step 3: Send questions
# message = client.messages.create(
# body="Hello from Twilio",
# from_=TWILIO_PHONE_NUMBER,
# to=MY_PHONE_NUMBER
# )
# Step 4: Collect response
# Step 5: Save responses in db
# Next steps:
# 1. Process positive and negative sentiment from responses
# 2. Calculuate total positive sentiment
# 3. Calculate total negative sentiment
# 4. Plot positive sentiment vs. negative sentiment
# https://stackoverflow.com/questions/74752681/what-is-the-best-way-to-perform-sentiment-analysis-by-using-text-message-respons
</code></pre>
<h3>Expected</h3>
<p>the words are processed and converted into a string</p>
<h3>Actual</h3>
<pre><code>2022-12-13 14:32:42,165 - DEBUG - message:<_csv.reader object at 0x106ed05f0>
Traceback (most recent call last):
File "/Users/evangertis/development/PythonAutomation/IGTS/TwilioMessaging/accountability.py", line 33, in <module>
message= q+"\n"
TypeError: can only concatenate list (not "str") to list
</code></pre>
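<p><code>csv.reader</code> yields each row as a <em>list</em> of cell strings, which is why <code>q + "\n"</code> raises the <code>TypeError</code>. Joining the cells first gives plain strings; a sketch using <code>io.StringIO</code> in place of <code>questions.csv</code>:</p>

```python
import csv
import io

# io.StringIO stands in for open('questions.csv') here
questions_file = io.StringIO("What went well?\nWhat are your priorities?\n")
questions = csv.reader(questions_file)

# Each row is a list of cells: join the cells, then join the rows
message = "\n".join(", ".join(row) for row in questions)
print(message)
```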
|
<python><typeerror>
|
2022-12-13 20:01:44
| 2
| 2,066
|
Evan Gertis
|
74,790,072
| 5,666,203
|
NaN values arise after writing and reading a .sigmf-data file in Python
|
<p>I have a Numpy array of complex values that I am preparing to write out to a .sigmf-data file:</p>
<pre><code># ISTFT
t_recon, x_recon = signal.istft(Zxx_unflip,
fs=sample_rate,
nperseg=NFFT,
nfft=NFFT,
noverlap=noverlap,
window=win,
boundary=boundary_istft,
input_onesided=onesided)
# Sanity check min, max, and lack of NaN values
print(type(x_recon))
print(np.min(x_recon),np.max(x_recon))
</code></pre>
<p>This output yields the following, so I'm comfortable that my array is a Numpy array, and has no NaN values in either the real or the imaginary part.</p>
<pre><code><class 'numpy.ndarray'>
(-0.6206440769919024+0.0022856157531386985j) (0.6165799125127172+0.112632267876876j)
</code></pre>
<p>Similar to the <a href="https://github.com/gnuradio/SigMF" rel="nofollow noreferrer">SigMF documentation</a>, I then write this array to a <code>.sigmf-data</code> and an associated <code>.sigmf-meta</code> file:</p>
<pre><code># write those samples to file in cf32_le
data = np.copy(x_recon)
data.tofile('new_example.sigmf-data')
# create the metadata
meta = SigMFFile(
data_file='new_example.sigmf-data', # extension is optional
global_info = {
SigMFFile.DATATYPE_KEY: 'cf32',
SigMFFile.SAMPLE_RATE_KEY: sample_rate,
SigMFFile.VERSION_KEY: sigmf.__version__,
}
)
# create a capture key at time index 0
meta.add_capture(0, metadata={
SigMFFile.FREQUENCY_KEY: center_freq,
SigMFFile.DATETIME_KEY: dt.datetime.utcnow().isoformat()+'Z',
})
# check for mistakes & write to disk
assert meta.validate()
meta.tofile('new_example.sigmf-meta') # extension is optional
</code></pre>
<p>This produces no errors, and the array of complex values is specified by the <code>DATATYPE_KEY</code> of complex (<code>c</code>), float (<code>f</code>), and 32-bit (<code>32</code>). Then I read back in the file(s) I generated:</p>
<pre><code># Open file
collect_base = '/Users/me/'
sigmf_new_data_file = glob.glob(collect_base + '/*.sigmf-data')
sigmf_new_filename = sigmf_new_data_file[0][:-5]
sigmf_new_signal = sigmffile.fromfile(sigmf_new_filename)
# Get the samples corresponding to annotation
capture_samples = sigmf_new_signal.read_samples_in_capture(0)
# Recheck the min and max of the read-in data array
print(np.min(capture_samples),np.max(capture_samples))
print(type(capture_samples))
</code></pre>
<p>Here, the output still reveals the array is a Numpy array, but now my min and max values are not only different, but they also have NaN values in the real parts <em>of the min and max values</em>. I know <code>(nan + nan*j)</code> values also exist elsewhere in the array.</p>
<p>What is causing the change in arrays? How can I remove whatever is damaging my arrays? Is the error occurring when I'm writing the file, or reading the file?</p>
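<p>One likely culprit (an assumption on my part, not verified against SigMF internals): <code>scipy.signal.istft</code> returns <code>complex128</code>, but the metadata declares <code>cf32</code>, so the reader reinterprets 16-byte samples as 8-byte ones and produces garbage, including NaNs. Casting before <code>tofile</code> makes the bytes match the declared type; a minimal numpy-only sketch:</p>

```python
import numpy as np

# complex128 samples, as scipy.signal.istft typically returns
x_recon = np.array([0.5 + 0.25j, -0.1 + 0.9j], dtype=np.complex128)

# Cast so each sample is 8 bytes, matching the declared 'cf32' datatype;
# writing complex128 here would make a cf32 reader misinterpret the bytes
data = x_recon.astype(np.complex64)
data.tofile('example.sigmf-data')

back = np.fromfile('example.sigmf-data', dtype=np.complex64)
print(np.allclose(back, x_recon))  # True
```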
|
<python><arrays><numpy>
|
2022-12-13 19:36:03
| 1
| 1,144
|
AaronJPung
|
74,790,067
| 8,094,926
|
Which variables are allowed in a __str__ implementation?
|
<p>I'm learning about Python's <code>@property</code> decorator using <a href="https://www.online-python.com/online_python_compiler" rel="nofollow noreferrer">this</a>. My understanding is that it is a built-in way to control access to and modification of class attributes. I created a class using this decorator on some properties, then tried implementing <code>__str__</code> to display everything.</p>
<p>Below, everything displays fine using the <code>__str__</code> method. However, when I substitute <code>self._area</code> for <code>self.area</code>, I get <code>AttributeError: Circle object has no attribute _area</code>.</p>
<p>In the <code>__str__</code> implementation, why am I allowed to use <code>self._diameter</code> but <em><strong>not</strong></em> <code>self._area</code>?</p>
<pre><code>class Circle(object):
def __init__(self, radius):
self._radius=radius
@property
def radius(self):
return self._radius
@radius.setter
def radius(self, radius):
self._radius=radius
@property
def diameter(self):
return self._diameter
@diameter.setter
def diameter(self, diameter):
self._diameter=diameter
@diameter.deleter
def diameter(self):
del self._diameter
@property
def area(self):
self._area = self._radius**2*3.14
return self._area
def __str__(self):
return f'Circle has radius of {self._radius}, diameter of {self._diameter}, and area of {self.area}'
c = Circle(4)
c.diameter=3
print(str(c))
</code></pre>
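<p>A minimal sketch of the timing as I understand it: <code>_diameter</code> exists because <code>c.diameter = 3</code> ran the setter, while <code>_area</code> is only created the first time the <code>area</code> getter runs:</p>

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        # _area is created only when this getter first runs
        self._area = self._radius ** 2 * 3.14
        return self._area

c = Circle(4)
print(hasattr(c, '_area'))  # False: the getter has not run yet
c.area                      # running the getter creates _area
print(hasattr(c, '_area'))  # True
```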
|
<python><python-3.x>
|
2022-12-13 19:35:23
| 1
| 468
|
chocalaca
|
74,790,062
| 12,076,197
|
How to Multi-Index an existing DataFrame
|
<p>"Multi-Index" might be the incorrect term of what I'm looking to do, but below is an example of what I'm trying to accomplish.</p>
<p>Original DF:</p>
<pre><code> HOD site1_units site1_orders site2_units site2_orders
hour1 6 3 20 16
hour2 25 10 16 3
hour3 500 50 50 25
hour4 125 65 59 14
hour5 16 1 158 6
hour6 0 0 15 15
hour7 180 18 99 90
</code></pre>
<p>Desired DF</p>
<pre><code> site1 site2
HOD units orders units orders
hour1 6 3 20 16
hour2 25 10 16 3
hour3 500 50 50 25
hour4 125 65 59 14
hour5 16 1 158 6
hour6 0 0 15 15
hour7 180 18 99 90
</code></pre>
<p>Is there an efficient way to construct/format the dataframe like this? Thank you for the help!</p>
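<p>A hedged sketch, assuming the prefix before the first underscore is always the site name: split each column name into a tuple and use it as a <code>pd.MultiIndex</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'HOD': ['hour1', 'hour2'],
    'site1_units': [6, 25], 'site1_orders': [3, 10],
    'site2_units': [20, 16], 'site2_orders': [16, 3],
}).set_index('HOD')

# Split 'site1_units' -> ('site1', 'units') and use it as a two-level header
df.columns = pd.MultiIndex.from_tuples(
    [tuple(c.split('_', 1)) for c in df.columns]
)

print(df['site1']['units'].tolist())  # [6, 25]
```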
|
<python><pandas><dataframe>
|
2022-12-13 19:35:00
| 2
| 641
|
dmd7
|
74,790,060
| 7,158,458
|
View takes 1 positional argument but 2 were given
|
<p>Trying to make a POST request to openAI with the input:</p>
<pre><code>{"write hello world"}
</code></pre>
<p>but getting the error:</p>
<pre><code>TypeError: View.__init__() takes 1 positional argument but 2 were given
</code></pre>
<p>Here is my view:</p>
<pre><code>def get_help(user_input):
response = openai.Completion.create(
engine="text-davinci-002",
prompt=user_input,
temperature=0.5,
max_tokens=1024,
top_p=1,
frequency_penalty=0,
presence_penalty=0
)
return response["choices"][0]["text"]
@api_view(['POST'])
class receive_response(View):
def post(self, request):
user_input = request.POST["user_input"]
response = get_help(user_input)
return HttpResponse(response)
</code></pre>
<p>and my urls.py:</p>
<pre><code>urlpatterns = [
path("get", get_help, name="get_help"),
path("post", receive_response, name="post"),
]
</code></pre>
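<p>My understanding of the error (a plain-Python reproduction, not Django-specific): <code>path("post", receive_response, ...)</code> passes the class itself, so Django calls it like a function, i.e. <code>receive_response(request)</code>, and that extra argument reaches <code>View.__init__</code>:</p>

```python
class DemoView:
    def __init__(self):  # accepts only self
        pass

try:
    DemoView("a request object")  # roughly what routing a bare class does
except TypeError as e:
    print(e)  # takes 1 positional argument but 2 were given
```

<p>If that is right, a class-based view would be routed with <code>receive_response.as_view()</code>, while <code>@api_view</code> is meant for function-based views, not classes.</p>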
|
<python><django><openai-api>
|
2022-12-13 19:34:55
| 1
| 2,515
|
Emm
|
74,789,903
| 10,328,083
|
How to write arrays to .bson file using bson package from pymongo
|
<p>I have a dictionary <code>my_dict</code> of the following form. It has two keys, <code>'0'</code> and <code>'1'</code>. The value for each key, e.g. <code>my_dict['0']</code>, is an <code>np.array</code> with <code>np.shape(my_dict['0']) = (10, 400000)</code>.</p>
<p>I am trying to use the <code>bson</code> package from PyMongo to write this dictionary to a .bson file. My code is</p>
<pre><code>import bson
with open('file_name.bson', 'wb') as f:
f.write(bson.encode(my_dict))
</code></pre>
<p>However, I get the following error message:</p>
<pre><code>---------------------------------------------------------------------------
InvalidDocument Traceback (most recent call last)
Cell In [58], line 2
1 with open('file_name.bson', 'wb') as f:
----> 2 f.write(bson.encode(my_dict))
File /opt/anaconda3/envs/env/lib/python3.10/site-packages/bson/__init__.py:1021, in encode(document, check_keys, codec_options)
1018 if not isinstance(codec_options, CodecOptions):
1019 raise _CODEC_OPTIONS_TYPE_ERROR
-> 1021 return _dict_to_bson(document, check_keys, codec_options)
InvalidDocument: cannot encode object: array([[-6.23719836e+02, -1.36955615e+03, -2.26481493e+03, ...,
-2.57530488e+05, 1.41191655e+05, 3.68818106e+05],
[ 3.51672758e+03, 4.23929871e+03, 4.19956936e+03, ...,
-1.36596353e+07, 2.86662539e+06, 1.44995899e+07],
[-6.13804894e+03, -3.91523214e+03, -4.16903017e+03, ...,
-4.40730152e+08, 9.73392618e+07, 4.49351728e+08],
...,
[-2.28620787e+04, 3.11671441e+04, -5.07810896e+04, ...,
-1.30874776e+15, 7.45906067e+14, 1.16835974e+15],
[-1.70260327e+05, 4.86525723e+04, -2.65229256e+05, ...,
-1.91624806e+16, 1.20260881e+16, 1.67045997e+16],
[-8.67528180e+05, 5.10548548e+04, -1.18028366e+06, ...,
-2.62114048e+17, 1.77851766e+17, 2.23740879e+17]]), of type: <class 'numpy.ndarray'>
</code></pre>
<p>It seems like .bson is struggling with the array structure? This is very odd because I've definitely imported arrays from .bson files before though.</p>
<p>I've been trying to google to see what is going on with this <code>InvalidDocument</code> error but haven't found anything useful yet.</p>
<p>The closest is <a href="https://www.mongodb.com/docs/manual/reference/limits/#:%7E:text=The%20maximum%20BSON%20document%20size,MongoDB%20provides%20the%20GridFS%20API." rel="nofollow noreferrer">this site</a> which says the max BSON doc size is 16 MB, but I really don't think my arrays have this issue / I've definitely imported way larger arrays from .bson before</p>
<p>If anyone can give me a pointer, or even just point me towards a gentle page describing how to write a .bson file (I've realy struggled to find a good reference for easy file I/O with <code>bson</code>), please let me know!</p>
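<p>My current workaround idea (assuming the issue is simply that <code>bson.encode</code> only understands native types such as dicts, lists and floats, not ndarrays): convert each array with <code>.tolist()</code> before encoding. A numpy-only sketch with small arrays:</p>

```python
import numpy as np

my_dict = {'0': np.zeros((2, 3)), '1': np.ones((2, 3))}

# Convert each ndarray to nested lists of plain Python floats,
# which BSON can encode
encodable = {key: arr.tolist() for key, arr in my_dict.items()}

print(type(encodable['0']), type(encodable['0'][0][0]))
# then: open('file_name.bson', 'wb').write(bson.encode(encodable))
```

<p>Note the full arrays are roughly 10 * 400000 * 8 bytes = 32 MB each, so the 16 MB document limit mentioned above may still be relevant when inserting into MongoDB.</p>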
|
<python><arrays><file-io><pymongo><bson>
|
2022-12-13 19:17:56
| 0
| 547
|
seeker_after_truth
|
74,789,640
| 17,696,880
|
How to perform replacement with re.sub() if and only if there is a ; or \n. in the middle of the capture groups?
|
<pre class="lang-py prettyprint-override"><code>import re, datetime
#Ejemplos en donde si se debe hacer uno o mas reemplazos
input_text = "Decian muchas cosas; Seguro eso ocurrira despues de 3 dias"
input_text = "seguro eso ocurrira despues de 3 dias.\n empieza el 2021-11-12"
input_text = "Recien el 2021-10-12; seguro eso ocurrira despues de 3 dias."
#Ejemplos en donde no se deberia hacer ningun reemplazo
input_text = "seguro eso ocurrira despues de 3 dias. empieza el 2021-11-12"
input_text = "Recien el 2021-10-12, seguro eso ocurrira despues 3 dias."
#operation function ( ([12]\d{3}-[01]\d-[0-3]\d)(\D*?), (\d+) , "add" )
def add_or_subtract_days(datestr, days, operation):
today = datetime.date.today()
if operation == "add" : x = (today + datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
elif operation == "subtract" : x = (today - datetime.timedelta(days=int(days))).strftime('%Y-%m-%d')
return x
some_text = r"[\s|]*"
input_text = re.sub(r"(?:([12]\d{3}-[01]\d-[0-3]\d)(\D*?)|)" + some_text + r"(?:(?:pasados|pasado|despues del|despues de el|despues de|despues|tras) (\d+) (?:días|día|dias|dia)|(\d+) (?:días|día|dias|dia) (?:pasados|pasado|despues del|despues de el|despues de|despues|tras))" + some_text + r"(?:([12]\d{3}-[01]\d-[0-3]\d)(\D*?)|)",
lambda m: print(m[3]) ,
#lambda m: add_or_subtract_days( m[1] or m[4] , m[2] or m[3] , "add" ),
input_text)
print("output: " + repr(input_text)) # ---> output
</code></pre>
<p>The replacement should be performed only if there is at least one <code>";"</code> or <code>".\n"</code> <strong>between</strong> <code>"(pasado|despues|tras) n (dias|dia)"</code> and the date <code>"\d*-\d{2}-\d{2}"</code></p>
<p>output that I need (today = 2022-12-13):</p>
<pre><code>"Decian muchas cosas; Seguro eso ocurrira 2022-12-16"
"seguro eso ocurrira 2022-12-16.\n empieza el 2021-11-12"
"Recien el 2021-10-12; seguro eso ocurrira despues de 2022-12-16."
In these last two examples no replacement is made since there is no ; or \n. in the middle of the capture groups
"seguro eso ocurrira despues de 3 dias. empieza el 2021-11-12"
"Recien el 2021-10-12, seguro eso ocurrira despues de 3 dias."
</code></pre>
|
<python><python-3.x><regex><regex-group><regexp-replace>
|
2022-12-13 18:52:14
| 0
| 875
|
Matt095
|
74,789,548
| 1,914,781
|
plotly express plot with filled rect in background
|
<p>How can I move the filled rect below into the background layer?</p>
<pre><code>import numpy as np
import plotly.express as px
def plot(x,y):
fig = px.scatter(
x=x,
y=y,
error_y=[0] * len(y),
error_y_minus=y
)
fig.add_vrect(x0=0, x1=np.pi, line_width=0, fillcolor="pink")
tickvals = [0,np.pi/2,np.pi,np.pi*3/2,2*np.pi]
ticktext = ["0","$\\frac{\pi}{2}$","$\pi$","$\\frac{3\pi}{4}$","$2\pi$"]
layout = dict(
title="demo",
xaxis_title="X",
yaxis_title="Y",
title_x=0.5,
margin=dict(l=10,t=20,r=0,b=40),
height=300,
xaxis=dict(
tickangle=0,
tickvals = tickvals,
ticktext=ticktext,
),
yaxis=dict(
showgrid=True,
zeroline=False,
showline=False,
showticklabels=True
)
)
fig.update_traces(
marker_size=14,
)
fig.update_layout(layout)
fig.show()
return
n = 20
x = np.linspace(0.0, 2*np.pi, n)
y = np.sin(x)
plot(x,y)
</code></pre>
<p>Current output:
<a href="https://i.sstatic.net/WzbCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WzbCW.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2022-12-13 18:41:20
| 1
| 9,011
|
lucky1928
|
74,789,349
| 15,781,591
|
Why does first plot in for loop not get formatted, while the rest are?
|
<p>I have the following code that iterates through a dataframe, grabbing a start timestamp and an end timestamp from two columns of each row to form a "time series range", and then plots that range from another dataframe of time series data. So I iterate through each time series range and create the corresponding plot in a for loop. I am using Python. Here is my code:</p>
<pre><code>from datetime import datetime
import pandas as pd
import numpy as np
from functools import reduce
pd.options.mode.chained_assignment = None # default='warn'
#Ignore warning messages
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure
import matplotlib as mpl
from datetime import datetime, timedelta
import warnings
warnings.filterwarnings("ignore")
for index, row in log_df.iterrows():
print('Range ID: ' + str(row['cycle_ID']))
start_time_string = row['Start_time']
end_time_string = row['End_time']
start_time = datetime.strptime(start_time_string, '%Y-%m-%d %H:%M:%S')
end_time = datetime.strptime(end_time_string, '%Y-%m-%d %H:%M:%S')
print('Start timestamp: ' + str(start_time))
print('End timestamp: ' + str(end_time))
mask = (data_df['timestamp'] > start_time) & (data_df['timestamp'] <= end_time)
df_masked = data_df.loc[mask]
df_masked
plt.rcParams['figure.figsize'] = [20, 10]
df_masked.plot('localminute' , 'values', legend=False)
plt.xlabel("Timestamp", fontsize=14)
plt.ylabel("Watts", fontsize=14)
plt.show()
</code></pre>
<p>This works; however, the first plot produced is not formatted with my intended plot size, while all subsequent plots are. So the first plot is small, while every plot after it is correctly sized. I am confused about why this happens, since I would expect my figure-size setting to be applied to each plot within the loop. How can I set the figure size so that every plot is formatted correctly?</p>
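<p>A hedged sketch of a workaround (the notebook-specific cause of the first small plot is an assumption): set <code>rcParams</code> once before the loop, or pass <code>figsize</code> to <code>DataFrame.plot</code> itself, so every figure gets the size explicitly:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for one masked slice of the real data
df_masked = pd.DataFrame({"localminute": range(5), "values": [1, 3, 2, 5, 4]})

plt.rcParams["figure.figsize"] = [20, 10]  # set once, before any figure exists

sizes = []
for _ in range(2):
    # figsize passed directly applies to every figure, including the first
    ax = df_masked.plot("localminute", "values", legend=False, figsize=(20, 10))
    w, h = ax.figure.get_size_inches()
    sizes.append((float(w), float(h)))
    plt.close(ax.figure)
print(sizes)  # → [(20.0, 10.0), (20.0, 10.0)]
```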
|
<python><pandas><matplotlib><plot>
|
2022-12-13 18:22:07
| 1
| 641
|
LostinSpatialAnalysis
|
74,789,249
| 12,752,172
|
How to extract data from a dynamic table with selenium python?
|
<p>I'm trying to extract data from a website. I need to enter the value in the search box and then find the details. it will generate a table. After generating the table, need to write the details to the text file or insert them into a database. I'm trying the following things.</p>
<p>Website: <a href="https://commtech.byu.edu/noauth/classSchedule/index.php" rel="nofollow noreferrer">https://commtech.byu.edu/noauth/classSchedule/index.php</a>
Search text: "C S 142"</p>
<p><strong>Sample Code</strong></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
c_options = Options()
c_options.add_experimental_option("detach", True)
s = Service('C:/Users/sidat/OneDrive/Desktop/python/WebDriver/chromedriver.exe')
URL = "http://saasta.byu.edu/noauth/classSchedule/index.php"
driver = webdriver.Chrome(service=s, options=c_options)
driver.get(URL)
element = driver.find_element("id", "searchBar")
element.send_keys("C S 142", Keys.RETURN)
search_button = driver.find_element("id", "searchBtn")
search_button.click()
table = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//*[@id='sectionTable']")))
rows = table.find_elements("xpath", "//tr")
for row in rows:
cells = row.find_elements(By.TAG_NAME, "td")
for cell in cells:
print(cell.text)
</code></pre>
<p>I'm using PyCharm 2022.3 to code and test the result. Nothing is printed by my code. Please help me solve this problem so I can extract the data to a text file and to an SQL database table.</p>
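<p>Two notes, hedged as assumptions since I can't run the page itself: <code>table.find_elements("xpath", "//tr")</code> searches the whole document, whereas a relative <code>".//tr"</code> keeps the search inside the table. Once the cells are extracted as plain strings, persisting them is ordinary Python — <code>sqlite3</code> stands in below for whatever SQL database is targeted:</p>

```python
import sqlite3

# Hypothetical scraped rows: one list of cell texts per <tr>
rows = [["C S 142", "001", "MWF 9:00"], ["C S 142", "002", "TTh 10:00"]]

# Text file: one tab-separated line per row
with open("sections.txt", "w") as f:
    for row in rows:
        f.write("\t".join(row) + "\n")

# SQL table (in-memory SQLite for the sketch)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sections (course TEXT, section TEXT, schedule TEXT)")
con.executemany("INSERT INTO sections VALUES (?, ?, ?)", rows)
print(con.execute("SELECT COUNT(*) FROM sections").fetchone()[0])  # → 2
```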
|
<python><selenium><selenium-webdriver><beautifulsoup><webdriverwait>
|
2022-12-13 18:12:47
| 2
| 469
|
Sidath
|
74,789,171
| 11,734,835
|
what is use_develop in tox and development mode
|
<p>I was trying to understand the purpose of <code>use_develop</code> and from the docs, I found this:</p>
<blockquote>
<p>Install the current package in development mode with develop mode. For pip this uses -e option, so should be avoided if you’ve specified a custom install_command that does not support -e.</p>
</blockquote>
<p>I don't understand what "development mode" means. Is this a Python concept, or is it specific to tox? Either way, what does it mean?</p>
|
<python><tox>
|
2022-12-13 18:04:29
| 1
| 356
|
james pow
|
74,789,116
| 19,838,568
|
ast.literal_eval not working as part of list comprehension (when reading a file)
|
<p>I am trying to parse a file which has pairs of lines, each of them representing a list of integers or other lists. Example data from the file:</p>
<pre><code>[[[6,10],[4,3,[4]]]]
[[4,3,[[4,9,9,7]]]]
[[6,[[3,10],[],[],2,10],[[6,8,4,2]]],[]]
[[6,[],[2,[6,2],5]]]
</code></pre>
<p>I am trying to read the file into a list of tuples of data-structures (nested lists) with the following statement:</p>
<pre><code>with open("filename","r") as fp:
pairs = [tuple(ast.literal_eval(l.strip()) for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
</code></pre>
<p>This failed with below stacktrace, leading me to believe that the data was somewhere corrupt (unmatched brackets or something similar):</p>
<pre><code>Traceback (most recent call last):
File "program.py", line 5, in <module>
pairs = [tuple(ast.literal_eval(l.strip()) for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
File "program.py", line 5, in <listcomp>
pairs = [tuple(ast.literal_eval(l.strip()) for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
File "program.py", line 5, in <genexpr>
pairs = [tuple(ast.literal_eval(l.strip()) for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
File "C:\Python39\lib\ast.py", line 62, in literal_eval
node_or_string = parse(node_or_string, mode='eval')
File "C:\Python39\lib\ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 0
SyntaxError: unexpected EOF while parsing
</code></pre>
<p>So I cut down the program into manual loops and the problem was not reproducible any more. So the below code, which first reads into a list of tuples of strings and then evaluating the strings with <code>ast.literal_eval</code> works fine. The above "doing-it-all-at-once" still fails with the same error.</p>
<pre><code># This works:
with open("filename","r") as fp:
stringpairs = [tuple(l.strip() for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
pairs = [tuple(ast.literal_eval(pair[i]) for i in range(2)) for pair in stringpairs]
# This still doesn't work:
with open("filename","r") as fp:
pairs = [tuple(ast.literal_eval(l.strip()) for l in lines.split("\n")) for lines in fp.read().split("\n\n")]
</code></pre>
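<p>A likely culprit (an assumption, since the exact file isn't shown): a trailing newline at the end of the file leaves an empty string in the last chunk, and <code>ast.literal_eval("")</code> raises exactly this "unexpected EOF" error — which the two-step version masks, because <code>range(2)</code> never touches the empty third element. Filtering blank lines (or stripping the whole text first) reproduces and fixes it:</p>

```python
import ast

# Hypothetical file contents, including the trailing "\n" a text editor adds
text = "[[1,2]]\n[[3]]\n\n[[4]]\n[[5]]\n"

# Guarding against blank lines avoids feeding "" to literal_eval
pairs = [tuple(ast.literal_eval(l) for l in chunk.split("\n") if l.strip())
         for chunk in text.strip().split("\n\n")]
print(pairs)  # → [(([1, 2] inside lists)…)] i.e. [([[1, 2]], [[3]]), ([[4]], [[5]])]
```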
|
<python><abstract-syntax-tree>
|
2022-12-13 17:59:19
| 3
| 2,406
|
treuss
|
74,788,974
| 12,574,341
|
Python annotate type as regex pattern
|
<p>I have a dictionary annotation</p>
<pre class="lang-py prettyprint-override"><code>class OrderDict(TypedDict):
name: str
price: float
time: str
</code></pre>
<p>The value of <code>time:</code> will always be formatted like <code>2022-01-01 00:00:00</code>, or <code>"%Y-%m-%d %H:%M:%S"</code>. I'd like a way to express this in the type annotation</p>
<p>Something like</p>
<pre class="lang-py prettyprint-override"><code>class OrderDict(TypedDict):
name: str
price: float
time: Pattern["%Y-%m-%d %H:%M:%S"]
</code></pre>
<p>WIth the goal of IDE hinting through VSCode Intellisense and Pylance.</p>
<p>Are regex-defined type annotations supported?</p>
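<p>Regex-constrained string types aren't part of standard typing, but <code>typing.Annotated</code> (Python 3.9+) can carry the format as metadata — checkers and IDEs still treat the field as <code>str</code>, while the intent stays visible. A sketch (the <code>Timestamp</code> alias is my own naming, not a stdlib type):</p>

```python
from typing import Annotated, TypedDict

# Annotated[str, ...] is still just str to the type checker; the second
# argument is free-form metadata documenting the expected format.
Timestamp = Annotated[str, "%Y-%m-%d %H:%M:%S"]

class OrderDict(TypedDict):
    name: str
    price: float
    time: Timestamp

order: OrderDict = {"name": "a", "price": 1.0, "time": "2022-01-01 00:00:00"}
print(order["time"])  # → 2022-01-01 00:00:00
```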
|
<python><mypy><python-typing><pyright>
|
2022-12-13 17:47:59
| 2
| 1,459
|
Michael Moreno
|
74,788,890
| 4,602,248
|
Programmatically check when google sheets spreadsheet was last updated
|
<p>I am starting to pull data from multiple shared spreadsheets on google sheets and this works fine. Currently I have to read in the whole spreadsheet and process the data to see if any updates have been made to it. How can I programmatically (in Python 3.9) get a timestamp of the last update to a google sheet?</p>
|
<python><io><version-control><timestamp><google-sheets-api>
|
2022-12-13 17:41:29
| 1
| 2,529
|
Alex
|
74,788,877
| 17,889,840
|
how to create hdf5 file from numpy dataset files
|
<p>I have 1970 <code>.npy</code> files as features for the MSVD dataset. I want to create one <code>.hdf5</code> file from these numpy files.</p>
<pre><code>import os
import numpy as np
import h5py
TRAIN_FEATURE_DIR = "MSVD"
for filename in os.listdir(TRAIN_FEATURE_DIR):
f = np.load(os.path.join(TRAIN_FEATURE_DIR, filename))
...
</code></pre>
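<p>A sketch assuming the <code>h5py</code> package (the import name is <code>h5py</code>, not <code>hdf5</code>) and a hypothetical feature shape: write each array as one dataset keyed by its filename:</p>

```python
import os
import numpy as np
import h5py

TRAIN_FEATURE_DIR = "MSVD"
os.makedirs(TRAIN_FEATURE_DIR, exist_ok=True)
# Hypothetical feature file standing in for one of the 1970 real ones
np.save(os.path.join(TRAIN_FEATURE_DIR, "video0.npy"), np.zeros((4, 2048)))

with h5py.File("features.hdf5", "w") as hf:
    for filename in os.listdir(TRAIN_FEATURE_DIR):
        if filename.endswith(".npy"):
            arr = np.load(os.path.join(TRAIN_FEATURE_DIR, filename))
            # one dataset per video, named after the file (without extension)
            hf.create_dataset(os.path.splitext(filename)[0], data=arr)

with h5py.File("features.hdf5", "r") as hf:
    print(list(hf.keys()), hf["video0"].shape)  # → ['video0'] (4, 2048)
```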
|
<python><numpy><hdf5>
|
2022-12-13 17:40:18
| 1
| 472
|
A_B_Y
|
74,788,743
| 4,329,348
|
Python logging performance when level is higher
|
<p>One advantage of using logging in Python instead of print is that you can set the level of the logging. When debugging you could set the level to DEBUG and everything below will get printed. If you set the level to ERROR then only error messages will get printed.</p>
<p>In a high-performance application this property is desirable. You want to be able to print some logging information during development/testing/debugging but not when you run it in production.</p>
<p>I want to ask if logging will be an efficient way to suppress debug and info logging when you set the level to ERROR. In other words, would doing the following:</p>
<pre><code>logging.basicConfig(level=logging.ERROR)
logging.debug('something')
</code></pre>
<p>will be as efficient as</p>
<pre><code>if not in_debug:
print('...')
</code></pre>
<p>Obviously the second snippet costs little, because checking a boolean is fast, and when not in debug mode the code will be faster because it will not print unnecessary output. That comes at the cost of having all those if statements, though. If logging delivers the same performance without the if statements, it is of course much more desirable.</p>
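<p>Roughly, yes — a suppressed <code>logging.debug(...)</code> still pays a method call and a per-call level check, so it is slower than a raw boolean test, but it skips the expensive parts (string formatting, handler I/O) as long as the arguments are passed lazily. A sketch:</p>

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("demo")

# Lazy %-style arguments: the format string is only rendered if a handler
# actually emits the record, so this dropped call does almost no work.
log.debug("value=%s", 42)

# isEnabledFor is the explicit equivalent of "if not in_debug:", and lets you
# skip even the cost of computing the arguments:
print(log.isEnabledFor(logging.DEBUG))  # → False
```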
|
<python><logging><python-logging>
|
2022-12-13 17:29:10
| 1
| 1,219
|
Phrixus
|
74,788,740
| 3,161,120
|
How to use multiprocessing.Value in a dataclass?
|
<p>Would anyone of you know if it's possible to use <code>multiprocessing.Value</code> field in <code>dataclass</code>?</p>
<p>For the following dataclass definition, I am getting <code>TypeError: this type has no size</code> exception.</p>
<pre><code>import multiprocessing
from dataclasses import dataclass
@dataclass
class TestResults:
count: multiprocessing.sharedctypes.Synchronized = multiprocessing.Value(int, 0)
</code></pre>
<p>Stack trace:</p>
<pre><code>$ python example.py
Traceback (most recent call last):
File "/tmp/example.py", line 5, in <module>
class TestResults:
File "/tmp/example.py", line 6, in TestResults
count: multiprocessing.sharedctypes.Synchronized = multiprocessing.Value(int, 0)
File "/usr/lib/python3.10/multiprocessing/context.py", line 135, in Value
return Value(typecode_or_type, *args, lock=lock,
File "/usr/lib/python3.10/multiprocessing/sharedctypes.py", line 74, in Value
obj = RawValue(typecode_or_type, *args)
File "/usr/lib/python3.10/multiprocessing/sharedctypes.py", line 49, in RawValue
obj = _new_value(type_)
File "/usr/lib/python3.10/multiprocessing/sharedctypes.py", line 40, in _new_value
size = ctypes.sizeof(type_)
TypeError: this type has no size
</code></pre>
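<p>The traceback comes from passing the Python type <code>int</code>: <code>multiprocessing.Value</code> expects a <code>ctypes</code> type or a typecode string such as <code>"i"</code>. In a dataclass you would also usually want <code>default_factory</code>, so each instance gets its own shared value instead of all instances sharing one created at class-definition time. A sketch:</p>

```python
import multiprocessing
from dataclasses import dataclass, field
from multiprocessing.sharedctypes import Synchronized

@dataclass
class TestResults:
    # "i" is the typecode for a C int; default_factory creates a fresh
    # shared Value per instance rather than one shared by the whole class.
    count: Synchronized = field(default_factory=lambda: multiprocessing.Value("i", 0))

r = TestResults()
with r.count.get_lock():
    r.count.value += 1
print(r.count.value)  # → 1
```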
|
<python><python-3.x><multiprocessing><python-dataclasses>
|
2022-12-13 17:28:57
| 1
| 1,830
|
gbajson
|
74,788,656
| 8,869,570
|
What is the "+01:00" in "2006-04-03 08:00:00+01:00" datetime timestamp object?
|
<p>I printed out a <code><class 'pandas._libs.tslibs.timestamps.Timestamp'></code> <code>time</code> variable, and it displays as</p>
<pre><code>2006-04-03 08:00:00+01:00
</code></pre>
<p>I am wondering what the <code>+01:00</code> at the end is. Is this related to daylight saving time?</p>
<p>When I did</p>
<pre><code>print(time.timestamp())
</code></pre>
<p>I am seeing</p>
<pre><code>1144047600.0
</code></pre>
<p>which is off by 3600 (an hour).</p>
|
<python><datetime><timestamp><timezone>
|
2022-12-13 17:22:08
| 2
| 2,328
|
24n8
|
74,788,564
| 7,576,002
|
How do you connect use pyodbc to connect to MS SQL server when password has escape characters
|
<p>I'm writing a connection string in Python that connects to a MS SQL Server database. I am using pyodbc. Previously, I had used SqlAlchemy and that worked fine. However, I'd like to NOT use SQLAlchemy because I'd like to keep things simple.</p>
<p>The problem is, when I try to create a connect string and the password has "\" (escape character) in it (mine has two), something happens when the password string is passed from pyodbc to the underlying SQL Server ODBC driver and my login fails.</p>
<p>I've tried all sorts of url encoding and I still can't get it to work.</p>
<p>I'm sure this problem has been solved before, but I can't seem to find any solution that's pertinent to my use-case.</p>
<p>Any help would be appreciated!</p>
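<p>Untested against a live server, but two things usually bite here: the backslashes must survive the Python string literal (raw string or doubled), and values containing special characters should be wrapped in braces inside the ODBC connection string. A sketch of building such a string (the driver name and credentials are placeholders):</p>

```python
# Raw literal keeps "\" characters intact in the Python source
password = r"my\pa\ss"

# Brace-wrapping a value tells the ODBC driver to take it verbatim
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;"
    "UID=myuser;PWD={" + password + "};"
)
print(conn_str)
# pyodbc.connect(conn_str) would then receive the password unmangled
```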
|
<python><sql-server><pyodbc>
|
2022-12-13 17:14:06
| 0
| 1,129
|
KSS
|
74,788,554
| 9,786,534
|
How can I change the dimensions of a xarray variable?
|
<p>I have a gridded xarray dataset that looks like this:</p>
<pre><code>print(ds)
<xarray.Dataset>
Dimensions: (month: 12, isobaricInhPa: 37, latitude: 721, longitude: 1440)
Coordinates:
* isobaricInhPa (isobaricInhPa) float64 1e+03 975.0 950.0 ... 3.0 2.0 1.0
* latitude (latitude) float64 90.0 89.75 89.5 ... -89.5 -89.75 -90.0
* longitude (longitude) float64 0.0 0.25 0.5 0.75 ... 359.2 359.5 359.8
* month (month) int64 1 2 3 4 5 6 7 8 9 10 11 12
Data variables:
h (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 405, 720), meta=np.ndarray>
speed (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 405, 720), meta=np.ndarray>
direction (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 405, 720), meta=np.ndarray>
</code></pre>
<p>From this dataset, I extracted data for a series of points using:</p>
<pre class="lang-py prettyprint-override"><code>ds_volc = ds.sel(latitude=volc['Latitude'].values,longitude=volc['Longitude'].values, method='nearest', drop=True)
</code></pre>
<p>where <code>volc</code> is a Pandas DataFrame and <code>volc['Latitude'].values</code> and <code>volc['Longitude'].values</code> are vectors of lat/lon pairs for my 1431 points of interests.</p>
<pre><code>print(ds_volc)
<xarray.Dataset>
Dimensions: (month: 12, isobaricInhPa: 37, latitude: 1431, longitude: 1431)
Coordinates:
* isobaricInhPa (isobaricInhPa) float64 1e+03 975.0 950.0 ... 3.0 2.0 1.0
* latitude (latitude) float64 50.25 45.75 42.25 ... -56.0 -64.25 -62.0
* longitude (longitude) float64 6.75 3.0 2.5 0.0 ... 0.0 0.0 0.0 0.0
* month (month) int64 1 2 3 4 5 6 7 8 9 10 11 12
Data variables:
h (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
speed (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
direction (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
</code></pre>
<p>Rather than using pairs of lat/lon coordinates, I need to add the <em>ID</em> of my points (called <code>Volcano Number</code>) as a new coordinate:</p>
<pre class="lang-py prettyprint-override"><code>ds_volc = ds_volc.assign_coords({'Volcano Number':volc['Volcano Number'].values})
</code></pre>
<p>which results in:</p>
<pre><code><xarray.Dataset>
Dimensions: (month: 12, isobaricInhPa: 37, latitude: 1431, longitude: 1431, Volcano Number: 1431)
Coordinates:
* isobaricInhPa (isobaricInhPa) float64 1e+03 975.0 950.0 ... 3.0 2.0 1.0
* latitude (latitude) float64 50.25 45.75 42.25 ... -56.0 -64.25 -62.0
* longitude (longitude) float64 6.75 3.0 2.5 0.0 ... 0.0 0.0 0.0 0.0
* month (month) int64 1 2 3 4 5 6 7 8 9 10 11 12
* Volcano Number (Volcano Number) int64 210010 210020 ... 390829 390847
Data variables:
h (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
speed (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
direction (month, isobaricInhPa, latitude, longitude) float32 dask.array<chunksize=(1, 19, 190, 1431), meta=np.ndarray>
</code></pre>
<h2>The question</h2>
<p>Since I added the <code>Volcano Number</code> coordinate, the <code>latitude</code> and <code>longitude</code> coordinates (and dimensions) become obsolete and I need to reorganise the dimensions of the variables. I thought I could simply use <code>ds_volc.drop_dims(['latitude', 'longitude'])</code>, but that drops the associated variables. I have also tried <code>ds_volc.sel({'Volcano Number': 210010}, drop=True)</code>, but that returns a xarray.Dataset with 1431 longitude and 1431 latitude points.</p>
<p>Therefore, is there a way to somehow reshape the variables so their dimensions become <code>(month: 12, isobaricInhPa: 37, Volcano Number: 1431)</code> instead of <code>(month, isobaricInhPa, latitude, longitude)</code>, so as to return a xarray.Dataset of dimension <code>(month: 12, isobaricInhPa: 37)</code> when I query with <code>ds_volc.sel({'Volcano Number': 210010}, drop=True)</code>?</p>
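<p>One approach, sketched on a toy dataset (so the shapes below are assumptions): pass <code>DataArray</code> indexers that share a new <code>Volcano Number</code> dimension. xarray then performs pointwise (vectorized) indexing — picking lat/lon <em>pairs</em> instead of the full lat × lon grid — and the variables come out with dims <code>(month, isobaricInhPa, Volcano Number)</code> directly:</p>

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy stand-ins for ds and volc
ds = xr.Dataset(
    {"h": (("month", "latitude", "longitude"), np.arange(24.0).reshape(2, 3, 4))},
    coords={"month": [1, 2],
            "latitude": [0.0, 10.0, 20.0],
            "longitude": [0.0, 5.0, 10.0, 15.0]},
)
volc = pd.DataFrame({"Volcano Number": [210010, 210020],
                     "Latitude": [0.0, 20.0],
                     "Longitude": [5.0, 15.0]})

# DataArray indexers sharing one dim trigger pointwise (vectorized) selection
ds_volc = ds.sel(
    latitude=xr.DataArray(volc["Latitude"].values, dims="Volcano Number"),
    longitude=xr.DataArray(volc["Longitude"].values, dims="Volcano Number"),
    method="nearest",
)
ds_volc = ds_volc.assign_coords({"Volcano Number": volc["Volcano Number"].values})

print(ds_volc.h.dims)                                  # → ('month', 'Volcano Number')
print(ds_volc.sel({"Volcano Number": 210010}).h.dims)  # → ('month',)
```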
|
<python><python-3.x><python-xarray>
|
2022-12-13 17:13:40
| 1
| 324
|
e5k
|
74,788,423
| 12,337,118
|
Can't pass header in Python client generated by OpenAPI Generator
|
<p>With the help of <a href="https://openapi-generator.tech/" rel="nofollow noreferrer">OpenAPI Generator</a> I generated a Python client for the Amadeus <a href="https://github.com/tsolakoua/amadeus-open-api-example/blob/master/TravelRestrictions_v2_swagger_specification.json" rel="nofollow noreferrer">Travel Restrictions API spec</a>.</p>
<p>According to the <a href="https://github.com/tsolakoua/amadeus-open-api-example/blob/master/covid19/README.md" rel="nofollow noreferrer">README</a> generated, I understand that I can pass headers with the <code>header_params</code> parameter but I receive the following error:</p>
<pre class="lang-bash prettyprint-override"><code>TypeError: g_et_covid_report() got an unexpected keyword argument 'header_params'
</code></pre>
<p>Below you can check my code:</p>
<pre class="lang-py prettyprint-override"><code>import openapi_client
from openapi_client.apis.tags import covid19_area_report_api
from openapi_client.model.disease_area_report import DiseaseAreaReport
from openapi_client.model.error import Error
from openapi_client.model.meta import Meta
from openapi_client.model.warning import Warning
from pprint import pprint
configuration = openapi_client.Configuration(
host = "https://test.api.amadeus.com/v2"
)
with openapi_client.ApiClient(configuration) as api_client:
api_instance = covid19_area_report_api.Covid19AreaReportApi(api_client)
query_params = {
'countryCode': "US",
}
header_params = {
'Authorization': "Bearer MY_ACCESS_TOKEN",
}
try:
api_response = api_instance.g_et_covid_report(
query_params=query_params,
header_params=header_params
)
pprint(api_response)
except openapi_client.ApiException as e:
print("Exception when calling Covid19AreaReportApi->g_et_covid_report: %s\n" % e)
</code></pre>
<p>I've been checking the library and also OpenAPI Generator project on GitHub but I still can't find a way to pass header parameters which I need in order to authorise my API call.</p>
|
<python><openapi><openapi-generator><amadeus>
|
2022-12-13 17:03:00
| 0
| 731
|
anna_ts
|
74,788,390
| 16,267,101
|
how to get a column value of foreign key in the form of object?
|
<p>There are two models, <code>Registration</code> and <code>RegistrationCompletedByUser</code>. I want a <code>Registration</code> queryset obtained via <code>RegistrationCompletedByUser</code>, with <code>filters(user=request.user, registration__in=some_value, is_completed=True)</code> applied to <code>RegistrationCompletedByUser</code>. Hence the result should look like <code><QuerySet [<Registration: No name>, <Registration: p2>, <Registration: p-1>]></code>.</p>
<p>What I tried is
<code>Registration.objects.prefetch_related('registrationcompletedbyuser_set')</code>, but <code>filters()</code> does not work there. I also tried custom model managers, but they don't accept parameters for custom filtering.</p>
<h3>models.py</h3>
<pre><code>class Registration(models.Model):
name=models.CharField(max_length=255)
number=models.SmallIntegerField(null=True, blank=True)
class RegistrationCompletedByUser(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
registration= models.ForeignKey(Registration, on_delete=models.CASCADE)
points = models.SmallIntegerField(default=100)
is_completed = models.BooleanField(default=False)
</code></pre>
|
<python><django><django-models>
|
2022-12-13 16:59:57
| 2
| 335
|
shraysalvi
|
74,788,356
| 13,279,198
|
Quicksort with median-of-three not sorting properly?
|
<p>Please explain what I am doing wrong in my Quicksort code, because the output array is not correctly sorted. It uses median-of-three partitioning to select the pivot.</p>
<p>Here is the code:</p>
<pre><code>def medianof3(arr, low, high):
center = (low + high) // 2
if arr[low] < arr[center]:
arr[low], arr[center] = arr[center], arr[low]
if arr[low] < arr[high]:
arr[low], arr[high] = arr[high], arr[low]
if arr[center] < arr[high]:
arr[center], arr[high] = arr[high], arr[center]
arr[center], arr[high - 1] = arr[high - 1], arr[center]
return arr[high - 1]
def partition(array, low, high):
# choose the rightmost element as pivot
pivot = medianof3(array, low, high)
# pointer for greater element
i = low - 1
# traverse through all elements
# compare each element with pivot
for j in range(low, high):
if array[j] <= pivot:
# If element smaller than pivot is found
# swap it with the greater element pointed by i
i = i + 1
# Swapping element at i with element at j
(array[i], array[j]) = (array[j], array[i])
# Swap the pivot element with the greater element specified by i
(array[i + 1], array[high - 1]) = (array[high - 1], array[i + 1])
# Return the position from where partition is done
return i + 1
# function to perform quicksort
def quickSort(array, low, high):
if low < high:
# Find pivot element such that
# element smaller than pivot are on the left
# element greater than pivot are on the right
pi = partition(array, low, high)
# Recursive call on the left of pivot
quickSort(array, low, pi - 1)
# Recursive call on the right of pivot
quickSort(array, pi + 1, high)
data = [1, 7, 4, 1, 10, 9, -2]
print("Unsorted Array")
print(data)
size = len(data)
quickSort(data, 0, size - 1)
print('Sorted Array in Ascending Order:')
print(data)
</code></pre>
<p>Output:</p>
<p>Unsorted Array:
[1, 7, 4, 1, 10, 9, -2]</p>
<p>Sorted Array in Ascending Order:
[1, 1, 7, 4, 9, 10, -2]</p>
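<p>The trouble seems to be the interaction between <code>medianof3</code> and <code>partition</code>: the comparisons order the three samples <em>descending</em> (leaving the smallest element at <code>arr[high]</code>, outside the partition loop), yet <code>partition</code> still scans up to <code>high - 1</code>, disturbing the stashed pivot. A hedged corrected sketch (ascending median-of-three, partitioning only the interior range so the pivot stays put):</p>

```python
def median_of_three(arr, low, high):
    # Order arr[low] <= arr[mid] <= arr[high], then stash the median at high-1
    mid = (low + high) // 2
    if arr[mid] < arr[low]:
        arr[low], arr[mid] = arr[mid], arr[low]
    if arr[high] < arr[low]:
        arr[low], arr[high] = arr[high], arr[low]
    if arr[high] < arr[mid]:
        arr[mid], arr[high] = arr[high], arr[mid]
    arr[mid], arr[high - 1] = arr[high - 1], arr[mid]
    return arr[high - 1]

def quicksort(arr, low, high):
    if high - low < 2:                       # 0, 1 or 2 elements: sort directly
        if low < high and arr[high] < arr[low]:
            arr[low], arr[high] = arr[high], arr[low]
        return
    pivot = median_of_three(arr, low, high)  # arr[low] <= pivot <= arr[high]
    i = low
    for j in range(low + 1, high - 1):       # interior only; pivot sits at high-1
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high - 1] = arr[high - 1], arr[i + 1]  # pivot into place
    quicksort(arr, low, i)
    quicksort(arr, i + 2, high)

data = [1, 7, 4, 1, 10, 9, -2]
quicksort(data, 0, len(data) - 1)
print(data)  # → [-2, 1, 1, 4, 7, 9, 10]
```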
|
<python><sorting><data-structures><quicksort>
|
2022-12-13 16:56:57
| 1
| 359
|
Ah_bb
|
74,788,241
| 3,030,875
|
How to quickly instantiate a pyspark SparkContext for unit testing?
|
<p>In my Python 3.8 unit test code, I need to instantiate a SparkContext to test some functions manipulating a <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.html#pyspark.RDD" rel="nofollow noreferrer">RDD</a>.</p>
<p>The problem is that instantiating a SparkContext takes a few seconds, which is too slow. I'm using this code to instantiate a SparkContext:</p>
<pre><code>from pyspark import SparkContext
return SparkContext.getOrCreate()
</code></pre>
<p>The tests only run <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.filter.html#pyspark.RDD.filter" rel="nofollow noreferrer">filter</a>, <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.map.html#pyspark.RDD.map" rel="nofollow noreferrer">map</a> and <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.distinct.html#pyspark.RDD.distinct" rel="nofollow noreferrer">distinct</a> on an RDD. In my tests, I only want to test an expected output against obtained output on a tiny dataset (lists of length between 5 and 10) and I want them to run almost instantaneously. I don't need all other features offered by PySpark (such as parallelization).</p>
<p>I tried various ways, such as below, but instantiation takes as much time:</p>
<pre><code>from pyspark import SparkContext, SparkConf
conf = SparkConf().setMaster("local[2]").setAppName("pytest-pyspark-local-testing")
sc = SparkContext(conf=conf)
</code></pre>
|
<python><pyspark><rdd><python-unittest>
|
2022-12-13 16:47:44
| 0
| 1,778
|
Brainless
|
74,788,118
| 9,720,696
|
How to upload the a folder into Azure blob storage while preserving the structure in Python?
|
<p>I've written the following code to upload a file into blob storage using Python:</p>
<pre><code>blob_service_client = ContainerClient(account_url="https://{}.blob.core.windows.net".format(ACCOUNT_NAME),
credential=ACCOUNT_KEY,
container_name=CONTAINER_NAME)
blob_service_client.upload_blob("my_file.txt", open("my_file.txt", "rb"))
</code></pre>
<p>This works fine. I wonder how I can upload an entire folder, with all files and subfolders in it, while keeping the structure of my local folder intact?</p>
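<p>A sketch under one assumption: a blob's "folder structure" is just its name, so walking the local tree and using each file's relative path (with forward slashes) as the blob name preserves the layout. The directory walk below is self-contained; the upload call (commented out) is the same <code>upload_blob</code> already in use:</p>

```python
import os

def iter_blobs(local_dir):
    """Yield (local_path, blob_name) pairs mirroring the folder structure."""
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            # Blob names use "/" regardless of the local OS separator
            yield path, os.path.relpath(path, local_dir).replace(os.sep, "/")

# Hypothetical local tree
os.makedirs("data/sub", exist_ok=True)
for p in ("data/a.txt", "data/sub/b.txt"):
    open(p, "w").write("x")

names = sorted(blob_name for _, blob_name in iter_blobs("data"))
print(names)  # → ['a.txt', 'sub/b.txt']

# for path, blob_name in iter_blobs("data"):
#     with open(path, "rb") as fh:
#         blob_service_client.upload_blob(name=blob_name, data=fh)
```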
|
<python><azure><azure-blob-storage><azure-python-sdk>
|
2022-12-13 16:38:13
| 1
| 1,098
|
Wiliam
|
74,788,097
| 3,067,276
|
Why does TypeVar and TypedDict require repeating the variable name as a string?
|
<p>In Python typing, why do I have to write <code>T = TypeVar("T")</code> instead of just <code>T = TypeVar()</code>? Any static analyzer is able to read the variable name without requiring the string parameter. The string parameter only matters for getting the name of the type variable in runtime. As far as I know, this is only used in the type variable <code>repr</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> T = TypeVar("T")
>>> S = TypeVar("U")
>>> T
~T
>>> S
~U
</code></pre>
<p>There are two (related) things I'd like to understand:</p>
<ol>
<li><p>In what context would I need this <code>repr</code>? Specifically, because of type erasure, the <code>repr</code> will never tell me anything about the actual <em>value</em> of the type variable, only its <em>name</em>. I've never seen this name used for anything in runtime, even for debugging or introspection purposes. In what kind of situations is it useful?</p>
</li>
<li><p>Why is the runtime name mandatory (not only by convention, but as a type-checking rule)? If I don't care about runtime behaviour of the type variable, why am I required to even <em>specify</em> a runtime name to it, and why would this name be tied to the name of the runtime variable the <code>TypeVar</code> is assigned to?</p>
</li>
</ol>
|
<python><python-typing>
|
2022-12-13 16:36:54
| 1
| 3,439
|
fonini
|
74,788,083
| 11,170,350
|
compare two list of unequal length and fill third list by unmatched index
|
<p>I have two lists of unequal size.</p>
<pre><code>large_list=['A','B','C','D','E','F','G','H','I']
small_list=['A','D','E']
</code></pre>
<p>I have another list.</p>
<pre><code>tag_list=['1','3','5']
</code></pre>
<p>I want to compare <code>large_list</code> against <code>small_list</code>. Where an element of <code>large_list</code> matches one in <code>small_list</code>, take the element at that match's index in <code>tag_list</code>; otherwise put <code>'0'</code> at that position.</p>
<p>I tried this code</p>
<pre><code>new_tags=[]
for lrg in large_list:
for sm,tag in zip(small_list,tag_list):
if sm==lrg:
new_tags.append(tag)
else:
new_tags.append('0')
new_tags
</code></pre>
<p>But the output produced has a greater length because of the nested for loop; the length I want is that of <code>large_list</code>.</p>
<p>This is expected output.</p>
<pre><code>output=['1', '0', '0', '3', '5', '0', '0', '0', '0']
</code></pre>
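<p>The nested loop appends once per (large, small) pair; building a lookup from <code>small_list</code> to <code>tag_list</code> first makes the result exactly one element per item of <code>large_list</code>. A sketch:</p>

```python
large_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
small_list = ['A', 'D', 'E']
tag_list = ['1', '3', '5']

# One tag (or '0') per element of large_list
tags = dict(zip(small_list, tag_list))
new_tags = [tags.get(item, '0') for item in large_list]
print(new_tags)  # → ['1', '0', '0', '3', '5', '0', '0', '0', '0']
```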
|
<python>
|
2022-12-13 16:35:45
| 5
| 2,979
|
Talha Anwar
|
74,788,070
| 5,110,870
|
VS Code: why is Python dataclass not inheriting the parent's attributes?
|
<p>I am new to OOP, often a big pain as things aren't explained clearly and have weird behaviours.</p>
<p>One frustrating example in Python 3.9.13:</p>
<pre><code>@dataclass
class Person:
name: str
city: str
age: int
@dataclass
class Student(Person):
grade: int
subjects: list
</code></pre>
<p>I typed:</p>
<pre><code>s = Student()
</code></pre>
<p>getting, as expected:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [9], line 1
----> 1 s = Student()
TypeError: __init__() missing 2 required positional arguments: 'grade' and 'subjects'
</code></pre>
<p>What's unexpected: why does it not say that <code>name</code>, <code>city</code>, and <code>age</code> are also required?</p>
<p><a href="https://i.sstatic.net/hpVFo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hpVFo.png" alt="An example where I try to assign the values to all attributes." /></a></p>
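<p>At runtime the subclass <em>does</em> inherit the parent's fields — its generated <code>__init__</code> takes all five, parent fields first — so an error message listing only two missing arguments suggests a stale interpreter or editor state rather than broken inheritance. A sketch demonstrating the actual behaviour:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    city: str
    age: int

@dataclass
class Student(Person):
    grade: int
    subjects: list

# The generated __init__ includes the inherited fields, in declaration order
print([f.name for f in fields(Student)])  # → ['name', 'city', 'age', 'grade', 'subjects']

s = Student("Ann", "Oslo", 20, 1, ["math"])  # hypothetical values
print(s.name, s.grade)  # → Ann 1
```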
|
<python><oop><inheritance><python-dataclasses>
|
2022-12-13 16:34:48
| 0
| 7,979
|
FaCoffee
|
74,787,906
| 2,970,705
|
Elasticsearch: mapper_parsing_exception: Get detailed error message
|
<p>I am inserting a Pandas dataframe into Elasticsearch, using the <code>elasticsearch</code> Python package.</p>
<p>I have created an index specifying an explicit mapping:</p>
<pre><code>mappings = {
'properties': {
'PLZ': {'type': 'integer'},
'ORT': {'type': 'text'},
'STRASSE': {'type': 'text'},
...
}
es.indices.create(index="test", mappings = mappings)
</code></pre>
<p>When I try to insert a specific document into this index, I receive a <em>mapper_parsing_exception</em>:</p>
<pre><code>res = es.index(index="test", id=1, document=df.iloc[0], error_trace = True)
</code></pre>
<p><em>BadRequestError: BadRequestError(400, 'mapper_parsing_exception', 'failed to parse')</em></p>
<p>Is there any way to receive a more detailed error message describing which field exactly could not be parsed?</p>
|
<python><pandas><elasticsearch>
|
2022-12-13 16:22:05
| 0
| 6,044
|
JavAlex
|
74,787,831
| 6,156,353
|
Easiest way to fill in jinja template
|
<p>I have jinja templates (Python files) with several variables like this <code>{{ some_variable }}</code>. Then I have <code>yml</code> files with the defined variable values.</p>
<p>python/jinja template:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
some_variable = '{{ some_variable }}'
</code></pre>
<p>yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>some_variable: 'some value'
</code></pre>
<p>desired output:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
some_variable = 'some value'
</code></pre>
<p>What is the fastest/easiest way to fill the templates with the <code>yml</code> variable values?</p>
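<p>Assuming the <code>PyYAML</code> and <code>Jinja2</code> packages are available, the shortest path seems to be: load the yml into a dict, then feed it to <code>Template.render</code>:</p>

```python
import yaml                 # PyYAML
from jinja2 import Template

template_src = "import datetime\nsome_variable = '{{ some_variable }}'\n"
variables = yaml.safe_load("some_variable: 'some value'")  # → {'some_variable': 'some value'}

rendered = Template(template_src).render(**variables)
print(rendered)
# import datetime
# some_variable = 'some value'
```

<p>In practice the template and yml would be read from files with <code>open(...).read()</code> instead of the inline strings above.</p>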
|
<python><jinja2>
|
2022-12-13 16:15:55
| 1
| 1,371
|
romanzdk
|
74,787,746
| 7,949,129
|
How to update only a part of a json which is stored in a database with a "json path" list?
|
<p>Let's say we have a database in which a json containing configurations is stored as a string.
In the application we want to update only a specific value and then write the json back to the database as a string.</p>
<p>The HTTP request provides a "jsonPath" which reflects a path in the json file to navigate to the object we want to update. This parameter is supplied as a list.</p>
<p>An example for the path could be something like <em>['input', 'keyboard', 'language']</em>, and the value we want to set is <em>'en'</em>.</p>
<p>Here we load the configuration from the database which works so far:</p>
<pre><code>if request.data['jsonPath']:
    config = json.loads(models.Config.objects.get(name__startswith='general').value)
</code></pre>
<p>Then <code>config</code> is a plain Python dict, which I can access with bracket notation like <code>config['input']['keyboard']['language']</code>, and it contains the corresponding part of the JSON.</p>
<p>I am able to read out the value of the field with the following syntax:</p>
<pre><code>config[request.data['jsonPath'][0]][request.data['jsonPath'][1]][request.data['jsonPath'][2]]
</code></pre>
<p>But the number of path elements can vary, and I also want to write the new value back into the config object, convert it back to a string, and store it in the database.</p>
<p>I know it is not elegant, but I would be interested in how it can be done this way.</p>
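<p>A sketch of the idea, under the assumption that <code>config</code> is a nested dict from <code>json.loads</code> (the helper names <code>get_by_path</code> and <code>set_by_path</code> are invented for illustration): <code>functools.reduce</code> folds a path list of any length over the dict, which handles the varying number of path elements.</p>

```python
import json
from functools import reduce

# Stand-ins for the database value and request.data['jsonPath'].
config = json.loads('{"input": {"keyboard": {"language": "de"}}}')
json_path = ["input", "keyboard", "language"]

def get_by_path(obj, path):
    # Fold the path over the nested dict: obj[p0][p1]...[pn].
    return reduce(lambda node, key: node[key], path, obj)

def set_by_path(obj, path, value):
    # Walk to the parent of the last key, then assign the leaf.
    parent = reduce(lambda node, key: node[key], path[:-1], obj)
    parent[path[-1]] = value

set_by_path(config, json_path, "en")
print(json.dumps(config))  # {"input": {"keyboard": {"language": "en"}}}
```

<p>The resulting string from <code>json.dumps</code> can then be saved back to the database field.</p>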
|
<python><json><django>
|
2022-12-13 16:09:14
| 2
| 359
|
A. L
|
74,787,600
| 11,002,498
|
Errors trying to switch to new version of Python in Visual Studio Code
|
<p>I am using Visual Studio Code to run a program in Python. I wrote my program in Python 3.7 and everything was fine. Then I tried to install JupyterLab, but it needed a newer Python version, so I switched to Python 3.11 (3.11.0 64-bit, to be precise).</p>
<p>When I run my code in the newer Python version it says:</p>
<blockquote>
<p>bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?</p>
</blockquote>
<p>For line: <code>soup = BeautifulSoup(page,features="lxml")</code></p>
<p>According to a 2018 topic <a href="https://stackoverflow.com/questions/53787102/couldnt-find-a-tree-builder-with-the-features-you-requested-html5lib-do-you-n">here</a>, where the user has the same problem, the most upvoted and accepted answer suggests doing two things: adding <code>"html5lib"</code> and running <code>pip install html5lib</code>. My hypothesis is that I need to do the same for <code>lxml</code>.</p>
<p>That's why I ran <code>pip install lxml</code>. Now it returns:</p>
<pre><code>error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
Building lxml version 4.9.1.
Building without Cython.
    Building against pre-built libxml2 and libxslt libraries
running install
</code></pre>
<p>And a <a href="https://prnt.sc/6lMlkeneYIuH" rel="nofollow noreferrer">screenshot</a>. And at the end again:</p>
<pre><code> *********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
</code></pre>
<p>And a <a href="https://prnt.sc/vKlR-bDSE09x" rel="nofollow noreferrer">screenshot</a>. In <a href="https://stackoverflow.com/questions/74332756/installing-lxml-facing-a-legacy-install-failure-error">this Stackoverflow question</a> they suggest going back a version, which sounds like a bad idea, and I do not know if the libraries I want to install will be available there. (In addition, there is no accepted answer, and some answers even say to reinstall the entire OS?!)</p>
<p>Then I checked <a href="https://stackoverflow.com/questions/71152710/failing-to-install-lxml-using-pip">this question</a> and tried to run <code>pip install C:\Users\@@@@@\Downloads\lxml-4.9.1-cp311-cp311-win_amd64.whl</code>. Although the file is there exactly as I downloaded it (<a href="https://prnt.sc/GprdwvVUnhOP" rel="nofollow noreferrer">screenshot</a> of the folder), it returns an error:</p>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\@@@@@\\Downloads\\lxml-4.9.1-cp311-cp311-win_amd64.whl'
</code></pre>
<p>Information that may be needed:</p>
<ul>
<li>Windows 10</li>
<li>Visual Studio Code Version 1.74.0</li>
<li>Chosen Interpreter 3.11.0 <a href="https://prnt.sc/5N4w8srPSZ25" rel="nofollow noreferrer">https://prnt.sc/5N4w8srPSZ25</a></li>
</ul>
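<p>One thing worth ruling out (my assumption, not something confirmed by the logs above): with two Python versions installed, <code>pip</code> may be installing into a different interpreter than the one VS Code runs. A quick stdlib check from inside the failing script shows which interpreter is actually executing:</p>

```python
import sys

# Print the interpreter that actually runs this script; compare it to the
# one pip installed into (pip shows its target at the top of its output,
# or via "pip -V").
print(sys.executable)
print(sys.version_info[:3])
```

<p>If the two differ, installing with that exact interpreter — e.g. <code>py -3.11 -m pip install lxml</code> via the Windows launcher — targets the right <code>site-packages</code>.</p>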
|
<python><visual-studio-code>
|
2022-12-13 15:56:38
| 1
| 464
|
Skapis9999
|