| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,002,929
| 1,471,980
|
How do you select the maximum value across each row in pandas?
|
<p>I have this data frame:</p>
<pre><code>Server 1-Jun 6-Jun 1-jul Jul-10
ServerA 8 9 5 90
ServerB 100 10 9 90
</code></pre>
<p>I need to create another column called maximumval and pick the maximum value from all of the months per Server:</p>
<p>Resulting data frame needs to be like this:</p>
<pre><code>Server 1-Jun 6-Jun 1-jul Jul-10 maximumval
ServerA 8 9 5 90 90
ServerB 100 10 9 90 100
</code></pre>
<p>I tried this:</p>
<pre><code>df['maximumval'] = df.max(axis=1)
</code></pre>
<p>I get this error:</p>
<pre><code>'>=' no supported between instances of 'str' and 'float'
</code></pre>
<p>Any ideas what this error means and how to fix it?</p>
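<p>For what it's worth, this error usually means <code>max</code> is trying to compare the non-numeric <code>Server</code> column against the numbers in the other columns. A minimal sketch (assuming the column names from the question) that restricts the row-wise max to numeric columns:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Server": ["ServerA", "ServerB"],
    "1-Jun": [8, 100],
    "6-Jun": [9, 10],
    "1-jul": [5, 9],
    "Jul-10": [90, 90],
})

# df.max(axis=1) compares every column, including the string "Server" column,
# which raises the TypeError; numeric_only=True skips non-numeric columns
df["maximumval"] = df.max(axis=1, numeric_only=True)
print(df)
```

<p>Setting the <code>Server</code> column as the index first (<code>df.set_index('Server').max(axis=1)</code>) is another way to keep strings out of the comparison.</p>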
|
<python><pandas><dataframe><aggregation><rowwise>
|
2024-09-19 14:03:16
| 2
| 10,714
|
user1471980
|
79,002,911
| 5,392,822
|
Pass a variable number of arguments from a shell script to Python
|
<p>I have a very simple shell script which accepts 2 arguments and, after some processing, passes these 2 arguments to Python code, like below:</p>
<pre><code>arg1=$1
arg2=$2
#some processing...
python3.6 abc.py $arg1 $arg2
</code></pre>
<p>Now the requirement is that my shell script will accept a variable number of arguments and should pass them all on to the Python program. For accepting them I can use:</p>
<pre><code>for arg in "$@" ; do
    #use array for assignments
done
</code></pre>
<p>but the issue is how to pass them on to the Python program.</p>
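<p>In case it helps: <code>"$@"</code> can simply be forwarded as-is; the quoted form expands to all of the script's arguments, each kept as its own word, and Python sees them in <code>sys.argv</code>. A self-contained sketch (the <code>/tmp/demo_args.py</code> helper is made up purely for the demo):</p>

```shell
#!/bin/sh
# a throwaway Python script that just reports how many arguments it received
cat > /tmp/demo_args.py <<'EOF'
import sys
print(len(sys.argv) - 1)
EOF

# some processing...
# then forward every argument the shell script received, quoting preserved
run() {
    python3 /tmp/demo_args.py "$@"
}

run one two "three four"   # the quoted argument stays a single argument
```

<p>Inside the real script the last line would just be <code>python3.6 abc.py "$@"</code>; no array bookkeeping is needed unless the arguments are modified first.</p>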
|
<python><bash><shell>
|
2024-09-19 13:58:21
| 1
| 360
|
Aniket
|
79,002,888
| 10,811,647
|
Python Selenium raising RemoteError@chrome while using Firefox
|
<p>I am getting this strange error:</p>
<pre class="lang-bash prettyprint-override"><code>raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:193:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:511:5
dom.find/</<@chrome://remote/content/shared/DOM.sys.mjs:136:16
</code></pre>
<p>I am using Firefox as the driver.</p>
<p>The code generating the error is quite simple. All I'm trying to do is get all the <code>div</code> elements of a given class and, for each one, get the <code>href</code> attribute of the <code>a</code> element inside. It works fine when I run the code line by line inside my Python interpreter in the command line, but it throws the error when I try to run it inside a function from a script.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>def get_links(self, class_name):
    profile_divs = self.browser.find_elements(By.CLASS_NAME, class_name)
    links = []
    for profile in profile_divs:
        try:
            # We get the a tag and append its href attribute to the links list
            a_tag = WebDriverWait(profile, 10).until(EC.presence_of_element_located((By.TAG_NAME, 'a')))
            links.append(a_tag.get_attribute("href"))
        except:  # if no a tag is found in the div
            continue
    return links
</code></pre>
<p>the line that throws the error is</p>
<pre><code>a_tag = WebDriverWait(profile, 10).until(EC.presence_of_element_located((By.TAG_NAME, 'a')))
</code></pre>
<p>But the strange thing is that it works when I run each line individually. I thought it was something related to wait time so I added the</p>
<p><code>WebDriverWait().until()</code>.</p>
<p>But it still doesn't work when trying to use the function.</p>
<p>Any help would be appreciated</p>
|
<python><selenium-webdriver>
|
2024-09-19 13:53:25
| 0
| 397
|
The Governor
|
79,002,774
| 2,800,918
|
How to create the cartesian product of a list of lists of tuples
|
<p>I'm trying to accomplish a cartesian product of a list of lists. The base elements are tuples. Something about the tuples seems to really throw <code>product</code> off. The more products I attempt, the more it adds a rat's nest of tuples.</p>
<p>Here's my code</p>
<pre><code>from itertools import product
listOfListsOfTuples = [ [(1,2),], [(3,4),(5,6)] ]
got = list(product(*listOfListsOfTuples))
print('got :',got)
wanted = [ [(1,2),(3,4)], [(1,2),(5,6)] ]
print('wanted:',wanted)
</code></pre>
<p>and my output</p>
<pre><code>got : [((1, 2), (3, 4)), ((1, 2), (5, 6))]
wanted: [[(1, 2), (3, 4)], [(1, 2), (5, 6)]]
</code></pre>
<p>Maybe I need to fall back to <code>for</code> loops and do it myself?</p>
<p>EDIT: I was made aware that I need a <code>*</code> on the call to <code>product</code>, so I added that. That changed my output, so I updated the output above. Note that <code>got != wanted</code>.</p>
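<p>For what it's worth, <code>product</code> is already producing the right combinations here; it just yields each one as a tuple. Converting every combination to a list reproduces <code>wanted</code> exactly, so no fallback to hand-rolled <code>for</code> loops is needed:</p>

```python
from itertools import product

listOfListsOfTuples = [[(1, 2)], [(3, 4), (5, 6)]]

# product(*lists) yields one tuple per combination; wrap each combination
# in list() to match the wanted list-of-lists shape
got = [list(combo) for combo in product(*listOfListsOfTuples)]
print(got)  # [[(1, 2), (3, 4)], [(1, 2), (5, 6)]]
```

<p>The inner tuples <code>(1, 2)</code> etc. are untouched; only the outer container type changes.</p>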
|
<python><python-3.x><cartesian-product>
|
2024-09-19 13:26:52
| 1
| 1,148
|
CAB
|
79,002,571
| 7,462,275
|
How to create an interactive 2D histogram from 3D data in a Jupyter notebook
|
<p>The number of occurrences of an event that depends on many parameters has been calculated, and I would like to create an interactive 2D histogram in a Jupyter notebook.</p>
<p>To keep things simple in this question, there are three parameters (<code>par1</code>, <code>par2</code>, <code>par3</code>). The 3D histogram is simulated with the following instruction: <code>all_datas=np.random.rand(5,10,20)*100</code>. The numbers of bins are 5, 10, 20 for <code>par1</code>, <code>par2</code>, <code>par3</code>, respectively.</p>
<p>The 2D histogram for <code>par1</code> and <code>par2</code> is plotted in ax0 with these instructions.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, (ax0, ax1) = plt.subplots(1, 2)
c = ax0.pcolor(np.sum(all_datas, axis=2).transpose())
plt.show()
</code></pre>
<p>When the mouse is over a specific 2D bin (so <code>par1</code>=<code>i_par</code> and <code>par2</code>=<code>j_par</code> are fixed) in <code>ax0</code>, how is it possible to plot a 1D histogram of <code>par3</code> for this bin (<code>all_datas[i_par, j_par, :]</code>) in <code>ax1</code>?</p>
<p>Obviously, the problem is not the creation of the 1D histogram but the <strong>interactivity</strong>: how can I make the 1D histogram plotted in <code>ax1</code> depend on the mouse position over <code>ax0</code>?</p>
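<p>One possible approach (a sketch, not verified against every notebook frontend) is a <code>motion_notify_event</code> callback; in Jupyter this requires an interactive backend such as <code>%matplotlib widget</code> (ipympl). The <code>Agg</code> backend below is only so the sketch runs headless:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend just so this sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

all_datas = np.random.rand(5, 10, 20) * 100

fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.pcolor(np.sum(all_datas, axis=2).transpose())

def on_move(event):
    # ignore motion outside the 2D histogram axes
    if event.inaxes is not ax0:
        return
    i, j = int(event.xdata), int(event.ydata)  # par1, par2 bin indices
    if 0 <= i < all_datas.shape[0] and 0 <= j < all_datas.shape[1]:
        ax1.clear()
        ax1.bar(range(all_datas.shape[2]), all_datas[i, j, :])
        fig.canvas.draw_idle()

cid = fig.canvas.mpl_connect("motion_notify_event", on_move)
```

<p>With <code>pcolor</code> on the transposed array, the x coordinate maps to the <code>par1</code> bin and y to <code>par2</code>, so truncating <code>event.xdata</code>/<code>event.ydata</code> picks the hovered bin.</p>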
|
<python><matplotlib><jupyter-notebook><interactive>
|
2024-09-19 12:39:31
| 0
| 2,515
|
Stef1611
|
79,002,542
| 196,206
|
Cannot pass multiprocessing.Manager to process when using forkserver: cannot pickle 'weakref' object
|
<p>Sample code:</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing

def main():
    multiprocessing.set_start_method('forkserver')
    with multiprocessing.Manager() as manager:
        proc = multiprocessing.Process(target=fun, args=('p1', manager))
        proc.start()
        proc.join()

def fun(name, manager):
    print('fun:', name, manager)

if __name__ == '__main__':
    main()
</code></pre>
<p>Fails with error:</p>
<pre><code>$ /opt/python3.12.6/bin/python3 ./demo_mp_manager_forkserver.py
Traceback (most recent call last):
File "/home/.../demo_mp_manager_forkserver.py", line 14, in <module>
main()
File "/home/.../demo_mp_manager_forkserver.py", line 7, in main
proc.start()
File "/opt/python3.12.6/lib/python3.12/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/opt/python3.12.6/lib/python3.12/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python3.12.6/lib/python3.12/multiprocessing/context.py", line 301, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/opt/python3.12.6/lib/python3.12/multiprocessing/popen_forkserver.py", line 35, in __init__
super().__init__(process_obj)
File "/opt/python3.12.6/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/opt/python3.12.6/lib/python3.12/multiprocessing/popen_forkserver.py", line 47, in _launch
reduction.dump(process_obj, buf)
File "/opt/python3.12.6/lib/python3.12/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'weakref.ReferenceType' object
</code></pre>
|
<python><python-3.x><multiprocessing>
|
2024-09-19 12:31:56
| 2
| 25,401
|
Messa
|
79,002,503
| 12,466,687
|
How to extract data from PDFs which is not in tables or containers into a column-based table format in Python?
|
<p>I am trying to convert my PDF data into structured table-format data. I have tried a bunch of options, but none of them have been able to separate the fields into columns of a table format. I am able to do that using AI models like <code>LlamaParse</code>, where fields get separated by <code>|</code>, but unfortunately I can't use Llama for now because of <a href="https://stackoverflow.com/questions/78978810/how-to-use-st-file-uploader-returned-object-to-parse-pdf-through-llamaparse">this issue</a>.</p>
<p>Some other options that I have tried below:</p>
<ol>
<li><p><code>pdfplumber</code>:</p>
<pre><code>import pandas as pd
import pdfplumber
path = r"my_pdf_file.pdf"
data = []
with pdfplumber.open(path) as pdf:
pages = pdf.pages
for p in pages:
data.append(p.extract_tables())
print(data)
</code></pre>
<pre><code>with pdfplumber.open(path) as pdf:
first_page = pdf.pages[0]
first_page.to_image()
</code></pre>
<p><a href="https://i.sstatic.net/3GX8hq4l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GX8hq4l.png" alt="enter image description here" /></a></p>
<p>With the code below I am able to extract only the top section of personal info, but not the required test results, which are separated by a lot of spaces.</p>
<pre><code>data[0][0][0][0].splitlines()
</code></pre>
<pre><code>['Patient Name : Mr.VINEET',
'Age/Sex : 39 YRS/M Lab Id. : 012408030006',
'Refered By : Self Sample Collection On : 03/Aug/2024 08:30AM',
'Collected By : HARPREET Sample Lab Rec.On : 03/Aug/2024 11:50 AM',
'Collection Mode : HOME COLLECTION Reporting On : 03/Aug/2024 03:24 PM',
'BarCode : 10593974']
</code></pre>
<p>So basically what this gives me is only the containerized section of the PDF above, and not the required data below, which is separated by a large number of white spaces:</p>
<pre><code>pages.to_image().debug_tablefinder()
</code></pre>
<p><a href="https://i.sstatic.net/X2EwIKcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X2EwIKcg.png" alt="enter image description here" /></a></p>
</li>
<li><p><code>PdfReader</code>:</p>
<p>This also gives only plain text and nothing structured:</p>
<pre><code>from pypdf import PdfReader
reader = PdfReader(path)
reader.pages[0].extract_text().splitlines()
</code></pre>
<p>output:</p>
<pre><code>['NEW AAROGYA EXECUTIVE PACKAGE 108 TEST',
'Kidney Function Test (KFT),Serum',
'BLOOD UREA',
'(Method : Urease-GLDH)31.80 12-43 mg/dL',
'BLOOD UREA NITROGEN (BUN)',
'(Method : Calculated)15 6 - 21 mg/dl',
'SERUM CREATININE',
'(Method : Modified Jaffe,s)1.10 0.9 - 1.3 mg/dL',
' ',
'Creatinine is a by product of muscle catabolism. It is filtered by kidney and excreted in the urine. if the filtering of the kidney is deficient, creatinine levels in blood are',
'increased.',
' ',
'Creatine level is used for the assessment of kidney function and to diagnose renal dysfunction. However, more important than absolute creatinine level is the trend of',
'serum creatinine levels over time. Serum creatine is especially useful in evaluation of glomerular function. BUN (Blood Urea Nitrogen) & Creatine are frequently',
'compared. If BUN increased and creatinine is normal, dehydration is present; and if both increased, then renal disorder is present.',
' ',
'Conditions associated with increased creatine level : Acute and chronic renal failure, shock (prolonged), systemic lupus erythematosis, cancer, leukemia ,',
'hypertension, acute myocardial infaction, diabetic nephropathy , diet rich in creatinine (e.g. beef), congenital renal disease etc. ',
'Condition associated with decreased creatine level : Pregnancy, Eclampsia etc.',
' ',
'Opinion & Advice:',
'People with increased creatine levels are advised to undergo : Kidney Function Test, Urine Examination & USG (Whole abdomen) at regular intervals.',
'Decreased creatine levels are usually insignificant.',
'SERUM URIC ACID',
'(Method : Uricase-POD)5.8 3.5-7.2 mg/dL',
'UREA / CREATININE RATIO',
'(Method : Calculated)28.91 23 - 33 Ratio',
'BUN / CREATININE RATIO',
'(Method : Calculated)13.51 5.5 - 19.2 Ratio',
'INORGANIC PHOSPHORUS',
'(Method : UV Molybdate)3.63 2.5-4.5 mg/dL',
</code></pre>
<p>Even this doesn't give any tabular data.</p>
</li>
<li><p><code>pytesseract</code>:</p>
<pre><code>import os
from PIL import Image
import pdf2image
from pdf2image import convert_from_path
import pytesseract
</code></pre>
<pre><code>img = Image.open(r"saved_image.png")
</code></pre>
<pre><code>text = pytesseract.image_to_string(img)
text
</code></pre>
<p>output:</p>
<pre><code>'OF tata\n\nOe\n\nLABORATORY TEST REPORT\n\nPatient Name >Mr.VINEET\n\nAge/Sex :39 YRS/M Lab Id. : 012408030006\nRefered By : Self Sample Collection On : 03/Aug/2024 08:30AM\n\nCollected By > HARPREET Sample Lab Rec.On : 03/Aug/2024 11:50 AM\nCollection Mode :HOME COLLECTION Reporting On : 03/Aug/2024 03:24 PM\nBarCode : 10593974\n\nTest Name Result Biological Ref. Int. Unit\n\nNEW AAROGYA EXECUTIVE PACKAGE 108 TEST\nKidney Function Test (KFT),Serum\n\nBLOOD UREA 31.80 12-43 mg/dL\n(Method ; Urease-GLDH)\n\nBLOOD UREA NITROGEN (BUN) 15 6-21 mg/dl\n(Method : Calculated)\n\nSERUM CREATININE 1.10 0.9-1.3 mg/dL\n\n(Method : Modified Jaffe,s)\n\nCreatinine is a by product of muscle catabolism. It is filtered by kidney and excreted in the urine. if the filtering of the kidney is deficient, creatinine levels in blood are\nincreased.\n\nCreatine level is used for the assessment of kidney function and to diagnose renal dysfunction. However, more important than absolute creatinine level is the trend of\nserum creatinine levels over time. Serum creatine is especially useful in evaluation of glomerular function. BUN (Blood Urea Nitrogen) & Creatine are frequently\ncompared. If BUN increased and creatinine is normal, dehydration Is present; and if both increased, then renal disorder is present.\n\nConditions associated with increased creatine level : Acute and chronic renal failure, shock (prolonged), systemic lupus erythematosis, cancer, leukemia ,\nhypertension, acute myocardial infaction, diabetic nephropathy , diet rich in creatinine (e.g. 
beef), congenital renal disease etc.\nCondition associated with decreased creatine level : Pregnancy, Eclampsia etc.\n\nOpinion & Advice:\nPeople with increased creatine levels are advised to undergo : Kidney Function Test, Urine Examination & USG (Whole abdomen) at regular intervals.\nDecreased creatine levels are usually insignificant.\n\nSERUM URIC ACID 5.8 3.5-7.2 mg/dL\n(Method : Uricase-POD)\n\nUREA / CREATININE RATIO 28.91 23-33 Ratio\n(Method : Calculated)\n\nBUN / CREATININE RATIO 13.51 5.5- 19.2 Ratio\n(Method : Calculated)\n\nINORGANIC PHOSPHORUS 3.63 2.5-4.5 mg/dL\n\n(Method : UV Molybdate)\n\na\n\n5 SS\n.โโ =\n\nDR. ASHOK MALHOTRA DR. SANJEEV R. SAIGAL DR. B. JUNEJA |\nMBBS, MD (BIOCHEMISTRY) PHD (MEDICAL MICROBIOLOGY) MBBS, MD (PATHOLOGY)\n\nCONDITIONS OF REPORTING\n\ni) ยซ) @) @)\n\nThe reported results are for information and for interpretation of the referring doctor only. 3 It is presumed that the tests performed on the specimen belong to the patient named or identified.\nResults of the tests may vary from laboratory to laboratory and also in some parameters from time to time forthe same patient 2 Should the results indicate an unexpected abnormally, the same should be reconfirmed.\nOnly such medical professionals who understand reporting units, reference ranges and limitations of technologies should interpret results. =< This report is not valid for medicotegal purposes.\n\nPage 1 of 20\n\nNeither Aarogya Pathcare, nor its employees / representatives assume any liability, responsibility for any loss or damages that may be incurred by any person as a result of persuming the meaning or contents of the r\n\nSUGGESTIONS\nValues out of reference range requires reconfirmation before starting any medical treatments. 3 Retesting is needed if you suspect any quality shotcomings. = Testing or retesting should be done iin accredited laboratories. 2 ff any complaint / Suggestion call : 7627957531, 7011346653\n\n7\nol\n\n'
</code></pre>
<p>sample output from above string:</p>
<pre><code>n\nBLOOD UREA NITROGEN (BUN) 15 6-21 mg/dl\n(Method : Calculated)\n\nSERUM CREATININE 1.10 0.9-1.3 mg/dL\n\n
</code></pre>
</li>
</ol>
<p>Desired output:</p>
<pre><code>BLOOD UREA NITROGEN (BUN)|15|6-21|mg/dl
(Method : Calculated)
SERUM CREATININE|1.10|0.9-1.3|mg/dL
</code></pre>
<p>I was able to do this in R, as it was able to capture the large white spaces, which I then replaced with a delimiter like <code>|</code> to maintain the separation.</p>
<p>I was able to do the same thing using <code>Llama_Parse</code>, but I am not able to use it in <code>Streamlit</code>, so I am looking for alternatives. I need a solution in Python without AI models like <code>Llama_parse</code>.</p>
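<p>One AI-free route worth trying: pdfplumber's <code>extract_text(layout=True)</code> keeps the visual spacing of the page, after which the same trick used in R (replace wide runs of spaces with a delimiter) works in Python too. The layout lines below are made up to mimic the report, since the original PDF isn't available here:</p>

```python
import re

# hypothetical lines as extract_text(layout=True) might return them,
# with the large gaps between columns preserved
lines = [
    "BLOOD UREA NITROGEN (BUN)        15      6 - 21       mg/dl",
    "SERUM CREATININE                 1.10    0.9 - 1.3    mg/dL",
]

# collapse every run of 2+ spaces into the "|" delimiter; single spaces
# inside a field (e.g. inside the test name) are left alone
rows = [re.sub(r" {2,}", "|", line).strip().strip("|") for line in lines]
for row in rows:
    print(row)
```

<p>The <code>{2,}</code> quantifier is what keeps multi-word test names intact while still splitting the result, reference-interval, and unit columns.</p>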
|
<python><parsing><pdf><python-tesseract><pdfplumber>
|
2024-09-19 12:22:34
| 0
| 2,357
|
ViSa
|
79,002,440
| 2,678,716
|
Update a field in a DO UPDATE SET clause from a query result value
|
<p>I have the following query:</p>
<pre><code>INSERT INTO table1 (
col1,
col2,
col3
)
VALUES (
%s,
(SELECT id FROM table2 WHERE name = %s),
%s
)
ON CONFLICT (fk_id)
DO UPDATE SET
col1 = %s,
col2 = (SELECT id FROM table2 WHERE name = %s),
col3 = %s
WHERE EXISTS (
SELECT 1 FROM table1 WHERE fk_id = %s
)
RETURNING id;
</code></pre>
<p>The <code>col2 = (SELECT id FROM table2 WHERE name = %s)</code> line causes a "String literal error", but when it's removed, the query works fine. Is there any way to get a value from another table inside a <code>DO UPDATE SET</code> clause?</p>
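<p>A scalar subquery is legal on the right-hand side of <code>DO UPDATE SET</code> in PostgreSQL, so the likely culprit is stray characters (such as the <code>***</code> emphasis markers, or an unbalanced quote) ending up in the actual SQL string. A runnable sketch of the same upsert pattern, using SQLite only so it is self-contained (psycopg2 would use <code>%s</code> placeholders instead of <code>?</code>, and hypothetical table/column names mirror the question):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO table2 (id, name) VALUES (1, 'alpha'), (2, 'beta');
CREATE TABLE table1 (id INTEGER PRIMARY KEY, col1 TEXT, col2 INTEGER,
                     fk_id INTEGER UNIQUE);
INSERT INTO table1 (col1, col2, fk_id) VALUES ('a', 1, 100);
""")

# a scalar subquery on the right-hand side of DO UPDATE SET is valid SQL
con.execute(
    """
    INSERT INTO table1 (col1, col2, fk_id)
    VALUES (?, (SELECT id FROM table2 WHERE name = ?), ?)
    ON CONFLICT (fk_id) DO UPDATE SET
        col1 = excluded.col1,
        col2 = (SELECT id FROM table2 WHERE name = ?)
    """,
    ("b", "alpha", 100, "beta"),
)
row = con.execute("SELECT col1, col2 FROM table1 WHERE fk_id = 100").fetchone()
print(row)
```

<p>When the updated value is one of the columns being inserted, <code>col2 = EXCLUDED.col2</code> avoids repeating the subquery entirely.</p>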
|
<python><postgresql><psycopg2>
|
2024-09-19 12:06:25
| 1
| 1,383
|
Rav
|
79,002,434
| 5,467,541
|
Kafka message compression not working (broker level)
|
<p>I added compression for log messages at the broker level (<code>compression.type=zstd</code>) on all three brokers, with no other changes at the broker or producer level.</p>
<p>When I tried to read messages with my Python Kafka client (consumer), I started getting timeouts, and the consumer didn't return any messages from that point on. I reverted this change on all three brokers and now I can access messages again.</p>
<p>Where did I go wrong, and how can I enable compression on the message logs correctly? Should I now add compression at the producer level as well? Thanks in advance.</p>
|
<python><apache-kafka><compression><kafka-consumer-api>
|
2024-09-19 12:05:14
| 1
| 595
|
Abhinav Ralhan
|
79,002,206
| 6,930,340
|
Concatenate polars dataframe with columns of dtype ENUM
|
<p>Consider having two <code>pl.DataFrame</code>s with identical schema. One of the columns has <code>dtype=pl.Enum</code>.</p>
<pre><code>import polars as pl
enum_col1 = pl.Enum(["type1"])
enum_col2 = pl.Enum(["type2"])
df1 = pl.DataFrame(
    {"enum_col": "type1", "value": 10},
    schema={"enum_col": enum_col1, "value": pl.Int64},
)
df2 = pl.DataFrame(
    {"enum_col": "type2", "value": 200},
    schema={"enum_col": enum_col2, "value": pl.Int64},
)
print(df1)
print(df2)
shape: (1, 2)
┌──────────┬───────┐
│ enum_col ┆ value │
│ ---      ┆ ---   │
│ enum     ┆ i64   │
╞══════════╪═══════╡
│ type1    ┆ 10    │
└──────────┴───────┘
shape: (1, 2)
┌──────────┬───────┐
│ enum_col ┆ value │
│ ---      ┆ ---   │
│ enum     ┆ i64   │
╞══════════╪═══════╡
│ type2    ┆ 200   │
└──────────┴───────┘
</code></pre>
<p>If I try to do a simple <code>pl.concat([df1, df2])</code>, I get the following error:</p>
<p><code>polars.exceptions.SchemaError: type Enum(Some(local), Physical) is incompatible with expected type Enum(Some(local), Physical)</code></p>
<p>You can get around this issue by "enlarging" the enums like this:</p>
<pre><code>pl.concat(
    [
        df1.with_columns(pl.col("enum_col").cast(pl.Enum(["type1", "type2"]))),
        df2.with_columns(pl.col("enum_col").cast(pl.Enum(["type1", "type2"]))),
    ]
)
shape: (2, 2)
┌──────────┬───────┐
│ enum_col ┆ value │
│ ---      ┆ ---   │
│ enum     ┆ i64   │
╞══════════╪═══════╡
│ type1    ┆ 10    │
│ type2    ┆ 200   │
└──────────┴───────┘
</code></pre>
<p>I guess there is a more pythonic way to do this?</p>
|
<python><python-polars>
|
2024-09-19 11:02:26
| 1
| 5,167
|
Andi
|
79,002,170
| 19,077,881
|
Ubuntu 22.04 syntax warning importing Pandas
|
<p>I have installed Python 3.12 in Ubuntu 22.04 alongside the system Python 3.10. It works as expected as do packages such as Numpy. However after installing Pandas 2.2.2 the line <code>import pandas as pd</code> produces the warning message below. Pandas still then works as expected with Python 3.12. Why does this happen and what should I do?</p>
<pre><code>/usr/lib/python3/dist-packages/pytz/__init__.py:31: SyntaxWarning: invalid escape sequence '\s'
match = re.match("^#\s*version\s*([0-9a-z]*)\s*$", line)
</code></pre>
|
<python><pandas><ubuntu>
|
2024-09-19 10:54:03
| 1
| 5,579
|
user19077881
|
79,002,136
| 14,715,428
|
tqdm not working within another library, but working fine directly in Jupyter Notebook
|
<p>In my Jupyter Notebook environment, everything works correctly with using <code>tqdm</code> and its progress bar directly. For example, the following code:</p>
<pre><code>from tqdm import tqdm

for i in tqdm(range(100000)):
    pass
</code></pre>
<p>is correctly displayed:</p>
<p><a href="https://i.sstatic.net/e8OhMGHv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8OhMGHv.png" alt="Correct tqdm" /></a></p>
<p>But when I use some other libraries that themselves call <code>tqdm</code> internally, like <a href="https://github.com/eliorc/node2vec/blob/master/node2vec/node2vec.py" rel="nofollow noreferrer"><code>Node2Vec</code></a> in the following code:</p>
<pre><code>from node2vec import Node2Vec
node2vec = Node2Vec(
G, dimensions=64, walk_length=100,
num_walks=2000, workers=4, seed=59, quiet=False
)
</code></pre>
<p>Then it is not displayed and just some spaces are added in the outputs:</p>
<p><a href="https://i.sstatic.net/oTThG5NA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTThG5NA.png" alt="tqdm not working correctly" /></a></p>
<p>I don't know why it's happening; can someone explain please?</p>
|
<python><jupyter-notebook><tqdm>
|
2024-09-19 10:40:58
| 0
| 349
|
TayJen
|
79,002,102
| 13,921,816
|
Folium SideBySideLayers: Markers Not Displaying Correctly on Separate Layers
|
<p>I'm trying to create a map using the Folium library in Python, where I want to use the SideBySideLayers plugin to compare two different tile layers. Each layer should have its own set of markers, but I'm facing an issue where the markers are visible on both layers, instead of being restricted to their respective layers.</p>
<pre><code>import folium
from folium import plugins, FeatureGroup
# Create Map
m = folium.Map(location=(30, 20), zoom_start=4)
# Create layers
layer_left = folium.TileLayer('cartodbpositron', name='Layer Left')
layer_right = folium.TileLayer('openstreetmap', name='Layer Right')
# Add layers to map
layer_left.add_to(m)
layer_right.add_to(m)
# Create SideBySideLayers
sbs = plugins.SideBySideLayers(layer_left=layer_left, layer_right=layer_right)
sbs.add_to(m)
fg_left = FeatureGroup(name='Markers left', show=True)
fg_right = FeatureGroup(name='Markers right', show=True)
fg_left.add_child(folium.Marker(location=(35, 25), popup='Left: Point A', icon=folium.Icon(color='blue')))
fg_left.add_child(folium.Marker(location=(34, 24), popup='Left: Point B', icon=folium.Icon(color='blue')))
fg_right.add_child(folium.Marker(location=(25, 15), popup='Right: Point C', icon=folium.Icon(color='red')))
fg_right.add_child(folium.Marker(location=(26, 16), popup='Right: Point D', icon=folium.Icon(color='red')))
fg_left.add_to(m) #or layer_left.add_child(fg_left)
fg_right.add_to(m) #or layer_right.add_child(fg_right)
m.save("split_map.html")
</code></pre>
<p><a href="https://i.sstatic.net/oTVrcsuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTVrcsuA.png" alt="enter image description here" /></a></p>
<p>What I am trying to do:</p>
<p><a href="https://www.esri.com/arcgis-blog/wp-content/uploads/2020/06/swipe-1-loop-red.gif" rel="nofollow noreferrer">https://www.esri.com/arcgis-blog/wp-content/uploads/2020/06/swipe-1-loop-red.gif</a></p>
|
<python><visualization><folium>
|
2024-09-19 10:34:42
| 1
| 354
|
Mathieu P.
|
79,001,858
| 7,959,614
|
Simulate a Markov chain using networkx and numpy
|
<p>My goal is to simulate a Markov chain using <code>networkx</code> and <code>numpy</code>. I write the following code</p>
<pre><code>import numpy as np
import networkx as nx
states = [
    'distance',
    'strike',
    'knockout'
]

transition_matrix = np.array([
    [0.85, 0.15, 0],
    [0.98, 0, 0.02],
    [0, 0, 1]
])

def create_graph(T: np.ndarray) -> nx.DiGraph:
    G = nx.DiGraph()
    G.add_nodes_from(states)
    for i, u in enumerate(states):
        for j, v in enumerate(states):
            p = T[i][j]
            if p > 0:
                G.add_edge(u, v, p=p)
    return G

G = create_graph(T=transition_matrix)
</code></pre>
<p>Basically, there are two fighters that are standing in front of each other. They move around with a probability of <code>0.85</code> and strike each other with a probability of <code>0.15</code>. The probability that a strike results in a knockout is <code>0.02</code>. Knockout is an absorbing state and ends the fight.</p>
<p>What's the best/fastest way to simulate a fight and keep track of the state? So ideally, I would like my output look as follows:</p>
<blockquote>
<p>['distance', 'distance', 'distance', 'distance', 'strike', 'distance',
... , 'distance', 'strike', 'knockout']</p>
</blockquote>
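<p>Since <code>knockout</code> is absorbing, a plain loop over <code>numpy.random.Generator.choice</code> terminates with probability 1 and is usually fast enough; the graph isn't strictly needed for the simulation itself. A sketch using the matrix from the question:</p>

```python
import numpy as np

states = ['distance', 'strike', 'knockout']
T = np.array([
    [0.85, 0.15, 0.00],
    [0.98, 0.00, 0.02],
    [0.00, 0.00, 1.00],
])

def simulate_fight(start=0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    i = start
    path = [states[i]]
    # keep sampling the next state until the absorbing state is reached
    while states[i] != 'knockout':
        i = rng.choice(len(states), p=T[i])
        path.append(states[i])
    return path

fight = simulate_fight(rng=np.random.default_rng(42))
print(fight[:5], '...', fight[-3:])
```

<p>If the graph built with <code>networkx</code> is still wanted (e.g. for plotting), the same loop works with <code>G.succ[state]</code> and the edge <code>p</code> attributes as the sampling weights.</p>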
|
<python><numpy><networkx><markov-chains>
|
2024-09-19 09:32:36
| 1
| 406
|
HJA24
|
79,001,527
| 6,449,740
|
How to create new columns or update a column inside a dataframe?
|
<p>Good morning everyone,</p>
<p>I have a question today that I don't know exactly how to approach.</p>
<p>Given a dataframe, I need to create columns dynamically, and those columns will contain a set of validations that I have to do, like: validate the number of characters, see if there are strange characters inside a description field (<strong>break lines, for example</strong>), etc.</p>
<p>Given the following list:</p>
<pre><code>raw/ingest_date=20240918/eventos/
raw/ingest_date=20240918/llamadas/
raw/ingest_date=20240918/campanhas/
raw/ingest_date=20240918/miembros/
raw/ingest_date=20240918/objetivos/
</code></pre>
<p>I build this dataframe using:</p>
<pre><code>data_T =[(folder, folder.split('/')[-2]) for folder in subfolders]
df_result = spark.createDataFrame(data_T , ["s3_prefix", "table_name"])
</code></pre>
<p>output:</p>
<pre><code>s3_prefix | table_name
------------------------------------------------------
raw/ingest_date=20240918/eventos/ | eventos
raw/ingest_date=20240918/llamadas/ | llamadas
raw/ingest_date=20240918/campanhas/ | campanhas
raw/ingest_date=20240918/miembros/ | miembros
</code></pre>
<p>Now I loop over this dataframe, looking into each <code>s3_prefix</code> to get information about the parquet files that are stored inside:</p>
<pre><code>for elem in df_result.collect():
    s3_path = f"s3://{bucket_name}/{elem['s3_prefix']}/*.parquet"
    ing_df = spark.read.parquet(s3_path)
    ing_df = ing_df.select(F.length('ID').alias('length_ID'))
    ing_df.show(1)
</code></pre>
<p>output</p>
<pre><code>eventos
+---------+
|length_ID|
+---------+
| 17|
+---------+
only showing top 1 row
------------------------------
llamadas
+---------+
|length_ID|
+---------+
| 18|
+---------+
...
...
...
</code></pre>
<p>And it is inside this <strong>for</strong> loop that I will set my validations; for each validation there must be a column containing the result per row, like:</p>
<pre><code>s3_prefix | table_name | id_length | strange_character | id_strange character |
-------------------------------------|--------------|-----------|---------------------|-----------------------------|
raw/ingest_date=20240918/eventos/ | eventos | 17 | YES | [idxxxxxxx3,idxxxxx23] |
raw/ingest_date=20240918/llamadas/ | llamadas | 18 | NO | |
raw/ingest_date=20240918/campanhas/ | campanhas | 20 | NO | |
raw/ingest_date=20240918/miembros/ | miembros | 30 | YES | [idxxxsssxs10,idxsdas2200] |
</code></pre>
<p>Can somebody help me create this new dataframe, and show how I could use a regex to check whether there is a break line inside the description field?</p>
<p>Thank you so much.</p>
<p>Regards</p>
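<p>Not a full answer, but the break-line check itself reduces to a small regex that in PySpark would typically be expressed as <code>F.col('description').rlike(r'[\r\n]')</code>. A plain-Python sketch of the same test (the field values are made up):</p>

```python
import re

def has_break_line(value: str) -> bool:
    # a carriage return or newline anywhere in the field counts as strange
    return re.search(r"[\r\n]", value) is not None

print(has_break_line("normal description"))   # False
print(has_break_line("broken\ndescription"))  # True
```

<p>Collecting one tuple of validation results per table inside the loop, then calling <code>spark.createDataFrame</code> once on the full list, is generally simpler than mutating the dataframe per iteration.</p>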
|
<python><dataframe><apache-spark><pyspark><aws-glue>
|
2024-09-19 08:07:15
| 0
| 545
|
Julio
|
79,000,901
| 5,437,264
|
AVIF for Django-imagekit?
|
<p>Let's say this is my code, powered by django-imagekit.</p>
<pre><code>
from django.db import models
from imagekit.models import ImageSpecField
from imagekit.processors import ResizeToFill
class Profile(models.Model):
    avatar = models.ImageField(upload_to='avatars')
    avatar_thumbnail = ImageSpecField(source='avatar',
                                      processors=[ResizeToFill(100, 50)],
                                      format='JPEG',
                                      options={'quality': 60})
</code></pre>
<p>How can this be modified to support AVIF as the target image format? I know JPEG and even WebP are supported, but I don't think AVIF is. So is there any way this can be modified to accomplish it?</p>
|
<python><django><django-imagekit><avif>
|
2024-09-19 04:44:55
| 1
| 368
|
ONMNZ
|
79,000,754
| 5,049,813
|
How to type hint a dynamically-created dataclass
|
<p>I hate writing things twice, so I came up with a decent way to not have to write things twice. However, this seems to break my type-hinting:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from dataclasses import make_dataclass, field, dataclass
class DatasetNames(Enum):
    test1 = "test1_string"
    test2 = "test2_string"
    test3 = "test3_string"

def get_path(s: str) -> str:
    return s + "_path"

# the normal way to do this, but I have to type every new dataset name twice
# and there's a lot of duplicate code
@dataclass(frozen=True)
class StaticDatasetPaths:
    test1 = get_path("test1_string")
    test2 = get_path("test2_string")
    test3 = get_path("test3_string")

# mypy recognizes that `StaticDatasetPaths` is a class
# mypy recognizes that `StaticDatasetPaths.test2` is a string
print(StaticDatasetPaths.test2)  # 'test2_string_path'

# this is my way of doing it, without having to type every new dataset name
# twice and no duplicate code
DynamicDatasetPaths = make_dataclass(
    'DynamicDatasetPaths',
    [
        (
            name.name,
            str,
            field(default=get_path(name.value))
        )
        for name in DatasetNames
    ],
    frozen=True
)

# mypy thinks `DynamicDatasetPaths` is a `variable` of type `type`
# mypy thinks that `DynamicDatasetPaths.test2` is a `function` of type `Unknown`
print(DynamicDatasetPaths.test2)  # 'test2_string_path'
</code></pre>
<p><strong>How can I let mypy know that DynamicDatasetPaths is a frozen dataclass whose attributes are strings?</strong></p>
<p>Normally when I run into cases like this, I'm able to just use a <code>cast</code> and tell mypy what the right type is, but I don't know the correct type for "frozen dataclass whose attributes are strings".</p>
<p>(Also, if there's a better way in general to not have the duplicate code, I'd be happy to hear about that as well.)</p>
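<p>On the last point: one duplication-free pattern that mypy can follow completely (a sketch of an alternative, not a fix for <code>make_dataclass</code> itself, whose return type is just <code>type</code> as far as mypy is concerned) is to drop the generated class and derive the paths from the Enum at call time:</p>

```python
from enum import Enum

class DatasetNames(Enum):
    test1 = "test1_string"
    test2 = "test2_string"
    test3 = "test3_string"

def get_path(s: str) -> str:
    return s + "_path"

def dataset_path(name: DatasetNames) -> str:
    # statically typed end to end: mypy knows both argument and return types
    return get_path(name.value)

print(dataset_path(DatasetNames.test2))  # test2_string_path
```

<p>Each dataset name is then written exactly once, and there is no dynamically created class to explain to the type checker; the other common route is a <code>TYPE_CHECKING</code>-only mirror class or a <code>.pyi</code> stub, but both reintroduce the duplication.</p>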
|
<python><python-typing><python-dataclasses>
|
2024-09-19 03:25:17
| 1
| 5,220
|
Pro Q
|
79,000,554
| 6,449,740
|
How to convert a list into multiple columns and a dataframe?
|
<p>I have a challenge today: given a list of S3 paths, split each one and get a dataframe with one column containing the path and a new column containing just the name of the folder.</p>
<p>My list has the following content:</p>
<pre><code>raw/ingest_date=20240918/eventos/
raw/ingest_date=20240918/llamadas/
raw/ingest_date=20240918/campanhas/
raw/ingest_date=20240918/miembros/
raw/ingest_date=20240918/objetivos/
</code></pre>
<p>I tried this code:</p>
<pre><code>new_dict = []
for folder in subfolders:
    new_dict.append(folder)
    name = folder.split("/", -1)
    new_dict.append(name[2])
    #print(name)

print(type(new_dict))
for elem in new_dict:
    print(elem)

df = spark.createDataFrame(new_dict, ["s3_prefix", "table_name"])
df.show()
</code></pre>
<p>but the result is a list like:</p>
<pre><code>raw/ingest_date=20240918/eventos/
eventos
raw/ingest_date=20240918/llamadas/
llamadas
raw/ingest_date=20240918/campanhas/
campanhas
...
...
</code></pre>
<p>but when I try to print my DataFrame I see this:</p>
<p><strong>TypeError: Can not infer schema for type: <class 'str'></strong></p>
<p>the idea is have a dataframe like :</p>
<pre><code>s3_prefix | table_name
------------------------------------------------------
raw/ingest_date=20240918/eventos/ | eventos
raw/ingest_date=20240918/llamadas/ | llamadas
raw/ingest_date=20240918/campanhas/ | campanhas
raw/ingest_date=20240918/miembros/ | miembros
</code></pre>
<p>Can somebody give me a hand to resolve this?</p>
<p>Regards</p>
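<p>To make the structure I'm after concrete, here is the row-building part sketched without Spark (a list of tuples instead of alternating flat strings; only the sample paths are used):</p>

```python
# each element passed to createDataFrame must be a tuple/Row, not a bare string
subfolders = [
    "raw/ingest_date=20240918/eventos/",
    "raw/ingest_date=20240918/llamadas/",
]

# (full path, third path segment) per folder
rows = [(folder, folder.split("/")[2]) for folder in subfolders]
print(rows)
# afterwards: df = spark.createDataFrame(rows, ["s3_prefix", "table_name"])
```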
|
<python><dataframe><apache-spark><pyspark><aws-glue>
|
2024-09-19 00:59:35
| 1
| 545
|
Julio
|
79,000,230
| 6,357,916
|
Unable to return a boolean variable from PyTorch Dataset's __getitem__
|
<p>I have a pytorch <code>Dataset</code> subclass and I create a pytorch <code>DataLoader</code> out of it. It works when I return two tensors from DataSet's <code>__getitem__()</code> method. I tried to create minimal (but not working, more on this later) code as below:</p>
<pre><code>import torch
from torch.utils.data import Dataset
import random
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class DummyDataset(Dataset):
def __init__(self, num_samples=3908, window=10): # same default values as in the original code
self.window = window
# Create dummy data
self.x = torch.randn(num_samples, 10, dtype=torch.float32, device='cpu')
self.y = torch.randn(num_samples, 3, dtype=torch.float32, device='cpu')
self.t = {i: random.choice([True, False]) for i in range(num_samples)}
def __len__(self):
return len(self.x) - self.window + 1
def __getitem__(self, i):
return self.x[i: i + self.window], self.y[i + self.window - 1] #, self.t[i]
ds = DummyDataset()
dl = torch.utils.data.DataLoader(ds, batch_size=10, shuffle=False, generator=torch.Generator(device='cuda'), num_workers=4, prefetch_factor=16)
for data in dl:
x = data[0]
y = data[1]
# t = data[2]
print(f"x: {x.shape}, y: {y.shape}") # , t: {t}
break
</code></pre>
<p>Above code gives following error:</p>
<pre><code>RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'
</code></pre>
<p>on line <code>for data in dl:</code>.</p>
<p>But my original code is exactly like the above: the dataset contains tensors created on <code>cpu</code> and the dataloader's generator's device is set to <code>cuda</code>, and it works (I mean the minimal code above does not work, but the same lines in my original code do indeed work!).</p>
<p>When I try to return a boolean value from it by un-commenting <code>, self.t[i]</code> in the <code>__getitem__()</code> method, it gives me the following error:</p>
<pre><code>Traceback (most recent call last):
File "/my_project/src/train.py", line 66, in <module>
trainer.train_validate()
File "/my_project/src/trainer_cpu.py", line 146, in train_validate
self.train()
File "/my_project/src/trainer_cpu.py", line 296, in train
for train_data in tqdm(self.train_dataloader, desc=">> train", mininterval=5):
File "/usr/local/lib/python3.9/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1344, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1370, in _process_data
data.reraise()
File "/usr/local/lib/python3.9/site-packages/torch/_utils.py", line 706, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 317, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 174, in collate
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed] # Backwards compatibility.
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 174, in <listcomp>
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed] # Backwards compatibility.
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 146, in collate
return collate_fn_map[collate_type](batch, collate_fn_map=collate_fn_map)
File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/collate.py", line 235, in collate_int_fn
return torch.tensor(batch)
File "/usr/local/lib/python3.9/site-packages/torch/utils/_device.py", line 79, in __torch_function__
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/cuda/__init__.py", line 300, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
</code></pre>
<p>Why is that? Why does it not allow me to return an extra boolean value from <code>__getitem__</code>?</p>
<p><strong>PS:</strong></p>
<p>Above is main question. However, I noticed some weird observations: above code (with or without <code>, self.t[i]</code> commented) starts working if I replace <code>DalaLoader</code>'s generator's device from <code>cuda</code> to <code>cpu</code> ! That is, if I replace <code>generator=torch.Generator(device='cuda')</code> with <code>generator=torch.Generator(device='cpu')</code>, it outputs:</p>
<pre><code>x: torch.Size([10, 10, 10]), y: torch.Size([10, 3])
</code></pre>
<p>And if I do the same in my original code, it gives me following error:</p>
<pre><code>RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
</code></pre>
<p>on line <code>for data in dl:</code>.</p>
<p><strong>Update</strong></p>
<p>It started working as soon as I changed the type of <code>self.t</code> from a Python <code>dict</code> to a torch tensor of dtype bool and moved it to the CPU:</p>
<pre><code>self.t = torch.tensor([random.choice([True, False]) for _ in range(num_samples)], dtype=torch.bool).to('cpu')
</code></pre>
<p>Please explain why.</p>
|
<python><python-3.x><machine-learning><pytorch>
|
2024-09-18 21:33:47
| 1
| 3,029
|
MsA
|
78,999,991
| 1,909,206
|
Unable to publish python packages to Gitlab registry (due to 401 Unauthorized) using Poetry from within a CICD Job
|
<p>I'm using the following three commands in my publish job to (attempt to) push Python packages to the GitLab registry. However, I get 401s and am unable to do so.</p>
<p>I'm using Poetry 1.8.3 in a python:3.12 runner image. I'm using Gitlab SAAS (not self-hosted)</p>
<pre><code> - poetry config repositories.gitlab "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi"
- poetry config http-basic.gitlab gitlab-ci-token $CI_JOB_TOKEN
- poetry publish -r gitlab -vvv # dist/ exists locally with built artifacts for publishing
</code></pre>
<p>This publish URL came from the Gitlab docs, as did username and password variable. Googling also says these are correct.</p>
<p>Does anybody know what's up / what I need to change to enable publishing packages?</p>
<p>I can't seem to find vaguely referenced settings for enabling the group tokens (they're disabled and I can't find any way to create one), nor can I find any settings around permission scopes for CI_JOB_TOKENs</p>
|
<python><gitlab-ci><python-poetry>
|
2024-09-18 19:57:10
| 1
| 672
|
geudrik
|
78,999,903
| 1,601,580
|
How do I print the arguments passed to a function when using Python Fire?
|
<p>I'm using the fire library in Python to create a simple command-line interface (CLI). My setup is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import fire
def main(a):
print('hi')
if __name__ == '__main__':
fire.Fire(main)
</code></pre>
<p>When I run the script like this:</p>
<pre class="lang-bash prettyprint-override"><code>$ python my_script.py 123
</code></pre>
<p>It prints:</p>
<pre><code>hi
</code></pre>
<p>What I'd like to do is print the arguments passed to main (in this case 123) before it prints "hi". Is there a way to intercept and print the arguments in fire.Fire(main)?</p>
<p>I've tried modifying the function signature and adding *args, but that changes how the function behaves with Fire.</p>
<p>How can I print the arguments passed to main using Python Fire without changing the logic inside the function?</p>
<hr />
<p>Attempt:</p>
<pre class="lang-py prettyprint-override"><code>import fire
class MyWrapper:
def __init__(self, target):
self.target = target
def __getattr__(self, name):
def method(*args, **kwargs):
# Print the arguments passed to the Fire CLI
print(f"Method: {name}, Args: {args}, Kwargs: {kwargs}")
# Call the original method
return getattr(self.target, name)(*args, **kwargs)
return method
class MyCommands:
def greet(self, name="World"):
return f"Hello, {name}!"
if __name__ == '__main__':
    fire.Fire(MyWrapper(MyCommands()))  # wrap an instance so methods are bound
</code></pre>
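<p>Stripped of Fire entirely, the behaviour I'm after is this plain decorator (I'm hoping something equivalent can be composed with Fire, e.g. <code>fire.Fire(print_args(main))</code>, but that part is untested):</p>

```python
import functools


def print_args(func):
    # plain decorator, nothing Fire-specific: log the call, then delegate
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Args: {args}, Kwargs: {kwargs}")
        return func(*args, **kwargs)
    return wrapper


@print_args
def add(x, y):
    # stand-in for `main`; returns a value so the pass-through is visible
    return x + y


result = add(2, 3)  # prints "Args: (2, 3), Kwargs: {}"
print(result)  # 5
```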
|
<python><python-fire>
|
2024-09-18 19:26:00
| 2
| 6,126
|
Charlie Parker
|
78,999,885
| 22,407,544
|
Django `collectstatic` returns `[Errno 13] Permission denied: '/code/static/admin/js/vendor/select2/i18n/pl.6031b4f16452.js.gz'`
|
<p>I run my Django app in Docker. I recently tried running <code>collectstatic</code> and instead got this error. I'm not sure what it means or what to do:</p>
<pre><code>>docker-compose exec web python manage.py collectstatic
</code></pre>
<pre><code>Traceback (most recent call last):
File "/code/manage.py", line 22, in <module>
main()
File "/code/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 412, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 458, in execute
output = self.handle(*args, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 209, in handle
collected = self.collect()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 148, in collect
for original_path, processed_path, processed in processor:
File "/usr/local/lib/python3.11/site-packages/whitenoise/storage.py", line 162, in post_process_with_compression
for name, compressed_name in self.compress_files(files_to_compress):
File "/usr/local/lib/python3.11/site-packages/whitenoise/storage.py", line 199, in compress_files
for compressed_path in compressor.compress(path):
File "/usr/local/lib/python3.11/site-packages/whitenoise/compress.py", line 84, in compress
yield self.write_data(path, compressed, ".gz", stat_result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/whitenoise/compress.py", line 120, in write_data
with open(filename, "wb") as f:
^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/code/static/admin/js/vendor/select2/i18n/pl.6031b4f16452.js.gz'
</code></pre>
<p>I read somewhere that it may have to do with root privileges but I'm not sure which root privileges or how to go about remedying it.</p>
|
<python><django>
|
2024-09-18 19:19:41
| 1
| 359
|
tthheemmaannii
|
78,999,867
| 14,954,262
|
Django - change form prefix separator
|
<p>I'm using form <code>prefix</code> to render the same django form twice in the same template and avoid identical fields id's.</p>
<p>When you do so, the separator between the prefix and the field name is '-'; I would like it to be '_' instead.</p>
<p>Is it possible?</p>
<p>Thanks</p>
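<p>For what it's worth, the behaviour I'm after, sketched without Django (the method name <code>add_prefix</code> comes from my reading of the <code>BaseForm</code> source; whether overriding it is the sanctioned route is exactly my question):</p>

```python
class PrefixDemo:
    # mimics how a form joins its prefix and field name, with '_' instead of '-'
    def __init__(self, prefix=None):
        self.prefix = prefix

    def add_prefix(self, field_name):
        return f"{self.prefix}_{field_name}" if self.prefix else field_name


print(PrefixDemo("form1").add_prefix("name"))  # form1_name
```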
|
<python><html><django><prefix><separator>
|
2024-09-18 19:13:47
| 1
| 399
|
Nico44044
|
78,999,848
| 6,555,196
|
Fixing yaml indentation with python
|
<p>I'm manipulating existing yaml file (CloudFormation template) and copying needed resources into a new file.</p>
<p>The thing is, I'm getting part of the resource in a new line - not according to the correct indentation.<br />
It seems the 3rd line is not adhering to the indentation configuration.</p>
<p>I've checked the Python modules yaml-indent, ruamel.yaml, yamlfix and pyyaml, and of course our AI friends, but the issue persists.</p>
<p>How can I write it in a correct manner ?</p>
<p>Code snippet:</p>
<pre><code> with open(entrypath, 'r') as magic:
yaml_object = yaml.load(magic, Loader=yaml.SafeLoader)
# print(yaml_object['Resources'][rscItem])
with open('{}_Restore.yaml'.format(resourcename), 'a+') as s:
s.write(' {}:'.format(rscItem))
s.write('\n')
s.write(' {}'.format(yaml.safe_dump(yaml_object['Resources'][rscItem], default_flow_style=False, indent=4)))
</code></pre>
<p>(I am aware of the spaces in <code>s.write(' {}'</code>, that part is working as it should (-:)</p>
<p>YAML file, how it currently looks:</p>
<pre><code>Resources:
defaulteventbus:
DeletionPolicy: Retain
Properties:
Name: default
Tags: []
Type: AWS::Events::EventBus
UpdateReplacePolicy: Retain
LambdaEventSourceMapping:
DeletionPolicy: Retain
Properties:
BatchSize: 1
BisectBatchOnFunctionError: true
DestinationConfig:
OnFailure:
Destination:
Ref: SNSTopicName
Enabled: true
</code></pre>
<p>How it <em>should</em> look:</p>
<pre><code>Resources:
defaulteventbus:
DeletionPolicy: Retain
Properties:
Name: default
Tags: []
Type: AWS::Events::EventBus
UpdateReplacePolicy: Retain
LambdaEventSourceMapping:
DeletionPolicy: Retain
Properties:
BatchSize: 1
BisectBatchOnFunctionError: true
DestinationConfig:
OnFailure:
Destination:
Ref: SNSTopicName
Enabled: true
</code></pre>
<p><strong>Updated code that @Michael wrote:</strong></p>
<pre><code> with open(entrypath, 'r') as magic:
yaml_object = yaml.load(magic, Loader=yaml.SafeLoader)
# print(yaml_object['Resources'][rscItem])
with open('{}_Restore.yaml'.format(resourcename), 'a+') as s:
data = {rscItem: yaml_object['Resources'][rscItem]}
text = yaml.safe_dump(data, default_flow_style=False, indent=2)
s.write(textwrap.indent(text, ' ' * 2))
</code></pre>
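<p>The piece that fixed it, in isolation (no YAML involved, just <code>textwrap.indent</code> prefixing every line so nested keys keep their relative indentation):</p>

```python
import textwrap

# a small multi-line block standing in for yaml.safe_dump's output
text = "DeletionPolicy: Retain\nProperties:\n  Name: default\n"
indented = textwrap.indent(text, " " * 4)
print(indented)
```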
|
<python><amazon-web-services><aws-cloudformation>
|
2024-09-18 19:06:49
| 1
| 305
|
soBusted
|
78,999,807
| 4,996,797
|
How to activate plt.show() when creating axes without pyplot in matplotlib?
|
<p>I have a script like this which when executed opens up a new window with the figure.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig = plt.Figure()
# ax = fig.add_axes(rect=(0.1, 0.0, 0.9, 1.0)) # does not open a new window
ax = fig.add_subplot() # works
plt.show()
</code></pre>
<p>The problem is that when I use the <code>add_axes</code> function, the <code>plt.show()</code> call seems to do nothing at all.</p>
<p>The <code>fig.savefig</code> still seems to work, but I would like to see the figure by <code>plt.show</code> as well. Any ideas? Thanks</p>
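<p>For reference, the variant I'm comparing against uses the pyplot factory, so the figure is registered with the <code>plt</code> state machine (my guess at why <code>plt.show()</code> ignores a bare <code>plt.Figure()</code>). The <code>Agg</code> backend below is only so this sketch runs headless:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, only for this sketch
import matplotlib.pyplot as plt

fig = plt.figure()  # lowercase: created *through* pyplot, so plt.show() knows it
ax = fig.add_axes((0.1, 0.0, 0.9, 1.0))
print(plt.get_fignums())  # [1]
```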
|
<python><matplotlib><plot><visualization><figure>
|
2024-09-18 18:54:21
| 1
| 408
|
Paweล Wรณjcik
|
78,999,789
| 2,215,904
|
Tkinter grid manager height/width inconsistent
|
<p>I have an issue with tkinter grid manager consistency.
I have specified columnconfigure weights so the left part gets 6, then one column gets 1 as a delimiter, and the columns on the right side (keyboard) get 3.
As I understand it, the left side should be twice the size of the right side, but on screen it isn't. The height has a similar problem: all rows are set with rowconfigure weight=1, but on screen they are not the same height.
What am I doing wrong?
Here is a minimal part of my program to show the problem:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf8 -*-
from tkinter import *
#***************************************************************************************************
root = Tk()
root.geometry("1024x600")
tab3=Frame(root,relief=SUNKEN, bd=5 )
tab3.pack(fill=BOTH, expand=True)
#specify column widths
Colwidths=[6, 6, 6, 1, 3, 3, 3]
gui1=[
[["Double\nline txt",1,1], ["Double\nline txt",1,1], ["Double\nline txt",1,1], ["",1,1], ["1",1,1], ["2",1,1], ["3",1,1]],
[["145.0",1,1], ["133.7",1,1], ["133.1",1,1], ["",1,1], ["4",1,1], ["5",1,1], ["6",1,1]],
[["Double\nline txt",1,1], ["TwoCel\ndblLine",1,2], ["TwoCel\ndblLine",1,2], ["",1,1], ["7",1,1], ["8",1,1], ["9",1,1]],
[["20.5",1,1], ["",1,1], ["",1,1], ["",1,1], ["X",1,1], ["0",1,1], ["OK๏ธ",1,1]]
]
for i,v in enumerate(Colwidths): tab3.columnconfigure(i, weight=v)
for y,line in enumerate(gui1):
tab3.rowconfigure(y, weight=1) #all rows are same height
for x, (txt,cs,rs) in enumerate(line):
if txt!="":
itm=Label(tab3,text=txt, bd=1, relief=SUNKEN, font=("Arial", 30, "bold"),bg='khaki')
itm.grid(row=y,column=x, columnspan=cs,rowspan=rs, sticky=NSEW)
root.mainloop()
</code></pre>
<p>and the bad output:
<a href="https://i.sstatic.net/mA7ym8Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mA7ym8Ds.png" alt="enter image description here" /></a></p>
<p>I was thinking the font is too big and can't fit into the cell, so I removed sticky, tried again and got this:
<a href="https://i.sstatic.net/pBihWEif.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBihWEif.png" alt="enter image description here" /></a></p>
<p>so there is visible space around every item; size shouldn't be the problem.</p>
|
<python><tkinter><grid><row><configure>
|
2024-09-18 18:49:21
| 1
| 460
|
eSlavko
|
78,999,687
| 4,436,517
|
Polars make all groups the same size
|
<h1>Question</h1>
<p>I'm trying to make all groups for a given data frame have the same size. In <em>Starting point</em> below, I show an example of a data frame that I whish to transform. In <em>Goal</em> I try to demonstrate what I'm trying to achieve. I want to group by the column <code>group</code>, make all groups have a size of <code>4</code>, and fill 'missing' values with <code>null</code> - I hope it's clear.</p>
<p>I have tried several approaches but have not been able to figure this one out.</p>
<p><strong>Starting point</strong></p>
<pre class="lang-py prettyprint-override"><code>dfa = pl.DataFrame(data={'group': ['a', 'a', 'a', 'b', 'b', 'c'],
'value': ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']})
โโโโโโโโโฌโโโโโโโโ
โ group โ value โ
โ --- โ --- โ
โ str โ str โ
โโโโโโโโโชโโโโโโโโก
โ a โ a1 โ
โ a โ a2 โ
โ a โ a3 โ
โ b โ b1 โ
โ b โ b2 โ
โ c โ c1 โ
โโโโโโโโโดโโโโโโโโ
</code></pre>
<p><strong>Goal</strong></p>
<pre class="lang-py prettyprint-override"><code>>>> make_groups_uniform(dfa, group_by='group', group_size=4)
โโโโโโโโโฌโโโโโโโโ
โ group โ value โ
โ --- โ --- โ
โ str โ str โ
โโโโโโโโโชโโโโโโโโก
โ a โ a1 โ
โ a โ a2 โ
โ a โ a3 โ
โ a โ null โ
โ b โ b1 โ
โ b โ b2 โ
โ b โ null โ
โ b โ null โ
โ c โ c1 โ
โ c โ null โ
โ c โ null โ
โ c โ null โ
โโโโโโโโโดโโโโโโโโ
</code></pre>
<p><strong>Package version</strong></p>
<p><code>polars: 1.1.0</code></p>
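<p>To pin down the semantics, this is the padding logic in plain Python (no polars; it assumes groups are contiguous, as in my frame):</p>

```python
from itertools import groupby

rows = [("a", "a1"), ("a", "a2"), ("a", "a3"),
        ("b", "b1"), ("b", "b2"), ("c", "c1")]


def make_groups_uniform(rows, group_size):
    out = []
    for key, grp in groupby(rows, key=lambda r: r[0]):
        values = [v for _, v in grp]
        # pad each group's values up to group_size with None
        values += [None] * (group_size - len(values))
        out.extend((key, v) for v in values)
    return out


uniform = make_groups_uniform(rows, 4)
print(uniform)
```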
|
<python><python-polars>
|
2024-09-18 18:22:22
| 4
| 1,159
|
rindis
|
78,999,623
| 825,227
|
Return the index of a date match
|
<p>I'm drawing a blank even though it seems straightforward.</p>
<p>I have a list of dates in an index:</p>
<pre><code>book_df.index
Out[44]:
DatetimeIndex(['2023-08-13 15:10:47.284558', '2023-08-13 15:10:48.005322',
'2023-08-13 15:10:48.005953', '2023-08-13 15:10:49.022438',
'2023-08-13 15:10:53.218365', '2023-08-13 15:10:54.246143',
'2023-08-13 15:10:54.246874', '2023-08-13 15:10:54.247330',
'2023-08-13 15:10:55.270271', '2023-08-13 15:10:55.270948',
...
'2023-08-14 14:00:00.208915', '2023-08-14 14:00:00.209017',
'2023-08-14 14:00:00.209119', '2023-08-14 14:00:00.209221',
'2023-08-14 14:00:00.209323', '2023-08-14 14:00:00.209425',
'2023-08-14 14:00:00.209527', '2023-08-14 14:00:00.209629',
'2023-08-14 14:00:00.209730', '2023-08-14 14:00:00.209832'],
dtype='datetime64[ns]', length=1230021, freq=None)
</code></pre>
<p>How do I return the index value for a matching date? This returns a boolean mask, but I would like to get back the index position, <code>0</code> in this case.</p>
<pre><code>book_df.index == rng[0]
Out[60]: array([ True, False, False, ..., False, False, False])
</code></pre>
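<p>Reduced to plain Python, what I want from the mask is the first matching position (I suspect pandas has a direct spelling of this, e.g. <code>Index.get_loc</code>, hence the question):</p>

```python
# "boolean mask -> first matching position", stripped of pandas
dates = ["2023-08-13", "2023-08-14", "2023-08-15"]
mask = [d == "2023-08-14" for d in dates]
position = mask.index(True)  # index of the first True in the mask
print(position)  # 1
```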
|
<python><pandas>
|
2024-09-18 18:06:34
| 2
| 1,702
|
Chris
|
78,999,491
| 1,471,980
|
how do you summarize percentage usage data in pandas data frame groupby a column
|
<p>I need to count the occurrence of percentage usage by Server name. I have this data frame:</p>
<pre><code>df
Server Bandwidth 1-Jun 6-June 12-Jun 1-Jul
ServerA 10000 5000 6000 7500 8000
ServerB 100000 60000 80000 75000 80000
ServerC 20000 5000 6000 7500 8000
ServerD 30000 5000 6000 7500 8000
ServerF 10000 5000 6000 7500 8000
ServerX 5000 5000 6000 7500 8000
</code></pre>
<p>First, I need to calculate the percent usage per month given the Bandwidth:</p>
<pre><code>cols = df.columns.difference(['Server', 'Bandwidth'], sort=False)
out = df[cols].div(df['Bandwidth'], axis=0).combine_first(df)[list(df)]
</code></pre>
<p>I get this output:</p>
<pre><code>out
Server Bandwidth 1-Jun 6-June 12-Jun 1-Jul
ServerA 10000 0.50 0.60 0.75 0.80
ServerB 100000 0.60 0.80 0.75 0.80
ServerC 20000 0.25 0.30 0.38 0.40
</code></pre>
<p>etc</p>
<p>Next, I need to place the percent usage data into bins: 70%-80%, 80%-90%, 90%-100%, 100%+.</p>
<p>resulting data frame needs to be like this:</p>
<pre><code>result_df
Server Bandwidth 70%-80% 80%-90% 90%-100% 100%+
ServerA 10000 1 1 0 0
ServerB 100000 1 2 0 0
ServerC 20000 0 0 0 0
</code></pre>
<p>etc</p>
<p>How can I count the occurrences per percentage bin, grouped by Server, in pandas?</p>
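<p>To be explicit about the counting I'm after, here it is in plain Python (bin edges as above; I assume pandas has an idiomatic <code>cut</code>-based route, which is what I'm asking for):</p>

```python
# sample percentages per server, as computed in the first step
usage = {"ServerA": [0.50, 0.60, 0.75, 0.80],
         "ServerB": [0.60, 0.80, 0.75, 0.80]}

# half-open bins [lo, hi); the last bin is open-ended
bins = [(0.70, 0.80, "70%-80%"), (0.80, 0.90, "80%-90%"),
        (0.90, 1.00, "90%-100%"), (1.00, float("inf"), "100%+")]

counts = {server: {label: sum(lo <= v < hi for v in vals)
                   for lo, hi, label in bins}
          for server, vals in usage.items()}

print(counts["ServerA"])  # {'70%-80%': 1, '80%-90%': 1, '90%-100%': 0, '100%+': 0}
print(counts["ServerB"])  # {'70%-80%': 1, '80%-90%': 2, '90%-100%': 0, '100%+': 0}
```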
|
<python><pandas>
|
2024-09-18 17:27:18
| 2
| 10,714
|
user1471980
|
78,999,483
| 3,284,297
|
Extracting data of this xml to Python dataframe
|
<p>I cannot get the extraction of this XML data into a DataFrame to work properly.
This is my XML sample; in reality I have multiple "Entity" blocks, each representing one row of the desired result. To save space I only pasted one instance of "Entity" here:</p>
<pre><code><EntityCollection>
<Entities>
<Entity>
<Attributes>
<KeyValuePairOfstringanyType>
<key>client_number</key>
<value type="b:string">ABC123345</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>my_data.client_type</key>
<value type="AliasedValue">
<AttributeLogicalName>my_type</AttributeLogicalName>
<EntityLogicalName>my_data1</EntityLogicalName>
<NeedFormatting>true</NeedFormatting>
<ReturnType>123</ReturnType>
<Value type="OptionSetValue">
<Value>345</Value>
</Value>
</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>my_data.status</key>
<value type="AliasedValue">
<AttributeLogicalName>status</AttributeLogicalName>
<EntityLogicalName>my_data1</EntityLogicalName>
<NeedFormatting>true</NeedFormatting>
<ReturnType>123</ReturnType>
<Value type="OptionSetValue">
<Value>67</Value>
</Value>
</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>my_data.date</key>
<value type="AliasedValue">
<AttributeLogicalName>date</AttributeLogicalName>
<EntityLogicalName>my_data1</EntityLogicalName>
<NeedFormatting>true</NeedFormatting>
<ReturnType>89</ReturnType>
<Value type="b:string">2024-01-01</Value>
</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>my_data.country</key>
<value type="AliasedValue">
<AttributeLogicalName>country</AttributeLogicalName>
<EntityLogicalName>my_data1</EntityLogicalName>
<NeedFormatting>true</NeedFormatting>
<ReturnType>123</ReturnType>
<Value type="OptionSetValue">
<Value>456</Value>
</Value>
</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>client_code</key>
<value type="b:guid">some_code456</value>
</KeyValuePairOfstringanyType>
<KeyValuePairOfstringanyType>
<key>my_data.my_data1id</key>
<value type="AliasedValue">
<AttributeLogicalName>my_data1id</AttributeLogicalName>
<EntityLogicalName>my_data1</EntityLogicalName>
<NeedFormatting>true</NeedFormatting>
<ReturnType>13</ReturnType>
<Value type="b:guid">some_code123</Value>
</value>
</KeyValuePairOfstringanyType>
</Attributes>
<EntityState nil="true"/>
<FormattedValues>
<KeyValuePairOfstringstring>
<key>my_data.client_type</key>
<value>client123</value>
</KeyValuePairOfstringstring>
<KeyValuePairOfstringstring>
<key>my_data.status</key>
<value>OK</value>
</KeyValuePairOfstringstring>
<KeyValuePairOfstringstring>
<key>my_data.country</key>
<value>some_country</value>
</KeyValuePairOfstringstring>
</FormattedValues>
<Id>some_code456</Id>
<KeyAttributes/>
<LogicalName>some-id</LogicalName>
<RelatedEntities/>
<RowVersion>ID123</RowVersion>
</Entity>
</Entities>
</EntityCollection>
</code></pre>
<p>The elements in 'key' (or in 'AttributeLogicalName') should be the headers. The "value"/"Value" should be the actual data. This is the desired outcome (first row).<a href="https://i.sstatic.net/QscEcZ5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QscEcZ5n.png" alt="enter image description here" /></a></p>
<p>As the data is very nested, I tried using xml.etree but did not get very far, as I'm not sure how to proceed from here:</p>
<pre><code>for _, elem in ET.iterparse(my_xml):
if len(elem) == 0:
print(f'{elem.tag} {elem.attrib} text={elem.text}')
else:
print(f'{elem.tag} {elem.attrib}')
</code></pre>
<p>How can I get the desired result?</p>
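<p>Going a step further with the stdlib ElementTree, the farthest I got is this reduced sketch (the <code>leaf_text</code> helper is my own guess at unwrapping the nested <code>Value</code> elements):</p>

```python
import xml.etree.ElementTree as ET

# reduced sample: one simple value and one AliasedValue-style nested value
xml = """<Entity><Attributes>
  <KeyValuePairOfstringanyType>
    <key>client_number</key>
    <value type="b:string">ABC123345</value>
  </KeyValuePairOfstringanyType>
  <KeyValuePairOfstringanyType>
    <key>my_data.status</key>
    <value type="AliasedValue">
      <ReturnType>123</ReturnType>
      <Value type="OptionSetValue"><Value>67</Value></Value>
    </value>
  </KeyValuePairOfstringanyType>
</Attributes></Entity>"""


def leaf_text(elem):
    # descend into the last child until a leaf element is reached;
    # for AliasedValue this walks <value> -> <Value> -> innermost <Value>
    while len(elem):
        elem = elem[-1]
    return elem.text


root = ET.fromstring(xml)
row = {pair.findtext("key"): leaf_text(pair.find("value"))
       for pair in root.iter("KeyValuePairOfstringanyType")}
print(row)  # {'client_number': 'ABC123345', 'my_data.status': '67'}
```

<p>From there I imagine each per-Entity dict becomes one DataFrame row, but I'm not sure this holds up for the full file.</p>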
|
<python><pandas><xml>
|
2024-09-18 17:25:48
| 2
| 423
|
Charlotte
|
78,999,363
| 5,086,255
|
django admin Interdependent validation of formsets
|
<p>I have two inlines in my admin model:</p>
<pre><code>class AdminModel(admin.ModelAdmin):
...
inlines = [inline1, inline2]
form = AdminModelForm
model =model
class inline1(admin.TabularInline):
form = inline1form
model = inline1model
class inline2(admin.TabularInline):
form = inline2form
model = inline2model
class inline1form(forms.ModelForm):
class Meta:
model = inline1Edge
fields = ("field1",)
class inline2form(forms.ModelForm):
class Meta:
model = inline2Edge
fields = ("field2",)
class AdminModelForm(forms.ModelForm):
....
admin.site.register(model, AdminModel)
</code></pre>
<p>Now my task is to check across the two inlines: if both fields have a value (one in inline1 and another in inline2), show an error.</p>
<p>I have come up with a solution where, when creating the formsets, I check whether the fields have values and return the result,</p>
<p>i.e. a modified custom method:</p>
<pre><code>class AdminModel(admin.ModelAdmin):
def _create_formsets(self, request, new_object, change):
formsets, inline_instances = super()._create_formsets(request, new_object, change)
if request.method == "POST":
location_formset = None
individual_formset_validated = all([formset.is_valid() for formset in formsets])
if not individual_formset_validated:
return individual_formset_validated
# custom check condition on formsets
if check_fail:
inline1formset.non_form_errors().extend(
["Please select values for only one field, field1 config or field2 config"]
)
return formsets, inline_instances
</code></pre>
<p>This is working fine and adds the error to the inline.</p>
<p>Is there a more Pythonic way to solve this task?</p>
|
<python><django><django-forms><django-admin><formset>
|
2024-09-18 16:51:04
| 1
| 11,372
|
sahasrara62
|
78,999,309
| 3,486,684
|
In Visual Studio Code, how can I rename a variable in Python and also update its occurrences in ReST formatted documentation?
|
<p>Suppose I have:</p>
<pre class="lang-py prettyprint-override"><code>class Hello:
"""An example class."""
world: str
"""This is the only field on :py:class:`Hello`."""
</code></pre>
<p>It is documented using the <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html" rel="nofollow noreferrer">ReST format</a>, meant to be used with <a href="https://www.sphinx-doc.org/en/master/" rel="nofollow noreferrer">Sphinx</a>.</p>
<p>In VS Code, if I refactor <code>Hello</code> to something else, the reference <code>py:class:`Hello</code> will not be updated. Is there anything I can do (either in the settings, or using an extension) so that this update will be performed?</p>
|
<python><visual-studio-code><python-sphinx><restructuredtext>
|
2024-09-18 16:33:48
| 1
| 4,654
|
bzm3r
|
78,999,284
| 8,067,642
|
Pop specific item from Python set and get default if not exists
|
<p>Is there a simple method, maybe a one-liner, to pop an item from Python set, and get default value if it does not exist?</p>
<p>It could be done in four lines of code and two hash table lookups:</p>
<pre class="lang-py prettyprint-override"><code>def pop_default(coll: set, v, default=None):
if v not in coll:
return default
coll.remove(v)
return v
</code></pre>
<p>Can this function be re-written and</p>
<ol>
<li>Be more compact, preferably a one-liner,</li>
<li>Be at least same efficient?</li>
</ol>
<p>Function usage example:</p>
<pre class="lang-py prettyprint-override"><code>s = {'a', 'b', 'c'}
assert pop_default(s, 'a') == 'a'
assert s == {'b', 'c'}
assert pop_default(s, 'x') == None
assert s == {'b', 'c'}
</code></pre>
<h1>Solution</h1>
<p>Two separate improvements were proposed by @jonrsharpe and @TimurShtatland.</p>
<h2>One-liner</h2>
<pre class="lang-py prettyprint-override"><code>def pop_default(s: set, v, default=None):
return s.remove(v) or v if v in s else default
</code></pre>
<p>A clear winner. Same efficiency, one line of code.</p>
<h2>Alternative for some rare cases: exception-based</h2>
<pre class="lang-py prettyprint-override"><code>def pop_default(coll: set, v, default=None):
try:
coll.remove(v)
return v
except KeyError:
return default
</code></pre>
<p>This solution is ~5 times slower in @nocomment's test but is faster if the exception is not raised.</p>
|
<python><set>
|
2024-09-18 16:27:44
| 1
| 338
|
makukha
|
78,999,229
| 1,173,674
|
CMake Python execv-based wrapper fails with `CMake Error: Could not find CMAKE_ROOT !!!`
|
<p>I want to write a Python wrapper for CMake, but even the simplest of wrappings fails:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import os
import sys
os.execvp("cmake", sys.argv) # or 'sys.argv[1:]', both fail
</code></pre>
<p>Running this Python script with e.g. <code>./pycmake .</code> in a directory containing a <code>CMakeLists.txt</code> fails with:</p>
<pre><code>CMake Error: Could not find CMAKE_ROOT !!!
CMake has most likely not been installed correctly.
Modules directory not found in
CMake Error: Error executing cmake::LoadCache(). Aborting.
</code></pre>
<p>Running <code>cmake .</code> on the same directory works without issues.</p>
<p>Am I forgetting something when doing the <code>os.execvp</code> call? Or is CMake using some mechanism that fails when wrapped by Python?</p>
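<p>One thing I noticed while debugging: <code>sys.argv[0]</code> is the wrapper script itself, not cmake. In case that's the mechanism, the rebuilt argv would look like this (the exec line is commented out so the sketch stays inert; the argv[0] theory is my working hypothesis):</p>

```python
import os
import sys


def cmake_argv(argv):
    # hypothesis: cmake locates CMAKE_ROOT relative to argv[0], and forwarding
    # sys.argv unchanged makes argv[0] the wrapper script instead of cmake
    return ["cmake"] + argv[1:]


print(cmake_argv(["./pycmake", "."]))  # ['cmake', '.']
# os.execvp("cmake", cmake_argv(sys.argv))  # would replace the current process
```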
|
<python><cmake>
|
2024-09-18 16:11:19
| 1
| 9,358
|
anol
|
78,999,116
| 23,260,297
|
Create Pivot table and add additional columns from another dataframe
|
<p>Given two identically formatted dataframes:</p>
<p>df1</p>
<pre><code>Counterparty Product Deal Date Value
foo bar Buy 01/01/24 10.00
foo bar Buy 01/01/24 10.00
foo bar Sell 01/01/24 10.00
foo bar Sell 01/01/24 10.00
fizz bar Buy 01/01/24 10.00
fizz bar Buy 01/01/24 10.00
fizz buzz Sell 01/01/24 10.00
fizz buzz Sell 01/01/24 10.00
</code></pre>
<p>df2</p>
<pre><code>Counterparty Product Deal Date Value
foo bar Buy 01/01/24 11.00
foo bar Buy 01/01/24 09.00
foo bar Sell 01/01/24 09.00
foo bar Sell 01/01/24 10.00
fizz bar Buy 01/01/24 12.00
fizz bar Buy 01/01/24 08.00
fizz buzz Sell 01/01/24 09.00
fizz buzz Sell 01/01/24 10.00
</code></pre>
<p>I have done this so far:</p>
<pre><code>out = pd.pivot_table(df1, values = 'Value', index='Counterparty', columns = 'Product', aggfunc='sum').reset_index().rename_axis(None, axis=1)
out = out.fillna(0)
Counterparty bar buzz
0 fizz 20.0 20.0
1 foo 40.0 0.0
</code></pre>
<p>How can I pivot these to create a visual like this:</p>
<pre><code>Counterparty Bar Buzz Total col1 col2
foo 40 0 40 39 1
fizz 20 20 40 39 1
</code></pre>
<p>where <code>col1</code> comes from <code>df2</code> and <code>col2</code> is the difference between <code>Total</code> and <code>col1</code>.</p>
<p>sample:</p>
<pre><code>df1 = pd.DataFrame({
"Counterparty": ["foo", "foo", "foo", "foo", "fizz", "fizz", "fizz", "fizz"],
"Product": ["bar", "bar", "bar", "bar", "bar", "bar", "buzz", "buzz"],
"Deal": ["Buy","Buy", "Sell", "Sell", "Buy", "Buy", "Sell", "Sell"],
"Date": ["01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24"],
"Value": [10, 10, 10, 10, 10, 10, 10, 10]
})
df2 = pd.DataFrame({
"Counterparty": ["foo", "foo", "foo", "foo", "fizz", "fizz", "fizz", "fizz"],
"Product": ["bar", "bar", "bar", "bar", "bar", "bar", "buzz", "buzz"],
"Deal": ["Buy","Buy", "Sell", "Sell", "Buy", "Buy", "Sell", "Sell"],
"Date": ["01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24", "01/01/24"],
"Value": [11, 9, 9, 10, 12, 8, 9, 10]
})
out = pd.pivot_table(df1, values = 'Value', index='Counterparty', columns = 'Product', aggfunc='sum').reset_index().rename_axis(None, axis=1)
out = out.fillna(0)
</code></pre>
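<p>A sketch of one way to get there (column names <code>Total</code>, <code>col1</code>, <code>col2</code> follow the question; the per-counterparty sum of <code>df2</code> is assumed to be what <code>col1</code> means):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Counterparty": ["foo"] * 4 + ["fizz"] * 4,
    "Product": ["bar"] * 6 + ["buzz"] * 2,
    "Value": [10] * 8,
})
df2 = pd.DataFrame({
    "Counterparty": ["foo"] * 4 + ["fizz"] * 4,
    "Product": ["bar"] * 6 + ["buzz"] * 2,
    "Value": [11, 9, 9, 10, 12, 8, 9, 10],
})

# Pivot df1, then add the derived columns.
out = (pd.pivot_table(df1, values="Value", index="Counterparty",
                      columns="Product", aggfunc="sum", fill_value=0)
         .reset_index().rename_axis(None, axis=1))
out["Total"] = out[["bar", "buzz"]].sum(axis=1)
out["col1"] = out["Counterparty"].map(
    df2.groupby("Counterparty")["Value"].sum())
out["col2"] = out["Total"] - out["col1"]
print(out)
```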
|
<python><arrays><pandas><dataframe>
|
2024-09-18 15:43:26
| 2
| 2,185
|
iBeMeltin
|
78,999,011
| 615,525
|
GeoText Package Not Identifying Cities at All in my Array
|
<p>Using Python, I have an Array that Stores All the Words in my string, which I Loop through:</p>
<pre><code> #Loop through Title String Array and have GeoText identify the City
for i in range(len(title_str_array)):
places = GeoText(title_str_array[i])
print(places.cities)
</code></pre>
<p>I am printing out my Title String Array to make sure the city is in there, and it is. (Please disregard the nonsense words; I am using Azure Computer Vision to identify text in an image and it is not 100% accurate, but the city name 'BOSTON' is clearly in my array.)</p>
<pre><code> Title Array: ['A', 'COLOU', 'RE', '"', 'pu', 'BLICATION,', 'BOSTON', 't', 's,', 'MASS.,', 'U.'])
</code></pre>
<p>When I print out <code>places.cities</code>, I just get an empty value: []</p>
<p>I have pip-installed the GeoText package, but it is not identifying 'BOSTON' as a city.</p>
<p>Am I missing something obvious?</p>
<p>Thank you!</p>
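<p>For what it's worth, GeoText matches city names by capitalization ('Boston', not 'BOSTON'), so all-caps OCR output is typically missed. A sketch of a normalization step (the GeoText call at the end is the assumed follow-up, shown commented out):</p>

```python
title_str_array = ['A', 'COLOU', 'RE', '"', 'pu', 'BLICATION,',
                   'BOSTON', 't', 's,', 'MASS.,', 'U.']

# GeoText relies on capitalization, so title-case the tokens
# (and strip stray punctuation) before handing them over.
normalized = [w.strip('",.').title() for w in title_str_array]
print(normalized)  # 'BOSTON' becomes 'Boston'

# places = GeoText(" ".join(normalized))  # hypothetical follow-up
# print(places.cities)
```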
|
<python><geotext>
|
2024-09-18 15:19:50
| 1
| 353
|
user615525
|
78,999,010
| 695,134
|
json2html not outputting as described, want to remove generated outer table
|
<p>I'm using json2html (Python) and the output does not match what is shown on the website:
<a href="https://pypi.org/project/json2html/" rel="nofollow noreferrer">https://pypi.org/project/json2html/</a></p>
<p>Basically, using the site example:</p>
<pre class="lang-py prettyprint-override"><code>from json2html import *
input = {
"sample": [{
"a": 1, "b": 2, "c": 3
}, {
"a": 5, "b": 6, "c": 7
}]}
json2html.convert(json=input)
</code></pre>
<p>Shows the output on the website as a table of 2 rows.</p>
<p><a href="https://i.sstatic.net/poT97yfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/poT97yfg.png" alt="enter image description here" /></a></p>
<p>However, the HTML created and embedded on the same page now outputs this (which is also what the code says it is doing); just ignore the table formatting, which is my CSS styling:</p>
<p><a href="https://i.sstatic.net/51T77TNH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51T77TNH.png" alt="enter image description here" /></a></p>
<p>If I remove the 'sample' element name it produces two separate tables, and toggling <code>clubbing</code> True/False just makes it worse.</p>
<p>Can anyone let me know how to get the output the website describes? The left-hand name column is pretty ugly and redundant. Thanks.</p>
|
<python><json2html>
|
2024-09-18 15:19:35
| 1
| 6,898
|
Neil Walker
|
78,998,998
| 10,735,143
|
Could not connect to dockerized service which registered in eureka server
|
<p>Here is eureka_client.init:</p>
<pre><code>eureka_client.init(
eureka_server="http://<EUREKA_SERVER_IP>:8070/eureka",
app_name="<EUREKA_APPNAME>",
    instance_port="<API_PORT>",
instance_ip="<MACHINE_IP>",
)
</code></pre>
<p>At first my FastAPI application registered with a Docker-range IP address (172.19...) in Eureka.
Then I tried to set the <code>instance_ip</code> parameter to my machine's IP address, but I still could not call the service from the gateway. (The Docker exposed port works correctly.)</p>
<p>Note:
Unfortunately, I have no access to the Eureka server machine to check the gateway logs.</p>
<pre><code>py-eureka-client 0.11.10
python 3.10
</code></pre>
<p>Any help is appreciated</p>
|
<python><docker><netflix-eureka>
|
2024-09-18 15:16:34
| 1
| 634
|
Mostafa Najmi
|
78,998,888
| 11,267,783
|
Matplotlib issue with mosaic and colorbars
|
<p>I am facing a strange behaviour with my code.</p>
<p>I don't understand why the subplot at the top left has a different space between the imshow and the colorbar compared to the subplot at the top right.
And also I don't understand why the colorbar at the bottom is not aligned with the one at the top right.</p>
<p>Can you explain this ?</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
matrix = np.random.rand(100, 100)
mosaic = "AB;CC"
fig = plt.figure(layout="constrained")
ax_dict = fig.subplot_mosaic(mosaic)
img = ax_dict['A'].imshow(matrix, aspect="auto")
fig.colorbar(img, ax=ax_dict['A'])
img = ax_dict['B'].imshow(matrix, aspect="auto")
fig.colorbar(img, ax=ax_dict['B'])
img = ax_dict['C'].imshow(matrix, aspect="auto")
fig.colorbar(img, ax=ax_dict['C'])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/HlQHWJoO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlQHWJoO.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-09-18 14:53:41
| 2
| 322
|
Mo0nKizz
|
78,998,782
| 16,525,263
|
How to handle accented letter in Pyspark
|
<p>I have a PySpark dataframe in which I need to apply "translate" to a column.
I have the code below:</p>
<pre><code>df1 = df.withColumn("Description", F.split(F.trim(F.regexp_replace(F.regexp_replace(F.lower(F.col("Short_Description")), \
                                            r"[/\[/\]/\{}!-]", ' '), ' +', ' ')), ' '))
df2 = df1.withColumn("Description", F.translate('Description', 'รฃรครถรผแบรกรคฤฤรฉฤรญฤบฤพลรณรดลลกลฅรบลฏรฝลพรรรแบรรฤฤรฤรฤนฤฝลรรลล ลครลฎรลฝ',
'aaousaacdeeillnoorstuuyzAOUSAACDEEILLNOORSTUUYZ'))
df3 = df2.withColumn('Description', F.explode(F.col('Description')))
</code></pre>
<p>I'm getting <code>datatype mismatch error: argument 1 requires string type, 'Description' is of array&lt;string&gt; type</code></p>
<p>I need to handle the accented letters in Description column.</p>
<p>Please let me know how to solve this</p>
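<p>For reference, the mismatch comes from the ordering: after <code>F.split</code> the column is <code>array&lt;string&gt;</code>, while <code>F.translate</code> needs a plain string column, so the translation has to be applied before splitting, e.g. <code>F.split(F.translate(F.lower(F.col("Short_Description")), src, dst), ' ')</code> (sketch). The same ordering illustrated in plain Python with a small accent table:</p>

```python
# Translate FIRST (while the value is still one string), then split —
# mirroring F.translate(...) applied before F.split(...) in PySpark.
src = 'áäčďéěíĺľňóôŕšťúůýžãöü'
dst = 'aacdeeillnoorstuuyzaou'
table = str.maketrans(src, dst)

short_description = 'Žluťoučký kůň'
words = short_description.lower().translate(table).split()
print(words)  # ['zlutoucky', 'kun']
```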
|
<python><apache-spark><pyspark>
|
2024-09-18 14:33:21
| 1
| 434
|
user175025
|
78,998,619
| 5,756,179
|
Why does this not show any error during initial checks?
|
<p>The following is a derived excerpt from the cookiejar.py file for FileCookieJar and MozillaCookieJar implementation. <a href="https://github.com/python/cpython/blame/main/Lib/http/cookiejar.py#L1802" rel="nofollow noreferrer">https://github.com/python/cpython/blame/main/Lib/http/cookiejar.py#L1802</a></p>
<pre><code>class FCJ():
def __init__(self):
pass
def load(self):
self.__really_load()
class MCJ(FCJ):
def __init__(self):
super().__init__()
def __really_load(self):
pass
if __name__ == "__main__":
m = MCJ()
f = FCJ()
print(m, f)
</code></pre>
<p>In the class FCJ, <code>load</code> calls a method <code>__really_load</code> that is not defined in FCJ itself. It is first defined in the subclass MCJ, which, as I understand it, FCJ should know nothing about unless the instance was created as an MCJ. This looks like some kind of abstract-class implementation.</p>
<p>But why does Python not flag this during its initial checks when executing the file?</p>
<p>I am using Python 3.7, and this code has been there since Python 2.4, so I think I am missing something fundamental about how Python actually parses a file for errors during execution, but I cannot figure out what.</p>
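<p>What is going on here is name mangling plus late attribute lookup, not an abstract-class mechanism: inside FCJ, <code>self.__really_load()</code> is compiled to <code>self._FCJ__really_load</code>, while MCJ's method becomes <code>_MCJ__really_load</code>, and Python only looks the attribute up when <code>load()</code> is actually called, never at import time. A self-contained sketch:</p>

```python
class FCJ:
    def load(self):
        # Compiled as self._FCJ__really_load() due to name mangling;
        # the attribute is resolved only when load() is called.
        self.__really_load()

class MCJ(FCJ):
    def __really_load(self):  # becomes _MCJ__really_load
        pass

m = MCJ()
try:
    m.load()
except AttributeError as e:
    print(e)  # no attribute '_FCJ__really_load'
```

The real <code>cookiejar.py</code> avoids this by using a single leading underscore (<code>_really_load</code>), which is not mangled.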
|
<python><compilation>
|
2024-09-18 13:58:44
| 1
| 582
|
DaiCode-1523
|
78,998,389
| 1,308,807
|
Matplotlib warning due to multiple versions: how to handle this in Ubuntu 24?
|
<p>So Ubuntu 24.04 LTS recently came out and it uses Python 3.12 as the system-wide Python version. Starting from Ubuntu 23, we cannot really "play" with the default installation: we cannot install Python 3.12 packages locally with pip. That is OK, it is safer.</p>
<p>I installed python3.11 and I want to use it locally to run my scripts. In this specific case I would prefer to <strong>not use virtual environments</strong>. For python3.11, I installed packages locally with</p>
<p><code>python3.11 -m pip install --user --force-reinstall -r requirements.txt</code></p>
<p>because I would like to not rely on system-wide installations. I simply want python3.11 to use local packages, without conflicting with Ubuntu's python3.12.</p>
<p>When I run:</p>
<pre><code>python3.11 -c "import matplotlib.pyplot"
</code></pre>
<p>I get this warning:</p>
<pre><code>/home/<user>local/lib/python3.11/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
</code></pre>
<p>I would like to prevent python3.11 from using system-wide packages, without a virtual environment. Is that possible? Would it be easy and advisable, in this case, to permanently remove <code>/usr/lib/python3/dist-packages/</code> from the <strong>python3.11</strong> path?</p>
<p>Thank you very much.</p>
|
<python><matplotlib><ubuntu>
|
2024-09-18 13:07:48
| 0
| 1,075
|
Chicoscience
|
78,998,245
| 9,703,039
|
VSCode, Pylance how to get proper help popup?
|
<p>I enhanced a class by adding some few methods from another class (Jira's class from <a href="https://github.com/atlassian-api/atlassian-python-api/blob/master/atlassian/jira.py#L374" rel="nofollow noreferrer">atlassian-python</a> package)
My class is called <code>JiraExtended</code>.<br />
Original methods from the class display this way:
<a href="https://i.sstatic.net/rcbczikZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rcbczikZ.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/t8qPXCyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t8qPXCyf.png" alt="enter image description here" /></a></p>
<p>The ones I created show this way, with less info on function parameters, for example:
<a href="https://i.sstatic.net/kaTXWSb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kaTXWSb8.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/GhwltmQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GhwltmQE.png" alt="enter image description here" /></a></p>
<p>The way I define docstrings is pretty much the same.</p>
<pre class="lang-py prettyprint-override"><code>def grant_user_permission(self, username : str = None, permissionschemeid : str|int = None, permission : str = "BROWSE_PROJECTS") -> None:
"""
Grants a permission to a user (username) on a specified Permission Scheme.
:username: the user's username to grant the permission to, current username if not specified.
:permissionschemeid: The id of the permission scheme.
:permission: The permission key to grant, "BROWSE_PROJECTS" if not specified.
"""
if not permissionschemeid:
raise Exception("Provide a username and permissionschemeid")
if not username:
username : str = self.myself()['name']
url = self.resource_url(resource = f"permissionscheme/{permissionschemeid}/permission")
self.post(url, data=json.dumps({"holder": {"type": "user", "parameter": username}, "permission": permission}))
</code></pre>
<p>What's the secret so that Pylance guides me correctly?</p>
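<p>For reference, Pylance renders parameter docs when the docstring uses a format it recognizes, such as reST field lists (<code>:param name:</code>), Google style, or NumPy style; bare <code>:username:</code> fields are not parsed as parameters. A sketch of the same docstring as a reST field list (signature simplified, names from the question):</p>

```python
def grant_user_permission(username=None, permissionschemeid=None,
                          permission="BROWSE_PROJECTS"):
    """Grant a permission to a user on a specified permission scheme.

    :param username: the user's username to grant the permission to;
        current username if not specified.
    :param permissionschemeid: the id of the permission scheme.
    :param permission: the permission key to grant, defaults to
        "BROWSE_PROJECTS".
    :return: None
    """

# The ":param name:" fields are what docstring renderers key on.
print(grant_user_permission.__doc__.splitlines()[0])
```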
|
<python><visual-studio-code><pylance>
|
2024-09-18 12:37:52
| 1
| 339
|
Odyseus_v4
|
78,998,174
| 4,489,082
|
5 minute OHLC data to hourly at quarter past the hour
|
<p>I have OHLC data at 5-minute intervals. I want to convert it to hourly, but I want those values at quarter past the hour, meaning 9:15-10:15, 10:15-11:15, 11:15-12:15, and so forth.
The following code can do it between the o'clock boundaries. How can I modify the code to get the behaviour I want?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv('data.csv')
df['Datetime'] = pd.to_datetime(df['Datetime'])
df.set_index('Datetime', inplace=True)
hourly_data = df.resample('H').agg({'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum'})
</code></pre>
<p>Sample data.csv:</p>
<pre class="lang-none prettyprint-override"><code>Datetime,Open,High,Low,Close,Volume
2015-02-03 09:15:00+05:30,200.4,202,199.1,200.6,286228
2015-02-03 09:20:00+05:30,200.6,201.5,200.45,201.15,110749
2015-02-03 09:25:00+05:30,201.2,201.7,201.15,201.65,108342
2015-02-03 09:30:00+05:30,201.75,202.5,201.45,202.2,162976
2015-02-03 09:35:00+05:30,202.2,202.2,201.4,202,118370
2015-02-03 09:40:00+05:30,201.9,202,201.5,201.7,49873
2015-02-03 09:45:00+05:30,201.7,201.85,201.6,201.65,27798
2015-02-03 09:50:00+05:30,201.65,201.7,201.5,201.5,41560
2015-02-03 09:55:00+05:30,201.5,201.75,201.5,201.65,105351
2015-02-03 10:00:00+05:30,201.65,201.9,201.6,201.8,79346
2015-02-03 10:05:00+05:30,201.65,201.9,201.65,201.75,23715
2015-02-03 10:10:00+05:30,201.65,202.85,201.65,202.8,112551
2015-02-03 10:15:00+05:30,202.85,203.2,202.65,202.95,165270
2015-02-03 10:20:00+05:30,202.95,203.25,202.8,203,105614
2015-02-03 10:25:00+05:30,203,203.4,203,203.2,51944
2015-02-03 10:30:00+05:30,203.2,203.8,203,203.75,78474
2015-02-03 10:35:00+05:30,203.75,204.35,203.75,203.75,125514
2015-02-03 10:40:00+05:30,203.75,205,203.75,204.9,249231
2015-02-03 10:45:00+05:30,204.9,206.2,204.6,205.6,388224
2015-02-03 10:50:00+05:30,205.85,205.85,204.7,205,118267
2015-02-03 10:55:00+05:30,205,205.25,204.4,204.9,72310
2015-02-03 11:00:00+05:30,204.9,205.35,204.05,205,94266
2015-02-03 11:05:00+05:30,205,205.4,204.8,205,53333
</code></pre>
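<p>One way to anchor the hourly bins at quarter past (a sketch, shown with synthetic data shaped like the sample) is the <code>offset</code> argument of <code>resample</code>:</p>

```python
import pandas as pd

# 24 five-minute bars starting at 09:15, standing in for the CSV.
idx = pd.date_range('2015-02-03 09:15', periods=24, freq='5min')
df = pd.DataFrame({'Open': range(24), 'High': range(24),
                   'Low': range(24), 'Close': range(24),
                   'Volume': [100] * 24}, index=idx)

# offset shifts the bin edges, so bins run 09:15-10:15, 10:15-11:15, ...
hourly = df.resample('1h', offset='15min').agg(
    {'Open': 'first', 'High': 'max', 'Low': 'min',
     'Close': 'last', 'Volume': 'sum'})
print(hourly.index[0])  # 2015-02-03 09:15:00
```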
|
<python><pandas><ohlc>
|
2024-09-18 12:20:02
| 1
| 793
|
pkj
|
78,998,074
| 8,040,287
|
Pattern matching in python to catch enum-flag combination
|
<p>When using <a href="https://docs.python.org/3/library/enum.html#enum.Flag" rel="nofollow noreferrer">Flags</a> in Python, pattern matching only catch direct equality and not inclusion. It can be circumvented with a condition and an <code>in</code> however, provided you catch the <code>0</code> flag before if you consider it a special case:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Flag
class Test(Flag):
ONE = 1
TWO = 2
THREE = ONE | TWO
NONE = 0
def match_one(pattern):
match pattern:
case Test.THREE:
print(pattern, 'Three')
case Test.ONE:
print(pattern, 'ONE')
case v if v in Test.THREE:
print(pattern, 'in THREE')
case _:
print(pattern, 'Other')
</code></pre>
<blockquote>
<p>Test.ONE ONE<br />
Test.TWO in THREE<br />
Test.THREE Three<br />
Test.NONE in THREE</p>
</blockquote>
<p>However, using such a condition is limiting when using multiple flags.
A minimal example would be trying to match OBJ_1 with TYPE_1 (either ATTR_1, ATTR_2, or TYPE_1) or OBJ_2 with TYPE_2 (either ATTR_3, ATTR_4, or TYPE_2). (The problem is not limited to such a simple case, but it is a good minimal representation.)</p>
<pre class="lang-py prettyprint-override"><code>class Flag_1(Flag):
OBJ_1 = 1
OBJ_2 = 2
NONE = 0
class Flag_2(Flag):
ATTR_1 = 1
ATTR_2 = 2
ATTR_3 = 4
ATTR_4 = 8
TYPE_1 = ATTR_1 | ATTR_2
TYPE_2 = ATTR_3 | ATTR_4
NONE = 0
</code></pre>
<p>which can be done as</p>
<pre class="lang-py prettyprint-override"><code>match flag1, flag2:
    case (Flag_1.NONE, _) | (_, Flag_2.NONE):
print('Useless')
    case (Flag_1.OBJ_1, c) if c in Flag_2.TYPE_1:
print('Do 1')
    case (Flag_1.OBJ_2, c) if c in Flag_2.TYPE_2:
print('Do 1')
case _:
print('Other stuff')
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>match flag1, flag2:
    case (Flag_1.NONE, _) | (_, Flag_2.NONE):
print('Useless')
case (Flag_1.OBJ_1, Flag_2.ATTR_1 | Flag_2.ATTR_2) \
| (Flag_1.OBJ_2, Flag_2.ATTR_3 | Flag_2.ATTR_4):
print('Do 1')
case _:
print('Other stuff')
</code></pre>
<p>However solution 1 is limiting when there are some more combinations and adds many line which repeats the same operation. Solution 2 on the other hand is limiting if <code>TYPE_1</code> contains let's say 10 flags, as it would be a very long line.</p>
<p>Something like the following is not possible with pattern matching (and would be more suited to an <code>if</code> statement, I think):</p>
<pre class="lang-py prettyprint-override"><code>case (Flag_1.OBJ_1, c) if c in Flag_2.TYPE_1 \
| (Flag_1.OBJ_2, c) if c in Flag_2.TYPE_2: # invalid syntax
</code></pre>
<p>and the following does not work, as explained earlier, due to the way pattern matching operates:</p>
<pre class="lang-py prettyprint-override"><code>case (Flag_1.OBJ_1, Flag_2.TYPE_1) \
| (Flag_1.OBJ_2, Flag_2.TYPE_2) \
</code></pre>
<p>Is there a better way to match combined Flags than solution 1 (one <code>case</code> per combination, even though all have the same outcome) or an <code>if</code> statement?</p>
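<p>One way around the combinatorial <code>case</code> growth (a sketch, not from the original post) is to move the pairing into a plain mapping and test flag membership with <code>in</code>, keeping <code>match</code> out of it entirely:</p>

```python
from enum import Flag

class Flag_1(Flag):
    OBJ_1 = 1
    OBJ_2 = 2
    NONE = 0

class Flag_2(Flag):
    ATTR_1 = 1
    ATTR_2 = 2
    ATTR_3 = 4
    ATTR_4 = 8
    TYPE_1 = ATTR_1 | ATTR_2
    TYPE_2 = ATTR_3 | ATTR_4
    NONE = 0

# Which attribute group each object accepts; extending this table
# replaces adding another `case` arm.
RULES = {Flag_1.OBJ_1: Flag_2.TYPE_1, Flag_1.OBJ_2: Flag_2.TYPE_2}

def handle(flag1: Flag_1, flag2: Flag_2) -> str:
    if flag1 is Flag_1.NONE or flag2 is Flag_2.NONE:
        return 'Useless'
    allowed = RULES.get(flag1)
    if allowed is not None and flag2 in allowed:
        return 'Do 1'
    return 'Other stuff'

print(handle(Flag_1.OBJ_1, Flag_2.ATTR_1))  # Do 1
print(handle(Flag_1.OBJ_2, Flag_2.ATTR_1))  # Other stuff
```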
|
<python><pattern-matching><enum-flags>
|
2024-09-18 11:57:42
| 1
| 1,215
|
JackRed
|
78,998,063
| 1,005,423
|
How to avoid the "Duplicate Module Import Trap" in python?
|
<p>I recently encountered a perplexing issue regarding module imports in Python, particularly with the way instances of classes can be duplicated based on different import styles.</p>
<p>I had a hard time reproducing it; it was not so easy.</p>
<p><strong>Main issue</strong></p>
<p>Depending on the import style, a module-level variable can be duplicated (instantiated several times, even without cyclic imports). This is very hard to see at programming time, and it can be tedious to debug.</p>
<p><strong>Main question</strong></p>
<p>What is the best practice to avoid this issue?</p>
<p><strong>Illustration</strong></p>
<p>Here is a simple project</p>
<pre><code>PROJECT ROOT FOLDER
โโโโapp
โ main.py
โ
โโโโwebsocket
a.py
b.py
ws.py
__init__.py
</code></pre>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>import sys
def log_modules():
print("\n\nModules currently in cache:")
for module_name in sys.modules:
if ("web" in module_name or "ws" in module_name) and ("windows" not in module_name and "asyncio" not in module_name):
print(f" - {module_name}: {id(sys.modules[module_name])}")
from app.websocket.a import a
log_modules()
from app.websocket.b import b
log_modules()
if __name__ == "__main__":
a()
b()
</code></pre>
<p>ws.py</p>
<pre class="lang-py prettyprint-override"><code>
class ConnectionManager:
def __init__(self):
print(f"New ConnectionManager object created, id: {id(self)}")
self.caller_list = []
def use(self, caller):
self.caller_list.append(caller)
print(f"ConnectionManager object used by {caller}, id: {id(self)}. Callers = {self.caller_list}")
websocket_manager = ConnectionManager()
</code></pre>
<p>a.py</p>
<pre class="lang-py prettyprint-override"><code>from websocket.ws import websocket_manager # <= one import style: legitimate
def a():
websocket_manager.use("a")
</code></pre>
<p>b.py</p>
<pre class="lang-py prettyprint-override"><code>from .ws import websocket_manager # <= another import style: legitimate also
def b():
websocket_manager.use("b")
</code></pre>
<p>It outputs:</p>
<pre><code>New ConnectionManager object created, id: 1553357629648
New ConnectionManager object created, id: 1553357630608
ConnectionManager object used by a, id: 1553357629648. Callers = ['a']
ConnectionManager object used by b, id: 1553357630608. Callers = ['b']
</code></pre>
<p>When we would expect only one <code>ConnectionManager</code> instance.</p>
<p>I believe both imports are legitimate, especially in a development team where different styles may occur (even when we don't want them to, this issue occurs).</p>
<p>The question is: <strong>what should be the best practice to apply blindly</strong> ?</p>
<p><strong>Additional question</strong>
the logs of modules show:</p>
<pre><code>Modules currently in cache:
- app.websocket: 1553355041312
- websocket: 1553357652672
- websocket.ws: 1553357652512 <= here we are after main.py from app.websocket.a import a
- app.websocket.a: 1553355040112
- app.websocket.ws: 1553357653632
- app.websocket.b: 1553357652112 <= here we are after main.py from app.websocket.b import b
</code></pre>
<p>We can see the issue: the <code>ws</code> module is imported twice, once as <code>websocket.ws</code> and once as <code>app.websocket.ws</code>.<br />
<strong>Can someone explain that in an easy way?</strong> I can't ;-)<br />
And I feel we are touching a complex aspect of Python that we usually don't want to bother with! Python is so simple in many other aspects...</p>
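<p>A self-contained sketch of the mechanism (the file name and module names are illustrative): <code>sys.modules</code> is keyed by module <em>name</em>, so loading one file under two names executes its top-level code twice, giving two distinct instances. The usual fix is to make every import resolve to one canonical name: use absolute imports rooted at the top-level package (<code>from app.websocket.ws import ...</code>) everywhere, and run the program as <code>python -m app.main</code> from the project root.</p>

```python
import importlib.util
import pathlib
import sys
import tempfile

def load_as(name, path):
    # sys.modules is keyed by NAME: two names -> two module objects,
    # and the file's top-level code runs once per name.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    ws = pathlib.Path(tmp) / "ws.py"
    ws.write_text("manager = object()\n")

    a = load_as("websocket.ws", ws)
    b = load_as("app.websocket.ws", ws)

print(a is b)                  # False: two distinct module objects
print(a.manager is b.manager)  # False: module-level code ran twice
```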
|
<python><python-import><python-module>
|
2024-09-18 11:54:42
| 1
| 329
|
Nico
|
78,998,041
| 17,721,722
|
PostgreSQL COPY command is much slower locally compared to remote server with identical specs
|
<p>I'm encountering a performance issue with the PostgreSQL <code>COPY</code> command when running it locally. Specifically, my .csv files are on my local machine, and I am pushing them into a remote PostgreSQL server using COPY. Both my remote backend app server and my local PC have identical specs (both with SSDs, similar RAM, and CPU cores). The PostgreSQL database itself is hosted on a different remote server and can only be accessed through a specific whitelisted internet connection.</p>
<h4>Configuration</h4>
<ul>
<li><strong>Cores:</strong> 8</li>
<li><strong>RAM:</strong> 16 GB</li>
<li><strong>Disk Space:</strong> 100 GB (50% free)</li>
<li><strong>OS:</strong> Linux Ubuntu</li>
<li><strong>Django:</strong> 5.1</li>
<li><strong>Python:</strong> 3.11</li>
</ul>
<p>The issue is that the <code>COPY</code> command runs significantly slower on my local machine compared to the remote server, even though both environments are very similar. My local PC is connected to the internet through VPN, which is also fast. When I run the code below locally, it is much slower compared to when I run it on the server. I also noticed another problem: establishing a connection via PSQL, pgAdmin, DBeaver, etc., becomes extremely slow when I am running the process locally. Many times, it doesn't connect to the remote database. I am not sure if this problem is caused by multiprocessing or a different broadband connection.</p>
<p>Here's a snippet of my code:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
from functools import partial
from multiprocessing import Pool, cpu_count

import psycopg2


@contextmanager  # without this, the bare generator has no __enter__/__exit__
def get_connection(env):
    conn = psycopg2.connect(f"""postgresql://{env["USER"]}:{env["PASSWORD"]}@{env["HOST"]}:{env["PORT"]}/{env["NAME"]}""")
    try:
        yield conn
    finally:
        conn.close()
def copy_from(file_path: str, table_name: str, env, column_string: str):
with get_connection(env) as connection:
with connection.cursor() as cursor:
with open(file_path, "r") as f:
query = f"COPY {table_name} ({column_string}) FROM STDIN WITH (FORMAT CSV, HEADER FALSE, DELIMITER ',', NULL '')"
cursor.copy_expert(query, f)
connection.commit()
with Pool(cpu_count()) as p:
p.map(partial(copy_from, table_name=table_name, env=env, column_string=column_string), file_path_list)
</code></pre>
<h3><strong>Local Machine</strong></h3>
<p>OS: Linux Ubuntu 22.04 LTS with Seqrite EndPoint Security</p>
<p><strong>Traceroute Result:</strong></p>
<ul>
<li>Gateway: ~3.5 ms</li>
<li>Intermediate Hops: 6.5 ms to 40.7 ms</li>
<li>Final Hops: 7.7 ms to 12.7 ms</li>
</ul>
<h3><strong>Remote Server</strong></h3>
<p>OS: Linux Ubuntu</p>
<p><strong>Traceroute Result:</strong></p>
<ul>
<li>Gateway: ~0.2 ms to 0.7 ms</li>
<li>Intermediate Hops: All responses are either not available or < 1 ms</li>
</ul>
<p><strong>Questions:</strong></p>
<ol>
<li>Why might the <code>COPY</code> command be slower locally despite similar specs and a fast internet connection?</li>
<li>Are there any specific optimizations or configurations I should check on my local machine to improve performance?</li>
<li>Could network latency or other factors be impacting the local performance, and if so, how can I address this?</li>
</ol>
<p>Any insights or suggestions would be greatly appreciated!</p>
|
<python><database><postgresql><multiprocessing><psycopg2>
|
2024-09-18 11:47:32
| 1
| 501
|
Purushottam Nawale
|
78,997,912
| 10,829,044
|
Pandas create % and # distribution list in descending order for each group
|
<p>I have a pandas dataframe like as below</p>
<pre><code>data = {
'cust_id': ['abc', 'abc', 'abc', 'abc', 'abc', 'abc', 'abc', 'abc', 'abc', 'abc'],
'product_id': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
'purchase_country': ['India', 'India', 'India', 'Australia', 'Australia', 'Australia', 'Australia', 'Australia', 'Australia', 'Australia']
}
df = pd.DataFrame(data)
</code></pre>
<p>My objective is to do the below for each group of cust_id and product_id</p>
<p>a) create two output columns - 'pct_region_split' and 'num_region_split'</p>
<p>b) For 'pct_region_split' - store the % of country split. For ex: For the specific group shown in sample data, Australia - 70% (7 out of 10 is 70%) and India - 30% (3 out of 10 is 30%)</p>
<p>c) For 'num_region_split' - just store the no of rows for country value. For ex: For the specific group shown in sample data, Australia - 7 rows out of total 10 and India is 3 out of total 10.</p>
<p>d) Store the values in a list format (descending order). Meaning, Australia should appear first because it has 70% as the value (which is higher than India).</p>
<p>I tried the below but it is going no where</p>
<pre><code>df['total_purchases'] = df.groupby(['cust_id', 'product_id'])['purchase_country'].transform('size')
df['unique_country'] = df.groupby(['cust_id', 'product_id'])['purchase_country'].transform('nunique')
</code></pre>
<p>Please do note that my real data has more than 1000 customers and 200 product combinations.</p>
<p>I expect my output in a new dataframe like as shown below for each cust and product_id combination</p>
<p><a href="https://i.sstatic.net/EDqb27GZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDqb27GZ.png" alt="enter image description here" /></a></p>
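<p>A sketch of one possible approach (output column names follow the question) using <code>value_counts</code>, which already sorts descending within each group:</p>

```python
import pandas as pd

data = {
    'cust_id': ['abc'] * 10,
    'product_id': [12] * 10,
    'purchase_country': ['India'] * 3 + ['Australia'] * 7,
}
df = pd.DataFrame(data)

# Per (cust_id, product_id) group: country counts, sorted descending.
counts = df.groupby(['cust_id', 'product_id'])['purchase_country'].value_counts()
pct = counts * 100 / counts.groupby(level=[0, 1]).transform('sum')

out = (pd.DataFrame({'num': counts, 'pct': pct})
         .reset_index()
         .groupby(['cust_id', 'product_id'])
         .agg(pct_region_split=('pct', list),
              num_region_split=('num', list))
         .reset_index())
print(out)
```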
|
<python><pandas><dataframe><list><group-by>
|
2024-09-18 11:21:25
| 2
| 7,793
|
The Great
|
78,997,869
| 5,999,591
|
error in PyTorch dataloader with num_workers>0 in VSC under WSL
|
<p>I want to utilize my GPU by adjusting the number of workers, but I have a problem with any number of workers > 0.</p>
<p><code>test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False, num_workers=0)</code> - no problem<br />
<code>test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False, num_workers=1)</code> - <code>RuntimeError: CUDA error: initialization error</code><br />
Data set is MNIST. The size is 50% / 50% train/test</p>
<p>It's an NVIDIA GeForce RTX 3060 Ti, Torch 2.0.1 + CUDA 11.8,
GPU memory 8191.50 MB</p>
<p>I work within Visual Studio Code in WSL</p>
<p>Here is a reference to a similar case:
<a href="https://stackoverflow.com/questions/74178302/error-when-using-pytorch-num-workers-in-colab-when-num-workers-0">Error when using pytorch num_workers in colab. when num_workers &gt; 0</a></p>
<p>Any suggestions? Thank you!</p>
<p><strong>EDIT: Demo script</strong></p>
<pre><code> import tensorflow as tf
# Check GPU availability
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
# Simple model to test GPU functionality
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(784,)),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Dummy data
import numpy as np
X_train = np.random.random((1000, 784))
Y_train = np.random.randint(10, size=(1000,))
# Train model
model.fit(X_train, Y_train, epochs=1, workers=1, use_multiprocessing=True)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>File "/home/kalin_stoyanov/EntMax_TSNE/my_venv/lib/python3.11/site-packages/keras/src/optimizers/optimizer.py", line 1253, in _internal_apply_gradients
File "/home/kalin_stoyanov/EntMax_TSNE/my_venv/lib/python3.11/site-packages/keras/src/optimizers/optimizer.py", line 1345, in _distributed_apply_gradients_fn
File "/home/kalin_stoyanov/EntMax_TSNE/my_venv/lib/python3.11/site-packages/keras/src/optimizers/optimizer.py", line 1340, in apply_grad_to_update_var
DNN library initialization failed. Look at the errors above for more details.
---- System Information ----
---- CPU Information ----
Physical cores: 12
Logical cores: 24
---- Memory Information ----
Total RAM (GB): 15.442893981933594
Available RAM (GB): 12.711883544921875
---- GPU Information ----
PyTorch GPU Count: 1
PyTorch GPU 0: NVIDIA GeForce RTX 3060 Ti
PyTorch Version: 2.0.1+cu118
TensorFlow GPUs: ['/device:GPU:0']
TensorFlow Version: 2.14.0
Keras Version: 2.14.0
---- Operating System Information ----
Operating System: Linux
OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
Machine: x86_64
---- Python Information ----
Python Version: 3.11.4
Python Executable: /home/kalin_stoyanov/EntMax_TSNE/my_venv/bin/python
PyTorch Version: 2.0.1+cu118
TensorFlow Version: 2.14.0
Keras Version: 2.14.0
---- Num GPUs Available ----
Num GPUs Available (TensorFlow): 1
</code></pre>
|
<python><pytorch><tf.keras><pytorch-dataloader>
|
2024-09-18 11:11:41
| 1
| 581
|
Kalin Stoyanov
|
78,997,729
| 2,791,346
|
Disable stopping the container after stopping debugging in PyCharm
|
<p>I set up the debugging for my Django app that is inside the docker container in PyCharm.</p>
<p>I did it with:</p>
<ul>
<li>a new interpreter via Docker Compose</li>
<li>created the new run configuration Django Server.</li>
</ul>
<p>My settings are:</p>
<ul>
<li>Interpreter: new docker-compose interpreter</li>
<li>host: 0.0.0.0</li>
<li>port: 8000</li>
</ul>
<p>All work well (start docker container in the terminal, start debugger - it works)</p>
<p>BUT:
When I click stop in the debugger, it stops my Docker container.</p>
<p>How can I set it not to stop, so that I can use it as a "normal" development container (and don't need to restart it every time)?</p>
|
<python><django><debugging><docker-compose><pycharm>
|
2024-09-18 10:31:40
| 0
| 8,760
|
Marko Zadravec
|
78,997,621
| 2,546,099
|
Best practice for creating new release in gitlab CI/CD-pipeline
|
<p>I have a Python project on GitLab, with an integrated pipeline that runs tests and, if those tests succeed on the main branch, builds the documentation and a wheel file. Now, however, I would like to expand this by automatically creating releases for new versions and compiling the project into an executable. My current <code>.gitlab-ci.yml</code> file looks like this:</p>
<pre><code># This file is a template, and might need editing before it works on your project.
# To contribute improvements to CI/CD templates, please follow the Development guide at:
# https://docs.gitlab.com/ee/development/cicd/templates.html
# This specific template is located at:
# https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Python.gitlab-ci.yml
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: python:3.11
# Change pip's cache directory to be inside the project directory since we can
# only cache local items.
variables:
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
# Pip's cache doesn't store the python packages
# https://pip.pypa.io/en/stable/topics/caching/
#
# If you want to also cache the installed packages, you have to install
# them in a virtualenv and cache it as well.
cache:
paths:
- /root/.cache/pypoetry
- .cache/pip
- venv/
before_script:
#- ubuntu-drivers install --gpgpu nvidia:555 -y
- apt-get update && apt-get install software-properties-common python3-launchpadlib -y
- apt-get update && add-apt-repository main contrib non-free non-free-firmware # ppa:graphics-drivers/ppa
- echo "deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware" >> /etc/apt/sources.list
- cat /etc/apt/sources.list
#- deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
- apt-get update && apt-get install ffmpeg libsm6 libxext6 libegl-dev -y #nvidia-driver firmware-misc-nonfree -y -o Dpkg::Options::="--force-overwrite" # nvidia-dkms nvidia-utils -y
- nvidia-smi # For debugging
- python --version # For debugging
- pip install --upgrade pip
- pip install poetry
- poetry install --with dev
- source `poetry env info --path`/bin/activate
stages:
- test
- wheel_build
- doc_build
- nuitka_windows
- nuitka_linux
- release
test_job:
image: "python:$VERSION"
stage: test
script:
- poetry run pytest ./tests/
parallel:
matrix:
- VERSION: ["3.9", "3.10", "3.11", "3.12"]
artifacts:
when: always
reports:
junit: /builds/my_python_project/junit_report.xml
coverage_report:
coverage_format: cobertura
path: /builds/my_python_project/coverage.xml
coverage: '/TOTAL.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
wheel_build_job:
stage: wheel_build
needs: [test_job]
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
script:
- poetry build
- pwd
artifacts:
paths:
- /builds/my_python_project/dist/*.whl
doc_build_job:
stage: doc_build
needs: [test_job]
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
script:
- cd docs
- poetry run sphinx-apidoc -o ./source/ ../my_python_project/
- poetry run sphinx-build -M html ./source/ ./build/
- mv build/html/ ../public/
artifacts:
paths:
- /builds/my_python_project/public
nuitka_job_windows:
stage: nuitka_windows
tags: [windows]
needs: [wheel_build_job]
rules:
- if: '($CI_COMMIT_TAG != null) && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
script:
- echo "Hello World from Windows"
nuitka_job_linux:
stage: nuitka_linux
tags: [linux]
needs: [wheel_build_job]
rules:
- if: '($CI_COMMIT_TAG != null) && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'
script:
- echo "Hello World from Linux"
release_job:
stage: release
needs:
- job: wheel_build_job
optional: false
- job: doc_build_job
optional: false
- job: nuitka_job_linux
optional: true
- job: nuitka_job_windows
optional: true
rules:
- if: '($CI_COMMIT_TAG != null) && ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH)'# Run this job when a tag is created
script:
- echo "running release_job"
- echo "Current commit tag is $CI_COMMIT_TAG"
- curl --location --output /usr/local/bin/release-cli "https://gitlab.com/api/v4/projects/gitlab-org%2Frelease-cli/packages/generic/release-cli/latest/release-cli-linux-amd64"
- chmod +x /usr/local/bin/release-cli
- release-cli -v
release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties
tag_name: "$CI_COMMIT_TAG"
description: "$CI_COMMIT_DESCRIPTION"
</code></pre>
<p>My first idea was to create tags on the feature branch before merging, and thereby trigger all the additional stages. However, such a tag does not seem to carry over to the main branch, so <code>$CI_COMMIT_TAG</code> stays empty there and the additional stages never trigger. My current approach would therefore be to create a dedicated branch for each new release (i.e. named <code>release/v<x.xx.xx></code>) and merge to main from there. However, I don't yet know how to determine whether I'm merging from the correct branch (to trigger the additional steps) or how to extract the correct version number.</p>
<p>Is that the correct approach at all, or are there other, better implementations for my intentions? I looked for potential tutorials, but did not find anything useful yet.</p>
|
<python><git><gitlab><cicd>
|
2024-09-18 10:01:31
| 1
| 4,156
|
arc_lupus
|
78,997,541
| 9,542,989
|
Slack SocketModeClient - Respond Only to Messages That Mention App in Channels
|
<p>I am attempting to create a Slack bot using <code>SocketModeClient</code> that only responds to messages in channels where it has explicitly been mentioned. My initial thought process was to do something like the following:</p>
<pre><code> socket_mode_client = SocketModeClient(
# This app-level token will be used only for establishing a connection
app_token=os.environ['APP_TOKEN'], # xapp-A111-222-xyz
# You will be using this WebClient for performing Web API calls in listeners
web_client=WebClient(token=os.environ['TOKEN']), # xoxb-111-222-xyz
)
def _process_websocket_message(client: SocketModeClient, request: SocketModeRequest):
# Acknowledge the request
response = SocketModeResponse(envelope_id=request.envelope_id)
client.send_socket_mode_response(response)
if request.type != 'events_api':
return
# ignore duplicated requests
if request.retry_attempt is not None and request.retry_attempt > 0:
return
payload_event = request.payload['event']
if payload_event['type'] == 'app_mention' and '<app_user_id>' in payload_event['<some-attr>']:
# respond
</code></pre>
<p>The idea here is to check the text of the message for the user ID of the app and only respond if it is in there. However, I am not entirely certain how I can get the user ID of the app. I know that with the <code>users_profile_get()</code> method of the <code>WebClient</code>, I can get the bot ID and the app ID, but these are not IDs present in the text when the app is mentioned.</p>
<p>I am open to trying a different approach as well.</p>
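<p>A minimal sketch of the mention check described above (an assumption on my part, not something confirmed here): the app's own user ID, the one that appears in mention tokens, can be fetched once via the Web API's <code>auth.test</code> method (<code>client.web_client.auth_test()["user_id"]</code>), and a mention appears in the message text as the literal token <code><@U12345></code>:</p>

```python
def is_app_mentioned(text: str, bot_user_id: str) -> bool:
    # Slack renders a mention as the literal token <@U12345> inside the
    # message text, so a substring check is enough to detect it.
    return f"<@{bot_user_id}>" in text

# Hypothetical usage (bot_user_id fetched once at startup via auth_test()):
#   bot_user_id = socket_mode_client.web_client.auth_test()["user_id"]
#   if payload_event["type"] == "app_mention" and is_app_mentioned(payload_event["text"], bot_user_id):
#       ...  # respond
```

<p>Note that <code>app_mention</code> events already fire only when the app is mentioned, so the substring check mainly matters if you also subscribe to plain <code>message</code> events.</p>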
|
<python><slack><slack-api>
|
2024-09-18 09:41:31
| 1
| 2,115
|
Minura Punchihewa
|
78,997,523
| 17,519,895
|
How to concatenate a list of byte audio data (from frontend) into a single WAV file?
|
<h3>Body:</h3>
<p>I am working with two types of audio data, both in byte form. One type works fine with audio processing libraries like <code>pydub</code> and <code>AudioSegment</code>, allowing me to concatenate and export the audio as a WAV file. However, the second type of audio data, which I receive from the frontend, cannot be processed by these libraries.</p>
<p>I can concatenate the byte data for transcription using <code>b''.join(list)</code>, but when I try to save or play the concatenated audio, only the first segment plays. After it stops, I need to manually press the play button to hear the next segment.</p>
<p>Hereโs the method that works for the first type of audio but not the second:</p>
<pre class="lang-py prettyprint-override"><code>from pydub import AudioSegment
import io
import numpy as np
def audio_saver_AI(path, audio, dtype=np.int16):
combined_audio = AudioSegment.empty()
if isinstance(audio, list):
for audio_file in audio:
temp_audio = io.BytesIO(audio_file)
audio_segment = AudioSegment.from_wav(temp_audio) # Works with one type of audio
combined_audio += audio_segment
combined_audio.export(path, format='wav')
</code></pre>
<h3>Problem:</h3>
<ul>
<li>This code works fine for the first type of audio I deal with, but the audio data coming from the frontend does not include the <code>RIFF</code> header required to treat it as a <code>.wav</code> file, so it can't be processed by <code>AudioSegment</code> or similar libraries.</li>
<li>For transcription, <code>b''.join(list)</code> works fine for both audio types, but I cannot concatenate and play them back as a single continuous audio file.</li>
<li>I need a solution to properly concatenate the audio data received from the frontend and export it as a playable file.</li>
</ul>
<h3>What I've Tried:</h3>
<ul>
<li>Using <code>AudioSegment</code> and other libraries to concatenate the audio data, but they fail to process the frontend audio type.</li>
<li>Concatenating bytes for transcription with <code>b''.join(list)</code>, which works but isn't sufficient for generating a playable audio file.</li>
</ul>
<h3>Question:</h3>
<p>How can I concatenate a list of byte audio data (including the type received from the frontend) into a single audio file that can be saved and played back continuously?</p>
|
<python><audio><wav>
|
2024-09-18 09:37:03
| 0
| 421
|
Aleef
|
78,997,513
| 1,540,660
|
Why is there "TypeError: string indices must be integers" when using negative indices or slices in string formatting?
|
<p>I would like to understand why this works fine:</p>
<pre><code>>>> test_string = 'long brown fox jump over a lazy python'
>>> 'formatted "{test_string[0]}"'.format(test_string=test_string)
'formatted "l"'
</code></pre>
<p>Yet this fails:</p>
<pre><code>>>> 'formatted "{test_string[-1]}"'.format(test_string=test_string)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: string indices must be integers
>>> 'formatted "{test_string[11:14]}"'.format(test_string=test_string)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: string indices must be integers
</code></pre>
<p>I know this could be used:</p>
<pre><code>'formatted "{test_string}"'.format(test_string=test_string[11:14])
</code></pre>
<p>...but that is not possible in my situation.</p>
<p>I am dealing with a sandbox-like environment where a list of variables is passed to <code>str.format()</code> as dictionary of kwargs. These variables are outside of my control. I know the names and types of variables in advance and can only pass formatter string. The formatter string is my only input. It all works fine when I need to combine a few strings or manipulate numbers and their precision. But it all falls apart when I need to extract a substring.</p>
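<p>The behaviour follows from how <code>str.format</code> parses the field name: an index in brackets that consists only of digits is converted to <code>int</code>, while anything else (including <code>-1</code> and <code>11:14</code>) is passed to <code>__getitem__</code> as a <em>string</em>, and <code>str.__getitem__</code> rejects string keys. A small probe of my own (not code from the docs) makes this visible:</p>

```python
class Probe(dict):
    """Records exactly what str.format passes as the subscript key."""
    def __getitem__(self, key):
        self.last_key = key
        return ''

p = Probe()
'{x[0]}'.format(x=p)    # an all-digit index is converted to the int 0
digit_key = p.last_key
'{x[-1]}'.format(x=p)   # anything else is passed through as the string '-1'
string_key = p.last_key
```

<p>So negative indices and slices are simply not part of the format mini-language. A workaround is only possible if you can influence how the string is applied, e.g. a custom <code>string.Formatter</code> subclass whose <code>get_field</code> interprets such keys.</p>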
|
<python><formatting>
|
2024-09-18 09:34:06
| 2
| 336
|
Art Gertner
|
78,997,232
| 8,078,237
|
How to prevent scientific notation when exporting a pandas dataframe to csv?
|
<p>I read and modify Excel files with pandas and export them as CSV at the end.
One column can contain all kinds of text and sometimes a numeric string like "0123456789" (note the leading zero). That leading zero seems to cause Excel to autoformat those cells and display them in scientific notation.
I do not want to change those cell values when exporting the CSV, but they always end up in scientific notation by default, even though they weren't in that format in the Excel file, where they are formatted as "text" or start with an apostrophe (which Excel does not display).</p>
<p>If I manually prepend an apostrophe in Python, the value is no longer shown in scientific notation, but the apostrophe is then visible in the CSV, since a CSV carries no formatting. The column can also contain non-numeric values, so prepending an apostrophe wouldn't make sense in every case.</p>
<p>How can I read an Excel file and export it as CSV without those numeric strings ending up in scientific notation, keeping them exactly as they are, without any manual fixes in the CSV afterwards?</p>
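<p>A sketch of one likely fix (hedged: it assumes the scientific notation is introduced by pandas inferring the column as numeric during <code>read_excel</code>, and the column name <code>"code"</code> is hypothetical): force string dtype at read time with the <code>dtype</code> parameter, so the value never becomes a float and <code>to_csv</code> writes it verbatim:</p>

```python
import pandas as pd

# With a real workbook: pd.read_excel('input.xlsx', dtype=str)
# (or dtype={'code': str} for just one column) keeps every cell as text,
# so "0123456789" is never turned into the float 123456789.0.
# Demonstrated here with an in-memory frame instead of an .xlsx file:
df = pd.DataFrame({'code': ['0123456789', 'some text']})
csv_text = df.to_csv(index=False)   # the leading zero survives
```

<p>Whether Excel later re-interprets the CSV cell as a number when opening it is a separate issue: a CSV carries no formatting, so that part can only be controlled in Excel's import dialog, not in the file itself.</p>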
|
<python><excel><pandas><csv>
|
2024-09-18 08:24:13
| 1
| 317
|
jfordummies
|
78,997,194
| 4,690,023
|
Create discord bot buttons programmatically
|
<p>In my python discord bot I can define a <code>discord.ui.View</code> like this:</p>
<pre><code>class ConfirmResetView(discord.ui.View):
def __init__(self, cog):
super().__init__(timeout=60)
self.cog = cog # Reference to the cog for resetting the event
self.answer = None # To store the user's decision
@discord.ui.button(label="Yes", style=discord.ButtonStyle.blurple)
async def yes_button(self, interaction: discord.Interaction, button: discord.ui.Button):
self.clear_items()
self.answer = 'Yes'
await interaction.response.edit_message(delete_after=.1)
self.stop()
@discord.ui.button(label="No", style=discord.ButtonStyle.red)
async def no_button(self, interaction: discord.Interaction, button: discord.ui.Button):
self.clear_items()
self.answer = 'No'
await interaction.response.edit_message(delete_after=.1)
self.stop()
</code></pre>
<p>and use it inside commands like</p>
<pre><code>view = ConfirmResetView(self)
message = await ctx.send("Are you sure?", view=view)
# Wait for the user to respond (button click)
await view.wait()
print(view.answer)
</code></pre>
<p>I want to create a general-purpose <code>View</code> sub-class that I can use to generate questions with any number of buttons. I tried in a couple of ways like this:</p>
<pre><code>import discord
class Question_dialog(discord.ui.View):
def __init__(self, cog, timeout=60, buttons=[('Yes', 'Yes', discord.ButtonStyle.blurple), ('No', 'No', discord.ButtonStyle.red)]):
super().__init__(timeout=timeout)
self.cog = cog # Reference to the cog for resetting the event
self.answer = None # To store the user's decision
for i, (label, answer, style) in enumerate(buttons):
@discord.ui.button(label=label, style=style)
async def button(self, interaction: discord.Interaction, button: discord.ui.Button):
self.clear_items()
self.answer = answer
await interaction.response.edit_message(delete_after=.1)
self.stop()
setattr(self, f"button{i}", button)
</code></pre>
<p>and this</p>
<pre><code>import discord
class QuestionDialog(discord.ui.View):
def __init__(self, cog, timeout=60, buttons=[('Yes', 'Yes', discord.ButtonStyle.blurple), ('No', 'No', discord.ButtonStyle.red)]):
super().__init__(timeout=timeout)
self.cog = cog # Reference to the cog for resetting the event
self.answer = None # To store the user's decision
# Loop over the buttons and create them dynamically
for i, (label, answer, style) in enumerate(buttons):
self.add_item(self.create_button(label, answer, style))
def create_button(self, label, answer, style):
# This helper function returns a button that has a separate scope
async def button_callback(interaction: discord.Interaction):
self.clear_items()
self.answer = answer
await interaction.response.edit_message(delete_after=.1)
self.stop()
# Create and return a discord.ui.Button with the proper callback
return discord.ui.button(label=label, style=style, custom_id=f"button_{label}", row=0, disabled=False)(button_callback)
</code></pre>
<p>But when I call</p>
<pre><code>view = Question_dialog(self)
</code></pre>
<p>from within a function, this doesn't work and no buttons are shown. Where is the error?</p>
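<p>One pitfall visible in the first attempt is independent of discord.py, so it can be shown with plain functions: callbacks defined in a loop all close over the <em>same</em> loop variable (late binding), so after the loop every callback sees the last <code>label</code>/<code>answer</code>. A factory function, like the <code>create_button</code> helper in the second attempt, gives each callback its own binding:</p>

```python
def make_callbacks_buggy(answers):
    callbacks = []
    for answer in answers:
        def cb():
            return answer          # closes over the loop variable itself
        callbacks.append(cb)
    return callbacks               # every cb now returns the *last* answer

def make_callbacks_fixed(answers):
    def factory(answer):
        def cb():
            return answer          # each factory call binds its own answer
        return cb
    return [factory(a) for a in answers]

buggy = make_callbacks_buggy(['Yes', 'No'])
fixed = make_callbacks_fixed(['Yes', 'No'])
```

<p>Separately, and hedged as my reading of discord.py rather than something the question confirms: <code>discord.ui.button</code> is a decorator meant for methods at class-definition time, so calling it at runtime does not produce an item <code>add_item</code> accepts; the usual dynamic route is to build a plain <code>discord.ui.Button(...)</code> and assign the coroutine to its <code>callback</code> attribute before <code>add_item</code>.</p>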
|
<python><discord.py>
|
2024-09-18 08:14:18
| 1
| 1,870
|
Luca
|
78,997,191
| 17,082,611
|
FastAPI app works locally, but /blogs endpoint causes redirect loop on AWS Lambda deployment
|
<p>I've developed a FastAPI app and deployed it on AWS Lambda as a <code>.zip</code> archive. Locally, when I run:</p>
<pre class="lang-bash prettyprint-override"><code>uvicorn src.main:app --reload
</code></pre>
<p>I can access the <code>/blogs</code> endpoint (<code>http://127.0.0.1:8000/blogs</code>) and it returns the correct data:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"title": "blog with relationship",
"description": "description",
"written_by": {
"username": "Sarthak",
"email": "sss@ssss.com",
"blogs": [
{ "title": "blog with relationship", "description": "description" },
{ "title": "blog with relationship", "description": "description" }
]
}
}
]
</code></pre>
<p>However, when I deploy this app to AWS Lambda using the following SAM template:</p>
<pre><code>AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An AWS Serverless Application Model template describing your function.
Resources:
Full:
Type: AWS::Serverless::Function
Properties:
CodeUri: .
Description: ''
MemorySize: 128
Timeout: 60
Handler: src.main.handler
Runtime: python3.12
Architectures:
- x86_64
EphemeralStorage:
Size: 512
EventInvokeConfig:
MaximumEventAgeInSeconds: 21600
MaximumRetryAttempts: 2
FunctionUrlConfig:
AuthType: NONE
InvokeMode: BUFFERED
PackageType: Zip
Policies:
- Statement:
- Effect: Allow
Action:
- logs:CreateLogGroup
Resource: arn:aws:logs:us-east-1:820363156269:*
- Effect: Allow
Action:
- logs:CreateLogStream
- logs:PutLogEvents
Resource:
- >-
arn:aws:logs:us-east-1:820363156269:log-group:/aws/lambda/Full:*
RecursiveLoop: Terminate
SnapStart:
ApplyOn: None
RuntimeManagementConfig:
UpdateRuntimeOn: Auto
</code></pre>
<p>I get a <strong>Too many redirects</strong> error when making a <code>GET</code> request to the generated URL:</p>
<pre><code>https://cjosbvzkkcjiaa4e7frfwtacia0molag.lambda-url.us-east-1.on.aws/blogs
</code></pre>
<p>This also happens when testing in Postman, where it exceeds the max redirects limit.</p>
<p>However, making requests to specific blog IDs, like:</p>
<pre><code>https://cjosbvzkkcjiaa4e7frfwtacia0molag.lambda-url.us-east-1.on.aws/blogs/1
</code></pre>
<p>works perfectly, returning the expected response.</p>
<p>If I test the app using the <em>Test</em> panel in the AWS Lambda console manager, using the AWS Proxy template:</p>
<pre><code>{
"body": "eyJ0ZXN0IjoiYm9keSJ9",
"resource": "/{proxy+}",
"path": "/blogs",
"httpMethod": "GET",
"isBase64Encoded": true,
"queryStringParameters": {
"foo": "bar"
},
"multiValueQueryStringParameters": {
"foo": [
"bar"
]
},
"pathParameters": {
"proxy": "/blogs"
},
"stageVariables": {
"baz": "qux"
},
"headers": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"Accept-Encoding": "gzip, deflate, sdch",
"Accept-Language": "en-US,en;q=0.8",
"Cache-Control": "max-age=0",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-Country": "US",
"Host": "1234567890.execute-api.us-east-1.amazonaws.com",
"Upgrade-Insecure-Requests": "1",
"User-Agent": "Custom User Agent String",
"Via": "1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==",
"X-Forwarded-For": "127.0.0.1, 127.0.0.2",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
"multiValueHeaders": {
"Accept": [
"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
],
"Accept-Encoding": [
"gzip, deflate, sdch"
],
"Accept-Language": [
"en-US,en;q=0.8"
],
"Cache-Control": [
"max-age=0"
],
"CloudFront-Forwarded-Proto": [
"https"
],
"CloudFront-Is-Desktop-Viewer": [
"true"
],
"CloudFront-Is-Mobile-Viewer": [
"false"
],
"CloudFront-Is-SmartTV-Viewer": [
"false"
],
"CloudFront-Is-Tablet-Viewer": [
"false"
],
"CloudFront-Viewer-Country": [
"US"
],
"Host": [
"0123456789.execute-api.us-east-1.amazonaws.com"
],
"Upgrade-Insecure-Requests": [
"1"
],
"User-Agent": [
"Custom User Agent String"
],
"Via": [
"1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)"
],
"X-Amz-Cf-Id": [
"cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA=="
],
"X-Forwarded-For": [
"127.0.0.1, 127.0.0.2"
],
"X-Forwarded-Port": [
"443"
],
"X-Forwarded-Proto": [
"https"
]
},
"requestContext": {
"accountId": "123456789012",
"resourceId": "123456",
"stage": "prod",
"requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef",
"requestTime": "09/Apr/2015:12:34:56 +0000",
"requestTimeEpoch": 1428582896000,
"identity": {
"cognitoIdentityPoolId": null,
"accountId": null,
"cognitoIdentityId": null,
"caller": null,
"accessKey": null,
"sourceIp": "127.0.0.1",
"cognitoAuthenticationType": null,
"cognitoAuthenticationProvider": null,
"userArn": null,
"userAgent": "Custom User Agent String",
"user": null
},
"path": "/blogs",
"resourcePath": "/{proxy+}",
"httpMethod": "GET",
"apiId": "1234567890",
"protocol": "HTTP/1.1"
}
}
</code></pre>
<p>I get this response:</p>
<pre><code>{
"statusCode": 307,
"headers": {
"content-length": "0",
"location": "https://0123456789.execute-api.us-east-1.amazonaws.com/blogs/?foo=bar"
},
"multiValueHeaders": {},
"body": "",
"isBase64Encoded": false
}
</code></pre>
<h3>Project Structure (simplified)</h3>
<pre class="lang-bash prettyprint-override"><code>โโโ aws_lambda_artifact.zip
โโโ src
โโโ main.py
โโโ routers
โ โโโ blog.py
โโโ models.py
โโโ database.py
</code></pre>
<h3>Main.py</h3>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
from mangum import Mangum
from src import models
from src.database import engine
from src.routers import blog, user
app = FastAPI()
handler = Mangum(app)
models.Base.metadata.create_all(bind=engine)
app.include_router(blog.router)
</code></pre>
<h3>routers/blog.py</h3>
<pre><code>from typing import List
from fastapi import APIRouter, Depends, status
from sqlalchemy.orm import Session
from src.database import get_db
from src.oauth2 import get_current_user
from src.repository import blog_repository
from src.schemas import ShowBlog, Blog, User
router = APIRouter(
prefix="/blogs",
tags=["blogs"]
)
@router.get("/", response_model=List[ShowBlog])
async def blogs(db: Session = Depends(get_db)):
return blog_repository.get_all(db)
@router.get("/{blog_id}", response_model=ShowBlog)
async def get_blog(blog_id: int, db: Session = Depends(get_db)):
return blog_repository.get(blog_id, db)
@router.post("/create-blog", status_code=status.HTTP_201_CREATED)
async def create_blog(request: Blog, db: Session = Depends(get_db)):
return blog_repository.create(request, db)
@router.put("/{blog_id}", status_code=status.HTTP_202_ACCEPTED)
async def update_blog(blog_id: int, request: Blog, db: Session = Depends(get_db)):
return blog_repository.update(blog_id, request, db)
@router.delete("/{blog_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_blog(blog_id: int, db: Session = Depends(get_db), current_user: User = Depends(get_current_user)):
return blog_repository.delete(blog_id, db)
</code></pre>
<h3>Deployment Steps</h3>
<ol>
<li>Installed dependencies to a <code>dependencies/</code> folder:
<pre class="lang-bash prettyprint-override"><code>pip3 install -r requirements.txt --target=dependencies --python-version 3.12
</code></pre>
</li>
<li>Added the <code>dependencies</code> and <code>src</code> directories to the <code>.zip</code> archive:
<pre class="lang-bash prettyprint-override"><code>(cd dependencies; zip ../aws_lambda_artifact.zip -r .)
zip aws_lambda_artifact.zip -u -r src
</code></pre>
</li>
</ol>
<h3>Issue</h3>
<p>The root <code>/blogs</code> endpoint is stuck in a redirect loop, but specific ID-based URLs (like <code>/blogs/1</code>) work fine. Any ideas why this is happening, and how I can resolve it?</p>
|
<python><amazon-web-services><aws-lambda><fastapi><redirect-loop>
|
2024-09-18 08:13:57
| 2
| 481
|
tail
|
78,997,019
| 6,074,182
|
In Python 3.12, why does 'รl' take less memory than 'ร'?
|
<p>I just read <a href="https://peps.python.org/pep-0393/" rel="noreferrer">PEP 393</a> and learned that Python's <code>str</code> type uses different internal representations, depending on the content. So, I experimented a little bit and was a bit surprised by the results:</p>
<pre><code>>>> sys.getsizeof('')
41
>>> sys.getsizeof('H')
42
>>> sys.getsizeof('Hi')
43
>>> sys.getsizeof('ร')
61
>>> sys.getsizeof('รl')
59
</code></pre>
<p>I understand that in the first three cases, the strings don't contain any non-ASCII characters, so an encoding with 1 byte per char can be used. Putting a non-ASCII character like <code>ร</code> in a string forces the interpreter to use a different encoding. Therefore, I'm not surprised that <code>'ร'</code> takes more space than <code>'H'</code>.</p>
<p>However, why does <code>'รl'</code> take less space than <code>'ร'</code>? I assumed that whatever internal representation is used for <code>'รl'</code> allows for an even shorter representation of <code>'ร'</code>.</p>
<p>I'm using Python 3.12, apparently it is not reproducible in earlier versions.</p>
|
<python><string><python-internals><python-3.12>
|
2024-09-18 07:30:44
| 1
| 2,445
|
Aemyl
|
78,996,929
| 7,916,348
|
Python recursion limits and functools.cache
|
<p><code>sys.setrecursionlimit</code> doesn't appear to affect functions declared using the <code>@cache</code> decorator:</p>
<pre class="lang-py prettyprint-override"><code>from functools import cache
import sys
sys.setrecursionlimit(100000)
@cache
def f1(n):
if n == 1: return 1
return n * f1(n-1)
mycache = {}
def f2(n):
if n in mycache:
return mycache[n]
if n == 1: val = 1
else: val = n * f2(n-1)
mycache[n] = val
return val
f2(90000) # this works
f1(600) # this fails at a depth of 500 with the error:
# "RecursionError: maximum recursion depth exceeded while calling a Python object"
</code></pre>
<p>The <code>RecursionError</code> thrown is also different from the usual "maximum recursion depth exceeded" raised when too many recursive calls are made.</p>
<p>I can't seem to replicate this without using <code>@cache</code> - custom decorators or callable class instances seem to respect the recursion limit.</p>
<p>What's going on here, and is there a way to change the recursion limit for functions declared using <code>@cache</code>?</p>
|
<python><decorator><stack-overflow><functools>
|
2024-09-18 07:02:12
| 0
| 792
|
Zachary Barbanell
|
78,996,709
| 9,097,114
|
Multipage PDF to single image with multi page conversion
|
<p>I am trying to convert a multipage PDF to a single image.</p>
<p>With my current code, I can only generate a list of images, one per page.</p>
<p>How can I create a single image containing all the pages of the PDF?</p>
<p>Below is my code:</p>
<pre><code>import requests
from pdf2image import convert_from_path
url = 'https://education.github.com/git-cheat-sheet-education.pdf'
r = requests.get(url, stream=True)
with open('F:/New folder/metadata.pdf', 'wb') as f:
f.write(r.content)
pages = convert_from_path('F:/New folder/metadata.pdf', 500,poppler_path=r'C:\poppler-0.68.0\bin')
for count, page in enumerate(pages):
page.save(f'F:/New folder/out{count}.jpg', 'JPEG')
</code></pre>
<p>Any suggestions or modifications to the above code?</p>
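<p>Since <code>convert_from_path</code> returns PIL images, one way (a sketch, assuming the pages should be stacked vertically on a white canvas) is to paste every page into one tall image:</p>

```python
from PIL import Image

def stack_pages(pages):
    # One canvas as wide as the widest page and as tall as all pages combined.
    width = max(p.width for p in pages)
    height = sum(p.height for p in pages)
    combined = Image.new('RGB', (width, height), 'white')
    y = 0
    for page in pages:
        combined.paste(page, (0, y))
        y += page.height
    return combined

# Hypothetical usage with the existing `pages` list:
#   stack_pages(pages).save('F:/New folder/out.jpg', 'JPEG')
```

<p>At 500 DPI the combined image can become very large; a lower DPI or JPEG quality setting may be needed in practice.</p>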
|
<python><pdf>
|
2024-09-18 05:48:22
| 3
| 523
|
san1
|
78,996,541
| 4,701,852
|
Python Shapely - Selection of a given part of a Geopandas geometries
|
<p>I've got a <strong>geopandas</strong> GeoDataFrame (EPSG:4326) that, when plotted, gives this result (a road intersection):</p>
<p><a href="https://i.sstatic.net/jtJCpJ4F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtJCpJ4F.png" alt="enter image description here" /></a></p>
<p>What I'm trying to achieve is to "trim" these geometries around the center of the intersection. The resulting <strong>geopandas</strong> GeoDataFrame would look something like this:</p>
<p><a href="https://i.sstatic.net/2fKJi2xM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fKJi2xM.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to achieve this using Python/Geopandas?</p>
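<p>One way (a sketch with hypothetical stand-in data; the real GeoDataFrame and the intersection's center point are assumed to be known) is to intersect each geometry with a buffer around the center. With GeoPandas the same idea is <code>gdf.geometry.intersection(clip_area)</code> or <code>gdf.clip(clip_area)</code>; the Shapely core of it:</p>

```python
from shapely.geometry import LineString, Point

# Hypothetical stand-ins: two crossing roads and the intersection center.
roads = [LineString([(-10, 0), (10, 0)]), LineString([(0, -10), (0, 10)])]
center = Point(0, 0)

# Keep only the part of each geometry within the given radius of the center.
clip_area = center.buffer(3)                      # radius in CRS units
trimmed = [road.intersection(clip_area) for road in roads]
```

<p>Note that in EPSG:4326 the buffer radius is in degrees, so reprojecting to a metric CRS (e.g. a local UTM zone) before buffering gives a more meaningful trim distance.</p>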
|
<python><shapely>
|
2024-09-18 04:23:23
| 1
| 361
|
Paulo Henrique PH
|
78,996,443
| 10,461,632
|
How can I get the timezone based on the latitude and longitude using python?
|
<p>I want to get the timezone of a specific latitude and longitude, but formatted in a specific way. For example, instead of returning <code>America/New_York</code>, I'd like <code>Eastern</code>/<code>Eastern Standard Time</code>/<code>EST</code>. I tried to search the documentation for <a href="https://timezonefinder.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>timezonefinder</code></a>, but didn't see any options for changing the output. I also haven't found anything online about converting <code>America/New_York</code> to <code>EST</code>. I can always create a dictionary mapping to achieve what I want, but I was hoping there was a more efficient way since I'll be working with more than what is shown in the example below.</p>
<p>Here's my code for a few examples with different timezones.</p>
<pre><code>from timezonefinder import TimezoneFinder
tf = TimezoneFinder()
tz1 = tf.timezone_at(lat=40.4467, lng=-80.0158)
tz2 = tf.timezone_at(lat=39.7439, lng=-105.02)
tz3 = tf.timezone_at(lat=37.403, lng=-121.97)
print(tz1)
print(tz2)
print(tz3)
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th>What I get</th>
<th>What I want</th>
</tr>
</thead>
<tbody>
<tr>
<td>tz1</td>
<td>America/New_York</td>
<td>Eastern, EST, etc</td>
</tr>
<tr>
<td>tz2</td>
<td>America/Denver</td>
<td>Mountain, MST, etc</td>
</tr>
<tr>
<td>tz3</td>
<td>America/Los_Angeles</td>
<td>Pacific, PST, etc</td>
</tr>
</tbody>
</table></div>
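<p>The abbreviation is not stored in the IANA name itself, but it can be derived without a hand-written mapping: the stdlib <code>zoneinfo</code> module resolves the IANA name, and <code>tzname()</code> then yields the abbreviation for a concrete moment (it differs between standard and daylight time, e.g. EST vs EDT, which is why a date is required):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def tz_abbreviation(iana_name: str, when: datetime) -> str:
    # Attach the zone to the datetime and ask the tz database which
    # abbreviation applies at that moment (DST is handled automatically).
    return when.replace(tzinfo=ZoneInfo(iana_name)).tzname()

tz_abbreviation('America/New_York', datetime(2024, 1, 15, 12, 0))  # 'EST'
tz_abbreviation('America/New_York', datetime(2024, 7, 15, 12, 0))  # 'EDT'
```

<p>Long names such as "Eastern" or "Eastern Standard Time" are not part of the tz database, so for those a small mapping (or a locale-data library such as Babel) would still be needed.</p>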
|
<python><python-3.x><timezone>
|
2024-09-18 03:16:20
| 0
| 788
|
Simon1
|
78,995,649
| 327,258
|
New to fifo in linux, reading with python produces gibberish? (binary)
|
<p>I am trying to grab some sensor data so I can push it into InfluxDB. We have very old sensors writing this data to a server via a FIFO on that server. I know I can read it (I made a quick test program):</p>
<pre><code># reader.py
import os
FIFO = '/var/axis/sensor1'
FIFO_PATH = FIFO
fifo_path = FIFO
print("about to "+fifo_path)
# Ensure the FIFO exists
if not os.path.exists(fifo_path):
raise IOError("The FIFO '{0}' does not exist. Please create it using 'mkfifo {0}'.".format(fifo_path))
print("Opening FIFO {0} for reading...".format(fifo_path))
fifo_fd = os.open(fifo_path, os.O_RDONLY)
fifo_file = os.fdopen(fifo_fd)
try:
while True:
line = fifo_file.read(1)
if not line:
break
# Print the line to the screen
print(line),
finally:
fifo_file.close()
print("FIFO Closed.")
</code></pre>
<p>So when I run this while the sensor is writing data, I see stuff like this scroll past the screen:</p>
<pre><code>@eN?>??Y?n??A?)??[Z@?c
@eN?>??\?n??A?)??[Z@?(
@eN?>`?n??A?)??[Z@?? @eN?>33c?n??A?)??[Z@?? @eN?>fff?n??A?)??[Z@?? @eN?>??i?n??A?)??[Z@? @eN?>??l?n??A?)??[Z@? @eN?>p?n??A?)??[Z@? @eN?>33s?n??A?)??[Z@? @eN?>ffv?n??A?)??[Z@? @eN?>??y?n??A?)??[Z@? @eN?>??|?n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>?ฬ?n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>?ฬ?n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>?ฬฌ?n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>?ฬผ?n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@@eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>??n??A?)??[Z@? @eN?>33??n??A?)??[Z@? @eN?>ff??n??A?)??[Z@? @eN?>????n??A?)??[Z@? @eN?>????n??A?)??[Z@P? @???z?>?n??A?)??[Z@?~
@???z?>33?n??A?)??[Z@?_
@???z?>ff?n??A?)??[Z@??
</code></pre>
<p>Clearly a readline is the wrong approach; I guess this data is binary? Are FIFOs always binary? If it is binary, does that mean I have to hunt down how this old system/sensor writes the FIFO in order to know how to load and parse it?</p>
<p>I want to know what I am reading so I know how to package it as JSON to send to InfluxDB.</p>
<p>EDIT: I believe I found some C code showing how the data structure is read. Given the structure, can I read this in Python?</p>
<p>C Structure of the data:</p>
<pre><code>typedef struct axis_data {
double now;
double p;
double v;
double current;
double error;
} AXIS_DATA;
fifoList.fd = open("/var/axis/sensor1", O_RDONLY);
AXIS_DATA data;
rb=read(this->fifoList.fd, &data, sizeof(data));
data.p, data.v, data.current, data.error
</code></pre>
<p>Paraphrased C, of course, and my C is rusty, but I believe I want to do the same thing in Python: create an equivalent structure and read that many bytes into it. My attempt at that is below (I have somehow never done binary I/O in Python before):</p>
<pre><code># reader.py
import os
from ctypes import *
class AXIS_DATA(Structure):
_fields_ = [('now', c_double),
('p', c_double),
('v', c_double),
('current', c_double),
('error', c_double)]
FIFO = '/var/axis/sensor1'
#FIFO = 'mytest'
FIFO_PATH = FIFO
fifo_path = FIFO
print("about to "+fifo_path)
# Ensure the FIFO exists
if not os.path.exists(fifo_path):
raise IOError("The FIFO '{0}' does not exist. Please create it using 'mkfifo {0}'.".format(fifo_path))
print("Opening FIFO {0} for reading...".format(fifo_path))
fifo_fd = os.open(fifo_path, os.O_RDONLY)
fifo_file = os.fdopen(fifo_fd, 'rb')  # readinto() needs a binary-mode file
try:
    while True:
        dat = AXIS_DATA()
        # this just froze
        if fifo_file.readinto(dat) != sizeof(dat):
            break  # short read: the writer closed its end
        # Print the record to the screen
        print((dat.now, dat.p, dat.v, dat.current, dat.error))
finally:
fifo_file.close()
print("FIFO Closed.")
</code></pre>
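<p>For what it's worth, the same fixed-size record can also be decoded with the standard <code>struct</code> module instead of <code>ctypes</code>; a minimal sketch, assuming the C writer emits five native doubles with no extra padding:</p>

```python
import struct

# Layout matching the C AXIS_DATA struct: five doubles.
# "=" means native byte order with standard sizes; with only doubles
# in the struct there is no padding to worry about.
AXIS_FMT = "=5d"
AXIS_SIZE = struct.calcsize(AXIS_FMT)  # 40 bytes

def parse_axis_record(raw):
    """Unpack one 40-byte record into (now, p, v, current, error)."""
    return struct.unpack(AXIS_FMT, raw)

# Round-trip a fabricated record to show the decoding.
packed = struct.pack(AXIS_FMT, 1.5, 2.0, 3.0, 0.25, 0.0)
now, p, v, current, error = parse_axis_record(packed)
```

<p>On the FIFO side, looping on <code>os.read(fifo_fd, AXIS_SIZE)</code> and accumulating bytes until a full 40-byte record has arrived would supply <code>raw</code>.</p>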
|
<python><linux><fifo>
|
2024-09-17 20:08:07
| 2
| 3,856
|
Codejoy
|
78,995,607
| 1,182,299
|
How to calculate the center-line or baseline of a polygon
|
<p>I work on a segmentation model for historical documents. My dataset has the text lines as polygons but for a usable model training I need also the center-lines or baselines of the polygons.</p>
<p>As an example, these are the coordinates of the line as tuples of polygons:</p>
<pre><code>603,1220 600,1288 687,1304 691,1304 694,1304 726,1291 762,1278 801,1291 843,1307 846,1307 850,1307 924,1294 976,1285 1009,1294 1054,1310 1057,1310 1061,1310 1158,1298 1203,1291 1223,1298 1262,1314 1265,1314 1268,1314 1272,1314 1320,1298 1366,1285 1388,1301 1405,1310 1408,1310 1411,1310 1745,1323 1749,1323 1752,1323 1788,1304 1810,1294 1891,1294 1963,1307 2057,1327 2060,1327 2161,1310 2222,1301 2317,1314 2359,1317 2362,1317 2388,1314 2440,1304 2459,1314 2495,1336 2498,1336 2502,1336 2505,1336 2602,1317 2651,1307 2738,1317 2878,1336 2881,1336 2884,1336 2943,1320 2972,1314 3248,1323 3258,1262 3254,1203 603,1151 603,1220
</code></pre>
<p>And this is the baseline of the above coordinates:</p>
<pre><code>603,1220 2138,1255 3258,1262
</code></pre>
<p>I tried to use Python's <code>shapely.centroid</code> to calculate the baseline, but without a reliable result.</p>
<pre><code>from shapely import LineString, MultiPoint, Polygon, centroid
print(centroid(Polygon([(584, 1603), (580, 1654), (649, 1680), (652, 1680), (824, 1684), (827, 1684), (830, 1684), (918, 1661), (947, 1684), (950, 1684), (954, 1684), (957, 1684), (1061, 1671), (1106, 1687), (1109, 1687), (1113, 1687), (1116, 1687), (1187, 1664), (1343, 1690), (1346, 1690), (1732, 1671), (1758, 1671), (1797, 1671), (2158, 1693), (2161, 1693), (2164, 1693), (2193, 1677), (2196, 1677), (2239, 1700), (2242, 1700), (2245, 1700), (2248, 1700), (2320, 1680), (2323, 1680), (2404, 1703), (2407, 1703), (2411, 1703), (2667, 1687), (2712, 1684), (2719, 1687), (2771, 1710), (2774, 1710), (2777, 1710), (3261, 1693), (3267, 1638), (3264, 1570), (584, 1538), (584, 1603)])))
</code></pre>
<p>Result:</p>
<pre><code>POINT (1920.3383138928723 1620.2116469627338)
</code></pre>
<p>Image:
<a href="https://i.sstatic.net/WxqG1JYw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxqG1JYw.png" alt="enter image description here" /></a></p>
<p>EDIT: I forgot to mention, if this matters or not, my image has the following measures in this case: <code>imageWidth="4381"</code> <code>imageHeight="5841"</code></p>
<p>After some research I came up with this code, though it gives me the upper line of the polygon rather than a central line:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Given polygon points (as x, y coordinates) for the shape
polygon_points = [
(603, 1220), (600, 1288), (687, 1304), (691, 1304), (694, 1304),
(726, 1291), (762, 1278), (801, 1291), (843, 1307), (846, 1307),
(850, 1307), (924, 1294), (976, 1285), (1009, 1294), (1054, 1310),
(1057, 1310), (1061, 1310), (1158, 1298), (1203, 1291), (1223, 1298),
(1262, 1314), (1265, 1314), (1268, 1314), (1272, 1314), (1320, 1298),
(1366, 1285), (1388, 1301), (1405, 1310), (1408, 1310), (1411, 1310),
(1745, 1323), (1749, 1323), (1752, 1323), (1788, 1304), (1810, 1294),
(1891, 1294), (1963, 1307), (2057, 1327), (2060, 1327), (2161, 1310),
(2222, 1301), (2317, 1314), (2359, 1317), (2362, 1317), (2388, 1314),
(2440, 1304), (2459, 1314), (2495, 1336), (2498, 1336), (2502, 1336),
(2505, 1336), (2602, 1317), (2651, 1307), (2738, 1317), (2878, 1336),
(2881, 1336), (2884, 1336), (2943, 1320), (2972, 1314), (3248, 1323),
(3258, 1262), (3254, 1203), (603, 1151), (603, 1220)
]
image_width = 4381
image_height = 5841
normalized_polygon_points = [(x / image_width, y / image_height) for x, y in polygon_points]
# Extract x and y coordinates from the polygon points
x_coords, y_coords = zip(*normalized_polygon_points)
# Create a dictionary to store top and bottom y-values for each x-coordinate
x_to_y_values = {}
for x, y in normalized_polygon_points:
if x not in x_to_y_values:
x_to_y_values[x] = {'top': y, 'bottom': y}
else:
# Update the top and bottom y-values for the x-coordinate
if y < x_to_y_values[x]['top']:
x_to_y_values[x]['top'] = y # Update top
if y > x_to_y_values[x]['bottom']:
x_to_y_values[x]['bottom'] = y # Update bottom
# Calculate the centerline by averaging the top and bottom y-values for each x-coordinate
centerline_points = []
for x in sorted(x_to_y_values.keys()):
top_y = x_to_y_values[x]['top']
bottom_y = x_to_y_values[x]['bottom']
center_y = (top_y + bottom_y) / 2 # Midpoint between top and bottom
centerline_points.append((x, center_y))
# Extract the centerline x and y points
centerline_x, centerline_y = zip(*centerline_points)
# Plot the polygon and the centerline
plt.figure(figsize=(10, 6))
plt.plot(x_coords, y_coords, label='Polygon', color='blue', linestyle='--', marker='o')
plt.plot(centerline_x, centerline_y, label='Centerline', color='red', marker='o')
plt.title('Polygon with Centerline')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.grid(True)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/2ZkezXM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZkezXM6.png" alt="enter image description here" /></a></p>
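<p>A dependency-free sketch of that midline idea, which avoids the upper-line problem by intersecting vertical cut lines with the polygon boundary and taking the midpoint of each cut (assumes a simple, roughly horizontal polygon like these text lines):</p>

```python
def vertical_cut_ys(polygon, x):
    """Y-values where the vertical line at `x` crosses the polygon edges."""
    ys = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if x1 == x2:
            continue  # vertical edge: its endpoints are covered by neighbours
        if min(x1, x2) <= x <= max(x1, x2):
            t = (x - x1) / (x2 - x1)
            ys.append(y1 + t * (y2 - y1))
    return ys

def centerline(polygon, samples=20):
    """Midpoints of vertical cuts through a simple, wide polygon."""
    xs = [p[0] for p in polygon]
    min_x, max_x = min(xs), max(xs)
    line = []
    for i in range(samples + 1):
        x = min_x + (max_x - min_x) * i / samples
        ys = vertical_cut_ys(polygon, x)
        if ys:
            line.append((x, (min(ys) + max(ys)) / 2))
    return line

# Sanity check on a 10 x 2 horizontal band: the midline should sit at y = 1.
band = [(0, 0), (10, 0), (10, 2), (0, 2)]
mid = centerline(band, samples=5)
```

<p>A centroid, by contrast, is a single point and cannot represent a line, which is why <code>shapely.centroid</code> alone cannot produce a baseline.</p>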
|
<python><polygon><shapely><baseline>
|
2024-09-17 19:55:45
| 2
| 1,791
|
bsteo
|
78,995,579
| 13,606,345
|
Is Python multiprocessing.Lock also thread safe?
|
<p>I am wondering if <code>Lock</code> imported from the <code>multiprocessing</code> module is also thread-safe.</p>
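<p>A quick empirical check (illustrative, not a proof): <code>multiprocessing.Lock</code> is built on an OS-level semaphore, so the same instance can also serialize threads within one process:</p>

```python
import threading
from multiprocessing import Lock

# One multiprocessing.Lock shared by several threads in the same process.
lock = Lock()
counter = {"n": 0}

def bump():
    for _ in range(1000):
        with lock:
            counter["n"] += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```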
|
<python><multithreading><multiprocessing><thread-safety>
|
2024-09-17 19:48:11
| 0
| 323
|
Burakhan Aksoy
|
78,995,550
| 267,364
|
How to setup a celery worker to consume high priority tasks only?
|
<p>Celery allows routing tasks by task name, so that load from one kind of task does not delay tasks of other kinds.</p>
<p>Celery also handles priority on a manual scale (often 0 = top priority through 10 = lowest priority).</p>
<p>Priority can be set when you queue the task:</p>
<pre class="lang-py prettyprint-override"><code>my_task.apply_async(args=("daily task",), priority=7)
my_task.apply_async(args=("user interaction task",), priority=2)
</code></pre>
<p>This way, the same task can have a different priority depending on whether it was triggered by a user interaction or by a background job.</p>
<p>What I think would be nice is, for example, a priority worker that only processes tasks with a priority below a given threshold (e.g. 5), alongside a normal worker handling every task (including high-priority ones). That would keep a pile of long-running, low-priority tasks from increasing the latency of latency-critical ones.</p>
<p>Is this even possible?</p>
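<p>There is no built-in per-worker priority threshold, but a common workaround is to split the work across dedicated queues and start one worker that consumes only the urgent queue plus one that consumes everything. A configuration sketch (the queue names and broker URL are made-up placeholders, not part of the question):</p>

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")  # placeholder broker URL

@app.task
def my_task(label):
    return label

# Enqueue onto the urgent queue for user interactions, default otherwise;
# per-message priority can still be layered on top.
# my_task.apply_async(args=("user interaction task",), queue="urgent", priority=2)
# my_task.apply_async(args=("daily task",), queue="default", priority=7)
```

<p>The latency-critical worker would then be started with something like <code>celery -A tasks worker -Q urgent</code>, while a second worker consumes both queues (<code>-Q urgent,default</code>).</p>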
|
<python><celery>
|
2024-09-17 19:36:29
| 0
| 6,477
|
christophe31
|
78,995,526
| 5,468,905
|
Cannot get pylance to properly typecheck variable in extended class
|
<p>Here's my setup simplified:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC
from typing import TypeVar
from pydantic import BaseModel
_M = TypeVar("_M", bound=BaseModel)
class A(BaseModel):
a = 1
b = 2
c = 3
class Base(ABC):
cfg_model: type[_M]
def __init__(self):
self.cfg = self.cfg_model()
class ChildA(Base):
cfg_model = A
def run(self):
self.cfg_model # (variable) model_class: type[_M]
self.cfg # (variable) cfg: _M
# pylance fails to detect properties in `self.cfg`
self.cfg.a
self.cfg.b
self.cfg.c
</code></pre>
<p>The idea is that there are multiple classes extending the base abstract class, each having its own config model.</p>
<p>But I can't seem to figure out how to make Pylance (I'm using Visual Studio Code) correctly typecheck <code>self.cfg</code> in <code>ChildA</code>'s <code>run</code> method as <code>A</code>.</p>
<p><strong>UPD:</strong> Turning <code>Base</code> into generic class helped:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC
from typing import Generic, TypeVar
from pydantic import BaseModel
_M = TypeVar("_M", bound=BaseModel)
class A(BaseModel):
a = 1
b = 2
c = 3
class Base(ABC, Generic[_M]):
cfg_model: type[_M]
def __init__(self):
self.cfg = self.cfg_model()
class ChildA(Base[A]):
cfg_model = A
def run(self):
self.cfg_model # (variable) model_class: type[A]
self.cfg # (variable) cfg: A
# all properties are correctly typechecked
self.cfg.a
self.cfg.b
self.cfg.c
</code></pre>
<p>However, it irks me that I need to provide the model name twice (as the class type argument and as the class variable). Can this be avoided somehow?</p>
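<p>One possible way to avoid naming the model twice is to have the base class read the type argument back out of <code>Base[A]</code> in <code>__init_subclass__</code>. A sketch using plain classes instead of pydantic, to stay self-contained; the automatic lookup relies on <code>__orig_bases__</code>, which is set for subclasses of parameterized generics:</p>

```python
from abc import ABC
from typing import Generic, TypeVar, get_args, get_origin

_M = TypeVar("_M")

class Base(ABC, Generic[_M]):
    cfg_model: type  # filled in automatically by __init_subclass__

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Recover A from `class ChildA(Base[A])` so it is written only once.
        for base in getattr(cls, "__orig_bases__", ()):
            if get_origin(base) is Base:
                (cls.cfg_model,) = get_args(base)

    def __init__(self):
        self.cfg = self.cfg_model()

class A:  # plain stand-in for the pydantic model
    def __init__(self):
        self.a, self.b, self.c = 1, 2, 3

class ChildA(Base[A]):
    pass

child = ChildA()
```

<p>Pylance then sees <code>self.cfg</code> as <code>A</code> through the generic parameter, and the class body no longer repeats <code>cfg_model = A</code>.</p>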
|
<python><python-typing><pyright>
|
2024-09-17 19:25:15
| 0
| 314
|
HYBRID BEING
|
78,995,452
| 3,670,765
|
SIGFPE while importing numpy into Python3
|
<p>I ran into a very odd issue. If I have the following C and Python codes:</p>
<pre class="lang-c prettyprint-override"><code>#include <Python.h>
int main(int argc, char *argv[])
{
Py_Initialize();
FILE* to_run_script= fopen("script.py", "r");
if (to_run_script != NULL)
PyRun_SimpleFile(to_run_script, "script.py");
Py_Finalize();
return 0;
}
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>def func1():
print("numpy NOT YET imported!!\n")
import numpy
print("numpy is imported!!\n")
return 1.5
func1()
</code></pre>
<p>Then everything works as expected, Python starts and I get both output lines.</p>
<p>However, when I embed the same piece of code in our code (which does some setup, but I'm not exactly sure what) then the <code>import numpy</code> statement throws a SIGFPE (more precisely, FPE_FLTOVF, at least that's what strace says) and the code exits without printing the second line.</p>
<p>To further muddy the waters, if I run through valgrind the very same code that produced SIGFPE, then everything works again.</p>
<p>At this point I'm baffled and would be glad for any suggestions for what I should check.</p>
|
<python><c><numpy><sigfpe>
|
2024-09-17 19:00:42
| 0
| 973
|
LaszloLadanyi
|
78,995,379
| 15,848,470
|
How can I map a field of a polars struct from values of another field `a`, to values of another field `b`?
|
<p>I have a Polars dataframe with these columns. I want to replace each value in the lists in column c with the corresponding value from column b, based on the position that value has in column a.</p>
<pre><code>┌────────────────────┬──────────────────────────────────┬────────────────────┐
│ a                  ┆ b                                ┆ c                  │
│ ---                ┆ ---                              ┆ ---                │
│ list[f32]          ┆ list[f64]                        ┆ list[f32]          │
╞════════════════════╪══════════════════════════════════╪════════════════════╡
│ [1.0, 0.0]         ┆ [0.001143, 0.998857]             ┆ [0.0, 0.5, … 1.5]  │
│ [5.0, 6.0, … 4.0]  ┆ [0.000286, 0.000143, … 0.00357…  ┆ [0.0, 0.5, … 6.5]  │
│ [1.0, 0.0]         ┆ [0.005287, 0.994713]             ┆ [0.0, 0.5, … 1.5]  │
│ [0.0, 1.5, … 2.5]  ┆ [0.84367, 0.003858, … 0.000429…  ┆ [0.0, 0.5, … 3.5]  │
│ [5.0, 6.0, … 1.0]  ┆ [0.001286, 0.000286, … 0.35267…  ┆ [0.0, 0.5, … 6.5]  │
│ …                  ┆ …                                ┆ …                  │
│ [0.0, 1.0]         ┆ [0.990283, 0.009717]             ┆ [0.0, 0.5, … 1.5]  │
│ [5.0, 1.0, … 0.0]  ┆ [0.003001, 0.352672, … 0.42855…  ┆ [0.0, 0.5, … 6.5]  │
│ [0.0, 2.0, … 3.0]  ┆ [0.90383, 0.004716, … 0.000143…  ┆ [0.0, 0.5, … 3.5]  │
│ [2.0, 0.0, … 9.0]  ┆ [0.233352, 0.060446, … 0.00228…  ┆ [0.0, 0.5, … 10.5] │
│ [5.0, 8.0, … 11.0] ┆ [0.134467, 0.022578, … 0.00085…  ┆ [0.0, 0.5, … 12.5] │
└────────────────────┴──────────────────────────────────┴────────────────────┘
</code></pre>
<p>Here is my attempt:</p>
<pre><code>df = df.with_columns(
pl.struct("a", "b", "c").alias("d").struct.with_fields(
pl.field("c").replace(old=pl.field("a"), new=pl.field("b"), default=0)
)
)
</code></pre>
<p>Unfortunately, this yields the error <code>*** polars.exceptions.InvalidOperationError: `old` input for `replace` must not contain duplicates</code>. However, "a", the field being passed to the old argument is column "a", which is the unique values from <code>Expr.value_counts()</code>, so it shouldn't contain any duplicates. And indeed, <code> df.select(pl.col("lines").list.eval(pl.element().is_duplicated().any()).explode().any())</code> returns false.</p>
<p>Small chunk of the data to reproduce:</p>
<pre><code>df = pl.DataFrame([
pl.Series('a', [[1.0, 0.0], [5.0, 6.0, 1.0, 0.0, 3.0, 2.0, 4.0], [1.0, 0.0], [0.0, 1.5, 3.0, 1.0, 2.0, 0.5, 2.5], [5.0, 6.0, 0.0, 3.0, 2.0, 4.0, 1.0]]),
pl.Series('b', [[0.0011431837667905116, 0.9988568162332095], [0.0002857959416976279, 0.00014289797084881395, 0.2842240640182909, 0.5985995998856817, 0.019291226064589884, 0.09388396684767077, 0.003572449271220349], [0.005287224921406116, 0.9947127750785939], [0.8436696198913975, 0.0038582452129179764, 0.00014289797084881395, 0.10703058016576164, 0.007859388396684767, 0.03701057444984281, 0.00042869391254644185], [0.0012860817376393256, 0.0002857959416976279, 0.4645613032294941, 0.038153758216633325, 0.13561017433552444, 0.007430694484138326, 0.3526721920548728]]),
pl.Series('c', [[0.0, 0.5, 1.0, 1.5], [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5], [0.0, 0.5, 1.0, 1.5], [0.0, 0.5, 0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0, 2.5, 2.5, 3.0, 3.0, 3.5], [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5]]),
])
</code></pre>
<p>Example of what the output should be:
For the first row, the new column/struct field should look like <code>[0.998857, 0, ..., 0]</code>, because the values of "c" are <code>[0.0, 0.5, ... 1.5]</code> and <code>0</code> is at index 1 in column "a", and the value at index 1 in column "b" is <code>0.998857</code>. The other values of <code>0.5</code> and <code>1.5</code> would be the default 0 as they do not appear in "a".</p>
<p>Is this just not possible? I am really hoping to find a vectorized way to do this.</p>
<p>Any help is appreciated, thanks.</p>
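<p>Setting the Polars expression aside for a moment, the intended per-row semantics can be pinned down in plain Python; this also gives a reference implementation to test any vectorized version against (it reproduces the expected <code>[0.998857, 0, ..., 0]</code> for the first row):</p>

```python
def map_lists(a, b, c, default=0.0):
    """For each value in c, find its position in a and take the value of b
    at that position; values absent from a fall back to `default`."""
    lookup = dict(zip(a, b))
    return [lookup.get(v, default) for v in c]

# First row of the sample data:
row_a = [1.0, 0.0]
row_b = [0.001143, 0.998857]
row_c = [0.0, 0.5, 1.0, 1.5]
mapped = map_lists(row_a, row_b, row_c)
```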
|
<python><dataframe><vectorization><python-polars>
|
2024-09-17 18:38:35
| 1
| 684
|
GBPU
|
78,995,189
| 2,276,054
|
CP-SAT | OR-Tools: Select max. 5 out of 10 total options?
|
<p>I have an optimization problem where I need to pick max. 5 out of 10 available options according to different criteria. I was wondering how to encode this constraint. Is any of the 3 alternative versions below better/faster than the others? If not, I would go with the first one, as it is the most concise...</p>
<pre><code># create model and 10 variables
model = CpModel()
vars = []
for i in range(10):
vars.append(model.new_bool_var(f"var {i}"))
model.maximize(sum(v for v in vars))
# constraint: max. 5 variables can be true
# Version 1
model.add(sum(v for v in vars) <= 5)
# Version 2
var_sum = model.new_int_var(0, 5, "sum")
model.add(var_sum == sum(vars))
# Version 3
var_sum = model.new_int_var(0, 10, "sum") # sic! -> 10
model.add(var_sum == sum(vars))
model.add(var_sum <= 5)
# ...aaand solve it
solver = CpSolver()
solver.solve(model)
print(solver.objective_value)
</code></pre>
|
<python><or-tools><cp-sat>
|
2024-09-17 17:35:03
| 1
| 681
|
Leszek Pachura
|
78,995,179
| 22,407,544
|
What is the best way to handle potentially large file uploads in django?
|
<p>I've been reading the django docs and posts here on stackoverflow but I'm still not sure how. So far this is my code:</p>
<p>forms.py:</p>
<pre><code>def validate_file(file):
# Validate if no file submitted
if not file:
#raise ValidationError("No file submitted")
raise ValidationError("Error")
# Check file size (5 GB limit)
max_size = 5 * 1024 * 1024 * 1024 # 5 GB in bytes
if file.size > max_size:
#raise ValidationError("The maximum file size that can be uploaded is 5GB")
raise ValidationError("Error")
# Define allowed file types and their corresponding MIME types
allowed_types = {
# Audio formats
'wav': ['audio/wav', 'audio/x-wav'],
'mp3': ['audio/mpeg', 'audio/mp3'],
'mpga': ['audio/mpeg'],
'aac': ['audio/aac'],
...
class UploadFileForm(forms.Form):
file = forms.FileField(validators=[validate_file])
</code></pre>
<p>views.py:</p>
<pre><code>def transcribe_Submit(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
uploaded_file = request.FILES['file']
session_id = str(uuid.uuid4())
request.session['session_id'] = session_id
try:
transcribed_doc, created = TranscribedDocument.objects.get_or_create(id=session_id)
transcribed_doc.audio_file = uploaded_file
transcribed_doc.save()
...
</code></pre>
<p>models.py:</p>
<pre><code>class TranscribedDocument(models.Model):
id = models.CharField(primary_key=True, max_length = 40)
audio_file = models.FileField(max_length = 140, upload_to='example_upload_url/', null= True)
...
</code></pre>
<p>This is what I've been working with, but I've been reading that it is better to handle uploads in chunks so that large files don't overwhelm the website.</p>
<p>The website uploads directly to DigitalOcean Spaces with no files stored on the server, and the maximum file upload size allowed by my website is 10GB, if that is important.</p>
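<p>Independent of the Django specifics, the chunked pattern can be sketched in plain Python; Django's <code>UploadedFile.chunks()</code> gives you essentially this iteration for free, so only one chunk is ever held in memory at a time:</p>

```python
import hashlib
import io

def stream_in_chunks(fileobj, chunk_size=64 * 1024):
    """Yield a file-like object's contents in fixed-size chunks, so the
    whole upload never has to sit in memory at once."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example: hash a 1 MiB "upload" chunk by chunk.
payload = io.BytesIO(b"x" * (1024 * 1024))
digest = hashlib.sha256()
total = 0
for chunk in stream_in_chunks(payload):
    digest.update(chunk)
    total += len(chunk)
```

<p>With a remote store like Spaces, the same idea applies by feeding each chunk to a multipart upload rather than saving the whole file first (the helper above is illustrative, not a Django API).</p>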
|
<python><django>
|
2024-09-17 17:31:29
| 0
| 359
|
tthheemmaannii
|
78,994,944
| 4,560,685
|
Snowflake/ Snowpark "import sklearn" results in "no module found."
|
<p>I'm using a python worksheet in Snowflake.</p>
<p>When I got to information_schema.packages, scikit-learn is clearly there and installed.</p>
<p>But when I reference <code>'sklearn'</code> (I don't really understand where this full name vs. shortened name comes from, but <code>sklearn</code> is obviously the reference to the scikit-learn package), it says <code>module not found</code>.</p>
<p>My understanding is that it's pre-installed with Snowpark. So what's the deal? What am I doing wrong?</p>
|
<python><scikit-learn><snowflake-cloud-data-platform>
|
2024-09-17 16:14:07
| 2
| 983
|
user45867
|
78,994,681
| 3,458,788
|
Is it possible to install spaCy models using a different C++ compiler
|
<p>I am trying to install a spaCy model:</p>
<pre><code>pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
</code></pre>
<p>and getting the following error:</p>
<pre><code>error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
</code></pre>
<p>I do have a GCC compiler installed that is being recognized by the PATH:</p>
<pre><code>$ gcc --version
gcc.exe (GCC) 12.3.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
</code></pre>
<p>Is it possible to make <code>pip</code> use <code>gcc</code>?</p>
<p>EDIT:
The C++ compiler I am using comes with <a href="https://cran.r-project.org/bin/windows/Rtools/" rel="nofollow noreferrer">RTools</a> and is installed at <code>C:/rtools43/x86_64-w64-mingw32.static.posix/bin/gcc</code></p>
|
<python><c++><spacy>
|
2024-09-17 15:09:20
| 0
| 515
|
cdd
|
78,994,612
| 4,706,952
|
Adjust console width to print the whole Pandas dataframe
|
<p>When printing to the console in Python I get output like:</p>
<pre><code>                   Körpergrösse (cm)  ...  Motivation (1-10)
Körpergrösse (cm)           1.000000  ...           0.226576
Schuhgrösse                 0.710021  ...           0.096692
Alter (Jahre)              -0.206736  ...           0.203746
Motivation (1-10)           0.226576  ...           1.000000
[4 rows x 4 columns]
</code></pre>
<p>This is a pity because in my console there is more than enough space to print a lot more (in this example, the whole 4 x 4 correlation matrix).</p>
<hr />
<p><em>How can I make my console print the 'whole thing'? For example, by adjusting the console width? Or if that's not possible by doing some workaround before/while printing?</em></p>
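<p>A likely fix (assuming a regular terminal) is pandas' display options; setting both to <code>None</code> lets pandas auto-detect the console width and show every column:</p>

```python
import pandas as pd

# Let pandas use the full terminal width and never elide columns.
pd.set_option("display.width", None)        # None = auto-detect terminal width
pd.set_option("display.max_columns", None)  # None = always show every column

df = pd.DataFrame({f"col{i}": range(3) for i in range(10)})
rendered = repr(df)  # same text that print(df) would produce
```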
|
<python><pandas>
|
2024-09-17 14:50:42
| 1
| 7,517
|
symbolrush
|
78,994,566
| 9,846,358
|
How to Fetch GraphQL JSON Data from a URL in Python?
|
<p>I am trying to fetch GraphQL data from the URL:</p>
<blockquote>
<p><code>https://URL</code></p>
</blockquote>
<p>I've tried using Python's requests library to make a POST request, but I'm not sure how to structure the request to get the desired JSON data.</p>
<p>Here's what I've tried so far:</p>
<pre><code>import requests
url = "https://URL"
headers = {
"Content-Type": "application/json",
"Accept": "application/json"
}
# Example query, might need modification
query = """
{
races {
id
name
date
}
}
"""
response = requests.post(url, json={'query': query}, headers=headers)
if response.status_code == 200:
print(response.json())
else:
print(f"Failed to fetch data, status code: {response.status_code}")
</code></pre>
<p>However, this returns an error or no data, and I'm not sure if my query is correct or if there's something else wrong with my request. Could anyone guide me on how to properly structure the request and the query?</p>
<p>Do I need to adjust the URL or headers?
How can I confirm if the endpoint supports GraphQL and how to form valid queries for it?
Any help would be greatly appreciated!</p>
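<p>One way to sanity-check the endpoint with only the standard library (the URL below is a placeholder): POST the standard GraphQL JSON envelope, and start with an introspection query to confirm the server actually speaks GraphQL before debugging your own query. A sketch:</p>

```python
import json
import urllib.request

def graphql_request(url, query, variables=None):
    """Build a POST request carrying the standard GraphQL JSON envelope:
    {"query": ..., "variables": ...}."""
    payload = {"query": query}
    if variables:
        payload["variables"] = variables
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        method="POST",
    )

# A minimal introspection query; if the server answers it, the endpoint
# really is GraphQL and your own query/field names are the next suspect.
req = graphql_request("https://example.com/graphql",
                      "{ __schema { queryType { name } } }")
```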
|
<python><json><python-requests><graphql>
|
2024-09-17 14:37:00
| 1
| 797
|
Mary
|
78,994,516
| 32,043
|
How to initialize db connection in FastAPI with lifespan events?
|
<p>I would like to initialize a DB connection for my FastAPI application and tried to follow the tutorial for <a href="https://fastapi.tiangolo.com/advanced/events/#lifespan" rel="nofollow noreferrer">Lifespan Events</a>.</p>
<p>I set up a lifecycle function and plugged it into my app and within the function, the variable is initialized but after the start up, the variable remains <code>None</code>.</p>
<pre><code>db_session = None
@asynccontextmanager
async def lifespan(app: FastAPI):
db_session = DBConnection("file.db")
if db_session.get_db_connection() is None:
raise InitializationError("Could not get access database file.")
print(db_session.get_test_value()) # Prints out a valid value
yield
app = FastAPI(lifespan=lifespan)
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
print(db_session) # Prints None
</code></pre>
<h1>Versions</h1>
<ul>
<li>Python 3.9</li>
<li>FastAPI 0.114.2</li>
<li>Pydantic 2.9.1</li>
<li>uvicorn 0.30.6</li>
</ul>
<h1>What I tried</h1>
<p>I initialized the value with [] instead of None.</p>
<p>I initialized the value with {} instead of None (like in the tutorial).</p>
<p>I plugged the db session into <code>db_session["db"]</code> instead of assigning it directly; the map remains empty.</p>
<p>I initialized the value in <code>lifespan</code> with a string, just to test.</p>
<p>I declared the variable <code>db_session</code> as global in the <code>lifespan</code> function before assigning it the value.</p>
<p>All tests lead to the same result, that the value is as initialized outside of the <code>lifespan</code> function (<code>None</code> or <code>[]</code> or <code>{}</code>).</p>
<p>The error persists during the handling of the requests.</p>
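<p>Two separate things may be at play here. First, plain assignment inside <code>lifespan</code> creates a function-local name and leaves the module-level variable untouched; a minimal reproduction of just that scoping behaviour:</p>

```python
# Module-level name, as in the FastAPI app.
db_session = None

def lifespan_broken():
    db_session = "connected"   # creates a *local* variable; global untouched
    return db_session

def lifespan_fixed():
    global db_session          # rebind the module-level name instead
    db_session = "connected"

lifespan_broken()
after_broken = db_session      # still None

lifespan_fixed()
after_fixed = db_session       # now "connected"
```

<p>Second, even with <code>global</code>, the module-level <code>print(db_session)</code> in the question runs at import time, before FastAPI has ever called <code>lifespan</code>, so it always shows the initial value. Inside request handlers the usual pattern is to store the connection on <code>app.state</code> from within <code>lifespan</code> and read it back via <code>request.app.state</code>.</p>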
|
<python><fastapi>
|
2024-09-17 14:23:16
| 2
| 24,231
|
guerda
|
78,994,503
| 6,751,456
|
django StreamingHttpResponse to return large files
|
<p>I need to get pdf files from s3 and return the same file to the frontend.</p>
<pre><code>def stream_pdf_from_s3(request, file_key):
s3_client = boto3.client('s3')
try:
response = s3_client.get_object(Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=file_key)
pdf_stream = response['Body']
# Use iter_chunks() for efficient streaming
return StreamingHttpResponse(pdf_stream.iter_chunks(chunk_size=65536), content_type='application/pdf')
except Exception as e:
return HttpResponse(f"Error fetching PDF: {e}", status=500)
</code></pre>
<p>But in the browser's network tab, it seems that no bytes are streamed even after a longer time. The request is in <code>pending</code> state.</p>
<p>The expectation was bytes soon be returned in chunks immediately.</p>
<p>What could be the reason and is there any improper configuration with the code?</p>
|
<python><django><large-files><streaminghttpresponse>
|
2024-09-17 14:20:29
| 0
| 4,161
|
Azima
|
78,994,467
| 12,466,687
|
Unable to concatenate dataframes in streamlit
|
<p>I am trying to concatenate all the <code>dataframes</code> whose names start with the <code>user_</code> string in Streamlit, but I have been getting an error.</p>
<p>sample code:</p>
<pre><code>import streamlit as st
import pandas as pd
st.set_page_config(page_title="Science",
layout='wide',
initial_sidebar_state="expanded")
# sample dataframes
st.session_state.user_df1 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
st.session_state.user_df2 = pd.DataFrame({'A': [5, 6], 'B': [7, 8]})
st.session_state.df3 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
# list of dataframes starting with user_
st.session_state.users_list = [name for name in st.session_state if name.startswith('user_')]
st.write(st.session_state.users_list)
# using eval to evaluate string dataframe names
# st.write([eval(st.session_state.name) for st.session_state.name in st.session_state.users_list])
st.session_state.df_final = pd.concat([eval(st.session_state.name) for st.session_state.name
in st.session_state.users_list],
ignore_index=True)
st.table(st.session_state.df_final)
</code></pre>
<p>This whole logic works without <code>streamlit</code>, but in Streamlit I am getting an error, and I'm not sure what's wrong.</p>
<p>This is the logic/code that works without streamlit:</p>
<pre><code>import pandas as pd
user_df1 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
user_df2 = pd.DataFrame({'A': [5, 6], 'B': [7, 8]})
df3 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
users_list = ['user_df1','user_df2']
print([eval(name) for name in users_list])
df_final = pd.concat([eval(name) for name in users_list],ignore_index=True)
print('----df_final------')
print(df_final)
</code></pre>
<p>output:</p>
<pre><code>[ A B
0 1 3
1 2 4, A B
0 5 7
1 6 8]
----df_final------
A B
0 1 3
1 2 4
2 5 7
3 6 8
</code></pre>
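<p>One probable culprit: inside Streamlit the dataframes are entries of <code>st.session_state</code>, not module-level variables, so <code>eval('user_df1')</code> has nothing to resolve. Since session state behaves like a dict keyed by strings, indexing it by name avoids <code>eval</code> entirely. A dependency-free sketch of the lookup (plain lists stand in for the dataframes):</p>

```python
# session_state behaves like a dict keyed by strings, so stored frames
# can be fetched by name directly instead of eval()-ing variable names.
session_state = {
    "user_df1": [[1, 3], [2, 4]],   # stand-ins for the dataframes
    "user_df2": [[5, 7], [6, 8]],
    "df3": [[1, 3], [2, 4]],
}
users_list = [name for name in session_state if name.startswith("user_")]
frames = [session_state[name] for name in users_list]   # not eval(name)
combined = [row for frame in frames for row in frame]
```

<p>In Streamlit that becomes <code>pd.concat([st.session_state[name] for name in st.session_state.users_list], ignore_index=True)</code>.</p>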
|
<python><pandas><dataframe><streamlit>
|
2024-09-17 14:12:29
| 1
| 2,357
|
ViSa
|
78,994,274
| 12,466,687
|
how to panda concatenate on list with string dataframe names?
|
<p>This may sound silly but I have a <code>list</code> containing <strong>dataframe names</strong> and I am trying to <strong>concatenate</strong> those <strong>dataframes</strong> using a <strong>for loop</strong> or <strong>list comprehension</strong>, or to just <code>pd.concat()</code> them.</p>
<p>Code sample:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
df2 = pd.DataFrame({'A': [5, 6], 'B': [7, 8]})
print(df1)
print(df2)
</code></pre>
<p><strong>This concatenation works, that I understand</strong></p>
<pre><code># ideally this is what I should have as list
users_list = [df1,df2]
print(users_list)
# this works that I know
print(pd.concat(users_list,ignore_index = True))
</code></pre>
<p><strong>Below is my situation and this is not working. So how can I handle this ?</strong></p>
<pre><code># but this is what I have
users_list2 = ['df1','df2']
print(users_list2)
# but this is what I am trying to achieve as I have list of dataframe names due to some processing
print(pd.concat(users_list2,ignore_index = True))
</code></pre>
<p>So I am basically facing an error because <code>users_list2</code> contains dataframe names as strings, not the dataframes themselves.</p>
<p>Appreciate any help here.</p>
|
<python><pandas><dataframe><concatenation>
|
2024-09-17 13:24:00
| 2
| 2,357
|
ViSa
|
78,994,224
| 587,680
|
Register Hook to Intermediate Nodes in PyTorch
|
<p>Imagine you have something simple like this:</p>
<pre class="lang-py prettyprint-override"><code>import torch
x = torch.tensor([4.0], requires_grad=True)
y = torch.tensor([2.0], requires_grad=True)
output = x * y + x / y
grad_x = torch.ones_like(output)
torch.autograd.grad(output, x, grad_outputs=grad_x)
</code></pre>
<p>which results in a computational graph like this:</p>
<p><a href="https://i.sstatic.net/9fEqzyKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9fEqzyKN.png" alt="enter image description here" /></a></p>
<p>Now I'd like to access the gradients of the intermediate nodes, i.e. <code>MulBackward0</code>, <code>DivBackward0</code> and <code>AddBackward0</code>. I know that PyTorch doesn't store them by default, and that I can make them explicit and then call <code>retain_grad</code> etc.</p>
<p>But is there the possibiltiy of just attaching a hook to any of those nodes without having to loop through the graph etc.?</p>
|
<python><pytorch>
|
2024-09-17 13:10:51
| 1
| 532
|
xotix
|
78,994,156
| 9,189,389
|
facet figure - polynomial + linear in Altair
|
<p>I want to make a facet figure with linear and polynomial regression in Altair. Layering the scatter and linear regression plots is understandable enough; however, adding the polynomial analysis on top seems problematic, and I cannot display the result. I also tried to work around it by passing a degree list of 1, 3, and 5 as in the Altair Gallery, and was unsuccessful again.</p>
<p>Can you help me understand the issue?</p>
<p>Here is my data snippet:</p>
<pre><code>years = list(range(2024, 2051))
rcp_scenarios = ['RCP2.6', 'RCP4.5', 'RCP6.0', 'RCP8.5']
data = {"YEAR": years}
for rcp in rcp_scenarios:
data[rcp] = np.random.uniform(low=500, high=2000, size=len(years))
df = pd.DataFrame(data)
df_melted = pd.melt(df, id_vars=["YEAR"], value_vars=rcp_scenarios,
var_name="RCP", value_name="RR")
</code></pre>
<p>And here is my Altair snippet:</p>
<pre><code>scatter = alt.Chart(df_melted).mark_circle(size=60, opacity=0.60).encode(
x=alt.X('YEAR'),
y=alt.Y('RR', title="Annual Precipitation [mm]"),
color='RCP',
tooltip=['YEAR', 'RR', 'RCP'])
linear_regression = alt.Chart(df_melted).transform_regression(
'YEAR', 'RR', method='linear', groupby=['RCP']
).mark_line(opacity=0.75).encode(
x='YEAR',
y='RR',
color='RCP')
polynomial_regression = alt.Chart(df_melted).transform_regression(
'YEAR', 'RR', method='poly', order=2, groupby=['RCP']
).mark_line(opacity=0.75, strokeDash=[5, 3]).encode(
x='YEAR',
y='RR',
color='RCP')
chart = scatter + linear_regression + polynomial_regression
chart = chart.facet(
columns=2,
facet=alt.Facet('RCP'))
chart.display()
</code></pre>
|
<python><altair><polynomials>
|
2024-09-17 12:55:19
| 1
| 464
|
Luckasino
|
78,993,852
| 2,155,362
|
How to load config.json file which define by myself?
|
<p>My work dir like below:</p>
<pre><code>base---
|--- dir1
| |---test2.py
|--- public.py
|--- config.json
|--- test1.py
</code></pre>
<p>In public.py, I coded like below:</p>
<pre><code>import json

def getCfg(key):
with open("config.json", 'r',encoding='utf-8') as f:
cfg=json.load(f)
return cfg[key]
</code></pre>
<p>In test1.py, I import public.py and invoke getCfg method,it's ok</p>
<pre><code>import public
public.getCfg("my_key")
</code></pre>
<p>and in test2.py, I import public.py and invoke getCfg method</p>
<pre><code>current = os.path.dirname(os.path.realpath(__file__))
parent = os.path.dirname(current)
sys.path.append(parent)
import public
public.getCfg("my_key")
</code></pre>
<p>system report error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'config.json'
</code></pre>
<p>I think something is wrong with the working directory. How can I make it work correctly in both test1.py and test2.py?
My Python version is 3.9. Thanks.</p>
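<p>A common fix is to resolve the config path relative to <code>public.py</code> itself rather than the process's current working directory; then both entry points work. A sketch (the <code>base_dir</code> parameter and the snake_case name are additions here, just to make the demo self-contained and testable):</p>

```python
import json
import os
import tempfile

def get_cfg(key, base_dir=None):
    """Load config.json from the directory this module lives in,
    not from whatever the current working directory happens to be."""
    base_dir = base_dir or os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(base_dir, "config.json"), encoding="utf-8") as f:
        return json.load(f)[key]

# Demo against a throwaway directory standing in for the package dir.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w", encoding="utf-8") as f:
        json.dump({"my_key": 42}, f)
    value = get_cfg("my_key", base_dir=d)
```

<p>With that change, <code>import public</code> works the same from test1.py and from dir1/test2.py, because the lookup no longer depends on where the process was started.</p>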
|
<python>
|
2024-09-17 11:32:50
| 1
| 1,713
|
user2155362
|
78,993,769
| 2,397,542
|
Deleting an Azure Loadbalancer Frontend IP configuration from python?
|
<p>I'm writing something to clean up our old Azure resources, and one thing that comes up from time to time is public-ips still bound to a kubernetes-created loadbalancer even after the Ingress was deleted.</p>
<p>Using the azure CLI, I can get the public IP, find the ipConfiguration from that, which is used to name the load balancer rules and frontend ipconfig for that IP. I can get the load balancer rule resource IDs and delete those with "az resource delete" or the ResourceManagementClient. <em>However</em> I can't do the same with the LB Front End config even though it does have a resource_id. The AZ CLI has <code>az network lb frontend-ip delete</code> but I can't find the Azure API that this is calling. <code>NetworkManagementClient.load_balancer_frontend_ip_configurations</code> only has list() and get() methods, not delete().</p>
<p>I did try looking at the AZ CLI source here, but it is not easy reading!
<a href="https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/network/aaz/latest/network/lb/frontend_ip/_delete.py#L64" rel="nofollow noreferrer">https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/network/aaz/latest/network/lb/frontend_ip/_delete.py#L64</a></p>
<p>What is the "approved" way to do this?</p>
|
<python><azure><network-programming><azure-sdk>
|
2024-09-17 11:07:00
| 1
| 832
|
AnotherHowie
|
78,993,417
| 4,432,498
|
Mapper class between API and DB field properties
|
<p>I would like to implement a kind of mapper between DB data and other data, such as data obtained from an API response.</p>
<p>The expected behavior is following:</p>
<pre><code>class IData:
a: str = ('db_a', 'type_a')
b: str = ('db_b', 'type_b')
c: str = ('db_c', 'type_c')
class MyData(IData):
a = 'api_a'
b = 'api_b'
c = 'api_c'
m = MyData() # or `m = MyData`
m.a.db_name -> 'db_a'
m.a.db_type -> 'type_a'
m.a.api_name -> 'api_a'
m.get_db_names() -> ['db_a', 'db_b', 'db_c']
m.get_db_types() -> ['type_a', 'type_b', 'type_c']
m.get_api_names() -> ['api_a', 'api_b', 'api_c'] # or Nones if some are not assigned in MyData definition
</code></pre>
<p>I implemented it like following:</p>
<pre><code>class _Field:
def __init__(self, db_name, db_type, api_name=None):
self.db_name = db_name
self.db_type = db_type
self.api_name = api_name
class _base:
def get_db_names(self):
return [self.__dict__[i].db_name for i in self.__dict__ if not i.startswith("_")]
def get_db_types(self):
return [self.__dict__[i].db_type for i in self.__dict__ if not i.startswith("_")]
def get_api_names(self):
return [self.__dict__[i].api_name for i in self.__dict__ if not i.startswith("_")]
class IData(_base):
def __init__(self):
self.a = _Field("db_a", "type_a")
self.b = _Field("db_b", "type_b")
self.c = _Field("db_c", "type_c")
class MyData(IData):
def __init__(self):
super().__init__()
self.a.api_name = "api_a"
self.b.api_name = "api_b"
self.c.api_name = "api_c"
</code></pre>
<p>Is there some better, simpler, more robust solution using <code>dataclasses</code> or <code>types</code> (namespaces) or something else?</p>
<p>It would also be nice to raise an exception if the <code>api_name</code> is not set for some attribute in <code>class MyData(IData)</code> when the user instantiates <code>m = MyData()</code>.</p>
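<p>One possible simplification (a sketch under the question's naming, collapsing the two-class pattern into plain class attributes) uses a frozen dataclass per field and discovers the fields via <code>vars()</code>:</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Field:
    db_name: str
    db_type: str
    api_name: Optional[str] = None

class Mapper:
    @classmethod
    def _fields(cls):
        # Collect Field class attributes in definition order.
        return [v for v in vars(cls).values() if isinstance(v, Field)]

    @classmethod
    def get_db_names(cls):
        return [f.db_name for f in cls._fields()]

    @classmethod
    def get_db_types(cls):
        return [f.db_type for f in cls._fields()]

    @classmethod
    def get_api_names(cls):
        fields = cls._fields()
        missing = [f.db_name for f in fields if f.api_name is None]
        if missing:  # fail loudly when an api_name was never assigned
            raise ValueError(f"api_name not set for: {missing}")
        return [f.api_name for f in fields]

class MyData(Mapper):
    a = Field("db_a", "type_a", "api_a")
    b = Field("db_b", "type_b", "api_b")
    c = Field("db_c", "type_c", "api_c")
```

<p>Frozen dataclasses also give equality and repr for free; the exception covers the "api_name not set" requirement, raised at first use rather than at instantiation.</p>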
|
<python><database><mapping>
|
2024-09-17 09:32:35
| 2
| 550
|
Anton
|
78,993,284
| 14,202,481
|
How to improve pandas DF processing time on different combinations of calculated data
|
<p>I have a big dataset, roughly 100K to 1M rows, and a function that performs vectorized calculations in about 0.03 s. All columns can be the same for every iteration before processing. I want to evaluate the 2^n combinations of conditions I build, so looping over them all currently takes 2^n * 0.03 s. Is there a better way to improve performance and run all these possibilities vectorized or in parallel (Python CPU parallelism helps only a little)? The only thing I can think of is to create a unique column per iteration and do regex calculations, but then the <code>df</code> will be too big.</p>
<p>In this example, where each individual processing step takes about 0.01 ms, the output is:
Total number of combinations: 1023.</p>
<p>Total time to evaluate all combinations: 20.73 seconds</p>
<pre><code>import pandas as pd
import numpy as np
from itertools import combinations
import time
# Generate a larger DataFrame with 100,000 rows
data = {
'Height': np.random.uniform(150, 200, size=100000),
'Weight': np.random.uniform(50, 100, size=100000),
'Gender': np.random.choice(['Male', 'Female'], size=100000),
'Age': np.random.randint(18, 70, size=100000)
}
df = pd.DataFrame(data)
# Define vectorized functions for each condition with dynamic values
def calculate_bmi(height, weight):
height_m = height / 100
return weight / (height_m ** 2)
def condition_bmi(df, min_bmi, max_bmi):
bmi = calculate_bmi(df['Height'], df['Weight'])
return (min_bmi <= bmi) & (bmi <= max_bmi)
def condition_age(df, min_age, max_age):
return (min_age <= df['Age']) & (df['Age'] <= max_age)
def condition_height(df, min_height, max_height):
return (min_height <= df['Height']) & (df['Height'] <= max_height)
def condition_weight(df, min_weight, max_weight):
return (min_weight <= df['Weight']) & (df['Weight'] <= max_weight)
def condition_gender(df, gender):
return df['Gender'] == gender
# List of possible dynamic values for each condition (with only 2 values each)
dynamic_values = {
'BMI is within the normal range': [(18.5, 24.9), (25.0, 29.9)],
'Age is within the healthy range': [(18, 30), (31, 45)],
'Height is within the normal range': [(150, 160), (161, 170)],
'Weight is within the normal range': [(50, 60), (61, 70)],
'Gender is specified': ['Male', 'Female']
}
# Function to create combinations of conditions with dynamic values
def create_condition_combinations(dynamic_values):
condition_combinations = []
for condition_name, values in dynamic_values.items():
if isinstance(values[0], tuple): # For range conditions
for value in values:
condition_combinations.append((condition_name, value))
else: # For categorical conditions
for value in values:
condition_combinations.append((condition_name, value))
return condition_combinations
# Generate all possible combinations of conditions and dynamic values
def generate_all_combinations(condition_combinations):
all_combinations = []
for r in range(1, len(condition_combinations) + 1):
for combo in combinations(condition_combinations, r):
all_combinations.append(combo)
return all_combinations
condition_combinations = create_condition_combinations(dynamic_values)
all_combinations = generate_all_combinations(condition_combinations)
# Calculate the total number of combinations
total_combinations = len(all_combinations)
print(f"Total number of combinations: {total_combinations}")
# Apply a combination of conditions
def evaluate_combination(df, combo):
combined_condition = pd.Series([True] * len(df))
for condition_name, value in combo:
if condition_name == 'BMI is within the normal range':
min_bmi, max_bmi = value
combined_condition &= condition_bmi(df, min_bmi, max_bmi)
elif condition_name == 'Age is within the healthy range':
min_age, max_age = value
combined_condition &= condition_age(df, min_age, max_age)
elif condition_name == 'Height is within the normal range':
min_height, max_height = value
combined_condition &= condition_height(df, min_height, max_height)
elif condition_name == 'Weight is within the normal range':
min_weight, max_weight = value
combined_condition &= condition_weight(df, min_weight, max_weight)
elif condition_name == 'Gender is specified':
gender = value
combined_condition &= condition_gender(df, gender)
return combined_condition
# Measure time to run all combinations
start_time = time.time()
for combo in all_combinations:
combo_start_time = time.time()
evaluate_combination(df, combo)
combo_end_time = time.time()
combo_elapsed_time = combo_end_time - combo_start_time
end_time = time.time()
total_elapsed_time = end_time - start_time
print(f"Total time to evaluate all combinations: {total_elapsed_time:.2f} seconds")
</code></pre>
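<p>One way to cut the cost dramatically (a sketch, not a drop-in replacement for the code above) is to evaluate every primitive condition exactly once, cache the boolean masks as NumPy arrays, and then only AND the cached masks per combination:</p>

```python
import numpy as np
import pandas as pd
from itertools import combinations

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Age": rng.integers(18, 70, size=100_000),
    "Height": rng.uniform(150, 200, size=100_000),
})

# Each primitive (condition, value) pair is computed once, up front.
masks = {
    ("Age", (18, 30)): ((df["Age"] >= 18) & (df["Age"] <= 30)).to_numpy(),
    ("Age", (31, 45)): ((df["Age"] >= 31) & (df["Age"] <= 45)).to_numpy(),
    ("Height", (150, 160)): ((df["Height"] >= 150) & (df["Height"] <= 160)).to_numpy(),
    ("Height", (161, 170)): ((df["Height"] >= 161) & (df["Height"] <= 170)).to_numpy(),
}

# Combining cached masks is a cheap elementwise AND, so the per-combination
# cost no longer includes recomputing BMI, range checks, and so on.
items = list(masks.items())
results = {}
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        combined = np.logical_and.reduce([m for _, m in combo])
        results[tuple(k for k, _ in combo)] = int(combined.sum())
```

<p>The exponential number of combinations remains, but each one shrinks to a few fast boolean operations instead of full condition re-evaluation.</p>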
|
<python><pandas><algorithm><numpy>
|
2024-09-17 08:56:29
| 3
| 557
|
mr_robot
|
78,993,267
| 448,317
|
Calling QDialog.reject() has no effect - why? How do I close my dialog when certain conditions are met?
|
<p>Calling <code>QDialog.reject()</code> has no effect. What am I missing?</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton, QDialog, QVBoxLayout
class MyDialog(QDialog):
def __init__(self):
super().__init__()
print('MyDialog created')
self.setWindowTitle('My Dialog')
button = QPushButton('Close')
button.clicked.connect(self.accept)
self.setLayout(QVBoxLayout())
self.layout().addWidget(button)
self.reject()
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
button = QPushButton("Press Me!")
button.clicked.connect(self.show_dialog)
self.setCentralWidget(button)
def show_dialog(self):
d = MyDialog()
rv = d.exec()
if rv == QDialog.Accepted:
print('Dialog accepted')
else:
print('Dialog rejected')
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>When running the program, the dialog shows up and I can press "Close". The return value is 'Accepted'. I would expect the dialog either not to show or to close right away, and the return value to be 'Rejected'.</p>
<p>What am I missing?</p>
|
<python><qt><pyqt><pyside>
|
2024-09-17 08:51:58
| 0
| 864
|
Troels Blum
|
78,993,127
| 6,672,026
|
Vs-code unable to import modules modules load
|
<p>I'm opening the same Python project in PyCharm and everything works well.
When I open the same folder in VS Code with the same interpreter,
I get an 'Unable to import' error on local modules in the project.
What am I missing?</p>
<p>Example tree</p>
<pre><code>MyFile.py
|
|-- csv (standard library)
|-- io (standard library)
|-- re (standard library)
|
|-- __init__.py
|
`-- myproject.myfolder.mymodule
    |-- methodA
    |-- methodB
    `-- __init__.py
</code></pre>
<p>MyFile.py</p>
<pre><code>import csv
import io
import re
from myproject.myfolder.mymodule import methodA, methodB
</code></pre>
<p>In PyCharm it works well; in VS Code I get an 'Unable to import' exception.
Note: I tried creating a workspace and a venv, neither worked.</p>
<p>Thanks in advance</p>
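<p>If the diagnostic comes from the language server or linter not finding the project root, pointing it at that root sometimes helps (this is an assumption about the setup, not a guaranteed fix). In <code>.vscode/settings.json</code>:</p>

```json
{
    "python.analysis.extraPaths": ["./"]
}
```

<p>For pylint's <code>E0401 Unable to import</code>, opening VS Code from the folder that contains <code>myproject</code> (or adding that folder to <code>PYTHONPATH</code> via a <code>.env</code> file) has the same effect.</p>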
|
<python><visual-studio-code><import><python-import><python-module>
|
2024-09-17 08:16:49
| 3
| 465
|
Michael Gabbay
|
78,992,703
| 10,200,497
|
What is the best way to filter groups by conditionally checking the values of the first row of each group only?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'group': list('xxxxyyy'),
'open': [100, 150, 200, 160, 300, 150, 170],
'close': [105, 150, 200, 160, 350, 150, 170],
'stop': [104, 104, 104, 104, 400, 400, 400]
}
)
</code></pre>
<p>Expected output is returning group <code>x</code> based on the <code>group</code> column:</p>
<pre><code> group open close stop
0 x 100 105 104
1 x 150 150 104
2 x 200 200 104
3 x 160 160 104
</code></pre>
<p>Logic:</p>
<p>I want to check if <code>df.stop.iloc[0]</code> for each group is between <code>df.open.iloc[0]</code> and <code>df.close.iloc[0]</code>. And if it is between these two, I want to return that entire group.</p>
<p>This is my attempt. It works but I think there is a better way to do it. Note that in the <code>if</code> clause, both conditions are needed to be checked.</p>
<pre><code>def func(df):
s = df.stop.iloc[0]
o = df.open.iloc[0]
c = df.close.iloc[0]
if (o <= s <= c) or (c <= s <= o):
return df
out = df.groupby('group').apply(func).reset_index(drop=True)
</code></pre>
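<p>A fully vectorized alternative (a sketch on the sample frame) broadcasts each group's first row to all of its rows with <code>groupby().transform('first')</code> and builds a single boolean mask, avoiding the Python-level function per group:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "group": list("xxxxyyy"),
    "open":  [100, 150, 200, 160, 300, 150, 170],
    "close": [105, 150, 200, 160, 350, 150, 170],
    "stop":  [104, 104, 104, 104, 400, 400, 400],
})

# First row of each group, repeated down that group's rows.
first = df.groupby("group").transform("first")

# "stop between open and close" in either direction; the result is
# constant within a group, so the whole group is kept or dropped.
mask = (
    (first["open"] <= first["stop"]) & (first["stop"] <= first["close"])
) | (
    (first["close"] <= first["stop"]) & (first["stop"] <= first["open"])
)
out = df[mask]
```

<p>Because no Python callback runs per group, this scales much better than <code>groupby().apply()</code> on large frames.</p>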
|
<python><pandas><dataframe><group-by>
|
2024-09-17 05:57:41
| 2
| 2,679
|
AmirX
|
78,992,592
| 3,796,236
|
How to add pytest markers to multiple parametrize decorators
|
<p>I have the below block of code.</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.mark.parametrize("project_name", [
pytest.param("default", marks=pytest.mark.defa),
pytest.param("remote", marks=[pytest.mark.remote, pytest.mark.nightly])
])
@pytest.mark.parametrize("spec_file", [
pytest.param("some_A.json", marks=pytest.mark.nightly),
pytest.param("some_B.json")
])
def test_A(self, project_name, spec_file):
assert 1==1
</code></pre>
<p>If I try to collect/run the <code>nightly</code> marker using <code>pytest -m nightly --collect-only -qq <path_to_test_file></code> command I am getting the below output</p>
<pre><code><Function 'test_A[some_A.json-remote]'>
<Function 'test_A[some_A.json-default]'>
<Function 'test_A[some_B.json-remote]'>
</code></pre>
<p>I was expecting it to collect or run only <code><Function 'test_A[some_A.json-remote]'></code> params.</p>
<p>I have a questions here:</p>
<p>How do I make pytest understand to run only <code><Function 'test_A[some_A.json-remote]'></code> case?</p>
|
<python><pytest><parameterized-tests>
|
2024-09-17 05:04:05
| 1
| 641
|
Joshi
|
78,992,529
| 11,770,390
|
how to perform http request via proxy using sshtunnel and python
|
<p>I'm trying to create an ssh tunnel to a <code>remote_server</code> using a PKey entry from the known_hosts file. All seems to work using this configuration:</p>
<pre><code>with sshtunnel.open_tunnel(
(remote_server, 22),
ssh_username="process",
ssh_host_key=ssh_host_key,
remote_bind_address=('www.google.com', 80),
local_bind_address=('', 8888)
) as tunnel:
url = f'http://localhost:{8888}/'
response = requests.get(url)
print(response.text)
</code></pre>
<p>However: When I request <code>www.google.com</code> via the bound socket the response has a different body than when I first ssh into the server and curl it. Here's the comparison:</p>
<p>This is the response I get when I use the above request command:</p>
<pre><code><!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 404 (Not Found)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>Thatโs an error.</ins>
<p>The requested URL <code>/</code> was not found on this server. <ins>Thatโs all we know.</ins>
</code></pre>
<p>And here's the response when I manually invoke <code>curl www.google.com</code> from the remote server.</p>
<pre><code><!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="de"><head><meta content="text/html; charset=UTF-8" http-equiv="Content-Type"><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" item..
</code></pre>
<p>This response goes on to form the full page of <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a> as seen in the browser.</p>
<p>Now why is it different, and what do I have to change to make the first response (through the ssh tunnel) look like the latter?</p>
|
<python><ssh><ssh-tunnel><proxy-server>
|
2024-09-17 04:21:55
| 0
| 5,344
|
glades
|
78,992,492
| 1,716,733
|
setting consistent column type for streamlit tables
|
<p>I need to append rows of a serialized subclass of BaseModel and display them in streamlit. The problem is that some fields are Optional and sometimes missing. When they are missing, the column types of the initialized table deviate. I tried a few ways to pre-specify column types:</p>
<ul>
<li>by setting them in the pandas.DataFrame that is used to initialize the streamlit.dataframe</li>
<li>by setting the column_config</li>
</ul>
<p>None of these seems to work (see code below). Any tips appreciated.</p>
<h3>Setup code</h3>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
from pydantic import BaseModel
import streamlit as st
from functools import partial
from typing import Optional
class MyModel(BaseModel):
first: str
second: Optional[str]
date: datetime.datetime = datetime.datetime.today()
TYPE_TO_STREAMLIT = {"str": st.column_config.TextColumn,
'float': st.column_config.NumberColumn,
'int': partial(st.column_config.NumberColumn, format='%u.'),
'datetime64[D]': partial(st.column_config.DatetimeColumn, format='YYYY-MM-DD'),
'datetime64[s]': partial(st.column_config.DatetimeColumn, format='YYYY-MM-DD HH:mm:ss'),
}
def remove_optional(type_):
"""
>>> remove_optional(typing.Optional[str])
"""
match = re.search("Optional\[(?P<t>[A-Z_a-z]+)\]", repr(type_))
if match:
return match.groupdict()['t']
elif 'date' in repr(type_):
return 'datetime64[s]' # pandas does not support [D]
elif type_ == str:
return 'str'
else:
return (type_)
def np_from_base_model(bm: BaseModel):
col_types = {kk: remove_optional(ff.annotation) for kk, ff in bm.model_fields.items()}
# list(col_types.values())
return np.empty(0, dtype=[tuple(tt) for tt in col_types.items()])
def df_from_base_model(bm: BaseModel):
return pd.DataFrame(np_from_base_model(bm))
def st_from_base_model(bm: BaseModel):
col_types = {kk: TYPE_TO_STREAMLIT[remove_optional(ff.annotation)](kk.replace('_', ' ')) for kk,ff in bm.model_fields.items()}
df_ = df_from_base_model(bm)
return st.dataframe(df_, column_config=col_types)
def format_metadata(report_metadatas: List[MyModel]) -> pd.DataFrame:
bm = MyModel
col_types = {kk: remove_optional(ff.annotation) for kk, ff in bm.model_fields.items()}
df_ = [md.model_dump() for md in report_metadatas]
df_ = pd.DataFrame(df_)
df_ = df_.astype(col_types)
return df_
</code></pre>
<h3>Actual code</h3>
<pre class="lang-py prettyprint-override"><code>
st.session_state['df_reports'] = st_from_base_model(MyModel)
examples = [MyModel(**{"first":x, "second":chr(ord(x)+5) if ord(x)%2==0 else None}) for x in "hello world!"]
st.session_state['df_reports'] = st_from_base_model(MyModel)
for ex in examples:
df_ = format_metadata([ex])
st.session_state['df_reports'].add_rows(df_)
# time.sleep(5)
</code></pre>
<p>I am still getting:</p>
<pre><code>Unsupported operation. The data passed into add_rows() must have the same data signature as the original data.
In this case, add_rows() received ["unicode","unicode","unicode","unicode","unicode","unicode","unicode","unicode","unicode","unicode","datetime","unicode"]
but was expecting ["empty","empty","empty","empty","empty","empty","empty","empty","empty","empty","datetime","empty"].
</code></pre>
<p>or</p>
<pre><code>elementType 'alert' is not a valid arrowAddRows target!
</code></pre>
<p>depending on the input data.</p>
<p>I am struggling to see how data is represented in streamlit and where I can see current types of the data (<a href="https://github.com/streamlit/streamlit/blob/develop/lib/streamlit/elements/arrow.py#L599" rel="nofollow noreferrer">the code seems to be quite obscure</a>).</p>
|
<python><pandas><streamlit>
|
2024-09-17 03:55:14
| 0
| 13,164
|
Dima Lituiev
|
78,992,321
| 1,716,733
|
getting argument of typing.Optional in python
|
<p>I would like to create a typed DataFrame from a Pydantic BaseModel class; let's call it MyModel, and it has Optional fields. As I create multiple instances of MyModel, some will have Optional fields with None values, and if I initialize a DataFrame with such rows, it may end up with inconsistent column dtypes. I'd therefore like to cast <code>Optional[TypeX]</code> to <code>TypeX</code>, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>import pydantic
import pandas as pd
import numpy as np
from typing import Optional
class MyModel(pydantic.BaseModel):
thisfield: int
thatfield: Optional[str]
...
col_types = {kk: ff.annotation for kk, ff in MyModel.model_fields.items()}
pd.DataFrame(np.empty(0, dtype=[tuple(tt) for tt in col_types.items()]))
</code></pre>
<p>This fails with <code>TypeError: Cannot interpret 'typing.Optional[str]' as a data type</code>.</p>
<p>I need a function or method of <code>Optional[X] -> X</code>. Any suggestions other than using <code>repr</code> with regex?</p>
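<p>The standard library already covers this: <code>typing.get_origin</code> and <code>typing.get_args</code> let you strip <code>NoneType</code> out of <code>Optional[X]</code> (which is just <code>Union[X, None]</code>) without any <code>repr</code> parsing. A minimal sketch:</p>

```python
from typing import Optional, Union, get_args, get_origin

def unwrap_optional(tp):
    # Optional[X] is sugar for Union[X, None]; drop NoneType from the args.
    if get_origin(tp) is Union:
        args = [a for a in get_args(tp) if a is not type(None)]
        if len(args) == 1:
            return args[0]
    return tp
```

<p>Note that on Python 3.10+ the <code>X | None</code> syntax has origin <code>types.UnionType</code> rather than <code>typing.Union</code>, so that case may need the same treatment.</p>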
|
<python><pandas><python-typing>
|
2024-09-17 02:09:07
| 1
| 13,164
|
Dima Lituiev
|
78,992,265
| 3,727,678
|
Generic[T] class with access to type parameter of classmethod in Python 3.12
|
<p>Similar to how one could <a href="https://stackoverflow.com/questions/78376096/generict-classmethod-implicit-type-retrieval-in-python-3-12">retrieve a type from a generic class type signature via a class method</a>, I am now interested in doing the same for the class method's type signature itself. This is for use in a creative project.</p>
<p>For example, I would like to print the concrete type of <code>T</code> in this example:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass[T]:
kind: type[T]
@classmethod
def make_class[T](cls) -> "MyClass[T]":
instance = cls()
instance.kind = ...
print(instance.kind)
return instance
</code></pre>
<p>Currently this would result in an exception:</p>
<pre class="lang-py prettyprint-override"><code>>>> MyClass.make_class[int]()
TypeError: 'method' object is not subscriptable
</code></pre>
<p>A custom "classmethod" decorator would be ideal as I imagine it would require the least amount of "hacking" to the way generic classes work.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass[T]:
kind: type[T]
@typed_classmethod
def make_class[T](cls) -> "MyClass[T]":
...
</code></pre>
<p>Must I experiment with <code>MethodType</code>?</p>
|
<python><generics><python-typing>
|
2024-09-17 01:30:23
| 0
| 369
|
clintval
|
78,992,237
| 188,331
|
BertTokenizer.from_pretrained raises UnicodeDecodeError
|
<p>I pre-trained a <code>pytorch_model.bin</code> with a pre-training script. Yet when I load it with the following code, it raises <code>UnicodeDecodeError</code>:</p>
<pre><code>from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("/path/to/pytorch_model.bin") # Raise UnicodeDecodeError
</code></pre>
<p>The traceback is:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
return cls._from_pretrained(
File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/transformers/models/bert/tokenization_bert.py", line 218, in __init__
self.vocab = load_vocab(vocab_file)
File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/transformers/models/bert/tokenization_bert.py", line 121, in load_vocab
tokens = reader.readlines()
File "/opt/tljh/user/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
</code></pre>
<p>How can I resolve this issue?</p>
<p>Versions:</p>
<ul>
<li>transformers 4.28.1</li>
<li>tokenizers 0.13.3</li>
<li>Python 3.10.10</li>
</ul>
|
<python><huggingface-transformers>
|
2024-09-17 01:04:53
| 1
| 54,395
|
Raptor
|
78,992,094
| 4,690,023
|
Access class properties or methods from within a commands.Command
|
<p>I'm building a Discord bot. The bot should store some information in internal variables to be accessed at a later time.
To do so I'm structuring it as a class (as opposed to many examples where the commands are outside a <code>class</code> definition). However, I discovered that when you use the <code>@commands.command(name='test')</code> decorator, the method becomes a kind of "static" method and no longer receives the object as its first input.</p>
<p>Given this, is there any way I can access class properties (such as <code>an_instance_property</code> in the example below) and/or class methods (such as <code>a_class_method</code> in the example below)?</p>
<p>If this is the wrong approach, what could be a better approach for a bot with an internal state?</p>
<pre><code>import discord
from discord.ext import commands
with open('TOKEN', 'r') as f:
TOKEN = f.read()
class mybot(commands.Bot):
def __init__(self):
intents = discord.Intents.default()
super().__init__(command_prefix="!", intents=intents)
self.add_command(self.test)
self.an_instance_property = [] # <----
def a_class_method(x): # <----
return x
@commands.command(name='test')
async def test(ctx, *args):
# How can I access self.an_instance_property from here?
# How can I call self.a_class_method from here?
return
bot = mybot()
bot.run(TOKEN)
</code></pre>
|
<python><discord.py>
|
2024-09-16 23:20:43
| 1
| 1,870
|
Luca
|
78,992,089
| 6,213,809
|
AttributeError: module 'ibis.selectors' has no attribute 'index'
|
<p>I'm trying to select columns by index just like this reference from the <a href="https://ibis-project.org/reference/selectors#examples-10" rel="nofollow noreferrer">ibis docs</a>:</p>
<pre><code>import ibis
import ibis.selectors as s
spotify = ibis.duckdb.connect()
spotify.read_csv("hf://datasets/maharshipandya/spotify-tracks-dataset/dataset.csv", table_name = "tracks")
tracks = spotify.table("tracks")
tracks.select(s.index[1:4])
</code></pre>
<p>I get the following error: <code>AttributeError: module 'ibis.selectors' has no attribute 'index'</code>. How can I properly select multiple columns using their numeric indexes?</p>
<p>I am using <code>ibis==9.4.0</code>.</p>
|
<python><attributeerror><ibis>
|
2024-09-16 23:16:35
| 1
| 896
|
Mark Druffel
|
78,992,045
| 8,116,305
|
Rolling mode in Polars
|
<p>I have a ~100M rows long data frame containing IDs in different groups. Some of them are wrong (indicated by the 99). I am trying to correct them with a rolling mode window, similar to the code example below. Is there a better way to do this, since rolling_map() is super slow?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from scipy import stats
def dummy(input):
return stats.mode(input)[0]
df = pl.DataFrame({'group': [10, 10, 10, 10, 10, 10, 10, 20, 20, 20, 20],
'id': [1, 1, 99, 1, 1, 2, 2, 3, 3, 99, 3]})
df.with_columns(pl.col('id')
.rolling_map(function=dummy,
window_size=3,
min_periods=1,
center=True)
.over('group')
.alias('id_mode'))
</code></pre>
<pre><code>shape: (11, 3)
╭───────┬─────┬─────────╮
│ group ┆ id  ┆ id_mode │
│ i64   ┆ i64 ┆ i64     │
╞═══════╪═════╪═════════╡
│ 10    ┆ 1   ┆ 1       │
│ 10    ┆ 1   ┆ 1       │
│ 10    ┆ 99  ┆ 1       │
│ 10    ┆ 1   ┆ 1       │
│ 10    ┆ 1   ┆ 1       │
│ 10    ┆ 2   ┆ 2       │
│ 10    ┆ 2   ┆ 2       │
│ 20    ┆ 3   ┆ 3       │
│ 20    ┆ 3   ┆ 3       │
│ 20    ┆ 99  ┆ 3       │
│ 20    ┆ 3   ┆ 3       │
╰───────┴─────┴─────────╯
|
<python><dataframe><python-polars><rolling-computation>
|
2024-09-16 22:40:08
| 1
| 343
|
usdn
|
78,991,975
| 11,779,147
|
Get an <a> tag content using BeautifulSoup
|
<p>I'd like to get the content of an <code><a></code> tag using BeautifulSoup (version 4.12.3) in Python.
I have this code and HTML example:</p>
<pre class="lang-py prettyprint-override"><code>h = """
<a id="0">
<table>
<thead>
<tr>
<th scope="col">Person</th>
<th scope="col">Most interest in</th>
<th scope="col">Age</th>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">Chris</th>
<td>HTML tables</td>
<td>22</td>
</tr>
</table>
</a>
"""
test = bs4.BeautifulSoup(h)
test.find('a') # find_all, select => same results
</code></pre>
<p>But it only returns :</p>
<pre class="lang-html prettyprint-override"><code><a id="0">
</a>
</code></pre>
<p>I would expect the content inside <code><table></code> to appear between the <code><a></code> tags.
(I don't know if it is common to wrap a table inside an <code><a></code> tag, but the HTML I am trying to read is like this.)</p>
<p>I need to parse the table content from the <code><a></code> tag since I need to link the <code>id="0"</code> to the content of the table.</p>
<p>How can I achieve that?
How can I get the <code><a></code> tag content with the <code><table></code> tag?</p>
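<p>What likely happens (an explanation consistent with the output shown): a <code><table></code> is not valid content for <code><a></code>, so some parsers repair the markup and move the table out of the anchor. A robust workaround is to locate the anchor by id and take the next table in document order, which works whether or not the table stayed nested:</p>

```python
from bs4 import BeautifulSoup

h = '<a id="0"><table><tr><th>Person</th></tr><tr><td>Chris</td></tr></table></a>'

soup = BeautifulSoup(h, "html.parser")
a = soup.find("a", id="0")
# find_next walks document order, so it finds the table either as a
# child of the anchor or as the relocated sibling right after it.
table = a.find_next("table")
```

<p>You can then pair <code>a["id"]</code> with the parsed table content.</p>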
|
<python><html><beautifulsoup>
|
2024-09-16 22:00:16
| 1
| 527
|
maggle
|
78,991,883
| 1,720,737
|
Converting an HTML to PDF consuming too much memory
|
<p>I am trying to create a PDF using the xhtml2pdf library, but it always consumes too much memory. There are around 4,000 pages in this HTML.</p>
|
<python><django><xhtml2pdf>
|
2024-09-16 21:36:07
| 0
| 398
|
Ansuman
|
78,991,877
| 7,700,802
|
Checking count discrepancies from one date to another in dataframe
|
<p>Suppose I have this data</p>
<pre><code>data = {'site': ['ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY'],
'usage_date': ['2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01'],
'item_id': ['COR30013', 'PAC10463', 'COR30018', 'PAC10958', 'PAC11188', 'PAC20467', 'COR20275', 'PAC20702', 'COR30020', 'PAC10137', 'PAC10445', 'COR30029', 'COR30025', 'PAC10457', 'COR10746', 'PAC11136', 'COR10346', 'PAC11050', 'PAC11132', 'PAC11135', 'PAC10964', 'COR10439', 'PAC11131', 'COR10695', 'PAC11128', 'COR10433', 'COR10432', 'PAC11051', 'PAC10137', 'COR10695', 'COR30029', 'COR10346', 'COR10432', 'COR10746', 'COR10439', 'COR10433', 'COR20275', 'COR30020', 'COR30018', 'PAC11135', 'PAC10964', 'PAC11136', 'PAC10445', 'PAC11050', 'PAC11132', 'PAC20467', 'PAC11188', 'PAC10463', 'PAC20702', 'PAC10457', 'PAC10958', 'PAC11051', 'PAC11128', 'PAC11131'],
'start_count':[400.0, 96000.0, 315.0, 45000.0, 2739.0, 2232.0, 2800.0, 283500.0, 280.0, 200000.0, 96000.0, 481.0, 600.0, 18000.0, 400.0, 5500.0, 1200.0, 5850.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 300.0, 4800.0, 1800.0, 1800.0, 108000.0, 500.0, 481.0, 1200.0, 1800.0, 400.0, 600.0, 3300.0, 2800.0, 455.0, 315.0, 5500.0, 36000.0, 5500.0, 96000.0, 5400.0, 5500.0, 2232.0, 2739.0, 96000.0, 283500.0, 18000.0, 72000.0, 1800.0, 300.0, 5500.0],
'received_total': [0.0, 0.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 280.0, 0.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 1800.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3600.0, 0.0, 0.0, 0.0, 1800.0, 2400.0, 400.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1800.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 45000.0, 3600.0, 0.0, 0.0],
'end_count': [240.0, 84000.0, 280.0, 27000.0, 3432.0, 2160.0, 2000.0, 90000.0, 455.0, 108000.0, 96000.0, 437.0, 500.0, 9000.0, 600.0, 5500.0, 1950.0, 4950.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 270.0, 3300.0, 1200.0, 4200.0, 192000.0, 450.0, 350.0, 1890.0, 3600.0, 600.0, 525.0, 2835.0, 1600.0, 420.0, 187.0, 5500.0, 36000.0, 5500.0, 96000.0, 6750.0, 5500.0, 1992.0, 1881.0, 84000.0, 58500.0, 9000.0, 85500.0, 3300.0, 252.0, 5500.0]}
df_sample = pd.DataFrame(data=data)
</code></pre>
<p>For each <code>item_id</code> we need to check whether the current (2019-09-01) <code>end_count</code> is greater than the previous (2019-08-25) <code>end_count</code> while the current <code>received_total</code> is <code>0</code>, which indicates a bad count.</p>
<p>I have this code that works</p>
<pre><code>def check_end_count(df):
l = []
for loc, df_loc in df.groupby(['site', 'item_id']):
try:
ending_count_previous = df_loc['end_count'].iloc[0]
ending_count_current = df_loc['end_count'].iloc[1]
received_total_current = df_loc['received_total'].iloc[1]
if ending_count_current > ending_count_previous and received_total_current == 0:
l.append("Ending count discrepancy")
l.append("Ending count discrepancy")
else:
l.append("Good Row")
l.append("Good Row")
except:
l.append("Nothing to compare")
df['ending_count_check'] = l
return df
df_sample = check_end_count(df_sample)
</code></pre>
<p>But it's not very pythonic. Also, in my case I have to check a series of date pairs, for which I have this list:</p>
<pre><code>print(sliding_window_dates[:3])
[array(['2019-08-25', '2019-09-01'], dtype=object),
array(['2019-09-01', '2019-09-08'], dtype=object),
array(['2019-09-08', '2019-09-15'], dtype=object)]
</code></pre>
<p>So what I am trying to do is the following on the larger dataframe</p>
<pre><code>df_list = []
for date1, date2 in sliding_window_dates:
df_check = df_test[(df_test['usage_date'] == date1) | (df_test['usage_date'] == date2)]
for loc, df_loc in df_check.groupby(['sort_center', 'item_id']):
df_list.append(check_end_count(df_loc))
</code></pre>
<p>But again I am doing this with two for loops, so I assume there must be a better way. Any suggestions are appreciated.</p>
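<p>A vectorized sketch (which flags only the later row of each bad pair, unlike the loop above that labels both): sort by date, take the previous <code>end_count</code> per <code>(site, item_id)</code> with <code>shift</code>, and compare everything in one pass, so the sliding-window date pairs and both loops disappear:</p>

```python
import pandas as pd

# Minimal frame standing in for the full data: one bad item, one good.
df = pd.DataFrame({
    "site": ["ACY"] * 4,
    "item_id": ["A", "A", "B", "B"],
    "usage_date": ["2019-08-25", "2019-09-01"] * 2,
    "received_total": [0.0, 0.0, 0.0, 5.0],
    "end_count": [10.0, 20.0, 10.0, 20.0],
})

df = df.sort_values(["site", "item_id", "usage_date"])
# Previous date's end_count within each (site, item_id) group.
prev_end = df.groupby(["site", "item_id"])["end_count"].shift()

# end_count rose with nothing received -> discrepancy on that row.
bad = (df["end_count"] > prev_end) & (df["received_total"] == 0)
df["ending_count_check"] = bad.map({True: "Ending count discrepancy",
                                    False: "Good Row"})
```

<p>Rows with no previous date compare as False and come out as "Good Row", which replaces the try/except branch in the original function.</p>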
|
<python><pandas>
|
2024-09-16 21:35:13
| 1
| 480
|
Wolfy
|