Column              Type           Min                    Max
QuestionId          int64          74.8M                  79.8M
UserId              int64          56                     29.4M
QuestionTitle       stringlengths  15                     150
QuestionBody        stringlengths  40                     40.3k
Tags                stringlengths  8                      101
CreationDate        stringdate     2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64          0                      44
UserExpertiseLevel  int64          301                    888k
UserDisplayName     stringlengths  3                      30
74,983,664
17,696,880
Failure to identify and concatenate using a capture group identified with regex as reference
<pre class="lang-py prettyprint-override"><code>import re input_text = 'desde las 15:00 del 2002-11-01 hasta las 16 hs' #example </code></pre> <p>I have placed the pattern <code>(?:(?&lt;=\s)|^)</code> in front so that it only detects if it is the beginning of the string or if there are one or more whitespaces in front. Then there are other matches that must be present. And finally there is the time which is missing the minutes, and the program must add <code>:00</code></p> <pre class="lang-py prettyprint-override"><code>input_text = re.sub(r'(?:(?&lt;=\s)|^)(?:a[\s|]*las|a[\s|]*la|de[\s|]*las|de[\s|]*la)\s*(\d{1,2})[\s|]*(?::|)[\s|]*(?:h\. s\.|h s\.|h\. s|h s|h\.s\.|hs\.|h\.s|hs|horas|hora)', r'\1:00 hs', input_text) print(repr(input_text)) # ---&gt; output </code></pre> <p>I couldn't do a regular concatenation either because I don't know what could be next in the string.</p> <p>I'm not really getting the proper replacement using this search regex pattern, and the correct output is this:</p> <pre><code>'desde las 15:00 del 2002-11-01 hasta las 16:00 hs' </code></pre> <p>I think that the <code>(\d{1,2})</code> capture group is failing and that is why it is not correctly replaced in the <code>\1</code></p>
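A minimal sketch of one way to get the expected output, using a simpler pattern than the asker's alternation: it rewrites only a bare 1–2 digit hour followed by an "hs"-style unit, leaving prefixes like `hasta las` untouched (this is an illustrative alternative, not the asker's exact regex).

```python
import re

input_text = 'desde las 15:00 del 2002-11-01 hasta las 16 hs'

# Rewrite a bare 1-2 digit hour followed by an "hs"-style unit as "HH:00 hs".
# Hours that already carry ":MM" (like 15:00) never match, because the unit
# pattern cannot match the ":" that follows the digits.
output = re.sub(
    r'\b(\d{1,2})\s*(?:h\.?\s*s\.?|hs\.?|horas?)\b',
    r'\1:00 hs',
    input_text,
)
print(output)  # desde las 15:00 del 2002-11-01 hasta las 16:00 hs
```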
<python><python-3.x><regex><replace><regex-group>
2023-01-02 14:35:23
1
875
Matt095
74,983,650
5,359,846
Playwright - how to find input that will contain a value?
<p>I have an Input element with value like '123 456'.</p> <p>How can I validate that the Input element contains '123' using an Expect?</p> <pre><code>input_locator = 'input[id=&quot;edition&quot;]' expect(self.page.locator(input_locator).first). to_have_value('123', timeout=20 * 1000) </code></pre> <p>I got this error:</p> <p>selector resolved to &lt;input name=&quot;&quot; readonly type=&quot;text&quot; id=&quot;edition&quot; placeh…/&gt; unexpected value &quot;123 456&quot;</p> <p>selector resolved to &lt;input name=&quot;&quot; readonly type=&quot;text&quot; id=&quot;edition&quot; placeh…/&gt; unexpected value &quot;123 456&quot;</p>
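A hedged sketch of the usual fix: `to_have_value()` also accepts a compiled regular expression, which performs a partial match instead of the exact-string comparison that fails here. The locator string is the asker's; the runnable part below only mirrors the partial match the regex form performs.

```python
import re

# to_have_value() in Playwright for Python accepts a compiled regex,
# which does a partial match, so this passes even though the full
# value is "123 456":
#
#   expect(self.page.locator('input[id="edition"]').first).to_have_value(
#       re.compile(r"123"), timeout=20 * 1000
#   )
#
# The check below mirrors that partial match locally:
value = "123 456"
assert re.compile(r"123").search(value) is not None
```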
<python><playwright><playwright-python>
2023-01-02 14:33:23
1
1,838
Tal Angel
74,983,580
1,930,543
connected multiselect filters in streamlit
<p>I would like to connect the selection options for streamlit multiselect.</p> <p>Lets assume I have the following dataframe</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Color</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>red</td> </tr> <tr> <td>A</td> <td>blue</td> </tr> <tr> <td>B</td> <td>black</td> </tr> <tr> <td>B</td> <td>blue</td> </tr> <tr> <td>C</td> <td>green</td> </tr> <tr> <td>C</td> <td>black</td> </tr> <tr> <td>C</td> <td>blue</td> </tr> <tr> <td>D</td> <td>green</td> </tr> <tr> <td>D</td> <td>yellow</td> </tr> <tr> <td>D</td> <td>white</td> </tr> </tbody> </table> </div> <p>and two multiselect filters (one for each unique values of each column)</p> <p>I would like the following:</p> <ul> <li>When the end user selects an option (or options) for the 'Name' column, the options for the other multiselect for the 'Color' should be updated accordingly and vice versa.</li> </ul> <p>This is my unsuccessful attempt so far.</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st import pandas as pd df = pd.DataFrame({ 'Name': ['A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D', 'D', 'D'], 'Color':['red', 'blue', 'blue', 'black', 'black', 'green', 'blue', 'yellow', 'white', 'green', 'purple'] }) tmp = df.copy() st.title(&quot;Connected filters examples&quot;) st.markdown('&lt;br&gt;', unsafe_allow_html=True) if 'color_class' in st.session_state: if st.session_state.color_class: tmp = tmp[tmp['Color'].isin(st.session_state.color_class)] else: tmp = df.copy() name = st.multiselect( 'Choose Name', tmp['Name'].unique(), [], key='name_class') if 'name_class' in st.session_state: if st.session_state.name_class: tmp = tmp[tmp['Name'].isin(st.session_state.name_class)] else: tmp = df.copy() color = st.multiselect( 'Choose Color', tmp['Color'].unique(), [], key='color_class') st.dataframe(tmp) </code></pre> <p>for example if the end users selects the color 'red' and 'green' the options for name should be 'A' 
'C' and 'D'</p>
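The filtering rule behind the connected multiselects can be sketched as a plain pandas function, separate from the widget wiring (a sketch only, using the table from the question rather than the code's 11-row frame; the `session_state` plumbing stays as the asker wrote it):

```python
import pandas as pd

df = pd.DataFrame({
    'Name':  ['A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D', 'D'],
    'Color': ['red', 'blue', 'black', 'blue', 'green', 'black', 'blue',
              'green', 'yellow', 'white'],
})

def available_names(selected_colors):
    # An empty selection means "no filter"; otherwise keep only rows whose
    # Color is among the selected ones, mirroring the session_state logic
    tmp = df if not selected_colors else df[df['Color'].isin(selected_colors)]
    return sorted(tmp['Name'].unique())

print(available_names(['red', 'green']))  # ['A', 'C', 'D']
```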
<python><streamlit>
2023-01-02 14:27:33
2
5,951
dimitris_ps
74,983,222
18,313,588
Convert dictionaries in pandas dataframe to list
<p>I have a dataframe</p> <pre><code>fruit1 fruit2 [banana,apple,orange] [apple,nuts,strawberry] [apple,mango,grape] [apple,mango,grape,guava] </code></pre> <p>My code for adding the two additional columns is</p> <pre><code> df[&quot;fruits_added&quot;] = df.apply(lambda row: set(row.fruit2) - set(row.fruit1), axis=1) df[&quot;fruits_deleted&quot;] = df.apply(lambda row: set(row.fruit1) - set(row.fruit2), axis=1) </code></pre> <p>My desired output is</p> <pre><code>fruit1 fruit2 fruits_added fruits_deleted [banana,apple,orange] [apple,nuts,strawberry] [strawberry,nuts] [banana,orange] [apple,mango,grape] [apple,mango,grape,guava] [guava] [] </code></pre> <p>but I am getting dictionaries instead</p> <pre><code>fruit1 fruit2 fruits_added fruits_deleted [banana,apple,orange] [apple,nuts,strawberry] {strawberry,nuts} {banana,orange} [apple,mango,grape] [apple,mango,grape,guava] {guava} {} </code></pre> <p>Any input is appreciated</p>
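The set difference itself is correct; the cells come out as sets because the lambdas return sets. A sketch of the fix, converting each result back to a list (`sorted()` is an added assumption here, used only to make the order stable):

```python
import pandas as pd

df = pd.DataFrame({
    'fruit1': [['banana', 'apple', 'orange'], ['apple', 'mango', 'grape']],
    'fruit2': [['apple', 'nuts', 'strawberry'],
               ['apple', 'mango', 'grape', 'guava']],
})

# set(...) - set(...) yields a set; wrap it in sorted() (or list()) so the
# new columns hold lists and print with [] instead of {}
df['fruits_added'] = df.apply(
    lambda row: sorted(set(row.fruit2) - set(row.fruit1)), axis=1)
df['fruits_deleted'] = df.apply(
    lambda row: sorted(set(row.fruit1) - set(row.fruit2)), axis=1)
```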
<python><python-3.x><pandas><dataframe>
2023-01-02 13:51:24
2
493
nerd
74,983,206
11,885,361
can't replace duplicate values with new values in xlsx with pandas
<p>I have an <code>xlsx</code> file containing too much data. however the data contains <code>duplicate</code> values in column named <code>UniversalIDS</code> which I wanted to replace it with a randomly generated <code>IDS</code> with <code>Pandas</code>.</p> <p>So far I've tried different scenarios which I googled but did not work. for example I tried this:</p> <pre><code>import pandas as pd import uuid df = pd.read_excel('C:/Users/Nervous/Downloads/ss.xlsx') df.loc[df.duplicated(subset=[&quot;UniversalIDS&quot;]), &quot;UniversalIDS&quot;] = uuid.uuid4() df.to_excel(&quot;C:/Users/Nervous/Downloads/tt.xlsx&quot;) print(&quot;done!&quot;) </code></pre> <p>also I tried other alternatives seen on this site like for example:</p> <pre><code> df2 = df.assign(UniversalIDS=df['UniversalIDS'].where( ~df.duplicated(['UniversalIDS']), uuid.uuid4())) </code></pre> <p>also this didn't work:</p> <pre><code> df.loc[df[&quot;UniversalIDS&quot;].duplicated(), &quot;test&quot;] </code></pre> <p>this is a snippet from <code>xlsx data</code>:</p> <pre><code>UniversalIDS f6112cd7-0868-4cc9-b5ab-d7381cc23bdf f3e75641-f328-429f-ae32-41399d8a0bf0 08dccc5c-5774-4614-925c-ad9373821075 79a8ebed-154c-47c7-b16d-cbba5d8469eb 396f8e63-1950-4c36-9524-c1dec8728ffd 62cba3bd-a855-4141-8460-7ff838ecea62 b7f4f753-b479-413a-abcd-34d08436eb85 c0fd6e61-edb1-4dce-94ac-7d0529860e1f c42f8c98-c285-4552-af9f-2f4d8e89b9e8 8cb77021-eb4f-4cfa-a4a3-e649f6a53c03 cb7f4b8d-976a-4481-919e-c8ff7d384cc6 e15fd2bb-5409-4a8b-9fdc-bf1e5862db58 27b97893-aae7-4a9a-aae1-0fc21a099209 1abc2c2f-94f2-4546-b912-b85bc4ed0cb8 6bf264fb-1b82-48e3-966a-14e48a61a63e 9653faeb-7b3d-408e-93e3-bc729f259c75 a09f3eb6-0059-4a77-bf2f-4f7436508ba8 65e06948-2e6c-413f-a768-c3faf8108a6c 291ff491-4ff0-4fb2-b095-b3e66f2d7ca0 653535c7-0389-4077-8e72-3835fbd72d4d 61408fc8-4f45-48e0-b83a-40b6bfd76ad5 3ae8d547-bf4b-42ac-b083-a1662f1a5c82 4955c673-c5da-464c-8e14-a897df0774eb a39bad90-5235-4679-945e-534bb47b8347 264a1f6e-adf4-45a7-b1b1-e6f3fc073447 
a855025b-ee84-46d5-aedb-cbac9a5b1920 71b16a5b-3f6d-4d30-8a65-203959fe87a2 4f3f86f2-4e61-475a-bc1d-eb2112f23953 59da45de-c192-4885-8a55-9138ca49b33a 8f41df73-d9dc-4663-9f64-d090d7c5ca77 84f7103f-e9de-444f-b046-c02d75af0ed1 2738f733-7438-494c-9368-5fb700df93d1 777a3cd7-19ae-4181-b91d-9be8eaf30523 b6083731-a43e-4b5a-ac9a-94a3202103e7 f22873c1-6811-4025-8f0d-47d72d49e499 f262c369-f44a-4b90-8219-d29b33bc14e8 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 
ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 ea0d26f5-d8c4-4082-983a-8eeea29c6c54 </code></pre> <p>as can be seen in the above there are a duplicate values in <code>UniversalIDS</code> column, also it is worth to mention that there are other columns in the data but cut out the problem causing column for simplicity.</p> <p>so my question is how can I replace the duplicate values in <code>UniversalIDS</code> column with a new unique IDs?</p>
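The root cause is that `uuid.uuid4()` is evaluated once, before pandas sees it, so every duplicate receives the same replacement value. A sketch with toy IDs, generating one fresh UUID per duplicated row:

```python
import uuid
import pandas as pd

df = pd.DataFrame({'UniversalIDS': ['a', 'b', 'b', 'b', 'c', 'c']})

# duplicated() marks every occurrence after the first; build one new UUID
# per marked row instead of passing a single uuid.uuid4() for all of them
dup = df['UniversalIDS'].duplicated()
df.loc[dup, 'UniversalIDS'] = [str(uuid.uuid4()) for _ in range(dup.sum())]
```

The first occurrence of each ID is kept; only the repeats are rewritten.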
<python><python-3.x><excel><pandas><dataframe>
2023-01-02 13:49:03
2
630
hanan
74,983,103
2,520,186
Networkx: Replacing labels of nodes
<p>I have the following minimal code:</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt G = nx.DiGraph() #G = nx.Graph() #G = nx.path_graph(4) pos1 = {0: (0, 0), 1: (2, 1), 2: (2, 0), 3: (2,-1)} G.add_edge(0, 1) G.add_edge(0, 3) G.add_edge(1, 2) G.add_edge(2, 3) # First Network plt.figure(0) nx.draw_networkx(G, pos1) # Puts numbers as labels of nodes plt.axis(&quot;off&quot;) plt.savefig('graph1.png') # Second network plt.figure(1) mapping = {0: &quot;Zero&quot;, 1: &quot;One&quot;, 2: &quot;Two&quot;, 3: &quot;Three&quot;} H = nx.relabel_nodes(G, mapping) nx.draw_networkx(H) # Works ''' The below line I want to modify ''' #nx.draw_networkx(H, pos1) # Doesn't work # Says: NetworkXError: Node 'Zero' has no position plt.axis(&quot;off&quot;) plt.savefig('graph2.png') plt.show() </code></pre> <p>Here I am trying to replace the label names in the new graph <code>H</code>. But it shows an error since I am using coordinates for the nodes.</p> <p>Right now the outputs are:</p> <p><a href="https://i.sstatic.net/siWqt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/siWqt.png" alt="Graph 1" /></a></p> <p><a href="https://i.sstatic.net/meZEk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/meZEk.png" alt="Graph 2" /></a></p> <p>The second graph needs to be corrected. Also, can the sizes of the nodes be auto-adjusted and even-odd numbered (labels for the first network) nodes be colored differently?</p> <p>PS. If <code>networkx</code> has limitations, then an example with some other module may also serve the purpose.</p>
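`relabel_nodes` renames the graph's nodes but not the keys of the position dict, which is why `'Zero'` has no position. Applying the same mapping to `pos1` fixes it; the pure-dict sketch below shows the idea (the result would then be passed as `nx.draw_networkx(H, pos2)`):

```python
pos1 = {0: (0, 0), 1: (2, 1), 2: (2, 0), 3: (2, -1)}
mapping = {0: "Zero", 1: "One", 2: "Two", 3: "Three"}

# Rename the position dict's keys with the same mapping used for the graph,
# so every relabelled node can be looked up by its new name
pos2 = {mapping[node]: coords for node, coords in pos1.items()}
```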
<python><python-3.x><matplotlib><graph><networkx>
2023-01-02 13:38:08
2
2,394
hbaromega
74,983,081
2,410,376
How do I mock an AWS lambda start_execution in Python?
<p>I am testing a function whose very last line of code starts the execution of an AWS lambda. This part is not relevant to my test, so I want to mock the call to <code>client.start_execution()</code> so that instead of actually triggering AWS it returns None.</p> <p>Is there a way to use pytest mocks to simply replace the response of this function with a generic None?</p>
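Yes: patching the client's `start_execution` attribute swaps the call for a stub that returns `None`. A self-contained sketch with `unittest.mock` (`my_module` and `function_under_test` are hypothetical stand-ins for the asker's code; in a real pytest test the target would be something like `mocker.patch("my_module.client.start_execution", return_value=None)`):

```python
import types
from unittest.mock import patch

# Hypothetical stand-in for the module under test
my_module = types.SimpleNamespace()
my_module.client = types.SimpleNamespace()

def function_under_test():
    # ... business logic ...
    # last line triggers the (stubbed) AWS call
    return my_module.client.start_execution(stateMachineArn="arn:example")

# Replace start_execution for the duration of the test so AWS is never hit
with patch.object(my_module.client, "start_execution",
                  return_value=None, create=True):
    result = function_under_test()
```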
<python><amazon-web-services><lambda><pytest>
2023-01-02 13:35:39
2
510
A R
74,982,962
4,454,635
Read image alt text with pandas.read_html
<p>Is there a way using <code>pandas.read_html</code> to get the <code>img alt</code> text from an image ? The page I am scrapping just replaced some texts with pictures, and the old text is now the <code>alt</code> text for those images. Here is an example:</p> <pre><code>&lt;td&gt; &lt;div&gt;... &lt;a href=&quot;/index.php/WF...&quot; title=&quot;WF&quot;&gt;&lt;img alt=&quot;WFrag&quot; src=&quot;/images/thumb/WF.png&quot; &lt;/a&gt; &lt;/div&gt; &lt;/td&gt; </code></pre> <p>This is how it looked, and it was perfect for <code>pandas.read_html</code></p> <pre><code>&lt;td&gt; WF &lt;/td&gt; </code></pre>
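`read_html` keeps only text nodes, so one workaround (a sketch, assuming the `alt` attributes are well-formed double-quoted strings) is to copy each image's `alt` text into the markup before parsing; the cleaned HTML can then be fed to `pd.read_html(StringIO(cleaned))` as before:

```python
import re

html = ('<td><a href="/index.php/WF" title="WF">'
        '<img alt="WFrag" src="/images/thumb/WF.png"/></a></td>')

# Replace every <img ... alt="X" ...> tag with its alt text X, so the
# table cell contains plain text again, as it did before the site change
cleaned = re.sub(r'<img[^>]*\balt="([^"]*)"[^>]*/?>', r'\1', html)
print(cleaned)  # <td><a href="/index.php/WF" title="WF">WFrag</a></td>
```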
<python><html><pandas>
2023-01-02 13:23:29
2
3,186
horace_vr
74,982,808
1,947,542
Is it possible to perform sparse - dense matrix multiplication in Tensorflow for rank 3 matrices?
<p>I am trying to perform sparse matrix - dense matrix multiplication in TensorFlow, where both matrices have a leading batch dimension (i.e., rank 3). I am aware that TensorFlow provides the tf.sparse.sparse_dense_matmul function for rank 2 matrices, but I am looking for a method to handle rank 3 matrices. Is there a built-in function or method in TensorFlow that can handle this case efficiently, without the need for expensive reshaping or slicing operations? Performance is critical in my application.</p> <p>To illustrate my question, consider the following example code:</p> <pre><code>import tensorflow as tf # Define sparse and dense matrices with leading batch dimension sparse_tensor = tf.SparseTensor(indices=[[0, 1, 1], [0, 0, 1], [1, 1, 1], [1, 2, 1], [2, 1, 1]], values=[1, 1, 1, 1, 1], dense_shape=[3, 3, 2]) dense_matrix = tf.constant([[[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]], [[0.9, 0.10, 0.11, 0.12], [0.13, 0.14, 0.15, 0.16]], [[0.17, 0.18, 0.19, 0.20], [0.21, 0.22, 0.23, 0.24]]], dtype=tf.float32) # Perform sparse x dense matrix multiplication result = tf.???(sparse_tensor, dense_matrix) # Result should have shape [3, 3, 4] </code></pre>
<python><tensorflow><sparse-matrix>
2023-01-02 13:07:59
1
441
Mustafa Orkun Acar
74,982,743
3,668,129
How to hide (or show) some of plotly colors
<p>I have simple dataframe with 3 plots:</p> <pre><code>import plotly.express as px import pandas as pd import numpy as np N = 100 random_x = np.linspace(0, 1, N) random_y0 = np.random.randn(N) + 5 random_y1 = np.random.randn(N) random_y2 = np.random.randn(N) - 5 df_all = pd.DataFrame() df = pd.DataFrame() df['x'] = random_x df['y'] = random_y0 df['type'] = np.full(len(random_x), 0) df_all = pd.concat([df_all, df], axis=0) df = pd.DataFrame() df['x'] = random_x df['y'] = random_y1 df['type'] = np.full(len(random_x), 1) df_all = pd.concat([df_all, df], axis=0) df = pd.DataFrame() df['x'] = random_x df['y'] = random_y2 df['type'] = np.full(len(random_x), 2) df_all = pd.concat([df_all, df], axis=0) fig = px.line(x=df_all['x'], y=df_all['y'], color=df_all['type']) fig.show() </code></pre> <p><a href="https://i.sstatic.net/Sepmg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sepmg.png" alt="enter image description here" /></a></p> <ul> <li>Is there a way to show the figure and by default choose which colors to show (or which colors to hide) ? (i.e show only colors 0 and 1) ?</li> <li>Can we do it without changing the dataframe, but rather change the plotly settings ?</li> </ul>
<python><python-3.x><plotly><plotly-dash>
2023-01-02 13:02:13
1
4,880
user3668129
74,982,401
17,267,064
How to add data in Pandas Dataframe dynamically using Python?
<p>I wish to extract the data from a txt file which is given below and store in to a pandas Dataframe that has 8 columns.</p> <pre><code>Lorem | Ipsum | is | simply | dummy text | of | the | printing | and typesetting | industry. | Lorem more | recently | with | desktop | publishing | software | like | Aldus Ipsum | has | been | the | industry's standard | dummy | text | ever | since | the | 1500s took | a | galley | of | type | and scrambled | it | to | make | a | type | specimen | book It | has | survived | not | only | five | centuries, | but the | leap | into | electronic | typesetting remaining | essentially | unchanged It | was | popularised | in | the | 1960s | with | the Lorem | Ipsum | passages, | and PageMaker | including | versions | of | Lorem | Ipsum </code></pre> <p>Data on each line is separated by a pipe sign which refers to a data inside each cell of a row and column. My end goal is to have the data inserted in dataframe as per below format.</p> <pre><code>Column 1 | Column 2 | Column 3 | Column 4 | Column 5 | Column 6 | Column 7 | Column 8 ------------------------------------------------------------------------------------- Lorem | Ipsum | is | simply | dummy | text | of | the | printing | and | typesetting| industry. | Lorem | more | recently | with | desktop | publishing| software | like | Aldus | </code></pre> <p>and so on.....</p> <p>I performed below but I am unable to add data dynamically into dataframe.</p> <pre><code>import pandas as pd with open(file) as f: data = f.read().split('\n') columns = ['Column 1', 'Column 2', 'Column 3', 'Column 4', 'Column 5', 'Column 6', 'Column 7', 'Column 8'] df = pd.DataFrame(columns=columns) for i in data: row = i.split(' | ') df = df.append({'Column 1': f'{row[0]}', 'Column 2': f'{row[1]}', 'Column 3': f'{row[2]}', 'Column 4': f'{row[3]}', 'Column 5': f'{row[4]}'}, ignore_index = True) </code></pre> <p>Above is manual way of adding row's cells to a dataframe, but I require the dynamic way i.e. 
how do I append the rows so that however many cells a row contains, they all get added.</p>
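Two issues stack up here: `DataFrame.append` was removed in pandas 2.0, and hard-coding one dict key per column breaks on rows with fewer cells. Building a list of dicts first handles any number of cells per line (sketch with two sample lines; missing trailing cells become NaN):

```python
import pandas as pd

lines = [
    "Lorem | Ipsum | is | simply | dummy text | of | the | printing",
    "and typesetting | industry. | Lorem",
]

columns = [f"Column {i}" for i in range(1, 9)]
# zip() pairs each cell with its column and stops at the shorter side, so
# short rows simply omit keys and the DataFrame pads them with NaN
rows = [dict(zip(columns, line.split(" | "))) for line in lines]
df = pd.DataFrame(rows, columns=columns)
```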
<python><pandas><dataframe>
2023-01-02 12:24:55
4
346
Mohit Aswani
74,982,353
8,510,149
Problems with version control for dictionaries inside a python class
<p>I'm doing something wrong in the code below. I have a method (update_dictonary) that changes a value or values in a dictionary based on what is specificed in a tuple (new_points).</p> <p>Before I update the dictionary, I want to save that version in a list (history) in order to be able to access previous versions. However, my attempt below updates all dictionaries in history to be like the latest version.</p> <p>I can't figure out what I'm doing wrong here.</p> <pre><code>test_dict = {'var0':{'var1':{'cond1':1, 'cond2':2, 'cond3':3} } } class version_control: def __init__ (self, dictionary): self.po = dictionary self.history = list() self.version = 0 def update_dictionary(self, var0, var1, new_points): po_ = self.po self.history.append(po_) for i in new_points: self.po[var0][var1][i[0]] = i[1] self.version += 1 def get_history(self, ver): return self.history[ver] </code></pre> <pre><code>a = version_control(test_dict) new_points = [('cond1', 2), ('cond2', 0)] a.update_dictionary('var0', 'var1', new_points) new_points = [('cond3', -99), ('cond2', 1)] a.update_dictionary('var0', 'var1', new_points) print(a.get_history(0)) print(a.get_history(1)) </code></pre>
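The history list is storing references to the one dict that keeps being mutated, so every "snapshot" changes along with `self.po`. Taking a `copy.deepcopy` at snapshot time fixes it; a sketch of the corrected class:

```python
import copy

test_dict = {'var0': {'var1': {'cond1': 1, 'cond2': 2, 'cond3': 3}}}

class VersionControl:
    def __init__(self, dictionary):
        self.po = dictionary
        self.history = []
        self.version = 0

    def update_dictionary(self, var0, var1, new_points):
        # deepcopy snapshots the nested dicts; appending self.po itself only
        # stores another reference that mutates with every later update
        self.history.append(copy.deepcopy(self.po))
        for key, value in new_points:
            self.po[var0][var1][key] = value
        self.version += 1

    def get_history(self, ver):
        return self.history[ver]

a = VersionControl(test_dict)
a.update_dictionary('var0', 'var1', [('cond1', 2), ('cond2', 0)])
a.update_dictionary('var0', 'var1', [('cond3', -99), ('cond2', 1)])
```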
<python><dictionary><python-class>
2023-01-02 12:19:29
1
1,255
Henri
74,982,325
1,807,163
Poetry clean/remove package from env after removing from toml file
<p>I installed a package with <code>poetry add X</code>, and so now it shows up in the toml file and in the venv (mine's at <code>.venv/lib/python3.10/site-packages/</code>).</p> <p>Now to remove that package, I could use <code>poetry remove X</code> and I know that would work properly. But sometimes, it's easier to just go into the toml file and delete the package line there. So that's what I tried by removing the line for X.</p> <p>I then tried doing <code>poetry install</code> but that didn't do anything When I do <code>ls .venv/lib/python3.10/site-packages/</code>, I still see X is installed there.</p> <p>I also tried <code>poetry lock</code> but no change with that either.</p> <p>So is there some command to take the latest toml file and clean up packages from being installed that are no longer present in the toml?</p>
<python><python-poetry>
2023-01-02 12:16:37
1
5,201
rasen58
74,981,810
6,346,482
Pandas: Transform with custom maximum function
<p>I know that I can use transform for transforming every element in a group in a dataframe into the minimum value. This is done with something like</p> <pre><code>df.groupby(level=0).transform('min') </code></pre> <p>My problem is, that all of my cells are strings, in fact tuplelike strings with floats inside, like &quot;5.48$\pm$69.1&quot;. The minimum function here would transform it by string, which is incorrect.</p> <p>Is there a good way of using a custom transform function only dealing with the first part of it?</p> <p>An example input is:</p> <pre><code>df = pd.DataFrame({'0.001': {('Periodic', 'Klinger'): '0.3$\\pm$0.05', ('Periodic', 'Malte'): '0.26$\\pm$0.06', ('Periodic', 'Merkelig'): '0.22$\\pm$0.12', ('Periodic', 'Dings'): '0.18$\\pm$0.06', ('Periodic', 'Elf'): '0.28$\\pm$0.11', ('Periodic', 'Rar'): '0.2$\\pm$0.1', ('Periodic', 'Merd'): '0.12$\\pm$0.14', ('Sequential', 'Klinger'): '0.15$\\pm$0.14', ('Sequential', 'Malte'): '0.1$\\pm$0.1', ('Sequential', 'Merkelig'): '0.26$\\pm$0.09', ('Sequential', 'Dings'): '0.17$\\pm$0.16', ('Sequential', 'Elf'): '0.15$\\pm$0.12', ('Sequential', 'Rar'): '0.12$\\pm$0.1', ('Sequential', 'Merd'): '0.21$\\pm$0.13'}, '0.01': {('Periodic', 'Klinger'): '1.75$\\pm$1.27', ('Periodic', 'Malte'): '1.19$\\pm$1.51', ('Periodic', 'Merkelig'): '2.31$\\pm$0.54', ('Periodic', 'Dings'): '2.47$\\pm$0.37', ('Periodic', 'Elf'): '2.3$\\pm$1.3', ('Periodic', 'Rar'): '1.65$\\pm$0.59', ('Periodic', 'Merd'): '1.07$\\pm$1.68', ('Sequential', 'Klinger'): '1.14$\\pm$0.25', ('Sequential', 'Malte'): '2.99$\\pm$1.36', ('Sequential', 'Merkelig'): '2.85$\\pm$1.06', ('Sequential', 'Dings'): '2.61$\\pm$0.79', ('Sequential', 'Elf'): '1.62$\\pm$1.47', ('Sequential', 'Rar'): '1.29$\\pm$0.74', ('Sequential', 'Merd'): '2.88$\\pm$0.89'}, '0.1': {('Periodic', 'Klinger'): '18.75$\\pm$12.96', ('Periodic', 'Malte'): '15.9$\\pm$9.8', ('Periodic', 'Merkelig'): '36.47$\\pm$1.42', ('Periodic', 'Dings'): '16.13$\\pm$13.24', ('Periodic', 'Elf'): '26.36$\\pm$11.08', 
('Periodic', 'Rar'): '11.26$\\pm$12.32', ('Periodic', 'Merd'): '17.55$\\pm$10.78', ('Sequential', 'Klinger'): '36.26$\\pm$3.19', ('Sequential', 'Malte'): '20.2$\\pm$14.42', ('Sequential', 'Merkelig'): '18.62$\\pm$15.79', ('Sequential', 'Dings'): '5.64$\\pm$7.28', ('Sequential', 'Elf'): '25.55$\\pm$12.74', ('Sequential', 'Rar'): '19.65$\\pm$16.98', ('Sequential', 'Merd'): '14.53$\\pm$2.54'}}) </code></pre> <p>There are three columns, 0.1, 0.01 and 0.001. There is a multiindex consisting of two values and I want the minimum values within every column for each multiindex-first-value.</p> <p>Everything is done by</p> <pre><code>df.groupby(level=0).transform('min') </code></pre> <p>but the minimum function is wrong due to the format of the values</p>
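One approach (a sketch, assuming the comparison should use the value in front of `$\pm$`): parse that leading float into a numeric frame, then run the same groupby-transform on it. Shown on a 4-row subset of the question's data:

```python
import pandas as pd

df = pd.DataFrame(
    {'0.001': ['0.3$\\pm$0.05', '0.26$\\pm$0.06',
               '0.15$\\pm$0.14', '0.1$\\pm$0.1']},
    index=pd.MultiIndex.from_tuples(
        [('Periodic', 'Klinger'), ('Periodic', 'Malte'),
         ('Sequential', 'Klinger'), ('Sequential', 'Malte')]),
)

# Split each cell on the literal "$\pm$" separator and compare on the float
# in front of it instead of on the raw string
means = df.apply(
    lambda col: col.str.split('$\\pm$', regex=False).str[0].astype(float))
mins = means.groupby(level=0).transform('min')
```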
<python><python-3.x><pandas><transform>
2023-01-02 11:24:05
2
804
Hemmelig
74,981,801
6,263,000
python-oracledb thin client returns DPY-6000
<p>I'm trying to run a Python app packaged in a Docker container on an OCI <code>Ampere</code> node.</p> <p>Environment:</p> <ul> <li>base image: <code>python:3.10.9-slim</code> built using <code>buildx</code> for <code>arm64</code></li> <li>client library: <code>oracledb==1.2.1</code></li> <li>Docker version: <code>20.10.22, build 3a2c30b</code></li> <li>Docker host OS: <code>5.4.17-2136.311.6.1.el8uek.aarch64</code></li> </ul> <p>But the app is returning the following error when it's trying to connect to an Autonomous Transaction Processing DB using the thin client:</p> <pre><code>File &quot;/app/utils/db_connection_managers/oracle_connection_manager.py&quot;, line 13, in __init__ self.db_conn = oracledb.connect( File &quot;/usr/local/lib/python3.10/site-packages/oracledb/connection.py&quot;, line 1013, in connect return conn_class(dsn=dsn, pool=pool, params=params, **kwargs) File &quot;/usr/local/lib/python3.10/site-packages/oracledb/connection.py&quot;, line 135, in __init__ impl.connect(params_impl) File &quot;src/oracledb/impl/thin/connection.pyx&quot;, line 318, in oracledb.thin_impl.ThinConnImpl.connect File &quot;src/oracledb/impl/thin/connection.pyx&quot;, line 206, in oracledb.thin_impl.ThinConnImpl._connect_with_params File &quot;src/oracledb/impl/thin/connection.pyx&quot;, line 177, in oracledb.thin_impl.ThinConnImpl._connect_with_description File &quot;src/oracledb/impl/thin/connection.pyx&quot;, line 105, in oracledb.thin_impl.ThinConnImpl._connect_with_address File &quot;src/oracledb/impl/thin/connection.pyx&quot;, line 101, in oracledb.thin_impl.ThinConnImpl._connect_with_address File &quot;src/oracledb/impl/thin/protocol.pyx&quot;, line 168, in oracledb.thin_impl.Protocol._connect_phase_one File &quot;src/oracledb/impl/thin/protocol.pyx&quot;, line 344, in oracledb.thin_impl.Protocol._process_message File &quot;src/oracledb/impl/thin/protocol.pyx&quot;, line 323, in oracledb.thin_impl.Protocol._process_message File 
&quot;src/oracledb/impl/thin/messages.pyx&quot;, line 1676, in oracledb.thin_impl.ConnectMessage.process File &quot;/usr/local/lib/python3.10/site-packages/oracledb/errors.py&quot;, line 111, in _raise_err raise exc_type(_Error(message)) from cause oracledb.exceptions.OperationalError: DPY-6000: cannot connect to database. Listener refused connection. (Similar to ORA-12506) </code></pre> <p>which suggests that the client is able to make a connection to the database (i.e. there are no connection/networking issues) but the connection request is actively rejected by the server.</p> <p><strong>I'm running the same container on my Mac(Intel) using the same dockerfile (built for <code>amd64</code>, obviously) and the same connection details with no issues.</strong></p> <p>There's a hint to this behavior <a href="https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html#native-network-encryption-and-checksumming" rel="nofollow noreferrer">in the documentation</a>, but I'm not sure how/why it would apply to my case. Specifically, how it would not cause an issue in the container that's running on my Intel Mac, but it is a factor when the arm64 container is running on the A1 node.</p> <p>Am I overlooking something or trying to do something that's not supported via the spec?</p>
<python><docker><arm64><oracle-cloud-infrastructure><oracle-autonomous-db>
2023-01-02 11:23:07
1
609
Babak Tourani
74,981,728
19,238,204
Check Whether my Plot of Bounded Region and Its Revolution toward x-axis and toward y-axis are correct
<p>I have tried this code to be able to plot a bounded region, between y=6x and y= 6x^{2}</p> <p>Please check whether it is correct or not...</p> <ol> <li><p>I want the bounded region to be revolved around x axis and y axis, become solid of revolution.</p> </li> <li><p>I want to add a legend so people will know the blue line is y=6x and the orange line is y=6x^{2}</p> </li> </ol> <p>this is my MWE:</p> <pre><code># Compare the plot at xy axis with the solid of revolution toward x and y axis # For region bounded by the line y = 6x and y = 6x^2 import matplotlib.pyplot as plt import numpy as np n = 100 fig = plt.figure(figsize=(14, 7)) ax1 = fig.add_subplot(221) ax2 = fig.add_subplot(222, projection='3d') ax3 = fig.add_subplot(223) ax4 = fig.add_subplot(224, projection='3d') y = np.linspace(0, 10, n) x1 = (y/6) x2 = (y/6) ** (1/2) t = np.linspace(0, np.pi * 2, n) u = np.linspace(-1, 2, 60) v = np.linspace(0, 2*np.pi, 60) U, V = np.meshgrid(u, v) X = U Y1 = (6*U**2)*np.cos(V) Z1 = (6*U**2)*np.sin(V) Y2 = (6*U)*np.cos(V) Z2 = (6*U)*np.sin(V) Y3 = ((1/6)*U**(1/2))*np.cos(V) Z3 = ((1/6)*U**(1/2))*np.sin(V) Y4 = (U/6)*np.cos(V) Z4 = (U/6)*np.sin(V) xn = np.outer(x1, np.cos(t)) yn = np.outer(x1, np.sin(t)) zn = np.zeros_like(xn) for i in range(len(x1)): zn[i:i + 1, :] = np.full_like(zn[0, :], y[i]) ax1.plot(x1, y, x2, y) ax1.set_title(&quot;$f(x)$&quot;) ax2.plot_surface(X, Y3, Z3, alpha=0.3, color='red', rstride=6, cstride=12) ax2.plot_surface(X, Y4, Z4, alpha=0.3, color='blue', rstride=6, cstride=12) ax2.set_title(&quot;$f(x)$: Revolution around $y$&quot;) # find the inverse of the function x_inverse = y y1_inverse = np.power(x_inverse, 1) y2_inverse = np.power(x_inverse, 1 / 2) ax3.plot(x_inverse, y1_inverse, x_inverse, y2_inverse) ax3.set_title(&quot;Inverse of $f(x)$&quot;) ax4.plot_surface(X, Y1, Z1, alpha=0.3, color='red', rstride=6, cstride=12) ax4.plot_surface(X, Y2, Z2, alpha=0.3, color='blue', rstride=6, cstride=12) ax4.set_title(&quot;$f(x)$: Revolution around 
$x$&quot;) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/dsDDe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dsDDe.png" alt="image" /></a></p>
<python><numpy><matplotlib>
2023-01-02 11:14:56
1
435
Freya the Goddess
74,981,726
11,349,966
How to insert image blob in openpyxl
<p>In my web app I'am using openpyxl to create or modify excels and there's a part of my web that i need to insert image with a blob or base64 ?, I dont see anything related in how to inserting image in openpyxl except in a method where i need to pass a relative or absolute path of the image. i don't want to save the image first to my project directory before using it for the excel.</p> <pre><code>@route(data_mgt, rule='/elevation-values-calculation/download-error-ratio', methods=['POST']) def download_error_ratio(): payl = payload() survey_id = payl.get('survey_id', {}) project_id = payl.get('project_id', {}) image = request.files['image'] #return blob image_string = base64.b64encode(image.read()) #return base64 string base64_string = &quot;data:{content_type};base64,{img_string}&quot;.format( content_type=image.content_type, img_string=image_string.decode() ) wb = openpyxl.Workbook() ws = wb.worksheets[0] img = openpyxl.drawing.image.Image(base64_string) #error need to pass absolute or realte image path img.anchor = 'A1' ws.add_image(image) </code></pre>
<python><flask><openpyxl>
2023-01-02 11:14:42
1
1,114
Mark Anthony Libres
74,981,656
12,752,172
How to pass list data into insert statement in SQL server using python?
<p>I'm new to python and creating a python app to insert data into the SQL server table. I'm trying it in the following way but it gives me an error.</p> <p><strong>This is my code</strong></p> <pre><code>import pyodbc conn = pyodbc.connect('Driver={SQL Server};' 'Server=.\SQLEXPRESS;' 'Database=company_mine;' 'Trusted_Connection=yes;') datalist = ['KFC', ' kfc', '71 Tottenham Ct Rd', 'London', 'null', 'null', 'London'] sql_insert_query1 = &quot;&quot;&quot;INSERT INTO company_details(company_name,contact_name,mailing_street,mailing_city,mailing_state,mailing_postal_code,mailing_country)VALUES(%s)&quot;&quot;&quot; cursor1 = conn.cursor() cursor1.executemany(sql_insert_query1, datalist) conn.commit() print(cursor1.rowcount, &quot;Record inserted successfully&quot;) </code></pre> <p><a href="https://i.sstatic.net/m7z9u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m7z9u.png" alt="This is the error" /></a></p> <p>Please help me to solve this problem.</p>
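Two fixes are needed: the statement must have one placeholder per column (pyodbc uses `?`, not `%s`), and a single row goes through `execute()` rather than `executemany()`, which treats each list element as a separate row. The placeholder style can be sketched with the stdlib `sqlite3` driver, which uses the same `?` convention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE company_details(
    company_name TEXT, contact_name TEXT, mailing_street TEXT,
    mailing_city TEXT, mailing_state TEXT, mailing_postal_code TEXT,
    mailing_country TEXT)""")

datalist = ['KFC', ' kfc', '71 Tottenham Ct Rd', 'London',
            None, None, 'London']  # None maps to SQL NULL, unlike 'null'

# One "?" per column; the whole row is passed as a single sequence
conn.execute(
    "INSERT INTO company_details VALUES (?, ?, ?, ?, ?, ?, ?)", datalist)
conn.commit()
```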
<python><list>
2023-01-02 11:08:02
1
469
Sidath
74,981,572
979,974
Python train convolutional neural network on csv numpy error input shape
<p>I would like to train a convolutional neural network autoencoder on a csv file. The csv file contains pixel neighborhood position of an original image of 1024x1024. When I try to train it, I have the following error that I don't manage to resolve. <code>ValueError: Input 0 of layer max_pooling2d is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: (None, 1, 1024, 1024, 16)</code>. Any idea, what I am doing wrong in my coding?</p> <p>Let's me explain my code:</p> <p>My csv file has this structure:</p> <pre><code>0 0 1.875223e+01 1 0 1.875223e+01 2 0 2.637685e+01 3 0 2.637685e+01 4 0 2.637685e+01 </code></pre> <p>I managed to load my dataset, extract the x, y, and value columns as NumPy arrays and extract the relevant columns as NumPy arrays</p> <pre><code>x = data[0].values y = data[1].values values = data[2].values </code></pre> <p>Then, I create an empty image with the correct dimensions and fill in the image with the pixel values</p> <pre><code>image = np.empty((1024, 1024)) for i, (xi, yi, value) in enumerate(zip(x, y, values)): image[xi.astype(int), yi.astype(int)] = value </code></pre> <p>To use this array as input to my convolutional autoencoder I reshaped it to a 4D array with dimensions</p> <pre><code># Reshape the image array to a 4D tensor image = image.reshape((1, image.shape[0], image.shape[1], 1)) </code></pre> <p>Finally, I declare the convolutional autoencoder structure, at this stage I have the error `incompatible with the layer: expected ndim=4, found ndim=5. 
Full shape received: (None, 1, 1024, 1024, 16)'</p> <pre><code>import keras from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D from keras.models import Model # Define the input layer input_layer = Input(shape=(1,image.shape[1], image.shape[2], 1)) # Define the encoder layers x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_layer) x = MaxPooling2D((2, 2), padding='same')(x) x = Conv2D(8, (3, 3), activation='relu', padding='same')(x) x = MaxPooling2D((2, 2), padding='same')(x) x = Conv2D(8, (3, 3), activation='relu', padding='same')(x) encoded = MaxPooling2D((2, 2), padding='same')(x) # Define the decoder layers x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded) x = UpSampling2D((2, 2))(x) x = Conv2D(8, (3, 3), activation='relu', padding='same')(x) x = UpSampling2D((2, 2))(x) x = Conv2D(16, (3, 3), activation='relu')(x) x = UpSampling2D((2, 2))(x) decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x) # Define the autoencoder model autoencoder = Model(input_layer, decoded) # Compile the model autoencoder.compile(optimizer='adam', loss='binary_crossentropy') # Reshape the image array to a 4D tensor image = image.reshape((1, image.shape[0], image.shape[1], 1)) # Train the model autoencoder.fit(image, image, epochs=50, batch_size=1, shuffle=True) </code></pre>
<python><keras><conv-neural-network>
2023-01-02 10:59:35
1
953
user979974
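The `ndim=5` error in the autoencoder question comes from `Input(shape=(1, image.shape[1], image.shape[2], 1))`: Keras prepends a batch dimension to whatever `shape` you declare, so that line describes 5-D input. The `shape` argument should exclude the batch axis, and the array should be reshaped to 4-D only once (the question reshapes it a second time right before `fit`). A numpy-only sketch of the shape bookkeeping, with the Keras fix in comments:

```python
import numpy as np

image = np.empty((1024, 1024))               # filled from the CSV in the question
batch = image.reshape((1, 1024, 1024, 1))    # 4-D: (batch, height, width, channels)
assert batch.ndim == 4

# In the model, declare the per-sample shape WITHOUT the batch axis:
#   input_layer = Input(shape=(1024, 1024, 1))   # not shape=(1, 1024, 1024, 1)
# and do not call image.reshape(...) a second time before fit().
```

A second, separate issue: the decoder's `Conv2D(16, (3, 3), activation='relu')` omits `padding='same'` (the default is `'valid'`), so the decoded output will not come back to exactly 1024x1024 unless that padding is restored.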
74,981,558
6,799,513
Error Updating Python3 pip AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'
<p>I'm having an error when installing/updating any pip module in python3. Purging and reinstalling <code>pip</code> and every package I can thing of hasn't helped. Here's the error that I get in response to running <code>python -m pip install --upgrade pip</code> specifically (but the error is the same for attempting to install or update any pip module):</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3.8/runpy.py&quot;, line 194, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/lib/python3.8/runpy.py&quot;, line 87, in _run_code exec(code, run_globals) File &quot;/usr/lib/python3/dist-packages/pip/__main__.py&quot;, line 16, in &lt;module&gt; from pip._internal.cli.main import main as _main # isort:skip # noqa File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/main.py&quot;, line 10, in &lt;module&gt; from pip._internal.cli.autocompletion import autocomplete File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py&quot;, line 9, in &lt;module&gt; from pip._internal.cli.main_parser import create_main_parser File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py&quot;, line 7, in &lt;module&gt; from pip._internal.cli import cmdoptions File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py&quot;, line 24, in &lt;module&gt; from pip._internal.exceptions import CommandError File &quot;/usr/lib/python3/dist-packages/pip/_internal/exceptions.py&quot;, line 10, in &lt;module&gt; from pip._vendor.six import iteritems File &quot;/usr/lib/python3/dist-packages/pip/_vendor/__init__.py&quot;, line 65, in &lt;module&gt; vendored(&quot;cachecontrol&quot;) File &quot;/usr/lib/python3/dist-packages/pip/_vendor/__init__.py&quot;, line 36, in vendored __import__(modulename, globals(), locals(), level=0) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in 
_find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py&quot;, line 9, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py&quot;, line 1, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py&quot;, line 5, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File 
&quot;/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py&quot;, line 95, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py&quot;, line 46, in &lt;module&gt; File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py&quot;, line 8, in &lt;module&gt; from OpenSSL import crypto, SSL File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py&quot;, line 3268, in &lt;module&gt; _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Error in sys.excepthook: Traceback (most recent call last): File &quot;/usr/lib/python3/dist-packages/apport_python_hook.py&quot;, line 72, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File &quot;/usr/lib/python3/dist-packages/apport/__init__.py&quot;, line 5, in &lt;module&gt; from apport.report import Report File &quot;/usr/lib/python3/dist-packages/apport/report.py&quot;, line 32, in &lt;module&gt; import apport.fileutils File &quot;/usr/lib/python3/dist-packages/apport/fileutils.py&quot;, line 12, in &lt;module&gt; import os, glob, subprocess, os.path, time, pwd, sys, requests_unixsocket File &quot;/usr/lib/python3/dist-packages/requests_unixsocket/__init__.py&quot;, line 1, in &lt;module&gt; import requests File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen 
importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py&quot;, line 95, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py&quot;, line 46, in &lt;module&gt; File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py&quot;, line 8, in &lt;module&gt; from OpenSSL import crypto, SSL File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py&quot;, line 3268, in &lt;module&gt; _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Original exception was: Traceback (most recent call last): File &quot;/usr/lib/python3.8/runpy.py&quot;, line 194, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/lib/python3.8/runpy.py&quot;, line 87, in _run_code exec(code, run_globals) File &quot;/usr/lib/python3/dist-packages/pip/__main__.py&quot;, line 16, in &lt;module&gt; from pip._internal.cli.main import main as _main # isort:skip # noqa File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/main.py&quot;, line 10, in &lt;module&gt; from pip._internal.cli.autocompletion import autocomplete File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py&quot;, line 9, in &lt;module&gt; from 
pip._internal.cli.main_parser import create_main_parser File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py&quot;, line 7, in &lt;module&gt; from pip._internal.cli import cmdoptions File &quot;/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py&quot;, line 24, in &lt;module&gt; from pip._internal.exceptions import CommandError File &quot;/usr/lib/python3/dist-packages/pip/_internal/exceptions.py&quot;, line 10, in &lt;module&gt; from pip._vendor.six import iteritems File &quot;/usr/lib/python3/dist-packages/pip/_vendor/__init__.py&quot;, line 65, in &lt;module&gt; vendored(&quot;cachecontrol&quot;) File &quot;/usr/lib/python3/dist-packages/pip/_vendor/__init__.py&quot;, line 36, in vendored __import__(modulename, globals(), locals(), level=0) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py&quot;, line 9, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py&quot;, line 1, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen 
importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py&quot;, line 5, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py&quot;, line 95, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 655, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 618, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py&quot;, line 46, in &lt;module&gt; File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py&quot;, line 8, in &lt;module&gt; from OpenSSL import crypto, SSL File &quot;/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py&quot;, line 3268, in &lt;module&gt; _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' </code></pre> <p>I'm running Ubuntu 20.04 in WSL. 
Python <code>openssl</code> is already installed.</p> <pre><code>sudo apt install python3-openssl Reading package lists... Done Building dependency tree Reading state information... Done python3-openssl is already the newest version (19.0.0-1build1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. </code></pre> <p>My assumption is that I need to re-install some stuff, but I'm not sure what. I've tried the obvious stuff like <code>python3-openssl</code>, <code>libssl-dev</code>, <code>libffi-dev</code>, and <code>python3-pip</code> itself and <code>python3</code> alltogether.</p>
<python><ubuntu><pip><windows-subsystem-for-linux>
2023-01-02 10:57:36
4
1,225
patrick
74,981,514
10,695,613
Most efficient way to read a huge parquet file into memory in Python
<p>Ideally, I would like to have the data in a dictionary. I am not even sure if a dictionary is better than a dataframe in this context. After a bit of research, I found the following ways to read a parquet file into memory:</p> <ul> <li>Pyarrow (Python API of Apache Arrow):</li> </ul> <p>With pyarrow, I can read a parquet file into a pyarrow.Table. I can also read the data into a pyarrow.DictionaryArray. Both are easily convertible into a dataframe, but wouldn't memory consumption double in this case?</p> <ul> <li>Pandas:</li> </ul> <p>Via pd.read_parquet. The file is read into a dataframe. Again, would a dataframe perform as well as a dictionary?</p> <ul> <li>parquet-python (pure python, supports read-only):</li> </ul> <p>Supports reading in each row in a parquet as a dictionary. That means I'd have to merge <em>a lot</em> of nano-dictionaries. I am not sure if this is wise.</p>
<python><pandas><dictionary><parquet><pyarrow>
2023-01-02 10:53:12
0
405
BovineScatologist
74,981,481
6,387,095
DRF .as_view on {'post': 'list'} returns attribute error?
<p>I am trying to send some text: example: &quot;Hello World&quot; to DRF end-point.</p> <p>This endpoint on receiving this text is to send me a e-mail with the text.</p> <p>When I hit the end-point with Postman, I get the error:</p> <blockquote> <p>Internal Server Error: /api/errors Traceback (most recent call last): File &quot;/Users/sid/eb-virt/lib/python3.8/site-packages/django/core/handlers/exception.py&quot;, line 55, in inner response = get_response(request) File &quot;/Users/sid/eb-virt/lib/python3.8/site-packages/django/core/handlers/base.py&quot;, line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;/Users/sid/eb-virt/lib/python3.8/site-packages/django/views/decorators/csrf.py&quot;, line 54, in wrapped_view return view_func(*args, **kwargs) File &quot;/Users/sid/eb-virt/lib/python3.8/site-packages/rest_framework/viewsets.py&quot;, line 117, in view handler = getattr(self, action) AttributeError: 'ErrorMsgViewSet' object has no attribute 'list'</p> </blockquote> <p>To test this out:</p> <p>urls.py</p> <pre><code>from django.urls import path from .api import * urlpatterns = [ path('api/errors', ErrorMsgViewSet.as_view({'post': 'list'}), name='errors'), ] # I tried .as_view(), which gave me an error to change to the above format # i tried using router.register() and got errors on using generic.GenericAPIview </code></pre> <p>api.py</p> <pre><code>from rest_framework import viewsets from email_alerts.serializers import ErrorsSerializer class ErrorMsgViewSet(viewsets.GenericViewSet): serializer_class = ErrorsSerializer permission_classes = [ ] def post(self, request, *args, **kwargs): print(request.data) </code></pre> <p>models.py</p> <pre><code>from django.db import models # Create your models here. 
class ErrorsModel(models.Model): error_msg = models.CharField(max_length=5000, blank=True, null=True) </code></pre> <p>serializers.py</p> <pre><code>from rest_framework import serializers from email_alerts.models import ErrorsModel class ErrorsSerializer(serializers.ModelSerializer): error_msg = serializers.CharField( required=False, allow_null=True, allow_blank=True) class Meta: model = ErrorsModel fields = '__all__' </code></pre>
<python><django><django-rest-framework>
2023-01-02 10:48:56
1
4,075
Sid
74,981,145
2,245,136
Handle KeyError exception and get dictionary name which caused the trouble
<p><code>KeyError</code> exception object contains <code>args</code> attribute. This is a list and it contains a key name which user tries to access within a dictionary. Is it possible to figure out dictionary name which does not contain that key and which caused an exception while trying to access the key within it?</p> <p>Example</p> <pre><code>data = {&quot;my_key&quot;: &quot;my_value&quot;} try: data[&quot;unknown_key&quot;] except KeyError as e: print(&quot;key name: &quot;, e.args[0]) print(&quot;dictionary name: &quot;, e.???) # Here I would need to know the the name of a variable which stores the dictionary is &quot;data&quot; </code></pre>
<python><exception><keyerror>
2023-01-02 10:14:03
1
372
VIPPER
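On the `KeyError` question: the exception carries only the missing key; the interpreter does not record which variable held the dict, so there is nothing on the exception object to recover the name from. One workaround is to give the mapping a name yourself via a small `dict` subclass whose `__missing__` hook embeds it in the error message (the class name `NamedDict` is made up for this sketch):

```python
class NamedDict(dict):
    """A dict that knows its own name and reports it on missing keys."""
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

    def __missing__(self, key):
        # dict.__getitem__ calls __missing__ when the key is absent.
        raise KeyError(f"{key!r} not found in dictionary {self.name!r}")

data = NamedDict("data", {"my_key": "my_value"})
try:
    data["unknown_key"]
except KeyError as e:
    print("details:", e.args[0])   # message names both the key and the dict
```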
74,981,113
18,972,785
Is it good to use the Python garbage collector inside the program?
<p>I have written an NLP GUI program with python. Everything works well and there is no problem. Since the program should process large corpus and make graphs, in order to free memory and let the processes to fit inside the RAM, i have used <code>gc.collect()</code> in several parts to delete some big variables. My question is this: is it right and efficient to use <code>gc.collect()</code> manually in several parts when processing big datasets or let the python use its automatic garbage collector?? I mean, does it have a bad effect on the flow of the program or <code>gc.collect()</code> works well? I have heard that using <code>gc.collect()</code> when processing big datasets, have bad effects on the program flow and even some times it wastes more RAM space to delete and free variables. I really appreciate answers which guide me.</p>
<python><garbage-collection>
2023-01-02 10:10:58
0
505
Orca
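Context for the garbage-collector question: CPython frees most objects immediately when their reference count reaches zero (e.g. right after `del big_variable`), so a manual `gc.collect()` usually buys nothing there and only adds pause time if called in a hot loop. The cyclic collector matters only for reference cycles. A small sketch of the one case where an explicit call does reclaim memory sooner:

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a        # reference cycle: refcounts can never reach zero
del a, b                   # the pair is now unreachable but still alive

collected = gc.collect()   # returns the number of unreachable objects found
print("collected:", collected)   # > 0 here, because of the cycle
```

Without the cycle, `del` alone would have freed the objects and `gc.collect()` would have had nothing extra to do.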
74,981,011
12,690,313
T5 model generates short output
<p>I have fine-tuned the T5-base model (from hugging face) on a new task where each input and target are sentences of 256 words. The loss is converging to low values however when I use the <code>generate</code> method the output is always too short. I tried giving minimal and maximal length values to the method but it doesn't seem to be enough. I suspect the issue is related to the fact that the sentence length before tokenization is 256 and after tokenization, it is not constant (padding is used during training to ensure all inputs are of the same size). Here is my generate method:</p> <pre class="lang-py prettyprint-override"><code>model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base') tokenizer = T5Tokenizer.from_pretrained('t5-base') generated_ids = model.generate( input_ids=ids, attention_mask=attn_mask, max_length=1024, min_length=256, num_beams=2, early_stopping=False, repetition_penalty=10.0 ) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids][0] preds = preds.replace(&quot;&lt;pad&gt;&quot;, &quot;&quot;).replace(&quot;&lt;/s&gt;&quot;, &quot;&quot;).strip().replace(&quot; &quot;, &quot; &quot;) target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True) for t in reference][0] target = target.replace(&quot;&lt;pad&gt;&quot;, &quot;&quot;).replace(&quot;&lt;/s&gt;&quot;, &quot;&quot;).strip().replace(&quot; &quot;, &quot; &quot;) </code></pre> <p>The inputs are created using</p> <pre class="lang-py prettyprint-override"><code>tokens = tokenizer([f&quot;task: {text}&quot;], return_tensors=&quot;pt&quot;, max_length=1024, padding='max_length') inputs_ids = tokens.input_ids.squeeze().to(dtype=torch.long) attention_mask = tokens.attention_mask.squeeze().to(dtype=torch.long) labels = self.tokenizer([target_text], return_tensors=&quot;pt&quot;, max_length=1024, padding='max_length') label_ids = labels.input_ids.squeeze().to(dtype=torch.long) 
label_attention = labels.attention_mask.squeeze().to(dtype=torch.long) </code></pre>
<python><pytorch><huggingface-transformers><huggingface-tokenizers>
2023-01-02 10:00:42
1
1,341
Tamir
74,980,841
516,268
What is the idiomatic way to write pandas groupby result to DataFrame?
<p>Source df as like:</p> <pre><code>EventType User Item View A 1 View B 1 Like C 2 View C 2 Buy A 1 </code></pre> <p>We have 5 users: A B C D E</p> <p>We have 6 Items: 1 2 3 4 5 6</p> <p>I would like to generate new df like</p> <pre><code>Event_Type Event_Ratio ItemsHaveEvent UsersHaveEvent View 0.6 0.33 0.6 Like 0.2 0.167 0.2 Buy 0.2 0.167 0.2 </code></pre> <p>Event_Type: same as EventType in original df</p> <p>Event_Ratio: the event / total events</p> <p>ItemsHaveEvent: items have this event / total items</p> <p>UsersHaveEvent: users have this event / total users</p> <p>How to write idiomatic pandas code in declarative way to do this?</p>
<python><pandas>
2023-01-02 09:40:19
1
1,327
l4rmbr
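One declarative shape for the groupby question above is a single named `agg` followed by `assign` to turn the counts into ratios (the totals of 5 users and 6 items are given in the question; `nunique` counts distinct users/items per event):

```python
import pandas as pd

df = pd.DataFrame({
    "EventType": ["View", "View", "Like", "View", "Buy"],
    "User":      ["A",    "B",    "C",    "C",    "A"],
    "Item":      [1,      1,      2,      2,      1],
})
TOTAL_USERS, TOTAL_ITEMS = 5, 6

out = (
    df.groupby("EventType")
      .agg(Event_Ratio=("User", "size"),          # rows per event
           ItemsHaveEvent=("Item", "nunique"),    # distinct items per event
           UsersHaveEvent=("User", "nunique"))    # distinct users per event
      .assign(Event_Ratio=lambda d: d["Event_Ratio"] / len(df),
              ItemsHaveEvent=lambda d: d["ItemsHaveEvent"] / TOTAL_ITEMS,
              UsersHaveEvent=lambda d: d["UsersHaveEvent"] / TOTAL_USERS)
      .reset_index()
      .rename(columns={"EventType": "Event_Type"})
)
print(out)
```

For "View" this gives 3/5 = 0.6, 2/6 ≈ 0.33, and 3/5 = 0.6, matching the expected row in the question.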
74,980,790
17,530,552
How to correctly plot a linear regression on a log10 scale?
<p>I am plotting two lists of data against each other, namely <code>freq</code> and <code>data</code>. Freq stands for frequency, and data are the numeric observations for each frequency.</p> <p>In the next step, I apply the ordinary linear least-squared regression between <code>freq</code> and <code>data</code>, using <code>stats.linregress</code> on the logarithmic scale. My aim is applying the linear regression inside the log-log scale, not on the normal scale.</p> <p>Before doing so, I transform both <code>freq</code> and <code>data</code> into <code>np.log10</code>, since I plan to plot a straight linear regression line on the logarithmic scale, using <code>plt.loglog</code>.</p> <p><strong>Problem:</strong> The problem is that the regression line, plotted in red color, is plotted far from the actual data, plotted in green color. I assume that there is a problem in combination with <code>plt.loglog</code> in my code, hence the visual distance between the green data and the red regression line. How can I fix this problem, so that the regression line plots on top of the actual data?</p> <p>Here is my reproducible code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy import stats # Data freq = [0.0102539, 0.0107422, 0.0112305, 0.0117188, 0.012207, 0.0126953, 0.0131836] data = [4.48575, 4.11893, 3.69591, 3.34766, 3.18452, 3.23554, 3.43357] # Plot log10 of freq vs. data plt.loglog(freq, data, c=&quot;green&quot;) # Linear regression log_freq = np.log10(freq) log_data = np.log10(data) reg = stats.linregress(log_freq, log_data) slope = reg[0] intercept = reg[1] plt.plot(freq, slope*log_freq + intercept, color=&quot;red&quot;) </code></pre> <p>And here is a screenshot of the code’s result: <a href="https://i.sstatic.net/7I7pW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7I7pW.png" alt="enter image description here" /></a></p>
<python><matplotlib><linear-regression>
2023-01-02 09:35:15
2
415
Philipp
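The red line in that regression plot lands far from the data because `plt.plot(freq, slope*log_freq + intercept)` mixes spaces: the x values are raw frequencies while the y values are still log10 predictions, and the axes are already log-scaled by `plt.loglog`, so the predictions are effectively logged twice. The fit itself is fine; the fitted values just need converting back with `10**(...)` before plotting against `freq`. A numpy-only sketch of the fix (`np.polyfit` on the logs gives the same slope and intercept as `stats.linregress` here):

```python
import numpy as np

freq = np.array([0.0102539, 0.0107422, 0.0112305, 0.0117188,
                 0.012207, 0.0126953, 0.0131836])
data = np.array([4.48575, 4.11893, 3.69591, 3.34766,
                 3.18452, 3.23554, 3.43357])

log_freq, log_data = np.log10(freq), np.log10(data)
slope, intercept = np.polyfit(log_freq, log_data, 1)  # least-squares line in log-log space

fit = 10 ** (slope * log_freq + intercept)  # back to linear space
# plt.loglog(freq, data, c="green")
# plt.loglog(freq, fit, c="red")            # now overlays the data
```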
74,980,768
7,713,770
How to communicate django api with frontend react native?
<p>I have a django application and I have a react native app. I am running the android emulator from android studio.</p> <p>And now I try to connect the backend with the frontend. I studied the example from: <code> https://reactnative.dev/docs/network</code></p> <p>And the example url: <a href="https://reactnative.dev/movies.json%27" rel="nofollow noreferrer">https://reactnative.dev/movies.json'</a> works.</p> <p>I run the native-react app on port: <a href="http://192.168.1.69:19000/" rel="nofollow noreferrer">http://192.168.1.69:19000/</a> And I run the backend on port: <a href="http://127.0.0.1:8000/api/movies/" rel="nofollow noreferrer">http://127.0.0.1:8000/api/movies/</a></p> <p>But now I try to connect with my own localhost data. So this is my component:</p> <pre><code>import { ActivityIndicator, FlatList, Text, View } from &quot;react-native&quot;; import React, { Component } from &quot;react&quot;; export default class App extends Component { constructor(props) { super(props); this.state = { data: [], isLoading: true, }; } async getMovies() { try { const response = await fetch(&quot;http://127.0.0.1:8000/api/movies/&quot;, { method: &quot;GET&quot;, headers: { &quot;Content-Type&quot;: &quot;application/json&quot;, &quot;Access-Control-Allow-Origin&quot;: &quot;*&quot;, }, }); const json = await response.json(); this.setState({ data: json.movies }); } catch (error) { console.log(error); } finally { this.setState({ isLoading: false }); } } componentDidMount() { this.getMovies(); } render() { const { data, isLoading } = this.state; return ( &lt;View style={{ flex: 1, padding: 24 }}&gt; {isLoading ? 
( &lt;ActivityIndicator /&gt; ) : ( &lt;FlatList data={data} keyExtractor={({ id }, index) =&gt; id} renderItem={({ item }) =&gt; &lt;Text&gt;{item.title}&lt;/Text&gt;} /&gt; )} &lt;/View&gt; ); } } </code></pre> <p>this is the data from postman:</p> <pre><code>[ { &quot;id&quot;: 1, &quot;title&quot;: &quot;Husband&quot;, &quot;description&quot;: &quot;Very nice wife cocks the man&quot;, &quot;no_of_ratings&quot;: 1, &quot;avg_rating&quot;: 5.0 }, { &quot;id&quot;: 2, &quot;title&quot;: &quot;Nice movie&quot;, &quot;description&quot;: &quot;Description ah, niceNICE&quot;, &quot;no_of_ratings&quot;: 0, &quot;avg_rating&quot;: 0 } ] </code></pre> <p>But if I try to run this example. I get this error:</p> <pre><code>Access to fetch at 'http://127.0.0.1:8000/api/movies/' from origin 'http://localhost:19006' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. 
</code></pre> <p>Question: how to make a connection from react native with the backend?</p> <p>OK, I installed django-cors-headers and in settings I did all the configuration.</p> <p>So in django I have this:</p> <pre><code>CORS_ALLOWED_ORIGINS = [ &quot;http://localhost:3000&quot;, ] </code></pre> <p>and in react native I have this:</p> <pre><code>useEffect(() =&gt; { fetch(&quot;http://127.0.0.1:8000/animalGroups/group/&quot;, { method: &quot;GET&quot;, }) .then((res) =&gt; res.json()) .then((jsonres) =&gt; setAnimalGroup(jsonres)) .catch((error) =&gt; console.error(error)); }, []); </code></pre> <p>and the native react app is running on:</p> <pre><code>› Opening exp://192.168.1.69:19000 on MY_ANDROID </code></pre> <p>But I still get this error:</p> <pre><code>Network request failed at node_modules\whatwg-fetch\dist\fetch.umd.js:null in setTimeout$argument_0 </code></pre>
<python><django><react-native><android-studio>
2023-01-02 09:33:00
2
3,991
mightycode Newton
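Two separate things go wrong in the question above. First, `http://127.0.0.1:8000` inside an Android emulator refers to the emulator itself, not the machine running Django, so the fetch must target the host's LAN address (the `192.168.1.69` already used by Expo) or the standard emulator alias `10.0.2.2`. Second, the browser CORS error means Django must allow the app's origin via `django-cors-headers`. A hedged settings sketch — the origins follow the ports shown in the question, and your existing apps/middleware stand in for the ellipses:

```python
# settings.py -- assumes `pip install django-cors-headers`
INSTALLED_APPS = [
    # ... your existing apps ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # place above CommonMiddleware
    # ... your existing middleware ...
]

CORS_ALLOWED_ORIGINS = [
    "http://localhost:19006",      # Expo web
    "http://192.168.1.69:19000",   # Expo on the LAN
]
```

Then run Django reachable from the LAN with `python manage.py runserver 0.0.0.0:8000` and fetch `http://192.168.1.69:8000/api/movies/` from the app.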
74,980,748
10,829,044
pandas groupby and do categorical ordering to drop duplicates
<p>I have a dataframe like as below</p> <pre><code>df = pd.DataFrame({ &quot;Name&quot;: [&quot;Tim&quot;, &quot;Tim&quot;, &quot;Tim&quot;, &quot;Tim&quot;, &quot;Tim&quot;,'Jack','Jack','Jack'], &quot;Status&quot;: [&quot;A1&quot;, &quot;E1&quot;, &quot;B3&quot;, &quot;D4&quot;, &quot;C90&quot;,&quot;A1&quot;,&quot;C90&quot;,&quot;B3&quot;] }) </code></pre> <p>The actual order of my status variable is B3 &lt; A1 &lt; D4 &lt; C90 &lt; E1.</p> <p>So the last value is E1 and 1st value is B3.</p> <p>I would like to do the below</p> <p>a) groupby <code>Name</code></p> <p>a) order the values based on categorical ordering shown above</p> <p>c) Keep only the last value (after dropping duplicates based on <code>Name</code> column)</p> <p>So, I tried the below</p> <pre><code>df[&quot;Status&quot;] = df[&quot;Status&quot;].astype(&quot;category&quot;) df[&quot;Status&quot;] = df[&quot;Status&quot;].cat.set_categories([&quot;B3&quot;, &quot;A1&quot;, &quot;D4&quot;, &quot;C90&quot;, &quot;E90&quot;], ordered=True) df = df.sort_values(['Status']) df_cleaned = df.drop_duplicates(['Status'],keep='last') </code></pre> <p>But this results in incorrect output.</p> <p>I expect my output to be like as below (one row for each <code>Name</code> and their latest/last <code>Status</code> value)</p> <pre><code>Name Status Tim E1 Jack C90 </code></pre>
<python><pandas><list><dataframe><group-by>
2023-01-02 09:30:09
2
7,793
The Great
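Two slips in the attempt above: the category list spells the top value `E90` while the data holds `E1` (turning those rows into `NaN` after the cast), and the duplicates are dropped on `Status` when the goal is one row per `Name`. With both corrected:

```python
import pandas as pd

df = pd.DataFrame({
    "Name":   ["Tim", "Tim", "Tim", "Tim", "Tim", "Jack", "Jack", "Jack"],
    "Status": ["A1",  "E1",  "B3",  "D4",  "C90", "A1",   "C90",  "B3"],
})

order = ["B3", "A1", "D4", "C90", "E1"]          # E1, not E90
df["Status"] = pd.Categorical(df["Status"], categories=order, ordered=True)

out = (df.sort_values("Status")                  # lowest ... highest
         .drop_duplicates("Name", keep="last")   # keep each Name's highest Status
         .reset_index(drop=True))
print(out)
```

The stable sort puts each person's highest status last within the frame, so `drop_duplicates("Name", keep="last")` yields Tim → E1 and Jack → C90, as expected.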
74,980,738
13,987,643
Get last n elements of list from current index
<p>I have a list like this : <code>[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]</code> and I want to extract the previous 'n' elements from a particular index.</p> <p>For eg: If I take index 7, I have the element 8 in it. And for <code>n = 3</code>, I want to get the previous 3 elements starting backwards from index 7. The result would be <code>[5, 6, 7]</code>.</p> <p>I am not able to come up with a slicing formula to get this. Could someone please help me?</p>
<python><list>
2023-01-02 09:29:24
1
569
AnonymousMe
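For the slicing question, "the n elements ending just before and including index i" is `lst[max(0, i - n + 1):i + 1]` if index i is included, or simply `lst[max(0, i - n):i]` for the n elements preceding i as described (index 7, n = 3 → elements at indices 4..6, i.e. `[5, 6, 7]`). The `max` guard keeps the start from going negative, which would otherwise wrap around to the end of the list:

```python
lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def last_n_before(seq, i, n):
    """Up to n elements immediately preceding index i (never wraps around)."""
    return seq[max(0, i - n):i]

print(last_n_before(lst, 7, 3))   # [5, 6, 7]
print(last_n_before(lst, 2, 5))   # [1, 2] -- clipped at the start, no wrap-around
```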
74,980,682
16,698,040
"Document interning" in Mongo
<p>I have a lot of documents which I know will rarely change and are very similar to each other, specifically I know they have a nested field in the document that is always the same (for some of them)</p> <pre class="lang-json prettyprint-override"><code>{ &quot;docid&quot;: 1 &quot;nested_field_that_will_always_be_the_same&quot;: { &quot;title&quot;: &quot;this will always be the same&quot; &quot;desc&quot;: &quot;this will always be the same, too&quot; } } { &quot;docid&quot;: 2 &quot;nested_field_that_will_always_be_the_same&quot;: { &quot;title&quot;: &quot;this will always be the same&quot; &quot;desc&quot;: &quot;this will always be the same, too&quot; } } </code></pre> <p>I don't want to store the same document over and over again, instead I want Mongo to &quot;intern&quot; this field, i.e. only store it once and the rest will only store pointers to it.</p> <p>Something like:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;docid&quot;: 1 &quot;nested_field_that_will_always_be_the_same&quot;: { &quot;title&quot;: &quot;this will always be the same&quot; &quot;desc&quot;: &quot;this will always be the same, too&quot; } } { &quot;docid&quot;: 2 &quot;nested_field_that_will_always_be_the_same&quot;: &lt;pointer to doc1.nested_field_that_will_always_be_the_same&gt; } </code></pre> <p>Now, of course, I can take out this nested field into a different document and then have Mongo reference its _id field, but I am not looking for an app-side solution, because this collection is being accessed via multiple workers and I don't have all the documents that have the same nested_field_that_will_always_be_the_same at any given moment.</p> <p>Instead, I want a solution provided by Mongo to only store this field once for every instance it is unique.</p> <p>How can I do that?</p> <p>I am using Pymongo.</p>
<python><mongodb><pymongo>
2023-01-02 09:21:47
1
475
Stack Overflow
74,980,665
8,037,521
Plotly non-interactive image with sliders
<p>I have managed to make a plotly graph based on this code sample from Plotly documentation:</p> <pre><code>import plotly.graph_objects as go import numpy as np # Create figure fig = go.Figure() # Add traces, one for each slider step for step in np.arange(0, 5, 0.1): fig.add_trace( go.Scatter( visible=False, line=dict(color=&quot;#00CED1&quot;, width=6), name=&quot;𝜈 = &quot; + str(step), x=np.arange(0, 10, 0.01), y=np.sin(step * np.arange(0, 10, 0.01)))) # Make 10th trace visible fig.data[10].visible = True # Create and add slider steps = [] for i in range(len(fig.data)): step = dict( method=&quot;update&quot;, args=[{&quot;visible&quot;: [False] * len(fig.data)}, {&quot;title&quot;: &quot;Slider switched to step: &quot; + str(i)}], # layout attribute ) step[&quot;args&quot;][0][&quot;visible&quot;][i] = True # Toggle i'th trace to &quot;visible&quot; steps.append(step) sliders = [dict( active=10, currentvalue={&quot;prefix&quot;: &quot;Frequency: &quot;}, pad={&quot;t&quot;: 50}, steps=steps )] fig.update_layout( sliders=sliders ) fig.show() </code></pre> <p>Problem is that, as I am instead visualizing relatively big <code>go.Image</code>s, the preparation of the plots is extremely slow and not suitable for the final interactive Streamlit app that I am trying to create.</p> <p>To make it faster, I thought to have non-interactive plots which (I guess so at least) should be much faster. There is, however, no <code>interactive=False</code> option for the plotly traces, or at least I cannot find them. There is option to export figure to a static image, but that would make no sense, as the figure would have to be prepared anyway before it would be exported to png and visualized as a static image.</p> <p>So, is there any way to modify this Plotly example in a way that keeps the sliders but directly creates non-interactive plot from a Numpy array?</p> <p>Or do I have to switch do a different library completely? Thanks.</p>
<python><plotly>
2023-01-02 09:20:02
0
1,277
Valeria
74,980,645
13,709,317
Python equivalent of C struct for writing bytes to a file
<p>What could be the simplest Python equivalent to the following C code?</p> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt; int main(void) { struct dog { char breed[16]; char name[16]; }; struct person { char name[16]; int age; struct dog pets[2]; }; struct person p = { &quot;John Doe&quot;, 20, {{&quot;Lab&quot;, &quot;Foo&quot;}, {&quot;Pug&quot;, &quot;Bar&quot;}} }; FILE *fp = fopen(&quot;data_from_c.txt&quot;, &quot;w&quot;); fwrite(&amp;p, sizeof(p), 1, fp); fclose(fp); return 0; } </code></pre> <p>My main goal here is to write the data to the file as contiguous bytes:</p> <pre class="lang-none prettyprint-override"><code>$ xxd data_from_c.txt 00000000: 4a6f 686e 2044 6f65 0000 0000 0000 0000 John Doe........ 00000010: 1400 0000 4c61 6200 0000 0000 0000 0000 ....Lab......... 00000020: 0000 0000 466f 6f00 0000 0000 0000 0000 ....Foo......... 00000030: 0000 0000 5075 6700 0000 0000 0000 0000 ....Pug......... 00000040: 0000 0000 4261 7200 0000 0000 0000 0000 ....Bar......... 00000050: 0000 0000 .... </code></pre> <p>So far, I have tried using <code>namedtuple</code>s and the <code>struct</code> module for packing the Python values:</p> <pre><code>from collections import namedtuple import struct dog = namedtuple('dog', 'breed name') person = namedtuple('person', 'name age pets') p = person( name=b'John Doe', age=22, pets=(dog(breed=b'Lab', name=b'Foo'), dog(breed=b'Pug', name=b'Bar')) ) with open('data_from_python.txt', 'wb') as f: b = struct.pack('&lt;16s i 16s 16s 16s 16s', *p) f.write(b) </code></pre> <p>However, the <code>*p</code> unpacking does not unpack the iterable recursively. Is there a way for doing this properly?</p> <p>If there is an alternative to doing this that doesn't involve using <code>struct</code> or <code>namedtuple</code>, that would be welcome too.</p>
<python><file><struct>
2023-01-02 09:18:13
1
801
First User
74,980,509
2,998,077
Iteration in a dictionary with lists as values
<p>A dictionary with lists as values and ascending dates as keys, that I want to understand how many times M in the total past times P, cover some of the current numbers.</p> <p>For example, for L19981120: [2, 3, 5]: 2 numbers in the [2, 3, 5], appeared 3 times in the past 9 times.</p> <p>The code looks verbose, and only printing some iterations, but not all.</p> <p>What is the correct and better way to do so?</p> <pre><code>data = { &quot;L19980909&quot;: [11,12,25], &quot;L19981013&quot;: [19,28,31], &quot;L19981016&quot;: [4,9,31], &quot;L19981020&quot;: [8,11,17], &quot;L19981023&quot;: [5,22,25], &quot;L19981027&quot;: [5,20,27], &quot;L19981030&quot;: [12,19,26], &quot;L19981105&quot;: [31,32,38], &quot;L19981109&quot;: [2,22,24], &quot;L19981110&quot;: [2,16,19], &quot;L19981113&quot;: [9,15,17], &quot;L19981119&quot;: [2,10,11], &quot;L19981120&quot;: [2,3,5], &quot;L19981126&quot;: [4,6,14], &quot;L19981127&quot;: [5,9,18], &quot;L19981201&quot;: [1,6,7]} value_list = list(data.values()) for idx, (k, v) in enumerate(data.items()): ever_more_than_times = [] for how_many_past in [7,8,9]: if idx &gt;= how_many_past: past_appeared = sum(value_list[idx-how_many_past:idx],[]) for more_than_times in [2,3]: if how_many_past &gt; more_than_times: for ox in list(range(1,40)): if past_appeared.count(ox) &gt;= more_than_times: ever_more_than_times.append(ox) ever_more_than_times = list(set(ever_more_than_times)) hit = len(set(ever_more_than_times) &amp; set(v)) if hit != 0: print (k,'$',v,'$',how_many_past,'$',more_than_times,'$',hit) </code></pre> <p>Output:</p> <pre><code>L19981105 $ [31, 32, 38] $ 9 $ 3 $ 1 L19981110 $ [2, 16, 19] $ 9 $ 3 $ 1 L19981119 $ [2, 10, 11] $ 9 $ 3 $ 1 L19981120 $ [2, 3, 5] $ 9 $ 3 $ 2 L19981127 $ [5, 9, 18] $ 9 $ 3 $ 1 </code></pre>
<python><loops><iteration>
2023-01-02 08:58:59
2
9,496
Mark K
74,980,332
6,057,371
Python get num occurrences of elements in each of several lists
<p>I have a 4 corpuses:</p> <pre><code>C1 = ['hello','good','good','desk'] C2 = ['nice','good','desk','paper'] C3 = ['red','blue','green'] C4 = ['good'] </code></pre> <p>I want to define a list of words, and for each - get the occurances per corpus. so if</p> <blockquote> <p>l= ['good','blue']</p> </blockquote> <p>I will get</p> <pre><code>res_df = word. C1. C2. C3. C4 good. 2. 1. 0. 1 blue. 0. 0. 1. 0 </code></pre> <p>My corpus is very large so I am looking for efficient way. What is the best way to do this?</p> <p>Thanks</p>
<python><pandas><list><dataframe><collections>
2023-01-02 08:35:35
3
2,050
Cranjis
74,980,251
126,833
redirect_uri is always localhost when json and console have it as a proper domain of localhost
<p>I went through <a href="https://stackoverflow.com/questions/11485271/google-oauth-2-authorization-error-redirect-uri-mismatch">11485271</a> but not avail.</p> <p>I see</p> <p><a href="https://i.sstatic.net/fWUX3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fWUX3.png" alt="enter image description here" /></a></p> <p>Even though I removed localhost. (And when I gave localhost it was at port 8000 and not 8080)</p> <pre><code>&quot;redirect_uris&quot;:[&quot;https://local.mysite.com&quot;] </code></pre> <p><a href="https://local.mysite.com" rel="nofollow noreferrer">https://local.mysite.com</a> is my MAMP PRO setup.</p> <p>When I goto <a href="https://accounts.google.com/o/oauth2/auth?client_id=%7Bclient_id%7D&amp;response_type=token&amp;redirect_uri=https%3A%2F%2Flocal.mysite.com&amp;scope=%7Bscope%7D" rel="nofollow noreferrer">https://accounts.google.com/o/oauth2/auth?client_id={client_id}&amp;response_type=token&amp;redirect_uri=https%3A%2F%2Flocal.mysite.com&amp;scope={scope}</a> in the browser it works, its just that its not working when I do <code>python3 script.py</code>.</p> <p>So how do I fix this ?</p>
<python><oauth-2.0><google-developers-console><google-client><google-client-login>
2023-01-02 08:23:34
0
4,291
anjanesh
74,980,150
1,737,830
Imported function fails to save output to a file; succeeds when called in-place
<p>I'm trying to save output processed by Python to a text file. I started with approach #1 described below. It didn't work (details below), so I tried to isolate the failing function and launch it with predefined array to be processed (approach #2). It didn't work either. So, I tried to completely extract the code and put it into a separate module: it worked as intended (approach #3). However, the working approach is not usable in the context of the process I'm trying to design: the list of arguments will be dynamic and it should be processed the way it works in the approach #1 (importing a function, then feeding it with a dynamically generated list).</p> <p>Folder structure:</p> <pre><code>root +- containers +- processed output.txt +- controllers main_controller.py save_output_to_file.py test.py </code></pre> <p>Please keep in mind that all of the files with discussed code exist in the same directory, so in theory they should share the same relative path to the <code>output.txt</code> file. Directories <code>containers</code> and <code>controllers</code> are siblings.</p> <p>Now, the following things happen when I try to save the output to the file:</p> <ol> <li>When called from <code>main_controller.py</code> this way:</li> </ol> <pre class="lang-py prettyprint-override"><code>from controllers.save_output_to_file import save_output_to_file [...] 
print(urls) save_output_to_file(urls) </code></pre> <p>Output:</p> <pre><code>['url1', 'url2', 'url3'] # printed urls Traceback (most recent call last): File &quot;C:\Users\aqv\root\controllers\main_controller.py&quot;, line 113, in &lt;module&gt; save_output_to_file(urls) File &quot;C:\Users\aqv\root\controllers\save_output_to_file.py&quot;, line 19, in save_output_to_file with open(output_file, 'w+', encoding='utf-8') as f: FileNotFoundError: [Errno 2] No such file or directory: '..\\containers\\processed\\output.txt' Process finished with exit code 1 </code></pre> <p>It happens no matter if <code>output.txt</code> exists in the directory or not.</p> <ol start="2"> <li>When called from <code>save_output_to_file.py</code> (with predefined <code>urls</code>):</li> </ol> <pre class="lang-py prettyprint-override"><code>from pathlib import Path output_folder = Path('../containers/processed') output_source = 'output.txt' output_file = output_folder / output_source urls = ['url4', 'url5', 'url6'] print(urls) def save_output_to_file(urls): &quot;&quot;&quot;Save URLs to a text file for further processing by bash script.&quot;&quot;&quot; with open(output_file, 'w+', encoding='utf-8') as f: for url in urls: f.write(f'{url}\n') </code></pre> <p>Output:</p> <pre><code>['url4', 'url5', 'url6'] # printed urls </code></pre> <p>URLs get printed to the console, no errors are reported, and the file doesn't get created. 
For this piece of code, it doesn't matter whether the file exists or not - it's never reached.</p> <ol start="3"> <li>When called from <code>test.py</code> file:</li> </ol> <pre class="lang-py prettyprint-override"><code>from pathlib import Path output_folder = Path('../containers/processed') output_source = 'models.txt' output_file = output_folder / output_source urls = ['url7', 'url8', 'url9'] print(urls) with open(output_file, 'w+', encoding='utf-8') as f: for url in ssh_urls: f.write(f'{url}\n') </code></pre> <p>Now, everything works as intended:</p> <pre><code>['url7', 'url8', 'url9'] # printed urls </code></pre> <p>URLs get printed to the console, no errors are reported, and the file gets created if doesn't exist, or overwritten if it does.</p> <p>All of the examples were launched in WSL2 environment.</p> <p>The question: how should I call the file creation so that it works correctly when called using approach #1? And if it's a problem related to WSL, how to make it system-independent?</p>
<python><windows-subsystem-for-linux>
2023-01-02 08:08:46
1
2,368
AbreQueVoy
74,980,004
507,974
Output video with same settings as input video python OpenCV
<p>I have a video I read in to detect opbjects and generate an video that will at a later time be used as an monotone alpha channel for the original video.</p> <p>I get the current input video with:</p> <pre><code>cap = cv2.VideoCapture('file.mp4') </code></pre> <p>From here you are supposed to create a VideoWriter to output the edited frames with something to the effect of:</p> <pre><code>out = cv2.VideoWriter('output.mp4',fourcc, 20.0, (640,480)) </code></pre> <p>but is there a way to directly tell the writer to just copy the initial videos formatting and configurations?</p>
<python><opencv><video><video-processing>
2023-01-02 07:47:34
1
420
Skyler
74,979,976
2,092,445
Dynamodb query using FilterExpression involving nested attributes with boto3
<p>I have below data in my dynamodb table with <em>name</em> as partition-key.</p> <pre><code>{ &quot;name&quot;:&quot;some-name&quot;, &quot;age&quot;: 30, &quot;addresses&quot;:[ &quot;addr-1&quot;, &quot;addr-2&quot; ], &quot;status&quot;:&quot;active&quot; } </code></pre> <p>I have a GSI defined with partition-key on <em>status</em> field.</p> <p>Now, I have to run a query which returns all records whose status is active and whose address list contains a given address.</p> <pre><code> kwargs = { 'IndexName': 'status-index', 'KeyConditionExpression': Key('status').eq('active'), 'FilterExpression': 'contains(#filter, :val)', 'ExpressionAttributeNames': { '#filter': 'addresses' }, 'ExpressionAttributeValues': { ':val': addr } } response = table.query(**kwargs) </code></pre> <p>Here, <em>addr</em> is the parameter passes to the function.</p> <p>All this works great. But when I change the record format in the table to below, the query stops working.</p> <h2>updated-record-format</h2> <p>(addresses are now nested one level inside - in home key)</p> <pre><code>{ &quot;name&quot;:&quot;some-name&quot;, &quot;age&quot;: 30, &quot;addresses&quot;:{ &quot;home&quot;:[ &quot;addr-1&quot;, &quot;addr-2&quot; ] }, &quot;status&quot;:&quot;active&quot; } </code></pre> <h2>updated-query</h2> <pre><code>kwargs = { 'IndexName': 'status-index', 'KeyConditionExpression': Key('status').eq('active'), 'FilterExpression': 'contains(#filter, :val)', 'ExpressionAttributeNames': { '#filter': 'addresses.home' #nested home key on addresses }, 'ExpressionAttributeValues': { ':val': addr } } response = table.query(**kwargs) </code></pre> <p>How should I change the query in order to use nested list ?</p>
<python><amazon-dynamodb><boto3><dynamodb-queries>
2023-01-02 07:43:32
1
2,264
Naxi
74,979,873
9,407,941
Consistently getting `None` for the `gdb.Field.name` for a C++ function's parameters
<p>I'm trying to use <code>gdb</code>'s Python API to extract C++'s function parameters' names, and am consistently getting <code>None</code> when I query the <code>name</code> attribute of function parameters as <code>gdb.Field</code> objects.</p> <p>On a higher level, I need to distinguish between named and anonymous parameters in a function whilst knowing the exact order of the parameters going into the function, so other methods like <code>info args</code> or looping over <code>gdb.Block</code>s won't work, as they don't include anonymous parameters.</p> <p>As a simple run, I have the following compiled under debug:</p> <pre><code>// library.h auto named(int a, int b) -&gt; int; // library.cpp auto named(int a, int b) -&gt; int { return 0; } </code></pre> <p>Putting a breakpoint in <code>gdb</code> results in</p> <pre class="lang-py prettyprint-override"><code>(gdb) python named = gdb.parse_and_eval(&quot;named&quot;) (gdb) python print(named.type) int (int, int) (gdb) python print(named.type.fields()) [&lt;gdb.Field object at 0x7f79663f29d0&gt;, &lt;gdb.Field object at 0x7f79663f2a10&gt;] (gdb) python print(named.type.fields()[0].name) None (gdb) python print(named.type.fields()[1].name) None </code></pre> <p>According to <a href="https://sourceware.org/gdb/onlinedocs/gdb/Types-In-Python.html#Types-In-Python" rel="nofollow noreferrer">https://sourceware.org/gdb/onlinedocs/gdb/Types-In-Python.html#Types-In-Python</a>,</p> <blockquote> <p>Function: <strong>Type.fields</strong> <em>()</em></p> <p>...</p> <ul> <li>Function and method types have one field per parameter. 
The base types of C++ classes are also represented as fields.</li> </ul> <p>...</p> <p>Each field is a <code>gdb.Field</code> object, with some pre-defined attributes:</p> <p>...</p> <p><code>name</code></p> <p>The name of the field, or None for anonymous fields.</p> </blockquote> <p>I expected <code>print(named.type.fields()[...].name)</code> to show <code>'a'</code> and <code>'b'</code>, rather than <code>None</code>, as I've explicitly declared the parameters <code>a</code> and <code>b</code> in both the definition and the declaration; they aren't prototypes.</p> <p>Am I missing something about <code>gdb.Field.name</code>? Is this attribute only meaningful for <code>gdb.Type</code>s which represent C++ classes/structs?</p> <hr /> <p>Executed under GDB versions:</p> <ul> <li>12.1 (<code>apt</code> release)</li> <li>13.0.90.20230102 (weekly snapshot)</li> </ul>
<python><c++><gdb><trace>
2023-01-02 07:28:35
0
4,168
dROOOze
74,979,760
10,748,412
How to not save into database if document upload is failed
<p>When a user uploads a document and clicks submit the file is stored in a folder and a database entry is created along with bunch of other details. What I am looking for is to avoid the save if the document doesn't get uploaded into the specific location.</p> <p>serializer.py</p> <pre><code>class DocumentSerializer(serializers.ModelSerializer): class Meta: model = Request fields = ['file', 'doc_type'] def create(self, validated_data): msg = self.__construct_message_body() validated_data['type'] = Request.request_types[-1][0] validated_data['input_message'] = msg instance = ParseRequest.objects.create(**validated_data) msg['request_id'] = instance.id instance.input_message = msg instance.save() return instance </code></pre> <p>views.py</p> <pre><code>class DocumentView(CreateAPIView, ResponseViewMixin): parser_classes = (MultiPartParser, FileUploadParser,) serializer_class = DocumentSerializer def create(self, request, *args, **kwargs): try: data = request.data serializer = self.get_serializer( data=data, context={'request': request}) serializer.is_valid() serializer.save() except Exception as e: logger.error(e) return self.error_response(message=ERROR_RESPONSE['UPLOAD_DOCUMENT']) return self.success_response(message=SUCCESS_RESPONSE['UPLOAD_DOCUMENT']) </code></pre>
<python><django><database><postgresql><django-rest-framework>
2023-01-02 07:10:26
1
365
ReaL_HyDRA
74,979,653
10,045,509
Null/duplicate check in a column based on another column filter
<p>I am working on pandas with the below requierment</p> <p><a href="https://i.sstatic.net/U82vq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U82vq.png" alt="enter image description here" /></a></p> <p>I need to check the below conditions if criteria is A, then m shouldn't be null if criteria is B then n shouldn't be null</p> <p>I wrote the below code for it</p> <pre><code>df_filter = df.loc[df['criteria']]=='A',[m]] #for A condition check </code></pre> <p>or</p> <pre><code>df_filter = df.query(&quot;criteria == A&quot;)[m] </code></pre> <p>but both are not giving correct result</p> <p>I have also tried</p> <pre><code>df_filter = df.loc[(df[&quot;criteria&quot;] == &quot;A&quot;) &amp; ~ (df[&quot;m&quot;].isnull()] </code></pre> <p>but this giving the columns without null..</p> <p>I need to check if there are any null values exist in m column if A is selected from criteria.</p> <p>Any help would be appreciated</p>
<python><python-3.x><pandas>
2023-01-02 06:51:36
1
313
RSK Rao
74,979,626
275,002
How to access multiple return values in restype?
<p>I am writing a Go program like the below:</p> <pre><code>package main import ( &quot;C&quot; ) //export getData func getData(symbol string, day string, month string, year string) (string, string) { return &quot;A&quot;, &quot;B&quot; } func main() { } </code></pre> <p>In Python, I am doing this:</p> <pre><code>import ctypes library = ctypes.cdll.LoadLibrary('./deribit.so') get_data = library.getData # Make python convert its values to C representation. get_data.argtypes = [ctypes.c_wchar, ctypes.c_wchar,ctypes.c_wchar,ctypes.c_wchar] get_data.restype = [ctypes.c_wchar,ctypes.c_wchar] d,c = get_data(&quot;symbol&quot;, &quot;day&quot;,&quot;month&quot;,&quot;year&quot;) print(d,c) </code></pre> <p>And it gives the error:</p> <pre><code> get_data.restype = ctypes.c_wchar,ctypes.c_wchar TypeError: restype must be a type, a callable, or None </code></pre>
<python><go><ctypes>
2023-01-02 06:47:04
1
15,089
Volatil3
74,979,602
15,887,240
How to take dynamic input from flask and pass it to other function?
<p>How can I take input from url in flask from the parameter?</p> <pre><code>@app.route('/&lt;id&gt;') def give_id(id): return id </code></pre> <p>I need to take the above Id from input and pass it to other function without again needing to write <code>&quot;id&quot;</code></p> <pre><code>def some_function(): variable1 = vid_stream.get_hls_url(give_id(&quot;I need to again pass that Id here&quot;)) </code></pre> <p>How can I directly use the id from <code>give_id(id)</code> function and feed it into <code>vid_stream.get_hls_url</code> function?</p> <p>Posting complete demo code, In case someone needs to run locally.</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route('/&lt;id&gt;') def give_id(id): return id def some_function(): variable1 = vid_stream.get_hls_url(give_id(&quot;I need to again pass that Id here&quot;)) print(variable1) some_function() if __name__ == '__main__': app.run(host='0.0.0.0',port=8080) </code></pre>
<python><flask>
2023-01-02 06:43:35
1
314
DholuBholu
74,979,588
12,097,553
django: foreign key issues when creating a model object
<p>I am trying to write a row to database, with data gathered in a form. I need to work with two foreign keys and one of them is causing the creating to fail, although I am unable to figure out why:</p> <p>here is my model:</p> <pre><code>def upload_path(instance,file): file_dir = Path(file).stem print('usr',instance.user.id) path = '{}/{}/{}/{}'.format(instance.user.id,&quot;projects&quot;,file_dir,file) return path class BuildingFilesVersions(models.Model): version_id = models.AutoField(primary_key=True) building_id = models.ForeignKey(Building, on_delete=models.CASCADE,related_name='building_id_file') user = models.ForeignKey(Building, on_delete=models.CASCADE,related_name=&quot;user_file&quot;) created_at = models.DateTimeField(auto_now_add=True, blank=True) description = models.TextField(max_length=200, blank=True, null=True) modification_type = models.CharField(choices=WORK_TYPE_CHOICES, max_length=200, blank=True, null=True) filename = models.CharField(max_length=200, blank=True, null=True) file = models.FileField(upload_to=upload_path, null=True, blank=True) </code></pre> <p>and here is my view:</p> <pre><code>@login_required @owner_required def RegisterFileView(request,pk): form = AddBuildingFileForm() if request.method == 'POST': form = AddBuildingFileForm(request.POST,request.FILES) if form.is_valid(): description = form.cleaned_data[&quot;description&quot;] modification_type = form.cleaned_data[&quot;modification_type&quot;] filename = form.cleaned_data[&quot;modification_type&quot;] file = request.FILES['file'].name BuildingFilesVersions.objects.create(building_id_id=pk, user_id=request.user, description=description, modification_type=modification_type, filename=filename, file=file) return redirect('home') else: form = AddBuildingFileForm() context = {'form':form} return render(request, 'building_registration/register_file.html', context) </code></pre> <p>what gets me confused is that the error is <code>Field 'building_id' expected a number but got 
&lt;SimpleLazyObject: &lt;User: Vladimir&gt;&gt; </code> even though <code>pk</code> return the proper building_id</p> <p>Can anyone see where I messed up?</p>
<python><django>
2023-01-02 06:41:44
1
1,005
Murcielago
74,979,542
12,696,223
How to get Telegram audio (music) album cover using Telethon API client
<p>I want to get the URL or bytes of the Telegram audio documents (music) album cover using MTProto API and <code>Telethon</code> Python lib, but I could not find such a thing by checking audio message properties. There was a <code>thumbs</code> property for the message attached <code>media</code> property that was null.</p> <p><em><strong>Note:</strong></em> When I download a media using <code>client.download_media</code> the cover is attached to the file, but I do not want to download it.</p> <p><a href="https://i.sstatic.net/gQzts.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gQzts.jpg" alt="Telegram audio/music album cover" /></a> <a href="https://i.sstatic.net/CrxlV.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CrxlV.jpg" alt="Telegram audio/music album cover" /></a></p>
<python><telegram><telethon>
2023-01-02 06:34:03
1
990
Momo
74,979,497
3,026,206
animation loop only cares about first item in list
<p>In a simple pygame zero 2D game, I have a list of Actors that I'm looping through to ensure that they don't run off the side of the screen. However, going right, only the leftmost item in the list (the first one) is triggering the direction change--the rest run off the screen. Strangely, it works fine when they're moving left.</p> <p>I'm running this code (except the function at the bottom) in the update() function. The enemies are 48 pixels wide.</p> <p>Edit: I looked in the debugger and it seems the for loop never gets to the second enemy in the list. It's just constantly checking the first one.</p> <pre><code>WIDTH = 800 HEIGHT = 600 gameDisplay = pygame.display.set_mode((WIDTH, HEIGHT)) ... for enemy in enemies: if enemy.x &gt;= WIDTH - 50 or enemy.x &lt;= 50: # change enemy direction enemy_direction *= -1 break else: animate_sideways(enemies) ... def animate_sideways(enemies): for enemy in enemies: enemy.x += enemy_direction </code></pre>
<python><pgzero>
2023-01-02 06:26:19
1
3,560
beachCode
74,979,430
14,122,835
how to create a new column based on string from different columns
<p>I have a dataframe look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>type</th> <th>city</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>dki jakarta</td> </tr> <tr> <td>2</td> <td>jawa barat</td> </tr> <tr> <td>3</td> <td>jawa tengah</td> </tr> <tr> <td>4</td> <td>jawa timur</td> </tr> <tr> <td>5</td> <td>sulawesi</td> </tr> </tbody> </table> </div> <p>I want to create a new column called <code>city_group</code> based on the city.</p> <ul> <li>dki jakarta, jawa barat: jabo, jabar</li> <li>jawa tengah, jawa tengah: jateng, jatim</li> <li>sulawesi: others</li> </ul> <p>The desire dataframe would be like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>type</th> <th>city</th> <th>city_group</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>dki jakarta</td> <td>jabo, jabar</td> </tr> <tr> <td>2</td> <td>jawa barat</td> <td>jabo, jabar</td> </tr> <tr> <td>3</td> <td>jawa tengah</td> <td>jateng, jatim</td> </tr> <tr> <td>4</td> <td>jawa timur</td> <td>jateng, jatim</td> </tr> <tr> <td>5</td> <td>sulawesi</td> <td>others</td> </tr> </tbody> </table> </div> <p>So far, what I have done is with this script below but I did not get how to put multiple string in the condition.</p> <pre><code>df.loc[df['city'].str.contains(&quot;dki jakarta),'city_group'] = 'jabo, jabar' </code></pre> <p>How can I get the desired dataframe with pandas? Thank you in advance</p>
<python><pandas><string><conditional-statements>
2023-01-02 06:14:57
1
531
yangyang
74,979,300
8,820,616
Python: Can We SSL wrap any http server to https server?
<p>This is a simple HTTPS python server (not for production use)</p> <pre><code># libraries needed: from http.server import HTTPServer, SimpleHTTPRequestHandler import ssl , socket # address set server_ip = '0.0.0.0' server_port = 3389 # configuring HTTP -&gt; HTTPS httpd = HTTPServer((server_ip, server_port), SimpleHTTPRequestHandler) httpd.socket = ssl.wrap_socket(httpd.socket, certfile='./public_cert.pem',keyfile='./private_key.pem', server_side=True) httpd.serve_forever() </code></pre> <p>as you see in the above script i wrap a simple python HTTP server with ssl certificate and make it a simple HTTPS server</p> <p><a href="https://github.com/ChocolatePadmanaban/DevOps-Notes/tree/main/Servers/HTTPS_Servers" rel="nofollow noreferrer">Detailed example of python simple HTTPS server</a></p> <p>Is there a way to ssl wrap any, already running HTTP server to HTTPS server ?</p> <p>Suppose I have a HTTP server running at port 8080. Can I just ssl wrap it to port 443?</p>
<python><python-3.x><http><ssl><https>
2023-01-02 05:45:21
0
694
Pradeep Padmanaban C
74,979,106
7,347,925
How to identify one column with continuous number and same value of another column?
<p>I have a DataFrame with two columns <code>A</code> and <code>B</code>.</p> <p>I want to create a new column named <code>C</code> to identify the continuous <code>A</code> with the same <code>B</code> value.</p> <p>Here's an example</p> <pre><code>import pandas as pd df = pd.DataFrame({'A':[1,2,3,5,6,10,11,12,13,18], 'B':[1,1,2,2,3,3,3,3,4,4]}) </code></pre> <p>I found a similar <a href="https://stackoverflow.com/q/50723114/7347925">question</a>, but that method only identifies the continuous <code>A</code> regardless of <code>B</code>.</p> <pre><code>df['C'] = df['A'].diff().ne(1).cumsum().sub(1) </code></pre> <p>I have tried to groupby <code>B</code> and apply the function like this:</p> <pre><code>df['C'] = df.groupby('B').apply(lambda x: x['A'].diff().ne(1).cumsum().sub(1)) </code></pre> <p>However, it doesn't work: TypeError: incompatible index of inserted column with frame index.</p> <p>The expected output is</p> <pre><code>A B C 1 1 0 2 1 0 3 2 1 5 2 2 6 3 3 10 3 4 11 3 4 12 3 4 13 4 5 18 4 6 </code></pre>
<python><pandas>
2023-01-02 04:59:32
2
1,039
zxdawn
74,979,003
11,199,298
How to intercept request with mitmproxy before response is streamed?
<p>The request I am trying to intercept and modify is a get request with only one parameter and I try to modify it:</p> <pre><code>from mitmproxy import http def request(flow: http.HTTPFlow) -&gt; None: if flow.request.pretty_url.startswith(BASE_URL): flow.request.url = BASE_URL.replace('abc', 'def') </code></pre> <p>The above shows what I am trying to do in a nutshell. But unfortunately, according to <a href="https://docs.mitmproxy.org/stable/api/events.html#HTTPEvents.request" rel="nofollow noreferrer">docs</a>,</p> <blockquote> <p>this event fires after the entire body has been streamed.</p> </blockquote> <p>In the end, I am not able to modify the request. Am I missing something here? Because if modify requests is not possible, then what is the point of mitmproxy?</p>
<python><request><mitmproxy>
2023-01-02 04:31:09
0
2,211
Tugay
74,978,815
2,882,380
Pivot table not showing row total Python
<p>I tested the following codes and set <code>margins</code> to be true. However, the result only shows sum of each column but not sum of each row. How to do that, please?</p> <pre><code>import pandas as pd test = pd.DataFrame( [['a1', 1, 1, 11], ['a1', 2, 3, 12], ['a1', 3, 5, 13], ['a2', 4, 7, 14], ['a2', 5, 9, 15], ['b', 6, 2, 16], ['c', 7, 4, 17]], columns=['ID1', 'Value1', 'Value2', 'Value3'] ) test.pivot_table(values=['Value1'] + ['Value2'], index=['ID1'], aggfunc='sum', margins=True) Value1 Value2 ID1 a1 6 9 a2 9 16 b 6 2 c 7 4 All 28 31 </code></pre>
<python><pandas><dataframe>
2023-01-02 03:36:42
0
1,231
LaTeXFan
74,978,707
3,466,818
Optimizing a puzzle solver
<p>Over the holidays, I was gifted a game called &quot;Kanoodle Extreme&quot;. The details of the game are somewhat important, but I think I've managed to abstract them away. The 2D variant of the game (which is what I'm focusing on) has a number of pieces that can be flipped/rotated/etc. A given puzzle will give you a certain amount of a hex-grid board to cover, and a certain number of pieces with which to cover it. See the picture below for a quick visual, I think that explains most of it.</p> <p><a href="https://i.sstatic.net/ILz61.png" rel="noreferrer"><img src="https://i.sstatic.net/ILz61.png" alt="enter image description here" /></a></p> <p>(Image attribution: screenshotted from the amazon listing)</p> <p><a href="https://www.educationalinsights.com/amfile/file/download/file/202/product/885/" rel="noreferrer">Here is the full manual for the game</a>, including rules, board configurations, and pieces (manufactorer's site).</p> <p>For convenience, here's the collection of pieces (individual problems may include a subset of these pieces):</p> <p><a href="https://i.sstatic.net/VG44n.png" rel="noreferrer"><img src="https://i.sstatic.net/VG44n.png" alt="image of puzzle pieces" /></a></p> <p>Here is an example of a few board configurations (shown pieces are fixed - the open spaces must be filled with the remaining pieces):</p> <p><a href="https://i.sstatic.net/w42JI.png" rel="noreferrer"><img src="https://i.sstatic.net/w42JI.png" alt="image of board to solve" /></a></p> <p>It's an interesting game, but I decided I didn't just want to solve <em>a</em> puzzle, I wanted to solve <em>all</em> the puzzles. I did this not because it would be easy, but because I thought it would be easy. As it turns out, a brute-force/recursive approach is pretty simple. It's also hilariously inefficient.</p> <p>What have I done so far? Well, I'll happily post code but I've got quite a bit of it. 
Essentially, I started with a few assumptions:</p> <ol> <li><p>It doesn't matter if I place piece 1, then 2, then 3... or 3, then 1, then 2... since every piece must be placed, the ordering doesn't matter (well, matter much: I think placing bigger pieces first might be more efficient since there are fewer options?).</p> </li> <li><p>In the worst case, solving for <em>all</em> possible solutions to puzzle is no slower than solving for a <em>single</em> solution. This is where I'm not confident: I guess on average the single solution could probably be early-exited sooner, but I think in the worst case they're equivalent.</p> </li> <li><p>I don't THINK there's a clean algebraic way to solve this - I don't know if that classifies it as NP-complete or what, but I think some amount of combinatorics must be explored to find solutions. This is my least-confident assumption.</p> </li> </ol> <p>My approach to solving so far:</p> <p>For each piece given, I find all possible locations/orientations of said piece on the game board. I code each piece/location on the board as a bitmap (where each bit represents a location on the board, and 0 means unoccupied while 1 means occupied). Now for each piece I have a collection of some 20-200 ints (depending on the size of the board) that represent technically-valid placements, though not all are optimal. (I think there's some room to trim down unlikely orientations here).</p> <p>I store all these ints in a map, as a list associated with a key that represents the index of the piece in question. Starting at piece index 0, I loop through all possible iterations (keyed by the index of that iteration in the list of all possible iterations for that piece), and loop through all possible iterations of the next piece. 
I take the two ints and bitwise-&amp; them together: if I get a &quot;0&quot; out, it means that there is no overlap between the pieces so they COULD be placed together.</p> <p>I store all the valid combos from piece 0-1 (for instance, piece 0 iteration index 5 is compatible with piece 1 iterations 1-3, 6, 35-39, and 42). How I store that is likely irrelevant, but I currently store it as a nested list: index 5 of the list would contain another list that held [1, 2, 3, 6, 35, 36, 37, 38, 39, 42].</p> <p>I do this for piece 0-1, 0-2, 0-3... 1-2, 1-3... 2-3... every combination of pieces. I then start finding 3-sequence combos: Iterate through the list of valid piece 0-&gt;1 lists, and for each piece 1 index (so 1, 2, 3, 6, 35, 36... from the example above), find the compatibility list from piece 1-&gt;2 for that index. This will give me a sequence of lists. For each item in this sequence, I filter it by taking the intersection with the compatibility list for piece 0-&gt;2 for the selected piece 0 iteration.</p> <p>This gives me a collection of &quot;3-lists&quot;. I find all 3-lists ((0, 1, 2), (1, 2, 3), (2, 3, 4)), and repeat the process of filtering to get 4-lists: ((0, 1, 2, 3), (1, 2, 3 4)). Repeat to get 5-lists. If I have only 5 pieces, the 5 list represents all solutions. If I have more than n pieces, repeat until I have an n-list.</p> <p>This approach DOES work, and I don't THINK I'm duplicating many (if any) calculations, but the combinatorics get so large that it takes too long or - when I have 8 or 9 pieces - takes up 30+ GB of ram, then crashes.</p> <p>The ultimate question: how can I solve this problem (searching for ALL solutions for a given puzzle) more efficiently?</p> <p>Sub-questions: Optimizations to my algorithmic approach? Optimizations to the calculations performed (I used ints and bitwise operations and then set intersections because I thought they'd be fast)? 
Rejections of my assumptions that might result in faster solutions?</p> <p>Thanks!</p>
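One alternative to materializing all the k-lists is a depth-first search that keeps only the current partial placement on the stack, pruning overlaps with the same bitwise-&amp; test; memory stays proportional to the number of pieces instead of the number of combinations. A toy sketch (the 4-cell board and the candidate bitmaps are made up for illustration):

```python
FULL = 0b1111  # toy 4-cell board: a solution must cover every cell

def solve(placements, occupied=0, chosen=()):
    """Yield every non-overlapping combination (one placement per piece)
    that covers the whole board."""
    if not placements:
        if occupied == FULL:
            yield chosen
        return
    head, rest = placements[0], placements[1:]
    for mask in head:
        if mask & occupied == 0:  # same bitwise-& overlap test as above
            yield from solve(rest, occupied | mask, chosen + (mask,))

# two pieces, each with a few candidate placements (hypothetical bitmaps)
pieces = [[0b0011, 0b1100], [0b1100, 0b0011, 0b0110]]
print(list(solve(pieces)))  # -> [(3, 12), (12, 3)]
```

A further pruning step (not shown) is to always branch on the lowest uncovered cell and try only placements covering it, which is essentially Knuth's exact-cover / dancing-links idea.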
<python><algorithm><recursion><optimization><combinatorics>
2023-01-02 02:59:23
3
706
Helpful
74,978,585
9,855,588
Is this a bad approach to creating a global logger, and how can I improve it?
<p>Given my code, I would create a root logger instance <code>logger = GlobalLogger(level=10).logger</code> in <code>__init__.py</code> and include it in submodules where I need logging. Is there a better way to create this class instead of calling the attribute <code>.logger</code> to get the root logging class, or a better design approach overall?</p> <pre><code>import typing import logging class GlobalLogger: MINIMUM_GLOBAL_LEVEL = logging.DEBUG GLOBAL_HANDLER = logging.StreamHandler() LOG_FORMAT = &quot;[%(asctime)s] - %(levelname)s - [%(name)s.%(funcName)s:%(lineno)d] - %(message)s&quot; LOG_DATETIME_FORMAT = &quot;%Y-%m-%d %H:%M:%S&quot; def __init__(self, level: typing.Union[int, str] = MINIMUM_GLOBAL_LEVEL): self.level = level self.logger = self._get_logger() self.log_format = self._log_formatter() self.GLOBAL_HANDLER.setFormatter(self.log_format) def _get_logger(self): logger = logging.getLogger(__name__) logger.setLevel(self.level) logger.addHandler(self.GLOBAL_HANDLER) return logger def _log_formatter(self): return logging.Formatter(fmt=self.LOG_FORMAT, datefmt=self.LOG_DATETIME_FORMAT) </code></pre>
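For comparison, a common module-level alternative (a sketch, reusing the same format strings): a small factory function that configures the logger once and guards against duplicate handlers, so callers just do `logger = get_logger(__name__)` instead of reaching for a `.logger` attribute.

```python
import logging

LOG_FORMAT = "[%(asctime)s] - %(levelname)s - [%(name)s.%(funcName)s:%(lineno)d] - %(message)s"
LOG_DATEFMT = "%Y-%m-%d %H:%M:%S"

def get_logger(name: str, level: int = logging.DEBUG) -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid stacking handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(fmt=LOG_FORMAT, datefmt=LOG_DATEFMT))
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger

log = get_logger(__name__)
log.debug("configured once, reusable everywhere")
```

Because `logging.getLogger` caches by name, every module that calls `get_logger("pkg")` gets the same already-configured instance.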
<python><python-3.x>
2023-01-02 02:22:41
1
3,221
dataviews
74,978,418
1,229,531
How to include xelatex in a mamba environment?
<p>Running a particular Python program within my environment tries to produce PDF output, which requires &quot;xelatex&quot;. I tried adding &quot;xelatex&quot; to the environment creation command:</p> <pre><code>mamba create -n notebook_rapidsai_env ... python jupyter xelatex </code></pre> <p>but this produced the following error:</p> <pre><code>Encountered problems while solving: - nothing provides requested xelatex </code></pre> <p>I have also tried &quot;latex&quot;. There does not appear to be anything in the Mamba documentation about adding xelatex support. How is that done?</p>
<python><conda><xelatex><mamba>
2023-01-02 01:31:31
1
599
Mark Bower
74,978,154
10,788,239
Why does adding multiprocessing prevent python from finding my compiled c program?
<p>I am currently looking to speed up my code using the power of multiprocessing. However I am encountering some issues when it comes to calling the compiled code from python, as it seems that the compiled file disappears from the code's view when it includes any form of multiprocessing.</p> <p>For instance, with the following test code:</p> <pre><code>#include &lt;omp.h&gt; int main() { int thread_id; #pragma omp parallel { thread_id = omp_get_thread_num(); } return 0; } </code></pre> <p>Here, I compile the program, then turn it into a .so file using the command</p> <pre><code>gcc -fopenmp -o theories/test.so -shared -fPIC -O2 test.c </code></pre> <p>I then attempt to run the said code from test.py:</p> <pre><code>from ctypes import CDLL import os absolute_path = os.path.dirname(os.path.abspath(__file__)) # imports the c libraries test_lib_path = absolute_path + '/theories/test.so' test = CDLL(test_lib_path) test.main() print('complete') </code></pre> <p>I get the following error:</p> <pre><code>FileNotFoundError: Could not find module 'C:\[my path]\theories\test.so' (or one of its dependencies). Try using the full path with constructor syntax. 
</code></pre> <p>However, when I comment out the multiprocessing element to get the following code:</p> <pre><code>#include &lt;omp.h&gt; int main() { int thread_id; /* #pragma omp parallel { thread_id = omp_get_thread_num(); } */ return 0; } </code></pre> <p>I then have a perfect execution with the python program printing out &quot;complete&quot; at the end.</p> <p>I'm wondering how this has come to happen, and how the code can seemingly be compiled fine but then throw problems only once it's called from python (also I have checked and the file is in fact created).</p> <p>UPDATES:</p> <ol> <li>I have now checked that I have libgomp-1.dll installed</li> <li>I have uninstalled and reinstalled MinGW, with no change happening.</li> <li>I have installed a different, 64 bit version of gcc and, using a different (64 bit python 3.10) version of python have reproduced the same error. This also has libgomp-1.dll.</li> </ol>
<python><c><multiprocessing><ctypes><file-not-found>
2023-01-02 00:09:03
2
438
Arkleseisure
74,978,130
3,313,834
pytest a script using stdin from argparse
<p>I have an argparse script using stdin:</p> <pre class="lang-py prettyprint-override"><code>$ tree . ├── go.py └── mypytest.py $ $ cat go.py # echo '[{&quot;k&quot;: [&quot;1&quot;, &quot;2&quot;]}]' | python go.py - import argparse, json parser = argparse.ArgumentParser() parser.add_argument('json', nargs='?', type=argparse.FileType('r')) args = parser.parse_args() with args.json as f: obj = json.load(f) print(obj) $ </code></pre> <p>The script is working well:</p> <pre><code>$ echo '[{&quot;k&quot;: [&quot;1&quot;, &quot;2&quot;]}]' | python go.py - [{'k': ['1', '2']}] $ </code></pre> <p>I'm using pytest and <a href="https://pypi.org/project/pytest-console-scripts/" rel="nofollow noreferrer">https://pypi.org/project/pytest-console-scripts/</a> to test scripts:</p> <pre class="lang-py prettyprint-override"><code>$ cat mypytest.py def test_stdin(script_runner, monkeypatch): import io monkeypatch.setattr('sys.stdin', io.StringIO('[{&quot;k&quot;: [&quot;1&quot;, &quot;2&quot;]}]')) ret = script_runner.run(&quot;go.py&quot;, &quot;-&quot;, print_result=True) assert ret.success assert &quot;k&quot; in ret.stdout assert ret.stderr == &quot;&quot; $ </code></pre> <p>pytest does not succed to forward from stdin the stream to the pipe:</p> <pre><code>$ pytest mypytest.py ================================================ test session starts ================================================ platform linux -- Python 3.10.4, pytest-7.2.0, pluggy-1.0.0 rootdir: /tmp/aaaa plugins: mock-3.10.0, cov-4.0.0, console-scripts-1.3.1 collected 1 item mypytest.py F [100%] ========================================= FAILURES =============================== ____________________________________________ test_stdin[inprocess] ____________________________________________ script_runner = &lt;ScriptRunner inprocess&gt;, monkeypatch = &lt;_pytest.monkeypatch.MonkeyPatch object at 0x7fc74ba75a50&gt; def test_stdin(script_runner, monkeypatch): import io monkeypatch.setattr('sys.stdin', 
io.StringIO('[{&quot;k&quot;: [&quot;1&quot;, &quot;2&quot;]}]')) ret = script_runner.run(&quot;go.py&quot;, &quot;-&quot;, print_result=True) &gt; assert ret.success E assert False E + where False = &lt;pytest_console_scripts.RunResult object at 0x7fc74ba75c90&gt;.success mypytest.py:5: AssertionError ------------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------------- # Running console script: go.py - # Script return code: 1 # Script stdout: # Script stderr: Traceback (most recent call last): File &quot;/home/luis/.pyenv/versions/3.10.4/envs/doc/lib/python3.10/site-packages/pytest_console_scripts.py&quot;, line 196, in exec_script exec(compiled, {'__name__': '__main__'}) File &quot;&lt;_io.TextIOWrapper name='/tmp/aaaa/go.py' mode='rt' encoding='utf-8'&gt;&quot;, line 7, in &lt;module&gt; File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/__init__.py&quot;, line 293, in load return loads(fp.read(), File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/__init__.py&quot;, line 346, in loads return _default_decoder.decode(s) File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/decoder.py&quot;, line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/decoder.py&quot;, line 355, in raw_decode raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ====================================== short test summary info ====================================== FAILED mypytest.py::test_stdin[inprocess] - assert False ======================================= 1 failed in 0.15s ======================================= $ </code></pre> <p>the cmd bellow:</p> <pre><code>$ python -m pytest mypytest.py </code></pre> <p>give the same result</p> <h2>more debug with 
<code>-s -v</code></h2> <pre><code>$ python -m pytest mypytest.py -v -s ================================= test session starts ================================== platform linux -- Python 3.10.4, pytest-7.2.0, pluggy-1.0.0 -- /home/luis/.pyenv/versions/doc/bin/python cachedir: .pytest_cache rootdir: /tmp/aaaa plugins: mock-3.10.0, cov-4.0.0, console-scripts-1.3.1 collected 1 item mypytest.py::test_stdin[inprocess] # Running console script: go.py - # Script return code: 1 # Script stdout: # Script stderr: Traceback (most recent call last): File &quot;/home/luis/.pyenv/versions/3.10.4/envs/doc/lib/python3.10/site-packages/pytest_console_scripts.py&quot;, line 196, in exec_script exec(compiled, {'__name__': '__main__'}) File &quot;&lt;_io.TextIOWrapper name='/tmp/aaaa/go.py' mode='rt' encoding='utf-8'&gt;&quot;, line 7, in &lt;module&gt; File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/__init__.py&quot;, line 293, in load return loads(fp.read(), File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/__init__.py&quot;, line 346, in loads return _default_decoder.decode(s) File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/decoder.py&quot;, line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File &quot;/home/luis/.pyenv/versions/3.10.4/lib/python3.10/json/decoder.py&quot;, line 355, in raw_decode raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) FAILED ======================================= FAILURES ======================================= ________________________________ test_stdin[inprocess] _________________________________ script_runner = &lt;ScriptRunner inprocess&gt; monkeypatch = &lt;_pytest.monkeypatch.MonkeyPatch object at 0x7f2b4307d630&gt; def test_stdin(script_runner, monkeypatch): import io monkeypatch.setattr('sys.stdin', io.StringIO('[{&quot;k&quot;: [&quot;1&quot;, &quot;2&quot;]}]')) ret = 
script_runner.run(&quot;go.py&quot;, &quot;-&quot;, print_result=True) &gt; assert ret.success E assert False E + where False = &lt;pytest_console_scripts.RunResult object at 0x7f2b430cf610&gt;.success mypytest.py:5: AssertionError =============================== short test summary info ================================ FAILED mypytest.py::test_stdin[inprocess] - assert False ================================== 1 failed in 0.14s =================================== $ </code></pre>
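One way to sidestep the in-process stdin plumbing entirely is to run the script as a real subprocess and feed stdin through `input=`. A self-contained sketch (it writes the script under test to a temp file; in a real test you would point at `go.py` instead):

```python
import subprocess
import sys
import tempfile
import textwrap

# The script under test, written to a temp file so this sketch is
# self-contained (it mirrors go.py from above).
script = textwrap.dedent('''
    import argparse, json
    parser = argparse.ArgumentParser()
    parser.add_argument('json', nargs='?', type=argparse.FileType('r'))
    args = parser.parse_args()
    with args.json as f:
        obj = json.load(f)
    print(obj)
''')

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(script)
    path = f.name

# Feed stdin through input= instead of monkeypatching sys.stdin
ret = subprocess.run([sys.executable, path, '-'],
                     input='[{"k": ["1", "2"]}]',
                     capture_output=True, text=True)
print(ret.stdout)  # -> [{'k': ['1', '2']}]
```

This trades in-process speed for fidelity: argparse's `FileType('r')` really does read the child process's stdin.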
<python><pytest><pytest-mock>
2023-01-02 00:01:46
0
7,917
user3313834
74,978,089
13,677,853
How to tick-level backtest the spot grid trading strategy?
<p>Is there a Python library with which I could tick-level backtest the famous Spot Grid Trading crypto strategy? I already did the tick data download part from <a href="https://data.binance.vision/?prefix=data/spot/daily/trades/" rel="nofollow noreferrer">data.binance.vision</a>, although in my attempt I have used <a href="https://github.com/kernc/backtesting.py" rel="nofollow noreferrer">backtesting.py</a> that seems to not be suitable for tick-level backtests, as discussed <a href="https://github.com/kernc/backtesting.py/discussions/566" rel="nofollow noreferrer">here</a>. The grid strategy is pretty straightforward, so I assume it should be fairly easy to backtest on tick-level. I assume there are others who have already achieved this, I just can't find it, so that's why I'm asking the question.</p> <p>The strategy is described <a href="https://www.binance.com/en/support/faq/d5f441e8ab544a5b98241e00efb3a4ab" rel="nofollow noreferrer">here</a> and I also found a few open source references on GitHub:</p> <ul> <li><a href="https://github.com/xzmeng/crypto-grid-backtest/blob/master/grid/backtest.py" rel="nofollow noreferrer">https://github.com/xzmeng/crypto-grid-backtest/blob/master/grid/backtest.py</a></li> <li><a href="https://github.com/webclinic017/aibitgo/blob/master/strategy/GridStrategyPercent.py" rel="nofollow noreferrer">https://github.com/webclinic017/aibitgo/blob/master/strategy/GridStrategyPercent.py</a></li> <li><a href="https://github.com/ulbdir/trading_bot/blob/main/GridStrategy.py" rel="nofollow noreferrer">https://github.com/ulbdir/trading_bot/blob/main/GridStrategy.py</a></li> </ul> <h2>What I have so far</h2> <pre class="lang-py prettyprint-override"><code>from backtesting import Backtest from grid.grid import GridStrategy from tick_data.data import Kind, load_data if __name__ == &quot;__main__&quot;: # 1s interval pickle df = load_data(symbol=&quot;ETHUSDT&quot;, start=&quot;2022-12-16&quot;, end=&quot;2022-12-16&quot;, kind=Kind.SPOT, 
tz=&quot;UTC&quot;) df = df.resample(&quot;1s&quot;).first().dropna() print(f'{df}') # Backtest bt = Backtest(df, GridStrategy, cash=10_000, commission=.001, exclusive_orders=True) stats = bt.run() bt.plot() </code></pre> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd from enum import Enum from backtesting import Strategy class GridType(Enum): ARITHMETIC = 1 GEOMETRIC = 2 class GridStrategy(Strategy): lower_limit = 2000 upper_limit = 10000 grid_count = 4 grid_type = GridType.ARITHMETIC grids = [] current_position = None def init(self): self.grids = self.get_grids(self.lower_limit, self.upper_limit, self.grid_count, self.grid_type) print(f'{self.grids}') def next(self): pass @staticmethod def get_grids(lower_limit, upper_limit, grid_count, grid_type=GridType.ARITHMETIC): if grid_type == GridType.ARITHMETIC: grids = np.linspace(lower_limit, upper_limit, grid_count + 1) elif grid_type == GridType.GEOMETRIC: grids = np.geomspace(lower_limit, upper_limit, grid_count + 1) else: print(&quot;not right range type&quot;) return grids </code></pre> <pre class="lang-py prettyprint-override"><code>import io import logging from concurrent.futures import ThreadPoolExecutor from datetime import date from enum import Enum from pathlib import Path from typing import Optional, Union from zipfile import ZipFile from datetime import datetime import pandas as pd import httpx from pandas import DataFrame DATA_DIR = Path.cwd().joinpath(&quot;data&quot;) def create_logger(): logger_ = logging.getLogger(__name__) logger_.setLevel(logging.INFO) formatter = logging.Formatter( &quot;%(asctime)s - %(name)s - %(levelname)s - %(message)s&quot; ) handler = logging.StreamHandler() handler.setFormatter(formatter) logger_.addHandler(handler) return logger_ logger = create_logger() class Kind(Enum): SPOT = &quot;spot&quot; FUTURES_UM = &quot;futures/um&quot; FUTURES_CM = &quot;futures/cm&quot; class DataLoader: def __init__( self, kind: Kind, symbol: str, start: 
Union[date, str], end: Union[date, str], tz: str = None, ) -&gt; None: self.kind = kind self.symbol = symbol self.start = start self.end = end self.tz = tz def load_data(self) -&gt; pd.DataFrame: with ThreadPoolExecutor(max_workers=20) as executor: dfs = list( executor.map(self.load_daily_data, pd.date_range(self.start, self.end)) ) df = pd.concat(dfs) if self.tz: df.index = df.index.tz_localize(&quot;utc&quot;).tz_convert(self.tz) return df def load_daily_data(self, dt: date) -&gt; Optional[pd.DataFrame]: try: return self.load_local_daily_data(dt) except FileNotFoundError: return self.download_daily_data(dt) def load_local_daily_data(self, dt: date) -&gt; pd.DataFrame: pickle_path = self.get_daily_pickle_path(dt) return pd.read_pickle(pickle_path) def download_daily_data(self, dt: date) -&gt; Optional[pd.DataFrame]: logger.info(f'Downloading {self.symbol} {datetime.strftime(dt, &quot;%Y-%m-%d&quot;)}') url = f'https://data.binance.vision/data/{self.kind.value}/daily/trades/{self.symbol}/' \ f'{self.symbol}-trades-{datetime.strftime(dt, &quot;%Y-%m-%d&quot;)}.zip' resp = httpx.get(url) resp.raise_for_status() with ZipFile(io.BytesIO(resp.content)) as zf: with zf.open(zf.namelist()[0]) as f: df = pd.read_csv(f, usecols=[1, 4], names=[&quot;price&quot;, &quot;datetime&quot;]) df[&quot;datetime&quot;] = pd.to_datetime(df.datetime, unit=&quot;ms&quot;) df.set_index(&quot;datetime&quot;, inplace=True) # df = df.resample(&quot;1s&quot;).first().dropna() # df.price.resample(&quot;1s&quot;).agg({ # &quot;Open&quot;: &quot;first&quot;, # &quot;High&quot;: &quot;max&quot;, # &quot;Low&quot;: &quot;min&quot;, # &quot;Close&quot;: &quot;last&quot; # }) pkl_path = self.get_daily_pickle_path(dt) Path(pkl_path).parent.mkdir(parents=True, exist_ok=True) df.to_pickle(pkl_path) return df def get_daily_pickle_path(self, dt: date) -&gt; Path: return DATA_DIR.joinpath(self.kind.value, f&quot;{self.symbol}-{dt.year}-{dt.month}-{dt.day}.pkl&quot;) def load_data( symbol: str, start: 
Union[date, str], end: Union[date, str] = date.today(), kind: Union[Kind, str] = Kind.SPOT, tz: str = &quot;UTC&quot;, ) -&gt; DataFrame: if isinstance(kind, str): kind = {&quot;spot&quot;: Kind.SPOT, &quot;cm&quot;: Kind.FUTURES_CM, &quot;um&quot;: Kind.FUTURES_UM}[kind] return DataLoader(kind, symbol, start, end, tz).load_data() </code></pre>
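As a starting point for a tick-level engine without backtesting.py, the grid logic itself can be simulated directly over the tick stream. A toy sketch (the prices, grid levels, and fee are made up; a real version would track inventory, order sizing, and partial fills):

```python
def grid_backtest(prices, grids, qty=1.0, fee=0.001):
    """Buy when price crosses a grid level downward, sell that lot when
    the same level is crossed back upward; return realized PnL."""
    lots = {}        # grid level -> entry price of the open lot
    pnl = 0.0
    prev = prices[0]
    for p in prices[1:]:
        for g in grids:
            if prev > g >= p and g not in lots:    # crossed down -> buy
                lots[g] = p
                pnl -= p * qty * fee
            elif prev < g <= p and g in lots:      # crossed up -> sell
                pnl += (p - lots.pop(g)) * qty - p * qty * fee
        prev = p
    return pnl

print(round(grid_backtest([100, 95, 105], grids=[98]), 6))  # -> 9.8
```

Iterating over raw ticks like this avoids backtesting.py's bar-based assumptions entirely, at the cost of writing the bookkeeping yourself.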
<python><pandas><back-testing>
2023-01-01 23:49:50
0
6,607
nop
74,978,038
10,200,497
How to create orders for multiple api keys asynchronously in binance?
<p>I want to create multiple orders for multiple users asynchronously in binance api using python. I know how to create a single order for a pair of api key and api secret.</p> <pre><code>from binance.client import Client as BinanceClient from binance.enums import * binance_api_key = 'api_key' binance_api_secret = 'api_secret' binance_client = BinanceClient(binance_api_key, binance_api_secret) single_order = client.create_test_order( symbol='BTCUSDT', side=SIDE_SELL, type=ORDER_TYPE_LIMIT, timeInForce=TIME_IN_FORCE_GTC, quantity=1, price='16000' ) </code></pre> <p>I can just copy this or use a <code>for</code> loop to repeat the process. But the problem is that this code does not create orders asynchronously. That is, it starts from the first api key and api secret and then executes the second and third one. This will create the orders one after another and I lose time because each trade takes approximately 0.5 seconds. I have tried this <a href="https://stackoverflow.com/a/73703455/10200497">answer</a> but honestly I couldn't figure out how to use it to solve my problem.</p>
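The usual asyncio pattern is to wrap each order in a coroutine and run them all with `asyncio.gather`. The sketch below stubs the network call with `asyncio.sleep` so it runs anywhere; in a real version `place_order` would await an async exchange client's order call instead (the client name and call are assumptions, not shown here):

```python
import asyncio
import time

async def place_order(api_key, api_secret):
    # Stand-in for the awaitable exchange call; swap in your async
    # client's order method (the sleep imitates network latency).
    await asyncio.sleep(0.2)
    return f"order placed with {api_key}"

async def main(credentials):
    # All coroutines are started together, so total wall time is roughly
    # one round trip instead of one round trip per account.
    return await asyncio.gather(*(place_order(k, s) for k, s in credentials))

start = time.perf_counter()
results = asyncio.run(main([("key1", "s1"), ("key2", "s2"), ("key3", "s3")]))
elapsed = time.perf_counter() - start
print(len(results), elapsed < 0.45)  # -> 3 True
```

With three 0.2 s "round trips" the whole batch finishes in about 0.2 s rather than 0.6 s, which is the speedup being asked about.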
<python><binance><binance-api-client>
2023-01-01 23:35:27
1
2,679
AmirX
74,977,869
7,599,215
Custom built opencv 4.7.0 python import problem
<p>I built opencv 4.7.0 from source</p> <p>I have a folder <code>cv2</code> with <code>cv2.so</code> in it</p> <p>If I call <code>import cv2</code> within the folder -- all ok</p> <p>If I call it from outside the folder with <code>from cv2 import cv2</code> or similar, nothing works and I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ImportError: Bindings generation error. Submodule name should always start with a parent module name. Parent name: cv2.cv2. Submodule name: cv2 </code></pre> <p>Everything worked with opencv 4.5.5</p> <p>I've seen similar problems with opencv-python: <a href="https://github.com/opencv/opencv-python/issues/676" rel="nofollow noreferrer">https://github.com/opencv/opencv-python/issues/676</a></p> <p>Does anyone know how to solve this import issue?</p>
<python><opencv><python-import>
2023-01-01 22:54:54
0
2,563
banderlog013
74,977,786
19,321,677
How to save model with cloudpickle to databricks DBFS folder and load it?
<p>I built a model and my goal is to save the model as a pickle and load it later for scoring. Right now, I am using this code:</p> <pre><code> #save model as pickle import cloudpickle pickled = cloudpickle.dumps(final_model) #load model cloudpickle.loads(pickled) Output: &lt;econml.dml.causal_forest.CausalForestDML at 0x7f388e70c373&gt; </code></pre> <p>My worry is that with this approach the model will be saved only in a session-variable &quot;pickled&quot; of the notebook in Databricks.</p> <p>I want the model to be stored in a DBFS storage though, so I can pull this model at any time (even if my notebook session expires) to make it more robust.</p> <p>How would I do this?</p>
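A sketch of the file-based variant: on Databricks, paths under `/dbfs/...` are backed by DBFS via the FUSE mount, so dumping the pickle bytes to a file there persists it across notebook sessions. The example below uses a temp directory and a stand-in model so it runs anywhere; swap the path for something like `/dbfs/models/model.pkl` (the exact DBFS folder is an assumption).

```python
import os
import tempfile
import cloudpickle

final_model = lambda x: x * 2   # stand-in for the trained model object

# On Databricks, use a path under /dbfs/ so the bytes land in DBFS storage;
# a temp dir is used here only so the sketch is runnable anywhere.
path = os.path.join(tempfile.gettempdir(), "model.pkl")

with open(path, "wb") as f:
    cloudpickle.dump(final_model, f)   # bytes go to the file, not a session variable

with open(path, "rb") as f:
    loaded = cloudpickle.load(f)

print(loaded(3))  # -> 6
```

`cloudpickle.dump`/`cloudpickle.load` mirror `dumps`/`loads` but write to a file object, which is what makes the model survive the notebook session.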
<python><machine-learning><model><databricks><pickle>
2023-01-01 22:33:31
1
365
titutubs
74,977,549
3,142,472
Abstract Data Type definition in Python
<p>Consider the following <a href="https://wiki.haskell.org/Abstract_data_type" rel="nofollow noreferrer">Abstract Data Type</a> (using Haskell syntax):</p> <pre class="lang-hs prettyprint-override"><code>data Expr = Literal String | Symbol String | And [Expr] | Or [Expr] </code></pre> <p>In Python, one can make use of dataclasses and inheritance to obtain a similar type construction:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Expr: # &quot;union&quot; foo: str | None = None bar: list[&quot;Expr&quot;] | None = None @dataclass class Literal(Expr): pass @dataclass class Symbol(Expr): pass @dataclass class And(Expr): pass @dataclass class Or(Expr): pass </code></pre> <p>As an interesting exercise, I am wondering whether it is possible to obtain a similar effect, but with a different definition which avoids duplication. I came up with the following theoretical notation:</p> <pre class="lang-py prettyprint-override"><code># simulate Haskell's # data Expr = Literal String | Symbol String | And [Expr] | Or [Expr] # the following will bring names into scope ( make_adt( type_cons=&quot;Expr&quot;, possible_fields=[ (&quot;foo&quot;, str), (&quot;bar&quot;, list[&quot;Expr&quot;]), ], ) .add_data_cons(&quot;Literal&quot;, fields=[&quot;foo&quot;]) .add_data_cons(&quot;Symbol&quot;, fields=[&quot;foo&quot;]) .add_data_cons(&quot;And&quot;, fields=[&quot;bar&quot;]) .add_data_cons(&quot;Or&quot;, fields=[&quot;bar&quot;]) ) </code></pre> <p>Here I am saying that there's a base type (the <code>Expr</code> type constructor) with 4 data constructors: <code>Literal</code>, <code>Symbol</code>, <code>And</code>, <code>Or</code>.</p> <p>Each data constructor takes an additional argument (either <code>str</code> or <code>list[Expr]</code>), which is referred in the <code>fields</code> argument above (must be a subset of the <code>possible_fields</code>).</p> <p>So:</p> <ul> <li><code>Literal(&quot;foo&quot;)</code>: sets the <code>foo</code> field for the 
instance</li> <li><code>And([Literal(&quot;foo&quot;), Symbol(&quot;baz&quot;)])</code>: sets the <code>bar</code> field for the instance</li> </ul> <p>The constraint here, as opposed to plain inheritance, is that <code>Literal</code> and <code>Symbol</code> don't have the <code>bar</code> field, and similarly, <code>And</code>, <code>Or</code>, don't have the <code>foo</code> field. Or, to relax this a bit, we at least have to enforce that the only non-null attributes are the ones defined in the <code>fields</code> list above.</p> <p>My questions are:</p> <ul> <li>Can something like this be implemented? <ul> <li>I'm thinking along the lines of <a href="https://www.attrs.org/en/stable/" rel="nofollow noreferrer">attrs</a> and dynamic <code>class</code> creation using <code>type(...)</code>.</li> </ul> </li> <li>How brittle would it be?</li> </ul> <hr /> <p>P.S. I know it does not necessarily <em>make sense</em> to over-engineer this, especially in a dynamically typed language like Python, but I consider it to be an interesting exercise nonetheless.</p>
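As a proof of concept, the theoretical notation above can be approximated with `dataclasses.make_dataclass` plus dynamic `type(...)` creation. A sketch (the fluent builder mirrors the proposal; the `possible_fields` subset check and error handling are omitted, and the `build()` method returning a name dict is my own addition):

```python
from dataclasses import make_dataclass

def make_adt(type_cons, possible_fields):
    """Create a base class plus a fluent builder for data constructors,
    each restricted to its declared subset of possible_fields."""
    base = type(type_cons, (), {})
    namespace = {type_cons: base}

    class Builder:
        def add_data_cons(self, name, fields):
            spec = [(f, t) for f, t in possible_fields if f in fields]
            namespace[name] = make_dataclass(name, spec, bases=(base,))
            return self

        def build(self):
            return namespace

    return Builder()

ns = (make_adt("Expr", [("foo", str), ("bar", list)])
      .add_data_cons("Literal", fields=["foo"])
      .add_data_cons("Symbol", fields=["foo"])
      .add_data_cons("And", fields=["bar"])
      .add_data_cons("Or", fields=["bar"])
      .build())

lit = ns["Literal"]("foo")
assert isinstance(lit, ns["Expr"])   # all constructors share one base type
assert not hasattr(lit, "bar")       # Literal really has no bar field
```

Because each constructor only declares its own fields, the "no null placeholder attributes" constraint falls out for free, unlike the inheritance version.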
<python><metaprogramming><language-design><abstract-data-type>
2023-01-01 21:42:01
2
1,315
Alexandru Dinu
74,977,357
16,748,931
ModuleNotFoundError: No module named 'pygame_menu'
<p>I'm trying to make a game using <code>pygame</code> and <code>pygame_menu</code>. When I try to import <code>pygame_menu</code> I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;D:\Game\main.py&quot;, line 7, in &lt;module&gt; import pygame_menu ModuleNotFoundError: No module named 'pygame_menu' </code></pre> <p>I ran this command to install the module: <code>pip install pygame-menu -U</code></p> <p>Here's how I import the module: <code>import pygame_menu</code></p> <p>I have no idea how to fix this. If you know how to fix this or a better module for GUI elements in pygame please tell me.</p>
<python><pip><pygame-menu>
2023-01-01 20:57:31
2
570
ProGamer2711
74,977,340
249,341
Matching everything except for a character followed by a newline
<p>This seems like a simple match, but I'm unable to figure out how to match all text that starts with a known block of text and ends with a semicolon + newline. What I have right now mostly works:</p> <pre><code>pattern = r'''[ ]+(value \w+\n)([^;]+)''' </code></pre> <p>For an example section of text that allows me to parse:</p> <pre><code> value Y1N5NALC 1 = 'Yes' 5 = 'No' 7 = 'Not ascertained' ; value AGESCRN 15 = '15 years' 16 = '16 years'; </code></pre> <p>However, if any of the key/value pairs contain a semicolon <em>in the string</em> the match fails early since the regex is looking for any semicolon. An example:</p> <pre><code> value Y1N5NALC 1 = 'Yes' 5 = 'No;Maybe' 7 = 'Not ascertained' ; </code></pre> <p>What I'd like to do is end the match by looking for a <code>semicolon</code> + <code>Optional(space or tab)</code> + <code>newline</code>. Using <code>([^;\n]+)</code> fails since the newline gets matched by the negated character class.</p>
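One way to anchor the terminator (a sketch, assuming every block really ends with `;` + optional spaces/tabs + a newline): match the body lazily under `re.DOTALL` and stop only at `;[ \t]*\n`, so semicolons inside quoted values no longer end the block early.

```python
import re

text = """    value Y1N5NALC
        1 = 'Yes'
        5 = 'No;Maybe'
        7 = 'Not ascertained'
        ;
    value AGESCRN
        15 = '15 years'
        16 = '16 years';
"""

# Lazy body match; the terminator is ';' + optional space/tab + newline.
pattern = r"[ ]+(value \w+\n)(.*?);[ \t]*\n"
blocks = re.findall(pattern, text, flags=re.DOTALL)
print(len(blocks))  # -> 2
```

The lazy `(.*?)` stops at the *first* terminator, which is what keeps `'No;Maybe'` inside the first block instead of ending it.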
<python><regex>
2023-01-01 20:53:45
1
88,818
Hooked
74,977,313
14,670,370
How to use pd.json_normalize on a list of dictionaries in Pandas?
<p>I am trying to use the pd.json_normalize() function from the Pandas library on the following data:</p> <pre><code> data = { &quot;examples&quot;: [ { &quot;website&quot;: &quot;info&quot;, &quot;df&quot;: [ { &quot;Question&quot;: &quot;What?&quot;, &quot;Answers&quot;: [] }, { &quot;Question&quot;: &quot;how?&quot;, &quot;Answers&quot;: [] }, { &quot;Question&quot;: &quot;Why?&quot;, &quot;Answers&quot;: [] } ], }, { &quot;website&quot;: &quot;info2&quot;, &quot;df&quot;: [ { &quot;Question&quot;: &quot;What?&quot;, &quot;Answers&quot;: [&quot;example answer1&quot;] }, { &quot;Question&quot;: &quot;how?&quot;, &quot;Answers&quot;: [&quot;example answer1&quot;] }, { &quot;Question&quot;: &quot;Why?&quot;, &quot;Answers&quot;: [&quot;example answer1&quot;] } ] } ] } </code></pre> <p>I am trying to use the following function to filter the data:</p> <pre><code>def filter(data, name): resp = pd.concat([pd.DataFrame(data), pd.json_normalize(data['examples'])], axis=1) </code></pre> <p>The error is occurring on the line pd.json_normalize(data['examples'])]. I believe this is because data['examples'] is a list of dictionaries, rather than a single dictionary.</p> <p>How can I use pd.json_normalize() on this list of dictionaries in my function?</p>
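For reference, `pd.json_normalize` accepts a list of dicts directly; the nested `df` lists can be expanded with `record_path`, carrying the parent's `website` along via `meta`. A sketch with the data above:

```python
import pandas as pd

data = {"examples": [
    {"website": "info",
     "df": [{"Question": "What?", "Answers": []},
            {"Question": "how?", "Answers": []},
            {"Question": "Why?", "Answers": []}]},
    {"website": "info2",
     "df": [{"Question": "What?", "Answers": ["example answer1"]},
            {"Question": "how?", "Answers": ["example answer1"]},
            {"Question": "Why?", "Answers": ["example answer1"]}]},
]}

# one row per question, with the parent's website repeated on each row
flat = pd.json_normalize(data["examples"], record_path="df", meta="website")
print(flat.shape)  # -> (6, 3)
```

This avoids the `pd.concat` step entirely: the list-of-dicts shape is exactly what `json_normalize` expects as its first argument.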
<python><json><pandas><dataframe>
2023-01-01 20:48:21
1
354
Serkan Gün
74,977,307
4,031,604
How to intercept running python script and execute command within its context?
<p>Here is the situation (as an example) - I ran a ML learning python script (which I wrote) for a long time but I didn't add functionality to save its weights.</p> <p>My question in this situation is if it's possible to somehow intercept the running python interpreter and execute a command within the program context.</p> <p>For example since I have <code>model_seq</code> global variable inside the running program - I would like to execute:</p> <pre><code>model_seq.save_weights(&quot;./model_weights&quot;) </code></pre> <p>Inside the process.</p> <p>I've heard this is somewhat possible with gdb.</p> <p>(Personally I know this can be done for sure with a C program and gdb - but since Python is compiled and interpreted the steps are a bit unclear for me (and I'm not sure if I would actually need a special python3 build or the default ubuntu one I have running will work))</p>
<python><debugging><cpython><intercept>
2023-01-01 20:47:31
0
3,998
AnArrayOfFunctions
74,977,186
12,140,406
.loc into a multiindex pandas df on a non-zero level, preserving list order
<p>say I have a multi index pandas data frame and I want to slice into the whole data frame on the not-zeroth level of the data frame and get a dataframe back in the order of the sliced list.</p> <p>this happens automatically using <code>.loc</code> on the zeroth level of the multi index, but apparently not-so for other levels.</p> <p>here is what I mean:</p> <pre><code># create a multiindex dataframe index = [('a',1), ('b',2), ('c', 3)] index = pd.MultiIndex.from_tuples(index, names = ['letters', 'numbers']) df = pd.DataFrame(index = index, data = ['cat', 'dog', 'bird'], columns = ['animal']) # say I want to index on 'letters' level of df lst_1 = ['c', 'a'] df2 = df.loc[lst_1 ] # this returns what I want in the order of lst # now say I want to index on the 'numbers' level of df lst_2 = [3, 2, 1] df3 = df.loc(axis = 0)[:, lst_2] # PROBLEM: df3 is identical to df </code></pre> <p>EXPECTED OUTPUT:</p> <pre><code> animal letters numbers c 3 bird b 2 dog a 1 cat </code></pre> <p>but what df3 returns is</p> <pre><code> animal letters numbers a 1 cat b 2 dog c 3 bird </code></pre> <p>I want something where df3 is sorted as df2 is.</p> <p>I can cheat and do <code>df.swaplevels()</code> but I'm querying my dataframe with multiple different sources that sometimes require subsetting on <code>&quot;numbers&quot;</code> and sometimes on <code>&quot;letters&quot;</code> and I hate, hate copying and recopying my dataframe while adding unnecessary lines of code.</p> <p>thank you!</p>
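One sketch that preserves the requested order without `swaplevel`: select the level values one at a time and concatenate, so the result follows the order of `lst_2` rather than the index's stored order.

```python
import pandas as pd

# Rebuild the example frame from the question.
index = pd.MultiIndex.from_tuples(
    [("a", 1), ("b", 2), ("c", 3)], names=["letters", "numbers"])
df = pd.DataFrame({"animal": ["cat", "dog", "bird"]}, index=index)

lst_2 = [3, 2, 1]

# Boolean-select each requested value in turn and concatenate; the
# output row order is exactly the order of lst_2.
level = df.index.get_level_values("numbers")
df3 = pd.concat([df[level == v] for v in lst_2])
```

This yields `bird`, `dog`, `cat` — the expected output — and works for any level without copying or re-sorting the original frame.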
<python><pandas><dataframe><multi-index>
2023-01-01 20:25:06
0
365
wiscoYogi
74,977,046
11,611,632
Django RequestFactory; TypeError: get() missing 1 required positional argument;
<p>In attempting to test the context of a view <code>PostedQuestionPage</code> with RequestFactory, the following error is being raised when running the test:</p> <pre><code> File &quot;C:\Users\..\django\test\testcases.py&quot;, line 1201, in setUpClass cls.setUpTestData() Fi..\posts\test\test_views.py&quot;, line 334, in setUpTestData cls.view = PostedQuestionPage.as_view()(request) File &quot;C:\..\django\views\generic\base.py&quot;, line 70, in view return self.dispatch(request, *args, **kwargs) File &quot;C:\..\django\views\generic\base.py&quot;, line 98, in dispatch return handler(request, *args, **kwargs) TypeError: get() missing 1 required positional argument: 'question_id' </code></pre> <p>It's unclear to me as to why the error is being raised while <code>question_id</code> is being passed into <code>reverse(&quot;posts:question&quot;, kwargs={'question_id': 1})</code>?</p> <p><em>test_views.py</em></p> <pre><code>class TestQuestionIPAddressHit(TestCase): '''Verify that a page hit is recorded from a user of a given IP address on a posted question.''' @classmethod def setUpTestData(cls): user_author = get_user_model().objects.create_user(&quot;ItsNotYou&quot;) user_me = get_user_model().objects.create_user(&quot;ItsMe&quot;) author_profile = Profile.objects.create(user=user_author) user_me = Profile.objects.create(user=user_me) question = Question.objects.create( title=&quot;Blah blahhhhhhhhh blahhh I'm bord blah blah zzzzzzzzzzzzz&quot;, body=&quot;This is zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz...zzzzz&quot;, profile=author_profile ) authenticated_user = user_me.user request = RequestFactory().get( reverse(&quot;posts:question&quot;, kwargs={&quot;question_id&quot;: 1}), headers={ &quot;REMOTE_ADDR&quot;: &quot;IP Address here&quot; } ) request.user = authenticated_user cls.view = PostedQuestionPage.as_view()(request) </code></pre> <p><em>views.py</em></p> <pre><code>class PostedQuestionPage(Page): template_name = &quot;posts/question.html&quot; def 
get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['answer_form'] = AnswerForm return context def get(self, request, question_id): context = self.get_context_data() question = get_object_or_404(Question, id=question_id) if question.profile.user != request.user and not question.visible: raise Http404 context |= { 'question': question, 'answer_count': question.answers.count() } return self.render_to_response(context) </code></pre> <p><em>urls.py</em></p> <pre><code>posts_patterns = ([ path(&quot;&quot;, pv.QuestionListingPage.as_view(), name=&quot;main&quot;), re_path(r&quot;questions/ask/?$&quot;, pv.AskQuestionPage.as_view(), name=&quot;ask&quot;), re_path(r&quot;questions/(?P&lt;question_id&gt;\d+)/edit/?$&quot;, pv.EditQuestionPage.as_view(), name=&quot;edit&quot;), re_path(r&quot;questions/(?P&lt;question_id&gt;\d+)/edit/answers/(?P&lt;answer_id&gt;\d+)/?$&quot;, pv.EditPostedAnswerPage.as_view(), name=&quot;answer_edit&quot;), re_path(r&quot;questions/(?P&lt;question_id&gt;\d+)/?$&quot;, pv.PostedQuestionPage.as_view(), name=&quot;question&quot;), re_path(r&quot;questions/tagged/(?P&lt;tags&gt;[0-9a-zA-Z\.\-#]+(?:\+[0-9a-zA-Z\.\-#]+)*)$&quot;, pv.TaggedSearchResultsPage.as_view(), name=&quot;tagged&quot;), re_path(r&quot;questions/tagged/$&quot;, pv.SearchTaggedRedirect.as_view(), name=&quot;tagged_redirect&quot;), re_path(r&quot;questions/search/$&quot;, pv.SearchMenuPage.as_view(), name=&quot;search_menu&quot;), re_path(r&quot;questions/search$&quot;, pv.SearchResultsPage.as_view(), name=&quot;search_results&quot;), re_path(r&quot;questions/?$&quot;, pv.AllQuestionsPage.as_view(), name=&quot;main_paginated&quot;), </code></pre>
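The likely cause: `RequestFactory` only builds the request object — calling `PostedQuestionPage.as_view()(request)` never runs the URL resolver, so the `question_id` captured by `reverse()` is just part of the path string and is not passed to `get()`. The fix would be `PostedQuestionPage.as_view()(request, question_id=1)`. A Django-free stand-in illustrating why (class names mirror the question; the dispatch logic is simplified):

```python
# Minimal stand-in for Django's class-based view dispatch: URL kwargs
# come from the URL resolver in normal operation, so when the view
# callable is invoked directly they must be supplied by hand.
class View:
    @classmethod
    def as_view(cls):
        def view(request, *args, **kwargs):
            return cls().get(request, *args, **kwargs)
        return view

class PostedQuestionPage(View):
    def get(self, request, question_id):
        return f"question {question_id}"

view = PostedQuestionPage.as_view()
try:
    view("request")                    # mirrors the failing setUpTestData call
    error = ""
except TypeError as exc:
    error = str(exc)                   # missing positional arg: 'question_id'

response = view("request", question_id=1)  # the fix: pass the kwarg explicitly
```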
<python><django><django-tests>
2023-01-01 19:55:15
0
739
binny
74,977,008
10,430,394
Extract bezier curve info from a TTF file (ttfquery won't install)
<p>I saw this post: <a href="https://stackoverflow.com/questions/40437308/retrieving-bounding-box-and-bezier-curve-data-for-all-glyphs-in-a-ttf-font-file">Retrieving bounding box and bezier curve data for all glyphs in a .ttf font file in python</a></p> <p>about how to retrieve the bezier curve data from a TTF file. The reason I want this is so that I can make animations of letters being drawn in matplotlib. The problem is that the package ttfquery won't install. My guess is that my python version is too new (3.9) and that package is pretty old.</p> <pre><code>C:\Users\Chris&gt;pip install TTFQuery Collecting TTFQuery Downloading TTFQuery-1.0.5.tar.gz (17 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; [7 lines of output] Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 34, in &lt;module&gt; File &quot;C:\Users\Chris\AppData\Local\Temp\pip-install-djyaq0d_\ttfquery_678fb4fdd4004a81b1bc72c29a99b35f\setup.py&quot;, line 11 except ImportError, err: ^ SyntaxError: invalid syntax [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>Is there another way to obtain that sort of info from a TTF file? Alternatively, is there a way to specifically obtain the stroke order of Japanese characters that can be downloaded for free?</p> <p>I ultimately want to make animations such as <a href="https://jisho.org/search/%E9%9A%8E%20%23kanji" rel="nofollow noreferrer">this</a> one. 
Is there some free tool/ data online for that sort of purpose or do I have to install an earlier python version and use that instead?</p>
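ttfquery is Python-2-era code (the `except ImportError, err:` syntax in the traceback confirms it), so it cannot install on Python 3.9. The maintained alternative is fontTools, whose pen protocol exposes each glyph's outline segments. A sketch, assuming a font file path of your own (`font.ttf` is a placeholder):

```python
from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

font = TTFont("font.ttf")          # placeholder path to any .ttf file
glyph_set = font.getGlyphSet()

pen = RecordingPen()
glyph_set["A"].draw(pen)           # records moveTo/lineTo/qCurveTo/closePath
outline = pen.value                # list of (operator, points) tuples
```

TrueType outlines use quadratic curves (`qCurveTo`), which map directly onto matplotlib `Path` codes for animation. For Japanese stroke order specifically, the KanjiVG project provides per-stroke SVG data under a free license, which may be a better fit than glyph outlines.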
<python><truetype>
2023-01-01 19:47:03
0
534
J.Doe
74,976,963
14,584,978
Windows Authentication for polars connectorx SQL Server
<p><a href="https://stackoverflow.com/questions/74967574/connect-python-polars-to-sql-server-no-support-currently">Can we connect to SQL server using polars and connectorx? YES</a></p> <p>The username I used in SQL Server Management Studio right after the below test without issue.</p> <pre><code>conn = 'mssql+pyodbc://username@server/database?driver=SQL+Server''&amp;trusted_connection=yes' cx.read_sql(conn,query) </code></pre> <blockquote> <p>[2023-01-01T19:12:44Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:12:44Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:12:45Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:12:47Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:12:51Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:12:57Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 [2023-01-01T19:13:10Z ERROR tiberius::tds::stream::token] Login failed for user 'user'. code=18456 Traceback (most recent call last): File &quot;&quot;, line 1, in File &quot;C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\connectorx_init_.py&quot;, line 224, in read_sql result = _read_sql( RuntimeError: Timed out in bb8</p> </blockquote> <p>Do I need to reconfigure my connection string?</p>
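The `code=18456` failures suggest SQL-style authentication is being attempted. connectorx takes a plain URI (not an SQLAlchemy-style string), and per its documentation Windows (SSPI) authentication for mssql is requested with `trusted_connection=true`. A sketch of the connection string — server, port, and database names are placeholders, and this is untested without a live server:

```python
import connectorx as cx

# connectorx URI, not an SQLAlchemy URL; trusted_connection=true asks
# the underlying tiberius driver for Windows (SSPI) authentication,
# so no username/password is embedded.
conn = "mssql://servername:1433/databasename?trusted_connection=true"
df = cx.read_sql(conn, "SELECT TOP 10 * FROM some_table")
```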
<python><sql-server><windows-authentication><python-polars>
2023-01-01 19:39:23
1
374
Isaacnfairplay
74,976,749
9,649,681
How to scrape a multi-page website with Python?
<p>I need to scrape the following table: <a href="https://haexpeditions.com/advice/list-of-mount-everest-climbers/" rel="nofollow noreferrer">https://haexpeditions.com/advice/list-of-mount-everest-climbers/</a></p> <p>How to do it with python?</p>
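That page renders its table with JavaScript, so plain `requests` + BeautifulSoup on the page HTML will not see the rows; the usual options are driving a browser (Selenium/Playwright) or finding the JSON endpoint the page itself calls. Whatever the fetch method, the multi-page loop looks the same. A sketch with an injected per-page fetcher (the real fetcher — URL, parameters, parsing — is site-specific and left as an assumption):

```python
def scrape_all_pages(fetch_page, max_pages=1000):
    """Accumulate rows page by page until an empty page signals the end."""
    rows = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)   # e.g. requests.get(url, params={"page": page}).json()
        if not batch:
            break
        rows.extend(batch)
    return rows

# Stub fetcher standing in for the real per-page request:
pages = {1: ["row1", "row2"], 2: ["row3"]}
result = scrape_all_pages(lambda p: pages.get(p, []))
```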
<python><web-scraping>
2023-01-01 18:58:03
1
686
juststuck
74,976,733
9,855,588
easy way to change logger name for each logger instance
<p>I have a root logging class that I created, which I'd like to use for each micro-service function that I'm deploying.</p> <p>Example output log: <code>[2023-01-01 13:46:26] - INFO - [utils.logger.&lt;module&gt;:5] - testaaaaa</code> The logger is defined in <code>utils.logger</code> so that's why it's showing that in the log, hence <code>%(name)s</code>.</p> <p>Instead of using the same root logger name which is set with <code>logger = logging.getLogger(__name__)</code>, how can I get the same structure in dot notation where the logger is being instantiated and called?</p> <p>Even if I have to modify my logger class to accept a <code>name</code> parameter when initializing the object, that is fine. But I like the dot notation because I will have files like <code>routes.users.functiona</code>, <code>routes.users.functionb</code>, <code>routes.database.functiona</code> and so on.</p> <p>So I want to show which module the logging came from. Can't seem to follow how <code>logging</code> is capturing the path when using <code>__name__</code>.</p> <p>Also if you have any suggestions about making the following more robust :) Here is my class:</p> <pre><code>import typing import logging class GlobalLogger: MINIMUM_GLOBAL_LEVEL = logging.DEBUG GLOBAL_HANDLER = logging.StreamHandler() LOG_FORMAT = &quot;[%(asctime)s] - %(levelname)s - [%(name)s.%(funcName)s:%(lineno)d] - %(message)s&quot; LOG_DATETIME_FORMAT = &quot;%Y-%m-%d %H:%M:%S&quot; def __init__(self, level: typing.Union[int, str] = MINIMUM_GLOBAL_LEVEL): self.level = level self.logger = self._get_logger() self.log_format = self._log_formatter() self.GLOBAL_HANDLER.setFormatter(self.log_format) def _get_logger(self): logger = logging.getLogger(__name__) logger.setLevel(self.level) logger.addHandler(self.GLOBAL_HANDLER) return logger def _log_formatter(self): return logging.Formatter(fmt=self.LOG_FORMAT, datefmt=self.LOG_DATETIME_FORMAT) </code></pre>
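The key point: `__name__` is evaluated where it is *written*, so inside `utils/logger.py` it is always `"utils.logger"`. Passing the caller's `__name__` in gives each module its own dotted logger name, and `logging` builds the hierarchy from the dots automatically. A sketch reworking the class into a factory (same format string as the question):

```python
import logging

LOG_FORMAT = ("[%(asctime)s] - %(levelname)s - "
              "[%(name)s.%(funcName)s:%(lineno)d] - %(message)s")

def get_logger(name: str, level: int = logging.DEBUG) -> logging.Logger:
    """Create (or fetch) a logger named after the *calling* module."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:            # guard against duplicate handlers
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            fmt=LOG_FORMAT, datefmt="%Y-%m-%d %H:%M:%S"))
        logger.addHandler(handler)
    return logger

# In routes/users.py you would write: log = get_logger(__name__)
log = get_logger("routes.users.functiona")
```

`logging.getLogger(name)` returns the same instance for the same name, so repeated calls are cheap and the handler guard prevents duplicated output.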
<python><python-3.x><python-logging>
2023-01-01 18:54:05
1
3,221
dataviews
74,976,655
8,524,178
Cairo: Remove Faint Line Between Two Adjacent Paths / Combine Paths in Cairo
<p>Is there a way to combine two closed paths in Cairo, so the group of paths is filled as a single solid shape, instead of two shapes sitting next to each other?</p> <p>If I draw two paths next to each other, there is a faint blank line visible between the two shapes. For example, the below script draws a square and an adjacent half circle so they both share a line. There is a faint blank line visible between the two shapes. I want there to be no dividing line between the two shapes.</p> <pre class="lang-py prettyprint-override"><code>import math, cairo def draw_path_1(ctx): ctx.rectangle(40, 55, 30, 30) ctx.fill() def draw_path_2(ctx): ctx.arc(70, 70, 30, -math.pi / 2, math.pi / 2) ctx.fill() with cairo.SVGSurface(&quot;two_path_shape.svg&quot;, 150, 150) as surface: ctx = cairo.Context(surface) draw_path_1(ctx) draw_path_2(ctx) </code></pre> <p><a href="https://i.sstatic.net/JbjO1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JbjO1.png" alt="Two shape SVG with faint line" /></a></p> <p>I want some way of drawing the SVG without the faint line between the two shapes. I am not only looking for a solution specifically for this example of a square and a half circle, but some method of doing this for more complex paths.</p> <p>I found that exporting the image as a png does remove this line, but that doesn't help me, as I want the final output as an SVG file.</p> <pre class="lang-py prettyprint-override"><code>surface.write_to_png(&quot;two_path_shape.png&quot;) </code></pre>
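One approach worth trying (a sketch, not verified against every SVG renderer): add both subpaths to the *same* current path and fill once. Cairo then computes the fill as a single operation over the combined region, so the shared edge is interior to one path instead of a boundary between two separately antialiased fills.

```python
import math
import cairo

with cairo.SVGSurface("two_path_shape.svg", 150, 150) as surface:
    ctx = cairo.Context(surface)
    # Both subpaths accumulate in the current path; one fill() call
    # treats them as a single shape, avoiding the seam produced by
    # two adjacent, independently filled paths.
    ctx.rectangle(40, 55, 30, 30)
    ctx.arc(70, 70, 30, -math.pi / 2, math.pi / 2)
    ctx.fill()
```

For shapes that merely abut (rather than overlap), extending one shape slightly so the subpaths overlap before the single fill is a common additional safeguard.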
<python><svg><2d><cairo><pycairo>
2023-01-01 18:42:39
0
1,764
The Matt
74,976,605
8,799,471
Run excel macro (locked) using python
<p>It isn't duplicate. Others suggest to crack the password but that's not something we are planning to do. Don't way any legal issues. Macro models are locked. Can't execute programmatically</p> <p>So we have an excel where we fill some data. Click on the given button. It performs some calculations (using macros probably) and then fill the result in the excel only. We want to take another step forward and want to automate the task to fill the input data too.</p> <p>The problem is modules are locked i can't programmatically execute macro. <a href="https://i.sstatic.net/SsfrI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SsfrI.png" alt="enter image description here" /></a></p> <p>Tried executing with python. <a href="https://i.sstatic.net/BlMvE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BlMvE.png" alt="enter image description here" /></a></p> <p>If we open that file with libre office (linux) it shows this. <a href="https://i.sstatic.net/8YOdc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8YOdc.png" alt="enter image description here" /></a></p> <p>The excel file is the freely available <a href="https://www.hse.gov.uk/RESEARCH/rrpdf/rr446g.pdf" rel="nofollow noreferrer">Fatigue index calculator</a> If its not possible to do it this way. Is there any other way? Can i somehow use the button present in the excel? Any help is appreciated.</p> <p>I am not an excel guy, correct me if I am wrong somewhere. Can provide more info.</p> <p>NOTE: I can't (don't want to) break the password due to legality purposes.</p>
<python><excel><vba><libreoffice>
2023-01-01 18:33:04
2
1,827
targhs
74,976,538
17,795,398
Python session kept open after closing tkinter window with a matplotlib graph
<p>I want to put a <code>matplotlib</code> figure in a <code>tkinter</code> user interface. This is my code, based on the <code>matplotlib</code> documentation:</p> <pre><code>import tkinter as tk import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk import sys root = tk.Tk() fig = plt.figure(figsize=(4,4)) ax = fig.add_subplot() line, = ax.plot([1,2,3],[1,4,9]) canvas = FigureCanvasTkAgg(fig, master=root) canvas.draw() toolbar = NavigationToolbar2Tk(canvas, root, pack_toolbar=False) toolbar.update() canvas.get_tk_widget().pack() toolbar.pack() tk.Button(root, text=&quot;Exit&quot;, command=lambda:sys.exit()).pack() root.mainloop() </code></pre> <p>I runs correctly. The problem comes when I close the window. If I press the &quot;X&quot; button of the window, the terminal never finishes, but if I press the &quot;Exit&quot; button I added, it finishes. If instead I use <code>root.destroy()</code>, it neither works. How can I solve this issue?</p> <p>Thanks!</p> <p><strong>Edit:</strong></p> <p>I uploaded a video in YouTube, so you can see my problem: <a href="https://youtu.be/qUkm-lnXRR8" rel="nofollow noreferrer">https://youtu.be/qUkm-lnXRR8</a></p> <p>I'm using Windows 11 PowerShell, version:</p> <pre><code>Name Value ---- ----- PSVersion 5.1.22621.963 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.22621.963 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 </code></pre> <p>Python version 3.11.1</p>
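A commonly suggested fix for this exact embedding pattern (a sketch, untested here since it needs a display): the matplotlib figure keeps resources alive after the Tk window's "X" is pressed, so bind the window-manager close event and shut both down explicitly.

```python
def on_close():
    plt.close("all")   # let matplotlib release its figure resources
    root.quit()        # stop mainloop()
    root.destroy()     # tear down the Tk interpreter

# Run the handler when the window's "X" button is pressed,
# instead of (or in addition to) the custom Exit button.
root.protocol("WM_DELETE_WINDOW", on_close)
root.mainloop()
```

The same `on_close` can replace the `lambda: sys.exit()` on the Exit button so both paths clean up identically.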
<python><matplotlib><tkinter>
2023-01-01 18:20:54
1
472
Abel Gutiérrez
74,976,337
6,057,371
pandas: how to get all rows with a specific count of values
<p>I have a dataframe</p> <pre><code>df = C1 C2 a. 2 d. 8 d. 5 d. 5 b. 3 b. 4 c. 5 a. 6 b. 7 </code></pre> <p>I want to take all the rows, in which the count of the value in C1 is &lt;= 2, and add a new col that is low, and keep the original value otherwise. So the new df will look like that:</p> <pre><code>df_new = C1 C2 type a. 2 low d. 8 d d. 5 d d. 5 d b. 3. b b. 4 b c. 5. low a. 6. low b. 7 b </code></pre> <p>How can I do this?</p> <p>I also want to get back a list of all the types that were low (['a','c'] here)</p> <p>Thanks</p>
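A sketch using `value_counts` plus `Series.where` (frame reconstructed from the question, dots stripped from the labels):

```python
import pandas as pd

df = pd.DataFrame({
    "C1": ["a", "d", "d", "d", "b", "b", "c", "a", "b"],
    "C2": [2, 8, 5, 5, 3, 4, 5, 6, 7],
})

# Count occurrences of each value in C1, collect the rare ones,
# then keep the original value where it is common and write "low"
# where it is rare.
counts = df["C1"].value_counts()
low_types = sorted(counts[counts <= 2].index)        # the requested list
df["type"] = df["C1"].where(~df["C1"].isin(low_types), "low")
```

`low_types` comes out as `['a', 'c']`, matching the expected answer, and the `type` column keeps `d`/`b` for the frequent values.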
<python><pandas><dataframe><group-by>
2023-01-01 17:49:30
1
2,050
Cranjis
74,976,313
317,797
Possible to Stringize a Polars Expression?
<p>Is it possible to stringize a Polars expression and vice-versa?</p> <p>For example, convert <code>df.filter(pl.col('a')&lt;10)</code> to a string of <code>&quot;df.filter(pl.col('a')&lt;10)&quot;</code>.</p> <p>Is roundtripping possible e.g. <code>eval(&quot;df.filter(pl.col('a')&lt;10)&quot;)</code> for user input or tool automation?</p> <p>I know this can be done with a SQL expression but I'm interested in native. I want to show the specified filter in the title of plots.</p>
<python><sql><string><expression><python-polars>
2023-01-01 17:46:34
1
9,061
BSalita
74,976,269
11,300,553
TypeError: 'NoneType' object is not subscriptable even though checking the var with an if condition and setting it to a static value otherwise
<p>I am trying to write a prototype for web scraping. My problem is that I get the error in the title when <code>duetpartner = track['duet']['handle'] </code> is null or of NoneType.</p> <p>The thing is I already made a check for it and I set it to a static value if it is None:</p> <pre><code>def create_song_list(track): if track['duet']['handle'] is not None: duetpartner = track['duet']['handle'] else: duetpartner= 'Solo' return { 'Duet_partner': duetpartner } </code></pre> <p>I call <code>create_song_list</code> from within a array.</p> <p>If more code is required to reproduce I shall supply it, I try to keep it minimal.</p> <p>The simple solution got expired from a similar question, but still it's of NoneType for me...: <a href="https://stackoverflow.com/questions/62944825/discord-py-typeerror-nonetype-object-is-not-subscriptable">Discord.py TypeError: &#39;NoneType&#39; object is not subscriptable</a></p> <h1>UPDATE 1:</h1> <p>Adding <code>or track['duet']</code> condition doesn't help...</p>
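The check never gets a chance to run: `track['duet']['handle']` inside the `if` condition is evaluated *before* the `is not None` comparison, so when `track['duet']` itself is `None` the subscript raises first. Guarding the parent key fixes it — a sketch:

```python
def create_song_list(track):
    # track.get('duet') may be None (or absent), so fall back to {}
    # before looking up 'handle'; the second "or" also covers a
    # present-but-null handle value.
    duet = track.get('duet') or {}
    duetpartner = duet.get('handle') or 'Solo'
    return {'Duet_partner': duetpartner}
```

This handles all three shapes: `{'duet': None}`, a missing `'duet'` key, and a normal `{'duet': {'handle': ...}}` record.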
<python><python-3.x><list><loops><python-requests>
2023-01-01 17:42:05
2
321
Sir Muffington
74,976,204
552,563
Which objects are not destroyed upon Python interpreter exit?
<p>According to <a href="https://docs.python.org/3/reference/datamodel.html#object.__del__" rel="noreferrer">Python documentation</a>:</p> <blockquote> <p>It is not guaranteed that <code>__del__()</code> methods are called for objects that still exist when the interpreter exits.</p> </blockquote> <p>I know that in older versions of Python cyclic referencing would be one of the examples for this behaviour, however as I understand it, in Python 3 such cycles will successfully be destroyed upon interpreter exit.</p> <p>I'm wondering what are the cases (as close to exhaustive list as possible) when the interpreter would not destroy an object upon exit.</p>
<python><garbage-collection>
2023-01-01 17:28:45
3
3,011
Alex Bochkarev
74,976,152
713,200
How to get twitter profile name using python BeautifulSoup module?
<p>I'm trying to get twitter profile name using profile url with beautifulsoup in python, but whatever html tags I use, I'm not able to get the name. What html tags can I use to get the profile name from twitter user page ?</p> <pre><code>url = 'https://twitter.com/twitterID' html = requests.get(url).text soup = BeautifulSoup(html, 'html.parser') # Find the display name name_element = soup.find('span', {'class':'css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0'}) if name_element != None: display_name = name_element.text else: display_name = &quot;error&quot; </code></pre>
<python><html><beautifulsoup><html-parser>
2023-01-01 17:21:18
1
950
mac
74,976,151
18,002,913
How to get values from a web site using BeautifulSoup in Python?
<p>Im trying to get words with headers from web site with beautifulsoap in python but I couldnt do it. I'm trying to make a german dictionary.</p> <p>Here is the site</p> <p><strong><a href="https://almancakonulari.com/a1-seviye-almanca-kelimeler/#gsc.tab=0" rel="nofollow noreferrer">https://almancakonulari.com/a1-seviye-almanca-kelimeler/#gsc.tab=0</a></strong></p> <p>I'm trying to pull data from the word Hallo, but <code>h3</code> tags come in between, I want to take these tags and use them as identifiers, so I'm trying to create a data set like this.</p> <pre><code> [ { &quot;zahlen&quot;:[ { &quot;index&quot;: &quot;1.&quot;, &quot;word&quot;: &quot;Hallo&quot;, &quot;meaning&quot;: &quot;Merhaba&quot; }, { &quot;index&quot;: &quot;2.&quot;, &quot;word&quot;: &quot;Herzlich willkommen&quot;, &quot;meaning&quot;: &quot;Hoş geldiniz&quot; }, { &quot;index&quot;: &quot;3.&quot;, &quot;word&quot;: &quot;Auf Wiedersehen&quot;, &quot;meaning&quot;: &quot;Hoşça kalın&quot; } ] } ] </code></pre> <p><strong>hear is my code but my code doesnt work properly.</strong></p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd import json myList = [ &quot;jeden Tag&quot;, &quot;Zahlen&quot;, &quot;Tage – Monate&quot;, &quot;Jahreszeiten-Farben&quot;, &quot;Am Flughafen&quot;, &quot;Im Hotel&quot;, &quot;Im Restaurant&quot;, &quot;Transport&quot;, &quot;Zug&quot;, &quot;Taxi&quot;, &quot;Im weg&quot;, &quot;Theater&quot;, &quot;Kleidungsladen&quot;, &quot;Elektrofachgeschäft&quot;, &quot;Kosmetika&quot;, &quot;Coiffeur&quot;, &quot;Buchhandlung&quot;, &quot;Post&quot;, &quot;In der Arztpraxis&quot;, &quot;Polizei&quot;, &quot;Sport&quot;, &quot;Zubehör &quot;, &quot;Wichtige Verben Önemli Fiiller&quot;, &quot;Im Supermarkt Süpermarkette&quot;, &quot;Verkehrszeichen&quot;, &quot;Almanca Tur Kelimeleri&quot; ] headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36'} url = 
&quot;https://almancakonulari.com/a1-seviye-almanca-kelimeler/#gsc.tab=0&quot; result = requests.get(url) soup = BeautifulSoup(result.text, 'html.parser') div = soup.find('div', class_='entry-content clearfix') array = [] list={} i=0 for h3_tag in range(len(myList)): list[myList[h3_tag]] = [] tempObj = { &quot;identifier&quot;: myList[h3_tag] } tempArr = [] words = div.find_all('table',class_='table table-bordered')[i] for word in words: for word1 in word.find_all('tbody'): rows = word1.find_all('tr') for row in rows: each_word = row.find_all('td') tempArr.append({ &quot;index&quot;: each_word[0].string, &quot;word&quot;: each_word[1].string, &quot;meaning&quot;: each_word[2].string }) tempObj[&quot;words&quot;] = tempArr array.append(tempObj) i+=1 with open('almancaKelimeler1.json', 'w', encoding='utf-8') as f: json.dump(array, f, ensure_ascii=False, indent=4) </code></pre> <p>how can I do this ?</p> <p>when i run the current code i get this error:</p> <p><code>AttributeError: 'NavigableString' object has no attribute 'find_all'. Did you mean: '_find_all'?</code></p> <p>but when I remove</p> <p><code>words = div.find_all('table',class_='table table-bordered')</code><strong><code>[i]</code></strong></p> <p>for example like this</p> <p><code>words = div.find_all('table',class_='table table-bordered')</code></p> <p>in this case there is no error but it brings all the words for each identifier. i.e. it needs to fetch 1000 words in total but returns 1000 words for each title or identifier</p>
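Rather than a hard-coded heading list and a running counter `i`, each `<h3>` can be paired with the table that immediately follows it via `find_next`. A sketch against a made-up miniature of the page's structure (the real page's headings and rows will differ):

```python
from bs4 import BeautifulSoup

html = """
<div class="entry-content clearfix">
  <h3>Zahlen</h3>
  <table class="table table-bordered"><tbody>
    <tr><td>1.</td><td>Hallo</td><td>Merhaba</td></tr>
    <tr><td>2.</td><td>Herzlich willkommen</td><td>Hos geldiniz</td></tr>
  </tbody></table>
  <h3>Tage</h3>
  <table class="table table-bordered"><tbody>
    <tr><td>1.</td><td>Montag</td><td>Pazartesi</td></tr>
  </tbody></table>
</div>"""

soup = BeautifulSoup(html, "html.parser")
sections = []
for h3 in soup.select("div.entry-content h3"):
    table = h3.find_next("table")          # the table belonging to THIS heading
    words = []
    for row in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        words.append({"index": cells[0], "word": cells[1], "meaning": cells[2]})
    sections.append({"identifier": h3.get_text(strip=True), "words": words})
```

Because the identifier comes from the `<h3>` itself, each section only collects its own table's rows — the 1000-words-per-heading duplication disappears.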
<python>
2023-01-01 17:21:15
0
1,298
NewPartizal
74,976,075
815,612
How do I horizontally center a fixed-size widget inside a VBox?
<p>This code draws a window with some buttons in it:</p> <pre><code>import gi gi.require_version(&quot;Gtk&quot;, &quot;3.0&quot;) from gi.repository import Gtk window = Gtk.Window() box = Gtk.VBox() window.add(box) button1 = Gtk.Button(label=&quot;Hello&quot;) box.pack_start(button1, False, False, 10) button2 = Gtk.Button(label=&quot;Goodbye&quot;) box.pack_start(button2, False, False, 0) window.connect(&quot;destroy&quot;, Gtk.main_quit) window.show_all() Gtk.main() </code></pre> <p><a href="https://i.sstatic.net/gCgqU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gCgqU.png" alt="enter image description here" /></a></p> <p>The buttons stretch with the window, but I'd like them to be a fixed size and be centered horizontally in the window, i.e. equal space to the left and to the right of each button (but their y-coordinates should behave as they do now).</p>
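In a `Gtk.VBox`, `pack_start`'s `expand`/`fill` flags govern the vertical axis only — each child is still allocated the box's full width, which is why the buttons stretch. Setting the widgets' horizontal alignment should center them within that allocation (a sketch, untested here):

```python
# Center each button horizontally within its full-width allocation;
# vertical packing behaviour is unchanged.
button1.set_halign(Gtk.Align.CENTER)
button2.set_halign(Gtk.Align.CENTER)
```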
<python><gtk><pygtk>
2023-01-01 17:10:18
1
6,464
Jack M
74,975,991
1,734,990
shutil.copyfile PermissionError or FileNotFoundError Error
<p>If I use the following code to copy a file (based on the zillion of examples online):</p> <pre><code>import os import shutil from pathlib import Path DATA_DIR = Path.cwd() / 'sourceFolder' files = os.listdir(DATA_DIR) shutil.copyfile(os.path.join('sourceFolder', files[0]), '/destFolder') </code></pre> <p>I receive this error (I do have all the necessary permissions on both the directory and file):</p> <blockquote> <p>PermissionError: [Errno 13] Permission denied: '/destFolder'</p> </blockquote> <p>While, if I use this code (based on the other zillion of examples):</p> <pre><code>DATA_DIR = Path.cwd() / 'sourceFolder' files = os.listdir(DATA_DIR) des = os.path.join('/destFolder', files[0]) shutil.copyfile(os.path.join('sourceFolder', files[0]), des) </code></pre> <p>I receive the following error:</p> <blockquote> <p>FileNotFoundError: [Errno 2] No such file or directory: '/destFolder/nameOfFile</p> </blockquote> <p><strong>ANSWER to Initial Question:</strong></p> <p>I was finally able to make it work, adding the following lines of code:</p> <pre><code>dst = Path.cwd() / 'destFolder/nameOfFile' shutil.copyfile(os.path.join('sourceFolder', files[0]), dst) </code></pre> <p>Thanks, J.</p>
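Both errors trace to the destination argument: `shutil.copyfile` requires a full *file* path (a bare directory gives `PermissionError` on some platforms), and the destination directory must already exist (otherwise `FileNotFoundError`). Note also that `'/destFolder'` is an absolute path at the filesystem root, which is probably not what was intended. A self-contained sketch using temporary directories in place of the real folders:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the real sourceFolder/destFolder.
base = Path(tempfile.mkdtemp())
src_dir = base / "sourceFolder"
dst_dir = base / "destFolder"
src_dir.mkdir()
(src_dir / "example.txt").write_text("hello")

dst_dir.mkdir(parents=True, exist_ok=True)    # 1) the directory must exist
for src in src_dir.iterdir():
    shutil.copyfile(src, dst_dir / src.name)  # 2) destination is a FILE path
```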
<python><python-3.x>
2023-01-01 16:55:37
1
859
JF0001
74,975,921
7,800,760
Fuzzy search of users to retrieve associated data
<p>I am building an application which identifies <strong>people mentioned in free text (order of magnitude of a million)</strong> and stores their names as keys with one or more (for handling people with the same name) <strong>associated URIs</strong> pointing to a <strong>SPARQL based knowledge graph node</strong> which then lets me retrieve additional data about them.</p> <p>Currently I am storing the name and last name as keys in a <strong>Redis DB</strong> but the problem is that I need to use <strong>fuzzy search</strong> on those names using the Rapidfuzz library <strong><a href="https://github.com/maxbachmann/RapidFuzz" rel="nofollow noreferrer">https://github.com/maxbachmann/RapidFuzz</a></strong> because Redis only provides a simple LD distance fuzzy query which is not enough.</p> <p>What architecture would you recommend? Happy 2023!!!</p>
<python><redis><sparql>
2023-01-01 16:43:49
0
1,231
Robert Alexander
74,975,854
1,556,875
JSON wrapped in NULL?
<p>I'm using the API of an affiliate network (Sovrn), expecting to retrieve a product's specification using the URL.</p> <p>As per their documentation, I use:</p> <pre><code>url = 'URL-goes-here' headers = { &quot;accept&quot;: &quot;application/json&quot;, &quot;authorization&quot;: &quot;VERY-HARD-TO-GUESS&quot; } response = requests.get(url, headers=headers) </code></pre> <p>The code is working, the response I get is <code>200</code>, the header contains the magical <code>content-type application/json</code> line</p> <p>when I do</p> <pre><code>print(response.text) </code></pre> <p>I get</p> <pre><code>NULL({&quot;merchantName&quot;:&quot;Overstock&quot;,&quot;canonicalUrl&quot;:&quot;URL goes here&quot;,&quot;title&quot;:&quot;product name&quot;,...}); </code></pre> <p>I tested for response type of <code>response.text</code>, it's <code>&lt;class 'str'&gt;</code> as expected. But when I try to process the response as json:</p> <pre><code>product_details = json.load(response.text) </code></pre> <p>I get an error message:</p> <pre><code>requests.exceptions.JSONDecodeError: [Errno Expecting value] </code></pre> <p>I'm new to JSON, but I assume the error is due to the outer NULL that the (seemingly valid) data is wrapped in.</p> <p>After spending a few hours searching for a solution, it seems that I must be missing something obvious, but not sure what.</p> <p>Any pointers would be extremely helpful.</p>
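The diagnosis in the question is right: `NULL(...)` is a JSONP-style callback wrapper, which `json` cannot parse. Strip the wrapper before decoding — and note it is `json.loads` for strings (`json.load` is for file objects). A sketch against an abbreviated version of the response:

```python
import json
import re

raw = 'NULL({"merchantName":"Overstock","title":"product name"});'

# Keep what sits between the callback's '(' and the trailing ');'.
match = re.search(r"^\s*\w*\((.*)\)\s*;?\s*$", raw, re.DOTALL)
payload = json.loads(match.group(1))   # loads (string), not load (file)
```

In the real code this would be `json.loads(...)` applied to `response.text` after the same stripping step.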
<python><json>
2023-01-01 16:32:30
2
533
Zsolt Balla
74,975,689
19,838,445
Set property on a function object
<p>Is it possible to assign property on a function object the similar way we assign it on class instances. My desired behaviour is like this</p> <pre class="lang-py prettyprint-override"><code>def prop(): print(&quot;I am a property&quot;) def my_func(): print(&quot;Just a function call&quot;) my_func.prop = prop my_func.prop # prints 'I am a property' </code></pre> <p>I am able to invoke it as a function call <code>my_func.prop()</code>, but is there a way to override <code>__getattribute__</code> or something to achieve this result?</p> <p>I have tried attaching it on a class</p> <pre class="lang-py prettyprint-override"><code>setattr(my_func.__class__, &quot;prop&quot;, property(prop)) </code></pre> <p>but definitely that's not the way</p> <pre><code>TypeError: cannot set 'prop' attribute of immutable type 'function' </code></pre>
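As the `TypeError` says, the built-in `function` type is immutable, so a `property` can never be attached to it — properties only work when defined on a (mutable) class. A wrapper class used as a decorator gives the desired behaviour, a sketch:

```python
class FunctionWithProps:
    """Wrap a function so descriptors on this class act like properties."""
    def __init__(self, func):
        self._func = func

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)

    @property
    def prop(self):
        return "I am a property"

@FunctionWithProps
def my_func():
    return "Just a function call"
```

`my_func.prop` now evaluates without parentheses (the property fires through the wrapper's class), while `my_func()` still calls straight through to the original function.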
<python><function><properties><getattr><python-descriptors>
2023-01-01 16:04:08
1
720
GopherM
74,975,596
8,549,456
Matplotlib's show function triggering unwanted output
<p>Whenever I have any Python code executed via Python v3.10.4 with or without debugging in Visual Studio Code v1.74.2, I get output looking like the following in the Debug Console window in addition to the normal output of the code. Otherwise, all of my Python programs work correctly and as intended at this time.</p> <pre><code>1 HIToolbox 0x00007ff81116c0c2 _ZN15MenuBarInstance22RemoveAutoShowObserverEv + 30 2 HIToolbox 0x00007ff8111837e3 SetMenuBarObscured + 115 3 HIToolbox 0x00007ff81118a29e _ZN13HIApplication11FrontUILostEv + 34 4 HIToolbox 0x00007ff811183622 _ZN13HIApplication15HandleActivatedEP14OpaqueEventRefhP15OpaqueWindowPtrh + 508 5 HIToolbox 0x00007ff81117d950 _ZN13HIApplication13EventObserverEjP14OpaqueEventRefPv + 182 6 HIToolbox 0x00007ff811145bd2 _NotifyEventLoopObservers + 153 7 HIToolbox 0x00007ff81117d3e6 AcquireEventFromQueue + 494 8 HIToolbox 0x00007ff81116c5a4 ReceiveNextEventCommon + 725 9 HIToolbox 0x00007ff81116c2b3 _BlockUntilNextEventMatchingListInModeWithFilter + 70 10 AppKit 0x00007ff80a973f33 _DPSNextEvent + 909 11 AppKit 0x00007ff80a972db4 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219 12 AppKit 0x00007ff80a9653f7 -[NSApplication run] + 586 13 _macosx.cpython-310-darwin.so 0x0000000110407e22 show + 162 14 Python 0x0000000100bb7595 cfunction_vectorcall_NOARGS + 101 15 Python 0x0000000100c9101f call_function + 175 16 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 17 Python 0x0000000100c801df _PyEval_Vector + 383 18 Python 0x0000000100c9101f call_function + 175 19 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 20 Python 0x0000000100c801df _PyEval_Vector + 383 21 Python 0x0000000100b53f61 method_vectorcall + 481 22 Python 0x0000000100c8a4f2 _PyEval_EvalFrameDefault + 35234 23 Python 0x0000000100c801df _PyEval_Vector + 383 24 Python 0x0000000100c9101f call_function + 175 25 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 26 Python 0x0000000100c801df _PyEval_Vector 
+ 383 27 Python 0x0000000100cf536d pyrun_file + 333 28 Python 0x0000000100cf4b2d _PyRun_SimpleFileObject + 365 29 Python 0x0000000100cf417f _PyRun_AnyFileObject + 143 30 Python 0x0000000100d20047 pymain_run_file_obj + 199 31 Python 0x0000000100d1f815 pymain_run_file + 85 32 Python 0x0000000100d1ef9e pymain_run_python + 334 33 Python 0x0000000100d1ee07 Py_RunMain + 23 34 Python 0x0000000100d201e2 pymain_main + 50 35 Python 0x0000000100d2048a Py_BytesMain + 42 36 dyld 0x00007ff80741b310 start + 2432 </code></pre> <ol> <li>Why do these lines come out in the Debug Console window though there is nothing in any of my Python programs that would directly cause them to come out as far as I know?</li> <li>How are they helpful and how can they be used if needed?</li> <li>How can I prevent them from coming out by default?</li> </ol> <p>I checked out <a href="https://code.visualstudio.com/docs/python/debugging" rel="nofollow noreferrer">the Visual Studio Code documentation on Python debugging</a> but could not come across anything that would explain these lines. I am running Visual Studio Code on macOS Ventura v13.1.</p> <hr /> <p><strong>UPDATE as of January 2, 2023</strong></p> <p>I have figured out that the unwanted output in my initial post is being triggered by the matplotlib.pyplot.show function in my Python programs. I get that output even when I run a program as simple as that below:</p> <pre><code>import matplotlib.pyplot as plt x = [1, 2, 3] y = [1, 2, 3] plt.plot(x, y) plt.show() </code></pre> <p>When I remove plt.show() from the code above, the 36-line unwanted output does not come out but then the graph is also not displayed. Again, other than the unwanted output, all of my Python programs with the show function appear to work correctly, including the graph display triggered by the show function. 
I have Matplotlib 3.5.2 installed on my Mac.</p> <p>A very similar unwanted output comes out if I run the same program directly through the command line as (assume the Python program's name is <code>test.py</code>):</p> <pre><code>python3 test.py </code></pre> <p>but not when I run <code>test.py</code> through IDLE, Python’s Integrated Development and Learning Environment, or the code in it from within a Jupyter notebook.</p> <p>I can remove the show function from my Python programs to avoid the unwanted output but then the charts will not appear and I would prefer using the show function rather than a makeshift solution.</p> <hr /> <p><strong>UPDATE as of January 4, 2023</strong></p> <p>I was <a href="https://discourse.matplotlib.org/t/show-function-triggering-unwanted-output/23442" rel="nofollow noreferrer">suggested at a Matplotlib forum</a> that this might somehow be a macOS Ventura v13.1 issue because similar issues have started being reported recently with different programs executed under macOS Ventura v13.1. One user has reported encountering a similar output <a href="https://www.reddit.com/r/learnpython/comments/zyukpk/weird_output_in_terminal_while_using_tkinter/" rel="nofollow noreferrer">with code using Tkinter</a> and another <a href="https://github.com/mpv-player/mpv/issues/11018" rel="nofollow noreferrer">while using a video player called mpv</a>.</p> <p>It is not implausible that the problem is also related with macOS Ventura v13.1 but I don’t know how and my questions remain.</p> <hr /> <p><strong>UPDATE as of January 6, 2023</strong></p> <p>Upgraded Matplotlib to v3.6.2 but the unwanted output issue has not been resolved.</p> <hr /> <p><strong>UPDATE as of January 8, 2023</strong></p> <p>Tried Matplotlib v3.6.2 along with Python v3.11.1. 
The unwanted output issue remains.</p> <hr /> <p><strong>UPDATE as of January 15, 2023</strong></p> <p>Reported this issue as a bug to Matplotlib Developers on GitHub: &quot;<a href="https://github.com/matplotlib/matplotlib/issues/24997" rel="nofollow noreferrer">[Bug]: Show function triggering unwanted additional output #24997 </a>&quot;</p> <hr /> <p><strong>UPDATE as of January 16, 2023</strong></p> <p>I have found out that the unwanted output comes out only when the &quot;Automatically hide and show the menu bar&quot; option under Systems Settings-&gt;Desktop&amp;Dock-&gt;Menu Bar is set to <code>Always</code> (which is my setting) or <code>on Desktop Only</code>. The unwanted output does not come out if I set that option to <code>In Full Screen Only</code> or <code>Never</code>.</p> <hr /> <p><strong>UPDATE as of January 18, 2023</strong></p> <p>Both Matplotlib and Python developers on GitHub think that the unwanted output, which they can reproduce, is the result of a bug in macOS Ventura 13.1 and therefore there is not anything they can do about it.</p> <p>For details, see the respective discussions following the bug report I mentioned submitting for Matplotlib on GitHub and also the one I later submitted for Tkinter through Python/CPython on GitHub again as &quot;<a href="https://github.com/python/cpython/issues/101067" rel="nofollow noreferrer">Tkinter causing unwanted output in most recent macOS</a>&quot;. I was also told in response to the latter that a Feedback Assistant report had now been submitted to Apple about the identified bug.</p> <hr /> <p><strong>UPDATE as of January 25, 2023</strong></p> <p>Upgraded the macOS on my Mac to Ventura 13.2 today (and further to Ventura 13.2.1 when it came out in mid-February). No change except that the unwanted output, when the small program is run, is now considerably longer (85 lines). 
As before, the program works fine otherwise and the unwanted output does not come out if I change my Mac's menu bar setting, for example, to <code>Never</code>.</p>
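Until Apple fixes the underlying bug, one possible workaround (not from the original question, purely a sketch, and note that it hides *all* native stderr output while active, including real errors) is to redirect the OS-level stderr file descriptor around the `plt.show()` call:

```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_os_stderr():
    # Point file descriptor 2 (stderr) at os.devnull so that noise written
    # by native frameworks (HIToolbox/AppKit here) is discarded. Anything
    # written to fd 2 while the context is active is thrown away.
    saved_fd = os.dup(2)
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull_fd, 2)
        yield
    finally:
        os.dup2(saved_fd, 2)
        os.close(devnull_fd)
        os.close(saved_fd)

# Hypothetical usage around the call that triggers the output:
# with suppress_os_stderr():
#     plt.show()
```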
<python><macos><matplotlib><visual-studio-code><vscode-debugger>
2023-01-01 15:50:36
3
337
Alper
74,975,489
2,179,057
Finding the minimum supported Python version for a package
<p>I've just made a Python package and to help me in managing it, I'm using some relatively new features in Python for typings. Given that this is just typings, will my package work for lower versions?</p> <p>Here's an example of the package:</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from typing import List, TypeVar, Callable, Union, overload, Optional, Any from .another_module import SomeClass T = TypeVar(&quot;T&quot;) class ExampleClass: def __init__(self, arr: List[T] = ()): self.arr = arr @overload def find(self, predicate: Callable[[T, Optional[int], Optional[List[T]]], bool]) -&gt; T | None: ... @overload def find(self, predicate: SomeClass) -&gt; T | None: ... def find(self, predicate: Union[SomeClass, Callable[[T, Optional[int], Optional[List[T]]], bool]]) -&gt; T | None: # code goes here def chainable(self) -&gt; ExampleClass: return modified(self) </code></pre> <p>From what I understand, setting <code>ExampleClass</code> as the return type on <code>chainable</code> is something from version 3.11. There are some other typing things that I'm not 100% sure what version they were added in but I don't think someone using version 3.6 for example would have access to it.</p> <p>Considering these are just type hints, can I include versions &gt;=3.6 or do I have to remove all the typings?</p>
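A minimal check of the version question (a sketch, with the `SomeClass`/`modified` dependencies stripped out): with `from __future__ import annotations` (PEP 563, available since Python 3.7), annotations are never evaluated at runtime, so both the `ExampleClass` self-reference and the `T | None` union syntax already work on interpreters older than 3.10/3.11 — the runtime floor is set by what you actually *execute*, e.g. variable annotations (3.6) or `X | Y` outside of annotations (3.10):

```python
from __future__ import annotations  # PEP 563: lazy annotations, Python 3.7+
from typing import TypeVar

T = TypeVar("T")

class ExampleClass:
    def find(self, predicate) -> T | None:  # stored as a string, not executed
        ...

    def chainable(self) -> ExampleClass:    # forward self-reference is fine
        return self

# With the __future__ import, annotations stay as plain strings:
print(ExampleClass.chainable.__annotations__["return"])  # ExampleClass
print(ExampleClass.find.__annotations__["return"])       # T | None
```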
<python><pypi>
2023-01-01 15:30:59
1
4,510
Spedwards
74,975,291
2,707,864
Dataframe with column of type list: Append to selected rows
<p>I have two dataframes (created with code below) as</p> <pre><code>df1 Fecha Vals 0 2001-01-01 [] 1 2001-01-02 [] 2 2001-01-03 [] 3 2001-01-04 [] 4 2001-01-05 [] 5 2001-01-06 [] 6 2001-01-07 [] 7 2001-01-08 [] 8 2001-01-09 [] df2 Fecha Vals 0 2001-01-01 0.0 1 2001-01-03 1.0 2 2001-01-05 2.0 3 2001-01-07 3.0 4 2001-01-09 4.0 </code></pre> <p>I want to append values in <code>df2</code> to each corresponding row in <code>df1</code> to obtain</p> <pre><code>df1 Fecha Vals 0 2001-01-01 [0.0] 1 2001-01-02 [] 2 2001-01-03 [1.0] 3 2001-01-04 [] 4 2001-01-05 [2.0] 5 2001-01-06 [] 6 2001-01-07 [3.0] 7 2001-01-08 [] 8 2001-01-09 [4.0] </code></pre> <p>I am close to finishing this with <code>for</code> loops, but for large dataframes my partial work already shows this becomes very slow. I suspect there is a way to do it faster, without looping, but I couldn't so far get there.</p> <p>As a first step, I could filter rows in <code>df1</code> with</p> <pre><code>df1['Fecha'].isin(df2['Fecha'].values) </code></pre> <p><strong>Notes</strong>:</p> <ol> <li>I will next need to repeat the operation with <code>df3</code>, etc., appending to other rows in <code>df1</code>. I wouldn't want to remove duplicates. 
E.g., with <code>df3 = pd.DataFrame({'Fecha': ['2001-01-02', '2001-01-05', '2001-01-08'], 'Vals': [0.0, 1.0, 2.0]})</code>.</li> <li>The uniform skipping in <code>df2</code> is a fabricated case.</li> <li>After appending is complete, I would like to create one column for the averages of each row, and another column for the standard deviation.</li> <li>Code to create my <code>df</code>s</li> </ol> <pre><code>import datetime import pandas as pd yy = 2001 date_list = ['{:4d}-{:02d}-{:02d}'.format(yy, mm, dd) for mm in range(1, 2) for dd in range(1, 10)] fechas1 = [datetime.datetime.strptime(date_base, '%Y-%m-%d') for date_base in date_list] nf1 = len(fechas1) vals1 = [[] for _ in range(nf1)] dic1 = { 'Fecha': fechas1, 'Vals': vals1 } df1 = pd.DataFrame(dic1) fechas2 = [datetime.datetime.strptime(date_list[idx], '%Y-%m-%d') for idx in range(0, nf1, 2)] nf2 = len(fechas2) vals2 = [float(idx) for idx in range(nf2)] dic2 = { 'Fecha': fechas2, 'Vals': vals2 } df2 = pd.DataFrame(dic2) </code></pre> <p><strong>Related</strong>:</p> <ol> <li><a href="https://stackoverflow.com/questions/60583726/python-intersection-of-2-dataframes-with-list-type-columns">Python intersection of 2 dataframes with list-type columns</a></li> <li><a href="https://stackoverflow.com/questions/70341021/how-to-append-list-of-values-to-a-column-of-list-in-dataframe">How to append list of values to a column of list in dataframe</a></li> <li><a href="https://stackoverflow.com/questions/64734085/python-appending-a-list-to-dataframe-column">Python appending a list to dataframe column</a></li> <li><a href="https://stackoverflow.com/questions/63876119/pandas-dataframe-append-to-column-containing-list">Pandas dataframe append to column containing list</a></li> <li><a href="https://stackoverflow.com/questions/37134622/define-a-column-type-as-list-in-pandas">Define a column type as &#39;list&#39; in Pandas</a></li> <li><a 
href="https://towardsdatascience.com/dealing-with-list-values-in-pandas-dataframes-a177e534f173" rel="nofollow noreferrer">https://towardsdatascience.com/dealing-with-list-values-in-pandas-dataframes-a177e534f173</a></li> </ol>
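One way to avoid the row-by-row loop (a sketch on a smaller, hypothetical frame): aggregate each incoming frame into per-date lists once, then extend `df1`'s lists in a single pass. The same line can be repeated for `df3`, and means/standard deviations taken afterwards with `df1['Vals'].apply(...)`:

```python
import pandas as pd

df1 = pd.DataFrame({"Fecha": ["2001-01-01", "2001-01-02", "2001-01-03"],
                    "Vals": [[], [], []]})
df2 = pd.DataFrame({"Fecha": ["2001-01-01", "2001-01-03"],
                    "Vals": [0.0, 1.0]})

# Collect df2's values per date once, then extend df1's lists in one pass.
extra = df2.groupby("Fecha")["Vals"].agg(list)
df1["Vals"] = [vals + extra.get(fecha, [])
               for vals, fecha in zip(df1["Vals"], df1["Fecha"])]
print(df1["Vals"].tolist())  # [[0.0], [], [1.0]]
```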
<python><pandas><list><dataframe>
2023-01-01 14:52:28
2
15,820
sancho.s ReinstateMonicaCellio
74,975,284
1,800,459
Regular expression to find all the image urls in a string
<p>I am trying to construct a regular expression that finds all image URLs in a string. An image URL can be either an absolute or a relative path.</p> <p>All these should be valid matches:</p> <pre><code> ../example/test.png https://www.test.com/abc.jpg images/test.webp </code></pre> <p>For example: if we define</p> <pre><code>inputString=&quot;img src=https://www.test.com/abc.jpg background:../example/test.png &lt;div&gt; images/test.webp image.pnghello&quot; </code></pre> <p>then we should find these 3 matches:</p> <pre><code>https://www.test.com/abc.jpg ../example/test.png images/test.webp </code></pre> <p>This is the regex I am currently using (edited with the help of the answer here)</p> <pre><code>pat = re.compile(r'(?i)https?[^&lt;&gt;\s\'\&quot;=]+(?:jpg|png|webp)\b|[^:&lt;&gt;\s\'\&quot;=]+(?:jpg|png|webp)\b') </code></pre> <p>It works well, but it also finds things that are not valid URLs. For example, it currently finds the whole string</p> <pre><code>https://ads.world/profile/person/abc/url(/images/profile/people/488379.jpg </code></pre> <p>as a match, instead of finding only the last part of the string (/images/profile/people/488379.jpg)</p> <p>Here is the example that I am testing: <a href="https://regex101.com/r/mcpmMM/1" rel="nofollow noreferrer">https://regex101.com/r/mcpmMM/1</a></p>
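One possible tweak (a sketch, not a guaranteed fix for every input on the regex101 page): adding `(` and `)` to both negated character classes stops a match from running through CSS fragments like `url(...)`, and requiring a literal dot before the extension avoids matching words that merely end in `jpg`:

```python
import re

# "(" and ")" are now excluded, and a literal "." must precede the extension.
pat = re.compile(r'(?i)https?[^<>\s\'"=()]+\.(?:jpg|png|webp)\b'
                 r'|[^:<>\s\'"=()]+\.(?:jpg|png|webp)\b')

s = ('img src=https://www.test.com/abc.jpg background:../example/test.png '
     '<div> images/test.webp url(/images/profile/people/488379.jpg')
print(pat.findall(s))
# ['https://www.test.com/abc.jpg', '../example/test.png',
#  'images/test.webp', '/images/profile/people/488379.jpg']
```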
<python><regex>
2023-01-01 14:50:14
2
1,134
AJ222
74,975,275
13,505,957
Web scraping Google Maps with Selenium uses too much data
<p>I am scraping travel times from Google Maps. The below code scrapes travel times between 1 million random points in Tehran, which works perfectly fine. I also use multiprocessing to get travel times simultaneously. The results are fully replicable, feel free to run the code in a terminal (but not in an interactive session like Spyder as the multiprocessing won't work). This is how what I am scraping looks like on google maps (in this case 22 min is the travel time):</p> <p><a href="https://i.sstatic.net/Gq88G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gq88G.png" alt="enter image description here" /></a></p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from multiprocessing import Process, Pipe, Pool, Value import time from multiprocessing.pool import ThreadPool import threading import gc threadLocal = threading.local() class Driver: def __init__(self): options = webdriver.ChromeOptions() options.add_argument(&quot;--headless&quot;) options.add_experimental_option('excludeSwitches', ['enable-logging']) self.driver = webdriver.Chrome(options=options) def __del__(self): self.driver.quit() # clean up driver when we are cleaned up print('The driver has been &quot;quitted&quot;.') @classmethod def create_driver(cls): the_driver = getattr(threadLocal, 'the_driver', None) if the_driver is None: print('Creating new driver.') the_driver = cls() threadLocal.the_driver = the_driver driver = the_driver.driver the_driver = None return driver success = Value('i', 0) error = Value('i', 0) def f(x): global success global error with success.get_lock(): success.value += 1 print(&quot;Number of errors&quot;, success.value) with error.get_lock(): error.value += 1 print(&quot;counter.value:&quot;, 
error.value) def scraper(url): &quot;&quot;&quot; This now scrapes a single URL. &quot;&quot;&quot; global success global error try: driver = Driver.create_driver() driver.get(url) time.sleep(1) trip_times = driver.find_element(By.XPATH, &quot;//div[contains(@aria-labelledby,'section-directions-trip-title')]//span[@jstcache='198']&quot;) print(&quot;got data from: &quot;, url) print(trip_times.text) with success.get_lock(): success.value += 1 print(&quot;Number of successful scrapes: &quot;, success.value) except Exception as e: # print(f&quot;Error: {e}&quot;) with error.get_lock(): error.value += 1 print(&quot;Number of errors&quot;, error.value) import random min_x = 35.617487 max_x = 35.783375 min_y = 51.132557 max_y = 51.492329 urls = [] for i in range(1000000): x = random.uniform(min_x, max_x) y = random.uniform(min_y, max_y) url = f'https://www.google.com/maps/dir/{x},{y}/35.8069533,51.4261312/@35.700769,51.5571612,21z' urls.append(url) number_of_processes = min(2, len(urls)) start_time = time.time() with ThreadPool(processes=number_of_processes) as pool: # result_array = pool.map(scraper, urls) result_array = pool.map(scraper, urls) # Must ensure drivers are quitted before threads are destroyed: del threadLocal # This should ensure that the __del__ method is run on class Driver: gc.collect() pool.close() pool.join() print(result_array) print( &quot;total time: &quot;, round((time.time()-start_time)/60, 1), &quot;number of urls: &quot;, len(urls)) </code></pre> <p>But after having it run for only 24 hours, it has already used around <strong>80 GB</strong> of data! Is there a way to make this more efficient in terms of data usage?</p> <p>I suspect this excessive data usage is because Selenium has to load each URL completely every time before it can access the HTML and get the target node. Can I change anything in my code to prevent that and still get the travel time?</p> <p>*Please note that using the Google Maps API is not an option.
The request limit is too small for my application, and the service is not provided in my country.</p>
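Since most of Google Maps' traffic is map-tile imagery, one thing worth trying (a sketch — these are standard Chrome/Chromium switches, but whether the directions panel still renders with images blocked is something you would have to verify on your end):

```python
# Ask Chrome not to download images at all; the value 2 means "block".
prefs = {"profile.managed_default_content_settings.images": 2}

# Hypothetical wiring into the existing Driver.__init__:
# options = webdriver.ChromeOptions()
# options.add_experimental_option("prefs", prefs)
# options.add_argument("--blink-settings=imagesEnabled=false")
print(prefs["profile.managed_default_content_settings.images"])  # 2
```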
<python><html><selenium><webdriver>
2023-01-01 14:48:44
1
1,107
ali bakhtiari
74,975,192
14,670,370
Filtering empty elements in a nested list in pandas dataframe
<p>I have a list inside a pandas dataframe and I want to filter it. For example, I have a dataframe like this:</p> <pre><code>{ &quot;examples&quot;: [ { &quot;website&quot;: &quot;info&quot;, &quot;df&quot;: [ { &quot;Question&quot;: &quot;What?&quot;, &quot;Answers&quot;: [] }, { &quot;Question&quot;: &quot;how?&quot;, &quot;Answers&quot;: [] }, { &quot;Question&quot;: &quot;Why?&quot;, &quot;Answers&quot;: [] } ], &quot;whitelisted_url&quot;: true, &quot;exResponse&quot;: { &quot;pb_sentence&quot;: &quot;&quot;, &quot;solution_sentence&quot;: &quot;&quot;, &quot;why_sentence&quot;: &quot;&quot; } }, { &quot;website&quot;: &quot;info2&quot;, &quot;df&quot;: [ { &quot;Question&quot;: &quot;What?&quot;, &quot;Answers&quot;: [&quot;example answer1&quot;] }, { &quot;Question&quot;: &quot;how?&quot;, &quot;Answers&quot;: [&quot;example answer1&quot;] }, { &quot;Question&quot;: &quot;Why?&quot;, &quot;Answers&quot;: [] } ], &quot;whitelisted_url&quot;: true, &quot;exResponse&quot;: { &quot;pb_sentence&quot;: &quot;&quot;, } }, ] } </code></pre> <p>my filter function:</p> <pre><code>def filter(data, name): resp = pd.concat([pd.DataFrame(data), pd.json_normalize(data['examples'])], axis=1) resp = pd.concat([pd.DataFrame(resp), pd.json_normalize(resp['df'])], axis=1) resp['exResponse.pb_sentence'].replace( '', np.nan, inplace=True) resp.dropna( subset=['exResponse.pb_sentence'], inplace=True) resp.drop(resp[resp['df.Answers'].apply(len) == 0].index, inplace=True) </code></pre> <p>I want to remove the empty 'answers' elements in this dataframe. I have already filtered the empty 'problem_summary' elements using the following code:</p> <pre><code> resp['exResponse.pb_sentence'].replace( '', np.nan, inplace=True) resp.dropna( subset=['exResponse.pb_sentence'], inplace=True) </code></pre> <p>How can I do the same for the 'answers' elements?</p> <p>I don't actually expect a specific output. 
The following part of my code throws the error &quot;AttributeError: 'list' object has no attribute 'keys'&quot;. I think this is due to the empty answers arrays, so I want to remove these parts.</p> <pre><code> resp.rename( columns={0: 'Challenge', 1: 'Solution', 2: 'Importance'}, inplace=True) # challenge deserializing resp = pd.concat([pd.DataFrame(df_resp), pd.json_normalize(resp['Challenge'])], axis=1) resp = pd.concat([pd.DataFrame(resp), pd.json_normalize(resp['Answers'])], axis=1) </code></pre> <p>error line:</p> <pre><code> 29 resp = pd.concat([pd.DataFrame(resp), ---&gt; 30 pd.json_normalize(resp['Answers'])], 31 axis=1) </code></pre>
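For dropping the empty `Answers` lists specifically, `.str.len()` also works element-wise on list values, so the rows can be filtered without `apply` (a sketch on a small, hypothetical frame):

```python
import pandas as pd

df = pd.DataFrame({"Question": ["What?", "how?", "Why?"],
                   "Answers": [["a1"], [], ["a2", "a3"]]})

# .str.len() returns the length of each list, so no apply() is needed.
non_empty = df[df["Answers"].str.len() > 0]
print(non_empty["Question"].tolist())  # ['What?', 'Why?']
```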
<python><pandas><dataframe><filtering><nested-lists>
2023-01-01 14:32:36
1
354
Serkan Gün
74,975,100
18,002,913
How to get value inside of h3 tag with beautifulsoup in python?
<p>I'm trying to get the value inside of an <code>h3</code> <code>tag</code> but there is a problem that I can't figure out.</p> <p><strong>This is the data which I want to get. I want to get the</strong> <code>Zahlen</code> word between the <code>span</code> <code>classes</code> inside the <code>h3</code> <code>tag</code>, but I couldn't figure out how to do this in Python with BeautifulSoup.</p> <pre><code>&lt;h3&gt; &lt;span class=&quot;ez-toc-section&quot; id=&quot;Zahlen&quot;&gt;&lt;/span&gt; Zahlen &lt;span class=&quot;ez-toc-section-end&quot;&gt;&lt;/span&gt; &lt;/h3&gt; </code></pre> <p>I'm trying to create a dictionary dataset. This dictionary dataset will be a German dictionary and words will be separated into categories. As I mentioned above, Zahlen is an identifier and this identifier will have words. There will be other words for other <code>h3 tags</code>. For example, this is the code I wrote.</p> <pre><code>result = requests.get(url) soup = BeautifulSoup(result.text, 'html.parser') div = soup.find('div', class_='entry-content clearfix') for word in div.find_all('table', class_='table table-bordered'): for word1 in word.find_all('tbody'): rows = word1.find_all('tr') for row in rows: each_word = row.find_all('td') case = { &quot;index&quot;: each_word[0].string, &quot;word&quot;: each_word[1].string, &quot;meaning&quot;: each_word[2].string } list.append(case) with open('DictionaryWords.json', 'w', encoding='utf-8') as f: json.dump(list, f, ensure_ascii=False, indent=4) </code></pre> <p><strong>and an example result:</strong></p> <pre><code>[ { &quot;index&quot;: &quot;1.&quot;, &quot;word&quot;: &quot;Hallo&quot;, &quot;meaning&quot;: &quot;Merhaba&quot; }, { &quot;index&quot;: &quot;2.&quot;, &quot;word&quot;: &quot;Herzlich willkommen&quot;, &quot;meaning&quot;: &quot;Hoş geldiniz&quot; }, { &quot;index&quot;: &quot;3.&quot;, &quot;word&quot;: &quot;Auf Wiedersehen&quot;, &quot;meaning&quot;: &quot;Hoşça kalın&quot; }, { &quot;index&quot;:
&quot;4.&quot;, &quot;word&quot;: &quot;Guten Morgen&quot;, &quot;meaning&quot;: &quot;Günaydın&quot; }, { &quot;index&quot;: &quot;5.&quot;, &quot;word&quot;: &quot;Haben Sie einen guten Tag&quot;, &quot;meaning&quot;: &quot;İyi günler&quot; } </code></pre> <p><strong>the data that I want is this:</strong></p> <pre><code> [ { &quot;zahlen&quot;:[ { &quot;index&quot;: &quot;1.&quot;, &quot;word&quot;: &quot;Hallo&quot;, &quot;meaning&quot;: &quot;Merhaba&quot; }, { &quot;index&quot;: &quot;2.&quot;, &quot;word&quot;: &quot;Herzlich willkommen&quot;, &quot;meaning&quot;: &quot;Hoş geldiniz&quot; }, { &quot;index&quot;: &quot;3.&quot;, &quot;word&quot;: &quot;Auf Wiedersehen&quot;, &quot;meaning&quot;: &quot;Hoşça kalın&quot; } ] } ] </code></pre>
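A minimal sketch of extracting the section name (assuming BeautifulSoup is available; the HTML is the fragment from the question). Since the two `span` elements are empty, the stripped text of the `h3` is already the word; alternatively, the first span's `id` attribute carries the same name. To build the grouped dataset, you could then walk the `div`'s children in order, switching the current dictionary key whenever an `h3` is encountered:

```python
from bs4 import BeautifulSoup

html = ('<h3><span class="ez-toc-section" id="Zahlen"></span> Zahlen '
        '<span class="ez-toc-section-end"></span></h3>')
soup = BeautifulSoup(html, "html.parser")
h3 = soup.find("h3")

# The spans are empty, so the stripped h3 text is just the section word.
print(h3.get_text(strip=True))                         # Zahlen
print(h3.find("span", class_="ez-toc-section")["id"])  # Zahlen
```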
<python><beautifulsoup>
2023-01-01 14:12:46
1
1,298
NewPartizal
74,975,098
6,460
Does type hinting class members shadow previously defined or built-in variables?
<p>With type-hinting, defining a class in Python goes from</p> <pre><code>class SomeClass: def __init__(self): self.id = 5 </code></pre> <p>to something like this</p> <pre><code>class SomeClass: id: int def __init__(self) -&gt; None: self.id = 5 </code></pre> <p>However, a linter like <code>ruff</code> has an issue with the <code>id: int</code> line, which apparently would shadow the built-in <code>id</code>. That feels like a surprising behaviour to me, since in previous, type-hint-less times, <code>id</code> would have always been used as <code>self.id</code> with no shadowing whatsoever.</p> <p>So I would like to know: is there really shadowing occurring and if so, to which extent, i.e. what is the scope of this shadowing?</p>
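As far as I can tell, no runtime shadowing occurs at all: a bare class-level annotation binds nothing (it is only recorded in `__annotations__`), so `id` inside methods still resolves to the builtin. The lint warning (presumably flake8-builtins' A003 rule, which ruff implements) is about readability of the class body, not about actual scoping. A small check:

```python
class SomeClass:
    id: int  # annotation only: no class attribute is created

    def __init__(self) -> None:
        self.id = 5
        self.token = id(self)  # a bare `id` is still the builtin here

obj = SomeClass()
print(obj.id)                      # 5
print("id" in SomeClass.__dict__)  # False
print(SomeClass.__annotations__)   # {'id': <class 'int'>}
```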
<python><python-typing><shadowing>
2023-01-01 14:12:34
1
8,833
Nikolai Prokoschenko
74,975,009
2,474,025
plotly interactive tooltip / hover text / popup
<p>Tooltips of a figure are only displayed while hovering over the data point: <a href="https://plotly.com/python/hover-text-and-formatting" rel="nofollow noreferrer">https://plotly.com/python/hover-text-and-formatting</a></p> <p>I'd like to have an easy way to customize the duration the tooltip is displayed after hovering over it or possibly display the tooltip permanently when clicking the data point.</p> <p>This will allow me to include clickable links in the tooltip.</p> <p>For data tables you can customize the tooltip display duration, but I don't see a similar option for figures: <a href="https://dash.plotly.com/datatable/tooltips" rel="nofollow noreferrer">https://dash.plotly.com/datatable/tooltips</a></p> <p>I think you can add your own tooltips via the event system or maybe change the css style of the resulting HTML somehow, but that seems to be overkill. I'd still accept an answer with a working example.</p>
<python><plotly><tooltip>
2023-01-01 13:54:08
2
1,033
phobic
74,974,901
10,016,858
How to understand empty second parameter to pandas DataFrame.loc
<p>Hi I am looking for help to understand the behaviour caused by not having/having an empty second parameter to pandas <code>DataFrame.loc</code> method</p> <p>Consider the following:</p> <pre><code>df=pd.DataFrame(index=pd.MultiIndex.from_tuples([('a', 1, 'x'),('a', 2, 'y'),('b', 1, 'x')]), data={'col_1':[1, 2, 3]}) df </code></pre> <p>Output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th></th> <th></th> <th>col_1</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>1</td> <td>x</td> <td>1</td> </tr> <tr> <td></td> <td>2</td> <td>y</td> <td>2</td> </tr> <tr> <td>b</td> <td>1</td> <td>x</td> <td>3</td> </tr> </tbody> </table> </div> <p>If I access rows with <code>df.loc[(slice(None), 1, slice(None))]</code> the returned result has a different index:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th></th> <th>col_1</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>x</td> <td>1</td> </tr> <tr> <td>b</td> <td>x</td> <td>3</td> </tr> </tbody> </table> </div> <p>The index columns are preserved where the tuple values are <code>slice(None)</code> and the index column is dropped where the value is explcitly specified.</p> <p>However, if I put a comma after the restricting tuple, the index is preserved: <code>df.loc[(slice(None), 1, slice(None)),]</code> yields:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th></th> <th></th> <th>col_1</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>1</td> <td>x</td> <td>1</td> </tr> <tr> <td>b</td> <td>1</td> <td>x</td> <td>3</td> </tr> </tbody> </table> </div> <p>I'd be grateful if anyone could explain what is the difference in the inputs <code>(slice(None), 1, slice(None))</code>and <code>(slice(None), 1, slice(None)),</code> and why does this cause a difference in the outputs</p>
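The short version, as I understand it: a bare tuple key is taken as a (possibly partial) MultiIndex *label* lookup, and pandas drops every level that was pinned to a scalar; with the trailing comma you are really passing the 1-tuple `(key,)`, i.e. a `(rows,)` axis indexer equivalent to `df.loc[key, :]`, and that code path preserves all levels. Reproducing the question's frame:

```python
import pandas as pd

df = pd.DataFrame(
    {"col_1": [1, 2, 3]},
    index=pd.MultiIndex.from_tuples([("a", 1, "x"), ("a", 2, "y"), ("b", 1, "x")]),
)
key = (slice(None), 1, slice(None))

# Tuple key alone: label lookup, the scalar level is dropped -> 2 levels.
print(df.loc[key].index.nlevels)   # 2

# Trailing comma: df.loc[(key,)] == df.loc[key, :] -> all 3 levels kept.
print(df.loc[key,].index.nlevels)  # 3
```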
<python><pandas><multi-index>
2023-01-01 13:33:09
1
1,241
JohnnieL
74,974,733
4,451,521
Pytesting a script that is not a library
<p>I have the following directory structure</p> <pre><code>| |---test | |----test1.py | |----test2.py | |---alibrary | |--thelib.py | | |--main.py </code></pre> <p>In <code>test1.py</code> I tested the functions in <code>thelib.py</code>. To do this, the script started like this</p> <pre><code>import alibrary.thelib as thelib #and in some other part thelib.some_function() </code></pre> <p>It worked well. Now I have to write <code>test2.py</code> in which I have to test some functions that are written in <code>main.py</code></p> <p>Since <code>main.py</code> is a level above the testing code, what is the correct way to include <code>main.py</code> so that I can test its functions? (Ideally without altering the <code>sys path</code>)</p>
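One conventional fix (hedged — this relies on pytest's default "rootdir"/prepend import mode): drop an empty `conftest.py` next to `main.py`. Pytest inserts every directory containing a `conftest.py` at the front of `sys.path`, so `test2.py` can simply `import main` without you editing `sys.path` yourself. The sketch below simulates that mechanism with a throwaway project:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()  # stand-in for the project root
with open(os.path.join(root, "main.py"), "w") as f:
    f.write("def greet():\n    return 'hello'\n")
open(os.path.join(root, "conftest.py"), "w").close()  # what pytest keys on

sys.path.insert(0, root)  # pytest does this for conftest.py directories
main = importlib.import_module("main")
print(main.greet())  # hello
```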
<python><pytest>
2023-01-01 12:58:52
0
10,576
KansaiRobot
74,974,685
922,130
How to perform single- and complete-linkage clustering based on selected pairwise compairsons?
<p>Let's say I have 8 objects.</p> <pre class="lang-py prettyprint-override"><code>all_objects = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'] </code></pre> <p>I performed all pairwise comparisons (8 x 7) using a custom method. As a result, I got pairs that meet a certain similarity criterion.</p> <pre class="lang-py prettyprint-override"><code>pairs = [ ('A', 'B'), ('B', 'A'), ('B', 'D'), ('D', 'B'), ('D', 'C'), ('C', 'D'), ('E', 'F'), ('F', 'E'), ('F', 'G'), ('G', 'F'), ('E', 'G'), ('G', 'E'), ('H', 'G') ] </code></pre> <p>I want to transform the above pairs into clusters. Also, the edges connecting objects must be symmetric (e.g., <code>('A', 'B')</code> because there is also <code>('B', 'A')</code> but not <code>('H', 'G')</code>).</p> <p>Specifically, I have two questions:</p> <ol> <li>What is the code to perform single- and complete-linkage clustering based on the above pairs? Ideally, I would like to get clusters and name of objects in each cluster.</li> <li>Are there any alternative methods of clustering this kind of data?</li> </ol>
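A sketch with plain union-find (no extra dependencies): keeping only the symmetric edges and then merging connected components is exactly single-linkage clustering at your fixed similarity threshold. Complete-linkage is stricter — every pair inside a cluster must be an edge — which amounts to finding cliques (e.g. `networkx.find_cliques`); alternatively, both linkages are available via `scipy.cluster.hierarchy.linkage` on a 0/1 pairwise distance matrix.

```python
from collections import defaultdict

all_objects = ["A", "B", "C", "D", "E", "F", "G", "H"]
pairs = [("A","B"),("B","A"),("B","D"),("D","B"),("D","C"),("C","D"),
         ("E","F"),("F","E"),("F","G"),("G","F"),("E","G"),("G","E"),("H","G")]

# Keep only symmetric edges: ('H', 'G') is dropped since ('G', 'H') is absent.
edges = {(a, b) for a, b in pairs if (b, a) in pairs}

# Union-find: merge the two endpoints of every edge into one component.
parent = {o: o for o in all_objects}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for a, b in edges:
    parent[find(a)] = find(b)

clusters = defaultdict(list)
for o in all_objects:
    clusters[find(o)].append(o)

result = sorted(sorted(c) for c in clusters.values())
print(result)  # [['A', 'B', 'C', 'D'], ['E', 'F', 'G'], ['H']]
```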
<python><scipy><statistics><cluster-analysis><hierarchical-clustering>
2023-01-01 12:47:00
1
909
sherlock85
74,974,409
17,696,880
How to extract specific information with capture groups from an input string, rearrange and replace back inside this string using re.sub() function?
<pre class="lang-py prettyprint-override"><code>import re input_text = &quot;estoy segura que empezaria desde las 15:00 pm del 2002_-_11_-_01 hasta las 16:00 hs pm&quot; #example 1 input_text = &quot;estoy segura que empezara desde las 15:00 pm h.s. del 2002_-_11_-_(01_--_15) hasta las 16:10 pm hs, aunque no se cuando podria acabar&quot; #example 2 input_text = &quot;probablemente dure desde las 01:00 am hasta las 16:00 pm del 2002_-_11_-_01 pero seguramente no mucho mas que eso&quot; #example 3 input_text = &quot;desde las 11:00 am hasta las 16:00 pm del 2002_-_11_-_(01_--_17) o quizas desde las 15:00 pm hs hasta las 16:00 pm del 2003_-_11_-_(01_--_17)&quot; #example 4 def standardize_time_interval_associated_to_date(input_text, identify_only_4_digit_years = True): if (identify_only_4_digit_years == True): date_format_capture_01 = r&quot;(\d{4})_-_(\d{2})_-_(\d{2})&quot; date_format_capture_02 = r&quot;(\d{4})_-_(\d{2})_-_\((\d{1,2})_--_(\d{1,2})\)&quot; elif (identify_only_4_digit_years == False): date_format_capture_01 = r&quot;(\d*)_-_(\d{2})_-_(\d{2})&quot; date_format_capture_02 = r&quot;(\d*)_-_(\d{2})_-_\((\d{1,2})_--_(\d{1,2})\)&quot; time_format_capture = r&quot;(\d{1,2})[\s|:](\d{0,2})\s*(?:h.s.|h.s|hs|)\s*(?:(am)|(pm))\s*(?:h.s.|h.s|hs|)&quot; #replace for the example 1 input_text = re.sub(r&quot;(?:desde|a[\s|]*partir)[\s|]*(?:de|)[\s|]*(?:las|la|)[\s|]*&quot; + time_format_capture + r&quot;[\s|]*(del|de[\s|]*el|de )[\s|]*(?:&quot; + date_format_capture_02 + r&quot;|&quot; + date_format_capture_01 + r&quot;)[\s|]*(?:hasta|al)[\s|]*(?:las|la|)[\s|]*&quot; + time_format_capture, print(lambda m: print(m[1]) ) , input_text) #replace for the example 2 input_text = re.sub(r&quot;(?:desde|a[\s|]*partir)[\s|]*(?:de|)[\s|]*(?:las|la|)[\s|]*&quot; + time_format_capture + r&quot;[\s|]*(?:hasta|al)[\s|]*(?:las|la|)[\s|]*&quot; + time_format_capture + r&quot;[\s|]*(del|de[\s|]*el|de )[\s|]*(?:&quot; + date_format_capture_02 + r&quot;|&quot; + date_format_capture_01 
+ r&quot;)&quot;, print(lambda m: print(m[1])) , input_text) return input_text #Here I make the call to the function indicating the input string as the first parameter, and as the second I pass an indication about how it should identify the date information input_text = standardize_time_interval_associated_to_date(input_text, True) print(repr(input_text)) # --&gt; output </code></pre> <p>What should I put in the <strong>second parameter</strong> of the <code>re.sub()</code> function instead of <code>print(lambda m: print(m[1]))</code> so that the following string replacements are possible?</p> <p>Replacements are expected to comply with this <strong>substitution (generic) structure</strong>:</p> <p><code>(YYYY_-_MM_-_DD hh:mm pm or am_--_hh:mm am or pm)</code></p> <p>Bearing in mind that the goal of the program is to search and rearrange information in the main string, the <strong>output</strong> that I need to get in each of the input example strings:</p> <pre><code>&quot;estoy segura que empezaria (2002_-_11_-_01 (15:00 pm_--_16:00 pm))&quot; #for example 1 &quot;estoy segura que empezara (2002_-_11_-_(01_--_15) (15:00 pm_--_16:10 pm)), aunque no se cuando podria acabar&quot; #for example 2 &quot;probablemente dure (2002_-_11_-_01 (01:00 am_--_16:00 pm)) pero seguramente no mucho mas que eso&quot; #for example 3 &quot;(2002_-_11_-_(01_--_17) (11:00 am_--_16:00 pm)) o quizas (2003_-_11_-_(01_--_17) (15:00 pm_--_16:00 pm))&quot; #for example 4 </code></pre>
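The mechanism needed for the second parameter is a *callable*, not a `print(...)` expression: `re.sub` calls it with the match object, and its return value becomes the replacement. A deliberately simplified sketch (it covers only example 1's word order — time, then date, then time; the other examples would need a second pattern with the date captured after both times, as the code above already attempts):

```python
import re

time_pat = r"(\d{1,2}:\d{2})\s*(?:hs\s*)?(am|pm)(?:\s*\bhs\b)?"
date_pat = r"(\d{4}_-_\d{2}_-_\d{2})"

pattern = (r"desde\s+las\s+" + time_pat +
           r"\s+del\s+" + date_pat +
           r"\s+hasta\s+las\s+" + time_pat)

def rearrange(m):
    # Groups: 1-2 are the first time, 3 is the date, 4-5 the second time.
    return f"({m[3]} ({m[1]} {m[2]}_--_{m[4]} {m[5]}))"

text = "estoy segura que empezaria desde las 15:00 pm del 2002_-_11_-_01 hasta las 16:00 hs pm"
result = re.sub(pattern, rearrange, text)
print(result)
# estoy segura que empezaria (2002_-_11_-_01 (15:00 pm_--_16:00 pm))
```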
<python><python-3.x><regex><replace><regex-group>
2023-01-01 11:45:12
1
875
Matt095
74,974,132
11,555,352
Efficient way to restructure pandas dataframe from row to resampled column structure
<p>I have a pandas dataframe structured as follows:</p> <pre><code>TimeStamp
2022-12-30 10:31:58.483700+00:00  1  FixType        4            4.000000e+00
2022-12-30 10:31:58.483700+00:00  1  Satellites     11           1.100000e+01
2022-12-30 10:31:58.484150+00:00  2  TimeConfirmed  0            0.000000e+00
2022-12-30 10:31:58.484150+00:00  2  Epoch          63797521999  1.641638e+09
2022-12-30 10:31:58.484150+00:00  2  TimeValid     1            1.000000e+00
...                                  ...            ...          ...
2022-12-30 10:54:32.714050+00:00  9  AngularRateZ   1020         -1.000000e+00
2022-12-30 10:54:32.714050+00:00  9  AccelerationY  513          1.250000e-01
2022-12-30 10:54:32.714050+00:00  9  AccelerationZ  594          1.025000e+01
2022-12-30 10:54:32.714050+00:00  9  AngularRateX   1025         2.500000e-01
2022-12-30 10:54:32.714050+00:00  9  ImuValid       1            1.000000e+00

[973528 rows x 4 columns]
</code></pre> <p>I need to get it into the following structure, while also resampling it to a specific frequency (e.g. <code>1S</code>):</p> <pre><code>                           FixType  Satellites  ...  AngularRateZ  ImuValid
TimeStamp                                       ...
2022-12-30 10:31:59+00:00      4.0        11.0  ...           NaN       NaN
2022-12-30 10:32:00+00:00      4.0        11.0  ...         -1.00       1.0
2022-12-30 10:32:01+00:00      4.0        12.0  ...         -1.00       1.0
2022-12-30 10:32:02+00:00      4.0        12.0  ...         -1.00       1.0
2022-12-30 10:32:03+00:00      4.0        12.0  ...         -1.00       1.0
...                            ...         ...  ...           ...       ...
2022-12-30 10:54:28+00:00      4.0        13.0  ...         -1.00       1.0
2022-12-30 10:54:29+00:00      4.0        14.0  ...         -1.00       1.0
2022-12-30 10:54:30+00:00      4.0        14.0  ...         -0.75       1.0
2022-12-30 10:54:31+00:00      4.0        14.0  ...         -1.00       1.0
2022-12-30 10:54:32+00:00      4.0        14.0  ...         -1.00       1.0

[1354 rows x 39 columns]
</code></pre> <p>Currently I achieve this with the code below:</p> <pre><code>def restructure_data(df_phys, res):
    import pandas as pd

    df_phys_join = pd.DataFrame({&quot;TimeStamp&quot;: []})
    if not df_phys.empty:
        for message, df_phys_message in df_phys.groupby(&quot;CAN ID&quot;):
            for signal, data in df_phys_message.groupby(&quot;Signal&quot;):
                col_name = signal
                df_phys_join = pd.merge_ordered(
                    df_phys_join,
                    data[&quot;Physical Value&quot;].rename(col_name).resample(res).ffill().dropna(),
                    on=&quot;TimeStamp&quot;,
                    fill_method=&quot;none&quot;,
                ).set_index(&quot;TimeStamp&quot;)
    return df_phys_join
</code></pre> <p>This works, but it seems inefficient. I wonder if there is a smarter and perhaps more pythonic way to achieve a similar result?</p>
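<p>For context, I suspect the per-signal merge loop could be replaced by pivoting the long-format frame once and resampling all signals together. A minimal sketch with made-up data (column names taken from the frame above, everything else assumed):</p>

```python
import pandas as pd

# Tiny long-format stand-in: one row per (timestamp, signal) reading
df = pd.DataFrame({
    "TimeStamp": pd.to_datetime(
        ["2022-12-30 10:31:58.4837", "2022-12-30 10:31:58.4837",
         "2022-12-30 10:31:59.1000", "2022-12-30 10:31:59.1000"]),
    "Signal": ["FixType", "Satellites", "FixType", "Satellites"],
    "Physical Value": [4.0, 11.0, 4.0, 12.0],
})

# Pivot to one column per signal, then resample every column in one pass
wide = (
    df.pivot_table(index="TimeStamp", columns="Signal",
                   values="Physical Value", aggfunc="last")
      .resample("1s").last().ffill()
)
print(wide)
```

This replaces N merges (one per signal) with a single reshape plus a single resample, at the cost of forward-filling all columns with the same policy.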
<python><pandas><dataframe>
2023-01-01 10:33:15
1
1,611
mfcss