Dataset columns (name, dtype, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars
76,830,965
21,113,865
Is there a way to explicitly declare an argument as an array in Python?
<p>Is there a way to explicitly state that an argument to a function in Python should be an array of some custom class?</p> <p>i.e.</p> <p>I can write the following function to enforce the type of the args</p> <pre><code>def foo_bar(foo: str, bar: int): print(&quot;foo bar stuff&quot;) </code></pre> <p>But what if I want an argument to be an array of a custom class?</p> <pre><code>class foobar(): def __init__(self): ... def foo_bar(foo: str, bar: int, array_of_foobar: [foobar]): print(&quot;foo bar stuff&quot;) </code></pre> <p>Is there a syntax to ensure the variable &quot;array_of_foobar&quot; is an array of the proper type?</p> <p>To take it a step further, what if I have classes derived from class foobar() using AbstractBaseClass? Can I ensure the array is of type foobar, or derived from type foobar?</p>
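A possible sketch (not from the question), using standard `typing` annotations. Note that Python does not enforce annotations at runtime, so an explicit `isinstance` check is added; `isinstance` also accepts subclasses, which covers the derived-class/ABC case:

```python
from typing import List


class FooBar:
    pass


class DerivedFooBar(FooBar):  # stand-in for a subclass built via abc in the real code
    pass


def foo_bar(foo: str, bar: int, array_of_foobar: List[FooBar]) -> None:
    # Annotations are hints only; enforce the element type explicitly.
    # isinstance() accepts FooBar and anything derived from it.
    if not all(isinstance(item, FooBar) for item in array_of_foobar):
        raise TypeError("array_of_foobar must contain FooBar instances")
    print("foo bar stuff")


foo_bar("a", 1, [FooBar(), DerivedFooBar()])
```

On Python 3.9+ the builtin `list[FooBar]` works directly in the annotation, and a static checker such as mypy will flag wrong element types without any runtime check.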
<python><python-3.x><parameter-passing><abc>
2023-08-03 19:03:50
0
319
user21113865
76,830,940
10,853,071
Json/weird column transformation
<p>I am getting data incoming from a Mongo database. The table contains several columns, and some of those columns come in a very strange format.</p> <p>Example of one line of the column/series</p> <pre><code>'[{idEvento.$oid=63ffaec3cdc01e6352729bad, dataHoraEvento.$date=1677690003377, codigoTipoEvento=1, mesAnoReferenciaContabilizacao=032023}, {idEvento.$oid=63ffb5c8cdc01e6352729bae, dataHoraEvento.$date=1677691800676, codigoTipoEvento=3, mesAnoReferenciaContabilizacao=032023}, {idEvento.$oid=6405cc8711c78c20369b4033, dataHoraEvento.$date=1678090851560, codigoTipoEvento=8, mesAnoReferenciaContabilizacao=032023}, {idEvento.$oid=6422b4c97e45dd75abb4f831, dataHoraEvento.$date=1679985307560, codigoTipoEvento=6, mesAnoReferenciaContabilizacao=032023, _class=br.com.bb.rcp.model.vantagens.HistoricoContabil}, {idEvento.$oid=6422b4c97e45dd75abb4f832, dataHoraEvento.$date=1679985309584, codigoTipoEvento=6, mesAnoReferenciaContabilizacao=032023, _class=br.com.bb.rcp.model.vantagens.HistoricoContabil}]' </code></pre> <p>As far as I can tell, this is not JSON. I am struggling with how to transform each &quot;event&quot; (a {} item) into a list.</p> <p>And after that, how could I query/filter data based on the contents of each event? Should I pd.explode the events into new rows and query them as strings?</p>
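The strings look like a Java-style `Map.toString()` rendering rather than valid JSON, so one hedged option (a sketch, not from the question) is to parse the `{key=value, ...}` groups with a regular expression:

```python
import re


def parse_events(cell):
    """Split a '[{k=v, ...}, {...}]' string into a list of dicts."""
    events = []
    for body in re.findall(r"\{(.*?)\}", cell):
        event = {}
        for pair in body.split(", "):
            key, _, value = pair.partition("=")  # keys may contain dots, values may not
            event[key] = value
        events.append(event)
    return events


sample = "[{idEvento.$oid=63ffaec3cdc01e6352729bad, codigoTipoEvento=1}, {codigoTipoEvento=3}]"
events = parse_events(sample)
```

Once each cell is a list of dicts, `df.explode` followed by `pd.json_normalize` can flatten the events into one row each for filtering.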
<python><pandas>
2023-08-03 18:58:48
1
457
FábioRB
76,830,860
6,457,407
Monkeypatching C stdin from Python
<p>Is there any way of monkeypatching the C version of stdin in Python?</p> <p>I’m attempting to write unit tests (pytest) for a Python module written in C. One of the functions in the library tries to read a value from stdin. I have no idea how to get around this. If it were a Python function, I’d monkeypatch sys.stdin, but that doesn’t work here.</p> <p>If forced to, I can start a new process with redirected standard input, but I’m hoping to avoid that.</p> <p>I didn’t write the module; I’m just testing it...</p>
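One hedged workaround (a sketch, not from the question): instead of patching `sys.stdin`, swap out file descriptor 0 itself with `os.dup2`; C-level reads of stdin go through fd 0 and will see the substituted data:

```python
import os
import tempfile


def feed_stdin(data: str):
    """Point fd 0 at a temp file holding `data`; returns a restore() callback."""
    saved = os.dup(0)                 # keep the real stdin so we can put it back
    tmp = tempfile.TemporaryFile()
    tmp.write(data.encode())
    tmp.flush()
    tmp.seek(0)
    os.dup2(tmp.fileno(), 0)          # C code reading fd 0 now sees `data`

    def restore():
        os.dup2(saved, 0)
        os.close(saved)
        tmp.close()

    return restore


restore = feed_stdin("42\n")
value = os.read(0, 1024).decode()     # stand-in for the C extension's read
restore()
```

In pytest this pairs naturally with a fixture that calls `restore()` in teardown. If the extension buffers stdin through C stdio internally, a subprocess with redirected input may still be the more reliable route.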
<python><pytest><monkeypatching>
2023-08-03 18:44:52
0
11,605
Frank Yellin
76,830,702
10,886,283
Is there a way of getting the inset axes by "asking" the axes it is embedded in?
<p>I have several subplots, <code>axs</code>, some of them with embedded inset axes. I would like to get the data plotted in the insets by iterating over the main axes. Let's consider this minimal reproducible example:</p> <pre><code>fig, axs = plt.subplots(1, 3) x = np.array([0,1,2]) for i, ax in enumerate(axs): if i != 1: ins = ax.inset_axes([.5,.5,.4,.4]) ins.plot(x, i*x) plt.show() </code></pre> <p><a href="https://i.sstatic.net/bpvhk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bpvhk.png" alt="enter image description here" /></a></p> <p>Is there a way of doing something like</p> <pre><code>data = [] for ax in axs: if ax.has_inset(): # &quot;asking&quot; if ax has embedded inset ins = ax.get_inset() # getting the inset from ax line = ins.get_lines()[0] dat = line.get_xydata() data.append(dat) print(data) # [array([[0., 0.], # [1., 0.], # [2., 0.]]), # array([[0., 0.], # [1., 2.], # [2., 4.]])] </code></pre>
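A possible approach (a sketch, not from the question): axes created with `ax.inset_axes` are registered in the parent's `child_axes` list, so the insets can be recovered without keeping references around:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(1, 3)
x = np.array([0, 1, 2])
for i, ax in enumerate(axs):
    if i != 1:
        ins = ax.inset_axes([.5, .5, .4, .4])
        ins.plot(x, i * x)

data = []
for ax in axs:
    for ins in ax.child_axes:          # empty list when ax has no inset
        line = ins.get_lines()[0]
        data.append(line.get_xydata())
```

`ax.child_axes` plays the role of the hypothetical `ax.has_inset()` / `ax.get_inset()`: an empty list means no inset, otherwise it holds the embedded axes.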
<python><matplotlib><axis>
2023-08-03 18:20:12
1
509
alpelito7
76,830,633
1,786,016
Salesforce object is not accessible via API, but working via UI
<p>I can access my object by Id via the user interface using a URL like <code>lightning/r/Account/0061n00000b5pEdAAI/view</code>, but I'm not able to get it via the <code>API</code> with the same user:</p> <pre><code>sf_client.query_all(f&quot;&quot;&quot;SELECT Id, Name FROM Account where Id='0061n00000b5pEdAAI'&quot;&quot;&quot;) </code></pre> <p>This does not happen with all objects; roughly every third object is unavailable. I checked permissions and set access to all, as in this <a href="https://www.findmycrm.com/faq/crm-migration-faqs/how-can-i-get-access-to-all-objects-in-salesforce" rel="nofollow noreferrer">guide</a>:</p>
<python><salesforce><soql>
2023-08-03 18:08:30
1
7,822
Arti
76,830,535
11,202,401
How to reference newrelic keys via settings.py
<p>I am using NewRelic for the first time. I have the <code>newrelic.ini</code> file, which includes the <code>license_key</code> and <code>app_name</code> hard-coded, in my Django Python repository, and have updated my manage.py script like so:</p> <pre><code>#!/usr/bin/env python &quot;&quot;&quot;Django's command-line utility for administrative tasks.&quot;&quot;&quot; import newrelic.agent newrelic.agent.initialize('newrelic.ini') import os import sys def main(): ..... if __name__ == '__main__': main() </code></pre> <p>When I run my API locally via <code>./manage.py runserver</code> I can see the logs being tracked in NewRelic, which is good.</p> <p>However, I want to remove the hardcoded <code>license_key</code> and <code>app_name</code> from the <code>newrelic.ini</code> file so that I can reference environment variables. I have read this documentation <a href="https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/#server-side-configuration" rel="nofollow noreferrer">https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/#server-side-configuration</a> which outlines the environment and config variables.</p> <p>I added this to my <code>settings.py</code> file so that I can override the env variables on deployment:</p> <pre><code>NEW_RELIC_APP_NAME = &quot;My API&quot; NEW_RELIC_LICENSE_KEY=&quot;XXXXXXXXXXXXXXXX&quot; </code></pre> <p>I have tried</p> <ul> <li>removing <code>license_key</code> and <code>app_name</code> entirely from newrelic.ini</li> <li>keeping the variables but setting them equal to &quot;&quot;</li> <li>keeping the variables but setting license_key = NEW_RELIC_LICENSE_KEY</li> </ul> <p>but I can't seem to get it tracking without the values being hardcoded in the newrelic.ini file.</p> <p>So my question: how can I use environment variables with the newrelic.ini file?</p> <p>Thanks in advance.</p>
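One hedged sketch (values are placeholders): variables assigned in Django's `settings.py` are plain Python names, not environment variables, so the agent never sees them. The documented `NEW_RELIC_APP_NAME` and `NEW_RELIC_LICENSE_KEY` variables must be in the process environment before `newrelic.agent.initialize()` runs, with the corresponding keys removed from the ini file:

```python
import os

# Hypothetical values; in a real deployment these are exported by the
# platform or a secret store before the process starts.
os.environ["NEW_RELIC_APP_NAME"] = "My API"
os.environ["NEW_RELIC_LICENSE_KEY"] = "real-key-from-deployment"

# With license_key/app_name removed from newrelic.ini entirely, the agent
# consults the NEW_RELIC_* environment variables instead:
# import newrelic.agent
# newrelic.agent.initialize("newrelic.ini")
```

Exporting the variables in the shell or service definition (rather than in Python) achieves the same thing and keeps secrets out of the repository.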
<python><django><newrelic>
2023-08-03 17:55:52
0
605
nt95
76,829,991
1,843,511
Pydantic settings mock
<p>I'm using Pydantic Settings in a FastAPI project, but mocking these settings is kind of an issue. You cannot initiate <code>Settings()</code> successfully unless attributes like <code>ENV</code> and <code>DB_PATH</code>, which don't have a default value, are set as environment variables on your system or in an .env file, which pydantic can access.</p> <p>This also means that any fixtures are not possible, because if I import any module which uses the settings somewhere in the code, it will be executed before the fixture is able to do its work.</p> <pre><code>class Settings(BaseSettings): ENV: str DB_PATH: str class Config: env_file = &quot;.env&quot; case_sensitive = True settings = Settings() </code></pre> <p>Now, the only solution I came by thus far, is setting the environment variables in <code>conftest.py</code> outside any function:</p> <pre><code>import os os.environ[&quot;ENV&quot;] = &quot;production&quot; os.environ[&quot;DB_PATH&quot;] = &quot;path/to/sqlite.db&quot; </code></pre> <p>There has to be some neater solution? Any thoughts?</p> <h3>EDIT:</h3> <p>As for now I fixed it by just putting them in a function in <code>conftest.py</code> and immediately executing that one:</p> <pre><code>import os def set_environment_variables(): os.environ[&quot;ENV&quot;] = &quot;production&quot; os.environ[&quot;DB_PATH&quot;] = &quot;path/to/sqlite.db&quot; set_environment_variables() </code></pre> <p>This way it's possible to set the <code>settings</code> attributes later on in fixtures or whatever. But at least it won't complain anymore on initiation. If there is any better solution, please let me know.</p>
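One hedged alternative (a sketch with a stand-in class, not the project's actual code): defer instantiation behind a cached accessor so nothing reads the environment at import time, which lets a fixture set variables first:

```python
import os
from functools import lru_cache


class Settings:
    """Stand-in for the pydantic BaseSettings subclass in the question."""
    def __init__(self):
        self.ENV = os.environ["ENV"]
        self.DB_PATH = os.environ["DB_PATH"]


@lru_cache(maxsize=None)
def get_settings() -> Settings:
    # Nothing is constructed until the first call, so a pytest fixture
    # (e.g. monkeypatch.setenv) can run before this executes.
    return Settings()


os.environ["ENV"] = "test"            # in tests: monkeypatch.setenv("ENV", "test")
os.environ["DB_PATH"] = "tmp/sqlite.db"
settings = get_settings()
```

In FastAPI, `get_settings` also slots into `Depends()`, and `app.dependency_overrides` can replace it wholesale in tests.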
<python><fastapi><pydantic>
2023-08-03 16:28:25
0
5,005
Erik van de Ven
76,829,738
264,136
identify if the list has a number <50 and all 0 after that
<p>The check should return whether the input list of numbers ends with more than one 0 and the last non-zero value is less than 50.</p> <p>There can be scenarios like the following:</p> <pre><code>[500,5000,600,1212,234232,121,10,0,0,0,0,0,0,0] </code></pre> <p>should return <code>True</code> (ending with 0s, and 10 is smaller than 50)</p> <pre><code>[500,5000,600,1212,234232,121,500,0,0,0,0,0,0,0] </code></pre> <p>should return <code>False</code> (ending with 0s, but 500 is not less than 50)</p> <pre><code>[500,5000,600,1212,234232,121,200,0,0,0,0,0,0,50] </code></pre> <p>should return <code>False</code> (the list does not end with 0)</p>
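A minimal sketch of one way to implement the rule (assuming, per the examples, that "more than one trailing zero" and "last non-zero value below 50" must both hold):

```python
def ends_small_then_zeros(nums):
    # Walk back over the trailing zeros.
    i = len(nums)
    while i > 0 and nums[i - 1] == 0:
        i -= 1
    trailing_zeros = len(nums) - i
    # Need more than one trailing zero and a last non-zero value below 50.
    return trailing_zeros > 1 and i > 0 and nums[i - 1] < 50


print(ends_small_then_zeros([500, 5000, 600, 1212, 234232, 121, 10, 0, 0, 0, 0, 0, 0, 0]))   # True
print(ends_small_then_zeros([500, 5000, 600, 1212, 234232, 121, 500, 0, 0, 0, 0, 0, 0, 0]))  # False
print(ends_small_then_zeros([500, 5000, 600, 1212, 234232, 121, 200, 0, 0, 0, 0, 0, 0, 50])) # False
```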
<python><python-3.x><list>
2023-08-03 15:50:32
1
5,538
Akshay J
76,829,578
6,930,340
How to prevent pytest from executing all scripts in test folder?
<p>I have the following project structure:</p> <pre><code>proj - src - tests - setup create_expectations.py __init__.py test_1.py test_2.py ... </code></pre> <p>Now, when I run <code>pytest</code>, it not only executes <code>test_1.py</code> and <code>test_2.py</code>, but also <code>create_expectations.py</code>.</p> <p>I can verify that <code>create_expectations.py</code> is not imported from any other script. It's supposed to be a kind of &quot;stand-alone&quot; script that I run once.</p> <p><code>__init__.py</code> is an empty file.</p> <p>My current understanding is that running <code>pytest</code> will only execute files with &quot;test&quot; in their names. I would like to understand what I am apparently doing wrong.</p> <p>EDIT:<br /> <code>pytest --collect-only</code> reveals that only files with &quot;test&quot; in their names will be collected. This is exactly what I expect.</p>
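Since `--collect-only` shows only test files being collected, `create_expectations.py` is most likely being *imported* at collection time (e.g. via the package's `__init__.py`, `conftest.py`, or a test module), which runs its top-level code. A hedged sketch of the usual fix, an import guard (the function body is a hypothetical stand-in):

```python
# create_expectations.py: guard the script body so that merely importing the
# module (which pytest may do while collecting the package) has no side effects.

def create_expectations():
    print("writing expectation files...")  # stand-in for the real work


if __name__ == "__main__":   # only runs when executed directly, not on import
    create_expectations()
```

Alternatively, moving the script out of the `tests` package (or setting `testpaths`/`python_files` in pytest configuration) keeps it out of collection entirely.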
<python><pytest>
2023-08-03 15:30:24
0
5,167
Andi
76,829,565
9,356,256
Translate a single column to English using pandas
<p>My data frame looks like this:</p> <pre><code>serial_no text 23 {'Headers': ['LA-Spanish (Español)[Change]', '5790B/5/AF Addendum', 'Secondary menu'], 'Divs': ['', 'Document(s):5790B/5/AF Addendum (168.88 KB)5790B/5/AF AddendumN.º de revisión:00']} 25 {'Headers': ['LA-Spanish (Español)[Change]', '700HPPK Service Information', 'Secondary menu'], 'Paragraphs': [], 'Tables': [], 'Lists': [&quot; ['Disclaimer', 'Declaración de privacidad', 'Terms of Use', 'Términos y Condiciones']&quot;], 'Divs': ['', 'Document(s):700HPPK Service Information (3.36 MB)N.º de revisión:00']} </code></pre> <p>I want to translate the text column to English, but that column is stored in a key-value format. My code is given below:</p> <pre><code>import pandas as pd from googletrans import Translator translator = Translator() df['text2'] = df['text'].apply(lambda x: translator.translate(x, dest='en').text) </code></pre> <p>I am getting the error below:</p> <pre><code>AttributeError: 'NoneType' object has no attribute 'group' </code></pre>
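One hedged possibility (a sketch; `translate_cell` and the walker are hypothetical helpers, not googletrans API): the cells hold dict-repr strings, and feeding the whole repr (or an empty string) to googletrans can trip it up internally, producing exactly this AttributeError. Parsing each cell first, assuming the cells are valid Python literals, and translating only non-empty string leaves sidesteps that:

```python
import ast


def translate_cell(cell, translate):
    """Parse a dict-repr string and apply `translate` to every string leaf."""
    def walk(node):
        if isinstance(node, str):
            return translate(node) if node.strip() else node
        if isinstance(node, list):
            return [walk(item) for item in node]
        if isinstance(node, dict):
            return {key: walk(value) for key, value in node.items()}
        return node
    return walk(ast.literal_eval(cell))


# With googletrans this would be something like:
#   df["text2"] = df["text"].apply(lambda c: translate_cell(
#       c, lambda s: translator.translate(s, dest="en").text))
demo = translate_cell("{'Headers': ['Hola', ''], 'Divs': ['Mundo']}", str.upper)
```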
<python><pandas><group-by>
2023-08-03 15:28:24
2
373
Nikita Agarwal
76,829,328
11,770,286
Align text in the center of the bounding box
<p>I'm trying to create some labels manually which should align exactly with the tick locations. However, when plotting text <code>ha='center'</code> aligns the bounding box of the text in the center, but the text itself within the bounding box is shifted to the left.</p> <p>How can I align the text itself in the center? I found <a href="https://stackoverflow.com/questions/34683864/text-alignment-within-bounding-box">this question</a> but it doesn't help as it shifts the bounding box, while I need to shift the text.</p> <pre class="lang-py prettyprint-override"><code> import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt print(matplotlib.__version__) # 3.5.3 fig, ax = plt.subplots() ax.plot([.5, .5], [0, 1], transform=ax.transAxes) ax.text(.5, .5, 'This text needs to be center-aligned'.upper(), ha='center', va='center', rotation='vertical', transform=ax.transAxes, bbox=dict(fc='blue', alpha=.5)) ax.set_title('The box is center-aligned but the text is too much to the left') plt.show() </code></pre> <p><a href="https://i.sstatic.net/cKwwF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cKwwF.png" alt="example" /></a></p>
<python><matplotlib><plot-annotations>
2023-08-03 15:00:23
1
3,271
Wouter
76,829,178
5,597,304
Pydantic 2 Pandas Timestamp / NaT migration issue
<p>I am migrating from Pydantic 1 to 2 and having issues getting this code to work.</p> <p>Expected result is that 'NaT' string would be converted to pd.NaT and exist in the model.</p> <p>Minimal working example:</p> <pre><code>from typing import Union import pydantic from pydantic import BaseModel, ConfigDict import pandas as pd class CheckTimestamp(BaseModel): ts: pd.Timestamp model_config = ConfigDict(arbitrary_types_allowed=True) @pydantic.field_validator('ts', mode='before') def convert_to_timestamp(value: Union[str, pd.Timestamp]): &quot;&quot;&quot;coming from the cache, this value can sometimes be &quot;NaT&quot; string, convert into pd.NaT so it doesn't error&quot;&quot;&quot; if value == &quot;NaT&quot;: return pd.NaT return pd.Timestamp(value) sample_model = CheckTimestamp(ts='NaT') </code></pre> <p>This code raises the following validation error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\d53542\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\232.8660.197\plugins\python\helpers\pydev\pydevconsole.py&quot;, line 364, in runcode coro = func() File &quot;&lt;input&gt;&quot;, line 21, in &lt;module&gt; File &quot;C:\Users\d53542\Documents\python\Analytics.OperationAnalytics-lib\virtual_environments\parsing_test\lib\site-packages\pydantic\main.py&quot;, line 159, in __init__ __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) pydantic_core._pydantic_core.ValidationError: 1 validation error for CheckTimestamp ts Input should be an instance of Timestamp [type=is_instance_of, input_value=NaT, input_type=NaTType] For further information visit https://errors.pydantic.dev/2.1/v/is_instance_of </code></pre> <p>I believe the issue is that pydantic v2 is more picky and is treating the pd.NaT as a different object than pd.Timestamp. I have tried unioning classes but I am obviously missing something.</p> <p>Thank you for your time.</p>
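A hedged diagnosis (a sketch, not the project's code): with `arbitrary_types_allowed`, pydantic v2 validates the field with a plain isinstance check, and `pd.NaT` is a `NaTType`, not a `pd.Timestamp`, so the value the before-validator returns is rejected by the annotation itself. Widening the annotation is one possible fix:

```python
import pandas as pd

NaTType = type(pd.NaT)

# The root cause of the is_instance_of error: NaT is not a Timestamp.
print(isinstance(pd.NaT, pd.Timestamp))  # False
print(isinstance(pd.NaT, NaTType))       # True

# Sketch of the widened annotation on the model (assumes pydantic v2):
#   ts: Union[pd.Timestamp, NaTType]
```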
<python><pandas><timestamp><pydantic>
2023-08-03 14:43:29
2
1,016
ak_slick
76,829,169
11,348,734
How to measure the width of white object inside the image
<p>I have this image, with a white object:</p> <p><a href="https://i.sstatic.net/jEKhw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jEKhw.jpg" alt="enter image description here" /></a></p> <p>I cut the image and draw a vertical and horizontal line:</p> <p><a href="https://i.sstatic.net/U6Lwo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U6Lwo.jpg" alt="enter image description here" /></a></p> <p>and I would like to measure the width and height of the white object in contact with the lines, vertical and horizontal. But my approach was wrong; I got the total size. Here are the results:</p> <pre><code>w = 453 h = 555 </code></pre> <p>I think this is the total size of the cropped image. To do this I used this code:</p> <pre><code>import cv2 import numpy as np ############################################################################ mask_ref = cv2.imread(&quot;/original_image.jpg&quot;, 0) hh, ww = mask_ref.shape[:2] # get the single external contours contours = cv2.findContours(mask_ref, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] big_contour = max(contours, key=cv2.contourArea) # fixing position #print('Angle: ', angle) if angle &lt; -45 and width &gt; height: angle = -(90 + angle) # otherwise, check width vs height else: if angle &gt; -45 and width &gt; height: angle = -(-90 +angle) else: angle= -angle # negate the angle to unrotate neg_angle = -angle #print('unrotation angle:', neg_angle) #print('') # Get rotation matrix M = cv2.getRotationMatrix2D(center, neg_angle, scale=1.0) # unrotate to rectify rectified = cv2.warpAffine(mask_ref, M, (ww, hh), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE) # threshold it again to binary rectified = cv2.threshold(rectified, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1] # get bounding box of contour of rectified image cntrs = cv2.findContours(rectified, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cntrs = cntrs[0] if len(cntrs) == 2 else
cntrs[1] cntr = cntrs[0] x,y,w,h = cv2.boundingRect(cntr) # crop to blob limits crop = rectified[y:y+h, x:x+w] # get width at every row of crop count = np.count_nonzero(crop, axis=1) #### Draw Image Figure x1, y1 = w, int(h*0.5) x2, y2 = 1, int(h*0.5) line_thickness = 2 cv2.line(crop, (x1, y1), (x2, y2), (0, 0, 0), thickness=line_thickness) # Calculate the center coordinate of the image center_x, center_y = w // 2, h // 2 # Draw vertical line from top to bottom center cv2.line(crop, (center_x, 0), (center_x, h), (0, 0, 0), thickness=line_thickness) print(w) print(h) </code></pre> <p>Some suggestion about the code? I really appreciate.</p>
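A hedged sketch of the measuring step (numpy only, with a toy mask standing in for the thresholded crop): instead of taking `w, h` from `cv2.boundingRect`, count the white pixels along the centre row and centre column. Note this must happen *before* black lines are drawn onto the crop, since drawing zeroes out the very pixels being counted:

```python
import numpy as np

# Toy binary mask standing in for `crop` (255 = white object).
mask = np.zeros((9, 9), dtype=np.uint8)
mask[2:7, 3:8] = 255                      # a 5x5 white block

center_y, center_x = mask.shape[0] // 2, mask.shape[1] // 2
width_on_line = np.count_nonzero(mask[center_y, :])   # object width on the horizontal line
height_on_line = np.count_nonzero(mask[:, center_x])  # object height on the vertical line
print(width_on_line, height_on_line)                  # 5 5
```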
<python><opencv><image-editing>
2023-08-03 14:42:10
0
897
Curious G.
76,829,149
5,168,534
Keras Classifier - Input Shape for Multi Class Classification
<p>I am trying to work on a multi class classification Data Mentioned here (this would help to understand, what is not matching the data shape and the keras inpust shape) :</p> <pre><code>X = x_data.loc[:,x_data.columns[0:6]] Y = y_data.loc[:,] print(X.shape) print(Y.shape) X = X.values Y = Y.values </code></pre> <p>The above prints:</p> <pre><code>(237, 6) (237,) [[0 0 0 0 0 0] [0 0 0 0 0 0] [0 0 0 0 0 0] ... [0 0 0 1 0 0] [0 0 0 1 0 0] [0 0 0 1 0 0]] [ 0 0 2 8 8 9 5 0 1 2 4 4 5 5 6 9 10 0 3 8 10 2 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 8 8 9 9 9 9 9 9 9 9 9 10 10 10 10 10 10 10 10 10 10 1 2 4 5 4 1 3 8 9 11 4 5 8 6 1 11 8 9 11 2 11 1 3 4 1 1 4 10 11 9 3 11 8 6 9 0 0 6 7 10 0 2 7 5 7 9 11 1 4 3 5 6 7 5 7 3 5 2 6 6 9 2 10 11 6 8 8 11 6 10 0 3 3 10 2 5 9 9 11 8 7 8 4 10 10 1 1 6 9 4 5 10 0 3 2 4 7 2 6 7 10 11 11 11 11 11 11 11 11 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 7 7 7 7] </code></pre> <p>Below my code of keras model, that is where I am confused about the input shape.</p> <pre><code>def baseline_model(): # Create model here model = Sequential() model.add(Dense(15, input_shape = [6] , activation = 'relu')) # Rectified Linear Unit Activation Function model.add(Dense(15, activation = 'relu')) model.add(Dense(11, activation = 'softmax')) # Softmax for multi-class classification # Compile model here model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) return model estimator = KerasClassifier(build_fn = baseline_model, epochs = 100, batch_size = 10, verbose = 0) kfold = KFold(n_splits = 2, shuffle = True, random_state = 10) results = cross_val_score(estimator, X, Y, cv = kfold) </code></pre> <p>The error what I am getting:</p> <pre><code> ValueError: Shapes (None, 1) and (None, 11) are incompatible </code></pre> <p>Can you help me here where I am going wrong in the parameters as needed to be aligned with data.</p> <p>Thanks in advance</p>
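A hedged sketch of the likely mismatch (numpy only; the label array below is a hypothetical subset of the Y shown): `categorical_crossentropy` expects one-hot targets of shape (n, num_classes), while Y here is integer labels of shape (n,), hence `Shapes (None, 1) and (None, 11) are incompatible`. Also, the labels shown run from 0 to 11, which is 12 classes, not 11. Either one-hot encode Y, or keep integer labels and compile with `sparse_categorical_crossentropy`:

```python
import numpy as np

y = np.array([0, 2, 8, 8, 9, 5, 11])   # integer labels like Y in the question
num_classes = y.max() + 1              # 12 here, so the last Dense needs 12 units

# One-hot encode (what keras.utils.to_categorical would do):
y_onehot = np.eye(num_classes)[y]
print(y_onehot.shape)                  # (7, 12)
```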
<python><tensorflow><keras><scikit-learn><artificial-intelligence>
2023-08-03 14:38:55
1
311
anshuk_pal
76,828,815
12,131,472
Concatenate dataframes with initially mismatching index structures (columns)
<p>I have one bigger dataframe and a small one which only has one row.</p> <p>the bigger one</p> <pre><code>route TC2_37 ... TD25 value daily_change ... value daily_change period ... Aug 23 20339.0 4018.0 ... 26569.0 -951.0 Sep 23 19737.0 3037.0 ... 32725.0 -507.0 Oct 23 19821.0 1316.0 ... 38033.0 -18.0 Nov 23 20803.0 580.0 ... 40282.0 -188.0 Dec 23 22070.0 115.0 ... 42195.0 -148.0 Q3 23 18158.0 1891.0 ... 31269.0 -1102.0 Q4 23 20899.0 672.0 ... 40170.0 -117.0 Q1 24 16361.0 363.0 ... 37983.0 -125.0 Q2 24 14581.0 380.0 ... 28731.0 546.0 Q3 24 13029.0 415.0 ... 27840.0 628.0 Q4 24 16701.0 310.0 ... 33390.0 520.0 Cal 24 15168.0 367.0 ... 31986.0 393.0 Cal 25 13950.0 98.0 ... 30712.0 139.0 </code></pre> <p>some columns are not shown but they all have same structures</p> <p>the small dataframe looks like this:</p> <pre><code>route A6TCE BCTI BDTI MA2TCE ... TD7 TD8 TD25 V2TCE period ... 2023-08-02 17134.0 720.0 821.0 28859.0 ... 9917.0 31700.0 10408.0 11800.0 </code></pre> <p>The small dataframe has more routes than the bigger one,</p> <p>I wish to create a new dataframe which has the small dataframe as the first row, but only with the columns(routes) which overlaps. And only under the column &quot;value&quot;, NOT &quot;daily_change&quot;</p> <pre><code> route TC2_37 ... TD25 value daily_change ... value daily_change period 2023-08-02 990.0 ... 10408.0 Aug 23 20339.0 4018.0 ... 26569.0 -951.0 Sep 23 19737.0 3037.0 ... 32725.0 -507.0 Oct 23 19821.0 1316.0 ... 38033.0 -18.0 Nov 23 20803.0 580.0 ... 40282.0 -188.0 Dec 23 22070.0 115.0 ... 42195.0 -148.0 Q3 23 18158.0 1891.0 ... 31269.0 -1102.0 Q4 23 20899.0 672.0 ... 40170.0 -117.0 Q1 24 16361.0 363.0 ... 37983.0 -125.0 Q2 24 14581.0 380.0 ... 28731.0 546.0 Q3 24 13029.0 415.0 ... 27840.0 628.0 Q4 24 16701.0 310.0 ... 33390.0 520.0 Cal 24 15168.0 367.0 ... 31986.0 393.0 Cal 25 13950.0 98.0 ... 
30712.0 139.0 </code></pre> <p>Reproduce this part of the bigger dataframe from dict:</p> <pre class="lang-py prettyprint-override"><code>{('TC2_37', 'value'): {'Aug 23': 20339.0, 'Sep 23': 19737.0, 'Oct 23': 19821.0, 'Nov 23': 20803.0, 'Dec 23': 22070.0, 'Q3 23': 18158.0, 'Q4 23': 20899.0, 'Q1 24': 16361.0, 'Q2 24': 14581.0, 'Q3 24': 13029.0, 'Q4 24': 16701.0, 'Cal 24': 15168.0, 'Cal 25': 13950.0}, ('TC2_37', 'daily_change'): {'Aug 23': 4018.0, 'Sep 23': 3037.0, 'Oct 23': 1316.0, 'Nov 23': 580.0, 'Dec 23': 115.0, 'Q3 23': 1891.0, 'Q4 23': 672.0, 'Q1 24': 363.0, 'Q2 24': 380.0, 'Q3 24': 415.0, 'Q4 24': 310.0, 'Cal 24': 367.0, 'Cal 25': 98.0}, ('TD25', 'value'): {'Aug 23': 26569.0, 'Sep 23': 32725.0, 'Oct 23': 38033.0, 'Nov 23': 40282.0, 'Dec 23': 42195.0, 'Q3 23': 31269.0, 'Q4 23': 40170.0, 'Q1 24': 37983.0, 'Q2 24': 28731.0, 'Q3 24': 27840.0, 'Q4 24': 33390.0, 'Cal 24': 31986.0, 'Cal 25': 30712.0}, ('TD25', 'daily_change'): {'Aug 23': -951.0, 'Sep 23': -507.0, 'Oct 23': -18.0, 'Nov 23': -188.0, 'Dec 23': -148.0, 'Q3 23': -1102.0, 'Q4 23': -117.0, 'Q1 24': -125.0, 'Q2 24': 546.0, 'Q3 24': 628.0, 'Q4 24': 520.0, 'Cal 24': 393.0, 'Cal 25': 139.0}} </code></pre>
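One hedged sketch (small stand-in frames built from the dict above): create an all-NaN row carrying the big frame's MultiIndex columns, copy the small frame's values into the ('route', 'value') slots for routes both frames share, and concat it on top:

```python
import pandas as pd

big = pd.DataFrame({
    ("TC2_37", "value"): {"Aug 23": 20339.0, "Sep 23": 19737.0},
    ("TC2_37", "daily_change"): {"Aug 23": 4018.0, "Sep 23": 3037.0},
    ("TD25", "value"): {"Aug 23": 26569.0, "Sep 23": 32725.0},
    ("TD25", "daily_change"): {"Aug 23": -951.0, "Sep 23": -507.0},
})
small = pd.DataFrame({"TC2_37": [990.0], "TD25": [10408.0], "BDTI": [821.0]},
                     index=["2023-08-02"])

routes = big.columns.get_level_values(0).unique()
shared = routes.intersection(small.columns)            # drop routes only in `small`

row = pd.DataFrame(index=small.index, columns=big.columns)  # all-NaN skeleton
for route in shared:
    row[(route, "value")] = small[route].values             # fill "value" only

result = pd.concat([row, big])
```

The `daily_change` slots of the new row stay NaN, as in the desired output.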
<python><pandas><dataframe><merge><multi-index>
2023-08-03 13:57:57
1
447
neutralname
76,828,742
10,966,677
python plotly treemap: how to use ids and treat duplicated names
<p>How do I correctly use <code>treemap</code> and the key <code>ids</code> to represent a hierarchical data set that contains duplicated names?</p> <p>At its most basic, I would like to reproduce a treemap similar to the following:</p> <pre><code>import plotly.express as px fig = px.treemap( parents = [&quot;&quot;, &quot;Anna&quot;,&quot;Anna&quot;, &quot;Bert&quot;, &quot;Bert&quot;, &quot;Belen&quot;, &quot;Belen&quot;, &quot;Belen&quot;, &quot;Cristiano&quot;], names = [&quot;Anna&quot;,&quot;Bert&quot;,&quot;Belen&quot;,&quot;Charlie&quot;,&quot;Chris&quot;,&quot;Carlos&quot;,&quot;Camila&quot;,&quot;Cristiano&quot;, &quot;Diego&quot;], ) fig.update_traces(root_color=&quot;lightgrey&quot;) fig.show() </code></pre> <p>which produces this figure:</p> <p><a href="https://i.sstatic.net/lwRQI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lwRQI.png" alt="desired result" /></a></p> <p>Problem 1: if a name is duplicated, e.g. suppose &quot;Carlos&quot; appears both under &quot;Belen&quot; and under &quot;Bert&quot;, then (due to the ambiguity) the figure is blank.
To solve the ambiguity, I will use the key <code>ids</code>.</p> <p>Suppose, I have a dataset (teams.csv):</p> <pre><code>ltree,name,parent ,Anna, Bert,Bert,Anna Belen,Belen,Anna Bert-Carlos,Carlos,Bert Bert-Chris,Chris,Bert Belen-Carlos,Carlos,Belen Belen-Camila,Camila,Belen Belen-Cristiano,Cristiano,Belen Belen-Cristiano-Diego,Diego,Cristiano </code></pre> <p>where the first column is a path of ltree type (from a Postgres database) which I would use as <code>ids</code> in the plotly treemap method.</p> <pre><code>import plotly.express as px import pandas as pd df = pd.read_csv(&quot;teams.csv&quot;) df = df.fillna('') fig = px.treemap(df, parents=df.parent, names=df.name, ids=df.ltree, # path=[px.Constant('Anna'),'parent', 'name'] ) fig.show() </code></pre> <p>which produces a blank view.</p> <p>Problem 2: If I remove the last row from the csv data set,</p> <pre><code>df = df.fillna('').head(-1) </code></pre> <p>then treemap is able to render a figure, yet incomplete.</p> <p>I have tried to change the ltree path format like <code>Belen-Cristiano-Diego</code> to <code>Root-Belen-Cristiano-Diego</code> or (&quot;Anna&quot; instead of &quot;Root&quot;).</p> <p>I have also tried the solution from the post <a href="https://stackoverflow.com/questions/75806947/duplicate-data-in-plotly-treemap">Duplicate data in Plotly treemap</a> removing the first row (i.e. the root):</p> <pre><code>df = df.fillna('').tail(-1) fig = px.treemap(df, path=[px.Constant('Anna'),'parent', 'name'] ) fig.show() </code></pre> <p>which renders a figure but it is the wrong hierarchy.</p> <p><a href="https://i.sstatic.net/tLEav.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tLEav.png" alt="incorrect result" /></a></p> <p>I don't know what I am doing incorrectly with the <code>ids</code>, as the ltree path has unique values or how should I format / edit the ltree path.</p>
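A hedged sketch (pandas only; the px.treemap call is left commented): when `ids` is supplied, `parents` must contain the parent's *id*, not its display name, which is the likely reason for the blank figure. Deriving both the id and the parent id from the ltree path keeps duplicated names unambiguous:

```python
import pandas as pd

df = pd.DataFrame({
    "ltree": ["", "Bert", "Belen", "Bert-Carlos", "Bert-Chris",
              "Belen-Carlos", "Belen-Camila", "Belen-Cristiano",
              "Belen-Cristiano-Diego"],
    "name": ["Anna", "Bert", "Belen", "Carlos", "Chris",
             "Carlos", "Camila", "Cristiano", "Diego"],
})

# Prefix with the root so every node, including "Anna", gets a non-empty id.
df["id"] = df["ltree"].apply(lambda p: "Anna-" + p if p else "Anna")
df["parent_id"] = df["id"].str.rpartition("-")[0]   # "" marks the root

# fig = px.treemap(df, ids=df["id"], names=df["name"], parents=df["parent_id"])
```

Every non-root `parent_id` now matches an existing `id`, so the two Carlos nodes no longer collide.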
<python><plotly><treemap>
2023-08-03 13:50:30
1
459
Domenico Spidy Tamburro
76,828,644
1,569,058
python pandas select all NaN row and fill with previous row
<p>I have a <code>dataframe</code> that looks like this:</p> <pre><code>pd.DataFrame([list(range(8))+[np.nan]*2, [np.nan]*len(range(10)), range(10), [np.nan]*2+list(range(8)), range(10)]) Out[31]: 0 1 2 3 4 5 6 7 8 9 0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 3 NaN NaN 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 4 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 </code></pre> <p>I want to select the rows that are all <code>NaN</code> (the second row in this case) and fill them with the previous row.</p> <pre><code> 0 1 2 3 4 5 6 7 8 9 0 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 1 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 NaN NaN 2 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 3 NaN NaN 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 4 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 </code></pre> <p>How to do that? Thanks a lot.</p>
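A hedged sketch of one way (not from the question): build a boolean mask of all-NaN rows, then fill just those rows from a forward-filled copy; partially-NaN rows (like row 3) are left untouched:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([list(range(8)) + [np.nan] * 2,
                   [np.nan] * 10,
                   list(range(10)),
                   [np.nan] * 2 + list(range(8)),
                   list(range(10))])

all_nan = df.isna().all(axis=1)            # True only for fully-empty rows
df.loc[all_nan] = df.ffill().loc[all_nan]  # copy values down into those rows only
```

Columns that were NaN in the source row (8 and 9 here) stay NaN, matching the desired output.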
<python><pandas>
2023-08-03 13:37:14
2
2,803
tesla1060
76,828,600
3,179,698
What does it mean of "Package 'networkx' requires a different Python: 3.6.13 not in '>=3.6'"
<p>I want to install the package networkx 2.5.1, but the message tells me</p> <pre><code>ERROR: Package 'networkx' requires a different Python: 3.6.13 not in '&gt;=3.6' </code></pre> <p>Very strange.</p> <p>I am using Python 3.6.13.</p> <p>Because I am using the company's system, I can't upgrade Python; I can only adjust the packages I install.</p>
<python><pip>
2023-08-03 13:32:21
0
1,504
cloudscomputes
76,828,527
3,979,160
Python: How to split a sentence in an ideal way in more or less equal pieces
<p>I am trying to write a function that splits a sentence into n lines. My goal is to get it as evenly distributed as possible, but my results are that either the first line or the last line is way too long compared to the other lines.</p> <p>Example:</p> <p>&quot;Who has eaten the fresh brownies?&quot; split into 4 lines.</p> <p>ideal:</p> <pre><code>Who has eaten the fresh brownies </code></pre> <p>I get:</p> <pre><code>Who has eaten the fresh brownies </code></pre> <p>(3 lines, first too long)</p> <p>or</p> <pre><code>who has eaten the fresh brownies </code></pre> <p>(3 lines not 4)</p> <p>or</p> <pre><code>who has eaten the fresh brownies </code></pre> <p>(last line too long)</p> <p>It seems to be a really hard problem to tackle. I only want to break the sentence on spaces. So the start is easy:</p> <pre><code>words = sentence.split() total_words = len(words) character_count = len(sentence) </code></pre> <p>From there I have created so much garbage that I am about to start fresh. Any advice? It should work with all kinds of different sentences, very short ones and longer ones, and also for different numbers of lines...</p>
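One hedged sketch (not the asker's code): recompute a per-line character target from the words still remaining, and stop adding words to a line as soon as the next word would move its length further from that target than stopping short would:

```python
def split_evenly(sentence: str, n: int) -> list:
    """Greedy split into n lines, aiming at even character counts."""
    words = sentence.split()
    lines = []
    for lines_left in range(n, 0, -1):
        if lines_left == 1:
            lines.append(" ".join(words))   # last line takes whatever remains
            break
        target = len(" ".join(words)) / lines_left   # ideal size of this line
        line, length = [], 0
        for word in words:
            new_length = length + len(word) + (1 if line else 0)
            # stop once adding the word moves us further from the target
            if line and abs(new_length - target) > abs(length - target):
                break
            line.append(word)
            length = new_length
        lines.append(" ".join(line))
        words = words[len(line):]
    return lines


print(split_evenly("Who has eaten the fresh brownies?", 4))
# ['Who has', 'eaten the', 'fresh', 'brownies?']
```

Recomputing the target each round keeps an early long word from starving the later lines; a dynamic-programming variant minimizing squared deviation would be optimal, but this greedy version already matches the ideal split for the example.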
<python><string>
2023-08-03 13:24:05
2
523
Hasse
76,828,490
4,502,950
Merge on specific columns with null values in key columns pandas
<p>I am trying to merge two data frames that have null values in the key columns. I have already found a way to merge the two data frames, but the issue is I don't want all the columns from df2, just specific columns.</p> <pre><code>import pandas as pd import numpy as np data = { 'Email': ['example@gmail.com', 'example@gmail.com', np.nan, np.nan], 'How long has your business been operating?': ['2 year', np.nan, '2 year', np.nan], 'How many employees do you have currently?': [np.nan, '2 employees', np.nan, '1 employee'], } df = pd.DataFrame(data) # Create a separate DataFrame with non-null email addresses df_non_null_email = df.dropna(subset=['Email']) df = df[df['Email'].isnull()] # Perform the group by operation on the separate DataFrame result_df = df_non_null_email.groupby('Email', as_index=False).agg(lambda x: ' '.join(x.dropna())) # Replace empty strings with np.nan in all columns result_df.replace('', np.nan, inplace=True) # Concatenate the grouped DataFrame back to the original DataFrame df = pd.concat([df, result_df], ignore_index=True) </code></pre> <p>And this is the second dataframe:</p> <pre><code>data2 = { 'Email address': ['example@gmail.com', np.nan,np.nan], 'Any disability?': ['yes',np.nan,np.nan], 'Sexuality': [np.nan,'yes',np.nan] } df2 = pd.DataFrame(data2) select_column = ['Any disability?'] </code></pre> <p>This is how I am merging the two data frames:</p> <pre><code># Merge the two DataFrames on the 'Email' column merged_df = df.merge(df2[pd.notnull(df2['Email address'])], how='left', right_on='Email address',left_on='Email') </code></pre> <p>This is the output I am getting:</p> <p><a href="https://i.sstatic.net/WDZwC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WDZwC.png" alt="enter image description here" /></a></p> <p>I only want the second-to-last column (disability) from the other dataframe, like this:</p> <p><a href="https://i.sstatic.net/0DAYk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0DAYk.png" alt="enter image description here" /></a></p> <p>I have to do this step for multiple data frames, so I am looking for a clean solution.</p>
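A hedged sketch (simplified frames standing in for the ones above): restrict df2 to the join key plus the wanted columns *before* merging, then drop the helper key so only the selected columns are added:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Email": ["example@gmail.com", np.nan]})
df2 = pd.DataFrame({
    "Email address": ["example@gmail.com", np.nan, np.nan],
    "Any disability?": ["yes", np.nan, np.nan],
    "Sexuality": [np.nan, "yes", np.nan],
})
select_column = ["Any disability?"]

# Keep only the key column and the selected columns on the right side.
right = df2.loc[df2["Email address"].notna(), ["Email address"] + select_column]
merged = df.merge(right, how="left", left_on="Email", right_on="Email address")
merged = merged.drop(columns="Email address")   # discard the duplicate key
```

Because `select_column` is a list, the same two lines work unchanged for any set of wanted columns across the other data frames.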
<python><pandas><dataframe>
2023-08-03 13:20:37
1
693
hyeri
76,828,407
8,465,299
Antithetic Sampling for variance reduction in graph convolutional network (GCN)
<p>I am trying to implement <a href="https://en.wikipedia.org/wiki/Antithetic_variates" rel="nofollow noreferrer">Antithetic Sampling</a> to sample vertices of the graph and train the downstream graph convolutional network (GCN) model on the sampled graph.</p> <blockquote> <p>Antithetic Sampling is a variance reduction technique that involves generating pairs of random samples and their corresponding antithetic counterparts to cancel out fluctuations, leading to more accurate estimates.</p> </blockquote> <p>Below is my code and function '_Antithetic_sampling(...)' is main function to implement the logic of Antithetic Sampling.:</p> <pre><code>import math import torch import numpy as np import scipy.sparse as sp from scipy.sparse.linalg import norm as sparse_norm class Antithetic_Sampler(Sampler): def __init__(self, pre_probs, features, adj, **kwargs): super().__init__(features, adj, **kwargs) col_norm = sparse_norm(adj, axis=0) self.probs = col_norm / np.sum(col_norm) def sampling(self, v): &quot;&quot;&quot; Inputs: v: batch nodes list &quot;&quot;&quot; all_support = [[]] * self.num_layers # Initialize empty list for all layers cur_out_nodes = v for layer_index in range(self.num_layers - 1, -1, -1): # Start from the last layer and move backwards cur_sampled, cur_support = self._Antithetic_sampling(cur_out_nodes, self.layer_sizes[layer_index]) # sample nodes and collect support all_support[layer_index] = cur_support # for corresponding layer, Store current support in all_support cur_out_nodes = cur_sampled # Update nodes all_support = self._change_sparse_to_tensor(all_support) # Convert support to tensor representation sampled_X0 = self.features[cur_out_nodes] # Extract features of the sampled nodes return sampled_X0, all_support, 0 # Perform Antithetic Sampling def _Antithetic_sampling(self, v_indices, output_size): support = self.adj[v_indices, :] neis = np.nonzero(np.sum(support, axis=0))[1] # Create two sets of random sampling weights p1 = self.probs[neis] p1 
= p1 / np.sum(p1) # Normalize probability p2 = 1 - p1 p2 = p2 / np.sum(p2) # Sample the first set of neighbors sampled_1 = np.random.choice(np.arange(np.size(neis)), output_size, True, p1) u_sampled_1 = neis[sampled_1] support_1 = support[:, u_sampled_1] sampled_p1 = p1[sampled_1] support_1 = support_1.dot(sp.diags(1.0 / (sampled_p1 * output_size))) # Sample the second set of neighbors with opposite weights sampled_2 = np.random.choice(np.arange(np.size(neis)), output_size, True, p2) u_sampled_2 = neis[sampled_2] support_2 = support[:, u_sampled_2] sampled_p2 = p2[sampled_2] support_2 = support_2.dot(sp.diags(1.0 / (sampled_p2 * output_size))) # Average two sets of sampled neighbors u_sampled = (u_sampled_1 + u_sampled_2) // 2 support = (support_1 + support_2) / 2 #print(&quot;U samples: &quot;, u_sampled) #print(&quot;Support: &quot;, support) return u_sampled, support </code></pre> <p>Output of this code is:</p> <blockquote> <p>epchs:0~9 =&gt; test_loss: 1.882, test_acc: 0.319</p> </blockquote> <blockquote> <p>epchs:10~19 =&gt; test_loss: 1.879, test_acc: 0.319</p> </blockquote> <blockquote> <p>epchs:20~29 =&gt; test_loss: 1.876, test_acc: 0.319</p> </blockquote> <blockquote> <p>epchs:30~39 =&gt; test_loss: 1.879, test_acc: 0.319</p> </blockquote> <blockquote> <p>epchs:40~49 =&gt; test_loss: 1.871, test_acc: 0.319</p> </blockquote> <blockquote> <p>epchs:50~59 =&gt; test_loss: 1.873, test_acc: 0.319</p> </blockquote> <p>It can be observed that test accuracy is not changing, which indicates that there might be a logical mistake in my implementation of Antithetic Sampling. Comments in the code provide details of each step, and I've included links to relevant sources for reference. 
I'm <strong>looking for</strong> help to identify the mistake in my implementation.</p> <blockquote> <p>More details about Antithetic Sampling can be found <a href="https://www.math.arizona.edu/%7Etgk/mc/book_chap5.pdf" rel="nofollow noreferrer">here</a> and <a href="https://artowen.su.domains/mc/Ch-var-basic.pdf" rel="nofollow noreferrer">here</a> (section 8.2).</p> </blockquote>
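One thing worth checking before the GCN plumbing: the line `u_sampled = (u_sampled_1 + u_sampled_2) // 2` averages node *indices*, which produces node IDs that were likely never sampled, so the supports no longer correspond to the returned nodes. In antithetic sampling the two draws are kept as they are, and it is the resulting *estimates* that are averaged. A plain-NumPy sketch of the principle (estimating E[f(U)] for U uniform on [0, 1]) illustrates where the variance reduction comes from:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp                      # any monotone integrand benefits from antithetics

u = rng.random(100_000)
plain = f(u)                    # ordinary Monte Carlo samples
anti = 0.5 * (f(u) + f(1 - u))  # pair each draw with its antithetic counterpart

# Both estimate the same integral (exp(1) - 1 ≈ 1.718)...
print(plain.mean(), anti.mean())
# ...but the antithetic per-sample variance is much lower, because
# f(u) and f(1 - u) are negatively correlated when f is monotone.
print(plain.var(), anti.var())
```

Translated to the sampler above: keep `support_1` built from `u_sampled_1` and `support_2` built from `u_sampled_2`, and average only the estimates they produce (the aggregated layer outputs), never the index arrays themselves.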
<python><sampling><montecarlo><variance><gnn>
2023-08-03 13:08:37
0
733
Asif
76,828,393
11,001,493
How to pivot my dataframe into unique column headers?
<p>This is an example of a bigger dataframe. It has two columns like this:</p> <pre><code>import pandas as pd test = pd.DataFrame({&quot;a&quot;:[&quot;zone 1&quot;, &quot;zone 2&quot;, &quot;code&quot;, &quot;ID&quot;, np.nan, &quot;zone 1&quot;, &quot;zone 2&quot;, &quot;code&quot;, &quot;ID&quot;, &quot;OBS&quot;], &quot;b&quot;:[&quot;aaa&quot;, &quot;bbb&quot;, 31, 10, np.nan, &quot;ddd&quot;, &quot;ggg&quot;, 23, 8, &quot;NO INFO&quot;]}) test Out[9]: a b 0 zone 1 aaa 1 zone 2 bbb 2 code 31 3 ID 10 4 NaN NaN 5 zone 1 ddd 6 zone 2 ggg 7 code 23 8 ID 8 9 OBS NO INFO </code></pre> <p>I would like to convert the values from test[&quot;a&quot;] to column headers and then the occurrences from test[&quot;b&quot;] would be the real values for this dataframe. And for duplicated values in test[&quot;a&quot;], I would have only one column name (no duplicates). The final result should be like this:</p> <pre><code> zone 1 zone 2 code ID nan OBS 0 aaa bbb 31 10 NaN NaN 1 ddd ggg 23 8 NaN NO INFO </code></pre> <p>I tried to use <code>df_pivot = test.pivot(columns=&quot;a&quot;, values=&quot;b&quot;)</code>, but it didn't work as I expected. Can anyone help me?</p>
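A sketch of one approach, assuming each record starts at a "zone 1" row (as in the sample): number the records with a cumulative sum and pivot that number into the index. The separator rows where column `a` is NaN carry no information (the `nan` column in the desired output is empty anyway), so they can be dropped first:

```python
import numpy as np
import pandas as pd

test = pd.DataFrame({
    "a": ["zone 1", "zone 2", "code", "ID", np.nan,
          "zone 1", "zone 2", "code", "ID", "OBS"],
    "b": ["aaa", "bbb", 31, 10, np.nan,
          "ddd", "ggg", 23, 8, "NO INFO"],
})

clean = test.dropna(subset=["a"])
# Each occurrence of "zone 1" opens a new record
record = clean["a"].eq("zone 1").cumsum()
df_pivot = (clean.assign(record=record)
                 .pivot(index="record", columns="a", values="b")
                 .reset_index(drop=True))
```

Keys missing from a record (like OBS in the first one) simply come out as NaN, which matches the desired layout. If records are instead delimited only by the blank rows, `test["a"].isna().cumsum()` gives the record number.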
<python><pandas><pivot>
2023-08-03 13:06:03
2
702
user026
76,828,153
9,923,776
DRF Spectacular - @extend_schema subclass method for Serializers
<p>I'm using drf-spectacular to generate Swagger docs from my DRF project. I have a GenericViewSet that implements the CRUD methods.</p> <p>Over each one, I have an extend_schema decorator. Take this one as an example:</p> <pre><code>@extend_schema( # extra parameters added to the schema parameters=[ OpenApiParameter(name = &quot;jwt_token&quot;, description=&quot;JSON Web Token&quot;, required=True, type=str), OpenApiParameter(name = &quot;id&quot;, type = OpenApiTypes.STR, location=OpenApiParameter.PATH , description = &quot;ID &quot;), ], methods=[&quot;GET&quot;], description = 'Return a serialized User', request = JWTRequestSerializer, responses = { 200: GenericResponseSerializer }, ) def retrieve(self, request, pk=None): </code></pre> <p>I have a lot of ViewSets, one for each model, that extend the generic one and override some methods to make them specific to the model.</p> <p>I need something like this for the serializer in extend_schema. In the example I have GenericResponseSerializer, but I need every subclass that extends this superclass to return its own specific serializer.</p> <p>I've tried with a static method:</p> <pre><code>@staticmethod def getRetrieveSerializer(): return GenericResponseSerializer() ... responses = { 200: getRetrieveSerializer }, ... </code></pre> <p>It works, but it always returns the superclass's method even if I'm extending it. Unfortunately, in a decorator I can't access self, so I can't use a method that is overridden at runtime.</p> <p>Is there a way to reach this goal? Maybe with reflection?</p>
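Since the decorator runs once, at creation time of the class that defines `retrieve`, one pattern that sidesteps the problem is to re-apply the decoration per subclass from `__init_subclass__`, where the subclass (and its own serializer attribute) is available. A framework-free sketch of the idea — the `extend_schema` below is a stand-in that just records the response mapping; in real code it would be drf-spectacular's decorator, and the string serializer names would be your serializer classes:

```python
import functools


def extend_schema(**schema):
    """Stand-in for drf_spectacular.utils.extend_schema: attach metadata."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.schema = schema
        return wrapper
    return decorator


class GenericViewSet:
    response_serializer = "GenericResponseSerializer"  # overridden per subclass

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Re-decorate retrieve so the schema uses the *subclass's* serializer
        cls.retrieve = extend_schema(
            responses={200: cls.response_serializer},
        )(cls.retrieve)

    @extend_schema(responses={200: response_serializer})
    def retrieve(self, request, pk=None):
        return "retrieved"


class UserViewSet(GenericViewSet):
    response_serializer = "UserResponseSerializer"
```

Each subclass only has to set `response_serializer`; the hook re-wraps the inherited method with a fresh decoration, so the superclass's schema is left untouched.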
<python><django><django-rest-framework><drf-spectacular>
2023-08-03 12:40:11
0
656
EviSvil
76,827,940
1,075,114
Debugging into library code from VSCode Jupyter
<p>I'm using VS Code + Jupyter Notebooks to develop ML-based things in Python. I can successfully place breakpoints inside the notebook cells, but weirdly enough, if I try to step into a function that isn't defined there, it steps over it.</p> <p>Example: <a href="https://i.sstatic.net/OOjZY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OOjZY.png" alt="enter image description here" /></a></p> <p>In this cell I am debugging a function called run() which calls model.generate(...), which in turn calls a callback function filter_allowed_tokens defined in a different cell. In this breakpoint, I am inside filter_allowed_tokens(), and according to the call stack (bottom left of the screenshot), the frame below is run(), even though, clearly, there should be a generate() frame in the middle.</p> <p>This is consistent with the fact that I am unable to step into it. This call stack is very weird and I have a hard time explaining it to myself. Is there a setting that enables &quot;full call stacks&quot;, so that I will be able to step into the library code that is called from the notebook's cells?</p> <p>If it has any effect, I am running on Windows using WSL.</p> <p>If I take all of my notebook's code and paste it into a normal Python file, stepping into works as expected. However, I would like to do it using Jupyter, as it allows me to keep the ML models in memory between runs...</p>
<python><jupyter-notebook><vscode-debugger><vscode-remote>
2023-08-03 12:10:47
0
1,911
Noam
76,827,831
10,349,960
Scrapy crawling finishes without crawling all start requests
<p>I am trying to use the <code>scrapy</code> library to run a <a href="https://docs.scrapy.org/en/latest/topics/broad-crawls.html" rel="nofollow noreferrer">broad crawl</a> - crawl where I parse millions of websites. The spider is connected to a PostgreSQL database. This is how I load unprocessed urls before starting the spider:</p> <pre class="lang-py prettyprint-override"><code>def get_unprocessed_urls(self, suffix): &quot;&quot;&quot; Fetch unprocessed urls. &quot;&quot;&quot; print(f'Fetching unprocessed urls for suffix {suffix}...') cursor = self.connection.cursor('unprocessed_urls_cursor', withhold=True) cursor.itersize = 1000 cursor.execute(f&quot;&quot;&quot; SELECT su.id, su.url FROM seed_url su LEFT JOIN footer_seed_url_status fsus ON su.id = fsus.seed_url_id WHERE su.url LIKE \'%.{suffix}\' AND fsus.seed_url_id IS NULL; &quot;&quot;&quot;) ID = 0 URL = 1 urls = [Url(url_row[ID], self.validate_url(url_row[URL])) for url_row in cursor] print('len urls:', len(urls)) return urls </code></pre> <p>This is my spider:</p> <pre class="lang-py prettyprint-override"><code>class FooterSpider(scrapy.Spider): ... 
def start_requests(self): urls = self.handler.get_unprocessed_urls(self.suffix) for url in urls: yield scrapy.Request( url=url.url, callback=self.parse, errback=self.errback, meta={ 'seed_url_id': url.id, } ) def parse(self, response): try: seed_url_id = response.meta.get('seed_url_id') print(response.url) soup = BeautifulSoup(response.text, 'html.parser') footer = soup.find('footer') item = FooterItem( seed_url_id=seed_url_id, html=str(footer) if footer is not None else None, url=response.url ) yield item print(f'Successfully processed url {response.url}') except Exception as e: print('Error while processing url', response.url) print(e) seed_url_id = response.meta.get('seed_url_id') cursor = self.handler.connection.cursor() cursor.execute( &quot;INSERT INTO footer_seed_url_status(seed_url_id, status) VALUES(%s, %s)&quot;, (seed_url_id, str(e))) self.handler.connection.commit() def errback(self, failure): print(failure.value) try: error = repr(failure.value) request = failure.request seed_url_id = request.meta.get('seed_url_id') cursor = self.handler.connection.cursor() cursor.execute( &quot;INSERT INTO footer_seed_url_status(seed_url_id, status) VALUES(%s, %s)&quot;, (seed_url_id, error)) self.handler.connection.commit() except Exception as e: print(e) </code></pre> <p>These are my custom settings for the crawl (taken from <a href="https://docs.scrapy.org/en/latest/topics/broad-crawls.html" rel="nofollow noreferrer">broad crawl</a> documentation page above):</p> <pre class="lang-py prettyprint-override"><code>SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue' SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue' CONCURRENT_REQUESTS = 100 CONCURRENT_ITEMS=1000 SCHEDULER_PRIORITY_QUEUE = 'scrapy.pqueues.DownloaderAwarePriorityQueue' REACTOR_THREADPOOL_MAXSIZE = 20 COOKIES_ENABLED = False DOWNLOAD_DELAY = 0.2 </code></pre> <p>My problem is: the spider does not crawl all urls but stops after crawling only a few hundred (or few thousand, this number seems 
to vary). No warnings or errors are shown in the logs. These are example logs after &quot;finishing&quot; the crawling:</p> <pre><code>{'downloader/exception_count': 2, 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 1, 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 1, 'downloader/request_bytes': 345073, 'downloader/request_count': 1481, 'downloader/request_method_count/GET': 1481, 'downloader/response_bytes': 1977255, 'downloader/response_count': 1479, 'downloader/response_status_count/200': 46, 'downloader/response_status_count/301': 791, 'downloader/response_status_count/302': 512, 'downloader/response_status_count/303': 104, 'downloader/response_status_count/308': 2, 'downloader/response_status_count/403': 2, 'downloader/response_status_count/404': 22, 'dupefilter/filtered': 64, 'elapsed_time_seconds': 113.895788, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2023, 8, 3, 11, 46, 31, 889491), 'httpcompression/response_bytes': 136378, 'httpcompression/response_count': 46, 'log_count/ERROR': 3, 'log_count/INFO': 11, 'log_count/WARNING': 7, 'response_received_count': 43, &quot;robotstxt/exception_count/&lt;class 'twisted.internet.error.DNSLookupError'&gt;&quot;: 1, &quot;robotstxt/exception_count/&lt;class 'twisted.web._newclient.ResponseNeverReceived'&gt;&quot;: 1, 'robotstxt/request_count': 105, 'robotstxt/response_count': 43, 'robotstxt/response_status_count/200': 21, 'robotstxt/response_status_count/403': 2, 'robotstxt/response_status_count/404': 20, 'scheduler/dequeued': 151, 'scheduler/dequeued/memory': 151, 'scheduler/enqueued': 151, 'scheduler/enqueued/memory': 151, 'start_time': datetime.datetime(2023, 8, 3, 11, 44, 37, 993703)} 2023-08-03 11:46:31 [scrapy.core.engine] INFO: Spider closed (finished) </code></pre> <p>Peculiarly enough, this problem seems to appear only on one of the two machines I tried to use for crawling. 
When I run crawling locally on my PC (Windows 11), the crawling does not stop. However, when I run the code on our company's server (Microsoft Azure Windows 10 machine), the crawling stops prematurely, as described above.</p> <p><strong>EDIT</strong>: Full logs can be found <a href="https://pastebin.com/SgJfiezk" rel="nofollow noreferrer">here</a>. In this case the process stops after a few urls.</p>
<python><windows><azure><web-scraping><scrapy>
2023-08-03 11:55:04
1
2,527
druskacik
76,827,746
3,179,698
Python in anaconda install packages using pip, which pip did I use?
<p>Suppose I have several virtual envs in Anaconda:</p> <p>base<br /> py36<br /> py37<br /> py38</p> <p>If I run</p> <pre><code>conda activate py36 </code></pre> <p>I am in the py36 environment. If I then install using</p> <pre><code>conda install xxx </code></pre> <p>I know my xxx package will be installed under the py36 folder of my virtual env.</p> <p>But if I use</p> <pre><code>pip install xxx </code></pre> <p>where is xxx located in this case? Most likely in py36 or base. Which one is the default?</p> <p>The second question is: can we use some option to specify the folder we want it to land in?</p> <p>I mean, even if I am in the py38 env, can I install xxx into the base/py36 folder if I want to?</p>
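A quick way to answer this from inside any environment is to ask the interpreter itself which executable is running and where `pip install` for that interpreter places packages. In a conda env that ships its own pip, `pip` resolves to the active env (py36 here), so that is the default target; the common surprise is an env created without pip, where the shell falls through to base's pip instead. A small diagnostic sketch:

```python
import sys
import sysconfig

# Which interpreter (and therefore which conda env) is actually running
print(sys.executable)

# Where `pip install` run with THIS interpreter places packages
print(sysconfig.get_paths()["purelib"])
```

To target a specific env regardless of which one is active, invoke that env's own interpreter directly, e.g. `~/anaconda3/envs/py36/bin/python -m pip install xxx` (path assumed, adjust to your install); `python -m pip` is also the safest way to be sure pip and the interpreter agree.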
<python><pip><conda>
2023-08-03 11:42:33
2
1,504
cloudscomputes
76,827,482
1,473,517
How to draw lines connecting points in python?
<p>I would like to draw something like this in Python.</p> <p><a href="https://i.sstatic.net/Bsu2n.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bsu2n.jpg" alt="enter image description here" /></a></p> <p>I tried in matplotlib but I wasn't sure where to start.</p>
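A minimal matplotlib starting point — the target figure is only partially visible here, so this assumes the goal is two rows of points with line segments connecting chosen pairs (the point coordinates and the pairing are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line for a live window
import matplotlib.pyplot as plt

top = [(0, 1), (1, 1), (2, 1), (3, 1)]
bottom = [(0.5, 0), (1.5, 0), (2.5, 0)]
# Which top point connects to which bottom point (assumed pairing)
edges = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2)]

fig, ax = plt.subplots()
for i, j in edges:
    (x1, y1), (x2, y2) = top[i], bottom[j]
    ax.plot([x1, x2], [y1, y2], color="gray", zorder=1)  # connecting segment
ax.scatter(*zip(*top), color="tab:blue", zorder=2)
ax.scatter(*zip(*bottom), color="tab:orange", zorder=2)
fig.savefig("lines.png")
```

`ax.plot([x1, x2], [y1, y2])` draws one straight segment per call; drawing the segments first with a lower `zorder` keeps the dots on top.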
<python>
2023-08-03 11:03:49
2
21,513
Simd
76,827,441
513,449
How to override the properties of a single glyph inside a Bokeh plot?
<p>I have the following simple example, and I would like to reset the colour property of one of the circles to be red, as a separate process, after the renderer is initialised.</p> <pre><code>from bokeh.plotting import show x = [2, 4, 8, 5, 9, 3] y = [4, 8, 2, 7, 4, 6] id = ['1001', '1002', '1003', '1004', '1005', '1006'] cds = ColumnDataSource(data={'x': x, 'y': y, 'id': id}) p = figure(width=300, height=200) p.circle(x='x', y='y', source=cds, size=5, color='green') # This is my issue?! #p.circle[0].color = 'red' show(p) </code></pre> <p>I'm aware that I can use the following API to achieve the result, but that is not an option considering what I would like to do next.</p> <pre><code>selected_circle = Circle(fill_alpha=1, fill_color=&quot;firebrick&quot;, line_color=None) nonselected_circle = Circle(fill_alpha=0.2, fill_color=&quot;blue&quot;, line_color=&quot;firebrick&quot;) </code></pre>
<python><plot><bokeh>
2023-08-03 10:58:40
2
2,591
mbilyanov
76,827,378
1,569,058
python all resample the whole row no matter it is null or numeric value
<p>I have a series,</p> <pre><code>pd.Series( [6.22, 6.23, 6.23, 6.24, 6.24, 6.25, np.nan, np.nan, np.nan, np.nan], index = pd.DatetimeIndex(['2023-08-01 10:31:40.110000', '2023-08-01 10:31:43.110000', '2023-08-01 10:31:46.111000', '2023-08-01 10:31:49.111000', '2023-08-01 10:31:52.111000', '2023-08-01 10:31:55.117000', '2023-08-01 10:31:58.112000', '2023-08-01 10:32:01.112000', '2023-08-01 10:32:04.117000', '2023-08-01 10:34:07.095000'], dtype='datetime64[ns]', name='exchange_time', freq=None)) </code></pre> <p>referring to this <a href="https://stackoverflow.com/questions/55896522/python-pandas-dataframe-resample-last-how-to-make-sure-data-comes-from-the-same">python pandas dataframe resample.last how to make sure data comes from the same row</a></p> <p>but,</p> <pre><code>ts.resample('6S', closed='left', label='right').apply(lambda x: x.iloc[-1]) </code></pre> <p>gives error, because my time index has a big gap, how to solve this? thanks.</p> <p>Note: <code>2023-08-01 10:32:00</code> should be resampled as <code>NaN</code> as I always want the previous value no matter it is null or not. so <code>.last()</code> won't work.</p>
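The error comes from the empty bins created by the gap (no rows fall between 10:32:07 and 10:34:07), where `x.iloc[-1]` has nothing to index. Guarding the lambda for empty bins keeps the behaviour for the non-empty ones — the raw last value, NaN included — and returns NaN for the gap. A sketch on the series from the question:

```python
import numpy as np
import pandas as pd

ts = pd.Series(
    [6.22, 6.23, 6.23, 6.24, 6.24, 6.25, np.nan, np.nan, np.nan, np.nan],
    index=pd.DatetimeIndex(
        ['2023-08-01 10:31:40.110', '2023-08-01 10:31:43.110',
         '2023-08-01 10:31:46.111', '2023-08-01 10:31:49.111',
         '2023-08-01 10:31:52.111', '2023-08-01 10:31:55.117',
         '2023-08-01 10:31:58.112', '2023-08-01 10:32:01.112',
         '2023-08-01 10:32:04.117', '2023-08-01 10:34:07.095'],
        name='exchange_time'))

out = ts.resample('6S', closed='left', label='right').agg(
    lambda x: x.iloc[-1] if len(x) else np.nan)  # empty bin -> NaN
```

The 10:32:00 bin covers [10:31:54, 10:32:00), whose last raw row is the NaN at 10:31:58.112, so it comes out as NaN as wanted; the empty bins inside the gap also come out as NaN instead of raising.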
<python><pandas><resample>
2023-08-03 10:51:10
1
2,803
tesla1060
76,827,345
9,003,900
Summarise the data set to show expense per category per month
<p>I have a dataframe with 5 columns, The first 2 columns show the persons name and month, and the remaining columns are categories of expense, Below is an Image of how this data set looks:</p> <p><a href="https://i.sstatic.net/OtNwH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OtNwH.png" alt="Data set" /></a></p> <p>I need to summarise this data and show the total expense per month per category. Below is an image of how the output should look like:</p> <p><a href="https://i.sstatic.net/IhpAq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IhpAq.png" alt="Final Desired Output" /></a></p> <p>In the actual data set I have far more categories (22) than the 3 categories of expense shown here, so some form of automation would definitely help instead of mentioning each and every column name.</p> <p>However, I'll be happy with either solutions/help. Many Thanks in advance and below is the code to create the data set:</p> <pre><code>data = {'Name': ['Tom', 'Nick','Jack', 'Tom', 'Nick','Jack', 'Tom', 'Nick','Jack'], 'Month': ['01', '01', '01', '02', '02', '02', '03', '03', '03'], 'Super':[52, 25, 125, 40, 35, 90, 42, 29, 88], 'travel':[32, 18, 41, 23, None, 45, 28, 21, 76], 'bar': [24, 12, 38, 14, 9, None, 28, 9, 22]} df = pd.DataFrame(data) df </code></pre>
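Since every column after Name and Month is an expense category, the whole summary reduces to a single groupby once Name is dropped — no need to list the 22 category names. A sketch on the sample data:

```python
import pandas as pd

data = {'Name': ['Tom', 'Nick', 'Jack', 'Tom', 'Nick', 'Jack', 'Tom', 'Nick', 'Jack'],
        'Month': ['01', '01', '01', '02', '02', '02', '03', '03', '03'],
        'Super': [52, 25, 125, 40, 35, 90, 42, 29, 88],
        'travel': [32, 18, 41, 23, None, 45, 28, 21, 76],
        'bar': [24, 12, 38, 14, 9, None, 28, 9, 22]}
df = pd.DataFrame(data)

# Every remaining column is summed per month automatically; NaN is skipped
summary = df.drop(columns='Name').groupby('Month').sum()
```

If a long-form result (Month, Category, Total) is preferred instead, `df.melt(id_vars=['Name', 'Month'])` followed by a groupby on `['Month', 'variable']` gives the same totals one row at a time.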
<python><pandas><data-wrangling><data-transform>
2023-08-03 10:46:04
1
320
SAJ
76,827,261
16,383,578
How to cast the correct datatype for each column while parsing CSV most efficiently?
<p>Perhaps this may seem trivial, but I am parsing multiple gigantic CSV files with 6,359,008 rows of data and they occupy 367.105 MiB disk space, merely loading the data without referencing already loaded information will take 2.9GiB RAM. So every bit of performance counts.</p> <p>The CSV files have different number of columns, and the columns have different datatypes. But in each individual CSV file, the number of columns is the same for all rows, and also is the datatype of the fields in the same column.</p> <p>The datatypes for the fields can be <code>int</code>, <code>float</code>, <code>bool</code> and <code>str</code>, and there are lots of omitted fields, in the CSV files they show up as <code>,,</code>, when loaded in Python these become <code>''</code>, in this case I want them to be cast to <code>None</code>. And there are some columns that contain junk data (that is the same for every row), I want to omit these columns.</p> <p>My current method is to supply a list of data types to the parsing function, the function would then cast each field to the corresponding datatype using indexing, but unfortunately for this to work I need a dummy function that returns the string argument itself when no type casting is needed:</p> <pre class="lang-py prettyprint-override"><code>import csv from typing import Generator def string(s: str) -&gt; str: return s def parse_csv(path: str, types: list) -&gt; Generator: with open(path, mode='r', encoding='utf8') as file: reader = csv.reader(file) if len(next(reader)) != len(types): raise ValueError('number of columns is different from supplied number of types') functions = [(i, func) for i, func in enumerate(types) if func] for row in reader: yield [func(e) if (e := row[i]) else None for i, func in functions] </code></pre> <p>The following columns are the datatypes for the files that contain the absolute majority of the data:</p> <pre><code>IP_CITY_TYPES = [string, int, int, int, bool, bool, string, float, float, int] 
IP_COUNTRY_TYPES = [ string, int, int, int, bool, bool, ] IP_ASN_TYPES = [string, int, string] </code></pre> <p>And the following columns are for the smaller files, but they also occupy Mebibytes of disk space:</p> <pre><code>CITIES_TYPES = [ int, None, string, string, string, string, string, string, string, string, string, int, string, bool, ] COUNTRIES_TYPES = [ int, None, string, string, string, string, bool, ] </code></pre> <p>What is a better strategy to convert the datatypes?</p> <hr /> <p>Perhaps I didn't make it very clear. But now, know this, I cannot load all the data onto the RAM all at once with all the repeated combinations of some groups of columns.</p> <p>I have made it very clear at the top of the question, it really takes multiple Gibibytes (2<sup>30</sup> = 1,073,741,824 bytes) just to load all the data without compression, that is, reading the data as CSV files, without type conversion, into Python <code>list</code>s, with all duplicating parts.</p> <p>It takes around 3GiB RAM just to store all the data in RAM, and I have only 16GiB RAM. And this is without any processing involved. I fear if I process the data this way my RAM is going to be filled up and I would have to reboot my computer.</p> <p>My data is the text dump of GeoLite2 IP geolocation database, it contains the information about the autonomous system, country and city of each IPv4 and IPv6 networks. 
The data is mainly split into 6 files, each file contains the information about (ASN | country | city) about (IPv4 | IPv6) networks, and I intend to combine three types of information about each version of IP networks into two SQLite3 tables.</p> <p>The first column of the six files are all IP networks, in this case they are non-overlapping, the networks are unique, and the information about these networks, such as the columns for the information about the country, reappear quite a lot, they repeat, there are about a million networks for each version listed, and only 250 unique countries and 113,552 unique cities listed, so there is a huge amount of repetition and it would make sense to only store the unique combinations of the columns about each network and replace the columns with a reference to the unique combination.</p> <p>More specifically I am turning each table into a list of two element tuples, the first column is a tuple consisting of two integers, they represent the start and end IP addresses of the networks, and the second column is an integer, it is the index of the unique combination of the rest of the columns.</p> <p>I first convert the row from index 1 on into a <code>tuple</code>, then use the <code>tuple</code> as a key in a global variable that is a <code>dict</code>, check if it is present in the <code>dict</code>, if found, replace the columns with the integer value of the key as reference, else set the value corresponding to the key in the <code>dict</code> as the length of the <code>dict</code>, then replace the columns with the value.</p> <p>This is accomplished with the following syntax:</p> <pre><code>TABLE = [] GROUPS = {} for row in parse_csv('/path/to/file', DATATYPES): network, *group = row start, end = parse_network(network) TABLE.append(((start, end), GROUPS.setdefault(tuple(group), len(GROUPS)))) </code></pre>
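Two small observations on the converter table, shown as a sketch: first, `str` itself can serve as the identity converter (calling `str` on a string returns it unchanged), so the `string` wrapper is unnecessary; second, `bool` is a trap for CSV text, since `bool('false')` and `bool('0')` are both `True` for any non-empty string, so boolean columns need an explicit parser. The sketch takes an open file object instead of a path so it is easy to exercise with `StringIO`; the sample data is invented for illustration:

```python
import csv
import io
from typing import Generator


def parse_bool(s: str) -> bool:
    # bool("0") and bool("false") are both True, so map the text explicitly
    return s not in ('0', 'false', 'False')


def parse_csv(file, types: list) -> Generator:
    reader = csv.reader(file)
    if len(next(reader)) != len(types):
        raise ValueError('number of columns differs from supplied number of types')
    # Columns typed None are junk columns and are skipped entirely
    functions = [(i, func) for i, func in enumerate(types) if func is not None]
    for row in reader:
        yield [func(e) if (e := row[i]) else None for i, func in functions]


sample = io.StringIO('city,junk,population,capital\nTokyo,x,37400068,0\n,x,,1\n')
rows = list(parse_csv(sample, [str, None, int, parse_bool]))
```

Empty fields still become `None` before any converter runs, exactly as in the original, and the junk column never appears in the output rows.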
<python><python-3.x><csv><type-conversion>
2023-08-03 10:36:24
2
3,930
Ξένη Γήινος
76,827,108
13,023,224
Pandas + Export to excel + Time Stamp + specify location
<p>I would like to export several df files to a single Excel document.</p> <p>Requirements:</p> <ul> <li><p>File name should include:</p> <p>custom name (using a predefined variable, e.g. month) + timestamp</p> </li> <li><p>I want to specify the path where the file should be saved</p> </li> </ul> <p>The code below, from <a href="https://stackoverflow.com/questions/36466372/pandas-add-timestamp-to-file-name-during-saving-data-frame-in-excel">this question</a>, does the job of storing several df in a single file, but I cannot specify the location.</p> <pre><code>writer = pd.ExcelWriter('output_{}.xlsx'.format(pd.datetime.today().strftime('%y%m%d-%H%M%S'))) df1.to_excel(writer,'Sheet1') df2.to_excel(writer,'Sheet2') writer.save() </code></pre> <p>The code below, from this question, does the job of saving the file with a timestamp, but I can't find how to set the path where to save it, or how to add several df to the same file.</p> <pre><code>import time TodaysDate = time.strftime(&quot;%d-%m-%Y&quot;) excelfilename = TodaysDate +&quot;.xlsx&quot; DataSet.to_excel(excelfilename, sheet_name='sheet1', index=False) </code></pre>
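The two snippets combine directly: build the full path with `os.path.join` and the timestamped name, then hand that path to `ExcelWriter`. Using the writer as a context manager also replaces `writer.save()`, which is gone from recent pandas (as is `pd.datetime` — the `time`/`datetime` modules are used directly instead). Sketch, writing to a temp folder for illustration — substitute your real directory and month variable:

```python
import os
import tempfile
import time

import pandas as pd

df1 = pd.DataFrame({'a': [1, 2]})
df2 = pd.DataFrame({'b': [3, 4]})

out_dir = tempfile.mkdtemp()            # e.g. r'C:\reports' in real use
month = 'August'                        # the predefined variable
stamp = time.strftime('%y%m%d-%H%M%S')
path = os.path.join(out_dir, f'{month}_{stamp}.xlsx')

with pd.ExcelWriter(path) as writer:    # needs openpyxl (or xlsxwriter) installed
    df1.to_excel(writer, sheet_name='Sheet1')
    df2.to_excel(writer, sheet_name='Sheet2')
```

Each `to_excel` call adds one sheet, so any number of dataframes can go into the same timestamped file at the chosen location.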
<python><pandas><excel><export>
2023-08-03 10:15:06
1
571
josepmaria
76,826,928
2,270,789
Unexpected results with parallel numba jit
<p>I'm trying to parallelize <a href="https://github.com/ec-jrc/nrt/blob/4a7c078fda02b3480ca0d967dd17dbcfaec4330f/nrt/stats.py#L21" rel="nofollow noreferrer">this</a> numba jitted function. I initially thought this would be trivial since it is an embarrassingly parallel problem, but it produces unexpected results (different output for the sequential and parallel 'implementations'). Any idea what could be happening there and whether there are ways to make this work?</p> <p>See reproducible example below:</p> <pre class="lang-py prettyprint-override"><code>import numba import numpy as np from sklearn.datasets import make_regression @numba.jit(nopython=True) def nanlstsq(X, y): &quot;&quot;&quot;Return the least-squares solution to a linear matrix equation Analog to ``numpy.linalg.lstsq`` for dependant variable containing ``Nan`` Args: X ((M, N) np.ndarray): Matrix of independant variables y ({(M,), (M, K)} np.ndarray): Matrix of dependant variables Returns: np.ndarray: Least-squares solution, ignoring ``Nan`` &quot;&quot;&quot; beta = np.zeros((X.shape[1], y.shape[1]), dtype=np.float64) for idx in range(y.shape[1]): # subset y and X isna = np.isnan(y[:,idx]) X_sub = X[~isna] y_sub = y[~isna,idx] # Compute beta on data subset XTX = np.linalg.inv(np.dot(X_sub.T, X_sub)) XTY = np.dot(X_sub.T, y_sub) beta[:,idx] = np.dot(XTX, XTY) return beta @numba.jit(nopython=True, parallel=True) def nanlstsq_parallel(X, y): beta = np.zeros((X.shape[1], y.shape[1]), dtype=np.float64) for idx in numba.prange(y.shape[1]): # subset y and X isna = np.isnan(y[:,idx]) X_sub = X[~isna] y_sub = y[~isna,idx] # Compute beta on data subset XTX = np.linalg.inv(np.dot(X_sub.T, X_sub)) XTY = np.dot(X_sub.T, y_sub) beta[:,idx] = np.dot(XTX, XTY) return beta # Generate random data n_targets = 10000 n_features = 3 X, y = make_regression(n_samples=200, n_features=n_features, n_targets=n_targets) # Add random nan to y array y.ravel()[np.random.choice(y.size, 5*n_targets, replace=False)] = np.nan # Run 
the regression beta = nanlstsq(X, y) beta_parallel = nanlstsq_parallel(X, y) np.testing.assert_allclose(beta, beta_parallel) </code></pre>
<python><numpy><parallel-processing><numba><jit>
2023-08-03 09:53:12
1
401
Loïc Dutrieux
76,826,890
4,620,387
HTTP request to authenticate user in Artifactory
<p>I would like to make an HTTP request using a Python script:</p> <ol> <li><p>to authenticate a user using their email and reference token.</p> </li> <li><p>after authentication succeeds, I would like to continue by downloading a file which exists in the JFrog Artifactory, preferably using the requests library.</p> </li> </ol> <p>I searched the web a lot, but didn't find anything relevant.</p> <p>Thanks</p>
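A hedged sketch with `requests`: Artifactory generally accepts a token in a `Bearer` header (or the email/token pair as basic auth), and the same authenticated session can then stream the artifact download. The instance URL, credentials, and paths below are placeholders, not a real server:

```python
import requests

BASE = 'https://example.jfrog.io/artifactory'   # placeholder instance URL
EMAIL = 'user@example.com'                      # placeholder credentials
TOKEN = 'YOUR_REFERENCE_TOKEN'

session = requests.Session()
session.headers['Authorization'] = f'Bearer {TOKEN}'
# Alternatively, basic auth with the email + token pair:
# session.auth = (EMAIL, TOKEN)


def download(repo_path: str, dest: str) -> None:
    # Stream so large artifacts are not held in memory at once
    with session.get(f'{BASE}/{repo_path}', stream=True) as resp:
        resp.raise_for_status()
        with open(dest, 'wb') as fh:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                fh.write(chunk)
```

Hitting an authenticated endpoint first (and checking for a 200 with `raise_for_status`) confirms the credentials before attempting the actual download.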
<python><http>
2023-08-03 09:49:23
1
1,805
Sam12
76,826,694
13,174,189
How to get text from drop-down button using Selenium?
<p>There is a webpage with drop-down button with text inside. I want to retrieve text from that drop-down button, which is : &quot;Description text&quot;.</p> <p>Here is html code part:</p> <pre class="lang-html prettyprint-override"><code> &lt;div data-v-e0a13c66=&quot;&quot; data-v-5e9bf2df=&quot;&quot; id=&quot;DetailDescription&quot; class=&quot;detail-dropdown&quot;&gt; &lt;header data-v-e0a13c66=&quot;&quot; class=&quot;detail-dropdown__header&quot;&gt; &lt;h5 data-v-e0a13c66=&quot;&quot; class=&quot;detail-dropdown__title detail-dropdown__title--open&quot;&gt;Описание&lt;/h5&gt; &lt;svg data-v-e0a13c66=&quot;&quot; width=&quot;8&quot; height=&quot;14&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot; class=&quot;detail-dropdown__arrow--open detail-dropdown__arrow&quot;&gt; &lt;path data-v-e0a13c66=&quot;&quot; d=&quot;M5.38 6.978c-.03-.02-.065-.036-.09-.06A10051.03 10051.03 0 0 1 .544 2.17C.202 1.83.154 1.335.424.962A.916.916 0 0 1 1.765.807c.032.027.061.057.091.087l5.42 5.42c.41.41.41.96 0 1.37L1.831 13.13c-.401.4-1.018.38-1.373-.046a.918.918 0 0 1 0-1.164c.033-.04.07-.078.108-.115L5.29 7.08c.025-.025.06-.04.09-.06v-.043Z&quot;&gt;&lt;/path&gt; &lt;/svg&gt; &lt;/header&gt; &lt;div data-v-e0a13c66=&quot;&quot; class=&quot;detail-dropdown__body&quot;&gt; &lt;article data-v-37bed4a0=&quot;&quot; data-v-e0a13c66=&quot;&quot; itemprop=&quot;description&quot; class=&quot;detail-desc&quot;&gt; &lt;p data-v-37bed4a0=&quot;&quot; class=&quot;detail-desc__text detail-desc__text--main&quot;&gt; &lt;p&gt;Description text.&lt;/p&gt; &lt;!----&gt; &lt;!----&gt;&lt;/article&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>but when i run this code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By def web_driver(): options = webdriver.ChromeOptions() options.add_argument(&quot;--verbose&quot;) options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--disable-gpu') 
options.add_argument(&quot;--window-size=1920, 1200&quot;) options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(options=options) return driver description_tags = driver.find_elements(By.XPATH, &quot;//*[@*[contains(., 'detail-dropdown_body')]]&quot;) list(map(lambda x: x.text, description_tags)) </code></pre> <p>but the output is empty. How could I fix it?</p>
<python><html><python-3.x><selenium-webdriver>
2023-08-03 09:29:18
2
1,199
french_fries
76,826,693
534,238
I cannot get asynchronous generators or iterators to actually run asynchronously
<p>I am working with <a href="https://cloud.google.com/python/docs/reference/bigquerystorage/latest/google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient#google_cloud_bigquery_storage_v1_services_big_query_write_BigQueryWriteAsyncClient_create_write_stream" rel="nofollow noreferrer">BigQuery's Async Storage Write API</a>. It works extensively with <em>async generators and iterators</em>, with which I have no previous experience. I was unfortunately not getting any of the <strong>async</strong> aspect working.</p> <p>I stepped back, to understand async generators better. I am most comfortable with <code>trio</code>, but it seems that <code>trio</code>'s core <code>open_nursery</code> function does not work with generators. So I created the simplest generator I could, just to get my head around it:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import trio &gt;&gt;&gt; async def generator(): ... for i in range(10): ... await trio.sleep(random.random()) ... yield i ... &gt;&gt;&gt; async def main(): ... async for idx in generator(): ... print(idx) ... 0 1 2 3 4 5 6 7 8 9 &gt;&gt;&gt; trio.run(main) </code></pre> <p>This is exactly how it ran, in order. The length of the pause between printing numbers would change, but it was always running synchronously: the generator yielded a value only after awaiting, and the <code>async for</code> would only move to the next value after the previous yield was given.</p> <p>This is not what I expected, nor what I need or hope for. I was expecting that all of the &quot;jobs&quot; would be triggered immediately, that all of the print statements would happen randomly out of order, and that the entire thing would take less than 1 second. 
(<code>random.random()</code> has a range between 0 and 1.)</p> <hr /> <p>How can I work with <em><strong>async generators and iterators</strong></em> in a way that is truly asynchronous, such that all of the tasks are scheduled as fast as possible, and they are run &quot;in parallel&quot;?</p> <p>(I know that in fact I am only using one processor and one thread, but, for instance, <code>trio.sleep()</code> is supposed to provide a checkpoint to say that another task can be started because it is idle.)</p> <p>Please provide a toy code example that, for instance, takes my generator and runs it in a truly parallel way. And if there is a way to use <code>trio</code>'s nurseries with generators or iterators, please also show an example with that.</p>
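The concurrency has to come from spawning tasks up front; an `async for` over a generator is inherently sequential because each iteration awaits the previous one. Below is a minimal sketch of the concurrent pattern using the standard-library `asyncio` (in `trio`, the equivalent would be spawning each job with `nursery.start_soon` inside `trio.open_nursery()`); the `job` helper name is made up for illustration.

```python
import asyncio
import random
import time

async def job(i: int) -> int:
    # each task sleeps independently, so all ten sleeps overlap
    await asyncio.sleep(random.random())
    return i

async def main() -> list[int]:
    # create every task before awaiting any of them
    return await asyncio.gather(*(job(i) for i in range(10)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
print(results)  # gather returns results in submission order
```

Although `gather` orders the return values, the underlying sleeps all run concurrently, so the total runtime is bounded by the longest single sleep (under one second) rather than the sum of all ten.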
<python><asynchronous><iterator><generator>
2023-08-03 09:29:11
0
3,558
Mike Williamson
76,826,666
19,811,869
AttributeError: 'SlideShapes' object has no attribute 'text_frame'
<p>I am using the <code>python-pptx</code> library for editing a PowerPoint presentation in Python. But on running the script below:</p> <pre class="lang-py prettyprint-override"><code>from pptx import Presentation prs = Presentation('path/to/presentation.pptx') slide = prs.slides[0] shape = slide.shapes shape.text_frame.text = &quot;Introduction&quot; </code></pre> <p>the error that arises is:</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[18], line 3 1 slide = prs.slides[0] 2 shape = slide.shapes ----&gt; 3 shape.text_frame.text = &quot;Introduction&quot; AttributeError: 'SlideShapes' object has no attribute 'text_frame' </code></pre> <p>Here I am using <code>python-pptx==0.6.21</code>. Is there a way to do this?</p>
<python><powerpoint><python-pptx>
2023-08-03 09:26:17
1
347
Shuhul Handoo
76,826,664
1,569,058
python pandas resample: how to make last() keep the last null value?
<p>I have a time series like below,</p> <pre><code>ts Out[20]: time 2023-08-01 10:31:40.110 6.22 2023-08-01 10:31:43.110 6.23 2023-08-01 10:31:46.111 6.23 2023-08-01 10:31:49.111 6.24 2023-08-01 10:31:52.111 6.24 2023-08-01 10:31:55.117 6.25 2023-08-01 10:31:58.112 NaN 2023-08-01 10:32:01.112 NaN 2023-08-01 10:32:04.117 NaN 2023-08-01 10:32:07.095 NaN </code></pre> <p>when I do <code>ts.resample('6S', closed='left', label='right').last()</code></p> <pre><code>ts.resample('6S', closed='left', label='right').last() Out[21]: time 2023-08-01 10:31:42 6.22 2023-08-01 10:31:48 6.23 2023-08-01 10:31:54 6.24 2023-08-01 10:32:00 6.25 2023-08-01 10:32:06 NaN 2023-08-01 10:32:12 NaN </code></pre> <p>The problem is that I actually want <code>2023-08-01 10:32:00</code> to be filled with whatever value comes last in its bucket, whether it is numeric or null; in this case it should be <code>NaN</code>. How can I do that? Many thanks.</p> <p>To reproduce the series above:</p> <pre><code>pd.Series( [6.22, 6.23, 6.23, 6.24, 6.24, 6.25, np.nan, np.nan, np.nan, np.nan], index = pd.DatetimeIndex(['2023-08-01 10:31:40.110000', '2023-08-01 10:31:43.110000', '2023-08-01 10:31:46.111000', '2023-08-01 10:31:49.111000', '2023-08-01 10:31:52.111000', '2023-08-01 10:31:55.117000', '2023-08-01 10:31:58.112000', '2023-08-01 10:32:01.112000', '2023-08-01 10:32:04.117000', '2023-08-01 10:32:07.095000'], dtype='datetime64[ns]', name='exchange_time', freq=None)) </code></pre>
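The `10:32:00` bucket shows `6.25` because `.last()` is a skip-NaN aggregation. One hedged option is to swap it for an explicit per-bucket lambda that returns whatever value is literally last in each bucket, NaN included:

```python
import numpy as np
import pandas as pd

ts = pd.Series(
    [6.22, 6.23, 6.23, 6.24, 6.24, 6.25, np.nan, np.nan, np.nan, np.nan],
    index=pd.DatetimeIndex([
        "2023-08-01 10:31:40.110", "2023-08-01 10:31:43.110",
        "2023-08-01 10:31:46.111", "2023-08-01 10:31:49.111",
        "2023-08-01 10:31:52.111", "2023-08-01 10:31:55.117",
        "2023-08-01 10:31:58.112", "2023-08-01 10:32:01.112",
        "2023-08-01 10:32:04.117", "2023-08-01 10:32:07.095",
    ], name="time"),
)

# .last() skips NaN; this lambda keeps the raw last value of each bucket
out = ts.resample("6s", closed="left", label="right").apply(
    lambda s: s.iloc[-1] if len(s) else np.nan
)
print(out)
```

With this, the `10:32:00` label (covering `10:31:54` up to `10:32:00`) ends on the raw `NaN` at `10:31:58.112` and stays `NaN`, while earlier buckets are unchanged.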
<python><pandas><resample>
2023-08-03 09:26:08
1
2,803
tesla1060
76,826,611
6,296,626
Replace string suffix
<p>In Python we have <code>.replace()</code> to replace a substring in a string and now, since Python 3.9, we have <code>.removesuffix()</code> to remove specific suffixes from a string.</p> <p>How can we replace a suffix in Python in as simple a way as possible, preferably in one short line?</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; string1 = &quot;hello.world.wo&quot; &gt;&gt;&gt; string2 = &quot;hello.world&quot; &gt;&gt;&gt; &gt;&gt;&gt; string1.removesuffix(&quot;.wo&quot;) 'hello.world' &gt;&gt;&gt; &gt;&gt;&gt; string2.removesuffix(&quot;.wo&quot;) 'hello.world' &gt;&gt;&gt; &gt;&gt;&gt; string1.replace(&quot;.wo&quot;, &quot;.he&quot;) 'hello.herld.he' &gt;&gt;&gt; &gt;&gt;&gt; string2.replace(&quot;.wo&quot;, &quot;.he&quot;) 'hello.herld' &gt;&gt;&gt; &gt;&gt;&gt; [ SOLUTION HERE with string1 ] 'hello.world.he' &gt;&gt;&gt; &gt;&gt;&gt; [ SAME SOLUTION HERE with string2 ] 'hello.world' </code></pre>
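One way to keep it to a single expression is to pair `removesuffix` with a conditional, so the new suffix is appended only when the old one was actually present; the `replace_suffix` wrapper name is just for illustration:

```python
def replace_suffix(s: str, old: str, new: str) -> str:
    # removesuffix is a no-op when the suffix is absent, so gate the append
    return s.removesuffix(old) + (new if s.endswith(old) else "")

print(replace_suffix("hello.world.wo", ".wo", ".he"))  # hello.world.he
print(replace_suffix("hello.world", ".wo", ".he"))     # hello.world
```

Unlike `.replace()`, this never touches an interior occurrence of the substring, which is what produced `'hello.herld'` above.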
<python>
2023-08-03 09:20:15
2
1,479
Programer Beginner
76,826,214
4,390,525
opencv findChessboardCorners cannot find corners
<p>I am using <em>Python 3.10</em> and <em>OpenCV 4.7</em>.</p> <p>This is my code:</p> <pre><code>img = cv2.imread('1_1691050309204.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ret, corners = cv2.findChessboardCorners(gray, (5, 6)) print(ret) print(corners) </code></pre> <p>It returns <code>False</code> (<code>ret</code>) and <code>None</code> (<code>corners</code>).</p> <p>What's wrong with the code or the image?</p> <p>The chessboard image is attached.<br /> <a href="https://i.sstatic.net/yPRNt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yPRNt.jpg" alt="enter image description here" /></a></p>
<python><opencv><computer-vision><pose-estimation><corner-detection>
2023-08-03 08:26:14
2
1,448
xuanzhui
76,826,199
14,367,125
`py_function` causes `ragged_batch()` to stop working in `tf.data`
<p>I'm working on an object detection project, and use a <code>tf.data.Dataset</code> input pipeline to load local data. Object detection requires not only images but also annotations, and the varying dimensions of the annotations make it even harder. I tried several ways but none of them work. Here are my attempts; I have run out of ideas. I would really appreciate your help!</p> <h2>Parse XML</h2> <p>My local data is in Pascal VOC format. First, I used <code>.from_tensor_slices()</code> to get the <code>annotation_files</code> paths, parsed them to get the image paths, and finally applied <code>.ragged_batch()</code>. But during <code>.map(load)</code>, it automatically converted the string into <code>Tensor(&quot;args_0:0&quot;, shape=(), dtype=string)</code>, which cannot be used by many libraries, such as the XML parser <code>ElementTree</code>. Then I used <code>tf.py_function()</code> to convert it back into a Python string, and then found what appears to be a TensorFlow bug: <a href="https://github.com/tensorflow/tensorflow/issues/60710" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/60710</a></p> <pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET import tensorflow as tf annotation_files = [ '082f7a7f-IMG_0512.xml', '4f4c7f54-IMG_0511.xml', '5381454b-IMG_0510.xml', '05517884-IMG_0514.xml' ] def load(annotationFile): # load annotation (boxes, class ids) def _loadAnnotation(annotationFile): thisBoxes = [] thisClassIDs = [] annotationFile = annotationFile.numpy().decode(&quot;utf-8&quot;) root = ET.parse(annotationFile).getroot() for object in root.findall(&quot;object&quot;): # load bounding boxes bndbox = object.find(&quot;bndbox&quot;) xmin = int(bndbox.find(&quot;xmin&quot;).text) ymin = int(bndbox.find(&quot;ymin&quot;).text) xmax = int(bndbox.find(&quot;xmax&quot;).text) ymax = int(bndbox.find(&quot;ymax&quot;).text) thisBoxes.append([xmin, ymin, xmax, ymax]) # load class IDs className = object.find(&quot;name&quot;).text classID = classNames.index(className) thisClassIDs.append(classID) # image file path imageFile = imageFolder + &quot;/&quot; + root.find('filename').text return (imageFile, tf.cast(thisBoxes, dtype=tf.float32), tf.cast(thisClassIDs, dtype=tf.float32)) imageFile, thisBoxes, thisClassIDs = tf.py_function(_loadAnnotation, [annotationFile], [tf.string, tf.float32, tf.float32]) # load image image = tf.io.read_file(imageFile) image = tf.image.decode_jpeg(image, channels=3) # package annotation (boxes, class ids) to dictionary bounding_boxes = { &quot;boxes&quot;: tf.cast(thisBoxes, dtype=tf.float32), &quot;classes&quot;: tf.cast(thisClassIDs, dtype=tf.float32) } return {&quot;images&quot;: tf.cast(image, dtype=tf.float32), &quot;bounding_boxes&quot;: bounding_boxes} dataset = tf.data.Dataset.from_tensor_slices(annotation_files) dataset = dataset.map(load) dataset = dataset.ragged_batch(4) </code></pre> <h2><code>pickle</code></h2> <p>Then, I tried to package one record of data into a single file with <code>pickle</code> to avoid parsing with <code>ET</code>. Unfortunately, <code>pickle</code> also needs a Python string. Same problem as the first attempt; it does not work.</p> <h2>TFRecord</h2> <p>After that, I tried to store the data in TFRecords and load them with <code>tf.data.TFRecordDataset()</code>. But a problem comes when writing the TFRecord: it raises <code>TypeError: Value must be iterable</code>. While searching I found <a href="https://github.com/tensorflow/tensorflow/issues/9554" rel="nofollow noreferrer">this discussion</a>. It seems I must reshape my tensor to flatten it, and then bring it back to N dimensions when using it. But because of the unknown dimension of the bounding boxes (which is why I want to use <code>ragged_batch</code>), it's impossible for me to flatten them.</p> <pre class="lang-py prettyprint-override"><code>def serializeTFRecord(data): image = data[&quot;images&quot;] classes = data[&quot;bounding_boxes&quot;][&quot;classes&quot;] boxes = data[&quot;bounding_boxes&quot;][&quot;boxes&quot;] feature = { &quot;images&quot;: tf.train.Feature(float_list=tf.train.FloatList(value=image)), &quot;bounding_boxes&quot;: { &quot;classes&quot;: tf.train.Feature(float_list=tf.train.FloatList(value=classes)), &quot;boxes&quot;: tf.train.Feature(float_list=tf.train.FloatList(value=boxes)) } } exampleProto = tf.train.Example(features=tf.train.Features(feature=feature)) return exampleProto.SerializeToString() </code></pre>
<python><tensorflow><tfrecord><tf.data.dataset>
2023-08-03 08:24:47
1
726
Yiming Designer
76,826,191
1,006,955
How to type-annotate correctly a custom MutableMapping implementation?
<p>I've got the following subclassed <code>MutableMapping</code>:</p> <pre><code>from typing import Hashable, Any, MutableMapping from _typeshed import SupportsKeysAndGetItem class MyMutableMapping(MutableMapping[Hashable, Any]): def update(self, other: SupportsKeysAndGetItem[Hashable, Any], /, **kwargs: Any) -&gt; None: pass </code></pre> <p>However, <code>mypy</code> complains about the signature of my overridden <code>update</code> method.</p> <pre><code>test.py:5: error: Signature of &quot;update&quot; incompatible with supertype &quot;MutableMapping&quot; [override] test.py:5: note: Superclass: test.py:5: note: @overload test.py:5: note: def update(self, SupportsKeysAndGetItem[Hashable, Any], /, **kwargs: Any) -&gt; None test.py:5: note: @overload test.py:5: note: def update(self, Iterable[tuple[Hashable, Any]], /, **kwargs: Any) -&gt; None test.py:5: note: @overload test.py:5: note: def update(self, **kwargs: Any) -&gt; None test.py:5: note: Subclass: test.py:5: note: def update(self, SupportsKeysAndGetItem[Hashable, Any], /, **kwargs: Any) -&gt; None Found 1 error in 1 file (checked 1 source file) </code></pre> <p>I went as far as duplicating the exact signature that <code>mypy</code> claims the superclass has but no luck. What am I doing wrong?</p> <p>Edit: I've gone ahead and submitted an <a href="https://github.com/python/mypy/issues/15816" rel="nofollow noreferrer">issue</a> in the GitHub mypy issue tracker.</p>
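For comparison, mypy's note lists *three* overloads for the supertype, so an override that declares only the first one is narrower than the supertype's full signature. A sketch that declares all three overloads and backs the mapping with a plain dict so it also runs (this mirrors the overloads mypy prints, though it doesn't cover every corner of typeshed's definition):

```python
from __future__ import annotations

from collections.abc import Hashable, Iterable, Iterator, MutableMapping
from typing import Any, TYPE_CHECKING, overload

if TYPE_CHECKING:
    from _typeshed import SupportsKeysAndGetItem  # exists only at type-check time

class MyMutableMapping(MutableMapping[Hashable, Any]):
    def __init__(self) -> None:
        self._data: dict[Hashable, Any] = {}

    def __getitem__(self, key: Hashable) -> Any:
        return self._data[key]

    def __setitem__(self, key: Hashable, value: Any) -> None:
        self._data[key] = value

    def __delitem__(self, key: Hashable) -> None:
        del self._data[key]

    def __iter__(self) -> Iterator[Hashable]:
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

    # declare every overload from the supertype, then a single implementation
    @overload
    def update(self, other: SupportsKeysAndGetItem[Hashable, Any], /, **kwargs: Any) -> None: ...
    @overload
    def update(self, other: Iterable[tuple[Hashable, Any]], /, **kwargs: Any) -> None: ...
    @overload
    def update(self, /, **kwargs: Any) -> None: ...
    def update(self, other: Any = (), /, **kwargs: Any) -> None:
        for key, value in dict(other, **kwargs).items():
            self[key] = value

m = MyMutableMapping()
m.update({"a": 1}, b=2)
m.update([("c", 3)])
print(dict(m))
```

The `TYPE_CHECKING` guard plus `from __future__ import annotations` keeps the `_typeshed` import from failing at runtime.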
<python><mypy><typing>
2023-08-03 08:23:43
1
7,498
Nobilis
76,826,175
6,498,757
Build a multi-head tree from a JSON list of children relationships
<p>I want to create a graph or tree structure from raw JSON data from mongoDB. The data points are parent-children related. I tried to create some code to make a data structure and pretty print it, but it still has duplicated rows.</p> <h1>Input</h1> <pre class="lang-js prettyprint-override"><code>[ { &quot;_id&quot;: { &quot;$oid&quot;: &quot;577048a0a5f4970b59493861&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5914d7528ed1755954d6e88f&quot; }, { &quot;$oid&quot;: &quot;56675f1446e0324b4566f6e9&quot; }, { &quot;$oid&quot;: &quot;5924f500381e3b5854db8a0c&quot; }, { &quot;$oid&quot;: &quot;5cc64fadcf995a1d6b54e745&quot; }, { &quot;$oid&quot;: &quot;5d71b5f49a311e6fd587e3dc&quot; }, { &quot;$oid&quot;: &quot;599646b5bd0b9864d8e7391d&quot; }, { &quot;$oid&quot;: &quot;5850ae570b242ff1782ea07c&quot; }, { &quot;$oid&quot;: &quot;58e9a01f33d3f1f208e032de&quot; }, { &quot;$oid&quot;: &quot;5546db39dbeafcef63e6efb6&quot; }, { &quot;$oid&quot;: &quot;5fbdad479a311e6fd58812ff&quot; }, { &quot;$oid&quot;: &quot;5cabd38142524178c66be1f1&quot; }, { &quot;$oid&quot;: &quot;5491ddb3d707d710141027c2&quot; }, { &quot;$oid&quot;: &quot;5d66f0109a311e6fd587e12c&quot; }, { &quot;$oid&quot;: &quot;56ef0296a7b280e91b475aeb&quot; }, { &quot;$oid&quot;: &quot;5cd4dabce84cdd76faaf597f&quot; }, { &quot;$oid&quot;: &quot;58e43b26f79ce47642fae5c4&quot; }, { &quot;$oid&quot;: &quot;58252929332df05c01c1624f&quot; }, { &quot;$oid&quot;: &quot;55dfd7d2bb58034a69d563fe&quot; }, { &quot;$oid&quot;: &quot;56ef03976e418ae84779d827&quot; }, { &quot;$oid&quot;: &quot;5491dd8bd707d710141027c1&quot; }, { &quot;$oid&quot;: &quot;5ba2eb071e910a58acf98e3f&quot; }, { &quot;$oid&quot;: &quot;5cf580fb87823d36e74808bf&quot; }, { &quot;$oid&quot;: &quot;5dae373a3c6e806fd3bbb1fc&quot; }, { &quot;$oid&quot;: &quot;61c3a8bf3824dd48d99ab572&quot; }, { &quot;$oid&quot;: &quot;5a2f468069d12b0a0f4cc68e&quot; }, { &quot;$oid&quot;: &quot;5be8de44c052dc326ab1e810&quot; }, { &quot;$oid&quot;: 
&quot;5c98455267dcef531cbc44f6&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5835f65457eea2852d2bf48c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5924f500381e3b5854db8a0c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;56ef03976e418ae84779d827&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;59c87107cdfaa27f5759b107&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a24977405bdd758991bdb93&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;55aef43d03d4ad2343c2e27f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;56d382a0589377c94614cbbc&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5cdb32901d30d576fe946d44&quot; }, { &quot;$oid&quot;: &quot;5a53e1ac23090e4877973532&quot; }, { &quot;$oid&quot;: &quot;613e86c24faf093afab2c0ea&quot; }, { &quot;$oid&quot;: &quot;62eb0a4456bdf04c7768843c&quot; }, { &quot;$oid&quot;: &quot;5a53ddb8159e7944e0b44795&quot; }, { &quot;$oid&quot;: &quot;606e58ad2f2af36f2219bcc8&quot; }, { &quot;$oid&quot;: &quot;628da1ce238868408a576c22&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;592776bc8ed1755954d6e8d5&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;594c8252381e3b5854db8af7&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;59503126381e3b5854db8afc&quot; }, &quot;children&quot;: [], &quot;kind&quot;: 
&quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;58e43b26f79ce47642fae5c4&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;59644a45442b1e826dd7cdac&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5965a855b3e775836dfd8d5b&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;599646b5bd0b9864d8e7391d&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;59965edbf39ca264d9dd41ca&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;58ddc042f67c5f75420cc403&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5914d7528ed1755954d6e88f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a53e1ac23090e4877973532&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5aaf190e159e7944e0b44fe1&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5b15c4adee10e503a6a6d757&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;55c8024609c8cb2176ba55da&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;581bbdfbf0df7f680817970b&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5491dd7ed707d710141027c0&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d54d4fc5937ee123ad60c9d&quot; }, { &quot;$oid&quot;: &quot;5cb7ef9281c9d71d6a82843f&quot; }, { &quot;$oid&quot;: 
&quot;5d59c4efa0e40d1239ec5e14&quot; }, { &quot;$oid&quot;: &quot;5cf707cde5dbb949ad5cc302&quot; }, { &quot;$oid&quot;: &quot;5d54bd25a0e40d1239ec5d9c&quot; }, { &quot;$oid&quot;: &quot;5c1a9ae870c07e201b3a4880&quot; }, { &quot;$oid&quot;: &quot;5c0ec151a9b73222dfdd82d6&quot; }, { &quot;$oid&quot;: &quot;5de70b4f9a311e6fd587f4e9&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;581bbdfbf0df7f680817970b&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5850ae570b242ff1782ea07c&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5cd4dabce84cdd76faaf597f&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;54fe27cf5c25b1394be6b303&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;6241210f7210ae3af7727813&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;58252929332df05c01c1624f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5b57b20e8fff7b28249879ae&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d01cc1787e6383735ed12a6&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5988de44d4978b0596152387&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d06bcaaefa930373488e7c0&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5ad6ca76159e7944e0b45553&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5b15e2dd5240347c64223fed&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;56675f1446e0324b4566f6e9&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { 
&quot;_id&quot;: { &quot;$oid&quot;: &quot;5a0223cc285a000c2866565c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5491dfa5d707d710141027cb&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5491ddb3d707d710141027c2&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;555560f0ab95c2765d65c3cd&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;55dfd7d2bb58034a69d563fe&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;55cd503216db9f1c385a69db&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5f4f019d0e170b6f23370b53&quot; }, { &quot;$oid&quot;: &quot;5a0223cc285a000c2866565c&quot; }, { &quot;$oid&quot;: &quot;5f6175683c6e806fd3bbcef7&quot; }, { &quot;$oid&quot;: &quot;6289c9d0adb2f54c79e239d4&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a8211ad159e7944e0b44b67&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d82a7063c6e806fd3bbaa37&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5491dd8bd707d710141027c1&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5771fda4d0c907f02785022a&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;57391e8ec1d461090ef7397d&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a53ddb8159e7944e0b44795&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: 
&quot;5af905d15240347c64223af3&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5b6cd1aff6056722483d49f2&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d531e95ff66e07233a2bf76&quot; }, { &quot;$oid&quot;: &quot;5d5475e993b2f3707a737d40&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5702f5161c852788280926f3&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5f0b9d810e170b6f2337073d&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a0dfc4894ba750156a47e9c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;58e9a01f33d3f1f208e032de&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a2472d6f6b76456d074f761&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a2f468069d12b0a0f4cc68e&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5b4e5a958fff7b2824987846&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5546db39dbeafcef63e6efb6&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;56ef0296a7b280e91b475aeb&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;57ce00596dfb923915cf9a7f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5a24b30df6b76456d074f7c4&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: 
&quot;5ba2eb071e910a58acf98e3f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5be8de44c052dc326ab1e810&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5be9cc9c20a38c39bc4c807b&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5be9d2553863d639bbface86&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c075bd666a1c95d2acc6a3f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c083f19d011855d2b41d458&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c0ec151a9b73222dfdd82d6&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c1a9ae870c07e201b3a4880&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c3bd69f70c07e201b3a4a99&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c5cd18745f41e78ebfb0732&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5c5cd1a31279d46f7d90a186&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c5cd1a31279d46f7d90a186&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c609d8686e12578ea359383&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d531e95ff66e07233a2bf76&quot; }, { &quot;$oid&quot;: &quot;5b6cd1aff6056722483d49f2&quot; }, { &quot;$oid&quot;: &quot;5d5475e993b2f3707a737d40&quot; } ], &quot;kind&quot;: 
&quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c64ccb086e12578ea359499&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c85a3b56ce63b531de0eaef&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5c98455267dcef531cbc44f6&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5caabb5bc8bbd278c0392dab&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cabd38142524178c66be1f1&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cb7ef9281c9d71d6a82843f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cc64fadcf995a1d6b54e745&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cc659bc17629e20e1a66f40&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cd4dabce84cdd76faaf597f&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cdb32901d30d576fe946d44&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cedbb5e87823d36e7480823&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cf580fb87823d36e74808bf&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d01919b90347136e8e233e0&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d0191dc87823d36e7480ab6&quot; 
}, { &quot;$oid&quot;: &quot;5d898e340e170b6f2336e8f0&quot; }, { &quot;$oid&quot;: &quot;5d898fc83c6e806fd3bbab51&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d0191dc87823d36e7480ab6&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d898e340e170b6f2336e8f0&quot; }, { &quot;$oid&quot;: &quot;5d898fc83c6e806fd3bbab51&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d01cc1787e6383735ed12a6&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d06bcaaefa930373488e7c0&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d1534be2b4549192d419973&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d153c148c044d174fb9b799&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d153c148c044d174fb9b799&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5cf707cde5dbb949ad5cc302&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d531e95ff66e07233a2bf76&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d5475e993b2f3707a737d40&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d54bd25a0e40d1239ec5d9c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d54d460cd8ccb13386abd43&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5fbdad479a311e6fd58812ff&quot; }, { &quot;$oid&quot;: &quot;599646b5bd0b9864d8e7391d&quot; }, { &quot;$oid&quot;: &quot;5924f500381e3b5854db8a0c&quot; }, { 
&quot;$oid&quot;: &quot;56675f1446e0324b4566f6e9&quot; }, { &quot;$oid&quot;: &quot;5914d7528ed1755954d6e88f&quot; }, { &quot;$oid&quot;: &quot;5d71b5f49a311e6fd587e3dc&quot; }, { &quot;$oid&quot;: &quot;5d66f0109a311e6fd587e12c&quot; }, { &quot;$oid&quot;: &quot;5cd4dabce84cdd76faaf597f&quot; }, { &quot;$oid&quot;: &quot;5546db39dbeafcef63e6efb6&quot; }, { &quot;$oid&quot;: &quot;5491dd8bd707d710141027c1&quot; }, { &quot;$oid&quot;: &quot;5cf580fb87823d36e74808bf&quot; }, { &quot;$oid&quot;: &quot;58252929332df05c01c1624f&quot; }, { &quot;$oid&quot;: &quot;56ef03976e418ae84779d827&quot; }, { &quot;$oid&quot;: &quot;55dfd7d2bb58034a69d563fe&quot; }, { &quot;$oid&quot;: &quot;58e43b26f79ce47642fae5c4&quot; }, { &quot;$oid&quot;: &quot;577048a0a5f4970b59493861&quot; }, { &quot;$oid&quot;: &quot;5c98455267dcef531cbc44f6&quot; }, { &quot;$oid&quot;: &quot;5850ae570b242ff1782ea07c&quot; }, { &quot;$oid&quot;: &quot;5cc64fadcf995a1d6b54e745&quot; }, { &quot;$oid&quot;: &quot;56ef0296a7b280e91b475aeb&quot; }, { &quot;$oid&quot;: &quot;5491ddb3d707d710141027c2&quot; }, { &quot;$oid&quot;: &quot;5cabd38142524178c66be1f1&quot; }, { &quot;$oid&quot;: &quot;58e9a01f33d3f1f208e032de&quot; }, { &quot;$oid&quot;: &quot;5dae373a3c6e806fd3bbb1fc&quot; }, { &quot;$oid&quot;: &quot;5ba2eb071e910a58acf98e3f&quot; }, { &quot;$oid&quot;: &quot;5be8de44c052dc326ab1e810&quot; }, { &quot;$oid&quot;: &quot;5a2f468069d12b0a0f4cc68e&quot; }, { &quot;$oid&quot;: &quot;61c3a8bf3824dd48d99ab572&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d54d4fc5937ee123ad60c9d&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5c1a9ae870c07e201b3a4880&quot; }, { &quot;$oid&quot;: &quot;5d59c4efa0e40d1239ec5e14&quot; }, { &quot;$oid&quot;: &quot;5de70b4f9a311e6fd587f4e9&quot; }, { &quot;$oid&quot;: &quot;5c0ec151a9b73222dfdd82d6&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: 
&quot;5d59c4efa0e40d1239ec5e14&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d66f0109a311e6fd587e12c&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d66f12d2f2af36f22197fed&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d66f1cb3c6e806fd3bba448&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5d66f12d2f2af36f22197fed&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d71b5f49a311e6fd587e3dc&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d82a7063c6e806fd3bbaa37&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d898e340e170b6f2336e8f0&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5d898fc83c6e806fd3bbab51&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5dae373a3c6e806fd3bbb1fc&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5de70b4f9a311e6fd587f4e9&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5e44aee59a311e6fd587fb53&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5e4dc8010e170b6f2336fd33&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5e9e56889a311e6fd587ff9c&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;59503126381e3b5854db8afc&quot; }, { 
&quot;$oid&quot;: &quot;5be9cc9c20a38c39bc4c807b&quot; }, { &quot;$oid&quot;: &quot;5c083f19d011855d2b41d458&quot; }, { &quot;$oid&quot;: &quot;5cc659bc17629e20e1a66f40&quot; }, { &quot;$oid&quot;: &quot;5491dfa5d707d710141027cb&quot; }, { &quot;$oid&quot;: &quot;5cedbb5e87823d36e7480823&quot; }, { &quot;$oid&quot;: &quot;5be9d2553863d639bbface86&quot; }, { &quot;$oid&quot;: &quot;581bbdfbf0df7f680817970b&quot; }, { &quot;$oid&quot;: &quot;5a24b30df6b76456d074f7c4&quot; }, { &quot;$oid&quot;: &quot;594c8252381e3b5854db8af7&quot; }, { &quot;$oid&quot;: &quot;55c8024609c8cb2176ba55da&quot; }, { &quot;$oid&quot;: &quot;59965edbf39ca264d9dd41ca&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5ed02be39a311e6fd58802d6&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;57391e8ec1d461090ef7397d&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5edd66c79a311e6fd5880363&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5f0b9d810e170b6f2337073d&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5f4f019d0e170b6f23370b53&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5f6175683c6e806fd3bbcef7&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5fbdad479a311e6fd58812ff&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;5fe3bd7a3c6e806fd3bbd9f5&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;60187c220e170b6f23371a5e&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { 
&quot;$oid&quot;: &quot;606e58ad2f2af36f2219bcc8&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;613e86c24faf093afab2c0ea&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;61c3a8bf3824dd48d99ab572&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;61c3b11a4faf093afab2c86e&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5a24977405bdd758991bdb93&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;61c3b3e83824dd48d99ab580&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5c3bd69f70c07e201b3a4a99&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;61c3b84d7d28ec48d8455710&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5a2472d6f6b76456d074f761&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;61c3bb467d28ec48d8455714&quot; }, &quot;children&quot;: [ { &quot;$oid&quot;: &quot;5965a855b3e775836dfd8d5b&quot; } ], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;6241210f7210ae3af7727813&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;6289c9d0adb2f54c79e239d4&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;628da1ce238868408a576c22&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;6298322c238868408a576cc5&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;62eb0a4456bdf04c7768843c&quot; }, &quot;children&quot;: [], 
&quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;639643faf06d0004d42b2457&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;63a23108a4243f051c006f77&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;64090ff9903a4604d5e69032&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;64518b0da4243f051c007932&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; }, { &quot;_id&quot;: { &quot;$oid&quot;: &quot;64924d566d951e05091e295d&quot; }, &quot;children&quot;: [], &quot;kind&quot;: &quot;Organisation&quot; } ] </code></pre> <p>When there are duplicates (nodes that occur as child of more than one parent), I would want to only keep the deepest layer and remove those other layers. How can that be done?</p> <h1>Expected output</h1> <pre class="lang-none prettyprint-override"><code> (Organisation) 5835f65457eea2852d2bf48c (Organisation) 59c87107cdfaa27f5759b107 (Organisation) 55aef43d03d4ad2343c2e27f (Organisation) 56d382a0589377c94614cbbc (Organisation) 5cdb32901d30d576fe946d44 (Organisation) 5a53e1ac23090e4877973532 (Organisation) 613e86c24faf093afab2c0ea (Organisation) 62eb0a4456bdf04c7768843c (Organisation) 5a53ddb8159e7944e0b44795 (Organisation) 606e58ad2f2af36f2219bcc8 (Organisation) 628da1ce238868408a576c22 (Organisation) 592776bc8ed1755954d6e8d5 (Organisation) 59644a45442b1e826dd7cdac (Organisation) 58ddc042f67c5f75420cc403 (Organisation) 5aaf190e159e7944e0b44fe1 (Organisation) 5b15c4adee10e503a6a6d757 (Organisation) 5491dd7ed707d710141027c0 (Organisation) 5d54d4fc5937ee123ad60c9d (Organisation) 5c1a9ae870c07e201b3a4880 (Organisation) 5d59c4efa0e40d1239ec5e14 (Organisation) 5de70b4f9a311e6fd587f4e9 (Organisation) 5c0ec151a9b73222dfdd82d6 (Organisation) 5cb7ef9281c9d71d6a82843f 
(Organisation) 5cf707cde5dbb949ad5cc302 (Organisation) 5d54bd25a0e40d1239ec5d9c (Organisation) 54fe27cf5c25b1394be6b303 (Organisation) 6241210f7210ae3af7727813 (Organisation) 5b57b20e8fff7b28249879ae (Organisation) 5d01cc1787e6383735ed12a6 (Organisation) 5988de44d4978b0596152387 (Organisation) 5d06bcaaefa930373488e7c0 (Organisation) 5ad6ca76159e7944e0b45553 (Organisation) 5b15e2dd5240347c64223fed (Organisation) 555560f0ab95c2765d65c3cd (Organisation) 55cd503216db9f1c385a69db (Organisation) 5f4f019d0e170b6f23370b53 (Organisation) 5a0223cc285a000c2866565c (Organisation) 5f6175683c6e806fd3bbcef7 (Organisation) 6289c9d0adb2f54c79e239d4 (Organisation) 5a8211ad159e7944e0b44b67 (Organisation) 5d82a7063c6e806fd3bbaa37 (Organisation) 5771fda4d0c907f02785022a (Organisation) 5af905d15240347c64223af3 (Organisation) 5702f5161c852788280926f3 (Organisation) 5f0b9d810e170b6f2337073d (Organisation) 5a0dfc4894ba750156a47e9c (Organisation) 5b4e5a958fff7b2824987846 (Organisation) 57ce00596dfb923915cf9a7f (Organisation) 5c075bd666a1c95d2acc6a3f (Organisation) 5c5cd18745f41e78ebfb0732 (Organisation) 5c5cd1a31279d46f7d90a186 (Organisation) 5c609d8686e12578ea359383 (Organisation) 5b6cd1aff6056722483d49f2 (Organisation) 5d531e95ff66e07233a2bf76 (Organisation) 5d5475e993b2f3707a737d40 (Organisation) 5c64ccb086e12578ea359499 (Organisation) 5c85a3b56ce63b531de0eaef (Organisation) 5caabb5bc8bbd278c0392dab (Organisation) 5d01919b90347136e8e233e0 (Organisation) 5d0191dc87823d36e7480ab6 (Organisation) 5d898e340e170b6f2336e8f0 (Organisation) 5d898fc83c6e806fd3bbab51 (Organisation) 5d1534be2b4549192d419973 (Organisation) 5d153c148c044d174fb9b799 (Organisation) 5d54d460cd8ccb13386abd43 (Organisation) 577048a0a5f4970b59493861 (Organisation) 5914d7528ed1755954d6e88f (Organisation) 56675f1446e0324b4566f6e9 (Organisation) 5924f500381e3b5854db8a0c (Organisation) 5cc64fadcf995a1d6b54e745 (Organisation) 5d71b5f49a311e6fd587e3dc (Organisation) 599646b5bd0b9864d8e7391d (Organisation) 5850ae570b242ff1782ea07c 
(Organisation) 5cd4dabce84cdd76faaf597f (Organisation) 58e9a01f33d3f1f208e032de (Organisation) 5546db39dbeafcef63e6efb6 (Organisation) 5fbdad479a311e6fd58812ff (Organisation) 5cabd38142524178c66be1f1 (Organisation) 5491ddb3d707d710141027c2 (Organisation) 5d66f0109a311e6fd587e12c (Organisation) 56ef0296a7b280e91b475aeb (Organisation) 58e43b26f79ce47642fae5c4 (Organisation) 58252929332df05c01c1624f (Organisation) 55dfd7d2bb58034a69d563fe (Organisation) 56ef03976e418ae84779d827 (Organisation) 5491dd8bd707d710141027c1 (Organisation) 5ba2eb071e910a58acf98e3f (Organisation) 5cf580fb87823d36e74808bf (Organisation) 5dae373a3c6e806fd3bbb1fc (Organisation) 61c3a8bf3824dd48d99ab572 (Organisation) 5a2f468069d12b0a0f4cc68e (Organisation) 5be8de44c052dc326ab1e810 (Organisation) 5c98455267dcef531cbc44f6 (Organisation) 5d66f1cb3c6e806fd3bba448 (Organisation) 5d66f12d2f2af36f22197fed (Organisation) 5e44aee59a311e6fd587fb53 (Organisation) 5e4dc8010e170b6f2336fd33 (Organisation) 5e9e56889a311e6fd587ff9c (Organisation) 59503126381e3b5854db8afc (Organisation) 5be9cc9c20a38c39bc4c807b (Organisation) 5c083f19d011855d2b41d458 (Organisation) 5cc659bc17629e20e1a66f40 (Organisation) 5491dfa5d707d710141027cb (Organisation) 5cedbb5e87823d36e7480823 (Organisation) 5be9d2553863d639bbface86 (Organisation) 5a24b30df6b76456d074f7c4 (Organisation) 594c8252381e3b5854db8af7 (Organisation) 55c8024609c8cb2176ba55da (Organisation) 581bbdfbf0df7f680817970b (Organisation) 59965edbf39ca264d9dd41ca (Organisation) 5ed02be39a311e6fd58802d6 (Organisation) 57391e8ec1d461090ef7397d (Organisation) 5edd66c79a311e6fd5880363 (Organisation) 5fe3bd7a3c6e806fd3bbd9f5 (Organisation) 60187c220e170b6f23371a5e (Organisation) 61c3b11a4faf093afab2c86e (Organisation) 5a24977405bdd758991bdb93 (Organisation) 61c3b3e83824dd48d99ab580 (Organisation) 5c3bd69f70c07e201b3a4a99 (Organisation) 61c3b84d7d28ec48d8455710 (Organisation) 5a2472d6f6b76456d074f761 (Organisation) 61c3bb467d28ec48d8455714 (Organisation) 5965a855b3e775836dfd8d5b 
(Organisation) 6298322c238868408a576cc5 (Organisation) 639643faf06d0004d42b2457 (Organisation) 63a23108a4243f051c006f77 (Organisation) 64090ff9903a4604d5e69032 (Organisation) 64518b0da4243f051c007932 (Organisation) 64924d566d951e05091e295d </code></pre> <h2>Current code</h2> <pre class="lang-js prettyprint-override"><code>const data = require(&quot;./dat.json&quot;); function createTree(data, rootId) { const root = data.find((item) =&gt; item._id.$oid === rootId); if (!root) return null; const children = root.children.map((child) =&gt; createTree(data, child.$oid)); return { ...root, children, }; } function prettyPrint(nodes, level = 0) { nodes.forEach((node) =&gt; { console.log(`${&quot; &quot;.repeat(level)} (${node.kind}) ${node._id.$oid}`); prettyPrint(node.children, level + 2); }); } const forest = data.map((item) =&gt; createTree(data, item._id.$oid)); prettyPrint(forest); </code></pre> <p>This code produces output, but I don't want those duplicated ids that appear, like here:</p> <p><a href="https://i.sstatic.net/F2Kuv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F2Kuv.png" alt="enter image description here" /></a></p>
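One way to approach the deduplication (keeping only the deepest occurrence of each node) is to first compute the maximum depth at which every id appears in the forest, and then print a node only when the traversal reaches it at exactly that depth. Below is a minimal Python sketch on a hypothetical three-node forest — the ids are illustrative, not taken from the real data, and it assumes the parent/child graph is acyclic:

```python
# Sketch: keep only the deepest occurrence of duplicated nodes.
# The ids below are hypothetical; assumes the parent/child graph is acyclic.

def max_depths(data):
    """Return the maximum depth at which each node id occurs in the forest."""
    children_of = {d["_id"]["$oid"]: [c["$oid"] for c in d["children"]]
                   for d in data}
    referenced = {c for kids in children_of.values() for c in kids}
    roots = [oid for oid in children_of if oid not in referenced]

    depth = {}

    def walk(oid, d):
        # Remember the deepest level at which this id has been seen so far.
        if d > depth.get(oid, -1):
            depth[oid] = d
        for child in children_of.get(oid, []):
            walk(child, d + 1)

    for root in roots:
        walk(root, 0)
    return depth

data = [
    {"_id": {"$oid": "a"}, "children": [{"$oid": "b"}, {"$oid": "c"}], "kind": "Organisation"},
    {"_id": {"$oid": "b"}, "children": [{"$oid": "c"}], "kind": "Organisation"},
    {"_id": {"$oid": "c"}, "children": [], "kind": "Organisation"},
]
depths = max_depths(data)  # "c" is kept at depth 2, its deepest occurrence
```

The same depth map can then be consulted inside the existing prettyPrint logic: skip a node whenever the current level is smaller than its recorded maximum depth.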
<javascript><python><data-structures><graph><tree>
2023-08-03 08:21:56
2
351
Yiffany
76,826,089
15,740,170
reshape rows of numpy array into matrices
<p>I have a NumPy array <code>dataArray</code> with n rows and 10 columns. I want to reshape the first 9 columns of each row into a 3x3 matrix.</p> <p>If I write <code>dataArray[i,1]</code> for a row i, I want to see the reshaped matrix, i.e. <code>dataArray[i,1]</code> is supposed to be of size (3,3), and <code>dataArray[i,2]</code> is supposed to be the 10th column of the original array, i.e. a scalar.</p>
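Since a single numeric NumPy array cannot hold a 3x3 matrix in one cell and a scalar in another, one common approach is to keep two aligned views of the same data instead. A small sketch with hypothetical values:

```python
import numpy as np

# Hypothetical data: n rows, 10 columns.
n = 4
dataArray = np.arange(n * 10, dtype=float).reshape(n, 10)

# Keep two aligned views instead of a ragged array:
# a stack of 3x3 matrices plus a vector of scalars.
matrices = dataArray[:, :9].reshape(n, 3, 3)  # matrices[i] is the 3x3 block of row i
scalars = dataArray[:, 9]                     # scalars[i] is the 10th column of row i
```

Because both arrays share the row index, `matrices[i]` and `scalars[i]` always refer to the same original row.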
<python><numpy><reshape>
2023-08-03 08:11:05
3
335
SolidMechanicsFan
76,826,034
10,418,241
Why does python -m xxx not get the right __name__ value?
<p>Now I have a very simple test Python file that I want to use as a module:</p> <pre><code># hello.py print(__name__) </code></pre> <p>Then I test two commands:</p> <pre><code>python -m hello python -m hello.py </code></pre> <p>The first command outputs <code>__main__</code> for <code>__name__</code>, which is what I can't explain. The second command outputs the <code>__name__</code> value 'hello' that I expected, but there is an error message:</p> <pre><code>/usr/bin/python3: Error while finding module specification for 'hello.py' (ModuleNotFoundError: `__path__` attribute not found on 'hello' while trying to find 'hello.py'). Try using 'hello' instead of 'hello.py' as the module name. </code></pre> <p>Can you explain why <code>__name__</code> is <code>__main__</code> in the first case, and what the difference between the two commands is?</p> <p>Thanks a lot</p>
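The behaviour is easy to reproduce in isolation: when Python runs a module as the main program (which is what `-m` does), it deliberately sets `__name__` to `"__main__"`; the module's own name only shows up when it is imported. A throwaway experiment (the temporary module name is arbitrary):

```python
import os
import subprocess
import sys
import tempfile

# Write a throwaway hello.py and run it with `python -m hello`.
# `-m` prepends the current directory to sys.path, finds the module,
# and executes it as the main program, so __name__ becomes "__main__".
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "hello.py"), "w") as f:
        f.write("print(__name__)\n")
    out = subprocess.run([sys.executable, "-m", "hello"],
                         cwd=tmp, capture_output=True, text=True)

result = out.stdout.strip()
```

By contrast, `python -m hello.py` is parsed as "submodule `py` of package `hello`": Python first imports `hello` (which prints its real `__name__`, `hello`) and then fails because a plain module has no `__path__` to search for submodules — hence the error message.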
<python>
2023-08-03 08:05:04
0
344
Xu Wang
76,825,935
13,078,279
Seemingly bizarre shapes required for multi-label class cGAN
<p>I am making a cGAN with a generator that takes in a shape <code>(86,)</code> label tensor and combines it with random noise of shape <code>(1000,)</code> to produce architectural CAD paths with shape <code>(6, 2532, 39, 5)</code>, and a standard cGAN discriminator. Here are my generator and discrimminator:</p> <pre class="lang-py prettyprint-override"><code>def generator(latent_dim=1000, label_dim=86, output_data_shape=(6, 2532, 39, 5)): model = Sequential() model.add(Dense(256, input_dim=latent_dim * label_dim)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(512)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(2)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(np.prod(output_data_shape), activation='tanh')) model.add(Reshape(output_data_shape)) model.summary() noise = Input(shape=(latent_dim,)) label = Input(shape=(label_dim,)) label_embedding = Embedding(label_dim, latent_dim, input_length=label_dim)(label) model_input = multiply([noise, label_embedding]) model_input = Flatten()(model_input) img = model(model_input) return Model([noise, label_embedding], img) def discriminator(latent_dim=1000, label_dim=86, output_data_shape=(6, 2532, 39, 5)): model = Sequential() # This is the max dims allowed or otherwise it'll run out of memory model.add(Dense(2, input_dim=np.prod(output_data_shape) * label_dim)) model.add(LeakyReLU(alpha=0.2)) model.add(Dense(128)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.4)) model.add(Dense(128)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.4)) model.add(Dense(1, activation='sigmoid')) model.summary() img = Input(shape=output_data_shape) label = Input(shape=(label_dim,)) label_embedding = Embedding(label_dim, np.prod(output_data_shape))(label) flat_img = Flatten()(img) model_input = multiply([flat_img, label_embedding]) model_input = Flatten()(model_input) validity = model(model_input) 
return Model([img, label], validity) </code></pre> <p>And for easier viewing, these are the respective shapes of the generator:</p> <p><a href="https://i.sstatic.net/oYwuI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oYwuI.png" alt="Generator diagram" /></a></p> <p>And the discriminator:</p> <p><a href="https://i.sstatic.net/0XVpc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0XVpc.png" alt="Discriminator diagram" /></a></p> <p>This model compiles, and the discriminator shapes make sense, but the generator shapes are bizarre. While I specifically set a label input shape of <code>Input(shape=(86,))</code>, why does the model state that the input shape for the labels is <code>Input(shape=(86, 1000))</code>?</p>
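The (86, 1000) shape is what `Embedding(label_dim, latent_dim)` produces by construction: the layer is a lookup table, and feeding it a length-86 input returns one 1000-dimensional vector per position, not one per sample. A NumPy analogue with the same (here purely illustrative) dimensions:

```python
import numpy as np

label_dim, latent_dim = 86, 1000

# An Embedding layer is essentially a lookup table of shape (label_dim, latent_dim).
table = np.random.default_rng(0).normal(size=(label_dim, latent_dim))

# A length-86 integer input looks up one row per position, so a single
# sample comes out as (86, 1000): one latent vector per label slot.
label = np.zeros(label_dim, dtype=int)
embedded = table[label]
```

If the intent is one latent vector per sample, the usual Keras pattern is to pass a single integer class index (input shape `(1,)`) to the Embedding and `Flatten` it, rather than an 86-wide label vector.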
<python><tensorflow><machine-learning>
2023-08-03 07:50:37
0
416
JS4137
76,825,622
5,334,697
Add data to the same row in a new column in Excel using Python
<p><a href="https://i.sstatic.net/ysQrm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ysQrm.png" alt="enter image description here" /></a>I have an Excel file with a column Test where I have hello and hello1 as two rows. After performing certain operations I get data which I want to add to the same rows in new columns Test1 and Test2.</p> <p>I have the below code:</p> <pre><code>df = pd.read_excel(input_file) for index, row in df.iterrows(): #Iterating on column Test value = row[&quot;Test&quot;] print (value) # hello matching_rows = some_operation(input_file, column_name, value) print (matching_rows) # hi, hello, how, are , you, I, am , testing, this ab = (matching_rows[&quot;Test1&quot;]) # ab = hi, hello, how, are , you abc = (matching_rows['Test2']) # abc = I, am , testing, this </code></pre> <p>I'm not sure what the next code should be to get the data as shown in the picture below.</p>
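A minimal sketch of one way to do it: write each computed value back to the same row via `df.loc[index, column]`, then save the frame once at the end. The frame and the placeholder results below are hypothetical stand-ins for the real workbook and for `some_operation`:

```python
import pandas as pd

# Hypothetical stand-in for the Excel sheet; column names follow the question.
df = pd.DataFrame({"Test": ["hello", "hello1"]})

for index, row in df.iterrows():
    # Stand-ins for some_operation(...)["Test1"] / ["Test2"].
    ab = f"result1 for {row['Test']}"
    abc = f"result2 for {row['Test']}"
    # Writing by the row's index puts each value on the matching row;
    # the columns are created on first assignment.
    df.loc[index, "Test1"] = ab
    df.loc[index, "Test2"] = abc

# df.to_excel(output_file, index=False)  # write the new columns back to Excel
```

Assigning through `df.loc[index, ...]` (rather than mutating `row`, which is a copy) is what makes the values land in the frame itself.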
<python><python-3.x><excel>
2023-08-03 07:03:12
2
2,169
Aditya Malviya
76,825,607
160,808
How to enable streaming music via the Google Assistant Service on Raspberry Pi 4 with Python
<p>I have installed the Python Google Assistant Service SDK on my Raspberry Pi 4. It's working fine; however, it won't stream music or radio stations.</p> <p>How do I implement this in Python?</p>
<python><google-assistant-sdk>
2023-08-03 07:00:43
1
2,311
Ageis
76,825,475
1,692,099
NameError: global name 'on_connect' is not defined
<pre><code>class HasadCoreManager: # The callback for when the client receives a connect response from the server. def on_connect(self, client, userdata, flags, rc): print(&quot;Connected with result code &quot;+str(rc)) client.subscribe(constants.MQTT_PATH_SUBSCRIBE) client.message_callback_add(&quot;hasadNode/water&quot;, on_message()) client.message_callback_add(&quot;hasadNode/air&quot;, on_message()) </code></pre> <p>When I call the function on_connect from the startLooping method, which is defined in the same class at the same level, like this:</p> <pre><code>def startLooping(self): print(&quot;Core Just started...&quot;) client = mqtt.Client() print(&quot;MQTT Client initialized&quot;) client.on_connect = on_connect client.on_message = on_message client.connect(constants.MQTT_SERVER, 1883, 60) client.loop_start() </code></pre> <p>I get the error: NameError: global name 'on_connect' is not defined</p> <p>What should I do to solve this problem?</p>
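The error comes from Python's name lookup rules rather than from MQTT itself: inside a method, a bare `on_connect` is searched as a local and then a global name, never as an attribute of the instance, so it must be written `self.on_connect`. A paho-free reproduction of the fix:

```python
# Minimal reproduction, independent of paho-mqtt: sibling methods must be
# referenced through self, otherwise Python looks for a global name.
class Manager:
    def on_connect(self, client, userdata, flags, rc):
        return f"connected with result code {rc}"

    def start_looping(self):
        handler = self.on_connect   # a bare `on_connect` here raises NameError
        return handler(None, None, None, 0)

result = Manager().start_looping()
```

In the real code that means `client.on_connect = self.on_connect`, and likewise `self.on_message` — also note it should be passed without parentheses: `on_message()` calls the handler immediately instead of registering it as a callback.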
<python><python-3.x><python-2.7><mqtt>
2023-08-03 06:42:00
2
2,212
mohammad
76,825,298
6,734,243
how to project an epsg:4326 shapely geometry to epsg:3857?
<p>I'm working with a geodataset of fields. I want to compute the surface of each parcel on the fly when they are displayed on the map (the rationale for that is not relevant here).</p> <p>When I was able to manipulate the whole dataset, I would simply have done the following:</p> <pre class="lang-py prettyprint-override"><code># I know 3857 is not optimal everywhere but that will do for the example gdf[&quot;surface&quot;] = gdf.to_crs(3857).area </code></pre> <p>Now what I have is the following:</p> <pre class="lang-py prettyprint-override"><code>feat = gdf[gdf.id == change[&quot;new&quot;]].squeeze() # this is a shapely geometry in 4326 and before computing the area, # I would need to reproject it surface = feat.geometry </code></pre>
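For a single geometry, the usual pattern is `shapely.ops.transform` with a pyproj `Transformer` (the parcel coordinates below are hypothetical):

```python
import pyproj
from shapely.geometry import Polygon
from shapely.ops import transform

# Build a 4326 -> 3857 transformer once; always_xy=True keeps the
# (lon, lat) axis order that shapely geometries use.
project = pyproj.Transformer.from_crs("EPSG:4326", "EPSG:3857",
                                      always_xy=True).transform

# Hypothetical parcel near the equator, roughly 111 m on a side.
parcel = Polygon([(0, 0), (0.001, 0), (0.001, 0.001), (0, 0.001)])
parcel_3857 = transform(project, parcel)
surface = parcel_3857.area  # square metres in the Web Mercator plane
```

In the question's terms that would be `transform(project, feat.geometry).area`; building the `Transformer` once outside the callback avoids re-creating it on every map event.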
<python><projection><shapely>
2023-08-03 06:08:36
1
2,670
Pierrick Rambaud
76,825,261
8,389,274
Split Figma json into chunks
<p>I have a nested dictionary of about 6000 lines and I want to split it into chunks of 200 lines. I then want to send each chunk to a GPT model with a prompt.</p> <p>Is there a way to split it logically such that each chunk is valid JSON?</p> <p>Sample JSON:</p> <pre><code>{ &quot;name&quot;: &quot;WMS Web UI&quot;, &quot;lastModified&quot;: &quot;2023-07-31T10: 17: 26Z&quot;, &quot;thumbnailUrl&quot;: &quot;https: //s3-alpha.figma.com/thumbnails/3ca7f866-5d92-4cc8-a5dc-46087888cd06?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Credential=AKIAQ4GOSFWC7ZQJHYXS%2F20230803%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230803T000000Z&amp;X-Amz-Expires=604800&amp;X-Amz-SignedHeaders=host&amp;X-Amz-Signature=ced8db9ae970ad0c60e8ee79b0f90b458f5e9dd130fe808eb5e7e2c640d968ed&quot;, &quot;version&quot;: &quot;3909344649&quot;, &quot;role&quot;: &quot;viewer&quot;, &quot;editorType&quot;: &quot;figma&quot;, &quot;linkAccess&quot;: &quot;org_view&quot;, &quot;nodes&quot;: { &quot;3121:311590&quot;: { &quot;document&quot;: { &quot;id&quot;: &quot;3121:311590&quot;, &quot;name&quot;: &quot;Field&quot;, &quot;type&quot;: &quot;INSTANCE&quot;, &quot;scrollBehavior&quot;: &quot;SCROLLS&quot;, &quot;blendMode&quot;: &quot;PASS_THROUGH&quot;, &quot;children&quot;: [ { &quot;id&quot;: &quot;I3121:311590;518:154896&quot;, &quot;name&quot;: &quot;Header&quot;, &quot;type&quot;: &quot;FRAME&quot;, &quot;scrollBehavior&quot;: &quot;SCROLLS&quot;, &quot;blendMode&quot;: &quot;PASS_THROUGH&quot;, </code></pre> <p>.... [6000 lines of nested json]</p>
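One logical way to chunk such a document, sketched below: keep the small top-level metadata in every chunk and partition the big `nodes` mapping, flushing a chunk whenever its pretty-printed form would exceed the line budget. Every chunk then stands alone as a valid JSON document (the sample data here is synthetic, not the real Figma export):

```python
import json

def split_nodes(doc, max_lines=200):
    """Partition doc["nodes"] so every chunk is a small, valid JSON document."""
    header = {k: v for k, v in doc.items() if k != "nodes"}
    chunks, current = [], {}
    for node_id, node in doc.get("nodes", {}).items():
        candidate = {**header, "nodes": {**current, node_id: node}}
        # Flush the current chunk if adding this node would blow the budget.
        if current and len(json.dumps(candidate, indent=1).splitlines()) > max_lines:
            chunks.append({**header, "nodes": current})
            current = {}
        current[node_id] = node
    if current:
        chunks.append({**header, "nodes": current})
    return chunks

# Synthetic stand-in for the 6000-line Figma export.
doc = {"name": "demo",
       "nodes": {f"n{i}": {"id": f"n{i}", "children": list(range(30))}
                 for i in range(10)}}
chunks = split_nodes(doc, max_lines=120)
```

A node larger than the budget still becomes its own chunk, and the `header` copied into every chunk is the natural place to carry any context the GPT prompt needs.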
<python><json><dictionary>
2023-08-03 06:01:12
0
577
Aadesh Kulkarni
76,825,015
1,016,428
Cython execution speed vs. MSVC and GCC versions
<h2>Intro</h2> <p>I have a fairly simple Cython module - which I simplified even further for the specific tests I have carried on.</p> <p>This module has only one class, which has only one interesting method (named <code>run</code>): this method accepts as an input a Fortran-ordered 2D NumPy array and two 1D NumPy arrays, and does some very, very simple things on those (see below for code).</p> <p>For the sake of benchmarking, I have compiled the exact same module with MSVC, GCC 8, GCC 11, GCC 12 and GCC 13 on Windows 10 64 bit, Python 3.9.10 64 bit, Cython 0.29.32. All the GCC compilers I have obtained from the excellent Brecht Sanders GitHub page (<a href="https://github.com/brechtsanders" rel="nofollow noreferrer">https://github.com/brechtsanders</a>).</p> <h2>Main Question</h2> <p>The overarching question of this very long post is: I am just curious to know if anyone has any explanation regarding why GCC12 and GCC13 are so much slower than GCC11 (which is the fastest of all). Looks like performances are going down at each release of GCC, rather than getting better...</p> <h2>Benchmarks</h2> <p>In the benchmarking, I simply vary the array dimensions of the 2D and 1D arrays (<code>m</code> and <code>n</code>) and the number on nonzero entries in the 2D and 1D arrays. 
I repeat the <code>run</code> method 20 times per compiler version, per set of <code>m</code> and <code>n</code> and nonzero entries.</p> <p>Optimization settings I am using:</p> <p><strong>MVSC</strong></p> <pre class="lang-py prettyprint-override"><code>MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL'] </code></pre> <p><strong>GCC</strong></p> <pre class="lang-py prettyprint-override"><code>GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables'] GCC_EXTRA_LINK_ARGS = ['-flto'] + GCC_EXTRA_COMPILE_ARGS </code></pre> <p>What I am observing is the following:</p> <ul> <li>MSVC is by far the slowest at executing the benchmark (why would that be on Windows?)</li> <li>The progression GCC8 -&gt; GCC11 is promising, as GCC11 is faster than GCC8</li> <li>GCC12 and GCC13 are both significantly slower than GCC11, with GCC13 being the worst (twice as slow as GCC11 and much worse than GCC12)</li> </ul> <p><strong>Table of Results:</strong></p> <p><em>Runtimes are in milliseconds (ms)</em></p> <p><a href="https://i.sstatic.net/sMO9Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sMO9Q.png" alt="enter image description here" /></a></p> <p><strong>Graph (NOTE: Logarithmic Y axis!!):</strong></p> <p><em>Runtimes are in milliseconds (ms)</em></p> <p><a href="https://i.sstatic.net/59sAA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/59sAA.png" alt="enter image description here" /></a></p> <h2>Code</h2> <p><strong>Cython file:</strong></p> <pre><code>############################################################################### import numpy as np cimport numpy as np import cython from cython.view cimport array as cvarray from libc.float cimport FLT_MAX DTYPE_float = np.float32 ctypedef np.float32_t DTYPE_float_t cdef float SMALL = 1e-10 cdef int MAXSIZE = 1000000 cdef extern from &quot;math.h&quot; nogil: cdef float 
fabsf(float x) ############################################################################### @cython.final cdef class CythonLoops: cdef int m, n cdef int [:] col_starts cdef int [:] row_indices cdef double [:] x def __cinit__(self): self.m = 0 self.n = 0 self.col_starts = cvarray(shape=(MAXSIZE,), itemsize=sizeof(int), format='i') self.row_indices = cvarray(shape=(MAXSIZE,), itemsize=sizeof(int), format='i') self.x = cvarray(shape=(MAXSIZE,), itemsize=sizeof(double), format='d') @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.nonecheck(False) @cython.initializedcheck(False) @cython.cdivision(True) cpdef run(self, DTYPE_float_t[::1, :] matrix, DTYPE_float_t[:] ub_values, DTYPE_float_t[:] priority): cdef Py_ssize_t i, j, m, n cdef int nza, collen cdef double too_large, ok, obj cdef float ub, element cdef int [:] col_starts = self.col_starts cdef int [:] row_indices = self.row_indices cdef double [:] x = self.x m = matrix.shape[0] n = matrix.shape[1] self.m = m self.n = n nza = 0 collen = 0 for i in range(n): for j in range(m+1): if j == 0: element = priority[i] else: element = matrix[j-1, i] if fabsf(element) &lt; SMALL: continue if j == 0: obj = &lt;double&gt;element # Do action 1 with external library else: collen = nza + 1 col_starts[collen] = i+1 row_indices[collen] = j x[collen] = &lt;double&gt;element nza += 1 ub = ub_values[i] if ub &gt; FLT_MAX: too_large = 0.0 # Do action 2 with external library elif ub &gt; SMALL: ok = &lt;double&gt;ub # Do action 3 with external library # Use x, row_indices and col_starts in the external library </code></pre> <p><strong>Setup file:</strong></p> <p>I use the following to compile it:</p> <pre><code>python setup.py build_ext --inplace --compiler=mingw32 gcc13 </code></pre> <p>Where the last argument is the compiler I want to test</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python from 
setuptools import setup from setuptools import Extension from Cython.Build import cythonize from Cython.Distutils import build_ext import numpy as np import os import shutil import sys import getpass MODULE = 'loop_cython_%s' GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables'] GCC_EXTRA_LINK_ARGS = ['-flto'] + GCC_EXTRA_COMPILE_ARGS MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL'] MSVC_EXTRA_LINK_ARGS = MSVC_EXTRA_COMPILE_ARGS def remove_builds(kind): for folder in ['build', 'bin']: if os.path.isdir(folder): if folder == 'bin': continue shutil.rmtree(folder, ignore_errors=True) if os.path.isfile(MODULE + '_%s.c'%kind): os.remove(MODULE + '_%s.c'%kind) def setenv(extra_args, doset=True, path=None, kind='gcc8'): flags = '' if doset: flags = ' '.join(extra_args) for key in ['CFLAGS', 'FFLAGS', 'CPPFLAGS']: os.environ[key] = flags user = getpass.getuser() if doset: path = os.environ['PATH'] if kind == 'gcc8': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_8.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc11': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc12': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_12.2.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'gcc13': os.environ['PATH'] = r'C:\Users\%s\Tools\MinGW64_13.2.0\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\%s\WinPython39\WPy64-39100\python-3.9.10.amd64;'%(user, user) elif kind == 'msvc': os.environ['PATH'] = r'C:\Program Files\Microsoft Visual 
Studio\2022\Community\VC\Tools\MSVC\14.35.32215\bin\Hostx64\x64;C:\WINDOWS\system32;C:\WINDOWS;C:\Users\J0514162\WinPython39\WPy64-39100\python-3.9.10.amd64;C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64' os.environ['LIB'] = r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\lib\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64' os.environ[&quot;DISTUTILS_USE_SDK&quot;] = '1' os.environ[&quot;MSSdk&quot;] = '1' else: os.environ['PATH'] = path return path class CustomBuildExt(build_ext): def build_extensions(self): # Override the compiler executables. Importantly, this # removes the &quot;default&quot; compiler flags that would # otherwise get passed on to to the compiler, i.e., # distutils.sysconfig.get_var(&quot;CFLAGS&quot;). self.compiler.set_executable(&quot;compiler_so&quot;, &quot;gcc -mdll -O -Wall -DMS_WIN64&quot;) self.compiler.set_executable(&quot;compiler_cxx&quot;, &quot;g++ -O -Wall -DMS_WIN64&quot;) self.compiler.set_executable(&quot;linker_so&quot;, &quot;gcc -shared -static&quot;) self.compiler.dll_libraries = [] build_ext.build_extensions(self) if __name__ == '__main__': os.system('cls') kind = None for arg in sys.argv: if arg.strip() in ['gcc8', 'gcc11', 'gcc12', 'gcc13', 'msvc']: kind = arg sys.argv.remove(arg) break base_file = os.path.join(os.getcwd(), MODULE[0:-3]) source = base_file + '.pyx' target = base_file + '_%s.pyx'%kind shutil.copyfile(source, target) if kind == 'msvc': extra_compile_args = MSVC_EXTRA_COMPILE_ARGS[:] extra_link_args = MSVC_EXTRA_LINK_ARGS[:] + ['/MANIFEST'] else: extra_compile_args = GCC_EXTRA_COMPILE_ARGS[:] extra_link_args = GCC_EXTRA_LINK_ARGS[:] path = setenv(extra_compile_args, kind=kind) remove_builds(kind) define_macros = [('WIN32', 1)] nname = MODULE%kind include_dirs = [np.get_include()] if kind == 'msvc': include_dirs += [r'C:\Program Files (x86)\Windows 
Kits\10\Include\10.0.22000.0\ucrt', r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\include', r'C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\shared'] extensions = [ Extension(nname, [nname + '.pyx'], extra_compile_args=extra_compile_args, extra_link_args=extra_link_args, include_dirs=include_dirs, define_macros=define_macros)] # build the core extension(s) setup_kwargs = {'ext_modules': cythonize(extensions, compiler_directives={'embedsignature' : False, 'boundscheck' : False, 'wraparound' : False, 'initializedcheck': False, 'cdivision' : True, 'language_level' : '3str', 'nonecheck' : False}, force=True, cache=False, quiet=False)} if kind != 'msvc': setup_kwargs['cmdclass'] = {'build_ext': CustomBuildExt} setup(**setup_kwargs) setenv([], False, path) remove_builds(kind) </code></pre> <p><strong>Test code:</strong></p> <pre class="lang-py prettyprint-override"><code>import os import numpy import time import loop_cython_msvc as msvc import loop_cython_gcc8 as gcc8 import loop_cython_gcc11 as gcc11 import loop_cython_gcc12 as gcc12 import loop_cython_gcc13 as gcc13 # M N NNZ(matrix) NNZ(priority) NNZ(ub) DIMENSIONS = [(1661 , 2608 , 3560 , 375 , 2488 ), (2828 , 3512 , 4333 , 413 , 2973 ), (780 , 985 , 646 , 23 , 984 ), (799 , 1558 , 1883 , 301 , 1116 ), (399 , 540 , 388 , 44 , 517 ), (10545, 10486, 14799 , 1053 , 10041), (3369 , 3684 , 3684 , 256 , 3242 ), (2052 , 5513 , 4772 , 1269 , 3319 ), (224 , 628 , 1345 , 396 , 594 ), (553 , 1475 , 1315 , 231 , 705 )] def RunTest(): print('M N NNZ MSVC GCC 8 GCC 11 GCC 12 GCC 13') for m, n, nnz_mat, nnz_priority, nnz_ub in DIMENSIONS: print('%-6d %-6d %-8d'%(m, n, nnz_mat), end='') for solver, label in zip([msvc, gcc8, gcc11, gcc12, gcc13], ['MSVC', 'GCC 8', 'GCC 11', 'GCC 12', 'GCC 13']): numpy.random.seed(123456) size = m*n idxes = numpy.arange(size) matrix = numpy.zeros((size, ), dtype=numpy.float32) idx_mat = numpy.random.choice(idxes, nnz_mat) matrix[idx_mat] = 
numpy.random.uniform(0, 1000, size=(nnz_mat, )) matrix = numpy.asfortranarray(matrix.reshape((m, n))) idxes = numpy.arange(m) priority = numpy.zeros((m, ), dtype=numpy.float32) idx_pri = numpy.random.choice(idxes, nnz_priority) priority[idx_pri] = numpy.random.uniform(0, 1000, size=(nnz_priority, )) idxes = numpy.arange(n) ub_values = numpy.inf*numpy.ones((n, ), dtype=numpy.float32) idx_ub = numpy.random.choice(idxes, nnz_ub) ub_values[idx_ub] = numpy.random.uniform(0, 1000, size=(nnz_ub, )) solver = solver.CythonLoops() time_tot = [] for i in range(20): start = time.perf_counter() solver.run(matrix, ub_values, priority) elapsed = time.perf_counter() - start time_tot.append(elapsed*1e3) print('%-8.4g'%numpy.mean(time_tot), end=' ') print() if __name__ == '__main__': os.system('cls') RunTest() </code></pre> <h1>EDIT</h1> <p>After @PeterCordes comments, I have changed the optimization flags to this:</p> <pre class="lang-py prettyprint-override"><code>MSVC_EXTRA_COMPILE_ARGS = ['/O2', '/GS-', '/fp:fast', '/Ob2', '/nologo', '/arch:AVX512', '/Ot', '/GL', '/QIntel-jcc-erratum'] </code></pre> <pre class="lang-py prettyprint-override"><code>GCC_EXTRA_COMPILE_ARGS = ['-Ofast', '-funroll-loops', '-flto', '-ftree-vectorize', '-march=native', '-fno-asynchronous-unwind-tables', '-Wa,-mbranches-within-32B-boundaries'] </code></pre> <p>MSVC appears to be marginally faster than before (between 5% and 10%), but GCC12 and GCC13 are slower than before (between 3% and 20%). Below a graph with the results on the largest 2D matrix:</p> <p><strong>Note</strong>: &quot;Current&quot; means with the latest optimization flags suggested by @PeterCordes, &quot;Previous&quot; is the original set of flags.</p> <p><a href="https://i.sstatic.net/2z4cO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2z4cO.png" alt="enter image description here" /></a></p>
<python><gcc><visual-c++><cython><compiler-optimization>
2023-08-03 05:10:10
1
1,449
Infinity77
76,824,692
2,867,882
ftplib RETR of a small file gets stuck
<p>Ok, first things first - there is little configuration available on this FTP; it is part of an industrial HMI control panel. I have tested via Filezilla the ability to download a file without PASV and it works fine. Here is the verbose from filezilla - you can see it takes about 1 second because we're talking kilobytes here.</p> <pre><code>Status: Starting download of /USB_Pen_Memory/Log/Data Log Trend_Ops_Data_Log_230802.txt Trace: CFtpChangeDirOpData::Send() in state 0 Trace: CFtpChangeDirOpData::Send() in state 2 Command: CWD /USB_Pen_Memory/Log Trace: CFtpControlSocket::OnReceive() Response: 250 CWD command successful. Trace: CFtpChangeDirOpData::ParseResponse() in state 2 Trace: CFtpControlSocket::ResetOperation(0) Trace: CControlSocket::ResetOperation(0) Trace: CFtpChangeDirOpData::Reset(0) in state 2 Trace: CFtpFileTransferOpData::SubcommandResult(0) in state 1 Trace: CFtpControlSocket::SetAsyncRequestReply Trace: CControlSocket::SendNextCommand() Trace: CFtpFileTransferOpData::Send() in state 5 Trace: CFtpRawTransferOpData::Send() in state 0 Trace: CFtpRawTransferOpData::Send() in state 1 Command: TYPE A Trace: CFtpControlSocket::OnReceive() Response: 200 Type set to A. Trace: CFtpRawTransferOpData::ParseResponse() in state 1 Trace: CControlSocket::SendNextCommand() Trace: CFtpRawTransferOpData::Send() in state 2 Command: PASV Trace: CFtpControlSocket::OnReceive() Response: 502 Invalid command. Trace: CFtpRawTransferOpData::ParseResponse() in state 2 Trace: CControlSocket::SendNextCommand() Trace: CFtpRawTransferOpData::Send() in state 2 Command: PORT 192,168,0,41,204,135 Trace: CFtpControlSocket::OnReceive() Response: 200 PORT command successful. 
Trace: CFtpRawTransferOpData::ParseResponse() in state 2 Trace: CControlSocket::SendNextCommand() Trace: CFtpRawTransferOpData::Send() in state 4 Command: RETR Data Log Trend_Ops_Data_Log_230802.txt Trace: CTransferSocket::OnAccept(0) Trace: CTransferSocket::OnConnect Trace: CFtpControlSocket::OnReceive() Response: 150 Opening ASCII mode data connection for Data Log Trend_Ops_Data_Log_230802.txt. Trace: CFtpRawTransferOpData::ParseResponse() in state 4 Trace: CControlSocket::SendNextCommand() Trace: CFtpRawTransferOpData::Send() in state 5 Trace: CFtpControlSocket::OnReceive() Response: 226 Transfer complete. Trace: CFtpRawTransferOpData::ParseResponse() in state 5 Trace: CControlSocket::SendNextCommand() Trace: CFtpRawTransferOpData::Send() in state 8 Trace: CTransferSocket::TransferEnd(1) Trace: CFtpControlSocket::TransferEnd() Trace: CFtpControlSocket::ResetOperation(0) Trace: CControlSocket::ResetOperation(0) Trace: CFtpRawTransferOpData::Reset(0) in state 8 Trace: CFtpFileTransferOpData::SubcommandResult(0) in state 7 Trace: CFtpControlSocket::ResetOperation(0) Trace: CControlSocket::ResetOperation(0) Trace: CFtpFileTransferOpData::Reset(0) in state 7 Status: File transfer successful, transferred 48.20 KB in 1 second </code></pre> <p>Here is my code in AWS Lambda Python version 3.10</p> <pre><code>from datetime import datetime as dt from datetime import timedelta from io import BytesIO import ftplib import logging import os import csv import time import boto3 import logging def ftp_connection(HOST, USER, PASS): ftp = ftplib.FTP(source_address=()) ftp.connect(HOST) ftp.login(USER,PASS) ftp.set_pasv(False) ftp.set_debuglevel(1) return ftp def lambda_handler(event, context): yesterday = dt.strftime(dt.today() - timedelta (days = 1), '%y%m%d') timestamp = time.strftime(&quot;%Y%m%d%H%M%S&quot;, time.gmtime()) config_s3_bucket_name = os.environ['S3_CONFIG_LOC'] data_s3_bucket_name = os.environ['S3_DATA_LOC'] config_file_key = 'ftp_config.csv' crawler_name = 
'FTPLogs' logging.basicConfig(level=logging.ERROR, format='%(asctime)s %(levelname)s %(name)s %(message)s') logger=logging.getLogger(__name__) # Step 1: Connect to S3 and download the config file s3 = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4')) config_file_obj = s3.get_object(Bucket=config_s3_bucket_name, Key=config_file_key) config_file_data = config_file_obj['Body'].read().decode('utf-8').splitlines() config_reader = csv.DictReader(config_file_data) # Step 2: Loop through each row in the config file and load latest log to s3 for row in config_reader: ftp_site_name = row['ftp_site_name'] ftp_ip = row['ftp_ip_address'] ftp_username = row['ftp_username'] ftp_password = row['ftp_password'] file_directory = row['ftp_log_directory'] filename_convention = row['filename_convention'] conn = ftp_connection(ftp_ip, ftp_username, ftp_password) #change the directory conn.cwd(file_directory) try : with BytesIO() as output_buffer: conn.transfercmd('RETR ' + filename_convention + yesterday + '.txt', output_buffer.write) output_buffer.write(f',{ftp_site_name},{timestamp}\n'.encode('utf-8')) output_buffer.seek(0) s3_key = f'{ftp_site_name}/dt={timestamp}/{latest_csv_file}' s3.upload_fileobj(output_buffer, Bucket=data_s3_bucket_name, Key=s3_key) # Close file and connection local_file.close() conn.close() # Upload the file from the local temporary directory to S3 s3 = boto3.client('s3') s3_key = f'dt={timestamp}/{filename_convention}{yesterday}.txt' s3.upload_file(local_temp_file, Bucket=data_s3_bucket_name, Key=s3_key) #Log any error except ftplib.all_errors as e : logger.error(str(e)) continue logging.shutdown() glue = boto3.client('glue', config=boto3.session.Config(signature_version='s3v4')) glue.start_crawler(Name=crawler_name) return { 'statusCode': 200, 'body': 'Function executed successfully. 
Crawler Started' } </code></pre> <p>Problem is it just sits:</p> <pre><code>2023-08-02T19:58:15.620-07:00 *cmd* 'CWD /USB_Pen_Memory/Log' 2023-08-02T19:58:15.675-07:00 *resp* '250 CWD command successful.' 2023-08-02T19:58:15.676-07:00 *cmd* 'PORT 169,254,76,1,132,71' 2023-08-02T19:58:15.730-07:00 *resp* '200 PORT command successful.' 2023-08-02T19:58:15.730-07:00 *cmd* 'REST &lt;built-in method write of _io.BytesIO object at 0x7f4ff15a53a0&gt;' 2023-08-02T19:58:15.790-07:00 *resp* '350 Restarting at 0.' 2023-08-02T19:58:15.790-07:00 *cmd* 'RETR Data Log Trend_Ops_Data_Log_230801.txt' 2023-08-02T20:00:15.274-07:00 2023-08-03T03:00:15.273Z f68b8176-a188-4a50-ae45-6c12167dcad6 Task timed out after 120.05 seconds </code></pre> <p>the PORT has caught my eye here as it stands out as different than what is shown from filezilla. I have tried retrbinary and transfercmd; both are the same. What am I doing wrong?</p> <p>Edit:</p> <p>Since Tim pointed out that s3 is slow, here is the version I have which attempts to download to local tmp which is seeing the same exact problem. Also I just remembered that I have two sites with different IPs and both are showing the same thing. 
This strange unrouteable PORT IP.</p> <pre><code>from datetime import datetime as dt from datetime import timedelta from io import BytesIO import ftplib import logging import os import csv import time import boto3 import logging def ftp_connection(HOST, USER, PASS): ftp = ftplib.FTP(source_address=()) ftp.connect(HOST) ftp.login(USER,PASS) ftp.set_pasv(False) ftp.set_debuglevel(1) return ftp def lambda_handler(event, context): yesterday = dt.strftime(dt.today() - timedelta (days = 1), '%y%m%d') timestamp = time.strftime(&quot;%Y%m%d%H%M%S&quot;, time.gmtime()) config_s3_bucket_name = os.environ['S3_CONFIG_LOC'] data_s3_bucket_name = os.environ['S3_DATA_LOC'] config_file_key = 'ftp_config.csv' crawler_name = 'FTPLogs' logging.basicConfig(level=logging.ERROR, format='%(asctime)s %(levelname)s %(name)s %(message)s') logger=logging.getLogger(__name__) # Step 1: Connect to S3 and download the config file s3 = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4')) config_file_obj = s3.get_object(Bucket=config_s3_bucket_name, Key=config_file_key) config_file_data = config_file_obj['Body'].read().decode('utf-8').splitlines() config_reader = csv.DictReader(config_file_data) # Step 2: Loop through each row in the config file for row in config_reader: ftp_site_name = row['ftp_site_name'] ftp_ip = row['ftp_ip_address'] ftp_username = row['ftp_username'] ftp_password = row['ftp_password'] file_directory = row['ftp_log_directory'] filename_convention = row['filename_convention'] conn = ftp_connection(ftp_ip, ftp_username, ftp_password) #change the directory conn.cwd(file_directory) try : # Define the local temporary directory local_temp_directory = '/tmp' os.makedirs(local_temp_directory, exist_ok=True) # Define the local temporary file path local_temp_file = os.path.join(local_temp_directory, f'{filename_convention}{yesterday}.txt') # Download the file to the local temporary directory with open(local_temp_file, 'wb') as local_file: conn.retrbinary(f'RETR 
{filename_convention}{yesterday}.txt', local_file.write) local_file.write(f',{ftp_site_name},{timestamp}\n'.encode('utf-8')) # Close file and connection local_file.close() conn.close() # Upload the file from the local temporary directory to S3 s3 = boto3.client('s3') s3_key = f'dt={timestamp}/{filename_convention}{yesterday}.txt' s3.upload_file(local_temp_file, Bucket=data_s3_bucket_name, Key=s3_key) #Log any error except ftplib.all_errors as e : logger.error(str(e)) continue logging.shutdown() glue = boto3.client('glue', config=boto3.session.Config(signature_version='s3v4')) glue.start_crawler(Name=crawler_name) return { 'statusCode': 200, 'body': 'Function executed successfully. Crawler Started' } </code></pre> <p>EDIT 3:</p> <p>Yes I used prints because... that is really all.</p> <pre><code> # Download the file to the local temporary directory with open(local_temp_file, 'wb') as local_file: print('opened local file') conn.retrbinary(f'RETR {filename_convention}{yesterday}.txt', local_file.write) print('{filename_convention}{yesterday} written to local tmp') local_file.write(f',{ftp_site_name},{timestamp}\n'.encode('utf-8')) print('{{ftp_site_name} and {timestamp} written to local file') # Close file and connection local_file.close() conn.close() print('local file and connection closed') # Upload the file from the local temporary directory to S3 s3 = boto3.client('s3') s3_key = f'dt={timestamp}/{filename_convention}{yesterday}.txt' s3.upload_file(local_temp_file, Bucket=data_s3_bucket_name, Key=s3_key) </code></pre> <p>Logs from cloudwatch:</p> <pre><code> 2023-08-02T21:31:02.647-07:00 *cmd* 'CWD /SD1/Log' 2023-08-02T21:31:02.795-07:00 *resp* '250 CWD command successful.' 2023-08-02T21:31:02.795-07:00 opened local file 2023-08-02T21:31:02.795-07:00 *cmd* 'TYPE I' 2023-08-02T21:31:02.907-07:00 *resp* '200 Type set to I.' 2023-08-02T21:31:02.907-07:00 *cmd* 'PORT 169,254,76,1,145,9' 2023-08-02T21:31:03.062-07:00 *resp* '200 PORT command successful.' 
2023-08-02T21:31:03.062-07:00 *cmd* 'RETR Data Log Trend_Ops_Data_Log_230802.txt' </code></pre> <p>It never gets to even writing the local file. I hope that is enough for @TimRoberts</p> <p>Latest:</p> <p>I added <code>conn.sendcmd('SITE CHMOD 777 ' + filename_convention + yesterday + '.txt')</code> to my script to test and it started working locally. I added the same script to AWS, and the only difference is STILL the following:</p> <p>Local:</p> <pre><code>*cmd* 'PORT 192,168,0,41,250,76' </code></pre> <p>AWS:</p> <pre><code>*cmd* 'PORT 169,254,76,1,156,157' </code></pre> <p>This might mean there is some kind of networking setup issue on the AWS side that I need to research.</p>
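One detail in the first debug log above is worth noting: the line `*cmd* 'REST <built-in method write of _io.BytesIO object ...>'` shows that the write callback was passed to `transfercmd`, whose second parameter is a REST offset, not a callback; only `retrbinary` accepts a callback. A minimal stdlib check of the signatures (the actual connection calls are commented out, and the host/credentials are placeholders):

```python
import ftplib
import inspect

# transfercmd() takes only (cmd, rest); it has no callback parameter, so
# passing output_buffer.write as the second argument sends it as a REST
# offset -- which matches the stray "REST <built-in method write ...>"
# line in the debug log.
transfercmd_params = list(inspect.signature(ftplib.FTP.transfercmd).parameters)
print(transfercmd_params)  # ['self', 'cmd', 'rest']

# retrbinary() is the call that accepts a per-block callback:
retrbinary_params = list(inspect.signature(ftplib.FTP.retrbinary).parameters)
print(retrbinary_params)

# Hypothetical corrected download (host/credentials are placeholders):
# ftp = ftplib.FTP()
# ftp.connect("192.0.2.10", timeout=30)  # a timeout avoids hanging forever
# ftp.login("user", "password")
# ftp.set_pasv(False)
# with open("/tmp/log.txt", "wb") as f:
#     ftp.retrbinary("RETR Data Log Trend_Ops_Data_Log_230802.txt", f.write)
```

This does not change the Lambda-side symptom, though: `PORT 169,254,76,1,...` advertises a link-local address, so in active mode the server cannot open a data connection back into the Lambda environment, which would explain the timeout.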
<python><aws-lambda><ftplib>
2023-08-03 03:14:35
1
1,076
Shenanigator
76,824,618
1,644,925
Build python wheel with precompiled shared library
<p>When building a wheel of a Python extension module, how can I tell <code>setup()</code> to use an existing/precompiled shared library?</p> <p>For awful horrible reasons, the SO/DLL has to be compiled by a build system that cannot be invoked by Python. (FWIW it's compiled with the stable limited ABI.) But I need my <code>setup.py</code> to bundle the library correctly.</p> <p>Is there an option for this? Is it possible to subclass <code>Extension</code> somehow and just return the existing library when asked to build it?</p>
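One approach (a sketch under stated assumptions, not a definitive answer) is exactly the pattern the question hints at: an `Extension` subclass that carries the path of the prebuilt library, plus a `build_ext` subclass whose `build_extension` simply copies that file to the location setuptools expects. All names below are hypothetical:

```python
import os
import shutil

from setuptools import Extension
from setuptools.command.build_ext import build_ext


class PrebuiltExtension(Extension):
    """An Extension that is never compiled, only copied into place."""

    def __init__(self, name, prebuilt_path):
        super().__init__(name, sources=[])  # no sources -> nothing to compile
        self.prebuilt_path = prebuilt_path


class prebuilt_build_ext(build_ext):
    def build_extension(self, ext):
        if isinstance(ext, PrebuiltExtension):
            # Copy the externally-built library to where setuptools
            # would have placed a compiled extension.
            dest = self.get_ext_fullpath(ext.name)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copyfile(ext.prebuilt_path, dest)
        else:
            super().build_extension(ext)


# Hypothetical setup() wiring:
# setup(
#     ext_modules=[PrebuiltExtension("mymod", "prebuilt/mymod.so")],
#     cmdclass={"build_ext": prebuilt_build_ext},
# )
```

Since the library targets the stable limited ABI, it may also be worth looking at the extension's `py_limited_api` option and the corresponding `bdist_wheel` setting so the wheel is tagged `abi3` rather than version-specific.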
<python><setuptools><python-wheel><python-extensions>
2023-08-03 02:50:54
1
542
alexchandel
76,824,530
1,471,980
Insert data from a CSV file into a SQL Server table in chunks?
<p>I need to load a CSV file into a SQL Server table. The code below inserts one row at a time and it takes a very long time. Is there a way to insert the data into the table in bulk chunks, or any other faster way?</p> <pre><code>import pandas as pd df=pd.read_csv(r'd:/data_file.csv') for index, row in df.iterrows(): cursor.execute(&quot;insert into &lt;table name&gt;&quot; &quot;(Name, phone, address)&quot; &quot;VALUES(?, ?, ?)&quot;, (row['FirstandLastName'], row['xxx'], row['11 adison ln'])) </code></pre> <p>Update:</p> <p>I have tried this:</p> <pre><code>query=f&quot;&quot;&quot;insert into &lt;table name&gt;&quot; &quot;(Name, phone, address)&quot; &quot;VALUES(?, ?, ?)&quot;&quot;&quot; values_list=[tuple(row) for _, row in df.iterrows()] try: cursor.executemany(query, values_list) except Exception as e: print(e) </code></pre> <p>I get a syntax error on this line:</p> <pre><code>cursor.executemany(query, values_list) SQL Server Statement(s) could not be prepared(8180) </code></pre>
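For reference, the 8180 error from the `executemany` attempt likely comes from the stray quotes left inside the f-string and the literal `<table name>` placeholder rather than from `executemany` itself. The chunked-batch pattern can be sketched with the stdlib (shown against SQLite so it runs anywhere; with SQL Server the same calls go through pyodbc, where setting `cursor.fast_executemany = True` before `executemany` usually gives the biggest speedup):

```python
import csv
import io
import sqlite3
from itertools import islice


def chunks(rows, size):
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch


# Stand-in for the CSV file on disk
csv_file = io.StringIO(
    "Name,phone,address\nAlice,111,1 Main St\nBob,222,2 Oak Ave\nCara,333,3 Elm Rd\n"
)
reader = csv.reader(csv_file)
next(reader)  # skip the header row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, phone TEXT, address TEXT)")
cur = conn.cursor()
for batch in chunks(reader, 2):  # use e.g. 1000 rows per batch for a real load
    cur.executemany("INSERT INTO people (name, phone, address) VALUES (?, ?, ?)", batch)
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM people").fetchone()[0])  # 3
```

With pyodbc the loop body would be identical; only the connection and the `fast_executemany` flag differ.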
<python><sql-server><pandas>
2023-08-03 02:12:32
1
10,714
user1471980
76,824,500
2,813,606
Connect coordinates on scatter_mapbox using lines but only connecting from origin
<p>I am working on creating a map that connects coordinates using a line with scattermapbox. Using the example <a href="https://plotly.com/python/lines-on-mapbox/?_gl=1*fj9isn*_ga*MzU1OTk4MzU4LjE2ODk3MzMwMzg.*_ga_6G7EE0JNSC*MTY5MTAyNTk0MS45LjEuMTY5MTAyNjA0My40Ni4wLjA." rel="nofollow noreferrer">here</a>, I have been able to connect each coordinate to one another.</p> <p>However, I just want to connect coordinates that have the value of &quot;Destination&quot; under the column &quot;Key&quot; to the one coordinate in my dataset that has the value of &quot;Origin&quot; for that same column.</p> <p>Here is my data:</p> <pre><code>import pandas as pd lat_dest = [21.028321, 10.426675, 10.776390,-35.297591,21.028321] lon_dest = [105.854022,-75.544342,106.701139,149.101268,105.854022] locations = ['Hanoi, Vietnam','Cartagena, Colombia','Ho Chi Minh City, Vietnam','Canberra, Australia','Hanoi, Vietnam'] dest_key = ['Destination','Destination','Destination','Destination','Origin'] test_df = pd.DataFrame({ 'lat': lat_dest, 'lon': lon_dest, 'name': locations, 'key':dest_key }) </code></pre> <p>Here is the map:</p> <pre><code>import plotly.express as px import plotly.graph_objects as go fig = px.scatter_mapbox( test_df, lat=&quot;lat&quot;, lon=&quot;lon&quot;, color=&quot;key&quot;, zoom=2, center = {&quot;lat&quot;: orig_lat, &quot;lon&quot;: orig_lon} ) fig.update_layout( mapbox_style=&quot;carto-positron&quot;, margin={&quot;r&quot;:0,&quot;t&quot;:0,&quot;l&quot;:0,&quot;b&quot;:0}, showlegend=False ) fig.add_trace(go.Scattermapbox( mode = &quot;lines&quot;, lon = new_df['lon_dest'], lat = new_df['lat_dest'] )) </code></pre>
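Plotly can draw many disconnected segments in a single `Scattermapbox` trace if `None` is inserted between segments. Building the origin-to-destination pairs is then plain Python; only the final `add_trace` call (commented out, since it assumes plotly is available) touches the plotting library:

```python
# Same data as test_df above, as plain lists
lat = [21.028321, 10.426675, 10.776390, -35.297591, 21.028321]
lon = [105.854022, -75.544342, 106.701139, 149.101268, 105.854022]
key = ['Destination', 'Destination', 'Destination', 'Destination', 'Origin']

# Locate the single origin row
orig = key.index('Origin')

# One origin->destination segment per destination, separated by None so
# a single trace renders them as disconnected lines
line_lat, line_lon = [], []
for i, k in enumerate(key):
    if k == 'Destination':
        line_lat += [lat[orig], lat[i], None]
        line_lon += [lon[orig], lon[i], None]

print(line_lat[:3])  # [21.028321, 21.028321, None]

# Hypothetical plotting call:
# fig.add_trace(go.Scattermapbox(mode="lines", lat=line_lat, lon=line_lon))
```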
<python><pandas><plotly><mapbox><line>
2023-08-03 02:03:22
1
921
user2813606
76,824,457
13,231,896
How to raise 403 Error (Forbiden) and 404 error in Odoo web controller
<p>I am creating a web controller in Odoo and I want to check if the user has permission to perform an action; if they don't, I would like to raise a 403 exception. So, how can I raise a 403 exception inside an Odoo web controller? I would also like to know how to raise a 404 exception. How can I do it?</p>
<python><odoo>
2023-08-03 01:44:33
1
830
Ernesto Ruiz
76,824,456
433,261
Converting SQL that uses unnest and ilike to SQLAlchemy
<p>I have a single table, <code>books</code>, with multiple columns; one of them, <code>categories</code>, is an array of strings.</p> <pre><code>select * from books where exists (select * from (select unnest(books.categories)) x(categories) where x.categories ilike 'fiction%'); </code></pre> <p>I am having lots of trouble converting this to a SQLAlchemy query. What I am trying to do is get all books that have at least one category starting with the word 'fiction'.</p> <p>(The simple query below does not work; it is just so you can easily understand what I am trying to achieve with the complex query above.)</p> <pre><code>select categories from books where 'fiction%' ilike any(categories); </code></pre>
<python><sql><sqlalchemy><flask-sqlalchemy>
2023-08-03 01:44:31
1
751
Hadi
76,824,384
9,588,300
Python regex: why does it take 4 backslashes to match one backslash?
<p>I am using the library re (<code>import re</code>), and I have a string that in human-readable format (if you were to find it in a printed book and see it with your eyes) looks like this:</p> <p>\t</p> <p>It is not a tab, it's a slash t. I know python will think it's a tab if I create this code <code>a=&quot;\t&quot;</code>, but for that pythonic purpose, I do this <code>a=&quot;\\t&quot;</code> to represent a real-world slash t (not a tab, but a literal slash t). And the only pattern that matches string <code>a=&quot;\\t&quot;</code> is this: <code>pattern='\\\\t'</code>. Why so many backslashes in the regex to match a literal slash t?</p> <p>I want to regex that literal slash t. I've noticed that trying</p> <ol> <li><code>re.findall(pattern='\t',string='\\t')</code> : Doesn't match (this makes sense to me)</li> <li><code>re.findall(pattern='\\t',string='\\t')</code> : Doesn't match (this does not make sense)</li> <li><code>re.findall(pattern='\\\t',string='\\t')</code> : Doesn't match (this, I think, would match a backslash followed by a tab)</li> <li><code>re.findall(pattern='\\\\t',string='\\t')</code> : Finally it matches</li> </ol> <p>So I want to know why so many backslashes.</p> <p>I know that:</p> <ol> <li> In general in python, a single \ is an escape character only if two conditions are met: it's followed by a special character, and the combination of the backslash with its immediate character is NOT a special sequence. If these conditions are not met, then it's just a literal backslash. For example: <ol> <li> <b>Backslash with special character:</b> <p>This code <code>a=&quot;\'&quot;</code> in human format is just a single quote, because the backslash is suppressing the role of the python single quote as a string delimiter, and it makes it a literal single quote. In other words, if you send the value inside variable <code>a</code> to a printer machine (a physical one) your piece of paper will just have a single quote and nothing else. 
</li></p> <li> <b>Backslash with no special character:</b> <p>This code <code>a=&quot;\o&quot;</code> in human-readable form is a backslash followed by the letter o, because the letter <code>o</code> is not a special character; therefore the backslash preceding the o is acting as a literal backslash. If we send the variable a to a printer, our paper would have a \ followed by the letter o. </li></p> <li> <b>Backslash with no special character, but the combination is a special sequence: </b> <p>The letter <code>t</code> is not a special character in python, so if we applied the logic of the previous point, this code <code>a=&quot;\t&quot;</code> could be thought to be a slash t in human-readable form. However, that's not the case. This is the third rule: the combination of the backslash with its immediate character must not be a special sequence. In python, the combination of slash and t is a special sequence that represents a tab. So if we send the variable to a printer we get a white paper, although our printer &quot;printed&quot; a tab (which doesn't use ink, so that's why the paper is white). Similar logic applies to the special sequence <code>\n</code>, which is a newline. </li></p> </ol> <li> If you write this string `Some text \t and then more` in a notepad file and read it in python, storing it in a variable, then if you print the variable, depending on how you print it and your IDE, you can get "Some text \\t and then more" or "Some text \t and then more". Regardless of how your IDE represented it to you visually, python knows the slash t is NOT a tab, since it made a literal scan of your file and a literal \t in a file is not a tab; only a tab is a tab in notepad, since a tab in notepad has a binary representation different from that of a slash and a t together. 
In a few words, python doesn't think the \t of the notepad is a tab; it's a literal slash t. </li> </ol> <p>So having said this, if I write this code <code>a=&quot;some text \\t and then some more&quot;</code> I get the same as reading the notepad. And here comes the question:</p> <p>Why does it take four backslashes? What is each slash doing, or are all 4 together the escape sequence of special sequences? And what would cases 2 and 3 even match, what example strings?</p> <p>This question makes me think pattern 2, <code>re.findall(pattern='\\t',string='\\t')</code>, should have matched; the linked answer says two slashes <code>\\</code> are one slash: <a href="https://stackoverflow.com/questions/31213556/python-escaping-backslash">Python escaping backslash</a></p>
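The short explanation is that there are two unescaping layers: the Python string-literal parser first, then the regex engine. The literal `'\\\\t'` is the three-character string `\\t`; the regex engine then reads `\\` as "one literal backslash" followed by `t`. Raw strings remove the first layer, which is why one usually writes `r'\\t'` instead. A runnable check:

```python
import re

s = '\\t'            # the two-character string: backslash, then "t"
assert len(s) == 2

# Layer 1 (Python literal): '\\\\t' -> the 3-char pattern string \\t
# Layer 2 (regex engine):   \\t    -> a literal backslash, then t
assert re.findall('\\\\t', s) == ['\\t']

# A raw string skips layer 1, so only one doubling is needed:
assert re.findall(r'\\t', s) == ['\\t']

# Case 2 from the list: the pattern literal '\\t' is the 2-char string \t,
# and the regex engine itself interprets \t as a TAB -- so it matches a tab:
assert re.findall('\\t', 'a\tb') == ['\t']

print('all matched as expected')
```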
<python><pandas><regex><string><escaping>
2023-08-03 01:20:41
0
462
Eugenio.Gastelum96
76,824,076
2,372,954
In Django, how to multiply-aggregate a field?
<p>I want to aggregate a field using multiplication, but apparently Django doesn't have a <code>Product</code> function among its <a href="https://docs.djangoproject.com/en/3.2/ref/models/querysets/#aggregation-functions" rel="nofollow noreferrer">aggregation functions</a>.</p> <h2>Example of what I want to do</h2> <pre class="lang-py prettyprint-override"><code># models.py class MyModel(models.Model): ratio = models.DecimalField(...) # views.py mys = MyModel.objects.filter(...).aggregate(cum_ratio=Product('ratio')) </code></pre> <p>How can I achieve this?</p> <p>The fact that Django didn't include a <code>Product</code> function like it did <code>Sum</code> suggests that it's trivial, but I can't put my finger on it.</p>
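A common workaround (SQL itself has no product aggregate) is the identity `prod(x) = exp(sum(ln(x)))`, which in Django can presumably be composed from the built-in `Exp`, `Sum`, and `Ln` expressions in `django.db.models.functions`; note it breaks if any value is zero or negative. The identity itself, checked in plain Python:

```python
import math
from functools import reduce

ratios = [0.5, 1.2, 0.9, 2.0]

direct = reduce(lambda a, b: a * b, ratios, 1.0)
via_logs = math.exp(sum(math.log(r) for r in ratios))  # exp(sum(ln x)) == prod(x)

assert math.isclose(direct, via_logs)
print(round(direct, 6))  # 1.08

# Hypothetical Django equivalent:
# from django.db.models import Sum
# from django.db.models.functions import Exp, Ln
# mys = MyModel.objects.filter(...).aggregate(cum_ratio=Exp(Sum(Ln('ratio'))))
```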
<python><django>
2023-08-02 23:32:07
1
360
ahmed
76,824,048
9,235,704
python asynchronous map function that applies a function to every element of a list
<p>I have a list of elements in python that I want to apply an API call to. The API involves web requests and I want to apply it asynchronously. Since it is a third-party API, I can't really modify the code inside it either. I want to have something like this:</p> <pre><code>from API import get_user_handle arr = [1234, 2345, 6789] # list of user id new_arr = async_foreach(arr, get_user_handle) print(new_arr) # e.g. ['Tom', 'Amy', 'Jerry'] </code></pre> <p>This would be easy in nodejs by using the array map function. How do I build something similar in python?</p>
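Two common ways to get a concurrent map over a blocking third-party call, sketched with a stand-in for `get_user_handle` (the real API isn't shown): a plain thread pool, and an asyncio wrapper using `run_in_executor` so the calls overlap while input order is preserved.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def get_user_handle(user_id):
    # Stand-in for the blocking third-party API call
    return {1234: 'Tom', 2345: 'Amy', 6789: 'Jerry'}[user_id]


# Option 1: thread pool -- closest to JS-style concurrent mapping of blocking I/O
def threaded_map(fn, items, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))  # preserves input order


# Option 2: asyncio wrapper around the same blocking function
async def async_foreach(items, fn):
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, fn, i) for i in items]
    return await asyncio.gather(*tasks)


arr = [1234, 2345, 6789]
print(threaded_map(get_user_handle, arr))                 # ['Tom', 'Amy', 'Jerry']
print(asyncio.run(async_foreach(arr, get_user_handle)))   # ['Tom', 'Amy', 'Jerry']
```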
<python><asynchronous><parallel-processing><python-asyncio><python-multithreading>
2023-08-02 23:18:18
3
2,090
Julian Chu
76,823,727
5,164,079
OSError [Errno 2] No such file or directory while installing Jupyter Notebook in Python 3.11 on Windows 11
<p>I am trying to install Jupyter Notebook on my Windows 11 machine using Python 3.11.4, but I encounter an OSError with the message:</p> <blockquote> <p>[Errno 2] No such file or directory.</p> </blockquote> <p>I have already tried different installation commands, including <code>pip install notebook</code>, <code>pip install notebook --user</code>, and <code>python -m pip install jupyter</code>, but none of them resolved the issue.</p> <p>Error details:</p> <blockquote> <p>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\Users\1234\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\jedi\third_party\typeshed\third_party\2and3\requests\packages\urllib3\packages\ssl_match_hostname\_implementation.pyi'</p> </blockquote> <p>I have also referred to the following Stack Overflow links, but the solutions provided did not help me resolve the issue:</p> <ol> <li><a href="https://stackoverflow.com/questions/65980952/python-could-not-install-packages-due-to-an-oserror-errno-2-no-such-file-or">Python: Could not install packages due to an OSError: [Errno 2] No such file or directory</a></li> <li><a href="https://stackoverflow.com/questions/72831972/could-not-install-packages-due-to-an-oserror-errno-2-no-such-file-or-director">Could not install packages due to an OSError: [Errno 2] No such file or directory: &#39;/C:/Windows/TEMP/abs_e9b7158a-aa56-4a5b-87b6-c00d295b01fanefpc8_o/</a></li> </ol> <p>How to resolve this OSError and successfully install Jupyter Notebook on Python 3.11 running on Windows 11?</p> <p>I have verified that my pip version is 23.2.1 and tried to install Jupyter Notebook using both system-wide and user-specific installation options, but the error persists.</p>
<python><python-3.x><jupyter-notebook><jupyter>
2023-08-02 21:48:12
2
517
Anirban B
76,823,582
13,060,649
Pydub: How to save audio after converting it to aac?
<p>I am using a django admin form to receive an audio input, and I am trying to convert it to <code>aac</code> format with <code>pydub</code>. I have installed <code>ffmpeg</code> on my system and also added <code>pydub</code> and <code>ffmpeg-python</code> as dependencies.</p> <p>Here is the django ModelAdmin:</p> <pre><code>class AudioAdmin(admin.ModelAdmin): fields = ('file', 'bit_rate', 'format') list_display = ('id', 'file', 'bit_rate', 'format', 'content_type') def save_model(self, request, obj, form, change): audio_file = form.cleaned_data['file'] _, file_extension = os.path.splitext(audio_file.name) curr_audio = AudioSegment.from_file(file=audio_file, format=file_extension.replace('.', '')) new_audio = curr_audio.export(out_f=audio_file.name, format=&quot;aac&quot;) obj.file = new_audio obj.save() </code></pre> <p>I am still getting this error while converting an mp3 file to <code>aac</code> format.</p> <pre><code> raise CouldntEncodeError( pydub.exceptions.CouldntEncodeError: Encoding failed. 
ffmpeg/avlib returned error code: 1 Command:['ffmpeg', '-y', '-f', 'wav', '-i', '/var/folders/8g/230x9vx55p7c0bdwx0cmzx9r0000gq/T/tmpxaetjjk0', '-f', 'aac', '/var/folders/8g/230x9vx55p7c0bdwx0cmzx9r0000gq/T/tmpa3ae8nn1'] Output from ffmpeg/avlib: ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers built with Apple clang version 14.0.0 (clang-1400.0.29.202) configuration: --prefix=/usr/local/Cellar/ffmpeg/6.0 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox libavutil 58. 2.100 / 58. 2.100 libavcodec 60. 3.100 / 60. 3.100 libavformat 60. 3.100 / 60. 3.100 libavdevice 60. 1.100 / 60. 1.100 libavfilter 9. 3.100 / 9. 3.100 libswscale 7. 1.100 / 7. 1.100 libswresample 4. 10.100 / 4. 10.100 libpostproc 57. 1.100 / 57. 
1.100 Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, wav, from '/var/folders/8g/230x9vx55p7c0bdwx0cmzx9r0000gq/T/tmpxaetjjk0': Duration: 00:03:14.72, bitrate: 1411 kb/s Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s [NULL @ 0x7fe40da072c0] Requested output format 'aac' is not a suitable output format /var/folders/8g/230x9vx55p7c0bdwx0cmzx9r0000gq/T/tmpa3ae8nn1: Invalid argument </code></pre>
<python><audio><ffmpeg><pydub>
2023-08-02 21:14:57
1
928
suvodipMondal
76,823,518
3,891,210
SQLAlchemy does not detect any existing table in postgres db
<p>I have a local postgres DB with a couple of tables and want to reflect their structure into Python.</p> <pre><code>import sqlalchemy.ext.automap as am from sqlalchemy.orm import Session from sqlalchemy import create_engine base_automap_schema: am.AutomapBase = am.automap_base() engine = create_engine(&quot;postgresql+psycopg://user:pw@localhost:5432/db&quot;) engine.echo = True engine.connect() # reflect the tables base_automap_schema.prepare(autoload_with=engine) print(base_automap_schema.classes.keys()) print(base_automap_schema.metadata.tables) </code></pre> <p>output is lots of SQL plus</p> <pre><code>[] FacadeDict({}) </code></pre> <p>Seems like my tables are not detected. But why?</p> <p>I <a href="https://stackoverflow.com/questions/76759072/sqlalchemy-automap-not-loading-table">read</a> that sometimes this happens due to a missing primary key, but all of my tables have it. Example:</p> <pre><code>CREATE TABLE IF NOT EXISTS public.manufacturer ( &quot;manu_ID&quot; integer NOT NULL, manu_name character varying[] COLLATE pg_catalog.&quot;default&quot; NOT NULL, CONSTRAINT &quot;manu_ID&quot; PRIMARY KEY (&quot;manu_ID&quot;) ) </code></pre> <p>What else could be the problem?</p>
<python><postgresql><sqlalchemy>
2023-08-02 21:04:10
1
363
anjuta
76,823,384
10,796,158
Is it possible to rename the features of an sklearn model after it was trained with other feature names?
<p>I trained an sklearn <code>sklearn.ensemble.RandomForestRegressor</code> and would like to rename the input features to the model. I tried doing: <code>model.feature_names_in_</code> = new feature names, but this doesn't work as I get:</p> <p><code>AttributeError: can't set attribute</code></p> <p>So is it just not possible?</p> <p>EDIT: After upgrading, this works:</p> <pre><code>from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import make_regression import pandas as pd X, y = make_regression(n_features=4, n_informative=2, random_state=0, shuffle=False) regr = RandomForestRegressor(max_depth=2, random_state=0) X = pd.DataFrame(X) X.columns = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;] regr.fit(X, y) regr.feature_names_in_ </code></pre> <pre><code>array([&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;], dtype=object) </code></pre> <pre><code>regr.feature_names_in_ = [&quot;aaa&quot;, &quot;bbb&quot;, &quot;ccc&quot;, &quot;ddd&quot;] regr.feature_names_in_ </code></pre> <pre><code>['aaa', 'bbb', 'ccc', 'ddd'] </code></pre>
<python><scikit-learn>
2023-08-02 20:43:02
2
1,682
An Ignorant Wanderer
76,823,323
3,261,292
lxml library doesn't extract text in a given html tag when another tag precedes the text
<p>I have the following html script:</p> <pre><code>&lt;div&gt; &lt;p class=&quot;test1&quot;&gt; &lt;i class=&quot;empty&quot;&gt; &lt;/i&gt; WANTED TEXT &lt;/p&gt; &lt;/div&gt; </code></pre> <p>I want to extract the content of the <code>p</code> tag (WANTED TEXT). My code:</p> <pre><code>from lxml import etree from io import StringIO html_parser = etree.HTMLParser() tmp_xp = &quot;//p[1]&quot; selected_tag = etree.parse(StringIO(&quot;&quot;&quot;&lt;div&gt; &lt;p class=&quot;test1&quot;&gt; &lt;i class=&quot;empty&quot;&gt; &lt;/i&gt; WANTED TEXT &lt;/p&gt; &lt;/div&gt;&quot;&quot;&quot;), html_parser).xpath(tmp_xp) print(selected_tag[0].text) </code></pre> <p>The code prints nothing. If I move <code>WANTED TEXT</code> to before the <code>&lt;i&gt;</code> tag, the code starts to work fine.</p> <p>How can I solve this?</p>
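What happens here is standard ElementTree semantics, which lxml shares: `.text` holds only the text before the first child element, and text after a child lives in that child's `.tail`. The behavior can be reproduced with the stdlib parser; with lxml, the same `itertext()` call (or an XPath such as `string(//p[1])`) should collect everything:

```python
import xml.etree.ElementTree as ET

html = '<div> <p class="test1"> <i class="empty"> </i> WANTED TEXT </p> </div>'
p = ET.fromstring(html).find('p')

# .text is only the text BEFORE the first child element; here that is just
# whitespace, because <i> comes first. "WANTED TEXT" is the <i> tag's .tail.
print(repr(p.text))       # ' '
print(repr(p[0].tail))    # ' WANTED TEXT '

# itertext() walks .text and every descendant's text/tail in document order:
print(''.join(p.itertext()).strip())  # WANTED TEXT
```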
<python><html><parsing><lxml>
2023-08-02 20:31:04
2
5,527
Minions
76,823,279
608,576
Converting string date to utc epoch in Python
<p>I have a UTC time as a string, and I want to convert it to a UTC timestamp. But the following code seems to apply my local timezone, causing a wrong timestamp. How do I convert that datetime string to a timestamp so that it is preserved as UTC?</p> <pre><code>datetime.strptime('2023-08-02 14:52:05', '%Y-%m-%d %H:%M:%S').timestamp() * 1000 Input time GMT: 2023-08-02 14:52:05 Outputs GMT: 1691002325000.0 # This converts to GMT: Wednesday, August 2, 2023 6:52:05 PM https://www.epochconverter.com/ </code></pre>
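`strptime` returns a naive datetime, and `.timestamp()` interprets a naive datetime as local time, which is where the 4-hour shift comes from. Attaching `timezone.utc` first (or using `calendar.timegm`) keeps the value in UTC:

```python
import calendar
from datetime import datetime, timezone

s = '2023-08-02 14:52:05'

# Mark the parsed datetime as UTC before converting:
dt = datetime.strptime(s, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
print(dt.timestamp() * 1000)  # 1690987925000.0

# Equivalent with calendar.timegm, which always treats the tuple as UTC:
naive = datetime.strptime(s, '%Y-%m-%d %H:%M:%S')
print(calendar.timegm(naive.timetuple()) * 1000)  # 1690987925000
```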
<python><datetime><timezone><epoch>
2023-08-02 20:25:08
2
9,830
Pit Digger
76,823,193
726,730
QGraphicsLineItem dotted line with stroke
<p>How can I use QGraphicsLineItem not as a continuous line but like a dotted line with animated stroke (from 1 to 3 px)?</p>
<python><pyqt5><qgraphicslineitem>
2023-08-02 20:08:44
0
2,427
Chris P
76,823,174
2,307,570
What is the logic of changing class variables?
<p>The following code was supposed to clarify how Python class variables behave,<br>but somehow it opens more questions than it solves.</p> <p>The class <code>Bodyguard</code> has the variable <code>protect</code>, which is a list that by default contains the king.<br> The classes <code>AnnoyingBodyguard</code> and <code>Bureaucrat</code> change it.<br></p> <p>Guards that protect specific people shall be called <em>specific</em>.   (<code>bg_prime</code>, <code>bg_foreign</code>, ...)<br> The others shall be called <em>generic</em>.   (<code>bg1</code>, <code>bg2</code>, <code>bg3</code>)</p> <p>For specific guards the changes affect only those initialized after the change.<br> For generic guards the changes affect all of them, no matter when they were initialized.<br> Why the before/after difference for specific guards? Why the specific/generic difference?</p> <p>These differences are somewhat surprising, but I find the following even stranger.<br> Given two lists <code>a</code> and <code>b</code>, one might think that these operations will always have the same result:<br> reassign: <code>a = a + b</code>     add-assign: <code>a += b</code>     append: <code>for x in b: a.append(x)</code></p> <p>Why do they cause completely different results when used in <code>Bodyguard.__init__</code>?</p> <p>Only the results using reassign make any sense.<br> They can be seen below and in <a href="https://github.com/entenschule/examples_py/blob/main/a003_pcap/b4_oo/c07_change_class_variable/reassign_good.py" rel="nofollow noreferrer">reassign_good.py</a>.<br> The results for add-assign and append are quite useless, and I do not show them here.<br> But they can be seen in <a href="https://github.com/entenschule/examples_py/blob/main/a003_pcap/b4_oo/c07_change_class_variable/addassign_bad.py" rel="nofollow noreferrer">addassign_bad.py</a> and <a href="https://github.com/entenschule/examples_py/blob/main/a003_pcap/b4_oo/c07_change_class_variable/append_bad.py" rel="nofollow 
noreferrer">append_bad.py</a>.</p> <pre class="lang-py prettyprint-override"><code>class Bodyguard: protect = ['the king'] def __init__(self, *args): if args: self.protect = self.protect + list(args) ################################################################################## bg1 = Bodyguard() bg_prime = Bodyguard('the prime minister') bg_foobar = Bodyguard('the secretary of foo', 'the secretary of bar') assert bg1.protect == ['the king'] assert bg_prime.protect == ['the king', 'the prime minister'] assert bg_foobar.protect == [ 'the king', 'the secretary of foo', 'the secretary of bar' ] ################################################################################## class AnnoyingBodyguard(Bodyguard): Bodyguard.protect = ['his majesty the king'] bg2 = Bodyguard() bg_foreign = Bodyguard('the foreign minister') # The king's title was updated for all generic guards. assert bg1.protect == bg2.protect == ['his majesty the king'] # And for specific guards initialized after AnnoyingBodyguard was defined. assert bg_foreign.protect == ['his majesty the king', 'the foreign minister'] # But not for specific guards initialized before AnnoyingBodyguard was defined. assert bg_prime.protect == ['the king', 'the prime minister'] assert bg_foobar.protect == [ 'the king', 'the secretary of foo', 'the secretary of bar' ] ################################################################################## class Bureaucrat: def __init__(self, name): Bodyguard.protect.append(name) malfoy = Bureaucrat('Malfoy') bg3 = Bodyguard() bg_paper = Bodyguard('the secretary of paperwork') # Malfoy was added for all generic guards. 
assert bg1.protect == bg2.protect == bg3.protect == [ 'his majesty the king', 'Malfoy' ] # And for specific guards initialized after Malfoy: assert bg_paper.protect == [ 'his majesty the king', 'Malfoy', 'the secretary of paperwork' ] # But not for specific guards initialized before Malfoy: assert bg_prime.protect == ['the king', 'the prime minister'] assert bg_foreign.protect == [ 'his majesty the king', 'the foreign minister' ] </code></pre> <hr /> <p><strong>Edit:</strong> Based on the comments and answers, I added the script <a href="https://github.com/entenschule/examples_py/blob/main/a003_pcap/b4_oo/c07_change_class_variable/reassign_better.py" rel="nofollow noreferrer">reassign_better.py</a>,<br> where the differences between generic and specific guards are removed.</p> <p>The main class should look like this:</p> <pre class="lang-py prettyprint-override"><code>class Bodyguard: protect = ['the king'] def __init__(self, *args): self.protect = self.protect[:] # force reassign also for generic guards if args: self.protect = self.protect + list(args) </code></pre>
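The specific/generic and before/after differences all come down to two mechanisms: attribute lookup (reading `self.protect` falls back to the class attribute until an instance attribute shadows it) and rebinding vs. in-place mutation. A minimal sketch with a hypothetical class `C`:

```python
class C:
    items = ['x']            # one list object, shared through the class

a, b = C(), C()

# Reassignment (a.items = a.items + [...]) builds a NEW list and binds it
# as an *instance* attribute on a; the class attribute is untouched.
a.items = a.items + ['y']
assert a.items == ['x', 'y']
assert b.items == ['x'] and C.items == ['x']

# += and .append mutate the single shared class list in place, so every
# instance that still resolves to the class attribute sees the change.
C.items += ['z']
assert b.items == ['x', 'z']     # b never shadowed the class attribute
assert a.items == ['x', 'y']     # a's own list is unaffected
```

In the question, specific guards rebind (so they snapshot whatever the class list was at `__init__` time), while generic guards keep resolving to the one shared class list, which `+=` and `append` mutate in place.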
<python><python-class><class-variables>
2023-08-02 20:05:10
3
1,209
Watchduck
76,823,172
125,244
Transform arithmetic comparison from pine to Python
<p>I couldn't find whether the following Pine Script statement can be written in a similar way in Python,</p> <p><code>numWhy = (dirTrade == 0) ? 102 : goodDirection ? -103 : 104</code></p> <p>or whether it has to be replaced by many more statements like</p> <pre><code>if dirTrade == 0: numWhy = 102 elif goodDirection: numWhy = -103 else: numWhy = 104 </code></pre> <p>I know that the longer version will usually be more readable (and thus preferable), but the short version becomes more readable if a lot of similar tests must be performed.</p> <p>How can the short version be transformed from Pine to Python while maintaining a short notation?</p>
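Python's conditional expression (`x if cond else y`) nests just like Pine's ternary; a sketch wrapping it in a hypothetical helper:

```python
def classify(dirTrade, goodDirection):
    # Pine: numWhy = (dirTrade == 0) ? 102 : goodDirection ? -103 : 104
    # The Python conditional expression chains the same way:
    return 102 if dirTrade == 0 else (-103 if goodDirection else 104)

print(classify(0, True))    # 102
print(classify(1, True))    # -103
print(classify(1, False))   # 104
```

The parentheses around the second conditional are optional (the expression is right-associative) but make the chaining explicit.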
<python>
2023-08-02 20:05:01
0
1,110
SoftwareTester
76,823,104
12,176,803
BytesIO unexpected behavior: adds bytes at the beginning of .wav file
<p>Here is the scenario:</p> <ul> <li>I read a .wav file using the Python wave package</li> <li>I write the data in a file-like BytesIO object using this same wave package</li> <li>I check if the original data is the same as the data from this file-like object</li> </ul> <p>Here is the code:</p> <pre><code>filepath = &quot;my_file_path.wav&quot; with wave.open(filepath) as f: params = f.getparams() data = f.readframes(f.getnframes()) bytes_io = io.BytesIO() wf = wave.open(bytes_io, &quot;wb&quot;) wf.setnchannels(params.nchannels) wf.setsampwidth(params.sampwidth) wf.setframerate(params.framerate) wf.writeframes(data) bytes_io.seek(0) new_data = bytes_io.read() print(len(new_data)) print(len(data)) print(new_data[:-len(data)]) assert new_data[-len(data):] == data </code></pre> <p>This prints:</p> <pre><code>700740 700696 b'RIFF&lt;\xb1\n\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00\x80&gt;\x00\x00\x00}\x00\x00\x02\x00\x10\x00data\x18\xb1\n\x00' </code></pre> <p>Why is there this difference in terms of bytes?</p>
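The 44 extra bytes are the RIFF/WAVE header that the `wave` module writes in front of the raw frames, while `readframes` returns only the PCM data; the audio itself is identical. A self-contained sketch using synthetic frames (no file needed, sizes are illustrative):

```python
import io
import wave

# Synthetic 16-bit mono "PCM" data instead of reading a real file
data = bytes(range(256)) * 4          # 1024 bytes of fake frames

buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(data)

raw = buf.getvalue()
print(raw[:4], raw[8:12])             # b'RIFF' b'WAVE' header magic
print(len(raw) - len(data))           # 44-byte header for plain PCM

# Comparing apples to apples: parse the bytes back with wave
with wave.open(io.BytesIO(raw)) as rf:
    assert rf.readframes(rf.getnframes()) == data
```

So `bytes_io.read()` returns header plus frames, whereas `readframes` strips the header; to compare, either slice off the header or re-open the buffer with `wave`.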
<python><byte><wave>
2023-08-02 19:50:54
0
366
HGLR
76,822,990
10,485,253
Python Debugger Not Working When Selecting 3.11 Interpreter But Works For 3.7 Interpreter
<p>This is my code</p> <pre><code>class Solution: def add(self, x: int, y: int) -&gt; int: z = x + y t = 5 + x + y return z x = Solution() print(x.add(1, 3)) </code></pre> <p>And I have breakpoints set at the <code>print</code> line and <code>t = 5 + x + y</code>.</p> <p>These are my interpreters: <a href="https://i.sstatic.net/3k2Qs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3k2Qs.png" alt="enter image description here" /></a></p> <p>If I use python 3.7 and hit F5, everything works fine. If I select any of the other 2 (I would prefer the <code>interview_env</code> one) and hit F5, the program does not run and the breakpoints are not hit. What is going on and how do I solve this?</p> <p>I am using a virtual environment that was sent to me and needs to be used for an interview so I need to make sure I can debug in that environment.</p>
<python><python-3.x><debugging>
2023-08-02 19:34:10
0
887
TreeWater
76,822,969
13,634,560
Plotly not registering "size" array
<p>Generating several subplots to investigate color palette. The plot (made with Plotly) is a scatter, with colors increasing in tandem with size. The code:</p> <pre><code>fig3 = go.Scatter(x=[1,2,3,4,5], y=[2,2,2,2,2], mode=&quot;markers&quot;, marker={&quot;size&quot;: [100,125,150,175,200], &quot;color&quot;: [1,2,3,4,5], &quot;sizemin&quot;:50}, color_continuous_scale=colors) </code></pre> <p>The colorscale:</p> <pre><code>Colors = [ &quot;#FFFFEF&quot;, &quot;#FFCA8C&quot;, &quot;#FFA05C&quot;, ] </code></pre> <p>Obviously there is more than one way to generate this with an array - But what confuses me is why an error is thrown. How should the &quot;size&quot; param be used if not like this?</p> <pre><code> ValueError: Invalid property specified for object of type plotly.graph_objs.Scatter: 'color' Did you mean &quot;fill&quot;? </code></pre> <p>Documentation:</p>
<python><pandas><plotly>
2023-08-02 19:30:16
1
341
plotmaster473
76,822,787
4,674,706
In python, can an inner function be interrogated about its parent function?
<p>Say we have a function, defined in <em>my_module.py</em>:</p> <pre><code>def make_function(scalar: float) -&gt; Callable[[float], float]: def increment(x: float) -&gt; float: return x + scalar return increment </code></pre> <p>And at runtime, I have access to the live inner function objects (and I know that all the relevant code is defined in my_module.py).</p> <pre><code>&gt;&gt; make_function(2) &lt;function my_module.make_function.&lt;locals&gt;.increment(x: float) -&gt; float&gt; </code></pre> <p>Q: is there an elegant way, from the live object, to obtain information about what its parent function is? In particular I am interested in dynamically generating strings of the format <code>&quot;my_module.make_function.increment&quot;</code>. I am looking for a solution which would also work for an arbitrary number of nested levels (i.e. <code>&quot;my_module.outer_func_1.outer_func_2.inner_func&quot;</code>).</p> <p>Note that the answer to my question seems to be (almost) embedded in the <code>__repr__</code> of the function object as per my code sample above, but I am unsure what internals are behind it and how to access them.</p> <p>What have I tried?</p> <ul> <li>inspect module: <code>inspect.getmodule(func).__name__</code> but that only returns the module path</li> <li><code>func.__code__.co_*</code>, but couldn't find a way to point to the parent function</li> <li>playing with <code>func.__globals__</code>, but that seemed more complex than needed</li> </ul> <p>Ideally I would like to avoid decorating/modifying the defined functions and only arrive at the answer from inspecting the object itself. Any help on how to proceed is much appreciated!</p>
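The information behind that `__repr__` is `__qualname__`, which records the full nesting path (with `<locals>` markers) for arbitrarily deep nesting. Combining it with `__module__` and stripping the markers yields the desired string; a sketch (the `dotted_path` helper is hypothetical):

```python
from typing import Callable

def make_function(scalar: float) -> Callable[[float], float]:
    def increment(x: float) -> float:
        return x + scalar
    return increment

func = make_function(2)

# __qualname__ already encodes the parent function(s):
print(func.__qualname__)          # make_function.<locals>.increment

def dotted_path(f) -> str:
    # e.g. "my_module.make_function.increment" when defined in my_module
    return f"{f.__module__}.{f.__qualname__}".replace(".<locals>", "")

print(dotted_path(func))
```

`__qualname__` is set at compile time from the lexical nesting, so it works for any number of levels without decorating anything.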
<python><namespaces>
2023-08-02 18:58:13
1
464
Azmy Rajab
76,822,687
7,663,296
Why do callbacks passed to chrome extension background script silently fail or return false?
<p>I'm refactoring a chrome extension that's worked for years. I'm moving chrome.runtime.* calls from content scripts to extension background so scripts can run externally (on my domain instead of content script in the extension). Manifest v2, externally connectable is setup properly, all good.</p> <p>Here's what happens. My web page script sends a message to extension with a callback for the results. Listener in extension background script gets the message with all correct data. Callback looks ok, it's a function object. Calling it on client side with test data works as intended. Otherwise, the callback never does anything.</p> <h2>Code</h2> <p>Here's sample code to illustrate what happens. Presenting python (brython) for explanation, same thing happens with javascript. Using console.log as the callback for testing purposes.</p> <pre><code># ===== client script chrome.runtime.sendMessage (EXTN_ID, {'action': 'get_tabs', 'args': {}}, console.log) # ===== server side, extension background script def get_tabs (args, sender, callback) : args ['windowId'] = sender.tab.windowId # tabs for caller's window callback ('foo') # &lt;--- ok prints foo in client console chrome.tabs.query (args, callback) # &lt;-- usually does nothing: never prints to client console, no error messages in extension background console. but sometimes prints &quot;false&quot; in client console # --- or try this way def check_result (res) : console.log (res) # &lt;-- ok prints tabs to extension background console callback (res) # &lt;-- does nothing: never prints to client console, no errors in extension background console chrome.tabs.query (args, check_result) # &lt;-- calls back check_result above, prints tabs to extension bg console. 
but nothing happens on client side and no error messages in either console # ---------------- # message listener, this part works fine def receive (msg, sender, callback) : if (msg.action == 'get_tabs') : get_tabs (msg.args, sender, callback) ; return true # &lt;-- for async responses, per wOxxOm comment chrome.runtime.onMessageExternal.addListener (receive) </code></pre> <h2>Observed Behavior</h2> <p>Other than test data, the callback never produces any results:</p> <ul> <li>passing it to chrome.tabs.query (args, callback) does nothing. silently fails</li> <li>invoking the callback manually in the extension before calling chrome.tabs.query works just fine. returns data correctly (but not useful, since I don't have real values to return yet).</li> <li>wrapping it in another callback (i.e. extension-side wrapper function around client-side callback) shows that chrome.tabs.query returns the correct results. but passing them back to first callback silently fails.</li> <li>once the extension calls chrome.tabs.query, the client side usually never sees the callback invoked at all. but sometimes, the callback mysteriously receives a single value: false. Even when the wrapper gets the correct results, the client callback is called with &quot;false&quot; instead of the real results.</li> </ul> <h2>Attempted Fixes</h2> <p>I tried all sorts of crazy things to fix this:</p> <ol> <li>Tried wrapping python callback function in a javascript function, doesn't help (later confirmed same issue occurs when using pure javascript on both ends). No change.</li> <li>Tried removing *args and **kwargs from all function signatures in the call chain, thinking brython might be mangling the callback parameter somehow. 
Didn't make a difference.</li> <li>Tried saving the callback on client side and sending a callback identifier instead in message to extension, in case there was a problem invoking the function across a protection boundary (client script -&gt; extension background -&gt; chrome internal). Like this:</li> </ol> <ul> <li>client stores callback and sends callback id to extension instead</li> <li>extension creates local callback to get results from chrome.tabs.query</li> <li>local callback sends a new message back to client with the results and callback id</li> <li>client gets callback id from message and invokes stored callback with the results</li> </ul> <p>The last way probably would've worked, but I couldn't find a good way to send reply messages from the extension to the client without using a client-provided callback. No API for client side to listen for messages from extension - chrome docs say <a href="https://developer.chrome.com/docs/extensions/mv3/messaging/#external-webpage" rel="nofollow noreferrer">only clients can initiate messages</a>. No way that I could find for extension to use window.postMessage to contact a client-side listener. Could do it by opening a long lived connection from client with chrome.runtime.connect instead of sending a single message. But that seems like overkill for one round-trip message and reply. And managing open connections across long-lived web pages seems like more hassle than it's worth.</p> <p>None of this helped. I later confirmed the same issue happens when the entire chain is javascript. Whether it's js or python, the callback works &quot;correctly&quot; when it's called early before chrome.tabs and not at all after chrome.tabs. Python doesn't seem to be the problem.</p>
<javascript><python><google-chrome><google-chrome-extension>
2023-08-02 18:40:54
1
383
Ed_
76,822,574
7,938,796
Turning a Pandas DataFrame into a SciPy Sparse CSC_Array
<p>I have a <code>pandas</code> dataframe that I'd like to turn into a sparse CSC_Array. My real dataframe is 200+ columns, but for this example I'm going to be working with just 4:</p> <pre><code>col1 = [1, 5, 6, 3, 3, 8, 6, 7, 4, 2] col2 = [10, 40, 45, 28, 34, 85, 58, 65, 34, 18] col3 = [4, 19, 25, 14, 11, 32, 28, 30, 11, 8] col4 = [1, 2, 1, 2, 2, 1, 2, 1, 1, 2] dataFrame = pd.DataFrame({'Cost': col1, 'Weight': col2, 'Value': col3, 'Type': col4}) </code></pre> <p>For this example, I know that I could just manually write out a SciPy csc_array as such and it will work with the rest of my script:</p> <pre><code>cscArray = scipy.sparse.csc_array( scipy.sparse.vstack(( scipy.sparse.csr_array(dataFrame.iloc[:0]), scipy.sparse.csr_array(dataFrame.iloc[:1]), scipy.sparse.csr_array(dataFrame.iloc[:2]), scipy.sparse.csr_array(dataFrame.iloc[:3]), ), format='csc') ) </code></pre> <p>But I need to find a way to generalize it since I can't do this for 200+ columns.</p> <p>I've read similar SO posts, and I've tried single-line conversions like <code>cscArray = scipy.sparse.csc_matrix(dataFrame.values)</code>, but nothing seems to work with my larger process like the manual example I've shown. Is there an easy way to generalize turning a large pandas dataframe into a SciPy vstack/csc_array like the example I've shown above? I feel like even looping over <code>len(dataFrame.columns)</code> in someway would work? But speed is of major importance and I fear that might be too slow? Thank you.</p>
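Assuming the goal is simply a sparse CSC version of the whole frame, `df.to_numpy()` feeds `scipy.sparse.csc_array` directly, with no per-column loop; a sketch with the question's four columns (nothing changes for 200+):

```python
import numpy as np
import pandas as pd
from scipy import sparse

df = pd.DataFrame({
    'Cost':   [1, 5, 6, 3, 3, 8, 6, 7, 4, 2],
    'Weight': [10, 40, 45, 28, 34, 85, 58, 65, 34, 18],
    'Value':  [4, 19, 25, 14, 11, 32, 28, 30, 11, 8],
    'Type':   [1, 2, 1, 2, 2, 1, 2, 1, 1, 2],
})

# One call, any number of columns; avoids Python-level vstack entirely
arr = sparse.csc_array(df.to_numpy())

print(arr.shape)                              # (10, 4)
print(np.array_equal(arr.toarray(), df.to_numpy()))
```

`csc_array` requires SciPy 1.8+; on older versions `sparse.csc_matrix(df.to_numpy())` is the equivalent. The conversion is a single C-level pass, so it should comfortably beat any row-by-row stacking.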
<python><pandas><scipy>
2023-08-02 18:23:08
1
767
CoolGuyHasChillDay
76,822,541
7,169,895
How do I put a bold line under certain rows in a QTableView?
<p>I have a table that looks like this<a href="https://i.sstatic.net/xfrQL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xfrQL.png" alt="PICTURE OF TABLE" /></a></p> <p>I am looking to set bold underlines on certain rows based on the value in the first cell in that row's values such as revenue avg 3, and tangible book ... I am stumped as where to start.</p> <p>I am looking to implement a QItemDelegate to underline a whole row based on the value in the leftmost cell so 'revenue avg 3' would result in the whole line getting underlined.</p> <p>My code:</p> <pre><code>import pandas as pd from PySide6 import QtCore, QtWidgets from PySide6.QtCore import Qt from PySide6.QtGui import QColor, QPen, QBrush from PySide6.QtWidgets import QStyle class TickerTableModel(QtCore.QAbstractTableModel): def __init__(self, parent=None, *args): super(TickerTableModel, self).__init__(parent, *args) self._filters = {} self._sortBy = [] self._sortDirection = [] self._dfSource = pd.DataFrame() self._dfDisplay = pd.DataFrame() def rowCount(self, parent=QtCore.QModelIndex()): if parent.isValid(): return 0 return self._dfDisplay.shape[0] def columnCount(self, parent=QtCore.QModelIndex()): if parent.isValid(): return 0 return self._dfDisplay.shape[1] def data(self, index, role): if index.isValid() and role == QtCore.Qt.ItemDataRole.DisplayRole: return self._dfDisplay.values[index.row()][index.column()] return None def headerData(self, col, orientation=QtCore.Qt.Orientation.Horizontal, role=QtCore.Qt.ItemDataRole.DisplayRole): if orientation == QtCore.Qt.Orientation.Horizontal and role == QtCore.Qt.ItemDataRole.DisplayRole: return str(self._dfDisplay.columns[col]) return None def setupModel(self, data): self._dfSource = data self._sortBy = [] self._sortDirection = [] self.setFilters({}) def setFilters(self, filters): self.modelAboutToBeReset.emit() self._filters = filters self.updateDisplay() self.modelReset.emit() def updateDisplay(self): dfDisplay = self._dfSource.copy() # 
Filtering cond = pd.Series(True, index=dfDisplay.index) for column, value in self._filters.items(): cond = cond &amp; \ (dfDisplay[column].str.lower().str.find(str(value).lower()) &gt;= 0) dfDisplay = dfDisplay[cond] # Sorting if len(self._sortBy) != 0: dfDisplay.sort_values(by=self._sortBy, ascending=self._sortDirection, inplace=True) # Updating self._dfDisplay = dfDisplay class TickerTableView(QtWidgets.QTableView): def setupUi(self, TickerTableWidget): self.verticalLayout = QtWidgets.QVBoxLayout(TickerTableWidget) self.tableView = QtWidgets.QTableView(TickerTableWidget) self.tableView.setSortingEnabled(True) self.tableView.setAlternatingRowColors(True) self.verticalLayout.addWidget(self.tableView) QtCore.QMetaObject.connectSlotsByName(TickerTableWidget) class TickerTableWidget(QtWidgets.QWidget): def __init__(self, data, parent=None): super(TickerTableWidget, self).__init__(parent) self.ui = TickerTableView() self.ui.setupUi(self) self.tableModel = TickerTableModel() self.tableModel.setupModel(data) self.ui.tableView.setModel(self.tableModel) # Set delegate here self.delegate = TickerTableDelegate(self.ui) self.ui.setItemDelegate(self.delegate) class TickerTableDelegate(QtWidgets.QItemDelegate): def __init__(self, parent=None, *args): QtWidgets.QItemDelegate.__init__(self, parent, *args) def paint(self, painter, option, index): painter.save() # set text color painter.setPen(QPen(Qt.black)) text = index.data(Qt.DisplayRole) if text: painter.setPen(QPen(Qt.red)) painter.drawText(option.rect, Qt.AlignCenter, text) painter.restore() </code></pre> <p>My main.py might be something like:</p> <pre><code>if __name__ == '__main__': app = QApplication(sys.argv) df = pd.DataFrame({'revenue avg 3' : [1,5,10], 'profit': [5, 4, 5], 'income': [3,5,6]}) window = TickerTableWidget(df) window.show() sys.exit(app.exec()) </code></pre> <p>I believe the process is get the row in the item delegate, test the value in column 0 and then set some property (which) to underline the whole 
row (maybe underline each cell in that row). I am currently stuck on the Item Delegate doing nothing no matter what I try.</p>
<python><pyside6>
2023-08-02 18:15:59
1
786
David Frick
76,822,460
14,173,197
How to detect section and subsection using regex
<p>I want to extract sections and subsections and corresponding contents from a passage. I have the following code using regex but the code can't detect the subsections. How to resolve it?</p> <pre><code>import re def extract_sections_and_content(text): pattern = r'(?P&lt;section&gt;^\d+(?:\.\d+)*(?:\s+\S+)?\s+[A-Za-z]+)\n(?P&lt;content&gt;(?:(?!\d+(?:\.\d+)*(?:\s+\S+)?\s+[A-Za-z]+|\n\d+(?:\.\d+)*(?:\s+\S+)?\s+[A-Za-z]+).)*)' matches = re.finditer(pattern, text, re.DOTALL | re.MULTILINE) section_content_pairs = [(re.sub(r'^\d+(?:\.\d+)*(?:\s+\S+)?\s+', '', match.group('section').strip()), match.group('content').strip()) for match in matches] return dict(section_content_pairs) # Example usage: text = &quot;&quot;&quot; 1 Introduction Lorem ipsum dolor sit amet, consectetur adipiscing elit. 2 Background Praesent euismod, arcu quis fermentum pulvinar, urna ex euismod ex. 2.1.2 Introduction to Topic Vestibulum nec lorem eu ligula faucibus cursus. 3 Conclusion Sed sed malesuada magna, at dignissim quam. &quot;&quot;&quot; result = extract_sections_and_content(text) print(result) </code></pre> <p>the result I get:</p> <pre><code>{'Introduction': 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Background': 'Praesent euismod, arcu quis fermentum pulvinar, urna ex euismod ex.', 'Conclusion': 'Sed sed malesuada magna, at dignissim quam.'} </code></pre> <p>the result I want is:</p> <pre><code>{'Introduction': 'Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Background': 'Praesent euismod, arcu quis fermentum pulvinar, urna ex euismod ex.', 'Introduction to Topic': 'Vestibulum nec lorem eu ligula faucibus cursus.', 'Conclusion': 'Sed sed malesuada magna, at dignissim quam.'} </code></pre>
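One possible fix is to simplify the pattern: anchor a heading as a line starting with a (possibly dotted) number, and let the content be every following line that is not itself a heading. A sketch (it assumes content lines never begin with a dotted number followed by a space):

```python
import re

def extract_sections_and_content(text):
    # Heading: number (with optional .sub.sub parts), whitespace, title.
    # Content: zero or more following lines that fail the heading lookahead.
    pattern = (r'^(?P<num>\d+(?:\.\d+)*)[ \t]+(?P<title>\S.*)\n'
               r'(?P<content>(?:(?!\d+(?:\.\d+)*[ \t]+\S).*\n?)*)')
    return {m.group('title').strip(): m.group('content').strip()
            for m in re.finditer(pattern, text, re.MULTILINE)}

text = """
1 Introduction
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
2 Background
Praesent euismod, arcu quis fermentum pulvinar, urna ex euismod ex.
2.1.2 Introduction to Topic
Vestibulum nec lorem eu ligula faucibus cursus.
3 Conclusion
Sed sed malesuada magna, at dignissim quam.
"""

print(extract_sections_and_content(text))
```

Because the dict is keyed by title, duplicate titles would overwrite each other; keeping `num` as part of the key would preserve the hierarchy if that matters.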
<python><regex>
2023-08-02 18:04:02
2
323
sherin_a27
76,822,302
7,699,611
Add rows to pandas dataframe with new column - extend the dataframe
<p>Suppose I have a DataFrame like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col1</th> <th>col2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3</td> </tr> <tr> <td>2</td> <td>4</td> </tr> </tbody> </table> </div> <p>And I want to apply a function to it so that each row will be duplicated and a new column will be added. The values in the new column are 'a' and 'b' for the duplicated rows when col1 is even, and 'c' and 'd' when col1 is odd.</p> <p>So the desired output is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col1</th> <th>col2</th> <th>col3</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3</td> <td>c</td> </tr> <tr> <td>1</td> <td>3</td> <td>d</td> </tr> <tr> <td>2</td> <td>4</td> <td>a</td> </tr> <tr> <td>2</td> <td>4</td> <td>b</td> </tr> </tbody> </table> </div> <p>I have tried to solve this plainly by iterating over all rows and got what I want, but as far as I know <code>iterrows()</code> is extremely slow if we have hundreds of thousands of rows.</p> <pre><code>import pandas as pd import numpy as np d = {'col1': [1, 2], 'col2': [3, 4]} df = pd.DataFrame(data=d) def add_rows(row): &quot;&quot;&quot; This function is used in apply function. It adds a new column and adds rows to the dataframe. :param row: :return: &quot;&quot;&quot; col3 = np.array(['a', 'b']) out_df = pd.DataFrame([row.tolist()], columns=row.index) out_df['col3'] = None if row['col1'] % 2 == 0: out_df.at[0, 'col3'] = col3 else: out_df.at[0, 'col3'] = np.array(['c', 'd']) out = out_df.explode('col3', ignore_index=True) return out cols = list(df.columns) cols.append('col3') result = pd.DataFrame(columns=cols) for index, row in df.iterrows(): rows = add_rows(row) result = pd.concat([result, rows]) result.reset_index(drop=True, inplace=True) print(result) </code></pre> <p>And my data has hundreds of thousands of rows and thousands of columns. Can this be achieved in a better way? I have tried to <code>apply</code> this function but got some weird output. Maybe apply needs to return something other than a DataFrame for this to work?</p>
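One vectorized alternative to `iterrows()` is to repeat every row with `Index.repeat` and then label the pairs with nested `np.where`; a sketch using the question's column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

# Duplicate every row, then assign the pair labels based on col1's parity
out = df.loc[df.index.repeat(2)].reset_index(drop=True)
first = out.index % 2 == 0            # first copy of each duplicated pair
even = out['col1'] % 2 == 0
out['col3'] = np.where(even,
                       np.where(first, 'a', 'b'),
                       np.where(first, 'c', 'd'))
print(out)
```

Everything here operates on whole arrays, so it scales to hundreds of thousands of rows without the per-row DataFrame construction and `concat` of the loop version.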
<python><pandas><dataframe><numpy>
2023-08-02 17:38:11
1
939
Muslimbek Abduganiev
76,821,977
549,226
Attempting to install conda package and receiving error
<p>I'm attempting to install the arcgis conda package from esri using the following command: conda install -c esri arcgis</p> <p>It fails with the following error message:</p> <pre><code>github-image/arcgis-python-api [master] » conda install -c esri arcgis Collecting package metadata (current_repodata.json): done Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: unsuccessful initial attempt using frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - arcgis Current channels: - https://conda.anaconda.org/esri/osx-arm64 - https://conda.anaconda.org/esri/noarch - https://repo.anaconda.com/pkgs/main/osx-arm64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/osx-arm64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre> <p>I checked the anaconda.org repository and there appears to be an arcgis package located under the esri channel:</p> <p><a href="https://i.sstatic.net/cXO1il.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXO1il.png" alt="repository" /></a></p> <p>Please note that this is on MacOS with Apple silicon.</p>
<python><anaconda><conda><arcgis><esri>
2023-08-02 16:46:15
2
7,625
opike
76,821,875
1,839,674
Dataframe from Yaml: Trying to only convert strings to dtype string
<p>I have to follow this flow!</p> <p>I have a file with a YAML document that contains this:</p> <pre><code>a_list_of_values: - 0 - 1 - 2 - 3 - 4 - 5 a_string: &quot;hello&quot; a_float: 1.23 </code></pre> <p>Later I have to convert it to .csv, which results in this:</p> <pre><code>a_list_of_values,a_string,a_float [1,2,3,4,5],hello,1.23 </code></pre> <p>Even later, I have to ingest the .csv into a pandas DataFrame:</p> <pre><code>pd.read_csv(os.path.join(path_to_algorithm, &quot;all_summary.csv&quot;),index_col=False) </code></pre> <p>This results in these dtypes:</p> <pre><code>a_list_of_values object a_string object a_float float64 </code></pre> <p>I want to convert the actual string to a dtype of string (without using the column name; this is a watered-down example).</p> <p>If I use convert_dtypes(), both objects convert to string.</p> <p>I want to leave a_list_of_values as object and have a_string as string.</p> <p>Furthermore, I want to filter the object columns out of the dataframe, leaving a_string and a_float.</p>
<python><pandas><dataframe>
2023-08-02 16:32:07
1
620
lr100
76,821,763
9,588,300
Why does a pandas boolean Series need () to work with boolean operators?
<p>I've seen that if you pass a boolean series of the same length as the number of rows in a dataframe, it filters the dataframe. However, if we pass a condition instead of a boolean series (like <code>df['col']==value</code>) and want to perform boolean operations on that condition (like <code>~</code>), it does not work, even though the condition's result is a boolean series. It only works if the condition is surrounded by parentheses. In other words, this works: <code>df[~(df['col']&gt;value)]</code>, and this does not: <code>df[~df['col']&gt;value]</code>; notice the only difference is the parentheses.</p> <p>I thought the parentheses were doing something to the boolean series resulting from applying <code>df['col']&gt;value</code>, like casting it into another kind of object that supports operations such as <code>~</code>. But they do not: <code>type(df['col']&gt;value)</code> and <code>type((df['col']&gt;value))</code> are the same, which is &quot;pandas.core.series.Series&quot;. So what are those parentheses doing that enables the boolean series resulting from the condition?</p> <p>Moreover, if you have two boolean series derived from applying conditions to a dataframe, like</p> <p><code>series_a=df['col']&gt;value</code> and <code>series_b=df['col']==value</code>, and you try to use both of them with an <code>&amp;</code> operator this way, <code>df[series_a &amp; series_b]</code>, it actually works fine. But calculating them inside the dataframe does not work: <code>df[df['col']&gt;value &amp; df['col']==value]</code> gives the error <code>TypeError: unsupported operand type(s) for &amp;: 'int' and 'IntegerArray'</code>. From that error I would assume some operator precedence is taking place, since it seems to apply the <code>&amp;</code> to an IntegerArray, probably doing this: <code>df['col']&gt; (value &amp; df['col']) ==value</code>. But I would like to ask to confirm.</p> <p>Example: suppose we have some dataframe with a column <code>tag</code> that has either value A or B:</p> <pre><code>import pandas as pd import numpy as np import random df = pd.DataFrame({'tag': [random.choice(['A', 'B']) for i in range(100)]}) </code></pre> <p>If I try to filter doing this:</p> <pre><code>df[~(df['tag']=='A')] </code></pre> <p>It works, but if I do this without those parentheses it does not work, with this error <code>TypeError: bad operand type for unary ~: 'str'</code>:</p> <pre><code>df[~df['tag']=='A'] </code></pre>
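The behavior in both snippets is indeed operator precedence: `~` and `&` bind tighter than the comparison operators `>` and `==`. A small demonstration with a hypothetical one-column frame:

```python
import pandas as pd

df = pd.DataFrame({'col': [1, 2, 3]})

# ~ binds tighter than >, so this is (~df['col']) > 1: bitwise NOT first.
left = ~df['col'] > 1              # ~[1, 2, 3] -> [-2, -3, -4]
assert left.tolist() == [False, False, False]

# The parenthesised form negates the boolean mask instead:
right = ~(df['col'] > 1)
assert right.tolist() == [True, False, False]

# & also binds tighter than comparisons, so
#   df['col'] > value & df['col'] == value
# parses as the chained comparison df['col'] > (value & df['col']) == value,
# which is why the error message shows & being applied to the column values.
```

With a string column, `~df['tag']` attempts bitwise NOT on strings, hence `bad operand type for unary ~: 'str'`; the parentheses make `~` act on the boolean mask instead of the raw column.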
<python><pandas><series>
2023-08-02 16:15:06
1
462
Eugenio.Gastelum96
76,821,687
9,274,940
Polars equivalent of .values in Pandas
<p>I want to convert the values of my dataframe into an array using Polars.</p> <p>With Pandas I would do this:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df_tests = pd.DataFrame({'col_a':[1,2,3], 'col_b':[4,5,6], 'col_c':[7,8,9]}) print(df_tests.values) </code></pre> <p>What's the equivalent in Polars?</p> <p>I've tried the to_list() method, but it only works for Series. The closest I could get is this, which returns a list of tuples:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import polars as pl # Convert Pandas DataFrame to Polars DataFrame df_tests = pd.DataFrame({'col_a':[1,2,3], 'col_b':[4,5,6], 'col_c':[7,8,9]}) df_tests_pl = pl.from_pandas(df_tests) print(df_tests_pl.select(pl.col(['col_a', 'col_b'])).rows()) </code></pre>
<python><dataframe><python-polars>
2023-08-02 16:05:01
1
551
Tonino Fernandez
76,821,492
18,618,577
Execute a python prog that read a list of argument file
<p>My program extracts a text file from a URL. The URL needs a pattern of 5 letters, a user and a password; the rest is exactly the same across almost 120 URLs.</p> <pre><code>import pandas as pd sta = 'router1' usr = 'user' pwd = 'password' url =&quot;http://&quot;+sta+&quot;.someurl/alotofsyntax&amp;usr=&quot;+usr+&quot;&amp;pwd=&quot;+pwd+&quot;&amp;command=dothis&quot; status = pd.read_csv(url, sep=':', on_bad_lines='skip', skipinitialspace = True) </code></pre> <p>That works for a single router; now I want to automate it across 120 routers, with a file that would contain something like</p> <pre><code>router1, user1, password1 router2, user2, password2 router3, user3, password3 </code></pre> <p>etc.</p>
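A minimal sketch of the loop over such a credentials file, using only the standard library and the URL pattern from the question (the `someurl` host and query parameters are the question's placeholders; in real use `pd.read_csv(url, ...)` would go inside the loop):

```python
import csv
import io

# Stand-in for open("routers.txt") — same comma-separated layout as the question.
creds = """router1, user1, password1
router2, user2, password2
"""

urls = []
# skipinitialspace strips the space after each comma.
for sta, usr, pwd in csv.reader(io.StringIO(creds), skipinitialspace=True):
    # Build the per-router URL; pd.read_csv(url, ...) would be called here.
    urls.append(f"http://{sta}.someurl/alotofsyntax&usr={usr}&pwd={pwd}&command=dothis")

print(urls[0])
```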
<python><pandas><file><arguments>
2023-08-02 15:39:18
1
305
BenjiBoy
76,821,477
5,356,096
Scrapy not saving scraped items
<p>Here is how I start my spider:</p> <pre class="lang-py prettyprint-override"><code> with resources.path(SCRAPING_MANIFESTS['systems'], 'manifest.jl') as path: process = CrawlerProcess({ 'FEEDS': { path: { 'format': 'jsonlines', 'overwrite': True, 'indent': 4 } } }) process.crawl(SystemsManifestSpider, prev_hashes=prev_list) process.start() </code></pre> <p>I verified in debug that indeed the path is correct and points to an existing file that I set to overwrite. It's a local file.</p> <p>When I debug my spider, I see my item is being successfully populated. I populate it as such:</p> <pre class="lang-py prettyprint-override"><code>class MyCustomItem(scrapy.Item): my_field = scrapy.Field() # inside my spider class defined as SystemsManifestSpider(scrapy.Spider): def parse(self, response, **kwargs): while True: ... my_item = MyCustomItem() my_item['my_field'] = &quot;test&quot; ... print(my_item) # prints the dictionary with 'my_field' correctly populated yield my_item </code></pre> <p>And the debug output after letting it run a while: <code> 'item_scraped_count': 9</code></p> <p>It correctly accesses the webpage and scrapes the data verified in debug, but after the item is yielded it doesn't save anything to my <code>manifest.jl</code>.</p> <p>However, when I modify it to this:</p> <pre class="lang-py prettyprint-override"><code>process = CrawlerProcess({ 'FEEDS': { Path(&quot;./manifest.jsonlines&quot;): { 'format': 'jsonlines', 'overwrite': True, 'indent': 4 } } }) </code></pre> <p>It correctly creates a new file and saves it in the local directory where I run the code. 
Both paths lead to a real file and directory.</p> <p>But again when I specify an absolute path, it doesn't work:</p> <pre class="lang-py prettyprint-override"><code>process = CrawlerProcess({ 'FEEDS': { Path(&quot;C:\\Users\\xxxx\\PycharmProjects\\xxxx\\src\\python_scrapper_nf\\xxxx\\manifest.jsonlines&quot;): { 'format': 'jsonlines', 'overwrite': True, 'indent': 4 } } }) </code></pre> <p>The path above is identical to <code>Path(./manifest.jsonlines)</code></p> <p>Note: It seems all relative paths work, but absolute paths never do.</p>
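One hedged guess at the cause: Scrapy parses the `FEEDS` keys as feed URIs, and an absolute Windows path like `C:\...` can be misread as a URI with scheme `c:`, which would explain why every relative path works and every absolute path silently fails. If that is what is happening, handing Scrapy a proper `file://` URI should sidestep it; `pathlib` can build one (the `xxxx` segments below are the question's own placeholders):

```python
from pathlib import PureWindowsPath

# Convert the absolute Windows path into a file:// URI that a URI parser
# cannot mistake for a "c:" scheme. PureWindowsPath works on any OS.
path = PureWindowsPath(r"C:\Users\xxxx\PycharmProjects\xxxx\src\python_scrapper_nf\xxxx\manifest.jsonlines")
feed_uri = path.as_uri()
print(feed_uri)
```

The resulting string could then be used as the `FEEDS` key in place of the raw `Path`.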
<python><python-3.x><scrapy>
2023-08-02 15:37:02
2
1,665
Jack Avante
76,821,417
18,618,577
create a dataframe with list of strings
<p>I'm starting from a dataframe extracted from a GPRS router fleet, approx. 200 lines. Then I make a list of several specific values that I'm interested in monitoring. Finally, I want to:</p> <p>1 - make a dataframe with my specific values only</p> <p>2 - write a csv file</p> <pre><code>import pandas as pd # the extraction url =&quot;someurl.splash.com&quot; status = pd.read_csv(url, sep=':', on_bad_lines='skip', skipinitialspace = True) # then my specific values model = status['value'].iloc[2] SN = status['value'].iloc[4] OS = status['value'].iloc[6] CPUtemp = status['value'].iloc[10] TOBmodel = status['value'].iloc[63] firm = status['value'].iloc[64] IMEI = status['value'].iloc[65] ICCID = status['value'].iloc[67] SIMnum = status['value'].iloc[67][6:] SIMappel = status['value'].iloc[69] service = status['value'].iloc[73] band = status['value'].iloc[74] rssi = status['value'].iloc[75] network = status['value'].iloc[81] LAI = status['value'].iloc[82] LAC = status['value'].iloc[83] CID = status['value'].iloc[84] LANIP = status['value'].iloc[112] LANDHCP = status['value'].iloc[115] WANIP = status['value'].iloc[130] dlrate = status['value'].iloc[146] uprate = status['value'].iloc[147] </code></pre> <p>A sample of the data, after the pandas read function (sorry for not putting it here before):</p> <pre><code> label value 0 === SYSTEM INFORMATION === 1 Product name NetModule Router 2 Product type NB800 3 Hardware version V3.2 4 Serial number 00112B02AC71 5 Operating system Linux 4.19.163 6 System software 4.7.0.103 7 UBoot 4.7.0.101 8 SPL 4.7.0.101 9 CPU Rev 2.1 Part 0xB944 Mfgr 0x0017 (1000 MHz) 10 CPU temperature 60.2 degrees Celsius 11 RAM 512 MB 12 Flash storage 4096 MB 13 Temperature 60.2 degrees Celsius 14 System errors none 15 Hydra happy 16 === CONFIGURATION INFORMATION === 17 Config version 1.16 18 Config name user-config 19 Config hash b460665137cdca6cd35a2a0da93c0c5d </code></pre>
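A sketch of steps 1 and 2: keeping the name-to-row-index pairs in a dict (instead of ~20 loose variables) lets the names survive as column headers in the output. The frame below is a small stand-in for the real `status`, and only a subset of the question's indices is shown:

```python
import pandas as pd

# Small stand-in for the real `status` frame read from the router.
status = pd.DataFrame({
    "label": ["===", "Product name", "Product type", "Hardware version", "Serial number"],
    "value": ["", "NetModule Router", "NB800", "V3.2", "00112B02AC71"],
})

fields = {"model": 2, "SN": 4}  # name -> row index, as in the question
row = {name: status["value"].iloc[i] for name, i in fields.items()}

# One row per router; for a fleet, collect one dict per router and pass the list.
monitor = pd.DataFrame([row])
monitor.to_csv("router_status.csv", index=False)
print(monitor)
```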
<python><pandas><dataframe>
2023-08-02 15:29:22
2
305
BenjiBoy
76,821,400
8,342,261
How to apply tokenize to a particular column in a dataframe by using python?
<p>I have a dataframe with three columns. One of the columns needs tokenization applied to it. I am getting <strong>TypeError: expected string or bytes-like object, got 'float'</strong>.</p> <pre><code>import pandas as pd import os df = pd.read_csv(r&quot;D:\......PATH\sample_regex.xlsx&quot;) from nltk.tokenize import RegexpTokenizer regexp = RegexpTokenizer(r'\w+') df['CDnew'] = df['CD'].apply(regexp.tokenize) </code></pre> <p>Could someone help me resolve this issue, please?</p> <p>Thanks in advance.</p> <p>Data <a href="https://i.sstatic.net/ULYdS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ULYdS.png" alt="enter image description here" /></a></p> <p>I am trying to tokenize the words from the third column and keep the tokenized words in a new column. But I am getting <em>TypeError: expected string or bytes-like object, got 'float'</em>.</p>
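A hedged sketch of the likely cause: empty cells in the column come back from the file as float NaN, and the tokenizer's underlying regex rejects anything that is not a string. Guarding or casting the cells first avoids it — shown here with `re.findall`, which is essentially what `RegexpTokenizer(r'\w+')` does under the hood:

```python
import re

# Stand-in for df['CD']: one real string and one empty cell, which pandas
# represents as float NaN — the source of "got 'float'".
cells = ["Hello world 123", float("nan")]

# re.findall(r'\w+', nan) would raise TypeError: expected string or
# bytes-like object; replace non-strings with "" (or str(c)) first.
tokens = [re.findall(r"\w+", c if isinstance(c, str) else "") for c in cells]
print(tokens)
```

With pandas the same guard would typically look like `df['CD'].fillna('').astype(str).apply(regexp.tokenize)` — hedged, since the real data is only visible in the screenshot.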
<python><regex><nltk><tokenize>
2023-08-02 15:26:37
1
335
Sanky Ach
76,821,347
3,491,151
SageMaker experiment tracking duplication
<p>I am trying to train a model using script mode via AWS SageMaker. I would like to track this training job with AWS SageMaker Experiments together with some calculated metrics in the training job. When I start the training job a new experiment run is created successfully that tracks all the provided hyperparameters (e.g., nestimators).</p> <p>However, as said earlier, additionally, I also want to track other metrics (e.g., Accuracy) in the custom script. Here I use <code>load_run()</code> before I fit the model and then for example log a metric with <code>run.log_metric()</code>. However, when I do that, SageMaker creates a new <strong>separate</strong> experiment entry in the UI which means that my hyperparameters and metrics are stored <strong>separately</strong> in two individual experiment runs:</p> <p><a href="https://i.sstatic.net/3oHUC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3oHUC.png" alt="Two separate runs created by SageMaker" /></a></p> <p>I would like to see the metrics and hyperparameters all in one Experiment run combined. What am I doing wrong?</p> <p>Here is the abbreviated code I am using to kick off the training process:</p> <pre><code> exp_name = &quot;sklearn-script-mode-experiment&quot; with Run( experiment_name=exp_name, sagemaker_session=sess, ) as run: sklearn_estimator = SKLearn('train.py', instance_type='ml.m5.large', framework_version='1.0-1', role=&quot;arn:aws:iam:::role/service-role/AmazonSageMaker-ExecutionRole-&quot;, hyperparameters={'nestimators': 100}, environment={&quot;REGION&quot;: REGION}) sklearn_estimator.fit({'train': f's3://{BUCKET}/{S3_INPUT_PATH}'}) </code></pre> <p>Here is the abbreviated <code>train.py</code>:</p> <pre><code> #parsing arguments here ... etc ... 
model = RandomForestClassifier(n_estimators=args.nestimators, max_depth=5, random_state=1) with load_run(sagemaker_session=sagemaker_session) as run: model.fit(X, y) run.log_metric(name = &quot;Final Test Loss&quot;, value = 0.9) </code></pre>
<python><machine-learning><amazon-sagemaker><amazon-sagemaker-experiments>
2023-08-02 15:20:27
1
6,566
maRtin
76,821,330
433,202
Cython doesn't determine memoryview dimension change in sliced argument
<p>I'm working with memoryviews of a fused type in a Cython application and have run into an issue where I expect Cython to know that a sliced 2D memoryview is a 1D memoryview, but can only get it to compile in 1 of 3 different ways.</p> <pre class="lang-py prettyprint-override"><code># cython: language_level=3, boundscheck=False, cdivision=True, wraparound=False, initializedcheck=False, nonecheck=False cimport cython cdef void _subfunc(cython.floating[:] arr1, cython.floating[:] out1, cython.floating[:] out2): out1[0] = arr1[0] ** 2 out2[0] = arr1[0] + 2 cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) cdef void _func2(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, &lt;cython.floating[:]&gt;out1[:, 0], &lt;cython.floating[:]&gt;out1[:, 1]) cdef void _func3(cython.floating[:] arr1, cython.floating[:, ::1] out1): cdef cython.floating[:] o1, o2 o1 = out1[:, 0] o2 = out1[:, 1] _subfunc(arr1, o1, o2) </code></pre> <p>When compiled with <code>cythonize test.pyx</code> I get the following errors:</p> <pre><code>Error compiling Cython file: ------------------------------------------------------------ ... cdef void _subfunc(cython.floating[:] arr1, cython.floating[:] out1, cython.floating[:] out2): out1[0] = arr1[0] ** 2 out2[0] = arr1[0] + 2 cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:10:12: no suitable method found Error compiling Cython file: ------------------------------------------------------------ ... 
cdef void _subfunc(cython.floating[:] arr1, cython.floating[:] out1, cython.floating[:] out2): out1[0] = arr1[0] ** 2 out2[0] = arr1[0] + 2 cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:10:12: no suitable method found Error compiling Cython file: ------------------------------------------------------------ ... cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) cdef void _func2(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, &lt;cython.floating[:]&gt;out1[:, 0], &lt;cython.floating[:]&gt;out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:13:43: Can only create cython.array from pointer or array Error compiling Cython file: ------------------------------------------------------------ ... cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) cdef void _func2(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, &lt;cython.floating[:]&gt;out1[:, 0], &lt;cython.floating[:]&gt;out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:13:75: Can only create cython.array from pointer or array Error compiling Cython file: ------------------------------------------------------------ ... cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) cdef void _func2(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, &lt;cython.floating[:]&gt;out1[:, 0], &lt;cython.floating[:]&gt;out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:13:43: Can only create cython.array from pointer or array Error compiling Cython file: ------------------------------------------------------------ ... 
cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) cdef void _func2(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, &lt;cython.floating[:]&gt;out1[:, 0], &lt;cython.floating[:]&gt;out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:13:75: Can only create cython.array from pointer or array Error compiling Cython file: ------------------------------------------------------------ ... cdef void _subfunc(cython.floating[:] arr1, cython.floating[:] out1, cython.floating[:] out2): out1[0] = arr1[0] ** 2 out2[0] = arr1[0] + 2 cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:10:4: Invalid use of fused types, type cannot be specialized Error compiling Cython file: ------------------------------------------------------------ ... cdef void _subfunc(cython.floating[:] arr1, cython.floating[:] out1, cython.floating[:] out2): out1[0] = arr1[0] ** 2 out2[0] = arr1[0] + 2 cdef void _func1(cython.floating[:] arr1, cython.floating[:, ::1] out1): _subfunc(arr1, out1[:, 0], out1[:, 1]) ^ ------------------------------------------------------------ test.pyx:10:4: Invalid use of fused types, type cannot be specialized </code></pre> <p>As you can see, only <code>func3</code> with its explicit variables &quot;o1&quot; and &quot;o2&quot; satisfy the compilation. This is unexpected but also annoying since it is a lot more code. Is this just the way this has to be? Or am I doing something wrong?</p> <p>Edit: It fails with the same errors with Cython 0.29.x and Cython 3.x.</p>
<python><cython><cythonize>
2023-08-02 15:17:45
1
3,695
djhoese
76,821,114
567,059
Python package requires different module import paths when run locally Vs when installed with pip
<p>I have a Python package that I wish to make installable, but when running locally to test vs. running after it is installed with <code>pip</code>, the paths to imported modules in the package differ, so I can't use the package in both cases.</p> <p>Long story short: how can I make my package work both locally, without it being installed with <code>pip</code>, and when installed with <code>pip</code>?</p> <hr /> <h1>Full Explanation</h1> <p>Here is my directory structure.</p> <pre class="lang-bash prettyprint-override"><code>example_project │ pyproject.toml │ setup.py │ └── src/example │ __main__.py │ assets.py </code></pre> <p>When using the first example (see code below), the package can be run locally and it works as expected.</p> <pre class="lang-bash prettyprint-override"><code>user@computer:~/tools$ python example_project/src/example User Hello, User. </code></pre> <p>However, after installing the package with <code>pip</code> and running, it fails to work.</p> <pre class="lang-bash prettyprint-override"><code>user@computer:~/tools$ pip install example_project/ ... Successfully installed example-1.0.2 user@computer:~/tools$ example User Traceback (most recent call last): File &quot;/home/user/.local/bin/example&quot;, line 5, in &lt;module&gt; from example.__main__ import entry_point File &quot;/home/user/.local/lib/python3.10/site-packages/example/__main__.py&quot;, line 1, in &lt;module&gt; from assets import say_hello ModuleNotFoundError: No module named 'assets' </code></pre> <p>To seemingly fix everything, the import paths can be updated as shown in the second example (see code below), and now it appears as though the package works when run locally and when installed with <code>pip</code> and run. But that's not quite the whole story.</p> <pre class="lang-bash prettyprint-override"><code>user@computer:~/tools$ python example_project/src/example User Hello, User. user@computer:~/tools$ example User Hello, User. 
</code></pre> <p>The trouble is, if the package is uninstalled, it wont work locally. This makes sense, because <code>example</code> wouldn't be in the Python path. But how can I make this package work both locally without being installed, and when installed with <code>pip</code>?</p> <pre class="lang-bash prettyprint-override"><code>user@computer:~/tools$ pip uninstall example Found existing installation: example 1.0.2 Uninstalling example-1.0.2: ... Successfully uninstalled example-1.0.2 user@computer:~/tools$ python example_project/src/example User Traceback (most recent call last): File &quot;/usr/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main return _run_code(code, main_globals, None, File &quot;/usr/lib/python3.10/runpy.py&quot;, line 86, in _run_code exec(code, run_globals) File &quot;/home/user/tools/example_project/src/example/__main__.py&quot;, line 1, in &lt;module&gt; from example.assets import say_hello ModuleNotFoundError: No module named 'example' </code></pre> <hr /> <h3>First Example</h3> <pre class="lang-bash prettyprint-override"><code>mkdir -p example_project/src/example cat &lt;&lt;EOF &gt; example_project/setup.py from setuptools import setup setup( name='example', version='1.0.2', entry_points={'console_scripts': ['example = example.__main__:entry_point']}, ) EOF cat &lt;&lt;EOF &gt; example_project/pyproject.toml [build-system] requires = [&quot;setuptools&quot;] build-backend = &quot;setuptools.build_meta&quot; EOF cat &lt;&lt;EOF &gt; example_project/src/example/__main__.py from assets import say_hello import argparse parser = argparse.ArgumentParser(description='Example') parser.add_argument('name') def entry_point() -&gt; None: args = parser.parse_args() say_hello(args.name) if __name__ == '__main__': entry_point() EOF cat &lt;&lt;EOF &gt; example_project/src/example/assets.py def say_hello(name: str) -&gt; None: print(f'Hello, {name}.') EOF </code></pre> <h3>Second Example</h3> <p><em><strong>Note:</strong> Requires 
that all command from the first example (above) have been run.</em></p> <pre class="lang-bash prettyprint-override"><code>cat &lt;&lt;EOF &gt; example_project/src/example/__main__.py from example.assets import say_hello import argparse parser = argparse.ArgumentParser(description='Example') parser.add_argument('name') def entry_point() -&gt; None: args = parser.parse_args() say_hello(args.name) if __name__ == '__main__': entry_point() EOF </code></pre>
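One common resolution, sketched rather than taken from the question: use a package-relative import (`from .assets import say_hello`) and run the uninstalled package with `python -m example` from the `src` directory. Under `-m`, Python gives `__main__.py` a proper package context, so the same relative import resolves both locally and when installed. A self-contained demo (argparse omitted for brevity) that builds a throwaway copy of the package and runs it:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Build a throwaway copy of the example package with a *relative* import.
src = Path(tempfile.mkdtemp()) / "src"
pkg = src / "example"
pkg.mkdir(parents=True)
(pkg / "__init__.py").write_text("")
(pkg / "assets.py").write_text("def say_hello(name):\n    print(f'Hello, {name}.')\n")
(pkg / "__main__.py").write_text(
    "from .assets import say_hello\n"  # relative: resolves installed or local
    "say_hello('User')\n"
)

# `python -m example` from src/ runs example/__main__.py with
# __package__ == 'example', so the relative import works un-installed.
result = subprocess.run(
    [sys.executable, "-m", "example"], cwd=src, capture_output=True, text=True
)
print(result.stdout)
```

The pip-installed console script already imports via `example.__main__`, so nothing changes for the installed case.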
<python><pip><python-import><python-packaging>
2023-08-02 14:51:46
0
12,277
David Gard
76,820,829
21,420,742
Removing IDs under specific conditions in Python
<p>I have a dataset and I need to entirely remove IDs that are flagged before a certain date. I am having trouble getting a start on this one.</p> <p>df =</p> <pre><code> ID Date Flagged 101 6/4/2023 0 101 7/23/2023 0 102 4/28/2023 1 102 5/2/2023 1 102 6/30/2023 1 102 7/11/2023 1 103 6/23/2023 1 103 7/12/2023 1 104 4/17/2023 0 104 5/12/2023 1 104 6/17/2023 1 104 7/22/2023 1 </code></pre> <p>I would like to altogether remove the IDs that are <code>Flagged</code> before May 1 2023. I tried</p> <pre class="lang-py prettyprint-override"><code>today = datetime.datetime.today() x_days = today - dt(days=90) filtered_df = df[(df['Flagged'] == 1) &amp; (df['Date'] &gt;= x_days)] </code></pre> <p>When I run this I still have IDs that I would like to have completely removed. Below is the desired output:</p> <p>df =</p> <pre><code> ID Date Flagged 103 6/23/2023 1 103 7/12/2023 1 104 5/12/2023 1 104 6/17/2023 1 104 7/22/2023 1 </code></pre> <p>Any help with this would be great, thank you!</p>
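A hedged sketch that reproduces the desired output, assuming the intent is: drop any ID that has a flagged row before the cutoff, drop IDs that are never flagged, then drop remaining rows dated before the cutoff:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 101, 102, 102, 102, 102, 103, 103, 104, 104, 104, 104],
    "Date": ["6/4/2023", "7/23/2023", "4/28/2023", "5/2/2023", "6/30/2023",
             "7/11/2023", "6/23/2023", "7/12/2023", "4/17/2023", "5/12/2023",
             "6/17/2023", "7/22/2023"],
    "Flagged": [0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
})
df["Date"] = pd.to_datetime(df["Date"])
cutoff = pd.Timestamp("2023-05-01")

# IDs flagged before the cutoff (102) are out entirely;
# IDs never flagged (101) are out too.
bad_ids = set(df.loc[(df["Flagged"] == 1) & (df["Date"] < cutoff), "ID"])
good_ids = set(df.loc[df["Flagged"] == 1, "ID"]) - bad_ids

# Keep surviving IDs, minus any leftover rows before the cutoff (104's 4/17 row).
filtered_df = df[df["ID"].isin(good_ids) & (df["Date"] >= cutoff)]
print(filtered_df)
```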
<python><python-3.x><pandas><dataframe><datetime>
2023-08-02 14:19:05
3
473
Coding_Nubie
76,820,704
311,130
Pandas lib gives a weird error "Length of values does not match length of index"
<p>I'm using pandas python library, and I get an error which I don't understand.</p> <p>&quot;Length of values (100) does not match length of index (4)&quot;,</p> <p>Whereas my params are a tuple of 100 items (rows) and a tuple of 4 strings.</p> <p>Should I pack the rows differently?</p> <p><a href="https://i.sstatic.net/8j7c4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8j7c4.png" alt="enter image description here" /></a></p>
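Without seeing the code behind the screenshot, a common way to hit exactly this mismatch is mixing up the positional arguments of `pd.DataFrame` — its second positional parameter is `index`, not `columns` — or assigning a 100-element sequence into a 4-row object. A hedged sketch with hypothetical data shaped like the question's (100 row tuples, 4 column names), using the keyword form to keep it unambiguous:

```python
import pandas as pd

rows = tuple((i, i + 1, i + 2, i + 3) for i in range(100))  # 100 row tuples
cols = ("a", "b", "c", "d")                                 # 4 column names

# pd.DataFrame(rows, cols) would treat `cols` as the *index* (length 4)
# against 100 rows of values; passing columns= by keyword avoids that.
df = pd.DataFrame(rows, columns=cols)
print(df.shape)
```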
<python><pandas>
2023-08-02 14:06:16
1
36,892
Elad Benda
76,820,703
1,753,640
Python regex findall between square brackets
<p>I have the following body of text and I want to extract all data in between the square brackets e.g. for the given text</p> <pre><code>a = '\tvar mydata = [{\nZWeight:&quot;13.00&quot;,\nStyleName:&quot;FIRECREEK SOLID&quot;,\nColorName:&quot;TRADITION&quot;\n}\n]' </code></pre> <p>I want to extract</p> <pre><code>{\nZWeight:&quot;13.00&quot;,\nStyleName:&quot;FIRECREEK SOLID&quot;,\nColorName:&quot;TRADITION&quot;\n} </code></pre> <p>I have tried</p> <pre><code>re.findall(r'var mydata = \[(.*?)\]',a) </code></pre> <p>but it came up as an empty list. Any help would be great!</p>
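The empty result is almost certainly because `.` does not match newlines by default, and there is a `\n` right after the `[`. Adding the `re.DOTALL` flag (or an inline `(?s)`) lets the lazy `.*?` span the newlines between the brackets:

```python
import re

a = '\tvar mydata = [{\nZWeight:"13.00",\nStyleName:"FIRECREEK SOLID",\nColorName:"TRADITION"\n}\n]'

# Without DOTALL, '.' stops at the first \n, so nothing can reach the ']'.
print(re.findall(r'var mydata = \[(.*?)\]', a))  # []

# With DOTALL, '.' matches newlines too and the lazy match reaches ']'.
matches = re.findall(r'var mydata = \[(.*?)\]', a, re.DOTALL)
print(matches[0])
```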
<python><regex>
2023-08-02 14:06:05
1
385
user1753640
76,820,636
3,158,028
Chrome HTML5 Video Not Loading
<p>I have a video component in React that works for both Firefox and Safari, but it's not working in Chrome. Just as a sanity check, I created a fresh React app (below) and added a simple video tag, but the video would not load in Chrome and only loads in Firefox. How do I fix this?</p> <pre class="lang-js prettyprint-override"><code>import './App.css'; function App() { return ( &lt;div className=&quot;App&quot;&gt; &lt;h1&gt;Test&lt;/h1&gt; &lt;video muted preload=&quot;metadata&quot; &gt; &lt;source src={`http://localhost:8005/video/test.mp4`} type=&quot;video/mp4&quot; /&gt; &lt;/video&gt; &lt;/div&gt; ); } export default App; </code></pre> <p>FastAPI endpoint</p> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/video/{file}&quot;, tags=[&quot;video&quot;]) async def get_video_stream(file: str, range: str = Header(None)): if range is None: raise HTTPException(status_code=400, detail=&quot;Range header required&quot;) start, end = range.replace(&quot;bytes=&quot;, &quot;&quot;).split(&quot;-&quot;) start = int(start) end = int(end) if end else start + CHUNK_SIZE video_path = Path(f&quot;/app/videos/{file}&quot;) try: with open(video_path, &quot;rb&quot;) as video: video.seek(start) data = video.read(end - start) filesize = str(video_path.stat().st_size) headers = { &quot;Content-Range&quot;: f&quot;bytes {str(start)}-{str(end)}/{filesize}&quot;, &quot;Accept-Ranges&quot;: &quot;bytes&quot;, } return Response( data, status_code=206, headers=headers, media_type=&quot;video/mp4&quot; ) except FileNotFoundError: logger.error(f&quot;File not found: {video_path}&quot;) raise HTTPException(status_code=404, detail=&quot;Video not found&quot;) </code></pre>
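A guess at the cause rather than a confirmed diagnosis: Chrome tends to be stricter than Firefox about the `Content-Range` header. Its `end` byte is inclusive and must be strictly less than the file size, but in the endpoint above `start + CHUNK_SIZE` can run past the end of a small file, producing an invalid range that Chrome rejects. Clamping fixes the header; a sketch of just the arithmetic:

```python
CHUNK_SIZE = 1024 * 1024

def range_header(range_value: str, filesize: int, chunk: int = CHUNK_SIZE) -> str:
    """Compute a spec-compliant Content-Range from a 'bytes=start-end' header."""
    start_s, end_s = range_value.replace("bytes=", "").split("-")
    start = int(start_s)
    end = int(end_s) if end_s else start + chunk
    end = min(end, filesize - 1)  # inclusive end byte, must be < filesize
    return f"bytes {start}-{end}/{filesize}"

# Chrome typically opens a small file with "bytes=0-":
print(range_header("bytes=0-", 500))
```

In the FastAPI handler the same clamp would go right after `end` is computed, and `video.read(end - start)` would become `video.read(end - start + 1)` to honour the inclusive end.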
<javascript><python><google-chrome><html5-video>
2023-08-02 13:57:20
2
3,390
Soubriquet
76,820,536
5,562,092
sam template makefile not installing python libraries
<p>So, I've been going at this for the last few days, so this is pretty much my last resort.</p> <p>The problem is with the SAM template makefile option.</p> <p>When using an out-of-the-box solution, i.e. <code>Metadata.BuildMethod = python3.10</code>, the SAM CLI goes and finds the requirements.txt file and does its thing. This is all great until I try to do the same thing with <code>BuildMethod: makefile</code>.</p> <p>In the makefile, I have tried pretty much every single permutation of this, and they don't work. The following don't work:</p> <pre><code>python -m pip install -r requirements.txt -t $(ARTIFACT_DIR) python -m pip install -r requirements.txt -t $(ARTIFACT_DIR)/python python -m pip install -r requirements.txt -t $(ARTIFACT_DIR)/python/lib/python3.10/site-packages </code></pre> <p>All of the above give an error of <code>unable to import module 'app' No module name 'pydantic'</code></p> <p>The folder structure is like so</p> <pre><code>├── events │   └── event.json ├── hello_world │   ├── app.py │   ├── Makefile │   └── requirements.txt ├── README.md ├── samconfig.toml └── template.yaml </code></pre> <p>The template section is like so</p> <pre><code> HelloWorldFunction: Type: AWS::Serverless::Function Metadata: # BuildMethod: python3.10 BuildMethod: makefile Properties: CodeUri: hello_world/ Handler: app.lambda_handler Runtime: python3.10 Architectures: - x86_64 Events: HelloWorld: Type: Api Properties: Path: /hello Method: get </code></pre> <p>If anybody can help or give any tips, it would be appreciated.</p>
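For reference, a hedged sketch of what a `BuildMethod: makefile` build usually needs: a target named `build-<LogicalResourceId>` that copies the handler source *and* installs the dependencies into `$(ARTIFACT_DIR)`. For a function (as opposed to a layer) the packages belong at the artifact root, not under `python/` — that layout is for layers, which would match the import error seen here.

```make
# Assumed Makefile inside hello_world/ — the target name must match the
# logical resource id (HelloWorldFunction) from template.yaml.
build-HelloWorldFunction:
	cp app.py $(ARTIFACT_DIR)/
	python -m pip install -r requirements.txt -t $(ARTIFACT_DIR)
```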
<python><aws-lambda><aws-sam-cli>
2023-08-02 13:46:34
1
875
A H Bensiali
76,820,376
4,466,255
How to show AST generated from pglast package in graphical form
<p>The code below generates an AST for a PostgreSQL query:</p> <pre><code>from pglast import parse_sql parsed_sql = parse_sql(&quot;SELECT age FROM (select * from users group by age) as age_table WHERE age &gt; 18;&quot;) print(parsed_sql) </code></pre> <p>It gives</p> <pre><code>(&lt;RawStmt stmt=&lt;SelectStmt targetList=(&lt;ResTarget val=&lt;ColumnRef fields=(&lt;String sval='age'&gt;,)&gt;&gt;,) fromClause=(&lt;RangeSubselect lateral=False subquery=&lt;SelectStmt targetList=(&lt;ResTarget val=&lt;ColumnRef fields=(&lt;A_Star&gt;,)&gt;&gt;,) fromClause=(&lt;RangeVar relname='users' inh=True relpersistence='p'&gt;,) groupClause=(&lt;ColumnRef fields=(&lt;String sval='age'&gt;,)&gt;,) groupDistinct=False limitOption=&lt;LimitOption.LIMIT_OPTION_DEFAULT: 0&gt; op=&lt;SetOperation.SETOP_NONE: 0&gt; all=False&gt; alias=&lt;Alias aliasname='age_table'&gt;&gt;,) whereClause=&lt;A_Expr kind=&lt;A_Expr_Kind.AEXPR_OP: 0&gt; name=(&lt;String sval='&gt;'&gt;,) lexpr=&lt;ColumnRef fields=(&lt;String sval='age'&gt;,)&gt; rexpr=&lt;A_Const isnull=False val=&lt;Integer ival=18&gt;&gt;&gt; groupDistinct=False limitOption=&lt;LimitOption.LIMIT_OPTION_DEFAULT: 0&gt; op=&lt;SetOperation.SETOP_NONE: 0&gt; all=False&gt; stmt_location=0 stmt_len=78&gt;,) </code></pre> <p>How can we present it in graphical form? Is there a function available that can do this?</p>
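As far as I know pglast itself does not ship a plotting helper, but any nested structure can be dumped to Graphviz DOT text and rendered with `dot -Tpng`. A generic stdlib sketch over a plain dict/list tree — in real use the parse tree would first be converted to its dict representation, and the stand-in tree below is a hand-built fragment, not real pglast output:

```python
from itertools import count

def to_dot(tree):
    """Render a nested dict/list tree as Graphviz DOT text."""
    ids = count()
    lines = ["digraph ast {", '  node [shape=box, fontname="monospace"];']

    def walk(node, label):
        me = next(ids)
        lines.append(f'  n{me} [label="{label}"];')
        if isinstance(node, dict):
            for key, child in node.items():
                lines.append(f"  n{me} -> n{walk(child, key)};")
        elif isinstance(node, (list, tuple)):
            for i, child in enumerate(node):
                lines.append(f"  n{me} -> n{walk(child, f'[{i}]')};")
        else:  # leaf: show the value in the label (later statement wins in DOT)
            lines.append(f'  n{me} [label="{label} = {node}"];')
        return me

    walk(tree, "RawStmt")
    lines.append("}")
    return "\n".join(lines)

# Tiny hand-built stand-in for part of the parse tree above:
dot = to_dot({"SelectStmt": {"targetList": [{"ColumnRef": "age"}], "whereClause": {"op": ">"}}})
print(dot)
```

Writing the returned string to `ast.dot` and running `dot -Tpng ast.dot -o ast.png` produces the picture.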
<python><postgresql>
2023-08-02 13:26:56
1
1,208
Kushdesh
76,820,240
10,461,632
Using sockets in react+flask application
<p>I'm trying to implement sockets in my React+Flask application and am running into issues. When I start my backend server, I get a continuous loop of users connect. Then when I start the frontend, the users get disconnected. Any ideas on what the issue is? I was using this <a href="https://medium.com/@adrianhuber17/how-to-build-a-simple-real-time-application-using-flask-react-and-socket-io-7ec2ce2da977" rel="nofollow noreferrer">guide</a> as a starting point, but no differences in the code are jumping out at me. The only difference between what I'm doing is I'm using <code>Vite</code> instead of <code>Create React App</code>.</p> <p><strong>Python packages</strong></p> <pre><code>bidict==0.22.1 blinker==1.6.2 click==8.1.6 dnspython==2.4.1 Flask==2.3.2 Flask-Cors==4.0.0 Flask-SocketIO==5.3.5 greenlet==2.0.2 gunicorn==21.2.0 h11==0.14.0 importlib-metadata==6.8.0 itsdangerous==2.1.2 Jinja2==3.1.2 MarkupSafe==2.1.3 packaging==23.1 pyserial==3.5 python-dotenv==1.0.0 python-engineio==4.5.1 python-socketio==5.8.0 simple-websocket==0.10.1 six==1.16.0 Werkzeug==2.3.6 wsproto==1.2.0 zipp==3.16.2 </code></pre> <p><strong>app.py</strong></p> <pre><code>from flask import Flask, request from flask_cors import CORS from flask_socketio import SocketIO, emit app = Flask(__name__) CORS(app, resources={r&quot;*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) socketio = SocketIO(app, cors_allowed_origins='*') @socketio.on(&quot;connect&quot;) def connected(): &quot;&quot;&quot;event listener when client connects to the server&quot;&quot;&quot; print('-' * 25) print(f&quot;Client has connected: {request.sid}&quot;) emit(&quot;connect&quot;, {&quot;data&quot;: f&quot;id: {request.sid} is connected&quot;}) print('-' * 25) @socketio.on(&quot;disconnect&quot;) def disconnected(): &quot;&quot;&quot;event listener when client disconnects to the server&quot;&quot;&quot; print('-' * 25) print(&quot;User disconnected&quot;) emit(&quot;disconnect&quot;, f&quot;User {request.sid} 
disconnected&quot;, broadcast=True) print('-' * 25) if __name__ == &quot;__main__&quot;: socketio.run(app, debug=True) </code></pre> <p><strong>package.json</strong></p> <pre><code>{ &quot;name&quot;: &quot;socket-example&quot;, &quot;private&quot;: true, &quot;version&quot;: &quot;0.0.0&quot;, &quot;type&quot;: &quot;module&quot;, &quot;scripts&quot;: { &quot;dev&quot;: &quot;vite&quot;, &quot;build&quot;: &quot;tsc &amp;&amp; vite build&quot;, &quot;lint&quot;: &quot;eslint src --ext ts,tsx --report-unused-disable-directives --max-warnings 0&quot;, &quot;preview&quot;: &quot;vite preview&quot; }, &quot;dependencies&quot;: { &quot;@reduxjs/toolkit&quot;: &quot;^1.9.5&quot;, &quot;@types/react-redux&quot;: &quot;^7.1.25&quot;, &quot;framer-motion&quot;: &quot;^10.12.22&quot;, &quot;react&quot;: &quot;^18.2.0&quot;, &quot;react-dom&quot;: &quot;^18.2.0&quot;, &quot;react-hot-toast&quot;: &quot;^2.4.1&quot;, &quot;react-icons&quot;: &quot;^4.10.1&quot;, &quot;react-redux&quot;: &quot;^8.1.1&quot;, &quot;socket.io&quot;: &quot;^4.7.1&quot;, &quot;socket.io-client&quot;: &quot;^4.7.1&quot;, &quot;uuid&quot;: &quot;^9.0.0&quot; }, &quot;devDependencies&quot;: { &quot;@types/react&quot;: &quot;^18.2.14&quot;, &quot;@types/react-dom&quot;: &quot;^18.2.6&quot;, &quot;@types/uuid&quot;: &quot;^9.0.2&quot;, &quot;@typescript-eslint/eslint-plugin&quot;: &quot;^5.61.0&quot;, &quot;@typescript-eslint/parser&quot;: &quot;^5.61.0&quot;, &quot;@vitejs/plugin-react&quot;: &quot;^4.0.1&quot;, &quot;autoprefixer&quot;: &quot;^10.4.14&quot;, &quot;eslint&quot;: &quot;^8.44.0&quot;, &quot;eslint-plugin-react-hooks&quot;: &quot;^4.6.0&quot;, &quot;eslint-plugin-react-refresh&quot;: &quot;^0.4.1&quot;, &quot;postcss&quot;: &quot;^8.4.26&quot;, &quot;prettier&quot;: &quot;3.0.0&quot;, &quot;tailwindcss&quot;: &quot;^3.3.3&quot;, &quot;typescript&quot;: &quot;^5.0.2&quot;, &quot;vite&quot;: &quot;^4.4.0&quot; } } </code></pre> <p><strong>App.tsx</strong></p> <pre><code>import { useState, 
useEffect } from 'react' import { io } from 'socket.io-client' // &quot;undefined&quot; means the URL will be computed from the `window.location` object const URL = import.meta.env.MODE === 'production' ? undefined : 'http://localhost:5000' const socket = io(URL, { cors: { origin: 'http://localhost:5173', }, }) export default function App() { const [isConnected, setIsConnected] = useState(socket.connected) useEffect(() =&gt; { socket.on('connect', (data) =&gt; { console.log(data) setIsConnected(true) }) socket.on('disconnect', (data) =&gt; { console.log(data) setIsConnected(false) }) return () =&gt; { socket.disconnect() } }, []) return ( &lt;&gt; &lt;p&gt;WebSocket: {'' + isConnected.toString()}&lt;/p&gt; &lt;/&gt; ) } </code></pre>
<python><reactjs><flask><socket.io><flask-socketio>
2023-08-02 13:11:21
1
788
Simon1
76,820,162
969,483
Valohai local execution
<p>I'm running Python scripts with Valohai, using input files, as described here: <a href="https://help.valohai.com/hc/en-us/articles/4419897109777-Download-data-to-your-job" rel="nofollow noreferrer">Download data</a></p> <p>The usual command to create an execution and run it in a Docker container in the cloud works fine:</p> <pre><code>vh exec run my_step --adhoc </code></pre> <p>But how can I execute the script locally now? The function <code>valohai.inputs('dataset').path()</code> returns <code>None</code> when I run it directly in my local venv. I would expect it to use the valohai.yaml. On the page <a href="https://help.valohai.com/hc/en-us/articles/4421611364241-valohai-inputs-" rel="nofollow noreferrer">valohai inputs()</a> they state that <code>valohai-utils</code> supports</p> <blockquote> <p>Downloading the inputs for local runs</p> </blockquote>
<python>
2023-08-02 13:02:25
1
14,330
Christian
76,820,064
15,744,300
How to reference types from external C libraries (e.g. pygame) in Cython?
<p>I'm writing a game using the pygame library and I'm trying to speed it up using Cython. I have made some progress, but now the main performance bottlenecks (based on the HTML annotation file) are related to the interaction with pygame methods like <code>surface.subsurface()</code> and <code>pygame.transform.scale()</code>. As a first step, I would like to declare my pygame surface variables with the appropriate C type. Following the <a href="https://cython.readthedocs.io/en/latest/src/userguide/external_C_code.html" rel="nofollow noreferrer">Cython documentation</a>, I added:</p> <pre class="lang-c prettyprint-override"><code>cdef extern from &quot;src_c/include/_pygame.h&quot;:
    ctypedef struct pgSurfaceObject
</code></pre> <p>For this to work, I downloaded and symlinked the pygame source code, which includes the following in <a href="https://github.com/pygame/pygame/blob/54fcf0551a99ee71b64e0296531c3b4a1da15a82/src_c/include/_pygame.h#L486" rel="nofollow noreferrer">_pygame.h</a>:</p> <pre class="lang-c prettyprint-override"><code>typedef struct {
    PyObject_HEAD
    struct SDL_Surface *surf;
    int owner;
    struct pgSubSurface_Data *subsurface; /* ptr to subsurface data (if a
                                           * subsurface) */
    PyObject *weakreflist;
    PyObject *locklist;
    PyObject *dependency;
} pgSurfaceObject;
</code></pre> <p>When compiling my Cython module (called <code>raycasting.pyx</code>), I get the following output:</p> <pre><code>$ python setup.py build_ext --inplace
Compiling raycasting.pyx because it changed.
[1/1] Cythonizing raycasting.pyx
running build_ext
building 'raycasting' extension
gcc -pthread -B /home/user/software/miniconda3/envs/pygame/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/user/software/miniconda3/envs/pygame/include -fPIC -O2 -isystem /home/user/software/miniconda3/envs/pygame/include -fPIC -I/home/user/software/miniconda3/envs/pygame/include/python3.11 -c raycasting.c -o build/temp.linux-x86_64-cpython-311/raycasting.o
In file included from raycasting.c:767:0:
src_c/include/_pygame.h:385:19: error: unknown type name ‘SDL_Rect’
     PyObject_HEAD SDL_Rect r;
                   ^~~~~~~~
src_c/include/_pygame.h:412:5: error: unknown type name ‘SDL_Joystick’
     SDL_Joystick *joy;
     ^~~~~~~~~~~~
src_c/include/_pygame.h:450:5: error: unknown type name ‘SDL_PixelFormat’
     SDL_PixelFormat *vfmt;
     ^~~~~~~~~~~~~~~
src_c/include/_pygame.h:451:5: error: unknown type name ‘SDL_PixelFormat’
     SDL_PixelFormat vfmt_data;
     ^~~~~~~~~~~~~~~
error: command '/usr/bin/gcc' failed with exit code 1
</code></pre> <p>Do I have to add SDL as a dependency? I just want to tell Cython about the type, which is presumably available through the compiled pygame package (which I have installed via Conda). Is there an easier way to reference the header files, e.g. via my <code>setup.py</code>?</p> <p>My <code>setup.py</code> looks like this:</p> <pre class="lang-py prettyprint-override"><code>from setuptools import setup
from Cython.Build import cythonize
import Cython.Compiler.Options

Cython.Compiler.Options.annotate = True

setup(
    ext_modules=cythonize(
        'raycasting.pyx',
        annotate=True,
        language_level=3),
)
</code></pre>
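The <code>unknown type name ‘SDL_Rect’</code> errors suggest that pygame's <code>_pygame.h</code> expects the SDL2 headers to already be on the include path. One way to address this is to add SDL2's header directory to the extension's compile flags — a sketch of a modified <code>setup.py</code>; the fallback include path is an assumption (query the real flags on your system with <code>sdl2-config --cflags</code> or <code>pkg-config --cflags sdl2</code>):

```python
# Sketch: add SDL2's compiler flags so src_c/include/_pygame.h can find
# SDL_Rect, SDL_Surface, etc. The fallback path below is an assumption.
import subprocess

from setuptools import setup, Extension
from Cython.Build import cythonize
import Cython.Compiler.Options

Cython.Compiler.Options.annotate = True

# Ask sdl2-config for the compiler flags; fall back to a common default.
try:
    sdl_cflags = subprocess.check_output(
        ['sdl2-config', '--cflags'], text=True).split()
except (OSError, subprocess.CalledProcessError):
    sdl_cflags = ['-I/usr/include/SDL2']

ext = Extension(
    'raycasting',
    sources=['raycasting.pyx'],
    extra_compile_args=sdl_cflags,
)

setup(
    ext_modules=cythonize([ext], annotate=True, language_level=3),
)
```

With Conda, the headers may instead live under the environment prefix (e.g. <code>$CONDA_PREFIX/include/SDL2</code>), so the exact path can differ per setup.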
<python><cython>
2023-08-02 12:50:02
1
3,647
Jan Wilamowski
76,819,970
13,734,323
How to make a chatbot created with Langchain that has its own custom data also have access to the internet?
<p>I'm building a chatbot with Langchain. It is supposed to look first at the custom data and then also search the internet to find the best answer.</p> <p>I've searched everywhere (especially the Langchain docs) but haven't come to a solution. I know I'm pretty close, but somehow I can't make it work. The error I usually get is: &quot;The text does not provide information on...&quot; (followed by the question I asked the chatbot).</p> <p>Here is the code:</p> <pre><code>from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
    UnstructuredWordDocumentLoader,
    TextLoader,
    UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv

sys.path.append('../..')
_ = load_dotenv(find_dotenv())  # read local .env file

google_api_key = os.environ.get(&quot;GOOGLE_API_KEY&quot;)
google_cse_id = os.environ.get(&quot;GOOGLE_CSE_ID&quot;)

# Initialize OpenAI API key
openai.api_key = os.environ['OPENAI_API_KEY']

# Initialize Langchain environment
os.environ[&quot;LANGCHAIN_TRACING_V2&quot;] = &quot;true&quot;
os.environ[&quot;LANGCHAIN_ENDPOINT&quot;] = &quot;https://api.langchain.plus&quot;
os.environ[&quot;LANGCHAIN_API_KEY&quot;] = os.environ['LANGCHAIN_API_KEY']
os.environ[&quot;GOOGLE_API_KEY&quot;] = google_api_key
os.environ[&quot;GOOGLE_CSE_ID&quot;] = google_cse_id

# Replace with the actual folder paths
folder_path_docx = &quot;DB\\DB VARIADO\\DOCS&quot;
folder_path_txt = &quot;DB\\BLOG-POSTS&quot;
folder_path_pptx_1 = &quot;DB\\PPT JUNIO&quot;
folder_path_pptx_2 = &quot;DB\\DB VARIADO\\PPTX&quot;

# Create a list to store the loaded content
loaded_content = []

# Load and process DOCX files
for file in os.listdir(folder_path_docx):
    if file.endswith(&quot;.docx&quot;):
        file_path = os.path.join(folder_path_docx, file)
        loader = UnstructuredWordDocumentLoader(file_path)
        docx = loader.load()
        loaded_content.extend(docx)

# Load and process TXT files
for file in os.listdir(folder_path_txt):
    if file.endswith(&quot;.txt&quot;):
        file_path = os.path.join(folder_path_txt, file)
        loader = TextLoader(file_path, encoding='utf-8')
        text = loader.load()
        loaded_content.extend(text)

# Load and process PPTX files from folder 1
for file in os.listdir(folder_path_pptx_1):
    if file.endswith(&quot;.pptx&quot;):
        file_path = os.path.join(folder_path_pptx_1, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_1 = loader.load()
        loaded_content.extend(slides_1)

# Load and process PPTX files from folder 2
for file in os.listdir(folder_path_pptx_2):
    if file.endswith(&quot;.pptx&quot;):
        file_path = os.path.join(folder_path_pptx_2, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_2 = loader.load()
        loaded_content.extend(slides_2)

# Initialize OpenAI Embeddings
embedding = OpenAIEmbeddings()

# Create embeddings for loaded content
embeddings_content = []
for one_loaded_content in loaded_content:
    embedding_content = embedding.embed_query(one_loaded_content.page_content)
    embeddings_content.append(embedding_content)

db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type=&quot;similarity&quot;, search_kwargs={&quot;k&quot;: 3})

search = GoogleSearchAPIWrapper()

def custom_search(query):
    internet_results = search.run(query)
    print(internet_results)
    return internet_results

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model_name=&quot;gpt-4&quot;, temperature=0),
    chain_type=&quot;map_reduce&quot;,
    retriever=retriever,
    return_source_documents=True,
    return_generated_question=True,
)

history = []

while True:
    query = input(&quot;Hola, soy Chatbot. ¿Qué te gustaría saber? &quot;)

    # Use the custom_search function to get internet search results
    internet_results = custom_search(query)

    # Combine the custom data and internet search results
    combined_results = loaded_content + [internet_results]

    # Pass the combined results to the chain
    response = chain(
        {&quot;question&quot;: query, &quot;chat_history&quot;: history, &quot;documents&quot;: combined_results})

    print(response[&quot;answer&quot;])

    history.append((&quot;system&quot;, query))  # user's query
    history.append((&quot;assistant&quot;, response[&quot;answer&quot;]))  # chatbot's response
</code></pre> <p>I'd appreciate any help or suggestions to make it work.</p>
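One common pattern for this kind of problem (not taken from the question itself) is to let an agent choose between the two sources by exposing the retriever and the Google search as separate tools. A rough sketch against the langchain 0.0.x API used in the question; it assumes the `retriever` and `search` objects from the code above already exist, and the tool names and descriptions are made-up illustrations:

```python
# Rough sketch: expose both knowledge sources as tools and let an agent
# decide which to use. Assumes `retriever` and `search` from the question's
# code; tool names/descriptions are illustrative, not a fixed API.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

tools = [
    Tool(
        name="custom_knowledge_base",
        # Concatenate the top retrieved document chunks into one string.
        func=lambda q: "\n".join(
            doc.page_content for doc in retriever.get_relevant_documents(q)),
        description="Use this first to answer questions from the internal documents.",
    ),
    Tool(
        name="google_search",
        func=search.run,
        description="Use this when the internal documents do not contain the answer.",
    ),
]

# A ReAct-style agent picks a tool per step based on the descriptions above.
agent = initialize_agent(
    tools,
    ChatOpenAI(model_name="gpt-4", temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

print(agent.run("¿Qué sabes sobre ...?"))
```

This trades the fixed retrieval chain for tool selection by the model, so whether it answers from the documents or the internet depends on the tool descriptions; it needs OpenAI and Google API keys to actually run.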
<python><openai-api><langchain>
2023-08-02 12:39:19
1
662
Zaesar