Dataset schema (column: type, observed minimum to maximum):
QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (length 15 to 150)
QuestionBody: string (length 40 to 40.3k)
Tags: string (length 8 to 101)
CreationDate: date string (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (length 3 to 30, nullable)
75,827,296
1,797,307
deleting all files in a directory with no extension with python
<p>I am on a Mac and granted permissions to my desktop (which is where the files are). When I run it, it detects the files and the if statement works, but they are not in fact deleted.</p> <pre><code> import glob import os import os.path def testThingy(system_path): for file in glob.glob(system_path+&quot;/*&quot;): print(file, &quot; this will be deleted&quot;) if input(&quot;continue ? &quot;) == &quot;Y&quot;: for file in glob.glob(system_path+&quot;/*&quot;): os.remove(file) </code></pre> <p>Is it just a Mac thing, or is there something missing in my code?</p>
<python><file><delete-file>
2023-03-23 19:32:23
1
735
Kyle Sponable
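A note on the question above: the shown loop globs `system_path + "/*"` and removes everything it matches, not just the extension-less files named in the title. A minimal sketch of the intended filter (the `dry_run` flag is my addition, not part of the question):

```python
import os

def delete_extensionless(dir_path, dry_run=True):
    """Return the extension-less regular files in dir_path; delete them unless dry_run."""
    targets = []
    for name in os.listdir(dir_path):
        full = os.path.join(dir_path, name)
        # os.path.splitext returns an empty extension for names like "README"
        if os.path.isfile(full) and not os.path.splitext(name)[1]:
            targets.append(full)
            if not dry_run:
                os.remove(full)
    return sorted(targets)
```

If files appear undeleted even though `os.remove` raises no error, inspecting the return value of a dry run first helps confirm the right paths are being targeted.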
75,827,178
4,560,996
How to set up API get query parameters in python request
<p>I'm trying to access an API endpoint in Python. Generically, the request needs to look like this. In my real-world example (replacing URL with the actual endpoint), this works as expected. Notice the metanames arg is passed multiple times.</p> <pre><code>res = requests.get('url?someargs&amp;metanames=part1&amp;metanames=part2') </code></pre> <p>However, I'm now trying to do it this way:</p> <pre><code>params = {'metanames':'part1', 'metanames':'part2'} url = &quot;http:url&quot; res = requests.get(url = url, params = params) </code></pre> <p>This is pseudocode, but nonetheless the metanames arg is not sent multiple times as I want it to be, as in the example up top.</p> <p>Can anyone advise on the right way to set that up in the dictionary so it mirrors example 1?</p>
<python>
2023-03-23 19:17:43
1
827
dhc
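For the question above: a Python dict literal can hold each key only once, so `{'metanames':'part1', 'metanames':'part2'}` silently keeps just `part2`. `requests` accepts a list as the value and repeats the key once per element; the resulting encoding is the same as the stdlib's `urlencode(..., doseq=True)`, which can be shown without any network call:

```python
from urllib.parse import urlencode

# use a list value instead of repeating the key in the dict
params = {"someargs": "1", "metanames": ["part1", "part2"]}

# requests.get(url, params=params) encodes this the same way:
query_string = urlencode(params, doseq=True)
```

`requests` also accepts a list of `(key, value)` tuples for `params`, which preserves an exact ordering if that matters.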
75,827,176
5,091,720
pandas not finding the duplicates
<p>I am having a problem with pandas drop_duplicates and duplicated. It is not finding all the duplicates. Do you know of a work around? Below is my code.</p> <pre><code># df_a is all the data combined. dp_r1 was from read_csv &amp; dp_r2 was from read_excel. df_a = pd.concat([df_a, dp_r1, dp_r2], ignore_index=True) df_a[&quot;AC&quot;] = df_a[&quot;AC&quot;].str.strip() df_a[&quot;PS Code&quot;] = df_a[&quot;PS Code&quot;].str.strip() df_a[&quot;UoM&quot;] = df_a[&quot;UoM&quot;].str.strip() df_a[&quot;LTRL&quot;] = df_a[&quot;LTRL&quot;].str.strip() df_a[&quot;UoM&quot;] = df_a[&quot;UoM&quot;].str.upper() # ____________________________ subset = [&quot;AC&quot;, &quot;LTRL&quot;, &quot;PS Code&quot;, &quot;Sample Date&quot;, &quot;Sample Time&quot;, &quot;UoM&quot;] df_a = df_a.drop_duplicates(subset=subset, keep=False) # this print below showed over 600 values that should have been dropped. #print('values of source', df_a.Source.value_counts()) # lets just focus on one 'AC' value. tds = df_a[df_a['AC']=='1930'] tds = tds[tds['PS Code']=='CA3310012_012_012'] print('tds shape 88:', tds.shape) tds['duplicated'] = tds.duplicated(subset=subset, keep=False) print('tds shape 90:', tds.shape) tds.to_csv(path + 'testing/tds_testing.csv', index=False) </code></pre> <p>Results below from above...</p> <blockquote> <p>tds shape 88: (70, 10)</p> <p>tds shape 90: (70, 11)</p> </blockquote> <p>I place just part of the table below.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>PS Code</th> <th>Sampling Point Name</th> <th>Sample Date</th> <th>Sample Time</th> <th>AC</th> <th>Analyte Name</th> <th>Result</th> <th>UoM</th> <th>LTRL</th> <th>Source</th> <th>duplicated</th> </tr> </thead> <tbody> <tr> <td>CA3310012_012_012</td> <td>Machado</td> <td>4/5/2021</td> <td>12:40:00</td> <td>1930</td> <td>TDS</td> <td>688</td> <td>MG/L</td> <td>N</td> <td>WT</td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>MACHADO</td> <td>5/5/2021</td> 
<td>9:04:00</td> <td>1930</td> <td>TDS</td> <td>68</td> <td>MG/L</td> <td>N</td> <td></td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>Machado</td> <td>5/5/2021</td> <td>9:04:00</td> <td>1930</td> <td>TDS</td> <td>680</td> <td>MG/L</td> <td>N</td> <td>WT</td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>MACHADO</td> <td>6/9/2021</td> <td>13:15:00</td> <td>1930</td> <td>TDS</td> <td>77</td> <td>MG/L</td> <td>N</td> <td></td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>Machado</td> <td>6/9/2021</td> <td>13:15:00</td> <td>1930</td> <td>TDS</td> <td>778</td> <td>MG/L</td> <td>N</td> <td>WT</td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>MACHADO</td> <td>7/8/2021</td> <td>10:30:00</td> <td>1930</td> <td>TDS</td> <td>7</td> <td>MG/L</td> <td>N</td> <td></td> <td>FALSE</td> </tr> <tr> <td>CA3310012_012_012</td> <td>Machado</td> <td>7/8/2021</td> <td>10:30:00</td> <td>1930</td> <td>TDS</td> <td>702</td> <td>MG/L</td> <td>N</td> <td>WT</td> <td>FALSE</td> </tr> </tbody> </table> </div> <p>It should have marked the above as all but first TRUE right? lets try some more code below.</p> <pre><code>tds2 = pd.read_csv(path + 'testing/tds_testing.csv', parse_dates=['Sample Date']) tds2 = tds2.drop_duplicates(subset=subset, keep=False) print('tds2 shape 94:',tds2.shape) </code></pre> <blockquote> <p>tds2 shape 94: (64, 11)</p> </blockquote>
<python><pandas><duplicates>
2023-03-23 19:17:30
1
2,363
Shane S
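One common reason `duplicated` appears to "miss" rows like those in the question above is that the compared values differ invisibly: trailing whitespace, case, or a column holding a mixture of strings and timestamps (plausible here, since the data comes from both `read_csv` and `read_excel`). A small sketch on toy data (mine, not the questioner's) showing normalization making the duplicates visible:

```python
import pandas as pd

df = pd.DataFrame({
    "AC": ["1930", "1930 "],                                  # trailing space
    "Sample Date": ["4/5/2021", pd.Timestamp("2021-04-05")],  # str vs Timestamp
})

# before normalization, neither row is flagged as a duplicate
before = df.duplicated(subset=["AC", "Sample Date"], keep=False)

# strip strings and coerce the date column to a single dtype
df["AC"] = df["AC"].str.strip()
df["Sample Date"] = pd.to_datetime(df["Sample Date"])
after = df.duplicated(subset=["AC", "Sample Date"], keep=False)
```

Applying the same dtype coercion (especially `pd.to_datetime` on "Sample Date" and a `.str.strip()` on "Sample Time") before `drop_duplicates` is worth trying on the real data.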
75,827,168
1,113,159
ffmpeg crashed if run as subprocess from nicegui
<p>I need to trigger a long ffmpeg process by nicegui button. I have created a code :</p> <pre class="lang-py prettyprint-override"><code>from concurrent.futures import ProcessPoolExecutor import shlex, subprocess import argparse import asyncio from nicegui import ui, app def run_cmd(): cmd = &quot;ffmpeg -y -i data/video.mp4 -acodec libmp3lame -ab 128000 data/video.mp3&quot; subprocess.call(shlex.split(cmd)) pool = ProcessPoolExecutor() async def async_run(): loop = asyncio.get_running_loop() await loop.run_in_executor(pool, run_cmd) args = argparse.ArgumentParser() args.add_argument(&quot;--webui&quot;, action=&quot;store_true&quot;) args = args.parse_args() if args.webui: ui.button('Translate', on_click=async_run) app.on_shutdown(pool.shutdown) ui.run() else: run_cmd() </code></pre> <p>ffmpeg works well if run just from python. But when I run it from nicegui (<code>python3 ./main.py --webui</code>) ffmpeg failed or crashed:</p> <pre><code>... Error while decoding stream #0:1: Invalid argument [aac @ 0x555e54100440] Sample rate index in program config element does not match the sample rate index configured by the container. [aac @ 0x555e54100440] Inconsistent channel configuration. [aac @ 0x555e54100440] get_buffer() failed ... ac @ 0x555e54100440] Error decoding AAC frame header. Error while decoding stream #0:1: Error number -50531338 occurred [aac @ 0x555e54100440] channel element 2.8 is not allocated Error while decoding stream #0:1: Invalid data found when processing input [aac @ 0x555e54100440] Pulse data corrupt or invalid. Error while decoding stream #0:1: Invalid data found when processing input ffmpeg: psymodel.c:576: calc_energy: Assertion `el &gt;= 0' failed. ... </code></pre> <p>What am I doing wrong? Or maybe there is a more straightforward pattern to start long-term background process with nicegui?</p>
<python><ffmpeg><concurrency><subprocess><nicegui>
2023-03-23 19:16:28
0
645
user1113159
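Independent of the nicegui specifics above, a long-running CLI can be started from async code with `asyncio.create_subprocess_exec`, which avoids both blocking the event loop and the forked-process state a `ProcessPoolExecutor` worker inherits. A sketch (using `echo` as a stand-in for the ffmpeg command line; whether this resolves the specific AAC assertion failure is untested):

```python
import asyncio

async def run_cli(*argv):
    # spawn the child process directly; no shell, no process pool
    proc = await asyncio.create_subprocess_exec(
        *argv,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    return proc.returncode, out

returncode, out = asyncio.run(run_cli("echo", "done"))
```

In the question's setup, the button handler could `await` such a coroutine instead of dispatching to the pool.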
75,827,156
6,679,011
How to change all the keys of a dictionary?
<p>I have a dictionary like this <code>{'lastname':'John', 'fistname':'Doe', 'id':'xxxxx'}</code> and I would like to add a prefix to all keys. The outcome should look like <code>{'contact_lastname':'John', 'contact_fistname':'Doe', 'contact_id':'xxxxx'}</code></p> <p>I tried to achieve this with a lambda function, but it did not work.</p> <pre><code>original = {'lastname':'John', 'fistname':'Doe', 'id':'xxxxx'} modified = {(lambda k: 'contact_'+k) :v for k,v in original} </code></pre> <p>But it gives me an error. Any suggestions?</p>
<python><python-3.x><dictionary>
2023-03-23 19:14:12
2
469
Yang L
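Two fixes to the comprehension in the question above: iterate `original.items()` (iterating a dict directly yields only keys, so `for k, v in original` fails to unpack), and build the new key inline rather than using a lambda object as the key:

```python
original = {'lastname': 'John', 'fistname': 'Doe', 'id': 'xxxxx'}

# .items() yields (key, value) pairs; the prefixed key is computed directly
modified = {'contact_' + k: v for k, v in original.items()}
```

(The `'fistname'` spelling is kept as-is from the question's data.)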
75,827,088
4,889,550
Reformat/pivot pandas dataframe
<p>For each row, I want the row's index appended to each column name (column_index), reshaping the frame from x rows * y columns to 1 row * (x*y) columns.</p> <pre><code>import pandas as pd df = pd.DataFrame(data=[['Jon', 21, 1.77,160],['Jane',44,1.6,130]],columns=['name','age', 'height','weight']) want = pd.DataFrame(data=[['Jon', 21, 1.77,160,'Jane',44,1.6,130]],columns=['name_0','age_0', 'height_0','weight_0','name_1','age_1', 'height_1','weight_1']) # original df name age height weight 0 Jon 21 1.77 160 1 Jane 44 1.60 130 # desired df - want name_0 age_0 height_0 weight_0 name_1 age_1 height_1 weight_1 0 Jon 21 1.77 160 Jane 44 1.6 130 </code></pre> <p>I tried <code>df.unstack().to_frame().T</code> and while it reduces the rows to one, it creates a multiindex, which is not ideal:</p> <pre><code> name age height weight 0 1 0 1 0 1 0 1 0 Jon Jane 21 44 1.77 1.6 160 130 </code></pre> <p>I don't think a pivot table will work here.</p>
<python><pandas><dataframe>
2023-03-23 19:05:21
3
516
noblerthanoedipus
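One sketch for the reshape asked about above: `stack()` (rather than `unstack()`) visits each row's columns in row-major order, so flattening its MultiIndex into `name_0, age_0, …` strings gives the wanted single-row frame without a MultiIndex:

```python
import pandas as pd

df = pd.DataFrame(
    data=[['Jon', 21, 1.77, 160], ['Jane', 44, 1.6, 130]],
    columns=['name', 'age', 'height', 'weight'],
)

s = df.stack()                                   # index is (row, column) pairs
s.index = [f"{col}_{row}" for row, col in s.index]  # flatten to strings
want = s.to_frame().T                            # one row, flat columns
```

Note the values all become `object` dtype in the stacked Series, since strings and numbers share one column; that is unavoidable for a single mixed row.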
75,827,086
1,653,273
Convert pandas dataframe with dictionary objects into polars dataframe with object type
<p>I have a pandas dataframe with a column of dictionaries. I want to convert this to a polars dataframe with dtype <code>polars.Object</code>, which apparently wraps arbitrary Python objects. I am unable to figure out how to do this.</p> <p>Consider this code:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({ &quot;the_column&quot;: [{ &quot;key&quot; : 123 }, { &quot;foo&quot; : 456 }, { &quot;bar&quot; : 789 }]}) </code></pre> <pre><code> the_column 0 {'key': 123} 1 {'foo': 456} 2 {'bar': 789} </code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pl.from_pandas(df) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ the_column β”‚ β”‚ --- β”‚ β”‚ struct[3] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {null,null,123} β”‚ β”‚ {null,456,null} β”‚ β”‚ {789,null,null} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>As you can see, by default, polars tries to convert the dictionaries to arrow structs. This is not what I want, since the keys are not the same for each object. I want them to stay as Python objects. The <code>schema_overrides</code> feature does something, but not what I want either:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pl.from_pandas(df, schema_overrides = {'the_column': pl.Object }) InvalidOperationError: casting from Int64 to Unknown not supported </code></pre> <p>How can I accomplish what I want here?</p>
<python><dataframe><python-polars>
2023-03-23 19:04:54
1
801
GrantS
75,826,926
7,350,565
Why does [1].extend(my_list) return nothing
<p>When I type <code>&gt;&gt;&gt; [1].extend([2, 3])</code></p> <p>I expected Python to return <code>&gt;&gt;&gt; [1, 2, 3]</code></p> <p>Why does it return <code>&gt;&gt;&gt;</code>, i.e. nothing?</p>
<python>
2023-03-23 18:46:20
1
420
Yirmi
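The answer to the question above: `list.extend` mutates the list in place and, like most mutating list methods, returns `None`, which the interactive prompt does not print. The extended `[1]` is also immediately discarded because nothing references it. To get a new list back as an expression, use `+`:

```python
lst = [1]
result = lst.extend([2, 3])   # extends lst in place, returns None

combined = [1] + [2, 3]       # expression that yields a new list
```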
75,826,859
1,506,850
Binary focal loss in pytorch
<p>I am trying to implement this: <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02002.pdf</a> eq. 4. Many publicly available implementations are multi-class, while my problem is binary.</p> <p>I have tried</p> <pre><code>loss = -((1-p) ** gamma) * torch.log (p) * target </code></pre> <p>where <code>p</code> is my sigmoid output, <code>target</code> my binary label and <code>gamma</code> a tuning parameter, as in the original paper. However, this does not penalize cases where <code>target == 0</code> and <code>p</code> is large, which I suspect has to do with the original multi-class formulation. I have then tried</p> <pre><code>loss = ((1-p) ** gamma) * torch.log (p) * target + (p) ** gamma * torch.log (1-p) * (1-target) </code></pre> <p>However, the loss just stalls on a dataset where BCELoss was so far performing well.</p> <p>What's a simple, correct implementation of focal loss in the binary case?</p>
<python><pytorch><loss-function>
2023-03-23 18:38:02
0
5,397
00__00__00
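A likely culprit in the question above: the second attempt is missing the leading minus sign. Both log terms are non-positive, so without the negation the optimizer is pushed in the wrong direction, which matches the "stalling" symptom. A framework-free sketch of the binary focal loss (eq. 4 of the linked paper), written for a single probability; the PyTorch version is the same expression on tensors:

```python
import math

def binary_focal_loss(p, target, gamma=2.0, eps=1e-7):
    # clamp to avoid log(0); note the overall minus sign
    p = min(max(p, eps), 1.0 - eps)
    return -(
        ((1.0 - p) ** gamma) * target * math.log(p)
        + (p ** gamma) * (1.0 - target) * math.log(1.0 - p)
    )
```

A useful sanity check: with `gamma=0` the expression reduces exactly to binary cross-entropy, and a confident wrong prediction costs more than an unconfident one.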
75,826,794
4,538,768
When QuerySet is evaluated when get_queryset is overridden
<p>Have not understand very well how QS is evaluated when overriden. Based on <a href="https://stackoverflow.com/questions/62976955/how-to-override-get-queryset-in-django">leedjango</a> question, we are able to override get_queryset in a view class. In this same example, I have not been able to understand when the get_queryset method is returning the QS evaluated. We know how QS are evaluated in general, as it is explained in <a href="https://stackoverflow.com/questions/26254516/when-does-djangos-values-values-list-get-evaluated">eugene</a> question.</p> <p>For instance, I have the following components: Front End sending GET request to the appropriate URL:</p> <pre><code>//Dashboard.js: export default function Dashboard() { const [tableData, setTableData] = useState([]); const getUserSchools = () =&gt; { getUser({ email: keycloak.email }).then((response) =&gt; { const { data: users } = response; if (users.length) { const appUser = users[0]; axios .get(`/api/school/list/?user_id=${appUser.id}`) .then((data) =&gt; { setTableData(data.data.results); }) .catch((err) =&gt; { /* eslint-disable-next-line */ console.error(err); setTableData([]); }); } }); }; </code></pre> <p>Then, it hits the school-urls.py looking for which view is related to that URL:</p> <pre><code>urlpatterns = [ url(r&quot;^list/$&quot;, SchoolsList.as_view()), ] </code></pre> <p>SchoolsList is the appropriate View which is overriding get_queryset and returns the queryset:</p> <pre><code>#list.py class LargeResultsSetPagination(pagination.PageNumberPagination): page_size = 10 page_size_query_param = 'page_size' max_page_size = 100 class SchoolsList(generics.ListAPIView): pagination_class = LargeResultsSetPagination serializer_class = SchoolGetSerializer def get_queryset(self): queryset = School.objects.all() user_id = self.request.query_params.get('user_id', None) if (user_id is not None): queryset = queryset.filter(user_id=user_id) return queryset.order_by('id') </code></pre> <p>Using the following 
serializer:</p> <pre><code>#get.py class SchoolGetSerializer(serializers.ModelSerializer): nearest_community = SchoolCommunitySerializer(required=False) nearest_post_secondary = SchoolPostSecondarySerializer(required=False) regional_district = SchoolRegionalDistrictSerializer(required=False) class Meta: model = School fields = ( &quot;id&quot;, &quot;address&quot;, &quot;nearest_post_secondary&quot;, &quot;nearest_community&quot;, &quot;regional_district&quot;, ) </code></pre> <p>In the above snippets, when is the queryset evaluated?</p> <p>When sending the GET request from the front end or Postman, I can see the queryset is returned twice: the first time it is returned almost immediately, but the GET request does not finish until the queryset is evaluated and returned again about 50 seconds later (I do not know how it is evaluated here). The query executed by hand takes 750 ms; it is not a heavy query, it just selects a record by id.</p> <p>Would you have any idea why get_queryset is hit and returns immediately without executing the queryset, and then queries the database only after 50 seconds? How could I enforce querying the database right away?</p> <p><strong>Edit:</strong> The QuerySet is not evaluated until the serializer is executed. If the nearest_community and nearest_post_secondary fields involve heavy queries, then the evaluation of the queryset is delayed. If you simplify the serializer, you get the response immediately.</p> <p>Thanks</p>
<python><django><django-models><django-rest-framework><django-views>
2023-03-23 18:29:34
2
1,787
JarochoEngineer
75,826,703
12,691,626
Image padding with reflect (mirror) in Python
<p>I have an image loaded in python as an np.array with shape (7, 7). I need to apply a filtering process to the image. For this I need to apply a kernel that will convolve the 2D image (like a moving window). However, to get an output filtered image with the same shape as the input I need to expand the original image (padding) and fill these pixels as a mirror before filtering.</p> <p>Below I illustrate my problem with an image:</p> <p><a href="https://i.sstatic.net/PvQFq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PvQFq.png" alt="enter image description here" /></a> <em>Note: Padding must be applied to all corners of the image.</em></p> <p>How can I create this padding?</p> <p>Here some example data:</p> <pre><code>img = np.array([[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14], [15, 16, 17, 18, 19, 20, 21], [22, 23, 24, 25, 26, 27, 28], [29, 30, 31, 32, 33, 34, 35], [36, 37, 37, 39, 40, 41, 42], [43, 44, 45, 46, 47, 48, 49]], dtype=np.uint8) plt.imshow(img) </code></pre>
<python><image><padding>
2023-03-23 18:19:14
1
327
sermomon
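For the padding question above, NumPy has this built in: `np.pad` with `mode="reflect"` mirrors without repeating the border pixel, while `mode="symmetric"` mirrors including it; which one matches the illustration depends on the figure, so both are worth trying. A sketch on a 7×7 array like the question's:

```python
import numpy as np

# a 7x7 test image similar to the question's example data
img = np.arange(1, 50, dtype=np.uint8).reshape(7, 7)

padded = np.pad(img, pad_width=1, mode="reflect")        # border pixel not repeated
padded_sym = np.pad(img, pad_width=1, mode="symmetric")  # border pixel repeated
```

With a larger kernel, `pad_width` would be half the kernel size, e.g. `pad_width=1` for a 3×3 moving window.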
75,826,508
1,871,399
Python: Count number of times a row from one dataframe appears in another
<p>I have two dataframes that look like this:</p> <pre><code>df1 = pd.DataFrame({ 'name': ['Cat', 'Cat', 'Cat', 'David'], 'name2': ['Dog', 'Cat', 'Dog', 'David'], }) master_df = pd.DataFrame({ 'name': ['Dog', 'David'], 'name2': ['Cat', 'David'], }) </code></pre> <p>I want to count the number of times each row in master_df appears in df1. So the result should look like this:</p> <pre><code>final_df = pd.DataFrame({ 'Frequency': ['2', '1'], 'name': ['Dog', 'David'], 'name2': ['Cat', 'David'], }) </code></pre> <p>I tried to achieve this using the following code, but it isn't working.</p> <pre><code>merged_df = master_df.merge(df1, on=['name', 'name2'], how='left') counts = merged_df.groupby(['name', 'name2']).size().reset_index(name='count') counts </code></pre> <p>It's hard to properly take into account that the order of the columns doesn't matter; I just want to compare both elements in each row.</p> <p>Can anyone explain to me what I am doing wrong?</p>
<python><pandas>
2023-03-23 17:56:17
3
1,560
Workhorse
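For the counting question above, the merge cannot match `('Dog', 'Cat')` against `('Cat', 'Dog')` because nothing makes the comparison order-insensitive. One sketch: reduce each row to a sorted tuple of the two names and count with `value_counts`:

```python
import pandas as pd

df1 = pd.DataFrame({
    'name':  ['Cat', 'Cat', 'Cat', 'David'],
    'name2': ['Dog', 'Cat', 'Dog', 'David'],
})
master_df = pd.DataFrame({
    'name':  ['Dog', 'David'],
    'name2': ['Cat', 'David'],
})

# order-insensitive key: the two names sorted into a tuple
key_df1 = df1[['name', 'name2']].apply(lambda r: tuple(sorted(r)), axis=1)
key_master = master_df[['name', 'name2']].apply(lambda r: tuple(sorted(r)), axis=1)

final_df = master_df.assign(
    Frequency=key_master.map(key_df1.value_counts()).fillna(0).astype(int)
)
```

This yields integer counts rather than the string `'2'`/`'1'` shown in the desired output, which is presumably what is actually wanted.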
75,826,417
12,520,046
Python, problem with json.loads when trying to work with/upload files via O365/Sharepoint API
<p>I'm using the below code to connect to sharepoint for the purpose of uploading files to a site. all of the auth stuff is fine, for some reason I'm getting a json decode error. Anything obvious here? Assume flow starts with function &quot;getSharepointContext()&quot;. I've gone through numerous other tutorials and questions regarding this, and everything seems ok and consistent.</p> <p>The purpose is to iterate through each file in a local folder and upload to a sharepoint site/folder.</p> <p>Obviously something wrong with the request and json.loads but if anybody who has more experience working with files like this sees something, I would be appreciative!</p> <pre><code>import os from office365.runtime.auth.client_credential import ClientCredential from office365.sharepoint.client_context import ClientContext from logger import Logger def getSharepointContext(): sharepointUrl = 'https://url.sharepoint.com/site/folder/' clientCredentials = ClientCredential('a79f867e-3138-4qwdqwdqwdqwdqwdffe3', '_vp8Q~05OqwdqwdqwdqwdyCb_gQppjen8czP') ctx = ClientContext(sharepointUrl).with_credentials(clientCredentials) targetFolder = ctx.web.get_folder_by_server_relative_url(sharepointUrl) uploadFiles(targetFolder) def uploadFiles(targetFolder): wd = os.getcwd() td = wd+'/toUpload' for fn in os.listdir(td): f = os.path.join(td, fn) if os.path.isfile(f): with open(f, 'rb') as k: Logger.log('Uploading: ' + f, 1, True) file_content = k.read() print(file_content) targetFolder.upload_file(f, file_content).execute_query() if __name__ == &quot;__main__&quot;: getSharepointContext() </code></pre> <p>Output:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\mnowicky\AppData\Roaming\Python\Python311\site-packages\requests\models.py&quot;, line 971, in json return complexjson.loads(self.text, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\__init__.py&quot;, line 346, in loads return _default_decoder.decode(s) 
^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\decoder.py&quot;, line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\Python311\Lib\json\decoder.py&quot;, line 355, in raw_decode raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;D:\projects\odUploader_py\uploadFiles.py&quot;, line 46, in &lt;module&gt; getSharepointContext() File &quot;D:\projects\odUploader_py\uploadFiles.py&quot;, line 17, in getSharepointContext uploadFiles(targetFolder) File &quot;D:\projects\odUploader_py\uploadFiles.py&quot;, line 31, in uploadFiles targetFolder.upload_file(f, file_content).execute_query() File &quot; \client_object.py&quot;, line 44, in execute_query self.context.execute_query() File &quot; \client_runtime_context.py&quot;, line 161, in execute_query self.pending_request().execute_query(qry) File &quot; \client_request.py&quot;, line 57, in execute_query response = self.execute_request_direct(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot; \client_request.py&quot;, line 69, in execute_request_direct self.beforeExecute.notify(request) File &quot; \types\event_handler.py&quot;, line 21, in notify listener(*args, **kwargs) File , line 221, in _build_modification_query self._ensure_form_digest(request) File , line 157, in _ensure_form_digest self._ctx_web_info = self._get_context_web_information() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File , line 170, in _get_context_web_information client.map_json(response.json(), return_value, json_format) ^^^^^^^^^^^^^^^ File &quot;C:models.py&quot;, line 975, in json raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0) </code></pre>
<python><python-3.x><rest><sharepoint>
2023-03-23 17:47:51
1
654
boog
75,826,367
3,668,495
Async redis 'get' RuntimeError 'got Future <Future pending> attached to a different loop'
<p>I have been trying to get async redis to work with an async Flask app, but for the life of me, I can't figure out how to solve one problem. Here is the minimal reproducible code.</p> <pre><code>import asyncio import redis.asyncio as redis from flask import Flask async def run_app(): app = Flask('Flask') pool = redis.ConnectionPool(host='0.0.0.0', port=6379, db=0) r = redis.Redis(connection_pool=pool) @app.route('/get', methods=['GET']) async def get_value(): value = await r.get('key') return value if __name__ == '__main__': asyncio.run(run_app()) </code></pre> <p>Alternate requests fail with the runtime error 'got Future attached to a different loop'. I think it is because Flask spins up the app with two threads, and each thread creates its own event loop. That's why one request behaves as expected and the second request throws the error, and this repeats indefinitely. When I run the app with <code>threaded=False</code> and <code>processes=2</code>, it works every time.</p> <p>Would anybody like to chime in on how I can force all Flask threads to use the same event loop? Thank you.</p>
<python><flask><redis><python-asyncio>
2023-03-23 17:42:59
1
350
atalpha
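Context for the error above: asyncio connection objects are bound to the loop they were created on, so one global `redis.asyncio` pool shared across threads that each run their own loop triggers exactly this failure. One pattern is to create (and cache) a client per running loop; a sketch with a generic factory standing in for the real `lambda: redis.Redis(connection_pool=...)`:

```python
import asyncio

class PerLoopClient:
    """Cache one client object per running event loop (sketch of the pattern;
    in the real app the factory would build a redis.asyncio.Redis lazily,
    so connections belong to the loop that uses them)."""

    def __init__(self, factory):
        self._factory = factory
        self._clients = {}

    def get(self):
        loop = asyncio.get_running_loop()
        if loop not in self._clients:
            self._clients[loop] = self._factory()
        return self._clients[loop]

holder = PerLoopClient(object)

async def use_client():
    return holder.get()

# two separate asyncio.run calls = two loops = two distinct clients
client_a = asyncio.run(use_client())
client_b = asyncio.run(use_client())
```

A production version would also evict entries for closed loops; this sketch only illustrates the keying idea.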
75,825,885
4,921,888
Is it possible to access the event object in an internal AWS lambda extension?
<p>I'm attempting to use an internal lambda extension (aka a wrapper function) to set some environment variables before my lambda function runs, then to use the contents of the <code>event</code> object for some additional logic. I'm currently setting those environment variables no problem, but have no idea if accessing the <code>event</code> is even possible. Documentation around the limitations of Lambda Extensions is pretty sparse, with only a few examples out there. AWS also uses a bootstrap script to start the function's invoke step, which obfuscates where the <code>event</code> object comes from. Does anyone know how I can access the <code>event</code> object in the wrapper? Maybe if I return it from the lambda function?? Here's some code snippets because I know you love this stuff</p> <p>Here is the lambda handler:</p> <pre><code>def handler(event, context): for record in event['Records']: if bool(int(os.environ.get('MY_ENV_VAR', 0))): print('Env var is set, do a special little exit') return event, context print('Doing the normal lambda stuff') return 'All done!' </code></pre> <p>Here is the wrapper script:</p> <pre><code>#!/usr/bin/env python3 import sys import os args = sys.argv os.environ['MY_ENV_VAR'] = 1 # invoke the lambda function os.system(' '.join(args[1:])) # this part is what I'm trying to do, but don't know if it's possible event = AwsLambda.get_event() # this definitely doesn't exist, but you get the point (hopefully) process_event(event) </code></pre> <p>The wrapper script is packaged as a lambda layer and attached to the lambda function through CDK.</p>
<python><amazon-web-services><aws-lambda><aws-lambda-layers><aws-lambda-extensions>
2023-03-23 16:53:47
1
1,087
dslosky
75,825,724
4,502,950
explode columns with multiple entries and then stack all columns
<p>I have already asked this question in this link:</p> <p><a href="https://stackoverflow.com/questions/75788847/stack-and-explode-columns-in-pandas">Stack and explode columns in pandas</a></p> <p>And there is also an answer provided which works for the given dataset. However, it is possible that the two sessions occurring in a day has same number of attendees and thus they are not separated by '&amp;'. In that case the algo ignores the 'Course 2'. For example, this is the dataset</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Session':['session1', 'session2','session3'], 'Course 1':['intro to','advanced','Cv'], 'Course 2':['Computer skill',np.nan,'Write cover letter'], 'Attendees':['24 &amp; 46','23','30']}) </code></pre> <p>Now if I run the algorithm on this:</p> <pre><code>df = df.assign(Attendees = df[&quot;Attendees&quot;].str.split(&quot; &amp; &quot;)) dfn = df.iloc[:0] # Create empty dataframe with same columns as df for didx, d in df.iterrows(): # Explode manually for ci, attend_count in enumerate(d.Attendees): dfr = d.to_frame().T dfr.Attendees = attend_count # Set other courses than &quot;Course &lt;ci+1&gt;&quot; to NaN other_courses = [x for x in d.index if x.startswith(&quot;Course &quot;) and x != f'Course {ci + 1}'] # other_courses = d.index.to_series().filter(regex = f'Course [^{ci+1}]').index # Alternative for c in other_courses: dfr[c] = pd.NA dfn = pd.concat([dfn, dfr]) dfn.set_index([&quot;Session&quot;, &quot;Attendees&quot;]).stack().reset_index().rename({0: &quot;Courses&quot;}, axis = 1) </code></pre> <p>It gives the output as</p> <p><a href="https://i.sstatic.net/TxcWQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TxcWQ.png" alt="enter image description here" /></a></p> <p>Although it should include 'write cover letter' session as well. 
Desired output:</p> <pre><code> Session level_1 Courses Attendees 0 session1 Course 1 intro to 24 1 session1 Course 2 Computer skill 46 2 session2 Course 1 advanced 23 3 session3 Course 1 Cv 30 4 session3 Course 2 Write cover letter 30 </code></pre>
<python><pandas>
2023-03-23 16:39:23
2
693
hyeri
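One sketch for the reshaping asked about above: instead of exploding row-by-row, melt the course columns to long form and then pair the i-th attendee count with "Course i+1", falling back to the single count when only one was given (my reading of the desired output, where session3's lone "30" applies to both courses):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Session': ['session1', 'session2', 'session3'],
    'Course 1': ['intro to', 'advanced', 'Cv'],
    'Course 2': ['Computer skill', np.nan, 'Write cover letter'],
    'Attendees': ['24 & 46', '23', '30'],
})

long = (df.melt(id_vars=['Session', 'Attendees'],
                var_name='level_1', value_name='Courses')
          .dropna(subset=['Courses']))
long['counts'] = long['Attendees'].str.split(' & ')

def pick_count(row):
    # "Course N" pairs with the (N-1)-th count; reuse the only count otherwise
    i = int(row['level_1'].split()[-1]) - 1
    return row['counts'][i] if i < len(row['counts']) else row['counts'][0]

long['Attendees'] = long.apply(pick_count, axis=1)
out = (long.sort_values(['Session', 'level_1'])
           [['Session', 'level_1', 'Courses', 'Attendees']]
           .reset_index(drop=True))
```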
75,825,644
8,497,979
How to insert Variable into PartiqL function within f-String
<p>I want to insert a variable into a PartiQL statement using an f-string, but somehow it fails. The following does work:</p> <pre><code> IndexName = 'GSI_1' stmt = f&quot;SELECT * FROM {table_name}.{IndexName} WHERE SK=? AND begins_with(PK, 'READ#')&quot; </code></pre> <p>The following does not work:</p> <pre><code>IndexName = 'GSI_1' Prefix = 'READ#' stmt = f&quot;SELECT * FROM {table_name}.{IndexName} WHERE SK=? AND begins_with(PK, {Prefix})&quot; </code></pre> <p>The variable &quot;GSI_1&quot; works perfectly fine, but the variable &quot;Prefix&quot; doesn't work since it's inside the brackets of the PartiQL function. Do I have to escape the second variable somehow?</p>
<python>
2023-03-23 16:31:26
1
1,410
aerioeus
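The problem in the question above: inside the f-string, `{Prefix}` expands to `READ#` with no surrounding quotes, so PartiQL sees a bare token instead of a string literal, whereas `{IndexName}` is meant to be an identifier and therefore works. Either quote the expansion or, better, bind it as a second `?` parameter (the `execute_statement` call shown in the comment is boto3's DynamoDB API; `table_name` is a stand-in here):

```python
table_name = 'MyTable'   # stand-in for the question's variable
IndexName = 'GSI_1'
Prefix = 'READ#'

# option 1: make the prefix a quoted PartiQL string literal
stmt_literal = (
    f"SELECT * FROM {table_name}.{IndexName} "
    f"WHERE SK=? AND begins_with(PK, '{Prefix}')"
)

# option 2 (preferred, no escaping concerns): bind it as a parameter
stmt_param = (
    f"SELECT * FROM {table_name}.{IndexName} "
    f"WHERE SK=? AND begins_with(PK, ?)"
)
# e.g. dynamodb.execute_statement(Statement=stmt_param,
#                                 Parameters=[{'S': sk_value}, {'S': Prefix}])
```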
75,825,270
3,333,687
How do I ensure my dataset does not accumulate a lot of small files when running in incremental mode?
<p>If I have a transform that runs frequently in incremental mode I accumulate at least one file per transaction.</p> <p>The result is a dataset that has:</p> <ul> <li>A lot metadata stored from build logs.</li> <li>Small dataset files.</li> </ul> <p>I have been told that small dataset files lead to increased build time and compute cost. I also have no need to accumulate all the build logs.</p> <p>How do I ensure my dataset does not accumulate a lot of small files when running in incremental mode?</p>
<python><palantir-foundry><foundry-code-repositories>
2023-03-23 15:56:05
1
1,781
jka.ne
75,825,249
11,232,438
Gspread dataframe how to hide index column?
<p>I'm trying to print a dataframe to google spreadsheets but it's printing an extra column which is the index of the dataframe:</p> <p><a href="https://i.sstatic.net/uKC4N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uKC4N.png" alt="enter image description here" /></a></p> <p>What I'm doing is just looping a list of products:</p> <pre><code>def ExportToGoogleSheets(products): &quot;&quot;&quot;Shows basic usage of the Sheets API. Prints values from a sample spreadsheet. &quot;&quot;&quot; creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'config.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: # Open an existing spreadsheet gc = gspread.service_account() worksheet = gc.open(SHEET_NAME).sheet1 # Get some columns back out dataframe = get_as_dataframe(worksheet, usecols=[1,2,3,4],header=None, skiprows=1, include_index=False) dataframe = dataframe.dropna(how='all') columns = [&quot;NAME&quot;, &quot;QUANTITY&quot;, &quot;BRAND&quot;, &quot;LOCATION&quot;] dataframe.columns = columns for index, product in enumerate(products): # Modify value dataframe.at[index,&quot;NAME&quot;]=product.name dataframe.at[index,&quot;QUANTITY&quot;]=product.quantity dataframe.at[index,&quot;BRAND&quot;]=product.brand dataframe.at[index,&quot;LOCATION&quot;]=product.location print(dataframe) print(&quot;Uploading data to google sheets&quot;) d2g.upload(dataframe, SAMPLE_SPREADSHEET_ID, SHEET_NAME) except 
HttpError as err: print(err) </code></pre> <p>How can I print the dataframe without the &quot;A&quot; column or the index?</p>
<python><pandas><google-sheets>
2023-03-23 15:53:59
1
745
kuhi
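The extra "A" column in the question above is the DataFrame's index being written out. With the `gspread-dataframe` helper, `set_with_dataframe(worksheet, df, include_index=False)` should suppress it, and `df2gspread`'s `upload` has a `row_names` flag worth checking in its docs (both hedged; verify against your installed versions). With plain `gspread`, one can also build the cell payload explicitly without the index:

```python
import pandas as pd

df = pd.DataFrame({
    "NAME": ["apples"],
    "QUANTITY": [3],
    "BRAND": ["acme"],
    "LOCATION": ["pantry"],
})

# header row + data rows, index omitted; pass to worksheet.update(payload)
payload = [df.columns.tolist()] + df.values.tolist()
```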
75,825,109
420,157
Use information from one argument to create another argument in typemap(in)
<p>Currently I have the following C snippet:</p> <pre class="lang-c prettyprint-override"><code>typedef struct My_Struct { int x; int y; } my_struct_t; void generate_buffer(my_struct_t info, uint8_t* buffer_out); </code></pre> <p>I am trying to come up with a python swig interface to call this function. The output buffer size is in <code>info.x</code>.</p> <p>The following typemap I tried, but fails unfortunately:</p> <pre class="lang-c prettyprint-override"><code>%typemap(in) (my_struct_t info, uint8_t* buffer_out) %{ my_struct_t tmp; int res1 = SWIG_ConvertPtr($input, (void*)&amp;tmp, SWIGTYPE_p_My_Struct, 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), &quot;Converting my_struct_t type to swig&quot;); } $2 = (uint8_t*)malloc(tmp.x); $1 = tmp; %} %typemap(argout) (my_struct_t info, uint8_t* buffer_out) (PyObject* po) %{ po = PyBytes_FromStringAndSize((char*)$2,($1).x); $result = SWIG_Python_AppendOutput($result,po); %} %typemap(freearg) (my_struct_t info, uint8_t* buffer_out) %{ free($2); %} </code></pre> <p>I was expecting the info object to be accessible as is in the C function, also the newly allocated set of bytes to be absorbed and freed in the typemap. The input typemap fails with junk values that I am finding hard to debug.</p>
<python><c><swig>
2023-03-23 15:41:26
1
777
Maverickgugu
75,824,967
13,362,665
Making transparent pixels black
<p>I generated an image with an alpha channel, in python-opencv, the image contains specific masked areas, and the other unmasked area is transparent, and I want to get rid of the transparent pixels and have black pixels instead.</p> <p>Code for the masking:</p> <pre><code>mask1 = np.ones_like(img) mask1 = cv2.circle(mask1, center_coordinates, (int(r) - 50), (255,255,255), -1) mask2 = np.ones_like(img) mask2 = cv2.circle(mask2, center_coordinates, (int(r) + 50), (255,255,255), -1) # subtract masks and make into single channel mask = cv2.subtract(mask2, mask1) result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA) result[:, :, 3] = mask[:,:,0] </code></pre>
<python><opencv>
2023-03-23 15:28:26
0
593
Rami Janini
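For the masking question above, a minimal sketch (assuming `result` is a NumPy BGRA array, as produced by `cv2.cvtColor(..., cv2.COLOR_BGR2BGRA)`): boolean-index on the alpha channel and overwrite the transparent pixels with opaque black.

```python
import numpy as np

# Toy 2x2 BGRA image standing in for `result`: one opaque white pixel,
# the rest fully transparent (alpha == 0).
result = np.zeros((2, 2, 4), dtype=np.uint8)
result[0, 0] = (255, 255, 255, 255)

# Replace every transparent pixel with opaque black.
transparent = result[:, :, 3] == 0
result[transparent] = (0, 0, 0, 255)
```

The same two lines apply unchanged to the full-size image from the question.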
75,824,713
3,286,975
Forcing Google Colab to use another Python version
<p>Well, I'm trying to use Python 3.7.16 on Google Colab, I'm using the following command to install it:</p> <pre><code>!sudo apt-get install python3.7 !sudo add-apt-repository --yes ppa:deadsnakes/ppa !sudo apt-get update !sudo apt-get install python3-pip !sudo apt-get install python3.7-distutils !sudo apt-get install python3-apt !sudo apt-get install python3.7-dev </code></pre> <p>Then, I update the alternatives...</p> <pre><code>!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 !sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1 !sudo update-alternatives --set python3 $(update-alternatives --list python3 | grep python3.7) !sudo update-alternatives --set python $(update-alternatives --list python3 | grep python3.7) </code></pre> <p>I tried both...</p> <p>Then, I check the version:</p> <pre><code>!python -V !python3 -V !which python !which python3 !/usr/local/bin/python -V !/usr/bin/python -V import sys print(sys.version) print(sys.executable) </code></pre> <p>This is prompted:</p> <pre><code>Python 3.7.16 Python 3.7.16 /usr/local/bin/python /usr/bin/python3 Python 3.7.16 Python 3.7.16 3.9.16 (main, Dec 7 2022, 01:11:51) [GCC 9.4.0] /usr/bin/python3 </code></pre> <p>Why the python part:</p> <pre><code>import sys print(sys.version) print(sys.executable) </code></pre> <p>Prints 3.9 version.</p> <p>I need that the same python environment is used when I execute scripts, not only executing the bash script, because I installed pip packages and they are out of context.</p> <p>Also, I cannot wget a python script from outside, because the context is also lost on the next cells...</p> <p>Conda or venv aren't my friends here because I already tried them and they have the same result. They are executed on a environment different from the cell...</p>
<python><google-colaboratory>
2023-03-23 15:06:40
0
2,451
z3nth10n
75,824,630
10,918,680
Downloading multiple csv files with Streamlit
<p>I built a Streamlit app where the end result is two Pandas dataframes that I want the user to download as separate csv files. However, it seems the Download button in Streamlit only allows the downloading of one file. One way of &quot;combining&quot; them is to put them as separate tabs in an Excel file, but that is unacceptable for the ideal workflow.</p> <p>I then created two download buttons, one for each file. However, clicking on the first Download button causes the whole application to be rerun. After some Googling, there doesn't seem to be a way to disable this behavior.</p> <p>I then looked into creating a Zip file, using the 'zipfile' package:</p> <pre><code> with zipfile.ZipFile('my_test.zip', 'x') as csv_zip: csv_zip.writestr(&quot;data1.csv&quot;, pd.DataFrame(data1).to_csv()) csv_zip.writestr(&quot;data2.csv&quot;, pd.DataFrame(data2).to_csv()) with open(&quot;my_test.zip&quot;, &quot;rb&quot;) as file: st.download_button( label = &quot;Download zip&quot;, data = file, file_name = &quot;mydownload.zip&quot;, mime = 'application/zip' ) </code></pre> <p>However, when tested locally, this creates 'my_test.zip' in the same directory where my .py code is. The file persists even after I end the app. My questions:</p> <ol> <li><p>If I deploy the Streamlit app by hosting it somewhere, would 'my_test.zip' still be created somewhere? Where?</p> </li> <li><p>Is there a way to enable the download of a zip file without first creating a zip file in a directory somewhere?</p> </li> </ol>
<python><streamlit><python-zipfile>
2023-03-23 14:59:58
1
425
user173729
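Addressing question 2 above: the archive can be built entirely in memory with `io.BytesIO`, and the resulting bytes passed straight to `st.download_button` (which accepts `bytes`), so nothing is ever written to the server's disk. A sketch with hypothetical dataframes:

```python
import io
import zipfile

import pandas as pd

data1 = pd.DataFrame({"a": [1, 2]})
data2 = pd.DataFrame({"b": [3, 4]})

# Build the zip archive in an in-memory buffer instead of on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as csv_zip:
    csv_zip.writestr("data1.csv", data1.to_csv(index=False))
    csv_zip.writestr("data2.csv", data2.to_csv(index=False))

zip_bytes = buf.getvalue()

# In Streamlit (not run here):
# st.download_button("Download zip", data=zip_bytes,
#                    file_name="mydownload.zip", mime="application/zip")
```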
75,824,621
297,823
How to run sicpy.signal freqz in C# / ASP.NET Core?
<p>I need to run Python source code (.py) with dependencies on <code>numpy</code> and <code>scipy.signal</code> in the ASP.NET Core context. I've found IronPython to be a suitable solution, but it doesn't support these two dependencies (<a href="https://github.com/IronLanguages/ironpython3/issues/355" rel="nofollow noreferrer">GitHub issue #355</a>).</p> <p>So, I decided to automatically generate C# code from the Python one and manually check all build errors. All looks promising, <code>numpy</code> seems to be supported by <a href="https://github.com/SciSharp/Numpy.NET" rel="nofollow noreferrer">Numpy.NET</a>, but my missing puzzle is the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html" rel="nofollow noreferrer"><code>freqz</code></a> method of <code>scipy.signal</code>.</p> <p>Example of usage:</p> <p><code>w, h = signal.freqz(b, a, worN=freq_hz, fs=fs_hz)</code></p> <p>Questions on the <code>freqz</code> function:</p> <ol> <li>Is there any C# fork of the function?</li> <li>Is there any source code of the function, so I can generate C# code from it?</li> <li>I was wondering if I can use MATLAB <a href="https://www.mathworks.com/help/signal/ref/freqz.html" rel="nofollow noreferrer"><code>freqz</code></a> function. Are these two functions equivalent? Is it possible to run that MATLAB function in the C# context?</li> </ol>
<python><c#><numpy><asp.net-core><scipy>
2023-03-23 14:58:43
1
10,440
Dariusz WoΕΊniak
75,824,614
2,805,824
How to insert a specific string between two words stored in a single variable?
<p>I want to insert a string <code>&quot;and&quot;</code> between two words of a string stored in a variable.</p> <p>How can I achieve this in Python?</p> <p>E.g.</p> <p>Input string or input variable:</p> <pre><code>location = 'Location-1 Location-2 ' </code></pre> <p>Expected Output:</p> <pre><code>location = 'Location-1 and Location-2' </code></pre> <p>But I don't want to insert <code>&quot;and&quot;</code> if the variable has single value</p> <pre><code>location = 'Location-1 ' </code></pre> <p>or</p> <pre><code>location = 'Location-2 ' </code></pre>
<python><python-3.x>
2023-03-23 14:58:10
1
1,141
Gaurav Pathak
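A minimal sketch of the requested behaviour: split on whitespace (which also drops the trailing space) and rejoin with `' and '`. With a single word, `join` returns it unchanged, so no special case is needed.

```python
def join_with_and(location: str) -> str:
    """Insert ' and ' between the words of `location`, if there are two."""
    parts = location.split()          # also strips the trailing space
    return " and ".join(parts)

print(join_with_and("Location-1 Location-2 "))  # Location-1 and Location-2
print(join_with_and("Location-1 "))             # Location-1
```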
75,824,594
1,323,992
Python poetry install fails on typed-ast (1.5.4). How to overcome the obstacle and install the package?
<p>I tried to install the package using pip:</p> <pre class="lang-bash prettyprint-override"><code>pip wheel --use-pep517 &quot;typed-ast (==1.5.4)&quot; </code></pre> <p>but it falls in the same place.</p> <p>What's the general approach when you walk into such kind of problems?</p> <p>I've found <a href="https://github.com/pypa/pip/issues/8559" rel="noreferrer">this thread</a>, which seemed to be helpful, but it wasn't</p> <pre class="lang-bash prettyprint-override"><code>(.venv) ➜ src git:(develop) βœ— poetry install Installing dependencies from lock file Warning: poetry.lock is not consistent with pyproject.toml. You may be getting improper dependencies. Run `poetry lock [--no-update]` to fix it. Package operations: 24 installs, 0 updates, 0 removals β€’ Installing typed-ast (1.5.4): Failed ChefBuildError Backend subprocess exited when trying to invoke build_wheel running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-311 creating build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/__init__.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/ast27.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/conversions.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/ast3.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast creating build/lib.linux-x86_64-cpython-311/typed_ast/tests copying ast3/tests/test_basics.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast/tests running build_ext building '_ast27' extension creating build/temp.linux-x86_64-cpython-311 creating build/temp.linux-x86_64-cpython-311/ast27 creating build/temp.linux-x86_64-cpython-311/ast27/Custom creating build/temp.linux-x86_64-cpython-311/ast27/Parser creating build/temp.linux-x86_64-cpython-311/ast27/Python x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -Iast27/Include 
-I/tmp/tmpwhqt9erw/.venv/include -I/usr/include/python3.11 -c ast27/Custom/typed_ast.c -o build/temp.linux-x86_64-cpython-311/ast27/Custom/typed_ast.o ast27/Custom/typed_ast.c:1:10: fatal error: Python.h: No such file or directory 1 | #include &quot;Python.h&quot; | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 at ~/c/mrnet/mrnet_backend/.venv/lib/python3.11/site-packages/poetry/installation/chef.py:152 in _prepare 148β”‚ 149β”‚ error = ChefBuildError(&quot;\n\n&quot;.join(message_parts)) 150β”‚ 151β”‚ if error is not None: β†’ 152β”‚ raise error from None 153β”‚ 154β”‚ return path 155β”‚ 156β”‚ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -&gt; Path: Note: This error originates from the build backend, and is likely not a problem with poetry but with typed-ast (1.5.4) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 &quot;typed-ast (==1.5.4)&quot;'. (.venv) ➜ src git:(develop) βœ— pip wheel --use-pep517 &quot;typed-ast (==1.5.4)&quot; Collecting typed-ast==1.5.4 Using cached typed_ast-1.5.4.tar.gz (252 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: typed-ast Building wheel for typed-ast (pyproject.toml) ... error error: subprocess-exited-with-error Γ— Building wheel for typed-ast (pyproject.toml) did not run successfully. 
β”‚ exit code: 1 ╰─&gt; [25 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-311 creating build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/__init__.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/ast27.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/conversions.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast copying typed_ast/ast3.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast creating build/lib.linux-x86_64-cpython-311/typed_ast/tests copying ast3/tests/test_basics.py -&gt; build/lib.linux-x86_64-cpython-311/typed_ast/tests running build_ext building '_ast27' extension creating build/temp.linux-x86_64-cpython-311 creating build/temp.linux-x86_64-cpython-311/ast27 creating build/temp.linux-x86_64-cpython-311/ast27/Custom creating build/temp.linux-x86_64-cpython-311/ast27/Parser creating build/temp.linux-x86_64-cpython-311/ast27/Python x86_64-linux-gnu-gcc -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -Iast27/Include -I/home/yevt/c/mrnet/mrnet_backend/.venv/include -I/usr/include/python3.11 -c ast27/Custom/typed_ast.c -o build/temp.linux-x86_64-cpython-311/ast27/Custom/typed_ast.o ast27/Custom/typed_ast.c:1:10: fatal error: Python.h: No such file or directory 1 | #include &quot;Python.h&quot; | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for typed-ast Failed to build typed-ast ERROR: Failed to build one or more wheels (.venv) ➜ src git:(develop) βœ— pip wheel --use-pep517 &quot;typed-ast (==1.5.4)&quot; </code></pre> <ul> <li>ubuntu 22.04</li> <li>gcc 11.3.0</li> <li>python 3.11</li> </ul>
<python><setuptools><python-poetry><pyproject.toml><python-3.11>
2023-03-23 14:56:16
1
846
yevt
75,824,511
8,595,891
Creating a common named class dependency in FastAPI
<p>I am creating a FastAPI server where an APIRouter has lots of endpoints, all of which accept the same request body. Is there any way I can omit that parameter from every function in that router and specify it in a single place?</p> <p>Consider an example where I am building a simple calculator server which accepts two numbers, num1 and num2, along with a request body which is JSON and has a bunch of information like the version of the request, the sender of the request, etc. I want num1 and num2 in the same variable but the JSON body in a <code>requests_info</code> field.</p> <p>Here is my <code>main.py</code>:</p> <pre><code>from fastapi import FastAPI from calculator import calculate # FastAPI app object fastapi = FastAPI() # User fulfillment routes fastapi.include_router(calculate.router, prefix=&quot;/calculator&quot;, tags=['calculator']) </code></pre> <p>My <code>calculator.py</code> looks like:</p> <pre><code>from fastapi import APIRouter, Request router = APIRouter() @router.post(&quot;/add&quot;) async def add(num1: int, num2: int, request: Request) -&gt; int: req_info = await request.json() # here I don't want to put request: Request every time but want to specify in dependency and want to access with name request return num1 + num2 @router.post(&quot;/multiply&quot;) async def mul(num1: int, num2: int) -&gt; int: req_info = await request.json() return num1 * num2 @router.post(&quot;/subtract&quot;) def sub(num1: int, num2: int) -&gt; int: return num1 - num2 </code></pre> <p>In this example I have specified <code>request: Request</code> in <code>add</code> and not in the other methods. But I want to access <code>request</code> without specifying it there. Is there any way I can do it using a dependency?</p> <p>I know we can use <a href="https://fastapi.tiangolo.com/tutorial/dependencies/classes-as-dependencies/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/dependencies/classes-as-dependencies/</a> but how do I create a named dependency here?</p>
<python><fastapi>
2023-03-23 14:48:17
0
1,362
Pranjal Doshi
75,824,420
8,179,672
Pytest doesn't see a data file from the source folder
<p>I have the following folder structure:</p> <p><a href="https://i.sstatic.net/OLrXO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OLrXO.jpg" alt="enter image description here" /></a></p> <p><code>main.py</code></p> <pre><code>import pandas as pd data = pd.read_csv('data.csv') def add_num_to_env_num(): print(&quot;success&quot;) </code></pre> <p><code>test_main.py</code></p> <pre><code>import sys sys.path.insert(1, '/src') import src.main as main def test_add_num_to_env_num(): main.add_num_to_env_num() </code></pre> <p>When I run command <code>python -m pytest</code> I get the following error:</p> <p><code>ERROR tests/test_main.py - FileNotFoundError: [Errno 2] No such file or directory: 'data.csv' </code></p> <p>I hoped that the line: <code>sys.path.insert(1, '/src')</code> will solve the issue but it didn't. How can I solve the issue?</p> <p>EDIT: for the real use-case I cannot use absolute paths.</p>
<python>
2023-03-23 14:40:51
1
739
Roberto
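The cause above: `pd.read_csv('data.csv')` resolves against the directory pytest is launched from, not against the directory containing `main.py` (and `sys.path` only affects imports, not file paths). One robust fix, sketched below, is to anchor the path to the module's own location via `__file__`:

```python
from pathlib import Path

# Directory that contains this module, regardless of the cwd
# that pytest (or anything else) happens to run from.
HERE = Path(__file__).resolve().parent

def data_path(filename: str = "data.csv") -> Path:
    return HERE / filename

# In the real main.py:
# data = pd.read_csv(data_path())
```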
75,824,264
13,039,962
Issues when plotting with axes AttributeError: 'NoneType' object has no attribute 'rowspan'
<p>I have this df:</p> <pre><code> DATE CODE PP 0 1964-01-01 109014 0.5 1 1964-01-02 109014 1.1 2 1964-01-03 109014 2 3 1964-01-04 109014 NaN 4 1964-01-05 109014 3 ... ... ... 21616 2023-03-08 109014 0.0 21617 2023-03-09 109014 2.3 21618 2023-03-10 109014 2.9 21619 2023-03-11 109014 5.1 21620 2023-03-12 109014 5.8 </code></pre> <p>I want to plot 7 graphic bar figures in the same image so i did this code:</p> <pre><code>#______________________________________________________________________________ fig1 = plt.figure('Time Series', figsize=(30,15), dpi=400) ax1 = fig1.add_axes([0.15, 0.86, 0.75, 0.10]) ax2 = fig1.add_axes([0.15, 0.74, 0.75, 0.10]) ax3 = fig1.add_axes([0.15, 0.62, 0.75, 0.10]) ax4 = fig1.add_axes([0.15, 0.50, 0.75, 0.10]) ax5 = fig1.add_axes([0.15, 0.38, 0.75, 0.10]) ax6 = fig1.add_axes([0.15, 0.26, 0.75, 0.10]) ax7 = fig1.add_axes([0.15, 0.14, 0.75, 0.10]) axes=[ax1,ax2,ax3,ax4,ax5,ax6,ax7] codes=list(df['CODE'].drop_duplicates()) for axe,code in zip(axes,codes): print(axe,code) data=df.loc[df['CODE']==code] data.plot('DATE',&quot;PP&quot;,kind='area', color=colors, alpha=1, linestyle='None', ax=axe) </code></pre> <p>when i run i got this error: <code>AttributeError: 'NoneType' object has no attribute 'rowspan'</code></p> <p>And my code worked very well but since I updated everything with &quot;conda upgrade all&quot; now I constantly get that message in all my graphs when I plot graphs defining axes.</p> <p>Do you know how to solve this? or why this error constantly appear.</p> <p>Thanks in advance.</p>
<python><pandas><matplotlib>
2023-03-23 14:24:34
2
523
Javier
75,824,165
528,369
Pandas error "Incompatible indexer with DataFrame" when element of DataFrame set to DataFrame
<p>For the code</p> <pre><code>import pandas as pd names = list(&quot;abc&quot;) df = pd.DataFrame(index=names, columns=[&quot;foo&quot;]) print(df) for name in names: print(&quot;name =&quot;, name) # df.loc[name, &quot;foo&quot;] = 123 # code works when line below is replaced by line above df.loc[name, &quot;foo&quot;] = pd.DataFrame(data=[10, 20]) </code></pre> <p>, when I set an element of a dataframe to another dataframe, I get an error</p> <pre><code> df.loc[name, &quot;foo&quot;] = pd.DataFrame(data=[10, 20]) ~~~~~~^^^^^^^^^^^^^ ValueError: Incompatible indexer with DataFrame </code></pre> <p>but get no error when I set the element to a number. Why?</p>
<python><pandas>
2023-03-23 14:15:14
1
2,605
Fortranner
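The error arises because a DataFrame cell is expected to hold a scalar; when the assigned value is itself a DataFrame, `.loc` tries to interpret it as an indexer/alignable object and gives up with "Incompatible indexer". If each name genuinely needs its own DataFrame, a plain dict keyed by name is a simpler container than nesting frames inside cells — a sketch:

```python
import pandas as pd

names = list("abc")

# One small DataFrame per name, held in a dict instead of inside cells.
frames = {name: pd.DataFrame(data=[10, 20]) for name in names}

print(frames["a"].shape)  # (2, 1)
```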
75,824,045
9,989,761
Tensorflow M2 Pro Failure
<p>When I run the following test script for tensorflow</p> <pre><code>import tensorflow as tf cifar = tf.keras.datasets.cifar100 (x_train, y_train), (x_test, y_test) = cifar.load_data() model = tf.keras.applications.ResNet50( include_top=True, weights=None, input_shape=(32, 32, 3), classes=100,) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=&quot;adam&quot;, loss=loss_fn, metrics=[&quot;accuracy&quot;]) model.fit(x_train, y_train, epochs=5, batch_size=4) </code></pre> <p>I obtain the following terminal output:</p> <pre><code>Metal device set to: Apple M2 Pro systemMemory: 16.00 GB maxCacheSize: 5.33 GB 2023-03-23 00:26:32.203361: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2023-03-23 00:26:32.203521: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -&gt; physical PluggableDevice (device: 0, name: METAL, pci bus id: &lt;undefined&gt;) zsh: bus error python3 app/model/tf_verify.py </code></pre>
<python><macos><tensorflow>
2023-03-23 14:04:32
1
364
Josh Purtell
75,823,739
15,724,084
python class method invocation missing 1 required positional argument
<p>I created a class method to ease file reading, but the end result gives me the following error message:</p> <pre><code>TypeError: Read_file.file_reading_through_class() missing 1 required positional argument: 'fileName' </code></pre> <p>My code is below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import sys,os, time,re class Read_file(): def file_reading_through_class(self,fileName): if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'): path_actual = os.getcwd() path_main_folder = path_actual[:-4] path_result = path_main_folder + fileName print('frozen path', os.path.normpath(path_result)) file_to_read = pd.read_json(path_result) return file_to_read else: file_to_read = pd.read_json(fileName) return file_to_read file_read=Read_file.file_reading_through_class('./ConfigurationFile/configFile.csv') </code></pre>
<python><class>
2023-03-23 13:38:09
3
741
xlmaster
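The error above comes from calling the method on the class itself: in `Read_file.file_reading_through_class('./ConfigurationFile/configFile.csv')` the path string is bound to `self`, leaving `fileName` unfilled. Either instantiate the class first, or declare the method a `@staticmethod` if it never uses `self`. A minimal sketch:

```python
class ReadFile:
    def read(self, file_name):
        return f"reading {file_name}"

    @staticmethod
    def read_static(file_name):
        return f"reading {file_name}"

# Option 1: instantiate, then call -- `self` is supplied automatically.
reader = ReadFile()
r1 = reader.read("./ConfigurationFile/configFile.csv")

# Option 2: a @staticmethod needs no instance and no `self` parameter.
r2 = ReadFile.read_static("./ConfigurationFile/configFile.csv")
```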
75,823,656
5,868,293
Create dictionary with pairs from column from pandas dataframe using regex
<p>I have the following dataframe</p> <pre><code>import pandas as pd df = pd.DataFrame({'Original': [92,93,94,95,100,101,102], 'Sub_90': [99,98,99,100,102,101,np.nan], 'Sub_80': [99,98,99,100,102,np.nan,np.nan], 'Gen_90': [99,98,99,100,102,101,101], 'Gen_80': [99,98,99,100,102,101,100]}) </code></pre> <p>I would like to create the following dictionary</p> <pre><code>{ 'Gen_90': 'Original', 'Sub_90': 'Gen_90', 'Gen_80': 'Original', 'Sub_80': 'Gen_80', } </code></pre> <p>using <code>regex</code> (because at my original data I also have <code>Gen_70, Gen_60, ... , Gen_10</code> and <code>Sub_70, Sub_60, ... , Sub_10</code>)</p> <p>So I would like to create pairs of <code>Sub</code> and <code>Gen</code> for the same <code>_number</code> and also pairs or the <code>Original</code> with the <code>Gen</code>s</p> <p>How could I do that ?</p>
<python><pandas><dictionary>
2023-03-23 13:30:27
4
4,512
quant
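A sketch of the pairing logic with a plain regex over the column names (shown with a literal list standing in for `df.columns`): every `Sub_N` maps to `Gen_N`, and every `Gen_N` maps to `'Original'`.

```python
import re

columns = ["Original", "Sub_90", "Sub_80", "Gen_90", "Gen_80"]

pattern = re.compile(r"^(Sub|Gen)_(\d+)$")
mapping = {}
for col in columns:               # in real code: for col in df.columns
    m = pattern.match(col)
    if m:
        prefix, num = m.groups()
        # Sub_N pairs with Gen_N; every Gen_N pairs with Original.
        mapping[col] = f"Gen_{num}" if prefix == "Sub" else "Original"

print(mapping)
```

The same loop handles `Sub_70` … `Sub_10` and `Gen_70` … `Gen_10` without any extra cases.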
75,823,453
3,556,110
How do I monkeypatch a module BEFORE importing it (to override something in its __init__.py)?
<p>I'm using a third party library, <code>floris</code>.</p> <p>In <code>floris/__init__.py</code> they have:</p> <pre class="lang-py prettyprint-override"><code> from . import logging_manager logging_manager._setup_logger() </code></pre> <p>This completely destroys my carefully crafted logging configuration, and does all sorts of things like adding opinionated log formatters onto the root logger.</p> <p>In my code, I do:</p> <pre class="lang-py prettyprint-override"><code> from floris.simulation.turbine import Turbine def run(): do_stuff_with_floris = Turbine() </code></pre> <p>Submitting an upstream fix will happen, but isn't a quick process so to my mind the clearest thing to do is to monkeypatch <code>floris</code>. But the problem is, the function I want to patch gets called <em>in the module import</em>.</p> <pre class="lang-py prettyprint-override"><code> # I can't do class MonkeyLoggingManager: def _setup_logging(self): pass floris.logging_manager = MonkeyLoggingManager() # ^^^^ Because this is not imported yet. from floris.simulation.turbine import Turbine </code></pre> <p>Is it even possible to monkeypatch a module either before, or at the point of, importing it?</p>
<python><import><module><monkeypatching>
2023-03-23 13:11:43
0
5,582
thclark
75,823,367
6,761,231
Sparse matrix multiplication in pytorch
<p>I want to implement the following formula in pytorch in a batch manner:</p> <pre><code>x^T A x </code></pre> <p>where <strong>x</strong> has shape: [BATCH, DIM1] and <strong>A</strong> has shape: [BATCH, DIM1, DIM1]</p> <p>I managed to implement it for the dense matrix A as follows:</p> <p><code>torch.bmm(torch.bmm(x.unsqueeze(1), A), x.unsqueeze(2)).squeeze()</code>.</p> <p>However, now I need to implement it for a SPARSE matrix <strong>A</strong> and I am failing to implement it.</p> <p>The error that I am getting is <code>{RuntimeError}bmm_sparse: Tensor 'mat2' must be dense</code>, which comes from the <code>torch.bmm(x.unsqueeze(1), A)</code> part of the code.</p> <p>In order to reproduce my work you could run this:</p> <pre><code>import torch sparse = True # switch to dense to see the working version batch_size = 10 dim1 = 5 x = torch.rand(batch_size, dim1) A = torch.rand(batch_size, dim1, dim1) if sparse: A = A.to_sparse_coo() xTAx = torch.bmm(torch.bmm(x.unsqueeze(1), A), x.unsqueeze(2)).squeeze() </code></pre> <p>My pytorch version is <code>1.12.1+cu116</code></p>
<python><deep-learning><pytorch><linear-algebra>
2023-03-23 13:03:59
1
303
Artur Pschybysz
75,823,246
8,050,689
Having issue with node-gyp and python
<p>I keep getting the following error when I'm trying to compile my npm Electron project.</p> <p>I feel there's some error with node-gyp and Python.</p> <p><a href="https://i.sstatic.net/atKHE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/atKHE.png" alt="enter image description here" /></a></p> <p>Can you guys please help me with this? How can I fix this issue?</p> <p>Do I need to try to install a different version of Python or something? My currently installed Python version is v3.9.6.</p> <p>OS: latest macOS</p>
<python><electron><node-gyp>
2023-03-23 12:53:22
1
573
Just_Ice
75,823,105
6,284,716
How to deactivate virtualenv in Makefile?
<p>I am currently trying to create a makefile that detects, if a Python venv is active and if so, to deactivate it. So far my attempts have not been successful. Therefore my question, is it even possible to deactivate the current shells venv with make and if so, how?</p> <p>Update: I want to make sure, that devs do not accidentally install poetry directly in their projects venv.</p> <p>My ideas so far:</p> <pre><code>install: # Install poetry and dependencies ifneq (,$(findstring .venv,$(VIRTUAL_ENV))) @echo $(VIRTUAL_ENV) @echo &quot;venv active&quot; # @.$(VIRTUAL_ENV)/bin/activate deactivate @./scripts/deactivate_venv.sh deactivate_venv # @exit &quot;Please deactivate venv before running install command&quot; else @echo &quot;No venv activated&quot; @pip install poetry==1.4.0 @poetry install endif </code></pre> <p>The bash script linked to make</p> <pre><code>#!/usr/bin/env bash deactivate_venv(){ echo $VIRTUAL_ENV source $VIRTUAL_ENV/bin/activate deactivate } &quot;$@&quot; </code></pre>
<python><shell><makefile>
2023-03-23 12:38:56
2
437
lars
75,823,000
13,944,524
Module's __getattr__ is called twice
<p><a href="https://peps.python.org/pep-0562/" rel="nofollow noreferrer">PEP-562</a> introduced <code>__getattr__</code> for modules. While testing I noticed this magic method is called twice when called in this form: <code>from X import Y</code>.</p> <p><strong>file_b.py</strong>:</p> <pre class="lang-py prettyprint-override"><code>def __getattr__(name): print(&quot;__getattr__ called:&quot;, name) </code></pre> <p><strong>file_a.py</strong>:</p> <pre class="lang-py prettyprint-override"><code>from file_b import foo, bar </code></pre> <p><strong>output</strong>:</p> <pre class="lang-none prettyprint-override"><code>__getattr__ called: __path__ __getattr__ called: foo __getattr__ called: bar __getattr__ called: foo __getattr__ called: bar </code></pre> <p>I run it with: <code>python file_a.py</code>. The interpreter version is: 3.10.6</p> <p>Could you please let me know the reason behind this?</p>
<python><python-3.x><module><python-import>
2023-03-23 12:28:07
1
17,004
S.B
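The doubling happens because `from file_b import foo, bar` makes the import machinery probe the module twice per name: `_handle_fromlist` first checks `hasattr(module, '__path__')` to decide whether the module is a package whose submodules may need importing, and since the `__getattr__` above returns `None` instead of raising, that check "succeeds" and each fromlist name is probed with `hasattr` too; the `IMPORT_FROM` bytecode then performs the real `getattr`. Raising `AttributeError` for unhandled names, as PEP 562 intends, removes the extra calls. A runtime sketch with a synthetic module:

```python
import sys
import types

calls = []

# Build a module at runtime that behaves like file_b.py, except that
# __getattr__ raises AttributeError for names it does not provide
# (as PEP 562 intends) instead of silently returning None.
mod = types.ModuleType("demo_mod")

def _module_getattr(name):
    calls.append(name)
    if name == "foo":
        return 42
    raise AttributeError(name)

mod.__getattr__ = _module_getattr
sys.modules["demo_mod"] = mod

from demo_mod import foo

# The '__path__' probe failed cleanly, so the module was not treated
# as a package and 'foo' was resolved only once.
print(calls)   # e.g. ['__path__', 'foo'] on CPython 3.10
```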
75,822,877
11,001,751
Exporting TIF Images from Google Earth Engine to Google Drive: Minimal Example in Python
<p>I thought exporting images from GEE should be quite straightforward, turns out I am facing difficulties, and I'm not satisfied with the answers given so far on this platform.</p> <p>As a minimal example, I want to extract nightlights images at original scale for South Africa:</p> <pre class="lang-py prettyprint-override"><code>import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() # Nightlights viirs = ee.ImageCollection(&quot;NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG&quot;).select('avg_rad') # Boundary of South Africa sa = ee.FeatureCollection(&quot;FAO/GAUL/2015/level0&quot;).filter(ee.Filter.eq(&quot;ADM0_NAME&quot;, &quot;South Africa&quot;)) # Get date to add in file name def get_date(img): return img.date().format().getInfo() # Collection to list viirs_list = viirs.toList(viirs.size()) # Iterate over images for i in range(viirs_list.size().getInfo()): img = ee.Image(viirs_list.get(i)) projection = img.projection().getInfo() d = get_date(img)[:7] # Data is monthly, so this gets year and month print(d) ee.batch.Export.image.toDrive( image = img, description = 'Download South Africa Nightlights', region = sa, crs = projection[&quot;crs&quot;], crsTransform = projection[&quot;transform&quot;], maxPixels = 1e13, folder = &quot;south_africa_viirs_dnb_nightlights&quot;, fileNamePrefix = 'south_africa_viirs_dnb_monthly_v1_vcmslcfg__' + d.replace('-', '_'), fileFormat = 'GeoTIFF').start() </code></pre> <p>This code runs perfectly, but in my drive I get something very odd:</p> <p><a href="https://i.sstatic.net/m68vF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m68vF.png" alt="enter image description here" /></a></p> <p>In particular: why are there different versions of the same image (the constructed names are all unique, as evident from the printout), and what are these numbers appended to the file name?</p>
<python><google-earth-engine>
2023-03-23 12:14:27
1
1,379
Sebastian
75,822,589
15,724,084
Python: guidance on using concurrent.futures to read a file asynchronously
<p>I want to add asynchronous I/O file reading with the <code>concurrent.futures</code> module to my script. I want the file to be read one time, and the result then worked on. As the logic of the module does not align with that, I created two different functions which each read the file separately, as a pandas dataframe, and then give me the result.</p> <pre><code>import pandas as pd import sys,os, time,re import concurrent.futures start=(time.perf_counter()) def getting_file_path(fileName): if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'): path_actual = os.getcwd() path_main_folder = path_actual[:-4] path_result = path_main_folder + fileName print('frozen path',os.path.normpath(path_result)) return path_result else: return fileName def read_keys_dropdown(): global lst_dropdown_keys file_to_read = pd.read_json(getting_file_path('./ConfigurationFile/configFile.csv')) lst_dropdown_keys=list(file_to_read.to_dict().keys()) lst_dropdown_keys.pop(0) lst_dropdown_keys.pop(-1) return lst_dropdown_keys def read_url(): pattern = re.compile(r&quot;^(?:/.|[^//])*/((?:\\.|[^/\\])*)/&quot;) file_to_read=pd.read_json(getting_file_path('./ConfigurationFile/configFile.csv')) result = (re.match(pattern, file_to_read.values[0][0])) return pattern.match(file_to_read.values[0][0]).group(1) with concurrent.futures.ThreadPoolExecutor() as executor: res_1=executor.submit(read_keys_dropdown) res_2=executor.submit(read_url) finish=(time.perf_counter()) print(res_1.result(),res_2.result(),finish-start,sep=';') </code></pre> <p>Before, I was doing it differently. I was reading <code>file_to_read = pd.read_json(getting_file_path('./ConfigurationFile/configFile.csv'))</code> in the global scope and then using that variable name in both functions.
I tried doing something like reading the data first and then working on the result, but it gave me <code>Futures</code> object has no attribute <code>to_dict</code>, nor <code>values[0]</code>... So, if I need to speed up my script and the concurrency or threading modules are a better choice for I/O file reading, how else can I use them in my script?</p>
<python><concurrency>
2023-03-23 11:45:56
1
741
xlmaster
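The `Futures` errors above come from calling DataFrame methods on the `Future` object rather than on `future.result()`. To read the file only once, submit the read, wait for its result, then fan that parsed object out to the processing functions. A sketch with a stand-in for the `pd.read_json` call:

```python
from concurrent.futures import ThreadPoolExecutor

def read_config(path):
    # Stand-in for pd.read_json(path); pretend this is the slow I/O step.
    return {"http://example.com/x/": None, "key1": 1, "key2": 2}

def dropdown_keys(config):
    keys = list(config)
    return keys[1:]               # drop the first entry, as in the script

def first_url(config):
    return next(iter(config))

with ThreadPoolExecutor() as executor:
    # Submit the file read exactly once...
    config = executor.submit(read_config, "configFile.csv").result()
    # ...then fan the parsed result out to the workers.
    keys_future = executor.submit(dropdown_keys, config)
    url_future = executor.submit(first_url, config)

print(keys_future.result(), url_future.result())
```

Note that for a single small file the thread overhead may outweigh any gain; the pattern pays off when several independent slow I/O operations can overlap.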
75,822,447
14,459,522
Can't use plt to show an image from the CIFAR-10 dataset in Google Colab
<p>I'm trying to show the images part of the CIFAR-10 dataset but for some reason <code>plt</code> shows me an axes image instead of the actual image that I want to see.</p> <pre><code> from os import lseek from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt Xtr, Ytr, Xte, Yte = load_CIFAR10('cs231n/datasets/cifar-10-batches-py') # print(Xtr[0].shape) Shape is (32, 32, 3) RGB image. plt.imshow(Xtr[0]) </code></pre> <p>In the docs it says that a shape of <code>(M, N, 3)</code> is ok for RGB images so I don't know why it doesn't show it. Any ideas?</p>
<python><matplotlib>
2023-03-23 11:32:31
1
433
CupOfGreenTea
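`plt.imshow` only creates the `AxesImage` (whose repr is what the cell displays); a call to `plt.show()` is still needed to render the figure in a plain script, and in a notebook it also suppresses the repr. A sketch with random data standing in for `Xtr[0]` (CIFAR images are uint8 RGB, which `imshow` also accepts):

```python
import matplotlib
matplotlib.use("Agg")             # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(32, 32, 3)   # stand-in for Xtr[0], floats in [0, 1]

ax_image = plt.imshow(img)        # returns an AxesImage -- the repr you saw
plt.show()                        # actually renders the figure
```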
75,822,372
2,960,388
Jenkins sudo: a terminal is required to read the password
<p>I have a FastAPI app for which I have configured Jenkins pipeline. When I execute unit tests with the code coverage enabled they are failing with the following error :</p> <pre><code>Started by user gold Obtained Jenkinsfile from git https://github.com/edtshuma/devsecops-labs.git [Pipeline] Start of Pipeline [Pipeline] node Running on Jenkins in /var/lib/jenkins/workspace/Python-DevSecOps .... .... [Pipeline] sh + pip install -r requirements.txt .... Requirement already satisfied: uvicorn==0.20.0 in ./.pyenv-usr-bin-python3.8/lib/python3.8/site-packages (from -r requirements.txt (line 41)) (0.20.0) Requirement already satisfied: watchfiles==0.18.1 in ./.pyenv-usr-bin-python3.8/lib/python3.8/site-packages (from -r requirements.txt (line 42)) (0.18.1) Requirement already satisfied: websockets==10.4 in ./.pyenv-usr-bin-python3.8/lib/python3.8/site-packages (from -r requirements.txt (line 43)) (10.4) + sudo chown -R jenkins:jenkins ./docs/unit-tests/htmlcoverage sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper [Pipeline] } [Pipeline] // withPythonEnv </code></pre> <p>Jenkinsfile</p> <pre><code>pipeline { agent any triggers { githubPush() } stages { stage('Setup'){ steps{ withPythonEnv('/usr/bin/python3.8') { sh 'echo &quot;Job is starting&quot; ' } } } stage('Unit Tests'){ steps{ withPythonEnv('/usr/bin/python3.8') { sh '''pip install -r requirements.txt sudo chown -R jenkins:jenkins ./docs/unit-tests/htmlcoverage pytest -v --junitxml=docs/unit-tests/htmlcoverage/coverage.xml --cov-report xml --cov app.main ''' } } } stage('Publish Test Report'){ steps{ cobertura autoUpdateHealth: false, autoUpdateStability: false, coberturaReportFile: 'coverage*.xml', conditionalCoverageTargets: '70, 0, 0', failUnhealthy: false, failUnstable: false, lineCoverageTargets: '80, 0, 0', maxNumberOfBuilds: 0, methodCoverageTargets: '80, 0, 0', onlyStable: false, sourceEncoding: 'ASCII', 
zoomCoverageChart: false archiveArtifacts artifacts: 'docs/unit-tests/htmlcoverage/*.*' } } } } </code></pre> <p>I have added the line <em>sudo chown -R jenkins:jenkins ./docs/unit-tests/htmlcoverage</em> because I was facing a permissions error to the coverage file :</p> <pre><code>INTERNALERROR&gt; PermissionError: [Errno 13] Permission denied: 'coverage.xml' </code></pre> <p>I have also verified that coverage.xml is under root user and not the regular jenkins user (<strong>What even causes this?</strong>) :</p> <p><a href="https://i.sstatic.net/zAavA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zAavA.png" alt="permissions" /></a></p> <p>What I have tried :</p> <pre><code>echo β€œjenkins ALL=(ALL) NOPASSWD: ALL” &gt;&gt; /etc/sudoers </code></pre> <p>This results in the same error <code>sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper</code></p> <pre><code>echo β€œjenkins ALL= NOPASSWD: ALL” &gt;&gt; /etc/sudoers </code></pre> <p>This also results in the same error <code>sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper</code></p> <p>In both cases I have done a restart of the jenkins service. The jenkins user is also already added to sudo group.</p> <p>What exactly am I missing ?</p>
<python><jenkins><jenkins-pipeline><code-coverage>
2023-03-23 11:23:08
3
1,039
Golide
75,822,277
726,150
Python data frames to SQL server DB
<p>I have the following code, and it is failing to connect to a Microsoft SQL Server database:</p> <pre><code>import pandas as pd import pyodbc # set up a connection to the database server = '1.1.1.1' database = 'testDB' username = 'tuser' password = 'xxxxx' cnxn = pyodbc.connect('DRIVER={{SQL Server}};SERVER='+server+';DATABASE='+database+';ENCRYPT=yes;UID='+username+';PWD='+ password) # create a DataFrame to write to the database data = { 'id': [1, 2, 3, 4], 'name': ['Alice', 'Bob', 'Charlie', 'David'], 'age': [25, 32, 18, 47] } df = pd.DataFrame(data) # write the DataFrame to a new table in the database table_name = 'MyTable' df.to_sql(table_name, cnxn, if_exists='replace') # close the connection cnxn.close() </code></pre> <p>I first tried it with</p> <pre><code>cnxn = pyodbc.connect(f'DRIVER={{SQL Server}};SERVER={server}.... </code></pre> <p>And it errors with</p> <pre><code>pandas.errors.DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master ........ </code></pre> <p>Then I tried using sqlalchemy, as per some post I had read, and then tried</p> <pre><code>cnxn = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server}; </code></pre> <p>But I have not managed to get any of them to work; all I want to do is dump the df to a table, using both the append and replace methods.</p> <p>What does work is below: I can connect using pyodbc.connect and list the tables, but I can't seem to write a df to the DB using pandas.</p> <p>At the moment I am testing on Windows but eventually this will run on Linux, so I need a solution that is portable. Any ideas?</p> <pre><code>... ... ... 
cnxn = pyodbc.connect(f'DRIVER={{SQL Server}};SERVER={server};DATABASE={database};UID={username};PWD={password}') # get a list of all table names in the database cursor = cnxn.cursor() table_names = [row.table_name for row in cursor.tables(tableType='TABLE')] # iterate through each table and drop it for table_name in table_names: print (table_name) # close the connection cnxn.close() </code></pre>
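A likely cause, sketched below: <code>DataFrame.to_sql</code> accepts a SQLAlchemy connectable (or a <code>sqlite3</code> connection), but not a raw <code>pyodbc</code> connection - pandas falls back to treating it as SQLite, which is why the error mentions <code>sqlite_master</code>. The SQL Server URL in the comment is an untested assumption built from the question's credentials; the demo substitutes an in-memory SQLite engine so the pattern runs anywhere.

```python
import pandas as pd
from sqlalchemy import create_engine

# For SQL Server the engine URL would presumably look like (assumption):
#   from urllib.parse import quote_plus
#   params = quote_plus("DRIVER={ODBC Driver 18 for SQL Server};SERVER=1.1.1.1;"
#                       "DATABASE=testDB;UID=tuser;PWD=xxxxx;TrustServerCertificate=yes")
#   engine = create_engine(f"mssql+pyodbc:///?odbc_connect={params}")
# An in-memory SQLite engine stands in here so the example is runnable:
engine = create_engine("sqlite://")

df = pd.DataFrame({"id": [1, 2, 3, 4],
                   "name": ["Alice", "Bob", "Charlie", "David"],
                   "age": [25, 32, 18, 47]})

# to_sql works once it is handed an engine instead of a pyodbc connection
df.to_sql("MyTable", engine, if_exists="replace", index=False)

roundtrip = pd.read_sql("SELECT * FROM MyTable", engine)
print(len(roundtrip))
```

The same engine object is portable between Windows and Linux; only the driver name inside the connection string changes.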
<python><sql-server>
2023-03-23 11:12:58
1
2,653
DevilWAH
75,822,267
10,771,559
In each group, only keep the row that contains the best choice
<p>I have a dataframe with a column of Groups. I have another column with different choices. I want to only keep one row per group and to only keep the row containing the <strong>best</strong> choice. The choices in order of preference are First Choice, Second Choice, Third Choice, Fourth Choice.</p> <p>My dataframe is much larger, so I need a scalable option, but here is an examples dataframe.</p> <pre><code>Group Choice Group1 First Choice Group1 Second Choice Group1 Third Choice Group2 Third Choice Group2 Fourth Choice Group3 Second Choice Group3 Fourth Choice </code></pre> <p>My desired output would be</p> <pre><code>Group Choice Group1 First Choice Group2 Third Choice Group3 Second Choice </code></pre> <p>recreatable dataframe code:</p> <pre><code>d = {'Group': ['Group1', 'Group1', 'Group1', 'Group2', 'Group2', 'Group3', 'Group3'], 'Choice': ['First Choice', 'Second Choice', 'Third Choice', 'Third Choice', 'Fourth Choice', 'Second Choice', 'Fourth Choice']} df=pd.DataFrame(data=d) </code></pre>
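One scalable, vectorised approach (a sketch, not the only option): encode <code>Choice</code> as an ordered categorical so sorting follows preference order, then keep the first row per group.

```python
import pandas as pd

order = ["First Choice", "Second Choice", "Third Choice", "Fourth Choice"]
d = {"Group": ["Group1", "Group1", "Group1", "Group2", "Group2", "Group3", "Group3"],
     "Choice": ["First Choice", "Second Choice", "Third Choice", "Third Choice",
                "Fourth Choice", "Second Choice", "Fourth Choice"]}
df = pd.DataFrame(data=d)

# Ordered categorical: sorting now ranks First < Second < Third < Fourth
df["Choice"] = pd.Categorical(df["Choice"], categories=order, ordered=True)
best = df.sort_values("Choice").groupby("Group", as_index=False).first()
print(best)
```

Both the sort and the groupby are implemented in vectorised pandas code, so this stays fast on large frames.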
<python><pandas>
2023-03-23 11:12:12
2
578
Niam45
75,822,180
11,141,816
Can pip be used to upgrade a package installed with conda-forge?
<p>A package was installed through conda-forge:</p> <pre><code>conda install -c conda-forge &lt;package&gt; </code></pre> <p>However, I don't know how to upgrade the package to the newest version, because</p> <pre><code>conda install --upgrade -c conda-forge openai </code></pre> <p>didn't work, so I typed</p> <pre><code>pip install --upgrade &lt;package&gt; </code></pre> <p>Somehow it worked, but does it affect the package dependencies?</p>
<python><installation><pip><anaconda><upgrade>
2023-03-23 11:03:43
0
593
ShoutOutAndCalculate
75,822,030
6,761,328
Include assets in dashboard created by an .exe via PyInstaller()
<p>I created a dashboard via <code>plotly dash</code>. This dashboard will be used on a single machine in a lab. Hence, I didn't host it on a server or the like but created an .exe with <code>PyInstaller()</code>. When executed, a window will open with these lines:</p> <pre><code>Dash is running on http://127.0.0.1:8054/ * Serving Flask app &quot;app&quot; (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off </code></pre> <p>Then, one simply has to enter the above URL/IP and the dashboard opens up. This works fine except that the <code>.css</code> files in the assets folder are apparently ignored somewhere along the way. How do I ensure they are included?</p>
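A common remedy, sketched under the assumption that the app uses Dash's <code>assets_folder</code> parameter: bundle the folder with PyInstaller's <code>--add-data</code> and resolve it at run time via <code>sys._MEIPASS</code>, the temporary directory where one-file builds unpack bundled data.

```python
import os
import sys

def resource_path(relative):
    # In a PyInstaller one-file build, bundled data lives under sys._MEIPASS;
    # when running from source, fall back to the current directory.
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative)

# Hypothetical wiring (names assumed, not taken from the question):
#   app = dash.Dash(__name__, assets_folder=resource_path("assets"))
# and build with the assets bundled in:
#   pyinstaller --onefile --add-data "assets;assets" app.py   # Windows
#   pyinstaller --onefile --add-data "assets:assets" app.py   # macOS/Linux
print(resource_path("assets"))
```

Without <code>--add-data</code> the assets folder never makes it into the .exe at all, which would match the symptom of the .css files being silently ignored.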
<python><pyinstaller><plotly-dash>
2023-03-23 10:47:10
2
1,562
Ben
75,821,999
4,659,442
Select row in dataframe containing a specific UUID
<p>I have a dataframe, one column of which holds a UUID:</p> <pre><code>import numpy as np import pandas as pd import uuid df = pd.DataFrame( data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'] ) df['d'] = np.NaN df['d'] = df['d'].apply( lambda x: uuid.uuid4() ) </code></pre> <p>Preview:</p> <pre><code>df ------- a b c d ------- 0 1 2 3 31abc2af-117d-4fe8-b43f-e68fa429187f ------- 1 4 5 6 f63b36c8-bb4e-4148-ace9-a89fa117e15c </code></pre> <p>I now want to select rows based on a UUID. But the following returns an empty set of rows:</p> <pre><code>df.loc[ df['d'] == '31abc2af-117d-4fe8-b43f-e68fa429187f' ] </code></pre> <p>How do I select rows using UUID as the match criteria?</p>
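The comparison returns no rows because the column holds <code>uuid.UUID</code> objects while the right-hand side is a plain string, and a UUID never compares equal to its string form. Two ways to compare like with like (a sketch):

```python
import uuid
import pandas as pd

df = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=["a", "b", "c"])
df["d"] = [uuid.uuid4() for _ in range(len(df))]

# stand-in for a known string like '31abc2af-117d-4fe8-b43f-e68fa429187f'
target = str(df["d"].iloc[0])

# Option 1: parse the string into a UUID before comparing
rows = df.loc[df["d"] == uuid.UUID(target)]

# Option 2: compare as strings instead
rows2 = df.loc[df["d"].astype(str) == target]
print(len(rows), len(rows2))
```

If the column is matched against strings often, converting it once with <code>df["d"] = df["d"].astype(str)</code> avoids repeated conversions.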
<python><pandas><uuid>
2023-03-23 10:43:43
2
727
philipnye
75,821,940
6,772,468
How to pass file using html form, Dropzone and Flask
<p>I am using Dropzone and Flask using html form to pass files. like this below:</p> <pre><code>&lt;form action=&quot;/&quot; enctype=&quot;multipart/form-data&quot; method=&quot;POST&quot; class=&quot;dropzone&quot; id=&quot;myAwesomeDropzone&quot; &gt; &lt;div class=&quot;fallback&quot;&gt; &lt;input name=&quot;file&quot; id=&quot;file&quot; type=&quot;file&quot; accept=&quot;.csv, text/csv, text/plain, text/tsv, text/comma-separated-values&quot; /&gt; &lt;/div&gt; &lt;div class=&quot;dz-message needsclick&quot;&gt; &lt;i class=&quot;h1 text-muted dripicons-cloud-upload&quot;&gt;&lt;/i&gt; &lt;h3&gt;Drop CSV file here or click to upload.&lt;/h3&gt; &lt;/div&gt; &lt;div class=&quot;clearfix text-right mt-3&quot;&gt; &lt;button type=&quot;submit&quot; class=&quot;btn btn-danger&quot;&gt; &lt;i class=&quot;mdi mdi-send mr-1&quot;&gt;&lt;/i&gt; Submit&lt;/button&gt; &lt;/div&gt; &lt;/form&gt; </code></pre> <p>Here is my JS below:</p> <pre><code>&lt;script&gt; // when the form myAwesomeDropzone has a file added // submit the form Dropzone.options.myAwesomeDropzone = { autoProcessQueue: false, uploadMultiple: false, parallelUploads: 100, maxFiles: 1, maxFilesize: 20, acceptedFiles: &quot;.csv&quot;, addRemoveLinks: true, dictRemoveFile: &quot;Remove&quot;, dictFileTooBig: &quot;File is too big ({{filesize}}MiB). 
Max filesize: {{maxFilesize}}MiB.&quot;, dictInvalidFileType: &quot;You can't upload files of this type.&quot;, dictMaxFilesExceeded: &quot;You can only upload 1 file.&quot;, init: function () { var myDropzone = this; // if a CSV file is uploaded, submit the form with the csv file this.on(&quot;addedfile&quot;, function (file) { // Enable the submit button document.querySelector(&quot;button[type='submit']&quot;).disabled = false; }); }, }; &lt;/script&gt; </code></pre> <p>In the Flask route:</p> <pre><code>@app.route(&quot;/&quot;, methods=[&quot;POST&quot;]) @login_required def process_csv_file(): print(&quot;request files: &quot;, request.files) file = request.files.get(&quot;file&quot;) print(&quot;file: &quot;, file) data = pd.read_csv(io.StringIO(file.decode(&quot;utf-8&quot;))) print(&quot;data: &quot;) print(data.head()) return &quot;processed&quot; </code></pre> <p>The request files are empty. I have tried using the button, removing the button, and debugging what's wrong, but I can't understand why the request files are empty. What could be the error?</p> <p>Thanks in advance</p>
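Two observations, both hedged since the full page markup is not shown. On the JS side, with <code>autoProcessQueue: false</code> Dropzone holds the file until <code>processQueue()</code> is called, so a plain form submit posts no file; the usual pattern is to intercept the submit event and call <code>myDropzone.processQueue()</code>. On the Flask side, <code>file.decode(...)</code> would fail even once the upload arrives, because <code>request.files</code> values are <code>FileStorage</code> objects that must be <code>read()</code> first. A runnable sketch of the server side, exercised with Flask's test client in place of the browser:

```python
import io
import pandas as pd
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def process_csv_file():
    f = request.files.get("file")
    # FileStorage has no .decode(); read the bytes, then decode them
    df = pd.read_csv(io.StringIO(f.read().decode("utf-8")))
    return str(len(df))

# Simulate the multipart POST Dropzone would send after processQueue():
client = app.test_client()
resp = client.post("/", data={"file": (io.BytesIO(b"a,b\n1,2\n3,4\n"), "t.csv")},
                   content_type="multipart/form-data")
print(resp.get_data(as_text=True))
```

The <code>@login_required</code> decorator from the question is omitted here only to keep the sketch self-contained.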
<javascript><python><python-3.x><flask><dropzone.js>
2023-03-23 10:36:45
1
1,375
JA-pythonista
75,821,774
10,620,788
Run an Azure Databricks notebook from another notebook with ipywidget
<p>I am trying to run a notebook from another notebook using the dbutils.notebook.run as follows:</p> <pre><code>import ipywidgets as widgets from ipywidgets import interact from ipywidgets import Box button = widgets.Button(description='Run model') out = widgets.Output() def on_button_clicked(b): button.description = 'Run model' with out: dbutils.notebook.run(&quot;/mynotebookpath&quot;,60) button.on_click(on_button_clicked) widgets.VBox([button, out]) </code></pre> <p>However, I am getting the following error:</p> <blockquote> <p>IllegalArgumentException: Context not valid. If you are calling this outside the main thread, you must set the Notebook context via dbutils.notebook.setContext(ctx), where ctx is a value retrieved from the main thread (and the same cell)</p> </blockquote> <p>I can run the notebook just fine when I do <code>%run</code> on a single cell and even <code>dbutils.notebook.run(&quot;/mynotebook&quot;, 60)</code> on a single cell. However I cannot get it to run within the ipywidget context</p>
<python><azure><pyspark><databricks><ipywidgets>
2023-03-23 10:20:35
1
363
mblume
75,821,671
1,866,038
Does DataFrame.index.empty imply DataFrame.empty?
<p>If I have a DataFrame, <code>df</code>, for which <code>df.index.empty</code> is <code>True</code>, will this ALWAYS imply that <code>df.empty</code> is also <code>True</code>?</p> <p>My intent is to test only df.index.empty when I need to test both conditions (lazy programming style).</p>
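For the record, the implication holds in one direction only: <code>df.empty</code> is true whenever <em>any</em> axis has length 0, so an empty index guarantees an empty frame, but a frame can also be empty because it has no columns while its index is non-empty. A quick check:

```python
import pandas as pd

a = pd.DataFrame(columns=["x"])   # no rows: the index is empty
b = pd.DataFrame(index=[0, 1])    # rows but no columns

print(a.index.empty, a.empty)     # empty index implies empty frame
print(b.index.empty, b.empty)     # empty frame, non-empty index
```

So <code>df.index.empty</code> is a sufficient condition for <code>df.empty</code>, but testing only it will miss frames that are empty for lack of columns.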
<python><pandas><dataframe>
2023-03-23 10:10:42
2
517
Antonio Serrano
75,821,598
792,015
Accessing an instance throughout a tkinter app
<p>I have a program that defines a Gear class and I am creating a tkinter GUI to manipulate it. However I'm unsure how to make a globally accessible instance of the Gear. Currently I am instantiating it like this and passing it into the sub frames I am creating like so ...</p> <pre><code>class MainApplication(ttk.Frame): def __init__(self, root, *args, **kwargs): ttk.Frame.__init__(self, root) self.gear = Gear.make_spur(50, 1, 20) # instance of gear created self.home_frame = HomeFrame(root, self.gear) # Instance passed to home_frame self.home_frame.grid(row=0, column=1, sticky=&quot;nsew&quot;) self.calc_frame = CalcFrame(root, self.gear) # Instance passed to calc_frame self.calc_frame.grid(row=0, column=2, sticky=&quot;nsew&quot;) nav_frame = NavBar(root, controller=self, home_frame=self.home_frame, calc_frame=self.calc_frame) nav_frame.grid(row=0, column=0, sticky=&quot;nsew&quot;) </code></pre> <p>This allows both Frames to work on the gear, but if I delete and create a new instance in the home_frame, the calculations I do on the gear in the calc_frame use the gear created in the above code.</p> <p>Should I declare an instance as a global, or should I be considering a pattern like Singleton or Monostate? It seems like this should be a common problem when creating a program based around a single instance of a class, but I don't know how to approach it.</p> <p>In Python there is supposed to be &quot;one right way&quot; to do things. Which is the right way?</p>
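One common remedy, sketched with tkinter stripped out so it runs anywhere (the <code>Gear</code> class here is a stand-in): keep the single <code>Gear</code> on the controller and have every frame reach it <em>through</em> the controller instead of caching its own reference. Then replacing the gear in one frame is immediately visible to all, with no global and no Singleton.

```python
class Gear:
    """Stand-in for the question's Gear class."""
    def __init__(self, teeth):
        self.teeth = teeth

class Controller:
    """Owns the single Gear instance; frames never store their own copy."""
    def __init__(self):
        self.gear = Gear(50)

class HomeFrame:
    def __init__(self, controller):
        self.controller = controller
    def replace_gear(self, teeth):
        self.controller.gear = Gear(teeth)   # rebinds the shared instance

class CalcFrame:
    def __init__(self, controller):
        self.controller = controller
    def teeth(self):
        return self.controller.gear.teeth    # always the *current* gear

app = Controller()
home, calc = HomeFrame(app), CalcFrame(app)
home.replace_gear(60)
print(calc.teeth())
```

The stale-gear bug in the question comes from frames holding a direct reference to the old object; looking the gear up via the controller on every access avoids it.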
<python><oop><tkinter>
2023-03-23 10:03:50
1
1,466
Inyoka
75,821,466
1,651,481
Render an SVG image in Python with Python-only modules
<p>The question is simple, but I have googled a lot of methods, and there is no such solution as:</p> <pre><code>import svg-render-library figure = svg-render-library.open('test.svg') figure.render() </code></pre> <p>Is there any simple method to display an SVG image using only Python libraries? I am asking about rendering the SVG image without any conversion to other formats, using pure Python and without any 3rd-party software. As far as I have tried, this seems impossible for now.</p> <p>By built-in Python I mean only Python packages available through pip, so it is not necessary to install/compile anything else. And by render I mean showing it inside a window that is part of the Python program, not the browser or any external software.</p> <p>At least I currently have a working solution: <a href="https://stackoverflow.com/questions/75911809/svg2rlg-converting-svg-to-png-only-part-of-the-image-with-percentage-size">my question on Stack Overflow</a>.</p>
<python><svg><rendering>
2023-03-23 09:50:52
1
612
XuMuK
75,821,325
11,422,610
Why do I have to activate my virtual environment for the second time - i.e. from within a Python script despite having it already activated?
<p>I wrote many scripts to help me manage my django projects. One of them, <code>cr_app.py</code>, creates a new app:</p> <pre><code>#!/usr/bin/env python3 from subprocess import run def main(): create_app() def create_app(): name = input(&quot;The name of the app?\n&quot;) run(f&quot;&quot;&quot;python3 manage.py startapp {name}&quot;&quot;&quot;, shell=True) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>When I am inside my virtual environment's project directory, having activated it with <code>. bin/activate</code>, and, <a href="https://docs.djangoproject.com/en/4.1/intro/tutorial01/" rel="nofollow noreferrer">following the tutorial</a>, I manually run <code>python3 manage.py startapp polls</code>, then I find <code>models.py</code> already created inside <code>polls/</code>. But this does not happen - <code>models.py</code> does not get created - when I run <code>cr_app.py</code> and create a new app <code>polls</code>, even though I run this script while being in the project's activated virtual environment (although <code>cr_app.py</code> is not; it is located in a different, remote directory). The app directory, <code>polls/</code>, gets created but without <code>models.py</code> inside it. Activating the environment from within the script with <code>run(f&quot;&quot;&quot;. bin/activate &amp;&amp; python3 manage.py startapp {name}&quot;&quot;&quot;, shell=True)</code> fixes the issue, and <code>models.py</code> gets created. Why do I have to activate my virtual environment for the second time - i.e. from within a script - after having already activated it?</p>
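A sketch of the usual remedy: launch <code>manage.py</code> with <code>sys.executable</code> instead of a bare <code>python3</code>, so the child process uses the same interpreter (and hence the same site-packages) as the already-activated environment; <code>shell=True</code> with <code>python3</code> can resolve to a different Python depending on PATH. The demo only verifies the interpreter-identity claim; the <code>startapp</code> line in the comments is the assumed application of it.

```python
import sys
from subprocess import run, PIPE

# The child launched via sys.executable sees the same interpreter as the
# parent, with no re-activation needed:
res = run([sys.executable, "-c", "import sys; print(sys.executable)"],
          stdout=PIPE, text=True)
print(res.stdout.strip() == sys.executable)

# So in cr_app.py the fix would presumably be:
#   run([sys.executable, "manage.py", "startapp", name])
# instead of:
#   run(f"python3 manage.py startapp {name}", shell=True)
```

Passing a list also removes the <code>shell=True</code> quoting hazards around user-supplied app names.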
<python><django><django-manage.py>
2023-03-23 09:36:38
0
937
John Smith
75,821,239
290,650
Multiple offset y-axis AND multiple offset x-axis for same subplot (at least 3 of each)
<p>I would like to plot data of various scales on the same axes object. This means having multiple x and y-axis on the same 'axes' object (I'm aware there are other ways to visualise multi-scale data but I don't want to get into the details too much of why I need to do it this way!).</p> <p>I understand how to use <code>twinx</code> to get multiple y-axis on the same plot using the &quot;<a href="https://matplotlib.org/3.4.3/gallery/ticks_and_spines/multiple_yaxis_with_spines.html" rel="nofollow noreferrer">Multiple Yaxis With Spines&quot; matplotlib demo</a>. However, in this example all the data share the same x-axis. I want to have both multiple y-axis and multiple x-axis. Having 2 of each is trivial: I can simply use <code>twin_ax = ax.twinx().twiny()</code>. However, if I want more than two x-axis and y-axis, I need a way to offset both x-axis and y-axis spines. The problem is that if I use <code>ax.twinx().twiny()</code>, I can only offset the top spine and not the right spine. If I use <code>ax.twiny().twinx()</code>, then it's the other way round.</p> <p>Here's my code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt fig, ax = plt.subplots(constrained_layout=True) twin1 = ax.twinx().twiny() twin2 = ax.twinx().twiny() twin2.spines.right.set_position((&quot;axes&quot;, 1.2)) twin2.spines.top.set_position((&quot;axes&quot;, 1.2)) plt.show() </code></pre> <p>It appears that only the top spine gets offset, not the right spine. If I swap the ordering to <code>twin2 = ax.twiny().twinx()</code>, then only the right spine gets offset and not the right! How do I fix this problem?</p>
<python><matplotlib><plot>
2023-03-23 09:28:11
1
6,901
Eddy
75,821,223
279,313
SGD optimizer: Cannot iterate over a scalar tensor
<p>Trying to execute code examples from the book, I get the similar issue for several examples. Probably, because I use new TensorFlow version. Minimal code to reproduce:</p> <pre><code>import tensorflow as tf import numpy as np X = tf.constant(np.linspace(-1, 1, 101), dtype=tf.float32) Y = tf.constant(np.linspace(-1, 1, 101), dtype=tf.float32) w = tf.Variable(0., name=&quot;weights&quot;, dtype=tf.float32) cost = lambda: tf.square(Y - tf.multiply(X, w)) train_op = tf.keras.optimizers.SGD(0.01) #train_op = tf.compat.v1.train.GradientDescentOptimizer(0.01) # works with this optimizer train_op.minimize(cost, w) # error is here </code></pre> <p>Error:</p> <pre> Traceback (most recent call last): File "/home/alex/tmp/test.py", line 15, in train_op.minimize(cost, w) File "/home/alex/.local/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 526, in minimize grads_and_vars = self.compute_gradients(loss, var_list, tape) File "/home/alex/.local/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 260, in compute_gradients return list(zip(grads, var_list)) File "/home/alex/.local/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 583, in __iter__ raise TypeError("Cannot iterate over a scalar tensor.") TypeError: Cannot iterate over a scalar tensor. </pre> <p>If I replace <code>tf.keras.optimizers.SGD</code> with <code>tf.compat.v1.train.GradientDescentOptimizer</code>, this code is working as expected. 
How can I get it working with the <code>SGD</code> optimizer?</p> <p>Python version is 3.10.6, TensorFlow version is 2.11.0:</p> <pre> alex@alex-22:~$ python3 Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux >>> import tensorflow as tf >>> print(tf.__version__) 2.11.0 </pre> <p>This code is from the &quot;Machine Learning with TensorFlow&quot; book; the full example code is here: <a href="https://github.com/chrismattmann/MLwithTensorFlow2ed/blob/master/TFv2/ch03/Listing%203.01%20-%203.02.ipynb" rel="nofollow noreferrer">https://github.com/chrismattmann/MLwithTensorFlow2ed/blob/master/TFv2/ch03/Listing%203.01%20-%203.02.ipynb</a></p>
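A likely fix, sketched below: the Keras optimizer's <code>minimize(loss, var_list)</code> expects <code>var_list</code> to be a <em>list</em> of variables, and passing the bare variable makes the internal <code>zip(grads, var_list)</code> try to iterate a scalar tensor, which is the error shown. Reducing the loss to a scalar with <code>reduce_mean</code> is also added here, as that is the conventional form of the squared-error cost.

```python
import numpy as np
import tensorflow as tf

X = tf.constant(np.linspace(-1, 1, 101), dtype=tf.float32)
Y = tf.constant(np.linspace(-1, 1, 101), dtype=tf.float32)
w = tf.Variable(0.0, name="weights", dtype=tf.float32)

# Scalar loss, and the variables passed as a list:
cost = lambda: tf.reduce_mean(tf.square(Y - tf.multiply(X, w)))

train_op = tf.keras.optimizers.SGD(0.01)
for _ in range(1000):
    train_op.minimize(cost, [w])   # note the list: [w], not w

print(float(w))  # converges toward 1.0 since Y == X
```

The old <code>GradientDescentOptimizer</code> tolerated a single variable, which is presumably why the legacy call worked unchanged.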
<python><tensorflow><keras>
2023-03-23 09:27:04
1
43,451
Alex F
75,821,044
12,415,855
Python - reading hyperlink information?
<p>I am trying to read hyperlink information from an Excel sheet using the following code:</p> <pre><code>import os import sys import xlwings as xw path = os.path.abspath(os.path.dirname(sys.argv[0])) fn = os.path.join(path, &quot;inp.xlsx&quot;) wb = xw.Book (fn) ws1 = wb.sheets[&quot;MainOverview&quot;] val = ws1[&quot;H18&quot;].value inpLink = ws1[&quot;H18&quot;].hyperlink print(f&quot;{val}: {inpLink}&quot;) val = ws1[&quot;I18&quot;].value inpLink = ws1[&quot;I18&quot;].hyperlink print(f&quot;{val}: {inpLink}&quot;) </code></pre> <p>In cell H18 there is a link to another place / worksheet in the document - see the attached information it shows when hovering over the link in the cell:</p> <p><a href="https://i.sstatic.net/rOdNA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rOdNA.png" alt="enter image description here" /></a></p> <p>And in cell I18 I have a link to a website - the hover info is attached:</p> <p><a href="https://i.sstatic.net/So8YB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/So8YB.png" alt="enter image description here" /></a></p> <p>But when I run the program I only get this output:</p> <pre><code>View Findings: Google: https://www.google.com/ </code></pre> <p>So I only get the link information for the website link but not the file-link information from cell H18.</p> <p>How is it possible to get the full file link (file:///C:\DEV...) from cell H18?</p>
<python><excel><xlwings>
2023-03-23 09:09:29
1
1,515
Rapid1898
75,820,977
4,681,355
Clickable elements not found by Selenium with wait
<p>I'm trying to scrape all the team statistics at this page: <a href="https://www.unitedrugby.com/clubs/glasgow-warriors/stats" rel="nofollow noreferrer">https://www.unitedrugby.com/clubs/glasgow-warriors/stats</a></p> <p>As you can see there are several drop down menus. The ones I'm interested in are the six ones describing the 2022/23 statistics (Attack, Defence, Kicking, Discipline, Lineouts, Scrums).</p> <p>I have inspected the page and the item to click to open each of the six menus should have the following XPATH: <code>//div[@class='bg-white px-6 py-2 absolute left-1/2 -translate-x-1/2 -top-5 text-slate-deep uppercase text-2xl leading-5 font-step-1 font-urc-sans tracking-[2px] hover:cursor-pointer select-none']</code>.</p> <p>In the Firefox inspector it also says &quot;event&quot; next to this particular line so (since I'm not that skilled in Selenium yet) I thought it was the element to click.</p> <p>I have used the following piece of code to retrieve all elements with that class:</p> <pre class="lang-py prettyprint-override"><code>Elements = WebDriverWait(driver, 60).until( EC.element_to_be_clickable((By.XPATH, &quot;//div[@class='bg-white px-6 py-2 absolute left-1/2 -translate-x-1/2 -top-5 text-slate-deep uppercase text-2xl leading-5 font-step-1 font-urc-sans tracking-[2px] hover:cursor-pointer select-none']&quot;)) ) </code></pre> <p>My idea was to find all these elements, wait for them to be clickable, then click them to open the dropdown menus, and scrape all the statistics contained inside.</p> <p>Regardless of how much time I allow it to wait, it always reaches a Timeout exception.</p> <p>Could anyone help me sorting out this issue?</p> <p><strong>EDIT #1:</strong></p> <p>Thanks to the answers I have achieved the first step. However, my ultimate goal is to retrieve the actual statistics (e.g. 
&quot;Points scored&quot;, inside &quot;Attack&quot;).</p> <p>These are all under the class <code>flex justify-between items-center border-t border-mono-300 py-4 md:py-6</code>.</p> <p>After clicking on the cookies button and waiting for the presence of all elements (which works now) I am not able to retrieve elements with this class.</p> <p>What I'm missing is how to open all those 6 menus before scraping the statistics, because they don't show up unless I click on the dropdown.</p> <p>I'm doing this:</p> <pre class="lang-py prettyprint-override"><code>Elements = [el.click() for el in Elements] </code></pre> <p>Because I'm trying to click on each of the 6 webdriver instances resulting from the previous <code>Wait</code>.</p> <p>I think this isn't the way I'm supposed to do it, but I can't figure out how, so any hints are welcome.</p>
<python><html><selenium-webdriver><web-scraping><xpath>
2023-03-23 09:02:40
3
622
schmat_90
75,820,971
9,751,398
How to fix Google Colab with miniconda installed that gives ModuleNotFoundError on import?
<p>I have used Google Colab for more than half a year successfully for a special application. In the beginning of my Colab-notebook script I install conda and use conda-forge for installing a key Python package. At the beginning of this year 2023 I became aware of the Colab update to use Ubuntu 20.04 LTS and made some updates of my scripts, and all worked again.</p> <p>A few weeks ago, around 2023-03-09, something happened: the key package can still be installed and shows up using &quot;!conda list&quot;, but the package cannot be imported. The error message is: ModuleNotFoundError: No module named....</p> <p>I tested installing the package locally on a VBox VM with the same Ubuntu version as Google Colab (20.04) and also the same Python 3.8.16 and conda 23.1.0 versions, and it works just fine.</p> <p>So I guess the problem is with Google Colab and its environment, but what can be done?</p>
<python><linux><jupyter-notebook><google-colaboratory><miniconda>
2023-03-23 09:01:48
1
1,156
janpeter
75,820,944
12,931,358
Is it possible to combine these three tensors into a large tensor as model input without using dict?
<p>I have an ANN network, and now I need a dict that combines these three tensors as input, for example,</p> <pre><code>model = MyNetwork(&quot;mypath&quot;) dummy_input = {} dummy_input[&quot;input_ids&quot;] = torch.randint(1, 512,(1,345)) dummy_input[&quot;attention_mask&quot;] = torch.randint(1, 512,(1,345)) dummy_input[&quot;bbox&quot;] = torch.randint(1, 512,(1,345,4)) torch_out = model(**dummy_input) #need to decompress before input </code></pre> <p>Thus, I was wondering if it is possible to combine the above-mentioned tensors into one tensor, for example,</p> <pre><code>input_ids = torch.randint(1, 512,(1,345)) attention_mask = torch.randint(1, 512,(1,345)) bbox = torch.randint(1, 512,(1,345,4)) torch_out = model(input_ids = input_ids, attention_mask = attention_mask, bbox = bbox) #is it possible to input only one tensor to replace these three? print(torch_out) </code></pre> <p>By the way, the forward function of my model is below; only these three tensors are necessary:</p> <pre><code>def forward( self, input_ids=None, bbox=None, attention_mask=None, token_type_ids=None, valid_span=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, images=None, ): </code></pre>
<python><deep-learning><pytorch><neural-network>
2023-03-23 08:58:22
1
2,077
4daJKong
75,820,725
19,556,055
Is there a fast way to merge overlapping columns between two pandas dataframes?
<p>I have a DataFrame with employee information that is missing some records, and want to fill these in using another DataFrame. The way I'm doing it now is below, but takes way too long because it's a lot of rows.</p> <pre><code>df_missing = df_cleaned.loc[(df_cleaned[&quot;HOURLY_BASE_RATE&quot;]&lt;= 0) | (df_cleaned[&quot;HOURLY_BASE_RATE&quot;].isna())] df_missing_in_integration = df_missing[[&quot;ASSOCIATE_ID&quot;, &quot;COUNTRY&quot;]].merge(df_integration_wages, on=[&quot;ASSOCIATE_ID&quot;, &quot;COUNTRY&quot;]) for index, row in df_missing_in_integration.iterrows(): associate_id = row[&quot;ASSOCIATE_ID&quot;] associate_country = row[&quot;COUNTRY&quot;] associate_index = df_cleaned.index[(df_cleaned[&quot;ASSOCIATE_ID&quot;] == associate_id) &amp; (df_cleaned[&quot;COUNTRY&quot;] == associate_country)] df_cleaned.loc[associate_index, &quot;HOURLY_BASE_RATE&quot;] = row[&quot;HOURLY_BASE_RATE&quot;] df_cleaned.loc[associate_index, &quot;CURRENCY&quot;] = row[&quot;CURRENCY&quot;] df_cleaned.loc[associate_index, &quot;PAY_COMPONENT&quot;] = row[&quot;PAY_COMPONENT&quot;] df_cleaned.loc[associate_index, &quot;FTE&quot;] = row[&quot;FTE&quot;] </code></pre> <p>Is there a faster way to fill in those missing values based on a unique combination of the ASSOCIATE_ID and COUNTRY columns? I've tried <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a>, but this gives me extra columns instead of filling in the values in the existing columns. 
I've also tried <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer">combine_first</a>, but for some reason I still have the nan values when I try this.</p> <p>Here's some example DataFrames:</p> <pre><code>df_cleaned = pd.DataFrame({&quot;ASSOCIATE_ID&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], &quot;COUNTRY&quot;: [&quot;USA&quot;, &quot;USA&quot;, &quot;BEL&quot;, &quot;GER&quot;, &quot;BEL&quot;, &quot;USA&quot;, &quot;GER&quot;, &quot;GER&quot;, &quot;NLD&quot;, &quot;NLD&quot;], &quot;HOURLY_BASE_RATE&quot;: [15, np.nan, 20, 18, np.nan, np.nan, 43, 38, np.nan, 13], &quot;CURRENCY&quot;: [&quot;USD&quot;, &quot;USD&quot;, &quot;EUR&quot;, &quot;EUR&quot;, &quot;EUR&quot;, &quot;USD&quot;, &quot;EUR&quot;, &quot;EUR&quot;, &quot;EUR&quot;, &quot;EUR&quot;], &quot;PAY_COMPONENT&quot;: [&quot;Hourly&quot;, np.nan, &quot;Hourly&quot;, &quot;Hourly&quot;, np.nan, np.nan, &quot;Hourly&quot;, &quot;Hourly&quot;, np.nan, &quot;Hourly&quot;], &quot;FTE&quot;: [1, 1, 0.8, 1, np.nan, np.nan, 0.75, 0.75, np.nan, 1], &quot;LOCATION_TYPE&quot;: [&quot;Stores&quot;, &quot;Stores&quot;, &quot;Distribution Center&quot;, &quot;Stores&quot;, &quot;Headquarters&quot;, &quot;Headquarters&quot;, &quot;Headquarters&quot;, &quot;Distribution Center&quot;, &quot;Stores&quot;, &quot;Stores&quot;}) df_integration_wages = pd.DataFrame({&quot;ASSOCIATE_ID&quot;: [2, 5, 6, 9, 11, 12], &quot;COUNTRY&quot;: [&quot;USA&quot;, &quot;USA&quot;, &quot;USA&quot;, &quot;NLD&quot;, &quot;BEL&quot;, &quot;BEL&quot;], &quot;HOURLY_BASE_RATE&quot;: [2500, 23, 37, 20, 32, 16], &quot;CURRENCY&quot;: [&quot;USD&quot;, &quot;USD&quot;, &quot;USD&quot;, &quot;EUR&quot;, &quot;EUR&quot;, &quot;EUR&quot;], &quot;PAY_COMPONENT&quot;: [&quot;Monthly&quot;, &quot;Hourly&quot;, &quot;Hourly&quot;, &quot;Hourly&quot;, &quot;Hourly&quot;, &quot;Hourly&quot;], &quot;FTE&quot;: [1, 0.6, 1, 1, 0.8, 1]}) </code></pre> <p>I only want to replace the rows where 
the wage was missing. There are more rows in the integration file, but I don't want to include those. Is there a faster way to achieve what I want?</p>
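One vectorised approach, sketched on a trimmed version of the example data: mark non-positive wages as missing, align both frames on <code>(ASSOCIATE_ID, COUNTRY)</code>, and let <code>DataFrame.update(..., overwrite=False)</code> fill only the NaN cells - no Python-level loop, and rows in the integration file with no match in the cleaned frame are simply ignored.

```python
import numpy as np
import pandas as pd

df_cleaned = pd.DataFrame({
    "ASSOCIATE_ID": [1, 2, 3],
    "COUNTRY": ["USA", "USA", "NLD"],
    "HOURLY_BASE_RATE": [15.0, np.nan, 0.0],
    "CURRENCY": ["USD", None, "EUR"],
})
df_integration_wages = pd.DataFrame({
    "ASSOCIATE_ID": [2, 3],
    "COUNTRY": ["USA", "NLD"],
    "HOURLY_BASE_RATE": [23.0, 20.0],
    "CURRENCY": ["USD", "EUR"],
})

# Treat non-positive wages as missing so update() fills them too
df_cleaned.loc[df_cleaned["HOURLY_BASE_RATE"] <= 0, "HOURLY_BASE_RATE"] = np.nan

left = df_cleaned.set_index(["ASSOCIATE_ID", "COUNTRY"])
right = df_integration_wages.set_index(["ASSOCIATE_ID", "COUNTRY"])
left.update(right, overwrite=False)   # fills only NaN cells, leaves the rest
df_cleaned = left.reset_index()

print(df_cleaned["HOURLY_BASE_RATE"].tolist())
```

With the full column set, select the wage columns on <code>right</code> (e.g. <code>right[cols]</code>) so that <code>update</code> only touches the columns you intend to fill.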
<python><pandas><numpy><merge>
2023-03-23 08:34:18
2
338
MKJ
75,820,497
10,284,437
No module named 'upwork.routers'
<p>I have this simple imports that is failing with</p> <pre><code> from upwork.routers.jobs import search ModuleNotFoundError: No module named 'upwork.routers' </code></pre> <p>Ref: <a href="https://upwork.github.io/python-upwork-oauth2/routers/jobs/search.html" rel="nofollow noreferrer">https://upwork.github.io/python-upwork-oauth2/routers/jobs/search.html</a><br /> <a href="https://developers.upwork.com/?lang=python#jobs" rel="nofollow noreferrer">https://developers.upwork.com/?lang=python#jobs</a></p> <p>Code:</p> <pre><code>#!/usr/bin/env python import upwork from upwork.routers.jobs import search </code></pre> <p>I did</p> <pre><code>$ pip install upwork $ pip list | grep upwork upwork 1.0.22 </code></pre> <p>What's wrong?</p>
<python><python-3.x><upwork-api>
2023-03-23 08:05:52
2
731
MΓ©vatlavΓ© Kraspek
75,820,434
12,931,358
How to create a high dimension tensor with fixed shape and dtype?
<p>I want to return a tensor with a fixed shape, like <code>torch.Size([1,345])</code>. However, when I input</p> <pre><code>import torch pt1 = torch.tensor(data=(1, 345), dtype=torch.int64) </code></pre> <p>it only returns <code>torch.Size([2])</code>.</p> <p>I followed some tensor tutorials and tried</p> <pre><code>pt1 = torch.tensor(1, 345, dtype=torch.int64) pt1 = torch.tensor((1, 345), dtype=torch.int64) pt1 = torch.tensor(shape=(1, 345), dtype=torch.int64) </code></pre> <p>It still shows errors like <code>tensor() takes 1 positional argument but 2 were given</code>. I know that in some of these calls (1, 345) means the data, not the shape, but I am a novice in PyTorch and still haven't found the solution.</p>
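The core of the confusion, sketched below: <code>torch.tensor(data=(1, 345))</code> builds a tensor <em>containing</em> the values 1 and 345, hence shape <code>[2]</code>. To allocate by shape, use a factory function that takes a size argument instead.

```python
import torch

# Allocate by shape, not by data:
pt1 = torch.zeros((1, 345), dtype=torch.int64)
print(pt1.shape)  # torch.Size([1, 345])

# torch.randint also takes a size, matching the question's other snippets:
pt2 = torch.randint(1, 512, (1, 345), dtype=torch.int64)
print(pt2.shape)  # torch.Size([1, 345])
```

Other shape-taking factories include <code>torch.ones</code>, <code>torch.empty</code>, and <code>torch.full</code>; <code>torch.tensor</code> is reserved for wrapping existing data.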
<python><pytorch>
2023-03-23 07:59:37
3
2,077
4daJKong
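For reference, a sketch of the distinction this question runs into: `torch.tensor` always interprets its first argument as data, while the factory functions take sizes. (Assumes only that PyTorch is installed.)

```python
import torch

# torch.tensor(data=(1, 345)) builds a 1-D tensor whose *data* is (1, 345),
# hence torch.Size([2]). To get a tensor whose *shape* is (1, 345), pass the
# sizes to a factory function instead:
pt1 = torch.zeros((1, 345), dtype=torch.int64)    # all zeros
pt2 = torch.full((1, 345), 7, dtype=torch.int64)  # filled with a constant
print(pt1.shape)  # torch.Size([1, 345])
```

`torch.ones` and `torch.empty` follow the same size-first calling convention.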
75,820,215
5,747,326
Pagination using python dictionaries
<p>What is the ideal and most performant way to paginate from a dictionary with lists of dicts? For example if below is my dictionary:</p> <pre><code>items = { &quot;device-1&quot;: [ {&quot;a&quot;: 1, &quot;b&quot;: 15}, {&quot;a&quot;: 11, &quot;b&quot;: 25}, {&quot;a&quot;: 21, &quot;b&quot;: 35}, {&quot;a&quot;: 31, &quot;b&quot;: 45}, ], &quot;device-2&quot;: [ {&quot;a&quot;: 100, &quot;b&quot;: 150}, {&quot;a&quot;: 110, &quot;b&quot;: 250}, {&quot;a&quot;: 210, &quot;b&quot;: 350}, {&quot;a&quot;: 310, &quot;b&quot;: 450}, ] } </code></pre> <p>If <code>Page:1</code> and <code>PageSize:3</code>, the response should be:</p> <pre><code>items = { &quot;device-1&quot;: [ {&quot;a&quot;: 1, &quot;b&quot;: 15}, {&quot;a&quot;: 11, &quot;b&quot;: 25}, {&quot;a&quot;: 21, &quot;b&quot;: 35}, ] } </code></pre> <p>If <code>Page:2</code> and <code>PageSize:3</code>, the response should be like:</p> <pre><code>items = { &quot;device-1&quot;: [ {&quot;a&quot;: 31, &quot;b&quot;: 45}, ], &quot;device-2&quot;: [ {&quot;a&quot;: 100, &quot;b&quot;: 150}, {&quot;a&quot;: 110, &quot;b&quot;: 250}, ] } </code></pre>
<python>
2023-03-23 07:30:33
1
746
N Raghu
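One straightforward answer sketch: flatten the per-device lists in insertion order, slice out the requested window, and regroup. (Function and variable names here are illustrative, not from the question.)

```python
from itertools import chain, islice

def paginate(items, page, page_size):
    # Flatten to (device, record) pairs in insertion order, slice the
    # requested window lazily, then regroup by device for the response.
    flat = chain.from_iterable(
        ((device, rec) for rec in recs) for device, recs in items.items()
    )
    start = (page - 1) * page_size
    result = {}
    for device, rec in islice(flat, start, start + page_size):
        result.setdefault(device, []).append(rec)
    return result

items = {
    "device-1": [{"a": 1, "b": 15}, {"a": 11, "b": 25},
                 {"a": 21, "b": 35}, {"a": 31, "b": 45}],
    "device-2": [{"a": 100, "b": 150}, {"a": 110, "b": 250},
                 {"a": 210, "b": 350}, {"a": 310, "b": 450}],
}
print(paginate(items, 2, 3))
```

Because the flattening and slicing are generator-based, pages near the front are produced without materializing the whole collection.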
75,820,210
7,556,646
Call SciPy from MATLAB using Python
<p>I would like to compute the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html" rel="nofollow noreferrer">inverse CDF of Student T distribution</a> using MATLAB without the statistic toolbox but with the help of Python and SciPy, see as well <a href="https://stackoverflow.com/a/20627638/7556646">https://stackoverflow.com/a/20627638/7556646</a>.</p> <p>The following code running from MATLAB R2023a using Python 3.9.13</p> <pre><code>py.scipy.stats.t().ppf(0.975, df=4) </code></pre> <p>gives me the following error:</p> <pre><code>Error using _distn_infrastructure&gt;__init__ Python Error: TypeError: _parse_args() missing 1 required positional argument: 'df' Error in _distn_infrastructure&gt;freeze (line 824) Error in _distn_infrastructure&gt;__call__ (line 829) </code></pre> <p>The argument <code>df</code> is provided but not recognized. I don't understand why? See as well <a href="https://ch.mathworks.com/help/matlab/matlab_external/python-function-arguments.html" rel="nofollow noreferrer">https://ch.mathworks.com/help/matlab/matlab_external/python-function-arguments.html</a></p> <p>For a normal distribution I can use SciPy from MATLAB:</p> <pre><code>py.scipy.stats.norm().ppf(0.975) </code></pre> <p>returns <code>1.9600</code> as expected.</p> <p>In Python I can do it:</p> <pre><code>&gt;&gt;&gt; scipy.stats.t.ppf(0.975, df=4) 2.7764451051977987 </code></pre>
<python><matlab><scipy>
2023-03-23 07:29:57
1
1,684
Wollmich
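On the Python side, `df` can be passed positionally to `t.ppf`, which sidesteps keyword handling entirely; whether the equivalent positional call works from MATLAB (something like `py.scipy.stats.t.ppf(0.975, py.int(4))`) is an assumption I have not verified from MATLAB itself. A SciPy-only sketch:

```python
from scipy import stats

# Passing the shape parameter positionally avoids the keyword-argument
# plumbing that the MATLAB-Python bridge appears to mishandle here.
value = stats.t.ppf(0.975, 4)
print(value)  # roughly 2.7764
```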
75,820,131
14,427,714
How to ignore this browser may not be secure using python selenium?
<p>I'm trying to automate testing of email login using Python and Selenium. Specifically, I want to test different email addresses and passwords to see if they work. However, I'm running into an issue where the browser shows &quot;not secure&quot; when I try to sign in to Gmail.</p> <p>Here's the code I'm using:</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from time import sleep import pandas as pd from datetime import date, datetime from undetected_chromedriver import Chrome, ChromeOptions class MailGrabber(Chrome): def __init__(self, driver=&quot;C:\\chromedriver_win32\\chromedriver.exe&quot;, teardown=False): chrome_options = ChromeOptions() chrome_options.add_argument(&quot;--disable-web-security&quot;) chrome_options.add_argument(&quot;--ignore-certificate-errors&quot;) chrome_options.add_argument('--ignore-ssl-errors') chrome_options.add_argument(&quot;--no-sandbox&quot;) chrome_options.add_argument(&quot;--allow-running-insecure-content&quot;) self.driver = driver self.teardown = teardown super(MailGrabber, self).__init__(options=chrome_options, executable_path=driver) self.implicitly_wait(30) # def __exit__(self, exc_type, exc_val, exc_tb): # if self.teardown: # self.quit() def go_to_gmail(self, url): if url is not None: self.get(url) else: return None def show_intro(self): text = &quot;WELCOME TO MAILGRABBER&quot; headline = text.upper().center(80) print('\033[32m' + headline + '\033[0m') def instructions(self): text = &quot;DISCLAIMER&quot; print(text.upper().center(80)) print(&quot;1. Don't use same number everytime&quot;) print(&quot;2. Don't give a long try! It may causes issue&quot;) print(&quot;3. 
Happy Hacking!&quot;) print(&quot;&quot;) print(&quot;&quot;) def solve_captcha(self): pass def convert_to_dataframe(self, s_arr, f_arr): data_frame = pd.DataFrame({ &quot;Email&quot;: s_arr, &quot;Password&quot;: f_arr }) data_frame.to_csv(f&quot;{datetime.now}.csv&quot;, index=False) def start_grabbing(self, x_path, number, amount, length_of_password): arr = [] pass_arr = [] success_email = [] password_url = &quot;https://accounts.google.com/v3/signin/challenge/pwd?TL=ALbfvL3NQ06cSKbTPLbo&quot; \ &quot;-N_E8IBvg6pv5MGySGo7u-msP7SY7thja3BflZNMQP4p&amp;checkConnection=youtube%3A220%3A0&amp;checkedDomains&quot; \ &quot;=youtube&amp;cid=2&amp;continue=https%3A%2F%2Fmail.google.com&amp;dsh=S-1727980091%3A1678894620075701&quot; \ &quot;&amp;flowEntry=AddSession&amp;flowName=GlifWebSignIn&amp;hl=en&amp;pstMsg=1&amp;service=mail&amp;authuser=0 &quot; gmail_url = &quot;https://mail.google.com/mail/u/0/#inbox&quot; wait = WebDriverWait(self, 25) email = wait.until(EC.element_to_be_clickable((By.XPATH, x_path))) sleep(2) submit_email = wait.until(EC.element_to_be_clickable(( By.XPATH, &quot;//span[normalize-space()='Next']&quot; ))) sleep(2) email.click() i = 1 k = 1 while i &lt;= amount: i += 1 number = int(number) + 1 number_str = &quot;0{}&quot;.format(number) arr.append(number_str) pass_arr.append(str(number)[:length_of_password]) print(&quot;Numbers for bypassing Email =&gt; &quot; + str(arr)) print(&quot;Chosen Passwords =&gt; &quot; + str(pass_arr)) while k &lt;= amount: k += 1 for j in arr: email.send_keys(j) sleep(2) submit_email.click() print(&quot;Tried : &quot; + str(j)) if self.current_url == password_url: print(&quot;One email worked&quot;) success_email.append(j) print(&quot;Success Email&quot; + str(j)) # pass password from here for z in pass_arr: password = wait.until(EC.element_to_be_clickable(( By.XPATH, &quot;//input[@name='Passwd']&quot; ))) password.click() password.send_keys(z) if self.current_url == gmail_url: self.convert_to_dataframe(arr, pass_arr) 
sleep(5) email.clear() break print(&quot;Numbers logs = &gt; &quot;, arr) print(&quot;Testing all the numbers....&quot;) print(&quot;Tried =&gt; &quot;, len(arr), &quot;times&quot;) # for testing numbers # +18327321445 # +13125860756 # +17322496289 # +19566222852 # +12295853598 # +13127424480 # can target any email from any country </code></pre> <p>I tried so many options still getting this error: This browser may not be secured.</p>
<python><selenium-webdriver>
2023-03-23 07:18:08
0
549
Sakib ovi
75,819,830
1,342,522
Pytest Mocker dynamically created instance
<p>I am writing a unit test that verifies when a method from an instance of a class throws a <code>ValueError</code> or <code>KeyError</code> a certain event is raised.</p> <p>Usually when I do this for regular methods, I do a</p> <pre><code>mocker.patch('module.method',side_effect=ValueError()) </code></pre> <p>and that works fine. But this is not working for a class that I generate.</p> <p>The code for generation is something like this:</p> <pre><code>provider = clients.get_provider(provider_name) try: provider.generate_relationships() except (ValueError, KeyError) as e: raise RelationshipGenerationError(e) </code></pre> <p>In this, clients is an instance that is a global variable that contains an array of provider instances. The <code>get_provider</code> method returns a provider instance from the clients array of instances.</p> <p>I've tried a few different things, most trying to do the same thing:</p> <ol> <li><p>creating a fixture that returns an instance of a class, injecting that fixture into my unit test, and then using <code>mocker.patch.object(provider, 'generate_relationships', side_effect=ValueError)</code></p> </li> <li><p>fixture injection, <code>provider.generate_relationships = MagicMock(side_effect=ValueError())</code></p> </li> <li><p>This:</p> <pre><code>with mocker.patch(&quot;module.provider&quot;, return_value=provider_instance_fixture): with pytest.raises(RelationshipGenerationError): generate_relationships() </code></pre> </li> </ol> <p>I'm expecting the code <code>provider.generate_relationships()</code> to throw a <code>ValueError</code> but the method gets executed. MrBean Bremen made a comment that I think is correct. Because my code generates that provider globally before this particular code gets executed, the mock can't patch it because it already exists.</p> <p>Is there a way to force pytest to treat any instance of <code>Provider</code> as a mocked instance?</p>
<python><pytest>
2023-03-23 06:30:27
0
1,385
JakeHova
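One pattern that usually covers the "instance already exists" case: patch the method on the class rather than on an instance. Attribute lookup on existing instances goes through the class, so even a provider created at import time will raise. A sketch with a hypothetical stand-in `Provider` (the real class lives in the asker's module):

```python
from unittest import mock

class Provider:                      # hypothetical stand-in for the real class
    def generate_relationships(self):
        return "ok"

provider = Provider()                # created "globally", before any patching

def run():
    # Patching the *class* affects every instance, including pre-existing
    # ones fetched from a global registry like the asker's `clients`.
    with mock.patch.object(Provider, "generate_relationships",
                           side_effect=ValueError("boom")):
        try:
            provider.generate_relationships()
            return "no error"
        except ValueError:
            return "raised"

print(run())  # raised
```

With pytest-mock, the same idea is `mocker.patch.object(Provider, 'generate_relationships', side_effect=ValueError)`, passing the class, not the instance.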
75,819,690
9,699,634
Error :'int' object is not subscriptable, while displying output from SQLite to Python code
<p>I am using SQLite in Python to store and retrieve data.</p> <p>This is my Python code:</p> <pre><code># Read API key API_KEY = os.environ[&quot;API_KEY&quot;] # Create bot object bot = telebot.TeleBot(API_KEY) # Connect to database conn = sqlite3.connect('message_tracker.db', check_same_thread=False) c = conn.cursor() # Create messages table if it does not exist c.execute('''CREATE TABLE IF NOT EXISTS messages (group_id integer, user_id integer, message_text text, message_date DATETIME)''') # Function to get message count for a given time period def get_message_count(chat_id, start_date, end_date): c.execute(&quot;SELECT count(*) FROM messages WHERE group_id=? AND message_date BETWEEN ? AND ? GROUP BY user_id&quot;, (chat_id, start_date, end_date)) result = c.fetchall() for row in result: output = f&quot;\n\t{row[0]}: {row[1]}&quot; print(output) return output # Handler for '/dcount' command @bot.message_handler(commands=['dcount']) def handle_daily_count(message): chat_id = message.chat.id user_id = message.from_user.id date_today = datetime.now().strftime('%Y-%m-%d') message_count = get_message_count(chat_id, user_id, date_today, date_today) print(date_today) print(chat_id) print(user_id) bot.reply_to(message, f&quot;{message.chat.title} - {message.from_user.first_name}: {message_count} messages today.&quot;) # Handler for '/wcount' command @bot.message_handler(commands=['wcount']) def handle_weekly_count(message): chat_id = message.chat.id user_id = message.from_user.id date_today = datetime.now() start_date = (date_today - timedelta(days=date_today.weekday())).strftime('%Y-%m-%d') end_date = (date_today + timedelta(days=6-date_today.weekday())).strftime('%Y-%m-%d') message_count = get_message_count(chat_id, user_id, start_date, end_date) bot.reply_to(message, f&quot;{message.chat.title} - {message.from_user.first_name}: {message_count} messages this week.&quot;) # Handler for '/mcount' command @bot.message_handler(commands=['mcount']) def 
handle_monthly_count(message): chat_id = message.chat.id user_id = message.from_user.id date_today = datetime.now() start_date = date_today.replace(day=1).strftime('%Y-%m-%d') end_date = date_today.replace(day=1) + timedelta(days=32) end_date = (end_date.replace(day=1) - timedelta(days=1)).strftime('%Y-%m-%d') message_count = get_message_count(chat_id, user_id, start_date, end_date) bot.reply_to(message, f&quot;{message.chat.title} - {message.from_user.first_name}: {message_count} messages this month.&quot;) # Handler for tracking messages @bot.message_handler(func=lambda message: True) def track_message(message): chat_id = message.chat.id user_id = message.from_user.id message_text = message.text if len(message_text.split()) &gt; 2: message_date = datetime.now().strftime('%Y-%m-%d') print(message_date) c.execute(&quot;INSERT INTO messages VALUES (?, ?, ?, ?)&quot;, (chat_id, user_id, message_text, message_date)) conn.commit() print(message_date) print(chat_id) print(user_id) else: bot.reply_to(message, &quot;Please use more than 3 words.&quot;) # Start the bot bot.polling() </code></pre> <p>I have added print to check what I am storing into the database, which is showing the correct information.</p> <p>I am able to store data successfully, but when I try to get counts, it showing below error</p> <pre><code> File &quot;main.py&quot;, line 95, in &lt;module&gt; bot.polling() File &quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/__init__.py&quot;, line 1043, in polling self.__threaded_polling(non_stop=non_stop, interval=interval, timeout=timeout, long_polling_timeout=long_polling_timeout, File &quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/__init__.py&quot;, line 1118, in __threaded_polling raise e File &quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/__init__.py&quot;, line 1074, in __threaded_polling self.worker_pool.raise_exceptions() File 
&quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/util.py&quot;, line 148, in raise_exceptions raise self.exception_info File &quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/util.py&quot;, line 91, in run task(*args, **kwargs) File &quot;/home/runner/SupremeBotpy/venv/lib/python3.10/site-packages/telebot/__init__.py&quot;, line 6428, in _run_middlewares_and_handler result = handler['function'](message) File &quot;main.py&quot;, line 45, in handle_daily_count output += f&quot;{row[0]}: {row[1]}&quot; </code></pre>
<python><database><sqlite><sqlite3-python>
2023-03-23 06:06:14
0
1,702
VikaS GuttE
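One likely contributor to the failure in the question above: `SELECT count(*) ... GROUP BY user_id` yields one-element rows, so `row[1]` does not exist (the 3-vs-4 argument call to `get_message_count` is a separate issue). Selecting `user_id` alongside the count is the usual fix; a self-contained sketch with an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE messages
             (group_id INTEGER, user_id INTEGER,
              message_text TEXT, message_date TEXT)""")
c.executemany("INSERT INTO messages VALUES (?, ?, ?, ?)",
              [(1, 10, "hello there friends", "2023-03-23"),
               (1, 10, "some more words here", "2023-03-23"),
               (1, 11, "a different user", "2023-03-23")])

# Select user_id alongside count(*) so each row is (user_id, count);
# a bare count(*) produces one-element rows, so row[1] fails.
c.execute("""SELECT user_id, count(*) FROM messages
             WHERE group_id=? AND message_date BETWEEN ? AND ?
             GROUP BY user_id ORDER BY user_id""",
          (1, "2023-03-23", "2023-03-23"))
result = c.fetchall()
print(result)  # [(10, 2), (11, 1)]
```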
75,819,553
9,768,643
How to change the file name of automated chrome download option via python selinium
<p>I need to save the file under a specific name, changing the default name that pops up when the download button is clicked via Python Selenium.</p> <p>I tried the below</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By import time options = webdriver.ChromeOptions() prefs = {&quot;download.prompt_for_download&quot;:True, &quot;download.default_directory&quot;: r&quot;C:\Music\\&quot;,#IMPORTANT - ENDING SLASH V IMPORTANT &quot;directory_upgrade&quot;: True} options.add_experimental_option(&quot;prefs&quot;, prefs) driver =webdriver.Chrome(executable_path=r&quot;.\bot_automation\drivers\chromedriver.exe&quot;,chrome_options=options) driver.get('https://www.browserstack.com/test-on-the-right-mobile-devices') gotit = driver.find_element(By.ID,'accept-cookie-notification') gotit.click() downloadcsv = driver.find_element(By.CSS_SELECTOR,'.icon-csv') downloadcsv.click() #the below codes are not working need to replace that file_window=driver.switch_to.window(driver.window_handles[1]) file_window.clear() file_window.send_keys(&quot;Hii text&quot;) #till this and also need to send enter to save the file too time.sleep(5) # driver.close() # closes currently active window time.sleep(5) driver.close() </code></pre> <p>then the below popup opens; I need to change the file name and save it</p> <p>Any help would be appreciated</p> <p><a href="https://i.sstatic.net/92hLv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/92hLv.png" alt="enter image description here" /></a></p>
<python><selenium-webdriver>
2023-03-23 05:38:37
1
836
abhi krishnan
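Chrome's prefs don't expose a per-download file name, and Selenium cannot drive the native save dialog. A common workaround (a sketch with hypothetical helper and directory names) is to download silently (`download.prompt_for_download: False`) and rename the newest finished file afterwards:

```python
import glob
import os
import time

def rename_latest_download(download_dir, new_name, timeout=30):
    """Wait for a finished download in download_dir, then rename it."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        finished = [f for f in glob.glob(os.path.join(download_dir, "*"))
                    if not f.endswith(".crdownload")]  # skip in-progress files
        if finished:
            latest = max(finished, key=os.path.getmtime)
            target = os.path.join(download_dir, new_name)
            os.rename(latest, target)
            return target
        time.sleep(0.5)
    raise TimeoutError("no completed download appeared in time")
```

After `downloadcsv.click()`, a call like `rename_latest_download(r"C:\Music", "report.csv")` would give the file the desired name.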
75,819,291
1,358,829
Keras: time per step increases with a filter on the number of samples, epoch time continues the same
<p>I'm implementing a simple sanity check model on Keras for some data I have. My training dataset is comprised of about 550 files, and each contributes to about 150 samples. Each training sample has the following signature:</p> <pre class="lang-py prettyprint-override"><code>({'input_a': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None), 'input_b': TensorSpec(shape=(None, 900, 1), dtype=tf.float64, name=None)}, TensorSpec(shape=(None, 1), dtype=tf.int64, name=None) ) </code></pre> <p>Essentially, each training sample is made up of two inputs with shape (900, 1), and the target is a single (binary) label. The first step of my model is a concatenation of inputs into a (900, 2) Tensor.</p> <p>The total number of training samples is about 70000.</p> <p>As input to the model, I'm creating a tf.data.Dataset, and applying a few preparation steps:</p> <ol> <li><code>tf.Dataset.filter</code>: to filter some samples with invalid labels</li> <li><code>tf.Dataset.shuffle</code></li> <li><code>tf.Dataset.filter</code>: <strong>to undersample my training dataset</strong></li> <li><code>tf.Dataset.batch</code></li> </ol> <p>Step 3 is the most important in my question. To undersample my dataset I apply a simple function:</p> <pre class="lang-py prettyprint-override"><code>def undersampling(dataset: tf.data.Dataset, drop_proba: Iterable[float]) -&gt; tf.data.Dataset: def undersample_function(x, y): drop_prob_ = tf.constant(drop_proba) idx = y[0] p = drop_prob_[idx] v = tf.random.uniform(shape=(), dtype=tf.float32) return tf.math.greater_equal(v, p) return dataset.filter(undersample_function) </code></pre> <p>Essentially, the function accepts a a vector of probabilities <code>drop_prob</code> such that <code>drop_prob[l]</code> is the probability of dropping a sample with label <code>l</code> (the function is a bit convoluted, but it's the way I found to implement it as <code>Dataset.filter</code>). 
Using equal probabilities, say <code>drop_prob=[0.9, 0.9]</code>, I`ll be dropping about 90% of my samples.</p> <p>Now, the thing is, I've been experimenting with different undersamplings for my dataset, in order to find a sweet spot between performance and training time, but when I undersample, <strong>the epoch duration is the same, with time/step increasing instead</strong>.</p> <p>Keeping my <code>batch_size</code> fixed at 20000, for the complete dataset I have a total of 4 batches, and the following time for an average epoch:</p> <pre><code>Epoch 4/1000 1/4 [======&gt;.......................] - ETA: 9s 2/4 [==============&gt;...............] - ETA: 5s 3/4 [=====================&gt;........] - ETA: 2s 4/4 [==============================] - ETA: 0s 4/4 [==============================] - 21s 6s/step </code></pre> <p>While if I undersample my dataset with a <code>drop_prob = [0.9, 0.9]</code> (That is, I'm getting rid of about 90% of the dataset), and keeping the same <code>batch_size</code> of 20000, I have 1 batch, and the following time for an average epoch:</p> <pre><code>Epoch 4/1000 1/1 [==============================] - ETA: 0s 1/1 [==============================] - 22s 22s/step </code></pre> <p>Notice that while the number of batches is only 1, the epoch time is the same! It just takes longer to process the batch.</p> <p>Now, as a sanity check, I tried a different way of undersampling, by filtering the files instead. So I selected about 55 of the training files (10%), to have a similar number of samples in a single batch, and removed the undersampling from the <code>tf.Dataset</code>. 
The epoch time decreases as expected:</p> <pre><code>Epoch 4/1000 1/1 [==============================] - ETA: 0s 1/1 [==============================] - 2s 2s/step </code></pre> <p>Note that the original dataset has 70014 training samples, while the undersampled dataset by means of tf.Dataset.filter had 6995 samples and the undersampled dataset by means of file filtering had 7018 samples, thus the numbers are consistent.</p> <p>Much faster. In fact, it takes about 10% of the time as the epoch takes with the full dataset. So there is an issue with the way I'm performing undersampling (by using <code>tf.data.Dataset.filter</code>) when creating the <code>tf.Dataset</code>, and I would like to ask for help figuring out what the issue is. Thanks.</p>
<python><tensorflow><machine-learning><keras>
2023-03-23 04:36:41
2
1,232
Alb
75,819,271
5,420,846
Dash Extendable Graph -- use extendData without Interval
<p>I haven't seen this question asked in the exact same way online, so I'm gonna try to ask this in first principles (at least as far as Dash first principles go)</p> <p>I have a websocket feed streaming data from the internet. I'll get data from it, and sometimes I'll use that data to update my plot with a new data point.</p> <p>I am trying to use the <code>@app.callback(Output('graph', 'extendData')]</code> approach to do so.</p> <h2>What I've Tried:</h2> <p>I'm starting off with the code <a href="https://github.com/bcliang/dash-extendable-graph" rel="nofollow noreferrer">from this Github repo</a>. The same code is also pasted below:</p> <pre><code>import dash_extendable_graph as deg import dash from dash.dependencies import Input, Output, State import dash_html_components as html import dash_core_components as dcc import random app = dash.Dash(__name__) app.scripts.config.serve_locally = True app.css.config.serve_locally = True app.layout = html.Div([ deg.ExtendableGraph( id='extendablegraph_example', figure=dict( data=[{'x': [0], 'y': [0], 'mode':'lines+markers' }], ) ), dcc.Interval( id='interval_extendablegraph_update', interval=1000, n_intervals=0, max_intervals=-1), html.Div(id='output') ]) @app.callback(Output('extendablegraph_example', 'extendData'), [Input('interval_extendablegraph_update', 'n_intervals')], [State('extendablegraph_example', 'figure')]) def update_extendData(n_intervals, existing): x_new = existing['data'][0]['x'][-1] + 1 y_new = random.random() return [dict(x=[x_new], y=[y_new])], [0], 100 if __name__ == '__main__': app.run_server(debug=True) </code></pre> <p>The code above just adds a new random point to the graph every 1 second. It uses the interval object to do so.</p> <p>I'm trying to combine it with my callback function from my websocket code, which parses and prints a message in a callback method. Shown below, simplified:</p> <pre><code>def websocket_callback(msg): # # # do some message parsing... 
# (for now let's just assume casting to int is all that's needed) # new_point = int(msg) print(f&quot;Just received: {new_point}&quot;) ws = websocket.WebSocketApp(url, on_message=websocket_callback) wst = threading.Thread(target=ws.run_forever) wst.daemon = True wst.start() while True: time.sleep(1) # Wait for infinite websocket read </code></pre> <p>What I tried to do is just slap the decorator on top of the <code>websocket_callback</code> method as so:</p> <pre><code> counter = 0 @app.callback(Output('extendablegraph_example', 'extendData'), [Input('interval_extendablegraph_update', 'n_intervals')], [State('extendablegraph_example', 'figure')]) def update_extendData(n_intervals, existing, msg): # # # do some message parsing... # (for now let's just assume casting to int is all that's needed) # global counter counter +=1 new_point = int(msg) return [dict(x=[counter], y=[new_point])], [0], 100 </code></pre> <p>The problem with that is that the function is now being called by the websocket feed <em>and</em> the <code>Interval</code> object from the example.</p> <p>This code doesn't work for a lot of reasons, and I got stuck here</p> <hr /> <h3>Solution?</h3> <p>Is there a way to manually trigger an <code>extendData</code> event without needing the callback? I'm looking for a way to separate both parts, the websocket parsing part and the graph refresh/update part. Something like:</p> <pre><code> @app.callback(Output('graph', 'extendData')) def update_graph(new_x, new_y): return [dict(x=[new_x], y=[new_y])], [0], 1000 counter = 0 def websocket_callback(msg): # # # do some message parsing... # (for now let's just assume casting to int is all that's needed) # global counter new_point = int(msg) update_graph(counter, new_point) counter +=1 </code></pre> <p>that way I parse my message in <code>websocket_callback</code> and trigger a refresh of the graph manually.</p> <p>Is there a way to do this? 
Running the code above (adapted to my setup) complains with the following error message:</p> <pre><code>dash.exceptions.CallbackException: Inputs do not match callback definition </code></pre> <p>I'm assuming this is because the <code>app.callback</code> decorator doesn't understand the two x,y inputs. So I'm lost...</p>
<python><websocket><callback><plotly><plotly-dash>
2023-03-23 04:29:58
0
1,988
Joe
75,819,187
19,003,861
Prevent creation of new model object if another one was created less than 24 hours ago in Django
<p>I am trying to prevent a model object from being created if another one was created, say, less than 24 hours ago.</p> <p>There is a trick however. This rule does not apply to ALL model objects and will depend on a set of rules laid out in a different model.</p> <p>To be more specific, if <code>Venue_1</code> has a <code>RuleModel</code> set to 1, then when creating a new <code>MyModel object</code> the database would check if a previous <code>MyModel object</code> had been created within less than 1 day ago. If true then new <code>MyModel object</code> is not saved, if false <code>MyModel object</code> is saved.</p> <p>However if Venue_2 does not have any rule, then MyModel objects can be created without any time restrictions.</p> <p>After some research on this forum, it looks like I need to override the save function in the model.</p> <p>This is not working at present. No error message.</p> <p><strong>models.py</strong></p> <pre><code>class Venue(models.Model): name = models.CharField(verbose_name=&quot;Name&quot;,max_length=100, blank=True) class RuleModel(models.Model): venue = models.ForeignKey(Venue, null = True, blank= True, on_delete=models.CASCADE) points = models.IntegerField(verbose_name=&quot;loylaty_points_rule&quot;, null = True, blank=True) timestamp = models.DateTimeField(auto_now_add=True) timeframe = models.IntegerField(verbose_name='timeframe during which more points cannot be reclaimed', null=True, blank=True) class MyModel(models.Model): user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE) venue = models.ForeignKey(Venue, blank=True, null=True, on_delete=models.CASCADE) add_points = models.DecimalField(name = 'add_points', null = True, blank=True, default=0, decimal_places=2, max_digits=50) qr_code_venue = models.ImageField(upload_to='qr_codes_venue', blank=True) timestamp = models.DateTimeField(auto_now_add=True, null=True, blank=True) def save(self, *args, **kwargs): rule_model = RuleModel() for rulemodel in
rule_model: if rulemodel.timeframe: try: date_from = datetime.datetime.now() - datetime.timedelta(days=rulemodel.timeframe) MyModel.objects.get(timestamp__gte=date_from) #raise some save error except MyModel.DoesNotExist: super(MyModel,self).save(*args,**kwargs) </code></pre>
<python><django><django-models>
2023-03-23 04:11:43
1
415
PhilM
75,819,169
5,029,589
Get Closest match for a column in data frame
<p>I have a DataFrame which contains different call types as below values</p> <pre><code> CallType 0 IN 1 OUT 2 a_in 3 asms 4 INCOMING 5 OUTGOING 6 A2P_SMSIN 7 ain 8 aout </code></pre> <p>I want to map this in such a way the output would be</p> <pre><code> CallType 0 IN 1 OUT 2 IN 3 SMS 4 IN 5 OUT 6 SMS 7 IN 8 OUT </code></pre> <p>I am trying to use difflib.get_close_matches but it gives no result. Below is my code</p> <pre><code>CALL_TYPE=['IN','OUT','SMS','VOICE','SMT'] def test1(): final_file_data = pd.DataFrame({ 'CallType': ['IN', 'OUT', 'a_in', 'asms', 'INCOMING', 'OUTGOING','A2P_SMSIN', 'ain', 'aout']}) print(final_file_data) final_file_data['CallType'] = final_file_data['CallType'].apply(lambda x: difflib.get_close_matches(x, CALL_TYPE, n=1)) </code></pre> <p>The output I get is below which has results only for IN and OUT</p> <pre><code> CallType 0 [IN] 1 [OUT] 2 [] 3 [] 4 [] 5 [] 6 [] 7 [] 8 [] </code></pre> <p>I am not sure where I am going wrong.</p>
<python><pandas><dataframe>
2023-03-23 04:03:52
1
2,174
arpit joshi
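difflib's similarity scores are case-sensitive and these strings are short, so `a_in` vs `IN` scores 0 and `get_close_matches` returns nothing. A hedged alternative sketch (one possible mapping rule, not the only one): normalize case and match by substring, ordering the patterns so `SMS` wins before `OUT`, and `OUT` before `IN` (note that `OUTGOING` contains the substring `IN`):

```python
# Order matters: "A2P_SMSIN" must map to SMS, and "OUTGOING" contains "IN",
# so check SMS first, then OUT, then IN.
CALL_TYPE_PATTERNS = [("SMS", "SMS"), ("OUT", "OUT"), ("IN", "IN")]

def normalize_call_type(raw):
    s = raw.strip().upper()
    for pattern, label in CALL_TYPE_PATTERNS:
        if pattern in s:
            return label
    return raw  # leave unrecognized values untouched

values = ["IN", "OUT", "a_in", "asms", "INCOMING", "OUTGOING",
          "A2P_SMSIN", "ain", "aout"]
print([normalize_call_type(v) for v in values])
```

With pandas, `final_file_data['CallType'].apply(normalize_call_type)` then produces the desired column.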
75,818,988
2,216,108
How to access the current value of a multiprocessing.Value in the main process?
<p>CODE:</p> <pre><code>from multiprocessing import Process, Value import ctypes import time new_value = Value(ctypes.c_wchar_p, &quot;abc&quot;) def func(): global new_value new_value.value = &quot;xyz&quot; print ('updated value to xyz') class Test(): def __init__(self): global new_value def call(self): time.sleep(2) print (new_value.value) if __name__ == &quot;__main__&quot;: p1 = Process(target=func, args=()) p1.start() x = Test() x.call() p1.join() # x doesn't need to wait for p1 to end but access the current value of new_value </code></pre> <p>OUTPUT:</p> <pre><code>updated value to xyz abc </code></pre> <p>How do I get the current value <code>xyz</code> when accessing new_value.value from <code>x</code> object? I'm trying to access whatever the current value of <code>new_value</code> is.</p>
<python><python-3.x><multiprocessing>
2023-03-23 03:20:13
1
1,242
aste123
75,818,907
10,735,076
Running a python script from inside a CDK stack?
<p>I have an AWS CDK project with the following structure. The top <code>app.py</code> is simply the default CDK entry point, <code>my_Stack.py</code> is where the main Stack and Constructs live. The stack consists of static files from the <code>site_staging</code> folder which is deployed to an S3 bucket and served with a CloudFront deployment.</p> <pre><code>project/ β”œβ”€β”€ app.py β”œβ”€β”€ site_staging/ └── my_stack/ β”œβ”€β”€ my_stack.py └── app/ β”œβ”€β”€ app.py β”œβ”€β”€ Map.py └── freeze.py </code></pre> <p>The problem is that the static files are built using Frozen-Flask from a Flask app that lives in <code>my_stack/app/app.py</code>. Currently I have to manually invoke the static build from the <code>app</code> directory (e.g. <code>python freeze.py</code>).</p> <p>What I want is to have the <code>my_stack.py</code> script run the <code>freeze.py</code> code when I deploy the stack with <code>cdk deploy</code>.</p> <p>I have tried:</p> <ul> <li>moving the <code>freeze.py</code> code into <code>my_stack.py</code> and importing the flask <code>app</code> module directly, but then it can't find <code>Map.py</code> from the subfolder which it needs and can't find (<code>app.py</code> itself uses a simple <code>import Map</code>.)</li> <li>running it as a subprocess, which throws a 500 error that seems to be related to the Flask webserver not launching or binding to the right port</li> </ul> <p>I've simplified things a bit for discussion, the repo is here <a href="https://github.com/anthonymobile/rna-dashboard" rel="nofollow noreferrer">https://github.com/anthonymobile/rna-dashboard</a></p>
<python><flask><aws-cdk>
2023-03-23 02:57:28
0
313
Anthony Townsend
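One sketch that avoids both path hacks and the import problem: run `freeze.py` as a subprocess from the stack code with `cwd` set to the app folder, so its plain `import Map` resolves. Frozen-Flask only writes files and does not need a running server, so no port has to be bound. (The function and paths below are illustrative, not from the repo.)

```python
import subprocess
import sys
from pathlib import Path

def build_static_site(stack_dir: Path) -> Path:
    """Run app/freeze.py in its own folder before synthesizing the stack."""
    app_dir = Path(stack_dir) / "app"
    # cwd=app_dir puts the app folder first on the child's sys.path,
    # so freeze.py's plain `import Map` works without PYTHONPATH edits.
    subprocess.run([sys.executable, "freeze.py"], cwd=app_dir, check=True)
    return app_dir
```

Calling this at the top of the Stack's `__init__` would rebuild the static files on every `cdk deploy` before the S3 deployment construct picks them up.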
75,818,846
5,924,264
A value is trying to be set on a copy of a slice from a DataFrame when there is no copy?
<p>I'm currently getting the warning</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead </code></pre> <p>on this line:</p> <pre><code> df[&quot;next&quot;] = df[ &quot;next&quot;] * df[&quot;curr&quot;].map(lookup) </code></pre> <p>I'm confused about where the &quot;a value is trying to be set on a copy of a slice from a dataframe&quot; is coming from. From what I can see, the LHS is a reference to the column in <code>df</code>, so there are no copies here?</p>
<python><pandas><dataframe>
2023-03-23 02:40:48
1
2,502
roulette01
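The warning is usually about how `df` itself was produced, not the line it fires on: if `df` came from slicing another frame (e.g. a boolean filter), pandas cannot tell whether the assignment writes to a view or a copy. A sketch of the distinction on illustrative data (not the asker's):

```python
import pandas as pd

base = pd.DataFrame({"curr": ["a", "b"], "next": [1, 2]})
lookup = {"a": 10, "b": 20}

# df = base[base["next"] > 0]          # a slice: the assignment below may warn
df = base[base["next"] > 0].copy()     # an explicit copy: no ambiguity, no warning
df["next"] = df["next"] * df["curr"].map(lookup)
print(df["next"].tolist())  # [10, 40]
```

If the intent is instead to write back into the original frame, `base.loc[mask, "next"] = ...` is the pattern the warning message suggests.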
75,818,802
4,174,993
Python Import Path Issue
<p>So, <a href="https://realpython.com/python-import/#basic-python-import" rel="nofollow noreferrer">this article</a> did a very good job explaining paths &amp; imports and the most portable way to handle python projects. I have read it twice but I do not know what I am doing wrong. I decided for this project that, I will not manipulate <code>PYTHONPATH</code> nor use <code>sys</code> to do path hacks (So, please those suggestions are not ideal). I would appreciate suggestions on the recommended way to fix this which should be portable (run locally, in docker, and anywhere) as well.</p> <p>Here is my project structure</p> <pre><code>sample/ src/ __init__.py package1/ __init__.py module1.py module2.py tests/ unit/ __init__.py test_module1.py test_module2.py setup.cfg setup.py </code></pre> <p>In <code>src.package1.module2</code> I have</p> <pre><code>def demo(): return &quot;foo&quot; </code></pre> <p>In <code>src.package1.module1.py</code> I am just importing one method from module2 so I have:</p> <pre><code>from module2 import demo def main(): print(&quot;foo&quot;) def helper1(): return &quot;help&quot; def helper2(): return &quot;bar&quot; if __name__ == &quot;__main__&quot;: main() </code></pre> <p>In <code>tests.unit.test_module1.py</code></p> <pre><code>from src.package1 import module1 @unit.mock.patch(&quot;module1.helper1&quot;) def test1(): pass def test2(): pass </code></pre> <p>I have a <code>setup.cfg</code> like this</p> <pre><code>[metadata] name = my_test version = 1.0.0 description = sample project [options] packages = src </code></pre> <p>And <code>setup.py</code> with this:</p> <pre><code>import setuptools setuptools.setup() </code></pre> <p>After installing the package locally with <code>python3 -m pip install -e .</code> and executing the main method in <code>module1</code> like this: <code>python -m src/package1/module1</code>, everything works fine</p> <p>However, when I run the unit test like this <code>python -m unittest discover -s tests/unit
-p &quot;test_*.py&quot;</code></p> <p>I get <code>ModuleNotFoundError: No module named module1</code>. This is coming from the <code>src.package1.module1</code> file.</p> <p>NB: I intend to run this app in docker and locally for development.</p> <p>My questions are:</p> <ol> <li>What is the best way to organize a Python project with these components, and what is the standard way of managing paths for imports?</li> <li>What am I doing wrong? Honestly, I think the article explained everything, but I am surprised that I still could not figure this out.</li> </ol>
<python><python-3.x><unit-testing>
2023-03-23 02:30:42
1
7,548
papigee
75,818,717
14,325,145
PyQt6 - Automatically Close QMessageBox after N seconds
<p>I am trying to make a QMessageBox automatically close after <strong>3</strong> seconds in PyQt6.</p> <p>Here is my minimal reproducible example, which does not work:</p> <pre><code>import sys from PyQt6.QtWidgets import QApplication, QWidget, QMessageBox from PyQt6.QtCore import QTimer class App(QWidget): def __init__(self): super().__init__() msgBox = QMessageBox.about(self, 'hello', 'Need to automatically close after 3 seconds') QTimer.singleShot(3000, lambda : msgBox.done(0)) app = QApplication(sys.argv) ex = App() sys.exit(app.exec()) </code></pre> <p>If I start the QTimer before I call the QMessageBox, it seems to work but instead crashes the program.</p>
<python><qt><pyqt6>
2023-03-23 02:06:25
2
373
zeroalpha
75,818,679
7,267,480
violin plot with categorization using two different columns of data for "one violin"
<p>I am trying to visualize the distributions of data stored in a dataframe. I have 1000 rows, each with the following columns:</p> <pre><code>sample_id | chi_2_n_est | chi_2_n_theo --------------------------------------- 1 | 1.01 | 1.001 1 | 1.03 | 1.012 ... 2 | 1.11 | 1.04 3 | 1.21 | 1.03 ... </code></pre> <p>I want to display violin plots for the data stored in the columns chi_2_n_est and chi_2_n_theo, but split, so that I can compare the two distributions for each sample_id in the dataframe.</p> <p>Something similar to:</p> <p><a href="https://i.sstatic.net/4lmmM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4lmmM.png" alt="split violin plot example" /></a></p> <p>Here blue would be the distribution for chi_2_n_est, and orange for chi_2_n_theo, for each sample_id.</p>
<python><violin-plot>
2023-03-23 01:54:47
1
496
twistfire
75,818,417
5,924,264
How to merge df2 into df1 and fill in NaNs while merging?
<p>I have 2 dataframes <code>df1, df2</code>.</p> <p><code>df1</code> has the columns <code>[&quot;id&quot;, &quot;time&quot;, &quot;cost&quot;, &quot;quantity]</code>.</p> <p><code>df2</code> has the columns <code>[&quot;id&quot;, &quot;time&quot;, &quot;modified_cost&quot;, &quot;modified_quantity]</code>.</p> <p>I would like to merge <code>df2</code> into <code>df1</code>. Currently I am doing</p> <pre><code>df1 = df1.merge(df2[&quot;id&quot;, &quot;time&quot;, &quot;modified_cost&quot;, &quot;modified_quantity&quot;], on=[&quot;id&quot;, &quot;time&quot;], how=&quot;left&quot;) </code></pre> <p>But this ends up filling in NaNs if <code>df2</code> doesn't have the same set of <code>[&quot;id&quot;, &quot;time&quot;]</code> as <code>df1</code>. Instead of this, I would like for those NaNs to be the unmodified quantities.</p> <p>here's an example:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({&quot;id&quot;: [1,2,3,4], &quot;time&quot;: [3, 4, 5, 6], &quot;cost&quot;: [1.1, 2.2, 3.3, 4.4], &quot;quantity&quot;: [10,20,30,40]}) df2 = pd.DataFrame({&quot;id&quot;: [2,3,4], &quot;time&quot;: [4, 5, 6], &quot;modified_cost&quot;: [2.2, 3.3, 4.4], &quot;modified_quantity&quot;: [20,30,40]}) df1 = df1.merge(df2, on=[&quot;id&quot;, &quot;time&quot;], how=&quot;left&quot;) print(df1) </code></pre> <p>gives</p> <pre><code> id time cost quantity modified_cost modified_quantity 0 1 3 1.1 10 NaN NaN 1 2 4 2.2 20 2.2 20.0 2 3 5 3.3 30 3.3 30.0 3 4 6 4.4 40 4.4 40.0 </code></pre> <p>Instead of this, I want the NaNs to become whatever is in the <code>cost</code> and <code>quantity</code> column, so</p> <pre><code> id time cost quantity modified_cost modified_quantity 0 1 3 1.1 10 1.1 10.0 1 2 4 2.2 20 2.2 20.0 2 3 5 3.3 30 3.3 30.0 3 4 6 4.4 40 4.4 40.0 </code></pre> <p>How can I achieve this behavior when merging?</p> <p>Currently, the only solution I am aware of is to do this post merging</p> <pre><code>df1.loc[df1.modified_cost.isna(), &quot;modified_cost&quot;] = 
df1[df1.modified_cost.isna()].cost </code></pre>
<python><pandas><dataframe>
2023-03-23 00:51:11
3
2,502
roulette01
75,818,298
3,312,274
web2py: left-outer join not giving all records on left
<p>Why is this code not returning groups with zero (0) count? (I am sure there are groups with zero count)</p> <pre><code>count = db.auth_membership.id.count() groups = db().select(db.auth_membership.group_id, db.auth_group.role, count, left=db.auth_group.on(db.auth_membership.group_id==db.auth_group.id), groupby=db.auth_membership.group_id) </code></pre>
<python><web2py>
2023-03-23 00:17:16
1
565
JeffP
75,818,269
9,900,084
Polars: fill nulls with the only valid value within each group
<p>Each group only has one valid or not_null value in a random row. How do you fill each group with that value?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl data = { 'group': ['1', '1', '1', '2', '2', '2', '3', '3', '3'], 'col1': [1, None, None, None, 3, None, None, None, 5], 'col2': ['a', None, None, None, 'b', None, None, None, 'c'], 'col3': [False, None, None, None, True, None, None, None, False] } df = pl.DataFrame(data) </code></pre> <pre><code>shape: (9, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ═══════║ β”‚ 1 ┆ 1 ┆ a ┆ false β”‚ β”‚ 1 ┆ null ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null ┆ null β”‚ β”‚ 2 ┆ null ┆ null ┆ null β”‚ β”‚ 2 ┆ 3 ┆ b ┆ true β”‚ β”‚ 2 ┆ null ┆ null ┆ null β”‚ β”‚ 3 ┆ null ┆ null ┆ null β”‚ β”‚ 3 ┆ null ┆ null ┆ null β”‚ β”‚ 3 ┆ 5 ┆ c ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Desired output:</p> <pre><code>shape: (9, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ str ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ═══════║ β”‚ 1 ┆ 1 ┆ a ┆ false β”‚ β”‚ 1 ┆ 1 ┆ a ┆ false β”‚ β”‚ 1 ┆ 1 ┆ a ┆ false β”‚ β”‚ 2 ┆ 3 ┆ b ┆ true β”‚ β”‚ 2 ┆ 3 ┆ b ┆ true β”‚ β”‚ 2 ┆ 3 ┆ b ┆ true β”‚ β”‚ 3 ┆ 5 ┆ c ┆ false β”‚ β”‚ 3 ┆ 5 ┆ c ┆ false β”‚ β”‚ 3 ┆ 5 ┆ c ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>In pandas, I can do the following for each column</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame(data) df.col1 = df.groupby('group').col.apply(lambda x: x.ffill().bfill()) </code></pre> <p>How do you do this in 
polars, ideally with a window function (.over())?</p>
<python><dataframe><window-functions><python-polars>
2023-03-23 00:10:48
3
2,559
steven
75,818,230
10,452,700
What is the best practice to apply cross-validation using TimeSeriesSplit() over dataframe within end-2-end pipeline in python?
<p>Let's say I have <a href="https://drive.google.com/file/d/18PGLNnOI44LVFignYriBWQFW9WBkTX5c/view?usp=share_link" rel="nofollow noreferrer">dataset</a> within the following <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged &#39;pandas&#39;" aria-label="show questions tagged &#39;pandas&#39;" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> dataframe format with a <em>non-standard</em> timestamp column without datetime format as follows:</p> <pre><code>+--------+-----+ |TS_24hrs|count| +--------+-----+ |0 |157 | |1 |334 | |2 |176 | |3 |86 | |4 |89 | ... ... |270 |192 | |271 |196 | |270 |251 | |273 |138 | +--------+-----+ 274 rows Γ— 2 columns </code></pre> <p>I have already applied some regression algorithms after splitting data <strong>without</strong> using cross-validation (CV) into <em>training-set</em> and <em>test-set</em> and got results like the following:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt #Load the time-series data as dataframe df = pd.read_csv('/content/U2996_24hrs_.csv', sep=&quot;,&quot;) print(df.shape) # Split the data into training and testing sets from sklearn.model_selection import train_test_split train, test = train_test_split(df, test_size=0.27, shuffle=False) print(train.shape) #(200, 2) print(test.shape) #(74, 2) #visulize splitted data train['count'].plot(label='Training-set') test['count'].plot(label='Test-set') plt.legend() plt.show() #Train and fit the model from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor().fit(train, train['count']) #X, y rf.score(train, train['count']) #0.9998644192184375 # Use the forest's model to predict on the test-set predictions = rf.predict(test) #convert prediction result into dataframe for plot issue in ease df_pre = pd.DataFrame({'TS_24hrs':test['TS_24hrs'], 'count_prediction':predictions}) # Calculate the mean absolute errors from 
sklearn.metrics import mean_absolute_error rf_mae = mean_absolute_error(test['count'], df_pre['count_prediction']) print(train.shape) #(200, 2) print(test.shape) #(74, 2) print(df_pre.shape) #(74, 2) #visulize forecast or prediction of used regressor model train['count'].plot(label='Training-set') test['count'].plot(label='Test-set') df_pre['count_prediction'].plot(label=f'RF_forecast MAE={rf_mae:.2f}') plt.legend() plt.show() </code></pre> <p><img src="https://i.sstatic.net/uk4gb.jpg" alt="img" /></p> <p>According this <a href="https://stackoverflow.com/a/64817316/10452700">answer</a> I noticed:</p> <blockquote> <p><em>if your data is <strong>already sorted based on time</strong> then simply use <code>shuffle=False</code></em> in <code>train, test = train_test_split(newdf, test_size=0.3, shuffle=False)</code></p> </blockquote> <p>So far, I have used this classic split data method, but I want to experiment with Time-series-based split methods that are summarized here:</p> <p><img src="https://miro.medium.com/v2/resize:fit:720/format:webp/1*BuJmKzWf0a_JDGPAw_A5Hw.png" alt="img" /></p> <p>Additionally, based on my investigation (please see the references at the end of the post), it is recommended to use the <strong>cross-validation</strong> method (K-Fold) before applying regression models. 
explanation: <a href="https://medium.com/@soumyachess1496/cross-validation-in-time-series-566ae4981ce4#:%7E:text=Cross%20Validation%20on%20Time%20Series,for%20the%20forecasted%20data%20points" rel="nofollow noreferrer">Cross Validation in Time Series</a></p> <p><strong>Problem:</strong> <em>How can I split time-series data using CV methods for comparable results?</em> (and plot the split to evaluate the quality of the data splitting)</p> <ul> <li>TSS CV method: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html#sklearn-model-selection-timeseriessplit" rel="nofollow noreferrer"><code>TimeSeriesSplit()</code></a></li> <li>BTSS CV method: <a href="https://hub.packtpub.com/cross-validation-strategies-for-time-series-forecasting-tutorial/" rel="nofollow noreferrer"><code>BlockingTimeSeriesSplit()</code></a></li> </ul> <p>So far, the closest solution that crossed my mind is to set aside the last 74 observations as a <em>hold-out test-set</em> and do CV on just the first 200 observations. I'm still struggling with the arguments <code>max_train_size=199</code> and <code>test_size=73</code> to reach the desired results; it's very tricky and I couldn't figure it out. 
In fact, I applied a time-series-based data split using TSS CV methods before training the RF regressor on the <em>train-set (first 200 days/observations)</em> and fitting the model on the <em>test-set (last 74 days/observations)</em>.</p> <p>I've tried the recommended <code>TimeSeriesSplit()</code> as follows, unsuccessfully:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt #Load the time-series data as dataframe df = pd.read_csv('/content/U2996_24hrs_.csv', sep=&quot;,&quot;) print(df.shape) #Try to split data with CV (K-Fold) by using TimeSeriesSplit() method from sklearn.model_selection import TimeSeriesSplit tscv = TimeSeriesSplit( n_splits=len(df['TS_24hrs'].unique()) - 1, gap=0, # since data is already grouped by 24 hours to retrieve daily counts, there is no need to have a gap #max_train_size=199, #here: https://stackoverflow.com/a/43326651/10452700 they recommended setting this argument; I'm unsure if it applies to my problem #test_size=73, ) for train_idx, test_idx in tscv.split(df['TS_24hrs']): print('TRAIN: ', df.loc[df.index.isin(train_idx), 'TS_24hrs'].unique(), 'val-TEST: ', df.loc[df.index.isin(test_idx), 'TS_24hrs'].unique()) </code></pre>
href="https://stats.stackexchange.com/questions/560119/time-series-k-fold-cross-validation-for-classification">Time series k-fold cross validation for classification</a></p> </li> <li><p><a href="https://stats.stackexchange.com/questions/554260/how-many-folds-for-time-series-cross-validation">How many folds for (time series) cross validation</a></p> </li> <li><p><a href="https://stats.stackexchange.com/questions/576797/cross-validation-for-time-series-classification-not-forecasting">Cross Validation for Time Series Classification (Not Forecasting!)</a></p> </li> </ul> <hr /> <h2><strong>Edit1</strong>:</h2> <p>I found 3 related posts:</p> <ul> <li><a href="https://stats.stackexchange.com/q/485906/240550">post1</a></li> <li><a href="https://www.projectpro.io/recipes/do-cross-validation-for-time-series" rel="nofollow noreferrer">post2</a> <blockquote> <p>I decided to apply <code>TimeSeriesSplit()</code> in short TTS cv output within <em>for loop</em> to train\fit regression model over <em>training-set</em> with assist of <em>CV-set</em> then <code>predict()</code> over <strong>Hold-on test-set</strong>. The current output of my implementation shows <strong>slightly</strong> improvement in forecasting <strong>with or without</strong>, which could be due to problems in my implementation. 
<img src="https://i.sstatic.net/TXO9n.jpg" alt="img" /></p> </blockquote> </li> </ul> <pre class="lang-py prettyprint-override"><code>#Load the time-series data as dataframe import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('/content/U2996_24hrs_.csv', sep=&quot;,&quot;) #print(df.shape) #(274, 2) #####----------------------------without CV # Split the data into training and testing sets from sklearn.model_selection import train_test_split train, test = train_test_split(df, test_size=0.27, shuffle=False) print(train.shape) #(200, 2) print(test.shape) #(74, 2) #visulize splitted data #train['count'].plot(label='Training-set') #test['count'].plot(label='Test-set') #plt.legend() #plt.show() #Train and fit the model from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor().fit(train, train['count']) #X, y rf.score(train, train['count']) #0.9998644192184375 # Use the forest's model to predict on the test-set predictions = rf.predict(test) #convert prediction result into dataframe for plot issue in ease df_pre = pd.DataFrame({'TS_24hrs':test['TS_24hrs'], 'count_prediction':predictions}) # Calculate the mean absolute errors from sklearn.metrics import mean_absolute_error rf_mae = mean_absolute_error(test['count'], df_pre['count_prediction']) #####----------------------------with CV df1 = df[:200] #take just first 1st 200 records #print(df1.shape) #(200, 2) #print(len(df1)) #200 from sklearn.model_selection import TimeSeriesSplit tscv = TimeSeriesSplit( n_splits=len(df1['TS_24hrs'].unique()) - 1, #n_splits=3, gap=0, # since data alraedy groupedby for 24hours to retrieve daily count there is no need to to have gap #max_train_size=199, #test_size=73, ) #print(type(tscv)) #&lt;class 'sklearn.model_selection._split.TimeSeriesSplit'&gt; #mae = [] cv = [] TS_24hrs_tss = [] predictions_tss = [] for train_index, test_index in tscv.split(df1): cv_train, cv_test = df1.iloc[train_index], df1.iloc[test_index] 
#cv.append(cv_test.index) #print(cv_train.shape) #(199, 2) #print(cv_test.shape) #(1, 2) TS_24hrs_tss.append(cv_test.values[:,0]) #Train and fit the model from sklearn.ensemble import RandomForestRegressor rf_tss = RandomForestRegressor().fit(cv_train, cv_train['count']) #X, y # Use the forest's model to predict on the cv_test predictions_tss.append(rf_tss.predict(cv_test)) #print(predictions_tss) # Calculate the mean absolute errors #from sklearn.metrics import mean_absolute_error #rf_tss_mae = mae.append(mean_absolute_error(cv_test, predictions_tss)) #print(rf_tss_mae) #print(len(TS_24hrs_tss)) #199 #print(type(TS_24hrs_tss)) #&lt;class 'list'&gt; #print(len(predictions_tss)) #199 #convert prediction result into dataframe for plot issue in ease import pandas as pd df_pre_tss1 = pd.DataFrame(TS_24hrs_tss) df_pre_tss1.columns =['TS_24hrs_tss'] #df_pre_tss1 df_pre_tss2 = pd.DataFrame(predictions_tss) df_pre_tss2.columns =['count_predictioncv_tss'] #df_pre_tss2 df_pre_tss= pd.concat([df_pre_tss1,df_pre_tss2], axis=1) df_pre_tss # Use the forest's model to predict on the hold-on test-set predictions_tsst = rf_tss.predict(test) #print(len(predictions_tsst)) #74 #convert prediction result of he hold-on test-set into dataframe for plot issue in ease df_pre_test = pd.DataFrame({'TS_24hrs_tss':test['TS_24hrs'], 'count_predictioncv_tss':predictions_tsst}) # Fix the missing record (1st record) df_col_merged = df_pre_tss.merge(df_pre_test, how=&quot;outer&quot;) #print(df_col_merged.shape) #(273, 2) 1st record is missing ddf = df_col_merged.rename(columns={'TS_24hrs_tss': 'TS_24hrs', 'count_predictioncv_tss': 'count'}) df_first= df.head(1) df_merged_pred = df_first.merge(ddf, how=&quot;outer&quot;) #insert first record from original df to merged ones #print(df_merged_pred.shape) #(274, 2) print(train.shape) #(200, 2) print(test.shape) #(74, 2) print(df_pre_test.shape) #(74, 2) # Calculate the mean absolute errors from sklearn.metrics import mean_absolute_error rf_mae_tss = 
mean_absolute_error(test['count'], df_pre_test['count_predictioncv_tss']) #visulize forecast or prediction of used regressor model train['count'].plot(label='Training-set', alpha=0.5) test['count'].plot(label='Test-set', alpha=0.5) #cv['count'].plot(label='cv TSS', alpha=0.5) df_pre['count_prediction'].plot(label=f'RF_forecast MAE={rf_mae:.2f}', alpha=0.5) df_pre_test['count_predictioncv_tss'].plot(label=f'RF_forecast_tss MAE={rf_mae_tss:.2f}', alpha=0.5 , linestyle='--') plt.legend() plt.title('Plot forecast results with &amp; without cross-validation (K-Fold)') plt.show() </code></pre> <ul> <li><a href="https://scikit-learn.org/stable/auto_examples/applications/plot_cyclical_feature_engineering.html#naive-linear-regression" rel="nofollow noreferrer">post3 sklearn</a> <blockquote> <p>(I <strong>couldn't</strong> implement it, one can try this) using <code>make_pipeline()</code> and use <code>def evaluate(model, X, y, cv):</code> function but still confusing if I want to collect the results in the form of dataframe for visualizing case and what is the best practice to pass cv result to regressor and compare the results.</p> </blockquote> </li> </ul> <p><strong>Edit2</strong>: In the spirit of <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY</a>, I tried to build an end-to-end pipeline without/with CV methods, load a dataset, perform feature scaling and supply the data into a regression model:</p> <pre class="lang-py prettyprint-override"><code>#Load the time-series data as dataframe import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('/content/U2996_24hrs_.csv', sep=&quot;,&quot;) #print(df.shape) #(274, 2) #####--------------Create pipeline without CV------------ # Split the data into training and testing sets for just visualization sense from sklearn.model_selection import train_test_split train, test = train_test_split(df, test_size=0.27, shuffle=False) print(train.shape) #(200, 2) 
print(test.shape) #(74, 2) from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn.ensemble import RandomForestRegressor from sklearn.pipeline import Pipeline # Split the data into training and testing sets without CV X = df['TS_24hrs'].values y = df['count'].values print(X_train.shape) #(200, 1) print(y_train.shape) #(200,) print(X_test.shape) #(74, 1) print(y_test.shape) #(74,) # Here is the trick X = X.reshape(-1,1) X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=0.27, shuffle=False, random_state=0) print(X_train.shape) #(200, 1) print(y_train.shape) #(1, 200) print(X_test.shape) #(74, 1) print(y_test.shape) #(1, 74) #build an end-to-end pipeline, and supply the data into a regression model. It avoids leaking the test set into the train set rf_pipeline = Pipeline([('scaler', MinMaxScaler()),('RF', RandomForestRegressor())]) rf_pipeline.fit(X_train, y_train) #Displaying a Pipeline with a Preprocessing Step and Regression from sklearn import set_config set_config(display=&quot;diagram&quot;) rf_pipeline # click on the diagram below to see the details of each step </code></pre> <p><img src="https://i.sstatic.net/emibF.jpg" alt="img" /></p> <pre class="lang-py prettyprint-override"><code>r2 = rf_pipeline.score(X_test, y_test) print(f&quot;RFR: {r2}&quot;) # -0.3034887940244342 # Use the Randomforest's model to predict on the test-set y_predictions = rf_pipeline.predict(X_test.reshape(-1,1)) #convert prediction result into dataframe for plot issue in ease df_pre = pd.DataFrame({'TS_24hrs':test['TS_24hrs'], 'count_prediction':y_predictions}) # Calculate the mean absolute errors from sklearn.metrics import mean_absolute_error rf_mae = mean_absolute_error(y_test, df_pre['count_prediction']) print(train.shape) #(200, 2) print(test.shape) #(74, 2) print(df_pre.shape) #(74, 2) #visulize forecast or prediction of used regressor model train['count'].plot(label='Training-set') 
test['count'].plot(label='Test-set') df_pre['count_prediction'].plot(label=f'RF_forecast MAE={rf_mae:.2f}') plt.legend() plt.title('Plot results without cross-validation (K-Fold) using pipeline') plt.show() #####--------------Create pipeline with TSS CV------------ #####--------------Create pipeline with BTSS CV------------ </code></pre> <p><img src="https://i.sstatic.net/D7EqE.jpg" alt="img" /></p> <p>The results got <strong>worse</strong> using the pipeline, based on MAE score comparing implementation when separating the steps outside of the pipeline!</p>
<python><scikit-learn><time-series><regression><cross-validation>
2023-03-23 00:02:03
2
2,056
Mario
75,818,193
2,478,485
python pytest pluggy - pytest_runtest_logfinish AssertionError
<p><code>pytest</code> is failing with following error. Might be related to <a href="https://github.com/pytest-dev/pluggy" rel="nofollow noreferrer">https://github.com/pytest-dev/pluggy</a> package</p> <p><strong>Code</strong> Exception location <a href="https://github.com/pytest-dev/pytest/blob/main/src/_pytest/terminal.py#L598" rel="nofollow noreferrer">https://github.com/pytest-dev/pytest/blob/main/src/_pytest/terminal.py#L598</a></p> <p><strong>Error:</strong></p> <pre><code>INTERNALERROR&gt; Traceback (most recent call last): INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/_pytest/main.py&quot;, line 270, in wrap_session INTERNALERROR&gt; session.exitstatus = doit(config, session) or 0 INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/_pytest/main.py&quot;, line 324, in _main INTERNALERROR&gt; config.hook.pytest_runtestloop(session=session) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_hooks.py&quot;, line 265, in __call__ INTERNALERROR&gt; return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_manager.py&quot;, line 80, in _hookexec INTERNALERROR&gt; return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 60, in _multicall INTERNALERROR&gt; return outcome.get_result() INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_result.py&quot;, line 60, in get_result INTERNALERROR&gt; raise ex[1].with_traceback(ex[2]) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 39, in _multicall INTERNALERROR&gt; res = hook_impl.function(*args) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/_pytest/main.py&quot;, line 
349, in pytest_runtestloop INTERNALERROR&gt; item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_hooks.py&quot;, line 265, in __call__ INTERNALERROR&gt; return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_manager.py&quot;, line 80, in _hookexec INTERNALERROR&gt; return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 60, in _multicall INTERNALERROR&gt; return outcome.get_result() INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_result.py&quot;, line 60, in get_result INTERNALERROR&gt; raise ex[1].with_traceback(ex[2]) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 39, in _multicall INTERNALERROR&gt; res = hook_impl.function(*args) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/_pytest/runner.py&quot;, line 113, in pytest_runtest_protocol INTERNALERROR&gt; ihook.pytest_runtest_logfinish(nodeid=item.nodeid, location=item.location) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_hooks.py&quot;, line 265, in __call__ INTERNALERROR&gt; return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_manager.py&quot;, line 80, in _hookexec INTERNALERROR&gt; return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 60, in _multicall INTERNALERROR&gt; return outcome.get_result() INTERNALERROR&gt; File 
&quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_result.py&quot;, line 60, in get_result INTERNALERROR&gt; raise ex[1].with_traceback(ex[2]) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/pluggy/_callers.py&quot;, line 39, in _multicall INTERNALERROR&gt; res = hook_impl.function(*args) INTERNALERROR&gt; File &quot;/home/test/project/env/lib/python3.8/site-packages/_pytest/terminal.py&quot;, line 592, in pytest_runtest_logfinish INTERNALERROR&gt; assert self._session INTERNALERROR&gt; AssertionError </code></pre>
<python><pytest>
2023-03-22 23:52:46
1
3,355
Lava Sangeetham
75,818,186
14,729,820
How to rename image names using a data frame?
<p>I have a data frame that has <code>file_name</code> and corresponding <code>text</code> columns, and I want to update the <code>file_name</code> values and the image names in the <code>imgs</code> folder by concatenating some text or number. The structure of <code>input_folder</code> looks like:</p> <pre><code>input_folder --| |--- imgs -- |-- 0.jpg |-- 1.jpg |-- 2.jpg ......... |--- train.jsonl </code></pre> <p>The <code>train.jsonl</code> file has:</p> <pre><code>{&quot;file_name&quot;: &quot;0.jpg&quot;, &quot;text&quot;: &quot;The Fulton County Grand Jury said Friday an investigation&quot;} {&quot;file_name&quot;: &quot;1.jpg&quot;, &quot;text&quot;: &quot;of Atlanta's recent primary election produced \&quot;no evidence\&quot; that&quot;} </code></pre> <pre><code>path = &quot;input_folder/train.jsonl&quot; df = pd.read_json(path_or_buf=path, lines=True) print(df.head()) # rename file_name col df['file_name'] = df['file_name'].apply(lambda x: 'A_' + x) # def rename(df['file_name'], new_df['file_name']) </code></pre> <p>What I am expecting is: updating the <code>file_name</code> column in the resulting data frame while renaming the image names in the <code>imgs</code> folder:</p> <pre><code>out_folder --| |-- imgs -- |-- A_0.jpg |-- A_1.jpg |-- A_2.jpg ......... 
|---- train.jsonl </code></pre> <p>The <code>train.jsonl</code> file has:</p> <pre><code>{&quot;file_name&quot;: &quot;A_0.jpg&quot;, &quot;text&quot;: &quot;The Fulton County Grand Jury said Friday an investigation&quot;} {&quot;file_name&quot;: &quot;A_1.jpg&quot;, &quot;text&quot;: &quot;of Atlanta's recent primary election produced \&quot;no evidence\&quot; that&quot;} </code></pre> <p>After using the code snippet given by <a href="https://stackoverflow.com/questions/75818186/how-to-rename-images-name-using-data-frame/75818522#75818522">@harriet</a>, I get the correct new image names and corresponding <code>file_name</code> values in the <code>train.jsonl</code> file, but I have a new issue: a Unicode one. Because the text is in Hungarian, some special characters are not written as expected. Here is what I get in <code>train.jsonl</code> in the output dir:</p> <pre><code>{&quot;file_name&quot;:&quot;A_0.jpg&quot;,&quot;text&quot;:&quot;El\u00e9gedetlenek az emberek a k\u00f6zoktat\u00e1ssal? Belf\u00f6ld - Magyarorsz\u00e1g h\u00edrei&quot;} </code></pre> <p>But what I expect is:</p> <p><code>{&quot;file_name&quot;: &quot;A_0.jpg&quot;, &quot;text&quot;: &quot;ElΓ©gedetlenek az emberek a kΓΆzoktatΓ‘ssal? BelfΓΆld - MagyarorszΓ‘g hΓ­rei&quot;}</code></p>
<python><pandas><dataframe><deep-learning><operating-system>
2023-03-22 23:51:03
1
366
Mohammed
75,818,060
12,485,858
How to read a text file with unknown format and save it as utf-8?
<p>I have a text file with unknown formatting which contains some German characters (umlauts). I want to open this file with Python and read it as &quot;utf-8&quot;. However, everything I tried delivers an error: <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 1664: invalid continuation byte</code></p> <p>What I tried so far:</p> <pre><code>open(filepath, &quot;rb&quot;).read().decode(&quot;utf-8&quot;) </code></pre> <p>I also tried:</p> <pre><code>open(filepath, &quot;r&quot;, &quot;utf-8&quot;) </code></pre> <p>I know that I could, for instance, open the file in a text editor such as Notepad, and when I click on &quot;save as&quot; I can choose the encoding of the file. After saving it as utf-8 I can of course process it with Python just by calling <code>open(filepath)</code>. But how do I achieve the same effect using only Python (without the text editor step)? I assume that I could somehow make the decoder work by suppressing errors, but I don't know how...</p> <p>EDIT: Is there a &quot;general approach&quot; to this problem? I just saw that many of the comments suggest that this file was encoded on a Windows machine, so I could &quot;guess&quot; the encoding beforehand. However, how should I approach this problem if, let's say, I develop a piece of software and the user just provides a text file as input? I don't want to just output an error stating that the encoding is wrong. Is there a way to transform any encoding into utf-8?</p>
<python><encoding><io><text-files>
2023-03-22 23:22:11
2
846
teoML
75,817,988
5,969,893
Airflow Function to get Task Status
<p>I am using <code>BranchPythonOperator</code> and I want to check whether the preceding task <code>Task_1</code> succeeded: if it did, return <code>Task_2</code>, and if it failed, return <code>Task_3</code>. However, I am not sure how to get the state of <code>Task_1</code>. I tried a few ways, one of them shown below, but I couldn't find a clear way to get there.</p> <p>Any suggestions?</p> <pre><code> def check_task_status(**context): dagrun: DAG = context[&quot;dag_run&quot;] task_state = dagrun.get_task_instances('Task_1').state if task_state == 'success': return 'Task_2' else: return 'Task_3' branch_task_1 = BranchPythonOperator(task_id = 'Check_Task_Status' ,python_callable = check_task_status ,trigger_rule='all_done' ,provide_context=True ,dag=dag ) </code></pre>
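One probable issue (an assumption worth verifying against the Airflow version in use): `dag_run.get_task_instances('Task_1')` returns a *list* — its first argument filters by state, not by task id — so `.state` on it fails. `DagRun.get_task_instance(task_id)` (singular) returns a single task instance. A sketch of the callable, exercised below with stand-in objects so it runs without an Airflow installation:

```python
def check_task_status(**context):
    # get_task_instance (singular) returns one TaskInstance, or None
    ti = context["dag_run"].get_task_instance("Task_1")
    if ti is not None and ti.state == "success":
        return "Task_2"
    return "Task_3"

# Stand-ins to exercise the branching logic without a real DagRun
class FakeTI:
    def __init__(self, state):
        self.state = state

class FakeDagRun:
    def __init__(self, state):
        self._ti = FakeTI(state)
    def get_task_instance(self, task_id):
        return self._ti

print(check_task_status(dag_run=FakeDagRun("success")))  # Task_2
print(check_task_status(dag_run=FakeDagRun("failed")))   # Task_3
```

With `trigger_rule='all_done'` the branch task runs regardless of `Task_1`'s outcome, which matches this pattern.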
<python><airflow>
2023-03-22 23:05:26
2
667
AlmostThere
75,817,867
2,103,050
TensorFlow Actor-Critic fails locally in PyCharm with OperatorNotAllowedInGraphError
<p><a href="https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic</a></p> <p>I'm trying to copy and run this code locally (outside colab where it runs fine) and modify it, but when I run (even the original) I get an error.</p> <p>I can't figure out how to fix (or what causes) the following error.</p> <pre><code>for t in tf.range(max_steps): state = tf.expand_dims(state, 0) softmax_action, value = model(state) action = tf.random.categorical(softmax_action, 1)[0,0] </code></pre> <blockquote> <p>File &quot;&quot;, line 89, in run_episode tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Iterating over a symbolic <code>tf.Tensor</code> is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.</p> </blockquote> <p>Likewise, if I comment out the loop so it runs only one time (setting t=0) I still have another loop inside get_returns() producing the same error.</p> <blockquote> <p>File &quot;&quot;, line 124, in get_returns tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Iterating over a symbolic <code>tf.Tensor</code> is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.</p> </blockquote> <p>If it matters, I'm on a macOS with M1 processor. 
Since it works in Colab, I'm wondering if that might have something to do with it.</p> <pre><code>tensorflow-macos==2.11.0 tensorflow-metal==0.7.1 </code></pre> <p>EDIT (in case the link breaks): here is more code (minimal):</p> <pre><code>def run_episode(state, model, max_steps): action_probs = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True) values = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True) rewards = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True) initial_state_shape = state.shape for t in tf.range(max_steps): state = tf.expand_dims(state, 0) softmax_action, value = model(state) action = tf.random.categorical(softmax_action, 1)[0,0] # ... more code below here... @tf.function def train_step(init_state, model, gamma, max_steps): with tf.GradientTape() as tape: action_probs, values, rewards = run_episode(init_state, model, max_steps) returns = get_returns(rewards, gamma, True) loss = custom_loss(action_probs, values, returns) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) episode_reward = tf.math.reduce_sum(rewards) return episode_reward t = tqdm.trange(num_episodes) for i in t: state, info = env.reset() # comment out this is for testing state = tf.constant(state, dtype=tf.float32) reward = train_step(state, model, gamma, max_steps) </code></pre>
<python><tensorflow>
2023-03-22 22:42:25
1
377
brian_ds
75,817,866
1,185,242
Can you mock system functionality in a pytest fixture?
<p>I have the following class which does a bunch of system level stuff like call <code>os.system</code>, <code>mkdir</code> and make HTTP requests for example:</p> <pre><code># mymodule.scene.py import os from pathlib import Path import requests class Scene: def __init__(self, id): self.do_stuff_1(id) self.do_stuff_2(id) def do_stuff_1(self, id): path = Path(f'/fake/cache/folder/{id}') path.mkdir(exist_ok=True) os.system(f'gsutil cp gs://myfile.txt /fake/cache/folder/{id}/myfile.txt') def do_stuff_2(self, id): resp = requests.get(f'http://www.google.com/{id}').json() return resp['status'] == 'OK' </code></pre> <p>I would like to create a <code>pytest.fixture</code> which injects this class while having all the <code>os</code>, <code>pathlib</code> and <code>requests</code> calls mocked:</p> <pre><code>#tests/conftest.py @pytest.fixture def scene(mocker): # mocker.patch.object( .. # How do I mock Path, mkdir and os.system in the fixture scene = Scene() return scene </code></pre> <p>So I can test them as follows:</p> <pre><code># tests/test_scene.py def test_do_stuff_1(scene): scene.do_stuff_1(123) # ? How do I assert Path, mkdir and os.system were called def test_do_stuff_2(scene): scene.do_stuff_2(456) # ? How do I assert requests get as called </code></pre> <p>My question is how to I do the mocking in the fixture and then access the mocked methods in the tests to confirm they were called with the correct parameters and mock the return values?</p>
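One way to sketch this (assuming the `pytest-mock` plugin, which is what provides the `mocker` fixture): patch the names where `mymodule.scene` looks them up — e.g. `mocker.patch("mymodule.scene.os.system")` — and let the tests inspect the returned mock objects. The same mechanics are shown below with the standard library's `unittest.mock` so it runs standalone; with `pytest-mock`, each `mock.patch(...)` becomes `mocker.patch(...)` in the fixture and no `with` block is needed:

```python
import os
from pathlib import Path
from unittest import mock

# Stand-in for mymodule.scene.Scene (same calls as in the question)
class Scene:
    def do_stuff_1(self, id):
        path = Path(f"/fake/cache/folder/{id}")
        path.mkdir(exist_ok=True)
        os.system(f"gsutil cp gs://myfile.txt /fake/cache/folder/{id}/myfile.txt")

# Patch os.system and Path.mkdir for the duration of the call; in a real
# test suite the targets would be "mymodule.scene.os.system" etc.
with mock.patch("os.system") as m_system, \
     mock.patch.object(Path, "mkdir") as m_mkdir:
    scene = Scene()
    scene.do_stuff_1(123)

# The mocks record how they were called, so we can assert on parameters
m_mkdir.assert_called_once_with(exist_ok=True)
m_system.assert_called_once_with(
    "gsutil cp gs://myfile.txt /fake/cache/folder/123/myfile.txt")
print("all calls verified")
```

In the fixture variant, one option is to attach the mocks to the instance (`scene.m_system = mocker.patch(...)`) before constructing `Scene`, so each test can reach them through the `scene` argument.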
<python><mocking><pytest>
2023-03-22 22:42:12
1
26,004
nickponline
75,817,692
10,428,677
Using Python fstrings to name columns in pyspark
<p>I am trying to iterate through a list of column names in pyspark and if they don't exist, I want to create them, fill with null values and give them a name using an fstring.</p> <p>However, when I try the below code, I get the error <code>Column is not iterable</code>. Is there a way to fix this?</p> <pre><code>for column in cols_list: if not column in df.columns: df= df.withColumn(F.col(f&quot;score_{column}&quot;), F.lit(None)) </code></pre> <p>The error in more detail:</p> <pre><code>----&gt; 4 df= df.withColumn(F.col(f&quot;score_{column}&quot;), F.lit(None)) /databricks/spark/python/pyspark/instrumentation_utils.py in wrapper(*args, **kwargs) 46 start = time.perf_counter() 47 try: ---&gt; 48 res = func(*args, **kwargs) 49 logger.log_success( 50 module_name, class_name, function_name, time.perf_counter() - start, signature /databricks/spark/python/pyspark/sql/dataframe.py in withColumn(self, colName, col) 3325 if not isinstance(col, Column): 3326 raise TypeError(&quot;col should be Column&quot;) -&gt; 3327 return DataFrame(self._jdf.withColumn(colName, col._jc), self.sparkSession) </code></pre> <p>Edit 2: if I remove <code>F.col</code>, it generates the columns but their type becomes <code>void</code></p>
<python><pyspark>
2023-03-22 22:10:11
1
590
A.N.
75,817,540
14,967,088
Prevent a player from moving faster than usual by moving in two directions
<p>I'm messing a bit with pygame, a player can freely move around the screen with WASD and change the direction they are facing with <kbd>←</kbd> and <kbd>β†’</kbd>, the problem is that they can move faster than usual if they hold two keys at once (e.g. <kbd>W</kbd> + <kbd>D</kbd>, <kbd>W</kbd> + <kbd>A</kbd>) and I was wondering what should I change to fix this.</p> <pre><code>import math import sys import pygame as pg class Player: def __init__(self): self.x, self.y = 400, 300 self.speed = 7.5 self.rot_speed = 0.05 self.angle = math.pi * 1.5 def movement(self): cos = math.cos(self.angle) sin = math.sin(self.angle) speed_cos = self.speed * cos speed_sin = self.speed * sin dx, dy = 0, 0 keys = pg.key.get_pressed() if keys[pg.K_w]: dx += speed_cos dy += speed_sin elif keys[pg.K_s]: dx -= speed_cos dy -= speed_sin if keys[pg.K_a]: dx += speed_sin dy -= speed_cos elif keys[pg.K_d]: dx -= speed_sin dy += speed_cos self.x += dx self.y += dy if keys[pg.K_LEFT]: self.angle -= self.rot_speed if keys[pg.K_RIGHT]: self.angle += self.rot_speed self.angle %= math.tau def draw(self): pg.draw.line( screen, &quot;yellow&quot;, (self.x, self.y), ( self.x + 100 * math.cos(self.angle), self.y + 100 * math.sin(self.angle), ), 1, ) pg.draw.circle(screen, &quot;green&quot;, (self.x, self.y), 15) pg.init() screen = pg.display.set_mode((800, 600)) clock = pg.time.Clock() player = Player() while True: for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): pg.quit() sys.exit() screen.fill(&quot;black&quot;) player.movement() player.draw() pg.display.flip() clock.tick(60) </code></pre>
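The standard fix is to normalize the combined movement vector so a frame's displacement always has length `self.speed`, whether one key or two is held. A sketch of the helper (the key handling and rotation stay as in the question; only the final `(dx, dy)` is rescaled before being applied):

```python
import math

def scaled_step(dx, dy, speed):
    """Rescale (dx, dy) so its length is exactly `speed` (or zero)."""
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0, 0.0
    return dx / length * speed, dy / length * speed

# W + D pressed: the raw sum of two unit directions has length sqrt(2)
dx, dy = 1.0, 1.0
dx, dy = scaled_step(dx, dy, 7.5)
print(math.hypot(dx, dy))   # ~7.5, same distance per frame as a single key
```

In `movement()` this means accumulating the directional contributions as before, then calling `self.x += dx; self.y += dy` only after passing the summed `(dx, dy)` through `scaled_step(dx, dy, self.speed)`.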
<python><pygame>
2023-03-22 21:49:24
1
741
qwerty_url
75,817,446
6,164,682
How to configure a Python virtual environment with a different Python version?
<p>I am trying to set virtual environment with a different python version.</p> <p>The default global version is <code>Python 3.11.0</code></p> <p>I have installed virtualenv using <code>pip install virtualenv</code></p> <p>Command to create virtualenv with different python version is <code>virtualenv -p &quot;C:/Python310/python.exe&quot; .venv</code></p> <p>Error when I run the above command is</p> <pre><code>RuntimeError: failed to query C:\Python310\python.exe with code 1 err: Traceback (most recent call last): File &quot;C:\Python311\Lib\site-packages\virtualenv\discovery\py_info.py&quot;, line 8, in &lt;module&gt; import json File &quot;C:\Python311\lib\json\__init__.py&quot;, line 106, in &lt;module&gt; from .decoder import JSONDecoder, JSONDecodeError File &quot;C:\Python311\lib\json\decoder.py&quot;, line 3, in &lt;module&gt; import re File &quot;C:\Python311\lib\re\__init__.py&quot;, line 125, in &lt;module&gt; from . import _compiler, _parser File &quot;C:\Python311\lib\re\_compiler.py&quot;, line 18, in &lt;module&gt; assert _sre.MAGIC == MAGIC, &quot;SRE module mismatch&quot; AssertionError: SRE module mismatch </code></pre> <p>Not sure what I am missing here. Any guidance is appreciated.</p>
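The traceback shows `C:\Python310\python.exe` importing modules from `C:\Python311\Lib` — which usually means a `PYTHONPATH` or `PYTHONHOME` environment variable is pointing at the 3.11 installation (an assumption; worth checking with `echo %PYTHONPATH%`). Clearing those variables should let `virtualenv` work. Alternatively, sidestep `virtualenv` entirely and let the *target* interpreter build the environment with its built-in `venv` module, which cannot mix stdlibs. On the asker's machine that is `C:/Python310/python.exe -m venv .venv` (or `py -3.10 -m venv .venv`); the sketch below shows the same idea driven from Python, using whatever interpreter is available here:

```python
import subprocess
import sys

# Build the environment with the target interpreter's own venv module.
# On the asker's machine, replace sys.executable with C:/Python310/python.exe.
# --without-pip is only used here to keep the demo fast.
subprocess.run(
    [sys.executable, "-m", "venv", "--without-pip", "/tmp/demo_venv"],
    check=True,
)
print("created /tmp/demo_venv")
```

The resulting environment is tied to the interpreter that created it, so there is no version-mixing step at all.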
<python><virtualenv>
2023-03-22 21:35:31
2
709
sub
75,817,299
5,091,720
Ways to speed up a regex substitution
<p>Is there a way to speed up this regex code? The file is really large and will not open in Excel because of its size.</p> <pre><code>import regex as re path = &quot;C:/Users/.../CDPH/&quot; with open(path + 'Thefile.tab') as file: data = file.read() # remove runs of spaces that appear just before newline or tab characters data = re.sub('( )*(?=\n)|( )*(?=\t)', '', data ) with open(path + 'Data.csv', 'w') as file: file.write(data) </code></pre>
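One concrete reason this is slow: `( )*` also matches the empty string, so the engine finds a zero-width "match" (replaced by nothing) in front of nearly every character of the file. Requiring at least one space with `+`, folding the two alternatives into a single character class, and compiling the pattern once makes the substitution do far less work — a sketch:

```python
import re  # the stdlib module suffices; the third-party `regex` isn't needed here

data = "col1   \tcol2  \nnext line"

# ' +' must match at least one space, so there are no zero-width matches;
# the lookahead [\t\n] merges the two original alternatives into one.
pattern = re.compile(r' +(?=[\t\n])')
cleaned = pattern.sub('', data)
print(repr(cleaned))   # 'col1\tcol2\nnext line'
```

For very large files it may also help to process the file line by line (spaces before a newline are just `line.rstrip(' ')`) instead of reading everything into memory at once.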
<python><regex>
2023-03-22 21:11:10
1
2,363
Shane S
75,817,201
14,729,820
How to save an image with the same name in Python?
<p>I have an image folder as below:</p> <pre><code>|------dir--| | |---- input--|-- 1.jpg | | |-- 2.jpg .. ... ... ... </code></pre> <p>where I want to apply a random rotation to the images in the <code>input</code> folder and save the results in the <code>output</code> folder.</p> <p>I tried the following script:</p> <pre><code>import torch import torchvision.transforms as T from PIL import Image import os from os import listdir folder_dir = &quot;/dir/input/&quot; out_dir = &quot;dir/output/&quot; imgs = os.listdir(folder_dir) print(imgs) for img in imgs: img = Image.open(folder_dir+img) print(type(img)) # define a transform transform = T.RandomRotation(degrees=(0,360)) img = transform(img) # display result img.show() img.save(f'{out_dir}{img}.jpg') print('ok') </code></pre> <p>I got the resulting image in the <code>output</code> dir with the object name <code>R&lt;PIL.Image.Image image mode=L size=988x128 at 0x7FD72613DBE0&gt;.jpg</code>, but I expect the saved image to keep the same name as the input image; for example, if I read <code>1.jpg</code> from the input dir, I expect it to be saved as <code>1.jpg</code> after the rotation is done:</p> <pre><code>| ---dir ---| | | | |---- output | | | |-- 1.jpg | | |-- 2.jpg .. ... ... ... </code></pre>
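The core bug is that the loop variable `img` is reused: it starts out as the file *name*, then is overwritten with the PIL `Image` object, so the f-string stringifies the object. Keeping the name and the image in separate variables fixes it. The sketch below shows the corrected loop shape using plain file copies so it runs without Pillow installed; in the original script the commented lines would be the `Image.open`/`transform`/`img.save` calls:

```python
import os

folder_dir = "/tmp/demo_in_sc/"
out_dir = "/tmp/demo_out_sc/"
os.makedirs(folder_dir, exist_ok=True)
os.makedirs(out_dir, exist_ok=True)
with open(os.path.join(folder_dir, "1.jpg"), "wb") as f:
    f.write(b"fake image bytes")

for name in os.listdir(folder_dir):          # keep the file NAME here...
    in_path = os.path.join(folder_dir, name)
    # img = Image.open(in_path)              # (the PIL work is unchanged)
    # img = transform(img)
    data = open(in_path, "rb").read()        # stand-in for the Image object
    # ...so `name` is still available when saving:
    with open(os.path.join(out_dir, name), "wb") as f:   # img.save(...) in real code
        f.write(data)

print(sorted(os.listdir(out_dir)))   # ['1.jpg']
```

As a side note, the original `f'{out_dir}{img}.jpg'` would also append a second `.jpg` even if `img` were still the name; reusing `name` directly avoids that too.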
<python><file><path>
2023-03-22 20:56:01
1
366
Mohammed
75,817,199
563,269
Working with multiple Snowflake tables in SQLAlchemy
<p>When I try to reference multiple Snowflake databases within SQLAlchemy, I receive an error message. In the below example, I'm trying to create two table objects to join together later. I'm running SQLAlchemy version 1.4.47.</p> <p>In this example, I'm trying to create two table objects. The table object from database one works fine. However, table two throws an error.</p> <pre><code>from sqlalchemy import create_engine from snowflake.sqlalchemy import URL import sqlalchemy connection_parameters = { &quot;account&quot;: 'myaccount', &quot;user&quot;: 'brian', &quot;password&quot;: 'xyzzy', &quot;role&quot;: &quot;myrole&quot;, &quot;warehouse&quot;: 'myware', &quot;schema&quot;: 'qed', 'database': 'DATABASE_1' } engine = create_engine(URL(**connection_parameters)) connection = engine.connect() meta = sqlalchemy.MetaData(engine) #This table, in DATABASE_1, works fine. prim_tbl = sqlalchemy.Table('prim_tbl'.lower(), meta, schema='qed', autoload_with=engine) connection.execute('USE DATABASE DATABASE_2;').fetchone() sec_tbl = sqlalchemy.Table('sec_tbl'.lower(), meta, schema='deq', autoload_with=engine) </code></pre> <p>The error is <code>&quot;Schema 'DATABASE_1.DEQ' does not exist or not authorized.&quot;</code>. How can I force SQLAlchemy to base sec_tbl on DATABASE_2?</p>
<python><sqlalchemy><snowflake-cloud-data-platform>
2023-03-22 20:55:49
1
641
Netbrian
75,817,059
9,415,280
LSTM with "return_sequences=False" returns wrong shape [nb_input, nb_neurones] instead of [nb_neurones]
<p>I use this code to pass my data into an LSTM layer, with inputs 1 and 2: x1, x2.</p> <pre><code># This is how I set up my inputs x1 = tf.data.Dataset.from_tensor_slices((X)) x2 = tf.data.Dataset.from_tensor_slices((Xphysio)) output = tf.data.Dataset.from_tensor_slices((y)) combined_dataset = tf.data.Dataset.zip(((x1, x2), output)) input_dataset = combined_dataset.batch(batch_size_by_file) </code></pre> <p>and the simple example model:</p> <pre><code>y = tf.keras.layers.LSTM(100, return_sequences=False, stateful=False)(input_dataset[0]) merge_2_input = tf.keras.layers.Concatenate(axis=1)([y, input_dataset[1]]) output = tf.keras.layers.Dense(100)(merge_2_input) </code></pre> <p>My X data have shape [nb_sample, 10, 3]. Why does the LSTM layer with <code>return_sequences=False</code> return an output of shape [10, 100] instead of [100]?</p> <p>See the error log:</p> <pre><code>A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(10, 100), (256, 17)] Call arguments received by layer &quot;feed_back&quot; &quot; f&quot;(type FeedBack): β€’ inputs=('tf.Tensor(shape=(256, 10, 3), dtype=float32)', 'tf.Tensor(shape=(256, 17), dtype=float32)') β€’ training=True </code></pre>
<python><tensorflow><input><lstm>
2023-03-22 20:38:16
0
451
Jonathan Roy
75,817,053
2,125,671
How to assign a list of instance methods to a class variable?
<p>I have this class which works:</p> <pre><code>class Point: def __init__(self): self.checks = [self.check1, self.check2] def check1(self): return True def check2(self): return True def run_all_checks(self): for check in self.checks: check() </code></pre> <p>The instance variable <code>checks</code> is not specific to instances, so I want to move it to class level, here is my attempt :</p> <pre><code>class Point: def __new__(cls, *args, **kwargs): cls.checks = [cls.check1, cls.check2] return super(Point, cls).__new__(cls, *args, **kwargs) def check1(self): return True def check2(self): return True def run_all_checks(self): for check in self.checks: check() </code></pre> <p>The class definition <code>seems</code> to work (in the sense that there are no <code>syntax</code> errors), but when I run it, got error :</p> <pre><code>TypeError: Point.check1() missing 1 required positional argument: 'self' </code></pre> <p><em>Update</em></p> <p>With @juanpa.arrivillaga's solution, my problem is solved :</p> <pre><code>class ParentFuncs: def check1(self): print(&quot;check1&quot;) def check2(self): print(&quot;check2&quot;) checks = [check1, check2] def run_all_checks(self): for check in self.checks: check(self) class ChildFuncs(ParentFuncs): def check3(self): print(&quot;check3&quot;) def check4(self): print(&quot;check4&quot;) checks = ParentFuncs.checks + [check3, check4] ChildFuncs().run_all_checks() # Output check1 check2 check3 check4 </code></pre>
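For contrast, the reason the `__new__` attempt fails: by the time `__new__` runs, `cls.check1` is a plain function object (Python 3 has no unbound methods), so the list stores functions that still expect an instance, yet the loop later calls `check()` with no arguments. The working pattern — store the functions at class-definition time and pass `self` explicitly — condenses to this runnable sketch:

```python
class Point:
    def check1(self):
        return "check1 ran"

    def check2(self):
        return "check2 ran"

    # Inside the class body these names are plain functions, not bound
    # methods, so they must later be called with an explicit instance.
    checks = [check1, check2]

    def run_all_checks(self):
        return [check(self) for check in self.checks]

print(Point().run_all_checks())   # ['check1 ran', 'check2 ran']
```

The `checks` list lives on the class, is shared by all instances, and (as the update shows) subclasses can extend it with `ParentFuncs.checks + [...]`.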
<python><class><variables><instance>
2023-03-22 20:37:38
1
27,618
Philippe
75,817,029
13,494,917
60 column table with 700k rows taking about 30 minutes to go from dataframe to database
<p>Wondering if the title sounds normal and what I could possibly do to make this process faster. I am using python to transfer tables over from one Azure SQL database to another. The database I'm inserting tables into is a <code>General Purpose - Serverless: Standard-series (Gen5), 1 vCore</code> azure sql database- just in case that helps for context. From what I've seen online, I feel like this process should be a lot quicker and here's where I've gotten so far:</p> <p>Here's how I'm creating my engine- I've added fast_executemany=True</p> <pre class="lang-py prettyprint-override"><code>engine_azure2 = create_engine(conn_str2,echo=True, fast_executemany=True) conn2 = engine_azure2.connect() </code></pre> <p>and here's how I've got my to_sql set up- I've seen that I could use <code>method='multi'</code> here, however if I've got <code>fast_executemany=True</code> inside of the engine creation it just ends up giving me errors.</p> <pre class="lang-py prettyprint-override"><code>df.to_sql(table_name, conn2, if_exists='append', index=False, chunksize=10000, method=None) </code></pre>
<python><pandas><azure><sqlalchemy><azure-sql-database>
2023-03-22 20:34:26
0
687
BlakeB9
75,816,935
17,561,414
Remove a specific character from column value using pyspark
<p>I would like to remove some dots (<code>.</code>) from the column values, but the tricky part is that I only want to remove a dot if it appears on its own between numbers; if there are two or three dots after each other, ignore them. <strong>For example:</strong></p> <p><a href="https://i.sstatic.net/UPJKS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UPJKS.png" alt="enter image description here" /></a></p> <p>My code:</p> <pre><code>pattern = r'\.(?!\d)|(?&lt;!\d)\.' df_glaccount_totaling = df_glaccount_totaling.withColumn('TOTALING', regexp_replace('TOTALING', pattern, '')) </code></pre> <p>The outcome is <code>6598.659899999</code> (with one dot). Desired output: <code>6598..659899999</code> (two dots).</p>
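A pattern that matches only a lone dot between two digits — a look-behind and look-ahead requiring a digit on *both* sides, so any dot adjacent to another dot is left alone — can be tested with Python's `re` first. Spark's `regexp_replace` uses Java regex, which supports the same look-around syntax, so the pattern should carry over (worth confirming on the actual column):

```python
import re

pattern = r'(?<=\d)\.(?=\d)'   # a dot with a digit on BOTH sides

print(re.sub(pattern, '', '6598.659899999'))    # 6598659899999  (single dot removed)
print(re.sub(pattern, '', '6598..659899999'))   # 6598..659899999 (double dots kept)
print(re.sub(pattern, '', '65...98'))           # 65...98         (triple dots kept)
```

In pyspark this would become `regexp_replace('TOTALING', r'(?<=\d)\.(?=\d)', '')`. In a run of dots, the first dot fails the look-ahead (the next character is a dot) and the later ones fail the look-behind, so only isolated dots match.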
<python><pyspark>
2023-03-22 20:20:42
1
735
Greencolor
75,816,824
1,293,127
Implement seek in read-only gzip stream
<p>I have an app that seeks within a <code>.gz</code> file-like object.</p> <p>Python's <code>gzip.GzipFile</code> supports this, but very inefficiently – when the GzipFile object is asked to seek back, it will rewind to the beginning of the stream (<code>seek(0)</code>) and then read and decompress everything up to the desired offset.</p> <p>Needless to say this absolutely kills performance when seeking around a large <code>tar.gz</code> file (tens of gigabytes).</p> <p>So I'm looking to implement checkpointing: store the stream state every now and then, and when asked to seek back, go only to the next previous stored checkpoint, instead of rewinding all the way to the beginning.</p> <p>My question is around the <code>gzip</code> / <code>zlib</code> implementation: What does the &quot;current decompressor state&quot; consist of? Where is it stored? How big is it?</p> <p>And how do I copy that state out of an open GzipFile object, and then assign it back for the &quot;backward jump&quot; seek?</p> <p><strong>Note I have no control over the input .gz files. The solution must be strictly for GzipFile in read-only <code>rb</code> mode.</strong></p> <hr /> <p>EDIT: Looking at CPython's source, this is the relevant code flow &amp; data structures. 
Ordered from top-level (Python) down to raw C:</p> <ol> <li><p><a href="https://github.com/python/cpython/blob/1a6bacb31f7b49c244a6cc3ff0fa7f71a82412ef/Lib/gzip.py#L189-L190" rel="nofollow noreferrer">gzip.GzipFile._buffer.raw</a></p> </li> <li><p><a href="https://github.com/python/cpython/blob/1a6bacb31f7b49c244a6cc3ff0fa7f71a82412ef/Lib/gzip.py#L449" rel="nofollow noreferrer">gzip._GzipReader</a></p> </li> <li><p><a href="https://github.com/python/cpython/blob/f4c03484da59049eb62a9bf7777b963e2267d187/Lib/_compression.py#L130-L158" rel="nofollow noreferrer">gzip._GzipReader.seek() == DecompressReader.seek()</a> <strong>&lt;=== NEED TO CHANGE THIS</strong></p> </li> <li><p><a href="https://github.com/python/cpython/blob/90d85a9b4136aa1feb02f88aab614a3c29f20ed3/Modules/zlibmodule.c#L1336-L1351" rel="nofollow noreferrer">ZlibDecompressor state</a> + <a href="https://github.com/python/cpython/blob/90d85a9b4136aa1feb02f88aab614a3c29f20ed3/Modules/zlibmodule.c#L1147-L1191" rel="nofollow noreferrer">its deepcopy</a> <strong>&lt;=== NEED TO COPY / RESTORE THIS</strong></p> </li> <li><p><a href="https://github.com/madler/zlib/blob/04f42ceca40f73e2978b50e93806c2a18c1281fc/zlib.h#L86-L106" rel="nofollow noreferrer">z_stream struct</a></p> </li> <li><p><a href="https://github.com/madler/zlib/blob/04f42ceca40f73e2978b50e93806c2a18c1281fc/deflate.h#L100-L271" rel="nofollow noreferrer">internal_state struct</a></p> </li> </ol> <hr /> <p>EDIT2: Also found <a href="https://github.com/madler/zlib/blob/04f42ceca40f73e2978b50e93806c2a18c1281fc/examples/zran.c#L51-L55" rel="nofollow noreferrer">this teaser</a> in <code>zlib</code>:</p> <blockquote> <p>An access point can be created at the start of any deflate block, by saving the starting file offset and bit of that block, and the 32K bytes of uncompressed data that precede that block. 
Also the uncompressed offset of that block is saved to provide a reference for locating a desired starting point in the uncompressed stream.</p> </blockquote> <blockquote> <p>Another way to build an index would be to use inflateCopy(). That would not be constrained to have access points at block boundaries, but requires more memory per access point, and also cannot be saved to file due to the use of pointers in the state.</p> </blockquote> <p>(they call &quot;access points&quot; what I call &quot;check points&quot;; same thing)</p> <p>This pretty much answers all my questions but I still need to find a way to translate this <code>zran.c</code> example to work with the gzip/zlib scaffolding in CPython.</p>
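Before dropping to C, it is worth noting that the checkpointing primitive is already exposed at the Python level: `zlib.decompressobj` objects have a `.copy()` method — the counterpart of `inflateCopy()` from the second zran strategy quoted above. Feeding the gzip stream to a raw decompress object (`wbits=31`, i.e. 16 + MAX_WBITS, selects the gzip framing) lets you snapshot the state at any input offset and later resume from the snapshot instead of from byte 0. A minimal sketch of the idea:

```python
import gzip
import zlib

data = b"abcdefgh" * 5000
gz = gzip.compress(data)
mid = len(gz) // 2

d = zlib.decompressobj(wbits=31)      # wbits=31: expect a gzip wrapper
head = d.decompress(gz[:mid])         # decompress the first half of the input
checkpoint = d.copy()                 # snapshot: inflateCopy() under the hood

rest = d.decompress(gz[mid:])         # keep going with the live object...
replayed = checkpoint.decompress(gz[mid:])   # ...or "seek back" and replay

assert head + rest == data
assert replayed == rest
print("checkpoint resume matches:", len(head), "+", len(rest), "bytes")
```

Each snapshot costs roughly the 32 KB window plus bookkeeping, in line with the zran comment above. `GzipFile` does not expose `copy()`, so using this for seeking means wrapping `decompressobj` in a custom seekable file-like class that records (input offset, checkpoint) pairs at intervals.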
<python><gzip><zlib><python-3.10>
2023-03-22 20:05:51
1
8,721
user124114
75,816,763
17,896,651
Django ImageField specific directory is full (maximum number of files in linux dir)
<p>While having this:</p> <pre><code>profile_pic: ImageField = models.ImageField( _(&quot;profile_picture&quot;), upload_to='profile_pic', blank=True, null=True) </code></pre> <p>and 23M records, I found out that profile_pic is &quot;full&quot; (&quot;no space left on device&quot;) even though the disk has 300GB free.</p> <p>I thought of <strong>splitting the files into folders named after their first 3 letters</strong>, but how can I achieve that in Django?</p> <p><a href="https://i.sstatic.net/RUslv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RUslv.png" alt="enter image description here" /></a></p> <p>REF:</p> <p><a href="https://askubuntu.com/questions/1419651/mv-fails-with-no-space-left-on-device-when-the-destination-has-31-gb-of-space">https://askubuntu.com/questions/1419651/mv-fails-with-no-space-left-on-device-when-the-destination-has-31-gb-of-space</a></p> <p><a href="https://pylessons.com/django-images" rel="nofollow noreferrer">https://pylessons.com/django-images</a></p>
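Django's `upload_to` accepts a callable taking `(instance, filename)` and returning a relative path, which is the usual way to shard uploads across subdirectories so no single directory hits the filesystem's entry limit. A sketch (the random hex name and the two-level, two-character split are arbitrary choices; any fan-out works):

```python
import os
import uuid

def profile_pic_path(instance, filename):
    """upload_to callable: yields e.g. profile_pic/3f/a2/3fa2<...>.jpg"""
    name = uuid.uuid4().hex
    ext = os.path.splitext(filename)[1]
    return os.path.join("profile_pic", name[:2], name[2:4], name + ext)

# In the model it would be wired up as:
#   models.ImageField(upload_to=profile_pic_path, blank=True, null=True)
path = profile_pic_path(None, "me.jpg")
print(path)
```

Two hex characters give 256 buckets per level, so two levels spread 23M files to roughly 350 per leaf directory. Files already uploaded would still need a one-off migration script to move them and rewrite the stored paths.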
<python><django><linux>
2023-03-22 19:57:39
1
356
Si si