Dataset columns (name, dtype, min – max):
  QuestionId          int64        74.8M – 79.8M
  UserId              int64        56 – 29.4M
  QuestionTitle       string       15 – 150 chars
  QuestionBody        string       40 – 40.3k chars
  Tags                string       8 – 101 chars
  CreationDate        date string  2022-12-10 09:42:47 – 2025-11-01 19:08:18
  AnswerCount         int64        0 – 44
  UserExpertiseLevel  int64        301 – 888k
  UserDisplayName     string       3 – 30 chars
77,653,011
12,304,000
Send 50k POST requests within a minute
<p>I want to load-test a system so I want to send around 50k POST requests within a minute. I want to do this for around 4 hours consecutively. The URL and payload is always the same. I just need to play around with the number of requests.</p> <p>This is what I tried:</p> <pre><code>import requests import concurrent.futures import time .... events_per_minute_target = 50000 # Calculate the total number of events to send total_events = events_per_minute_target * 60 * 2 * 2 # Set the duration for each minute duration_per_minute = 60 start_time = time.time() def send_request(_): try: response = requests.post(url, headers=headers, json=payload) # Check if the request was successful (status code 2xx) if response.ok: print(f&quot;Request successful. Response: {response.text}&quot;) else: print(f&quot;Request failed. Status code: {response.status_code}, Response: {response.text}&quot;) except requests.exceptions.RequestException as e: print(f&quot;An error occurred: {e}&quot;) # Use a ThreadPoolExecutor to send requests concurrently for x minutes with concurrent.futures.ThreadPoolExecutor() as executor: for _ in range(duration_per_minute * 2 * 2): # Submit tasks to the executor futures = {executor.submit(send_request, None) for _ in range(events_per_minute_target)} # Wait for all tasks to complete concurrent.futures.wait(futures) # Calculate the actual events per minute for the entire test duration elapsed_time = time.time() - start_time actual_events_per_minute = total_events / (elapsed_time / 60) print(f&quot;Load test completed. Actual events per minute: {actual_events_per_minute:.2f}&quot;) </code></pre> <p>But my Python script keeps running after 4 hours as well so I am not sure if this is behaving as intended or not. 
Since it's still running well past the intended duration, I get the impression that it failed to send 50k requests within a minute.</p> <p>What could be an alternate approach?</p> <p>Edit:</p> <p>After running for more than 4 hours, I got this error:</p> <pre><code>Traceback (most recent call last):
  File &quot;/Users/Desktop/snowplow/gtm_load_test/gtm.py&quot;, line 222, in &lt;module&gt;
    for future in concurrent.futures.as_completed(futures, timeout=duration_per_minute * 4):
  File &quot;/opt/homebrew/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py&quot;, line 239, in as_completed
    raise TimeoutError(
TimeoutError: 45835 (of 50000) futures unfinished
</code></pre>
<python><python-3.x><multithreading><http><concurrency>
2023-12-13 10:58:36
0
3,522
x89
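A follow-up sketch (not from the post): sustaining ~833 requests per second is hard with `requests` plus an unbounded thread pool; async I/O with a paced scheduler is a common alternative. The sketch below shows only the pacing skeleton, with a dummy `fake_send` coroutine standing in for the real POST call (a real run could use e.g. aiohttp or httpx):

```python
import asyncio

async def run_load(send, total_requests, per_minute):
    # Fire send() coroutines in one-second slices to hold a steady rate
    # instead of queueing 50k thread-pool tasks at once.
    per_second = max(1, per_minute // 60)
    sent = 0
    while sent < total_requests:
        batch_size = min(per_second, total_requests - sent)
        batch = [asyncio.create_task(send()) for _ in range(batch_size)]
        await asyncio.gather(*batch)
        sent += batch_size
        # A real run would sleep out the remainder of the second here
        # to pace the load; omitted so this sketch finishes instantly.
    return sent

# Demo with a dummy sender standing in for the real POST call.
calls = []

async def fake_send():
    calls.append(1)

sent = asyncio.run(run_load(fake_send, total_requests=20, per_minute=600))
```

Dedicated load-testing tools (Locust, k6, vegeta) are also commonly suggested for this request volume.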
77,652,773
1,342,516
Matching elements in two equal-sized lists so that the pairs of elements have similar values
<p>For lists</p> <pre><code>l1 = [2.5, 1.1, 3.6] l2 = [3.4, 1.0, 2.2] </code></pre> <p>how to sort l2 efficiently to</p> <pre><code>l2_sorted = [2.2, 1.0, 3.4] </code></pre> <p>so that Sum_i(l2_sorted[i] - l1[i])^2 is minimum?</p> <p>There is a context for this question: the three complex roots, root_A, root_B, and root_C for the polynomial function x^3 + a x^2 + b x + c = 0 vary continuously with the parameters a, b, and c in the complex plane, and I want to keep track which root is which when varying the parameters. Simply sorting them each time according to their locations doesn't work because their trajectories (series of locations when continuously varying the parameters) can cross each other.</p> <p>The actual problem is a much higher-order polynomial with a large number of roots, making it hard to track them manually.</p>
<python>
2023-12-13 10:24:51
1
539
user1342516
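For the 1-D version of the question above, no assignment solver is needed: by the rearrangement inequality, the sum of squared differences is minimised by pairing the k-th smallest element of one list with the k-th smallest of the other. A minimal sketch (for the 2-D complex-root tracking described later, a general assignment solver such as `scipy.optimize.linear_sum_assignment` on the pairwise distance matrix is the usual tool):

```python
def match_sorted(l1, l2):
    # Pair the k-th smallest of l1 with the k-th smallest of l2:
    # for 1-D values this minimises sum((l2_sorted[i] - l1[i])**2).
    order = sorted(range(len(l1)), key=lambda i: l1[i])
    l2_ascending = sorted(l2)
    out = [None] * len(l1)
    for rank, i in enumerate(order):
        out[i] = l2_ascending[rank]
    return out

l2_sorted = match_sorted([2.5, 1.1, 3.6], [3.4, 1.0, 2.2])
```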
77,652,505
5,731,101
class inheritance in python as fields - side effects / incorrect handling
<p>Take the following simplified class structure which is used to clean and structure data from a legacy system.</p> <ul> <li>ProductModels contain fields</li> <li>Fields are their own classes and types on which a bunch of cleaning is done.</li> </ul> <p>Overall the approach works like a charm in our use-case. But there is an issue where newly set products hold values from a previously generated product is the field is unused.</p> <p>The minimal reproducible structure for Field / ProductModel looks like:</p> <pre class="lang-py prettyprint-override"><code>class BaseManager: database_settings = {} def __init__(self, model_class): self.model_class = model_class def build_query(self): pass def connection(self): pass def run_query(self): pass def select(self): pass class MetaModel(type): manager_class = BaseManager def __new__(mcs, name, bases, attrs): field_list = [] for k, v in attrs.items(): if isinstance(v, Field): v.field_name = k v.table_name = attrs.get('table_name') field_list.append(k) cls = type.__new__(mcs, name, bases, attrs) cls.__field_list__ = field_list return cls def _get_manager(cls): return cls.manager_class(model_class=cls) @property def objects(cls): return cls._get_manager() class Field: def __init__(self, field_name, value=None): self.field_name = field_name self.value = value def set_value(self, value): self.value = value class ProductModel(metaclass=MetaModel): sku = Field('sku') name = Field('name') table_name = 'my_table' def __init__(self, **field_data): for field_name, value in field_data.items(): getattr(self, field_name).set_value(value) def __str__(self): return f&quot;{self.sku.value=}, {self.name.value=}&quot; </code></pre> <p>Now look at the first example:</p> <pre class="lang-py prettyprint-override"><code> ...: prod = ProductModel(sku='124', name='Name') ...: print(prod) self.sku.value='124', self.name.value='Name' </code></pre> <p>The value for the sku = 124, which is correct. 
The value for the name = &quot;Name&quot;, which is also correct.</p> <p>But now, the second example:</p> <pre><code>...: prod_two = ProductModel(sku='789')
...: print(prod_two)
self.sku.value='789', self.name.value='Name'
</code></pre> <p>The value for sku has changed to 789, which is correct. But the value for name has remained &quot;Name&quot; instead of being None.</p> <p>It seems that when I create a new product, the field values are somehow kept from the initial product instead of being re-initialised.</p> <p>One way of handling it would be to reset all the field values upon a new <code>ProductModel.__init__()</code>. But this feels like a poor solution. Instead, I would rather understand how to initialise the classes correctly.</p> <p>Can you show me the right way?</p>
<python><python-3.x><class><composition>
2023-12-13 09:47:19
2
2,971
S.D.
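A sketch of what is happening in the question above: `sku` and `name` are class attributes, so every instance mutates the same two `Field` objects. One fix is to give each instance its own copies in `__init__` (metaclass and manager omitted for brevity):

```python
import copy

class Field:
    def __init__(self, field_name, value=None):
        self.field_name = field_name
        self.value = value

class ProductModel:
    # Class attributes: without a fix, every instance shares these two
    # Field objects, so a value set on one product leaks into the next.
    sku = Field('sku')
    name = Field('name')

    def __init__(self, **field_data):
        # Fix: give this instance its own copies before assigning values,
        # so the class-level templates are never mutated.
        for attr in ('sku', 'name'):
            setattr(self, attr, copy.deepcopy(getattr(type(self), attr)))
        for field_name, value in field_data.items():
            getattr(self, field_name).value = value

prod = ProductModel(sku='124', name='Name')
prod_two = ProductModel(sku='789')
```

In the full metaclass version, the same copying could be done in `MetaModel` using the collected `__field_list__`, so each model instance starts from fresh fields.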
77,652,464
11,801,298
How to add rows with missing dates in pandas dataframe?
<p>I have a dataframe with missing dates. How can I add new rows for the absent dates?</p> <pre><code>  dt_object        high
0 2000-01-03  27.490000
1 2000-01-04  27.448000
2 2000-01-05  27.597000
3 2000-01-06  27.597000
4 2000-01-07  27.174000
5 2000-01-10  28.090000
6 2000-01-11  29.250000
7 2000-01-12  28.850000
</code></pre> <p>Expected output:</p> <pre><code>  dt_object        high
0 2000-01-03  27.490000
1 2000-01-04  27.448000
2 2000-01-05  27.597000
3 2000-01-06  27.597000
4 2000-01-07  27.174000
5 2000-01-08   0
6 2000-01-09   0
7 2000-01-10  28.090000
8 2000-01-11  29.250000
9 2000-01-12  28.850000
</code></pre> <p>I saw this answer: <a href="https://stackoverflow.com/questions/19324453/add-missing-dates-to-pandas-dataframe">Add missing dates to pandas dataframe</a>, but it doesn't describe my situation. My dt_object is NOT the index.</p> <p>I need a universal solution for any timeframe (15 minutes and hours too):</p> <pre><code>15 minutes:
        dt_object            high
361980  2023-12-13 00:00:00  90.1216
361981  2023-12-13 00:15:00  90.1308
EMPTY
361983  2023-12-13 00:45:00  90.2750
EMPTY
361985  2023-12-13 01:15:00  90.3023
</code></pre>
<python><pandas>
2023-12-13 09:40:53
1
877
Igor K.
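A sketch of one way to answer the question above without keeping `dt_object` as a permanent index: temporarily set it as the index, `reindex` against a full `date_range`, and reset. The `freq` argument makes the same helper work for days, hours, or 15-minute bars alike:

```python
import pandas as pd

def fill_missing(df, date_col, freq):
    # Reindex against the full range of timestamps at the given
    # frequency ('D', 'h', '15min', ...), filling absent rows with 0.
    out = df.copy()
    out[date_col] = pd.to_datetime(out[date_col])
    full = pd.date_range(out[date_col].min(), out[date_col].max(), freq=freq)
    return (out.set_index(date_col)
               .reindex(full, fill_value=0)
               .rename_axis(date_col)
               .reset_index())

df = pd.DataFrame({
    'dt_object': ['2000-01-03', '2000-01-04', '2000-01-05', '2000-01-06',
                  '2000-01-07', '2000-01-10', '2000-01-11', '2000-01-12'],
    'high': [27.49, 27.448, 27.597, 27.597, 27.174, 28.09, 29.25, 28.85],
})
filled = fill_missing(df, 'dt_object', 'D')
```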
77,652,320
4,847,250
How to fix a VTK DLL import error when compiling with PyInstaller?
<p>I'm having this error when I compile a code that uses the vtk module with pyinstaller. <a href="https://i.sstatic.net/oKMnY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oKMnY.png" alt="enter image description here" /></a></p> <p>usually I add some vtk hiddenimports to fix that kind of issue</p> <pre><code>a = Analysis( ['EEG_Viewer_Converter.py'], pathex=[], binaries=[], datas=data, hiddenimports=['vtkmodules','vtkmodules.vtkWebCore', 'vtkmodules.all', 'vtkmodules.qt.QVTKRenderWindowInteractor', 'vtkmodules.util', 'vtkmodules.util', 'vtkmodules.numpy_interface', 'vtkmodules.numpy_interface.dataset_adapter', </code></pre> <p>in the .spec file generated with pyinstaller. So I add <code>'vtkmodules.vtkWebCore'</code> in it but it didn't fix the issue this time. How can I fix this issue when I execute the .exe?</p>
<python><import><dll><pyinstaller><vtk>
2023-12-13 09:20:33
0
5,207
ymmx
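For the PyInstaller/VTK question above, hand-listing hidden imports often misses the binary DLLs; PyInstaller ships a `collect_all` hook helper that gathers a package's submodules, binaries, and data files in one call. A sketch of how the .spec file might use it (untested against this specific project; `collect_all` returns `datas, binaries, hiddenimports` in that order):

```python
# at the top of the .spec file
from PyInstaller.utils.hooks import collect_all

vtk_datas, vtk_binaries, vtk_hidden = collect_all('vtkmodules')

a = Analysis(
    ['EEG_Viewer_Converter.py'],
    pathex=[],
    binaries=vtk_binaries,
    datas=data + vtk_datas,
    hiddenimports=vtk_hidden,
)
```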
77,652,217
143,091
Allow optional parameter anywhere between subcommands
<p>I'm using argparse to create a tool with sub-commands. I have a flag, for example <code>-j</code> which gives the output as JSON, and want the user to be able to pass it anywhere on the command line:</p> <pre><code>mytool courses list -j mytool -j courses list </code></pre> <p>If I add the argument to my subparser, then I can only use the first version. If I add it to the main parser, then only the second version works.</p> <p>I also tried using the <code>parents</code> argument, but that didn't work: If you pass the <code>-j</code> in the middle, then the last command doesn't see it and sets the variable to False.</p> <pre><code>parser = argparse.ArgumentParser() subparser = parser.add_subparsers(dest='command', required=True) common_args = argparse.ArgumentParser(add_help=False) common_args.add_argument('-j', '--json', action='store_true', help='Output as JSON', default=None) course_parser = subparser.add_parser('course', parents=[common_args]) course_subparser = course_parser.add_subparsers(dest='subcommand', required=True) course_list_parser = course_subparser.add_parser('list', parents=[common_args]) </code></pre>
<python><argparse>
2023-12-13 09:03:03
3
10,310
jdm
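One approach sometimes used for the question above is a two-pass parse: a small pre-parser with `parse_known_args()` plucks the global flag from anywhere on the command line, and the real subcommand parser consumes the rest. A sketch (subcommand names follow the `mytool courses list` example):

```python
import argparse

def parse_cli(argv):
    # Pass 1: a bare parser that knows only the global flag; everything
    # it does not recognise is returned untouched in `rest`.
    pre = argparse.ArgumentParser(add_help=False)
    pre.add_argument('-j', '--json', action='store_true')
    flags, rest = pre.parse_known_args(argv)

    # Pass 2: the normal subcommand hierarchy parses the remainder.
    parser = argparse.ArgumentParser()
    sub = parser.add_subparsers(dest='command', required=True)
    courses = sub.add_parser('courses')
    csub = courses.add_subparsers(dest='subcommand', required=True)
    csub.add_parser('list')

    ns = parser.parse_args(rest)
    ns.json = flags.json
    return ns
```

With this layout `mytool -j courses list`, `mytool courses -j list`, and `mytool courses list -j` all set `json` to True.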
77,652,174
3,701,393
How to make the client download a static file from a custom module on clicking a button?
<p><strong>Odoo 14 Community Edition</strong></p> <p>I have a custom module with a custom model inside it.</p> <p>I created a custom view. It is working correctly.</p> <p>Now I need to add a button that lets the client download a static file from the server.</p> <p>The file should be put in the module folder, though I am still not sure where exactly, because of modularization.</p> <p>Assuming the button is created correctly with a method for it, how do I implement this, and where should I put the file in order for it to work?</p> <p><em>Note: It is just a normal PDF file. Think of it as a button to download a manual document.</em></p>
<python><odoo><odoo-14>
2023-12-13 08:55:05
1
6,768
holydragon
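A sketch of one common pattern for the question above: an HTTP controller serving a file from the module's `static` directory. The module name `my_module`, the route, and the file name are placeholders, and this is untested outside a running Odoo 14 server:

```python
# controllers/main.py of the custom module (names here are placeholders)
from odoo import http
from odoo.modules.module import get_module_resource

class ManualController(http.Controller):

    @http.route('/my_module/download_manual', type='http', auth='user')
    def download_manual(self, **kwargs):
        # <module>/static/ is the conventional place for shippable files
        path = get_module_resource('my_module', 'static', 'manual.pdf')
        with open(path, 'rb') as f:
            data = f.read()
        return http.request.make_response(data, headers=[
            ('Content-Type', 'application/pdf'),
            ('Content-Disposition', 'attachment; filename="manual.pdf"'),
        ])
```

The button's method can then return an `ir.actions.act_url` action pointing at this route, so the browser starts the download.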
77,651,902
3,815,773
How can I figure out where a Python Segmentation fault comes from?
<p>I have a somewhat complex logging program, which I used to run for months in a row without crashes or problems. But recently I am getting crashes of this kind, about once per day:</p> <blockquote> <p>Segmentation fault (core dumped)</p> </blockquote> <p>This is the sole error output I can see. It says &quot;core dumped&quot;, but dumped where to? I found no dump file anywhere.</p> <p>I tested all Python versions 3.8, 3.9, 3.10, 3.11, 3.12 - it may take a while to get this crash, but eventually it happens in all versions. I suspect PyQt5, but that is really a gut feeling only.</p> <p>What can I do to find the cause of the segmentation fault?</p>
<python><pyqt5><segmentation-fault>
2023-12-13 08:04:08
0
505
ullix
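For the question above, the standard library's `faulthandler` module is a common first step: it prints the Python-level traceback of every thread when a segfault arrives. A minimal sketch (a real program would enable it as early as possible; enabling core dumps via `ulimit -c unlimited` and inspecting them with gdb is the complementary C-level route):

```python
import faulthandler
import sys

# On SIGSEGV, dump the Python traceback of all threads before dying.
# Pass an opened log file instead of sys.stderr to keep the output.
faulthandler.enable(file=sys.stderr, all_threads=True)
```

The same effect is available without code changes via `python -X faulthandler script.py` or the `PYTHONFAULTHANDLER=1` environment variable.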
77,651,883
2,238,378
Getting/sending some information from Telegram at bot startup, after Application build() but before run_polling()?
<p>I'm a bit confused about this because I'm new to python-telegram-bot, and most examples online don't reflect the v20 changes for Updater and asyncio.</p> <p>The relevant snippet of code is:</p> <pre><code>def main() -&gt; None:
    persistence_object = PicklePersistence(filepath=persistent_data_file_path)
    application = (
        ApplicationBuilder()
        .token(bot_token)
        .persistence(persistence=persistence_object)
        .post_init(post_init_handler)
        .post_stop(post_stop_handler)
        .build()
    )
    application.run_polling()

async def post_init_handler(application: Application) -&gt; None:
    print(&quot;init&quot;)
    bot_id = application.bot.bot.id
    # Needed: Code snippet here (details below)
    # It produces a list, which is sent as a raw text private message to a statically defined user_id

if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>The missing code snippet obtains a list of all chats the bot is in (private, group, supergroup), with some key information about each (see below), and then sends that info to a specific user_id as a private message. It doesn't enter polling until that has succeeded. The info it collates and sends is:</p> <pre><code>(chat id, username/title of the chat, chat type (private, group etc), is the bot the owner of the chat? 
(if a group/channel), bot's admin rights in the chat (if a group/channel and admin) ) </code></pre> <p>So a typical output sent would be:</p> <pre><code>123456,@fred99,private,(ignored),(ignored) 872981,@technogeek_chat,supergroup,True,(rights data) (etc) </code></pre> <p>I cant figure out the correct way to get and send this data at bot startup.</p> <ul> <li><p><strong>CORRECT API CALL/S TO USE?</strong><br/>All the examples I can find, assume their actions are done reactively within some callback handler as a result of a polled received update, or if done initially, the example uses older Updater/dispatcher techniques not applicable in v20.<br/>I * think * from the Transition guide to v20 that <code>application.create_task</code> could be the correct answer but if so I dont have a good example to rely on, how I'd use it for this.</p> </li> <li><p><strong>CORRECT PLACEMENT FOR API CALL/S?</strong><br/>I tried to move the code to between <code>application...build()</code> and before <code>run_polling()</code>, but the assignment <code>bot_id=application.bot.bot.id</code> failed and I couldnt get that to work. It wanted it initialized, I think, but I wasnt sure what was correct, or which was the more correct place of the two for this code snippet, and left it within <code>post_init_handler()</code> which at least * seemed * to work?</p> </li> <li><p><strong>GETTING DATA FOR THE CALL/S?</strong><br/>I tried to find code to list the chats the bot is in. The only links were to the example &quot;chatmemberbot.py&quot;, and that code doesnt actually detect what chats the bot is in. It relies on adding and removing from a manually maintained list based on updates. Which could potentially desync from actual chats it's in, in some situations (bugs, persist store deleted/corrupted/old store restored, shutdown prior to final flush in the past, ...) and the bot would never know its list wasn't correct.</p> </li> </ul> <p>What's the correct way to do this, please? 
Is it possible to have an actual example, simplified if that's easier, of how it should be done? I've scoured the docs and can't figure it out from those.</p> <p>Help much appreciated, thank you!!</p>
<python><telegram><python-telegram-bot>
2023-12-13 07:59:29
1
590
Stilez
77,651,841
8,037,521
Streamlit: buttons side-by-side in a sidebar
<p>I want to have two buttons side-by-side in the Streamlit sidebar. I searched for this question and found this solution:</p> <pre><code>col1, col2 = st.sidebar.columns(2)
with col1:
    st.sidebar.button(&quot;Add&quot;)
with col2:
    st.sidebar.button(&quot;Remove&quot;)
</code></pre> <p>Unfortunately, it does absolutely nothing for me: the resulting buttons are located in separate rows, not in separate columns. Note that the sidebar is wide enough for both buttons to be displayed side-by-side.</p> <p>Why does this not work, and is there some other solution? Streamlit version 1.29.0.</p>
<python><streamlit>
2023-12-13 07:51:52
1
1,277
Valeria
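For the Streamlit question above, a commonly reported gotcha is that `st.sidebar.button` always targets the sidebar container itself, even inside a column context. A sketch of the usual fix (untested here, since it needs a running Streamlit app) is to call plain `st.button` inside each column's `with` block:

```python
import streamlit as st

col1, col2 = st.sidebar.columns(2)
with col1:
    st.button("Add")      # plain st.button targets the enclosing column
with col2:
    st.button("Remove")
```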
77,651,804
8,601,590
Python is not able to find libraries while accessing from shiny script
<p>I have a python script which is working fine while I'm running it from terminal. The same script I'm trying to invoke from shiny application with the arguments, but the shiny-server log says <code>File &quot;/home/linuxadmin/Desktop/ADLS_test2.py&quot;, line 9, in &lt;module&gt; from azure.identity import DefaultAzureCredential ModuleNotFoundError: No module named 'azure.identity'</code> But <code>pip list</code> is displaying <code>azure-common 1.1.28 azure-core 1.29.5 azure-identity 1.15.0 azure-keyvault-secrets 4.7.0 azure-storage-blob 12.19.0 </code></p> <p>I could understand that python is not able to recognize the libraries while the script is invoking from shiny server app. Below is the sample script from shiny-server app I'm trying to invoke python file. How do I set up the python libraries path to look for, in the Python file?</p> <pre><code>server &lt;- function(input, output, session) { observeEvent(input$submitid,{ source &lt;- renderText({ input$caption }) destination &lt;- renderText({ input$caption2 }) system('python3 /home/linuxadmin/Desktop/ADLS_test2.py source destination') output$info &lt;- paste0('Source : ', source, ' | Destination : ', destination) }) } </code></pre>
<python><r><python-3.x><shiny><shiny-server>
2023-12-13 07:42:17
1
1,233
msr_003
77,651,663
2,444,251
Ffmpeg - transform mulaw 8000khz audio buffer data into valid bytes format
<p>I'm trying to read a bytes variable using ffmpeg, but the audio stream I listen to, sends me buffer data in <strong>mulaw encoded buffer</strong> like this:</p> <p><a href="https://github.com/boblp/mulaw_buffer_data/blob/main/buffer_data" rel="nofollow noreferrer">https://github.com/boblp/mulaw_buffer_data/blob/main/buffer_data</a></p> <p>I'm having trouble running the <strong>ffmpeg_read</strong> function from the transformers library found <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/audio_utils.py#L10" rel="nofollow noreferrer">here</a>:</p> <pre><code>def ffmpeg_read(bpayload: bytes, sampling_rate: int) -&gt; np.array: &quot;&quot;&quot; Helper function to read an audio file through ffmpeg. &quot;&quot;&quot; ar = f&quot;{sampling_rate}&quot; ac = &quot;1&quot; format_for_conversion = &quot;f32le&quot; ffmpeg_command = [ &quot;ffmpeg&quot;, &quot;-i&quot;, &quot;pipe:0&quot;, &quot;-ac&quot;, ac, &quot;-ar&quot;, ar, &quot;-f&quot;, format_for_conversion, &quot;-hide_banner&quot;, &quot;-loglevel&quot;, &quot;quiet&quot;, &quot;pipe:1&quot;, ] try: with subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE) as ffmpeg_process: output_stream = ffmpeg_process.communicate(bpayload) except FileNotFoundError as error: raise ValueError(&quot;ffmpeg was not found but is required to load audio files from filename&quot;) from error out_bytes = output_stream[0] audio = np.frombuffer(out_bytes, np.float32) if audio.shape[0] == 0: raise ValueError( &quot;Soundfile is either not in the correct format or is malformed. Ensure that the soundfile has &quot; &quot;a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote &quot; &quot;URL, ensure that the URL is the full address to **download** the audio file.&quot; ) return audio </code></pre> <p>But everytime I get:</p> <pre><code>raise ValueError( &quot;Soundfile is either not in the correct format or is malformed. 
Ensure that the soundfile has &quot; &quot;a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote &quot; &quot;URL, ensure that the URL is the full address to **download** the audio file.&quot; ) </code></pre> <p>If I grab any wav file, I can do something like this:</p> <pre><code>import wave

with open('./emma.wav', 'rb') as fd:
    contents = fd.read()
print(contents)
</code></pre> <p>And running it through the function does work!</p> <p>So my question would be:</p> <p>How can I transform my <strong>mulaw encoded buffer</strong> data into a valid bytes format that works with <code>ffmpeg_read()</code>?</p> <p><strong>EDIT: I've found a way using pywav (<a href="https://pypi.org/project/pywav/" rel="nofollow noreferrer">https://pypi.org/project/pywav/</a>)</strong></p> <pre><code># 1 stands for mono channel, 8000 sample rate, 8 bit, 7 stands for MULAW encoding
wave_write = pywav.WavWrite(&quot;filename.wav&quot;, 1, 8000, 8, 7)
wave_write.write(mu_encoded_data)
wave_write.close()
</code></pre> <p>This is the result: <a href="https://github.com/boblp/mulaw_buffer_data/blob/main/filename.wav" rel="nofollow noreferrer">https://github.com/boblp/mulaw_buffer_data/blob/main/filename.wav</a></p> <p>The background noise is acceptable.</p> <p>However, I want to use FFmpeg directly instead, to avoid creating a temp file.</p>
<python><ffmpeg>
2023-12-13 07:08:41
2
592
Bob Lozano
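A sketch of an FFmpeg-only route for the question above: a raw mu-law buffer has no header for ffmpeg to probe, so the input format must be declared before `-i`. The helper below only builds the argument list (8000 Hz per the post's stream); piping the buffer through `subprocess` would then mirror the quoted `ffmpeg_read` function:

```python
def mulaw_ffmpeg_command(sampling_rate=8000):
    # Declaring the demuxer (-f mulaw), rate, and channel count BEFORE -i
    # tells ffmpeg how to read the headerless raw buffer on stdin; the
    # output half mirrors the f32le format that ffmpeg_read() expects.
    return [
        "ffmpeg",
        "-f", "mulaw",
        "-ar", str(sampling_rate),
        "-ac", "1",
        "-i", "pipe:0",
        "-f", "f32le",
        "-hide_banner",
        "-loglevel", "quiet",
        "pipe:1",
    ]

cmd = mulaw_ffmpeg_command()
```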
77,651,471
16,853,253
AWS boto3 error 'IllegalLocationConstraintException'
<p>I'm getting this error - <code>'IllegalLocationConstraintException' 'The me-central-1' location constraint is incompatible for the region specific endpoint this request was sent to'</code> issue when trying to access object using presigned-url. I have created bucket to the location <code>('me-central-1')</code> and file gets uploaded then I try to access it using presigned-url it throws error, So I decided to test it again by creating another bucket in <code>('ap-south-1')</code> and generated presigned-url and tried to access it and it works. I still don't get what's the issue causing Location errors when accessing the object in bucket in middle east. The endpoint to be correct to <code>https://s3.me-central-1.amazonaws.com</code> .</p> <p>Below is my python code:</p> <pre><code>def save_to_s3_bucket(file, file_name): try: import base64,json import requests from io import BytesIO from botocore.exceptions import ClientError from botocore.config import Config format, imgstr = file.split(';base64,') s3 = boto3.client('s3', aws_access_key_id=settings.AWS_ACCESS_KEY_ID, aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY, config=Config(signature_version='s3v4'), region_name=settings.AWS_DEFAULT_REGION) random_path = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10)) s3.put_object(Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key='pvt/doc/ln-app/'+random_path+'/'+file_name, Body=base64.b64decode(imgstr), StorageClass='REDUCED_REDUNDANCY') img_url = settings.AWS_BUCKET_URL+'pvt/doc/ln-app/'+random_path+'/'+file_name url = s3.generate_presigned_url('get_object', Params = {'Bucket': settings.AWS_STORAGE_BUCKET_NAME, 'Key': 'pvt/doc/ln-app/'+random_path+'/'+file_name}, ExpiresIn = 10000) print('AWS-S3-URL---&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;', url) res = { 'hasError': False, 'message': 'Success', 'response': { 'key': 'pvt/doc/ln-app/'+random_path+'/'+file_name, 'file_url': url } } return res except Exception as e: res = { 'hasError': True, 
'message': str(e), 'response': None } return res </code></pre>
<python><amazon-web-services><amazon-s3><boto3>
2023-12-13 06:20:32
1
387
Sins97
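For the boto3 question above, a frequently cited cause is that the client signs presigned URLs for whatever region it was constructed with; newer regions such as me-central-1 reject URLs signed for another endpoint. A hedged sketch (bucket and key names are placeholders; running it needs real AWS credentials):

```python
import boto3
from botocore.config import Config

# Pin the client to the bucket's own region so the presigned URL is
# signed for https://s3.me-central-1.amazonaws.com
s3 = boto3.client(
    's3',
    region_name='me-central-1',
    config=Config(signature_version='s3v4'),
)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'pvt/doc/ln-app/some/file.pdf'},
    ExpiresIn=10000,
)
```

In the posted code that would mean making `settings.AWS_DEFAULT_REGION` equal to the bucket's region, `me-central-1`.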
77,651,418
2,052,889
How to use xfail in pytest in a parametrized fixture when pytest.param is an element in a tuple?
<p>This works</p> <pre><code>import pytest @pytest.mark.parametrize( &quot;expected&quot;, [ pytest.param(42, marks=pytest.mark.xfail(reason=&quot;some bug&quot;)), ],) def test_func(expected): assert 123 == expected </code></pre> <p>output is</p> <pre><code>test_xfail_demo_1.py::test_func[42] XFAIL (some bug) ========== 1 xfailed in 0.07s ========== </code></pre> <p>but this one does not</p> <pre><code>import pytest def someFunc(): return 1 def someOtherFunc(): return 123 @pytest.mark.parametrize( &quot;expected&quot;, [ (1, pytest.param(42, marks=pytest.mark.xfail(reason=&quot;some bug&quot;))), ],) def test_func(expected): assert someFunc() == expected[0] assert someOtherFunc() == expected[1] </code></pre> <p>The error I got is</p> <pre><code>&gt; assert someOtherFunc() == expected[1] E AssertionError: assert 123 == ParameterSet(values=(42,), marks=(MarkDecorator(mark=Mark(name='xfail', args=(), kwargs={'reason': 'some bug'})),), id=None) path/to/test_xfail_demo_2.py:10: AssertionError ==================== short test summary info =============== FAILED path/to/test_xfail_demo_2.py::test_func[expected0] - AssertionError: assert 123 == ParameterSet(values=(42,), marks=(MarkDecorator(mark=Mark(name='xfail', args=(), kwargs={'reason': 'some bug'})),), id=None) ============== 1 failed in 0.09s ===== </code></pre> <p>The reason I want expected to be a tuple is because I have multiple assert in a single test function, and I want to pass in all the expected values as a tuple or list, but mark only some of the list elements to be xfail or skip.</p>
<python><pytest><fixtures>
2023-12-13 06:08:17
0
4,192
Shuman
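A sketch for the question above: `pytest.param` must wrap the whole argument tuple so its marks attach to the test case; a `ParameterSet` nested inside a plain tuple is passed through as an ordinary value, which is exactly the `ParameterSet(...)` object seen in the assertion error. Marks apply per case, not per tuple element:

```python
import pytest

def someFunc():
    return 1

def someOtherFunc():
    return 123

# Wrap the WHOLE tuple in pytest.param so the xfail mark attaches to
# the test case; a ParameterSet nested inside a plain tuple is treated
# as an ordinary value. Marks apply per case, not per tuple element.
@pytest.mark.parametrize(
    "a, expected",
    [
        pytest.param(1, 42, marks=pytest.mark.xfail(reason="some bug")),
        (1, 123),  # a passing case stays a plain tuple
    ],
)
def test_func(a, expected):
    assert someFunc() == a
    assert someOtherFunc() == expected
```

If only some asserts in a case should be tolerated, splitting them into separate parametrized tests (or separate cases) is the usual workaround, since xfail cannot target a single assert.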
77,651,219
10,200,497
Finding the first row that meets conditions of a mask and selecting one row after it
<p>This is my dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'a': [100, 1123, 123, 100, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g'], } ) </code></pre> <p>And this is the output that I want. I want to create column <code>x</code>:</p> <pre><code> a b c x 0 100 1000 a NaN 1 1123 11123 b NaN 2 123 1123 c NaN 3 100 0 d NaN 4 1 55 e e 5 0 0 f NaN 6 1 1 g NaN </code></pre> <p>By using a mask:</p> <pre><code>mask = ( (df.a &gt; df.b) ) </code></pre> <p>First of all I need to find the first occurrence of this mask which in my example is row number <code>3</code>. Then I want to move one row below it and use the value in column <code>c</code> to create column <code>x</code>.</p> <p>So in my example, the first occurrence of <code>mask</code> is row 3. One row after it is row 4. That is why <code>e</code> is selected for column <code>x</code>.</p> <p>Note that in row 4 which is one row after the <code>mask</code>, no condition is needed. For example for row 4, It is NOT necessary that <code>df.a &gt; df.b</code>.</p> <p>This is what I have tried:</p> <pre><code>df.loc[mask.cumsum().eq(1) &amp; mask, 'x'] = df.c.shift(-1) </code></pre> <p>I provide some additional <code>df</code>s for convenience to test whether the code works in other examples. For instance what if there are no cases that meet the conditions of <code>mask</code>. In that case I just want a column of <code>NaN</code> for <code>x</code>.</p> <pre><code>df = pd.DataFrame({'a': [1000, 11230, 12300, 10000, 1000, 10000, 100000], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) df = pd.DataFrame({'a': [1, 1, 1, -1, -1, -1, -1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) df = pd.DataFrame({'a': [-1, -1, -1, -1, -1, -1, 100000], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) </code></pre>
<python><pandas>
2023-12-13 05:07:12
2
2,679
AmirX
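A sketch for the question above that also covers the extra no-match dataframes: find the position of the first `True` with `argmax` (guarded by `any()`), then fill `x` only at the following position:

```python
import pandas as pd

def mark_row_after_first(df, mask):
    # Build x as a plain list so its dtype stays object-friendly, then
    # fill only the row after the first True in the mask (if any).
    x = [None] * len(df)
    m = mask.to_numpy()
    if m.any():
        pos = int(m.argmax())          # position of the first True
        if pos + 1 < len(df):
            x[pos + 1] = df['c'].iloc[pos + 1]
    out = df.copy()
    out['x'] = x
    return out

df = pd.DataFrame({
    'a': [100, 1123, 123, 100, 1, 0, 1],
    'b': [1000, 11123, 1123, 0, 55, 0, 1],
    'c': list('abcdefg'),
})
marked = mark_row_after_first(df, df['a'] > df['b'])

# One of the extra dataframes, where the mask never matches:
df_none = pd.DataFrame({
    'a': [1, 1, 1, -1, -1, -1, -1],
    'b': [1000, 11123, 1123, 0, 55, 0, 1],
    'c': list('abcdefg'),
})
marked_none = mark_row_after_first(df_none, df_none['a'] > df_none['b'])
```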
77,651,188
4,451,521
Visual Studio Code: How to run a python file with arguments
<p>I usually run and debug my programs from the console. Lately I have started to run and debug from the VS Code window. However, I cannot do this when the program requires arguments.</p> <p>Searching on how to do this, I found <a href="https://www.youtube.com/watch?v=zSljcz54pYQ" rel="nofollow noreferrer">this video</a> and somehow managed to run (or more accurately, debug) the script from VS Code.</p> <p>However, I have several doubts:</p> <ol> <li><p>To do this I had to press Debug, never Play, because Play calls the script with no arguments. With Debug I can set the arguments and run it but... is this always the case?</p> </li> <li><p>The <code>launch.json</code> file that is used is saved in the <code>.vscode</code> folder, so it is not ideal for git management, right?</p> </li> <li><p>And more importantly, if I have two scripts Script1 and Script2 and they both use the <code>launch.json</code> file, how can I have two files of the same name? Do I have to create and rename files every time? That sounds highly impractical.</p> </li> </ol>
<python><visual-studio-code><command-line-arguments>
2023-12-13 04:54:15
0
10,576
KansaiRobot
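Regarding the doubts above: a single `.vscode/launch.json` can hold several named configurations, each with its own `args`, and the Run and Debug view lets you pick one per launch, so no file renaming is needed; committing the file to git is simply a team choice. A sketch (script names and arguments are invented; newer versions of the Python extension use `"type": "debugpy"` in place of `"type": "python"`):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Script1 with args",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/script1.py",
            "args": ["--input", "data.csv", "-v"]
        },
        {
            "name": "Script2 with args",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/script2.py",
            "args": ["other-arg"]
        }
    ]
}
```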
77,651,034
23,512,643
R: Making a Transparent Plot
<p>I am working with the R programming language.</p> <p>Here is a transparent wire plot made in Python (<a href="https://www.geeksforgeeks.org/3d-wireframe-plotting-in-python-using-matplotlib/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/3d-wireframe-plotting-in-python-using-matplotlib/</a>):</p> <pre><code># importing modules from mpl_toolkits import mplot3d import numpy from matplotlib import pyplot # assigning coordinates a = numpy.linspace(-5, 5, 25) b = numpy.linspace(-5, 5, 25) x, y = numpy.meshgrid(a, b) z = numpy.sin(numpy.sqrt(x**2 + y**2)) # creating the visualization fig = pyplot.figure() wf = pyplot.axes(projection ='3d') wf.plot_wireframe(x, y, z, color ='green') # displaying the visualization wf.set_title('Example 2') pyplot.show() </code></pre> <p><a href="https://i.sstatic.net/IsyAp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IsyAp.png" alt="enter image description here" /></a></p> <p><strong>My Question :</strong> I am trying to recreate this kind of plot in R for a 2 dimensional Normal Distribution</p> <p>I know how to plot a regular 2d normal distribution in R:</p> <pre><code>library(plotly) library(MASS) library(mvtnorm) mu &lt;- c(0, 0) # Mean Sigma &lt;- matrix(c(1, 0.8, 0.8, 1), nrow=2) x &lt;- seq(-3, 3, length.out = 100) y &lt;- seq(-3, 3, length.out = 100) grid &lt;- expand.grid(x=x, y=y) z &lt;- matrix(apply(grid, 1, function(x) dmvnorm(x, mean=mu, sigma=Sigma)), nrow=length(x), ncol=length(y)) plot_ly(x = x, y = y, z = z, type = &quot;surface&quot;) %&gt;% add_trace( type = &quot;surface&quot;, showscale = FALSE, opacity = 0.5, surfacecolor = matrix(rep(1, length(x) * length(y)), nrow=length(x), ncol=length(y)), colorscale = list(c(0, 1), c(&quot;rgb(0,0,0)&quot;, &quot;rgb(0,0,0)&quot;)), lighting = list(ambient = 1, diffuse = 0, fresnel = 0, specular = 0, roughness = 1) ) %&gt;% layout(scene = list(xaxis = list(title = 'X1'), yaxis = list(title = 'X2'), zaxis = list(title = 'Density'))) </code></pre> 
<p><a href="https://i.sstatic.net/BIaU7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BIaU7.png" alt="enter image description here" /></a></p> <p>But I don't know how to make this plot look &quot;transparent wire mesh&quot; style. The closest thing I could find was something like this: <a href="https://www.r-bloggers.com/2010/05/creating-surface-plots/" rel="nofollow noreferrer">https://www.r-bloggers.com/2010/05/creating-surface-plots/</a></p> <p>Can someone please show me how to do this? Note: The plot does not need to be 3D. 2D or 3D is fine.</p> <p>Thanks!</p> <p><strong>EDIT:</strong> I think this is possible?</p> <pre><code>library(plotly) # Creating the data x &lt;- seq(-5, 5, length.out = 50) y &lt;- seq(-5, 5, length.out = 50) grid &lt;- expand.grid(x = x, y = y) R &lt;- sqrt(grid$x^2 + grid$y^2) z &lt;- sin(R) fig &lt;- plot_ly(x = ~grid$x, y = ~grid$y, z = ~z, type = 'scatter3d', mode = 'lines', line = list(color = '#0066FF', width = 2)) %&gt;% layout(title = &quot;Wireframe Plot&quot;, scene = list(xaxis = list(gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = TRUE, backgroundcolor = 'rgb(230, 230,230)'), yaxis = list(gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = TRUE, backgroundcolor = 'rgb(230, 230,230)'), zaxis = list(gridcolor = 'rgb(255, 255, 255)', zerolinecolor = 'rgb(255, 255, 255)', showbackground = TRUE, backgroundcolor = 'rgb(230, 230,230)')), showlegend = FALSE) # Print fig </code></pre> <p><a href="https://i.sstatic.net/feHwu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/feHwu.png" alt="enter image description here" /></a></p>
<python><r>
2023-12-13 03:57:02
1
6,799
stats_noob
77,649,919
23,002,898
In Tkinter, how to reference two checkboxes defined in one file from a function in another file?
<p>When I select <code>Tab2</code>, I would like to select the <code>Select all</code> checkbox, which should automatically select the checkboxes <code>checkbox1</code> and <code>checkbox2</code>. The &quot;Select all&quot; checkbox is located in the <code>x.py</code> file, as is the checkbox list <code>all_checkbox = [checkbox1, checkbox2]</code>, while the two checkboxes (checkbox1 and checkbox2) are found in the <code>page2.py</code> file. The code is reproducible.</p> <p><a href="https://i.sstatic.net/uwrqS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uwrqS.png" alt="enter image description here" /></a></p> <p><strong>PROBLEM:</strong> The problem is in the <code>all_checkbox = [checkbox1, checkbox2]</code> line of the <code>x.py</code> file, because I can't retrieve the two widgets from <code>page2.py</code>. I have tried many different ways, but none of them work.</p> <p><strong>ERROR</strong>: Of course, the error I get is that they are not defined:</p> <pre><code>NameError: name 'checkbox1' is not defined NameError: name 'checkbox2' is not defined </code></pre> <p><strong>IMPORTANT:</strong> I know I could solve the problem by moving <code>all_checkbox = [checkbox1, checkbox2]</code> somewhere else, but I need the checkbox list to be created inside the file <code>x.py</code>, because the code in my question is just a short reproduced example of a larger project, so I need to organize it the way I did.</p> <p>How can I define <code>checkbox1</code> and <code>checkbox2</code> in <code>all_checkbox = [checkbox1, checkbox2]</code> of the <code>x.py</code> file?</p> <p><strong>ATTEMPT AT A SOLUTION:</strong> What did I try to fix the problem? I tried using checkbox1 and checkbox2 as parameters in <code>def myfunction</code>, like this: <code>def myfunction(Verticalbar, checkbox1, checkbox2)</code>.
So also in the <code>main.py</code> file in <code>if tab_text == &quot;Tab2&quot;: myfunction(Verticalbar, checkbox1, checkbox2)</code>, but I get the error:</p> <pre><code>NameError: name 'checkbox1' is not defined NameError: name 'checkbox2' is not defined </code></pre> <p>I can't define them in <code>main.py</code> either.</p> <p><strong>main.py</strong></p> <pre><code>import tkinter as tk from tkinter import Frame, Label, ttk from page1 import Page1 from page2 import Page2 from x import * root = tk.Tk() root.geometry('475x325') style = ttk.Style() style.theme_use('default') nb = ttk.Notebook(root) nb.pack(fill='both', expand=1) page1 = Page1(nb) page2 = Page2(nb) nb.add(page1, text='Tab1', compound='left') nb.add(page2, text='Tab2', compound='left') Verticalbar = Frame(root, width=200, height=300, background=&quot;white&quot;, highlightthickness=0) Verticalbar.place(x=1, y=40) def on_tab_selection_changed(event): # First, get the selected tab id tab_id = nb.select() # Now we can get the tab text using the tab id tab_text = nb.tab(tab_id, &quot;text&quot;) if tab_text == &quot;Tab1&quot;: newlabel1 = Label(Verticalbar, text='TAB 1 EXAMPLE ', background=&quot;white&quot;, fg=&quot;#78b130&quot;, font='Arial 10 bold') newlabel1.place(x=2, y=3) if tab_text == &quot;Tab2&quot;: myfunction(Verticalbar) # Bind to a virtual event to detect when the tab selection changes nb.bind(&quot;&lt;&lt;NotebookTabChanged&gt;&gt;&quot;, on_tab_selection_changed) root.mainloop() </code></pre> <p><strong>page2.py</strong></p> <pre><code>import tkinter as tk class Page2(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) #CHECKBOX 1 def test_checkbox1(): print(&quot;Printed Checkbox 1&quot;) checkbox1 = tk.Checkbutton(self, text=&quot;Checkbox 1&quot;, variable= self.checkbox1, onvalue=1, offvalue=0, height=1, highlightthickness=0, command= test_checkbox1) checkbox1.place(x=250, y=30) #CHECKBOX 2 def test_checkbox2(): print(&quot;Printed Checkbox 2&quot;) checkbox2
= tk.Checkbutton(self, text=&quot;Checkbox 2&quot;, variable= self.checkbox2, onvalue=1, offvalue=0, height=1, highlightthickness=0, command= test_checkbox2) checkbox2.place(x=250, y=60) </code></pre> <p><strong>x.py</strong></p> <pre><code>from tkinter import Label, Tk import tkinter as tk def myfunction(Verticalbar): newlabel2 = Label(Verticalbar, text='TAB 2 EXAMPLE', background=&quot;white&quot;, fg=&quot;#78b130&quot;, font='Arial 10 bold') newlabel2.place(x=2, y=3) def select_first_and_second(): all_checkbox = [checkbox1, checkbox2] for cb in all_checkbox: cb.invoke() ### CHECKBOX SELECT ALL ### ckbx = tk.Checkbutton(Verticalbar, text=&quot;Select all&quot;, variable= ckbx_intvar, onvalue=1, offvalue=0, height=1, highlightthickness=0, command=select_first_and_second) ckbx.place(x=0, y=30) </code></pre> <p>This not needed, but I add them for completeness ​ <strong>page1.py</strong></p> <pre><code>import tkinter as tk class Page1(tk.Frame): def __init__(self, master, **kw): super().__init__(master, **kw) </code></pre>
<python><python-3.x><tkinter>
2023-12-13 02:27:35
1
307
Nodigap
77,649,812
825,227
Issues sending params dictionary to RESTful API
<p>Getting an error when trying to pass query params within a requests get call. Using the following in python:</p> <pre><code>import requests as re url = 'https://api.eia.gov/v2/natural-gas/pri/fut/data/?api_key=' + API_KEY params = { &quot;frequency&quot;: &quot;daily&quot;, &quot;data&quot;: [ &quot;value&quot; ], &quot;start&quot;: &quot;2021-01-01&quot;, &quot;facets&quot;: { &quot;series&quot;: [ &quot;RNGWHHD&quot; ] }, &quot;offset&quot;: 0, &quot;length&quot;: 5000 } x = re.get(url = url, params = params) if x.status_code == 200: print('Success') else: print(x.text) {&quot;error&quot;:&quot;Invalid format for facet 'series'. Must provide which facet and an array of values. For example, to filter by location for VA and CA: facets[location][]=VA&amp;facets[location][]=CA&quot;,&quot;code&quot;:400} </code></pre> <p>I'm able to return data using the <a href="https://www.eia.gov/opendata/browser/natural-gas/pri/fut?frequency=daily&amp;data=value;&amp;facets=duoarea;&amp;duoarea=RGC;&amp;start=2014-01-01&amp;sortColumn=period;&amp;sortDirection=desc;" rel="nofollow noreferrer">dashboard as reference</a>. And also by including params in the requested url as follows:</p> <pre><code>'https://api.eia.gov/v2/natural-gas/pri/fut/data/?frequency=daily&amp;data[0]=value&amp;facets[series][]=RNGWHHD&amp;start=2021-01-01&amp;sort[0][column]=period&amp;sort[0][direction]=desc&amp;offset=0&amp;length=5000&amp;api_key=' + API_KEY x = re.get(url = url) </code></pre> <p>But not by passing a parameter dict to <code>get</code>. What am I doing wrong?</p>
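The API's own error message ("facets[location][]=VA&facets[location][]=CA") shows the encoding it expects, and `requests` does not flatten nested dicts into that bracketed style on its own. A hedged sketch of one possible workaround (the helper name is mine, not part of requests): flatten the nested dict into a list of key/value pairs, which `requests` does accept for `params`.

```python
# Sketch (helper name is hypothetical): flatten nested dict/list params into
# the bracketed PHP-style keys this particular API expects.
def flatten_params(params):
    flat = []
    for key, value in params.items():
        if isinstance(value, dict):
            # {"facets": {"series": ["RNGWHHD"]}} -> facets[series][]=RNGWHHD
            for sub, vals in value.items():
                for v in vals:
                    flat.append((f"{key}[{sub}][]", v))
        elif isinstance(value, list):
            # {"data": ["value"]} -> data[0]=value
            for i, v in enumerate(value):
                flat.append((f"{key}[{i}]", v))
        else:
            flat.append((key, value))
    return flat

pairs = flatten_params({
    "frequency": "daily",
    "data": ["value"],
    "facets": {"series": ["RNGWHHD"]},
    "offset": 0,
})
# requests accepts a list of tuples: re.get(url, params=pairs)
```

Passing a list of tuples keeps the repeated `facets[series][]` keys intact, which a plain dict could not represent.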
<python><python-requests>
2023-12-13 01:39:49
1
1,702
Chris
77,649,754
2,526,586
PyQt/Pyside: storing individual value in QPushButtons(or QWidgets in general)
<p>Suppose I have some <code>QPushButton</code>s displayed in a <code>QListWidget</code> or <code>QTableWidget</code>. The <code>QPushButtons</code> connect to the same slot function that carries out the same action for their relative list item or table row.</p> <p>I am not sure if I am thinking in the right direction:</p> <p>Suppose that the list items or table rows are created in a <code>for</code> loop, looping a list, for example. During the creation of the list item, or table row, along with the <code>QPushButton</code> in it, I am supposed to store some sort of data, or id, into the <code>QPushButton</code>, so that when the <code>QPushButton</code> is clicked, its connected slot will know which <code>QPushButton</code> is clicked and carry out some operation specifically to the <code>QPushButton</code>'s parent list item/table row. (assuming I am using the same one slot function for all the <code>QPushButton</code>s)</p> <p>So my question is: Given that the list items or table rows are created in a <code>for</code> loop of a list, how do I store some sort of list-related value(s) into the <code>QPushButton</code> and how do I retrieve the value(s) inside the slot function?</p> <p>I suppose my question isn't really bound to <code>QPushButton</code>s. The same question can go to <code>QCheckBox</code>es, <code>QLineEdit</code>s, or any <code>QWidget</code>s in general as long as they can send out signals.</p>
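The per-row binding asked about here is commonly done either by storing a value on the widget (e.g. `setProperty`) and reading it back via `self.sender()` in the slot, or by binding the value at connect time with `functools.partial`. The sketch below shows the `partial` pattern with a tiny stand-in signal class so it runs without Qt installed; `FakeSignal` is illustrative only and not a Qt API.

```python
from functools import partial

# Minimal stand-in for a Qt signal, to demonstrate the binding pattern
# without requiring Qt (FakeSignal is NOT a real Qt class).
class FakeSignal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self):
        for slot in self._slots:
            slot()

clicked_rows = []

def on_clicked(row_id):
    # One shared slot; the row identity arrives as a bound argument.
    clicked_rows.append(row_id)

buttons = []
for row_id in ["row-0", "row-1", "row-2"]:
    sig = FakeSignal()
    # In real Qt this would be: button.clicked.connect(partial(on_clicked, row_id))
    sig.connect(partial(on_clicked, row_id))
    buttons.append(sig)

buttons[1].emit()
# clicked_rows == ["row-1"]
```

The same `partial` call works verbatim on a real `QPushButton.clicked` signal, and the pattern applies equally to `QCheckBox` or `QLineEdit` signals.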
<python><pyqt><pyside><qpushbutton><slot>
2023-12-13 00:50:18
0
1,342
user2526586
77,649,610
9,661,990
Do I need to explicitly call con.close() after having done con.execute in SQLite?
<p>Suppose we have this code:</p> <pre><code>def _te(tn: str, db_id: str) -&gt; bool: with _connect_db(db_id) as conn: cur = conn.cursor() cur.execute(&quot;SELECT count(*) FROM sqlite_master WHERE type='table' AND name=?&quot;, (tn,)) return cur.fetchone()[0] == 1 def _connect_db(db_id: str) -&gt; sqlite3.Connection: return sqlite3.connect(db_id) </code></pre> <p>Do I need to call close at the end of <code>_te</code>?</p> <p>I found information that seems contradictory at my level of understanding:</p> <p>On one hand, this accepted answer says (with a link to SQLite3 source code): <a href="https://stackoverflow.com/a/25988110/9661990">https://stackoverflow.com/a/25988110/9661990</a></p> <blockquote> <p>You don't need to worry about closing the database. When you call prepare or execute, those calls are automatically call close when they are done. There is an internal rescue/ensure block that ensures the db is closed even if an error is raised. You can see this in the source code for SQLite3::Database.</p> </blockquote> <p>But on the other hand, this SQLite3 doc: <a href="https://docs.python.org/3/library/sqlite3.html" rel="nofollow noreferrer">https://docs.python.org/3/library/sqlite3.html</a></p> <p>has a snippet with <code>execute</code>, and yet in this snippet it specifically commands a manual <code>close</code>:</p> <pre><code># Create and fill the table. 
con = sqlite3.connect(&quot;:memory:&quot;) con.execute(&quot;CREATE TABLE lang(name, first_appeared)&quot;) data = [ (&quot;C++&quot;, 1985), (&quot;Objective-C&quot;, 1984), ] con.executemany(&quot;INSERT INTO lang(name, first_appeared) VALUES(?, ?)&quot;, data) # Print the table contents for row in con.execute(&quot;SELECT name, first_appeared FROM lang&quot;): print(row) print(&quot;I just deleted&quot;, con.execute(&quot;DELETE FROM lang&quot;).rowcount, &quot;rows&quot;) # close() is not a shortcut method and it's not called automatically; # the connection object should be closed manually con.close() </code></pre>
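A detail worth separating here: in Python's `sqlite3`, `with conn:` is a *transaction* context (commit/rollback), not a closing one, which is why the docs snippet closes manually. A minimal sketch combining `contextlib.closing` (to guarantee the close) with `with conn:` (for the transaction):

```python
import sqlite3
from contextlib import closing

# "with closing(...)" guarantees conn.close(); the inner "with conn:" only
# commits or rolls back; it deliberately leaves the connection open.
with closing(sqlite3.connect(":memory:")) as conn:
    with conn:  # transaction scope
        conn.execute("CREATE TABLE t(x)")
        conn.execute("INSERT INTO t VALUES (1)")
    count = conn.execute("SELECT count(*) FROM t").fetchone()[0]

# After the closing() block the connection really is closed:
try:
    conn.execute("SELECT 1")
    closed = False
except sqlite3.ProgrammingError:
    closed = True
```

So `_connect_db(db_id)` used as `with _connect_db(db_id) as conn:` does not close the connection; wrapping it in `closing(...)` (or calling `conn.close()` explicitly) does.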
<python><sqlite>
2023-12-12 23:50:52
1
441
InfiniteLoop
77,649,601
2,821,077
Combining Python "in" and "==" operators has confusing behavior
<p>A buddy of mine is learning python and I saw an odd thing he did with his code:</p> <pre><code>if ch in input[index] == test[index]) </code></pre> <p>Obviously there's some context missing for that fragment, but interestingly, that if statement gives him his desired behavior. Similarly, the below statement prints <code>True</code>:</p> <pre><code>print(&quot;w&quot; in &quot;w&quot; == &quot;w&quot;) </code></pre> <p>I'm not a python expert, so I don't understand why this happens. I would assume some kind of precedence and/or associativity would bungle things up here and return <code>False</code> or throw an error.</p> <p>From what I can tell <code>&quot;w&quot; in &quot;w&quot; == &quot;w&quot;</code> is the same as <code>&quot;w&quot; in &quot;w&quot; and &quot;w&quot;== &quot;w&quot;</code>, which is rather unintuitive. I'm hoping someone who understands python interpreting can explain why this kind of statement is evaluated in this way.</p>
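The behavior described above is comparison chaining: Python treats `in` as a comparison operator, so `a in b == c` is one chained comparison rather than a precedence question. A short demonstration:

```python
# "w" in "w" == "w" chains as ("w" in "w") and ("w" == "w"),
# with the middle operand evaluated only once.
chained = "w" in "w" == "w"
manual = ("w" in "w") and ("w" == "w")
grouped = ("w" in "w") == "w"  # explicit grouping: True == "w" is False
```

This matches the language reference: all comparison operators (including `in`, `not in`, `is`, `is not`) have the same priority and chain pairwise, which is exactly the unintuitive equivalence the question observes.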
<python><operators><operator-precedence>
2023-12-12 23:48:22
1
389
Christian Baker
77,649,536
4,019,495
Is there an easy way to reorder the coordinate levels in a DataArray?
<p>I have the following DataArray.</p> <pre><code>data = xr.DataArray( np.arange(24).reshape(2, 3, 4), dims=['x', 'y', 'z'], coords={ 'x': ['a', 'b'], 'y': [10, 20, 30], 'z': [100, 200, 300, 400] } ) transposed_data = data.transpose('z', 'y', 'x') </code></pre> <p>When I print <code>transposed_data</code>, I get coordinates:</p> <pre><code>&gt;&gt;&gt; print(transposed_data) ... Coordinates: * x (x) &lt;U1 'a' 'b' * y (y) int64 10 20 30 * z (z) int64 100 200 300 400 </code></pre> <p>that is, the coordinates are still in the order x, y, z. I would like to reorder them to be z, y, x (matching the dimension order). Specifically, I want to see</p> <pre><code>&gt;&gt;&gt; print(transposed_data) ... Coordinates: * z (z) int64 100 200 300 400 * y (y) int64 10 20 30 * x (x) &lt;U1 'a' 'b' </code></pre> <p>Is there an easy way to do this?</p>
<python><python-xarray>
2023-12-12 23:24:58
1
835
extremeaxe5
77,649,492
1,458,688
Running a Python script in Node-RED results in [Errno 2] No such file or directory
<p>I am just starting out with Node-RED and trying to run a Python script but get the error &quot;No such file or directory&quot;. Node-RED is running in Docker on my Pi4.</p> <pre><code>python: can't open file '/home/pi/docker/nodered/data/python/bins.py': [Errno 2] No such file or directory </code></pre> <p>In Node-RED, I'm using an inject node to run an exec node with the Python command, then outputting to a debug node.</p> <p>I can run the script from the command line using the same command so the path is correct. This is the output:</p> <pre><code>pi@raspberrypi4:/ $ python /home/pi/docker/nodered/data/python/bins.py I am bloody here: /home/pi/docker/nodered/data/python/bins.py </code></pre> <p>So the path is correct. What am I doing wrong that's stopping Node-RED from seeing the script? I've read several posts that say to use the full path. The only other thing I can think of is that Docker can't see outside of itself and so would require a relative path, but I've tried this as well.</p> <p>Feeling a bit dim at the moment!</p> <p>Many thanks,</p>
<python><node-red>
2023-12-12 23:06:11
1
319
Digital Essence
77,649,333
6,307,685
Substituting entire IndexedBase objects in Sympy
<p>I have an expression for the entropy <code>h</code> written in terms of the discrete distribution <code>p</code>, which is an IndexedBase.</p> <pre class="lang-py prettyprint-override"><code>import sympy as sym x = sym.Idx(&quot;x&quot;) p = sym.IndexedBase(&quot;p&quot;) N_X = sym.Symbol(&quot;N_X&quot;, positive=True) h = -sym.summation(p[x] * sym.log(p[x]), (x, 1, N_X)) </code></pre> <p>Pretty printing <code>h</code> outputs:</p> <p><a href="https://i.sstatic.net/joEIn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/joEIn.jpg" alt="enter image description here" /></a></p> <p>I want to replace <code>p</code> by an expression in terms of the joint distribution <code>q</code>. Mathematically, I want to replace p by</p> <p><a href="https://i.sstatic.net/NFzBO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NFzBO.jpg" alt="enter image description here" /></a></p> <p>Here is my attempt:</p> <pre class="lang-py prettyprint-override"><code>y = sym.Idx(&quot;y&quot;) N_Y = sym.Symbol(&quot;N_Y, positive=True&quot;) q = sym.IndexedBase(&quot;q&quot;) h.subs(p, sym.summation(q[x,y], (y, 1, N_Y))) </code></pre> <p>This raises a type error:</p> <pre class="lang-py prettyprint-override"><code>TypeError: The base can only be replaced with a string, Symbol, IndexedBase or an object with a method for getting items (i.e. an object with a `__getitem__` method). </code></pre> <p>I get that the sum is not an <code>IndexedBase</code>, and that is an issue. But I do not know how to solve this: is there a way to create an <code>IndexedBase</code> that is represented in this way (as a sum of an <code>IndexedBase</code> <code>q</code>)?</p>
<python><sympy><symbolic-math>
2023-12-12 22:23:58
0
761
soap
77,649,233
1,601,580
How to install swarms; AssertionError: Error: Could not open 'optimum/version.py' due [Errno 2] No such file or directory: 'optimum/version.py'
<p>I'm trying to install swarms but I cannot and get this error:</p> <pre class="lang-py prettyprint-override"><code>pip install swarms Collecting swarms Using cached swarms-2.7.7-py3-none-any.whl.metadata (15 kB) Collecting Pillow (from swarms) Using cached Pillow-10.1.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (9.5 kB) Collecting PyPDF2 (from swarms) Using cached pypdf2-3.0.1-py3-none-any.whl (232 kB) Collecting accelerate (from swarms) Using cached accelerate-0.25.0-py3-none-any.whl.metadata (18 kB) Collecting asyncio (from swarms) ... Using cached optimum-0.1.1.tar.gz (17 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Using cached optimum-0.1.0.tar.gz (16 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [24 lines of output] Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 7, in &lt;module&gt; FileNotFoundError: [Errno 2] No such file or directory: 'optimum/version.py' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/ubuntu/miniconda/envs/tree_of_thoughts/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/home/ubuntu/miniconda/envs/tree_of_thoughts/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/home/ubuntu/miniconda/envs/tree_of_thoughts/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) File &quot;/tmp/pip-build-env-rn01vqif/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 
325, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;/tmp/pip-build-env-rn01vqif/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 295, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-rn01vqif/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 480, in run_setup super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script) File &quot;/tmp/pip-build-env-rn01vqif/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 311, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 10, in &lt;module&gt; AssertionError: Error: Could not open 'optimum/version.py' due [Errno 2] No such file or directory: 'optimum/version.py' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>how to fix this issue?</p> <p>cross: <a href="https://discord.com/channels/999382051935506503/1184252377356836954" rel="nofollow noreferrer">https://discord.com/channels/999382051935506503/1184252377356836954</a></p>
<python><machine-learning><deep-learning><nlp>
2023-12-12 21:58:14
0
6,126
Charlie Parker
77,649,157
9,251,158
How to attach files with spaces in the filename when sending email
<p>I am developing a Python application to send email. I follow <a href="https://www.geeksforgeeks.org/send-mail-attachment-gmail-account-using-python/" rel="nofollow noreferrer">this tutorial</a> and attach with this code:</p> <pre><code>import os import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.mime.base import MIMEBase from email import encoders def prepare_attachment(filepath): filename = os.path.basename(filepath) attachment = open(filepath, &quot;rb&quot;) # instance of MIMEBase and named as p p = MIMEBase('application', 'octet-stream') # To change the payload into encoded form. p.set_payload((attachment).read()) # encode into base64 encoders.encode_base64(p) p.add_header('Content-Disposition', &quot;attachment; filename= %s&quot; % filename) return p class Sender(object): # other code... def send(self): msg = MIMEMultipart() # other code... # open the file to be sent for attachment in self.attachments: p = prepare_attachment(attachment) # attach the instance 'p' to instance 'msg' msg.attach(p) # rest of code... </code></pre> <p>The code runs and sends the email. When the user adds an attachment, such as <code>my attachment.pdf</code>, the receiver sees <code>my</code> as the name of the attachment and their client does not show a preview of the attachment.</p> <p>When I replace spaces in the filename with <code>%20</code>, the receiver sees a preview of the file but also sees <code>%20</code> in the filename.</p> <p>How can I attach files with spaces in the filename?</p>
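The symptom described (the client seeing `my` as the filename) is consistent with the space ending the unquoted header parameter. The usual fix is to pass the filename as a keyword argument to `add_header`, so the `email` package quotes (or RFC 2231-encodes) the parameter itself instead of splicing it raw into the header string. A minimal sketch:

```python
from email import encoders
from email.mime.base import MIMEBase

p = MIMEBase("application", "octet-stream")
p.set_payload(b"dummy payload")  # stand-in for the real file contents
encoders.encode_base64(p)

# Keyword form: the email package quotes the parameter value, so the
# space in the filename survives intact.
p.add_header("Content-Disposition", "attachment", filename="my attachment.pdf")

header = p["Content-Disposition"]
# header is: attachment; filename="my attachment.pdf"
```

Compare this with the tutorial's `"attachment; filename= %s" % filename`, which leaves the value unquoted and breaks at the first space.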
<python><email><email-attachments><mime>
2023-12-12 21:37:21
1
4,642
ginjaemocoes
77,649,036
9,072,753
how to detect if input is stdin or not using python click?
<p>Consider the following code:</p> <pre><code>import click @click.command() @click.argument(&quot;file&quot;, type=click.File()) def cli(file): print(file) if __name__ == &quot;__main__&quot;: cli() </code></pre> <p>Executing as:</p> <pre><code>$ python ./cmd.py - &lt;_io.TextIOWrapper name='&lt;stdin&gt;' mode='r' encoding='utf-8'&gt; $ touch '&lt;stdin&gt;' $ python ./cmd.py '&lt;stdin&gt;' &lt;_io.TextIOWrapper name='&lt;stdin&gt;' mode='r' encoding='UTF-8'&gt; </code></pre> <p>There is suprising difference in encoding.</p> <p>How can I detect if the input was actual <code>-</code> and not anything else in python click?</p>
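One possible workaround (an assumption on my part, not the only idiom): keep the argument a plain string so the raw `-` survives conversion, then open it with `click.open_file`, which still maps `-` to stdin the way `type=click.File()` does.

```python
import click

def open_maybe_stdin(value):
    """Return (is_stdin, file object) for a click-style filename argument."""
    # click.open_file preserves the "-" convention: "-" opens stdin,
    # anything else opens a regular file.
    return value == "-", click.open_file(value)

@click.command()
@click.argument("file")  # plain string: the raw "-" is preserved
def cli(file):
    is_stdin, f = open_maybe_stdin(file)
    with f:
        click.echo(f"is_stdin={is_stdin}")

flag, f = open_maybe_stdin("-")
```

This sidesteps the ambiguity entirely: a literal file named `<stdin>` gives `is_stdin=False` because the decision is made on the raw argument, not on the opened stream's `name` attribute.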
<python><python-3.x><python-click>
2023-12-12 21:06:28
1
145,478
KamilCuk
77,649,007
3,084,178
Python itertools all permutations of a list with repeated elements
<p>I'm wanting to generate all permutations of a list that contains repetitions.</p> <p>A naive approach might be to use itertools.permutations, but permutations doesn't care about elements being repeated:</p> <pre><code>for j in itertools.permutations([0,1,1]): print(j) (0, 1, 1) (0, 1, 1) (1, 0, 1) (1, 1, 0) (1, 0, 1) (1, 1, 0) </code></pre> <p>So instead of 3 elements we get 6.</p> <p>You could track the elements it's producing and discard repetitions. But it's easier to do it with product and discard ones that don't conform to the original shape:</p> <pre><code>for j in itertools.product([0,1],repeat=3): if sum(j)!=2: continue print(j) (0, 1, 1) (1, 0, 1) (1, 1, 0) </code></pre> <p>This is the desired output, but does itertools support this more naturally? Obviously, this could get computationally intense for bigger lists.</p>
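For the record, `more_itertools.distinct_permutations` supports this directly. A dependency-free sketch is the classic "skip duplicate branches" recursion over the sorted input, which never generates a duplicate in the first place (unlike filtering `itertools.product` or deduplicating `permutations`, which still do the full amount of work):

```python
# Yield each distinct permutation of a multiset exactly once.
def distinct_permutations(items):
    items = sorted(items)

    def rec(prefix, remaining):
        if not remaining:
            yield tuple(prefix)
            return
        prev = object()  # sentinel that compares unequal to everything
        for i, x in enumerate(remaining):
            if x == prev:
                continue  # same value as the previous branch: skip duplicate
            prev = x
            yield from rec(prefix + [x], remaining[:i] + remaining[i + 1:])

    yield from rec([], items)

result = list(distinct_permutations([0, 1, 1]))
# result == [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

For `[1, 1, 2, 2]` this yields 4!/(2!·2!) = 6 tuples rather than the 24 that `itertools.permutations` would produce before deduplication.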
<python><permutation><python-itertools>
2023-12-12 20:59:37
0
1,014
Dr Xorile
77,648,923
2,573,075
Can I render a graph template in flask and save to PDF?
<p>It is not very clear for me if I can save a generated chart graphics to PDF.</p> <p>I have the following case, I have and endpoint that returns a json and i want to generate graphics and tables from that, and after than to make a PDF.</p> <p>The source of everything (endpoint, webpage) are on the same server. I would like to avoid opening another page and print to pdf from there.</p> <p>Here is the code that takes the json from endpoint and create what i want to save to PDF.</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;title&gt;Local request&lt;/title&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=297mm, initial-scale=1.0&quot;&gt; &lt;script src=&quot;https://cdn.jsdelivr.net/npm/chart.js&quot;&gt;&lt;/script&gt; &lt;style&gt; .canvas { max-width: 297mm; max-height: 297mm; margin-left: auto; margin-right: auto; display: block; break-inside: avoid; page-break-inside: avoid; } table, th, td { text-align: center; border-collapse: collapse; border: 1px solid black; } th { background-color: grey; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1 style=&quot;text-align: center&quot;&gt;XPF REPORT&lt;/h1&gt; &lt;p&gt;&lt;br&gt;&lt;/p&gt; &lt;p&gt;&lt;br&gt;&lt;/p&gt; &lt;div class=&quot;canvas&quot;&gt; &lt;h2 style=&quot;text-align: left;&quot;&gt;PRI par IC et par etape&lt;/h2&gt; &lt;canvas id=&quot;PRIParICEtParEtape&quot;&gt;&lt;/canvas&gt; &lt;/div&gt; &lt;p&gt;&lt;br&gt;&lt;/p&gt; &lt;div class=&quot;canvas&quot;&gt; &lt;center&gt; &lt;h2 style=&quot;text-align: left;&quot;&gt;PRI par IC et par etape / table&lt;/h2&gt; &lt;table width=&quot;300px&quot;&gt; &lt;thead&gt; &lt;tr&gt; &lt;th&gt;IC&lt;/th&gt; &lt;th&gt;Total&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody id=&quot;icTableBody&quot;&gt; &lt;/tbody&gt; &lt;tfoot&gt; &lt;tr&gt; &lt;td&gt;Total:&lt;/td&gt; &lt;td id=&quot;totalSum&quot;&gt;&lt;/td&gt; &lt;/tr&gt; 
&lt;/tfoot&gt; &lt;/table&gt; &lt;/center&gt; &lt;/div&gt; &lt;div&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;/div&gt; &lt;div class=&quot;canvas&quot;&gt; &lt;h2 style=&quot;text-align: left;&quot;&gt;PRI par DSP et par etape&lt;/h2&gt; &lt;canvas id=&quot;PRIParDSPEtParEtape&quot;&gt;&lt;/canvas&gt; &lt;/div&gt; &lt;script&gt; document.addEventListener('DOMContentLoaded', function (event) { fetchData('http://localhost:8069/xpf_report') .then(data =&gt; { console.log(data); PRIParDSPEtParEtape(data); PRIParICEtParEtape(data); createICTable(data); }) .catch(error =&gt; console.error('Error fetching data:', error)); }); function createICTable(data) { const icTableBody = document.getElementById('icTableBody'); const totalSumElement = document.getElementById('totalSum'); const icTotalSum = {}; data.forEach(lead =&gt; { const icName = lead.user.name; const expectedRevenue = lead.expected_revenue; if (!icTotalSum[icName]) { icTotalSum[icName] = 0 } icTotalSum[icName] += expectedRevenue; }); let totalSum = 0; Object.keys(icTotalSum).forEach(icName =&gt; { const row = document.createElement('tr'); const icCell = document.createElement('td'); const totalCell = document.createElement('td'); icCell.textContent = icName; totalCell.textContent = icTotalSum[icName].toLocaleString('fr', {useGrouping: true}); row.append(icCell); row.append(totalCell); icTableBody.appendChild(row); totalSum += icTotalSum[icName]; }); totalSumElement.textContent = totalSum.toLocaleString('fr', {useGrouping: true}) } function PRIParICEtParEtape(data) { const userStageSum = {}; data.forEach(lead =&gt; { const userName = lead.user.name; const stage = lead.stage.stage; const expectedRevenue = lead.expected_revenue; if (!userStageSum[userName]) { userStageSum[userName] = {}; } if (!userStageSum[userName][stage]) { userStageSum[userName][stage] = 0; } userStageSum[userName][stage] += expectedRevenue; }); const ctx = document.getElementById('PRIParICEtParEtape').getContext('2d'); const userNames = 
Object.keys(userStageSum); const stages = [...new Set(data.map(lead =&gt; lead.stage.stage))]; const datasets = stages.map(stage =&gt; ({ label: stage, data: userNames.map(userName =&gt; userStageSum[userName][stage] || 0), borderWidth: 1 })); new Chart(ctx, { type: 'bar', data: { labels: userNames, datasets: datasets }, options: { scales: { x: { type: 'category', stacked: true, ticks: { color: 'black', order: [] } }, y: { stacked: true, beginAtZero: true, ticks: {} } } } }); } async function fetchData(apiUrl) { const response = await fetch(apiUrl); return await response.json(); } &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<javascript><python><html><flask><pdf>
2023-12-12 20:36:54
0
633
Claudiu
77,648,683
401,173
How do I reference a namespace at the root of my Python package?
<p>I am creating a local Python package to pip install into a series of applications.</p> <p>I have a file in a subfolder of the package</p> <p><code>\src\my_package\model\do_stuff_x.py</code></p> <p>and a file at the root</p> <p><code>\src\my_package\request_models.py</code></p> <p>When I try to use <code>from request_models import MyModel</code> in <code>do_stuff_x.py</code>, I get the error</p> <p><code>No module named 'request_models'</code></p> <p>How do I properly reference a module at the root of my pip-installed package?</p>
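The usual answer is that absolute imports must be rooted at the package (`from my_package.request_models import MyModel`, or relatively `from ..request_models import MyModel`), because a bare `from request_models import ...` looks for a top-level module that no longer exists once the file lives under `my_package`. A self-contained reproduction (the package name is kept from the question; the layout is built in a temp dir purely for illustration):

```python
import importlib
import os
import sys
import tempfile

# Recreate the described layout: my_package/request_models.py and
# my_package/model/do_stuff_x.py, with __init__.py files so both
# directories are importable packages.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "my_package")
os.makedirs(os.path.join(pkg, "model"))
for d in (pkg, os.path.join(pkg, "model")):
    open(os.path.join(d, "__init__.py"), "w").close()

with open(os.path.join(pkg, "request_models.py"), "w") as f:
    f.write("class MyModel:\n    name = 'demo'\n")

with open(os.path.join(pkg, "model", "do_stuff_x.py"), "w") as f:
    # The absolute import is rooted at the package, not at this file's folder.
    f.write("from my_package.request_models import MyModel\n")

sys.path.insert(0, root)
mod = importlib.import_module("my_package.model.do_stuff_x")
print(mod.MyModel.name)
```

When the package is pip-installed, `sys.path` manipulation is unnecessary; the same `from my_package.request_models import MyModel` line resolves because the package root is on the path.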
<python><pip><pyproject.toml>
2023-12-12 19:46:40
1
3,241
Josh Russo
77,648,668
8,112,003
setuptools dependency-management for different python versions
<p>Assume you have a giant continuous integration system to care about; there are tons of terabytes of docker-containers, rebuilding containers takes more time than anybody could ever pay for, and whatever version you change will break all the dependencies.</p> <p>Conclusion: you have docker-containers with python x.y.z and for all of those I need to install a pypi-package.</p> <p>Therefore I tried something like this:</p> <pre><code>install_requires = [ &quot;jsonschema==2.6.0;python_version&lt;='3.7'&quot;, &quot;jsonschema==4.19.1;python_version&gt;'3.7'&quot;, ] </code></pre> <p>but still I get an error in python 3.7.9 (that could be ignored)</p> <pre><code>ERROR: ecuci-github 1.0.19 has requirement jsonschema==4.19.1, but you'll have jsonschema 2.6.0 which is incompatible. ERROR: ecuci-jgod 1.0.22 has requirement jsonschema==4.19.1, but you'll have jsonschema 2.6.0 which is incompatible. ERROR: ecuci-checks 1.0.8 has requirement jsonschema==4.19.1, but you'll have jsonschema 2.6.0 which is incompatible. ERROR: ecuci-release-gui 1.0.75 has requirement jsonschema==4.19.1, but you'll have jsonschema 2.6.0 which is incompatible. ERROR: ecuci-packaging 1.1.79 has requirement jsonschema==4.19.1, but you'll have jsonschema 2.6.0 which is incompatible.
Installing collected packages: ecuci-certs, ecuci-release, typing-extensions, annotated-types, pydantic-core, pydantic, peppercorn, ecuci-varparser, ecuci-buildgen, pywin32, packaging, urllib3, idna, certifi, charset-normalizer, requests, websocket-client, docker, jsonschema, appdirs, cached-property, lxml, pytz, six, isodate, defusedxml, requests-toolbelt, zipp, importlib-metadata, attrs, zeep, ecuci-github, ecuci-jgod, ecuci-flexcopy, ecuci-choco, ecuci-checks, ecuci-security, ecuci-engine, pycparser, cffi, cryptography, pyspnego, requests-ntlm, ecuci-flexclean, ecuci-release-gui, ecuci-quatelets, ecuci-validation, ecuci-packaging, ecuci-conan-install, ecuci-tools Successfully installed annotated-types-0.5.0 appdirs-1.4.4 attrs-23.1.0 cached-property-1.5.2 certifi-2023.11.17 cffi-1.15.1 charset-normalizer-3.3.2 cryptography-41.0.7 defusedxml-0.7.1 docker-6.1.3 ecuci-buildgen-1.0.102 ecuci-certs-1.0.4 ecuci-checks-1.0.8 ecuci-choco-1.0.21 ecuci-conan-install-1.0.14 ecuci-engine-1.0.32 ecuci-flexclean-1.0.6 ecuci-flexcopy-1.0.6 ecuci-github-1.0.19 ecuci-jgod-1.0.22 ecuci-packaging-1.1.79 ecuci-quatelets-1.0.9 ecuci-release-1.0.26 ecuci-release-gui-1.0.75 ecuci-security-1.0.18 ecuci-tools-1.1.480 ecuci-validation-1.0.19 ecuci-varparser-1.0.22 idna-3.6 importlib-metadata-6.7.0 isodate-0.6.1 jsonschema-2.6.0 lxml-4.9.3 packaging-23.2 peppercorn-0.6 pycparser-2.21 pydantic-2.3.0 pydantic-core-2.6.3 pyspnego-0.9.2 pytz-2023.3.post1 pywin32-306 requests-2.30.0 requests-ntlm-1.2.0 requests-toolbelt-1.0.0 six-1.16.0 typing-extensions-4.7.1 urllib3-1.26.15 websocket-client-1.6.1 zeep-3.4.0 zipp-3.15.0 WARNING: You are using pip version 20.1.1; however, version 23.3.1 is available. You should consider upgrading via the 'C:\Data\ecuci\pyenv37\Scripts\python.exe -m pip install --upgrade pip' command. 
(pyenv37) C:\Data\ecuci&gt;echo %errorlevel% 0 </code></pre> <p>while it's fine in version 3.10.</p> <p>So the question is: how do I get rid of those errors? I considered the following webpages: <a href="https://setuptools.pypa.io/en/latest/userguide/quickstart.html" rel="nofollow noreferrer">https://setuptools.pypa.io/en/latest/userguide/quickstart.html</a><br> <a href="https://github.com/pypa/setuptools/issues/3313" rel="nofollow noreferrer">https://github.com/pypa/setuptools/issues/3313</a></p>
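On the marker syntax itself (separate from the `ecuci-*` packages, whose unconditional `jsonschema==4.19.1` pins are what actually produce those ERROR lines): environment markers compare `python_version` as a version, not as a string, so `'3.10' <= '3.7'` correctly evaluates to False even though it would be True lexicographically. A sketch using the `packaging` library, assuming it is available (it ships alongside modern setuptools):

```python
from packaging.markers import Marker

m = Marker("python_version <= '3.7'")

# Version comparison, not string comparison: "3.10" sorts before "3.7"
# as a string, but the marker treats it as the newer version.
on_37 = m.evaluate({"python_version": "3.7"})
on_310 = m.evaluate({"python_version": "3.10"})
```

This confirms the `install_requires` markers shown above select the intended `jsonschema` pin per interpreter; the remaining errors come from the other packages' own requirements, not from the marker syntax.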
<python><pip><setuptools><pypi>
2023-12-12 19:44:01
0
752
Nikolai Ehrhardt
77,648,536
3,458,191
How to create a line graph from multiple columns in streamlit and python
<p>I am trying to create python script with help of streamlit and pandas from a csv file that has weather data with the below columns: time,temperature,pressure,humidity,lux,Oxi-kO,Red-kO,NH3-kO,P1,P2,P3</p> <p>What I am trying to do is to be able to select:</p> <ol> <li>Date range</li> <li>Data set(s), a multiple selection of the columns temperature..P3</li> </ol> <p>Until now I have:</p> <pre><code>import streamlit as st import os import pandas as pd import plotly.express as px st.set_page_config( page_title = 'Real-Time Data Weather Dashboard', page_icon = '🌤️', layout = 'wide' ) st.title(&quot; :bar_chart: Real-Time / Weather Data from Enviro+&quot;) st.markdown('&lt;style&gt;div.block-container{padding-top:1rem;}&lt;/style&gt;', unsafe_allow_html=True) fl = st.file_uploader(&quot;:file_folder: Upload a file&quot;, type=([&quot;csv&quot;, &quot;txt&quot;, &quot;xlsx&quot;, &quot;xls&quot;])) if fl is not None: filename = fl.name st.write(filename) df = pd.read_csv(filename, encoding=&quot;ISO-8859-1&quot;) else: os.chdir(r&quot;C:\WeatherData&quot;) df = pd.read_csv(&quot;WeatherData.csv&quot;, encoding=&quot;ISO-8859-1&quot;) list_of_column_names = list(df.columns) col1, col2 = st.columns((2)) df[&quot;time&quot;] = pd.to_datetime(df[&quot;time&quot;]) # Getting the min and max date startDate = pd.to_datetime(df[&quot;time&quot;]).min() endDate = pd.to_datetime(df[&quot;time&quot;]).max() with col1: date1 = pd.to_datetime(st.date_input(&quot;Start Date&quot;, startDate)) with col2: date2 = pd.to_datetime(st.date_input(&quot;End Date&quot;, endDate)) df = df[(df[&quot;time&quot;] &gt;= date1) &amp; (df[&quot;time&quot;] &lt;= date2)].copy() st.sidebar.header(&quot;Choose your filter: &quot;) # Create for Data sets data_set = st.sidebar.multiselect(&quot;Pick your data set&quot;, df.columns[1:]) if not data_set: df2 = df.copy() else: df2 = df[data_set] </code></pre> <p>But I can't figure out how create a line graph the data set of the selected date range and 
e.g. temperature. One step further if I could combine in the graph for the same date range also the other columns, depending on what the user has selected.</p> <p>How can I achieve this?</p> <p>Example of csv data set (delimiter is ','):</p> <pre><code>time,temperature,pressure,humidity,lux,Oxi-kO,Red-kO,NH3-kO,P1,P2,P3 2023-12-12 17:22:00.218,25.08,101330.98,33.52,18.62,11.32,266.51,263.17,11,15,15 2023-12-12 17:22:10.697,25.04,101329.69,33.58,18.62,12.90,269.93,338.87,11,13,13 2023-12-12 17:22:21.036,25.05,101327.94,33.52,18.62,14.48,269.93,374.77,13,15,15 2023-12-12 17:22:31.375,25.02,101330.41,33.65,18.62,15.71,268.21,393.64,13,15,16 2023-12-12 17:22:41.713,24.99,101331.40,33.75,18.62,16.73,266.51,403.70,12,15,16 2023-12-12 17:22:52.052,25.03,101329.48,33.70,18.62,17.60,266.51,414.23,12,15,16 2023-12-12 17:23:02.390,25.00,101329.97,33.77,18.62,18.31,266.51,417.85,11,14,14 </code></pre> <p>What I have until now when running the streamlit app: <a href="https://i.sstatic.net/PsnNt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PsnNt.png" alt="enter image description here" /></a></p> <p>After the comment from <a href="https://stackoverflow.com/users/13107804/r-beginners">r-beginners</a> I updated some part:</p> <pre><code># Create for Data sets data_set = st.sidebar.multiselect(&quot;Pick your data set&quot;, df_cols) if not data_set: data_set = [df_cols[0]] data_set.insert(0, &quot;time&quot;) df2 = df[data_set] else: data_set.insert(0, &quot;time&quot;) df2 = df[data_set] linechart = df2 fig2 = px.line(linechart, x=&quot;time&quot;, y=data_set, height=500, width=1000, template=&quot;gridon&quot;) st.plotly_chart(fig2, use_container_width=True) </code></pre> <p>In case the user does not select any data set, 'temperature' is shown in the graph by default.</p>
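For reference, the default-selection logic in the updated snippet can be reduced to a small pandas-only sketch. The sample frame and column names below are made up for illustration, the Streamlit multiselect is replaced by a plain list, and the plotly call is left as a comment:

```python
import pandas as pd

# Hypothetical stand-in for the uploaded weather CSV
df = pd.DataFrame({
    "time": pd.to_datetime(["2023-12-12 17:22:00", "2023-12-12 17:22:10"]),
    "temperature": [25.08, 25.04],
    "pressure": [101330.98, 101329.69],
})
df_cols = [c for c in df.columns if c != "time"]

data_set = []                  # what st.sidebar.multiselect returned
if not data_set:
    data_set = [df_cols[0]]    # fall back to 'temperature'

df2 = df[["time"] + data_set]  # keep time plus the chosen columns
# fig2 = px.line(df2, x="time", y=data_set)
# st.plotly_chart(fig2, use_container_width=True)
print(df2.columns.tolist())
```

Prepending "time" once, as a separate list rather than with `insert`, avoids mutating the widget's return value in place.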
<python><pandas><numpy><plotly><streamlit>
2023-12-12 19:17:01
1
1,187
FotisK
77,648,390
297,780
How to create Pandas Dataframe from Google API Response Object
<p>I'm trying to get a Google Analytics Admin API response into a Pandas dataframe, and I'm having trouble parsing the results. I'm working with the <a href="https://developers.google.com/analytics/devguides/config/admin/v1/rest/v1beta/properties.customDimensions/list" rel="nofollow noreferrer">list customDimensions method</a> within the admin_v1beta library. I can see the results, but I haven't yet figured out how to get them into a format that's usable within Pandas.</p> <p>My function:</p> <pre><code>def get_custom_dimensions(property_filter): client = admin_v1beta.AnalyticsAdminServiceClient() request = admin_v1beta.ListCustomDimensionsRequest( parent=property_filter ) return client.list_custom_dimensions(request=request) </code></pre> <p>And the output:</p> <pre><code> ga4_custom_dimensions = get_custom_dimensions(&quot;properties/xxx&quot;) print(type(ga4_custom_dimensions)) &lt;class 'google.analytics.admin_v1beta.services.analytics_admin_service.pagers.ListCustomDimensionsPager'&gt; print(ga4_custom_dimensions) ListCustomDimensionsPager&lt;custom_dimensions { name: &quot;properties/xxx/customDimensions/yy1&quot; parameter_name: &quot;custom_dimension_zz1&quot; display_name: &quot;Custom Dimension ZZ1&quot; description: &quot;The dimension ZZ1 useful for filtering.&quot; scope: EVENT } custom_dimensions { name: &quot;properties/xxx/customDimensions/yy2&quot; parameter_name: &quot;custom_dim... </code></pre> <p>The response has its own class, so before I can do anything with it I need to convert it and I haven't yet figured out how. 
My first thought was to convert to JSON:</p> <pre><code>custom_dimensions_json_1 = json.dumps(ga4_custom_dimensions.__dict__) custom_dimensions_json_2 = json.dumps(vars(ga4_custom_dimensions)) </code></pre> <p>These produce the same error,</p> <p><code>TypeError: Object of type _GapicCallable is not JSON serializable</code>.</p> <p>My next attempt was via the Pandas <code>json_normalize</code> method:</p> <pre><code>custom_dimension_df = pd.json_normalize(ga4_custom_dimensions) </code></pre> <p>The result is a dataframe with only an index.</p> <p>Lastly, I tried parsing the results directly:</p> <pre><code>temp_df = pd.DataFrame(ga4_custom_dimensions['custom_dimensions'])[['name', 'parameter_name', 'display_name', 'description', 'scope']] </code></pre> <p>This produces &quot;TypeError:</p> <p><code>ListCustomDimensionsPager' object is not subscriptable</code>.</p> <p>Can anyone guide me on how to parse the API response within Pandas?</p>
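Since the pager is iterable (even though it is not subscriptable), one common pattern is to loop over it and pull each message's fields into plain dicts before handing them to pandas. A runnable sketch with stand-in objects; `SimpleNamespace` only mimics the attributes of the real `CustomDimension` items that `list_custom_dimensions` returns:

```python
from types import SimpleNamespace

# Stand-ins mimicking the CustomDimension messages the pager yields
pager = [
    SimpleNamespace(
        name="properties/xxx/customDimensions/yy1",
        parameter_name="custom_dimension_zz1",
        display_name="Custom Dimension ZZ1",
        description="The dimension ZZ1 useful for filtering.",
        scope="EVENT",
    ),
]

# The pager is iterable, so extract the fields you need row by row:
rows = [
    {
        "name": d.name,
        "parameter_name": d.parameter_name,
        "display_name": d.display_name,
        "description": d.description,
        "scope": str(d.scope),  # enum values stringify cleanly
    }
    for d in pager  # with the real API: for d in ga4_custom_dimensions
]
# custom_dimension_df = pd.DataFrame(rows)
```

This sidesteps JSON serialization entirely, which is why `json.dumps(...__dict__)` fails: the pager's attributes include non-serializable client internals.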
<python><pandas><google-api-python-client>
2023-12-12 18:43:48
1
1,414
Lenwood
77,648,219
1,283,611
Issue with numpy in WSL2: libgcc_s.so.1: cannot open shared object file: No such file or directory
<p>I'm trying to get a python project running that uses numpy down the line and am getting an ImportError issue. I can't seem to resolve and unfortunately I haven't found a useful answer yet, so I'm hoping for help here. This is all running in Windows WSL2 Ubuntu. Here is the latest I can do to reproduce the issue. This is in an empty directory with no other code.</p> <pre><code>If everything worked right, you would be out of a job.:~/src/pythontest$ python3 -m venv venv If everything worked right, you would be out of a job.:~/src/pythontest$ source venv/bin/activate (venv) If everything worked right, you would be out of a job.:~/src/pythontest$ pip install numpy Collecting numpy Using cached numpy-1.26.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB) Installing collected packages: numpy Successfully installed numpy-1.26.2 WARNING: There was an error checking the latest version of pip. (venv) If everything worked right, you would be out of a job.:~/src/pythontest$ python Python 3.9.13 (main, May 17 2022, 14:19:07) [GCC 5.4.0 20160609] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import numpy Traceback (most recent call last): File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/core/__init__.py&quot;, line 24, in &lt;module&gt; from . import multiarray File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/core/multiarray.py&quot;, line 10, in &lt;module&gt; from . 
import overrides File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/core/overrides.py&quot;, line 8, in &lt;module&gt; from numpy.core._multiarray_umath import ( ImportError: libgcc_s.so.1: cannot open shared object file: No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/__init__.py&quot;, line 130, in &lt;module&gt; from numpy.__config__ import show as show_config File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/__config__.py&quot;, line 4, in &lt;module&gt; from numpy.core._multiarray_umath import ( File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/core/__init__.py&quot;, line 50, in &lt;module&gt; raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from &quot;/home/flamebaud/src/pythontest/venv/bin/python&quot; * The NumPy version is: &quot;1.26.2&quot; and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. 
Original error was: libgcc_s.so.1: cannot open shared object file: No such file or directory The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/flamebaud/src/pythontest/venv/lib/python3.9/site-packages/numpy/__init__.py&quot;, line 135, in &lt;module&gt; raise ImportError(msg) from e ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there.``` </code></pre>
<python><python-3.x><numpy><gcc><windows-subsystem-for-linux>
2023-12-12 18:07:14
1
998
flamebaud
77,648,138
8,167,752
Looking for an efficient way to find specific subfolders in a folder
<p>I'm trying to come up with a faster/more efficient way to scan a folder and find the subfolders of interest.</p> <p>Below is what I have. It works fine for test cases, where I'm trying to find, say, a dozen folders of interest in a folder that contains, say, a hundred or so subfolders.</p> <p>However, it chokes in my real-world case, where I'm trying to find around 100 subfolders of interest in a folder that contains around 300,000 subfolders plus a few text files.</p> <p>I feel like there should be a faster/more efficient way to do this, because I have some MATLAB code that's doing the same thing (i.e. scanning a folder for subfolders of interest) and it works fine. However, for corporate reasons, I need to move from MATLAB to Python.</p> <p>For what it's worth, I'm running this as a Jupyter notebook.</p> <pre><code>import os import re def find_subfolders_of_interest(dir_of_interest, starting_string_of_interest): all_subfolders = [item for item in os.listdir(dir_of_interest) if os.path.isdir(os.path.join(dir_of_interest, item))] starts_with_pattern = starting_string_of_interest regexp_pattern = re.compile(starts_with_pattern) all_subfolders_of_interest = list(filter(regexp_pattern.match, all_subfolders)) return all_subfolders_of_interest def main(): all_subfolders_of_interest = find_subfolders_of_interest('test_folder', 'string_of_interest') if __name__ == '__main__': main() </code></pre>
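A sketch of the usual speed-up for this kind of scan: `os.scandir` walks the directory in one pass, and `DirEntry.is_dir()` reuses the type information fetched during that pass instead of issuing a separate `stat()` call per name, which dominates the cost at 300,000 entries. (`startswith` replaces the regex here on the assumption that the pattern really is a plain prefix; if not, keep `re.match` inside the comprehension.)

```python
import os
import tempfile

def find_subfolders_of_interest(dir_of_interest, prefix):
    # Single directory pass; is_dir() avoids a per-entry os.path.isdir call
    with os.scandir(dir_of_interest) as entries:
        return [e.name for e in entries
                if e.is_dir() and e.name.startswith(prefix)]

# Throwaway demo directory with hypothetical names
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "string_of_interest_1"))
os.mkdir(os.path.join(root, "other_folder"))
print(find_subfolders_of_interest(root, "string_of_interest"))
```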
<python>
2023-12-12 17:53:05
1
477
BobInBaltimore
77,647,949
8,832,997
Python next() function fails while __next__ works on file?
<p>I was studying <em>Mark Lutz's</em> book on python called &quot;<em>Learning Python (5th Ed)</em>&quot;. While perusing chapter 14 on <em>Iteration and Comprehensions</em> , I faced a problem in page 423.</p> <p>There, we have created a file object returned by <code>os.popen()</code> call. It is an iterable object (has <code>__iter__</code> method) but not an iterator (no <code>__next__</code> method). So, this object should be called by neither <code>next()</code> built-in function nor <code>__next__</code> method call. But somehow this object returns values on next dunder method call but raise exception on next() builtin calls.</p> <p>In the book, the author said it is unusual. But he didn't provide any explanation for that.</p> <p>Can anyone explain me how <code>__next__</code> succeeds for that file-like object?</p> <pre><code>import os file = os.popen(&quot;ls -l&quot;) print(dir(file)) # this shows __iter__ but no __next__/ next # print(next(file)) # this fails if uncommented, expected print(file.__next__()) # this does NOT fail even there is no __next__ </code></pre> <p>Similar code for a list that behaves as I expect:</p> <pre><code>file = [1] print(dir(file)) # this shows __iter__ but no __next__/ next # print(next(file)) # this fails if uncommented, expected # print(file.__next__()) # this fails if uncommented, expected </code></pre>
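What makes the popen object unusual: `os.popen` returns a wrapper object whose `__getattr__` forwards unknown attribute lookups to the underlying file stream. The explicit `file.__next__()` goes through that instance-level forwarding and succeeds, while the `next()` builtin looks up `__next__` on the wrapper's *type* (where it is not defined) and fails. The portable pattern that works for any iterable is to ask for an iterator first:

```python
# next() consults the type, not the instance, so objects that forward
# attributes via __getattr__ (like the os.popen wrapper) confuse it.
# The portable pattern: call iter() on the iterable, then next() on
# the iterator it returns.
file = [1, 2]    # stand-in iterable with __iter__ but no __next__
it = iter(file)  # ask the iterable for its iterator
print(next(it))  # 1
print(next(it))  # 2
```

This is exactly what a `for` loop does under the hood, which is why iterating the popen object with `for line in file:` also works fine.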
<python>
2023-12-12 17:20:10
1
1,357
Sabbir Ahmed
77,647,923
1,471,980
How do you update column value based on other column values in pandas
<p>I have this data frame:</p> <pre><code>df Server Port Ser123 Ethernet3 Ser123 Ethernet4 Ser123 Ethernet12 Ser123 Ethernet567 Serabc Ethernet2 Serabc Ethernet34 Serabc Ethernet458 Serabc Ethernet5689 </code></pre> <p>I need to create another column based on the values in the Port column. If the number after Ethernet is a single digit (for example Ethernet2), the function_val column value should be '5k'; if the number after Ethernet is a double digit like Ethernet34, then function_val should be '10k', etc. I need to be able to check for three digits and so forth.</p> <p>Data frame needs to be like this:</p> <pre><code>Server Port function_val Ser123 Ethernet3 5k Ser123 Ethernet4 5k Ser123 Ethernet12 10k Ser123 Ethernet567 20k Serabc Ethernet2 5k Serabc Ethernet34 10k Serabc Ethernet458 20k Serabc Ethernet5689 20k </code></pre> <p>I tried this; any ideas how I could do this in a better, more efficient way?</p> <pre><code>df['function_val'] = np.where(df.Port.str.contains(&quot;Ethernet[0-9]&quot;), '5k', df['function_val']) </code></pre>
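A vectorized sketch: extract the digits after "Ethernet", count them, and map the digit count to a label. Per the expected output, three or more digits all map to '20k', so that is the `fillna` fallback here:

```python
import pandas as pd

df = pd.DataFrame({
    "Server": ["Ser123", "Ser123", "Serabc", "Serabc"],
    "Port": ["Ethernet3", "Ethernet12", "Ethernet458", "Ethernet5689"],
})

# Count the digits after "Ethernet", then map count -> label
n_digits = df["Port"].str.extract(r"Ethernet(\d+)")[0].str.len()
df["function_val"] = n_digits.map({1: "5k", 2: "10k"}).fillna("20k")
print(df)
```

`np.where` only chooses between two branches; for three or more cases a `map` on the digit count (or `np.select` with one condition per case) is the usual route.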
<python><pandas><regex>
2023-12-12 17:14:26
1
10,714
user1471980
77,647,740
1,422,096
Copy a Numpy array to clipboard on Windows without win32 dependency?
<p>The main answer from <a href="https://stackoverflow.com/questions/22488566/how-to-paste-a-numpy-array-to-excel">How to paste a Numpy array to Excel</a> works to copy a Numpy array in the clipboard, ready to be pasted in Excel:</p> <pre><code>import numpy as np import win32clipboard as clipboard def toClipboardForExcel(array): array_string = &quot;\r\n&quot;.join(&quot;\t&quot;.join(line.astype(str)).replace(&quot;\n&quot;,&quot;&quot;) for line in array) clipboard.OpenClipboard() clipboard.EmptyClipboard() clipboard.SetClipboardText(array_string) clipboard.CloseClipboard() Y = np.arange(64).reshape((8, 8)) toClipboardForExcel(Y) </code></pre> <p><strong>Is it possible to do this without the extra <code>win32</code> package dependency?</strong> (I don't use it currently in my project and I'd like to avoid adding it only for clipboard, NB: I don't use <code>pandas</code> either)</p> <p>I already tried with <code>os.system(`echo string | clip`)</code> but it doesn't work for multiline content (containing <code>\n</code>).</p> <p>Or maybe is <code>OpenClipboard</code>, <code>SetClipboardText</code>, etc. accessible via <code>ctypes</code> (which I already use)?</p> <p>NB: this is not a duplicate of <a href="https://stackoverflow.com/questions/11063458/python-script-to-copy-text-to-clipboard">Python script to copy text to clipboard</a> because the latter is general, and with extra dependencies, and here in my question, we would like to avoid new dependencies.</p>
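A possible ctypes port of the same four win32clipboard calls. This is a sketch under stated assumptions: `CF_UNICODETEXT` is 13, `GMEM_MOVEABLE` is 0x0002, and the `restype`/`argtypes` declarations are needed so handles are not truncated on 64-bit Python. The array-formatting half is plain Python and runs anywhere; the clipboard half is Windows-only and is therefore guarded:

```python
import ctypes
import sys

def array_to_tsv(rows):
    # Excel-friendly TSV text from any 2-D sequence (e.g. a NumPy array)
    return "\r\n".join("\t".join(str(v) for v in row) for row in rows)

def copy_text_windows(text):
    # Mirrors OpenClipboard/EmptyClipboard/SetClipboardData via ctypes
    CF_UNICODETEXT, GMEM_MOVEABLE = 13, 0x0002
    user32 = ctypes.windll.user32
    kernel32 = ctypes.windll.kernel32
    kernel32.GlobalAlloc.restype = ctypes.c_void_p
    kernel32.GlobalLock.restype = ctypes.c_void_p
    kernel32.GlobalLock.argtypes = [ctypes.c_void_p]
    kernel32.GlobalUnlock.argtypes = [ctypes.c_void_p]
    user32.SetClipboardData.argtypes = [ctypes.c_uint, ctypes.c_void_p]

    data = text.encode("utf-16-le") + b"\x00\x00"  # NUL-terminated UTF-16
    user32.OpenClipboard(0)
    try:
        user32.EmptyClipboard()
        handle = kernel32.GlobalAlloc(GMEM_MOVEABLE, len(data))
        ptr = kernel32.GlobalLock(handle)
        ctypes.memmove(ptr, data, len(data))
        kernel32.GlobalUnlock(handle)
        user32.SetClipboardData(CF_UNICODETEXT, handle)  # system owns handle now
    finally:
        user32.CloseClipboard()

if sys.platform == "win32":
    import numpy as np
    copy_text_windows(array_to_tsv(np.arange(64).reshape((8, 8))))
```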
<python><windows><numpy><ctypes><clipboard>
2023-12-12 16:45:21
2
47,388
Basj
77,647,692
4,537,160
Pytorch I3D Resnet model on a custom dataset
<p>This is a follow-up to a couple of questions I asked before...I want to fine-tune the I3D model for action recognition from Pytorch hub (which is pre-trained on Kinetics 400 classes) on a custom dataset, where I have 4 possible output classes.</p> <p>I'm loading the model and modifying the last layer by:</p> <pre><code>model = torch.hub.load(&quot;facebookresearch/pytorchvideo&quot;, &quot;i3d_r50&quot;, pretrained=True) num_classes = 4 model.blocks[6].proj = torch.nn.Linear(2048, num_classes) </code></pre> <p>I defined the <strong>getitem</strong> method of my Dataset to return:</p> <pre><code>def __getitem__(self, ind): [...] return processed_images, target </code></pre> <p>where processed_images and target are Tensors, with shapes:</p> <pre><code>&gt;&gt;processed_images.shape torch.Size([5, 224, 224, 3]) &gt;&gt;target.shape torch.Size([4]) </code></pre> <p>Basically, processed_images is a sequence of 5 RGB images, each with shape (224, 224), while target is the one-hot encoding for the target classes.</p> <p>In the training part, I have:</p> <pre><code>model.train() model.to(device) train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, shuffle=True, drop_last=False, persistent_workers=False, timeout=0, ) for epoch in range(number_of_epochs): for batch_ind, batch_data in enumerate(train_dataloader): # Extract data and label datas, labels = batch_data # move to device datas_ = datas.to(device) labels_ = labels.to(device) weights_ = weights.to(device) # permute axes (changing from [22, 5, 224, 224, 3] -&gt; [22, 3, 5, 224, 224, 3] datas_ = datas_.permute(0, 4, 1, 2, 3) preds_ = model(datas_) </code></pre> <p>But I'm getting an error in the forward method of ResNetBasicHead:</p> <pre><code>Exception has occurred: RuntimeError input image (T: 2 H: 14 W: 14) smaller than kernel size (kT: 4 kH: 7 kW: 7) File &quot;/home/c.demasi/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/models/head.py&quot;, line 374, in 
forward x = self.pool(x) File &quot;/home/c.demasi/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/models/net.py&quot;, line 43, in forward x = block(x) File &quot;/home/c.demasi/work/projects/ball_shot_action_detection_dev_environment/src/train_torch.py&quot;, line 271, in train preds_ = model(datas_) File &quot;/home/c.demasi/work/projects/ball_shot_action_detection_dev_environment/src/train_torch.py&quot;, line 571, in train_roi train(training_parameters, train_from_existing_path=None, perform_tests=perform_tests, config=config) File &quot;/home/c.demasi/work/projects/ball_shot_action_detection_dev_environment/train.py&quot;, line 13, in &lt;module&gt; train_roi(config=config, perform_tests=False) RuntimeError: input image (T: 2 H: 14 W: 14) smaller than kernel size (kT: 4 kH: 7 kW: 7) </code></pre> <p>Any idea how to solve this?</p>
<python><pytorch><resnet>
2023-12-12 16:36:18
1
1,630
Carlo
77,647,564
7,563,454
Copy class instance, inheriting changes from old instance but allowing new changes without modifying it
<p>I'm looking for a way to clone a class instance, such that the new instance contains any variables changed at the time of copying but other changes can be done to it without retroactively modifying the instance it was copied from. I need to copy <code>y</code> from <code>x</code> instead of spawning a new <code>Item</code> class as the <code>x</code> instance contains properties that were modified elsewhere and I want <code>y</code> to inherit modified attributes then allow further changes to itself without also affecting <code>x</code>.</p> <pre><code>class Item: def __init__(self): self.value = None def change(self, value): self.value = value x = Item() x.change(0) print(str(x.value)) y = x y.change(1) print(str(x.value) + &quot; - &quot; + str(y.value)) </code></pre> <p>The result I get:</p> <pre><code>0 1 - 1 </code></pre> <p>The desired result would look like:</p> <pre><code>0 0 - 1 </code></pre> <p>Obviously this happens because <code>y = x</code> only copies a reference to the existing instance, but what else should I do? All answers I found suggest <code>copy()</code> or <code>deepcopy()</code> but neither works: Whether I do <code>y = x.deepcopy()</code> or <code>y = deepcopy(x)</code> I'm told Python has no <code>copy</code> / <code>deepcopy</code> builtins.</p>
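`copy` and `deepcopy` are indeed not builtins; they live in the standard-library `copy` module, so the missing piece is just the import. With the same `Item` class as above:

```python
import copy

class Item:
    def __init__(self):
        self.value = None

    def change(self, value):
        self.value = value

x = Item()
x.change(0)
y = copy.deepcopy(x)  # independent clone of x's current state
y.change(1)
print(str(x.value) + " - " + str(y.value))  # 0 - 1
```

For this class, whose only attribute is an immutable value, `copy.copy(x)` (a shallow copy) would behave identically; `deepcopy` matters once attributes hold nested mutable objects such as lists or other instances.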
<python><python-3.x>
2023-12-12 16:15:59
1
1,161
MirceaKitsune
77,647,453
4,588,188
Alias field to existing key in Pydantic
<p>I am working with a legacy API and need to alias a response field to something that has an existing key. For example, the API response looks like this:</p> <pre><code>[ { &quot;model_name&quot;: &quot;Survey&quot;, &quot;logo&quot;: { &quot;url&quot;: &quot;foo&quot; }, &quot;uuid&quot;: &quot;79bea0f3-d8d2-4b05-9ce5-84858f65ff4b&quot;, &quot;logo_url&quot;: &quot;foo&quot; } ] </code></pre> <p>and I would like the string <code>&quot;foo&quot;</code> in the response to be in the field <code>logo</code> – not as a nested object but just as the string itself. I have the following schema:</p> <pre><code>class Survey(UUIDIdentifiable): logo_url: str = Field(None, alias=&quot;logo&quot;) class Config: allow_population_by_field_name = True </code></pre> <p>but it doesn't appear to be working properly (I assume because the aliased key still exists in the response). I could of course just iterate through the responses and delete the one <code>logo</code> key:</p> <pre><code>for item in responses: del item[&quot;logo&quot;] </code></pre> <p>but that feels a bit hacky to me. Is there a better way to do this with Pydantic?</p>
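One dependency-free alternative to the `del` loop: build cleaned dicts before handing them to the model, so the flat `logo_url` string no longer competes with the nested `logo` object for the aliased key, and the original response is left untouched. A sketch (the `Survey` construction is commented out since it depends on your model base):

```python
def promote_logo_url(item):
    # Drop the nested "logo" object non-destructively so the flat
    # "logo_url" string is the only candidate for the field.
    cleaned = dict(item)
    cleaned.pop("logo", None)
    return cleaned

response = [{
    "model_name": "Survey",
    "logo": {"url": "foo"},
    "uuid": "79bea0f3-d8d2-4b05-9ce5-84858f65ff4b",
    "logo_url": "foo",
}]
cleaned = [promote_logo_url(r) for r in response]
# surveys = [Survey(**r) for r in cleaned]  # logo_url: str, no alias needed
```

With the conflicting key removed, the field can stay a plain `logo_url: str` with no alias at all, which sidesteps the alias-precedence question entirely.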
<python><pydantic>
2023-12-12 16:00:56
1
618
AJwr
77,647,370
11,801,298
Make numbers less than 360 in pandas column
<p>This is my function</p> <pre><code>def price_to_ephe(data): converted = [] for i in data: while i &gt;= 360: i = i - 360 converted.append (i) return converted </code></pre> <p>it makes every number less than 360. I want to apply it to column in dataframe.</p> <pre><code>2009-01-01, 886.0 2009-01-02, 884.2 2009-01-03, 882.1 2009-01-04, 882.6 2009-01-05, 883.4 2009-01-06, 889.1 2009-01-07, 887.6 2009-01-08, 882.5 2009-01-09, 879.7 2009-01-10, 878.3 2009-01-11, 876.6 2009-01-12, 875.2 </code></pre> <p>Expected output:</p> <pre><code>2009-01-01, 166.0 2009-01-02, 164.2 .............. </code></pre> <p>...and so on. Numbers can be large and small: 10000 and 20.</p> <p>Help me please to do it in most efficient way. DataFrame is very large. I need all speed of pandas!</p>
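The repeated subtraction in the loop is exactly the modulo operation, and pandas applies `%` vectorized across the whole column in one pass, with no Python-level loop. One caveat worth noting: for negative values Python's `%` wraps into [0, 360), whereas the original loop leaves negatives unchanged, so check which behavior you want. A minimal sketch:

```python
import pandas as pd

s = pd.Series([886.0, 884.2, 20.0, 10000.0])
wrapped = s % 360  # one vectorized pass over the column
print(wrapped.tolist())
```

Applied to the frame in the question it would be a one-liner along the lines of `df["price"] = df["price"] % 360` (column name assumed here).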
<python><pandas>
2023-12-12 15:48:39
2
877
Igor K.
77,647,255
11,267,783
White space using GridSpec and right colorbar with Matplotlib
<p>I would like to make a figure with 2 subplots.</p> <p><a href="https://i.sstatic.net/UogI5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UogI5.png" alt="enter image description here" /></a></p> <p>The problem is that on the right of the first figure, there is a whitespace (which belongs to the subplot below). Is is possible to make the first subplot to fit the entire space on the right ?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec data1 = np.random.rand(10,10) data2 = np.random.rand(10,10) fig = plt.figure(constrained_layout=True) gs = GridSpec(2,1,figure=fig) ax = fig.add_subplot(gs[0,0]) img = plt.imshow(data1, aspect='auto') plt.colorbar(location='bottom') ax = fig.add_subplot(gs[1,0]) plt.imshow(data2) plt.colorbar(location='right') plt.axis(&quot;scaled&quot;) plt.show() </code></pre> <p>Moreover, is it possible to have the right colorbar just next to the second plot ? As you can see here, the colorbar is at the far right.</p>
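A possible rearrangement (a sketch, using the Agg backend so it runs headless): attach each colorbar to its parent axes via `fig.colorbar(..., ax=...)` instead of `plt.colorbar`, so constrained layout should steal space only from that subplot rather than reserving a full-width column on the right, and the second colorbar ends up directly beside its image:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

data1 = np.random.rand(10, 10)
data2 = np.random.rand(10, 10)

fig, (ax1, ax2) = plt.subplots(2, 1, constrained_layout=True)
im1 = ax1.imshow(data1, aspect="auto")
fig.colorbar(im1, ax=ax1, location="bottom")
im2 = ax2.imshow(data2)
fig.colorbar(im2, ax=ax2, location="right")  # sits next to ax2 only
fig.savefig("layout_sketch.png")
```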
<python><matplotlib>
2023-12-12 15:31:43
1
322
Mo0nKizz
77,647,079
4,726,173
What's wrong with my subclass of pytorch's DataLoader? - Error: Too many open files
<p>What is wrong with my subclass of <code>torch.utils.data.DataLoader</code>? After using this instead of the original, I get an error. The error is solved by setting <code>pin_memory</code> (in the superclass) to <code>False</code>.</p> <p>This is the class (the actual class has a few more parameters which are all not used, for compatibility with my code):</p> <pre><code>class CustomDataLoader(torchdata.DataLoader): def __init__(self, dataset, batch_size, shuffle, collate_fn=None, pin_memory=False, num_workers=4, **kwargs): ''' Custom data loader to test how to subclass torchdata.DataLoader. ''' super().__init__( dataset, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn, pin_memory=pin_memory, num_workers=num_workers, **kwargs) def __iter__(self): self.super_iter = super().__iter__() return self def __next__(self): next_batch = self.super_iter.__next__() return next_batch, None, None </code></pre> <p>And here the start of the 150 lines long error message:</p> <pre><code>Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 300, in _run_finalizers finalizer() File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 133, in _remove_temp_dir rmtree(tempdir) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 725, in rmtree _rmtree_safe_fd(fd, path, onerror) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 633, in _rmtree_safe_fd onerror(os.scandir, path, sys.exc_info()) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 629, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: OSError: [Errno 24] Too many open files: '/tmp/pymp-n1e35f4z' </code></pre> <p>The code that raises the error uses a dataset that is currently not public, so I cannot use the same code here. 
Just looping over the dataset fails to reproduce the issue, however.</p> <p>I've seen a few questions here asking something related, but usually the answer does not require to subclass <code>DataLoader</code> (<a href="https://stackoverflow.com/questions/71744788/subclass-of-pytorch-dataloader-for-changing-batch-output">here</a> and <a href="https://stackoverflow.com/questions/69170854/pytorch-customized-dataloader">here</a>). More related could be this: <a href="https://discuss.pytorch.org/t/too-many-open-files-caused-by-persistent-workers-and-pin-memory/193372" rel="nofollow noreferrer">https://discuss.pytorch.org/t/too-many-open-files-caused-by-persistent-workers-and-pin-memory/193372</a>, because setting <code>pin_memory</code> to <code>False</code> also helped in my case.</p> <p>Are there any issues with my class above, that could lead to pytorch-internal issues with data loading? Or does it look okay?</p> <hr /> <p>Edit: Remainder of the error message:</p> <pre><code>Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 300, in _run_finalizers finalizer() File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 224, in __call__ s res = self._callback(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 133, in _remove_temp_dir rmtree(tempdir) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 725, in rmtree _rmtree_safe_fd(fd, path, onerror) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 633, in _rmtree_safe_fd onerror(os.scandir, path, sys.exc_info()) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 629, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: OSError: [Errno 24] Too many open files: '/tmp/pymp-n1e35f4z' Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 300, in _run_finalizers finalizer() File 
&quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 133, in _remove_temp_dir rmtree(tempdir) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 725, in rmtree _rmtree_safe_fd(fd, path, onerror) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 633, in _rmtree_safe_fd onerror(os.scandir, path, sys.exc_info()) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 629, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: OSError: [Errno 24] Too many open files: '/tmp/pymp-u0_3n68n' Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 138, in _serve File &quot;/opt/conda/lib/python3.10/multiprocessing/connection.py&quot;, line 463, in accept File &quot;/opt/conda/lib/python3.10/multiprocessing/connection.py&quot;, line 609, in accept File &quot;/opt/conda/lib/python3.10/socket.py&quot;, line 293, in accept fd, addr = self._accept() OSError: [Errno 24] Too many open files Exception in thread Thread-534 (_pin_memory_loop): Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner self.run() File &quot;/opt/conda/lib/python3.10/threading.py&quot;, line 953, in run self._target(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/pin_memory.py&quot;, line 51, in _pin_memory_loop Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 145, in _serve send(conn, destination_pid) File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 50, in send reduction.send_handle(conn, new_fd, pid) File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 183, in send_handle with socket.fromfd(conn.fileno(), 
socket.AF_UNIX, socket.SOCK_STREAM) as s: File &quot;/opt/conda/lib/python3.10/socket.py&quot;, line 545, in fromfd nfd = dup(fd) OSError: [Errno 24] Too many open files do_one_step() File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/pin_memory.py&quot;, line 28, in do_one_step r = in_queue.get(timeout=MP_STATUS_CHECK_INTERVAL) File &quot;/opt/conda/lib/python3.10/multiprocessing/queues.py&quot;, line 122, in get return _ForkingPickler.loads(res) File &quot;/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/reductions.py&quot;, line 307, in rebuild_storage_fd fd = df.detach() File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 58, in detach return reduction.recv_handle(conn) File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 189, in recv_handle return recvfds(s, 1)[0] File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 159, in recvfds raise EOFError EOFError Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/queues.py&quot;, line 244, in _feed File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 51, in dumps cls(buf, protocol).dump(obj) File &quot;/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/reductions.py&quot;, line 370, in reduce_storage File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 198, in DupFd return resource_sharer.DupFd(fd) File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 48, in __init__ new_fd = os.dup(fd) OSError: [Errno 24] Too many open files Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 300, in _run_finalizers finalizer() File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File &quot;/opt/conda/lib/python3.10/multiprocessing/util.py&quot;, line 
133, in _remove_temp_dir rmtree(tempdir) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 725, in rmtree _rmtree_safe_fd(fd, path, onerror) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 633, in _rmtree_safe_fd onerror(os.scandir, path, sys.exc_info()) File &quot;/opt/conda/lib/python3.10/shutil.py&quot;, line 629, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: OSError: [Errno 24] Too many open files: '/tmp/pymp-oyedwdyi' Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/queues.py&quot;, line 244, in _feed File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 51, in dumps File &quot;/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/reductions.py&quot;, line 370, in reduce_storage File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 198, in DupFd File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 48, in __init__ OSError: [Errno 24] Too many open files Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/multiprocessing/queues.py&quot;, line 244, in _feed File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 51, in dumps File &quot;/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/reductions.py&quot;, line 370, in reduce_storage File &quot;/opt/conda/lib/python3.10/multiprocessing/reduction.py&quot;, line 198, in DupFd File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 48, in __init__ OSError: [Errno 24] Too many open files </code></pre> <hr /> <p>And then the very final part (raised within 'finally' after the error above):</p> <pre><code>Exception in thread Thread-535 (_pin_memory_loop): Traceback (most recent call last): File &quot;/opt/conda/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner File &quot;/opt/conda/lib/python3.10/threading.py&quot;, line 953, in run File 
&quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/pin_memory.py&quot;, line 51, in _pin_memory_loop File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/pin_memory.py&quot;, line 28, in do_one_step File &quot;/opt/conda/lib/python3.10/multiprocessing/queues.py&quot;, line 122, in get File &quot;/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/reductions.py&quot;, line 307, in rebuild_storage_fd File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 57, in detach File &quot;/opt/conda/lib/python3.10/multiprocessing/resource_sharer.py&quot;, line 86, in get_connection File &quot;/opt/conda/lib/python3.10/multiprocessing/connection.py&quot;, line 502, in Client File &quot;/opt/conda/lib/python3.10/multiprocessing/connection.py&quot;, line 628, in SocketClient File &quot;/opt/conda/lib/python3.10/socket.py&quot;, line 232, in __init__ OSError: [Errno 24] Too many open files Traceback (most recent call last): File &quot;[...]/train.py&quot;, line 545, in train File &quot;[...]/train.py&quot;, line 248, in train_or_eval_epoch File &quot;/src/group-orbit-cl/group_orbit_cl/data/sample_transformer.py&quot;, line 637, in __iter__ File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py&quot;, line 442, in __iter__ File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py&quot;, line 388, in _get_iterator File &quot;/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py&quot;, line 1043, in __init__ File &quot;/opt/conda/lib/python3.10/multiprocessing/process.py&quot;, line 121, in start File &quot;/opt/conda/lib/python3.10/multiprocessing/context.py&quot;, line 224, in _Popen File &quot;/opt/conda/lib/python3.10/multiprocessing/context.py&quot;, line 281, in _Popen File &quot;/opt/conda/lib/python3.10/multiprocessing/popen_fork.py&quot;, line 19, in __init__ File 
&quot;/opt/conda/lib/python3.10/multiprocessing/popen_fork.py&quot;, line 65, in _launch OSError: [Errno 24] Too many open files During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;[...]/train.py&quot;, line 1379, in &lt;module&gt; File &quot;[...]/train.py&quot;, line 1376, in main File &quot;[...]/train.py&quot;, line 639, in train File &quot;[...]/train.py&quot;, line 687, in maybe_store_model File &quot;/opt/conda/lib/python3.10/site-packages/torch/serialization.py&quot;, line 440, in save File &quot;/opt/conda/lib/python3.10/site-packages/torch/serialization.py&quot;, line 315, in _open_zipfile_writer File &quot;/opt/conda/lib/python3.10/site-packages/torch/serialization.py&quot;, line 288, in __init__ RuntimeError: File saved_models/4636315_0_mlp_ep-521.state_dict cannot be opened. </code></pre>
<python><pytorch><pytorch-dataloader>
2023-12-12 15:05:53
1
627
dasWesen
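The `OSError: [Errno 24] Too many open files` storm above usually comes down to the per-process file-descriptor limit (`RLIMIT_NOFILE`) being exhausted, since PyTorch's default tensor-sharing between DataLoader workers passes file descriptors. A minimal stdlib-only sketch (POSIX) of inspecting and raising the soft limit from within the training process; the commented torch call is the usual companion fix, shown here as an assumption about the reader's setup:

```python
import resource

# Current per-process file-descriptor limits (POSIX only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# Raise the soft limit to the hard limit for this process; no root needed.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, new_hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# In the PyTorch script itself, switching the sharing strategy away from
# file descriptors also sidesteps the exhaustion:
# import torch.multiprocessing
# torch.multiprocessing.set_sharing_strategy("file_system")
```

Lowering `num_workers` or `batch_size` reduces pressure as well, at the cost of throughput.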
77,647,014
1,914,781
bucket time duration dataframe with pandas
<p>I would like to bucket a datetime duration dataframe with pandas. Example:</p> <pre><code>import pandas as pd import itertools import numpy as np df = pd.DataFrame({ 'ts': [1, 5, 10, 12], 'dur': [1, 2, 6, 6], }) print(df) ts dur 0 1 1 1 5 2 2 10 6 3 12 6 </code></pre> <p>I need to bucket it with a span of 3, so:</p> <p><a href="https://i.sstatic.net/cUMtB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cUMtB.png" alt="enter image description here" /></a></p> <pre><code>bin1, 1 bin2, 2 bin3, 0 bin4, 4 bin5, 6 bin6, 2 </code></pre> <p>What's the proper way to do it with pandas?</p> <p>*Update: To follow up on @Pieree D's answer, it works great for the example above. One problem arises if I use <code>ns</code> as the unit and the data is much bigger, as below:</p> <pre><code>import pandas as pd import itertools import numpy as np import io def bucket(df,ts,dur,span): freq = 'ns' # could be 's' if preferred dfs = pd.to_timedelta(df[ts], unit=freq) # start, inclusive dfe = df[ts] + df[dur] dfe = pd.to_timedelta(dfe, unit=freq) # end, exclusive dfout = pd.concat([ pd.Series(1, index=dfs), # start: +1 pd.Series(-1, index=dfe), # end: -1 ]).resample(freq).sum().cumsum().resample( f'{span}{freq}', origin='start', ).sum().reset_index(drop=True).rename_axis('bin') dfout = dfout.to_frame() dfout.columns= ['sum'] dfout[ts] = df[ts].min() + dfout.index*span return dfout csvdata = '''ts,dur 19318744574,391823 21320087699,527291 23322650667,345208 25325015510,355729 27327401707,356354 29329792123,464531 31332296861,408802 32596494257,1131354 32738075298,416459''' df = pd.read_csv(io.StringIO(csvdata)) df = bucket(df,'ts','dur',50) print(df) </code></pre> <p>Then Python reports an error:</p> <pre><code>numpy.core._exceptions._ArrayMemoryError: Unable to allocate 100. GiB for an array with shape (13419747184,) and data type int64 </code></pre> <p>Maybe I need to consider a sparse solution?</p>
<python><pandas>
2023-12-12 14:57:03
1
9,011
lucky1928
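For the memory blow-up at the end of the question above, a sketch of a sparse alternative that never materialises one slot per nanosecond: compute, for each interval `[ts, ts+dur)`, its overlap with each bin of width `span` directly. `bucket_sparse` is a hypothetical helper name, and it assumes bins start at the smallest `ts`, matching the expected output in the question. Memory is proportional to the number of bins, not to the time range, so this only helps when the bins are coarse relative to the range (with `span=50` ns over the posted range there would still be roughly 2.7e8 bins):

```python
import numpy as np

def bucket_sparse(ts, dur, span):
    """Sum, per bin of width `span`, the overlap of each [ts, ts+dur) interval."""
    ts = np.asarray(ts, dtype=np.int64)
    end = ts + np.asarray(dur, dtype=np.int64)
    origin = int(ts.min())
    n_bins = int(np.ceil((int(end.max()) - origin) / span))
    out = np.zeros(n_bins, dtype=np.int64)
    for s, e in zip(ts.tolist(), end.tolist()):
        first = (s - origin) // span     # first bin the interval touches
        last = (e - 1 - origin) // span  # last bin the interval touches
        for k in range(first, last + 1):
            b0 = origin + k * span
            out[k] += min(e, b0 + span) - max(s, b0)
    return out

print(bucket_sparse([1, 5, 10, 12], [1, 2, 6, 6], 3))  # [1 2 0 4 6 2]
```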
77,646,970
2,328,066
How can I pytest+monkeypatch a callable class to reusably specify the return of __call__?
<p>I am trying to write tests for classes that use Langchain's <code>LLMChain</code> class under the covers. The <code>LLMChain</code> class is callable.</p> <pre class="lang-py prettyprint-override"><code>class MockLLMChain: &quot;&quot;&quot; Mock LLMChain class for testing Example usage in a test: monkeypatch.setattr(langchain.chains, &quot;LLMChain&quot;, MockLLMChain(desired_response='9')) &quot;&quot;&quot; def __init__(self, *args, **kwargs): MockLLMChain.response = &quot;default&quot; def __call__(self, *args, **kwargs): return MockLLMChain.response def test_yes_no_classifier(yes_no_classifier, monkeypatch): MockLLMChain.response = {'text': '9'} import src.query_helpers.yes_no_classifier monkeypatch.setattr( src.query_helpers.yes_no_classifier, &quot;LLMChain&quot;, MockLLMChain ) response = yes_no_classifier.classify( conversation=&quot;1234&quot;, statement=&quot;The sky is blue.&quot; ) assert response == 9 </code></pre> <p>The above code results in a test error as the <code>MockLLMChain</code> response is not actually overridden, and inside the <code>YesNoClassifier</code> we are expecting <code>response</code> to be a dict.</p> <pre class="lang-py prettyprint-override"><code>class MockLLMChain: &quot;&quot;&quot; Mock LLMChain class for testing Example usage in a test: monkeypatch.setattr(langchain.chains, &quot;LLMChain&quot;, MockLLMChain(desired_response='9')) &quot;&quot;&quot; def __init__(self, *args, **kwargs): self.response = &quot;default&quot; def __call__(self, *args, **kwargs): return self.response def test_yes_no_classifier(yes_no_classifier, monkeypatch): ml = MockLLMChain() ml.response = {'text':'default'} import src.query_helpers.yes_no_classifier monkeypatch.setattr( src.query_helpers.yes_no_classifier, &quot;LLMChain&quot;, ml ) response = yes_no_classifier.classify( conversation=&quot;1234&quot;, statement=&quot;The sky is blue.&quot; ) assert response == 9 </code></pre> <p>The above code results in an error:</p> <pre><code> 
llm_chain = LLMChain(llm=bare_llm, prompt=prompt_instructions, verbose=verbose) &gt; response = llm_chain(inputs={&quot;statement&quot;: statement}) E TypeError: 'dict' object is not callable </code></pre> <p>The error suggests to me that this monkeypatching is substituting the output of the callable class (the dict) and not the modified callable class (<code>ml</code>).</p> <p>Can someone suggest a way to make this <code>MockLLMChain</code> class reusable across many tests without having to hard-code the entire mocked class definition in each test just to control what the mocked class returns when it is called?</p>
<python><mocking><pytest>
2023-12-12 14:48:48
1
901
Erik Jacobs
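One reading of the two failures in the question above: in the first version, `__init__` reassigns `MockLLMChain.response` back to the default, so the value set by the test is wiped the moment the code under test instantiates the chain; in the second version an *instance* is patched in, so the call `LLMChain(...)` already invokes `__call__`, and the later `llm_chain(...)` then tries to call the returned dict. A sketch of a reusable class-level mock; the `classify` function below is a stand-in for the real code under test (which is not shown in the question):

```python
class MockLLMChain:
    """Reusable mock: patch the CLASS, configure via the class attribute."""
    response = {"text": "default"}

    def __init__(self, *args, **kwargs):
        # Deliberately do NOT touch MockLLMChain.response here; reassigning it
        # in __init__ is what clobbered the per-test value in the original.
        pass

    def __call__(self, *args, **kwargs):
        return type(self).response


def classify(statement):
    # Stand-in for the real code under test, which does:
    #   llm_chain = LLMChain(llm=..., prompt=...); response = llm_chain(inputs=...)
    chain = MockLLMChain(llm=None, prompt=None)
    return chain(inputs={"statement": statement})["text"]


MockLLMChain.response = {"text": "9"}  # per-test configuration
print(classify("The sky is blue."))    # prints 9
```

With pytest, each test would set `MockLLMChain.response` and then `monkeypatch.setattr` the module under test with the class itself, never an instance.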
77,646,956
12,596,824
Using llama index but avoiding the tiktoken API call
<p>I want to use <code>llama_index</code>, but when I import the package I get the following error:</p> <pre><code>ConnectionError: ('Connected aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)) </code></pre> <p>This is because <code>llama_index</code> makes a request via the tiktoken package by OpenAI. You can see this in line 55: <a href="https://github.com/run-llama/llama_index/blob/main/llama_index/utils.py#L52" rel="nofollow noreferrer">https://github.com/run-llama/llama_index/blob/main/llama_index/utils.py#L52</a></p> <p>This in the end makes a call to this API (<a href="https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe" rel="nofollow noreferrer">https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe</a>): <a href="https://github.com/openai/tiktoken/blob/main/tiktoken_ext/openai_public.py#L11" rel="nofollow noreferrer">https://github.com/openai/tiktoken/blob/main/tiktoken_ext/openai_public.py#L11</a></p> <p>Is there a way to avoid this API call? Or maybe have the file locally? I can't access the internet in the environment I'm working in, so I can't make API calls.</p>
<python><nlp><large-language-model><llama-index><retrieval-augmented-generation>
2023-12-12 14:46:47
0
1,937
Eisen
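tiktoken can run fully offline if the BPE file is already in its cache: it honours a `TIKTOKEN_CACHE_DIR` environment variable and looks cached files up by the SHA-1 hex digest of the blob URL. A hedged sketch of pre-seeding such a cache (download `vocab.bpe` on a machine with internet access and copy it in under the hashed name); the cache-key scheme is tiktoken's internal behaviour and may change between versions, and the cache path below is an assumed placeholder:

```python
import hashlib
import os

# Point tiktoken at a local, pre-populated cache before importing llama_index.
os.environ["TIKTOKEN_CACHE_DIR"] = "/path/to/offline/tiktoken_cache"  # assumed path

# tiktoken names cached files after the SHA-1 of the download URL, so a file
# fetched elsewhere can be dropped into the cache dir under this name:
url = "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe"
cache_key = hashlib.sha1(url.encode()).hexdigest()
print(cache_key)  # 40-char hex filename to give the copied vocab.bpe
```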
77,646,924
12,390,973
How to add a time limit to the Gurobi solver in a PyPSA model?
<p>I have created a model using PyPSA and I am using the Gurobi solver to solve it. I am testing the time limit parameter of Gurobi. Here is how I am adding it to the model:</p> <pre><code>solverOptions = { 'LogFile': &quot;gurobiLog&quot;, 'MIPGap': model_inputs['model_data']['solverMipGap'], # 0.001 'BarConvTol': model_inputs['model_data']['BarConvTol'], # 0.01 'TimeLimit': 200, # 200 Seconds } network.lopf(network.snapshots, solver_name='gurobi', solver_options=solverOptions, extra_functionality=extra_functionality) </code></pre> <p>I set the time limit here (200 seconds in the snippet above; the run below used 5 seconds) and I can see in the Gurobi log file that these parameters are applied:</p> <pre><code>Gurobi 10.0.1 (win64) logging started Tue Dec 12 19:51:07 2023 Set parameter LogFile to value &quot;gurobiLog&quot; Set parameter MIPGap to value 0.001 Set parameter BarConvTol to value 0.01 Set parameter TimeLimit to value 5 </code></pre> <p>After 5 seconds, I can see in the log file that the solver has stopped, and this message is printed:</p> <pre><code>Stopped in 136184 iterations and 5.04 seconds (280.23 work units) Time limit reached </code></pre> <p>But in the code, I see this error:</p> <pre><code>network.lopf(network.snapshots, solver_name=solver, solver_options=solverOptions, extra_functionality=extra_functionality) File &quot;C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\production-cost-model_test\lib\site-packages\pypsa\components.py&quot;, line 769, in lopf return network_lopf(self, **args) File &quot;C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\production-cost-model_test\lib\site-packages\pypsa\opf.py&quot;, line 2437, in network_lopf extra_postprocessing=extra_postprocessing, File &quot;C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\production-cost-model_test\lib\site-packages\pypsa\opf.py&quot;, line 2296, in network_lopf_solve options=solver_options File 
&quot;C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\production-cost-model_test\lib\site-packages\pyomo\opt\base\solvers.py&quot;, line 630, in solve default_variable_value=self._default_variable_value) File &quot;C:\Users\nvats\AppData\Local\Continuum\anaconda3\envs\production-cost-model_test\lib\site-packages\pyomo\core\base\PyomoModel.py&quot;, line 228, in load_from % str(results.solver.status)) ValueError: Cannot load a SolverResults object with bad status: aborted </code></pre> <p>I don't understand, it should return the solution that it found by that time but it's saying that it is aborted. Can anyone please help? Is there any other way of doing it? I am also adding my code here:</p> <pre><code>import pypsa import numpy as np import pandas as pd from pyomo.environ import Constraint from pyomo.environ import value start_mt = 1 start_yr = 2022 end_mt = 12 end_yr = 2022 end_day = 31 frequency = 15 snapshots = pd.date_range(&quot;{}-{}-01&quot;.format(start_yr, start_mt), &quot;{}-{}-{} 23:59&quot;.format(end_yr, end_mt, end_day), freq=str(frequency) + &quot;min&quot;) np.random.seed(len(snapshots)) # Create a PyPSA network network = pypsa.Network() # Add a load bus network.add(&quot;Bus&quot;, &quot;Bus&quot;) network.set_snapshots(snapshots) load_profile = np.random.randint(2800, 3300, len(snapshots)) # Add the load to the network network.add(&quot;Load&quot;, &quot;Load profile&quot;, bus=&quot;Bus&quot;, p_set=load_profile) # Define the generator data dictionary generator_data = { 'coal1': {'capacity': 800, 'carrier': 'Coal', 'ramp up': 0.1, 'ramp down': 0.1, 'variable cost': 10, 'co2_emission_factor': 0.95}, 'coal2': {'capacity': 600, 'carrier': 'Coal', 'ramp up': 0.1, 'ramp down': 0.1, 'variable cost': 11, 'co2_emission_factor': 0.95}, 'coal3': {'capacity': 500, 'carrier': 'Coal', 'ramp up': 0.1, 'ramp down': 0.1, 'variable cost': 11, 'co2_emission_factor': 0.95}, 'gas1': {'capacity': 600, 'carrier': 'Gas', 'ramp up': 0.05, 'ramp down': 0.05, 
'variable cost': 12, 'co2_emission_factor': 0.45}, 'gas2': {'capacity': 600, 'carrier': 'Gas', 'ramp up': 0.05, 'ramp down': 0.05, 'variable cost': 13, 'co2_emission_factor': 0.45}, 'nuclear1': {'capacity': 300, 'carrier': 'Nuclear', 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 4, 'co2_emission_factor': 0.03}, 'nuclear2': {'capacity': 400, 'carrier': 'Nuclear', 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 3, 'co2_emission_factor': 0.03}, 'nuclear3': {'capacity': 250, 'carrier': 'Nuclear', 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 3, 'co2_emission_factor': 0.03}, 'solar1': {'capacity': 150, 'carrier': 'Solar', 'ramp up': 0.25, 'ramp down': 0.25, 'variable cost': 1, 'co2_emission_factor': 0.0}, 'solar2': {'capacity': 200, 'carrier': 'Solar', 'ramp up': 0.25, 'ramp down': 0.25, 'variable cost': 2, 'co2_emission_factor': 0.0}, 'backup': {'capacity': 1000, 'carrier': 'Import', 'ramp up': 0.25, 'ramp down': 0.25, 'variable cost': 2000, 'co2_emission_factor': 1.0}, } # Add generators to the network for name, data in generator_data.items(): network.add(&quot;Generator&quot;, name, bus=&quot;Bus&quot;, carrier=data['carrier'], p_nom=data['capacity'], marginal_cost=data['variable cost'], ramp_limit_up=data['ramp up'], ramp_limit_down=data['ramp down'], ) print(network.generators.carrier.values) network.add(&quot;Carrier&quot;, &quot;Coal&quot;, co2_emissions=0.95) network.add(&quot;Carrier&quot;, &quot;Gas&quot;, co2_emissions=0.45) network.add(&quot;Carrier&quot;, &quot;Nuclear&quot;, co2_emissions=0.03) network.add(&quot;Carrier&quot;, &quot;Import&quot;, co2_emissions=1.0) network.add(&quot;Carrier&quot;, &quot;Solar&quot;, co2_emissions=0) network.add( &quot;GlobalConstraint&quot;, &quot;CO2Limit&quot;, carrier_attribute=&quot;co2_emissions&quot;, sense=&quot;&lt;=&quot;, constant=50000000, ) solver_name = &quot;gurobi&quot; solverOptions = { 'LogFile': &quot;gurobiLog&quot;, 'MIPGap': 0.001, 'BarConvTol': 0.01, 'TimeLimit': 5, } 
network.lopf(network.snapshots, solver_name=solver_name, solver_options=solverOptions) csv_folder_name = 'model dump' network.export_to_csv_folder(csv_folder_name) dispatch = network.generators_t.p total_gen = dispatch.sum() co2 = sum([total_gen[gen] * data['co2_emission_factor'] for gen, data in generator_data.items()]) cost = sum([total_gen[gen] * data['variable cost'] for gen, data in generator_data.items()]) print('co2 emission = ', co2) print('total cost = ', cost) dispatch['load profile'] = load_profile dispatch.to_excel('fuel wise dispatch.xlsx') </code></pre>
<python><gurobi><pypsa>
2023-12-12 14:41:42
1
845
Vesper
77,646,884
5,401,672
How to invalidate a CloudFront distribution using the AWS Python CDK?
<p>We have a static site which is hosted in S3 and I'm trying to create a stack which will automate uploading the files and invalidate the Cloudfront distribution.</p> <p>I can upload the files but can't see any way to create the invalidation.</p> <p>I can get the distribution like this:</p> <pre class="lang-py prettyprint-override"><code>distribution = aws_cloudfront.Distribution.from_distribution_attributes( self, id=&quot;ImportedDistribution&quot;, domain_name=&quot;https://hgfdh.cloudfront.net&quot;, distribution_id=&quot;hgfdh&quot;, ) </code></pre> <p>but can't see how to create the invalidation.</p>
<python><amazon-cloudfront><aws-cdk>
2023-12-12 14:36:16
2
1,601
Mick
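The imported `Distribution` construct has no invalidation method, since an invalidation is an API action rather than a resource. Two common routes: let `aws_s3_deployment.BucketDeployment` handle it (it accepts `distribution` and `distribution_paths` parameters and invalidates after each upload), or call CloudFront directly with boto3 outside the stack. A sketch of the payload for the latter; the boto3 call itself is commented out since it needs credentials and the real distribution id:

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch payload for cloudfront.create_invalidation."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        # Any unique string works; a timestamp makes repeat runs distinct.
        "CallerReference": str(time.time_ns()),
    }

batch = invalidation_batch(["/*"])
# import boto3
# boto3.client("cloudfront").create_invalidation(
#     DistributionId="hgfdh",  # the id from the question
#     InvalidationBatch=batch,
# )
```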
77,646,789
747,228
How to have the dialog box for choosing the download location appear in the frontend, before the file gets downloaded, using FastAPI?
<p>I have a <code>GET</code> endpoint that should return a <strong>huge file</strong> (500 MB). I am using <code>FileResponse</code> to do that (the code is simplified for clarity reasons):</p> <pre><code> async def get_file(): headers = {&quot;Content-Disposition&quot;: f&quot;attachment; filename=(unknown)&quot;} return FileResponse(file_path, headers=headers) </code></pre> <p>The problem is that I have to wait on the frontend until that file is <strong>completely</strong> downloaded, in order for the dialog box that allows choosing a download location to be shown: <a href="https://i.sstatic.net/lA6ER.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lA6ER.png" alt="enter image description here" /></a></p> <p>And then, this file is saved instantly.</p> <p>So, for example, I have a file with size 500 MB; when I click download in the UI I have to wait a minute or so until the &quot;Save dialog&quot; is displayed. Then, when I click &quot;Save&quot; the file is saved instantly. Obviously, the frontend was waiting for the file to be downloaded.</p> <p><strong>What I need is</strong>: the dialog to be shown instantly, and then the user waits for the file download to finish <strong>after</strong> clicking 'Save' to choose the download location. How can I achieve that?</p>
<javascript><python><html><download><fastapi>
2023-12-12 14:20:31
1
2,028
unresolved_external
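The save dialog appears as soon as the browser itself receives the response headers; it is deferred to the end of the transfer only when frontend JavaScript first fetches the whole body into a blob. So the usual fix is on the frontend: navigate to the URL (e.g. `window.location` or an `<a download>` link) instead of `fetch` plus blob. On the server side, streaming the file in chunks keeps memory flat; a sketch of the generator a `StreamingResponse` could consume, with illustrative names:

```python
import os
import tempfile

CHUNK_SIZE = 64 * 1024  # 64 KiB per chunk

def iter_file(path, chunk_size=CHUNK_SIZE):
    """Yield a file in fixed-size chunks, suitable for StreamingResponse."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# In the FastAPI endpoint (sketch):
# return StreamingResponse(
#     iter_file(file_path),
#     media_type="application/octet-stream",
#     headers={"Content-Disposition": "attachment; filename=data.bin"},
# )

# Quick local check that chunking round-trips the bytes:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * (CHUNK_SIZE * 2 + 123))
data = b"".join(iter_file(tmp.name))
print(len(data))  # 131195
os.remove(tmp.name)
```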
77,646,624
9,284,651
DataFrame - remove special characters
<p>My DF looks like below:</p> <pre><code>id date 1 ' : 07/01/2020 23:25' 2 ': 07/02/2020' 3 ' 07/03/2020 23:25 1' 4 '07/04/2020' 5 '23:50 07/05/2020' 6 '07 06 2023' 7 '00:00 07 07 2023' </code></pre> <p>I need to remove all special characters and the numbers around ':', so the DF should look like below:</p> <pre><code>id date 1 07/01/2020 2 07/02/2020 3 07/03/2020 4 07/04/2020 5 07/05/2020 </code></pre> <p>I can't use a simple <code>df['date'].str.split(':').str[0]</code> or <code>df['date'].str.replace(&quot;:&quot;,'')</code> because I will lose correct values. Do you have an idea how I could solve this?</p> <p>Regards</p>
<python><pandas><dataframe><replace><data-cleaning>
2023-12-12 13:53:21
1
403
Tmiskiewicz
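One pattern-based reading of the question above: every wanted value is a date shaped `dd/dd/dddd` (or space-separated, as in rows 6 and 7), so extracting the date is more robust than stripping the characters around ':'. A sketch with the stdlib `re` module; in pandas the same pattern would be usable as `df['date'].str.extract(r'(\d{2}[/ ]\d{2}[/ ]\d{4})')`:

```python
import re

DATE = re.compile(r"\d{2}[/ ]\d{2}[/ ]\d{4}")

def extract_date(raw):
    """Return the first dd/dd/dddd (slash- or space-separated) date found."""
    m = DATE.search(raw)
    return m.group(0) if m else None

rows = [" ' : 07/01/2020 23:25'", "': 07/02/2020'", "' 07/03/2020 23:25 1'",
        "'23:50 07/05/2020'", "'07 06 2023'", "'00:00 07 07 2023'"]
print([extract_date(r) for r in rows])
```

Times like `23:25` never match because `:` is not in the separator class, so they are skipped rather than captured.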
77,646,583
10,595,871
Pandas shift dataframe if date is found in string column
<p>I have a pandas dataframe that should have 7 columns. It has 9 columns instead; that's because some rows are shifted to the right by one or two columns.</p> <p>I need to make some order in my df. I have these elements:</p> <ul> <li>the first two columns are always fine, but the second one could be null (that is also fine);</li> <li>the third column is a date, could be null due to the shift to the next column or because it's actually null</li> <li>the fourth column is a name of a city, and sometimes it contains the date from column 3. Sometimes is null</li> <li>fifth column has the same behaviour as the fourth</li> <li>last two columns are fine (even if some rows are shifted)</li> </ul> <p>Now, the approach I was thinking of is to check in column 4 for values that could be converted into a date and, if found, shift that row from that column one space to the left (not the entire row, because the first two columns are fine). Then I would repeat with a check in the fifth column.</p> <p>Code up until now (provided by ChatGPT):</p> <pre><code>df['names'], df['dates'] = df['fourth column'].str.split(' ', 1).str </code></pre> <p>It successfully creates a column with only the names, but it fails in creating the dates one: all of the results are null.</p> <p>Another approach, with no results:</p> <pre><code>des_col= 'fourth' col_index= df.columns.get_loc(des_col) for index, row in df.iterrows(): if pd.api.types.is_datetime64_any_dtype(row[des_col]): df.iloc[index, col_index:] = row.iloc[col_index:].shift(-1) </code></pre> <p>Sample: (tt is some text) In this case rows 0 and 2 are good, while 1 and 3 should be shifted to the left from column 3:</p> <pre><code>1 2 3 4 5 6 7 na1 na2 A E 2026-02-27 Torino tt tt tt NaN NaN Z G NaN 1964-06-22 tt tt tt tt NaN Z G NaN NaT tt tt tt NaN NaN Z F NaN 1961-04-26 tt tt tt NaN NaN </code></pre> <p>I made some progress by doing so:</p> <pre><code># first I've created a column with only the dates from column 4 df['date'] = pd.to_datetime(df['4'], errors='coerce') # Then I assign the value at column 3, if it's not null, and then I shift for index, row in df.iterrows(): if pd.notnull(row['date']): df.iloc[index, df.columns.get_loc('3')] = row['date'] df.iloc[index, df.columns.get_loc('3'):] = row.iloc[df.columns.get_loc('3'):].shift(-1) </code></pre> <p>The problem here is that now the df is shifted, but there are still some rows not shifted due to the fact that they don't belong to the previous case.</p> <p>So the question now is how do I shift to the left the rows that still have column 'na1' populated, until I find an empty column?</p> <pre><code>1 2 3 4 5 6 7 na1 na2 Z G 1964-06-22 NaN tt tt tt tt NaN #expected output: 1 2 3 4 5 6 7 na1 na2 Z G 1964-06-22 tt tt tt tt NaN NaN </code></pre> <p>Thanks!</p>
<python><pandas>
2023-12-12 13:46:45
0
691
Federicofkt
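A sketch of the last step asked in the question above: rows that still spill into the overflow columns (`na1`, `na2`) have a spurious gap (a missing value) somewhere from column 3 onward, and closing that gap shifts the tail left until the overflow columns empty out. This assumes the only anomaly is such gaps; `None` stands in for NaN, and `close_gaps` is a hypothetical helper name:

```python
def close_gaps(row, n_cols=7, first_shiftable=2):
    """Shift a too-wide row left by deleting spurious gaps after column 2,
    until nothing remains in the overflow columns."""
    row = list(row)
    while any(v is not None for v in row[n_cols:]):
        gap = next((i for i, v in enumerate(row[first_shiftable:], start=first_shiftable)
                    if v is None), None)
        if gap is None:  # no gap to close: leave the row for manual review
            break
        del row[gap]
        row.append(None)
    return row

# Row from the question: the date landed in column 4, the tail spilled into na1.
bad = ["Z", "G", None, "1964-06-22", "tt", "tt", "tt", "tt", None]
print(close_gaps(bad))
```

Applied per row (e.g. via `df.apply(..., axis=1)`), already-correct rows pass through unchanged because their overflow cells are empty.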
77,646,470
4,537,160
PyTorch DataLoader: returned shape of targets
<p>I'm running a PyTorch model training, where I defined the <code>__getitem__</code> method of the dataset to return:</p> <pre><code>def __getitem__(self, ind): [...] return processed_images, target </code></pre> <p>processed_images is a sequence of 5 RGB images with shape (224,224), while target is a 4-dim vector with one-hot encoding of the class targets.</p> <p>So, each call of <code>__getitem__</code> is returning, for example:</p> <pre><code>&gt;&gt; processed_images.shape (5, 224, 224, 3) &gt;&gt; target [0.0, 1.0, 0.0, 0.0] </code></pre> <p>In the training script, I'm extracting batches using:</p> <pre><code>train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, shuffle=True, drop_last=False, persistent_workers=False, timeout=0, ) for epoch in range(number_of_epochs): for batch_ind, batch_data in enumerate(train_dataloader): datas, targets = batch_data </code></pre> <p>The problem is that datas has the correct shape, i.e. a stack of 22 sequences of images:</p> <pre><code>datas.shape torch.Size([22, 5, 224, 224, 3]) </code></pre> <p>However, targets are stacked in a weird way:</p> <pre><code>len(targets) = 4 len(targets[0]) = 22 </code></pre> <p>while I would expect the opposite (a list of 22 elements, each of len=4). Am I doing something wrong?</p>
<python><pytorch><pytorch-dataloader>
2023-12-12 13:27:38
1
1,630
Carlo
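What the question above observes matches the default collate behaviour: when `__getitem__` returns a plain Python list, `default_collate` recurses into it and effectively transposes the batch, like `zip(*targets)`, hence 4 sequences of length 22. Returning the target as a tensor instead (e.g. `torch.tensor(target, dtype=torch.float32)`) lets it stack to shape `(batch, 4)`. A torch-free sketch of the transposition:

```python
# Two samples' one-hot targets, as the plain lists __getitem__ returned:
batch_targets = [
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
]

# default_collate on a list of sequences behaves like zip(*batch):
collated = [list(col) for col in zip(*batch_targets)]
print(len(collated), len(collated[0]))  # 4 2
```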
77,646,466
13,178,529
Function that returns a pointer, when imported in a separate file, returns a number instead of an object
<p>I have a file called, runner.py, which has a lot of functions that make calls to a market DLL.</p> <pre><code>#Imports para execução da DLL import time import gc from ctypes import * from ctypes.wintypes import UINT import struct from datetime import* import sys import functions from threading import Thread rotPassword = '' accountID = '' corretora = '' profit_dll = WinDLL('./ProfitDLL64.dll') profit_dll.argtypes = None def printPosition(): global profit_dll global corretora, accountID asset = 'asset' bolsa = 'f' thisCorretora = corretora acc_id = accountID priceToReturn = 0 result = profit_dll.GetPosition(c_wchar_p(str(acc_id)), c_wchar_p(str(thisCorretora)), c_wchar_p(asset), c_wchar_p(bolsa)) print(f'here {result}') return result n_qtd = result[0] if (n_qtd == 0): print('Nao ha posicao para esse ativo') return priceToReturn else: n_tam = result[1] arr = cast(result, POINTER(c_char)) frame = bytearray() for i in range(n_tam): c = arr[i] frame.append(c[0]) start = 8 for i in range(n_qtd): corretora_id = struct.unpack('i', frame[start:start+4])[0] start += 4 acc_id_length = struct.unpack('h', frame[start:start+2])[0] start += 2 account_id = frame[start:start+acc_id_length] start += acc_id_length titular_length = struct.unpack('h', frame[start:start+2])[0] start += 2 titular = frame[start:start+titular_length] start += titular_length ticker_length = struct.unpack('h', frame[start:start+2])[0] start += 2 ticker = frame[start:start+ticker_length] start += ticker_length intraday_pos = struct.unpack('i', frame[start:start+4])[0] start += 4 price = struct.unpack('d', frame[start:start + 8])[0] priceToReturn = str(price) start += 8 avg_sell_price = struct.unpack('d', frame[start:start + 8])[0] start += 8 sell_qtd = struct.unpack('i', frame[start:start+4])[0] start += 4 avg_buy_price = struct.unpack('d', frame[start:start + 8])[0] start += 8 buy_qtd = struct.unpack('i', frame[start:start+4])[0] start += 4 custody_d1 = struct.unpack('i', frame[start:start+4])[0] start += 
4 custody_d2 = struct.unpack('i', frame[start:start+4])[0] start += 4 custody_d3 = struct.unpack('i', frame[start:start+4])[0] start += 4 blocked = struct.unpack('i', frame[start:start+4])[0] start += 4 pending = struct.unpack('i', frame[start:start+4])[0] start += 4 allocated = struct.unpack('i', frame[start:start+4])[0] start += 4 provisioned = struct.unpack('i', frame[start:start+4])[0] start += 4 qtd_position = struct.unpack('i', frame[start:start+4])[0] start += 4 available = struct.unpack('i', frame[start:start+4])[0] start += 4 print(f&quot;Corretora: {corretora_id}, Titular: {str(titular)}, Ticker: {str(ticker)}, Price: {price}, AvgSellPrice: {avg_sell_price}, AvgBuyPrice: {avg_buy_price}, SellQtd: {sell_qtd}, BuyQtd: {buy_qtd}&quot;) return priceToReturn def dllStart(): try: global profit_dll key = input(&quot;Chave de acesso: &quot;) user = input(&quot;Usuário: &quot;) # preencher com usuário da conta (email ou documento) password = input(&quot;Senha: &quot;) # preencher com senha da conta bRoteamento = True if bRoteamento : result = profit_dll.DLLInitializeLogin(c_wchar_p(key), c_wchar_p(user), c_wchar_p(password), stateCallback, historyCallBack, orderChangeCallBack, accountCallback, newTradeCallback, newDailyCallback, priceBookCallback, offerBookCallback, newHistoryCallback, progressCallBack, newTinyBookCallBack) else : result = profit_dll.DLLInitializeMarketLogin(c_wchar_p(key), c_wchar_p(user), c_wchar_p(password), stateCallback, newTradeCallback, newDailyCallback, priceBookCallback, offerBookCallback, newHistoryCallback, progressCallBack, newTinyBookCallBack) profit_dll.SendSellOrder.restype = c_longlong profit_dll.SendBuyOrder.restype = c_longlong profit_dll.SendStopBuyOrder.restype = c_longlong profit_dll.SendStopSellOrder.restype = c_longlong profit_dll.SendZeroPosition.restype = c_longlong profit_dll.GetAgentNameByID.restype = c_wchar_p profit_dll.GetAgentShortNameByID.restype = c_wchar_p profit_dll.GetPosition.restype = POINTER(c_int) 
profit_dll.SendMarketSellOrder.restype = c_longlong profit_dll.SendMarketBuyOrder.restype = c_longlong print('DLLInitialize: ' + str(result)) wait_login() except Exception as e: print(str(e)) if __name__ == '__main__': try: dllStart() strInput = '' while strInput != &quot;exit&quot;: strInput = input('Insira o comando: ') if strInput == 'subscribe': subscribeTicker() elif strInput == 'unsubscribe': unsubscribeTicker() elif strInput == 'offerbook': subscribeOffer() elif strInput == 'position': printPosition() elif strInput == 'lastAdjusted': printLastAdjusted() elif strInput == 'buystop' : buyStopOrder() elif strInput == 'sellstop': sellStopOrder() elif strInput == 'cancel': cancelOrder() elif strInput == 'changeOrder': changeOrder() elif strInput == 'cancelAllOrders': cancelAllOrders() elif strInput == 'getOrders': getOrders() elif strInput == 'getOrder': getOrder() elif strInput == 'selectOrder': selectOrder() elif strInput == 'cancelOrder': cancelOrder() elif strInput == 'getOrderProfitID': getOrderProfitID() elif strInput == 'getAllTickersByStock': getAllTickersByStock() elif strInput == 'getOld': getSerieHistory() elif strInput == 'account': getAccount() elif strInput == 'myBuy': sendBuyOrder() except KeyboardInterrupt: pass </code></pre> <p>The correct log that I get when calling:</p> <pre><code>&lt;ctypes.wintypes.LP_c_long object at 0x000001714E9CABC0&gt; </code></pre> <p>When running directly the <code>runner.py</code> and calling <code>printPosition()</code>, everything works fine and it returns what I need (the object above).<br> Although, I need to import this printPosition in a separate file to make a call for it. 
When doing so, I receive only a number and not the object I need.</p> <pre><code>import socket, base64 import re import sys from operator import neg from datetime import datetime import profitrunner import time import gc from ctypes import * from ctypes.wintypes import UINT import struct def calculate_pos(): memory_address = profitrunner.printPosition() print(memory_address) # I thought of doing this, but it did not work: arr = cast(memory_address, POINTER(c_char)) </code></pre> <p>The logs from this:</p> <pre><code>2015042512 2015042656 2015042800 2015042944 2015043088 2015043232 </code></pre> <p>The documentation for <code>GetPosition</code> is the following:</p> <pre><code>Function that returns the position for a given ticker. Returns a data structure specified below. With full size (90 + N + T + K) bytes: function GetPosition( pwcIDAccount : PWideChar; pwcIDCorretora : PWideChar; pwcTicker : PWideChar; pwcBolsa : PWideChar) : Pointer; stdcall; </code></pre> <p><a href="https://i.sstatic.net/woKNa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/woKNa.png" alt="enter image description here" /></a></p>
<python><python-3.x><pointers>
2023-12-12 13:26:45
1
1,200
Nilton Schumacher F
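On the ctypes question above: the big integers being printed look like raw memory addresses. That is exactly what ctypes returns when a foreign function's `restype` is left at its default of C `int`, so a likely fix is declaring `profit_dll.GetPosition.restype = ctypes.POINTER(...)` (or at least `c_void_p`) in whichever module actually calls it. Failing that, an integer address can be turned back into a usable pointer with `ctypes.cast`. A self-contained sketch of that recovery step:

```python
import ctypes

# Stand-in for the integer that printPosition() hands back when called
# through the importing module: a raw address, like 2015042512 above.
value = ctypes.c_long(42)
memory_address = ctypes.addressof(value)  # a plain Python int

# cast() accepts an integer address and reinterprets it as a pointer:
ptr = ctypes.cast(memory_address, ctypes.POINTER(ctypes.c_long))
print(ptr.contents.value)  # 42 - the data is reachable again
```

The pointed-to object must still be alive when the pointer is dereferenced; for the `GetPosition` buffer that lifetime is managed by the DLL, so setting `restype` up front remains the cleaner route.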
77,646,455
1,075,996
PyMySQL TypeError when connecting
<p>I'm working with the following (very simple) MariaDB connection:</p> <pre><code>&gt;&gt;&gt; from dbconfig import db_host, db_user, db_pass, db_name, system_number &gt;&gt;&gt; import pymysql as mdb &gt;&gt;&gt; print(db_host, db_user, db_pass, db_name) some.server.co.uk my_username my_password my_db_name &gt;&gt;&gt; db = mdb.connect(db_host, db_user, db_pass, db_name) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: __init__() takes 1 positional argument but 5 were given </code></pre> <p>I feel like I have done this many times before but for some reason it is throwing this error and I'm unable to connect. I've confirmed that the details are correct and I'm able to use them to connect in a MySQL client on the same machine.</p> <p>What am I missing?</p>
<python><mysql><mariadb><pymysql>
2023-12-12 13:25:08
1
453
btongeorge
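The PyMySQL traceback above matches a known breaking change: PyMySQL 1.0 removed support for positional arguments in `connect()`, making its many parameters keyword-only, so the old `mdb.connect(host, user, passwd, db)` call style raises exactly this kind of `TypeError`. The likely fix is `mdb.connect(host=db_host, user=db_user, password=db_pass, database=db_name)`. A dependency-free sketch reproducing the failure shape:

```python
# Minimal stand-in for pymysql.connect(): the bare `*` makes every
# parameter keyword-only, as in PyMySQL >= 1.0.
def connect(*, host=None, user=None, password="", database=None):
    return {"host": host, "user": user, "password": password, "database": database}

try:
    # Positional call - fails the same way the question reports:
    connect("some.server.co.uk", "my_username", "my_password", "my_db_name")
except TypeError as exc:
    print(exc)  # ...positional arguments... were given

# Keyword call - works:
db = connect(host="some.server.co.uk", user="my_username",
             password="my_password", database="my_db_name")
print(db["host"])  # some.server.co.uk
```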
77,646,431
6,372,189
Why does shutil.rmtree sometimes take hours to delete a folder? Any alternatives?
<p>It seems like a trivial issue, but I've noticed lately that the <code>shutil.rmtree</code> function is either taking a ridiculously long time (like 6 hours) or simply blocking the execution of the remaining script.</p> <p>I have a script that does a bunch of operations but before starting everything else, it deletes this folder called &quot;temp&quot; and waits for 5 minutes, then it does a bunch of operations. I've noticed that recently the script is taking over 6 hours to run instead of the usual 10 minutes. After troubleshooting the issue, it looks like the script is getting stuck on the <code>shutil.rmtree</code> function, which basically blocks further execution of the script. I know the script is logically and programmatically fine, but this delay is simply causing a lot of trouble. Below is a sample of the script.</p> <pre><code>import shutil from pathlib import Path import os import time def delFolder(folderPath): try: shutil.rmtree(folderPath) except: pass def main_func(modulePath): modulePath = Path(modulePath) # Variable declarations and other stuff temp_folder = Path.joinpath(modulePath, &quot;tempFolder&quot;) print(&quot;Deleting temp folder....&quot;) delFolder(temp_folder) print(&quot;Sleeping for 5 min....&quot;) time.sleep(5*60) # Other script functionality to be executed. if __name__ == &quot;__main__&quot;: this_modulePath = os.getcwd() main_func(this_modulePath) </code></pre> <p>It takes forever (like over 6 hours) to get from <code>Deleting temp folder....</code> to <code>Sleeping for 5 min....</code>. I don't know why this is happening with shutil. The folder that I am trying to delete is around 4 GB in size and has many files and subfolders in it, and no other program is using any files from the temp folder (except OneDrive sync, which shouldn't be an issue).</p>
<python><python-3.x><shutil>
2023-12-12 13:20:44
0
701
Prashant Kumar
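One way to narrow down the `rmtree` stall above is to stop swallowing errors: the bare `except: pass` in `delFolder` hides exactly the information needed (for instance a file a sync client keeps holding open). A sketch of a logging variant — `onerror` is a real `shutil.rmtree` parameter (renamed `onexc` in Python 3.12); the OneDrive hypothesis is only a guess taken from the question:

```python
import shutil
import tempfile
import time
from pathlib import Path

def del_folder(folder_path):
    """Delete a tree, reporting every path that resists deletion."""
    def log_error(func, path, exc_info):
        # Instead of a silent `except: pass`, surface the offender.
        print(f"{func.__name__} failed on {path}")
    start = time.monotonic()
    shutil.rmtree(folder_path, onerror=log_error)
    print(f"rmtree finished in {time.monotonic() - start:.2f}s")

# Quick self-check on a throwaway tree:
tmp = Path(tempfile.mkdtemp())
(tmp / "sub").mkdir()
(tmp / "sub" / "file.txt").write_text("x")
del_folder(tmp)
print(tmp.exists())  # False
```

If the log shows the same paths failing and retrying, that points at whatever process holds them, rather than at `rmtree` itself.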
77,646,352
14,695,308
Test-dependent teardown fixture
<p>I'm using teardown fixtures as follows:</p> <pre><code>@pytest.fixture def clean_after_test(): yield function_to_clean() def test_object(clean_after_test): &lt;TESTING CODE&gt; </code></pre> <p>But in a specific test case I need to create an object, do something with it, and once testing is done remove it by ID. I can get the ID only while the test is in progress, so I cannot use a teardown fixture as usual, since I do not know the ID in advance. I could add an object-removal line at the end of the test, but it would only be executed if the test passes, so it's a bad option.</p> <p>So what is the best practice for cleaning up after a test based on data received during the test?</p>
<python><pytest>
2023-12-12 13:10:05
0
720
DonnyFlaw
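A common answer to the question above is a fixture that yields a *registration callable*: the test reports each ID as soon as it exists, and the code after `yield` runs during teardown whether the test passed or failed. A pytest-free sketch of the mechanism (`remove_by_id` stands in for the real cleanup call); in real use the generator below would simply be decorated with `@pytest.fixture` and requested by the test:

```python
def cleanup_registry(remove_by_id):
    """Yields a callable the test uses to register IDs created mid-test;
    everything recorded is removed during teardown, pass or fail."""
    created_ids = []
    yield created_ids.append
    for obj_id in created_ids:
        remove_by_id(obj_id)

# Driving the generator by hand to show the flow pytest would run:
removed = []
gen = cleanup_registry(removed.append)
register = next(gen)        # fixture setup: test receives the callable
register("obj-123")         # ID discovered while the test is running
try:
    next(gen)               # fixture teardown
except StopIteration:
    pass
print(removed)  # ['obj-123']
```

pytest's `request.addfinalizer` offers the same guarantee and can likewise be called mid-test once the ID is known.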
77,646,307
6,836,950
Correlation matrix like DataFrame in Polars
<p>I have a Polars dataframe:</p> <pre class="lang-py prettyprint-override"><code>data = { &quot;col1&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;], &quot;col2&quot;: [[-0.06066, 0.072485, 0.548874, 0.158507], [-0.536674, 0.10478, 0.926022, -0.083722], [-0.21311, -0.030623, 0.300583, 0.261814], [-0.308025, 0.006694, 0.176335, 0.533835]], } df = pl.DataFrame(data) </code></pre> <p>I want to calculate the cosine similarity for each pair of rows, keyed by column <code>col1</code>.</p> <p>The desired output should be the following:</p> <pre><code>┌─────────────────┬──────┬──────┬──────┬──────┐ │ col1_col2 ┆ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞═════════════════╪══════╪══════╪══════╪══════╡ │ a ┆ 1.0 ┆ 0.86 ┆ 0.83 ┆ 0.54 │ │ b ┆ 0.86 ┆ 1.0 ┆ 0.75 ┆ 0.41 │ │ c ┆ 0.83 ┆ 0.75 ┆ 1.0 ┆ 0.89 │ │ d ┆ 0.54 ┆ 0.41 ┆ 0.89 ┆ 1.0 │ └─────────────────┴──────┴──────┴──────┴──────┘ </code></pre> <p>Each value represents the cosine similarity between the respective <code>col2</code> vectors.</p> <p>I'm using the following cosine similarity function:</p> <pre class="lang-py prettyprint-override"><code>from numpy.linalg import norm cosine_similarity = lambda a,b: (a @ b.T) / (norm(a)*norm(b)) </code></pre> <p>I tried to use it with the <code>pivot</code> method:</p> <pre class="lang-py prettyprint-override"><code>df.pivot(on=&quot;col1&quot;, values=&quot;col2&quot;, index=&quot;col1&quot;, aggregate_function=cosine_similarity) </code></pre> <p>However, I'm getting the following error:</p> <pre><code>AttributeError: 'function' object has no attribute '_pyexpr' </code></pre>
<python><python-polars>
2023-12-12 13:02:00
3
4,179
Okroshiashvili
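`pivot`'s `aggregate_function` expects a Polars expression, not a Python callable — hence the `'_pyexpr'` error above. For an all-pairs computation it is often simpler to step out of Polars: row-normalise the vectors, and a single matrix product yields the whole cosine-similarity matrix, which can then be wrapped back into a DataFrame. A NumPy sketch on the question's data:

```python
import numpy as np

labels = ["a", "b", "c", "d"]
vectors = np.array([
    [-0.06066, 0.072485, 0.548874, 0.158507],
    [-0.536674, 0.10478, 0.926022, -0.083722],
    [-0.21311, -0.030623, 0.300583, 0.261814],
    [-0.308025, 0.006694, 0.176335, 0.533835],
])

# Normalise each row to unit length; then unit @ unit.T holds every
# pairwise cosine similarity at once.
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
sim = unit @ unit.T

print(np.round(sim[0], 2))  # a vs a, b, c, d: 1.0, 0.86, 0.83, 0.54
```

Rebuilding the question's layout is then roughly `pl.DataFrame({"col1_col2": labels, **{l: sim[:, i] for i, l in enumerate(labels)}})` (with `import polars as pl` assumed).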
77,646,165
235,472
How to get the current directory from the whole path?
<p>With:</p> <pre><code>currentPath = pathlib.Path.cwd() </code></pre> <p>I get the current path:</p> <pre><code>/path/to/my/location/ </code></pre> <p>How can I get the directory <code>location</code> without the need to manually process the complete path?</p>
<python><python-3.x>
2023-12-12 12:36:41
2
13,528
Pietro
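For the question above: `Path` objects already expose the last component, so no manual string processing is needed — `Path.cwd().name` gives the final directory name and `.parent` everything before it. A sketch, using `PurePosixPath` so the example path behaves the same on any OS:

```python
from pathlib import Path, PurePosixPath

current_path = PurePosixPath("/path/to/my/location")
print(current_path.name)    # location
print(current_path.parent)  # /path/to/my

# On a live interpreter the same attribute applies directly:
print(Path.cwd().name)
```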
77,645,765
12,571,870
Custom Python module not found when placed in Docker container (with setup.py and entry_points, console_scripts)
<p>I have the following structure of my project:</p> <pre><code>|--Dockerfile |--requirements.txt |--setup.py |--mydummyproject |--|--__init__.py |--|--__main__.py |--|--dummy.py </code></pre> <p>The <code>dummy.py</code> has one simple function:</p> <pre><code>import numpy as np def return_output(*args): for a in args: print(f&quot;Hello {a}&quot;) print(&quot;Phew!&quot;) print(&quot;We've got numpy:&quot;,np.__version__) </code></pre> <p>The <code>__init__.py</code> file is there <em>pro forma</em> and in <code>__main__.py</code> I have a simple <code>argparse</code>interface:</p> <pre><code>import argparse from .dummy import return_output def command_line_target(): parser = argparse.ArgumentParser() parser.add_argument('-v','--value', nargs='+',default=[&quot;John&quot;,&quot;Jane&quot;]) args = parser.parse_args() return_output(*args.value) </code></pre> <p>My <code>setup.py</code> has the following:</p> <pre><code>from setuptools import setup with open(&quot;requirements.txt&quot;, &quot;r&quot;) as _f: install_reqs = _f.read().strip().split(&quot;\n&quot;) setup(name=&quot;dummy&quot;, version=&quot;0.0.1&quot;, package_dir={&quot;mydummyproject&quot;: &quot;mydummyproject&quot;}, install_requires=install_reqs, python_requires=&quot;&gt;=3.8&quot;, entry_points={ 'console_scripts': [ 'hello = mydummyproject.__main__:command_line_target' ]} ) </code></pre> <p>So basically once I install it and run something like <code>hello Mark Johnny</code> it will run the <code>return_output</code> and print out the above.</p> <p>I've tested this in a test conda environment, and it works. Next step is dockerizing my application. I'm running WSL and Docker Desktop. My <code>Dockerfile</code> is:</p> <pre><code>FROM python:3.8 WORKDIR /app COPY . /app RUN pip install --no-cache-dir -r requirements.txt RUN python setup.py install CMD [&quot;hello&quot;] </code></pre> <p>I'm using <code>docker build -t dummytest</code> in WSL and it builds it just fine. 
But when I run it via <code>docker run dummytest</code> I get the following error:</p> <pre><code> Traceback (most recent call last): File &quot;/usr/local/bin/hello&quot;, line 33, in &lt;module&gt; sys.exit(load_entry_point('dummy==0.0.1', 'console_scripts', 'hello')()) File &quot;/usr/local/bin/hello&quot;, line 25, in importlib_load_entry_point return next(matches).load() File &quot;/usr/local/lib/python3.8/importlib/metadata.py&quot;, line 77, in load module = import_module(match.group('module')) File &quot;/usr/local/lib/python3.8/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1014, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 961, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 219, in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1014, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'mydummyproject' </code></pre> <p>This would imply that my custom module (mydummyproject) was not installed correctly. Is there an error in my Dockerfile?</p> <p>EDIT1: I've modified <code>Dockerfile</code>:</p> <pre><code>FROM python:3.8 WORKDIR /app COPY . /app RUN pip install . CMD [&quot;hello&quot;] </code></pre> <p>I've used command <code>docker build -t dummy2 .</code> and command <code>docker run dummy2</code>. 
The error was now:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/bin/hello&quot;, line 5, in &lt;module&gt; from mydummyproject.__main__ import command_line_target ModuleNotFoundError: No module named 'mydummyproject' </code></pre> <p>I've changed <code>setup.py</code> and directory (replaced &quot;mydummyproject&quot; with &quot;dummy&quot;) and the error was still:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/bin/hello&quot;, line 5, in &lt;module&gt; from dummy.__main__ import command_line_target ModuleNotFoundError: No module named 'dummy' </code></pre> <p>EDIT2: The solution by Teemu Risku works. The solution is to modify the <code>setup.py</code> via <code>find_packages</code>:</p> <pre><code>from setuptools import setup, find_packages with open(&quot;requirements.txt&quot;, &quot;r&quot;) as _f: install_reqs = _f.read().strip().split(&quot;\n&quot;) setup(name=&quot;dummy&quot;, version=&quot;0.0.1&quot;, packages=find_packages(), install_requires=install_reqs, python_requires=&quot;&gt;=3.8&quot;, entry_points={ 'console_scripts': [ 'hello = mydummyproject.__main__:command_line_target' ]} ) </code></pre>
<python><docker>
2023-12-12 11:33:33
1
438
ivan199415
77,645,742
6,060,982
pyright and imports from shared libraries
<p>Can pyright or any other linter resolve imports from <code>.so</code> libraries? In my case I have a c++ shared library with pybind11 used to generate the python bindings. I can successfully import and use the <code>FourierSeries</code> from the <code>.so</code> library</p> <pre class="lang-py prettyprint-override"><code>from .c_lib.c_fourier import FourierSeries as FourierSeries_c </code></pre> <p>but at the same time pyright complains that the import cannot be resolved.</p> <blockquote> <p>Import &quot;.c_lib.c_fourier&quot; could not be resolved</p> </blockquote> <p>Any ideas as to how this can be fixed, or at least how to silence this warning?</p>
<python><pybind11><pyright>
2023-12-12 11:29:56
0
700
zap
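pyright cannot introspect a compiled `.so` extension, so the usual remedy for the unresolved-import warning above is generating `.pyi` stubs for the binding — for pybind11 modules the `pybind11-stubgen` tool exists for exactly this purpose (invocation details vary by version) — and placing them where pyright looks. By default pyright searches a `typings` directory at the project root; the location can also be set explicitly in `pyrightconfig.json`:

```json
{
    "stubPath": "typings"
}
```

A blunt alternative is silencing just that line with `# type: ignore`, or downgrading the `reportMissingImports` diagnostic in the same config file.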
77,645,552
5,973,911
Is it possible to reformat python __all__ statement with ruff to make it multiline?
<p>I'm using <code>ruff</code> (<a href="https://docs.astral.sh/ruff/" rel="nofollow noreferrer">https://docs.astral.sh/ruff/</a>). Is it possible to format the code from this:</p> <pre><code>__all__ = [&quot;Model&quot;, &quot;User&quot;, &quot;Account&quot;] </code></pre> <p>Into this?</p> <pre><code>__all__ = [ &quot;Model&quot;, &quot;User&quot;, &quot;Account&quot; ] </code></pre>
<python><ruff>
2023-12-12 11:00:05
1
853
Михаил Павлов
77,645,520
6,759,459
Dockerized FastAPI SQLAlchemy PostgreSQL 'NoneType' object has no attribute 'execute'
<p>This is a sudden error, whose root cause I haven't been able to figure out. I successfully built 2 Docker images that are connected over the same network:</p> <ol> <li>FastAPI</li> <li>PostgreSQL</li> </ol> <p>I am not able to use the SQLAlchemy dependency injection to connect to my DB. However, I was able to seed my DB successfully many times over. Only made one additional field addition to my SQLAlchemy model and then this failure occurred.</p> <pre><code>backend-web-1 | ERROR:services.utils:Health check failed in 0.00 seconds: 'NoneType' object has no attribute 'execute' backend-web-1 | ERROR:services.utils:Traceback (most recent call last): backend-web-1 | File &quot;/app/main.py&quot;, line 27, in read_root backend-web-1 | db.execute('SELECT 1') backend-web-1 | ^^^^^^^^^^ backend-web-1 | AttributeError: 'NoneType' object has no attribute 'execute' </code></pre> <p>Code failure above happens here</p> <pre><code>from fastapi import FastAPI, Request, Depends, HTTPException from sqlalchemy.orm import Session import psutil import threading import time import traceback # Import your application modules from database.database_config import get_db from database.models import * from database.crud import * from api.interview import complete_interview from api.chat_handling import * from api.user_registration import register_user from api.stripe import * from services.utils import * from config.env_var import * app = FastAPI() @app.get(&quot;/&quot;) def read_root(db: Session = Depends(get_db)): start_time = time.time() try: # Check database connectivity db.execute('SELECT 1') db_status = &quot;Connected&quot; # Memory and Thread Count memory_usage = psutil.virtual_memory().percent thread_count = threading.active_count() response_time = time.time() - start_time logger.info(f&quot;Health check passed in {response_time:.2f} seconds, Memory Usage: {memory_usage}%, Thread Count: {thread_count}, DB Status: {db_status}&quot;) return {&quot;message&quot;: &quot;Hello, 
World!&quot;} except Exception as e: response_time = time.time() - start_time logger.error(f&quot;Health check failed in {response_time:.2f} seconds: {e}&quot;) logger.error(traceback.format_exc()) raise HTTPException(status_code=500, detail=str(e)) </code></pre> <p>For reference, here's how I define my dependency injection:</p> <pre><code>from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from services.utils import logger import traceback from config.env_var import * DB_USER = os.getenv('DB_USER') DB_PASSWORD = os.getenv('DB_PASSWORD') DB_HOST = os.getenv('DB_HOST') DB_NAME = os.getenv('DB_NAME') Base = declarative_base() db_url = f'postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:5432/{DB_NAME}' try: engine = create_engine(db_url) except Exception as e: logger.info(f&quot;Error creating database engine: {e}&quot;) logger.info(traceback.format_exc()) raise SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) def get_db(): db = SessionLocal() try: yield db except Exception as e: logger.info(f&quot;Database session error: {e}&quot;) logger.info(traceback.format_exc()) raise finally: db.close() </code></pre> <p>I am able to successfully connect via my FastAPI Docker instance to my PostgreSQL instance via this script though:</p> <pre><code>from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker import os DB_USER = os.getenv('DB_USER') DB_PASSWORD = os.getenv('DB_PASSWORD') DB_HOST = os.getenv('DB_HOST') DB_NAME = os.getenv('DB_NAME') db_url = f'postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:5432/{DB_NAME}' engine = create_engine(db_url) SessionLocal = sessionmaker(bind=engine) def test_session_local(): session = SessionLocal() print(f&quot;Session: {session}&quot;) session.close() def test_db_connection(): try: db = SessionLocal() # Perform a simple query result = db.execute('SELECT 1') for row in result: print(row) db.close() 
print(&quot;Connection successful.&quot;) except Exception as e: print(f&quot;Connection failed: {e}&quot;) if __name__ == &quot;__main__&quot;: test_session_local() test_db_connection() </code></pre> <p>I don't see where I made a critical failure.</p>
<python><postgresql><docker><sqlalchemy><fastapi>
2023-12-12 10:56:18
1
926
Ari
77,645,510
16,674,436
Simple seaborn distribution plot not working
<p>I’m trying to plot the distribution of scores on reddit posts, but can’t figure it out.</p> <p>My data frame is something like that</p> <pre><code>df = pd.DataFrame({&quot;score&quot;: [12, 19, 25987, 887, 887, 1], &quot;author&quot;: [&quot;xxx&quot;, &quot;x&quot;, &quot;xxx&quot;, &quot;xx&quot;, &quot;xxxx&quot;, &quot;x&quot;]}) </code></pre> <p>With a lot more data points (about 300,000).</p> <p>I have attempted the following two things:</p> <pre><code>plt.figure(figsize=(10, 6)) sns.displot(data=df, x=&quot;score&quot;, bins=30, kde=True) plt.title('Distribution of Post Scores') plt.xlabel('Score') plt.ylabel('Frequency') plt.show() </code></pre> <p>But it gives me the following: </p> <p><a href="https://i.sstatic.net/l6Cph.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l6Cph.png" alt="enter image description here" /></a></p> <p>And I’ve tried this:</p> <pre><code>score_freq = df['score'].value_counts() plt.figure(figsize=(10, 6)) sns.displot(score_freq, kind=&quot;kde&quot;, bw_adjust=30) plt.title('Distribution of Post Scores') plt.xlabel('Score') plt.ylabel('Frequency') plt.show() </code></pre> <p>Which gives me this:</p> <p><a href="https://i.sstatic.net/FnRWw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FnRWw.png" alt="enter image description here" /></a></p> <p>So the second option seems a bit better, though still wrong. 
I don’t have scores that go all the way to -100,000, and I <strong>do</strong> have scores that go all the way to 299,489.</p> <p>I don’t really get what I’m doing wrong.</p> <h2>Update</h2> <p>This is what the real data looks like when I do <code>df['score'].sort_values()</code>:</p> <p><a href="https://i.sstatic.net/N5sRE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N5sRE.png" alt="enter image description here" /></a></p> <p>And this is what it looks like with <code>df['score'].value_counts()</code>: <a href="https://i.sstatic.net/ijimS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ijimS.png" alt="enter image description here" /></a></p>
<python><plot><statistics><seaborn><distribution>
2023-12-12 10:55:25
1
341
Louis
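A likely diagnosis for the plots above: with scores running from 1 up to 299,489 and most posts near the bottom, a linear axis crams nearly everything into the first histogram bar, and a KDE fitted to such data spills into impossible negative scores (the -100,000 tail). Log-scaling the axis usually fixes both; in seaborn that is `sns.displot(df, x="score", log_scale=True)` (available since seaborn 0.11). The underlying binning idea, sketched with NumPy on synthetic heavy-tailed data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for reddit scores: heavy-tailed, clipped to the
# question's observed range [1, 299489].
scores = np.clip(rng.lognormal(mean=3.0, sigma=2.0, size=300_000), 1, 299_489)

# Log-spaced bin edges spread out the mass that linear bins would pile
# into the first bar:
bins = np.logspace(0, np.log10(300_000), num=31)
counts, _ = np.histogram(scores, bins=bins)

print(len(counts), counts.sum())  # 30 bins cover every score
```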
77,645,454
4,752,738
PyCharm removes unused imports when moving a file
<p>I'm using PyCharm for moving files inside my project, and PyCharm updates the imports in the project to the new path, which is great.</p> <p>The problem is that it also <em>removes unused imports</em> from those files. This is very bad behavior for me, and I can't find a way to stop it.</p> <p>I can't check whether it did that every time, because sometimes it's hundreds of files.</p>
<python><pycharm>
2023-12-12 10:45:59
2
943
idan ahal
77,645,290
9,488,023
How can I choose to not change the image size when reprojecting with Python rioxarray?
<p>I have an image in tif-format of size 18346x10218 in coordinate system EPSG:4326 that I want to convert to a new coordinate system, which I do with the following method with rioxarray in Python.</p> <pre><code>import rioxarray rast_path = &quot;old_projection.tif&quot; rds = rioxarray.open_rasterio(rast_path) rds_3996 = rds.rio.reproject(&quot;EPSG:3996&quot;) rds_3996.rio.to_raster(&quot;new_projection.tif&quot;) </code></pre> <p>This saves the tif file in the new projection, but I noticed that it increases the image size to 20198x20198. When I open the file, the new pixels are not visible, so I assume they do not contain any data, but the size of the file has more than doubled. Is there a way to re-project data in this manner without saving the empty pixels? Thanks for any help!</p>
<python><raster><python-xarray><projection><epsg>
2023-12-12 10:21:43
1
423
Marcus K.
77,645,267
8,580,469
pcolormesh: artefacts / overlapping points when using ec='face'
<p>I am plotting maps with pcolormesh and cartopy. When I don't use the option <code>edgecolor</code> or set it to None, results are like expected. But when I use <code>ec='face'</code>, I get really strange artefacts and/or overlapping points - depending on the selected output format. I have tried with <code>snap</code> and <code>antialiased</code> but those don't help unfortunately. When I zoom into the interactive plot, it looks not that bad but my output file (pdf or png) looks like a mess.</p> <p><strong>How can I activate the edgecolor without having this weird behaviour?</strong></p> <p>MWE below. Using python 3.10.12, matplotlib 3.5.1, Cartopy 0.22.0.</p> <pre><code>import numpy as np import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt proj1 = ccrs.PlateCarree() proj2 = ccrs.LambertAzimuthalEqualArea() fig = plt.figure(figsize=(16,8), layout='constrained') # generate coordinates xx = np.linspace(-10,30,300) yy = np.linspace( 35,75,300) xxx,yyy = np.meshgrid(xx,yy) rng = np.random.default_rng(6666) # create artificial data zzz = xxx + yyy noise = rng.normal(0,np.nanmax(zzz)*0.1,zzz.shape) zzz += noise # create some gaps in data zdim = zzz.size idx = rng.choice(range(zdim), size=zdim//3) zshape = zzz.shape zz = zzz.flatten() zz[idx] = np.nan zzz = zz.reshape(zshape) ax = fig.add_subplot(1,2,1, projection=proj2) ax.add_feature(cfeature.BORDERS, lw=0.5, ec='k') ax.add_feature(cfeature.COASTLINE, lw=0.5, ec='k') # looks ok ax.pcolormesh(xxx, yyy, zzz, transform=proj1) ax = fig.add_subplot(1,2,2, projection=proj2) ax.add_feature(cfeature.BORDERS, lw=0.5, ec='k') ax.add_feature(cfeature.COASTLINE, lw=0.5, ec='k') # looks not ok ax.pcolormesh(xxx, yyy, zzz, #snap=True, #antialiased=True, ec='face', # this seems to have a problem here transform=proj1) plt.show() #plt.savefig('plot_test.pdf') </code></pre> <p><a href="https://i.sstatic.net/I9tQN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I9tQN.jpg" 
alt="enter image description here" /></a></p>
<python><matplotlib><cartopy>
2023-12-12 10:17:43
1
377
Waterkant
77,645,115
1,897,839
Bokeh legend breaks on Python callback
<p>I have implemented a Bokeh figure with</p> <ol> <li>a scatter plot using <code>circle()</code> with a <code>legend_group</code> for colouring and for creating an interactive legend.</li> <li>a range slider with a Python callback that filters the data</li> </ol> <p>I have set <code>click_policy=hide</code> for the legend so that points (circles) from a specific group are hidden when I click on a legend item.</p> <p>All of that works fine initially. When I move the slider, however, the legend &quot;breaks&quot;:</p> <ol> <li>Legend items disappear randomly, even though multiple groups from above are still visible</li> <li>The legend items that remain behave erroneously. With few initial total groups, only one item remains. With many initial groups, several tend to remain, but when I click on one of them, multiple are greyed out, while only one of the corresponding circle groups are hidden.</li> </ol> <p>This is what the initial plot looks like:</p> <p><a href="https://i.sstatic.net/7kUfF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7kUfF.png" alt="Plot with full range" /></a></p> <p>When I move the slider (at the bottom of the plot), some groups are correctly filtered out from the plot, but only one legend item remains, while multiple groups are still visible in the plot.</p> <p><a href="https://i.sstatic.net/ybdVD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ybdVD.png" alt="Plot after moving the slider" /></a></p> <p>The code is embedded into an object-oriented implementation. 
This is the method that adds the circles to the figure:</p> <pre class="lang-py prettyprint-override"><code> def _add_circles(self): palette = self._select_palette() labels = self._data[self._LABEL_FIELD].unique().tolist() for cluster in self._clusters: glyph = self._figure.circle( source=self._source, x=&quot;x&quot;, y=&quot;y&quot;, color=factor_cmap(self._LABEL_FIELD, palette, labels), legend_group=self._LABEL_FIELD, view=CDSView( filter=GroupFilter( column_name=self._LABEL_FIELD, group=cluster.label ), ), ) if cluster.label == OUTLIERS_LABEL: glyph.visible = False </code></pre> <p>This method sets up the legend:</p> <pre class="lang-py prettyprint-override"><code> def _setup_legend(self, legend_location: str = &quot;right&quot;, click_policy: str = &quot;hide&quot;): legend = self._figure.legend[0] legend.label_text_font_size = &quot;6px&quot; legend.spacing = 0 legend.location = legend_location legend.click_policy = click_policy </code></pre> <p>The slider with callback is added like this:</p> <pre class="lang-py prettyprint-override"><code> def _year_slider(self) -&gt; RangeSlider: def callback(attr, old, new): # noqa: unused-argument self._source.data = self._data.loc[ self._data.year.between(new[0], new[1]) ].to_dict(orient=&quot;list&quot;) min_year: int = self._data[self._YEAR_COLUMN].min() max_year: int = self._data[self._YEAR_COLUMN].max() slider = RangeSlider( start=min_year, end=max_year, value=(min_year, max_year), width=self._figure.frame_width, ) slider.on_change(&quot;value_throttled&quot;, callback) return slider </code></pre> <p>When I move the slider back to its original span, all legend items are displayed correctly again.</p> <p>The <code>Legend</code> object still contains all the <code>LegendItem</code> objects after moving the slider. All of them still have the <code>visible</code> property set to <code>true</code>.</p> <p>The question is: why the legend does not display all its items? 
How is this related to the slider and/or the callback the slider uses?</p>
<python><slider><filtering><legend><bokeh>
2023-12-12 09:53:46
1
2,092
Carsten
77,645,110
1,799,528
Iterate on langchain document items
<p>I loaded PDF files from a directory, and I need to split them into smaller chunks to make a summary. The problem is that I can't iterate over the documents object in a for loop, and I get an error like this: <strong>AttributeError: 'tuple' object has no attribute 'page_content'</strong></p> <p>How can I iterate over my document items to call the summary function for each of them? Here is my code:</p> <pre><code># Load the documents from langchain.document_loaders import DirectoryLoader document_directory = &quot;pdf_files&quot; loader = DirectoryLoader(document_directory) documents = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=50) # Iterate on long pdf documents to make chunks (2 pdf files here) for doc in documents: # it fails on this line texts = text_splitter.split_documents(doc) chain = load_summarize_chain(llm, chain_type=&quot;map_reduce&quot;, map_prompt=prompt, combine_prompt=prompt) </code></pre> <p><a href="https://i.sstatic.net/BfR1e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BfR1e.png" alt="enter image description here" /></a></p>
<python><openai-api><langchain>
2023-12-12 09:53:10
3
1,376
gabi
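The error above comes from `split_documents(doc)` receiving a *single* `Document` rather than a list: LangChain's `Document` is a pydantic model, and iterating a pydantic model yields `(field, value)` tuples — hence `'tuple' object has no attribute 'page_content'`. The fix is to pass a list, either `text_splitter.split_documents(documents)` once for everything, or `text_splitter.split_documents([doc])` inside the loop. A dependency-free stand-in showing the mechanism:

```python
class Document:
    """Mimics LangChain's pydantic Document: iterating it yields
    (field, value) tuples, like any pydantic model."""
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}
    def __iter__(self):
        yield ("page_content", self.page_content)
        yield ("metadata", self.metadata)

def split_documents(docs):
    # Simplified splitter: expects an iterable of Document objects.
    return [d.page_content for d in docs]

doc = Document("some pdf text")
try:
    split_documents(doc)       # iterates the Document itself -> tuples
except AttributeError as exc:
    print(exc)                 # 'tuple' object has no attribute 'page_content'

print(split_documents([doc]))  # wrap a single doc in a list and it works
```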
77,645,015
4,537,160
Fine-tune Pytorch I3D model on a custom dataset
<p>I want to fine-tune the I3D model from torch hub, which is pre-trained on Kinetics 400 classes, on a custom dataset, where I have 4 possible output classes.</p> <p>I'm loading the model by:</p> <pre><code>model = torch.hub.load(&quot;facebookresearch/pytorchvideo&quot;, i3d_r50, pretrained=True) </code></pre> <p>I printed it, and saw this layer:</p> <pre><code>(6): ResNetBasicHead( (pool): AvgPool3d(kernel_size=(4, 7, 7), stride=(1, 1, 1), padding=(0, 0, 0)) (dropout): Dropout(p=0.5, inplace=False) (proj): Linear(in_features=2048, out_features=400, bias=True) (output_pool): AdaptiveAvgPool3d(output_size=1) </code></pre> <p>So, I tried:</p> <pre><code>model = torch.hub.load(&quot;facebookresearch/pytorchvideo&quot;, i3d_r50, pretrained=True) num_classes = 4 model.ResNetBasicHead.proj = torch.nn.Linear(model.ResNetBasicHead.proj.in_features, num_classes) </code></pre> <p>but I'm getting the error:</p> <pre><code>AttributeError: 'Net' object has no attribute 'ResNetBasicHead' </code></pre> <p>What's the proper way to do this?</p>
<python><pytorch>
2023-12-12 09:39:49
1
1,630
Carlo
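For the I3D question above: the printed summary shows *class* names (`ResNetBasicHead`), not attribute names. pytorchvideo's `Net` keeps its stages in an indexable `blocks` container, so the head is reached by position — roughly `model.blocks[-1].proj = torch.nn.Linear(model.blocks[-1].proj.in_features, num_classes)`, assuming the head really is the last block (verify against the printout; `model.blocks[6]` if it is stage 6). A torch-free stand-in for why the attribute lookup fails and indexed access works:

```python
class Linear:
    def __init__(self, in_features, out_features):
        self.in_features, self.out_features = in_features, out_features

class ResNetBasicHead:
    def __init__(self):
        self.proj = Linear(2048, 400)   # mirrors the printed head layer

class Net:
    def __init__(self):
        # Layers live in a list-like container, not as attributes named
        # after their classes:
        self.blocks = [object(), ResNetBasicHead()]

model = Net()
try:
    model.ResNetBasicHead               # what the question attempted
except AttributeError as exc:
    print(exc)                          # 'Net' object has no attribute ...

head = model.blocks[-1]                 # index into blocks instead
head.proj = Linear(head.proj.in_features, 4)
print(head.proj.out_features)           # 4
```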
77,644,943
11,942,776
asynch.errors.UnexpectedPacketFromServerError: Code: 102. Unexpected packet from server <host:port> (expected Hello or Exception, got Unknown packet)
<p>I'm trying to connect to clickhouse with sqlalchemy. I'm using:</p> <ul> <li>python3.11</li> <li>clickhouse-driver == 0.2.6</li> <li>sqlalchemy == 2.0.23</li> <li>clickhouse-sqlalchemy == 0.3.0</li> <li>asynch == 0.2.3</li> <li>asyncio == 3.4.3</li> </ul> <p>Here is my script I used:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import contextlib import pydantic import traceback import typing from sqlalchemy import text, TextClause, engine from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, create_async_engine from sqlalchemy.pool import Pool, QueuePool class AsyncDatabase: def __init__(self): self.ch_uri: str = &quot;clickhouse+asynch://admin:Password123@host:31123/db&quot; self.ch_engine: AsyncEngine = create_async_engine( url=self.ch_uri, echo=False, pool_size=100, max_overflow=20, poolclass=QueuePool, ) self.ch_session: AsyncSession = AsyncSession(bind=self.ch_engine) self.ch_pool: Pool = self.ch_engine.pool async_db: AsyncDatabase = AsyncDatabase() @contextlib.asynccontextmanager async def get_ch_session() -&gt; typing.AsyncGenerator[AsyncSession, None]: try: yield async_db.ch_session except Exception as e: print(traceback.print_exc()) await async_db.ch_session.rollback() finally: await async_db.ch_session.close() async def hello() -&gt; str: session: AsyncSession = None async with get_ch_session() as session: stmt: TextClause = text(&quot;SELECT * FROM table_name LIMIT 1&quot;) result: engine.Result = await session.execute(stmt) print(result.all()) return &quot;ok&quot; if __name__ == &quot;__main__&quot;: asyncio.run(hello()) </code></pre> <p>I did try without async but still got the same error <code>asynch.errors.UnexpectedPacketFromServerError: Code: 102. Unexpected packet from server host:31123 (expected Hello or Exception, got Unknown packet)</code></p> <p>when I use DataGrid to connect with above creds, it works fine. 
so I think it's not about the <code>31123</code> port.</p> <p>I've been stuck on this for 5 hours; none of the answers I found on the web have helped me at all.</p>
<python><sqlalchemy><python-asyncio><clickhouse>
2023-12-12 09:28:53
1
331
Nguyễn Đức Huy
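A plausible cause for the ClickHouse error above: `clickhouse-driver`/`asynch` speak ClickHouse's *native* TCP protocol (default port 9000), while GUI clients typically connect over HTTP/JDBC (default port 8123). "Expected Hello or Exception, got Unknown packet" is the classic symptom of a native client greeting an HTTP endpoint — and 31123 looks like a Kubernetes NodePort mapped to 8123 (an assumption, worth checking against the Service definition). The fix would be pointing the URI at whatever external port maps to 9000:

```python
# 31123 presumably works for the GUI client because it talks HTTP;
# asynch needs the native-protocol port instead (assumed mapping).
native_port = 9000  # replace with the externally exposed native port
uri = f"clickhouse+asynch://admin:Password123@host:{native_port}/db"
print(uri)
```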
77,644,803
936,269
h5py: how save a scalar attribute as array with shape=(1,)
<p>I have some data that I need to write into an HDF5 file which will be read by a data processing application. I do this with <a href="https://www.h5py.org/" rel="nofollow noreferrer">h5py</a>. However, the data processing application seems to only accept attributes with an array size of one, but when I create my attributes with h5py the array size is shown as scalar. The picture below is from <a href="https://www.hdfgroup.org/downloads/hdfview/" rel="nofollow noreferrer">HDFView</a>.</p> <p><a href="https://i.sstatic.net/nIFBb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nIFBb.png" alt="enter image description here" /></a></p> <p>I understand that when I call <code>dataset.attrs.create(key, value)</code> I can set a dtype and shape (see the <a href="https://docs.h5py.org/en/3.2.1/high/attr.html#h5py.AttributeManager.create" rel="nofollow noreferrer">h5py documentation for attributes</a>). But I do not understand what I need to provide to always have the array size as 1 and not scalar. In particular since, as shown above, some of them seem to be set to one out of the box.</p> <p>I create the attributes in the following way:</p> <pre class="lang-py prettyprint-override"><code>import h5py if __name__ == '__main__': out_file = h5py.File('tmp.h5') data = [0, 1, 2, 3, 4] attributes = {'sample': 1, 'option': 'tmp'} out_file.create_dataset('tmp_dataset', data=data) for attr_key, attr_value in attributes.items(): out_file['tmp_dataset'].attrs.create(attr_key, attr_value) out_file.close() </code></pre> <p>I have tried specifying the dtype as <code>float</code> and so on, but it did not seem to have any effect.</p>
<python><h5py>
2023-12-12 09:04:15
1
2,208
Lars Kakavandi-Nielsen
77,644,752
6,215,597
How to improve training performance of model.fit() with generators?
<p>Normally when I train models with tensorflow, I just feed my entire data set to the model training (Scenario 1). However, from time to time this data set is too large to fit into my GPU. For that reason I wanted to use generators and just load batch after batch into the GPU (Scenario 2). Both scenarios run perfectly fine, but the normal fit without generators produces significantly better results. How can I use generators and yield the &quot;same&quot; result as when I'm loading the whole data set?</p> <p><strong>Scenario 1:</strong> parsing the whole data set</p> <pre><code>from tensorflow.keras.models import Model
import h5py

# creating a model and stuff
model = Model(...)
model.compile(...)

with h5py.File(&quot;db.h5py&quot;, 'r') as db_:
    model.fit(db_[&quot;X_train&quot;][...], db_[&quot;Y_train&quot;][...],
              epochs=60,
              batch_size=256,
              shuffle=True,
              validation_data=(db_[&quot;X_val&quot;][...], db_[&quot;Y_val&quot;][...])
              )
</code></pre> <p><strong>Scenario 2:</strong> Using generators</p> <pre><code>import tensorflow as tf
from tensorflow.keras.models import Model
import h5py

with h5py.File(&quot;db.h5py&quot;, 'r') as db_:
    x = db_[&quot;X_train&quot;][...]
    y = db_[&quot;Y_train&quot;][...]
    x_val = db_[&quot;X_val&quot;][...]
    y_val = db_[&quot;Y_val&quot;][...]
class DataGen(tf.keras.utils.Sequence):
    def __init__(self, index_map, batch_size, args):
        if args == &quot;training&quot;:
            self.X = x
            self.y = y
        else:
            self.X = x_val
            self.y = y_val
        self.index_map = index_map
        self.batch_size = batch_size

    def __getitem__(self, index):
        X_batch = self.X[self.index_map[
            index * self.batch_size: (index + 1) * self.batch_size
        ]]
        y_batch = self.y[self.index_map[
            index * self.batch_size: (index + 1) * self.batch_size
        ]]
        return X_batch, y_batch

    def __len__(self):
        return len(self.index_map) // self.batch_size

train_gen = DataGen(np.arange(x.shape[0]), 256, &quot;training&quot;)
val_gen = DataGen(np.arange(x_val.shape[0]), 256, &quot;validation&quot;)

# creating a model and stuff
model = Model(...)
model.compile(...)

model.fit(train_gen,
          epochs=60,
          steps_per_epoch=x.shape[0] // 256,
          shuffle=True,
          validation_data=val_gen
          )
</code></pre> <p><strong>Generator case 4:</strong> Same as Scenario 2 + <code>on_epoch_end()</code> method</p> <pre><code>import tensorflow as tf
from tensorflow.keras.models import Model
import h5py

with h5py.File(&quot;db.h5py&quot;, 'r') as db_:
    x = db_[&quot;X_train&quot;][...]
    y = db_[&quot;Y_train&quot;][...]
    x_val = db_[&quot;X_val&quot;][...]
    y_val = db_[&quot;Y_val&quot;][...]
class DataGen(tf.keras.utils.Sequence):
    def __init__(self, index_map, batch_size, args):
        if args == &quot;training&quot;:
            self.X = x
            self.y = y
        else:
            self.X = x_val
            self.y = y_val
        self.index_map = index_map
        self.batch_size = batch_size

    def __getitem__(self, index):
        X_batch = self.X[self.index_map[
            index * self.batch_size: (index + 1) * self.batch_size
        ]]
        y_batch = self.y[self.index_map[
            index * self.batch_size: (index + 1) * self.batch_size
        ]]
        return X_batch, y_batch

    def __len__(self):
        return len(self.index_map) // self.batch_size

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        np.random.shuffle(self.index_map)

train_gen = DataGen(np.arange(x.shape[0]), 256, &quot;training&quot;)
val_gen = DataGen(np.arange(x_val.shape[0]), 256, &quot;validation&quot;)

# creating a model and stuff
model = Model(...)
model.compile(...)

model.fit(train_gen,
          epochs=60,
          steps_per_epoch=x.shape[0] // 256,
          shuffle=True,
          validation_data=val_gen
          )
</code></pre> <h3>Performance results</h3> <p>This plot shows the prediction performance on the test dataset. Each boxplot comprises the evaluation metrics of 10 model runs with different hyperparameter configurations. <a href="https://i.sstatic.net/XdSsK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XdSsK.png" alt="Performance" /></a></p> <ul> <li>Generator case 1: model.fit() with <code>batch_size</code> argument</li> <li>Generator case 2: model.fit() without <code>batch_size</code> argument</li> <li>Generator case 3: model.fit() without <code>batch_size</code> argument, with <code>steps_per_epoch</code> argument</li> <li>Generator case 4: Same as &quot;Generator case 3&quot; + shuffling via <code>on_epoch_end()</code> within the <code>DataGen</code> class</li> </ul> <h3>UPDATE: It seems like 'Generator case 4' solves the specified problem. However, it's unclear what the <code>shuffle</code> argument does when using generators.</h3>
<python><tensorflow><generator><tf.keras>
2023-12-12 08:55:18
0
433
Max2603
77,644,650
1,487,336
How to change pandas MultiIndex name?
<p>I would like to rename a pandas MultiIndex as follows, but encountered this strange issue. Is this a bug? My pandas version is 1.5.3.</p> <p><a href="https://i.sstatic.net/J19Wf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J19Wf.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe>
2023-12-12 08:35:51
0
809
Lei Hao
77,644,631
17,347,824
Connecting Alpaca to Python in AWS
<p>I am creating a program to simulate stock trading with an Alpaca paper trade account but I'm not able to get my program to work with Alpaca.</p> <p>I have installed the module the documentation says to install using <code>pip3 install alpaca_py</code> it says it is successful, but then when I try to import it and use my API keys it gives an error that the module is not found.</p> <p>Here is the code I'm trying to use (only with my actual keys):</p> <pre><code>import alpaca_py as tradeapi # Set your Alpaca API key and secret api_key= &quot;api_key_here&quot; api_secret = &quot;api_secret_here&quot; # Set the base URL for paper trading base_url = &quot;https://paper-api.alpaca.markets&quot; # Create an Alpaca API connection api = tradeapi.REST(api_key, api_secret, base_url=base_url, api_version='v2') </code></pre> <p>This is the error I'm getting:</p> <pre><code>Traceback (most recent call last): File &quot;/home/ubuntu/environment/final_play.py&quot;, line 2, in &lt;module&gt; import alpaca_py as tradeapi ModuleNotFoundError: No module named 'alpaca_py' </code></pre> <p>I found a couple other articles about this, but they both mentioned it being an issue with a file sharing the name of the module. I do not have a file named alpaca.py, just final_play.py</p> <p><code>pip install alpaca-py</code> doesn't work to install. the output when I use this is below:</p> <pre><code>ubuntu:~/environment $ pip install alpaca-py DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality. 
Defaulting to user installation because normal site-packages is not writeable ERROR: Could not find a version that satisfies the requirement alpaca-py (from versions: none) ERROR: No matching distribution found for alpaca-py </code></pre> <p>If I use <code>pip3 install alpaca-py</code> I get the following:</p> <pre><code>ubuntu:~/environment $ pip3 install alpaca-py Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: alpaca-py in /home/ubuntu/.local/lib/python3.10/site-packages (0.13.4) Requirement already satisfied: sseclient-py&lt;2.0.0,&gt;=1.7.2 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (1.8.0) Requirement already satisfied: pandas&gt;=1.5.3 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (2.1.4) Requirement already satisfied: websockets&lt;12.0.0,&gt;=11.0.3 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (11.0.3) Requirement already satisfied: msgpack&lt;2.0.0,&gt;=1.0.3 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (1.0.7) Requirement already satisfied: pydantic&lt;3.0.0,&gt;=2.0.3 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (2.5.2) Requirement already satisfied: requests&lt;3.0.0,&gt;=2.30.0 in /home/ubuntu/.local/lib/python3.10/site-packages (from alpaca-py) (2.31.0) Requirement already satisfied: python-dateutil&gt;=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas&gt;=1.5.3-&gt;alpaca-py) (2.8.2) Requirement already satisfied: pytz&gt;=2020.1 in /usr/lib/python3/dist-packages (from pandas&gt;=1.5.3-&gt;alpaca-py) (2022.1) Requirement already satisfied: tzdata&gt;=2022.1 in /home/ubuntu/.local/lib/python3.10/site-packages (from pandas&gt;=1.5.3-&gt;alpaca-py) (2023.3) Requirement already satisfied: numpy&lt;2,&gt;=1.22.4 in /home/ubuntu/.local/lib/python3.10/site-packages (from pandas&gt;=1.5.3-&gt;alpaca-py) (1.26.2) Requirement already satisfied: 
annotated-types&gt;=0.4.0 in /home/ubuntu/.local/lib/python3.10/site-packages (from pydantic&lt;3.0.0,&gt;=2.0.3-&gt;alpaca-py) (0.6.0) Requirement already satisfied: pydantic-core==2.14.5 in /home/ubuntu/.local/lib/python3.10/site-packages (from pydantic&lt;3.0.0,&gt;=2.0.3-&gt;alpaca-py) (2.14.5) Requirement already satisfied: typing-extensions&gt;=4.6.1 in /usr/local/lib/python3.10/dist-packages (from pydantic&lt;3.0.0,&gt;=2.0.3-&gt;alpaca-py) (4.7.1) Requirement already satisfied: urllib3&lt;3,&gt;=1.21.1 in /usr/lib/python3/dist-packages (from requests&lt;3.0.0,&gt;=2.30.0-&gt;alpaca-py) (1.26.5) Requirement already satisfied: certifi&gt;=2017.4.17 in /usr/lib/python3/dist-packages (from requests&lt;3.0.0,&gt;=2.30.0-&gt;alpaca-py) (2020.6.20) Requirement already satisfied: charset-normalizer&lt;4,&gt;=2 in /home/ubuntu/.local/lib/python3.10/site-packages (from requests&lt;3.0.0,&gt;=2.30.0-&gt;alpaca-py) (3.3.2) Requirement already satisfied: idna&lt;4,&gt;=2.5 in /usr/lib/python3/dist-packages (from requests&lt;3.0.0,&gt;=2.30.0-&gt;alpaca-py) (3.3) Requirement already satisfied: six&gt;=1.5 in /usr/lib/python3/dist-packages (from python-dateutil&gt;=2.8.2-&gt;pandas&gt;=1.5.3-&gt;alpaca-py) (1.16.0) </code></pre>
<python><alpaca>
2023-12-12 08:31:37
2
409
data_life
77,644,599
18,125,313
Why does adding a break statement significantly slow down the Numba function?
<p>I have the following Numba function:</p> <pre class="lang-py prettyprint-override"><code>@numba.njit
def count_in_range(arr, min_value, max_value):
    count = 0
    for a in arr:
        if min_value &lt; a &lt; max_value:
            count += 1
    return count
</code></pre> <p>It counts how many values in the array fall within the range.</p> <p>However, I realized that I only needed to determine whether any exist. So I modified it as follows:</p> <pre class="lang-py prettyprint-override"><code>@numba.njit
def count_in_range2(arr, min_value, max_value):
    count = 0
    for a in arr:
        if min_value &lt; a &lt; max_value:
            count += 1
            break  # &lt;---- break here
    return count
</code></pre> <p>Then, this function becomes <strong>slower</strong> than before the change. Under certain conditions, it can be surprisingly more than 10 times slower.</p> <p>Benchmark code:</p> <pre class="lang-py prettyprint-override"><code>from timeit import timeit

rng = np.random.default_rng(0)
arr = rng.random(10 * 1000 * 1000)

# To compare on even conditions, choose the condition that does not terminate early.
min_value = 0.5
max_value = min_value - 1e-10
assert not np.any(np.logical_and(min_value &lt;= arr, arr &lt;= max_value))

n = 100
for f in (count_in_range, count_in_range2):
    f(arr, min_value, max_value)
    elapsed = timeit(lambda: f(arr, min_value, max_value), number=n) / n
    print(f&quot;{f.__name__}: {elapsed * 1000:.3f} ms&quot;)
</code></pre> <p>Result:</p> <pre><code>count_in_range: 3.351 ms
count_in_range2: 42.312 ms
</code></pre> <p>Further experimenting, I found that the speed varies greatly depending on the search range (i.e.
<code>min_value</code> and <code>max_value</code>).</p> <p>At various search ranges:</p> <pre><code>count_in_range2: 5.802 ms, range: (0.0, -1e-10)
count_in_range2: 15.408 ms, range: (0.1, 0.09999999990000001)
count_in_range2: 29.571 ms, range: (0.25, 0.2499999999)
count_in_range2: 42.514 ms, range: (0.5, 0.4999999999)
count_in_range2: 24.427 ms, range: (0.75, 0.7499999999)
count_in_range2: 12.547 ms, range: (0.9, 0.8999999999)
count_in_range2: 5.747 ms, range: (1.0, 0.9999999999)
</code></pre> <p>Can someone explain to me what is going on?</p> <hr /> <p>I am using Numba 0.58.1 under Python 3.10.11. Confirmed on both Windows 10 and Ubuntu 22.04.</p> <hr /> <h1>EDIT:</h1> <p>As an appendix to Jérôme Richard's answer:</p> <p>As he pointed out in the comments, the performance difference depending on the search range is likely due to branch prediction.</p> <p>For example, when <code>min_value</code> is <code>0.1</code>, <code>min_value &lt; a</code> has a 90% chance of being true, and <code>a &lt; max_value</code> has a 90% chance of being false. So mathematically it can be predicted correctly with 81% accuracy. I have no idea how the CPU does this, but I have come up with a way to check whether this logic is correct.</p> <p>First, by partitioning the array into values above and below the threshold, and second, by mixing it with a certain probability of error. When the array is partitioned, the number of branch prediction misses should be unaffected by the threshold.
When we include errors in it, the number of misses should increase depending on the errors.</p> <p>Here is the updated benchmark code:</p> <pre class="lang-py prettyprint-override"><code>from timeit import timeit import numba import numpy as np @numba.njit def count_in_range(arr, min_value, max_value): count = 0 for a in arr: if min_value &lt; a &lt; max_value: count += 1 return count @numba.njit def count_in_range2(arr, min_value, max_value): count = 0 for a in arr: if min_value &lt; a &lt; max_value: count += 1 break # &lt;---- break here return count def partition(arr, threshold): &quot;&quot;&quot;Place the elements smaller than the threshold in the front and the elements larger than the threshold in the back.&quot;&quot;&quot; less = arr[arr &lt; threshold] more = arr[~(arr &lt; threshold)] return np.concatenate((less, more)) def partition_with_error(arr, threshold, error_rate): &quot;&quot;&quot;Same as partition, but includes errors with a certain probability.&quot;&quot;&quot; less = arr[arr &lt; threshold] more = arr[~(arr &lt; threshold)] less_error, less_correct = np.split(less, [int(len(less) * error_rate)]) more_error, more_correct = np.split(more, [int(len(more) * error_rate)]) mostly_less = np.concatenate((less_correct, more_error)) mostly_more = np.concatenate((more_correct, less_error)) rng = np.random.default_rng(0) rng.shuffle(mostly_less) rng.shuffle(mostly_more) out = np.concatenate((mostly_less, mostly_more)) assert np.array_equal(np.sort(out), np.sort(arr)) return out def bench(f, arr, min_value, max_value, n=10, info=&quot;&quot;): f(arr, min_value, max_value) elapsed = timeit(lambda: f(arr, min_value, max_value), number=n) / n print(f&quot;{f.__name__}: {elapsed * 1000:.3f} ms, min_value: {min_value:.1f}, {info}&quot;) def main(): rng = np.random.default_rng(0) arr = rng.random(10 * 1000 * 1000) thresholds = np.linspace(0, 1, 11) print(&quot;#&quot;, &quot;-&quot; * 10, &quot;As for comparison&quot;, &quot;-&quot; * 10) bench( 
count_in_range, arr, min_value=0.5, max_value=0.5 - 1e-10, ) print(&quot;\n#&quot;, &quot;-&quot; * 10, &quot;Random Data&quot;, &quot;-&quot; * 10) for min_value in thresholds: bench( count_in_range2, arr, min_value=min_value, max_value=min_value - 1e-10, ) print(&quot;\n#&quot;, &quot;-&quot; * 10, &quot;Partitioned (Yet Still Random) Data&quot;, &quot;-&quot; * 10) for min_value in thresholds: bench( count_in_range2, partition(arr, threshold=min_value), min_value=min_value, max_value=min_value - 1e-10, ) print(&quot;\n#&quot;, &quot;-&quot; * 10, &quot;Partitioned Data with Probabilistic Errors&quot;, &quot;-&quot; * 10) for ratio in thresholds: bench( count_in_range2, partition_with_error(arr, threshold=0.5, error_rate=ratio), min_value=0.5, max_value=0.5 - 1e-10, info=f&quot;error: {ratio:.0%}&quot;, ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Result:</p> <pre><code># ---------- As for comparison ---------- count_in_range: 3.518 ms, min_value: 0.5, # ---------- Random Data ---------- count_in_range2: 5.958 ms, min_value: 0.0, count_in_range2: 15.390 ms, min_value: 0.1, count_in_range2: 24.715 ms, min_value: 0.2, count_in_range2: 33.749 ms, min_value: 0.3, count_in_range2: 40.007 ms, min_value: 0.4, count_in_range2: 42.168 ms, min_value: 0.5, count_in_range2: 37.427 ms, min_value: 0.6, count_in_range2: 28.763 ms, min_value: 0.7, count_in_range2: 20.089 ms, min_value: 0.8, count_in_range2: 12.638 ms, min_value: 0.9, count_in_range2: 5.876 ms, min_value: 1.0, # ---------- Partitioned (Yet Still Random) Data ---------- count_in_range2: 6.006 ms, min_value: 0.0, count_in_range2: 5.999 ms, min_value: 0.1, count_in_range2: 5.953 ms, min_value: 0.2, count_in_range2: 5.952 ms, min_value: 0.3, count_in_range2: 5.940 ms, min_value: 0.4, count_in_range2: 6.870 ms, min_value: 0.5, count_in_range2: 5.939 ms, min_value: 0.6, count_in_range2: 5.896 ms, min_value: 0.7, count_in_range2: 5.899 ms, min_value: 0.8, count_in_range2: 5.880 ms, min_value: 0.9, 
count_in_range2: 5.884 ms, min_value: 1.0, # ---------- Partitioned Data with Probabilistic Errors ---------- # Note that min_value = 0.5 in all the following. count_in_range2: 5.939 ms, min_value: 0.5, error: 0% count_in_range2: 14.015 ms, min_value: 0.5, error: 10% count_in_range2: 22.599 ms, min_value: 0.5, error: 20% count_in_range2: 31.763 ms, min_value: 0.5, error: 30% count_in_range2: 39.391 ms, min_value: 0.5, error: 40% count_in_range2: 42.227 ms, min_value: 0.5, error: 50% count_in_range2: 38.748 ms, min_value: 0.5, error: 60% count_in_range2: 31.758 ms, min_value: 0.5, error: 70% count_in_range2: 22.600 ms, min_value: 0.5, error: 80% count_in_range2: 14.090 ms, min_value: 0.5, error: 90% count_in_range2: 6.027 ms, min_value: 0.5, error: 100% </code></pre> <p>I am satisfied with this result.</p>
<python><performance><numba>
2023-12-12 08:24:15
1
3,446
ken
77,644,510
8,963,682
FastAPI with Azure AD: TypeError on OAuth2 Authentication
<p>I'm trying to implement Azure AD OAuth2 authentication in a FastAPI application. I have set up the environment variables correctly and configured the OAuth client with Azure AD's endpoints. However, I'm encountering a TypeError: Invalid type for url. Expected str or httpx.URL, got &lt;class 'NoneType'&gt;: None when trying to authenticate a user.</p> <p>Here's the relevant part of my code:</p> <p><strong>FastAPI:</strong></p> <pre><code>app.add_middleware(SessionMiddleware, secret_key=&quot;q803pJMcx6KNkIlBGi_mPQSYiOP0IPze&quot;) #app.add_middleware(TokenVerificationMiddleware) @app.get(&quot;/&quot;) async def health(): return JSONResponse(content={&quot;status&quot;: &quot;healthy&quot;}, status_code=200) # Redirect to login URL @app.get(&quot;/login&quot;) async def login(request: Request): redirect_uri = request.url_for('auth') return await oauth.azure.authorize_redirect(request, redirect_uri) # Callback endpoint @app.get(&quot;/auth&quot;) async def auth(request: Request): token = await oauth.azure.authorize_access_token(request) user = token.get('userinfo') if not user: raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail='Authentication failed') return user </code></pre> <p><strong>OAuth2:</strong></p> <pre><code>from fastapi import Depends, HTTPException, status from fastapi.security import OAuth2AuthorizationCodeBearer from authlib.integrations.starlette_client import OAuth from starlette.requests import Request import os # Load environment variables CLIENT_ID = os.getenv(&quot;ASPEN_APP_AUTH_CLIENT_ID&quot;) TENANT_ID = os.getenv(&quot;ASPEN_APP_AUTH_TENANT_ID&quot;) CLIENT_SECRET = os.getenv(&quot;ASPEN_APP_AUTH_SECRET&quot;) # Initialize OAuth2 oauth = OAuth() oauth.register( name='azure', client_id=CLIENT_ID, client_secret=CLIENT_SECRET, authorize_url=f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/authorize', token_url=f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token', client_kwargs={'scope': 'openid 
email profile'} ) oauth2_scheme = OAuth2AuthorizationCodeBearer( authorizationUrl=f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/authorize', tokenUrl=f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token' ) async def get_current_user(request: Request, token: str = Depends(oauth2_scheme)): try: response = await oauth.azure.parse_id_token(request, token) return response except Exception as e: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e) ) </code></pre> <p>The error occurs at the line token = await oauth.azure.authorize_access_token(request). I have verified that the environment variables are set correctly and that the redirect URL in Azure matches exactly.</p> <p><strong>My dependencies are as follows:</strong></p> <pre><code>Python 3.12 uvicorn~=0.23.2 fastapi~=0.103.1 pydantic~=2.4.2 pandas~=2.1.0 requests~=2.31.0 beautifulsoup4~=4.12.2 aiohttp~=3.9.1 pyproj~=3.6.0 python-dateutil~=2.8.2 python-dotenv~=1.0.0 python-jose~=3.3.0 mangum~=0.17.0 SQLAlchemy~=2.0.21 starlette~=0.27.0 bs4~=0.0.1 httpx~=0.23.0 Authlib~=1.2.1 itsdangerous~=2.1.2 boto3~=1.33.12 botocore~=1.33.12 respx&gt;=0.20.1 </code></pre> <p>Traceback:</p> <pre><code>#I add some print statements to fetch access token redirect_uri: http://localhost:8000/auth metadata: {'token_url': 'https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token'} token_endpoint: None ERROR: Exception in ASGI application Traceback (most recent call last): File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\uvicorn\protocols\http\h11_impl.py&quot;, line 408, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py&quot;, line 84, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\fastapi\applications.py&quot;, line 292, in __call__ await super().__call__(scope, receive, send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\applications.py&quot;, line 122, in __call__ await self.middleware_stack(scope, receive, send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\middleware\errors.py&quot;, line 184, in __call__ raise exc File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\middleware\errors.py&quot;, line 162, in __call__ await self.app(scope, receive, _send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\middleware\sessions.py&quot;, line 86, in __call__ await self.app(scope, receive, send_wrapper) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py&quot;, line 79, in __call__ raise exc File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py&quot;, line 68, in __call__ await self.app(scope, receive, sender) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 20, in __call__ raise e File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 17, in __call__ await self.app(scope, receive, send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\routing.py&quot;, line 718, in __call__ await route.handle(scope, receive, send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\routing.py&quot;, line 276, in handle await self.app(scope, receive, send) File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\starlette\routing.py&quot;, line 66, in app response = await func(request) ^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\fastapi\routing.py&quot;, line 273, in app raw_response = await run_endpoint_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\fastapi\routing.py&quot;, line 190, in run_endpoint_function return await dependant.call(**values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\AspenAPI\fast_api.py&quot;, line 56, in auth token = await oauth.azure.authorize_access_token(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\authlib\integrations\starlette_client\apps.py&quot;, line 82, in authorize_access_token token = await self.fetch_access_token(**params, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\authlib\integrations\base_client\async_app.py&quot;, line 128, in fetch_access_token token = await client.fetch_token(token_endpoint, **params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py&quot;, line 125, in _fetch_token resp = await self.post( ^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\httpx\_client.py&quot;, line 1842, in post return await self.request( ^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py&quot;, line 90, in request return await super(AsyncOAuth2Client, self).request( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\httpx\_client.py&quot;, line 1514, in request request = self.build_request( ^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\httpx\_client.py&quot;, line 344, in build_request url = self._merge_url(url) ^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\ServicesAPI\venv\Lib\site-packages\httpx\_client.py&quot;, line 374, in _merge_url merge_url = URL(url) ^^^^^^^^ File &quot;C:\Users\X\PycharmProjects\AspenServicesAPI\venv\Lib\site-packages\httpx\_urls.py&quot;, line 102, in __init__ raise TypeError( TypeError: Invalid type for url. Expected str or httpx.URL, got &lt;class 'NoneType'&gt;: None </code></pre> <p>I suspect there might be an issue with the dependencies, specifically httpx and authlib, but I'm not sure how to resolve this. Any insights or suggestions on what might be causing this error and how to fix it would be greatly appreciated.</p> <p><strong>Update : 12/12/2023</strong></p> <p>Using python 3.11 or 3.9 didn't work. However, I discovered that renaming 'toke_url' to 'token_endpoint' eliminates the initial error. Additionally, incorporating 'jwks_uri' seems to resolve a subsequent error. But now, I'm encountering a new issue: a 'KeyError' associated with 'id_token'. 
I have updated my post with more details for further insight.</p> <pre><code>JWKS_URI = &quot;https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys&quot;.format(TENANT_ID=ASPEN_APP_AUTH_TENANT_ID)

# Initialize OAuth2
oauth = OAuth()
oauth.register(
    name='azure',
    client_id=ASPEN_APP_AUTH_CLIENT_ID,
    client_secret=ASPEN_APP_AUTH_SECRET,
    authorize_url=f'https://login.microsoftonline.com/{ASPEN_APP_AUTH_TENANT_ID}/oauth2/v2.0/authorize',
    token_endpoint=f'https://login.microsoftonline.com/{ASPEN_APP_AUTH_TENANT_ID}/oauth2/v2.0/token',
    token_url=f'https://login.microsoftonline.com/{ASPEN_APP_AUTH_TENANT_ID}/oauth2/v2.0/token',
    client_kwargs={'scope': 'openid email profile'},
    jwks_uri=JWKS_URI
)
</code></pre> <p><strong>Fixed</strong></p> <p>Replacing 'token_url' with 'token_endpoint' <strong>fixed</strong> the error &quot;Expected str or httpx.URL, got &lt;class 'NoneType'&gt;&quot;.</p> <p>Then the issue with the id_token was fixed like this:</p> <pre><code>@app.get(&quot;/auth&quot;)
async def auth(request: Request):
    try:
        token = await oauth.azure.authorize_access_token(request)
        nonce = token.get('userinfo').get('nonce')
        # Parse the ID token
        user_info = await oauth.azure.parse_id_token(token=token, nonce=nonce)
        return {&quot;user_info&quot;: user_info}
    except HTTPException as e:
        raise e
    except Exception as e:
        print(&quot;Error during authentication:&quot;, str(e))
        raise HTTPException(status_code=500, detail=&quot;Authentication failed.&quot;)
</code></pre> <p>I'm uncertain if this is the correct approach, but it seems to be functioning as intended for me. If there are any errors or security issues in what I've done, please let me know.</p>
<python><azure><oauth-2.0><oauth><fastapi>
2023-12-12 08:06:34
1
617
NoNam4
77,644,469
4,451,521
Counting the inequalities in a dataframe column
<p>I have two dataframes that are basically the same.</p> <p>However, in a particular column there are some rows where the values (floats) differ.</p> <p>I would like to count the number of rows where the values are different.</p> <p>This task would be pretty simple if not for two things:</p> <ul> <li>The values are floats, so a plain equality check sometimes reports two rows as different when they are basically the same.</li> </ul> <p>And, most importantly:</p> <ul> <li>The column has NaNs. When I compare NaNs they are deemed different. I want two rows with NaNs to count as equal, not unequal; they should not be considered in the count.</li> </ul> <p>How can I compare a series of floats when there are NaNs in it?</p>
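To make the two problems concrete, here is a small sketch of the behaviour I am seeing (the `np.isclose`-plus-`isna` combination at the end is just my current guess at an approach, not a settled solution):

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1.0, np.nan, 0.1 + 0.2])
s2 = pd.Series([1.0, np.nan, 0.3])

# Naive comparison: NaN != NaN is True, and 0.1 + 0.2 != 0.3 due to
# floating-point representation, so both rows are counted as different.
naive_diff = (s1 != s2)
print(int(naive_diff.sum()))  # 2

# Desired behaviour: treat two NaNs as equal and compare floats with a tolerance.
both_nan = s1.isna() & s2.isna()
close = np.isclose(s1, s2)       # NaN vs NaN yields False here by default
really_diff = ~(both_nan | close)
print(int(really_diff.sum()))    # 0
```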
<python><pandas>
2023-12-12 07:59:44
2
10,576
KansaiRobot
77,644,424
2,859,206
SQL code works in Azure, but SQLAlchemy throws ProgrammingError: Invalid column name OR multi-part identifier could not be bound
<p>I'm trying to run an SQL query that works as native SQL code in a tool like Azure, but throws programming errors when I try to use the exact same code in Python using SQLAlchemy with ODBC and pandas.</p> <p>SQL code:</p> <pre><code>Select RecordNumbers -- ID des Falles
, Min(T.Created) As FirstRecord -- erste
From Data.Table1 V
Left Outer Join Data.Table2Version TV On V.VersionNumber=TV.VersionNumber
Left Outer Join Data.Table3 T On TV.IdT=T.IdT
And T.Type=11 -- nur Export
Where RType=1 -- nur was
And Doob1=1 -- nur was auch
And Doob2=1 -- nur anderes
Group By RecordNumber
</code></pre> <p>This returns the two selected columns.</p> <p>In SQLAlchemy, after establishing and testing a connection - dbConnection - I store the query as text and use pandas to read it:</p> <pre><code>from sqlalchemy import text

query = text(&quot;Select RecordNumbers, \
    Min(T.Created) As FirstRecord -- erste \
    From Data.Table1 V \
    Left Outer Join Data.Table2Version TV On V.VersionNumber=TV.VersionNumber \
    Left Outer Join Data.Table3 T \
    ON TV.IdT=T.IdT \
    AND T.Type=11 -- nur Export \
    Where (RType=1 -- nur was \
    And Doob=1 -- nur was auch \
    And Doob2=1 ) -- nur anderes \
    Group By IdRecord \
    Order By 1 Desc&quot;)

out = pd.read_sql_query(query, dBConnection)
</code></pre> <p>This throws two errors, one for each variable in the select part of the query:</p> <ol> <li>ProgrammingError: (pyodbc.ProgrammingError) ('42S22', '[42S22] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column name &quot;FirstRecord&quot;.</li> <li>The multi-part identifier &quot;T.CreatedAt&quot; could not be bound.</li> </ol> <p>Replacing the selected variables with &quot;Select * From...&quot; solves the problem, but the output is too large to be a functional solution.</p> <p>Trying to solve the invalid column name, if I explicitly define the location of V.FirstRecord, it throws the same multi-part identifier error for this variable.</p> <p>I've read about explicit and implicit queries, but think everything
here is pretty explicit... maybe.</p> <p>SQLAlchemy says: This error is a DBAPI Error and originates from the database driver (DBAPI), not SQLAlchemy itself.</p> <p>I'm not very good at SQL, so would greatly appreciate being pointed in the right direction.</p>
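A likely cause worth checking, independent of SQLAlchemy: the backslash line continuations collapse the quoted query onto a single logical line, so the first `--` line comment swallows everything after it, and SQL Server never sees the rest of the statement. A minimal stdlib-only demonstration (the fix is simply a triple-quoted string that preserves newlines — the table and column names below are stand-ins):

```python
# Backslash continuation inside a string joins the lines, so the SQL line
# comment now comments out everything that follows it:
broken = "Select RecordNumbers, -- first \
From Data.Table1"
print(broken)          # one line: 'From Data.Table1' sits inside the comment
print("\n" in broken)  # False

# A triple-quoted string keeps the newlines, so each '--' comment ends
# at the end of its own line:
fixed = """
Select RecordNumbers, Min(T.Created) As FirstRecord  -- first
From Data.Table1 V
Group By RecordNumbers
"""
print("\n" in fixed)   # True
```

The `fixed` string can then be wrapped with `sqlalchemy.text(...)` exactly as before.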
<python><sql><sql-server><sqlalchemy>
2023-12-12 07:50:32
1
2,490
DrWhat
77,644,270
17,721,722
Creating Django Models Without Rounding off Decimal Fields
<p>For now, when I save an amount like 5400.5789, it is being saved as 5400.58. However, I want it to be saved as 5400.57 only. I don't want to round off the decimal places. How can I achieve that?</p> <pre><code>class PerTransaction(models.Model): amount = models.DecimalField(default=0, max_digits=10,decimal_places=2, verbose_name = &quot;Transaction Amount&quot;) </code></pre> <p>Database: PostgreSQL</p>
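`DecimalField` itself will round when the value is quantized, so one common approach (sketched here, framework-free) is to truncate the value explicitly before it is saved — `ROUND_DOWN` drops the extra digits instead of rounding them:

```python
from decimal import Decimal, ROUND_DOWN

def truncate_2dp(value):
    """Truncate (never round) a value to two decimal places."""
    return Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_DOWN)

print(truncate_2dp("5400.5789"))   # 5400.57
print(truncate_2dp("5400.5999"))   # 5400.59
```

In the model this could run inside an overridden `save()` — e.g. `self.amount = truncate_2dp(self.amount)` before calling `super().save(...)` — though that placement is only one option, not something the question's code already does.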
<python><django><postgresql><django-models>
2023-12-12 07:16:21
1
501
Purushottam Nawale
77,644,117
12,780,274
Decode protobuf without proto files in python
<p>I would like to decode protobuf.</p> <p>Example of the protobuf data: <code>0a06282c0241057a10011805220d080510bea3f493062a03010c1628f1a6f493063002382b4001481482010f383634333233303532343736343839</code></p> <p>Decoding it online (e.g. via <a href="https://protobuf-decoder.netlify.app/" rel="nofollow noreferrer">https://protobuf-decoder.netlify.app/</a>) works fine.</p> <p>The website decodes Protobuf without having the original .proto files, and all decoding is done locally via JavaScript (source on <a href="https://github.com/pawitp/protobuf-decoder" rel="nofollow noreferrer">GitHub</a>).</p> <p>How can I do this in Python?</p> <p>I tried <a href="https://stackoverflow.com/a/55275458/12780274">this solution</a>, but I don't have a .proto file.</p>
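Decoders like that site simply walk the protobuf wire format, which needs no schema — only field names and exact types are unrecoverable without the .proto (wire type 2 is ambiguous between bytes, strings, and nested messages, and fixed-width fields here are decoded as signed integers by assumption). A pure-stdlib sketch of the same idea:

```python
import struct

def read_varint(data, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = data[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_fields(data):
    """Walk the wire format; return (field_number, wire_type, value) tuples."""
    pos, fields = 0, []
    while pos < len(data):
        key, pos = read_varint(data, pos)
        field_no, wire_type = key >> 3, key & 7
        if wire_type == 0:                    # varint
            value, pos = read_varint(data, pos)
        elif wire_type == 1:                  # fixed 64-bit (signed int assumed)
            value = struct.unpack_from("<q", data, pos)[0]
            pos += 8
        elif wire_type == 2:                  # length-delimited: bytes/str/submessage
            length, pos = read_varint(data, pos)
            value = data[pos:pos + length]
            pos += length
        elif wire_type == 5:                  # fixed 32-bit (signed int assumed)
            value = struct.unpack_from("<i", data, pos)[0]
            pos += 4
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
        fields.append((field_no, wire_type, value))
    return fields

raw = bytes.fromhex(
    "0a06282c0241057a10011805220d080510bea3f493062a03010c16"
    "28f1a6f493063002382b4001481482010f383634333233303532343736343839"
)
for field_no, wire_type, value in parse_fields(raw):
    print(field_no, wire_type, value)
```

On the sample above this yields ten fields; field 16, for instance, is the length-delimited payload `b"864323052476489"`. For length-delimited fields you can recursively try `parse_fields` on the payload to discover nested messages, which is what the website does.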
<python><protocol-buffers><protobuf.js><protobuf-python>
2023-12-12 06:42:10
1
643
henrry
77,644,026
11,082,866
Django middleware is executed at every API call
<p>I created a middleware to record the activity log for user login but it executes at every API call. The <code>ActivityLog</code> is to record the entries.</p> <p>Middleware.py</p> <pre><code>class UserActivityMiddleware: def __init__(self, get_response): self.get_response = get_response def __call__(self, request): # Set a flag in the session to indicate that the user has been logged in during this session request.session['user_logged_in'] = request.session.get('user_logged_in', False) response = self.get_response(request) # Check if the user is authenticated, has just logged in, and hasn't been logged during this session if ( request.user.is_authenticated and request.user.last_login is not None and request.session.get('user_logged_in', False) is False and not request.path.startswith('/static/') and not request.path.startswith('/media/') ): print(&quot;reached middleware&quot;) print(f&quot;User: {request.user.username}&quot;) print(f&quot;Last Login: {request.user.last_login}&quot;) print(f&quot;Session Flag: {request.session.get('user_logged_in')}&quot;) print(f&quot;Request Path: {request.path}&quot;) # Log the user login activity ActivityLog.objects.create( user=request.user, action=&quot;Login&quot;, target_model=&quot;User&quot;, target_object_id=request.user.id, details=f&quot;User {request.user.username} logged in at {timezone.now()}.&quot; ) # Set a flag in the session to indicate that the user has been logged in during this session request.session['user_logged_in'] = True return response </code></pre> <p>What should I do to reduce it to a single execution i.e. when the user logs in?</p>
<python><django>
2023-12-12 06:13:16
0
2,506
Rahul Sharma
77,643,953
4,620,616
PySimpleGUI appends Plots in Canvas
<p>I have the issue that if I try to update a canvas in PySimpleGUI the application instead appends the new plot below the old one. Triggering 'Generate Plot' in the code below leads to the output at the bottom. Instead of appending the plot, I expected it to replace the old one.</p> <pre><code>import PySimpleGUI as sg import matplotlib.pyplot as plt import numpy as np from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg def draw_figure(canvas, figure): figure_canvas_agg = FigureCanvasTkAgg(figure, canvas) figure_canvas_agg.draw() figure_canvas_agg.get_tk_widget().pack(side='top', fill='both') return figure_canvas_agg # Define the layout of the GUI layout = [ [sg.Button('Generate Plot')], [sg.Canvas(size=(100, 100),key='canvas')], [sg.Button('Exit')] ] window = sg.Window('Time Series Plot', layout, finalize=True) # Event loop while True: event, values = window.read() x = np.linspace(1, 50) y = np.random.randn(50) if event == sg.WIN_CLOSED or event == 'Exit': break if event in ['Generate Plot']: fig, ax =plt.subplots() # Create a time series plot ax.plot(x, y) # Embed the Matplotlib plot into the PySimpleGUI window draw_figure(window['canvas'].TKCanvas, fig) # Close the window window.close() </code></pre> <p><a href="https://i.sstatic.net/EI4Cy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EI4Cy.png" alt="enter image description here" /></a></p>
<python><matplotlib><tkinter><pysimplegui>
2023-12-12 05:55:35
3
490
mhwh
77,643,828
10,964,685
Insert word at beginning of row - python
<p>I want to insert a word at the beginning of a string for each row in a column where the first word is not <em>x</em>. If it is <em>x</em>, then move on. For below, <em>x</em> = <code>BP</code>. So if the first word in cat is not <code>BP</code>, then insert it.</p> <pre><code>df = pd.DataFrame({ 'cat': ['BP STATION', 'STATION', 'BP OLD', 'OLD OLD'], }) df['cat'] = df['cat'].str.replace(r'^\w+', 'BP') </code></pre> <p>Intended output:</p> <pre><code> cat 0 BP STATION 1 BP STATION 2 BP OLD 3 BP OLD OLD </code></pre>
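The `str.replace` above replaces the first word unconditionally, which would turn `'BP STATION'` into `'BP STATION'` only by luck and mangles nothing here but prepends nothing either. One sketch that prepends only where needed is to build a boolean mask from the first word:

```python
import pandas as pd

df = pd.DataFrame({'cat': ['BP STATION', 'STATION', 'BP OLD', 'OLD OLD']})

# Prepend 'BP ' only where the first word is not already 'BP'
mask = df['cat'].str.split().str[0].ne('BP')
df.loc[mask, 'cat'] = 'BP ' + df.loc[mask, 'cat']
print(df['cat'].tolist())
# ['BP STATION', 'BP STATION', 'BP OLD', 'BP OLD OLD']
```

Splitting on whitespace (rather than testing `startswith('BP')`) avoids falsely matching words that merely begin with the prefix, such as `'BPX'`.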
<python>
2023-12-12 05:17:10
2
392
jonboy
77,643,527
20,803,947
Docker-compose: ModuleNotFoundError: No module named 'src'
<p>I have a problem trying to run my flask, python, celery, mongodb and rabbitmq app via docker.</p> <p>When I run the docker build command and then compose-up I get the following problem:</p> <p><a href="https://i.sstatic.net/15kLr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/15kLr.png" alt="enter image description here" /></a></p> <p>The problem appears to be when running the command: docker-compose up.</p> <p>Dockerfile:</p> <pre><code>FROM python:3.11-alpine AS builder RUN pip install poetry WORKDIR /backend COPY pyproject.toml poetry.lock ./ RUN poetry config virtualenvs.create false &amp;&amp; poetry install --no-root FROM python:3.11-alpine WORKDIR /backend COPY --from=builder /usr/local/lib/python3.11/site-packages/ /usr/local/lib/python3.11/site-packages/ COPY --from=builder /usr/local/bin/ /usr/local/bin/ COPY . . </code></pre> <p>docker-compose.yml :</p> <pre><code>version: '3.9' services: mongodb: image: mongo:latest container_name: mongodb environment: - MONGO_INITDB_ROOT_USERNAME=admin - MONGO_INITDB_ROOT_PASSWORD=admin restart: always ports: - 27017:27017 volumes: - mongodb_data:/data/db rabbitmq: image: rabbitmq:3-management container_name: rabbitmq restart: always environment: - RABBITMQ_DEFAULT_USER=admin - RABBITMQ_DEFAULT_PASS=admin - RABBITMQ_DEFAULT_VHOST=/ ports: - 5672:5672 - 15672:15672 volumes: - rabbitmq_data:/var/lib/rabbitmq celery_worker: build: . container_name: celery_worker environment: - CELERY_BROKER_URL=amqp://admin:admin@rabbitmq:5672 restart: always command: celery --app src.task worker --loglevel=info -O fair --queues default depends_on: - mongodb - rabbitmq flask_app: build: . 
container_name: flask_app command: python3 src/app.py restart: always tty: true environment: - SERVER_HOST=0.0.0.0 - SERVER_PORT=5000 - FLASK_APP=src/app.py ports: - 5000:5000 depends_on: - mongodb - rabbitmq - celery_worker volumes: mongodb_data: # Volume for persisting MongoDB data rabbitmq_data: # Volume for persisting RabbitMQ data </code></pre> <p>The folder structure:</p> <p><a href="https://i.sstatic.net/d4MNU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d4MNU.png" alt="enter image description here" /></a></p> <p>src/app.py:</p> <pre><code>import os from flask import Flask from src.task import add app = Flask(__name__) @app.route('/') def hello(): add.delay(1, 2) return &quot;Docker compose working!&quot; if __name__ == '__main__': host = os.environ.get('SERVER_HOST', 'localhost') port = os.environ.get('SERVER_PORT', '8001') print(f&quot;Server listening {host}:{port}&quot;) app.run(host=host, port=port, debug=True) </code></pre> <p>src/task.py:</p> <pre><code>import os from celery import Celery broker = os.environ.get('CELERY_BROKER_URL', 'amqp://guest@localhost//') celery = Celery('myapp', include=['src.task']) @celery.task(queue='default') def add(x, y): print(f'Adding {x} + {y}') return x + y </code></pre> <p>pyproject.toml:</p> <pre><code>[tool.poetry] name = &quot;&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = &quot;&quot; [tool.poetry.dependencies] python = &quot;^3.11&quot; Flask = &quot;^3.0.0&quot; celery = &quot;^5.3.6&quot; pymongo = &quot;^4.6.1&quot; [tool.poetry.dev-dependencies] [build-system] requires = [&quot;poetry-core&gt;=1.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>I can't run <code>docker-compose up</code> because of the errors; this is the exact code and folder structure I currently have. Can anyone tell me how to fix this and whether I'm doing something wrong?</p>
<python><docker><docker-compose><dockerfile>
2023-12-12 03:25:29
1
309
Louis
77,643,512
10,964,685
Split column by last delimiter AND uppercase values - python
<p>I'm trying to split a column by the last <code>' - '</code> <strong>AND</strong> being followed by all uppercase strings letters.</p> <p><strong>It may not necessarily be the last delimiter in isolation. But it will be the last before all uppercase strings</strong>.</p> <p>I can find separate questions that separate based on first/last delimiter. But not with a combination.</p> <p>Below, I have a df with <code>Value</code> containing various combinations. I want to split the col into two individuals columns, whereby, everything before the last <code>' - '</code> and uppercase letters.</p> <p>I've got <code>Last</code> column correct but not <code>First</code> column.</p> <pre><code>df = pd.DataFrame({ 'Value': ['Juan-Diva - HOLLS', 'Carlos - George - ESTE BAN - BOM', 'Javier Plain - Hotham Ham - ALPINE', 'Yul - KONJ KOL MON'], }) </code></pre> <p>option 1)</p> <pre><code>df[['First', 'l']] = df['Value'].str.split(' - ', n=1, expand=True) df['Last'] = df['Value'].str.split('- ').str[-1] </code></pre> <p>option 2)</p> <pre><code># Regular expression pattern pattern = r'^(.*) - ([A-Z\s]+)$' # Extract groups into two new columns df[['First', 'Last']] = df['Value'].str.extract(pattern) </code></pre> <p>option 3)</p> <pre><code>df[[&quot;First&quot;, &quot;Last&quot;]] = df[&quot;Value&quot;].str.rsplit(&quot; - &quot;, n=1, expand=True) </code></pre> <p>None of these options return the intended output.</p> <p>intended output:</p> <pre class="lang-none prettyprint-override"><code> First Last 0 Juan-Diva HOLLS 1 Carlos - George ESTE BAN - BOM 2 Javier Plain - Hotham Ham ALPINE 3 Yul KONJ KOL MON </code></pre> <p>regex:</p> <pre><code>df[[&quot;First&quot;, &quot;Last&quot;]] = df[&quot;Value&quot;].str.extract(r'(.*?)\s*-\s*([A-Z]+(?:\s*-?\s*[A-Z]+)*)') </code></pre> <pre><code> Value First Last 0 Juan-Diva - HOLLS Juan D 1 Carlos - George - ESTE BAN - BOM Carlos G 2 Javier Plain - Hotham Ham - ALPINE Javier Plain H 3 Yul - KONJ KOL MON Yul KONJ KOL MON </code></pre>
<python><pandas><split>
2023-12-12 03:17:03
1
392
jonboy
77,643,373
188,740
Attach debugger to Firebase Python Functions
<p>I'm using the emulator to run cloud functions with Python. I'd like to attach a debugger using VS Code. This is easy when using Node, I just need to pass the <code>--inspect-functions</code> flag in when starting the emulator:</p> <pre><code>firebase emulators:start --project demo-project --inspect-functions </code></pre> <p>Here is my vscode launch.json:</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;type&quot;: &quot;node&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;name&quot;: &quot;Firebase Cloud Functions&quot;, &quot;port&quot;: 9229 } ] } </code></pre> <p>Now clicking <kbd>F5</kbd> is vscode will attach the debugger.</p> <p>When running Python functions, I get this warning in emulator logs:</p> <pre><code>functions --inspect-functions only supported for Node.js runtimes. functions --inspect-functions not supported for Python functions. Ignored. </code></pre> <p>How do I attach the debugger to Python cloud functions with Firebase?</p>
<python><google-cloud-functions><vscode-debugger><firebase-tools>
2023-12-12 02:34:22
0
57,942
Johnny Oshika
77,643,232
6,533,037
Output 256-bit using Argon2 hasher
<p>I am trying to generate a 256-bit output using the Argon2 password hasher. The function takes a <code>hash_len</code> parameter, which I set to 32, thinking that 32 bytes equals 256 bits.</p> <p>Why does Argon2 output a length of 43, and not 32?</p> <pre><code>from argon2 import PasswordHasher password = &quot;abc123&quot; salt = b'b8b17dbde0a2c67707342c459f6225ed' hasher = PasswordHasher( salt_len=len(salt), hash_len=32, ) hasherOutput = hasher.hash(password, salt = salt) hash = hasherOutput.split('$')[-1] print(len(hash)) # Output: 43 # Expected: 32 </code></pre>
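The 43 characters are not the digest itself but its base64 encoding: the segment after the last `$` in the PHC string stores the 32-byte hash as unpadded base64, and ⌈32 × 4⁄3⌉ = 43. A small stdlib sketch of the relationship (using 32 zero bytes as a stand-in for the real digest):

```python
import base64

raw = bytes(32)                                # stand-in for the 32-byte digest
segment = base64.b64encode(raw).rstrip(b"=")   # what appears after the last '$'
print(len(segment))                            # 43 characters, not 32

# Re-pad to a multiple of 4 to recover the raw 32 bytes:
padded = segment + b"=" * (-len(segment) % 4)
print(len(base64.b64decode(padded)))           # 32
```

So `hash_len=32` is working as intended; it is the encoded text, not the digest, that is 43 characters long.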
<python><argon2-cffi>
2023-12-12 01:33:34
1
1,683
O'Niel
77,643,184
10,266,059
Why does pydoc say "any" is a package? I thought any() was a function
<p>I am learning Python and I thought I would try using pydoc to give me a description of each built-in function. But when I run <code>pydoc any</code>, it returns:</p> <pre class="lang-none prettyprint-override"><code>Help on package any: NAME any PACKAGE CONTENTS FILE (built-in) </code></pre> <p>However, <code>pydoc builtins</code> has a FUNCTIONS section which has what I was looking for:</p> <pre class="lang-none prettyprint-override"><code>any(iterable, /) Return True if bool(x) is True for any x in the iterable. If the iterable is empty, return False. </code></pre> <p>In perl, you can run e.g., <code>perldoc -tf open</code> to see the documentation for the <code>open()</code> function.</p> <p>I was expecting similar functionality in Python, especially given this text from pydoc help:</p> <pre class="lang-none prettyprint-override"><code>pydoc &lt;name&gt; ... Show text documentation on something. &lt;name&gt; may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If &lt;name&gt; contains a '/', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. </code></pre> <p>To recap, <code>any()</code> is a function but <code>pydoc any</code> doesn't show me the documentation for the <code>any()</code> function. Instead I have to dig inside the <code>pydoc builtins</code> output which is quite large.</p> <p>Is there a Python equivalent to <code>perldoc -tf</code> where I can pull out documentation for a single function?</p>
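A dotted reference works where the bare name does not, because pydoc treats a bare `any` as a module/package lookup: `pydoc builtins.any` (or `python -m pydoc builtins.any`) on the command line shows just that one function. The same lookup is available from Python — a small sketch:

```python
import pydoc

# Dotted lookup resolves the attribute inside the builtins module:
text = pydoc.render_doc("builtins.any")
print(text)
```

This is about as close as pydoc gets to `perldoc -tf open`; interactively, `help(any)` in the REPL shows the same text.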
<python>
2023-12-12 01:12:06
1
1,676
Aleksey Tsalolikhin
77,643,183
10,200,497
groupby streak of numbers and one row after it then check the first value of a column for each group
<p>This is an extension to this <a href="https://stackoverflow.com/questions/77372530/groupby-streak-of-numbers-and-one-row-after-it">post</a>.</p> <p>This is my dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame( { 'a': [ 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0], 'b': [-1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1] } ) </code></pre> <p>And my desired outcome which is about grouping them is:</p> <pre><code> a b 4 1 1 5 0 -1 10 1 1 11 1 1 12 0 -1 </code></pre> <p>Basically, I want to group them by streak of 1 and one row after where streak ends in column <code>a</code>. This <a href="https://stackoverflow.com/a/77372543/10200497">answer</a> does that:</p> <pre><code>g = df.loc[::-1, 'a'].eq(0).cumsum() out = [g for _,g in df.groupby(g, sort=False) if len(g)&gt;1] </code></pre> <p>But now what I want is check if the first value in <code>b</code> for each group is 1.</p> <p>I don't know what is the best approach to check the first value of <code>b</code>. This is what I have tried but I am not sure if it works in every case.</p> <pre><code>groups = df.groupby(g).filter(lambda x: x.b.iloc[0] == 1) </code></pre> <p>I have experienced some situations where the code works in an example but it does not work in every situation with different conditions so I want to double check my code.</p>
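One way to double-check the filter approach is simply to add the first-`b` condition to the existing comprehension, so both the streak-length check and the first-value check are applied per group, then compare against the expected rows. On the sample data this produces exactly the desired outcome:

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': [ 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
        'b': [-1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, -1],
    }
)

# Same grouping key as before: streaks of 1 plus the row that ends them
g = df.loc[::-1, 'a'].eq(0).cumsum()

# Keep only groups that are real streaks (len > 1) AND whose first b is 1
out = [grp for _, grp in df.groupby(g, sort=False)
       if len(grp) > 1 and grp['b'].iloc[0] == 1]
result = pd.concat(out)
print(result)
```

Note that `groupby(g).filter(...)` alone would also keep single-row groups whose first (only) `b` is 1, so combining the `len(g) > 1` condition with the `b` check in one place, as above, is the safer variant.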
<python><pandas>
2023-12-12 01:11:37
2
2,679
AmirX
77,643,130
10,964,685
Split col by any delimiter and Uppercase values
<p>I'm trying to split a column by the last <code>' - '</code> that is followed by an all-uppercase string.</p> <p>Below, I have a df with <code>Value</code> containing various combinations. I want to split the col into two individual columns: everything before the last <code>' - '</code>, and the uppercase letters after it.</p> <p>I've got the <code>Last</code> column correct but not the <code>First</code> column.</p> <pre><code>df = pd.DataFrame({ 'Value': [ 'Juan-Diva - HOLLS', 'Carlos - George - ESTE BAN - BOM', 'Javier Plain - Hotham Ham - ALPINE', 'Yul - KONJ KOL MON'], }) </code></pre> <p>option 1)</p> <pre><code>df[['First', 'l']] = df['Value'].str.split(' - ', n=1, expand=True) df['Last'] = df['Value'].str.split('- ').str[-1] </code></pre> <p>option 2)</p> <pre><code># Regular expression pattern pattern = r'^(.*) - ([A-Z\s]+)$' # Extract groups into two new columns df[['First', 'Last']] = df['Value'].str.extract(pattern) </code></pre> <p>option 3)</p> <pre><code>df[[&quot;First&quot;, &quot;Last&quot;]] = df[&quot;Value&quot;].str.rsplit(&quot; - &quot;, n=1, expand=True) </code></pre> <p>Option 4 works but is incredibly slow. Use case datasets are only a few hundred rows and it's still computing after hours.</p> <p>Option 4)</p> <pre><code>df[[&quot;First&quot;, &quot;Last&quot;]] = df[&quot;Value&quot;].str.extract(r'(.*?)\s*-\s*([A-Z]+(?:\s*-?\s*[A-Z]+)*)') </code></pre> <p>intended output:</p> <pre class="lang-none prettyprint-override"><code> First Last 0 Juan-Diva HOLLS 1 Carlos - George ESTE BAN - BOM 2 Javier Plain - Hotham Ham ALPINE 3 Yul KONJ KOL MON </code></pre>
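Option 4's slowness is likely catastrophic backtracking: `[A-Z]+(?:\s*-?\s*[A-Z]+)*` nests quantifiers, so on a tail that ultimately fails to match, the engine retries exponentially many partitions. A sketch that avoids this: a lazy prefix, a spaced hyphen, then a plain character class (uppercase letters, spaces, hyphens) for the tail — the lazy prefix stops at the first `' - '` whose remainder is entirely in that class:

```python
import pandas as pd

df = pd.DataFrame({
    'Value': [
        'Juan-Diva - HOLLS',
        'Carlos - George - ESTE BAN - BOM',
        'Javier Plain - Hotham Ham - ALPINE',
        'Yul - KONJ KOL MON'],
})

# \s+-\s+ only matches a spaced hyphen, so 'Juan-Diva' is never split;
# the simple class for group 2 cannot backtrack combinatorially.
df[['First', 'Last']] = df['Value'].str.extract(r'^(.*?)\s+-\s+([A-Z][A-Z\s-]*)$')
print(df[['First', 'Last']])
```

Note the tail class admits internal `' - '` separators (as in `ESTE BAN - BOM`) without any nested repetition, which is what keeps the match linear in practice.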
<python><pandas><split>
2023-12-12 00:57:17
2
392
jonboy
77,643,104
232,798
AWS Lambda, Snowflake connector and "module 'cryptography.hazmat.bindings._rust.openssl' has no attribute 'hashes'"
<p>I am creating a layer for the snowflake connector. I downloaded amazon linux image, installed the snowflake connector in the image: <code>pip install snowflake-connector-python</code>;</p> <p>after that I uploaded the layer and lambda through SAM installation. Lambda is still giving me this issue though</p> <pre><code> &quot; File \&quot;/var/lang/lib/python3.9/importlib/__init__.py\&quot;, line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap&gt;\&quot;, line 1030, in _gcd_import\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap&gt;\&quot;, line 1007, in _find_and_load\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap&gt;\&quot;, line 986, in _find_and_load_unlocked\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap&gt;\&quot;, line 680, in _load_unlocked\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap_external&gt;\&quot;, line 850, in exec_module\n&quot;, &quot; File \&quot;&lt;frozen importlib._bootstrap&gt;\&quot;, line 228, in _call_with_frames_removed\n&quot;, &quot; File \&quot;/var/task/app.py\&quot;, line 2, in &lt;module&gt;\n import snowflake.connector\n&quot;, </code></pre> <p>any ideas appreciated.</p> <p>python 3.9.16</p> <p>snowflake-connector-python==3.6.0 cryptography==41.0.7</p> <p>amazonlinux: 2023.2.20231113.0</p>
<python><aws-lambda><snowflake-cloud-data-platform>
2023-12-12 00:44:46
0
10,613
Danail
77,643,086
11,124,121
How to solve the error:ufunc 'isfinite' not supported for the input types in the sm.models?
<p>I tried to follow the code in this link: <a href="https://jbhender.github.io/Stats506/F17/Projects/Poisson_Regression.html" rel="nofollow noreferrer">https://jbhender.github.io/Stats506/F17/Projects/Poisson_Regression.html</a></p> <p>But this error message popped up:</p> <pre><code>TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>I checked the code and found that the dummy variables were not cast as integers; they were still factors/characters. I am not sure whether the error is due to the data type or something I don't know about. Please give me some suggestions. Thank you!</p>
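The diagnosis about the dummy variables is very plausible: this `isfinite` error typically appears when the design matrix still contains object/bool columns, and casting the dummies to a numeric dtype before fitting usually resolves it. A small sketch with a hypothetical categorical column (`prog` is a stand-in name):

```python
import pandas as pd

df = pd.DataFrame({'prog': ['General', 'Academic', 'Vocational', 'Academic']})

# pd.get_dummies can yield bool/object columns; cast explicitly to float
# so statsmodels receives an all-numeric design matrix.
dummies = pd.get_dummies(df['prog'], drop_first=True).astype(float)
print(dummies.dtypes)
```

The resulting columns are all `float64`, which `sm.GLM`/`sm.Poisson` can apply `isfinite` to without a casting error.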
<python><pandas><numpy>
2023-12-12 00:36:05
0
853
doraemon
77,642,881
9,263,783
Python 3.11 Typing for multiple inheritance models
<p>I want to implement typing for a group of related classes that share a common metaclass (<code>AMeta</code>), there are two 'de facto' abstract parent classes:</p> <ul> <li><code>A</code> (:type[AMeta])</li> <li><code>ADerived</code> - that also inherits from another class <code>C</code>.</li> </ul> <p>Finally there are actual implementation models of <code>ADerived</code> (<code>D1</code>, <code>D2</code>, ... these are dynamically created) and <code>A</code> (<code>E</code>, <code>F</code>, ...) parent classes.</p> <p>The implementations of <code>A</code> (<code>E</code>, <code>F</code>) have also a class variable of type <code>ADerived</code> (<code>_DerivedModel</code>) and the problem I have is trying to make <code>mypy</code> infer the correct type for them.</p> <p>I know this piece of code might not make sense, but the actual implementation is more complex and the derived models are dynamically created. Here's a simplification of it:</p> <pre><code>from __future__ import annotations from typing import TypeVar, Type, ClassVar _BModel = TypeVar(&quot;_BModel&quot;, bound=&quot;ADerived&quot;) class C: pass class AMeta(type): @property def BModel(cls: Type[A]) -&gt; Type[_BModel]: return cls._DerivedModel # Abstract Models class A(metaclass=AMeta): _DerivedModel: ClassVar[Type[_BModel]] class ADerived(A, C): pass # Derived Models (this models are dynamically created) class D1(ADerived): pass class D2(ADerived): pass # Implementations class E(A): _DerivedModel = D1 class F(A): _DerivedModel = D2 MyDerived1 = E.BModel # Should be infered by mypy as type[D1] MyDerived2 = F.BModel # Should be infered by mypy as type[D2] </code></pre>
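One possible restructuring (a sketch, not the only answer): make `A` generic in `_BModel` and parameterize it in each implementation, so mypy can bind the TypeVar per subclass. Two deliberate simplifications versus the original: `ClassVar[...]` may not contain a TypeVar, so the attribute is declared as a plain annotation, and the metaclass property becomes a classmethod (`ADerived` here subclasses only `C` to keep the sketch short):

```python
from typing import Generic, Type, TypeVar

class C: ...
class ADerived(C): ...
class D1(ADerived): ...
class D2(ADerived): ...

_BModel = TypeVar("_BModel", bound=ADerived)

class A(Generic[_BModel]):
    # Plain annotation: ClassVar cannot be parameterized with a TypeVar
    _DerivedModel: Type[_BModel]

    @classmethod
    def bmodel(cls) -> Type[_BModel]:
        return cls._DerivedModel

class E(A[D1]):
    _DerivedModel = D1

class F(A[D2]):
    _DerivedModel = D2

print(E.bmodel(), F.bmodel())
```

With this shape, mypy should infer `E.bmodel()` as `type[D1]` and `F.bmodel()` as `type[D2]`, since the class argument in `A[D1]` / `A[D2]` binds `_BModel` for each subclass.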
<python><mypy><typing>
2023-12-11 23:23:50
1
341
Bravhek
77,642,836
4,398,966
Simplify .format printing in Python
<p>I'm trying to print out the following:</p> <pre><code>----------------------------- | | | | | P | | | Y | | | T | | PYTHON! | H | | | O | | | N | | | ! | | | | ----------------------------- | | | | | | | | | | | | |PYTHON! | PYTHON!| | | | | | | | | | | | | ----------------------------- </code></pre> <p>Here's what I have:</p> <pre><code>rowBorder = '-' * 29 col = '|' space = ' ' emptyColRow4 = (col + space * 13 + col + space * 13 + col + &quot;\n&quot;) * 4 text = 'PYTHON!' emptyRow = col+space*13+col+space*13+col print(rowBorder) print(emptyRow) for l in text: if l != 'H': verticalLetter = '{}{}{}'.format(col + space*13 + col + space*6,l,space*6+col) else: verticalLetter = '{}{:^13}{}{}{}'.format(col,text, col + space*6,l,space*6+col) print(verticalLetter) print(emptyRow) print(rowBorder) print(emptyColRow4,end='') print('{}{:&lt;13}{}{:&gt;13}{}'.format(col,text,col,text,col)) print(emptyColRow4,end='') print(rowBorder) </code></pre> <p>Is there some way we could embed the <em>for</em> loop directly into the <em>print</em> statement?</p>
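For the vertical-letter section specifically, the loop can collapse into a single `print` by joining a generator expression with newlines — the `if`/`else` moves inside the format call, since the only difference between the two branches is whether the left cell is empty or holds the text:

```python
col, space, text = '|', ' ', 'PYTHON!'

# One print call: a generator expression joined with newlines replaces the loop
block = '\n'.join(
    '{}{:^13}{}{}{}'.format(col, text if l == 'H' else '',
                            col + space * 6, l, space * 6 + col)
    for l in text
)
print(block)
```

Centering the empty string with `{:^13}` yields the same 13 spaces the original produced with `space * 13`, so the output is character-for-character identical to the loop's.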
<python><python-3.x><for-loop><format>
2023-12-11 23:07:15
3
15,782
DCR
77,642,795
13,135,901
Pandas change values in pandas dataframe based on a slice with selected indexes and shift()
<p>Let's say I have a dataframe <code>df</code>. First I filter it, then I need to find all rows that satisfy a criteria and then mark all these rows plus 2 previous rows for each in a specific bool column in the original dataframe:</p> <pre><code>df = pd.DataFrame({{'A': {0: nan, 1: nan, 2: 1944.09, 3: nan, 4: nan, 5: 1926.0, 6: nan, 7: 1930.31, 8: nan, 9: nan, 10: nan, 11: nan, 12: nan, 13: nan, 14: nan, 15: 1917.66, 16: 1920.43, 17: nan, 18: 1909.04, 19: nan, 20: nan, 21: nan, 22: nan, 23: nan, 24: 1920.05, 25: nan, 26: 1915.4, 27: 1921.87, 28: nan, 29: nan, 30: nan, 31: 1912.42, 32: 1920.08, 33: 1915.8, 34: nan, 35: nan, 36: nan, 37: nan, 38: 1919.71, 39: 1916.2, 40: nan, 41: 1926.79, 42: nan, 43: 1918.66, 44: nan, 45: 1925.5, 46: 1922.22, 47: nan, 48: nan, 49: 1927.87, 50: 1923.24, 51: nan, 52: 1929.53, 53: nan, 54: nan, 55: nan, 56: nan, 57: nan, 58: nan, 59: nan, 60: nan, 61: 1918.37, 62: nan, 63: nan, 64: 1923.61, 65: nan, 66: 1917.1, 67: nan, 68: nan, 69: nan, 70: nan, 71: nan, 72: nan, 73: nan, 74: nan, 75: nan, 76: nan, 77: nan, 78: nan, 79: nan, 80: nan, 81: 1924.48, 82: nan, 83: nan, 84: 1923.03, 85: nan, 86: nan, 87: nan, 88: nan, 89: 1926.87, 90: nan, 91: nan, 92: nan, 93: 1921.79, 94: nan, 95: 1925.27, 96: nan, 97: 1919.0, 98: nan, 99: nan, 100: 1923.74, 101: nan, 102: nan, 103: nan, 104: nan, 105: 1911.61, 106: nan, 107: 1923.33, 108: nan, 109: nan, 110: nan, 111: 1912.0, 112: nan, 113: 1915.8, 114: nan, 115: 1913.05, 116: nan, 117: nan, 118: nan, 119: nan, 120: nan, 121: nan, 122: 1916.93, 123: nan, 124: 1913.69, 125: nan, 126: nan, 127: nan, 128: nan, 129: 1918.38, 130: 1913.7, 131: nan, 132: nan, 133: nan, 134: nan, 135: nan, 136: 1919.5, 137: nan, 138: 1916.14, 139: nan, 140: nan, 141: nan, 142: nan, 143: nan, 144: 1921.28, 145: nan, 146: nan, 147: nan, 148: nan, 149: nan, 150: 1915.0, 151: nan, 152: nan, 153: nan, 154: nan, 155: nan, 156: 1927.48, 157: 1889.17, 158: nan, 159: 1921.91, 160: 1917.67, 161: 1923.23, 162: nan, 163: nan, 164: nan, 
165: 1909.88, 166: nan, 167: 1913.82, 168: 1902.51, 169: nan, 170: nan, 171: nan, 172: nan, 173: nan, 174: nan, 175: nan, 176: nan, 177: nan, 178: nan, 179: 1920.15}, 'C': {0: False, 1: False, 2: True, 3: False, 4: False, 5: False, 6: False, 7: False, 8: False, 9: False, 10: False, 11: False, 12: False, 13: False, 14: False, 15: False, 16: False, 17: False, 18: False, 19: False, 20: False, 21: False, 22: False, 23: False, 24: False, 25: False, 26: False, 27: True, 28: False, 29: False, 30: False, 31: False, 32: True, 33: False, 34: False, 35: False, 36: False, 37: False, 38: False, 39: False, 40: False, 41: True, 42: False, 43: False, 44: False, 45: False, 46: False, 47: False, 48: False, 49: False, 50: False, 51: False, 52: True, 53: False, 54: False, 55: False, 56: False, 57: False, 58: False, 59: False, 60: False, 61: False, 62: False, 63: False, 64: False, 65: False, 66: False, 67: False, 68: False, 69: False, 70: False, 71: False, 72: False, 73: False, 74: False, 75: False, 76: False, 77: False, 78: False, 79: False, 80: False, 81: False, 82: False, 83: False, 84: False, 85: False, 86: False, 87: False, 88: False, 89: True, 90: False, 91: False, 92: False, 93: False, 94: False, 95: False, 96: False, 97: False, 98: False, 99: False, 100: False, 101: False, 102: False, 103: False, 104: False, 105: False, 106: False, 107: True, 108: False, 109: False, 110: False, 111: False, 112: False, 113: False, 114: False, 115: False, 116: False, 117: False, 118: False, 119: False, 120: False, 121: False, 122: False, 123: False, 124: False, 125: False, 126: False, 127: False, 128: False, 129: False, 130: False, 131: False, 132: False, 133: False, 134: False, 135: False, 136: False, 137: False, 138: False, 139: False, 140: False, 141: False, 142: False, 143: False, 144: False, 145: False, 146: False, 147: False, 148: False, 149: False, 150: False, 151: False, 152: False, 153: False, 154: False, 155: False, 156: True, 157: False, 158: False, 159: False, 160: False, 161: True, 
162: False, 163: False, 164: False, 165: False, 166: False, 167: False, 168: False, 169: False, 170: False, 171: False, 172: False, 173: False, 174: False, 175: False, 176: False, 177: False, 178: False, 179: False}}) df2 = df[df.C] print(df2) 2 1944.09 27 1921.87 32 1920.08 41 1926.79 52 1929.53 89 1926.87 107 1923.33 156 1927.48 161 1923.23 Name: A, dtype: float64 df3 = df2[(df2.A &gt; df2.A.shift(1)) &amp; (df2.A.shift(1) &gt; df2.A.shift(2))] print(df3) 52 1929.53 Name: A, dtype: float64 df[FROM_2nd_PREVIOUS_TO_EVERY_ROW_OF_df3_IN_df2:TO_EVERY_ROW_IN_df3, 'B'] = True print(df) A B 0 NaN False .. ... ... 31 1234.56 False 32 1920.08 True 33 1234.56 True .. ... ... 41 1926.79 True 40 1234.56 True .. ... ... 51 1234.56 True 52 1929.53 True 52 1234.56 False .. ... ... 176 NaN False 177 NaN False 178 NaN False 179 1920.15 False </code></pre> <p>What is the correct way to do it?</p>
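Since "2 previous rows" means the two preceding rows *of the filtered frame* `df2` (not the two preceding integer labels of `df`), one sketch is to map each hit label back to its position in `df2`, step two positions back, and mark the label range in `df` — shown here on small stand-in frames with hypothetical labels rather than the full data above:

```python
import pandas as pd

df = pd.DataFrame({'A': range(10)})   # stand-in for the full frame
df2 = df.loc[[1, 3, 5, 8]]            # stand-in for df[df.C]
hits = [8]                            # stand-in for df3.index

df['B'] = False
pos = pd.Series(range(len(df2)), index=df2.index)   # label -> position in df2
for label in hits:
    start = df2.index[max(pos[label] - 2, 0)]       # 2nd-previous row of df2
    df.loc[start:label, 'B'] = True                 # .loc slicing is label-inclusive
print(df['B'].tolist())
```

Here the hit at label 8 marks labels 3 through 8, mirroring the expected output above where the hit at 52 marks everything from label 32 (two `df2` rows earlier) through 52.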
<python><pandas><dataframe>
2023-12-11 22:55:13
1
491
Viktor
77,642,777
329,365
inconsistent "Invalid JSON from piped input" error from python/powershell
<p>I have a Jenkins instance running on a windows server. There is a specific job that runs a python script and part of that python script is calling powershell:</p> <pre><code>generate_password = f&quot;&quot;&quot;Invoke-Expression $('{PASSWORD}' | op signin --account {ACCOUNT} ) op item create --vault shared title={TITLE} --generate-password='letters,digits,16' --category Password&quot;&quot;&quot; result = subprocess.run([&quot;powershell&quot;, &quot;-Command&quot;, generate_password], capture_output=True, text=True, encoding=&quot;utf-8&quot;) </code></pre> <p>The variables are passed from the Jenkins environment, but I have also tested this where they are hardcoded into the script.</p> <p>Now I can run the script fine on my local machine with no problems, I can run it on the jenkins server AS the jenkins user with no problems, but the jenkins job itself consistently gets the error &quot;Invalid JSON in piped input&quot;.</p> <p>I've even tried manually creating the script and storing it on the server, then instead of passing the script directly, I call powershell and tell it to run the saved file - this results in exactly the same error.</p> <p>I'm assuming the issue must relate to the Jenkins environment that is spawning the powershell process, but I'm unsure of what I can try to resolve it.</p> <p>I currently run a variety of python jobs via jenkins (80 or so) and this is the only time I've run into an issue like this.</p> <p><em>edit</em>: All of my jenkins pipelines are very simple, they pull a git repo and run a script in that repo. I'm using jenkins for orchestration only as it has brilliant features for monitoring and scheduling at a price that can't be beat.</p> <p><em>edit</em>: I've come up with a workaround, I created a scheduled task on the Jenkins server that runs a specific powershell script in a specific location. In my python job I now create that script, call the task, read the output file, then delete the script and the output file. 
It's not ideal, but it works.</p> <p><em>edit</em>: Interestingly, this very similar code runs fine:</p> <pre><code>share_password = f&quot;&quot;&quot;Invoke-Expression $('{PASSWORD}' | op signin --account {ACCOUNT} ) op item share {final_password_name} --vault shared --emails {email_list}&quot;&quot;&quot; result = subprocess.run([&quot;powershell&quot;, &quot;-Command&quot;, share_password], capture_output=True, text=True, encoding=&quot;utf-8&quot;) </code></pre>
<python><powershell><jenkins>
2023-12-11 22:50:06
0
3,447
danspants
77,642,702
3,361,462
How does iterated application of the `+=` operator to a string variable manage to avoid quadratic complexity?
<p>I am kinda confused. I was sure that, due to the immutability of <code>str</code>, an operation like <code>s += &quot;abc&quot;</code> needs to copy the whole content of <code>s</code> to another place in memory, so effectively adding a character to a very long string should consume much time.</p> <p>I wrote a snippet to prove my theory:</p> <pre><code>import timeit def foo(i): res = &quot;&quot; for _ in range(i): res += &quot;a&quot; return res def foo2(i): res = [] for _ in range(i): res.append(&quot;a&quot;) return &quot;&quot;.join(res) iterations = 100000 print(timeit.timeit('foo(iterations)', globals=globals(), number=100)) print(timeit.timeit('foo2(iterations)', globals=globals(), number=100)) </code></pre> <p>However, it looks like:</p> <ol> <li><code>foo</code> execution time grows <em>linearly</em> with <code>iterations</code></li> <li><code>foo2</code> is barely two times faster than <code>foo</code></li> </ol> <p>I inspected the bytecode searching for hidden optimizations, and also changed the constant string to a randomized one of the proper length to rule out interning, but couldn't find any explanation of this behaviour.</p> <p>Was I wrong then? Does <code>+=</code> depend on string length or not? If so, how can I prove that?</p>
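One way to "prove" the dependence: CPython's optimization resizes the string in place only when it holds the sole reference to it, so keeping an extra reference alive defeats it and restores the quadratic behaviour. A sketch (CPython-specific — the language itself guarantees nothing here):

```python
import timeit

def concat(n):
    s = ""
    for _ in range(n):
        s += "a"          # refcount of s is 1 -> CPython may resize in place
    return s

def concat_aliased(n):
    s, refs = "", []
    for _ in range(n):
        refs.append(s)    # extra reference forces a full copy on every +=
        s += "a"
    return s

print(timeit.timeit(lambda: concat(20_000), number=5))
print(timeit.timeit(lambda: concat_aliased(20_000), number=5))  # much slower
```

Doubling `n` roughly doubles the first timing but roughly quadruples the second, which is the linear-vs-quadratic signature the original snippet was looking for.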
<python><string><string-concatenation><cpython>
2023-12-11 22:28:47
1
7,278
kosciej16
77,642,635
7,563,454
Convert list index to a 3D position based on width and height
<p>I'm working on a CPU voxel raytracer in Python. With the amount of costly calculation involved, everything must be as efficient as possible, especially storing and fetching point data in space. I started by storing 2D and 3D data in dictionaries indexed by string positions, e.g. <code>data[&quot;4,16&quot;] == &quot;solid&quot;</code>. Since converting positions to and from strings is costly, and dictionaries are slower than lists, I'm in the process of moving to the more efficient system of storing everything in an ordered array and using the index to determine which position an entry refers to. I already have this working successfully in 2D:</p> <pre><code># Returns x, y position from index i based on width def index_vec2(i: int, width: int): return math.floor(i % width), math.floor(i / width) </code></pre> <p>Say you have a 4x4 square: if the array contains 16 entries, you easily know index 3 represents position <code>x == 3, y == 0</code>, then index 4 is position <code>x == 0, y == 1</code>. The function only needs the rectangle's width to deduce this; height isn't necessary, as there's no need to ensure the whole area is filled or that extra entries don't spill out.</p> <p>I'm trying to expand the same concept to 3D: you pass an <code>i</code> index to the function, and in this case it needs to know both width and height to also calculate the depth. I'm struggling a bit, as the math is more complex with a third axis... if I force my mind on it for a few more hours I might figure it out, but it also seems beneficial to have an answer here for others attempting the same thing.</p> <pre><code># Returns x, y, z position from index i based on width and height def index_vec3(i: int, width: int, height: int): return math.floor(i % width), math.floor(i / width), math.floor(i / (width * height)) </code></pre> <p>Currently this seems to report X and Z accurately, but I can't figure out what to do with Y. 
Initially it starts out well, increasing by one each time <code>i</code> crosses a multiple of the width. The problem is that once we pass into the next Z layer, Y doesn't return to 0 and start over again; it keeps climbing all the way up to 15. Using <code>for i in range(0, 64): index_vec3(i, 4, 4)</code> (simulating iteration over a 4x4x4 cube), I get the following result:</p> <pre><code>0,0,0 1,0,0 2,0,0 3,0,0 0,1,0 1,1,0 2,1,0 3,1,0 0,2,0 1,2,0 2,2,0 3,2,0 0,3,0 1,3,0 2,3,0 3,3,0 0,4,1 1,4,1 2,4,1 3,4,1 0,5,1 1,5,1 2,5,1 3,5,1 0,6,1 1,6,1 2,6,1 3,6,1 0,7,1 1,7,1 2,7,1 3,7,1 0,8,2 1,8,2 2,8,2 3,8,2 0,9,2 1,9,2 2,9,2 3,9,2 0,10,2 1,10,2 2,10,2 3,10,2 0,11,2 1,11,2 2,11,2 3,11,2 0,12,3 1,12,3 2,12,3 3,12,3 0,13,3 1,13,3 2,13,3 3,13,3 0,14,3 1,14,3 2,14,3 3,14,3 0,15,3 1,15,3 2,15,3 3,15,3 </code></pre> <p>How can I improve my math, and what's the simplest form to achieve a third axis? The only rule is no loops or increments, which would cheat the optimization and negate the performance benefits; each axis must be determined by transforming <code>i</code> with simple math based on the width and height. Hopefully it's simple enough to remain a one-liner, but that's not mandatory.</p>
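For comparison, the standard row-major indexing identity keeps every axis in range by taking a remainder after dividing out the faster-varying dimensions. A sketch (this is the textbook formula, offered as a reference point, not code from the project):

```python
def index_vec3_rowmajor(i: int, width: int, height: int):
    # x cycles fastest, then y, then z; the `% height` is what makes
    # y wrap back to 0 at the start of each new z layer.
    x = i % width
    y = (i // width) % height
    z = i // (width * height)
    return x, y, z

# Iterating a 4x4x4 cube:
print(index_vec3_rowmajor(15, 4, 4))  # (3, 3, 0) -- end of layer 0
print(index_vec3_rowmajor(16, 4, 4))  # (0, 0, 1) -- y resets with the new layer
```

The inverse mapping is `i == x + width * (y + height * z)`, which gives an easy way to sanity-check all three expressions, and there is no loop or increment anywhere — just integer division and modulo on `i`.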
<python><python-3.x><math><vector><3d>
2023-12-11 22:06:11
1
1,161
MirceaKitsune