QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
76,176,105
16,773,979
UDP traffic between two seperate virtual machines?
<p>I'm making a chat app in python that runs directly on UDP to get more familiar with networking, and I'm just fixing up my connection protocol, and I spun up a fresh VM behind a proxy to test if the python script would accept the UDP traffic from an external IP, and, nothing.</p> <p>The short and sweet version of my serverside code: (server.py)</p> <pre><code>import socket localIP = '' localPort = 1993 bufferSize = 1024 UDPServerSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) UDPServerSocket.bind((localIP, localPort)) print(&quot;UDP server up and listening&quot;) while (True): bytesAddressPair = UDPServerSocket.recvfrom(bufferSize) message = bytesAddressPair[0] address = bytesAddressPair[1] clientMsg = &quot;Message from Client:{}&quot;.format(message) clientIP = &quot;Client IP Address:{}&quot;.format(address) print(clientMsg) print(clientIP) </code></pre> <p>My client.py networking stuff:</p> <pre><code>import socket UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM) socket.setdefaulttimeout(10) #I know this IP is a placeholder, it's not that way in my actual code serverAddressPort = (&quot;1.1.1.1&quot;, 1993) bufferSize = 1024 def recvUDP(): try: response = UDPClientSocket.recvfrom(bufferSize) return response except Exception as e: choice = confirm( f&quot;Exception '{e}' occured while awaiting server response. Try again?&quot;) if choice: response = recvUDP() def getUDP(message): UDPClientSocket.sendto(str.encode(message), serverAddressPort) try: response = UDPClientSocket.recvfrom(bufferSize) while response is None: time.sleep(0.1) response = recvUDP() return response except Exception as e: choice = confirm( f&quot;Exception '{e}' occured while awaiting server response. Try again?&quot;) if choice: response = getUDP(message) def connect(): response = getUDP(&quot;initial&quot;) if response is None: sys.exit() print(&quot;Initial packet recieved&quot;) connect() </code></pre> <p>I can't seem to get any sort of communication between the two machines directly over UDP, but I did try a simple <code>nc -L -p 1993</code> from the server machine and <code>curl</code> from the client machine, which had the packet recieved. When I tried to bind it to the public IP, it said that the socket module was unable to bind to the given IP.</p> <p>Any suggestions? Thanks!</p>
<python><sockets><network-programming><udp>
2023-05-04 17:55:25
1
389
MrToenails
76,176,052
16,912,844
Python pytest Exclude Test Directory in PYTHONPATH
<p>I have a weird scenario with <code>pytest</code> where we need to exclude the test directory from <code>PYTHONPATH</code> during the test.</p> <p><strong>Context</strong></p> <p>The pytest command for reference</p> <p><code>python -m pytest main_project_a/folder_a/sub_project_a ...</code></p> <p>I have <code>main_project_a</code> that requires to run test with a custom build of Python. It is built in the <code>build</code> folder. I need to use it to run the test in <code>folder_a/sub_project_a</code>. But modules in <code>sub_project_a</code> requires another package <code>api_2</code>, but <code>pytest</code> includes the <em>test directory</em> <code>main_project_a/folder_a/sub_project_a</code> in the <code>PYTHONPATH</code>, so it tried to look for <code>api_2</code> in the <em>test directory</em>, which doesn't exist.</p> <p>How to tell <code>pytest</code> no to include the test directory, or tell <code>pytest</code> to look in <code>main_project_a/build/sub_project_a/lib/python3.9/sub_project_a</code> instead?</p> <pre><code>WORKSPACE - main_project_a - ... - build - sub_project_a - bin - lib - python3.9 - sub_project_a - api - api_2 - ... - folder_a - sub_project_a - api - module_a - test - module_b - test </code></pre>
<python><pytest><pythonpath>
2023-05-04 17:48:08
0
317
YTKme
76,175,836
1,451,346
Requiring variables to be declared at or before their first use
<p>Python has always allowed you to assign to a new variable with the same syntax that you use to assign to an existing variable. So if you misspell a variable name (or forget to say <code>global</code> or <code>nonlocal</code>, when you mean to refer to a preexisting but non-local variable), you silently create a new variable.</p> <p>I know that Python has added more syntactic support for type annotations over the years, and there are various tools that use these to enforce static checks on Python code. But I'm foggy on the details of what's possible. Can you automatically check that all variables are declared with an annotation, so that mistakenly created variables become an error for the type-checker?</p>
<python><python-typing>
2023-05-04 17:22:10
1
3,525
Kodiologist
76,175,601
5,597,037
Unable to authenticate with Godaddy/Office365 using IMAP and Python
<p>I am trying to authenticate and access my emails on a Godaddy/Office365 account using the imapclient library in Python. Despite providing the correct email address and password, I am unable to log in and keep getting a LoginError (LOGIN failed).</p> <pre><code>Here's the code I'm using: import imapclient import pyzmail email = &quot;myemail@example.com&quot; password = &quot;mypassword&quot; # Connect to the server imap = imapclient.IMAPClient('outlook.office365.com', ssl=True) imap.login(email, password) # Select the mailbox (folder) you want to read emails from, usually &quot;INBOX&quot; imap.select_folder('INBOX') # Search for all unseen emails uids = imap.search(['UNSEEN']) # Initialize email content email_content = '' # Iterate through the emails for uid in uids: raw_message = imap.fetch([uid], ['BODY[]', 'FLAGS'])[uid][b'BODY[]'] message = pyzmail.PyzMessage.factory(raw_message) # Extract the email content if message.text_part: email_content = message.text_part.get_payload().decode(message.text_part.charset) print(email_content) imap.logout() </code></pre> <p>Here are the server settings I'm using for IMAP, as provided by Godaddy:</p> <pre><code>Server name: outlook.office365.com Port: 993 Encryption method: TLS </code></pre> <p>I have double-checked that the email address and password are correct and can log in to the webmail without any issues.</p> <p>Is there anything I am missing or doing wrong? How can I successfully authenticate and access my emails using IMAP and Python with a Godaddy/Office365 account?</p>
<python><imap>
2023-05-04 16:53:15
0
1,951
Mike C.
76,175,525
2,932,907
How to get the value from a nested dictionary
<p>Given the following nested dictionary I get back from the RIPE NCC RESTful API when creating a Person Object:</p> <pre><code>dict = { 'link': { 'type': 'locator', 'href': 'http://rest-test.db.ripe.net/test/person?dry-run=True' }, 'objects': { 'object': [{ 'type': 'person', 'link': { 'type': 'locator', 'href': 'https://rest-test.db.ripe.net/test/person/MT5-TEST' }, 'source': { 'id': 'test' }, 'primary-key': { 'attribute': [{ 'name': 'nic-hdl', 'value': 'MT5-TEST' }]}, 'attributes': { 'attribute': [ {'name': 'person', 'value': 'Person test'}, {'name': 'address', 'value': 'Wegstraat 1'}, {'name': 'phone', 'value': '+31612345678'}, {'name': 'e-mail', 'value': 'person@company.com'}, {'link': { 'type': 'locator', 'href': 'https://rest-test.db.ripe.net/test/mntner/TEST-DBM-MNT'}, 'name': 'mnt-by', 'value': 'TEST-DBM-MNT', 'referenced-type': 'mntner'}, {'name': 'nic-hdl', 'value': 'PT5-TEST'}, {'name': 'created', 'value': '2023-05-04T13:20:57Z'}, {'name': 'last-modified', 'value': '2023-05-04T13:20:57Z'}, {'name': 'source', 'value': 'TEST'} ] } } ] }, 'errormessages': { 'errormessage': [{ 'severity': 'Info', 'text': 'Dry-run performed, no changes to the database have been made'} ] }, 'terms-and-conditions': { 'type': 'locator', 'href': 'http://www.ripe.net/db/support/db-terms-conditions.pdf' } } </code></pre> <p>To get, for example, the 'nic-hdl' value I would do something like this:</p> <pre><code>nic-hdl = ['objects']['object'][0]['primary-key']['attribute'][0]['value'] </code></pre> <p>Which is completely fine. However, it doesn't seem to be very clean.</p> <p>Same goes for the attributes, attribute part which contains a few more name, value pairs. What would be the best and cleanest way to get those values by its corresponding keys?</p>
<python><dictionary><nested>
2023-05-04 16:44:26
2
503
Beeelze
76,175,490
4,035,257
Filtering pandas DataFrame based on row non-zero values
<p>I have a pandas df like the following:</p> <pre><code>date X1 X2 X3 Y user 1/1 0 3 34 5 a 2/1 0 7 65 5 a 3/1 0 13 0 5 a 4/1 25 4 65 0 a 5/1 35 0 0 5 a 6/1 4 6 9 0 a 7/1 0 0 0 5 a 1/1 0 0 34 5 b 2/1 0 7 65 5 b 3/1 0 13 0 5 b 4/1 0 4 65 5 b 5/1 35 0 0 5 b 6/1 4 6 9 0 b 7/1 0 0 0 0 b </code></pre> <p>How can I use select rows per <code>user</code> only after all <code>Xs</code> start appearing non-zero values, using <code>groupby()</code> . In that case, select only rows for <code>dates</code> <code>4/1</code>,<code>5/1</code>,<code>6/1</code>,<code>7/1</code> for user <code>a</code> and <code>dates</code> <code>6/1</code>,<code>7/1</code> for user <code>b</code>. Thank you.</p>
<python><pandas><group-by>
2023-05-04 16:40:22
2
362
Telis
76,175,349
8,849,755
Pandas dataframe with lists to dataframe with numbers
<p>I have a pandas dataframe that looks like this:</p> <pre><code> Amplitude (V) Time (s) type board input A [-6.475490109651011e-05, -6.459461668784777e-0... [-1.8824999999999998e-08, -1.8775e-08, -1.8724... output A [-0.00043152308693648455, 0.0, 8.6307784645449... [-2.3625e-08, -2.3574999999999998e-08, -2.3525... B [-0.00038705227067327414, -0.00111277816346858... [-6.6025e-08, -6.597500000000001e-08, -6.5925e... </code></pre> <p>I want to &quot;explode&quot; each of these lists into numbers to get a &quot;normal dataframe&quot;. How can I do this? Preferably without loosing information about the index.</p>
<python><pandas>
2023-05-04 16:21:58
1
3,245
user171780
76,175,337
1,114,448
How to insert a level in-between 2 levels in a multi-index?
<p>E.g.</p> <p><code>df = pd.DataFrame({('X','A'): ['a','a','b'], ('Y','A'): [1,2,3], ('X','B'): [4,6,5], ('Y','B'): [9,4,2]})</code></p> <pre><code> X Y X Y A A B B ----------------- 0 a 1 4 9 1 a 2 6 4 2 b 3 5 2 </code></pre> <p>Then I want to insert two redundant levels, one is the before the first level and the other is between first and second levels, such that</p> <pre><code>L1 V L2 X Y X Y L3 W L4 A A B B ----------------- 0 a 1 4 9 1 a 2 6 4 2 b 3 5 2 </code></pre> <p>Using <code>pd.concat</code> seems only inserting levels to the outermost level:</p> <pre><code>pd.concat({('V', 'W'): df}, names=['L1', 'L2', 'L3', 'L4'], axis=1) </code></pre> <pre><code>L1 V L2 W L3 X Y X Y L4 A A B B ----------------- 0 a 1 4 9 1 a 2 6 4 2 b 3 5 2 </code></pre> <p>, which is not what I want.</p>
<python><pandas>
2023-05-04 16:20:30
2
2,007
Gary
76,175,240
6,712,951
import class from relative path in python 3.x
<p>Based on the below project structure how do I import Foo class in example1.py file?</p> <pre><code>question-project - src - models # this folder is deemed package so init file is prsent - __init__.py # Foo.py class contains Foo class that could be used by scripts located inside examples and releases folders - Foo.py - examples # this folder only contains script files and init file is not present - example1.py - example2.py - releases # this folder only contains script files and init file is not present - 0.0.1.py - 0.0.2.py - tests - Test_Foo.py </code></pre> <p>Foo.py</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self): pass def hello(self): print(&quot;Foo hello!&quot;) </code></pre> <p>example1.py</p> <pre class="lang-py prettyprint-override"><code>from ..models.Foo import Foo x = Foo() x.hello() </code></pre> <p>The error I am getting when trying to run example1.py file:</p> <blockquote> <p>Traceback (most recent call last):</p> <p>File &quot;/question-project/src/examples/example1.py&quot;, line 1, in </p> <p>from ..models.Foo import Foo</p> <p>ImportError: attempted relative import with no known parent package</p> </blockquote>
<python><python-3.x><import><importerror>
2023-05-04 16:10:07
0
459
Martin
76,175,197
8,107,567
Pass parameter to another intent without followup question
<p>I am using dialogflow with python (library: flask_dialogflow) backend to create a chatbot. So I have an intent drive.read with a followup intent drive.read.yes containing a required paramater: &quot;path&quot;. I have another intent transform.word2pdf with a followup intent transform.word2pdf.yes also containing a required paramater: &quot;word_path&quot;. The conversation starts from drive.read and then drive.read.yes asks to fill the parameter &quot;path&quot; from the user. After filling that up, transform.word2pdf intent triggers and then transform.word2pdf.yes according to the flow of the conversation.</p> <p>Now I want to pass the &quot;path&quot; value from drive.read.yes to the transform.word2pdf.yes parameter &quot;word_path&quot; if it exists. Also, if it exists, then the followup prompt about the path filling should not be asked by the bot since the word_path has already been filled up from the previous conversation in the different intent.</p> <p>How should I pass the path parameter to another intent without followup prompt? Should I enter something in the intent's default value? If so, what exactly will be format? like #xxx.path? Right now, nothing exists in the value or default value of any intent and I didnt add any extra input/output contexts other than already created by default.</p>
<python><flask><dialogflow-es>
2023-05-04 16:05:44
1
339
Mujtaba Faizi
76,175,163
7,447,976
Creating time schedules using available and nonavailable times stored in a dictionary
<p>I have a list of entities for each of which there are available and nonavailable times stored in multiple lists. It is important to note that nonavailable times dominates the available times. For example, suppose an entity is available between [1,5], [8,15] and is non-available between [2,4], [7, 9], and [11,13]. Then, I can use this entity in [1,2], [4,5], [9,11], and [13, 15]. As a MWE, I am giving a single entity.</p> <pre><code>time_dict = { 'available': [[7590, 8280], [9030, 9720], [10470, 11160], [11910, 12600], [20550, 21240], [21990, 22680], [23430, 24120], [24870, 25560], [26310, 27000], [33510, 34200], [34950, 35640], [36390, 37080], [37830, 38520], [39270, 39960]], 'not_available': [[7740, 7755], [7920, 7950], [8100, 8115], [9180, 9195], [9360, 9390], [9540, 9555], [10620, 10635], [10800, 10830], [10980, 10995], [12060, 12075], [12240, 12270], [12420, 12435], [20700, 20715], [20880, 20910], [21060, 21075], [22140, 22155], [22320, 22350], [22500, 22515], [23580, 23595], [23760, 23790], [23940, 23955], [25020, 25035], [25200, 25230], [25380, 25395], [26460, 26475], [26640, 26670], [26820, 26835], [33660, 33675], [33840, 33870], [34020, 34035], [35100, 35115], [35280, 35310], [35460, 35475], [36540, 36555], [36720, 36750], [36900, 36915], [37980, 37995], [38160, 38190], [38340, 38355], [39420, 39435], [39600, 39630], [39780, 39795]] } </code></pre> <p>Now, I create available slots using a for loop.</p> <pre><code>available_slots = [] for available in time_dict['available']: start, end = available not_available = [d for d in time_dict['not_available'] if d[0] &lt; end and d[1] &gt; start] if not not_available: available_slots.append(available) else: prev_end = start for i in not_available: if i[0] &gt; prev_end: available_slots.append([prev_end, i[0]]) prev_end = i[1] if prev_end &lt; end: available_slots.append([prev_end, end]) print(&quot;Available Slots:&quot;, available_slots) Available Slots: [[7590, 7740], [7755, 7920], [7950, 8100], [8115, 8280], [9030, 9180], [9195, 9360], [9390, 9540], [9555, 9720], [10470, 10620], [10635, 10800], [10830, 10980], [10995, 11160], [11910, 12060], [12075, 12240], [12270, 12420], [12435, 12600], [20550, 20700], [20715, 20880], [20910, 21060], [21075, 21240], [21990, 22140], [22155, 22320], [22350, 22500], [22515, 22680], [23430, 23580], [23595, 23760], [23790, 23940], [23955, 24120], [24870, 25020], [25035, 25200], [25230, 25380], [25395, 25560], [26310, 26460], [26475, 26640], [26670, 26820], [26835, 27000], [33510, 33660], [33675, 33840], [33870, 34020], [34035, 34200], [34950, 35100], [35115, 35280], [35310, 35460], [35475, 35640], [36390, 36540], [36555, 36720], [36750, 36900], [36915, 37080], [37830, 37980], [37995, 38160], [38190, 38340], [38355, 38520], [39270, 39420], [39435, 39600], [39630, 39780], [39795, 39960]] </code></pre> <p>This works for me, but I don't find this effective. Since I will repeat this for each key in <code>time_dict</code>, I'll have a nested for loop. Is there a better way to conduct this operation?</p>
<python><list><performance><dictionary>
2023-05-04 16:03:38
1
662
sergey_208
76,175,104
4,918,632
Method to change xticks for Bokeh bar plot
<p>I would like to change the xticks and xiticklables in hv.Barplot. However, currently I could only achieve that by adding hv.Curve(bar).options(xticks = xtic_labels). This will be plotted as curve not bar plot.</p> <p>Can anyone give suggestion that how I could define custom xticklables for barplot?</p> <pre><code>HEIGHT = 300 xtick_ = np.arange(100,370,50) xtick_labels = [(float(mz), str(mz)) for mz in xtick_] hv.Curve(tof_pr.hvplot.bar(x='mz', y='factor_1',)).\ options(xticks=xtick_labels) pr_curve = tof_pr.hvplot.bar(x='mz', y='factor_1') # I had to add this sentence pr_curve = hv.Curve(pr_curve).options( xticks = xtick_labels, width =800) # show pr_curve </code></pre>
<python><visualization><bokeh><holoviz>
2023-05-04 15:56:29
0
3,852
Han Zhengzu
76,175,067
1,711,271
Find second occurence from the right of a character, without iteration
<p>I have a Python string:</p> <pre><code>test = 'blablabla_foo_bar_baz' </code></pre> <p>I want the negative index of the second occurence <strong>from the right</strong> of <code>_</code>: in this case, this would be -8, but even 13, the positive index, would be an acceptable answer, because it's not hard to compute the negative index from the positive index. How can I do that? A GPT-4 solution uses <code>rfind</code> + iteration, but I was wondering if it could be possible to solve it in one shot, without iteration. Of course the answer should handle strings of arbitrary length, and it should match an arbitrary character (<code>_</code> in this example).</p> <p>EDIT: I need the negative index of the second occurence from the right of <code>_</code>, because I want to strip away <code>_bar_baz</code>, which can be done easily with</p> <pre><code>test_prefix = test[:-8] </code></pre> <p>Thus, alternative solutions that allow me to get <code>test_prefix</code> without first finding the negative index of the second occurence from the right of <code>_</code>, would also be ok. Using the length of the substring <code>_bar_baz</code> (8 in this case) is <strong>not</strong> ok because I don't know the length of this substring. Instead computing the length of the substring from the second occurrence from the right of <code>_</code>, until the end of the string, would be ok. It doesn't seem any different from computing he negative index of the second occurence from the right of <code>_</code>, though.</p> <p>EDIT2: a commenter says that it's impossible to solve this without iteration. I'm not sure they're right, but I'll show a (GPT-4-generated) example of what I <em>don't</em> want:</p> <pre><code>def find_second_last_occurrence(text, char='_'): last_occurrence = text.rfind(char) second_last_occurrence = text.rfind(char, 0, last_occurrence) return second_last_occurrence string = &quot;blablabla_foo_bar_baz&quot; result = find_second_last_occurrence(string, '_') print(result) # Output: 20 </code></pre> <p>In this case, we had to call <code>rfind</code> twice. Is this really necessary? I don't think all possible solutions (including those based on <code>re</code>, <code>rsplit</code>, etc.) require calling a builtin twice. <strong>Note</strong> that I didn't include any code checking for the occurence of <code>_</code> in the string, because I know for sure that my input string contains at least two occurences of <code>_</code>.</p>
<python><string><indexing>
2023-05-04 15:52:45
5
5,726
DeltaIV
76,175,040
10,667,216
Pylint not recognizing packages installed in Docker container
<p>I'm running a Python project in a Docker container, and I'm using Pylint for linting. However, Pylint doesn't seem to recognize packages that are installed in my Docker container. I know the packages are installed, because I can import them and use them without any issues. But when I run Pylint, I get import errors for those packages.</p> <p>I'm using VS Code as my editor, and I have the Python extension installed. I've set the Python interpreter in my VS Code workspace to point to the Python executable inside the Docker container, but that doesn't seem to help. Here's my Dockerfile:</p> <pre><code># pull official base image, based on Debian FROM python:3.9.12 # set work directory WORKDIR /home/culturecrawler # set environment variables ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 ENV PYTHONPATH=/home/culturecrawler # install system dependencies RUN apt-get update &amp;&amp; apt-get install -y \ llvm-11 \ libpq-dev \ build-essential \ libldap2-dev \ libsasl2-dev \ libblas-dev \ liblapack-dev \ libatlas-base-dev \ libopenblas-dev \ libfreetype6-dev \ libpng-dev \ nano # # install dependencies RUN pip install --upgrade pip COPY ./requirements.txt . RUN pip install -r requirements.txt RUN pip install pyldap RUN pip install django-auth-ldap # to get of the errors in RUN pip install Flask==2.0.3 \ &amp;&amp; pip install Werkzeug==2.0.3 # copy project COPY . . # Set default command to launch an interactive shell RUN ln -sf /bin/bash /bin/sh </code></pre> <p>And here's the relevant part of my VS Code settings:</p> <pre><code> // // Python &quot;[python]&quot;: { &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, &quot;editor.codeActionsOnSave&quot;: { &quot;source.organizeImports&quot;: true } }, &quot;python.defaultInterpreterPath&quot;: &quot;/usr/local/bin/python&quot;, &quot;python.linting.pylintPath&quot;: &quot;/usr/local/bin/pylint&quot;, &quot;python.formatting.provider&quot;: &quot;none&quot;, &quot;isort.args&quot;: [&quot;--profile&quot;, &quot;black&quot;], &quot;python.linting.enabled&quot;: true, &quot;python.linting.flake8Enabled&quot;: true, &quot;python.linting.pylintEnabled&quot;: true, &quot;python.linting.pylintArgs&quot;: [ &quot;--max-line-length=79&quot;, &quot;--disable=missing-module-docstring,too-few-public-methods&quot; ], </code></pre> <p>How can I get Pylint to recognize the packages installed in my Docker container?</p>
<python><docker><visual-studio-code><pylint>
2023-05-04 15:48:58
1
483
Davood
76,174,947
2,587,422
Subset of dataset based on variable start and end range
<p>I have a very large dataset with the following columns: <code>id</code>, <code>date</code>, <code>val</code>. The <code>(id, date)</code> pair constitutes the unique key of the dataset. I have some thousands of ids, and for each of them I have data relative some thousands of dates.</p> <p>I also have a support dataset, with the following columns: <code>id</code>, <code>start_date</code>, <code>end_date</code>. For each <code>id</code>, it tells what are the start date and end date we are interested in. Note that different ids will have different start and end dates.</p> <p>I would like to retrieve a subset of the first dataset, which for each id only contains the dates in the range given by the support dataset.</p> <p>My current solution in Python looks like this:</p> <pre class="lang-py prettyprint-override"><code>ids = set(df[&quot;id&quot;].to_list()) xs: List[pd.DataFrame] = [] for x in sids: start_end = date_ranges[date_ranges[&quot;id&quot;] == x].iloc[0] first_date = start_end[&quot;start_date&quot;] last_date = start_end[&quot;end_date&quot;] new_df = df[ df[&quot;id&quot;] == x &amp; (df[&quot;date&quot;] &gt;= first_date) &amp; (df[&quot;date&quot;] &lt;= last_date) ] xs.append(new_df) res = pd.concat(res) </code></pre> <p>However, having to do this for thousands of ids, it takes forever (almost a half hour) to generate the final DataFrame. I've been trying to figure out a faster way to do this, perhaps using <code>.groupby()</code>, but so far I couldn't come up with much. Can somebody help?</p>
<python><pandas><dataframe><group-by>
2023-05-04 15:37:29
1
315
Luigi D.
76,174,940
571,648
Unhandled Promise Rejection: SyntaxError: The string did not match the expected pattern
<p>I know there are some similar questions on this site but I can't seem to figure what's wrong here. In my small app from a html file I call a python file that returns a json.</p> <p>html file:</p> <pre><code>fetch('readjobs.py') .then(data =&gt; data.json()) .then(data =&gt; { data.forEach(job =&gt; { const row = document.createElement('tr'); row.innerHTML = ` &lt;td&gt;${job[1]}&lt;/td&gt; `; document.querySelector('#jobTable tbody').appendChild(row); }); }); </code></pre> <p>python file:</p> <pre><code>import mysql.connector import json mydb = mysql.connector.connect( host=&quot;localhost&quot;, user=&quot;root&quot;, password=&quot;sdfsgfgd&quot;, database=&quot;mydb&quot; ) mycursor = mydb.cursor() mycursor.execute(&quot;SELECT LINK, JOB_TITLE FROM jobs&quot;) data = mycursor.fetchall() mycursor.close() mydb.close() print(&quot;Content-Type: application/json&quot;) print(&quot;Access-Control-Allow-Origin: *&quot;) print() print(json.dumps(data)) </code></pre> <p>An example of the dump:</p> <pre><code>[[&quot;link1&quot;, &quot;Wizard&quot;], [&quot;link2&quot;, &quot;Superman&quot;]] </code></pre> <p>Since this is a valid json, I have no idea what's wrong.</p> <p>In Safari this is the error message:</p> <blockquote> <p>[Error] Unhandled Promise Rejection: SyntaxError: The string did not match the expected pattern. promiseEmptyOnRejected promiseReactionJob</p> </blockquote> <p>In Chrome this is it:</p> <blockquote> <p>Uncaught (in promise) SyntaxError: Unexpected token 'i', &quot;import mys&quot;... is not valid JSON</p> </blockquote>
<python>
2023-05-04 15:36:45
0
11,859
erdomester
76,174,805
8,321,207
Understanding what happens after interpolation within ECG beat in python
<p>I did an experiment with 8 different ecg beats from 8 increasing HR from 55 to 125. If I set up a stacked line plot in a way where the lowest HR beat is at the bottom, and as we go up, the HR increases. for the first iteration, I selected a fixed number of samples before and after the r peak and plotted them. no interpolation is done here as the length of each ecG beat is fixed. Now for the second iteration, instead of taking fixed lengths, I take a proportion of some samples before and after the r peak based on the relative rr distance (which represents the r peak to r peak distance). so here, based on rr, the length of each occupied beat will be different. now I interpolate and create stacked line plots as in the first iteration.</p> <p>what I observe is in the first iteration, if we draw a line from the lowest HR to the highest HR, the line slants towards R peak gradually from floor to top. however, in iteration 2, it goes the other way. why is this so? The two figures are posted here. One left is one before any interpolation or stretching. the second one is after. The slant of the line in both should be in the same direction, but it is not. I want to understand why this is happening.</p> <p><a href="https://i.sstatic.net/w2uLu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w2uLu.png" alt="enter image description here" /></a></p>
<python><signal-processing><interpolation><numpy-ndarray>
2023-05-04 15:20:09
0
375
Kathan Vyas
76,174,725
10,901,843
How do I execute this complicated Pandas merge of two dataframes? Best described through an example, detailed example included
<p>Here are two datataframes that I'd like to merge together, along with the resulting data frame.</p> <pre><code>df1 = pd.DataFrame({'Show':['NBA Basketball','NHL Hockey'],'Start_Time':['19:00','15:00'],'End_Time':['20:00','15:30']}) df2 = pd.DataFrame({'Show':['NBA Basketball','NBA Basketball','NBA Basketball','NBA Basketball','NBA_Basketball','NHL Hockey','NHL Hockey','NHL Hockey'],'Half_Hour_Interval_Start':['19:00','19:30','20:00','20:30','21:00','15:00','15:30','16:00'],'Viewers':[1000,2000,1500,1000,3000,2000,4000,5000]}) result_df = pd.DataFrame({'Show':['NBA Basketball','NHL Hockey'],'Start_Time':['19:00','15:00'],'End_Time':['20:00','15:30'],'Viewers':[3000,2000]}) </code></pre> <p><a href="https://i.sstatic.net/ubf4j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ubf4j.png" alt="enter image description here" /></a></p> <p>I want to essentially compute the sum of viewers in df2 only where the half hour interval is between the start time and end time of df1. Bit of a complicated merge here. Notice how in the result_df, the viewers column is only a subset of the total viewers from df2, depending on the official start time and end time of the show in df1.</p>
<python><pandas><dataframe><group-by><merge>
2023-05-04 15:10:13
1
407
AI92
76,174,698
1,000,343
Combine 2 Different FacetGrid Plots into the Same Plot
<p>How can I combine two 2 seaborn FacetGrid plots into the same plot side by side?</p> <p>I can combine plots with ease if the plots are not already sublpots <a href="https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot/41384984#41384984">per this question</a>. But how can I get the 2 trellised plots next to each other something like the image below for plots <code>g1</code> and <code>g2</code> in the MWE. In the R program I can do this with <a href="https://stackoverflow.com/a/72145522/1000343">ggplot2 plots using tools like patchwork</a>.</p> <pre><code>import seaborn as sns from matplotlib import pyplot as plt tips = sns.load_dataset(&quot;tips&quot;) g1 = sns.FacetGrid(tips, col=&quot;time&quot;, row=&quot;sex&quot;) g1.map(sns.scatterplot, &quot;total_bill&quot;, &quot;tip&quot;) plt.show() g2 = sns.FacetGrid(tips, col=&quot;size&quot;, height=2.5, col_wrap=3) g2.map(sns.histplot, &quot;total_bill&quot;) plt.show() </code></pre> <p><a href="https://i.sstatic.net/p9Oh1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p9Oh1.jpg" alt="enter image description here" /></a></p>
<python><matplotlib><seaborn>
2023-05-04 15:07:17
0
110,512
Tyler Rinker
76,174,583
6,855,636
BiLSTM-CRF for text classification in PYTORCH
<p>I have been having trouble with the bi-lastm-cfr model. I tried several fixes for different bugs but now i am stuck.</p> <pre><code>from transformers import AutoTokenizer, AutoModel import torch.nn as nn import torch import numpy as np from torchcrf import CRF class CRFBiLSTMModel(nn.Module): def __init__(self, n_class, bert): super(CRFBiLSTMModel, self).__init__() # Load the pre-trained tokenizer and model # self.tokenizer = AutoTokenizer.from_pretrained(model_name) # self.model = AutoModel.from_pretrained(model_name) self.bert = bert self.n_class = n_class self.dropout_rate = 0.2 self.lstm_hidden_size = self.bert.config.hidden_size # Add a Bi-LSTM layer self.lstm = nn.LSTM( input_size=self.bert.config.hidden_size, hidden_size=self.lstm_hidden_size, num_layers=2, batch_first=True, # dropout=0.3, bidirectional=True, ) # self.lstm = nn.LSTM(self.lstm_hidden_size, # self.lstm_hidden_size, bidirectional=True) # Add a linear layer/classifier # self.linear = nn.Linear(256, n_class) self.linear = nn.Linear(self.lstm_hidden_size * 2, self.n_class, bias=True) # self.dropout = nn.Dropout(p=self.dropout_rate) # Add a CRF layer self.crf = CRF(n_class) def forward(self, batch): # Get the model embeddings b_input_ids = batch[0] b_input_mask = batch[1] outputs = self.bert(b_input_ids, attention_mask=b_input_mask) embeddings = outputs.last_hidden_state # Add a mask to the input sequence mask = b_input_mask.unsqueeze(-1).repeat(1, 1, self.lstm_hidden_size * 2) # Convert mask to binary mask mask = mask.byte() # print(&quot;debug&quot;, mask[0].shape) # # Check if the first timestep has a zero mask value, and if so, set it to 1 # if not (mask[0].all()): # mask.fill_(1) # Pass the embeddings through the Bi-LSTM layer lstm_outputs, _ = self.lstm(embeddings) # Apply the mask to the LSTM outputs masked_lstm_outputs = lstm_outputs.masked_fill(mask == 1, float('-inf')) # Pass the masked LSTM outputs through the linear layer logits = self.linear(masked_lstm_outputs) logits = torch.nan_to_num(logits, nan=1.0) # print(&quot;debug logits&quot;, logits) # Apply the CRF layer try: predicted_labels = self.crf.decode(logits, b_input_mask) print(&quot;debug predicted_labels 1&quot;, predicted_labels) return (predicted_labels,) except ValueError as error: if str(error) == &quot;mask of the first timestep must all be on&quot;: print(&quot;hello&quot;) b_input_mask[0].fill_(1) predicted_labels = self.crf.decode(logits, b_input_mask.bool()) # predicted_labels = np.array([np.array(x) for x in predicted_labels]) print(&quot;debug predicted_labelsv2&quot;, predicted_labels) self.labels = predicted_labels return (predicted_labels,) else: print(&quot;debug: else crf_bilstm.py line 80&quot;) exit() </code></pre> <pre><code>class Train: def format_time(self, elapsed): ''' Takes a time in seconds and returns a string hh:mm:ss ''' # Round to the nearest second. elapsed_rounded = int(round((elapsed))) # Format as hh:mm:ss return str(datetime.timedelta(seconds=elapsed_rounded)) def fit(self, model, train_dataloader, validation_dataloader, epochs, device, optimizer, scheduler, criterion, writer, print_each=40): # Set the seed value all ovr the place to make this reproducible. seed_val = 2 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) model_save_path = 'tmp' loss_values = [] hist_valid_scores = [] # For each epoch... for epoch_i in range(0, epochs): logs = {} # ======================================== # Training # ======================================== # Perform one full pass over the training set. print(&quot;&quot;) print( '======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_loss = 0 total_accuracy = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % print_each == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = self.format_time(time.time() - t0) # Report progress. print(' Batch {:&gt;5,} of {:&gt;5,}. Elapsed: {:}.'.format( step, len(train_dataloader), elapsed)) # move batch data to device (cpu or gpu) batch = tuple(t.to(device) for t in batch) # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels model.zero_grad() outputs = model(batch) # The call to `model` always returns a tuple, so we need to pull the # loss value out of the tuple. logits = outputs[0] label_ids = batch[-1] # print(logits) if isinstance(logits, list): logits = torch.FloatTensor(logits[0]) label_ids = np.array([np.array(x) for x in label_ids]) label_ids = torch.FloatTensor(label_ids)#stoped here because criteroin expect float but this is long print(&quot;kjgkjgjg&quot;, len(label_ids), logits.view(-1), label_ids[-1]) loss = criterion(logits.view(-1), label_ids.view(-1)) # logits = np.argmax(logits, axis=1) # label_ids = np.argmax(label_ids, axis=1) else: loss = criterion(logits.view(-1, model.n_class), label_ids.view(-1)) # Move logits back to cpu for metrics calculations logits = logits.detach().cpu().numpy() label_ids = label_ids.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences. print(logits, label_ids) current_accuracy = flat_accuracy(logits, label_ids) total_accuracy += current_accuracy # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_loss += loss.item() writer.add_scalar('training loss', loss.item(), epoch_i * len(train_dataloader)+step) writer.add_scalar('training Accuracy', current_accuracy, epoch_i * len(train_dataloader)+step) # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the &quot;exploding gradients&quot; problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the &quot;update rule&quot;--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over the training data. avg_train_loss = total_loss / len(train_dataloader) avg_train_accuracy = total_accuracy / len(train_dataloader) # Store the loss value for plotting the learning curve. loss_values.append(avg_train_loss) logs[&quot;log loss&quot;] = avg_train_loss logs[&quot;accuracy&quot;] = avg_train_accuracy print(&quot;&quot;) print(&quot; Average training loss: {0:.2f}&quot;.format(avg_train_loss)) print(&quot; Training epcoh took: {:}&quot;.format( self.format_time(time.time() - t0))) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. print(&quot;&quot;) print(&quot;Running Validation...&quot;) t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables eval_loss, eval_accuracy, eval_f1, eval_recall, eval_precesion = 0, 0, 0, 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 validation_loss = 0 # Evaluate data for one epoch for step_valid, batch in enumerate(validation_dataloader): # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader # Telling the model not to compute or store gradients, saving memory and # speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions. # This will return the logits rather than the loss because we have # not provided labels. # token_type_ids is the same as the &quot;segment ids&quot;, which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: model.eval() # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(batch) # Get the &quot;logits&quot; output by the model. The &quot;logits&quot; are the output # values prior to applying an activation function like the softmax. logits = outputs[0] label_ids = batch[-1] validation_loss += criterion(logits.view(-1, model.n_class), label_ids.view(-1)) # print(logits) # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = label_ids.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences. tmp_eval_accuracy = flat_accuracy(logits, label_ids) tmp_eval_f1 = flat_f1(logits, label_ids) tmp_eval_recall = flat_recall(logits, label_ids) tmp_eval_precision = flat_precision(logits, label_ids) # Accumulate the total scores. eval_accuracy += tmp_eval_accuracy eval_f1 += tmp_eval_f1 eval_recall += tmp_eval_recall eval_precesion += tmp_eval_precision # Track the number of batches nb_eval_steps += 1 validation_loss = validation_loss/len(validation_dataloader) writer.add_scalar('validation Accuracy', tmp_eval_accuracy, epoch_i * len(validation_dataloader)+step_valid) writer.add_scalar('validation F1', tmp_eval_f1, epoch_i * len(validation_dataloader)+step_valid) writer.add_scalar('validation recall', tmp_eval_recall, epoch_i * len(validation_dataloader)+step_valid) writer.add_scalar('validation precesion', tmp_eval_precision, epoch_i * len(validation_dataloader)+step_valid) is_better = len(hist_valid_scores) == 0 or validation_loss &lt; min( hist_valid_scores) hist_valid_scores.append(validation_loss) if is_better: patience = 0 print( 'save currently the best model to [%s]' % model_save_path, file=sys.stderr) model.save(model_save_path) # also save the optimizers' state torch.save(optimizer.state_dict(), model_save_path + '.optim') elif patience &lt; 5: patience += 1 '''print('hit patience %d' % patience, file=sys.stderr) if patience == int(5): # decay lr, and restore from previously best checkpoint print('load previously best model and decay learning rate to ', file=sys.stderr) # load model params = torch.load(model_save_path, map_location=lambda storage, loc: storage) model.load_state_dict(params['state_dict']) model = model.to(torch.device(&quot;cuda&quot;)) print('restore parameters of the optimizers', file=sys.stderr) optimizer.load_state_dict(torch.load(model_save_path + '.optim')) # reset patience patience = 0''' # Report the final accuracy for this validation run. print(&quot; Accuracy: {0:.2f}&quot;.format(eval_accuracy/nb_eval_steps)) print(&quot; F1: {0:.2f}&quot;.format(eval_f1/nb_eval_steps)) print(&quot; Recall: {0:.2f}&quot;.format(eval_recall/nb_eval_steps)) print(&quot; Precision: {0:.2f}&quot;.format(eval_precesion/nb_eval_steps)) print(&quot; Validation took: {:}&quot;.format( self.format_time(time.time() - t0))) return (eval_accuracy/nb_eval_steps, eval_f1/nb_eval_steps, eval_recall/nb_eval_steps, eval_precesion/nb_eval_steps,) </code></pre> <pre><code>import sys sys.path.append('../library') import warnings warnings.filterwarnings('ignore') import gc from crf_bilstm import CRFBiLSTMModel from transformers import BertModel,BertTokenizer,FlaubertTokenizer, FlaubertModel,AutoTokenizer, BertForSequenceClassification , FlaubertForSequenceClassification from transformers.modeling_utils import SequenceSummary from torch import nn import torch.nn.functional as F from torch.nn.utils.rnn import pack_padded_sequence from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler import re from nlp.models import BasicBertForClassification from nlp.training import Train from nlp.preprocessing import TextPreprocessing from nlp.feature_extraction import MetaFeaturesExtraction from nlp.data_visualisation import WordCloudMaker # needed import torch from tensorboardX import SummaryWriter from transformers import AutoModel import pandas as pd from sklearn.model_selection import train_test_split from nlp.feature_extraction import BertInput from transformers import AdamW,get_linear_schedule_with_warmup import numpy as np from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score from focalLoss import FocalLoss2 from datetime import datetime ##################################### Functions ########################################################### def getLoss(name, normedWeights, dic_cat_labels): if name == &quot;default&quot;: return None elif name == &quot;crossentropy&quot;: return nn.CrossEntropyLoss() elif name == &quot;crossentropyweighted&quot;: return nn.CrossEntropyLoss(weight=normedWeights) elif name == &quot;focalloss&quot;: return FocalLoss2(gamma=5.,alpha=0.25, num_class=len(dic_cat_labels)) def get_sentences_labels(df,text_column='text_clean',label_column='CAT',cat_labels=None): dic_cat_labels = cat_labels if cat_labels is not None else {x:value for x,value in enumerate(df[label_column].unique())} dic_labels_to_cat = {value:x for x,value in dic_cat_labels.items() } #df[text_column]= df[text_column].map(lambda text_clean : re.sub('[&quot;#$%&amp;()*+,-./:;&lt;=&gt;@[\]^_`{|}~\n\t’\']', '', text_clean)) df2 = df[label_column].map(dic_labels_to_cat) sentences = df[text_column].values labels = df2.values.astype(int) return sentences,labels,dic_cat_labels def get_label_callback(dataset,idx): return dataset[idx][3].item() #################################### Inline code ########################################################## def main (file, separator, col_text, model_name, test, col_label, loss, finetune, no_other=False): writer = SummaryWriter('runs/test') print(&quot;Begin: Current Time =&quot;, datetime.now().strftime(&quot;%H:%M:%S %d/%m/%Y&quot;)) gc.collect() print(&quot;finetune:&quot;, finetune, &quot;start test with categories = &quot;, col_label, &quot; using loss = &quot;, loss, &quot; and sampling = &quot;, test) df = pd.read_csv(file,sep=separator, quotechar='&quot;', dtype='str') # df = pd.read_csv(file,sep=separator,lineterminator=&quot;\n&quot;) print(f&quot;Import de {file} : \nNombres d'instance : {len(df)} \n&quot;) print(df[col_label].value_counts(), &quot; \n&quot;) df.drop(df[df[col_label] ==&quot;NotAnnotated&quot;].index, inplace=True) df.drop(df[df[col_label] ==&quot;&lt;TOCOMPLETE&gt;&quot;].index, inplace=True) if col_label==&quot;SA2&quot; and no_other: df.drop(df[df[col_label] ==&quot;Autre&quot;].index, inplace=True) print(f&quot;AFTER DROP: Nombres d'instance : {len(df)} \n&quot;) # print(df.head(2)) df = df[0:50] text_preprocessing = TextPreprocessing(df,col_text) text_preprocessing.fit_transform() df_train , df_test = train_test_split(df,random_state=1, test_size=0.2) sentences_train,labels_train,dic_cat_labels=get_sentences_labels(df_train,text_column='processed_text',label_column=col_label) n_class = len(dic_cat_labels) sentences_test,labels_test,dic_cat_labels=get_sentences_labels(df_test,text_column='processed_text',label_column=col_label,cat_labels=dic_cat_labels) n_class = n_class if n_class &gt; len(dic_cat_labels) else len(dic_cat_labels) print(&quot;Classes : &quot; ) print(dic_cat_labels) bert_input= BertInput(AutoTokenizer.from_pretrained(model_name)) X_train = bert_input.fit_transform(sentences_train) X_test = bert_input.fit_transform(sentences_test) print(dic_cat_labels) # Use 90% for training and 10% for validation. train_inputs, validation_inputs, train_labels, validation_labels,train_masks,validation_masks = train_test_split(X_train[0], labels_train,X_train[1],random_state=1, test_size=0.2) # Do the same for the masks test_inputs = X_test[0] test_masks = X_test[1] print(labels_test) test_labels = np.argmax(np.array(labels_test), axis=0) #now I tried this following a stackoverflow suggestion # Convert all inputs and labels into torch tensors, the required datatype train_inputs = torch.tensor(train_inputs) validation_inputs = torch.tensor(validation_inputs) test_inputs = torch.tensor(test_inputs) train_labels = torch.tensor(train_labels) validation_labels = torch.tensor(validation_labels) test_labels = torch.tensor(test_labels) train_masks = torch.tensor(train_masks) validation_masks = torch.tensor(validation_masks) test_masks = torch.tensor(test_masks) print(&quot;len(train_labels)=&quot;, len(train_labels)) batch_size = 1 #here it should be 16 but it is not working # Create the DataLoader for our training set. train_data = TensorDataset(train_inputs,train_masks,train_labels) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size,drop_last=True ) # Create the DataLoader for our validation set. validation_data = TensorDataset(validation_inputs,validation_masks ,validation_labels) validation_sampler = SequentialSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size) # Create the DataLoader for our test set. test_data = TensorDataset(test_inputs,test_masks, test_labels) test_sampler = SequentialSampler(test_data) test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=batch_size) print(&quot;len(test_data[0])=&quot;, len(test_data[0])) base_model = AutoModel.from_pretrained(model_name) model = CRFBiLSTMModel(bert=base_model,n_class=n_class) model.cpu() # finetune the embedding while training model.bert.embeddings.requires_grad = fine optimizer = AdamW(model.parameters(), lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5 eps = 1e-8 # args.adam_epsilon - default is 1e-8. ) epochs = 4 # Total number of training steps is number of batches * number of epochs. total_steps = len(train_dataloader) * epochs # Create the learning rate scheduler. scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) df_value = pd.DataFrame(train_labels).value_counts(sort=False) normedWeights = [1 - (x / sum(df_value)) for x in df_value] normedWeights = torch.FloatTensor(normedWeights).to('cpu') loss_function = getLoss(loss, normedWeights, dic_cat_labels) train = Train() train.fit(model,train_dataloader,validation_dataloader,epochs,torch.device('cpu'),optimizer,scheduler,loss_function, writer) # Train the model on your training data for epoch in range(epochs): for batch in train_dataloader: # Clear gradients optimizer.zero_grad() # Forward pass predicted_labels = model(batch) print(len(predicted_labels)) predicted_labels = torch.tensor(predicted_labels) true_labels = torch.tensor(batch[-1].to('cpu').numpy()) print(&quot;debug3&quot;, (predicted_labels), (true_labels)) loss = loss_function(predicted_labels, true_labels.view(len(true_labels), 1)) # Backward pass loss.backward() optimizer.step() # Evaluate the trained model on your test data with torch.no_grad(): all_predicted_labels = [] all_true_labels = [] for batch in validation_data: predicted_labels = model(batch) true_labels = batch.labels all_predicted_labels.extend(predicted_labels) all_true_labels.extend(true_labels) print(&quot;end validation&quot;) # Generate a classification report to evaluate the performance of your model target_names = [&quot;Negative&quot;, &quot;Positive&quot;] print(classification_report(all_true_labels, all_predicted_labels, target_names=target_names)) model_names = ['camembert-base'] tests = [&quot;Random Sampling&quot;] categories=[&quot;CAT&quot;] losses = [&quot;crossentropy&quot;, &quot;crossentropyweighted&quot;, &quot;focalloss&quot;] finetune = [False, True] for model_name in model_names: print(model_name) for test in tests: for col_label in categories: for loss in losses: for fine in finetune: main(&quot;../data/corpus1.csv&quot;, &quot;\t&quot;, &quot;text&quot;, model_name, test=test, loss = loss, col_label=col_label, finetune=fine, no_other=True) </code></pre> <p>My corpus is a normal csv I read it as a dataframe. It contains one column for the text and one column for the label. I have seven categories.</p> <p>Can you help me with this?</p> <p>Right now my model is :</p> <p>BiLSTM -&gt; Linear Layer (Hidden to tag) -&gt; CRf Layer</p>
<python><machine-learning><pytorch><crf><bilstm>
2023-05-04 14:56:37
0
581
leila
76,174,478
17,194,418
cannot call selenium-webdriver in linux
<p>I'm new at selenium, saw some tutorials but having problems using it.</p> <p>I have 2 browsers installed and hoping not having to install another to use selenium.</p> <blockquote> <p>Opera: Version:96.0.4693.50; System:Linux Mint 20 (x86_64; MATE); Chromium version:110.0.5481.178</p> </blockquote> <blockquote> <p>Chromium: Versión 110.0.5481.100 (Build oficial) for Linux Mint (64 bits)</p> </blockquote> <p>I've searched the web driver for both and the best I could find was:</p> <p>ChromeDriver 110.0.5481.77 but it's version does not match the chromium one.</p> <p>I tried :</p> <pre><code>from selenium import webdriver driver = webdriver.Chrome('chromedriver_linux64_114/chromedriver') </code></pre> <p>and get the error <code>Incompatible release of chromedriver (version 110.0.5481.100) detected in PATH: /usr/bin/chromedriver</code></p> <p>as I understand the version I've downloaded is not compatible, but it was the highest <code>110</code> one in chromedriver page (also tried with one <code>114</code>... the latest one)</p> <p>How could I solve this problem?</p>
<python><selenium-webdriver>
2023-05-04 14:45:59
1
1,735
Ulises Bussi
76,174,421
7,447,976
How to effectively decompose a dictionary containing nested dictionaries
<p>I have a dictionary that has thousands of keys and each key has a corresponding nested dictionaries. As a MWE, I'm showing a very simple structure.</p> <pre><code>original_dict = { 'H0JKYXRWSN0D': {'type': 'A', 'name': '5GJ3VT3P'}, 'T3W8Z1G3ZJFS': {'type': 'B', 'name': '2QNVJGM6'}, 'ZJ7VYG1J33FJ': {'type': 'B', 'name': 'UJV0LEH2'}, 'DK0MBF8N4H1S': {'type': 'A', 'name': '8X0JBY1R'}, 'DZC6UHTQ93UO': {'type': 'C', 'name': '83B4FLJ3'} } </code></pre> <p>My goal is to create individual lists for each unique type in the dictionary as:</p> <pre><code>{'A': ['H0JKYXRWSN0D', 'DK0MBF8N4H1S'], 'B': ['T3W8Z1G3ZJFS', 'ZJ7VYG1J33FJ'], 'C': ['DZC6UHTQ93UO']} </code></pre> <p>For instance, I would like to see all the keys that has type A. I can do this in a for loop, but was curious if there is a more efficient way to do like a built-in function that is faster than using a simple for loop. Here's my current solution.</p> <pre><code>def create_lists_by_type(dictionary): lists_by_type = {} for key, value in dictionary.items(): if value['type'] not in lists_by_type: lists_by_type[value['type']] = [] lists_by_type[value['type']].append(key) return lists_by_type </code></pre>
<python><dictionary>
2023-05-04 14:39:36
1
662
sergey_208
76,174,368
964,235
Write audio and video to video file one frame at a time
<p>I use opencv to read a frame from a video file, edit it, then write it to a new video file. In this way I can process large videos without the need to have the whole video in memory.<br /> Something like this for each frame of the video -</p> <pre><code>success, img = vidObj.read() img = processImg(img) vidWriter.write(img) </code></pre> <p>The problem is this doesn't save the audio. I tried using moviePy to add in the audio after but it is extremely slow since it has to make a whole new video.</p> <pre><code>def addAudio(audioSource, videoSource, savePath): source_audio = AudioFileClip(audioSource) target_video = VideoFileClip(videoSource) target_video = target_video.set_audio(source_audio) target_video.write_videofile(savePath, audio_codec='aac') </code></pre> <p>Is there a way to write the video frame by frame the way I can with opencv but include the audio too? Or just a way to edit the frames without having to have them all loaded into RAM?</p>
<python><opencv><audio><video-processing><moviepy>
2023-05-04 14:32:49
1
1,293
Frobot
76,174,362
10,713,813
Dynamic Lists as Decision Variables in pymoo
<p>I want to define the problem of parallel machine scheduling in pymoo, i.e. I have given:</p> <ul> <li><code>n_machines</code> : Integer representing the number of machines.</li> <li><code>processing_times</code> : Array where processing_time[i] is the processing time for job i.</li> <li><code>release_dates</code> : Array where release_dates[i] is the release date for job i.</li> <li><code>due_dates</code> : Array where due_dates[i] is the due date for job i.</li> </ul> <p>The results should be a list of lists, each presenting an ordering of jobs on the respective machine.</p> <p>This is what I have so far:</p> <p>class ParallelScheduling(ElementwiseProblem):</p> <pre><code>def __init__(self, n_machines, processing_times, release_dates, due_dates, **kwargs): self.n_jobs = processing_times.shape[0] self.n_machines = n_machines self.processing_times = processing_times self.release_dates = release_dates self.due_dates = due_dates super(ParallelScheduling, self).__init__( n_var=??, n_obj=2, xl=??, xu=??, vtype=int, **kwargs ) def _evaluate(self, X, out, *args, **kwargs): out[&quot;F&quot;] = sum(np.sum(get_tardiness(x)) for x in X) out[&quot;G&quot;] = -sum(np.sum(get_intime_jobs(x)) for x in X) </code></pre> <p>Now the decision variables are lists of a variable length, as each machine could in theory have a different number of jobs, but I am not sure how I can implement this.</p> <p>An alternative solution I came up with was using an array of length <code>n_jobs</code> and assigning the machines to the indices of their jobs (thus having <code>n_jobs</code> decision variables with bounds 0 to <code>n_machines</code>) But this does not include the ordering in which the jobs are executed on the machines, which is relevant here.</p> <p>The encoding here is generally a little bit problematic, since I used <code>numpy.array</code>s everywhere, but I just realized now that I can not use them to create arrays which contain arrays of different sizes.</p> <p>Any help would be appreciated, either on how to do this using the approach I have already or alternativly on how to define a problem with a more freely choosable encoding in pymoo.</p>
<python><arrays><numpy><optimization><pymoo>
2023-05-04 14:32:24
1
320
wittn
76,174,288
10,721,627
Which data type has a better time and space complexity in pandas: pd.CategoricalDtype or enum.Enum?
<p>According to the <a href="https://pandas.pydata.org/docs/user_guide/categorical.html" rel="nofollow noreferrer">documentation</a>, the categorical value is useful:</p> <ul> <li>If there are few different values for a given column</li> <li>If the lexical order differs from the logical order (e.g. &quot;small&quot;, &quot;big&quot;, &quot;large&quot;)</li> <li>If you want to signal the category type to other Python libraries (e.g. matplotlib, seaborn)</li> </ul> <p>The documentation states that converting string variables to categorical variables will save some memory. So the space complexity of the <code>categorical</code> type is better than other data types such as <code>str</code> or <code>enum.Enum</code> type.</p> <p>Is the time complexity of the categorical type also better than other data types?</p>
<python><pandas><performance><enums><time-complexity>
2023-05-04 14:24:32
1
2,482
Péter Szilvási
76,174,227
525,865
my first steps using BS4 and requests for scraping data and printing to screen
<p>i am working on a tiny script that fetches data from a web-page and prints them to the screen - using python, beautiful soup and requests:</p> <p>the page - my base_url = &quot;https://themanifest.com/in/web-development/wordpress/companies?page=&quot;</p> <p>here my steps: first i take care for the import of all the necessary libraries: in my case the following ones: BeautifulSoup and requests. then i define the base URL and the number of pages we want to scrape. afterwards i create a for loop that will iterate through each page and extract the relevant information; here i use the requests library to fetch the HTML content of each page. i use Beautiful Soup to parse the HTML content and extract the relevant information. afterwards i store the extracted data in a data structure such as a list or dictionary. i will take care that we repeat steps 4-6 for each page. Once all pages have been scraped, i will take care that we process and analyze the collected data as needed.</p> <p>Here's my first step towards this goal: how to scrape the web pages from the provided resource:</p> <pre><code>import requests from bs4 import BeautifulSoup # Define the base URL and number of pages to scrape base_url = &quot;https://themanifest.com/in/web-development/wordpress/companies?page=&quot; num_pages = 10 # Create an empty list to store the scraped data data = [] # Loop through each page and extract the relevant information for page_num in range(1, num_pages+1): # Fetch the HTML content of the page url = base_url + str(page_num) response = requests.get(url) html_content = response.content # Parse the HTML content using Beautiful Soup soup = BeautifulSoup(html_content, 'html.parser') # Extract the relevant information from the page company_list = soup.find_all('div', {'class': 'company__list-item'}) for company in company_list: name = company.find('h3', {'class': 'company__list-item__name'}).text location = company.find('div', {'class': 'company__list-item__location'}).text services = [service.text for service in company.find_all('span', {'class': 'badge__name'})] rating = company.find('span', {'class': 'rating__value'}).text # Store the extracted data in a dictionary and add it to the list data.append({'Name': name, 'Location': location, 'Services': services, 'Rating': rating}) # Print the collected data print(data) </code></pre> <p>BTW: well i guess that we can use pandas as well - alltogehter i guess that there are many options that are smarter than my choice -</p> <p>what do you say!?</p> <p>what do you think about the pandas-method - using pandas here:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd # Define the base URL and number of pages to scrape base_url = &quot;https://themanifest.com/in/web-development/wordpress/companies?page=&quot; num_pages = 10 # Create empty lists to store the scraped data names = [] locations = [] services = [] ratings = [] # Loop through each page and extract the relevant information for page_num in range(1, num_pages+1): # Fetch the HTML content of the page url = base_url + str(page_num) response = requests.get(url) html_content = response.content # Parse the HTML content using Beautiful Soup soup = BeautifulSoup(html_content, 'html.parser') # Extract the relevant information from the page company_list = soup.find_all('div', {'class': 'company__list-item'}) for company in company_list: name = company.find('h3', {'class': 'company__list-item__name'}).text location = company.find('div', {'class': 'company__list-item__location'}).text services_list = company.find_all('span', {'class': 'badge__name'}) services_str = ', '.join([service.text for service in services_list]) rating = company.find('span', {'class': 'rating__value'}).text # Append the extracted data to the respective lists names.append(name) locations.append(location) services.append(services_str) ratings.append(rating) # Create a pandas DataFrame from the scraped data data = {'Name': names, 'Location': locations, 'Services': services, 'Rating': ratings} df = pd.DataFrame(data) # Print the DataFrame print(df) </code></pre> <p>Well; This code creates four separate lists for each of the data points we want to extract, and then combines them into a pandas DataFrame using a dictionary. The resulting DataFrame has four columns corresponding to the data points we extracted, and each row represents a company from the scraped web pages.</p> <p>i am musing about the best method to go...</p>
<python><pandas><beautifulsoup><python-requests>
2023-05-04 14:18:49
0
1,223
zero
76,174,064
5,716,192
Why does `from ruamel.yaml import CSafeDumper` throw an import error when executed on python3.11?
<p>Create a <code>test.py</code> that contains</p> <pre><code>import sys print(sys.version) from importlib.metadata import version print(f&quot;ruamel.yaml version {version('ruamel.yaml')}&quot;) from ruamel.yaml import CSafeDumper </code></pre> <p>Running <code>python3.11 test.py</code> on will generate</p> <pre><code>3.11.3 (main, Apr 5 2023, 14:14:37) [GCC 11.3.0] ruamel.yaml version 0.17.22 Traceback (most recent call last): File &quot;/home/victory/test.py&quot;, line 6, in &lt;module&gt; from ruamel.yaml import CSafeDumper ImportError: cannot import name 'CSafeDumper' from 'ruamel.yaml' (/home/victory/venv_3.11/lib/python3.11/site-packages/ruamel/yaml/__init__.py) </code></pre> <p><code>python3.10 test.py</code> on the other hand, returns</p> <pre><code>3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] ruamel.yaml version 0.17.22 </code></pre> <p>I don't know if this is a bug or if something has changed in ruamel.yaml in the latest python version. <a href="https://pypi.org/project/ruamel.yaml/0.17.22/" rel="nofollow noreferrer">https://pypi.org/project/ruamel.yaml/0.17.22/</a> doesn't mention anything about this as far as i can tell.</p>
<python><yaml><ruamel.yaml>
2023-05-04 14:02:22
1
693
Victory Omole
76,173,895
3,575,623
Custom loss function using distance matrix
<p>For the loss function of my model, I would like to compare the distance between a predicted result and the true value with the distance between the predicted value and its other nearest neighbour in the reference dataset. I can calculate this value using np arrays just fine, but TF doesn't like me attempting to use this on tensors.</p> <pre><code>def nearest_other_neighbour(y_true, y_pred): losses=[] res_df = pd.DataFrame(cdist(keras.backend.array(y_pred), keras.backend.array(y_true))) for i in range(len(res_df.columns)) : mycol = list(res_df[i]) dist_ref_self = mycol.pop(i) dist_ref_min_other = min(mycol) losses.append(dist_ref_self/dist_ref_min_other) return losses </code></pre> <p>I found <a href="https://gist.github.com/mbsariyildiz/34cdc26afb630e8cae079048eef91865" rel="nofollow noreferrer">this</a> post which would allow me to calculate the distance matrix using only tensors, but I would still need to iterate over the distance matrix afterwards.</p> <p>How can I perform these calculations using only tensors? Or is there a way to convert the symbolic tensors into np arrays in the loss function?</p>
<python><numpy><tensorflow>
2023-05-04 13:43:35
1
507
Whitehot
76,173,785
1,373,209
Moviepy : how to add a background color with a stroke line to a Text that overlay a video?
<p>I'm trying to add a white background to my text with moviepy and I want to add a black stroke line to this background.</p> <p>For the moment I have that</p> <pre><code>for step in data[&quot;steps&quot;]: image_file = f&quot;image{str(idx)}.png&quot; # Replace with your actual image file names text1 = TextClip(wrap_text(step[&quot;text1&quot;]), fontsize=48, color='black', font='Stanberry.ttf') text2 = TextClip(wrap_text(step[&quot;text2&quot;]), fontsize=48, color='black', font='Stanberry.ttf') imageDuration = 5 videoDuration += imageDuration image = ImageClip(image_file).set_duration(imageDuration).resize(height=1920).resize(width=1080).margin(5, color=(0, 0, 0)) text1_bg = ColorClip(size=(text1.size[0] + 20, text1.size[1] + 20), color=(255, 255, 255)).set_duration(imageDuration) text2_bg = ColorClip(size=(text2.size[0] + 20, text2.size[1] + 20), color=(255, 255, 255)).set_duration(imageDuration) text1_pos = ('center', 50) text2_pos = ('center', image.size[1] - text2.size[1] - 50) text1 = text1.set_position(text1_pos).set_duration(imageDuration) text2 = text2.set_position(text2_pos).set_duration(imageDuration) text1_bg = text1_bg.set_position((image.w // 2 - text1.w // 2 - 10, text1_pos[1] - 10)) text2_bg = text2_bg.set_position((image.w // 2 - text2.w // 2 - 10, text2_pos[1] - 10)) composite = CompositeVideoClip([image, text1_bg, text1.set_start(0), text2_bg, text2.set_start(0)], size=(1080, 1920)) images.append(composite) idx += 1 </code></pre>
<python><python-3.x><moviepy>
2023-05-04 13:31:32
1
419
kavaliero
76,173,774
5,164,339
Missing frame and the end-of-stream event in GStreamer's AppSrc
<p>When shutting down a GStreamer pipeline that reads frames from an appsrc, encodes them, and writes them out to a file, I noticed that although I had pushed <code>N</code> buffers into the appsrc, the file would have only <code>N-1</code> frames about 50% of the time (the remainder of the time it was <code>N</code>).</p> <p>I am using GStreamer 1.14.5 on Ubuntu and I have tried sending the EOS action signal using both:</p> <pre class="lang-py prettyprint-override"><code>self._pipeline.send_event(Gst.Event.new_eos()) # and self._appsrc.send_event(Gst.Event.new_eos()) </code></pre>
<python><gstreamer><python-gstreamer>
2023-05-04 13:30:38
1
2,023
mallwright
76,173,666
4,301,236
How to implement io_manager that have a parameter at asset level?
<p>I am a bit confused about the usage of resources, configuration and how they are linked to a context and an asset.</p> <p>So I have a parquet io manager that is able to manipulate and partitionned not-partitionned datasets. To do so I check the presence of a partition on the context in the <code>self._get_path()</code> method and provide a unique name for each file, using the key of the asset and a date format of the partition.</p> <pre class="lang-py prettyprint-override"><code># from dagter examples if context.has_asset_partitions: end = context.asset_partitions_time_window </code></pre> <p>Now I have an issue if the same asset is used with different paritions sizes because the names not necesserally are the same during the reading and writing of the files. <em>e.g.</em> I have some 1h partitions asset and some 1d partitions asset using the same base asset.</p> <p>The solution to this, IMO is to use the <code>filters</code> kwargs from <code>pandas.read_parquet</code>, that would allow me to get only the data inside the time window of the partition. So I want to provide a string parameter to my io manager for it to know which column has to be used to filter the partition interval.</p> <p>This parameter is obviously linked to an asset.</p> <p>I could add this as a parameter of my io_manger constructor and create one instance of io_manager per different column name. But I find it cumbersome and my intuition tells me that I should be using the InputContext to retrieve this information. (the same way I am using the context to get the start,end of the partition)</p> <p>So maybe I should create a ConfigurableResource with only one string attribute (the time column's name), instantiate one object per different column name and provide it to the asset construction (via required_resource_keys?). If this is the right solution, how can I access to the ressource in the io_manager?</p> <p>Or is there any other parameter of the asset constructor that I should be using to achieve what I want?</p>
<python><parquet><dagster>
2023-05-04 13:18:26
2
389
guillaume latour
76,173,400
14,214,312
ValueError: expected sequence of length 4 at dim 2 (got 0)
<p>I am learning Reinforcement learning, i wrote the following code using cross-entropy algorithm to train cartpole game, <a href="https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter04/01_cartpole.py" rel="nofollow noreferrer">official source code from book</a></p> <p>But i am getting the following error :</p> <pre><code>UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ..\torch\csrc\utils\tensor_new.cpp:248.) obs_t = torch.FloatTensor([obs]) Traceback (most recent call last): File &quot;C:\Users\tiklu\OneDrive\Desktop\DRL Resources\codes\basic_rl\cartpole_cross_entropy.py&quot;, line 101, in &lt;module&gt; for iter_no, batch in enumerate(iterate_batches(env, net, BATCH_SIZE)): File &quot;C:\Users\tiklu\OneDrive\Desktop\DRL Resources\codes\basic_rl\cartpole_cross_entropy.py&quot;, line 43, in iterate_batches obs_t = torch.FloatTensor([obs]) ValueError: expected sequence of length 4 at dim 2 (got 0) </code></pre> <p>Here is the function causing the error :</p> <pre><code>def iterate_batches(env,net,batch_size): batch = [] #List of episodes episode_reward = 0.0 episode_steps = [] obs = env.reset() sm = nn.Softmax(dim=1) while True: #this line obs_t = torch.FloatTensor([obs]) # obs_t = obs_t.view(1, -1) action_props_t = sm(net(obs_t)) #net expects a batch of items action_props = action_props_t.data.numpy()[0] action = np.random.choice(len(action_props),p = action_props) next_obs, reward, is_done, _, _ = env.step(action) episode_reward += reward episode_steps.append(EpisodeStep(observation=obs,action=action)) if is_done: batch.append(Episode(reward=episode_reward,steps=episode_steps)) episode_steps = [] episode_reward = 0.0 next_obs = env.reset() if len(batch)==batch_size: yield batch batch = [] obs = next_obs </code></pre> <p>Here is the requirements.txt, if you want to try</p> <pre><code>gym==0.26.2 numpy==1.24.3 tensorboardX==2.6 torch==2.0.0 </code></pre> <p>since the code from book is giving the error, and i don't have much experience with numpy and tensor library. I would really appreciate if you could help.</p>
<python><machine-learning><deep-learning><reinforcement-learning><openai-gym>
2023-05-04 12:50:15
2
358
Pulkit Prajapat
76,173,293
3,486,675
Where to store environment variables when using AWS CDK Pipelines?
<p>I've followed <a href="https://cdkworkshop.com/50-java/70-advanced-topics/100-pipelines.html" rel="nofollow noreferrer">this tutorial</a> to set up CDK Pipelines using Python with a trigger from Github.</p> <p>It looks something like this:</p> <pre class="lang-py prettyprint-override"><code>import aws_cdk as cdk from constructs import Construct from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep from my_pipeline.my_pipeline_app_stage import MyPipelineAppStage class MyPipelineStack(cdk.Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -&gt; None: super().__init__(scope, construct_id, **kwargs) pipeline = CodePipeline(self, &quot;Pipeline&quot;, pipeline_name=&quot;MyPipeline&quot;, synth=ShellStep(&quot;Synth&quot;, input=CodePipelineSource.git_hub(&quot;OWNER/REPO&quot;, &quot;main&quot;), commands=[ &quot;npm install -g aws-cdk&quot;, &quot;python -m pip install -r requirements.txt&quot;, &quot;cdk synth&quot;])) pipeline.add_stage(MyPipelineAppStage(self, &quot;my-stack&quot;)) </code></pre> <p>I want to be able to get custom environment variables from my CDK stack. For example:</p> <pre><code>organizational_unit_ids=[ os.environ.get(&quot;OU_ID&quot;), ] </code></pre> <p>I don't want to store these environment variables directly in my code. Now my question is, where should I store these environment variables in order for the <code>synth</code> <code>ShellStep</code> to use them?</p> <p>Should I add them manually to the CodeBuild action after it has been created?</p> <p>Or is there somewhere central I can store them?</p>
<python><aws-cdk><aws-pipeline>
2023-05-04 12:40:05
2
11,605
D Malan
76,173,241
2,825,403
How to allow scrapy to follow redirects?
<p>I am trying to scrape data from historical versions of web pages as backed up Wayback Machine.</p> <p>I have thousands of pages that need scraping and I don't want to go to trouble of finding out exact dates and time of available backups for each of them. I just want to get weekly historical data or the nearest available.</p> <p>What I know is that if I put a date in a link here:</p> <p><code>https://web.archive.org/web/&lt;some_date&gt;/&lt;some_url&gt;</code></p> <p>then Wayback Machine will automatically redirect to the closest available capture. This will work fine in my scenario.</p> <p>I have a <code>scrapy</code> spider that extracts the data and that I already successfully used on the current version of web pages, so I know that it works and it produces the correct output. But when I try to run scrapy on the backed up versions of pages I get the following output notifying that the page is redirecting and no data is returned:</p> <pre><code>2023-05-04 20:18:33 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2023-05-04 20:18:33 [scrapy.middleware] INFO: Enabled item pipelines: [] 2023-05-04 20:18:33 [scrapy.core.engine] INFO: Spider opened 2023-05-04 20:18:33 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2023-05-04 20:18:33 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2023-05-04 20:18:36 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to &lt;GET https://web.archive.org/web/20200204105913/&lt;some_url&gt;&gt; from &lt;GET https://web.archive.org/web/20050313/&lt;some_url&gt;&gt; </code></pre> <p>I've looked at other questions of similar nature and I understand I need to do something with the middleware, but those other questions were more about not allowing redirects, while I want the exact opposite.</p> <p>How do I allow <code>scrapy</code> to follow redirects?</p>
<python><http-redirect><scrapy>
2023-05-04 12:33:58
2
4,474
NotAName
76,173,178
3,558,626
python cannot import class, path variable shown defaults to anaconda?
<p>my file directory is simple:</p> <pre><code>/Users/dd/python/folder/ main.py store.py __init__.py </code></pre> <pre><code># store.py class Store: def __init__(self,location): self.location = location # main.py from store import Store store = Store('Kansas') print(store.location) </code></pre> <p>when I run main.py I get this error:</p> <pre><code>ImportError: cannot import name 'Store' from 'store' (/Users/dd/opt/anaconda3/lib/python3.9/store.py) </code></pre> <p>When I check <code>sys.path</code> I get a list that includes</p> <ul> <li>/Users/dd/opt/anaconda3/lib/python3.9</li> <li>/Users/dd/opt/anaconda3/lib/python3.9/site-packages</li> <li>...</li> <li>/Users/dd/python/folder</li> </ul> <p>From what I read in other questions if I add the path the way I did here at the end, it should work since the path list variable contains the local folder. Why does it ignore the local folder in this case?</p> <p>Is it a problem with Anaconda?</p> <p>I also couldn't find any examples of this error that included a path in parenthesis like this, where is that coming from?</p> <p>I tried a virtual environment and that didn't change anything either. How can I set up python such that when I start a new folder the code operates from that folder?</p>
<python><sys><pythonpath>
2023-05-04 12:27:54
1
365
user40551
76,173,091
14,775,478
What's the best way to sweep over a parameter space with ECS tasks?
<p>How can I trigger N executions of the same task with different parameters in a way to systematically sweep a 3D paramter space?</p> <p>I have AWS ECS Fargate tasks defined. They accept multiple args over the command line, or, as docker images, run args to <code>entrypoint</code>. So the following will train my models, test them, for my ensemble setup &quot;411&quot;:</p> <pre><code>docker run -it my_image -s train -a test -e 411 </code></pre> <p>Now on ECS, the args are part of the task defintion</p> <pre><code>-s,train,-a,test,-e,411 </code></pre> <p>Now, I want to sweep over many variations of <code>-e 411</code>, and 4-5 other dimensions. For scalability reasons, I don't want to sweep over the parameter space within my application (where, if I were to do it, it would really only be a nested set of <code>for</code> loops). The tasks will all have the same (a) code, (b) docker image, (c) task settings, incl CPU and RAM requirements, (d) are absolutely independent of one another, (e) excpet for those 4-5 parameters which I want to vary/explore.</p> <p>What's the best way to do this, keeping the paramters neat in one place, having a very simple way to start these tasks, and keep it manageable quickly for future changes?</p> <ul> <li>Define ~100 tasks, and write the paramters into those task definitions? Then I can start them by just calling the task, but maintaining the tasks is a nightmare (many tasks with many parameters to track &amp; update)</li> <li>Start ~100 tasks with &quot;overwrite&quot;, each from the command line, so a separate <code>for</code> loop to kick all the tasks off? If so, where is that &quot;starter batch job&quot; running - on the local machine via <code>aws cli</code>, or yet another ECS container just to start the productive &quot;task&quot; containers?</li> <li>Define a &quot;service&quot;, and hard-code the ~100 param variations into that service, so that service starts the tasks? At least the params are all in one place, but the starting/scheduling becomes quite complex.</li> <li>Use the AWS Batch? How would I control the parameter space in there? I only see that I can replace one placeholder by one parameter, but not an option to create tasks for a loop over placeholders.</li> </ul>
<python><docker><amazon-ecs><aws-cli><aws-batch>
2023-05-04 12:17:24
2
1,690
KingOtto
76,172,978
6,529,926
Generator always returning same value
<p>I have a function that reads a file line by line and returns it as a list of the words. Since the file is very large, i would like to make it a generator.</p> <p>Here is the function:</p> <pre class="lang-py prettyprint-override"><code>def tokenize_each_line(file): with open(file, 'r') as f: for line in f: yield line.split() </code></pre> <p>However, everytime i call <code>next(tokenize_each_line())</code>, it always returns the first line of the file. I guess this is not the expected behavior for generators. Instead, i'd like the function to return the next line.</p>
<python><file><io>
2023-05-04 12:06:19
2
729
heresthebuzz
76,172,975
9,805,238
Scaling the font width only while keeping the font height the same using reportlab
<p>I'm using Python and <a href="https://www.reportlab.com/" rel="nofollow noreferrer">reportlab</a> to generate PDFs. I would like to switch from the font <a href="https://fontsgeek.com/fonts/Courier-Condensed-Regular" rel="nofollow noreferrer">CourierCondensed</a> to Courier scaled by 90 %. However, I would like to scale the font width only, while keeping the font height the same.</p> <p>This is how my paragraph style looks like:</p> <pre><code>scale_factor = 1.0 my_style= ParagraphStyle( name=font_name, parent=my_parent, fontName=font_name_string, fontSize=font_size*scale_factor, leading=font_size) </code></pre> <p>This is the output if I use CourierCondensed: <a href="https://i.sstatic.net/o5HR0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o5HR0.png" alt="Output using CourierCondensed as base font" /></a></p> <p>This is the output if I use Courier scaled by 90 % (i.e. if the scale_factor is set to 0.9): <a href="https://i.sstatic.net/I4Q2T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I4Q2T.png" alt="Output using Courier with a font_size scale_factor of 0.9" /></a></p> <p>One can note that the second image shows a text with much smaller margins. Hence, I would like to keep the initial height of the font to avoid a very large change of the margins.</p> <p>Any help or pointers would be highly appreciated.</p>
<python><fonts><reportlab><font-scaling>
2023-05-04 12:05:34
1
3,730
Hagbard
76,172,955
15,748,819
Using response data from Python in Typescript
<p>I'm using <a href="https://github.com/adw0rd/instagrapi" rel="nofollow noreferrer">instagrapi</a> which revolves around python:</p> <pre class="lang-py prettyprint-override"><code>from instagrapi import Client; app = Client() app.login('username', 'password') medias = app.user_medias('user_id') print(medias) </code></pre> <p>Now, the python app returns an <code>application/json</code> response, which I'd like to use in my typescript app.</p> <p>The main question is: <strong>How do I link python to a typescript app?</strong></p> <p>I'd like to receive my response from python which I want to use in the typescript app. I'm looking for ideas / methods to do so.</p> <blockquote> <p>I wanted to use sqlite as a medium to transfer data, but as the app would be used by at least 100 users, sqlite wouldn't be a consistent way to get the job done</p> </blockquote>
<python><typescript>
2023-05-04 12:01:47
0
809
Darshan B
76,172,954
6,302,803
Warning in traversing Pandas DataFrame index: Expected type 'int', got 'Hashable' instead
<p>I have a piece of code like:</p> <pre><code> for index, row in df.iterrows(): if index == 0: continue elif df.loc[df.index[index], 'date'] == df.loc[df.index[index - 1], 'date']: df.loc[df.index[index], 'desc'] = 'same date' </code></pre> <p>This code works. However, IDE (PyCharm) warns me that in <code>[index - 1]</code>, <code>index</code> is a Hashable not an Int which is expected. This typing warning makes sense because <code>.iterrows()</code> returns <code>index</code> as a Hashable but we are doing integer operation here. The question is, how to avoid this warning?</p>
<python><pandas>
2023-05-04 12:01:43
2
3,978
Z.Wei
76,172,946
17,176,270
Access to model attributes and methods in Django 4.2 after async call for model object
<p>How to access to model attributes and methods in Django 4.2 after async call for model object? Here is my model:</p> <pre><code>class ProjectModule(models.Model): ... url = models.URLField(verbose_name=&quot;URL&quot;) ... </code></pre> <p>I'm getting a coroutine with this func:</p> <pre><code>async def get_module_object(module_id: int): &quot;&quot;&quot;Get module object.&quot;&quot;&quot; try: return await ProjectModule.objects.aget(pk=module_id) except Exception as e: return HttpResponseServerError(f&quot;Error: {e}&quot;) </code></pre> <p>After that I query a url with next async function:</p> <pre><code>async def call_url(url: str): &quot;&quot;&quot;Make a request to an external url.&quot;&quot;&quot; try: async with aiohttp.ClientSession() as session: async with session.get(url=url) as response: return response except Exception as e: print(e) return web.Response(status=500) </code></pre> <p>Finally in my next function I call for model and url and trying to get <code>model</code> object from coroutine</p> <pre><code>async def htmx_test_run_module(request, module_id: int): &quot;&quot;&quot;Test run module with HTMX request.&quot;&quot;&quot; if request.method == &quot;GET&quot;: try: module_coroutine = await get_module_object(module_id=module_id) module = await module_coroutine # first error is here response = await call_url(url=module.url) response = await response_coroutine # second error is here soup = BeautifulSoup(response.text(), &quot;lxml&quot;) tag = soup.find( module.container_html_tag, class_=module.container_html_tag_class ) return HttpResponse(tag) except Exception as e: return HttpResponseServerError(f&quot;Error: {e}&quot;) </code></pre> <p>The first error is <code>Error: object ProjectModule can't be used in 'await' expression</code> and the second one is <code>Cannot find reference 'await' in 'ClientResponse | Response'</code>.</p> <p>Finally pyCharm PRO can't provide me with autocomplete for <code>module</code> object after <code>module = await module_coroutine</code></p> <p>What is wrong?</p>
<python><django><asynchronous><async-await><aiohttp>
2023-05-04 12:01:08
1
780
Vitalii Mytenko
76,172,923
12,636,391
Python: Only keep unique values inside a list of sets
<p>right now my python function get's a list containing one or more sets...</p> <pre><code>l = [('val_a', 100), ('val_b', 200), ('val_a', 100), ('val_b', 200)] </code></pre> <p>What I'm trying to achieve is to remove all sets inside the list, that appears more than once. If the remainig list has no more remainig sets, I want to return true. The order in which the sets can appear is absolutly random.</p> <p>Examples:</p> <pre><code>#001 True l = [('val_a', 100), ('val_b', 200), ('val_a', 100), ('val_b', 200)] #002 False l = [('val_a', 100), ('val_b', 200), ('val_a', 100), ('val_c', 300)] #003 True l = [('val_a', 100), ('val_a', 100)] #004 False l = [('val_a', 100)] </code></pre> <p>Also there's no need to keep the structure of the list.</p> <p>Thanks for all your help and have a great day!</p>
<python><list><validation>
2023-05-04 11:58:10
1
473
finethen
76,172,908
534,238
Why does the dependency resolver choose a version that should be explicitly forbidden (via !=)?
<p>My project has a dependency that looks like:</p> <pre class="lang-none prettyprint-override"><code>avro-python3&gt;=1.8.1,!=1.9.2,&lt;1.10.0; python_version &gt;= &quot;3.0&quot; </code></pre> <p>It's reflected like that on a line by itself in a <code>requirements.txt</code> file that I used as a basis for my project. I have this <code>pyproject.toml</code>, generated by PDM:</p> <pre class="lang-ini prettyprint-override"><code>[tool.pdm] [project] # PEP 621 project metadata # See https://www.python.org/dev/peps/pep-0621/ dependencies = [&quot;apache-beam[gcp]&gt;=2.16.0&quot;, &quot;avro-python3!=1.9.2,&lt;1.10.0,&gt;=1.8.1; python_version &gt;= \&quot;3.0\&quot;&quot;, &quot;fastavro&gt;=0.21.4&quot;, &quot;Faker&gt;=0.8.13&quot;, &quot;faker-schema&gt;=0.1.4&quot;, &quot;google-cloud&gt;=0.32&quot;, &quot;google-cloud-bigquery&gt;=1.1.0&quot;, &quot;google-cloud-pubsub&gt;=0.30.1&quot;, &quot;google-cloud-storage&gt;=1.6.0&quot;, &quot;google-cloud-vision&gt;=0.31.0&quot;, &quot;google-resumable-media&gt;=0.5.0&quot;, &quot;mock&lt;3.0.0,&gt;=1.0.1&quot;, &quot;numpy&gt;=1.14.2&quot;, &quot;pandas&gt;=0.23.4&quot;, &quot;scipy&gt;=1.1.0&quot;, &quot;httplib2&lt;=0.12.0,&gt;=0.8&quot;] requires-python = &quot;&gt;=3.8,&lt;3.9&quot; [build-system] requires = [&quot;pdm-backend&quot;] build-backend = &quot;pdm.backend&quot; </code></pre> <p>This reflects that version <code>1.9.2</code> of <code>avro-python3</code> should be forbidden:</p> <pre class="lang-ini prettyprint-override"><code>&quot;avro-python3!=1.9.2,&lt;1.10.0,&gt;=1.8.1; python_version &gt;= \&quot;3.0\&quot;&quot; </code></pre> <p>However, after I used PDM to resolve and install dependencies, it produced a lock file that shows that version <code>1.9.2.1</code> will be installed:</p> <pre><code>[[package]] name = &quot;avro-python3&quot; version = &quot;1.9.2.1&quot; requires_python = &quot;&gt;=3.5&quot; summary = &quot;Avro is a serialization and RPC framework.&quot; </code></pre> <p>Shouldn't this be prevented by <code>!=1.9.2</code> in the version specification? Doesn't that mean to forbid versions that start with <code>1.9.2</code>, such as <code>1.9.2.1</code>?</p> <p>How can I specifically prevent the dependency resolver from using any of the <code>1.9.2</code> prefixed versions of the library, without making other restrictions?</p>
<python><python-packaging>
2023-05-04 11:56:46
1
3,558
Mike Williamson
76,172,523
11,665,178
Error 'HTTPResponse' object has no attribute 'strict' when verifying id token with Firebase Admin authentication in python Cloud Function
<p>I have a Cloud Function in python 3.9 that calls this code :</p> <pre><code>firebase_admin.initialize_app() def check_token(token, app_check_token): &quot;&quot;&quot; :param app_check_token: :param token: :return: &quot;&quot;&quot; try: app_token = app_check.verify_token(app_check_token) logging.info(f&quot;App check token verified : {app_token}&quot;) except Exception as e: logging.error(f&quot;Exception while decoding app check token : {e}&quot;) try: decoded_token = auth.verify_id_token(token) logging.info(f&quot;verified token : {decoded_token}&quot;) if &quot;uid&quot; in decoded_token: return decoded_token[&quot;uid&quot;] return &quot;&quot; except Exception as e: logging.error(f&quot;check_token : {e}&quot;) return &quot;&quot; </code></pre> <p>And here is the log i have from Cloud logging when i call my function with a valid firebase id token :</p> <pre><code>ERROR:root:check_token : 'HTTPResponse' object has no attribute 'strict' </code></pre> <p>What does it mean please ?</p> <p>Note : i have 10 cloud functions, only one has this issue and there is no differences with the others...</p> <p>ChatGPT says it's an error with Firebase authentication backend and to contact firbease support, but since this happens only in one of my Cloud Functions i would like to understand if i am doing something wrong.</p> <p>PS : If i run this cloud function locally everything is fine with <code> functions-framework --target function_name --debug --port=8080</code> with the exact same code.</p>
<python><firebase><firebase-authentication><google-cloud-functions><firebase-admin>
2023-05-04 11:10:49
1
2,975
Tom3652
76,172,489
10,967,961
No Space left on device with dask
<p>I have two very large databases called Network (800MB) and SecondOrder (33GB) and wanna perform a series of merges like this:</p> <pre><code>NetworkDD = dd.from_pandas(Network, npartitions=Network['NiuSup'].nunique()) NodesSharingSupplier = dd.merge(NetworkDD, NetworkDD, on='NiuSup').query('NiuCust_x != NiuCust_y') ### WORKS UNTIL HERE NodesSharingSupplier=NodesSharingSupplier.drop('NiuSup', axis=1) NodesSharingSupplier=NodesSharingSupplier.drop_duplicates() NodesSharingSupplier=NodesSharingSupplier.rename(columns={&quot;NiuCust_x&quot;: &quot;NiuSup&quot;, &quot;NiuCust_y&quot;: &quot;NiuCust&quot;}) NodesSharingSupplier.to_csv(&quot;NodesSharingSupplier.csv&quot;) NodesSharingSupplier=NodesSharingSupplier.drop('NiuSup', axis=1) NodesSharingSupplier=NodesSharingSupplier.drop_duplicates() NodesSharingSupplier=NodesSharingSupplier.rename(columns={&quot;NiuCust_x&quot;: &quot;NiuSup&quot;, &quot;NiuCust_y&quot;: &quot;NiuCust&quot;}) SecondOrder=pd.read_csv(&quot;/home/francesco.serti/SecondOrder_line55.csv&quot;) SecondOrderDD = dd.from_pandas(SecondOrder, npartitions=SecondOrder['NiuSup'].nunique()) SecondOrderDD_all = SecondOrderDD.merge(NodesSharingSupplier, on=['NiuCust','NiuSup'], how='left', indicator=True) SecondOrderDD=SecondOrderDD_all.loc[SecondOrderDD_all._merge=='left_only',SecondOrderDD_all.columns!='_merge'] del SecondOrder del NodesSharingSupplier ##HERE IF I DO .compute() an OS Error: No Space left on Device SecondOrderDD.to_csv('/home/francesco.serti//SecondOrderDD_line75.csv', single_file=True) </code></pre> <p>I know that Dask operates in a lazy mode. I tried both to make all the computations in a lazy mode and then save it with .to_csv and to perform the .compute() immediately but in any case (maybe due to temp files generation?) I get a No Space left on Device error. Is there a fix to this (e.g. force dask not to generate temp)? I even performed it on a server and the same error occurred!</p>
<python><merge><dask>
2023-05-04 11:04:57
0
653
Lusian
76,172,476
5,213,451
Custom parsing typing in dataclasses
<p>As a Python user, I love keeping my code clean with <code>dataclasses</code>, and checked with <code>mypy</code>. However, I often find myself in the case where I want to have</p> <ul> <li>a special, lax use input type, for initialization</li> <li>a stronger, parsed value thereafter</li> </ul> <p>To illustrate my point, let's assume I wanted to convert the class <code>Foo</code> below into a dataclass.</p> <pre class="lang-py prettyprint-override"><code>BarLike = Bar | str def to_bar(x: BarLike) -&gt; Bar: if isinstance(x, Bar): return x else: ... # do some fancy parsing class Foo: bar: Bar def __init__(self, bar: BarLike): self.bar = to_bar(bar) def baz(self): return self.bar.baz() Bar(&quot;hello&quot;).baz() # &lt;- works and mypy is happy </code></pre> <p>This is a deliberately minimal example that is not representative of real, much more complicated usecases where the use of dataclasses is priceless to code readability. To illustrate this point, I'll assume below that once the arguments are parsed, I want them to be immutable, with all the niceties that come with <code>dataclass(frozen=True)</code>.</p> <p>I've so far thought of two ways of doing this, but both have their issues leading to necessary <code># type: ignore</code> sprinkled around the codebase, which I would like to avoid:</p> <h2>Publicly lax attribute</h2> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass </code></pre> <pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True) class Foo: bar: BarLike def __post_init__(self): object.__setattr__(self, &quot;bar&quot;, to_bar(self.bar)) # &lt;- this is ugly def baz(self): return self.bar.baz() # &lt;- mypy complains Foo(&quot;hello&quot;).baz() # &lt;- works and mypy is happy </code></pre> <h2>Publicly strict attribute</h2> <pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True) class Foo: bar: Bar def __post_init__(self): object.__setattr__(self, &quot;bar&quot;, to_bar(self.bar)) # &lt;- this is ugly def baz(self): return self.bar.baz() Foo(&quot;hello&quot;).baz() # &lt;- works but mypy complains </code></pre> <p>Am I missing a good way of getting the best of all worlds here?</p>
<python><mypy><python-typing><python-dataclasses>
2023-05-04 11:03:40
0
1,000
Thrastylon
76,172,407
7,318,120
docstring not working in Python on VS Code
<p>I use standard docstring format, there is a useful example here: <a href="https://stackoverflow.com/questions/3898572/what-are-the-most-common-python-docstring-formats">What are the most common Python docstring formats?</a></p> <p>So here is example code:</p> <pre class="lang-py prettyprint-override"><code>def function(a: str, b:bool, c:int): ''' the function doc string. Here is a bit more. args: a: a random string goes here (str) b: lets describe something binary (bool) c: we have a whole number (int) return: gives something back ''' a = a + ' world' b = 5 * b c = 10 + c return c </code></pre> <p>When I hover over the function definition in VS Code, the description is nicely formatted. Each argument is on its own separate line:</p> <p><a href="https://i.sstatic.net/LyB3l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LyB3l.png" alt="enter image description here" /></a></p> <p>But I like to add the types at the beginning, like this:</p> <pre class="lang-py prettyprint-override"><code>def function(a: str, b:bool, c:int): ''' the function doc string. Here is a bit more. args: a: (str) a random string goes here b: (bool) lets describe something binary c: (int) we have a whole number return: gives something back ''' a = a + ' world' b = 5 * b c = 10 + c return c </code></pre> <p>Now when I hover over the function definition the args are all merged onto one line:</p> <p><a href="https://i.sstatic.net/mHFoV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHFoV.png" alt="enter image description here" /></a></p> <p>I notice that this is caused by the parentheses around the types and removing them removes the problem.</p> <p>I also note that if I print the docstring, it looks how it should be, so it is like VS Code has an issue with the parentheses:</p> <pre class="lang-py prettyprint-override"><code>print(function.__doc__) </code></pre> <p>returns this:</p> <pre><code> the function doc string. Here is a bit more. args: a: (str) a random string goes here b: (bool) lets describe something binary c: (int) we have a whole number return: gives something back </code></pre> <p>But why is this the case and how can I get it back to normal (keeping the parentheses)?</p>
<python><visual-studio-code><docstring>
2023-05-04 10:55:31
1
6,075
darren
76,172,193
9,668,218
How to create a timestamp column in a PySpark dataframe that includes all time intervals?
<p>I have a pyspark dataframe with a column named &quot;interval_date_time&quot;.</p> <p>&quot;interval_date_time&quot; is timestamp like &quot;2022-01-01:00:00:00&quot;, &quot;2022-01-01:00:30:00&quot;, &quot;2022-01-03:01:00:00&quot;, &quot;2022-01-03:02:30:00&quot;, &quot;2022-01-03:14:00:00&quot;.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>interval_date_time</th> </tr> </thead> <tbody> <tr> <td>2022-01-01:00:00:00</td> </tr> <tr> <td>2022-01-01:00:30:00</td> </tr> <tr> <td>2022-01-03:01:00:00</td> </tr> <tr> <td>2022-01-03:02:30:00</td> </tr> <tr> <td>2022-01-03:14:00:00</td> </tr> </tbody> </table> </div> <p>However, it doesn't have time values starting from &quot;00:00:00&quot; until &quot;23:30:00&quot; for all dates.</p> <p>I want to create a new PySpark DataFrame with a column that has a timestamp datatype.</p> <p>The difference with the original DataFrame is that this new DataFrame has every 30 minutes time intervals for all dates in the original DataFrame.</p> <p>Example values will be like :</p> <p>&quot;2022-01-01:00:00:00&quot;, &quot;2022-01-01:00:30:00&quot;, until &quot;2022-01-01:23:30:00&quot;. And the same for &quot;2022-01-03&quot;</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>interval_date_time_new</th> </tr> </thead> <tbody> <tr> <td>2022-01-01:00:00:00</td> </tr> <tr> <td>2022-01-01:00:30:00</td> </tr> <tr> <td>2022-01-01:01:00:00</td> </tr> <tr> <td>....</td> </tr> <tr> <td>2022-01-01:23:00:00</td> </tr> <tr> <td>2022-01-01:23:30:00</td> </tr> <tr> <td>2022-01-03:00:00:00</td> </tr> <tr> <td>2022-01-03:00:30:00</td> </tr> <tr> <td>2022-01-03:01:00:00</td> </tr> <tr> <td>.....</td> </tr> <tr> <td>2022-01-03:23:30:00</td> </tr> </tbody> </table> </div> <p>Any idea how to do that in PySpark?</p>
<python><dataframe><apache-spark><pyspark><timestamp>
2023-05-04 10:29:20
1
1,033
Mohammad
76,172,187
17,176,270
async delete from DB using Django 4.2 with adelete()
<p>As per docs since v4.2 Django comes with async version of delete method named <code>adelete()</code>. But I do not understand how to delete an object from DB using it.</p> <pre><code>db_object = await DbModel.objects.aget(pk=module_id) await db_object.adelete() # doesn't work </code></pre> <p>It fails with an error: <code>&quot;AttributeError: 'DbModel' object has no attribute 'adelete'</code></p> <p>What is wrong?</p>
<python><django><asynchronous><async-await>
2023-05-04 10:28:42
1
780
Vitalii Mytenko
76,172,174
5,561,649
How to tell if a string could have been generated by formatting another string?
<p>Say we have:</p> <pre class="lang-py prettyprint-override"><code>f = &quot;The object named '{}' is corrupted.&quot; s = &quot;The object named 'my_object' is corrupted.&quot; </code></pre> <p><code>s</code> is equal to <code>f.format(&quot;my_object&quot;)</code>. If we didn't know that, how could we check if <code>s</code> can be generated by formatting <code>f</code>? Is there a function to do that in the standard library?</p> <p>This can be done using regex but we'd have to reimplement the logic of how <code>format</code> works (e.g., curly braces are replaced with any string, etc). This is why I'd like to know if there's a way to use the same logic that <code>str.format</code> uses.</p>
<python><string-formatting>
2023-05-04 10:26:58
0
550
LoneCodeRanger
76,172,138
282,855
How to manage a function of a third-party library that stops returning value after a while?
<p>A function, namely, the <code>prompt</code> function of my <a href="https://github.com/nomic-ai/gpt4all" rel="nofollow noreferrer">GPT4All</a> object stops returning value after a while (I process a bunch of queries) without any errors/exceptions. I've tried to set a timeout to that function's execution but this control has caused the <code>[Errno 32] Broken pipe</code> error as I write the returned value of this prompt function to a <code>CSV</code> file.</p> <p>So, I'd like to get your recommendations to overcome this situation by simply skipping that iteration and continue processing other inputs.</p> <p>Here is my script that I've added a comment on the call that stops returning a value after a while:</p> <pre><code>from nomic.gpt4all import GPT4All import csv from time import time CMU_CSV_PATH = 'data/cmu_qa.csv' def single_chatgpt_offline(gpt, question): try: print(f'\tAsking {question}') resp = gpt.prompt(question) # ----&gt; This call stops returning a result after a while print(f'\tGot the response: {resp}') return resp except Exception as err: print(f'{str(err)}') return None def find_answers_and_write_csv(): # initialize GPT4 gpt = GPT4All() gpt.open() with open(CMU_CSV_PATH, 'r', encoding='utf-8') as csv_src: csv_reader = csv.DictReader(csv_src) with open('data/cmu_qa_answers.csv', 'w', encoding='utf-8', newline='') as csv_target: fields = ['question', 'answer', 'title', 'bard', 'bard_time', 'gpt', 'gpt_time'] csv_writer = csv.DictWriter(csv_target, fieldnames=fields) # write the header csv_writer.writeheader() for idx, line in enumerate(csv_reader): question = line['question'] line['title'] = line['title'].replace('_', ' ') # GPT duration_gpt_start = time() resp_gpt = single_chatgpt_offline(gpt, question) if resp_gpt and '\n' in resp_gpt: resp_gpt = resp_gpt.replace('\n', ' ') duration_gpt_end = time() line['gpt'] = resp_gpt line['gpt_time'] = round((duration_gpt_end - duration_gpt_start), NUM_DECIMAL) line[&quot;bard&quot;] = None line[&quot;bard_time&quot;] = None csv_writer.writerow(line) </code></pre>
<python><chatgpt-api>
2023-05-04 10:23:07
1
6,496
talha06
76,172,012
1,189,783
How to reshape the data using nested models with Pydantic
<p>I expect to get the response as as a list, e.g.:</p> <pre><code>{orders: [{'id': 111, 'info': {'dt': '2023-05-11'}}, ...]} </code></pre> <p>However the input data here is a list of flat dicts:</p> <pre><code>data = [ {'id': 111, 'dt': '2022-01-13', 'quantity': 5}, {'id': 112, 'dt': '2022-01-14', 'quantity': 10} ] </code></pre> <p>Schemas:</p> <pre><code>class OrderInfo(BaseModel): dt: date class Order(BaseModel): id: int info: OrderInfo class Orders(BaseModel): orders: List[Order] </code></pre> <p>Here I iterate over the data to append the list data:</p> <pre><code>@app.get(&quot;/&quot;, response_model=Orders) async def get_orders(): orders = [] for i in data: order = Order.parse_obj(i) orders.append(order) return Orders(orders=orders) </code></pre> <p>In my case the input data is a flat json (see below), which pydantic can't resolve into the nested model OrderInfo:</p> <pre><code>pydantic.error_wrappers.ValidationError: 1 validation error for Order info field required (type=value_error.missing) </code></pre> <p>If I declare the OrderInfo as nullable, then I can get the results, but with nulls:</p> <pre><code>{&quot;orders&quot;: [{&quot;id&quot;:111,&quot;info&quot;:null}, ...} </code></pre>
<python><pydantic>
2023-05-04 10:10:23
1
533
Alex
76,172,005
6,503,917
UMAP does not run on Apple M1 Pro
<p>I have been using UMAP on an Apple 2019 i7 machine perfectly fine. However, after updating my Mac to an M1 Pro model, the command</p> <p><code>import UMAP</code></p> <p>throws many errors beginning with</p> <p><code>Fatal Python error: Illegal instruction</code></p> <p>in Spyder and restarts the kernel.</p> <p>After searching I believe this is linked to the change of hardware.</p> <p>Does anyone know how to get umap working on Apple silicon?</p>
<python><spyder><apple-m1>
2023-05-04 10:09:51
1
419
javid
76,171,895
6,528,055
Why does model.fit produce "is incompatible with the layer"?
<p>This is my LSTM model:</p> <pre><code>lstm_out1 = 150 embed_dim = 768 model = Sequential() model.add(Embedding(embed_tensor.shape[0], embed_dim, weights=[embed_tensor], input_length=512, trainable=False)) model.add(LSTM(lstm_out1, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(64, activation='relu')) model.add(Dense(1, activation='sigmoid')) adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy']) model.summary() </code></pre> <p>Model summary is:</p> <pre><code>Model: &quot;sequential_8&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_12 (Embedding) (None, 512, 768) 91812096 lstm_10 (LSTM) (None, 150) 551400 dense_16 (Dense) (None, 64) 9664 dense_17 (Dense) (None, 1) 65 </code></pre> <p>I called model.fit() on tis:</p> <pre><code>model.fit(token_tensor, labels, batch_size=10, epochs=1, shuffle=True) </code></pre> <p>It's producing error:</p> <pre><code> ValueError: Input 0 of layer &quot;sequential_8&quot; is incompatible with the layer: expected shape=(None, 512), found shape=(10, 1, 512) </code></pre> <p>My each input vector is size of 512. the 10 in found shape is the batch size. Plz help me to resolve this.</p>
<python><machine-learning><deep-learning><nlp><lstm>
2023-05-04 09:57:07
1
969
Debbie
76,171,852
14,299,233
Inserting series into a new dataframe column for each group of a groupby object
<p>Apologies in advance if the question is too simple. I'm calculating vessel speeds, grouped by their mmsi identifier, and for each pair of consecutive rows. For this, I have a function that returns a series containing the speeds. I want to insert the speeds into a new column of the original dataframe in their corresponding position.</p> <p>For example, for <code>mmsi = 1</code>, the function returns <code>[nan, 15]</code>, for <code>mmsi = 2</code>, <code>[nan, 19, 8]</code>, and for <code>mmsi = 3</code>, <code>[nan, 19, 8, 15]</code></p> <pre><code>timestamp mmsi lat lon speed 0.2 1 4.7 3.2 nan 0.5 1 2.3 5.6 15 0.1 2 9.3 8.4 nan 0.6 2 8.3 9.2 19 0.8 2 6.1 2.3 8 0.2 3 1.5 9.4 nan 0.4 3 8.3 9.2 19 0.6 3 6.1 2.3 8 0.8 3 2.3 5.6 15 </code></pre> <p>I have tried this but I haven't figured out how to make it work correctly:</p> <pre><code>def speed(lons, lats, timestamps): #Calculates speed ... ... ... speed_knots = speed_knots.tolist() return speed_knots df = pd.read_csv('filtered.csv') df = df[df.duplicated('mmsi',keep=False)] #Drops rows without duplicated mmsi df['unix_datetime'] = pd.to_datetime(df['timestamp'],unit='s') #Converts Unix format to datetime df['speed_knots'] = df.groupby('mmsi').apply(lambda x: speed(x.lon,x.lat,x.unix_datetime)) </code></pre> <p>Thanks in advance for your help.</p>
<python><pandas><dataframe>
2023-05-04 09:52:00
2
339
JavierSando
76,171,781
11,024,270
Run Current File in Interactive Window opens a preview instead
<p>On Visual Studio Code, from time to time, this very weird behavior happens: When I click on <code>Run Current File in Interactive Window</code>, instead of opening an <em>Interactive Window</em> and running my code, it opens a <em>Preview</em> of my code. When it happens, the only solution I've found to solve the problem is to close and reopen VS Code.</p> <p>When I click for the first time on <code>Run Current File in Interactive Window</code>, it works properly: <a href="https://i.sstatic.net/yayzl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yayzl.png" alt="enter image description here" /></a></p> <p>If I close the Interactive Window and click again on <code>Run Current File in Interactive Window</code>, then it opens a <em>Preview</em> of the code instead of the <em>Interactive Window</em>: <a href="https://i.sstatic.net/sADSc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sADSc.png" alt="enter image description here" /></a></p> <p>I'm using the April 2023 (version 1.78) release Visual Studio Code and I think the issue only appears when I'm using VS Code on a remote SSH server.</p>
<python><visual-studio-code><ssh><remote-server>
2023-05-04 09:46:04
1
432
TVG
76,171,672
5,751,930
Airflow sensor unable to access context variable
<p>I'm trying to build a sensor that reads the dag parameters (that you can change when you trigger dag with config) to know how long to wait.</p> <pre><code>from airflow.decorators import dag, task, task_group from datetime import date, datetime, timedelta import re params = { &quot;time&quot;:&quot;8h&quot; } def parse_time(time_str): regex = re.compile(r'^((?P&lt;days&gt;[\.\d]+?)d)?((?P&lt;hours&gt;[\.\d]+?)h)?((?P&lt;minutes&gt;[\.\d]+?)m)?((?P&lt;seconds&gt;[\.\d]+?)s)?$') parts = regex.match(time_str) if parts is None: return timedelta() time_params = {name: float(param) for name, param in parts.groupdict().items() if param} return timedelta(**time_params) @dag( dag_id=&quot;test&quot;, start_date=datetime(2023, 5, 1), schedule_interval = None, catchup=False, default_args={&quot;retries&quot;:0}, params=params, tags=[&quot;test&quot;,&quot;debug&quot;], ) def test(): @task.sensor( task_id=f&quot;run_after&quot;, poke_interval=60 * 5, timeout=60 * 60 * 24 * 3, mode=&quot;reschedule&quot; ) def run_after(**context): run_after = context[&quot;params&quot;].get(&quot;time&quot;,&quot;0h&quot;) print(run_after) target_time = parse_time(run_after) time_since_midnight = datetime.now() - datetime.strptime(context[&quot;data_interval_end&quot;].strftime(&quot;%Y%m%d&quot;),&quot;%Y%m%d&quot;) return time_since_midnight &gt; target_time t=run_after() test() </code></pre> <p>It seems the context variable can't be accessed in the sensor (empty dict)... I have no troubles accessing it in a normal task. Am I doing this wrong ? is there a workaround ? (I guess I could access the params in another task and send the data to the sensor through XComs but it makes the dag more complex and it doesn't seem to be the right way to do this).</p> <p>Thanks for your help</p>
<python><airflow><airflow-2.x><airflow-taskflow>
2023-05-04 09:33:00
1
623
Thomas
76,171,546
8,182,504
QWizard/QWizardPage not displaying Logo
<p>I want to use add an Logo to the <em>QWizardPage</em>, however nothing is displayed. I'm using PySide6.</p> <pre class="lang-py prettyprint-override"><code>from PySide6.QtCore import Qt from PySide6.QtWidgets import (QVBoxLayout, QApplication, QWizardPage, QWizard) from PySide6.QtGui import (QIcon, QPixmap, QImage) class Page1(QWizardPage): def __init__(self, img, parent=None): super().__init__(parent) self.setTitle('General Properties') self.setSubTitle('Enter general properties for this project.') # Just for testing, include the same image - this is displayed! layout = QVBoxLayout() graphics = QLabel(self) graphics.setPixmap(QPixmap(img)) layout.addWidget(graphics) self.setLayout(layout) class ProjectWizard(QWizard): def __init__(self, parent=None): super().__init__(parent) img = './Icon_256.png' self.addPage(Page1(img, self)) self.setWindowTitle(&quot;New Project&quot;) # setPixmap(QWizard::WizardPixmap which, const QPixmap &amp;pixmap) self.setPixmap(QWizard.LogoPixmap, QPixmap.fromImage(QImage(img))) </code></pre> <p>The path to the image is valid and accessible (I've included the image in the dialog as well), however, no Image is displayed. Also a change of the <code>which</code> parameter of <code>setPixmap(which, pixmap)</code> does not help. I also did not find anything helpful in the documentation of the <a href="https://doc.qt.io/qt-6/qwizard.html#wizard-look-and-feel" rel="nofollow noreferrer">QWizardPage documentation</a> page.</p> <p><a href="https://i.sstatic.net/Ca9kO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ca9kO.png" alt="Screenshot" /></a></p> <p>Thank you in advance!</p>
<python><pyside6><qwizard>
2023-05-04 09:19:25
0
1,324
agentsmith
76,171,532
17,209,725
spark-cdm-connector in Databricks: java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/ReadSupport
<p>we are having compatibility issue with spark-cdm-connector, to give a little context I have a cdm data in ADLS which I’m trying to read into Databricks</p> <pre><code>Databricks Runtime Version 12.1 (includes Apache Spark 3.3.1, Scala 2.12) </code></pre> <p>, I have installed com.microsoft.azure:spark-cdm-connector:0.19.1</p> <p>I ran this code:</p> <pre><code>AccountName = &quot;&lt;AccountName&gt;&quot; container = &quot;&lt;container&gt;&quot; account_key = &quot;&lt;account_key&gt;&quot; Storage_Account = f&quot;account={AccountName};key={account_key}&quot; # Implicit write case from pyspark.sql.types import * from pyspark.sql import functions, Row from decimal import Decimal from datetime import datetime # Write a CDM entity with Parquet data files, entity definition is derived from the dataframe schema data = [ [1, &quot;Alex&quot;, &quot;Lai&quot;, &quot;alex.lai@adatis.co.uk&quot;, &quot;Consultant&quot;, &quot;Delivery&quot;, datetime.strptime(&quot;2018-07-03&quot;, '%Y-%m-%d'), datetime.now()], [2, &quot;James&quot;, &quot;Russel&quot;, &quot;james.russel@adatis.co.uk&quot;, &quot;Senior Consultant&quot;, &quot;Delivery&quot;, datetime.strptime(&quot;2014-05-14&quot;, '%Y-%m-%d'), datetime.now()] ] schema = (StructType() .add(StructField(&quot;EmployeeId&quot;, StringType(), True)) .add(StructField(&quot;FirstName&quot;, StringType(), True)) .add(StructField(&quot;LastName&quot;, StringType(), True)) .add(StructField(&quot;EmailAddress&quot;, StringType(), True)) .add(StructField(&quot;Position&quot;, StringType(), True)) .add(StructField(&quot;Department&quot;, StringType(), True)) .add(StructField(&quot;HiringDate&quot;, DateType(), True)) .add(StructField(&quot;CreatedDateTime&quot;, TimestampType(), True)) ) df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema) display(df) # Creates the CDM manifest and adds the entity to it with parquet partitions # with both physical and logical entity definitions (df.write.format(&quot;com.microsoft.cdm&quot;) .option(&quot;storage&quot;, Storage_Account) .option(&quot;manifestPath&quot;, container + &quot;&lt;path to manifest.cdm.json file&gt;&quot;) .option(&quot;entity&quot;, &quot;Employee&quot;) .option(&quot;format&quot;, &quot;parquet&quot;) .mode(&quot;overwrite&quot;) .save() ) </code></pre> <p>and its throwing this error ERROR: java.lang.NoClassDefFoundError:org/apache/spark/sql/sources/v2/ReadSupport.</p> <pre><code>Py4JJavaError Traceback (most recent call last) File &lt;command-2314057479770273&gt;:3 1 # Creates the CDM manifest and adds the entity to it with parquet partitions 2 # with both physical and logical entity definitions ----&gt; 3 (df.write.format(&quot;com.microsoft.cdm&quot;) 4 .option(&quot;storage&quot;, Storage_Account) 5 .option(&quot;manifestPath&quot;, container + &quot;&lt;path to manifest.cdm.json file&gt;&quot;) 6 .option(&quot;entity&quot;, &quot;Employee&quot;) 7 .option(&quot;format&quot;, &quot;parquet&quot;) 8 .mode(&quot;overwrite&quot;) 9 .save() 10 ) File /databricks/spark/python/pyspark/instrumentation_utils.py:48, in _wrap_function.&lt;locals&gt;.wrapper(*args, **kwargs) 46 start = time.perf_counter() 47 try: ---&gt; 48 res = func(*args, **kwargs) 49 logger.log_success( 50 module_name, class_name, function_name, time.perf_counter() - start, signature 51 ) 52 return res File /databricks/spark/python/pyspark/sql/readwriter.py:1193, in DataFrameWriter.save(self, path, format, mode, partitionBy, **options) 1191 self.format(format) 1192 if path is None: -&gt; 1193 self._jwrite.save() 1194 else: 1195 self._jwrite.save(path) File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -&gt; 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File /databricks/spark/python/pyspark/sql/utils.py:209, in capture_sql_exception.&lt;locals&gt;.deco(*a, **kw) 207 def deco(*a: Any, **kw: Any) -&gt; Any: 208 try: --&gt; 209 return f(*a, **kw) 210 except Py4JJavaError as e: 211 converted = convert_exception(e.java_exception) File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --&gt; 326 raise Py4JJavaError( 327 &quot;An error occurred while calling {0}{1}{2}.\n&quot;. 328 format(target_id, &quot;.&quot;, name), value) 329 else: 330 raise Py4JError( 331 &quot;An error occurred while calling {0}{1}{2}. Trace:\n{3}\n&quot;. 332 format(target_id, &quot;.&quot;, name, value)) Py4JJavaError: An error occurred while calling o464.save. : java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/ReadSupport at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:757) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) at com.databricks.backend.daemon.driver.ClassLoaders$ReplWrappingClassLoader.loadClass(ClassLoaders.scala:65) at java.lang.ClassLoader.loadClass(ClassLoader.java:406) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:717) at scala.util.Try$.apply(Try.scala:213) at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:717) at scala.util.Failure.orElse(Try.scala:224) at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:717) at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:781) at org.apache.spark.sql.DataFrameWriter.lookupV2Provider(DataFrameWriter.scala:988) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380) at py4j.Gateway.invoke(Gateway.java:306) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195) at py4j.ClientServerConnection.run(ClientServerConnection.java:115) at java.lang.Thread.run(Thread.java:750) Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.ReadSupport at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) ... 36 more </code></pre>
<python><azure><apache-spark><databricks><azure-databricks>
2023-05-04 09:17:59
1
647
Thekingis007
76,171,489
7,295,936
How to reset python logger in a callback
<p>Hello here is my promblem : i work with a broker and a daemon which is listening it and each time a message (containing a file) comes in a callback function is triggered in my code like this :</p> <pre><code>def callback(self, ch, method, properties, body,args): (conn, thrds) = args delivery_tag = method.delivery_tag t = threading.Thread(target=self.do_work, args=(conn, ch, delivery_tag, body)) t.start() thrds.append(t) #callback=staticmethod(callback) def do_work(self, conn, ch, delivery_tag, body): thread_id = threading.get_ident() logging.basicConfig(filename=self.dict_config[&quot;filelog&quot;]+&quot;exchange_hashed&quot; +'_'+str(thread_id)+'_' + str(datetime.now().strftime('%Y%m%d%H%M%S%f')) + '.log',level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s') logger = logging.getLogger('exchange_hashed'+str(thread_id)) logger.info('Thread id: %s Delivery tag: %s', thread_id,delivery_tag) payload = body.decode(&quot;utf-8&quot;) payload = json.loads(payload) file_content = payload[&quot;values&quot;] file_path = file_content[&quot;header&quot;][&quot;meta_data&quot;][&quot;file_name&quot;] filename=file_path.split(&quot;/&quot;)[-1] logger.info(f&quot;Data received for {filename}&quot;) </code></pre> <p>in the function <code>do_work()</code> i initialize my logger and name it with it's thread id but instead of creating a new log file for each thread, it keeps writing in the first which was created.</p> <p>I've check logging doc and they say : Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.</p> <p>so it's exactly what i am NOT doing</p> <p>I don't understand what i am missing.</p>
<python><python-3.x><logging><python-logging>
2023-05-04 09:13:47
1
1,560
FrozzenFinger
76,171,483
10,667,216
How can I format my Python code with autopep8 using VS Code's format on save feature?
<p>I am trying to format my Python code automatically using the format on save feature in vs Code. I have installed the <code>autopep8</code> package and added the following configuration to my <code>pyproject.toml</code> file:</p> <pre><code>[tool.autopep8] in-place = true aggressive = 2 </code></pre> <p>when I try like <code>autopep8 test.py</code> it is working. However, when I try to format my code using the format on save feature, I get the following error message:</p> <pre><code>--in-place cannot be used with standard input </code></pre> <p>How can I resolve this error and format my code using <code>autopep8</code> with desired options? and my vs code settings looks like:</p> <pre><code> // Python &quot;[python]&quot;: { &quot;editor.defaultFormatter&quot;: &quot;ms-python.autopep8&quot;, }, &quot;python.formatting.provider&quot;: &quot;none&quot;, </code></pre>
<python><visual-studio-code><pyproject.toml><autopep8>
2023-05-04 09:12:53
2
483
Davood
76,171,458
5,919,010
How work with NestedVariant object in Tensorflow
<p>I have loaded a dataset from <code>tfds</code> where the data is in a <code>NestedVariant</code> object. How do I extract the values from there?</p> <p>E.g.:</p> <pre><code>import tensorflow_datasets as tfds ds = tfds.load(&quot;rlu_control_suite&quot;, split=&quot;train&quot;) for example in ds.take(1): print(example[&quot;steps&quot;]) </code></pre> <p>Output:</p> <pre><code>&lt;tensorflow.python.data.ops.dataset_ops._NestedVariant object at 0x2c3324940&gt; </code></pre> <p>I can't find the docs on how to work with this object.</p>
<python><tensorflow><tensorflow-datasets>
2023-05-04 09:10:43
1
1,264
sandboxj
76,171,439
5,668,215
How to get display value from Django Choices directly from the declared choice constant?
<p>Lets say I defined a model class</p> <pre><code>from model_utils import Choices class SomeModel(TimeStampedModel): food_choices = Choices( (1, &quot;WATER&quot;, &quot;Water&quot;), (2, &quot;BURGER&quot;, &quot;Burger&quot;), (3, &quot;BIRYANI&quot;, &quot;Biryani&quot;) ) food = models.IntegerField(choices=food_choices, default=food_choices.WATER) </code></pre> <p>How do I get the display part while using the choices as a variable?</p> <p>eg. SomeModel.food_choices.WATER gives me 1, how do I get the value/string &quot;Water&quot;?</p> <p>I know if I create an instance, then I can fetch the display value using <code>.get_food_display()</code> for that instance, but I don’t want that I just want to use it directly from the constant created for <code>Choices</code>, ie <code>food_choices</code>.</p>
<python><python-3.x><django><django-models>
2023-05-04 09:08:09
2
1,346
faruk13
76,171,300
12,931,358
What the meaning of `text[..., :5]` in python?
<p>I am confused about one code line in <a href="https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py" rel="nofollow noreferrer">Github</a>, in line 295</p> <p><code>text = text[..., :5]</code></p> <p>I was wondering if it take the text list from index=0 to 5.</p> <p>However, it didn't work, I tried to use <code>text = [&quot;an oil painting of a corgi&quot;]</code> as input, it shows error, <code>string indices must be integers</code></p> <p>I am still confused about it, what the meaning of it?</p>
<python><numpy>
2023-05-04 08:53:26
1
2,077
4daJKong
76,171,283
8,391,698
How to format sequence of strings into some predetermined length with fasta like header with Python
<p>I have a text file called <code>input.txt</code> that looks like this.</p> <pre><code>A C H E C Q D S S C H H C R Q K L E D T S C H L E D V G K M N T Y H C G E G I N N G P N A S C K F M L P C V V A E F E N H T E T D W R C K L E A E H C D C K D A A V N H H F Y S L C K D V T E E W </code></pre> <p>Note that the input above has 3 rows of amino acid sequences.</p> <p>I have a text file called input.txt that looks like this.</p> <pre><code>A C H E C Q D S S C H H C R Q K L E D T S C H L E D V G K M N T Y H C G E G I N N G P N A S C K F M L P C V V A E F E N H T E T D W R C K L E A E H C D C K D A A V N H H F Y S L C K D V T E E W </code></pre> <p>Note that the input above has 3 rows of amino acid sequences.</p> <p>I'd like to convert that into the format below.</p> <pre><code>&lt;|endoftext|&gt; ACHECQDSSCHHCRQKLEDTSCHLEDVGKM &lt;|endoftext|&gt; NTYHCGEGINNGPNASCKFMLPCVVAEFEN HT &lt;|endoftext|&gt; ETDWRCKLEAEHCDCKDAAVNHHFYSLCKD VTEEW </code></pre> <p>Every beginning of the amino acid sequence should begin with this string &quot;&lt;|endoftext|&gt;&quot; and each new line should be no more than 30 amino acids.</p> <p>I have this code, but it doesn't do the job:</p> <pre><code>def process_amino_acids(file_name): with open(file_name, &quot;r&quot;) as file: data = file.read().replace(&quot;\n&quot;, &quot;&quot;).replace(&quot; &quot;, &quot;&quot;) output = &quot;&lt;|endoftext|&gt;&quot; for i, amino_acid in enumerate(data): if i % 30 == 0 and i != 0: output += &quot;\n&quot; output += amino_acid return output def main(): input_file = &quot;data/input.txt&quot; processed_amino_acids = process_amino_acids(input_file) with open(&quot;data/output.txt&quot;, &quot;w&quot;) as output_file: output_file.write(processed_amino_acids) print(&quot;Formatted amino acid sequences are written to output.txt&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The output it gave is this:</p> <pre><code>&lt;|endoftext|&gt;ACHECQDSSCHHCRQKLEDTSCHLEDVGKM NTYHCGEGINNGPNASCKFMLPCVVAEFEN HTETDWRCKLEAEHCDCKDAAVNHHFYSLC KDVTEEW </code></pre> <p>How can I do it properly with Python?</p>
<python><python-3.x><string><bioinformatics>
2023-05-04 08:51:34
1
5,189
littleworth
76,171,274
14,986,784
Empty stacks from torch profiler
<h2>Details of the problem</h2> <p>Hello, I am trying to reproduce the profiler example of the <a href="https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html#visualizing-data-as-a-flamegraph" rel="nofollow noreferrer">official Pytorch tutorial</a>. I want to export stacks of a forward pass of a model.</p> <p><strong>Although, the stacks files are created and they are empty.</strong></p> <pre class="lang-py prettyprint-override"><code>import torch from torch import profiler from torchvision.models import resnet18 model = resnet18().cuda() inputs = torch.rand(5, 3, 224, 224).cuda() with profiler.profile( activities=[profiler.ProfilerActivity.CPU, profiler.ProfilerActivity.CUDA], with_stack=True, )as p: model(inputs) p.export_stacks( f&quot;/tmp/profiler/stacks_cpu.txt&quot;, &quot;self_cpu_time_total&quot;) p.export_stacks( f&quot;/tmp/profiler/stacks_cuda.txt&quot;, &quot;self_cuda_time_total&quot;) </code></pre> <h5>Environment information</h5> <ul> <li>Running on Docker image pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime</li> <li>torch: 2.0.0</li> <li>torchvision: 0.15.0</li> <li>Python: 3.10.9</li> </ul> <p><strong>Note:</strong> I reproduced it on the bare docker image.</p> <h2>What I tried</h2> <p>The very weird thing is that when I print the table from my script, I can see the trace. I give the exact snippet of code I use for that, I just put them right after the snippet of code above.</p> <pre class="lang-py prettyprint-override"><code>print(p.key_averages(group_by_stack_n=5).table( sort_by=&quot;self_cuda_time_total&quot;, row_limit=2)) </code></pre> <p><strong>Update</strong>: The printing option is also not working. It prints the table but not the stacks. With debug I can see the function <code>_build_table</code> in module <code>torch.autograd.profiler_util</code>. On <a href="https://github.com/pytorch/pytorch/blob/a204f7f51810f00acd4fd4737e831f0ca9ff0e9c/torch/autograd/profiler_util.py#L794" rel="nofollow noreferrer">Line 794</a>, the <code>stacks</code> variable is an empty list. (<code>_build_table</code> is called on <code>table</code> method in code snippet above). Also, in <code>key_averages</code> method of the class <code>EventList</code> - which is called in <code>key_averages</code> of <code>profiler</code> class (used in the code snipped) - each event has an empty <code>stacks</code> attribute on <a href="https://github.com/pytorch/pytorch/blob/a204f7f51810f00acd4fd4737e831f0ca9ff0e9c/torch/autograd/profiler_util.py#L298" rel="nofollow noreferrer">line 298 </a>. So question is, why the stack is not filled in those events? I will investigate furthermore.</p>
<python><pytorch><profiling>
2023-05-04 08:50:54
1
474
MufasaChan
76,171,250
11,431,068
No module named AppOpener
<p>I am just trying to open apps using python and i am using AppOpener. While it seems pretty simple in online and all, by just importing and using like this</p> <pre><code>from AppOpener import open open(&quot;google chrome&quot;) </code></pre> <p>it seems to show me the error</p> <p>Traceback (most recent call last):</p> <blockquote> <p>File &quot;C:/Users/ansha/AppData/Local/Programs/Python/Python310/openn.py&quot;, line 1, in from AppOpener import open <strong>ModuleNotFoundError: No module named 'AppOpener'</strong></p> </blockquote> <p>I can use AppOpener from cmd tho. Do u guys have any solution?</p> <pre><code>from AppOpener import open open(&quot;google chrome&quot;) </code></pre> <p>Traceback (most recent call last):</p> <blockquote> <p>File &quot;C:/Users/ansha/AppData/Local/Programs/Python/Python310/openn.py&quot;, line 1, in from AppOpener import open <strong>ModuleNotFoundError: No module named 'AppOpener'</strong></p> </blockquote>
<python>
2023-05-04 08:47:43
2
695
Ashique Razak
76,171,206
2,509,396
Pytorch RuntimeError: "check_uniform_bounds" not implemented for 'Int'
<p>Unable to set datatype for <code>torch.rand()</code> to <code>int</code>. The same line of code when set to <code>double</code> <code>x = torch.rand(2,2,dtype=torch.double)</code> didnt throw any exception and worked perfectly. Any reason why <code>int</code> is throwing RuntimeError?</p> <p>Python version: 3.9.6</p> <p>Here is the code snippet:</p> <pre><code>import torch x = torch.rand(2,2,dtype=torch.int) y = torch.rand(2,2,dtype=torch.int) print(&quot;x is&quot;, x) print(&quot;y is&quot;, y) z = torch.add(x,y) print(z) </code></pre> <p><strong>Error Output:</strong></p> <pre><code>x = torch.rand(2,2,dtype=torch.int) Traceback (most recent call last): File &quot;/Training/main.py&quot;, line 4, in &lt;module&gt; x = torch.rand(2,2,dtype=torch.int) RuntimeError: &quot;check_uniform_bounds&quot; not implemented for 'Int' </code></pre>
<python><python-3.x><deep-learning><pytorch>
2023-05-04 08:41:10
2
525
Harish Kannan
76,171,115
7,519,300
Custom Retry behaviour per each error with tenacity library
<p>I would like to make a custom retry behaviour for different error types. For example if I receive a 401 Error I would like to call a token refresh before retring a request. If I receive a 500 I would like to retry the error after some time. etc.</p> <p>Is this achievable with the tenacity library? <a href="https://tenacity.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://tenacity.readthedocs.io/en/latest/</a></p>
<python><request><tenacity>
2023-05-04 08:31:52
1
315
Eduard6421
76,171,047
5,980,143
send parameters to setUp
<p>I am writing some unit tests, before each test I init some seed data in the db. here is my example code which works:</p> <pre><code>import unittest class TestLogin(unittest.TestCase): def setUp(self) -&gt; None: init_db() def test_login1(self): test_something() def test_login2(self): test_something() </code></pre> <p>I would like to send parameters to the setup, each text should be initialized with some other user, something like:</p> <pre><code>import unittest class TestLogin(unittest.TestCase): def setUp(self, params) -&gt; None: init_db(params.user) @param(user='jim') def test_login1(self): test_something() @param(user='bob') def test_login2(self): test_something() </code></pre> <p>How can I parameterize the setup for each test case?</p>
<python><unit-testing>
2023-05-04 08:24:20
3
4,369
dina
76,170,912
12,415,855
Get all elemenets form site using selenium?
<p>i try to get all the elements from this site: <a href="https://www.kw.com/agent/search/CA/San%20Diego" rel="nofollow noreferrer">https://www.kw.com/agent/search/CA/San%20Diego</a></p> <p>Using the following code:</p> <pre><code>import time from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager if __name__ == '__main__': WAIT = 3 print(f&quot;Checking Browser driver...&quot;) options = Options() # options.add_argument('--headless') options.add_argument(&quot;start-maximized&quot;) options.add_experimental_option(&quot;prefs&quot;, {&quot;profile.default_content_setting_values.notifications&quot;: 1}) options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service(ChromeDriverManager().install()) driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = f&quot;https://www.kw.com/agent/search/CA/San%20Diego&quot; driver.get (link) for i in range(1,50): print(f&quot;Round {i}&quot;) driver.execute_script(&quot;window.scrollTo(0, 200000)&quot;) time.sleep(WAIT) soup = BeautifulSoup (driver.page_source, 'html.parser') tmpDIV = soup.find_all(&quot;div&quot;, {&quot;class&quot;: &quot;FindAgentRoute__row&quot;}) print(f&quot;{len(tmpDIV)} elements found&quot;) input(&quot;Press!&quot;) </code></pre> <p>But i don´t get all the elements form the site. On the top of the site it says 1242 agents but i only get 1150.</p> <p>How can i get all the agents elements from this site?</p>
<python><selenium-webdriver>
2023-05-04 08:06:59
1
1,515
Rapid1898
76,170,831
14,775,478
Why is tempfile.TemporaryFile showing invalid path on AWS ECS/docker?
<p>I am surprised to see very different behavior of <code>tempfile.TemporaryFile()</code> on Windows vs. Linux.</p> <p>As is best pratice, I am using Python's <code>tempfile</code> module, as also suggested <a href="https://stackoverflow.com/questions/8577137/how-can-i-create-a-tmp-file-in-python">here</a>.</p> <p>Intent: I want to create a temporary file, and pass its name/path to some function (not the file object). The following code works fine on Windows (Win 10, Python 3.8.6), but fails in a docker container on Amazon ECS (<code>python:3.8.6-slim-buster</code>).</p> <pre><code>import tempfile import boto3 with tempfile.TemporaryFile() as tmpfile: s3 = boto3.client('s3') s3.download_file(&quot;my_bucket&quot;, &quot;my_object&quot;, tmpfile.name) </code></pre> <p>This code fails not for S3 access rights/roles (that's all ok), but indeed due to the file name being invalid! A valid file name is produced on Windows (see output below), but is clearly invalid in the docker container on ECS:</p> <ul> <li>Windows: <code>tmpfile.name: C:\Users\micha\AppData\Local\Temp\tmppky63n59</code></li> <li>Docker/ECS: <code>tmpfile.name: 6</code></li> </ul> <p>Why does <code>tempfile.TemporaryFile().name</code> produce valid output in one case, but no the other? How can I fix it?</p>
<python><windows><docker><amazon-ecs><temporary-files>
2023-05-04 07:58:39
1
1,690
KingOtto
76,170,810
6,936,489
tweepy/twitter api v2 : retrieve tweets on free access
<p>I'm trying to authenticate to twitter's new API (v2) using tweepy and retrieve tweets but encounter a strange error related to the authentication process.</p> <p>I'm currently using the free access to the API.</p> <p>Code sample :</p> <pre><code>import tweepy # Authentification OAuth 1.0a User Context to retrieve my own data dict_twitter_api = { &quot;consumer_key&quot;: &quot;blah&quot;, &quot;consumer_secret&quot;: &quot;blah&quot;, &quot;access_token&quot;: &quot;blah&quot;, &quot;access_token_secret&quot;: &quot;blah&quot; } client = tweepy.Client(**dict_twitter_api) # If you're working behind a corporate proxy, # client.session.proxies = { # &quot;http&quot;: &quot;my-corporate-proxy&quot;, # &quot;https&quot;: &quot;my-corporate-proxy&quot;, # } print(client.get_me()) # &lt;-- this works well print(client.get_home_timeline()) </code></pre> <p>Traceback result :</p> <pre><code>&gt; Forbidden: 403 Forbidden &gt; When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal. </code></pre> <p>I've checked my different tokens and confirmed that the OAuth 1.0a user context authentication <a href="https://developer.twitter.com/en/docs/authentication/guides/v2-authentication-mapping" rel="noreferrer">should be working</a> to retrieve my own timeline.</p>
<python><twitter><tweepy>
2023-05-04 07:56:12
2
2,562
tgrandje
76,170,559
4,035,257
Converting all <NA> into nan in pandas dataframe - python
<p>I have a pandas dataframe with floats and strings and few <code>&lt;NA&gt;</code> and <code>nan</code>. I am trying to locate all the <code>&lt;NA&gt;</code> and convert them into <code>nan</code> using the following function <code>pd.to_numeric(....., errors='coerce')</code>. Also making sure that floats and strings remain untouched. Could you please help me with it? Also when I <code>df.to_dict('list')</code> I get few <code>''</code> and <code>&lt;NA&gt;</code>. Thank you.</p>
<python><pandas><dataframe><nan><na>
2023-05-04 07:26:01
1
362
Telis
76,170,397
2,059,689
Why pip installing qpth fails with "'install_requires' must be a string" error?
<p>I'm failing to install <code>qpth</code> package with pip inside of my virtual environment. It produces the following error and I fail to understand why. Would appreciate someone explaining what's the problem or how can I investigate it.</p> <pre><code>(venv) PS C:\Dev\python\my_test&gt; pip install qpth Collecting qpth Installing build dependencies ... done error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. ╰─&gt; [3 lines of output] error in qpth setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected end or semicolon (after version specifier) numpy&gt;=1&lt;2 ~~~^ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error </code></pre> <p>This is my setup (upgraded pip and setuptools just in case):</p> <pre><code>OS Windows, Python 3.11.1 pip in c:\dev\python\my_test\venv\lib\site-packages (23.1.2) setuptools in c:\dev\python\my_test\venv\lib\site-packages (67.7.2) </code></pre>
<python><windows><pip>
2023-05-04 07:02:50
1
3,200
vvv444
76,170,250
13,494,917
How to downgrade "Microsoft.Azure.WebJobs.Extensions.Storage" to version 5.0.1 (Python Azure Blob Trigger)
<p>I've been receiving an error when trying to run my blob trigger locally:</p> <blockquote> <p>Microsoft.Azure.WebJobs.Extensions.Storage.Blobs: Could not load type 'Microsoft.Azure.WebJobs.ParameterBindingData' from assembly 'Microsoft.Azure.WebJobs, Version=3.0.34.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Value cannot be null. (Parameter 'provider')</p> </blockquote> <p>I saw a solution <a href="https://github.com/Azure/azure-sdk-for-net/issues/34467" rel="nofollow noreferrer">here</a> that suggests downgrading &quot;Microsoft.Azure.WebJobs.Extensions.Storage&quot; to version 5.0.1 may solve the problem. But I'm not exactly sure how to do that, since I don't see a specific library named that in my project. My hunch tells me it might be a apart of this package <code>azure-storage-blob==12.14.1</code> but I'm not exactly sure. I'm even specifying the version I want, I don't see how it'd suddenly break.</p> <p>Any suggestions on how to do so, or other solutions I can try for the problem at hand would be greatly appreciated.</p>
<python><azure><azure-functions><azure-blob-storage>
2023-05-04 06:40:29
1
687
BlakeB9
76,170,151
5,942,100
Shift dataset dimension (Pandas)
<p>I would like to shift the dimension of a dataset below:</p> <p><strong>Data</strong></p> <pre><code>location range status type Q1 28 Q2 28 NY Low Re AA 2 0 NY Low Gr AA 2 2 NY Low Re BB 0 0 NY Low Gr BB 0 0 NY Low Re DD 0 2 NY Low Gr DD 2 2 NY Low Re SS 0 0 NY Low Gr SS 0 0 CA Med Re AA 0 4 CA Med Gr AA 2 0 CA Med Re BB 0 2 CA Med Gr BB 0 0 CA Med Re DD 0 6 CA Med Gr DD 2 0 CA Med Re SS 0 2 CA Med Gr SS 0 0 </code></pre> <p><strong>Desired</strong></p> <pre><code>location range status type quarter count NY Low Re AA Q1 28 2 NY Low Gr AA Q1 28 2 NY Low Re BB Q1 28 0 NY Low Gr BB Q1 28 0 NY Low Re DD Q1 28 0 NY Low Gr DD Q1 28 2 NY Low Re SS Q1 28 0 NY Low Gr SS Q1 28 0 CA Med Re AA Q1 28 0 CA Med Gr AA Q1 28 2 CA Med Re BB Q1 28 0 CA Med Gr BB Q1 28 0 CA Med Re DD Q1 28 0 CA Med Gr DD Q1 28 2 CA Med Re SS Q1 28 0 CA Med Gr SS Q1 28 0 NY Low Re AA Q2 28 0 NY Low Gr AA Q2 28 2 NY Low Re BB Q2 28 0 NY Low Gr BB Q2 28 0 NY Low Re DD Q2 28 2 NY Low Gr DD Q2 28 2 NY Low Re SS Q2 28 0 NY Low Gr SS Q2 28 0 CA Med Re AA Q2 28 4 CA Med Gr AA Q2 28 0 CA Med Re BB Q2 28 2 CA Med Gr BB Q2 28 0 CA Med Re DD Q2 28 6 CA Med Gr DD Q2 28 0 CA Med Re SS Q2 28 2 CA Med Gr SS Q2 28 0 </code></pre> <p><strong>Doing</strong></p> <pre><code>df = pd.melt(df, id_vars=['location', 'range'], var_name='quarter', value_name='count') </code></pre> <p>however not all of the data is pivoting. Any suggestion is helpful.</p>
<python><pandas><numpy>
2023-05-04 06:26:10
1
4,428
Lynn
76,169,954
5,942,100
Transform by creating new columns from regex transformation using Pandas
<p>I would like to create new columns from a transformation of values within a column.</p> <p><strong>Data</strong></p> <pre><code>location stat type Q1 28 Q2 28 Q3 28 Q4 28 NY Low_Re_Num_De AA 2 0 0 0 NY Low_Gr_Num_De AA 2 2 2 6 NY Low_Re_Num_De BB 0 0 0 0 NY Low_Gr_Num_De BB 0 0 2 4 NY Low_Re_Num_De DD 0 2 4 4 NY Low_Gr_Num_De DD 2 2 4 8 NY Low_Re_Num_De SS 0 0 0 0 NY Low_Gr_Num_De SS 0 0 0 2 CA Med_Re_Num_De AA 0 4 0 0 CA Med_Gr_Num_De AA 2 0 0 0 CA Med_Re_Num_De BB 0 2 0 0 CA Med_Gr_Num_De BB 0 0 0 2 CA Med_Re_Num_De DD 0 6 0 0 CA Med_Gr_Num_De DD 2 0 0 0 CA Med_Re_Num_De SS 0 2 0 0 CA Med_Gr_Num_De SS 0 0 0 0 </code></pre> <p><strong>Desired</strong></p> <pre><code>location range stat type Q1 28 Q2 28 Q3 28 Q4 28 NY Low Re AA 2 0 0 0 NY Low Gr AA 2 2 2 6 NY Low Re BB 0 0 0 0 NY Low Gr BB 0 0 2 4 NY Low Re DD 0 2 4 4 NY Low Gr DD 2 2 4 8 NY Low Re SS 0 0 0 0 NY Low Gr SS 0 0 0 2 CA Med Re AA 0 4 0 0 CA Med Gr AA 2 0 0 0 CA Med Re BB 0 2 0 0 CA Med Gr BB 0 0 0 2 CA Med Re DD 0 6 0 0 CA Med Gr DD 2 0 0 0 CA Med Re SS 0 2 0 0 CA Med Gr SS 0 0 0 0 </code></pre> <p><strong>Doing</strong></p> <pre><code>df['range'] = df['stat'].apply(lambda x: x.split('_')[0]) </code></pre> <p>However, this fails to create the second column.</p> <p>Any suggestion is appreciated.</p>
<python><pandas><numpy>
2023-05-04 05:53:04
1
4,428
Lynn
76,169,718
11,192,275
KNN imputer with nominal, ordinal and numerical variables
<p>I have the following data:</p> <pre><code># Libraries import pandas as pd import numpy as np from sklearn.impute import KNNImputer from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder from sklearn.metrics.pairwise import nan_euclidean_distances # Data set toy_example = pd.DataFrame(data = {&quot;Color&quot;: [&quot;Blue&quot;, &quot;Red&quot;, &quot;Green&quot;, &quot;Blue&quot;, np.nan], &quot;Size&quot;: [&quot;S&quot;, &quot;M&quot;, &quot;L&quot;, np.nan, &quot;S&quot;], &quot;Weight&quot;: [10, np.nan, 15, 12, np.nan], &quot;Age&quot;: [2, 4, np.nan, 3, 1]}) toy_example </code></pre> <p>I want to impute the variables <code>Color</code> (nominal), <code>Size</code> (ordinal), <code>Weight</code> (numerical) and <code>Age</code> (numerical) where I want to use KNN imputer using the distance metric <code>nan_euclidean</code> from <code>sklearn.impute.KNNImputer</code></p> <p>I now that I need to pre-process the data first. Therefore I came up with the following 2 solutions</p> <p>a. <strong>One hot encoding</strong> for the nominal variable where the <code>NaN</code> values are encoded as a category</p> <pre><code># Preprocessing the data color_encoder = OneHotEncoder() color_encoder.fit(X=toy_example[[&quot;Color&quot;]]) ## Checking categories and names ### A na dummy is included by default color_encoder.categories_ color_encoder.get_feature_names_out() # Create a new DataFrame with the one-hot encoded &quot;Color&quot; column color_encoded = pd.DataFrame(color_encoder.transform(toy_example[[&quot;Color&quot;]]).toarray(), columns=color_encoder.get_feature_names_out([&quot;Color&quot;])) color_encoded # Create a dictionary to map the ordinal values of the &quot;Size&quot; column to numerical values size_map = {&quot;S&quot;: 1, &quot;M&quot;: 2, &quot;L&quot;: 3} size_map toy_example[&quot;Size&quot;] = toy_example[&quot;Size&quot;].map(size_map) # Concatenate encoded variables with numerical variables preprocessed_data = pd.concat([color_encoded, toy_example[[&quot;Size&quot;, &quot;Weight&quot;, &quot;Age&quot;]]], axis=1) preprocessed_data ## Matrix of euclidean distances matrix_nan_euclidean = nan_euclidean_distances(X=preprocessed_data) matrix_nan_euclidean # Perform nearest neighbors imputation imputer = KNNImputer(n_neighbors=2) imputed_df = pd.DataFrame(imputer.fit_transform(preprocessed_data), columns=preprocessed_data.columns) ## Here I have a problem where the NaN value in the variable ## &quot;Color&quot; in relation to the 5th row is not imputed ### I was expecting a 0 in the Color_nan and a positive value ### in any of the columns Color_Blue, Color_Green, Color_Red imputed_df </code></pre> <p>As I mention in the comments of the code this solution is not feasible for the case of the nominal variable because I obtain the following result where the nominal variable is not imputed:</p> <pre><code> Color_Blue Color_Green Color_Red Color_nan Size Weight Age 0 1.0 0.0 0.0 0.0 1.0 10.0 2.0 1 0.0 0.0 1.0 0.0 2.0 13.5 4.0 2 0.0 1.0 0.0 0.0 3.0 15.0 2.5 3 1.0 0.0 0.0 0.0 1.5 12.0 3.0 4 0.0 0.0 0.0 1.0 1.0 12.5 1.0 </code></pre> <p>For the case of the ordinal variable at least the value is imputed where I need to decide the appropiate roundig method to apply (classical rounding, ceiling or floor)</p> <p>b. <strong>One hot encoding</strong> for the nominal variable where the <code>NaN</code> values are not encoded as a category and the rest of the dummy variables are considered <code>NaN</code></p> <pre><code># Preprocessing the data color_encoder = OneHotEncoder() color_encoder.fit(X=toy_example[[&quot;Color&quot;]]) ## Checking categories and names ### A na dummy is included by default color_encoder.categories_ color_encoder.get_feature_names_out() # Create a new DataFrame with the one-hot encoded &quot;Color&quot; column color_encoded = pd.DataFrame(color_encoder.transform(toy_example[[&quot;Color&quot;]]).toarray(), columns=color_encoder.get_feature_names_out([&quot;Color&quot;])) color_encoded ## Don't take into account the nan values as a separate category color_encoded = color_encoded.loc[:, &quot;Color_Blue&quot;:&quot;Color_Red&quot;] ## Because I don't know in advance the values of the dummy variables ## I will replace them with NaN values which is a logical solution taking ## into account that I don't know the value of this observation in relation ## to the &quot;Color&quot; variable color_encoded.iloc[4, :] = np.nan color_encoded # Create a dictionary to map the ordinal values of the &quot;Size&quot; column to numerical values size_map = {&quot;S&quot;: 1, &quot;M&quot;: 2, &quot;L&quot;: 3} size_map toy_example[&quot;Size&quot;] = toy_example[&quot;Size&quot;].map(size_map) # Concatenate encoded variables with numerical variables preprocessed_data = pd.concat([color_encoded, toy_example[[&quot;Size&quot;, &quot;Weight&quot;, &quot;Age&quot;]]], axis=1) preprocessed_data ## Matrix of euclidean distances matrix_nan_euclidean = nan_euclidean_distances(X=preprocessed_data) matrix_nan_euclidean # Perform nearest neighbors imputation imputer = KNNImputer(n_neighbors=2) imputed_df = pd.DataFrame(imputer.fit_transform(preprocessed_data), columns=preprocessed_data.columns) ## Here I have a problem because I will need to decide ## how to round the values using classical rounding, ## ceiling or floor in relation to the 5th row. However ## any of this methods are inconsistent because an ## observation cannot be Blue and Green at the same time ## but it needs to be at least Blue, Green or Red imputed_df </code></pre> <p>As I mention in the comments of the code this solution is not feasible for the case of the nominal variable because I obtain the following result where the nominal variable takes 2 values or doesn't take any value:</p> <pre><code> Color_Blue Color_Green Color_Red Size Weight Age 0 1.0 0.0 0.0 1.0 10.0 2.0 1 0.0 0.0 1.0 2.0 13.5 4.0 2 0.0 1.0 0.0 3.0 15.0 3.5 3 1.0 0.0 0.0 1.5 12.0 3.0 4 0.5 0.5 0.0 1.0 12.5 1.0 </code></pre> <p>Taking into account that a. and b. doesn't work, anyone knows how to impute a nominal variable in a consistent way using <strong>multivariate imputation</strong>?</p> <p>So, how can I impute the observations of the <code>toy_example</code> for the case of the nominal variable using <strong>multivariate imputation</strong>?</p>
<python><scikit-learn><imputation>
2023-05-04 05:01:59
1
456
luifrancgom
76,169,607
6,751,456
django Baseserializer remove certain fields from validated_data
<p>I need to validate a payload json based on, say, fields A, B. But I don't want these to show in <code>serializer.validated_data</code></p> <p>I tried to override <code>validated_data</code> base class method.</p> <pre><code>class MySerializer(serializers.Serializer): fieldA = ... fieldB = ... fieldC = ... def validate_fieldA(self, fieldA): # validate def validate_fieldB(self, fieldB): # validate def validated_data(self): data = super().validated_data() exclude_fields = self.context.get('exclude_fields', []) for field in exclude_fields: # providing a default prevents a KeyError # if the field does not exist data.pop(field, default=None) return data </code></pre> <p>Now when I try to access <code>serializer.validated_data</code> dict, it returns a method instead of deserialized dictionary.</p> <pre><code>&lt;bound method MySerializer.validated_data of .... </code></pre> <p>How do I accomplish this correctly?</p>
<python><django><serialization><django-serializer><django-validation>
2023-05-04 04:30:50
1
4,161
Azima
76,169,438
16,496,244
Passing custom color to FPSDisplay in pyglet
<h2>Context</h2> <p>I am working on a chip 8 emulator using Python and for the display/screen, I am using the <strong>pyglet</strong> library. However, I have the pyglet window implemented in a class.</p> <p>Here is the basic code structure of the class I implemented for the screen of emulator <em>(with only the lines of code I felt will be relevant to the issue I am facing and to this question)</em></p> <p><em><strong>peripheral.py</strong></em></p> <pre class="lang-py prettyprint-override"><code>PIXEL_COLORS = { 0: ( 97, 134, 169), 1: ( 33, 41, 70) } class Peripherals(Window): def __init__(self, width=DEFAULT_WIDTH, height=DEFAULT_HEIGHT, caption=DEFAULT_CAPTION, fps=False, *args, **kwargs): self.memory = kwargs.pop('memory') super(Peripherals, self).__init__(width, height, style=Window.WINDOW_STYLE_DEFAULT, *args, **kwargs) self.alive = 1 self.sprites = Batch() self.fps_display = FPSDisplay(window=self, color = (*PIXEL_COLORS[1], 127)) self.pixel_matrix = dict() def clear_screen(self): r = PIXEL_COLORS[0][0] / 255 g = PIXEL_COLORS[0][1] / 255 b = PIXEL_COLORS[0][2] / 255 glClearColor(r, g, b, 1) self.clear() def on_close(self): self.alive = 0 def on_draw(self): self.render() def draw_pixel(self, idx): # Draws pixel at (X, Y) def render(self): self.clear_screen() # Logic to create the batch of all the pixels # That are to be drawn according to the display buffer in memory self.sprites.draw() self.fps_display.draw() self.flip() </code></pre> <p><strong>Note</strong> - If it is any relevant to the question, I am also using the <code>venv</code> for this project, to make sure it doesn't break due to any unexpected updates and so that any one can run it on their machine.</p> <h2>Issue</h2> <p>The above code was working perfectly with custom color for FPSDisplay, that is until I had to restart my PC. I made sure to save everything before I restarted. However, after the restart, above code is no longer working and gives the following error</p> <pre><code>self.fps_display = FPSDisplay(window=self, color = (*PIXEL_COLORS[1], 127)) TypeError: FPSDisplay.__init__() got an unexpected keyword argument 'color' </code></pre> <p>What am I missing here? Is there an issue with my code?</p> <p>I checked the <a href="https://pyglet.readthedocs.io/en/latest/modules/window.html#pyglet.window.FPSDisplay" rel="nofollow noreferrer">documentation</a> for pyglet as well and it shows that it can take in two additional keyword arguments, <code>color</code> and <code>samples</code>.</p> <blockquote> <p>classFPSDisplay(window, color=(127, 127, 127, 127), samples=240)</p> </blockquote>
<python><colors><pyglet><chip-8>
2023-05-04 03:37:48
1
901
Harsh Kasodariya
76,169,008
12,913,109
OpenAI Whisper API (InvalidRequestError)
<p>I'm trying to use OpenAI Whisper API to transcribe my audio files. When I run it by opening my local audio files from disk, it worked perfectly. Now I'm developing a FastAPI endpoint to receive an audio file from the client and transcribe it.</p> <p>However, when I try to use the same file received by FastAPI endpoint directly, it will reject the file, claiming the file received is in invalid format.</p> <p>I tried to read and write the received file to the disk directly from the endpoint. Then opening the file from disk and using it in Whisper API, it works without any issues. Below is the code that shows it.</p> <pre><code>@app.post(&quot;/audio&quot;) async def summarise_audio(file:UploadFile): audio =await file.read() with open(&quot;testy.wav&quot;,'wb') as f: f.write(audio) x = open(&quot;testy.wav&quot;,'rb') transcript = openai.Audio.transcribe(&quot;whisper-1&quot;,x) # worked # transcript = openai.Audio.transcribe(&quot;whisper-1&quot;,file.file) # did not work return transcript </code></pre> <p>How would I go to solve this problem, could there be an issue with the file format received by FastAPI endpoint?</p>
<python><fastapi><openai-api><openai-whisper>
2023-05-04 01:36:25
1
844
mightyandweakcoder
76,168,937
1,914,781
config subplot axis with common configuration
<p>I would like to config all subplots with the same axis configuration. below code works but in subplot increase, we need add more elif to handle new axis config. how to simplify this code to handle subplots with variable name as argument?</p> <pre><code>xaxis,yaxis = get_xyaxis() for i,yname in enumerate(colnames): trace1 = go.Scatter( x=df[xname], y=df[yname], name=yname) fig.add_trace( trace1, row=i+1, col=1 ) if i == 0: fig.update_layout( xaxis1=xaxis, yaxis1=yaxis, ) elif i == 1: fig.update_layout( xaxis2=xaxis, yaxis2=yaxis, ) elif i == 2: fig.update_layout( xaxis3=xaxis, yaxis3=yaxis, ) </code></pre> <p>I try <code>&quot;xaxis%d&quot;%(i + 1)=xaxis</code> but it doesn't work!</p> <p>full code:</p> <p>import re import pandas as pd</p> <pre><code>import plotly.graph_objects as go from plotly.subplots import make_subplots def plot_line(df,pngname): fontsize = 10 title = &quot;demo&quot; xlabel = &quot;KeyPoint&quot; ylabel = &quot;Duration(secs)&quot; xname = df.columns[0] colnames = df.columns[1:] n = len(colnames) </code></pre> <p>O</p> <pre><code> fig = make_subplots( rows=n, cols=1, shared_xaxes=True, vertical_spacing = 0.02, ) xaxis,yaxis = get_xyaxis() for i,yname in enumerate(colnames): trace1 = go.Scatter( x=df[xname], y=df[yname], text=df[yname], textposition='top center', mode='lines+markers', marker=dict( size=10, line=dict(width=0,color='DarkSlateGrey')), name=yname) fig.add_trace( trace1, row=i+1, col=1 ) # TODO if i == 0: fig.update_layout( xaxis1=xaxis, yaxis1=yaxis, ) elif i == 1: fig.update_layout( xaxis2=xaxis, yaxis2=yaxis, ) elif i == 2: fig.update_layout( xaxis3=xaxis, yaxis3=yaxis, ) xpading=.05 fig.update_layout( margin=dict(l=20,t=40,r=10,b=40), plot_bgcolor='#ffffff',#'rgb(12,163,135)', paper_bgcolor='#ffffff', title=title, title_x=0.5, showlegend=True, legend=dict(x=.02,y=1.05), barmode='group', bargap=0.05, bargroupgap=0.0, font=dict( family=&quot;Courier New, monospace&quot;, size=fontsize, color=&quot;black&quot; ), ) fig.show() return def get_xyaxis(): xaxis=dict( title_standoff=1, tickangle=-15, showline=True, linecolor='black', color='black', linewidth=.5, ticks='outside', showgrid=True, gridcolor='grey', gridwidth=.5, griddash='solid',#'dot', ) yaxis=dict( title_standoff=1, showline=True, linecolor='black', color='black', linewidth=.5, showgrid=True, gridcolor='grey', gridwidth=.5, griddash='solid',#'dot', zeroline=True, zerolinecolor='grey', zerolinewidth=.5, showticklabels=True, ) return [xaxis,yaxis] def main(): data = [ ['AAA',1,2,3], ['BBB',3,2,3], ['CCC',2,1,2], ['DDD',4,2,3], ] df = pd.DataFrame(data,columns=['name','v1','v2','v3']) print(df) plot_line(df,&quot;./demo.png&quot;) return main() </code></pre> <p>Output: <a href="https://i.sstatic.net/BYeez.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BYeez.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-05-04 01:16:03
1
9,011
lucky1928
76,168,842
15,306,690
Configure Pyright to use Ruff as a linter
<p>I use Zed editor with Pyright and it works like a charm. However I want to use Ruff linter with Pyright but I don't find any documentation about how to achieve this for Zed editor. Do I have to specify linter directly in <code>pyrightconfig.json</code> and if so how?</p> <p>I'm a bit lost about langage server and how to specify linting for specific LSP.</p>
<python><linter><language-server-protocol><pyright><python-language-server>
2023-05-04 00:45:30
2
671
Howins
76,168,814
5,212,614
Geopy Geocoders returns the correct results in one way, but not another
<p>I'm trying to find the address of some universities. If I do this, everything is totally fine.</p> <pre><code>import pandas as pd import numpy as np import pandas as pd from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent='ryan-data') import pandas as pd df = ['Rutgers University, New Jersey', 'Bucknell University, Pennsylvania', 'Colgate University, New York', 'Cornell University, New York', 'Syracuse University, New York'] df = pd.DataFrame(df) df.columns=['school'] df.head() df['location'] = df['school'].apply(lambda x: geolocator.geocode(x)) df.head() </code></pre> <p>When I do that, everything comes out fine. However, if I read the EXACT SAME ADDRESSES from a CSV file, like this.</p> <pre><code>try: df['location'] = df['school'].apply(lambda x: geolocator.geocode(x)) except: df['location'] = 'not found' df.head() </code></pre> <p>The exact same code always throws an error. I don't want to type out every single address. I just want to read from a CSV and get the results. How can I do that?</p>
<python><python-3.x><geopy>
2023-05-04 00:35:54
1
20,492
ASH
76,168,787
3,388,962
Degenerate root finding problem: get the first value for which f(x)>0
<p>Given a function <em>f(x)</em> that is zero for all values <em>x</em> less than a critical value <em>c</em>, and non-zero for values <em>x&gt;c</em>.</p> <p>I want to approximate the critical value <em>c</em> using an optimization method. Because the function <em>f(x)</em> is expensive, I want to compute it as few times as possible. Therefore, computing <em>f(x)</em> for a predefined list of values <em>x</em> is not viable.</p> <p>Think of a function like the following one, with critical point <em>a=1.4142</em></p> <pre class="lang-py prettyprint-override"><code>def func(x): return max(x**2 - 2, 0) if x &gt; 0 else 0 </code></pre> <img src="https://i.sstatic.net/0JdQA.png" width="400" /> <p>I would have implemented this using some custom bisection function. However, I was wondering if this &quot;degenerate&quot; root finding problem can be solved using existing functions in SciPy or NumPy. I have experimented with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root_scalar.html" rel="nofollow noreferrer"><code>scipy.optimize.root_scalar</code></a>. However, it does not seem to support functions like the one above.</p> <pre class="lang-py prettyprint-override"><code>from scipy.optimize import root_scalar root_scalar(func, bracket=[-6, 8]) # Yields an error: f(a) and f(b) must have different signs. </code></pre>
<python><algorithm><numpy><optimization><scipy>
2023-05-04 00:27:27
2
9,959
normanius
76,168,695
2,642,356
A memory leak in a simple Python C-extension
<p>I have some code similar to the one below. That code leaks, and I don't know why. The thing that leaks is a simple creation of a Python class' instance inside a C code. The function I use to check the leak is <code>create_n_times</code> that's defined below and just creates new Python instances and derefrences them in a loop.</p> <p>This is not an MWE per-se, but part of an example. To make it easier to understand, what the code does is:</p> <ol> <li>The Python code defines the dataclass and registers it into the C-extension using <code>set_ip_settings_type</code>.</li> <li>Then, a C-extension function <code>create_n_times</code> is called and that function creates and destroys <code>n</code> instances of the Python dataclass.</li> </ol> <p>Can anyone help?</p> <p>In Python:</p> <pre class="lang-py prettyprint-override"><code>import c_api @dataclass class IpSettings: ip: str port: int dhcp: bool c_api.set_ip_settings_type(IpSettings) c_api.generate_n_times(100000) </code></pre> <p>In C++ I have the following code that's compiled into a Python extension called <code>c_api</code> (it's a part of that library's definition):</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;Python.h&gt; // ... Other functions including a &quot;PyInit&quot; function extern &quot;C&quot; { PyObject* ip_settings_type = NULL; PyObject* set_ip_settings_type(PyObject* tp) { Py_XDECREF(ip_settings_type); Py_INCREF(tp); ip_settings_type = tp; return Py_None; } PyObject* create_n_times(PyObject* n) { long n_ = PyLong_AsLong(n); for (int i = 0; i &lt; n_ ++i) { PyObject* factory_object = ip_settings_type; PyObject* args = PyTuple_New(3); PyTuple_SetItem(args, 0, PyUnicode_FromString(&quot;123.123.123.123&quot;)); PyTuple_SetItem(args, 1, PyLong_FromUnsignedLong(1231)); PyTuple_SetItem(args, 2, Py_False); PyObject* obj = PyObject_CallObject(factory_object, args); Py_DECREF(obj); } return Py_None; } } </code></pre>
<python><c><python-c-api>
2023-05-04 00:01:07
1
1,864
EZLearner
76,168,620
5,032,387
Unexpected output from fftconvolve
<p>I have two distributions where the probability density from [-0.05, 0) is 0 and defined using interpolation for [0,1].</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy import signal step = 1e-3 x = np.arange(-0.05, 1, step) x_0minus = x[x&lt;0] x_0plus = x[x&gt;0] pdf1 = np.concatenate([np.repeat(0, len(x_0minus)), np.interp(x_0plus, [0, 0.08, 0.28], [0, 80, 0])]) pdf2 = np.concatenate([np.repeat(0, len(x_0minus)), np.interp(x_0plus, [0, 0.1, 0.3, 0.31], [60, 60, 60, 0])]) </code></pre> <p>We plot these for reference. <code>plt.scatter(x = x, y = pdf1)</code> <a href="https://i.sstatic.net/0eoZD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0eoZD.png" alt="Pdf1" /></a></p> <p><code>plt.scatter(x = x, y = pdf2)</code> <a href="https://i.sstatic.net/DzZkR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DzZkR.png" alt="Pdf2" /></a></p> <p>I don't understand why when I convolve these using fftconvolve, the resultant distribution has a non-zero probability density for values below 0. I'd like to remedy this without setting the range to be [-1,1], because that would be a computational waste in my actual use-case.</p> <pre><code>res = signal.fftconvolve(pdf1 / pdf1.sum(), pdf2 / pdf2.sum(), 'same') plt.scatter(x = x, y = res) </code></pre> <p><a href="https://i.sstatic.net/BvzN2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BvzN2.png" alt="Convolution" /></a></p>
<python><scipy><statistics><distribution><convolution>
2023-05-03 23:36:47
0
3,080
matsuo_basho
76,168,534
6,861,165
What is "py -m pip install XXX"
<p><code>pip3 install [package]</code> works for me for installing the package but when I ran the file calling this package it said it can't find the package.</p> <p>When I tried <code>py -m pip install [package]</code> it installed the package and the package is found by the file.</p> <p>I'm curious about the function of <code>py -m</code>. Did I install the package in another place with the second command?</p>
<python><pip>
2023-05-03 23:13:11
0
572
Joy
76,168,470
654,019
How to create a binary mask from a yolo8 segmentation result
<p>I want to segment an image using yolo8 and then create a mask for all objects in the image with specific class.</p> <p>I have developed this code:</p> <pre><code>img=cv2.imread('images/bus.jpg') model = YOLO('yolov8m-seg.pt') results = model.predict(source=img.copy(), save=False, save_txt=False) class_ids = np.array(results[0].boxes.cls.cpu(), dtype=&quot;int&quot;) for i in range(len(class_ids)): if class_ids[i]==0: empty_image = np.zeros((height, width,3), dtype=np.uint8) res_plotted = results[0][i].plot(boxes=0, img=empty_image) </code></pre> <p>In the above code, <code>res_plotted</code> is the mask for one object, in RGB. I want to add all of these images to each other and create a mask for all objects with class 0 (it is a pedestrian in this example)</p> <p>My questions:</p> <ol> <li>How can I complete this code?</li> <li>Is there any better way to achieve this without having a loop?</li> <li>Is there any utility function in the yolo8 library to do this?</li> </ol>
<python><image-segmentation><semantic-segmentation><yolov8>
2023-05-03 22:55:08
3
18,400
mans
76,168,144
482,819
Types in Specialized Python Generic class
<p>Consider the following generic class, which is then specialized.</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar T = TypeVar(&quot;T&quot;) T1 = TypeVar(&quot;T1&quot;) T2 = TypeVar(&quot;T2&quot;) class X(Generic[T1, T2]): x: T1 y: T2 class Y(Generic[T], X[float, T]): pass class Z(Y[int]): pass </code></pre> <p>Is there a way to get the specialization values of <code>Z</code>, in this case <code>(float, int)</code>?</p> <p>As the order is important, the following example should</p> <pre class="lang-py prettyprint-override"><code>class Y(Generic[T], X[T, float]): pass class Z(Y[int]): pass </code></pre> <p>should return <code>(int, float)</code></p>
<python><generics><typing>
2023-05-03 21:34:36
1
6,143
Hernan
76,168,135
18,758,062
Get Netmiko script wait till command has finished running
<p>I'm using <code>netmiko</code> to SSH into an Ubuntu machine to run a Python script that takes a few minutes to finish running. The Python script does not write anything to <code>stdout</code> until just before it has finished running.</p> <p>However, the current implementation appears to be terminating the SSH connection early thinking that the Python script has already finished running when it has not.</p> <p>What is a good way to wait till the Python script has completed running (including having crashed with an exception) before closing the SSH connection.</p> <pre class="lang-py prettyprint-override"><code>from netmiko import ConnectHandler import time device = { &quot;device_type&quot;: &quot;linux&quot;, # ... } with ConnectHandler(**device) as ssh: ssh.write_channel(&quot;python slow_task.py\n&quot;) outputs = [] while True: time.sleep(10) output = ssh.read_channel() if not output: break print(output.strip()) outputs.append(output) </code></pre>
<python><python-3.x><ssh><paramiko><netmiko>
2023-05-03 21:32:39
1
1,623
gameveloster
76,168,125
5,507,389
ProcessingPool within Python class and Enum as instance attribute
<p>I'm using the multiprocessing module from the pathos library to parallelise a heavy process defined within a class. My class needs to have an Enum instance attribute defined and, unfortunately, this is breaking the multiprocessing fonctionnality. Here's a minimal example for how to replicate this error (I'm running on Python 3.10.8 and I don't have the possibility to run Python 3.11.x at work):</p> <pre><code>from enum import Enum from pathos.multiprocessing import ProcessingPool class MyClass: def __init__(self, group_dict): self.group_dict = group_dict self.tags_emum = Enum( value=&quot;MyEnum&quot;, names={v.upper(): v for v in self.group_dict.keys()}, type=str, ) def fnc1(self, names_list): pool = ProcessingPool(nodes=2) result = pool.map(self.fnc2, names_list) return result def fnc2(self, name): return len(name) if __name__ == &quot;__main__&quot;: inst = MyClass(group_dict={&quot;key1&quot;: &quot;val1&quot;, &quot;key2&quot;: &quot;val2&quot;}) print(inst.fnc1(names_list=[&quot;StackOverflow&quot;, &quot;Python&quot;, &quot;Question&quot;])) </code></pre> <p>Running this code will raise the following <code>PicklingError</code>:</p> <pre><code>_pickle.PicklingError: Can't pickle &lt;enum 'MyEnum'&gt;: it's not found as __main__.MyEnum </code></pre> <p>Removing the part where <code>self.tags_enum</code> is defined will make the code run just fine and produce the expected result: <code>[13, 6, 8]</code>.</p> <p>Given the above, I have the following two-part question:</p> <ul> <li>First, as I'm fairly new with mutltiprocessing, I would like to understand <em>why</em> this is failing.</li> <li>Then, I'm also looking for ways to fix this error. I should note that it is important that I have the <code>tags_enum</code> instance attribute set this way. Though it may not look important at all in this toy example, it is important in the real use case I'm working on.</li> </ul>
<python><class><enums><multiprocessing><pickle>
2023-05-03 21:30:08
1
679
glpsx
76,168,101
1,714,490
How to use PyInstaller to package a program using pyqtlet2?
<p>I have a complex program I need to package somehow (by &quot;somehow&quot; I mean PyInstaller is not mandatory, but it seems to be a popular choice, so...) to distribute it to run under Linux with minimal hassle.</p> <p>Program is very complex and uses several heavyweight libs, including <code>PyQt5</code>, <code>PyQt5-QtWeb</code>, <code>pyqtlet2</code>, <code>matplotlib</code> and <code>numpy</code>.</p> <p>First stumbling block is <code>pyqtlet2</code> which is a thin wrapper around <code>Leaflet</code> package (JavaScript).</p> <p>A simple example:</p> <pre class="lang-py prettyprint-override"><code>import os import sys from PyQt5.QtWidgets import QApplication, QVBoxLayout, QWidget from pyqtlet2 import L, MapWidget class MapWindow(QWidget): def __init__(self): # Setting up the widgets and layout super().__init__() self.mapWidget = MapWidget() self.layout = QVBoxLayout() self.layout.addWidget(self.mapWidget) self.setLayout(self.layout) # Working with the maps with pyqtlet self.map = L.map(self.mapWidget) self.map.setView([12.97, 77.59], 10) L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png').addTo(self.map) self.marker = L.marker([12.934056, 77.610029]) self.marker.bindPopup('Maps are a treasure.') self.map.addLayer(self.marker) self.show() if __name__ == '__main__': app = QApplication(sys.argv) widget = MapWindow() sys.exit(app.exec_()) </code></pre> <p>lifted directly from <code>pyqtlet2</code> on GitHub works as expected if launched directly:</p> <pre class="lang-bash prettyprint-override"><code>python3 -m venv venv venv/bin/pip install -U pip wheel pyqtlet2[pyqt5] PyInstaller venv/bin/python test.py </code></pre> <p>but fails if packaged with <code>PyInstaller</code>:</p> <pre class="lang-bash prettyprint-override"><code>( . venv/bin/activate ; pyinstaller test.py ) ( cd dist/test/ ; ./test ) </code></pre> <p>A quick perusal of generated <code>dist</code> directory shows it doesn't contain the needed pyqtlet2 files, so I assume I need to instruct <code>PyInstaller</code> to handle pyqtlet2, but I don't know how to do that, if at all possible.</p> <p>Question is: how do I convince <code>PyInstaller</code> to correctly package <a href="https://github.com/JaWeilBaum/pyqtlet2" rel="nofollow noreferrer">pyqtlet2</a>?</p>
<python><pyinstaller><pyqtlet>
2023-05-03 21:25:16
1
3,106
ZioByte
76,168,006
3,220,769
How do I get the log group associated with a Lambda function in boto3?
<p>I see that I can get the log group as an environment variable from within the Lambda itself through the <a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime" rel="nofollow noreferrer">environment variables</a> but if I have a Cloudwatch Event of a Lambda success or failure, how can I use the Lambda function name to identify the log group it's writing to? I don't see any lambda or cloudwatch boto3 client methods that provide this functionality.</p> <p>This provides some details about the Lambda function (including env variables) but doesn't include log group</p> <pre><code>client = boto3.client('lambda') client.get_function(FunctionName=function_name) </code></pre> <p>There's this on the cloudwatch side, but it requires a prefix to the log group, no way to tie a resource ARN to the log group ARN.</p> <pre><code>client = boto3.client('cloudwatch') client.describe_log_groups() </code></pre>
<python><amazon-web-services><boto3>
2023-05-03 21:08:50
2
3,327
TomNash
76,167,967
2,168,554
How to turn off sorting for Values name in Pandas pivot_table
<p>I am trying to create a simple pivot table from Pandas pivot_table but I am not able to turn off sorting on the Values name in output table.</p> <p>Parameter &quot;sort = False&quot; only helps in turning off sorting in Index name.</p> <pre><code>import pandas as pd data = { &quot;StateName&quot; : [&quot;Manipur&quot;, &quot;Manipur&quot;, &quot;Assam&quot;, &quot;Chennai&quot;, &quot;Delhi&quot;, &quot;Delhi&quot;, &quot;Assam&quot;], &quot;Male Accounts&quot; : [121, 987, 1043, 34, 209, 89, 90], &quot;Female Accounts&quot; : [23, 890, 2012, 7810, 765, 902, 23], &quot;Small Accounts&quot; : [90, 21, 98, 45, 56, 34, 90], &quot;Current Accounts&quot; : [121, 623, 90, 76, 23, 87, 91] } df = pd.DataFrame(data) print(df) pv_table = pd.pivot_table(df, index = &quot;StateName&quot;, sort = False, aggfunc = 'sum') print(pv_table) </code></pre> <p>Output:</p> <pre><code> StateName Male Accounts Female Accounts Small Accounts Current Accounts 0 Manipur 121 23 90 121 1 Manipur 987 890 21 623 2 Assam 1043 2012 98 90 3 Chennai 34 7810 45 76 4 Delhi 209 765 56 23 5 Delhi 89 902 34 87 6 Assam 90 23 90 91 Current Accounts Female Accounts Male Accounts Small Accounts StateName Manipur 744 913 1108 111 Assam 181 2035 1133 188 Chennai 76 7810 34 45 Delhi 110 1667 298 90 </code></pre> <p>As you can see the Values columns header are getting sorted alphabetically here. Is there anyway to turn off this sorting.</p> <p>Desired Output:</p> <p><a href="https://i.sstatic.net/PE0An.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PE0An.png" alt="enter image description here" /></a></p>
<python><pandas><pivot-table>
2023-05-03 21:03:19
2
577
Shaurya Gupta
76,167,901
12,349,101
Tkinter - Modify fill option when using tksvg
<p>Thanks to <a href="https://stackoverflow.com/questions/74797469/tcl-svg-gradient-transformation-not-working-gradient-silently-ignored">this question</a>, I discovered <a href="https://pypi.org/project/tksvg/" rel="nofollow noreferrer">tksvg</a>.</p> <p>I already know how to display an svg file:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk import tksvg window = tk.Tk() svg_image = tksvg.SvgImage(file=&quot;tests/orb.svg&quot;) label = tk.Label(image=svg_image) label.pack() window.mainloop() </code></pre> <p>and also how to do the same using svg data/string:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk import tksvg svg_string = &quot;&quot;&quot; &lt;svg aria-hidden=&quot;true&quot; focusable=&quot;false&quot; role=&quot;img&quot; viewBox=&quot;0 0 24 24&quot; class=&quot;&quot; fill=&quot;none&quot; stroke-width=&quot;2&quot; stroke=&quot;currentColor&quot; stroke-linecap=&quot;round&quot; stroke-linejoin=&quot;round&quot;&gt;&lt;g stroke-width=&quot;1.5px&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot;&gt;&lt;path stroke=&quot;none&quot; d=&quot;M0 0h24v24H0z&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/path&gt;&lt;line x1=&quot;4&quot; y1=&quot;20&quot; x2=&quot;7&quot; y2=&quot;20&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/line&gt;&lt;line x1=&quot;14&quot; y1=&quot;20&quot; x2=&quot;21&quot; y2=&quot;20&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/line&gt;&lt;line x1=&quot;6.9&quot; y1=&quot;15&quot; x2=&quot;13.8&quot; y2=&quot;15&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/line&gt;&lt;line x1=&quot;10.2&quot; y1=&quot;6.3&quot; x2=&quot;16&quot; y2=&quot;20&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/line&gt;&lt;polyline points=&quot;5 20 11 4 13 4 20 20&quot; stroke=&quot;#B8B8B8&quot; fill=&quot;none&quot; stroke-width=&quot;1.5px&quot;&gt;&lt;/polyline&gt;&lt;/g&gt;&lt;/svg&gt; &quot;&quot;&quot; window = tk.Tk() svg_image = tksvg.SvgImage(data=svg_string) label = tk.Label(image=svg_image) label.pack() window.mainloop() </code></pre> <p>but I'm wondering, how do I modify the fill option of the svg element (based on the svg data).</p> <p>Only way I thought of was to redraw/delete and create the svg element with a different color used in fill. Here is my attempt:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk import tksvg svg_string = '&lt;svg viewBox=&quot;0 0 100 100&quot;&gt;&lt;circle cx=&quot;50&quot; cy=&quot;50&quot; r=&quot;40&quot; fill=&quot;red&quot;/&gt;&lt;/svg&gt;' def change_color(event): global svg_string fill_color = &quot;blue&quot; if &quot;red&quot; in svg_string else &quot;red&quot; svg_string = '&lt;svg viewBox=&quot;0 0 100 100&quot;&gt;&lt;circle cx=&quot;50&quot; cy=&quot;50&quot; r=&quot;40&quot; fill=&quot;{}&quot;/&gt;&lt;/svg&gt;'.format(fill_color) canvas.delete(image_store[0]) svg_store.pop(0) svg_store.append(tksvg.SvgImage(data=svg_string)) image_store[0] = canvas.create_image(100, 100, image=svg_store[0]) window = tk.Tk() canvas = tk.Canvas(window, width=200, height=200) canvas.pack() image_store = [] svg_store = [] svg_store.append(tksvg.SvgImage(data=svg_string)) image_store.append(canvas.create_image(100, 100, image=svg_store[0])) canvas.tag_bind(image_store[0], &quot;&lt;Button-1&gt;&quot;, change_color) window.mainloop() </code></pre> <p>Here I tried to switch the color back and forth, so that when it is blue it becomes red, and when red, it becomes blue. But in this case, the color only seems to change once.</p> <p>I'm only trying to make a workaround since I don't think this is supported yet (feel free to correct me if I'm wrong, as I tried to look at the code, with my best attempt at understanding it at <a href="https://github.com/TkinterEP/python-tksvg" rel="nofollow noreferrer">the official github repository</a>).</p> <p>How can I do this? either using the above workaround (redraw element) or with a better alternative?</p> <p>I'm on Windows 10, Python version 3.8.10, Tkinter 8.6.9</p>
<python><tkinter><svg><tksvg>
2023-05-03 20:52:43
3
553
secemp9
76,167,886
5,437,090
Count dictionary keys and argmax in a list of list
<p><strong>Given</strong>:</p> <pre><code>import numpy as np list_of_list = [ ['ccc', 'cccc', 'b', 'c', 'b'], ['ab', 'b', 'b', 'aa'], ['c', 'b', 'c', 'c', 'b', 'c'], ['bb', 'd', 'c'], ] my_dict = {key: None for key in 'abcde'} </code></pre> <p><code>list_of_list</code> is simplified in this test example but it actually is a list of vocabularies in a list:</p> <pre><code>list_of_list = [ ['word1', 'word2', ... , 'wordN'], ['word1', 'word2', ... , 'wordM'], ['word1', 'word2', ... , 'wordK'], ... ] </code></pre> <p><strong>Goal</strong>:</p> <p>I'd like to get updated dictionary with pattern: <code>&quot;key&quot;: [index_of_max_occurrence, max_occurrence]</code> given the components of <code>list_of_list</code>.</p> <p><strong>My inefficient solution</strong>:</p> <p>The following code snippet, using <code>for loop</code>, works fine with quite small dictionary and list of list. However, for bigger sizes, it obviously turns out to be very time consuming and inefficient:</p> <pre><code>for k in my_dict: counters = list() for lst in list_of_list: counters.append( lst.count(k) ) if any(counters): my_dict[k] = [ np.argmax(counters) , max(counters) ] print(my_dict) # {'a': None, 'b': [0, 2], 'c': [2, 4], 'd': [3, 1], 'e': None} </code></pre> <p>Is there any better robust solution that I could speed up my program?</p>
<python><list><dictionary>
2023-05-03 20:50:22
1
1,621
farid
76,167,841
4,139,024
Set bar with lower value to foreground in histplot
<p>When creating a histogram plot with seaborn, I would like to dynamically put the bar with the lower count to the front. Below is a minimal example where now the blue bar is always in the front, no matter its count. For example, in the second bin, I would like the orange bar to be in the front. Basically, I am looking for something similar to <code>multiple=&quot;stack&quot;</code>, however without adding the columns up. Is that possible?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import seaborn as sns sns.set() df_A = pd.DataFrame(np.random.randint(0, 10, 100), columns=[&quot;value&quot;]) df_A[&quot;label&quot;] = &quot;A&quot; df_B = pd.DataFrame(np.random.randint(0, 10, 100), columns=[&quot;value&quot;]) df_B[&quot;label&quot;] = &quot;B&quot; df = pd.concat([df_A, df_B]) sns.histplot(df, x=&quot;value&quot;, bins=np.arange(0, 10, 1), hue=&quot;label&quot;) </code></pre> <p><a href="https://i.sstatic.net/ZWnMh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWnMh.png" alt="histogram" /></a></p>
<python><pandas><matplotlib><seaborn><histplot>
2023-05-03 20:44:21
1
3,338
timbmg
76,167,824
189,618
How to process telegram messages in parallel using pyrogram
<pre><code>from pyrogram import Client app = Client(client_name, api_id=api_id, api_hash=api_hash) async def handle_messages(client, message): await some_other_function() app.run() </code></pre> <p>If multiple messages are received together, it seems to process them one by one. How can I process multiple messages in parallel?</p>
<python><telegram><pyrogram>
2023-05-03 20:42:04
1
11,680
understack