| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,967,994
| 8,541,314
|
How to make a Bootstrap nav-link active in a dynamically generated for loop in a Django template
|
<p>I am trying to make an anchor nav-link active inside a for loop in a Django template. Every link passes an id, and based on id matching I would like to make the nav-link active.</p>
<p>This is my template HTML page. Here is the for loop where I check the condition to make the nav-link active, but I am not able to highlight the nav-link.</p>
<pre><code><ul class="nav nav-pills" role="tablist">
<li class="nav-item">
<a class="nav-link" href="{% url 'core_app:store' 'all' %}">All</a>
</li>
{% for type in store_type %}
{% if type.id == typeid %}
<li class="nav-item">
<a class="nav-link active" href="{% url 'core_app:store' type.id %}">{{type.name}}</a>
</li>
{% else %}
<li class="nav-item">
<a class="nav-link" href="{% url 'core_app:store' type.id %}">{{type.name}}</a>
</li>
{% endif %}
{% endfor %}
</ul>
</code></pre>
<p>This is my view code.
Here I get all <code>store_type</code> objects; based on the clicked link I pass an id and extract the type of store. Then I would like to highlight the active nav-link based on the conditional match.</p>
<pre><code>def store(request, id):
    if id == "all":
        store_list = Store.objects.all().order_by('id')
    else:
        store_list = Store.objects.all().filter(store_type=int(id)).order_by('id')
    return render(request, 'core_app/store/all_store.html', {'stores': store_list, 'typeid': id, "store": "active", 'store_list': 'active', 'store_type': StoreType.objects.all()})
</code></pre>
<p>Model (Store has StoreType)</p>
<pre><code>class StoreType(models.Model):
    name = models.CharField(max_length=255, blank=True, null=True)
    ...

    def __str__(self):
        return self.name


class Store(models.Model):
    store_type = models.ForeignKey(...)
    name = models.CharField(max_length=255, blank=True, null=True)
    ...
</code></pre>
<p>Please help me fix this or suggest a better solution. Thanks.</p>
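A likely cause (my assumption, not confirmed in the post): the URL keyword argument <code>id</code> arrives as a <code>str</code>, while <code>type.id</code> is an <code>int</code>, so <code>{% if type.id == typeid %}</code> never matches. A minimal illustration with hypothetical values:

```python
# The template comparison is between unlike types, so it is always False:
type_id = 3            # StoreType primary key, an int
url_kwarg = "3"        # value captured from the URL, a str

assert type_id != url_kwarg          # why "active" never renders
assert type_id == int(url_kwarg)     # normalizing in the view would fix it
```

If this is the cause, casting in the view (e.g. passing <code>int(id)</code> as <code>typeid</code> when <code>id != "all"</code>) or comparing via the <code>stringformat</code> filter in the template are the usual fixes.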
|
<python><jquery><django><twitter-bootstrap><django-templates>
|
2023-04-08 22:33:08
| 1
| 1,008
|
Raikumar Khangembam
|
75,967,784
| 3,103,957
|
OOP hierarchy in Python
|
<p>In Python, everything is an object.</p>
<p>If we create an instance of a user-defined class, this is how the instance is created.</p>
<p>Step 1: The <code>__call__()</code> method present in the <code>type</code> class is invoked.</p>
<p>Step 2: The <code>__call__</code> method in turn calls the <code>__new__()</code> and <code>__init__()</code> methods respectively to create the instance of the user-defined class.</p>
<pre><code>class Foo():
    def __init__(self):
        ...
</code></pre>
<p><code>Foo()</code> &lt;--- here, the above two steps happen.</p>
<p>Similarly when an object is created for the Foo class, the <code>__call__()</code> method needs to be called and it in turn
will invoke the <code>__new__()</code> and <code>__init__()</code> methods.</p>
<p>Could someone please help me find where this <code>__call__()</code> method is located?<br />
I am trying to understand how an object <strong>for</strong> (not of) the class is constructed.</p>
<p>Thanks!</p>
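A minimal sketch (my own illustration, not from the post): by default the method lives on <code>type</code> itself as <code>type.__call__</code>; a custom metaclass can intercept it to watch the two steps happen:

```python
# type.__call__ runs when a class object is "called". Subclassing type
# lets us observe it driving __new__ and then __init__:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        # super().__call__ is type.__call__: it invokes cls.__new__
        # and then cls.__init__ on the new instance.
        instance = super().__call__(*args, **kwargs)
        return instance

class Foo(metaclass=Meta):
    def __init__(self):
        self.ready = True

f = Foo()
assert isinstance(f, Foo) and f.ready
assert type(Foo) is Meta          # Foo() dispatches to Meta.__call__
```

So "the class of the class" (the metaclass, normally <code>type</code>) is where <code>__call__</code> is looked up when constructing an object for a class.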
|
<python><oop>
|
2023-04-08 21:33:17
| 1
| 878
|
user3103957
|
75,967,744
| 5,079,088
|
ROS2-Gazebo: show image on the side of a cube
|
<p>I am trying to display an ArUco marker on one (and only one) side of a cube in Gazebo.
I am using the following model, which is pasted into a world file:</p>
<pre><code> <model name='charger'>
<static>true</static>
<link name='charger_link'>
<collision name='charger_collision'>
<geometry>
<box>
<size>1 1 1</size>
</box>
</geometry>
</collision>
<visual name='charger_visual'>
<geometry>
<box>
<size>1 1 1</size>
</box>
</geometry>
<material>
<script>
<uri>file://../models/aruco_markers/aruco_marker_42.png</uri>
<name>charger_material</name>
</script>
<ambient>1 1 1 1</ambient>
<diffuse>1 1 1 1</diffuse>
<specular>0 0 0 1</specular>
<emissive>1 1 1 0</emissive>
<shader type='vertex'>
<normal_map>__default__</normal_map>
</shader>
</material>
</visual>
</link>
<pose>0 7.7 0.5 0 0 0</pose>
</model>
</code></pre>
<p>The directory structure is:</p>
<ul>
<li>src</li>
<li>src/worlds (this is where the world file is)</li>
<li>src/models</li>
<li>src/models/aruco_markers (this is where the images are)</li>
</ul>
<p>Nevertheless, all I see is a cube, with no image on its sides.
Any suggestions?</p>
|
<python><ros2><aruco><gazebo-simu>
|
2023-04-08 21:23:49
| 2
| 449
|
Steve Brown
|
75,967,713
| 3,435,121
|
Byte ordering of protocol number in raw packets
|
<p>On the Internet I found multiple occurrences of the following code snippets.</p>
<pre><code>ETH_P_ALL=0x0003 # from /usr/include/linux/if_ether.h
sock=socket.socket( socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs( ETH_P_ALL))
</code></pre>
<p>and</p>
<pre><code>ETH_P_IP=0x0800 # from /usr/include/linux/if_ether.h
sock=socket.socket( socket.AF_PACKET, socket.SOCK_RAW, ETH_P_IP)
</code></pre>
<p>Why do we need to change the byte ordering for ETH_P_ALL?<br />
Does Python include the byte swap (htons) internally? If yes, the first snippet would be wrong.</p>
|
<python><network-programming>
|
2023-04-08 21:14:48
| 0
| 675
|
user3435121
|
75,967,712
| 19,157,137
|
Logic behind Recursive functions in Python
|
<p>I have the following recursive function below that returns <code>k</code> raised to the power <code>s</code>.</p>
<p>However, I do not understand how it works. The base case is <code>1</code>, reached when <code>s</code> becomes <code>0</code>.</p>
<p>How is it that the returned value is actually <code>k^s</code> when <code>1</code> is returned? Since <code>1</code> is returned, I would expect <code>powertoK(k, s)</code> to be <code>1</code> all the time.</p>
<p>Are there 2 memory addresses, one holding <code>1</code> and the other <code>k^s</code>?</p>
<pre><code>def powertoK(k, s):
    if s != 0:
        return k * powertoK(k, s - 1)
    else:
        return 1

powertoK(3, 7)
</code></pre>
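To see why the final <code>1</code> does not become the overall result, it helps to expand the calls by hand (my own worked trace, not from the post):

```python
def powertoK(k, s):
    if s != 0:
        return k * powertoK(k, s - 1)
    else:
        return 1

# Each frame multiplies k by the result of the smaller subproblem, so 1
# is only the innermost return value, not what the outermost call returns:
#   powertoK(3, 2)
#   = 3 * powertoK(3, 1)
#   = 3 * (3 * powertoK(3, 0))
#   = 3 * (3 * 1) = 9
assert powertoK(3, 2) == 9
assert powertoK(3, 7) == 3 ** 7
```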
|
<python><python-3.x><function><recursion><return>
|
2023-04-08 21:14:31
| 1
| 363
|
Bosser445
|
75,967,590
| 3,042,018
|
When can/can't you modify a list during iteration?
|
<p>You are not supposed to modify a list during iteration apparently. I can see the potential for problems, but I'm not sure when it is or isn't OK. For example, the code below seems to work fine:</p>
<pre><code>import random
xs = [random.randint(-9,9) for _ in range(5)]
print(xs, sum(xs))
xs = [x * -1 for x in xs]
print(xs, sum(xs))
</code></pre>
<p>Is this OK because the list comprehension works differently to a normal <code>for</code> loop?</p>
<p>This version seems equally unproblematic:</p>
<pre><code>import random
xs = [random.randint(-9,9) for _ in range(5)]
print(xs, sum(xs))
for i in range(len(xs)):
    xs[i] *= -1
print(xs, sum(xs))
</code></pre>
<p>So it looks like for this kind of use case there is no issue, unless I'm missing something.</p>
<p>What then are the situations when it is or isn't OK to modify a list while iterating over it?</p>
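Neither snippet above actually mutates the list it is iterating over (the comprehension builds a new list; the second loop iterates over a <code>range</code> of indices). A case that does go wrong, as a minimal illustration of my own:

```python
# Removing from the list being iterated: the iterator advances by index,
# so after a removal the next element shifts into the current slot and
# gets skipped.
xs = [2, 2, 3]
for x in xs:
    if x % 2 == 0:
        xs.remove(x)

# The second 2 survived: it slid into the slot the iterator had already
# consumed. Iterating over a copy (for x in xs[:]) avoids this.
assert xs == [2, 3]
```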
|
<python><list><iteration><logic-error>
|
2023-04-08 20:43:22
| 2
| 3,842
|
Robin Andrews
|
75,967,527
| 16,136,190
|
EXE created with PyInstaller: "Cannot copy tree 'Y': not a directory"
|
<p>I converted a file, <code>main.py</code>, into <code>main.exe</code>, using PyInstaller. <code>main.py</code> doesn't throw any error, but <code>main.exe</code> does.</p>
<p>The script didn't execute fully, so I used a <code>try</code>/<code>except</code> and got this:</p>
<blockquote>
<p>Cannot copy tree 'Y': not a directory</p>
</blockquote>
<p>It looks like it's a problem with how I converted it into an EXE with PyInstaller.</p>
<p>This is the PyInstaller command I used to convert it into an EXE:</p>
<pre><code>pyinstaller --onefile --windowed --icon "X/logo.ico"
--name "H" --add-data "X/Y;Y/" --add-data
"X/dmode.png;." --add-data "X/lu.png;." --add-data "X/toggle.png;."
--add-data "X/white.png;." --paths "C:/Users/Username/venv/Lib/site-packages"
--hidden-import "tkinter" --hidden-import "os" --hidden-import "sys"
--hidden-import "distutils" --hidden-import "shutil"
--hidden-import "win32gui" "X/main.py"
</code></pre>
<p>And this is the file structure:</p>
<p><em><code>X</code></em></p>
<ul>
<li><p>build, dist and other folders created by PyInstaller and PyCharm (IDE)</p>
</li>
<li><p><em><code>Y</code></em></p>
<ul>
<li><em><code>JS</code></em>
<ul>
<li>cS.js</li>
</ul>
</li>
<li>i.jpeg</li>
<li>M.json</li>
</ul>
</li>
<li><p><em><code>Z</code></em> (the directory in which <code>Y</code> will be copied)</p>
</li>
<li><p>dmode.png</p>
</li>
<li><p>logo.ico</p>
</li>
<li><p>lu.png</p>
</li>
<li><p>main.py</p>
</li>
<li><p>toggle.png</p>
</li>
<li><p>white.png</p>
</li>
</ul>
<p><code>X</code> is the directory in which <code>main.py</code> is. This is apparently the code which causes the error:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
from os.path import join
from distutils.dir_util import copy_tree

if getattr(sys, 'frozen', False):
    application_path = os.path.dirname(sys.executable)
elif __file__:
    application_path = os.path.dirname(__file__)

if not os.path.exists(join(application_path, "Z")):
    copy_tree("Y", join(application_path, "Z"))
</code></pre>
<p>But I don't get any errors when I run the PyInstaller command in CMD. So, where have I gone wrong? Have I used <code>--add-data</code> incorrectly? Is <code>application_path</code> wrong (copied from <a href="https://stackoverflow.com/a/404750/16136190">this</a>)? If yes, please mention where I've used it incorrectly and where I have referenced the files/directories incorrectly. I prefer changing the command instead of the <code>.spec</code> file, so a PyInstaller (CMD) command would be helpful.</p>
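One common pattern for PyInstaller onefile builds (my own sketch; the helper name <code>resource_path</code> is hypothetical): <code>--add-data</code> files are unpacked to a temporary directory exposed as <code>sys._MEIPASS</code>, not next to the EXE, so a relative path like <code>"Y"</code> resolves against the wrong directory at run time:

```python
import os
import sys

def resource_path(rel: str) -> str:
    # In a onefile build, bundled data is unpacked under sys._MEIPASS;
    # during normal development, fall back to the current directory.
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, rel)

# copy_tree(resource_path("Y"), join(application_path, "Z")) would then
# read the bundled copy of Y regardless of where the EXE was started.
```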
|
<python><path><pyinstaller><distutils><file-structure>
|
2023-04-08 20:25:40
| 0
| 859
|
The Amateur Coder
|
75,967,398
| 20,443,528
|
How to write an if statement where conditions can be variable in Python?
|
<p>I want to write an if statement.
For example:</p>
<pre><code>if first == a and second == b and third == c and fourth == d:
# code
</code></pre>
<p>The problem is that the value of a, b, c, or d can be None. In that case, that particular condition should not be evaluated.</p>
<p>For example, if <code>b</code> is <code>None</code>, the if condition becomes:</p>
<pre><code>if first == a and third == c and fourth == d:
# code
</code></pre>
<p>How do I implement this in Python? Can someone please help?</p>
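One possible sketch (my own suggestion; the helper name <code>matches</code> is hypothetical): pair each value with its expected match and let <code>all()</code> skip pairs whose expected value is <code>None</code>:

```python
def matches(pairs):
    # A pair whose expected value is None is filtered out, so it never
    # participates in the comparison.
    return all(actual == expected
               for actual, expected in pairs
               if expected is not None)

first, second, third, fourth = 1, 2, 3, 4
a, b, c, d = 1, None, 3, 4   # b is None, so that condition is skipped
assert matches([(first, a), (second, b), (third, c), (fourth, d)])

b = 99                        # now b participates and fails
assert not matches([(first, a), (second, b), (third, c), (fourth, d)])
```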
|
<python><if-statement>
|
2023-04-08 19:54:00
| 3
| 331
|
Anshul Gupta
|
75,967,324
| 2,233,500
|
How to install Stable Diffusion web UI from AUTOMATIC1111 on a Mac Intel?
|
<p>I've recently tried to use the Stable Diffusion web UI from AUTOMATIC1111 on my Mac Intel. There is no documentation for Mac Intel, and when I tried to use the methods presented for other platforms, I ended up with a Python exception related to LZMA:</p>
<pre><code>ModuleNotFoundError: No module named '_lzma'
</code></pre>
<p>I was able to fix the problem thanks to <a href="https://stackoverflow.com/questions/59690698/modulenotfounderror-no-module-named-lzma-when-building-python-using-pyenv-on">this stack overflow post</a>. This allowed me to write an installation guide / documentation for Mac Intel that I share in my answer.</p>
<p>My configuration:</p>
<ul>
<li>MacBook Pro 2019</li>
<li>CPU 2,6 GHz 6-Core Intel Core i7</li>
<li>GPU Intel UHD Graphics 630 1536 MB</li>
<li>Memory 32 GB 2667 MHz DDR4</li>
<li>System macOS Ventura 13.2.1</li>
</ul>
|
<python><macos><artificial-intelligence><stable-diffusion>
|
2023-04-08 19:37:56
| 2
| 867
|
Vincent Garcia
|
75,967,131
| 9,757,174
|
Sentinel Values in Pydantic Model - Fastapi and Firestore
|
<p>I am creating a backend with FastAPI and Firestore. I want to set one of my model fields, <code>createdOn</code>, as a <code>firestore.SERVER_TIMESTAMP</code>. In Python, it's stored as a <code>Sentinel</code>. However, what data type should I use in my Pydantic model for it to allow a server timestamp?</p>
<p>My current model looks like this</p>
<pre class="lang-py prettyprint-override"><code>class NGO(BaseModel):
    displayName: str = Field(..., example="Lorem Ipsum")
    tagline: str = Field(..., example="Lorem Ipsum")
    description: str = Field(..., example="Lorem Ipsum")
    status: str = Field(..., example="active")
    logoUrl: str = Field(..., example="https://www.thehindu.com/static/theme/default/base/img/branding/logo.png")
    bannerUrl: list = Field(..., example=["https://www.thehindu.com/static/theme/default/base/img/branding/logo.png"])
    contactName: str = Field(..., example="Lorem Ipsum")
    contactEmail: str = Field(..., example="abcd@gmail.com")
    contactNumber: str = Field(..., example="+91 1234567890")
    website: str = Field(..., example="https://www.thehindu.com")
    associatedMagazines: Optional[list] = Field(None, example=["abc123"])
    associatedProjects: Optional[list] = Field(None, example=["abc123"])
    createdBy: Optional[str] = Field(None, example="Lorem Ipsum")
    updatedBy: Optional[str] = Field(None, example="Lorem Ipsum")
    isFeatured: Optional[bool] = Field(None, example=False)
    publicRegistryId: Optional[str] = Field(None, example="abc123")
    createdOn: Optional[datetime] = Field(None, example=datetime.now())
</code></pre>
<p>and this is how I am declaring the NGO object</p>
<pre class="lang-py prettyprint-override"><code>ngo = NGO(
    displayName=displayName,
    description=description,
    tagline=tagline,
    status=status,
    logoUrl=logoUrl,
    bannerUrl=bannerUrl,
    contactName=contactName,
    contactEmail=contactEmail,
    contactNumber=contactNumber,
    website=website,
    updatedBy=updatedBy,
    isFeatured=isFeatured,
    publicRegistryId=publicRegistryId,
    createdOn=firestore.SERVER_TIMESTAMP,
)
</code></pre>
<p>Currently, I am getting the following error.</p>
<pre><code>pydantic.error_wrappers.ValidationError: 1 validation error for NGO
createdOn
invalid type; expected datetime, string, bytes, int or float (type=type_error)
</code></pre>
<p>How should I go about saving the Firestore server timestamp using the Pydantic model? Alternatives to Pydantic are also welcome.</p>
|
<python><firebase><datetime><fastapi><pydantic>
|
2023-04-08 18:52:40
| 1
| 1,086
|
Prakhar Rathi
|
75,966,791
| 4,213,362
|
How to pass a file path from a C# application to a Python script and return the processed file path back to C#? (In Visual Studio)
|
<p><strong>This question is in regards to the references or some other similar feature in Visual Studio 2019. Both my C# and Python projects are part of a single solution</strong></p>
<p>I am developing a C# application using WPF that requires running Python scripts in the backend to perform specific operations on user-selected files. Due to strict dependencies on pandas, I cannot use IronPython, and I want to avoid PyQt because of its license fees, since I plan to sell the program.</p>
<p>I have created two projects: one in C# and the other in Python. The C# application will have a GUI with buttons for different tasks. When a button is clicked, the user can select a file, and the C# application should pass the file path to the Python script as an argument. The Python script will process the file and return the processed file path back to the C# application. Finally, the C# application will prompt the user to save the generated file.</p>
<p>I am looking for a way to accomplish this communication between the two projects. How can I pass the file path from the C# application to the Python script and retrieve the processed file path back in the C# application? What is the best way to handle the communication between the two projects to ensure efficient and reliable operation?</p>
<p>Please provide sample code or relevant resources to help me achieve this functionality in my C# application.</p>
<p><a href="https://i.sstatic.net/dCjSW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dCjSW.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/BGN1Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BGN1Z.png" alt="enter image description here" /></a></p>
<p>I tried a few things like IronPython and Python.NET, but I feel running a Python project through the C# GUI would be the best approach. I can't say how complicated the next item in the grid may be; I may require a third language.</p>
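On the Python side, the usual subprocess handshake (my own sketch, not from the post; <code>process</code> is a hypothetical stand-in for the pandas work) is to take the input path from <code>sys.argv</code> and print the output path to stdout:

```python
import sys

def process(path: str) -> str:
    # Hypothetical placeholder for the real pandas processing; it would
    # write the output file and return its path.
    return path + ".processed"

if __name__ == "__main__" and len(sys.argv) > 1:
    # C# starts: python script.py <input-path>  and reads this line back.
    print(process(sys.argv[1]))
```

On the C# side this pairs with <code>System.Diagnostics.Process</code> using <code>RedirectStandardOutput = true</code>: start the interpreter with the script and file path as arguments, then read the printed path from <code>StandardOutput</code>.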
|
<python><c#><windows><visual-studio><visual-studio-2019>
|
2023-04-08 17:42:36
| 1
| 811
|
Vishesh Mangla
|
75,966,711
| 5,875,276
|
XPath to select all href links that start with something
|
<p>I'm looking to get some information from this URL:</p>
<p><a href="https://www.bandsintown.com/c/madison-wi?date=2023-04-08T00%3A00%3A00%2C2023-04-09T23%3A00%3A00&date_filter=Choose+Dates&calendarTrigger=false" rel="nofollow noreferrer">https://www.bandsintown.com/c/madison-wi?date=2023-04-08T00%3A00%3A00%2C2023-04-09T23%3A00%3A00&date_filter=Choose+Dates&calendarTrigger=false</a></p>
<p>Using Selenium:</p>
<pre><code>find_href = driver.find_element(By.XPATH,'a[href^="https://www.bandsintown.com/e"')
</code></pre>
<p>Also, I've tried:</p>
<pre><code>find_href = driver.find_element(By.XPATH,'a[starts-with(@href="https://www.bandsintown.com/e")]')
</code></pre>
<p>Here's an example of a link I would want to get:</p>
<p><a href="https://www.bandsintown.com/e/1027653913-orquesta-salsoul-del-mad-at-the-bur-oak?came_from=253&utm_medium=web&utm_source=city_page&utm_campaign=event" rel="nofollow noreferrer">https://www.bandsintown.com/e/1027653913-orquesta-salsoul-del-mad-at-the-bur-oak?came_from=253&utm_medium=web&utm_source=city_page&utm_campaign=event</a></p>
<p>So I want to grab anything with <code>"www.bandsintown.com/e"</code></p>
<p>I'm getting the following error:</p>
<pre><code>selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: Unable to locate an element with the xpath expression a[href^="https://www.bandsintown.com/e" because of the following error:
SyntaxError: Failed to execute 'evaluate' on 'Document': The string 'a[href^="https://www.bandsintown.com/e"' is not a valid XPath expression.
</code></pre>
<p>Do I have a syntax error in my XPath statement?</p>
<h2>Update</h2>
<p>Tried this:</p>
<pre><code>find_href = driver.find_element(By.XPATH,'a[href^="https://www.bandsintown.com/e"]')
</code></pre>
<p>And I still get the same error:</p>
<pre><code>selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: Unable to locate an element with the xpath expression a[href^="https://www.bandsintown.com/e"] because of the following error:
SyntaxError: Failed to execute 'evaluate' on 'Document': The string 'a[href^="https://www.bandsintown.com/e"]' is not a valid XPath expression.
</code></pre>
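The selector mixes CSS attribute syntax (<code>a[href^="…"]</code>) with <code>By.XPATH</code>, so the XPath engine rejects it. A sketch of the two working forms (my own suggestion; only the selector strings are checked here, since running them needs a live browser):

```python
# XPath needs a path (//a) and the starts-with(attr, prefix) function;
# the ^= operator only exists in CSS selectors.
XPATH_SEL = '//a[starts-with(@href, "https://www.bandsintown.com/e")]'
CSS_SEL = 'a[href^="https://www.bandsintown.com/e"]'

# With Selenium these would be used as (not executed here):
#   driver.find_elements(By.XPATH, XPATH_SEL)
#   driver.find_elements(By.CSS_SELECTOR, CSS_SEL)

assert "starts-with" in XPATH_SEL and "^=" not in XPATH_SEL
```

Note also <code>find_elements</code> (plural) to collect every matching link rather than just the first.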
|
<python><html><xml><selenium-webdriver><xpath>
|
2023-04-08 17:28:42
| 1
| 1,839
|
DiamondJoe12
|
75,966,684
| 13,307,245
|
Determine if program was called from a terminal or by running an associated file
|
<h1>Background</h1>
<p>I am attempting to create an interpreter for a toy programming language. I built the interpreter with ANTLR4 + Python. Then, I compiled the interpreter using Nuitka and bundled it using Inno Setup. After installing with Inno Setup, the executable can be invoked in two ways:</p>
<ol>
<li>Invoking the name from the terminal. (Running <code>mylang myfile.code</code>)</li>
<li>Double-click an associated file. (Clicking <code>myfile.code</code> in the explorer)</li>
</ol>
<h1>Issue</h1>
<p>When I run the executable by clicking the associated file, a terminal is spawned, but it immediately disappears when the <code>myfile.code</code> program finishes, giving me no time to read the output. I found this <a href="https://stackoverflow.com/questions/577467/how-to-pause-a-script-when-it-ends-on-windows">SO question</a>, which led me to <code>os.system("pause")</code>. However, this makes my program pause even when it is invoked from the terminal normally, which is unnecessary and unwanted.</p>
<h1>Question</h1>
<p>Is there a way to detect which way the program was invoked so I can pause only when it was run from clicking an associated file?</p>
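On Windows one common heuristic (my own sketch, not from the post) uses <code>GetConsoleProcessList</code>: if the process is the only one attached to its console, the console was created just for it, which is what happens on a double-click:

```python
import sys

def launched_from_explorer() -> bool:
    # Windows-only heuristic: a console shared with a shell has more
    # than one attached process; a console spawned for a double-clicked
    # file has exactly one (us).
    if sys.platform != "win32":
        return False
    import ctypes
    buf = (ctypes.c_uint * 16)()
    count = ctypes.windll.kernel32.GetConsoleProcessList(buf, 16)
    return count == 1

if __name__ == "__main__" and launched_from_explorer():
    input("Press Enter to exit...")
```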
|
<python><windows><nuitka>
|
2023-04-08 17:23:00
| 1
| 579
|
SimonUnderwood
|
75,966,652
| 1,508,935
|
Cleaning-up sphinx-doc protobuf types
|
<p>I feel like I had this figured out before, but it's not working again. I am developing some code based on protobuf and gRPC, and in the documentation all of the types come out really messily:</p>
<blockquote>
<p>create_session(name: str, path: str, file_type: <google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper object at 0x10984e7d0> = 0, sample_rate: <google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper object at 0x109837710> = 2, bit_depth: <google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper object at 0x10982c6d0> = 2, io_setting: <google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper object at 0x109819d10> = 1, is_interleaved: bool = True)</p>
</blockquote>
<p>This is from the following function in my source</p>
<pre class="lang-py prettyprint-override"><code>import ptsl.PTSL_pb2 as pt  # my grpc-tools generated type header

# yada yada yada

def create_session(self,
                   name: str,
                   path: str,
                   file_type: 'SessionAudioFormat' = pt.SAF_WAVE,
                   sample_rate: 'SampleRate' = pt.SR_48000,
                   bit_depth: 'BitDepth' = pt.Bit24,
                   io_setting: 'IOSettings' = pt.IO_Last,
                   is_interleaved: bool = True) -> None:
    # etc...
</code></pre>
<p>My type annotations in the source are being converted into instances of that type rather than references linked into the rest of the documentation. Is there a way to make the documentation resolve the real type names and link to my documentation of those types where I've written it (like any other type)?</p>
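A possible direction (my assumption, not a confirmed fix): Sphinx autodoc can be told to keep annotations as written in the source instead of resolving them to runtime objects. With <code>from __future__ import annotations</code> in the module, a <code>conf.py</code> fragment like this maps the string annotations to documented targets:

```python
# conf.py sketch -- the alias targets are assumptions about where the
# enum wrappers would be documented in this project.
autodoc_typehints = "description"   # render hints in the body, not the signature
autodoc_type_aliases = {
    "SessionAudioFormat": "ptsl.PTSL_pb2.SessionAudioFormat",
    "SampleRate": "ptsl.PTSL_pb2.SampleRate",
    "BitDepth": "ptsl.PTSL_pb2.BitDepth",
    "IOSettings": "ptsl.PTSL_pb2.IOSettings",
}
```

Note <code>autodoc_type_aliases</code> only takes effect for modules that use postponed evaluation of annotations (the <code>__future__</code> import above).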
|
<python><python-sphinx><protobuf-python>
|
2023-04-08 17:16:21
| 1
| 9,464
|
iluvcapra
|
75,966,596
| 14,220,087
|
python dict get method runs the second argument even when key is in dict
|
<p>According to the docs, the <code>get</code> method of a dict can take two arguments: the key, and a value to return if that key is not in the dict. However, when the second argument is a function call, it runs regardless of whether the key is in the dict or not:</p>
<pre class="lang-py prettyprint-override"><code>def foo():
    print('foo')
params={'a':1}
print(params.get('a', foo()))
# foo
# 1
</code></pre>
<p>As shown above, the key <code>a</code> is in the dict, but <code>foo()</code> still runs. What happens here?</p>
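What happens is ordinary call semantics: arguments are evaluated before <code>get</code> ever runs, so <code>get</code> only receives <code>foo()</code>'s return value. A small illustration of my own, including a lazy alternative:

```python
calls = []

def default():
    # Records that it ran, then produces the default value.
    calls.append("ran")
    return 0

params = {'a': 1}
value = params.get('a', default())   # default() runs first, eagerly
assert value == 1
assert calls == ["ran"]

# A lazy alternative: only compute the default when the key is missing.
calls.clear()
value = params['a'] if 'a' in params else default()
assert value == 1
assert calls == []
```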
|
<python><python-3.x><dictionary>
|
2023-04-08 17:05:54
| 2
| 829
|
Sam-gege
|
75,966,447
| 11,628,437
|
How do I deterministically convert Pandas string columns into specific numbers?
|
<p>I asked a similar question <a href="https://stackoverflow.com/questions/75966231/how-do-i-convert-a-pandas-string-column-into-specific-numbers-without-using-a-lo/75966283#75966283">here</a> and was grateful for the community's help.</p>
<p>I have a problem wherein I want to convert the dataframe strings into numbers. This time I cannot manually map the strings to numbers, as this column is quite long in practice (the example below is just a minimal example). The constraint is that every time the same string is repeated, the number should be the same.</p>
<p>I tried using <code>pd.to_numeric</code> but it gave me an error -</p>
<pre><code>import pandas as pd
data = [['mechanical@engineer', 'Works on machines'], ['field engineer', 'Works on pumps'],
['lab_scientist', 'Publishes papers'], ['field engineer', 'Works on pumps'],
['lab_scientist','Publishes papers']]# Create the pandas DataFrame
df = pd.DataFrame(data, columns=['Job1', 'Description'])
role_to_code = {"mechanical@engineer": 0, "field engineer": 1, "lab_scientist": 2}
df['Job1'] = df['Job1'].map(role_to_code)
print(df.head())
df['Description'] = pd.to_numeric(df['Description'])
</code></pre>
<p>Here is the error -</p>
<pre><code>ValueError: Unable to parse string "Works on machines" at position 0
</code></pre>
<p>The solution for the above error as per similar SO posts is to specify a separator. But as the dataset is quite big, I don't want to specify multiple separators. Is there a way to automate this process?</p>
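A possible approach (my own suggestion, not from the linked post): <code>pd.factorize</code> assigns one integer per distinct string, in order of first appearance, so repeats of a string always get the same number and no manual mapping is needed:

```python
import pandas as pd

df = pd.DataFrame({
    "Job1": ["mechanical@engineer", "field engineer", "lab_scientist",
             "field engineer", "lab_scientist"],
    "Description": ["Works on machines", "Works on pumps", "Publishes papers",
                    "Works on pumps", "Publishes papers"],
})

# One deterministic code per distinct value, per column:
for col in ["Job1", "Description"]:
    df[col] = pd.factorize(df[col])[0]

assert df["Job1"].tolist() == [0, 1, 2, 1, 2]
assert df["Description"].tolist() == [0, 1, 2, 1, 2]
```

If the codes must be stable across different dataframes (not just within one), keep the mapping returned as the second element of <code>pd.factorize</code> and reuse it.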
|
<python><pandas>
|
2023-04-08 16:38:05
| 1
| 1,851
|
desert_ranger
|
75,966,430
| 11,281,877
|
How to include p-values in heatpmap
|
<p>I generated a correlation heatmap of 4 variables using seaborn. In each cell of the heatmap, I would like to include both the correlation and the p-value associated with the correlation. Ideally, the p-value should be on a new line and in brackets. I am trying to use the annot argument for displaying both the correlation and p-value in the heatmap. However, I am getting the error below and I am not sure how to fix it.</p>
<pre><code>---> 12 corr_matrix, pvalues = data.corr(method=lambda x, y: pearsonr(x, y))
ValueError: setting an array element with a sequence.
</code></pre>
<p>I would really appreciate it if someone could help me out. Thank you in advance for your time.</p>
<p>Here is my code</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
# Generate a synthetic dataset of 4 variables
np.random.seed(42)
data = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
# Calculate the correlation matrix and p-values
corr_matrix, pvalues = data.corr(method=lambda x, y: pearsonr(x, y))
# Generate the heatmap of the correlation matrix
mask = np.triu(np.ones_like(corr_matrix, dtype=bool))
sns.set(style='white')
fig, ax = plt.subplots(figsize=(10, 8))
sns.heatmap(corr_matrix, annot=True, fmt='.2f', cmap='coolwarm', mask=mask, cbar_kws={'shrink': 0.8},
ax=ax, vmin=-1, vmax=1, center=0)
for i in range(corr_matrix.shape[0]):
    for j in range(corr_matrix.shape[1]):
        if i >= j:
            continue
        value = '{:.2f}\n(p={:.2e})'.format(corr_matrix.iloc[i, j], pvalues.iloc[i, j])
        ax.text(j+0.5, i+0.5, value, ha='center', va='center', fontsize=10, color='white')
plt.title('Correlation Heatmap')
plt.tight_layout()
plt.show()
plt.close()
</code></pre>
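The error comes from <code>DataFrame.corr(method=callable)</code> expecting a single scalar per column pair, while <code>pearsonr</code> returns an <code>(r, p)</code> tuple — and <code>corr</code> returns one matrix, not two. One sketch of a workaround (my own): build the two matrices separately:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

np.random.seed(42)
data = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# Correlations the normal way; p-values pair by pair:
corr_matrix = data.corr()
pvalues = pd.DataFrame(
    [[pearsonr(data[a], data[b])[1] for b in data.columns] for a in data.columns],
    index=data.columns, columns=data.columns,
)

assert corr_matrix.shape == (4, 4) and pvalues.shape == (4, 4)
# p-values are symmetric, like the correlations themselves:
assert abs(pvalues.loc['A', 'B'] - pvalues.loc['B', 'A']) < 1e-12
```

The plotting loop in the question can then read from <code>corr_matrix</code> and <code>pvalues</code> unchanged.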
|
<python><matplotlib><seaborn><heatmap>
|
2023-04-08 16:34:01
| 1
| 519
|
Amilovsky
|
75,966,231
| 11,628,437
|
How do I convert a Pandas string column into specific numbers without using a loop?
|
<p>I have a column of strings that I'd like to convert into specific numbers. My current approach involves using a for loop, but I feel that's not how Pandas was designed to be used. Could someone suggest a more elegant solution that is applicable to more than one column?</p>
<p>Here is my code -</p>
<pre><code>import pandas as pd
data = [['mechanical@engineer', 'field engineer'], ['field engineer', 'lab_scientist'],
['lab_scientist', 'mechanical@engineer'], ['field engineer', 'mechanical@engineer'],
['lab_scientist','mechanical@engineer']]# Create the pandas DataFrame
df = pd.DataFrame(data, columns=['Job1', 'Job2'])
for index, row in df.iterrows():
    if row['Job1'] == "mechanical@engineer":
        row['Job1'] = 0
    elif row['Job1'] == "field engineer":
        row['Job1'] = 1
    elif row['Job1'] == "lab_scientist":
        row['Job1'] = 2
print(df.head())
</code></pre>
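A vectorized sketch (my own suggestion; <code>role_to_code</code> mirrors the mapping in the loop): build the mapping once and apply it to any number of columns with <code>Series.map</code>:

```python
import pandas as pd

data = [['mechanical@engineer', 'field engineer'],
        ['field engineer', 'lab_scientist'],
        ['lab_scientist', 'mechanical@engineer']]
df = pd.DataFrame(data, columns=['Job1', 'Job2'])

role_to_code = {"mechanical@engineer": 0, "field engineer": 1, "lab_scientist": 2}

# Apply the same mapping to every listed column, no Python-level loop:
df[['Job1', 'Job2']] = df[['Job1', 'Job2']].apply(lambda s: s.map(role_to_code))

assert df['Job1'].tolist() == [0, 1, 2]
assert df['Job2'].tolist() == [1, 2, 0]
```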
|
<python><pandas>
|
2023-04-08 15:58:20
| 2
| 1,851
|
desert_ranger
|
75,966,115
| 1,667,868
|
Django use same object for multiple views
|
<p>In Django I'm making my first app. I'm trying to find a way to share an object between multiple view methods.</p>
<p>E.g., in my views.py I have:</p>
<pre><code>def lights_on(request):
    hue_connector = HueConnector()
    print(hue_connector.turn_all_lights_on())
    return redirect('/')

def lights_off(request):
    hue_connector = HueConnector()
    print(hue_connector.turn_all_lights_off())
    return redirect('/')

def light_on(request, light_id):
    hue_connector = HueConnector()
    hue_connector.set_light_on_state(light_id, "true")
    html = "<html><body>" + light_id + "</body></html>"
    return redirect('/')
</code></pre>
<p>Let's say I want 10 more views in my views.py that need a HueConnector.</p>
<p>How should I create one that can be used by all of the methods?</p>
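One common sketch (my own suggestion; <code>HueConnector</code> here is a stand-in for the class in the post): create the connector once through a cached factory at module level, and have every view call the factory instead of the constructor:

```python
from functools import lru_cache

class HueConnector:               # stand-in for the real connector class
    pass

@lru_cache(maxsize=1)
def get_hue_connector() -> HueConnector:
    # Built on first call, reused on every call afterwards.
    return HueConnector()

# Every view gets the same instance:
assert get_hue_connector() is get_hue_connector()
```

A view would then read <code>hue_connector = get_hue_connector()</code>. Caveat worth noting: module-level singletons are per worker process, so under multiple WSGI workers each process holds its own connector.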
|
<python><django>
|
2023-04-08 15:36:33
| 3
| 12,444
|
Sven van den Boogaart
|
75,966,097
| 15,724,084
|
trying python tkinter creating widgets with classes (frame)
|
<p>I am trying to create custom widgets with classes. I have one class, called <code>CustomWidgets</code>. Inside it, I have a method for creating a <code>Frame</code> object; then I configure its <code>side</code> key with the <code>config</code> method, but it gives an error.</p>
<p>Code:</p>
<pre><code>from tkinter import *
from tkinter import ttk
class CustomWidgets():
    def __init__(self):
        self.root = Tk()
        self.frm_upp = self.frm()
        self.frm_upp.config(side='top')

    def frm(self):
        self.frm_0 = LabelFrame(master=self.root)
        self.frm_0.pack(expand='YES')
</code></pre>
<p>Error:</p>
<pre><code> self.frm_upp=self.frm().config(side='top')
AttributeError: 'NoneType' object has no attribute 'config'
</code></pre>
<p>I guess that when I assign the <code>self.frm_upp</code> variable to the result of <code>self.frm()</code>, nothing is assigned, and thus it gives a <code>NoneType</code> object.</p>
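The guess is right: <code>frm()</code> has no <code>return</code> statement, so it returns <code>None</code>. A tkinter-free reproduction of my own:

```python
# A method that builds and stores an object but never returns it
# implicitly returns None -- exactly what .config(...) then fails on:
class Widgets:
    def make(self):
        self.thing = object()   # created and stored on self...
        # ...but no return statement here

w = Widgets()
result = w.make()
assert result is None       # calling result.config(...) would raise
assert w.thing is not None  # the object itself exists, on self
```

Returning <code>self.frm_0</code> from <code>frm()</code> would make the assignment work (though note <code>side</code> is a <code>pack()</code> option rather than a <code>config()</code> option, which may be a second issue).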
|
<python><oop><tkinter><frame>
|
2023-04-08 15:32:36
| 1
| 741
|
xlmaster
|
75,965,988
| 13,454,049
|
Replacing loop variables requires time (when instantiating a wrapper)
|
<p>Why can't I instantiate this List wrapper in constant time? The list is already created (and not a generator), and I'm just saving it in my class. Lists are passed by reference, so please explain. I'm really confused, and can't seem to find answers.</p>
<p><strong>UPDATE</strong>: clearing the loop variables at the beginning of the loop offsets the time to there. But doesn't solve the issue.</p>
<p><strong>UPDATE 2</strong>: by extending the list I was able to avoid the time loss for clearing and generating the first half of the list again.</p>
<ul>
<li>Class:
<pre class="lang-py prettyprint-override"><code>"""
List instantiate test
"""
class List:
    """
    List wrapper
    """

    def __init__(self, a: list[Number]):
        self.list = a
</code></pre>
</li>
<li>Main:
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
Number = int | float
def main() -> None:
    generate_list: timedelta = timedelta()
    instantiate: timedelta = timedelta()
    size: int = 2 ** 25
    while True:
        start: datetime = datetime.now()
        lst: list[Number] = list(range(size, 0, -1))
        generate_list += datetime.now() - start
        print(f"Total time generating lists: {generate_list}")
        start = datetime.now()
        _: List = List(lst)
        instantiate += datetime.now() - start
        print(f"Total time instantiating List wrappers: {instantiate}")
        size *= 2
</code></pre>
</li>
<li>Output:
<pre class="lang-none prettyprint-override"><code>Total time generating lists: 0:00:00.849742
Total time instantiating List wrappers: 0:00:00.000019
Total time generating lists: 0:00:02.465526
Total time instantiating List wrappers: 0:00:00.300718
Total time generating lists: 0:00:05.719437
Total time instantiating List wrappers: 0:00:00.985957
...
</code></pre>
</li>
<li>Main with clearing:
<pre class="lang-py prettyprint-override"><code>def main() -> None:
    generate_list: timedelta = timedelta()
    instantiate: timedelta = timedelta()
    clearing: timedelta = timedelta()
    size: int = 2 ** 25
    lst: list[Number] = []
    lst_wrapper: List = None
    while True:
        start: datetime = datetime.now()
        lst.clear()
        lst_wrapper = None
        clearing += datetime.now() - start
        print(f"Total time clearing: {clearing}")
        start = datetime.now()
        lst = list(range(size, 0, -1))
        generate_list += datetime.now() - start
        print(f"Total time generating lists: {generate_list}")
        start = datetime.now()
        lst_wrapper = List(lst)
        instantiate += datetime.now() - start
        print(f"Total time instantiating List wrappers: {instantiate}")
        size *= 2
</code></pre>
</li>
<li>Output:
<pre class="lang-none prettyprint-override"><code>Total time clearing: 0:00:00.000005
Total time generating lists: 0:00:00.825880
Total time instantiating List wrappers: 0:00:00.000006
Total time clearing: 0:00:00.354571
Total time generating lists: 0:00:02.449832
Total time instantiating List wrappers: 0:00:00.000026
Total time clearing: 0:00:00.996483
Total time generating lists: 0:00:05.796596
Total time instantiating List wrappers: 0:00:00.000046
</code></pre>
</li>
<li>Main with extending:
<pre class="lang-py prettyprint-override"><code>def main() -> None:
generate_list: timedelta = timedelta()
instantiate: timedelta = timedelta()
start: int = 2 ** 24
end: int = 2 ** 25
start_time: datetime = datetime.now()
lst: list[Number] = list(range(0, -start, -1))
generate_list += datetime.now() - start_time
print(f"Total time generating lists: {generate_list}")
while True:
start_time = datetime.now()
lst.extend(range(-start, -end, -1))
generate_list += datetime.now() - start_time
print(f"Total time generating lists: {generate_list}")
start_time = datetime.now()
_: List = List(lst)
instantiate += datetime.now() - start_time
print(f"Total time instantiating List wrappers: {instantiate}")
start *= 2
end *= 2
</code></pre>
</li>
<li>Output:
<pre class="lang-none prettyprint-override"><code>Total time generating lists: 0:00:00.514641
Total time generating lists: 0:00:00.981933
Total time instantiating List wrappers: 0:00:00.000005
Total time generating lists: 0:00:01.851684
Total time instantiating List wrappers: 0:00:00.000025
Total time generating lists: 0:00:03.668982
Total time instantiating List wrappers: 0:00:00.000030
</code></pre>
</li>
</ul>
|
<python><performance><time-complexity>
|
2023-04-08 15:09:17
| 1
| 1,205
|
Nice Zombies
|
75,965,872
| 5,231,110
|
Get MIDI pitch-bend range in Python
|
<p>In MIDI, the pitch-bend range is <code>2</code> per default, but <a href="http://midi.teragonaudio.com/tech/midispec/rpn.htm" rel="nofollow noreferrer">can be modified via specific RPN messages, usually followed by specific "Data Entry" messages</a>.</p>
<p>Python MIDI file parsers such as <code>pretty_midi</code> and <code>mido</code> seem to ignore these MIDI messages.</p>
<p>How do I correctly read the pitch (including non-default pitch-bend range)?</p>
|
<python><midi>
|
2023-04-08 14:42:48
| 0
| 2,936
|
root
|
75,965,829
| 18,157,326
|
is there any way to solve python version conflict
|
<p>I am new to Python, and today I found that Python dependencies conflict very easily: each package only seems to work within a narrow version range. Is there a better way, or any tricks, to handle version conflicts?</p>
<pre><code>#8 75.87 The conflict is caused by:
#8 75.87 The user requested opencv-python-headless==4.7.0.72
#8 75.87 rembg 2.0.30 depends on opencv-python-headless~=4.6.0.66
#8 75.87
#8 75.87 To fix this you could try to:
#8 75.87 1. loosen the range of package versions you've specified
#8 75.87 2. remove package versions to allow pip attempt to solve the dependency conflict
</code></pre>
<p>I am using this command to generate the <code>requirement.txt</code>:</p>
<pre><code>pip3 freeze > requirement.txt
</code></pre>
<p>and this is the output:</p>
<pre><code>fastapi==0.95.0
openai==0.27.0
sse-starlette==1.3.3
SQLAlchemy~=1.4.6
pydantic~=1.10.5
rembg==2.0.30
pillow~=9.3.0
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
async-timeout==4.0.2
asyncer==0.0.2
attrs==22.2.0
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
coloredlogs==15.0.1
filetype==1.2.0
flatbuffers==23.3.3
frozenlist==1.3.3
h11==0.14.0
httptools==0.5.0
humanfriendly==10.0
idna==3.4
ImageHash==4.3.1
imageio==2.27.0
lazy_loader==0.2
llvmlite==0.39.1
mpmath==1.3.0
multidict==6.0.4
networkx==3.1
numba==0.56.2
numpy==1.23.5
onnxruntime==1.14.1
opencv-python-headless~=4.6.0.66
packaging==23.0
platformdirs==3.2.0
pooch==1.7.0
protobuf==4.22.1
pydantic==1.10.7
PyMatting==1.1.8
python-dotenv==1.0.0
python-multipart==0.0.6
PyWavelets==1.4.1
PyYAML==6.0
requests==2.28.2
scikit-image~=0.19.3
scipy~=1.9.3
sniffio==1.3.0
starlette==0.26.1
sympy==1.11.1
tifffile==2023.3.21
tqdm~=4.64.1
typing_extensions==4.5.0
urllib3==1.26.15
uvicorn~=0.20.0
uvloop==0.17.0
watchdog~=2.1.9
watchfiles==0.19.0
websockets==11.0
yarl==1.8.2
psycopg2-binary==2.9.6
</code></pre>
<p>Now I have found that the dependencies are in total chaos, and it seems impossible to tweak them one by one. Is there any one-step way to fix the dependencies?</p>
|
<python>
|
2023-04-08 14:34:18
| 1
| 1,173
|
spark
|
75,965,667
| 12,832,931
|
Load / Plot geopandas dataframe with GeometryCollection column
|
<p>I have a dataset like this:</p>
<p><a href="https://i.sstatic.net/0Ex0n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Ex0n.png" alt="Dataframe" /></a></p>
<p>You can reproduce this dataframe by loading the following dict:</p>
<pre><code>{'entity_ts': {0: '2022-11-01T00:00:56.000Z', 1: '2022-11-01T00:00:56.000Z'}, 'entity_id': {0: 'WAZE.jams.1172133072', 1: 'WAZE.jams.1284082818'}, 'street': {0: 'Ac. Monsanto, Benfica', 1: nan}, 'position': {0: {'type': 'GeometryCollection', 'geometries': [{'coordinates': [[[-9.168816, 38.741779], [-9.169618, 38.741353], [-9.16976, 38.741289]]], 'type': 'MultiLineString'}]}, 1: {'type': 'GeometryCollection', 'geometries': [{'coordinates': [[[-9.16116, 38.774899], [-9.16083, 38.774697]]], 'type': 'MultiLineString'}]}}, 'level': {0: 5, 1: 5}, 'length': {0: 99, 1: 36}, 'delay': {0: -1, 1: -1}, 'speed': {0: 0.0, 1: 0.0}}
</code></pre>
<p>My problem is:</p>
<p>I could not load this data properly using geopandas. In geopandas it is important to specify the geometry column, but this 'position' column format is quite new to me.</p>
<p>1 - I've tried to load using pandas dataframe</p>
<pre><code>data=`pd.read_csv(data.csv, converters={'position': json.loads})`
</code></pre>
<p>2 - Then I've converted to GeoDataFrame:</p>
<pre><code>import geopandas as gpd
import contextily as ctx
crs={'init':'epsg:4326'}
gdf = gpd.GeoDataFrame(
data, geometry=data['position'], crs=crs)
</code></pre>
<p>But I got this error:</p>
<pre><code>TypeError: Input must be valid geometry objects: {'type': 'GeometryCollection', 'geometries': [{'coordinates': [[[-9.168816, 38.741779], [-9.169618, 38.741353], [-9.16976, 38.741289]]], 'type': 'MultiLineString'}]}
</code></pre>
|
<python><dataframe><gis><geopandas><shapely>
|
2023-04-08 14:03:34
| 1
| 542
|
William
|
75,965,610
| 1,574,054
|
Best way to create a numpy array of custom objects from data arrays?
|
<p>Suppose you have a python class which encapsulates some data:</p>
<pre><code>import numpy
from dataclasses import dataclass
@dataclass
class A:
a: float
b: float
</code></pre>
<p>One can create an instance of <code>A</code> containing arrays instead of simple floats:</p>
<pre><code>a_vals = numpy.linspace(0, 1)
b_vals = numpy.linspace(0, 1)
ma, mb = numpy.meshgrid(a_vals, b_vals, indexing="ij")
# Now the instance
a_instance = A(ma, mb)
</code></pre>
<p>Alternatively, one can create an array of <code>A</code> instances</p>
<pre><code>a_instances = numpy.empty((50, 50), dtype=object)
for i in range(50):
for j in range(50):
a_instances[i, j] = A(a_vals[i], b_vals[j])
</code></pre>
<p>But this seems to be much more cumbersome, with actual loops being involved. Is there a better way to create the array of <code>A</code> instances over <code>a_vals</code> and <code>b_vals</code> (or their meshgrids)? What is the way of doing this intended by numpy?</p>
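For what it's worth, one loop-free candidate I came across is <code>numpy.frompyfunc</code>, which wraps a Python callable as a ufunc and broadcasts it over the meshgrids — though I assume it still calls the constructor once per element under the hood, and it produces an object-dtype array:

```python
import numpy
from dataclasses import dataclass

@dataclass
class A:
    a: float
    b: float

a_vals = numpy.linspace(0, 1)
b_vals = numpy.linspace(0, 1)
ma, mb = numpy.meshgrid(a_vals, b_vals, indexing="ij")

# frompyfunc(A, 2, 1): treat A as a ufunc with 2 inputs and 1 output,
# yielding an object array of A instances with the meshgrid shape
make_A = numpy.frompyfunc(A, 2, 1)
a_instances = make_A(ma, mb)
```

Is this the intended idiom, or does numpy discourage object arrays like this entirely?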
|
<python><arrays><numpy><numpy-ndarray>
|
2023-04-08 13:52:04
| 2
| 4,589
|
HerpDerpington
|
75,965,605
| 9,642
|
How to persist LangChain conversation memory (save and load)?
|
<p>I'm creating a conversation like so:</p>
<pre><code>llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name=OPENAI_DEFAULT_MODEL)
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())
</code></pre>
<p>But what I really want is to be able to save and load that <code>ConversationBufferMemory()</code> so that it's persistent between sessions. There doesn't seem to be any obvious tutorials for this but I noticed "Pydantic" so I tried to do this:</p>
<pre><code>saved_dict = conversation.memory.chat_memory.dict()
cm = ChatMessageHistory(**saved_dict) # or cm = ChatMessageHistory.parse_obj(saved_dict)
</code></pre>
<p>But this fails:</p>
<pre><code>ValidationError: 6 validation errors for ChatMessageHistory
messages -> 0
Can't instantiate abstract class BaseMessage with abstract method type (type=type_error)
</code></pre>
<p>Thoughts? I'd love links to any sort of guide, repo, reference, etc.</p>
|
<python><pydantic><langchain><large-language-model>
|
2023-04-08 13:51:18
| 8
| 20,614
|
Neil C. Obremski
|
75,965,577
| 2,947,435
|
python: topic clusters with dendrogram
|
<p>I have a list of keywords that I want to group into clusters based on semantic meaning. I am trying to use the <code>dendrogram</code> algorithm. It is actually working well: it returns the cluster IDs, but I don't know how to associate each keyword with the appropriate cluster. Here is my code:</p>
<pre><code> def clusterize(self, keywords):
preprocessed_keywords = normalize(keywords)
# Generate TF-IDF vectors for the preprocessed keywords
tfidf_matrix = self.vectorizer.fit_transform(preprocessed_keywords)
# Use hierarchical agglomerative clustering to cluster the keywords based on their semantic similarity
linkage_matrix = linkage(tfidf_matrix.todense(), method="ward")
res = dendrogram(linkage_matrix, truncate_mode='lastp', p=self.n_clusters, no_plot=True)
clusters = res['leaves']
return clusters
</code></pre>
<p>This will return <code>[12, 5, 22]</code> because <code>self.n_clusters=3</code>, which is expected. So how can I make the association with the keywords?</p>
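For reference, I also looked at <code>scipy.cluster.hierarchy.fcluster</code>, which (if I understand the docs correctly) returns one cluster id per original observation instead of truncated leaves, so the keyword association would be direct — a small sketch of what I mean, on toy data standing in for my TF-IDF matrix:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# toy stand-in for the TF-IDF matrix: 4 "keywords" as 2-d points
keywords = ["cheap flights", "budget flights", "dog food", "puppy food"]
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])

Z = linkage(X, method="ward")
# labels[i] is the cluster id of keywords[i]
labels = fcluster(Z, t=2, criterion="maxclust")

clusters = {}
for keyword, label in zip(keywords, labels):
    clusters.setdefault(int(label), []).append(keyword)
```

But I am not sure whether <code>fcluster</code> with <code>criterion="maxclust"</code> gives the same partition as truncating the dendrogram at <code>p</code> leaves.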
|
<python><hierarchical-clustering><dendrogram>
|
2023-04-08 13:45:14
| 0
| 870
|
Dany M
|
75,965,466
| 16,305,340
|
np.linalg.svd is giving me wrong results
|
<p>I am trying to learn how SVD works so I can use it in PCA (principal component analysis), but it seems I get wrong results. I tried using <code>np.linalg.svd</code>, and this is my code:</p>
<pre><code>A = np.array([[2, 2],
[1, 1]])
u, s, v = np.linalg.svd(A, full_matrices=False)
print(u)
print(s)
print(v)
</code></pre>
<p>and this is the result I got :</p>
<pre><code>[[-0.89442719 -0.4472136 ]
[-0.4472136 0.89442719]]
[3.16227766e+00 1.10062118e-17]
[[-0.70710678 -0.70710678]
[ 0.70710678 -0.70710678]]
</code></pre>
<p>and I tried to get SVD decomposition on <a href="https://www.wolframalpha.com/input?i=svd%20%7B%7B2%2C%202%7D%2C%7B1%2C%201%7D%7D" rel="nofollow noreferrer">WolframAlpha</a> and I got these results:</p>
<p><a href="https://i.sstatic.net/Acfvb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Acfvb.png" alt="enter image description here" /></a></p>
<p>the magnitude of values seems correct but the sign is not correct, even I followed up with a video for a professor on <a href="https://www.youtube.com/watch?v=mBcLRGuAFUk" rel="nofollow noreferrer">MIT OpenCourseWare on youtube</a> and he give these results :</p>
<p><a href="https://i.sstatic.net/UsVKf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UsVKf.png" alt="enter image description here" /></a></p>
<p>which has the same magnitudes but different signs, so what could possibly have gone wrong?</p>
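To double-check, I reconstructed the matrix from the factors returned by numpy, flipping the sign of the first column of <code>u</code> together with the first row of <code>v</code> — my understanding (which may be wrong) is that such paired sign flips still give a valid SVD:

```python
import numpy as np

A = np.array([[2, 2],
              [1, 1]])
u, s, v = np.linalg.svd(A, full_matrices=False)

# flip the sign of the first left singular vector and, in tandem,
# of the first right singular vector: the product is unchanged
u2 = u.copy(); u2[:, 0] *= -1
v2 = v.copy(); v2[0, :] *= -1

print(np.allclose(u @ np.diag(s) @ v, A))    # True
print(np.allclose(u2 @ np.diag(s) @ v2, A))  # True
```

So both sign conventions reconstruct <code>A</code> exactly, which makes me suspect the signs are not actually an error — but I would like to confirm that.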
|
<python><numpy><linear-algebra><svd><wolframalpha>
|
2023-04-08 13:21:50
| 1
| 1,893
|
abdo Salm
|
75,965,447
| 10,392,393
|
dockerized Kafka service stuck at producer.send
|
<p>I am very new to Docker and Kafka, and have a simple Kafka Python publisher, shown below.</p>
<p>The following are in my dockerfile:</p>
<pre><code>FROM python:3.10
WORKDIR /app
COPY . /app
RUN pip install --user pip==23.0.1 && pip install pipenv && pipenv install --system
ENV ENVIRONMENT=production
CMD ["python3", "src/producer.py"]
</code></pre>
<p>as well as my yaml file for compose:</p>
<pre><code>version: '3'
services:
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
container_name: kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
publisher:
container_name: publisher
build:
context: .
dockerfile: Dockerfile
depends_on:
- kafka
- zookeeper
</code></pre>
<p>In <code>producer.py</code> I have:</p>
<pre><code>import json
from kafka import KafkaProducer,
sleep(10)
producer = KafkaProducer(bootstrap_servers=['kafka:9092'], api_version=(0, 10))
json_path = "/data"
with open(json_path, 'r') as f:
data = json.load(f)
print('Ready to publish')
for record in data:
producer.send(topic, json.dumps(record).encode('utf-8'))
print('Published message !!!')
producer.flush()
</code></pre>
<p>I first execute:</p>
<pre><code>sudo docker-compose -f docker-compose.yml up -d
</code></pre>
<p>and then:</p>
<pre><code>sudo docker run publisher
</code></pre>
<p>When I press Ctrl+C, I see the traceback printed, and only the output of <code>print('Ready to publish')</code>, never <code>print('Published message !!!')</code>.</p>
<p>However, when I simply run</p>
<pre><code> python3 src/producer.py
</code></pre>
<p>it prints both and publishes the messages without being stuck at the "send" method.</p>
<p>I have looked at a few other threads such as <a href="https://stackoverflow.com/questions/51630260/connect-to-kafka-running-in-docker">this one</a>, but none really helped finding the mistake.</p>
|
<python><docker><apache-kafka><docker-compose><kafka-python>
|
2023-04-08 13:17:06
| 2
| 979
|
Alejandro
|
75,965,243
| 2,592,835
|
how to fit random vector into list of vectors python
|
<p>say I have the following list of vectors</p>
<pre><code>vectors=[(10,0),(-10,0),(0,10),(0,-10)]
</code></pre>
<p>and I have the following random vector:</p>
<pre><code>vector=(200,-300)
</code></pre>
<p>how can I get the vector from the list that is the most similar to the vector that I am inputting in python?</p>
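To make "most similar" concrete (I am not sure which metric I actually need), this is the kind of thing I mean — a sketch comparing by direction (cosine similarity) and by position (Euclidean distance):

```python
import math

vectors = [(10, 0), (-10, 0), (0, 10), (0, -10)]
vector = (200, -300)

def cosine(a, b):
    # dot product divided by the product of the magnitudes
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

# most similar by direction
by_direction = max(vectors, key=lambda v: cosine(v, vector))
# most similar by position
by_distance = min(vectors, key=lambda v: math.dist(v, vector))

print(by_direction, by_distance)  # both give (0, -10) here
```

For this particular input both metrics agree, but I assume they can disagree in general, so which one is appropriate?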
|
<python><vector>
|
2023-04-08 12:37:25
| 2
| 1,627
|
willmac
|
75,965,211
| 9,078,185
|
Kivy: Set text attribute inside Screen class
|
<p>I am writing a Kivy project where my code will dynamically set the <code>text</code> property within a <code>Screen</code> class. Below is a MRE that recreates the problem, but my real code is much more complicated.</p>
<p>When I first wrote it, I tried something like <code>self.ids.my_label.text = 'This no worky'</code> but that threw an error:</p>
<blockquote>
<p>AttributeError: 'super' object has no attribute '<strong>getattr</strong>'</p>
</blockquote>
<p>I did a <code>print(self.ids)</code> and it is empty. Based on some other SO posts (such as <a href="https://stackoverflow.com/questions/63132728/kivy-attributeerror-super-object-has-no-attribute-getattr">here</a>) I added the line <code>s = self.root.get_screen('screen_one')</code> but that also throws an error:</p>
<blockquote>
<p>AttributeError: 'ScreenOne' object has no attribute 'root'</p>
</blockquote>
<p>I'm assuming the problem is that I'm calling the <code>ids</code> within the screen class, whereas every example I am finding is doing it from some other spot. I <em>sort of</em> get that calling <code>self.root</code> doesn't work because the <code>ids</code> doesn't live within the root, but every combo I could think of also doesn't work. I also tried <a href="https://stackoverflow.com/questions/63132728/kivy-attributeerror-super-object-has-no-attribute-getattr">this solution</a>, but either I didn't implement it right or it isn't my problem.
What's the solution?</p>
<pre><code>import kivy
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager, Screen
class ScreenOne(Screen):
def __init__(self, **kwargs):
super().__init__(**kwargs)
s = self.root.get_screen('screen_one')
s.ids.my_label.text = 'This no worky'
class ScreenTwo(Screen):
pass
class ScreenThree(Screen):
pass
class MyScreenMgr(ScreenManager):
pass
class MultiApp(App):
def build(self):
return MyScreenMgr()
sample_app = MultiApp()
sample_app.run()
</code></pre>
<p>And multi.kv:</p>
<pre><code>#:import NoTransition kivy.uix.screenmanager.NoTransition
<MyScreenMgr>:
transition: NoTransition()
ScreenOne:
ScreenTwo:
ScreenThree:
<ScreenOne>:
name:'screen_one'
BoxLayout:
Label:
id: my_label
Button:
text: "Go to Screen 2"
background_color : 0, 0, 1, 1
on_press:
root.manager.current = 'screen_two'
<ScreenTwo>:
name:'screen_two'
BoxLayout:
Button:
text: "Go to Screen 3"
background_color : 1, 1, 0, 1
on_press:
root.manager.current = 'screen_three'
<ScreenThree>:
name:'screen_three'
BoxLayout:
Button:
text: "Go to Screen 1"
background_color : 1, 0, 1, 1
on_press:
root.manager.current = 'screen_one'
</code></pre>
|
<python><kivy>
|
2023-04-08 12:30:35
| 1
| 1,063
|
Tom
|
75,965,153
| 19,356,117
|
How to release memory space occupied by segment-anything?
|
<p>I run this code in JupyterLab, using Facebook's segment-anything:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
import numpy as np
import gc
def show_anns(anns):
if len(anns) == 0:
return
sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
ax = plt.gca()
ax.set_autoscale_on(False)
polygons = []
color = []
for ann in sorted_anns:
m = ann['segmentation']
img = np.ones((m.shape[0], m.shape[1], 3))
color_mask = np.random.random((1, 3)).tolist()[0]
for i in range(3):
img[:,:,i] = color_mask[i]
ax.imshow(np.dstack((img, m*0.35)))
sam = sam_model_registry["default"](checkpoint="VIT_H SAM Model/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
image = cv2.imread('Untitled Folder/292282 sample.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
sam.to(device='cuda')
masks = mask_generator.generate(image)
print(len(masks))
print(masks[0].keys())
plt.figure(figsize=(20,20))
plt.imshow(image)
show_anns(masks)
plt.axis('off')
plt.show()
del(masks)
gc.collect()
</code></pre>
<p>Before I run it, memory usage is about 200 MB, and when it completes, memory usage is about 3.4 GB. Even if I close the notebook or re-run this program, that memory is not released.
How can I solve this problem?</p>
|
<python><facebook><pytorch><image-segmentation>
|
2023-04-08 12:15:05
| 2
| 1,115
|
forestbat
|
75,965,067
| 12,297,666
|
Precision, Recall and F1 with Sklearn for a Multiclass problem
|
<p>I have a Multiclass problem, where <code>0</code> is my negative class and <code>1</code> and <code>2</code> are positive. Check the following code:</p>
<pre><code>import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
# Outputs
y_true = np.array((1, 2, 2, 0, 1, 0))
y_pred = np.array((1, 0, 0, 0, 0, 1))
# Metrics
precision_macro = precision_score(y_true, y_pred, average='macro')
precision_weighted = precision_score(y_true, y_pred, average='weighted')
recall_macro = recall_score(y_true, y_pred, average='macro')
recall_weighted = recall_score(y_true, y_pred, average='weighted')
f1_macro = f1_score(y_true, y_pred, average='macro')
f1_weighted = f1_score(y_true, y_pred, average='weighted')
# Confusion Matrix
cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot()
plt.show()
</code></pre>
<p>The metrics calculated with <code>Sklearn</code> in this case are the following:</p>
<pre><code>precision_macro = 0.25
precision_weighted = 0.25
recall_macro = 0.33333
recall_weighted = 0.33333
f1_macro = 0.27778
f1_weighted = 0.27778
</code></pre>
<p>And this is the confusion matrix:</p>
<p><a href="https://i.sstatic.net/8EHDx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8EHDx.png" alt="enter image description here" /></a></p>
<p>Are the <code>macro</code> and <code>weighted</code> averages the same because I have the same number of samples for each class? This is what I did manually.</p>
<p>1 - Precision = TP/(TP+FP). So for classes <code>1</code> and <code>2</code>, we get:</p>
<pre><code>Precision1 = TP1/(TP1+FP1) = 1/(1+1) = 0.5
Precision2 = TP2/(TP2+FP2) = 0/(0+0) = 0 (this returns 0 according Sklearn documentation)
Precision_Macro = (Precision1 + Precision2)/2 = 0.25
Precision_Weighted = (2*Precision1 + 2*Precision2)/4 = 0.25
</code></pre>
<p>2 - Recall = TP/(TP+FN). So for classes <code>1</code> and <code>2</code>, we get:</p>
<pre><code>Recall1 = TP1/(TP1+FN1) = 1/(1+1) = 0.5
Recall2 = TP2/(TP2+FN2) = 0/(0+2) = 0
Recall_Macro = (Recall1+Recall2)/2 = (0.5+0)/2 = 0.25
Recall_Weighted = (2*Recall1+2*Recall2)/4 = (2*0.5+2*0)/4 = 0.25
</code></pre>
<p>3 - F1 = 2*(Precision*Recall)/(Precision+Recall)</p>
<pre><code>F1_Macro = 2*(Precision_Macro*Recall_Macro)/(Precision_Macro*Recall_Macro) = 0.25
F1_Weighted = 2*(Precision_Weighted*Recall_Weighted)/(Precision_Weighted*Recall_Weighted) = 0.25
</code></pre>
<p>So, the Precision score is the same as <code>Sklearn</code>. But Recall and F1 are different. What did i do wrong here? Even if you use the values of Precision and Recall from <code>Sklearn</code> (i.e., <code>0.25</code> and <code>0.3333</code>), you can't get the <code>0.27778</code> F1 score.</p>
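One possibility I am now wondering about: maybe sklearn's macro average also includes class <code>0</code>, even though I treat it as negative. Averaging recall over all three classes by hand does reproduce sklearn's number:

```python
import numpy as np

y_true = np.array((1, 2, 2, 0, 1, 0))
y_pred = np.array((1, 0, 0, 0, 0, 1))

recalls = []
for c in (0, 1, 2):  # class 0 included in the average
    tp = np.sum((y_true == c) & (y_pred == c))
    fn = np.sum((y_true == c) & (y_pred != c))
    recalls.append(float(tp / (tp + fn)))

recall_macro = float(np.mean(recalls))
print(recalls, recall_macro)  # [0.5, 0.5, 0.0] 0.3333333333333333
```

Is that the right reading of what <code>average='macro'</code> does?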
|
<python><scikit-learn><metrics><multiclass-classification>
|
2023-04-08 11:55:18
| 1
| 679
|
Murilo
|
75,964,936
| 7,318,120
|
Configured debug type "python" is not supported for VS Code (remote explorer)
|
<p>I get this error message:</p>
<pre><code>Configured debug type "python" is not supported for VS Code
</code></pre>
<p>It happens when running code from the <code>remote explorer</code>.</p>
<p>On researching, it is a very similar issue as this problem:</p>
<p><a href="https://stackoverflow.com/questions/64194481/configured-debug-type-python-is-not-supported-for-vs-code">Configured debug type "python" is not supported for VS Code</a></p>
<p>However, the difference is that the error <strong>only arises</strong> when I use the <code>remote explorer</code> which connects to GitHub repos.</p>
<p>I tried the proposed solutions to the previous question (given the similarity), but they did not work, so I am stuck here.</p>
<p>The same code runs perfectly fine in my file explorer.</p>
<p>So basically, this:</p>
<p><a href="https://i.sstatic.net/iELa6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iELa6.png" alt="enter image description here" /></a></p>
<p>I wonder if somebody can resolve this, given the above?</p>
|
<python><visual-studio-code><vscode-remote>
|
2023-04-08 11:25:16
| 0
| 6,075
|
darren
|
75,964,883
| 13,300,255
|
Improve speed when parsing numeric data from big files using Python
|
<p>I'm reading a viewfactor file generated in Ansys (using VFOPT) and converting it into a 2d array in python. I know that the final viewfactor must be a 6982*6982 array.</p>
<p>The viewfactor (file <code>viewfactor.db</code>, going from a few MB up to several GB) is formatted as follows:</p>
<pre><code>Ansys Release 2020 R2 Build 20.2 Update 20200601 Format 0
RS3D
Number of Enclosures = 1
Enclosure Number = 1 Number of Surfaces = 6982
Element number = 28868 Face Number 2 TOTAL= 1.0000
0.0000 0.0000 0.0000 0.0029 0.0056 0.0000 0.0000 0.0105 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
(numbers numbers numbers)
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000
Element number = 28869 Face Number 2 TOTAL= 1.0000
0.0000 0.0000 0.0000 0.0029 0.0056 0.0000 0.0000 0.0105 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
etc etc
</code></pre>
<p>So, for the 1st element (element 28868) the viewfactor numbers are given (10 at a time, with the final line possibly having fewer), then there are two lines that I can ignore, then the numbers for the next element are given, and so on.</p>
<p>I'm currently reading the file in the following way:</p>
<pre><code>import numpy as np
# the file reading part here is rather quick, it takes a couple seconds
with open('viewfactor.db', 'r') as f:
lines = f.read().splitlines()
nfaces = 6982 # I already had this number from before
vf = [[None] * nfaces for _ in range(nfaces)]  # note: [[None]*nfaces]*nfaces would alias every row to the same list
row = 0
col = 0
# this is the part that needs optimizing
for line in lines[7:]:
if line == '': continue
elif line.startswith('Element number'): # reset counters
row += 1
col = 0
else: # save numbers
nums = list(map(float, line.split()))
vf[row][col:col+len(nums)] = nums
col += len(nums)
vf = np.array(vf)
</code></pre>
<p>This works, but the script takes forever to execute. I've tested it with a 350MB viewfactor file and it takes about 50 seconds (if needed, I can provide the file).</p>
<p>Is it possible to lower the execution time? Can I do some parallel computing magic? Use C? Do without loops?</p>
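In case it clarifies what I mean by "do without loops", here is a sketch I was considering (untested against the real file — it assumes every numeric line starts with a digit, minus sign, or dot, while headers and the <code>Element number</code> separators never do):

```python
import numpy as np

def parse_viewfactor(text, nfaces):
    # keep only the lines holding numbers; headers and the
    # "Element number = ..." separators start with a letter
    numeric = [
        line for line in text.splitlines()
        if line.strip() and line.strip()[0] in "0123456789-."
    ]
    flat = np.array(" ".join(numeric).split(), dtype=np.float64)
    return flat.reshape(nfaces, nfaces)

# tiny synthetic example in the same layout as the real file
demo = """Enclosure Number = 1    Number of Surfaces = 2
 Element number = 1 Face Number  2 TOTAL=  1.0000
    0.1000    0.2000
 Element number = 2 Face Number  2 TOTAL=  1.0000
    0.3000    0.4000
"""
vf = parse_viewfactor(demo, 2)
```

I have no idea whether building one big string and splitting it is actually faster on a multi-GB file, though.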
<p>Thanks for any suggestion!</p>
|
<python><performance><loops>
|
2023-04-08 11:12:21
| 1
| 439
|
man-teiv
|
75,964,846
| 964,478
|
"module has no attribute" in Python C module
|
<p>I am trying to build <a href="https://bazaar.launchpad.net/%7Eonboard/onboard/trunk/files" rel="nofollow noreferrer">this old project</a> but <code>./setup.py build</code> shows this warning (not sure if related, could be normal):</p>
<pre><code>WARNING: the following files are not recognized by DistUtilsExtra.auto:
<the list of .c/.h files in Onboard/osk/>
</code></pre>
<p>and when I run <code>./onboard</code></p>
<pre><code>Traceback (most recent call last):
File "/home/....../onboard/./onboard", line 35, in <module>
from Onboard.OnboardGtk import OnboardGtk as Onboard
File "/home/....../onboard/Onboard/OnboardGtk.py", line 48, in <module>
from Onboard.Keyboard import Keyboard
File "/home/....../onboard/Onboard/Keyboard.py", line 45, in <module>
from Onboard.KeyboardPopups import TouchFeedback
File "/home/....../onboard/Onboard/KeyboardPopups.py", line 256, in <module>
class LabelPopup(KeyboardPopupDrawable):
File "/home/....../onboard/Onboard/KeyboardPopups.py", line 264, in LabelPopup
_osk_util = osk.Util()
AttributeError: module 'Onboard.osk' has no attribute 'Util'
</code></pre>
<p>Any ideas why this happens? I have never worked with C modules in Python before.</p>
|
<python><python-c-api>
|
2023-04-08 11:04:38
| 0
| 3,827
|
Alex P.
|
75,964,805
| 648,045
|
Splitting and plotting the values on a python graph
|
<pre><code> show_id type title director cast country date_added release_year
 s17   Movie   Europe's Most Dangerous Man: Otto Skorzeny in Spain   Pedro de Echave García, Pablo Azorín Williams       September 22, 2021   2020
</code></pre>
<p>Here is a one-row sample as a dict of lists:</p>
<pre><code>{'show_id': ['s17'], 'type': ['Movie'], 'title': ["Europe's Most Dangerous Man: Otto Skorzeny in Spain"], 'director': ['Pedro de Echave García, Pablo Azorín Williams'], 'cast': [''], 'country': [''], 'date_added': ['September 22, 2021'], 'release_year': ['2020']}
</code></pre>
<p>I am working on the Netflix dataset to learn Python. I have a dataframe like the one below, and I am trying to split the directors into separate rows so that I can plot a graph analyzing the number of films directed by each director in each year. I have tried the following code, but it throws an error saying that the column show_id is not found. Is there a better way to split the director names for plotting?</p>
<pre><code>df_new = df[['title','director']]
dfd = df_new['director'].str.split(',')
dfd = dfd.explode()
dfd = dfd.reset_index()
dfd.columns = ['show_id','director_1']
df.merge(dfd['show_id','director_1'], on = 'show_id' , how ='left')
</code></pre>
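For context, this is roughly the shape I am after, sketched on a one-row frame (I am not sure this is the idiomatic way, which is why I am asking):

```python
import pandas as pd

df = pd.DataFrame({
    "title": ["Europe's Most Dangerous Man: Otto Skorzeny in Spain"],
    "director": ["Pedro de Echave García, Pablo Azorín Williams"],
    "release_year": ["2020"],
})

# split on ", " and explode: the original index is kept, so the
# exploded rows still line up with the remaining columns
exploded = (
    df.assign(director=df["director"].str.split(", "))
      .explode("director")
)
counts = exploded.groupby(["release_year", "director"]).size()
```

Would this avoid the merge on <code>show_id</code> entirely?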
|
<python><pandas><dataframe><matplotlib>
|
2023-04-08 10:57:38
| 2
| 4,953
|
logeeks
|
75,964,791
| 7,347,925
|
How to add the occurrence number after the selected character in a string?
|
<p>I have a string which contains multiple <code>Q =</code> substrings, and the goal is to add the occurrence number after each <code>Q</code>.</p>
<p>For example, <code>'Q = 1 t h \n Q = 2 t h \n Q = 3 t h'</code> should be <code>'Q1 = 1 t h \n Q2 = 2 t h \n Q3 = 3 t h'</code></p>
<p>Here's my method:</p>
<pre><code>import re
test = 'Q = 1 t h \n Q = 2 t h \n Q = 3 t h'
num = test.count('Q =')
pattern = re.compile('[Q]')
for n in range(num):
where = [m for m in pattern.finditer(test)]
test = test[:where[n].start()+1]+f'{n+1}'+test[where[n].start()+1:]
print(test)
</code></pre>
<p>Is there any better solution?</p>
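(For comparison, I also came up with a single-pass version using <code>re.sub</code> with a callable replacement and a counter, though I don't know if it is considered better:)

```python
import re
from itertools import count

test = 'Q = 1 t h \n Q = 2 t h \n Q = 3 t h'
counter = count(1)

# replace each "Q" that is followed by " =" in one pass,
# numbering occurrences via the counter
result = re.sub(r'Q(?= =)', lambda m: f'Q{next(counter)}', test)
print(result)
```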
|
<python><string>
|
2023-04-08 10:54:00
| 3
| 1,039
|
zxdawn
|
75,964,689
| 7,644,562
|
Django admin show only current user for User field as ForeignKey while creating and object
|
<p>I'm working on a Django (4.0+) project where I have a model named <code>Post</code> with a ForeignKey field <code>author</code>. A user with <code>is_staff</code> who is in the <code>Authors</code> group can create a <code>Post</code> object from the admin.</p>
<p>Now the problem: when that user clicks <code>Add Post</code>, the <code>Author</code> field should display only the current user, not the others.</p>
<p>Here's what i have tried:</p>
<p><strong>From <code>models.py</code>:</strong></p>
<pre><code>class Post(models.Model):
title = models.CharField(max_length=200, unique=True)
slug = models.SlugField(max_length=200, unique=True)
author = models.ForeignKey(User, on_delete=models.CASCADE)
updated_on = models.DateTimeField(auto_now= True)
content = models.TextField()
created_on = models.DateTimeField(auto_now_add=True)
thumbnail = models.ImageField()
category = models.ForeignKey(Category, on_delete=models.DO_NOTHING, related_name='category')
featured = models.BooleanField()
status = models.IntegerField(choices=STATUS, default=0)
class Meta:
ordering = ['-created_on']
def __str__(self):
return self.title
</code></pre>
<p><strong>From <code>admin.py</code>:</strong></p>
<pre><code>class PostAdmin(admin.ModelAdmin):
list_display = ('title', 'slug', 'status','created_on')
list_filter = ("status",)
search_fields = ['title', 'content']
prepopulated_fields = {'slug': ('title',)}
def get_queryset(self, request):
qs = super().get_queryset(request)
if request.user.is_superuser:
return qs
return qs.filter(author=request.user)
</code></pre>
<p>How can I achieve that?</p>
|
<python><django><django-models><django-admin><django-4.0>
|
2023-04-08 10:30:40
| 2
| 5,704
|
Abdul Rehman
|
75,964,641
| 11,416,654
|
Analyze image similarity
|
<p>I am trying to check the similarity of n images I saved in an array (composed of 1797 samples, each represented by 64 values). My goal is to save the L1 distance of each pair of images in an array of size 1797x1797 (the method to compute L1 is written below). At the moment I am using the following method to compare them, but it seems a bit slow and non-Pythonic. Is there a way to do the same with numpy or similar libraries that might improve performance?</p>
<pre><code>import numpy as np
def L1(x1, x2):
x1 = x1.astype(np.int64)
x2 = x2.astype(np.int64)
distanceL1 = np.sum(np.abs(x2 - x1))
return distanceL1
def L1_final(dataset):
images = dataset
    images = np.array(images).reshape([1797,64]) # the 64 values per image start as an 8x8 two-dimensional array
d = np.zeros([len(images), len(images)])
length = len(images)
for x1 in range(length):
img = images[x1]
for x2 in range(length):
d[x1,x2] = L1(img, images[x2])
return d
print(L1_final(digits.images))
</code></pre>
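To be explicit about what I hope is possible: a fully vectorized version along these lines (a broadcasting sketch — note it builds an (n, n, 64) intermediate, so memory might become a concern for n = 1797):

```python
import numpy as np

def L1_matrix(images):
    # flatten each image to one row, then broadcast pairwise differences:
    # result[i, j] = sum(|images[i] - images[j]|)
    X = images.reshape(len(images), -1).astype(np.int64)
    return np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)

# small check against the definition, on two 2x2 "images"
imgs = np.array([[[0, 1], [2, 3]],
                 [[4, 5], [6, 7]]])
d = L1_matrix(imgs)
```

Is this (or something like it) the recommended approach, or is there a memory-friendlier idiom?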
|
<python><numpy><machine-learning>
|
2023-04-08 10:18:40
| 1
| 823
|
Shark44
|
75,964,492
| 1,581,090
|
How to decrease the volume of sound using pyaudio?
|
<p>I am using <code>pyaudio</code> to play some sound stored in a <code>numpy</code> array <code>samples</code> (Python 3.9.6), using the following code:</p>
<pre><code>volume = 0.0002
output_bytes = (volume * samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
</code></pre>
<p>The sound should be extremely quiet (as the <code>volume</code> variable is 0.0002), but in fact it plays extremely loud. Setting the <code>volume</code> variable to some other value does not seem to have any effect.</p>
<p>How can I make the sound play more quietly? Or can I use another library for that? I am using a Mac, so maybe it is a macOS issue.</p>
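<p>One likely cause (an assumption, since the dtype of <code>samples</code> is not shown) is a dtype mismatch: <code>.tobytes()</code> does not convert, so if <code>volume * samples</code> is float64 (or an integer type), the raw bytes get reinterpreted as float32 by <code>paFloat32</code> and play as loud noise. A sketch of preparing the buffer explicitly (a generated sine stands in for <code>samples</code>, and no audio device is needed to run it):</p>

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
samples = np.sin(2 * np.pi * 440 * t)  # float64 sine, values in [-1, 1]

volume = 0.0002
# paFloat32 expects 32-bit floats in [-1.0, 1.0]; .tobytes() reinterprets
# rather than converts, so cast to float32 explicitly before writing.
scaled = (volume * samples).astype(np.float32)
output_bytes = scaled.tobytes()

print(scaled.dtype, len(output_bytes))  # float32, 4 bytes per sample
```

<p>With the buffer in genuine float32, <code>stream.write(output_bytes)</code> should honour the <code>volume</code> scaling.</p>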
|
<python><audio>
|
2023-04-08 09:47:50
| 2
| 45,023
|
Alex
|
75,964,453
| 8,391,698
|
Why does run_t5_mlm_flax.py not produce model weight files etc.?
|
<p>I was trying to reproduce this Hugging Face tutorial on <a href="https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#t5-like-span-masked-language-modeling" rel="nofollow noreferrer"><strong>T5-like span masked-language-modeling</strong></a>.</p>
<p>I have the following code <code>tokenizing_and_configing.py</code>:</p>
<pre><code>import datasets
from t5_tokenizer_model import SentencePieceUnigramTokenizer
from transformers import T5Config

vocab_size = 32_000
input_sentence_size = None

# Calculate the total number of samples in the dataset
total_samples = datasets.load_dataset(
    "nthngdy/oscar-mini", name="unshuffled_deduplicated_no", split="train"
).num_rows

# Calculate one thirtieth of the total samples
subset_samples = total_samples // 30

# Load one thirtieth of the dataset
dataset = datasets.load_dataset(
    "nthngdy/oscar-mini",
    name="unshuffled_deduplicated_no",
    split=f"train[:{subset_samples}]",
)

tokenizer = SentencePieceUnigramTokenizer(
    unk_token="<unk>", eos_token="</s>", pad_token="<pad>"
)

# Build an iterator over this dataset
def batch_iterator(input_sentence_size=None):
    if input_sentence_size is None:
        input_sentence_size = len(dataset)
    batch_length = 100
    for i in range(0, input_sentence_size, batch_length):
        yield dataset[i : i + batch_length]["text"]

print("Train Tokenizer")

# Train tokenizer
tokenizer.train_from_iterator(
    iterator=batch_iterator(input_sentence_size=input_sentence_size),
    vocab_size=vocab_size,
    show_progress=True,
)

# Save files to disk
tokenizer.save("./models/norwegian-t5-base/tokenizer.json")
print("DONE TOKENIZING ")

# CONFIG
config = T5Config.from_pretrained(
    "google/t5-v1_1-small",
    vocab_size=tokenizer.get_vocab_size()
    # "google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size()
)
config.save_pretrained("./models/norwegian-t5-base")
print("DONE SAVING TOKENIZER ")
</code></pre>
<p>The dependency can be found here:</p>
<ul>
<li>📗 <a href="https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/t5_tokenizer_model.py" rel="nofollow noreferrer"><code>t5_tokenizer_model.py</code></a></li>
</ul>
<p>After <code>tokenizing_and_configing.py</code> completes, I run this command:</p>
<pre><code>python run_t5_mlm_flax.py \
--output_dir="./models/norwegian-t5-base" \
--model_type="t5" \
--config_name="./models/norwegian-t5-base" \
--tokenizer_name="./models/norwegian-t5-base" \
--dataset_name="nthngdy/oscar-mini" \
--dataset_config_name="unshuffled_deduplicated_no" \
--max_seq_length="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--adafactor \
--learning_rate="0.005" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--overwrite_output_dir \
--logging_steps="500" \
--save_steps="10000" \
--eval_steps="2500" \
--do_train \
--do_eval
</code></pre>
<p>The full code for <code>run_t5_mlm_flax.py</code> can be found <a href="https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/run_t5_mlm_flax.py" rel="nofollow noreferrer">here</a>.</p>
<p>But after <code>run_t5_mlm_flax.py</code> completes, I can only find these files in <code>./models/norwegian-t5-base</code>:</p>
<pre><code>.
└── norwegian-t5-base
├── config.json
├── events.out.tfevents.1680920382.ip-172-31-30-81.71782.0.v2
└── tokenizer.json
└── eval_results.json
</code></pre>
<p>What's wrong with my process? I expect it to produce more files, like these:</p>
<ol>
<li>flax_model.msgpack: This file contains the weights of the fine-tuned Flax model.</li>
<li>tokenizer_config.json: This file contains the tokenizer configuration, such as the vocabulary size and special tokens.</li>
<li>training_args.bin: This file contains the training arguments used during fine-tuning, such as learning rate and batch size.</li>
<li>merges.txt: This file is part of the tokenizer and contains the subword merges.</li>
<li>vocab.json: This file is part of the tokenizer and contains the vocabulary mappings.</li>
<li>train.log: Logs from the training process, including loss, learning rate, and other metrics.</li>
<li>Checkpoint files: If you have enabled checkpoints during training, you will find checkpoint files containing the model weights at specific training steps.</li>
</ol>
<p>Additional note: I don't experience any error messages AT ALL. Everything completes smoothly without interruption. I'm using Amazon AWS p3.2xlarge; cuda_11.2.r11.2/compiler.29618528_0</p>
|
<python><nlp><huggingface-transformers><transformer-model><flax>
|
2023-04-08 09:41:04
| 0
| 5,189
|
littleworth
|
75,964,405
| 7,644,562
|
Django: Add is_staff permission to user inside view based on a condition
|
<p>I'm working on a Django (4.0+) project where I'm using the default <code>User</code> model for login and a custom form for signup. The signup form has a Boolean field named <code>blogger</code>; if the user checks it, I have to set <code>is_staff</code> to true, otherwise they register as a normal user.</p>
<p>Here's what I have:</p>
<p><strong><code>Registration Form: </code></strong></p>
<pre><code>class UserRegistrationForm(UserCreationForm):
    first_name = forms.CharField(max_length=101)
    last_name = forms.CharField(max_length=101)
    email = forms.EmailField()
    blogger = forms.BooleanField()

    class Meta:
        model = User
        fields = ['username', 'first_name', 'last_name', 'email', 'password1', 'password2', 'blogger', 'is_staff']
</code></pre>
<p><strong><code>views.py:</code></strong></p>
<pre><code>def register(request):
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            print(form['blogger'])
            if form['blogger']:
                form['is_staff'] = True
            form.save()
            messages.success(request, f'Your account has been created. You can log in now!')
            return redirect('login')
        else:
            print(form.errors)
    else:
        form = UserRegistrationForm()
    context = {'form': form}
    return render(request, 'users/signup.html', context)
</code></pre>
<p>Its giving an error as:</p>
<pre><code>TypeError: 'UserRegistrationForm' object does not support item assignment
</code></pre>
<p>How can I achieve that?</p>
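<p>A hedged sketch of the usual Django pattern, with plain-Python stand-ins (<code>FakeForm</code> and <code>FakeUser</code> are hypothetical, just to keep it runnable without Django): a form is not a dict, so item assignment fails; the validated values live in <code>form.cleaned_data</code>, and <code>save(commit=False)</code> returns the unsaved instance so <code>is_staff</code> can be set before saving.</p>

```python
# Stand-ins for Django's ModelForm and User, to keep the sketch self-contained
class FakeUser:
    def __init__(self):
        self.is_staff = False
        self.saved = False

    def save(self):
        self.saved = True

class FakeForm:
    def __init__(self, data):
        self.cleaned_data = data  # populated by is_valid() in real Django

    def save(self, commit=True):
        user = FakeUser()
        if commit:
            user.save()
        return user

form = FakeForm({'blogger': True})

# The pattern to use inside the view after form.is_valid():
user = form.save(commit=False)                 # build the instance without writing it
user.is_staff = form.cleaned_data['blogger']   # read the validated value
user.save()                                    # write once, with is_staff set
print(user.is_staff)  # True
```

<p>In the real view, those three lines replace the failing <code>form['is_staff'] = True</code> item assignment.</p>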
|
<python><django><django-views><django-forms><django-4.0>
|
2023-04-08 09:29:48
| 2
| 5,704
|
Abdul Rehman
|
75,964,313
| 7,093,241
|
Is it possible to use __iter__ in LinkedList __str__ method without try and else?
|
<p>I wanted to extend <a href="https://stackoverflow.com/questions/39585740/how-can-i-print-all-of-the-values-of-the-nodes-in-my-singly-linked-list-using-a#answer-39585795">karin's answer here</a>, where she uses a for loop to print the linked list.</p>
<p>I would like to have a <code>__str__</code> method that is part of <code>SinglyLinkedList</code>, but would like to reuse the <code>__iter__</code> method as it is, without the <code>except</code> clause that comes with <code>StopIteration</code>, given that <a href="https://stackoverflow.com/questions/2522005/cost-of-exception-handlers-in-python">the answers here</a> say that <code>try</code> statements are cheap <strong>except when catching the exception</strong>. Is there any way to go about it? Is it even possible? If I understand this correctly, the <code>for</code> loop calls <code>__iter__</code>, so it would run into the same <code>StopIteration</code>.</p>
<p>The code that I have that works is here.</p>
<pre><code>from typing import Optional

# Definition for singly-linked list.
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

    def __str__(self):
        return f"Node:{self.val} Next:{self.next.val if self.next else 'None'}\n"

class SinglyLinkedList():
    def __init__(self):
        self.head = None
        self.tail = None

    def __iter__(self):
        node = self.head
        while node:
            yield node
            node = node.next

    def add(self, node):
        if not self.head:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def __str__(self):
        iterator = self.__iter__()
        values = []
        try:
            while True:
                values.append(str(next(iterator).val))
        except StopIteration:
            pass
        return "->".join(values)
</code></pre>
<pre><code>one = ListNode(1)
two = ListNode(2)
three = ListNode(3)
four = ListNode(4)
five = ListNode(5)
ll = SinglyLinkedList()
ll.add(one)
ll.add(two)
ll.add(three)
ll.add(four)
ll.add(five)
print(ll)
</code></pre>
<p>which gives me the output that I would like.</p>
<pre><code>1->2->3->4->5
</code></pre>
<p>The other way I can think of is to repeat the traversal with a while loop:</p>
<pre><code>def __str__(self):
    values = []
    node = self.head
    while node:
        values.append(str(node.val))
        node = node.next
    return "->".join(values)
</code></pre>
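<p>Since <code>__iter__</code> is a generator, the iteration protocol (a <code>for</code> loop or a generator expression) absorbs <code>StopIteration</code> internally, so <code>__str__</code> needs no explicit <code>try/except</code>. A self-contained sketch:</p>

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

class SinglyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def __iter__(self):
        node = self.head
        while node:
            yield node
            node = node.next

    def add(self, node):
        if not self.head:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def __str__(self):
        # join() drives the generator; StopIteration is handled by the
        # iteration protocol itself, never raised into this frame.
        return "->".join(str(node.val) for node in self)

ll = SinglyLinkedList()
for v in (1, 2, 3, 4, 5):
    ll.add(ListNode(v))
print(ll)  # 1->2->3->4->5
```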
|
<python><linked-list><singly-linked-list>
|
2023-04-08 09:09:04
| 1
| 1,794
|
heretoinfinity
|
75,964,298
| 11,004,423
|
Why isn't there __setattribute__ in python?
|
<p>Python has these methods: <code>__getattr__</code>, <code>__getattribute__</code>, and <code>__setattr__</code>. I know the difference between <code>__getattr__</code> and <code>__getattribute__</code>, <a href="https://stackoverflow.com/questions/3278077/difference-between-getattr-and-getattribute">this</a> thread explains it well. But I couldn't find anywhere anybody mentioning why <code>__setattribute__</code> doesn't exist.</p>
<p>Does somebody know the reasons for this? Thank you.</p>
|
<python><setattribute><getattr><getattribute>
|
2023-04-08 09:05:43
| 1
| 1,117
|
astroboy
|
75,964,231
| 11,887,333
|
How can I have equal distance between tick marks on x axis in a matplotlib histogram
|
<p>I'm trying to plot a histogram from a list of float numbers. Due to the distribution of the floats I use customized bins and a log scale to make the graph look better. But the distance between tick marks on the x axis is not consistent, so every bin has a different width, as shown in the image below. For example, the distance between <code>1e+6</code> and <code>1e+22</code> is much longer, and that bin is much wider than the others.</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

x_values = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 2e+2, 1.2e+3, 7e+3, 1e+5, 1e+6, 1e+22]

plt.figure(figsize=(25,6))
plt.hist(float_l, bins=x_values)  # float_l is the list of floats being plotted (not shown)

# Set the x-axis scale to logarithmic
plt.xscale('log')

# Set the x-axis labels
plt.xticks(x_values, x_values)

plt.title('Float Histogram')
plt.xlabel('Float')
plt.ylabel('Count')
plt.show()
</code></pre>
<p>The histogram I get from it:
<a href="https://i.sstatic.net/xYKsi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYKsi.png" alt="enter image description here" /></a></p>
<p>How can I have equal distance between tick marks?</p>
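<p>One approach (a sketch, and an assumption about the desired look: it turns the axis into equally spaced categories rather than a true log scale) is to bin the data with <code>numpy</code> and draw bars at integer positions, relabelling the ticks with the bin edges. The <code>float_l</code> below is hypothetical stand-in data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

bin_edges = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 2e2, 1.2e3, 7e3, 1e5, 1e6, 1e22]
# Hypothetical data standing in for the original float_l list
float_l = [1.5, 3.0, 15.0, 30.0, 300.0, 5e3, 2e5, 1e10, 1e10]

counts, _ = np.histogram(float_l, bins=bin_edges)
positions = np.arange(len(counts))  # one equal-width slot per bin

fig, ax = plt.subplots(figsize=(25, 6))
ax.bar(positions, counts, width=1.0, align="edge", edgecolor="black")
# Put a tick at every bin boundary and label it with the real edge value
ax.set_xticks(np.arange(len(bin_edges)))
ax.set_xticklabels(bin_edges)
fig.savefig("hist.png")
```

<p>Because the bars are drawn at consecutive integers, every bin occupies the same horizontal width regardless of its numeric span.</p>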
|
<python><numpy><matplotlib><histogram>
|
2023-04-08 08:48:02
| 1
| 861
|
oeter
|
75,964,037
| 9,108,781
|
Is it more correct to export bigrams from the bigram model or the trigram model in Gensim?
|
<p>After I train a bigram model and a trigram model using Gensim, I can export the bigrams from the bigram model. Alternatively, I can export the bigrams from the trigram model. I find that the bigrams from the two models can be quite different: there is a large overlap, but also a large number of bigrams appearing in only one of the lists. What is the right way? Thanks!</p>
<pre><code>bigram_model = gensim.models.Phrases(texts_unigram)
texts_bigram = [bigram_model[sent] for sent in texts]
trigram_model = gensim.models.Phrases(texts_bigram)
# Get from the bigram model
bigrams1 = list(bigram_model.export_phrases().keys())
# Get from the trigram model
ngrams = list(trigram_model.export_phrases().keys()) # This includes both bigrams and trigrams
bigrams2 = [g for g in ngrams if g.count("_")==1]
</code></pre>
|
<python><gensim>
|
2023-04-08 08:03:52
| 1
| 943
|
Victor Wang
|
75,963,828
| 11,098,908
|
Instantiate objects by their string names and integer values in Python
|
<p>I have 3 classes as such:</p>
<pre><code>class Year1(Student):
    def action(self):
        ...

class Year2(Student):
    def action(self):
        ...

class Year3(Student):
    def action(self):
        ...
</code></pre>
<p>How can I instantiate those objects from a list of tuples of their string names and integer values like this :</p>
<pre><code>ls = [('Year1', 10), ('Year2', 15), ('Year3', 25)]
</code></pre>
<p>to this:</p>
<pre><code>years = [Year1(10), Year2(15), Year3(25)]
</code></pre>
<p>This <a href="https://stackoverflow.com/questions/51142320/how-to-instantiate-class-by-its-string-name-in-python-from-current-file">question</a> didn't help me solve the problem.</p>
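<p>One common approach (a sketch with stub classes standing in for the real ones) is an explicit name-to-class mapping, which avoids <code>eval</code> and accidental lookups via <code>globals()</code>:</p>

```python
# Stub hierarchy standing in for the original Student subclasses
class Student:
    def __init__(self, age):
        self.age = age

class Year1(Student): pass
class Year2(Student): pass
class Year3(Student): pass

# Explicit registry: string name -> class object
CLASSES = {cls.__name__: cls for cls in (Year1, Year2, Year3)}

ls = [('Year1', 10), ('Year2', 15), ('Year3', 25)]
years = [CLASSES[name](age) for name, age in ls]

print([(type(y).__name__, y.age) for y in years])
# [('Year1', 10), ('Year2', 15), ('Year3', 25)]
```

<p>An unknown name then fails loudly with a <code>KeyError</code> instead of silently resolving to some unrelated global.</p>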
|
<python><oop><instantiation>
|
2023-04-08 07:09:44
| 0
| 1,306
|
Nemo
|
75,963,822
| 1,581,090
|
How to resize image data using numpy?
|
<p>I want to resize image data in python, but a simple <code>numpy.resize</code> does not seem to work.</p>
<p>I read the image, and try to resize it using the following script. To check I write the file which results in an unexpected outcome.</p>
<pre><code>from PIL import Image
import numpy as np

# Read image data as black-white image
image = Image.open(input_image).convert("L")
arr = np.asarray(image)

# resize image by factor of 2 (original image has shape (834, 1102))
image = np.resize(arr, (417, 551))

# save resized image to check if it worked
im = Image.fromarray(image)
im.save("test.jpeg")
</code></pre>
<p>The original image:</p>
<p><a href="https://i.sstatic.net/w3IAi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w3IAi.jpg" alt="enter image description here" /></a></p>
<p>The rescaled image:</p>
<p><a href="https://i.sstatic.net/8H4gi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8H4gi.jpg" alt="enter image description here" /></a></p>
<p>I expect to see the same image (an airplane), but just smaller, with smaller resolution.</p>
<p>What am I doing wrong here?</p>
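<p><code>np.resize</code> does not rescale an image: it truncates or repeats the flattened data to fill the new shape, which scrambles the picture. A sketch of two alternatives that actually downsample (the array below is a hypothetical stand-in for the loaded grayscale image):</p>

```python
import numpy as np
from PIL import Image

# Stand-in for the (834, 1102) grayscale array read from the file
arr = (np.arange(834 * 1102) % 256).astype(np.uint8).reshape(834, 1102)

# Option 1: keep every second pixel in each direction (no interpolation)
small = arr[::2, ::2]
print(small.shape)  # (417, 551)

# Option 2: let PIL interpolate; note PIL sizes are (width, height)
im = Image.fromarray(arr).resize((551, 417))
print(im.size)  # (551, 417)
```

<p>Option 2 generally looks better because it averages neighbouring pixels instead of discarding them.</p>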
|
<python><image><numpy>
|
2023-04-08 07:08:44
| 4
| 45,023
|
Alex
|
75,963,638
| 14,109,040
|
Scraping a table of data from a pop up dialog using selenium
|
<p>I am using selenium to scrape flight prices over a range of dates from google flights. I have done the following (with some help from stackoverflow) and opened up the pop up with the grid of flight prices over a range of dates.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.maximize_window()
driver.get('https://www.google.com/travel/flights')
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[aria-placeholder='Where from?'] input"))).click()
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[aria-label='Enter your origin'] input"))).send_keys("Sydney" + Keys.ARROW_DOWN + Keys.ENTER)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[aria-placeholder='Where to?'] input"))).click()
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div[aria-label='Enter your destination'] input"))).send_keys("Auckland" + Keys.ARROW_DOWN + Keys.ENTER)
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Search']"))).click()
driver.switch_to.window(driver.window_handles[0])
print(driver.current_url)
# opening up date grid
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Date grid']"))).click()
</code></pre>
<p>I am now unsure how I can refer to the open pop-up and scrape the values from the table there, since it isn't its own webpage and doesn't have its own URL.</p>
|
<python><selenium-webdriver>
|
2023-04-08 06:20:26
| 1
| 712
|
z star
|
75,963,562
| 8,845,766
|
How to format output video/audio name with youtube-dl embed?
|
<p>I'm writing a python script that downloads videos from a URL using youtube-dl. Here's the code:</p>
<pre><code>import youtube_dl

def downloadVideos(videoURL):
    ydl_opts = {
        'format': 'bestvideo,bestaudio',
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download([videoURL])
</code></pre>
<p>The above method downloads an mp4 file for the video and a m4a file for the audio. The only thing is, the names for the files are some random strings. How can I give a custom name for the output files? And is it possible to put these in a specific folder?</p>
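<p>youtube-dl controls output names through the <code>outtmpl</code> option, a template with fields such as <code>%(title)s</code> and <code>%(ext)s</code>; including a directory in the template also places the files in that folder. A sketch (the <code>downloads/</code> folder name is just an example; the import is kept inside the function so the options are inspectable without youtube-dl installed):</p>

```python
# youtube-dl output template: directory + "<video title>.<extension>"
ydl_opts = {
    'format': 'bestvideo,bestaudio',
    'outtmpl': 'downloads/%(title)s.%(ext)s',
}

def download_videos(video_url):
    import youtube_dl  # pip install youtube-dl
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download([video_url])

print(ydl_opts['outtmpl'])  # downloads/%(title)s.%(ext)s
```

<p>Other documented template fields (<code>%(id)s</code>, <code>%(uploader)s</code>, …) can be combined in the same way.</p>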
|
<python><youtube-dl>
|
2023-04-08 05:58:45
| 3
| 794
|
U. Watt
|
75,963,303
| 12,840,065
|
Resolve function resolving wrong view_name in django
|
<ul>
<li><strong>Django 4.0</strong></li>
<li><strong>Python 3.9</strong></li>
</ul>
<p>I recently upgraded my Django project from 3.14 to 4.0, and the django.urls.resolve function seems broken. Resolve always returns app_name.views.view instead of app_name.views.actual_function_name.</p>
<p>Here is my code -</p>
<pre><code>path = request.path
match = resolve(path)
view_name = match.view_name
</code></pre>
<p>After the above code runs, the "view_name" variable is always "view", irrespective of the URL. Any help is appreciated. Thanks for your time.</p>
<p>P.S. If there is any other way I can find out the function from a URL without using resolve, please let me know that as well.</p>
|
<python><python-3.x><django><django-rest-framework><django-urls>
|
2023-04-08 04:27:47
| 4
| 983
|
Arvind Kumar
|
75,963,302
| 2,707,864
|
TypeError: cannot determine truth value of Relational with dict(zip()), only with direct output
|
<p><strong>EDIT</strong>:
The problem only appears when trying to get "direct output" of the result of <code>dict(zip(...</code>.
This code, with output via <code>print</code>, works fine</p>
<pre><code>import sympy as sp
x, y = sp.symbols('x,y')
d = dict(zip([x, y], [1, 1]))
print(d)
</code></pre>
<p>but simply entering <code>dict(zip([x, y], [1, 1]))</code> throws the error.</p>
<hr>
Code as simple as this
<pre><code>import sympy as sp
x, y = sp.symbols('x,y')
dict(zip([x, y], [1, 1]))
</code></pre>
<p>Produces error</p>
<pre><code>TypeError: cannot determine truth value of Relational
</code></pre>
<p>Why, and how can I solve this?
If not using <code>OrderedDict</code>, that would be most convenient.</p>
<p>It seems <a href="https://stackoverflow.com/questions/42609884/cannot-determine-truth-value-of-relational-when-applying-lamdbdify">this OP</a> has no problem in doing what is giving me the error (it has a different problem).</p>
<p>Full code, etc. I know it's an old python version, but I guessed it should work.</p>
<pre><code>>>> x, y = sp.symbols('x,y')
>>> dict(zip([x, y], [1, 2]))
Traceback (most recent call last):
File "<string>", line 449, in runcode
File "<interactive input>", line 1, in <module>
File "<string>", line 692, in pphook
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 53, in pprint
printer.pprint(object)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 139, in pprint
self._format(object, self._stream, 0, 0, {}, 0)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 161, in _format
rep = self._repr(object, context, level)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 393, in _repr
self._depth, level)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 405, in format
return _safe_repr(object, context, maxlevels, level)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\pprint.py", line 511, in _safe_repr
items = sorted(object.items(), key=_safe_tuple)
File "C:\development\Portable Python 3.6.5 x64 R2\App\Python\lib\site-packages\sympy\core\relational.py", line 384, in __nonzero__
raise TypeError("cannot determine truth value of Relational")
TypeError: cannot determine truth value of Relational
</code></pre>
|
<python><dictionary><sympy><python-zip>
|
2023-04-08 04:27:19
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
75,963,236
| 3,810,748
|
Why can't I set TrainingArguments.device in Huggingface?
|
<h2>Question</h2>
<p>When I try to set the <code>.device</code> attribute to <code>torch.device('cpu')</code>, I get an error. How am I supposed to set device then?</p>
<h2>Python Code</h2>
<pre><code>from transformers import TrainingArguments
from transformers import Trainer
import torch

training_args = TrainingArguments(
    output_dir="./some_local_dir",
    overwrite_output_dir=True,
    per_device_train_batch_size=4,
    dataloader_num_workers=2,
    max_steps=500,
    logging_steps=1,
    evaluation_strategy="steps",
    eval_steps=5
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    compute_metrics=compute_metrics,
)

training_args.device = torch.device('cpu')
</code></pre>
<h2>Python Error</h2>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-11-30a92c0570b8> in <cell line: 28>()
26 )
27
---> 28 training_args.device = torch.device('cpu')
AttributeError: can't set attribute
</code></pre>
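<p><code>TrainingArguments.device</code> is a read-only <code>@property</code> derived from the other arguments, which is why assigning to it raises <code>AttributeError: can't set attribute</code>. A minimal analogue of the mechanism (plain Python, not transformers code):</p>

```python
class Args:
    """Minimal analogue of a read-only derived attribute."""

    def __init__(self, no_cuda=False):
        self.no_cuda = no_cuda

    @property
    def device(self):
        # Derived from other fields; a property with no setter is read-only.
        return "cpu" if self.no_cuda else "cuda"

args = Args()
try:
    args.device = "cpu"  # same failure mode as TrainingArguments.device
except AttributeError as exc:
    print("cannot assign:", exc)

# Set the input the property derives from instead
args = Args(no_cuda=True)
print(args.device)  # cpu
```

<p>In <code>transformers</code>, the analogous route is constructing the arguments with a CPU-forcing flag such as <code>TrainingArguments(..., no_cuda=True)</code> (argument names have changed across releases, so check the installed version's docs).</p>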
|
<python><pytorch><huggingface-transformers>
|
2023-04-08 03:58:31
| 3
| 6,155
|
AlanSTACK
|
75,963,229
| 14,109,040
|
Automating filling in input fields and clicking a button using selenium
|
<p>I am trying to create a flight scraper between two locations using google flights.</p>
<p>I have tried the following:</p>
<pre><code>from selenium import webdriver

if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get('https://www.google.com/travel/flights')
    driver.maximize_window()

    where_from = driver.find_element('xpath',
        '/html/body/c-wiz[2]/div/div[2]/c-wiz/div[1]/c-wiz/div[2]/div[1]/div[1]/div[1]/div/div[2]/div[1]/div[1]/div/div/div[1]/div/div/input')
    where_from.send_keys('Sydney')

    where_to = driver.find_element('xpath',
        '/html/body/c-wiz[2]/div/div[2]/c-wiz/div[1]/c-wiz/div[2]/div[1]/div[1]/div[1]/div/div[2]/div[1]/div[4]/div/div/div[1]/div/div/input')
    where_to.send_keys('Auckland')

    driver.implicitly_wait(30)

    search = driver.find_element('xpath', '/html/body/c-wiz[2]/div/div[2]/c-wiz/div[1]/c-wiz/div[2]/div[1]/div[1]/div[2]/div/button/div[1]')
    search.click()
</code></pre>
<p>I am trying to open up a web browser (Chrome), fill in the "where from" and "where to" input fields, and click search. The fields get filled in, but the search doesn't seem to work.</p>
|
<python><selenium-webdriver>
|
2023-04-08 03:55:45
| 1
| 712
|
z star
|
75,963,224
| 7,347,911
|
SQL Stored procedure display output of RAISE INFO in Python
|
<p>I have a stored procedure as below. When I run the procedure in DBeaver I am able to see the output of <code>RAISE INFO</code> in the console, but I want to log the same output when running it from Python using <code>cursor.execute('call testing_proc()')</code>.</p>
<p>How can I do that?</p>
<pre><code>CREATE OR REPLACE PROCEDURE testing_proc()
AS $$
declare
    cntr int := 1;
    max_cntr int := 200;
    time_counter timestamp;
    perc_cmp real;
    time_taken real;
    tempp int;
begin
    time_counter = getdate();
    while( cntr <= max_cntr)
    loop
        INSERT INTO table2 SELECT * FROM table1;
        perc_cmp = cntr*100/max_cntr;
        RAISE INFO '****Total Processed = %
        Total Processed = % ***', cntr, perc_cmp;
        cntr = cntr+1;
    end loop;
    time_taken = datediff(second, time_counter, getdate());
    RAISE INFO '******* Total Time Taken = % seconds ', time_taken;
end;
$$ LANGUAGE plpgsql;
</code></pre>
<p>PS: please don't get bogged down in the details of the code; I just shared it for reference.</p>
|
<python><sql><stored-procedures><amazon-redshift><display>
|
2023-04-08 03:51:45
| 0
| 404
|
manoj
|
75,963,117
| 12,349,101
|
Tkinter - Moving the dots on a polyline with three path
|
<p>Based on a simple Line drawn using <code>canvas.create_line</code>, such as this <a href="https://i.sstatic.net/OL01n.png" rel="nofollow noreferrer">image here</a>, three dots, based on the start, middle, and end of the line are created, <a href="https://i.sstatic.net/6sNAI.png" rel="nofollow noreferrer">example here</a>.</p>
<p>I want to be able to click and drag the part of the Line where there is a dot, so I made the following MRE:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400)
canvas.pack()

line = canvas.create_line(50, 50, 350, 50, width=3)
start_x, start_y, end_x, end_y = canvas.coords(line)

start_dot = canvas.create_oval(start_x - 5, start_y - 5, start_x + 5, start_y + 5, fill='blue', tags=('start_dot',))
end_dot = canvas.create_oval(end_x - 5, end_y - 5, end_x + 5, end_y + 5, fill='red', tags=('end_dot',))

mid_x = (start_x + end_x) / 2
mid_y = start_y
mid_dot = canvas.create_oval(mid_x - 5, mid_y - 5, mid_x + 5, mid_y + 5, fill='green', tags=('mid_dot',))

def update_line(event):
    global end_x, end_y, start_y, start_x, mid_y, mid_x, curve
    x, y = event.x, event.y
    item = canvas.find_withtag(tk.CURRENT)[0]
    if item == start_dot:
        canvas.coords(start_dot, x - 5, y - 5, x + 5, y + 5)
        canvas.coords(line, x, y, end_x, end_y)
    elif item == end_dot:
        canvas.coords(end_dot, x - 5, y - 5, x + 5, y + 5)
        canvas.coords(line, start_x, start_y, x, y)
    elif item == mid_dot:
        dx = x - (start_x + end_x) // 2
        dy = y - (start_y + end_y) // 2
        canvas.coords(mid_dot, mid_x + dx - 5, mid_y + dy - 5, mid_x + dx + 5, mid_y + dy + 5)
        canvas.coords(line, start_x, start_y, mid_x + dx, mid_y + dy, end_x, end_y)
    else:
        return
    test = canvas.coords(line)
    if len(test) == 4:
        start_x, start_y, end_x, end_y = canvas.coords(line)
        mid_x = (start_x + end_x) / 2
        mid_y = (start_y + end_y) / 2
        canvas.coords(mid_dot, mid_x - 5, mid_y - 5, mid_x + 5, mid_y + 5)
    else:
        pass

canvas.tag_bind('start_dot', '<B1-Motion>', update_line)
canvas.tag_bind('end_dot', '<B1-Motion>', update_line)
canvas.tag_bind('mid_dot', '<B1-Motion>', update_line)

root.mainloop()
</code></pre>
<p>This somewhat works, at least for only moving the start and end correctly. The part that doesn't work yet is updating the Line correctly after moving the middle dot and then moving either the start or end dot of the Line.
<a href="https://i.sstatic.net/mUfty.jpg" rel="nofollow noreferrer">Here</a> is a gif showcasing the problem, and <a href="https://i.sstatic.net/OD9tB.jpg" rel="nofollow noreferrer">here</a> the expected behavior (the rectangle can be ignored).</p>
<p>I noticed <code>canvas.coords</code> outputs 5 values instead of the usual 4 that I'm used to.</p>
<p>How can I make the above works with the middle path/dot too?</p>
<p>P.S.:I use the term "polyline" but I'm not sure if this is what I'm actually doing here (it does look similar when looking on google image). Feel free to mention a better fitting term for this if there is one.</p>
|
<python><tkinter><polyline>
|
2023-04-08 03:05:07
| 1
| 553
|
secemp9
|
75,962,992
| 310,370
|
How to modify this script that it can utilize GPU instead of CPU - moviepy > VideoFileClip
|
<p>I have the following script to split a video into chunks and select a certain number of chunks. It works well but uses the CPU instead of my RTX 3090 GPU.</p>
<p>How can I make it use GPU in the final render? Thank you so much for answers</p>
<pre><code>import math
import moviepy.editor as mp
import subprocess
import sys

# Define the input video file path and desired output duration
input_file = "install_second_part_2x.mp4"
output_duration = 158  # in seconds

# Load the input video file into a moviepy VideoFileClip object
video = mp.VideoFileClip(input_file)

# Calculate the number of 5-second chunks in the video
chunk_duration = 5
num_chunks = math.ceil(video.duration / chunk_duration)

# Create a list to hold the selected chunks
selected_chunks = []

# Calculate the number of chunks to select
output_num_chunks = math.ceil(output_duration / chunk_duration)

# Calculate the stride between adjacent chunks to select
stride = num_chunks // output_num_chunks

# Loop through the chunks and select every stride-th chunk
for i in range(num_chunks):
    if i % stride == 0:
        chunk = video.subclip(i*chunk_duration, (i+1)*chunk_duration)
        selected_chunks.append(chunk)
    progress = i / num_chunks * 100
    sys.stdout.write(f"\rProcessing: {progress:.2f}%")
    sys.stdout.flush()

# Keep adding chunks until the output duration is reached
output = selected_chunks[0]
for chunk in selected_chunks[1:]:
    if output.duration < output_duration:
        output = mp.concatenate_videoclips([output, chunk])
    else:
        break

# Write the output video to a file
output_file = "install_part_cut.mp4"
output.write_videofile(output_file)
</code></pre>
|
<python><moviepy>
|
2023-04-08 02:11:27
| 0
| 23,982
|
Furkan Gözükara
|
75,962,987
| 6,676,101
|
functools.singledispatchmethod raises IndexError: tuple index out of range
|
<p>What is wrong with how I applied the <code>functools.singledispatchmethod</code> decorator?</p>
<pre class="lang-python prettyprint-override"><code>import functools

class K:
    @functools.singledispatchmethod
    def m(self, iobj: object):
        print("root")

    @m.register
    def m_from_str(self, istr: str):
        print('string')

    @m.register
    def m_from_object(self, ilist: list):
        print('list')

obj = K()
obj.m()
</code></pre>
<p>The Error message is:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
method = self.dispatcher.dispatch(args[0].__class__)
IndexError: tuple index out of range
Process finished with exit code 1
</code></pre>
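<p>The decorator is applied correctly; the traceback comes from calling <code>obj.m()</code> with no argument, so the dispatcher has no <code>args[0]</code> whose class it can inspect. A sketch of the working calls (returning strings instead of printing, and using the <code>_</code> naming the functools docs show for overloads):</p>

```python
import functools

class K:
    @functools.singledispatchmethod
    def m(self, iobj: object):
        return "root"

    @m.register
    def _(self, istr: str):
        return "string"

    @m.register
    def _(self, ilist: list):
        return "list"

obj = K()
print(obj.m("hi"))    # string
print(obj.m([1, 2]))  # list
print(obj.m(3.14))    # root (unregistered type falls back to the object overload)
```

<p>Dispatch happens on the first argument after <code>self</code>, so every call to <code>m</code> must supply one.</p>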
|
<python><python-3.x><decorator><python-decorators><dispatch>
|
2023-04-08 02:09:33
| 1
| 4,700
|
Toothpick Anemone
|
75,962,959
| 11,098,908
|
Instantiate objects from a list of tuples
|
<p>How can I instantiate all the classes for <code>Year</code> from a list of tuples <code>(str, int)</code> where <code>str</code> is the name of a class <code>Year</code> and <code>int</code> is their age?</p>
<pre><code>class Lesson():
    def __init__(self, teacher: Teacher, years: list[tuple[str, int]]) -> None:
        self.teacher = teacher  # teacher is an instance of class Teacher(Person)
        self.year1 = years[0][0](years[0][1])  # year is an instance of class Year(Student)
        self.year2 = years[1][0](years[1][1])
        ...
        self.yearN = years[N][0](years[N][1])  # N can be 1, 2 or maximum 3
</code></pre>
<p>The classes Student and Year (3 of them) were defined like this</p>
<pre><code>class Student(Person):
    def __init__(self, age):
        super().__init__(age)
        ...

class Year1(Student):
    def action(self):
        ...

class Year2(Student):
    def action(self):
        ...

class Year3(Student):
    def action(self):
        ...
</code></pre>
<p><strong>EDITED</strong>:</p>
<p>In particular, how can I construct a list of objects like this</p>
<p><code>[Year1(x), Year2(y), Year3(z)]</code></p>
<p>from a list of tuples:</p>
<p><code>[('Year1', x), ('Year2', y), ('Year3', z)]</code></p>
<p>Can I do this?</p>
<p><code>self.years= [year[0](year[1]) for year in years]</code></p>
|
<python><list><oop><instance><init>
|
2023-04-08 01:58:57
| 1
| 1,306
|
Nemo
|
75,962,909
| 657,477
|
Convert LLaMA to ONNX
|
<p>I am trying to convert LLaMA to ONNX like so:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "C:/Projects/vicuna-13b-delta-v0"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Make sure the model is in evaluation mode
model.eval()

# Set up a dummy input for tracing
input_str = "Once upon a time"
input_ids = tokenizer.encode(input_str, return_tensors="pt")

# Convert the model to ONNX
with torch.no_grad():
    symbolic_names = {0: "batch_size", 1: "seq_len"}
    torch.onnx.export(
        model,
        input_ids,
        "vicuna.onnx",
        input_names=["input_ids"],
        output_names=["output"],
        dynamic_axes={"input_ids": symbolic_names, "output": symbolic_names},
        opset_version=15,  # Use a suitable opset version, such as 12 or 13
    )
</code></pre>
<p>I get the error:</p>
<pre><code> python .\convert.py
Traceback (most recent call last):
File "C:\Projects\vicuna-13b-delta-v0\convert.py", line 5, in <module>
model = AutoModelForCausalLM.from_pretrained(model_name)
File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\transformers\models\auto\configuration_auto.py", line 917, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\transformers\models\auto\configuration_auto.py", line 623, in __getitem__
raise KeyError(key)
KeyError: 'llama'
</code></pre>
<p>Is it possible to convert LLaMA to ONNX with huggingface transformers?</p>
|
<python><pytorch>
|
2023-04-08 01:38:38
| 2
| 15,124
|
Guerrilla
|
75,962,602
| 2,774,589
|
Pearson correlation coefficient for complex time series
|
<p>I would like to know how to calculate the Pearson correlation coefficient for two complex time series.</p>
<p>Do we simply do
<a href="https://i.sstatic.net/NxSQ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NxSQ8.png" alt="enter image description here" /></a></p>
<p>Or is there something else?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
R = lambda x,y: ((x-x.mean())*(y-y.mean())).sum()/(np.sqrt(((x-x.mean())**2).sum())*np.sqrt(((y-y.mean())**2).sum()))
x,y = np.loadtxt("data.txt",dtype=np.complex128).T
ri = R(x,y)
ri = (ri*ri.conj()).real
print(ri)
</code></pre>
<p>File in <a href="https://file.io/KzwvQFx8XsXQ" rel="nofollow noreferrer">https://file.io/KzwvQFx8XsXQ</a></p>
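<p>For complex series, the usual convention is to conjugate one of the centred factors, which yields a complex coefficient whose magnitude is at most 1 and which reduces to the ordinary Pearson coefficient for real inputs. A hedged sketch:</p>

```python
import numpy as np

def complex_pearson(x, y):
    # Centre both series and conjugate one factor in the numerator;
    # for real-valued inputs this is the ordinary Pearson formula.
    xc = x - x.mean()
    yc = y - y.mean()
    num = (xc * yc.conj()).sum()
    den = np.sqrt((np.abs(xc) ** 2).sum() * (np.abs(yc) ** 2).sum())
    return num / den

rng = np.random.default_rng(0)
x = rng.normal(size=100) + 1j * rng.normal(size=100)
y = rng.normal(size=100) + 1j * rng.normal(size=100)
r_self = complex_pearson(x, x)   # a series correlated with itself
r_cross = complex_pearson(x, y)  # some complex value with |r| <= 1
```

<p>Squaring via <code>(r*r.conj()).real</code>, as in the question, then gives the squared magnitude of this coefficient.</p>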
|
<python><cross-correlation><pearson-correlation>
|
2023-04-07 23:33:22
| 1
| 1,660
|
iury simoes-sousa
|
75,962,489
| 19,996,942
|
Optimizing callstack for a recursive function
|
<p>I have a node which can generate adjacent members, via <code>node.create_adj_list()</code>, and I need to process a tree or graph with a recursive function, but I had some questions about the memory efficiency. Each node has a list of solutions that I need to gather.</p>
<p>Say I have this code, a DFS search:</p>
<pre><code>total_solutions = []
def handle_node(node):
for solution in find_solutions(node):
total_solutions.append(solution)
for adj_node in node.create_adj_list():
handle_node(adj_node)
</code></pre>
<p>If I call <code>handle_node</code> on a parent node, to my understanding what happens is:</p>
<ol>
<li>our parent node exists in memory</li>
<li>we generate solutions for that node</li>
<li>we generate the existence of adjacent nodes, storing them in memory</li>
<li>we recurse down the tree in a DFS fashion</li>
</ol>
<p>If our tree looks like this:</p>
<pre><code> O
/ \
O O
/ \
O O
</code></pre>
<p>As we recurse through the tree, we start on the left-most part of the tree and DFS down. But as we travel down this branch, we are also generating the children of each node from that branch, and storing those in memory. Doesn't this mean we are storing O(m*n) in memory, where <code>m</code> is the depth of the tree and <code>n</code> is the average number of children?</p>
<p>This seems really inefficient, as we shouldn't need to store all those children nodes in memory, we should just be able to store the depth in memory only.</p>
<p>How should I resolve this situation to make my code not store things in memory unnecessarily?</p>
<p>My initial thoughts are that <code>create_adj_list()</code> method is maybe not good.</p>
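<p>If <code>create_adj_list()</code> can be turned into a generator, siblings are produced one at a time instead of being materialised as full lists along the path, so the recursion holds roughly one active iterator per level (O(depth) overhead). A sketch under that assumption, with toy stand-ins for the node and solution logic:</p>

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self._children = children

    def create_adj_list(self):
        # Yield children lazily rather than returning a list, so a
        # whole level of siblings is never stored at once.
        yield from self._children

def find_solutions(node):
    yield node.value  # stand-in for the real solution search

def handle_node(node, total_solutions):
    total_solutions.extend(find_solutions(node))
    for adj in node.create_adj_list():
        handle_node(adj, total_solutions)

# Tree from the question: root with two children, the left child
# having two children of its own.
tree = Node(1, [Node(2, [Node(4), Node(5)]), Node(3)])
solutions = []
handle_node(tree, solutions)
```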
|
<python><recursion><memory><depth-first-search>
|
2023-04-07 23:02:26
| 0
| 385
|
713sean
|
75,962,385
| 2,876,983
|
JSON Normalize with Value as Column
|
<p>I have the following JSON;</p>
<pre><code>{
"data": [
{
"gid": "1203715497540179",
"completed": false,
"custom_fields": [
{
"gid": "1203887422469746",
"enabled": true,
"name": "Inputs",
"description": "",
"display_value": null,
"resource_subtype": "text",
"resource_type": "custom_field",
"text_value": null,
"type": "text"
},
{
"gid": "1126427465960522",
"enabled": false,
"name": "T-Minus",
"description": "",
"display_value": "54",
"resource_subtype": "text",
"resource_type": "custom_field",
"text_value": "54",
"type": "text"
}
],
"due_on": "2023-01-25",
"name": "General Information"
}
]
}
</code></pre>
<p>And I want to build the following pandas dataframe with it. Basically I want to grab name from custom_fields and make it a column whose value is display_value</p>
<pre><code>name due_on Inputs T-Minus
General Information 2023-01-25 null 54
</code></pre>
<p>I don't think this can be done with just normalizing. So I started with:</p>
<pre><code>df = pd.json_normalize(test,
record_path =['custom_fields'],
record_prefix='_',
errors='ignore',
meta=['name', 'due_on'])
</code></pre>
<p>This gets me to something like this:</p>
<pre><code>_name _display_value name due_on .....(extra fields that I do not need)
Inputs null General Information
T-Minus 54 General Information
</code></pre>
<p>How can I go now from this dataframe to the one I want?</p>
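<p>One option, assuming the intermediate frame shown above: pivot on the <code>_name</code> column so each custom-field name becomes its own column. A hedged sketch with that intermediate frame rebuilt inline:</p>

```python
import pandas as pd

# Rebuild the intermediate frame produced by json_normalize in the question.
flat = pd.DataFrame({
    "_name": ["Inputs", "T-Minus"],
    "_display_value": [None, "54"],
    "name": ["General Information"] * 2,
    "due_on": ["2023-01-25"] * 2,
})

wide = (flat.pivot(index=["name", "due_on"],
                   columns="_name",
                   values="_display_value")
            .reset_index())
wide.columns.name = None  # drop the leftover "_name" axis label
```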
|
<python><json><pandas><json-normalize>
|
2023-04-07 22:36:31
| 1
| 321
|
user2876983
|
75,962,276
| 10,099,689
|
How to TDD with Internet-Dependency in Python?
|
<p>I need to make sure that I have a <strong>function</strong> that can <strong>download zip files</strong> from a <strong>list of links</strong> and write the file into the "zip_files/" folder.</p>
<p>I am dependent on the outside webpage for the links and zip files!</p>
<pre><code>Since you:
- shouldn't mock
- don't rely on outside dependencies
- want to have confidence your code works!
</code></pre>
<p>I feel I need a test to show me there is a working function to download a zip-file.</p>
<hr />
<p><strong>WHY do I need the function?</strong>
My python script shall return a JSON-file with two lists (years, grades).
The process to get the data is as follows:</p>
<ol>
<li>Go to Webpage</li>
<li>Get list of all <code>&lt;a&gt;</code>-tags with <code>class="zip_download"</code></li>
<li>Download every zip on that list</li>
<li>Unzip files and move Excel-files to "excel_files/" (Need to back excel-files up for later!)</li>
<li>Run calculations on every Excel to extract desired JSON</li>
</ol>
<p>I am stuck writing tests for <strong>the third (3.) step</strong> and feel my thinking is too tightly coupled with the implementation.</p>
<hr />
<pre><code> def test_after_download_have_more_zip_files_than_links_in_list(self):
"""
GIVEN: I provide a list of links
WHEN: Downloading is finished
THEN: My folder has at least as many zip files as the list items
"""
zip_download_folder = "zip_files"
list_of_zip_links = load_json_data(zip_link_file)
download_zip_links(list_of_zip_links, zip_download_folder)
        self.assertGreaterEqual(len(os.listdir(zip_download_folder)), len(list_of_zip_links))
        for file in os.listdir(zip_download_folder):
            self.assertTrue(file.endswith('.zip'))
</code></pre>
<p><strong>How do I write a useful test here?</strong></p>
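<p>One way to keep such a test hermetic without mocking is to serve a real zip file from a throwaway local HTTP server (standard library only), so the download function is exercised end to end with no outside dependency. A sketch; <code>download_zip_links</code> here is a stand-in for the function under test:</p>

```python
import http.server, io, os, tempfile, threading, urllib.request, zipfile

def download_zip_links(links, folder):
    # Stand-in for the real function under test.
    os.makedirs(folder, exist_ok=True)
    for i, url in enumerate(links):
        with urllib.request.urlopen(url) as resp:
            with open(os.path.join(folder, f"file_{i}.zip"), "wb") as f:
                f.write(resp.read())

# Build a tiny zip in a temp dir and serve that dir locally.
serve_dir = tempfile.mkdtemp()
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "hi")
with open(os.path.join(serve_dir, "a.zip"), "wb") as f:
    f.write(buf.getvalue())

handler = lambda *a, **kw: http.server.SimpleHTTPRequestHandler(
    *a, directory=serve_dir, **kw)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

out_dir = tempfile.mkdtemp()
links = [f"http://127.0.0.1:{server.server_address[1]}/a.zip"]
download_zip_links(links, out_dir)
server.shutdown()
```

<p>The test then asserts on the contents of <code>out_dir</code> exactly as in the test above, but it now passes deterministically on any machine, including CI.</p>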
|
<python><unit-testing><tdd>
|
2023-04-07 22:09:07
| 2
| 1,179
|
Martin Müsli
|
75,962,228
| 2,687,317
|
Create pandas dataframe from sets of series where there are many repeated columns
|
<p>I have some data in a csv file that I need to summarize in a DataFrame. The new df will be based on a time (LFrame) that increments monotonically, but repeats in the csv since there are several records associated with that time step. Many of these simply repeat, but several csv records are unique and I either sum values or take averages of those... as appropriate to my data.</p>
<p>I can do this with the code below, but it runs very slowly (the csv is 40-50MB). Is there a more efficient way of sampling the records that don't change for each LFrame (you can see them below where I access them via <code>iloc[0]</code>)?</p>
<pre><code>new_File = pd.DataFrame({"LFrame": pfd_df.LFrame.unique(),
"Num_Channels": [ pfd_df[pfd_df.LFrame == lf].num_signals.sum() for lf in pfd_df.LFrame.unique()],
"Date_Time": [pfd_df[pfd_df.LFrame == lf]["Date_Time"].iloc[0] for lf in pfd_df.LFrame.unique()],
"run_time": [pfd_df[pfd_df.LFrame == lf].run_time.iloc[0] for lf in pfd_df.LFrame.unique()],
"az": [pfd_df[pfd_df.LFrame == lf].az.iloc[0] for lf in pfd_df.LFrame.unique()],
"el": [pfd_df[pfd_df.LFrame == lf].el.iloc[0] for lf in pfd_df.LFrame.unique()],
"distance": [pfd_df[pfd_df.LFrame == lf].distance.iloc[0] for lf in pfd_df.LFrame.unique()],
"pass_ID": [pfd_df[pfd_df.LFrame == lf].pass_ID.iloc[0] for lf in pfd_df.LFrame.unique()],
"SV_ID": [pfd_df[pfd_df.LFrame == lf].SV_ID.iloc[0] for lf in pfd_df.LFrame.unique()],
"PFD": [pfd_df[pfd_df.LFrame == lf].pfd.sum() * (8.28/90) / (1e-26 * 41667) for lf in pfd_df.LFrame.unique()]
})
</code></pre>
<p>I think all the comprehensions are slowing me down significantly.</p>
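<p>A single <code>groupby</code> pass computes all of these in one scan instead of re-filtering the frame for every unique <code>LFrame</code>. A hedged sketch on a tiny stand-in frame (only a few of the question's columns shown; the remaining <code>iloc[0]</code>-style columns follow the same <code>"first"</code> pattern):</p>

```python
import pandas as pd

pfd_df = pd.DataFrame({
    "LFrame": [1, 1, 2],
    "num_signals": [2, 3, 4],
    "Date_Time": ["t0", "t0", "t1"],
    "az": [10.0, 10.0, 20.0],
    "pfd": [1e-20, 2e-20, 3e-20],
})

new_file = (pfd_df.groupby("LFrame", sort=True)
            .agg(Num_Channels=("num_signals", "sum"),
                 Date_Time=("Date_Time", "first"),  # constant per LFrame
                 az=("az", "first"),
                 PFD=("pfd", "sum"))
            .reset_index())
new_file["PFD"] *= (8.28 / 90) / (1e-26 * 41667)
```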
|
<python><pandas>
|
2023-04-07 22:00:13
| 1
| 533
|
earnric
|
75,962,182
| 3,621,575
|
How to take the name of a parameter variable as a string (within a function) in python
|
<p>I plan to create a function and run the function with several data sets.</p>
<p>I would like to add a column whose value for each record is the name of the data set, based on the name of the data frame passed as a parameter.</p>
<p>I have an example here that creates a column with values of the argument <code>argument1</code>, rather than the parameter name <code>string_i_want</code>.</p>
<p>Is there a way I could obtain the name of the variable passed as the parameter (<code>string_i_want</code>) and assign that as the value in the 'scenario' column, rather than <code>argument1</code>?</p>
<pre><code>import pandas as pd
string_i_want = pd.DataFrame([1, 2, 3, 4], columns=['column_name'])
def example(argument1):
argument1['squared'] = argument1['column_name'] ** 2
argument1['scenario'] = f'{argument1=}'.split('=')[0]
return argument1
example(string_i_want)
</code></pre>
<p>The returned data frame is:</p>
<p><a href="https://i.sstatic.net/Zyk5f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zyk5f.png" alt="enter image description here" /></a></p>
<p>The returned data frame I would instead like to build is:</p>
<p><a href="https://i.sstatic.net/XXiMx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XXiMx.png" alt="enter image description here" /></a></p>
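<p>Recovering a caller's variable name from inside a function is fragile (the parameter only ever knows the name <code>argument1</code>). The usual fix is to pass the name explicitly, for example by iterating over a dict of scenario name to data frame; a hedged sketch:</p>

```python
import pandas as pd

def example(df, scenario_name):
    out = df.copy()
    out["squared"] = out["column_name"] ** 2
    out["scenario"] = scenario_name  # name supplied by the caller
    return out

scenarios = {
    "string_i_want": pd.DataFrame([1, 2, 3, 4], columns=["column_name"]),
}
results = {name: example(df, name) for name, df in scenarios.items()}
```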
|
<python>
|
2023-04-07 21:48:31
| 2
| 581
|
windyvation
|
75,962,147
| 5,858,752
|
Can you set the column of a dataframe to be a reference of another column in the same dataframe?
|
<p>I don't know if this can be done.</p>
<p>For context, I have a dataframe <code>df</code>, with columns <code>modified</code> and <code>unmodified</code>. Depending on an user input flag, <code>use_modified</code>, I may want to use a modified or unmodified column to do some computations. The computations occur in a loop, so I don't want to do something like</p>
<pre><code>for i in len(df):
if use_modified:
# use df["modified"][i] to do some computations
else:
# use df["unmodified"][i] to do some computations
</code></pre>
<p>I would like to create a 3rd column "value_to_use" that's a reference to either <code>modified</code> or <code>unmodified</code> depending on the flag <code>use_modified</code>, and then the loop doesn't have to do if checks.</p>
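<p>A true reference column isn't really needed: selecting the Series once before the loop (no copy of the data is made) gives the same effect, and the <code>if</code> runs only once. A minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"modified": [1, 2, 3], "unmodified": [10, 20, 30]})
use_modified = True

# Choose the column once, outside the loop; df[col] returns the
# existing column without copying its data.
values = df["modified"] if use_modified else df["unmodified"]

total = 0
for v in values:
    total += v  # stand-in for the real per-row computation
```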
|
<python><pandas><dataframe>
|
2023-04-07 21:42:09
| 2
| 699
|
h8n2
|
75,962,130
| 8,115,653
|
creating a .csv from a combination of strings and pandas dfs
|
<p>my <code>.csv</code> with multiple blocks needs to follow this format (1 block sample):</p>
<p><a href="https://i.sstatic.net/xcdSV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xcdSV.png" alt="enter image description here" /></a></p>
<p>So I am trying to do it in pandas and then write to csv. The problem is the comments above each of the two sections (outside the dataframes).</p>
<p>Here is sample code:</p>
<pre><code>import numpy as np
import pandas as pd
h_comment = pd.DataFrame(['#(H) Header'], columns=['name'])
df1 = pd.DataFrame({'name': 'Donald Trump',
'state':'FL',
'value':'0'},
index=[0])
data_comment = pd.DataFrame(['#(S) Schedule'], columns=['A'])
df2 = pd.DataFrame(np.random.rand(3,4),
columns=list('ABCD'))
to_csv1 = pd.concat([h_comment,df1])
to_csv2 = pd.concat([data_comment,df2])
</code></pre>
<p>the issue is that those "comments" are inside my df columns, for example:</p>
<pre><code>to_csv2
Out[116]:
A B C D
0 #(S) Schedule NaN NaN NaN
0 0.521739 0.622079 0.322372 0.687531
1 0.991336 0.297848 0.635697 0.025620
2 0.068900 0.898806 0.562971 0.567817
</code></pre>
<p>The solution of first creating a <code>.csv</code> with comments and then appending <code>dfs</code> to it is not great, since there are many blocks like the one above and that hurts performance, so I'd rather write to csv once at the end.</p>
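<p>Writing everything into one in-memory buffer and flushing it to disk once avoids both the comments-inside-dataframes problem and the repeated-append cost, because <code>df.to_csv</code> accepts any file-like object. A hedged sketch:</p>

```python
import io
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"name": ["Donald Trump"], "state": ["FL"], "value": [0]})
df2 = pd.DataFrame(np.arange(12).reshape(3, 4), columns=list("ABCD"))

buf = io.StringIO()
for comment, df in [("#(H) Header", df1), ("#(S) Schedule", df2)]:
    buf.write(comment + "\n")    # comment line lives outside the frames
    df.to_csv(buf, index=False)  # block written right below its comment

csv_text = buf.getvalue()
# A single disk write at the end:
# with open("out.csv", "w") as f:
#     f.write(csv_text)
```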
|
<python><pandas><dataframe><csv>
|
2023-04-07 21:40:28
| 1
| 1,117
|
gregV
|
75,962,020
| 357,024
|
Psycopg3 selecting null values
|
<p>I have started migrating some Python code from psycopg2 to psycopg3. It appears there is a change in how null values are accepted as query parameters in the newer version. Based on a read of the <a href="https://www.psycopg.org/psycopg3/docs/basic/params.html" rel="nofollow noreferrer">documentation</a> for this topic in psycopg3, there isn't any mention of how to deal with null values.</p>
<p>Below is a simplified example of my use case that worked fine in psycopg2</p>
<pre><code>cursor.execute("SELECT * FROM mock_table WHERE var_1 is %s", [None])
</code></pre>
<p>And the below error is produced now in psycopg3</p>
<blockquote>
<p>psycopg.errors.SyntaxError: syntax error at or near "$1" LINE 1:</p>
</blockquote>
<blockquote>
<p>SELECT * FROM mock_table WHERE var_1 is $1</p>
</blockquote>
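<p>psycopg 3 uses server-side binding, and PostgreSQL does not allow a bound parameter after <code>IS</code>. SQL's null-safe comparison <code>IS NOT DISTINCT FROM</code> does accept a placeholder and behaves like <code>=</code> that also matches NULLs; a hedged sketch of the rewritten query (string only — no connection is opened here, and the table/column names are copied from the question's example):</p>

```python
# Null-safe comparison accepts a server-side bound parameter,
# unlike "IS %s".
query = "SELECT * FROM mock_table WHERE var_1 IS NOT DISTINCT FROM %s"
params = [None]
# cursor.execute(query, params)  # works for NULL and non-NULL values alike
```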
|
<python><psycopg3>
|
2023-04-07 21:20:23
| 1
| 61,290
|
Mike
|
75,961,902
| 9,279,753
|
Change index column to real column in pandas
|
<p>I had an original DataFrame that looked like this:</p>
<pre class="lang-py prettyprint-override"><code> column0 column2
0 0 SomethingSomeData1MixedWithSomeData2
1 something SomeMoreDataWithSomeData3
2 whatever SomeData4
</code></pre>
<p>From that, I needed to extract the regex <code>(SomeData\d)</code>, which let me to use <code>df[column2].str.extractall()</code>, with the following results:</p>
<pre class="lang-py prettyprint-override"><code> data
match
0 0 SomeData1
0 1 SomeData2
1 0 SomeData3
2 0 SomeData4
</code></pre>
<p>What I actually need is to have something like this:</p>
<pre class="lang-py prettyprint-override"><code> data0 data1
0 SomeData1 SomeData2
1 SomeData3
2 SomeData4
</code></pre>
<p>It can also be something like:</p>
<pre class="lang-py prettyprint-override"><code> data
0 [SomeData1, SomeData2]
1 [SomeData3]
2 [SomeData4]
</code></pre>
<p>I've tried to use <code>df[column2].str.split()</code>, but it created a list on the column and kept all of the other stuff that wasn't needed.</p>
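<p>The <code>extractall</code> result carries a two-level index (original row, match number), so unstacking the <code>match</code> level spreads the matches into columns; grouping on the first level gives the list form instead. A hedged sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "column0": ["0", "something", "whatever"],
    "column2": ["SomethingSomeData1MixedWithSomeData2",
                "SomeMoreDataWithSomeData3",
                "SomeData4"],
})

matches = df["column2"].str.extractall(r"(SomeData\d)")[0]

# Wide form: one column per match, blanks where a row had fewer matches.
wide = matches.unstack("match").add_prefix("data").fillna("")

# List form: one list per original row.
as_lists = matches.groupby(level=0).agg(list)
```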
|
<python><pandas><dataframe>
|
2023-04-07 20:59:25
| 2
| 599
|
Jose Vega
|
75,961,881
| 5,500,634
|
Automate Outlook and Send Email on Mac with Python
|
<p>I am working on writing a Python script to automate Outlook email on Mac.</p>
<p>From the earlier Stack Overflow question <a href="https://stackoverflow.com/questions/61529817/automate-outlook-on-mac-with-python">Automate Outlook on Mac with Python</a>, the code works to show an Outlook email with an attachment, but I need to send the email afterwards.</p>
<p>I added <code>send_email</code> with <code>Send()</code> operation, and it doesn't work:</p>
<pre><code>def create_message_with_attachment():
subject = 'This is an important email!'
body = 'Just kidding its not.'
to_recip = ['myboss@mycompany.com', 'theguyih8@mycompany.com']
msg = Message(subject=subject, body=body, to_recip=to_recip)
...
msg.send_email()
class Outlook(object):
def __init__(self):
self.client = app('Microsoft Outlook')
class Message(object):
def __init__(self, parent=None, subject='', body='', to_recip=[], cc_recip=[], show_=True):
if parent is None: parent = Outlook()
client = parent.client
self.msg = client.make(
new=k.outgoing_message,
with_properties={k.subject: subject, k.content: body})
.....
def send_email(self):
self.msg.Send()
....
</code></pre>
<p>What is the right method to send email? A ton of thanks!</p>
|
<python><python-3.x><macos><email><outlook>
|
2023-04-07 20:55:06
| 0
| 489
|
TripleH
|
75,961,781
| 13,916,049
|
Match column names to column values in another dataframe and get corresponding column value
|
<p>If <code>df.columns</code> match <code>map["query"]</code>, I want to replace <code>df.columns</code> with the corresponding <code>map["symbol"]</code>.</p>
<pre><code>import pandas as pd
df = df.T.loc[df.columns.isin(map["query"])].T
df.columns = map["symbol"]
df= df[df.columns.dropna()]
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [60], in <cell line: 1>()
----> 1 df.columns = map["symbol"]
2 df = df[df.columns.dropna()]
3 df.head()
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/generic.py:5588, in NDFrame.__setattr__(self, name, value)
5586 try:
5587 object.__getattribute__(self, name)
-> 5588 return object.__setattr__(self, name, value)
5589 except AttributeError:
5590 pass
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/_libs/properties.pyx:70, in pandas._libs.properties.AxisProperty.__set__()
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/generic.py:769, in NDFrame._set_axis(self, axis, labels)
767 def _set_axis(self, axis: int, labels: Index) -> None:
768 labels = ensure_index(labels)
--> 769 self._mgr.set_axis(axis, labels)
770 self._clear_item_cache()
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/internals/managers.py:214, in BaseBlockManager.set_axis(self, axis, new_labels)
212 def set_axis(self, axis: int, new_labels: Index) -> None:
213 # Caller is responsible for ensuring we have an Index object.
--> 214 self._validate_set_axis(axis, new_labels)
215 self.axes[axis] = new_labels
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/internals/base.py:69, in DataManager._validate_set_axis(self, axis, new_labels)
66 pass
68 elif new_len != old_len:
---> 69 raise ValueError(
70 f"Length mismatch: Expected axis has {old_len} elements, new "
71 f"values have {new_len} elements"
72 )
ValueError: Length mismatch: Expected axis has 44598 elements, new values have 44603 elements
</code></pre>
<p>Data:</p>
<p><code>df</code></p>
<pre><code>pd.DataFrame({'ENSG00000279928': {2: 0, 3: 0},
'ENSG00000228037': {2: 0, 3: 0},
'ENSG00000142611': {2: 0, 3: 13},
'ENSG00000284616': {2: 0, 3: 0},
'ENSG00000157911': {2: 0, 3: 8},
'ENSG00000269896': {2: 0, 3: 0},
'ENSG00000228463': {2: 0, 3: 0},
'ENSG00000260972': {2: 0, 3: 0},
'ENSG00000224340': {2: 0, 3: 0},
'ENSG00000226374': {2: 0, 3: 0},
'ENSG00000229280': {2: 0, 3: 0},
'ENSG00000142655': {2: 0, 3: 2},
'ENSG00000232596': {2: 0, 3: 0},
'ENSG00000235054': {2: 0, 3: 0},
'ENSG00000231510': {2: 0, 3: 0},
'ENSG00000149527': {2: 0, 3: 0},
'ENSG00000284739': {2: 0, 3: 0},
'ENSG00000171621': {2: 0, 3: 0},
'ENSG00000272235': {2: 0, 3: 0}})
</code></pre>
<p><code>map</code></p>
<pre><code>pd.DataFrame({'query': {0: 'ENSG00000279928',
1: 'ENSG00000228037',
2: 'ENSG00000142611',
4: 'ENSG00000157911',
5: 'ENSG00000269896',
6: 'ENSG00000228463',
8: 'ENSG00000224340',
9: 'ENSG00000226374',
10: 'ENSG00000229280',
11: 'ENSG00000142655',
12: 'ENSG00000232596',
13: 'ENSG00000235054',
14: 'ENSG00000231510',
15: 'ENSG00000149527',
17: 'ENSG00000171621'},
'_id': {0: 'ENSG00000279928',
1: '100996583',
2: '63976',
4: '5192',
5: '100129534',
6: '728481',
8: '100270877',
9: '105376672',
10: '644357',
11: '5195',
12: '105376679',
13: '284661',
14: 'ENSG00000231510',
15: '9651',
17: '80176'},
'_score': {0: 8.327029,
1: 25.81547,
2: 24.07959,
4: 24.19017,
5: 8.320914,
6: 8.06594,
8: 8.327571,
9: 25.815289,
10: 8.327029,
11: 24.080423,
12: 25.932892,
13: 25.794834,
14: 25.811064,
15: 24.476448,
17: 25.008629},
'symbol': {0: 'DDX11L17',
1: 'LOC100996583',
2: 'PRDM16',
4: 'PEX10',
5: 'LOC100129534',
6: 'RPL23AP21',
8: 'RPL21P21',
9: 'LINC01345',
10: 'EEF1DP6',
11: 'PEX14',
12: 'LINC01646',
13: 'LINC01777',
14: 'LINC02782',
15: 'PLCH2',
17: 'SPSB1'}})
</code></pre>
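<p>The length mismatch happens because <code>map["symbol"]</code> has a different number of rows than the filtered frame has columns. Renaming through a dict avoids any alignment problem, since <code>rename</code> simply skips columns that are not in the dict (note that <code>map</code> shadows the builtin, so it is called <code>map_df</code> below). A hedged sketch on a cut-down version of the data:</p>

```python
import pandas as pd

df = pd.DataFrame({"ENSG00000279928": [0, 0],
                   "ENSG00000142611": [0, 13],
                   "ENSG_NOT_IN_MAP": [1, 1]})
map_df = pd.DataFrame({"query": ["ENSG00000279928", "ENSG00000142611"],
                       "symbol": ["DDX11L17", "PRDM16"]})

# Build a query -> symbol lookup and rename; unmapped columns are
# left untouched rather than causing a length mismatch.
lookup = dict(zip(map_df["query"], map_df["symbol"]))
df = df.rename(columns=lookup)

# Optionally drop columns that had no mapping:
df = df[[c for c in df.columns if c in set(lookup.values())]]
```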
|
<python><pandas><dataframe>
|
2023-04-07 20:39:10
| 1
| 1,545
|
Anon
|
75,961,719
| 8,372,455
|
pandas make dummys on small datasets
|
<p>Can someone help me with this function to make dummies:</p>
<pre><code>def make_dummies(df):
# Create dummies for all hours of the day
hours = pd.get_dummies(df.index.hour, prefix='hour')
# Create columns for hour, day of week, weekend, and month
df['hour'] = df.index.strftime('%H')
df['day_of_week'] = df.index.dayofweek
df['weekend'] = np.where(df['day_of_week'].isin([5,6]), 1, 0)
df['month'] = df.index.month
# Create dummies for hours of the day
hour_dummies = pd.get_dummies(df['hour'], prefix='hour')
# Create dummies for all days of the week
day_mapping = {0: 'monday', 1: 'tuesday', 2: 'wednesday', 3: 'thursday', 4: 'friday', 5: 'saturday', 6: 'sunday'}
all_days = pd.Categorical(df['day_of_week'].map(day_mapping), categories=day_mapping.values())
day_dummies = pd.get_dummies(all_days)
# Create dummies for all months of the year
month_mapping = {1: 'jan', 2: 'feb', 3: 'mar', 4: 'apr', 5: 'may', 6: 'jun', 7: 'jul',
8: 'aug', 9: 'sep', 10: 'oct', 11: 'nov', 12: 'dec'}
all_months = pd.Categorical(df['month'].map(month_mapping), categories=month_mapping.values())
month_dummies = pd.get_dummies(all_months)
# Merge all dummies with original DataFrame
df = pd.concat([df, hours, hour_dummies, day_dummies, month_dummies], axis=1)
# Drop redundant columns
df = df.drop(['hour', 'day_of_week', 'month'], axis=1)
return df
</code></pre>
<p>On a small dataset like this:</p>
<pre><code>import pandas as pd
import numpy as np
data = {"temp":[53.13,52.93,52.56,51.58,47.57],
"Date":["2023-04-07 15:00:00-05:00","2023-04-07 16:00:00-05:00","2023-04-07 17:00:00-05:00","2023-04-07 18:00:00-05:00","2023-04-07 19:00:00-05:00"]
}
df = pd.DataFrame(data).set_index("Date")
# Converting the index as date
df.index = pd.to_datetime(df.index)
df = make_dummies(df)
print(df)
</code></pre>
<p>This won't merge the data correctly. I apologize for the screenshot, but the function is just stacking the dummy variables beneath the data, whereas what I was hoping for is that ALL dummy variables would be added to the df as columns, not stacked beneath it. Hopefully this makes sense; I was hoping to make a function that creates <em><strong>all</strong></em> dummy variables for each hour, month, and day type.</p>
<p><a href="https://i.sstatic.net/2sgNg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2sgNg.png" alt="enter image description here" /></a></p>
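<p>The stacking comes from index misalignment: <code>pd.get_dummies(df.index.hour, ...)</code> returns a frame with a fresh <code>RangeIndex</code>, so <code>pd.concat(..., axis=1)</code> against the datetime-indexed <code>df</code> lines nothing up. Re-attaching the original index to every dummy frame before concatenating fixes it; a sketch of the key step:</p>

```python
import pandas as pd

df = pd.DataFrame({"temp": [53.13, 52.93]},
                  index=pd.to_datetime(["2023-04-07 15:00:00",
                                        "2023-04-07 16:00:00"]))

hour_dummies = pd.get_dummies(df.index.hour, prefix="hour")
hour_dummies.index = df.index  # align with df before concatenating

out = pd.concat([df, hour_dummies], axis=1)
```

<p>The same one-line index assignment applies to the day and month dummy frames inside <code>make_dummies</code>.</p>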
|
<python><pandas>
|
2023-04-07 20:26:13
| 2
| 3,564
|
bbartling
|
75,961,657
| 1,672,565
|
Make a generator yield "to two different places/branches" without recomputing its state(s)?
|
<p>I am sometimes building stuff similar to computational graphs using "simple" Python generators or generator comprehensions, for example:</p>
<pre class="lang-py prettyprint-override"><code># example 1
w1 = lambda v: v ** 2 # placeholder for expensive operation
w2 = lambda v: v - 3 # placeholder for expensive operation
w3 = lambda v: v / 7 # placeholder for expensive operation
d = [10, 11, 12, 13] # input data, could be "large"
r1 = (w1(x) for x in d) # generator for intermediary result 1
r2 = (w2(x) for x in r1) # generator for intermediary result 2
r3 = [w3(x) for x in r2] # final result
print(r3)
</code></pre>
<p>Imagine the list <code>d</code> being really large and filled with bigger stuff than integers. <code>r1</code> and <code>r2</code> are chained-up generators, saving a ton of memory. My lambdas are simple placeholders for expensive computations / processing steps that yield new, independent, intermediate results.</p>
<p>The cool thing about this approach is that one generator can depend on multiple other generators, e.g. the <code>zip</code> function, which technically allows to "merge/join branches" of a graph:</p>
<pre class="lang-py prettyprint-override"><code># example 2
wa1 = lambda v: v ** 2 # placeholder for expensive operation
wb1 = lambda v: v ** 3 # placeholder for expensive operation
wm = lambda a, b: a + b # placeholder for expensive operation (MERGE)
w2 = lambda v: v - 3 # placeholder for expensive operation
w3 = lambda v: v / 7 # placeholder for expensive operation
da = [10, 11, 12, 13] # input data "a", could be "large"
db = [20, 21, 22, 23] # input data "b", could be "large"
ra1 = (wa1(x) for x in da) # generator for intermediary result 1a
rb1 = (wb1(x) for x in db) # generator for intermediary result 1b
rm = (wm(x, y) for x, y in zip(ra1, rb1)) # generator for intermediary result rm -> MERGE of "a" and "b"
r2 = (w2(x) for x in rm) # generator for intermediary result 2
r3 = [w3(x) for x in r2] # final result
print(r3)
</code></pre>
<p>Two data sources, <code>da</code> and <code>db</code>. Their intermediate results get "merged" in <code>rm</code> though the actual computation is really only triggered by computing <code>r3</code>. Everything above is generators, computed on demand.</p>
<p>Something that I have been pondering about for a while is how to reverse this, i.e. how to "split into branches" using generators - without having to keep all intermediate results of one step in memory simultaneously. Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code># example 3
w1 = lambda v: v ** 2 # placeholder for expensive operation
ws = lambda v: (v - 1, v + 1) # placeholder for expensive operation (SPLIT)
w2 = lambda v: v - 3 # placeholder for expensive operation
w3 = lambda v: v / 7 # placeholder for expensive operation
d = [10, 11, 12, 13] # input data, could be "large"
r1 = (w1(x) for x in d) # generator for intermediary result 1
rs = [ws(x) for x in r1] # ???
ra2 = (w2(x) for x, _ in rs) # generator for intermediary result 2
rb2 = (w2(x) for _, x in rs) # generator for intermediary result 2
ra3 = [w3(x) for x in ra2] # final result "a"
rb3 = [w3(x) for x in rb2] # final result "b"
print(ra3, rb3)
</code></pre>
<p>The results of generator <code>r1</code> are required by two different operations, as depicted in lambda <code>ws</code>, which also handles the "split into branches".</p>
<p>My question is: Can I substitute <code>rs</code>, currently a list comprehension, by something that behaves like a generator, computes every intermediate result only once but makes it available to multiple generators, e.g. <code>ra2</code> and <code>rb2</code>, "on demand"? If I had to keep <em>some</em> intermediate results, i.e. elements of <code>rs</code> cached in memory at any given time, I'd be fine - just not <em>all</em> of <code>rs</code> as e.g. a list.</p>
<p>Since the branches in example 3 are kind of symmetrical, I could get away with this:</p>
<pre class="lang-py prettyprint-override"><code># example 4
w1 = lambda v: v ** 2 # placeholder for expensive operation
ws = lambda v: (v - 1, v + 1) # placeholder for expensive operation (SPLIT)
w2 = lambda v: v - 3 # placeholder for expensive operation
w3 = lambda v: v / 7 # placeholder for expensive operation
d = [10, 11, 12, 13] # input data, could be "large"
r1 = (w1(x) for x in d) # generator for intermediary result 1
rs = (ws(x) for x in r1) # ???
r2 = ((w2(x), w2(y)) for x, y in rs) # generator for intermediary result 2
r3 = [(w3(x), w3(y)) for x, y in r2] # final result
print(r3)
</code></pre>
<p>For more complicated processing pipelines this could however become quite cluttered and impractical. For the purpose of this question, let's assume that I really want to separate between intermediate results 2 for branches "a" and "b".</p>
<p>My best bad idea so far was to work with threads and queues since all of this also implicitly raises the question of execution order. In example 3, <code>ra3</code> will be evaluated completely before <code>rb3</code> gets even touched, implicating that all intermediate results of <code>rs</code> must be kept around until <code>rb3</code> can be evaluated. Realistically, <code>ra3</code> and <code>rb3</code> must therefore be evaluated in parallel or alternating if I do not want to keep all of <code>rs</code> in memory at the same time. I am wondering if there is a better, more clever way to get this done - it smells a lot like some <code>async</code> magic could make sense here.</p>
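<p><code>itertools.tee</code> does essentially this: it splits one iterator into several, caching only the items that one consumer has seen and another has not. If the branches are advanced in lockstep (e.g. via <code>zip</code>), the internal buffer stays tiny, and each <code>ws(x)</code> is computed exactly once. A sketch of example 3 rewritten with <code>tee</code>:</p>

```python
from itertools import tee

w1 = lambda v: v ** 2
ws = lambda v: (v - 1, v + 1)  # SPLIT
w2 = lambda v: v - 3
w3 = lambda v: v / 7

d = [10, 11, 12, 13]
r1 = (w1(x) for x in d)
rs_a, rs_b = tee(ws(x) for x in r1)  # each ws(x) is computed once

ra2 = (w2(x) for x, _ in rs_a)
rb2 = (w2(x) for _, x in rs_b)

# Consuming the branches alternately keeps tee's buffer at one item.
ra3, rb3 = [], []
for a, b in zip(ra2, rb2):
    ra3.append(w3(a))
    rb3.append(w3(b))
```

<p>If the branches must instead be consumed independently and far apart, tee's buffer grows to cover the gap, and a thread/queue or async design becomes the honest alternative.</p>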
|
<python><graph><generator>
|
2023-04-07 20:14:02
| 1
| 3,789
|
s-m-e
|
75,961,628
| 6,357,916
|
Python neither logging to console nor to file
|
<p>I have following code in <a href="https://colab.research.google.com/drive/1VA8EStfjt9KnK46WumF5AhURbcW7pPGl?usp=sharing" rel="nofollow noreferrer">my colab notebook</a>:</p>
<pre><code># Cell 1
import logging
# Cell 2
class MyClass:
def __init__(self):
self.configureLogger()
self.fileLogger.info('info log to file')
self.consoleLogger.debug('debug log to console')
def configureLogger(self):
# https://stackoverflow.com/a/56144390/6357916
# logging.basicConfig(level=logging.NOTSET)
self.fileLogger = logging.getLogger('fileLogger2')
if not self.fileLogger.hasHandlers():
self.fileLogger.setLevel(logging.INFO)
formatter1 = logging.Formatter("%(asctime)s,%(message)s")
file_handler = logging.FileHandler('log.txt')
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(formatter1)
self.fileLogger.addHandler(file_handler)
self.consoleLogger = logging.getLogger('consoleLogger2')
if not self.consoleLogger.hasHandlers():
self.consoleLogger.setLevel(logging.DEBUG)
formatter2 = logging.Formatter("%(message)s")
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter2)
self.consoleLogger.addHandler(console_handler)
c = MyClass()
</code></pre>
<p>There <code>MyClass</code> has a method called <code>configureLogger()</code> which configures two loggers:</p>
<ol>
<li><code>fileLogger</code>: logs to file</li>
<li><code>consoleLogger</code>: logs to console</li>
</ol>
<p><code>MyClass.__init__()</code> calls this method and uses both loggers to log a single message each. But nothing gets logged to the console, nor does the <code>log.txt</code> file get created.</p>
<p>I expect it to log <code>debug log to console</code> to the console and <code>info log to file</code> to the file. This is how it behaves on my local machine, but it fails on Colab.</p>
<p>What am I missing here?</p>
<p><strong>Update</strong></p>
<p>After commenting both line containing <code>if ...</code>, it is now printing</p>
<pre><code>2023-04-07 20:10:20,860,info log to file
</code></pre>
<p>to file and</p>
<pre><code>INFO:fileLogger2:info log to file
debug log to console
DEBUG:consoleLogger2:debug log to console
</code></pre>
<p>to console.</p>
<p>This means those <code>if</code> conditions are <code>False</code>, so no handlers are getting added, and that is why nothing was getting logged. But those <code>if</code> conditions should be <code>True</code> initially, right (as a logger should not have any handlers to start with)?</p>
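<p><code>hasHandlers()</code> walks up the logger hierarchy, so it returns <code>True</code> whenever any ancestor has a handler — on Colab the root logger already comes configured with one, even though the named logger itself has none. Checking the logger's own <code>handlers</code> list avoids that; a minimal sketch:</p>

```python
import logging

logger = logging.getLogger("fileLogger_demo")

# hasHandlers() also considers ancestor (root) handlers, which is why
# it can be True on Colab even for a freshly created logger. The
# logger's own list is the reliable "did *I* configure this?" check:
if not logger.handlers:
    logger.addHandler(logging.NullHandler())  # stand-in for the real handler
```

<p>Setting <code>propagate = False</code> on the named loggers would also stop the duplicate <code>INFO:fileLogger2:...</code> lines that the root handler prints.</p>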
|
<python><python-3.x><python-logging>
|
2023-04-07 20:07:57
| 1
| 3,029
|
MsA
|
75,961,601
| 4,912,515
|
Pytest: Nested Parametrization
|
<p>I am using Python to compile code for a system in a different language (Verilog) and test it with inputs. My system has a few compile-time and a few runtime parameters. I'm currently doing this:</p>
<pre><code>@pytest.mark.parametrize("C1", [3,2]) # compile time
@pytest.mark.parametrize("C2", [8,1]) # compile time
@pytest.mark.parametrize("R1", [8,3]) # runtime
@pytest.mark.parametrize("R2", [8,2]) # runtime
def test_design(C1,C2,R1,R2):
compile(C1,C2)
prepare_inputs(C1,C2,R1,R2)
run(C1,C2,R1,R2)
compare_results(C1,C2,R1,R2)
</code></pre>
<p>This does 16 tests, and compiles the design 16 times. Instead I want to compile the design only 4 times (for every compile time parameter) and then test the design 4 times each, once per runtime parameter. Something like this:</p>
<pre><code>for C1 in [3,2]:
for C2 in [8,1]:
compile(C1,C2)
for R1 in [8,3]:
for R2 in [8,2]:
prepare_inputs(C1,C2,R1,R2)
run(C1,C2,R1,R2)
compare_results(C1,C2,R1,R2)
</code></pre>
<p>How can I write this with pytest?</p>
<p>Note: I referred to <a href="https://stackoverflow.com/questions/40164910/nested-parametrized-tests-pytest">this</a>, but I'm unable to understand how to run <code>compile()</code> only once per combination of compile-time parameters.</p>
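In pytest, the usual tool for this is a module- or session-scoped fixture parametrized with the compile-time pairs, so pytest groups the runtime cases around each compiled build. The compile-once-per-(C1, C2) behaviour itself can be sketched in plain Python with memoization (the `compile_design` function below is a hypothetical stand-in for the real compile step):

```python
from functools import lru_cache
from itertools import product

compile_calls = []

@lru_cache(maxsize=None)
def compile_design(C1, C2):
    """Stand-in for the expensive Verilog compile; cached per (C1, C2)."""
    compile_calls.append((C1, C2))
    return f"build_{C1}_{C2}"                 # hypothetical compiled artifact

runs = []
for C1, C2, R1, R2 in product([3, 2], [8, 1], [8, 3], [8, 2]):
    build = compile_design(C1, C2)            # hits the cache after the first call
    runs.append((build, R1, R2))              # prepare_inputs/run/compare go here

print(len(runs), len(compile_calls))          # 16 4
```

With pytest itself, the same grouping is typically achieved with `@pytest.fixture(scope="module", params=[(3, 8), (3, 1), (2, 8), (2, 1)])` returning the compiled build, and the runtime parameters kept as `@pytest.mark.parametrize` on the test.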
|
<python><pytest>
|
2023-04-07 20:03:22
| 2
| 493
|
Abarajithan
|
75,961,454
| 12,233,973
|
pytest run using threads causes future tests to fail
|
<p>I have a <code>pytest</code> class that sets up a server in a separate thread before each test via <code>setUp()</code> like this:</p>
<pre><code>def setUp(self):
host = "localhost"
port = 13620
scheme = "grpc+tcp"
location = "{}://{}:{}".format(scheme, host, port)
self.arrow_server = FlightServer(host, location)
def start_server():
pa.start(self.arrow_server)
self.t = threading.Thread(target=start_server, args=(), daemon=True)
self.t.start()
</code></pre>
<p>And in <code>tearDown()</code> I shut down the server and call <code>join()</code> on the thread:</p>
<pre><code>def tearDown(self):
self.arrow_server.shutdown()
self.t.join()
self.arrow_server = None
</code></pre>
<p>The tests behave and pass as expected, but in subsequent <code>pytest</code> invocations, other unrelated integration tests fail due to <code>ProcessLookupError: [Errno 3] No such process</code> in their <code>tearDown</code> method:</p>
<pre><code>def tearDown(self):
if self.process is not None:
if platform.system() == "Windows":
subprocess.call(["taskkill", "/F", "/T", "/PID", str(self.process.pid)])
else:
os.killpg(os.getpgid(self.process.pid), signal.SIGTERM)
self.process.kill()
</code></pre>
<p>These tests fail invariably unless I restart my computer. This issue also occurs in Github Actions.</p>
<p>I'm guessing my code has some unintended side effects outside the context of <code>pytest</code>, but I'm not sure why or how.</p>
|
<python><python-3.x><multithreading><testing><pytest>
|
2023-04-07 19:35:05
| 0
| 512
|
c_sagan
|
75,961,092
| 3,136,710
|
Piet Mondrian style painting using Turtle graphics and recursion
|
<p>My Python exercise states the following:</p>
<blockquote>
<p>The twentieth-century Dutch artist Piet Mondrian developed a style of abstract
painting that exhibited simple recursive patterns. To generate such a pattern
with a computer, one would begin with a filled rectangle in a random color and
then repeatedly fill two unequal subdivisions with random colors, as shown in
Figure 7-16.</p>
</blockquote>
<p><a href="https://i.sstatic.net/bX9lG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bX9lG.png" alt="Generating a simple recursive pattern in the style of Piet Mondrian" /></a></p>
<blockquote>
<p>As you can see, the algorithm continues the process of subdivision until an “aesthetically
right moment” is reached. In this version, the algorithm divides the
current rectangle into portions representing 1/3 and 2/3 of its area and alternates
these subdivisions along the horizontal and vertical axes. Design, implement, and
test a script that uses a recursive function to draw these patterns.</p>
</blockquote>
<p>Now here is my code attempting to do this:</p>
<pre><code>from turtle import Turtle, tracer, update
from random import randint
count = 0
posTest = 0
def draw(t, length, width, step, level):
global count, posTest
if level == 0:#base case
t.fillcolor(randint(0,255),randint(0,255),randint(0,255))
t.begin_fill()
        #begin calculations
t.forward(step)
t.forward(length)
t.right(90)
t.forward(width)
t.right(90)
t.forward(length)
t.right(90)
t.forward(width)
t.right(90)
t.end_fill()
if posTest % 2 == 1:#if it's 2nd box, move to pos for next iteration
t.forward(length)
t.right(90)#length becomes width here
posTest += 1
else: # two calls for 2 more rectangles
if count % 2 == 0:
draw(t, length*(1/3), width, 0, level-1)
draw(t, length*(2/3), width, length*(1/3), level-1)
count += 1
else:
draw(t, width*(1/3), length, 0, level-1)#flip width and height
draw(t, width*(2/3), length, width*(1/3), level-1)
count += 1
def main():
t = Turtle()
#tracer(False)
#grid.draw_grid(10,-200,200,-100,100)
update()
length = 400
width = length//2
level = 2
step = 0
t.up()
t.goto(-.5*length,.5*width)
t.down()
draw(t,length,width, step, level)
if __name__ == "__main__":
main()
</code></pre>
<p>Let's assume I'm setting <code>level = 2</code>. My code basically draws 2 rectangles side by side for each pass of the else statement:</p>
<p><code>else: # two calls for 2 more rectangles</code></p>
<p>One rectangle is 1/3 the length, and one is 2/3 the length. After each of these cycles, the turtle goes to the end of the rectangle using the <code>step</code> value and turns and gets ready for the next iteration. Here is that part of the code, which is controlled by the variable posTest:</p>
<pre><code>if posTest % 2 == 1:#if it's 2nd box, move to pos for next iteration
t.forward(length)
t.right(90)#length becomes width here
</code></pre>
<p>The turtle's new direction has the effect of changing the length and the width for the next draw calls in the nested else statement, so I flip the arguments around here:</p>
<pre><code>else:
draw(t, width*(1/3), length, 0, level-1)#flip width and height
draw(t, width*(2/3), length, width*(1/3), level-1)
count += 1
</code></pre>
<p>The problem is that anything greater than level 1 shrinks the rectangle from the desired size of</p>
<pre><code>length = 400
width = 200
</code></pre>
<p>At <code>level = 0</code> and <code>level = 1</code> the size and output are correct, but at <code>level = 2</code> the rectangle shrinks, and anything greater than <code>level = 2</code> produces a wrong result.</p>
<p>Here is a picture of my results for levels 0-3:</p>
<p><a href="https://i.sstatic.net/JhPzJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JhPzJ.png" alt="Levels 0-3" /></a></p>
<p>Just to note, I'm not trying to split each of the first two rectangles into 4 rectangles as shown in the level 2 picture in the problem's description. I'm assuming that would need 2 turtles or more complex code.</p>
<p>I'm simply trying to split the larger of the 2 rectangles created into two smaller rectangles, and repeating this process, but alternating the chops horizontally and vertically. For example, with my level 1 picture, the working area from that point should just be that red box.</p>
<p>I hope this makes sense.</p>
|
<python><recursion><turtle-graphics><python-turtle>
|
2023-04-07 18:29:06
| 0
| 652
|
Spellbinder2050
|
75,961,071
| 1,442,881
|
Pickled pandas DF exceeds maximum recursion depth
|
<p>Unlike <a href="https://stackoverflow.com/questions/23261911/pandas-to-pickle-error-maximum-recursion-depth-exceeded">this other question</a>, I am able <code>to_pickle()</code> my dataframe just fine. I've pulled a bunch of data from Twitter's API and data-framed it for analysis. I successfully pickled the dataframe, but now I am unable to <code>read_pickle()</code> it.</p>
<p>When I run the <code>read_pickle()</code> line, I get:</p>
<pre><code>File ~/.cache/pypoetry/virtualenvs/twitter-gQnmvvjM-py3.11/lib/python3.11/site-packages/pandas/io/pickle.py:208, in read_pickle(filepath_or_buffer, compression, storage_options)
205 with warnings.catch_warnings(record=True):
206 # We want to silence any warnings about, e.g. moved modules.
207 warnings.simplefilter("ignore", Warning)
--> 208 return pickle.load(handles.handle)
209 except excs_to_catch:
210 # e.g.
211 # "No module named 'pandas.core.sparse.series'"
212 # "Can't get attribute '__nat_unpickle' on <module 'pandas._libs.tslib"
213 return pc.load(handles.handle, encoding=None)
File ~/.cache/pypoetry/virtualenvs/twitter-gQnmvvjM-py3.11/lib/python3.11/site-packages/tweepy/mixins.py:33, in DataMapping.__getattr__(self, name)
31 def __getattr__(self, name):
32 try:
---> 33 return self.data[name]
34 except KeyError:
35 raise AttributeError from None
File ~/.cache/pypoetry/virtualenvs/twitter-gQnmvvjM-py3.11/lib/python3.11/site-packages/tweepy/mixins.py:33, in DataMapping.__getattr__(self, name)
...
---> 33 return self.data[name]
34 except KeyError:
35 raise AttributeError from None
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>When I tried to <code>sys.setrecursionlimit(10**5)</code> or higher, I get this error:</p>
<pre><code>Canceled future for execute_request message before replies were done
The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
</code></pre>
<p>And the Jupyter log gives me this:</p>
<pre><code>error 12:23:55.212: Raw kernel process exited code: undefined
error 12:23:55.214: Error in waiting for cell to complete Error: Canceled future for execute_request message before replies were done
at t.KernelShellFutureHandler.dispose (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:2:32419)
at /home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:2:51471
at Map.forEach (<anonymous>)
at y._clearKernelState (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:2:51456)
at y.dispose (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:2:44938)
at /home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:17:96826
at ee (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:2:1589492)
at jh.dispose (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:17:96802)
at Lh.dispose (/home/ryan/.vscode-server/extensions/ms-toolsai.jupyter-2023.3.1000892223/out/extension.node.js:17:104079)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
warn 12:23:55.215: Cell completed with errors {
message: 'Canceled future for execute_request message before replies were done'
}
</code></pre>
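The traceback points at tweepy's `DataMapping.__getattr__`. What seems to be happening: pickle restores an instance without calling `__init__`, and while rebuilding it looks up `__setstate__` on the instance; `__getattr__` fires, reads `self.data` before it exists, which fires `__getattr__` again, and so on. The class below is a simplified stdlib stand-in for that mechanism, not tweepy's actual code:

```python
import pickle

class DataMappingLike:
    """Simplified stand-in for tweepy's DataMapping mixin."""
    def __init__(self, data):
        self.data = data

    def __getattr__(self, name):
        # Only called when normal lookup fails. While unpickling, __init__
        # is skipped and self.data is not set yet, so looking it up here
        # re-enters __getattr__ with name='data' -> unbounded recursion.
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(name) from None

blob = pickle.dumps(DataMappingLike({"text": "hi"}))  # dumping works fine
try:
    pickle.loads(blob)
    outcome = "loaded"
except RecursionError:
    outcome = "RecursionError"
print(outcome)
```

If this is indeed the cause, raising the recursion limit cannot help (the recursion is unbounded); a common workaround is to store plain data before pickling, e.g. mapping each tweepy object to its `.data` dict.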
|
<python><pandas><pickle>
|
2023-04-07 18:26:04
| 1
| 918
|
Ryan
|
75,961,021
| 18,157,326
|
ERROR: Could not find a version that satisfies the requirement onnxruntime==1.14.1 (from versions: none)
|
<p>I have read the onnxruntime <a href="https://onnxruntime.ai/docs/get-started/with-python.html" rel="nofollow noreferrer">official documentation</a> and found that it supports Python 3.6 - 3.9. I then changed the Docker base image to <code>python:3.8.16-alpine3.17</code>, but it still shows an error like this:</p>
<pre><code>Collecting numba==0.56.4
Downloading numba-0.56.4.tar.gz (2.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 86.3 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): still running...
Preparing metadata (setup.py): finished with status 'done'
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/numpy/
Collecting numpy==1.24.2
Downloading numpy-1.24.2.tar.gz (10.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.9/10.9 MB 87.2 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
ERROR: Could not find a version that satisfies the requirement onnxruntime==1.14.1 (from versions: none)
ERROR: No matching distribution found for onnxruntime==1.14.1
exit status 1
The command '/bin/sh -c pip3 install -r /root/chat-server/requirements.txt' returned a non-zero code: 1
Error: exit status 1
Usage:
github-actions build-push [flags]
Flags:
-h, --help help for build-push
</code></pre>
<p>Why does it still throw this error? What should I do to avoid this issue? This is my Dockerfile:</p>
<pre><code>FROM python:3.8.16-alpine3.17
LABEL maintainer="jiangxiaoqiang (xxxx@gmail.com)"
ENV LANG=en_US.UTF-8 \
LC_ALL=en_US.UTF-8 \
TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
&& echo $TZ > /etc/timezone \
&& mkdir -p /root/chat-server
# upgrade pip
RUN pip install --upgrade pip && apk add --no-cache musl-dev gcc g++ libc-dev linux-headers curl
ADD . /root/chat-server/
EXPOSE 8001
RUN pip3 install -r /root/chat-server/requirements.txt
# RUN pip3 install -r /root/chat-server/requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
WORKDIR /root/chat-server/
ENTRYPOINT exec python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
</code></pre>
<p>and this is my dependencies:</p>
<pre><code>fastapi==0.95.0
openai==0.27.0
sse-starlette==1.3.3
SQLAlchemy~=1.4.6
pydantic~=1.10.5
rembg==2.0.32
Pillow==9.5.0
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
async-timeout==4.0.2
asyncer==0.0.2
attrs==22.2.0
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
coloredlogs==15.0.1
filetype==1.2.0
flatbuffers==23.3.3
frozenlist==1.3.3
h11==0.14.0
httptools==0.5.0
humanfriendly==10.0
idna==3.4
ImageHash==4.3.1
imageio==2.27.0
lazy_loader==0.2
llvmlite==0.39.1
mpmath==1.3.0
multidict==6.0.4
networkx==3.1
numba==0.56.4
numpy==1.24.2
onnxruntime==1.14.1
opencv-python-headless==4.7.0.72
packaging==23.0
Pillow==9.5.0
platformdirs==3.2.0
pooch==1.7.0
protobuf==4.22.1
pydantic==1.10.7
PyMatting==1.1.8
python-dotenv==1.0.0
python-multipart==0.0.6
PyWavelets==1.4.1
PyYAML==6.0
rembg==2.0.32
requests==2.28.2
scikit-image==0.20.0
scipy==1.10.1
sniffio==1.3.0
starlette==0.26.1
sympy==1.11.1
tifffile==2023.3.21
tqdm==4.65.0
typing_extensions==4.5.0
urllib3==1.26.15
uvicorn==0.21.1
uvloop==0.17.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0
yarl==1.8.2
psycopg2-binary==2.9.6
</code></pre>
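For what it's worth, "from versions: none" on Alpine usually means pip found no compatible wheel: Alpine uses musl libc, so the manylinux (glibc) wheels that onnxruntime publishes don't apply, and onnxruntime ships no source distribution to fall back to. A sketch of one workaround, assuming Alpine is not strictly required, is a glibc-based slim image:

```dockerfile
# Hypothetical variant of the Dockerfile above: python:3.8-slim is
# Debian-based (glibc), so pip can install the prebuilt manylinux wheels
# for onnxruntime, numba and numpy instead of building from source.
FROM python:3.8-slim

RUN mkdir -p /root/chat-server
ADD . /root/chat-server/
RUN pip install --upgrade pip \
    && pip install -r /root/chat-server/requirements.txt
WORKDIR /root/chat-server/
EXPOSE 8001
ENTRYPOINT exec python3 -m uvicorn main:app --port 8001 --host 0.0.0.0
```

This trades image size for compatibility; staying on Alpine would require musllinux wheels (which onnxruntime does not publish for this version) or a full from-source build.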
|
<python><docker>
|
2023-04-07 18:18:09
| 1
| 1,173
|
spark
|
75,961,016
| 4,629,916
|
In pyspark (or SQL), can I use the value calculated in the previous observation in the current observation? (Row-wise calculation, like SAS RETAIN)
|
<p>I want to go through a table row by row, using the value calculated in the previous row in the current row. It seems a window function could do this.</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql import Window
import pyspark.sql.functions as F
# Create a sample DataFrame
data = [
(1, 1, 3),
(2, 2, 4),
(3, 3, 5),
(4, 4, 6),
(5, 8, 10)
]
columns = ["id", "start", "stop"]
spark = SparkSession.builder.master("local").appName("SelfJoin").getOrCreate()
df = spark.createDataFrame(data, columns).sort('start')
# 1. Sort the DataFrame by 'start'
df = df.sort("start").withColumn('prev_start', F.col('start'))
# 2. Initialize a window that looks back one record
window = Window.orderBy(['start']).rowsBetween(-1, -1)
df = (
df
.withColumn("prev_start", F.lag("prev_start", 1).over(window))
)
df.toPandas().style.hide_index()
</code></pre>
<h2>At row one</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div><h2>At row two</h2>
<p><strong>Extra</strong>: I need to specify that if the value in the previous row is missing, then the value in the current row should be used: 1.0</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div><h2>At Row 3:</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>2.0</td>
</tr>
</tbody>
</table>
</div>
<p>Looking back, I want <code>prev_start</code> assigned the previous row's <em>updated</em> value of 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div><h2>At Row 4</h2>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>1.0</td>
</tr>
<tr>
<td>4</td>
<td>4</td>
<td>6</td>
<td>3.0</td>
</tr>
</tbody>
</table>
</div>
<p>Assign value of <code>prev_start</code> to what is now 1, not what it was originally: 2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>1.0</td>
</tr>
<tr>
<td>4</td>
<td>4</td>
<td>6</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
<p>Now, in the actual application, I also compute <code>prev_end = max(prev_end, end)</code>:</p>
<ul>
<li>as soon as this <code>prev_end</code> is less than the current row's <code>start</code> (and the data is ordered correctly), I have a non-contiguous time frame (start to stop), which will run from <code>prev_start</code> to <code>prev_end</code></li>
</ul>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>2.0</td>
</tr>
<tr>
<td>4</td>
<td>4</td>
<td>6</td>
<td>3.0</td>
</tr>
<tr>
<td>5</td>
<td>8</td>
<td>10</td>
<td>4.0</td>
</tr>
</tbody>
</table>
</div>
<p>But I want</p>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>start</th>
<th>stop</th>
<th>prev_start</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4</td>
<td>1.0</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>5</td>
<td>1.0</td>
</tr>
<tr>
<td>4</td>
<td>4</td>
<td>6</td>
<td>1.0</td>
</tr>
<tr>
<td>5</td>
<td>8</td>
<td>10</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</div>
<p>Because in this case the values are calculated row by row, once the value is set to 1, all subsequent <code>prev_start</code> values would also become 1.</p>
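Spark's `lag()` always reads the original column values, never ones updated earlier in the same pass, which is why the carry-forward above stops at one step. The row-by-row RETAIN-style logic the post describes can be sketched in plain Python (one common workaround is running such a loop per group via a pandas UDF; the sketch uses the sample data from the post):

```python
# Row-by-row carry-forward of the updated prev_start (like SAS RETAIN):
# each row sees the value as it is *now*, not the original lagged column.
rows = [(1, 1, 3), (2, 2, 4), (3, 3, 5), (4, 4, 6), (5, 8, 10)]

out = []
prev_start = None   # retained across rows
prev_end = None
for row_id, start, stop in rows:
    out.append((row_id, start, stop, prev_start))
    if prev_start is None:
        prev_start = start              # first row: initialise from current row
        prev_end = stop
    else:
        prev_end = max(prev_end, stop)  # the running stop described in the post

print([r[3] for r in out])              # [None, 1, 1, 1, 1]
```

This reproduces the desired final table (`prev_start` stays 1 once set); detecting a gap would then be a comparison of each row's `start` against the retained `prev_end` before updating it.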
|
<python><pyspark><rowwise><spark-window-function>
|
2023-04-07 18:17:22
| 0
| 1,522
|
Harlan Nelson
|
75,960,997
| 2,657,491
|
Python pyreadstat name 'row' is not defined
|
<p>I'm adapting this code from this thread:
<a href="https://stackoverflow.com/questions/58612304/reading-huge-sas-dataset-in-python">Reading huge sas dataset in python</a></p>
<p>When I test run with my own dataset :</p>
<pre><code>filename = 'df_53.SAS7BDAT'
CHUNKSIZE = 50000
offset = 0
# Get the function object in a variable getChunk
if filename.lower().endswith('sas7bdat'):
getChunk = pyreadstat.read_sas7bdat
else:
getChunk = pyreadstat.read_xport
allChunk,_ = getChunk(row['/data/research1/test/cc/53/'], row_limit=CHUNKSIZE, row_offset=offset)
allChunk = allChunk.astype('category')
while True:
offset += CHUNKSIZE
# for xpt data, use pyreadstat.read_xpt()
chunk, _ = pyreadstat.read_sas7bdat(filename, row_limit=CHUNKSIZE, row_offset=offset)
if chunk.empty: break # if chunk is empty, it means the entire data has been read, so break
for eachCol in chunk: #converting each column to categorical
colUnion = pd.api.types.union_categoricals([allChunk[eachCol], chunk[eachCol]])
allChunk[eachCol] = pd.Categorical(allChunk[eachCol], categories=colUnion.categories)
chunk[eachCol] = pd.Categorical(chunk[eachCol], categories=colUnion.categories)
allChunk = pd.concat([allChunk, chunk]) #Append each chunk to the resulting dataframe
</code></pre>
<p>I received an error that says:</p>
<p><code>name 'row' is not defined</code></p>
<p>Anyone has any idea how to fix the error?</p>
|
<python>
|
2023-04-07 18:15:04
| 0
| 841
|
lydias
|
75,960,672
| 8,484,885
|
GraphQL: using a for loop within the search query
|
<p>Because Yelp limits the output of businesses to 50, I'm trying to loop through the Census block centroid longitudes and latitudes of San Diego to get all the different businesses (which I want for a project). However, the query does not seem to accept anything that is not strictly numeric. Am I missing something?</p>
<p>This is the error:
<a href="https://i.sstatic.net/MPQ7I.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MPQ7I.gif" alt="enter image description here" /></a></p>
<p>Here is my code:</p>
<pre><code>from gql import gql, Client
from gql.transport.requests import RequestsHTTPTransport
yelp_api_key = 'xxx'
#define headers
header = {'Authorization': 'bearer {}'.format(yelp_api_key),
'Content-Type': 'application/json'}
#url = "https://api.yelp.com/v3/businesses/search?location=san%2520diego&sort_by=best_match&limit=20"
append_data = []
for i in range(0,len(bcx)):
#PARAMETERS = {'limit': 50,
# 'offset': 0,
# 'longitude': bcx[i],
# 'latitude': bcy[i]
# }
#build request framework
transport = RequestsHTTPTransport(url = 'https://api.yelp.com/v3/graphql', headers = header, use_json = True) #params = PARAMETERS
# Create a GraphQL client using the defined transport
client = Client(transport=transport, fetch_schema_from_transport=True)
# Provide a GraphQL query
query_bus = gql("""
{
sd_businesses: search(location: "san diego", limit: 50,
offset: 0,
longitude: bcx[i],
latitude: bcy[i]){
total
business {
name
coordinates{
latitude
longitude
}
}
}
}
""")
# Execute the query on the transport
result_bus = client.execute(query_bus)
append_data.append(result_bus)
print(append_data)
</code></pre>
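GraphQL never evaluates Python expressions, so `bcx[i]` inside the query string reaches the server as literal (invalid) GraphQL text rather than as a number. The values have to be substituted by Python before the query is sent; a minimal sketch with string formatting (the coordinates below are made up):

```python
# Build the query text in Python so the numbers are already substituted
# when GraphQL parses it. (gql's variable_values with typed $variables is
# the cleaner long-term option.)
query_template = """
{{
  sd_businesses: search(location: "san diego", limit: 50, offset: 0,
                        longitude: {lon}, latitude: {lat}) {{
    total
    business {{ name }}
  }}
}}
"""

query_text = query_template.format(lon=-117.1611, lat=32.7157)  # hypothetical centroid
print("longitude: -117.1611" in query_text)  # True
```

Inside the loop from the post, this would become `gql(query_template.format(lon=bcx[i], lat=bcy[i]))` before `client.execute`. Note the literal braces of the GraphQL selection sets are doubled (`{{`/`}}`) so `str.format` leaves them intact.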
|
<python><graphql><yelp-fusion-api>
|
2023-04-07 17:24:14
| 1
| 589
|
James
|
75,960,609
| 8,204,956
|
Create consistent department name and code
|
<p>I have a department model in django:</p>
<pre><code>from django.db import models
class Departement(models.Model):
name = models.CharField(max_length=128, db_index=True)
code = models.CharField(max_length=3, db_index=True)
</code></pre>
<p>I'd like to create a fixture with factory_boy with a consistent department name and code.</p>
<p>Faker has a <code>department</code> provider which returns a tuple with the code and the department (ex for french: <a href="https://faker.readthedocs.io/en/master/locales/fr_FR.html#faker.providers.address.fr_FR.Provider.department" rel="nofollow noreferrer">https://faker.readthedocs.io/en/master/locales/fr_FR.html#faker.providers.address.fr_FR.Provider.department</a>)</p>
<p>I have created a <code>DepartmentFixture</code> class but fail to understand how to use an instance of faker to call the <code>faker.department()</code> method and then populate the code/name fields accordingly.</p>
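The pattern needed here is one generated tuple feeding two fields, so both stay consistent. That tuple-splitting can be sketched without Django or Faker (the stub generator below stands in for `Faker('fr_FR').department()`, assuming it returns a `(code, name)` tuple as the linked docs suggest):

```python
import random

def fake_department():
    """Stand-in for Faker('fr_FR').department(): returns (code, name)."""
    return random.choice([("01", "Ain"), ("2A", "Corse-du-Sud"), ("75", "Paris")])

def departement_kwargs():
    code, name = fake_department()   # one call, so both fields stay consistent
    return {"code": code, "name": name}

d = departement_kwargs()
print(d["code"], d["name"])
```

With factory_boy, the same idea is typically an excluded helper attribute plus two `LazyAttribute`s reading it: `_dept = factory.LazyFunction(fake.department)` with `exclude = ("_dept",)` in `Meta`, then `code = factory.LazyAttribute(lambda o: o._dept[0])` and `name = factory.LazyAttribute(lambda o: o._dept[1])` (field order is an assumption; check what `department()` actually returns).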
|
<python><django><factory-boy>
|
2023-04-07 17:14:10
| 2
| 938
|
Rémi Desgrange
|
75,960,606
| 18,157,326
|
Is it possible to install python numba in a python alpine image
|
<p>I am installing the python3 numba package in GitHub Actions, but the install process shows this error:</p>
<pre><code>Collecting numba==0.56.4
Downloading numba-0.56.4.tar.gz (2.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 59.8 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [301 lines of output]
/usr/local/lib/python3.10/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
error: subprocess-exited-with-error
× Building wheel for numpy (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [258 lines of output]
Running from numpy source directory.
setup.py:86: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
import numpy.distutils.command.sdist
Processing numpy/random/_bounded_integers.pxd.in
Processing numpy/random/_pcg64.pyx
Processing numpy/random/_philox.pyx
Processing numpy/random/_bounded_integers.pyx.in
Processing numpy/random/_generator.pyx
Processing numpy/random/mtrand.pyx
Processing numpy/random/_common.pyx
Processing numpy/random/_mt19937.pyx
Processing numpy/random/_sfc64.pyx
Processing numpy/random/bit_generator.pyx
Cythonizing sources
INFO: blas_opt_info:
INFO: blas_armpl_info:
INFO: customize UnixCCompiler
INFO: libraries armpl_lp64_mp not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: blas_mkl_info:
INFO: libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: blis_info:
INFO: libraries blis not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_info:
INFO: libraries openblas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: accelerate_info:
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_blas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries tatlas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_blas_info:
INFO: libraries satlas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_blas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_blas_info:
INFO: libraries f77blas,cblas,atlas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/system_info.py:2077: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
INFO: blas_info:
INFO: libraries blas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/system_info.py:2077: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
INFO: blas_src_info:
INFO: NOT AVAILABLE
INFO:
/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/system_info.py:2077: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
INFO: NOT AVAILABLE
INFO:
non-existing path in 'numpy/distutils': 'site.cfg'
INFO: lapack_opt_info:
INFO: lapack_armpl_info:
INFO: libraries armpl_lp64_mp not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: lapack_mkl_info:
INFO: libraries mkl_rt not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_lapack_info:
INFO: libraries openblas not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_clapack_info:
INFO: libraries openblas,lapack not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: flame_info:
INFO: libraries flame not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries tatlas,tatlas not found in /usr/local/lib
INFO: libraries tatlas,tatlas not found in /usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_3_10_info:
INFO: libraries satlas,satlas not found in /usr/local/lib
INFO: libraries satlas,satlas not found in /usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_3_10_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_threads_info:
INFO: Setting PTATLAS=ATLAS
INFO: libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
INFO: libraries ptf77blas,ptcblas,atlas not found in /usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_threads_info'>
INFO: NOT AVAILABLE
INFO:
INFO: atlas_info:
INFO: libraries f77blas,cblas,atlas not found in /usr/local/lib
INFO: libraries f77blas,cblas,atlas not found in /usr/lib
INFO: <class 'numpy.distutils.system_info.atlas_info'>
INFO: NOT AVAILABLE
INFO:
INFO: lapack_info:
INFO: libraries lapack not found in ['/usr/local/lib', '/usr/lib']
INFO: NOT AVAILABLE
INFO:
/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/system_info.py:1902: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
INFO: lapack_src_info:
INFO: NOT AVAILABLE
INFO:
/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/system_info.py:1902: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
INFO: NOT AVAILABLE
INFO:
INFO: numpy_linalg_lapack_lite:
INFO: FOUND:
INFO: language = c
INFO: define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')]
INFO:
Warning: attempted relative import with no known parent package
/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py:275: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
INFO: unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
INFO: build_src
INFO: building py_modules sources
creating build
creating build/src.linux-x86_64-3.10
creating build/src.linux-x86_64-3.10/numpy
creating build/src.linux-x86_64-3.10/numpy/distutils
INFO: building library "npymath" sources
WARN: Could not locate executable armflang
WARN: Could not locate executable gfortran
WARN: Could not locate executable f95
WARN: Could not locate executable ifort
WARN: Could not locate executable ifc
WARN: Could not locate executable lf95
WARN: Could not locate executable pgfortran
WARN: Could not locate executable nvfortran
WARN: Could not locate executable f90
WARN: Could not locate executable f77
WARN: Could not locate executable fort
WARN: Could not locate executable efort
WARN: Could not locate executable efc
WARN: Could not locate executable g77
WARN: Could not locate executable g95
WARN: Could not locate executable pathf95
WARN: Could not locate executable nagfor
WARN: Could not locate executable frt
WARN: don't know how to compile Fortran code on platform 'posix'
[Errno 2] No such file or directory: 'gcc'
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 230, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 267, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 494, in <module>
setup_package()
File "setup.py", line 486, in setup_package
setup(**metadata)
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 148, in setup
dist.run_commands()
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/command/build.py", line 62, in run
old_build.run(self)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-t_16bsry/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/command/build_src.py", line 144, in run
self.build_sources()
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/command/build_src.py", line 155, in build_sources
self.build_library_sources(*libname_info)
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/command/build_src.py", line 288, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/distutils/command/build_src.py", line 378, in generate_sources
source = func(extension, build_dir)
File "/tmp/pip-wheel-eqyou62a/numpy_571e716b8f694316af15802515ca0b00/numpy/core/setup.py", line 758, in get_mathlib_info
raise RuntimeError(
RuntimeError: Broken toolchain: cannot link a simple C program.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
ERROR: Failed to build one or more wheels
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/setuptools/installer.py", line 82, in fetch_build_egg
subprocess.check_call(cmd)
File "/usr/local/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp2nz6gige', '--quiet', 'numpy<1.24,>=1.11']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-xchug7bl/numba_00413dce325d4a8a88447ef4c07c125f/setup.py", line 431, in <module>
setup(**metadata)
File "/usr/local/lib/python3.10/site-packages/setuptools/__init__.py", line 86, in setup
_install_setup_requires(attrs)
File "/usr/local/lib/python3.10/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/usr/local/lib/python3.10/site-packages/setuptools/dist.py", line 875, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 789, in resolve
dist = best[req.key] = env.best_match(
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 1075, in best_match
return self.obtain(req, installer)
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 1087, in obtain
return installer(requirement)
File "/usr/local/lib/python3.10/site-packages/setuptools/dist.py", line 945, in fetch_build_egg
return fetch_build_egg(self, req)
File "/usr/local/lib/python3.10/site-packages/setuptools/installer.py", line 84, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['/usr/local/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmp2nz6gige', '--quiet', 'numpy<1.24,>=1.11']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
</code></pre>
<p>I think installing numba may need gcc to compile the source code, and Alpine does not ship gcc by default. Is it possible to install these dependencies on Alpine? What should I do to make it work? Should I install gcc in the base container? I still want to keep the image slim. These are my Python dependencies:</p>
<pre><code>fastapi==0.95.0
openai==0.27.0
sse-starlette==1.3.3
SQLAlchemy~=1.4.6
pydantic~=1.10.5
rembg==2.0.32
Pillow==9.5.0
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
async-timeout==4.0.2
asyncer==0.0.2
attrs==22.2.0
certifi==2022.12.7
charset-normalizer==3.1.0
click==8.1.3
coloredlogs==15.0.1
filetype==1.2.0
flatbuffers==23.3.3
frozenlist==1.3.3
h11==0.14.0
httptools==0.5.0
humanfriendly==10.0
idna==3.4
ImageHash==4.3.1
imageio==2.27.0
lazy_loader==0.2
llvmlite==0.39.1
mpmath==1.3.0
multidict==6.0.4
networkx==3.1
numba==0.56.4
numpy==1.24.2
onnxruntime==1.14.1
opencv-python-headless==4.7.0.72
packaging==23.0
Pillow==9.5.0
platformdirs==3.2.0
pooch==1.7.0
protobuf==4.22.1
pydantic==1.10.7
PyMatting==1.1.8
python-dotenv==1.0.0
python-multipart==0.0.6
PyWavelets==1.4.1
PyYAML==6.0
rembg==2.0.32
requests==2.28.2
scikit-image==0.20.0
scipy==1.10.1
sniffio==1.3.0
starlette==0.26.1
sympy==1.11.1
tifffile==2023.3.21
tqdm==4.65.0
typing_extensions==4.5.0
urllib3==1.26.15
uvicorn==0.21.1
uvloop==0.17.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0
yarl==1.8.2
psycopg2-binary==2.9.6
</code></pre>
|
<python><alpine-linux>
|
2023-04-07 17:13:23
| 1
| 1,173
|
spark
|
75,960,578
| 713,200
|
How to set data into a complex json dictionary in python?
|
<p>Basically I need to fill data into a JSON API payload before sending it in the API request.
I have two files:</p>
<ol>
<li>a text file with the JSON payload template, say json.txt</li>
<li>a YAML file with the actual data, say tdata.yml</li>
</ol>
<p>I'm writing Python code to create the actual payload by inserting data from the YAML file into the JSON payload template.</p>
<p>Content of file 1</p>
<pre><code>{
"mpls_routing":
{
"add_novel":
{
"device_ip": "DEVICE_IP",
"discoveryHelloHoldTime": "DISCOVERYHELLOHOLDTIME",
"discoveryHelloInterval": "DISCOVERYHELLOINTERVAL",
"discoveryTargetedHelloHoldTime": "DISCOVERYTARGETEDHELLOHOLDTIME",
"discoveryTargetedHelloInterval": "DISCOVERYTARGETEDHELLOINTERVAL",
"downstreamMaxLabel": "DOWNSTREAMMAXLABEL",
"downstreamMinLabel": "DOWNSTREAMMINLABEL",
"initialBackoff": "INITIALBACKOFF",
"instanceId": "554972419",
"interfaceName": "INTERFACENAME",
"isEntropyLabelEnabled": "ISENTROPYLABELENABLED",
"isExplicitNullEnabled": "ISEXPLICITNULLENABLED",
"isLoopDetectionEnabled": "ISLOOPDETECTIONENABLED",
"isNsrEnabled": "ISNSRENABLED",
"keepAliveInterval": "KEEPALIVEINTERVAL",
"ldpId": "LDPID",
"ldpIpMask": {
"address": ""
},
"ldpPepSettings": [
{
"com.novel.xmp.model.managed.standardTechnologies.ldp.LdpPepSettings": {
"labelDistributionMethod": "DISTMETHOD",
"ldpId": "LDPID",
"name": "MPLSINTERFACE",
"owningEntityId": "OWNINGENTITYID"
}
}
],
"maximumBackoff": "MAXIMUMBACKOFF",
"sessionHoldTime": "SESSIONHOLDTIME"
}
    }
}
</code></pre>
<p>This is the yml data file</p>
<pre><code>mpls_interface:
DEVICE_IP: "10.104.120.141"
DISCOVERYHELLOHOLDTIME: "15"
DISCOVERYHELLOINTERVAL: "5"
DISCOVERYTARGETEDHELLOHOLDTIME: "90"
DISCOVERYTARGETEDHELLOINTERVAL: "10"
DOWNSTREAMMAXLABEL: "289999"
DOWNSTREAMMINLABEL: "24000"
INITIALBACKOFF: "15"
INTERFACENAME: "LOOPBACK0"
ISENTROPYLABELENABLED: "FALSE"
ISEXPLICITNULLENABLED: "FALSE"
ISLOOPDETECTIONENABLED: "FALSE"
ISNSRENABLED: "TRUE"
KEEPALIVEINTERVAL: "60"
LDPID: "192.168.0.141"
MAXIMUMBACKOFF: "120"
OWNINGENTITYID: ""
SESSIONHOLDTIME: "180"
DISTMETHOD: "LDP"
MPLSINTERFACE: "TenGigE0/0/0/4"
</code></pre>
<p>I have following code which reads the JSON format file data</p>
<pre><code> template_data = file.read()
template_data = json.loads(template_data)
if "add_novel" in template_data:
full_template_file = template_data["add_novel"]
if testcase_name in full_template_file:
template_file_data = full_template_file[testcase_name]
</code></pre>
<p>And following code which reads the yml file</p>
<pre><code> with open(self.mpls_yml_file) as stream:
yml_data = yaml.safe_load(stream)
if "mpls_interface" in yml_data:
yml_file_data = yml_data["mpls_interface"]
return yml_file_data
</code></pre>
<p>Then I pass both template_file_data and yml_file_data to following code to create the actual API payload.</p>
<pre><code>testcase_data = {key: yml_file_data[value] for key, value in template_file_data.items() for k, v in
yml_file_data.items()}
logger.info("\n Testcase data %s" % testcase_data)
return testcase_data
</code></pre>
<p>But I'm not able to produce the right format, since the payload contains nested dictionaries and also an array inside a dictionary.</p>
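<p>To illustrate the nesting problem, here is a minimal sketch of the recursive substitution I think I need (the <code>fill_template</code> helper is hypothetical, not part of my code; it assumes every placeholder string is a key in the YAML data and anything else passes through untouched):</p>
<pre><code>def fill_template(node, data):
    """Recursively replace placeholder strings with values from `data`."""
    if isinstance(node, dict):
        return {key: fill_template(value, data) for key, value in node.items()}
    if isinstance(node, list):
        return [fill_template(item, data) for item in node]
    if isinstance(node, str):
        # Literal values such as "554972419" are not in `data` and pass through.
        return data.get(node, node)
    return node

template = {"add": {"device_ip": "DEVICE_IP", "settings": [{"ldpId": "LDPID"}]}}
values = {"DEVICE_IP": "10.104.120.141", "LDPID": "192.168.0.141"}
payload = fill_template(template, values)
</code></pre>
<p>Applied to my files, <code>template</code> would be <code>template_file_data</code> and <code>values</code> would be <code>yml_file_data</code>.</p>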
|
<python><arrays><json><python-3.x><dictionary>
|
2023-04-07 17:09:06
| 3
| 950
|
mac
|
75,960,564
| 3,484,568
|
Custom package installation fails
|
<p>I want to allow my Jupyter notebooks to import custom Python modules.
To this end, I attempt to install a package whose modules I can import, for example, through <code>from projectmnemonic import test</code>.</p>
<p>My library's source code directory has the following structure:</p>
<pre class="lang-none prettyprint-override"><code>my-datascience-project/
...
notebooks/
...
projectmnemonic/
projectmnemonic/
__init__.py
test.py
setup.py
...
</code></pre>
<p>The <code>setup.py</code> is very plain (there is no intention to publish it):</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
setup(
name="projectmnemonic",
version="0.1",
)
</code></pre>
<p>I install the package in an activated anaconda environment as follows</p>
<pre class="lang-bash prettyprint-override"><code>pip install -e path/to/my-datascience-project/projectmnemonic
</code></pre>
<p>but get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: <redacted>
Obtaining <path to my-datascience-project/projectmnemonic>
Preparing metadata (setup.py) ... done
Installing collected packages: <projectmnemonic>
Running setup.py develop for <projectmnemonic>
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
running develop
C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
error: --egg-path must be a relative path from the install directory to <projectmnemonic>.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
running develop
C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
C:\ProgramData\Anaconda3\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
error: --egg-path must be a relative path from the install directory to <projectmnemonic>.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>When I move directory <code>projectmnemonic</code> to the Desktop and install from there, I have no issues in the installation procedure:</p>
<pre class="lang-none prettyprint-override"><code>Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: <redacted>
Obtaining <path to my-datascience-project/projectmnemonic>
Preparing metadata (setup.py) ... done
Installing collected packages: <projectmnemonic>
Running setup.py develop for <projectmnemonic>
Successfully installed <projectmnemonic>-0.1
</code></pre>
<p>Python 3.9.12, setuptools 67.6.1, pip 21.2.4</p>
|
<python><jupyter-notebook><setuptools><setup.py><python-packaging>
|
2023-04-07 17:06:46
| 2
| 618
|
Jhonny
|
75,960,498
| 19,130,803
|
Docker pass dynamic directory name to WORKDIR
|
<p>I have a Python project and am trying to dockerize it. The use case is as follows:
the user creates a parent directory, e.g. <code>titanic</code>, and checks out the <code>github repo</code>.
The project structure then looks like this:</p>
<ul>
<li>titanic/
<ul>
<li>src/</li>
<li>Dockerfile</li>
<li>docker-compose.yml</li>
<li>.env</li>
</ul>
</li>
</ul>
<p>Now what I need is to pass this parent directory name to Docker's <code>WORKDIR</code> instruction when building the image.</p>
<p>In the Dockerfile I have <code>WORKDIR $PROJ_WORKDIR</code>.</p>
<p>I have tried many different ways to pass the directory name, but it always fails:</p>
<pre><code>1. environment:
- PROJ_WORKDIR=titanic
</code></pre>
<pre><code>2. In .env, PROJ_WORKDIR=titanic
</code></pre>
<pre><code>3. In docker-compose, args:
- PROJ_WORKDIR=titanic
</code></pre>
<pre><code>4. In Dockerfile, ENV PROJ_WORKDIR=some_default
</code></pre>
<p>Is there a way to configure and access it?</p>
|
<python><docker>
|
2023-04-07 16:59:00
| 1
| 962
|
winter
|
75,960,479
| 6,075,349
|
Pandas dataframe multiple and sum row wise
|
<p>I have a pandas df of the following format</p>
<pre><code>
Market ID Col1 Col2 Col3 Price Sum_Price
USA 100 0.49 0.5 0.5 100 149
EU 200 0.25 0.25 0.75 100 125
</code></pre>
<p>Essentially <code>Sum_Price</code> is equal to <code>(Col1*Price + Col2*Price + Col3*Price)</code>.
I have tried <code>df.iloc[:,2:4]*df.iloc[:,5]</code>, but it produced a few extra columns filled with NaN, and I am not sure why...
Thanks</p>
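<p>For reference, this is the row-wise multiply-and-sum I am after, sketched with <code>mul(..., axis=0)</code> so the Price column broadcasts down the rows (note that <code>iloc[:, 2:4]</code> selects only two columns, since the stop index is exclusive):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "Market": ["USA", "EU"], "ID": [100, 200],
    "Col1": [0.49, 0.25], "Col2": [0.5, 0.25], "Col3": [0.5, 0.75],
    "Price": [100, 100],
})
# Multiply each Col* by Price row-wise, then sum across the three columns.
df["Sum_Price"] = df[["Col1", "Col2", "Col3"]].mul(df["Price"], axis=0).sum(axis=1)
</code></pre>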
|
<python><pandas>
|
2023-04-07 16:57:27
| 1
| 1,153
|
FlyingPickle
|
75,960,438
| 6,156,353
|
Airflow - get value at specific index of the task output
|
<p>I have a PythonOperator task in Airflow that outputs e.g. <code>['instance-id', 128.2.4.33]</code>. In the downstream task I reference this task's output with dynamic task mapping via <code>expand</code>, like <code>upstream_python_task.output</code>. However, this gives me the entire list with both values, while my downstream task (<code>EC2StartInstanceOperator</code>) needs only one value (<code>instance_id</code>). I tried <code>upstream_python_task.output[0]</code>, but that gives an error. How do I reference the first element of the output?</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>def get_files():
return [['23', 'abcd'], ['49', 'xyz']]
def create_instance(index, some_param, **kwargs):
return ['<instance-id>', '<ip-address>']
...
with dag:
run_get_files = PythonOperator(
task_id = 'get-files',
python_callable = get_files,
)
run_create_ec2_instance = PythonOperator.partial(
task_id = 'create-instance',
python_callable = create_instance,
).expand(
op_args = run_get_files.output
)
start_instance = EC2StartInstanceOperator.partial(
task_id = 'start-instance',
region_name = 'eu-central-1',
).expand(
instance_id = run_get_files.output[0], # error
)
</code></pre>
|
<python><airflow>
|
2023-04-07 16:52:16
| 0
| 1,371
|
romanzdk
|
75,960,383
| 1,383,378
|
Module not found when using CDK Python Layer
|
<p>I am very new to AWS CDK development.
I am trying to run some simple layer python code in a Lambda.</p>
<p>My directory structure looks like the following:</p>
<p><a href="https://i.sstatic.net/1ZVsJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ZVsJ.png" alt="enter image description here" /></a></p>
<p>The constructs code:</p>
<pre><code># test_api_lambda_stack.py
lambda_layer = aws_lambda.LayerVersion(
self,
"lambda-layer",
code=aws_lambda.AssetCode("./lambda/layer"),
compatible_runtimes=[Runtime.PYTHON_3_9],
)
lambda_with_layers = aws_lambda.Function(
self,
"test-lambda-with-layers",
runtime=Runtime.PYTHON_3_9,
function_name="test-lambda-with-layers",
code=aws_lambda.Code.from_asset("./lambda/code"),
handler="my_lambda_code.handler",
environment={"NAME": "cdk-lambda-layer"},
layers=[lambda_layer],
)
...
</code></pre>
<p>Now, in my layer I simply have this function:</p>
<pre><code># layer/python/logic/logic.py
def add(x: int, y: int):
return x + y
</code></pre>
<p>Which I attempt calling from my lambda code:</p>
<pre><code># my_lambda_code.py
from logic import *
def handler(event, context):
print("SUM worked: " + str(add(1, 2)))
...
</code></pre>
<p>Following deployment and upon running the above, I receive <code>[ERROR] Runtime.ImportModuleError: Unable to import module 'my_lambda_code': No module named 'logic'</code>.</p>
<p>How should I import the layer in my lambda, in order to make this work?
Is there a way to navigate the file structure created by this deployment?</p>
|
<python><amazon-web-services><aws-cdk><aws-lambda-layers>
|
2023-04-07 16:44:33
| 1
| 633
|
LoreV
|
75,960,339
| 572,575
|
Selenium cannot click in webpage using CSS_SELECTOR
|
<p>I want to click this button on Facebook to show all comments:
<a href="https://i.sstatic.net/rpsrp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rpsrp.png" alt="enter image description here" /></a></p>
<p>I use this code to log in and open the URL:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup as bs
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import time
# set options as you wish
option = Options()
option.add_argument("--disable-infobars")
option.add_argument("start-maximized")
option.add_argument("--disable-extensions")
EMAIL = "myemail"
PASSWORD = "mypassword"
browser = webdriver.Chrome(executable_path="./chromedriver", options=option)
browser.get("http://facebook.com")
browser.maximize_window()
wait = WebDriverWait(browser, 30)
email_field = wait.until(EC.visibility_of_element_located((By.NAME, 'email')))
email_field.send_keys(EMAIL)
pass_field = wait.until(EC.visibility_of_element_located((By.NAME, 'pass')))
pass_field.send_keys(PASSWORD)
pass_field.send_keys(Keys.RETURN)
time.sleep(5)
browser.get('https://www.facebook.com/NetflixTH/photos/a.292183707902294/1709518956168755/') # once logged in, free to open up any target page
time.sleep(5)
browser.find_element(By.CSS_SELECTOR,'i[class="x1b0d499 x1d69dk1"]').click();
</code></pre>
<p>When I run this code it shows an error like this. How can I fix it?</p>
<pre><code>ElementClickInterceptedException: element click intercepted: Element <i data-visualcompletion="css-img" class="x1b0d499 x1d69dk1" style="background-image: url(&quot;https://static.xx.fbcdn.net/rsrc.php/v3/yq/r/uo1Qvq4bB06.png&quot;); background-position: 0px -536px; background-size: 26px 926px; width: 16px; height: 16px; background-repeat: no-repeat; display: inline-block;"></i> is not clickable at point (1180, 360). Other element would receive the click: <div class="x1uvtmcs x4k7w5x x1h91t0o x1beo9mf xaigb6o x12ejxvf x3igimt xarpa2k xedcshv x1lytzrv x1t2pt76 x7ja8zs x1n2onr6 x1qrby5j x1jfb8zj" tabindex="-1">...</div>
</code></pre>
|
<python><selenium-webdriver>
|
2023-04-07 16:38:16
| 1
| 1,049
|
user572575
|
75,960,166
| 8,350,440
|
No module named `foo` - when foo.exe built from foo.py, imports bar which imports foo
|
<p>I built a tkinter application (let's call it <code>foo.py</code>) and packaged it as a onefile executable (<code>foo.exe</code>). Within <code>foo.py</code>, there is a function (<code>func_load_scripts</code>) that uses <code>importlib.import_module()</code> to import all <code>.py</code> files in a specific folder. (e.g. Suppose there is a <code>bar.py</code> in <code>C:\foobar\</code>).</p>
<ol>
<li>When I run <code>foo.exe</code>, <code>func_load_scripts</code> runs and it imports <code>bar.py</code>.</li>
<li>What I have noticed is that if the imported module is from the standard library, there are generally no issues.</li>
<li>Suppose <code>apple</code> is a third-party module, if both <code>foo.py</code> and <code>bar.py</code> have <code>import apple</code>, there is no issue too.</li>
<li>If <code>foo.py</code> doesn't have <code>import apple</code>, but <code>bar.py</code> has <code>import apple</code>, there will be a <em>ModuleNotFoundError</em> (kind of understandable...)</li>
<li>When <code>bar.py</code> contains <code>import foo</code>, there will be a <em>ModuleNotFoundError</em> too.</li>
</ol>
<p><strong>Doubt 1</strong>: Why can't <code>bar.py</code> import <code>foo.py</code>? (I believe it is because bar is looking for foo.py on the PYTHONPATH... and it doesn't exist there. I tried adding <code>sys._MEIPASS</code> (using <code>sys.path.insert()</code>) and that doesn't work either. It is probably because foo.py is no longer available, which is the purpose of packaging it as an executable in the first place.)</p>
<p><strong>Motivation</strong>: I want to expose some (i.e. not all) functions/classes within <code>foo.py</code> to the user. Hence, it will be good to allow them to use <code>from foo import func_one</code>.</p>
<p>Because <code>bar.py</code> cannot have the <code>import foo</code>, I created a separate file to house the common functions and classes that I want to expose to users and named this as <code>common.py</code>. Next, in both <code>foo.py</code> and <code>bar.py</code>, I have <code>import common</code> (hence satisfying #3 above)</p>
<p><strong>Question</strong>: Is this a good workaround? Is there some alternative method that users can probe and figure out other functions, classes, constants that I have in <code>foo.py</code>? Will they be able to <code>import foo</code> through some means?</p>
<p>Apologies for the long question.</p>
<p>Here's a small reproducible code which describes the problem.</p>
<pre><code>#foo.py
import importlib
import os
import sys
sys.path.insert(1, os.getcwd())
def func_one():
print('This is foo.py')
def func_load_scripts():
try:
mod = importlib.import_module('bar')
except ModuleNotFoundError as err:
print(err)
else:
print(dir(mod))
finally:
input('Press any key to exit...')
if __name__ == '__main__':
func_one()
func_load_scripts()
</code></pre>
<pre><code>#bar.py
import foo
def func_two():
print('We are in `bar.py`...')
foo.func_one()
</code></pre>
<p>Error message when I run <code>foo.exe</code> with <code>bar.py</code> in the same folder.</p>
<p><a href="https://i.sstatic.net/vw5Ua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vw5Ua.png" alt="No module named 'foo'" /></a></p>
<p>To package as exe: <code>pyinstaller --onefile foo.py</code></p>
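<p>One alternative I considered to splitting everything into <code>common.py</code>: before loading plugin scripts, alias the running module in <code>sys.modules</code> under the name the plugins expect, so a plugin's <code>import foo</code> resolves to the frozen application itself. This is only a sketch of the idea, with a hypothetical <code>load_plugin</code> helper; it is not something pyinstaller does for you:</p>
<pre><code>import importlib
import sys

def func_one():
    return "This is foo.py"

def load_plugin(plugin_dir, module_name):
    """Load a plugin module; its `import foo` finds this running module
    because we alias ourselves under the name "foo" in sys.modules first."""
    sys.modules.setdefault("foo", sys.modules[__name__])
    sys.path.insert(0, plugin_dir)
    return importlib.import_module(module_name)
</code></pre>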
|
<python><pyinstaller><python-importlib>
|
2023-04-07 16:12:24
| 0
| 881
|
Ji Wei
|
75,960,142
| 1,714,692
|
Bazel genrule python import from a script in different directory
|
<p>I have a Bazel <code>native.genrule</code> that calls a python script to generate a file. In the working configuration my files are the following:</p>
<p>Python script:</p>
<pre><code>## test/sub1/test_python_script.py
def test_python_import():
print("this is a function to test function import")
if __name__=="__main__":
test_python_import()
</code></pre>
<p><code>BUILD</code> file</p>
<pre><code>## test/sub1/BUILD
load(":test.bzl", "test_python_import")
filegroup(
name = "test_python_import_script",
srcs = ["test_python_import.py"],
visibility = ["//visibility:public"],
)
test_python_import(
name = "test_python_import",
out_ = "test.txt",
visibility = ["//visibility:public"],
)
</code></pre>
<p><code>test.bzl</code> file:</p>
<pre><code># test/sub1/test.bzl
def test_python_import(
name,
out_ = "",
**kwargs):
native.genrule(
name = name,
outs=[out_,],
cmd = "python3 $(location //project/test/sub1:test_python_import_script) > $(@)",
        tools = ["//project/test/sub1:test_python_import_script"],
)
</code></pre>
<p>From this working state, I want to move the function <code>test_python_import()</code> to another Python file, so that it can be used by other scripts as well.</p>
<p>If I create the new file (<code>common.py</code>) under the same subfolder <code>sub1</code>, I can import the moved functions into the script <code>test_python_import</code> with:</p>
<pre><code>from common import test_python_import
</code></pre>
<p>and everything works with no need to modify anything.</p>
<p>If the new script containing the function definition is in another folder i.e., <code>/test/sub2/common.py</code>:</p>
<pre><code># test/sub2/common.py
def test_python_import():
print("this is a function to test function import")
</code></pre>
<p>and change the original script to include this function:</p>
<pre><code>## test/sub1/test_python_script.py
from project.test.sub2 import test_python_import
if __name__=="__main__":
test_python_import()
</code></pre>
<p>it does not work. I have tried to add a <code>test/sub2/BUILD</code> file to use <code>filegroup</code> and add this target to the <code>tools</code> and/or <code>srcs</code> attributes of the bazel macro:</p>
<pre><code># test/sub2/BUILD
filegroup(
name = "common_script",
srcs = ["common.py"],
    visibility = ["//visibility:public"],
)
</code></pre>
<p>New macro definition</p>
<pre><code># test/sub1/test.bzl
def test_python_import(
name,
out_ = "",
**kwargs):
native.genrule(
name = name,
outs=[out_,],
srcs = ["//app/test/sub2:common_script",],
cmd = "python3 $(location //app/test/sub1:test_python_import_script) > $(@)",
tools = [
"//app/test/sub1:test_python_import_script",
"//app/test/sub2:common_script",
],
)
</code></pre>
<p>but I get the error</p>
<pre><code>Traceback (most recent call last):
File "app/test/sub1/test_python_import.py", line 1, in <module>
from app.test.sub2 import test_python_import
ModuleNotFoundError: No module named 'app'
</code></pre>
<p>I have also tried defining a Python library in the <code>test/sub2/BUILD</code> file by adding an empty <code>__init__.py</code> script and changing the BUILD file to the following:</p>
<pre><code># test/sub2/BUILD
py_library(
name = "general_script",
deps = [],
srcs = ["general_script.py"],
visibility = ["//visibility:public"],
)
filegroup(
name = "general_script",
srcs = ["codegen_utils.py"],
visibility = ["//visibility:public"],
)
filegroup(
name = "init",
srcs = ["__init__.py"],
visibility = ["//visibility:public"],
)
</code></pre>
<p>and also to modify the genrule in different ways, adding the targets defined in the new <code>app/test/sub2/common</code> to both the <code>srcs</code> and <code>tools</code> fields and then importing the function in the <code>test_python_import.py</code> script, but whatever I've tried so far, I keep getting the same error.</p>
<p>How can I import functions defined in a script in another directory into a script that runs inside a genrule?</p>
|
<python><bazel><bazel-genrule>
|
2023-04-07 16:08:28
| 1
| 9,606
|
roschach
|
75,960,119
| 4,560,996
|
How Can I Efficiently Create Pandas DataFrame from Uneven Text Lists
|
<p>I have some data collected over a loop that results in some text organized as lists.</p>
<p>In the example below, the string of <code>out1[0]</code> is related to the many strings in <code>out2[0]</code> and the string in <code>out1[1]</code> is related to the single string in <code>out2[1]</code>.</p>
<p>I'd like to create a dataframe organized like the one I created manually below, for an arbitrary number of entries in each list (i.e., out1 could have K statements, and the number of statements in each inner list of out2 could also be arbitrary).</p>
<pre><code>out1 = ["I'm happy", 'This is crazy']
out2 = [["Im learning to be happy", "I'm troubled"], ["This is making me unhappy"]]
pd.DataFrame({'A': out1, 'B': ["Im learning to be happy", "This is making me unhappy"], 'C':["I'm troubled", "--"] })
</code></pre>
<p>Can anyone suggest a good way to produce such a dataframe in Pandas? Also, the '--' at the end is just filler that means nothing; it should really be an empty cell, since this is essentially a ragged array of text data.</p>
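<p>To make the goal concrete, here is a sketch of one approach I imagine: pad every inner list of out2 to the longest length with None (so short rows get genuinely empty cells instead of the '--' filler), then prepend out1 as the first column, with column names generated from the alphabet:</p>
<pre><code>import pandas as pd
from string import ascii_uppercase

out1 = ["I'm happy", "This is crazy"]
out2 = [["Im learning to be happy", "I'm troubled"], ["This is making me unhappy"]]

width = max(len(row) for row in out2)
# Pad each ragged row with None so every row has the same number of cells.
padded = [row + [None] * (width - len(row)) for row in out2]
df = pd.DataFrame(padded, columns=list(ascii_uppercase[1:width + 1]))
df.insert(0, "A", out1)  # out1 becomes column A, aligned row-by-row with out2
</code></pre>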
|
<python><pandas>
|
2023-04-07 16:04:44
| 1
| 827
|
dhc
|
75,960,018
| 16,512,200
|
PyQt5 Font sizes scaled larger than expected after changing screen resolution
|
<p>I'm using PyQt5 to create a login window.
The window showed all text at a normal size before I got a new laptop; this laptop has a much higher monitor resolution, and I'm struggling to get the text size back to normal.</p>
<p><a href="https://i.sstatic.net/3QV97.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3QV97.png" alt="Image of the enlarged text" /></a></p>
<p>The checkbox text should say "Remember Server Information", but it's cropped because it's too big, and so is the version number in the bottom right.
The other text is also much larger than normal.</p>
<p>I've tried to use StyleSheets to change the font-size.
I've also tried this:</p>
<pre><code>if hasattr(QtCore.Qt, 'AA_EnableHighDpiScaling'):
QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)
if hasattr(QtCore.Qt, 'AA_UseHighDpiPixmaps'):
QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_UseHighDpiPixmaps, True)
</code></pre>
<p>which doesn't seem to make a difference whether I have that there or not.
If anyone has any words of advice to get the text to scale normally again it would be very appreciated.</p>
<p>EDIT:</p>
<p>This is also happening the in Qt Designer Tool.</p>
<p><a href="https://i.sstatic.net/J9BSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J9BSg.png" alt="enter image description here" /></a></p>
|
<python><user-interface><fonts><pyqt5>
|
2023-04-07 15:53:09
| 1
| 371
|
Andrew
|
75,960,017
| 10,511,518
|
Invalid bucket name: gcsfs.retry.HttpError: Invalid bucket name: 'None', 400
|
<p>I have built an API using FastAPI to retrieve data from a bucket in Google Cloud Storage.
I tested the API locally with Swagger and Postman (to double-check) and it worked fine, but when I containerize it with Docker, I get an
<code>Invalid bucket name: "None", 400</code> error.</p>
<p>Here's the code...</p>
<pre><code>import os
import gcsfs
from typing import Any, List

import pandas as pd
from dotenv import load_dotenv
from fastapi import APIRouter

from app import schemas
from app.config import settings

load_dotenv()

GOOGLE_SERVICE_ACCOUNT = os.getenv("GOOGLE_SERVICE_ACCOUNT")
GCP_FILE_PATH = os.getenv("GCP_FILE_PATH")

fs = gcsfs.GCSFileSystem()
api_router = APIRouter()


@api_router.get("/health", response_model=schemas.Health, status_code=200)
def health() -> dict:
    """
    Root Get
    """
    health = schemas.Health(
        name=settings.PROJECT_NAME, api_version="1.0.0"
    )
    return health.dict()


@api_router.get("/consumer_type_values", response_model=schemas.UniqueConsumerType, status_code=200)
def consumer_type_values() -> List:
    X = pd.read_parquet(f"{GCP_FILE_PATH}/X.parquet", filesystem=fs)
    unique_consumer_type = list(X.index.unique(level="consumer_type"))
    results = {
        "values": unique_consumer_type
    }
    return results


@api_router.get("/area_values", response_model=schemas.UniqueArea, status_code=200)
def area_values() -> List:
    X = pd.read_parquet(f"{GCP_FILE_PATH}/X.parquet", filesystem=fs)
    unique_area = list(X.index.unique(level="area"))
    results = {
        "values": unique_area
    }
    return results


@api_router.get("/predictions", response_model=schemas.PredictionResults, status_code=200)
def get_predictions() -> Any:
    """
    Get predictions from GCP
    """
    y_train = pd.read_parquet(f"{GCP_FILE_PATH}/y.parquet", filesystem=fs)
    preds = pd.read_parquet(f"{GCP_FILE_PATH}/predictions.parquet", filesystem=fs)

    datetime_utc = y_train.index.get_level_values("datetime_utc").to_list()
    area = y_train.index.get_level_values("area").to_list()
    consumer_type = y_train.index.get_level_values("consumer_type").to_list()
    energy_consumption = y_train["energy_consumption"].to_list()

    preds_datetime_utc = preds.index.get_level_values("datetime_utc").to_list()
    preds_area = preds.index.get_level_values("area").to_list()
    preds_consumer_type = preds.index.get_level_values("consumer_type").to_list()
    preds_energy_consumption = preds["energy_consumption"].to_list()

    results = {
        "datetime_utc": datetime_utc,
        "area": area,
        "consumer_type": consumer_type,
        "energy_consumption": energy_consumption,
        "preds_datetime_utc": preds_datetime_utc,
        "preds_area": preds_area,
        "preds_consumer_type": preds_consumer_type,
        "preds_energy_consumption": preds_energy_consumption
    }
    return results
</code></pre>
<p>Here's the logs from my Docker container...</p>
<pre><code>ERROR: Exception in ASGI application
2023-04-07 16:38:03 energy-forecasting-api-1 | Traceback (most recent call last):
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi
2023-04-07 16:38:03 energy-forecasting-api-1 | result = await app( # type: ignore[func-returns-value]
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | return await self.app(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await super().__call__(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await self.middleware_stack(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | raise exc
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await self.app(scope, receive, _send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | raise exc
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await self.app(scope, receive, sender)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | raise e
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await self.app(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
2023-04-07 16:38:03 energy-forecasting-api-1 | await route.handle(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
2023-04-07 16:38:03 energy-forecasting-api-1 | await self.app(scope, receive, send)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
2023-04-07 16:38:03 energy-forecasting-api-1 | response = await func(request)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 237, in app
2023-04-07 16:38:03 energy-forecasting-api-1 | raw_response = await run_endpoint_function(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
2023-04-07 16:38:03 energy-forecasting-api-1 | return await run_in_threadpool(dependant.call, **values)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
2023-04-07 16:38:03 energy-forecasting-api-1 | return await anyio.to_thread.run_sync(func, *args)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
2023-04-07 16:38:03 energy-forecasting-api-1 | return await get_asynclib().run_sync_in_worker_thread(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
2023-04-07 16:38:03 energy-forecasting-api-1 | return await future
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
2023-04-07 16:38:03 energy-forecasting-api-1 | result = context.run(func, *args)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/opt/api/app/api.py", line 47, in area_values
2023-04-07 16:38:03 energy-forecasting-api-1 | X = pd.read_parquet(f"{GCP_FILE_PATH}/X.parquet", filesystem=fs)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/pandas/io/parquet.py", line 503, in read_parquet
2023-04-07 16:38:03 energy-forecasting-api-1 | return impl.read(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/pandas/io/parquet.py", line 251, in read
2023-04-07 16:38:03 energy-forecasting-api-1 | result = self.api.parquet.read_table(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 2926, in read_table
2023-04-07 16:38:03 energy-forecasting-api-1 | dataset = _ParquetDatasetV2(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 2452, in __init__
2023-04-07 16:38:03 energy-forecasting-api-1 | finfo = filesystem.get_file_info(path_or_paths)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "pyarrow/_fs.pyx", line 571, in pyarrow._fs.FileSystem.get_file_info
2023-04-07 16:38:03 energy-forecasting-api-1 | File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
2023-04-07 16:38:03 energy-forecasting-api-1 | File "pyarrow/_fs.pyx", line 1490, in pyarrow._fs._cb_get_file_info
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/pyarrow/fs.py", line 330, in get_file_info
2023-04-07 16:38:03 energy-forecasting-api-1 | info = self.fs.info(path)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fsspec/asyn.py", line 115, in wrapper
2023-04-07 16:38:03 energy-forecasting-api-1 | return sync(self.loop, func, *args, **kwargs)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fsspec/asyn.py", line 100, in sync
2023-04-07 16:38:03 energy-forecasting-api-1 | raise return_result
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/fsspec/asyn.py", line 55, in _runner
2023-04-07 16:38:03 energy-forecasting-api-1 | result[0] = await coro
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 790, in _info
2023-04-07 16:38:03 energy-forecasting-api-1 | exact = await self._get_object(path)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 491, in _get_object
2023-04-07 16:38:03 energy-forecasting-api-1 | res = await self._call(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 418, in _call
2023-04-07 16:38:03 energy-forecasting-api-1 | status, headers, info, contents = await self._request(
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/decorator.py", line 221, in fun
2023-04-07 16:38:03 energy-forecasting-api-1 | return await caller(func, *(extras + args), **kw)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 149, in retry_request
2023-04-07 16:38:03 energy-forecasting-api-1 | raise e
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 114, in retry_request
2023-04-07 16:38:03 energy-forecasting-api-1 | return await func(*args, **kwargs)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 411, in _request
2023-04-07 16:38:03 energy-forecasting-api-1 | validate_response(status, contents, path, args)
2023-04-07 16:38:03 energy-forecasting-api-1 | File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 101, in validate_response
2023-04-07 16:38:03 energy-forecasting-api-1 | raise HttpError(error)
2023-04-07 16:38:03 energy-forecasting-api-1 | gcsfs.retry.HttpError: Invalid bucket name: 'None', 400
</code></pre>
<p>Any ideas why this may be happening?</p>
|
<python><docker><google-cloud-storage><fastapi>
|
2023-04-07 15:53:06
| 1
| 351
|
Kurtis Pykes
|
75,959,989
| 4,225,430
|
How to identify more than one mode in this python pandas code on dataframe?
|
<p>I ran into a problem completing this Python pandas exercise. I am asked to write a pandas program that displays the most frequent value in a random series and replaces everything else with 'Other' in the series.</p>
<p>I can't do it on a Series, but I'm more or less OK with a DataFrame. The code is shown below. My problem is: I can only select the value at the first index of the frequency counts list (i.e. <code>index[0]</code>), but that assumes there is only one mode. What if there is more than one mode?</p>
<p>Grateful for your help. Thank you!</p>
<pre><code>import numpy as np
import pandas as pd

data19 = np.random.randint(46, 50, size=10)
df19 = pd.DataFrame(data19, columns = ["integer"])
print(df19)
freq19 = df19["integer"].value_counts()
print(freq19)
find_mode = df19["integer"] == freq19.index[0] #What if there are more than one mode?
df19.loc[~find_mode, "integer"] = "Other"
print(df19)
</code></pre>
|
<python><pandas><dataframe><frequency><mode>
|
2023-04-07 15:49:37
| 1
| 393
|
ronzenith
|
75,959,969
| 14,154,784
|
Django Crispy Forms Cancel Button That Sends User Back A Page
|
<p>I have a Django Crispy Form that I want to add a cancel button to. Unlike the answers <a href="https://stackoverflow.com/questions/26298332/django-crispy-forms-custom-button">here</a> and <a href="https://stackoverflow.com/questions/33680908/how-to-redirect-to-url-by-cancel-button-in-django-crispy-forms">here</a>, however, the user can get to the form via multiple pages, so I need to send them back to the page they came from if they cancel, not send them to a static page. How can I accomplish this?</p>
|
<javascript><python><html><django><django-crispy-forms>
|
2023-04-07 15:47:12
| 1
| 2,725
|
BLimitless
|