| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,757,271
| 7,615,872
|
Running an async coroutine in the background from a sync function using create_task and asyncio.run does not finish
|
<p>I want to run async code as a background task from a sync function. My use case is that I have a huge application written in sync Python, but I want to run some background tasks from it. An illustration of what I am doing is:</p>
<pre><code>import asyncio
from time import sleep
import sys

async def task():
    for i in range(5):
        print(f"Background task iteration {i}")
        await asyncio.sleep(1)
    print('finished')

async def background_task():
    print("a")
    asyncio.create_task(task())
    print("b")

def main():
    print("Main program started python", sys.version)
    asyncio.run(background_task())
    for i in range(3):
        sleep(3)
        print(f"Main program iteration {i}")

if __name__ == "__main__":
    main()
</code></pre>
<p>The output:</p>
<pre class="lang-none prettyprint-override"><code>Main program started python 3.11.6 (main, Oct 23 2023, 22:48:54) [GCC 11.4.0]
a
b
Background task iteration 0
Main program iteration 0
Main program iteration 1
Main program iteration 2
</code></pre>
<p>Why does the coroutine <code>task</code> never finish the loop it executes? The code never printed</p>
<pre class="lang-none prettyprint-override"><code>Background task iteration 1
Background task iteration 2
Background task iteration 3
Background task iteration 4
finished
</code></pre>
<p>Why is only the first iteration executed?</p>
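For reference, a common workaround (not from the original post) is to run one long-lived event loop in a background thread and submit coroutines to it with <code>asyncio.run_coroutine_threadsafe</code>; unlike <code>asyncio.run</code>, which closes its loop as soon as the wrapped coroutine returns, the loop here outlives the submitting function. A minimal sketch:

```python
import asyncio
import threading

# One long-lived event loop in a daemon thread. asyncio.run() would close
# the loop as soon as background_task() returned, dropping the pending task.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def task():
    for i in range(3):
        await asyncio.sleep(0.01)
    return "finished"

def start_background_task():
    # Thread-safe: schedules the coroutine on the loop running in the thread
    # and returns a concurrent.futures.Future the sync caller can poll.
    return asyncio.run_coroutine_threadsafe(task(), loop)

future = start_background_task()
print(future.result())  # blocks only if/when the sync code wants the result
```

The sync code can keep running and only call `future.result()` when (or if) it needs the outcome.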
|
<python><python-asyncio>
|
2024-01-04 09:53:13
| 2
| 1,085
|
Mehdi Ben Hamida
|
77,757,166
| 5,997,555
|
duckdb SQL select result to JSON
|
<p>Is there a way to convert the result of a <code>SELECT</code> query on a table directly to JSON without writing it to a file? Perhaps with the JSON extension of <code>duckdb</code>?</p>
<p>I could also use the python client, where I'd convert the result to a pandas dataframe and then to JSON, but I figured there should be a more direct way.</p>
<p>Example:</p>
<pre><code>CREATE TABLE weather (
city VARCHAR,
temp_lo INTEGER, -- minimum temperature on a day
temp_hi INTEGER, -- maximum temperature on a day
prcp REAL,
date DATE
);
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
INSERT INTO weather VALUES ('Vienna', -5, 35, 10, '2000-01-01');
</code></pre>
<p>An example query would be <code>"SELECT city, temp_hi FROM weather;"</code>, and the desired json would look like:</p>
<pre><code>{"city": ["San Francisco", "Vienna"], "temp_hi": [50, 35]}
</code></pre>
<p>So to recap, I'm looking for a way to create the desired JSON directly, without converting the result to a Python object first.</p>
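For illustration, the column-oriented JSON shape above can be built from any DB-API cursor using <code>cursor.description</code> for the column names. The sketch below uses stdlib <code>sqlite3</code> as a stand-in; DuckDB's Python client exposes the same <code>execute</code>/<code>description</code>/<code>fetchall</code> interface, so the pivot step is identical:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weather (city TEXT, temp_lo INT, temp_hi INT, prcp REAL, date TEXT)")
con.execute("INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27')")
con.execute("INSERT INTO weather VALUES ('Vienna', -5, 35, 10, '2000-01-01')")

cur = con.execute("SELECT city, temp_hi FROM weather")
cols = [d[0] for d in cur.description]  # column names straight from the cursor
rows = cur.fetchall()
# Pivot the row-major result into the column-oriented shape from the question.
result = json.dumps({c: [row[i] for row in rows] for i, c in enumerate(cols)})
print(result)  # {"city": ["San Francisco", "Vienna"], "temp_hi": [50, 35]}
```

This still builds a Python dict in between, so it does not answer the "entirely inside SQL" part, but it avoids the pandas detour.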
|
<python><sql><json><duckdb>
|
2024-01-04 09:34:37
| 2
| 7,083
|
Val
|
77,757,164
| 3,747,486
|
Looking for easy way to setup a callback url for a console program requesting an API
|
<p>I am building a Python program that sends POST requests to a 3rd-party API. I am the only user and am not building an app for others. I built the program on my Win11 desktop to query (read-only) data from the 3rd party.</p>
<p>The problem is that the API requires 3-legged OAuth. Based on my understanding, I need to set up a web server and pay for a domain in order to provide a valid callback URL. Is that true?</p>
<p>Are there better approaches for my case? I have an Azure subscription. Could I set up a static web app for this purpose?</p>
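For context: many OAuth providers accept a loopback redirect URI such as <code>http://127.0.0.1:PORT/callback</code> for native/desktop apps, so no domain or hosted web app is needed for a one-user script (whether your specific provider allows this is an assumption to verify). A sketch of catching the redirect locally with the stdlib, where the final <code>urlopen</code> call merely simulates the browser redirect:

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = {}

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The provider redirects the user's browser here with ?code=... appended.
        query = urllib.parse.urlparse(self.path).query
        captured.update(urllib.parse.parse_qs(query))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CallbackHandler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

# In a real flow you would open the provider's authorize URL in a browser with
# redirect_uri=http://127.0.0.1:{port}/callback; here we simulate the redirect.
urllib.request.urlopen(f"http://127.0.0.1:{port}/callback?code=demo123").read()
print(captured["code"][0])  # demo123
```

The script blocks on the single callback request, extracts the authorization code, and then exchanges it for a token as usual.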
|
<python><callback><azure-web-app-service>
|
2024-01-04 09:34:28
| 1
| 326
|
Mark
|
77,756,723
| 7,622,324
|
Type hints for class decorator
|
<p>I have a class decorator which removes one method and adds another to a class.</p>
<p>How could I provide type hints for that? I've obviously tried to research this myself, to no avail.</p>
<p>Most people claim this requires an intersection type. Is there any recommended solution? Something I'm missing?</p>
<p>Example code:</p>
<pre><code>from typing import Callable, Protocol

class MyProtocol(Protocol):
    def do_check(self) -> bool:
        raise NotImplementedError

def decorator(clazz: type[MyProtocol]) -> ???:
    do_check: Callable[[MyProtocol], bool] = getattr(clazz, "do_check")

    def do_assert(self: MyProtocol) -> None:
        assert do_check(self)

    delattr(clazz, "do_check")
    setattr(clazz, "do_assert", do_assert)
    return clazz

@decorator
class MyClass(MyProtocol):
    def do_check(self) -> bool:
        return False

mc = MyClass()
mc.do_check()   # hints as if it exists, but doesn't
mc.do_assert()  # no hints, but works
</code></pre>
<p>I guess what I'm looking for is the correct return type for <code>decorator</code>.</p>
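For context, one workaround people reach for in the absence of intersection types (an assumption about the best fit, not from the post) is to declare the produced interface as its own Protocol and have the decorator return that type via <code>cast</code>. The checker then hints <code>do_assert</code> and stops hinting <code>do_check</code>, at the cost of losing any other members of the original class. The <code>AssertingProtocol</code> name below is hypothetical, and <code>do_check</code> returns <code>True</code> here so the runtime assert passes:

```python
from typing import Callable, Protocol, cast

class MyProtocol(Protocol):
    def do_check(self) -> bool: ...

class AssertingProtocol(Protocol):
    # The interface the decorator actually produces.
    def do_assert(self) -> None: ...

def decorator(clazz: type[MyProtocol]) -> type[AssertingProtocol]:
    do_check: Callable[[MyProtocol], bool] = getattr(clazz, "do_check")

    def do_assert(self: MyProtocol) -> None:
        assert do_check(self)

    delattr(clazz, "do_check")
    setattr(clazz, "do_assert", do_assert)
    # Tell the checker the transformed class satisfies AssertingProtocol.
    return cast(type[AssertingProtocol], clazz)

@decorator
class MyClass:
    def do_check(self) -> bool:
        return True  # True so do_assert() succeeds at runtime

mc = MyClass()
mc.do_assert()  # hinted via AssertingProtocol; do_check is no longer hinted
```

This trades precision (the result is only known as an `AssertingProtocol`) for hints that match runtime behavior.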
|
<python><mypy>
|
2024-01-04 08:11:00
| 1
| 669
|
Omer Lubin
|
77,756,493
| 15,106,139
|
OpenCV Document Scanner - better quadrilateral detection
|
<p>I'm having trouble detecting the edges of documents in real time from the camera, as the backgrounds can vary and the color of the document can be very close to the background color.
Any help, regardless of the programming language, will be appreciated.</p>
<p>I've tried to do so by:</p>
<pre><code>const val CLOSE_KERNEL_SIZE = 5.0
const val BLURRING_KERNEL_SIZE = 5.0
const val DILATE_KERNEL_SIZE = 5.0

val (threshold1, threshold2) = getCannyThresholds(src)
Imgproc.cvtColor(src, src, Imgproc.COLOR_BGR2GRAY)
Imgproc.GaussianBlur(src, src, Size(BLURRING_KERNEL_SIZE, BLURRING_KERNEL_SIZE), 3.0)
Imgproc.Canny(src, destination, threshold1, threshold2)

val dilateKernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, Size(DILATE_KERNEL_SIZE, DILATE_KERNEL_SIZE))
Imgproc.dilate(destination, destination, dilateKernel)

val kernel = Imgproc.getStructuringElement(
    Imgproc.MORPH_ELLIPSE,
    Size(CLOSE_KERNEL_SIZE, CLOSE_KERNEL_SIZE)
)
Imgproc.morphologyEx(
    destination,
    destination,
    Imgproc.MORPH_CLOSE,
    kernel,
    Point(-1.0, -1.0),
    10
)

val preview2 = Mat(destination, Rect(0, 0, destination.width(), destination.height()))
showPreview(preview2)
</code></pre>
<p>As canny thresholds are dynamically calculated based on this method:</p>
<pre><code>val brightness = Core.mean(mat).`val`[0] / 255
val threshold1 = 10 / brightness
val threshold2 = 40 / brightness
</code></pre>
<p>And the contours are found by:</p>
<pre><code>private fun findLargestContours(inputMat: Mat): List<MatOfPoint>? {
    val mHierarchy = Mat()
    val mContourList: List<MatOfPoint> = ArrayList()
    // Finding contours - as we are sorting by area anyway, we can use RETR_LIST - faster than RETR_EXTERNAL.
    Imgproc.findContours(inputMat, mContourList, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE)
    // Convert the contours to their convex hulls, i.e. remove minor nuances in the contour.
    val mHullList: MutableList<MatOfPoint> = ArrayList()
    val tempHullIndices = MatOfInt()
    for (i in mContourList.indices) {
        Imgproc.convexHull(mContourList[i], tempHullIndices)
        mHullList.add(hull2Points(tempHullIndices, mContourList[i]))
    }
    // Release mContourList as its job is done.
    for (c in mContourList) {
        c.release()
    }
    tempHullIndices.release()
    mHierarchy.release()
    if (mHullList.size != 0) {
        mHullList.sortWith { lhs, rhs ->
            Imgproc.contourArea(rhs).compareTo(Imgproc.contourArea(lhs))
        }
        return mHullList.subList(0, min(mHullList.size, FIRST_MAX_CONTOURS))
    }
    return null
}

private fun hull2Points(hull: MatOfInt, contour: MatOfPoint): MatOfPoint {
    val indexes = hull.toList()
    val points: MutableList<Point> = ArrayList()
    val ctrList = contour.toList()
    for (index in indexes) {
        points.add(ctrList[index])
    }
    val point = MatOfPoint()
    point.fromList(points)
    return point
}
</code></pre>
<p>As a result I am getting the following, but the Canny edge detection didn't work out as expected, as the edges are not fully detected. The contours found are drawn in green:
<a href="https://i.sstatic.net/fiAad.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fiAad.jpg" alt="Result" /></a></p>
<p>Camera input:
<a href="https://i.sstatic.net/y5fPa.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y5fPa.jpg" alt="Camera input" /></a></p>
<p>Note: I've tried with Hough Lines but as I am unable to remove the text there are too many lines.</p>
|
<python><android><opencv><image-processing>
|
2024-01-04 07:19:57
| 0
| 745
|
Stoyan Milev
|
77,756,344
| 5,761,601
|
PyCharm uses wrong interpreter with managed Jupyter server
|
<p>PyCharm uses the wrong python interpreter even when you select a different one using a managed Jupyter server.
<a href="https://i.sstatic.net/HSFzJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSFzJ.png" alt="pycharm settings" /></a>
I noticed this first because I was getting wrong results from a dependency that was installed on the interpreter.
Debugging the code validated this. Things I tried are:</p>
<ul>
<li>deleting the Jupyter notebook checkpoints</li>
<li>restarting the kernel</li>
<li>reinstalling the dependency</li>
</ul>
<p>But nothing worked. Does anyone have an idea how to force PyCharm to use the right interpreter?</p>
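As a quick diagnostic (not a fix from the post), printing the interpreter and environment root inside a notebook cell confirms which Python the kernel is actually running on, which narrows down whether PyCharm launched the kernel you selected or a stale kernel spec:

```python
import sys

# Run this in a notebook cell: if the printed executable is not the
# interpreter selected in PyCharm's settings, the kernel spec points elsewhere.
print(sys.executable)  # path of the Python binary the kernel runs on
print(sys.prefix)      # root of that environment
```

Comparing `sys.executable` against the interpreter path shown in PyCharm's settings dialog makes the mismatch concrete before touching kernel specs.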
|
<python><jupyter-notebook><pycharm><jupyter>
|
2024-01-04 06:48:07
| 1
| 557
|
warreee
|
77,756,136
| 11,156,161
|
Properly scheduling nested functions with asyncio
|
<p>I'm having trouble scheduling work with asyncio. I have a code like this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def stream():
    char_string = "Hi. Hey. Hello."  # matches the output below
    for char in char_string:
        await asyncio.sleep(0.1)  # something time consuming happening here
        print("got char:", char)
        yield char

async def sentences_generator():
    sentence = ""
    async for char in stream():
        sentence += char
        if char in [".", "!", "?"]:
            print("got sentence: ", sentence)
            yield sentence
            sentence = ""

async def process_sentence(sentence: str):
    print("waiting for processing sentence: ", sentence)
    await asyncio.sleep(len(sentence) * 0.1)
    print("sentence processed!")

async def main():
    i = 0
    async for sentence in sentences_generator():
        print("processing sentence: ", i)
        await process_sentence(sentence)
        i += 1

asyncio.run(main())
</code></pre>
<p>This is my output:</p>
<pre><code>got char: H
got char: i
got char: .
got sentence: Hi.
processing sentence: 0
waiting for processing sentence: Hi.
sentence processed!
got char:
got char: H
got char: e
got char: y
got char: .
got sentence: Hey.
processing sentence: 1
waiting for processing sentence: Hey.
sentence processed!
got char:
got char: H
got char: e
got char: l
got char: l
got char: o
got char: .
got sentence: Hello.
processing sentence: 2
waiting for processing sentence: Hello.
sentence processed!
</code></pre>
<p>This is not optimal. While the <code>process_sentence</code> is awaiting <code>asyncio.sleep()</code> (representing some other time consuming process), it should be already taking next chars from the stream. So, I would expect an output like this:</p>
<pre><code>got char: H
got char: i
got char: .
got sentence: Hi.
processing sentence: 0
waiting for processing sentence: Hi.
got char: # (space char)
got char: H
sentence processed!
got char: e
got char: y
got char: .
got sentence: Hey.
processing sentence: 1
waiting for processing sentence: Hey.
got char: # (space char)
got char: H
got char: e
sentence processed!
got char: l
got char: l
got char: o
got char: .
got sentence: Hello.
processing sentence: 2
waiting for processing sentence: Hello.
sentence processed!
</code></pre>
<p>How can I achieve it?</p>
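For reference, one way to get the interleaving described above (an assumption about intent, not code from the post) is to decouple the stages with an <code>asyncio.Queue</code>: a producer task keeps pulling characters from the stream and enqueues finished sentences, while a consumer task processes them, so the stream advances during <code>process_sentence</code>'s sleep. A condensed sketch:

```python
import asyncio

async def stream():
    for char in "Hi. Hey.":
        await asyncio.sleep(0.01)  # stand-in for a slow character source
        yield char

async def producer(queue):
    sentence = ""
    async for char in stream():
        sentence += char
        if char in ".!?":
            await queue.put(sentence.strip())
            sentence = ""
    await queue.put(None)  # sentinel: stream exhausted

async def consumer(queue, processed):
    while (sentence := await queue.get()) is not None:
        await asyncio.sleep(len(sentence) * 0.01)  # stand-in for real work
        processed.append(sentence)

async def main():
    queue = asyncio.Queue()
    processed = []
    # Both tasks run concurrently: chars keep arriving while a sentence
    # is being processed, instead of the strictly serial async-for chain.
    await asyncio.gather(producer(queue), consumer(queue, processed))
    return processed

print(asyncio.run(main()))  # ['Hi.', 'Hey.']
```

The serial behavior in the question comes from `await process_sentence(...)` inside the `async for` body: the generator cannot be resumed until that await finishes, so splitting the pipeline into independent tasks is what buys the overlap.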
|
<python><asynchronous><python-asyncio>
|
2024-01-04 05:47:55
| 2
| 727
|
MKaras
|
77,755,849
| 4,276,963
|
Is there any way to specify the type of event loop for the asyncio REPL in Python?
|
<p>I'm working on some networking code in Python that uses the asyncio module, and I like to use the asyncio REPL (<code>python -m asyncio</code>) a lot for testing examples. What I've noticed, though, is that the default event loop type for the asyncio REPL changes depending on the OS. For example, on Linux it uses the selector event loop, while on Windows it uses the proactor.</p>
<p>My question is this:</p>
<ol>
<li>Is there any way to specify which event loop the asyncio REPL uses (without having to patch the module yourself, which obviously works)?</li>
<li>Follow up question -- if there isn't -- is there perhaps a way to replace a running event loop in a thread with another type of event loop?</li>
</ol>
<p>Let me know what your thoughts are.</p>
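For context, outside the REPL you can always construct a specific loop type yourself and drive it directly; <code>SelectorEventLoop</code> exists on every platform, while <code>ProactorEventLoop</code> is Windows-only. Whether the asyncio REPL honors a policy or an explicitly created loop is exactly what the question asks, so this sketch only shows the standard API:

```python
import asyncio

# Explicitly build a selector-based loop instead of relying on the
# platform-dependent default policy.
loop = asyncio.SelectorEventLoop()
try:
    result = loop.run_until_complete(asyncio.sleep(0, result="ran on selector loop"))
finally:
    loop.close()
print(type(loop).__name__, "-", result)
```

On Windows, `asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())` flips the default for code that uses `asyncio.run`; whether `-m asyncio` picks that up before starting its loop is the open part of the question.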
|
<python><python-asyncio><read-eval-print-loop>
|
2024-01-04 04:10:39
| 2
| 767
|
Matthew Roberts
|
77,755,795
| 4,451,521
|
Poetry. Does add install or create a virtual environment?
|
<p>I am getting very confused trying to understand Poetry usage.</p>
<p>I thought you needed <code>poetry install</code> to actually install packages (after adding them).
But I have not run that, and only did:</p>
<pre><code>poetry add pytest --dev
Creating virtualenv rp-poetry-L1ArV34E-py3.9 in /home/myself/.cache/pypoetry/virtualenvs
The --dev option is deprecated, use the `--group dev` notation instead.
Using version ^7.4.4 for pytest
Updating dependencies
Resolving dependencies... (0.8s)
Writing lock file
Package operations: 6 installs, 0 updates, 0 removals
• Installing exceptiongroup (1.2.0)
• Installing iniconfig (2.0.0)
• Installing packaging (23.2)
• Installing pluggy (1.3.0)
• Installing tomli (2.0.1)
• Installing pytest (7.4.4)
</code></pre>
<p>With that I get the following strange things:</p>
<ul>
<li><p>It says that add created a virtual environment! Why???</p>
</li>
<li><p>I did <code>ls ~/.cache/pypoetry/virtualenvs/</code> and there are <strong>two!</strong> virtual environments. (When was the first one created, and why did <code>poetry env list</code> not report it?)</p>
</li>
<li><p>I did</p>
<p>poetry env list</p>
<p>rp-poetry-L1ArV34E-py3.9 (Activated)</p>
</li>
</ul>
<p>So the virtual environment is activated, right!?
Then why, when I do this,</p>
<pre><code>python -c 'import pytest'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pytest'
</code></pre>
<p>but when I do this <code>poetry run python -c 'import pytest'</code> it is OK?</p>
<p>That means that the virtual environment <strong>IS NOT</strong> activated right?</p>
<ul>
<li>And last, why is pytest imported without a problem if I have not done <code>poetry install</code>?</li>
</ul>
|
<python><python-poetry>
|
2024-01-04 03:51:31
| 1
| 10,576
|
KansaiRobot
|
77,755,646
| 1,492,229
|
Bag of Words with Negative Words in Python
|
<p>I have documents that are not normal text; they consist of scientific terminology codes. The text of these documents looks like this:</p>
<pre><code>RepID,Txt
1,K9G3P9 4H477 -Q207KL41 98464 ... Q207KL41
2,D84T8X4 -D9W4S2 -D9W4S2 8E8E65 ... D9W4S2
3,-05L8NJ38 K2DD949 0W28DZ48 207441 ... K2D28K84
</code></pre>
<p>I can build a feature set using the BOW algorithm. Here is my code:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

def BOW(df):
    CountVec = CountVectorizer()  # to use only bigrams: ngram_range=(2,2)
    Count_data = CountVec.fit_transform(df)
    Count_data = Count_data.astype(np.uint8)
    cv_dataframe = pd.DataFrame(Count_data.toarray(), columns=CountVec.get_feature_names_out(), index=df.index)  # <- HERE
    return cv_dataframe.astype(np.uint8)

df_reps = pd.read_csv("c:\\file.csv")
df = BOW(df_reps["Txt"])
</code></pre>
<p>The result will be the count of words in the "<strong>Txt</strong>" column.</p>
<pre><code>RepID K9G3P9 4H477 -Q207KL41 98464 ... Q207KL41
1 2 8 3 2 ... 1
2 0 1 2 4 ... 2
</code></pre>
<p>The trick, and here is where I need the help, is that some of these terms have a <strong>-</strong> in front of them, and those should count as negative values.</p>
<p>So if a text has these values <code>Q207KL41 -Q207KL41 -Q207KL41</code>,</p>
<p>in that case the terms that start with - should be counted as negative, and therefore the BOW value for <code>Q207KL41</code> is <strong>-1</strong>.</p>
<p>Instead of having one feature for <code>Q207KL41</code> and another for <code>-Q207KL41</code>,
they both count towards the same term <code>Q207KL41</code>, but with <strong>positive and negative</strong> signs.</p>
<p>so the dataset after BOW will look like this</p>
<pre><code>RepID K9G3P9 4H477 Q207KL41 98464 ...
1 2 8 -2 2 ...
2 0 1 0 4 ...
</code></pre>
<p>How to do that?</p>
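As an illustration of the counting logic itself (independent of CountVectorizer, which has no built-in notion of signed tokens), a signed bag-of-words can be computed per document with a plain Counter, subtracting for '-'-prefixed tokens; the resulting dicts could then be assembled into a DataFrame:

```python
from collections import Counter

def signed_bow(text):
    """Count tokens, treating a leading '-' as a negative occurrence."""
    counts = Counter()
    for token in text.split():
        if token.startswith("-"):
            counts[token[1:]] -= 1  # '-TERM' counts against TERM
        else:
            counts[token] += 1
    return counts

row = signed_bow("Q207KL41 -Q207KL41 -Q207KL41 K9G3P9 K9G3P9")
print(dict(row))  # {'Q207KL41': -1, 'K9G3P9': 2}
```

Running `signed_bow` over each `Txt` value and passing the list of dicts to `pandas.DataFrame` (filling missing terms with 0) reproduces the desired table, and the dtype would need to be signed (e.g. `int8`) rather than `uint8` to hold negative counts.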
|
<python><machine-learning><scikit-learn><nlp>
|
2024-01-04 02:56:20
| 1
| 8,150
|
asmgx
|
77,755,615
| 13,115,571
|
How to modify tree attribute inside form view in odoo16?
|
<p>I want to modify tree attributes inside the form view for each record. I tried to use get_view(), but it is not working as expected: active_id is not the record I opened, and I got another record instead. Which function runs every time we open the record's form view, and how can I do it? This is my sample code for adding a limit attribute to the tree.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>from odoo import models, fields, api
from lxml import etree
import logging

class StockPicking(models.Model):
    _inherit = "stock.picking"

    limit = fields.Integer(string="Tree Pagination Limit")

    def tree_pagination_limit_apply(self):
        return {
            'type': 'ir.actions.client',
            'tag': 'reload',
        }

    @api.model
    def get_view(self, view_id=None, view_type='form', **options):
        logging.info("Custom get_view called")
        # Fetch the original view
        result = super(StockPicking, self).get_view(view_id=view_id, view_type=view_type, **options)
        if view_type == 'form':
            # Parse the view architecture
            doc = etree.XML(result['arch'])
            # Locate the specific tree view within the form
            for tree in doc.xpath("//field[@name='move_ids_without_package']/tree"):
                # Ensure the context has 'active_id' when this form is opened
                if 'active_id' in self.env.context:
                    active_id = self.env.context.get('active_id')
                    # Ensure active_id is not None and browse the record
                    if active_id:
                        try:
                            current_record = self.browse(active_id)
                            if current_record and current_record.limit > 0:
                                tree.set('limit', str(current_record.limit))
                                logging.info(f"Set tree view limit to {current_record.limit}")
                        except Exception as e:
                            logging.error(f"Error setting limit on tree view: {e}")
            # Update the architecture in the result
            result['arch'] = etree.tostring(doc, encoding='unicode')
        return result</code></pre>
</div>
</div>
</p>
|
<python><xml><odoo><odoo-16>
|
2024-01-04 02:45:33
| 1
| 430
|
Neural
|
77,755,505
| 14,122
|
Use Pydantic with strict=False to coerce UUIDs in a structure to strings?
|
<p>In a Python 3.12 codebase using Pydantic 2.5.3, I'm trying to coerce a dictionary into a form where it contains only JSON-serializable values (without actually converting it all the way to JSON at this time).</p>
<pre class="lang-py prettyprint-override"><code>import uuid, pydantic
</code></pre>
<p>I have a type definition akin to the following:</p>
<pre class="lang-py prettyprint-override"><code>type JsonDict = dict[str, "JsonTypes"]
type JsonTypes = None | str | float | bool | int | list["JsonTypes"] | JsonDict
</code></pre>
<p>I also have a value that <em>would</em> be JSON-serializable, but for presence of a UUID:</p>
<pre><code>myValue = {
    "keyA": {
        uuid.UUID('0589f92c-37b1-4837-983a-3fdc2d8416ea'): {
            "keyB": uuid.UUID('a98251e6-2d80-407f-9975-4918f0a28119'),
        },
    },
}
</code></pre>
<hr />
<p>In general, pydantic's <code>strict=False</code> mode is willing to coerce things to the desired type when the type definition calls for it; but for some reason, that isn't true here:</p>
<pre class="lang-py prettyprint-override"><code>adapter = pydantic.TypeAdapter(JsonDict)
adapter.validate_python(myValue, strict=False)
</code></pre>
<p>...fails with:</p>
<pre class="lang-none prettyprint-override"><code>keyA.str
Input should be a valid string [type=string_type, input_value={UUID('0589f92c-37b1-4837...7f-9975-4918f0a28119')}}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/string_type
keyA.`dict[str,nullable[union[str,float,bool,int,list[...],...]]]`.UUID('0589f92c-37b1-4837-983a-3fdc2d8416ea').[key]
Input should be a valid string [type=string_type, input_value=UUID('0589f92c-37b1-4837-983a-3fdc2d8416ea'), input_type=UUID]
For further information visit https://errors.pydantic.dev/2.5/v/string_type
</code></pre>
<p>(and many further errors, describing why none of the other possibilities apply).</p>
<hr />
<p>By contrast, if using a dataclass or a proper <code>pydantic.BaseModel</code> derivative, one <em>can</em> coerce all the way to JSON even with the UUIDs present:</p>
<pre><code>@pydantic.dataclasses.dataclass
class MyValueType:
    keyA: dict[uuid.UUID, dict[str, uuid.UUID]]

myValueObj = MyValueType(**myValue)
myValueJson = pydantic.RootModel[MyValueType](myValueObj).model_dump_json()
</code></pre>
<p>...but I don't know how to stop at "JSON-serializable" without actually going all the way to JSON: Changing <code>.model_dump_json()</code> to <code>.model_dump()</code> in the last snippet leaves the UUID objects <em>as UUIDs</em> without serializing them to strings.</p>
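For comparison, the "stop at JSON-serializable" step can be done without pydantic by walking the structure and stringifying UUIDs wherever they appear, including dict keys; a minimal sketch (not the pydantic-native answer the question is after):

```python
import uuid

def jsonable(value):
    # Recursively convert UUIDs (including ones used as dict keys) to strings,
    # leaving everything else untouched.
    if isinstance(value, uuid.UUID):
        return str(value)
    if isinstance(value, dict):
        return {jsonable(k): jsonable(v) for k, v in value.items()}
    if isinstance(value, list):
        return [jsonable(v) for v in value]
    return value

myValue = {
    "keyA": {
        uuid.UUID("0589f92c-37b1-4837-983a-3fdc2d8416ea"): {
            "keyB": uuid.UUID("a98251e6-2d80-407f-9975-4918f0a28119"),
        },
    },
}
out = jsonable(myValue)
print(out["keyA"]["0589f92c-37b1-4837-983a-3fdc2d8416ea"]["keyB"])
# a98251e6-2d80-407f-9975-4918f0a28119
```

The result round-trips through `json.dumps` without a custom encoder, since every UUID has become a plain string.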
|
<python><pydantic>
|
2024-01-04 02:05:43
| 0
| 299,045
|
Charles Duffy
|
77,755,443
| 10,576,557
|
Python Pass Execution to External Process
|
<p>I would like to do some computation in Python and then pass or transfer to an interactive process, such as <code>ssh</code>. I want to dive into the new process. This would (preferably) be something where Python hands the interaction over and quits, but alternatively could be something the user interacts with and then returns to the original Python process when done.</p>
<p>I have tried this with <code>subprocess.run()</code>, but don't get to do anything on the remote host and the Python process continues after <code>subprocess.run</code> times out.</p>
<pre class="lang-python prettyprint-override"><code>myCommand = ['ssh','hostname']
subprocess.run(myCommand, shell=False)
</code></pre>
<p>Is this something Python can handle or am I using the wrong tool?</p>
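For reference, both behaviors described are available in the stdlib: handing the terminal over and quitting is what the <code>os.exec*</code> family does (it replaces the Python process image entirely), while <code>subprocess.run</code> without a <code>timeout</code> already blocks until an interactive child exits and then resumes Python. A sketch of both patterns, where the <code>ssh hostname</code> target is hypothetical:

```python
import os
import subprocess
import sys

def hand_over(argv):
    # Replace the current Python process with the new program; this function
    # never returns if the exec succeeds, so Python "quits" into the child.
    os.execvp(argv[0], argv)

def run_and_return(argv):
    # Block until the interactive program exits, then Python continues.
    return subprocess.run(argv).returncode

if __name__ == "__main__" and "--demo" in sys.argv:
    hand_over(["ssh", "hostname"])  # hypothetical host; replaces this process
```

The original snippet also passes `useShell=False`, which `subprocess.run` would reject; the parameter is named `shell`. With a correct call and no `timeout`, `subprocess.run(["ssh", "host"])` gives a fully interactive session.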
|
<python><python-3.x>
|
2024-01-04 01:37:02
| 0
| 569
|
shepster
|
77,755,439
| 404,604
|
How to set up a Poetry project to install module as a runnable Python binary?
|
<p>I have a Python project that I intend to run as a command line utility, and I'm managing its build using Poetry. Through <code>poetry install</code>, I'm able to install a snapshot of the module and import it within the virtualised environment or run it through <code>python -m</code>. However, I want to be able to add it to the shell's path, just like how <code>poetry</code> itself ends up being runnable without necessarily having to invoke it with <code>python -m</code>. How do I configure my Poetry project to achieve this?</p>
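For reference, Poetry exposes console entry points through the <code>[tool.poetry.scripts]</code> table in <code>pyproject.toml</code>; after <code>poetry install</code>, the named command is placed on the environment's path as an executable. The package and function names below are hypothetical:

```toml
[tool.poetry.scripts]
# command-name = "package.module:function"
mytool = "mytool.cli:main"
```

Inside the environment, `poetry run mytool` (or plain `mytool` in an activated shell) then invokes `mytool.cli.main()`, and installing the built wheel with `pip install` puts the same command on the system path.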
|
<python><python-poetry>
|
2024-01-04 01:35:10
| 1
| 6,942
|
Psycho Punch
|
77,755,277
| 15,781,591
|
Unable to increase pandas dataframe decimal precision
|
<p>I make the following dataframe in python:</p>
<pre><code>import pandas as pd
data = [['Blue', 34], ['Green', 61], ['Red', 22]]
df = pd.DataFrame(data, columns=['Color', 'Value'])
df
</code></pre>
<p>and see:</p>
<pre><code> Color Value
------------------
0 Blue 34
1 Green 61
2 Red 22
</code></pre>
<p>I then want to divide the "Value" column by 7, yielding new values in each row that will not divide evenly and have many decimal places of precision.</p>
<p>I then divide by 7 with:</p>
<pre><code>df['Value'] = df['Value']/7
df
</code></pre>
<p>And I see:</p>
<pre><code> Color Value
-------------------
0 Blue 4.86
1 Green 8.71
2 Red 3.14
</code></pre>
<p>But I want to see more decimal precision than this. I would like to see 5 decimal places of precision.</p>
<p>And so I try using the <code>.round(5)</code> function with:</p>
<pre><code>df.round(5)
</code></pre>
<p>And nothing has changed to the precision of the values, still just 2 decimal places.</p>
<p>I am not sure why selecting 5 decimal places here is not being applied to my indicated data frame.</p>
<p>I also tried increasing the precision with:</p>
<pre><code>pd.set_option('display.precision',5)
</code></pre>
<p>before calling the dataframe, but this changed nothing.</p>
<p>How can I fix this code so that I get values with 5 decimal places of precision? This is pertaining to increasing decimal precision, instead of specifically rounding down.</p>
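For context: after the division, the values still carry full float64 precision internally; only the printed display is truncated. Two separate issues are in play: `df.round(5)` returns a new frame (it does not modify `df` in place, and rounding cannot add digits anyway), and the number of digits shown is governed by the display option. A sketch showing both, assuming pandas is available:

```python
import pandas as pd

pd.set_option("display.precision", 5)  # controls how many digits print
df = pd.DataFrame([["Blue", 34], ["Green", 61], ["Red", 22]],
                  columns=["Color", "Value"])
df["Value"] = df["Value"] / 7  # full float64 precision is kept internally
df = df.round(5)               # round() returns a copy; re-assign it
print(df)
```

With the option set, the frame prints values like 4.85714 rather than 4.86; if `pd.set_option('display.precision', 5)` appeared to do nothing, a likely cause is an environment-specific display override (e.g. a notebook float format) rather than lost data.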
|
<python><pandas><dataframe>
|
2024-01-04 00:20:21
| 0
| 641
|
LostinSpatialAnalysis
|
77,754,884
| 2,152,371
|
applying read_csv to a Series of filenames only loads the first dataframe
|
<p>When I run this block of code:</p>
<pre><code>import pandas as pd
import os
working_dir = os.getcwd()+'/'
files = pd.Series(os.listdir(working_dir))
input_files = files[files.str.contains('.csv')]
input_files = working_dir+input_files
dataframes = input_files.apply(pd.read_csv)
</code></pre>
<p>It returns a Series containing the same dataframe repeated: the first one it read.</p>
<p>What the? I have confirmed that all the files have different columns and data in them.</p>
<p>I expect that it returns a Series containing a dataframe for every filename in the original Series that the read_csv had been applied to.</p>
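Whatever the cause of the `Series.apply` behavior, a plain comprehension sidesteps routing DataFrames through a Series entirely; a self-contained sketch with temporary files standing in for the working directory (pandas assumed available):

```python
import pandas as pd
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    # Two CSVs with different columns, standing in for the working directory.
    Path(tmp, "a.csv").write_text("x,y\n1,2\n")
    Path(tmp, "b.csv").write_text("p,q,r\n3,4,5\n")

    input_files = sorted(Path(tmp).glob("*.csv"))
    # One DataFrame per file; no Series.apply involved.
    dataframes = [pd.read_csv(f) for f in input_files]

print([list(df.columns) for df in dataframes])  # [['x', 'y'], ['p', 'q', 'r']]
```

As a side note, `files.str.contains('.csv')` treats the pattern as a regex, where `.` matches any character; `files.str.endswith('.csv')` is the stricter filter.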
|
<python><pandas><dataframe><read-csv>
|
2024-01-03 22:12:49
| 1
| 470
|
Miko
|
77,754,838
| 6,840,119
|
How to avoid circular dependencies when using pydantic v2?
|
<p>I have a <code>Snapshot</code> class in one file with an instance method <code>view</code>, which returns a <code>View</code> class defined in another file. But an instance of <code>View</code> also needs to be constructed with a <code>Snapshot</code> instance.</p>
<p>Requirements:</p>
<ul>
<li>have these 2 classes in separate files</li>
<li>use pydantic-v2 types and checks correctly everywhere</li>
</ul>
<p>I have come up with the solution below, but using <code>"get_snapshot_class()"</code> and <code>TYPE_CHECKING</code> seems a bit like a hack. Is there a better solution?</p>
<pre><code># snapshot.py
from __future__ import annotations

from typing import TYPE_CHECKING

from pydantic import BaseModel

if TYPE_CHECKING:
    from .view import View

class Snapshot(BaseModel):
    position: str

    def view(self, context: str) -> "View":
        # Delay import to avoid circular import
        from .view import View
        return View(snapshot=self, context=context)
</code></pre>
<pre><code># view.py
from pydantic import BaseModel

def get_snapshot_class():
    from qcs.snapshot import Snapshot
    return Snapshot

class View(BaseModel):
    snapshot: "get_snapshot_class()"
    context: str
</code></pre>
|
<python><python-3.x><pydantic><pydantic-v2>
|
2024-01-03 21:57:01
| 0
| 321
|
Danny
|
77,754,661
| 6,357,916
|
process.kill() does not kill the process
|
<p>I want to start a process from Python and wait till it exits. Also, if it throws an error, I check whether the string "error" is printed by it on the terminal, and if yes, I just want to kill it and proceed. I also want to write all terminal output to a file. My current code looks like this:</p>
<pre><code>import os
import subprocess
import threading

def monitor_process(process, file):
    while True:
        if process.stdout:
            outline = process.stdout.readline().decode()
            file.write(outline)
            file.flush()
            if "error" in outline:
                file.write("stdout: Error encountered. Exiting process.\n")
                file.flush()
                process.kill()  # in debug mode, code flow reaches here when the process outputs "error" on the terminal. However, it doesn't kill the process.
                break

def runProcess(flight_path):
    output_file = flight_path + "/Process_terminal.log"
    with open(output_file, "w") as file:
        os.chdir("./build")
        command = "./Process"
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        output_thread = threading.Thread(target=monitor_process, args=(process, file))
        output_thread.start()
        process.wait()
        output_thread.join()
</code></pre>
<p>As you can read in the comment, when the process prints "error" to the terminal, the code flow does indeed reach the <code>process.kill()</code> line, but executing that line does not actually kill the process. What am I missing here?</p>
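One likely culprit (an inference, not stated in the post): with <code>shell=True</code>, <code>process.kill()</code> signals the shell that Popen started, not the program the shell launched, so <code>./Process</code> can survive. Launching without a shell, or putting the child in its own process group and signalling the whole group, reaches the real child. A POSIX sketch using a <code>sleep</code> command as a stand-in for <code>./Process</code>:

```python
import os
import signal
import subprocess

# start_new_session=True runs the child in its own session/process group
# (equivalent to calling setsid() in the child before exec).
proc = subprocess.Popen("sleep 30", shell=True, start_new_session=True)

# Signal the entire group, so the shell *and* anything it spawned both get it.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
print(proc.returncode)  # negative: terminated by a signal
```

Alternatively, `subprocess.Popen(["./Process"], shell=False, ...)` makes `proc.pid` the real program, so `process.kill()` then behaves as expected.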
|
<python><multithreading>
|
2024-01-03 21:08:53
| 0
| 3,029
|
MsA
|
77,754,655
| 8,963,300
|
How to statically enforce frozen data classes in Python?
|
<p>I'm trying to write an example where I'd like to use a frozen dataclass instance during type checking and swap it out with a normal dataclass to avoid paying the instantiation cost of frozen dataclasses.</p>
<p>The goal is to ensure that the instance is immutable with the type checker and use a regular dataclass during runtime. Here's the snippet:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import TYPE_CHECKING
from functools import partial

if TYPE_CHECKING:
    frozen = partial(dataclass, frozen=True)
else:
    frozen = dataclass

@frozen
class Foo:
    x: int
    y: int

foo = Foo(1, 2)  # mypy complains about the number of arguments
foo.x = 3  # instead, mypy should complain here
</code></pre>
<p>This works as expected during runtime, but running mypy raises this error. Pyright gives me the same error as well:</p>
<pre><code>foo.py:49: error: Too many arguments for "Foo" [call-arg]
</code></pre>
<p>In this snippet, the type checker can catch the mutation error:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(frozen=True)
class Foo:
    x: int
    y: int

foo = Foo(1, 2)
foo.x = 3  # mypy correctly catches the error here
</code></pre>
<p>So, I'm guessing that the type checker doesn't like when I'm aliasing <code>frozen = dataclass</code> or <code>frozen = partial(...)</code>. How do I annotate this properly so that the type checker understands that it's a dataclass instance and doesn't complain about mismatched argument count?</p>
<p><em>P.S: This is just an exercise. I know turning on <code>dataclass(frozen=True)</code> is way easier, and I shouldn't care about performance in such cases. I was inspired to try this after reading a <a href="https://threeofwands.com/attra-iv-zero-overhead-frozen-attrs-classes/" rel="nofollow noreferrer">blog</a> post by Tin Tvrtković on making attr class instances frozen at compile time.</em></p>
|
<python><mypy><python-typing><python-dataclasses>
|
2024-01-03 21:06:42
| 1
| 1,716
|
rednafi
|
77,754,630
| 1,208,923
|
MyPy gives false alarm on `mutagen.id3.PictureType`
|
<p>I am writing some Python code (to manipulate audio metadata) using the <a href="https://github.com/quodlibet/mutagen/tree/main" rel="nofollow noreferrer">mutagen</a> library. The library does not have type hints; however, I have added type hints to my routines. In particular, I have a function with the following signature:</p>
<pre class="lang-py prettyprint-override"><code>from mutagen import id3
def fun(arg1: str, arg2: id3.PictureType = id3.PictureType.ARTIST):
# the body of the function
</code></pre>
<p>But, MyPy complains about the default argument assignment:</p>
<pre><code>error: Incompatible default for argument "arg2" (default has type "int", argument has type "PictureType")
</code></pre>
<p>I am confused and cannot figure out why this is happening. I checked that:</p>
<pre><code>>>> isinstance(id3.PictureType.ARTIST, id3.PictureType)
True
>>> isinstance(id3.PictureType.ARTIST, int)
True
>>> issubclass(id3.PictureType.ARTIST.__class__, int)
True
>>> inspect.getmro(id3.PictureType.ARTIST.__class__)
(mutagen.id3._specs.PictureType, int, object)
</code></pre>
<p>So, it looks like the type hierarchy is correct (<code>id3.PictureType.ARTIST</code> should be an instance of <code>id3.PictureType</code> which, itself is a subclass of <code>int</code>). Yet, MyPy is assuming <code>id3.PictureType.ARTIST</code> is of type <code>int</code>, why?</p>
<p>Could that be related to the "particular" way mutagen author's implement <code>id3.PictureType</code> as an enum (see <a href="https://github.com/quodlibet/mutagen/blob/f95d3ae19e25e3f0a91061566551843d799317c5/mutagen/id3/_specs.py#L17" rel="nofollow noreferrer">here</a> and <a href="https://github.com/quodlibet/mutagen/blob/f95d3ae19e25e3f0a91061566551843d799317c5/mutagen/_util.py#L329" rel="nofollow noreferrer">here</a>)? And if so, is there a way to fix this (without re-implementing <code>PictureType</code> as a proper <a href="https://docs.python.org/3/library/enum.html" rel="nofollow noreferrer">Python enum</a>)?</p>
|
<python><mypy><python-typing><mutagen>
|
2024-01-03 21:01:19
| 0
| 2,529
|
MikeL
|
77,754,521
| 10,798,917
|
add title to a dataframe, save it then read it in and display result not what I expected
|
<p>I have a simple dataframe that I read in and display. Then I add a title to the dataframe. I save the titled dataframe, then read it in and display it, but the result is NOT what I expected. It repeats the title in each column. Code is shown below.</p>
<pre><code># read in a simple dataframe
csvpath=r'C:\Users\tfuser\Documents\2023 Taxes\test1.csv'
df=pd.read_csv(csvpath)
display(df)
# add a title to the dataframe and display the result
text=' TITLE'
df.columns = pd.MultiIndex.from_product([[text], df.columns])
display(df)
# save the titled dataframe, then read it in and display it
save_path=r'C:\Users\tfuser\Documents\2023 Taxes\saved.csv'
df.to_csv(save_path, index=False)
recovered_df=pd.read_csv(save_path)
display(recovered_df)
</code></pre>
<p>The resulting outputs are shown below:</p>
<pre><code>
A B C
0 A1 B1 C1
1 A2 B2 C2
2 A3 B3 C3
TITLE
A B C
0 A1 B1 C1
1 A2 B2 C2
2 A3 B3 C3
TITLE TITLE.1 TITLE.2
0 A B C
1 A1 B1 C1
2 A2 B2 C2
3 A3 B3 C3
</code></pre>
<p>I know it has something to do with the MultiIndex, but I don't know how to fix it to get a single title as shown in the second output.</p>
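<p>My current guess is that the round trip needs a two-row header on the way back in; a minimal sketch of that idea, using <code>header=[0, 1]</code>, which I believe tells <code>read_csv</code> to rebuild the MultiIndex:</p>

```python
import io

import pandas as pd

df = pd.DataFrame({"A": ["A1", "A2"], "B": ["B1", "B2"]})
df.columns = pd.MultiIndex.from_product([[" TITLE"], df.columns])

buf = io.StringIO()
df.to_csv(buf, index=False)  # to_csv writes one header row per column level
buf.seek(0)

# header=[0, 1] reads both header rows back into a MultiIndex instead of
# mangling the title into "TITLE", "TITLE.1", ...
recovered = pd.read_csv(buf, header=[0, 1])
print(recovered.columns)
```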
|
<python><pandas><dataframe>
|
2024-01-03 20:33:40
| 3
| 8,192
|
Gerry P
|
77,754,272
| 5,867,094
|
Is there any example to customize current Streamlit component
|
<p>In the streamlit doc of component, it says:</p>
<blockquote>
<p>Custom versions of existing Streamlit elements and widgets, such as
st.slider or st.file_uploader.</p>
</blockquote>
<p>But there's no such example of doing it other than creating a brand-new component from scratch. Is there any tutorial that can help me extend an existing component, for example st.file_uploader, by adding custom JavaScript?</p>
|
<python><streamlit>
|
2024-01-03 19:35:34
| 0
| 891
|
Tiancheng Liu
|
77,754,266
| 9,092,669
|
in a delta table, how can i find the number of files per partition?
|
<p>In a Delta table, how can I find the number of files per partition? I'm also looking to find the size of each file. I've tried this code, but I run into the error <code>TypeError: 'JavaPackage' object is not callable</code>:</p>
<pre><code>from delta.tables import *
from pyspark.sql import DataFrame
from pyspark.sql import functions as f

# partition_cols is the list of the table's partition column names
JDeltaLog = spark._jvm.org.apache.spark.sql.delta.DeltaLog
delta_log = JDeltaLog.forTable(spark, path)
all_files_jdf = delta_log.snapshot().allFiles().toDF()
all_files_df = DataFrame(all_files_jdf, spark._wrapped)
partition_counts_df = (
all_files_df
.groupBy([f.col('partitionValues')[key].alias(key) for key in partition_cols])
.count()
)
</code></pre>
|
<python><pandas>
|
2024-01-03 19:34:53
| 0
| 395
|
buttermilk
|
77,754,245
| 15,804,190
|
Dealing with PyArrow decimal128 precision
|
<p>[some example data at the end]
I've just started working with PyArrow, so forgive me if I'm missing something obvious here.</p>
<p>I have a project that I'm updating to (hopefully) better handle calculations on money. Mostly, these calculations are multiplying a normal money amount by a percentage, like <code>9.94 * 0.04</code>, things like that.</p>
<p>I had been using pandas v1.4.x and just had all the money as floats and was not consistent with rounding, which caused headaches. In the example above, I would want <code>9.94 * 0.04 = 0.40</code>, using normal rounding to two digits.</p>
<p>I was going to start forcing <code>decimal.Decimal</code> objects in everywhere instead of floats, when I saw that pyarrow has a builtin <code>decimal128</code> datatype that should work much better with pandas.</p>
<p>So, now I'm getting a lot of the following exception:</p>
<blockquote>
<p>pyarrow.lib.ArrowInvalid: Rescaling Decimal128 value would cause data loss</p>
</blockquote>
<p>I'm also getting changes to precision that, while not raising exceptions, I don't think I want.</p>
<p>For example, I have a pandas dataframe with a column called 'Pay Rate' with a dtype of <code>pa.decimal128(12,2)</code>. When I do <code>df['Pay Rate'] * decimal.Decimal('0.04')</code>, the result is of type <code>pa.decimal128(15,4)</code>. I'm assuming it is merging together the precisions of the two things being multiplied in a way that is reasonable but that I don't want. (Note: If i just do <code>df['Pay Rate'] * 0.04</code>, the result is a <code>double[pyarrow]</code> type.)</p>
<p>I want the end of my transformations here to result in columns that are type <code>decimal128(12,2)</code>, and so I'm also then trying <code>df['my_col'] = df['my_col'].astype(pd.ArrowDtype(pa.decimal128(12,2)))</code>, and that is then sometimes giving me the error above about data loss.</p>
<p>It makes sense to me that there is data loss, because I am indeed telling it to just drop off some decimal places, but what I really want is for it to round first and then drop them.</p>
<p>Is there some switch or function to handle this that I'm missing?</p>
<h2>some example data</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pyarrow as pa
from decimal import Decimal
data = {'col1': {0: Decimal('39.60'), 1: Decimal('39.60'), 2: Decimal('21.60'), 3: Decimal('7.20'), 4: Decimal('18.00'), 5: Decimal('18.00'), 6: Decimal('72.00'), 7: Decimal('30.60'), 8: Decimal('36.00'), 9: Decimal('41.40')}, 'col2': {0: Decimal('0.98'), 1: Decimal('1.00'), 2: Decimal('0.97'), 3: Decimal('0.46'), 4: Decimal('0.52'), 5: Decimal('1.00'), 6: Decimal('1.00'), 7: Decimal('1.00'), 8: Decimal('1.00'), 9: Decimal('1.00')}}
df = pd.DataFrame(data,dtype=pd.ArrowDtype(pa.decimal128(12, 2)))
df['col3'] = df['col1'] * df['col2']
#df['col3'] has a dtype of decimal128(25,4)
df['col3'].astype(pd.ArrowDtype(pa.decimal128(12, 2)))
#raises exception
</code></pre>
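<p>To make the rounding I want concrete, here is the plain-<code>decimal</code> behaviour I am after (round half up to two places before narrowing the scale); I am essentially looking for the pyarrow equivalent of this:</p>

```python
from decimal import Decimal, ROUND_HALF_UP

def money(x: Decimal) -> Decimal:
    # Round to two places first, so narrowing the scale afterwards loses nothing
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(money(Decimal("9.94") * Decimal("0.04")))   # 0.40
print(money(Decimal("39.60") * Decimal("0.98")))  # 38.81
```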
|
<python><pandas><dataframe><decimal><pyarrow>
|
2024-01-03 19:30:47
| 1
| 3,163
|
scotscotmcc
|
77,754,161
| 1,744,491
|
Polars getting ColumnNotFound when use the when() method
|
<p>I'm trying to use the <code>contains</code> method together with <code>when</code> in Polars. However, I'm getting the following annoying error:</p>
<pre><code>Exception has occurred: ColumnNotFoundError
Foo message.
</code></pre>
<p>I read the documentation, but the examples are quite simple and don't cover a situation like this. Here follows my code sample:</p>
<pre><code>df = df.with_columns(
pl.when(pl.col("Foo").str.contains("Foo message. Bar Message."))
.then("Foo message")
.alias("Foo_Column")
)
</code></pre>
<p>Any help is appreciated.</p>
|
<python><dataframe><python-polars>
|
2024-01-03 19:14:31
| 1
| 670
|
Guilherme Noronha
|
77,754,139
| 2,183,336
|
Python reset or reuse custom range class
|
<p>My example custom range (a generator function) is not reset or "reusable" like the built-in range. How can I make it so?</p>
<pre><code>def exampleCustomRange(stopExclusive):
    for i in range(stopExclusive):
        yield i
</code></pre>
<pre><code>>>> builtinRange = range(3)
>>> [x for x in builtinRange]
[0, 1, 2]
>>> [x for x in builtinRange]
[0, 1, 2]  # See how this repeats on a second try? It is reusable or reset.
>>> customRange = exampleCustomRange(3)
>>> [x for x in customRange]
[0, 1, 2]
>>> [x for x in customRange]
[]  # See how this is now empty? It is not reusable or reset.
</code></pre>
<p>The second use of customRange in the REPL session above does not repeat like the built-in range does. I want it to match the behavior of builtinRange.</p>
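<p>My current guess (and I am unsure whether it is the idiomatic approach) is that I need an iterable class whose <code>__iter__</code> returns a fresh generator each time, something like:</p>

```python
class CustomRange:
    def __init__(self, stop_exclusive):
        self.stop_exclusive = stop_exclusive

    def __iter__(self):
        # Each for-loop calls __iter__ again, producing a fresh generator,
        # which is what would make the object reusable like the built-in range
        for i in range(self.stop_exclusive):
            yield i

r = CustomRange(3)
print([x for x in r])  # [0, 1, 2]
print([x for x in r])  # [0, 1, 2]
```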
|
<python><range><generator>
|
2024-01-03 19:09:12
| 2
| 665
|
user2183336
|
77,754,131
| 2,261,950
|
MacOs Tkinter - App terminating 'Invalid parameter not satisfying: aString != nil'
|
<p>When I launch my app via the CLI, it works without issue:</p>
<blockquote>
<p>./org_chart.app/Contents/MacOS/org_chart</p>
</blockquote>
<p>However, when I launch it via double-click, I am met with this error:</p>
<pre><code>*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: aString != nil'
</code></pre>
<p>I used py2app to build the app. I'm not sure where to begin debugging this; could someone point me in the right direction?</p>
<p>Thanks for your help!</p>
<p>Here's the full code for the small app:</p>
<pre><code>import os, shutil
import tkinter as tk
from tkinter import filedialog, messagebox, Tk, Canvas, Entry, Text, Button, PhotoImage
from tkinter import font as tkFont
def build_org_chart():
print("im making a chart")
return 'Done chart created!'
if __name__ == "__main__":
window = Tk()
window.title("Org Chart Spreadsheet Generator")
# Variables to store file paths
window.geometry("1012x506")
window.configure(bg = "#00403D")
# Define the font properties
my_font = tkFont.Font(family="Montserrat SemiBold", size=16, weight="normal")
canvas = Canvas(
window,
bg = "#00403D",
height = 506,
width = 1012,
bd = 0,
highlightthickness = 0,
relief = "ridge"
)
canvas.place(x = 0, y = 0)
canvas.create_rectangle(
308.0,
0.0,
1012.0,
506.0,
fill="#FFFFFF",
outline="")
canvas.create_text(
320.0,
18.0,
anchor="nw",
text="Org Chart",
fill="#000000",
font=("Montserrat Bold", 64 * -1)
)
window.resizable(False, False)
window.mainloop()
</code></pre>
<p>Even with the app stripped down this small, it still crashes, so I'm thinking it could be something in the setup file too; I've added that code below.</p>
<pre><code>import os
from setuptools import setup
def list_files(directory):
base_path = os.path.abspath(directory)
paths = []
for root, directories, filenames in os.walk(base_path):
for filename in filenames:
# Exclude .DS_Store files if you are on macOS
if filename != '.DS_Store':
paths.append(os.path.join(root, filename))
return paths
# Your assets folder
assets_folder = 'assets'
# Listing all files in the assets folder
assets_files = list_files(assets_folder)
APP = ['org_chart_min.py']
DATA_FILES = [('assets', assets_files)]
OPTIONS = {
'argv_emulation': True,
'packages': ['pandas', 'openpyxl','xlsxwriter'],
'plist': {
'CFBundleName': '_org_chart',
'CFBundleDisplayName': ' Org Chart',
'CFBundleGetInfoString': "Create a spreadsheet that populates our Lucid org chart template",
'CFBundleIdentifier': 'com.yourdomain.orgchart',
'CFBundleVersion': '0.1',
'CFBundleShortVersionString': '0.1',
'NSRequiresAquaSystemAppearance': True
},
'iconfile': 'org_chart.icns',
}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
</code></pre>
|
<python><macos><tkinter><py2app>
|
2024-01-03 19:07:12
| 3
| 2,163
|
AlexW
|
77,754,053
| 2,069,064
|
The type Type[Array] is not generic and not indexable
|
<p>This program:</p>
<pre class="lang-py prettyprint-override"><code>class Array:
def __init__(self, underlying):
self.underlying = underlying
def __class_getitem__(cls, key):
return Array(key)
def __getitem__(self, key):
pass
def __str__(self):
return f"Array({self.underlying=})"
def foo(name, kind):
pass
foo(name="x", kind=Array["int"])
</code></pre>
<p>is flagged by <code>mypy</code>:</p>
<pre><code>x.py:17: error: The type "Type[Array]" is not generic and not indexable [misc]
</code></pre>
<p>The <code>__getitem__</code> is something I added just to see if it would resolve the issue (it did not). I'm not intending to use <code>Array</code> here as a type (in the <code>typing</code> sense), just for cute syntax to pass an array type somewhere else.</p>
<p>What does this <code>mypy</code> error mean, and how can I fix it (preferably with something other than <code># type: ignore</code> comments)?</p>
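<p>For what it's worth, the runtime behaviour is exactly what I want; the class-level subscription hands back an <code>Array</code> instance:</p>

```python
# Same class as above, trimmed to the relevant parts: at runtime the
# subscription works fine -- only mypy objects.
class Array:
    def __init__(self, underlying):
        self.underlying = underlying

    def __class_getitem__(cls, key):
        return Array(key)

a = Array["int"]
print(type(a).__name__, a.underlying)  # Array int
```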
|
<python><python-3.x><mypy>
|
2024-01-03 18:49:42
| 2
| 311,202
|
Barry
|
77,753,860
| 1,686,628
|
Python subprocess not printing real time
|
<p>test.py</p>
<pre><code>import time
for x in range(0, 10, 1):
print(x)
time.sleep(1)
</code></pre>
<p><code>python test.py</code> prints in real time, i.e. a number every second:</p>
<pre><code>0
1
2
3
4
5
6
7
8
9
</code></pre>
<p>Now, I wanted to run <code>test.py</code> in another script called <code>run.py</code> via subprocess like below</p>
<p>run.py</p>
<pre><code>import subprocess
from subprocess import PIPE, STDOUT
proc = subprocess.Popen(
'python test.py',
stdout=PIPE,
stderr=STDOUT,
shell=True,
encoding="utf-8",
errors="replace",
universal_newlines=True,
text=True,
bufsize=1,
)
while (realtime_output := proc.stdout.readline()) != "" or proc.poll() is None:
print(realtime_output.strip(), flush=True)
</code></pre>
<p>When I run <code>python run.py</code>, the output is not in real time (no number is printed every second).</p>
<p>Strangely, if I modify the code in <code>test.py</code> to <code>print(x, flush=True)</code>, then <code>python run.py</code> prints every second.</p>
<p>Is there a way to have real-time output via <code>subprocess</code> in <code>run.py</code> without modifying <code>test.py</code>'s print statement?</p>
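<p>My working theory is that <code>test.py</code>'s stdout switches to block buffering when attached to a pipe, and that <code>flush=True</code> merely defeats that. One approach I am considering (assuming the child is CPython and honours the flag) is forcing unbuffered output from the parent side with <code>-u</code>; a self-contained sketch, with the child script inlined via <code>-c</code> so it runs standalone:</p>

```python
import subprocess
import sys
from subprocess import PIPE, STDOUT

# Stand-in for test.py, inlined so this snippet is self-contained
child_code = "import time\nfor x in range(3):\n    print(x)\n    time.sleep(0.2)\n"

# "-u" makes the child's stdout unbuffered (PYTHONUNBUFFERED=1 in the
# environment is equivalent), so the parent sees each line as it is printed
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdout=PIPE,
    stderr=STDOUT,
    encoding="utf-8",
)
for line in proc.stdout:
    print(line, end="", flush=True)
proc.wait()
```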
|
<python><python-3.x>
|
2024-01-03 18:07:21
| 1
| 12,532
|
ealeon
|
77,753,855
| 4,199,253
|
return a subset of dataframe based on a condition resulting from groupby python
|
<p>I have a data frame like below:</p>
<pre><code>date|point|agent
2023-10-02|A|agent1
2023-10-02|A|agent2
2023-10-05|B|agent3
2023-10-05|B|agent2
2023-10-02|C|agent1
2023-10-02|C|agent2
2023-10-02|C|agent3
</code></pre>
<p>On each day, at a specific point, there should be only two agents. There are cases with more than two, and I want to return the rows that belong to those groups.</p>
<p>I used groupby to first count:</p>
<pre><code>df.groupby(['point','date'])['agent'].nunique()>2
</code></pre>
<p>I can use</p>
<pre><code>df['agent_count'] = df.groupby(['point','date'])['agent'].transform('nunique')
</code></pre>
<p>and then get the rows that have more than 2. But is there another way, without storing redundant data?
I tried <code>loc</code>, <code>iloc</code>, and <code>where</code>, and each gives me lots of errors. I am looking for an efficient way to return the rows without adding the counts to the dataframe. I explored questions here for two hours but none of them worked.</p>
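<p>To make my attempt concrete, the closest I have is using the <code>transform</code> result directly as a boolean mask, so the count is never added to the dataframe; I just don't know whether this is the efficient/idiomatic way:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date":  ["2023-10-02", "2023-10-02", "2023-10-05", "2023-10-05",
              "2023-10-02", "2023-10-02", "2023-10-02"],
    "point": ["A", "A", "B", "B", "C", "C", "C"],
    "agent": ["agent1", "agent2", "agent3", "agent2",
              "agent1", "agent2", "agent3"],
})

# Same transform as above, used directly as a mask instead of being stored
mask = df.groupby(["point", "date"])["agent"].transform("nunique") > 2
print(df[mask])  # the three rows for point C on 2023-10-02
```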
|
<python><pandas><dataframe><group-by>
|
2024-01-03 18:05:53
| 2
| 1,034
|
GeoBeez
|
77,753,839
| 5,036,928
|
NumPy row operations that depend on other rows/columns
|
<h2>Problem</h2>
<p>I am trying to avoid a for-loop in NumPy (which is quite messy and obviously performance-prohibiting). My challenge is that operations on each row depend on other rows. That is:</p>
<p>I have a (very large) array with 8 columns. In each row I need to find the indices of the 2 (or some arbitrary number N) rows whose values in the first column are closest (to the first column of the current row) while ensuring that the values in the second column and fourth column are identical (across all 3 rows).</p>
<h2>Desired Output</h2>
<p>So, I should end up with an array that is the same length as my large initial array by 2 (or N) where each row contains the indices of the other rows in the "master" array that meet the conditions above.</p>
<h2>Current Attempt</h2>
<p>In my original for-loop I was broadcasting because I had filtered my array to be small enough to do so.</p>
<p>My attempt is now as follows:</p>
<pre><code>def query_trks(self, FULL, trk, t, v, N):
filt = np.where((FULL[:,1]==t, FULL[:,4]==v))[0]
trks = FULL[filt, 0]
idxs = np.abs(trks[:, None] - trks).argsort(axis=0)[:N]
return idxs.T.astype(int)
def create_idxs(self, DATA):
self.query_trks(DATA, DATA[:,0], DATA[:,1], DATA[:,4], 3)
</code></pre>
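<p>One thing I already suspect about the <code>filt</code> line above: passing a tuple to <code>np.where</code> doesn't combine the two conditions; an element-wise <code>&amp;</code> is presumably what I meant, e.g.:</p>

```python
import numpy as np

# Toy 5-column array just to illustrate the condition logic (not my real data)
FULL = np.array([
    [0.5, 1.0, 0.0, 0.0, 2.0],
    [0.7, 1.0, 0.0, 0.0, 2.0],
    [0.9, 2.0, 0.0, 0.0, 2.0],
])

t, v = 1.0, 2.0
# Combine both row conditions element-wise, then take the matching indices
filt = np.where((FULL[:, 1] == t) & (FULL[:, 4] == v))[0]
print(filt)  # [0 1]
```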
<p>I think I might still be able to broadcast but my issue is basically getting a filtered array (for the second and fourth columns) for each row of the "master" array. Right now, the entire length of the array is returned because of the elementwise comparison.</p>
<p>How can I obtain the desired output? Am I on the right track at least?</p>
<p>Thanks in advance!</p>
<p>EDIT:
Below is a minimized example of the "master" array, with the right-most column depicting the pattern for the desired output.</p>
<pre><code>1.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 2.39604e+03 2.39604e+03 0.00000e+00 1.59155e-02 0.00000e+00 [27, 54]
1.00000e+00 1.00000e+00 -1.63687e+07 0.00000e+00 2.39604e+03 2.39753e+03 0.00000e+00 1.59280e-02 2.03148e-06 [29, 55]
1.00000e+00 2.00000e+00 -1.63639e+07 0.00000e+00 2.39604e+03 2.40051e+03 0.00000e+00 1.59406e-02 6.77694e-07 [30, 56]
1.00000e+00 3.00000e+00 -1.63567e+07 0.00000e+00 2.39604e+03 2.40497e+03 0.00000e+00 1.59532e-02 3.39114e-07 [31, 57]
1.00000e+00 4.00000e+00 -1.63471e+07 0.00000e+00 2.39604e+03 2.41093e+03 0.00000e+00 1.59658e-02 2.03629e-07 ...
1.00000e+00 5.00000e+00 -1.63350e+07 0.00000e+00 2.39604e+03 2.41839e+03 0.00000e+00 1.59784e-02 1.35860e-07
1.00000e+00 6.00000e+00 -1.63206e+07 0.00000e+00 2.39604e+03 2.42735e+03 0.00000e+00 1.59910e-02 9.71192e-08
1.00000e+00 7.00000e+00 -1.63036e+07 0.00000e+00 2.39604e+03 2.43783e+03 0.00000e+00 1.60036e-02 7.28968e-08
1.00000e+00 8.00000e+00 -1.62842e+07 0.00000e+00 2.39604e+03 2.44982e+03 0.00000e+00 1.60162e-02 5.67421e-08
1.00000e+00 9.00000e+00 -1.62622e+07 0.00000e+00 2.39604e+03 2.46335e+03 0.00000e+00 1.60288e-02 4.54294e-08
1.00000e+00 1.00000e+01 -1.62376e+07 0.00000e+00 2.39604e+03 2.47842e+03 0.00000e+00 1.60415e-02 3.71987e-08
1.00000e+00 1.10000e+01 -1.62104e+07 0.00000e+00 2.39604e+03 2.49505e+03 0.00000e+00 1.60541e-02 3.10231e-08
1.00000e+00 1.20000e+01 -1.61806e+07 0.00000e+00 2.39604e+03 2.51325e+03 0.00000e+00 1.60668e-02 2.62708e-08
1.00000e+00 1.30000e+01 -1.61481e+07 0.00000e+00 2.39604e+03 2.53305e+03 0.00000e+00 1.60795e-02 2.25354e-08
1.00000e+00 1.40000e+01 -1.61127e+07 0.00000e+00 2.39604e+03 2.55445e+03 0.00000e+00 1.60922e-02 1.95458e-08
1.00000e+00 1.50000e+01 -1.60746e+07 0.00000e+00 2.39604e+03 2.57748e+03 0.00000e+00 1.61049e-02 1.71157e-08
1.00000e+00 1.60000e+01 -1.60336e+07 0.00000e+00 2.39604e+03 2.60216e+03 0.00000e+00 1.61176e-02 1.51137e-08
1.00000e+00 1.70000e+01 -1.59895e+07 0.00000e+00 2.39604e+03 2.62851e+03 0.00000e+00 1.61303e-02 1.34446e-08
1.00000e+00 1.80000e+01 -1.59425e+07 0.00000e+00 2.39604e+03 2.65658e+03 0.00000e+00 1.61430e-02 1.20385e-08
1.00000e+00 1.90000e+01 -1.58923e+07 0.00000e+00 2.39604e+03 2.68637e+03 0.00000e+00 1.61557e-02 1.08427e-08
1.00000e+00 2.00000e+01 -1.58389e+07 0.00000e+00 2.39604e+03 2.71794e+03 0.00000e+00 1.61685e-02 9.81735e-09
1.00000e+00 2.10000e+01 -1.57822e+07 0.00000e+00 2.39604e+03 2.75130e+03 0.00000e+00 1.61812e-02 8.93139e-09
1.00000e+00 2.20000e+01 -1.57220e+07 0.00000e+00 2.39604e+03 2.78651e+03 0.00000e+00 1.61940e-02 8.16063e-09
1.00000e+00 2.30000e+01 -1.56584e+07 0.00000e+00 2.39604e+03 2.82360e+03 0.00000e+00 1.62068e-02 7.48589e-09
1.00000e+00 2.40000e+01 -1.55911e+07 0.00000e+00 2.39604e+03 2.86261e+03 0.00000e+00 1.62196e-02 6.89183e-09
1.00000e+00 2.50000e+01 -1.55200e+07 0.00000e+00 2.39604e+03 2.90361e+03 0.00000e+00 1.62324e-02 6.36604e-09
1.00000e+00 2.50000e+01 nan nan 2.39604e+03 nan nan nan 6.36604e-09
2.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 2.39604e+03 1.69426e+03 1.69426e+03 1.59155e-02 0.00000e+00 [0, 54]
2.00000e+00 1.00000e+00 -1.63694e+07 1.69426e+03 2.39604e+03 1.69575e+03 1.69426e+03 1.59280e-02 2.03148e-06 [1, 55]
2.00000e+00 2.00000e+00 -1.63660e+07 5.08278e+03 2.39604e+03 1.69872e+03 1.69426e+03 1.59406e-02 6.77694e-07 [2, 56]
2.00000e+00 3.00000e+00 -1.63609e+07 1.01656e+04 2.39604e+03 1.70319e+03 1.69426e+03 1.59532e-02 3.39114e-07 [3, 57]
2.00000e+00 4.00000e+00 -1.63541e+07 1.69426e+04 2.39604e+03 1.70914e+03 1.69425e+03 1.59658e-02 2.03629e-07 ...
2.00000e+00 5.00000e+00 -1.63456e+07 2.54139e+04 2.39604e+03 1.71659e+03 1.69425e+03 1.59784e-02 1.35860e-07
2.00000e+00 6.00000e+00 -1.63353e+07 3.55794e+04 2.39604e+03 1.72554e+03 1.69423e+03 1.59910e-02 9.71192e-08
2.00000e+00 7.00000e+00 -1.63233e+07 4.74391e+04 2.39604e+03 1.73600e+03 1.69421e+03 1.60036e-02 7.28968e-08
2.00000e+00 8.00000e+00 -1.63094e+07 6.09929e+04 2.39604e+03 1.74797e+03 1.69417e+03 1.60162e-02 5.67421e-08
2.00000e+00 9.00000e+00 -1.62937e+07 7.62407e+04 2.39604e+03 1.76145e+03 1.69412e+03 1.60288e-02 4.54294e-08
2.00000e+00 1.00000e+01 -1.62762e+07 9.31823e+04 2.39604e+03 1.77647e+03 1.69405e+03 1.60415e-02 3.71987e-08
2.00000e+00 1.10000e+01 -1.62568e+07 1.11817e+05 2.39604e+03 1.79302e+03 1.69396e+03 1.60541e-02 3.10231e-08
2.00000e+00 1.20000e+01 -1.62353e+07 1.32146e+05 2.39604e+03 1.81111e+03 1.69383e+03 1.60668e-02 2.62708e-08
2.00000e+00 1.30000e+01 -1.62119e+07 1.54167e+05 2.39604e+03 1.83077e+03 1.69367e+03 1.60795e-02 2.25354e-08
2.00000e+00 1.40000e+01 -1.61864e+07 1.77879e+05 2.39604e+03 1.85200e+03 1.69347e+03 1.60922e-02 1.95458e-08
2.00000e+00 1.50000e+01 -1.61588e+07 2.03283e+05 2.39604e+03 1.87481e+03 1.69322e+03 1.61049e-02 1.71157e-08
2.00000e+00 1.60000e+01 -1.61290e+07 2.30377e+05 2.39604e+03 1.89923e+03 1.69291e+03 1.61176e-02 1.51137e-08
2.00000e+00 1.70000e+01 -1.60970e+07 2.59160e+05 2.39604e+03 1.92527e+03 1.69254e+03 1.61303e-02 1.34446e-08
2.00000e+00 1.80000e+01 -1.60626e+07 2.89630e+05 2.39604e+03 1.95295e+03 1.69210e+03 1.61430e-02 1.20385e-08
2.00000e+00 1.90000e+01 -1.60257e+07 3.21785e+05 2.39604e+03 1.98229e+03 1.69157e+03 1.61557e-02 1.08427e-08
2.00000e+00 2.00000e+01 -1.59864e+07 3.55622e+05 2.39604e+03 2.01331e+03 1.69095e+03 1.61685e-02 9.81735e-09
2.00000e+00 2.10000e+01 -1.59445e+07 3.91140e+05 2.39604e+03 2.04604e+03 1.69022e+03 1.61812e-02 8.93139e-09
2.00000e+00 2.20000e+01 -1.58998e+07 4.28334e+05 2.39604e+03 2.08050e+03 1.68937e+03 1.61940e-02 8.16063e-09
2.00000e+00 2.30000e+01 -1.58524e+07 4.67201e+05 2.39604e+03 2.11672e+03 1.68840e+03 1.62068e-02 7.48589e-09
2.00000e+00 2.40000e+01 -1.58021e+07 5.07736e+05 2.39604e+03 2.15474e+03 1.68728e+03 1.62196e-02 6.89183e-09
2.00000e+00 2.50000e+01 -1.57487e+07 5.49934e+05 2.39604e+03 2.19459e+03 1.68600e+03 1.62324e-02 6.36604e-09
2.00000e+00 2.50000e+01 nan nan 2.39604e+03 nan nan nan 6.36604e-09
3.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 2.39604e+03 1.46715e-13 2.39604e+03 1.59155e-02 0.00000e+00
3.00000e+00 1.00000e+00 -1.63711e+07 2.39604e+03 2.39604e+03 1.48721e+00 2.39604e+03 1.59280e-02 2.03148e-06
3.00000e+00 2.00000e+00 -1.63711e+07 7.18813e+03 2.39604e+03 4.46162e+00 2.39604e+03 1.59406e-02 6.77694e-07
3.00000e+00 3.00000e+00 -1.63711e+07 1.43763e+04 2.39604e+03 8.92324e+00 2.39604e+03 1.59532e-02 3.39114e-07
3.00000e+00 4.00000e+00 -1.63710e+07 2.39604e+04 2.39604e+03 1.48721e+01 2.39604e+03 1.59658e-02 2.03629e-07
3.00000e+00 5.00000e+00 -1.63710e+07 3.59407e+04 2.39604e+03 2.23081e+01 2.39603e+03 1.59784e-02 1.35860e-07
3.00000e+00 6.00000e+00 -1.63709e+07 5.03169e+04 2.39604e+03 3.12314e+01 2.39601e+03 1.59910e-02 9.71191e-08
3.00000e+00 7.00000e+00 -1.63707e+07 6.70890e+04 2.39604e+03 4.16419e+01 2.39597e+03 1.60036e-02 7.28965e-08
3.00000e+00 8.00000e+00 -1.63704e+07 8.62570e+04 2.39604e+03 5.35398e+01 2.39593e+03 1.60162e-02 5.67417e-08
3.00000e+00 9.00000e+00 -1.63700e+07 1.07821e+05 2.39604e+03 6.69252e+01 2.39585e+03 1.60288e-02 4.54288e-08
3.00000e+00 1.00000e+01 -1.63694e+07 1.31780e+05 2.39604e+03 8.17983e+01 2.39576e+03 1.60415e-02 3.71979e-08
3.00000e+00 1.10000e+01 -1.63686e+07 1.58134e+05 2.39604e+03 9.81593e+01 2.39563e+03 1.60541e-02 3.10222e-08
3.00000e+00 1.20000e+01 -1.63675e+07 1.86882e+05 2.39604e+03 1.16009e+02 2.39545e+03 1.60668e-02 2.62696e-08
3.00000e+00 1.30000e+01 -1.63661e+07 2.18025e+05 2.39604e+03 1.35347e+02 2.39523e+03 1.60795e-02 2.25338e-08
3.00000e+00 1.40000e+01 -1.63644e+07 2.51560e+05 2.39604e+03 1.56175e+02 2.39495e+03 1.60922e-02 1.95439e-08
3.00000e+00 1.50000e+01 -1.63622e+07 2.87487e+05 2.39604e+03 1.78493e+02 2.39461e+03 1.61049e-02 1.71135e-08
3.00000e+00 1.60000e+01 -1.63595e+07 3.25804e+05 2.39604e+03 2.02303e+02 2.39419e+03 1.61176e-02 1.51111e-08
3.00000e+00 1.70000e+01 -1.63563e+07 3.66509e+05 2.39604e+03 2.27607e+02 2.39369e+03 1.61303e-02 1.34416e-08
3.00000e+00 1.80000e+01 -1.63525e+07 4.09601e+05 2.39604e+03 2.54404e+02 2.39309e+03 1.61430e-02 1.20350e-08
3.00000e+00 1.90000e+01 -1.63479e+07 4.55077e+05 2.39604e+03 2.82699e+02 2.39238e+03 1.61557e-02 1.08388e-08
3.00000e+00 2.00000e+01 -1.63425e+07 5.02932e+05 2.39604e+03 3.12493e+02 2.39155e+03 1.61685e-02 9.81292e-09
3.00000e+00 2.10000e+01 -1.63363e+07 5.53165e+05 2.39604e+03 3.43789e+02 2.39059e+03 1.61812e-02 8.92644e-09
3.00000e+00 2.20000e+01 -1.63291e+07 6.05770e+05 2.39604e+03 3.76590e+02 2.38948e+03 1.61940e-02 8.15512e-09
3.00000e+00 2.30000e+01 -1.63208e+07 6.60743e+05 2.39604e+03 4.10901e+02 2.38820e+03 1.62068e-02 7.47980e-09
3.00000e+00 2.40000e+01 -1.63114e+07 7.18077e+05 2.39604e+03 4.46726e+02 2.38675e+03 1.62196e-02 6.88512e-09
3.00000e+00 2.50000e+01 -1.63007e+07 7.77767e+05 2.39604e+03 4.84070e+02 2.38511e+03 1.62324e-02 6.35869e-09
3.00000e+00 2.50000e+01 nan nan 2.39604e+03 nan nan nan 6.35869e-09
4.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 2.39604e+03 -1.69426e+03 1.69426e+03 1.59155e-02 0.00000e+00
4.00000e+00 1.00000e+00 -1.63728e+07 1.69426e+03 2.39604e+03 -1.69277e+03 1.69426e+03 1.59280e-02 2.03148e-06
4.00000e+00 2.00000e+00 -1.63762e+07 5.08278e+03 2.39604e+03 -1.68980e+03 1.69426e+03 1.59406e-02 6.77694e-07
4.00000e+00 3.00000e+00 -1.63812e+07 1.01656e+04 2.39604e+03 -1.68534e+03 1.69426e+03 1.59532e-02 3.39114e-07
4.00000e+00 4.00000e+00 -1.63880e+07 1.69426e+04 2.39604e+03 -1.67940e+03 1.69425e+03 1.59658e-02 2.03629e-07
4.00000e+00 5.00000e+00 -1.63964e+07 2.54139e+04 2.39604e+03 -1.67198e+03 1.69425e+03 1.59784e-02 1.35860e-07
4.00000e+00 6.00000e+00 -1.64065e+07 3.55794e+04 2.39604e+03 -1.66308e+03 1.69423e+03 1.59910e-02 9.71192e-08
4.00000e+00 7.00000e+00 -1.64181e+07 4.74391e+04 2.39604e+03 -1.65272e+03 1.69421e+03 1.60036e-02 7.28968e-08
4.00000e+00 8.00000e+00 -1.64314e+07 6.09929e+04 2.39604e+03 -1.64089e+03 1.69418e+03 1.60162e-02 5.67421e-08
4.00000e+00 9.00000e+00 -1.64462e+07 7.62407e+04 2.39604e+03 -1.62760e+03 1.69413e+03 1.60288e-02 4.54294e-08
4.00000e+00 1.00000e+01 -1.64626e+07 9.31823e+04 2.39604e+03 -1.61286e+03 1.69406e+03 1.60415e-02 3.71987e-08
4.00000e+00 1.10000e+01 -1.64804e+07 1.11817e+05 2.39604e+03 -1.59669e+03 1.69397e+03 1.60541e-02 3.10232e-08
4.00000e+00 1.20000e+01 -1.64997e+07 1.32146e+05 2.39604e+03 -1.57908e+03 1.69385e+03 1.60668e-02 2.62709e-08
4.00000e+00 1.30000e+01 -1.65203e+07 1.54167e+05 2.39604e+03 -1.56005e+03 1.69369e+03 1.60795e-02 2.25354e-08
4.00000e+00 1.40000e+01 -1.65423e+07 1.77880e+05 2.39604e+03 -1.53960e+03 1.69350e+03 1.60922e-02 1.95458e-08
4.00000e+00 1.50000e+01 -1.65656e+07 2.03284e+05 2.39604e+03 -1.51776e+03 1.69327e+03 1.61049e-02 1.71158e-08
4.00000e+00 1.60000e+01 -1.65900e+07 2.30379e+05 2.39604e+03 -1.49452e+03 1.69298e+03 1.61176e-02 1.51138e-08
4.00000e+00 1.70000e+01 -1.66157e+07 2.59162e+05 2.39604e+03 -1.46991e+03 1.69264e+03 1.61303e-02 1.34447e-08
4.00000e+00 1.80000e+01 -1.66424e+07 2.89634e+05 2.39604e+03 -1.44393e+03 1.69224e+03 1.61430e-02 1.20386e-08
4.00000e+00 1.90000e+01 -1.66700e+07 3.21791e+05 2.39604e+03 -1.41660e+03 1.69176e+03 1.61557e-02 1.08428e-08
4.00000e+00 2.00000e+01 -1.66987e+07 3.55631e+05 2.39604e+03 -1.38793e+03 1.69121e+03 1.61685e-02 9.81749e-09
4.00000e+00 2.10000e+01 -1.67281e+07 3.91153e+05 2.39604e+03 -1.35793e+03 1.69057e+03 1.61812e-02 8.93157e-09
4.00000e+00 2.20000e+01 -1.67583e+07 4.28354e+05 2.39604e+03 -1.32662e+03 1.68984e+03 1.61940e-02 8.16085e-09
4.00000e+00 2.30000e+01 -1.67892e+07 4.67230e+05 2.39604e+03 -1.29401e+03 1.68900e+03 1.62068e-02 7.48616e-09
4.00000e+00 2.40000e+01 -1.68207e+07 5.07777e+05 2.39604e+03 -1.26011e+03 1.68806e+03 1.62196e-02 6.89215e-09
4.00000e+00 2.50000e+01 -1.68526e+07 5.49992e+05 2.39604e+03 -1.22494e+03 1.68700e+03 1.62324e-02 6.36643e-09
4.00000e+00 2.50000e+01 nan nan 2.39604e+03 nan nan nan 6.36643e-09
5.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 2.39604e+03 -2.39604e+03 2.93431e-13 1.59155e-02 0.00000e+00
5.00000e+00 1.00000e+00 -1.63735e+07 2.93431e-13 2.39604e+03 -2.39456e+03 2.93431e-13 1.59280e-02 2.03148e-06
5.00000e+00 2.00000e+00 -1.63783e+07 8.80292e-13 2.39604e+03 -2.39158e+03 2.93431e-13 1.59406e-02 6.77694e-07
5.00000e+00 3.00000e+00 -1.63854e+07 1.76058e-12 2.39604e+03 -2.38713e+03 2.93430e-13 1.59532e-02 3.39114e-07
5.00000e+00 4.00000e+00 -1.63950e+07 2.93431e-12 2.39604e+03 -2.38119e+03 2.93430e-13 1.59658e-02 2.03629e-07
5.00000e+00 5.00000e+00 -1.64069e+07 4.40146e-12 2.39604e+03 -2.37377e+03 2.93429e-13 1.59784e-02 1.35860e-07
5.00000e+00 6.00000e+00 -1.64212e+07 6.16204e-12 2.39604e+03 -2.36489e+03 2.93426e-13 1.59910e-02 9.71192e-08
5.00000e+00 7.00000e+00 -1.64378e+07 8.21604e-12 2.39604e+03 -2.35454e+03 2.93422e-13 1.60036e-02 7.28968e-08
5.00000e+00 8.00000e+00 -1.64567e+07 1.05634e-11 2.39604e+03 -2.34274e+03 2.93416e-13 1.60162e-02 5.67421e-08
5.00000e+00 9.00000e+00 -1.64778e+07 1.32042e-11 2.39604e+03 -2.32949e+03 2.93408e-13 1.60288e-02 4.54294e-08
5.00000e+00 1.00000e+01 -1.65012e+07 1.61384e-11 2.39604e+03 -2.31481e+03 2.93396e-13 1.60415e-02 3.71987e-08
5.00000e+00 1.10000e+01 -1.65267e+07 1.93658e-11 2.39604e+03 -2.29871e+03 2.93380e-13 1.60541e-02 3.10232e-08
5.00000e+00 1.20000e+01 -1.65544e+07 2.28865e-11 2.39604e+03 -2.28120e+03 2.93360e-13 1.60668e-02 2.62709e-08
5.00000e+00 1.30000e+01 -1.65842e+07 2.67003e-11 2.39604e+03 -2.26229e+03 2.93334e-13 1.60795e-02 2.25354e-08
5.00000e+00 1.40000e+01 -1.66160e+07 3.08072e-11 2.39604e+03 -2.24200e+03 2.93301e-13 1.60922e-02 1.95458e-08
5.00000e+00 1.50000e+01 -1.66498e+07 3.52071e-11 2.39604e+03 -2.22035e+03 2.93261e-13 1.61049e-02 1.71158e-08
5.00000e+00 1.60000e+01 -1.66855e+07 3.98996e-11 2.39604e+03 -2.19734e+03 2.93212e-13 1.61176e-02 1.51138e-08
5.00000e+00 1.70000e+01 -1.67231e+07 4.48847e-11 2.39604e+03 -2.17300e+03 2.93154e-13 1.61303e-02 1.34447e-08
5.00000e+00 1.80000e+01 -1.67624e+07 5.01621e-11 2.39604e+03 -2.14735e+03 2.93085e-13 1.61430e-02 1.20386e-08
5.00000e+00 1.90000e+01 -1.68035e+07 5.57315e-11 2.39604e+03 -2.12040e+03 2.93005e-13 1.61557e-02 1.08428e-08
5.00000e+00 2.00000e+01 -1.68462e+07 6.15925e-11 2.39604e+03 -2.09216e+03 2.92911e-13 1.61685e-02 9.81749e-09
5.00000e+00 2.10000e+01 -1.68904e+07 6.77448e-11 2.39604e+03 -2.06267e+03 2.92803e-13 1.61812e-02 8.93157e-09
5.00000e+00 2.20000e+01 -1.69361e+07 7.41878e-11 2.39604e+03 -2.03193e+03 2.92680e-13 1.61940e-02 8.16085e-09
5.00000e+00 2.30000e+01 -1.69832e+07 8.09211e-11 2.39604e+03 -1.99997e+03 2.92540e-13 1.62068e-02 7.48616e-09
5.00000e+00 2.40000e+01 -1.70316e+07 8.79439e-11 2.39604e+03 -1.96680e+03 2.92382e-13 1.62196e-02 6.89215e-09
5.00000e+00 2.50000e+01 -1.70812e+07 9.52557e-11 2.39604e+03 -1.93245e+03 2.92204e-13 1.62324e-02 6.36643e-09
5.00000e+00 2.50000e+01 nan nan 2.39604e+03 nan nan nan 6.36643e-09
6.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 7.32003e+03 7.32003e+03 0.00000e+00 1.59155e-02 0.00000e+00 [162, 189]
6.00000e+00 1.00000e+00 -1.63638e+07 0.00000e+00 7.32003e+03 7.32152e+03 0.00000e+00 1.59280e-02 6.64958e-07 [163, 190]
6.00000e+00 2.00000e+00 -1.63491e+07 0.00000e+00 7.32003e+03 7.32449e+03 0.00000e+00 1.59406e-02 2.21828e-07 [164, 191]
6.00000e+00 3.00000e+00 -1.63271e+07 0.00000e+00 7.32003e+03 7.32897e+03 0.00000e+00 1.59532e-02 1.11001e-07 [165, 192]
6.00000e+00 4.00000e+00 -1.62978e+07 0.00000e+00 7.32003e+03 7.33495e+03 0.00000e+00 1.59658e-02 6.66533e-08 ...
6.00000e+00 5.00000e+00 -1.62612e+07 0.00000e+00 7.32003e+03 7.34245e+03 0.00000e+00 1.59784e-02 4.44706e-08
6.00000e+00 6.00000e+00 -1.62172e+07 0.00000e+00 7.32003e+03 7.35149e+03 0.00000e+00 1.59910e-02 3.17897e-08
6.00000e+00 7.00000e+00 -1.61657e+07 0.00000e+00 7.32003e+03 7.36210e+03 0.00000e+00 1.60036e-02 2.38611e-08
6.00000e+00 8.00000e+00 -1.61069e+07 0.00000e+00 7.32003e+03 7.37431e+03 0.00000e+00 1.60162e-02 1.85732e-08
6.00000e+00 9.00000e+00 -1.60406e+07 0.00000e+00 7.32003e+03 7.38813e+03 0.00000e+00 1.60288e-02 1.48703e-08
6.00000e+00 1.00000e+01 -1.59668e+07 0.00000e+00 7.32003e+03 7.40362e+03 0.00000e+00 1.60415e-02 1.21761e-08
6.00000e+00 1.10000e+01 -1.58854e+07 0.00000e+00 7.32003e+03 7.42082e+03 0.00000e+00 1.60541e-02 1.01547e-08
6.00000e+00 1.20000e+01 -1.57965e+07 0.00000e+00 7.32003e+03 7.43978e+03 0.00000e+00 1.60668e-02 8.59916e-09
6.00000e+00 1.30000e+01 -1.56999e+07 0.00000e+00 7.32003e+03 7.46054e+03 0.00000e+00 1.60795e-02 7.37644e-09
6.00000e+00 1.40000e+01 -1.55956e+07 0.00000e+00 7.32003e+03 7.48318e+03 0.00000e+00 1.60922e-02 6.39786e-09
6.00000e+00 1.50000e+01 -1.54836e+07 0.00000e+00 7.32003e+03 7.50776e+03 0.00000e+00 1.61049e-02 5.60244e-09
6.00000e+00 1.60000e+01 -1.53636e+07 0.00000e+00 7.32003e+03 7.53436e+03 0.00000e+00 1.61176e-02 4.94711e-09
6.00000e+00 1.70000e+01 -1.52358e+07 0.00000e+00 7.32003e+03 7.56307e+03 0.00000e+00 1.61303e-02 4.40077e-09
6.00000e+00 1.80000e+01 -1.51000e+07 0.00000e+00 7.32003e+03 7.59398e+03 0.00000e+00 1.61430e-02 3.94049e-09
6.00000e+00 1.90000e+01 -1.49560e+07 0.00000e+00 7.32003e+03 7.62719e+03 0.00000e+00 1.61557e-02 3.54908e-09
6.00000e+00 2.00000e+01 -1.48038e+07 0.00000e+00 7.32003e+03 7.66283e+03 0.00000e+00 1.61685e-02 3.21344e-09
6.00000e+00 2.10000e+01 -1.46433e+07 0.00000e+00 7.32003e+03 7.70103e+03 0.00000e+00 1.61812e-02 2.92343e-09
6.00000e+00 2.20000e+01 -1.44743e+07 0.00000e+00 7.32003e+03 7.74192e+03 0.00000e+00 1.61940e-02 2.67112e-09
6.00000e+00 2.30000e+01 -1.42967e+07 0.00000e+00 7.32003e+03 7.78568e+03 0.00000e+00 1.62068e-02 2.45025e-09
6.00000e+00 2.40000e+01 -1.41104e+07 0.00000e+00 7.32003e+03 7.83248e+03 0.00000e+00 1.62196e-02 2.25577e-09
6.00000e+00 2.50000e+01 -1.39153e+07 0.00000e+00 7.32003e+03 7.88253e+03 0.00000e+00 1.62324e-02 2.08365e-09
6.00000e+00 2.50000e+01 nan nan 7.32003e+03 nan nan nan 2.08365e-09
7.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 7.32003e+03 5.17604e+03 5.17604e+03 1.59155e-02 0.00000e+00 [135, 189]
7.00000e+00 1.00000e+00 -1.63659e+07 5.17604e+03 7.32003e+03 5.17753e+03 5.17604e+03 1.59280e-02 6.64958e-07 [136, 190]
7.00000e+00 2.00000e+00 -1.63555e+07 1.55281e+04 7.32003e+03 5.18051e+03 5.17604e+03 1.59406e-02 2.21828e-07 [137, 191]
7.00000e+00 3.00000e+00 -1.63400e+07 3.10563e+04 7.32003e+03 5.18498e+03 5.17604e+03 1.59532e-02 1.11001e-07 [138, 192]
7.00000e+00 4.00000e+00 -1.63193e+07 5.17604e+04 7.32003e+03 5.19095e+03 5.17603e+03 1.59658e-02 6.66533e-08 ...
7.00000e+00 5.00000e+00 -1.62933e+07 7.76406e+04 7.32003e+03 5.19843e+03 5.17600e+03 1.59784e-02 4.44706e-08
7.00000e+00 6.00000e+00 -1.62622e+07 1.08697e+05 7.32003e+03 5.20744e+03 5.17596e+03 1.59910e-02 3.17897e-08
7.00000e+00 7.00000e+00 -1.62258e+07 1.44929e+05 7.32003e+03 5.21799e+03 5.17589e+03 1.60036e-02 2.38611e-08
7.00000e+00 8.00000e+00 -1.61841e+07 1.86336e+05 7.32003e+03 5.23010e+03 5.17578e+03 1.60162e-02 1.85732e-08
7.00000e+00 9.00000e+00 -1.61371e+07 2.32919e+05 7.32003e+03 5.24379e+03 5.17562e+03 1.60288e-02 1.48703e-08
7.00000e+00 1.00000e+01 -1.60847e+07 2.84676e+05 7.32003e+03 5.25909e+03 5.17540e+03 1.60415e-02 1.21761e-08
7.00000e+00 1.10000e+01 -1.60269e+07 3.41607e+05 7.32003e+03 5.27603e+03 5.17510e+03 1.60541e-02 1.01547e-08
7.00000e+00 1.20000e+01 -1.59637e+07 4.03711e+05 7.32003e+03 5.29464e+03 5.17471e+03 1.60668e-02 8.59916e-09
7.00000e+00 1.30000e+01 -1.58950e+07 4.70985e+05 7.32003e+03 5.31495e+03 5.17419e+03 1.60795e-02 7.37644e-09
7.00000e+00 1.40000e+01 -1.58208e+07 5.43429e+05 7.32003e+03 5.33701e+03 5.17354e+03 1.60922e-02 6.39786e-09
7.00000e+00 1.50000e+01 -1.57409e+07 6.21038e+05 7.32003e+03 5.36086e+03 5.17272e+03 1.61049e-02 5.60244e-09
7.00000e+00 1.60000e+01 -1.56553e+07 7.03810e+05 7.32003e+03 5.38654e+03 5.17171e+03 1.61176e-02 4.94711e-09
7.00000e+00 1.70000e+01 -1.55640e+07 7.91739e+05 7.32003e+03 5.41410e+03 5.17047e+03 1.61303e-02 4.40077e-09
7.00000e+00 1.80000e+01 -1.54668e+07 8.84821e+05 7.32003e+03 5.44360e+03 5.16897e+03 1.61430e-02 3.94049e-09
7.00000e+00 1.90000e+01 -1.53637e+07 9.83048e+05 7.32003e+03 5.47510e+03 5.16716e+03 1.61557e-02 3.54908e-09
7.00000e+00 2.00000e+01 -1.52545e+07 1.08641e+06 7.32003e+03 5.50867e+03 5.16502e+03 1.61685e-02 3.21344e-09
7.00000e+00 2.10000e+01 -1.51392e+07 1.19491e+06 7.32003e+03 5.54437e+03 5.16247e+03 1.61812e-02 2.92343e-09
7.00000e+00 2.20000e+01 -1.50177e+07 1.30851e+06 7.32003e+03 5.58227e+03 5.15948e+03 1.61940e-02 2.67112e-09
7.00000e+00 2.30000e+01 -1.48897e+07 1.42722e+06 7.32003e+03 5.62246e+03 5.15598e+03 1.62068e-02 2.45025e-09
7.00000e+00 2.40000e+01 -1.47553e+07 1.55101e+06 7.32003e+03 5.66503e+03 5.15190e+03 1.62196e-02 2.25577e-09
7.00000e+00 2.50000e+01 -1.46142e+07 1.67987e+06 7.32003e+03 5.71005e+03 5.14717e+03 1.62324e-02 2.08365e-09
7.00000e+00 2.50000e+01 nan nan 7.32003e+03 nan nan nan 2.08365e-09
8.00000e+00 0.00000e+00 -1.63711e+07 0.00000e+00 7.32003e+03 4.48222e-13 7.32003e+03 1.59155e-02 0.00000e+00
8.00000e+00 1.00000e+00 -1.63711e+07 7.32003e+03 7.32003e+03 1.48721e+00 7.32003e+03 1.59280e-02 6.64958e-07
8.00000e+00 2.00000e+00 -1.63711e+07 2.19601e+04 7.32003e+03 4.46162e+00 7.32003e+03 1.59406e-02 2.21828e-07
8.00000e+00 3.00000e+00 -1.63711e+07 4.39202e+04 7.32003e+03 8.92323e+00 7.32002e+03 1.59532e-02 1.11001e-07
8.00000e+00 4.00000e+00 -1.63710e+07 7.32003e+04 7.32003e+03 1.48720e+01 7.32001e+03 1.59658e-02 6.66533e-08
8.00000e+00 5.00000e+00 -1.63710e+07 1.09800e+05 7.32003e+03 2.23078e+01 7.31997e+03 1.59784e-02 4.44706e-08
8.00000e+00 6.00000e+00 -1.63709e+07 1.53720e+05 7.32003e+03 3.12306e+01 7.31991e+03 1.59910e-02 3.17897e-08
8.00000e+00 7.00000e+00 -1.63707e+07 2.04960e+05 7.32003e+03 4.16399e+01 7.31981e+03 1.60036e-02 2.38610e-08
8.00000e+00 8.00000e+00 -1.63704e+07 2.63519e+05 7.32003e+03 5.35353e+01 7.31967e+03 1.60162e-02 1.85731e-08
8.00000e+00 9.00000e+00 -1.63700e+07 3.29397e+05 7.32003e+03 6.69160e+01 7.31945e+03 1.60288e-02 1.48701e-08
8.00000e+00 1.00000e+01 -1.63694e+07 4.02593e+05 7.32003e+03 8.17810e+01 7.31915e+03 1.60415e-02 1.21759e-08
8.00000e+00 1.10000e+01 -1.63686e+07 4.83106e+05 7.32003e+03 9.81288e+01 7.31875e+03 1.60541e-02 1.01544e-08
8.00000e+00 1.20000e+01 -1.63675e+07 5.70934e+05 7.32003e+03 1.15957e+02 7.31822e+03 1.60668e-02 8.59875e-09
8.00000e+00 1.30000e+01 -1.63661e+07 6.66076e+05 7.32003e+03 1.35264e+02 7.31755e+03 1.60795e-02 7.37594e-09
8.00000e+00 1.40000e+01 -1.63644e+07 7.68527e+05 7.32003e+03 1.56046e+02 7.31670e+03 1.60922e-02 6.39726e-09
8.00000e+00 1.50000e+01 -1.63622e+07 8.78286e+05 7.32003e+03 1.78299e+02 7.31566e+03 1.61049e-02 5.60172e-09
8.00000e+00 1.60000e+01 -1.63595e+07 9.95346e+05 7.32003e+03 2.02017e+02 7.31439e+03 1.61176e-02 4.94627e-09
8.00000e+00 1.70000e+01 -1.63563e+07 1.11970e+06 7.32003e+03 2.27195e+02 7.31285e+03 1.61303e-02 4.39980e-09
8.00000e+00 1.80000e+01 -1.63525e+07 1.25135e+06 7.32003e+03 2.53826e+02 7.31103e+03 1.61430e-02 3.93939e-09
8.00000e+00 1.90000e+01 -1.63479e+07 1.39028e+06 7.32003e+03 2.81900e+02 7.30888e+03 1.61557e-02 3.54783e-09
8.00000e+00 2.00000e+01 -1.63426e+07 1.53648e+06 7.32003e+03 3.11408e+02 7.30637e+03 1.61685e-02 3.21203e-09
8.00000e+00 2.10000e+01 -1.63364e+07 1.68995e+06 7.32003e+03 3.42337e+02 7.30346e+03 1.61812e-02 2.92186e-09
8.00000e+00 2.20000e+01 -1.63292e+07 1.85066e+06 7.32003e+03 3.74675e+02 7.30012e+03 1.61940e-02 2.66939e-09
8.00000e+00 2.30000e+01 -1.63210e+07 2.01861e+06 7.32003e+03 4.08404e+02 7.29630e+03 1.62068e-02 2.44833e-09
8.00000e+00 2.40000e+01 -1.63116e+07 2.19377e+06 7.32003e+03 4.43508e+02 7.29196e+03 1.62196e-02 2.25368e-09
8.00000e+00 2.50000e+01 -1.63009e+07 2.37613e+06 7.32003e+03 4.79966e+02 7.28705e+03 1.62324e-02 2.08136e-09
8.00000e+00 2.50000e+01 nan nan 7.32003e+03 nan nan nan 2.08136e-09
...
</code></pre>
|
<python><numpy><vectorization><array-broadcasting><elementwise-operations>
|
2024-01-03 18:03:13
| 2
| 1,195
|
Sterling Butters
|
77,753,801
| 1,786,165
|
Query Python's iGraph as if it would be a Neo4j DB
|
<p>I'm trying to query an <code>igraph</code> graph in Python as I would in Neo4j, to find all the sets of nodes that match a given pattern. I have read through the documentation, searched for answers here, and interrogated Google with no luck.</p>
<p>Assuming that I have the following:</p>
<pre><code>import pandas as pd
from igraph import Graph
df = pd.DataFrame({
"Person": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P3", "P3", "P4"],
"Object": ["O1", "O2", "O3", "O6", "O2", "O3", "O4", "O5", "O6", "O6"],
"relation": ["LIKES"] * 10,
})
g = Graph.DataFrame(df, directed=False)
</code></pre>
<p>I'd like to run a query against <code>g</code> to return all px:Person, py:Person, ox:Object, and oy:Object such that <em>"px likes ox, which is also liked by py, which also likes oy AND px != py, ox != oy, (px, oy) not in df</em>.</p>
<p>Expected results would include:</p>
<pre><code>P1, O2, P2, O4
P2, O2, P1, O1
P2, O2, P1, O6
P2, O3, P1, O1
P2, O3, P1, O6
P4, O6, P1, O1
P4, O6, P1, O2
P4, O6, P1, O3
P4, O6, P3, O5
</code></pre>
<p>but not:</p>
<pre><code>P1, O2, P2, O3 because P1, O3 are already connected
P2, O3, P1, O2 because P2, O2 are already connected
</code></pre>
<p>In Neo4j, I'd have run something like:</p>
<pre><code>MATCH (p1:Person)-[:LIKES]->(o1:Object)<-[:LIKES]-(p2:Person)-[:LIKES]->(o2:Object)
WHERE p1 <> p2 AND o1 <> o2 AND NOT (p1)-[:LIKES]-(o2)
RETURN p1, o1, p2, o2
LIMIT 4
MATCH (p1:Person)-[:LIKES]->(o1:Object)<-[:LIKES]-(p2:Person)-[:LIKES]->(o2:Object) WHERE NOT (p1)-[:LIKES]->(o2) RETURN ....
</code></pre>
<p>Notice that my input files are quite big (~75k rows) and I would like to find all the possible <code>(p1, o1, p2, o2)</code> tuples, not just the first few (<code>LIMIT 4</code>). I've tried joining my dataframe with itself twice to create a result dataframe to filter further, but it gets very slow or too big to fit in memory.</p>
<p>I was wondering if there is a way to run such a query in <code>igraph</code>, or if I have to break it down into parts and combine the results myself. Thanks in advance for any help!</p>
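<p>For completeness, this is roughly the double-join approach I tried (a plain-pandas sketch on the toy data; the suffixed column names are whatever pandas generates by default). It produces the expected tuples, but it doesn't scale to my real input:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Person": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P3", "P3", "P4"],
    "Object": ["O1", "O2", "O3", "O6", "O2", "O3", "O4", "O5", "O6", "O6"],
})

# fast membership test for the NOT (p1)-[:LIKES]-(o2) clause
liked = set(zip(df["Person"], df["Object"]))

# p1 -[:LIKES]-> o1 <-[:LIKES]- p2 : self-join on the shared object
pairs = df.merge(df, on="Object", suffixes=("_1", "_2"))
pairs = pairs[pairs["Person_1"] != pairs["Person_2"]]

# p2 -[:LIKES]-> o2 : join p2 out to a second object
quads = pairs.merge(df, left_on="Person_2", right_on="Person")
quads = quads[quads["Object_x"] != quads["Object_y"]]

# drop rows where p1 already likes o2
keep = [(p1, o2) not in liked
        for p1, o2 in zip(quads["Person_1"], quads["Object_y"])]
result = quads.loc[keep, ["Person_1", "Object_x", "Person_2", "Object_y"]]
print(result.to_string(index=False))
```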
|
<python><neo4j><cypher><igraph>
|
2024-01-03 17:56:11
| 0
| 644
|
Stefano Bragaglia
|
77,753,726
| 1,939,576
|
Why does the output of Pandas DataFrame.sort_values differ from Series.sort_values?
|
<p>While teaching, one of my students pointed out that Pandas <code>DataFrame.sort_values</code> returns a different ordering (different tie-breaks) from that of the equivalent <code>Series.sort_values</code>. Consider this:</p>
<pre><code>>>> import pandas as pd
>>> df = pd.read_csv('https://gist.githubusercontent.com/matthew-brett/806a356bb7b7
... 1f08c5c6d0c5235e2f3d/raw/facb1aab243a33033b46657378f65dcd41542596/business.csv'
... )
>>> df['name'].value_counts().head(6)
name
Peet's Coffee & Tea 20
Starbucks Coffee 13
McDonald's 10
Jamba Juice 10
STARBUCKS 9
Proper Food 9
Name: count, dtype: int64
>>> df.value_counts('name').head(6)
name
Peet's Coffee & Tea 20
Starbucks Coffee 13
McDonald's 10
Jamba Juice 10
Proper Food 9
STARBUCKS 9
Name: count, dtype: int64
</code></pre>
<p>Of course, both of these orders are valid, since the default sort (quicksort) is not stable, but it's difficult to see why the two calls would differ, given that the default sort method appears to be the same in both cases.</p>
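<p>For what it's worth, requesting a stable sort explicitly makes the tie-breaks deterministic either way (a small sketch with counts modelled on the ties above; <code>kind='stable'</code> is one of the documented options of <code>sort_values</code>):</p>

```python
import pandas as pd

# Toy counts modelled on the tied rows above.
counts = pd.Series(
    [10, 10, 9, 9],
    index=["McDonald's", "Jamba Juice", "STARBUCKS", "Proper Food"],
)

# With an explicitly stable sort, tied entries keep their original
# relative order, so the result no longer depends on quicksort's
# implementation-defined tie-breaking.
stable = counts.sort_values(ascending=False, kind="stable")
print(list(stable.index))
```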
|
<python><pandas><dataframe><series>
|
2024-01-03 17:45:12
| 1
| 925
|
Matthew Brett
|
77,753,682
| 1,469,954
|
Beautifulsoup selector in Python returns blank result set for valid selector
|
<p>We want to scrape some content from <a href="https://bankcodesfinder.com/world-postal-codes/india" rel="nofollow noreferrer">this</a> webpage. The HTML of the element we are interested in is this (<code>div.white-bg-border-radius-kousik.shadow-kousik-effect.mb-2</code>).</p>
<p><a href="https://i.sstatic.net/RsV7V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RsV7V.png" alt="enter image description here" /></a></p>
<p>For this, we are trying to use this selector in <code>BeautifulSoup</code> (Python), but it does not work. I tried three or four variants and they did not work either. The HTML shows that this element is present 36 times in the page, yet the selectors return either an empty set or 2-3 results, so I am obviously missing something and need to find the right way of doing it.</p>
<pre><code>from bs4 import BeautifulSoup
import os
import urllib.request
url = "https://bankcodesfinder.com/world-postal-codes/india"
with urllib.request.urlopen(url) as response:
html = str(response.read())
soup = BeautifulSoup(html, 'html.parser')
elements = soup.find_all('div.white-bg-border-radius-kousik.shadow-kousik-effect.mb-2') # This returns blank set
elements2 = soup.findAll('div', class_=['shadow-kousik-effect', 'mb-2']) #returns just 3 elements, whereas this is a subset class search of the original list of 3 classes, so this should return at least 36 elements
elements3 = soup.select('div.shadow-kousik-effect') # returns just 3 results
</code></pre>
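<p>One thing I noticed while experimenting (not sure if it's the whole story): <code>find_all()</code> expects a tag <em>name</em> rather than a CSS selector, and <code>str(response.read())</code> turns the response bytes into an escaped repr instead of decoded HTML. A minimal offline sketch of the selector difference:</p>

```python
from bs4 import BeautifulSoup

# Minimal offline reproduction; the real page is fetched over the network,
# where response.read().decode() should be used instead of str(response.read()).
html = """
<div class="white-bg-border-radius-kousik shadow-kousik-effect mb-2">card 1</div>
<div class="white-bg-border-radius-kousik shadow-kousik-effect mb-2">card 2</div>
"""

soup = BeautifulSoup(html, "html.parser")

# find_all() treats the string as a tag name, so a CSS selector matches nothing
n_find = len(soup.find_all("div.white-bg-border-radius-kousik.shadow-kousik-effect.mb-2"))

# select() understands CSS selectors and finds both elements
n_select = len(soup.select("div.white-bg-border-radius-kousik.shadow-kousik-effect.mb-2"))

print(n_find, n_select)
```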
|
<python><beautifulsoup><urllib>
|
2024-01-03 17:36:26
| 1
| 5,353
|
NedStarkOfWinterfell
|
77,753,669
| 12,461,032
|
Python cannot add package leidenalg
|
<p>I want to install the leidenalg package, but it fails both through pip and through PyCharm's package manager.</p>
<p>This is the error I get when installing with Pycharm package manager:</p>
<pre><code> Using cached leidenalg-0.8.10.tar.gz (3.8 MB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: igraph<0.10,>=0.9.0 in /home/hm31/step_1/venv/lib64/python3.6/site-packages (from leidenalg) (0.9.11)
Requirement already satisfied: texttable>=1.6.2 in /home/hm31/step_1/venv/lib/python3.6/site-packages (from igraph<0.10,>=0.9.0->leidenalg) (1.7.0)
Building wheels for collected packages: leidenalg
Building wheel for leidenalg (setup.py): started
Building wheel for leidenalg (setup.py): finished with status 'error'
Running setup.py clean for leidenalg
Failed to build leidenalg
Installing collected packages: leidenalg
Running setup.py install for leidenalg: started
Running setup.py install for leidenalg: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-jowsz042
cwd: /tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/
Complete output (28 lines):
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/installer.py:30: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
SetuptoolsDeprecationWarning,
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/Optimiser.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/VertexPartition.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/__init__.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/functions.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/version.py -> build/lib.linux-x86_64-3.6/leidenalg
running build_ext
running build_c_core
We are going to build the C core of igraph.
Source folder: vendor/source/igraph
Build folder: vendor/build/igraph
Install folder: vendor/install/igraph
Configuring build...
CMake Error at CMakeLists.txt:4 (cmake_minimum_required):
CMake 3.16 or higher is required. You are running version 2.8.12.2
-- Configuring incomplete, errors occurred!
Build failed for the C core of igraph.
----------------------------------------
ERROR: Failed building wheel for leidenalg
WARNING: Error parsing requirements for decorator: [Errno 2] No such file or directory: '/home/hm31/step_1/venv/lib/python3.6/site-packages/decorator-5.1.1.dist-info/METADATA'
ERROR: Command errored out with exit status 1:
command: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8ze98gsh/install-record.txt --single-version-externally-managed --compile --install-headers /home/hm31/step_1/venv/include/site/python3.6/leidenalg
cwd: /tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/
Complete output (30 lines):
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/installer.py:30: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
SetuptoolsDeprecationWarning,
running install
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/Optimiser.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/VertexPartition.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/__init__.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/functions.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/version.py -> build/lib.linux-x86_64-3.6/leidenalg
running build_ext
running build_c_core
We are going to build the C core of igraph.
Source folder: vendor/source/igraph
Build folder: vendor/build/igraph
Install folder: vendor/install/igraph
Configuring build...
CMake Error at CMakeLists.txt:4 (cmake_minimum_required):
CMake 3.16 or higher is required. You are running version 2.8.12.2
-- Configuring incomplete, errors occurred!
Build failed for the C core of igraph.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bj6ahghf/leidenalg_0b2f9bf76b0743108c873593854884a4/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8ze98gsh/install-record.txt --single-version-externally-managed --compile --install-headers /home/hm31/step_1/venv/include/site/python3.6/leidenalg Check the logs for full command output.
</code></pre>
<p>I omitted the pip error due to space constraints.
I am using Python 3.6.8. I also recall adding leidenalg to a conda environment with some difficulty, but I don't remember the details; I'm currently using venv.</p>
<p>Thanks.</p>
<p>P.S: Even after upgrading cmake, this is the error I get:</p>
<pre><code>(venv) [hm31@chackoge-serv01 step_10_cluster]$ pip install leidenalg
Collecting leidenalg
Using cached leidenalg-0.8.10.tar.gz (3.8 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: igraph<0.10,>=0.9.0 in /home/hm31/step_1/venv/lib64/python3.6/site-packages (from leidenalg) (0.9.11)
Requirement already satisfied: texttable>=1.6.2 in /home/hm31/step_1/venv/lib/python3.6/site-packages (from igraph<0.10,>=0.9.0->leidenalg) (1.7.0)
Building wheels for collected packages: leidenalg
Building wheel for leidenalg (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"'; __file__='"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-z3o7dnh2
cwd: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/
Complete output (1059 lines):
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/installer.py:30: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
SetuptoolsDeprecationWarning,
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/Optimiser.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/VertexPartition.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/__init__.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/functions.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/version.py -> build/lib.linux-x86_64-3.6/leidenalg
running build_ext
running build_c_core
We are going to build the C core of igraph.
Source folder: vendor/source/igraph
Build folder: vendor/build/igraph
Install folder: vendor/install/igraph
Configuring build...
-- Setting build type to 'Release' as none was specified.
-- Version number: 0.9.8
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test COMPILER_SUPPORTS_NO_VARARGS_FLAG
-- Performing Test COMPILER_SUPPORTS_NO_VARARGS_FLAG - Success
-- Performing Test COMPILER_SUPPORTS_NO_UNKNOWN_WARNING_OPTION_FLAG
-- Performing Test COMPILER_SUPPORTS_NO_UNKNOWN_WARNING_OPTION_FLAG - Success
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found FLEX: /usr/bin/flex (found version "2.5.37")
-- Found BISON: /usr/bin/bison (found version "3.0.4")
-- Looking for expm1
-- Looking for expm1 - found
-- Looking for fmin
-- Looking for fmin - found
-- Looking for finite
-- Looking for finite - found
-- Looking for isfinite
-- Looking for isfinite - found
-- Looking for log2
-- Looking for log2 - found
-- Looking for log1p
-- Looking for log1p - found
-- Looking for rint
-- Looking for rint - found
-- Looking for rintf
-- Looking for rintf - found
-- Looking for round
-- Looking for round - found
-- Looking for stpcpy
-- Looking for stpcpy - found
-- Looking for strcasecmp
-- Looking for strcasecmp - found
-- Looking for strdup
-- Looking for strdup - found
-- Looking for _stricmp
-- Looking for _stricmp - not found
-- Looking for _LIBCPP_VERSION
-- Looking for _LIBCPP_VERSION - not found
-- Looking for __GLIBCXX__
-- Looking for __GLIBCXX__ - found
-- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_HAS_DEPRECATED_ATTR
-- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Success
-- Found Python3: /home/hm31/step_1/venv/bin/python3.6 (found version "3.6.8") found components: Interpreter
--
-- -----[ Build configuration ]----
-- Version: 0.9.8
-- CMake build type: Release
-- Library type: static
--
-- ----------[ Features ]----------
-- GLPK for optimization: yes
-- Reading GraphML files: yes
-- Thread-local storage: no
-- Link-time optimization: no
--
-- --------[ Dependencies ]--------
-- ARPACK: vendored
-- BISON: yes
-- BLAS: vendored
-- CXSparse: vendored
-- FLEX: yes
-- GLPK: vendored
-- GMP: vendored
-- LAPACK: vendored
-- LibXml2: yes
-- OpenMP: yes
-- PLFIT: vendored
--
-- -----------[ Testing ]----------
-- Diff tool: not found
-- Sanitizers: none
-- Code coverage: no
-- Verify 'finally' stack: no
--
-- --------[ Documentation ]-------
-- HTML: no
-- PDF: no
--
-- igraph configured successfully.
--
-- Configuring done (3.0s)
-- Generating done (0.2s)
-- Build files have been written to: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/vendor/build/igraph
Running build...
(Omitted due to space limits)
Installing build...
-- Installing: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/vendor/install/igraph/lib64/libigraph.a
-- Installing: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/vendor/install/igraph/include/igraph
-- Installing: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/vendor/install/igraph/include/igraph/igraph.h
h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/vendor/install/igraph/lib64/cmake/igraph/igraph-targets-release.cmake
Libraries: ['igraph', 'm', 'stdc++', 'xml2', 'z', 'gomp', 'pthread']
Exclusions: []
Found igraph as static library in vendor/install/igraph/lib64/libigraph.a.
Build type: dynamic extension with vendored igraph source
Include path: vendor/install/igraph/include/igraph
Library path: vendor/install/igraph/lib64 /usr/local/lib64 /usr/local/lib /usr/lib64 /usr/lib /lib64 /lib
Runtime library path:
Linked dynamic libraries: m xml2 z gomp pthread
Linked static libraries: vendor/install/igraph/lib64/libigraph.a
Extra compiler options:
Extra linker options:
building 'leidenalg._c_leiden' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
creating build/temp.linux-x86_64-3.6/src/leidenalg
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Iinclude -Ivendor/install/igraph/include/igraph -I/home/hm31/step_1/venv/include -I/usr/include/python3.6m -c src/leidenalg/CPMVertexPartition.cpp -o build/temp.linux-x86_64-3.6/src/leidenalg/CPMVertexPartition.o
In file included from include/MutableVertexPartition.h:5:0,
from include/ResolutionParameterVertexPartition.h:4,
from include/LinearResolutionParameterVertexPartition.h:4,
from include/CPMVertexPartition.h:4,
from src/leidenalg/CPMVertexPartition.cpp:1:
include/GraphHelper.h: In function ‘T sum(std::vector<T>)’:
include/GraphHelper.h:35:14: error: range-based ‘for’ loops are not allowed in C++98 mode
for (T x : vec)
^
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for leidenalg
Running setup.py clean for leidenalg
Failed to build leidenalg
WARNING: Error parsing requirements for decorator: [Errno 2] No such file or directory: '/home/hm31/step_1/venv/lib/python3.6/site-packages/decorator-5.1.1.dist-info/METADATA'
Installing collected packages: leidenalg
Running setup.py install for leidenalg ... error
ERROR: Command errored out with exit status 1:
command: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"'; __file__='"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-9i4vehuu/install-record.txt --single-version-externally-managed --compile --install-headers /home/hm31/step_1/venv/include/site/python3.6/leidenalg
cwd: /tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/
Complete output (43 lines):
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/installer.py:30: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
SetuptoolsDeprecationWarning,
running install
/home/hm31/step_1/venv/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/Optimiser.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/VertexPartition.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/__init__.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/functions.py -> build/lib.linux-x86_64-3.6/leidenalg
copying src/leidenalg/version.py -> build/lib.linux-x86_64-3.6/leidenalg
running build_ext
running build_c_core
Libraries: ['igraph', 'm', 'stdc++', 'xml2', 'z', 'gomp', 'pthread']
Exclusions: []
Found igraph as static library in vendor/install/igraph/lib64/libigraph.a.
Build type: dynamic extension with vendored igraph source
Include path: vendor/install/igraph/include/igraph
Library path: vendor/install/igraph/lib64 /usr/local/lib64 /usr/local/lib /usr/lib64 /usr/lib /lib64 /lib
Runtime library path:
Linked dynamic libraries: m xml2 z gomp pthread
Linked static libraries: vendor/install/igraph/lib64/libigraph.a
Extra compiler options:
Extra linker options:
building 'leidenalg._c_leiden' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
creating build/temp.linux-x86_64-3.6/src/leidenalg
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Iinclude -Ivendor/install/igraph/include/igraph -I/home/hm31/step_1/venv/include -I/usr/include/python3.6m -c src/leidenalg/CPMVertexPartition.cpp -o build/temp.linux-x86_64-3.6/src/leidenalg/CPMVertexPartition.o
In file included from include/MutableVertexPartition.h:5:0,
from include/ResolutionParameterVertexPartition.h:4,
from include/LinearResolutionParameterVertexPartition.h:4,
from include/CPMVertexPartition.h:4,
from src/leidenalg/CPMVertexPartition.cpp:1:
include/GraphHelper.h: In function ‘T sum(std::vector<T>)’:
include/GraphHelper.h:35:14: error: range-based ‘for’ loops are not allowed in C++98 mode
for (T x : vec)
^
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/hm31/step_1/venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"'; __file__='"'"'/tmp/pip-install-h9r0okpt/leidenalg_f86369d826b04bc6bd372ccd68782138/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-9i4vehuu/install-record.txt --single-version-externally-managed --compile --install-headers /home/hm31/step_1/venv/include/site/python3.6/leidenalg Check the logs for full command output.
</code></pre>
|
<python><python-3.x><linux><pip><leiden>
|
2024-01-03 17:32:44
| 0
| 472
|
m0ss
|
77,753,662
| 13,525,512
|
Fix column width in RecycleView when changing density
|
<p>I'm working with multiple columns in <code>RecycleView</code>. For the sake of simplification, the following example contains only a <code>ref</code> column which I expect to be displayed on one line and a <code>message</code> column that should take the rest of the screen, the text being wrapped in order to be fully readable. Both columns are implemented using <code>MDLabel</code> instances:</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivy.properties import StringProperty
from kivy.uix.recycleview import RecycleView
from kivy.uix.boxlayout import BoxLayout
from kivymd.uix.label import MDLabel
from random import randrange
from loremipsum import get_sentences
Builder.load_string('''
<RVLayout>:
spacing: 2
size_hint: None, None
height: message.texture_size[1]
MDLabel:
size_hint_x: None # label width should fit text
halign: 'center'
text: root.ref
md_bg_color: app.theme_cls.bg_darkest
MDLabel:
id: message
text: root.message
md_bg_color: app.theme_cls.bg_darkest
<RV>:
viewclass: 'RVLayout'
RecycleBoxLayout:
spacing: 2
default_size: None, dp(10)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
''')
class RVLayout(BoxLayout):
ref = StringProperty()
message = StringProperty()
def on_size(self, *args):
self.height = self.ids.message.texture_size[1]
class RV(RecycleView):
def __init__(self, **kwargs):
super(RV, self).__init__(**kwargs)
self.data = [
{
'ref': str(x).zfill(8),
'message': ' '.join(get_sentences(randrange(1, 4)))
} for x in range(50)]
class TestApp(MDApp):
def build(self):
return RV()
if __name__ == '__main__':
TestApp().run()
</code></pre>
<p>As you can see, I wrote an <code>on_size</code> method for my <code>RVLayout</code> so that everything looks right on resize (desktop case):</p>
<p><a href="https://i.sstatic.net/YSinJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YSinJ.png" alt="enter image description here" /></a></p>
<p>The height of the <code>message</code> elements fits perfectly for any window width, while the width of <code>ref</code> stays the same:</p>
<p><a href="https://i.sstatic.net/WSQ2H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WSQ2H.png" alt="enter image description here" /></a></p>
<p>Now here comes my issue, when I start my app with a density > 1, the <code>ref</code> column width is still the same even though the font is bigger. For instance with the environment variable <code>KIVY_METRICS_DENSITY=2</code> I get the following:</p>
<p><a href="https://i.sstatic.net/eDngj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eDngj.png" alt="enter image description here" /></a></p>
<p>How can I make <code>ref</code> width adapt to the density?</p>
|
<python><kivy><kivymd>
|
2024-01-03 17:30:39
| 2
| 12,821
|
Tranbi
|
77,753,658
| 404,264
|
LangChain + local LLAMA compatible model
|
<p>I'm trying to setup a local chatbot demo for testing purpose. I wanted to use LangChain as the framework and LLAMA as the model. Tutorials I found all involve some registration, API key, HuggingFace, etc, which seems unnecessary for my purpose.</p>
<p>Is there a way to use a local LLAMA compatible model file just for testing purpose? And also an example code to use the model with LangChain would be appreciated. Thanks!</p>
<p><strong>UPDATE:</strong> I wrote a <a href="https://medium.com/@weidagang/hello-llm-building-a-local-chatbot-with-langchain-and-llama2-3a4449fc4c03" rel="nofollow noreferrer">blog post</a> based on the accepted answer.</p>
|
<python><langchain><large-language-model><llama>
|
2024-01-03 17:30:14
| 1
| 26,824
|
Dagang Wei
|
77,753,534
| 16,236,118
|
How can I auto-insert a linebreak with quotation marks into a string in vscode?
|
<p>I have seen the following in a Python tutorial.
Instead of Visual Studio Code they use PyCharm (see pictures).</p>
<p>From this:</p>
<p><a href="https://i.sstatic.net/UtaJi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UtaJi.png" alt="Before newline" /></a></p>
<p>After pressing enter at <em><strong>p</strong></em> to this:</p>
<p><a href="https://i.sstatic.net/nNeq5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNeq5.png" alt="After insert newline" /></a></p>
<blockquote>
<p>It therefore automatically generates a line break ("\") and converts
the text into a character string with quotation marks.</p>
</blockquote>
<p>Is there any option/shortcut/extension to achieve this in Visual Studio Code?<br>
(I am on Windows with Visual Studio Code)</p>
|
<python><visual-studio-code>
|
2024-01-03 17:10:02
| 1
| 1,636
|
JAdel
|
77,753,482
| 839,238
|
How to render to an animated SVG on the page using pyodide and the Basthon turtle package?
|
<p>I am trying to use the Python turtle module with pyodide. This isn't <a href="https://pyodide.org/en/stable/usage/wasm-constraints.html#removed-modules" rel="nofollow noreferrer">officially supported</a> but I found another <a href="https://stackoverflow.com/questions/72096299/how-to-add-turtle-module-to-pyodide">StackOverflow answer</a> that suggested using a <a href="https://framagit.org/basthon/basthon-kernel/-/tree/master/packages/kernel-python3/src/modules/turtle/turtle" rel="nofollow noreferrer">modified version</a> from Basthon. I therefore copied the files from Basthon into a new directory structure:</p>
<pre><code>pyodide/
turtle/
src/
turtle/
__init__.py
svg.py
pyproject.toml
</code></pre>
<p>My pyproject.toml file contains the following:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "turtle"
version = "0.0.1"
</code></pre>
<p>And I am successfully building a wheel with the following script:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
pushd pyodide/turtle
python3 -m pip install --upgrade build
python3 -m build
popd
</code></pre>
<p>I am then successfully loading pyodide with the turtle wheel using the following HTML:</p>
<pre class="lang-html prettyprint-override"><code><!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Pyodide</title>
<script src="https://cdn.jsdelivr.net/pyodide/v0.24.1/full/pyodide.js"></script>
</head>
<body>
<textarea id="input" cols="80" rows="15">
import turtle
t = turtle.Turtle()
t.forward(100)
</textarea>
<br/>
<button id="run" onclick="run()" disabled>Run</button>
<pre id="output">Loading pyodide...</pre>
<div id="visual"></div>
<script type="text/javascript">
const runButton = document.getElementById("run");
const input = document.getElementById("input");
const output = document.getElementById("output");
let pyodide;
const main = async () => {
pyodide = await loadPyodide({
stdout: (text) => output.innerHTML += text + "\n",
});
runButton.disabled = false;
output.innerHTML = "";
};
main();
const run = async () => {
output.innerHTML = "";
try {
await pyodide.loadPackage("./turtle/dist/turtle-0.0.1-py2.py3-none-any.whl");
await pyodide.loadPackagesFromImports(input.value);
await pyodide.runPython(input.value);
} catch (error) {
console.log(error);
}
};
</script>
</body>
</html>
</code></pre>
<p>I ran the above HTML from a web server (<code>python -m http.server</code>) so that the <code>.whl</code> file can be read despite the browser's CORS restrictions.</p>
<p>However, I am not sure how to actually output the SVG on the web page. I looked through the Basthon source code but couldn't see a method to do this. The closest I could find was <a href="https://framagit.org/basthon/basthon-kernel/-/blob/master/packages/kernel-python3/src/modules/turtle/turtle/__init__.py#L170" rel="nofollow noreferrer"><code>turtle.svg()</code></a> but when I called this from python, I received the following error in the JavaScript console:</p>
<blockquote>
<p>TypeError: 'pyodide.ffi.JsProxy' object is not callable</p>
</blockquote>
<p>I tried to read about pyodide <a href="https://github.com/pyodide/pyodide/blob/main/docs/usage/type-conversions.md" rel="nofollow noreferrer">Type Conversions</a> but didn't understand what I need to do.</p>
<p>Is it possible to bind the drawing context to an SVG on the page so that the turtle will animate in real time? Or is it only possible to render a finished SVG using the Basthon turtle package? Is there an easier way to get pyodide and turtle working with realtime graphics in a browser window?</p>
|
<javascript><python><turtle-graphics><webassembly><pyodide>
|
2024-01-03 17:02:33
| 1
| 1,690
|
Chris
|
77,753,450
| 19,130,803
|
accept multiple values from user
|
<p>I am developing <code>dash</code> application. I want to provide functionality that accepts <code>multiple</code> values (preferably comma separated) from the user.</p>
<p>Currently I am trying this with an <code>input</code> box of <code>type=text</code> where the user enters values separated by commas, as in the code below. In the callback I receive the value as a <code>str</code>, which I then need to convert into a <code>list</code> for further computation.</p>
<pre><code># Input in textbox
-1, "", "na", "#99", 100
# Received in callback
val='-1, "", "na", "#99", 100'
# Output
[-1, "", "na", "#99", 100]
</code></pre>
<pre><code>import dash
from dash import Dash
import dash_bootstrap_components as dbc
dbc_css = (
"https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates@V1.0.2/dbc.min.css"
)
app = Dash(
__name__,
suppress_callback_exceptions=True,
external_stylesheets=[dbc.themes.BOOTSTRAP, dbc_css],
)
btn1 = dbc.Button(id='btn1', children='click')
ip1 = dbc.Input(id='ip1', type='text')
div1 = html.Div(id='div1')
@app.callback(
Output('div1', 'children'),
Input('btn1', 'n_clicks'),
State('ip1', 'value'),
prevent_initial_call=True,
)
def get_values_and_process(n, val):
status = False
if n is None:
raise PreventUpdate
print(f"{val=}")
print(f"{type(val)}")
# proposed(trying to do this way)
# get the input value which is string and convert into list and then use it further
return status
app.layout = dbc.Container(html.Div([btn1, ip1, div1]))
if __name__ == "__main__":
app.run(host="0.0.0.0", port="8001", debug=True)
</code></pre>
<p>I tried searching for a built-in Dash component for this functionality but had no success. Is there a better way to achieve this, ideally one that doesn't require the user to type commas manually?</p>
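Since each value in the text box is already written as a Python literal (quoted strings, bare numbers), one hedged way to turn the callback's string into a list is to wrap it in brackets and parse it with `ast.literal_eval`, which safely evaluates literals without executing arbitrary code. A sketch:

```python
import ast

def parse_values(val: str):
    """Parse a comma-separated string of Python literals into a list.

    Assumes users quote their strings, as in the example input.
    Raises ValueError/SyntaxError on malformed input, so the callback
    can surface a validation message instead of crashing.
    """
    return list(ast.literal_eval(f"[{val}]"))

print(parse_values('-1, "", "na", "#99", 100'))  # → [-1, '', 'na', '#99', 100]
```

As an alternative to free-form text, a `dcc.Dropdown` with `multi=True` lets users pick multiple values without typing commas, but it requires the options to be known (or added dynamically) up front.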
|
<python><plotly><plotly-dash>
|
2024-01-03 16:58:27
| 1
| 962
|
winter
|
77,753,412
| 11,277,108
|
Get the standard deviation of a row ignoring the min and max values
|
<p>Given the following data frame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>e</th>
<th>sd</th>
</tr>
</thead>
<tbody>
<tr>
<td>-100</td>
<td>2</td>
<td>3</td>
<td>60</td>
<td>4</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>5</td>
<td>-50</td>
<td>9</td>
<td>130</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>How would I calculate the standard deviation <code>sd</code> column which excludes the minimum and maximum values from each row?</p>
<p>The actual data frame is a few million rows long so something vectorised would be great!</p>
<p>To replicate:</p>
<pre><code>df = pd.DataFrame(
{"a": [-100, 7], "b": [2, 5], "c": [3, -50], "d": [60, 9], "e": [4, 130]}
)
</code></pre>
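The expected `sd` column (1 and 2) corresponds to the sample standard deviation (`ddof=1`) after dropping exactly one occurrence of each row's minimum and maximum. Under that assumption, a fully vectorised sketch is to sort each row and slice off the first and last columns:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"a": [-100, 7], "b": [2, 5], "c": [3, -50], "d": [60, 9], "e": [4, 130]}
)

a = np.sort(df.to_numpy(dtype=float), axis=1)  # sort each row independently
trimmed = a[:, 1:-1]                           # drop one min and one max per row
df["sd"] = trimmed.std(axis=1, ddof=1)         # sample std of the remainder
print(df["sd"].tolist())  # → [1.0, 2.0]
```

Sorting a few million rows of five columns is cheap, and this handles duplicated extremes gracefully (only one occurrence of each is removed).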
|
<python><pandas>
|
2024-01-03 16:50:03
| 4
| 1,121
|
Jossy
|
77,753,131
| 519,422
|
Python: for a long string where a certain word is repeated, how to identify the first occurrence of the word after a unique word?
|
<p>I have a large file that is made up of many blocks of data. For example, two blocks of data would look like:</p>
<pre><code>name1 1234567 comment
property1 = 1234567.98765 property2 = 1234567.98765
property3 = 1234567.98765
final
name2 1234568 comment
property1 = 987654.321 property2 = 9876543.0
property3 = 1234567.98765
final
...
</code></pre>
<p><strong>Problem</strong>.
I have code to modify one block of data. However, the code results in a string (<code>updated_string</code>) that contains ALL data blocks in the file (the modified data block and all other unmodified data blocks).</p>
<p><strong>Goal</strong>.
I only want the modified data block in <code>updated_string</code> and then I want to put only <code>updated_string</code> in an external file and leave all other data blocks in the file unmodified.</p>
<p>So far I have figured out from previous posts here how to delete everything from <code>updated_string</code> that comes before the modified data block. For example, if the second data block has been modified, I would do:</p>
<pre><code>mystring = "name2"
begin = string.find(mystring)
string[begin:]
</code></pre>
<p>However, I am not able to delete everything after the "<code>final</code>" in the data block I want. I know I can do</p>
<pre><code>mystring2 = "final"
stop = string.find(mystring2)
string[:stop]
</code></pre>
<p>but it doesn't identify the particular data block I want. Can anyone please suggest how I might look for the first "final" after <code>name2</code> so that I can get a string made up of only the data block I want?</p>
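`str.find` accepts an optional start index, so the search for `"final"` can be anchored at the position where the wanted block begins. A minimal sketch, assuming block names like `name2` are unique in the file:

```python
text = """name1 1234567 comment
property1 = 1234567.98765 property2 = 1234567.98765
property3 = 1234567.98765
final

name2 1234568 comment
property1 = 987654.321 property2 = 9876543.0
property3 = 1234567.98765
final
"""

def extract_block(text, name, terminator="final"):
    """Return the block starting at `name`, up to and including the
    first `terminator` that follows it."""
    begin = text.find(name)
    if begin == -1:
        raise ValueError(f"{name!r} not found")
    stop = text.find(terminator, begin)  # search starts AT `begin`
    if stop == -1:
        raise ValueError(f"{terminator!r} not found after {name!r}")
    return text[begin:stop + len(terminator)]

block = extract_block(text, "name2")
```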
|
<python><python-3.x><replace><extract><slice>
|
2024-01-03 16:02:25
| 1
| 897
|
Ant
|
77,753,107
| 1,265,067
|
Flask ajax callback fails to update graph
|
<p>I'm trying to update a Plotly graph on click using Ajax. I don't understand why it only works the first time. The action here colors the clicked point; later I would like to perform other actions on click.</p>
<p>Here is the app:</p>
<pre><code>from flask import Flask, config, render_template, request
import numpy as np
import pandas as pd
import json
import plotly
import plotly.express as px
import plotly.graph_objects as go
app = Flask(__name__)
@app.route('/')
def index():
return render_template('data-explorer.html', graphJSON=map_filter())
@app.route('/scatter')
def scatter():
return map_filter(request.args.get('data'))
def map_filter(df=''):
x=[0, 1, 2, 3, 4]
y=[0, 1, 4, 9, 16]
if df=='':
fig = px.scatter(y, x)
else:
idx = x.index(int(df))
cols = ['blue']*len(x)
cols[idx] = 'red'
fig = px.scatter(y, x, color=cols)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
return graphJSON
if __name__=='__main__':
app.run(debug=True)
</code></pre>
<p>Here is the template <code>data-explorer.html</code>.</p>
<pre><code> <script>
function update_graph(selection){
var value= $.ajax({
url: "{{url_for('scatter')}}",
async: false,
data: { 'data': selection },
}).responseText;
return value;
}
</script>
<!-- Line plot html-->
<div id="chart" class="chart"></div>
<!-- Line plot -->
<script type="text/javascript">
d = {{ graphJSON | safe }};
var config = {displayModeBar: false};
var chart = document.getElementById('chart');
//Plotly.setPlotConfig(config);
console.log(d);
Plotly.newPlot('chart', d, {});
chart.on('plotly_click', function(data){
console.log(data.points[0].x);
//var figure = JSON.parse(myJson);
var result = JSON.parse(update_graph(data.points[0].x));
console.log(result);
Plotly.newPlot('chart', result, {});
});
</script>
</code></pre>
<p>Any help would be appreciated.</p>
|
<python><ajax><flask><plotly>
|
2024-01-03 15:58:05
| 1
| 897
|
user1265067
|
77,752,955
| 1,017,986
|
Dynamically discounted cumulative sum in Numpy
|
<p>I have a frequently occurring problem where I have two arrays of the same length: one with values and one with dynamic decay factors; and wish to calculate a vector of the decayed cumulative sum at each position. Using a Python loop to express the desired recurrence we have the following:</p>
<pre><code>c = np.empty_like(x)
c[0] = x[0]
for i in range(1, len(x)):
c[i] = c[i-1] * d[i] + x[i]
</code></pre>
<p>The Python code is very clear and readable, but slows things down significantly. I can get around this by using Numba to JIT-compile it, or Cython to precompile it. If the discount factors were a fixed number (which is not the case), I could have used SciPy's signals library and do an <code>lfilter</code> (see <a href="https://stackoverflow.com/a/47971187/1017986">https://stackoverflow.com/a/47971187/1017986</a>).</p>
<p>Is there a more "Numpythonic" way to express this without sacrificing clarity or efficiency?</p>
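One closed-form vectorisation uses cumulative products: with P[i] = d[1]·…·d[i] (and P[0] = 1), the recurrence unrolls to c[i] = P[i] · Σ_{j≤i} x[j]/P[j]. A hedged sketch — dividing by `cumprod` can under/overflow for long arrays or small decay factors, so the Numba-compiled loop remains the numerically safer option:

```python
import numpy as np

def discounted_cumsum(x, d):
    """Vectorised c[i] = c[i-1] * d[i] + x[i]; d[0] is ignored.

    Numerically fragile when cumprod(d) under/overflows; fine for
    moderate lengths and decay factors near 1.
    """
    d = np.asarray(d, dtype=float).copy()
    d[0] = 1.0
    p = np.cumprod(d)
    return p * np.cumsum(np.asarray(x, dtype=float) / p)

def discounted_cumsum_loop(x, d):
    """Reference implementation: the loop from the question."""
    c = np.empty_like(np.asarray(x, dtype=float))
    c[0] = x[0]
    for i in range(1, len(x)):
        c[i] = c[i - 1] * d[i] + x[i]
    return c
```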
|
<python><numpy><cython><numba>
|
2024-01-03 15:32:23
| 2
| 899
|
masaers
|
77,752,416
| 4,451,521
|
Poetry new does not create a python test
|
<p>I have googled this question but it is more basic than any of the results.
I am reading a <a href="https://realpython.com/dependency-management-python-poetry/" rel="nofollow noreferrer">tutorial</a> on Poetry and it says that when I do</p>
<pre><code>poetry new rp-poetry
</code></pre>
<p>I should get a <code>test_rp_poetry.py</code></p>
<p>However, I do not get that Python file.
I also noticed that, unlike what the tutorial says, the <code>__init__.py</code> files are empty.</p>
<p>Has the behavior of poetry changed lately?</p>
|
<python><python-poetry>
|
2024-01-03 14:03:37
| 1
| 10,576
|
KansaiRobot
|
77,752,407
| 9,462,829
|
FastAPI how to add Authorization header to next requests
|
<p>I've been working on an API's authentication and I'm stuck on something kind of simple (I hope). I have a <code>login</code> endpoint:</p>
<pre><code>@router.get("/login")
def login(request: Request):
return templates.TemplateResponse("auth/login.html", {"request": request})
@router.post("/login")
def login(
request: Request,
username: str = Form(...),
password: str = Form(...),
):
errors = []
user_db = pd.read_excel('users.xlsx')
user = authenticate_user(user_db=user_db, username=username, password=password)
access_token = create_access_token(data={"sub": username})
response.set_cookie(
key="access_token", value=f"Bearer {access_token}", httponly=True
)
response.headers['Authorization'] = f"Bearer {access_token}"
return response
</code></pre>
<p>This works and produces a token, but then I have another endpoint that requires authentication. However, it doesn't receive the "Authorization" header when I open it, which makes authentication fail:</p>
<pre><code>@router.get("/register")
def register(request: Request,
access_token: Annotated[Union[str, None], Cookie()] = None
):
# print(access_token)
#print(request.headers['Authorization'])
return templates.TemplateResponse("auth/register.html", {"request": request})
@router.post("/register")
def register(
request: Request,
current_user: Annotated[User, Depends(get_current_active_user)], # This here fails
username: str = Form(...),
password: str = Form(...),
password_confirm: str = Form(...),
):
print(current_user)
try:
user = UserCreate(username=username, password=password, password_confirm=password_confirm,
superuser = False)
if current_user.superuser:
message = crear_usuario(user=user)
return JSONResponse(content={'message': message})
else:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="User should be a superuser",
headers={"WWW-Authenticate": "Bearer"},
)
except ValidationError as e:
errors_list = json.loads(e.json())
for item in errors_list:
errors.append(item.get("loc")[0] + ": " + item.get("msg"))
return templates.TemplateResponse(
"auth/register.html", {"request": request, "errors": errors}
)
</code></pre>
<p>I managed to send the token programmatically by adding it to the headers in Python, but I need this to work for a regular browser user. What does the "Authorize" button in Swagger's UI do differently that lets me send the Authorization header with my requests?
<a href="https://i.sstatic.net/DbVzh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DbVzh.png" alt="enter image description here" /></a></p>
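Swagger's "Authorize" button simply stores the token client-side and attaches an `Authorization: Bearer <token>` header to every request it sends; a browser navigating to a page does not do that. For a browser flow, the usual pattern is to read the token back out of the cookie the login handler already sets. A minimal, hedged helper (the wiring into `get_current_active_user` is assumed, since it isn't shown in the question):

```python
def token_from_cookie(cookie_value):
    """Extract the raw JWT from a cookie stored as 'Bearer <token>'.

    Returns None when the cookie is missing or not a bearer credential.
    """
    if not cookie_value:
        return None
    scheme, _, token = cookie_value.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None
    return token
```

In FastAPI this would live inside a dependency that accepts `access_token: str | None = Cookie(default=None)` and falls back to the cookie when the `Authorization` header is absent.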
|
<python><authentication><oauth-2.0><fastapi>
|
2024-01-03 14:02:00
| 2
| 6,148
|
Juan C
|
77,752,307
| 15,098,472
|
Fill boxes in a 3D grid
|
<p>I got an array of <code>n</code> boxes, where each box has 8 4D coordinates, i.e. a <code>(x, y, z, l)</code>, where <code>(x,y,z)</code> are the coordinates and <code>l</code> is some label, like 'car'</p>
<pre><code>boxes.shape = (4, 8, 3, 1)
</code></pre>
<p>The positions are for example given in meters, and the label is a simple integer. One element of the array could thus look like so:</p>
<pre><code>boxes[0]
[
[0.0, 0.0, 0.0, 1],
[2.0, 0.0, 0.0, 1],
[2.0, 3.0, 0.0, 1],
[0.0, 3.0, 0.0, 1],
[0.0, 0.0, 1.0, 1],
[2.0, 0.0, 1.0, 1],
[2.0, 3.0, 1.0, 1],
[0.0, 3.0, 1.0, 1]
]
</code></pre>
<p>I want to sample points inside each box every <code>step_size</code> meters, keeping the <code>(x,y,z)</code> coordinates and the <code>labels</code> separately. For example, I want all points inside the box that are 0.01 meters apart, added to a list with the corresponding number of labels. I currently use the following approach:</p>
<pre><code>mins = np.min(boxes, axis=1)
maxs = np.max(boxes, axis=1)
# collect all new points
sampled_points = []
sampled_labels = []
# get the label for each box
labels = boxes[:, 3]
# distance between each point, equal for all dimensions
step_size = 0.01
# number of points we want to include for each dimension
num_points_x = np.floor((maxs[:, 0] - mins[:, 0]) / step_size).astype(int)
num_points_y = np.floor((maxs[:, 1] - mins[:, 1]) / step_size).astype(int)
num_points_z = np.floor((maxs[:, 2] - mins[:, 2]) / step_size).astype(int)
# loop over all boxes and create the points
for i in range(boxes.shape[0]):
    x_coords, y_coords, z_coords = np.mgrid[mins[i, 0]:maxs[i, 0]:num_points_x[i]*1j, # we use a complex number here, so that the endpoint is inclusive
mins[i, 1]:maxs[i, 1]:num_points_y[i]*1j,
mins[i, 2]:maxs[i, 2]:num_points_z[i]*1j]
points = np.vstack([x_coords.ravel(), y_coords.ravel(), z_coords.ravel()]).T
labels = np.repeat(labels[i], points.shape[0])
sampled_points.append(points)
sampled_labels.append(labels)
</code></pre>
<p>I am not sure if this is correct or whether there is a better way.</p>
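A slightly tidier per-box sketch, assuming axis-aligned boxes so the min/max corners fully describe the volume: `np.arange` with the step directly yields points `step_size` apart (instead of deriving a point count for `mgrid`), and `np.meshgrid` builds the grid:

```python
import numpy as np

def sample_box(min_xyz, max_xyz, label, step=0.01):
    """Sample a regular grid of points inside an axis-aligned box.

    Returns (points, labels): points has shape (m, 3), labels (m,).
    """
    axes = [
        np.arange(lo, hi + step / 2, step)  # +step/2 keeps the far face inclusive
        for lo, hi in zip(min_xyz, max_xyz)
    ]
    grid = np.meshgrid(*axes, indexing="ij")
    points = np.stack(grid, axis=-1).reshape(-1, 3)
    return points, np.full(len(points), label)

# 1 m cube sampled every 0.5 m → 3 points per axis → 27 points
points, labels = sample_box((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), label=1, step=0.5)
```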
|
<python>
|
2024-01-03 13:45:18
| 2
| 574
|
kklaw
|
77,752,028
| 1,515,333
|
jaydebeapi only exceptions from the first statement in Cursor.execute
|
<p>I'm connecting to a SQL Server from Python using <code>jaydebeapi</code>. When executing multiple <code>;</code>-separated SQL statements in a <strong>single</strong> <code>Cursor.execute</code> call, only errors from the <em>first</em> statement are thrown. That means when running</p>
<pre class="lang-py prettyprint-override"><code>curs.execute("""
correct/successful insert statement;
insert statement violating a foreign key;
""")
</code></pre>
<p>no Error is thrown. How do I make <code>jaydebeapi</code> or the SQL-Server stop on error?</p>
<p>One option is to use multiple <code>jaydebeapi.Cursor.execute</code> calls, i.e.</p>
<pre class="lang-py prettyprint-override"><code>curs.execute("correct/successful insert statement")
curs.execute("insert statement violating a foreign key;")
</code></pre>
<p>This is tricky in my situation, as I need to execute a complete SQL file which includes newlines and semicolons within quotes. Wrapping inside a transaction (with <code>SET XACT_ABORT ON</code>) does not work: while the transaction is of course not executed, no Python error is thrown, nor was I able to access any error message on the <code>jaydebeapi.Cursor</code>.</p>
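If falling back to one `execute` call per statement, the script can be split on semicolons with a small state machine that ignores semicolons inside single or double quotes. A hedged sketch (it does not handle SQL comments `--` or `/* */`, which would need extra states; doubled quotes `''` happen to work because they toggle the quote state twice):

```python
def split_sql(script):
    """Split a SQL script into statements on top-level semicolons.

    Semicolons inside '...' or "..." are preserved.  SQL comments are
    NOT handled; strip them first if the script contains any.
    """
    statements, buf, quote = [], [], None
    for ch in script:
        if quote:
            if ch == quote:
                quote = None          # closing quote
        elif ch in ("'", '"'):
            quote = ch                # opening quote
        elif ch == ";":
            stmt = "".join(buf).strip()
            if stmt:
                statements.append(stmt)
            buf = []
            continue                  # the semicolon itself is dropped
        buf.append(ch)
    tail = "".join(buf).strip()
    if tail:
        statements.append(tail)
    return statements
```

Executing `for stmt in split_sql(...): curs.execute(stmt)` then makes jaydebeapi raise on the first failing statement, and the whole run can be wrapped in a transaction with a rollback on exception.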
|
<python><sql-server><jaydebeapi>
|
2024-01-03 12:53:58
| 0
| 543
|
MeinAccount
|
77,751,795
| 2,447,233
|
How to get around "Instance attribute defined outside __init__" warning from PyCharm
|
<p>I have written this class with a load of variables in it that I need to be able to reset to defaults at any point. Initially I "declared" and initialised these in <code>__init__</code> (as much as you can declare anything in Python), but that meant that whenever something external wanted to reset these variables to their default values I'd have to repeat thirty-odd lines of code (as the class has about thirty variables).</p>
<p>I have seen one other post here that describes a similar issue, but the accepted answer was to set each variable to None and then you can access them from functions within the class without an issue.</p>
<p>In C++ you'd declare these variables as variables in the header and then you could call the same function both at instantiation and when <code>class_name.set_to_default()</code> was called and that's all easy and neat.</p>
<p>Surely there is a way to do the same in Python without an error for each variable? Something like:</p>
<pre><code> class Motor():
def __init__(self):
self.reset_all_values()
def reset_all_values(self):
self.speed_value = DEFAULT_SPEED_VALUE
self.hours_value = DEFAULT_HOURS_VALUE
# etc.
</code></pre>
<p>If so, how can I do it without PyCharm whinging at me?</p>
<p>Or do I just have to be incredibly verbose in the code? Is Python too different from other languages and you have to think in a completely different way?</p>
<p>I think it's important to note that I am a C programmer, who also does a little C++ and I do realise it's possible I simply think too much in terms of those languages.</p>
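PyCharm stops warning once it can see the attributes declared on the class. Class-level type annotations (PEP 526) declare the names without assigning values, so the reset method stays the single place that sets defaults — the closest Python analogue to a C++ header declaration. A sketch with hypothetical defaults:

```python
# hypothetical defaults for illustration
DEFAULT_SPEED_VALUE = 0
DEFAULT_HOURS_VALUE = 0.0

class Motor:
    # declared here (no values assigned), so the IDE knows they exist;
    # at runtime these annotations create no attributes
    speed_value: int
    hours_value: float

    def __init__(self):
        self.reset_all_values()

    def reset_all_values(self):
        self.speed_value = DEFAULT_SPEED_VALUE
        self.hours_value = DEFAULT_HOURS_VALUE

m = Motor()
m.speed_value = 99
m.reset_all_values()  # back to defaults, no duplicated assignment code
```

With thirty-odd attributes, `dataclasses.dataclass` with `field(default=...)` is another idiom worth considering, since the generated `__init__` already encodes the defaults in one place.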
|
<python><pycharm>
|
2024-01-03 12:14:24
| 0
| 876
|
DiBosco
|
77,751,790
| 1,444,097
|
BeautifulSoup(response.content, 'html.parser') return wrong html structure
|
<p>Why does</p>
<pre><code>soup = BeautifulSoup(response.content, 'html.parser')
</code></pre>
<p>return</p>
<pre><code><ul><li><li><li></li></li></li></ul>
</code></pre>
<p>instead of</p>
<pre><code><ul><li></li><li></li><li></li></ul>
</code></pre>
<p>Full code:</p>
<pre><code>from datetime import datetime
import requests
from bs4 import BeautifulSoup
def is_holiday_or_weekend():
current_year = datetime.now().year
today = datetime.now().strftime('%Y-%m-%d')
url = f"https://www.kalendorius.today/nedarbo-dienos/{current_year}"
# Start a session to maintain cookies
session = requests.Session()
try:
# Send initial request to get PHPSESSID cookie
session.get(url)
headers = {
'User-Agent': 'Mozilla/5.0',
'Accept': 'application/json',
}
# Fetch the holiday data with headers and cookies
response = session.get(url, headers=headers)
response.raise_for_status()
# Parse the HTML content
soup = BeautifulSoup(response.content, 'html.parser')
print(soup) # here is problems, wrong html structure
# Extract <li> elements within <ul> of class 'calendar-items-list'
holidays_elements = soup.find_all('ul', class_='calendar-items-list')
holidays = {}
for ul in holidays_elements:
print(ul)
for index, li in enumerate(ul.find_all('li')):
date, name = li.get_text().strip().split(' - ', 1)
if date not in holidays: # Add this check to avoid duplicates
holidays[date] = name
# Check if today is a holiday or a weekend
if today in holidays or datetime.now().weekday() >= 5:
return True
return False
except requests.RequestException as e:
print(f"Error fetching holiday data: {e}")
return None
# Usage
if is_holiday_or_weekend():
print("Today is a holiday or weekend.")
else:
print("Today is a regular working day.")
</code></pre>
<p>So how can I print each <code>li</code> element separately?</p>
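The root cause is the parser backend, not BeautifulSoup itself: Python's built-in `html.parser` reports tags exactly as written and does not imply the missing `</li>` end tags that the HTML spec allows, so consecutive `<li>` elements end up nested. The stdlib parser's event stream shows this directly:

```python
from html.parser import HTMLParser

class TagEvents(HTMLParser):
    """Record start/end tag events to show what html.parser actually sees."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))

p = TagEvents()
p.feed("<ul><li>a<li>b<li>c</ul>")
print(p.events)
# no ("end", "li") event is ever emitted, so a naive tree builder nests the <li>s
```

Installing `html5lib` (or `lxml`) and using `BeautifulSoup(response.content, 'html5lib')` applies the spec's implied-end-tag rules and produces sibling `<li>` elements; with the nested tree, `get_text()` on each `<li>` also includes the text of every descendant `<li>`, which is why the scraped rows look wrong.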
|
<python><web-scraping><beautifulsoup>
|
2024-01-03 12:14:07
| 1
| 2,075
|
Dmitrij Holkin
|
77,751,766
| 19,155,645
|
Extracting Zip File from public google drive using colab notebook
|
<p>I would like to download a dataset from a public google drive folder (saved as a zip).</p>
<p><code> url = https://drive.google.com/drive/folders/1TzwfNA5JRFTPO-kHMU___kILmOEodoBo</code></p>
<p>Since I want it to be reproducible by other people, I do not want to copy it to my drive (and preferably also not to mount my drive in the notebook).</p>
<p>How can it be done?</p>
<p>So far I tried:</p>
<pre><code>import requests
import io
import zipfile
zip_url = 'https://drive.google.com/file/d/1fdFu5NGXe4rTLYKD5wOqk9dl-eJOefXo'
response = requests.get(zip_url)
file_contents = io.BytesIO(response.content)
print(file_contents)
with zipfile.ZipFile(file_contents, 'r') as zip_ref:
zip_ref.extractall('/content/') # Replace with your desired extraction path
</code></pre>
<p>but I get this error (the print of <code>file_contents</code> is shown first):</p>
<pre><code><_io.BytesIO object at 0x7ad7efbf27f0>
---------------------------------------------------------------------------
BadZipFile Traceback (most recent call last)
<ipython-input-18-56d2c8f2bfe8> in <cell line: 14>()
12 print(file_contents)
13 # Extract the zip file (if needed)
---> 14 with zipfile.ZipFile(file_contents, 'r') as zip_ref:
15 zip_ref.extractall('/content/') # Replace with your desired extraction path
1 frames
/usr/lib/python3.10/zipfile.py in _RealGetContents(self)
1334 raise BadZipFile("File is not a zip file")
1335 if not endrec:
-> 1336 raise BadZipFile("File is not a zip file")
1337 if self.debug > 1:
1338 print(endrec)
BadZipFile: File is not a zip file
</code></pre>
<p>and if I try the following way, I get an empty zip file:</p>
<pre><code>file_id = '1fdFu5NGXe4rTLYKD5wOqk9dl-eJOefXo'
download_url = f'https://drive.google.com/uc?export=download&id={file_id}'
!wget --no-check-certificate -O '/content/file.zip' 'https://drive.google.com/uc?export=download&id=1fdFu5NGXe4rTLYKD5wOqk9dl-eJOefXo'
</code></pre>
<p>Any help would be appreciated.</p>
|
<python><jupyter-notebook><google-drive-api><dataset><google-colaboratory>
|
2024-01-03 12:10:09
| 1
| 512
|
ArieAI
|
77,751,752
| 13,225,321
|
ValueError: Attempt to convert a value (class) with an unsupported type (class) to a Tensor
|
<p>This code was running flawlessly on tensorflow==2.15. For the purpose of GPU acceleration, I switched to tensorflow-gpu==2.10.1 with keras 2.10, and this ValueError appeared when I tried to use a decay class for the learning rate. My model build is below:</p>
<pre><code>#import packages
import pandas as pd
import numpy as np
import datetime as dt
import tushare as ts
import moduleigh as ml
token = ml.mytoken['Leigh']
ts.set_token(token)
pro = ts.pro_api(token)
# to plot within notebook
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False
#setting figure size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20,10
rcParams['font.sans-serif'] = ['SimHei']
rcParams['axes.unicode_minus'] = False
import tensorflow as tf
import keras
from tensorflow.keras.layers import \
Input, BatchNormalization, GaussianNoise, \
Dense, Activation, Dropout, \
Concatenate, Layer, Conv1D, \
Bidirectional, LayerNormalization, LSTM, add
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam, schedules
from tensorflow.keras.losses import MeanSquaredError, BinaryCrossentropy
from tensorflow.keras.metrics import MeanAbsoluteError, AUC
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, TensorBoard
from tensorflow.keras.utils import plot_model
from tensorflow.compat.v1.keras.layers import CuDNNLSTM
# sklearn
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
### GPU
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
</code></pre>
<pre><code>def cnn_lstm2_att():
model = build_model(**model_params)
lr = 0.001
ls = 0.01
batch = 16
epochs = 5
lr_scheduler = schedules.ExponentialDecay(
initial_learning_rate = lr,
decay_steps = int((x_train.shape[0] + batch)/batch),
decay_rate = 0.95
)
model.compile(
optimizer = Adam(learning_rate = lr_scheduler),
# optimizer = Adam(learning_rate = lr) ***this one works with a fix value***,
loss = {'dense_3': BinaryCrossentropy(label_smoothing = ls), },
metrics = {'dense_3': AUC(name = 'AUC'), },
)
my_callbacks = [
ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=10, min_lr=0.00001, verbose=1),
ModelCheckpoint(filepath=model_path, monitor='loss', save_best_only=True, verbose=1),
EarlyStopping(patience=10, monitor='loss', mode='min', verbose=1, restore_best_weights=True),
TensorBoard(log_dir=logdir, histogram_freq=1)
]
_history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch, verbose=1, callbacks=my_callbacks)
inputs = new_data[len(new_data) - len(valid_MLP) - 60:].values
inputs = inputs.reshape(-1,nums_features)
inputs = scaler.transform(inputs)
X_test = []
for i in range(60,inputs.shape[0]):
X_test.append(inputs[i-60:i,:])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], nums_features))
closing_price = model.predict(X_test)
closing_price_reshaped = np.zeros((closing_price.shape[0], nums_features))
closing_price_reshaped[:,0] = np.squeeze(closing_price)
preds = scaler.inverse_transform(closing_price_reshaped)[:,0]
rms=np.sqrt(np.mean(np.power((valid_MLP-preds),2)))
return preds, rms, _history
</code></pre>
<p>The problem is raised after the first epoch, once the learning rate starts to decay in the later epochs:</p>
<pre><code>---> 48 _history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch, verbose=1, callbacks=my_callbacks)
50 inputs = new_data[len(new_data) - len(valid_MLP) - 60:].values
51 inputs = inputs.reshape(-1,nums_features)
File ~\.conda\envs\py39tf27\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~\.conda\envs\py39tf27\lib\site-packages\tensorboard\plugins\scalar\summary_v2.py:88, in scalar(name, data, step, description)
83 summary_scope = (
84 getattr(tf.summary.experimental, "summary_scope", None)
85 or tf.summary.summary_scope
86 )
87 with summary_scope(name, "scalar_summary", values=[data, step]) as (tag, _):
---> 88 tf.debugging.assert_scalar(data)
89 return tf.summary.write(
90 tag=tag,
91 tensor=tf.cast(data, tf.float32),
92 step=step,
93 metadata=summary_metadata,
94 )
</code></pre>
<pre><code>ValueError: Attempt to convert a value (<keras.optimizers.schedules.learning_rate_schedule.ExponentialDecay object at 0x00000247A305AD60>) with an unsupported type (<class 'keras.optimizers.schedules.learning_rate_schedule.ExponentialDecay'>) to a Tensor.
</code></pre>
<p>I have tried numerous approaches, such as <code>cast</code>, <code>Variable</code>, and <code>convert_to_tensor</code>, but none of them worked; the same error message was raised every time. When I simply set the learning rate to a fixed value, like 0.001, it works without a problem. I am just not sure what is going on with this class type.</p>
<p>How can I work through this one? Thank you very much.</p>
|
<python><tensorflow><tensor><python-class><decay>
|
2024-01-03 12:07:31
| 0
| 329
|
pepCoder
|
77,751,706
| 6,049,429
|
different results with icontains in django ORM
|
<p>I'm filtering results from django database (mysql) with the following:</p>
<pre><code>queryset = MyModel.objects.filter(value_name__icontains=search).order_by("pk")
class MyModel(models.Model):
...
value_name = models.CharField(unique=True, max_length=255)
...
class Meta:
managed = False
db_table = "my_table"
</code></pre>
<p>The collation value for field <code>value_name=utf8mb4_0900_bin</code></p>
<p>When the value of <code>search</code> is <code>"bin"</code>, I get one set of results;
all of them contain the substring "bin".</p>
<p>When the value of <code>search</code> is <code>"Bin"</code>, I get a different set of results;
all of them contain the substring "Bin".</p>
<p>It looks like a collation issue; how can I fix it?
I can't change anything on the database, as I only have read access.</p>
<p>My django version is 3.2.</p>
<p>I tried this but it did not work.</p>
<p><code>value_name = models.CharField(unique=True, max_length=255, db_collation='utf8_general_ci')</code></p>
|
<python><django><django-models><django-orm>
|
2024-01-03 11:59:11
| 1
| 984
|
Cool Breeze
|
77,751,594
| 1,285,061
|
NumPy change float precision for the array
|
<p>How do I change the float precision for an entire array without having to do <code>np.set_printoptions</code>? It is not about printing; I want the stored values reduced in precision so I can do equality checks with other arrays.</p>
<pre><code>e = np.array([0.8292222222222225, 0.1310000000000003])
</code></pre>
<p>to</p>
<pre><code>e = np.array([0.829225, 0.131003])
</code></pre>
<p>I want to be able to compare <code>0.8292222222222225</code> with <code>0.829225</code> in other arrays.</p>
<p>I cannot get the equality check to return <code>True</code>:</p>
<pre><code>>>> e = np.array([0.8292222222222225, 0.1310000000000003])
>>> e[0]
0.8292222222222225
>>> e[0]==0.829225
False
>>>
</code></pre>
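<p>If it helps clarify the goal: what I effectively want is for comparisons to succeed up to a small tolerance, something like this (the tolerance here is just picked to fit the example values):</p>

```python
import numpy as np

e = np.array([0.8292222222222225, 0.1310000000000003])
other = np.array([0.829225, 0.131003])

# exact float equality fails, as shown above
print((e == other).any())                             # False

# ...but an elementwise comparison with a small tolerance succeeds
print(np.isclose(e, other, rtol=0, atol=1e-5).all())  # True
```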
|
<python><numpy><floating-point>
|
2024-01-03 11:36:08
| 1
| 3,201
|
Majoris
|
77,751,307
| 12,633,371
|
Divide every nth element of a Polars list with a number
|
<p>I have the following DataFrame</p>
<pre><code>import polars as pl
pl.Config(fmt_table_cell_list_len=8, fmt_str_lengths=100)
data = {
'col': [[11, 21, 31, 41, 51], [12, 22, 32, 42, 52], [13, 23, 33, 43, 53]]
}
df = pl.DataFrame(data)
</code></pre>
<pre><code>shape: (3, 1)
┌──────────────────────┐
│ col │
│ --- │
│ list[i64] │
╞══════════════════════╡
│ [11, 21, 31, 41, 51] │
│ [12, 22, 32, 42, 52] │
│ [13, 23, 33, 43, 53] │
└──────────────────────┘
</code></pre>
<p>Starting from the first element of each list, I want to divide every second element of the list by one number, and then, starting from the second element, divide every second element by another number. For example, if these two numbers are 5 and 10 respectively, the first list will be transformed like this</p>
<pre class="lang-py prettyprint-override"><code>[11/5, 21/10, 31/5, 41/10, 51/5]
</code></pre>
<p>resulting in</p>
<pre><code>[2.2, 2.1, 6.2, 4.1, 10.2]
</code></pre>
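<p>In plain Python, the per-list transformation I am describing would be (this is only to pin down the expected values, not an attempted solution):</p>

```python
divisors = [5, 10]  # applied alternately, starting from the first element
row = [11, 21, 31, 41, 51]
result = [value / divisors[i % 2] for i, value in enumerate(row)]
print(result)  # [2.2, 2.1, 6.2, 4.1, 10.2]
```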
<p>I want to do the same transformation for all the lists of the column. How can I do that using the polars API?</p>
<pre><code>┌────────────────────────────┐
│ col │
│ --- │
│ list[f64] │
╞════════════════════════════╡
│ [2.2, 2.1, 6.2, 4.1, 10.2] │
│ [2.4, 2.2, 6.4, 4.2, 10.4] │
│ [2.6, 2.3, 6.6, 4.3, 10.6] │
└────────────────────────────┘
</code></pre>
|
<python><dataframe><list><python-polars>
|
2024-01-03 10:41:09
| 2
| 603
|
exch_cmmnt_memb
|
77,751,015
| 14,650,673
|
Creating a Tkinter Application for Farsi/Persian Language Support with Correct RTL Display
|
<p>I am working on a Tkinter application where I need to support the Farsi/Persian language. The title displays correctly, but the rest of the widgets don't show the Persian text as expected. I want the text to be right-to-left (RTL), and the Persian letters should be joined together as they normally are.</p>
<p>Here's a simplified version of my code:</p>
<pre><code>import tkinter as tk
from tkinter import messagebox
class InvoiceApp:
def __init__(self, master):
self.master = master
self.master.title("نرمافزار صورتحساب")
# لیست محصولات
self.products = []
# ایجاد ویجتها
self.product_label = tk.Label(master, text='نام محصول:', justify='right')
self.product_entry = tk.Entry(master)
self.price_label = tk.Label(master, text="قیمت:")
self.price_entry = tk.Entry(master)
self.add_button = tk.Button(master, text="افزودن به صورتحساب", command=self.add_to_invoice, justify='right')
self.create_invoice_button = tk.Button(master, text="ساخت صورتحساب", command=self.create_invoice)
self.send_email_button = tk.Button(master, text="ارسال ایمیل", command=self.send_email)
def add_to_invoice(self):
# افزودن محصول به لیست
pass
def create_invoice(self):
# ساخت صورتحساب
pass
def send_email(self):
# ارسال ایمیل
pass
if __name__ == "__main__":
root = tk.Tk()
app = InvoiceApp(root)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/0qwjn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0qwjn.png" alt="enter image description here" /></a>
I have tried setting the font and anchor properties, but it doesn't seem to work as expected. Can someone guide me on how to configure Tkinter widgets to display Persian text correctly, right-to-left, and with letters sticking together?</p>
<p>Any help would be greatly appreciated! Thank you.</p>
|
<python><tkinter><farsi>
|
2024-01-03 09:53:24
| 1
| 441
|
kzlca
|
77,750,570
| 936,269
|
Cannot start windows services
|
<p>I have a Python program which has to run as a Windows service.
However, when I try to start the service, I get the following error:</p>
<pre><code>Error starting service: The service did not respond to the start or control request in a timely fashion.
</code></pre>
<p>As in this question <a href="https://stackoverflow.com/q/64031887/936269">Error starting python windows service compiled using pyinstaller</a> I can both install and remove the service and it is only start which fails.</p>
<p>I started the service development based on this reply: <a href="https://stackoverflow.com/a/32440/936269">https://stackoverflow.com/a/32440/936269</a> to question: <a href="https://stackoverflow.com/q/32404/936269">How do you run a Python script as a service in Windows?</a></p>
<p>However, even with a very stripped-down service (see below), I get the exact same error.</p>
<pre class="lang-py prettyprint-override"><code>import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
class AppServerSvc (win32serviceutil.ServiceFramework):
_svc_name_ = "TestService"
_svc_display_name_ = "Test Service"
def __init__(self,args):
win32serviceutil.ServiceFramework.__init__(self,args)
self.hWaitStop = win32event.CreateEvent(None,0,0,None)
socket.setdefaulttimeout(60)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def SvcDoRun(self):
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_,''))
self.main()
def main(self):
pass
if __name__ == '__main__':
win32serviceutil.HandleCommandLine(AppServerSvc)
</code></pre>
<p>I have the following in <code>requirements.txt</code> file:</p>
<pre><code>pywin32==306
pyinstaller==6.2.0
</code></pre>
<p>I have tried both just running</p>
<pre><code>python main.py install
python main.py start
</code></pre>
<p>And I have tried using pyinstaller:</p>
<pre><code>pyinstaller --hiddenimport win32timezone --onefile .\main.py
</code></pre>
<p>And then:</p>
<pre><code>main.exe install
main.exe start
</code></pre>
<p>I am running on Windows 10 Pro and Python 3.10.4.
I do install the service under my local admin user.</p>
<p>I cannot see how I can make the service even simpler to test.</p>
|
<python><pywin32>
|
2024-01-03 08:29:45
| 0
| 2,208
|
Lars Kakavandi-Nielsen
|
77,750,442
| 11,054,829
|
python uses 100% of single CPU (Google Compute Engine) - RealESRGAN
|
<p>I have the following setup:</p>
<ul>
<li>Google Compute Engine VM with Nvidia Tesla T4 , 4CPU 16GB RAM</li>
<li>Conda environment where I am running the <code>inference_realesrgan.py</code> cloned from <a href="https://github.com/xinntao/Real-ESRGAN" rel="nofollow noreferrer">https://github.com/xinntao/Real-ESRGAN</a></li>
<li>Everything works as expected - the upscaling process is fine, but slow. The bottleneck is probably not the Nvidia GPU but the Python process, which uses only a single thread (probably for reading/converting/handling images via OpenCV? see the attached image)</li>
<li>My goal: utilize all 4 CPUs during the upscale process</li>
<li>my initial command: <code>conda activate base && python ~/Real-ESRGAN/inference_realesrgan.py -i ~/in -o ~/out -dn 1 -t 660 -g 0 --model_path ~/my_model.pth</code></li>
</ul>
<p>The screenshot from Google Cloud VM - htop:
<a href="https://i.sstatic.net/ZxfvV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZxfvV.png" alt="enter image description here" /></a></p>
<p>Any hint/help appreciated!</p>
|
<python><google-cloud-vm>
|
2024-01-03 08:04:07
| 1
| 446
|
FeHora
|
77,750,328
| 2,170,269
|
Implementing operators using data model objects and checking with pyright
|
<p>I am working on a class where I have to implement arithmetic operators (<code>+</code>, <code>-</code>, <code>*</code>, <code>/</code>), where each operator has multiple overloads, but the overloads are all the same. Instead of copying the overloads to each operator, I thought I would implement it as a data model object, something along the lines of this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable as Fn, Any, overload
import operator
class Apply:
"""Apply an operator to an object."""
def __init__(self, op: Fn[[Any, Any], Any], obj: Any) -> None:
self.op = op
self.obj = obj
# Two mock overloads...
@overload
def __call__(self, x: int) -> str: ...
@overload
def __call__(self, x: str) -> int: ...
def __call__(self, x: int | str) -> str | int:
...
class Op:
"""Data model object for an operator."""
def __init__(self, op: Fn[[Any, Any], Any]) -> None:
self.op = op
def __get__(self, obj: Any, _: Any) -> Apply:
return Apply(self.op, obj)
class Foo:
__add__ = Op(operator.add)
__mul__ = Op(operator.mul)
foo = Foo()
a: str = foo.__add__(2) # works fine
b: int = foo.__mul__("2") # works fine
_ = foo + 1 # type error
_ = foo * "2" # type error
</code></pre>
<p>The idea would be to have all the operator overloads implemented once, with a dispatch to an <code>operator</code> for the dynamic behaviour. That way, I avoid multiple copies of the boilerplate code, and I still get all my operators with the overload type annotations.</p>
<p>But the last two lines give me type errors when I check with pyright (<a href="https://pyright-play.net/?code=GYJw9gtgBALgngBwJYDsDmUkQWEMoDCAhgDYlEBGJAplEQM5QBiKANFAIIpztgBu1ECTBEAJgCgsOPFDAJBRGLnEqAxuXqMOCBCTgAucVGNQAROe264dFLPkhFuWGBuyKAK2qqYAOnOmVEyhRamAoAH1w1CQYSIAKemoSYF4EfWYUAG1Mrh5ObgBddlyit3d03IBKKABaAD4oADkwFGpDIKDE5J85KABeOyMO4y7gHo9%2BssCTAGIoABUAdxcIMFUAa1kBIRFReh8DoeMAAX5BYTEj4NCI8NVSEnjR9gAPdNQYavqoehgQdKgBx8V1O2wuEiCITCkXuZCeSRSUDePz%2BXwaHwBQOmxiht1hj3CCQRr3eKHwAB8USA0VSoJSMVcgli1BpGAB5NJXfwAEUURCgqxCJDKXnwwCcRFscgUShAfnM2Ou0KiKBi8OSqXSLGyuWKhT1cAKNOarXawx%2BCJ6CEmckVuMiaGosUJzzKFW47HC7rgNMsejNwxAToAriBbH64ETunJeB5KipxOoGIwmGAwAHbmJRJFJhy4tKHLKfFn40FIhBgwTcwh8-ZHHKKyR4ypxS4BqmwHF40R0r8QJNWz5Iln4gAmeMUUn4AaD8uV%2BKmUemePhAdpqAAaigAEZxKuZ%2BuAFRmJfiIA" rel="nofollow noreferrer">playground</a>). It seems to be working with mypy, but I use pylance+pyright for my project and would rather not change type checker.</p>
<p>Is there a way to get this to work with pyright?</p>
|
<python><pyright>
|
2024-01-03 07:37:50
| 1
| 1,844
|
Thomas Mailund
|
77,750,241
| 2,497,309
|
How do we properly mock async_engine and async_sessionmaker in python?
|
<p>I'm using SQLAlchemy and a Postgres DB in a project and wanted to write unit tests for a function which writes to the database. I'd like to use mocks instead of actually writing to a test database. This is how the engine and session are created in the application code:</p>
<pre><code>async_engine = create_async_engine(
"postgresql+asyncpg://",
isolation_level="REPEATABLE READ",
)
async_ses = async_sessionmaker(
autocommit=False, autoflush=False, bind=async_engine
)
async def write_to_db():
async with async_ses() as db:
obj = (
(await db.execute(select(Item).where(Item.id == 100)))
.scalars()
.first()
)
...
</code></pre>
<p>I then use the following pytest fixtures to mock this:</p>
<pre><code>@pytest.fixture
async def mock_async_engine():
# Mock the async engine
mock_engine = MagicMock() # Create a mock or MagicMock for AsyncEngine
with patch('db.async_engine', return_value=mock_engine):
yield mock_engine
@pytest.fixture
async def mock_async_session():
# Mock the async session
mock_session = MagicMock() # Create a mock or MagicMock for AsyncSession
with patch('db.async_ses', return_value=mock_session):
yield mock_session
@pytest.mark.asyncio
async def test_write_to_db(mock_async_engine, mock_async_session):
async with async_ses() as db:
# TEST
</code></pre>
<p>However I noticed that it ends up writing to the actual database instead of mocking it out.</p>
|
<python><sqlalchemy><pytest><python-asyncio><pytest-mock>
|
2024-01-03 07:15:51
| 0
| 947
|
asm
|
77,750,066
| 6,108,107
|
Group axis labels for seaborn box plots
|
<p>I want grouped axis labels for box plots, for example a bit like this bar chart where the x-axis is hierarchical:
<a href="https://i.sstatic.net/RiomQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RiomQ.png" alt="enter image description here" /></a></p>
<p>I am struggling to work with groupby objects to extract the values for the box plot.</p>
<p>I have found this <a href="https://stackoverflow.com/q/58854335">heatmap example</a>, which references this <a href="https://stackoverflow.com/a/39502106/6108107">stacked bar answer</a> from @Stein, but I can't get it to work for my box plots (I know I don't want the 'sum' of the groups, but I can't figure out how to get the values grouped correctly). In my real data the group sizes will differ, unlike in the example data where they are all the same. I don't want to use seaborn's 'hue' as a solution, since I want all the boxes to be the same color.</p>
<p>This is the closest I have got, thanks:</p>
<pre><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from itertools import groupby
def test_table():
data_table = pd.DataFrame({'Room':['Room A']*24 + ['Room B']*24,
'Shelf':(['Shelf 1']*12 + ['Shelf 2']*12)*2,
'Staple':['Milk','Water','Sugar','Honey','Wheat','Corn']*8,
'Quantity':np.random.randint(1, 20, 48),
})
return data_table
def add_line(ax, xpos, ypos):
line = plt.Line2D([xpos, xpos], [ypos + .1, ypos],
transform=ax.transAxes, color='black')
line.set_clip_on(False)
ax.add_line(line)
def label_len(my_index,level):
labels = my_index.get_level_values(level)
return [(k, sum(1 for i in g)) for k,g in groupby(labels)]
def label_group_bar_table(ax, df):
ypos = -.1
scale = 1./df.index.size
for level in range(df.index.nlevels)[::-1]:
pos = 0
for label, rpos in label_len(df.index,level):
lxpos = (pos + .5 * rpos)*scale
ax.text(lxpos, ypos, label, ha='center', transform=ax.transAxes)
add_line(ax, pos*scale, ypos)
pos += rpos
add_line(ax, pos*scale , ypos)
ypos -= .1
df = test_table().groupby(['Room','Shelf','Staple']).sum()
fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(111)
sns.boxplot(x=df.Quantity, y=df.Quantity,data=df)
#Below 3 lines remove default labels
labels = ['' for item in ax.get_xticklabels()]
ax.set_xticklabels(labels)
ax.set_xlabel('')
label_group_bar_table(ax, df)
fig.subplots_adjust(bottom=.1*df.index.nlevels)
plt.show()
</code></pre>
<p>Which gives:</p>
<p><a href="https://i.sstatic.net/kHLbh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kHLbh.png" alt="box plots with multi level axis labels" /></a></p>
|
<python><pandas><matplotlib><seaborn>
|
2024-01-03 06:26:16
| 1
| 578
|
flashliquid
|
77,749,714
| 15,542,245
|
Regex to count words after an underscored word
|
<p>I want to count an unknown number of words in a string that appear after an underscored word.</p>
<pre><code>testString='21 High Street _Earth Mighty Motor Mechanic'
</code></pre>
<p>I can match these words using the non-capture group <code>(?:\s[a-zA-Z]+)</code>, but I cannot build the regex up to exclude what comes before the underscored word from the match. See the <a href="https://regex101.com/r/ULGx0d/1" rel="nofollow noreferrer">demo</a>.</p>
<p>I was looking to use the completed pattern in a Python script as follows:</p>
<pre><code>import re
pattern = r'(?:\s[a-zA-Z]+)'
results = re.findall(pattern, testString)
if results:
answer = len(results)
</code></pre>
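<p>One direction I have been sketching (I am not sure it is robust for every input) is to anchor on the underscored word first and capture the whole run of words after it in a single group:</p>

```python
import re

testString = '21 High Street _Earth Mighty Motor Mechanic'

# anchor on the underscored word, then capture the run of words after it
m = re.search(r'_[a-zA-Z]+((?:\s[a-zA-Z]+)+)', testString)
count = len(m.group(1).split()) if m else 0
print(count)  # 3 -> 'Mighty', 'Motor', 'Mechanic'
```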
|
<python><regex>
|
2024-01-03 04:20:33
| 2
| 903
|
Dave
|
77,749,705
| 10,200,497
|
Creating a new column by resetting cummin() of another column and a condition
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [98, 97, 100, 135, 103, 100, 105, 109, 130],
'b': [100, 103, 101, 105, 110, 120, 101, 150, 160]
}
)
</code></pre>
<p>And this is the desired output. I want to create column <code>c</code>:</p>
<pre><code> a b c
0 98 100 100
1 97 103 100
2 100 101 100
3 135 105 100
4 103 110 110
5 100 120 110
6 105 101 101
7 109 150 150
8 130 160 150
</code></pre>
<p>It is not so easy for me to describe the issue in plain English, since it is a little bit complicated. <code>c</code> is <code>df.b.cummin()</code>, but under certain conditions it changes. I will describe it row by row:</p>
<p>The process starts with:</p>
<pre><code>df['c'] = df.b.cummin()
</code></pre>
<p>The condition that changes <code>c</code> is :</p>
<pre><code>cond = df.a.shift(1) > df.c.shift(1)
</code></pre>
<p>Now the rows that matter are the ones where <code>cond == True</code>. For these rows, <code>df.c = df.b</code>, and the <code>cummin()</code> of <code>b</code> <em>resets</em>.</p>
<p>For example, the first instance of <code>cond</code> is row <code>4</code>. So <code>c</code> changes to 110 (in other words, whatever <code>b</code> is). And for row <code>5</code> it is the <code>cummin()</code> of <code>b</code> from row <code>4</code>. The logic is the same to the end.</p>
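<p>To make the rule unambiguous, here is a slow, row-by-row reference implementation of what I am describing (plain lists, no pandas):</p>

```python
a = [98, 97, 100, 135, 103, 100, 105, 109, 130]
b = [100, 103, 101, 105, 110, 120, 101, 150, 160]

c = []
for i in range(len(b)):
    if i > 0 and a[i - 1] > c[i - 1]:
        c.append(b[i])                                # condition hit: c resets to b
    else:
        c.append(min(c[i - 1], b[i]) if i else b[i])  # running cummin of b
print(c)  # [100, 100, 100, 100, 110, 110, 101, 150, 150]
```

<p>This reproduces the desired column <code>c</code> above; I just cannot find a vectorized equivalent.</p>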
<p>This is one of my attempts. But it does not work where the <code>cond</code> kicks in:</p>
<pre><code>df['c'] = df.b.cummin()
df.loc[df.a.shift(1) > df.c.shift(1), 'c'] = df.b
</code></pre>
<hr />
<p>PS:</p>
<p>The accepted answer works for this example, but for my real data, which is much bigger than this, it didn't work as expected. I still haven't found the problem with it.</p>
|
<python><pandas><dataframe>
|
2024-01-03 04:16:41
| 3
| 2,679
|
AmirX
|
77,749,362
| 508,330
|
Install linux dependency during the python WebApp zip deployment (Azure, Oryx)
|
<p>I would like to install a Linux package during the deployment of a Python Azure Web App (App Service).</p>
<p>Type of the app is "code" and at the moment I cannot use docker container deployment due to non-technical restrictions.</p>
<p>However, the application requires a Linux library to be installed.
I am searching for alternatives for delivering it without using container solutions.</p>
<p>One of the articles suggested checking the Oryx deployment system, but I cannot find any good details or a demo where a package's shared library is installed when code is deployed using a zip package.</p>
<p>I would appreciate any suggestion.</p>
|
<python><azure><deployment><web-applications><oryx>
|
2024-01-03 01:52:27
| 1
| 4,473
|
Ievgen
|
77,749,311
| 2,924,334
|
Replace multiple matching groups with modified captured groups
|
<p>I am reading text from a file that contains flags <code>start</code> and <code>end</code>. I want to replace everything between <code>start</code> and <code>end</code> with the same text except I want to remove any newlines in the matching group.</p>
<p>I tried to do it as follows:</p>
<pre><code>import re
start = '---'
end = '==='
text = '''\
Some text
---line 1
line 2
line 3===
More text
...
Some more text
---line 4
line 5===
and even more text\
'''
modified = re.sub(pattern=rf'{start}(.+){end}', repl=re.sub(r'\n', ' ', r'\1'), string=text, flags=re.DOTALL)
print(modified)
</code></pre>
<p>This prints:</p>
<pre><code>Some text
line 1
line 2
line 3===
More text
...
Some more text
---line 4
line 5
and even more text
</code></pre>
<p>There are a couple of issues with this: 1. it matches the largest possible span (instead of the smaller individual matches), and 2. it does not remove the newlines.</p>
<p>I am expecting the output to be:</p>
<pre><code>Some text
line 1 line 2 line 3
More text
...
Some more text
line 4 line 5
and even more text
</code></pre>
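<p>I suspect the inner <code>re.sub</code> runs on the literal string <code>r'\1'</code> before the outer substitution happens (so the replacement ends up being the untouched backreference), and the greedy <code>.+</code> spans from the first <code>---</code> to the last <code>===</code>. With a non-greedy group and a callable replacement I do get the expected output, though I am not sure this is the idiomatic fix:</p>

```python
import re

start, end = '---', '==='
text = '''\
Some text
---line 1
line 2
line 3===
More text
...
Some more text
---line 4
line 5===
and even more text\
'''

# non-greedy group so each start...end span matches separately;
# a callable replacement so the newline removal runs per match
modified = re.sub(rf'{start}(.+?){end}',
                  lambda m: m.group(1).replace('\n', ' '),
                  text, flags=re.DOTALL)
print(modified)
```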
<p>Any help will be appreciated. Thank you!</p>
|
<python><python-re>
|
2024-01-03 01:29:21
| 1
| 587
|
tikka
|
77,749,113
| 15,587,184
|
KerasTuner: Custom Metrics (e.g., F1 Score, AUC) in Objective with RandomSearch Error
|
<p>I'm using KerasTuner for hyperparameter tuning of a Keras neural network. I would like to use common metrics such as F1 score, AUC, and ROC as part of the tuning objective. However, when I specify these metrics in the kt.Objective during RandomSearch, I encounter issues with KerasTuner not finding these metrics in the logs during training.</p>
<p>Here is an example of how I define my objective:</p>
<pre><code>tuner = kt.RandomSearch(
MyHyperModel(),
objective=kt.Objective("val_f1", direction="max"),
max_trials=100,
overwrite=True,
directory="my_dir",
project_name="tune_hypermodel",
)
</code></pre>
<p>But I get:</p>
<pre><code>RuntimeError: Number of consecutive failures exceeded the limit of 3.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/base_tuner.py", line 273, in _try_run_and_update_trial
self._run_and_update_trial(trial, *fit_args, **fit_kwargs)
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/base_tuner.py", line 264, in _run_and_update_trial
tuner_utils.convert_to_metrics_dict(
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner_utils.py", line 132, in convert_to_metrics_dict
[convert_to_metrics_dict(elem, objective) for elem in results]
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner_utils.py", line 132, in <listcomp>
[convert_to_metrics_dict(elem, objective) for elem in results]
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner_utils.py", line 145, in convert_to_metrics_dict
best_value, _ = _get_best_value_and_best_epoch_from_history(
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/tuner_utils.py", line 116, in _get_best_value_and_best_epoch_from_history
objective_value = objective.get_value(metrics)
File "/usr/local/lib/python3.10/dist-packages/keras_tuner/src/engine/objective.py", line 59, in get_value
return logs[self.name]
KeyError: 'val_f1'
</code></pre>
<p>Are the available metric names listed in the Keras documentation? I have searched and can't seem to find them. The only snippet of code that has worked for me uses the accuracy metric, like this:</p>
<pre><code>import keras_tuner as kt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2
from kerastuner.tuners import RandomSearch
class MyHyperModel(kt.HyperModel):
def build(self, hp):
model = Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=24, max_value=128, step=10),
activation="relu",
)
)
model.add(layers.Dense(1, activation="sigmoid"))
model.compile(
optimizer=Adam(learning_rate=hp.Float('learning_rate', 5e-5, 5e-1, step=0.001)),#,Adam(learning_rate=hp.Float('learning_rate', 5e-5, 5e-1, sampling='log')),
loss='binary_crossentropy',
metrics=['accuracy']
)
return model
def fit(self, hp, model, *args, **kwargs):
return model.fit(
*args,
batch_size=hp.Choice("batch_size", [16, 32,52]),
epochs=hp.Int('epochs', min_value=5, max_value=25, step=5),
**kwargs,
)
tuner = kt.RandomSearch(
MyHyperModel(),
objective="val_accuracy",
max_trials=100,
overwrite=True,
directory="my_dir",
project_name="tune_hypermodel",
)
tuner.search(X_train, y_train, validation_data=(X_test, y_test), callbacks=[keras.callbacks.EarlyStopping('val_loss', patience=3)])
</code></pre>
<p>Is it possible that Keras only supports accuracy as the default metric, and we'll have to define any other metric ourselves? How can I define objective metrics for AUC and F1?</p>
|
<python><tensorflow><keras>
|
2024-01-03 00:00:02
| 1
| 809
|
R_Student
|
77,749,076
| 1,172,265
|
Efficient Text Search for Large Term List: Python vs PostgreSQL vs Elasticsearch
|
<p>I have a list containing terms that vary in length from 1 to 10 words, with approximately 500,000 entries. My goal is to search for these terms in a long text (converted from a PDF, typically 1.5 to 2 pages long). I need to perform the searches not only as exact matches but also with fuzzy matching (e.g., the term 'Lionel Messi' should match 'Lionel Mesi' in the text) and 'near' matching (e.g., the term 'Lionel Messi' should match 'Lionel J. Messi' in the text).</p>
<p>I aim to solve this problem in near real time (1-2 seconds). I've tried using trie data structures and parallelization, but especially when the fuzzy aspect comes into play, the large size of the list and the length of the PDF lead to long processing times (about 30 seconds).</p>
<p>How should I approach this problem?</p>
<ol>
<li>Can I handle it on the fly with Python libraries (using parallelization, trie structures, etc.)?</li>
<li>Are there features in PostgreSQL that support such searches?</li>
<li>Should I use a framework like Elasticsearch?</li>
</ol>
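<p>To pin down what I mean by "fuzzy", here is a brute-force sketch using only the standard library (far too slow for 500k terms, and the "near" case would additionally need windows of one extra word, but it captures the matching semantics; the 0.85 threshold is an arbitrary choice):</p>

```python
from difflib import SequenceMatcher

def fuzzy_contains(term, text, threshold=0.85):
    """Slide a window of len(term in words) across the text and
    accept any window whose similarity ratio clears the threshold."""
    words = text.split()
    k = len(term.split())
    for i in range(len(words) - k + 1):
        window = " ".join(words[i:i + k])
        if SequenceMatcher(None, term.lower(), window.lower()).ratio() >= threshold:
            return True
    return False

print(fuzzy_contains("Lionel Messi", "yesterday Lionel Mesi scored twice"))  # True
print(fuzzy_contains("Lionel Messi", "totally unrelated words here"))        # False
```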
|
<python><postgresql><elasticsearch><full-text-search><fuzzy-search>
|
2024-01-02 23:46:46
| 3
| 1,885
|
Batuhan B
|
77,748,942
| 4,688,190
|
Python multihreading: Return result of fastest thread
|
<p>What is the easiest way to start both threads and return the output of the thread that completes first? I want to ignore the long-running thread and continue executing the rest of the script. In this example, the total time taken should be 1 second and the output should be 1.</p>
<pre><code>import threading, time
def one():
time.sleep(1)
return 1
def two():
time.sleep(5)
return 2
thread_one = threading.Thread(target=one)
thread_two = threading.Thread(target=two)
thread_one.start()
thread_two.start()
print(one())
</code></pre>
<p>I have looked at using a loop and something like <code>event = threading.Event()</code> and <code>if not event.is_set()</code>, but I don't think I can do that because I am making an API request in one of these threads (not shown for simplicity). Threads are confusing; help is appreciated.</p>
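<p>For context, the closest I have come is a shared <code>queue.Queue</code> where each worker puts its result and the main thread takes whichever arrives first, but I am not sure this is the easiest way (sleeps shortened here):</p>

```python
import queue
import threading
import time

results = queue.Queue()

def one():
    time.sleep(0.1)
    results.put(1)

def two():
    time.sleep(0.5)
    results.put(2)

# daemon threads so the slow worker doesn't keep the script alive
for target in (one, two):
    threading.Thread(target=target, daemon=True).start()

first = results.get()  # blocks until the fastest thread reports
print(first)  # 1
```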
<p>EDIT: I think maybe what I want is some sort of conditional join statement at the end.</p>
|
<python><multithreading>
|
2024-01-02 22:51:38
| 2
| 678
|
Ned Hulton
|
77,748,873
| 1,848,345
|
Scipy signal correlate direct method - calculate results for a subset of all possible lags
|
<p>I'm trying to use SciPy's <code>signal.correlate</code> with the <code>method="direct"</code> option, and I'm wondering if it is possible to restrict the calculation to just a subset of all possible lags. My input arrays have on the order of 240,000,000 entries, but I'd only be interested in roughly a 500k-lag window on either side of 0. It doesn't seem possible from looking at the API, but I was curious whether I missed something or whether there is another Python library that can do this.</p>
<p>Note because of the size and sparseness of my data the <code>method="fft"</code> option does not work.</p>
<p>updates:</p>
<ul>
<li>the dimension of my arrays is approximately (240_000_000,)</li>
<li>one of the arrays is sparse, but @hpaulj looked at the source code (<a href="https://stackoverflow.com/questions/77748696/scipy-use-sparse-array-with-signal-correlate?noredirect=1#comment137068051_77748696">Scipy use sparse array with signal.correlate</a>) and scipy.sparse won't work with scipy.signal</li>
</ul>
<p>I have written code that runs the correlation calculation for a subset of lags, i.e. <code>lags = range(-10_000, 10_000)</code>, but I was curious whether there is a better way and/or one that avoids reinventing the wheel.</p>
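<p>My code is essentially the following (a direct dot product per requested lag; array sizes shrunk here, and I believe the sign convention matches <code>np.correlate(..., 'full')</code>, though that should be double-checked):</p>

```python
import numpy as np

def correlate_at_lags(x, y, lags):
    """Direct cross-correlation restricted to the given integer lags.
    Lag k pairs x[t + k] with y[t]."""
    n = len(x)
    out = np.empty(len(lags))
    for i, k in enumerate(lags):
        if k >= 0:
            out[i] = np.dot(x[k:], y[:n - k])
        else:
            out[i] = np.dot(x[:n + k], y[-k:])
    return out

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5])
subset = correlate_at_lags(x, y, [-1, 0, 1])
print(subset)  # should equal the middle of np.correlate(x, y, 'full')
```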
|
<python><scipy><correlation>
|
2024-01-02 22:29:55
| 1
| 461
|
dllahr
|
77,748,811
| 7,454,177
|
n8n in GitHub pipeline fails to expose port
|
<p>We have a docker compose which we need for testing of our code. When we run it locally (macOS, different hosts) it works fine, however as soon as we run it on a GitHub action runner, it fails with</p>
<pre><code>requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=5678): Max retries exceeded with url: /api/v1/credentials (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3d28e3d810>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>The affected service is an n8n container, which should expose an HTTP API, which also works locally.</p>
<pre><code>  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=n8n:5678
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - NODE_ENV=production
      - DB_TYPE=postgresdb
      - DB_TABLE_PREFIX=n8n_
      - DB_POSTGRESDB_DATABASE=n8n
    volumes:
      - ./DOCKER/n8n/data:/home/node/.n8n
      - ./DOCKER/n8n/files:/files
</code></pre>
<p>We try to access this with our FastAPI API, using the python requests library. What could be the difference between these two systems? The docker compose is the same, env files are being reset by our testing script.</p>
<p><strong>Update</strong></p>
<p>I minimized the issue to the following reproducible case:</p>
<pre><code>curl --fail http://localhost:8080 || exit 1
curl --fail http://localhost:5678/api/v1/docs/ || exit 1
</code></pre>
<p>In my docker compose I have two services, one on 8080 which is working and n8n which is not working. The above code fails in the second line with error <code>curl: (7) Failed to connect to localhost port 5678 after 0 ms: Connection refused</code>. I also tried switching ports in case the GitHub actions already use this port.</p>
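<p>One thing worth ruling out on the runner: the n8n container may simply still be booting when the client connects. A minimal readiness-wait sketch (port 5678 taken from the compose file above):</p>

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

<p>Calling e.g. <code>wait_for_port("localhost", 5678)</code> before the first API request would distinguish "service never comes up" from "service is just slow to start" on the runner.</p>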
|
<python><docker><docker-compose><github-actions><fastapi>
|
2024-01-02 22:13:12
| 1
| 2,126
|
creyD
|
77,748,799
| 2,591,138
|
Use online webdriver for BeautifulSoup / Selenium Python code (vs. local reference)
|
<p>I wrote local Python code in Spyder that does webscraping using BeautifulSoup and Selenium. I now want to transfer that code to run online and on a schedule (using pythonanywhere). That works fine for the pure BeautifulSoup elements. The parts that use Selenium, currently have some configuration on my local setup, including a local reference to the webdriver, see below.</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
options.add_argument('--headless')
driver = webdriver.Chrome("C:/Users/my.name/Downloads/chromedriver-win64/chromedriver-win64/chromedriver.exe", options=options)
driver.get('URL TO SCRAPE')
</code></pre>
<p>When I transfer the code online, I obviously can't refer to my local C drive anymore. Is there such a thing as an online version of the webdriver exe file I can reference?</p>
<p>When I asked ChatGPT, I was referred to services like BrowserStack and SauceLabs, but (without reading their website too much) this looks a little overkill when seeing the price points.</p>
<p>Any advice or pointers are appreciated - thanks!</p>
|
<python><selenium-webdriver><beautifulsoup>
|
2024-01-02 22:10:40
| 1
| 1,083
|
Berbatov
|
77,748,747
| 3,726,933
|
Streamlit text_input or chat_input set dynamically
|
<p>Is there a way to set dynamic text on a text input or a chat input? Context is I am allowing users to say something on their microphones, transcribing audio to text and I want to place the transcribed text on the chat input or text input so they can interact with the app. Is this possible with text_input or chat_input? Or am I better off using another library?</p>
|
<python><streamlit>
|
2024-01-02 21:57:43
| 1
| 369
|
user3726933
|
77,748,731
| 5,421,539
|
How can I install packages in Jupyterlab online?
|
<p>I am using JupyterLab online (<a href="https://jupyter.org/try-jupyter/lab" rel="nofollow noreferrer">https://jupyter.org/try-jupyter/lab</a>) and can't install a package. I searched a lot, and people say it can be done with <code>!pip install PACKAGE</code>. However, I get this error when I run <code>!pip install requests</code> (or any package):</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 get_ipython().system('pip install requests')
File /lib/python3.11/site-packages/IPython/core/interactiveshell.py:2629, in InteractiveShell.system_piped(self, cmd)
2624 raise OSError("Background processes not supported.")
2626 # we explicitly do NOT return the subprocess status code, because
2627 # a non-None value would trigger :func:`sys.displayhook` calls.
2628 # Instead, we store the exit_code in user_ns.
-> 2629 self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))
File /lib/python3.11/site-packages/IPython/utils/_process_posix.py:129, in ProcessHandler.system(self, cmd)
125 enc = DEFAULT_ENCODING
127 # Patterns to match on the output, for pexpect. We read input and
128 # allow either a short timeout or EOF
--> 129 patterns = [pexpect.TIMEOUT, pexpect.EOF]
130 # the index of the EOF pattern in the list.
131 # even though we know it's 1, this call means we don't have to worry if
132 # we change the above list, and forget to change this value:
133 EOF_index = patterns.index(pexpect.EOF)
AttributeError: module 'pexpect' has no attribute 'TIMEOUT'
</code></pre>
<p>The error <code>Background processes not supported</code> seems to mean it can't install any package. Is that true? How can I make it run in the foreground?</p>
<p>What is the right way to install an external package?</p>
|
<python><jupyter-notebook><jupyter-lab>
|
2024-01-02 21:54:36
| 0
| 43,242
|
Joey Yi Zhao
|
77,748,700
| 3,358,599
|
How to find element based on string that may span multiple child tags?
|
<p>I am trying to identify specific elements within a document based on a known text string. Normally, you could easily do this with</p>
<pre><code>soup.find(string=re.compile(".*some text string.*"))
</code></pre>
<p>However the known string may have (multiple) child elements within it. For example, if this is our document:</p>
<pre class="lang-py prettyprint-override"><code>test_doc = BeautifulSoup("""<html><h1>Title</h1><p>Some <b>text</b></p>""")
</code></pre>
<p>and I'm looking for a specific element. The only thing I know about this element is that it contains the text "Some text". I <em>don't</em> know that the word "text" within it is in a child bold tag.</p>
<pre class="lang-py prettyprint-override"><code>test_doc.find(string=re.compile(".*Some text.*"))
</code></pre>
<p>is <code>None</code> because "text" is within that child tag.</p>
<p>How can I return the parent tag (the <code>p</code> tag in my example), with all the children tags, if I don't know if/how the text is broken up into child tags?</p>
|
<python><web-scraping><beautifulsoup>
|
2024-01-02 21:44:23
| 3
| 6,596
|
T.C. Proctor
|
77,748,696
| 1,848,345
|
Scipy use sparse array with signal.correlate
|
<p>I'm trying to calculate a serial correlation in Scipy using a sparse matrix, and I'm getting an error that my arrays do not have the same dimensionality, but they appear to be identical. Example code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy
import scipy.signal as signal
import scipy.sparse as sparse
my_sparse = sparse.csr_array(numpy.random.rand(10))
print(my_sparse)
print(my_sparse.shape)
my_dense = numpy.expand_dims(numpy.random.rand(10), 1).T
print(my_dense)
print(my_dense.shape)
corr = signal.correlate(my_sparse, my_dense, method="direct", mode="full")
print(corr)
print(corr.shape)
</code></pre>
<p>output:</p>
<pre class="lang-py prettyprint-override"><code>$ python test_sparse_with_correlate.py
(0, 0) 0.9842628978990572
(0, 1) 0.21288927130505086
(0, 2) 0.11321754428562536
(0, 3) 0.02907902961562403
(0, 4) 0.06534326022150638
(0, 5) 0.5611332833785263
(0, 6) 0.6693945587792903
(0, 7) 0.6421603953803423
(0, 8) 0.7299602339571223
(0, 9) 0.5241172106721759
(1, 10)
[[0.5422929 0.27431614 0.65966523 0.05579376 0.41876797 0.46535913
0.13079666 0.97518169 0.49505634 0.68287069]]
(1, 10)
Traceback (most recent call last):
File "/mnt/alt_wkdir/118_serial_correlation/code/test_sparse_with_correlate.py", line 13, in <module>
corr = signal.correlate(my_sparse, my_dense, method="direct", mode="full")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/bioinf_tools/miniconda3/envs/scipy/lib/python3.12/site-packages/scipy/signal/_signaltools.py", line 231, in correlate
raise ValueError("in1 and in2 should have the same dimensionality")
ValueError: in1 and in2 should have the same dimensionality
</code></pre>
<p>I've also tried this with the transposed versions of the arrays (so their shapes are both (10,1)) with the same resulting ValueError.</p>
<p>I'm using v1.11.4 of scipy. I first uncovered the problem with an older version, created a new conda environment with latest stable scipy and seeing the same behavior.</p>
<p>Is it possible to use scipy.sparse with scipy.signal.correlate and if so can you tell me how?</p>
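<p>For what it's worth, <code>scipy.signal</code> only accepts ndarrays, so one workaround (if memory allows) is densifying the sparse operand and flattening both inputs to 1-D first; a sketch:</p>

```python
import numpy as np
from scipy import signal, sparse

a_sparse = sparse.csr_array(np.array([[0.0, 1.0, 0.0, 2.0]]))
b = np.array([1.0, 0.5])

# Densify and flatten to 1-D so both inputs have the same dimensionality.
corr = signal.correlate(a_sparse.toarray().ravel(), b, method="direct", mode="full")
```

<p>The <code>.ravel()</code> also avoids the (1, N) vs (N,) dimensionality mismatch that triggers the ValueError above.</p>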
|
<python><numpy><scipy>
|
2024-01-02 21:43:52
| 1
| 461
|
dllahr
|
77,748,613
| 5,651,960
|
ValueError: The truth value of a Series is ambiguous. Using function and pandas df
|
<p>Looking to run a function using dataframe elements to create a new column (IV) but the function is stuck on the line below.</p>
<p><strong>Error:</strong>
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p><strong>Line:</strong> <code>if (abs(diff) < PRECISION):</code></p>
<pre><code>from scipy.stats import norm
import pandas as pd
import numpy as np

N = norm.cdf

def bs_call(S, K, T, r, vol):
    d1 = (np.log(S/K) + (r + 0.5*vol**2)*T) / (vol*np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return S * norm.cdf(d1) - np.exp(-r * T) * K * norm.cdf(d2)

def bs_vega(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    return S * norm.pdf(d1) * np.sqrt(T)

def find_vol(target_value, S, K, T, r, *args):
    MAX_ITERATIONS = 200
    PRECISION = 1.0e-5
    sigma = 0.5
    for i in range(0, MAX_ITERATIONS):
        price = bs_call(S, K, T, r, sigma)
        vega = bs_vega(S, K, T, r, sigma)
        diff = target_value - price  # our root
        if (abs(diff) < PRECISION):
            return sigma
        sigma = sigma + diff/vega  # f(x) / f'(x)
        print(sigma)
    return sigma  # value wasn't found, return best guess so far

S = df['ClosingPrice']
K = df['Strike']
T = df['RemainingDays']/365
r = df['RfRate']
vol = 0.2

V_market = bs_call(S, K, T, r, vol)
implied_vol = find_vol(V_market, S, K, T, r)

df.loc[df['OptionType'] == 'c', 'IV'] = implied_vol
</code></pre>
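<p>For background on why this blows up: the <code>if abs(diff) < PRECISION</code> test needs a single boolean, but <code>diff</code> is a whole Series when Series are passed in. A minimal sketch of the usual workaround, with a toy scalar Newton solver standing in for <code>find_vol</code>, applied element-wise via <code>Series.apply</code>:</p>

```python
import numpy as np
import pandas as pd

def newton_sqrt(y, tol=1e-8):
    """Toy scalar Newton solver; the `if` below is the line that fails
    when `y` (and hence `diff`) is a whole Series instead of a scalar."""
    x = 1.0
    for _ in range(100):
        diff = x * x - y
        if abs(diff) < tol:  # ambiguous truth value if diff is a Series
            return x
        x -= diff / (2 * x)
    return x

s = pd.Series([4.0, 9.0, 16.0])
roots = s.apply(newton_sqrt)  # each call receives one scalar element
```

<p>The same pattern (e.g. <code>df.apply(..., axis=1)</code> calling <code>find_vol</code> with scalars from each row) sidesteps the error, at the cost of a per-row Python loop.</p>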
|
<python><pandas><dataframe><quantitative-finance>
|
2024-01-02 21:21:17
| 1
| 949
|
RageAgainstheMachine
|
77,748,578
| 21,115
|
How does pathlib.Path implement '/' when the left operand is a string?
|
<p>I understand how <code>__truediv__</code> works, per <a href="https://stackoverflow.com/a/53085465/21115">this answer</a>.</p>
<p>But in this case:</p>
<pre><code>>>> from pathlib import Path
>>> 'foo' / Path('bar')
PosixPath('foo/bar')
</code></pre>
<p>Surely <code>__truediv__</code> is called on <code>str</code>, because <code>'foo'</code> is the left operand to <code>/</code>, in that case how can it return a <code>Path</code>?</p>
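<p>For reference, the mechanism here is Python's reflected-operator protocol: <code>str</code> defines no <code>__truediv__</code> at all, so Python falls back to the right operand's <code>__rtruediv__</code>, which the path classes implement. A minimal sketch of the same pattern:</p>

```python
class Rhs:
    def __rtruediv__(self, other):
        # Invoked when the left operand cannot handle `/` itself,
        # e.g. because its type defines no __truediv__.
        return f"{other}/rhs"

result = "foo" / Rhs()
```

<p>So the left operand being a <code>str</code> is no obstacle: the right operand gets a chance to handle the operator and can return whatever type it likes.</p>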
|
<python><pathlib>
|
2024-01-02 21:11:10
| 1
| 18,140
|
davetapley
|
77,748,465
| 6,703,783
|
How to create variable of type VectorSearchAlgorithmConfiguration
|
<p>Following this tutorial, <a href="https://learn.microsoft.com/en-us/azure/search/search-get-started-vector" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/search/search-get-started-vector</a>, instead of using the <code>json</code> <code>put</code> method, I am trying to use the <code>python</code> API.</p>
<p>I want to create a variable of type <code>VectorSearchAlgorithmConfiguration</code>, as defined in <a href="https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.vectorsearchalgorithmconfiguration?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.vectorsearchalgorithmconfiguration?view=azure-python</a></p>
<p>As per my understanding, the API takes two definite parameters; the 3rd (and maybe more) depend on the value of the 2nd parameter, e.g. <code>HnswParameters</code> for <code>kind="hnsw"</code>, if I map from the <code>json</code> to the API.</p>
<p>I created the following two definitions</p>
<pre><code>hnswParameters = HnswParameters(m=4,ef_construction= 400, ef_search= 500, metric= "cosine")
vectorSearchAlgorithm = VectorSearchAlgorithmConfiguration(name = "my-hnsw-vector-config-1",kind="hnsw",hnswParameters=hnswParameters)
</code></pre>
<p>but I get the error <code>hnswParameters is not a known attribute of class <class 'azure.search.documents.indexes._generated.models._models_py3.VectorSearchAlgorithmConfiguration'> and will be ignored</code>. How do I provide <code>hnswParameters</code>?</p>
<p>I later tried to create instance of <code>HnswVectorSearchAlgorithmConfiguration</code> as per <a href="https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.hnswvectorsearchalgorithmconfiguration?view=azure-python-preview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.hnswvectorsearchalgorithmconfiguration?view=azure-python-preview</a> but when I import</p>
<pre><code>from azure.search.documents.indexes.models import (
    HnswVectorSearchAlgorithmConfiguration
)
</code></pre>
<p>I get error <code>ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (c:\Users\manuchadha\...\demo-data\aoai-openai\aoai-code\.venv\Lib\site-packages\azure\search\documents\indexes\models\__init__.py)</code></p>
<p><strong>Update:</strong> Thanks Tim for pointing me in the right direction. I created an <code>HnswAlgorithmConfiguration</code>. This will probably work; I still have to test it, but the code compiles:</p>
<pre><code>hnswParameters = HnswParameters(m=4,ef_construction= 400, ef_search= 500, metric= "cosine")
vectorSearchAlgorithm = HnswAlgorithmConfiguration(name = "my-hnsw-vector-config-1",parameters=hnswParameters)
vectorSearchProfile = VectorSearchProfile(name="my-vector-profile",algorithm_configuration_name="my-hnsw-vector-config-1")
</code></pre>
|
<python>
|
2024-01-02 20:40:45
| 0
| 16,891
|
Manu Chadha
|
77,748,401
| 4,064,166
|
Can I install conda packages with pip?
|
<p>I want to install a package on macOS, which can be done by</p>
<p><code>conda env create -f environment.yaml</code><br/>
<code>conda activate <package></code></p>
<p>but I don't want to install <code>conda</code> and am looking for an alternative to install it with <code>pip</code> or <code>source</code> commands.</p>
<p>I tried <code>source env create -f environment.yaml</code>, and it errors out as <code>/usr/bin/env:1: parse error near ')'</code>.</p>
<p>If possible, I am looking for an alternative to install <code>conda</code> packages with <code>pip</code>.</p>
|
<python><pip><conda><python-packaging><python-install>
|
2024-01-02 20:23:18
| 1
| 577
|
Devharsh Trivedi
|
77,748,266
| 424,333
|
How can I check when any blog was last updated in Python?
|
<p>I'd like to write a script that takes the URL of any blog and returns the date of its most recent post.</p>
<p>So far I have tried two ways.</p>
<p>The first is <code>htmldate</code>, which has a high error rate for this use case:</p>
<pre><code>import requests
from htmldate import find_date

url = 'xxx'

# Check URL content
response = requests.get(url)
if response.status_code == 200:
    html_content = response.text
else:
    print(f"Failed to fetch HTML content. Status code: {response.status_code}")

# Use HTML content with htmldate
if response.status_code == 200:
    date = find_date(html_content, outputformat='%d %B %Y')
    print(date)
else:
    print("Skipping htmldate processing due to failed HTML content retrieval.")
</code></pre>
<p>The second is <code>BeautifulSoup</code>, but it relies on providing a comprehensive list of possible tags for the date:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from dateutil import parser

def get_latest_post_date(url):
    try:
        # Send HTTP GET request to blog URL
        response = requests.get(url)

        # Check if request was successful
        if response.status_code == 200:
            # Parse HTML content of page
            soup = BeautifulSoup(response.text, 'html.parser')

            # List of common HTML elements that might contain publication date
            date_elements = [
                soup.find('time'),
                soup.find('span', class_='date'),
                soup.find('span', class_='post-date'),
                soup.find('span', class_='dated-posted'),
                soup.find('div', class_='post-date'),
                soup.find('span', {'itemprop': 'datePublished'}),
                soup.find('meta', {'itemprop': 'datePublished'}),
                soup.find('span', {'property': 'article:published_time'}),
                soup.find('meta', {'property': 'article:published_time'}),
                # Add more elements based on the structure of the specific blog
            ]

            # Find first non-empty date element
            date_element = next((element for element in date_elements if element and element.text.strip()), None)

            if date_element:
                # Extract date string and parse into datetime object
                date_string = date_element.text.strip()
                date = parser.parse(date_string)
                return date.strftime('%Y-%m-%d %H:%M:%S')  # Format the date as needed
            else:
                return "Date not found on the page"
        else:
            return f"Failed to retrieve the page. Status code: {response.status_code}"
    except Exception as e:
        return f"Error: {e}"

blog_url = 'xxx'
latest_post_date = get_latest_post_date(blog_url)
print(f"The date of the latest post on {blog_url} is: {latest_post_date}")
</code></pre>
<p>Is there another method that could do this more successfully?</p>
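<p>One alternative worth considering: many blogs expose an RSS/Atom feed (often at <code>/feed</code> or <code>/rss</code>, or advertised in a <code>&lt;link rel="alternate"&gt;</code> tag), and the feed's <code>pubDate</code> entries are far more uniform than scraped HTML. A stdlib-only sketch for RSS 2.0 with RFC 822 dates (fetching the feed body, e.g. with <code>requests</code>, is left out):</p>

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def latest_feed_date(rss_xml: str):
    """Return the newest <pubDate> in an RSS 2.0 document string, or None."""
    root = ET.fromstring(rss_xml)
    dates = [parsedate_to_datetime(el.text)
             for el in root.iter("pubDate")
             if el.text]
    return max(dates) if dates else None
```

<p>This still fails for blogs without a feed, so a robust tool would likely try the feed first and fall back to HTML heuristics like the ones above.</p>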
|
<python><web-scraping>
|
2024-01-02 19:53:55
| 0
| 3,656
|
Sebastian
|
77,748,222
| 617,122
|
Determine Process in multiprocessing.Pool still active?
|
<h1>Background</h1>
<p>I have some code that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import multiprocessing

def my_function(a, b, c):
    c.append(a)
    # A cool function that takes a long time and returns a lot of data
    c.remove(a)
    return (random_tuple, very_large_list_of_data)

def main():
    start_time = datetime.datetime.now()
    prev_one_minute_passed = False
    large_list_a = list(range(100))  # ignore that this isn't pythonic, it's an example
    large_list_b = [2] * 100
    l = []  # this is actually handled with a multiprocessing.Manager,
            # but it's easier to conceptualize as a plain list
    with multiprocessing.Pool(processes=99) as pool:
        out_results = pool.starmap_async(my_function, [(a, b, l) for (a, b) in zip(large_list_a, large_list_b)])
        while not out_results.ready():
            # do some nifty logging
            if ((datetime.datetime.now() - start_time) > datetime.timedelta(minutes=1)):
                # we've been running for more than a minute
                if not l:
                    # There's nothing left in l; all jobs have (probably) finished
                    break
        out_tuple_list = out_results.get(timeout=300)

if __name__ == '__main__':
    main()
</code></pre>
<p><em>Infrequently,</em> when I run this code, I get <code>multiprocessing.Timeout</code>s. This says to me that sometimes the result isn't ready to be read five minutes after the last result is removed from the shared not-quite-a-job-queue (the list <code>l</code>).</p>
<p>I've resorted to getting a shell in the interpreter with IPython when the Timeout is thrown. However, my experience with the multiprocessing library is somewhat limited. Here are some of the (less embarrassing) commands I've sent to IPython to try to determine what's going on:</p>
<pre><code>In [2]: pool.join()
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
File ~/Documents/src/projects/my_proj/./main.py:29, in create_repo_img_tuple_issue_list_map(json_report_locations, cve_resolution_dict, manager)
28 try:
--> 29 out_tuple_list = out_results.get(timeout=300)
30 except multiprocessing.TimeoutError:
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py:767, in ApplyResult.get(self, timeout)
766 if not self.ready():
--> 767 raise TimeoutError
768 if self._success:
TimeoutError:
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[2], line 1
----> 1 pool.join()
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py:659, in Pool.join(self)
657 util.debug('joining pool')
658 if self._state == RUN:
--> 659 raise ValueError("Pool is still running")
660 elif self._state not in (CLOSE, TERMINATE):
661 raise ValueError("In unknown state")
ValueError: Pool is still running
In [3]: pool
Out[3]: <multiprocessing.pool.Pool state=RUN pool_size=99>
...
In [7]: out_results.ready()
Out[7]: False
In [8]: pool._terminate
Out[8]: <Finalize object, callback=_terminate_pool, args=(<_queue.SimpleQueue object at 0x10eb5dd00>, <multiprocessing.queues.SimpleQueue object at 0x10eb39cc0>, <multiprocessing.queues.SimpleQueue object at 0x10eb3b340>, [<SpawnProcess name='SpawnPoolWorker-2' pid=21273 parent=21269 started daemon>, ...<SpawnProcess name='SpawnPoolWorker-113' pid=21386 parent=21269 started daemon>], <multiprocessing.queues.SimpleQueue object at 0x10eb39ba0>, <Thread(Thread-1 (_handle_workers), started daemon 123145420689408)>, <Thread(Thread-2 (_handle_tasks), started daemon 123145437478912)>, <Thread(Thread-3 (_handle_results), started daemon 123145454268416)>, {0: <multiprocessing.pool.MapResult object at 0x10eb8dcf0>}), exitpriority=15>
...
In [11]: pool._terminate.still_active()
Out[11]: True
...
In [22]: all([p.is_alive() for p in pool._pool])
Out[22]: True
</code></pre>
<p>So it looks like all my processes are still alive.</p>
<h1>Question</h1>
<p>How can I determine which process in the <code>multiprocessing.Pool</code> is preventing the pool from entering the finished state? Is there a way (in python3.10) to monitor why a process is preventing the pool from moving into the ready state?</p>
<h2>EDIT</h2>
<p>I know there's a race condition here, but the jobs don't usually take 5 minutes to complete. I can also query the list <code>l</code> in the IPython shell, and it's still empty.</p>
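<p>In case it helps narrow things down, one low-tech option is wrapping the worker function so every task logs its pid and arguments on entry and exit; whichever task logs a start with no matching finish is the straggler. A sketch (the decorated function must live at module level so it pickles):</p>

```python
import functools
import multiprocessing
import os

def traced(fn):
    """Log pid + args around each task so stragglers stand out in the log."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"[pid {os.getpid()}] start {fn.__name__}{args!r}", flush=True)
        try:
            return fn(*args, **kwargs)
        finally:
            print(f"[pid {os.getpid()}] done  {fn.__name__}{args!r}", flush=True)
    return wrapper

@traced
def square(x):
    return x * x

if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        print(pool.map(square, range(4)))
```

<p>Matching the pid of an unfinished task against <code>pool._pool</code> then identifies the specific stuck process, which can be inspected externally (e.g. with <code>py-spy dump --pid ...</code>).</p>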
|
<python><python-3.x><python-multiprocessing>
|
2024-01-02 19:43:34
| 1
| 379
|
distortedsignal
|
77,748,127
| 16,503,741
|
Using regex (from the re python library) inside of a pydantic model
|
<p>I am using <code>re</code> to parse a string into its composite parts. The problem is that pydantic 2 does <em>NOT</em> like this.</p>
<p>Example:</p>
<pre><code>import re

from pydantic import RootModel, field_validator

class MyClass(RootModel[str]):
    root: str

    _FREQUENCY_PATTERN = re.compile(r"^(\d+)\s*/\s*(\d+)([YMWD])$")

    @classmethod
    def _parse(cls, s: str) -> tuple[int, int, str]:
        match = cls._FREQUENCY_PATTERN.search(s)
        if match is None:
            raise ValueError("must be a number over a period (D|W|M|Y). e.g. 5/1W")
        n = int(match.group(1))
        t = int(match.group(2))
        u = match.group(3)
        return n, t, u

    @field_validator("root")
    @classmethod
    def _check_format(cls, v: str) -> str:
        cls._parse(v)  # use _parse to validate incoming data
        return v
</code></pre>
<p>I need to keep the _parse method, as it is used in data storage methods. That is, we receive the data in this particular format, and we break it apart to store it using the <code>_parse</code> method.</p>
<p>When I run the code that tests the example, I get the error:</p>
<pre><code>    def __getattr__(self, item: str) -> Any:
        """This function improves compatibility with custom descriptors by ensuring delegation happens
        as expected when the default value of a private attribute is a descriptor.
        """
        if item in {'__get__', '__set__', '__delete__'}:
            if hasattr(self.default, item):
                return getattr(self.default, item)
>       raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
E       AttributeError: 'ModelPrivateAttr' object has no attribute 'search'

../../../../Library/Caches/pypoetry/virtualenvs/triple-models-4xhdxIhn-py3.11/lib/python3.11/site-packages/pydantic/fields.py:890: AttributeError
</code></pre>
<p>It seems like pydantic is overwriting the work that <code>re</code> should be doing.</p>
<p>This code did work with Pydantic 1.x.x</p>
<p>Two questions:</p>
<ol>
<li>What is going on?</li>
<li>How can I parse the incoming data using regex to accomplish the same thing I had working in pydantic 1, but in pydantic 2? (I need to be able to access each of the 3 elements individually for serialization.)</li>
</ol>
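<p>On question 1: in pydantic v2, a non-annotated class attribute whose name starts with an underscore is wrapped as a <code>ModelPrivateAttr</code>, which is why the compiled pattern no longer has a <code>.search</code> method; annotating the attribute as <code>typing.ClassVar</code> should leave it a plain class attribute. Independently, the regex itself still does what it did under pydantic 1, as this standalone sketch of <code>_parse</code> shows:</p>

```python
import re

FREQUENCY_PATTERN = re.compile(r"^(\d+)\s*/\s*(\d+)([YMWD])$")

def parse(s: str) -> tuple[int, int, str]:
    """Split '5/1W' into (5, 1, 'W'); raise ValueError on anything else."""
    match = FREQUENCY_PATTERN.search(s)
    if match is None:
        raise ValueError("must be a number over a period (D|W|M|Y). e.g. 5/1W")
    return int(match.group(1)), int(match.group(2)), match.group(3)
```

<p>So the failure is about how pydantic v2 stores the class attribute, not about <code>re</code> itself.</p>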
|
<python><pydantic>
|
2024-01-02 19:18:51
| 1
| 339
|
Jelkimantis
|
77,748,050
| 10,908,375
|
Attempting to get the rolling mean per group, getting wrong values and "TypeError: incompatible index of inserted column with frame index"
|
<p>I seem to misunderstand and misuse <code>pd.Series.rolling.mean()</code>. I have a toy <code>df</code> here:</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({
    'a': np.random.choice(['x', 'y'], 8),
    'b': np.random.choice(['r', 's'], 8),
    'c': np.arange(1, 8 + 1)
})
</code></pre>
<pre><code> a b c
0 y s 1
1 y r 2
2 y s 3
3 y r 4
4 y s 5
5 x r 6
6 y r 7
7 x r 8
</code></pre>
<p>I do this grouping operation:</p>
<pre><code>df['ROLLING_MEAN'] = df.groupby(['a', 'b'])['c'].rolling(3).mean()#.values
</code></pre>
<p>That doesn't work. I get:</p>
<blockquote>
<p>TypeError: incompatible index of inserted column with frame index</p>
</blockquote>
<p>For some reason, when I uncomment the <code>.values</code> method, it works, but if I isolate one group, it doesn't have the intended effect.</p>
<pre><code>df[
    (df['a'] == 'x') &
    (df['b'] == 'r')
]
</code></pre>
<pre><code> a b c ROLLING_MEAN
0 x r 1 NaN
2 x r 3 2.666667
3 x r 4 4.000000
4 x r 5 5.666667
7 x r 8 NaN
</code></pre>
<p>How can there be a rolling mean value of <code>5.666</code> while no number that high has even been seen yet?</p>
<p>Here is my expected output:</p>
<pre><code> a b c ROLLING_MEAN
0 x r 1 NaN
2 x r 3 NaN
3 x r 4 ((1 + 3 + 4) / 3)
4 x r 5 ((3 + 4 + 5) / 3)
7 x r 8 ((4 + 5 + 8) / 3)
</code></pre>
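<p>For reference, the usual per-group pattern here is <code>transform</code>, which keeps the result aligned with the original index, so the column assignment works and windows never mix rows across groups; a sketch on a small fixed frame:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": ["x", "x", "y", "x", "y"],
    "b": ["r", "r", "r", "r", "r"],
    "c": [1, 3, 10, 4, 20],
})

# transform() returns a Series aligned to df's index, one value per row,
# so each group's rolling window only ever sees that group's values.
df["ROLLING_MEAN"] = df.groupby(["a", "b"])["c"].transform(
    lambda s: s.rolling(3).mean()
)
```

<p>The wrong values in the question come from the bare <code>.values</code> assignment, which discards the group-aware index and pastes numbers back in the wrong row order.</p>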
|
<python><pandas><dataframe><numpy>
|
2024-01-02 19:00:51
| 1
| 36,924
|
Nicolas Gervais
|
77,747,852
| 2,583,346
|
python plotly - left-align subplots shared category labels
|
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(rows=1, cols=2, shared_yaxes=True)
fig.add_box(x=[1,3,5,7], row=1, col=1, name='short')
fig.add_box(x=[3,5,7,9], row=1, col=1, name='longer')
fig.add_box(x=[1,4,6,8], row=1, col=1, name='even longer')
fig.add_box(x=[1,3,5,7], row=1, col=2, name='short')
fig.add_box(x=[3,5,7,9], row=1, col=2, name='longer')
fig.add_box(x=[1,4,6,8], row=1, col=2, name='even longer')
fig.update_layout(showlegend=False)
fig.update_traces(marker_color='blue', orientation='h')
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/A7pGL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A7pGL.png" alt="enter image description here" /></a></p>
<p>But I want to align the labels left, like this:
<a href="https://i.sstatic.net/U6Y6q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U6Y6q.png" alt="enter image description here" /></a></p>
<p>If I understand correctly, there is no easy way to do that. I've seen people doing some hacks by adding a "right" extra y-axis, but couldn't figure it out for the subplots case. Can anyone help?</p>
|
<python><plotly>
|
2024-01-02 18:16:14
| 1
| 1,278
|
soungalo
|
77,747,717
| 2,817,602
|
Exposing the right port to airflow services in docker
|
<p>I'm trying to build a minimal datapipeline using docker, postgres, and airflow. <a href="https://gist.github.com/Dpananos/f818a3253e0d01b8eed5673575796500" rel="nofollow noreferrer">My <code>docker-compose.yaml</code> file can be found here</a> and is exteneded from airflow's documentation <a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#fetching-docker-compose-yaml" rel="nofollow noreferrer">here</a>. I've extended it to include a seperate postgres database where I will write data, and a pgadmin instance (these are added near the bottom).</p>
<p>I can confirm that the services are running and accessible when I run <code>docker compose up -d</code>, and I can log into the airflow web UI to see my dags. I've created a very simple dag to insert the date and time into a table every minute. The dag code is show below:</p>
<pre><code>from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
import psycopg2
from airflow.hooks.postgres_hook import PostgresHook

default_args = {
    'owner': 'airflow',
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
    'start_date': datetime(2024, 1, 1),
}

def create_table():
    pg_hook = PostgresHook(postgres_conn_id='postgres_default')
    conn = pg_hook.get_conn()
    cursor = conn.cursor()
    create_query = """
    CREATE TABLE IF NOT EXISTS fact_datetime (
        datetime TIMESTAMP
    );
    """
    cursor.execute(create_query)
    conn.commit()
    cursor.close()
    conn.close()

def insert_datetime():
    pg_hook = PostgresHook(postgres_conn_id='postgres_default')
    conn = pg_hook.get_conn()
    cursor = conn.cursor()
    insert_query = """
    INSERT INTO fact_datetime (datetime)
    VALUES (NOW());
    """
    cursor.execute(insert_query)
    conn.commit()
    cursor.close()
    conn.close()

with DAG('insert_datetime_dag',
         default_args=default_args,
         description='DAG to insert current datetime every minute',
         schedule_interval='*/1 * * * *',
         catchup=False) as dag:

    create_table_task = PythonOperator(
        task_id='create_table',
        python_callable=create_table,
    )

    insert_datetime_task = PythonOperator(
        task_id='insert_datetime',
        python_callable=insert_datetime,
    )

    create_table_task >> insert_datetime_task
</code></pre>
<p>Before running this dag, I've added a postgres connection in the airflow web UI, which should allow me to use the <code>PostgreHook</code>.</p>
<p>When I run the dag, the runs seem to be stuck on the <code>create_table</code> task, with the following logs:</p>
<pre><code>
ce682335169d
*** Found local files:
*** * /opt/airflow/logs/dag_id=insert_datetime_dag/run_id=scheduled__2024-01-02T17:24:00+00:00/task_id=create_table/attempt=1.log
[2024-01-02, 17:25:26 UTC] {taskinstance.py:1957} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: insert_datetime_dag.create_table scheduled__2024-01-02T17:24:00+00:00 [queued]>
[2024-01-02, 17:25:26 UTC] {taskinstance.py:1957} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: insert_datetime_dag.create_table scheduled__2024-01-02T17:24:00+00:00 [queued]>
[2024-01-02, 17:25:26 UTC] {taskinstance.py:2171} INFO - Starting attempt 1 of 2
[2024-01-02, 17:25:26 UTC] {taskinstance.py:2192} INFO - Executing <Task(PythonOperator): create_table> on 2024-01-02 17:24:00+00:00
[2024-01-02, 17:25:26 UTC] {standard_task_runner.py:60} INFO - Started process 148 to run task
[2024-01-02, 17:25:26 UTC] {standard_task_runner.py:87} INFO - Running: ['***', 'tasks', 'run', 'insert_datetime_dag', 'create_table', 'scheduled__2024-01-02T17:24:00+00:00', '--job-id', '7', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmpkkdtejih']
[2024-01-02, 17:25:26 UTC] {standard_task_runner.py:88} INFO - Job 7: Subtask create_table
[2024-01-02, 17:25:26 UTC] {task_command.py:423} INFO - Running <TaskInstance: insert_datetime_dag.create_table scheduled__2024-01-02T17:24:00+00:00 [running]> on host ce682335169d
[2024-01-02, 17:25:26 UTC] {taskinstance.py:2481} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='***' AIRFLOW_CTX_DAG_ID='insert_datetime_dag' AIRFLOW_CTX_TASK_ID='create_table' AIRFLOW_CTX_EXECUTION_DATE='2024-01-02T17:24:00+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='scheduled__2024-01-02T17:24:00+00:00'
[2024-01-02, 17:25:26 UTC] {base.py:83} INFO - Using connection ID 'postgres_default' for task execution.
[2024-01-02, 17:25:26 UTC] {taskinstance.py:2699} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 433, in _execute_task
result = execute_callable(context=context, **execute_callable_kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 199, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 216, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/dag.py", line 16, in create_table
conn = pg_hook.get_conn()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/postgres/hooks/postgres.py", line 158, in get_conn
self.conn = psycopg2.connect(**conn_args)
File "/home/airflow/.local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
[2024-01-02, 17:25:26 UTC] {taskinstance.py:1138} INFO - Marking task as UP_FOR_RETRY. dag_id=insert_datetime_dag, task_id=create_table, execution_date=20240102T172400, start_date=20240102T172526, end_date=20240102T172526
[2024-01-02, 17:25:26 UTC] {standard_task_runner.py:107} ERROR - Failed to execute job 7 for task create_table (connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
; 148)
[2024-01-02, 17:25:26 UTC] {local_task_job_runner.py:234} INFO - Task exited with return code 1
[2024-01-02, 17:25:26 UTC] {taskinstance.py:3281} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
<p>If I've read this correctly, it seems Airflow cannot see my Postgres instance. This should be solved by exposing port 5432 to one of the Airflow services.</p>
<p>I'm not sure which service needs exposure to the port, and I'm not sure how to edit my docker compose file. Could someone please:</p>
<ul>
<li>Let me know if I'm correct in my assessment of the problem, and</li>
<li>Suggest the correct edits to my docker compose file so I can run my dag successfully.</li>
</ul>
|
<python><docker><airflow>
|
2024-01-02 17:40:21
| 2
| 7,544
|
Demetri Pananos
|
77,747,683
| 893,254
|
How to prevent matplotlib from plotting the "day number" on an axis
|
<p>I am seeing a slightly annoying issue when plotting some data with matplotlib.</p>
<p>As shown in the example output below, the x-axis has a slightly unexpected format.</p>
<p>The labels are (for example) <code>04 12:30</code>. The <code>04</code> being the day of the month.</p>
<p>The code and data are provided to reproduce this figure, appended at the end of this question.</p>
<p>Why is the x-axis being formatted like this and what can I do to "fix" it? (I don't want the day of the month to be shown, and I especially don't want it to be formatted as "04".)</p>
<p><a href="https://i.sstatic.net/UEG0o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UEG0o.png" alt="matplotlib figure" /></a></p>
<h1>Code</h1>
<p>To run the code: <code>python3 example.py</code></p>
<pre><code># example.py
import matplotlib.pyplot as plt
import pandas
df = pandas.read_json('data.json')
print(df.head())
fig, ax1 = plt.subplots(figsize=(12, 8))
ax1.plot(df['column_name'], label='My data')
ax1.set_xlabel('Timestamp')
ax1.set_ylabel('Value')
ax1.grid(True)
ax1.legend()
plt.title('Example')
plt.tight_layout()
plt.savefig('example.png')
</code></pre>
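<p>One way to override the automatic date labels is to set an explicit formatter on the axis; a minimal sketch (the <code>'%H:%M'</code> format string is an assumption about the desired labels):</p>

```python
# Sketch: format x-axis date ticks as hour:minute only, with no
# day-of-month prefix. The "%H:%M" format string is an assumption.
from datetime import datetime

import matplotlib
matplotlib.use("Agg")  # headless backend for this standalone check
import matplotlib.dates as mdates

formatter = mdates.DateFormatter('%H:%M')
# In the plotting code above this would be applied with:
#   ax1.xaxis.set_major_formatter(formatter)
# Standalone check of what the formatter produces for one tick value:
tick = mdates.date2num(datetime(2022, 4, 4, 12, 30))
print(formatter(tick))  # -> 12:30
```

<p>Applying <code>ax1.xaxis.set_major_formatter(formatter)</code> before saving the figure should replace the <code>04 12:30</code>-style labels with <code>12:30</code>.</p>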
<h1>Data</h1>
<p>Save this file as <code>data.json</code></p>
<pre><code>{"column_name":{"1649073600000":174.79,"1649073660000":174.8,"1649073720000":174.79,"1649073780000":174.76,"1649073840000":174.69,"1649073900000":174.7,"1649073960000":174.69,"1649074020000":174.65,"1649074140000":174.67,"1649074200000":174.7,"1649074260000":174.74,"1649074320000":174.65,"1649074380000":174.69,"1649074440000":174.65,"1649074500000":174.7,"1649074560000":174.74,"1649074680000":174.72,"1649074740000":174.7,"1649074860000":174.7,"1649074920000":174.71,"1649074980000":174.75,"1649075040000":174.76,"1649075100000":174.73,"1649075160000":174.76,"1649075220000":174.7,"1649075280000":174.66,"1649075340000":174.61,"1649075400000":174.63,"1649075460000":174.65,"1649075520000":174.7,"1649075580000":174.69,"1649075640000":174.63,"1649075760000":174.66,"1649075820000":174.63,"1649075880000":174.58,"1649075940000":174.5,"1649076000000":174.52,"1649076060000":174.55,"1649076120000":174.55,"1649076180000":174.54,"1649076240000":174.48,"1649076300000":174.45,"1649076360000":174.38,"1649076420000":174.22,"1649076480000":174.21,"1649076540000":174.15,"1649076660000":174.25,"1649076720000":174.3,"1649076780000":174.28,"1649076840000":174.26,"1649076900000":174.25,"1649076960000":174.23,"1649077020000":174.21,"1649077080000":174.25,"1649077140000":174.26,"1649077200000":174.27}}
</code></pre>
|
<python><pandas><matplotlib>
|
2024-01-02 17:32:31
| 1
| 18,579
|
user2138149
|
77,747,671
| 108,390
|
Python application packaged for Windows 11 using pyside6-deploy complains about numpy installation
|
<p>I have spent the day working on a Qt POC application based on the <a href="https://doc.qt.io/qtforpython-6/examples/example_external_pandas.html" rel="nofollow noreferrer">Pandas simple example</a> and trying to package it using <a href="https://doc.qt.io/qtforpython-6/deployment/deployment-pyside6-deploy.html" rel="nofollow noreferrer">pyside6-deploy</a>. The Linux part works fine on WSL, and once I had installed gcc.exe and the rest, running and packaging everything on Windows 11 also seemed to work out well.</p>
<p>However, when I try to run my .exe file on Windows I get the following error:</p>
<pre><code>ImportError: Unable to import required dependencies:
numpy: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
</code></pre>
<p>I have done some searching, but the suggestions I have found so far want me to build numpy from source. This feels somewhat backwards.</p>
<p>I have not used any requirements.txt or pyproject.toml files, since the only dependency is Pandas.</p>
<p>The Python version on Windows is 3.10, since later releases did not play nicely with Nuitka.</p>
<p>What am I doing wrong?</p>
<p>Edit: Running the other, simpler, examples without any non-PyQt-package dependencies works just fine.</p>
|
<python><pyqt><pyside6><python-3.10><nuitka>
|
2024-01-02 17:29:53
| 1
| 1,393
|
Fontanka16
|
77,747,594
| 157,704
|
pyspark sql combine two columns with same value but different names
|
<p>I have two tables with below structure</p>
<p>Table1</p>
<pre class="lang-none prettyprint-override"><code>lang created_date
java 11-01-23
python 11-11-23
</code></pre>
<p>Table2</p>
<pre class="lang-none prettyprint-override"><code>lang ingested_date
scala 11-21-23
</code></pre>
<p>I want to create combined table with expected result:</p>
<p>Table3</p>
<pre class="lang-none prettyprint-override"><code>lang created_date
java 11-01-23
python 11-11-23
scala 11-21-23
</code></pre>
<p>Actual Result</p>
<pre class="lang-none prettyprint-override"><code>lang created_date
java 11-01-23
python 11-11-23
scala 11-21-23
scala null
</code></pre>
<p>I am using below python pyspark code but it is giving me additional row with null value in created_date column.</p>
<pre><code>table1DF = sparkSession.read.table("Table1")
table2DF = sparkSession.read.table("Table2")
table1 = table1DF.col("lang").col("created_date")
table2 = table2DF.col("lang").col("ingested_date").alias("created_date")
merged_table = table1.union(table2)
final_table = merged_table.groupBy("lang", "created_date")
</code></pre>
<p>How do I avoid getting the last row with null value in one of the columns when I am combining data from two tables using union?</p>
|
<python><sql><pyspark>
|
2024-01-02 17:12:03
| 2
| 2,834
|
Amol Aggarwal
|
77,747,279
| 5,036,928
|
Avoiding for-loop in NumPy 1D nearest neighbors
|
<p>I have the following code in which I get the N nearest neighbors in 1D:</p>
<pre><code>import numpy as np
def find_nnearest(arr, val, N):
idxs = []
for v in val:
idx = np.abs(arr - v).argsort()[:N]
idxs.append(idx)
return np.array(idxs)
A = np.arange(10, 20)
test = find_nnearest(A, A, 3)
print(test)
</code></pre>
<p>which clearly uses a for-loop to grab the <code>idx</code>'s. Is there a numpythonic way to avoid this for-loop (but return the same array)?</p>
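<p>One broadcasting-based sketch (an assumption about the intended semantics; note it materializes the full <code>len(val) × len(arr)</code> distance matrix, trading memory for speed):</p>

```python
import numpy as np

def find_nnearest_vec(arr, val, N):
    # Broadcast to a (len(val), len(arr)) matrix of absolute distances,
    # then argsort each row and keep the first N column indices.
    dists = np.abs(np.asarray(arr)[None, :] - np.asarray(val)[:, None])
    return dists.argsort(axis=1)[:, :N]

A = np.arange(10, 20)
print(find_nnearest_vec(A, A, 3))
```

<p>For large inputs, <code>np.argpartition(dists, N, axis=1)[:, :N]</code> may be faster if the N neighbors do not need to be in sorted order.</p>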
|
<python><numpy><for-loop><vectorization><nearest-neighbor>
|
2024-01-02 16:15:00
| 1
| 1,195
|
Sterling Butters
|
77,747,229
| 967,621
|
Log a dataframe using logging and pandas
|
<p>I am using <code>pandas</code> to operate on dataframes and <a href="https://docs.python.org/3/library/logging.html" rel="nofollow noreferrer"><code>logging</code></a> to log intermediate results, as well as warnings and errors, into a separate log file. I need to also print into the same log file a few intermediate dataframes. Specifically, I want to:</p>
<ul>
<li><strong>Print dataframes into the same log file as the rest of the <code>logging</code> messages</strong> (to ensure easier debugging and avoid writing many intermediate files, as would be the case with calls to <code>to_csv</code> with a file destination),</li>
<li><strong>Control logging verbosity (as is commonly done) using <a href="https://docs.python.org/3/library/logging.html#logging-levels" rel="nofollow noreferrer"><code>logging</code> levels</a></strong>, such as <code>DEBUG</code> or <code>INFO</code>, sharing this with the verbosity of other logging messages (including those that are <strong>not</strong> related to dataframes).</li>
<li><strong>Control logging verbosity <em>(on a finer level)</em> using a separate variable that determines how many rows of the dataframe to print.</strong></li>
<li><strong>Pretty-print 1 row per line, with aligned columns, and with each row preceded by the typical logging metadata</strong>, such as <code>240102 10:58:20 INFO:</code>.</li>
</ul>
<p>The best I could come up with is the code below, which is a bit too verbose. Is there a simpler and more pythonic way to log a dataframe slice?</p>
<p><strong>Note:</strong></p>
<p><strong>Please include an example of usage.</strong></p>
<p><strong>Example:</strong></p>
<pre><code>import io
import logging
import pandas as pd
# Print into log this many lines of several intermediate dataframes,
# set to 20 or so:
MAX_NUM_DF_LOG_LINES = 4
logging.basicConfig(
datefmt = '%y%m%d %H:%M:%S',
format = '%(asctime)s %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
# Or logging.DEBUG, etc:
logger.setLevel(level = logging.INFO)
# Example of a simple log message:
logger.info('Reading input.')
TESTDATA="""
enzyme regions N length
AaaI all 10 238045
AaaI all 20 170393
AaaI captured 10 292735
AaaI captured 20 229824
AagI all 10 88337
AagI all 20 19144
AagI captured 10 34463
AagI captured 20 19220
"""
df = pd.read_csv(io.StringIO(TESTDATA), sep='\s+')
# ...some code....
# Example of a log message with a chunk of a dataframe, here, using
# `head` (but this can be another method that slices a dataframe):
logger.debug('less important intermediate results: df:')
for line in df.head(MAX_NUM_DF_LOG_LINES).to_string().splitlines():
logger.debug(line)
# ...more code....
logger.info('more important intermediate results: df:')
for line in df.head(MAX_NUM_DF_LOG_LINES).to_string().splitlines():
logger.info(line)
# ...more code....
</code></pre>
<p>Prints:</p>
<pre><code>240102 10:58:20 INFO: Reading input.
240102 10:58:20 INFO: more important intermediate results: df:
240102 10:58:20 INFO: enzyme regions N length
240102 10:58:20 INFO: 0 AaaI all 10 238045
240102 10:58:20 INFO: 1 AaaI all 20 170393
240102 10:58:20 INFO: 2 AaaI captured 10 292735
240102 10:58:20 INFO: 3 AaaI captured 20 229824
</code></pre>
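<p>For comparison, the per-line loop can be folded into one small helper that is reused for every dataframe and level (a sketch; the name <code>log_df</code> is made up here):</p>

```python
import logging

import pandas as pd

def log_df(logger, level, df, max_lines, msg=''):
    # Emit an optional header message, then one log record per rendered
    # dataframe line, so each line carries the usual logging metadata.
    if msg:
        logger.log(level, msg)
    for line in df.head(max_lines).to_string().splitlines():
        logger.log(level, line)

logging.basicConfig(datefmt='%y%m%d %H:%M:%S',
                    format='%(asctime)s %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

df = pd.DataFrame({'enzyme': ['AaaI', 'AagI'], 'N': [10, 20]})
log_df(logger, logging.INFO, df, max_lines=4, msg='intermediate df:')
```

<p>Verbosity is then controlled per call via <code>level</code> and <code>max_lines</code>, while each dataframe line still gets the timestamp and level prefix.</p>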
<p><strong>Related:</strong></p>
<p>None of this accomplishes what I try to do, but it is getting closer:</p>
<ul>
<li><a href="https://stackoverflow.com/q/45216826/967621">How to print multiline logs using python logging module?</a>
<ul>
<li>See this comment, which is neat, but not very pythonic, as it calls <code>print</code> from inside a list comprehension and then discards the result: <em>"Do note that the latter only works on py2 due to map being lazy; you can do <code>[logger.info(line) for line in 'line 1\nline 2\nline 3'.splitlines()]</code> on py3. –
Kyuuhachi, Jun 22, 2021 at 16:30".</em></li>
<li>Also, the accepted answer by <em>Qeek</em> has issues: (a) it lacks the functionality to dynamically define the max number of dataframe rows to write into the log (define this once per script, not every call to logger); and (b) it has no examples of usage, so it is unclear.</li>
</ul>
</li>
<li><a href="https://stackoverflow.com/q/42515493/967621">Write or log print output of pandas Dataframe</a> - this prints something like this, that is it is missing the timestamp + logging level metadata at the beginning of each line:</li>
</ul>
<pre><code>240102 12:27:19 INFO: dataframe head - enzyme regions N length
0 AaaI all 10 238045
1 AaaI all 20 170393
2 AaaI captured 10 292735
...
</code></pre>
<ul>
<li><a href="https://stackoverflow.com/q/48369647/967621">How to log a data-frame to an output file</a> - same as the previous answer.</li>
</ul>
|
<python><pandas><dataframe><logging><pretty-print>
|
2024-01-02 16:07:07
| 2
| 12,712
|
Timur Shtatland
|
77,747,221
| 1,907,755
|
Delta Table read performance when using delta-rs Python API?
|
<p>I'm trying to read a <code>Delta Table</code> using the delta-rs library (Python).</p>
<p>The table has millions of records, and we want to read it frequently via a <code>REST API</code> call (only specific records, based on the request).</p>
<p>So, I was checking the <code>delta-rs</code> library. Since the table has millions of records, the read performance is not good.</p>
<p>It reads the entire table and converts it to a pandas DataFrame (before I can filter based on my request).</p>
<p>Is there a way to read only the records what i need instead of reading entire table then filter ( like <code>column pruning</code> , <code>predicate pushdown</code> etc)</p>
<p><strong>Update:</strong> I followed this issue (<a href="https://github.com/delta-io/delta-rs/issues/631" rel="nofollow noreferrer">https://github.com/delta-io/delta-rs/issues/631</a>) and was able to get good performance by converting the DeltaTable to a PyArrow Dataset and then using DuckDB to filter.</p>
|
<python><delta-lake><delta-rs>
|
2024-01-02 16:05:40
| 0
| 9,019
|
Shankar
|
77,746,775
| 16,626,443
|
Google Sheets API giving 400 response INVALID_ARGUMENT when attempting to add rows, can't figure out why
|
<p>I am trying to add rows to a Google Spreadsheet using the Python API and I am getting a 400 response code and the response body says that all the values I am trying to add are invalid.</p>
<p>I simplified my code to try adding one string value, and I get the same result. I was going to start adding values back one by one to see which was the problem, but I think the problem is something else and not actually an invalid value, as the only thing I am trying to add now is a simple string of the current date, and I get the same response.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code> client = gspread.oauth(credentials_filename="credentials.json")
sheet = client.open(constants_manager.SPREADSHEET_FILE_NAME)
worksheet = sheet.worksheet(constants_manager.SPREADSHEET_SHEET_NAME)
today_date = date.today().strftime("%d/%m/%Y")
old_num_rows = len(worksheet.get_all_values())
print(old_num_rows)
new_values = [today_date]
# for item in macros.values():
# new_values.append(round(item,2))
# print(type(item))
print(new_values)
worksheet.insert_rows(new_values, old_num_rows+1)
</code></pre>
<p>And here is my console output with the error message. You can see that I print <code>new_values</code> which is a 1 element list with the current date as a string.</p>
<pre class="lang-bash prettyprint-override"><code>['02/01/2024']
Traceback (most recent call last):
File "/home/moorby/Documents/coding_projects/loseit_calorie_weight_data_tracker/main.py", line 63, in <module>
main()
File "/home/moorby/Documents/coding_projects/loseit_calorie_weight_data_tracker/main.py", line 59, in main
worksheet.insert_rows(new_values)
File "/home/moorby/.pyenv/versions/loseit_calorie_weight_data_tracker/lib/python3.11/site-packages/gspread/worksheet.py", line 2049, in insert_rows
res = self.spreadsheet.values_append(range_label, params, body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/moorby/.pyenv/versions/loseit_calorie_weight_data_tracker/lib/python3.11/site-packages/gspread/spreadsheet.py", line 141, in values_append
r = self.client.request("post", url, params=params, json=body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/moorby/.pyenv/versions/loseit_calorie_weight_data_tracker/lib/python3.11/site-packages/gspread/client.py", line 93, in request
raise APIError(response)
gspread.exceptions.APIError: {'code': 400, 'message': 'Invalid value at \'data.values[0]\' (type.googleapis.com/google.protobuf.ListValue), "02/01/2024"', 'status': 'INVALID_ARGUMENT', 'details': [{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'data.values[0]', 'description': 'Invalid value at \'data.values[0]\' (type.googleapis.com/google.protobuf.ListValue), "02/01/2024"'}]}]}
</code></pre>
<p>I found some other answers that suggested the Spreadsheet ID might be wrong. I do not think that is the case for me as I can set <code>inherit_from_before</code> as <code>True</code> in the <code>worksheet.insert_rows()</code> method and I can see new rows being added with the same background colour as before.</p>
<p>I'm really confused as my string value of the current date can't be an invalid value, it's just a string. What's going on?</p>
<p><strong>EDIT</strong>: I also just tried with <code>new_values = ["hello"]</code> and I get the same result, apparently it is an invalid value.</p>
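<p>One possible reading of the error (an assumption, not verified against this exact gspread version): <code>data.values[0]</code> must be a <code>ListValue</code>, i.e. the values argument is a list of rows and each row is itself a list of cell values, so a single row needs one more level of nesting. A minimal sketch of the two shapes:</p>

```python
# The failing shape: a flat list, so values[0] is a bare string,
# which the Sheets API rejects as an invalid ListValue.
flat = ["02/01/2024"]

# The shape the values API expects: a list of rows, where each row
# is itself a list of cell values.
nested = [["02/01/2024"]]

# worksheet.insert_rows(nested, old_num_rows + 1)  # hypothetical fix

print(type(flat[0]).__name__, type(nested[0]).__name__)
```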
|
<python><google-sheets-api>
|
2024-01-02 14:45:20
| 1
| 760
|
Mo0rBy
|
77,746,656
| 19,130,803
|
apply different function on each row
|
<p>I have a dataframe with 2 columns, <code>field</code> and <code>value</code> (number of rows maximum 10). I need to perform some checks depending on the field (i.e., apply a different function to each row) and store the result in a <code>status</code> column. Below is a sample:</p>
<pre><code>data = {
'field': ['a', 'b'],
'value': [5, 20],
}
df = pd.DataFrame(data)
print('Initial DF')
print(f"{df=}")
def _check_field_a(value):
_min = 1
_max = 10
if _min <= value <= _max:
return True
return False
def _check_field_b(value):
values = [10, 20, 30, 40]
if value in values:
return True
return False
func = [_check_field_a, _check_field_b]
df['status'] = df.apply(lambda row: func[row.name](row['value']), axis=1)
print('After check DF')
print(f'{df=}')
</code></pre>
<p><strong>output</strong></p>
<pre><code>Initial DF
df= field value
0 a 5
1 b 20
After check DF
df= field value status
0 a 5 True
1 b 20 True
</code></pre>
<p>The above code is working; I'm just wondering whether there is a better way to achieve the same result.</p>
<p><strong>Edit-1</strong>
Inspired by the answers below, I have modified the code, but it is currently not working.</p>
<pre><code>data = {
'field': ['a', 'b', 'c'],
'value': [5, 20, 80],
}
df = pd.DataFrame(data)
print('Initial DF')
print(f"{df=}")
conditions = {
'a': {'values': (1, 10), 'check_type': 'between'},
'b': {'values': [10, 20, 30, 40], 'check_type': 'isin'},
'c': {'values': (50, 100), 'check_type': 'between'},
}
df['status'] = False
for key, condition in conditions.items():
if condition['check_type'] == 'between':
df['status'] = df.loc[df['field'] == key].between(condition['values'])
elif condition['check_type'] == 'isin':
df['status'] = df.loc[df['field'] == key].isin(condition['values'])
print('After check DF')
print(f'{df=}')
</code></pre>
<p><strong>Edit-2</strong>
Using <code>np.select</code>: this is working, and more fields can be added easily.</p>
<pre><code>data = {
'field': ['a', 'b', 'c'],
'value': [5, 20, 80],
}
df = pd.DataFrame(data)
print('Initial DF')
print(f"{df=}")
condlist = [df['field'] == 'a', df['field'] == 'b', df['field'] == 'c']
choicelist = [df['value'].between(1,10), df['value'].isin([10,20,30,40]), df['value'].between(50,100)]
df['status'] = np.select(condlist, choicelist, False)
print('After check DF')
print(f'{df=}')
</code></pre>
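<p>Another option is a dict dispatch keyed on the field name, which keeps the row-wise <code>apply</code> but avoids indexing a function list by <code>row.name</code> (a sketch):</p>

```python
import pandas as pd

def _check_field_a(value):
    return 1 <= value <= 10

def _check_field_b(value):
    return value in [10, 20, 30, 40]

def _check_field_c(value):
    return 50 <= value <= 100

# Dispatch on the field name instead of the row position, so row
# order no longer has to match the order of the checker functions.
checkers = {'a': _check_field_a, 'b': _check_field_b, 'c': _check_field_c}

df = pd.DataFrame({'field': ['a', 'b', 'c'], 'value': [5, 20, 80]})
df['status'] = df.apply(lambda r: checkers[r['field']](r['value']), axis=1)
print(df)
```

<p>With at most 10 rows, the per-row <code>apply</code> overhead is negligible, so readability can drive the choice here.</p>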
|
<python><pandas><dataframe>
|
2024-01-02 14:24:22
| 2
| 962
|
winter
|
77,746,653
| 2,386,113
|
Allocate unified memory on GPU
|
<p>I am using Python and cuPy to access a GPU cluster (NVIDIA V100, four GPUs, 32 GB each). I need to allocate large arrays which cannot fit on a single GPU. Therefore, I would like to use Unified Memory via <strong><a href="https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.ManagedMemory.html" rel="nofollow noreferrer">cuPy</a></strong>, which can be allocated across all available GPUs.</p>
<p><strong>My code:</strong></p>
<pre><code>import cupy as cp
import numpy as np
# allocate unified memory
pool = cp.cuda.MemoryPool(cp.cuda.malloc_managed)
cp.cuda.set_allocator(pool.malloc)
# Desired memory in GB
desired_memory_gb = 42 # SET this value to greater than 32 GB
# Calculate the number of elements required to achieve desired memory
element_size_bytes = np.dtype(np.float64).itemsize
desired_memory_bytes = desired_memory_gb * (1024**3) # Convert GB to bytes
num_elements = desired_memory_bytes // element_size_bytes
# Create the array with the calculated number of elements
array = cp.full(num_elements, 1.1, dtype=np.float64)
# GPU
print("Array allocated on unified memory...")
</code></pre>
<p><strong>Problem:</strong> With the code above, I am able to allocate unified memory and also to create a larger array (greater than 32 GB), but the unified memory is not spread across all GPUs. In this case, 32 GB of memory is allocated on one single GPU and the rest is allocated on the CPU (not on GPU 2, 3 or 4).</p>
<p>How can I force the program to use all GPUs (instead of just CPU) for the allocation of unified memory?</p>
|
<python><gpu><cupy>
|
2024-01-02 14:23:07
| 1
| 5,777
|
skm
|
77,746,627
| 9,669,142
|
Convolution integral with limits
|
<p>I'm trying to implement a convolution integral in Python with limits, but I cannot get it to work.</p>
<p>I have this convolution integral:</p>
<p><a href="https://i.sstatic.net/lGGbx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGGbx.png" alt="enter image description here" /></a></p>
<p>Where t0 is the start year and t is the end year. E is a list of values (one value per year) and G is defined as:</p>
<p><a href="https://i.sstatic.net/tvVhe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tvVhe.png" alt="enter image description here" /></a></p>
<p>Where A, alpha and tau are known constants.</p>
<p>Now I need to calculate the result of the convolution integral with the t0-to-t limits, but I cannot get it to work. SciPy does not seem to have an option to specify the limits. I tried the convolve functions of SciPy and NumPy, but neither can handle custom limits inside the function.</p>
<p>Does someone know how to calculate the convolution integral with Python with the limits t0 and t?</p>
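<p>Since the exact form of G is only given in the figure, a discretized sketch with a placeholder exponential kernel may help (the kernel, constants, and yearly step are assumptions): the integral with limits becomes a truncated sum over the years from t0 to t.</p>

```python
import numpy as np

# Placeholder kernel -- substitute the real G(t) with the known
# A, alpha, tau constants; the form used here is an assumption.
A_const, alpha, tau = 1.0, 0.5, 2.0
def G(t):
    return A_const * np.exp(-alpha * t / tau)

E = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # one value per year, t0..t
dt = 1.0                                   # yearly step

def conv_with_limits(E, G, dt):
    # y(t_i) = sum over t' in [t0, t_i] of E(t') * G(t_i - t') * dt
    n = len(E)
    t = np.arange(n) * dt
    y = np.empty(n)
    for i in range(n):
        y[i] = np.sum(E[: i + 1] * G(t[i] - t[: i + 1])) * dt
    return y

print(conv_with_limits(E, G, dt))
```

<p>Note the truncated sum coincides with the first <code>n</code> entries of a full discrete convolution, so <code>np.convolve(E, G(t))[:n] * dt</code> gives the same result; the limits fall out of the truncation rather than needing a special function.</p>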
|
<python><convolution>
|
2024-01-02 14:17:56
| 0
| 567
|
Fish1996
|
77,746,509
| 13,225,321
|
code run on tensorflow cpu version, won't run on tensorflow gpu version, trying cnn bilstm model
|
<p>I am currently working on my CQF final project on deep learning.</p>
<p>I built a cnn-bilstm-attention model earlier as the course progressed. As I threw more and more features into the model, the training process got slower and slower; four features drag me down to 10 s per epoch. So I redid my Windows setup and everything: a fresh Miniconda3 with Python 3.9.16, tensorflow-gpu and keras == 2.10, CUDA == 11.5 with cuDNN, all set up for deep-learning code.</p>
<p>The cnn-bilstm-attention script was working on tensorflow==2.15, but slowly. When I moved to tensorflow-gpu==2.10, it won't run. It stops at epoch 1 with nothing showing up: no error, no warning, nothing. GPU memory occupancy drops back to the normal level 10-15 seconds later.</p>
<p>However, an autoencoder model does work on both TensorFlow versions, CPU and GPU, without a problem, even though the training dataset for the autoencoder is three times larger than what I have for the cnn-bilstm-attention model. It is just this cnn-lstm-attention model that won't run on tensorflow-gpu==2.10. I did some research on LSTM vs CuDNNLSTM and switched; nothing changed, still stuck on epoch 1.</p>
<p>Does anyone have any idea why this is not working? I am not a CS guy; Python is the only language I am familiar with. Any hints would be helpful, I suppose. Thank you very much.</p>
|
<python><tensorflow><keras><deep-learning>
|
2024-01-02 13:53:27
| 1
| 329
|
pepCoder
|
77,746,452
| 7,281,675
|
DRF inserting relative urls to models.URLField
|
<p>I am posting a <code>relative url</code> to a DRF model which includes a <code>models.URLField</code>. The problem is that for this field it returns <code>insert a valid internet address</code>. I know that the problem is with the <code>is_valid</code> method. I also know that I could change the <code>relative url</code> on both sides to an <code>absolute url</code>, but I do not know how to handle the problem so that <code>relative urls</code> are allowed too. Any idea?</p>
|
<python><django><django-rest-framework>
|
2024-01-02 13:43:28
| 0
| 4,603
|
keramat
|
77,746,344
| 2,080,368
|
How to hyperlink nodes in d3js NetworkX diGraph
|
<p>I would like to create a NetworkX graph and visualize it using d3js similarly to
the javascript <a href="https://networkx.org/documentation/stable/auto_examples/external/javascript_force.html#sphx-glr-auto-examples-external-javascript-force-py" rel="nofollow noreferrer">example</a> in the NetworkX docs. This graph is very similar to the interactive graph on the NetworkX homepage <a href="https://networkx.org/" rel="nofollow noreferrer">page</a>.</p>
<p>The example works for me, but I would like to add hyperlinks to the nodes. I think I need node attributes called "xlink:href", but I have not figured out how to make this work.</p>
<p>This question was answered for NetworkX and visualization with bokeh <a href="https://stackoverflow.com/questions/69456723/networkx-add-hyperlink-to-each-node">here</a>. I have not tested this example, since I want to use d3js. The code below is available <a href="https://gist.github.com/BjornFJohansson/a60d754bf6330852c5b683e1dafbf2d4" rel="nofollow noreferrer">here</a>.</p>
<p>So far:</p>
<pre><code>import json
import networkx as nx
G = nx.Graph()
G.add_node('Node1')
G.add_node('Node2')
G.add_edge('Node1', 'Node2')
for n in G:
G.nodes[n]["name"] = "My" + str(n)
G.nodes[n]["xlink:href"] = "http://google.com" # <==<< link Not working
d = nx.json_graph.node_link_data(G)
json.dump(d, open("force/force.json", "w"))
print("Wrote node-link JSON data to force/force.json")
</code></pre>
<p>The above produces:</p>
<pre><code>{'directed': False,
'multigraph': False,
'graph': {},
'nodes': [{'name': 'MyNode1',
'xlink:href': 'http://google.com',
'id': 'Node1'},
{'name': 'MyNode2', 'xlink:href': 'http://google.com', 'id': 'Node2'}],
'links': [{'source': 'Node1', 'target': 'Node2'}]}
</code></pre>
<p>Which can be visualized like this:</p>
<pre><code># Serve the file over http to allow for cross origin requests
import flask
app = flask.Flask(__name__, static_folder="force")
@app.route("/")
def static_proxy():
return app.send_static_file("force.html")
app.run(port=8001)
</code></pre>
<p><a href="https://i.sstatic.net/da0bD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/da0bD.png" alt="enter image description here" /></a></p>
<p>Interestingly, the tooltip on the graph displays "Node2" and not "MyNode2".</p>
<p>Links collected while trying to solve this:</p>
<ul>
<li><a href="https://github.com/simonlindgren/nXd3" rel="nofollow noreferrer">https://github.com/simonlindgren/nXd3</a></li>
<li><a href="http://www.d3noob.org/2014/05/including-html-link-in-d3js-tool-tip.html" rel="nofollow noreferrer">http://www.d3noob.org/2014/05/including-html-link-in-d3js-tool-tip.html</a></li>
<li><a href="https://networkx.org/documentation/stable/reference/readwrite/json_graph.html" rel="nofollow noreferrer">https://networkx.org/documentation/stable/reference/readwrite/json_graph.html</a></li>
</ul>
|
<python><d3.js><networkx>
|
2024-01-02 13:20:53
| 3
| 406
|
Björn Johansson
|
77,746,126
| 20,770,190
|
How to use dependency injection in FastAPI properly?
|
<p>I am new to FastAPI, and I'm trying to use a simple dependency injection via <code>Depends</code> with the FastAPI.</p>
<p>Can somebody tell me what the differences are between these two code snippets that use <code>Depends</code> for <strong>Shared Logic</strong>?</p>
<h2>1:</h2>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated
from uuid import UUID
from fastapi import APIRouter, Depends
router = APIRouter()
async def test_injection(id: UUID) -> str:
return str(id)
@router.get("/", response_model=str)
async def get_root(uuid: Annotated[UUID, Depends(test_injection)]) -> str:
return uuid
</code></pre>
<h2>2:</h2>
<pre class="lang-py prettyprint-override"><code>async def test_injection(id: UUID) -> str:
return str(id)
@router.get("/", response_model=str)
async def get_root(uuid: UUID = Depends(test_injection)) -> str:
return uuid
</code></pre>
<p>The idea for the first code snippet comes from the FastAPI dependency injection <a href="https://fastapi.tiangolo.com/tutorial/dependencies/#create-a-dependency-or-dependable" rel="nofollow noreferrer">document</a>, and the second comes from the many examples across the Internet that use a <code>get_db</code> method as a dependency injection.</p>
|
<python><dependency-injection><fastapi><python-typing>
|
2024-01-02 12:40:36
| 0
| 301
|
Benjamin Geoffrey
|
77,746,021
| 8,648,222
|
Using multiple **kwarg in Python with syntax def func(a,b, *, kwarg1, kwarg2)
|
<p>I forgot where I saw a post talking about how we can use the <code>*</code> syntax to pass as many kwargs as we like after it:</p>
<pre><code>def func(arg1, arg2, *, kwarg1, kwarg2):
print(arg1, arg2)
for k, v in kwarg1.items():
print(k, v)
for k, v in kwarg2.items():
print(k, v)
return None
kwarg1 = {'a':1, 'b':2, 'c':3}
kwarg2 = {'d':4, 'e':5, 'f':6}
func(6,7, **kwarg1, **kwarg2)
</code></pre>
<p>The expected output is:</p>
<pre><code>6 7
a 1
b 2
e 5
d 4
c 3
f 6
</code></pre>
<p>But I get this error: <code>TypeError: func() got an unexpected keyword argument 'b'</code></p>
<p>I can get this version working; it's just that I am interested in the version above.</p>
<pre><code>def func(arg1, arg2, **kwarg1):
print(arg1, arg2)
for k, v in kwarg1.items():
print(k, v)
return None
kwarg1 = {'a':1, 'b':2, 'c':3}
kwarg2 = {'d':4, 'e':5, 'f':6}
kwarg1.update(kwarg2)
func(6,7, **kwarg1)
</code></pre>
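<p>For what it's worth, names after a bare <code>*</code> are keyword-only parameters, so each dict has to be passed by that exact name rather than unpacked with <code>**</code> (unpacking maps the dict keys <code>a</code>, <code>b</code>, … onto parameter names, which is where the <code>unexpected keyword argument</code> error comes from). A sketch of the first signature used as intended:</p>

```python
def func(arg1, arg2, *, kwarg1, kwarg2):
    # kwarg1 and kwarg2 are keyword-only parameters: each must be
    # passed by name and receives the whole dict as its value.
    print(arg1, arg2)
    for k, v in kwarg1.items():
        print(k, v)
    for k, v in kwarg2.items():
        print(k, v)

kwarg1 = {'a': 1, 'b': 2, 'c': 3}
kwarg2 = {'d': 4, 'e': 5, 'f': 6}

# Pass the dicts by parameter name -- no ** unpacking:
func(6, 7, kwarg1=kwarg1, kwarg2=kwarg2)
```

<p>Calling <code>func(6, 7, **kwarg1, **kwarg2)</code> instead raises <code>TypeError</code>, because the merged keys (<code>a</code>, <code>b</code>, …) do not match the parameter names <code>kwarg1</code> and <code>kwarg2</code>.</p>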
|
<python>
|
2024-01-02 12:17:01
| 0
| 825
|
v_head
|