QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
78,171,831
| 9,363,441
|
Reading an EEG file with MNE shows: Channels contain different lowpass filters
|
<p>I have EEG files from BrainVision and would like to load them into a dataset with MNE (or any other pip library), but I'm getting this warning:</p>
<pre><code>RuntimeWarning: Channels contain different lowpass filters. Highest (weakest) filter setting (131.00 Hz) will be stored.
raw = mne.io.read_raw_brainvision('S01_base.vhdr', preload=True)
</code></pre>
<p>which is produced by code:</p>
<pre><code>import mne
raw = mne.io.read_raw_brainvision('base.vhdr', preload=True)
events, event_ids = mne.events_from_annotations(raw)
</code></pre>
<p>I also tried to use MATLAB with EEGLAB for that task, but as far as I can see it is not installable in MATLAB :( I don't have any .edf file, for example. So, how can I get a dataset like <code>timestamp -> signal</code> from the .eeg file?</p>
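For what it's worth, that message is a RuntimeWarning, not an error: MNE keeps the data and records the weakest (131 Hz) filter setting. A timestamp -> signal table then just combines `raw.times` with `raw.get_data()`. A minimal sketch of that last step, with synthetic numpy arrays standing in for the MNE objects (no MNE or BrainVision file assumed; the sampling rate and channel names are made up):

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for the real MNE attributes raw.times,
# raw.get_data() and raw.ch_names (made-up sampling rate/channels).
sfreq = 250.0
times = np.arange(250) / sfreq             # like raw.times, in seconds
data = np.random.randn(2, times.size)      # like raw.get_data(): channels x samples
ch_names = ["Fp1", "Fp2"]                  # like raw.ch_names

# One row per timestamp, one column per channel.
df = pd.DataFrame(data.T, index=pd.Index(times, name="time"), columns=ch_names)
print(df.shape)
```

On real data, MNE's own `raw.to_data_frame()` produces this table in one call, and the lowpass warning can simply be ignored or silenced with `warnings.filterwarnings`.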
|
<python><mne-python><eeglab>
|
2024-03-16 12:52:32
| 1
| 2,187
|
Andrew
|
78,171,824
| 10,565,197
|
tqdm nested progress bars with multiprocessing
|
<p>I'm using multiprocessing to do multiple long jobs, and an outer progress bar tracks how many jobs are completed. With an inner progress bar, I want to show the progress of an individual job, and also be able to print out when the inner progress bar completes.</p>
<p><a href="https://imgur.com/a/RqlCyLk" rel="nofollow noreferrer">This is what it should look like.</a></p>
<p>The problem is that when the inner progress bar completes, it disappears, because <code>leave=False</code>. <code>leave=True</code> also does not work, because I have to be able to restart the inner progress bar. Therefore my solution has been to simply print out the completed bar manually.</p>
<p>My solution is shown below. Because it uses <code>sleep(.04)</code>, the .04 needs to be changed depending on the computer, number of workers, job length, etc. Also, it doesn't always work, even if you try to adjust the sleep. Therefore, I'm looking for a non-hacky answer which will work on any computer.</p>
<pre><code>from tqdm import tqdm
from time import sleep
import multiprocessing

def do_the_thing(my_args):
    if my_args:
        pbar_inner = tqdm(total=15, position=1, leave=False)
        for i in range(15):
            sleep(.1)
            pbar_inner.update()
    else:
        sleep(1.5)

if __name__ == '__main__':
    postfix = ' [Use this line/progress bar to print some stuff out.]'
    pbar_outer = tqdm(total=60, position=0, leave=True)
    for n in range(3):
        pool = multiprocessing.Pool(2)
        args = [True if i % 8 == 0 else False for i in range(20)]
        for count, m in enumerate(pool.imap_unordered(do_the_thing, args)):
            pbar_outer.update()
            if args[count]:
                sleep(.04)
                my_pbar_inner = tqdm(total=15, position=1, leave=False,
                                     bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt}' + postfix)
                my_pbar_inner.update(15)
                my_pbar_inner.set_postfix_str('')
        pool.close()
        pool.join()
</code></pre>
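For coordinating bars across processes, tqdm's documented pattern is to share its write lock with the workers through the pool initializer. A minimal sketch of that pattern (simplified jobs, not the layout above; `job` and `run_jobs` are made-up names):

```python
from multiprocessing import Pool, RLock
from time import sleep

from tqdm import tqdm


def job(i):
    sleep(0.01)  # placeholder for a long job
    return i * i


def run_jobs(n=4):
    # Hand tqdm's write lock to each worker via the pool initializer
    # (tqdm's documented multiprocessing pattern), so bars printed
    # from different processes don't clobber each other's lines.
    results = []
    with Pool(2, initializer=tqdm.set_lock, initargs=(RLock(),)) as pool:
        for r in tqdm(pool.imap_unordered(job, range(n)), total=n, position=0):
            results.append(r)
    return sorted(results)


if __name__ == '__main__':
    print(run_jobs())
```

With the lock shared, workers can own `position=1` bars without corrupting the outer bar's line; how to keep a finished inner bar visible is still a layout choice, but at least the redraws stop racing.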
|
<python><multiprocessing><tqdm>
|
2024-03-16 12:50:15
| 2
| 429
|
Taw
|
78,171,813
| 4,429,265
|
Qdrant filtering using nested object fields
|
<p>I have a data structure on Qdrant where the payload contains something like this:</p>
<pre><code>
{
"attributes": [
{
"attribute_value_id": 22003,
"id": 1252,
"key": "Environment",
"value": "Casual/Daily",
},
{
"attribute_value_id": 98763,
"id": 1254,
"key": "Color",
"value": "Multicolored",
},
{
"attribute_value_id": 22040,
"id": 1255,
"key": "Material",
"value": "Polyester",
},
],
"brand": {
"id": 114326,
"logo": None,
"slug": "happiness-istanbul-114326",
"title": "Happiness Istanbul",
},
}
</code></pre>
<p>According to <a href="https://qdrant.tech/documentation/concepts/filtering/" rel="nofollow noreferrer">Qdrant documentations</a>, I implemented filtering for brand like this:</p>
<pre><code>filters_list = []

if param_filters:
    brands = param_filters.get("brand_params")
    if brands:
        filter = models.FieldCondition(
            key="brand.id",
            match=models.MatchAny(any=[int(brand) for brand in brands]),
        )
        filters_list.append(filter)

search_results = qd_client.search(
    query_filter=models.Filter(must=filters_list),
    collection_name=f"lang{lang}_products",
    query_vector=query_vector,
    search_params=models.SearchParams(hnsw_ef=128, exact=False),
    limit=limit,
)
</code></pre>
<p>Which works so far. But things get complicated when I try to filter on the <code>attributes</code> field. As you can see, it is a list of dictionaries like:</p>
<pre><code>{
"attribute_value_id": 22040,
"id": 1255,
"key": "Material",
"value": "Polyester",
}
</code></pre>
<p>And the <code>attrs</code> filter sent from the front-end is in this structure:</p>
<pre><code>attrs structure: {"attr_id": [attr_value_ids], "attr_id": [att_value_ids]}
>>> example: {'1237': ['21727', '21759'], '1254': ['52776']}
</code></pre>
<p>How can I filter to see if the provided <code>attr_id</code> in the query filter params (here, it is either <code>1237</code>, or <code>1254</code>) exists in the <code>attributes</code> field and has one of the <code>attr_value_id</code>s provided in the list (e.g. <code>['21727', '21759']</code> here)?</p>
<p>This is what I've tried so far:</p>
<pre><code>if attrs:
    # attrs structure: {"attr_id": [attr_value_ids], "attr_id": [att_value_ids]}
    print("attrs from search function:", attrs)
    for attr_id, attr_value_ids in attrs.items():
        # Convert attribute value IDs to integers
        attr_value_ids = [
            int(attr_value_id) for attr_value_id in attr_value_ids
        ]
        # Add a filter for each attribute ID and its values
        filter = models.FieldCondition(
            key=f"attributes.{attr_id}.attr_value_id",
            match=models.MatchAny(any=attr_value_ids),
        )
        filters_list.append(filter)
</code></pre>
<p>The problem is that <code>key=f"attributes.{attr_id}.attr_value_id",</code> is wrong and I do not know how to achieve this.</p>
<p>UPDATE: Maybe one step closer:</p>
<p>I decided to flatten out the data in the DB, to maybe do this better. First, I created a new field named <code>flattened_attributes</code>, which looks like this:</p>
<pre><code>[
{
"1237": 21720
},
{
"1254": 52791
},
{
"1255": 22044
},
]
</code></pre>
<p>Also, before filtering, I followed the same approach on the attr filters sent from front-end:</p>
<pre><code>if attrs:
    # attrs structure: {"attr_id": [attr_value_ids], "attr_id": [att_value_ids]}
    # we need to flatten attrs to filter on payloads
    flattened_attr = []
    for attr_id, attr_value_ids in attrs.items():
        for attr_value_id in attr_value_ids:
            flattened_attr.append({attr_id: int(attr_value_id)})
</code></pre>
<p>Now I have two similar lists of dicts, and I want to filter for points where at least one dict matches one received from the front-end (<code>flattened_attr</code>).</p>
<p>There is one type of filtering that we filter if the value of the key exists in a list of values, as mentioned <a href="https://qdrant.tech/documentation/concepts/filtering/#match-any" rel="nofollow noreferrer">here in the docs</a>. But I do not know how to check if a dict exists in the <code>flattened_attributes</code> field in the db.</p>
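One candidate, assuming a Qdrant version with nested object filters (v1.2+): a nested condition scopes its inner conditions to a single element of the <code>attributes</code> array, so <code>id</code> and <code>attribute_value_id</code> can be required to match on the same element. The sketch builds the REST-level filter shape as plain dicts so it runs without qdrant-client; in the client library the same structure is expressed with `models.NestedCondition` / `models.Nested`:

```python
# REST-level Qdrant filter built as plain dicts (no qdrant-client
# needed for the sketch). `attrs` is the example front-end payload.
attrs = {'1237': ['21727', '21759'], '1254': ['52776']}

should_conditions = []
for attr_id, attr_value_ids in attrs.items():
    # A "nested" condition evaluates its inner `must` against one
    # element of the `attributes` array at a time, so both keys
    # must match on the same attribute object.
    should_conditions.append({
        "nested": {
            "key": "attributes",
            "filter": {
                "must": [
                    {"key": "id", "match": {"value": int(attr_id)}},
                    {"key": "attribute_value_id",
                     "match": {"any": [int(v) for v in attr_value_ids]}},
                ]
            },
        }
    })

# "should": a point matches if at least one attribute pair matches.
query_filter = {"should": should_conditions}
print(query_filter)
```

With this, neither the dotted `attributes.{attr_id}...` key nor the flattening workaround is needed; the original list-of-dicts payload can be filtered directly.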
|
<python><python-3.x><filtering><qdrant>
|
2024-03-16 12:48:04
| 1
| 417
|
Vahid
|
78,171,674
| 6,455,731
|
Pydantic: How to type hint a model constructor?
|
<p>How to properly type hint a model constructor, e.g. function that takes a model class and returns a model instance?</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int
</code></pre>
<p>First approach:</p>
<pre class="lang-py prettyprint-override"><code>def make_model_instance(model: type[BaseModel], **kwargs) -> BaseModel:
    return model(**kwargs)

point: Point = make_model_instance(model=Point, x=1, y=2)
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</code></pre>
<p>Here the linter flags the call to <code>make_model_instance</code>, but only if I type hint <code>point</code> with <code>Point</code>.</p>
<p>Second approach:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar

ModelType = TypeVar("ModelType", bound=BaseModel)

def make_model_instance(model: ModelType, **kwargs) -> ModelType:
    return model(**kwargs)
           ^~~~~~~~~~~~~~

point: Point = make_model_instance(model=Point, x=1, y=2)
                                         ^~~~~
</code></pre>
<p>Third approach:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar

ModelType = TypeVar("ModelType", bound=BaseModel)

def make_model_instance(model: type[ModelType], **kwargs) -> ModelType:
    return model(**kwargs)

point: Point = make_model_instance(model=Point, x=1, y=2)
</code></pre>
<p>This appears to do the trick, no complaints from the linter, but I don't really know why.</p>
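A dependency-free sketch of why the third approach satisfies the checker: `type[ModelType]` declares that the argument is a class object, so calling it produces a `ModelType`, and the type variable binds to the concrete class at each call site. Here a plain base class stands in for pydantic's `BaseModel` so the sketch runs without pydantic installed:

```python
from typing import TypeVar


class Base:  # stand-in for pydantic.BaseModel in this sketch
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


ModelType = TypeVar("ModelType", bound=Base)


def make_model_instance(model: type[ModelType], **kwargs) -> ModelType:
    # `model` is the class itself, so calling it yields an instance
    # of exactly that class; the checker binds ModelType to it.
    return model(**kwargs)


class Point(Base):
    pass


point = make_model_instance(Point, x=1, y=2)
print(type(point).__name__)
```

In the second approach, `model: ModelType` instead declares an *instance* of the model, which is why the checker flags both the call to `model(**kwargs)` and the argument `Point` (a class, not an instance).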
|
<python><python-typing><pydantic>
|
2024-03-16 12:06:01
| 0
| 964
|
lupl
|
78,171,645
| 6,151,828
|
Python: memory consumption when passing data to a function vs. loading within the function
|
<p>I am working with a large dataset in Python 2, loading it via pandas. Occasionally the execution is spontaneously killed, likely due to the high memory consumption. Thus I am trying to make my code more efficient in terms of not duplicating data in new variables, eliminating variables that are no longer needed, etc.</p>
<p>I currently load the dataset and then pass the data to a function that operates on it:</p>
<pre><code>def func(df):
    # do something
    return result

df = pd.read_csv(path_to_data)
func(df)
</code></pre>
<p>I wonder whether there is any advantage (memory-wise) in loading the dataset within the function:</p>
<pre><code>def func(path_to_data):
    df = pd.read_csv(path_to_data)
    # do something
    return result

func(path_to_data)
</code></pre>
<p>Other tips on optimizing memory use with python and pandas would be appreciated as well.</p>
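Independent of where `read_csv` is called, two generic pandas savers are usually worth trying first: downcasting numeric dtypes and converting low-cardinality strings to categoricals. A Python 3 sketch on synthetic data (the column names and sizes are made up):

```python
import numpy as np
import pandas as pd

# Synthetic frame standing in for the CSV (made-up columns).
df = pd.DataFrame({
    "id": np.arange(100_000, dtype=np.int64),
    "flag": ["yes", "no"] * 50_000,
})

before = df.memory_usage(deep=True).sum()

# Two generic savers: downcast wide integers, and store
# low-cardinality strings as categoricals.
df["id"] = pd.to_numeric(df["id"], downcast="unsigned")
df["flag"] = df["flag"].astype("category")

after = df.memory_usage(deep=True).sum()
print(before, after)
```

On the question itself: passing the DataFrame to a function does not copy it (Python passes references), so moving `read_csv` inside the function mainly helps by letting the frame go out of scope, and thus be freed, as soon as the function returns.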
|
<python><pandas><function><memory>
|
2024-03-16 11:55:32
| 0
| 803
|
Roger V.
|
78,171,567
| 2,545,680
|
Is uvicorn used for an external threadpool or an internal event loop running in the main (single) thread?
|
<p>FastAPI uses the <code>uvicorn</code> package to run the app:</p>
<pre><code>uvicorn main:app --reload
</code></pre>
<p>The docs explain that:</p>
<blockquote>
<p>Uvicorn is an ASGI web server implementation for Python... The ASGI specification fills this gap, and means we're now able to start building a common set of tooling usable across all async frameworks.</p>
<p>...will install uvicorn with "Cython-based" dependencies (where possible) and other "optional extras". In this context, "Cython-based" means the following:</p>
<ul>
<li>the event loop uvloop will be installed and used if possible.
<strong>uvloop is a fast, drop-in replacement of the built-in asyncio event loop</strong>. It is implemented in Cython. The built-in asyncio event loop serves as an easy-to-read reference implementation and is there for easy debugging as it's pure-python based.</li>
</ul>
</blockquote>
<p>Also, <a href="https://stackoverflow.com/a/71517830/2545680">this great</a> answer explains that:</p>
<blockquote>
<p>Thus, <code>def</code> endpoints (in the context of asynchronous programming, a function defined with just <code>def</code> is called synchronous function), in FastAPI, <strong>run in a separate thread from an external threadpool</strong> that is then awaited, and hence, FastAPI will still work asynchronously. In other words, the server will process requests to such endpoints concurrently. Whereas, <code>async def</code> endpoints run directly in the event loop which runs in the main (single) thread...</p>
</blockquote>
<p>So my question is: what is <code>uvicorn</code> in this context, a separate thread from an external threadpool, or an event loop which runs in the main (single) thread? From the description above it seems <code>uvicorn</code> replaces the built-in event loop where <code>def</code> functions are executed. Correct? If so, what is used for the separate thread from the external threadpool?</p>
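The mechanism the quoted answer describes can be sketched with the stdlib alone: a blocking function handed to the loop's thread pool runs off the main thread, while a coroutine runs on the event loop in the main thread. This is a simplified stand-in for what FastAPI (via AnyIO) does with `def` endpoints, not uvicorn's actual code:

```python
import asyncio
import threading
import time


def sync_endpoint():
    # Stands in for a plain `def` endpoint: blocking work.
    time.sleep(0.05)
    return threading.current_thread() is threading.main_thread()


async def async_endpoint():
    # Stands in for an `async def` endpoint: runs on the event loop.
    await asyncio.sleep(0.05)
    return threading.current_thread() is threading.main_thread()


async def main():
    loop = asyncio.get_running_loop()
    # Offload the blocking function to the loop's default thread pool,
    # roughly what FastAPI does for `def` endpoints.
    on_main_sync = await loop.run_in_executor(None, sync_endpoint)
    on_main_async = await async_endpoint()
    return on_main_sync, on_main_async


print(asyncio.run(main()))
```

So uvicorn (and uvloop, if installed) supplies the event loop; the threadpool for `def` endpoints is a separate executor managed by the framework layer, not by uvicorn.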
|
<python><multithreading><fastapi>
|
2024-03-16 11:26:22
| 0
| 106,269
|
Max Koretskyi
|
78,171,527
| 5,938,276
|
Populate class based on attribute value
|
<p>I have data that is streamed in a repeating fashion, and I need to apply business logic to it before populating a <code>@dataclass</code>.</p>
<p>The data class has the following structure:</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Foo:
    color: str
    engine: int
    petrol: int
    diesel: int
</code></pre>
<p>The data consists of an enum color and two pairs of two-digit numbers, streamed a single digit at a time.</p>
<pre><code>Red, 3, 1, 4, 5, Blue, 2, 2, 7, 5, Orange, 5, 2, 6, 8 etc...
</code></pre>
<p>Red is assigned to color attribute, 31 is assigned to engine attribute.</p>
<p>If the color attribute is Red, Blue or Yellow then the last two digits (45) are assigned to petrol attribute.</p>
<p>If the color attribute is Orange or Green then the last two digits (45) are assigned to diesel attribute.</p>
<p>I tried this:</p>
<pre><code>digits = 0
positional_counter = 0

# Set first attribute color
if value in ["Red", "Blue", "Orange", "Green"]:
    self.foo.color = value
    positional_counter += 1

# Set second attribute - joining the digits
if positional_counter == 1:
    digits = value * 10
    positional_counter += 1
if positional_counter == 2:
    self.foo.engine = digits + value
    digits = 0
    positional_counter += 1

# Whether the second set of digits is set against petrol or diesel depends on color attrib
if self.foo.color in ["Red", "Blue"]:
    if positional_counter == 3:
        digits = value * 10
        positional_counter += 1
    if positional_counter == 4:
        self.foo.petrol = digits + value
        positional_counter = 0
if self.foo.color in ["Orange", "Green"]:
    if positional_counter == 3:
        digits = value * 10
        positional_counter += 1
    if positional_counter == 4:
        self.foo.diesel = digits + value
        positional_counter = 0
</code></pre>
<p>Is there a better way to do this? The above code looks unreadable and messy. It is also possible that the data may start streaming on a value other than a color enum, in which case the first one or two values should be skipped until a color enum is encountered to set the start position.</p>
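One possible restructuring, based only on the rules stated above: treat the stream as an iterator and group it into (color, four digits) records, skipping anything before the first color. The record layout and names here are illustrative, not the asker's actual classes:

```python
COLORS = {"Red", "Blue", "Yellow", "Orange", "Green"}
PETROL_COLORS = {"Red", "Blue", "Yellow"}


def parse_stream(stream):
    # Group the flat stream into (color, four digits) records.
    # Values before the first color are skipped, which also covers a
    # stream that starts mid-record.
    it = iter(stream)
    records = []
    for value in it:
        if value not in COLORS:
            continue
        d = [next(it) for _ in range(4)]  # the four digits after a color
        record = {"color": value,
                  "engine": d[0] * 10 + d[1],
                  "petrol": None,
                  "diesel": None}
        fuel = "petrol" if value in PETROL_COLORS else "diesel"
        record[fuel] = d[2] * 10 + d[3]
        records.append(record)
    return records


print(parse_stream(["Red", 3, 1, 4, 5, "Blue", 2, 2, 7, 5, "Orange", 5, 2, 6, 8]))
```

Each record dict maps directly onto the `Foo` dataclass; a truncated final record would raise (StopIteration surfacing as RuntimeError), so a real version might buffer incomplete tails instead.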
|
<python>
|
2024-03-16 11:10:31
| 1
| 2,456
|
Al Grant
|
78,171,386
| 333,151
|
how to plot calendar heatmap by month
|
<p>This seems quite simple, but for some reason, I'm struggling to get it working the way I want.</p>
<p>This is the data I have.</p>
<p><a href="https://i.sstatic.net/efRQ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/efRQ2.png" alt="Sales by Date" /></a></p>
<p>I want to show it as a heat map broken down by month, for all months, like below:</p>
<p><a href="https://i.sstatic.net/4tRTX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4tRTX.png" alt="Calendar heatmap" /></a></p>
<p>I tried calplot, but I was unable to figure out how to make it show by month, and the <code>july</code> package started giving me all sorts of conversion errors.</p>
<p>Please assist.</p>
<p>Thanks,
Arun</p>
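Whatever library ends up rendering it, the core of a by-month calendar heatmap is pivoting the daily series into a month x day-of-month grid. A sketch with synthetic data standing in for the table in the screenshot; `seaborn.heatmap(grid)` or `plt.imshow(grid)` can then draw it:

```python
import pandas as pd

# Synthetic daily sales standing in for the screenshot's table
# (made-up dates and values).
dates = pd.date_range("2024-01-01", "2024-03-31", freq="D")
sales = pd.Series(range(len(dates)), index=dates)

# One row per month, one column per day of month: the grid a
# calendar-style heatmap needs. Days absent from a month stay NaN.
grid = sales.groupby([sales.index.month, sales.index.day]).sum().unstack()
grid.index.name = "month"
grid.columns.name = "day"
print(grid.shape)
```

The NaN cells (e.g. Feb 30) render as blanks in most heatmap functions, which is exactly the calendar look in the second screenshot.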
|
<python>
|
2024-03-16 10:24:19
| 1
| 1,017
|
Arun
|
78,171,367
| 5,539,782
|
How to Find and Decode URL Encoded Strings in Website's HTML/JavaScript for Scraping Live Odds from OddsPortal?
|
<p>I'm working on a project to scrape live odds for individual games from OddsPortal
(<a href="https://www.oddsportal.com/inplay-odds/live-now/football/" rel="nofollow noreferrer">https://www.oddsportal.com/inplay-odds/live-now/football/</a>),
based on this helpful guide: <a href="https://github.com/jckkrr/Unlayering_Oddsportal" rel="nofollow noreferrer">https://github.com/jckkrr/Unlayering_Oddsportal</a>.</p>
<p>My goal is to obtain live odds data for each game, but I'm encountering a challenge in accessing the necessary URLs.</p>
<p>Using Python's requests library, I can fetch the list of all live matches from this feed URL:
<a href="https://www.oddsportal.com/feed/livegames/liveOdds/0/0.dat?_=" rel="nofollow noreferrer">https://www.oddsportal.com/feed/livegames/liveOdds/0/0.dat?_=</a></p>
<pre><code>import requests
url = "https://www.oddsportal.com/feed/livegames/liveOdds/0/0.dat?_="
response = requests.get(url)
data = response.text
</code></pre>
<p>The problem arises when trying to access the live odds for each game.</p>
<p>The odds are contained in separate URLs with the following structure:
<code>https://fb.oddsportal.com/feed/match/1-1-{match_id_code}-1-2-{secondary_id_code}.dat</code></p>
<p>This is a screenshot of an individual live game's webpage and its respective odds feed URL <a href="https://www.oddsportal.com/feed/live-event/1-1-AsILkjnd-1-2-yjbd1.dat" rel="nofollow noreferrer">https://www.oddsportal.com/feed/live-event/1-1-AsILkjnd-1-2-yjbd1.dat</a> (when the live game ends, the odds URL returns 404).</p>
<p><a href="https://i.sstatic.net/CDHbi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CDHbi.png" alt="enter image description here" /></a></p>
<p>In this example (from the screenshot), the first id code <code>AsILkjnd</code> can be found on the list of all live matches from this feed URL: <a href="https://www.oddsportal.com/feed/livegames/liveOdds/0/0.dat?_=" rel="nofollow noreferrer">https://www.oddsportal.com/feed/livegames/liveOdds/0/0.dat?_=</a></p>
<p>but the secondary_id_code is not found there or even in the html of the individual page.</p>
<p>I'm currently stuck on finding and decoding the secondary_id_code.</p>
<p>It appears to be a URL-encoded string similar to <code>%79%6a%39%64%39</code>, which I believe is hidden within the website's HTML or JavaScript code.</p>
<p>So far, I've been unable to locate these encoded strings.</p>
<p>Can anyone help me find and decode these URL-encoded strings?</p>
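For the decoding step itself, the stdlib is enough: runs of `%XX` bytes can be located with a regular expression and decoded with `urllib.parse.unquote`. The `html` string below is a made-up stand-in for the page source:

```python
import re
from urllib.parse import unquote

# Hypothetical page snippet containing a percent-encoded id.
html = 'var code = "%79%6a%39%64%39"; // made-up stand-in for the page source'

# Find runs of two or more percent-encoded bytes, then decode them.
encoded = re.findall(r'(?:%[0-9A-Fa-f]{2}){2,}', html)
decoded = [unquote(s) for s in encoded]
print(decoded)
```

Finding *where* the site embeds the secondary id is a separate problem (it may be computed in JavaScript rather than stored in the HTML), but any `%XX` runs that do appear in fetched responses can be surfaced this way.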
|
<python><web-scraping><python-requests>
|
2024-03-16 10:17:03
| 1
| 547
|
Khaled Koubaa
|
78,171,052
| 9,251,158
|
Flip card animation in Python
|
<p>I have two still images, one for the front of a card, and one for the back. I want to create a video animation of the card flipping over and showing the back, similar to <a href="https://codepen.io/auroratide/pen/WNmmQzP" rel="nofollow noreferrer">this CodePen</a> that does it in CSS and JavaScript.</p>
<p>I am sure that it's possible in Python and a library like PIL, Pyglet or Pygame (e.g., with <a href="https://stackoverflow.com/questions/14177744/how-does-perspective-transformation-work-in-pil">perspective drawing in PIL</a>), but I cannot find it with a search for <code>python pil|pyglet|pygame code flip card animation</code>. It seems general enough that I think someone has needed this before.</p>
<p>Has anyone faced this problem and has code that animates a card turning over?</p>
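Absent a ready-made routine, one low-tech approach is to fake the rotation in 2D, the way the CSS version fakes it with `rotateY()`: squash the front image to a vertical sliver, then widen the mirrored back. A Pillow sketch (frame count, sizes, and colors are arbitrary):

```python
from PIL import Image, ImageOps


def flip_frames(front, back, steps=10):
    # Fake the 3D rotation in 2D: squash the front to a vertical
    # sliver, then widen the mirrored back. The widths follow a
    # linear ramp; a cosine ramp would look closer to a real rotation.
    w, h = front.size
    frames = []
    for i in range(steps + 1):              # front shrinks: w -> 1
        fw = max(1, round(w * (1 - i / steps)))
        frames.append(front.resize((fw, h)))
    mirrored = ImageOps.mirror(back)
    for i in range(1, steps + 1):           # back grows: 1 -> w
        fw = max(1, round(w * i / steps))
        frames.append(mirrored.resize((fw, h)))
    return frames


# Solid-color stand-ins for the two card images.
front = Image.new("RGB", (60, 90), "red")
back = Image.new("RGB", (60, 90), "blue")
frames = flip_frames(front, back)
print(len(frames))
```

For a video or GIF, each frame would then be pasted centered onto a fixed-size canvas before encoding (e.g. `Image.save(..., save_all=True, append_images=...)` for a GIF).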
|
<python><animation><graphics>
|
2024-03-16 08:15:45
| 1
| 4,642
|
ginjaemocoes
|
78,170,965
| 4,451,315
|
Why does `.rename(columns={'b': 'b'}, copy=False)` followed by inplace method not update the original dataframe?
|
<p>Here's my example:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
In [3]: df1 = df.rename(columns={'b': 'b'}, copy=False)
In [4]: df1.isetitem(1, [7,8,9])
In [5]: df
Out[5]:
a b
0 1 4
1 2 5
2 3 6
In [6]: df1
Out[6]:
a b
0 1 7
1 2 8
2 3 9
</code></pre>
<p>If <code>df1</code> was derived from <code>df</code> with <code>copy=False</code>, then I'd have expected an in-place modification of <code>df1</code> to also affect <code>df</code>. But it doesn't. Why?</p>
<p>I'm using pandas version 2.2.1, with no options (e.g. copy-on-write) enabled.</p>
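The behavior can be pinned down without guessing at internals: per the `isetitem` documentation, it never writes into the existing buffer; it always installs a brand-new array as the column, so `df` is untouched whether or not `rename(copy=False)` shared buffers. A reproduction sketch (the buffer check via `np.shares_memory` is printed rather than asserted, since it varies with pandas version and copy-on-write mode):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df1 = df.rename(columns={'b': 'b'}, copy=False)

# Whether buffers are shared here depends on the pandas version and
# copy-on-write mode, so it is printed, not relied upon.
print(np.shares_memory(df['b'].to_numpy(), df1['b'].to_numpy()))

# isetitem replaces df1's column with a new array instead of writing
# into any shared buffer, so df keeps its original values.
df1.isetitem(1, [7, 8, 9])
print(df['b'].tolist(), df1['b'].tolist())
```

In other words, "inplace" in the `isetitem` docs refers to mutating `df1`'s column layout, not to writing through shared memory.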
|
<python><pandas><dataframe>
|
2024-03-16 07:45:23
| 2
| 11,062
|
ignoring_gravity
|
78,170,789
| 15,412,256
|
Use the group max from another Polars dataframe to clip values to an upper bound
|
<p>I have 2 DataFrames:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.DataFrame(
{
"group": ["A", "A", "A", "B", "B", "B"],
"index": [1, 3, 5, 1, 3, 8],
}
)
df2 = pl.DataFrame(
{
"group": ["A", "A", "A", "B", "B", "B"],
"index": [3, 4, 7, 2, 7, 10],
}
)
</code></pre>
<p>I want to cap the <code>index</code> in <code>df2</code> using the <strong>largest index</strong> of each group in <code>df1</code>. The groups in two DataFrames are the same.</p>
<p>expected output for <code>df2</code>:</p>
<pre class="lang-py prettyprint-override"><code>shape: (6, 2)
┌───────┬───────┐
│ group ┆ index │
│ ---   ┆ ---   │
│ str   ┆ i64   │
╞═══════╪═══════╡
│ A     ┆ 3     │
│ A     ┆ 4     │
│ A     ┆ 5     │
│ B     ┆ 2     │
│ B     ┆ 7     │
│ B     ┆ 8     │
└───────┴───────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-03-16 06:22:41
| 1
| 649
|
Kevin Li
|
78,170,750
| 11,783,015
|
Python tensorflow keras error when loading a JSON model: Could not locate class 'Sequential'
|
<p>I've built and trained my model weeks ago and saved it to model.json and model.h5.</p>
<p>Today, when I tried to load it using model_from_json, it gave me an error:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: Could not locate class 'Sequential'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. Full object config: {'class_name': 'Sequential', 'config': {'name': 'sequential_7', 'layers': [{'module': 'keras.layers', 'class_name': 'InputLayer', 'config': {'batch_input_shape': [None, 244, 244, 3], 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'conv2d_15_input'}, 'registered_name': None}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_15', 'trainable': True, 'dtype': 'float32', 'batch_input_shape': [None, 244, 244, 3], 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 244, 244, 3]}}, {'module': 'keras.layers', 'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_14', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 242, 242, 32]}}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_16', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 
'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 121, 121, 32]}}, {'module': 'keras.layers', 'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_15', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 119, 119, 64]}}, {'module': 'keras.layers', 'class_name': 'Flatten', 'config': {'name': 'flatten_7', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 59, 59, 64]}}, {'module': 'keras.layers', 'class_name': 'Dense', 'config': {'name': 'dense_14', 'trainable': True, 'dtype': 'float32', 'units': 64, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 222784]}}, {'module': 'keras.layers', 'class_name': 'Dense', 'config': {'name': 'dense_15', 'trainable': True, 'dtype': 'float32', 'units': 2, 'activation': 'softmax', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': 
{}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 64]}}]}, 'keras_version': '2.13.1', 'backend': 'tensorflow'}
</code></pre>
<p>I've imported all the requirements:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras.preprocessing import image
from keras.models import model_from_json
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, GlobalAveragePooling2D, Dropout, Flatten
from tensorflow.keras.applications import VGG16
</code></pre>
<p>And this is the code I used to load the saved JSON model:</p>
<pre class="lang-py prettyprint-override"><code>json_file = open('./model/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("model.h5")
</code></pre>
<p>Am I missing something?</p>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2024-03-16 05:59:37
| 1
| 380
|
Arvin
|
78,170,536
| 13,325,046
|
How to drop column from a pandas dataframe if it has a missing value after a specified row
|
<p>I have a dataframe with hundreds of columns and would like to drop a column if it contains any missing values after row <code>n</code>. How can this be done? Thanks</p>
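A sketch of one way to express this, assuming "after row <code>n</code>" means positional rows <code>n</code> to the end: build a boolean mask of columns with any NaN in `df.iloc[n:]` and keep the rest (the column names and `n` below are made up):

```python
import numpy as np
import pandas as pd

# Toy frame (made-up column names); n is the cutoff row.
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [1.0, np.nan, 3.0, 4.0],  # NaN only before row n: kept
    "c": [1.0, 2.0, np.nan, 4.0],  # NaN at/after row n: dropped
})

n = 2
has_nan_after_n = df.iloc[n:].isna().any()  # one boolean per column
result = df.loc[:, ~has_nan_after_n]
print(result.columns.tolist())
```

The same mask also works as `df.drop(columns=df.columns[has_nan_after_n])` if dropping reads more naturally.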
|
<python><python-3.x><pandas>
|
2024-03-16 03:51:07
| 1
| 495
|
te time
|
78,170,492
| 1,676,217
|
Python: bug in re (regex)? Or please help me understand what I'm missing
|
<p>Python 3.9.18. If the basic stuff below isn't a bug in <code>re</code> then how come I'm getting different results with what's supposed to be equivalent code (NOTE: I am not looking for alternative ways to achieve the expected result, I already have plenty such alternatives):</p>
<pre class="lang-py prettyprint-override"><code>import re
s = '{"merge":"true","from_cache":"true","html":"true","links":"false"}'
re.sub(r'"(true|false)"', r'\1', s, re.I)
'{"merge":true,"from_cache":true,"html":"true","links":"false"}'
</code></pre>
<p>^^^ note how only the 1st and 2nd <code>"true"</code> were replaced, but the 3rd and 4th still show quotes <code>"</code> around them.</p>
<p>Whereas the following, which is supposed to be equivalent (<code>(?i)</code> instead of <code>re.I</code>), works as expected:</p>
<pre class="lang-py prettyprint-override"><code>import re
s = '{"merge":"true","from_cache":"true","html":"true","links":"false"}'
re.sub(r'(?i)"(true|false)"', r'\1', s)
'{"merge":true,"from_cache":true,"html":true,"links":false}'
</code></pre>
<p>^^^ all instances of <code>"true"</code> and <code>"false"</code> were replaced.</p>
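For what it's worth, one consistent reading: `re.sub`'s signature is `sub(pattern, repl, string, count=0, flags=0)`, so a positionally passed `re.I` lands in `count`, and `re.I` equals 2, which is exactly why two replacements happen. A sketch (`count=2` written out explicitly to show what the positional `re.I` amounted to):

```python
import re

s = '{"merge":"true","from_cache":"true","html":"true","links":"false"}'

# re.sub(pattern, repl, string, count=0, flags=0): a positional re.I
# is taken as `count`, and re.I == 2, so at most 2 replacements.
print(int(re.I))

limited = re.sub(r'"(true|false)"', r'\1', s, count=2)    # what positional re.I did
correct = re.sub(r'"(true|false)"', r'\1', s, flags=re.I)  # keyword flags
print(limited)
print(correct)
```

Inline `(?i)` works because it never touches the `count` parameter; recent Python versions even emit a DeprecationWarning for positional `count`/`flags` to catch this exact mix-up.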
|
<python><regex>
|
2024-03-16 03:13:55
| 1
| 1,252
|
Normadize
|
78,170,429
| 3,282,758
|
MySQLdb installation issue on Mac. Required for MySQL connectivity with Django
|
<p>I am creating a Django project on a Mac. I want to use MySQL for the database. I am trying to install MySQLdb, which gives the following error:</p>
<pre><code>(djangodev) $ pip3 install MySQL-python
Collecting MySQL-python
Using cached MySQL-python-1.2.5.zip (108 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/9z/lf5bsw2929j34m5g6g9qcfqm0000gn/T/pip-install-5qv449ur/mysql-python_191252dd91f14deab09b6860c8c503d9/setup.py", line 13, in <module>
from setup_posix import get_config
File "/private/var/folders/9z/lf5bsw2929j34m5g6g9qcfqm0000gn/T/pip-install-5qv449ur/mysql-python_191252dd91f14deab09b6860c8c503d9/setup_posix.py", line 2, in <module>
from ConfigParser import SafeConfigParser
ModuleNotFoundError: No module named 'ConfigParser'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
|
<python><mysql-python><libmysqlclient>
|
2024-03-16 02:40:05
| 1
| 1,493
|
user3282758
|
78,170,272
| 1,440,349
|
FileNotFoundError: [WinError 2] The system cannot find the file specified - FFMPEG
|
<p>I am running a script using a conda environment on Windows and getting this error (stack trace below), which is apparently caused by the Python executable not being able to find ffmpeg.exe. There are numerous questions about this, but unfortunately none of the solutions worked for me, so I hope someone has fresh ideas.</p>
<p>What I tried:</p>
<ul>
<li><code>conda install -c conda-forge ffmpeg</code> (after this I can run ffmpeg in command line, but still getting the error)</li>
<li><code>pip install ffmpeg-python</code></li>
<li>Added the folder where ffmpeg.exe is located in conda env to the Windows Path as well as to sys.path in python.</li>
<li>copied the same ffmpeg.exe a) to the location of python.exe in conda env, b) to the location of the script I am running, c) to the location of subprocess.py.</li>
<li>Downloaded ffmpeg windows binary and repeated the last two steps with that file.</li>
</ul>
<p>Is there anything else that I can try to make it work?</p>
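Note where the traceback below bottoms out: the helper shells out to the Unix `which` command, which does not exist on Windows, so `subprocess` raises FileNotFoundError for `which` itself, before ffmpeg is ever consulted. A hedged, cross-platform rewrite of that helper using the stdlib's `shutil.which`:

```python
import shutil


def which_ffmpeg() -> str:
    """Cross-platform ffmpeg lookup.

    Replaces subprocess.run(['which', 'ffmpeg'], ...): the Unix `which`
    binary is missing on Windows, which is what raised the
    FileNotFoundError in the traceback, not a missing ffmpeg.
    """
    path = shutil.which('ffmpeg')
    return path if path is not None else ''


print(repr(which_ffmpeg()))
```

Patching `which_ffmpeg` in `demo_util.py` along these lines (or anything else that avoids shelling out to `which`) sidesteps the error regardless of where ffmpeg.exe lives, as long as it is on PATH.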
<pre><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[10], line 22
19 truncate_second = 8.2 # Video end = start_second + truncate_second
21 # Extract Video CAVP Features & New Video Path:
---> 22 cavp_feats, new_video_path = extract_cavp(video_path, start_second, truncate_second, tmp_path=tmp_path)
File D:\Software\Anaconda\envs\diff_foley\Lib\site-packages\torch\nn\modules\module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File D:\Software\Anaconda\envs\diff_foley\Lib\site-packages\torch\nn\modules\module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File D:\Software\Anaconda\envs\diff_foley\Lib\site-packages\torch\utils\_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File D:\Work\DIff-Foley\Diff-Foley\inference\demo_util.py:131, in Extract_CAVP_Features.forward(self, video_path, start_second, truncate_second, tmp_path)
129 print("truncate second: ", truncate_second)
130 # Load the video, change fps:
--> 131 video_path_low_fps = reencode_video_with_diff_fps(video_path, self.tmp_path, self.fps, start_second, truncate_second)
132 video_path_high_fps = reencode_video_with_diff_fps(video_path, self.tmp_path, 21.5, start_second, truncate_second)
134 # read the video:
File D:\Work\DIff-Foley\Diff-Foley\inference\demo_util.py:42, in reencode_video_with_diff_fps(video_path, tmp_path, extraction_fps, start_second, truncate_second)
31 def reencode_video_with_diff_fps(video_path: str, tmp_path: str, extraction_fps: int, start_second, truncate_second) -> str:
32 '''Reencodes the video given the path and saves it to the tmp_path folder.
33
34 Args:
(...)
40 str: The path where the tmp file is stored. To be used to load the video from
41 '''
---> 42 assert which_ffmpeg() != '', 'Is ffmpeg installed? Check if the conda environment is activated.'
43 # assert video_path.endswith('.mp4'), 'The file does not end with .mp4. Comment this if expected'
44 # create tmp dir if doesn't exist
45 os.makedirs(tmp_path, exist_ok=True)
File D:\Work\DIff-Foley\Diff-Foley\inference\demo_util.py:26, in which_ffmpeg()
20 def which_ffmpeg() -> str:
21 '''Determines the path to ffmpeg library
22
23 Returns:
24 str -- path to the library
25 '''
---> 26 result = subprocess.run(['which', 'ffmpeg'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
27 ffmpeg_path = result.stdout.decode('utf-8').replace('\n', '')
28 return ffmpeg_path
File D:\Software\Anaconda\envs\diff_foley\Lib\subprocess.py:548, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
545 kwargs['stdout'] = PIPE
546 kwargs['stderr'] = PIPE
--> 548 with Popen(*popenargs, **kwargs) as process:
549 try:
550 stdout, stderr = process.communicate(input, timeout=timeout)
File D:\Software\Anaconda\envs\diff_foley\Lib\subprocess.py:1026, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize, process_group)
1022 if self.text_mode:
1023 self.stderr = io.TextIOWrapper(self.stderr,
1024 encoding=encoding, errors=errors)
-> 1026 self._execute_child(args, executable, preexec_fn, close_fds,
1027 pass_fds, cwd, env,
1028 startupinfo, creationflags, shell,
1029 p2cread, p2cwrite,
1030 c2pread, c2pwrite,
1031 errread, errwrite,
1032 restore_signals,
1033 gid, gids, uid, umask,
1034 start_new_session, process_group)
1035 except:
1036 # Cleanup if the child failed starting.
1037 for f in filter(None, (self.stdin, self.stdout, self.stderr)):
File D:\Software\Anaconda\envs\diff_foley\Lib\subprocess.py:1538, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group)
1536 # Start the process
1537 try:
-> 1538 hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
1539 # no special security
1540 None, None,
1541 int(not close_fds),
1542 creationflags,
1543 env,
1544 cwd,
1545 startupinfo)
1546 finally:
1547 # Child is launched. Close the parent's copy of those pipe
1548 # handles that only the child should have open. You need
(...)
1551 # pipe will not close when the child process exits and the
1552 # ReadFile will hang.
1553 self._close_pipe_fds(p2cread, p2cwrite,
1554 c2pread, c2pwrite,
1555 errread, errwrite)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
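<p>For context, the traceback shows that <code>which_ffmpeg()</code> spawns the Unix-only <code>which</code> binary, which does not exist on Windows; that is what raises the <code>FileNotFoundError</code>. A cross-platform sketch (an assumption on my part, not tested against Diff-Foley itself) could use <code>shutil.which</code> instead:</p>

```python
import shutil

# shutil.which performs the PATH lookup in pure Python, so it works on
# Windows as well as Unix, unlike spawning the Unix-only `which` binary
def which_ffmpeg() -> str:
    path = shutil.which("ffmpeg")
    return path if path is not None else ""

print(which_ffmpeg() or "ffmpeg not found on PATH")
```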
|
<python><ffmpeg><anaconda><subprocess><ffmpeg-python>
|
2024-03-16 01:01:32
| 1
| 465
|
shiftyscales
|
78,170,266
| 3,813,064
|
Python Decorator for Async and Sync Function without code duplication
|
<p>I have seen <a href="https://stackoverflow.com/questions/68410730/how-to-make-decorator-work-with-async-function">at least</a> <a href="https://stackoverflow.com/questions/66685788/decorator-for-all-async-coroutine-and-sync-functions">two examples</a> of decorators that work with both a normal <code>def sync_func()</code> and an <code>async def async_function()</code>.</p>
<p>But all these examples basically duplicate the decorator code, like this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import typing as t
import functools
R = t.TypeVar("R")
P = t.ParamSpec("P")
def decorator(func: t.Callable[P, R]) -> t.Callable[P, R]:
if asyncio.iscoroutinefunction(func):
@functools.wraps(func)
async def decorated(*args: P.args, **kwargs: P.kwargs) -> R:
# β¦ do something before
result = await func(*args, **kwargs)
# β¦ do something after
return result
else:
@functools.wraps(func)
def decorated(*args: P.args, **kwargs: P.kwargs) -> R:
# β¦ do something before [code duplication!]
result = func(*args, **kwargs)
# β¦ do something after [code duplication!]
return result
return decorated
</code></pre>
<p>Is it possible to create a decorator <strong>without</strong> duplicating the code inside <code>decorated</code>?</p>
|
<python><python-asyncio><python-typing><python-decorators>
|
2024-03-16 00:59:24
| 1
| 2,711
|
Kound
|
78,170,072
| 4,089,351
|
How can I render a scatter plot in Python with a "double-log" axis?
|
<p><a href="https://i.sstatic.net/NVtWg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NVtWg.png" alt="enter image description here" /></a></p>
<p>This plot is from <a href="https://en.wikipedia.org/wiki/Birch_and_Swinnerton-Dyer_conjecture" rel="nofollow noreferrer">Wikipedia</a>, and the legend reads:</p>
<blockquote>
<p>The X-axis is in log(log) scale -X is drawn at distance proportional to
log(log(X)) from 0- and the Y-axis is in a logarithmic scale</p>
</blockquote>
<p>How can it be reproduced in python?</p>
<p>The option:</p>
<pre><code>fig, ax = plt.subplots()
plt.xscale('log')
ax.scatter(p, mult, facecolor='blue', marker='.', s=10)
plt.show()
</code></pre>
<p>is not the same:</p>
<p><a href="https://i.sstatic.net/Gpfok.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gpfok.png" alt="enter image description here" /></a></p>
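<p>One possible approach (a sketch with made-up sample data, not the Wikipedia plot itself) is matplotlib's <code>'function'</code> scale, supplying a <code>log(log(x))</code> forward transform and its inverse; note this is only well-defined for x &gt; 1:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# forward/inverse pair for a log(log(x)) axis; only well-defined for x > 1,
# so values are clamped just above 1 to avoid invalid logs
def forward(x):
    return np.log(np.log(np.maximum(x, 1.0 + 1e-9)))

def inverse(x):
    return np.exp(np.exp(x))

p = np.array([2, 3, 5, 7, 11, 101, 1009, 10007], dtype=float)  # made-up sample data
mult = (p % 5) + 1

fig, ax = plt.subplots()
ax.set_xscale("function", functions=(forward, inverse))  # log(log) x-axis
ax.set_yscale("log")                                     # logarithmic y-axis
ax.scatter(p, mult, facecolor="blue", marker=".", s=10)
ax.set_xlim(2, p.max())
fig.savefig("loglog_axis.png")
```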
|
<python><matplotlib><plot>
|
2024-03-15 23:17:10
| 1
| 4,851
|
Antoni Parellada
|
78,169,969
| 10,181,236
|
Get frames as observation for CartPole environment
|
<p>In Python, I am using <code>stablebaselines3</code> and <code>gymnasium</code> to implement a custom DQN. I tested the agent using Atari games and it works; now I also need to test it on environments like <code>CartPole</code>.
The problem is that this kind of environment does not return frames as observations but instead returns just a vector.
So I need a way to make CartPole return frames as observations and apply the same preprocessing that I do on Atari games (like stacking 4 game frames together).</p>
<p>I searched on the internet how to do it and I came up with this code after some tries, but I have some problems.</p>
<p>This is the code:</p>
<pre class="lang-py prettyprint-override"><code>from stable_baselines3.common.env_util import make_atari_env, make_vec_env
from stable_baselines3.common.vec_env import VecFrameStack, VecTransposeImage
import gymnasium as gym
from gymnasium import spaces
from gymnasium.envs.classic_control import CartPoleEnv
import numpy as np
import cv2
class CartPoleImageWrapper(gym.Wrapper):
metadata = {'render.modes': ['rgb_array']}
def __init__(self, env):
super(CartPoleImageWrapper, self).__init__(env)
self.observation_space = spaces.Box(low=0, high=255, shape=(84, 84, 1), dtype=np.uint8)
def _get_image_observation(self):
# Render the CartPole environment
cartpole_image = self.render()
# Resize the image to 84x84 pixels
resized_image = cv2.resize(cartpole_image, (84, 84))
# make it grayscale
resized_image = cv2.cvtColor(resized_image, cv2.COLOR_RGB2GRAY)
resized_image = np.expand_dims(resized_image, axis=-1)
return resized_image
def reset(self):
self.env.reset()
return self._get_image_observation()
def step(self, action):
observation, reward, terminated, info = self.env.step(action)
return self._get_image_observation(), reward, terminated, info
env = CartPoleImageWrapper(CartPoleEnv(render_mode='rgb_array'))
vec_env = make_vec_env(lambda: env, n_envs=1)
vec_env = VecTransposeImage(vec_env)
vec_env = VecFrameStack(vec_env, n_stack=4)
obs = vec_env.reset()
print(f"Observation space: {obs.shape}")
#exit()
vec_env.close()
</code></pre>
<p>And the error is this when I call <code>env.reset()</code>:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/data/g.carfi/rl/tmp.py", line 41, in <module>
obs = vec_env.reset()
File "/data/g/.virtualenvs/rl_new/lib/python3.8/site-packages/stable_baselines3/common/vec_env/vec_frame_stack.py", line 41, in reset
observation = self.venv.reset()
File "/data/g/.virtualenvs/rl_new/lib/python3.8/site-packages/stable_baselines3/common/vec_env/vec_transpose.py", line 113, in reset
observations = self.venv.reset()
File "/data/g/.virtualenvs/rl_new/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 77, in reset
obs, self.reset_infos[env_idx] = self.envs[env_idx].reset(seed=self._seeds[env_idx], **maybe_options)
File "/data/g/.virtualenvs/rl_new/lib/python3.8/site-packages/stable_baselines3/common/monitor.py", line 83, in reset
return self.env.reset(**kwargs)
TypeError: reset() got an unexpected keyword argument 'seed'
</code></pre>
<p>How can I solve the problem?</p>
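<p>For reference, a minimal sketch (my own, independent of stable-baselines3) of the gymnasium-style signatures a wrapper needs: <code>reset()</code> must accept keyword-only <code>seed</code>/<code>options</code> and return <code>(obs, info)</code>, and <code>step()</code> returns five values:</p>

```python
# A sketch of the gymnasium-style wrapper API (not tied to any library):
# gymnasium's reset() takes keyword-only seed/options and returns (obs, info),
# and step() returns (obs, reward, terminated, truncated, info).
class ImageWrapperSketch:
    def __init__(self, env):
        self.env = env

    def _get_image_observation(self):
        # placeholder for the resize/grayscale code from the question
        return self.env.render()

    def reset(self, *, seed=None, options=None):
        _, info = self.env.reset(seed=seed, options=options)
        return self._get_image_observation(), info

    def step(self, action):
        _, reward, terminated, truncated, info = self.env.step(action)
        return self._get_image_observation(), reward, terminated, truncated, info
```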
|
<python><reinforcement-learning><openai-gym><atari-2600>
|
2024-03-15 22:40:09
| 2
| 512
|
JayJona
|
78,169,916
| 2,986,153
|
After I unpivot a polars dataframe, how can I pivot it back to its original form without adding an index?
|
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'A': range(1,4),
'B': range(1,4),
'C': range(1,4),
'D': range(1,4)
})
print(df)
</code></pre>
<pre class="lang-none prettyprint-override"><code>shape: (3, 4)
βββββββ¬ββββββ¬ββββββ¬ββββββ
β A β B β C β D β
β --- β --- β --- β --- β
β i64 β i64 β i64 β i64 β
βββββββͺββββββͺββββββͺββββββ‘
β 1 β 1 β 1 β 1 β
β 2 β 2 β 2 β 2 β
β 3 β 3 β 3 β 3 β
βββββββ΄ββββββ΄ββββββ΄ββββββ
</code></pre>
<pre class="lang-py prettyprint-override"><code>df_long = df.unpivot(
variable_name="recipe",
value_name="revenue")
print(df_long)
</code></pre>
<pre class="lang-none prettyprint-override"><code>shape: (12, 2)
ββββββββββ¬ββββββββββ
β recipe β revenue β
β --- β --- β
β str β i64 β
ββββββββββͺββββββββββ‘
β A β 1 β
β A β 2 β
β A β 3 β
β B β 1 β
β B β 2 β
β β¦ β β¦ β
β C β 2 β
β C β 3 β
β D β 1 β
β D β 2 β
β D β 3 β
ββββββββββ΄ββββββββββ
</code></pre>
<p>It seems I need to add an index in order to pivot <code>df_long</code> back into the original form of <code>df</code>? Is there no way to pivot a polars dataframe without adding an index?</p>
<pre class="lang-py prettyprint-override"><code>df_long = df_long.with_columns(index=pl.col("revenue").cum_count().over("recipe"))
df_long.pivot(
on='recipe',
index='index',
values='revenue',
aggregate_function='first'
)
</code></pre>
<pre class="lang-none prettyprint-override"><code>shape: (3, 5)
βββββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββββ
β index β A β B β C β D β
β --- β --- β --- β --- β --- β
β u32 β i64 β i64 β i64 β i64 β
βββββββββͺββββββͺββββββͺββββββͺββββββ‘
β 1 β 1 β 1 β 1 β 1 β
β 2 β 2 β 2 β 2 β 2 β
β 3 β 3 β 3 β 3 β 3 β
βββββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββββ
</code></pre>
<p>In R, I can perform the equivalent of unpivot and pivot without indexing, and I am seeking the same functionality in Python.</p>
<pre><code>df_pandas = df.to_pandas()
</code></pre>
<pre><code>library(tidyverse)
library(reticulate)
df_long <-
py$df_pandas |>
pivot_longer(
everything(),
names_to = 'recipe',
values_to = 'value'
)
df_long |>
pivot_wider(
names_from='recipe',
values_from='value'
) |>
unnest(cols = c(A,B,C,D))
</code></pre>
|
<python><python-polars>
|
2024-03-15 22:23:55
| 2
| 3,836
|
Joe
|
78,169,841
| 9,538,589
|
plotly parallel coordinates and categories
|
<p>In a dataframe, columns A and B are categorical data, while columns X and Y are numerical and continuous. How can I use plotly to draw a parallel coordinates+categories plot such that the first two axes, corresponding to A and B, are categorical, and the latter two, corresponding to X and Y, are numerical?</p>
<p>For example, consider the following data:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'A':['a','a','b','b'],
'B':['1','2','1','2'],
'X':[5.3,6.7,3.1,0.8],
'Y':[0.4,0.6,3.6,4.8]})
</code></pre>
<p>I tried <code>plotly.express.parallel_categories</code>. But this treats all numerical values as categories. On the other hand, <code>plotly.express.parallel_coordinates</code> omits categorical columns.</p>
|
<python><pandas><plotly><parallel-coordinates>
|
2024-03-15 21:58:21
| 1
| 369
|
H.Alzy
|
78,169,800
| 111,992
|
Why does PYTHONPATH need to be exported?
|
<p>I am getting a "module not found" exception when setting PYTHONPATH and <em>then</em> executing a Python script:</p>
<pre><code>$ PYTHONPATH=somepath/a/b
$ python myscript.py
Exception has occurred: ModuleNotFoundError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
No module named 'c'
</code></pre>
<p>but if I export PYTHONPATH, everything works fine</p>
<pre><code>$ export PYTHONPATH=somepath/a/b
$ python myscript.py
==>ok
</code></pre>
<p>It also works when I inline the PYTHONPATH assignment:</p>
<pre><code>$ PYTHONPATH=somepath/a/b python myscript.py
==>ok
</code></pre>
<p>Why do we have this behavior?</p>
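<p>For illustration: a plain assignment creates a shell variable that is not part of the environment inherited by child processes such as <code>python</code>; <code>export</code> (or the inline <code>VAR=value cmd</code> form) puts it into the child's environment. A minimal demonstration with <code>sh</code> as the child process (<code>python</code> behaves the same way):</p>

```shell
DEMO_PYPATH=somepath/a/b                          # plain assignment: shell variable only
sh -c 'echo "child sees: ${DEMO_PYPATH:-unset}"'  # child sees: unset

export DEMO_PYPATH                                # now part of the environment of children
sh -c 'echo "child sees: ${DEMO_PYPATH:-unset}"'  # child sees: somepath/a/b

DEMO_PYPATH=other sh -c 'echo "child sees: $DEMO_PYPATH"'  # child sees: other
```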
|
<python><shell><environment-variables><pythonpath>
|
2024-03-15 21:44:52
| 1
| 7,739
|
Toto
|
78,169,666
| 4,001,592
|
Summing the values of leafs in XGBRegressor trees do not match prediction
|
<p>It was my understanding that the final prediction of an XGBoost model (in this particular case an XGBRegressor) was obtained by summing the values of the predicted leaves <a href="https://discuss.xgboost.ai/t/xgboost-trees-understanding/1822" rel="nofollow noreferrer">[1]</a> [<a href="https://stats.stackexchange.com/questions/502000/how-prediction-of-xgboost-correspond-to-leaves-values">2</a>]. Yet I'm failing to match the prediction by summing the values. Here is an MRE:</p>
<pre><code>import json
from collections import deque
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
import xgboost as xgb
def leafs_vector(tree):
"""Returns a vector of nodes for each tree, only leafs are different of 0"""
stack = deque([tree])
while stack:
node = stack.popleft()
if "leaf" in node:
yield node["leaf"]
else:
yield 0
for child in node["children"]:
stack.append(child)
# Load the diabetes dataset
diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Define the XGBoost regressor model
xg_reg = xgb.XGBRegressor(objective='reg:squarederror',
max_depth=5,
n_estimators=10)
# Train the model
xg_reg.fit(X_train, y_train)
# Compute the original predictions
y_pred = xg_reg.predict(X_test)
# get the index of each predicted leaf
predicted_leafs_indices = xg_reg.get_booster().predict(xgb.DMatrix(X_test), pred_leaf=True).astype(np.int32)
# get the trees
trees = xg_reg.get_booster().get_dump(dump_format="json")
trees = [json.loads(tree) for tree in trees]
# get a vector of nodes (ordered by node id)
leafs = [list(leafs_vector(tree)) for tree in trees]
l_pred = []
for pli in predicted_leafs_indices:
l_pred.append(sum(li[p] for li, p in zip(leafs, pli)))
assert np.allclose(np.array(l_pred), y_pred, atol=0.5) # fails
</code></pre>
<p>I also tried adding the default value (<code>0.5</code>) of the <code>base_score</code> (as written <a href="https://stackoverflow.com/questions/68000028/how-are-the-leaf-values-of-xgboost-regression-trees-relate-to-the-prediction">here</a>) to the total sum but it also didn't work.</p>
<pre><code>l_pred = []
for pli in predicted_leafs_indices:
l_pred.append(sum(li[p] for li, p in zip(leafs, pli)) + 0.5)
</code></pre>
|
<python><machine-learning><xgboost>
|
2024-03-15 21:07:25
| 1
| 62,150
|
Dani Mesejo
|
78,169,571
| 4,379,593
|
What does "NULL" mean in the f-string specification in Python?
|
<p>I open <a href="https://docs.python.org/3/reference/lexical_analysis.html#formatted-string-literals" rel="nofollow noreferrer">python specification</a> at chapter 2. Lexical analysis and found</p>
<pre><code>format_spec ::= (literal_char | NULL | replacement_field)*
literal_char ::= <any code point except "{", "}" or NULL>
</code></pre>
<p>What does <code>NULL</code> mean?<br />
It is not specified anywhere before.</p>
|
<python>
|
2024-03-15 20:45:16
| 1
| 373
|
Π€ΠΈΠ»Ρ Π£ΡΠΊΠΎΠ²
|
78,169,456
| 11,793,491
|
Can't activate Python environment after creating it in bash
|
<p>I have the following script <code>myenv.sh</code>:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
echo "####### Install automated template ########"
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install ipykernel pandas seaborn scikit-learn
echo "######## Done ########"
</code></pre>
<p>So I run the script, and it creates the environment, but the environment doesn't stay activated. Perhaps the script is exiting. So I tried this:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
echo "####### Install automated template ########"
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install ipykernel pandas seaborn scikit-learn
source ${PWD}/.venv/bin/activate
echo "######## Done ########"
</code></pre>
<p>But nothing happens. Please, could you point out what I am doing wrong?</p>
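<p>One likely cause (sketched below to illustrate, independent of the venv itself): running <code>bash myenv.sh</code> executes the script in a child shell, so the activation vanishes when the script exits; only sourcing the script (<code>source myenv.sh</code> or <code>. myenv.sh</code>) affects the current shell:</p>

```shell
printf 'DEMO_ACTIVATED=yes\n' > demo_env.sh

sh demo_env.sh                                   # child shell: its changes vanish
echo "after sh:     ${DEMO_ACTIVATED:-unset}"    # prints: after sh:     unset

. ./demo_env.sh                                  # sourced into the current shell
echo "after source: ${DEMO_ACTIVATED:-unset}"    # prints: after source: yes

rm demo_env.sh
```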
|
<python><bash>
|
2024-03-15 20:15:19
| 1
| 2,304
|
Alexis
|
78,169,454
| 6,587,318
|
Best way to use re.sub with a different behavior when first called
|
<p>I'm trying to perform a number of replacements using <a href="https://docs.python.org/3.8/library/re.html#re.sub" rel="nofollow noreferrer"><code>re.sub()</code></a>, except I want the first replacement to be different. One straightforward approach would be to run <code>re.sub()</code> twice with <code>count = 1</code> for the first call, but because <code>re.sub()</code> allows for the <code>repl</code> argument to be a function, we can do this in a single call:</p>
<pre><code>import re
def repl(matchobj):
global first_sub
if first_sub:
first_sub = False
print(f"Replacing '{matchobj.group()}' at {matchobj.start()} with ':)'")
return ":)"
else:
print(f"Deleting '{matchobj.group()}' at {matchobj.start()}")
return ""
text = "hello123 world456"
first_sub = True
text = re.sub(r"\d+", repl, text)
# Output:
# Replacing '123' at 5 with ':)'
# Deleting '456' at 14
</code></pre>
<p>Unfortunately, this makes use of <code>global</code>, which isn't great. Is there a better way to do this?</p>
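<p>One possibility (a sketch using a closure instead of <code>global</code>) is to keep the call count in an enclosed <code>itertools.count</code>:</p>

```python
import re
from itertools import count

def make_repl():
    counter = count()          # state lives in the closure, no global needed
    def repl(matchobj):
        if next(counter) == 0:
            return ":)"        # first match is replaced
        return ""              # subsequent matches are deleted
    return repl

text = re.sub(r"\d+", make_repl(), "hello123 world456")
print(text)  # hello:) world
```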
|
<python><function><global-variables><python-re>
|
2024-03-15 20:14:49
| 2
| 326
|
Zachary
|
78,169,351
| 1,231,450
|
Return two Enum states
|
<p>Suppose, I have</p>
<pre><code>class State(Enum):
TAKEPROFIT = 1
STOPPEDOUT = 2
WINNER = 3
LOSER = 4
</code></pre>
<p>How can I return the combination of e.g. <code>State.STOPPEDOUT</code> and <code>State.LOSER</code>?<br />
The <code>|</code> does not seem to be supported:</p>
<pre><code>return State.STOPPEDOUT | State.LOSER
</code></pre>
<p>throws</p>
<pre><code>TypeError: unsupported operand type(s) for |: 'State' and 'State'
</code></pre>
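<p>For what it's worth, a sketch of the <code>enum.Flag</code> variant: <code>Flag</code> members support the <code>|</code> operator, while plain <code>Enum</code> members do not:</p>

```python
from enum import Flag, auto

class State(Flag):
    TAKEPROFIT = auto()
    STOPPEDOUT = auto()
    WINNER = auto()
    LOSER = auto()

result = State.STOPPEDOUT | State.LOSER   # Flag members compose with |
print(result)                             # combined member (exact repr varies by Python version)
print(State.LOSER in result)              # True
print(State.WINNER in result)             # False
```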
|
<python>
|
2024-03-15 19:46:28
| 3
| 43,253
|
Jan
|
78,169,254
| 1,709,440
|
Resizing image from the center in moviepy
|
<p>So I have an issue with moviepy and applying resize effects.</p>
<p>Creating the image using:</p>
<pre><code>img_clip_pos = ("center", "center")
clip = ImageClip(image_path) \
.set_position(img_clip_pos) \
.set_duration(req_dur)
</code></pre>
<p>Then I want it to scale over time, like a zoom effect.</p>
<pre><code>clip = clip.fx(vfx.resize, lambda t: 1 + zoom_speed * t)
</code></pre>
<p>But it's zooming in to the top left corner.
Is there any option to set anchor point of the image? So that I can make it resize from center</p>
|
<python><moviepy>
|
2024-03-15 19:25:25
| 1
| 325
|
mhmtemnacr
|
78,169,208
| 1,005,334
|
Quart: how to get Server Sent Events (SSE) working?
|
<p>I'm trying to implement an endpoint for Server Sent Events (SSE) in Quart, following the example from the official documentation: <a href="http://pgjones.gitlab.io/quart/how_to_guides/server_sent_events.html" rel="nofollow noreferrer">http://pgjones.gitlab.io/quart/how_to_guides/server_sent_events.html</a></p>
<p>I've copy-pasted the code and used some dummy string for <code>data</code>. This locked up my server, because it would endlessly stream my 'data'. So it's not quite a working example; you still have to figure out how to properly add events to the stream.</p>
<p>Now I've set up a proper queue using <code>asyncio.queues</code> and I can see the <code>send_events()</code> function now responding when I add something. Great!</p>
<p>The only problem is I'm not getting any output when calling the endpoint (with Postman). It keeps waiting for a response. If I stop the server mid-flight, I do get the SSE output generated up to that point. So the events themselves are triggered and properly formed, it's the output that isn't streamed like it should be.</p>
<p>I found this example that has the same issue: <a href="https://github.com/encode/starlette/issues/20#issuecomment-586169195" rel="nofollow noreferrer">https://github.com/encode/starlette/issues/20#issuecomment-586169195</a>. However, this discussion goes in a different direction and ultimately an implementation for Starlette is created. I've tried this package (I'm using FastAPI as well) but the return for the SSE endpoint is <code>EventSourceResponse</code> and then I get the error:</p>
<pre><code>TypeError: The response value type (EventSourceResponse) is not valid
</code></pre>
<p>Right... so Quart doesn't like the response value. I see no way of making that work with Quart, and since the example from the Quart docs doesn't work, it looks like the only option is to ditch Quart.</p>
<p>Or is there another solution?</p>
<h3>Code</h3>
<p>I've got a dataclass with only <code>event</code> and <code>data</code> properties:</p>
<pre><code>@dataclass
class ServerSentEvent:
data: str
event: str
def encode(self) -> bytes:
message = f'data: {self.data}'
message = f'{message}\nevent: {self.event}'
message = f'{message}\r\n\r\n'
return message.encode('utf-8')
</code></pre>
<p>And then the endpoint - note I've included some print lines and add something to the queue initially to test (elsewhere in the code events would be added to the queue in the same way):</p>
<pre><code>queues = []
@bp.get('/sse')
async def sse():
"""Server Sent Events"""
if 'text/event-stream' not in request.accept_mimetypes:
abort(400)
async def send_events():
while True:
print('waiting for event')
event = await queue.get()
print('got event')
print(event.encode())
yield event.encode()
queue = Queue()
queues.append(queue)
await queue.put(ServerSentEvent(**{'event': 'subscribed', 'data': 'will send new events'}))
print('-- SSE connected')
response = await make_response(
send_events(),
{
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive'
}
)
response.timeout = None
return response
</code></pre>
|
<python><fastapi><server-sent-events><starlette><quart>
|
2024-03-15 19:16:27
| 1
| 1,544
|
kasimir
|
78,169,159
| 2,779,432
|
PIL fromarray for single channel image
|
<p>I am trying to obtain an image shaped (1080, 1920, 1) from one shaped (1080, 1920, 3).
This is what I have been trying, without success:</p>
<pre><code> for fr in fr_lst:
frame = cv2.imread(os.path.join(frame_root, fr))
#SPLIT CHANNELS
frame = (cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
r, g, b = cv2.split(frame)
r = np.expand_dims(r, axis=2)
print(r.shape)
frame = Image.fromarray(r)
</code></pre>
<p>When I print the shape of r I get (1080, 1920, 1), but <code>Image.fromarray(r)</code> returns the error</p>
<pre><code>TypeError: Cannot handle this data type: (1, 1, 1), |u1
</code></pre>
<p>I tried not expanding the dimensions, obtaining the shape of r of (1080,1920) and successfully running <code>Image.fromarray(r)</code></p>
<p>I also tried to expand the dimensions of the PIL image <code>frame = np.expand_dims(frame, axis=(2))</code> which seems to return the appropriate result, but has a strange behaviour:</p>
<p>If I use an array of size (1080, 1920, 3) and run <code>size = frames[0].size</code> I obtain <code>size = 1920, 1080</code> which is great. But if I run <code>size = frames[0].size</code> with frames of shape (1080, 1920, 1) I obtain <code>size = 2073600</code></p>
<p>My goal is to have an array of size (1920, 1080) when passing a frame of shape (1080, 1920, 1).</p>
<p>What am I doing wrong or not understanding?</p>
<p>Thank you</p>
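<p>For reference, a sketch of the shape handling (my own illustration with a dummy array): <code>Image.fromarray</code> expects a 2-D array for a single-channel (mode <code>'L'</code>) image, so the trailing axis is dropped before the call and re-added on the NumPy side when needed:</p>

```python
import numpy as np
from PIL import Image

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # dummy RGB frame
r = frame[:, :, 0]                  # shape (1080, 1920) -- what fromarray expects

img = Image.fromarray(r)            # single-channel image, mode 'L'
print(img.size)                     # (1920, 1080): PIL reports (width, height)

arr = np.asarray(img)[..., np.newaxis]  # back to (1080, 1920, 1) as a NumPy array
print(arr.shape)                    # (1080, 1920, 1)
```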
|
<python><numpy><opencv><python-imaging-library>
|
2024-03-15 19:04:28
| 1
| 501
|
Francesco
|
78,169,114
| 12,240,037
|
How To Buffer a Selected Point in pyQGIS
|
<p>I have a few random points that I generated within a polygon. I randomly selected one of the points and would like to buffer that point. In the case below, buffering will be applied to all points and not the selected point. Do I need to create a new memory layer of the selected point first? If so, how?</p>
<pre><code>#Generating random points within an "aoi" polygon file
pnts = processing.runAndLoadResults("native:randompointsinpolygons", {'INPUT':aoi,
'POINTS_NUMBER':10,
'MIN_DISTANCE':0,
'MIN_DISTANCE_GLOBAL':0,
'MAX_TRIES_PER_POINT':10,
'SEED':None,
'INCLUDE_POLYGON_ATTRIBUTES':True,
'OUTPUT':'memory:'})
#Randomly Choose one of the Points
processing.run("qgis:randomselection",
{'INPUT':pnts['OUTPUT'],
'METHOD':0,'NUMBER':1})
#Buffering the point
pnt_buf = processing.runAndLoadResults("native:buffer",
{'INPUT':pnts,
'DISTANCE':3000,
'SEGMENTS':6,
'END_CAP_STYLE':0,
'JOIN_STYLE':1,
'MITER_LIMIT':3,
'DISSOLVE':True,
'SEPARATE_DISJOINT':False,
'OUTPUT':'memory:'})
</code></pre>
|
<python><buffer><qgis><pyqgis>
|
2024-03-15 18:57:17
| 1
| 327
|
seak23
|
78,169,098
| 7,155,895
|
Read a Weidmuller File and get data from it
|
<p>I have a file, created with the program "<a href="https://www.weidmuller.it/it/prodotti/workplace_accessories/software/m_print_pro.jsp" rel="nofollow noreferrer">M-Print Pro, for Weidmuller</a>". It has the file extension ".mpc". The file description given by the program is "Files with M-Print PRO contents".</p>
<p>This is an example of what it looks like via the program:</p>
<p><a href="https://i.sstatic.net/2dq6s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2dq6s.png" alt="enter image description here" /></a></p>
<p>Now, I tried to look for a Python library to read the file, but unfortunately I didn't find any, since that kind of extension is commonly used for Musepack audio.</p>
<p>At this point, I simply tried opening it with:</p>
<pre><code>r = open("C:/Users/Acer/Desktop/E3611310/CABLAGGIO.mpc", "r")
print(r.read())
</code></pre>
<p>What I get is the following error:</p>
<pre><code>return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 188: character maps to <undefined>
</code></pre>
<p>I tried adding <code>encoding="utf8"</code>, but I still get an error:</p>
<pre><code>File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x84 in position 10: invalid start byte
</code></pre>
<p>What I'm trying to do is twofold: read and recover the data from this type of file, and create a similar file with new data.</p>
<p>Do you have any ideas on how I could solve it? Thank you all</p>
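<p>As a first step (a sketch with a throwaway file, since the .mpc layout itself is undocumented), the decode errors disappear when the file is opened in binary mode, which yields raw bytes instead of decoded text:</p>

```python
import os
import tempfile

def read_raw(path: str) -> bytes:
    # "rb" returns raw bytes, so no text codec ever runs
    with open(path, "rb") as f:
        return f.read()

# demonstrate on a throwaway file containing non-UTF-8 bytes like 0x8f/0x84
with tempfile.NamedTemporaryFile(suffix=".mpc", delete=False) as tmp:
    tmp.write(b"\x8f\x84 arbitrary binary content")
    tmp_path = tmp.name

data = read_raw(tmp_path)
print(data[:4].hex())   # 8f842061
os.remove(tmp_path)
```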
|
<python><python-3.11>
|
2024-03-15 18:53:51
| 1
| 579
|
Rebagliati Loris
|
78,169,083
| 12,904,817
|
Preserve type hints (checked with mypy) for function decorated with aiocache's @cached decorator
|
<p>Python version: 3.9.17</p>
<p>When I decorate a function with @cached, mypy will simply accept calls to the decorated function with invalid args, making the typed params in the decorated function useless. How do I make mypy understand that it should flag calls to the decorated function that don't follow its function signature?</p>
<p>aka:</p>
<pre class="lang-py prettyprint-override"><code>from aiocache import cached
@cached
def foo_cached(x: str) -> str:
return x
def foo(x: str) -> str:
return x
foo_cached() # mypy is fine with this.
foo() # mypy flags this, as it's missing arg for param 'x'.
</code></pre>
<p>looking for a solution that works for python 3.9</p>
|
<python><python-decorators>
|
2024-03-15 18:50:19
| 0
| 595
|
Vitor EL
|
78,168,840
| 10,967,961
|
Pandas dataset from dynamic website scraping
|
<p>This question is related to a previous question of mine, so here I am assuming that I was able to open all the "plus signs" from <a href="http://data.europa.eu/esco/skill/f8c676de-c871-424f-9a65-77059d07910a" rel="nofollow noreferrer">this web page from esco</a>.
Once I have expanded the plus signs under "handling and disposing of waste and hazardous materials" (which is the skill the above link points to), how can I move from there (the expanded page) to a dataframe with 2 columns, one called "Skills" and the other "Skills codes", with the skill names in the "Skills" column and the code of the "parent node" in the "Skills codes" column? In the mock uri above, on the "Skills" column I should have:</p>
<ul>
<li>collect industrial waste</li>
<li>handling waste</li>
<li>manage waste</li>
<li>coordinate waste management procedures</li>
<li>design plant waste procedures</li>
<li>develop waste management processes</li>
<li>dispose food waste</li>
<li>dispose of soldering waste</li>
<li>handle waste rock</li>
<li>manage waste treatment facility</li>
<li>communicate with waste treatment facilities</li>
<li>train staff on waste management</li>
</ul>
<p>and on the "Skills codes" the skill code S6.13.0.</p>
<p>I tried to make this as well but failed miserably:</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
# Replace with your actual HTML file path
file_path = '/Users/federiconutarelli/Desktop/esco.html'
# Read the HTML file
with open(file_path, 'r', encoding='utf-8') as file:
soup = BeautifulSoup(file, 'html.parser')
# Initialize lists to store skills and codes
skills = []
codes = []
# Find elements containing skills and their codes
# Replace 'skill_class_name' and 'code_class_name' with the actual class names or use other selectors based on your HTML structure
for skill_element in soup.find_all('div', class_='classification_item'):
skill_name = skill_element.get_text(strip=True)
skills.append(skill_name)
# Assuming the code is in a close relation to the skill element, you might need to adjust the method of finding it
code_element = skill_element.find_next_sibling('div', class_='main_item')
if code_element:
skill_code = code_element.get_text(strip=True)
codes.append(skill_code)
else:
codes.append('') # Append an empty string or None if no code is found
# Create a DataFrame
df = pd.DataFrame({
'Skill': skills,
'Code': codes
})
print(df)
</code></pre>
<p><strong>Notice:</strong> I called the uri a "mock uri" because in reality I have many more uris and have to repeat the same procedure over and over.</p>
|
<python><pandas><web-scraping>
|
2024-03-15 17:56:39
| 1
| 653
|
Lusian
|
78,168,618
| 202,168
|
Python 3.11 with no builtin pip module - what's going on?
|
<pre><code>.venv/bin/python -m pip uninstall mysqlclient
/Users/anentropic/dev/project/.venv/bin/python: No module named pip
</code></pre>
<p>and</p>
<pre><code>.venv/bin/python
Python 3.11.5 (main, Sep 18 2023, 15:04:25) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pip
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pip'
</code></pre>
<p>I thought <code>pip</code> was a builtin for recent Python versions, how is it that I have one where it doesn't exist?</p>
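<p>Possibly relevant: pip is not a true builtin; it is bootstrapped into environments via the standard-library <code>ensurepip</code> module, and a venv can legitimately lack it (e.g. created with <code>python -m venv --without-pip</code>, or by a tool that skips pip by default — an assumption worth checking for this particular venv). A quick diagnostic:</p>

```python
import importlib.util
import ensurepip  # stdlib module that can bootstrap pip into an environment

# Check whether pip is importable in the current interpreter.  If it is not,
# it can usually be restored with `python -m ensurepip --upgrade` (assuming
# ensurepip itself was not stripped from the installation).
pip_present = importlib.util.find_spec("pip") is not None
print("pip importable:", pip_present)
print("pip version ensurepip would install:", ensurepip.version())
```
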
|
<python><pip>
|
2024-03-15 17:07:33
| 1
| 34,171
|
Anentropic
|
78,168,594
| 13,126,794
|
Conditional update in pandas dataframe column based on paragraph type
|
<p>I have a dataframe like this:</p>
<pre><code> data = {
'document_section_id': ['1', '1.1', None, None, None, None, None, None, None, '3', None, None, None, '4.1', None, None, None, None, None, None, None, None, None, None, None, None, None, None, '1.2', None, None, None, None, None, None, None, '1.3', '1.3.1', None, None, '1.3.2', None, None, None, None, '1.3.3', None, None, None, '1.3.4', None, None,None],
' ': ['Heading1', 'Heading2', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'TblFootnote', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'Heading2', 'HeadingNoTOC', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'Heading2', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'Heading3', 'TblFootnote', 'Heading3', 'Heading3', 'Heading3', 'Heading1']
}
</code></pre>
<p>I want to do the below:</p>
<ol>
<li>get all rows spanning from one occurrence of "<code>Heading1</code>" to the next occurrence of "<code>Heading1</code>" in the "<code>paragraph_type</code>" column.</li>
<li>Capture and store the "<code>SECTION_ID</code>" value corresponding to the first occurrence of "<code>Heading1</code>" in a variable called <code>var1</code>.</li>
<li>In this specified range (Heading1 to Heading1), if the value of any <code>section_id</code> does not begin with the value stored in <code>var1</code>, replace its "SECTION_ID" with an empty string. For example, the section IDs in the 10th and 14th rows do not start with 1, so they are updated to ''.</li>
</ol>
<p>This has to be done for all existing Heading1</p>
<p>This is what I tried but it doesn't work:</p>
<pre><code>modified_dfs = []
for i in range(len(df[df['PARAGRAPH_TYPE'] == 'Heading1']) - 1):
    start_index = df[df['PARAGRAPH_TYPE'] == 'Heading1'].index[i]
    end_index = df[df['PARAGRAPH_TYPE'] == 'Heading1'].index[i+1]
    var1 = df.loc[start_index, 'SECTION_ID']
    section_df = df.reset_index().loc[start_index:end_index-1].copy()
    for j in range(start_index, end_index):
        SECTION_ID = section_df.loc[j, 'SECTION_ID']
        if isinstance(SECTION_ID, str) and not SECTION_ID.startswith(str(var1)):
            section_df.loc[j, 'SECTION_ID'] = ''
    modified_dfs.append(section_df)

result_df = pd.concat(modified_dfs)
</code></pre>
<p>expected output:</p>
<pre><code> output = {
'document_section_id': ['1', '1.1', None, None, None, None, None, None, None, '', None, None, None, '', None, None, None, None, None, None, None, None, None, None, None, None, None, None, '1.2', None, None, None, None, None, None, None, '1.3', '1.3.1', None, None, '1.3.2', None, None, None, None, '1.3.3', None, None, None, '1.3.4', None, None, None, None],
' ': ['Heading1', 'Heading2', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'TblFootnote', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'ListParagraph', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'HeadingNoTOC', 'Heading2', 'HeadingNoTOC', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'TblFootnote', 'Heading2', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'HeadingNoTOC', 'Heading3', 'HeadingNoTOC', 'Heading3', 'TblFootnote', 'Heading3', 'Heading3', 'Heading3', 'Heading1']
}
</code></pre>
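<p>A pure-Python sketch of the blanking rule, operating on plain lists (the miniature data below is hypothetical); note that a bare <code>startswith</code> would wrongly keep <code>10.1</code> under <code>1</code>, so the comparison respects dot boundaries:</p>

```python
def blank_out_of_scope(section_ids, paragraph_types):
    """Within each Heading1..next-Heading1 range, blank any section id that
    does not belong under the range's Heading1 id.  None passes through."""
    result = []
    var1 = None
    for sid, ptype in zip(section_ids, paragraph_types):
        if ptype == 'Heading1' and sid:
            var1 = sid            # remember the Heading1 id that opens this range
            result.append(sid)
            continue
        if sid is None or var1 is None:
            result.append(sid)
            continue
        # '1.1' belongs under '1', but '10.1' does not, so compare on dot
        # boundaries instead of a bare startswith()
        if sid == var1 or sid.startswith(var1 + '.'):
            result.append(sid)
        else:
            result.append('')     # out of scope for the current Heading1
    return result

# Hypothetical miniature of the data above
ids = ['1', '1.1', None, '3', '4.1', '1.2', '2']
types = ['Heading1', 'Heading2', 'HeadingNoTOC', 'HeadingNoTOC',
         'ListParagraph', 'Heading2', 'Heading1']
print(blank_out_of_scope(ids, types))
```

<p>Applied to the frame, this would be <code>df['document_section_id'] = blank_out_of_scope(df['document_section_id'], df['paragraph_type'])</code>, adjusting the column names to the real ones.</p>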
|
<python><python-3.x><pandas><dataframe>
|
2024-03-15 17:04:28
| 2
| 961
|
Dcook
|
78,168,551
| 10,200,497
|
How to get the first instance of a mask if only it is in top N rows?
|
<p>This is my DataFrame.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [100, 1123, 9999, 100, 1, 954, 1],
'b': [1000, 11123, 1123, 0, 55, 0, 1],
},
)
</code></pre>
<p>Expected output is creating column <code>c</code>:</p>
<pre><code> a b c
0 100 1000 NaN
1 1123 11123 NaN
2 9999 1123 9999.0
3 100 0 NaN
4 1 55 NaN
5 954 0 NaN
6 1 1 NaN
</code></pre>
<p>The mask is:</p>
<pre><code>mask = ((df.a > df.b))
</code></pre>
<p>I want to get the first row that meets the conditions of this mask ONLY IF it is within the top 3 rows, and put <code>df.a</code> there to create <code>c</code>. For this example, this code works:</p>
<pre><code>df.loc[mask.cumsum().eq(1) & mask, 'c'] = df.a
</code></pre>
<p>But for the following DataFrame it should return <code>NaN</code> for <code>c</code>, because the first instance of <code>mask</code> is not within the top 3 rows; my code does not handle that.</p>
<pre><code>df = pd.DataFrame(
{
'a': [0, 0, 0, 0, 0, 954, 1],
'b': [1000, 11123, 1123, 0, 55, 0, 1],
},
)
</code></pre>
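<p>One way to express the "only if within the top N rows" condition is to find the positional index of the first match and gate on it; a sketch (not necessarily the most idiomatic pandas) using the second DataFrame from the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        'a': [0, 0, 0, 0, 0, 954, 1],
        'b': [1000, 11123, 1123, 0, 55, 0, 1],
    },
)

mask = df.a > df.b
df['c'] = np.nan

if mask.any():
    # 0-based position of the first True in the mask
    first_pos = int(np.flatnonzero(mask.to_numpy())[0])
    if first_pos < 3:                       # only act when it is in the top 3 rows
        df.loc[df.index[first_pos], 'c'] = df.a.iloc[first_pos]

print(df)
```

<p>Here the first match is at position 5, so <code>c</code> stays all-<code>NaN</code>; with the first DataFrame the match at position 2 would be filled with 9999.</p>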
|
<python><pandas><dataframe>
|
2024-03-15 16:57:06
| 3
| 2,679
|
AmirX
|
78,168,492
| 14,661,648
|
Python: VSCode does not show docstring for inherited exceptions
|
<pre><code>class CustomNamedException(Exception):
    """Example docstring here."""

    def __init__(self, name) -> None:
        self._name = name

    def __str__(self):
        return("Error message.")
</code></pre>
<p>My exception above does not show the docstring when I use the class in VSCode, what appears in the info box is instead <code>Common base class for all exceptions</code>.</p>
<p>Screenshot:</p>
<p><a href="https://i.sstatic.net/mRaUA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mRaUA.png" alt="enter image description here" /></a></p>
<p>Seems redundant to include the base class docstring instead of the actual exception when I call it. I want <code>Example docstring here.</code> to display instead. Am I missing something?</p>
<p><strong>UPDATE:</strong>
The docstring for <code>__init__()</code> instead actually appears if I give it one:</p>
<p><a href="https://i.sstatic.net/GrTNZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GrTNZ.png" alt="enter image description here" /></a></p>
<p>Is this basically the intended behaviour? Are docstrings ignored when you inherit from the BaseException class unless you put it in <code>__init__()</code>?</p>
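<p>For what it's worth, the docstring itself is attached to the class correctly, which can be confirmed at runtime; any discrepancy in the hover popup is therefore the language server's rendering choice rather than Python behaviour:</p>

```python
class CustomNamedException(Exception):
    """Example docstring here."""

    def __init__(self, name) -> None:
        self._name = name

# The class-level docstring is present on both the class and its instances,
# so Python itself is not losing it.
print(CustomNamedException.__doc__)
print(CustomNamedException("x").__doc__)
```
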
|
<python><visual-studio-code><pylance>
|
2024-03-15 16:47:04
| 1
| 1,067
|
Jiehfeng
|
78,168,476
| 10,200,497
|
How to get the index of the first row that meets the conditions of a mask?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [100, 1123, 123, 100, 1, 0, 1],
'b': [1000, 11123, 1123, 0, 55, 0, 1],
},
index=range(100, 107)
)
</code></pre>
<p>And this is the expected output. I want to create column <code>c</code>:</p>
<pre><code> a b c
100 100 1000 NaN
101 1123 11123 NaN
102 123 1123 NaN
103 100 0 3.0
104 1 55 NaN
105 0 0 NaN
106 1 1 NaN
</code></pre>
<p>The mask that is used is:</p>
<pre><code>mask = ((df.a > df.b))
</code></pre>
<p>I want to get the index of the first row where <code>mask</code> is True. I want to preserve the original index but obtain the positional (<code>reset_index()</code>) value. In this example the first instance of the mask is at position <code>3</code>.</p>
<p>I can get the first instance of the mask by this:</p>
<pre><code>df.loc[mask.cumsum().eq(1) & mask, 'c'] = 'the first row'
</code></pre>
<p>But I don't know how to get the index.</p>
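<p>A sketch of one way to get both the original label and the positional index of the first match, using the DataFrame from the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        'a': [100, 1123, 123, 100, 1, 0, 1],
        'b': [1000, 11123, 1123, 0, 55, 0, 1],
    },
    index=range(100, 107),
)

mask = df.a > df.b

# Original label of the first matching row, and its 0-based position
first_label = mask.idxmax() if mask.any() else None
first_pos = int(np.flatnonzero(mask.to_numpy())[0])

df['c'] = float('nan')
df.loc[first_label, 'c'] = first_pos   # store the positional index, per the question
print(df)
```
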
|
<python><pandas><dataframe>
|
2024-03-15 16:44:29
| 5
| 2,679
|
AmirX
|
78,168,451
| 13,491,504
|
Derivative of a Derivative in sympy
|
<p>I am trying to do some calculations in Python and now need to differentiate with respect to a function that I derived earlier, which SymPy calls <code>Derivative(x(t), t)</code>:</p>
<pre><code>t = sp.symbols('t')
x = sp.symbols('x')
B = sp.symbols('B')
C = sp.symbols('C')
x = sp.Function('x')(t)
Lx = B * sp.diff(x,t) * C
</code></pre>
<p>Because the derivative of x is called <code>Derivative(x(t),t)</code> by SymPy, the function <code>Lx</code> is equal to <code>B*Derivative(x(t),t)*C</code>, and the derivative of our function should be taken as follows:</p>
<pre><code>ELx = sp.diff(Lx,Derivative(x(t),t))
</code></pre>
<p>But I always get an error message:
<code>NameError: name 'Derivative' is not defined</code>. What should I do?</p>
<p>I could assign the derivative to another variable first, but referring to it directly seems like the more logical and cleaner way.</p>
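<p><code>Derivative</code> is a class in the SymPy namespace, so it has to be qualified as <code>sp.Derivative</code> (or imported explicitly with <code>from sympy import Derivative</code>). A minimal sketch of the working version:</p>

```python
import sympy as sp

t = sp.symbols('t')
B, C = sp.symbols('B C')
x = sp.Function('x')(t)

Lx = B * sp.diff(x, t) * C

# Derivative is a class, so qualify it with the sympy namespace;
# differentiating with respect to Derivative(x(t), t) is supported directly.
ELx = sp.diff(Lx, sp.Derivative(x, t))
print(ELx)
```
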
|
<python><sympy><derivative>
|
2024-03-15 16:40:39
| 1
| 637
|
Mo711
|
78,168,443
| 12,198,665
|
Errors with reading GTFS tripupdates.pb real time data using get() function
|
<p>We want to extract the stop arrival and departure times from the list within each entity using the following code, but we keep getting errors.</p>
<pre><code>dict_obj['entity'][0]
# Gives the following output:
{'id': '8800314',
'tripUpdate': {'trip': {'tripId': '8800314',
'startTime': '11:30:00',
'startDate': '20240313',
'routeId': '20832'},
'stopTimeUpdate': [{'stopSequence': 1,
'arrival': {'time': '1710344086'},
'departure': {'time': '1710344086'},
'stopId': '86900',
'stopTimeProperties': {}},
{'stopSequence': 2,
'arrival': {'time': '1710343956'},
'departure': {'time': '1710343956'},
'stopId': '86024',
'stopTimeProperties': {}},
{'stopSequence': 3,
'arrival': {'time': '1710343995'},
'departure': {'time': '1710343995'},
'stopId': '86560',
'stopTimeProperties': {}},
{'stopSequence': 4,
</code></pre>
<p>We want to extract arrival time:</p>
<pre><code># for trip updates
collector1 = []
counter1 = 0
for block1 in dict_obj1['entity']:
    counter1 += 1
    row = OrderedDict()
    row['stop_AT'] = block1.get('tripUpdate').get('stopTimeUpdate')[0].get('arrival').get('time')
    row['stop_DT'] = block1.get('tripUpdate').get('stopTimeUpdate')[0].get('departure').get('time')
    collector1.append(row)

df1 = pd.DataFrame(collector1)
</code></pre>
<pre><code>Error:
AttributeError: 'list' object has no attribute 'get'
</code></pre>
<p>Code source:</p>
<p><a href="https://nbviewer.org/url/nikhilvj.co.in/files/gtfsrt/locations.ipynb#" rel="nofollow noreferrer">https://nbviewer.org/url/nikhilvj.co.in/files/gtfsrt/locations.ipynb#</a></p>
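<p>One common cause of that <code>AttributeError</code> is an entity that lacks part of the expected nesting (for example, an entity without a <code>tripUpdate</code>, or one whose fields hold a list where a dict was assumed), so something in the <code>.get()</code> chain is not a dict. A defensive version, with hypothetical miniature data:</p>

```python
from collections import OrderedDict

# Hypothetical miniature of the feed: one well-formed entity and one without
# a tripUpdate, the kind of record that breaks a bare .get() chain.
entities = [
    {'id': '8800314',
     'tripUpdate': {'trip': {'tripId': '8800314'},
                    'stopTimeUpdate': [{'stopSequence': 1,
                                        'arrival': {'time': '1710344086'},
                                        'departure': {'time': '1710344086'},
                                        'stopId': '86900'}]}},
    {'id': '9999999'},   # e.g. an alert/vehicle entity with no tripUpdate
]

collector = []
for block in entities:
    updates = (block.get('tripUpdate') or {}).get('stopTimeUpdate') or []
    if not updates:
        continue            # skip entities without stop-time updates
    row = OrderedDict()
    row['stop_AT'] = (updates[0].get('arrival') or {}).get('time')
    row['stop_DT'] = (updates[0].get('departure') or {}).get('time')
    collector.append(row)

print(collector)
```
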
|
<python><dataframe><get><protocol-buffers><gtfs>
|
2024-03-15 16:39:03
| 1
| 518
|
vp_050
|
78,168,362
| 10,967,961
|
Selenium dynamic scraping and put in a database
|
<p>I am trying to scrape the following <a href="http://data.europa.eu/esco/skill/f8c676de-c871-424f-9a65-77059d07910a" rel="nofollow noreferrer">web page</a> (I actually have more of these types of URIs, but for the sake of simplicity I am just posting one here). Since the page is dynamic, the first thing I have to do is open all the "plus signs" for the skill category indicated by the URI (I previously tried to open all the + signs at once, but it ended in a timeout error). So here is <strong>the question</strong>. I am trying to do this as follows, where I believe the part I am getting wrong is <code>expand_buttons = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//div[@class='classification_item cont_change_right hierarchy_active']]//span[@class='api_hierarchy has-child-link']")))</code>:</p>
<pre><code>import time
import json
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.common.exceptions import StaleElementReferenceException, NoSuchElementException

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
#driver.get("https://esco.ec.europa.eu/en/classification/skill_main")
driver.get("http://data.europa.eu/esco/skill/f8c676de-c871-424f-9a65-77059d07910a")
driver.implicitly_wait(10)
wait = WebDriverWait(driver, 20)

# Define a function to click all expandable "+" buttons
def click_expand_buttons():
    while True:
        try:
            # Find all expandable "+" buttons
            expand_buttons = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//div[@class='classification_item cont_change_right hierarchy_active']]//span[@class='api_hierarchy has-child-link']")))
            #expand_buttons = wait.until(EC.presence_of_all_elements_located(
            #    (By.CSS_SELECTOR, ".api_hierarchy.has-child-link"))
            #)

            # If no expandable buttons are found, we are done
            if not expand_buttons:
                break

            # Click each expandable "+" button
            for button in expand_buttons:
                try:
                    driver.implicitly_wait(10)
                    driver.execute_script("arguments[0].click();", button)
                    # Wait for the dynamic content to load
                    #time.sleep(1)
                except StaleElementReferenceException:
                    # If the element is stale, we find the elements again
                    break
        except StaleElementReferenceException:
            continue

# Call the function to start clicking "+" buttons
click_expand_buttons()

# After expanding all sections, parse the page source with BeautifulSoup or extract data directly using Selenium
html_source = driver.page_source
soup = BeautifulSoup(html_source, 'html.parser')

# Example of data extraction: Find all skills (this needs to be adapted based on actual page structure)
skills = []
for skill_element in soup.find_all("div", class_="skill-class"):  # Adjust selector as needed
    skill_name = skill_element.find("h2", class_="skill-name").text  # Adjust based on actual HTML structure
    skill_description = skill_element.find("p", class_="skill-description").text  # Adjust as needed
    skills.append({"name": skill_name, "description": skill_description})

# Convert the data to JSON and save it
with open("/Users/federiconutarelli/Desktop/escodata/expanded_esco_skills_page.json", "w", encoding="utf-8") as json_file:
    json.dump(skills, json_file, indent=4, ensure_ascii=False)

driver.quit()
</code></pre>
|
<python><pandas><selenium-webdriver><beautifulsoup>
|
2024-03-15 16:25:13
| 0
| 653
|
Lusian
|
78,168,290
| 984,621
|
Scrapy: Where to init database connection, so it is available and accessible in spiders, pileines, and classes
|
<p>I have a fairly standard Scrapy project, its dir structure looks like this</p>
<pre><code>my_project
scrapy.cfg
my_project
__init__.py
items.py
itemsloaders.py
middlewares.py
MyStatsCollector.py
pipelines.py
settings.py
spiders
__init__.py
spider1.py
spider2.py
spider3.py
</code></pre>
<p>Right now, my database connection is placed in the <code>my_project/pipelines.py</code>:</p>
<pre><code>import psycopg2

class SaveToPostgresPipeline:
    def __init__(self):
        hostname = ''
        username = ''
        password = ''
        database = ''
</code></pre>
<p>and the spiders works the way that they scrape data, send it to pipeline and it will save it to the database.</p>
<p>I would need now to fetch some data from the database also in spiders (<code>spider1.py</code>, <code>spider2.py</code>, <code>spider3.py</code>) and in <code>MyStatsCollector.py</code>.</p>
<p>Where should I set up the database connection within the project, so that ideally I initialize it just once and then use it in spiders, pipelines, and MyStatsCollector.py?</p>
<p>Right now, my only idea is to initialize the DB connection in each of these files, which doesn't look very elegant. What's the best way to handle this?</p>
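<p>One common approach is a tiny shared module with a cached connection factory that every spider, pipeline, and the stats collector imports. A self-contained sketch (sqlite3 stands in for psycopg2 so it runs anywhere; the names are illustrative):</p>

```python
# db.py -- shared, lazily created connection.  sqlite3 stands in for
# psycopg2 here so the sketch is self-contained; swap the connect() call
# and credentials for the real project.
import sqlite3
from functools import lru_cache

@lru_cache(maxsize=None)
def get_connection():
    # Created once per process on first use; every later call returns the
    # same object.  With Scrapy's single-process default this gives one
    # connection shared by spiders, pipelines, and stats collectors.
    return sqlite3.connect(":memory:")

# Usage from a spider, pipeline, or MyStatsCollector:
conn = get_connection()
print(conn is get_connection())
```

<p>A more Scrapy-idiomatic variant is to build the connection in a <code>from_crawler</code> classmethod and share it via the crawler object, but the module-level factory keeps the wiring minimal.</p>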
|
<python><database><scrapy>
|
2024-03-15 16:10:45
| 1
| 48,763
|
user984621
|
78,167,996
| 543,913
|
Is os.makedirs(path, exist_ok=True) susceptible to race-conditions?
|
<p>Suppose two different processes simultaneously call <code>os.makedirs(path, exist_ok=True)</code>. Is it possible that one will raise a spurious exception due to a race condition?</p>
<p>My fear is the call might do something like this under the hood:</p>
<pre><code>if not dir_exists(d):
try_make_dir_and_raise_if_exists(d)
</code></pre>
<p>I carefully read the <a href="https://docs.python.org/3/library/os.html#os.makedirs" rel="nofollow noreferrer">documentation</a>, but I see no clear assertion of race-condition safety.</p>
<p>Some other <a href="https://stackoverflow.com/a/57394882/543913">answers</a> on this site suggest the call is safe, but provide no citations.</p>
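<p>CPython's implementation catches <code>FileExistsError</code> while creating each path component, which is what makes concurrent calls safe in practice; a quick empirical stress test (evidence, not proof, since race windows are timing-dependent):</p>

```python
import os
import shutil
import tempfile
import threading

# 8 threads race to create the same parent chain.  If makedirs(exist_ok=True)
# had a naive check-then-create race, FileExistsError would surface here.
base = tempfile.mkdtemp()
errors = []

def worker(i):
    try:
        for j in range(200):
            # every thread shares the "shared/parents" chain; only the leaf differs
            os.makedirs(os.path.join(base, "shared", "parents", f"leaf_{i}_{j}"),
                        exist_ok=True)
    except OSError as exc:
        errors.append(exc)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("errors:", errors)
shutil.rmtree(base, ignore_errors=True)
```
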
|
<python>
|
2024-03-15 15:23:25
| 1
| 2,468
|
dshin
|
78,167,938
| 9,318,323
|
SqlAlchemy Correct way to create url for engine
|
<p>What is the best / correct way to create a <code>url</code> which needs to be passed to <code>sqlalchemy.create_engine</code>? <a href="https://docs.sqlalchemy.org/en/20/core/engines.html#sqlalchemy.create_engine" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/core/engines.html#sqlalchemy.create_engine</a></p>
<p>My connection string looks similar to this:</p>
<pre><code>con_str = "Driver={ODBC Driver 17 for SQL Server};Server=tcp:somedb.database.windows.net,1433;Database=somedbname;Uid=someuser;Pwd=some++pass=;Encrypt=yes;TrustServerCertificate=no"
</code></pre>
<p>If I do (<a href="https://stackoverflow.com/questions/15750711/connecting-to-sql-server-2012-using-sqlalchemy-and-pyodbc">Connecting to SQL Server 2012 using sqlalchemy and pyodbc</a>):</p>
<pre><code>import urllib
import sqlalchemy as sa
connection_url = sa.engine.URL.create(
"mssql+pyodbc",
query={"odbc_connect": urllib.parse.quote_plus(con_str)},
)
print(connection_url.render_as_string(hide_password=False))
</code></pre>
<p>I get this output:</p>
<pre><code>mssql+pyodbc://?odbc_connect=Driver%3D%7BODBC+Driver+17+for+SQL+Server%7D%3BServer%3Dtcp%3Asomedb.database.windows.net%2C1433%3BDatabase%3Dsomedbname%3BUid%3Dsomeuser%3BPwd%3Dsome%2B%2Bpass%3D%3BEncrypt%3Dyes%3BTrustServerCertificate%3Dno
</code></pre>
<p>But if I do (<a href="https://stackoverflow.com/questions/66371841/how-do-i-use-sqlalchemy-create-engine-with-password-that-includes-an">How do I use SQLAlchemy create_engine() with password that includes an @</a>):</p>
<pre><code>connection_url = sa.engine.URL.create(
drivername="mssql+pyodbc",
username="someuser",
password="some++pass=",
host="tcp:somedb.database.windows.net",
port=1433,
database="somedbname",
query={'driver': 'ODBC Driver 17 for SQL Server', 'encrypt': 'yes', 'trustservercertificate': 'no'},
)
print(connection_url.render_as_string(hide_password=False))
</code></pre>
<p>I get a different output:</p>
<pre><code>mssql+pyodbc://someuser:some++pass%3D@[tcp:somedb.database.windows.net]:1433/somedbname?driver=ODBC+Driver+17+for+SQL+Server&encrypt=yes&trustservercertificate=no
</code></pre>
<p>Both of them work for general reads but for more obscure uses <strong>they produce different results</strong>.</p>
<p>For example, for a particular piece of code the former option works while the latter option throws:</p>
<blockquote>
<p>('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Implicit conversion from data type nvarchar(max) to binary is not allowed. Use the CONVERT function to run this query. (257) (SQLExecDirectW)').</p>
</blockquote>
<p>I am assuming the former is correct since the majority of StackOverflow answers provide it as an example. I am interested why different parameters produce such different results and where can I read about it on <a href="https://docs.sqlalchemy.org/" rel="nofollow noreferrer">https://docs.sqlalchemy.org/</a>?</p>
|
<python><sql-server><sqlalchemy>
|
2024-03-15 15:14:41
| 1
| 354
|
Vitamin C
|
78,167,866
| 12,520,740
|
How to make pattern matching efficient for large text datasets
|
<p>I'm currently working on a project that involves processing large volumes of textual data for natural language processing tasks. One critical aspect of my pipeline involves string matching, where I need to efficiently match substrings within sentences against a predefined set of patterns.</p>
<p>Here's a mock example to illustrate the problem with following list of sentences:</p>
<pre class="lang-py prettyprint-override"><code>sentences = [
"the quick brown fox jumps over the lazy dog",
"a watched pot never boils",
"actions speak louder than words"
]
</code></pre>
<p>And I have a set of patterns:</p>
<pre class="lang-py prettyprint-override"><code>patterns = [
"quick brown fox",
"pot never boils",
"actions speak"
]
</code></pre>
<p>My goal is to efficiently identify sentences that contain any of these patterns. Additionally, I need to tokenize each sentence and perform further analysis on the matched substrings.</p>
<p>Currently, I'm using a brute-force approach with nested loops, but it's not scalable for large datasets. I'm looking for more sophisticated techniques or algorithms to optimize this process.</p>
<p>How can I implement string matching for this scenario, considering scalability and performance? Any suggestions would be highly appreciated!</p>
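<p>One step up from nested loops, using only the standard library: compile all patterns into a single regex alternation so each sentence is scanned once, regardless of the number of patterns:</p>

```python
import re

sentences = [
    "the quick brown fox jumps over the lazy dog",
    "a watched pot never boils",
    "actions speak louder than words",
]
patterns = ["quick brown fox", "pot never boils", "actions speak"]

# One compiled alternation scans each sentence in a single pass instead of
# looping over every pattern per sentence.  Longest patterns first so that
# overlapping alternatives prefer the longer match.
combined = re.compile("|".join(
    re.escape(p) for p in sorted(patterns, key=len, reverse=True)))

matches = {}
for sentence in sentences:
    hits = combined.findall(sentence)
    if hits:
        matches[sentence] = hits

print(matches)
```

<p>For very large pattern sets, an Aho–Corasick automaton (e.g. the third-party <code>pyahocorasick</code> package) goes further: it matches every pattern in a single pass in time linear in the text length, independent of the number of patterns.</p>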
|
<python><substring>
|
2024-03-15 15:03:18
| 1
| 1,156
|
melvio
|
78,167,748
| 1,945,486
|
Python Protocol incompatible warning
|
<p>I would like to type-hint that my pure mixin (<code>MyMixin</code>) expects the superclass to have the <code>get_context_data</code> method.</p>
<pre><code>from abc import abstractmethod
from typing import Protocol


class HasViewProtocol(Protocol):
    kwargs: dict

    @abstractmethod
    def get_context_data(self, **kwargs) -> dict:
        pass


class MyMixin(HasViewProtocol):
    def get_context_data(self, **kwargs) -> dict:
        return super().get_context_data(**kwargs) | {
            "extra_data": "bla"
        }
</code></pre>
<p>Here Pycharm's inspection issues:</p>
<pre><code>type of 'get_context_data' is incompatible with 'HasViewProtocol`
</code></pre>
<ul>
<li>Are protocols the right approach to annotate my expectation on the superclass?</li>
<li>Why is the <code>get_context_data</code> incompatible with the <code>HasViewProtocol</code>?</li>
</ul>
|
<python><python-typing>
|
2024-03-15 14:43:58
| 1
| 12,343
|
ProfHase85
|
78,167,633
| 13,126,794
|
How to generate the section id based on certain condition
|
<p>I have a pandas DataFrame with this input:</p>
<pre><code> data = {
'sec_id': ['1', '', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3', '', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3', '4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4', '5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4', '6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1', '', '', '6.9.2'],
'p_type': ['Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4', 'Heading4', 'Heading3']
}
df = pd.DataFrame(data)
</code></pre>
<p>The problem is to populate the blank "document_section_id" values with accurate section ID values, using the preceding ones as references.</p>
<p>Conditions:</p>
<ol>
<li><p>The number of digits is determined by the "paragraph type" column. For example, for "Heading3," there should be 4 digits and 3 dots, like so: 1.2.3.1.</p>
</li>
<li><p>For each empty value, it should reference the preceding available "paragraph type" and increment by 1 accordingly.
Example 1: given the input, the section ID for the 12th row can be derived from the previous one, resulting in the computed value 2.3.1.
Example 2: for the 48th and 49th rows, the section IDs need to be derived as 6.9.1.1 and 6.9.1.2, respectively.</p>
</li>
</ol>
<p>There can be at most 10 levels of subsections, so the logic should work irrespective of the nesting depth.</p>
<p>Output:</p>
<pre><code> sec_id = [
'1', '1.1', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3',
'2.3.1', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3',
'4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4',
'5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4',
'6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1',
'6.9.1.1', '6.9.1.2', '6.9.2'
]
p_type = [
'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3',
'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3',
'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2',
'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2',
'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2',
'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2',
'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3',
'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4',
'Heading4', 'Heading3'
]
</code></pre>
<p>This is what I tried but it's not giving accurate output:</p>
<pre><code>current_section_id = ""
current_level = 0

for index, row in df.iterrows():
    if row['sec_id'] == '':
        current_level += 1
        section_id = current_section_id.split('.')
        section_id[current_level - 1] = str(int(section_id[current_level - 1]) + 1)
        section_id = '.'.join(section_id[:current_level])
        current_section_id = section_id
        df.at[index, 'document_section_id'] = section_id
    else:
        current_section_id = row['sec_id']
        current_level = row['paragraph_type'].count('Heading') - 1
|
<python><python-3.x><pandas><dataframe>
|
2024-03-15 14:25:43
| 1
| 961
|
Dcook
|
78,167,419
| 307,050
|
How to determine frequency dependant amplitude with FFT
|
<p>I'm trying to measure and calculate the frequency response of an audio device by simply generating input signals and measuring the output with a frequency sweep.</p>
<p>Here's what I'm trying to achieve in pseudo code:</p>
<pre><code>for each frequency f in (10-20k):
generate reference signal s with frequency f
async play s and record result r
determine amplitude a of r using FFT
add tuple (f,a) to result
</code></pre>
<p>And in python:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib import pyplot as plt
import sounddevice as sd
import wave
from math import log10, ceil
from scipy.fft import fft, rfft, rfftfreq, fftfreq

SAMPLE_RATE = 44100  # Hertz
DURATION = 3  # Seconds

def generate_sine_wave(freq, sample_rate, duration):
    x = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    frequencies = x * freq
    y = np.sin((2 * np.pi) * frequencies)
    return x, y

def main():
    # info about our device
    print(sd.query_devices(device="Studio 24c"))

    # default device settings
    sd.default.device = 'Studio 24c'
    sd.default.samplerate = SAMPLE_RATE

    f_start = 10
    f_end = 20000
    samples_per_decade = 10
    ndecades = ceil(log10(f_end) - log10(f_start))
    npoints = ndecades * samples_per_decade
    freqs = np.logspace(log10(f_start), log10(f_end), num=npoints, endpoint=True, base=10)

    measure_duration = 0.25  # seconds
    peaks = []
    for f in freqs:
        _, y = generate_sine_wave(f, SAMPLE_RATE, measure_duration)
        rec = sd.playrec(y, SAMPLE_RATE, input_mapping=[2], output_mapping=[2])
        sd.wait()

        yf = np.fft.rfft(rec, axis=0)
        yf_abs = 1 / rec.size * np.abs(yf)
        xf = np.fft.rfftfreq(rec.size, d=1./SAMPLE_RATE)
        peaks.append(np.max(yf_abs))

    plt.xscale("log")
    plt.scatter(freqs, peaks)
    plt.grid()
    plt.show()

if __name__ == "__main__":
    main()
</code></pre>
<p>Before measuring the actual device, I simply looped the output signal to the input of my audio interface to basically "calibrate" what I'm doing. I was expecting the amplitude to be equal across all frequencies. However, this is what I got:</p>
<p><a href="https://i.sstatic.net/Zo2Gb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zo2Gb.png" alt="enter image description here" /></a></p>
<p>Why are all the amplitudes all over the place even though the generated sine wave is the same? I didn't change anything on my audio interface during the sweep measuring phase.</p>
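<p>One likely contributor (an educated guess, since the loopback should otherwise be flat): each 0.25 s capture rarely contains a whole number of cycles of the tone, so its energy leaks across FFT bins and the raw peak height varies with frequency ("scalloping"). Applying a window before the FFT and correcting for the window's gain makes the peak estimate far less frequency-dependent:</p>

```python
import numpy as np

fs = 44100
n = int(fs * 0.25)
t = np.arange(n) / fs
freq = 1002.0                       # deliberately halfway between FFT bins (4 Hz spacing)
x = np.sin(2 * np.pi * freq * t)    # true amplitude 1.0

# Raw (rectangular-window) estimate: up to ~36 % low halfway between bins
raw = 2 / n * np.max(np.abs(np.fft.rfft(x)))

# Hann-window estimate, corrected by the window's coherent gain sum(w)
w = np.hanning(n)
windowed = 2 / np.sum(w) * np.max(np.abs(np.fft.rfft(x * w)))

print(f"raw: {raw:.3f}  windowed: {windowed:.3f}  (true 1.0)")
```

<p>Note also that the correct single-sided amplitude normalization is <code>2/N</code> (or <code>2/sum(w)</code> with a window), not <code>1/N</code> as in the code above; even better is to quote the RMS of the whole recording, which does not depend on where the tone falls relative to the bins.</p>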
|
<python><numpy><matplotlib><fft>
|
2024-03-15 13:52:35
| 1
| 1,347
|
mefiX
|
78,167,398
| 2,656,359
|
Django: annotate query of one model with count of different non-related model which is filtered by a field of first model
|
<p>Long title, in short: I have a complex annotate query to work with.
Example models:</p>
<pre><code>class FirstModel(models.Model):
    master_tag = models.CharField()
    ... other fields

class SecondModel(models.Model):
    ref_name = models.CharField()
</code></pre>
<p>I want to fetch all objects from <strong>FirstModel</strong>, annotated with the count of all <strong>SecondModel</strong> objects whose <code>ref_name</code> equals the <code>master_tag</code> of the <strong>FirstModel</strong> object.</p>
<p>What i tried:</p>
<p>I tried using annotate with Subquery and OuterRef but cannot get this to work, as I keep getting errors.</p>
<pre><code>from django.db.models import OuterRef, Subquery, Count
sub_query = Subquery(
    SecondModel.objects.filter(ref_name=OuterRef("master_tag")).values_list("pk", flat=True)
)
FirstModel.objects.annotate(sm_count=Count(sub_query))
</code></pre>
<p>This gave me the error: <code>django.db.utils.ProgrammingError: more than one row returned by a subquery used as an expression</code>.
I tried lots of other things, one of which was putting <code>.count()</code> at the end of the subquery, but that causes another error, because <code>count()</code> evaluates the query eagerly and fails due to <code>OuterRef</code>.</p>
<p>So is there a way to fetch a query like this with the count annotation? Did I make any mistakes in writing the above query?</p>
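<p>A framework-dependent sketch (it needs a configured Django project, so it is shown as-is rather than as runnable code): aggregating inside the subquery makes it return exactly one row per outer object, which avoids the "more than one row" error:</p>

```python
# Sketch only: requires a configured Django project with these models.
from django.db.models import Count, IntegerField, OuterRef, Subquery
from django.db.models.functions import Coalesce

count_subquery = (
    SecondModel.objects
    .filter(ref_name=OuterRef("master_tag"))
    .order_by()                   # drop any default ordering
    .values("ref_name")           # group by the matched ref_name
    .annotate(c=Count("pk"))      # collapse to a single row: the count
    .values("c")
)

qs = FirstModel.objects.annotate(
    # Coalesce turns "no matching rows" (NULL) into 0
    sm_count=Coalesce(Subquery(count_subquery, output_field=IntegerField()), 0)
)
```

<p>The <code>values()</code>/<code>annotate()</code> pair is what turns the correlated subquery into a single aggregated value per outer row.</p>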
|
<python><django><annotations>
|
2024-03-15 13:50:15
| 1
| 456
|
Dishant Chavda
|
78,167,219
| 12,320,370
|
Retrieve Extra parameters from Airflow Connection
|
<p>I have a Snowflake connection defined in Airflow.<a href="https://i.sstatic.net/uWFA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uWFA6.png" alt="enter image description here" /></a></p>
<p>I am selecting the user, password and schema using the below:</p>
<pre><code>conn = BaseHook.get_connection("snowflake_conn")
conn.login
</code></pre>
<p>return the login (EXAMPLE in this case)</p>
<p>If I try to access the 'extra' parameters, it doesn't work.</p>
<pre><code>conn = BaseHook.get_connection("snowflake_conn")
conn.role
</code></pre>
<p>returns an <strong>AttributeError: 'Connection' object has no attribute 'role'</strong></p>
<p>Is there any different way in which I could grab the extra parameters from Airflow Connection settings?</p>
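<p>Airflow stores the "Extra" field as a JSON string, and the <code>Connection</code> object exposes it parsed as <code>conn.extra_dejson</code> (attribute access like <code>conn.role</code> does not exist). Outside Airflow, the same parsing idea looks like this (the values here are hypothetical):</p>

```python
import json

# What Airflow does under the hood: the Extra field is a JSON string that
# gets parsed into a dict, queried with .get() for safe defaults.
extra = '{"role": "EXAMPLE_ROLE", "warehouse": "EXAMPLE_WH"}'  # hypothetical
extra_dejson = json.loads(extra)
role = extra_dejson.get("role")
print(role)
```

<p>Inside a DAG the equivalent would be <code>BaseHook.get_connection("snowflake_conn").extra_dejson.get("role")</code>.</p>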
|
<python><airflow><connection><hook>
|
2024-03-15 13:20:33
| 1
| 333
|
Nairda123
|
78,167,201
| 2,725,810
|
"matching query does not exist" when creating an object
|
<p>I have:</p>
<pre class="lang-py prettyprint-override"><code>class GoogleSignIn(APIView):
    def post(self, request):
        settings = request.settings
        code = request.data['code']

        # Verify the OAuth code with Google
        try:
            google_user_info = verify_user(code)
        except Exception as e:
            print(str(e))
            return Response({'error': 'Failed to verify with Google.'})

        email = google_user_info['email']
        picture = google_user_info['picture']

        try:
            user = User.objects.get(email=email)
            user_info = UserInfo.objects.get(user=user)
        except User.DoesNotExist:
            first_name = google_user_info['given_name']
            user = User.objects.create_user(
                email=email,
                username=email,
                first_name=first_name,
                last_name=google_user_info['family_name'],
                password=None)
            user_info = UserInfo.objects.create(
                user=user,
                credits=request.settings.INITIAL_CREDITS,
                expiry=compute_credits_expiry(
                    num_days=settings.FREE_CREDITS_EXPIRY_IN_DAYS))
</code></pre>
<p>The last statement (that is, creating the <code>UserInfo</code> object in the exception handler) produces an exception:</p>
<pre><code>accounts.models.UserInfo.DoesNotExist: UserInfo matching query does not exist.
</code></pre>
<p>What does it mean?</p>
<p>The complete output:</p>
<pre><code>Internal Server Error: /accounts/google-sign-in/
Traceback (most recent call last):
File "/mnt/c/Dropbox/Parnasa/Web/wherewasit/backend/accounts/views.py", line 129, in post
user = User.objects.get(email=email)
File "/usr/lib/python3/dist-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
django.contrib.auth.models.User.DoesNotExist: User matching query does not exist.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/lib/python3/dist-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3/dist-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/views/generic/base.py", line 70, in view
return self.dispatch(request, *args, **kwargs)
File "/home/meir/.local/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/home/meir/.local/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/meir/.local/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/meir/.local/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/mnt/c/Dropbox/Parnasa/Web/wherewasit/backend/accounts/views.py", line 140, in post
user_info = UserInfo.objects.create(
File "/usr/lib/python3/dist-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line 453, in create
obj.save(force_insert=True, using=self.db)
File "/usr/lib/python3/dist-packages/django/db/models/base.py", line 739, in save
self.save_base(using=using, force_insert=force_insert,
File "/usr/lib/python3/dist-packages/django/db/models/base.py", line 763, in save_base
pre_save.send(
File "/usr/lib/python3/dist-packages/django/dispatch/dispatcher.py", line 180, in send
return [
File "/usr/lib/python3/dist-packages/django/dispatch/dispatcher.py", line 181, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
File "/mnt/c/Dropbox/Parnasa/Web/wherewasit/backend/accounts/views.py", line 220, in your_model_pre_save
old = UserInfo.objects.get(id=instance.id)
File "/usr/lib/python3/dist-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3/dist-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
accounts.models.UserInfo.DoesNotExist: UserInfo matching query does not exist.
</code></pre>
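<p>To make sure I'm reading the traceback right: the final error comes from my <code>your_model_pre_save</code> signal receiver (line 220), which looks up the old row by id. A plain-Python model of that situation (the dict and guard below are illustrative stand-ins, not Django API) shows why such a lookup cannot succeed during creation:</p>

```python
db = {}  # stands in for the UserInfo table

class Instance:
    def __init__(self, id=None):
        self.id = id  # None until the row has been saved once

def pre_save(instance):
    # Guarding on the id means the "old row" lookup only happens on
    # updates; during creation there is nothing to fetch, so we bail out.
    if instance.id is None or instance.id not in db:
        return None
    return db[instance.id]

assert pre_save(Instance()) is None           # create path: no lookup, no error
db[1] = "old row"
assert pre_save(Instance(id=1)) == "old row"  # update path: old state found
```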
|
<python><django>
|
2024-03-15 13:16:02
| 1
| 8,211
|
AlwaysLearning
|
78,167,174
| 1,296,783
|
Pydantic V2 @field_validator is not executed
|
<p>I try to parse and validate a CSV file using the following Pydantic (V2) model.</p>
<pre><code>from typing import Optional
from pydantic import BaseModel, field_validator
class PydanticProduct(BaseModel):
fid: Optional[float]
water: Optional[float]
class ConfigDict:
from_attributes = True
@field_validator('fid', 'water', mode='before')
def check_empty_fields(cls, value) -> float:
if value == '':
return None
try:
float_value = float(value)
except ValueError:
raise ValueError("Unable to parse string as a number. Please provide a valid number.")
return float_value
</code></pre>
<p>If the CSV file contents look like this</p>
<pre><code>fid,water
1.0,81.3
1.25,26.3
3.0,31.5
</code></pre>
<p>the fields will get converted from strings to float correctly.</p>
<p>On the other hand if the CSV file contents have empty strings</p>
<pre><code>fid,water
1.0,
,26.3
3.0,31.5
</code></pre>
<p>then I will get the following error</p>
<blockquote>
<p>Input should be a valid number, unable to parse string as a number [type=float_parsing, input_value='', input_type=str]</p>
</blockquote>
<p>I tried to use the <code>@field_validator</code> with the <code>mode="before"</code> but the problem is that the validation is not executed.</p>
<p>In addition, I noticed that the validation is not executed even if there are no errors (i.e., the case without empty strings).</p>
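<p>For what it's worth, the conversion logic itself behaves as I expect when called directly, which makes me suspect the registration rather than the function body. A standalone check of the same logic with the Pydantic plumbing removed:</p>

```python
from typing import Optional

def check_empty_fields(value) -> Optional[float]:
    # Same body as the validator above, without the Pydantic decorator.
    if value == '':
        return None
    try:
        return float(value)
    except ValueError:
        raise ValueError("Unable to parse string as a number. "
                         "Please provide a valid number.")

assert check_empty_fields('') is None
assert check_empty_fields('26.3') == 26.3
assert check_empty_fields('3.0') == 3.0
```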
|
<python><pydantic-v2>
|
2024-03-15 13:10:43
| 1
| 798
|
yeaaaahhhh..hamf hamf
|
78,167,076
| 21,185,825
|
Python - azure.devops - get branch/commits for a pull request
|
<p>I am parsing pull requests one by one with the azure.devops Python library.</p>
<p><strong>for each PR I need the related branch and the commits</strong></p>
<p>is there a way to do this ?</p>
<pre><code>repositories = git_client.get_repositories(project=None, include_links=None, include_all_urls=None, include_hidden=None)
for repo in repositories:
if repo.remote_url == repo_url:
pull_requests = git_client.get_pull_requests(repository_id=repo.id,search_criteria=GitPullRequestSearchCriteria(status='completed'))
for pr in pull_requests:
print(f'Pull Request #{pr.pull_request_id}: {pr.title}')
</code></pre>
<p>I've been looking at the docs, but I cannot find anything related to this.</p>
<p>thanks for your help</p>
|
<python><azure-devops>
|
2024-03-15 12:54:57
| 1
| 511
|
pf12345678910
|
78,166,924
| 4,792,022
|
How to Improve Efficiency in Random Column Selection and Assignment in Pandas DataFrame?
|
<p>I'm working on a project where I need to create a new DataFrame based on an existing one, with certain columns randomly selected and assigned in each row with probability proportional to the number in that column.</p>
<p>However, my current implementation seems to be inefficient, especially when dealing with large datasets. I'm seeking advice on how to optimize this process for better performance.</p>
<p>Here's a simplified version of what I'm currently doing:</p>
<pre><code>import pandas as pd
import numpy as np
# Sample DataFrame
data = {
'dog': [1, 2, 3, 4],
'cat': [5, 6, 7, 8],
'parrot': [9, 10, 11, 12],
'owner': ['fred', 'bob', 'jim', 'jannet']
}
df = pd.DataFrame(data)
# List of relevant columns
relevant_col_list = ['dog', 'cat', 'parrot']
# New DataFrame with the same number of rows
new_df = df.copy()
# Create 'iteration_1' column in new_df
new_df['iteration_1'] = ""
# Iterate over rows
for index, row in new_df.iterrows():
# Copy columns not in relevant_col_list
for column in new_df.columns:
if column not in relevant_col_list and column != 'iteration_1':
new_df.at[index, column] = row[column]
# Randomly select a column from relevant_col_list with probability proportional to the number in the column
    probabilities = df[relevant_col_list].iloc[index] / df[relevant_col_list].iloc[index].sum()
    chosen_column = np.random.choice(relevant_col_list, p=probabilities)
# Write the name of the chosen column in the 'iteration_1' column
new_df.at[index, 'iteration_1'] = chosen_column
print(new_df)
</code></pre>
<p>How can I speed it up?</p>
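<p>For reference, the whole per-row loop can be replaced by one vectorised inverse-CDF draw. This is a sketch under my assumptions (only the columns in <code>relevant_col_list</code> matter for the draw; everything else is carried over by <code>df.copy()</code> anyway):</p>

```python
import numpy as np
import pandas as pd

data = {
    'dog': [1, 2, 3, 4],
    'cat': [5, 6, 7, 8],
    'parrot': [9, 10, 11, 12],
    'owner': ['fred', 'bob', 'jim', 'jannet'],
}
df = pd.DataFrame(data)
relevant_col_list = ['dog', 'cat', 'parrot']

# Normalise each row to probabilities, build the cumulative distribution,
# then pick the first column whose cumulative probability exceeds one
# uniform draw per row (inverse-CDF sampling, no Python-level loop).
probs = df[relevant_col_list].div(df[relevant_col_list].sum(axis=1), axis=0)
cum = probs.cumsum(axis=1).to_numpy()
draws = np.random.default_rng(0).random((len(df), 1))
chosen = np.array(relevant_col_list)[(draws < cum).argmax(axis=1)]

new_df = df.copy()                 # non-relevant columns copied as-is
new_df['iteration_1'] = chosen
print(new_df)
```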
|
<python><pandas><performance><random>
|
2024-03-15 12:28:59
| 1
| 544
|
Abijah
|
78,166,720
| 6,943,622
|
Number of outstanding shares per query
|
<p>I came across this question and I was able to solve it, but inefficiently. My solution is a naive O(m*n), where m is the number of queries and n is the number of orders.</p>
<blockquote>
<p>You are given a log (std:vector) of orders sent from our trading
system over a day. Each order has the following properties:</p>
<ul>
<li>order_token: unique integer identifying the order</li>
<li>shares: number of shares to buy or sell</li>
<li>price: price to buy or sell each share at</li>
<li>side: false = sell, true = buy</li>
<li>created at: timestamp when the order was created</li>
<li>cancelled_or_executed_at: timestamp when the order
was cancelled or executed (filled)</li>
</ul>
</blockquote>
<blockquote>
<p>An order is live in the interval
[created_at, cancelled_or_executed_at). That is, created at is
inclusive and cancelled_or_executed_at is exclusive. Timestamps are
represented as integers, e.g. milliseconds since midnight. You may
assume each order is cancelled or executed in full.</p>
</blockquote>
<blockquote>
<p>In addition to
orders, you are also given a vector of queries. Each query has one
field: query_time, a timestamp. The answer to the query is the total
number of shares outstanding among all orders which were live at the
query time. Outstanding shares are aggregated regardless of time, e.g.
open orders to Buy 10 and Sell 10 aggregate to 20 shares live.</p>
</blockquote>
<p>I was wondering if anyone had a better way to optimize my solution below with any data structure or algorithm; I'm sure there is one. It's a C++ question, but I did my solution in Python for my convenience.</p>
<pre><code>def calculate_outstanding_shares(orders, queries):
result = {}
for query in queries:
live_trades = 0
for order in orders:
            if order[4] <= query < order[5]:  # created_at inclusive, cancelled_or_executed_at exclusive
live_trades += order[1]
result[query] = live_trades
return result
# Example usage
orders = [
[3, 15, 200, True, 2000, 4000],
[1,10,100,True,0,5000],
[4, 25, 250, False, 2500, 6000],
[2,20,150,False,1000,3000],
]
queries = [
500, # Before any order
1500, # Inside the duration of the first buy order
2500, # Inside the duration of both buy orders and the start of the second sell order
3500, # Inside the duration of the second sell order and after the first sell order ends
5500 # After all orders have ended
]
result = calculate_outstanding_shares(orders, queries)
print(result)
</code></pre>
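<p>For comparison, a common way to beat O(m*n) here is a sweep over sorted events plus one binary search per query, for O((n+m) log n) overall. The sketch below treats created_at as inclusive and cancelled_or_executed_at as exclusive, per the problem statement:</p>

```python
from bisect import bisect_right
from itertools import accumulate

def calculate_outstanding_shares_fast(orders, queries):
    # One +shares event when an order goes live, one -shares event when
    # it stops being live (half-open interval [created, cancelled)).
    events = []
    for _token, shares, _price, _side, created, cancelled in orders:
        events.append((created, shares))
        events.append((cancelled, -shares))
    events.sort()
    times = [t for t, _ in events]
    # running[i] = shares outstanding after applying events[:i+1]
    running = list(accumulate(delta for _, delta in events))
    result = {}
    for q in queries:
        i = bisect_right(times, q)   # all events at time <= q have applied
        result[q] = running[i - 1] if i else 0
    return result

orders = [
    [3, 15, 200, True, 2000, 4000],
    [1, 10, 100, True, 0, 5000],
    [4, 25, 250, False, 2500, 6000],
    [2, 20, 150, False, 1000, 3000],
]
queries = [500, 1500, 2500, 3500, 5500]
print(calculate_outstanding_shares_fast(orders, queries))
# {500: 10, 1500: 30, 2500: 70, 3500: 50, 5500: 25}
```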
|
<python><algorithm><intervals>
|
2024-03-15 11:53:57
| 2
| 339
|
Duck Dodgers
|
78,166,642
| 13,381,632
|
Append Columns From Multiple CSVs into One File - Python
|
<p>I am trying to use the Pandas library to append the columns of multiple CSV files of the same format into a single file, but cannot seem to get the syntax correct in my script. Here are the CSVs I am trying to parse, all located in the same file path, and the desired output CSV file:</p>
<p>CSV #1:</p>
<pre><code>Application, Classifier
Request, 14
Timeframe, 3
Adjudication, 10
</code></pre>
<p>CSV #2:</p>
<pre><code>Application, Processor
Request, 15
Timeframe, 5
Adjudication, 20
</code></pre>
<p>CSV #3:</p>
<pre><code>Application, Receiver
Request, 12
Timeframe, 10
Adjudication, 21
</code></pre>
<p>Desired CSV:</p>
<pre><code>Application, Classifier, Processor, Receiver
Request, 14, 15, 12
Timeframe, 3, 5, 10
Adjudication, 10, 20, 21
</code></pre>
<p>Below is the code I am trying to implement to write to the desired CSV to a single file:</p>
<pre><code>import os
import pandas as pd
path = 'C:\\Users\\mdl518\\Desktop\\'
extension = '.csv'
files = [file for file in os.listdir(path) if file.endswith(extension)]
dfs = []
for file in files:
df = pd.read_csv(os.path.join(path, file))
dfs.append(df)
df1 = pd.concat(dfs, ignore_index=True)
df2 = df1.apply(lambda x: pd.Series(x.dropna().values)).dropna()
df2.to_csv('df_results.csv', index = False)
</code></pre>
<p>I feel there must be something small I am missing to append each column of the individual CSVs to a single output; any assistance is most appreciated!</p>
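<p>For reference, a sketch of the column-wise join I believe this needs: concatenating stacks the frames vertically, whereas merging on the shared <code>Application</code> key lines the value columns up side by side. File contents are inlined here so the snippet is self-contained; on disk, the <code>StringIO</code> objects would just be the file paths:</p>

```python
from functools import reduce
from io import StringIO

import pandas as pd

# Stand-ins for the three CSV files; skipinitialspace=True strips the
# blank after each comma so headers parse as 'Classifier', not ' Classifier'.
csvs = [
    "Application, Classifier\nRequest, 14\nTimeframe, 3\nAdjudication, 10\n",
    "Application, Processor\nRequest, 15\nTimeframe, 5\nAdjudication, 20\n",
    "Application, Receiver\nRequest, 12\nTimeframe, 10\nAdjudication, 21\n",
]
dfs = [pd.read_csv(StringIO(c), skipinitialspace=True) for c in csvs]

# Successively join on the key column shared by every file.
merged = reduce(lambda left, right: pd.merge(left, right, on='Application'), dfs)
merged.to_csv('df_results.csv', index=False)
print(merged)
```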
|
<python><pandas><csv><indexing><output>
|
2024-03-15 11:39:42
| 0
| 349
|
mdl518
|
78,166,627
| 87,416
|
Failed to authenticate the user '' in Active Directory (Authentication option is 'ActiveDirectoryIntegrated')
|
<p>I want to use the logged-in Windows credentials from Visual Studio Code as a pass-through to authenticate to an Azure SQL database.</p>
<p>Below is my python code using odbc driver 18.</p>
<pre><code>connectionString = f'Driver={{ODBC Driver 18 for SQL Server}};Server={SERVER};Database={DATABASE};Authentication=ActiveDirectoryIntegrated;Encrypt=yes'
conn = pyodbc.connect(connectionString)
</code></pre>
<p>The error I'm receiving is:</p>
<blockquote>
<p>pyodbc.Error: ('FA004', "[FA004] [Microsoft][ODBC Driver 18 for SQL
Server][SQL Server]Failed to authenticate the user '' in Active
Directory (Authentication option is
'ActiveDirectoryIntegrated').\r\nError code 0xCAA2000C; state
10\r\nAADSTS50076: Due to a configuration change made by your
administrator, or because you moved to a new location, you must use
multi-factor authentication to access ...</p>
</blockquote>
|
<python><visual-studio-code><pyodbc>
|
2024-03-15 11:37:10
| 0
| 15,381
|
David
|
78,166,612
| 9,749,124
|
Installing ChromeDriver and Headless Chrome Driver with latest version of Selenium
|
<p>I am using <code>Python</code> and <code>Selenium</code> on <code>AWS Lambdas</code> for crawling.
I have updated <code>Python</code> to 3.11 and <code>Selenium</code> to 4.18.0, but then my crawlers stopped working.
This is the code for <code>Selenium</code>:</p>
<pre><code>import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
def get_headless_driver():
options = Options()
service = Service(executable_path=r'/opt/chromedriver')
options.binary_location = '/opt/headless-chromium'
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--single-process')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--window-size=1920x1080')
options.add_argument('--start-maximized')
return webdriver.Chrome(service=service, options=options)
def get_selenium_driver():
return get_local_driver() if os.environ.get('STAGE') == 'local' else get_headless_driver()
</code></pre>
<p>This is the code for installing <code>chromedriver</code>:</p>
<pre><code>#! /bin/bash
# exit when any command fails
set -e
NOCOLOR='\033[0m'
GREEN='\033[0;32m'
echo -e "\n${GREEN}Installing 'headless' layer dependencies...${NOCOLOR}\n"
sudo mkdir -p layers/headless && sudo chmod 777 layers/headless && cd layers/headless
# https://github.com/adieuadieu/serverless-chrome/issues/133
echo -e "\n${GREEN}Installing Chrome Driver v2.37...${NOCOLOR}\n"
sudo curl -SL https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip > chromedriver.zip
sudo chmod 755 chromedriver.zip
sudo unzip chromedriver.zip
sudo rm chromedriver.zip
echo -e "\n${GREEN}Installing Serverless Chrome v1.0.0-37...${NOCOLOR}\n"
sudo curl -SL https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-41/stable-headless-chromium-amazonlinux-2017-03.zip > headless-chromium.zip
sudo chmod 755 headless-chromium.zip
sudo unzip headless-chromium.zip
sudo rm headless-chromium.zip
cd ../../
</code></pre>
<p>I am getting this error:</p>
<pre><code>Message: Service /opt/chromedriver unexpectedly exited. Status code was: 127
</code></pre>
<p>How should I fix this error?
Should I also update the chromedriver and headless-chromium binaries?
What versions should I choose?</p>
|
<python><selenium-webdriver><aws-lambda><selenium-chromedriver>
|
2024-03-15 11:34:35
| 2
| 3,923
|
taga
|
78,166,510
| 9,097,114
|
Filter data from df using date picker - tkinter python
|
<p>Hi, I am trying to filter the data based on a date range from the df below and create the output as df1, using tkinter - Python.</p>
<p><strong>df:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Date</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>361</td>
<td>10/15/2023</td>
<td>21</td>
</tr>
<tr>
<td>865</td>
<td>8/8/2023</td>
<td>565</td>
</tr>
<tr>
<td>66</td>
<td>1/22/2023</td>
<td>263</td>
</tr>
<tr>
<td>54</td>
<td>5/6/2023</td>
<td>350</td>
</tr>
<tr>
<td>989</td>
<td>10/30/2023</td>
<td>253</td>
</tr>
<tr>
<td>843</td>
<td>6/26/2023</td>
<td>62</td>
</tr>
<tr>
<td>957</td>
<td>10/17/2023</td>
<td>476</td>
</tr>
</tbody>
</table></div>
<p>The output should be the result of filtering df by any date range (for example, df1 below was produced by selecting dates between 01/01/2023 and 05/31/2023 (mm/dd/yyyy)):<br />
<strong>df1</strong>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Date</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>66</td>
<td>1/22/2023</td>
<td>263</td>
</tr>
<tr>
<td>54</td>
<td>5/6/2023</td>
<td>350</td>
</tr>
</tbody>
</table></div>
<p>The code I have written is as follows:</p>
<pre><code>from tkinter import *
import tkcalendar
from datetime import timedelta
root = Tk()
def date_range(start,stop):
global dates # If you want to use this outside of functions
dates = []
diff = (stop-start).days
for i in range(diff+1):
day = start + timedelta(days=i)
dates.append(day)
if dates:
print(dates) # Print it, or even make it global to access it outside this
else:
print('Make sure the end date is later than start date')
date1 = tkcalendar.DateEntry(root)
date1.pack(padx=10,pady=10)
date2 = tkcalendar.DateEntry(root)
date2.pack(padx=10,pady=10)
Button(root,text='Find range',command=lambda: date_range(date1.get_date(),date2.get_date())).pack()
root.mainloop()
</code></pre>
<p>How do I include my required functionality (creating df1 from df) in the above code?<br />
Thanks in advance.</p>
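<p>For reference, a sketch of the filtering step on its own (data inlined so it runs standalone; in the app, <code>start</code> and <code>stop</code> would be the <code>datetime.date</code> objects returned by <code>date1.get_date()</code> and <code>date2.get_date()</code>):</p>

```python
from datetime import date
from io import StringIO

import pandas as pd

csv_text = """ID,Date,Value
361,10/15/2023,21
865,8/8/2023,565
66,1/22/2023,263
54,5/6/2023,350
989,10/30/2023,253
843,6/26/2023,62
957,10/17/2023,476
"""
df = pd.read_csv(StringIO(csv_text))
df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%Y')

def filter_by_range(df, start, stop):
    # Boolean mask between the two (inclusive) dates picked in the UI.
    mask = (df['Date'].dt.date >= start) & (df['Date'].dt.date <= stop)
    return df[mask]

df1 = filter_by_range(df, date(2023, 1, 1), date(2023, 5, 31))
print(df1)   # rows for IDs 66 and 54
```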
|
<python><tkinter>
|
2024-03-15 11:16:15
| 0
| 523
|
san1
|
78,166,405
| 16,815,358
|
Reproducing the phase spectrum while using np.fft.fft2 and cv2.dft. Why are the results not similar?
|
<p>Another <a href="https://stackoverflow.com/q/78163775/16815358">question</a> was asking about the correct way of getting magnitude and phase spectra while using <code>cv2.dft</code>.</p>
<p>My answer was limited to the numpy approach and then I thought that using OpenCV for this would be even nicer. I am currently trying to reproduce the same results but I am seeing significant differences in the phase spectrum.</p>
<p>Here are my imports:</p>
<pre><code>%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import cv2
im = np.zeros((50, 50), dtype = np.float32) # create empty array
im[2:10, 2:10] = 255 # draw a rectangle
</code></pre>
<p>The numpy example and results:</p>
<pre><code>
imFFTNumpy = np.fft.fft2(im)
imFFTNumpyShifted = np.fft.fftshift(imFFTNumpy)
magSpectrumNumpy = np.abs(imFFTNumpyShifted)
phaseSpectrumNumpy = np.angle(imFFTNumpyShifted)
fig, ax = plt.subplots(nrows = 1, ncols = 3)
ax[0].imshow(im)
ax[1].imshow(magSpectrumNumpy)
ax[2].imshow(phaseSpectrumNumpy)
plt.suptitle("Using Numpy np.fft.fft2 and np.abs/ np.angle")
</code></pre>
<p><a href="https://i.sstatic.net/W846G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W846G.png" alt="Numpy results" /></a></p>
<p>The OpenCV example and results:</p>
<pre><code>imFFTOpenCV = cv2.dft(im, flags=cv2.DFT_COMPLEX_OUTPUT)
imFFTOpenCVShifted = np.fft.fftshift(imFFTOpenCV)
magSpectrumOpenCV, phaseSpectrumOpenCV = cv2.cartToPolar(imFFTOpenCVShifted[:,:,0], imFFTOpenCVShifted[:,:,1])
fig, ax = plt.subplots(nrows = 1, ncols = 3)
ax[0].imshow(im)
ax[1].imshow(magSpectrumOpenCV)
ax[2].imshow(phaseSpectrumOpenCV)
plt.suptitle("Using OpenCV cv2.dft and cv2.cartToPolar")
</code></pre>
<p><a href="https://i.sstatic.net/6QrWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6QrWk.png" alt="OpenCV results" /></a></p>
<p>As you can see, while the magnitude spectrum looks the same (it has some expected deviations due to floating-point arithmetic), the phase spectrum looks significantly different. I dug around a bit and found out that OpenCV usually returns phase from 0 to 2π, whereas <code>np.angle</code> returns the phase from -π to +π. Subtracting π from the OpenCV phase does not correct the difference, though.</p>
<p>What could be the reason for this? Is it possible to get almost identical phase using both approaches, just like with magnitude?</p>
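<p>A numpy-only check (runs without OpenCV) suggests the two conventions differ by a wrap, not a shift: mapping <code>np.angle</code>'s (-π, π] output into [0, 2π) with a modulo leaves the angles equal modulo 2π, which subtracting π can never achieve. A sketch:</p>

```python
import numpy as np

im = np.zeros((50, 50), dtype=np.float32)
im[2:10, 2:10] = 255

f = np.fft.fftshift(np.fft.fft2(im))
phase_signed = np.angle(f)                        # numpy convention: (-pi, pi]
phase_wrapped = np.mod(phase_signed, 2 * np.pi)   # OpenCV-style: [0, 2*pi)

# Equal modulo 2*pi <=> identical points on the unit circle:
print(np.allclose(np.exp(1j * phase_signed), np.exp(1j * phase_wrapped)))  # True
```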
|
<python><numpy><opencv><image-processing><fft>
|
2024-03-15 10:57:48
| 2
| 2,784
|
Tino D
|
78,166,363
| 7,672,005
|
Python data to dict memory concerns: how to efficiently load data into a dict?
|
<p>I'm trying to load a very large dataset (~560MB) into a dict in order to display it as a 3D graph.
I have run into memory issues that resulted in "Killed.", so I've added some logic to read my dataset in chunks and dump the dict to a JSON file periodically; I hoped this would keep my RAM from filling up.
However, it still gets killed after reaching about 4.00M/558.0M progress.</p>
<p>I want to understand how this roughly 560MB file is costing me gigabytes of RAM just to cut away unwanted columns and transform the rest into a dict. And are there any more efficient methods to get what I need: a data object from which I can efficiently extract sets of coords with their vals?</p>
<p>Please find my code and some example data below:</p>
<pre><code>import json
import logging
import os
import pandas as pd
from tqdm import tqdm
def create_grid_dict(file_path, chunk_size=500000):
"""
:param file_path: Path to a grid file.
:param chunk_size: Number of lines to process before dumping into json
:return: Dictionary object containing the gist grid data with as index the voxel number
and as values the x, y and z coordinates, and the value
"""
# Read the data from the file
with open(file_path, 'r') as file:
# Read the first line
header = file.readline().strip()
header2 = file.readline().strip()
# Log the header
logging.info(header)
columns = header2.split(' ')
# Get the file size
file_size = os.path.getsize(file_path)
output_file = 'datasets/cache.json'
# Check if the output file already exists
if os.path.exists(output_file):
with open(output_file, 'r') as f:
grid_dict = json.load(f)
return grid_dict
else:
# Create an empty dictionary to store the grid data
grid_dict = {}
logging.info(f"Reading file size {file_size} in chunks of {chunk_size} lines.")
# Read the file in chunks
with tqdm(total=file_size, unit='B', unit_scale=True, desc="Processing") as pbar:
for chunk in pd.read_csv(file_path, delim_whitespace=True, skiprows=2, names=columns, chunksize=chunk_size):
# Filter out the columns you need
chunk = chunk[['voxel', 'xcoord', 'ycoord', 'zcoord', 'val1', 'val2']]
# Iterate through each row in the chunk
for index, row in chunk.iterrows():
voxel = row['voxel']
# Store the values in the dictionary
grid_dict[voxel] = {
'xcoord': row['xcoord'],
'ycoord': row['ycoord'],
'zcoord': row['zcoord'],
'val': row['val1'] + 2 * row['val2']
}
pbar.update(chunk_size)
# Write the grid dictionary to the output file after processing each chunk
with open(output_file, 'w') as f:
json.dump(grid_dict, f)
return grid_dict
</code></pre>
<pre><code># Example space-delimited dataset
voxel xcoord ycoord zcoord val1 val2
1 0.1 0.2 0.3 10 5
2 0.2 0.3 0.4 8 4
3 0.3 0.4 0.5 12 6
4 0.4 0.5 0.6 15 7
5 0.5 0.6 0.7 9 3
6 0.6 0.7 0.8 11 5
7 0.7 0.8 0.9 13 6
8 0.8 0.9 1.0 14 7
9 0.9 1.0 1.1 16 8
10 1.0 1.1 1.2 18 9
</code></pre>
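<p>For reference, a sketch of the two levers I understand to matter most here: read only the needed columns with compact dtypes, and build the dict with vectorised column maths plus <code>to_dict</code> instead of <code>iterrows()</code> (each <code>iterrows</code> step materialises a fresh Series, which is a large per-row overhead). The sample data is inlined so the snippet runs standalone:</p>

```python
from io import StringIO

import pandas as pd

sample = """voxel xcoord ycoord zcoord val1 val2
1 0.1 0.2 0.3 10 5
2 0.2 0.3 0.4 8 4
3 0.3 0.4 0.5 12 6
"""

df = pd.read_csv(
    StringIO(sample),             # on disk: file_path, plus skiprows as needed
    sep=r"\s+",
    usecols=["voxel", "xcoord", "ycoord", "zcoord", "val1", "val2"],
    dtype={"voxel": "int32", "xcoord": "float32", "ycoord": "float32",
           "zcoord": "float32", "val1": "float32", "val2": "float32"},
)
df["val"] = df["val1"] + 2 * df["val2"]          # vectorised, no iterrows
grid_dict = (
    df.set_index("voxel")[["xcoord", "ycoord", "zcoord", "val"]]
      .to_dict(orient="index")
)
print(grid_dict[1])
```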
|
<python><dictionary><memory><memory-management>
|
2024-03-15 10:49:36
| 1
| 534
|
Zyzyx
|
78,166,342
| 3,970,455
|
Django: collectstatic is not collecting static files
|
<p>I found a GitHub project and copied it locally, but running collectstatic does not copy files to the staticfiles folder. Why?</p>
<p>If I do the full path search:</p>
<p><code>python manage.py findstatic D:\web_proyects\imprenta_gallito\static\css\home.css</code></p>
<p>I get error:</p>
<p><code>django.core.exceptions.SuspiciousFileOperation: The joined path (D:\web_proyects\imprenta_gallito\static\css\home.css) is located outside of the base path component (D:\virtual_envs\imprenta_gallito\Lib\site-packages\django\contrib\admin\static)</code></p>
<p><a href="https://i.sstatic.net/b4bFD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b4bFD.png" alt="enter image description here" /></a></p>
<p>settings.py:</p>
<pre><code>import os
import os
import secrets
from pathlib import Path
import dj_database_url
from decouple import config
# SITE_ROOT = root()
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get(
"SECRET_KEY",
default=secrets.token_urlsafe(nbytes=64),
)
#DEBUG = config('DEBUG', default=False, cast=bool)
# The `DYNO` env var is set on Heroku CI, but it's not a real Heroku app, so we have to
# also explicitly exclude CI:
# https://devcenter.heroku.com/articles/heroku-ci#immutable-environment-variables
IS_HEROKU_APP = "DYNO" in os.environ and not "CI" in os.environ
# SECURITY WARNING: don't run with debug turned on in production!
if not IS_HEROKU_APP:
DEBUG = True
# On Heroku, it's safe to use a wildcard for `ALLOWED_HOSTS``, since the Heroku router performs
# validation of the Host header in the incoming HTTP request. On other platforms you may need
# to list the expected hostnames explicitly to prevent HTTP Host header attacks. See:
# https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-ALLOWED_HOSTS
if IS_HEROKU_APP:
ALLOWED_HOSTS = ["*"]
else:
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
# Use WhiteNoise's runserver implementation instead of the Django default, for dev-prod parity.
"whitenoise.runserver_nostatic",
# Uncomment this and the entry in `urls.py` if you wish to use the Django admin feature:
# https://docs.djangoproject.com/en/4.2/ref/contrib/admin/
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
'shop',
'search_app',
'cart',
'order',
'marketing',
'django.contrib.humanize',
'crispy_forms',
'crispy_bootstrap4'
]
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
# Django doesn't support serving static assets in a production-ready way, so we use the
# excellent WhiteNoise package to do so instead. The WhiteNoise middleware must be listed
# after Django's `SecurityMiddleware` so that security redirects are still performed.
# See: https://whitenoise.readthedocs.io
"whitenoise.middleware.WhiteNoiseMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = 'imprenta_gallito.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'),
os.path.join(BASE_DIR, 'shop', 'templates/'),
os.path.join(BASE_DIR, 'search_app', 'templates/'),
os.path.join(BASE_DIR, 'cart', 'templates/'),
os.path.join(BASE_DIR, 'order', 'templates/'), ]
,
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'shop.context_processor.menu_links',
'shop.context_processor.has_shop',
# 'cart.context_processor.current_time',
'cart.context_processor.cart_items_counter'
],
},
},
]
WSGI_APPLICATION = 'imprenta_gallito.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
# Database
# https://docs.djangoproject.com/en/4.2/ref/settings/#databases
if IS_HEROKU_APP:
# In production on Heroku the database configuration is derived from the `DATABASE_URL`
# environment variable by the dj-database-url package. `DATABASE_URL` will be set
# automatically by Heroku when a database addon is attached to your Heroku app. See:
# https://devcenter.heroku.com/articles/provisioning-heroku-postgres
# https://github.com/jazzband/dj-database-url
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': config('HEROKU_POSTGRESQL_NAME'),
'USER': config('HEROKU_POSTGRESQL_USER'),
'PASSWORD': config('HEROKU_POSTGRESQL_PASSWORD'),
'HOST': config('HEROKU_POSTGRESQL_HOST'),
'PORT': config('HEROKU_POSTGRESQL_PORT'),
}
}
else:
# When running locally in development or in CI, a sqlite database file will be used instead
# to simplify initial setup. Longer term it's recommended to use Postgres locally too.
SECURE_SSL_REDIRECT = False
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": BASE_DIR / "db.sqlite3",
}
}
### HEROKU POSTGRESS ACCESS
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql',
# 'NAME': config('HEROKU_POSTGRESQL_NAME'),
# 'USER': config('HEROKU_POSTGRESQL_USER'),
# 'PASSWORD': config('HEROKU_POSTGRESQL_PASSWORD'),
# 'HOST': config('HEROKU_POSTGRESQL_HOST'),
# 'PORT': config('HEROKU_POSTGRESQL_PORT'),
# }
# }
####
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
LANGUAGE_CODE = 'es-PE'
TIME_ZONE = 'UTC'
USE_THOUSAND_SEPARATOR = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.2/howto/static-files/
STATIC_ROOT = BASE_DIR / "staticfiles"
STATIC_URL = "static/"
STORAGES = {
# Enable WhiteNoise's GZip and Brotli compression of static assets:
# https://whitenoise.readthedocs.io/en/latest/django.html#add-compression-and-caching-support
"default": {
"BACKEND": "django.core.files.storage.FileSystemStorage",
},
"staticfiles": {
"BACKEND": "whitenoise.storage.CompressedStaticFilesStorage",
},
}
# Don't store the original (un-hashed filename) version of static files, to reduce slug size:
# https://whitenoise.readthedocs.io/en/latest/django.html#WHITENOISE_KEEP_ONLY_HASHED_FILES
WHITENOISE_KEEP_ONLY_HASHED_FILES = True
# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
### Acaba Heroku Docs
# MEDIAFILES_LOCATION = 'media'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'static', 'media')
CRISPY_TEMPLATE_PACK = 'bootstrap4'
### AMAZON ###
AWS_S3_OBJECT_PARAMETERS = {
'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
'CacheControl': 'max-age=94608000',
}
# AWS_STORAGE_BUCKET_NAME = ''#os.environ['AWS_STORAGE_BUCKET_NAME']
# AWS_S3_REGION_NAME = 'os'#os.environ['AWS_S3_REGION_NAME']
# Tell django-storages the domain to use to refer to static files.
# AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
# AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
# AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']
### MAILGUN - EMAIL MESSAGE SETTINGS ###
EMAIL_HOST = os.environ['EMAIL_HOST']
EMAIL_PORT = os.environ['EMAIL_PORT']
EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=True, cast=bool)
EMAIL_HOST_USER = os.environ['EMAIL_HOST_USER']
EMAIL_HOST_PASSWORD = os.environ['EMAIL_HOST_PASSWORD']
### manage.py check --deploy
SECURE_BROWSER_XSS_FILTER = True
X_FRAME_OPTIONS = 'DENY'
### Promociones ###
PACKS3X2 = os.environ['PACKS3X2']
CRISPY_TEMPLATE_PACK = 'bootstrap4'
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'verbose': {
'format': ('%(asctime)s [%(process)d] [%(levelname)s] ' +
'pathname=%(pathname)s lineno=%(lineno)s ' +
'funcname=%(funcName)s %(message)s'),
'datefmt': '%Y-%m-%d %H:%M:%S'
},
'simple': {
'format': '%(levelname)s %(message)s'
}
},
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'logging.NullHandler',
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'verbose'
}
},
'loggers': {
'testlogger': {
'handlers': ['console'],
'level': 'INFO',
}
}
}
DEBUG_PROPAGATE_EXCEPTIONS = True
COMPRESS_ENABLED = os.environ.get('COMPRESS_ENABLED', False)
</code></pre>
|
<python><django>
|
2024-03-15 10:47:01
| 2
| 4,038
|
Omar Gonzales
|
78,166,304
| 4,599,620
|
AKS - Cronjob use script file inside PVC
|
<p>I would like to create a CronJob that picks up a script file stored in a PVC and runs it.<br />
Here I have created a PVC inside AKS as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.beta.kubernetes.io/mount-options: dir_mode=0777,file_mode=0777,uid=999,gid=999
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-file
name: cronjob-scripts
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: aks-azure-files
</code></pre>
<p>After that, I attached it to a pod, <code>general-utils</code>,<br />
and copied a Python script file into it using the command below:<br />
<code>kubectl -n <NAMESPACE> cp cronjob_script.py general-utils:/var/scripts</code></p>
<p>Once I confirmed the script file is in the PVC, I created a CronJob as follows:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: testing-cronjob
spec:
schedule: "00 0 * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
startingDeadlineSeconds: 300
jobTemplate:
spec:
template:
spec:
imagePullSecrets:
- name: <SECRET_NAME>
restartPolicy: OnFailure
volumes:
- name: cronjob-scripts
persistentVolumeClaim:
claimName: cronjob-scripts
containers:
- name: testing-cronjob
image: <PYTHON_INSTALLED_ALPINE_IMAGE>
command: ["py /var/scripts/cronjob_script.py"]
volumeMounts:
- name: cronjob-scripts
mountPath: "/var/scripts"
imagePullPolicy: Always
</code></pre>
<p>After creating it, I manually trigger the CronJob by running:<br />
<code>kubectl -n <NAMESPACE> create job --from=cronjob/testing-cronjob job-of-testing-cronjob</code></p>
<p>However, the pod created by the job shows the following error in its describe events:</p>
<pre><code>Warning Failed 13s (x2 over 13s) kubelet
Error: failed to create containerd task: failed to create shim task:
OCI runtime create failed: runc create failed:
unable to start container process: exec: "py /var/scripts/cronjob_script.py":
stat py /var/scripts/cronjob_script.py: no such file or directory: unknown
</code></pre>
<p>I'm not sure whether the PVC is only attached after the pod is created, which would make this a dead loop.</p>
<p>How can I get the CronJob to run the script file from the PVC?</p>
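<p>For what it's worth, the error message suggests the runtime is treating the entire string <code>py /var/scripts/cronjob_script.py</code> as a single executable name. A hedged sketch of how the <code>command</code> might be split instead (this assumes the image provides a <code>python</code> binary; <code>py</code> is typically the Windows launcher and is unlikely to exist in an Alpine image):</p>

```yaml
containers:
  - name: testing-cronjob
    image: <PYTHON_INSTALLED_ALPINE_IMAGE>
    # each argv element is a separate list item; the PVC is mounted
    # before the container starts, so the script is available at this path
    command: ["python", "/var/scripts/cronjob_script.py"]
    volumeMounts:
      - name: cronjob-scripts
        mountPath: "/var/scripts"
```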
|
<python><dockerfile><kubectl><kubernetes-pvc><kubernetes-cronjob>
|
2024-03-15 10:36:43
| 1
| 996
|
Wing Choy
|
78,166,245
| 202,335
|
RuntimeError: This event loop is already running, 'Cannot run the event loop while another loop is running')
|
<p>Below is a minimal reproducible example.</p>
<pre><code>import asyncio
import aiohttp
from database_utils import connect_to_database
from pytdx.hq import TdxHq_API
api = TdxHq_API()
api.connect('119.147.212.81', 7709)
async def execute_main(conn, c):
try:
c.execute("SELECT stockCode, shsz FROM stocks")
rows = c.fetchall()
column_names = [description[0] for description in c.description]
results = [dict(zip(column_names, row)) for row in rows]
async with aiohttp.ClientSession() as session:
tasks = []
for result in results:
stock_code = result['stockCode'].strip()
market = result['shsz']
tasks.append(fetch_quote(session, api, market, stock_code, conn, c))
await asyncio.gather(*tasks)
except Exception as e:
print(f"An error occurred: {e}")
api.disconnect()
print("Execution completed.")
# Establish a connection to the database
conn, c = connect_to_database()
# Run the main function
loop = asyncio.get_event_loop()
loop.run_until_complete(execute_main(conn, c))
</code></pre>
<p>Running this raises:</p>
<pre><code>    597 def _check_running(self):
598 if self.is_running():
--> 599 raise RuntimeError('This event loop is already running')
600 if events._get_running_loop() is not None:
601 raise RuntimeError(
602 'Cannot run the event loop while another loop is running')
</code></pre>
<p>When I use</p>
<pre><code>asyncio.run(execute_main(conn, c))
</code></pre>
<p>I get:</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File c:\Users\Steven\my-app\management\real.time.price2.py:52
49 conn, c = connect_to_database()
51 # Run the main function
---> 52 asyncio.run(execute_main(conn, c))
File c:\Python312\Lib\asyncio\runners.py:190, in run(main, debug, loop_factory)
161 """Execute the coroutine and return the result.
162
163 This function runs the passed coroutine, taking care of
(...)
186 asyncio.run(main())
187 """
188 if events._get_running_loop() is not None:
189 # fail fast with short traceback
--> 190 raise RuntimeError(
191 "asyncio.run() cannot be called from a running event loop")
193 with Runner(debug=debug, loop_factory=loop_factory) as runner:
194 return runner.run(main)
RuntimeError: asyncio.run() cannot be called from a running event loop
The current running event loop is: <_WindowsSelectorEventLoop running=True closed=False debug=False>
</code></pre>
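<p>As context for anyone reading: both errors typically mean the code is executing inside an environment that already owns an event loop (for example Jupyter/IPython, which the <code>_WindowsSelectorEventLoop running=True</code> line hints at). A minimal sketch of the distinction; the coroutine here is illustrative, not the asker's code:</p>

```python
import asyncio

async def main():
    # stand-in for execute_main(conn, c)
    return 42

# In a plain script, where no loop is running yet, asyncio.run works:
result = asyncio.run(main())
print(result)  # 42

# Inside Jupyter, where a loop is already running, you would instead
# write `result = await main()` directly in a cell (no asyncio.run,
# no loop.run_until_complete).
```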
|
<python><aiohttp><pytest-aiohttp>
|
2024-03-15 10:26:06
| 0
| 25,444
|
Steven
|
78,166,162
| 3,735,871
|
How to keep the date in Airflow execution date and convert time to 00:00
|
<p>I'm using <code>time_marker = {{execution_date.in_timezone('Europe/Amsterdam')}}</code> in my dag.py program. I'm trying to keep the date part of the execution date and set the time to <code>"T00:00:00"</code>,
so whenever it runs during the execution date, the time_marker will always be, for example, <code>20240115T00:00:00</code>.</p>
<p>How should I do this? I tried <code>pendulum.parse</code> but couldn't work out how to do it. Thanks.</p>
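<p>One approach is to keep the date part and zero out the time before formatting. A sketch with a plain <code>datetime</code> stand-in (Airflow's pendulum <code>DateTime</code> supports the same <code>replace</code>/<code>strftime</code> calls):</p>

```python
from datetime import datetime

# stand-in for the Airflow execution_date value
execution_date = datetime(2024, 1, 15, 13, 45, 12)

# drop the time-of-day, then format
time_marker = execution_date.replace(
    hour=0, minute=0, second=0, microsecond=0
).strftime("%Y%m%dT%H:%M:%S")

print(time_marker)  # 20240115T00:00:00
```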
|
<python><date><airflow>
|
2024-03-15 10:16:19
| 1
| 367
|
user3735871
|
78,165,987
| 13,491,504
|
SymPy vector substraction not doing what expected
|
<p>I have this code in which I try to do a Vector calculation, but the code does not do what would be mathematically expected:</p>
<pre><code>import sympy as sp
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
t = sp.symbols('t')
r = sp.symbols('r')
w = sp.symbols('w')
o = sp.symbols('o')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
z = sp.Function('z')(t)
p = x*N.i + y*N.j + z*N.k
q = r*(p.dot(N.i)*sp.cos(w*t)+p.dot(N.j)*sp.sin(w*t))+o*p.dot(N.k)
print(p-q)
</code></pre>
<p>The result is:</p>
<pre><code>(x(t))*N.i + (y(t))*N.j + (z(t))*N.k
</code></pre>
<p>But mathematically I would expect it to be the following:</p>
<pre><code>x(t)-r*x(t)*sp.cos(w*t) + y(t)-r*y(t)*sp.sin(w*t) + z(t)*o*z(t)
</code></pre>
<p>What am I doing wrong? Am I overlooking something?</p>
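<p>One thing worth checking (offered as a hypothesis): <code>q</code> as defined is a scalar expression, not a vector, because <code>dot()</code> returns scalars and no basis vectors are reattached, so <code>p-q</code> is not the component-wise subtraction the expectation assumes. A small sketch of the scalar/vector distinction:</p>

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector

N = CoordSys3D('N')
t, r, w = sp.symbols('t r w')
x = sp.Function('x')(t)

p = x * N.i
s = r * p.dot(N.i) * sp.cos(w * t)   # dot() yields a scalar expression
print(isinstance(s, Vector))          # False

q_vec = s * N.i                       # reattach a basis vector to get a vector
print(isinstance(q_vec, Vector))      # True
```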
|
<python><math><sympy>
|
2024-03-15 09:48:49
| 1
| 637
|
Mo711
|
78,165,961
| 5,985,798
|
Firestore Trigger Python Cloud Functions (gen2) - takes 1 positional argument but 2 were given
|
<p>I have deployed the <strong>firestore trigger function <em>on_document_created</em></strong> as per the example: <a href="https://firebase.google.com/docs/reference/functions/2nd-gen/python/firebase_functions.firestore_fn#functions" rel="nofollow noreferrer">https://firebase.google.com/docs/reference/functions/2nd-gen/python/firebase_functions.firestore_fn#functions</a></p>
<pre><code>@on_document_created(document="test/{testId}")
def example(event: Event[DocumentSnapshot]):
print("Hello World")
pass
</code></pre>
<p>To deploy it I used this command:</p>
<pre><code>gcloud functions deploy example --gen2 --trigger-event-filters=type=google.cloud.firestore.document.v1.created --trigger-event-filters=database='(default)' --trigger-event-filters-path-pattern=document='' --project test-project --runtime python311 --memory 512 --region europe-west3 --env-vars-file prod.yml
</code></pre>
<p>Whenever a document is created on firestore the function is correctly called but I get the following error in the logs:</p>
<pre><code>TypeError: example() takes 1 positional argument but 2 were given
</code></pre>
<p>I can't figure out what I'm doing wrong.
Can you please help me?</p>
<p>Thank you</p>
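<p>A hypothesis worth checking: handlers decorated with <code>@on_document_created</code> are written for the wrapping that <code>firebase deploy</code> sets up (a single <code>event</code> argument), whereas <code>gcloud functions deploy</code> invokes the entry point through a different runtime path that may pass additional positional arguments. The resulting mismatch is easy to reproduce in plain Python:</p>

```python
def example(event):          # handler written for one positional argument
    return "ok"

try:
    # a runtime passing two positional arguments instead of one
    example("cloudevent", "context")
except TypeError as exc:
    # mirrors "example() takes 1 positional argument but 2 were given"
    print(type(exc).__name__)  # TypeError
```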
|
<python><firebase><google-cloud-firestore><google-cloud-functions>
|
2024-03-15 09:44:13
| 1
| 835
|
Carlo
|
78,165,946
| 447,426
|
why Pydantic construcor only accepts model_dump - error Input should be a valid dictionary or instance of
|
<p>I have some Pydantic classes and try to construct objects using the constructor.
I get this error:</p>
<pre><code>> part: PartOut = PartOut(type="battery", position="l", correlation_timestamp=datetime.now().isoformat(), kpis=kpis)
E pydantic_core._pydantic_core.ValidationError: 1 validation error for PartOut
E kpis
E Input should be a valid dictionary or instance of KpiValues [type=model_type, input_value=KpiValues(values=[KPIValu...hase=None, stage=None)]), input_type=KpiValues]
</code></pre>
<p>First, notice that it says I should pass an <code>instance of KpiValues</code>, but as you can see, that is exactly what I am passing in.</p>
<p>Here is how I construct the object:</p>
<pre><code>kpi: KPIValue = KPIValue(name="somthin'", value=7, timestamp=datetime.now().isoformat())
kpis: KpiValues = KpiValues(values=[kpi])
# an part with one kpi
part: PartOut = PartOut(type="battery", position="l", correlation_timestamp=datetime.now().isoformat(), kpis=kpis)
</code></pre>
<p><strong>The code works fine if I pass <code>kpis=kpis.model_dump()</code> - my question is: why do I need to do this?</strong></p>
<p>Here is the relevant code of the Pydantic classes:</p>
<pre><code>class PartOut(BaseModel):
# omitting other fields ...
kpis: KpiValues = Field(..., alias='kpis')
# validators for other fields
@field_validator('type','kpis', mode='before')
def convert_none_to_empty_list(cls, v):
return v if v is not None else []
model_config = ConfigDict(use_enum_values = True, arbitrary_types_allowed = True, from_attributes = True)
class KpiValues(BaseModel):
"""
This class represents the values of all KPIs for one part and one point in time.
It is meant to be used as documents structure for the database (json column).
"""
values: list[KPIValue]
</code></pre>
<p>So is there a way to get this working without the need of using <code>model_dump()</code>?</p>
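<p>One common cause of this error pattern, offered here as a hypothesis since it depends on the project's import layout: <code>KpiValues</code> ends up imported from two different module paths, so Pydantic's isinstance check sees two distinct classes that happen to share a name. <code>model_dump()</code> sidesteps this because a plain dict validates against either class. The identity issue itself can be shown with plain Python:</p>

```python
# Two classes with the same __name__ are still different types.
class KpiValues:                             # e.g. `from app.models import KpiValues`
    pass

OtherKpiValues = type("KpiValues", (), {})   # e.g. `from models import KpiValues`

obj = KpiValues()
# False -> exactly the "should be ... instance of KpiValues" situation
print(isinstance(obj, OtherKpiValues))
```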
|
<python><pydantic>
|
2024-03-15 09:41:17
| 0
| 13,125
|
dermoritz
|
78,165,875
| 2,542,516
|
How to broadcast a tensor from main process using Accelerate?
|
<p>I want to do some computation in the main process and broadcast the tensor to other processes. Here is a sketch of what my code looks like currently:</p>
<pre class="lang-py prettyprint-override"><code>from accelerate.utils import broadcast
x = None
if accelerator.is_local_main_process:
x = <do_some_computation>
x = broadcast(x) # I have even tried moving this line out of the if block
print(x.shape)
</code></pre>
<p>This gives me the following error:
<code>TypeError: Unsupported types (<class 'NoneType'>) passed to `_gpu_broadcast_one`. Only nested list/tuple/dicts of objects that are valid for `is_torch_tensor` should be passed.</code></p>
<p>Which means that <code>x</code> is still <code>None</code> and is not really being broadcasted. How do I fix this?</p>
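<p>A likely issue, stated as an assumption rather than a verified fix: <code>broadcast</code> is a collective call, so every process must pass a real tensor before the call; leaving <code>x = None</code> on non-main ranks is exactly what triggers the <code>NoneType</code> error. One pattern is to allocate a placeholder tensor of the right shape on all processes, fill it only on the main one, then broadcast. The contract can be simulated in plain Python:</p>

```python
def broadcast_sim(per_rank_values, src=0):
    # every rank must contribute a real placeholder, never None
    assert all(v is not None for v in per_rank_values)
    # only the source rank's value survives the collective
    return [per_rank_values[src]] * len(per_rank_values)

ranks = [0.0, 0.0, 0.0]    # placeholder allocated on every "process"
ranks[0] = 42.0            # only the main process computes the real value
result = broadcast_sim(ranks)
print(result)  # [42.0, 42.0, 42.0]
```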
|
<python><pytorch><huggingface><accelerate>
|
2024-03-15 09:30:29
| 2
| 2,937
|
Priyatham
|
78,165,778
| 1,662,268
|
How to define an `Index()` in SqlAlchemy+Alembic, on a column from a base table
|
<p>I am a python novice.</p>
<p>My project is using SqlAlchemy, Alembic and MyPy.</p>
<p>I have a pair of parent-child classes defined like this (a bunch of detail elided):</p>
<pre><code>class RawEmergency(InputBase, RawTables):
__tablename__ = "emergency"
id: Mapped[UNIQUEIDENTIFIER] = mapped_column(
UNIQUEIDENTIFIER(), primary_key=True, autoincrement=False
)
attendance_id: Mapped[str | None] = guid_column()
admitted_spell_id: Mapped[str | None] = guid_column()
__table_args__ = (
PrimaryKeyConstraint("id", mssql_clustered=False),
Index(
"index_emergency_pii_patient_id_and_datetimes",
pii_patient_id,
attendance_start_date.desc(),
attendance_start_time.desc(),
),
)
class InputBase(DeclarativeBase):
metadata = MetaData(schema="raw")
refresh_date: Mapped[str] = date_str_column()
refresh_time: Mapped[str] = time_str_column()
class RawTables(object):
id: Mapped[UNIQUEIDENTIFIER] = mapped_column(
UNIQUEIDENTIFIER(), primary_key=True, autoincrement=False
)
__table_args__: typing.Any = (
PrimaryKeyConstraint(name="id", mssql_clustered=False),
)
</code></pre>
<p>I want to add a 2nd index to the Emergency table, indexing the refresh columns provided by the base table.</p>
<p>I expect to do so by adding an additional <code>Index()</code> call into the <code>__table_args__</code> setup.</p>
<p>Then I want to run my standard migration creation/checking tool:
<code>poetry run alembic --config operator_app/alembic.ini revision --autogenerate -m "refresh_col_indexes"</code></p>
<p>How do I reference the refresh columns in this declaration?</p>
<hr>
<p>Current <s>attem</s>guesses that have failed:</p>
<pre><code> Index(
"index_emergency_refresh_date_time",
refresh_date.desc(),
refresh_time.desc(),
),
</code></pre>
<p>mypy and the IDE both say they don't know what <code>refresh_date</code> is.
<code>error: Name "refresh_date" is not defined [name-defined]</code></p>
<pre><code> Index(
"index_emergency_refresh_date_time",
InputBase.refresh_date.desc(),
InputBase.refresh_time.desc(),
),
</code></pre>
<p>This compiles now, but the alembic command doesn't work:
<code>sqlalchemy.exc.ArgumentError: Can't add unnamed column to column collection</code> (full error below)</p>
<pre><code> Index(
"index_emergency_refresh_date_time",
super().refresh_date.desc(),
super().refresh_time.desc(),
),
</code></pre>
<p>Mypy/IDE say no:
<code>error: "super()" outside of a method is not supported</code></p>
<pre><code> Index(
"index_emergency_refresh_date_time",
super(InputBase, self).refresh_date.desc(),
super(InputBase, self).refresh_time.desc(),
),
</code></pre>
<p><code>self is not defined</code></p>
<pre><code> Index(
"index_emergency_refresh_date_time",
super(InputBase, None).refresh_date.desc(),
super(InputBase, None).refresh_time.desc(),
),
</code></pre>
<p>mypy says <code>Unsupported argument 2 for "super"</code>
and alembic says <code>AttributeError: 'super' object has no attribute 'refresh_date'</code></p>
|
<python><inheritance><indexing><sqlalchemy><alembic>
|
2024-03-15 09:12:39
| 2
| 8,742
|
Brondahl
|
78,165,650
| 10,396,491
|
Rotations using quaternions and scipy.spatial.transform.Rotation for 6 degree of freedom simulation
|
<p>I am writing a simple simulator for a remotely operated underwater vehicle (ROV). I want to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.html" rel="nofollow noreferrer">scipy</a> implementation of quaternions instead of my previous approach based on constructing the rotation matrices myself using trigonometric functions like explained at <a href="https://en.wikipedia.org/wiki/Rotation_matrix#General_3D_rotations" rel="nofollow noreferrer">Wiki rotations</a>.</p>
<p>My initial problem (solved): Clearly, I am getting confused somewhere - the code below appears to work at first, but after a few transformations rotations no longer occur around the body axes. I don't yet fully understand what's going on but it appears like some kind of accumulation of errors. Any help in getting this sorted out would be much appreciated.</p>
<p>My current problem: the yaw rotation happens around the global z axis but the other two around the vehicle x and y axes. Any idea how to make this consistent?</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.widgets import Slider
import numpy as np
from scipy.spatial.transform import Rotation
class RovTemp(object):
def __init__(self):
# Main part - rotation matrix around the the global coordinate system axes.
self.vehicleAxes = np.eye(3)
# Current roll, pitch, yaw
self.rotation_angles = np.zeros(3)
# Unit vectors along the vehicle x, y, z axes unpacked from the aggregate
# array for ease of use.
self.iHat, self.jHat, self.kHat = self.getCoordSystem()
def getCoordSystem(self):
# iHat, jHat, kHat
return self.vehicleAxes.T
def computeRollPitchYaw(self):
# Compute the global roll, pitch, and yaw angles
roll = np.arctan2(self.kHat[1], self.kHat[2])
pitch = np.arctan2(-self.kHat[0], np.sqrt(self.kHat[1]**2 + self.kHat[2]**2))
yaw = np.arctan2(self.jHat[0], self.iHat[0])
return np.array([roll, pitch, yaw])
def updateMovingCoordSystem(self, rotation_angles):
# Compute the change in the rotation angles compared to the previous time step.
# dRotAngles = rotation_angles - self.rotation_angles
# Store the current orientation.
self.rotation_angles = rotation_angles
# Create quaternion from rotation angles from (roll pitch yaw)
self.vehicleAxes = Rotation.from_euler('xyz', rotation_angles, degrees=False).as_matrix()
# Extract the new coordinate system vectors
self.iHat, self.jHat, self.kHat = self.getCoordSystem()
rov = RovTemp()
# Plot orientation.
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_aspect("equal")
plt.subplots_adjust(top=0.95, bottom=0.15)
lim = 0.5
ax.set_xlim((-lim, lim))
ax.set_ylim((-lim, lim))
ax.set_zlim((-lim, lim))
def plotCoordSystem(ax, iHat, jHat, kHat, x0=np.zeros(3), ds=0.45, ls="-"):
x1 = x0 + iHat*ds
x2 = x0 + jHat*ds
x3 = x0 + kHat*ds
lns = ax.plot([x0[0], x1[0]], [x0[1], x1[1]], [x0[2], x1[2]], "r", ls=ls, lw=2)
lns += ax.plot([x0[0], x2[0]], [x0[1], x2[1]], [x0[2], x2[2]], "g", ls=ls, lw=2)
lns += ax.plot([x0[0], x3[0]], [x0[1], x3[1]], [x0[2], x3[2]], "b", ls=ls, lw=2)
return lns
# Plot twice - one plot will be updated, the other one will stay as reference.
plotCoordSystem(ax, rov.iHat, rov.jHat, rov.kHat, ls="--")
lns = plotCoordSystem(ax, rov.iHat, rov.jHat, rov.kHat)
sldr_ax1 = fig.add_axes([0.15, 0.01, 0.7, 0.025])
sldr_ax2 = fig.add_axes([0.15, 0.05, 0.7, 0.025])
sldr_ax3 = fig.add_axes([0.15, 0.09, 0.7, 0.025])
sldrLim = 180
sldr1 = Slider(sldr_ax1, 'phi', -sldrLim, sldrLim, valinit=0, valfmt="%.1f deg")
sldr2 = Slider(sldr_ax2, 'theta', -sldrLim, sldrLim, valinit=0, valfmt="%.1f deg")
sldr3 = Slider(sldr_ax3, 'psi', -sldrLim, sldrLim, valinit=0, valfmt="%.1f deg")
def onChanged(val):
global rov, lns, ax
angles = np.array([sldr1.val, sldr2.val, sldr3.val])/180.*np.pi
rov.updateMovingCoordSystem(angles)
for l in lns:
l.remove()
lns = plotCoordSystem(ax, rov.iHat, rov.jHat, rov.kHat)
ax.set_title(
"roll, pitch, yaw = " +", ".join(['{:.1f} deg'.format(v) for v in rov.computeRollPitchYaw()/np.pi*180.]))
return lns
sldr1.on_changed(onChanged)
sldr2.on_changed(onChanged)
sldr3.on_changed(onChanged)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/bfR93.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bfR93.png" alt="Specified rotation angles (sliders) are not the same as what I calculate later (figure title)." /></a></p>
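<p>For context while debugging this: SciPy distinguishes extrinsic sequences (lowercase, rotations about the fixed global axes) from intrinsic ones (uppercase, rotations about the moving body axes), so <code>from_euler('xyz', ...)</code> rotates about global axes, which may explain the mixed behavior. A quick check of the known identity that extrinsic <code>xyz</code> equals intrinsic <code>ZYX</code> with the angle order reversed:</p>

```python
import numpy as np
from scipy.spatial.transform import Rotation

angles = [30.0, 20.0, 10.0]  # roll, pitch, yaw in degrees
# extrinsic: rotations applied about the fixed global x, y, z axes
extrinsic = Rotation.from_euler('xyz', angles, degrees=True).as_matrix()
# intrinsic: same matrix via body-axis rotations in reversed order
intrinsic = Rotation.from_euler('ZYX', angles[::-1], degrees=True).as_matrix()

print(np.allclose(extrinsic, intrinsic))  # True
```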
|
<python><scipy><rotation><physics><rotational-matrices>
|
2024-03-15 08:47:43
| 1
| 457
|
Artur
|
78,165,559
| 1,870,832
|
How to write a polars dataframe to DuckDB
|
<p>I am trying to write a Polars DataFrame to a duckdb database. I have the following simple code which I expected to work:</p>
<pre><code>import polars as pl
import duckdb
pldf = pl.DataFrame({'mynum': [1,2,3,4]})
with duckdb.connect(database="scratch.db", read_only=False) as con:
pldf.write_database(table_name='test_table', connection=con)
</code></pre>
<p>However, I get the following error:</p>
<pre><code>sqlalchemy.exc.ArgumentError: Expected string or URL object, got <duckdb.duckdb.DuckDBPyConnection object
</code></pre>
<p>I get a similar error if I use the non-default <code>engine='adbc'</code> instead of <code>df.write_database()</code>'s default <code>engine='sqlalchemy'</code>.</p>
<p>So it seemed it should be easy enough to just swap in a URI for my ducdkb database, but I haven't been able to get that to work either. Potentially it's complicated by my being on Windows?</p>
|
<python><uri><python-polars><duckdb>
|
2024-03-15 08:31:15
| 2
| 9,136
|
Max Power
|
78,165,556
| 3,030,926
|
PyUno and calc/spreadsheet: how to export single sheet to csv?
|
<p>I need to automate some spreadsheet manipulation and exporting via PyUNO, but I'm struggling to export a single sheet to CSV.</p>
<p>I made a simple shell script that first launch LibreOffice to open a given .xlsx file and then run a python3 script to execute all the needed logic.</p>
<p>Now I just need to export the current sheet (<code>ActiveSheet</code>) to .csv, but the PyUNO and OO UNO documentation is really terrible IMHO and I cannot find anything related to this.</p>
<p>Can anyone point me in the right direction?</p>
<p>Below is a simplified version of my script</p>
<pre class="lang-py prettyprint-override"><code>import socket
import uno
def init():
# get the uno component context from the PyUNO runtime
localContext = uno.getComponentContext()
# create the UnoUrlResolver
resolver = localContext.ServiceManager.createInstanceWithContext(
"com.sun.star.bridge.UnoUrlResolver", localContext )
# connect to the running office
ctx = resolver.resolve( "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext" )
smgr = ctx.ServiceManager
# get the central desktop object
desktop = smgr.createInstanceWithContext( "com.sun.star.frame.Desktop",ctx)
# access the current writer document
model = desktop.getCurrentComponent()
return model
model = init()
active_sheet = model.CurrentController.ActiveSheet
# business logic
# now I need to export active_sheet to .csv
</code></pre>
<h3>Bonus question</h3>
<p>How can I get the opened file name with PyUNO?</p>
|
<python><csv><spreadsheet><pyuno>
|
2024-03-15 08:30:10
| 1
| 3,034
|
fudo
|
78,165,544
| 14,345,081
|
Linear regression model of scikit-learn not working as expected
|
<p>I'm trying to understand the internal workings of the linear regression model in scikit-learn.</p>
<p>This is my dataset</p>
<p><a href="https://i.sstatic.net/ngx1v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ngx1v.png" alt="before 1 hot encoding" /></a></p>
<p>And this is my dataset after performing one-hot-encoding.</p>
<p><a href="https://i.sstatic.net/JKnEw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JKnEw.png" alt="after 1 hot encoding" /></a></p>
<p>And this are values of the coefficients and intercept after performing linear-regression.
<a href="https://i.sstatic.net/ELLBU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ELLBU.png" alt="coefficients" /></a></p>
<p><strong>Sell Price</strong> is the dependent column and the rest of the columns are features.<br/>
And these are the predicted values which works fine in this case.<br/>
<a href="https://i.sstatic.net/cMKjL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cMKjL.png" alt="predicted values" /></a></p>
<p>I noticed that the number of coefficients is 1 greater than the number of features. So this is how I generated the feature matrix:</p>
<pre><code>feature_matrix = dataFrame.drop(['Sell Price($)'], axis = 'columns').to_numpy()
# Array to be added as column
bias_column = np.array([[1] for i in range(len(feature_matrix))])
# Adding column to array using append() method
feature_matrix = np.concatenate([bias_column, feature_matrix], axis = 1) # axis = 1 means column, 0 means row
</code></pre>
<p><strong>Result</strong><br/>
<a href="https://i.sstatic.net/86n9N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/86n9N.png" alt="enter image description here" /></a></p>
<p>What I want to know is how does Scikit-learn use these coefficients and intercept to predict the values.<br/>
This is what I tried.<br/>
<a href="https://i.sstatic.net/OhU4x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OhU4x.png" alt="enter image description here" /></a><br/>I also noticed that the value I get by doing this calculation is actually equal to the mileage in every case. But that's not the dependent feature here. So what's going on?</p>
|
<python><pandas><numpy><machine-learning><scikit-learn>
|
2024-03-15 08:26:45
| 2
| 304
|
Saptarshi Dey
|
78,165,495
| 11,945,463
|
use SeamlessM4Tv2Model, I want to slow down the rate of speech of audio output
|
<pre><code>text_inputs = processor(text="I have a daughter 2 years old, I wanted her name to be HΖ°Ζ‘ng Ly", src_lang="eng", return_tensors="pt").to(device)
audio_array = model.generate(**text_inputs, tgt_lang=language)[0].cpu().numpy().squeeze()
file_path = 'audio_from_text.wav'
sf.write(file_path, audio_array, 16000)
</code></pre>
<p><a href="https://huggingface.co/docs/transformers/main/en/model_doc/seamless_m4t_v2" rel="nofollow noreferrer">doc</a>
[<a href="https://huggingface.co/spaces/Imadsarvm/Sarvm_Audio_Translation/resolve/b5b7fa364f8c567c4eb330583f673cd4c600976b/app.py?download=true" rel="nofollow noreferrer">ex</a>]</p>
<p>It returns a 3-second audio clip.</p>
<p>I tried adding <code>speech_temperature=0.2</code> or <code>speech_do_sample=True</code> to <code>generate()</code>, but there is no change; it still returns a 3-second clip. I want to slow the rate of speech so that, for example, the output becomes a 5-second clip.
Any ideas?</p>
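<p>As a post-processing workaround (an assumption on my part, since the model itself may not expose a speaking-rate control): writing the same samples with a lower sample-rate header stretches playback time, though it also lowers the pitch; pitch-preserving time stretching would need a dedicated tool such as <code>librosa.effects.time_stretch</code>. The arithmetic for stretching 3 s to 5 s:</p>

```python
import numpy as np

sr = 16000
audio_array = np.zeros(sr * 3)   # stand-in for a 3-second clip at 16 kHz

target_seconds = 5
slow_sr = int(len(audio_array) / target_seconds)  # 9600 Hz header

print(len(audio_array) / slow_sr)  # 5.0
# sf.write('audio_from_text.wav', audio_array, slow_sr) would then
# play for 5 seconds (at a lower pitch).
```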
|
<python><pytorch><text-to-speech><huggingface>
|
2024-03-15 08:17:22
| 0
| 649
|
lam vu Nguyen
|
78,165,465
| 2,372,467
|
Niftyindices.com not able to scrape Index data
|
<p>I was able to scrape historical index data from NSE India's niftyindices.com website earlier.
The website is <a href="http://www.niftyindices.com" rel="nofollow noreferrer">www.niftyindices.com</a>, and the historical data can be found under the Reports section of the home page, under Historical Data. On that page you need to select Index Type "Equity" and Index "NIFTY 100".
For the date range, I generally select from 01-Jan-2001 till today.
Until yesterday I was able to fetch data from this site, but all of a sudden, from this morning, no data appears using the same code.
The code I am using for scraping is below.
<code>bm_sv</code> is an essential cookie that needs to be passed to the site - this is from my previous experience.
The JSON received no longer contains any data.</p>
<p>Thanks for any help in advance</p>
<pre><code>import requests
import json
from datetime import datetime
print('start')
data=[]
headers = {'Content-Type': 'application/json; charset=utf-8'
,'Accept':'application/json, text/javascript, */*; q=0.01'
,'Accept-Encoding':'gzip, deflate'
,'Accept-Language':'en-US,en;q=0.9'
,'Content-Length':'100'
,'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36'
,'Cookie':'bm_sv=9B706239B47F50CA0B651E20BA5CBF74~YAAQFjkgF2zWSiyOAQAAdV0dQRfVSICkZc20SfI+PnDk8taK1Ppu1ZSmjclFkHqVgsGOE0vK3WnPMHuhY5kOStjVm4OnN1wm9SBRO3nIAvXWAVCR8iN23B8R7kHpcme82M8ytCrJ/LozntCxQlQSFqzuFwLw4+ZPBjdkICfQH4piCmjvZB3AH8NvCmf+nbzT34Q4JO4zYeYadkjlKjVRVIh0lzX2BK8crljTE9W+F1DUdtZYBRBUCM83OIfmZhnH6PnDu79C~1'
}
row=['indexname','date','open','high','low','close']
data.append(row)
payload={'name': 'NIFTY 100','startDate':'01-Jan-2001','endDate': '10-Mar-2024'}
JSonURL='https://www.niftyindices.com/Backpage.aspx/getHistoricaldatatabletoString'
r=requests.post(JSonURL, data=json.dumps(payload),headers=headers)
print(r.text)
text=r.json()
print(text)
datab=json.loads(text['d'])
sorted_data=sorted(datab,key=lambda x: datetime.strptime(x['HistoricalDate'], '%d %b %Y'), reverse=False)
print('startdata available from: ',datetime.strftime(datetime.strptime(sorted_data[0]['HistoricalDate'], '%d %b %Y'),'%d-%b-%Y'))
print('data available till',datetime.strftime(datetime.strptime(sorted_data[len(datab)-1]['HistoricalDate'], '%d %b %Y'),'%d-%b-%Y\n'))
for rec in sorted_data:
row=[]
row.append(rec['Index Name'])
row.append(datetime.strptime(rec['HistoricalDate'], '%d %b %Y'))
row.append(rec['OPEN'].replace('-','0'))
row.append(rec['HIGH'].replace('-','0'))
row.append(rec['LOW'].replace('-','0'))
row.append(rec['CLOSE'])
print(row)
data.append(row)
print(data)
</code></pre>
|
<python><post><python-requests>
|
2024-03-15 08:10:13
| 1
| 301
|
Kiran Jain
|
78,165,358
| 8,968,910
|
Python: faster way to do Geocoding API
|
<p>It takes almost 1 minute to convert 60 addresses to coordinates with my code. I want the result to contain address, latitude and longitude, in that order. Is there a faster way? Thanks</p>
<p>code:</p>
<pre><code>import time
import requests
import json
import pandas as pd
import numpy as np
import string
addressList=['xxx','xx1'] #330,000 addresses in total
def get_latitude_longtitude(address, GOOGLE_PLACES_API_KEY):
url = 'https://maps.googleapis.com/maps/api/geocode/json?address=' + address + '&key=' + GOOGLE_PLACES_API_KEY
while True:
res = requests.get(url)
js = json.loads(res.text)
if js['status'] != 'OVER_QUERY_LIMIT':
time.sleep(1)
break
result = js['results'][0]['geometry']['location']
lat = result['lat']
lng = result['lng']
return address, lat, lng
lst=[]
for address in addressList:
GOOGLE_PLACES_API_KEY = 'kkkkkkk'
res = get_latitude_longtitude(address,GOOGLE_PLACES_API_KEY)
#print(res)
address=res[0]
lat=res[1]
lng=res[2]
lst.append(address)
lst.append(lat)
lst.append(lng)
</code></pre>
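<p>Per-request network latency usually dominates here, so issuing the requests concurrently tends to help far more than micro-optimizing the loop. A sketch with a thread pool; the <code>geocode</code> stub stands in for the real HTTP call, and the worker count is an assumption to be tuned against the API quota:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def geocode(address):
    # stand-in for the real requests.get(...) + JSON parsing
    return address, 1.23, 4.56

addresses = [f"address {i}" for i in range(60)]

# up to 10 requests in flight at once; ex.map preserves input order
with ThreadPoolExecutor(max_workers=10) as ex:
    results = list(ex.map(geocode, addresses))

print(len(results))  # 60
```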
|
<python><google-geocoding-api>
|
2024-03-15 07:45:07
| 1
| 699
|
Lara19
|
78,165,006
| 4,470,126
|
How to display dash_table for a selected radio item
|
<p>I am new to Python Dash programming, and I have been following examples from the Plotly community. I am able to display the list of S3 buckets in my AWS account, and for a selected bucket I show a list of all CSV files.</p>
<p>The next step is: for a selected CSV file, I need to display the CSV contents as a Dash table on the same page, just below the radio buttons, with a scroll bar and pagination. I am stuck here; I'd appreciate any help.</p>
<p>This is what i have tried thus far:</p>
<pre><code>from dash import Dash, dcc, html, Input, Output, callback, dash_table
import boto3
import pandas as pd
# Retrieve the list of existing buckets
s3 = boto3.client('s3')
response = s3.list_buckets()
all_options = {}
# Output the bucket names
for bucket in response['Buckets']:
# print(f' {bucket["Name"]}')
if bucket["Name"].startswith("ag-"):
if len(all_options) < 5:
# Get a list of all objects in the bucket
objects = s3.list_objects_v2(Bucket=bucket['Name'])
# Create a list to store the files in the bucket
files = []
# Iterate over the objects
for obj in objects['Contents']:
if obj['Key'].endswith('.csv'):
if len(files) < 5:
# Add the file name to the list
files.append(obj['Key'])
# Add the bucket and files to the dictionary
all_options[bucket['Name']] = files
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div([
dcc.RadioItems(
list(all_options.keys()),
0,
id='buckets-radio',
),
html.Hr(),
dcc.RadioItems(id='files-radio'),
html.Hr(),
html.Div(id='display-selected-values')
])
@callback(
Output('files-radio', 'options'),
Input('buckets-radio', 'value'))
def set_cities_options(selected_bucket):
return [{'label': i, 'value': i} for i in all_options[selected_bucket]]
@callback(
Output('files-radio', 'value'),
Input('files-radio', 'options'))
def set_cities_value(available_options):
return available_options[0]['value']
@callback(
Output('display-selected-values', 'children'),
Input('buckets-radio', 'value'),
Input('files-radio', 'value'))
def set_display_children(selected_bucket, selected_file):
# obj = s3.get_object(Bucket=selected_country, Key=selected_city)
# df = pd.read_csv(obj['Body'])
#
# app.layout = html.Div([
# html.H4('Simple interactive table'),
# html.P(id='table_out'),
# dash_table.DataTable(
# id='table',
# columns=[{"name": i, "id": i}
# for i in df.columns],
# data=df.to_dict('records'),
# style_cell=dict(textAlign='left'),
# style_header=dict(backgroundColor="paleturquoise"),
# style_data=dict(backgroundColor="lavender")
# ),
# ])
#
# def update_graphs(active_cell):
# if active_cell:
# cell_data = df.iloc[active_cell['row']][active_cell['column_id']]
# return f"Data: \"{cell_data}\" from table cell: {active_cell}"
# return "Click the table"
return f'{selected_file} is a file in {selected_bucket}'
# if __name__ == '__main__':
# app.run(debug=True)
app.run_server(debug=True)
</code></pre>
|
<python><python-3.x><plotly-dash><plotly>
|
2024-03-15 06:25:25
| 1
| 3,213
|
Yuva
|
78,164,994
| 17,015,816
|
Navigation across the site using scrapy
|
<p>I am working on a personal project to learn web scraping, and I have chosen this website to scrape. I am using Scrapy, as it seemed like a good library. For this website, I need to loop through each page from A to Z, open each sublink, and get the title and description. I was able to do the sublink part, but I am not able to navigate across the A-Z pages, as there is no "next" button or a CSS class for one. Here is the link: <a href="https://medlineplus.gov/ency/encyclopedia_A.htm" rel="nofollow noreferrer">https://medlineplus.gov/ency/encyclopedia_A.htm</a>.</p>
<p>Here, encyclopedia_A.htm becomes encyclopedia_B.htm for the B page. How do I navigate across them, or how should I use these URLs if I add them to start_urls?
<a href="https://i.sstatic.net/Ec3BJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ec3BJ.png" alt="enter image description here" /></a></p>
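<p>Since the pages differ only in the letter, one option (a sketch; only the URL generation is shown, and the spider class itself is assumed to already exist) is to generate all 26 start URLs up front instead of looking for a "next" link:</p>

```python
import string

# One start URL per letter A-Z; Scrapy schedules each of them
# independently, so no "next" button is needed.
BASE = "https://medlineplus.gov/ency/encyclopedia_{}.htm"
start_urls = [BASE.format(letter) for letter in string.ascii_uppercase]
```

<p>This list can be assigned to the spider's <code>start_urls</code> attribute (or the URLs can be yielded as requests from <code>start_requests</code>), and the existing sublink-parsing callback then handles every page.</p>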
|
<python><scrapy><pagination>
|
2024-03-15 06:21:54
| 0
| 479
|
Sairam S
|
78,164,916
| 2,604,247
|
How to Implement Dependency Inversion and Interface Segregation for a Concrete Class that Needs to Be Initiated?
|
<h4>Context</h4>
<p>As far as I understand, the Dependency Inversion and Interface Segregation principles of SOLID OOP tell us to write our program against the <em>interface</em>, not the internal details. So, I am trying to develop a simple stock-market data collector in Python, roughly with the following object diagram, where <code>main</code> encapsulates the application business logic, user input handling, etc. Here is how to make sense of it:</p>
<ul>
<li>Pink represents concrete function or class, green represents an abstract class</li>
<li>Hollow arrow head represents a subclass of/implements relationship, and solid arrow head represents a <em>using</em> relationship (following the convention of <em>Clean Architecture</em> by Robert Martin)</li>
</ul>
<p><a href="https://i.sstatic.net/WbPl8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WbPl8.png" alt="Object diagram of stock price reader" /></a></p>
<p>So the main function uses the abstract interface which fetches a stock price against a symbol. The abstract class looks like</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""
Defines the abstract stock price reader
"""
from abc import ABC, abstractmethod
class StockPriceReader(ABC):
"""Defines the general tick reader interface."""
@abstractmethod
def get_price(self, symbol:str)->float:
"""
Gets the price of a stock represented by the symbol, e.g
when symbol='AAPL', it gets the Apple Inc stock price.
"""
raise NotImplementedError
</code></pre>
<p>The <code>TickReaderConcrete</code> class implements the internal details, and gets the actual stock price by something like a Bloomberg or trading exchange API call. The credentials necessary to make the API call have to be part of the internal details. Not showing the code here, as it is pretty simple to implement.</p>
<h4>Dilemma</h4>
<p>Now, based on the above simple class dependency diagram, the same book (<em>Clean Architecture</em>) seems to imply that (here I emphasise)</p>
<blockquote>
<p>The main block should <em>not even be aware</em> that the TickReaderConcrete exists.</p>
</blockquote>
<p>At least, that is my understanding of what the book is saying, as there is no arrow head from <code>main</code> to the <code>TickReaderConcrete</code>, correct me if I am wrong.</p>
<p>But when I write the <code>main.py</code> I cannot pretend that <code>TickReaderConcrete</code> does not exist, in other words, seems <code>main</code> cannot help but know about the existence of <code>TickReaderConcrete</code> when the code looks like</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""
The main function to invoke the stockprice reader
"""
from tickreader import TickReaderConcrete
...
if __name__ == '__main__':
# This line gives rise to the alternative class diagram below
reader=TickReaderConcrete(...)
# After initialised, we can use the interface permitted by the abstract base class
reader.get_price(symbol='IBM')
</code></pre>
<h5>Question</h5>
<p>So how can I make sure that <code>main</code> is unaware of the concrete reader's existence? If <code>main</code> does not import the concrete reader at all, it cannot even instantiate the concrete reader object, and the abstract reader cannot be instantiated anyway.
So how do I organise the code to properly implement the object diagram above?</p>
<h6>Slightly Paraphrased Question</h6>
<p>Even if the abstract base class exposes the necessary public methods, at the very least, the <em>initialisation</em> requires knowledge of the existence of concrete subclass. Can the concrete subclass hide behind the abstract base class? Look at the alternative object diagram, which is what is implemented by the above code snippet. How to get rid of the broken line?</p>
<p><a href="https://i.sstatic.net/9hLlP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9hLlP.png" alt="enter image description here" /></a></p>
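<p>One common arrangement (a sketch of the "composition root" idea, not necessarily the book's own solution; the <code>make_reader</code> factory name and the placeholder return value are my inventions) confines the import of the concrete class to a single factory, so the business logic only ever sees the abstract type:</p>

```python
from abc import ABC, abstractmethod

class StockPriceReader(ABC):
    """Abstract interface the business logic depends on."""
    @abstractmethod
    def get_price(self, symbol: str) -> float: ...

class TickReaderConcrete(StockPriceReader):
    """Concrete detail; a stand-in for the real API-backed reader."""
    def get_price(self, symbol: str) -> float:
        return 42.0  # placeholder for the real exchange/Bloomberg call

def make_reader() -> StockPriceReader:
    # The ONLY place that names the concrete class.
    return TickReaderConcrete()

def business_logic(reader: StockPriceReader) -> float:
    # Depends solely on the abstraction.
    return reader.get_price("IBM")
```

<p>Only the tiny entry point calls <code>make_reader()</code>; everything downstream takes a <code>StockPriceReader</code> as a parameter, so the broken-line dependency is reduced to one isolated spot rather than eliminated.</p>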
|
<python><oop><solid-principles><dependency-inversion><interface-segregation-principle>
|
2024-03-15 05:57:19
| 1
| 1,720
|
Della
|
78,164,847
| 12,390,973
|
How to add param values along with variable values in the objective function using PYOMO?
|
<p>I have written a program to maximize the revenue of a facility that contains Solar and Wind power plants. They serve a demand profile, and for serving it they earn revenue based on a <strong>PPA (Power Purchasing Agreement)</strong>. There are three components which affect the revenue:</p>
<ol>
<li>Actual Revenue: <strong>Min( Facility Generation(Solar + Wind), Demand Profile) * PPA</strong></li>
<li>Excess Revenue: <strong>(Max(Facility Generation, Demand Profile) - Demand Profile) * 50% of PPA</strong></li>
<li>Shortfall Penalty (If Facility output is less than <strong>90%(DFR)</strong> of Demand profile then this will apply): <strong>(Min(Facility output, DFR * Demand Profile) - Demand Profile * Shortfall Penalty</strong></li>
</ol>
<p><strong>Total Revenue = Actual Revenue + Excess Revenue - Shortfall Penalty</strong>
The program that I wrote for this:</p>
<pre><code>import numpy as np
import pandas as pd
from pyomo.environ import *
total_instances = 8760
np.random.seed(total_instances)
load_profile = np.random.randint(80, 130, total_instances)
solar_profile = np.random.uniform(0, 0.9, total_instances)
wind_profile = np.random.uniform(0, 0.9, total_instances)
month_idx = pd.date_range('1/1/2023', periods=total_instances, freq='60min').month.tolist()
PPA = 10
shortfall_penalty_rate = 15
excess_rate = 5
DFR = 0.9
solar_capacity = 120
wind_capacity = 130
# Not in Use right now
load_profile_month = list(zip(month_idx, load_profile))
monthly_sum = [0] * 12
for i in range(len(load_profile_month)):
month_idx, load_value = load_profile_month[i]
monthly_sum[month_idx - 1] += load_value
model = ConcreteModel()
model.m_index = Set(initialize=list(range(len(load_profile))))
# variable
model.facility_output = Var(model.m_index, domain=NonNegativeReals)
# gen variable
model.solar_use = Var(model.m_index, domain=NonNegativeReals)
model.wind_use = Var(model.m_index, domain=NonNegativeReals)
# Load profile
model.load_profile = Param(model.m_index, initialize=load_profile)
model.solar_profile = Param(model.m_index, initialize=solar_profile)
model.wind_profile = Param(model.m_index, initialize=wind_profile)
model.lost_load = Var(model.m_index, domain=NonNegativeReals)
# Objective function
def revenue(model):
actual_revenue = sum(
min((model.solar_use[m] + model.wind_use[m]), model.load_profile[m]) * PPA
for m in model.m_index
)
excess_revenue = sum(
(max(model.solar_use[m] + model.wind_use[m], model.load_profile[m]) - model.load_profile[m]) * excess_rate
for m in model.m_index
)
shortfall_penalty = sum(
(min(model.solar_use[m] + model.wind_use[m], DFR * model.load_profile[m]) - model.load_profile[m] * DFR) * shortfall_penalty_rate
for m in model.m_index
)
total_revenue = actual_revenue + excess_revenue + shortfall_penalty
return total_revenue
model.obj = Objective(rule=revenue, sense=maximize)
def energy_balance(model, m):
return model.grid[m] == model.solar_use[m] + model.wind_use[m] + model.lost_load[m]
model.energy_balance = Constraint(model.m_index, rule=energy_balance)
def grid_limit(model, m):
return model.grid[m] >= model.load_profile[m]
model.grid_limit = Constraint(model.m_index, rule=grid_limit)
def max_solar(model, m):
eq = model.solar_use[m] <= solar_capacity * model.solar_profile[m]
return eq
model.max_solar = Constraint(model.m_index, rule=max_solar)
def max_wind(model, m):
eq = model.wind_use[m] <= wind_capacity * model.wind_profile[m]
return eq
model.max_wind = Constraint(model.m_index, rule=max_wind)
Solver = SolverFactory('gurobi')
Solver.options['LogFile'] = "gurobiLog"
# Solver.options['MIPGap'] = 0.0
print('\nConnecting to Gurobi Server...')
results = Solver.solve(model)
if (results.solver.status == SolverStatus.ok):
if (results.solver.termination_condition == TerminationCondition.optimal):
print("\n\n***Optimal solution found***")
print('obj returned:', round(value(model.obj), 2))
else:
print("\n\n***No optimal solution found***")
if (results.solver.termination_condition == TerminationCondition.infeasible):
print("Infeasible solution")
exit()
else:
print("\n\n***Solver terminated abnormally***")
exit()
grid_use = []
solar = []
wind = []
# e_in = []
# e_out = []
# soc = []
lost_load = []
load = []
for i in range(len(load_profile)):
grid_use.append(value(model.grid[i]))
solar.append(value(model.solar_use[i]))
wind.append(value(model.wind_use[i]))
lost_load.append(value(model.lost_load[i]))
load.append(value(model.load_profile[i]))
# e_in.append(value(model.e_in[i]))
# e_out.append(value(model.e_out[i]))
# soc.append(value(model.soc[i]))
pd.DataFrame({
'Grid': grid_use,
'Solar': solar,
'Wind': wind,
'Shortfall': lost_load,
'Load Profile': load,
}).to_excel('testing4.xlsx')
</code></pre>
<p>I think the error I am getting occurs because I am using the <strong>Load profile</strong>, which is a <strong>Param</strong>, together with the <strong>Solar and wind</strong> <strong>Variables</strong>:</p>
<pre><code>ValueError: Error retrieving immutable Param value (load_profile[0]):
The Param value is undefined and no default value is specified.
ERROR: Rule failed when generating expression for objective obj: ValueError:
Error retrieving immutable Param value (load_profile[0]):
The Param value is undefined and no default value is specified.
ERROR: Constructing component 'obj' from data=None failed: ValueError: Error
retrieving immutable Param value (load_profile[0]):
The Param value is undefined and no default value is specified.
Process finished with exit code 1
</code></pre>
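<p>For what it's worth, the immediate <code>ValueError</code> seems to come from passing a raw NumPy array to <code>Param(initialize=...)</code>, which expects a dict keyed by the index set (or a scalar/callable). A minimal sketch of the conversion (the sizes here are made up):</p>

```python
import numpy as np

np.random.seed(0)
load_profile = np.random.randint(80, 130, 10)

# Param(initialize=...) wants {index: value}, not a numpy array.
load_init = {i: float(v) for i, v in enumerate(load_profile)}
# model.load_profile = Param(model.m_index, initialize=load_init)
```

<p>(Note also that Python's built-in <code>min</code>/<code>max</code> over Pyomo variables will not produce a valid optimization expression; those typically need to be reformulated with auxiliary variables and constraints.)</p>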
|
<python><pyomo>
|
2024-03-15 05:32:12
| 1
| 845
|
Vesper
|
78,164,659
| 8,364,971
|
Python UTC America/New York Time Conversion
|
<p>I'm working on a problem where I have to evaluate whether a timestamp (UTC) exceeds 7 PM <code>America/New_York</code>.</p>
<p>I naively did my check as:</p>
<pre><code>if (timestamp - timedelta(hours=4)).time() > 19:
__logic__
</code></pre>
<p>Which obviously failed with daylight savings EST/EDT.</p>
<p>I think this works but feels wrong.</p>
<pre><code>EST = pytz.timezone('America/New_York')
biz_end = datetime(1990, 1, 1, 19, tzinfo=EST).astimezone(pytz.utc).hour
if timestamp.time() > biz_end:
__logic__
</code></pre>
<p>Is there a better solution here?</p>
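<p>A sketch of what I suspect is the cleaner approach (using the stdlib <code>zoneinfo</code> instead of <code>pytz</code>): convert each timestamp to New York time and compare there, so the EST/EDT switch is handled automatically. The timestamps below are hypothetical examples.</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")

def after_7pm_ny(ts_utc: datetime) -> bool:
    # Convert the UTC timestamp to local New York time and compare there.
    return ts_utc.astimezone(NY).hour >= 19

# Same UTC wall time, winter vs. summer: only one is past 7 PM locally.
winter = datetime(2024, 1, 15, 23, 30, tzinfo=timezone.utc)  # 18:30 EST
summer = datetime(2024, 7, 15, 23, 30, tzinfo=timezone.utc)  # 19:30 EDT
```
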
|
<python><datetime><timezone><utc><pytz>
|
2024-03-15 04:20:36
| 1
| 342
|
billash
|
78,164,251
| 4,450,134
|
Dividing each column in Polars dataframe by column-specific scalar from another dataframe
|
<p>I'm a Polars noob. Given an <code>m x n</code> Polars dataframe <code>df</code> and a <code>1 x n</code> Polars dataframe of scalars, I want to divide each column in <code>df</code> by the corresponding scalar in the other frame.</p>
<pre><code>import numpy as np
import polars as pl
cols = list('abc')
df = pl.DataFrame(np.linspace(1, 9, 9).reshape(3, 3),
schema=cols)
scalars = pl.DataFrame(np.linspace(1, 3, 3)[:, None],
schema=cols)
</code></pre>
<pre><code>In [13]: df
Out[13]:
shape: (3, 3)
┌─────┬─────┬─────┐
│ a   ┆ b   ┆ c   │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
│ 1.0 ┆ 2.0 ┆ 3.0 │
│ 4.0 ┆ 5.0 ┆ 6.0 │
│ 7.0 ┆ 8.0 ┆ 9.0 │
└─────┴─────┴─────┘
In [14]: scalars
Out[14]:
shape: (1, 3)
┌─────┬─────┬─────┐
│ a   ┆ b   ┆ c   │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
│ 1.0 ┆ 2.0 ┆ 3.0 │
└─────┴─────┴─────┘
</code></pre>
<p>I can accomplish this easily in Pandas as shown below by delegating to NumPy broadcasting, but was wondering what the best way to do this is without going back and forth between Polars / Pandas representations.</p>
<pre><code>In [16]: df.to_pandas() / scalars.to_numpy()
Out[16]:
a b c
0 1.0 1.0 1.0
1 4.0 2.5 2.0
2 7.0 4.0 3.0
</code></pre>
<p>I found <a href="https://stackoverflow.com/questions/76307206/what-is-a-good-way-to-divide-all-columns-element-wise-by-a-column-specific-scala">this similar question</a> where the scalar constant is already a row in the original frame, but don't see how to leverage a row from <em>another</em> frame.</p>
<p>Best I can come up with thus far is combining the frames and doing some... nasty looking things :D</p>
<pre><code>In [31]: (pl.concat([df, scalars])
...: .with_columns(pl.all() / pl.all().tail(1))
...: .head(-1))
Out[31]:
shape: (3, 3)
┌─────┬─────┬─────┐
│ a   ┆ b   ┆ c   │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞═════╪═════╪═════╡
│ 1.0 ┆ 1.0 ┆ 1.0 │
│ 4.0 ┆ 2.5 ┆ 2.0 │
│ 7.0 ┆ 4.0 ┆ 3.0 │
└─────┴─────┴─────┘
</code></pre>
|
<python><numpy><python-polars>
|
2024-03-15 01:38:17
| 2
| 3,386
|
HavelTheGreat
|
78,164,196
| 2,115,971
|
MinGW ld linking error - undefined reference
|
<p>I hope that in the age of AI, there are at least a few humans who can still help troubleshoot a problem that the "all-knowing" GPT cannot.</p>
<p><strong>The Problem</strong>:</p>
<p>I'm trying to create a Python interface for a C++ library, and a few modules give me linking errors. The error details are given below.</p>
<p><strong>Error Details</strong></p>
<pre><code>2024-03-14 18:26:40,630 - ERROR -
------------------------------
Error compiling stepper_motor:
Error code: 1
Error: C:/SysGCC/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/12.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\User\AppData\Local\Temp\ccfElxip.o:stepper_motor_wrap.cxx:(.text+0x178a4): undefined reference to `SBC_RequestTriggerSwitches'collect2.exe: error: ld returned 1 exit status
g++ command: ['g++', '-shared', '-o', 'd:\\CZI_scope\\code\\pymodules\\thorlabs_kinesis\\motion_control\\benchtop\\_stepper_motor.pyd', 'd:\\CZI_scope\\code\\pymodules\\thorlabs_kinesis\\motion_control\\benchtop\\stepper_motor_wrap.cxx', '-Ic:\\Users\\User\\mambaforge\\envs\\rich\\include', '-Id:\\CZI_scope\\code\\pymodules\\thorlabs_kinesis\\__include', '-Lc:\\Users\\User\\mambaforge\\envs\\rich', '-Ld:\\CZI_scope\\code\\pymodules\\thorlabs_kinesis\\__lib', '-lpython39', '-lThorlabs.MotionControl.Benchtop.StepperMotor', '-Wno-error']
------------------------------
</code></pre>
<p>The symbol in question is declared in the header file (hence a linking error, not a compilation error). I have dumped the exports of the .lib file using <code>dumpbin</code>, and from the output it looks like the name has been mangled. I know this is standard for C++ libraries, so I'm not certain that's the issue.</p>
<p><strong>Dumpbin Output</strong></p>
<pre><code>?RequestTriggerSwitches@CBenchtopStepperMotorChannel@StepperMotor@Benchtop@MotionControl@Thorlabs@@QEBAFXZ (public: short __cdecl Thorlabs::MotionControl::Benchtop::StepperMotor::CBenchtopStepperMotorChannel::RequestTriggerSwitches(void)const )
</code></pre>
<p><strong>Header Definition</strong></p>
<pre class="lang-cpp prettyprint-override"><code>#ifdef BENCHTOPSTEPPERMOTORDLL_EXPORTS
/// <summary> Gets the Benchtop Stepper API. </summary>
#define BENCHTOPSTEPPERMOTOR_API __declspec(dllexport)
#else
#define BENCHTOPSTEPPERMOTOR_API __declspec(dllimport)
#endif
extern "C"
{
BENCHTOPSTEPPERMOTOR_API short __cdecl SBC_RequestTriggerSwitches(char const * serialNo, short channel);
...
}
</code></pre>
<p><strong>SWIG Interface File</strong></p>
<pre><code>
%module stepper_motor
// Remove calling convention macros compatibility with SWIG
#define __cdecl
#define __stdcall
#define __declspec(x)
#define WINAPI
%{
#include <windows.h>
#include <stdbool.h>
#define BENCHTOPSTEPPERMOTORDLL_EXPORTS
#include "Thorlabs.MotionControl.Benchtop.StepperMotor.h"
%}
%include "Thorlabs.MotionControl.Benchtop.StepperMotor.h"
</code></pre>
|
<python><c++><mingw><swig>
|
2024-03-15 01:12:30
| 1
| 5,244
|
richbai90
|
78,164,093
| 2,735,009
|
Store quantile ranges in a new column
|
<p>I have written the following code to grab the quantile ranges and store it in a new column:</p>
<pre><code>df_temp = pd.DataFrame(data_train['paper_mentions'])
df_temp['q_dr_du'] = pd.Series(pd.qcut(df_temp['paper_mentions'], q=4, duplicates='drop'))
df_temp['q_rank'] = pd.Series(pd.qcut(df_temp['paper_mentions'].rank(method='first'), q=4))
</code></pre>
<p>I'm creating 2 different columns for 2 types of ranges. Output below:
<a href="https://i.sstatic.net/7LUxW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7LUxW.png" alt="enter image description here" /></a></p>
<p>I want the actual quantile ranges stored in a new column, similar to what I'm storing in <code>df_temp['q_dr_du']</code>. I had to use <code>duplicates='drop'</code> to create this column, otherwise I was getting this error:</p>
<pre><code>ValueError: Bin edges must be unique:
</code></pre>
<p>But with <code>duplicates='drop'</code>, I'm only getting a single value stored in all the cells. How do I fix this problem and store the actual quantile ranges for all the values?</p>
<p>PS: I tried creating a sample dataframe that would replicate this issue, but I wasn't able to create one where the above error was replicated.</p>
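<p>To illustrate the kind of thing I mean, here is a small sketch with made-up data containing heavy ties (which is what triggers the duplicate-edges error): rank first so <code>qcut</code> always sees unique edges, then map the rank-based bins back to the actual value ranges they cover with a groupby.</p>

```python
import pandas as pd

s = pd.Series([0, 0, 0, 0, 1, 1, 2, 5, 9, 9])  # many ties -> duplicate bin edges

# Rank first so qcut always sees unique edges; keep integer bin labels.
codes = pd.qcut(s.rank(method="first"), q=4, labels=False)

# Recover the actual value range each quartile covers.
ranges = s.groupby(codes).agg(["min", "max"])
```
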
|
<python><pandas><dataframe>
|
2024-03-15 00:23:03
| 0
| 4,797
|
Patthebug
|
78,164,024
| 3,362,074
|
Converting depth map to distance map (Python OpenCV)
|
<p>I am trying to estimate the distance between objects in a scene and the camera. I have generated a depth map from a rectified stereo pair, but if I understand correctly, that's the distance between the plane where the sensor is located and the objects, but not the camera point specifically.</p>
<p>I am currently solving this by dividing the distance value found for each pixel by the cosine of the angle between the center of the image and that pixel.</p>
<p><a href="https://i.sstatic.net/RaowM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RaowM.png" alt="depth vs distance from image" /></a></p>
<p>This is for video frames, so I compute the cosines once:</p>
<pre><code>depthToDistance = np.ones((width, height))
degreesPerPixel = hFov / width
for x in range(width):
for y in range(height):
distY = abs(y-height/2)
distX = abs(x-width/2)
dist = math.sqrt(distX**2+distY**2)
degrees = dist * degreesPerPixel
factor = math.cos(math.radians(degrees))
depthToDistance[x,y] = factor
</code></pre>
<p>And apply the cosines to the depth map generated for every frame (pair)</p>
<pre><code>for x in range(width):
for y in range(height):
factor = depthToDistance[x,y]
depth_map[x,y] /= factor
</code></pre>
<p>My two questions are?</p>
<ul>
<li>Is this mathematically correct? My early tests look good to my eye, but I haven't checked this methodically yet.</li>
<li>Is there a faster way to do this? I don't normally write Python code, so I may be doing something stupid. In JS, I would use shaders to speed this up dramatically, but I'm reusing Python depth estimation examples and libraries.</li>
</ul>
<p>Thanks.</p>
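<p>Regarding the speed question: since the per-pixel Python loops are likely the bottleneck, here is a sketch of a vectorized NumPy version of the precomputation (same per-pixel-angle approximation as the loops above; the width/height/FOV values are just examples):</p>

```python
import numpy as np

width, height, hFov = 640, 480, 90.0  # example values
degreesPerPixel = hFov / width

# Pixel coordinate grids with the same (width, height) layout as the loop.
x, y = np.meshgrid(np.arange(width), np.arange(height), indexing="ij")
dist = np.hypot(x - width / 2, y - height / 2)
depthToDistance = np.cos(np.radians(dist * degreesPerPixel))

# Per frame, the correction becomes one array operation, no loops:
# depth_map /= depthToDistance
```
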
|
<python><opencv>
|
2024-03-14 23:49:20
| 0
| 579
|
Kajuna
|
78,163,936
| 2,648,947
|
How to chain jobs in Dagster?
|
<p>I need to chain several jobs. Some have to be started right after others have finished, and some need the results of other jobs as input.</p>
<p>It seems that I can start one job after the other by using sensors. AI suggests using the <code>@solid</code> and <code>@pipeline</code> decorators, but I was unable to find a suitable example of their usage in the Dagster documentation or on the internet. I also can't figure out how to pass output from one job to another. A <code>job_3(job_2())</code> call doesn't seem like the Dagster approach, does it?</p>
<p>Here is the code to illustrate an issue:</p>
<pre class="lang-py prettyprint-override"><code>@job
def job_1():
save_to_db_op(
make_api_call_op()
)
@job
def job_2():
out_1, out_2 = process_data_op(
make_another_api_call_op()
)
save_to_db_op(out_1)
return out_2 # I need to pass it to another job
@job
def job_3(out_2): # how to pass input here?
process_op(out_2)
do_some_other_staff_op()
# this function is a pseudocode to represent what I want to recreate in Dagster
def figure_it_out_pipeline():
job_1() # wait until complete
job_3(
job_2
)
</code></pre>
|
<python><python-3.x><jobs><job-scheduling><dagster>
|
2024-03-14 23:16:26
| 0
| 1,384
|
SS_Rebelious
|
78,163,914
| 2,862,945
|
Rotating a 3D body in python results in holes in the body
|
<p>I have a 3D body, let's say a cuboid, which I want to rotate. For simplicity, let's assume it's just rotation around the x-axis. So I use the corresponding rotation matrix <code>R</code> and multiply it with the coordinate vector <code>v</code> to get the new coordinate vector <code>v'</code>: <code>v'=R*v</code>. As a visualization tool, I use mayavi.</p>
<p>While the rotation does work, it has an annoying side effect: some values inside the rotated cuboid are missing. As you can see in the following snapshot, the blue cuboid is the rotated body, and it has some "holes" in it.</p>
<p><a href="https://i.sstatic.net/x9SSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x9SSg.png" alt="Snapshot of original cuboid and rotated cuboid" /></a></p>
<p>My question now is, what am I doing wrong with my rotation?</p>
<p>Here is the corresponding python code:</p>
<pre><code># import standard modules
import numpy as np
# import modules for 3D visualization of data
from mayavi import mlab
def plot_simple( data2plot ):
# define contour levels
contLevels = np.linspace(0, np.amax(data2plot), 5)[1:].tolist()
# create figure with white background and black axes and labels
fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0), size=(800,600) )
# make the contour plot
cont_plt = mlab.contour3d( data2plot, contours=contLevels,
transparent=True, opacity=.4,
figure=fig1 )
# create axes instance to modify some of its properties
ax1 = mlab.axes( nb_labels=4, extent=[1, data2plot.shape[0],
1, data2plot.shape[1],
1, data2plot.shape[2] ] )
mlab.outline(ax1)
ax1.axes.label_format = '%.0f'
ax1.axes.x_label = 'x'
ax1.axes.y_label = 'y'
ax1.axes.z_label = 'z'
# set initial viewing angle
mlab.view( azimuth=290, elevation=80 )
mlab.show()
def Rx(alpha):
# rotation matrix for rotation around x-axis
return np.matrix([[ 1, 0 , 0 ],
[ 0, np.cos(alpha), -np.sin(alpha)],
[ 0, np.sin(alpha), np.cos(alpha) ]])
def make_rotated_cube( Nx=100, Ny=70, Nz=40 ):
arr = np.zeros( [Nx, Ny, Nz] )
# define center of cuboid
xc = Nx/2
yc = Ny/2
zc = Nz/2
# define width of cuboid in each direction
dx = Nx/4
dy = Ny/4
dz = Nz/4
# rotation angle in degrees
alpha = 20
alpha = np.radians(alpha)
# loop through arr and define cuboid and rotated cuboid
# note that this is a very inefficient way to define a cuboid
# (the actual thing to rotate is different, this is just to make it simple)
for ii in range(Nx):
for jj in range(Ny):
for kk in range(Nz):
# check if coordinate is inside original cuboid
if ( (ii > (xc-dx/2) and ii < (xc+dx/2))
and (jj > (yc-dy/2) and jj < (yc+dy/2))
and (kk > (zc-dz/2) and kk < (zc+dz/2)) ):
# set density of original cuboid
arr[ii,jj,kk] = 5.
# apply rotation
new_coords = Rx(alpha)*np.array([[ii],[jj],[kk]])
# set density of rotated cuboid to different value
arr[ round(new_coords[0,0]),
round(new_coords[1,0]),
round(new_coords[2,0]) ] = 2
return arr
def main():
cubes = make_rotated_cube()
plot_simple(cubes)
if __name__ == '__main__':
main()
</code></pre>
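<p>For context, my current understanding is that the holes come from the forward ("push") mapping used above: several neighbouring source voxels can round to the same target voxel, leaving other target voxels empty. A sketch of the usual fix, inverse ("pull") mapping, where every voxel of the <em>output</em> grid is rotated backwards and tested against the original cuboid (smaller grid sizes here, just for illustration):</p>

```python
import numpy as np

def make_rotated_cuboid_pull(Nx=40, Ny=30, Nz=20, alpha_deg=20):
    # Inverse (pull) mapping: rotate every OUTPUT voxel backwards and
    # test whether it lands inside the original cuboid. Every output
    # voxel gets a well-defined value, so no holes can appear.
    a = np.radians(alpha_deg)
    Rx_inv = np.array([[1, 0, 0],                       # transpose of Rx(a)
                       [0, np.cos(a), np.sin(a)],
                       [0, -np.sin(a), np.cos(a)]])
    xc, yc, zc = Nx / 2, Ny / 2, Nz / 2
    dx, dy, dz = Nx / 4, Ny / 4, Nz / 4

    ii, jj, kk = np.meshgrid(np.arange(Nx), np.arange(Ny), np.arange(Nz),
                             indexing="ij")
    pts = np.stack([ii.ravel(), jj.ravel(), kk.ravel()]).astype(float)
    src = Rx_inv @ pts  # where each output voxel "came from"

    inside = ((np.abs(src[0] - xc) < dx / 2) &
              (np.abs(src[1] - yc) < dy / 2) &
              (np.abs(src[2] - zc) < dz / 2))
    arr = np.zeros((Nx, Ny, Nz))
    arr.ravel()[inside] = 2.0
    return arr
```

<p>(This is nearest-neighbour sampling of a solid; for real data one would interpolate instead, e.g. with <code>scipy.ndimage.affine_transform</code>, which implements exactly this pull-style resampling.)</p>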
|
<python><geometry><rotation><computational-geometry><rotational-matrices>
|
2024-03-14 23:10:01
| 1
| 2,029
|
Alf
|
78,163,868
| 15,412,256
|
Polars Expressions Failed to Access Intermediate Column Creation Expressions
|
<p>I want to encode the <code>non-zero</code> binary events with integer numbers. Following is a demo table:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"event": [0, 1, 1, 0],
"foo": [1, 2, 3, 4],
"boo": [2, 3, 4, 5],
}
)
</code></pre>
<p>The expected output is achieved by:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_row_index()
events = df.select(pl.col(["index", "event"])).filter(pl.col("event") == 1).with_row_index("event_id").drop("event")
df = df.join(events, on="index", how="left")
out:
shape: (4, 5)
┌───────┬───────┬─────┬─────┬──────────┐
│ index ┆ event ┆ foo ┆ boo ┆ event_id │
│ ---   ┆ ---   ┆ --- ┆ --- ┆ ---      │
│ u32   ┆ i64   ┆ i64 ┆ i64 ┆ u32      │
╞═══════╪═══════╪═════╪═════╪══════════╡
│ 0     ┆ 0     ┆ 1   ┆ 2   ┆ null     │
│ 1     ┆ 1     ┆ 2   ┆ 3   ┆ 0        │
│ 2     ┆ 1     ┆ 3   ┆ 4   ┆ 1        │
│ 3     ┆ 0     ┆ 4   ┆ 5   ┆ null     │
└───────┴───────┴─────┴─────┴──────────┘
</code></pre>
<p>I want to get the expeceted output by <strong>chaining the expressions</strong>:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.with_row_index()
.join(
df
.select(pl.col(["index", "event"]))
.filter(pl.col("event") == 1)
.with_row_index("event_id")
.drop("event"),
on="index",
how="left",
)
)
</code></pre>
<p>However, the expressions within the <code>.join()</code> expression does not seem to have added <code>index</code> column from the <code>df.with_row_index()</code> operation:</p>
<pre class="lang-py prettyprint-override"><code>ColumnNotFoundError: index
Error originated just after this operation:
DF ["event", "foo", "boo"]; PROJECT */3 COLUMNS; SELECTION: "None"
</code></pre>
|
<python><pandas><dataframe><python-polars>
|
2024-03-14 22:57:39
| 2
| 649
|
Kevin Li
|
78,163,832
| 1,925,652
|
Why does PDB incorrectly report location of exceptions?
|
<p>I've run into this problem countless times and I've tried to just overlook it but it's infuriating. About 25% of the time that I use the python debugger (usually on the command line, haven't tested elsewhere) it will report that an exception occurred in a location where it obviously didn't occur... (e.g. on the very first line of code importing a module).</p>
<p>Then I have to manually enter 'n' (for next) 50-100 times until the error occurs again, at which point it realizes that the error actually occurred somewhere else. Why does this happen? Is there a workaround? Should I report it as a bug, and how have other people not noticed it yet?</p>
<p>Example -- Incorrect Reporting (<strong>line 6</strong>) after continue:</p>
<pre><code>python -m pdb ablate_LHS_MMA.py
> /projects/academic/chrest/dwyerdei/CMAME_Final/sampling/ablate_LHS_MMA.py(6)<module>()
-> import lhsmdu
(Pdb) c
/projects/academic/chrest/dwyerdei/CMAME_Final/sampling/ablate_LHS_MMA.py:6: DeprecationWarning:
Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0),
(to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries)
but was not found to be installed on your system.
If this would cause problems for you,
please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
import lhsmdu
Traceback (most recent call last):
File "/user/dwyerdei/big_home/mambaforge/envs/Alex_UQ_Ensembles/lib/python3.9/pdb.py", line 1726, in main
pdb._runscript(mainpyfile)
File "/user/dwyerdei/big_home/mambaforge/envs/Alex_UQ_Ensembles/lib/python3.9/pdb.py", line 1586, in _runscript
self.run(statement)
File "/user/dwyerdei/big_home/mambaforge/envs/Alex_UQ_Ensembles/lib/python3.9/bdb.py", line 580, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "/projects/academic/chrest/dwyerdei/CMAME_Final/sampling/ablate_LHS_MMA.py", line 6, in <module>
import lhsmdu
ValueError: operands could not be broadcast together with shapes (6,) (250,5)
</code></pre>
<p>Example -- Correct Reporting (<strong>line 52</strong>) after next-stepping:</p>
<pre><code>(Pdb)
> /projects/academic/chrest/dwyerdei/CMAME_Final/sampling/ablate_LHS_MMA.py(52)<module>()
-> lhs_scaled = (min_params + (max_params - min_params) * np.array(lhs).T)
(Pdb) n
ValueError: operands could not be broadcast together with shapes (6,) (250,5)
> /projects/academic/chrest/dwyerdei/CMAME_Final/sampling/ablate_LHS_MMA.py(52)<module>()
-> lhs_scaled = (min_params + (max_params - min_params) * np.array(lhs).T)
</code></pre>
<p>P.S. Notice the line number is next to the file name: <code>ablate_LHS_MMA.py(52)</code></p>
|
<python><pdb>
|
2024-03-14 22:46:48
| 0
| 521
|
profPlum
|
78,163,694
| 9,718,199
|
Python sqlalchemy connection fails with "socket.gaierror: [Errno -2] Name or service not known"
|
<p>I'm trying to use a local postgres database connection through sqlalchemy. It manages to get inside the <code>with</code> statement, but then starts crashing as soon as I try to do anything with the connection.</p>
<pre class="lang-py prettyprint-override"><code>async with database.async_session.begin() as db_session:
account = await AccountOrm.one_or_none(db_session, ...) # crash here, deep inside sqlalchemy internals
</code></pre>
<p>The very long stack trace goes down through sqlalchemy into the connection pool, ending with <code>socket.gaierror: [Errno -2] Name or service not known</code>. All other SO answers related to this error are about HTTP issues, so I'm at a bit of a loss at what could be going wrong.</p>
|
<python><sqlalchemy>
|
2024-03-14 22:10:10
| 1
| 369
|
ConfusedPerson
|
78,163,337
| 9,951,273
|
How to release Python memory back to the OS
|
<p>I have a long running Python script that takes 1-2 hours to complete. It's running on a 4gb container with 1 CPU.</p>
<p>The script fetches and processes data in a for loop. Something like the following:</p>
<pre><code>for i in ENOUGH_API_CALLS_TO_TAKE_2_HOURS:
data = fetch_data()
process_data(data)
</code></pre>
<p>The 4gb container crashes halfway through script execution due to lack of memory. There's no way any individual API call comes close to retrieving 4gb of data though.</p>
<p>After using <code>tracemalloc</code> to debug, I think Python is slowly eating memory on each API call without releasing it back to the OS. Eventually crashing the process by exceeding memory limits.</p>
<p>I've read <a href="https://stackoverflow.com/a/15492488">threads</a> that discuss using multiprocessing to ensure memory gets released when tasks complete. But here I only have 1 CPU so I don't have a second processor to work with.</p>
<p>Is there any other way to release memory back to the OS from inside my main thread?</p>
<p>Note I've tried <code>gc.collect()</code> without any success.</p>
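<p>One detail worth noting: a single CPU does not prevent using multiprocessing, since processes are an OS scheduling concept rather than one-per-core. Running each iteration in a short-lived child returns that iteration's memory to the OS when the child exits. A sketch, with toy <code>fetch_data</code>/<code>process_data</code> functions standing in for the real API calls:</p>

```python
import multiprocessing as mp

def fetch_data(i):
    return list(range(100_000))       # stand-in for the real API call

def process_data(data):
    return sum(data)                  # stand-in for the real processing

def worker(i, q):
    q.put(process_data(fetch_data(i)))

def run_all(n):
    results = []
    for i in range(n):
        q = mp.Queue()
        p = mp.Process(target=worker, args=(i, q))
        p.start()
        results.append(q.get())       # only the small result crosses the queue
        p.join()                      # the child's memory goes back to the OS here
    return results
```

<p>Alternatively, <code>multiprocessing.Pool(1, maxtasksperchild=1)</code> gives a similar effect with less plumbing; either way, only the (small) result of each iteration survives in the parent.</p>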
|
<python><memory>
|
2024-03-14 20:37:30
| 1
| 1,777
|
Matt
|
78,162,990
| 1,712,287
|
How to make bitcoin compressed public key
|
<p>This is my code</p>
<pre><code>from bitcoinlib.keys import PrivateKey
from bitcoinlib.encoding import pubkeyhash_to_addr
# Example WIF private key
wif_private_key = "5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsreAnchuDf"
# Decode the WIF private key to obtain the raw private key
raw_private_key = PrivateKey(wif=wif_private_key).to_hex()
# Derive the compressed public key from the raw private key
compressed_public_key = PrivateKey(raw=raw_private_key).public_key().to_hex(compressed=True)
# Generate the Bitcoin address from the compressed public key
compressed_address = pubkeyhash_to_addr(compressed_public_key)
print("Compressed Bitcoin Address:", compressed_address)
</code></pre>
<p>But output is</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Hp\Desktop\puzzle\a.py", line 1, in <module>
from bitcoinlib.keys import PrivateKey
ImportError: cannot import name 'PrivateKey' from 'bitcoinlib.keys' (C:\Users\Hp\AppData\Local\Programs\Python\Python312\Lib\site-packages\bitcoinlib\keys.py)
</code></pre>
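<p>In current bitcoinlib releases the class is <code>Key</code> (<code>from bitcoinlib.keys import Key</code>), not <code>PrivateKey</code>, which is why the import fails. Independent of any library, the compressed public key is just the <code>02</code>/<code>03</code>-prefixed x-coordinate of <code>priv·G</code> on secp256k1. A dependency-free sketch (this WIF is the well-known encoding of private key 1):</p>

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
# secp256k1 field prime and generator point
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def wif_to_int(wif: str) -> int:
    """Base58Check-decode a WIF string to the raw private-key integer."""
    n = 0
    for c in wif:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    payload, checksum = raw[:-4], raw[-4:]
    assert hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
    return int.from_bytes(payload[1:33], "big")  # strip the 0x80 version byte

def point_add(p1, p2):
    """Add two points on secp256k1 (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return x3, (m * (x1 - x3) - y1) % P

def compressed_pubkey(d: int) -> str:
    """Double-and-add scalar multiplication, then SEC1 compressed encoding."""
    point, base = None, G
    while d:
        if d & 1:
            point = point_add(point, base)
        base = point_add(base, base)
        d >>= 1
    x, y = point
    prefix = b"\x02" if y % 2 == 0 else b"\x03"
    return (prefix + x.to_bytes(32, "big")).hex()

pub = compressed_pubkey(wif_to_int("5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsreAnchuDf"))
print(pub)
```

<p>The P2PKH address is then HASH160 of these bytes plus Base58Check with a <code>0x00</code> version byte; with bitcoinlib, <code>Key(wif).address()</code> should give it directly, though the exact method name is worth verifying against the version you have installed.</p>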
|
<python><blockchain><bitcoin>
|
2024-03-14 19:30:00
| 1
| 1,238
|
Asif Iqbal
|
78,162,964
| 424,333
|
Why is the Python IMAP library failing with Outlook?
|
<p>I'm trying to use Python to create a draft email programmatically in Office365. I have a similar code for Gmail that works fine. However, I'm getting the error <code>b'LOGIN failed.'</code>.</p>
<p>I thought the issue might be having 2FA enabled, but it's still not working with an app secret.</p>
<p>Here's my code excerpt:</p>
<pre><code>import imaplib
from email.message import EmailMessage
import pandas as pd
import random
import time
# Define IMAP server settings for Outlook
outlook_imap_server = 'outlook.office365.com'
outlook_imap_port = 993 # Port for SSL
# Set Outlook credentials
outlook_email = # email
outlook_password = # app secret
# Build a minimal draft (a plain str has no as_bytes method)
message = EmailMessage()
message["Subject"] = "hello"
message.set_content("hello")
utf8_message = message.as_bytes()
# Connect to Outlook IMAP server and save the email as draft
try:
with imaplib.IMAP4_SSL(outlook_imap_server, outlook_imap_port) as server:
server.login(outlook_email, outlook_password)
server.append('"Drafts"', None, imaplib.Time2Internaldate(time.time()), utf8_message)
print("success")
except imaplib.IMAP4.error as e:
print(f"Error: {e}")
except Exception as e:
print(f"An unexpected error occurred: {e}")
</code></pre>
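<p>Microsoft has disabled basic authentication (plain <code>LOGIN</code>) for IMAP on Office 365, so a password or classic app password fails regardless of 2FA. The fix is OAuth 2.0: obtain an access token (for example with the <code>msal</code> package) and authenticate with the <code>XOAUTH2</code> SASL mechanism. A sketch of the SASL string and the imaplib call; token acquisition is omitted and the address/token values are placeholders:</p>

```python
import imaplib

def xoauth2_string(user: str, access_token: str) -> bytes:
    # SASL XOAUTH2 format: user=<addr>\x01auth=Bearer <token>\x01\x01
    return f"user={user}\x01auth=Bearer {access_token}\x01\x01".encode()

# imaplib base64-encodes whatever the authobject returns, so return raw bytes
sasl = xoauth2_string("me@example.com", "EwB...access-token")

def login_xoauth2(user: str, access_token: str) -> imaplib.IMAP4_SSL:
    server = imaplib.IMAP4_SSL("outlook.office365.com", 993)
    server.authenticate("XOAUTH2", lambda _: xoauth2_string(user, access_token))
    return server
```

<p>The Azure app registration also needs the IMAP permission (e.g. <code>IMAP.AccessAsUser.All</code> for delegated access) granted before the token will be accepted.</p>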
|
<python><email><office365>
|
2024-03-14 19:24:54
| 0
| 3,656
|
Sebastian
|
78,162,874
| 17,729,094
|
Losing "type" information inside polars dataframe
|
<p>Sorry if my question doesn't make a lot of sense. I don't have much experience in python.</p>
<p>I have some code that looks like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from typing import NamedTuple
class Event(NamedTuple):
name: str
description: str
def event_table(num) -> list[Event]:
events = []
for i in range(5):
events.append(Event("name", "description"))
return events
def pretty_string(events: list[Event]) -> str:
pretty = ""
for event in events:
pretty += f"{event.name}: {event.description}\n"
return pretty
# This does work
print(pretty_string(event_table(5)))
# But then it doesn't work if I have my `list[Event]` in a dataframe
data = {"events": [0, 1, 2, 3, 4]}
df = pl.DataFrame(data).select(events=pl.col("events").map_elements(event_table))
# This doesn't work
pretty_df = df.select(events=pl.col("events").map_elements(pretty_string))
print(pretty_df)
# Neither does this
print(pretty_string(df["events"][0]))
</code></pre>
<p>It fails with error:</p>
<pre><code>Traceback (most recent call last):
File "path/to/script.py", line 32, in <module>
pretty_df = df.select(events=pl.col("events").map_elements(pretty_string))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path/to/.venv/lib/python3.11/site-packages/polars/dataframe/frame.py", line 8116, in select
return self.lazy().select(*exprs, **named_exprs).collect(_eager=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path/to/.venv/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1934, in collect
return wrap_df(ldf.collect())
^^^^^^^^^^^^^
polars.exceptions.ComputeError: AttributeError: 'dict' object has no attribute 'name'
</code></pre>
<p>Looks like my <code>list[Event]</code> is no longer that inside the <code>df</code>. I am not sure how to go about getting this to work.</p>
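<p>What is happening: when <code>map_elements</code> returns a list of NamedTuples, polars stores them as a <code>List(Struct)</code> column, and on the way back out each struct becomes a plain dict; the NamedTuple class is not preserved, hence <code>'dict' object has no attribute 'name'</code>. Two ways around it: pass <code>return_dtype=pl.Object</code> to <code>map_elements</code> so polars keeps the Python objects untouched, or make <code>pretty_string</code> accept both shapes. A sketch of the latter (pure Python, no polars needed to see the behavior):</p>

```python
from typing import NamedTuple

class Event(NamedTuple):
    name: str
    description: str

def pretty_string(events) -> str:
    pretty = ""
    for event in events:
        # polars hands struct values back as dicts; handle both forms
        if isinstance(event, dict):
            pretty += f"{event['name']}: {event['description']}\n"
        else:
            pretty += f"{event.name}: {event.description}\n"
    return pretty

as_tuples = pretty_string([Event("a", "b")])
as_dicts = pretty_string([{"name": "a", "description": "b"}])
```

<p>The struct round-trip is usually preferable to <code>pl.Object</code>, since Object columns opt out of polars' native operations.</p>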
|
<python><python-polars>
|
2024-03-14 19:05:01
| 1
| 954
|
DJDuque
|
78,162,861
| 4,547,189
|
Pandas rolling average - Leading and Trailing
|
<p>Does pandas have a simple syntax for a leading-and-trailing rolling average calculation?</p>
<p>If I want a trailing 30 days plus leading 3 days window for the rolling average, is there a simple way to do this?</p>
<p>e.g:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Date</th>
<th>CountA</th>
<th>Avg_vew</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2022-04-01</td>
<td>1050</td>
<td>14330.562857142857</td>
</tr>
<tr>
<td>2</td>
<td>2022-04-02</td>
<td>1283</td>
<td>18568.152766952455</td>
</tr>
<tr>
<td>3</td>
<td>2022-04-03</td>
<td>1071</td>
<td>19176.377217553687</td>
</tr>
<tr>
<td>4</td>
<td>2022-04-04</td>
<td>982</td>
<td>20578.77800407332</td>
</tr>
<tr>
<td>5</td>
<td>2022-04-05</td>
<td>996</td>
<td>21000.55</td>
</tr>
</tbody>
</table></div>
<p>The rolling average for April 1st would be (14330.56 + 18568.15 + 19176.37 + 20578.77)/4. It only includes the current date onwards because older data is not available; in the regular case it would go back x (30) days, go forward y (3) days, and calculate the mean.</p>
<p>Thanks</p>
|
<python><pandas>
|
2024-03-14 19:01:23
| 1
| 648
|
tkansara
|
78,162,851
| 6,580,142
|
How does Python logging library get lineno and funcName efficiently?
|
<p>In my Python service I'm using a logging library that's built in-house and not compatible with Python's logging module. While using this library, I want to add additional info to the logs like lineno and funcName. I have been using the <code>inspect</code> library to get this info like below:</p>
<pre><code>frame = inspect.currentframe().f_back
frame_info = inspect.getframeinfo(frame)
frame_info.lineno
</code></pre>
<p>However as per discussions in <a href="https://stackoverflow.com/questions/17407119/python-inspect-stack-is-slow">Python inspect.stack is slow</a>, these methods of using <code>inspect</code> library are somewhat expensive since they access filesystem. We notice a performance decrease in our service because of this (although not nearly as bad as when we were using <code>inspect.stack()[0]</code> earlier).</p>
<p>I noticed Python's native logging module is able to get funcName and lineno methods efficiently. These are exposed through LogRecord attributes: <a href="https://python.readthedocs.io/en/latest/library/logging.html#logrecord-attributes" rel="nofollow noreferrer">https://python.readthedocs.io/en/latest/library/logging.html#logrecord-attributes</a>.</p>
<p>My question is, how does Python's logging module work to achieve efficient collection of this info? I tried to dig into its source code but had a hard time finding the right spot.</p>
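<p>The short answer: <code>logging</code> never opens source files. Its <code>findCaller</code> walks live frame objects via <code>sys._getframe</code> and reads <code>f_lineno</code> and <code>f_code.co_name</code> directly, whereas <code>inspect.getframeinfo</code> additionally loads the source line from disk. A minimal sketch of the same frame-walking idea:</p>

```python
import sys

def caller_info():
    # Grab the caller's frame directly -- no filesystem access, unlike
    # inspect.getframeinfo, which also reads the source line from disk.
    frame = sys._getframe(1)
    code = frame.f_code
    return code.co_filename, frame.f_lineno, code.co_name

def some_function():
    return caller_info()

filename, lineno, func_name = some_function()
```

<p><code>sys._getframe</code> is CPython-specific (logging itself falls back gracefully when it is unavailable), but for an in-house library on CPython it is the cheap path.</p>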
|
<python><python-logging>
|
2024-03-14 18:59:59
| 2
| 895
|
David Liao
|
78,162,830
| 14,875,896
|
How to create cronjob such that reminders are triggered as per user timezone?
|
<p>I have an app where I need to send reminders for upcoming sessions. Users are spread across the globe, and their session times are stored in UTC in the database. I am getting confused about how to design the cron job so that notifications are sent at 4 pm in each user's local timezone.</p>
<p>Thanks in advance.</p>
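<p>A common design: store each user's IANA timezone alongside the UTC data, run the job every hour (or every 15 minutes to cover half- and quarter-hour offset zones), and on each run select the users whose local wall-clock hour is 16. A sketch with the stdlib <code>zoneinfo</code> module (the user records are hypothetical):</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def users_due(now_utc, users):
    """users: iterable of (user_id, iana_tz); return ids whose local hour is 4 pm."""
    due = []
    for user_id, tz_name in users:
        local = now_utc.astimezone(ZoneInfo(tz_name))
        if local.hour == 16:
            due.append(user_id)
    return due

# 2024-03-14 20:00 UTC is 16:00 in New York (EDT) but 20:00 in London (GMT)
now = datetime(2024, 3, 14, 20, 0, tzinfo=timezone.utc)
due = users_due(now, [("u1", "America/New_York"), ("u2", "Europe/London")])
```

<p>Because the comparison is done per run against each zone, DST transitions are handled by the tz database rather than by any fixed offset stored per user.</p>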
|
<python><cron><utc><reminders>
|
2024-03-14 18:56:32
| 0
| 354
|
lsr
|