QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
76,771,761 | 3,282,410 | Why does llama-index still require an OpenAI key when using Hugging Face local embedding model? | <p>I am creating a very simple question and answer app based on documents using llama-index. Previously, I had it working with OpenAI. Now I want to try using no external APIs so I'm trying the Hugging Face example <a href="https://gpt-index.readthedocs.io/en/latest/core_modules/model_modules/llms/usage_custom.html#example-using-a-huggingface-llm" rel="noreferrer">in this link</a>.</p>
<p>It says in the example in the link: "Note that for a completely private experience, also setup a local embedding model (example here)." I'm assuming the example given below is the example being referred to. So, naturally, I'm trying to copy the example (<a href="https://gpt-index.readthedocs.io/en/latest/examples/customization/llms/SimpleIndexDemo-Huggingface_stablelm.html" rel="noreferrer">fuller example here</a>).</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import gradio as gr
import sys
import logging
import os
from llama_index.llms import HuggingFaceLLM
from llama_index.prompts.prompts import SimpleInputPrompt
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index import SimpleDirectoryReader, VectorStoreIndex, ServiceContext, load_index_from_storage, StorageContext
storage_path = "storage/"
docs_path="docs"
def construct_index(directory_path):
    max_input_size = 4096
    num_outputs = 512
    #max_chunk_overlap = 20
    chunk_overlap_ratio = 0.1
    chunk_size_limit = 600
    #prompt_helper = PromptHelper(max_input_size, num_outputs, chunk_overlap_ratio, chunk_size_limit=chunk_size_limit)
    system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
    # This will wrap the default prompts that are internal to llama-index
    query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")
    llm = HuggingFaceLLM(
        context_window=4096,
        max_new_tokens=256,
        generate_kwargs={"temperature": 0.7, "do_sample": False},
        system_prompt=system_prompt,
        query_wrapper_prompt=query_wrapper_prompt,
        tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
        model_name="StabilityAI/stablelm-tuned-alpha-3b",
        device_map="auto",
        stopping_ids=[50278, 50279, 50277, 1, 0],
        tokenizer_kwargs={"max_length": 4096},
        # uncomment this if using CUDA to reduce memory usage
        # model_kwargs={"torch_dtype": torch.float16}
    )
    #llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs)
    #llm_predictor = LLMPredictor(llm=llm)
    service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm)
    documents = SimpleDirectoryReader(directory_path).load_data()
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)
    #index = VectorStoreIndex(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index.storage_context.persist(persist_dir=storage_path)
    return index

def chatbot(input_text):
    index = load_index_from_storage(StorageContext.from_defaults(persist_dir=storage_path))
    #index = GPTVectorStoreIndex.load_from_disk('index.json')
    #query_engine = index.as_query_engine(response_synthesizer=response_synthesizer);
    query_engine = index.as_query_engine(streaming=True)
    response = query_engine.query(input_text)
    print(response.source_nodes)
    relevant_files = []
    for node_with_score in response.source_nodes:
        print(node_with_score)
        print(node_with_score.node)
        print(node_with_score.node.metadata)
        print(node_with_score.node.metadata['file_name'])
        file = node_with_score.node.metadata['file_name']
        print(file)
        # Resolve the full file path for the downloading
        full_file_path = Path(docs_path, file).resolve()
        # See if it's already in the array
        if full_file_path not in relevant_files:
            relevant_files.append(full_file_path)  # Add it
    print(relevant_files)
    return response.get_response(), relevant_files

iface = gr.Interface(fn=chatbot,
                     inputs=gr.components.Textbox(lines=7, label="Enter your text"),
                     outputs=[
                         gr.components.Textbox(label="Response"),
                         gr.components.File(label="Relevant Files")
                     ],
                     title="Custom-trained AI Chatbot",
                     allow_flagging="never")

index = construct_index(docs_path)
iface.launch(share=False)
</code></pre>
<p>Regardless, the code errors out saying:</p>
<pre><code>ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
</code></pre>
<p>Am I not understanding how to set up a local model?</p>
| <python><huggingface-transformers><huggingface><large-language-model><llama-index> | 2023-07-26 13:19:46 | 3 | 3,404 | Mikey A. Leonetti |
76,771,724 | 2,473,022 | Optional dependencies using test-requirements.txt in pyproject.toml with setuptools | <p>A typical <code>pyproject.toml</code> using <code>setuptools</code> with <a href="https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies" rel="nofollow noreferrer">optional dependencies</a> looks like the following (unrelated sections removed):</p>
<pre><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "my_project"
dependencies = ["numpy","pandas"]
[project.optional-dependencies]
fast = ["numba"]
test = ["pytest"]
</code></pre>
<p>To use a <code>requirements.txt</code> (generated from <code>pip-compile</code> using <code>requirements.in</code>) to store the main dependencies (without adding other files such as <code>setup.py</code>), one can use the <code>dynamic</code> keyword:</p>
<pre><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "my_project"
dynamic = ["dependencies"] # Changed
[project.optional-dependencies]
fast = ["numba"]
test = ["pytest"]
[tool.setuptools.dynamic] # New section
dependencies = {file = ["requirements.txt"]}
</code></pre>
<p>If I would like <code>setuptools</code> to also read the optional dependencies from other files (say, <code>test-requirements.txt</code>), what should be the correct syntax? According to <a href="https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html#dynamic-metadata" rel="nofollow noreferrer">the documentation</a>, the feature is in beta, and only a single keyword <code>optional-dependencies</code> is exposed. But I have more than one optional dependencies: <code>[fast]</code> and <code>[test]</code>, and their <code>*-requirements.txt</code> are generated in a fixed format from <code>pip-compile</code>. Specifically, what is meant by <code>group</code> in:</p>
<blockquote>
<p><em>subset</em> of the <code>requirements.txt</code> format <strong>per group</strong></p>
</blockquote>
<p>?</p>
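<p>For what it's worth, my best guess from reading the beta docs is that "group" refers to each extra name, so every extra would get its own file mapping under <code>optional-dependencies</code>. A sketch of that guess (the <code>*-requirements.txt</code> file names are assumed, and I haven't verified this round-trips through a build):</p>

```toml
[project]
name = "my_project"
dynamic = ["dependencies", "optional-dependencies"]

[tool.setuptools.dynamic]
dependencies = {file = ["requirements.txt"]}

# one file per extra ("group")?
[tool.setuptools.dynamic.optional-dependencies]
fast = {file = ["fast-requirements.txt"]}
test = {file = ["test-requirements.txt"]}
```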
| <python><dynamic><setuptools><requirements.txt><pip-tools> | 2023-07-26 13:14:33 | 1 | 1,664 | Moobie |
76,771,719 | 6,997,665 | Training a network with data having high and low values | <p>I am using Torch in Python to train a network that approximates a complicated mathematical expression, i.e., for a 2D input, I need a <code>K</code>-length vector that is computed from the 2D input using some mathematical calculations. However, in this vector, there are (absolute) values of the order of <code>10^2</code> and also values of the order of <code>10^-5</code>. The minimum and maximum values are dependent on the input, so a common normalization cannot be used. I have normalized the input to be between <code>0</code> and <code>1</code>. I have tried using a mask while training, via the following snippet.</p>
<pre><code>import torch
import numpy as np
gnd_truth = torch.from_numpy(np.array([20.42, -1.56e-4, -3.11, 4.2e-2, -7e-3, 10.11]))[None] # Ground truth
dnn_in = torch.from_numpy(np.array([0.76, 0.34]))[None] # Input to the network
# Training
optimizer.zero_grad()
dnn_out = net(dnn_in) # Forward prop
mask = (torch.abs(gnd_truth) < 1e-2).float() * 100 # Multiplying by 100 to increase the value
loss = mse_loss(mask*dnn_out, gnd_truth) + mse_loss((1-mask)*dnn_out, (1-mask)*gnd_truth)
loss.backward()
optimizer.step()
</code></pre>
<p>This too did not give me the desired result. Can anyone please let me know how such training can be carried out?</p>
| <python><machine-learning><deep-learning><pytorch> | 2023-07-26 13:14:14 | 2 | 3,502 | learner |
76,771,712 | 10,428,677 | Replace text inside brackets with partial value identified in each respective row | <p>I have a dataframe with a bunch of columns, one of which looks like this:</p>
<pre><code>data = {'Product': ['Product A', 'Product B', 'Product C (discontinued in March 2021)', 'Product D', 'Product E (discontinued on 30 April 2004)']}
df = pd.DataFrame(data)
</code></pre>
<p>I tried writing a piece of code that iterates through every row of the column, identifies the year in the brackets (where applicable) and replaces the text inside the brackets with the following <code>'discont. '</code> + <code>the year identified</code>. So for <code>'Product C'</code>, it should change it to <code>Product C (discont. 2021)</code>.</p>
<pre><code>def amend_vals(value):
    pattern = r'\((\d{4})\)'  # Regex pattern to capture the year inside brackets
    match = re.search(pattern, value)
    if match:
        year = match.group(1)
        return re.sub(pattern, '(discont. ' + year + ')', value)
    else:
        return value

df['Product'] = df['Product'].apply(amend_vals)
</code></pre>
<p>It doesn't seem to work though. Anyone have any ideas how to fix it?</p>
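<p>One variant I sketched but haven't fully verified loosens the pattern so the year can be surrounded by other text inside the brackets (my original pattern only matches when the brackets contain nothing but the year):</p>

```python
import re

def amend_vals(value):
    # Loosened pattern: any non-')' text may surround the 4-digit year
    pattern = r'\([^)]*?(\d{4})[^)]*\)'
    return re.sub(pattern, lambda m: f'(discont. {m.group(1)})', value)

print(amend_vals('Product C (discontinued in March 2021)'))
# Product C (discont. 2021)
```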
| <python><pandas> | 2023-07-26 13:13:16 | 4 | 590 | A.N. |
76,771,603 | 3,225,420 | Get UTC 'T' and 'Z' to show in DataFrame Column | <p>I'm importing a dataframe with dates in one datetime format and feeding it into an API service that requires dates in this UTC format (notice the <code>T</code> and <code>Z</code>):</p>
<pre><code>2023-07-26T11:04:23.893Z
</code></pre>
<p>Noteworthy: this will be converted into <code>JSON</code>, so the final answer can be a string. But it would be a much cleaner solution if native <code>Pandas</code> time handling could do it.</p>
<p>On individual dates, not in a <code>DataFrame</code>, I've done it in this manner:</p>
<pre><code>due_date_end = datetime.now() + relativedelta(months=+3)
due_date_end = due_date_end.isoformat('T') + 'Z'
</code></pre>
<p>When I try using the <code>.isoformat()</code> method on a <code>df</code> column I get an exception.</p>
<p>I've also tried the following:</p>
<p>Parsing dates when reading the file</p>
<pre><code>df = pd.read_csv('my_test_file.csv',parse_dates=['job_due_date'])
</code></pre>
<p>Converting using related answers I've seen on SO:</p>
<pre><code>df['due_date'] = pd.to_datetime(end_user_df['job_due_date']).dt.tz_localize('UTC')
</code></pre>
<p>And another variant based off of SO answers:</p>
<pre><code>end_user_df['due_date'] = pd.to_datetime(end_user_df['job_due_date']).dt.tz_localize('UTC')
end_user_df['due_date'] = end_user_df['due_date'].to_string().strftime("%Y-%m-%dT%H:%M:%S%Z")
</code></pre>
<p>What should I try next?</p>
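<p>For completeness, a self-contained sketch of the direction I'm leaning: build the string per element with <code>dt.strftime</code> and append the <code>Z</code> literally, on the assumption the timestamps are already UTC (the sample value here is made up):</p>

```python
import pandas as pd

# Hypothetical sample standing in for the real CSV column
df = pd.DataFrame({"job_due_date": ["2023-07-26 11:04:23.893"]})

due = pd.to_datetime(df["job_due_date"])
# %f produces microseconds (6 digits); keep only milliseconds, then append 'Z'
df["due_date"] = due.dt.strftime("%Y-%m-%dT%H:%M:%S.%f").str[:-3] + "Z"
print(df["due_date"][0])  # 2023-07-26T11:04:23.893Z
```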
| <python><pandas><datetime><utc><iso8601> | 2023-07-26 12:59:58 | 1 | 1,689 | Python_Learner |
76,771,537 | 1,083,960 | Combine multiple columns and rows into a Polars struct (dictionary) | <p>I'm trying to convert a data frame into nested/hierarchical data that will be written out as JSON lines. The data are structured like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"group_id": ["a", "a", "a", "b", "b", "b"],
"label": ["dog", "cat", "mouse", "dog", "cat", "mouse"],
"indicator": [1, 1, 0, 0, 0, 1]
})
df
ββββββββββββ¬ββββββββ¬ββββββββββββ
β group_id β label β indicator β
β --- β --- β --- β
β str β str β i64 β
ββββββββββββͺββββββββͺββββββββββββ‘
β a β dog β 1 β
β a β cat β 1 β
β a β mouse β 0 β
β b β dog β 0 β
β b β cat β 0 β
β b β mouse β 1 β
ββββββββββββ΄ββββββββ΄ββββββββββββ
</code></pre>
<p>I'm trying to find a way to combine the "label" and "indicator" columns into a single dictionary (struct) per "group_id", where "label" are the keys and "indicator" the items. The result should look like this:</p>
<pre class="lang-py prettyprint-override"><code>target = pl.DataFrame({
"group_id": ["a", "b"],
"label": [{"dog": 1, "cat": 1, "mouse": 0}, {"dog": 0, "cat": 0, "mouse": 1}],
})
target
ββββββββββββ¬ββββββββββββ
β group_id β label β
β --- β --- β
β str β struct[3] β
ββββββββββββͺββββββββββββ‘
β a β {1,1,0} β
β b β {0,0,1} β
ββββββββββββ΄ββββββββββββ
target["label"][0]
{'dog': 1, 'cat': 1, 'mouse': 0}
target.write_ndjson()
'{"group_id":"a","label":{"dog":1,"cat":1,"mouse":0}}\n{"group_id":"b","label":{"dog":0,"cat":0,"mouse":1}}\n'
</code></pre>
<ul>
<li>All groups have the same labels, are the same length, etc.</li>
<li>I'm specifically trying to figure out whether it's possible to do with Polars, not Pandas.</li>
</ul>
| <python><python-polars> | 2023-07-26 12:52:16 | 1 | 1,437 | andybega |
76,771,450 | 662,285 | check if all files exist in a directory otherwise exit python | <p>I am new to Python and want to check whether a list of files exists in a given directory. If any of the files does not exist, exit.</p>
<pre><code>def CheckFileexists(dirpath):
    file_list = ['a.txt', 'b.txt', 'c.txt']  # Need to check if all 3 files exist in directory
    for files in os.walk(dirpath):
        if files:
            print(files, 'has files')
        if not files:
            print(files, 'does not have files')
</code></pre>
<p>This prints all the files, but how do I compare against file_list to confirm that all three files exist, and exit the loop if any of them does not?</p>
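<p>To be concrete, the behaviour I'm after is something like this self-contained sketch (the temporary directory is only for demonstration; the caller would <code>sys.exit</code> when the returned list is non-empty):</p>

```python
import tempfile
from pathlib import Path

def check_files_exist(dirpath, file_list):
    """Return the subset of file_list that is missing from dirpath."""
    return [name for name in file_list if not (Path(dirpath) / name).is_file()]

# Demo in a throwaway directory containing only a.txt and b.txt
with tempfile.TemporaryDirectory() as d:
    for name in ('a.txt', 'b.txt'):
        (Path(d) / name).touch()
    print(check_files_exist(d, ['a.txt', 'b.txt', 'c.txt']))  # ['c.txt']
```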
| <python> | 2023-07-26 12:42:58 | 2 | 4,564 | Bokambo |
76,771,222 | 15,248,809 | Django Validators not working in shell command | <p>Django validators are not working in shell commands; they work in Django Forms and the Django Admin, but not in shell commands. I have this:</p>
<h4>Validator</h4>
<pre><code>def validate_cuit(value):
    """ Must be exactly 11 numeric characters """
    if len(value) != 11:
        raise CuitValidationException('CUIT should have 11 characters')
    if not value.isdigit():
        raise CuitValidationException('CUIT should only contain numeric characters')
    return value
</code></pre>
<h4>Exception</h4>
<pre><code>class CuitValidationException(ValidationError):
    pass
</code></pre>
<h3>Model</h3>
<pre><code>class Empresa(models.Model):
    name = models.CharField(max_length=120, validators=[validate_name])
    cuit = models.CharField(max_length=11, validators=[validate_cuit])
</code></pre>
<p>If I do this, I get no error</p>
<pre><code>e = Empresa.objects.create(name="Testing Case", cuit='1')
</code></pre>
<p>The only way I found to solve this is by working on the <em>save</em> method:</p>
<pre><code>def save(self, force_insert=False, force_update=False, using=None, update_fields=None):
    self.name = validate_name(self.name)
    self.cuit = validate_cuit(self.cuit)
    return super().save(force_insert, force_update, using, update_fields)
</code></pre>
<p>But I'm sure it shouldn't be necessary. Can you help me with this?</p>
| <python><django><validation> | 2023-07-26 12:17:21 | 1 | 313 | Eugenio |
76,771,216 | 3,371,250 | How to append an else case to a dynamically created case statement in sqlalchemy? | <p>Say I have a list of thresholds with corresponding classes like this:</p>
<pre><code>classes = [{1, "1"},
{2, "4"},
{3, "7"},
{4, "8"},
{5, "9"}]
</code></pre>
<p>I want to use this list in a query by defining cases based on its elements.
This is my query:</p>
<pre><code>query = select([
    subquery.c.id,
    case([(subquery.c.some_value <= x, y) for x, y in classes
          ]).label("incidence_class")
])
</code></pre>
<p>This works fine but I have edge cases that I want to catch by using an else case.
So what I essentially want to dynamically create is this:</p>
<pre><code>query = select([
    subquery.c.id,
    case([(subquery.c.some_value <= 1, "1"),
          (subquery.c.some_value <= 2, "4"),
          (subquery.c.some_value <= 3, "7"),
          (subquery.c.some_value <= 4, "8"),
          (subquery.c.some_value <= 5, "9")],
         else_=10
         ).label("incidence_class")
])
</code></pre>
<p>The line that I want to add to the list is "else_=10". I can't just add this to the list that I create with list comprehension because it is code, right?</p>
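<p>Put differently: is <code>else_</code> just a keyword argument I can pass in the same <code>case()</code> call as the comprehension? A standalone sketch of that idea, using the newer (SQLAlchemy 1.4+/2.0) calling style where the whens are passed positionally, with made-up table and column names:</p>

```python
import sqlalchemy as sa

classes = [(1, "1"), (2, "4"), (3, "7"), (4, "8"), (5, "9")]
t = sa.table("t", sa.column("id"), sa.column("some_value"))

# The comprehension builds the WHEN clauses; else_ rides along as a kwarg
expr = sa.case(
    *[(t.c.some_value <= x, y) for x, y in classes],
    else_="10",
).label("incidence_class")
print(sa.select(t.c.id, expr))
```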
<p>Thanks in advance.</p>
| <python><sqlalchemy><list-comprehension> | 2023-07-26 12:17:00 | 1 | 571 | Ipsider |
76,771,168 | 3,851,798 | How to add a constraint file to environment.yml for conda? | <p>I'm currently trying to set up a conda environment which has Apache Airflow in it.
The tricky part about Apache Airflow is that it has a lot of constraints and the recommended way to install it is using pip and the constraints that the project supplies.
I would like have an <code>environment.yml</code> so that it is easier to setup the project on a server later.
I tried to have the following <code>environment.yml</code>:</p>
<pre><code>name: project_name
channels:
  - conda-forge
dependencies:
  - python==3.9
  - pip
  - pip:
      - poetry
      - apache-airflow==2.6.2 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.6.2/constraints-3.9.txt"
</code></pre>
<p>But pip didn't pick up the constraints and installed the latest pydantic (2.x something), which isn't compatible with Airflow.
Eventually, I had to resort to this hacky solution.</p>
<pre><code>name: project_name
channels:
  - conda-forge
dependencies:
  - python==3.9
  - pip
  - pip:
      - -r requirements.txt
</code></pre>
<p>And <code>requirements.txt</code> was:</p>
<pre><code>-c airflow-constraints.txt
poetry
apache-airflow==2.6.2
</code></pre>
<p>And then <code>airflow-constraints.txt</code> had the contents of the constraints file.</p>
<p>Is there a simpler way to get this behaviour without having to resort to multiple requirements files?</p>
| <python><pip><airflow><conda> | 2023-07-26 12:12:46 | 1 | 1,573 | Tejas Anil Shah |
76,771,139 | 12,100,211 | Dynamically modifying fields in django Rest Framework | <p>Okay, so I have two classes: one is Book and the other is Category. Book and Category are linked by a foreign key named category, which is a Book field.
See the code:</p>
<pre><code>class Category(models.Model):
    class Meta:
        verbose_name_plural = "Categories"

    category = models.CharField(max_length=20)

    def __str__(self):
        return self.category


class Book(models.Model):
    book_title = models.CharField(max_length=20)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

    def __str__(self):
        return self.book_title
</code></pre>
<p>And below are the serializer classes</p>
<pre><code>
class DynamicFieldsModelSerializer(serializers.ModelSerializer):
    """
    A ModelSerializer that takes an additional `fields` argument that
    controls which fields should be displayed.
    """

    def __init__(self, *args, **kwargs):
        # Don't pass the 'fields' arg up to the superclass
        fields = kwargs.pop('fields', None)
        # Instantiate the superclass normally
        super().__init__(*args, **kwargs)
        if fields is not None:
            # Drop any fields that are not specified in the `fields` argument.
            allowed = set(fields)
            existing = set(self.fields)
            for field_name in existing - allowed:
                self.fields.pop(field_name)


class CategorySerializer(DynamicFieldsModelSerializer):
    class Meta:
        model = Category
        # only show the category field
        fields = ['category']


class BookSerializer(serializers.ModelSerializer):
    # this will show the category data which is related to this Book
    category = CategorySerializer()

    class Meta:
        model = Book
        fields = '__all__'
</code></pre>
<p>Now, when I get the book data using @api_view, I want to get only the category name, which works with no problem at all. But when I want to see the Category data, I want to see all its fields, and it shows nothing.
Code of @api_view:</p>
<pre><code>@api_view(['GET'])
def getBooks(request):
    books = Book.objects.all()
    serilized_data = BookSerializer(books, many=True)
    return Response({'status': 200, 'payload': serilized_data.data})


# it is now only show the category as only category field is passed in CategorySerilizer
@api_view(['GET'])
def getCategory(request):
    category = Category.objects.all()
    serilized_data = CategorySerializer(category, fields=('__all__'), many=True)
    return Response({'status': 200, 'payload': serilized_data.data})
</code></pre>
<p>And the output of getCategory is</p>
<pre><code>
{
    "status": 200,
    "payload": [
        {},
        {}
    ]
}
</code></pre>
<p>One more thing to notice: I have only 2 categories in my database, which matches the two empty objects ({}, {}) in the payload.</p>
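<p>One thing I noticed while debugging, in case it's relevant: <code>fields = ('__all__')</code> without a trailing comma is a plain string, not a tuple, so <code>set(fields)</code> in DynamicFieldsModelSerializer becomes a set of single characters. A small stdlib-only demonstration of what I mean:</p>

```python
# ('__all__') is just a parenthesised string; ('__all__',) would be a tuple
fields = ('__all__')
print(type(fields).__name__)   # str

allowed = set(fields)          # set of single characters
print(sorted(allowed))         # ['_', 'a', 'l']

existing = {'category'}
# 'category' is not in allowed, so the serializer pops every field
print(existing - allowed)      # {'category'}
```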
| <python><django> | 2023-07-26 12:08:59 | 2 | 303 | Kanha Tomar |
76,771,053 | 12,560,046 | What are the alternatives for sleep to delay functions? how to run asyncio.Timer()? | <p>The following is a double question, but it's important to know whether it's possible.</p>
<p>After searching for an alternative, I found some methods like asyncio.sleep(), asyncio.Timer(), threading.Timer(), etc.</p>
<p>However, after researching in detail, I found that anything working with <strong>coroutines</strong> is the best choice to avoid the memory overhead, so from my searching I settled on something called asyncio.Timer().</p>
<p>Basically, what I want to do is a game that can handle more than one million events for one million users.</p>
<p>For example, I need to "upgrade building" and this would be an event that will take <strong>n</strong> of time to upgrade it. Now I would like to make such a workflow to trigger at a specific time when the time of upgrading a building is ready, there are many ways to achieve that like job-crons I know but I need to achieve that by seconds as well not only by minutes like job-crons do as well as I need to go apart from lacking memory when I use traditional way when I use <strong>sleep</strong> by <strong>time.sleep()</strong> or by <strong>asyncio.sleep()</strong>.</p>
<p>However, when I used asyncio.Timer() I got an <strong>AttributeError</strong>, because asyncio has no Timer attribute to work with.</p>
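<p>To make the intent concrete, this is the kind of coroutine-based timer I imagined (building names and delays are made up); what I can't judge is whether it scales to a million pending events:</p>

```python
import asyncio

async def fire_after(delay, callback, *args):
    # Each pending event is just a suspended coroutine, not a blocked thread
    await asyncio.sleep(delay)
    callback(*args)

async def main():
    done = []
    # Schedule two "building upgrade" events with different delays
    tasks = [
        asyncio.create_task(fire_after(0.01, done.append, "barracks upgraded")),
        asyncio.create_task(fire_after(0.02, done.append, "wall upgraded")),
    ]
    await asyncio.gather(*tasks)
    return done

print(asyncio.run(main()))  # ['barracks upgraded', 'wall upgraded']
```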
| <python><scheduled-tasks><python-asyncio> | 2023-07-26 11:59:37 | 1 | 604 | Abdelhamed Abdin |
76,770,932 | 2,307,570 | Why does a library not appear in `pip freeze`, although it is present and works? | <p>I have a Python project with a virtual environment,<br>
and I have installed some external libraries like NumPy and NetworkX.</p>
<pre><code>>>> import networkx, numpy
>>> networkx.__version__
'2.6.3'
>>> networkx.__file__
'/home/tilman/Code/discrete_helpers/env/lib/python3.8/site-packages/networkx/__init__.py'
>>> numpy.__version__
'1.18.5'
>>> numpy.__file__
'/home/tilman/Code/discrete_helpers/env/lib/python3.8/site-packages/numpy/__init__.py'
</code></pre>
<p>But while <code>pip freeze</code> and <code>pip list</code> show NumPy, they do not show NetworkX.<br>
(No change with the <code>--all</code> flag.)<br>
Why might that be, and how could I get a complete <code>requirements.txt</code>?</p>
<pre><code>(env) tilman@t570:~/Code/discrete_helpers$ pip freeze
attrs==21.2.0
bottle==0.12.23
more-itertools==8.10.0
mpmath==1.2.1
numpy==1.18.5
packaging==21.0
pluggy==0.13.1
py==1.10.0
pyparsing==2.4.7
pytest==5.4.3
sympy==1.8
wcwidth==0.2.5
</code></pre>
<pre><code>(env) tilman@t570:~/Code/discrete_helpers$ pip list
Package Version
-------------- -------
attrs 21.2.0
bottle 0.12.23
more-itertools 8.10.0
mpmath 1.2.1
numpy 1.18.5
packaging 21.0
pip 20.0.2
pkg-resources 0.0.0
pluggy 0.13.1
py 1.10.0
pyparsing 2.4.7
pytest 5.4.3
setuptools 47.1.1
sympy 1.8
wcwidth 0.2.5
</code></pre>
<hr />
<p><code>which pip</code> and <code>which python</code> show paths in the expected <code>env</code>.</p>
<pre><code>(env) tilman@t570:~/Code/discrete_helpers$ which pip
/home/tilman/Code/discrete_helpers/env/bin/pip
(env) tilman@t570:~/Code/discrete_helpers$ which python
/home/tilman/Code/discrete_helpers/env/bin/python
</code></pre>
<hr />
<p>BTW, in a <strong>different project</strong> it works as expected:</p>
<pre><code>(env) tilman@t570:~/somwhere/else$ pip install networkx
(env) tilman@t570:~/somwhere/else$ pip freeze
networkx==3.1
</code></pre>
| <python><pip> | 2023-07-26 11:45:02 | 1 | 1,209 | Watchduck |
76,770,894 | 4,507,231 | Plot a Python Matplotlib imshow (heatmap) from a dictionary of lists | <p>I have a dictionary of lists:</p>
<pre><code>dictA = {"A": [1,2,3], "B": [4,5,6], "C": [7,8,9]}
</code></pre>
<p>I want to display the data as a heatmap using Matplotlib's imshow. A, B and C run vertically (y), and the values are the corresponding cell values running left to right. This would result in a 3x3 plot.</p>
<p>Ideally, I would like to keep this native to core Python and avoid verbose data-type wrangling via packages like NumPy, but if it can't be helped, it can't be helped - I just want it to work.</p>
<p>The basics would be:</p>
<pre><code>import matplotlib.pyplot as plt
dictA = {"A": [1,2,3], "B": [4,5,6], "C": [7,8,9]}
SOMETHING = #wrangling my dictionary to be "array-like"
plt.imshow(SOMETHING)
plt.show()
</code></pre>
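<p>To show where I'm stuck, here is my best guess at the wrangling. I believe imshow can take a plain nested list directly, so <code>list(dictA.values())</code> may be all the "array-like" conversion needed, but I'm not sure the labelling is idiomatic (the Agg backend is used here only so the sketch runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt

dictA = {"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}

grid = list(dictA.values())   # each dict value becomes one row of the heatmap
fig, ax = plt.subplots()
ax.imshow(grid)
ax.set_yticks(range(len(dictA)))
ax.set_yticklabels(list(dictA.keys()))  # A, B, C running vertically
fig.savefig("heatmap.png")
```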
| <python><matplotlib><imshow> | 2023-07-26 11:40:50 | 1 | 1,177 | Anthony Nash |
76,770,862 | 18,895,773 | TensorFlow does not allow multiple objects to be returned from the call function | <p>This is the call method in my class that extends tf.keras.Model:</p>
<pre><code>def call(self, inputs):
    """
    Forward pass through the model. Used during training.
    """
    model_out = self.base_model(inputs)
    return model_out, inputs
</code></pre>
<p>As you can see, I am returning the outputs as well as the inputs.</p>
<p>Therefore, in my custom loss, the call method looks like this:</p>
<pre><code>def __call__(self, returns, model_output):
    weights, positions = model_output
    ....
    return -loss_value
</code></pre>
<p>When I test in isolation, everything works, but when I create the model and try to fit it, I get:</p>
<pre><code>weights, positions = model_output
OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
</code></pre>
| <python><tensorflow> | 2023-07-26 11:36:54 | 0 | 362 | Petar Ulev |
76,770,854 | 5,311,367 | Embedding a Streamlit App into an Existing React.js Application through an iFrame | <p>I have an existing React.js application and I want to integrate a Streamlit app into it. Is it possible to achieve this by using an iFrame? My goal is to leverage the data analysis and visualization capabilities provided by the Streamlit app.</p>
<p>Has anyone attempted to embed a Streamlit app into a React.js application using an iFrame before? If so, I would appreciate any insights into the best approach and potential challenges to be aware of.</p>
<p>If embedding Streamlit via an iFrame is not recommended, are there any alternative methods to achieve this integration without rewriting the entire application? Any suggestions or pointers would be greatly appreciated.</p>
<p>Thank you in advance for your help!</p>
| <javascript><python><reactjs><iframe><streamlit> | 2023-07-26 11:36:15 | 1 | 1,675 | nilesh1212 |
76,770,817 | 18,649,992 | Parallelism inside sequential loop with JAX | <p>How is the following data-location parallelism translated to a per-device implementation with collective communication using <code>jax</code>?</p>
<pre><code>import os
os.environ["XLA_FLAGS"] = (
f'--xla_force_host_platform_device_count=8'
)
import functools as ft
import jax as jx
import jax.numpy as jnp
import jax.random as jrn
import jax.lax as jlx
import jax.experimental.mesh_utils as jxm
import jax.sharding as jsh
# Create a sharding to distribute across devices
devices = jxm.create_device_mesh((8,))
sharding = jsh.PositionalSharding(devices)
@ft.partial(jx.jit, static_argnums=0, out_shardings=sharding.reshape(8, 1))
def test(m):
    # Generate random dense operator that respects output sharding
    key = jrn.PRNGKey(0)
    J = jrn.uniform(
        key=key,
        shape=(m, m),
        dtype='f8',
        minval=-0.1,
        maxval=+0.1)
    J = 0.1*J+jnp.eye(m, m, dtype='f8')
    # Generate random initial state that respects output sharding
    y = jrn.uniform(
        key=key,
        shape=(m, 1),
        dtype='f8',
        minval=+0.0,
        maxval=+1.0)

    def calc_body(y, i):
        return y+1.0e-06*J @ y, None

    y, empty = jlx.scan(calc_body, y, None, length=1000)
    return y
test_comp = test.lower(2**14).compile()
y = test_comp()
jx.debug.visualize_array_sharding(y)
+---+
+ 0 +
+---+
.
.
.
+---+
+ 8 +
+---+
</code></pre>
<p>For memory complexity reasons, and use of a carry, the outer loop is performed using <code>jax.lax.scan</code>.</p>
<p>From my understanding, making the parallel execution explicit inside <code>jax.lax.scan</code>, with <code>jax.pmap</code> then <code>jax.lax.psum</code> for example, will lead to unnecessary recompilation and gathers, especially if the outer <code>test</code> function is also compiled.</p>
| <python><python-3.x><parallel-processing><multiprocessing><jax> | 2023-07-26 11:30:58 | 0 | 440 | DavidJ |
76,770,579 | 22,221,987 | Python socket receives outdated data because main-thread time.sleep( )ing | <p>I'm receiving data from a socket. I connect to the socket and receive data of a specific size in a <code>while</code> loop with <code>socket.recv(size)</code>.</p>
<p>Everything works normally at a high receive frequency.
But if I add a <code>time.sleep(sec)</code> to the <code>while</code> loop, I start to receive an outdated value every time. It looks like the socket buffer fills with old data: while the host keeps sending data as often as every 0.002 seconds, I can only receive outdated data at my 1-second receive frequency (as an example).
I can't share the data from the socket (it would be too large for a post), but here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import datetime
import logging
import socket
import time
from app.service.c_structures import RTDStructure
logging.basicConfig(level=logging.DEBUG)
class RTDSerializer:
    def __init__(self, ip: str, port: int = 29000, frequency: float = 0.002):
        self.data: dict = {}
        self.ip = ip
        self.port = port
        self.frequency = frequency
        self.sock = None
        self.struct_size = ctypes.sizeof(RTDStructure)
        self.logger = logging
        print(ctypes.sizeof(RTDStructure))

    def connect(self):
        try:
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.connect((self.ip, self.port))
            self.sock.settimeout(None)
            logging.debug(f"Socket connect [{self.ip}:{self.port}] --> Ok")
            while True:
                c_structure = RTDStructure.from_buffer_copy(self.receive_raw_data() * ctypes.sizeof(RTDStructure))
                self.data = self.ctypes_to_dict(c_structure)
                print(datetime.datetime.now(), self.data['move_des_q'])
                time.sleep(self.frequency)
        except Exception as error:
            logging.error(f"Socket connect [{self.ip}:{self.port}] --> False\n{error}")
            return 0

    def receive_raw_data(self) -> bytes or connect:
        raw_data = self.sock.recv(self.struct_size)
        if raw_data == b'':
            logging.error('Connection lost')
            return self.connect()
        return raw_data

    def ctypes_to_dict(self, ctypes_obj) -> dict or list:
        if isinstance(ctypes_obj, ctypes.Structure):
            data_dict = {}
            for field_name, field_type in ctypes_obj.get_fields():
                field_value = getattr(ctypes_obj, field_name)
                if isinstance(field_value, (ctypes.Structure, ctypes.Array)):
                    data_dict[field_name] = self.ctypes_to_dict(field_value)
                else:
                    data_dict[field_name] = field_value
            return data_dict
        elif isinstance(ctypes_obj, ctypes.Array):
            data_list = []
            for element in ctypes_obj:
                if isinstance(element, (ctypes.Structure, ctypes.Array)):
                    data_list.append(self.ctypes_to_dict(element))
                else:
                    data_list.append(element)
            return data_list


if __name__ == '__main__':
    rtd = RTDSerializer(ip='192.168.0.226', port=29000, frequency=0.05)
    rtd.connect()
</code></pre>
<p>I receive data from socket. It's a bytestring with C values. I serialize it with ctypes structure and convert to dictionary. The serialization algorithm is not important for this topic.</p>
<p>In addition, the socket returns 0 bytes sometimes, so, I need to check this issue every time in <code>receive_raw_data</code> func.</p>
<p>I've tried forcing a reconnect on every iteration of the <code>while</code> loop, and this somehow solves the outdated-info problem. Like this:</p>
<pre class="lang-py prettyprint-override"><code> def connect(self):
try:
logging.debug(f"Socket connect [{self.ip}:{self.port}] --> Ok")
while True:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((self.ip, self.port))
self.sock.settimeout(None)
c_structure = RTDStructure.from_buffer_copy(self.receive_raw_data() * ctypes.sizeof(RTDStructure))
self.data = self.ctypes_to_dict(c_structure)
print(datetime.datetime.now(), self.data['move_des_q'])
time.sleep(self.frequency)
self.sock.close()
except Exception as error:
logging.error(f"Socket connect [{self.ip}:{self.port}] --> False\n{error}")
return 0
</code></pre>
<p>But it slows things down too much when I'm using a very high receiving frequency, and matching the host's sending frequency is really important.</p>
<p>So, how can I solve this problem?</p>
<p>What's the problem with socket disconnection?</p>
<p>Why does the socket buffer fill up instead of just throwing away the outdated data?</p>
<p>I'm thinking about a <code>clear_buffer</code> function which would throw away all unused data while the thread is sleeping, but this is going to be unreasonably complicated... Any thoughts?</p>
<p><strong>UPD</strong>: Maybe I should just write a printing thread that wakes up periodically (at the configured frequency), prints the current value from <code>data_buffer</code>, and then sleeps again, while the socket thread runs at high frequency and keeps overwriting this <code>data_buffer</code> value?</p>
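<p>That buffered-reader idea is the standard fix: TCP never discards data, so a slow reader always sees a growing backlog; instead, let one thread drain the socket at full speed and keep only the newest message. A minimal sketch of the idea (the <code>recv_func</code> parameter stands in for your <code>receive_raw_data</code>; all names here are illustrative, not from the original code):</p>

```python
import threading

class LatestValue:
    """Thread-safe holder that keeps only the newest message.

    The reader thread calls set() as fast as the host sends; the
    consumer calls get() at its own (slower) frequency and always
    sees the most recent data instead of a backlog of stale packets.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value

def reader_loop(recv_func, buffer, stop_event):
    # recv_func stands in for receive_raw_data(); it should block
    # until the next full struct-sized message arrives.
    while not stop_event.is_set():
        data = recv_func()
        if data:
            buffer.set(data)
```

<p>Run <code>reader_loop</code> in a <code>threading.Thread</code>; the printing/consumer side then simply calls <code>buffer.get()</code> once per second and never sees stale data, because the reader has already discarded it by overwriting.</p>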
<p><strong>New little UPD with micro example:</strong></p>
<pre><code>import ctypes
import datetime
import logging
import socket
import time
logging.basicConfig(level=logging.DEBUG)
class RTDReceiver:
def __init__(self, ip: str, port: int, frequency: float = 0.002):
self.data: dict = {}
self.ip = ip
self.port = port
self.frequency = frequency
self.sock = None
self.struct_size = 1064 # 1064 bytes in my case.
def connect(self):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((self.ip, self.port))
self.sock.settimeout(None)
def receive_raw_data(self) -> bytes:
raw_data = self.sock.recv(self.struct_size)
if raw_data == b'':
logging.error('Connection lost')
self.connect()
time.sleep(self.frequency)
return raw_data
if __name__ == '__main__':
rec = RTDReceiver(ip='here your ip', port='here is your port', frequency=0.02)
rec.connect()
while True:
print(rec.receive_raw_data())
</code></pre>
| <python><c><python-3.x><sockets> | 2023-07-26 11:03:08 | 1 | 309 | Mika |
76,770,571 | 4,743,084 | Override IGNORED_EXCEPTIONS in WebDriverWait wait.py | <p>Is it possible to globally override the IGNORED_EXCEPTIONS for WebDriverWait?</p>
<p>rather than having</p>
<pre class="lang-py prettyprint-override"><code>return WebDriverWait(self.browser, 10, ignored_exceptions=(StaleElementReferenceException, NoSuchFrameException),
poll_frequency=1).until(EC.visibility_of_element_located((By.ID, 'ID')))
</code></pre>
<p>I would prefer not to put these exceptions explicitly on every element; I would rather just override the defaults globally if possible</p>
<p>so it should read</p>
<pre class="lang-py prettyprint-override"><code>return WebDriverWait(self.browser, 10).until(EC.visibility_of_element_located((By.ID, 'ID')))
</code></pre>
<p><a href="https://github.com/SeleniumHQ/selenium/blob/trunk/py/selenium/webdriver/support/wait.py#L26C60-L26C85" rel="nofollow noreferrer">https://github.com/SeleniumHQ/selenium/blob/trunk/py/selenium/webdriver/support/wait.py#L26C60-L26C85</a>
You can see here on line 26 that by default it has NoSuchElementException; can we change this default somehow?</p>
<p>I have tried to set it in <code>__init__.py</code> using</p>
<pre class="lang-py prettyprint-override"><code>import selenium
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
selenium.webdriver.support.wait.IGNORED_EXCEPTIONS = (NoSuchElementException, StaleElementReferenceException)
</code></pre>
<p>but this seems to have no effect.</p>
<p>Any help is highly appreciated.</p>
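<p>A common workaround, instead of monkeypatching the module constant, is to define one <code>WebDriverWait</code> subclass that injects your preferred defaults and use it everywhere. Below is a selenium-free sketch of the pattern — <code>FakeWait</code> merely stands in for selenium's <code>WebDriverWait</code> so the example runs on its own; in real code you would subclass the import from <code>selenium.webdriver.support.wait</code> and list your selenium exception classes:</p>

```python
class FakeWait:
    # Stand-in for selenium.webdriver.support.wait.WebDriverWait,
    # mirroring its ignored_exceptions keyword.
    def __init__(self, driver, timeout, ignored_exceptions=None, poll_frequency=0.5):
        self.ignored = tuple(ignored_exceptions or ())
        self.timeout = timeout
        self.poll_frequency = poll_frequency

class MyWait(FakeWait):
    # Your project-wide defaults; with selenium these would be e.g.
    # (NoSuchElementException, StaleElementReferenceException).
    DEFAULT_IGNORED = (ValueError, KeyError)

    def __init__(self, driver, timeout, **kwargs):
        # setdefault keeps per-call overrides working
        kwargs.setdefault("ignored_exceptions", self.DEFAULT_IGNORED)
        super().__init__(driver, timeout, **kwargs)
```

<p>Call sites then shrink to <code>MyWait(self.browser, 10).until(...)</code>, and any single call can still pass its own <code>ignored_exceptions</code> to override the defaults.</p>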
| <python><selenium-webdriver><webdriverwait> | 2023-07-26 11:02:06 | 1 | 395 | Brian Mitchell |
76,770,505 | 9,525,238 | numpy Finding multiple occurrences in an array by index | <p>Given the following array:</p>
<pre><code>array = [-1, -1, -1, -1, -1, -1, 3, 3, -1, 3, -1, -1, 2, 2, -1, -1, 1, -1]
indexes 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
</code></pre>
<p>I need to find the indexes where the same number appears.
In this example this would return a list of lists like this:</p>
<pre><code>list(list(), list(16), list(12, 13), list(6, 7, 9), list() etc...)
0 1 \ 2 3 4
^ \
\ \ the index in the array at which "1" appears
\
\ the numbers in the array
</code></pre>
<p>how would one do this in numpy?</p>
<p>the number 1 appears at index 16<br />
the number 2 appears at indexes 12, 13<br />
etc.</p>
<p>NOTES based on comments:</p>
<ul>
<li><p>-1 can be ignored, i'm only interested in the rest</p>
</li>
<li><p>array has ~50 elements with values up to int(500)</p>
</li>
<li><p>this function will be called 6000+ times.</p>
</li>
</ul>
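<p>One way to meet the speed requirement (a sketch; the helper name is mine): keep only the non-sentinel indices, sort them by value once, then split where the value changes — O(n log n) per call with no Python loop over elements. It returns a dict keyed by value, from which the list-of-lists layout is a one-liner:</p>

```python
import numpy as np

def indexes_by_value(arr, sentinel=-1):
    # Group indices by value (ignoring the sentinel) with one sort and
    # one split -- repeated calls stay fast because there is no
    # per-element Python loop. Assumes at least one non-sentinel value.
    a = np.asarray(arr)
    idx = np.flatnonzero(a != sentinel)
    vals = a[idx]
    order = np.argsort(vals, kind="stable")
    vals, idx = vals[order], idx[order]
    cut = np.flatnonzero(np.diff(vals)) + 1          # where the value changes
    groups = np.split(idx, cut)
    return {int(v): g.tolist() for v, g in zip(vals[np.r_[0, cut]], groups)}
```

<p>For the sample input this yields <code>{1: [16], 2: [12, 13], 3: [6, 7, 9]}</code>; missing values simply have no key.</p>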
| <python><python-3.x><list><numpy> | 2023-07-26 10:54:48 | 6 | 413 | Andrei M. |
76,770,335 | 3,668,129 | How to create nested dataframe from nested dictionary? | <p>I have the following dictionary:</p>
<pre><code>nested_dict = {
'Type A': {'Type A': 10, 'Type B': 20},
'Type EE': {'Type B': 40, 'Type C': 50, 'Type A': 60},
'Type FFF': {'Type ZZ': 70, 'Type FFF': 80, 'Type A': 90, 'Type AA': 1}
}
</code></pre>
<p>Is it possible to create the following pandas dataframe from this dictionary:</p>
<pre><code>|--------------------------------|
| class | predictions |
| |--------------------|
| | TYPE | COUNTER |
|--------------------------------|
| | Type A | 10 |
| Type A | Type B | 20 |
|--------------------------------|
| | Type B | 40 |
| Type EE | Type C | 50 |
| | Type A | 60 |
|--------------------------------|
| | Type zz | 70 |
| Type FFF | Type FFF | 80 |
| | Type A | 90 |
| | Type AA | 1 |
|--------------------------------|
</code></pre>
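<p>This looks like a job for a two-level <code>MultiIndex</code>: flatten the nested dict into rows, then promote the outer and inner keys to the index. A sketch (column names are taken from the ASCII table above, reading "COUTER" as "COUNTER"):</p>

```python
import pandas as pd

nested_dict = {
    'Type A': {'Type A': 10, 'Type B': 20},
    'Type EE': {'Type B': 40, 'Type C': 50, 'Type A': 60},
    'Type FFF': {'Type ZZ': 70, 'Type FFF': 80, 'Type A': 90, 'Type AA': 1},
}

# Flatten to (class, TYPE, COUNTER) rows, then make the first two
# columns a MultiIndex so each class groups its predictions.
rows = [
    (cls, typ, counter)
    for cls, preds in nested_dict.items()
    for typ, counter in preds.items()
]
df = (
    pd.DataFrame(rows, columns=['class', 'TYPE', 'COUNTER'])
    .set_index(['class', 'TYPE'])
)
print(df)
```

<p>Printing the frame reproduces the grouped layout above, with the outer "class" label shown once per group.</p>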
| <python><python-3.x><pandas> | 2023-07-26 10:31:29 | 3 | 4,880 | user3668129 |
76,770,184 | 284,696 | Memory usage of sparse matrices (scipy) | <p>I was expecting <code>scipy</code>'s sparse matrices to use a lot less memory than the na(t)ive list of lists representation, but experiments have proven me wrong. In the snippet below, I'm building a random binary matrix with around 75% zeroes, and then comparing the memory usage with each available representation in <code>scipy.sparse</code> (simplified IPython session):</p>
<pre><code># %load_ext memory_profiler, from scipy import sparse, etc.
# ...
%memit M = random_binary_matrix(5000, 5000) # contains ints
peak memory: 250.36 MiB, increment: 191.77 MiB
In : sum(line.count(0) for line in M) / (len(M) * len(M[0]))
Out: 0.75004468
%memit X_1 = sparse.bsr_matrix(M)
peak memory: 640.49 MiB, increment: 353.76 MiB
%memit X_2 = sparse.coo_matrix(M)
peak memory: 640.71 MiB, increment: 286.09 MiB
%memit X_3 = sparse.csc_matrix(M)
peak memory: 807.51 MiB, increment: 357.53 MiB
%memit X_4 = sparse.csr_matrix(M)
peak memory: 840.04 MiB, increment: 270.91 MiB
%memit X_5 = sparse.dia_matrix(M)
peak memory: 1075.20 MiB, increment: 386.87 MiB
%memit X_6 = sparse.dok_matrix(M)
peak memory: 3059.86 MiB, increment: 1990.62 MiB
%memit X_7 = sparse.lil_matrix(M)
peak memory: 2774.67 MiB, increment: 385.39 MiB
</code></pre>
<p>Am I doing something wrong? Am I missing something (including the point of these alternative representations)?</p>
<p>... or is <code>memory_profiler</code>, or my lack of comprehension thereof, to blame? In particular, the relationship between "peak memory" and "increment" seems dubious at times: initialising <code>X_2</code> supposedly increments memory usage by 286.09 MiB, yet the peak memory usage is barely above what it was prior to executing that line.</p>
<p>If it matters: I'm running Debian 12, Python 3.11.2, IPython 8.5.0, scipy 1.10.1, memory_profiler 0.61.0</p>
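<p>One thing worth separating out: <code>%memit</code> reports the allocator's peak <em>during construction</em> — which includes temporaries made while converting the list of lists — not the size of the finished matrix. The arrays backing a sparse matrix can be measured directly with <code>.nbytes</code>, which makes the density trade-off visible. A sketch at a smaller size (parameters are mine):</p>

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = (rng.random((1000, 1000)) < 0.25).astype(np.int64)  # ~25% nonzero

csr = sparse.csr_matrix(dense)
# CSR stores one value (8 bytes for int64) plus one column index per
# nonzero, plus a small row-pointer array.
csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes

print(dense.nbytes, csr_bytes)
```

<p>At ~25% density the CSR payload is already well under the dense array's 8 MB, even though the peak during conversion can be much higher; the break-even point depends on the dtype and index width, roughly density below 1/3 for 8-byte values with 4-byte indices.</p>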
| <python><scipy><memory-profiling> | 2023-07-26 10:12:45 | 2 | 2,826 | Anthony Labarre |
76,770,121 | 15,893,581 | maximize (with scipy) production having constrained budget | <p>I tried to solve a common optimization problem: maximize profit from a certain production function given a limited budget (7800$), but something is still wrong in this optimization logic:</p>
<pre><code>from scipy.optimize import minimize
def max_out(x): # Q_func
return 36*x[0]**(1/2) * x[1]**(1/3) * x[2]**(1/4)
def obj(x): # maximize total P
y= -(25*x[0] + 20*x[1] + 10*x[2]) # maximize production revenue
return y
def constr(x):
return ( 7800- (25*x[0] + 20*x[1] + 10*x[2]) ) # constraint is budget 7800
cons = ({'type': 'ineq', 'fun': constr },
{'type': 'ineq', 'fun': lambda x: x[0] },
{'type': 'ineq', 'fun': lambda x: x[1] },
{'type': 'ineq', 'fun': lambda x: x[2] })
##bnds = ((0, None), (0, None), (0, None)) # bounds= bnds,
res = minimize(obj, (10, 10, 10), method='SLSQP', constraints=cons)
print(res.x)
r= [round(x) for x in res.x]
print("raw materials needed for production func:", r)
print(f'to maximize production revenue to {-obj(r):.2f} $')
print("cost of product: ", max_out(r))
# raw materials needed for production func: [171, 139, 74]
# to maximize production revenue to 7795.00 $
# cost of product: 7152.316966012599
</code></pre>
<p>As a result, I've probably got the total price of raw materials [x0, x1, x2] closest to the budget... but the max_out func gives a cheaper total price for everything produced. I need max_out to be close to the budget (as the production price), in order to find the price for sale (an overall cost that should be higher than the inputted budget)... something is wrong! Or have I formulated the task for Python incorrectly?</p>
<p>P.S. Frankly speaking, this is not the first time I've gotten a smaller product price compared to the raw materials while trying to solve tasks like this - but it seems strange to me... What is my misunderstanding? And how do I change the code to fit the raw materials into the budget & maximize the total cost of produced units?</p>
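<p>If the goal is to maximize output for the given budget, the production function itself should be the (negated) objective, with the budget as the inequality constraint — in the code above the objective is the input cost, so the solver just spends the whole budget. A sketch of that formulation (same numbers as above):</p>

```python
from scipy.optimize import minimize

budget = 7800

def production(x):  # Q = 36 * x0^(1/2) * x1^(1/3) * x2^(1/4)
    return 36 * x[0] ** 0.5 * x[1] ** (1 / 3) * x[2] ** 0.25

cons = ({'type': 'ineq',
         'fun': lambda x: budget - (25 * x[0] + 20 * x[1] + 10 * x[2])},)
bnds = ((1e-6, None),) * 3  # keep inputs positive so the powers are defined

res = minimize(lambda x: -production(x), x0=(10, 10, 10),
               method='SLSQP', bounds=bnds, constraints=cons)
spend = 25 * res.x[0] + 20 * res.x[1] + 10 * res.x[2]
print(res.x, spend, production(res.x))
```

<p>Handy sanity check: for a Cobb-Douglas function the optimum spends budget shares proportional to the exponents (1/2 : 1/3 : 1/4), i.e. about 3600$, 2400$ and 1800$ on the three inputs.</p>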
| <python><scipy-optimize><economics> | 2023-07-26 10:06:26 | 4 | 645 | JeeyCi |
76,770,063 | 2,546,099 | Reshaping of 1d-array into list of 2d-arrays in python | <p>I have a 1d-data stream with a length of <code>M</code>, concatenating <code>N</code> channels, which I would like to feed to a processing function (i.e. having a total length of <code>M*N</code>. This processing function expects data split into channels, and with a slice size of <code>K</code>, with <code>K</code> smaller or equal than <code>M</code>, i.e. corresponding to a 2d-array with a shape of <code>[N, K]</code>.</p>
<p>My current approach to reshape my input data from 1d into a list of 2d-arrays (i.e. effectively a 3d-array) is the following:</p>
<pre><code># Generation of test data with a shape of N*M as 1d-array
data_list:list[float] = []
for channel_no in range(N):
data_list.extend(
(np.arange(M) + 10 * M * channel_no).tolist()
)
# Reshaping of 1d-array into 2d-arrays, effectively splitting
# the data into N channels
in_data: np.ndarray = np.asarray(data_list).reshape(
N, M
)
# Generating list of 2d-arrays with shape [N, K] and a length of M // K
target_data: np.ndarray = reshape_numpy_array_with_equal_blocks(
in_array=in_data,
target_shape=(N, K),
drop_last=True,
use_padding=True,
)
</code></pre>
<p>with the corresponding function <code>reshape_numpy_array_with_equal_blocks()</code>:</p>
<pre><code>def reshape_numpy_array_with_equal_blocks(
in_array: np.ndarray,
target_shape: Tuple[int, int],
use_padding: bool = False,
drop_last: bool = True,
padding_val: float = 0.0,
) -> np.ndarray:
"""_summary_
Args:
in_array (np.ndarray): _description_
target_shape (Tuple[int, int]): _description_
use_padding (bool, optional): _description_. Defaults to False.
drop_last (bool, optional): _description_. Defaults to True.
padding_val (float, optional): _description_. Defaults to 0.0.
Raises:
NotImplementedError: _description_
Returns:
np.ndarray: _description_
"""
in_array_shape = in_array.shape
if len(in_array_shape) != 2:
raise NotImplementedError()
ret_array = []
for i in range(0, in_array_shape[-1], target_shape[-1]):
channel_array = []
if i + target_shape[-1] <= in_array_shape[-1]:
for channel in range(in_array_shape[0]):
channel_array.append(
in_array[channel, i : i + target_shape[-1]].tolist()
)
ret_array.append(channel_array[:])
else:
if not drop_last and use_padding:
for channel in range(in_array_shape[0]):
cur_data = in_array[channel, i:].tolist()
cur_data_len = len(cur_data)
cur_data.extend([padding_val] * (target_shape[-1] - cur_data_len))
channel_array.append(cur_data[:])
ret_array.append(channel_array[:])
return np.array(ret_array)
</code></pre>
<p>However, I'm not sure if this approach is the most efficient version, as it contains quite a bit of re-allocation and copying of memory. What would be a better approach?</p>
<p>I was thinking of using a generator which accesses sliced parts of the original data list, such that no additional allocations/copies are necessary, but I'm not sure if that is the way to go either. Would that be a viable solution, or are there even better approaches? Or can this problem be circumvented completely?</p>
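<p>If dropping (or padding) the tail can happen in one shot, the whole thing reduces to a single <code>reshape</code> plus an axis swap — no per-channel Python loops and no intermediate lists. A sketch (the function name is mine):</p>

```python
import numpy as np

def split_into_blocks(in_array, k, drop_last=True, padding_val=0.0):
    """Turn shape (N, M) into (num_blocks, N, k) with one reshape."""
    n, m = in_array.shape
    if m % k and not drop_last:
        pad = (-m) % k
        in_array = np.pad(in_array, ((0, 0), (0, pad)),
                          constant_values=padding_val)
        m += pad
    else:
        m = (m // k) * k       # drop the incomplete tail
        in_array = in_array[:, :m]
    # (N, M) -> (N, num_blocks, k) -> (num_blocks, N, k); the transpose
    # itself returns a view, so no per-block copying happens here.
    return in_array.reshape(n, m // k, k).transpose(1, 0, 2)
```

<p>Iterating over the first axis then yields exactly the (N, K) slices the processing function expects, and the only copies are the optional pad/trim at the start.</p>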
| <python><arrays><numpy> | 2023-07-26 09:59:02 | 1 | 4,156 | arc_lupus |
76,770,029 | 14,457,833 | Why is Django migrate command not creating tables in database for specific app? | <p>I've an app named Attendance and it contains the following migrations applied in <a href="/questions/tagged/postgresql" class="post-tag" title="show questions tagged 'postgresql'" aria-label="show questions tagged 'postgresql'" rel="tag" aria-labelledby="tag-postgresql-tooltip-container">postgresql</a> db.</p>
<pre><code>attendance
[X] 0001_initial
[X] 0002_delete_leave
[X] 0003_alter_holiday_options_alter_shift_options
[X] 0004_delete_holiday_alter_shift_holidays
[X] 0005_delete_shift
[X] 0006_initial
[X] 0007_alter_leave_options
[X] 0008_alter_leave_comment_alter_leave_is_regularized
[X] 0009_attendance
[X] 0010_alter_attendance_options_attendance_created_at_and_more
[X] 0011_attendance_status
[X] 0012_attendance_is_regularized
[X] 0013_alter_attendance_is_regularized
[X] 0014_remove_attendance_date_attendance_start_date_and_more
[X] 0015_attendance_end_date
[X] 0016_alter_attendance_end_date
[X] 0017_alter_attendance_end_date_and_more
[X] 0018_leavetype_remove_leave_half_day_and_more
[X] 0019_leave_leave_type
</code></pre>
<p>When I run <code>python manage.py migrate</code>, with or without the app label, it does not create tables in the db.</p>
<p>I've not set <code>managed=False</code>, and I tried deleting all migrations from the <code>django_migrations</code> table, but no luck.</p>
<p>One more point I forgot to add: I'm using <a href="https://django-tenants.readthedocs.io/en/latest/" rel="nofollow noreferrer"><kbd>django-tenants</kbd></a></p>
<h2>Update</h2>
<p>To debug the issue, I deleted my local database and kept migrations. Before migrating I ran migrate command with <a href="https://docs.djangoproject.com/en/4.2/ref/django-admin/#cmdoption-migrate-plan" rel="nofollow noreferrer"><strong><code>--plan</code></strong></a> flag</p>
<pre><code>python manage.py migrate --plan
</code></pre>
<p>Above command gives me this error</p>
<pre><code>[standard:public] Change Meta options on todo
[standard:public] todo_task.0004_todo_created_at_todo_updated_at
[standard:public] Add field created_at to todo
[standard:public] Add field updated_at to todo
Traceback (most recent call last):
File "env\lib\site-packages\django\db\backends\utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedTable: relation "tenant_tenant" does not exist
LINE 1: SELECT "tenant_tenant"."schema_name" FROM "tenant_tenant" WH...
...
</code></pre>
| <python><django><django-models><django-migrations><django-tenants> | 2023-07-26 09:56:17 | 3 | 4,765 | Ankit Tiwari |
76,769,996 | 2,537,394 | Why do a//b and np.floor(a/b) produce different results? | <p>I stumbled upon a weird edge behaviour of python 3's "flooring division" using <code>//</code>. After some calculations I arrive at the formula <code>(a1 - a2) // b</code>. Using</p>
<pre class="lang-py prettyprint-override"><code>a1 = 226.72560000000001
a2 = 0.2856000000000165
b = 1.02
# all types are np.float64
</code></pre>
<p>I get a different result than had I used <code>np.floor()</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> (a1 - a2) // b
221.0
>>> np.floor((a1 - a2) / b)
222.0
</code></pre>
<p>This difference obviously stems from floating point calculation, but why do they behave differently? Using a higher precision calculator such as <a href="https://keisan.casio.com/calculator" rel="nofollow noreferrer">ke!san</a> I can verify that <code>//</code>'s behaviour is mathematically correct, while for my calculation <code>np.floor()</code> would deliver the correct result.</p>
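<p>The short version: <code>a / b</code> first rounds the true quotient to the nearest float — here it rounds <em>up</em> to exactly 222.0, so flooring it gives 222 — while float floor division is defined via the exact remainder (<code>fmod</code>), so it floors the exact quotient of the two stored operands, which is just below 222. A sketch that makes both visible:</p>

```python
import math

a1 = 226.72560000000001
a2 = 0.2856000000000165
b = 1.02

a = a1 - a2            # stored value is a hair below 226.44
q = a / b              # nearest double to the true quotient: exactly 222.0
r = math.fmod(a, b)    # exact remainder, just under 1.02 -> true quotient < 222

print(q, math.floor(q), a // b, r)
```

<p>So both operations are "correct" in their own terms: <code>//</code> answers with the exact quotient of the stored doubles, while <code>floor(a / b)</code> answers with the floor of an already-rounded quotient.</p>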
| <python><python-3.x><numpy><floating-point><precision> | 2023-07-26 09:52:21 | 2 | 731 | YPOC |
76,769,915 | 662,285 | Merge json files using for loop in python | <p>I have written the logic below to merge two JSON files in Python, and it works perfectly fine.
Now I want the method to take a single "filenames" parameter so that I can pass "n" files and have them all merged. How can I apply a for loop and merge all the JSON files?</p>
<pre><code>def merge_JsonFiles(file1, file2):
first_json = load_json(file1)
second_json = load_json (file2)
merge_json = {**first_json, **second_json}
return merge_json
</code></pre>
| <python><json> | 2023-07-26 09:43:53 | 1 | 4,564 | Bokambo |
76,769,872 | 1,315,125 | How to provide embedding function to a langchain vector store | <p>I am trying to get a simple vector store (chromadb) to embed texts using the add_texts method with langchain, however I get the following error despite successfully using the OpenAI package with a different simple langchain scenario:</p>
<pre><code>ValueError: You must provide embeddings or a function to compute them
</code></pre>
<p>Code:</p>
<pre><code>from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
db = Chroma()
texts = [
"""
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
""",
"""
Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.
""",
]
db.add_texts(texts, embedding_function=OpenAIEmbeddings())
</code></pre>
| <python><openai-api><langchain> | 2023-07-26 09:40:01 | 1 | 3,515 | Igor L. |
76,769,776 | 9,097,114 | Way to Offline Speaker Diarization with Hugging Face | <p>I am looking for an offline / locally saved model for speaker diarization with Hugging Face, without authentication.<br />
I have searched Google and found no relevant links for this.<br />
Is there any link/method to do this?</p>
<p>Thanks in advance</p>
| <python><huggingface-transformers><huggingface><speaker-diarization> | 2023-07-26 09:28:46 | 1 | 523 | san1 |
76,769,729 | 422,005 | pip install can not find versions from Azure Ubuntu | <p>I have created a Python package which is published as a wheel on Azure package registry. For some reason pip will only find the old versions of the package when deploying to a Azure <del>Ubuntu</del> Debian11 machine, whereas all versions are found and correctly installed from my local workstation (also Ubuntu). Starting with a clean virtual environment this is the observed behavior:</p>
<p>Local workstation:</p>
<pre><code>bash% pip index versions sonair_dev
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
sonair_dev (0.0.14)
Available versions: 0.0.14, 0.0.13, 0.0.12, 0.0.11, 0.0.10, 0.0.9, 0.0.8, 0.0.7, 0.0.6, 0.0.5, 0.0.4, 0.0.3, 0.0.2, 0.0.1
INSTALLED: 0.0.15.dev8
LATEST: 0.0.14
</code></pre>
<p>On Azure:</p>
<pre><code>bash% pip index versions sonair_dev
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
sonair_dev (0.0.9)
Available versions: 0.0.9, 0.0.8, 0.0.7, 0.0.6, 0.0.5, 0.0.4, 0.0.3, 0.0.2, 0.0.1
INSTALLED: 0.0.4
LATEST: 0.0.9
</code></pre>
<p>i.e. only a subset of the versions are visible from the Azure client; furthermore on Azure pip insists to install version 0.0.4 - even when i specifically ask for <code>sonair_dev==0.0.9</code> which it allegedly can see. The output from <code>pip install sonair_dev</code>on azure looks like this:</p>
<pre><code>Collecting sonair_dev
Downloading https://XXX/sonair-dev/0.0.9/sonair_dev-0.0.9-py3-none-any.whl (22 kB)
Downloading https://XXX/sonair-dev/0.0.8/sonair_dev-0.0.8-py3-none-any.whl (22 kB)
Downloading https://XXX/sonair-dev/0.0.7/sonair_dev-0.0.7-py3-none-any.whl (22 kB)
Downloading https://XXX/sonair-dev/0.0.6/sonair_dev-0.0.6-py3-none-any.whl (22 kB)
Downloading https://XXX/sonair-dev/0.0.5/sonair_dev-0.0.5-py3-none-any.whl (22 kB)
Downloading https://XXX/sonair-dev/0.0.4/sonair_dev-0.0.4-py3-none-any.whl (22 kB)
</code></pre>
<p>and the <code>0.0.4</code> version is finally installed.</p>
<p>Going from version 0.0.9 to 0.0.10 the package changed from a pure Python package to a package with an included binary rust module; that might be relevant?</p>
<p>Any tips on how to fix/debug this?</p>
| <python><azure><pip> | 2023-07-26 09:23:01 | 1 | 2,081 | user422005 |
76,769,692 | 912,307 | Convert hex float to decimal and vice versa, non-exponential representation | <p>I'm trying to convert non-integers to and from hexadecimal<->decimal representations. This is trivial for integers, but I can't get it to work like I want for non-integers.</p>
<p>I'm looking for the hex representation of the number, not a hex representation of its binary encoding.</p>
<p>I've gotten this far:</p>
<pre class="lang-py prettyprint-override"><code>float.fromhex('2a.0') # prints "42.0" (number), fine
float(42.0).hex() # prints "'0x1.5000000000000p+5'" (string), not fine
</code></pre>
<p><code>float.hex()</code> returns a string holding a number in exponential representation. Is there a formatter that will convert to non-exponential representation? I want <code>'0x2a.0'</code>. Is this possible?</p>
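<p><code>float.hex()</code> always uses the <code>p</code> exponent form, and I'm not aware of a stdlib formatter for the plain form — but the hex digits of a binary float can be extracted exactly, since each multiply-by-16 only shifts the exponent. A sketch for finite values (the function name and the digit cap are mine; very small magnitudes may need more digits for an exact round-trip):</p>

```python
def float_to_plain_hex(x, max_frac_digits=14):
    """Plain (non-exponential) hex for a finite float, e.g. 42.0 -> '0x2a.0'."""
    sign = '-' if x < 0 else ''
    x = abs(x)
    int_part = int(x)
    frac = x - int_part
    digits = []
    for _ in range(max_frac_digits):
        frac *= 16          # exact: only the exponent changes
        d = int(frac)
        digits.append('0123456789abcdef'[d])
        frac -= d           # also exact: strips the leading hex digit
        if frac == 0.0:
            break
    return f"{sign}0x{int_part:x}.{''.join(digits) or '0'}"
```

<p>Because <code>float.fromhex</code> accepts the exponent-free form, the pair gives a round trip: <code>float.fromhex(float_to_plain_hex(v)) == v</code> whenever the digit cap covers all significand bits.</p>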
| <python><floating-point><hex> | 2023-07-26 09:18:35 | 1 | 1,449 | Åsmund |
76,769,479 | 11,771,720 | How to display filtered data in serializer's source attribute Django Rest Framework | <p>I've added an additional action to my ModelViewSet as below. I have a manager function named <code>related_interests</code>, and it should filter the <code>user interests</code>. I put a <code>source</code> attribute on my serializer to get the filtered interests; however, I'm getting all of them in my response.</p>
<pre><code>class MyModelViewSet(
viewsets.GenericViewSet,
mixins.ListModelMixin,
mixins.CreateModelMixin,
mixins.RetrieveModelMixin
):
"""ModelViewSet with create, list, retrieve and custom_action operations"""
queryset = MyModel.objects.all()
serializer_class = serializers.MyModelSerializer
@action(detail=False, methods=['post'])
def custom_action(self, request):
queryset = MyModel.objects.related_interests()
serializer = self.serializer_class(queryset, many=True)
return responses.http_ok_with_dict(serializer.data)
</code></pre>
<p>And here is my serializer;</p>
<pre><code>class UserWithInterestSerializer(serializers.ModelSerializer):
licenses = InterestSerializer(read_only=True, source="interest_set", many=True)
class Meta:
model = User
fields = "__all__"
class MyModelSerializer(serializers.ModelSerializer):
user = UserWithInterestSerializer(read_only=True)
class Meta:
model = MyModel
fields = ["user"]
</code></pre>
<p>How can I fix this issue?</p>
<p>Thanks!</p>
| <python><django><django-rest-framework> | 2023-07-26 08:48:58 | 0 | 626 | imgeaslikok |
76,769,327 | 353,337 | numpy interpolation with array-valued `yp` | <p>I would like to interpolate array-valued data, i.e., the x-values are one-dimensional, but the y-values are (NumPy) arrays. The following doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
xp = np.array([0.0, 1.0, 2.0])
yp = np.array(
[
[-1.1, 2.4, 5.1, 5.6],
[7.1, -2.1, 9.1, 31.0],
[1.1, 13.4, -5.2, 5.6],
]
)
np.interp(0.4, xp, yp)
</code></pre>
<pre><code>ValueError: object too deep for desired array
</code></pre>
<p>Any hints?</p>
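<p><code>np.interp</code> only handles 1-D <code>yp</code>, but the same linear interpolation can be done by hand with <code>searchsorted</code>, vectorized over the rows of <code>yp</code>. A sketch (the helper name is mine; unlike <code>np.interp</code>, this version extrapolates linearly outside the <code>xp</code> range instead of clamping):</p>

```python
import numpy as np

def interp_rows(x, xp, yp):
    # Linear interpolation where yp has one row of values per xp entry.
    # Find the bracketing interval, then blend the two rows.
    i = np.clip(np.searchsorted(xp, x) - 1, 0, len(xp) - 2)
    t = (x - xp[i]) / (xp[i + 1] - xp[i])
    return (1 - t) * yp[i] + t * yp[i + 1]
```

<p>With the data above, <code>interp_rows(0.4, xp, yp)</code> blends the first two rows with weights 0.6 and 0.4. For many query points it also works with an array-valued <code>x</code> if <code>yp[i]</code> is broadcast appropriately, or simply loop over the queries.</p>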
| <python><numpy> | 2023-07-26 08:26:24 | 3 | 59,565 | Nico SchlΓΆmer |
76,769,281 | 1,307,905 | warning showing file name and line number of routine returning outdated return format | <p>When I have created a library in which a function or method will be deprecated, I can warn users
of the library of the fact using <code>warnings.warn(message, PendingDeprecationsWarning)</code>. By setting the <code>stacklevel</code> parameter for <code>warnings.warn</code>
appropriately, I can have the warning refer to where the to-be-deprecated function was called from, which is way more informative to the user of the library.</p>
<p>But I have a plugin based system that calls the <code>.status</code> method of a user supplied plug-in class.
The status method used to return a <code>dict</code>, but that is going to be deprecated
in favour of returning a <code>list</code> (this is a simplification of the actual situation).
In this case the warning is not useful to the user providing the plug-in:</p>
<pre class="lang-py prettyprint-override"><code>import os, sys
from pathlib import Path
from importlib import import_module
import warnings
warnings.simplefilter('once', PendingDeprecationWarning)
class Driver:
def __init__(self):
self.plug_ins = []
def load_plug_ins(self):
sys.path.append('plug_ins')
for file_name in Path('plug_ins').glob('p*.py'):
mod = import_module(file_name.stem)
self.plug_ins.append(mod.PlugIn())
def check_status(self):
for p in self.plug_ins:
retval = p.status()
if isinstance(retval, dict):
# assume dict
warnings.warn('status() should return list, not dict', PendingDeprecationWarning)
else:
pass # assume list
# create some plug-ins
Path('plug_ins/p0.py').write_text("""\
class PlugIn:
def status(self):
return {'result': 'ok'}
""")
Path('plug_ins/p1.py').write_text("""\
class PlugIn:
def status(self):
return ['ok'] # this plug-in has been updated
""")
Path('plug_ins/p2.py').write_text("""\
class PlugIn:
def status(self):
return {'result': 'ok'}
""")
driver = Driver()
driver.load_plug_ins()
for idx in range(2):
driver.check_status()
</code></pre>
<p>which gives:</p>
<pre class="lang-none prettyprint-override"><code>/tmp/ryd-115/tmp_00.py:23: PendingDeprecationWarning: status() should return list, not dict
warnings.warn('status() should return list, not dict', PendingDeprecationWarning)
</code></pre>
<p>The path and the line number in the output is that from the file with the Driver class defined.
What I want to be shown is the actual paths to the files defining <code>status</code>, and that methods line-number, and that once for each file (<code>p0.py</code> and <code>p2.py</code>) that needs updating.</p>
<p>How can I use the normal Python warning system (so I can tell the users to use the normal
warning filtering to suppress messages) and show the filename and line of the <code>status</code> method
that needs to be updated, instead of showing the file-name and line where that warning is in the code?</p>
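<p>One stdlib route (a sketch; the helper name is mine): skip the stack-walking of <code>warnings.warn</code> entirely and use <code>warnings.warn_explicit</code>, which lets the driver point the warning at the plug-in's own source by reading the file name and line number off the method's code object — while still going through the normal warning filters:</p>

```python
import warnings

def warn_at_method(message, category, method):
    # Attribute the warning to where `method` is defined instead of to
    # the driver code that detected the outdated return format.
    func = getattr(method, '__func__', method)   # unwrap bound methods
    code = func.__code__
    warnings.warn_explicit(message, category,
                           code.co_filename, code.co_firstlineno)
```

<p>In <code>check_status</code> this becomes <code>warn_at_method('status() should return list, not dict', PendingDeprecationWarning, p.status)</code>. One caveat with your <code>'once'</code> filter: it deduplicates on message text, so include the plug-in's file name in the message if you want one warning per offending file (<code>p0.py</code>, <code>p2.py</code>) rather than one overall.</p>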
| <python><plugins><warnings> | 2023-07-26 08:20:05 | 1 | 78,248 | Anthon |
76,769,248 | 13,874,745 | Does using torch.where cause the model's parameter gradients to become zero? | <p>Here is the <code>forward()</code> method of my pytorch model:</p>
<pre class="lang-py prettyprint-override"><code> def forward(self, x, output_type, *unused_args, **unused_kwargs):
gru_output, gru_hn = self.gru(x)
# Decoder (Graph Adjacency Reconstruction)
for data_batch_idx in range(x.shape[0]):
pred = self.decoder(gru_output[data_batch_idx, -1, :]) # gru_output[-1] => only take last time-step
pred_graph_adj = pred.reshape(1, -1) if data_batch_idx == 0 else torch.cat((pred_graph_adj, pred.reshape(1, -1)), dim=0)
if output_type == "discretize":
bins = torch.tensor(self.model_cfg['output_bins']).reshape(-1, 1)
num_bins = len(bins)-1
bins = torch.concat((bins[:-1], bins[1:]), dim=1)
discretize_values = np.linspace(2, 4, num_bins)
for lower, upper, discretize_value in zip(bins[:, 0], bins[:, 1], discretize_values):
pred_graph_adj = torch.where((pred_graph_adj <= upper) & (pred_graph_adj > lower), discretize_value, pred_graph_adj)
pred_graph_adj = torch.where(pred_graph_adj < bins.min(), 2, pred_graph_adj)
return pred_graph_adj
</code></pre>
<p>And here is the snippet of training:</p>
<pre class="lang-py prettyprint-override"><code> pred = self.forward(x, output_type=self.model_cfg['output_type'])
batch_loss = self.loss_fn(pred, y)
self.optimizer.zero_grad()
batch_loss.backward()
self.optimizer.step()
self.scheduler.step()
</code></pre>
<ol>
<li>When <code>output_type</code> is not <code>"discretize"</code> (not using <code>torch.where</code>), <code>sum([p.grad.sum() for p in self.decoder.parameters()])</code> will be non-zero.
<ul>
<li>But When <code>output_type</code> is <code>"discretize"</code> (using <code>torch.where</code>), <code>sum([p.grad.sum() for p in self.decoder.parameters()])</code> will be zero.</li>
</ul>
</li>
<li>I've check the <code>batch_loss</code>, it's not zero.</li>
<li>I've check all the <code>require_grad</code> of weight of model, they are True.</li>
<li>I've check computational graph, <code>pred</code> and <code>batch_loss</code> are connect to model's weight.</li>
</ol>
<p>My questions are:</p>
<ol>
<li>Does using <code>torch.where</code> cause the model's parameter gradients to become zero?</li>
<li>If <code>torch.where</code> won't cause that, what's other possible reasons?</li>
</ol>
<hr />
<h2>Update info:</h2>
<ul>
<li>The initial values of <code>pred_graph_adj</code> are between <strong>-1 ~ 1</strong>.
<ul>
<li>But I've check the values range of final <code>pred_graph_adj</code> (after <code>torch.where</code>), they are between <strong>2 ~ 4</strong>.</li>
</ul>
</li>
<li>The specific values of args of <code>torch.where</code>(<code>lower</code> and <code>upper</code> and <code>discretize_value</code>) in each for-loop are:</li>
</ul>
<pre><code>(lower, upper] -> discrete_values:
(-1, -0.25] -> 2
(-0.25, 0.25] -> 3
(0.25, 1] -> 4
</code></pre>
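<p>For intuition on question 1 (a numpy sketch, independent of PyTorch): <code>torch.where</code> itself is differentiable through whichever branch is selected, but here the selected branch is always a constant <code>discretize_value</code>, so the output is piecewise constant in the inputs — and the derivative of a piecewise-constant function is zero almost everywhere, consistent with the zero parameter gradients:</p>

```python
import numpy as np

def discretize(x, bins=(-1.0, -0.25, 0.25, 1.0), values=(2.0, 3.0, 4.0)):
    # numpy mirror of the torch.where chain above: everything below the
    # lowest bin edge also maps to values[0].
    xa = np.asarray(x, dtype=float)
    res = np.full_like(xa, values[0])
    for lo, hi, v in zip(bins[:-1], bins[1:], values):
        res = np.where((xa > lo) & (xa <= hi), v, res)
    return res

x = np.array([-0.5, 0.0, 0.7])
eps = 1e-6
# Central finite differences: zero everywhere except exactly at bin edges.
grad = (discretize(x + eps) - discretize(x - eps)) / (2 * eps)
print(grad)
```

<p>Common workarounds are to train on the continuous output and discretize only at evaluation time, or to use a soft/differentiable discretization (e.g. a temperature-scaled sigmoid per bin edge) during training.</p>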
| <python><machine-learning><deep-learning><pytorch> | 2023-07-26 08:16:03 | 1 | 451 | theabc50111 |
76,769,177 | 325,781 | Executing python from bash commandline | <p>i have a stupid error that I can't figure out how to resolve.</p>
<p>I need to execute a simple python command, and I'd like to keep it inline in a bash command.
Basically I 'm reading a JSON string and I want to retrieve a value, but I cannot write a "for loop".</p>
<p>I'll simplify:</p>
<pre><code>string="[{ 'url': 'https://www.example.com/1', 'description': 'URL example number 1', 'name': 'test1', 'id': '1' }, { 'url': 'https://www.example.com/2', 'description': 'URL example number 2', 'name': 'test2', 'id': '2' }]"
echo $string | python3 -c "import sys,json; data= json.load(sys.stdin); for item in data: print(item['id']);"
</code></pre>
<p>I get the error</p>
<pre><code>File "<string>", line 1
import sys,json; data= json.load(sys.stdin); for item in data: print(item['id']);
^^^
SyntaxError: invalid syntax
</code></pre>
<p>I cannot understand what's wrong, but I found that if "for" is the first command, then it works. So it must be related to indentation, am I right?</p>
<p>thank you</p>
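<p>It isn't indentation: Python's grammar does not allow a compound statement such as <code>for</code> to follow other statements after a <code>;</code> on the same logical line. Replacing the loop with an expression (a generator inside <code>str.join</code>) keeps everything on one line. Note also that the sample string uses single quotes, which is not valid JSON. A sketch with valid JSON:</p>

```python
import json

string = ('[{"url": "https://www.example.com/1", "name": "test1", "id": "1"},'
          ' {"url": "https://www.example.com/2", "name": "test2", "id": "2"}]')

# an expression may follow a semicolon where a `for` statement cannot
ids = "\n".join(item["id"] for item in json.loads(string))
print(ids)
```

<p>From bash this would be something like <code>echo "$string" | python3 -c 'import sys, json; print("\n".join(item["id"] for item in json.load(sys.stdin)))'</code> (or pass the program with real newlines, or use <code>jq</code>).</p>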
| <python><json><bash> | 2023-07-26 08:07:11 | 2 | 338 | tuffo19 |
76,768,891 | 11,630,148 | The output of the site I scrape includes html elements | <p>I need to scrape the table with letter 'A' only. My code is this so far:</p>
<pre class="lang-py prettyprint-override"><code>class ChallengeSpider(scrapy.Spider):
name = "challenge"
allowed_domains = ["laws.bahamas.gov.bs"]
start_urls = ["http://laws.bahamas.gov.bs/cms/en/legislation/acts.html"]
</code></pre>
<p>The problem is when I parse the page, html elements appear in the output. This is my <code>parse</code> function.</p>
<pre class="lang-py prettyprint-override"><code> def parse(self, response):
css_selector = ".hasTip"
rows = response.css(css_selector)
for row in rows:
title = row.css(".hasTip").get()
source_url = row.css(".hasTip").get()
date = row.css(".hasTip").get()
yield {
"title": title,
"source_url": source_url,
"date": date,
}
</code></pre>
<p>The output is:</p>
<pre class="lang-json prettyprint-override"><code>[
{"title": "<div id=\"alphabet\" class=\"hasTip\" title=\"Alphabetical Selection\" rel=\"\n\t\t Click on one of the alphabetical buttons to select all Acts commencing with that letter. The selection will 'stick' even if you navigate to another page.\">\n <input type=\"submit\" id=\"submitX\" name=\"submit4\" class=\"btn btn-primary\" value=\"A\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"B\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"C\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"D\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"E\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"F\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"G\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"H\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"I\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"J\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"K\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"L\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"M\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"N\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"O\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"P\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Q\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"R\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"S\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"T\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"U\"><input type=\"submit\" 
id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"V\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"W\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"X\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Y\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Z\"> <input type=\"hidden\" name=\"pointintime\" value=\"2023-07-26 00:00:00\">\n </div>", "source_url": "<div id=\"alphabet\" class=\"hasTip\" title=\"Alphabetical Selection\" rel=\"\n\t\t Click on one of the alphabetical buttons to select all Acts commencing with that letter. The selection will 'stick' even if you navigate to another page.\">\n <input type=\"submit\" id=\"submitX\" name=\"submit4\" class=\"btn btn-primary\" value=\"A\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"B\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"C\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"D\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"E\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"F\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"G\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"H\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"I\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"J\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"K\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"L\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"M\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"N\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"O\"><input type=\"submit\" id=\"submit4\" 
name=\"submit4\" class=\"btn\" value=\"P\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Q\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"R\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"S\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"T\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"U\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"V\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"W\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"X\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Y\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Z\"> <input type=\"hidden\" name=\"pointintime\" value=\"2023-07-26 00:00:00\">\n </div>", "date": "<div id=\"alphabet\" class=\"hasTip\" title=\"Alphabetical Selection\" rel=\"\n\t\t Click on one of the alphabetical buttons to select all Acts commencing with that letter. 
The selection will 'stick' even if you navigate to another page.\">\n <input type=\"submit\" id=\"submitX\" name=\"submit4\" class=\"btn btn-primary\" value=\"A\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"B\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"C\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"D\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"E\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"F\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"G\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"H\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"I\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"J\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"K\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"L\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"M\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"N\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"O\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"P\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Q\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"R\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"S\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"T\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"U\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"V\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"W\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" 
class=\"btn\" value=\"X\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Y\"><input type=\"submit\" id=\"submit4\" name=\"submit4\" class=\"btn\" value=\"Z\"> <input type=\"hidden\" name=\"pointintime\" value=\"2023-07-26 00:00:00\">\n </div>"},
{"title": "<td class=\"hasTip minColumn hidden-phone\" title=\"Notes Relating to this Statute\" rel=\"\n
]
</code></pre>
<p>What I need to do is to add <code>http://laws.bahamas.gov.bs</code> to the pdf file urls and clean up the data that I've scraped. What else do I need to do to get what I need?</p>
| <python><scrapy> | 2023-07-26 07:26:12 | 1 | 664 | Vicente Antonio G. Reyes |
76,768,830 | 10,248,483 | How to find the difference between the last value of the current month and the last value of the previous month using Python | <p>I wish to calculate the difference of the last value of the current month - the last value of the previous month (this is tag-wise).</p>
<p>Then, the total sum of the values calculated in step 1, per month, for all tags.</p>
<p>In the dataset, there are multiple values for December 31, and for Jan 1, Jan 2, ... Jan 30.
Now I want to calculate: Result = last (latest) value of Jan 30 (among multiple values) - last value of the previous month (Dec 31)</p>
<p>The expected output looks like this:
<a href="https://i.sstatic.net/gDL2t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gDL2t.png" alt="enter image description here" /></a></p>
<p>I'm taking the reference from the Power BI table where the calculation is already done.
I want the same to be done using Python.</p>
<p>Case 1:
Calculate the difference of the last value of the current month - the last value of the previous month (as seen in the last column of the table, "Diff Value of CM - PM"). Please have a look to see how the difference is taken.</p>
<p>Case 2:</p>
<p>Total Summation of the values (Diff value of CM - PM from Case1) for all the tags against each month and year.</p>
<p>Case 1 & 2 are connected. Case 1 is the sum of values per tag, and Case 2 is the total sum of values for all the tags.</p>
<p>Sample data can be accessed from here:
<a href="https://docs.google.com/spreadsheets/d/1OSi8QvBQ3b-4WWn73ypl4hn1Yp0ll2M4/edit?usp=sharing&ouid=112221094992050432029&rtpof=true&sd=true" rel="nofollow noreferrer">Dataset</a></p>
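<p>A sketch of both cases on a toy frame (the column names below are assumptions, since the real dataset has its own): take the last value per tag per month with <code>groupby</code> + <code>pd.Grouper</code>, diff within each tag for Case 1, then sum the diffs across tags per month for Case 2.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Tag":   ["t1", "t1", "t1", "t2", "t2"],
    "Date":  pd.to_datetime(["2022-12-30", "2022-12-31", "2023-01-30",
                             "2022-12-31", "2023-01-15"]),
    "Value": [100.0, 110.0, 160.0, 10.0, 30.0],
})

# Case 1: last value per tag per month, then month-over-month diff inside each tag
last = (df.sort_values("Date")
          .groupby(["Tag", pd.Grouper(key="Date", freq="M")])["Value"]
          .last())
case1 = last.groupby(level="Tag").diff()

# Case 2: total of those differences across all tags, per month
case2 = case1.groupby(level="Date").sum()
print(case2)
```

<p>Note the <code>freq="M"</code> alias is spelled <code>"ME"</code> in pandas ≥ 2.2, and this sketch assumes consecutive months exist for each tag; gaps in the data would need reindexing before the <code>diff</code>.</p>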
| <python><pandas> | 2023-07-26 07:16:56 | 1 | 369 | Nishad Nazar |
76,768,800 | 13,452,269 | Get Tweet Id using Tweepy TwitterV2 | <p>I am trying to make a bot which makes a post and then self replies. The only issue I am having is that response.id is not fetching the tweet id. Could I get any pointers as to the right way to fetch the id of a tweet that was just posted?</p>
<pre><code># Post Tweet
response = client.create_tweet(text=tweet_msg)
# Respond to first tweet posted
client.create_tweet(text=reply_msg, in_reply_to_tweet_id=response.id)
</code></pre>
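<p>In Tweepy's v2 <code>Client</code>, <code>create_tweet</code> returns a <code>Response</code> namedtuple, and the created tweet's fields live under <code>response.data</code>, so the id is <code>response.data["id"]</code> rather than <code>response.id</code>. A mock sketch of the shape (the namedtuple and the id value below only imitate what the client hands back):</p>

```python
from collections import namedtuple

# tweepy's v2 Client methods return a Response namedtuple shaped like this:
Response = namedtuple("Response", ["data", "includes", "errors", "meta"])

# simulate what client.create_tweet(...) returns
response = Response(data={"id": "1683968573459595264", "text": "hello"},
                    includes={}, errors=[], meta={})

tweet_id = response.data["id"]   # not response.id
print(tweet_id)
```

<p>So the reply call would be <code>client.create_tweet(text=reply_msg, in_reply_to_tweet_id=response.data["id"])</code>.</p>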
| <python><twitter><tweepy> | 2023-07-26 07:12:12 | 1 | 327 | k3r0 |
76,768,789 | 10,780,715 | PyPolars, conditional join on two columns | <p>How should one join two <code>pl.LazyFrame</code> using two columns from each <code>pl.LazyFrame</code> based on content in the columns of the left <code>pl.LazyFrame</code> ?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
lf1 = pl.LazyFrame(
data={
"col_1": ["a", "b", "c"],
"col_2": ["d", None, None],
"col_3": [None, "e", None],
},
)
lf2 = pl.LazyFrame(
data={
"col_a": ["d", "xyz"],
"col_b": ["xyz", "e"],
"col_c": ["relevant_info_1", "relevant_info_2"],
},
)
</code></pre>
<p>Pseudo-code of desired join :</p>
<pre><code>lf1.join(lf2,
when(col("col_2").is_not_null().then(left_on="col_2", right_on="col_a")
when(col("col_3").is_not_null().then(left_on="col_3", right_on="col_b")
otherwise(do_nothing)
)
</code></pre>
<p>Expected result :</p>
<pre><code>shape: (3, 4)
βββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββββββββ
β col_1 β col_2 β col_3 β col_c β
β --- β --- β --- β --- β
β str β str β str β str β
βββββββββͺββββββββͺββββββββͺββββββββββββββββββ‘
β a β d β null β relevant_info_1 β
β b β null β e β relevant_info_2 β
β c β null β null β null β
βββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββββββββ
</code></pre>
| <python><dataframe><left-join><python-polars> | 2023-07-26 07:10:38 | 3 | 575 | mlisthenewcool |
76,768,741 | 12,081,269 | How to fix PanicException when converting int to pl.Date | <p>I am trying to convert a <code>pl.Series</code> of 4 million rows from <code>i64</code> to <code>pl.Date</code>. My code does not work because of a <code>PanicException</code> whose nature I am not really familiar with:</p>
<pre><code>sample = pl.DataFrame({
'install_time': [1595404176,
1595404176,
1595404177,
1595404180,
1595404184,
1595404185,
1595404191,
1595404192,
1595404195]
})
sample.with_columns(pl.col('install_time').cast(pl.Date))
</code></pre>
<p>As the result of executing the piece of code above, I receive the following error: <code>PanicException: out-of-range date</code>. I have no idea how to fix this, so I will be grateful for any advice and suggestions.</p>
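<p>The likely cause: <code>pl.Date</code> stores <em>days</em> since the Unix epoch, so casting raw epoch <em>seconds</em> to it asks for a date millions of years out, hence the out-of-range panic. Either divide by 86 400 first, or (in recent Polars, if available) use <code>pl.from_epoch(pl.col("install_time"), time_unit="s")</code>. The underlying arithmetic, sketched with the standard library:</p>

```python
from datetime import date, timedelta

epoch_seconds = 1595404176
days = epoch_seconds // 86400               # 18465 days since 1970-01-01
d = date(1970, 1, 1) + timedelta(days=days)
print(d)  # 2020-07-22
```

<p>Interpreted as days instead, 1 595 404 176 would be roughly 4.3 million years after 1970, which is far outside the representable date range.</p>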
| <python><python-polars> | 2023-07-26 07:02:57 | 1 | 897 | rg4s |
76,768,726 | 1,127,699 | How to detect non-contiguous symbols using CV2? | <p>I have a grayscale image of printed text. I want to extract every individual character from the image so that I can save them as discrete images. I don't want to <em>recognise</em> what the character is, I just want each glyph as a separate file.</p>
<p>I'm using <code>cv2</code>, for example:</p>
<pre class="lang-py prettyprint-override"><code># Find contours to isolate individual letters
contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
</code></pre>
<p>That works perfectly for contiguous characters - that is, where the shape of the glyph has no breaks.</p>
<p>But it doesn't work on characters like <code>i</code>, <code>j</code>, <code>:</code>, and <code>;</code> - the dots on top are not included.</p>
<p>Is there a way to use CV2 to detect these characters? I know the document uses only Latin letters, numbers, and punctuation.</p>
<p>The document uses a fairly archaic typeface and doesn't work well with Tesseract or other traditional OCR engines - which is why I want to <em>detect</em> the individual letters, rather than try to <em>recognise</em> them.</p>
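<p>One approach that avoids the recognition side entirely: keep <code>findContours</code> as-is, take each contour's bounding box, and merge boxes whose horizontal ranges overlap — the dot of an <code>i</code> or <code>j</code> sits directly above its stem, so the two components share an x-range. (Dilating with a tall vertical kernel before <code>findContours</code> is the other common trick.) A stdlib sketch of the box-merging step, assuming <code>(x, y, w, h)</code> boxes as returned by <code>cv2.boundingRect</code>:</p>

```python
def merge_overlapping_x(boxes):
    """Merge (x, y, w, h) boxes whose x-ranges overlap, e.g. a dot above an 'i' stem."""
    boxes = sorted(boxes)                                 # sort by x
    merged = []
    for x, y, w, h in boxes:
        if merged and x <= merged[-1][0] + merged[-1][2]:  # x-ranges overlap
            mx, my, mw, mh = merged[-1]
            nx, ny = min(mx, x), min(my, y)
            nw = max(mx + mw, x + w) - nx                  # union of the two boxes
            nh = max(my + mh, y + h) - ny
            merged[-1] = (nx, ny, nw, nh)
        else:
            merged.append((x, y, w, h))
    return merged

# dot of an 'i' (top) plus its stem (bottom) collapse into one glyph box
print(merge_overlapping_x([(10, 0, 4, 4), (10, 8, 4, 20), (30, 0, 6, 28)]))
# [(10, 0, 4, 28), (30, 0, 6, 28)]
```

<p>With tight kerning, pure x-overlap can glue neighbouring glyphs together, so a minimum-overlap threshold may be needed in practice.</p>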
| <python><ocr><opencv><text-recognition> | 2023-07-26 07:00:57 | 1 | 14,414 | Terence Eden |
76,768,427 | 871,910 | Python Connection and Cursor base types or protocol | <p>I have a script that reads data from an Oracle database (using the oracledb package) and writes it to an SQL Server database (using pyodbc). I have a function that performs some query that needs to run on both databases. This function accepts a Cursor as a parameter, and does its thing.</p>
<p>Now, the Oracle cursor is of type <code>oracledb.cursor.Cursor</code> and the pyodbc cursor is of type <code>pyodbc.Cursor</code>, even though they both provide pretty much the same interface. This means my function should accept an argument that is of either of these types. I would prefer <em>not</em> to specify all the potential cursors as types of the function's argument.</p>
<p>In ADO.NET there are interfaces such as <code>IDbConnection</code> and <code>IDbCommand</code> that are implemented by different database providers, and allow us to write provider agnostic code. Is there something similar with Python? Maybe a Python DB API <a href="https://peps.python.org/pep-0544/" rel="nofollow noreferrer">protocol</a>?</p>
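<p>DB-API 2.0 (PEP 249) defines the shared cursor interface, but only informally — there is no runtime base class that both drivers inherit from. The usual answer is a <code>typing.Protocol</code> declaring just the methods your function uses; both <code>oracledb</code> and <code>pyodbc</code> cursors then satisfy it structurally, with no registration needed. A sketch (the method subset and the helper function are illustrative assumptions):</p>

```python
from typing import Any, Protocol, Sequence


class CursorLike(Protocol):
    """Structural type covering the slice of PEP 249 this code relies on."""
    def execute(self, operation: str, *args: Any) -> Any: ...
    def fetchall(self) -> Sequence[Any]: ...


def row_count(cursor: CursorLike, table: str) -> int:
    cursor.execute(f"SELECT * FROM {table}")  # illustrative only
    return len(cursor.fetchall())


# any object with matching methods type-checks and works, e.g. a test double:
class FakeCursor:
    def execute(self, operation: str, *args: Any) -> None:
        pass
    def fetchall(self) -> list:
        return [(1,), (2,)]

print(row_count(FakeCursor(), "t"))  # 2
```

<p>Static checkers (mypy, pyright) verify the structural match; no changes to the driver classes are required.</p>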
| <python><pyodbc><python-typing> | 2023-07-26 06:04:41 | 0 | 39,097 | zmbq |
76,768,284 | 7,608,940 | Training custom image from Tesseract 5 | <p>I want to train the Tesseract model on a custom image using Tesseract 5 on Ubuntu 18.04, but I am getting the following error while training:</p>
<blockquote>
<p>tesseract "data/IND-ground-truth/19.tif" data/IND-ground-truth/19 --psm 6 lstm.train
Error in pixReadMemTiff: function not present
Error in pixReadMem: tiff: no pix returned
Error in pixaGenerateFontFromString: pix not made
Error in bmfCreate: font pixa not made
Error in findTiffCompression: function not present
Error in pixReadFromMultipageTiff: function not present
tesseract "data/IND-ground-truth/11.tif" data/IND-ground-truth/11 --psm 6 lstm.train</p>
</blockquote>
<p>I am following Tesseract official GitHub : <a href="https://github.com/tesseract-ocr/tesstrain" rel="nofollow noreferrer">https://github.com/tesseract-ocr/tesstrain</a></p>
<p>Please help me resolve the above error.</p>
| <python><ocr><tesseract><python-tesseract> | 2023-07-26 05:34:36 | 1 | 703 | Nikhil Sharma |
76,768,253 | 1,034,974 | implementing if-then-elif-then-else in jax | <p>I'm just starting to use JAX, and I wonder: what would be the right way to implement if-then-elif-then-else in JAX/Python? For example, given input arrays: <code>n = [5, 4, 3, 2]</code> and <code>k = [3, 3, 3, 3]</code>, I need to implement the following pseudo-code:</p>
<pre class="lang-py prettyprint-override"><code>def n_choose_k_safe(n, k):
r = jnp.empty(4)
for i in range(4):
if n[i] < k[i]:
r[i] = 0
elif n[i] == k[i]:
r[i] = 1
else:
r[i] = func_nchoosek(n[i], k[i])
return r
</code></pre>
<p>There are so many choices (<code>vmap</code>, <code>lax.select</code>, <code>lax.where</code>, <code>jax.cond</code>, <code>lax.fori_loop</code>, etc.) that it is hard to decide which combination of utilities to use. By the way, <code>k</code> can be a scalar (if that makes it simpler).</p>
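<p>For element-wise branching over arrays, nested <code>where</code> is the idiomatic form: <code>jnp.where(n &lt; k, 0, jnp.where(n == k, 1, func_nchoosek(n, k)))</code>. The key caveat is that every branch is evaluated densely, so the "else" function must be safe on all inputs, including those where <code>n &lt; k</code>. The same pattern sketched with NumPy for illustration (<code>math.comb</code> already returns 0 when k &gt; n, so it is safe everywhere; in JAX, replace <code>np</code> with <code>jnp</code> and <code>vcomb</code> with your <code>func_nchoosek</code>):</p>

```python
import numpy as np
from math import comb

n = np.array([5, 4, 3, 2])
k = np.array([3, 3, 3, 3])   # a scalar k would broadcast just as well

# both where-branches are evaluated for every element, so the fallback
# function must not blow up on the n < k entries
vcomb = np.vectorize(comb)                 # stand-in for func_nchoosek
r = np.where(n < k, 0, np.where(n == k, 1, vcomb(n, k)))
print(r.tolist())  # [10, 4, 1, 0]
```

<p><code>jax.lax.cond</code> and <code>fori_loop</code> are for scalar/traced control flow; for purely element-wise logic like this, nested <code>jnp.where</code> (or <code>jnp.select</code> for many branches) is simpler and vectorizes for free.</p>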
| <python><jax><cudnn> | 2023-07-26 05:28:32 | 2 | 348 | Terry |
76,768,244 | 7,700,802 | Sagemaker training job producing different results than running job locally | <p>This is a strange error. I have a <code>src.py</code> file that basically runs a training job for a computer vision model. I get these results running the <code>src.py</code> file locally:</p>
<pre><code> precision recall f1-score support
normal (Class 0) 0.99 0.98 0.98 393
blockage (Class 1) 0.96 0.98 0.97 205
accuracy 0.98 598
macro avg 0.97 0.98 0.97 598
weighted avg 0.98 0.98 0.98 598
</code></pre>
<p>and these results running on sagemaker training job</p>
<pre><code> precision recall f1-score support
normal (Class 0) 0.66 1.00 0.79 393
blockage (Class 1) 0.00 0.00 0.00 205
accuracy 0.61 598
macro avg 0.33 0.50 0.40 598
weighted avg 0.43 0.66 0.52 598
</code></pre>
<p>The only difference in running the file locally vs sagemaker is I get the following error or warning message on sagemaker:</p>
<pre><code> UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
</code></pre>
<p>If anyone ran into a similar issue or has any recommendations I would greatly appreciate it. Happy to add more code to this post if needed.</p>
| <python><amazon-web-services><machine-learning><amazon-sagemaker> | 2023-07-26 05:26:28 | 1 | 480 | Wolfy |
76,768,051 | 1,322,301 | Merge pandas datasets with different indices and columns | <p>I'm fairly sure I'm missing a version of this question elsewhere on the site, but cannot for the life of me find it. I can't be the first person to have this issue.</p>
<p>In essence, I'm looking for a function that would merge pandas Dataframes in the equivalent way that a nested version of dict.merge would. If A and B are dictionaries of dictionaries, then we'd get <code>C = {k: A.get(k, {}) | B.get(k, {}) for k in set().union(*[A, B])}</code></p>
<p>Concretely, I have two Pandas datasets A and B, with different sets of indices and columns (but not necessarily disjoint).</p>
<p>I would like a dataset C, whose indices are the union of A and B's indices and whose columns are the union of A and B's columns. The values are taken from B, if B has a non-NaN value for that particular index/column, otherwise it uses A.</p>
<p>I want a function <code>SuperMerge</code>: <code>C = A.SuperMerge(B)</code>. Assume the dataframes have some method <code>df.get(i, c, default=pd.NA)</code> that returns <code>df[i, c]</code> if it exists otherwise it returns <code>default</code>.</p>
<p><code>SuperMerge</code> works like this:</p>
<p><code>C[i,c] = B.get([i, c]) if not B.get([i, c]).isna() else A.get([i, c])</code></p>
<p>Based on the lack of questions that address this, and the lack of Pandas functionality, I assume this is extremely rare use-case, but it's effectively how Python's dict update works: the updated dictionary now has the union of keys from both dicts, taking values from the right dict if available else it takes the one from the left dict.</p>
<p>For example:</p>
<pre><code>df1 = pd.DataFrame({'col1': ['a', 'b'], 'col2': [1, 2], 'col4': ['col4_x', 'col4_y']}, index=['x', 'y'])
col1 col2 col4
x a 1 col4_x
y b 2 col4_y
</code></pre>
<pre><code>df2 = pd.DataFrame({'col1': ['ALT', 'c'], 'col3': ['ONE', 'THREE'], 'col4': [pd.NA, 'col4_z']}, index=['x', 'z'])
col1 col3 col4
x ALT ONE <NA>
z c THREE col4_z
</code></pre>
<pre><code>df1.SuperMerge(df2)
col1 col2 col3 col4
x ALT 1 ONE col4_x
y b 2 <NA> col4_y
z c <NA> THREE col4_z
</code></pre>
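<p>pandas ships this exact operation as <code>DataFrame.combine_first</code>: <code>B.combine_first(A)</code> takes B's value wherever it is non-null and falls back to A, over the union of both indices and columns — the frame analogue of <code>A | B</code> on nested dicts. With the frames above:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['a', 'b'], 'col2': [1, 2],
                    'col4': ['col4_x', 'col4_y']}, index=['x', 'y'])
df2 = pd.DataFrame({'col1': ['ALT', 'c'], 'col3': ['ONE', 'THREE'],
                    'col4': [pd.NA, 'col4_z']}, index=['x', 'z'])

# df2 wins wherever it has a non-null value; df1 fills the rest
result = df2.combine_first(df1)
print(result)
```

<p>One side effect to be aware of: columns that gain missing values (like <code>col2</code> here) are upcast to float to hold <code>NaN</code>, unless you use a nullable dtype.</p>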
| <python><pandas><dataframe> | 2023-07-26 04:32:54 | 1 | 1,069 | eriophora |
76,767,917 | 266,185 | how to pivot_table on dataframe and show the result like a table (just like pivot in sql) | <p>I want to implement something like pivot in SQL; pivot_table on a dataframe looks different from pivot in SQL, since after pivot_table each column label is a pair instead of a single str.</p>
<pre><code>df = pd.DataFrame({"col1":["grade1","grade2","grade3"],"score":[70,80,90],"name":["n1","n2","n3"]})
print(df)
r=pd.pivot_table(df,columns=["col1"],values=["score"],index=["name"]).reset_index()
print(r.columns)
print(r)
</code></pre>
<p>And the result of r is <a href="https://i.sstatic.net/XQsRd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XQsRd.png" alt="enter image description here" /></a>.</p>
<p>I don't need score and col1 to show in the result; I just want to have a table with 4 columns: grade1/2/3 and score. How do I do this with a dataframe?</p>
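<p>The pair-like column labels appear because <code>values=["score"]</code> (a list) makes pandas build a MultiIndex of (value, col1) pairs. Passing a plain string keeps the columns single-level; clearing <code>columns.name</code> then removes the stray "col1" header row:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": ["grade1", "grade2", "grade3"],
                   "score": [70, 80, 90],
                   "name": ["n1", "n2", "n3"]})

# pass values as a plain string (not a list) to keep single-level columns
r = df.pivot_table(index="name", columns="col1", values="score").reset_index()
r.columns.name = None        # drop the leftover "col1" axis label
print(r.columns.tolist())    # ['name', 'grade1', 'grade2', 'grade3']
```

<p>Cells where a name has no value for a given grade come out as <code>NaN</code>, the same as SQL pivot's NULLs.</p>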
| <python><pandas><dataframe> | 2023-07-26 03:47:39 | 1 | 6,013 | Daniel Wu |
76,767,809 | 19,504,610 | How do I make the Python interpreter 'forget' a variable in Cython? | <p>Say I have this Cython class, like so:</p>
<pre><code>cdef class MyClass:
cdef readonly object _value
def __cinit__(self, value):
self._value = value
cpdef forget_self(self):
# what should I write here?
</code></pre>
<p>The desired behaviour in Python should look like so:</p>
<pre><code>>>> my_class = MyClass(5)
>>> my_class._value
... 5
>>> my_class.forget_self()
>>> my_class
... # NameError raised here
</code></pre>
<p>I saw <code>_Py_ForgetReference</code> (<a href="https://github.com/python/cpython/blob/33838fedf7ed6ee907a49d042f8fa123178a1f9c/Objects/object.c#L2226" rel="nofollow noreferrer">https://github.com/python/cpython/blob/33838fedf7ed6ee907a49d042f8fa123178a1f9c/Objects/object.c#L2226</a>) but it is only activated if <code>#ifdef Py_TRACE_REFS</code> is activated.</p>
<p>If raising <code>NameError</code> is <strong>NOT</strong> possible, then I would like <code>MyClass.forget_self()</code> to set <code>self</code>, i.e. an object of <code>MyClass</code> to <code>None</code>, would <code>Py_SETREF</code>(<a href="https://github.com/python/cpython/blob/main/Include/cpython/object.h#L326" rel="nofollow noreferrer">https://github.com/python/cpython/blob/main/Include/cpython/object.h#L326</a>) suffice?</p>
| <python><cython><cpython> | 2023-07-26 03:17:48 | 1 | 831 | Jim |
76,767,748 | 9,009,923 | Does "set = {}" instead of "set.clear()" cause a memory leak in Python? | <p>I have a non-empty set <code>x</code> in Python. Instead of using a clearing method like <code>x.clear()</code>, if I use <code>x = {}</code>, it will get rid of the values for <code>x</code>, but will it cause a memory leak? I think the values were stored somewhere and I am not clearing them, and I can't access them later either.</p>
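<p>Short answer: no leak — CPython's reference counting reclaims the old set as soon as nothing references it. Rebinding differs from <code>clear()</code> only when <em>other</em> references exist: <code>clear()</code> empties the one shared object, while rebinding leaves the old object alive through the other names. Also note that <code>x = {}</code> creates an empty <em>dict</em>, not a set — use <code>x = set()</code>. A sketch:</p>

```python
x = {1, 2, 3}
y = x                # a second reference to the same set object

x = set()            # rebinding: the old set is still alive through y
print(y)             # {1, 2, 3}

x = y
x.clear()            # in-place: empties the shared object for every reference
print(y)             # set()

print(type({}))      # <class 'dict'> -- {} is an empty dict, not a set
```

<p>If no other reference like <code>y</code> exists, rebinding drops the old set's refcount to zero and it is freed immediately — nothing lingers.</p>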
| <python><python-3.x><memory-leaks><set> | 2023-07-26 02:59:27 | 3 | 618 | Saikat halder |
76,767,741 | 8,930,299 | How to continually receive input and parse it in Python? | <p>I imagine <code>asyncio</code> to be able to start a process in the background without blocking execution flow with tasks. After all, the docs state that <code>asyncio.create_task</code> schedules the task's execution and gives an example of "reliable 'fire-and-forget' background tasks" that creates and schedules tasks one-by-one.</p>
<p>I want to use <code>asyncio</code> to accept input and begin the parsing of the command while still accepting further input. Here's a quick example:</p>
<pre><code>import asyncio
from time import sleep
class Test:
    def __init__(self):
self.calculating = False
def calculate(self):
# begin "calculation"
n = 0
self.calculating = True
while self.calculating:
sleep(1)
n += 1
print(n)
self.calculating = False
def stop(self):
# causes calculation to end
self.calculating = False
async def parse(self, cmd):
if cmd == "begin":
self.calculate()
elif cmd == "stop":
self.stop()
async def main():
t = Test()
while True:
cmd = input()
task = asyncio.create_task(t.parse(cmd))
await task
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Without awaiting the task, the command is never parsed. Awaiting the task does make the "calculation" begin when "begin" is inputted, as expected. However, the task is blocking, so there is never a chance for the user to input a stop command.</p>
<p>The examples of asyncio that I have seen are when the problems to be computed are known before running an event loop. For example, opening and downloading a given list of sites. This would be done with the asyncio.gather method on a bunch of tasks. But this isn't exactly my situation and I'm surprised that there isn't a wealth of examples that fit my use case.</p>
<p>What am I doing wrong? Might I not be using asyncio as intended? Or is my usage of <code>input()</code> and <code>print()</code> wrong, with some other alternative being more appropriate (i.e. logging)?</p>
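<p>Two things block the event loop in the code above: <code>time.sleep</code> (must be <code>await asyncio.sleep</code> inside an <code>async def</code>) and <code>input()</code> (can be pushed to a thread with <code>asyncio.to_thread</code>, Python 3.9+). With those changes, the created task really does run in the background between inputs. A sketch driven by timed events instead of real keyboard input so it is self-contained (interactively, you would replace the sleeps with <code>cmd = await asyncio.to_thread(input)</code>):</p>

```python
import asyncio


class Test:
    def __init__(self):
        self.calculating = False

    async def calculate(self):
        n = 0
        self.calculating = True
        while self.calculating:
            await asyncio.sleep(0.01)  # yields to the loop; time.sleep would block it
            n += 1
        return n

    def stop(self):
        self.calculating = False


async def main():
    t = Test()
    task = asyncio.create_task(t.calculate())  # "begin": runs in the background
    # interactively: cmd = await asyncio.to_thread(input)
    await asyncio.sleep(0.1)                   # loop stays responsive meanwhile
    t.stop()                                   # "stop"
    return await task

result = asyncio.run(main())
print(result > 0)  # True
```

<p>The rule of thumb: a task only runs "in the background" while the rest of the program is <em>awaiting</em> something; any plain blocking call (<code>time.sleep</code>, <code>input</code>, heavy CPU work) freezes every task on the loop.</p>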
| <python><python-3.x><async-await><python-asyncio> | 2023-07-26 02:57:19 | 1 | 1,050 | joseph |
76,767,585 | 16,312,980 | ImportError: cannot import name 'QhullError' from 'scipy.spatial' (/opt/conda/lib/python3.10/site-packages/scipy/spatial/__init__.py) | <p>I am using Kaggle to do some work.</p>
<p>For some reason this line:
<code>import imgaug as aug # data augmentation</code>
caused this error:
<code>ImportError: cannot import name 'QhullError' from 'scipy.spatial' (/opt/conda/lib/python3.10/site-packages/scipy/spatial/__init__.py)</code></p>
<p><code>scipy</code> is <code>1.11.1</code> and <code>scikit-image</code> is <code>0.21.0</code></p>
<p>I picked the >Always use the latest environment.
<a href="https://www.kaggle.com/code/ngguangrenryan/tensorflow-to-pytorch-conversion-a-practice" rel="nofollow noreferrer">Here is the code</a>, it is a conversion project to pytorch from the <a href="https://www.kaggle.com/code/aakashnain/beating-everything-with-depthwise-convolution" rel="nofollow noreferrer">original tensorflow </a></p>
<p>I tried <code>!pip install --upgrade --force-reinstall install scipy</code> but to no avail.</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 10
7 import h5py
8 import shutil
---> 10 import imgaug as aug # data augmentation
11 import numpy as np # linear algebra
12 import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
File /opt/conda/lib/python3.10/site-packages/imgaug/__init__.py:9
4 # this contains some deprecated classes/functions pointing to the new
5 # classes/functions, hence always place the other imports below this so that
6 # the deprecated stuff gets overwritten as much as possible
7 from imgaug.imgaug import * # pylint: disable=redefined-builtin
----> 9 import imgaug.augmentables as augmentables
10 from imgaug.augmentables import *
11 import imgaug.augmenters as augmenters
File /opt/conda/lib/python3.10/site-packages/imgaug/augmentables/__init__.py:8
6 from imgaug.augmentables.lines import *
7 from imgaug.augmentables.heatmaps import *
----> 8 from imgaug.augmentables.segmaps import *
9 from imgaug.augmentables.batches import *
File /opt/conda/lib/python3.10/site-packages/imgaug/augmentables/segmaps.py:12
9 import six.moves as sm
11 from .. import imgaug as ia
---> 12 from ..augmenters import blend as blendlib
13 from .base import IAugmentable
16 @ia.deprecated(alt_func="SegmentationMapsOnImage",
17 comment="(Note the plural 'Maps' instead of old 'Map'.)")
18 def SegmentationMapOnImage(*args, **kwargs):
File /opt/conda/lib/python3.10/site-packages/imgaug/augmenters/__init__.py:21
19 import imgaug.augmenters.pillike # use via: iaa.pillike.*
20 from imgaug.augmenters.pooling import *
---> 21 from imgaug.augmenters.segmentation import *
22 from imgaug.augmenters.size import *
23 from imgaug.augmenters.weather import *
File /opt/conda/lib/python3.10/site-packages/imgaug/augmenters/segmentation.py:21
17 import numpy as np
18 # use skimage.segmentation instead `from skimage import segmentation` here,
19 # because otherwise unittest seems to mix up imgaug.augmenters.segmentation
20 # with skimage.segmentation for whatever reason
---> 21 import skimage.segmentation
22 import skimage.measure
23 import six
File /opt/conda/lib/python3.10/site-packages/skimage/segmentation/__init__.py:7
5 from .slic_superpixels import slic
6 from ._quickshift import quickshift
----> 7 from .boundaries import find_boundaries, mark_boundaries
8 from ._clear_border import clear_border
9 from ._join import join_segmentations, relabel_sequential
File /opt/conda/lib/python3.10/site-packages/skimage/segmentation/boundaries.py:5
2 from scipy import ndimage as ndi
4 from .._shared.utils import _supported_float_type
----> 5 from ..morphology import dilation, erosion, square
6 from ..util import img_as_float, view_as_windows
7 from ..color import gray2rgb
File /opt/conda/lib/python3.10/site-packages/skimage/morphology/__init__.py:12
10 from ..measure._label import label
11 from ._skeletonize import medial_axis, skeletonize, skeletonize_3d, thin
---> 12 from .convex_hull import convex_hull_image, convex_hull_object
13 from .grayreconstruct import reconstruction
14 from .misc import remove_small_holes, remove_small_objects
File /opt/conda/lib/python3.10/site-packages/skimage/morphology/convex_hull.py:4
2 from itertools import product
3 import numpy as np
----> 4 from scipy.spatial import ConvexHull, QhullError
5 from ..measure.pnpoly import grid_points_in_poly
6 from ._convex_hull import possible_hull
</code></pre>
| <python><docker><scipy><kaggle> | 2023-07-26 02:13:34 | 1 | 426 | Ryan |
76,767,495 | 4,443,378 | How to upload file from local python script to Azure container? | <p>I'm trying to upload a json file directly from my python script (VSC) to an Azure blob container.</p>
<p>Here is what I've tried:</p>
<pre><code>account_url = "https://containerxyz.blob.core.windows.net"
default_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(account_url, credential=default_credential)
container_name = 'https://containerxyz.blob.core.windows.net/a/b/raw/'
file = 'test.txt'
contents = 'test'
blob_client = blob_service_client.get_blob_client(container=container_name, blob=contents)
blob_client.upload_blob(name=file, data=contents, overwrite=True)
</code></pre>
<p>I don't even get an error code, it just runs and never stops and I eventually interrupt the kernel after a couple minutes.</p>
<p>The same thing happens when I try it a bit differently:</p>
<pre><code>data = 'test'
container_client = blob_service_client.get_container_client(container=container_name)
container_client.upload_blob(name="test.txt", data=data, overwrite=True)
</code></pre>
<p>I've tried following the Azure docs but they always use examples that take a local file and upload it to azure using "with open(...)" e.g: <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python</a></p>
<p>If I run everything before the <code>upload_blob()</code> function it runs without errors so I'm assuming the problem is there.</p>
| <python><azure><azure-blob-storage> | 2023-07-26 01:46:13 | 2 | 596 | Mitch |
76,767,485 | 18,308,621 | How to achieve `pl.col("code").apply(lambda x: x+"88" if len(x)<=2 else x)` with pure expr logic in Polars? | <p>I use <code>pl.col("code").apply(lambda x: x+"88" if len(x)&lt;=2 else x)</code> in a <code>with_columns</code> to work with some strings, but I think it is slow because it imports some Python logic. Is there a way to achieve this with pure Polars expression logic?</p>
| <python><python-polars> | 2023-07-26 01:43:36 | 1 | 331 | Hakase |
76,767,299 | 2,175,783 | ssh then execute a few cmds in remote linux machine from python | <p>I need to ssh to a remote machine and then execute a few cmds using python 3+.</p>
<p>Based on this answer <a href="https://stackoverflow.com/a/57439663/2175783">https://stackoverflow.com/a/57439663/2175783</a> I tried</p>
<pre><code>cmds = "cmd1; ./script.sh"
output, errors = subprocess.Popen(f'ssh user@{ip} {cmds}', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
</code></pre>
<p>where <code>script.sh</code> is a bash script.</p>
<p>But only <code>cmd1</code> seems to execute (I don't see output from <code>script.sh</code>, only output from <code>cmd1</code>).</p>
<p>Anything obviously wrong?</p>
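<p>My current suspicion (unverified): the <em>local</em> shell splits the string at <code>;</code> before <code>ssh</code> ever sees it, so <code>./script.sh</code> runs locally, not on the remote host. A quoting sketch using the stdlib:</p>

```python
import shlex

cmds = "cmd1; ./script.sh"
ip = "203.0.113.5"  # placeholder address

# shlex.quote() wraps the whole remote command string in single quotes,
# so the local shell passes it to ssh as ONE argument and the splitting
# at ';' happens on the remote side instead.
remote_cmd = f"ssh user@{ip} {shlex.quote(cmds)}"
```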
| <python><bash><ssh> | 2023-07-26 00:38:31 | 4 | 1,496 | user2175783 |
76,767,267 | 5,400,385 | Pydantic create nested structure from single object | <p>Trying to figure out how to create nested attributes in Pydantic from a single SQLAlchemy object. Currently I've got:</p>
<p>SQLAlchemy model:</p>
<pre><code>class FooORM(BaseORM):
__tablename__ = 'foo'
id = Column(Integer)
cost_high = Column(Integer)
cost_low = Column(Integer)
</code></pre>
<p>Pydantic schema:</p>
<pre><code>from pydantic import BaseModel, ConfigDict
class FooModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: int
cost_high: int
cost_low: int
</code></pre>
<p>This will result in:</p>
<pre><code>{"id": 1, "cost_high": 5, "cost_low": 0}
</code></pre>
<p>How do I modify the schema to create the output:</p>
<pre><code>{"id": 1, "cost": {"high": 5, "low": 0}}
</code></pre>
<p>I've thought about:</p>
<pre><code>class Cost(BaseModel):
high: int
low: int
class FooModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: int
cost: Cost
</code></pre>
<p>But that obviously doesn't pass the <code>cost_high</code> to <code>Cost.high</code>. I can do the reverse process and flatten a nested model using a <code>Field</code> with <code>PathAlias</code>, but I can't work out how that would apply to nesting attributes. What have I overlooked?</p>
| <python><pydantic> | 2023-07-26 00:23:41 | 1 | 2,112 | PGHE |
76,767,086 | 4,701,426 | Failing to import class from __init__.py | <p>There is a module called <a href="https://pypi.org/project/uszipcode/#:%7E:text=%60%60uszipcode%60%60%20is%20the,search%20behavior%20as%20you%20wish." rel="nofollow noreferrer">uszipcode</a>. Here is the content of the module directory after installation, if that matters:</p>
<p><a href="https://i.sstatic.net/rm1hj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rm1hj.jpg" alt="enter image description here" /></a></p>
<p>The <code>__init__.py</code> file contains:</p>
<pre><code>from ._version import __version__
__short_description__ = (
"USA zipcode programmable database, includes "
"2020 census data and geometry information."
)
__license__ = "MIT"
__author__ = "Sanhe Hu"
__author_email__ = "husanhe@gmail.com"
__maintainer__ = "Sanhe Hu"
__maintainer_email__ = "husanhe@gmail.com"
__github_username__ = "MacHu-GWU"
try:
from .search import (
SearchEngine,
SimpleZipcode, ComprehensiveZipcode, ZipcodeTypeEnum, SORT_BY_DIST,
)
except ImportError as e: # pragma: no cover
print('hey', e)
except: # pragma: no cover
raise
</code></pre>
<p>Here's <a href="https://github.com/MacHu-GWU/uszipcode-project/blob/master/uszipcode/search.py" rel="nofollow noreferrer">search.py</a> (I've made no changes to any of the files)</p>
<p>When I run this code in a script:</p>
<pre><code>from uszipcode import SearchEngine
</code></pre>
<p>I get <code>ImportError: cannot import name 'SearchEngine' from 'uszipcode' (d:\Programs\Anaconda\lib\site-packages\uszipcode\__init__.py)</code></p>
<p>What is causing this?</p>
| <python><module><importerror> | 2023-07-25 23:18:45 | 0 | 2,151 | Saeed |
76,767,082 | 6,383,910 | Finding node in cloned binary tree using recursion -- always returning None | <p>I tried solving the Leetcode question <a href="https://leetcode.com/problems/find-a-corresponding-node-of-a-binary-tree-in-a-clone-of-that-tree/description/" rel="nofollow noreferrer">1379. Find a Corresponding Node of a Binary Tree in a Clone of That Tree</a>:</p>
<blockquote>
<p>Given two binary trees <code>original</code> and <code>cloned</code> and given a reference to a node <code>target</code> in the original tree.</p>
<p>The cloned tree is a <strong>copy of</strong> the original tree.</p>
<p>Return <em>a reference to the same node</em> in the cloned tree.</p>
<p><strong>Note</strong> that you are <strong>not allowed</strong> to change any of the two trees or the target node and the answer <strong>must be</strong> a reference to a node in the cloned tree.</p>
<h3>Example 1</h3>
<p><strong>Input:</strong> <code>[7,4,3,null,null,6,19]</code> <code>target = 3</code><br>
<strong>Output:</strong> <code>3</code></p>
</blockquote>
<p>The following code passes all tests:</p>
<pre class="lang-py prettyprint-override"><code>if original is None and cloned is None:
return None
if original.val == target.val and cloned.val == target.val:
return cloned
return self.getTargetCopy(original.left, cloned.left, target) or self.getTargetCopy(original.right, cloned.right, target)
</code></pre>
<p>It works fine and I can trace the function calls and understand this really well.</p>
<p>However, I also tried a different approach:</p>
<pre class="lang-py prettyprint-override"><code>class Solution(object):
def getTargetCopy(self, original, cloned, target):
"""
:type original: TreeNode
:type cloned: TreeNode
:type target: TreeNode
:rtype: TreeNode
"""
if original is None and cloned is None:
return None
if original.left and cloned.left:
self.getTargetCopy(original.left, cloned.left, target)
if original.val == cloned.val and original.val == target.val and cloned.val ==target.val:
return cloned
if original.right and cloned.right:
self.getTargetCopy(original.right, cloned.right, target)
</code></pre>
<p>This gives a wrong answer (NULL). I tried tracing the function calls with print statements and found the right <code>if</code> statements being executed. However, it returns NULL for this particular test case and overall passes just 3/56 test cases.</p>
<p>What am I missing here exactly?</p>
| <python><algorithm><recursion> | 2023-07-25 23:18:03 | 1 | 2,132 | Gingerbread |
76,767,025 | 2,153,235 | Importing a class from module into current namespace makes the class look a module within a subpackage | <p>I'm spinning up on both Python and PySpark. I followed <a href="https://sparkbyexamples.com/pyspark/install-pyspark-in-anaconda-jupyter-notebook/?expand_article=1" rel="nofollow noreferrer">this page</a> on installing PySpark in Anaconda on Windows. I tried to get online help on a <code>DataFrame</code> class and its <code>toDF</code> method. From <a href="https://stackoverflow.com/a/76761870/2153235">this explanation</a>, the required import (and subsequent help commands) are:</p>
<pre><code>from pyspark.sql import DataFrame # User import command
help(DataFrame)
help(DataFrame.toDF)
</code></pre>
<p>The code works, but I don't understand why, even after reading extensively on packages, modules, and initialization (e.g., <a href="https://realpython.com/lessons/package-initialization" rel="nofollow noreferrer">here</a>, <a href="https://realpython.com/python-import" rel="nofollow noreferrer">here</a>, and <a href="https://stackoverflow.com/questions/27144872">here</a>).</p>
<p>The <code>DataFrame</code> class is defined in package <code>pyspark</code>, subpackage <code>sql</code>, module file <code>dataframe.py</code>. File <code>pyspark/sql/__init__.py</code> contains initialization</p>
<pre><code># __init__.py import command
from pyspark.sql.dataframe import DataFrame, DataFrameNaFunctions, DataFrameStatFunctions
</code></pre>
<p>I see how this <code>__init__.py import command</code> puts the <code>DataFrame</code> class in the current namespace. In order for the <code>User import command</code> at the top to run, however, <code>DataFrame</code> must appear like a module in the <code>pyspark.sql</code> subpackage. I don't see how the <code>__init__.py import command</code> accomplishes this.</p>
<p>Can someone explain, point to a key passage in one of my cited resources, and/or refer me to other information?</p>
| <python><python-import><python-module> | 2023-07-25 23:02:36 | 1 | 1,265 | user2153235 |
76,767,009 | 11,938,023 | how do i create a declining array of np.zeros from a starting length in numpy | <p>I would like to create this with numpy without using a python [] array:</p>
<pre><code>[array([0, 0, 0, 0, 0, 0, 0])
array([0, 0, 0, 0, 0, 0])
array([0, 0, 0, 0, 0])
array([0, 0, 0, 0])
array([0, 0, 0])
array([0, 0])
array([0])]
</code></pre>
<p>Currently I'm using this, but is there a pure NumPy way to do it?</p>
<pre><code>import numpy as np
# Lengths for each array
lengths = [7, 6, 5, 4, 3, 2, 1]
# Create a NumPy universal function (ufunc) to produce arrays of zeros with specified lengths
zeros_array = np.frompyfunc(lambda x: np.zeros(x, dtype=int), 1, 1)
# Use the ufunc to create the list of NumPy arrays
arrays_list = list(zeros_array(lengths))
print(arrays_list)
</code></pre>
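<p>One idea I had (assuming views into a single flat buffer are acceptable for my use case) is to allocate once and split, which avoids the per-length Python lambda entirely:</p>

```python
import numpy as np

lengths = [7, 6, 5, 4, 3, 2, 1]

# Allocate one flat zeros buffer, then split it at the cumulative
# offsets; np.split returns a list of views with the requested sizes.
flat = np.zeros(sum(lengths), dtype=int)
arrays_list = np.split(flat, np.cumsum(lengths)[:-1])
```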
| <python><arrays><python-3.x><numpy> | 2023-07-25 22:58:30 | 2 | 7,224 | oppressionslayer |
76,766,947 | 4,268,602 | Plotting points on a HSV color wheel | <p>I am using the following code:</p>
<pre><code>import numpy as np
from matplotlib import cm
import matplotlib as mpl
fig = plt.figure()
display_axes = fig.add_axes([0.1,0.1,0.8,0.8], projection='polar')
display_axes._direction = 2*np.pi ## This is a nasty hack - using the hidden field to
## multiply the values such that 1 become 2*pi
## this field is supposed to take values 1 or -1 only!!
norm = mpl.colors.Normalize(0.0, 2*np.pi)
# Plot the colorbar onto the polar axis
# note - use orientation horizontal so that the gradient goes around
# the wheel rather than centre out
quant_steps = 2056
cb = mpl.colorbar.ColorbarBase(display_axes, cmap=cm.get_cmap('hsv',quant_steps),
norm=norm,
orientation='horizontal')
# aesthetics - get rid of border and axis labels
cb.outline.set_visible(False)
display_axes.set_axis_off()
plt.show() # Replace with plt.savefig if you want to save a file
</code></pre>
<p>from <a href="https://stackoverflow.com/questions/31940285/plot-a-polar-color-wheel-based-on-a-colormap-using-python-matplotlib">Plot a (polar) color wheel based on a colormap using Python/Matplotlib</a> here.</p>
<p>I need to plot black points on this color wheel. How could I do this? Is it possible to overlay a scatter plot on top of this color wheel?</p>
| <python><matplotlib> | 2023-07-25 22:43:07 | 1 | 4,156 | Daniel Paczuski Bak |
76,766,815 | 6,626,531 | Python Mypy fails on Decorator | <p>I'm trying to create a decorator. When I apply the decorator to a method in a class in Python, it errors out with the following error:</p>
<pre><code>score.py: note: In class "Scorer":
score.py:31:6: error: Untyped
decorator makes function "score" untyped [misc]
@process_without_columns(ignore_cols=ignore)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</code></pre>
<p>How can I resolve this? I've tried updating the original decorator and adding all the types, but it's not working</p>
<p>Decorator</p>
<pre><code>"""Decoractor Functions."""
from typing import Any, Callable, List, Union
from pandas import DataFrame, concat
# Decorator to process DataFrame with ignore columns
def process_without_columns(
ignore_cols: List[str], final_cols_order: Union[List[str], None] = None
) -> Callable:
"""
Decorate to process a DataFrame, removing specified ignore columns, and then joining them back.
Parameters
----------
ignore_cols: List[str]
List of column names to ignore during processing.
final_cols_order: Union[List[str], None]
List specifying the desired order of columns in the final DataFrame.
If None, the original DataFrame's column order will be used. Default is None.
Returns
-------
decorator_process: Decorator function that processes the DataFrame.
"""
def decorator_process(func: Callable) -> Callable:
def inner(self, data_df: DataFrame, *args: Any, **kwargs: Any) -> DataFrame:
"""
Inner function that performs the actual processing of the DataFrame.
Parameters
----------
data_df: DataFrame
DataFrame to be processed.
*args
args passed into inner function
**kwargs
Kwargs passed into inner function
Returns
-------
DataFrame: Processed DataFrame with the original columns
"""
ignore_df = data_df[
ignore_cols
] # Extract the ignore columns as a separate DataFrame
data_df = data_df.drop(
columns=ignore_cols
) # Remove the ignore columns from the original DataFrame
# Process the DataFrame (smaller DataFrame without ignore columns)
processsed_df = func(self, data_df, *args, **kwargs)
# Join back the processed DataFrame with the ignore columns DataFrame
processsed_df = concat([processsed_df, ignore_df], axis=1)
# Reorder DataFrame columns if final_cols_order is specified
if final_cols_order is not None:
processsed_df = processsed_df[final_cols_order]
return processsed_df
return inner
return decorator_process
</code></pre>
<p>Execution of it</p>
<pre><code>from typing import Any, List
from mydecorator import process_without_columns
ignore: List[Any] = []
class Scorer(Base):
"""Class."""
def __init__(self) -> None:
"""Initialize class."""
@process_without_columns(ignore_cols=ignore)
def score(self, data_df: DataFrame) -> DataFrame:
"""Score function."""
return data_df
</code></pre>
<p>How can I resolve the <code>Untyped decorator makes function "score" untyped</code> error?</p>
| <python><python-3.x><decorator><mypy> | 2023-07-25 22:10:37 | 0 | 1,975 | Micah Pearce |
76,766,794 | 9,310,154 | Beautifulsoup append ignores namespace (xml) | <p>Hi, I want to add single pictures to an existing pictures tag. I am doing it like this:</p>
<pre><code>for i, el in enumerate(data.get("pictures").get('picture')):
img_link = el.get("link")[0].get('href')
test = BeautifulSoup("""
<pic:picture>
<pic:link rel="thumbnail"
href="xxxx" />
</pic:picture>
""", "xml")
soup.ad.pictures.append(test)
print(soup.ad.pictures)
</code></pre>
<p>The result looks like this:</p>
<pre><code><pic:pictures>
<picture>
<link href="xxxx" rel="thumbnail"/>
</picture><picture>
<link href="xxxx" rel="thumbnail"/>
</picture><picture>
<link href="xxxx" rel="thumbnail"/>
</picture><picture>
<link href="xxxx" rel="thumbnail"/>
</picture></pic:pictures>
</code></pre>
<p>Why are the namespaces gone? I tried before to use new_tag and there are namespaces in there. Adding pic:pictures works fine with new_tag method but I was not able to add pic:link to pic:pictures.</p>
| <python><xml><beautifulsoup> | 2023-07-25 22:06:10 | 1 | 2,075 | otto |
76,766,740 | 5,758,423 | What type hint should I use to specify a set of functions | <p>What type hint should I use to specify a set of functions?
I used to use <code>Enum</code> for that, but since Python 3.8 I have gotten into the habit of using the (more convenient, in my opinion) <code>Literal</code>.</p>
<p>Yet...</p>
<p>This code is accepted by my python (3.10) interpreter:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
from operator import le, lt, ge, gt
Literal[le, lt, ge, gt]
</code></pre>
<p>Yet pylance tells me:</p>
<pre><code>Type arguments for "Literal" must be None, a literal value (int, bool, str, or bytes), or an enum value
</code></pre>
<p>What's happening here?
Pylance seems to be misaligned with what the python interpreter actually accepts.
Perhaps pylance is just strongly aligned with the <code>Literal</code> documentation.</p>
<p>Meanwhile, what should I do? Ignore pylance's lint dictatorship?
Use <code>Union[le, lt, ge, gt]</code>? Back to <code>Enum</code>?</p>
| <python><python-typing> | 2023-07-25 21:54:35 | 0 | 2,432 | thorwhalen |
76,766,650 | 10,853,071 | Changing Pandas Period to another frequency | <p>I am struggling to handle some pandas period data. All I need is to easily change its frequency.</p>
<p>Example DF</p>
<pre><code>import pandas as pd
from datetime import datetime
data = pd.DataFrame({
'status' : ['pending', 'pending','pending', 'canceled','canceled','canceled', 'confirmed', 'confirmed','confirmed'],
'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],
'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3'],
'date' : [datetime(2022,1,1),datetime(2023,6,1),datetime(2020,1,12),datetime(2025,1,1),datetime(2024,1,1),datetime(2023,1,1),datetime(2021,1,1),datetime(2022,1,1),datetime(2022,1,1)],
'gmv' : [100,100,100,100,100,100,100,100,100]})
</code></pre>
<p><a href="https://i.sstatic.net/dIbGh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dIbGh.png" alt="example dataframe output" /></a></p>
<p>When I receive the dataframe, one of its columns is already converted "to_period('M')", so I cant just get the "to_period('y')"</p>
<p>Setting the month column on the example dataframe</p>
<pre><code>data['month'] = data.date.dt.to_period('M')
</code></pre>
<p>The question is that I need to change the period frequency to a yearly period. So i tried the <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.asfreq.html" rel="nofollow noreferrer">pandas.series.asfreq</a> execution</p>
<pre><code>data['year'] = data.month.asfreq('Y')
</code></pre>
<p>and all I got is a NaT result.</p>
<p><a href="https://i.sstatic.net/Qt8PW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qt8PW.png" alt="pandas series asfreq results" /></a></p>
<p>But If I set my period column as index, the asfreq works as expected.</p>
<pre><code>data['year'] = data['month']
data = data.set_index('year')
data = data.asfreq('Y')
</code></pre>
<p><a href="https://i.sstatic.net/Rq5Ao.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rq5Ao.png" alt="pandas periodindex asfreq" /></a></p>
<p>What did I get wrong!?</p>
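<p>From what I can tell reading the docs (not sure this is the right reading), <code>Series.asfreq</code> converts the <em>index</em> frequency — which is why it only worked after I set the period column as the index — while the element-wise period conversion lives on the <code>.dt</code> accessor:</p>

```python
import pandas as pd

s = pd.Series(pd.PeriodIndex(["2022-01", "2023-06", "2020-01"], freq="M"))

# Series.asfreq reindexes along the (here: Range) index -> NaT values,
# whereas .dt.asfreq converts each Period value element-wise.
year = s.dt.asfreq("Y")
```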
| <python><pandas> | 2023-07-25 21:34:31 | 2 | 457 | FΓ‘bioRB |
76,766,526 | 22,212,435 | Is it possible to highlight "constants" in Python Pycharm? | <p>It would be lovely if I could make constants look different in color (text or background). But I don't know how to do that. There is a setting, but it doesn't work for Python. I know that there are technically no constants in Python, but maybe it is possible to do that for upper-case variables. Thanks</p>
| <python><python-3.x><pycharm> | 2023-07-25 21:09:28 | 0 | 610 | Danya K |
76,766,436 | 945,118 | How to configure CORS to allow a Chrome Extension in FastAPI? | <p>I need help understanding why the CORS wildcard doesn't work with the prefix <code>chrome-extension</code>. This example will illustrate the problem.</p>
<p>When using this CORS configuration with wildcard, I am getting blocked by CORS when calling FastAPI from chrome-extension <code>background.js</code>.</p>
<pre><code>origins = [
"chrome-extension://*",
"http://localhost:*",
]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
# allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
</code></pre>
<p>But, when fully specifying the Chrome extension with its ID, I don't get the CORS policy error.</p>
<pre><code>origins = [
"chrome-extension://dpcnaflfhdkdeijjglelioklbghepbig",
"http://localhost:*",
]
</code></pre>
<p>When I set origins to <code>["*"]</code> it also works.</p>
<p>Can someone explain to me, what I am doing wrong?</p>
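<p>What I've gathered so far (unverified assumption): entries in <code>allow_origins</code> are compared literally against the <code>Origin</code> header, so the <code>*</code> inside <code>chrome-extension://*</code> is never expanded; a pattern would go through Starlette's <code>allow_origin_regex</code> parameter instead. I can at least sanity-check the pattern with plain <code>re</code>:</p>

```python
import re

# A regex covering any chrome-extension origin plus localhost on any port.
# Intended for CORSMiddleware's allow_origin_regex parameter, e.g.:
# app.add_middleware(CORSMiddleware, allow_origin_regex=origin_regex, ...)
origin_regex = r"chrome-extension://.*|http://localhost(:\d+)?"

assert re.fullmatch(origin_regex, "chrome-extension://dpcnaflfhdkdeijjglelioklbghepbig")
assert re.fullmatch(origin_regex, "http://localhost:3000")
```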
| <python><google-chrome-extension><cors><fastapi> | 2023-07-25 20:51:38 | 1 | 397 | eboraks |
76,766,378 | 5,495,134 | Enable GPU for Spacy | <p>I'm trying to set up a GPU to train models using spaCy.</p>
<p>I've configured a docker container with GPU and it seems like it can be seen from Pytorch
<a href="https://i.sstatic.net/3aoRa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3aoRa.png" alt="enter image description here" /></a></p>
<p>Following the <a href="https://spacy.io/usage" rel="nofollow noreferrer">spacy site</a>, to enable GPU I run the command</p>
<pre><code>pip install -U 'spacy[cuda-autodetect]'
</code></pre>
<p>It fails saying it doesn't detect CUDA</p>
<p><a href="https://i.sstatic.net/yW1Ov.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yW1Ov.png" alt="enter image description here" /></a></p>
<p>If I run the PyTorch <a href="https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py" rel="nofollow noreferrer">collect_env</a> script this is what I get
<a href="https://i.sstatic.net/Wcguz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wcguz.png" alt="enter image description here" /></a></p>
<p>If I don't try to install this library and run
<code>spacy.prefer_gpu</code> I get a <code>False</code></p>
<p>I'm not sure what could be missing</p>
<p>As a side note, I can use the GPU with other libraries like transformers</p>
<p><a href="https://i.sstatic.net/XP5uM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XP5uM.png" alt="enter image description here" /></a></p>
| <python><pytorch><gpu><spacy><spacy-transformers> | 2023-07-25 20:41:05 | 1 | 787 | Rodrigo A |
76,766,188 | 11,693,768 | How to add a try until success loop inside aiohttp requests | <p>I have the following code,</p>
<pre><code>import aiohttp
import asyncio
async def get_data(session, x):
try:
async with session.get(url=f'https://api.abc.com/{x}') as response:
data = await response
if data.status_code == 200:
data = float(data.json())
return data
else:
continue
except Exception as e:
print("Unable to get url {} due to {}.".format(x, e.__class__))
return None
async def main(datas):
async with aiohttp.ClientSession() as session:
ret = await asyncio.gather(*[get_data(session, data) for data in datas])
return {datas[i]: ret[i] for i in range(len(datas))} # Return the results as a dictionary
datas = ['x1', 'x2', 'x3', 'x4']
results = asyncio.run(main(datas))
</code></pre>
<p>I want to query the API until I get a 200 response and am able to get <code>float(data)</code>; if the data is text, the conversion will fail, so it should retry <code>float(data)</code> until it succeeds.</p>
<p>If either fails, I want to retry until it succeeds.</p>
<p>Will the code work? I am not used to async code.</p>
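<p>Structurally, what I have in mind is the retry loop below — with a fake fetch standing in for the real <code>session.get</code> call, since I can't hit the API here (the fake function and its failure count are purely illustrative):</p>

```python
import asyncio

async def fake_fetch(state):
    # stand-in for the real aiohttp request; fails twice, then succeeds
    state["tries"] += 1
    if state["tries"] < 3:
        raise ValueError("body was not a float yet")
    return "1.23"

async def get_data(x, max_retries=5, delay=0.01):
    state = {"tries": 0}
    for _ in range(max_retries):
        try:
            # both a bad status and a non-float body end up here as exceptions
            return float(await fake_fetch(state))
        except (ValueError, TypeError):
            await asyncio.sleep(delay)  # brief back-off before retrying
    return None  # give up after max_retries attempts

result = asyncio.run(get_data("x1"))
```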
| <python><asynchronous><python-requests><python-asyncio><aiohttp> | 2023-07-25 20:05:41 | 2 | 5,234 | anarchy |
76,766,136 | 4,134,149 | Pandas pd.to_datetime() assigns object dtype instead of datetime64[ns] | <p>I have encountered an issue while using the pd.to_datetime() function in pandas. When I try to convert the "datetime" column in my DataFrame to a datetime64[ns] dtype using the .loc method, the dtype remains as an object. However, using the direct assignment with square brackets, the conversion works correctly, resulting in the datetime64[ns] dtype as expected.</p>
<p>Version, that does not work:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[:, "datetime"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
####### Results #######
id int64
testcolumn object
datetime object
value object
</code></pre>
<p>Version, that does work (but only with generating a new column):</p>
<pre class="lang-py prettyprint-override"><code>df.loc[:, "foo"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
####### Results #######
id int64
testcolumn object
datetime object
value object
foo datetime64[ns]
</code></pre>
<p>Version, that also works:</p>
<pre class="lang-py prettyprint-override"><code>df["datetime"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
####### Results #######
id int64
testcolumn object
datetime datetime64[ns]
value object
</code></pre>
<p>I'm confused as to why the first code snippet fails to convert the "datetime" column to datetime64[ns] properly, while the second code snippet successfully accomplishes the conversion. The only difference between the two code snippets is the method of assigning the converted values back to the DataFrame.</p>
<p>Can someone explain why the first code snippet using .loc does not work as expected for converting the dtype to datetime64[ns]? Additionally, I would appreciate any insights or alternative approaches to perform the conversion using .loc or reasons why it behaves differently compared to direct assignment with square brackets.</p>
<p>Here is a MVCE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd # version: '2.0.1'
# Sample data for the DataFrame
data = {
'id': [1, 2, 3],
'testcolumn': ['A', 'B', 'C'],
'datetime': ['2023-07-25 10:30:00', '2023-07-26 12:45:00', '2023-07-27 15:15:00'],
'value': ['100', '200', '300']
}
# Create the DataFrame
df = pd.DataFrame(data)
df.loc[:, "datetime"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
df.loc[:, "foo"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
df["datetime"] = pd.to_datetime(df["datetime"])
print(df.dtypes)
</code></pre>
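<p>For completeness, here is the check I'm using to confirm the working variant — my (version-dependent, possibly wrong) understanding is that <code>.loc[:, "datetime"]</code> writes the values into the existing object-dtype column in place, while plain column assignment replaces the column wholesale, dtype included:</p>

```python
import pandas as pd

df = pd.DataFrame({"datetime": ["2023-07-25 10:30:00", "2023-07-26 12:45:00"]})

# Plain assignment swaps in a brand-new column, so the datetime64
# dtype produced by to_datetime survives.
df["datetime"] = pd.to_datetime(df["datetime"])
```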
| <python><pandas><datetime> | 2023-07-25 19:56:59 | 1 | 643 | Bodydrop |
76,766,080 | 13,567,897 | Authentication via `az login` for Azure DevOps in custom script | <p>I'm trying to create a Python script to interact with Azure DevOps and I have a problem with authentication. I don't want to use a PAT. When I try to use <code>DefaultAzureCredential</code> from <code>azure.identity</code>, I get the following error:
<code>'DefaultAzureCredential' object has no attribute 'signed_session'</code></p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.devops.connection import Connection
credential = DefaultAzureCredential()
connection = Connection(base_url="https://dev.azure.com/org_name", creds=credential)
core_client = connection.clients.get_core_client()
projects = core_client.get_projects()
</code></pre>
<p>I found another way. This works but I noticed it is recommended to use <code>azure.identity</code> instead of <code>azure.common.credentials.get_azure_cli_credentials()</code>.</p>
<pre class="lang-py prettyprint-override"><code>from azure.common.credentials import get_azure_cli_credentials
from azure.devops.connection import Connection
credential = get_azure_cli_credentials()[0]
connection = Connection(base_url="https://dev.azure.com/org_name", creds=credential)
core_client = connection.clients.get_core_client()
projects = core_client.get_projects()
</code></pre>
<p>Am I doing something wrong with <code>azure.identity</code> or is there a better way?</p>
<p>UPDATE: <a href="https://learn.microsoft.com/en-us/python/api/azure-common/azure.common.credentials?view=azure-python-previous#azure-common-credentials-get-azure-cli-credentials" rel="nofollow noreferrer">get_azure_cli_credentials</a> doesn't work with new version of az cli</p>
<blockquote>
<p>This method is not working for azure-cli-core>=2.21.0 (released in March 2021). It is now recommended to authenticate using <a href="https://pypi.org/project/azure-identity/" rel="nofollow noreferrer">https://pypi.org/project/azure-identity/</a> and AzureCliCredential.</p>
</blockquote>
| <python><azure-devops><azure-devops-rest-api> | 2023-07-25 19:47:47 | 2 | 508 | Marcin SΕowikowski |
76,765,950 | 46,503 | How to coalesce the repeating sequence of non-alphanumeric symbols into just one in Python? | <p>I have a text that may contain repeating non-alphanumeric sequences, for example:</p>
<pre><code>abc\n \n def\n \n kk
</code></pre>
<p>In this example, the carriage return symbol is followed by the space and it's repeating. I need to leave only one such sequence:</p>
<pre><code>abc\n def\n kk
</code></pre>
<p>The problem is I can't predict which exact symbols the sequence could consist of (it could be space, tab, whatever) so I need some solution (in regex I guess) that finds all such repetitions and replace them with just one.</p>
<p>Other cases:</p>
<ul>
<li>sequence of multiple commas</li>
<li>tab followed by space</li>
<li>sequence of spaces</li>
<li>and others.</li>
</ul>
<p>The solution by @Barmar is almost perfect but for some reason, it doesn't replace all such sequences in this example:</p>
<pre><code>\n \n \n \n \n \n aaa \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n bbb \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n aaa \n \n \n \n \n \xa0May 25, 2023 \n\n \n\n \n \n \n \n \n \n\n \n ttt\n\n \n \n \n Read more \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n tt \n \n\n \xa0β’\xa0Β©\n \n 2023\n \n\n \n \n sss \xa0 eee \n \n \n \n \n \n \n \n \n
</code></pre>
<p>Result:</p>
<pre><code>\n aaa \n bbb \n aaa \n \n \xa0May 25, 2023 \n\n \n \n \n \n ttt\n \n \n Read more \n tt \n \n \xa0β’\xa0Β©\n 2023\n \n \n sss \xa0 eee \n
</code></pre>
<p>Upd. The example above is probably not a fair test: it is itself the result of earlier replacements, so leftover sequences remain. Applying the regex repeatedly helps.</p>
<p>Upd 2. When there is a web URL like <code>https://nomads.com</code>, this script removes one slash from it, which is inappropriate. Any fix?</p>
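<p>For reference, the variant I'm currently testing: repeat the substitution until the text is stable, and exclude <code>/</code> from the character class so the <code>//</code> in URLs survives (a trade-off — runs of slashes are then never coalesced, which is fine for my data but may not cover every case):</p>

```python
import re

# '/' is excluded from the class so 'https://' keeps its double slash.
PATTERN = re.compile(r"([^\w/]+)\1+")

def coalesce(text: str) -> str:
    # Re-apply until stable: one pass can leave newly adjacent repeats.
    prev = None
    while prev != text:
        prev, text = text, PATTERN.sub(r"\1", text)
    return text
```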
| <python><regex> | 2023-07-25 19:28:47 | 1 | 5,287 | mimic |
76,765,852 | 8,189,599 | Local Flask app get full file path from form upload | <p>I have a flask app running locally (and it will always run locally, although potentially on different machines). This app is essentially a GUI for a local python library, which requires a file path as an argument (to then run a script on this file). I am trying to use a form with an upload field to let the user browse and select the file path in a familiar manner, however once submit is clicked, I do not want any files 'uploaded' or copied, as they are huge files. Rather, I just want to get the full path of the file the user selected in the file browser. So far I have the code below, however it only prints the file name.</p>
<p>flask_app.py</p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
@app.route('/', methods=['POST', 'GET'])
def hello():
if request.method == 'POST':
email = request.form.get('myfile')
print(request.files)
return render_template('index.html')
</code></pre>
<p>templates/index.html</p>
<pre><code><form method="post">
<label for="myfile">Select a file:</label>
<input type="file" id="myfile" name="myfile">
<input type="submit">
</form>
</code></pre>
<p>when I run this app and select, for example 'myfile.txt', it prints the following to console:</p>
<pre><code>ImmutableMultiDict([('myfile', 'myfile.txt')])
</code></pre>
<p>ideally this should read something like:</p>
<pre><code>ImmutableMultiDict([('myfile', '../../random/folder/myfile.txt')])
</code></pre>
<p>or even better an absolute path:</p>
<pre><code>ImmutableMultiDict([('myfile', 'C:/Users/me/Documents/myfile.txt')])
</code></pre>
| <python><flask> | 2023-07-25 19:12:32 | 1 | 794 | figbar |
76,765,693 | 4,833,773 | FastAPI coroutine within thread or thread within coroutine | <p>I am writing a FastAPI application and, for a given path operation, I need to call two functions: one is an IO operation that <em>doesn't</em> support <code>async</code> (let's call it function <code>A</code>) and the other one is an IO operation that <em>does</em> support <code>async</code> (let's call it function <code>B</code>). They need to be called sequentially, and there is some other <em>non-IO</em> operation that needs to happen between them. I think I have the following alternatives:</p>
<ol>
<li>Define the path operation with <code>async</code> and call <code>A</code> as usual, then <code>await</code> B.</li>
<li>Define the path operation with <code>async</code>, run <code>A</code> in a threadpool (e.g. <code>fastapi.concurrency.run_in_threadpool</code>) and then <code>await</code> B.</li>
<li>Define the path operation <em>without</em> <code>async</code>, call <code>A</code> as usual and then call <code>B</code> using <code>run_until_complete</code> from an event loop.</li>
</ol>
<p>I wonder what the best option is in terms of performance. I am sure there may be a lot of subtle details to figure out before giving a definitive answer, like the exact implementations of functions <code>A</code> and <code>B</code>. However, I was hoping I could find some general understanding of what is going on in this situation and what to do in general if I can't afford running a benchmark on my exact implementations.</p>
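For reference, the threadpool alternative can be sketched with plain asyncio, using `asyncio.to_thread` as a stand-in for `fastapi.concurrency.run_in_threadpool` and trivial placeholder functions for `A` and `B` (all names here are illustrative):

```python
import asyncio
import time

def func_a():                      # stand-in for the sync IO function A
    time.sleep(0.01)
    return "A"

async def func_b():                # stand-in for the async IO function B
    await asyncio.sleep(0.01)
    return "B"

async def endpoint():
    # push the blocking call onto a worker thread so the event loop
    # stays free to serve other requests while A runs
    a = await asyncio.to_thread(func_a)
    # ... the non-IO work between the two calls would go here ...
    b = await func_b()
    return a + b

result = asyncio.run(endpoint())
```

The key point of this shape is that the event loop is never blocked: `A` runs on a worker thread while `B` and other coroutines can still be scheduled.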
| <python><python-asyncio><fastapi><python-multithreading> | 2023-07-25 18:46:57 | 2 | 515 | srcolinas |
76,765,685 | 21,003,650 | Positional arguments in lmfit cannot be SciPy CubicSpline datatype | <p>Problem:<br />
The array of <em>positional arguments</em> I plug into lmfit <code>minimize()</code> is obtained from <code>scipy.interpolate.CubicSpline</code>.</p>
<p>I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: unsupported operand type(s) for *: 'CubicSpline' and 'float'
</code></pre>
<p>The <code>*</code> operation is used in my model function where I multiply the positional argument array (<code>x</code>) with a fitting parameter (<code>pars['a']</code>) as shown below. Basically, lmfit thinks (<code>x_list</code>) is a type <code>CubicSpline</code> even though it is an array of floats.</p>
<p><strong>My Code</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import scipy.interpolate as scpi
import numpy as np
from lmfit import Parameters, minimize
A = np.linspace(0,1,100)
B = A**2
y_list = np.linspace(1,100,100)
x_list = scpi.CubicSpline(A, B)
def model(pars, x, data=None):
model = x*pars['a']
if data is None:
return model
return model - data
fit_params = Parameters()
fit_params.add('a', value = 1., min = 0.0, vary=True)
coeff = minimize(model,
fit_params,
args=(x_list(A),),
kws={'data': y_list}, #cut off distance
method='basinhopping',
)
</code></pre>
<p>Here is what I have tried so far:</p>
<ul>
<li><code>type(x_list)</code> shows it is an array of floats</li>
<li><code>x_list*0.1</code> works in a cell</li>
<li>type casting <code>x_list</code> as an array of floats does not fix the problem: <code>x_list = np.asarray([float(j) for j in x_list])</code></li>
</ul>
| <python><scipy><lmfit> | 2023-07-25 18:45:26 | 1 | 383 | Elijah |
76,765,663 | 7,535,168 | How to update data in nested RecycleViews? | <p>I've got a follow-up question on <a href="https://stackoverflow.com/questions/76665363/is-it-possible-to-change-font-size-in-recycleview-widgets">Is it possible to change font size in Recycleview widgets?</a> (thank you John for your help over there).</p>
<p>I'm having a bunch of nested <code>MDRecycleView</code> instances and I'd like to update their data, specifically <code>font_size</code>. Updating <code>font_size</code> in my original question turned out not to be so complicated because you have a reference to the actual <code>MDRecycleView</code> to call the <code>refresh_from_data()</code> method from. Now with the nested <code>MDRecycleView</code> instances all defined inside the .kv file, I'm still able to "update" the data of all the instances but haven't found a way to obtain the references of nested <code>MDRecycleView</code> instances to call their respective <code>refresh_from_data()</code> method. I would also like to keep the current architecture where I have all the objects living inside a .kv file (string).</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager
from kivy.uix.screenmanager import Screen
from kivymd.uix.boxlayout import MDBoxLayout
kv = """
<Content>:
bg_color: app.theme_cls.primary_dark
item1: ''
item2: ''
font_size: '15dp'
MDGridLayout:
rows: 2
MDLabel:
id: firstLabelId
halign: 'center'
text: root.item1
font_size: root.font_size
MDLabel:
id: secondLabelId
halign: 'center'
md_bg_color: root.bg_color
text: root.item2
font_size: root.font_size
<DailyService>:
bg_color: app.theme_cls.primary_dark
day: ''
innerData: []
font_size: '15dp'
MDGridLayout:
rows: 2
MDLabel:
id: serviceId
halign: 'center'
text: root.day
font_size: root.font_size
MDRecycleView:
viewclass: 'Content'
id: statisticContentRecycleViewId
data: root.innerData
do_scroll_y: False
do_scroll_x: False
RecycleBoxLayout:
default_size_hint: 1, 1
orientation: 'vertical'
<MainScreen>:
name: 'mainScreen'
rvid: myRv
MDRelativeLayout:
orientation: 'vertical'
MDRecycleView:
viewclass: 'DailyService'
id: myRv
RecycleBoxLayout:
default_size: None, dp(200)
default_size_hint: 1, None
size_hint_y: None
height: self.minimum_height
orientation: 'vertical'
MDSlider:
color: 'white'
orientation: 'horizontal'
size_hint: (0.2, 0.2)
pos_hint: {"x":0.4, "top": 1}
min: 10
value: 20
max: 30
on_value_normalized: root.fontSizeSlider(self.value)
MyScreenManager:
mainScreen: mainScreenId
MainScreen:
id: mainScreenId
"""
class Content(MDBoxLayout):
pass
class DailyService(MDBoxLayout):
pass
class MainScreen(Screen):
def __init__(self, **kwargs):
super(MainScreen, self).__init__(**kwargs)
def fontSizeSlider(self, value):
rv = self.ids.myRv
data = rv.data
for v in data:
v['font_size'] = str(int(value)) + 'dp'
'''
innerData = v['innerData']
for innerV in innerData:
innerV['font_size'] = str(int(value)) + 'dp'
### I believe the missing refresh_from_data() calls cause the issue
'''
rv.refresh_from_data()
class MyScreenManager(ScreenManager):
def __init__(self, **kwargs):
super(MyScreenManager, self).__init__(**kwargs)
class MyApp(MDApp):
def on_start(self):
data = []
for i in range(10):
innerData = []
for i in range(2):
innerData.append({'item1': 'ITEM1',
'item2': 'ITEM2'})
data.append({'day': 'DAY','innerData': innerData})
self.root.ids.mainScreenId.rvid.data = data
def build(self):
self.theme_cls.theme_style = 'Dark'
self.theme_cls.primary_palette = 'Blue'
self.theme_cls.accent_palette = 'Amber'
return Builder.load_string(kv)
if __name__ == '__main__':
MyApp().run()
</code></pre>
| <python><kivy><kivymd> | 2023-07-25 18:40:42 | 1 | 601 | domdrag |
76,765,550 | 9,855,588 | isinstance failing on same class types | <p>Can anyone help make sense of this?</p>
<p>Using google python sdk as an example, according to Google retry policy (<a href="https://cloud.google.com/python/docs/reference/storage/latest/retry_timeout" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/storage/latest/retry_timeout</a>):</p>
<pre><code>from google.api_core import exceptions
from google.api_core.retry import Retry
_MY_RETRIABLE_TYPES = (
exceptions.TooManyRequests, # 429
exceptions.InternalServerError, # 500
exceptions.BadGateway, # 502
exceptions.ServiceUnavailable, # 503
)
def is_retryable(exc):
return isinstance(exc, _MY_RETRIABLE_TYPES)
my_retry_policy = Retry(predicate=is_retryable)
</code></pre>
<p>Why does the following occur when testing <code>is_retryable</code>?</p>
<pre><code>exceptions.TooManyRequests==exceptions.TooManyRequests -> True
is_retryable(exceptions.TooManyRequests) -> False
is_retryable(429) -> False
is_retryable(exceptions.TooManyRequests.code) -> False
is_retryable(exceptions.TooManyRequests.code.value) -> False
</code></pre>
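The pattern in these results can be reproduced with any exception class, independent of the google libraries: `isinstance()` returns `False` when given the class object itself rather than an instance of it. A self-contained illustration with a plain `Exception` subclass:

```python
class TooMany(Exception):
    pass

RETRIABLE_TYPES = (TooMany,)

def is_retryable(exc):
    return isinstance(exc, RETRIABLE_TYPES)

print(is_retryable(TooMany))                  # the class object, not an instance
print(is_retryable(TooMany("boom")))          # a constructed/raised instance
print(issubclass(TooMany, RETRIABLE_TYPES))   # the class-level counterpart
```

In a retry predicate the argument is the raised exception instance, so the first call above models passing the class by mistake.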
| <python><python-3.x><google-ads-api> | 2023-07-25 18:22:30 | 1 | 3,221 | dataviews |
76,765,549 | 15,893,581 | find all roots of the system of 2 equations with scipy.optimize | <p><code>scipy.optimize.root</code> finds only one root (in my tests), the one it converges to fastest because it is closest to the initial guess. To solve the system of equations on the bounded interval [-4, 4], all I could come up with was a loop to find <strong>ALL roots</strong>, with an initial visual estimation of each init_guess at a glance, refined further in the loop:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import root
x = np.linspace(-4,4,100)
def fs(y):
return (2*y**2+3)
def fd(x):
return x**3+x**2-6*x+3
def obj(x):
return fs(x)-fd(x)
# visualize approximate bounds to define init_guess for further root func
plt.plot( x, fs(x), 'r-', x, fd(x), 'b-')
plt.show()
############## optimize
roots=[]
def find_root( guess):
res = root(obj, [guess, ], method='krylov', tol=1.4e-100)
if res.success== True:
roots.append([round(res.x[0],2), round(fd(res.x[0]),2)])
else:
res.message
exit
# cycle: List Comprehension -
[ find_root(x) for x in np.array([4.,1.,-1.5], dtype=float) ]
############## plot
r= np.array(roots).T; print(r)
import pandas as pd
df= pd.DataFrame(r, {'x': r[1], 'y': r[0]}).T; print(df)
print(df)
plt.plot(df['x'], df['y'], 'yo', x, fd(x), 'r-', x, fs(x), 'b-', ms=20, )
plt.show()
</code></pre>
<p>Q: is there any method/function in <code>scipy</code> that avoids the loop and avoids the visual estimation needed to choose good initial guesses, i.e. that finds <strong>ALL ROOTS</strong> programmatically?</p>
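As a side note for this specific example: both curves happen to be polynomials, so their difference is a polynomial and `numpy.roots` can return every root at once with no initial guesses at all (this does not generalize to non-polynomial objectives):

```python
import numpy as np

# obj(x) = fs(x) - fd(x) = (2*x**2 + 3) - (x**3 + x**2 - 6*x + 3)
#        = -x**3 + x**2 + 6*x
coeffs = [-1, 1, 6, 0]            # coefficients, highest degree first
all_roots = np.sort(np.roots(coeffs).real)
print(all_roots)
```

`np.roots` works by computing the eigenvalues of the companion matrix, so it returns all roots simultaneously, including any outside a chosen interval, which can then be filtered.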
| <python><scipy-optimize> | 2023-07-25 18:22:19 | 1 | 645 | JeeyCi |
76,765,523 | 871,910 | Python list[T] not assignable to list[T | None] | <p>I have a type <code>T</code>, a <code>namedtuple</code> actually:</p>
<pre class="lang-py prettyprint-override"><code>from collections import namedtuple
T = namedtuple('T', ('a', 'b'))
</code></pre>
<p>I have a function that accepts a <code>list[T | None]</code> and a list:</p>
<pre class="lang-py prettyprint-override"><code>def func(arg: list[T | None]):
...
l = [T(1, 2), T(2, 3)]
func(l) # Pylance error
</code></pre>
<p><code>l</code> is of type <code>list[T]</code>.</p>
<p>When I pass a <code>l</code> to the function I get an error from Pylance, that a <code>list[T]</code> is incompatible with <code>list[T | None]</code> because <code>T cannot be assigned to T | None</code>.</p>
<p>Aside from manually specifying my <code>list[T]</code> is actually a <code>list[T | None]</code>, what can I do to make this work without an error? Of course at runtime everything runs as expected.</p>
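For background, `list[T]` is invariant precisely because lists are mutable (a callee typed as `list[T | None]` could legally append `None`). If `func` only reads its argument, one common workaround is to accept the covariant `Sequence` type instead; a sketch, assuming `func` never mutates `arg`:

```python
from collections import namedtuple
from typing import Optional, Sequence

T = namedtuple("T", ("a", "b"))

def func(arg: Sequence[Optional[T]]) -> int:
    # read-only access is what makes the covariant Sequence type safe here
    return sum(1 for item in arg if item is not None)

l = [T(1, 2), T(2, 3)]
print(func(l))  # a list[T] is accepted as a Sequence[T | None]
```

Because `Sequence` has no mutating methods, type checkers allow `list[T]` wherever `Sequence[T | None]` is expected, with no cast or manual annotation needed at the call site.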
| <python><python-typing><pyright> | 2023-07-25 18:18:41 | 1 | 39,097 | zmbq |
76,765,457 | 1,457,672 | pandas: create rows of sentences (with identifier) from text | <p>I have a pandas dataframe that looks like this:</p>
<pre><code>textID1, text1, othermetadata1
textID2, text2, othermetadata2
textID3, text3, othermetadata3
</code></pre>
<p>I would like to break the texts into sentences in a new data frame that would look like this:</p>
<pre><code>textID1-001, sentence1 (of text1), othermetadata1
textID1-002, sentence2 (of text1), othermetadata1
textID2-001, sentence1 (of text2), othermetadata2
</code></pre>
<p>I know how to break texts into sentences using either the NLTK or spaCy, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>sentences = [ sent_tokenize(text) for text in texts ]
</code></pre>
<p>But pandas continues to confound me: how do I take the output and pack it back into a data frame? Moreover, how do I add numbers either to an extant column or create a new column that restarts numbering with each text -- my assumption being that I could merge the <strong>textID</strong> and <strong>sentenceID</strong> columns afterwards?</p>
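One possible shape for the reassembly step, sketched with `DataFrame.explode` to flatten the per-row sentence lists and `groupby().cumcount()` for numbering that restarts with each text; a naive regex splitter stands in for `sent_tokenize` here so the snippet is self-contained:

```python
import pandas as pd

df = pd.DataFrame({
    "textID": ["textID1", "textID2"],
    "text": ["First sentence. Second sentence.", "Only sentence."],
    "meta": ["othermetadata1", "othermetadata2"],
})

# naive splitter as a stand-in for nltk's sent_tokenize / spaCy
df["sentence"] = df["text"].str.split(r"(?<=\.)\s+")

out = df.explode("sentence", ignore_index=True)
# per-text counter that restarts at 1 for each textID
out["n"] = out.groupby("textID").cumcount() + 1
out["textID"] = out["textID"] + "-" + out["n"].astype(str).str.zfill(3)
out = out.drop(columns=["text", "n"])
```

`explode` duplicates the metadata columns for every sentence automatically, so the only extra work is the counter column and the ID merge.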
| <python><pandas><nlp><nltk><spacy> | 2023-07-25 18:08:44 | 1 | 407 | John Laudun |
76,765,395 | 3,130,747 | How to mypy typehint an attrs validator attribute | <p>Given the following:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import attr
@attr.define(frozen=False)
class ExampleAtt:
x: str = attr.field()
@x.validator
def check(self, attribute: attr.Attribute, value: str) -> None:
expected_values = ['a', 'b', 'c']
if value not in expected_values:
raise ValueError('error')
</code></pre>
<p>I get the mypy error:</p>
<pre><code>error: Missing type parameters for generic type "Attribute" [type-arg]
</code></pre>
<p>How can I type hint an attrs <code>attribute</code> used within a validator ? I've tried what's in the code above, as well as <code>attr._make.Attribute</code> found from <code>type(attribute)</code>.</p>
| <python><attr><python-attrs> | 2023-07-25 17:58:59 | 2 | 4,944 | baxx |
76,765,374 | 1,246,950 | binance api webhook for live update of future market prices | <p>Hello, I am trying to set up a webhook to Binance for live updates (every second) of futures market prices,<br>
but I keep getting this response: <code>++Rcv decoded: fin=1 opcode=1 data=b'{"id":1,"result":null}'</code>.<br>
It should be really easy, but I can't get it to work,<br>
nor can I find a working example on Google.<br>
Any help is welcome, thanks.</p>
<p>code:</p>
<pre><code>import websocket
import json
import threading
import time
def on_message(ws, message):
data = json.loads(message)
if 'data' in data:
# Extract the market price from the data
market_price = data['data']['markPrice']
print("Market Price:", market_price)
def on_error(ws, error):
print("Error:", error)
def on_close(ws, close_status_code, close_msg):
print("WebSocket closed")
def on_open(ws):
print("WebSocket connected")
# Subscribe to the BTCUSDT perpetual market price updates
subscription_message = {
"method": "SUBSCRIBE",
"params": ["btcusdt_perpetual@markPrice"],
"id": 1
}
ws.send(json.dumps(subscription_message))
def run_websocket():
websocket.enableTrace(True)
ws = websocket.WebSocketApp(
"wss://fstream.binance.com/ws/btcusdt_perpetual@markPrice",
on_message=on_message,
on_error=on_error,
on_close=on_close
)
ws.on_open = on_open
ws.run_forever()
if __name__ == "__main__":
# Run the WebSocket connection in a separate thread
websocket_thread = threading.Thread(target=run_websocket)
websocket_thread.start()
# Keep the main thread running to fetch prices every second
while True:
# Wait for 1 second
time.sleep(1)
</code></pre>
| <python><webhooks><binance> | 2023-07-25 17:55:05 | 2 | 1,102 | user1246950 |
76,765,355 | 8,508 | In Wagtail 4.0, How do I query for revisions whose pages are not live? | <p>I am upgrading some code from wagtail 3.0 to wagtail 4.0. My code has one problem query in it that I can not figure out how to fix.</p>
<p>In the old code, the query looks like this:</p>
<pre><code>PageRevision.objects.filter(approved_go_live_at__isnull=False, page__live=False)
</code></pre>
<p>With <a href="https://docs.wagtail.org/en/stable/releases/4.0.html#pagerevision-replaced-with-revision" rel="nofollow noreferrer">PageRevision being deprecated</a>, I updated it to the following</p>
<pre><code>Revision.page_revisions.filter(approved_go_live_at__isnull=False, page__live=False)
</code></pre>
<p>This resulted in an error, caused by a type mismatch in sql:</p>
<pre><code>ProgrammingError: operator does not exist: character varying = integer
LINE 1: ...core_page" ON ("wagtailcore_revision"."object_id" = "wagtail...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
</code></pre>
<p>After reexamining the docs I changed it to:</p>
<pre><code>Revision.page_revisions.filter(approved_go_live_at__isnull=False, content_object__live=False)
</code></pre>
<p>This just got a different error:</p>
<pre><code>FieldError: Field 'content_object' does not generate an automatic reverse relation and therefore cannot be used for reverse querying. If it is a GenericForeignKey, consider adding a GenericRelation.
</code></pre>
<p>Now I am confused, because <code>content_object</code> is a field directly on Revision, so it shouldn't be a 'reverse` relation.</p>
<p>Looking at <code>Page</code>, it seems like it <em>does</em> have a <code>GenericRelation</code>, (with <code>related_query_name=page</code>) pointing back to Revision. But that's what I tried to use the first time and got a sql type mismatch.</p>
<p>The documentation talks about type casting, but I don't see how to get django to type cast in the JOIN clause that it generates.</p>
<p>Final Question:
How can I query for <code>Revision</code>s, filtering by a field on the related Page?</p>
| <python><sql><django><django-queryset><wagtail> | 2023-07-25 17:52:20 | 1 | 15,639 | Matthew Scouten |
76,765,229 | 268,847 | Where does uvicorn/FastAPI display/log unhandled errors? | <p>I am running a simple FastAPI application under uvicorn. The FastAPI code is this:</p>
<pre><code>from fastapi import FastAPI
app = FastAPI()
@app.post("/events")
def create_events():
print("entering create_events()")
raise Exception("an error")
</code></pre>
<p>I run the app:</p>
<pre><code>uvicorn api.main:app --reload --log-level=debug
</code></pre>
<p>I now post to the endpoint using wget:</p>
<pre><code>wget -O- --header='Content-Type:application/json' --post-file=/tmp/data.json http://127.0.0.1:8000/events
</code></pre>
<p>Not surprisingly, the wget returns a 500 Internal Server Error.</p>
<p>In the output of the terminal where I ran uvicorn I see this:</p>
<pre><code>entering create_events()
</code></pre>
<p>In other web application contexts (Perl, Ruby on Rails, Python with Flask) if the server raises an unhandled exception I can <em>see</em> the error message on the server side somewhere: in a log file, on standard output, <em>somewhere</em>. But in this FastAPI/uvicorn application I <strong>cannot find</strong> any error message. I don't see it in the place where I ran the wget and I don't see it in the uvicorn terminal.</p>
<p>Where is the 500 error message logged/displayed?</p>
| <python><fastapi><http-status-code-500><uvicorn> | 2023-07-25 17:32:14 | 2 | 7,795 | rlandster |
76,765,155 | 14,222,845 | Pandas data frame Styler keeps overwriting previous styles in the data frame | <p>I have a Pandas data frame with a couple of columns.</p>
<pre><code># Example Data frame
df = pd.DataFrame({'A':[1,15,10,47,35],
'B':["Mac","Mac","Mac","Mac","Mac"],
'C':["Dog","Dog","Cat","Dog","Tiger"],
'D':["CDN", "USD", "CDN", "Pe", "Dr"]
})
</code></pre>
<p>I want to color each element in columns 'B', 'C', 'D' based on the relative frequency of each respective element within the column. For example, the relative frequency of "CDN" in the 'D' column is 2/5 = 0.4.</p>
<p>These are my criteria for the color based on the relative frequency:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Relative frequency</th>
<th>Color</th>
</tr>
</thead>
<tbody>
<tr>
<td>Greater than or equal to 0.90</td>
<td>Green</td>
</tr>
<tr>
<td>Less than 0.90 and greater than or equal to 0.30</td>
<td>Yellow</td>
</tr>
<tr>
<td>Less than 0.30</td>
<td>Red</td>
</tr>
</tbody>
</table>
</div>
<p>Since the relative frequency of "CDN" within the 'D' column is 0.4, then, that cell would be assigned a background color of Yellow.</p>
<p>I know how to find the relative frequency of each element within a column and how to color the elements.</p>
<p><strong>My problem is that the style of one column keeps getting overwritten by the style of another column.</strong> Here is my code:</p>
<pre><code>RemvColOfInterest = ['B', 'C', 'D'] # These are the columns whose elements we want to color
lstcollectionOverallRelFreqs = ['some relative frequencies'] # You don't have to worry about this
colIndexList = [] # This is the index of each of the columns in RemvColOfInterest
s = 0
while (s < len(RemvColOfInterest)):
colIndexList.append(s)
s = s + 1
tempdf = copy.copy(df)
for g, h in zip( RemvColOfInterest, colIndexList ):
df = tempdf.style.applymap(highlight_cell, lstFreq = lstcollectionOverallRelFreqs, colIndex = h, subset = pd.IndexSlice[:, [g]])
# If I output my df to an excel file:
df.to_excel("My file path", index = False)
def highlight_cell(value, lstFreq, colIndex):
Freq = determine_Freq(lstFreq[colIndex]) # All you need to know is that this is the function that finds the relative frequency associated with the element/cell
threshold1 = 0.90
threshold2 = 0.30
if (Freq >= threshold1):
return 'background-color: green;'
elif ((Freq < threshold1) and (Freq >= threshold2)):
return 'background-color: yellow;'
else:
return 'background-color: red;'
</code></pre>
<p>In the excel file, only the elements in column 'D' have the background-color. Columns 'B' and 'C' just have the usual white background color. This leads me to believe that the styles for columns 'B' and 'C' were each overwritten by the style from column 'D'. How do I prevent this from happening?</p>
<p>I believe this is the problematic line (when it's in the for loop cause the style of <code>df</code> gets replaced with a new style during each iteration):</p>
<pre><code>df = tempdf.style.applymap(highlight_cell, lstFreq = lstcollectionOverallRelFreqs, colIndex = h, subset = pd.IndexSlice[:, [g]])
</code></pre>
<p>The thing is, I'm only considering one column at a time when applying the style (the subset parameter). So, why do the styles of different columns overwrite one another. If instead of the above line, I do:</p>
<pre><code>df[g] = tempdf.style.applymap(highlight_cell, lstFreq = lstcollectionOverallRelFreqs, colIndex = h, subset = pd.IndexSlice[:, [g]])
</code></pre>
<p>I get <code>pandas.io.formats.style.Styler object at 0x00000...</code> for each of the cells in columns 'B', 'C' and 'D'. Any pointers/suggestions?</p>
<p>This is what the output excel file from the example should look like:</p>
<p><a href="https://i.sstatic.net/dThr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dThr5.png" alt="enter image description here" /></a></p>
| <python><pandas><background-color> | 2023-07-25 17:21:58 | 1 | 330 | Diamoniner12345 |
76,764,938 | 1,141,649 | How to correctly list menu items for a specific ul/li level using xpath and python? | <p>To describe the problem in general: I am trying to finish a function that extracts information from a menu. The menu has multiple levels of submenus (<code>ul</code>). I have a recursive Python function, <code>def extract_data(parent_depth, section, url_dirname, ul_obj, in_submenu=False)</code>, which I call from a main loop that goes through the first level as <code>extract_data(parent_depth, section, url_dirname, ul_obj, in_submenu)</code> with <code>parent_depth</code> equal to 1. The pseudo-HTML code and HTML code here are from level 2, because in the function I need to access the items of level 2.</p>
<p>To simplify the problem, I will use pseudo-HTML code, replacing the <code>a</code> tags with the word "link".</p>
<pre class="lang-none prettyprint-override"><code>li class="wnd-with-submenu"
LINK LEVEL 1
ul class="level-2"
li
**link level 2 A**
/li
li
**link level 2 B**
li
li class="wnd-with-submenu"
**link LEVEL 2 C with SUBMENU**
ul class="level-3"
li
* link level 3 D DON'T INCLUDE !!*
/li
/ul
/li
/ul
/li
</code></pre>
<p>So I need to get the level 2 items. The "link LEVEL 2 C with SUBMENU" is also just an <code>a</code> tag (the link contains a span and the title of an article). The submenu is NOT inside the link; it comes after the link, as the <code>ul class="level-3"</code>. Now this is the main problem: how can I obtain the <code>li</code> items (or possibly the <code>a</code> links) without any element from the level 3 <code>ul</code>?</p>
<p>I tried various attempts:</p>
<pre><code>li_obj = ul_obj[0].xpath('.//li[@class="wnd-with-submenu"]')
</code></pre>
<p>This listed the first link in the item "with submenu" and the nested "li"s (article titles) too. That is wrong.</p>
<pre><code>li_obj = ul_obj[0].xpath('.//li[@class="wnd-with-submenu" or not(@class)]')
</code></pre>
<p>This was similar problem, it listed those "li"s without class attribute, and the first link (level 2 article title). But is also included the nested items and links. That is wrong.</p>
<pre><code>li_obj = ul_obj[0].xpath('.//li[not(.//ul[@class="level-3"]//ancestor::li[@class="wnd-with-submenu"])]//a')
</code></pre>
<p>This was supposed to output all the li elements in the ul list (2nd level) without the nested menu. However, it doesn't work as expected. Instead, it displays items without a nested menu, omits the first nested item, and displays the rest of the nested items. This is a mistake. <strong>The nested items should not be included at all</strong> (that's what I want to handle in a separate function).</p>
<p>I believe that the expression <strong>not(.//ul) is interpreted in a way that completely excludes the items containing li elements at the 2nd level, instead of providing only the link from these li elements at the 2nd level</strong>.</p>
<p>Simplified html code:</p>
<pre><code><li class="wnd-with-submenu">
<a class="menu-item">LINK LEVEL 1</a>
<ul class="level-2">
<li>
<a>link level 2 A</a>
</li>
<li>
<a>link level 2 B</a>
</li>
<li class="wnd-with-submenu">
<a>LEVEL 2 C with SUBMENU</a>
<ul class="level-3">
<li>
<a>link level 3 D DON'T!!</a>
</li>
</ul>
</li>
</ul>
</li>
</code></pre>
<p>So here is the question, stated as simply as possible. I need to list the <code>li</code> items on one level (for this function call, level 2), including the links with their names and hrefs. The main problem is that either the nested links such as "link level 3 D" are included, or, with the last snippet, "LEVEL 2 C with SUBMENU" is skipped (which is wrong) while "link level 3 D" is still included (which is also wrong). So, if possible, help me find either a valid rule that selects only the links from level 2, or a way to temporarily remove the nested <code>ul</code> list so I get the correct level 2 article names and hrefs. To give you an idea of the purpose of the code: I could then call the function again and continue extracting the names and hrefs for level 3. But here I am asking only for the code that extracts the list for level 2.</p>
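One relevant XPath detail: the child axis (`./li/a`) stops at direct children, unlike the descendant axis (`.//a`), so it can never reach links inside the nested level-3 list. A self-contained sketch against the simplified snippet (parsed with `lxml.etree` here, since the snippet is well-formed XML):

```python
from lxml import etree

snippet = """
<li class="wnd-with-submenu">
  <a class="menu-item">LINK LEVEL 1</a>
  <ul class="level-2">
    <li><a>link level 2 A</a></li>
    <li><a>link level 2 B</a></li>
    <li class="wnd-with-submenu">
      <a>LEVEL 2 C with SUBMENU</a>
      <ul class="level-3">
        <li><a>link level 3 D</a></li>
      </ul>
    </li>
  </ul>
</li>
"""

root = etree.fromstring(snippet)
ul_level2 = root.xpath('.//ul[@class="level-2"]')[0]
# the child axis only visits direct children, so level-3 links are never selected
links = ul_level2.xpath("./li/a")
print([a.text for a in links])
```

Selecting the first `a` of each direct `li` this way returns exactly the three level-2 links, including the one whose `li` carries a submenu, without any level-3 entries.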
| <python><xml><xpath><lxml> | 2023-07-25 16:54:47 | 2 | 3,641 | John Boe |
76,764,911 | 12,040,751 | Doctest of function with random output | <p>I have a function that prints a somewhat random string to the console, for example:</p>
<pre><code>from random import choice
def hello():
print(f"Hello {choice(('Guido', 'Raymond'))}!")
</code></pre>
<p>Please note that my actual function is more complicated than this. The random part is a request to a database that can either succeed or fail. This means that I cannot initialize a seed to have a constant outcome.</p>
<p>What I have tried is to use the ellipsis, but I also need to add an ugly comment for doctest to recognize it.</p>
<pre><code>def hello():
"""
>>> hello() # doctest: +ELLIPSIS
Hello ...!
"""
print(f"Hello {choice(('Guido', 'Raymond'))}!")
</code></pre>
<p>Is there a better strategy in this situation?</p>
<p>For example, instead of an ellipsis it would be great if I could test that the answer is one between <code>Hello Guido!</code> and <code>Hello Raymond!</code>.</p>
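One alternative to the ellipsis-plus-directive approach is to capture stdout inside the doctest itself and assert membership explicitly, so the expected output is just `True`; a sketch:

```python
from random import choice

def hello():
    """
    >>> import contextlib, io
    >>> buf = io.StringIO()
    >>> with contextlib.redirect_stdout(buf):
    ...     hello()
    >>> buf.getvalue() in ("Hello Guido!\\n", "Hello Raymond!\\n")
    True
    """
    print(f"Hello {choice(('Guido', 'Raymond'))}!")
```

Run through `python -m doctest`, this passes for either name, and unlike the ellipsis it rejects any greeting outside the allowed set.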
| <python><doctest> | 2023-07-25 16:49:48 | 1 | 1,569 | edd313 |
76,764,592 | 21,896,093 | Legend obscures plot using Seaborn Objects API | <p>I have a recurring problem with the legends on many of my plots: the legend often obscures part of the data.</p>
<p>Reproducible example:</p>
<p><a href="https://i.sstatic.net/K2dzp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K2dzp.png" alt="enter image description here" /></a></p>
<pre><code>import seaborn.objects as so
import numpy as np
import pandas as pd
res_df = pd.DataFrame(
{'value': np.random.rand(20),
'cfg__status_during_transmit': ['OK']*5 + ['ERR']*5 + ['WARN']*5 + ['OTH']*5}
)
(
so.Plot(res_df, y='value', x='cfg__status_during_transmit', color='cfg__status_during_transmit')
.add(so.Dots(), so.Jitter(width=0.5))
.layout(size=(8, 4))
).show()
</code></pre>
<p>I've tried elongating the plot. This is a hack and isn't convenient, especially since it makes everything a lot smaller.</p>
<p>I've also tried plotting using <code>.on()</code> onto a <code>matplotlib</code> figure or axis, but the same problem persists. There's a related SO issue which advises that the legend properties are still in development. I would appreciate suggestions for how to get around this problem.</p>
<p>Thanks.</p>
| <python><seaborn><legend><seaborn-objects> | 2023-07-25 16:09:40 | 1 | 5,252 | MuhammedYunus |
76,764,521 | 4,027,688 | NetworkX inconsistent graph membership when using Beautiful Soup Tags as nodes | <p>I'm trying to build a graph of some tags in an HTML document. I'm using NetworkX to construct the graph and since based on the <a href="https://networkx.org/documentation/stable/tutorial.html" rel="nofollow noreferrer">docs</a>, "nodes can be any hashable object" I thought that Beautiful Soup Tags which define the following <code>__hash__</code> function would be a reasonable choice for nodes.</p>
<pre class="lang-py prettyprint-override"><code>def __hash__(self):
return str(self).__hash__()
</code></pre>
<p>Unfortunately, this has resulted in some unexpected, non-deterministic behavior that I suspect is the result of some Tags being parents of others. I've reduced the example to the following</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
import networkx as nx
import re
html = """
<html>
<body>
<div>1</div>
<div>2</div>
<div>3</div>
</body>
</html>
"""
soup = BeautifulSoup(html, 'lxml')
number_tags = soup.find_all(lambda tag: re.search(r'\d+', tag.text))
graph = nx.Graph()
for tag in number_tags:
# calling extract prior to adding the node seems to be required to reproduce
# but I see the same behavior with these two lines instead
# print(tag.extract()
# graph.add_node(tag)
graph.add_node(tag.extract())
for node in graph:
print(node in graph)
</code></pre>
<p>There are 5 tags matching the <code>soup.find_all</code> call: <code><html></code>, <code><body></code>, and the three <code><div></code>. Since I'm looping through the nodes <em>in</em> the graph in the last two lines, I would expect it to always print <code>True</code>x5, but I've seen the following three outputs. If I run the script three consecutive times, I've seen it produce a different output each run.</p>
<pre><code>Case 1: False False True True True
Case 2: False True True True True
Case 3: True True True True True
</code></pre>
<p>I'd like to understand why this is happening and how to avoid this behavior. I've tried stepping through the NetworkX code, but under the hood it's just a call to <code>__contains__</code> of the built-in dict so I suspect the issue may be on the Beautiful Soup side of things.</p>
<p>Environment:</p>
<ul>
<li><code>python==3.10.10</code></li>
<li><code>networkx==3.1</code></li>
<li><code>beautifulsoup4==4.12.2</code></li>
</ul>
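For what it's worth, the symptoms are consistent with node hashes changing after insertion: `Tag.__hash__` delegates to `str(self)`, and calling `extract()` on a child changes how its ancestor tags render. A toy class with the same kind of mutable hash reproduces the membership failure deterministically:

```python
class Node:
    """Toy stand-in for a bs4 Tag: the hash depends on mutable rendered text."""

    def __init__(self, text):
        self.text = text

    def __str__(self):
        return self.text

    def __hash__(self):          # mirrors Tag.__hash__: str(self).__hash__()
        return hash(str(self))

    def __eq__(self, other):
        return isinstance(other, Node) and str(self) == str(other)

seen = set()
node = Node("<body><div>1</div></body>")
seen.add(node)

node.text = "<body></body>"  # like an ancestor's rendering changing after extract()
print(node in seen)
```

The set stored the old hash at insertion time, so the lookup with the new hash misses; the usual remedy is to use keys whose hash cannot change while they are in a dict, set, or graph (e.g. a stable identifier rather than the Tag itself).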
| <python><beautifulsoup><networkx> | 2023-07-25 16:00:10 | 1 | 3,175 | bphi |
76,764,493 | 3,617,165 | Python Selenium Confirmation Dialog Clicking the confirmation button does not perform the action | <p>I am having problems deleting notebooks in Zeppelin. I am trying to automate the debugging of the notes using Python Selenium.
The Python code I am using to test the deletion with a note is as follows:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time as TM
service = Service()
options = webdriver.ChromeOptions()
control = webdriver.Chrome(service=service, options=options)
control.implicitly_wait(10)  # When an element does not appear immediately, keep retrying for up to 10 sec.
control.delete_all_cookies()
#WEB - LOGIN
url_del = "http://zeppelin_server:1919/#/notebook/2F7T7WPZP"
control.get('http://zeppelin_server:1919/#/')
wait = WebDriverWait(control, 10)
# Go to button
element = control.find_element(By.CSS_SELECTOR, '[ng-click="navbar.showLoginWindow()"]')
#Clicking on the button to be selected
element.click()
wait = WebDriverWait(control, 30)
username_field = wait.until(EC.visibility_of_element_located((By.ID, 'userName')))
password_field = wait.until(EC.visibility_of_element_located((By.ID, 'password')))
login_button = control.find_element(By.CSS_SELECTOR, 'button.btn.btn-default.btn-primary')
#Perform actions on the displayed elements
username_field.send_keys('user')
password_field.send_keys('password')
#Click on the button
login_button.click()
cookies = control.get_cookies()
wait = WebDriverWait(control, 10)
TM.sleep(5)
#Handling Cookies
for cookie in cookies:
    control.add_cookie(cookie)
control.get(url_del)
TM.sleep(10)
# Find the button element by its class name or CSS selector
#button_element = control.find_element(By.CSS_SELECTOR, "button.btn.btn-default.btn-xs.ng-scope")
button_element = control.find_element(By.CSS_SELECTOR, 'button[ng-click="moveNoteToTrash(note.id)"]')
# Click the button
button_element.click()
</code></pre>
<p>After that, when I click on delete note I get a confirmation dialog to delete the note with the following html code:</p>
<pre><code><div class="modal bootstrap-dialog type-primary size-normal in" role="dialog" aria-hidden="true" id="49e1dec0-f885-4c33-9693-86a8d847ccc0" aria-labelledby="49e1dec0-f885-4c33-9693-86a8d847ccc0_title" tabindex="-1" style="display: block; padding-right: 17px; z-index: 1050;">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <div class="bootstrap-dialog-header">
          <div class="bootstrap-dialog-close-button">
            <button class="close" aria-label="close">×</button>
          </div>
          <div class="bootstrap-dialog-title" id="49e1dec0-f885-4c33-9693-86a8d847ccc0_title">Move this note to trash?</div>
        </div>
      </div>
      <div class="modal-body">
        <div class="bootstrap-dialog-body">
          <div class="bootstrap-dialog-message">This note will be moved to <strong>trash</strong>.</div>
        </div>
      </div>
      <div class="modal-footer">
        <div class="bootstrap-dialog-footer">
          <div class="bootstrap-dialog-footer-buttons">
            <button class="btn btn-default" id="3124e67c-a4b2-42c1-859d-71c9140ff485">Cancel</button>
            <button class="btn btn-primary" id="9f9cea0d-b364-4ff5-85eb-02ea143e8fb2">OK</button>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
</code></pre>
<p>And I handle that in Python with the following code because ID is dynamic:</p>
<pre><code>wait = WebDriverWait(control, 10)
ok_button = WebDriverWait(control, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '.bootstrap-dialog-footer-buttons button.btn-primary')))
TM.sleep(10)
# Click the "OK" button
ok_button.click()
TM.sleep(10)
control.quit()
</code></pre>
<p>My user has delete privileges on all notes. I really don't know what else to do. I have run the process manually, in debug, and automatically, and I can see that the Python script finishes by clicking the <strong>OK</strong> button in the dialog, but when I check the Zeppelin note it is never sent to the trash. Can anyone give me a recommendation? I have driven ChatGPT and Bard crazy, and I still can't find a way to automate this.</p>
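<p>One thing that can help diagnose this: instead of fixed <code>sleep()</code> calls, poll until the deletion is actually observable. Selenium isn't required for the polling logic itself; here is a small stand-alone helper (a sketch, a tiny stand-in for <code>WebDriverWait</code> that works on any Python condition):</p>
<pre class="lang-py prettyprint-override"><code>import time

def wait_until(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate()` until it returns True or `timeout` seconds elapse.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
</code></pre>
<p>After clicking <strong>OK</strong>, something like <code>wait_until(lambda: note_id not in control.page_source)</code> (where <code>note_id</code> is a hypothetical variable holding <code>"2F7T7WPZP"</code>) would tell you whether the note actually disappeared. If it returns False even though the click registered, that would point at the server side (permissions, CSRF, session cookies) rather than at Selenium.</p>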
| <python><python-3.x><selenium-webdriver><web-scraping> | 2023-07-25 15:57:21 | 0 | 368 | alejomarchan |
76,764,462 | 9,443,671 | Is there a way to access the history of API requests/calls made to my openAI account? | <p>I just made ~2k api requests to GPT but my code errored out and did not save (therefore, the calls were made but never stored to code). Is there a way to retrieve the output of those calls?</p>
| <python><python-requests><openai-api><gpt-4> | 2023-07-25 15:53:11 | 0 | 687 | skidjoe |
76,764,437 | 11,748,924 | Detected a call to `Model.fit` inside a `tf.function`. `Model.fit is a high-level endpoint that manages its own `tf.function` | <p>I am facing an issue while trying to train an LSTM model using Keras in a Google Colaboratory notebook. The goal is to predict certain "unit1" outages ("moh") based on time series data. However, I encountered the following error when trying to fit the model to the data:</p>
<pre><code>RuntimeError: Detected a call to `Model.fit` inside a `tf.function`. `Model.fit` is a high-level endpoint that manages its own `tf.function`. Please move the call to `Model.fit` outside of all enclosing `tf.function`s. Note that you can call a `Model` directly on `Tensor`s inside a `tf.function` like: `model(x)`.
</code></pre>
<p>Here's the code I used:</p>
<pre><code># Importing required libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping
# Define the LSTM model
def create_lstm_model(input_size, output_size, lstm_layer_sizes, dropout_rates):
    lstm_model = Sequential()
    for size, rate in zip(lstm_layer_sizes, dropout_rates):
        lstm_model.add(LSTM(units=size, return_sequences=True))
        lstm_model.add(Dropout(rate=rate))
    lstm_model.add(Dense(units=output_size))
    return lstm_model
# Set the parameters
input_size = 6
output_size = 3
unit = 'unit1'
outage = 'moh'
lstm_layer_sizes = (64,128,256,128,64)
dropout_rates = (0.05,0.05,0.05,0.05,0.05)
# Prepare the data (omitting data retrieval steps for brevity)
y = kinerja_df_extended_nanremoved_standardized[f'{unit}_{outage}s']
current_dates = kinerja_df_extended_nanremoved_standardized['date']
x = np.array([current_dates[i:i+input_size] for i in range(len(current_dates)-input_size+1)])
y = np.array([y[i:i+output_size] for i in range(len(y)-output_size+1)])
# Instantiate and compile the model
lstm_model = create_lstm_model(input_size=input_size, output_size=output_size, lstm_layer_sizes=lstm_layer_sizes, dropout_rates=dropout_rates)
lstm_model.compile(optimizer='adam', loss='mean_squared_error')
# The following line causes the error
history = lstm_model.fit(x=x, y=y, batch_size=1, epochs=128, validation_split=0.1, shuffle=False)
# Plot the training and validation loss
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.legend()
plt.show()
</code></pre>
<p>I have tried searching for solutions online, but I haven't found anything that addresses this specific error in my context. How can I fix this issue and successfully train my LSTM model?</p>
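<p>The error means that some enclosing context is already a <code>tf.function</code> — often a custom training loop or notebook cell decorated with <code>@tf.function</code>, or a framework hook — even if the <code>model.fit</code> call itself looks top-level. The fix is to move the <code>fit</code> call out of any such function. As a plain-Python analogy of the re-entrancy guard (an illustration only, not Keras internals):</p>
<pre class="lang-py prettyprint-override"><code>import functools

_inside = False  # module-level flag: are we already inside a "compiled" call?

def compiled(fn):
    """Toy re-entrancy guard, analogous to (but not the same as) the check
    Keras performs when Model.fit is invoked inside a tf.function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _inside
        if _inside:
            raise RuntimeError(
                f"Detected a call to {fn.__name__} inside another compiled function"
            )
        _inside = True
        try:
            return fn(*args, **kwargs)
        finally:
            _inside = False
    return wrapper

@compiled
def fit():
    return "ok"

@compiled
def train_step():
    return fit()  # nested "compiled" call -> RuntimeError, like fit inside tf.function

print(fit())  # "ok": calling fit at the top level is fine
</code></pre>
<p>In the notebook, it may be worth searching for any <code>@tf.function</code> decorator applied (directly or indirectly) to code that ends up calling <code>lstm_model.fit</code>, and calling <code>fit</code> directly at cell level instead.</p>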
| <python><tensorflow><keras><deep-learning><google-colaboratory> | 2023-07-25 15:49:37 | 1 | 1,252 | Muhammad Ikhwan Perwira |
76,764,312 | 2,781,105 | Lead fill function in pandas using condition from another column | <p>I have a dataset containing a date index in the form <code>YYYY-WW</code>, the start date of a promotion, the discount value, and the end date of the promotion.</p>
<p>As follows:</p>
<pre><code>events = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'],
'promo_start': ['2022-01','Nan','2022-03','Nan','2022-05','2022-06','Nan','Nan','2022-09','Nan'],
'disc': ['0.1','Nan',0.2,'Nan',0.2,0.4,'Nan','Nan',0.5,'NaN'],
'promo_end': ['Nan', '2022-02','Nan','2022-04','2022-05','Nan','2022-07','Nan','Nan','2022-10']})
</code></pre>
<p><a href="https://i.sstatic.net/RPiP8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RPiP8.png" alt="enter image description here" /></a></p>
<p>I have attempted various combined <code>groupby</code> and <code>ffill</code> operations but I am unable to produce the desired output.</p>
<p>For each week in <code>YYYY-WW</code> I would like to be able to assess whether the promo was active by doing something akin to a lead-fill operation, such that the output is a dataframe with a boolean flag and the discount amount, as follows:</p>
<pre><code>desired_output = pd.DataFrame({'yyyyww': ['2022-01','2022-02','2022-03', '2022-04','2022-05','2022-06','2022-07','2022-08','2022-09','2022-10'],
                               'promo_start': ['2022-01','Nan','2022-03','Nan','2022-05','2022-06','Nan','Nan','2022-09','Nan'],
                               'disc': ['0.1','Nan',0.2,'Nan',0.2,0.4,'Nan','Nan',0.5,'NaN'],
                               'promo_end': ['Nan', '2022-02','Nan','2022-04','2022-05','Nan','2022-07','Nan','Nan','2022-10'],
                               'promo_active': [True,True,True,True,True,True,True,False,True,True],
                               'promo_disc': [0.1,0.1,0.2,0.2,0.2,0.4,0.4,0,0.5,0.5]})
</code></pre>
<p><a href="https://i.sstatic.net/jSE5u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jSE5u.png" alt="enter image description here" /></a></p>
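<p>Before reaching for <code>groupby</code>/<code>ffill</code> combinations, it may help to see the desired logic as a single ordered pass: a start row switches the promo on and sets its discount, and an end row switches it off after that week. Here is a pure-Python sketch of that state machine (using <code>None</code> for the <code>'Nan'</code> cells; a pandas version could be built on the same idea by forward-filling between start and end):</p>
<pre><code>def flag_promos(starts, ends, discs):
    """Single ordered pass over the weeks: a start week turns the promo on
    with its discount; an end week turns it off after that week."""
    active, disc = False, 0.0
    flags, amounts = [], []
    for start, end, d in zip(starts, ends, discs):
        if start is not None:
            active, disc = True, d
        flags.append(active)
        amounts.append(disc if active else 0)
        if end is not None:          # the promo still counts during its end week
            active, disc = False, 0.0
    return flags, amounts

starts = ['2022-01', None, '2022-03', None, '2022-05', '2022-06', None, None, '2022-09', None]
ends   = [None, '2022-02', None, '2022-04', '2022-05', None, '2022-07', None, None, '2022-10']
discs  = [0.1, None, 0.2, None, 0.2, 0.4, None, None, 0.5, None]

flags, amounts = flag_promos(starts, ends, discs)
print(flags)    # [True, True, True, True, True, True, True, False, True, True]
print(amounts)  # [0.1, 0.1, 0.2, 0.2, 0.2, 0.4, 0.4, 0, 0.5, 0.5]
</code></pre>
<p>The two printed lists match the <code>promo_active</code> and <code>promo_disc</code> columns of the desired output above.</p>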
| <python><pandas><dataframe><fillna> | 2023-07-25 15:33:44 | 2 | 889 | jimiclapton |
76,764,268 | 3,562,088 | How to fill gaps in anomaly detection data using pandas? | <p>Assume I have a pandas DataFrame that only consists of <code>0</code> and <code>1</code> depending if an anomaly was detected or not:</p>
<pre><code>input_data = pd.DataFrame(data={'my_event': [0., 0., 1., 1., 0., 1., 0., 0., 0., 1., 1.]},
                          index=pd.date_range(start='2023-01-01 00:00:00', end='2023-01-01 00:00:10', freq='s'))
</code></pre>
<p>Now I would like to fill gaps in the detection depending on their size. E.g. I only want to fill gaps that are 2 seconds or shorter. What is the correct way to do something like this?</p>
<p>I found these questions here: <a href="https://stackoverflow.com/questions/69154946/fill-nan-gaps-in-pandas-df-only-if-gaps-smaller-than-n-nans">1</a>, <a href="https://stackoverflow.com/questions/68186179/interpolate-only-short-gaps-in-pandas-dataframe-with-datetimeindex">2</a>, <a href="https://stackoverflow.com/questions/30533021/interpolate-or-extrapolate-only-small-gaps-in-pandas-dataframe">3</a>, but the solutions don't seem very straightforward. It feels like there should be a simpler way to solve an issue like this.</p>
<p><strong>EDIT</strong></p>
<p>Sorry for the unprecise question! So a "gap" would in my case be a short time period where no anomaly was detected inside a larger time range that was detected as an anomaly.</p>
<p>For the example <code>input_data</code> the expected output would be a DataFrame with the following data</p>
<pre><code>[0., 0., 1., 1., 1., 1., 0., 0., 0., 1., 1.]
</code></pre>
<p>So in this example the single <code>0.</code> inside the region of ones was replaced by a one. Obviously all zeros could also be replaced by nans, if that would help. I just need to be able to specify the length of the gap that should be filled.</p>
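<p>For reference, the rule can also be written directly as a run-length pass over the flag column: fill a run of zeros only when it is bounded by ones on both sides and no longer than the allowed gap. A plain-Python sketch (the resulting list could be assigned straight back to the DataFrame column):</p>
<pre><code>def fill_short_gaps(flags, max_gap):
    """Fill runs of 0s with 1s when the run is bounded by 1s on both sides
    and is no longer than `max_gap` samples."""
    out = list(flags)
    i, n = 0, len(out)
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            # gap spans [i, j); fill only if bounded by 1s and short enough
            if i > 0 and j < n and (j - i) <= max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

print(fill_short_gaps([0., 0., 1., 1., 0., 1., 0., 0., 0., 1., 1.], max_gap=2))
# [0.0, 0.0, 1.0, 1.0, 1, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0]
</code></pre>
<p>This reproduces the expected output above: the single zero inside the run of ones is filled, while the three-sample gap (longer than <code>max_gap=2</code>) and the leading zeros are left alone.</p>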
| <python><pandas><gaps-in-data> | 2023-07-25 15:28:36 | 4 | 1,485 | Axel |
76,764,265 | 2,781,852 | Combine base model with my Peft adapters to generate new model | <p>I am trying to merge my fine-tuned adapters into the base model with this code:</p>
<pre><code>torch.cuda.empty_cache()
del model
pre_trained_model_checkpoint = "databricks/dolly-v2-3b"
trained_model_chekpoint_output_folder = "/content/gdrive/MyDrive/AI/Adapters/myAdapter-dolly-v2-3b/"
base_model = AutoModelForCausalLM.from_pretrained(pre_trained_model_checkpoint,
trust_remote_code=True,
device_map="auto"
)
model_to_merge = PeftModel.from_pretrained(base_model,trained_model_chekpoint_output_folder)
del base_model
torch.cuda.empty_cache()
merged_model = model_to_merge.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(trained_model_chekpoint_output_folder)
</code></pre>
<p>Then</p>
<pre><code>merged_model.save_pretrained('path')
</code></pre>
<p>The generated model is approximately double the size (5.6 GB to 11 GB). My fine-tuning basically adds information from a 200-example dataset in Alpaca format.</p>
<p>What am I doing wrong?</p>
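<p>It can't be verified from the snippet alone, but one common cause of an exact doubling (an assumption here) is precision rather than the adapter data: if the base checkpoint stores 16-bit weights and the merged model is materialized and saved as 32-bit, every parameter takes twice the bytes. The effect in miniature, using the stdlib (4-byte floats vs 8-byte doubles give the same 2× ratio as fp16 vs fp32):</p>
<pre><code>import array

# fp16 -> fp32 doubles the bytes per weight; shown here with the stdlib's
# 4-byte floats vs 8-byte doubles, which have the same 2x ratio.
half_precision_like = array.array('f', [0.5, 1.5])  # 4 bytes per element
full_precision_like = array.array('d', [0.5, 1.5])  # 8 bytes per element
print(full_precision_like.itemsize // half_precision_like.itemsize)  # 2
</code></pre>
<p>If that is the cause, loading the base model with <code>torch_dtype=torch.float16</code> before merging should keep the saved checkpoint close to the original 5.6 GB — worth trying, though whether it applies depends on how the dolly checkpoint was stored.</p>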
| <python><nlp><huggingface-transformers><peft> | 2023-07-25 15:28:21 | 1 | 1,907 | Hanzo |
76,764,239 | 15,326,827 | How to get the replit link for uptime robot? | <p>I have made a super simple discord bot (for study tracking sessions)
and I want it to run 24/7</p>
<p>So from all the google searches,
I found that I can do that with the help of the <code>uptime robot</code> website, and for that purpose I need the <strong>URL/link</strong> of my repl; the format should be <code>http://REPL-NAME--USERNAME.repl.co</code> (I found this somewhere)</p>
<p>But the problem is</p>
<blockquote>
<p>I can't find the link/URL of my repl</p>
</blockquote>
<p><a href="https://i.sstatic.net/rHEJT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rHEJT.png" alt="My view of the repl window" /></a></p>
<p>Any help would be appreciated (please provide screenshots if possible)!</p>
<p>I tried to create my own URL, like <code>http://PythonStudyBotRepl--CodingCircle2.repl.co</code>.</p>
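<p>Independent of finding the URL, UptimeRobot can only ping the repl if something is listening for HTTP requests alongside the bot. A minimal stdlib sketch of the usual keep-alive pattern (Flask is the common choice in tutorials, but plain <code>http.server</code> works too — call <code>keep_alive()</code> once before <code>bot.run(...)</code>):</p>
<pre><code>import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Ping(BaseHTTPRequestHandler):
    """Answers every GET with 200 so UptimeRobot sees the repl as up."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"alive")

    def log_message(self, *args):  # silence per-request logging
        pass

def keep_alive(port=8080):
    """Start the ping server in a background daemon thread and return it."""
    server = HTTPServer(("0.0.0.0", port), Ping)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
</code></pre>
<p>Also note that <code>repl.co</code> hostnames are typically all lowercase, so trying <code>http://pythonstudybotrepl--codingcircle2.repl.co</code> may be worth a shot (an assumption worth verifying in your repl's webview).</p>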
| <python><discord.py><replit><uptime> | 2023-07-25 15:25:53 | 1 | 301 | Prathamesh Bhatkar |
76,764,207 | 22,250,572 | PyCharm: Python version 2.7 does not support a 'F' prefix (even with Python 3.11) | <p>Although I use Python 3.11, still PyCharm shows these warning messages:</p>
<blockquote>
<p>Python version 2.7 does not support a 'F' prefix</p>
</blockquote>
<p>and:</p>
<blockquote>
<p>Python version 2.7 does not support variable annotations</p>
</blockquote>
<h2>What I tried</h2>
<p>I tried reinstalling PyCharm, but the problem remains.
I checked the interpreter: it is Python 3.11.</p>
<p>The OS is Windows 11 and PyCharm is on the latest version. I am using a downgraded version of pip, but I don't think that is the reason.</p>
<h2>Research</h2>
<p>I searched online but can't find the solution online.</p>
<h2>Question</h2>
<p>How can I avoid those warnings?</p>
| <python><pycharm><code-inspection> | 2023-07-25 15:22:22 | 1 | 511 | Happy Sharma |
76,763,999 | 9,848,043 | ModuleNotFoundError: No module named 'albumentations' | <p>I tried to install the 'albumentations' package with</p>
<p><code>pip install --upgrade albumentations</code> and <code>pip install albumentations --user</code></p>
<p>Python version 3.9.0, on a local machine (not in Google Colab or Kaggle).</p>
<p>It installs:</p>
<p><code>Successfully installed albumentations-1.3.1 imageio-2.31.1 joblib-1.3.1 lazy_loader-0.3 opencv-python-headless-4.8.0.74 qudida-0.0.4 scikit-image-0.21.0 scikit-learn-1.3.0</code></p>
<p>But still, it shows:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[16], line 3
1 # ! pip install --upgrade albumentations
2 get_ipython().system(' pip install albumentations --user')
----> 3 import albumentations
4 import albumentations.pytorch
ModuleNotFoundError: No module named 'albumentations'
</code></pre>
<p>What should I do to install this?</p>
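<p>A common cause of "installed successfully, but still <code>ModuleNotFoundError</code>" (an assumption, since the environment details aren't shown) is that <code>pip</code> on the PATH belongs to a different interpreter than the one running the notebook/script. Checking which interpreter is active, and installing with that exact interpreter, usually resolves it:</p>
<pre><code>import sys

# The interpreter actually executing this code:
print(sys.executable)

# Install into *that* interpreter rather than whatever `pip` is on PATH.
# In a notebook cell:
#     !{sys.executable} -m pip install albumentations
# Or from Python:
#     import subprocess
#     subprocess.check_call([sys.executable, "-m", "pip", "install", "albumentations"])
</code></pre>
<p>If the path printed differs from where pip reported installing the packages, that mismatch is the problem.</p>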
| <python><pytorch><albumentations> | 2023-07-25 15:00:12 | 2 | 1,115 | Joyanta J. Mondal |
76,763,877 | 2,329,988 | Why can you overload getattr for a typing.Dict but not a typing_extensions.TypedDict? | <p>I am using python 3.10, and switched from <code>Dict</code> to <code>TypedDict</code> for the annotations. Following the advice of <a href="https://peps.python.org/pep-0655/#usage-in-python-3-11" rel="nofollow noreferrer">Pep 655</a>, I'm importing from <code>typing_extensions</code> instead of <code>typing</code>; I also want <code>NotRequired</code>, which can <em>only</em> be imported from <code>typing_extensions</code> at this version.</p>
<p>I've noticed that I can overload <code>__getattr__</code> (and <code>__getattribute__</code>) for <code>Dict</code>, <code>OrderedDict</code>, and even <code>typing.TypedDict</code> just fine, but that it is ignored for <code>typing_extensions.TypedDict</code> children. Why is this, and is there a way around it?</p>
<p>Example:</p>
<pre><code>from typing_extensions import TypedDict
from typing import Dict, TypedDict as TypedDict2
class PerfectlyFineDict(Dict):
    def __getattr__(self, key): return self[key]

class PerfectlyFineDict2(TypedDict2):
    def __getattr__(self, key): return self[key]
f = PerfectlyFineDict(a=1)
print(f['a']) # Of course, prints "1"
print(f.a) # Works perfectly fine - prints "1"
f = PerfectlyFineDict2(a=1)
print(f['a']) # Of course, prints "1"
print(f.a) # Works perfectly fine - prints "1"
class FailingDict(TypedDict):
    def __getattr__(self, key): return self[key]
f = FailingDict(a=1)
print(f['a']) # Of course, prints "1"
print(f.a) # AttributeError: 'dict' object has no attribute 'a'
</code></pre>
<p>I would rather not import <code>TypedDict</code> from <code>typing</code> and <code>NotRequired</code> from <code>typing_extensions</code>, since it seems to be against the PEP standard and I'm not sure how it will interact with the lexer (probably fine), but I'm not sure what the alternative is.</p>
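<p>Part of the picture here is that a <code>TypedDict</code> is not a regular runtime class: calling it constructs a plain <code>dict</code>, so methods defined in the class body never exist on the instances. A quick check (this holds for <code>typing.TypedDict</code>; whether a given <code>typing_extensions</code> version re-exports or reimplements the class may explain the behavioral difference observed above — that part is an assumption):</p>
<pre><code>from typing import TypedDict

class Movie(TypedDict):
    title: str

m = Movie(title="Alien")
print(type(m))               # <class 'dict'> — instances are plain dicts
print(hasattr(m, "title"))   # False — no attribute access, only m["title"]
</code></pre>
<p>Given that, attribute access via <code>__getattr__</code> on a <code>TypedDict</code> subclass is relying on unspecified behavior either way; a wrapper class around a plain dict (or a dataclass) would be the supported route.</p>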
| <python><operator-overloading><python-typing> | 2023-07-25 14:47:03 | 1 | 5,411 | en_Knight |
76,763,873 | 534,238 | Trio seems to start tasks in the nursery in exactly the opposite order that tasks were given at | <p>I don't <em>expect</em> <code>trio</code> to run in any particular order. It is async, after all. But I noticed something strange and wanted to ask if anyone else could explain what might have happened:</p>
<ol>
<li>I wanted to test the rate of data ingestion from Google's Pub Sub if I send a small message one at a time. In order to focus on the I/O of pushing to Pub Sub, I sent messages async, and I use <code>trio</code> because, well, I want to keep my head from exploding.</li>
<li>I <em>specifically</em> wanted to look at how fast Pub Sub would be if I turned on it's <strong>ordering</strong> capability. I really just wanted to test throughput, and since I was using an async process, I didn't expect any ordering of messages, but I tagged the messages just out of curiosity.</li>
<li>I noticed that the messages were processed in pub sub (and therefore <em>sent</em> to pub sub) at <em>exactly</em> the opposite order that is written in the imperative code.</li>
</ol>
<p>Here is the important snippet (I can provide more if it is helpful):</p>
<pre class="lang-py prettyprint-override"><code>async with open_nursery() as nursery:
    for num in range(num_messages):
        logger.info(f"===Creating data entry # {num}===")
        raw_data = gen_sample(DATASET, fake_generators=GENERATOR)  # you can ignore this, it is just a toy data generator. It is synchronous code, but _very_ fast.
        raw_data["message number"] = num  # <== This is the CRITICAL LINE, adding the message number so that I can observe the ordering.
        data = dumps(raw_data).encode("utf-8")
        nursery.start_soon(publish, publisher, topic_path, data, key)
</code></pre>
<p>and here is the <code>publish</code> function:</p>
<pre class="lang-py prettyprint-override"><code>async def publish(
    publisher: PublisherClient, topic: str, data: bytes, ordering_key: str
):
    future = publisher.publish(topic, data=data, ordering_key=ordering_key)
    result = future.result()
    logger.info(
        f"Published {loads(data)} on {topic} with ordering key {ordering_key} "
        f"Result: {result}"
    )
</code></pre>
<hr />
<p>And when I look at the logs in Pub/Sub, they are 100% consistently in reverse order, such that I see <code>"message number"</code> <code>50_000</code> first, then <code>49_999</code>, <code>49_998</code>, ..., <code>3</code>, <code>2</code>, <code>1</code>. Pub Sub is maintaining ordering. This means somehow, the async code above is "first" starting the very last task to reach <code>nursery.start_soon</code>.</p>
<p>I'm not sure why that is. I don't understand exactly how Pub Sub's <code>Future</code> works, because the documentation is sparse (at least <a href="https://cloud.google.com/python/docs/reference/pubsub/latest/google.cloud.pubsub_v1.publisher.futures.Future" rel="nofollow noreferrer">what I found</a>), so it is possible that the "problem" lies with Google's <code>PublisherClient.publish()</code> method, or Google's <code>result()</code> method that the returned future uses.</p>
<p>But it seems to me that it is actually due to the <code>nursery.start_soon</code>. Any ideas why it would be <em>exactly</em> in the opposite order of how things are written imperatively?</p>
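<p>Whatever the scheduler does, spawn order and run order are decoupled the moment one task per message is spawned. If publish order matters, a scheduler-independent pattern is a single worker draining a FIFO queue (sketched below with a plain <code>deque</code>; in trio that role could be played by a memory channel from <code>trio.open_memory_channel</code> consumed by one task — whether that fits the throughput goal here is a separate question):</p>
<pre class="lang-py prettyprint-override"><code>from collections import deque

# One producer appends messages in order; one consumer drains them in order.
# With a single consumer there is no interleaving, so FIFO order is preserved
# regardless of how the event loop schedules everything else.
queue = deque()
for num in range(5):
    queue.append({"message number": num})

processed = []
while queue:
    processed.append(queue.popleft()["message number"])

print(processed)  # [0, 1, 2, 3, 4]
</code></pre>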
| <python><python-trio> | 2023-07-25 14:46:14 | 1 | 3,558 | Mike Williamson |